././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.8144658 nova-21.2.4/0000775000175000017500000000000000000000000012566 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/.coveragerc0000664000175000017500000000012500000000000014705 0ustar00zuulzuul00000000000000[run] branch = True source = nova omit = nova/tests/* [report] ignore_errors = True ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/.mailmap0000664000175000017500000001726700000000000014224 0ustar00zuulzuul00000000000000# Format is: # # Alvaro Lopez Garcia Alvaro Lopez Andrew Bogott Andrew Bogott Andy Smith Andy Smith Andy Smith andy Andy Smith termie Andy Smith termie Anne Gentle annegentle Anthony Young Anthony Young Sleepsonthefloor Arvind Somya Arvind Somya Arvind Somya asomya@cisco.com <> Brad McConnell Brad McConnell bmcconne@rackspace.com <> Brian Lamar Brian Lamar brian-lamar Dan Wendlandt danwent Dan Wendlandt danwent Dan Wendlandt danwent@gmail.com <> Dan Wendlandt danwent@gmail.com Davanum Srinivas Davanum Srinivas Édouard Thuleau Thuleau Édouard Ethan Chu Guohui Liu Jake Dahn jakedahn Jason Koelker Jason Kölker Jay Pipes jaypipes@gmail.com <> Jiajun Liu Jian Wen Jian Wen Joe Gordon Joel Moore Joel Moore joelbm24@gmail.com <> John Griffith john-griffith John Tran John Tran Joshua Hesketh Joshua Hesketh Justin Santa Barbara Justin Santa Barbara Justin SB Justin Santa Barbara Superstack Kei Masumoto Kei Masumoto Kei masumoto Kei Masumoto masumotok Kun Huang lawrancejing Matt Dietz Matt Dietz Cerberus Matt Dietz Matthew Dietz Matt Dietz matt.dietz@rackspace.com <> Matt Dietz mdietz NTT PF Lab. NTT PF Lab. NTT PF Lab. Nachi Ueno NTT PF Lab. nova Nikolay Sokolov Nickolay Sokolov Paul Voccio paul@openstack.org <> Philip Knouff Phlip Knouff Renuka Apte renukaapte Sandy Walsh SandyWalsh Sateesh Chodapuneedi sateesh Tiantian Gao Tiantian Gao Vishvananda Ishaya Vishvananda Ishaya Vivek YS Vivek YS vivek.ys@gmail.com <> Yaguang Tang Yolanda Robla yolanda.robla Zhenguo Niu Zhongyue Luo ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/.pre-commit-config.yaml0000664000175000017500000000173500000000000017055 0ustar00zuulzuul00000000000000--- default_language_version: # force all unspecified python hooks to run python3 python: python3 repos: - repo: https://github.com/pre-commit/pre-commit-hooks rev: v2.4.0 hooks: - id: trailing-whitespace - id: mixed-line-ending args: ['--fix', 'lf'] exclude: '.*\.(svg)$' - id: check-byte-order-marker - id: check-executables-have-shebangs - id: check-merge-conflict - id: debug-statements # nova/cmd/manage.py imports pdb on purpose. 
exclude: 'nova/cmd/manage.py' - id: check-yaml files: .*\.(yaml|yml)$ - repo: https://github.com/Lucas-C/pre-commit-hooks rev: v1.1.7 hooks: - id: remove-tabs exclude: '.*\.(svg)$' - repo: local hooks: - id: flake8 name: flake8 additional_dependencies: - hacking>=2.0,<3.0 language: python entry: flake8 files: '^.*\.py$' exclude: '^(doc|releasenotes|tools)/.*$' ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/.stestr.conf0000664000175000017500000000006100000000000015034 0ustar00zuulzuul00000000000000[DEFAULT] test_path=./nova/tests/unit top_dir=./ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/.zuul.yaml0000664000175000017500000004446500000000000014544 0ustar00zuulzuul00000000000000# See https://docs.openstack.org/infra/manual/drivers.html#naming-with-zuul-v3 # for job naming conventions. - job: name: nova-dsvm-multinode-base parent: legacy-dsvm-base-multinode description: | Base job for multinode nova devstack/tempest jobs. Will setup firewall rules on all the nodes allowing them to talk to each other. timeout: 10800 required-projects: - openstack/devstack-gate - openstack/nova - openstack/tempest irrelevant-files: &dsvm-irrelevant-files - ^api-.*$ - ^(test-|)requirements.txt$ - ^.*\.rst$ - ^.git.*$ - ^doc/.*$ - ^nova/hacking/.*$ - ^nova/locale/.*$ - ^nova/policies/.*$ - ^nova/tests/.*$ - ^nova/test.py$ - ^releasenotes/.*$ - ^setup.cfg$ - ^tools/.*$ - ^tox.ini$ - job: name: nova-tox-functional-py36 parent: openstack-tox-functional-py36 description: | Run tox-based functional tests for the OpenStack Nova project under cPython version 3.6 with Nova specific irrelevant-files list. Uses tox with the ``functional-py36`` environment. This job also provides a parent for other projects to run the nova functional tests on their own changes. required-projects: # including nova here makes this job reusable by other projects - openstack/nova - openstack/placement irrelevant-files: &functional-irrelevant-files - ^.*\.rst$ - ^api-.*$ - ^doc/(source|test)/.*$ - ^nova/locale/.*$ - ^releasenotes/.*$ vars: # explicitly stating the work dir makes this job reusable by other # projects zuul_work_dir: src/opendev.org/openstack/nova bindep_profile: test py36 timeout: 3600 - job: name: nova-tox-functional-py37 parent: openstack-tox-functional-py37 description: | Run tox-based functional tests for the OpenStack Nova project under cPython version 3.7 with Nova specific irrelevant-files list. Uses tox with the ``functional-py37`` environment. This job also provides a parent for other projects to run the nova functional tests on their own changes. required-projects: # including nova here makes this job reusable by other projects - openstack/nova - openstack/placement irrelevant-files: *functional-irrelevant-files vars: # explicitly stating the work dir makes this job reusable by other # projects zuul_work_dir: src/opendev.org/openstack/nova bindep_profile: test py37 timeout: 3600 - job: name: nova-tox-validate-backport parent: openstack-tox description: | Determine whether a backport is ready to be merged by checking whether it has already been merged to master or more recent stable branches. Uses tox with the ``validate-backport`` environment. vars: tox_envlist: validate-backport - job: name: nova-live-migration parent: tempest-multinode-full-py3 description: | Run tempest live migration tests against local qcow2 ephemeral storage and shared LVM/iSCSI cinder volumes. 
irrelevant-files: *dsvm-irrelevant-files vars: tox_envlist: all tempest_test_regex: (^tempest\.api\.compute\.admin\.(test_live_migration|test_migration)) devstack_local_conf: test-config: $TEMPEST_CONFIG: compute-feature-enabled: volume_backed_live_migration: true block_migration_for_live_migration: true block_migrate_cinder_iscsi: true post-run: playbooks/nova-live-migration/post-run.yaml # TODO(lyarwood): The following jobs need to be written as part of the # migration to zuulv3 before nova-live-migration can be removed: # #- job: # name: nova-multinode-live-migration-ceph # description: | # Run tempest live migration tests against ceph ephemeral storage and # cinder volumes. #- job: # name: nova-multinode-evacuate-ceph # description: | # Verifiy the evacuation of instances with ceph ephemeral disks # from down compute hosts. - job: name: nova-lvm parent: devstack-tempest description: | Run tempest compute API tests using LVM image backend. This only runs against nova/virt/libvirt/* changes. # Copy irrelevant-files from nova-dsvm-multinode-base and then exclude # anything that is not in nova/virt/libvirt/* or nova/privsep/*. irrelevant-files: - ^(?!.zuul.yaml)(?!nova/virt/libvirt/)(?!nova/privsep/).*$ - ^api-.*$ - ^(test-|)requirements.txt$ - ^.*\.rst$ - ^.git.*$ - ^doc/.*$ - ^nova/hacking/.*$ - ^nova/locale/.*$ - ^nova/tests/.*$ - ^nova/test.py$ - ^releasenotes/.*$ - ^setup.cfg$ - ^tools/.*$ - ^tox.ini$ # TODO(mriedem): Make this voting and gating once bug 1771700 is fixed # and we've had enough runs to feel comfortable with this setup. voting: false vars: # We use the "all" environment for tempest_test_regex and # tempest_black_regex. tox_envlist: all # Only run compute API tests. tempest_test_regex: ^tempest\.api\.compute # Skip slow tests. tempest_black_regex: .*\[.*\bslow\b.*\] devstack_local_conf: test-config: $TEMPEST_CONFIG: compute-feature-enabled: # NOTE(mriedem): resize of non-volume-backed lvm instances does # not yet work (bug 1831657). resize: false cold_migration: false devstack_localrc: NOVA_BACKEND: LVM # Do not waste time clearing volumes. LVM_VOLUME_CLEAR: none # Disable SSH validation in tests to save time. TEMPEST_RUN_VALIDATION: false devstack_services: # Disable non-essential services that we don't need for this job. c-bak: false - job: name: nova-next parent: tempest-multinode-full-py3 description: | This job was added in Newton when placement and cellsv2 were optional. Placement and cellsv2 are required starting in Ocata. In Pike, the service user token functionality was added. This job is also unique in that it runs the post_test_hook from the nova repo, which runs post-test scripts to ensure those scripts are still working, e.g. archive_deleted_rows. In Queens, this job started testing the TLS console proxy code in the libvirt driver. Starting in Stein, the job was changed to run with python 3 and enabled volume multi-attach testing. Starting in Train, the job enabled counting quota usage from placement. Starting in Ussuri, the job was changed to multinode. Runs all tempest compute API and most scenario tests concurrently. irrelevant-files: *dsvm-irrelevant-files # Run post-tempest tests like for nova-manage commands. post-run: playbooks/nova-next/post.yaml vars: # We use the "all" environment for tempest_test_regex and # tempest_black_regex. tox_envlist: all # Run all compute API tests and most scenario tests at the default # concurrency (nproc/2 which is normally 4 in the gate). 
tempest_test_regex: ^tempest\.(scenario|api\.compute) # The tempest.scenario.test_network* tests are skipped because they # (1) take a long time and (2) are already covered in the # tempest-slow* job. If this regex gets more complicated use # tempest_test_blacklist. tempest_black_regex: ^tempest.scenario.test_network devstack_local_conf: post-config: $NOVA_CPU_CONF: compute: # Switch off the provider association refresh, which should # reduce the number of placement calls in steady state. Added in # Stein. resource_provider_association_refresh: 0 $NOVA_CONF: quota: # Added in Train. count_usage_from_placement: True scheduler: # Added in Train. query_placement_for_image_type_support: True "/$NEUTRON_CORE_PLUGIN_CONF": # Needed for QoS port heal allocation testing. ovs: bridge_mappings: public:br-ex resource_provider_bandwidths: br-ex:1000000:1000000 test-config: $TEMPEST_CONFIG: network-feature-enabled: qos_placement_physnet: public devstack_localrc: # Added in Pike. NOVA_USE_SERVICE_TOKEN: True # Enable TLS between the noVNC proxy & compute nodes; this requires # the tls-proxy service to be enabled. Added in Queens. NOVA_CONSOLE_PROXY_COMPUTE_TLS: True # Added in Stein. ENABLE_VOLUME_MULTIATTACH: True # Added in Ussuri. FORCE_CONFIG_DRIVE: True devstack_services: tls-proxy: true # neutron-* needed for QoS port heal allocation testing. neutron-placement: true neutron-qos: true # Disable non-essential services that we don't need for this job. c-bak: false devstack_plugins: # Needed for QoS port heal allocation testing. neutron: https://opendev.org/openstack/neutron group-vars: subnode: devstack_localrc: NOVA_USE_SERVICE_TOKEN: True NOVA_CONSOLE_PROXY_COMPUTE_TLS: True FORCE_CONFIG_DRIVE: True devstack_services: tls-proxy: true c-bak: false - job: name: nova-tempest-v2-api parent: devstack-tempest branches: - master description: | This job runs the Tempest compute tests against v2.0 endpoint. Former names for this job was: * legacy-tempest-dsvm-nova-v20-api vars: tox_envlist: all tempest_test_regex: api.*compute devstack_localrc: TEMPEST_COMPUTE_TYPE: compute_legacy - job: name: nova-tempest-full-oslo.versionedobjects parent: tempest-full-py3 description: | Run test with git version of oslo.versionedobjects to check that changes to nova will work with the next released version of that library. required-projects: - openstack/oslo.versionedobjects - job: name: nova-grenade-multinode parent: grenade-multinode description: | Run a multinode grenade job and run the smoke, cold and live migration tests with the controller upgraded and the compute on the older release. The former names for this job were "nova-grenade-live-migration" and "legacy-grenade-dsvm-neutron-multinode-live-migration". irrelevant-files: *dsvm-irrelevant-files vars: devstack_local_conf: test-config: $TEMPEST_CONFIG: compute-feature-enabled: live_migration: true volume_backed_live_migration: true block_migration_for_live_migration: true block_migrate_cinder_iscsi: true tox_envlist: all tempest_test_regex: ((tempest\.(api\.compute|scenario)\..*smoke.*)|(^tempest\.api\.compute\.admin\.(test_live_migration|test_migration))) - job: name: nova-multi-cell parent: tempest-multinode-full-py3 description: | Multi-node python3 job which runs with two nodes and two non-cell0 cells. The compute on the controller runs in cell1 and the compute on the subnode runs in cell2. irrelevant-files: *dsvm-irrelevant-files vars: # We use the "all" environment for tempest_test_regex and # tempest_test_blacklist. 
tox_envlist: all # Run compute API and scenario tests. tempest_test_regex: ^tempest\.(scenario|(api\.compute)) tempest_test_blacklist: '{{ ansible_user_dir }}/{{ zuul.projects["opendev.org/openstack/nova"].src_dir }}/devstack/nova-multi-cell-blacklist.txt' devstack_local_conf: post-config: $NOVA_CONF: oslo_policy: # The default policy file is policy.json but the # setup-multi-cell-policy role will write to policy.yaml. policy_file: policy.yaml test-config: $TEMPEST_CONFIG: compute-feature-enabled: # Enable cold migration for migrating across cells. Note that # because NOVA_ALLOW_MOVE_TO_SAME_HOST=false, all cold migrations # will move across cells. cold_migration: true devstack_services: # Disable other non-essential services that we don't need for this job. c-bak: false devstack_localrc: # Setup two non-cell0 cells (cell1 and cell2). NOVA_NUM_CELLS: 2 # Disable resize to the same host so all resizes will move across # cells. NOVA_ALLOW_MOVE_TO_SAME_HOST: false # We only have two computes and we don't yet support cross-cell live # migration. LIVE_MIGRATION_AVAILABLE: false group-vars: peers: devstack_localrc: NOVA_ALLOW_MOVE_TO_SAME_HOST: true LIVE_MIGRATION_AVAILABLE: false subnode: devstack_localrc: # The subnode compute will get registered with cell2. NOVA_CPU_CELL: 2 devstack_services: # Disable other non-essential services that we don't need for this # job. c-bak: false # Perform setup for the multi-cell environment. Note that this runs # before devstack is setup on the controller host. pre-run: playbooks/nova-multi-cell/pre.yaml - job: name: nova-osprofiler-redis parent: tempest-smoke-py3-osprofiler-redis description: | Runs osprofiler with the Redis collector on a subset of compute-specific tempest-full-py3 smoke tests. irrelevant-files: *dsvm-irrelevant-files required-projects: - openstack/nova vars: # We use the "all" environment for tempest_test_regex. tox_envlist: all # Run compute API and only the test_server_basic_ops scenario tests. tempest_test_regex: ^tempest\.(scenario\.test_server_basic_ops|(api\.compute)) - project: # Please try to keep the list of job names sorted alphabetically. templates: - check-requirements - integrated-gate-compute - openstack-cover-jobs - openstack-python3-ussuri-jobs - periodic-stable-jobs - publish-openstack-docs-pti - release-notes-jobs-python3 check: jobs: # We define our own irrelevant-files so we don't run the job # on things like nova docs-only changes. - ironic-tempest-ipa-wholedisk-bios-agent_ipmitool-tinyipa: voting: false irrelevant-files: *dsvm-irrelevant-files - devstack-plugin-ceph-tempest-py3: voting: false irrelevant-files: *dsvm-irrelevant-files - neutron-tempest-linuxbridge: irrelevant-files: # NOTE(mriedem): This job has its own irrelevant-files section # so that we only run it on changes to networking and libvirt/vif # code; we don't need to run this on all changes, nor do we run # it in the gate. - ^(?!nova/network/.*)(?!nova/virt/libvirt/vif.py).*$ - nova-live-migration - nova-lvm - nova-multi-cell - nova-next - nova-tox-functional-py36 - nova-tox-validate-backport: voting: false - tempest-integrated-compute: # NOTE(gmann): Policies changes do not need to run all the # integration test jobs. Running only tempest and grenade # common jobs will be enough along with nova functional # and unit tests. 
irrelevant-files: &policies-irrelevant-files - ^api-.*$ - ^(test-|)requirements.txt$ - ^.*\.rst$ - ^.git.*$ - ^doc/.*$ - ^nova/hacking/.*$ - ^nova/locale/.*$ - ^nova/tests/.*$ - ^nova/test.py$ - ^releasenotes/.*$ - ^setup.cfg$ - ^tools/.*$ - ^tox.ini$ - openstack-tox-lower-constraints: voting: false - nova-grenade-multinode: irrelevant-files: *policies-irrelevant-files - tempest-ipv6-only: irrelevant-files: *dsvm-irrelevant-files - openstacksdk-functional-devstack: irrelevant-files: *dsvm-irrelevant-files - cyborg-tempest: irrelevant-files: *dsvm-irrelevant-files voting: false gate: jobs: - nova-live-migration - nova-tox-functional-py36 - nova-multi-cell - nova-next - nova-tox-validate-backport - tempest-integrated-compute: irrelevant-files: *policies-irrelevant-files - openstack-tox-lower-constraints: voting: false - nova-grenade-multinode: irrelevant-files: *policies-irrelevant-files - tempest-ipv6-only: irrelevant-files: *dsvm-irrelevant-files - openstacksdk-functional-devstack: irrelevant-files: *dsvm-irrelevant-files experimental: jobs: - ironic-tempest-bfv: irrelevant-files: *dsvm-irrelevant-files - ironic-tempest-ipa-wholedisk-direct-tinyipa-multinode: irrelevant-files: *dsvm-irrelevant-files - barbican-simple-crypto-devstack-tempest: irrelevant-files: *dsvm-irrelevant-files - devstack-plugin-nfs-tempest-full: irrelevant-files: *dsvm-irrelevant-files - nova-osprofiler-redis - tempest-full-py3-opensuse15: irrelevant-files: *dsvm-irrelevant-files - tempest-pg-full: irrelevant-files: *dsvm-irrelevant-files - nova-tempest-full-oslo.versionedobjects: irrelevant-files: *dsvm-irrelevant-files - nova-tempest-v2-api: irrelevant-files: *dsvm-irrelevant-files - neutron-tempest-dvr-ha-multinode-full: irrelevant-files: *dsvm-irrelevant-files - neutron-tempest-iptables_hybrid: irrelevant-files: *dsvm-irrelevant-files - os-vif-ovs: irrelevant-files: *dsvm-irrelevant-files # NOTE(mriedem): Consider moving nova-tox-functional-py37 to the # check and gate queues once it's stable (like openstack-python37-jobs) - nova-tox-functional-py37 - devstack-platform-fedora-latest: irrelevant-files: *dsvm-irrelevant-files - devstack-platform-fedora-latest-virt-preview: irrelevant-files: *dsvm-irrelevant-files ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736377.0 nova-21.2.4/AUTHORS0000664000175000017500000020500100000000000013634 0ustar00zuulzuul00000000000000Aaron Lee Aaron Rosen Aaron Rosen Aarti Kriplani Abhijeet Malawade Abhijeet Malawade Abhishek Anand Abhishek Chanda Abhishek Kekane Abhishek Sharma Abhishek Talwar Adalberto Medeiros Adam Gandelman Adam Gandelman Adam Gandelman Adam Johnson Adam Kacmarsky Adam Spiers Adam Young Adelina Tuvenie Aditi Rajagopal Aditi Raveesh Aditi Raveesh Aditya Prakash Vaja Adrian Chiris Adrian Smith Adrian Vladu Adrien Cunin Adrien Cunin Ahmad Hassan Akash Gangil Akihiro MOTOKI Akihiro Motoki Akira KAMIO Akira Yoshiyama Akira Yoshiyama Ala Rezmerita Alberto Planas Alessandro Pilotti Alessandro Pilotti Alessandro Tagliapietra Alessio Ababilov Alessio Ababilov Alex Gaynor Alex Glikson Alex Handle Alex Hmelevsky Alex Holden Alex Meade Alex Szarka Alex Szarka Alex Xu AlexFrolov AlexMuresan Alexander Bochkarev Alexander Burluka Alexander Gordeev Alexander Gorodnev Alexander Sakhnov Alexander Schmidt Alexandra Settle Alexandra Settle Alexandre Arents Alexandre Levine Alexandru Muresan Alexei Kornienko Alexey I. 
Froloff Alexey Roytman Alexis Lee Alexis Lee Alfredo Moralejo Alin Balutoiu Alin Gabriel Serdean Allen Gao Allen Gao Alvaro Lopez Garcia Amandeep Ameed Ashour Ameed Ashour Amir Sadoughi Amrith Kumar Amy Fong Ana Krivokapic Anand Shanmugam Andras Gyacsok Andre Andre Aranha Andrea Frittoli Andrea Rosa Andrea Rosa Andreas Jaeger Andreas Jaeger Andreas Karis Andreas Scheuring Andrei Bacos Andrei V. Ostapenko Andrew Bogott Andrew Boik Andrew Bonney Andrew Clay Shafer Andrew Glen-Young Andrew James Andrew Laski Andrew Laski Andrew Lazarev Andrew Melton Andrew Woodward Andrey Brindeyev Andrey Kurilin Andrey Kurilin Andrey Pavlov Andrey Volkov Andy Hill Andy Hsiang Andy McCrae Andy Smith Andy Southgate Aneesh Puliyedath Udumbath Angus Lees Anh Tran Anish Bhatt Anita Kuno Ankit Agrawal Ann Kamyshnikova Anne Gentle Anne Gentle Ante Karamatic Ante Karamatić Ante Karamatić Anthony Lee Anthony PERARD Anthony Woods Anthony Young Anton Arefiev Anton Gorenkov Anton Kremenetsky Anton V. Yanchenko Antoni Segura Puimedon Antony Messerli Anuj Mathur Anush Krishnamurthy Anusha Unnam Arata Notsu Arathi Archit Modi Armando Migliaccio Armando Migliaccio Arnaud Legendre Arnaud Legendre Arnaud Morin Artem Vasilyev Arthur Dayne Artom Lifshitz Artur Malinowski Arvind Nadendla Arvind Somya Arx Cruz Asbjørn Sannes Aswad Rangnekar Atsushi SAKAI Attila Fazekas Augustina Ragwitz Author Name Avinash Prasad Avishay Traeger Avishay Traeger Aysy Anne Duarte Ayush Garg Balazs Gibizer Balazs Gibizer Bartek Zurawski Bartosz Fic Beliveau, Ludovic Belmiro Moreira Ben McGraw Ben Nemec Ben Nemec Ben Nemec Ben Roble Ben Swartzlander Bernhard M. Wiedemann Bhagyashri Shewale Bharath Thiruveedula Bhuvan Arumugam Bilal Akhtar Bill Owen Bin Zhou Bo Quan Bo Wang Bob Ball Boden R Boris Bobrov Boris Filippov Boris Pavlovic Brad Hall Brad McConnell Brad Pokorny Brandon Irizarry Brant Knudson Brendan Maguire Breno Leitao Brent Eagles Brent Tang Brian D. Elliott Brian Elliott Brian Elliott Brian Haley Brian Lamar Brian Moss Brian Rosmaita Brian Rosmaita Brian Schott Brian Waldon Brianna Poulos Brooks Kaminski Brooks Kaminski Bruce Benjamin Burt Holzman Béla Vancsics Cady_Chen Cale Rath Cao ShuFeng Cao Xuan Hoang Carl Baldwin Carlos Goncalves Cedric Brandily Cedric LECOMTE Chandan Kumar Chang Bo Guo ChangBo Guo(gcb) Changbin Liu Chen Chen Fan Chen Hanxiao ChenZheng Chet Burgess Chiradeep Vittal Chmouel Boudjnah Chris Chris Behrens Chris Buccella Chris Dent Chris Friesen Chris J Arges Chris Jones Chris Krelle Chris Krelle Chris St. Pierre Chris Suttles Chris Yeoh Christian Berendt Christine Wang Christoph Manns Christoph Thiel Christopher Lefelhocz Christopher Lefelhocz Christopher MacGown Christopher Yeoh Chuck Carmack Chuck Short Chung Chih, Hung Cian O'Driscoll Clark Boylan Claudiu Belu Claxton Clay Gerrard Clemens Perz Clenimar Filemon Clif Houck Clint Byrum Cole Robinson Cor Cornelisse Corentin Ardeois Corey Bryant Corey Wright Cory Stone Cory Wright Craig Tracey Craig Vyvial Cyril Roelandt DamonLi Dan Dan Song Dan Emmons Dan Florea Dan Genin Dan Genin Dan Peschman Dan Prince Dan Smith Dan Smith Dan Smith Dan Wendlandt Dane Fichter Danfeng Danfly Daniel Abad Daniel Berrange (berrange@redhat.com) Daniel Berrange Daniel Genin Daniel Kuffner Daniel L Jones Daniel P. 
Berrange Daniel Pawlik Daniel Pawlik Daniel Stelter-Gliese Danil Akhmetov Danny Al-Gaaf Dao Cong Tien Darragh O'Reilly Darren Birkett Darren Sanders Darren Worrall Davanum Srinivas Davanum Srinivas Dave Lapsley Dave McCowan Dave McNally Dave Walker (Daviey) Dave Walker (Daviey) David Besen David Bingham David Edery David Hill David Hill David Kang David McNally David Medberry David Peraza David Pravec David Rabel David Ripton David Scannell David Shrewsbury David Subiros David Wahlstrom David Xie Dazhao Dean Troyer Debo Dutta Debo~ Dutta Deepak C Shetty Deepak Garg Deliang Fan Demontiê Junior Dennis Kliban DennyZhang Derek Higgins Devananda van der Veen Devdatta Kulkarni Devdeep Singh Devendra Modium Devin Carlen Dharini Chandrasekar Dheeraj Gupta Diana Clarke Dima Shulyak Dimitri Mazmanov Dina Belova Dinesh Bhor Dirk Mueller Divya Dmitry Borodaenko Dmitry Guryanov Dmitry Guryanov Dmitry Spikhalskiy Dmitry Tantsur Dmitry Tantsur Dmitry Tantsur Dolph Mathews Dominic Schlegel Dominik Heidler Don Dugger Donal Lafferty Dong Ma Dongcan Ye Dongdong Zhou Donovan Finch Dorin Paslaru Doug Hellmann Doug Hellmann Doug Royal Doug Wiegley Drew Fisher Drew Thorstensen DuYaHong Duan Jiong Duncan McGreggor Duong Ha-Quang Dustin Cowles Earle F. Philhower, III Ed Bak Ed Leafe EdLeafe Edan David Edgar Magana Eduardo Costa Edward Hope-Morley Edwin Zhai Eiich Aikawa Eiichi Aikawa Einst Crazy Eldar Nugaev Elena Ezhova Eli Qiao Eli Qiao Ellen Hui Elod Illes Előd Illés Emilien Macchi Emma Foley En Eoghan Glynn Eohyung Lee Eric Berglund Eric Blake Eric Brown Eric Day Eric Fried Eric Fried Eric Guo Eric Harney Eric Harney Eric M Gonzalez Eric Windisch Eric Windisch Eric Young Eric Young Erik Berg Erik Olof Gunnar Andersson Erik Zaadi Erwan Gallen Esha Seth Esra Celik Ethan Chu Euan Harris Eugene Kirpichov Eugene Nikanorov Eugeniya Kudryashova Evan Callicoat Evgeny Antyshev Evgeny Fedoruk Ewan Mellor Facundo Farias Facundo Maldonado Fan Zhang Fang He Fang Jinxing Fei Long Wang Fei Long Wang Felipe Monteiro Felix Li Feng Xi Yan Fengqian Gao Feodor Tersin Feodor Tersin Flaper Fesp Flavia Missi Flavio Percoco Flavio Percoco Florent Flament Florian Haas Florian Haas Forest Romain ForestLee Francesco Santoro Francois Palin François Charlier Frederic Lepied Gabe Westmaas Gabor Antal Gabriel Adrian Samfira Gabriel Hurley Gabriel Samfira Gage Hugo Gage Hugo Gao Yuan Gary Kotton Gary Kotton Gaudenz Steinlin Gaurav Gupta Gauvain Pocentek Gauvain Pocentek Georg Hoesch George Shuklin Gergo Debreczeni Gerry Kopec Ghanshyam Ghanshyam Ghanshyam Mann Ghanshyan Mann Ghe Rivero Giampaolo Lauria Giridhar Jayavelu Giulio Fidente Gleb Stepanov Gonéri Le Bouder Gordon Chung Gorka Eguileor Graham Hayes Grant Murphy Greg Althaus Greg Ball Gregory Haynes Grzegorz Grasza Guan Qiang Guang Yee Guangya Liu Guangyu Suo Guillaume Espanel Guohui Liu Guoqiang Ding Gyorgy Szombathelyi Gábor Antal Ha Van Tu Haiwei Xu Hamdy Khader Hang Yang Hans Lindgren Haomai Wang Harshada Mangesh Kakad Haruka Tanizawa He Jie Xu He Jie Xu He Jie Xu He Yongli Hemanth Makkapati Hemanth Nakkina Hendrik Volkmer Hengqing Hu Hervé Beraud Hesam Chobanlou Hieu LE Hirofumi Ichihara Hironori Shiina Hiroyuki Eguchi Hisaharu Ishii Hisaki Ohara Hongbin Lu Hongbin Lu Huan Xie Huang Rui Hyunsun Moon IWAMOTO Toshihiro Ian Cordasco Ian Wells Ian Wienand Ianeta Hutchinson Ice Yao Ifat Afek Ihar Hrachyshka Ildiko Vancsa Ildiko Vancsa Ilya Alekseyev Ilya Etingof Ilya Pekelny Ilya Popov Ilya Shakhat Inbar Ionuț Arțăriși Ionuț Bîru Irena Berezovsky Isaku Yamahata Itzik Brown Iury Gregory 
Melo Ferreira Ivan A. Melnikov Ivan Kolodyazhny Ivaylo Mitev Ivo Vasev J. Daniel Schmidt JC Martin Jack Ding Jackie Truong Jacob Cherkas Jake Dahn Jake Liu Jake Yip Jakub Pavlik Jakub Ruzicka James Carey James Chapman James E. Blair James E. Blair James Page James Penick Jamie Lennox Jamie Lennox Jan Grant Jan Gutter Jan Zerebecki Janis Gengeris Jared Culp Jared Winborne Jason Anderson Jason Cannavale Jason Dillaman Jason Koelker Jason.Zhao Javeme Jay Faulkner Jay Jahns Jay Lau Jay Lau Jay Lee Jay Pipes Jay S. Bryant Jean-Baptiste RANSY Jean-Marc Saffroy Jean-Philippe Evrard Jeegn Chen Jeegn Chen Jeffrey Zhang Jenkins Jennifer Mulsow Jenny Oshima Jens Harbott Jens Jorritsma Jens Rosenboom Jeremy Liu Jeremy Stanley Jesse Andrews Jesse J. Cook Jesse J. Cook Jesse Keating Jesse Keating Jesse Keating Jesse Pretorius JiaLei Shi Jiajun Liu Jian Wen JiangPF Jianghua Wang Jie Li Jim Fehlig Jim Rollenhagen Jimmy Bergman Jin Hui JinLi Jinwoo 'Joseph' Suh Jiří Suchomel Joe Cropper Joe Gordon Joe Heck Joe Julian Joe Mills Joe Talerico Joel Coffman Joel Moore Johannes Erdfelt Johannes Erdfelt Johannes Kulik Johannes Kulik John John Bresnahan John Dewey John Garbutt John Garbutt John Garbutt John Griffith John Griffith John H. Tran John Haan John Herndon John Hua John Kennedy John L. Villalovos John Stanford John Tran John Tran John Warren Johnson koil raj Jolyon Brown Jon Bernard Jon Grimm Jonathan Bryce Jordan Pittier Jordan Rinke JordanP JordanP Jorge Niedbalski Joris S'heeren Jose Castro Leon Joseph Suh Joseph W. Breu Josephine Seifert Josh Durgin Josh Durgin Josh Gachnang Josh Kearney Josh Kleinpeter Joshua Harlow Joshua Harlow Joshua Harlow Joshua Hesketh Joshua McKenty JuPing Juan Antonio Osorio Robles Juan Manuel Olle Juerg Haefliger Julia Kreger Julia Varlamova Julian Sy Julian Sy Julien Danjou Julien Danjou Justin Hammond Justin Santa Barbara Justin Shepherd Jérôme Gallard KIYOHIRO ADACHI Kaitlin Farr Kamil Rykowski Kanagaraj Manickam Karen Bradshaw Karen Noel Karimullah Mohammed Kartik Bommepally Kashi Reddy Kashyap Chamarthy Kaushik Chandrashekar Kaushik Chandrashekar Kei Masumoto Keigo Noha Keisuke Tagami Ken Burger Ken Igarashi Ken Pepple Ken'ichi Ohmichi Ken'ichi Ohmichi Kengo Sakai Kenji Yasui Kent Wang Kentaro Matsumoto Keshava Bharadwaj Kevin Benton Kevin Benton Kevin Benton Kevin Bringard Kevin L. Mitchell Kevin Zhao Kevin Zhao Kevin Zhao Kevin Zhao KevinZhao Kevin_Zheng Kiall Mac Innes Kien Nguyen Kieran Spear Kirill Shileev Kiseok Kim Kobi Samoray Koert van der Veer Koichi Yoshigoe Koji Iida Komei Shimamura Konstantinos Samaras-Tsakiris Kost Kravchenko Pavel Krisztian Gacsal Kui Shi Kun Huang Kurt Taylor Kurt Taylor Kurtis Cobb Kylin CG LIU Yulong Lajos Katona Lan Qi song Lance Bragstad Lance Bragstad Lars Kellogg-Stedman Laszlo Hegedus Launchpad Translations on behalf of nova-core <> Lauren Taylor Leander Bessa Beernaert Leandro I. Costantino Lee Yarwood Leehom Li (feli5) Leehom Li Lenny Verkhovsky LeopardMa Li Chen Liam Kelleher Liam Young Lianhao Lu Likitha Shetty Lin Hua Cheng Lin Tan Lingxian Kong LingxianKong LiuNanke Loganathan Parthipan Lorenzo Affetti Lorin Hochstein Lucas Alvares Gomes Lucian Petrut Lucian Petrut Ludovic Beliveau Luigi Toscano Luis A. 
Garcia Luis Fernandez Alvarez Luis Pigueiras Luis Tomas Luiz Capitulino Luo Gangyi Luong Anh Tuan Luyao Zhong LuyaoZhong Lvov Maxim MORITA Kazutaka Maciej Jozefczyk Maciej Józefczyk Maciej Kucia Maciej Szankin Madhu Mohan Nelemane Madhuri Kumari Madhuri Kumari Mahesh K P Mahesh Panchaksharaiah Maho Koshiya Maithem Major Hayden Malini Bhandaru Mana Kaneko Mandar Vaze Mandell Degerness Manjunath Patil Marcellin Fom Tchassem Marcin Juszkiewicz Marcin Juszkiewicz Marcio Roberto Starke Marco Sinhoreli Marcos Lobo Marcus Furlong Marian Horban Mario Villaplana Maris Fogels Mark Doffman Mark Giles Mark Goddard Mark Goddard Mark McClain Mark McLoughlin Mark T. Voelker Mark Washenberger Markus Zoeller Martin Kletzander Martin Packman Martin Schuppert Martins Jakubovics Maru Newby Masaki Matsushita Masanori Itoh Masanori Itoh Masayuki Igawa Mate Lakat Mathew Odden Mathieu Gagné Mathieu Mitchell Matt Dietz Matt Fischer Matt Joyce Matt Odden Matt Rabe Matt Riedemann Matt Riedemann Matt Stephenson Matt Thompson Matt Wisch Matthew Booth Matthew Edmonds Matthew Gilliard Matthew Hooker Matthew Macdonald-Wallace Matthew Oliver Matthew Sherborne Matthew Treinish Matthew Treinish Mauro S. M. Rodrigues Maxim Nestratov Maxim Nestratov Maxime Leroy Md Nadeem Mehdi Abaakouk Mehdi Abaakouk Mehdi Abaakouk Melanie Witt Michael Bayer Michael Davies Michael Gundlach Michael H Wilson Michael Henkel Michael J Fork Michael Kerrin Michael Krotscheck Michael Still Michael Turek Michael Wilson Michael Wurtz Michal Michal Dulko Michal Pryc Miguel Herranz Miguel Lavalle Miguel Lavalle Mike Bayer Mike Dorman Mike Durnosvistov Mike Fedosin Mike Lowe Mike Lundy Mike Milner Mike Perez Mike Pittaro Mike Scherbakov Mike Spreitzer MikeG451 Mikhail Chernik Mikhail Durnosvistov Mikhail Feoktistov Mikyung Kang Ming Yang Mitsuhiko Yamazaki Mitsuhiro SHIGEMATSU Mitsuhiro Tanino Mohammed Naser Monsyne Dragon Monty Taylor Morgan Fainberg Moshe Levi MotoKen Muawia Khan MultipleCrashes Muneyuki Noguchi NTT PF Lab. 
Nachi Ueno Nam Nguyen Hoai Nathan Kinder Naveed Massjouni Navneet Kumar Neha Alhat Neil Jerram Neil Jerram Newptone Ngo Quoc Cuong Nguyen Hai Truong Nguyen Hung Phuong Nguyen Phuong An Nicholas Kuechler Nick Bartos Nicolas Bock Nicolas Simonds Nikhil Komawar Nikhil Komawar Nikita Gerasimov Nikita Konovalov Nikola Dipanov Nikola Dipanov Nikola Đipanov Nikolai Korablin Nikolay Sokolov Nirmal Ranganathan Nisha Agarwal Noorul Islam K M Numan Siddique OctopusZhang OctopusZhang Oleg Bondarev Olga Kopilova Oliver Walsh Ollie Leahy Ondřej Nový OpenStack Release Bot Oshrit Feder Pablo Fernando Cargnelutti Pallavi PanYaLian Patrick East Patrick Schaefer Paul Green Paul Griffin Paul McMillan Paul Murray Paul Murray Paul Voccio Paulo Matias Pavel Gluschak Pavel Glushchak Pavel Kholkin Pavel Kirpichyov Pavel Kravchenco Pavlo Shchelokovskyy Pavlo Shchelokovskyy Pawel Koniszewski Pawel Palucki Pedro Navarro Perez Pekelny "I159" Ilya Peng Li Peng Yong Pengfei Zhang Peter Feiner Peter Hamilton Peter Krempa Peter Penchev Petersingh Anburaj Petrut Lucian Phil Day Philip Knouff Philip Schwartz Philipp Marek Phong Ly Pierre Blanc Pierre Riteau Pooja Jadhav Praharshitha Metla Pranali Deore PranaliDeore Pranav Salunke Pranav Salunke Prashanth kumar reddy Prateek Arora Praveen Yalagandula Prem Karat Przemyslaw Czesnowicz Puneet Goyal Pushkar Umaranikar Pádraig Brady Qiang Guan Qiao, Liyong Qiaowei Ren Qin Zhao Qin Zhao Qing Wu Wang QingXin Meng Qiu Yu Qiu Yu QunyingRan Rabi Mishra Racha Ben Ali Radomir Dopieralski Radoslav Gerganov Radoslaw Smigielski Rafael Folco Rafi Khardalian Rajesh Tailor Rajesh Tailor Rajesh Tailor Rakesh H S Ralf Haferkamp Ram Nalluri Raoul Hidalgo Charman Ravi Shekhar Jethani Rawan Herzallah Ray Chen Ray Sun Renier Morales Renuka Apte Ricardo Carrillo Cruz Ricardo Noriega Riccardo Pittau Richard Jones Richard W.M. Jones Rick Bartra Rick Clark Rick Harris Rikimaru Honjo Rikimaru Honjo Ripal Nathuji Ripal Nathuji Rob Esker Robert Collins Robert Collins Robert Ellis Robert Kukura Robert Li Robert Pothier Robert Tingirica Robin Naundorf Rodolfo Alonso Hernandez Rodrigo Barbieri Roey Chen Rohan Kanade Rohan Kanade Rohan Kanade Rohan Rhishikesh Kanade Rohit Karajgi Roland Hochmuth Romain Chantereau Romain Hardouin Roman Bogorodskiy Roman Bogorodskiy Roman Dobosz Roman Podoliaka Roman Podolyaka Ronald Bradford Ronen Kat Rongze Zhu RongzeZhu Rosario Di Somma Ruby Loo Rui Chen Rushi Agrawal Russell Bryant Russell Cloran Russell Sim Ryan Hsu Ryan Lane Ryan Lucio Ryan McNair Ryan Moe Ryan Moore Ryan Moore Ryan Rossiter Ryo Miki Ryota MIBU Ryu Ishimoto Sabari Kumar Murugesan Sachi King Sagar Ratnakara Nikam Sahid Orentino Ferdjaoui Sahid Orentino Ferdjaoui Sahid Orentino Ferdjaoui Salvatore Orlando Sam Alba Sam Betts Sam Morrison Sam Morrison Sam Yaple Samantha Blanco Sampath Priyankara Samuel Matzek Sandy Walsh Santiago Baldassin Sarafraj Singh Sascha Peilicke Sascha Peilicke Sascha Peilicke Sateesh Chodapuneedi Sathish Nagappan Satoru Moriya Satyanarayana Patibandla Satyanarayana Patibandla Saverio Proto Scott Moser Scott Moser Scott Reeve Scott Wilson Sean Chen Sean Dague Sean Dague Sean Dague Sean M. Collins Sean M. 
Collins Sean McCully Sean McGinnis Sean McGinnis Sean McGinnis Sean Mooney Sean Mooney Seif Lotfy Senhua Huang Sergey Nikitin Sergey Nikitin Sergey Skripnick Sergey Vilgelm Sergii Golovatiuk Sergio Cazzolato Shane Wang ShaoHe Feng Sharat Sharma Shawn Harsock Shawn Hartsock Shawn Hartsock Shih-Hao Li Shilla Saebi Shlomi Sasson Shoham Peller Shraddha Pandhe Shraddha Pandhe Shuangtai Tian ShunliZhou Shunya Kitada Shuquan Huang Sidharth Surana Sihan Wang Silvan Kaiser Simon Chang Simon Dodsley Simon Pasquier Simon Pasquier Simona Iuliana Toader Sirisha Devineni Sirushti Murugesan Sivasathurappan Radhakrishnan Slawek Kaplonski Solly Ross Somik Behera Soren Hansen Soren Hansen Spencer Krum Spencer Yu Stanislaw Pitucha Stanisław Pitucha Stef T Stefan Amann Stephanie Reese Stephen Finucane Stephen Finucane Stephen Finucane Stephen Finucane Stephen Gran StephenSun Steve Baker Steve Baker Steve Kowalik Steve Noyes Steven Dake Steven Hardy Steven Kaufer Steven Webster Stuart McLaren Subashini Soundararajan Subhadeep De Sudarshan Acharya Sudipta Biswas Sujitha Sukhdev Kapur Sulochan Acharya Sumanth Nagadavalli Sumedh Degaonkar Sumit Naiksatam Sundar Nadathur Sunil Thaha Surojit Pathak Surya Surya Seetharaman Sven Anderson Svetlana Shturm Swami Reddy Swaminathan Vasudevan Swapnil Kulkarni (coolsvap) Sylvain Bauza Sylvain Bauza Sławek Kapłoński Tadayoshi Hosoya Takaaki Suzuki Takashi Kajinami Takashi NATSUME Takashi Natsume Takashi Natsume Takashi Sogabe Takenori Yoshimatsu Taku Izumi Tang Chen Tang Chen Tao Li Tao Yang Tao Yang TaoBai Taylor Peoples Taylor Smith Teng Li Teran McKinney Tetsuro Nakamura Tetsuro Nakamura Thang Pham Thelo Gaultier Theodoros Tsioutsias Thierry Carrez Thomas Bachman Thomas Bechtold Thomas Bechtold Thomas Goirand Thomas Herve Thomas Kaergel Thomas Maddox Thomas Stewart Thorsten Tarrach Tiago Mello Tianpeng Wang Tiantian Gao Tim Miller Tim Potter Tim Pownall Tim Pownall Tim Rozet Tim Simpson Timofey Durakov Toan Nguyen Tobias Urdin Todd Willey Tom Cammann Tom Fifield Tom Fifield Tom Hancock Tom Patzig Tomi Juvonen Tomoe Sugihara Tomofumi Hayashi Tomoki Sekiyama Tomoki Sekiyama Tong Li Tony Breeds Tony NIU Tony Xu Tony Yang Toshiaki Higuchi Tovin Seven Tracy Jones Travis Ankrom Trey Morris Tristan Cacqueray Tristan Cacqueray Troy Toman Trung Trinh Tsuyoshi Nagata TuanLAF Tushar Kalra Tushar Patil Tyler Blakeslee Unmesh Gurjar Unmesh Gurjar Unmesh Gurjar Vasiliy Shlykov Vasyl Saienko VeenaSL Venkateswarlu Pallamala Vern Hart Vic Howard Victor Coutellier Victor Morales Victor Sergeyev Victor Stinner Victor Stinner Vijaya Erukala Vikhyat Umrao Vilobh Meshram Vincent Hou Vincent Untz Vipin Balachandran Vishakha Agarwal Vishvananda Ishaya Vivek Agrawal Vivek YS Vladan Popovic Vladik Romanovsky Vladik Romanovsky Vladyslav Drok Vu Cong Tuan Vu Tran Vui Lam Vui Lam Waldemar Znoinski Walter A. 
Boring IV Wang Huaqiang Wangliangyu Wangpan Wangpan Wanlong Gao Wei Jiangang Wen Zhi Yu Wen Zhi Yu Wenhao Xu Wenzhi Yu Will Foster William Wolf Wonil Choi Wu Wenxiang Xavier Queralt Xiang Hui Xiangyang Chu Xiao Chen XiaohanZhang <15809181826@qq.com> XiaojueGuan Xiaowei Qian Xiaoyan Ding XieYingYun Xing Yang Xinyuan Huang Xu Ao Xu Han Peng Xuanzhou Perry Dong Xurong Yang YAMAMOTO Takashi YI-JIE,SYU Yaguang Tang Yaguang Tang Yang Hongyang Yang Yu YangLei Yassine Lamgarchal Yasuaki Nagata Yikun Jiang Yingxin Yingxin Cheng Yixing Jia Yolanda Robla Yong Sheng Gong Yongli He Yongli he Yoon Doyoul Yosef Berman Yosef Hoffman Yoshiaki Tamura Yoshihiko Atsumi You Ji YuYang Yufang Zhang Yuiko Takada Yuiko Takada YuikoTakada Yukihiro KAWADA Yulia Portnova Yun Mao Yun Shen Yunhong Jiang Yunhong, Jiang Yuriy Taraday Yuriy Zveryanskyy Yury Kulazhenkov Yuuichi Fujioka Yuzlikeev Eduard ZHU ZHU Zack Cornelius Zaina Afoulki Zane Bitter Zara Zed Shaw ZhangShuaiyi Zhao Lei Zhen Qin Zheng Yue Zhengguang Zhenguo Niu Zhenguo Niu Zhenzan Zhou Zhi Yan Liu Zhi Yan Liu ZhiQiang Fan ZhiQiang Fan Zhihai Song Zhilong.JI Zhiteng Huang Zhiteng Huang ZhongShengping Zhongyue Luo Zhou Jianming Zhou ShaoYu ZhuRongze Ziad Sawalha Zoltan Arnold Nagy abdul nizamuddin abhilash-goyal abhishek-kekane abhishek.talwar abhishekkekane alexpilotti andrewbogott ankitagrawal april arches armando-migliaccio armando-migliaccio arvindn05 ashoksingh aulbachj bailinzhang baiwenteng benjamin.grassart bhagyashris bhavani.cr boh.ricky bria4010 caoyuan cedric.brandily chaochin@gmail.com chen chenaidong1 chenghuiyu chenpengzi <1523688226@qq.com> chenxiangui chenxiao chenxing chenxing chhagarw chinmay chohoor chris fattarsi csatari da52700 daisy-ycguo daisy-ycguo dane-fichter david martin deepak.mourya deepak_mourya deepak_mourya deepakmourya deevi rani dekehn dimtruck dineshbhor divakar-padiyar-nandavar dzyu eddie-sheffield eewayhsu ejbaek ericxiett ericzhou esberglu esubramanian evikbas facundo Farias falseuser fpxie ftersin fujioka yuuichi fuzk galstrom21 gaofei gaozx garyk garyk gengchc2 gengjh gh159m ghanshyam ghanshyam ghanshyam git-harry gong yong sheng gongxiao gongysh grace.yu gregory.cunha gseverina guanzuoyu guillaume-thouvenin guohliu gustavo panizzo hackertron hartsocks heha heijlong hgangwx hill hua zhang huang.zhiping huangpengtao huangtianhua huangtianhua huanhongda hussainchachuliya hutianhao27 hzguanqiang ianeta hutchinson iccha.sethi imacdonn inspurericzhang int32bit isethi iswarya_vakati ivan-zhu jakedahn javeme jaypei jcooklin jeckxie jenny-shieh jiajunsu jianghua wang jianghuaw jiangwt100 jiataotj jichen jichenjc jimmygc jinquanni jmeridth john.griffith8@gmail.com jokcylou jolie jufeng jufeng julykobe kairoaraujo kangyufei karimb karimull kashivreddy kevin shen <372712550@qq.com> kirankv kiwik-chenrui klyang ladquin lapgoon lawrancejing lei zhang leizhang lianghao lianghuifei liangjingtao libing licanwei linbing ling-yun linwwu liu-lixiu liu-sheng liudong liusheng liuyamin lixipeng liyingjun liyingjun liyuanyuan lizheming lkhysy llg8212 lqslan lrqrun lvdongbing lyanchih m.benchchaoui@cloudbau.de m4cr0v manas.mandlekar maqi mark.sturdevant mathieu-rohon mathrock mathrock mb mbasnight mdrabe melanie witt melanie witt melissaml mingyan bao mjbright mkislinska mmidolesov msdubov nafei yang naichuans oleksii panbalag pandatt pangliye park hei park hei parklong partys paul-carlton2 pcarlton pengyuwei piyush110786 pkholkin pmoosh pooja jadhav poojajadhav pran1990 preethipy pyw qiaomin qiufossen rackerjoe rajat29 ramboman ricolin root 
rsritesh rtmdk ruichen ryo.kurahashi s iwata saradpatel sarvesh-ranjan sarvesh-ranjan scottda scottda shaofeng_cheng sharat.sharma shenxindi shi liang shihanzhang shilpa shreeduth-awasthi shuangtai shuangyang.qian smartu3 smccully sonu.kumar space sridevik sridhargaddam srushti stanzgy stewie925 stewie925 sudhir_agarwal sunhao sunjia tamilhce tanlin tengqm thorst tianhui tianmaofu tilottama gaat to-niwa tonybrad uberj unicell vaddi-kiran venakata anil venkata anil venkatamahesh vijaya-erukala vladimir.p vsaienko wangbo wangdequn wangfaxin wanghao wanghongtaozz wanghongxu wangjiajing wangqi wangxiyuan warewang watanabe isao weiweigu wingwj wuhao xhzhf xianming mao xiaoding xiaojueguan xiexs xulei xushichao ya.wang yan97ao yangyapeng yanpuqing yatin yatin karel yatinkarel ydoyeul yenai yingjisun yongiman yuanyue yugsuo yuhui_inspur yunhong jiang yuntong yuntongjin yuntongjin yushangbin yuyafei zhang-jinnan zhang.lei zhang.yufei@99cloud.net <1004988384@qq.com> zhangbailin zhangchao010 zhangchunlong zhangchunlong1@huawei.com zhangdaolong zhangdebo zhangdebo1987 zhangfeng zhangshj zhangtralon zhangyangyang zhangyanxian zhangyanzi zhaolihui zhengyao1 zhhuabj zhiyanliu zhiyanliu zhiyuan_cai zhoudongshu zhoujunqiang zhouxinyong zhu.boxiang zhubx007 zhufl zhulingjie zhurong zhuzeyu zte-hanrong zwei Édouard Thuleau Édouard Thuleau Édouard Thuleau Édouard Thuleau Émilien Macchi 翟小君 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/CONTRIBUTING.rst0000664000175000017500000000111700000000000015227 0ustar00zuulzuul00000000000000The source repository for this project can be found at: https://opendev.org/openstack/nova Pull requests submitted through GitHub are not monitored. To start contributing to OpenStack, follow the steps in the contribution guide to set up and use Gerrit: https://docs.openstack.org/contributors/code-and-documentation/quick-start.html Bugs should be filed on Launchpad: https://bugs.launchpad.net/nova For more specific information about contributing to this repository, see the Nova contributor guide: https://docs.openstack.org/nova/latest/contributor/contributing.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736374.0 nova-21.2.4/ChangeLog0000664000175000017500000656526000000000000014364 0ustar00zuulzuul00000000000000CHANGES ======= 21.2.4 ------ * [stable-only] Pin virtualenv and setuptools 21.2.3 ------ * [stable-only] Set lower-constraints job as non-voting * address open redirect with 3 forward slashes * Fix 1vcpu error with multiqueue and vif\_type=tap * Fix request path to query a resource provider by uuid * Reduce mocking in test\_reject\_open\_redirect for compat * Reject open redirection in the console proxy * Fix error that cannot overwrite policy rule for 'forced\_host' 21.2.2 ------ * Initialize global data separately and run\_once in WSGI app init * [neutron] Get only ID and name of the SGs from Neutron * Error anti-affinity violation on migrations * Remove broken legacy zuul jobs * [CI] Fix gate by using zuulv3 live migration and grenade jobs * Move 'check-cherry-picks' test to gate, n-v check * Ignore PCI devices with 32bit domain * Reproduce bug 1897528 * Honor [neutron]http\_retries in the manual client 21.2.1 ------ * Update image\_base\_image\_ref during rebuild * Rebase qcow2 images when unshelving an instance * guestfs: With libguestfs >= v1.41.1 decode returned bytes to string * Dynamically archive FK related records in 
archive\_deleted\_rows * Add functional test for bug 1837995 * Centralize sqlite FK constraint enforcement * Add config parameter 'live\_migration\_scheme' to live migration with tls guide * Use absolute path during qemu img rebase * Make \_rebase\_with\_qemu\_img() generic 21.2.0 ------ * Handle instance = None in \_local\_delete\_cleanup * Add regression test for bug 1914777 * Fallback to same-cell resize with qos ports * Raise InstanceMappingNotFound if StaleDataError is encountered * Default user\_id when not specified in check\_num\_instances\_quota * Add regression test for bug 1893284 * only wait for plugtime events in pre-live-migration * tools: Allow check-cherry-picks.sh to be disabled by an env var * Add upgrade check about old computes * Reproduce bug 1907522 in functional test * Warn when starting services with older than N-1 computes * libvirt: Use specific user when probing encrypted rbd disks during extend 21.1.2 ------ * Set instance host and drop migration under lock * Reproduce bug 1896463 in func env * Disallow CONF.compute.max\_disk\_devices\_to\_attach = 0 * Use subqueryload() instead of joinedload() for (system\_)metadata * compute: Lock by instance.uuid lock during swap\_volume * [stable-only] fix lower-constraints and disable qos resize * Omit resource inventories from placement update if zero * Use cell targeted context to query BDMs for metadata * Fix a hacking test * Update pci stat pools based on PCI device changes * Add missing exception * Handle disabled CPU features to fix live migration failures * [doc]: Fix glance image\_metadata link * Set migrate\_data.vifs only when using multiple port bindings * libvirt: Only ask tpool.Proxy to autowrap vir\* classes * Prevent archiving of pci\_devices records because of 'instance\_uuid' * libvirt: Increase incremental and max sleep time during device detach * add functional regression test for bug #1888395 21.1.1 ------ * Follow up for cherry-pick check for merge patch * libvirt: 'video.vram' property must be an integer * Allow tap interface with multiqueue * doc: correct the link to user/flavor.rst * Test for disabling greendns * docs: fix aggregate weight multiplier property names * Ensure source compute is up when confirming a resize * test\_evacuate.sh: Stop using libvirt-bin * api: Set min, maxItems for server\_group.policies field * tests: Add regression test for bug 1894966 * compute: Skip cinder\_encryption\_key\_id check when booting from volume * Add regression test for bug #1895696 * Fix invalid assert\_has\_calls * Correctly disable greendns * Move revert resize under semaphore * Move confirm resize under semaphore * Add generic reproducer for bug #1879878 * Don't unset Instance.old\_flavor, new\_flavor until necessary * tests: Add reproducer for bug #1879878 * functional: Don't inherit from 'ProviderUsageBaseTestCase' * post live migration: don't call Neutron needlessly * Reject resize operation for accelerator * Add note and daxio version to the vPMEM document * resolve ResourceProviderSyncFailed issue * Removed the host FQDN from the exception message * Set different VirtualDevice.key * Add a lock to prevent race during detach/attach of interface * Change default num\_retries for glance to 3 * replace the "hide\_hypervisor\_id" to "hw:hide\_hypervisor\_id" * docs: Resolve issue with deprecated extra specs * hardware: Reject requests for no hyperthreads on hosts with HT * tests: Add reproducer for bug #1889633 21.1.0 ------ * Removes the delta file once image is extracted * libvirt: Provide 
VIR\_MIGRATE\_PARAM\_PERSIST\_XML during live migration * libvirt:driver:Disallow AIO=native when 'O\_DIRECT' is not available * libvirt: Do not reference VIR\_ERR\_DEVICE\_MISSING when libvirt is < v4.1.0 * Add checks for volume status when rebuilding * Add regression test for bug 1879787 * func: Introduce a server\_expected\_state kwarg to InstanceHelperMixin.\_live\_migrate * [Trivial] Remove wrong format\_message() conversion * compute: Don't delete the original attachment during pre LM rollback * compute: Validate a BDMs disk\_bus when provided * Add regression tests for bug #1889108 * func: Add live migration rollback volume attachment tests * func: Add \_live\_migrate helper to InstanceHelperMixin * tests: Define constants in '\_IntegratedTestBase' * Handle multiple 'vcpusched' elements during live migrate * compute: Do not allow rescue attempts using volume snapshot images * objects: Update keypairs when saving an instance * libvirt: Handle VIR\_ERR\_DEVICE\_MISSING when detaching devices * zuul: remove legacy-tempest-dsvm-neutron-dvr-multinode-full * Correct reported system memory * catch libvirt exception when nodedev not found * libvirt: Mark e1000e VIF as supported * Fix cherry-pick check for merge patch * libvirt: Don't allow "reserving" file-backed memory * Guard against missing image cache directory * hardware: Raise useful error for invalid mempage size * Check cherry-pick hashes in pep8 tox target * libvirt: Don't delete disks on shared storage during evacuate * Add functional test for bug 1550919 * remove support of oslo.messaging 9.8.0 warning message * Add admin doc information about image cache resource accounting * Reserve DISK\_GB resource for the image cache * compute: Allow snapshots to be created from PAUSED volume backed instances * Make quotas respect instance\_list\_per\_project\_cells * Silence amqp heartbeat warning * Update scheduler instance info at confirm resize * Reproduce bug 1869050 * Bump hacking min version to 3.0.1 * Remove stale nested backport from InstancePCIRequests 21.0.0 ------ * Add nova-status upgrade check and reno for policy new defaults * use more distinct link references in release notes * zuul: Switch to the Zuulv3 grenade job * Imported Translations from Zanata * Update TOX\_CONSTRAINTS\_FILE for stable/ussuri * Update .gitreview for stable/ussuri 21.0.0.0rc1 ----------- * FUP: Amend ussuri prelude to add docs for policy concepts * Add docs and releasenotes for BP policy-defaults-refresh * Ussuri 21.0.0 prelude section * Revert "Temporarily skip TestNovaMigrationsMySQL" * docs: Add stable device rescue docs * Allocate mdevs when resizing or reverting resize * Add new default roles in remaining servers policies * Introduce scope\_types in remaining servers Policies * Add test coverage of existing remaining servers policies * Add new default roles in servers attributes policies * Introduce scope\_types in servers attributes Policies * images: Make JSON the default output format of calls to qemu-img info * Fix follow up comments on policy work * fup: Fix [workarounds]/rbd\_volume\_local\_attach config docs * Fix server actions to be system and project scoped * Use oslo policy flag to disable default change warning instead of all * Add test coverage of existing server attributes policies * Add new default roles in servers policies * Introduce scope\_types in servers Policies * Add missing white spaces between words in log messages * Add test coverage of existing server policies * Fix servers policy for admin\_or\_owner * Pass the actual 
target in flavor access policy * Pass the actual target in quota class policy * Add new default roles in quota class policies * Update compute rpc version alias for ussuri * Add new default roles in server group policies * Pass the actual target in flavor extra specs policy * Add new default roles in flavor extra specs policies * Introduce scope\_types in flavor extra spec policy * Add test coverage of existing flavor extra spec policies * Add new default roles in quota sets policies * Introduce scope\_types in quota set Policies * Add test coverage of existing quota sets policies * fix scsi disk unit number of the attaching volume when cdrom bus is scsi * Use placement stable version for functional job * doc: mark the max microversion for ussuri * doc: Fix term mismatch warnings in glossary * Pass the actual target in server external events policy * Pass the actual target in server group policy * Introduce scope\_types in quota class Policies * Add test coverage of existing quota class policies * Add new default roles in server external events policies * Pass the target in os-services APIs policy * Add new default roles in os-evacuate policies * Pass allocations to virt drivers when resizing * [Trivial] FUP: addressed comments in support non-admin filter instances * Pass the actual target in keypairs policy * Add new default roles in keypairs policies * Introduce scope\_types in keypairs * Add test coverage of existing keypairs policies * Add new default roles in shelve server policies * Introduce scope\_types in shelve server * Add test coverage of existing shelve policies * libvirt: Change UEFI check to handle AArch64 better * Functional test with pGPUs * Support different vGPU types per pGPU * libvirt: Calculate disk\_over\_committed for raw instances * fup: Add missing docstrings from get\_rescue\_device|bus diskinfo funcs * Temporarily skip TestNovaMigrationsMySQL * api: Allow custom traits * fup: Remove the use of the term \`unstable rescue\` INFO logs * fup: Combine SUPPORTED\_DEVICE\_BUS and SUPPORTED\_STORAGE\_BUSES * libvirt: Break up get\_disk\_mapping within blockinfo * libvirt: Support boot from volume stable device instance rescue * compute: Extract \_get\_bdm\_image\_metadata into nova.utils * api: Introduce microverion 2.87 allowing boot from volume rescue * compute: Report COMPUTE\_RESCUE\_BFV and check during rescue * libvirt: Add support for stable device rescue * virt: Provide block\_device\_info during rescue * Pass the actual target in os-aggregates policy * Add new default roles in os-aggregates policies * Pass the actual target in os-console-auth-tokens policy * Add new default roles in os-console-auth-tokens policies * Add new default roles in tenant tenant usage policies * FUP: add missing test for PUT volume attachments API * Reset the cell cache for database access in Service * Add new default roles in server password policies * Follow-up for flavor-extra-spec-validators series * docs: Add documentation for flavor extra specs * api: Add microversion for extra spec validation * Drop concept of '?validation' parameter * api: Add support for new cyborg extra specs * api: Add framework for extra spec validation * Convert delete\_on\_termination from string to boolean * Separate update and swap volume policies * Introduce scope\_types in server topology * Provide the parent pGPU when creating a new vGPU * Add new default roles in server topology policies * Add test coverage of existing server topology policies * fup: Add removal TODOs for disable\_native\_luksv1 and 
rbd\_volume\_local\_attach * Support live migration with vpmem * partial support for live migration with specific resources * Correct server topology policy check\_str * Correct server shelve policy check\_str * Add new default roles in server tags policies * Introduce scope\_types in server tags policy * Add test coverage of existing server tags policies * Fix server tags policy to be admin\_or\_owner * workarounds: Add option to locally attach RBD volumes on compute hosts * workarounds: Add option to disable native LUKSv1 decryption by QEMU * Fix new context comparison workaround in base tests class * Disable the policy warning temporary * Pass the actual target in os-flavor-manage policy * Add new default roles in os-flavor\_manage policies * Introduce scope\_types in os-flavor-manage * Pass the actual target in server migration policy * Add new default roles in server migration policies * Introduce scope\_types in server migration * Add test coverage of existing server migrations policies * Add test coverage of existing flavor\_manage policies * Introduce scope\_types in simple tenant usage * Add new default roles in suspend server policies * Introduce scope\_types in suspend server * Add test coverage of existing suspend server policies * Fix resume server policy to be admin\_or\_owner * Add test coverage of existing simple tenant usage policies * Introduce scope\_types in server password policy * Add test coverage of existing server password policies * Add new default roles in server metadata policies * Introduce scope\_types in server metadata * Add test coverage of existing server metadata policies * Fix server metadata policy to be admin\_or\_owner * Fix server password policy to be admin\_or\_owner * Add new default roles in security group policies * Allow versioned discovery unauthenticated * Repro bug 1845530: versioned discovery is authed * Stabilize functional tests * Add release notes for Cyborg-Nova integration * Introduce scope\_types in server group policy * Add test coverage of existing server group policies * Introduce scope\_types in server external events * Pass the actual target in limits policy * Add new default roles in limits policies * Introduce scope\_types in limits policy * Add test coverage of existing server external events policies * Introduce scope\_types in security groups policy * Add test coverage of existing security groups policies * Correct security groups policy check\_str * Pass the actual target in server diagnostics policy * Add test coverage of existing limits policies * Support for nova-manage placement heal\_allocations --cell * Allow PUT volume attachments API to modify delete\_on\_termination * Fix assertEqual param order in Accelerator tests * Add new default roles in server diagnostics policies * Introduce scope\_types in server diagnostics * Add test coverage of existing server diagnostics policies * Add new default roles in remote console policies * Combine the limits policies in single place * libvirt: Remove QEMU\_VERSION\_REQ\_SHARED * images: Remove Libvirt specific configurable use from qemu\_img\_info * libvirt: Always provide the size in bytes when calling virDomainBlockResize * Add new default roles in rescue server policies * Introduce scope\_types in rescue server policy * Add test coverage of existing rescue policies * Introduce scope\_types in remote consoles policy * Add test coverage of existing remote console policies * Pass the actual target in unlock override policy * Pass the actual target in migrate server policy * Add 
new default roles in migrate server policies * Introduce scope\_types in migrate server * Add info about affinity requests to the troubleshooting doc * Add new default roles in lock server policies * Pass the actual target in migrations policy * Add new default roles in migrations policies * Add new default roles in pause server policies * Introduce scope\_types in pause server policy * Add test coverage of existing pause server policies * Add test coverage of existing lock server policies * Add cyborg tempest job * Block unsupported instance operations with accelerators * Bump compute rpcapi version and reduce Cyborg calls * Fix unpause server policy to be admin\_or\_owner * Introduce scope\_types in list migrations * Add test coverage of existing migrations policies * Add test coverage of existing migrate server policies * Correct limits policy check\_str * Pass the actual target in os-hypervisors policy * Introduce scope\_types in os-hypervisors * Add test coverage of existing hypervisors policies * Pass the actual target in os-agents policy * Add new default roles in os-hypervisors policies * Add new default roles in os-agents policies * Fix unlock server policy to be admin\_or\_owner * Pass the actual target in os-instance-usage-audit-log policy * Add new default roles in os-instance-usage-audit-log policies * FUP for Add a placement audit command * Add instance actions v284 samples test * Add new default roles in os-ips policies * Introduce scope\_types in os-ips * Add test coverage of existing ips policies * Fix os-ips policy to be admin\_or\_owner * Enable and use COMPUTE\_ACCELERATORS trait * Expose instance action event details out of the API * Add default cpu model for AArch64 * Introduce scope\_types in os-instance-usage-audit-log * Add test coverage of existing instance usage log policies * libvirt: Use virDomainBlockCopy to swap volumes when using -blockdev * [Community goal] Update contributor documentation * Enable start/stop of instances with accelerators * Enable hard/soft reboot with accelerators * Delete ARQs for an instance when the instance is deleted * Add transform\_image\_metadata request filter * libvirt: Use domain capabilities to get supported device models * tests: work around malformed serial XML * func tests: move \_run\_periodics() into base class * [Trivial] fixing some nits in instance actions policy tests * libvirt: Remove VIR\_DOMAIN\_BLOCK\_REBASE\_RELATIVE flag check * Compose accelerator PCI devices into domain XML in libvirt driver * Pass accelerator requests to each virt driver from compute manager * Create and bind Cyborg ARQs * Add Cyborg device profile groups to request spec * ksa auth conf and client for Cyborg access * nova-live-migration: Only stop n-cpu and q-agt during evacuation testing * Store instance action event exc\_val fault details * Make serialize\_args handle exception messages safely * libvirt: Fix unit test error block info on non x86 architecture * Add config option for neutron client retries * nova-live-migration: Ensure subnode is fenced during evacuation testing * Add new default roles in os-instance-actions policies * Add new default roles in os-flavor-access policies * Add service version check for evacuate with qos * Add service version check for live migrate with qos * Enable unshelve with qos ports * Support unshelve with qos ports * Bump python-subunit minimum to 1.4.0 * Introduce scope\_types in os-flavor-access * Add test coverage of existing flavor\_access policies * Switching new default roles in 
os-volumes-attachments policies * bug-fix: Reject live migration with vpmem * Refine and introduce correct parameters for test\_get\_guest\_config\_numa\_host\_instance\_topo\_cpu\_pinning * Ensures that COMPUTE\_RESOURCE\_SEMAPHORE usage is fair * Follow-ups for host\_status:unknown-only policy rule * Fix intermittently failing regression case * nova-live-migration: Wait for n-cpu services to come up after configuring Ceph * libvirt: Use oslo.utils >= 4.1.0 to fetch format-specific image data * libvirt: Correctly resize encrypted LUKSv1 volumes * virt: Pass request context to extend\_volume * images: Allow the output format of qemu-img info to be controlled * images: Move qemu-img info calls into privsep * Non-Admin user can filter their instances by more filters * Cleanup test for system reader and reader\_or\_owner rules * vif: Remove dead code * Run sdk functional tests on nova changes * Deprecate the vmwareapi driver * Use fair locks in resource tracker * trivial: Use 'from foo import bar' * libvirt: don't log error if guest gone during interface detach * [Trivial] Fix code comment of admin password tests * nit: Fix NOTE error of fatal=False * Lowercase ironic driver hash ring and ignore case in cache * Add new default roles in os-atttach-inerfaces policies * trivial: Rename directory for os-keypairs samples * Fix os-keypairs pagination links * Introduce scope\_types in os-instance-action policy * Validate id as integer for os-aggregates * Introduce scope\_types in os-aggregates policy * Introduce scope\_types in os-volumes-attachments policy * Add test coverage of existing os-volumes-attachments policies * Fix os-volumes-attachments policy to be admin\_or\_owner * Catch exception when use invalid architecture of image * Introduce scope\_types in os-create-backup * Add test coverage of existing create\_backup policies * Fix os-create-backup policy to be admin\_or\_owner * Introduce scope\_types in os-console-output * Add test coverage of existing console\_output policies * Introduce scope\_types in os-deferred\_delete * Add a tests to check when legacy access is removed * Add new default roles in os-admin-password policies * Introduce scope\_types in os-admin-password * Add test coverage of existing os-instance-actions policies * Correct the actual target in os-instance-actions policy * Add new default roles in os-create-backup policies * Add new default roles in os-console-output policies * Add new default roles in os-deferred\_delete policies * Fix os-console-output policy to be admin\_or\_owner * Stop using PlacementDirect * Introduce scope\_types in os-attach-interfaces * Add test coverage of existing attach\_interfaces policies * Introduce scope\_types in os-console-auth-tokens * Remove oslo\_db.sqlalchemy.compat reference * libvirt: Remove native LUKS compat code * hyper-v: update support matrix * functional: Avoid race and fix use of self.api within test\_bug\_1831771 * Add test coverage of existing deferred\_delete policies * Fix os-os-deferred-delete policy to be admin\_or\_owner * Remove old policy enforcement in attach\_interfaces * Introduce scope\_types in os-agents policy * Add test coverage of existing os-console-auth-tokens policies * Pass the actual target in os-availability-zone policy * Ensure we pass a target in admin actions * Fix two test cases that use side effects in comprehensions * Add new default roles in Admin Action API policies * Pass the actual target in os-assisted\_volume\_snapshots policy * Add new default roles in os-assisted\_volume\_snapshots 
policies * Introduce scope\_types in os-assisted\_volume\_snapshots policy * Add test coverage of existing os-assisted\_volume\_snapshots policies * Fix os-attach-interfaces policy to be admin\_or\_owner * Add test coverage of existing os-agents policies * Define Cyborg ARQ binding notification event * Fix H702 pep8 error with latest hacking * libvirt: Provide the backing file format when creating qcow2 disks * Unplug VIFs as part of cleanup of networks * Name Enums * Remove unnecessary parentheses * Functional test for UnexpectedDeletingTaskStateError * Avoid allocation leak when deleting instance stuck in BUILD * Fix hypervisors paginted collection\_name * Enforce os-traits/SUPPORTED\_STORAGE\_BUSES sync * libvirt: Report storage bus traits * trivial: Update '\_get\_foo\_traits' docstrings * Follow-up: Add delete\_on\_termination to volume-attach API * libvirt: Check the guest support UEFI * Avoid PlacementFixture silently swallowing kwargs * trivial: Use recognized extra specs in tests * Use tempest-full-py3 as base job * docs: Improve documentation on writing custom scheduler filters * conf: Deprecate '[scheduler] driver' * trivial: Remove FakeScheduler * nova-net: Remove unused parameters * nova-net: Remove unused nova-network objects * nova-net: Remove unnecessary exception handling, mocks * Remove 'nova.image.api' module * Introduce scope\_types in os-evacuate * Add test coverage of existing evacuate policies * Reject boot request for unsupported images * Absolutely-non-inheritable image properties * Add JSON schema and test for network\_data.json * Support large network queries towards neutron * Add new default roles in os-availability-zone policies * Introduce scope\_types in os-availability-zone * Add test coverage of existing availability-zone policies * Correct os-availability-zone policy check\_str * Monkey patch original current\_thread \_active * Allow TLS ciphers/protocols to be configurable for console proxies * Skip to run all integration jobs for policy-only changes * set default value to 0 instead of '' * Clean up allocation if unshelve fails due to neutron * Add test coverage of existing os-aggregates policies * Reproduce bug 1862633 * Add test coverage of existing admin\_password policies * Fix instance.hidden migration and querying * Remove universal wheel configuration * trivial: Remove 'run\_once' helper * trivial: Merge unnecessary 'NovaProxyRequestHandlerBase' separation * libvirt: Rename \_is\_storage\_shared\_with to \_is\_path\_shared\_with * Don't error out on floating IPs without associated ports * Deprecate base rules in favor of new rules * trivial: Bump minimum version of websockify * trivial: Fetch 'Service' objects once when building AZs * trivial: Remove unused 'cache\_utils' APIs * remove DISTINCT ON SQL instruction that does nothing on MySQL * Minor improvements to cell commands * Avoid calling neutron for N networks * Handle neutron without the fip-port-details extension * Add retry to cinder API calls related to volume detach * Handle unset 'connection\_info' * Enable live migration with qos ports * Use common server create function for qos func tests * Remove extra instance.save() calls related to qos SRIOV ports * docs: Fix the monkeypatching of blockdiag * tests: Validate huge pages * Recalculate 'RequestSpec.numa\_topology' on resize * Add a placement audit command * Use COMPUTE\_SAME\_HOST\_COLD\_MIGRATE trait during migrate * Make RBD imagebackend flatten method idempotent * Avoid fetching metadata when no subnets found * zuul: Add Fedora 
based jobs to the experimental queue * libvirt: Add a default VirtIO-RNG device to guests * Remove remaining Python 2.7-only dependencies * nova-net: Update API reference guide * Func test for failed and aborted live migration * functional: Stop setting Flavor.id * Remove unused code * functional: Add '\_create\_server' helper * Make removal of host from aggregate consistent * Clarify fitting hugepages log message * Add ironic hypervisor doc * Fix typos for update\_available\_resource reference * nova-net: Remove layer of indirection in 'nova.network' * nova-net: Remove unnecessary 'neutronv2' prefixes * nova-net: Remove unused exceptions * functional: Add '\_delete\_server' to 'InstanceHelperMixin' * functional: Add unified '\_(build|create)\_flavor' helper functions * functional: Add unified '\_build\_server' helper function * nova-net: Kill it * Add NovaEphemeralObject class for non-persistent objects * pre-commit: Use Python 3 to run checks * nova-net: Remove now unnecessary nova-net workaround * Add a workaround config toggle to refuse ceph image upload * Fix typos in nova doc * doc: define boot from volume in the glossary * Update Testing NUMA documentation * nova-net: Remove dependency on nova-net from fake cache * nova-net: Add TODOs to remove security group-related objects * nova-net: Remove 'MetadataManager' * nova-net: Remove final references to nova-network * nova-net: Copy shared utils from nova-net module * nova-net: Remove firewall support (pt. 3) * Use Placement 1.35 (root\_required) * Fix the suppress of policy deprecation warnings * Fix excessive runtime of test test\_migrate\_within\_cell * libvirt: avoid cpu check at s390x arch * downgrade when host does not support capabilities * nova-net: Remove firewall support (pt. 2) * nova-net: Remove firewall support (pt. 
1) * Report trait 'COMPUTE\_IMAGE\_TYPE\_PLOOP' * Fix duplicated words issue like "during during boot time" * Add missing parameter vdi\_uuid in log message * [Trivial]Fix typo instnace * Handle cell failures in get\_compute\_nodes\_by\_host\_or\_node * Fix an invalid assertIsNotNone statement * Add description of live\_migration\_timeout\_action option * [api-ref] Fix the incorrect link * FUP to Iff8194c868580facb1cc81b5567d66d4093c5274 * FUP for docs nits in cross-cell-resize series * Use graceful\_exit=True in ComputeTaskManager.revert\_snapshot\_based\_resize * Plumb graceful\_exit through to EventReporter * Fix accumulated non-docs nits for cross-cell-resize series * Add cross-cell resize tests for \_poll\_unconfirmed\_resizes * Implement cleanup\_instance\_network\_on\_host for neutron API * Simplify FinishResizeAtDestTask event handling * Add sequence diagrams for cross-cell-resize * Flesh out docs for cross-cell resize/cold migrate * Enable cross-cell resize in the nova-multi-cell job * Add cross-cell resize policy rule and enable in API * Remove 'nova-xvpvncproxy' * Print help if nova-manage subcommand is not specified * FakeDriver: adding and removing instances on live migration * docs: Add note about an image signature validation limitation when using rbd * Add api for instance action details * FUP for in-place numa rebuild * Ensure source service is up before resizing/migrating * Fix race in test\_create\_servers\_with\_vpmem * Move common test method up to base class * Func test for qos live migration reschedule * Fix get\_request\_group\_mapping doc * Support live migration with qos ports * Zuul v3: use devstack-plugin-nfs-tempest-full * Add recreate test for bug 1855927 * FUP: Remove noqa and tone down an exception * nova-net: Correct some broken VIF tests * nova-net: Remove nova-network security group driver * nova-net: Remove 'is\_neutron\_security\_groups' function * nova-net: Convert remaining unit tests to neutron * Use reasonable name for provider mapping * DRY: Build ImageMetaPropsPayload from ImageMetaProps * api-ref: avoid mushy wording around server.image description * Sync ImageMetaPropsPayload fields * Move \_update\_pci\_request\_spec\_with\_allocated\_interface\_name * Revert "(Temporarily) readd bare support for py27" * db: Remove unused ec2 DB APIs * Create instance action when burying in cell0 * Do not reschedule on ExternalNetworkAttachForbidden * libvirt: flatten rbd image during cross-cell move spawn at dest * Support cross-cell moves in external\_instance\_event * Add functional test for anti-affinity cross-cell migration * Add test\_resize\_cross\_cell\_weigher\_filtered\_to\_target\_cell\_by\_spec * Add CrossCellWeigher * Add archive\_deleted\_rows wrinkle to cross-cell functional test * Confirm cross-cell resize while deleting a server * Refresh target cell instance after finish\_snapshot\_based\_resize\_at\_dest * Add functional cross-cell revert test with detached volume * Revert cross-cell resize from the API * Add revert\_snapshot\_based\_resize conductor RPC method * Flesh out RevertResizeTask.rollback * Add RevertResizeTask * Add finish\_revert\_snapshot\_based\_resize\_at\_source compute method * Deal with cross-cell resize in \_remove\_deleted\_instances\_allocations * Add revert\_snapshot\_based\_resize\_at\_dest compute method * Confirm cross-cell resize from the API * Add confirm\_snapshot\_based\_resize conductor RPC method * Follow up to I5b9d41ef34385689d8da9b3962a1eac759eddf6a * Don't hardcode Python versions in test * Keep pre-commit 
inline with hacking and fix whitespace * Move \_get\_request\_group\_mapping() to RequestSpec * trivial: Remove dead code * nova-net: Remove db methods for ProviderMethod * nova-net: Remove unused 'stub\_out\_db\_network\_api' * Add resource provider allocation unset example to troubleshooting doc * trivial: Resolve (most) flake8 3.x issues * Add troubleshooting doc about rebuilding the placement db * support pci numa affinity policies in flavor and image * Do not mock setup net and migrate inst in NeutronFixture * Extend NeutronFixture to handle multiple bindings * Revert "nova shared storage: rbd is always shared storage" * nova-net: Convert remaining API tests to use neutron * nova-net: Drop nova-network-base security group tests * Create a controller for qga when SEV is used * Also enable iommu for virtio controllers and video in libvirt * Switch to uses\_virtio to enable iommu driver for AMD SEV * libvirt: Remove MIN\_{LIBVIRT,QEMU}\_FILE\_BACKED\_VERSION * libvirt: Remove MIN\_QEMU\_FILE\_BACKED\_DISCARD\_VERSION * Optimization for nova-api \_checks\_for\_create\_and\_rebuild * Disable NUMATopologyFilter on rebuild * Nix os-server-external-events 404 condition * Add ConfirmResizeTask * Imported Translations from Zanata * Fix Typo mistake in documentation of "host aggregates in nova" * Remove dead code from MigrationTask.\_execute * Restore test\_minbw\_allocation\_placement in nova-next job * Use provider mappings from Placement (mostly) * Remove dict compat from populate\_filter\_properties * Remove now invalid cells v1 comments from conductor code * functional: Make '\_IntegratedTestBase' subclass 'InstanceHelperMixin' * functional: Remove 'api' parameter * functional: Remove 'get\_invalid\_image' * functional: Unify '\_build\_minimal\_create\_server\_request' implementations * functional: Unify '\_wait\_until\_deleted' implementations * Fup for I63c1109dcdb9132cdbc41010654c5fdb31a4fe31 * Block rebuild when NUMA topology changed * Tie requester\_id to RequestGroup suffix * refactor: RequestGroup.is\_empty() and .strip\_zeros() * Use Placement 1.34 (string suffixes & mappings) * nova-net: Remove SG tests that don't apply to neutron * Skip test\_minbw\_allocation\_placement in nova-next job * Skip cpu comparison on AArch64 * Introduce scope\_types in Admin Actions * Add test coverage of existing admin\_actions policies * Handle ServiceNotFound in DbDriver.\_report\_state * Remove unused rootwrap filters * Add new default roles in os-services API policies * Add QoS tempest config so bw tests run * nova-net: Remove use of legacy 'SecurityGroup' object * Cache security group driver * nova-net: Remove use of legacy 'Network' object * nova-net: Remove use of legacy 'FloatingIP' object * libvirt: Remove MIN\_LIBVIRT\_KVM\_AARCH64\_VERSION * Extend NeutronFixture to allow live migration with ports * Make the binding:profile handling consistent in NeutronFixture * VMware: disk\_io\_limits settings are not reflected when resize * api-guide: flesh out the server actions section * nova-net: Remove remaining nova-network quotas * docs: Clarify configuration steps for PF devices * add [libvirt]/max\_queues config option * Add a way to exit early from a wait\_for\_instance\_event() * Reusable RequestGroup.add\_{resource|trait} * Process requested\_resources in ResourceRequest init * nova-net: Flatten class hierarchy for neutron SG tests * xenapi: Remove vestigial nova-network support * zvm: Remove vestigial nova-network support * vmware: Remove vestigial nova-network support * hyperv: Remove 
vestigial nova-network support * libvirt: Remove vestigial nova-network support * libvirt: Remove 'enable\_hairpin' * nova-net: Remove final references to nova-net from functional tests * docs: Blast final references to nova-network * nova-net: Remove references to nova-net service from tests * Follow up I18d73212f9d98bc75974a024cf6fd872fdfb1ca4 * nova-net: Make the security group API a module * requirements: Limit hacking to one minor version * Switch to hacking 2.x * Integrate 'pre-commit' * nova-net: Remove associate, disassociate network APIs * docs: Blast most references to nova-network * Mask the token used to allow access to consoles * nova-net: Remove 'nova-network' binary * Suppress policy deprecated warnings in tests * Add new default rules and mapping in policy base class * Add confirm\_snapshot\_based\_resize\_at\_source compute method * Add negative test for prep\_snapshot\_based\_resize\_at\_source failing * Add negative test for cross-cell finish\_resize failing * compute: Use long\_rpc\_timeout in reserve\_block\_device\_name * Fix incorrect command examples * Introduce scope\_types in os-services * Add test coverage of existing os-services policies * nova-net: Remove 'nova-dhcpbridge' binary * api-guide: remove empty sections about inter-service interactions * doc: remove admin/manage-users * api-guide: flesh out todos in user doc * api-guide: flesh out networking concepts * api-guide: flesh out flavor extra specs and image properties * Remove nova-manage network, floating commands * docs: Rewrite quotas documentation * test cleanup: Make base TestCase subclass oslotest * api-guide: fix the file injection considerations drift * api-guide: flesh out BUILD and ACTIVE server create transitions * Add sequence diagrams to resize/cold migrate contrib doc * Add contributor doc for resize and cold migrate * nova-net: Remove 'networks' quota * Remove 'nova-console' service, 'console' RPC API * Remove 'os-consoles' API * nova-net: Remove 'USE\_NEUTRON' from functional tests * Remove '/os-tenant-networks' REST API * compute: Take an instance.uuid lock when rebooting * Do not update root\_device\_name during guest config * block\_device: Copy original volume\_type when missing for snapshot based volumes * ZVM: Implement update\_provider\_tree * Avoid spurious error logging in \_get\_compute\_nodes\_in\_db * libvirt: Bump MIN\_{LIBVIRT,QEMU}\_VERSION for "Ussuri" * Pick NEXT\_MIN libvirt/QEMU versions for "V" release * Force config drive in nova-next multinode job * Specify what RPs \_ensure\_resource\_provider collects * zuul: Remove unnecessary 'USE\_PYTHON3' * zuul: Remove unnecessary 'tox\_install\_siblings' * Add zones wrinkle to TestMultiCellMigrate * Validate image/create during cross-cell resize functional testing * Handle target host cross-cell cold migration in conductor * Start README.rst with a better title * Don't delete compute node, when deleting service other than nova-compute * Drop neutron-grenade-multinode job * FUP to Ie1a0cbd82a617dbcc15729647218ac3e9cd0e5a9 * (Temporarily) readd bare support for py27 * functional: Make '\_wait\_for\_state\_change' behave consistently * Remove (most) '/os-networks' REST APIs * nova-net: Remove unused '\*\_default\_rules' security group DB APIs * Remove 'os-security-group-default-rules' REST API * nova-net: Add TODOs for remaining nova-network functional tests * zuul: Make functional job inherit from openstack parents * Stop testing Python 2 * doc: mention that rescuing a volume-backed server is not supported * Use wrapper class for 
NeutronFixture get\_client * docs: Strip '.rst' suffix * docs: Replacing underscores with dashes * docs: Remove 'adv-config', 'system-admin' subdocs * functional: Rework '\_delete\_server' * docs: Extract rescue from reboot * functional: Change order of two classes * Remove duplicate ServerMovingTests.\_resize\_and\_check\_allocations * docs: Change order of PCI configuration steps * Reset vm\_state to original value if rebuild claim fails * Block deleting compute services with in-progress migrations * Add functional recreate revert resize test for bug 1852610 * Add functional recreate test for bug 1852610 * Convert legacy nova-live-migration and nova-multinode-grenade to py3 * docs: update SUSPENDED server status wrt supported drivers * api-ref: mark device response param as optional for list/show vol attachments * doc: add troubleshooting guide for cleaning up orphaned allocations * Remove functional test specific nova code * "SUSPENDED" description changed in server\_concepts guide and API REF * Add image caching to the support matrix * Consolidate [image\_cache] conf options * Fix review link * api-ref: re-work migrate action post-conditions * Use named kwargs in compute.API.resize * Start functional testing for cross-cell resize * Filter duplicates from compute API get\_migrations\_sorted() * Make API always RPC cast to conductor for resize/migrate * Abort live-migration during instance\_init * Helper to start computes with different HostInfos * Remove unused CannotMigrateWithTargetHost * Remove TODO from ComputeTaskManager.\_live\_migrate * Fix driver tests on Windows * Remove TODOs around claim\_resources\_on\_destination * Resolve TODO in \_remove\_host\_allocations * Remove service\_uuids\_online\_data\_migration * FUP for Ib62ac0b692eb92a2ed364ec9f486ded05def39ad * Replace time.sleep(10) with service forced\_down in tests * Remove get\_minimum\_version mocks from test\_resource\_tracker * Move compute\_node\_to\_inventory\_dict to test-only code * Delete \_normalize\_inventory\_from\_cn\_obj * Drop compat for non-update\_provider\_tree code paths * Implement update\_provider\_tree for mocked driver in test\_resource\_tracker * Remove now invalid TODO from ComputeManager.\_confirm\_resize * Remove dead HostAPI.service\_delete code * Remove the TODO about using OSC for BFV in test\_evacuate.sh * Remove super old br- neutron network id compat code * Improve error log when snapshot fails * Remove unused 'nova-dsvm-base' job * Use ListOfUUIDField from oslo.versionedobjects * Use admin neutron client to see if instance has qos ports * Use admin neutron client to gather port resource requests * Use admin neutron client to query ports for binding * Revert "openstack server create" to "nova boot" in nova docs * Move rng device checks to the appropriate method * Improve metadata server performance with large security groups * Plumb allow\_cross\_cell\_resize into compute API resize() * Refresh instance in MigrationTask.execute Exception handler * Execute CrossCellMigrationTask from MigrationTask * Provide a better error when \_verify\_response hits a TypeError * libvirt: check job status for VIR\_DOMAIN\_EVENT\_SUSPENDED\_MIGRATED event * cond: rename 'recreate' var to 'evacuate' * Pass exception through TaskBase.rollback * Follow up to I3e28c0163dc14dacf847c5a69730ba2e29650370 * Log reason for remove\_host action failing * Remove PlacementAPIConnectFailure handling from AggregateAPI * Add FinishResizeAtDestTask * Add finish\_snapshot\_based\_resize\_at\_dest compute method * Document CD 
mentality policy for nova contributors * doc: link to nova code review guide from dev policies * Use long\_rpc\_timeout in conductor migrate\_server RPC API call * Default AZ for instance if cross\_az\_attach=False and checking from API * Add functional test for two-cell scheduler behaviors * Deprecate [glance]api\_servers * Avoid error 500 on shelve task\_state race * Only allow one scheduler service in tests * Nova compute: add in log exception to help debug failures * Add support matrix for Delete (Abort) on-going live migration * Fix race in test\_vcpu\_to\_pcpu\_reshape * api-ref: re-work resize action post-conditions * Add known limitation about resize not resizing ephemeral disks * Reset instance to current vm\_state if rolling back in resize\_instance * Pass RequestContext to oslo\_policy * Add Aggregate image caching progress notifications * Remove dead set\_admin\_password code to generate password * Log some stats for image pre-cache * Switch to devstack-plugin-ceph-tempest-py3 for ceph * Add new policy rule for viewing host status UNKNOWN * Fix policy doc for host\_status and extended servers attribute * Add notification sample test for aggregate.cache\_images.start|end * Stop building docs with (test-)requirements.txt * Enable evacuation with qos ports * Allow evacuating server with port resource request * Make nova-next multinode and drop tempest-slow-py3 * libvirt: Ignore volume exceptions during post\_live\_migration * Stop converting Migration objects to dicts for migrate\_instance\_start * Require Migration object arg to migrate\_instance\_finish method * Add image precaching docs for aggregates * Remove fixed sqlalchemy-migrate deprecation warning filters * doc: note the need to configure cinder auth in reclaim\_instance\_interval * Fix listing deleted servers with a marker * Add functional regression test for bug 1849409 * Added openssh-client into bindep * Revert "Log CellTimeout traceback in scatter\_gather\_cells" * Adds view builders for keypairs controller * [Trivial] Add missing ws between words * Revert "vif: Resolve a TODO and update another" * Don't populate resources for not-yet-migrated inst * Func: bug 1849165: mig race with \_populate\_assigned\_resources * Join migration\_context and flavor in Migration.instance * Always trait the compute node RP with COMPUTE\_NODE * Fix ItemMatcher to avoid false positives * ItemsMatcher: mock call list arg in any order * Refactor rebuild\_instance * Make sure tox install requirements.txt with upper-constraints * Move Destination object tests to their own test class * Switch to opensuse-15 nodeset * Add compute side revert allocation test for bug 1848343 * Add live migration recreate test for bug 1848343 * Set instance CPU policy to 'share' through image property * Add functional recreate test for bug 1848343 * Fix up some feedback on image precache support * Add image caching API for aggregates * Add PrepResizeAtSourceTask * Add prep\_snapshot\_based\_resize\_at\_source compute method * Add PrepResizeAtDestTask * Remove compute compat checks for aborting queued live migrations * cleanup to objects.fields * Remove redundant call to get/create default security group * Fix legacy issues in filter migrations by user\_id/project\_id * Add cache\_images() to conductor * Filter migrations by user\_id/project\_id * Stop using NoAuthMiddleware in tests * Add prep\_snapshot\_based\_resize\_at\_dest compute method * Update compute rpc version alias for train * Add regression test for bug 1824435 * setup.cfg: Cleanup * nova-net: Use 
deepcopy on value returned by NeutronFixture * Avoid using image with kernel in BDM large request func test * libvirt: Change \_compare\_cpu to raise InvalidCPUInfo * Fix unit of hw\_rng:rate\_period * api-guide: Fix available info in handling down cells * Add cache\_image() support to the compute/{rpcapi,api,manager} * Add cache\_image() driver method and libvirt implementation * Fix exception translation when creating volume * Deprecate [api]auth\_strategy and noauth2 * Add support for cloud-init on LXC instances * Cache image GETs for multi-create/multi-BDM requests * Add boot from volume functional test with a huge request * nova-net: Migrate 'test\_floating\_ips' functional tests * fixtures: Add support for security groups * Remove Stein compute compat checks for volume type support * Remove dead reserve\_volume compat code in \_validate\_bdm * doc: link to user/index from main home page * doc: link to user/availability-zones from user home page * docs: Add redirects for '/user/aggregates' * Skip functional test jobs for doc redirect changes * doc: fix formatting in mitigation-for-Intel-MDS-security-flaws * nova-net: Make even more nova-net stuff optional * Pull up compute node queries to init\_host * Refine comments about move\_allocations * compute: refactor volume bdm rollback error handling * Remove @safe\_connect from put\_allocations * doc: Improve PDF document structure * [Gate fix] Avoid use cell\_uuid before assignment * Remove workaround for bug #1709118 * docs: Rewrite host aggregate, availability zone docs * Avoid raise InstanceNotRunning exception * Update contributor guide for Ussuri * api-ref: Fix security groups parameters * trivial: Remove unused API sample template * trivial: Make it obvious where we're getting our names from * nova-net: Stop mocking the instance network cache * trivial: Change name of network provided by NeutronFixture * fixtures: Store 'device\_id' when creating port in NeutronFixture * fixtures: Handle iterable params for 'NeutronFixture.list\_\*' * fixtures: Beef up NeutronFixture * trivial: Neutron fixture cleanup * nova-net: Migrate 'test\_simple\_tenant\_usage' functional tests * Filter out alembic logs below WARNING in tests * Remove Rocky compute compat checks for live migration with port bindings * nova-net: Migrate 'test\_attach\_interfaces' functional tests * nova-net: Migrate 'test\_hypervisors' functional tests * nova-net: Migrate 'test\_rescue' functional tests * nova-net: Migrate 'test\_hosts' functional tests * nova-net: Migrate 'test\_servers' functional tests * nova-net: Migrate 'test\_server\_tags' functional tests * tests: Correctly mock out security groups in NeutronFixture * nova-net: Migrate 'test\_quota\_sets' functional tests * nova-net: Migrate 'test\_floating\_ip\_pools' functional tests * nova-net: Migrate 'test\_availability\_zone' functional tests * FUP to I4d181b44494f3b0b04537d5798537831c8fdf400 * FUP to I30916d8d10d70ce25523fa4961007cedbdfe8ad7 * Add reserved schema migrations for Ussuri * Restore console proxy deployment info to cells v2 layout doc * Update cells v2 up-call caveats doc * Set Instance AZ from Selection AZ during migrate reschedule * Set Instance AZ from Selection AZ during build reschedule * Add Selection.availability\_zone field * Add functional regression test for migrate part of bug 1781286 * docs: Remove a whole load of unused images, most remainder * nova-net: Remove explicit 'USE\_NEUTRON = True' * nova-net: Use nova-net explicitly in functional tests * Test heal port allocations in nova-next * 
Do not print default dicts during heal\_allocations * Add functional regression test for build part of bug 1781286 * Handle get\_host\_availability\_zone error during reschedule * libvirt: Ignore DiskNotFound during update\_available\_resource * make virtual pmem feature compatible with python3 * Replace 'fake' with a real project ID * test cleanup: Use oslotest's CaptureOutput fixture * test cleanup: Use oslotest's Timeout fixture * test cleanup: Remove skipIf test decorator * api: Remove 'Debug' middleware * ec2: Move ec2utils functions to their callers * Reduce scope of 'path' query parameter to noVNC consoles * Add TODO note for mox removal * conf: Remove deprecated 'project\_id\_regex' opt * tox: Stop overriding the 'install\_command' * tox: Use common 'command' definition for unit tests * Add functional tests for virtual persistent memory * Update master for stable/train * Reset forced\_destination before migration at a proper time * Functional reproduction for bug 1845291

20.0.0.0rc1
-----------

* Fix incorrect usages of fake moref in VMware tests * doc: attaching virtual persistent memory to guests * Ignore warning from sqlalchemy-migrate * Ignore sqla-migrate inspect.getargspec deprecation warnings on py36 * docs: Update resize doc * docs: Document how to revert, confirm a cold migration * docs: Update CPU topologies guide to reflect the new PCPU world * docs: Clarify everything CPU pinning * VMware VMDK detach: get adapter type from instance VM * Add a prelude for the Train release * Correct link to placement upgrade notes * Move HostNameWeigher to a common fixture * Isolate request spec handling from \_cold\_migrate * Handle legacy request spec dict in ComputeTaskManager.\_cold\_migrate * Stop filtering out 'accepted' for in-progress migrations * Add functional tests for [cinder]/cross\_az\_attach=False * docs: Rework the PCI passthrough guides * docs: Document global options for nova-manage * docs: Correct 'nova-manage db sync' documentation * docs: Note use of 'nova-manage db sync --config-file' * Add missing parameter * Move pre-3.44 Cinder post live migration test to test\_compute\_mgr * nova-net: Migrate some API sample tests off of nova-net * Remove upgrade specific info from user facing exception text * Reject migration with QoS port from conductor if RPC pinned * Log error when volume validation fails during boot from volume * Log CellTimeout traceback in scatter\_gather\_cells * Rename Claims resources to compute\_node * Sanity check instance mapping during scheduling * Remove 'test\_cold\_migrate\_with\_physnet\_fails' test * Error out interrupted builds * Functional reproduction for bug 1844993 * Create volume attachment during boot from volume in compute * Revert "Temporarily skip TestNovaMigrationsMySQL" * Clear instance.launched\_on when build fails * libvirt: Get the CPU model, not 'arch' from get\_capabilities() * Func test for migrate reschedule with pinned compute rpc * libvirt: Enable driver configuring PMEM namespaces * Add evacuate vs rebuild contributor doc * Temporarily skip TestNovaMigrationsMySQL * Remove mox in unit/network/test\_neutronv2.py (22) * Remove mox in unit/network/test\_neutronv2.py (21) * Remove mox in unit/network/test\_neutronv2.py (20) * Remove mox in unit/network/test\_neutronv2.py (19) * Remove mox in unit/network/test\_neutronv2.py (18) * Remove mox in unit/network/test\_neutronv2.py (17) * Remove mox in unit/network/test\_neutronv2.py (16) * Remove mox in unit/network/test\_neutronv2.py (15) * Remove mox in 
unit/network/test\_neutronv2.py (14) * Remove mox in unit/network/test\_neutronv2.py (13) * Add librsvg2\* to bindep * Mark "block\_migration" arg deprecation on pre\_live\_migration method * Refactor pre-live-migration work out of \_do\_live\_migration * make config drives sticky bug 1835822 * Add note about needing noVNC >= v1.1.0 with using ESX * Add func test for 'required' PCI NUMA policy * Trigger real BuildAbortException during migrate with bandwidth * objects: use all\_things\_equal from objects.base * trivial: Use sane indent * Add reshaper for PCPU * libvirt: Mock 'libvirt\_utils.file\_open' properly * fakelibvirt: Make 'Connection.getHostname' unique * Add support for translating CPU policy extra specs, image meta * Include both VCPU and PCPU in core quota count * tests: Additional functional tests for pinned instances * libvirt: Start reporting 'HW\_CPU\_HYPERTHREADING' trait * hardware: Differentiate between shared and dedicated CPUs * objects: Add 'NUMACell.pcpuset' field * Validate CPU config options against running instances * objects: Add 'InstanceNUMATopology.cpu\_pinning' property * libvirt: '\_get\_(v|p)cpu\_total' to '\_get\_(v|p)cpu\_available' * libvirt: Start reporting PCPU inventory to placement * Refactor volume connection cleanup out of \_post\_live\_migration * Remove SchedulerReportClient from AggregateRequestFiltersTest * Remove redundancies from AggregateRequestFiltersTest.setUp * Follow up for the bandwidth series * Centralize volume create code during boot from volume * Use SpawnIsSynchronousFixture in reschedule functional tests * libvirt: stub logging of host capabilities * api-ref: remove mention about os-migrations no longer being extended * Use os-brick locking for volume attach and detach * Follow up for I220fa02ee916728e241503084b14984bab4b0c3b * Fix a misuse of assertGreaterEqual * Add reminder to update corresponding glance docs * Parse vpmem related flavor extra spec * libvirt: Support VM creation with vpmems and vpmems cleanup * libvirt: report VPMEM resources by provider tree * libvirt: Enable driver discovering PMEM namespaces * Claim resources in resource tracker * Retrieve the allocations early * Add resources dict into \_Provider * object: Introduce Resource and ResourceList objs * db: Add resources column in instance\_extra table * VMware: Update flavor-related metadata on resize * doc: mark the max microversion for train * Remove an unused file and a related description * Cleanup reno live-migration-with-PCI-device * Docs for isolated aggregates request filter * Add a new request filter to isolate aggregates * DB API changes to get non-matching aggregates from metadata * Deprecate CONF.workarounds.enable\_numa\_live\_migration * NUMA live migration support * LM: Use Claims to update numa-related XML on the source * New objects for NUMA live migration * libvirt: Correctly handle non-CPU flag traits * Note about Destination.forbidden\_aggregates * Set user\_id/project\_id from context when creating a Migration * Add user\_id and project\_id column to Migration * Skip querying resource request in revert\_resize if no qos port * Follow up for Ib50b6b02208f5bd2972de8a6f8f685c19745514c * Improve dest service level func tests * Extract pf$N literals as constants from func test * Allow resizing server with port resource request * Allow migrating server with port resource request * Support migrating SRIOV port with bandwidth * trivial: Remove single-use classmethod * Add nova-status to man-pages list * Make SRIOV computes non symmetric in func test 
* Func test for migrate re-schedule with bandwidth * Support reverting migration / resize with bandwidth * Use multiple attachments in test\_list\_volume\_attachments * Fix race in \_test\_live\_migration\_force\_complete * Make \_revert\_allocation nested allocation aware * Fix the race in confirm resize func test * Fixing broken links * Improve SEV documentation and other minor tweaks * Enable booting of libvirt guests with AMD SEV memory encryption * Reject live migration and suspend on SEV guests * Apply SEV-specific guest config when SEV is required * Nova object changes for forbidden aggregates request filter * Don't duplicate PlacementFixture in libvirt func tests * doc: Fix a broken reference link * Remove stubs from VolumeAttachmentsSample API sample test * Get pci\_devices from \_list\_devices * Decouple NVMe tests from os-brick * api-ref: fix server topology "host\_numa\_node" field param name * Find instance in another cell during floating IP re-association * Deprecate the XenAPIDriver * Func test for migrate server with ports having resource request * prepare func test env for moving servers with bandwidth * resize: Add bw min service level check of source compute * migrate: Add bw min service level check of source compute * Add min service level check for migrate with bandwidth * Fix incorrect invocation of openstacksdk's baremetal.nodes() * Support reporting multi CPU model traits * Add compatibility checks for CPU mode and CPU models and extra flags * vCPU model selection * Use fields="instance\_uuid" when calling Ironic API * Bump min for oslo.service & .privsep to fix SIGHUP * doc: cleanup references to conductor doc * Remove old comments about caching scheduler compat * Move get\_machine\_type() test to test\_utils.py * Extract fake KVM guest fixture for reuse * Ensure non-q35 machine type is not used when booting with SEV * update allocation in binding profile during migrate * Add delete\_on\_termination to volume-attach API * PDF documentation build * Remove unused methods * Introduce live\_migration\_claim() * unit test: do not fill rp mapping for failed re-schedule * libvirt: Make scheduler filters customizable * Make \_get\_cpu\_feature\_traits() always return a dict * libvirt: Fold in argument to '\_update\_provider\_tree\_for\_vgpu' * objects: Rename 'fields' import to 'obj\_fields' * libvirt: Start checking compute usage in functional tests * libvirt: Simplify 'fakelibvirt.HostInfo' object * Use SDK for setting instance id * Use SDK for validating instance and node * Remove dead code * Tune up db.instance\_get\_all\_uuids\_by\_hosts * libvirt: Fix service-wide pauses caused by un-proxied libvirt calls * Refactor MigrationTask.\_execute * Nice to have test coverage for If1f465112b8e9b0304b8b5b864b985f72168d839 * Use microversion in put allocations in test\_report\_client * trivial: Rewrap definitions of 'NUMACell' * Fix the incorrect powershell command * Add and to config.py * Extract SEV-specific bits on host detection * Add extra spec parameter and image property for memory encryption * re-calculate provider mapping during migration * Add request\_spec to server move RPC calls * Pass network API to the conducor's MigrationTask * allow getting resource request of every bound ports of an instance * Add cold migrate and resize to nova-grenade-multinode * Rename the nova-grenade-live-migration job to nova-grenade-multinode * Indent fake libvirt host capabilities fixtures more nicely * Handle VirtDriverNotReady in \_cleanup\_running\_deleted\_instances * fix lxml 
compatibility issues * libvirt/host.py: remove unnecessary temporary variable * Provide HW\_CPU\_X86\_AMD\_SEV trait when SEV is supported * Add server sub-resource topology API * Use SDK for node.list * Add functional test for AggregateMultiTenancyIsolation + migrate * Add FUP unit test for port heal allocations * Move live\_migration test hooks under gate/ * DRY get\_sdk\_adapter tests * Ensure online migrations have a unique name * trivial: Rename 'nova.tests.unit.test\_nova\_manage' * Follow up for specifying az to unshelve * [Trivial]Remove unused helper should\_switch\_to\_postcopy * [Trivial]Removed unused helper \_extract\_query\_params * [Trivial]Remove unused helper get\_allocated\_disk\_size * Remove unused args from archive\_deleted\_rows calls * [Trivial]Remove unused helper check\_temp\_folder * Update help for image\_cache\_manager\_interval option * Change HostManager to allow scheduling to other cells * Add Destination.allow\_cross\_cell\_move field * Add power\_on kwarg to ComputeDriver.spawn() method * Refactor ComputeManager.remove\_volume\_connection * Add nova.compute.utils.delete\_image * Specify availability\_zone to unshelve * Remove 'hw:cpu\_policy', 'hw:mem\_page\_size' extra specs from API samples * scheduler: Flatten 'ResourceRequest.from\_extra\_specs', 'from\_image\_props' * libvirt: use native AIO mode for StorPool Cinder volumes * Add a "Caveats" section to the eventlet profiling docs * Verify archive\_deleted\_rows --all-cells in post test hook * nova-manage db archive\_deleted\_rows is not multi-cell aware * Avoid error state for recovered instances after failed migrations * Remove descriptions of nonexistent hacking rules * [Trivial]Remove unused helper get\_vm\_ref\_from\_name * [Trivial]Remove unused helper \_get\_min\_service\_version * tests: Split NUMA object tests * Add support for 'initenv' elements * Add test for create server with integer AZ * Trap and log errors from \_update\_inst\_info\_cache\_for\_disassociated\_fip * neutron: refactor nw info cache refresh out of associate\_floating\_ip * Introduces SDK to IronicDriver and uses for node.get * Allow strict\_proxies for sdk Connection * Docs and functional test for max\_local\_block\_devices * Update SDK fixture for openstacksdk 0.35.0 * Process [compute] in $NOVA\_CPU\_CONF in nova-next * [Trivial]Remove unused helper get\_vif\_devname\_with\_prefix * Add docstring to check\_availability\_zone function * doc: pretty up return code table for sync\_aggregates * docs: pretty up return code table or heal\_allocations * Handle websockify v0.9.0 in console proxy * Rework 'hardware.numa\_usage\_from\_instances' * Remove 'hardware.instance\_topology\_from\_instance' * Remove 'hardware.host\_topology\_and\_format\_from\_host' * Remove 'hardware.get\_host\_numa\_usage\_from\_instance' * trivial: Rename exception argument * claims: Remove useless caching * Update docstring of 'revert\_resize' function * Address nits from privsep series * Document map\_instances return codes in table format * Change nova-manage unexpected error return code to 255 * Document archive\_deleted\_rows return codes * Revert "Filter UnsupportedServiceVersion warning" * Make a failure to purge\_db fail in post\_test\_hook.sh * Remove deprecated [neutron]/url option * FUP for I5576fa2a67d2771614266022428b4a95487ab6d5 * Extract new base class for provider usage functional tests * Track libvirt host/domain capabilities for multiple machine types * Make memtune parameters consistent with libvirt docs and code * Split fake host 
capabilities into reusable variables * Add a hacking rule for useless assertions * Add a hacking rule for non-existent assertions * Fix missing rule description in HACKING.rst * Add blocker migration for completing services.uuid migration * Delete InstanceMapping in conductor if BuildRequest is already deleted * Deprecate Aggregate[Core|Ram|Disk]Filters * libvirt: Remove unnecessary argument * libvirt: Remove unnecessary try-catch around 'getCPUMap' * objects: Rename 'nova.objects.instance\_numa\_topology' * doc: Trivial fixes to API version history * docs: Scrub available quotas * fakelibvirt: Stop distinguishing between NUMA, non-NUMA * Restrict RequestSpec to cell when evacuating * Add functional recreate test for bug 1823370 * Libvirt: add support for vPMU configuration * doc: remove confusing docs about aggregate allocation ratios * Update api-ref for 2.75 to add config\_drive in server update response * Switch some GitHub URLs to point to opendev.org * api-ref: add config\_drive to 2.75 rebuild response parameters * doc: cleanup 2.75 REST API microversion history doc * Re-use DB MetaData during archive\_deleted\_rows * Make it easier to run a selection of tests relevant to ongoing work * Tests: autospecs all the mock.patch usages * Fix wrong assertions in unit tests * Fix 'has\_calls' method calls in unit tests * Limit get\_sdk\_adapter to requested service type * Avoid timeout from service update api unit tests * Move router advertisement daemon restarts to privsep * Move dnsmasq restarts to privsep * Move iptables rule fetching and setting to privsep * libvirt: Mock libvirt'y things in setUp * Rename 'nova.common.config' module to 'nova.middleware' * Fix non-existent method of Mock * Fix libvirt driver tests to use LibvirtConfigCapsGuest instances * Allow assertXmlEqual() to pass options to matchers.XMLMatches * API microversion 2.76: Add 'power-update' external event * Fix use of mock.patch with new\_callable=PropertyMock * config: remove deprecated checksum options * Bump minimum ksa (3.16.0) and sdk (0.34.0) * add InstanceList.get\_all\_uuids\_by\_hosts() method * Enhance SDK fixture for 0.34.0 * api-ref: Fix collapse of 'host\_status' description * lxc: make use of filter python3 compatible * Execute TargetDBSetupTask * Add CrossCellMigrationTask * Prevent init\_host test to interfere with other tests * [Trivial]Remove unused helper filter\_and\_format\_resource\_metadata * [Trivial]Remove unused helper \_get\_instances\_by\_filters * Fix misuse of nova.objects.base.obj\_equal\_prims * Restore soft-deleted compute node with same uuid * Add functional regression recreate test for bug 1839560 * rt: only map compute node if we created it * Avoid timeout from service update notification tests * DRY get\_flavor in flavor manage tests * Multiple API cleanup changes * Add a document that describes profiling eventlet services * Rename 'map' variable to avoid shadowing keywords * Drop usage of lxml's deprecated getchildren() method * [Trivial]Remove unused \_last\_bw\_usage\_cell\_update * trivial: Use NoDBTestCase instead of TestCase * Fix rebuild of baremetal instance when vm\_state is ERROR * Dump versioned notifications when len assertions fail * Skip test\_migrate\_disk\_and\_power\_off\_crash\_finish\_revert\_migration * Use :oslo.config:\* in nova-manage doc * Add TargetDBSetupTask * Add Instance.hidden field * Add InstanceAction/Event create() method * Clean up docstrings for archive\_deleted\_rows * Don't mention CONF.api\_database.connection in user-facing messages/docs * Add 
useful error log when \_determine\_version\_cap raises DBNotAllowed * trivial: Remove unused '\_instance\_to\_allocations\_dict' function * api-ref: document valid GET /os-migrations?migration\_type values * docs: update 2.23 REST API version history * Update comments in HostManager.\_get\_instance\_info * Cache host to cell mapping in HostManager * Convert HostMapping.cells to a dict * Replace non-nova server fault message * doc: fix physets typo * Don't claim that CLI user data requires manual base64 encoding * Retrun 400 if invalid query parameters are specified * Filter UnsupportedServiceVersion warning * Make nova-multi-cell job voting and gating * Add nova-osprofiler-redis job to experimental queue * Modernize nova-lvm job * Convert nova-lvm job to zuul v3 * doc: correct the information of 'cpu\_map' * Add the support of CPU feature 'AVX512-VNNI' * trivial: Remove unused function parameter * Follow-up for I2936ce8cb293dc80e1a426094fdae6e675461470 * Functional reproduce for bug 1833581 * nit: fix the test case of migration obj\_make\_compatible * libvirt: Handle alternative UEFI firmware binary paths * rt: soften warning case in \_remove\_deleted\_instances\_allocations * neutron: log something more useful in \_get\_instance\_nw\_info * Don't generate service UUID for deleted services * Add functional regression test for bug 1778305 * Add functional recreate test for bug 1764556 * Remove Request Spec Migration upgrade status check * Cleanup when hitting MaxRetriesExceeded from no host\_available * Add functional regression test for bug 1837955 * Move adding vlans to interfaces to privsep * Fix wrong huge pages in doc * Get rid of args to RBDDriver.\_\_init\_\_() * libvirt: harden Host.get\_domain\_capabilities() * Use a less chipper title for release notes * doc: fix links for server actions in api guide * api-ref: touch up the os-services docs * Remove usused umask argument to virt.libvirt.utils.write\_to\_file * Completely remove fake\_libvirt\_utils * Revert "[libvirt] Filter hypervisor\_type by virt\_type" * compute: Use source\_bdms to reset attachment\_ids during LM rollback * Remove 'nova.virt.driver.ComputeDriver.estimate\_instance\_overhead' * Remove deprecated CPU, RAM, disk claiming in resource tracker * Disable cinder-backup service in nova-next job * Pass extra\_specs to flavor in vif tests * Remove test\_pre\_live\_migration\_instance\_has\_no\_fixed\_ip * Remove fake\_libvirt\_utils users in functional testing * Remove super old unnecessary TODO from API start() method * Convert nova-next to a zuul v3 job * Remove deprecated Core/Ram/DiskFilter * Use OpenStack SDK for placement * Consts for need\_healing * Use the safe get\_binding\_profile * Introduces the openstacksdk to nova * Pass migration to finish\_revert\_migration() * Correct project/user id descriptions for os-instance-actions * Update api-ref location * Remove Newton-era min compute checks for server create with device tags * Add functional test for resize crash compute restart revert * Run 'tempest-ipv6-only' job in gate * Disambiguate logs in delete\_allocation\_for\_instance * Remove @safe\_connect from \_delete\_provider * libvirt: move checking CONF.my\_ip to init\_host() * Bump the openstackdocstheme extension to 1.20 * Replace "integrated-gate-py3" template with new "integrated-gate-compute" * Fix cleaning up console tokens * Avoid logging traceback when detach device not found * bindep: Remove dead markers * tox: Keeping going with docs * Restore RT.old\_resources if ComputeNode.save() fails * 
Defaults missing group\_policy to 'none' * Add 'resource\_request' to neutronv2/constants * Use neutron contants in cmd/manage.py * Move consts from neutronv2/api to constants module * Translatable output strings in heal allocation * Use Adapter global\_request\_id kwarg * nova-manage: heal port allocations * vif: Remove dead minimum libvirt checks * vif: Resolve a TODO and update another * vif: Stop using getattr for VIF lookups * vif: Remove 'plug\_vhostuser', 'unplug\_vhostuser' * Add method 'all\_required\_traits' to scheduler utils * Fix no propagation of nova context request\_id * Revert resize: wait for events according to hybrid plug * docs: Correct issues with 'openstack quota set' commands * ec2: Pre-move cleanup of utils * ec2: Remove ec2.CloudController * objects: Remove unused ec2 objects * ec2: Remove unused functions from 'ec2utils' * doc: Fix a parameter of NotificationPublisher * doc: Add links to novaclient contributor guide * doc: Replace a wiki link with API ref guide link * Perf: Use dicts for ProviderTree roots * libvirt: remove unused Service.get\_by\_compute\_host mocks * Update AZ admin doc to mention the new way to specify hosts * nova-lvm: Disable [validation]/run\_validation in tempest.conf * Add host and hypervisor\_hostname flag to create server * db: Add vpmems to instance\_extra * Remove assumption of http error if consumer not exists * Remove Rocky-era min compute trusted certs compat check * Remove old TODO about forced\_host policy check * Add Python 3 Train unit tests * Remove nova-consoleauth * libvirt: vif: Remove MIN\_LIBVIRT\_MACVTAP\_PASSTHROUGH\_VLAN * libvirt: Remove MIN\_LIBVIRT\_PERF\_VERSION * api-ref: Fix a broken link * Stop sending bad values from libosinfo to libvirt * libvirt: Remove unreachable native QEMU iSCSI initiator config code * libvirt: Remove MIN\_{QEMU,LIBVIRT}\_LUKS\_VERSION * Remove 'nova.virt.libvirt.compat' * Exit 1 when db sync runs before api\_db sync * Fix GET /servers/detail host\_status performance regression * Follow up for pre-filter-disabled-computes series * Sync COMPUTE\_STATUS\_DISABLED from API * Refactor HostAPI.service\_update * Add placement request pre-filter compute\_status\_filter * Update COMPUTE\_STATUS\_DISABLED from set\_host\_enabled compute call * [FUP] Follow-up patch for SR-IOV live migration * libvirt: Add a rbd\_connect\_timeout configurable * libvirt: manage COMPUTE\_STATUS\_DISABLED for hypervisor connection * Add VirtAPI.update\_compute\_provider\_status * Stabilize unshelve notification sample tests * Add neutron-tempest-iptables\_hybrid job to experimental queue * Clean up test\_virtapi * Set COMPUTE\_STATUS\_DISABLED trait from update\_provider\_tree flow * Rename CinderFixtureNewAttachFlow to CinderFixture * Remove mox in virt/test\_block\_device.py * Add integration testing for heal\_allocations * Init HostState.failed\_builds * Remove needs:\* todo from deprecated APIs api-ref * Fix invalid assertIsNone states * Add missing tests for flavor extra\_specs mv 2.61 * Fix test\_flavors to run with correct microversion * Remove 'MultiattachSupportNotYetAvailable' exception * Follow-up for I6a777b4b7a5729488f939df8c40e49bd40aec3dd * Drop pre-cinder 3.44 version compatibility * Un-safe\_connect and publicize get\_providers\_in\_tree * Require at least cryptography>=2.7 * libvirt: flatten rbd images when unshelving an instance * pull out put\_allocation call from \_heal\_\* * Prepare \_heal\_allocations\_for\_instance for nested allocations * reorder conditions in 
\_heal\_allocations\_for\_instance * Fix type error on call to mount device * Fix RT init arg order in test\_unsupported\_move\_type * Fix AttributeError in RT.\_update\_usage\_from\_migration * Privsep the ebtables modification code * Remove unused FP device creation and deletion methods * Privsepify ipv4 forwarding enablement * Remove no longer required "inner" methods * Grab fresh power state info from the driver * pull out functions from \_heal\_allocations\_for\_instance * Correct the comment of RequestSpec's network\_metadata * Deprecate non-update\_provider\_tree compat code * xenapi: implement update\_provider\_tree * Implement update\_provider\_tree * Fix update\_provider\_tree signature in reference docs * Add functional test coverage for bug 1724172 * Enhance service restart in functional env * (Re-)enable vnc console tests in nova-multi-cell job * nova-status: Remove consoleauth workaround check * tests: Use consistent URL regex substitution * hacking: Resolve W605 (invalid escape sequence) * hacking: Resolve E741 (ambiguous variable name) * hacking: Resolve W503 (line break occurred before a binary operator) * Remove orphaned comment from \_get\_group\_details * Revert "Revert resize: wait for events according to hybrid plug" * Remove comments about mirroring changes to nova/cells/messaging.py * doc: Fix nova-manage cell\_v2 list\_cells output * [FUP] fix backleveling unit test for video models * extend libvirt video model support * api-guide: better explain scheduler hints * Remove global state from the FakeDriver * conf: Rename 'configuration drive' to 'config drive' * docs: Rework all things metadata'y * vif: Skip most of 'get\_base\_config' if not using virtio * Ignore hw\_vif\_type for direct, direct-physical vNIC types * Revert resize: wait for events according to hybrid plug * Remove deprecated arguments in db sync command * rbd: use MAX\_AVAIL stat for reporting bytes available * Clarify --before help text in nova manage * xvp: Remove use of '\_LI' marker * xvp: Start using consoleauth tokens * Replace deprecated with\_lockmode with with\_for\_update * Log disk transfer stats in live migration monitor * Remove redundant group host setup * Validate requested host/node during servers create * Fix wrong assert methods * Clean up NumInstancesFilter related docs * Log quota legacy method warning only if counting from placement * Deprecate RetryFilter * Fix enabled\_filters default value in admin config docs * Remove file-backed memory live migration compat check * tests: Stop starting consoleauth in functional tests * docs: Remove references to nova-consoleauth * docs: remove the RamFilter from example * Ensure controllers all call super * Add 'path' query parameter to console access url * Always Set dhcp\_server in network\_info * Add a test for the \_joinedload\_all helper * Replace joinedload\_all with joinedload * Fix :param: in docstring * Optimize SchedulerReportClient.delete\_resource\_provider * Avoid unnecessary joins in delete\_resource\_provider * Literalize CLI options in docs * Delete resource providers for all nodes when deleting compute service * Warn for duplicate host mappings during discover\_hosts * Api-guide: Add Block Device Mapping * Make RequestContext(instance\_lock\_checked) fail * Fix a warning about flags in an expression string * update comment on ignore\_basepython\_conflict * Add Migration.cross\_cell\_move and get\_by\_uuid * update constraints url * Remove 'InstanceUnknownCell' exception * Add reno for removed cells v1 policies * Remove 
nova.compute.\*API() shims * filters: Stop handling cells v1 * Stop passing 'delete\_type' to 'terminate\_instance' * Stop passing 'kwargs' to 'rebuild\_instance' * Remove cells v1 parameter from 'ComputeTaskAPI.resize\_instance' * Fix double word hacking test * fup: Merge machine\_type\_mappings into get\_default\_machine\_type * libvirt: Use SATA bus for cdrom devices when using Q35 machine type * Make get\_provider\_by\_name public and remove safe\_connect * Refresh instance network info on deletion * Skip test\_check\_doubled\_words hacking check UT * Fix python3 compatibility of rbd get\_fsid * Remove unnecessary setUp methods * Replace 'is comprised of' with 'comprises' * Hacking N363: \`in (not\_a\_tuple)\` * Remove 'ComputeManager.\_reschedule' * Add functional recreate test for bug 1829479 and bug 1817833 * Cleanup quota user docs * Update quota known issues docs * [Docs] Update the confusing console output * Modifying install-guide to include public endpoint for identity service * Remove an unused method * Delete unused get\_all\_host\_states method * Document mitigation for Intel MDS security flaws * Make nova-next archive using --before * Change the default of notification\_format to unversioned * Hide hypervisor id on windows guests * Move default policy target * Simplfy test setup for TestNovaMigrations\* tests * Avoid lazy-loading instance.flavor in cold migration * Exclude broken ironicclient versions 2.7.1 * Follow up for counting quota usage from placement * Remove remaining vestiges of fake\_libvirt\_utils from unit tests * Set/get group uuid when transforming RequestSpec to/from filter\_properties * Workaround missing RequestSpec.instance\_group.uuid * Add regression recreate test for bug 1830747 * Add documentation for counting quota usage from placement * Use instance mappings to count server group members * Remove fake\_libvirt\_utils from libvirt imagebackend tests * Remove fake\_libvirt\_utils from virt driver tests * Bump openstackdocstheme to 1.30.0 * xenapi: log quality warning in init\_host * Remove zeromq from getting started with compute docs * Raise InstanceFaultRollback for UnableToMigrateToSelf from \_prep\_resize * Change InstanceFaultRollback handling in \_error\_out\_instance\_on\_exception * Blacklist python-cinderclient 4.0.0 * Robustify attachment tracking in CinderFixtureNewAttachFlow * Update usage in RT.drop\_move\_claim during confirm resize * Fix hard-delete of instance with soft-deleted referential constraints * conf: Remove cells v1 options, group * db: Remove cell APIs * Remove unnecessary wrapper * Stop handling 'InstanceUnknownCell' exception * libvirt: Rework 'EBUSY' (SIGKILL) error handling code path * docs: Don't version links to reno docs * Make all functional tests reusable by other projects * Fix the server group "policy" field type in api-ref * extract baselineCPU API call from \_get\_cpu\_traits() * Reduce logging of host hypervisor capabilities to DEBUG level * cleanup evacuated instances not on hypervisor * Remove mox in unit/network/test\_neutronv2.py (12) * Remove mox in unit/network/test\_neutronv2.py (11) * Remove mox in unit/network/test\_neutronv2.py (10) * Remove mox in unit/network/test\_neutronv2.py (9) * Remove mox in unit/network/test\_neutronv2.py (8) * Ensure that metadata proxy raises correct exception * Don't rely on SQLAlchemy collections magically initializing \_\_dict\_\_ * Move selective patching of open() to nova.test for reuse * Skip novnc tests in multi-cell job until bug 1830417 is fixed * Move 
patch\_exists() to nova.test.TestCase for reuse * Link versioned notification talk into docs * Set [quota]count\_usage\_from\_placement = True in nova-next * Count instances from mappings and cores/ram from placement * Avoid unnecessary joins in InstanceGroup.get\_hosts * Do not start nova-network in the notification func test * Fix live-migration when glance image deleted * Add --before to nova-manage db archive\_deleted\_rows * refactor nova-manage archive\_deleted\_rows * Skip existing VMs when hosts apply force\_config\_drive * Update description of valid whitelist for non-admin user * [Docs] Fix minor typo * Keep attach\_mode as top-level field in \_translate\_attachment\_ref * Block swap volume on volumes with >1 rw attachment * Replace colon with comma in route comment * Allow driver to properly unplug VIFs on destination on confirm resize * Extract provider tree functional tests into new file * Remove 'etc/nova/cells.json' * Remove conductor\_api and \_last\_host\_check from manager.py * Restore connection\_info after live migration rollback * Fix failure to boot instances with qcow2 format images * libvirt: Do not reraise DiskNotFound exceptions during resize * Remove cells code * Stop handling cells v1 for instance naming * Stop handling 'update\_cells' on 'BandwidthUsage.create' * Remove 'instance\_update\_from\_api' * Move get\_pci\_mapping\_for\_migration to MigrationContext * Remove redundant conductor from ServersTestBase.setUp() * Fix guestfs.set\_backend\_settings call * api-ref: mention default project filtering when listing servers * Add detection of SEV support from QEMU/AMD-SP/libvirt on AMD hosts * Add infrastructure for invoking libvirt's getDomainCapabilities API * [ironic] Don't remove instance info twice in destroy * Fix some issues with the newton release notes * Stop logging traceback when skipping quiesce * Cap sphinx for py2 to match global requirements * Fix retry of instance\_update\_and\_get\_original * Disable limit if affinity(anti)/same(different)host is requested * [Trivial doc change] Admin can overwrite the locked\_reason of an owner * Add functional confirm\_migration\_error test * Remove fake\_libvirt\_utils from snapshot tests * Remove fake\_libvirt\_utils from connection tests * Change some URLs to point to better targets * Microversion 2.73: Support adding the reason behind a server lock * Trivial: Adds comments and tests for scheduler * Move \_fill\_provider\_mapping to the scheduler\_utils * Remove unused param from \_fill\_provider\_mapping * Add extra logging to request filters * Update the contributor doc for macos * Update Python 3 test runtimes for Train * Revert "Fix target\_cell usage for scatter\_gather\_cells" * Fix SynchronousThreadPoolExecutorFixture mock of Future * Add docs for image type support request filter * Enable image type query support in nova-next * Add image type request filter * [Docs] Change the server query parameter display into a list * api-ref: fix mention of all\_tenants filter for non-admins * Add zvm driver image type capabilities * Add xenapi driver image type capabilities * Add vmware driver image type capabilities * Add ironic driver image type capabilities * Make libvirt expose supported image types * Expose Hyper-V supported image types * Fix assert methods in unit tests * Exclude fake marker instance when listing servers * Add regression test for bug 1825034 * [Trivial fix]Remove unnecessary slash * Log when port resource is leaked during port delete * Make nova-tox-functional-py36 reusable * Use 
run\_immediately=True for \_cleanup\_running\_deleted\_instances * Remove macs kwarg from allocate\_for\_instance * Remove ComputeDriver.macs\_for\_instance method * Improve metadata performance * Add nova-status upgrade check for minimum required cinder API version * Reset the stored logs at each notification test steps * Remove 'instance\_update\_at\_top', 'instance\_destroy\_at\_top' * Refactor bandwidth related functional tests * Test macvtap port with resource request * Require at least oslo.versionedobjects>=1.35.0 * Fix invalid privsep.readpty test * Fix help for ironic.peer\_list config * Remove deprecated 'default\_flavor' config option * Enable n-novnc in nova-multi-cell job * Add nova-multi-cell job * Remove 'get\_keypair\_at\_top' * Remove 'instance\_info\_cache\_update\_at\_top' * Remove 'instance\_fault\_create\_at\_top' * Correct spelling errors * Delete the placement code * libvirt: Avoid using os-brick encryptors when device\_path isn't provided * Add Venn diagram showing taxonomy of traits and capabilities * Remove unused context parameter from RT.\_get\_instance\_type * Add functional recreate test for bug 1818914 * Remove MIN\_COMPUTE\_MULTIATTACH conditions in API * Always pass HostAPI to get\_availability\_zones * Remove [ironic]api\_endpoint option * test\_rpc: Stop f\*\*\*\*\*\* with global state * libvirt: auto detach/attach sriov ports on migration * libvirt: Always disconnect volumes after libvirtError exceptions * libvirt: Stop ignoring unknown libvirtError exceptions during volume attach * Don't run tempest/devstack jobs on nova/test.py only changes * Make nova.compute.rpcapi.ComputeAPI.router a singleton * AZ list performance optimization: avoid double service list DB fetch * Add image type capability flags and trait conversions * Create request spec, build request and mappings in one transaction * Fix mock specs set to strings * Do not perform port update in case of baremetal instance * Replace git.openstack.org URLs with opendev.org URLs * Pass on region when we don't have a valid ironic endpoint * Improve test coverage of nova.privsep.utils * Drop source node allocations if finish\_resize fails * Add functional recreate test for regression bug 1825537 * Fix {min|max}\_version in ironic Adapter setup * SR-IOV Live migration indirect port support * Improve CinderFixtureNewAttachFlow * Fix ProviderUsageBaseTestCase.\_run\_periodics for multi-cell * OpenDev Migration Patch * Only set oslo\_messaging\_notifications.driver if using RPCFixture * Trivial: use default value in next() func * Add get\_usages\_counts\_for\_quota to SchedulerReportClient * libvirt: set device address tag only if setting disk unit * Remove FlavorNotFound dead code condition in API.resize * Update volume-backed comment in \_validate\_flavor\_image\_nostatus * Fix volume-backed resize with a smaller disk flavor * Add ids to sections of flavors guide to allow deep-linking * Query \`in\_tree\` to placement * Pass target host to RequestGroup.in\_tree * Add get\_compute\_nodes\_by\_host\_or\_node() * Add in\_tree field to RequestGroup object * Add functional regression recreate test for bug 1825020 * Remove 'bdm\_(update\_or\_create|destroy)\_at\_top' * Remove old-style cell v1 instance listing * Stop handling cells v1 for console authentication * Remove 'nova-manage cell' commands * Stop handling cells v1 in '/os-servers' API * Stop handling cells v1 in '/os-hypervisors' API * Remove '/os-cells' REST APIs * objects: Remove ConsoleAuthToken.to\_dict * conf: Undeprecate and move the 
'dhcp\_domain' option * Handle unsetting '[DEFAULT] dhcp\_domain' * Include all network devices in nova diagnostics * Add BFV wrinkle to TestNovaManagePlacementHealAllocations * Add --instance option to heal\_allocations * Dropping the py35 testing * Add instance hard delete * Bump to hacking 1.1.0 * Add minimum value in max\_concurrent\_live\_migrations * Uncap jsonschema * Add --dry-run option to heal\_allocations CLI * trivial: Remove dead nova.db functions * Use update\_provider\_tree in vmware virt driver * Add get\_counts() to InstanceMappingList * Use InstanceList.get\_count\_by\_hosts when deleting a compute service * Remove 'nova-cells' service * Remove cells v1 jobs * Use migration\_status during volume migrating and retyping * Cleanup migrate flags * Add post-release checklist items to the PTL guide * Drop delete\_build\_requests\_with\_no\_instance\_uuid online migration * Soft delete virtual\_interfaces when instance is destroyed * Delete require\_instance\_exists\_using\_uuid * Add placeholder migrations for Stein backports * Change a log level for overwriting allocation * Remove query\_client from resource\_tracker * libvirt: disconnect volume when encryption fails * Don't report 'exiting' when mapping cells * Mention [cinder]/cross\_az\_attach in the AZ docs * Document restrictions for moving servers between availability zones * Add testing guide for down cells * xenapi/agent: Change openssl error handling * Remove dead code * Log notifications if assertion in \_test\_live\_migration\_force\_complete fails * Add test coverage for nova.privsep.qemu * Add test coverage for nova.privsep.libvirt * Improve test coverage of nova.privsep.fs, continued * Improve test coverage of nova.privsep.fs * Improve test coverage of nova.privsep.path * Hacking N362: Don't abbrev/alias privsep import * Handle PortLimitExceeded in POST /servers/{server\_id}/os-interface * Do not log a warning about not using compute monitors * Handle Invalid exceptions as expected in attach\_interface * Add docs on what not to include in notifications * devstack: Remove 'tempest-dsvm-tempest-xen-rc' * Remove CellMappingPayload database\_connection and transport\_url fields * api-ref: fix description of os-server-external-events 'events' param * api-ref: document ordering for instance actions and events * libvirt: remove conditional on VIR\_DOMAIN\_EVENT\_SUSPENDED\_POSTCOPY * libvirt: drop MIN\_LIBVIRT\_POSTCOPY\_VERSION * Drop migrate\_keypairs\_to\_api\_db data migration * Libvirt: gracefully handle non-nic VFs * trivial: Remove dead resource tracker code * trivial: Remove unused constants, functions * Leave brackets on Ceph IP addresses for libguestfs * systemd detection result caching nit fixes * trivial: Remove dead 'ALIAS' constant * zvm: Remove dead code * hacking: Fix dodgy check * trivial: Remove dead code * Docs: emulator threads: clarify expected behavior * Fix comment in test\_attach\_with\_multiattach\_fails\_not\_available * Fix a deprecation warning * Style corrections for privsep usage * Mock time.sleep() in unit tests * Add placement as required project to functional py36 and 37 * Correct lower-constraints.txt and the related tox job * Do not persist RequestSpec.ignore\_hosts * tests: Full stub out os\_vif * Pass --nic when creating servers in evacuate integration test script * tests: Stub out privsep modules * Remove flavor id and name validation code * Remove mox in unit/network/test\_neutronv2.py (7) * Remove mox in unit/network/test\_neutronv2.py (6) * Remove mox in 
unit/network/test_neutronv2.py (5) * Remove mox in unit/network/test_neutronv2.py (4) * Fix bug preventing forbidden traits from working * Adding tests to demonstrate bug #1821824 * Only call _fill_provider_mapping if claim succeeds * Handle placement error during re-schedule * api-ref: add more details to confirmResize troubleshooting * Delete allocations even if _confirm_resize raises * Fix exception type in test_boot_reschedule_fill_provider_mapping_raises * Adds systemd detection result caching in Quobyte driver * Error out migration when confirm_resize fails * Explain why disk_available_least can be negative * doc: Fix openstack CLI command * Move create of ComputeAPI object in websocketproxy * Change the TODO to NOTE about instance multi-create * Reproduce bug #1819460 in functional test * doc: Capitalize keystone domain name * Use aggregate_add_host in nova-manage * Add a reference PTL guide to the contributor docs * Add functional test for the JsonFilter * Document a warning about using the JsonFilter * Fix JsonFilter query hint examples in docs * Fix incomplete instance data returned after build failure * Add doc on VGPU allocs and inventories for nrp * Add functional regression test for bug 1669054 * Remove expiremental note in the VGPU docs * s,git://github.com/,https://git.openstack.org/, * Re-enable testing of console with TLS in nova-next job * Replace openstack.org git:// URLs with https:// * Remove last use of rc_fields * Fix return param docstring in check_can_live_migrate* methods * Update contributor guide for Train * bdm: store empty object as connection_info by default * Eventlet monkey patching should be as early as possible * Add description about sort order in API ref guideline * Imported Translations from Zanata * Update master for stable/stein * Stop running tempest-multinode-full

19.0.0.0rc1
-----------

* Trivial: remove unused var from policies.base.py * Override the 'get' method in DriverBlockDevice class * libvirt: smbfs: Use 'writeback' QEMU cache mode * libvirt: vzstorage: Use 'writeback' QEMU cache mode * libvirt: Use 'writeback' QEMU cache mode when 'none' is not viable * Fix links to neutron QoS minimum bandwidth doc * Don't register placement opts mutiple times in a test * Add known issue for minimum bandwidth resource leak * Add a prelude release note for the 19.0.0 Stein GA * docs: Misc cleanups * Address old TODO in claim_resources_on_destination * Move libvirt calculation of machine type to utils.py * Give the policy vision document a facelift * Add docs for compute capabilities as traits * Cleanup comments around claim_resources method * Clarify policy shortcomings in policy enforcement doc * Remove additional policy configuration details from policy doc * Remove unnecessary default provider_tree when getting traits * qemu: Make disk image conversion dramatically faster * Remove obsolete policy configuration details from docs * Documentation for bandwidth support * Move slight bonkers IP management to privsep * Speed up test_report * Remove "Fixing the Scheduler DB model" from schedule evolution doc * Remove stale aggregates notes from scheduler evolution doc * Trivial typo fix for REST API in policy enforcement docs * Remove resize caveat from conductor docs * docs: cleanup driver parity scope section * Pass kwargs to exception to get better format of error message * Avoid crashing while getting libvirt capabilities with unknown arch names * Re-enable Ceph in live migration testing * Customize irrelevant-files for
nova-live-migration job * Update instance.availability\_zone on revertResize * Add functional recreate test for bug 1819963 * Migrate legacy jobs to Ubuntu Bionic * Disable the tls-proxy in nova-next & fix nova-tox-functional-py35 parent * Trivial: fix typo in reno * Skip the ceph based live migration testing * api-ref: Add description for BDM volume\_size * add python 3.7 unit test job * Trivialfix for help description of images\_type * Add retry\_on\_deadlock to migration\_update DB API * Add functional test to delete a server while in VERIFY\_RESIZE * Don't warn on network-vif-unplugged event during live migration * Require python-ironicclient>=2.7.0 * pass endpoint interface to Ironic client * Allow utime call to fail on qcow2 image base file * Update docs: User token times out during long-running operations * Update compute rpc version alias for stein * fix race in test\_interface\_detach\_with\_port\_with\_bandwidth\_request * Use Selection object to fill request group mapping * doc: Fix a typo * Remove fake\_libvirt\_utils from the cache concurrency tests * Add descriptions of numbered resource classes and traits * Add online data migration for populating user\_id * Populate InstanceMapping.user\_id during migrations and schedules * Add user\_id field to InstanceMapping * update gate test for removal of force evacuate * Use assertXmlEqual() helper for all XML comparison tests * Should not skip volume\_size check for bdm.image\_id == image\_ref case * doc: mark the max microversion for stein * Remove duplicate cleanup in functional tests * Add user\_id column to the instance\_mappings table * Set min=0 for block\_device\_allocate\_retries option * Clean up block\_device\_allocate\_retries config option help * docs: Fix nits in remote console guide * Add get\_instance\_pci\_request\_from\_vif * Allow per-port modification of vnic\_type and profile * Separate methods to free claimed and allocated devs * Add missing libvirt exception during device detach * FUP for test\_reshape * Test proper allocation of devices during reshape * Cleanup the exec\_ebtables code a little * Move killing processes to privsep * Move cleaning conntrack to privsep * Move arping to privsep * doc: cleanup pci.alias references * De-cruft compute manager live migration * Extend volume for libvirt network volumes (RBD) * Do not run tempest.scenario.test\_network\* tests in nova-next * Warn if group\_policy is missing from flavor * tests: Create PCI tests for NUMA'y tests * fakelibvirt: Add ability to generate fake PCI devices * objects: Store InstancePCIRequest.numa\_policy in DB * Update --max-rows parameter description for archive\_deleted\_rows * Validate PCI aliases early in resize * Move additional IP address management to privsep * Move route management to privsep * Convert additional IP management calls to privsep * Move DHCP releasing to privsep * Move set\_vf\_interface\_vlan to be with its only caller * Fix WeighedHost logging regression * Use errors\_out\_migration decorator on finish\_resize * Delete the obj\_as\_admin context manager * De-cruftify the finish\_resize methods * Temporarily mutate migration object in finish\_revert\_resize * Improve libvirt image and snapshot handling * Flavor extra spec and image properties validation from API * Handle missing exception in instance creation code * Support server create with ports having resource request * Ensure that bandwidth and VF are from the same PF * Revert "Fixes race condition with privsep utime" * Handle templated cell mappings in nova-status * 
Parse elements from virConnectGetCapabilities() * Exec systemd-run without --user flag in Quobyte driver * api-ref: typo service.disable\_reason * Use a placement conf when testing report client * Improve existing flavor and image metadata validation * Correct instance port binding for rebuilds * Add nits from Id2beaa7c4e5780199298f8e58fb6c7005e420a69 * Fix wrong consumer type in logging * Fix an error when generating a host ID * Remove mox in unit/network/test\_neutronv2.py (3) * Remove wrong description for auto resize confirm * Fixes race condition with privsep utime * fix bug with XML matcher handling missing children * api-ref: explain aggregate set\_metadata semantics * Check hosts have no instances for AZ rename * Remove TypeError handling for get\_info * ironic: check fresh data when sync\_power\_state doesn't line up * Add oslo.privsep to config-generator list * Stop using "nova" in API samples when creating a server * Add "links" in the response of "nova show" for a down-cell instance * Make nova-grenade-live-migration voting and gating * Move legacy-grenade-dsvm-neutron-multinode-live-migration in-tree * Convert driver supported capabilities to compute node provider traits * Adds the server group info into show server detail API * Ironic: bump minimum API version to 1.38 * Record requester in the InstancePCIRequest * Remove port allocation during detach * fix up numa-topology live migration hypervisor check * Add remove\_resources\_from\_instance\_allocation to report client * Test live migration with config drive * conf: Call out where pci.alias should be set * conf: Deprecate 'disable\_libvirt\_livesnapshot' option * Summarize output of sample configuration generator * FUP: docs nit * Add functional test for libvirt vgpu reshape * Optimize populate\_queued\_for\_delete online data migration * Cleanup no longer required filters and add a release note * ironic: partition compute services by conductor group * Fix the api sample docs for microversion 2.68 * Fup for the bandwidth series * We no longer need rootwrap * Cleanup the \_execute shim in nova/network * Change LibvirtDriver.capabilities to an instance variable * [Doc] Best practices for effectively tolerating down cells * libvirt: implement reshaper for vgpu * Use the correct mdev allocated from the pGPU * remove deprecated os\_brick import from ScaleIO driver * Move final bridge commands to privsep * Move setting of device trust to privsep * Move calls to ovs-vsctl to privsep * Fix resetting non-persistent fields when saving obj * Add unit tests for missing VirtualInterface in 2.70 os-interface * conf: Deprecated 'defer\_iptables\_apply' * Refactor "networks" processing in ServersController.create * Remove \_legacy\_dict methods * Remove misleading code from \_move\_operation\_alloc\_request() * Log why rescheduling is disabled * Dump config options on wsgi startup earlier * Follow up for I0c764e441993e32aafef0b18049a425c3c832a50 * Remove deprecated 'flavors' policy * Remove deprecated 'os-server-groups' policy * Fix a typo in configuration description * Add microversion to expose virtual device tags * FUP for Id7827fe8dc27112e342dc25c902c8dbc25f63b94 * Test boot with more ports with bandwidth request * Send RP uuid in the port binding * Recalculate request group - RP mapping during re-schedule * Pass resource provider mapping to neutronv2 api * Fill the RequestGroup mapping during schedule * Calculate RequestGroup resource provider mapping * Added mount fstype based validation of Quobyte mounts * Replace ansible --sudo 
with --become in live\_migration/hooks scripts * Fix typo in initial\_disk\_allocation\_ratio release note * API microversion 2.69: Handles Down Cells Documentation * Move create\_tap\_dev into privsep * Create specialist set\_macaddr\_and\_vlan helper * Fix fake DELETE in PlacementFixture * libvirt: Omit needless check on 'CONF.serial\_console' * libvirt: Drop MIN\_LIBVIRT\_PARALLELS\_SET\_ADMIN\_PASSWD * libvirt: Rewrite \_create\_pty\_device() to be clearer * libvirt: Bump MIN\_{LIBVIRT,QEMU}\_VERSION for "Stein" * API microversion 2.69: Handles Down Cells * Add context.target\_cell() stub to DownCellFixture * Plumbing required in servers ViewBuilder to construct partial results * Trim fake\_deserialize\_context in test\_conductor * Cleanup inflight rpc messages between test cases * Fix irrelevant-files for legacy-grenade-dsvm-neutron-multinode-live-migration * Stub out port binding create/delete in NeutronFixture * Make VolumeAttachmentsSampleV249 test other methods * Fix deps for api-samples tox env * Fix a missing policy in test policy data * Remove deprecated 'os-flavor-manage' policy * Drop the integrated-gate (py27) template * Address nits from I9e30a24a4c0640f282f507d0a96640d3cdefe43c * api-ref: Add descriptions for vol-backed snapshots * Change sqlalchemy warnings filter to an error * Libvirt: do not set MAC when unplugging macvtap VF * Lock detach\_volume * docs: ComputeDriver.update\_provider\_tree in nova * Document how to make tests log at DEBUG level * Drop specific versions of openSUSE-based distributions * Remove cells v1 (for the most part) from the docs * api-ref: mark os-cells as deprecated * Further de-dupe os-vif VIF tests * Validate bandwidth configuration for other VIF types * Remove get\_config\_vhostuser * Use math.gcd starting with python 3.5 * Adding cross refs for config options in scheduler filter guide * Avoid redundant initialize\_connection on source post live migration * Change nova-next tempest test regex * Ensure config regexes match the entire string * Make move\_allocations handle empty source allocations * RT: improve logging in \_update\_usage\_from\_migration * Make Claim.\_claim\_test handle SchedulerLimits object * Move finish\_resize.(start|end) notifications to helper method * Don't set bandwidth limits for vhostuser, hostdev interfaces * Use tox 3.1.1 fixes * tox: Don't write byte code (maybe) * Trivial: reorder hashes according to object\_hashes.txt * Use placement.inventory.inuse in report client * Provide a useful error message when trying to update non-compute services * Avoid BadRequest error log on volume attachment * Follow up (#2) for the bw resource provider series * Fix race in test\_volume\_swap\_server\_with\_error * Ignore VolumeAttachmentNotFound exception in compute.manager * Cleanup return\_reservation\_id in ServersController.create * Refactor bdm handling in ServersController.create method * Share snapshot image membership with instance owner * API: Remove evacuate/live-migrate 'force' parameter * Plumbing for allowing the all-tenants filter with down cells * Plumbing for ignoring list\_records\_by\_skipping\_down\_cells * Modify InstanceMappingList.get\_not\_deleted\_by\_cell\_and\_project() * Convert CPU\_TRAITS\_MAPPING to use os\_traits * Extend RequestGroup object for mapping * Transfer port.resource\_request to the scheduler * create\_veth\_pair is unused, remove it * Move binding ips to privsep * Change live\_migration\_wait\_for\_vif\_plug=True by default * Fix deprecation warning for threadgroup.add\_timer * doc: 
specify --os-compute-api-version when setting flavor description * Ignore sqla-migrate inspect.getargspec deprecation warnings * Switch to using os-resource-classes * Remove placement from contributor doc * Remove link to placement configuration from nova config docs * Remove placement from nova install docs * Update nova docs front page for placement removal * Update help messages for weight multipliers * Add minimum value in maximum\_instance\_delete\_attempts * Use :oslo-config: role in hypervisor-kvm doc * api-ref: mention policy defaults for aggregates * api-ref: warn about changing/unsetting AZ name with instances * Fix legacy-grenade-dsvm-neutron-multinode-live-migration * doc: mention description field in user flavors docs * api-ref: fix link to flavor extra specs docs * cleanup \*.pyc files in docs tox envs * update flavor admin docs * Fix InstanceMapping to always default queued\_for\_delete=False * Ignore some PendingDeprecationWarnings for os-vif * Replace glance command with openstack command * Extract compute API \_create\_image to compute.utils * Move resize.(start|end) notification sending to helper method * Move resize.prep.start/end notifications to helper method * Isolate cell-targeting code in MigrationTask * Remove PLACEMENT\_DB\_ENABLED from nova-next job config * Drop nova-multiattach job * Don't force evacuate/live migrate in notification sample tests * doc: Add solution to live migration ssh issues * Follow up for per-instance serial number change * Change nova-next job to run with python3 * doc: update the security groups admin doc * doc: link Kashyap's cpu model talk to the libvirt driver config docs * doc: link admin/configuration from admin home page * Fup for the bandwidth resource provider series * Per-instance serial number * PCI: do not force remove allocated devices * Ignore SAWarnings for "Evaluating non-mapped column expression" * Move retry from \_update to \_update\_to\_placement * Collect duplicate codepaths in os\_vif\_util * Duplicate os-vif datapath offload metadata * Add support for vrouter HW datapath offloads * Switch tempest-slow to be run on python 3 * Move interface disabling to privsep * Move setting mac addresses for network devices to privsep * Fix config docs for handle\_virt\_lifecycle\_events * Add configuration of maximum disk devices to attach * Force refresh instance info\_cache during heal * Add fill\_virtual\_interface\_list online\_data\_migration script * Fix string interpolations in logging calls * FUPs: ReportClient traffic series * Fix port dns\_name reset * Reject unshelve with port having resource request * Reject evacuate with port having resource request * Reject migrate with port having resource request * Reject resize with port having resource request * Reject server create with port having resource request * Read port resource request from Neutron * Include requested\_resources to allocation candidate query * Create RequestGroup from neutron port * Reject networks with QoS policy * Add a warning for max\_concurrent\_live\_migrations * Convert vrouter legacy plugging to os-vif * Fix ComputeNode ovo compatibility code * Remove unused quota options * Raise 403 instead of 500 error from attach volume API * Reject interface attach with QoS aware port * Skip checking of target\_dev for vhostuser * Make 'plugin' a required argument for '\_get\_vif\_instance' * Add missing ws seperator between words * Don't call begin\_detaching when detaching volume from shelved vm * Convert port to str when validate console port * docs: 
Update references to "QEMU-native TLS" document * libvirt: A few miscellaneous items related to "native TLS" * Per aggregate scheduling weight * Cleanup soft (anti)affinity weight multiplier options * unused images are always deleted (add to in-tree hper-v code) * Fix using template cell urls with nova-manage * Turn off rp association refresh in nova-next * Fix incompatible version handling in BuildRequest * Use a static resource tracker in compute manager * api-ref: Body verification for the lock action * Rip out the SchedulerClient * Rip the report client out of SchedulerClient * Commonize \_update code path * Consolidate inventory refresh * Reduce calls to placement from \_ensure * Fix ovo compatibility code unit tests * Fix overcommit for NUMA-based instances * Send context.global\_id on neutron calls * Use X-Forwarded-Proto as origin protocol if present * Add method to generate device names universally * docs: Secure live migration with QEMU-native TLS * The field instance\_name was added to InstanceCreatePayload * Make functional-py37 job work like others * Allow run metadata api per cell * Enhance exception raised when invalid power state * Doc: rebuild can result in SHUTOFF VM state * Rename Ironic jobs * Extend NeutronFixture to return port with resource request * libvirt: Support native TLS for migration and disks over NBD * Follow up for "Add API ref guideline for body text" * Remove args(os=False) in monkey\_patch * Run nova-lvm job on nova/privsep/\* changes * Fix circular import in nova.privsep.utils * Change to debug repetitive info messages * libvirt: Add workaround to cleanup instance dir when using rbd * Remove useless test samples for v2.66 * Fix rfc3986.is\_valid\_uri deprecation warnings * Use oslo\_db.sqlalchemy.test\_fixtures * libvirt: generalize rbd volume fallback removal statement * Ensure rbd auth fallback uses matching credentials * doc: Switch header styles in support doc * Add links to summit videos in user/cells.rst * Add functional regression recreate test for bug 1790204 * nit: Add space to feature support docs * vmware:add support for the hw\_video\_ram image property * Update instance.availability\_zone during live migration * Fix a broken link * Drop old service version check compat from \_delete\_while\_booting * Remove "API Service Version" upgrade check * Remove "Resource Providers" upgrade check * Fix an inaccurate link in nova doc * Pass request\_spec from compute to cell conductor on reschedule * Exclude build request marker from server listing * Document using service user tokens for long running operations * Redirect user/placement to placement docs * Handle unbound vif plug errors on compute restart * Fix a broken-link in nova doc * Fix a broken-link in nova doc * Use renamed template 'integrated-gate-py3' * Remove legacy RequestSpec compat from conductor rebuild\_instance * Remove legacy RequestSpec compat from conductor unshelve\_instance * Remove legacy RequestSpec compat code from live migrate task * Remove legacy request spec compat code from API * Address nits on I1f1fa1d0f79bec5a4101e03bc2d43ba581dd35a0 * Address nits on I08991796aaced2abc824f608108c0c786181eb65 * doc: Rework 'resize' user doc * Migrate "reboot an instance" user guide docs * Fix jsonutils.to\_primitive UserWarning * Move interface enabling to privsep * Move simple execute call to processutils * Move some linux network helpers to use privsep * Move bridge creation to privsep * Move a generic bridge helper to a linux\_net privsep file * Properly log request headers in 
metadata API * Default zero disk flavor to RULE\_ADMIN\_API in Stein * Drop request spec migration code * Fix best\_match() deprecation warning * Remove mox in libvirt/test\_driver.py (8) * Remove mox in libvirt/test\_driver.py (7) * Fix the link to the Placement API Version History * Add descriptions about microversions * Migrate upgrade checks to oslo.upgradecheck * Fix up force live migration completion docs * libvirt: remove live\_migration\_progress\_timeout config * libvirt: add live migration timeout action * Fail to live migration if instance has a NUMA topology * Add DownCellFixture * Remove GROUP BY clause from CellMapping.get\_by\_project\_id * Add py36/py37 functional jobs to the experimental queue * Add python 3.7 unit and functional tox jobs * Replace ThreadPoolExecutor with GreenThreadPoolExecutor * Fix destination\_type attribute in the bdm\_v2 documentation * Add irrelevant-files for grenade-py3 jobs * allow tcp-based consoles in get\_console\_output * Use external placement in functional tests * Remove lock on SchedulerReportClient.\_create\_client * DRY up SchedulerReportClient init * Only construct SchedulerReportClient on first access from API * Cleanup vendordata docs * Remove utils.execute() from virt.disk.api * Remove utils.execute() from the hyperv driver * Remove the final user of utils.execute() from virt.images * Remove final users of utils.execute() in libvirt * Imagebackend should call processutils.execute directly * Handle tags in \_bury\_in\_cell0 * Make compute rpcapi version calculation check all cells * Only warn about not having computes nodes once in rpcapi * Fix typo * Move nova.libvirt.utils away from using nova.utils.execute() * Remove utils.execute() from quobyte libvirt storage driver * Fix target used in nova.policy.check\_is\_admin * refactor get\_console\_output() for console logfiles * Final release note for versioned notification transformation * Add API ref guideline for body text * Remove allocations before setting vm\_status to SHELVED\_OFFLOADED * Drop pre-cellsv2 compat in compute API.get() * Move nova-cells-v1 to experimental queue * Ignore MoxStubout deprecation warnings * Fixed concurrent access to direct io test file * Add docs for (initial) allocation ratio configuration * Note the aggregate allocation ratio restriction in scheduler docs * Add compute\_node ratio online data migration script * Add ratio online data migration when load compute node * Use tempest [compute]/build\_timeout in evacuate tests * Update mailinglist from dev to discuss * Clean up header encoding handling in compute API * Remove utils.execute() from libvirt remotefs calls * Remove utils.execute() calls from xenapi * Create BDMs/tags in cell with instance when over-quota * Add secret=true to fixed\_key configuration parameter * Add functional regression test for bug 1806064 * Fix sloppy initialization of the new disk ops semaphore * Revert "Add regression test for bug 1550919" * Use new \`\`initial\_xxx\_allocation\_ratio\`\` CONF * Remove placement perf check * Mention size limit on user data in docs * Transform scheduler.select\_destinations notification * SIGHUP n-cpu to clear provider tree cache * libvirt: Refactor handling of PCIe root ports * Fix misuse of assertTrue * Workaround a race initialising version control in db\_version() * Make [cinder]/catalog\_info no longer require a service\_name * Remove get\_node\_uuid * Restore nova-consoleauth to install docs * Change the default values of XXX\_allocation\_ratio * Remove Placement API reference * Always 
read-deleted=yes on lazy-load * Refactor TestEvacuateDeleteServerRestartOriginalCompute * Fix InstanceNotFound during \_destroy\_evacuated\_instances * Give drop\_move\_claim() correct docstring * Add missing ws seperator between words * Drop cruft code for all\_tenants behaviour * Remove ironic/pike note from \*\_allocation\_ratio help * Use links to placement docs in nova docs * Add a bug tag for nova doc * Add I/O Semaphore to limit concurrent disk ops * Remove NovaException logging from scatter\_gather\_cells * Transform compute\_task notifications * Add HPET timer support for x86 guests * Consider root id is None in the database case * Remove v1 check in Cinder client version lookup * Add CellsV2 FAQ about API design decisions * Use long\_rpc\_timeout in select\_destinations RPC call * Allow driver to specify switch&port for faster lookup * Fix server query examples * Nix refs to ResourceProvider obj from libvirt UT * Skip double word hacking test * Fix regression in glance client call * Add description of custom resource classes * Make \_instances\_cores\_ram\_count() be smart about cells * Make supports\_direct\_io work on 4096b sector size * modify the avaliable link * api-ref: Add a description about sort order * Add debug logs when doubling-up allocations during scheduling * Delete NeutronLinuxBridgeInterfaceDriver * Mention meta key suffix in tenant isolation with placement docs * libvirt: change "Ignoring supplied device name" warning to info * Fix a help string in nova-manage * Use SleepFixture instead of mocking \_ThreadingEvent.wait * remove mocks of oslo.service private members * Harden placement init under wsgi * Fix version details API does not return 200 OK * Add a link to the doc contrib guide * Improve formats of the Compute API guide * Remove LazyLoad of Scheduler Clients * Allow resource\_provider\_association\_refresh=0 * prevent common kwargs from glance client failure * Fix support matrix for VMware UEFI support * Add bandwidth related standard resource classes * Add requested\_resources field to RequestSpec * Add request\_spec.RequestGroup versioned object * Update compute API.get() stubs in test\_access\_ips * Update compute API.get() stubs for test\_disk\_config * Update compute API.get() stubs for test\_\*security\_groups * Update compute API.get() stubs in test\_server\_actions * Update compute API.get() stubs in test\_serversV21 * Update compute API.get() mocks in test\_server\_metadata * Convert exception messages to strings * Trivial: add reminder to update Tempest's scheduler\_enabled\_filters * Update the description to make it more accuracy * Pass disk\_info dict to libvirt\_info * Fix libvirt volume tests passing invalid disk\_info * Default embedded instance.flavor.is\_public attribute * [Trivial Fix] Correct spelling error of "should" and "resource" * Clean up cpu\_shared\_set config docs * quota: remove defaults kwarg in get\_project\_quotas * quota: remove QuotaEngine.register\_resources() * PowerVM upt parity for reshaper, DISK\_GB reserved * Minimal construct plumbing for nova service-list when a cell is down * Minimal construct plumbing for nova show when a cell is down * Refactor scatter-gather utility to return exception objects * Minimal construct plumbing for nova list when a cell is down * Modify get\_by\_cell\_and\_project() to get\_not\_deleted\_by\_cell\_and\_project() * Explain cpu\_model\_extra\_flags and nested guest support * Run negative server moving tests with nested RPs * Kill @safe\_connect in \_get\_provider\_traits * libvirt: 
Avoid setting MTU during live migration if unset * Add tests for bug #1800511 * No longer call \_normalize\_inventory\_from\_cn\_obj from upt flow * Provide allocation\_ratio/reserved amounts from update\_provider\_tree() * Fix nits in I7cbd5d9fb875ebf72995362e0b6693492ce32051 * tox: Stop build \*all\* docs in 'docs' * Fix min config value for shutdown\_timeout option * Fix os-simple-tenant-usage result order * Add recreate test for bug 1799892 * Add nova-status upgrade check for consoles * PowerVM: update\_provider\_tree() (compatible) * Add functional regression test for bug 1794996 * Add volume-backed evacuate test * Add post-test hook for testing evacuate * Cleanups for the scheduler code * Use RequestSpec.user\_id in scheduler.utils.claim\_resources * Remove restart\_scheduler\_service() method * Drop legacy live migrate allocation compat code * Reject forced move with nested source allocation * Add API ref guideline for examples * api-ref: Add descriptions of error cases * api-ref: Remove unnecessary minimum microversion * Add a hacking rule for deprecated assertion methods * Make CellDatabases fixture reentrant * Add more documentation for online\_data\_migrations CLI * Add functional recreate test for bug 1799727 * quota: remove default kwarg on get\_class\_quotas() * Fix ironic client ironic\_url deprecation warning * Consider allocations invovling child providers during allocation cleanup * quota: remove QuotaDriver.destroy\_all\_by\_project() * Add restrictions on updated\_at when getting instance action records * Add restrictions on updated\_at when getting migrations * quota: remove unused Quota driver methods * quota: remove unused code * Add regression test for bug 1550919 * Fix test bug when host doesn't have /etc/machine-id * conductor: Recreate volume attachments during a reschedule * Add regression test for bug#1784353 * fixtures: Track volume attachments within CinderFixtureNewAttachFlow * Fix up compute rpcapi version for pike release * Rename tempest-nova job to follow conventions * Convert legacy-tempest-dsvm-neutron-src-oslo.versionedobjects job * Drop legacy cold migrate allocation compat code * Add debug logs for when provider inventory changes * Log the operation when updating generation in ProviderTree * api-ref: 'os-hypervisors' doesn't reflect overcommit ratio * Document each libvirt.sysinfo\_serial choice * Use tempfile for powervm config drive * Remove the CachingScheduler * Ensure attachment cleanup on failure in driver.pre\_live\_migration * Use assertRegex instead of assertRegexpMatches * Remove the extensions framework from wsgi.py * Remove more code related to extensions and testing * Remove the caching the resource on Request object * Fix block\_device\_mapping\_v2 mention in server create API reference * Fix typo in libvirt.hw\_machine\_type help * Bump os-brick version to 2.6.1 * Ignore uuid if already set in ComputeNode.update\_from\_virt\_driver * Fix formatting non-templated cell URLs with no config * Use unique consumer\_id when doing online data migration * Add recreate test for bug 1798163 * Handle online\_data\_migrations exceptions * Remove duplicate legacy-tempest-dsvm-multinode-full job * Handle volume API failure in \_post\_live\_migration * Move live\_migration.pre.start to the start of the method * Add some more docs for upgrade checkers * Don't persist RequestSpec.requested\_destination * Add microversion 2.67 to rest api version history * Deprecate the nova-xvpvncproxy service * Deprecate the nova-console service * doc: Add minimal 
documentation for MKS consoles * doc: Add minimal documentation for RDP consoles * doc: Rewrite the console doc * doc: update metadata service doc * Migrate nova v2.0 legacy job to zuulv3 * Fix deprecated base64.decodestring warning * Fix NoneType error in \_notify\_volume\_usage\_detach * Zuul: Update barbican experimental job * Increment versioning with pbr instruction * Add regression test for bug 1797580 * Use tempest-pg-full * Add microversion 2.67 to support volume\_type * Add compute API validation for when a volume\_type is requested * Add compute version 36 to support \`\`volume\_type\`\` * Use nova-consoleauth only if workaround enabled * fix "you" typo * Skip \_remove\_deleted\_instances\_allocations if compute is new * Replace openSUSE experimental check with newer version * Transform volume.usage notification * api-ref: Replace non UUID string with UUID * Remove useless TODO section * api-ref: Remove a description in servers-actions.inc * Make ResourceTracker.tracked\_instances a set * Properly track local root disk usage during moves * Add regression test for bug 1796737 * Fix missing import in test\_compute\_mgr * Move test.nested to utils.nested\_contexts * conf: Deprecated 'config\_drive\_format' * Fix nits in choices documentation * Remove an unnecessary duplicate flag * Not set instance to ERROR if set\_admin\_password failed * De-dupe subnet IDs when calling neutron /subnets API * Handle missing marker during online data migration * Run ServerMovingTests with nested resources * Refactor allocation checking in functional tests * Use provider tree in virt FakeDriver * Enable nested allocation candidates in scheduler * consumer gen: support claim\_resources * api-ref: Move the evacuate action to admin action * Add scatter-gather-single-cell utility * Handle IndexError in \_populate\_neutron\_binding\_profile * Fix logging parameter in \_populate\_pci\_mac\_address * Skip test\_parallel\_evacuate\_with\_server\_group until fixed * doc: fix and clarify --block-device usage in user docs * Placement: Remove usage of get\_legacy\_facade() * conf: Convert 'live\_migration\_inbound\_addr' to HostAddressOpt * conf: Gather 'live\_migration\_scheme', 'live\_migration\_inbound\_addr' * VMware: Live migration of instances * Remove redundant irrelevant-files from neutron-tempest-linuxbridge * Add hide server address tests in test\_serversV21.py * Fix neutron-tempest-linuxbridge irrelevant-files * Raise error on timeout in wait\_for\_versioned\_notifications * Replace usage of get\_legacy\_facade() with get\_engine() * Add volume\_type field to BlockDeviceMapping object * Remove unnecessary redirect * Update doc * Fix stacktraces with redis caching backend * remove commented-out code * Use INFO for logging no allocation candidates * Don't emit warning when ironic properties are zero * Null out instance.availability\_zone on shelve offload * Follow up for Ie991d4b53e9bb5e7ec26da99219178ab7695abf6 * Follow up for Iba230201803ef3d33bccaaf83eb10453eea43f20 * Follow up for Ib6f95c22ffd3ea235b60db4da32094d49c2efa2a * nova-manage - fix online\_data\_migrations counts * Add attach kwarg to base/nova-net allocate\_for\_instance methods * consumer gen: more tests for delete allocation cases * Pick next minimum libvirt / QEMU versions for "T" release * Enforce case-sensitive hostnames in aggregate host add * Revert "Make host\_aggregate\_map dictionary case-insensitive" * api-ref: add 'migrations' param to GET /os-migrations * Option "scheduler\_default\_filters" is deprecated * consumer gen: 
move\_allocations * doc:update virtual gpu doc * Consumer gen: remove\_provider\_from\_instance\_allocation * Consumer gen support for put allocations * Consumer gen support for delete instance allocations * api-ref: Fix wrong bold decoration * placement: Always reset conf.CONF when starting the wsgi app * Set defult value of num\_nvme\_discover\_tries=5 * Rename "polling\_changes-since\_parameter.rst" * Imported Translations from Zanata * Ignore VirtDriverNotReady in \_sync\_power\_states periodic task * nova-status - don't count deleted compute\_nodes * libvirt: fix disk\_bus handling for root disk * Remove deprecated nova-consoleauth reference from doc * Imported Translations from Zanata * Add get\_by\_cell\_and\_project() method to InstanceMappingList * Making instance/migration listing skipping down cells configurable * ironic: stop hammering ironic API in power sync loop * Nix update\_instance\_allocation, \_allocate\_for\_instance * Filter deleted computes from get\_all\_by\_uuids() * Fix missing specifying doctrees directory * libvirt: Drop MIN\_LIBVIRT\_PF\_WITH\_NO\_VFS\_CAP\_VERSION * Remove an unnecessary comment * Mention SR-IOV cold migration limitation in admin docs * Add contributor guide for upgrade status checks * libvirt: mdevs returning parent and vendor PCI info * Remove deprecated hide\_server\_address\_states option * Resource retrieving: add changes-before filter * cells: Be explicit in docs about service restarts * doc trivial: additional info to admin-password-injection * Add missing backticks in nova-manage docs * Fix some typos in nova api ref doc * Transform libvirt.error notification * Remove mox in test\_compute\_api.py (4) * Remove mox in libvirt/test\_driver.py (6) * Refactor NeutronFixture * libvirt: Use 'virt' as the default machine type for ARMv7 * add caching to \_build\_regex\_range * Allow ability for non admin users to use all filters on server list * Rename changes-since test sample file * remove virt driver requires\_allocation\_refresh * Fix docs and add functional test for AggregateMultiTenancyIsolation * Noop CantStartEngineError in targets\_cell if API DB not configured * Fix mock.patch usage in unit tests * Fix evacuate logging * conf: Use new-style choice values * Follow devstack-plugin-ceph job rename * Fix resource tracker updates during instance evacuation * Cleanup zuul.yaml * add python 3.6 unit test job * switch documentation job to new PTI * import zuul job settings from project-config * fix a spelling error * Update docs for live\_migration\_progress\_timeout option * Add an example to add more pci devices in nova.conf * Fix formatting in changes-since guide * Do not dump all instances in the scheduler * Use six.string\_types to improve python2/3 compatibility * doc: update info for hypervisors * fup: Fix import order and test nit * Remove redundant image GET call in \_do\_rebuild\_instance * Configure placement DB context manager for nova-manage/status * Use uuidsentinel from oslo.utils * Fix DB archiver AttributeError due to wrong table name attribute used * Fix nova-status "\_check\_resource\_providers" check * Fix TypeError in nova-manage cell\_v2 list\_cells * Document unset/reset wrinkle for \*\_allocation\_ratio options * Docs: update link for remote debugging * Removing pip-missing-reqs from default tox jobs * Fix a failure to format config sample * Other host allocs may appear in gafpt during evac * Move conductor wait\_until\_ready() delay before manager init * Don't persist zero allocation ratios in ResourceTracker * 
hardware: fix memory check usage for small/large pages * Fix nits: Compute: Handle reshaped provider trees * Fix reshaper report client functonal test nits * Document differences and similaries between extra specs and hints * Combine error handling blocks in \_do\_build\_and\_run\_instance * Time how long select\_destinations() takes in conductor * Add encrypted volume support to feature matrix docs * Remove old check\_attach version check in API * Delete instance\_group\_member records from API DB during archive * Add functional test for live migrate with anti-affinity group * Revert "libvirt: add method to configure migration speed" * (Re)start caching scheduler after starting computes in tests * Restart scheduler in TestNovaManagePlacementHealAllocations * [placement] Make \_ensure\_aggregate context not independent * Send soft\_delete from context manager * Transform missing delete notifications * doc: add info how to troubleshoot vmware specific problems * Fix a broken conf file description in networking doc * Mention (unused) RP generation in POST /allocs/{c} * Fail heal\_allocations if placement is borked * reshaper gabbit: Nix comments re doubled max\_unit * Do test\_reshape with an actual startup * Compute: Handle reshaped provider trees * Revert "Don't use '\_TransactionContextManager.\_async'" * Don't use '\_TransactionContextManager.\_async' * libvirt: skip setting rx/tx queue sizes for not virto interfaces * Make monkey patch work in uWSGI mode * privsep: Handle ENOENT when checking for direct IO support * [placement] split gigantor SQL query, add logging * Optimize global marker re-lookup in multi\_cell\_list * Record cell success/failure/timeout in CrossCellLister * Make instance\_list perform per-cell batching * Update volume-attachment API url in policies * Fix race condition in reshaper handler * Make scheduler.utils.setup\_instance\_group query all cells * Deprecate Core/Ram/DiskFilter * Document no content on POST /reshaper 204 * api-ref: add a warning about calling swap volume directly * api-ref: fix volume attachment update policy note * Report client: update\_from\_provider\_tree w/reshape * Report client: \_reshape helper, placement min bump * Report client: get\_allocations\_for\_provider\_tree * Report client: Real get\_allocs\_for\_consumer * List instances from all cells explicitly * Batch results per cell when doing cross-cell listing * doc: Note NUMA topology requirements for numa-aware-vswitches * api: Remove unnecessary default parameter * hyperv: Cleans up live migration Planned VM * Correct the release notes related to nova-consoleauth * tests: Create functional libvirt test base class * Fix create\_resource\_provider docstring * tests: Move mocking to setUp * Remove noisy DEBUG log * Make get\_allocations\_for\_resource\_provider raise * reshaper: Look up provider if not in inventories * [placement] Add functional test to verify presence of policy * Normalize dashless 'resource provider create' uuid * [placement] Add /reshaper handler for POST * Clarify which context is used by do\_query() * Make RecordWrapper record RequestContext and expose cell\_uuid * Stash the cell uuid on the context when targeting * Make CELL\_TIMEOUT a constant * [placement] Regex consts for placement schema * Wait for network-vif-plugged on resize revert * libvirt: Always escape IPv6 addresses when used in migration URI * Move str to six.string\_types * libvirt: Don't react to VIR\_DOMAIN\_EVENT\_SUSPENDED\_MIGRATED events * Set policy\_opt defaults in placement deploy unit test 
* Explicitly fail if trying to attach SR-IOV port
* Filter out instances without a host when populating AZ
* Set policy_opt defaults in placement gabbi fixture
* Fix soft deleting vm fails after "nova resize" vm
* Use placement microversion 1.26 in update_from_provider_tree
* Remove ChanceScheduler
* Doc: PowerVM does support shelve
* comment correction for libvirt multiattach
* Remove the deprecated API extensions policies
* Update contributor guide for Stein
* Add zvm CI information
* Add zvm admin intro and hypervisor information
* Update api-guide and api-ref to be clear about forced-down
* Making consistent use of GiB and MiB in API ref
* placement: use single-shot INSERT/DELETE agg
* Add trait query to placement perf check
* Add explanatory prefix to post_test_perf output
* Py3 fix in fake image service
* use static pages for mitaka and newton release notes
* Revisions on notifications doc
* VMware: add missing os types in vSphere sdk 6.5
* Ironic: report 0 for vcpus/memory_mb/disk_gb resources
* Remove blacklisted py3 xen tests
* Add placement perf info gathering hook to end of nova-next
* Fix service list for disabled compute using MC driver
* Delete instance_id_mappings record in instance_destroy
* Add functional test for affinity with multiple cells
* [placement] api-ref: Add missing aggregates example
* Remove mox in libvirt/test_driver.py (5)
* add zvm into support matrix
* Trivial fix to remove extra 'will' on microversion doc
* Imported Translations from Zanata
* Handle unicode characters in migration params
* placement: use simple code paths when possible
* Test case for multiple forbidden traits
* Adds a test for _get_provider_ids_matching()
* Make Xen code py3-compatible
* Revert "libvirt: slow live-migration to ensure network is ready"
* improve migration script
* placement: ignore policy scope check failures if not enforcing scope
* api-ref: fix GET /flavors?is_public description
* Update reno for stable/rocky
* Remove patching the mock lib
* block_device: Rollback volumes to in-use on DeviceDetachFailed
* Quota details for key_pair "in_use" is 0
* Add additional info to resource provider aggregates update API

18.0.0.0rc1
-----------

* Handle binding_failed vif plug errors on compute restart
* Fix image-defined numa claims during evacuate
* Add a prelude release note for the 18.0.0 Rocky GA
* Nix 'new in 1.19' from 1.19 sections for rp aggs
* libvirt: Use os.stat and os.path.getsize for RAW disk inspection
* Trivial fix on migration doc
* [placement] api-ref: add description for 1.29
* Update the parameter explain when updating a volume attachment
* Update ssh configuration doc
* Update nova network info when doing rebuild for evacuate operation
* Docs: Add guide to migrate instance with snapshot
* Update compute rpc version alias for rocky
* Add the guideline to write API reference
* get provider IDs once when building summaries
* Remove Neutron MetaAPIProxy from cellsv2-layout
* [placement] Avoid rp.get_by_uuid in allocation_candidates
* Fix host validity check for live-migration
* libvirt: Reduce calls to qemu-img during update_available_resource
* Refactor cell_type in compute/api.py
* Add explicit functional-py36 tox target
* xx_instance_type_id in list_migrations should be integer
* Fix bad links for admin-guide
* api-ref: Add descriptions for rebuild
* Add microversion info in the os-server-groups API samples
* Update really old comments about vmware hosts managing multiple nodes
* doc: mark the max microversion for rocky
* Fix resize revert to use non-legacy alloc handling
* api-ref: fix min_version for parent_provider_uuid in responses
* [placement] Add version directives in the history doc
* Use common functions in granular fixture
* Fix non-ascii char in doc
* Update resources once in update_available_resource
* Define irrelevant-files for tempest-full-py3 job
* Add tempest-slow job to run the tempest slow tests
* Not use project table for user table
* Adds a test for getting allocations API
* Update RequestSpec.flavor on resize_revert
* Use CONF.long_rpc_timeout in post_live_migration_at_destination
* Optimize AZ lookup during schedule_and_build_instances
* [placement] ensure_rc_cache only at start of process
* Remove unused flavor_delete_info() method
* Reno for notification-transformation-rocky
* Deprecate upgrade_levels options for deprecated/removed services
* [placement] Move resource_class_cache into placement hierarchy
* [placement] Debug log per granular request group
* Fix nits in resource_provider.py
* Remove unused request API sample template
* Grease some more tests hitting RetryDecorator
* Scrub hw:cpu_model from API samples
* Grease test_try_deallocate_network_retry_direct
* libvirt: guest: introduce blockStats instead of domain.blockStats
* Improve NeutronFixture and remove unnecessary stubbing
* Remove unused stubbing function from test
* doc: fix resize user guide link
* tox: Ensure reused envdirs share the same deps
* Fix a typo in comment in resource_provider.py
* Refactor AllocationFixture in placement test
* Increase max_unit in placement test fixture
* Use common functions in NonSharedStorageFixture
* Hook resource_tracker to remove stale node information
* Make ResourceTracker.stats node-specific
* Add recreate test for RT.stats bug 1784705
* Reload oslo_context after calling monkey_patch()
* Fix comments in _anchors_for_sharing_providers and related test
* Ensure the order of AllocationRequestResources
* Don't overwrite greenthread-local context in host manager
* libvirt: Remove usage of migrateToURI{2} APIs
* Remove unnecessary PlacementFixture setups
* Don't poison Host._init_events if it's already mocked
* Remove redundant join in _anchors_for_sharing_providers
* [placement] Retry allocation writes server side
* [placement] api-ref: add traits parameter
* Retry decorator fix for instances which go into ERROR state during bulk delete
* Fix formatting for vcpu_pin_set and reserved_huge_pages
* Updated AggregateImagePropertiesIsolation filter illustration
* [placement] Use a simplified WarningsFixture
* [placement] Use a non-nova log capture fixture
* [placement] Use oslotest CaptureOutput fixture
* [placement] Use own set_middleware_defaults
* Extract _update_to_placement method in resource tracker
* Set default of oslo.privsep.daemon logging to INFO level
* Remove superfluous network stubbing in func test
* Add additional functional tests for NUMA networks
* Add description for placement 1.26

18.0.0.0b3
----------

* Add functional test for forced live migration rollback allocs
* Assorted cleanups from numa-aware-vswitches series
* libvirt: Revert non-reporting DISK_GB if sharing
* Pass source vifs to driver.cleanup in _post_live_migration
* Fix create_all() to replace_all() in comments
* compute node local_gb_used include swap disks
* Use source vifs when unplugging on source during post live migrate
* Fix all invalid obj_make_compatible test case
* Change deprecated policies to policy
* api-ref: document user_data length restriction
* Fix accumulated nits from port binding for live migration series
* [placement] Use base test in placement functional tests
* Fix signature of _FakeImageService.download
* [placement] Extract base functional test case from test_direct
* Use vif.vif_name in _set_config_VIFGeneric
* doc: add missing permission for the vCenter service account
* Hyper-V + OVS: plug vifs before starting VMs
* Use placement context in placement functional tests
* ironic: Report resources as reserved when needed
* doc: remove rocky-specific nova-scheduler min placement version
* scheduler: Start utilizing RequestSpec.network_metadata
* Consider network NUMA affinity for move operations
* Add nova-manage placement sync_aggregates
* Add functional tests for numa-aware-vswitches
* libvirt: Start populating NUMACell.network_metadata field
* conf: Add '[neutron] physnets' and related options
* tox: Silence psycopg2 warnings
* FakeLibvirtFixture: mock get_fs_info
* Add method to get cpu traits
* Blacklist greenlet 0.4.14
* Enhance doc to guide user to use nova user
* doc: link to AZ talk from the Rocky summit
* doc: link to CERN summit video about upgrading from cells v1 to v2
* Update queued-for-delete from the ComputeAPI during deletion/restoration
* Online data migration for queued_for_delete flag
* ironic: add instance_uuid before any other spawn activity
* Use consumer generation in _heal_allocations_for_instance
* Cache is_bfv check in ResourceTracker
* Add shelve/unshelve wrinkle to volume-backed disk func test
* Fix wonky reqspec handling in conductor.unshelve_instance
* Heal RequestSpec.is_bfv for legacy instances during moves
* Report 0 root_gb in resource tracker if instance is bfv
* Docs: Add Placement to Nova system architecture
* libvirt: Remove reference to transient domain when detaching devices
* Add queued_for_delete field to InstanceMapping object
* Rename auth_uri to www_authenticate_uri
* Func test for improper cn local DISK_GB reporting
* perform reshaper operations in single transaction
* docs: add nova host-evacuate command to evacuate documentation
* compute: Ensure pre-migrating instances are destroyed during init_host
* In Python3.7 async is a keyword [1]
* Check provider generation and retry on conflict
* Fix missing print format error
* Remove stevedore extensions server_create method
* Update RequestSpec.instance_uuid during scheduling
* Add regression test for bug 1781710
* Skip test_resize_server_revert_with_volume_attached in nova-lvm
* Disable limits if force_hosts or force_nodes is set
* conductor: use port binding extended API in during live migrate
* Port binding based on events during live migration
* Annotate flows and handle PortBindingDeletionFailed in ComputeManager
* Implement migrate_instance_start method for neutron
* libvirt: use dest host vif migrate details for live migration
* libvirt: use dest host port bindings during pre_live_migration
* libvirt: factor out pre_live_migration plug_vifs call
* Add VIFMigrateData.get_dest_vif
* Add VIFMigrateData object for live migration
* [placement] disallow additional fields in allocations
* Fix ServerMigrationSampleJsonTests to use sample files from version dir
* Remove "DEPRECATED" tag from Obsolete APIs
* Remove support for /os-floating-ip-dns REST API
* Remove support for /os-floating-ips-bulk REST API
* Avoid requesting DISK_GB allocation for root_gb on BFV instances
* [placement] cover bad content-length header
* [placement] Add gabbi coverage for inv of missing rp
* [placement] Add gabbi coverage for an inventory change
* clarify usage of upgrade_levels group
* Fix confusing log message in scheduler
* libvirt: remove unused attribute driver for LibvirtConfigNodeDevice
* Fix the incorrect description and sample
* Transform metrics.update notification
* update tox venv env to install all requirements
* Fix "XLibvirt KVM (ppc64)" typo in feature support matrix docs
* Call generate_image_url only for legacy notification
* Add unshelve instance error info to fault table
* Address nit in 79dac41fee178dabb547f4d7bc10609630767131
* Escalate UUID validation warning to error in test
* Fix a newly introduced UUID warning in the unit test
* Move legacy-tempest-dsvm-nova-os-vif in repo
* API: add support to abort queued live migration in microversion 2.65
* Fix ServerMigrationSampleJsonTestsV2_24 to use its own sample file
* Compute: add support to abort queued live migration
* Use ThreadPoolExecutor for max_concurrent_live_migrations
* Update HostState.instances during _consume_selected_host
* Replace support matrix ext with common library
* Add UUID validation for consumer_uuid
* Address nits in server group policy series
* Adjust log style and remove ocata support
* z/VM Driver: add get console output
* z/VM Driver: add power actions
* z/VM Driver: add snapshot function
* z/VM Driver: Spawn and destroy function of z/VM driver
* z/VM Driver: Initial change set of z/VM driver
* Transform instance.live_migration_force_complete notification
* Transform aggregate.update_prop notification
* Add note about reschedules and num_attempts in filter_properties
* Add another up-call to the cells v2 caveats list
* Stop using HostAPI.service_delete
* Handle HostMappingNotFound when deleting a compute service
* Skip more rebuild tests for cells v1 job
* Refactor _heal_instances_in_cell
* Heal allocations with incomplete consumer information
* fix cellv2 delete_host
* Imported Translations from Zanata
* ironic: Log an error when API version is not available
* Microversion 2.64 - Use new format policy in server group
* virt/ironic: Implement rescue and unrescue
* ironic: provide facilities to gracefully navigate versions
* do not assume 1 consumer in AllocList.delete_all()
* Update process doc to be more generic about point of contact
* Follow up for Ie49d605c66062d2548241d7e04f5a2a6b98c011e
* Mention osc-placement for managing traits in docs
* Handle rebuild of instances with image traits
* Complete the api-ref of security group rule
* Adapt _validate_instance_group_policy to new policy model
* Change the ServerGroupAntiAffinityFilter to adapt to new policy
* Add policy field to ServerGroup notification object
* Add policy to InstanceGroup object
* Add nova-status upgrade check for request spec migrations
* Add placement.concurrent_update to generation pre-checks
* Delete orphan compute nodes before updating resources
* Test for unsanitized consumer UUID
* Revert "docs: Disable smartquotes"
* [placement] add error.code on a ConcurrentUpdateDetected
* Fix TypeError in prep_resize allocation cleanup
* Use hard coded values in schema than reference
* Update some placement docs to reflect modern times
* Remove unused variable in migration
* Address nits from consumer generation
* update project/user for consumer in allocation
* Use nova.db.api directly
* Update root providers in same tree
* hardware: fix hugepages memory usage per instances
* Add queued for delete to instance_mappings table
* Remove duplicate parameter in API sample documents
* placement: delete auto-created consumers on fail
* delete consumers which no longer have allocations
* make incomplete_consumer_project_id a valid UUID
* Refactor policies to policy in InstanceGroup DB model
* Add rules column to instance_group_policy table
* objects: Add RequestSpec.network_metadata
* api-ref: Example verification for servers.inc
* hardware: Start accounting for networks in NUMA fitting
* objects: Add NUMATopologyLimits.network_metadata
* Transform instance.rebuild_scheduled notification
* Remove irrelevant comment
* Avoid joins in _server_group_count_members_by_user
* Fix server_group_members quota check
* Add functional regressions tests for server_group_members OverQuota
* Handle compare in test_pre_live_migration_volume_backed* directly
* Resource_provider API handler does not return specific error codes
* Remove mox in unit/network/test_neutronv2.py (2)
* Add documentation for emulator threads policy
* Fix whitespace damage
* Use valid UUID in the placement gabbits
* Transform instance.live_migration_post notification
* Transform instance.live_migration_rollback_dest notification
* Update install guide for placement database configuration
* move lookup of provider from _new_allocations()
* Time how long pre_live_migration() takes
* Add action initiator attribute to the instance payload
* Default embedded instance.flavor.disabled attribute
* objects: Add NUMACell.network_metadata
* network: Retrieve tunneled status in '_get_physnet_info'
* network: Always retrieve network information if available
* Stop setting glance_api_version in cinder.conf in nova-live-migration
* Wait for vif plugging during live migration job
* cover migration cases with functional tests
* Fix unbound local when saving an unchanged RequestSpec
* Prevent updating an RP's parent to form a loop
* Handle nested serialized json entries in assertJsonEqual
* libvirt: add qemu version check when configuring mtu for network
* conf: Resolve Sphinx errors
* Remove unnecessary execute permissions of a file
* Convert 'placement_api_docs' into a Sphinx extension
* Regression test for bug 1779635
* Regression test for bug 1779818
* Update admin/flavors document
* Fix missing versioned notification examples
* [doc] enhance admin/configuration/api.rst
* Use 'version2' when syncing placement db
* [placement] fix allocation handler docstring typo
* Fix placement incompatible with webob 1.7
* manage: Remove dead code
* Define common variables for irrelevant-files
* Fix nits in placement-return-all-resources series
* Add microversion for nested allocation candidate
* libvirt: Fix the rescue race for vGPU instances
* More config drive docs updates
* Remove file injection from config drive sample docs
* Use ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa in tree
* Mention PowerVM support of config drive
* tox: Reuse envdirs
* Update xenapi_disable_agent config option usage in docs
* conf: Correct documentation for '[pci] passthrough_whitelist'
* tox: Document and dedupe mostly everything
* trivial: Remove 'tools/releasenotes_tox.sh'
* Add regression test for bug #1764883
* Remove mox in sec group test and functional tests
* Use nova.test.TestingException
* libvirt: Add missing encryption_secret_uuid tests for pre_live_migration
* Mention server status in api-ref when rebuild
* Remove mox in unit/network/test_neutronv2.py (1)
* Make nova-lvm run in check on libvirt changes and compute API tests
* Allow templated cell_mapping URLs
* Remove remaining legacy DB API instance_group* methods
* Remove unused DB API instance_group_member* methods
* Remove unused DB API instance_group_delete method
* Remove compatibility code for instance groups
* [placement] demonstrate part of bug 1778591 with a gabbi test
* Handle CannotDeleteParentResourceProvider to 409 Conflict
* Fix unit test modifying global state
* [placement] Fix capacity tracking in POST /allocations
* Update scheduler to use image-traits
* [placement] Add test demonstrating bug 1778743
* conf: libvirt: Make `/dev/urandom` the default for 'rng_dev_path'
* Skip ServerShowV247Test.test_update_rebuild_list_server in nova-cells-v1 job
* libvirt: Drop MIN_LIBVIRT_VHOSTUSER_MQ
* Fix CLI docs for nova-manage api_db commands
* Update API reference for os-floating-ip-pools
* Fix API reference for os-floating-ip-dns
* Fix API reference for os-floating-ips-bulk
* Remove support for /os-fixed-ips REST API
* Fix the duplicated config options of api_database and placement_database
* network: Rename 'create_pci_requests_for_sriov_ports'
* network: Rename '_get_phynet_info'
* Make nova list and migration-list ignore down cells
* Add instance.unlock notification
* [placement] Demonstrate bug in consumer generation handling
* Delete port bindings in setup_networks_on_host if teardown=True
* Add "activate_port_binding" neutron API method
* Add "delete_port_binding" network API method
* Add "bind_ports_to_host" neutron API method
* Test alloc_cands with indirectly sharing RPs
* Switch to oslo_messaging.ConfFixture.transport_url
* network: Unchain '_get_phynet_info' from '_get_port_vnic_info'
* Adapter raise_exc=False by default
* Bump keystoneauth1 minimum to 3.9.0
* conf: Deprecate 'network_manager'
* Fix bug to filter_scheduler
* Fix bug to api-ref
* [placement] Extract create_allocation_list
* libvirt: Log breadcrumb for known encryption bug
* Remove mox in test_conductor.py (2)
* Remove mox in test_conductor.py (1)
* api-ref: Fix parameters about trusted certificate IDs
* Remove mox in nova/tests/unit/virt/xenapi/stubs.py
* Fix nits from change Ia7cf4414feb335b3c2e863b4c8b4ff559b275c34
* Implement discard for file backed memory
* Fix nits from change I676291ec0faa1dea0bd5050ef8e3426d171de4c6
* placement: s/None/null/ in consumer conflict msg
* objects: Remove legacy '_to_dict' functions
* objects: Remove NUMATopologyLimits.obj_from_db_obj
* Cleanup nits in placement database changes
* Add instance.lock notification
* fix PowerVM get_bootdisk_path docstring
* Implement file backed memory for instances in libvirt
* Comment proposed ironic fix for removal of ironic driver workaround
* Ironic update_provider_tree: restore traits override
* Fix nits from change Id609789ef6b4a4c745550cde80dd49cabe03869a
* Add a microversion for consumer generation support
* Be graceful about vif plugging in early ironic driver startup
* Mention nova-status upgrade check CLI in upgrade doc
* Add information of deprecation nova-network in system-admin.rst
* Validate transport_url in nova-manage cell_v2 commands
* Add check if neutron "binding-extended" extension is available
* Wait for network-vif-plugged before starting live migration
* Don't heal allocations for deleted servers
* Convert ironic virt driver to update_provider_tree
* Fix regression when listing build_requests with marker and ip filter
* Ensure that os-traits sync is attempted only at start of process
* Isolate placement database config
* Add full traceback to ExceptionPayload in versioned notifications
* Optimize member_of check for nested providers
* Resource tracker: improve resource tracker periodic task
* Clarify placement DB schema migration
* Fix MigrateData object tests for compat routines
* Nix unused raise_if_custom_resource_class_pre_v1_1
* Skip ServerShowV263Test.test_show_update_rebuild_list_server for cellsv1
* Simplify instance name generation
* ironic: bugfix: ensure a host is set for volume connectors
* Revert "Re-using the code of os brick cinder"
* placement: Make API history doc more consistent
* Make host_aggregate_map dictionary case-insensitive
* Return all nested providers in tree
* Add osprofiler config options to generated reference
* Fix retrying lower bound in requirements.txt
* unquiesce instance after quiesce failure
* Add policy rule to block image-backed servers with 0 root disk flavor
* Enforce placement minimum in nova.cmd.status
* Update the disk_cachemodes to mention an rbd detail
* Add trusted certs to feature support matrix docs
* Fix nits from trusted certs notification change
* Remove max_size parameter from fake_libvirt_utils.fetch_*image methods
* Add PLACEMENT_DB_ENABLED=True to the nova-next job
* Optional separate database for placement API
* Add supplementary info for simple_cell_setup cmd
* Add certificate validation docs
* Add troubleshooting item about ignored microversions
* Make check_can_live_migrate_destination use long_rpc_timeout
* [placement] Add status and links fields to version document at /
* Add notification support for trusted_certs
* Fix execute mock for test_convert_image_with_errors
* rework allocation handler _allocations_dict()
* placement: Allocation.consumer field
* Ignore UserWarning for scope checks during test runs
* Add trusted_image_certificates to REST API
* Powervm configuration cleanup
* [placement] replace deprecated accept.best_match
* Update nova-status & docs: require placement 1.25
* Remove network info stubbing in functional test
* XenAPI: update the document related to vdi streaming
* XenAPI: define a new image handler to use vdi streaming
* api-ref: expand on various bdm parameters
* Add enhanced KVM storage QoS quotas
* Plumb trusted_certs through the compute service
* add consumers generation field
* Implement certificate_utils
* Provide a direct interface to placement
* libvirt: Don't report DISK_GB if sharing
* Remove nova dependencies from test_resource_provider
* Adjust db using allocation unit tests
* Move db using provider unit tests to functional
* Update links in README
* Remove unnecessary parameters from create volume API
* VMware: remove reading resourcePool data
* VMware: save VC reads for information that is static
* Use oslo.messaging per-call monitoring
* Refactor libvirt get_memory_used_mb()
* xenapi: drop deprecated vif_driver config option
* placement: always create consumer records
* Document the internal online_migrations function behaviors
* libvirt: remove unused get_ovs_interfaceid()
* doc follow https://review.openstack.org/#/c/572195
* Extract part of PlacementFixture to placement
* fix tox python3 overrides
* Remove mox in libvirt/test_driver.py (4)
* Remove mox in test_compute_api.py (3)

18.0.0.0b2
----------

* Fix bug to doc:nova-status
* Fix the file name of development-environment.rst
* Fix issues in nova-show-usage-statistics-for-hosts-instances.rst
* Change consecutive build failure limit to a weigher
* Do not use nova.test in placement.test_deploy
* Do not use nova.test in placement.test_microversion
* Do not use nova.test in placement.test_handler
* Do not use nova.test in placement.test_fault_wrap
* Do not use nova.test in placement.test_requestlog
* Do not use nova.test in placement.handlers.test_aggregate
* Do not use nova.test in placement.test_util
* sync_guest_time: use the proper errno
* Remove support for /os-virtual-interfaces REST API
* add mtu to libvirt xml for ethernet and bridge types
* Fix doc nit
* Ensure resource class cache when listing usages
* api-ref: mention that you can't re-parent a resource provider
* Transform instance.exists notification
* Enhance api-guide general info some updates
* Fix some wrong urls in doc
* Trivial: let internal use only func has _ prefix
* Fix bug to doc
* Re-base placement object unit tests on NoDBTestCase
* [placement] Do not import oslo_service for log_options
* Fix bug for hypervisors
* Fix typo in enable_certificate_validation config option help
* Fix some inconsistencies in doc
* Only run placement request filters when Placement will be called
* Downgrade overquota warning
* Remove unused _disk_qcow2_to_raw
* Add nova-manage placement heal_allocations CLI
* Trim the fat on HostState.instances
* Restrict CONF.quota.driver to DB and noop quota drivers
* Consider hostdev devices when building metadata
* Refactor _build_device_metadata
* Fix invalid raise in test_compute_mgr
* Mention running rootwrap in daemon mode if hitting vif plug timeouts
* Match ComputeNode.uuid to ironic node uuid in RT
* network: update pci request spec to handle trusted tags
* metadata: add vf_trusted field to device metadata
* Skip ServerShowV254Test.test_rebuild_server in cells v1 job
* libvirt: add vf_trusted field for network metadata
* libvirt: configure trust mode for vfs
* mirror nova host aggregate members to placement
* Use instance project/user when creating RequestSpec during resize reschedule
* add parameter docstring for 'params' to libvirt.guest.Guest.migrate()
* Set scope for remaining placement policy rules
* Update overriden to overridden
* pci: don't consider case when match tags specs
* Remove mox in libvirt/test_driver.py (3)
* Adding NVMEoF for libvirt driver
* Fix doc mistakes
* Remove unused function
* Re-using the code of os brick cinder
* Fix nits in nested provider allocation candidates(2)
* Return all resources in provider_summaries
* placement: Use INNER JOIN for required traits
* Delete duplicate functions in placement test
* Use list instead of set for duplicate check
* Support nested alloc cands with sharing providers
* Fix nits in nested provider allocation candidates
* Follow up changes to granular placement policy reviews
* Add granular policy rules for allocation candidates
* Add granular policy rules for placement allocations
* Add granular policy rules for traits in placement
* Add granular placement policy rules for aggregates
* Add granular policy rules for usages
* Change exception type while detaching root device
* libvirt: Deprecate support for monitoring Intel CMT `perf` events
* Remove mox in tests/unit/api/openstack/compute
* PowerVM Driver: vSCSI Fibre Channel volume adapter
* Honor availability_zone hint via placement
* Remove the remaining of the removed option
* Convert libvirt's RBD storage to using processutils.execute()
* libvirt: Skip fetching the virtual size of block devices
* Add traits check in nested provider candidates
* Return nested providers in get_by_request
* Expand tests for multiple shared resources case
* Pushing image traits to ironic node
* Update placement upgrade docs for nova-api dependency on placement
* Avoid unnecessary joins in HostManager._get_instances_by_host
* Placement: allow to set reserved value equal to total for inventory
* Update PowerVM hypervisor docs
* Update nova-status and docs for required placement 1.24
* Granular requests to get_allocation_candidates
* libvirt: get_inventory => update_provider_tree
* Normalize inventory from update_provider_tree
* ProviderTree.has_inventory_changed for new fields
* PowerVM Driver: Localdisk
* Expose instance_get_all_uuids_by_host() from DB API and use it
* Make instance.refresh() avoid recursion better
* Make instance able to lazy-load almost everything
* Fix interpretation of max_attempts for scheduling alternates
* Update the deprecate os_region_name option
* libvirt: place emulator threads on CONF.compute.cpu_shared_set
* Fix inconsistency in docs
* Remove mox in libvirt/test_driver.py (2)
* Fakelibvirt migrateToURI3 should provide args according to libvirt doc
* Metadata-API fails to retrieve avz for instances created before Pike
* PowerVM snapshot cleanup
* Add granular policy rules for resource providers inventories
* Add granular policy rules for /resource_classes*
* Implement granular policy rules for placement
* Deduplicate config/policy reference docs from main index
* Make nova service-list use scatter-gather routine
* Fix auth_url example in hypervisor-hyper-v.rst
* Drop API compat handling for old compute error cases
* PowerVM Driver: DiskAdapter parent class
* Remove deprecated monkey_patch config options
* Debug logs for allocation_candidates filters
* Cleanup ugly stub in TestLocalDeleteAllocations
* Deprecate running API services under eventlet
* Add retrying to requirements.txt
* [placement] default to accept of application/json when */*
* We don't need utils.trycmd any more
* Move image conversion to privsep
* Update auth_url in install docs
* Add INVENTORY_INUSE to DELETE /rp/{u}/inventories
* placement: Fix HTTP error generation
* Remove unnecessary 'to_primitive' call
* Remove mox in test_xenapi.py (3)
* Remove mox in tests/unit/api/*/test_volumes.py
* Remove mox in test_live_migrate.py
* Remove mox in libvirt/test_driver.py (1)
* Added ability to configure default architecture for ImagePropertiesFilter
* __str__ methods for RequestGroup, ResourceRequest
* add lower-constraints job
* XenAPI: Pass expected return codes to resize2fs
* Make scheduler client allow multiple member_of query parameters
* Add contributor docs on deprecating and removing compute REST APIs
* Suppress UUID warning in map_instance unit tests
* Don't reschedule on RequestedVRamTooHigh errors
* Flexibly test keystonmiddleware in placement stack
* Fix HTTP500 error of changes-since on v2.0 API
* libvirt: Report the virtual size of RAW disks
* Fix irrelevant-files in nova-dsvm-multinode-base
* Remove '_apply_instance_name_template'
* Add connection_parameters to list of items copied from database
* XenAPI: deprecate the config for image handler class path
* Remove mox in test_compute_api.py (2)
* api-ref: Fix parameters for os-volume-attachments.inc
* Avoid warning log when image not exist
* update scheduler to use image-traits
* Remove support for /os-fping REST API
* Add test_set_device_mtu_default back in
* Move set_vf_interface_vlan to the new utility module
* Move create_tap_dev to the new utility module
* Address feedback from instance_list smart-cell behavior
* trivial: Explain how the marker works for instance-cell mapping
* Add random sleep between retry calls to placement
* Remove remaining log translation in scheduler
* Remove mox in test_xenapi.py (2)
* Make get_instance_objects_sorted() be smart about cells
* Add CellMapping.get_by_project_id() query method
* Skip ServerActionsTestJSON.test_rebuild_server for cells v1 job
* [doc] Add soft_deleted flag
* Expose driver_block_device fields consistently
* Fix detach_volume calls when rolling back a failed attach
* remove IVS plug/unplug as they're moved to separate plugin
* Followup for multiple member_of qparams support
* [Doc]Link policies file into api
* libvirt: always pass emulator threads policy
* compute: introduce cpu_shared_set option
* Add docs for hw_video:ram_max_mb flavor extra spec
* Use .. deprecated:: theme for deprecations
* doc: Don't confuse CPU pinning/NUMA as Hyper-V only
* Add tests for alloc cands with poor local disk
* placement: Granular GET /allocation_candidates
* libvirt: remove old rbd snapshot removal error handling
* libvirt: check image type before removing snapshots in _cleanup_resize
* Remove unused methods in nova/compute/utils.py
* Remove mox in test_xenapi.py (1)
* Migrate tempest-dsvm-multinode-live-migration job in-tree
* Fix typos in Host aggregates documentation
* Remove mox in unit/virt/xenapi/test_vmops.py
* Remove mox in test_compute_api.py (1)
* Changing scheduler sync event from INFO to DEBUG
* placement: Object changes for granular
* Use helpers in test_resource_provider (func)
* Use test_base symbols directly
* Base test module/class for functional placement db
* Fix being able to hard reboot a pausing instance
* Handle @safe_connect returns None side effect in _ensure_resource_provider
* Deprecate the nova-consoleauth service
* Update layout docs for running console proxies
* Convert websocketproxy to use db for token validation
* Remove [scheduler]/host_manager config option
* doc: Start using openstackdoctheme's extlink extension
* support multiple member_of qparams
* doc: Don't use single backticks in man pages
* trivial: Fix file permissions
* [doc]remove nova-cert leftover in doc
* Add multi-cell negative test for cold migration with target host
* Fix the request context in ServiceFixture
* Get anchors for sharing providers
* Remove IronicHostManager and baremetal scheduling options
* libvirt: Drop MIN_LIBVIRT_REALTIME_VERSION
* libvirt: Drop MIN_QEMU_POSTCOPY_VERSION
* libvirt: Drop BAD_LIBVIRT_CPU_POLICY_VERSIONS
* Convert configdrive to use processutils
* Make association_refresh configurable
* Convert certificate generation to processutils
* Convert xenapi's xvp console to processutils
* Convert fping API to processutils.execute()
* Replace Chinese punctuation with English punctuation
* libvirt: fix setting tx_queue_size when rx_queue_size is not set
* Remove stale pip-missing-reqs tox test
* Fix shelving a paused instance
* libvirt: Lift the restriction of choices for `cpu_model_extra_flags`
* libvirt: Make `cpu_model_extra_flags` case-insensitive for real
* Add user_id to RequestSpec
* Remove ExactCoreFilter ExactDiskFilter ExactRamFilter
* libvirt: Fix misleading debug msg "Instance is running"
* libvirt: Drop BAD_LIBVIRT_NUMA_VERSIONS
* Handle PortNotFoundClient exception when getting ports
* libvirt: Drop MIN_LIBVIRT_NUMA_VERSION_PPC
* libvirt: Drop MIN_LIBVIRT_BLOCK_LM_WITH_VOLUMES_VERSION
* log stale allocations as WARNING instead of DEBUG
* Make host_manager use scatter-gather and ignore down cells
* Make service all-cells min version helper use scatter-gather
* Simplify logic in get_enforcer
* Fix tox -e docs
* placement: resource requests for nested providers
* Add host/hostId to instance action events API
* Simplify BDM boot index checking
* Remove explicit instance.info_cache.delete()
* Handle deprecation of inspect.getargspec
* ServerActionsSampleJsonTest refactor
* Fix dropped check for boot_index 0 in _validate_bdm
* PowerVM Driver: Snapshot
* libvirt: fix hard reboot issue with mdevs
* Bump pypowervm minimum to 1.1.15
* Make accept-language tests work with webob 1.8.x
* Fix invalid UUIDs in test
* Functional test: cold migrate to compute down
* Use os.rename, not mv
* Proxy is_volume through DriverBlockDevice
* Use ConsoleAuthToken object to generate authorizations
* Address issues raised in adding member_of to GET /a-c
* docs: link to volume multi-attach demo recording
* api-ref: mark block_device_mapping_v2.boot_index as required
* doc: add note about xenapi aggregate upcall being resolved
* Remove vestigial system_metadata param from info_from_instance()
* Drop MIN_LIBVIRT_SET_ADMIN_PASSWD
* libvirt: Bump MIN_{LIBVIRT,QEMU}_VERSION for "Rocky"
* libvirt: add support for virtio-net rx/tx queue sizes
* libvirt: fix wrong driver name for vhostuser interface
* libvirt: Add a debug log entry before / after invoking migrate()
* xenapi: Documents update for XAPI pool shared SR migration
* Remove deprecated [placement] opts
* Fix link in placement contributor doc

18.0.0.0b1
----------

* Add `hide_hypervisor_id` flavor extra_spec
* Mention that users need noVNC >= 0.6
* xenapi: handle InstanceNotFound in detach_interface()
* fix a typo
* Update docs for [keystone_authtoken] changes since Queens
* Move some tests into nova.tests.unit.notifications.objects.test_instance
* Leave a hint when populate_schema fails
* Add request_id to instance action notifications
* Add root and parent provider uuid to group by clause
* Improve check capacity sql
* Rename recreate to evacuate in driver signatures
* Deduplicate notification samples Rocky - 7
* Add periodic task to clean expired console tokens
* xenapi: Use XAPI pool instead of aggregate pool for shared SR migration
* Remove mox in unit/api/openstack/compute/test_hosts.py
* Cleanup RP and HM records while deleting a compute service
* Delete allocations from API if nova-compute is down
* Block deleting compute services which are hosting instances
* Add functional test for deleting a compute service
* mock utils.execute() in qemu-img unit test
* Add CPUWeigher
* Fix docs for confirmResize action
* Remove placement config check
* Parse forbidden in extra_specs
* Deduplicate notification samples Rocky - 6
* Deduplicate notification samples Rocky - 5
* Deduplicate notification samples Rocky - 4
* doc: BFV instances and IsolatedHostsFilter
* Remove redundant _do_check_can_live_migrate_destination
* Improve performance when list instances with IP filter
* Remove mox in test_serversV21.py (2)
* Remove mox in test_serversV21.py (1)
* libvirt: Report the allocated size of preallocated file based disks
* Document how to disable notifications
* tests for alloc candidates with nested and traits
* Add config drive link to api-guide
* Move update_task_state out of try/except
* Fix doc link for api
* Address nits in I00d29e9fd80e6b8f7ba3bbd8e82dde9d4cb1522f
* Extract generate_hostid method into utils.py
* Record the host info in EventReporter
* Deduplicate notification samples Rocky - 3
* Deduplicate notification samples Rocky - 2
* Deduplicate notification samples Rocky - 1
* Provide framework for setting placement error codes
* Update os_compute_api:os-flavor-extra-specs:index docs for 2.61
* Update os_compute_api:os-flavor-extra-specs:index docs for 2.47
* [placement] Support forbidden traits in API
* [placement] Filter allocation candidates by forbidden traits in db
* [placement] Filter resource providers by forbidden traits in db
* [placement] Parse forbidden traits in query strings
* doc: cleanup API guide about instance faults
* Address nits in Idf57fb5fbc611abb83943bd7e36d3cebf03b3977
* tests: Fix how context managers are mocked
* Cleanup patch for the cell-disable series
* libvirt: refactor get_base_config to accept host arg
* libvirt: move version to string in utils
* Update link of metadata
* Move xenapi partition copies to privsep
* Sync xenapi and libvirt on what flags to pass e2fsck
* Move xenapi disk resizing to privsep
* Use Queens UCA for nova-multiattach job
* Skip placement on rebuild in same host
* Remove the branch specifier from the nova-multiattach job
* Make the nova-multiattach job non-voting temporarily
* Give volume DriverBlockDevice classes a common prefix
* remove ec2 in service and cmd
* Remove mox in test_neutron_security_groups.py
* Remove RequestContext.instance_lock_checked
* Fix race fail in test_resize_with_reschedule_then_live_migrate
* Remove :return from update_provider_tree docstring
* uncap eventlet in nova
* xenapi: Support live migration in pooled multi-nodes environment
* trivial: fix a comment typo
* Add microversion to support extra_specs in flavor API
* Imported Translations from Zanata
* Remove mox in tests/unit/test_utils.py
* api-ref: Fix parameter order in rebuild
* api-ref: Parameter verification for servers.inc (3/3)
* api-ref: Parameter verification for servers.inc (2/3)
* Remove mox in test_virt_drivers.py
* Make ResourceClass.normalize_name handle sharp S
* Test case: ResourceClass.normalize_name with ß
* PowerVM: Add proc_units_factor conf option
* Update wording in @safe_connect placement warnings
* Expose shutdown retry interval as config setting
* Pick next minimum libvirt / QEMU versions for "Stein"
* Remove mox in unit/virt/xenapi/test_vm_utils.py (3)
* Remove mox in unit/virt/xenapi/test_vm_utils.py (2)
* Remove mox in unit/virt/xenapi/test_vm_utils.py (1)
* make metadata doc up to date
* Update port device_owner when unshelving
* Log a warning and add nova-status check for old API service versions
* Avoid dumping stack on BuildAbortException
* Fix comments at the 'save' method of objects.Instance
* libvirt: Block swapping to an encrypted volume when using QEMU to decrypt
* Remove mox in unit/api/*/test_server_metadata.py
* Remove mox in unit/api/*/test_server_password.py
* Replace mox stubs with stub_out in test_extended_volumes.py
* Remove mox in unit/api/*/test_instance_actions.py
* Remove mox in test_user_data.py
* Don't persist RequestSpec.retry
* Add regression test for persisted RequestSpec.retry from failed resize
* Move test_report_client out of placement namespace
* Log a more useful error when cinder auth isn't configured
* doc: add a link in the install guides about configuring neutron
* Cleanup _get_request_spec_for_select_destinations for live migrate
* Clarify/correct the ordering of API and Cell database schema updates
* Rename network.utils to network.linux_utils
* Update ImageMetaProp object to expose traits
* Use a pythonic delete, with a retry
* [placement] Fix incorrect exception import
* Update the cells FAQs and scheduler maintenance docs
* Log a more useful error when neutron isn't configured
* Update the Cell filters section of the scheduler docs
* update_provider_tree devref and docstring updates
* libvirt: Allow to specify granular CPU feature flags
* Support extending attached ScaleIO volumes
* Transform aggregate.update_metadata notification
* Add nova-status check for ironic flavor migration
* Add __repr__ for NovaException
* Add --enable and --disable options to nova-manage update_cell
* Noauth should also use request_id from compute_req_id.py
* Avoid unnecessary port update during live migration
* DRY up test_rollback_live_migration_set_migration_status
* Default to py3 for the pep8 tox env because it's stricter
* Avoid showing password in log
* Remove a outdated warning
* Move xenapi xenstore_read's to privsep
* Move configurable mkfs to privsep
* Request only instance_uuid in ironic node list
* Include only required fields in ironic node cache
* network: add command to configure trusted mode for VFs
* [placement] api-ref: Fix parameters
* [Trivial]Add missing blank space in conf description
* Add tests for _get_trees_matching_all() function
* Fix cancel_all_events event name parsing
* Get rid of 406 paths in report client
* Move pypowervm requirement to 1.1.12
* Use an independent transaction for _trait_sync
* Test case: traits don't sync if first access fails
* Expand member_of functional test cases
* Fix member_of with sharing providers
* Add tests for alloc_cands with member_of
* Fix a missing white space in exception message
* Make generation optional in ProviderTree
* Fix nits in update_provider_tree series
* Use update_provider_tree from resource tracker
* SchedulerReportClient.update_from_provider_tree
* Complement tests in allocation candidates
* trivial: Fix nits in code comments
* [placement] Add test for provider summaries
* Fix unit tests to work with new oslo.config
* Allow scheduling only to enabled cells (Filter Scheduler)
* Teardown networking when rolling back live migration even if shared disk
* Remove unnecessary code encoding specification
* [placement] Add to contributor docs about handler testing
* Add trusted_certs object
* Add trusted_certs to instance_extra
* Move get_stashed_volume_connector to compute.utils
* Documentation for tenant isolation with placement
* [placement] Fix bad management of _TRAITS_SYNCED flag
* Fix N332 api_version decorator hacking check
* Use ksa session for cinder microversion check
* vmware: Fixes _detach_instance_volumes method
* PowerVM Driver: Network interface attach/detach
* Fix issue for pep8 on py3
* Add require_tenant_aggregate request filter
* Add AggregateList.get_by_metadata() query method
* Add an index on aggregate_metadata.value
* Make get_allocation_candidates() honor aggregate restrictions
* Move two more generic network utilities to a more obvious place
* Start untangling network utilities
* Add aggregates list to Destination object
* Add request filter functionality to scheduler
* tox: Make everything work with Python 3
* VMware: add log message for VIF info details
* Fix spelling mistake of HTTPNotFound exception
* tests: fixes mock autospec usage
* Use a pythonic delete
* Remove duplicative implementation of temporary directories
* api-ref: add a note about volume-backed rescue not being supported
* Scheduling Optimization: Remove cell0 from the list of candidates
* api-ref: Parameter verification for servers.inc (1/3)
* Add host to API and Conductor
* doc: Upgrade placement first
* Fix allocation_candidates not to ignore shared RPs
* remove unnecessary short cut in placement
* Fix comments in get_all_with_shared()
* Unit test framework: common FakeResponse
* tox: Remove unnecessary configuration
* tox: Fix indentation
* Standardize '_get_XXX_constraint' functions
* Updated from global requirements
* Fix api-ref: nova image-meta is deprecated from 2.39
* Docs: modernise links
* Updated from global requirements
* Modify nova-manage cell_v2 list_cells to display "disabled" column
* Add disabled option to create_cell command
* Move _make_instance_list call outside of DB transaction context
* Stop using mox in virt/xenapi/image/test_vdi_through_dev.py
* Use microversion parse 0.2.1
* Add the version description for InstanceActionEventList
* Updated from global requirements
* Add host field to InstanceActionEvent
* remove a comment about ec2
* Add functional regression test for bug 1746509
* Always deallocate networking before reschedule if using Neutron
* Change compute mgr placement check to region_name
* make PowerVM capabilities explicit
* Move placement test cases from db to placement
* List instances performance optimization
* Add CellMappingList.get_by_disabled() query method
* libvirt: move vpu_realtime_scheduler in designer
* libvirt: move get_numa_memnode in designer module
* Remove translate and a TODO
* Add more functional test for placement.usage
* deprecate fping_path config option
* Remove useless run_periodic_tasks call in ClientRouter
* Handle EndpointNotFound when building image_ref_url in notifications
* Don't log a warning for InstanceNotFound with deleted VIFs
* Preserve multiattach flag when refreshing connection_info
* ironic: stop lying to the RT when ironic is down
* Clarify log in RT._update_usage_from_migration
* Add disabled field to CellMapping object
* libvirt: handle DiskNotFound during update_available_resource
* only increment disk address unit for scsi devices
* Fix message for unexpected external event
* Fix typos in release notes
* libvirt: slow live-migration to ensure network is ready
* Remove version/date from CLI documentation
* Move placement exceptions into the placement package
* Report client: Remove version discovery comment
* add check before adding cpus to cpuset_reserved
* trivial: omit condition evaluations
* remove _cleanup_running_deleted_instances repeat detach volume
* [libvirt] Add _get_XXXpin_cpuset()
* [libvirt] Add _get_numa_memnode()
* Add disabled column to cell_mappings table
* Add placeholder migrations for Queens backports
* Updated from global requirements
* Add --by-service to discover_hosts
* api-ref: add a note in DELETE /os-services about deleting computes
* conf: Remove 'db_driver' config opt
* Add 'member_of' param to GET /allocation_candidates
* Follow the new PTI for document build
* docs: Disable smartquotes
* Updated from global requirements
* Stop assuming initial provider generation is 0
* ProviderTree.{add|remove}_{traits|aggregates}
* Unmap compute nodes when deleting host mappings in delete cell operation
* Cleanup tempest-dsvm-cells-rc blacklist
* Make nova-cells-v1 run with neutron
* ironic: Get correct inventory for deployed node
* Marker reset option for nova-manage map_instances
* XenAPI/Stops the migration of volume backed VHDS
* placement: Return new provider from POST /rps
* placement: generation in provider aggregate APIs
* Change TestNewtonCellsCheck to not rely on objects
* Revert "Refine waiting for vif plug events during _hard_reboot"
* Revert "Make the InstanceMapping marker UUID-like"
* Update contributor/placement.rst to contemporary reality
* Updated from global requirements
* Make archive_deleted_rows handle a missing CONF.api_database.connection
* Transform live_migration.post.dest notifications
* Reparent placement objects to oslo_versionedobjects
* Move resource provider objects into placement hierarchy
* Move resource class fields
* Updated from global requirements
* Fix N358 hacking check
* New-style _set_inventory_for_provider
* conf: Fix indentation of database options
* conf: Remove deprecated 'allow_instance_snapshots' opt
* conf: Remove deprecated 'multi_instance_display_name_template' opt
* conf: Remove '[conductor] topic' opt
* Update deprecated log-config option in docs
* Updated from global requirements
* remove unnecessary conf imports
* Fix indentation in doc/source/cli/*
* Make nova build reproducible
* Raise a proper exception in unit test
* Rename '_numa_get_constraints_XXX' functions
* Migrate tempest-dsvm-cells job to an in-tree job definition
* Make nova-manage db purge take --all-cells
* hardware: Rework get_number_of_serial_ports
* hardware: Rework '_get_cpu_topology_constraints'
* Add --purge helper flag to archive_deleted_rows
* Re-work the metadata service docs
* conf: Remove 'nova.crypto' opts
* ca: Remove 'nova/CA' directory
* crypto: Remove unused functions
* Allow to configure amount of PCIe ports
* ironic: Clean up resources after unprovision fails
* Update the nova-manage db archive_deleted_rows description
* Deprecate sparse LVs
* Rename the 'recreate' param in rebuild_instance to 'evacuate'
* Add simple db purge command
* Run post-test archive against cell1
* XenAPI: XCP2.1+ Swallow VDI_NOT_IN_MAP Exception
* conf: Deprecate 'keymap' options
* Removed unnecessary parentheses in yield statements
* Handle IpAddressAlreadyAllocated exception
* Update contributor guide for Rocky
* Handle not found error on taking snapshot
* Save admin password to sysmeta in libvirt driver
* Refactor WSGI apps and utils to limit imports
* Transform servergroup.addmember notification
* Add more functional test for placement.aggregates
* Fix version cap when no nova-compute started
* Check for multiattach before removing connections
* Updated from global requirements
* VMware: fix TypeError while get console log
* Make the nova-next job voting and gating
* Fix the notification devref location in exception
* Updated from global requirements
* Updated from global requirements
* Pass user context to virt driver when detaching volume
* Updated from global requirements
* Move db MAX constants to own file
* [placement] use simple FaultWrapper
* Allow 'network' in RequestContext service_catalog
* Stop using mox in api/openstack/fakes.py
* Move makefs to privsep
* Convert users of tune2fs to privsep
* libvirt: mask InjectionInfo.admin_pass
* Remove unused LOG variables
* Clarify wording in listing instance actions for deleted instances
* Add check for redundant import aliases
* Make _get_sharing_providers more efficient
* Update noVNC deployment docs to mention non-US keymap fix in 1.0.0
* Check for leaked server resource allocations in post_test_hook
* rp: GET /resource_providers?required=
* compute: Cleans up allocations after failed resize
* Clarify `resources` query param for /r_p and /a_c
* Handle spawning error on unshelving
* Ensure attachment_id always exists for block device mapping
* Avoid exploding if guest refuses to detach a volume
* [placement] api-ref: Fix a missing response code
* Add functional test for deleting BFV server with old attach flow
* Only attempt a rebuild claim for an evacuation to a new host
* Detach volumes when deleting a BFV server pre-scheduling
* Add functional recreate test of deleting a BFV server pre-scheduling
* Clean up ports and volumes when deleting ERROR instance
* libvirt: disconnect volume from host during detach
* Functional test: evacuate with no compute
* Extending delete_cell --force to delete instance_mappings
* Return 400 when compute host is not found
* Fix PatternPropertiesTestCase for py 3.6
* [placement] Add functional tests for traits API
* Scheduler multiple workers support
* Imported Translations from Zanata
* Updated from global requirements
* Remove single quotes from posargs on stestr run commands
* Clarify update_provider_tree docstring
* Only pull associated *sharing* providers
* Fix error handling in compute API for multiattach errors
* Trivial: Update help of enabled_filters
* Add a nova-caching-scheduler job to the experimental queue
* api-ref: Further clarify placement aggregates
* Enable native mode for ScaleIO volumes
* trivial: Move __init__ function
* Add admin guide doc on volume multiattach support
* Detach volumes when VM creation fails
* Python 3 fix for sphinx doc
* doc: Clarify how to create your own filter
* Add functional tests to ensure BDM removal on delete
* Store block device mappings in cell0
* Drop extra loop which modifies Cinder volume status
* Remove deprecated aggregate DB compatibility
* Remove old flavor_create db api method
* Remove old flavor_get_all db api method
* Remove old flavor_get db api method
* Remove old flavor_get_by_name db api method
* Remove old flavor_get_by_flavor_id db api method
* Remove old flavor_destroy db api method
* Remove old flavor_access_get_by_flavor_id db api method
* Test websocketproxy with TLS in the nova-next job
* Updated from global requirements
* libvirt: add Linux distribution guest only description for inject_xxx options
* libvirt: remove TODO on validation of scsi model
* Avoid inventory DELETE API (no conflict detection)
* install-guide: Wrap long console command
* install-guide: Make formatting of console consistent
* Cleanup the manage-volumes admin doc
* Remove warning in feature support matrix page
* Use correct arguments in task inits
* Remove the deprecated scheduler_driver_task_period option
* Clarify the help text for [scheduler]periodic_task_interval
* Fix and update compute schedulers config guide
* Lazy-load instance attributes with read_deleted=yes
* Fix warn api_class is deprecated, use backend
* Drop compute RPC 4.x compatibility
* Don't JSON encode instance_info.traits for ironic
* Move the nova-next job in-tree and update it
* Use dict.get() when accessing capabilities dict
* Fix typo in NUMATopologyFilter docs
* [libvirt] Add _get_vcpu_realtime_scheduler()
* [placement] annotate loadapp as public interface
* Replace Chinese quotes to English quotes
* Fix docs for IsolatedHostsFilter
* Handle volume-backed instances in IsolatedHostsFilter
* Add regression test for BFV+IsolatedHostsFilter failure
* doc: merge numa.rst to cpu-topologies.rst
* [placement] Add sending global request ID in get
* [placement] Add sending global request ID in put (3)
* Ensure resource classes correctly
* Provide basic data for AArch64 support matrix/functionality
* TrivialFix: Add a space between messages
* Fix grammar error
* Update reno for stable/queens
* Refine waiting for vif plug events during _hard_reboot

17.0.0.0rc1
-----------

* doc: mention that --on-shared-storage is not needed with nova evacuate
* doc: fix the link for the evacuate cli
* Check quota before creating volume snapshots
* Add the ability to get absolute limits from Cinder
* unquiesce instance on volume snapshot failure
* VGPU: Modify the example of vgpu white_list set
* [placement] Move body examples to an isolated directory
* Remove MigrationPreCheckClientException
* Encode libvirt domain XML in UTF-8
* Clean up reservations in migrate_task call path
* Compute RPC client bump to 5.0
* Bump compute RPC API to version 5.0
* Bindep does not catch missing libpcre3-dev on Ubuntu
* Fixed auto-convergence option name in doc
* Workaround glanceclient bug when CONF.glance.api_servers not set
* Remove a duplicate colon
* Use with method to consistent oslo timeutils usage
* Add log for snapshot an instance
* TrivialFix: Add a blankline
* trivial: Fix microversion number in test comment
* Remove unnecessary arguments in notification methods
* Remove unnecessary variables
* XenAPI: Provide support matrix and doc for VGPU
* Make the InstanceMapping marker UUID-like
* fix link
* Make bdms querying in multi-cell use scatter-gather and ignore down cell
* update docstring param description
* Add a prelude release note for the 17.0.0 Queens GA
* Address comments from I51adbbdf13711e463b4d25c2ffd4a3123cd65675
* Add late server group policy check to rebuild
* Add regression test for bug 1735407
* Remove microversion fallback code from report client
* Fix wrong link for "Manage Flavors" in CPU topologies doc
* Make sure that we have usable input for graphical console
* Use check_string_length from oslo_utils
* update the description of hypervisor statistics response
* fix misspelling of 'projectUser'
* Test case: new standard resource class unusable
* Clarify CONF.scheduler.max_attempts
* Add release note for Aggregate[Core|Ram|Disk]Filter change
* placement doc: Conflict caveat for DELETE APIs
* Trivial fix a misleading comment
* Provide support matrix and doc for VGPU
* doc: update the GPU passthrough HPC feature entry
* [placement] Add sending global request ID in put (2)
* [placement] Add sending global request ID in put (1)
* [placement] Add sending global request ID in post
* Update cells v2 layout doc caveats for Queens
* Not use thread alloc policy for emulator thread
* Refix disk size during live migration with disk over-commit
* Zuul: Remove project name
* Doc: Nix os-traits link from POST resource_classes
* Only log during pop retry phase
* docs: Add booting from an encrypted volume
* libvirt: fix native luks encryption failure to find volume_id
* Don't wait for vif plug events during _hard_reboot
* Don't rely on parse.urlencode in url comparisons
* Reset the _RC_CACHE between tests
* Fix invalid UUIDs in test_compute.py
* Fix the wrong description
* doc: placement upgrade notes for queens
* Add functional tests for traits-based scheduling
* Ensure the JSON-Schema covers the legacy v2 API
* Cleanup launch instance and manage IPs docs
* Migrate "launch instance" user guide docs
* Pass limit to /allocation_requests
* doc: mark the max microversions for queens
* test_compute_mgr: fix couple of unit tests
* Updated from global requirements
* trivial: Fix few policy doc
* Query all cells for service version in _validate_bdm
* Remove old flavor_access_add db api methods
methods * Remove old flavor\_access\_remove db api method * Remove old flavor\_extra\_specs\_get db api method * Remove old flavor\_extra\_specs\_delete db api method * Remove old flavor\_access\_get\_by\_flavor\_id db api method * add "--until-complete" option for nova-manage db archive\_deleted\_rows * Mention required traits in the flavors user docs * Fix nits in support traits changes * Log options at debug when starting API services under wsgi * set\_{aggregates|traits}\_for\_provider: tolerate set * ProviderTree.get\_provider\_uuids: Top-down ordering * SchedulerReportClient.\_delete\_provider * ComputeDriver.update\_provider\_tree() * report client: get\_provider\_tree\_and\_ensure\_root * Remove unused method \_parse\_node\_instance\_info * Add resource\_class to fields in ironic node cache * Update docstring for get\_traits virt driver method * trivial: Fix typos in release notes * Allow force-delete even if task\_state is not None * Invalid query parameter could lead to HTTP 500 * [Placement] Invalid query parameter could lead to HTTP 500 * Use util.validate\_query\_params in list\_traits * Add functional tests for virt driver get\_traits() method * Implement get\_traits() for the ironic virt driver * Add get\_traits() method to ComputeDriver * [placement] Separate API schemas (resource\_provider) * Remove compute nodes arg from ProviderTree init * Fix invalid UUIDs in remaining tests * Don't modify objects directly * trivial: Resolve "X is renamed to Y" warnings * trivial: Don't use 'Test' prefix for non-TestCase classes * Remove unused tempest-dsvm-lxc-rc * ProviderTree.new\_child: parent is either uuid or name * trivialfix: cleanup \_pack\_instance\_onto\_cores() * Always pass 'NUMACell.siblings' to \_pack\_instance\_onto\_cores' * Ensure emulator threads are always calculated * tests: refactors and cleans up test\_rbd.py * Don't filter out sibling sets with one core * Add server filters whitelist in server api-ref * reno for notification-transformation-queens * Add the nova-multiattach job * api-ref: provide more detail on what a provider aggregate is * Remove redundant call to add\_instance\_fault\_from\_exc in rebuild\_instance * Collapse duplicate error handling in rebuild\_instance * Rollback instance.image\_ref on failed rebuild * hyper-v: Logs tips on PortBindingFailed * Add unit tests for EmulatorThreadsTestCase * [libvirt] Filter hypervisor\_type by virt\_type * Updated from global requirements * SchedulerReportClient.set\_aggregates\_for\_provider * Fix a comment in a notification functional test * Bumping functional test job timeouts * Remove deprecated policy items from fake\_policy * Reduce policy deprecation warnings in test runs * Handle network-changed event for a specific port * Fix the incorrect RST convention * Fix SUSE Install Guide: Placement port * Log the events we timed out waiting for while plugging vifs * Reduce complexity of \_from\_db\_object 17.0.0.0b3 ---------- * Ironic: Get IP address for volume connector * Add release note for QEMU native LUKS decryption * Fix missing 'if\_notifications\_enabled' decorator * Fix missing marker functions * Fix bug case by none token context * Transform instance.resize\_prep notification * Move remaining uses of parted to privsep * Avoid suspending guest with attached vGPUs * placement: enable required traits from the flavor extra specs * placement: using the dict format for the allocation in claim\_resources * Update VMWare vSphere link address * Handle TZ change in iso8601 >=0.1.12 * Updated from global 
requirements * Fix the order of target host checks * Add the Nova libvirt StorPool attachment driver * Expand on when you might want to set --max-count for map\_instances * libvirt: pass the mdevs when rebooting the guest * Set server status to ERROR if rebuild failed * Fix nits in allocation candidate limit handling * libvirt: QEMU native LUKS decryption for encrypted volumes * Replace curly quotes with straight quotes * Fix 'all\_tenants' & 'all\_projects' type in api-ref * Use neutron port\_list when filtering instance by ip * Start moving users of parted to privsep * Add PowerVM to feature-classification * Fix update\_cell to ignore existing identical cells * Change compute RPC to use alternates for resize * Report Client: PUT empty (not None) JSON data * Send traits to ironic on server boot * PowerVM Driver: SEA * Recreate mediated devices on reboot * [api] Allow multi-attach in compute api * doc: Document TLS security setup for noVNC proxy * placement: support traits in allocation candidates API * Do not multiply megabytes with 1024 to get gigabytes * api-ref: Fix parameter type in server-migrations.inc * Transform instance-evacuate notification * [placement] Add sending global request ID in delete (3) * Add index(instance\_uuid, updated\_at) on instance\_actions table * Fix 500 in test\_resize\_server\_negative\_invalid\_state * Generalize DB conf group copying * Track tree-associated providers in report client * ProviderTree.populate\_from\_iterable * Raise on API errors getting aggregates/traits * Updated from global requirements * Remove redundant swap\_volume tests * Track associated sharing RPs in report client * SchedulerReportClient.set\_traits\_for\_provider * ProviderTree.data => ProviderData * Cleanup redundant want\_version assignment * Fix format in flavors.rst * libvirt: Introduce disk encryption config classes * libvirt: Collocate encryptor and volume driver calls * libvirt: create vGPU for instance * Deduplicate service status notification samples * libvirt: don't attempt to live snapshot paused instances * Pass multiattach flag to reserve\_block\_device\_name * Handle swapping to a multiattach volume * [libvirt] Allow multiple volume attachments * trivial: Remove crud from 'conf.py' * Fix openstackdocstheme options for api-ref * Updated from global requirements * [placement] Add functional tests for resource class API * correct referenced url in comments * Transform instance.resize\_confirm notification * placement: \_get\_trees\_matching\_all\_resources() * Account for deprecation of personality files * PowerVM driver: ovs vif * add \_has\_provider\_trees() utility function * func tests for nested providers in alloc candidate * Deduplicate aggregate notification samples * Fix accumulated nits * Make sure that functional test triggered on sample changes * Add taskflow to requirements * Updated from global requirements * Enable py36 unit tests in tox * Stop globally caching host states in scheduler HostManager * make unit tests compatible with os-vif 1.8.0 * Remove unnecessary execute permissions in files * Update plugs Contrail methods to work with privsep * [placement] Fix resource provider delete * Transform rescue/unrescue instance notifications * conf: Do not inherit image signature props with snapshots * Track provider traits in report client * Fix missing rps in allocation candidates * Add aggregates check in allocation candidates * Fix accumulated nits in refactor series * Test helper: validate provider summaries * Revert "Deduplicate service status 
notification samples" * console: Provide an RFB security proxy implementation * console: introduce the VeNCrypt RFB authentication scheme * console: introduce framework for RFB authentication * console: Send bytes to sockets * Update links in documents * Add a warning in 'nova-manage cell\_v2 delete\_cell' * Modify the test case of get\_disk\_mapping\_rescue\_with\_config * Rename block\_device\_info\_get\_root * Address nits in change I7e01f95d7173d9217f76e838b3ea71555151ef56 * trivial: Resolve 'oslo.context' deprecation warnings * Increase notification wait timeout in functional tests * [placement] Add sending global request ID in delete (2) * Fix comment in MigrationSortContext * Add index(updated\_at) on migrations table * Add pagination and Changes-since filter support for os-migrations * Deduplicate service status notification samples * Add exception to no-upcall note of cells doc * Fix typo in release note * Add cross cell sort support for get\_migrations * libvirt: add tests to check multipath in iscsi/fc volume connectors * libvirt: test to make sure volume\_use\_multipath is properly used * libvirt: use 'host-passthrough' as default on AArch64 * Add reference to policy sample * Add an additional description for 'token\_ttl' * Updated from global requirements * Qualify the Placement 1.15 release note * Add migration db and object pagination support * Add regression test for resize failing during retries * Fix race condition in retrying migrations * libvirt: Provide VGPU inventory for a single GPU type * Fix OpenStack capitalization * Update FAQs about listing hosts in cellv2 * Add ConsoleAuthToken object * Optionalize instance\_uuid in console\_auth\_token\_get\_valid() * Add index on token\_hash and instance\_uuid for console\_auth\_tokens * Add access\_url\_base to console\_auth\_tokens table * Add debug output for selected page size * Use method validate\_integer from oslo.utils * conf: hyperv: fix a comment typo * Remove a duplicate line in a unit test * Use volume shared\_targets to lock during attach/detach * Handle no allocations during migrate * Add regression test for resizing failing when using CachingScheduler * zuul: Move legacy jobs to project * Imported Translations from Zanata * log test: use fixtures.StandardLogging in setUp * Fix up formatting for deprecate-api-extensions-policies release note * Fix documentation nits in set\_and\_clear\_allocations * Document lack of side-effects in AllocationList.create\_all() * VMware: add support for different firmwares * hyper-v: Deprecates support for Windows / Hyper-V Server 2012 * Use UEFI as the default boot for AArch64 * Don't log a warning for InstanceNotFound in detach\_interface * manager: more detailed info of unsupported compute driver * Add test for assignment of uuid to a deleted BDM * Fix fake libvirt XML generation for disks * Handle glance exception during rotating instance backup * Move aggregates from report client to ProviderTree * Only call numa\_fit\_instance\_to\_host if necessary * Expose BDM uuid to drivers * DriverBlockDevice: make subclasses inherit \_proxy\_as\_attr * Add an online migration for BDM.uuid * Address nits in I46d483f9de6776db1b025f925890624e5e682ada * Add support for getting volume details with a specified microversion * XenAPI: Unit tests must mock os\_xenapi calls * Revert "Modify \_poll\_shelved\_instances periodic task call \_shelve\_offload\_instance()" * Remove 'nova-manage host' and 'nova-manage agent' * Remove 'nova-manage logs' command * setup.cfg: Explicitly set 
[build\_sphinx] builder * conf: Remove deprecated 'remap\_vbd\_dev' option * api-ref: Fix incorrect parameter name * [placement] Add sending global request ID in delete * trivial: conf: libvirt: remove a redundant space * Fix the formatting for 2.58 in the compute REST API history doc * trivial: Modify signature of \_filter\_non\_requested\_pfs * Add PCI NUMA policies * Document testing guide for new API contributions * trivial: use cn instead of rp * Updated from global requirements * Test allocation candidates: multiple aggregates * Fix functional tests for USE\_NEUTRON * Make conductor pass and use host\_lists * Don't try to delete build request during a reschedule * libvirt: don't log snapshot success unless it actually happens * Add retry\_on\_deadlock decorator to action\_event\_start * conf: libvirt: Cleanup CPU modelling related options * Remove dead parameter from '\_create\_domain\_and\_network' * Handle images with no data * tests: Use correct response type in tests * Remove the inherits parameter for the Resource object * Remove the LoadedExtensionInfo object * Initialize osprofiler in WSGI application * doc: update supported drivers for cpu topology * Do not set allocation.id in AllocationList.create\_all() * [placement] Fix getting placement request ID * [placement] Enable limiting GET /allocation\_candidates * Pass RequestSpec to ConductorTaskAPI.build\_instances * Fix an error in \_get\_host\_states when deleting a compute node * Provide example for placement last-modified header of now * objects: Add PCI NUMA policy fields * Workaround missing RequestSpec.project\_id when moving an instance * Use instance.project\_id when creating request specs for old instances * Fix duplicate allocation candidates * trivial: conf: libvirt: fix a typo * Remove extensions module * Fix 4 doc typos * Fix false positive server group functional tests * Updated from global requirements * api-ref: sort parameters for limits, quotas and quota classes * XenAPI: create vGPU for instance * update\_cell allows more than once cell to have the same db/transport url * [placement] Add x-openstack-request-id in API ref * [placement] Separate API schemas (allocation\_candidate) * [placement] Separate API schemas (allocation) * Implement set\_and\_clear\_allocations in report client * Make BlockDeviceMapping object support uuid * Add uuid column to BlockDeviceMapping * Remove unused argument from LibvirtDriver.\_disconnect\_volume * Removed unused argument from LibvirtDriver.\_connect\_volume * Fix unit test failures when direct IO not supported * [placement] Separate API schemas (resource\_class) * Updated from global requirements * Deduplicate functional test code * Aggregate ops on ProviderTree * Implement query param schema for migration index * Make request\_spec.spec MediumText * Fix the formatting for 2.56 in the compute REST API history doc * Delete the TypeAffinityFilter * live-mig: keep disk device address same * Traits ops on ProviderTree * SchedulerReportClient.\_get\_providers\_in\_aggregates * [placement] Separate API schemas (inventory) * [placement] Separate API schemas (aggregate) * [placement] Separate API schemas (trait) * [placement] Separate API schemas (usage) * Fix the bug report link of API Guide * Extract instance allocation removal code * Test alloc\_cands with one RP shared between two RPs * Test alloc\_cands with non overlapping sharing RPs * handle traits with sharing providers * Fix possible TypeError in VIF.fixed\_ips * Add pagination and changes-since for instance-actions * 
Updated common create server sample request because of microversion 2.57 * Fix some typos in nova doc * Retry \_trait\_sync on deadlock * Remove unnecessary connector stash in attachment\_update * Pass mountpoint to volume attachment\_create with connector * Pass bdms to versioned notifications during finish\_revert\_resize * Update and complete volume attachments during resize * Pass mountpoint to volume attachment\_update * Don't persist could-be-stale InstanceGroup fields in RequestSpec * Update nova-status and docs for nova-compute requiring placement 1.14 * Wait for live\_migration\_rollback.end notification * Some nit fix in multi\_cell\_list * Raise MarkerNotFound if BuildRequestList.get\_by\_filters doesn't find marker * Move flushing block devices to privsep * Convert ext filesystem resizes to privsep * [placement] Add info about last-modified to contrib docs * [placement] Add cache headers to placement api requests * Stabilize test\_live\_migration\_abort func test * doc: add note about fixing admin-only APIs without a microversion * Deprecate file injection * VMware: implement get\_inventory() driver method * VMware: expose max vCPUs and max memory per ESX host * VMware: fix memory stats * api-ref: Fix a description for 'guest\_format' * Move the claim\_resources method to scheduler utils * Change RPC for select\_destinations() * Re-use existing ComputeNode on ironic rebalance * placement: skip authentication on root URI * Add instance action db and obj pagination support * Update Instance action's updated\_at when action event updated * Make live migration hold resources with a migration allocation * Add instance action record for snapshot instances * Add quiesce and unquiesce in support matrix * libvirt: throw NotImplementedError if qga is not responsive when setting password * [placement] Fix API reference for microversion 1.14 * Unmap compute nodes when deleting host mapping * Follow up on removing old-style quotas code * Add API and nova-manage tests that use the NoopQuotaDriver * Add instance action record for backup instances * Don't launch guestfs in a thread pool if guestfs.debug is enabled * Remove confusing comment in compute\_node\_get API method * [placement] add name to resource provider create error * Improve error message on invalid BDM fields * doc: link in some Sydney summit content * trivial: more suitable log in set\_admin\_password * Add support for listing hosts in cellv2 * [placement] Add 'Location' parameters in API ref * [placement] Object changes to support last-modified headers 17.0.0.0b2 ---------- * Implement new attach Cinder flow * Add new style volume attachment support to block\_device.py * SchedulerReportClient.\_get\_providers\_in\_tree * Modify select\_destinations() to return objects and alts * Move the to\_dict() method to the Selection object * Return Selection objects from the scheduler driver * Refactor the code to check for sufficient hosts * Fix 'force' parameter in os-quota-sets PUT schema * Reformat \_get\_all\_with\_shared * Updated from global requirements * Deprecate configurable Hide Server Address Feature * XenAPI: update the picture in Xen hypervisor document * Deprecate API extensions policies * Avoid stashed connector lookup for new style detach * placement: update client to set parent provider * Scheduler set\_inventory\_for\_provider does nested * placement: adds REST API for nested providers * placement: allow filter providers in tree * XenAPI: Don't use nicira-iface-id for XenServer VIF * archive\_deleted\_instances is not 
atomic for insert/delete * Remove the unused request\_id filter from api-paste.ini * Add a new check to volume attach * Add instance action record for shelve\_offload instances * Modify \_poll\_shelved\_instances periodic task call \_shelve\_offload\_instance() * Add Selection objects * Fix doubling allocations on rebuild * Add PowerVM to compute\_driver options * Updated from global requirements * Fix wrong argument order in functional test * [placement] Fix an error message in API validation * Transform instance.resize\_revert notification * Mention API behavior change when over quota limit * [placement] Fix foreign key constraint error * [placement] Add aggregate link note in API ref * Fail fast if changing image on a volume-backed server rebuild * Get original image\_id from volume for volume-backed instance rebuild * Add regression test for rebuilding a volume-backed server * ProviderTree.get\_provider\_uuids() * Fix cellsv1 messaging test * Make \_Provider really private * Split instance\_list into instance and multi\_cell * Genericify the instance\_list stuff * Remove 'nova-manage account' and 'nova-manage project' * Remove 'nova-manage shell' command * Updated from global requirements * Fixes 'Not enough available memory' log message * Only log not correcting allocation once per period * Add description for resource class creation * Trivial: Nix duplicate PlacementFixture() in test * Check the return code when forcing TCG mode with libguestfs * [placement] re-use existing conf with auth token middleware * Fix disk size during live migration with disk over-commit * Use ksa adapter for keystone conf & requests * Downgrade log for keystone verify client fail * [placement]Enhance doc for placement allocation list * Update description of Rebuild in server\_concepts.rst * Use oslo\_db Session in resource\_provider.py * VMware: Handle concurrent registrations of the VC extension * Proper error handling by \_ensure\_resource\_provider * Refactor placement version check * Nix log translations from scheduler.client.report * Remove old-style quotas code * Remove direct usage of glance.generate\_image\_url * remove glance usage inside compute * Assert that we restrict cold migrations to the same cell * [placement] Fix format in placement API ref * Enable cold migration with target host(2/2) * qemu-img do not use cache=none if no O\_DIRECT support * remove reserve\_quota\_delta * Raise specific exception when swapping migration allocations fails * Remove vestigial extra\_info update in PciDevice.save() * Fix ValueError when loading old pci device record * Updated from global requirements * Remove the objects for describing the extension for v2.1 API * Remove the objects which related to the old v2 API implementation * Updated from global requirements * Save updated libvirt domain XML after swapping volume * placement: add nested resource providers * Deprecate the IronicHostManager * Fix some incorrect option references for scheduler filters * Remove deprecated TrustedFilter * Fix NoneType error when [service\_user] is misconfigured * check query param for server groups function * Deduplicate instance.create notification samples * Nits from Ic3ab7d60e4ac12b767fe70bef97b327545a86e74 * [placement] Fix GET PUT /allocations nits * [placement] POST /allocations to set allocations for >1 consumers * Add instance action record for lock/unlock instances * XenAPI: provide vGPU inventory in compute node * XenAPI: get vGPU stats from hypervisor * Add 'all\_tenants' for GET sec group api ref * Update the 
documentation links * Add instance action record for attach/detach/swap volumes * Add regression test for rebuild with new image doubling allocations * Refined fix for validating image on rebuild * Address nits from service create/destroy notification review * Versioned notifications for service create and delete * Remove unnecessary self.flags and ConfPatcher * Implement query param schema for delete assisted vol * Add ProviderSummary.resource\_class\_names @property * required traits for no sharing providers * Fix invalid minRam error message * finish refactor AllocCandidates.\_get\_by\_filters() * PowerVM support matrix update * Fix the format file name * Simplify BDM boot index checking * Remove unused global variables * Updated from global requirements * Implement query param schema for flavor index * Implement query param schema for fping index * Implement query param schema for sec group APIs * Finish stestr migration * Fix incorrect known vcpuset when CPUPinningUnknown raised * Enable cold migration with target host(1/2) * Update server query section in the API concept doc * [placement] Add 'CUSTOM\_' prefix description in API ref * [placement] Fix parameter order in placement API ref * Remove 'nova-manage quota refresh' command * Api-guide: Address TODOs in user\_concepts section * Update server status api guide * Api guide:add Server Consoles * Update Metadata api section of api guide * Implement query param schema for simple\_tenant\_usage * Transform instance-live\_migration\_pre notification * Use FakeLiveMigrateDriver in notification test * Change live\_migrate tests to use fakedriver * Test resource allocation during soft delete * factor out compute service start in ServerMovingTest * Moving more utils to ProviderUsageBaseTestCase * Don't overwrite binding-profile * Fix TypeError of \_get\_project\_id when project\_id is None * Regenerate and pass configdrive when rebuild Ironic nodes * Update bindep.txt for doc builds * [placement] Symmetric GET and PUT /allocations/{consumer\_uuid} * Service token is not experimental * Use ksa adapter for neutron client * Get auth from context for glance endpoint * vgpu: add enabled white list * cleanup mapping/reqspec after archive instance * Fix the usage of instance.snapshot notification sample * Update document related to host aggregate * api-ref: Add a description of 'key\_name' in rebuild * api-ref: Fix an example in "Delete Assisted Volume Snapshot" * Use the RequestSpec when getting scheduler\_hints in compute * Add migration\_get\_by\_uuid in db api * Add instance action record for attach/detach interface * placement: Document request headers in api-ref * Deduplicate keypair notification samples * Include project\_id and user\_id in AllocationList.get\_all\_by\_consumer\_id * Clean up exception caught in \_validate\_and\_build\_base\_options * Implement query param schema for volume, snapshot API * Implement query param schema for quota set APIs * api-ref: fix the type on the block\_device\_mapping\_v2 parameter * placement: Document \`in:\` prefix for ?member\_of= * libvirt: Re-initialise volumes, encryptors, and vifs on hard reboot * VMware: serial console log (completed) * PowerVM Driver: config drive * Fix TypeError in nova-manage db archive\_deleted\_rows * Remove setting of version/release from releasenotes * Fix the formatting for the 2.54 microversion REST API version history * doc: Adds Hyper-V PCI passthrough details * hyper-v: Do not allow instances with pinned CPUs to spawn * Updated from global requirements * Add 
microversion to allow setting flavor description * Fix docstring for GET /os-migrations and related DB API * Add a note about versioned notification samples being per-release * Document the real behavior of notify\_on\_state\_change * Use NoDBTestCase for powervm driver tests * create allocation request for single provider * build alloc request resources for shared resources * build ProviderSummary objects in sep function * begin refactor AllocCandidates.\_get\_by\_filters() * Add security release note for OSSA-2017-005 * Add error message on metadata API * api-ref: make a note about os:scheduler\_hints being a top-level key * doc: fix link to creating unit tests in contributor guide * Validate new image via scheduler during rebuild * Add FlavorPayload.description for versioned notifications * placement: AllocCands.get\_by\_{filters => requests} * Deduplicate server\_group samples * Correct log message when removing a security group * Updated from global requirements * Enable reset keypair while rebuilding instance * Test allocation\_candidates with only sharing RPs * Test alloc candidates with same RC in cn & shared * rt: Make resource tracker always invoking get\_inventory() * Revert "Don't overwrite binding-profile" * Cleanup build\_request\_spec * Refactor test\_allocation\_candidates * block\_device\_mapping\_v2.bus\_type is missing from api-ref * Remove incorrect comment about instance.locked * Don't overwrite binding-profile * Do not use “-y” for package install * [placement] set accept to application/json if accept not set * [placement] Fix a wrong redirection in placement doc * Handle InstanceNotFound when setting password via metadata * Extract allocation candidates functional tests * Deduplicate instance.reboot notification samples * Deduplicate instance.live\_migration notification samples * Deduplicate instance.interface\_attach samples * Deduplicate instance.power-off notification samples * Transform instance-live\_migration\_abort notification * Deduplicated instance.(un)pause notification samples * Factor out duplicated notification sample data (2) * Move last\_bytes into the path module * Fix test\_get\_volume\_config method * Fix missing versioned notification sample * Clean up allocations if instance deleted during build * Avoid deleting allocations for instances being built * libvirt: remove old code in post\_live\_migration\_at\_destination * Using --option ARGUMENT * Add Flavor.description attribute * Modify incorrect debug meaasge in \_inject\_data * Avoid redundant security group queries in GET /servers/{id}/os-security-groups * Update contributor microversion doc for compute * Updated from global requirements * Granularize resources\_from\_{flavor|request\_spec} * Parse granular resources/traits from extra\_specs * placement: Parse granular resources & traits * RequestGroup class for placement & consumers * Factor out duplicated notification sample data * libvirt: Don't VIR\_MIGRATE\_NON\_SHARED\_INC without migrate\_disks * libvirt: do unicode conversion for error messages * Remove cells v2 transition code from update\_instance * Cleanup update\_instance cell mapping handling * Fix return type in FilterScheduler.\_legacy\_find\_hosts * Implement power\_off/power\_on for the FakeDriver * Remove instance.keypairs migration code * conf: Validate '[api] vendordata\_providers' options * conf: Remove 'vendordata\_driver' opt * Trivial grammar fix * Fix warning on {'cell\_id': 1} is an invalid UUID * Move contrail vif plugging to privsep * Move plumgrid vif plugging to 
privsep * Move midonet vif plugging to privsep * Move infiniband vif plugging to privsep * Remove compatibility method from FlavorPayload * placement: Contributor doc microversion checklist * libvirt: do not remove inst\_base when volume-backed during resize * Refactor claim\_resources() to use retries decorator * Make put\_allocations() retry on concurrent update * [placement] avoid case issues microversions in gabbits * Fix format in live-migration-usage.rst * Don't update RT in \_allocate\_network * Transform keypair.import notification * api-ref: document caveats with scheduler hints * add whereto for testing redirect rules * rp: break functions out of \_set\_traits() * Use Migration object in ComputeManagerMigrationTestCase * check query param for used\_limits function * VMware: add support for graceful shutdown of instances * Pass requested\_destination in filter\_properties * Functional regression test for evacuate with a target * Fix indent in configuring-migrations.rst * XenAPI: resolve VBD unplug failure with VM\_MISSING\_PV\_DRIVERS error * libvirt: properly decode error message from qemu guest agent * Use ksa adapter for placement conf & requests * Only filter/weigh hosts once if scheduling a single instance * Update placement api-ref: allocations link in 1.11 * rt: Implement XenAPI get\_inventory() method * Fix instance lookup in hide\_server\_addresses extension * libvirt: remove extraneous retry assignment in cleanup method * libvirt: Don't disregard cache mode for instance boot disks * Fix live migration grenade ceph setup * Pass the correct image to build\_request\_spec in conductor.rebuild\_instance * rp: remove \_HasAResourceProvider mixin * rp: move RP.\_set\_traits() to module scope * rp: Remove RP.get\_traits() method * [placement] Limit number of attempts to delete allocations * [placement] Allow \_set\_allocations to delete allocations * conf: Move additional nova-net opts to 'network' * Do not attempt volume swap when guest is stopped/suspended * Convert IVS VIF plugging / unplugging to privsep * Move blkid calls to privsep * trivial: Rename 'policy\_check' -> 'policy' * test: Store the OutputStreamCapture fixture * Accept all standard resource classes in flavor extra specs * Fix AttributeError in BlockDeviceMapping.obj\_load\_attr * Move project\_id and user\_id to Allocation object * VGPU: Define vgpu resource class * Make migration uuid hold allocations for migrating instances * Fix wrapping of neutron forbidden error * Import user-data page from openstack-manuals * Import the config drive docs from openstack-manuals * Move kpartx calls to privsep * Move nbd commands to privsep * Move loopback setup and removal to privsep * Move the idmapshift binary into privsep * Include /resource\_providers/uuid/allocations link * xenapi: cached images should be cleaned up by time * Add test so we remember why CUSTOM\_ prefix added * Move xend existence probes to privsep * Move shred to privsep * Add alternate hosts * Implement query param schema for host index * conf: Remove deprecated 'null\_kernel' opt * Adds 'sata' as a valid disk bus for qemu and kvm hypervisors * propagate OSError to MigrationPreCheckError * Trivial: fix spelling of allocation\_request * Transform instance.trigger\_crash\_dump notification * Add debug information to metadata requests 17.0.0.0b1 ---------- * placement: integrate ProviderTree to report client * [Trivial] Fix up a docstring * Remove duplicate error info * [placement] Clean up TODOs in allocations.yaml gabbit * Add attachment\_get to 
refresh\_connection\_info * Add 'delete\_host' command in 'nova-manage cell\_v2' * Keep updating allocations for Ironic * docs: Explain the flow of the "serial console" feature * Send Allocations to spawn * Move lvm handling to privsep * Cleanup mount / umount and associated rmdir calls * Update live migration to use v3 cinder api * placement: set/check if inventory change in tree * Move restart\_compute\_service to a common place * Fix nova-manage commands that do not exist * fix cleaning up evacuated instances * doc: Fix command output in scheduler document * Refactor resource tracker to account for migration allocations * Revert allocations by migration uuid * Split get\_allocations\_for\_instance() into useful bits * Regenerate context during targeting * Pick ironic nodes without VCPU set * Don't use mock.patch.stopall * Move test\_uuid\_sentinels to NoDBTestCase * [placement] Confirm that empty resources query causes 400 * [placement] add coverage for update of standard resource class * api-ref: add warning about force evacuate for ironic * Add snapshot id to the snapshot notifications * Reproduce bug 1721652 in the functional test env * Add 'done' to migration\_get\_in\_progress\_by\_host\_and\_node filter * Update "SHUTOFF" description in API guide * api-ref: fix server status values in GET /servers docs * Fix connection info refresh for reboot * rp: rework AllocList.get\_all\_by\_consumer\_id() * rp: fix up AllocList.get\_by\_resource\_provider\_uuid * rp: remove ability to delete 1 allocation record * rp: remove dead code in Allocation.\_create\_in\_db() * rp: streamline InventoryList.get\_all\_by\_rp\_uuid() * rp: remove CRUD operations on Inventory class * Make expected notifications output easier to read in tests * Elevate existing RequestContext to get bandwidth usage * Fix target\_cell usage for scatter\_gather\_cells * Nix bug msg from ConfGroupForServiceTypeNotFound * nova-manage map\_instances is not using the cells info from the API database * Updated from global requirements * Update cinder in RequestContext service catalog * Target context for build notification in conductor * Don't fix protocol-less glance api\_servers anymore * Move user\_data max length check to schema * Remove unnecessary BDM destroy during instance delete * rp: Move RP.\_get|set\_aggregates() to module scope * rp: de-ORM ResourceProvider.get\_by\_uuid() * use already loaded BDM in instance.create * use already loaded BDM in instance. (2) * use already loaded BDM in instance. 
* Remove dead code of api.fault notification sending * Fix sending legacy instance.update notification * doc: Rework man pages * Fix typo in test\_prep\_resize\_errors\_migration * Fix minor input items from previous patches * nova.utils.get\_ksa\_adapter() * De-duplicate \_numa\_get\_flavor\_XXX\_map\_list * hardware: Flatten functions * Update libvirt volume drivers to use os-brick constants * Always put 'uuid' into sort\_keys for stable instance lists * Fix instance\_get\_by\_sort\_filters() for multiple sort keys * Deprecate allowed\_direct\_url\_schemes and nova.image.download.modules * Add error notification for instance.interface\_attach * Note TrustedFilter deprecation in docs * Make setenv consistent for unit, func, and api-samples * Blacklist test\_extend\_attached\_volume from cells v1 job * Pre-create migration object * Remove metadata/system\_metadata filter handling from get\_all * fix unstable shelve offload functional tests * TrivialFix: Fix the incorrect test case * stabilize test\_resize\_server\_error\_and\_reschedule\_was\_failed * api-ref: note that project\_id filter only works with all\_tenants * Avoid redundant BDM lookup in check\_can\_live\_migrate\_source * Only query BDMs once in API during rebuild * Make allocation cleanup honor new by-migration rules * Modernize set\_vm\_state\_and\_notify * Remove system\_metadata loading in Instance.\_load\_flavor * Stop joining on system\_metadata when listing instances * Remove old compat code from servers ViewBuilder.\_get\_metadata * Remove unused get\_all\_instance\_\*metadata methods * doc: Add documentation for cpu\_realtime, cpu\_realtime\_mask * Remove 400 as expected error * Remove doc todo related to bug/1506667 * api-ref: add note about rebuild not replacing volume-backed root disk * api-ref: remove redundant preserve\_ephemeral mention from rebuild docs * [placement] gabbi tests for shared custom resource class * Update RT aggregate map less frequently * libvirt: add method to configure migration speed * Set migration object attributes for source/dest during live migrate * Refactor duplicate code for looking up the compute node name * Fix CellDatabases fixture swallowing exceptions * Use improved instance\_list module in compute API * Fix a pagination logic bug in test\_bug\_1689692 * Add hints to what the Migration attribute values are * Move cell0 marker test to Cellsv1DeprecatedTestMixIn * Ensure instance can migrate when launched concurrently * console: introduce basic framework for security proxying * [placement] Update the placement deployment instructions * Move allocation manipulation out of drop\_move\_claim() * Do not monkey patch eventlet in unit tests * Do not setup conductor in BaseAPITestCase * Make etree.tostring() emit unicode everywhere * Fix inconsistency of 'NOTE:' description * Don't shell out to mkdir, use ensure\_tree() * Read from console ptys using privsep * Move ploop commands to privsep * Set group\_members when converting to legacy request spec * Support qemu >= 2.10 * Fix policy check performance in 2.47+ * doc: make host aggregates examples more discoverable * Remove dest node allocations during live migration rollback * Fix race in delete allocation in ServerMovingTests * xenapi: pass migrate\_data to recover\_method if live migrate fails * \_rollback\_live\_migration in live-migration seqdiag * Log consumer uuid when retrying claims in the scheduler * Add recreate test for live migrate rollback not cleaning up dest allocs * Add slowest command to tox.ini * Make TestRPC inherit from 
the base nova TestCase * Ensure errors\_out\_migration errors out migration * use context mgr in instance.delete * Implement query param schema for GET hypervisor(2.33) * Remove SCREEN\_LOGDIR from devstack install setting * Fix --max-count handling for nova-manage cell\_v2 map\_instances * Set the Pike release version for scheduler RPC * Add functional for live migrate delete * Fix IoOpsFilter test case class name * Add get\_node\_uuid() helper to ResourceTracker * Live Migration sequence diagram * Deprecate idle\_timeout in api\_database * cleanup test-requirements * Add 400 as error code for resource class delete * Implement query param schema for agent index * fix nova accepting invalid availability zone name with ':' * check query param for service's index function * Remove useless periodic task that expires quota reservations * Add attachment\_get call to volume/cinder\_api * Add functional migrate force\_complete test * Copy some tests to a cellsv1 mixin * Add get\_instance\_objects\_sorted() * Make 'fault' a valid joined query field for Instance * Change livesnapshot to true by default * docs: Rename cellsv2\_layout -> cellsv2-layout * Add datapath type information to OVS vif objects * libvirt: Make 'get\_domain' private * Fix 500 if list servers called with empty regex pattern * Vzstorage: synchronize volume connect * Add \_wait\_for\_action\_fail\_completion to InstanceHelperMixin * Remove allocations when unshelve fails on host * Updated from global requirements * Add instance.interface\_detach notification * Add default configuration files to data\_files * Remove method "\_get\_host\_ref\_from\_name" * Add a regression test for bug 1718455 * Add recreate test for unshelve offloaded instance spawn fail * Add PowerVM hypervisor configuration doc * Add tests to validate instance\_list handles faults correctly * Add fault-filling into instance\_get\_all\_by\_filters\_sort() * Support pagination in instance\_list * Add db.instance\_get\_by\_sort\_filters() * Make instance\_list honor global query limit * Add base implementation for efficient cross-cell instance listing * Fix hyperlinks in document * api-ref: fix default sort key when listing servers * Add instance.interface\_attach notification * libvirt: bandwidth param should be set in guest migrate * Updated from global requirements * Add connection pool size to vSphere settings * Add live.migration.force.complete to the legacy notification whitelist * Restore '[vnc] vnc\_\*' option support * neutron: handle binding:profile=None during migration * doc: Add documentation for emulator\_thread\_policy * doc: Split flavors docs into admin and user guides * VMware: Factor out relocate\_vm() * remove re-auth logic for ironic client wrapper * hyperv: report disk\_available\_least field * Allow shuffling hosts with the same best weight * Enable custom certificates for keystone communication * Fix the ocata config-reference URLs * Fix a typo * Account for compute.metrics.update in legacy notification whitelist * use unicode in tests to avoid SQLA warning * Move libvirts dmcrypt support to privsep * Squash dacnet\_admin privsep context * Squash dac\_admin privsep context * Move the dac\_admin privsep code to a new location * Use symbolic names for capabilities, expand sys\_admin context * stabilize test\_resize\_server\_error\_and\_reschedule\_was\_failed * Updated from global requirements * Drop support for the Cinder v2 API * Remove 400 as expected error * Set error state after failed evacuation * Add @targets\_cell for 
live\_migrate\_instance method in conductor * [placement] Removing versioning from resource\_provider objects * doc: rename the Indices and Tables section * doc: Further cleanup of doc contributor guide * [placement] Unregister the ResourceProvider object * [placement] Unregister the ResourceProviderList object * [placement] Unregister the Inventory object * [placement] Unregister the InventoryList object * [placement] Unregister the Allocation object * [placement] Unregister the AllocationList object * [placement] Unregister the Usage object * [placement] Unregister the UsageList object * [placement] Unregister the ResourceClass object * [placement] Unregister the ResourceClassList object * [placement] Unregister the Trait object * [placement] Unregister the TraitList object * Add '\_has\_qos\_queue\_extension' function * Add '\_has\_dns\_extension' function * Assume neutron auto\_allocate extension's enabled * Add single quotes for posargs on jobs * Add nova-manage db command for ironic flavor migrations * enhance api-ref for os-server-external-events * Have one list of reboot task\_states * Call terminate\_connection when shelve\_offloading * Revert "Enable test\_iscsi\_volume in live migration job" * Target context when setting instance to ERROR when over quota * Cleanup running of osprofiler tests * Fix test runner config issues with os-testr 1.0.0 * Fix missed chown call * Updated from global requirements * Tweak connection\_info translation for the new Cinder attach/detach API * Add attachment\_complete call to volume/cinder.py * Remove dest node allocation if evacuate MoveClaim fails * Add a test to make sure failed evacuate cleans up dest allocation * Add recreate test for evacuate claim failure * Create allocations against forced dest host during evacuate * fake\_notifier: Refactor wait\_for\_versioned\_notification * Transform instance.resize.error notifications * Update docs to include standardization of VM diagnostics * Refactor ServerMovingTests for non-move tests * Remove deprecated keymgr code * Move execs of tee to privsep * Add ComputeNodeList.get\_by\_hypervisor\_type() * Split out the core of the ironic flavor migration * Fix binary name * Revert "Revert "Fix AZ related API docs"" * [placement] Correct a comment in \_set\_allocations * Remove Xen networking plugin * Revert "Fix AZ related API docs" * [placement] correct error on bad resource class in allocation * api-ref: note the microversions for GET /resource\_providers query params * doc: fix flavor notes * Fix AZ related API docs * Transform aggregate.remove\_host notification * Transform servergroup.delete notification * Transform aggregate.add\_host notification * Cleanup unused get\_iscsi\_initiator * Remove two testing stubs which aren't really needed * Typo error about help resource\_classes.inc * Transform servergroup.create notification * Set regex flag on ostestr command for osprofiler tests * Transform keypair.delete notification * Move execs of touch to privsep * Move libvirt usages of chown to privsep * Enable test\_iscsi\_volume in live migration job * Refactor out claim\_resources\_on\_destination into a utility * Fix broken URLs * Ensure instance mapping is updated in case of quota recheck fails * Track which cell each instance is created in and use it consistently * Make ConductorTaskTestCase run with 2 cells * xenapi: Exception Error logs shown in Citrix XenServer CI * Update contributor guide for Queens * Allow setting up multiple cells in the base TestCase * Fix test\_rpc\_consumer\_isolation for 
oslo.messaging 5.31.0 * Fix broken link * First attempt at adding a privsep user to nova itself * Provide hints when nova-manage db sync fails to sync cell0 * Add release note for force live migration allocations * Handle exception on adding secgroup * doc: Add configuration index page * doc: Add user index page * spelling mistake * Fix ValueError if invalid max\_rows passed to db purge * Remove usage of kwarg retry\_on\_request in API * Add release note for requiring shred 8.22 or above * Make xen unit tests work with os-xenapi>=0.3.0 * Skip more racy rebuild failing tests with cells v1 * Add some inline code docs tracing the cold migrate flow * Mark LXC as missing for swap volume support * Remove compatibility code for flavors * rbd: Remove unnecessary 'encode' calls * Updated from global requirements * Pass config object to oslo\_reports * Replace http with https for doc links in nova * Put base policy rules at first * Amend uuid4 hacking rule * conf: Rename two VNC options * Correct examples in "Manage Compute services" documentation * Handle deleted instances when refreshing the info\_cache * Remove qpid description in doc * Replace dd with shred for zeroing lvm volumes * Update docs for \_destroy\_evacuated\_instances * doc: link to versioned notification samples from main index * doc: link to placement api-ref and history docs from main index * doc: fix online\_data\_migrations option in upgrades doc * Add recreate test for forced host evacuate not setting dest allocations * add online\_data\_migrations to nova docs * Glance download: only fsync files * Functional test for regression bug #1713783 * doc: fix show-hide sample in notification devref * Default the service version in the notification tests * api-ref: add warnings about forcing the host for live migrate/evacuate * HyperV: Perform proper cleanup after failed instance spawns * [placement] Update user doc with api-ref link * [placement] api-ref GET /traits name:startswith * Add video type virtio for AArch64 * Document tagged attach in the feature support matrix * [placement] Require at least one resource class in allocation * Enhance doc for nova services * Update doc to indicate nova-network deprecated * Updated from global requirements * [placement] Add test for empty resources in allocation * Refactor LiveMigrationTask.\_find\_destination * Cleanup allocations on invalid dest node during live migration * Hyper-V: Perform proper cleanup after cold migration * Test InstanceNotFound handling in 'nova usage' * Typo fix in admin doc ssh-configuration.html * iso8601.is8601.Utc No Longer Exists * Fix nova assisted volume snapshots * Fix \_delete\_inventory log message in report client * Add functional recreate test for live migration pre-check fails * doc: Remove deprecated call to sphinx.util.compat * Remove unneeded attributes from context * Updates to scheduling workflow doc * Add uuid online migration for migrations * Add uuid to migration object and migrate-on-load * Add uuid to migration table * Add placeholder migrations for Pike backports * Clarify the field usage guidelines * Optimize MiniDNS for fewer syscalls * [Trivial] docstrings, typos, minor refactoring * Update PCI passthrough doc for moved options * tests: De-duplicate some graphics tests * Reduce code complexity - linux\_net.py * Refactor init\_instance:resume\_guests\_state * conf: Allow users to unset 'keymap' options * Change default for [notifications]/default\_publisher\_id to $host * Deprecate CONF.monkey\_patch * Add device tag support info in support 
matrix * Prevent blank line at start of migration placeholders * Remove useless error handling in prep\_resize * De-duplicate two delete\_allocation\_for\_\* methods * Move hash ring initialization to init\_host() for ironic * Fix bug on vmware driver attach volume failed * fix a typo in format\_cpu\_spec doc * Cleanup allocations in failed prep\_resize * Add functional test for rescheduling during a migration * Remove allocation when booting instance rescheduled or aborted * Fix sample configuration generation for compute-related options * Add formatting to scheduling activity diagram * Monkey patch the blockdiag extension * docs: Document the scheduler workflow * Updated from global requirements * Delete instance allocations when the instance is deleted * How about not logging errors every time we shelve offload? * Add missing tests for \_remove\_deleted\_instances\_allocations * nova-manage: Deprecate 'cell' commands * Add missing unit tests for FilterScheduler.\_get\_all\_host\_states * api-ref: fix key\_name note formatting * Assume neutron port\_binding extensions enabled * libvirt: Fix getting a wrong guest object * pci: Validate behavior of empty devname * Tests: Add cleanup of 'instances' directory * Remove the section about extensions from the API concept doc * Restrict live migration to same cell * Remove source node allocation after live migration completes * Allocate resources on forced dest host during live migration * Add language for compute node configuration * trivial: Remove some single use function from utils * Add functional live migrate test * Add functional force live migrate test * doc: Address review comments for main index * trivial: Remove dead function, variable * tests: Remove useless test * Remove plug\_ovs\_hybrid, unplug\_ovs\_hybrid * Correct statement in api-ref * Fix a typo in code comment * Refactor libvirt.utils.execute() away * Fix quobyte test\_validate\_volume\_no\_mtab\_entry * Updated from global requirements * update comment for dropping support * Move common definition into common layer * Remove host filter for \_cleanup\_running\_deleted\_instances periodic task * Fix contributor documentation * replace chance with filter scheduler in func tests * Clean up resources at shelve offload * test shelve and shelve offload with placement * Amend the code review guide for microversion API * delete allocation of evacuated instance * Make scheduler.utils.merge\_resources ignore zero values * Fix a wrong link * Fix reporting inventory for provisioned nodes in the Ironic driver * Avoid race in test\_evacuate * Reset client session when placement endpoint not found * Update api doc with latest updates in api framework * doc: Extend nfv feature matrix with pinning/NUMA * Always use application/json accept header in report client * Fix messages in functional tests * Handle addition of new nodes/instances in ironic flavor migration * Skip test\_rebuild\_server\_in\_error\_state for cells v1 * test server evacuation with placement * doc: add superconductor up-call caveat for cross\_az\_attach=False * doc: add another up-call caveat for cells v2 for xenapi aggregates * Update reno for stable/pike * Deprecate bare metal filters 16.0.0.0rc1 ----------- * Remove "dhcp\_options\_for\_instance" * Clarifying node\_uuid usage in ironic driver * doc: address review comments in stable-api guide updates * Resource tracker compatibility with Ocata and Pike * placement: avoid returning duplicated alloc\_reqs when no sharing rp * Imported Translations from Zanata * 
[placement] Make placement\_api\_docs.py failing * [placement] Add api-ref for allocation\_candidates * Clarify that vlan feature means nova-network support * [placement] Add api-ref for RP usages * Remove ram/disk sched filters from default list * Remove provider allocs in confirm/revert resize * placement: refactor healing of allocations in RT * remove log message with potential stale info * doc: Address review comments for contributor index * Require Placement 1.10 in nova-status upgrade check * Mark Chance and Caching schedulers as deprecated * [placement] Add api-ref for usages * Clean up \*most\* ec2 / euca2ools references * Add documentation for documentation contributions * Structure cli page * doc: Import configuration reference * Add release note for shared storage known issue * Improve stable-api doc with current API state * update policy UT fixtures * Bulk import all config reference figures * rework index intro to describe nova * Mark max microversion for Pike in history doc * Add a prelude section for Pike * doc: provide more details on scheduling with placement * Add functional test for local delete allocations * Document service layout for consoles with cells * Add For Operators section to front page * Create For End Users index section * doc: code review considerations for online data migrations * Add track\_instance\_changes note in disable\_group\_policy\_check\_upcall * Cleanup release note about ignoring allow\_same\_net\_traffic * no instance info cache update if instance deleted * Add format\_dom for PCI device addresses * doc: Add additional content to admin guide * Create reference subpage * Raise NoValidHost if no allocation candidates * Fix all >= 2 hit 404s * Handle ironicclient failures in Ironic driver * Fix migrate single instance when it was created concurrently * trivial: Remove files from 'tools' * trivial: Remove "vif" script * tools/xenserver: Remove 'cleanup\_sm\_locks' * Test resize with too big flavor * [placement] Add api-ref for RP allocations * placement: filtering the resource provider id when delete trait association * Updated from global requirements * Add resource utilities to scheduler utils * Add Contributor Guide section page * Fix getting instance bdms in multiple cells * Update install guide to clearly define between package installs * doc: Import administration guide * doc: Import installation guide * Complete dostring of live\_migration related methods * Add a caveat section about cellsv2 upcalls * doc: Start using oslo\_policy.sphinxext * policies: Fix Sphinx issues * doc: Start using oslo\_config.sphinxext * doc: Rework README to reflect new doc URLs * doc: Remove dead files * nova-manage: Deprecate '--version' parameters * imagebackend: cleanup constructor args to Rbd * Sum allocations in the scheduler when resizing to the same host * doc: Make use of definition lists, literals * hardware offload support for openvswitch * reflow rpc doc to 80 columns * fix list rendering in policy-enforcement * Fix scope of errors\_out\_migration in finish\_resize * Fix scope of errors\_out\_migration in resize\_instance * Split Compute.errors\_out\_migration into a separate contextmanager * fix list rendering in cells * fix list rendering in aggregates * Fix list rendering in bdm doc * fix list rendering in rpc doc * Fix list rendering in code-review.rst * Fix whitespace in rest\_api\_version\_history * Fix lists in process doc * [placement] Avoid error log on 405 response * Keep the code consistent * Filter out stale migrations in resource audit * 
Test resize to same host with placement api * fix rpc broken rst comment * sort redirectmatch lines * add top 404 redirect * [placement] Require at least one allocation when PUT * Add redirect for api-microversion-history doc * Fix 409 handling in report client when deleting inventory * Detach device from live domain even if not found on persistent * Cleanup unnecessary logic in os-volume\_attachments controller code * Adopt new pypowervm power\_off APIs * placement: remove existing allocs when set allocs * Additional assertions to resize tests * Accept any scheduler driver entrypoint * add redirects for existing broken docs urls * Add some more cellsv2 doc goodness * Test resize with placement api * Deprecate cells v1 * Add release note for PUT /os-services/\* for non-compute services * Updated from global requirements * Don't warn on expected network-vif-unplugged events * do not pass proxy env variables by tox * Show quota detail when inject file quota exceeds * rootwrap.d cleanup mislabeled files * always show urls in list\_cells * api-ref: requested security groups are not applied to pre-existing ports * api-ref: fix security\_groups response parameter in os-security-groups * Clean variable names and docs around neutron allocate\_for\_instance * explain payload inheritance in notification devref * Update SSL cert used in testing * Remove RamFilter and DiskFilter in default filter * Enhance support matrix document * remove extension param and usage * Add description on maximum placement API version * Updated from global requirements * Add cinder keystone client opts to config reference * Updated from global requirements * fix test\_rebuild\_server\_exc instability * [placement] quash unicode warning with shared provider * add a redirect for the old cells landing page * Remove unnecessary code 16.0.0.0b3 ---------- * claim resources in placement API during schedule() * placement: account for move operations in claim * add description about key\_name * doc: add FAQ entry for cells v1 config options * Add oslo\_concurrency=INFO to default log levels for nova-manage * stabilize test\_create\_delete\_server functional test * Ensure we unshelve in the cell the instance is mapped * Fix example in \_serialize\_allocations\_for\_consumer * deprecate \`\`wsgi\_log\_format\`\` config variable * Move the note about '/os-volume\_boot' to the correct place * Remove the useless fake ExtensionManager from API unittests * Netronome SmartNIC Enablement * Updated from global requirements * Enhance support matrix document * add cli to support matrix * add a retry on DBDeadlock to \_set\_allocations() * docstring and unused code removal * libvirt: Post-migration, set cache value for Cinder volume(s) * use os\_traits.MISC\_SHARES\_VIA\_AGGREGATE * style-only: s/context/ctx/ * Instance remains in migrating state forever * Add helper method for waiting migrations in functional tests * Improve assertJsonEqual error reporting * Translate the return value of attachment\_create and \_update * Move the last\_bytes util method to libvirt * Do not import nova.conf into nova/exception.py * Set IronicNodeState.uuid in \_update\_from\_compute\_node * Add VIFHostDevice support to libvirt driver * Remove redundant free\_vcpus logging in \_report\_hypervisor\_resource\_view * Remove the useless extension block\_device\_mapping\_v1 object * Remove the useless FakeExt * Remove the code related to extension loading from APIRouterV21 * Add 'updated\_at' field to InstancePayload in notifications * Use wsgi-intercept in 
OSAPIFixture * API ref: associate floating IP requires Active status * Suppress some test warnings * Use enum value instead of string service name * rename binary to source in versioned notifications * Trim the fat from InstanceInfo * [placement] Use wsgi\_intercept in PlacementFixture * Replaces uuid.uuid4 with uuidutils.generate\_uuid() * Ironic: Support boot from Cinder volume * [placement] Flush RC\_CACHE after each gabbit sequence * Stop using mox stubs in cast\_as\_call.py * Add online migration to move quotas to API database * Migrate Ironic Flavors * Add tags to instance.create Notification * request\_log addition for running under uwsgi * Stop using mox stubs in test\_console\_auth\_tokens.py * Increase cpu time for image conversion * Remove an unnecessary argument in \_prep\_resize * Updated from global requirements * Using plain routes for the microversions test * Updated from global requirements * Updated from global requirements * placement: add retry tight loop claim\_resources() * Dump versioned notifications when test\_create\_delete\_server * retry on authentication failure in api\_client * Change default policy to view quota details * Implement interface attach/detach in ironic virt driver * Update policy description for 'instance\_actions' * Update ironic feature matrix * Updated from global requirements * doc: Switch to openstackdocstheme * Don't cast cinderclient microversions to float * Remove the unittest for plugin framework * Use plain routes list for versions instead of stevedore * Removed unused 'wrap' property * Make Quotas object favor the API database * Remove check\_detach * Remove improper LOG.exception() calls in placement * VMware: Handle missing volume vmdk during detach * Use \_error\_out\_instance\_on\_exception in finish\_resize * placement: proper JOIN order for shared resources * placement: alloc candidates only shared resources * Allow wrapping of closures * Updated from global requirements * provide interface-scoped nameserver information * Only setup iptables for metadata if using nova-net * Fix and optimize external\_events for multiple cells * Add policy granularity to the Flavors API * Deprecate useless quota\_usage\_refresh from nova-manage * add dict of allocation requests to select\_dests() * Handle None returned from get\_allocation\_candidates due to connect failure * Updated from global requirements * Update URL home-page in documents according to document migration * api-ref: Fix an expand button in os-quota-sets * Correct the description of 'disable-log-reason' api-ref * Consider instance flavor resource overrides in allocations * Do not mention that tags are case sensitive in docs * api-ref: fix max\_version for deprecated os-quota-class-sets parameters * Handle uuids in os-hypervisors API * Use uuid for id in os-services API * Remove 'reserved' count from used limits * Make security\_group\_rules use check\_deltas() for quota * Make key\_pairs use check\_deltas() for quota * Count instances to check quota * Use plain routes list for extension\_info instead of stevedore * Use plain routes list for os-snapshots instead of stevedore * Use plain routes list for os-volume-attachments instead of stevedore * doc: Populate the 'user' section * doc: Populate the 'reference' section * doc: Populate the 'contributor' section * doc: Populate the 'configuration' section * Add log info in scheduler to mark start of scheduling * [placement] Add api-ref for allocations * [placement] Add api-ref for RP traits * [placement] Add api-ref for traits * 
Remove translation of log messages * Fix indentation in policy doc * conf: remove \*\_topic config opts * Stop using mox stubs in test\_remote\_consoles.py * api-ref: Verify parameters in os-migrations.inc * Use URIOpt * Convert HostState.limits['numa\_topology'] to primitive * Log compute node uuid when the record is created * Remove key\_manager.api\_class hack * Update policy descriptions for base * Consistent policies * Support tag instances when boot(4/4) * Fix instance evacuation with PCI devices * [placement] fix 500 error when allocating to bad class * [placement] Update allocation-candidates.yaml for gabbi 1.35 * fix test\_volume\_swap\_server instability * XenAPI: Fix ValueError in test\_slave\_asks\_master\_to\_add\_slave\_to\_pool * api-ref: mention disk size limitations in resize flavor * [placement] cover deleting a custom resource class in use * [placement] cover deleting standard trait * Updated from global requirements * fix unshelve notification test instability * scheduler: isolate \_get\_sorted\_hosts() * Set wsgi.keep\_alive=False globally for tests * Dump tracked version notifications when swap volume tests fail * Default reservations=None in Cells v1 and conductor APIs * Avoid false positives of Jinja2 in Bandit scan * Updated from global requirements * Remove 'create\_rule\_default' * Use oslo.polcy DocumentedRuleDefault * trivial: Remove unnecessary function * doc: Populate the 'cli' section * Fix the releasenote and api-ref for quota-class API * Fix typo * Stop counting hw\_video:ram\_max\_mb against quota * Add ability to signal and perform online volume size change * api-ref: mark instance action events parameter as optional * Add BDM to InstancePayload * placement: add claim\_resources() to report client * doc: Enable pep8 on doc generation code * doc: Remove dead plugin * Use plain routes list for os-volumes instead of stevedore * Use plain routes list for os-baremetal-nodes endpoint instead of stevedore * Use plain routes list for os-security-group-default-rules instead of stevedore * Use plain routes list for os-security-group-rules instead of stevedore * Use plain routes list for server-security-groups instead of stevedore * Use plain routes list for os-security-groups instead of stevedore * Use plain routes list for image-metadata instead of stevedore * Use plain routes list for images instead of stevedore * Remove the test for the route '/resources.:(format)' * doc: Use consistent author, section for man pages * Use plain routes list for os-networks instead of stevedore * doc: Remove cruft from conf.py * Fix wrong log parm * Query deleted instance records during \_destroy\_evacuated\_instances * Skip boot from encrypted volume on Xen+libvirt * improve notification short-circuit * Use PCIAddressField in oslo.versionedobjects * Fix quota class set APIs * api-ref: Add X-Openstack-Request-Id description * Fix a missing classifier * api-ref: Add missing parameters in limits.inc * api-ref: Fix parameters in server-security-groups * Stop using deprecated 'message' attribute in Exception * Adjust error msg for ImageNUMATopologyAsymmetric * placement: scheduler uses allocation candidates * Trivial: Remove unnecessary format specifier * Handle Cinder 3.27 style attachments in swap\_volume * Support tag instances when boot(3/4) * Remove reverts\_task\_state decorator from swap\_volume * Pre-load instance.device\_metadata in InstanceMetadata * Updated from global requirements * [placement] Improve allocation\_candidates coverage * xenapi: avoid unnecessary BDM query 
when building device metadata * Add release note for xenapi virt device tagging support * Make notification publisher\_id consistent * Modify some comments for virt driver * Fix parameters and description for os-volume\_attachments * Remove nova.api.extensions.server.extensions usage * Fix error message when support matrix entry is missing a driver * Fix comment for API binary name in WSGIService * Fix arguments in calling \_delete\_nic\_metadata * Fix incorrect docstrings in neutron network API * Add 'networks' quota in quota sample files * Reset the traits sync flag in the placement fixtures * Add api-ref for os-quota-class-set APIs * trivial: Use valid UUIDs in test\_admin\_password * placement: filter usage records by resource provider id * Fix 'project-id' 'user-id' as required in server group * Reduce (notification) test duplication * Use plain routes list for os-cells endpoint instead of stevedore * Hyper-V: fix live migration with CSVs * placement: support GET /allocation\_candidates * Handle keypair not found from metadata server using cells * Don't delete neutron port when attach failed * Removes getfattr from Quobyte Nova driver * libvirt: update the logic to configure volume with scsi controller * libvirt: update logic to configure device for scsi controller * Updated from global requirements * conf: fix netconf, my\_ip and host are unclear * Remove wsdl\_location configuration option * hyperv: Fixes log message in livemigrationops * hyperv: stop serial console workers while deleting vm files * hyperv: Fixes Generation 2 VMs volume boot order * Ensure the JSON-Schema covers the legacy v2 API * API support for tagged device attachment * Delete disk metadata when detaching volume * Add scatter gather utilities for cells * Sanitize instance in InstanceMetadata to avoid un-pickleable context * remove the very old unmaintained wsgi scripts * Extract custom resource classes from flavors * Fix the log information argument mistake * Remove mox from nova.tests.unit.virt.xenapi.test\_vm\_utils.py * Handle version for PUT and POST in PlacementFixture * Add a reset for traits DB sync * Strengthen the warning on the old broken WSGI script * Add key\_name field to InstancePayload * Add keypairs field to InstanceCreatePayload * api-ref: Fix missing parameters in API Versions * placement: refactor driver select\_destinations() * Updated from global requirements * VStorage: changed default log path * Add python 3.5 in classifier * Delete nic metadata when detaching interface * Remove mox from nova.tests.unit.api.openstack.compute.test\_limits * Add get\_count\_by\_vm\_state() to InstanceList object * move resources\_from\_request\_spec() to utils * return 400 Bad Request when empty string resources * placement: adds ProviderTree for nested resources * Add missing microversion documentation * Remove mox in test\_availability\_zone.py * Stop using mox stubs in test\_keypairs.py * Plumbing for tagged nic attachment * Remove code that produces warning in modern Python * Plumbing for tagged volume attachment * Fix misuse of assertIsNone * Simplify a condition * Support paging over compute nodes with a uuid marker * Update api-ref to indicate swap param * \_schedule\_instances() supporting a RequestSpec object * Removes potentially bad exit value from accepted list in Quobyte volume driver * Switch Nova Quobyte volume driver to mount via systemd-run * Clean up volumes on boot failure * Explain why API services are filtered out of GET /os-services * Fix redundant BDM lookups during rebuild * Delete 
all inventory has its own method DELETE * Remove translation of log messages * hypervisor\_hostname must match get\_available\_nodes * Fix using targeted cell context when finding services in cells * [doc] Updated sqlalchemy URL in migrate readme * placement: separate normalize\_resources\_qs\_param * Updated from global requirements * Use more specific asserts in tests * Transform instance.soft\_delete notifications * Fix the note at the end of allocate\_for\_instance * Count floating ips to check quota * Add FloatingIPList.get\_count\_by\_project() * Count fixed ips to check quota * Add FixedIPList.get\_count\_by\_project() * Count security groups to check quota * Add SecurityGroupList.get\_counts() * Count networks to check quota * Provide a hint when \_verify\_response fails * Provide error message in MismatchError for api-samples tests * placement: produce set of allocation candidates * Reduce code duplication * Use plain routes list for os-remote-consoles instead of stevedore * Remove multiple create from stevedore * Use plain routes list for os-tenant-networks instead of stevedore * Use plain routes list for os-cloudpipe endpoint instead of stevedore * Use plain routes list for os-quota-classes endpoint instead of stevedore * Consolidate index and detail methods in HypervisorsController * Handle uuid in HostAPI.compute\_node\_get * api-ref: fix unshelve asynchronous postconditions typo * add missing notification samples to dev ref * Fix regression preventing reporting negative resources for overcommit * Add separate instance.create payload type * placement: Add GET /usages to placement API * placement project\_id, user\_id in PUT /allocations * api-ref: fix hypervisor\_hostname description for Ironic * Updated from global requirements * Provide original fault message when BFV fails * Add PowerVM to nova support matrix * remove null\_safe\_int from module scope * Fix a wrong comment * Stop caching compute nodes in the request * Centralize compute\_node\_search\_by\_hypervisor in os-hypervisors * api-ref: cleanup PUT /os-hypervisors/statistics docs * Make compute\_node\_statistics() work across cells * Only auto-disable new nova-compute services * Cleanup the plethora of libvirt live migration options * [placement] Update placement devref to modern features * Make all timestamps formats equal * Transform keypair.create notification * remove mox from unit/virt/vmwareapi/test\_driver\_api.py * XenAPI: device tagging * Updated from global requirements * api-ref: fix misleading description in PUT /os-services/disable * Remove service control from feature support matrix * Indicate Hyper-v supports fibre channel in support matrix * Use CONF.host for powervm nodename * Pull out code that builds VIF in \_build\_network\_info\_model * Use plain routes list for os-server-groups endpoint instead of stevedore * Use plain routes list for user\_data instead of stevedore * remove get\_nw\_info\_for\_instance from compute.utils * remove ugly local import * Add missing query filter params for GET /os-services API * XenAPI: Create linux bridge in dest host during live migration * Remove translation of log messages * Count server group members to check quota * Add bool\_from\_string for force-down action * Remove old service version check for mitaka * Clarify conf/compute.py help text for ListOpts * Use plain routes list for block\_device\_mapping instead of stevedore * Use plain routes list for os-consoles, os-console-auth-tokens endpoint instead of stevedore * [placement] Increase test coverage * 
Remove unused variable * pci: add uuid field to PciDevice object * libvirt: dump debug info when interface detach times out * Amend api-ref for multiple networks request * Remove translation of log messages * Calculate stopped instance's disk sizes for disk\_available\_least * Transform instance.live\_migration\_rollback notification * Add InstanceGroup.\_remove\_members\_in\_db 16.0.0.0b2 ---------- * Fix lookup of instance mapping in metadata set-password * libvirt: Extract method \_guest\_add\_spice\_channel * libvirt: Extract method \_guest\_add\_memory\_balloon * libvirt: Extract method \_guest\_add\_watchdog\_action * libvirt: Extract method \_guest\_add\_pci\_devices * libvirt: Extract method \_guest\_add\_video\_device * libvirt: fix alternative\_device\_name for detaching interfaces * [placement] Add api-ref for aggregates * Add docstring for test\_limit\_check\_project\_and\_user\_zero\_values * Skip microversion discovery check for update/delete volume attachments * Use 3.27 microversion when creating new style volume attachments * Use microversions for new style volume attachments * libvirt: handle missing rbd\_secret\_uuid from old connection info * Log a warning if there is only one cell when listing instances * [placement] Use util.extract\_json in allocations handler * [placement] Disambiguate resource provider conflict message * raise exception if create Virtuozzo container with swap disk * Convert additional disassociate tests to mock * Remove useless API tests * Remove \*\*kwargs passing in payload \_\_init\_\_ * Prefer non-PCI host nodes for non-PCI instances * Add PCIWeigher * XenAPI: Remove bittorrent.py which is already deprecated * Count server groups to check quota * Default to 0 when merging values in limit check * api-ref: fix type for hypervisor\_marker * Fix html\_last\_updated\_fmt for Python3 * nfs fix for xenial images * Remove unused CONF import from placement/auth.py * xen: pass Xen console in cmdline * Add earliest-version tags for stable branch renos * Fix the race condition with novnc * Add service\_token for nova-glance interaction * Adopts keystoneauth with glance client * placement: use separate tables for projects/users * Move rebuild notification tests into separate method * contrail: add vrouter VIF plugin type support * Fix cell0 naming when QS params on the connection * libvirt: Check if domain is persistent before detaching devices * Fix device metadata service version check for multiple cells * Remove cells topic configuration option * Add get\_minimum\_version\_all\_cells() helper for service * libvirt: rearange how scsi controller is defined * libvirt: set full description of the controller used by disk * libvirt: update LibvirtConfigGuestDeviceAddress to provide XML * Use plain routes list for os-services endpoint instead of stevedore * use plain routes list for os-virtual-interfaces * use plain routes list for hypervisor endpoint instead of stevedore * Use plain routes list for hosts endpoint instead of stevedore * Use plain routes list for os-fping endpoint * Use plain routes list for instance actions endpoint * Use plain routes list for server ips endpoint * XenAPI: use os-xenapi 0.2.0 in nova * Add InstanceGroupList.get\_counts() * Reset the \_TRAITS\_SYNCED global in Traits tests * Revert "Remove Babel from requirements.txt" * Avoid unnecessary lazy-loads in mutated\_migration\_context * libvirt: log vm and task state when vif plugging times out * Send out notifications when instance tags changed * Catch neutronclient.NotFound on 
floating deletion * Move notifications/objects/test\_base.py * Fixed some nits for microversion 2.48 * code comments incorrectness * Remove Babel from requirements.txt * Sync os-traits to Traits database table * Support tag instances when boot(2/4) * ComputeDriver.get\_info not limited to inst name * Replace messaging.get\_transport with get\_rpc\_transport * Be more tolerant of keystone catalog configuration * Send request\_id on glance calls * Updated from global requirements * [placement] Add api-ref for resource classes * Standardization of VM diagnostics info API * Remove unused exceptions * Refactor a test method including 7 test cases * Fix missing marker functions * Completed implementation of instance diagnostics for Xen * Updated from global requirements * Use VIR\_DOMAIN\_BLOCK\_REBASE\_COPY\_DEV when rebasing * show flavor info in server details * placement: Specific error for inventory in use * Updated from global requirements * Add database migration and model for consumers * add new test fixture flavor with extra\_specs * Updated from global requirements * Connecting Nova to DRBD storage nodes directly * Update server create networks API reference description for tags * libvirt: fix call args to destroy() during live migration rollback * Pass a list of instance UUIDs to scheduler * Fix call to driver\_detach in remove\_volume\_connection * Use plain routes list for server diagnostics endpoint * Use plain routes list for os-server-external-events endpoint * Use plain routes list for server-migrations endpoint instead of stevedore * Use plain routes list for server-tags instead of stevedore * Use plain routes list for os-interface endpoint instead of stevedore * Remove usage of parameter enforce\_type * placement: test for agg association not sharing * placement: test non-shared out of inventory * placement: tests for non-shared with shared * placement: shared resources when finding providers * Fix live migration devstack hook for multicell environment * Target cell on local delete * Updated from global requirements * Fix default\_availability\_zone docs * Send request\_id on neutron calls * Update policy description for os-volumes * Fix doc job with correct ref link * Remove oslo.config deprecated parameter enforce\_type * Completely remove mox from unit/network/test\_linux\_net.py * Add configuration options for certificate validation * Do not rely on dogpile internals for mocks * XenAPI: nova-compute cannot restart after manually delete VM * Add policy description for os-networks * Changing deleting stale allocations warning to debug * Replace diagnostics objects with Nova diagnostics objects * Added nova objects for intance diagnostics * [placement] adjust resource provider links by microversion * Add \`img\_hide\_hypervisor\_id\` image property * Catch InstanceNotFound when deleting allocations * Remove mox from nova/tests/unit/virt/xenapi/test\_xenapi.py[1] * [placement] Add api-ref for DELETE resource provider * [placement] Add api-ref for PUT resource provider * [placement] Add api-ref for GET resource provider * [placement] Add api-ref for POST resource provider * [placement] Add api-ref for DELETE RP inventory * [placement] Add api-ref for PUT RP inventory * [placement] Add api-ref for GET RP inventory * [placement] Add api-ref for DELETE RP inventories * [placement] Add api-ref for PUT RP inventories * Add check\_deltas() and limit\_check\_project\_and\_user() to Quotas * Enhancement comments on CountableResource * Deprecate TypeAffinityFilter * [placement] Add 
api-ref for GET RP inventories * Optimize creating security\_group * Limit the min length of string for integer JSON-Schema * Avoid lazy-loading instance.id when cross\_az\_attach=False * Use plain routes list for os-migrations endpoint instead of stevedore * Updated from global requirements * Migrate to oslo request\_id middleware - mv 2.46 * Ensure the value of filter parameter is unicode * XenAPI: Deprecate nicira-iface-id for XenServer VIF * Don't run ssh validation in cells v1 job * Fix MarkerNotFound when paging and marker was found in cell0 * Add recreate functional test for regression bug 1689692 * cinder: add attachment\_update method * cinder: add attachment\_create method * Use targeted context when burying instances in cell0 * Send request\_id on cinder calls * Remove unused migrate\_data kwarg from virt driver destroy() method * Fix the display of updated\_at time when using memcache driver * Check instance existing before check in mapping * Remove mox from unit/cells/test\_cells\_messaging.py * make sure to rebuild claim on recreate * Nix redundant dict in set\_inventory\_for\_provider * PowerVM Driver: SSP emphemeral disk support * Avoid lazy-load error when getting instance AZ * Handle conflict from neutron when addFloatingIP fails * re-Allow adding computes with no ComputeNodes to aggregates * Libvirt volume driver for Veritas HyperScale * Make the method to put allocations public * Don't delete allocation if instance being scheduled * Exclude deleted service records when calling hypervisor statistics * Modify incorrect comment on return\_reservation\_id * Remove incorrect comments in multiple\_create * Have nova.context use super from\_context * Handle new hosts for updating instance info in scheduler * [Trivial] Hyper-V: accept Glance vhdx images * Add strict option to discover\_hosts * make route and controller in alpha sequence * [placement] Fix placement-api-ref check tool * Use plain routes list for limits endpoint instead of stevedore * Updated from global requirements * Handle uuid in HostAPI.\_find\_service * doc: add cells v2 FAQ on mapping instances * doc: add cells v2 FAQ on refreshing global cells cache * doc: start a FAQs section for cells v2 * De-complicate some of the instance delete path * doc: add links to summit videos on cells v2 * Make target\_cell() yield a new context * Move to proper target\_cell calling convention * Updated from global requirements * Repair links in Nova documentation * api-ref: Fix parameter order in os-services.inc * fix typo * Deprecate unused policy from policy doc * trivial: Remove dead code * convert unicode to string before we connect to rados * Use plain routes list for os-quota-sets endpoint instead of stevedore * Use plain routes list for os-certificates endpoint instead of stevedore * Remove mox from cells/test\_cells\_rpc\_driver.py * api-ref: Example verification for servers-actions.inc * Updated from global requirements * nova-manage: Deprecate 'log' commands * nova-manage: Deprecate 'host' commands * nova-manage: Deprecate 'project', 'account' commands * libvirt: remove glusterfs volume driver * libvirt: remove scality volume driver * Deprecate scheduler trusted filter * XenAPI: remove hardcode dom0 plugin version in unit test * Change log level from ERROR to DEBUG for NotImplemented * Skip policy rules on attach\_network for none network allocation * Skip ceph in grenade live migration job due to restart failures * Correct \_ensure\_console\_log\_for\_instance implementation * Cache database and message queue 
connection objects * Correct the error message for query parameter validation * correctly log port id in neutron api * Fix uuid replacement in aggregate notification test * Remove DeviceIsBusy exception * Catch exception.OverQuota when create image for volume backed instance * Add policy description for os-host * Libvirt support for tagged volume attachment * Libvirt support for tagged nic attachment * Updated from global requirements * Add policy description for 'os-hide-server-addresses' * Add policy description for os-fixed-ips * Add policy description for networks\_associate * Add policy description for server\_usage * Modify the description of flat\_injected in nova.conf * Add policy description for multinic * Add policy description for 'limits' * Use plain routes list for server-password endpoint instead of stevedore * libvirt: expand checks for SubclassSignatureTestCase * fix InvalidSharedStorage exception message * Fix decoding of encryption key passed to dmcrypt * Make compute auto-disable itself if builds are failing * Make discover\_hosts only query for unmapped ComputeNode records * api-ref: Fix examples for add/removeFixedIp action * Fix a typo in code comment * Updated from global requirements * [BugFix] Change the parameter of the exception error message * Handle special characters in database connection URL netloc * fix typo in parameter type definition * Move null\_safe funcs to module level * do not log error for missing \_save\_tags * Add more description to policies in the keypairs.py * Add description to policies in extended\_status and extended\_volumes * Address comments when moving volume detach to block\_device.py * Updated from global requirements * Add a functional test for 'removeFloatingIp' action * Correct the wording about filter options * libvirt: Fix races with nfs volume mount/umount * libvirt: Pass instance to connect\_volume and disconnect\_volume * Remove the can\_host column * Totally freeze the extension\_info API * Trivial fix typo in document * Add missing rootwrap filter for cryptsetup * Add Cinder v3 detach to shutdown\_instance * Make NovaException format errors fatal for tests * Fix unit test exception KeyErrors * [BugFix] Release the memory quota for video ram when deleting an instance * Remove the rebuild extension help methods * service: use restart\_method='mutate' for all services * Verify project id for flavor access calls * Add a convenience attribute for reportclient * Add uuid to service.update notification payload * objects: add ComputeNode.get\_by\_uuid method * objects: add Service.get\_by\_uuid method * db api: add service\_get\_by\_uuid * Add online data migration for populating services.uuid * placement: get providers sharing capacity * Remove cloudpipe APIs * Replace newton to release\_name in upgrade.rst * Fix a typo * neutron: retrieve physical network name from a multi-provider network * Use six.text\_type() when logging Instance object * Fix typo in wsgi applications release note * Catching OverQuota Exception * Add description to policies in extended\_az and extend\_ser\_attrs * Add policy description for os-quota-classes * Add policy description for instance actions * Add policy description for fping * Updated from global requirements * Ensure sample policy help text correctly wrapped * Add policy description for extensions * Use plain routes list for server-metadata endpoint instead of stevedore * Transform instance.volume\_detach notification * Transform instance.volume\_attach.error notification * Transform 
instance.volume\_attach notification * Fix units for description of "flavor\_swap" parameter * Don't lazy-load flavor.projects during destroy() * devref and reno for nova-{api,metadata}-wsgi scripts * Add pbr-installed wsgi application for metadata api * Update devref with vendordata changes * remove unused functions * Use systemctl to restart services * Remove nova-cert leftovers * Add policy description for image size * Add policy description for instance-usage-audit-log * Add policy description for Servers IPs * Add policy description for config\_drive * XenAPI: update support matrix to support detach interface * Remove unnecessary execute permissions * Use plain routes list for os-fixed-ips endpoint instead of stevedore * Use plain routes list for os-availability-zone endpoint instead of stevedore * Use plain routes list for os-assisted-volume-snapshots endpoint * Use plain routes list for os-agents endpoint instead of stevedore * Use plain routes list for os-floating-ip-dns endpoint instead of stevedore * Add compute\_nodes\_uuid\_idx unique index * Use plain routes list for os-floating-ips-bulk endpoint instead of stevedore * Use plain routes list for os-floating-ip-pools endpoint instead of stevedore * Use plain routes list for os-floating-ips endpoint instead of stevedore * api-ref: Fix unnecessary description in servers-admin-action * api-ref: Fix parameters in servers-action-console-output * api-ref: Use 'note' directive * use plain routes list for os-simple-tenant-usage * Use plain routes list for os-instance-usage-audit-log endpoint instead of stevedore * Support tag instances when boot(1) * Add Cinder v3 detach call to \_terminate\_volume\_connections * placement: implement get\_inventory() for libvirt * nova-manage: Deprecate 'agent' commands * Add reserved\_host\_cpus option * Update description to policies in remaining flavor APIs * Add description to policies in migrations.py * Trivial fix: fix broken links * Remove nova-cert * Fixed a broken link in API Plugins document * Stop using mox int unit/virt/xenapi/image/test\_utils.py * Add ability to query for ComputeNodes by their mapped value * Add ComputeNode.mapped field * Updated from global requirements * Add a note to \*\_allocation\_ratio options about Ironic hardcode * Remove legacy v2.0 code from test\_flavor\_access * Do not log live migration success when it actually failed * Expose StandardLogging fixture for use * Add Cinder v3 detach to local\_cleanup * Don't check for file type in \_find\_base\_file * Rename \_handle\_base\_image to \_mark\_in\_use * Add context comments to \_handle\_base\_image * Add mock check and fix uuid's use in test * Revert "Prevent delete cell0 in nova-manage command" * Improve comment for PCI port binding update * Parse algorithm from cipher for ephemeral disk encryption * Add description to policies in floating\_ip files * Add description to policies in migrate\_server.py * Remove all discoverable policy rules * PowerVM Driver: console * Update doc/source/process.rst * 2.45: Remove Location header from createImage and createBackup responses * Clean up ClientRouter debt * api-ref: move createBackup to server-actions * Deprecate Multinic, floatingip action and os-virtual-interface API * Register osapi\_compute when nova-api is wsgi * disable keepalive for functional tests * Use plain routes list for '/os-aggregates' endpoint instead of stevedore * Use plain routes list for '/os-keypairs' endpoint instead of stevedore * Use plain routes list for flavors-access endpoint instead of 
stevedore * Use plain routes list for flavors-extraspecs endpoint instead of stevedore * Use plain routes list for flavor endpoint instead of stevedore[1] * Use plain routes list for '/servers' endpoint instead of stevedore * encryptors: Switch to os-brick encryptor classes * Fix unnecessary code block in a release note * Remove redundant code * api-ref: Fix a parameter description in servers.inc * api-ref: Parameter verification for servers-actions (4/4) * api-ref: Parameter verification for servers-actions (3/4) * Refactor a test method including 3 test cases * Sort CellMappingList.get\_all() for safety * Add workaround to disable group policy check upcall * Make server groups api aware of multiple cells for membership * libvirt: remove redundant and broken iscsi volume test * Remove BuildRequest.block\_device\_mapping clone workaround * libvirt: Always disconnect\_volume after rebase failures * Rework descriptions in os-hypervisors * Trivial Fix a typo * api-ref: Parameter verification for servers-actions (2/4) * Updated from global requirements * PowerVM Driver: spawn/destroy #4: full flavor * Remove archaic reference to QEMU errors during post live migration * Tell people that the nova-cells man page is for cells v1 * Add release note and update cell install guide for multi-cell limitations * PowerVM Driver: spawn/destroy #3: TaskFlow * Allow CONTENT\_LENGTH to be present but empty * libvirt: Remove is\_job\_complete polling after pivot * Adding auto\_disk\_config field to InstancePayload * add tags field to instance.update notification * Add description to policies in hypervisors.py * Explicitly define enum type as string in schema * PowerVM Driver: power\_on/off and reboot * Using max api version in notification sample test * PowerVM Driver: spawn/destroy #2: functional * Warn the user about orphaned extra records during keypair migration * Deprecate os-hosts API * Update resource tracker to PUT custom resource classes * [placement] Idempotent PUT /resource\_classes/{name} * Update detach to use V3 Cinder API * conf: Move 'floating\_ips' opts into 'network' * conf: Deprecate 'default\_floating\_pool' * conf: Add neutron.default\_floating\_pool * libvirt: Use config types to parse XML for root disk * libvirt: Add missing tests for utils.find\_disk * libvirt: Use config types to parse XML for instance disks * Updated from global requirements * Mock timeout in test\_\_get\_node\_console\_with\_reset\_wait\_timeout * Add test ensure all the microversions are sequential in placement API * fix overridden error * fix typos * Add interfaces functional negative tests * Remove unused os-pci API * Fix mitaka online migration for PCI devices * Fix port update exception when unshelving an instance with PCI devices * Fix docstring in \_validate\_requested\_port\_ids * Fix the evacuate API without json-schema validation in 2.13 * api-ref: Fix response code and parameters in evacuate * Remove json-schema extension variable for resize * Update etherpad url * Use deepcopy when process filters in db api * Add regression test for server filtering by tags bug 1682693 * remove unused parameter in rpc call * Remove usage of parameter enforce\_type * Remove test\_init\_nonexist\_schedulerdriver * Spelling error "paramenter" * api-ref: Parameter verification for servers-actions (1/4) * Revert "Make server\_groups determine deleted-ness from InstanceMappingList" 16.0.0.0b1 ---------- * Fix hypervisors api missing HostMappingNotFound handlers * Updated from global requirements * Fix HTTP 500 raised for 
getConsoleLog for stopped instance * Remove backend dependency for key types * Fix libvirt group selection in live migration test * Update network metadata type field for IPv6 * Add description to policies in servers.py * Add description to policies in security\_groups.py * api-ref: Nova Update Compute services Link * api-ref: Fix parameters in os-hosts.inc * Add uuid to Service model * Modify PciPassthroughFilter to accept lists * Deprecate CONF.api.allow\_instance\_snapshots * Read NIC features in libvirt * Fix api-ref for create servers response * placement: Add Traits API to placement service * Remove aggregate uuid generation on load from DB * Document and provide useful error message for volume-backed backup * PowerVM Driver: spawn/delete #1: no-ops * Refactor: Move post method to APIValidationTestCase base class * remove log translation tags from nova.cells * Get BDMs when we need to in \_handle\_cell\_delete * Remove dead db api code * Add description to policies in server\_password.py * remove flake8-import-order * Expand help text for [libvirt]/disk\_cachemodes * Updated from global requirements * Add description to policies in remote\_consoles.py * api-ref: fix os-extended-volumes:volumes\_attached in servers responses * Image meta min\_disk should be int in fake\_request\_spec * Optimize the link address * Add description to policies in quota\_sets.py * Fix joins in instance\_get\_all\_by\_host * Fix test\_instance\_get\_all\_by\_host * Remove the stevedore extension point for server create * Remove the json-schema extension point of server create * Remove the extension check for os-networks in servers API * Make server\_groups determine deleted-ness from InstanceMappingList * Add get\_by\_instance\_uuids() to InstanceMappingList * Remove Mitaka-era service version check * Teach HostAPI about cells * Make scheduler target cells to get compute node instance info * Deprecate the Cinder API v2 support * Limit exposure of network device types to the guest * Remove a fallacy in scheduler.driver config option help text * [placement] Allow PUT and POST without bodies * Use physical utilisation for cached images * Remove config opts for extension black/white list * Remove the usage of extension black/white list opt in scheduler hints * Cleanup wording on compute service version checks in API * Fix test\_no\_migrations\_have\_downgrade * Perform old-style local delete for shelved offloaded instances * Regression test for local delete with an attached volume * Set size/status during image create with FakeImageService * Commit usage decrement after destroying instance * Add regression test for quota decrement bug 1678326 * Short-circuit local delete path for cells v2 and InstanceNotFound * api-ref: make it clear that os-cells is for cells v1 * Add description to policies in security\_group\_default\_rules.py * Remove the usage of extension black/white list opt in user data * Add empty flavor object info in server api-ref * placement: Enable attach traits to ResourceProvider * docs: update description for AggregateInstanceExtraSpecsFilter * nova-net: remove get\_instance\_nw\_info from API subclass * API: accept None as content-length in HTTP requests * Switch from pip\_missing\_reqs to pip\_check\_reqs * nova-manage: Deprecate 'shell' commands * doc: Separate the releasenotes guide from the code-review section * Distinguish between cells v1 and v2 in upgrades doc * Use HostAddressOpt for opts that accept IP and hostnames * Stop using ResourceProvider in scheduler and RT * Updated from 
global requirements * Remove unnecessary tearDown function in testcase * Ensure reservation\_expire actually expires reservations * Remove unnecessary duplicated NOTE * Add description to policies in server\_diagnostics.py * Add description to policies in server\_external\_events.py * Add server-action-removefloatingip.json file and update servers-actions.inc * api-ref: networks is mandatory in Create Server * Trivial: Remove unused method * Make metadata doc more readable * Remove the usage of extension black/white list opt in AZ * Remove the usage of extension black/white list opt in config drive * Remove the usage of extension black/white list opts in multi-create * Remove the usage of extension black/white list opts in BDM tests * Rename the model object ResourceProviderTraits to ResourceProviderTrait * Short circuit notifications when not enabled * Add description to policies in services.py * compute: Move detach logic from manager into driver BDM * doc: Move code-review under developer policies * Add description to policies in servers\_migrations.py * Remove mox from nova/tests/unit/consoleauth/test\_consoleauth.py * Remove unnecessary setUp function in testcase * api-ref: Fix wrong HTTP response codes * Make conductor ask scheduler to limit migrates to same cell * Updated from global requirements * Consolidate unit tests for shelve API * Remove \_wait\_for\_state\_change() calls from notification (action)tests * Fix calling super function in setUp method * Remove namespace check in creating traits * Add description for /consoles * Ensure instance is in active state after notification test * Add description to policies in used\_limits * Add description to policies in lock\_server.py * Add description to policies in server\_metadata.py * Add description to policies in evacuate.py and rescue.py * Add description to policies in server\_groups.py * Use cursive for signature verification * Fix api-ref for adminPass behavior * Fix 'server' and 'instance' occurrence in api-ref * Add description to policies in flavor\_extra\_specs.py * code comment redundant * Add exclusion list for tempest for a libvirt+xen job * Add description to policies in cells\_scheduler.py * Add description to policies in aggregates.py * Add description to policies in pause\_server.py * Add description to policies in simple\_tenant\_usage.py * Add description to policies in keypairs.py * Remove unused policy rule in admin\_actions.py * Add description to policies in admin\_actions * Add description to policies in certificates.py * libvirt: Remove dead code * Add description to policies in console\_output.py * tox: Stop calling config/policy generators twice * There is a error on annotation about related options * Remove mox from nova.tests.unit.objects.test\_instance.py * fixed typos and reword stable api doc * Fix some reST field lists in docstrings * Add description to nova/policies/shelve.py * [placement] Split api-ref topics per file * Add description to policies in tenant\_networks.py * placement: Add Trait and TraitList objects * Remove legacy regeneration of RequestSpec in MigrationTask * remove i18n log markers from nova.api.\* * [placement] add api-ref for GET /resource\_providers * Structure for simply managing placement-api-ref * 'uplug' word spelling mistake * Make xenapi driver compatible with assert\_can\_migrate * Remove mox from nova/tests/unit/api/openstack/compute/test\_virtual\_interfaces.py * Remove mox from nova/tests/unit/api/openstack/compute/test\_quotas.py * Remove mox from 
nova/tests/unit/api/openstack/compute/test\_migrations.py * Fix wrong unit test about config option enabled\_apis * Do not attempt to load osinfo if we do not have os\_distro * Add confirm resized server functional negative tests * remove mox from unit/api/openstack/compute/test\_disk\_config.py * Revert "libvirt: Pass instance to connect\_volume and ..." * Add descripiton to policies in virtual\_interfaces.py * Add description to policies to availability\_zone * Add description to policies in suspend\_server.py * api-ref: fix description of volumeAttachment for attach/swap-volume * Get instance availability\_zone without hitting the api db * Set instance.availability\_zone whenever we schedule * [placement] Don't use floats in microversion handling * tests: fix uefi testcases * libvirt: make emulator threads to run on the reserved pCPU * libvirt: return a CPU overhead if isolate emulator threads requested * virt: update overhead to take into account vCPUs * numa: update numa usage to include reserved CPUs * numa: take into account cpus reserved * numa: fit instance NUMA node with cpus reserved onto host NUMA node * remove mox from unit/api/openstack/compute/test\_flavor\_manage.py * remove mox from unit/compute/test\_compute\_utils.py * api-ref: Complete all the verifications of remote consoles * remove mox from unit/virt/xenapi/image/test\_bittorrent.py * Fix some reST field lists in docstrings * Add lan9118 as valid nic for hw\_vif\_model property for qemu * Add description to policies in deferred\_delete.py * Add description to policies in create\_backup.py * Add description to policies in consoles.py * Add description to policies in cloudpipe.py * Add description to policies in console\_auth\_tokens.py * Add description to policies in baremetal\_nodes.py * conf: Final cleanups in conf/network * conf: Deprecate 'allow\_same\_net\_traffic' * libvirt: Ignore 'allow\_same\_net\_traffic' for port filters * conf: Deprecate 'use\_ipv6' * netutils: Ignore 'use\_ipv6' for network templates * Add check for invalid inventory amounts * Add check for invalid allocation amounts * Remove the Allocation.create() method * Add release note for CVE-2017-7214 * Add description to policies in cells.py * Tests: remove .testrepository/times.dbm in tox.ini (functional) * Pre-add functional tests stub to notification testing * libvirt: conditionally set script path for ethernet vif types * Add description to policies in agents.py * Add description to policies in admin\_password.py * libvirt: mark some Image backend methods as abstract * Add description to policies in assisted\_volume\_snapshots.py * Change os-server-tags default policy * Ironic: hardcode min\_unit for standard resources to 1 * Refactor: remove \_items() in nova/api/openstack/compute/attach\_interfaces.py * delete more i18n log markers * remove log translation from nova.api.metadata * update i18n guide for nova * Add description to policies in attach\_interfaces.py * Add description to policies in volumes\_attachments.py * Add description to policies in volumes.py * Fix rest\_api\_version\_history (2.40) * fix os-volume\_attachments policy checks * libvirt: Ignore 'use\_ipv6' for port filters * conf: Fix indentation in conf/netconf * Remove unused VIFModel.\_get\_legacy method * Add helper method to add additional data about policy rule * DELETE all inventory for a resource provider * nova-status: don't coerce version numbers to floats for comparison * remove mox from unit/api/openstack/compute/test\_flavors.y * Improve descriptions for 
hostId, host, and hypervisor\_hostname * compute: Only destroy BDMs after successful detach call * Remove old oslo.messaging transport aliases * Updated from global requirements * do not include context to exception notification * Add api-ref for filter/sort whitelist * Fix functional regression/recreate test for bug 1671648 * api-ref: fix description in os-services * flake8: Specify 'nova' as name of app * objects: Add attachment\_id to BlockDeviceMapping * db: Add attachment\_id to block\_device\_mapping * Updated from global requirements * Clarify os-stop API description * remove flake8-import-order for test requirements * Avoid lazy-loading projects during flavor notification * libvirt: add debug logging in detach\_device\_with\_retry * Transform instance.reboot.error notification * Transform instance.reboot notifications * remove hacking rule that enforces log translation * doc: configurable versioned notifications topics * Replace obsolete vanity openstack.org URLs * Add populate\_retry to schedule\_and\_build\_instances * Add a functional regression/recreate test for bug 1671648 * virt: implement get\_inventory() for Ironic * Fix the help for the disk\_weight\_multiplier option * Add a note about force\_hosts only ever having a single value * Make os-availability-zones know about cells * Introduce fast8 tox target * Duplicate JSON line ending check to pep8 * trivial: Remove \r\n line endings from JSON sample * [placement] Raising http codes on old microversion * Updated from global requirements * doc: add some documentation around quotas * Make versioned notifications topics configurable * Use proper user and tenant in the owner section of libvirt.xml * Prevent delete cell0 in nova-manage command * Refactor InstancePayload creation * nova-status: require placement >= 1.4 * Temporarily untarget context when deleting from cell0 * Decrement quota usage when deleting an instance in cell0 * VMware: use WithRetrieval in ds\_util module * VMware: use WithRetrieval in get\_network\_with\_the\_name * Remove VMware driver \_get\_vm\_ref\_from\_uuid method * trivial: Add a note about 'cells\_api' * Add description for Image location in snapshot * Typo fix in releasenotes: deprecate network options * api-ref: Fix parameters and examples in aggregate API * Transform instance.rebuild.error notification * Transform instance.rebuild notification * Add regression test for bug 1670627 * No API cell up-call to delete consoleauth tokens * Add identity helper property to CellMapping * Correctly set up deprecation warning * Add cell field to Destination object * Change MQ targeting to honor only what is in the context * Remove duplicate attributes in sample files * api-ref: Fix keypair API parameters * Fix missing instance.delete notification * conf: Fix formatting of network options * Teach simple\_tenant\_usage about cells * Teach os-migrations about cells * Teach os-aggregates about cells * Stop using mox in unit/virt/disk/test\_api.py * Avoid using fdatasync() when fetching images * Fix API doc about server attributes (2.3 API) * Refactor cell loading in compute/api * Target cell in super conductor operations * Ensure image conversion flushes output data to disk * fdatasync() downloaded images before use * conf: fix default values reporting infra worker * Error message should not include SQL command * Make consoleauth target the proper cell * Enlighten server tags API about cells * Update docstrings for legacy notification methods * conf: Deprecate most 'network' option * Use Cinder API v3 as default 
* get\_model method missing for Ploop image * trivial: Standardize naming of variables * trivial: Standardize indentation of test\_vif * autospec the virt driver mock in test\_resource\_tracker * Add functional test for bad res class in set\_inventory\_for\_provider * Remove unused placement\_database config options * libvirt: pass log\_path to \_create\_pty\_device for non-kvm/qemu * virt: add get\_inventory() virt driver API method * conf: remove console\_driver opt * Use flake8-import-order * numa: add numa constraints for emulator threads policy * Remove mox from nova.tests.unit.api.openstack.compute.test\_block\_device\_mapping * Revert "Add some metadata logging to root cause ssh failure" * Add comment to instance\_destroy() * Remove GlanceImageService * Use Sphinx 1.5 warning-is-error * Add warning on setting secure\_proxy\_ssl\_header * handle uninited fields in notification payload * Fix api-ref with Sphinx 1.5 * Imported Translations from Zanata * Reno for additional-notification-fields-for-searchlight * Default firewall\_driver to nova.virt.firewall.NoopFirewallDriver * Handle conflicts for os-assisted-volume-snapshots * Remove mox from nova.tests.unit.api.openstack.compute.test\_create\_backup * Log with cell.uuid if cell.name is not set * Updated from global requirements * re-orphan flavor after rpc deserialization * Stop using mox stubs in nova.tests.unit.api.openstack.compute.test\_serversV21 * Skip unit tests for SSL + py3 * Add functional test for ip filtering with regex * Add resize server functional negative tests * conf: resolved final todos in libvirt conf * Only create vendordata\_dynamic ksa session if needed * Check for 204 case in DynamicVendorData * Add some metadata logging to root cause ssh failure * Remove unused variable * Remove domains \*-log-\* from compile\_catalog * Updated from global requirements * Updated from global requirements * [placement] Add Traits related table to the api database * Remove mox from nova/tests/unit/db/test\_db\_api.py * Complete verification of servers-action-fixed-ip.inc * Remove mox in nova/tests/unit/compute/test\_shelve.py (3) * libvirt: Pass instance to connect\_volume and disconnect\_volume * Stop using mox in compute/test\_hypervisors.py * Add device\_id when creating ports * Make compute/api instance get set target cell on context * Remove mox from nova.tests.unit.virt.xenapi.test\_vmops[1] * Tests: remove .testrepository/times.dbm in tox.ini * Updated from global requirements * Remove invalid tests-py3 whitelist item * Ignore deleted services in minimum version calculation * Add RPC version aliases for Ocata * Remove mox from nova/tests/unit/test\_configdrive2.py * Remove usage of config option verbose * Remove check\_attach * Handle VolumeBDMIsMultiAttach in os-assisted-volume-snapshots * api/metadata/vendordata\_dynamic: don't import Requests for its constants * Fix typos detected by toolkit misspellings * remove a TODO as all set for tags * Clean up metadata param in doc * Remove extension in API layer * Correct some spelling errors * Fix typo in config drive support matrix docs * doc: Don't put comments inside toctree * Fix doc generation warnings * Remove run\_tests.sh * Fix spice channel type * Updated from global requirements * libvirt: drop MIN\_LIBVIRT\_HUGEPAGE\_VERSION * libvirt: drop MIN\_LIBVIRT\_NUMA\_VERSION * libvirt: drop MIN\_QEMU\_NUMA\_HUGEPAGE\_VERSION * libvirt: Fix misleading error in Ploop imagebackend * More usage of ostestr and cleanup an unused dependency * Ensure that instance directory is 
removed after success migration/resize * api-ref: Body verification for os-hypervisors.inc * Make conductor create InstanceAction in the proper cell * Allow nova-status to work with custom ca for placement * libvirt: Handle InstanceNotFound exception * Make scheduler get hosts from all cells * Make servers API use cell-targeted context * Make CellDatabases fixture work over RPC * Use the keystone session loader in the placement reporting * Verify project\_id when quotas are checked * Remove mox from nova/tests/unit/virt/vmwareapi/test\_vif.py * conf: Fix invalid rST comments * Revert "fix usage of opportunistic test cases with enginefacade" * Placement api: set custom json\_error\_formatter in resource\_class * Enable coverage report * Make server\_external\_events cells-aware * Remove service version check for Ocata/Newton placement decisions * Remove a dead cinder v1 check * Raise correct error instead of class exist in Placement API * Remove mox from nova/tests/unit/objects/test\_service.py * Skip soft-deleted records in 330\_enforce\_mitaka\_online\_migrations * Stop using mox from tests/unit/test\_service.py * Update placement\_dev with info about new decorator * Remove unused logging import * Deprecate xenserver.vif\_driver config option and change default * Fix live migrate with XenServer * Fix novncproxy for python3 * Remove mox stubs in api/openstack/compute/test\_server\_reset\_state.py * Fix some typo errors * Enable defaults for cell\_v2 update\_cell command * Remove dead code: \_safe\_destroy\_instance\_residue * Updated from global requirements * Make eventlet hub use a monotonic clock * Tolerate WebOb===1.7.1 * Tolerate jsonschema==2.6.0 * Stop using mox in test\_compute\_cells.py * Stop using mox in virt/xenapi/image/test\_glance.py * Remove mox from unit/api/openstack/compute/test\_aggregates.py * Remove mox from api/openstack/compute/test\_deferred\_delete.py * Typo fix: degredation => degradation * api-ref: Fix deprecated proxy API parameters * api-ref: note that boot ignores bdm:device\_name * Skip test\_stamp\_pattern in cells v1 job * Fix misuse of assertTrue * Fix improper prompt when update RC with existed one's name * Remove mox from nova/tests/unit/virt/vmwareapi/test\_configdrive.py * Placement api: set custom json\_error\_formatter in root * Cleanup some issues with CONF.placement.os\_interface * Placement api: set custom json\_error\_formatter in aggregate and usage * Fix suggested database migration command * Placement api: set custom json\_error\_formatter in resource\_provider * api-ref: Fix network\_label parameter type * Fix incorrect example for querying resource for RP * Use ListOfIntegersField in oslo.versionedobjects * libvirt: drop MIN\_QEMU\_PPC64\_VERSION * libvirt: drop MIN\_LIBVIRT\_AUTO\_CONVERGE\_VERSION * libvirt: drop MIN\_QEMU\_DISCARD\_VERSION * libvirt: drop MIN\_LIBVIRT\_HYPERV\_TIMER\_VERSION * libvirt: drop MIN\_LIBVIRT\_UEFI\_VERSION * libvirt: drop MIN\_LIBVIRT\_FSFREEZE\_VERSION * libvirt: drop MIN\_LIBVIRT\_BLOCKJOB\_RELATIVE\_VERSION * Bump minimum required libvirt/qemu versions for Pike * api-ref: fix instance action 'message' description * Placement api: set custom json\_error\_formatter in inventory * conf/libvirt: remove invalid TODOs * conf/compute: remove invalid TODOs * Remove straggling use of main db flavors in cellsv1 code * Add Cells V1 -> Cells V2 step-by-step example * nova-manage: Update deprecation timeline * Enable global hacking checks and removed local checks * Update hacking version * Use min parameter to restrict 
live-migration config options * Fix typo in nova/network/neutronv2/api.py * libvirt: wait for interface detach from the guest * libvirt: fix and break up \_test\_attach\_detach\_interface * api-ref: mark id as optional in POST /flavors * Fix nova-manage cell\_v2 metavar strings * Remove unused validation code from block\_device * Prepare for using standard python tests * Placement api: set custom json\_error\_formatter in allocations * [3/3]Replace six.iteritems() with .items() * conf: Deprecate 'firewall\_driver' * conf: Deprecate 'ipv6\_backend' * libvirt: set vlan tag for macvtap on SR-IOV VFs * Removed unnecessary parantheses and fixed formation * Fix the spelling mistake in host.py * Allow None for block\_device\_mapping\_v2.boot\_index * Edits for Cells V2 step-by-step examples * api-ref: fix delete server async postcondition docs * libvirt: check if we can quiesce before volume-backed snapshot * Default live\_migration\_progress\_timeout to off * libvirt: Remove redundant bdm serial mangling and saving during swap\_volume * Consider startup scenario in \_get\_compute\_nodes\_in\_db * libvirt: Introduce Guest.get\_config method * libvirt: Parse basic attributes of LibvirtConfigGuest from xml * libvirt: Parse filesystem elements of guest config * libvirt: Parse virt\_type attribute of LibvirtConfigGuest from xml * libvirt: Parse os attributes of LibvirtConfigGuest from xml * libvirt: Remove unused disk\_info parameter * libvirt: Simplify internal usage of get\_instance\_disk\_info * Stop failed live-migrates getting stuck migrating * Stop \_undefine\_domain erroring if domain not found * tests: fix vlan test type from int to str * Add an update\_cell command to nova-manage * allocations.consumer\_id is not used in query * api-ref: document the 'tenant\_id' query parameter * TrivialFix: replace list comprehension with 'for' * Reserve migration placeholders for Ocata backports * Update the upgrades part of devref * Cleanup the caches when deleting a resource provider * vomiting * Clarify the deployment of placement for cellsv1 users * conf: remove deprecated image url options * conf: add min parameter to scheduler opts * Add step-by-step examples for Cells V2 setup * Add nodename to \_claim\_test log messages * Update reno for stable/ocata 15.0.0.0rc1 ----------- * Add placement request id to log when GET or POST rps * Add placement request id to log when GET aggregates * Add more debug logging on RP inventory delete failures * Add more debug logging on RP inventory update failures * Delete a compute node's resource provider when node is deleted * Remove mox from unit/virt/libvirt/test\_imagebackend.py (end) * Mark compute/placement REST API max microversions for Ocata * Add release note for filter/sort whitelist * Clarify the language in the apache wsgi sample * Stop swap allocations being wrong due to MB vs GB * Clarify the [cells] config option help * Add offset & limit docs & tests * Report reserved\_host\_disk\_mb in GB not KB * Fix access\_ip\_v4/6 filters params for servers filter * Fix typo in cells v2 ocata reno * doc: add upgrade notes to the placement devref * Simplify uses of assert\_has\_calls * Fix typo in help for discover\_hosts\_in\_cells\_interval * Handle NotImplementedError in \_process\_instance\_vif\_deleted\_event * Fix the terminated\_at field in the server query params schema * Add release note for nova-status upgrade check CLI * Add prelude section for Ocata * Collected release notes for Ocata CellsV2 * reno for notification-transformation-ocata * Allow 
scheduler to run cell host discovery periodically * doc: update the man page entry for nova-manage db sync * doc: refer to the cell\_v2 man pages from the cells v2 doc * doc: add some detail to the map\_cell0 man page * Remove pre-cellsv2 short circuit in instance get * Continue processing build requests even if one is gone already * Allow placement endpoint interface to be set * Ensure build request exists before creating instance * placement-api: fix ResourceProviderList query * tests: Remove duplicate NumaHostInfo * tests: Combine multiple NUMA-generation functions * tests: Don't reinvent \_\_init\_\_ * Explain how allow\_resize\_to\_same\_host is useful * nova-status: relax the resource providers check * Read instances from API cell for cells v1 * [placement] Use modern attributes of oslo\_context * Fix map\_cell\_and\_hosts help * Fresh resource provider in RT must have generation 0 * libvirt: Limit destroying disks during cleanup to spawn * Use is\_valid\_cidr and is\_valid\_ipv6\_cidr from oslo\_utils * Ignore IOError when creating 'console.log' * Fix unspecified bahavior on GET /servers/detail?tenant\_id=X as admin * Remove unused exceptions from nova.exception * nova-manage docs: cell\_v2 delete\_cell * nova-manage docs: cell\_v2 list\_cells * nova-manage docs: cell\_v2 discover\_hosts * nova-manage docs: cell\_v2 create\_cell * nova-manage docs: cell\_v2 verify\_instance * nova-manage docs: cell\_v2 map\_cell\_and\_hosts * Fix tag attribute disappearing in 2.33 and 2.37 * Scheduler calling the Placement API * Block starting compute unless placement conf is provided * Added instance.reboot.error to the legacy notifications * Avoid redundant call to update\_resource\_stats from RT * api-ref: Fix path parameters in os-hypervisors.inc * libvirt: fix vCPU usage reporing for LXC/QEMU guests * Adding vlans field to Device tagging metadata * libvirt: expose virtual interfaces with vlans to metadata * objects: vlan field to NetworkInterfaceMetadata object * Move instance creation to conductor * Updated from global requirements * Fix server group functional test by using all filters * Hyper-V PCI Passthrough * Change exponential function to linear * Fixed indentation in virt/libvirt/driver.py * Cache boot time roles for vendordata * Optionally make dynamic vendordata failures fatal * Use a service account to make vendordata requests * libvirt: ephemeral disk support for virtuozzo containers 15.0.0.0b3 ---------- * ironic: Add trigger crash dump support to ironic driver * Only warn about hostmappings during ocata upgrade * nova-manage docs: cell\_v2 map\_instances * nova-manage docs: cell\_v2 map\_cell0 * nova-manage docs: cell\_v2 simple\_cell\_setup * Add new configuration option live\_migration\_scheme * Fix race condition in instance.update sample test * libvirt: Use the mirror element to detect job completion * libvirt: Mock is\_job\_complete in test\_driver * adding debug info for pinning calculation * PCI: Check pci\_requests object is empty before passing to support\_requests * Ironic: Add soft power off support to Ironic driver * Add sort\_key white list for server list/detail * Trivial-fix: replace "json" with "yaml" in policy README * Release PCI devices on drop\_move\_claim() * objects: add new field cpuset\_reserved in NUMACell * Make api\_samples tests use simple cell environment * Assign mac address to vf netdevice when using macvtap port * conf: Deprecate 'console\_driver' * libvirt: avoid generating script with empty path * placement: minor refactor 
\_allocate\_for\_instance() * placement: report client handle InventoryInUse * Multicell support for instance listing * scheduler: Don't modify RequestSpec.numa\_topology * Fix and add some notes to the cells v2 first time setup doc * Add deleting log when config drive was imported to rbd * Updated from global requirements * Amend the PlacementFixture * Prevent compute crash on discovery failure * Ironic: Add soft reboot support to ironic driver * os-vif: convert libvirt driver to use os-vif for fast path vhostuser * Updated from global requirements * Add a PlacementFixture * Set access\_policy for messaging's dispatcher * libvirt: make coherent logs when reboot success * Add ComputeNodeList.get\_all\_by\_uuids method * Fix typo in 216\_havana.py * placement: create aggregate map in report client * Support Ironic interface attach/detach in nova virt * Generate necessary network metadata for ironic port groups * Ensure we mark baremetal links as phy links * os-vif-util: set vif\_name for vhostuser ovs os-vif port * Move migration\_downtime\_steps to libvirt/migration * libvirt: fix nova can't delete the instance with nvram * Remove mox in libvirt destory tests * VMWare: Move constant power state strings to the constant.py * Remove references to Python 3.4 * hyperv: make sure to plug OVS VIFs after resize/migrate * Strict pattern match query parameters * Raise InvalidInput exception * Fix Nova to allow using cinder v3 endpoint * [py35] Fixes to get more tempest tests working * Move to tooz hash ring implementation * api-ref: Fix a parameter in os-availability-zone.inc * objects: remove cpu\_topology from \_\_init\_\_ of InstanceNUMATopology * Integrate OSProfiler and Nova * Remove mox from unit/virt/libvirt/test\_imagebackend.py (5) * Enable virt.vmwareapi test cases on Python * Enable virt.test\_virt\_drivers.AbstractDriverTestCase on Python 3 * Port compute.test\_user\_data.ServersControllerCreateTest to Python 3 * Add revert resized server functional negative tests * XenAPI: Fix vif plug problem during VM rescue/unrescue * Handle oslo.serialization type error and binascii error * Remove invalid URL in gabbi tests * nova-manage cell\_v2 map\_cell0 exit 0 * Add query parameters white list for server list/detail * nova-manage docs: add cells commands prep * Add --verbose option to discover\_hosts command * Add more details when test\_create\_delete\_server\_with\_instance\_update fails * Updated from global requirements * Add some cellsv2 setup docs * Fix the generated cell0 default database name * rt: use a single ResourceTracker object instance * Add nova-manage cell\_v2 delete\_cell command * Add InstanceMappingList.get\_by\_cell\_id * Create HostMappingList object * Add nova-manage cell\_v2 list\_cells command * Add nova-manage cell\_v2 create\_cell command * Add rudimentary CORS support to placement API * libvirt: workaround findmnt behaviour change * api-ref: Fix parameters whose values are 'null' * Fix broken link of Doc * api-ref: Fix parameters and response in os-quota-sets.inc * Remove nova-manage image from man pages * Updated from global requirements * Fixes to get all functional tests working on py35 * [placement] Add a bit about extraction plans to placement\_dev * [placement] Add an "Adding a Handler" section to placement\_dev * [placement] placement\_dev info for testing and gabbi * [placement] placement\_dev info for microversion handling * Updated from global requirements * placement: validate member\_of values are uuids * Make metadata server know about cell mappings * 
Remove redundant arg check in nova-manage cell\_v2 verify\_instance * Expose a REST API for a specific list of RPs * copy pasta error * Set sysinfo\_serial="none" in LibvirtDriverTestCase * [py35] Fixes to get rally scenarios working * Fix missing RP generation update * Add service\_token for nova-neutron interaction * rt: explicitly pass compute node to \_update() * Make unit tests work with os-vif 1.4.0 * Updated from global requirements * libvirt: make live migration possible with Virtuozzo * Small improvements to placement.rst * Better black list for py35 tests * Fix class type error in attach\_interface() function * Hyper-V: Adds vNUMA implementation * Don't bypass cellsv1 replication if cellsv2 maps are in place * Adds Hyper-V OVS ViF driver * docs - Connect to placement service & retries * Improve flavor sample in notification sample tests * xenapi: support the hotplug of a neutron port * Update notification for flavor * Add service\_token for nova-cinder interaction * Make allocate\_for\_instance take consistent args * XenAPI Remove useless files when use os-xenapi lib * XenAPI Use os-xenapi lib for nova * Make placement client keep trying to connect * releasenotes: Add missing releasenote for encryption provider constants * Stop using mox stubs in test\_attach\_interfaces.py * Remove mox from api/openstack/compute/test\_floating\_ip\_dns.py * Remove mox in nova/tests/unit/compute/test\_shelve.py (end) * Remove mox in unit/api/openstack/test\_wsgi.py * Document testing process for zero downtime upgrade * Remove mox in nova/tests/unit/compute/test\_shelve.py (2) * Notifications for flavor operations * Add debug possibility for nova-manage command * conf: Deprecate yet more nova-net options * conf: Resolve formatting issues with 'quota' * [2/3]Replace six.iteritems() with .items() * Port xenapi test\_vm\_utils to Python 3 * docs: sort the Architecture Concepts index * Make the SingleCellSimple fixture a little more comprehensive * Fix non-parameterized service id in hypervisors sample tests * Fix TypeError in \_update\_from\_compute\_node race * Trivial indentation fix * Add missing CLI commands in support-matrix.ini * tests: Replace use of CONF with monkey patching * correct misleading wording * Fix a typo in documents * Don't translate exceptions w/ no message * Fix ksa mocking in test\_cinderclient\_unsupported\_v1 * [placement] fix typo in call to create auth middleware * HTTP interface for resource providers by aggregates * Return uuid attribute for aggregates * Update docstring of \_schema\_validation\_helper * api-ref: use the examples with paging links * Port libvirt.test\_vif to Python 3 * Port libvirt.test\_firewall to Python 3 * Move quota options to a config group * Handle Unauthorized exception in report client's safe\_connect() * Remove mox from unit/virt/libvirt/test\_imagebackend.py (4) * Remove mox from unit/virt/libvirt/test\_imagebackend.py (3) * Remove mox from unit/virt/libvirt/test\_imagebackend.py (2) * Do not post allocations that are zero * Remove mox from unit/compute/test\_compute\_api.py (1) * Add aggregate notification related enum values * Transform aggregate.delete notification * Transform aggregate.create notification * Added missing decorator for instance.create.error * Enable Neutron by default * Port virt.libvirt.test\_imagebackend to Python 3 * move gate hooks to gate/ * tools: Remove 'colorizer' * tools: Remove 'with\_venv' * tools: Remove 'install\_venv', 'install\_venv\_common' * tools: Remove 'clean-vlans' * tools: Remove 
'enable-pre-commit-hook' * Use JSON-Schema to validate query parameters for keypairs API * Adds support for versioned schema validation for query parameters * Remove mox from api/openstack/compute/test\_extended\_hypervisors.py * Stop using mox in compute/test\_server\_actions.py * Remove mox from unit/api/openstack/compute/test\_cloudpipe.py * Add support matrix for attach and detach interfaces * Make last remaining unit tests work with Neutron by default * Make test\_metadata pass with CONF.use\_neutron=True by default * Make test\_nova\_manage pass with CONF.use\_neutron=True by default * Stub out os\_vif.unplug in libvirt instance destroy tests * Make test\_attach\_interfaces work with use\_neutron=True by default * Make test\_floating\_ip\* pass with CONF.use\_neutron=True by default * Make several API unit tests pass with CONF.use\_neutron=True by default * Make test\_server\_usage work with CONF.use\_neutron=True by default * Make test\_security\_group\_default\_rules work with use\_neutron=True by default * Make test\_tenant\_networks pass with CONF.use\_neutron=True by default * Make test\_security\_groups work with CONF.use\_neutron=True by default * Make test\_virtual\_interfaces work with CONF.use\_neutron=True by default * Make test\_user\_data and test\_multiple\_create work with use\_neutron=True * Make test\_quota work with CONF.use\_neutron=True by default * Make test\_compute pass with CONF.use\_neutron=True by default * api-ref: Fix parameters in os-server-groups.inc * Remove mox in test\_block\_device\_mapping\_v1.py * placement: Do not save 0-valued inventory * Add 'disabled' to WatchdogAction field * Remove more deprecated nova-manage commands * Make servers api view load instance fault from proper cell * Add support for setting boot order in Hyper-V * Create schema generation for NetworkModel * conf: added notifications group * Missing usage next links in api-ref * [placement] start a placement\_dev doc * Stop handling differences in registerCloseCallback * Enable TestOSAPIFixture.test\_responds\_to\_version on Python 3 * pci: Clarify SR-IOV ports vs direct passthrough ports * nova-status: check for compute resource providers * doc: add recomendation for delete notifications * Move FlavorPayload to a seperate file * Remove Rules.load\_json warning * Handle unicode when dealing with duplicate aggregate errors during migration * Handle unicode when dealing with duplicate flavors during online migrations * Actually test online flavor migrations * Remove unused init\_only kwarg from wsgi app init * api-ref: add notes about POST/DELETE errors for os-tenant-networks * Remove unnecessary attrs from TenantNetworksDeprecationTest * api-ref: microversion 2.40 overview * Fix assertion in test\_instance\_fault\_get\_by\_instance * Add more field's in InstancePayload * api-ref: cleanup os-server-groups 'policies' parameter description * objects: add new field cpu\_emulator\_threads\_policy * Support filtering resource providers by aggregate membership * Resource tracker doesn't free resources on confirm resize * Stop using mox stubs in nova/tests/unit/cells * Add release note to PCI passthrough whitelist regex support * api-ref: Fix parameter type in servers-admin-action.inc * Port security group related tests to Python 3 * Add create image functional negative tests * Don't apply multi-queue to SRIOV ports * Avoid multiple initializations of Host class * placement: correct improper test case inheritance * Remove mox in tests/unit/objects/test\_instance\_info\_cache * Port compute 
unit tests to Python 3 * Fix urllib.urlencode issue in functional tests on Python 3 * Trival fix typo * Enble network.test\_neutronv2.TestNeutronv2 on Python 3 * Enble compute.test\_compute\_mgr.ComputeManagerUnitTestCase on Python 3 * Port api.openstack.compute.test\_disk\_config to Python 3 * Updated from global requirements * Ignore 404s when deleting allocation records * nova-status: return 255 for unexpected errors * VMware: Update supported OS types for ESX 6.5 * Replace "Openstack" with "OpenStack" * Use bdm destination type allowed values hard coded * Fix BDM JSON-Schema validation * [TrivialFix] Fix comment and function name typo error * [TrivialFix] Fix comment typo error * Fix python3 issues with devstack * [1/3]Replace six.iteritems() with .items() * Fix typo * Fix misleading port delete description * conf: remove deprecated barbican options * conf: Remove 'virt' file * Trival fix typos in api-ref * make 2.31 microversion wording better * Add soft delete wrinkle to api-ref * Add document update for get console usage * Trivial: add ability to define action description * Added missed "raises:" docstrings into numa\_get\_constraints() method * Removes unnecessary utf-8 encoding * Port test\_matchers.TestDictMatches.test\_\_str\_\_ to Python 3 * Skip network.test\_manager.LdapDNSTestCase on Python 3 * Remove mox in tests/unit/objects/test\_security\_group * Remove v2.40 from URL string in usage API docs * nova-status: add basic placement status checking * nova-status: check for cells v2 upgrade readiness * Add nova-status upgrade check command framework * rt: remove fluff from test\_resource\_tracker * rt: pass the nodename to public methods * conf: make 'default' upper case * conf: move few console opts to xenserver group * conf: remove deprecated ironic options * conf: refactor conf\_fixture.py * Add unit test for extract\_snapshot with compression enabled * Refactor the code to add generic schema validation helper * Updated from global requirements * Fix error if free\_disk\_gb is None in CellStateManager * nova-manage: squash oslo\_policy debug logging * Pre-load info\_cache when handling external events and handle NotFound * Make nova-manage cell\_v2 discover\_hosts tests use DBs * Fix nova-manage cell\_v2 discover\_hosts RequestContext * Make nova-manage emit a traceback when things blow up * XenAPI: Remove ovs\_integration\_bridge default value * rt: pass nodename to internal methods * Failing test (mac osx) - test\_cache\_ephemeral * Catch VolumeEncryptionNotSupported during spawn * Updated from global requirements * Fix exception message formatting error in test * osapi\_max\_limit -> max\_limit * Add more detail to help text for reclaim\_instance\_interval option * Added PRSM to HVType class for support PR/SM hypervisor * conf: Deprecate more nova-net options 15.0.0.0b2 ---------- * [test]Change fake image info to fit instance xml * Cleanup Newton Release Notes * Port libvirt.storage.test\_rbd to Python 3 * VMware: ensure that provider networks work for type 'portgroup' * libvirt: Stop misusing NovaException * Fix the file permissions of test\_compute\_mgr.py * Add detail to cellsv2-related release notes * Revert "Use liberty-eol tag for liberty release notes" * Fix some release notes in preparation for the o-2 beta release * Add schedule\_and\_build\_instances conductor method * libvirt: Detach volumes from a domain before detaching any encryptors * libvirt: Flatten 'get\_domain' function * fakelibvirt: Remove unused functions * libvirt: Remove slowpath listing of 
instances * Only return latest instance fault for instances * Remove dead begin/end code from InstanceUsageAuditLogController * Use liberty-eol tag for liberty release notes * api-ref: Fix description of os-instance-usage-audit-log * conf: fix formatting in base * Stop allowing tags as empty string * libvirt: remove hack for dom.vcpus() returning None * Add Python 3.5 functional tests in tox.ini * Simple tenant usage pagination * Modify mistake of scsi adapter type class * Remove the EC2 compatible API tags filter related codes * Port virt vmwareapi tests to Python 3 * Mark sibling CPUs as 'used' for cpu\_thread\_policy = 'isolated' * Added missed "raises:" docstrings into numa\_get\_constraints() method * Changed NUMACell to InstanceNUMACell in test\_stats.py * TrivialFix: changed log message * api-ref: Fix 'id' (attachment\_id) parameters * Move tags validation code to json schema * Let nova-manage cell\_v2 commands use transport\_url from CONF * Make test\_create\_delete\_server\_with\_instance\_update deterministic * restore locking in notification tests * Remove mox from unit/compute/test\_compute\_api.py(2) * Deprecate compute options * Remove support for the Cinder v1 API * Make simple\_cell\_setup fully idempotent * Corrects the type of a base64 encoded string * Fix instructions for running simple\_cell\_setup * Quiet unicode warnings in functional test\_resource\_provider * conf: Detail the 'injected\_network\_template' opt * Add more description for rx and tx param * move rest\_api\_version\_history.rst to compute layer * Enhance PCI passthrough whitelist to support regex * Better wording for micorversion 2.36 * Port test\_servers to py3 * Catch InstanceNotFound exception * Remove mox in tests/unit/objects/test\_compute\_node * Refactor REGEX filters to eliminate 500 errors * Fix crashing during guest config with pci\_devices=None * Provide an online data migration to cleanup orphaned build requests * Add SecurityGroup.identifier to prefer uuid over name * Setup CellsV2 environment in base test * conf: add warning for vm's max delete attempts * Cleanup after any failed libvirt spawn * Guestfs handle no passwd or group in image * Return 400 when name is more than 255 characters * Check that all JSON files don't have \r\n in line * Enable test\_bdm.BlockDeviceMappingEc2CloudTestCase on Python 3 * network id is uuid instead of id * fix for auth during live-migration * Don't trace on ImageNotFound in delete\_image\_on\_error * Cascade deletes of RP aggregate associations * Make resource provider objects not remotable * Bump prlimit cpu time for qemu from 2 to 8 * test: drop unused config option fake\_manager * conf: Remove config option compute\_ manager * Extend get\_all\_by\_filters to support resource criteria * Port test\_virt\_drivers to Python 3 * Don't use 'updated\_at' to check service's status * libvirt: Fix initialising of LVM ephemeral disks * Remove extra ^M for json file * Port virt.disk.mount.test\_nbd to Python 3 * Remove unnecessary comment of BDM validation * Update ironic driver get\_available\_nodes docstring * api-ref: note that os-virtual-interfaces is nova-network only * Fix up non-cells-aware context managers in test\_db\_api * Add SingleCellSimple fixture * [proxy-api] microversion 2.39 deprecates image-metadata proxy API * Make RPCFixture support multiple connections * tests: avoid starting compute service twice in sriov functional test * tests: generate correct pci addresses for fake pci devices * Fix nova-serialproxy when registering cli options * Updated 
from global requirements * Revert "reduce pep8 requirements to just hacking" * conf: Improve help text for network options * conf: Deprecate all nova-net related opts * libvirt: Mock imagebackend template funcs in ImageBackendFixture * libvirt: Combine injection info in InjectionInfo * Fix misuse of assertTrue * Return 400 when name is more than 200 characters * Replace the assertEqual(None,A) with assertIsNone(A) * Rename few tests as per new config options * Handle MarkerNotFound from cell0 database * Removed unused ComputeNode create/update\_inventory methods * Fix a typo in a comment in microversion history * Handle ImageNotFound exception during instance backup * Add a CellDatabases test fixture * Pass context as kwarg instead of positional arg to get\_engine * Transform instance.snapshot notifications * libvirt: virtlogd: use virtlogd for char devices * libvirt: create consoles in an understandable/extensible way * Add more log when delete orphan node * libvirt: Add comments in \_hard\_reboot * Update cors-to-versions-pipeline release note * Unity the comparison of hw\_qemu\_guest\_agent * Add metadata functional negative tests * Require cellsv2 setup before migrating to Ocata * Improving help text for xenapi\_vmops\_opts * convert libvirt driver to use os-vif for vhost-user with ovs * Handle ComputeHostNotFound when listing hypervisors * Improve the error message for failed RC deletion * refactor: move down \`\`dev\_number\`\` in xenapi * Fix placement API version history 1.1 title * placement: Perform build list of standard classes once * placement: REST API for resource classes * Add a retry loop to ResourceClass creation * conf: Remove deprecated service manager opts * support polling free notification testing * conf: Standardize formatting of virt * Updated from global requirements * Remove invalid tests for config option osapi\_compute\_workers * placement: adds ResourceClass.save() * Add CORS filter to versions pipeline * Create hyperv fake images under proper directory * Some improvement to the process doc * libvirt: Improve \_is\_booted\_from\_volume implementation * libvirt: Delete duplicate check when live-migrating * Add block\_device\_mapping\_v2.uuid to api-ref * Correct the sorting of datetimes for migrations * Fix pci\_alias that include white spaces * Raise DeviceNotFound detaching volume from persistent domain * Always use python2.7 for docs target * objects: Removes base code that already exists in o.vo * libvirt: Don't re-resize disks in finish\_migration() * libvirt: Never copy a swap disk during cold migration * libvirt: Rename Backend snapshot and image * libvirt: Cleanup test\_create\_configdrive * libvirt: Test disk creation in test\_hard\_reboot * libvirt: Rewrite \_test\_finish\_migration * guestfs: Don't report exception if there's read access to kernel * Fix for live-migration job * Handle maximum limit in schema for int and float type parameters * Port compute.test\_extended\_ip\* to Python 3 * Remove more tests from tests-py3.txt * Support detach interface with same MAC from instance * placement: adds ResourceClass.destroy() * Make test\_shelve work with CONF.use\_neutron=True by default * Restrict test\_compute\_cells to nova-network * Make test\_compute\_mgr work with CONF.use\_neutron=True by default * Make test\_compute\_api work with CONF.use\_neutron=True by default * Make nova.tests.unit.virt pass with CONF.use\_neutron=True by default * Make xenapi tests work with CONF.use\_neutron=True by default * Make libvirt unit tests work with 
CONF.use\_neutron=True by default * Make vmware unit tests work with CONF.use\_neutron=True * Explicitly use nova-network in nova-network network tests * Make test\_serversV21 tests work with neutron by default * neutron: handle no\_allocate in create\_pci\_requests\_for\_sriov\_ports * Add a releasenote for bug#1633518 * libvirt: prefer cinder rbd auth values over nova.conf * libvirt: cleanup network volume driver auth config * Fix wait for detach code to handle 'disk not found error' * [api-ref] Minor text clean-up, formatting * Convert live migration uri back to string * conf: improve libvirt lvm * conf: Trivial fix of indentation in 'api' * config options: improve libvirt utils * Never pass boolean deleted to instance\_create() * Port xenapi test\_xenapi to Python 3 * Port libvirt test\_driver to Python 3 * conf: Deprecate 'torrent\_' options * hacking: Use uuidutils or uuidsentinel to generate UUID * Replace uuid4() with uuidsentinel * Replace uuid4() with uuidsentinel * Replace uuid4() with uuidsentinel * Add os-start/stop functional negative tests * Port ironic unit tests to Python 3 * Port test\_keypairs to Python 3 * Port test\_metadata to Python 3 * Fix expected\_attrs kwarg in server\_external\_events * Check deleted flag in Instance.create() * Revert "Revert "Make n-net refuse to start unless using CellsV1"" * Revert "Log a warning when starting nova-net in non-cellsv1 deployments" * Default deleted if the instance from BuildRequest is not having it * docs: cleanup wording for 'SOFT\_DELETED' in api-guide * libvirt: Acquire TCP ports for console during live migration * conf: Deprecate 'remap\_vbd\_dev' option * conf: Covert StrOpt -> PortOpt * Check Config Options Consistency for xenserver.py * Add description for 2.9 microversion * Remove AdminRequired usage in flavor * Optional name in Update Server description in api-ref * List support for force-completing a live migration in Feature support matrix * Remove mox from nova/tests/unit/compute/test\_virtapi.py * Remove mox from nova/tests/unit/virt/test\_virt.py * Catch ImageNotAuthorized during boot instance * Remove require\_admin\_context * remove NetworkDuplicated exception * InstanceGroupPolicyNotFound not used anymore * UnsupportedBDMVolumeAuthMethod is not used * Port virt.xenapi.client.test\_session to Python 3 * vif: allow for creation of multiqueue taps in vrouter * conf: Move api options to a group * [scheduler][tests]: Fix incorrect aggr mock values * objects: Move 'vm\_mode' to 'fields.VMMode' * objects: Move 'hv\_type' to 'fields.HVType' * objects: Move 'cpumodel' to 'fields.CPU\*' * objects: Move 'arch' to 'fields.Architecture' * Show team and repo badges on README * Remove config option snapshot\_name\_template * Remove deprecated compute\_available\_monitors option * Improve help text for interval\_opts * config options: improve libvirt remotefs * Improve consistency in libvirt * Fix root\_device\_name for Xen * Move tag schema to parameter\_types.py * Remove tests from tests-py3.txt * hardware: Flatten functions * add host to vif.py set\_config\_\* functions * linux\_net: allow for creation of multiqueue taps * Fix notification doc generator * Config options: improve libvirt help text (2) * Placement api: Add informative message to 404 response * Remove sata bus for virtuozzo hypervisor * Fix a typo in nova/api/openstack/compute/volumes.py * Fix race in test\_volume\_swap\_server\_with\_error * libvirt: Call host connection callbacks asynchronously * conf: remove deprecated cert\_topic option * Return 
build\_requests instead of instances * conf: remove deprecated exception option * doc: Add guidline about notification payload * Port libvirt test\_imagecache to Python 3 * Port test\_serversV21 to Python 3 * encryptors: Introduce encryption provider constants * Add TODO for returning a 202 from the volume attach API * Fix typo in image\_meta.py & checks.py & flavor.py * Refactor two nearly useless secgroup tests * Transform instance.finish\_resize notifications * Remove redundant VersionedObject Fields * Transform instance.create.error notification * Transform instance.create notification * api-ref: add missing os-server-groups parameters * libvirt: prepare domain XML update for serial ports * [placement] increase gabbi coverage of handlers.resource\_provider * [placement] increase gabbi coverage of handlers.inventory * [placement] increase gabbi coverage of handlers.allocation * libvirt: do not return serial address if disabled on destination * Remove mox from api/openstack/compute/test\_fping.py * Add index on instances table across project\_id and updated\_at * Complete verification for os-floating-ips * libvirt: handle os-brick InvalidConnectorProtocol on init * placement: adds ResourceClass.get\_by\_name() * placement: adds ResourceClass.create() * Improve help text for libvirt options * Use byte string or utf8 depending on python version for wsgi * Separate CRUD policy for server\_groups * Stop using mox stubs in nova/tests/unit/virt/disk * Remove the description of compute\_api\_class option * Remove mox in virt/xenapi/image/test\_bittorrent.py * Add context param to confirm\_migration virt call * Use pick\_context\_manager throughout DB APIs * Database poison note * tests: verify cpu pinning with prefer policy * api-ref: Body verification for os-simple-tenant-usage.inc * remove additional param * Fix typo for 'infomation' * Delete checking a bool opt of None condition * Remove unused code in nova/api/openstack/wsgi.py * conf: remove deprecated cells driver option * Fix detach\_interface() call from external event handler * Implement get and set aggregates in the placement API * Add {get\_,set\_}aggregates to objects.ResourceProvider * Log a warning when starting nova-net in non-cellsv1 deployments * Revert "Make n-net refuse to start unless using CellsV1" * HyperV: use os-brick for volume related operations * INFO level logging should be useful in resource tracker * hyper-v: wait for neutron vif plug events * Remove mox in nova/tests/unit/api/openstack/compute (1) * Use available port binding constants * Rename PCS to Virtuozzo in error message * [PY3] byte/string conversions and enable PY3 test * Fix mock arg list order in test\_driver.py * Add handle for 2 exceptions in force\_delete * Typo error about help libvirt.py * Updated from global requirements * Introduce PowerVMLiveMigrateData * Make n-net refuse to start unless using CellsV1 * Store security groups in RequestSpec * api-ref: body verification for abort live migration * Fix data error in api samples doc 15.0.0.0b1 ---------- * Typo error servers.py * Typo error allocations.yaml * Refactor console checks in live migration process * Remove mox in tests/unit/objects/test\_pci\_device * Add microversion cap information * No return for flavor destroy * neutron: actually populate list in populate\_security\_groups * Clarify the approval process of specless blueprints * Add uuid field to SecurityGroup object * api-ref: body verification for force\_complete server migration * api-ref: body verification for show server migration 
* api-ref: body verification for list server migrations * api-ref: example verification for server-migrations * api-ref: parameter verification for server-migrations * api-ref: method verification for server-migrations * [placement] Enforce min\_unit, max\_unit and step\_size * Remove ceph install/config functions from l-m hook * Ceph bits for live-migration job * Avoid unnecessary db\_calls in objects.Instance.\_from\_db\_object() * placement: genericize on resource providers * api-ref: fix server\_id in metadata docs * Add the initial documentation for the placement API * API Ref: update server\_id params * conf: fix formatting in wsgi * Transform requested secgroup names to uuids * conf: fix formatting in availability\_zone * libvirt: Cleanup spawn tests * Rename security\_group parameter in compute.API:create * Change database poison warning to an exception * Fix database poison warnings, part 25 * Updated from global requirements * Correct wrong max\_unit in placement inventory * Add flavor extra\_spec info link to api\_ref * Fix database poison warnings in resource providers * Placement api: 404 response do not indicate what was not found * Instance obj\_clone leaves metadata as changed * Add a no-op wait method to NetworkInfo * Move driver\_dict\_from\_config to libvirt driver * Create schema generation for AddressBase * conf: Improve help text for ldap\_dns\_opts * conf: Fix indentation of network * Fix config option types * libvirt: Fix incorrect libvirt library patching in tests * libvirt: refactor console device creation methods * libvirt: read rotated "console.log" files * libvirt: change get\_console\_output as prep work for bp/libvirt-virtlogd * Updated from global requirements * api-ref: Fix a 'port' parameter in os-consoles.inc * Update nova api.auth tests to work with newer oslo.context * Remove ironic instance resize from support matrix doc * [placement] add a placement\_aggregates table to api\_db * libvirt: remove py26 compat code in "get\_console\_output" * Change RPC post\_live\_migration\_at\_destination from cast to call * libvirt: add migration flag VIR\_MIGRATE\_PERSIST\_DEST * Revert MTU hacks for bug 1623876 * Pass MTU into os-vif Network object * Updated from global requirements * api-ref: fix addFloatingIp action docs * Fix a TypeError in notification\_sample\_base.py * Add functional api\_samples test for addFloatingIp action * Fix qemu-img convert image incompatability in alpine linux * migration.source\_compute should be unchanged after finish\_revert\_resize * Add explicit dependency on testscenarios * Updated from global requirements * cors: update default configuration in config * api-ref: remove user\_id from keypair list response and fix 2.10 * Don't parse PCI whitelist every time neutron ports are created * conf: Remove deprecated 'compute\_stats\_class' opt * conf: Remove extraneous whitespace * hardware: Split '\_add\_cpu\_pinning\_constraint' * libvirt: Delete the lase\_device of find\_disk\_dev\_for\_disk\_bus * EventReporterStub * Catch all local/catch-all addresses for IPv6 * placement: add ResourceClass and ResourceClassList * placement: raise exc when resource class not found * fix connection context manager in rc cache * pci: remove pci device from claims and allocations when freeing it * PCI: Fix PCI with fully qualified address * Log warning when user set improper config option value * libvirt: fix incorrect host cpus giving to emulator threads when RT * Transform instance.shutdown notifications * encryptors: Workaround mangled passphrases 
* Fix cold migration with qcow2 ephemeral disks * Updated from global requirements * config options: Improve help for SPICE * Remove manual handling of old context variables * api-ref: cleanup bdm.delete\_on\_termination field * api-ref: document the power\_state enum values * libvirt: Pass Host instead of Driver to volume drivers * conf: Attempt to resolve TODOs in scheduler.py * conf: Remove 'scheduler\_json\_config\_location' * Remove unreachable code * [api-ref] Fix path parameter console\_id * doc: add a note about conditional support for xenserver change password * Replace admin check with policy check in placement API * Fix import statement order * Fix database poison warnings, part 24 * libvirt: sync time on resumed from suspend instances * Fix database poison warnings, part 23 * Add RPC version aliases for Newton * Transform instance.unpause notifications * Catch NUMA related exceptions in create server API method * Notification object version test depends on SCHEMA * Updated from global requirements * Virt: add context to attach and detach interface * Imported Translations from Zanata * Stop using mox stubs in test\_shelve.py * Fix SAWarning in TestResourceProvider * Transform instance.unshelve notifications * TrivialFix: Fixed typo in 'MemoryPageSizeInvalid' exception name in docstrings * Make build\_requests.instance MediumText * Use six.wraps * Transform instance.resume notifications * Transform instance.shelve\_offload notifications * api-ref: fix image GET response example * Fix exception raised in exception wrapper * Add missing compat routine for Usage object * Updated from global requirements * Transform instance.power\_off notifications * conf: Removed TODO note and updated desc * Set 'last\_checked' flag if start to check scheduler file * Remove bandit.yaml in favor of defaults * Pre-add instance actions to avoid merge conflicts * Add swap volume notifications (error) * libvirt: add supported vif types for virtuozzo virt\_type * fix testcase test\_check\_can\_live\_migrate\_dest\_fills\_listen\_addrs * doc: Integrate oslo\_policy.sphinxpolicygen * Using get() method to prevent KeyError * tests: verify pci passthrough with numa * tests: Adding functional tests to cover VM creation with sriov * [placement] Add support for a version\_handler decorator * pci: in free\_device(), compare by device id and not reference * Mention API V2 should no longer be used * doc: Update libvirt-numa guide * Remove deprecated nova-manage vm list command * Remove block\_migration from LM rollback * PCI: Avoid looping over PCI devices twice * Update docs for serial console support * Remove conductor local api:s and 'use\_local' config option * Cleanup before removal of conductor local apis * compute: fixes python 3 related unit tests * XenAPI: Fix VM live-migrate with iSCSI SR volume * Fix the scope of cm in ServersTestV219 * Explicitly name commands target environments * \_run\_pending\_deletes does not need info\_cache/security\_groups * Updated from global requirements * hardware: Standarized flavor/image meta extraction * Tests: improve assertJsonEqual diagnostic message * api-ref: Fix wrong parameters in os-volumes.inc * Remove mox from unit/virt/libvirt/test\_imagebackend.py (1) * Send events to all relevant hosts if migrating * Catch error and log warning when not able to update mtimes * Clarify what changed with scheduler\_host\_manager * Add related options to floating ip config options * Correct bug in microversion headers in placement * Ironic Driver: override 
get\_serial\_console() * Updated from global requirements * Drop deprecated support for hw\_watchdog\_action flavor extra spec * Remove watchdog\_actions module * Removal of tests with different result depending on testing env * Add debug to tox environment * Document experimental pipeline in Nova CI * Update rolling upgrade steps from upgrades documentation * Add migrate\_uri for invoking the migration * Fix bug in "nova/tests/unit/virt/test\_virt\_drivers.py" for os-vif * Remove redundant req setting * Changed the name of the standard resource classes * placement: change resource class to a StringField * Remove nova/openstack/\* from .coveragerc * Remove deprecated nova-all binary * Fix issue with not removing rbd rescue disk * Require WebOb>=1.6.0 * conf: Remove deprecated \`\`use\_glance\_v1\`\` * Adding hugepage and NUMA support check for aarch64 * hacking: Use assertIs(Not), assert(True|False) * Use more specific asserts in tests * Add quota related tables to the api database * doc: add dev policy about no new metrics monitors * Always use python2.7 for functional tests * doc: note the future of out of tree support * docs: update the Public Contractual API link * Remove \_set\_up\_controller() from attach tests * Add InvalidInput handling for attach-volume * placement: add cache for resource classes * placement: add new resource\_classes table * hardware: Rework docstrings * doc: Comment on latin1 vs utf8 charsets * Improve help text for libvirt options * block\_device: Make refresh\_conn\_infos py3 compatible * Add swap volume notifications (start, end) * Add a hacking rule for string interpolation at logging * Stop using mox stubs in test\_snapshots.py * Stop using mox from compute/test\_multiple\_create.py * Don't attempt to escalate nova-manage privileges * Improve help text for upgrade\_levels options * Remove dead link from notification devref * Stop using mox stubs in test\_evacuate.py * Tests: fix a typo * ENOENT error on '/dev/log' * Patch mkisofs calls * conf: Group scheduler options * conf: Move consoleauth options to a group * Fix exception due to BDM race in get\_available\_resource() * Delete traces of in-progress snapshot on VM being deleted * Add error handling for delete-volume API * Catch DevicePathInUse in attach\_volume * Enable release notes translation * Fix drop\_move\_claim() on revert resize * Updated from global requirements * Fix API doc for os-console-auth-tokens * tests: avoid creation of instances dir in the working directory * config options: improve libvirt imagebackend * libvirt: fix DiskSmallerThanImage when block migrate ephemerals * Remove unnecessary credential sanitation for logging * Replace uuid4() with uuidsentinel * Change log level to debug for migrations pairing * Remove the duplicated test function * Move get\_instance() calls from try-except block * Allow running db archiving continuously * Add some extra logging around external event handling * Fix a typo in driver.py * Avoid Forcing the Translation of Translatable Variables * Fix database poison warnings, part 21 * libvirt: Fix BlockDevice.wait\_for\_job when qemu reports no job * Stop using mox from compute/test\_used\_limits.py * Updated from global requirements * Remove mox from tests/unit/conductor/tasks/test\_live\_migrate.py(3) * Remove mox from tests/unit/conductor/tasks/test\_live\_migrate.py(2) * Remove mox from tests/unit/conductor/tasks/test\_live\_migrate.py(1) * Fix calling super function in setUp method * refresh instances\_path when shared storage used * Prevent us from 
sleeping during DB retry tests * Fix error status code on update-volume API * conf: Trivial cleanup of console.py * conf: Trivial cleanup of compute.py * conf: Trivial cleanup of 'cells' * conf: Deprecate all topic options * Updated from global requirements * Disable 'supports\_migrate\_to\_same\_host' HyperV driver capability * Fix periodic-nova-py{27,35}-with-oslo-master * Report actual request\_spec when MaxRetriesExceeded raised * Make db archival return a meaningful result code * Remove the sample policy file * libvirt/guest.py: Update docstrings of block device methods * Fix small RST markup errors * [Trivial] changes tiny RST markup error * Add get\_context helper method * Use gabbi inner\_fixtures for better error capture * Hyper-V: Fixes os\_type image property requirement * conf: Cleanup of glance.py * conf: Move PCI options to a PCI group * Add Apache 2.0 license to source file * Updated from global requirements * Make releasenotes reminder detect added and untracked notes * [placement] reorder middleware to correct logging context * Fixes RST markup error to create a code-box * libvirt: support user password settings in virtuozzo * Removing duplicates from columns\_to\_join list * Ignore BuildRequest during an instance reschedule * Remove stale pyc files when running the cover job * Add a post-test-hook to run the archive command * [placement] ensure that allow headers are native strings * Fix a few typos in API reference * Fix typo on api-ref parameters * Fix typo in comment * Remove mox in nova/tests/unit/compute/test\_shelve.py (1) * Let schema validate image metadata type and key lengths * Remove scheduled\_at attribute from instances table * Fix database poison warnings, part 22 * Archive instance-related rows when the parent instance is deleted * Unwind circular import issue with api / utils * Fix database poison warnings, part 18 * Remove context object in oslo.log method * libvirt: pick future min libvirt/qemu versions * Improve consistency in serial\_console * conf: Improve consistency in scheduler opts * Move notification\_format and delete rpc.py * config options: improve libvirt smbfs * Fix database poison warnings, part 17 * Updated from global requirements * Fix database poison warnings, part 16 * Hyper-V: Adds Hyper-V UEFI Secure Boot * Stop overwriting thread local context in ClientRouter * Cleanup some redundant USES\_DB\_SELF usage * Fix database poison warnings, part 20 * Fix database poison warnings, part 19 * use proper context in libvirt driver unit test * Renamed parameters name in config.py * [placement] Allow both /placement and /placement/ to work * numa: Fixes NUMA topology related unit tests * VMware: Do not check if folder already exists in vCenter * libvirt: fixes python 3 related unit tests * Clean up stdout/stderr leakage in cmd testing * Capture stdout in for test\_wsgi:test\_debug * Add destroy method to the RequestSpec object * Remove last sentence * VMware: Enforce minimum vCenter version of 5.5 * test:Remove unused method \_test\_get\_test\_network\_info * Determine disk\_format for volume-backed snapshot from schema * Fix database poison warnings, part 15 * Fix CONTAINER\_FORMATS\_ALL to have ova insteadk of vmdk * Config options consistency of ephemeral\_storage.py * docs: Clarify sections & note on filter scheduler * Fixes python 3 unit tests * Add Hyper-V storage QoS support * Add blocker migration to ensure for newton online migrations * hacking: Always use 'assertIs(Not)None' * Hyper-V: fix image handling when shared storage is being 
used * Annotate online db migrations with cycle added * properly capture logging during db functional tests * [placement] 404 responses do not cause exception logs * Fix pep8 E501 line too long * Remove unused code * Replace uuid4() with generate\_uuid() from oslo\_utils * Return instance of Guest from method write\_instance\_config * Mock.side\_effects does not exist, use Mock.side\_effect instead * Remove redundant str typecasting * VMware: deprecate wsdl\_location conf option * Remove nova.image.s3 and configs * Remove internal\_id attribute from instances table * Fix stdout leakage during opportunistic db tests * Updated from global requirements * Improve help text for glance options * libvirt: ignore conflict when defining network filters * Add placeholder DB migrations for Ocata * Remove PCI parent\_addr online migration * Make nova-manage online migrations more verbose * Fix check\_config\_option\_in\_central\_place * Skip malformed cookies * Fix database poison warnings, part 14 * Standardize output capture for nova-manage tests * Work around tests that don't use nova.test as a base * Don't print to stdout when executing hacking checks * Make test logging setup fixture disable future setup * Fix typo in docsting in test\_migrations.py * Remove support for deprecated driver import * conf: Add 'deprecated\_reason' to osapi opts * Add hacking checks for xrange() * Using assertIsNone() instead of assertEqual(None) * move os\_vif.initialize() to nova-compute start * Add deprecated\_since parameter * [placement] Manage log and other output in gabbi fixure * Reduce duplication and complexity in format\_dom * Fix invalid exception mock for InvalidNUMANodesNumber * libvirt: fix serial console not correctly defined after live-migration * Add more description when service delete * trivial: Rewrap guide at 79 characters * plugins/xenserver: Add '.py' extension * conf: Fix opt indentation for scheduler.py * conf: Reorder scheduler opts * Updated from global requirements * Revert "Set 'serial' to new volume ID in swap volumes" * [placement] Adjust the name of the gabbi tests * placement: refactor instance translate function * Move wsgi-intercept to test-requirements.txt * Add missing slash to dir path * Expand feature classification matrix with gate checks * [placement] Stringify class and provider uuid in error * [api-ref] Correct parameter type * Remove default=None for config options * libvirt: cleanup never used migratable flag checking * Remove unnecessary setUp and tearDown * Remove unused parameters * Remove duplicate key from dictionary * Updated from global requirements * placement: refactor translate from node to dict * stub out instances\_path in unit tests * Add a new release note * XenAPI: add unit test for plugin test\_pluginlib\_nova.py * Add link ref to nova api concept doc * libvirt: Use the recreated disk.config.rescue during a rescue * Add members in InstanceGroup object members field * Updates URL and removes trailing characters * Stop ovn networking failing on mtu * Update reno for stable/newton * Don't pass argument sqlite\_db in method set\_defaults 14.0.0.0rc1 ----------- * Override MTU for os\_vif attachments * Fix object assumption in remove\_deleted\_instances() * Add is\_cell0 helper method * Set a bigger TIMEOUT\_SCALING\_FACTOR value for migration tests * Update minimum requirement for netaddr * [placement] consolidate json handling in util module * Fix unnecessary string interpolation * Handle TypeError when disabling host service * Fix an error in archiving 
'migrations' table * Remove deprecated flag in neutron.py * Clean up allocation when update available resources * [placement] Mark HTTP error responses for translation * [placement] prevent a KeyError in webob.dec.wsgify * Body Verification of api-ref os-volume-attachments.inc * Add functional regression test for bug 1595962 * Use tempest tox with regex first * libvirt: add ps2mouse in choice for pointer\_model * Doc fix for Nova API Guide, added missing word * conf: Make list->dict conversion more specific * Revert "tox: Don't create '.pyc' files" * Improve help text for xenapi\_session\_opts * Improve help text for service options * Correct image.inc for heading * Complete verification for os-cloudpipe.inc * Use assertEqual() instead of assertDictEqual() * Fix typo of stevedore * [placement] functional test for report client * Add regression test for immediate server name update * Fixed suspend for PCI passthrough * libvirt: Rewrite test\_rescue and test\_rescue\_config\_drive * Guard against failed cache refresh during inventory * More conservative allocation updates * [placement] Correct serialization of inventory collections * Switching expression order within if condition * Correct sort\_key and sort\_dir parameter for flavor * Correct address, version parameter in ips.inc * Use to\_policy\_values for policy credentials * Doc fix for Nova API Guide, fixed wording * Nova shelve creates duplicated images in cells * More conservative inventory updates * Fix server group name on api-ref * Update BuildRequest if instance currently being scheduled * Fix reno for removal of nova-manage service command * Add note about display\_name in \_populate\_instance\_names * Extended description for sync\_power\_state\_pool\_size option * Use recursive obj\_reset\_changes in BuildRequest * HyperV: ensure config drives are copied as well during resizes * [placement] make PUT inventory consistent with GET * Fill destination check data with VNC/SPICE listen addresses * Revert "libvirt: move graphic/serial consoles check to pre\_live\_migration" * Fix MonitorMetric obj\_make\_compatible * Using assertIsNotNone() instead of assertIsNot(None,) * [api-ref] fix availability\_zone for server create * Fix SafeConfigParser DeprecationWarning in Python 3.2 * Set 'serial' to new volume ID in swap volumes * Fix policy tests for project\_id enforcement * neutron: don't trace on port not found when unbinding ports * Remove RateLimitFault class * Rate limit is removed , update doc accordingly * Fix a typo from ID to Id * context: change the name 'rule' to 'action' in context.can * Add description for v2.20 changes in api-ref * Add sync\_power\_state\_pool\_size option * Additional logging for placement API * Fix resizing in imagebackend.cache() * [placement] cleanup some incorrect comments * Updated from global requirements * Compute: ensure that InvalidDiskFormat is handled correctly * Add keypairs\_links into resp * Add hypervisor\_links into hypervisor v2.33 * Throw exception if numa\_nodes is not set to integer greater than 0 * Add reserved param for v2.4 * Add more description on v2.9 history * libvirt: inject files when config drive is not requested * Pin maximum API version of microversion * XenAPI: resolve the fetch\_bandwidth failure * Fix api-ref doc for server-rebuild * [api-ref] Update configuration file * fix broken link in api-ref * Trivial fix remove not used var in parameters * Trival fix a typo * Increase BDM column in build\_requests table * VMware: Refactor the image transfer * Pass GENERATE\_HASHES 
to the tox test environment * [placement] add two ways to GET allocations * Handle ObjectActionError during cells instance delete * [placement] Add some tests ensuring unicode resource provider info * cleanup: separate the creation of a local root to it's own method * standardize release note page ordering * Remove misleading warning message * Add deprecated\_reason for use\_usb\_tablet option * db: retry on deadlocks while adding an instance * virt: handle unicode when logging LifecycleEvents * Ensure ResourceProvider/Inventory created before add Allocations record * Libvirt: Correct PERF\_EVENTS\_CPU\_FLAG\_MAPPING * Enable py3 tests for unit.api.openstack.compute.test\_console\_output * Implement setup\_networks\_on\_host for Neutron networks * Add tests for safe\_connect decorator * libvirt: improve logging for shared storage check * Cleanup allocation todo items * [placement] Allow inventory to violate allocations * Refresh info\_cache after deleting floating IP * Remove deprecated configuration option network\_device\_mtu * Example & Parameter verification of os-security-group-default-rules.inc * [placement] clean up some nits in the requestlog middleware * correctly join the usage to inventory for capacity accounting * Annotate db models that have moved to the nova\_api db * Stop using mox in virt/libvirt/test\_imagecache.py * Stop using mox in unit/fake\_processutils.py * [api-ref]: Correcting server\_groups\_list parameter's type * Fix race condition bug during live\_snapshot * ironic: Rename private methods for instance info * [placement] Fix misleading comment in wsgi loader * Remove mox from api/openstack/compute/test\_networks.py * Remove mox from api/openstack/compute/test\_rescue.py * Remove mox from api/openstack/compute/test\_image\_size.py * Remove mox from api/openstack/compute/test\_extended\_ips.py * Remove mox from nova/tests/unit/virt/xenapi/test\_driver.py * Remove mox from unit/api/openstack/compute/test\_hide\_server\_addresses.py * fixing block\_device\_mapping\_v2 data\_type * Updated from global requirements * Add bigswitch command to compute rootwrap filters * libvirt: add hugepages support for Power * incorrect description in nova-api.log about quota check * Removed enum duplication from nova.compute * Remove unused conf 14.0.0.0b3 ---------- * Remove deprecated cinder options * Simple instance allocations from resource tracker * Add support for allocations in placement API * Add create\_all and delete\_all for AllocationList * Pull from cell0 and build\_requests for instance list * Remove hacked test that fails with latest os-brick * Report compute node inventories through placement * Delete BuildRequest regardless of service\_version * Fix service version lookups * Remove BuildRequest when scheduling fails * Run cell0 db migrations during nova-manage simple\_cell\_setup * Move cell message queue switching and add caching * Add basic logging to placement api * Fixed indentation * Update placement config reno * Ignore generated merged policy files * Register keystone opts for placement sample config * Remove deprecated neutron options * ironic\_host\_manager: fix population of instances info on start * Eliminate additional DB queries in nova lists * Remove the incomplete wsgi script placement-api.py * ironic\_host\_manager: fix population of instances info on schedule * rt: ensure resource provider records exist from RT * Allow linear packing of cores * Return 400 error for non-existing snapshot\_id * create placement API wsgi entry point * Fix qemu version 
check * Documentation for the vendordata reboot * Add more vd2 unit tests * Add a TODO and add info to a releasenote * [placement] remove a comment that is no longer a todo * Make api-ref bug link point to nova * Api-ref: Improve os-migrateLive input parameters * Fix a typo in the driver.py file * New discover command to add new hosts to a cell * Clean up instance mappings, build requests on quota failure * Not allow overcommit ratios to be negative * Updated from global requirements * Use StableObjectJsonFixture from o.vo * test\_keypairs\_list\_for\_different\_users for v2.10 * Fix using filter() to meet python2,3 * Emit warning when use 'user\_id' in policy rule * Adds nova-policy-check cmd * Reduce code complexity - api.py * Use cls in class method instead of self \_delete\_domain is a class method, so cls should be used instead of self * Revert "Optional separate database for placement API" * Changed exception catching order * Add BuildRequestList object * In InventoryList.find() raise NotFound if invalid resource class * Updated from global requirements * Imported Translations from Zanata * TrivialFix: Remove cfg import unused * Add oslopolicy script runs to the docs tox target * Add entry\_point for oslo policy scripts * Tests: use fakes.HTTPRequest in compute tests * Remove conversion from dict to object from xenapi live\_migration * Hyper-V: properly handle shared storage during migrations * TrivialFix: Remove logging import unused * Hyper-V: properly handle UNC instance paths * Get ready for os-api-ref sphinx theme change * Update link in general purpose feature matrix * List system dependencies for running common tests * [api-ref]: Update link reference * Abort on HostNotCompatibleWithFixedIpsClient * Add warning if metadata\_proxy\_shared\_secret is not configured * devspec: remove unused dev\_count in devspec * TrivialFix: removed useless storing of sample directory * [api-guide]: Update reference links * Fix link reference in Nova API version * Provide more duplicate VLAN network error info * Correct microversions URL in api\_plugins.rst * Create Instance from BuildRequest if not in a cell * Added todo for deletion LiveMigrateData.detect\_implementation usage * driver.pre\_live\_migration migrate\_data is always an object * Manage db sync command for cell0 * Updated common create server sample request because of microversion 2.37 * Remove TODO for service version caching * removed db\_exc.DBDuplicateEntry in bw\_usage\_update * Add online migration to move instance groups to API database * Remove locals() for formatting strings * Hyper-V: update live migrate data object * Config options consistency of notifications.py * Add networks to quota's update json-schema when network quota enabled * rt: isolate report and query sched client tests * rt: remove ComputeNode.create\_inventory * rt: rename test\_tracker -> test\_resource\_tracker * rt: remove old test\_resource\_tracker.py * Updated from global requirements * Remove deprecated security\_group\_api config option * Added min\_version field to 'host\_status' in 'api-ref' * Make InstanceGroup object favor the API database * Doc: Update PCI configuration options * Don't maintain user\_id and project\_id in context * Add support for usages in the placement API * Add a Usage and UsageList object * Add support for inventories to placement API * Check capacity and allocations when changing Inventory * Add release note to warn about os-brick lock dir * config options: improve help netconf * Config options consistency for 
consoleauth.py * Support Identity v3 when connecting to Ironic * Copy edit feature classification * don't report network limits after 2.35 * Adding details in general purpose feature matrix [1] * Improve placement API 404 and 405 response tests * doc: fix disk=0 use case in flavor doc * Config options: improve libvirt help text (1) * Dump json for nova.network.model.Model objects * Improve error message for empty cached\_nwinfo * Return HTTP 400 on list for invalid status * Move some flavor fakes closer to where they are being used * Replace flavors.get\_all\_flavors\_sorted\_list() with object call * Refactor and objectify flavor fakes used in api tests * Fix 'No data to report' error * Change api-site to v2.1 format * Refuse to run simple\_cell\_setup on CellsV1 * In placement API send microversion header when error * libvirt: Improve mocking of imagebackend disks * Updated flags for XVP config options * Add unit tests for nova.virt.firewall.IpTablesFirewallDriver (Part 4) * [libvirt] Remove live\_migration\_flag & block\_migration\_flag * placement: add filtering by attrs to resource\_providers * Add support for resource\_providers urls * Remove nova/api/validator.py * Updated from global requirements * Change default value of live\_migration\_tunnelled to False * Remove code duplication in enums * [vncproxy] log for closing web is misleading * Return None in get\_instance\_id\_by\_floating\_address * Make simple\_cell\_setup work when multiple nodes are present * Add REST API support for get me a network * plugins/xenserver: Resolve PEP8 issues * Fix migration list + MigrationList operation * rt: Create multiple resize claim unit test * rt: Refactor unit test for trackable migrations * VIF: add in missing translation * Clean imports in code * Fix neutron security group tests for 5.1.0 neutronclient * modify description of "Inject guest networking config" * os-vif: do not set Route.interface if None * Check opt consistency for neutron.py * Improve help text for compute manager options * Make simple\_cell\_setup idempotent * Add cell\_v2 verify\_instance command * Remove unnecessary debug logs of normal API ops * Replace mox with mock in test\_validate\_bdm * Replace mox with mock in test\_cinder * Allow Nova Quotas to be Disabled * Allow authorization by user\_id for server evacuate * Allow authorization by user\_id for server update * Allow authorization by user\_id for server delete * Allow authorization by user\_id for server changePassword action * Update binding:profile for SR-IOV ports on resize-revert * Verified deprecation status for vnc options * Add tests for user\_id policy enforcement on trigger\_crash\_dump * Allow authorization by user\_id for server shelve action * Allow authorization by user\_id for force\_delete server * Allow authorization by user\_id for server resize action * Allow authorization by user\_id for server pause action * Add tests for user\_id policy enforcement on stop * Fix consistency in crypto conf * Add placement API web utility methods * Improve help text for XenServer Options * Improve help text for xenapi\_vm\_utils\_opts * network: fix handling of linux-bridge in os-vif conversion * Fix consistency in API conf * Improve consistency in WSGI opts * Add unit tests for nova.virt.firewall.IpTablesFirewallDriver (Part 3) * Improve help text for xenapi\_opts * Maintain backwards compat for listen opts * Allow authorization by user\_id for server rescue action * Allow authorization by user\_id for server rebuild * Allow authorization by user\_id for 
server suspend action * Allow authorization by user\_id for server lock action * Optional separate database for placement API * Replace fake\_utils by using Fixture * virt/image: between two words without a space in output message * config options: improve help text of database (related) options (2/2) * config options: improve help text of database (related) options (1/2) * Remove hacking check [N347] for config options * Skipping test\_volume\_backed\_live\_migration for live\_migration job * rt: New unit test for rebuild\_claim() * List instances for secgroup without joining on rules * Improve help text for vmwareapi\_opts * Updated from global requirements * vnc host options need to support hostnames * Removed flag "check\_opt\_group\_and\_type" from pci.py * Removed flag "check\_opt\_group\_and\_type" * libvirt: convert over to use os-vif for Linux Bridge & OVS * Remove left over conf placeholders * libvirt: Rename import of nova.virt.disk.api in driver * Fix server operations' policies to admin only * Add support for vd2 user context to other drivers * api-ref: Example verification for os-simple-tenant-usage.inc * Remove unused exception: ImageNotFoundEC2 * Fix opt description for s3.py * virt/hardware: Check for threads when "required" * Improve consistency in VNC opts * Improve help text for compute\_opts * Config options: Improve help text for console options * Config options: Consistency check for remote\_debug options * docs: update code-review guide for config options * Add separate create/delete policies to attach\_interface * Fix handling of status in placement API json\_error\_formatter * Use constraints for all tox environments * Move JSON linting to pep8 * HyperV: remove instance snapshot lock * rt: Move monitor unit tests into test\_tracker * rt: Move unit tests for update usage for instance * rt: Move unit tests for update mig usage * rt: Remove useless unit test in resource tracker * rt: Remove dup tests in test\_resource\_tracker * rt: Remove incorrect unit test of resize revert * rt: Refactor test\_dupe\_filter unit test * rt: Remove duplicate unit test for missing mig ctx * rt: Refactor resize claim abort unit test * rt: Refactor resize\_claim unit test * Set enforce\_type=True in method flags * Use constraints for releasenotes * Add some logging and a comment for shelve/unshelve operations * Run shelve/shelve\_offload\_instance in a semaphore * Check opt consistency for api.py * Allow empty CPU info of hypervisors in API response * Config options consistency of rdp.py * Improve consistency in workarounds opts * Refresh README and its docs links * Correct InventoryList model references * instance.name should be blank if instance.id is not set * Cells: Handle delete with BuildRequest * Add NoopConductorFixture * Make notification objects use flavor capacity attributes * Fix busted release notes * config options: Improve help for conductor * Config options: base path configuration * PCI: Fix network calls order on finish\_revert\_resize() * Remove deprecated legacy\_api config options * Config Options: Improve help text for Ipv6 options * Update tags for Image file url from filesystems config option * Check options consistency in hyperv.py * Improve help text for floating ips options * config options: Improve help for base * Improve consistency in API * cleanup: some update xml cases in test\_migration * Use stashed volume connector in \_local\_cleanup\_bdm\_volumes * Ironic: allow multiple compute services * api-ref: Parameter verification for 
os-simple-tenant-usage.inc * Ironic: report node.resource\_class * network: introduce helper APIs for dealing with os-vif objects * ironic: Cleanup instance information when spawn fails * update wording around pep8 exceptions * Remove backward compatibility with pre-grizzly releases * use the HostPortGroupSpec.vswitchName instead of HostPortGroup.vswitch.split * Replace functions 'Dict.get' and 'del' with 'Dict.pop' * Updated from global requirements * Strict ImageRef validation to UUID only * Add the ability to configure glanceclient debug logging * Deprecate cert option * Merged barbican and key\_manager conf files into one * Config options consistency of pci.py * config option: rename libvirt iscsi\_use\_multipath * Fix require thread policy for multi-NUMA computes * Allocate PCI devices on migration * TrivialFix: Fixed a typo in nova/test.py * Updated from global requirements * Improve help text of image\_file\_url * Ironic: enable multitenant networking * libvirt: Remove some unnecessary mocking in test\_driver * libvirt: Pass object to \_create\_images\_and\_backing in test * libvirt: Reset can\_fallocate in test setUp() * libvirt: Create console.log consistently * Fixed invalid UUIDs in unit tests * Remove deprecated manager option in cells.py * Refactor deallocate\_fixed tests to use one mock approach instead of three * Improve consistency in virt opts * Updated header flag in SSL opts * Updated from global requirements * Don't cache RPC pin when service\_version is 0 * Imported Translations from Zanata * Remove white space between print and () * Flavor: correct confusing error message about flavorRef * Consistency changes for osapi config options * Fixed typos in nova: compute, console and conf dir * Add objects.ServiceList.get\_all\_computes\_by\_hv\_type * Add InstanceList.get\_uuids\_by\_host() call * Conf options: updated flags for novnc * Address feedback on cell-aggregate-api-db patches * Updated from global requirements * Add data migration methods for Aggregate * Config options: Consistency check for quota options * Add server name verification in instance search * Fix typo in DeviceDetachFailed exception message * Straddle python-neutronclient 5.0 for testing * Initialise oslo.privsep early in main * Cells: Simple setup/migration command * Aggregate create and destroy work against API db * Make Aggregate.save work with the API db * Improve help text for vmware * Config options consistency of exceptions.py * Help text for the mks options * Trivial option fixes * Properly quote IPv6 address in RsyncDriver * rbd\_utils: wrap blocking calls in tpool.Proxy() * Resolve PCI devices on the host during Guest boot-up * Fixed typos in nova, nova/api, nova/cells directory * Fix misspellings * Trivial: add 'DEPRECATED' for os-certificates API ref * Mention proxy API deprecation microversion in api-ref * xenserver: fix an output format error in cleanup\_smp\_locks * Add log for instance without host field set * Improve consistency in crypto * Deprecate barbican options * Improve consistency in flavors * Improve the help text for the guestfs options * Reminder that release notes are built from commits * Add initial framing of placement API * Add missing ComputeHostNotFound exception in live-migration * Free new pci\_devices on revert-resize * Use oslo\_config new type PortOpt for port options * Updated from global requirements * Remove unused imports in api/openstack/fakes.py * Add docs about microversion testing in Tempest * Remove leftover list\_opts entry points * Remove 
nova.cache\_utils oslo.config.opts entrypoint * Remove nova.network namespace from nova-config-generator.conf * Remove neutronv2.api oslo.config.opt entry point * Follow up on Update binding:profile for SR-IOV ports * Improve consistency in servicegroup opts * Improve help text for cloudpipe * Remove the useless version calculation for proxy api deprecated version * numa: remove the redundant check for hw\_cpu/hw\_mem list * Add support for oslo.context 2.6.0 * Update tags for Cache config option * Remove unused validation code for quota\_sets * Revert "Don't assert exact to\_dict output" * cleanup\_live\_migration\_destination\_check spacing * Default image.size to 0 when extracting v1 image attributes * Add details to general purpose feature matrix * Adding functional tests for 2.3 microversion * compute: Skip driver detach calls for non local instances * libvirt: Fix invalid test data * libvirt: Fix fake \_disk\_info data in LibvirtDriverTestCase * Don't set empty kernel\_id and ramdisk\_id to glance image * Config options consistency for cell.py * Refuse to have negative console ttls * Option Consistency for availability\_zone.py * Add a small debug line to show selection location * Fix wrong override value of config option vswitch\_name * Fix wrong override value of config option proxyclient\_address * Call release\_dhcp via RPC to ensure correct host * Adjust MySQL access with eventlet * Improve consistency in cert * Updated from global requirements * rt: don't log pci\_devices twice when updating resources * Config options consistency for configdrive.py * Remove deprecated ironic.api\_version config option * Improve the help text for compute timeout\_opts * Deprecate the nova-manage commands that rely on nova-network * Improve consistency in xenserver * Add the 'min' param to IntOpts where applicable * Remove unused config option 'fake\_call' * Make Aggregate metadata functions work with API db * Use deprecated\_reason for network quota options * "nova list-extensions" not showing summary for all * Fix typos in deprecates-proxy-apis release note * Enable deferred IP on Neutron ports * Improve help text for XenServer pool opts * remove config option iqn\_prefix * Deprecate os-certificates * Update RequestSpec nested flavor when a resize comes in * New style vendordata support * Add metadata server fixture * Improve help text for quota options * Improve help text for consoleauth config options * Bump Microversion to 2.36 for Proxy API deprecation * api: use 'if else' instead of 'try exception' to get password value * Add better help to rdp options * Adding details in general purpose feature matrix * Enables Py34 tests for unit.api.openstack.compute.test\_server\_actions * Filter network related limits from limits API * Filter network related quotas out of quotas API * Deprecate Baremetal and fping API * Deprecate volumes related APIs * Deprecate SecurityGroup related proxy API * Deprecated floating ip related proxy APIs * Complete verification of os-instance-actions.inc * Check opt group and type for nova.conf.service.py * Fix links to network APIs from api-ref * Add comment about how status field changed * Fix database poison warnings, part 13 * Deprecate network quota configuration * Verify os-aggregates.inc on sample files * Cleanup: validate option at config read level * :Add missing %s in print message * api-ref: unify the no response output in delete operation * Return 400 when SecurityGroupCannotBeApplied is raised * network: handle forbidden exception from neutron * Avoid 
update resource if compute node not updated * Document update\_task\_state for ComputeDriver.snapshot * Config Option consistency for crypto.py * Fix database poison warnings, part 12 * Don't check cinder volume states during attach * Clean up test\_check\_attach\_availability\_zone\_differs * Fix database poison warnings, part 11 * Fix opt description and indentation for flavors.py * Remove redundant flag value check * Improve help context of ironic options * Update instance node on rebuild only when it is recreate * Remove unneeded bounds-checking code * Improve the help text for the linuxnet options (4) * Don't assert exact to\_dict output * Fix database poison warnings, part 10 * config options: help text for enable\_guestfs\_debug\_opts * Fix database poison warnings, part 9 * Improve help text of s3 options * Remove deprecated config option volume\_api\_class * Fix inappropriate notification send * libvirt: Fix signature and behaviour of fake get\_disk\_backing\_file * libvirt: Pass path to Image base class * Remove max\_size argument to images.fetch and fetch\_to\_raw * Update tox.ini: Constraints are possible for api\* jobs * Separate api-ref for list security groups by server * Deprecate FixedIP related proxy APIs * Deprecated networks related proxy APIs * Check option descriptions and indentations for configdriver.py * Make Aggregate host operations work against API db * libvirt: open RBD in read-only mode for read-only operations * Remove unnecessary code added for ec2 deprecation * Enhance notification doc generation with samples * Depracate Images Proxy APIs * Correct the network config option help text * config options: improve help for noVNC * Replace deprecated LOG.warn with LOG.warning * Fixed typos in api-ref and releasenotes directory * Fix invalid import order and remove import \* * Improve the help text for the network options (4) * Add async param to local conductor live\_migrate\_instance * libvirt: update guest time after suspend * libvirt: Modify the interface address object assignment * Update binding:profile for SR-IOV ports * Port nova test\_serversV21.Base64ValidationTest to Python 3 * Refactor instance action notification sample test * Config option update tasks for availability\_zone * Expand initial feature classification lists * Add prototype feature classification matrix * [libvirt] Live migration fails when config\_drive\_format=iso9660 * Modify docstring of numa\_get\_reserved\_huge\_pages method * Use constraints for coverage job * Remove compute host from all host aggregates when compute service is deleted * Fix incorrect cellid numbering for NUMA memnode * Fix opt descripton for cells.py * Fix host mapping saving * Example and body verification of os-quota-sets.inc * Remove deprecated network\_api\_class option * neutron: destroy VIFs if allocating ports fails * Validate pci\_passthrough\_whitelist when starting n-cpu * Rename compute manager \_check\_dev\_name to \_add\_missing\_dev\_names * Remove unused context argument to \_default\_block\_device\_names() * Fix typo in AdminPasswordController

14.0.0.0b2
----------

* Use from\_environ when creating a context * Pass kwargs through to base context * Fix opt description and check deprecate status for hyperv.py * VMware: Enable disk.EnableUUID=True in vmx * hyper-v: device tagging * Add release notes for notification transformation * Assert reservation\_id in notification sample test * Remove redundant DEPRECATED tag from help messages * Fix PUT server tag 201 to return empty content * Clean up helper
methods in ResourceProvider * Transform instance.restore notifications * neutron: delete VIFs when deallocating networking * Add VirtualInterface.destroy() * Make notifications module use flavor capacity attributes * Make ironic driver use flavor fields instead of legacy ones * Make xenapi driver use flavor fields instead of legacy ones * Make libvirt driver use flavor fields instead of legacy ones * Make hyperv driver use flavor fields instead of legacy ones * Make vmware driver use flavor fields instead of legacy ones * Bump service version for BuildRequest deletion * Stop instance build if BuildRequest deleted * Add block\_device\_mappings to BuildRequest * Improve help text of flavors config options * Improve help text for cinder config options * Microversion 2.35 adds keypairs pagination support * Fix up legacy resource fields in simple-tenant-usage * Use flavor attributes instead of deprecated instance resources * Typo fix: remove multiple whitespace * network: handle unauthorized exception from neutron * Fix the broken links * 'limit' and 'marker' support for db\_api and keypair\_obj * Improve help text for exceptions * Improve help text for compute running\_deleted\_opts * rest api version bumped for async pre live migration checks * Add user\_id request parameter in os-keypairs list * Revert "Detach volume after deleting instance with no host" * Don't overwrite MarkerNotFound error message * tox: Use conditional targets * tox: Don't create '.pyc' files * Improve help text for allocation\_ratio\_opts * Release note for vzstorage volume driver * Fix typo in \_update\_usage\_from\_migrations * Transform instance.resize notifications * Refactors nova.cmd utils * Replace DOS line ending with UNIX * migration volume failed for invalid type * api-ref: fix wrong description about response example in os-hypervisor * api-ref: body verification of os-agents * Fix wrong JSON format in API samples * Implement ResourceProvider.destroy() * Add Allocation and AllocationList objects * Deprecate nova-manage vm list command * Remove live-migration from nova-manage man page * Deprecate the quota\_driver config option * Allow irrelevant,self-defined specs in ComputeCapacityFilter * Transform instance.pause notifications * Fix opt description for scheduler.py * Verify "needs:check\_deprecation\_status" for serial\_console.py * API: catch InstanceNotReady exception * Transform instance.shelve notifications * Replace unicode with six.text\_type * Added support for new block device format in vmops * XenAPI: add unit test for plugin bandwidth * api-ref: unify the delete response infomation * Add nova-manage quota\_usage\_refresh command * Quota changes for the nova-manage quota\_usage\_refresh command * Remove DictCompat from SecurityGroup * Replace use of eval with ast.literal\_eval * libvirt: fix missed test in migration * Improve the help text for the network options (3) * Correct reraising of exception * api-ref: Parameter verification for servers-actions.inc Part 1 * Body verification of os-interface.inc * Parameter verification of os-instance-actions.inc * xvp: change the default xvp conf path to CONF.xvp group * libvirt:code flow problem in wait\_for\_job * Clean up service version history comments * Add a ResourceProviderList object * Refactor block\_device\_mapping handling during boot * Remove spaces around keyword argument * Use ovo in test\_obj\_make\_compatible() * Improve the help text for the network options (2) * Update mutable-config reno with LM timeout params * Added better error 
messages during (un)pinning CPUs * Remove duplicate policy test * Complete verification for os-virtual-interfaces * api-ref: os-volumes.inc * Enable python34 tests for nova.tests.unit.pci.test\_manager and test\_stats * api-ref: merge multiple create to servers.inc * Improve the help text for configdrive options * Revert "Remove manual creation of console.log" * Fix invalid import order * Fix invalid import order * Fix invalid import order * config options: improve help for notifications * Fix invalid import order * Fix invalid import order * Remove unused itype parameter from get migration context * Do not try to backport when db has older object version * Detach volume after deleting instance with no host * Transform instance.suspend notifications * Hacking check for \_ENFORCER.enforce() * Remove final use of \_ENFORCER.enforce * Hacking check for policy registration * Extract \_update\_ports\_for\_instance * Extract port create from allocate\_for\_instance * Improve help text for resource tracker options * Transform instance.power\_on notifications * Add a py35 environment to tox * api-ref: add note about os-certificates API * XenAPI: UT: Always mock logging configuration * Fix api\_validation for Python 3 * api-ref: verify assisted-volume-snapshots.inc * Delete reduplicate code in test\_compute\_mgr.py * Port test\_hacking to Python 3 * Fix comment for version 1.15 ComputeNodeList * Microversion 2.33 adds pagination support for hypervisors * VMware: create vif with resource limitations * policy: clean-up * Make VIF.address unique with port id for neutron * Device tagging metadata API support * trivial: remove unnecessary mock from servers API test * Return HTTP 200 on list for invalid status * Complete verification for os-floating-ips-bulk * Transform instance.update notification * Pre-add instance actions to avoid merge conflicts * Transform instance.delete notifications * XenAPI: Add UT for independent compute option * Log DB exception if VIF creation fails * Fixes compute API unit tests for python3 * Reduce complexity in \_stub\_allocate\_for\_instance * Reorder allocate\_for\_instance preamble * Make \_validate\_requested\_network\_ids return a dict * Extract \_validate\_requested\_network\_ids * Create \_validate\_requested\_port\_ids * Extract \_filter\_hypervisor\_macs * Always call port\_update in allocate\_for\_instance * Device tagging API support * Mapping power\_state from integer to string * Compute manager device tagging support * trivial: comment about vif object address field * Example verification for os-fixed-ips.inc * Revert "Detach volume after deleting instance with no host" * policy: Replaces 'authorize' in nova-api (part 5) * libvirt: add todo about bdms in \_build\_device\_metadata * libvirt: virtuozzo instance rescue mode support * api-ref: os-certificates.inc * policy: Replaces 'authorize' in nova-api (part 4) * Make LM timeout params mutable * Help text for the ephemeral storage options * Config Options: Improve help text for debugger * Make Ironic options definitions consistent * Fix some typos * Add namespace oslo.db.concurrency in nova-config-generator.conf * Remove mox in tests/unit/objects/test\_quotas * Remove network information from IOVisor vif * Add automatic switching to postcopy mode when migration is not progressing * Extend live-migration-force-complete to use postcopy if available * Add a test utility for checking mock calls with objects * Remove invalid test for config option scheduler\_host\_manager * Complete verification for api-ref 
os-flavor-extra-specs * policy: Replaces 'authorize' in nova-api (part 3) * libvirt: Add migration support for perf event support * Libvirt driver implementation of device tagging * Add policy sample generation * Cleanup instance device metadata object code * libvirt: virtuozzo instance resize support * Fix test\_ipv6 and simplify to\_global() * Remove russian from unit/image/test\_glance.py * Py3: fix serial console output * \_security\_group\_get\_by\_names cleanup * Add reminder comments for compute rpcapi version bump * Update get\_instance\_diagnostics for instance objects * Improve help text for wsgi options * Don't immediately null host/node when shelving * Evaluate 'task\_state' in resource (de)allocation * Add new configuration option to turn auto converge on/off * Add new configuration option to turn postcopy on/off * Improve nova.rpc conf options documentation * Fix spelling mistake * Add ability to select specific tests for py34 * Remove mox from unit/compute/test\_compute.py (4) * Remove mox from unit/compute/test\_compute.py (end) * Remove mox from unit/compute/test\_compute.py (11) * Remove mox from unit/compute/test\_compute.py (10) * Remove mox from unit/compute/test\_compute.py (9) * Remove mox from unit/compute/test\_compute.py (8) * Remove mox from unit/compute/test\_compute.py (7) * Remove mox from unit/compute/test\_compute.py (6) * Remove mox from unit/compute/test\_compute.py (5) * UT: cleanup typo in libvirt test\_config * Remove mox from unit/compute/test\_compute.py (3) * Remove mox from unit/compute/test\_compute.py (2) * Remove mox from unit/compute/test\_compute.py (1) * Improve image signature verification failure notification * libvirt: attach configdrive after instance XML * libvirt: add nova volume driver for vzstorage * Moving test helpers to a common place * On port update check port binding worked * Refactor to create \_ensure\_no\_port\_binding\_failure * policy: Replaces 'authorize' in nova-api (part 2) * XenAPI: Add option for running nova independently from hypervisor * XenAPI: Stream config drive to XAPI * XenAPI: Perform disk operations in dom0 * Port test\_ipv6 to py3 and simplify to\_global() * api-ref: Example verification for os-agents.inc * Allow monitor plugins to set own metric object * api-ref: correct the order of APIs in server-tags * Remove unused LOG * Remove unnecessary \_\_init\_\_ * Release notes: fix typos * Make print py3 compatible * libvirt: fix disk size calculation for VZ container instances * Fix error message for VirtualInterfaceUnplugException * libvirt: Add boot ordering to individual disks * image\_meta: Add hw\_rescue\_device and hw\_rescue\_bus * collapse servers.ViewBuilderV21 into servers.ViewBuilder * remove personality extension * remove preserve-ephemeral rebuild extension * remove access\_ips extension * Bump the service version for get-me-a-network support * neutron: handle 'auto' network request in allocate\_for\_instance * Add unit tests for nova.virt.firewall.IpTablesFirewallDriver (Part 2) * libvirt: split out code for recovering after migration tasks * libvirt: split out code for processing migration tasks * libvirt: split off code for updating migration stats in the DB * libvirt: split off code for updating live migration downtime * api-ref: verify images.inc * libvirt: split out code for determining if migration should abort * libvirt: split out code for detecting live migration job type * policy: Replaces 'authorize' in nova-api (part 1) * Check if flavor.vcpus is more than MAX\_TAP\_QUEUES * policy: Add 
defaults in code (part 6) * objects: Add devices\_metadata to instance object * objects: new InstanceDeviceMetadata object * db: add a device\_metadata column to instance\_extra * libvirt: add perf event support when create instance * Improve help text of crypto.py * objects: adding an update method to virtual\_interface * Rename driver method check\_can\_live\_migrate\_destination\_cleanup * api-ref: added docs for microversion 2.26 * policy: Add defaults in code (part 5) * policy: Add defaults in code (part 4) * policy: Add defaults in code (part 3) * policy: Add defaults in code (part 2) * add ploop support into qemu-img info * policy: Add defaults in code (part 1) * Handle UnableToAutoAllocateNetwork in \_build\_and\_run\_instance * Add note about preserve\_ephemeral limitations * Add console auth tokens db api methods * Remove mox from unit/virt/libvirt/volume/\*.py * Port cinder unit tests to Python 3 * Port test\_pipelib and test\_policy to Python 3 * Adding missing log translation hints * Add instance groups tables to the API database * Make live migration checks async * Check for None max\_count for Python 3 compat * Updated from global requirements * fix developer docs on API * libvirt: virtlogd: use "log" element in char devices * Fix ConsoleAuthTokens to work for all console types * remove os-disk-config part 4 * remove os-disk-config part 3 * remove load\_standard\_extensions method * Modify "policy.conf" to "policy.json" * Ensures that progress\_watermark and progress\_time are updated * Add a note for policy enforcement by user\_id * XenAPI: Support neutron security group * Added instance actions for conductor * Stop using mox stubs in nova/tests/unit/test\_metadata.py * remove support for legacy v2 generator extensions * Remove duplicate unit test resource tracker * Prevent instance disk overcommit against itself * api-ref: parameter verification os-agents * make failures on api\_samples more clear * api-ref, os-services.inc * api-ref: docs for microversion v2.28 * Update dhcp\_opts on both create and update * api-ref: Improve os-instance\_usage\_audit\_log samples * Add ironic mac address when updating and creating * pci: Deprecate is\_new from pci requests * Enhance notification sample test base * Handle multiple samples per versioned notification * Transform wrap\_exception notification to versioned format * XenAPI: OVS agent updates the wrong port with Neutron * Stop using mox from unit/fake\_server\_actions.py * objects: you want'em * libvirt: enhance method to return pointer\_model from image prop * Improve help text for service group options * Updated from global requirements * Skip network allocation if 'none' is requested * Separete notification object version test * [typo] replaced comupte to compute in test * api-ref, os-availability-zone.inc * Config: no need to set default=None * Add delete\_, update\_ and add\_ inventory to ResourceProvider * libvirt: fix typos in comments * Remove the nova.compute.resources entrypoint * Re-deprecate use\_usb\_tablet config option * Log the network when neutron won't apply security groups * api-ref: parameter verification os-fixed-ips * Add CellMappingList object * Add console auth tokens table and model * live migration check source failed caused bdm.device\_path lost * Use is\_valid\_ipv4 from oslo.utils * Include exception in \_try\_deallocate\_network error log * Remove mox from tests/unit/virt/test\_imagecache.py * Fix docstring nits from ResourceProvider.set\_inventory() review * fix errors in revert resize api docs * 
Add set\_inventory() method on ResourceProvider * Improve the help text for cells options (8) * VMware: Fix bug of TypeError when getting reference of VCenter cluster is None * XenAPI: Integers returned from XAPI are actually strings * Remove virt.block\_device.\_NoLegacy exception * rename libvirt has\_default\_ephemeral * Remove ec2\_code from exception * Add specific lazy-load method for instance.tags * Don't attempt to lazy-load tags on a deleted instance * Pre-load tags when showing server details * Policy-in-code servers rules * Fix image meta which is sent to glance v2 * Extract update\_port call into method * Refactor to create \_populate\_mac\_address * Rename \_populate\_mac\_address adding pci * Rename created\_port to created\_port\_id * Flip allocate\_for\_instance create or update if * libvirt: cleanup baselineCPU return value checking * Updated from global requirements * Remove mox from tests/unit/objects/test\_aggregate.py * Handle keypair not found from metadata server * Skip network validation if explicitly requesting no networks * nova-net: handle 'auto' network request in allocate\_for\_instance * neutron: validate auto-allocate is available * Add helpers to NetworkRequest(List) objects for auto/none cases * Remove api\_rate\_limit config option * Tear down of os-disk-config part 2 * Tear down os-disk-config part 1 * Disallow instance tag set for invalid instance states * Make instance as second arg in compute api calls * TrivialFix: Remove extra comma from json * Skip NFS and Ceph in live migration job test run * Added missed response to test\_server\_tags * api-ref: console types * api-ref: add version 2.3 parameters to servers * Remove extra expected error code (413) from image metadata * Use instance object instead of db record * Publish proxy APIs deprecation in api ref doc * Fix outdated parameter network\_info description in virt/driver * api-ref: Fix parameters in os-instance-usage-audit-log * Remove python code validation specific to legacy\_v2 * Remove DictCompat from instance\_info\_cache * Remove redundant test in test\_resource\_tracker * nova shared storage: rbd is always shared storage * Modify the disk bus and device name for Aarch64 * Remove mox from unit/compute/test\_compute\_mgr.py (end) * Remove mox in tests/unit/objects/test\_instance\_faults * Remove mox from unit/compute/test\_compute\_mgr.py (6) * Remove mox from unit/compute/test\_compute\_mgr.py (8) * Remove mox from unit/compute/test\_compute\_mgr.py (7) * Trivial-Fix: Fix typos * Fix some typos * Remove mox from unit/compute/test\_compute\_mgr.py (5) * Remove mox from unit/compute/test\_compute\_mgr.py (4) * Remove mox from unit/compute/test\_compute\_mgr.py (3) * Remove mox from unit/compute/test\_compute\_mgr.py (2) * Updated from global requirements * Make Aggregate.get\_by\_uuid use the API db * api-ref: parameter verification for os-aggregates * Improve help text for neutron\_opts * remove processing of blacklist/whitelist/corelist extensions * fix OS-SCH-HNT:scheduler\_hints location in sample * Fix reno from hyper-v-remotefx * Yield the thread when verifying image's signature * Remove invalid test methods for config option port\_range * libvirt: Prevent block live migration with tunnelled flag * Trivial: remove none existing py3 test from tests-py3.txt * Make host as second arg in compute api calls * Stop using mox stubs in tests/unit/fake\_notifier * Remove unused \_get\_flags method from integrated\_helpers * Enable all extension for all remaining sample tests * tox.ini: Remove 
unnecessary comments in api-ref target * Stop using mox stubs in nova/tests/unit * Updated from global requirements * Raise exception if BuildRequest deleted twice * Replace mox with mock for xenapi vm\_utils.lookup * Detach volume after deleting instance with no host * pci: Allow updating pci\_requests in instance\_extra * Change default fake\_ server status to ACTIVE * Fix update inventory for multiple providers * Default to using glance v2 * Enable all extension for remaining server API tests * Enable all extension for server API tests part-1 * Remove mox from unit/compute/test\_compute\_mgr.py (1) * Fixes py3 unit tests for nova.tests.unit.test\_block\_device.\* * Reno for mutable-config * Remove invalid test of config option default\_notification\_level * Improve the help text for cells options (7) * test: pass enable\_pass as kwarg in test\_evacuate * Remove config option config\_drive\_format's invalid value test * test: remove invalid test method in libvirt/test\_imagebackend * xenapi: Remove invalid values for config option image\_compression\_level * Remove mox from api/openstack/compute/test\_pci.py * Stop using mox from openstack/compute/test\_cells.py * Enable all extension for server actions sample tests * Enable all extension for Flavor API sample tests * Fix resource tracking for instances with no numa topology * Clarified "user" to plural type * Revert "Optimize \_cleanup\_incomplete\_migrations periodic task" * Remove unused authorizer methods * Remove legacy v2 policy rules * Add unit tests for nova.virt.firewall.IpTablesFirewallDriver (Part 1) * Make create\_inventory() handle name change * Add ResourceProvider.save() * Remove the skip\_policy\_check flags * api-ref: verify keypairs * Make Xenplugin to work with glance v2 api * Trival: version history 2.30 is not indented as others * Do not register notification objects * Move notification objects to a separate package * Move notification related code to separate package * Adjust field types and defaults on Inventory * Add InventoryList.find() method * Add a get\_by\_uuid for aggregates * Imported Translations from Zanata * get rid of the old \_vhd methods * Make Hyper-V to work with glance v2 api * Stop using mox stubs in stub\_out\_key\_pair\_funcs * Remove v2 extension setting from functional tests * Add name and generation to ResourceProvider object * Remove duplicate test of DELETED instances * Added support for new block device format in Hyper-V * Enable mutable config in Nova * Improve help text for availability zones options * tests: make XMLMatches work with Python3 * Catch PciRequestAliasNotDefined exception * api-ref: parameter verification for os-hypervisors * xen: skip two more racey mox py34 test classes * libvirt: handle reserved pages size * Fix nova-compute start failed when reserved\_huge\_pages has value * Make the base options definitions consistent * virt: set address space & CPU time limits when running qemu-img * Remove manual creation of console.log * Fix imagecache.get\_cache\_fname() to work in python3 * Remove policy checkpoints for SecurityGroupAPI and NetworkAPI * Remove policy checkpoints from ComputeAPI * Stop using mox from objects/test\_instance.py (3) * Stop using mox from objects/test\_instance.py (2) * Stop using mox from objects/test\_instance.py (1) * Fix wrong patch of unittest in unit/test\_metadata.py * Remove code referencing inventory table in cell DB * Handle SetAdminPasswdNotSupported raised by libvirt driver * Prevent boot if ephemeral disk size > flavor value * [libvirt] 
Incorrect parameters passed to migrateToURI3 * Revert inventory/allocation child DB linkage * Only chown console log in rescue * Don't chown a config disk which already exists * Don't overwrite config disk when using Rbd * Add 'update' method to GlanceImageServiceV2 * Add 'create' method to GlanceImageServiceV2 * Add 'detail' method to GlanceImageServiceV2 * Add 'delete' method to GlanceImageServiceV2 * Add 'download' method to GlanceImageServiceV2 * Add 'show' method to GlanceImageServiceV2 * Split the glance API path based on config * Remove image\_meta * add "needs:\*" tags to the config option modules * api-ref method verification for os-cells * API change for verifying the scheduler when live migrating * Stop using mox stubs in volume/encryptors/test\_base.py * Introduce a CONF flag to determine glance client version * fix a typo in comment * Fix white spaces in api-ref * Updated from global requirements * virt/hardware: Add diagnostic logs for scheduling * Use assertNotIn instead of assertTrue(all(A != B)) * Use assert(Not)Equal instead of assertTrue(A == X) * Use assertLess(Equal) instead of assertTrue(A > X) * Use assertGreater(A, X) instead of assertTrue(A > X) * Fall back to flat config drive if not found in rbd * libvirt: Fix the content of "disk.config" lost after migrate/resize * remove /v2.1/{tenant\_id} from all urls * Remove "or 'reserved'" from \_create\_volume\_bdm * pci: Move PCI devices and PCI requests into migration context * Updated from global requirements * Fixes invalid uuid usages in test\_neutronv2 * Clarify message for Invalid/Bad Request exception * Cancelled live migration are not in progress * set wrap\_width for config generator to 80 * API change for verifying the scheduler when evacuating * Fix invalid uuid warnings in virt testcases

14.0.0.0b1
----------

* Remove mox from nova/tests/unit/virt/libvirt/test\_utils.py * Fix multipath iSCSI encrypted volume attach failure * libvirt: add "get\_job\_info" to Guest's object * Modify 'an network' to 'a network' * Remove legacy v2 API code completely * Remove the usage of RateLimitingMiddleware * Remove unused inner\_app\_v21 and ext\_mgr * Remove legacy API code from sample tests * Remove InstanceUsageAuditLogTest for legacy API * Change instance\_claim parameter from instance\_ref to instance * Make AggregateList.get\_ return API & cell db items * Make Aggregate.get operation favor the API db * Add aggregates tables to the API db * Microversion 2.28 changes cpu\_info string to JSON object * libvirt: Skip CPU compatibility check for emulated guests * Specify the default cdrom type "scsi" for AARCH64 * Remove mox from nova/tests/unit/test\_iptables\_network.py * Updated from global requirements * pci: Make sure PF is 'available' when last VF is freed * pci: related updates are done without DB lookups * pci: make sure device relationships are kept in memory * Remove mox from nova/tests/unit/virt/libvirt/test\_vif.py * verify api-ref os-migrations.inc * Nova UTs broken due to modifying loopingcall global var * Remove mox from unit/api/openstack/compute/test\_consoles.py * Stop using mox from virt/libvirt/storage/test\_lvm.py * Update functional tests for fixtures 3 * Stop using mox in test\_firewall * Add tests to attach/detach vols for shelved server * Remove unused \_vlan\_is\_disabled test flag * libvirt: New configuration classes to parse device address element * Fixed clean up process in confirm\_resize() after resize/cold migration * VMware: remove dead code in test\_get\_vm\_create\_spec() * Remove mox from
compute/test\_scheduler\_hints.py * Updated from global requirements * Remove normal API operation logs from API layer * Remove unused LOG from v2.1 API code * Adds RemoteFX support to the Hyper-V driver * libvirt: fix serial ports lost after hard-reboot * Stop using mox stubs in test\_server\_usage.py * Remove mox from compute/test\_instance\_usage\_audit\_log.py * api-ref: os-consoles.inc * Add proxy middleware to application pipeline * api-ref: Example verification for os-interface.inc * Remove redundant orphan instances unit test * Remove duplicate migration RT unit tests * Redundant test of CPU resources in test\_tracker * Remove duplicate test of RT.stats.current\_workload * Remove duplicate test of claim context manager * Remove pointless "additive claims" unit test * Remove oversubscribe test in test\_resource\_tracker * api: Improve the \_check\_multiple\* function names readability * api-ref verify servers-action-deferred-delete.inc * Fix the order of expected error codes * Remove DictCompat from NetworkRequest * api-ref: Add a sample test for os-interface * Use oslo\_log instead of logging * Verify requested\_destination in the scheduler * Add requested\_destination field to RequestSpec * Remove mox from compute/test\_extended\_ips\_mac.py * Ironic nodes with instance\_uuid are not available * Updated from global requirements * Fixes python 3 urllib quote / unquote usage * Make compute nodes update their own inventory records * Remove unused WsgiLimiter * Remove unused args from RateLimitingMiddleware * Remove unused use\_no\_auth from wsgi\_app\_v21() * Fix incorrectly named vmwareapi test * Make Inventory and ResourceProvider objects use the API DB instead * Rename ImageCacheManager.\_list\_base\_images to \_scan\_base\_images * Remove all references to image\_popularity from image cache * Remove image cache image verification * Fix test\_age\_and\_verify\_swap\_images * api and availablity\_zone opt definition consistent * Rename Image.check\_image\_exists to Image.exists() * Remomve mox from api/openstack/compute/test\_console\_output.py * Remove mox from api/openstack/compute/test\_config\_drive.py * VMware: set service status based on vc connection * Return 400 HTTP error for invalid flavor attributes * Get transport\_url from config in Cells v2 cell map utility * Support for both microversion headers * Fix unit test after the replace of key manager * Fix "KeyError: u'instance\_id'" in string format operation * Save all instance extras in a single db call * Remove APIRouter of legacy v2 API code * Remove legacy v2 API tests which use wsgi\_app() * limits.inc example verification * Remove duplicate unit test in test\_tracker * Remove delete stubs in test\_resource\_tracker * Remove service crud from test\_resource\_tracker * Remove conductor from test\_resource\_tracker * Remove StatsDicTestCase from test\_resource\_tracker * rt-unit: Replace hard-coded strings with constants * Remove useless test of incorrect stats value * Remove RT duplicate unit test for PCI stats * Remove more duplicate RT unit tests * Removes test\_claim\_saves\_numa\_topology() * objects: added 'os\_secure\_boot' property to ImageMetaProps object * Trivial: Fixes serial console minor nits * Revert "glance:add helper method to get client version" * Add length check in comparing object lists * Update Support Matrix * Improve the help text for the rdp options * No disable reason defined for new services * api-ref: limits.inc validate parameters * Make available to build docs with python3 * Updated from 
global requirements * remove db2 support from tree * Adds Hyper-V imagecache cleanup * raise exception ComputeHostNotFound if host is not found * Skip instance name templating in API cell * Add http\_proxy\_to\_wsgi to api-paste * Stop using mox stubs in test\_pipelib.py * api-ref: Parameter verification for os-interface.inc * devspec: remove unused VIRTFN\_RE and re * Remove duplicate test of set inst host/node * Remove SchedulerClientTrackerTestCase * Move unit tests of set\_instance\_host\_and\_name() * Remove MissingComputeNodeTestCase for res tracker * Remove tests for missing get\_available\_resource() * api-ref, os-fping.inc * Pass OS\_DEBUG to the tox test environment * Hyper-V: Implement nova rescue * Add resource provider tables to the api database * HyperV: Nova serial console access support * Let setup.py compile\_catalog process all language files * use\_neutron\_default\_nets: StrOpt ->BoolOpt * api-ref: Add fault parameter details * be more explicit that rate limits are gone in v2.1 * Warn when using null cache backend * Enable 'null' value for user\_data in V2.1 API * Updated from global requirements * fix Quota related error return incorrect problem * Add online migration to move keypairs from main to API database * Completed migrations are not "in progress" * Make flavor-manage api call destroy with Flavor object * Move is\_volume\_backed\_instance to compute.utils * Updated from global requirements * api-ref: verify flavors.inc * Fix use of invalid assert calls * Config options: remove import\_opts from cloudpipe section * Enables Py34 tests for unit.api.openstack.compute.test\_server\_tags * Fix the versions API for api-ref * Update link for hypervisor support matrix message * api-ref: complete verification of baremetal api * Keep BuildRequest db entry around longer * Drop fields from BuildRequest object and model * Resize API operation passing down original RequestSpec * Augment release note for import\_object\_ns removal * pci: add safe-guard to \_\_eq\_\_ of PciDevice * deprecate config option "fatal\_exception\_format\_errors" * config options: centralize exception options * libvirt: Add serial ports to the migration data object * Hyper-V: Fixes disk overhead claim issue * Config options: move set default opt of db section to centralized place * [Trivial] Fix a grammar error in comments * api-ref: Example verification for servers-action-shelve.inc * [Ironic] Correct check for ready to deploy * api-ref: Fix parameters in servers-action-shelve.inc * api-ref: parameter verification for os-server-groups * api-ref: servers-action-evacuate.inc * remove FlavorCreateFailed exception * Add tests for floating\_ip private functions * Trivial: remove os-security-groups needs:method\_verification line * Add RC file for excluding tempest tests for LVM job * Move config options from nova/api directory (5) * libvirt: add method to configure max downtime when migrating * libvirt: add "abort\_job" to Guest's object * libvirt: add method "migrate" to Guest's object * Only attempt to inject files if the injection disk exists * Remove deprecated option libvirt.remove\_unused\_kernels * Rename Raw backend to Flat * deprecate s3 image service config options * Cold migrate using the RequestSpec object * Add a RequestSpec generation migration script * Enables Py34 tests for unit.compute.test\_compute * Fixes invalid uuid usages in functional tests * Make neutronapi get\_floating\*() methods return objects * Switch api unit tests to use v2.1 API * Remove mox used in 
tests/unit/api/openstack/compute/test\_server\_start\_stop * Remove marker from nova-manage cells\_v2 map\_instances UI * api-ref: complete verification for os-flavor-access * Make some build\_requests columns nullable * Add message queue switching through RequestContext * trivial: remove unused argument from a method * baseproxy: stop requiring CONF.verbose * Cleanup validation logic in \_get\_requested\_networks * api-ref: complete verification of servers-action-crash-dump.inc * migrate to os-api-ref * api-ref: image.inc - Update method validation * config options: centralize section "database" + "api\_database" * api-ref: parameter verification for os-quota-sets * Fix network mtu in network\_metadata * Add a note about egress rules to os-security-group-rules api-ref * ironic: fix call to \_cleanup\_deploy on config drive failure * Follow-up for the API config option patch * api-ref: reorder parameters.yaml * Network: fix typo * Add online migration to store keypairs with instances * Make Keypair object favor the API database * api-ref: ips.inc example verification * Fix spelling mistake in libvirt * Body Verification of os-aggregates.inc * Move placement api request logging to middleware * conf: Move cloudpipe options to a group * conf: Address nits in I92a03cb * Fix corrupt "host\_aggregates\_map" in host\_manager * Fix spelling mistake * api-ref: Example verification for os-volume\_attachments.inc * api-ref: Parameter verification for os-volume\_attachments.inc * Remove fake\_imagebackend.Raw and cleanup dependent tests * Remove unused arguments to images.fetch and images.fetch\_to\_raw * api-ref: finish validation for os-server-external-events.inc * report info if parameters are out of order * Method verification of os-floating-ips-bulk.inc * api-ref: os-volumes.inc method verification * config options: move s3 related options * deprecate "default\_flavor" config option * config options: centralize default flavor option * Return HTTP 400 on boot for invalid availability zone * Config options: remove import\_opts from completed section * Fix migration query with unicode status * Config options: centralize cache options * Change 5 space indent to 4 spaces * Remove deprecated "memcached\_server" in Default section * Updated from global requirements * Add a functional test for instance fault message with retry * api-ref: complete verification for extensions resource * live-migration ceph: fix typo in ruleset parsing * api-ref: os-floating-ip-dns.inc method verification * api-ref: Method verification for servers-actions * Eager load keypairs in instance metadata * Complete method verification of os-networks * Method verification of os-security-group-default-rules * virt: reserved number of mempages on compute host * deprecate "file transfer" feature for Glance images * centralized conf: nova/network/rpcapi.py * Config options: centralize remotefs libvirt options (end) * Config options: centralize smbfs libvirt options (16) * imagebackend: Check that the RBD image exists before trying to cleanup * Rewrite \_cleanup\_resize and finish\_migration unit tests to use mock instead of mox * Remove mox in test\_volume\_snapshot\_create\_outer\_success * api-ref: Method verification for os-volume\_attachments.inc * Improve the help text for the API options (4) * Improve the help text for the API options (3) * api-ref: ips.inc parameter verification * Add Keypairs to the API database * Create Instances with keypairs * Method verification for server-action-deferred-delete * method verification for 
server-action-remote-consoles * method verification of os-server-external-events * method verification of os-instance-usage-audit-log * Add keypairs to Instance object * Complete method verification of os-baremetal-nodes.inc * api-ref: parameter validation for os-security-group-rules * Fixed missing variable * api-ref: Method verification for os-floating-ips * force\_live\_migration remove redundant check * pci: create PCI tracker in RT.\_init\_compute\_node * Fix race condition for live-migration-force-complete * api-ref: servers-action-shelve.inc * Added fault response parameter to Show Server Details API * pci: Remove unused 'all\_devs' method * Corrected the typo * Denormalize personality extension * method verification of os-assisted-volume-snapshots * api-ref: os-certificates.inc method verification * Complete method verification of os-cloudpipe.inc * Fix service version to update the DB * method verification for servers-action-fixed-ip * Added new exception to handle CinderClientException * Drop paramiko < 2 compat code * Config options: centralize scality libvirt options (15) * Compute: Adds driver disk\_gb instance overhead estimation * config options: move image\_file\_url download options * crypto: Add support for Paramiko 2.x * Denormalize extensions for clarity * Complete method verification of os-fping * Complete method verification of os-security-group-rules * Fix invalid uuid warnings * Correct some misspell words in nova * Remove 404 for list and details actions of servers * Improve the help text for the API options (2) * Improve the help text for the API options (1) * Complete method verification of os-migrations * Move config options from nova/api directory (4) * api-ref: perform all 4 phases of verification for action console output * api-ref: add url parameter to expand all sections * api-ref: complete verification for diagnostics.inc * api-ref: update parameter validation on servers * Complete method verification of os-tenant-networks * trivial: removed unused networks var from os-tenant-networks:create * Complete method verification of os-security-groups * Move config options from nova/api directory (3) * Move config options from nova/api directory (2) * Move config options from nova/api directory (1) * api-ref: method verification and fixes for servers.inc * Instance mapping save, properly load cell mapping * Fix exception when vcpu\_pin\_set is set to "" * config: remove deprecated ironic.client\_log\_level * Complete method verification of os-quotas * Compelete method verification of os-servers-admin * Complete method verification of os-shevle * Add api-sample test for showing quota detail * Remove legacy v2 tests which use APIRouter * pci: eliminate DB lookup PCI requests during claim * pci: pass in instance PCI requests to claim * Remove rate\_limit param in builder * Remove comment on v3 API * Not talking about V2 API code in review doc guide * Add keypairs to instance\_extra * Trivial: No need to exclude TestMoveClaim from py34 tests * Remove 400 as expected error * Cleaned up request and response formats page * Complete method verification of os-agents * update servers policy in code to use formats * Complete method verification of os-fixed-ips * Consolidate image\_href to image uuid validation code * Fix TestNeutronv2.test\_deallocate\_for\_instance\_2\* race failures * Centralize config option for nova/network/driver.py * Don't raise error when filtering on custom metadata * Config options: centralize quobyte libvirt options (14) * Config options: 
centralize volume nfs libvirt options (13) * Config options: centralize volume net libvirt options (12) * Config options: centralize iser libvirt options (11) * Config options: centralize iscsi libvirt options (10) * Config options: centralize glusterfs libvirt options (9) * Config options: centralize aoe vol libvirt options (8) * Config options: centralize volume libvirt options (7) * Config options: centralize vif libvirt options (6) * Config options: centralize utils libvirt options (5) * Config options: centralize lvm libvirt options (4) * Remove legacy v2 unit tests[q-v] * Remove legacy v2 unit tests[f-n] * Remove Limits dependency of legacy v2 API code * Remove mox in unit/virt/xenapi/test\_agent.py * Set migration status to 'error' on live-migration failure * Add pycrypto explicitly * Centralize vif,xenpool & vol\_utils config options * Config options: centralize imagecache libvirt options (3) * Config options: centralize imagebackend libvirt options (2) * Remove the legacy v2 API entry from api-paste.ini * Update stable API doc to indicate code removal * Config options: centralize driver libvirt options (1) * UEFI - instance terminates after boot * Fix unit tests for v2.1 API * Remove legacy v2 unit tests[a-e] * Config options: Centralize servicegroup options * libvirt: release serial console ports when destroying guests * Remove mox from tests/unit/network/test\_api.py * Remove legacy v2 API functional tests * fix wrong key name in test code * Remove the legacy v2 API test scenarios from API sample tests * Remove 413 expect in servers.py * Remove core extension list * rt: remove unused image\_meta parameter * Fail to start nova-api if no APIs were able to be started * Test that nova-api ignores paste failures, but continues on * libvirt: introduces module to handle domain xml migration * Trivial: dead code * Fix database poison warnings, part 8 * docs: link to Laski's cells talk from the Austin summit * compute: Retain instance metadata for 'evacuate' on shared storage * Archive instance\_actions and instance\_actions\_event * Add os-interface functional negative tests * api-ref: verify os-server-groups.inc * Avoid unnessary \_get\_power\_state call * Remove mox in test\_certificates.py * api-ref: verfiy limits body * api-ref: body verification of ips.inc * Change message format of Forbidden * Updated from global requirements * api-ref verify of servers-admin-action.inc * pci: Allow to assign pci devices in pci device list * Fix typo in support-matrix.ini: re(set)=>(re)set * Add ability to filter migrations by instance uuid * Wrong mocks, wrong mock order * verify api-ref metadata.inc * verify api-ref os-server-password.inc * Updated from global requirements * Fix database poison warnings, part 7 * Declare nova.virt namespace * [doc] fix 5 typos * Make compute rpcapi 'live\_migration' backward compatible * Replace key manager with Castellan * Deprecate Nova Network * verify api-ref os-instance-usage-audit-log.inc * Only reset dns\_name when unbinding port if DNS is integrated * Changed the storage size from GB to GiB * Remove unused FAKE\_UUID variables * Deprecated the concept of extensions in v2.1 * Fix database poison warnings, part 6 * Fix database poison warnings, part 5 * Avoid unconditional warnings in nova-consoleauth * libvirt: remove version checks for hyperv PV features * libvirt: remove version checks for libvirt disk discard feature * libvirt: remove version checks for block job handling * libvirt: remove version checks for PCI device detach * libvirt: remove version 
checks for live snapshot feature * libvirt: add explicit check for min required QEMU version * libvirt: increase min required libvirt to 1.2.1 * network: Fix nova boot with multiple security-groups * Updated config description on live snapshot * Fix NoSuchOptError when referring to conf.neutron.auth\_plugin * api-ref host verification (os-hosts.inc) * api-ref verify os-floating-ip-pools.inc * Complete Verification of server-metadata * Complete method Verification of os-hypervisors * Fix invalid uuid warnings in compute api testcases * Fix invalid uuid warnings * complete Method Verification of aggregates * Complete Method Verification of ips * Fix resize to same host failed using anti-affinity group * Complete method Verification of consoles * Config options: Centralize netconf options * Remove 413 as expected error code * Complete Verification of os-server-password * Complete Verification of os-hosts * Add links to API guide to describe links * Complete Method Verification of os-interface * Complet Method Verification of flavor-access * Complete Verification of os-virtual-interfaces * Complet Method Verification of os-instance-actions * Complete Verification of os-flavor-extra-specs * Fix database poison warnings, part 4 * Complet Method Verification of flavor * Complet Method Verification of server group * Trivial: fix mock decorator order * Add test for nova-compute and nova-network main database blocks * Prevent nova-api from dying if enabled\_apis is wrong * Complet Method Verification of keypair * Complet Method Verification of availability-zone * Complet Method Verification of simple tenant usage * remove the use of import\_object\_ns * Fixed typo in word "were" * Complet Method Verification of os-services * Complet Method Verification of server diag * Remove mox in tests/unit/compute/test\_host\_api.py * Config options: completing centralize neutron options * Add instances into dict when handle exception * Complet Method Verification of limits * Improve the help text for the compute rpcapi option * Move config options from nova/compute/rpcapi.py file * Updated from global requirements * deprecate nova-all * Remove unused base\_options param from \_get\_image\_defined\_bdms * Change BuildRequest to contain a serialized instance * Split out part of map\_cell\_and\_hosts to return a uuid * Add manage command for cell0 * Config options: centralize section "ssl" * config options: centralize security\_group\_api opt * Imported Translations from Zanata * Stop using mox stubs in test\_multinic.py * libvirt: deprecate use\_usb\_tablet in favor of pointer\_model * Config options: Centralize neutron metadata options * add tags to files for the content verification phase * Config options: Centralize compute options * Add 415 to list of exceptions for microversions devref * Added validation for rescue image ref * Final warnings removals for api-ref * Clean port dns\_name in case of port detach * Fix remaining json reference warnings * Add validations for volume\_size and destination\_type * Remove duplicate api ref for os-networks/actions * Fix all remaining sample file path * Stop using mox stubs in test\_access\_ips.py * Stop using mox stubs in test\_admin\_password.py * libvirt - Add log if libguestfs can't read host kernel * Fix sample file path for 4 files * Fix invalid uuid warnings in objects testcases * Fix invalid uuid warnings in server-group unit tests * Create image for suspended instance booted from volume * Fix content and sample file for keypair, migration, networks * Fix sample 
file path for os-i\* API * Fix the parameters for os-agents API * Fix sample file path for fixed, floating ips API * Fix sample path for aggregate, certificate, console * Add remaining image API ref * Fix the schema of assisted\_volume\_snapshots * config options: conductor live migrate options * xenapi: Fix xmlrpclib marshalling error * fix samples references in security group files * fix samples references in os-services * Fix api samples references in 3 more files * Fix reverse\_upsize\_quota\_delta attempt to look up deleted flavors * Fix api ref for os-hosts, os-quota-sets and os-fping * Fix api ref for os-cells, os-cloudpipe and server-action-shelve * Fix api sample references in 2 more files * Updated from global requirements * hardware: thread policy default value applied even if specified * Fix api ref for ips, limits, metdata and agent * virt: use more realistic fake network / VIF data * Fix json response example heading in api ref * Fix database poison warnings, part 3 * Remove 40X and 50X from Normal response codes * Specify normal status code on os-baremetal-nodes * Remove unused rotation param from \_do\_snapshot\_instance * Remove unused filter\_class\_names kwarg from get\_filtered\_hosts * Remove deprecated ability to load scheduler\_host\_manager from path * Fix "Creates an aggregate" parameters * Unavailable hosts have no resources for use * HyperV: Add SerialConsoleOps class * HyperV: Add serial console handler class * HyperV: Add serial console proxy * fix samples references for 2 files * Update servers.inc to be as accurate as api-site * Fix database poison warnings, part 2 * Fix "Creates an agent build" parameters * Update get\_by\_project\_id on InstanceMappingList * Clean up cell handling in nova-manage cell\_v2 map\_instances * Properly clean up BDMs when \_provision\_instances fails * clean up versions.inc reference document * Collection of CSS fixes * Fixes unexpectedly passing functional test * move sphinx h3 to '-' instead of '^' * fix blockquote font size * Add 'Show All' / 'Hide All' toggle * use 'required' instead of 'optional' for parameters * Fix css references to the glyphicons font * Initial use of microversion\_parse * Changed an HTTP exception to return proper code * Compute API: omit disk/container formats when creating images of snapshots * Fix formatting of rst in parameters.yaml * Add instance/instance\_uuid to build\_requests table * network: make nova to handle port\_security\_enabled=False * BaseCoreFilter docstring and formating improved * Fix NoMoreNetworks functional test traces * Fix typo in nova release notes * Updated from global requirements * Fix generation of Guru Meditation Report * Fix invalid uuid warnings in cell api testcases * cleanup some issues in parameters.yaml * Import RST files for documentation * add combined parameters.yaml file * claims: Do not assume image-meta is a dict * Fix nova opts help info * Fix doc build if git is absent * Add checks for driver attach\_interfaces capability * Updated from global requirements * Add AllServicesCurrent fixture * Improve the help text for the linuxnet options (3) * Improve the help text for the linuxnet options (2) * Fix signature of copy\_image * libvirt: remove live migrate workaround for an unsupported ver * libvirt: move graphic/serial consoles check to pre\_live\_migration * Fix invalid uuid warnings in api testcases * Minor updates to the how\_to\_get\_involved docs * Put more into compute.api.\_populate\_instance\_for\_create * Remove unused parameter from 
\_get\_requested\_instance\_group * Improved test coverage * Check API versions intersects * virt/hardware: Fix 'isolate' case on non-SMT hosts * Migrate compute node resource information to Inventory objects * Drop compute node uuid online migration code * increase error handling for dirty files * config options: centralize 'spice' options * Fix max concurrent builds's unlimited semaphore * VMware: add in context for log messages * XenAPI: specify block size for writing config drive * Fix database poison warnings * Make swap-volume an admin-only API by default * Updated from global requirements * Improve the help text for the linuxnet options (1) * Config options: Centralize network options * Config options: centralize base path configuration * Add new NeutronFloatingIP object * Add "\_\_repr\_\_" method to class "Service" * remove alembic from requirements.txt * Config options: centralize section "xvp" * Imported Translations from Zanata * Updated from global requirements * allow samples testing for PUT to not have a body * libvirt: delete the last file link in \_supports\_direct\_io() * db: retry instance\_info\_cache\_update() on deadlock * Moved tags filtering tests to TestInstanceTagsFiltering test case * Move config options from nova/network/linux\_net.py * Remove nova-manage service subcommand * config options: centralize quota options * DB API changes for the nova-manage quota\_usage\_refresh command * Improve the help text for the network options (1) * Fix typo in compute node mega join comments * Add api-ref/build/\* to .gitignore * Improve help text for the network object options * Config options: Centralize console options * Config options: Centralize notification options * Remove mox from tests/unit/network/security\_group/test\_neutron\_driver.py * Added server tags support in nova-api * Added server tags controller * Added db API layer to add instance tag-list filtering support * Improve 'workarounds' conf options documentation * Config options: centralize "configdrive" options * config options: centralize baseproxy cli options * Check if a exception has a code on it before read the code * Fix import statement order in nova/rpc.py * Document our policy on fixing v2.0 API bugs * Config options: Centralize neutron options * Remove mox from tests/unit/compute/test\_compute\_xen.py * Fix typo in comments of affinity and anti-affinity * Fix up online\_data\_migrations manage command to be consistent * Adds missing discoverable rules in policy.json * Config options: Centralize ipv6 options * config options: centralize xenserver vmops opts * Config options: Centralize xenapi driver options * config options: centralize xenserver vm\_utils opts * Remove flavor seeding from the base migration * Rely on devstack to skip rescue tests for cells v1 * Replace topic with topics for messaging.Notifier * Updated from global requirements * Fix test for empty policy rules * Improve 'monkey\_patch' conf options documentation * conf: Remove 'destroy\_after\_evacuate' * config options: Move crypto options into a group * config options: centralize section: "crypto" * config options: Centralise 'monkeypatch' options * config options: Centralise 'utils' options * doc: clean up oslo-incubator related stuff * config option generation doesn't work with a generator * Add link to the latest nova.conf example * Change the nova tempest blacklist to use to idempotent ids * HyperV: Refactor livemigr, avoiding getting disk paths remotely * Remove DictCompat from mapping objects * Enhance value check for 
option notify_on_state_change
* Fix flavor migration tests and edge case found
* config options: Centralize upgrade_levels section
* config options: Centralize mks options
* Remove DictCompat from S3 object
* config options: Centralize vmware section
* config options: centralize section "service"
* Define context.roles using base class
* TrivialFix: removed unnecessary cycle in servicegroup/test_api.py
* Handle pre-migration flavor creation failures in the crusty old API
* config options: centralize section "guestfs"
* config options: centralize section "workarounds"
* config options: Centralize 'nova.rpc' options
* Cleanup NovaObjectDictCompat from BandwidthUsage
* config options: fix the missed cli options of novncproxy
* Add metadata objects for device tagging
* Nuke cliutils from oslo-incubator
* libvirt: pci detach devices should use dev.address
* Fix stale file handle error in resource tracker
* Updated from global requirements
* config options: Centralize xenapi torrent options
* Fix: unable to delete instance when cinder is down
* Block flavor creation until main database is empty
* Further hack up the n.t.unit.db.fakes module of horribleness
* Add flavor migration routine
* Make Flavor create() and destroy() work against API DB
* Move config options from nova/objects/network.py
* Add tag column to vifs and bdm
* Remove extensible resource tracking
* Fix error message of nova baremetal-node-delete
* Enhanced error handling for rest_parameters parser
* Fix not supported error message
* config options: Centralise 'image_file_url' options
* neutron: Update the port with a MAC address for PFs
* Remove mox from tests/unit/network/test_rpcapi.py
* Remove mox from tests/unit/objects/test_migration.py
* The 'record' option of the WebSocketProxy should be string
* config options: centralize section: "glance"
* Move resource provider staticmethods to proxies
* Add Service.get_minimum_version_multi() for multiple binaries
* remove the ability to disable v2.1
* Make git clean actually remove covhtml
* Set 'libvirt.sysinfo_serial' to 'none' in RealTimeServersTest
* Make compute_node_statistics() use new schema
* remove glance deprecated config
* Config options: Centralize consoleauth options
* config options: centralize section "cloudpipe"
* After migrate in-use volume the BDM information lost
* Allow to update resource per single node
* pci: Add utility method for getting the MAC addr

13.0.0
------

* Imported Translations from Zanata
* VMware: Use Port Group and Key in binding details
* Config options: Centralize resource tracker options
* Fixed incorrect behavior of xenapi driver
* Remove DictCompat from ComputeNode
* config options: Centralise 'virt.imagecache' options
* neutron: pci_request logic considers 'direct-physical' vnic type
* config options: remove the scheduler import_opt()s
* Improve the help text for hyperv options (3)
* Improve the help text for hyperv options (2)
* Improve the help text for hyperv options (1)
* Imported Translations from Zanata
* Remove a redundant 'that'
* Cleanup NovaObjectDictCompat from NumaTopology
* Fix detach SR-IOV when using LibvirtConfigGuestHostdevPCI
* Stop using mox in test_security_groups
* Cleanup the exception LiveMigrationWithOldNovaNotSafe
* Add sample API content
* Create api-ref docs site
* Config options: Centralize debugger options
* config options: centralize section: "keymgr"
* libvirt: fix ivs test to use the ivs vif object
* libvirt: pass a real instance object into vif plug/unplug methods
* Add a vnic type for PF passthrough and a new libvirt vif driver
* libvirt: live_migration_flags/block_migration_flags default to 0
* Imported Translations from Zanata
* config options: Centralize xenapi options
* Populate instance_mappings during boot
* libvirt: exercise vif driver 'plug' method in tests
* config options: centralize xenserver options
* Fix detach SR-IOV when using LibvirtConfigGuestHostdevPCI
* Reduce number of db calls during image cache manager periodic task
* Imported Translations from Zanata
* Update cells blacklist regex for test_server_basic_ops
* Update cells blacklist regex for test_server_basic_ops
* Remove mox from tests/functional/api_sample_tests/test_cells.py
* Remove mox from tests/unit/api/openstack/compute/test_baremetal_nodes.py
* Config options: Centralize ldapdns options
* Add NetworkRequestList.from_tuples helper
* Stop providing force_hosts to the scheduler for move ops
* Enforce migration tests for api database
* Objectify test_flavors and test_flavors_extra_specs
* Allow ironic driver to specify cafile
* trivial: Fix alignment of wsgi options
* config options: Remove 'wsgi_' prefix from opts
* VMware: Always update image size for sparse image
* VMware: create temp parent directory when booting sparse image
* VMware: Use datastore copy when the image is already in vSphere
* Imported Translations from Zanata
* Fix typos in document
* Removes some redundant words
* Stop providing force_hosts to the scheduler for move ops
* Include CellMapping in InstanceMapping object
* Make flavor extra_specs operations work against the API DB
* Make Flavor access routines work against API database
* Clarify the ``use_neutron`` option upgrade notes

13.0.0.0rc2
-----------

* Imported Translations from Zanata
* Try to repopulate instance_group if it is None
* Try to repopulate instance_group if it is None
* modify duplicate // to / in doc
* change host to host_migration
* Fixup test_connection_switch functional test
* Fix SAWarning in _flavor_get_by_flavor_id_from_db
* Update 'os-hypervisors.inc' in api-ref
* Fix os-server-groups.inc
* cinder: accommodate v1 cinder client in detach call
* Move config options from nova/network/manager.py
* Change adminPass for several server actions
* Fix os-virtual-interfaces and flavors api-ref
* Make FlavorList.get_all() return results from the API and main DBs
* Objectify some tests in test_compute and test_flavors
* Objectify test_instance_type_extra_specs
* Add a DatabasePoisonFixture
* config options: Use OptGroup for listing options
* Live migration failure in API leaves VM in MIGRATING state
* Fix flavor-access and flavor-extras api-ref
* Fix diagnostics, extensions api ref
* Fix typo 'mappgins' to 'mappings'
* Imported Translations from Zanata
* Fix hosts and az api samples
* Change "libvirt.xml" back to the original after doing unrescue
* Fix os-service related reference missing
* Add 'binary' and 'disable-reason' into os-service
* Remove unused argument v3mode
* Clean up the TestGlanceClientWrapper retry tests
* stop setting mtu when plugging vhost-user ports
* config options: Move wsgi options into a group
* Rewrite 'test_filter_schedule_skipping' method using Mock
* Remove stub_compute config options
* Added missing "=" in debug message
* libvirt: serial console ports count upper limit needs to be checked
* Imported Translations from Zanata
* Return 400 on boot for invalid image metadata
* Fix JSON format of server_concepts
* Remove /v1.1 endpoint from api-guide
* config options: centralize section: "rdp"
* Fixes hex decoding related unit tests
* Fix conversion of config disks to qcow2 during resize/migration
* xenapi: Fix when auto block_migration in the API
* xenapi: Fix up passing of sr_uuid_map
* xenapi: Fix the live-migrate aggregate check
* Add rebuild action descriptions in support-matrix
* Config options: centralize section "hyperv"
* Removal of unnecessary `import_opt`s for centralized config options
* Imported Translations from Zanata
* Fixes bug with notify_decorator bad getattr default value
* config options: centralize section "monitors"
* config options: Centralise floating ip options
* Fix API Error on hypervisor-uptime API
* VMware: make the opaque network attachment more robust
* Add functional test for v2.7
* avoid microversion header in functional test
* Add backrefs to api db models
* Update reno for stable/mitaka
* stop setting mtu when plugging vhost-user ports
* Removes redundant object fields
* Blacklist TestOSAPIFixture.test_responds_to_version in python3
* Fix conversion of config disks to qcow2 during resize/migration
* Remove auto generated module api documentation
* Imported Translations from Zanata
* Mark 2.25 as Mitaka maxmium API version
* Add a hacking check for test method closures
* Make Flavor.get operations prefer the API database
* xenapi: Fix when auto block_migration in the API
* xenapi: Fix up passing of sr_uuid_map
* Update to openSUSE versions
* xenapi: Fix the live-migrate aggregate check
* Error on API Guide warnings
* Add Newton sanity check migration
* Add placeholder migrations for Mitaka backports
* Update .gitreview for stable/mitaka
* Set RPC version aliases for Mitaka

13.0.0.0rc1
-----------

* Fix reno reverts that are still shown
* Wait for device to be mapped
* Add a prelude section for Mitaka relnotes
* Fix reno for RC1
* libvirt: Fix ssh driver to to prevent prompting
* Support-matrix of vmware for chap is wrong
* Imported Translations from Zanata
* Allocate free bus for new SCSI controller
* config options: centralize cinder options
* Add os-brick rootwrap filter for privsep
* Fix retry mechanism for generator results
* Add a cell and host mapping utility to nova-manage
* Add release note for policy sample file update
* Fix vmware quota extra specs reno formatting
* Avoid lazy-loads of ec2_ids on Instance
* Replace deprecated LOG.warn with LOG.warning
* libvirt: Allow use of live snapshots with RBD snapshot/clone
* Typo fix in documentation
* Redundant parentheses removed
* Trivial: Use exact docstring for quota module
* Replace deprecated LOG.warn with LOG.warning
* Revert "virt: reserved hugepages on compute host"
* Make tuple actually a tuple
* xenapi: Image cache cannot be disabled
* VMware: enable a resize of instance with no root disk
* fixed typo in word "OpenStack"
* hyper-v: Copies back files on failed migration
* Add functional test for OverQuota
* Translate OverLimit exceptions in Cinder calls
* Add regression test for Cinder 403 forwarding
* register the config generator default hook with the right name
* pci - Claim devices outside of Claim constructor
* Get instance security_groups from already fetched instance
* Use migrate_data.block_migration instead of block_migration
* Fix pre_live_migration result processing from legacy computes
* Add reno for disco driver
* linux_net: use new exception for ovs-vsctl failures
* Insure resource tracker updated for deleted instances
* VMware: use datacenter path to fetch image
* libvirt: check for optional LibvirtLiveMigrateData attrs before loading
* Change SpawnIsSynchronous fixture return
* Report instance-actions for live migration force complete API
* Add release notes for security fixes in 13.0.0 mitaka GA
* API: Raise up HTTPNotFound when no availabe while get_console_output
* libvirt: Comment non-obvious security implications of migrate code
* Update the doc of notification
* fixed log warning in sqlalchemy/api.py
* Add include_disabled parameter to service_get_all_by_binary
* Imported Translations from Zanata
* Set personality/injected_files to empty list if not specified
* Fix processing of libvirt disk.info in non-disk-image cases
* pci: avoid parsing whitelist repeatedly
* Add Forbidden to caught cinder exceptions
* Missing info_cache.save() in db sqlalchemy api
* tests: Add some basic compute_api tests for attaching volumes
* Clean up networks with SR-IOV binding on reschedule
* virt: refactor method compute_driver_matches
* Make force_ and ignore_hosts comparisons case insensitive
* xenapi: fix when tar exits early during download
* Address nits in I83a5f06ad
* Fix config generation for Neutron auth options
* Remove an unused method in FakeResourceTracker
* Rework 'limited' and 'get_limit_and_marker'
* plugins/xenserver: Resolve PEP8 issues
* Remove unused variable and redundant code path
* Soft delete instance group member when delete instance
* VMware: Refactor the formatting instance metadata
* Remove sizelimit.py in favor of oslo_middleware.sizelimit
* libvirt: make snapshots call suspend() instead of reimplementing it
* Use generic wrapper for cinder exceptions
* Add ppc64le architecture to some libvirt unit tests
* Add Database fixture to sync to a specific version
* Drop the use of magic openstack project_id
* Aggregate object fixups
* Address nits in Ia2296302
* Remove duplicated oslo.log configuration setup
* libvirt: Always copy or recreate disk.info during a migration
* nova-manage: Print, not raise, exceptions
* virt: reserved hugepages on compute host
* XenAPI:Resolve Nova/Neutron race condition
* Don't use locals() and globals(), use a dict instead
* update the deprecated ``security_group_api`` and ``network_api_class``
* [Ironic]Match vif-pif mac address before setting 'vif_port_id'
* Correct the wrong usage of 'format' jsonschema keyword in servers API
* Add ComputeNode and Aggregate UUID operations to nova-manage online migrations
* Extend FakeCryptoCertificate.cert_not_valid_after to 2 hours
* Revert "functional: Grab the service version from the module"
* libvirt: Fix resize of instance with deleted glance image
* Reno for libvirt libosinfo with OS
* Fix hyperv use of deprecated network_api_class
* Fix v2.12 microversion REST API history doc
* Add docstrings for nova.network.base_api.get_vifs_by_instance
* Style improvements
* Reno for Ironic api_version opt deprecation
* Release notes: online_data_migrations nova-manage command
* nova-manage: Declare a PciDevice online migration script
* test_fields: Remove all 'Enum' subclass tests
* Make test cases test_crypto.py from NoDBTestCase
* Ironic: remove backwards compatibility code
* Ironic: Use ironicclient native retries for connection errors
* RT: aborting claims clears instance host and NUMA info
* Provide correct connector for evacuate terminate
* Reset instance progress when LM finishes
* Forbid new legacy notification event_type
* VMware: Remove VMwareHTTPReadFile
* API: Mapping ConsoleTypeInvalid exception to HTTPBadRequest
* VMware: remove deprecation warnings from oslo_versionedobjects
* Reject empty-named AZ in aggregate metadata
* add checking for new image metadata property 'hw_cpu_realtime_mask'
* Remove unused methods in nova/utils.py
* Fix string interpolations at logging calls
* Generate better validation error message when using name regexes
* Return 400 for os-virtual-interfaces when using Neutron
* Dump metric exception text to logs
* Updated from global requirements
* Use SensitiveStringField for BlockDeviceMapping.connection_info
* Add index on instances table across deleted/created_at columns
* Tweak the resize_confirm_window help text
* Enable rebuild tests in cellsv1 job
* libvirt: clean up help text for live_migration_inbound_addr option
* Add release note for nova using neutron mtu value for vif plugging
* deprecate security_group_api config option
* update tests for use_neutron=True; fix exposed bugs
* deprecate ``volume_api_class`` and ``network_api_class``
* deprecate ``compute_stats_class`` config option
* Deprecate the ``vendordata_driver`` config option
* Deprecate db_driver config option
* deprecate manager class options
* remove default=None for config options
* Check 'destination_type' instead of 'source_type' in _check_and_transform_bdm
* Documentation fix regarding triggering crash dump
* Use db connection from RequestContext during queries
* Ironic: Clean up if configdrive build fails
* Revert "Generate better validation error message when using name regexes"
* Add unit tests for live_migration_cleanup_flags
* Replaced unittest and unittest2 to testtools
* Sample nova.conf file has missing/duplicated config options

13.0.0.0b3
----------

* Fix missing of unit in HostState.__repr__()
* Make InstanceMappings.cell_id nullable
* Create BuildRequest object during boot process
* Add BuildRequest object
* Api_version_request.matches does not accept a string or None
* Added Keystone and RequestID headers to CORS middleware
* Generate better validation error message when using name regexes
* XenAPI: introduce unit test for XenAPI plugins
* Abstract a driver API for triggering crash dump
* Fix evacuate support with Nova cells v1
* libvirt: don't attempt to get baseline cpu features if host cpu model is None
* Ensure there are no unreferenced closures in tests
* libvirt: set libvirt.sysinfo_serial='none' for virt driver tests
* libvirt: Add ppc to supported arch for NUMA
* Use new inventory schema in all compute_node gets
* Remove unused libvirt _get_all_block_devices and _get_interfaces
* Use new inventory schema in compute_node_get_all()
* Deprecate nova.hooks
* Adjust resource-providers models for resource-pools
* Fix Cells RPC API by accepting a RequestSpec arg
* API: Improve os-migrateLive input parameters
* Allow block_migration and disk_over_commit to be None
* Update time is not updated when metadata of aggregate is updated
* complete the removal of api_version from rest client parameters
* objects: add HyperVLiveMigrateData stub
* functional: Grab the service version from the module
* Added missed '-' to the rest api history doc
* Gracefully handle cancelling all events more than once
* Cleanup service.kill calls in functional tests
* Do not use constraints for venv
* VMware: Use actual VM state instead of using the instance vm_state
* Do not pass call_xenapi unmarshallable type
* check max_net_count against min_count when booting
* objects: Allow instance to reset the NUMA topology
* Mark 'network_device_mtu' as deprecated
* Add service binary/host to service is down log
for context * Abort an ongoing live migration * Add new APIs and deprecate old API for migrations * Deprecate conductor manager option * Xen: Calculate block\_migration if it's None * Libvirt: Calculate block\_migration if it's None * NUMATopologyFilter raise exception and not continue filter next node * Updated from global requirements * Add specific method to lazy-load instance.pci\_devices * Move logging outside of LibvirtConfigObject.to\_xml * Update the help for deprecated glance host/port/protocol options * Added missing execution of the test * Add build\_requests database table and model * Make db.aggregate\_get a reader not a writer * Remove an unnecessary variable in a unit test * Remove duplicate test case flavor\_create * Don't lazy-load instance.services if the instance is deleted * Add functional regression test for list deleted instances on v2.16 * Use constant\_time\_compare from oslo.utils * Remove APIRouterV3 * reduce pep8 requirements to just hacking * fix usage of opportunistic test cases with enginefacade * add regression test for bug #1541691 * Creates flavor\* tables in API database * Add test for unshelve in the conductor API * add a place for functional test to block specific regressions * make microversion a client level construct for tests * Allocate uuids for aggregates as they are created or loaded * bug and tests in 'instance\_info\_cache' * fix typo in comment * Fix conductor to \*really\* pass the Spec obj * Updated from global requirements * Catch iscsi VolumeDeviceNotFound when detaching * Add note about using OS-EXT-\* prefix for attribute naming * Remove use of \`list\` as variable name * resource-provider versioned objects * Fix networking exceptions in ComputeTestCase * Fix online\_data\_migrations() not passing context * Fix two bugs in online\_data\_migrations() * Make online\_data\_migrations do smaller batches in unlimited case * Use MTU value from Neutron in OVS/LB VIF wiring * tox: Remove 'oslo.versionedobjects' dependency * Fix API Guide doc * Add functional regression test for bug 1552888 * Fix an unnecessary interpolation * Change wording of microversion bump about 503 * Validate subs in api samples base class to improve error handling * Add a column for uuid to aggregate\_hosts * Hyper-V: Removes pointless check in livemigrationops * XenAPI: Fix VIF plug and unplug problem * Update ComputeNode values with disk allocation ratios in the RT * Update HostManager and DiskFilter to use ComputeNode disk ratio * Add disk\_allocation\_ratio to ComputeNode * config options: Centralise 'virt.disk' options * config options: Centralise 'virt.netutils' options * Improve 'virt.firewall' conf options documentation * config options: Centralise 'virt.firewall' options * Improve 'virt.images' conf options documentation * config options: Centralise 'virt.images' options * Update wrong comment * Fix misuse of assertTrue in console and virt tests * Failed migration shoudn't be reported as in progress * Fix missing of unit in debug info * always use python2.7 for pep8 * servicegroup: remove the zookeeper driver * Hacking: check for deprecated os.popen() * Log successful reverts\_task\_state calls * Hyper-V: os\_win related updates * Partial revert of ec2 removal patch * Fixed leaked UnexpectedMethodCallErrors in test\_compute * Unshelve using the RequestSpec object * Provide ReqSpec to live-migrate conductor task * Fix cell capacity when compute nodes are down * Fix misleading test name * Default "discoverable" policies to "@" * build smaller name regexes for 
validation * Add reno for block live migraton with cinder volumes * Remove support for integer ids in compute\_api.get * Add annotation to the kill() method * Add missing os types: suseGuest64/suseGuest * Hypervisor support matrix: add feature "trigger crash dump" * Update example policy.json to remove "" policies * Fixed arguement order in remove\_volume\_connection * Add better help text to scheduler options (7) * Add better help text to scheduler options (6) * RT: Decrese usage for offloaded instances * Allow saving empty pci\_device\_pools in ComputeNode object * Add StableObjectJsonFixture and use it in our base test class * nova-manage: Add hooks for running data-migration scripts * always use pip constraints * Update instance host in post live migration even when exception occurs * Use imageutils from oslo.utils * Remove duplicate key from dictionary * reset task\_state after select\_destinations failed * Pass bdm info to \_get\_instance\_disk\_info method * Fix create snapshot failure on VMs with SRIOV * Reorder name normalization for DNS * Allocate UUID for compute node * rpc.init() is being called twice per test * Use instance hostname for Neutron DNS unit tests * objects: Rename PciDevice \_migrate\_parent\_addr method * Use assertRaises() to check specific exception * libvirt: make live\_migration\_uri flag dependent on virt\_type * Remove unused CONF imports * Add /usr/local/{sbin,bin} to rootwrap exec\_dirs * write live migration progress detail to DB in migration monitor * Add migration progress detail in DB * Tolerate installation of pycryptodome * neutron: handle attach interface case with no networks * Move Disk allocation ratio to ResourceTracker * Updated from global requirements * HyperV: Fix vm disk path issue * Removal of unnecessary \`import\_opt\`s for cells config options * Fix 500 error for showing deleted flavor details * Fix \_compare\_result type handling comparison * neutron: remove redundant request.network\_id assignment * Fix reported ppc64le bug on video selection * Improve 'virt.driver' conf options documentation * Improve unit tests for instance multiple create * Change populate\_security\_groups to return a SecurityGroupList * Fix error message in imagebackend * config options: Centralise 'virt.driver' options * Avoid lazy-loading flavor during usage audit * resource\_providers, allocations and inventories models * Revert "Add new test\_rebuild\_instance\_with\_volume to cells exclude list" * Update the CONF import path for VNC * Improve 'vnc' conf options documentation * Remove discoverable policy from server:migrations resource * Improve the help text for cells options (6) * Improve the help text for cells options (5) * Improve the help text for cells options (4) * Improve the help text for cells options (3) * Improve the help text for cells options (2) * Allow block live migration of an instance with attached volumes * Implement an indexed ResourceClass Enum object * Add check to limit maximum value of max\_rows * Fix spelling mistake * Add methods for RequestContext to switch db connection * virt: osinfo will report once if libosinfo is not loaded * Replace eventlet-based raw socket client with requests * Add a tool for reserving migration placeholders during release time * libvirt: check for interface when detach\_interface fails * libvirt: implement LibvirtConfigGuestInterface.parse\_dom * Filter APIs out from services list * Config options: centralize options in conductor api * Improve the help text for cells options (1) * VMware: add release 
notes for the limits * Get a ReqSpec in evacuate API and pass it to scheduler * Fixes cells py3 unit tests * Fixes network py3 unit tests * Fixes Python 3 unit tests for nova.compute * Add new test\_rebuild\_instance\_with\_volume to cells exclude list * Add some obvious detail to nw\_info warning log * Fix fallocate test on newer util-linux * Remove \_create\_local function * Trivial logic cleanup in libvirt pre\_live\_migration * Return HTTP 400 for invalid server-group uuid * Properly inject network\_data.json in configdrive * enginefacade: remove 'get\_session' and 'get\_api\_session' * enginefacade: 'request\_spec' object * Add new API to force live migration to complete * Add new DB API method to retrieve migration for instance * Imported Translations from Zanata * Updated from global requirements * Sync L3Driver, NullL3 interface with LinuxNetL3 * Top 100 slow tests: api.openstack.compute.test\_api * Top 100 slow tests: api.openstack.compute.test\_versions * Top 100 slow tests: legacy\_v2.test\_servers * Top 100 slow tests: api.openstack.compute.test\_flavor\* * Top 100 slow tests: api.openstack.compute.test\_image\_size * Top 100 slow tests: api.openstack.compute.test\_volumes * Confusing typo fixed * doc: all\_tenants query option incorrectly identified as non-admin * Update driver support matrix for Ironic * parametrize max\_api\_version in tests * libvirt: Race condition leads to instance in error * Avoid lazy-loads in metadata requests * Join flavor when re-querying instance for floating ip association * Allow all api\_samples tests to be run individually * Make os-instance-action read deleted instances * enginefacade: 'flavor' * Updated from global requirements * Use instance hostname for Neutron DNS * libvirt: Make behavior of os\_require\_quiesce consistent * Split-network-plane-for-live-migration * Database not needed for most cells messaging tests * libvirt: use osinfo when configuring the disk bus * libvirt: use osinfo when configuring network model * Database not needed for test class: ConsoleAPITestCase * Database not needed for test class: ConductorImportTest * virt: adjusting the osinfo tests to use fakelibosinfo * Database not needed for RPC serializer tests * Database not needed for most crypto tests * Database not needed for most nova manage tests * ebtables/libvirt workaround * Test that new tables don't use soft deletes * Use instance in setup\_networks\_on\_host * enginefacade: test\_db\_api cleanup, missed decorators * Database not needed for test class: PciGetInstanceDevs * Add test coverage to functional api tests \_compare\_result method * Remove and deprecate conductor provider\_fw\_rule\_get\_all() * Remove prelude from disk-weight-sch reno * Enable volume operations for shelved instances * Gracefully handle a deleting instance during rebuild * remove the unnecessary parem of set\_vm\_state\_and\_notify * tests: adding fake libosinfo module * config options: Centralise 'vnc' options * config options: Make noVNC proxy into vnc group * Improve 'pci' conf options documentation * config options: centralize section "wsgi" * libvirt: deprecate live/block\_migration\_flag opts * Tidy up scheduler\_evolution.rst * config options: add hacking check for help text length * xrange() is renamed to range() in Python 3 * Do not use "file" builtin, but "open" instead * Fix some word spellings in messages * No need to have ironicclient parameter in methods * Add a TODO to make ComputeNode.cpu\_info non-nullable * Fix missing marker functions in nova/pci * Adding volume 
operations for shelved instances * Optimize Instance.create() for optional extra fields * Optimize servers path by pre-joining numa\_topology * Trivial: Remove a duplicated word * Update the home-page * Add better help text to scheduler options (5) * Switch to oslo.cache lib * Remove all remaining references to Quantum * doc: remove detail about extensions * Add description for trigger crash dump * Object: Give more helpful error message in TestServiceVersion * Spread allocations of fixed ips * Updated from global requirements * Stop using mox (scheduler) * Fix xvpvncproxy config path when running n-xvnc * Optimize the instance fetched by floating\_ips API * Improve efficiency of Migration.instance property * Prevent \_heal\_instance\_info\_cache() periodic lazy-loads * Revert "Added new scheduler filter: AggregateTypeExtraSpecsAffinityFilter" * Remove unused provider firewall rules functionality in nova * enginefacade: 'instance\_tags' * Apply scheduler limits to Exact\* filters * Fix typos in nova/scheduler and nova/virt * Replace exit() by sys.exit() * Trivial: Fix a typo in test\_policy.py * neutronv2: Allow Neutron to specify OVS/LB bridge * HyperV: do not log twice with different level * Replace stubs.Set with stub\_out (db) * Add a disk space weight-based scheduler * Fix up live-migration method docstrings * Libvirt: Support ovs fp plug in vhostuser vif * xenapi: simplify swap\_xapi\_host() * Allow sending the migrate data objects over the wire * Added new scheduler filter: AggregateTypeExtraSpecsAffinityFilter * Replace "all\_mappings" variable by "block\_device\_mappings" * Add better help text to scheduler options (4) * Migrate from keystoneclient to keystoneauth * fast exit dhcpbridge on 'old' * Ironic: Lightweight fetching of nodes * Fix RequestSpec \_from\_db\_object * doc:Ask reviews to reject new legacy notifications * Generate doc for versioned notifications * doc: add devref about versioned notifications * Adds json sample for the versioned notifications * relocate os\_compute\_api:servers:discoverable * libvirt: convert to use instance.image\_meta property * Updated from global requirements * doc: fix malformed api sample * Persist the request spec during an instance boot * Revise the compute\_upgrade\_levels\_auto release note * Adding guard on None value for some helpers method * Return HTTP 400 if volume size is not defined * API: Rearrange HTTPBadRequest raising in \_resize * remove the wrong param of fake\_db\_migration initiation * Enable all extension for server PUT API sample tests * Config options: centralize options in availability\_zones * We now require gettext for dev environments * Revert "Pass host when call attach to Cinder" * update feature support matrix documentation * Config options: centralize section "cells" * Use uuidsentinel in host\_status test * remove not used tpl * Return 409 instead of 503 when cidr conflict * releasenotes: Note on CPU thread pinning support * Use extra\_data\_func to get fingerprints of objects * Use stevedore for scheduler driver * Use stevedore for scheduler host manager * Enables conductor py3 unit tests * REST API changes for user settable server description * Use get\_notification\_transport() for notifications * Stop using stubs.Set in vmwareapi unit tests * Add tests for nova.rpc module * libvirt: check min required qemu/kvm versions on ppc64/ppc64le * VMware: Handle image size correctly for OVA and streamOptimized images * enginefacade: 'instance\_group' * enginefacade: 'floating\_ip' * enginefacade: 'compute\_node' * 
enginefacade: 'service'
* Hyper-V: Trace original exception before converting exception
* Fixed incorrect names/comments for API version 2.18
* Remove mox from tests/unit/objects/test_keypair.py
* API: Remove unexpected from errors get_console_output
* Updated from global requirements
* Fix docstrings for sphinx
* Make project_id optional in v2.1 urls
* remove not used tpl file
* Log retries at INFO level per guidelines
* make logic clearer about template selection
* Add ITRI DISCO os-brick connector for libvirt
* Fix misleading comment of pci_stats
* cleanup: remove python 2.6 compat assertNotIsInstance
* Add better help text to scheduler options (3)
* (lxc) Updated regex to ignore failing tests
* Add better help text to scheduler options (2)
* Add better help text to scheduler options (1)
* Note in HypervisorSupportMatrix for Libvirt/LXC shutdown kernel bug
* Ceph for live-migration job
* enginefacade: 'security_group'
* enginefacade: 'instance'
* enginefacade: 'fixed_ip'
* enginefacade: 'quota' and 'reservation'
* Python3: Replace dict.iteritems with six.iteritems
* Updated from global requirements
* Object: Fix wrong usage migrate_data_obj
* _can_fallocate should throw a warning instead of error
* VMware: no longer convert image meta from dict to object
* cleanup: add comments about the pre/post extension processing
* cleanup: remove custom serializer support
* Add description for server query
* remove docs about format extensions
* Remove catching of ComputeHostNotFound exception
* Return empty object list instead []
* cleanup: remove configurable action_peek
* libvirt: use native AIO mode for cinder volumes
* libvirt: use native AIO mode for image backends
* Issue an info log msg when port quota is exceeded
* Validate translations
* Imported Translations from Zanata

13.0.0.0b2
----------

* doc: add client interactive guideline for microversions
* doc: add version discovery guideline in api concept doc
* doc: completes microversion use-cases in api concept doc
* Fix indents of servers-detail-resp.json
* libvirt: make snapshot use RBD snapshot/clone when available
* Improve the help text for the cert options
* cleanup: remove infrastructure for content/type deserializer
* Pass host when call attach to Cinder
* Pass attachment_id to Cinder when detach a volume
* libvirt: Fix/implement revert-resize for RBD-backed images
* Added super() call in some of the Model's child
* enginefacade: 'ec2_instance' and 'instance_fault'
* cleanup: collapse wsgi serializer test hierarchy
* Add service status notification
* cleanup: remove wsgi serialize/deserialize decorators
* enginefacade: 'block_device_mapping'
* Fix invalid import order
* Add a REST API to trigger crash dump in an instance
* libvirt: adding a class to retrieve hardware properties
* virt: introduce libosinfo library to set hardware policy
* pci: changing the claiming and allocation logic for PF/VF assignment
* pci: store context when creating pci devices
* Make emitting versioned notifications configurable
* Add infra for versioned notifications
* Make sure that we always have a parent_addr set
* change set_stubs to use stub_out in vmwareapi/stubs.py
* Add note to ComputeNode.numa_topology
* Reno for lock policy
* Clean up nova/conf/scheduler.py
* Reno for Xen rename
* config options: Make xvp proxy into vnc group
* XenAPI: Fix race on rotate_xen_guest_logs
* Add exception handling in _cleanup_allocated_network
* hardware: check whether realtime capable in API
* Remove releasenotes/build between releasenotes
runs * Add python3\* packages to development quickstart guide * Make sure full stack trace is logged on RT update failure * Changed filter\_by() to filter() during filtering instances in db API * config options: Centralise PCI options * HyperV: Set disk serial number for attached volumes * Use "regex" of StrOpt to check option "port\_range" * enable uefi boot * VMware: convert to use instance.image\_meta property * Config drive: convert to use instance.image\_meta property * Use of six.PY3 should be forward compatible * Add host\_status attribute for servers/detail and servers/{server\_id} * Revert "Workaround reno reverts by accepting warnings" * Adds relase notes for soft affinity feature * libvirt: handle migrate\_data as object in cleanup method * Create filter\_properties earlier in boot request * Parse availability\_zone in API * Add object and database support for host\_status API * Workaround reno reverts by accepting warnings * ports & networks gather should validate existance * objects: add virtual 'image\_meta' property to Instance object * compute: convert manager to use nova.objects.ImageMeta * Replace stubs.Set with stub\_out (os) * Fix Mock assert\_called\_once\_with() usage * ServerGroupsV213SampleJsonTest should actually test v2.13 * Move config options from nova/cert directory * Remove dead code from reserve\_block\_device\_name rpcapi * Adapt the code to the new get\_by\_volume BDM functions * Fix undetected races when getting BDMs by volume id * Fix instance not destroyed after successful evacuation * Use TimeFixture from oslo\_utils in functional tests * Fix indexing of dict.keys() in python3 * libvirt: add a new live\_migration\_tunnelled config * libvirt: force config related migration flags * libvirt: force use of direct vs p2p migration * libvirt: force use/non-use of NON\_SHARED\_INC flag * libvirt: parse live migration flags at startup * enginefacade: 'aggregate' * Add helper shim for getting items * hacking: check for common double word typos * Fix backing file detection in libvirt live snapshot * trivial: Add additional logs for NUMA scheduling * Add 'hw:cpu\_threads\_policy=isolate' scheduling * Replaces itertools.izip with six.moves.zip * Clean up network resources when reschedule fails * Replace stubs.Set with stub\_out (fakes) * Add maximum microversions for each releases * Remove "or 'reserved'" condition from reserve\_block\_device\_name * live-migration hook ansible 2.0 compaitability * update min tox version to 2.0 * pci: adding support to specify a device\_type in pci requests * Block flaky python34 test : vmwareapi.test\_configdrive.ConfigDriveTestCase * Actually pass the migration data object down to the virt drivers * nova conf single point of entry: fix error message * Fix sphinx warnings from signature\_utils * Sets binding:profile to empty dic when unbinding port * Use timedelta.total\_second instead of calculating * Use stub\_out and mock to remove mox:part 3 * Replaces \_\_builtin\_\_ with six.moves.builtins * Remove mm-ctl from network.filters * Add mm-ctl to compute.filters * Add reviewing point related to REST API * Stop using mox stubs in nova.tests.unit.console * pci: do not filter out any SRIOV Physical Functions * objects: update the old location parent\_addr only if it has value * Add xenapi support for XenapiLiveMigrateData objects * Fixes Hyper-V unit tests for latest os\_win release * Add 'hw:cpu\_thread\_policy=require' scheduling * add "hw\_firmware\_type" image metadata * Docstring change for consistency * Add tests for metadata 
functions * libvirt: fix TypeError calling \_live\_migration\_copy\_disk\_paths * Add DiskFormat as Enum in fields * Remove DictCompat from EC2 objects * Remove DictCompat from DNSDomain * Add description on how to run ./run\_test.sh -8 * Propagate qemu-img errors to compute manager * Change assertEqual(True/False) to assertTrue/False * objects: adding a parent\_addr field to the PciDevice object * Add caching of service\_min\_versions in the conductor * Scheduler: enforce max attempts at service startup * Fix unit tests on Mac OS X * Stop using mox stubs in nova.tests.unit.api.openstack.compute.test\_services * libvirt: add discard support for attached volumes * Remove DictCompat from CellMapping * Remove NovaObjectDictCompat from Aggregate * XenAPI: Cope with more Cinder backends * single point of entry for sample config generation * Remove Deprecated EC2 and ObjectStore impl/tests * libvirt: add realtime support * Imported Translations from Zanata * libvirt: update to min required version to 0.10.2 * Remove null AZ tests from API tests * Replace stubs.Set with stub\_out (functional tests) * Updated from global requirements * doc: minor corrections to the API version docco * Refactor \_load\_support\_matrix * Fix format conversion in libvirt snapshot * Fix format detection in libvirt snapshot * api: add soft-affinity policies for server groups * scheduler: fill RequestSpec.instance\_group.members * scheduler: add soft-(anti-)affinity weighers * Implements proper UUID format for compute/test\_stats\* * Add image signature verification * Convert nova.tests.unit.image.fake.stub\_out\_image\_service to use stub\_out * Block more flaky py34 tests * Replace deprecated library function os.popen() with subprocess * Remove mox and Stubs from tests/unit/pci/test\_manager.py * Correct the code description * Fix advice for new contribs * libvirt: better error for bad live migration flag * Add argument to support-matrix sphinx extension * Wrong URL reported by the run\_tests.sh message * Make use of 'InstanceNUMACell.cpu\_policy' field * Add 'cpu\_policy' and 'cpu\_thread\_policy' fields * Add 'CPUThreadAllocationPolicy' enum field * Blacklist flaky tests and add warning * Modify Scheduler RPC API to use RequestSpec obj * Implements proper UUID format for test\_compute\_mgr * Remove get\_lock method and policy action * libvirt: sort block\_device\_list in volume\_in\_mapping log * Stop explicitly running test discovery for py34 * introduce \`\`stub\_out\`\` method to base test class * Cleanup NovaObjectDictCompat from security\_group\_rule * Remove useless header not need microversion * Implements proper UUID format for test\_compute * Move Process and Mentoring pages to devref * Document restrictions for working on cells v1 * api-guide: add a doc on users * Assignment (from method with no return) removed * remove use of \_get\_regexes in samples tests * Improve 'virt' conf options documentation * config options: Centralise 'virt.hardware' options * Get list of disks to copy early to avoid multiple DB hits * Remove non-unicode bind param warnings * Fix typo, ReST -> REST * Wrong spelling of defined * libvirt: fix typo in test\_init\_host\_migration\_flags * docs: update refs to mitaka release schedule * doc: add how to arrange order of scheduler filters * libvirt: only get instance.flavor if needed in get\_disk\_mapping * Replace backtick with apostrophe in lazy-loading debug log * libvirt: fix TypeError in find\_disk\_dev\_for\_disk\_bus * Fix RPC revision log entry for 4.6 * signature\_utils: move 
to explicit image metadata * Unreference mocks are listed in the wrong order * remove API v1.1 from testing * remove /v1.1 from default paste.ini * libvirt: verify cpu bw policy capability for host * Implements proper UUID format for test\_compute\_cells and test\_compute\_utils * Add the missing return value in the comment * Updated from global requirements * xen: block BootableTestCase from py34 testing * Modify conductor to use RequestSpec object * db: querry to retrieve all pci device by parent address * db: adding columns to PciDevice table * Replace except Exception with specific exception * pci: minor fix to exception message format * Python 3 deprecated the logger.warn method in favor of warning * Check added for mandatory parameter size in schema * Remove redundant driver initialization in test * enginefacade: 'instance\_metadata' * Misspelling in messages * Add lock to host-state consumption * Add lock to scheduler host state updating * Allow virt driver to define binding:host\_id * [python3] Webob request body should be bytes * Replace copy.deepcopy of RequestContext with copy.copy * DriverBlockDevice must receive a BDM object, not a dict * Misspelling in message * Wrong usage of "a" * Remove unused logging import and LOG global var * Reduce the number of db/rpc calls to get instance rules * Use is\_supported() to check microversion * SameHostFilter should fail if host does not have instances * VMware: add method for getting hosts attached to datastore * Trivial: Fix wrong comment in service version * signature\_utils: handle ECC curve unavailability * Updated from global requirements * tests: Remove duplicate check * enginefacade: 'bw\_usage', 'vol\_usage' and 's3\_image' * VMware: improve instance names on VC * VMware: add in folder support on VC * VMware: cleanup unit test global variable * signature\_utils: refactor the list of ECC curves * Nuke EC2 API from api-paste and remove wsgi support * Remove cruft for things o.vo handles * Make scheduler\_hints schema allow list of id * Change logging level for 'oslo\_db' * Remove unused compute\_api in ServerUsageController * network: Don't repopulate instance info cache from Neutron ports * Fix doc comment for get\_available\_resource * objects: lazy-load instance.security\_groups more efficiently * VMware: cleanup unit tests * Use SpawnIsSynchronousFixture in most unit tests * Use stub\_out and mock to remove mox: part 1 * Disable the in tree EC2 API by default * deprecate old glance config options * remove temporary GlanceEndpoint object * convert GlanceClientWrapper to endpoint * Use stub\_out and mock to remove mox: part 2 * Add a compute API to trigger crash dump in instance * Make libvirt driver return migrate data objects for source and dest checks * Use TimeFixture from oslo\_utils to override time in tests * enginefacade: 'vif' and 'task\_log' * review guide: add location details for config options * libvirt: wrapper list\_guests to Host's object * remove vestigial XML\_NS\_V11 variable * remove unused EXTENSION\_DESERIALIZE\_\* constants * config options: Centralise 'virt.ironic' options * remove unused pipeline\_factory\_v3 alias * remove unused methods from integrated\_helpers test class * remove unused extends\_name attribute * Add upload/download vhd2 interfaces * Replace unicode with six.text\_type * conductor: fix unbound local variable request\_spec * Use just ids in all request templates for flavors/images * extract non instance methods * remove unused trigger\_handler * remove unused 
update\_dhcp\_hostfile\_with\_text method * remove nova-cert from most functional tests * enginefacade: 'migration' * XenAPI: Fix race in rotate\_xen\_guest\_logs * libvirt: introduce "pause" to Guest's object * libvirt: introduce "shutdown" to Guest's object * libvirt: introduce "snapshot" to Guest's object * libvirt: introduce thaw filesystems * libvirt: introduce freeze filesystems * libvirt: replace direct libvirt's call AbortJobBlock * Allow to update 'v2.1' links in sample files * Do not update links for 'versions' tests * centeralized conf:compute/emphemeral\_storage\_encryption * Add instance.save() when handling reboot in init instance * Add transitional support for migrate data objects to compute manager * Implements proper UUID format for few objects tests * Filter by leased=False when allocating fixed IPs * Increase informations in nova-net warnings * docs: add concept guide for certificate * Fix reclaim\_instance\_interval < 0 never delete instance completely * Updated from global requirements * Add placeholders for config options * Implements proper UUID format for the fake\_network * Refresh stale volume BDMs in terminate\_connection * Block requests 2.9.0 * Implements proper UUID format for the test\_compute\_api * Remove onSharedStorage from evacuate API * Fix CPU pinning for odd number of CPUs w hyperthreading * hardware: stop using instance cell topology in CPU pinning logic * Check context before returning cached value * deprecate run\_tests.sh * remove archaic references to XML in api * simplify the request / response format document * Add signature\_utils module * Remove XML description from extension concept * remove ctype from classes * Remove cells service from api samples that don't test cells * Add uuidsentinel test module * Remove the wrong usage of api\_major\_version in api sample tests * Updated from global requirements * Fix wrong method name in doc filter\_scheduler * doc: update threading.rst * Makes GET extension info sample tests run for v2 also * update api\_samples code to use better variables * Remove incorrect comments about file injection * Remove a restriction on injection files * Remove unnecessary log when search servers * Deprecated tox -downloadcache option removed * rework warning messages for extension whitelist/blacklist * Make sure bdm.volume\_id is set after auto-creating volumes * Replace safe\_utils.getcallargs with inspect.getcallargs * Fix wrap\_exception to get all arguments for payload * Add hypervisor, aggregates, migration description * retool xen glance plugin to work with urls * always create clients with GlanceEndpoint * Implement GlanceEndpoint object * Clean up glance url handling * Use RequestSpec in the ChanceScheduler * Modify left filters for RequestSpec * Modify NUMA, PCI and num\_instances filters for RequestSpec * Improve inject\_nmi() in libvirt driver and add tests * Report compute-api bugs against nova * XenAPI: Expose labels for ephemeral disks * Fix use of safeutils.getcallargs * Cache SecurityGroupAPI results from neutron multiplexer * Remove the executable bit from several python files * Optimize \_cleanup\_incomplete\_migrations periodic task * [Py34] api.openstack.compute.legacy\_v2.test\_servers.Base64ValidationTest * [Py34] api.openstack.test\_faults.TestFaultWrapper * [Py34] Enable api.openstack.test\_wsgi unit test * default host to service name instead of uuid * Remove start\_service calls from the test case * Add SIGHUP handlers for compute rpcapi to console and conductor * Cache the automatic version pin 
to avoid repeated lookups * virt: allow for direct mounting of LocalBlockImages * Use testscenarios to set attributes directly * update API samples to use endpoints * Updated from global requirements * Add project-id and user-id when list server-groups * Fixes Python 3 compatibility for filter results * Remove duplicate default=None for option compute\_available\_monitors * Disable IPv6 on bridge devices * Don't load deleted instances * Improve Filter Scheduler doc clarity * libvirt: report pci Type-PF type even when VFs are disabled * Remove deprecated neutron auth options * Fix capitalization of IP * Add separated section for configure guest os * Add separated section for extra specs and image properties * Add a note about fixing "db type could not be determined" with py34 * neutron: skip test\_deallocate\_for\_instance\_2\* in py34 job * tighten regex on objectify * Replace os.path.join() for URLs * Add hv testing for ImageMetaProps.\_legacy\_property\_map * Edit the text to be more native-English sounding * docs: add test strategy and feature classification * Fix the endpoint of /v2 on concept doc * Drop JSON decoding for supported\_instances * docs: update old stuff in version section * Scheduler: honor the glance metadata for hypervisor details * Implements proper UUID format for the ComputeAPITestCase * docs: add microversions description in the concept doc * Make admin consistent * Add more concepts for servers * Make "ReSTful service" consistent * Add retry logic for detaching device using LibVirt * Fix Exception message consistency with input protocol * Remove SQLite BigInteger/Integer translation logic * xen: Drop JSON for supported\_instances * vmware: Drop JSON for supported\_instances * ironic: Drop JSON for supported\_instances * hyperv: Drop JSON for supported\_instances * libvirt: Drop JSON for supported\_instances * Drop JSON for stats in virt API * Replaces izip\_longest with six.moves.zip\_longest * Fixes dict keys and items references for Python 3 * Scheduler: correct control flow when forcing host * Replaces longs with ints * neutron: only get port id when listing ports in validate\_networks * neutron: only list ports if there is a quota limit when validating * Add reviewing point related to REST API * Revert "Enable options for oslo.reports" * Fix wrong CPU metric value in metrics\_filter * Reset the compute\_rpcapi in Compute manager on SIGHUP * Remove the unused sginfo rootwrap filter * docs: ensure third party tests pass before +2 * Config options: centralize section "scheduler" * add api-samples tox target * Remove Instance object flavor helper methods only used in tests * Remove unnecessary extra instance saves during resize * docs: using the correct format and real world example for fault message * VMware: cleanup ExtraSpecs * Remove HTTPRequestEntityTooLarge usage in test * Enables py3 unit tests for libvirt.host module * Replaces \_\_builtin\_\_ with six.moves.builtins * Converting nova.virt.hyperv to py3 * Hyper-V: removes \*Utils modules and unit tests * docs: update services description for concept guide * docs: remove duplicated section about error handling * Remove Useless element in migrate\_server shcema * Optimize "open" method with context manager * trivial: Add some logs to 'numa\_topology\_filter' * Updated from global requirements * Docs: update the concept guide for Host topics * Cleanup of compute api reboot method * Hyper-V: adds os-win library * Remove description about image from faults section * api-guide: add note about users * Updated from 
global requirements * xenapi: Add helper function and unit tests for client session * Config options: centralize section "scheduler" * Ironic: Workaround to mitigate bug #1341420 * Libvirt: Support fp plug in vhostuser vif * Remove version from setup.cfg 13.0.0.0b1 ---------- * Add note for automatic determination of compute\_rpc version by service * Add note for Virtuozzo supporting snapshots * Add note for HyperV 2008 drop of support * Imported Translations from Zanata * Add note for removing conductor RPC API v2 * Add note for dropping InstanceV1 objects * Add note for force\_config\_drive opt change * Add note for deprecating local conductor * Revert "Detach volume after deleting instance with no host" * force releasenotes warnings to be treated as errors * Fix reno warning for API DB relnote * Adding a new vnic\_type for Ironic/Neutron/Nova integration * Use o.vo DictOfListOfStringsField * libvirt: remove todo note not useful anymore * Modify metric-related filters for RequestSpec * Modify default filters for RequestSpec * servicegroup: stop zombie service due to exception * Add persistence to the RequestSpec object * Updated from global requirements * add hacking check for config options location * Correct some nits for moving servers in concept doc * use graduated oslo.policy * TrivialFix: remove 'deleted' flag * Make server concept guide use 'server' consistently * api-guide: fix up navigation bar * Use version convert methods from oslo\_utils.versionutils * docs: reorder move servers text * docs: add clarifications to move servers * Change some wording on server\_concepts.rst * Cleanup unused test code in test\_scheduler.py * Modify Aggregate filters for RequestSpec * Add code-review devref for release notes * Hyper-V: refines the exceptions raised in the driver * Use o.vo FlexibleBooleanField * docs: describe migration and other movement concepts * Double 'an' in message * Unify on \_schedule\_instances * Add review guideline to microversion API * Remove the TestRemoteObject class * Catch FixedIpNotFoundForAddress when create server * doc: add server status to concept.rst * docs: update the concept guide shelve actions * Fixed incorrect name of 'tag' and 'tag-any' filters * Fix resource tracker VCPU counting * Add relnote for change in default setting * use NoDBTestCase for KeypairPolicyTest * doc: change policies.rst to indicate API links * Remove useless code in \_poll\_volume\_usage function * Neutron: add logging context * Remove unused param of CertificatesController * Add user data into general concept * Fix a typo in api-guide doc * Make some classes inherit from NoDBTestCase * XenAPI: Workaround for 6.5 iSCSI bug * NFS setup for live-migration job * Fix ebtables-version release note * config options: enhance help text of section "serial\_console" * Updating nova config-reference doc * Updated from global requirements * Prevent redundant instance.update notifications * VMware: fix docstring for cluster management * api: remove re-declared type in migrate schema * enginefacade: 'agent' and 'action' * config options: centralize section "serial\_console" * Replaced private field in get\_session/engine with public method * SR-IOV: Improve the vnic type check in the neutron api * Simplified boolean variable check * update connect\_volume test * Enable options for oslo.reports * Reverse sort tables before archiving * scheduler: fix incorrect log message * Updated from global requirements * Add release note for API DB migration requirements * Replaced deprecated timeutils methods 
* Multinode job for live-migration * Use o.vo VersionPredicateField * Use flavor instead of flavour * Corrected few grammatical nitpics * Add more 'actions' for server concepts doc * libvirt: mlnx\_direct vif type removal * xen: mask passwords in volume connection\_data dict * Updated from global requirements * Use --concurrent with ebtables * Removed extra spaces from double line strings * Change test function name to make more sense * Change Invalid exception to a specified exception * Add 'lxd' to the list of recognized hypervisors * Add microversions schema unit test for None * Clean up legacy multi-version test constructs * Fix Nova's indirection fixture override * Remove skips for resize tests from tempest-dsvm-cells-rc * Modify Affinity filter for RequestSpec * Prepare filters for using RequestSpec object * Use ServiceList object rather than direct db call * Add relnote for ERT deprecation * Remove IN-predicate warnings * docs: update the API faults concept guide * Deprecate nova-manage service subcommand * Double detach volume causes server fault * Use JSON format instead of json format * Network: add in missing translation * cells is a sad panda about scheduler hints * VMware: expand support for Opaque networks * Fix is\_volume\_backed\_instance() for unset image\_ref * Add \_LE to LOG.error statement in nova/service * Add service records for nova-api services * Added method is\_supported to check API microversions * enginefacade: 'host\_mapping' * Removes support for Hyper-V Server 2008 R2 * Fix the bug of "Error spelling of 'explicitely'" * Claims: fix log message * Fix paths for api-guide build * Remove flavors.get\_flavor() only used in tests * VMware: Raise DiskNotFound for missing disk device * Remove two unneeded db lookups during delete of a resizing instance * Fix pci\_stats logging in resource tracker * live-mig: Mark migration as failed on fail to schedule * Move the Migration set-status-if-exists pattern to a method * Don't track migrations in 'accepted' state * live-migrate: Change the status Migration is created with * compute: split check\_can\_live\_migrate\_destination * Replace N block\_device\_mapping queries with 1 * Add "unreleased" release notes page * Add reno for release notes management * XenAPI: Correct hypervisor type in Horizon's admin view * Fix typo in test\_post\_select\_populate * Rearranges to create new Compute API Guide * Added CORS support to Nova * Aggregate Extra Specs Filter should return if extra\_specs is empty * cells: skip 5 networking scenario tests that use floating IPs * force\_config\_drive: StrOpt -> BoolOpt * Updated from global requirements * Add test coverage for both types of not-found-ness in neutronclient for floating * Fix impotent \_poll\_shelved\_instances tests * Fix race in \_poll\_shelved\_instances task * Handle a NeutronClientException 404 Error for floating ips * Handle DB failures in servicegroup DB driver * Hook for live-migration job * Omit RescheduledException in instance\_fault.message * Remove duplicate server.kill on test shutdown * make the driver.Scheduler as abstract class * Fix a spelling mistake in the log * objects: remove remote\_object\_calls from \_BaseTestCase * Repair and rename test\_is\_volume\_backed\_instance\_no\_bdms() * Use ObjectVersionChecker fixture from oslo.versionedobjects * VMware: add in vif resource limitations * Untie subobject versions * Block oslo.messaging 2.8.0 * Split up test\_is\_volume\_backed\_instance() into five functions * Avoid the dual-naming confusion * enginefacade: 
'provider\_fw', 'console\_pool' and 'console' * enginefacade: 'network' * clean up regex in tempest-dsvm-cells-rc * skip lock\_unlock\_server test for cells * ScalityVolume:fix how remote FS mount is detected * OpenStack typo * Remove duplicate keys in policy.json * Add missing policy rules * devref:Don't suggest decorate private method * VMware: use a constant for 'iscsi' * Config drive: make use of an instance object * Fix attibute error when cloning raw images in Ceph * Properly log BlockDeviceMappingList in \_create\_block\_device\_mapping * Exclude all BDM checks for cells * glance:add helper method to get client version * enginefacade: 'dnsdomain' and 'ec2' * enginefacade: 'certificate' and 'pci\_device' * enginefacade: 'key\_pair' and 'cell' * enginefacade: 'instance\_mapping' * enginefacade: 'cell\_mapping' * enginefacade: 'instance\_info' and 'instance\_extra' * Use EngineFacade from oslo\_db.enginefacade * VMware: fix trivial indentations * Remove flavors.get\_all\_flavors() only used in tests * Make lock policy default to admin or owner * libvirt:Fix a typo of test cases * Deprecate local conductor mode * Deprecate Extensible Resource Tracker * Change image to instance in comment * VMware: use oslo\_config new type PortOpt * Remove vcpu resource from extensible resource tracker * Add logging to snapshot\_volume\_backed method * Remove unnecessary destroy call from Ironic virt driver * cells: add debug logging to bdm\_update\_or\_create\_at\_top * Drop Instance v1.x support * Check prefix with startswith() instead of slicing * Add debug logging for when boot sequence is invalid in \_validate\_bdm * remove the redundant policy check for SecurityGroupsOutputController * virt: add constraint to handle realtime policy * libvirt: add cpu schedular priority config * libvirt: rework membacking config to support future features * Do not mask original spawn failure if shutdown\_instance fails * Point to cinder options in nova block alloc docs * Fix booting fail when unlimited project quota * Remove useless get\_instance\_faults() * Remove "Can't resolve label reference" warnings * Remove reservation\_id from the logs when a schedule fails * Use RequestSpec object in HostManager * Use RequestSpec object in the FilterScheduler * Add ppcle architectures to libvirt blockinfo * Deprecated: failIf * Imported Translations from Zanata * Remove obj\_relationships from objects * Delete dead test code * Add tempest-dsvm-lxc-rc * Mark set-admin-password as complete for libvirt in support matrix * Hypervisor support matrix: define pause & unpause * Revert "Implement online schema migrations" * Fix the os-extended-volumes key reference in the REST API history docs * Remove get\_all method from servicegroup API * Remove SoftDeleteMixin from NovaBase * libvirt: support snapshots with parallels virt\_type * Use oslo.config choices kwarg with StrOpt for servicegroup\_driver * Imported Translations from Zanata * Add -constraints sections for CI jobs * Add "vnc" option group for sample nova.conf file * Updated from global requirements * Expands python34 unit tests list * Fix missing obj\_make\_compatible() for ImageMetaProps object * Fix error handling in nova.cmd.baseproxy * Change 'ec2-api' stackforge url to openstack url * Fixes Python 3 str issue in ConfigDrive creation * Revert "Store correct VirtCPUTopology" * Enable all extension for image API sample tests * Add tags to .gitignore * Updated from global requirements * Add a nova functional test for the os-server-groups GET API with all\_projects 
parameter * Image meta: treat legacy vmware adapter type values * Attempt rollback live migrate at dest even if network dealloc fails * hacking check for contextlib.nested for py34 support * Print number of rows archived per table in db archive\_deleted\_rows * Updated from global requirements * Fix more inconsistency between Nova-Net and Neutron * Fix metadata service security-groups when using Neutron * Remove redundant deps in tox.ini * Add some tests for map\_dev * Clean up tests for dropping obj\_relationships * Fix up Service object for manifest-based backports * Fix service\_version minimum calculation for compute RPC * docs: add the scheduler evolution plans * Revert "virt: Use preexec\_fn to ulimit qemu-img info call" * Updated from global requirements * Ensure Glance image 'size' attribute is 0, not 'None' * Ignore errorcode=4 when executing \`cryptsetup remove\` command * libvirt: Don't attempt to convert initrd images * Revert "Fixes Python 3 str issue in ConfigDrive creation" * Monkey patch nova-ec2 api * Compute: remove unused parameter 12.0.0 ------ * Omnibus stable/liberty fix * Drop outdated sqlite downgrade script * Updated from global requirements * Fix Status-Line in HTTP response * Imported Translations from Zanata * Default ConvertedException code to 500 * Updated from global requirements * VMware: fix bug for config drive when inventory folder is used * Fix a typo * code-review guidelines: add checklist for config options * Add a code-review guideline document * virt: Use preexec\_fn to ulimit qemu-img info call * Clean up some Instancev1 stuff in the tests * Updated from global requirements * Replaces contextlib.nested with test.nested * Sync cliutils from oslo-incubator * Make archive\_deleted\_rows\_for\_table private 12.0.0.0rc2 ----------- * load consoleauth\_topic option before using it * Revert "[libvirt] Move cleanup of imported files to imagebackend" * Add more documentation for RetryFilter * Fix InstanceV1 backports to use context * Imported Translations from Zanata * Add test of claim context manager abort * Log DBReferenceError in archive\_deleted\_rows\_for\_table * Use DBReferenceError in archive\_deleted\_rows\_for\_table * Add testresources used by oslo.db fixture * Remove unused context parameter from db.archive\_deleted\_rows\* methods * xenapi\_device\_id integer, expected string * Fix InstanceV1 backports to use context * Drop unused obj\_to\_primitive() override * Updated from global requirements * libvirt: remove unnecessary else in blockinfo.get\_root\_info * Make test cases in test\_test.py use NoDBTest * XenAPI: Fix unit tests for python34 * docs: re-organise the API concept docs * VMware: specify chunk size when reading image data * Make ConsoleauthTestCase inherit from NoDBTest * Change a test class of consoleauth to no db test * Imported Translations from Zanata * Catch 3 InvalidBDM related exc when boot instance * Move create vm states to svg diagram * Ironic: Fix bad capacity reporting if instance\_info is unset * Revert "[libvirt] Move cleanup of imported files to imagebackend" * Honor until\_refresh config when creating default security group * remove sphinxcontrib-seqdiag * [Py34] nova.tests.unit.api.openstack.test\_common * [Py34] Enable api.openstack.test\_mapper unit test * [Py34] Enable test\_legacy\_v2\_compatible\_wrapper * Extend the ServiceTooOld exception with more data * Make service create/update fail if version is too old * Allow automatic determination of compute\_rpc version by service * Add get\_minimum\_version() to 
Service object and DB API * Correct memory validation for live migration * devref: change error messages no need microversion * Replace f.func\_name and f.func\_code with f.\_\_name\_\_ and f.\_\_code\_\_ * Imported Translations from Zanata * Add a note about the 500->404 not requiring a microversion * Ensure Nova metrics derived from a set of metrics * Updated from global requirements * Fixes Python 3 str issue in ConfigDrive creation * Make secgroup rules refresh with refresh\_instance\_security\_rules() * Remove unused refresh\_security\_group\_members() call * Imported Translations from Zanata * Check DBReferenceError foreign key in Instance.save * Fix Instance unit test for DBReferenceError * Ironic: Fix bad capacity reporting if instance\_info is unset * libvirt: check if ImageMeta.disk\_format is set before accessing it * libvirt: check if ImageMeta.disk\_format is set before accessing it * Rollback is needed if initialize\_connection times out * Updated from global requirements * Add Pillow to test-requirements.txt * VMware: raise NotImplementedError for live migration methods * xapi-tools: fixes cache cleaner script * Cleanup of Translations * Add Pillow to test-requirements.txt * Update rpc version aliases for liberty * Remove orphaned code related to extended\_volumes * Add checkpoint logging when terminating an instance * Add checkpoint logging when building an instance in compute manager * Removed unused method from compute/rpcapi * Remove unused read-only cell code * Change warn to debug logs when migration context is missing * Use os-testr for py34 tox target * Add sample config file to nova docs * Remove lazy-loading property compute\_task\_api from compute api * Remove conductor 2.x RPC API * Reserve 10 migrations for backports * Use StrOpt's parameter choices to restritct option auth\_strategy * vmware: set default value in fake \_db\_content when creating objects * Avoid needless list copy in 'scheduler\_host\_subset\_size' case * libvirt: Log warning for wrong migration flag config options * Slightly better translation friendly formatting * Identify more py34 tests that already pass * rebuild: Apply migration context before calling the driver * hardware: improve parse\_cpu\_spec to handle exclusion range * Correct Instance type check to work with InstanceV1 * Imported Translations from Zanata * Correct Instance type check to work with InstanceV1 * Only create volumes with instance.az if cinder.cross\_az\_attach is False * Fix the help text of monkey\_patch config param * Rollback of live-migration fails with the NFS driver * Set TrustedFilter as experimental * doc: gmr: Update instructions to generate GMR error reports * rebuild: Apply migration context before calling the driver * Fix MetricWeigher to use MonitorMetricList * VMware: update log to be warning * Add more help text to the cinder.cross\_az\_attach option * Cleanup of Translations * Revert "Deprecate cinder.cross\_az\_attach option" * Fix some spelling typo in manual * Fix NoneType error when calling MetricsWeigher * wsgi: removing semicolon * Fix logging\_sample.conf to use oslo\_log formatter * Remove unused \_check\_string\_length() * Deprecate cinder.cross\_az\_attach option * Neutron: update cells when saving info\_cache * Fix MetricWeigher to use MonitorMetricList 12.0.0.0rc1 ----------- * Imported Translations from Zanata * Detach volume after deleting instance with no host * Remove unnecessary call to info\_cache.delete * Filter leading/trailing spaces for name field in v2.1 compat mode * Give 
instance default hostname if hostname is empty * If rescue failed set instance to ERROR * Add some devref for AZs * Change parameter name in utility function * RT: track evacuation migrations * rebuild: RPC sends additional args and claims are done * Cells: Limit instances pulled in \_heal\_instances * Open Mitaka development * Fix order of arguments in assertEqual * devref: update the nova architecture doc * Imported Translations from Zanata * Fix quota update in init\_instance on nova-compute restart * net: explicitly set mac on linux bridge * live-migration: Logs exception if operation failed * libvirt: add unit tests for the designer utility methods * Add test cases for some classes in objects.fields * Change ignore-errors to ignore\_errors * libvirt: fix direct OVS plugging * claims: move a debug msg to a warn on missing migration * Fix order of arguments in assertEqual * Remove duplicate VALID\_NAME\_REGEX * Pep8 didn't check api/openstack/common.py * Updated from global requirements * libvirt: Add unit tests for methods * Devref: Document why conductor has a task api/manager * Imported Translations from Zanata * Fix nova configuration options description * libvirt:on snapshot delete, use qemu-img to blockRebase if VM is stopped * Allow filtering using unicode characters * Updated from global requirements * Imported Translations from Zanata * Test both NoAuthMiddleware and NoAuthMiddlewareV3 * Remove redundant variable 'context' * Add 'OS-EXT-VIF-NET:net\_id' for v21 compatible mode * libvirt: Add NUMA cell count to cpu\_info * Xenapi: Don't access image\_meta.id when booting from a volume * Imported Translations from Zanata * Fix typo in HACKING.rst * Remove comment in wrong place * Fix string formatting in api/metadata/vendordata\_json.py * Raise exception.Migration earlier in REST API layer * Remove "shelved\_image\_id" key from instance system metadata * Only set access\_ip\_\* when instance goes ACTIVE * VMware: fix typo in comment * RT: Migration resource tracking uses migration context * compute: migrate/resize paths properly handle stashed numa\_topology * Claims: Make sure move claims create a migration context records * libvirt:update live\_migration\_monitor to use Guest * VMware: create method for getting datacenter from datastore * User APIRouterV21 instead of APIRouterV3 for v2.1 unittests * Remove TestOpenStackClientV3 from nova functional tests * Rename all the ViewBuilderV3 to ViewBuilderV21 * libvirt: Split out resize\_image logic from create\_image * Reuse method to convert key to passphrase * Creating instance fail when inject ssh key in cells mode * Fix the usage output of the nova-idmapshift command * Make test\_revoke\_cert\_project\_not\_found\_chdir\_fails deterministic * Reduce the number of Instance.get\_by\_uuid calls * Remove 'v3' from comments in Nova API code * xapi: cleanup volume sr on live migration rollback * Hyper-V: Implements attach\_interface and detach\_interface method * Remove unnecessary 'context' param from quotas reserve method call * VMware: Replace get\_dynamic\_properties with get\_object\_properties\_dict * VMware: Replace get\_dynamic\_property with get\_object\_property * Return empty PciDevicePoolList obj instead of None * libvirt: add debug logging for lxc teardown paths * Add API schema for different\_cell filter * Add microversion bump exception for scheduler-hint * Use six.text\_type instead of str in serialize\_args * Set vif and allocated when associating fixed ip * Fix ScaleIO commands in rootwrap filters * Add missing 
information to docstring * Add microversion rule when adding attr to request * Check unknown event name when create external server event * Don't expect meta attributes in object\_compat that aren't in the db obj * CONF.allow\_resize\_to\_same\_host should check only once in controller * Updated from global requirements * Fix debug log format in object\_backport\_versions() * Add version 3.0 of conductor RPC interface * Remove and deprecate conductor object\_backport() * Invalidate AZ cache when the instance AZ information is different * Consolidate code to get the correct availability zone of an instance * Fix order of arguments in assertEqual * Ironic: Call unprovison for nodes in DEPLOYING state * libvirt: use guest as parameter for get serial ports * Separate API schemas for v2.0 compatible API * api: allow any scheduler hints * API: Handle InstanceUnknownCell exceptions * Updated from global requirements * Add some explanation for the instance AZ field * Remove 'v3' from extension code * Remove more 'v3' references from the code * Sorting and pagination params used as filters * Freeze v1 Instance and InstanceList schema hashes * Imported Translations from Transifex * Remove unused parameter overwrite in elevated * Add missing delete policies in the sample file * Fix a few typos * ironic: convert driver to use nova.objects.ImageMeta * objects: convert config drive to use ImageMeta object * VMware: ensure that instance is deleted when volume is missing * libvirt:Rsync compression removed * xenapi: Support extra tgz images that with only a single VHD * Hyper-V: Fixes snapshoting inexistent VM issue * Hyper-V: Adds RDPConsoleOps unit tests * Rectify spelling mistake in nova * libvirt: Add a finish log * Remove old unused baremetal rootwrap filters * Relax restrictions on server name * network\_request\_obj: Clean up outdated code * Object: Fix KeyError when loading instance from db * Add os-brick's scsi\_id command to rootwrap * Expose keystoneclient's session and auth plugin loading parameters * Remove and deprecate conductor compute\_node\_create() * Drop unused conductor manager vol\_usage\_update() mock * Add constraint target to tox.ini * nova-net: fix missing log variable in deallocate\_fixed\_ip * Provide working SQLA\_VERSION attribute * Don't "lock" the DB on expand dry run * New sensible network bandwidth quota values in Nova tests * Fix Cells gate test by modifying the regressions regex * Add functional test for server group * Reject the cell name include '!', '.' 
and '@' for Nova API * Hyper-V: Adds HyperVDriver unit tests * claims: Remove compat code with instance dicts * Add Instance and InstanceList v2.0 objects * Teach conductor to do manifest-based object\_class\_action() things * Make the conductor fixture use version manifests * Update objects test infrastructure for multiple versions * Refactor Instance tests to use objects.Instance * Fix an issue with NovaObjectRegistry hook * Pull out the common bits of InstanceList into \_BaseInstanceList * Pull out the common bits of Instance into \_BaseInstance * Clarify max\_local\_block\_devices config option usage * Allow to use autodetection of volume device path * Remove the blacklisted nova-cells shelve tests * Update from global requirements * objects: Hook migration object into Instance * Fix order of arguments in assertEqual * Fix order of arguments in assertEqual * Detach and terminate conn if Cinder attach fails * [libvirt] Move cleanup of imported files to imagebackend * hyperv: convert driver to use nova.objects.ImageMeta 12.0.0.0b3 ---------- * Add notes explaining vmware's suds usage * Adds instance\_uuid index for instance\_system\_metadata * Handle nova-compute failure during a soft reboot * Fix mistake in UT:test\_detach\_unattached\_volume * Fix RequestSpec.instance\_group hydration * Remove unused root\_metadata method of BlockDeviceMappingList * Add JSON-Schema note to api\_plugins.rst * Compute: update finish\_revert\_resize log to have some context * Revert "Remove references to suds" * Fix API directories on the doc * Fix incomplete error message of quota exceeded * Add secgroup param checks for Neutron * Implement manifest-based backports * Delete orphaned instance files from compute nodes * Fixed incorrect keys in cpu\_pinning * api: deprecate the api v2 extension configuration * Remove the v3 word from help message of api\_rate\_limit option * Use the same pci\_requests field for all filters and HostManager * objects: Add MigrationContext object * Don't query database with an empty list of tags for creation * Remove duplicate NullHandler test fixture * Add migration policy to upgrades devref * Add warning log when deprecated v2 and v3 code get used * Update ComputeNode values with allocation ratios in the RT * Update HostManager and filters to use ComputeNode ratios * Add cpu\_allocation\_ratio and ram\_allocation\_ratio to ComputeNode * VMware: adds support for rescue image * filter pre\_assigned\_dev\_names when finding disk dev * Fix order of arguments in assertEqual * Fix order of arguments in assertEqual * Fix order of arguments in assertEqual * rt: Rewrite abort and update\_usage tests * Cleanup RT \_instance\_in\_resize\_state() * Compute: be consistent with logs about NotImplemented methods * VMware: pass network info to config drive * Remove/deprecate conductor instance\_update() * Make compute manager instance updates use objects * xenapi: add necessary timeout check * Fix permission issue of server group API * Make query to quota usage table order preserved * Change v3 to v21 for devref api\_plugins.rst * Remove duplicate exception * Don't trace on InstanceInfoCacheNotFound when refreshing network info\_cache * Cells: Improve block device mapping update/create calls * Rm openstack/common/versionutils from setup.cfg * Add a warning in the microversion docs around the usage of 'latest' * Fix exception message mistake in WSGI service * Replace "vol" variable by "bdm" * Remove v3 references in unit test 'contrib' * Removed unused dependency: discover * Rename tests so 
that they are run * Adds unit tests to test\_common.py * db: Add the migration\_context to the instance\_extra table * tests: Make test\_claims use Instance object * api: use v2.1 only in api-paste.ini * VMware: Update to return the correct ESX iqn * Pass block\_device\_info when delete an encrypted lvm * Handle neutron exception on bad floating ip create request * API: remove unused parameter * Consider that all scheduler calls are IO Ops * Add RequestSpec methods for primitiving into dicts * Add a note about the 400 response not requiring a microversion * api: deprecate the concept of extensions in v2.1 * Fix precedence of image bdms over image mappings * Cells: remove redundant check if cells are enabled * Strip the extra properties out when using legacy v2 compatible middleware * Remove unused sample files from /doc dir * Expose VIF net-id attribute in os-virtual-interfaces * libvirt: take account of disks in migration data size * Add deprecated\_for\_removal parm for deprecated neutron\_ops * Use compatibility methods from oslo * compute: Split the rebuild\_instance method * Allow for migration object to be passed to \_move\_claim * rt: move filtering of migration by type lower in the call stack * rt: generalize claim code to be useful for other move actions * libvirt: make guest to return power state * libvirt: move domain info to guest * Xen: import migrated ephemeral disk based on previous size * cleanup NovaObjectDictCompat from external\_event * cleanup NovaObjectDictCompat from agent * Catch invalid id input in service\_delete * Convert percent metrics back into the [0, 1] range * Cleanup for merging v2 and v2.1 functional tests * Remove doc/source/api and doc/build before building docs * Fixes a typo on nova.tests.unit.api.ec2.test\_api.py * Add a note about the 403 response not requiring a microversion * Pre-load expected attrs that the view builder needs for server details * Remove 'Retry-After' in server create and resize * Remove debug log message in SG API constructor * Updated from global requirements * Refactor test cases for live-migrate error case * Fixes Bug "destroy\_vm fails with HyperVException" * libvirt: refactor \_create\_domain\_setup\_lxc to use Image.get\_model * Set task\_state=None when booting instance failed * libvirt: Fix snapshot delete for network disk type for blockRebase op * [Ironic]Not count available resources of deployed ironic node * Catch OverQuota in volume create function * Don't allow instance to overcommit against itself * n-net: add more debug logging to release\_fixed\_ip * Fix scheduler code to use monitor metric objects * objects: add missing enum values to DiskBus field * Move objects registration in tests directory * xenapi: convert driver to use nova.objects.ImageMeta * libvirt: convert driver to use nova.objects.ImageMeta * Updated from global requirements * VMware: Delete vmdk UUID during volume detach * Move common sample files methods in test base class * Share server POST sample file for microversion too * Fix remote\_consoles microversion 2.8 not to run on /v3 * Remove merged sample tests and file for v2 tests * Move "versions" functional tests in v2.1 tests * Nil out inst.host and inst.node when build fails * Fix link's href to consider osapi\_compute\_link\_prefix * Fix abnormal quota usage after restore by admin * Specify current directory using new cwd param in processutils.execute * Remove and deprecate unused conductor method vol\_usage\_update() * Replace conductor proxying calls with the new VolumeUsage object * Add a 
VolumeUsage object * Updated from global requirements * Move CPU and RAM allocation ratios to ResourceTracker * Pull the all\_tenants search\_opts checking code into a common utility * Gate on nova.conf.sample generation * libvirt: use proper disk\_info in \_hard\_reboot * Update obj\_reset\_changes signatures to match * libvirt: only get bdm in \_create\_domain\_setup\_lxc if booted from volume * libvirt: \_create\_domain\_setup\_lxc needs to default disk mapping as a dict * libvirt: add docstring for \_get\_instance\_disk\_info * Add rootwrap daemon mode support * Removed duplicated keys in dictionary * Xenapi: Correct misaligned partitioning * libvirt:Remove duplicated check code for config option sysinfo\_serial * Test cases for better handling of SSH key comments * Allow compute monitors in different namespaces * cleanup NovaObjectDictCompat from hv\_spec * cleanup NovaObjectDictCompat from quota * Correct a wrong docstring * Create RequestSpec object * Clarify API microversion docs around handling 500 errors * libvirt: Fix KeyError during LXC instance boot * Xenapi: Handle missing aggregate metadata on startup * Handle NotFound exceptions while processing network-changed events * Added processing /compute URL * libvirt: enable live migration with serial console * Remove the useless require\_admin\_context decorator * Correct expected error code for os-resetState action * libvirt: add helper methods for getting guest devices/disks * compute: improve exceptions related to disk size checks * Improve error logs for start/stop of locked instance * pci: Remove nova.pci.device module * pci: Remove objects.InstancePCIRequests.save() * Remove unused db.security\_group\_rule\_get\_by\_security\_group\_grantee() * Revert "Make nova-network use conductor for security groups refresh" * Make compute\_api.trigger\_members\_refresh() issue a single db call * Fix cells use of legacy bdms during local instance delete operations * Hyper-V: Fixes serial port issue on Windows Threshold * Consolidate initialization of instance snapshot metadata * Fix collection of metadata for a snapshot of a volume-backed instance * Remove unnecessary ValueError exception * Update log's level when backup a volume backend instance * The API unit tests for serial console use http instead of ws * Drop scheduler RPC 3.x support * Move quota delta reserve methods from api to utils * nova.utils.\_get\_root\_helper() should be public * Host manager: add in missing log hints * Removing extension "OS-EXT-VIF-NET" from v2.1 extension-list * nova-manage: fix typo in docstring about mangaging * hyper-v: mock time.sleep in test\_rmtree * Remove tie between system\_metadata and extra.flavor * Fixes Hyper-V boot from volume fails when using ephemeral disk * Re-write way of compare APIVersionRequest's * Store "null api version" as 0.0 * add docstring to virt driver interface (as-is) [1 of ?] 
* Remove last of the plugins/v3 from unit tests * Rename classes containing 'v3' to 'v21' * Move the v2 api\_sample functional tests * Updated from global requirements * Add logging when filtering returns nothing * libvirt: cleanup() serial\_consoles after instance failure * Don't query database with an empty list of tags for IN clause * Libvirt: Make live\_migration\_bandwidth help msg more meaning * Move V2.1 API unittest to top level directory * Neutron: Check port binding status * Move legacy v2 api smaple tests * conductor: update comments for rpc and use object * Load flavor when getting instances for simple-tenant-usage * Make pagination tolerate a deleted marker * Updated from global requirements * Cleanup HTTPRequest for security\_groups test * Add api samples impact to microversion devref * Use min and max on IntOpt option types * Add hacking check for eventlet.spawn() * Updated from global requirements * neutron: filter None port\_ids from ports list in \_unbind\_ports * VMware: treat deletion exception with attached volumes * VMware: ensure that get\_info raises the correct exception * Allow resize root\_gb to 0 for volume-backed instances * Limit parallel live migrations in progress * Validate quota class\_name * Move V2 API unittests under legacy\_v2 directory * Updated from global requirements * Replace get\_cinder\_client\_version in cinder.py * Avoid querying for Service in resource tracker * Remove/deprecate unused parts of the compute node object * Make ComputeNode.service\_id nullable to match db schema * Add missing rules in policy.json * Add V2.1 API tests parity with V2 API tests * Fixed indentation * Simplify interface for creating snapshot of volume-backed instance * Add instance action events for live migration * Remove 'v3' directory for v2.1 json-schemas * Move v2.1 code to the main compute directory - remove v3 step3 * libvirt: qemu-img convert should be skipped when migrating * Add version counter to Service object * Fix the peer review link in the 'Patches and Reviews' policy section * Handle port delete initiated by neutron * Don't check flavor disk size when booting from volume * libvirt: make instance compulsory in blockinfo APIs * xapi: ensure pv driver info is present prior to live-migration * Move existing V2 to legacy\_v2 - step 2 * Move existing V2 to legacy\_v2 * Return v2 version info with v2 legacy compatible wrapper * Ironic: Add numa\_topology to get\_available\_resource return values * Fix three typos on nova/pci directory * Imported Translations from Transifex * pci: Use PciDeviceList for PciDevTracker.pci\_devs * pci: Remove get\_pci\_devices\_filter() method * pci: Move whitelist filtering inside PCI tracker * libvirt: call host.get\_capabilities after checking for bad numa versions * libvirt: log when BAD\_LIBVIRT\_NUMA\_VERSIONS detected * Use string substitution before raising exception * Hyper-V: deprecates support for Windows / Hyper-V Server 2008 R2 * VMware: Do not untar OVA on the file system * Add hacking check for greenthread.spawn() * Ironic: Use ironicclient native retries for Conflict in ClientWrapper * Prevent (un)pinning unknown CPUs * libvirt: use instance UUID with exception InstanceNotFound * Fix notify\_decorator errors * VMware: update supported vsphere 6.0 os types * libvirt: convert Scality vol driver to LibvirtBaseFileSystemVolumeDriver * libvirt: convert Quobyte driver to LibvirtBaseFileSystemVolumeDriver * pci: Use fields.Enum type for PCI device type * pci: Use fields.Enum type for PCI device status * More specific 
error messages on building BDM * Ensure test\_models\_sync() works with new Alembic releases * Hyper-V: Adds VolumeOps unit tests * Hyper-V: Adds MigrationOps unit tests * Suppress not image properties for image metadata from volume * Add non-negative integer and float fields * Fix DeprecationWarning when using BaseException.message * Added support for specifying units to hw:mem\_page\_size * Compute: use instance object for refresh\_instance\_security\_rules * libvirt: convert GPFS volume driver to LibvirtBaseFileSystemVolumeDriver * Updated from global requirements * Add os-brick based LibvirtVolumeDriver for ScaleIO * docs: add link to liberty summit session on v2.1 API * Refactor unit test for InstanceGroup objects * Don't pass the service catalog when making glance requests * libvirt: check min required qemu/libvirt versions on s390/s390x * libvirt: ensure LibvirtConfigGuestDisk parses readonly/shareable flags * libvirt: set caps on maximum live migration time * libvirt: support management of downtime during migration * cleanup NovaObjectDictCompat from numa object * Fix test\_relationships() for subobject versions * libvirt: don't open connection in driver constructor * Skip SO\_REUSEADDR tests on BSD * \_\_getitem\_\_ method not returning value * Compute: replace incorrect instance object with dict * Fix live-migrations usage of the wrong connector information * Honour nullability constraints of Glance schema in ImageMeta * Change docstring in test to comment * libvirt: convert GlusterFS driver to LibvirtBaseFileSystemVolumeDriver * libvirt: convert SMBFS vol driver to LibvirtBaseFileSystemVolumeDriver * libvirt: convert NFS volume driver to LibvirtBaseFileSystemVolumeDriver * Introduce LibvirtBaseFileSystemVolumeDriver * Add test to check relations at or below current * Add documentation for the nova-cells command * libvirt:Rsync remote FS driver was added * Clean the deprecated noauth middleware * Add os\_brick-based VolumeDriver for HGST connector * libvirt: add os\_admin\_user to use with set admin password * Fixed incorrect behaviour of method \_check\_instance\_exists * Squashing down update method * Fix the wrong file name for legacy v2 compatible wrapper functional test * Add scenario for API sample tests with legacy v2 compatible wrapper * Skip additionalProperties checks when LegacyV2CompatibleWrapper enabled * Libvirt: correct libvirt reference url link when live-migration failed * libvirt: enable virtio-net multiqueue * Replacing unichr() with six.unichr() and reduce with six.moves.reduce() * Fix resource leaking when consume\_from\_instance raise exception * :Add documentation for the nova-idmapshift command * RBD: Reading rbd\_default\_features from ceph.conf * New nova API call to mark nova-compute down * libvirt: move LibvirtISCSIVolumeDriver into it's own module * libvirt: move LibvirtNETVolumeDriver into it's own module * libvirt: move LibvirtISERVolumeDriver into it's own module * libvirt: move LibvirtNFSVolumeDriver into it's own module * allow live migration in case of a booted from volume instance * Handle MessageTimeout to MigrationPreCheckError * Create a new dictionary for type\_data in VMwareAPIVMTestCase class * resource tracker style pci resource management * Added missed '-' to the rest\_api\_version\_history.rst * Imported Translations from Transifex * Remove db layer hard-code permission checks for keypair * Fix a couple dead links in docs * cleanup NovaObjectDictCompat from virt\_cpu\_topology * Adding user\_id handling to keypair index, show and 
create api calls * Updated from global requirements * Remove legacy flavor compatibility code from Instance * libvirt: Fix root device name for volume-backed instances * Fix few typos in nova code and docs * Helper script for running under Apache2 * Raise NovaException for missing/empty machine-id * Fixed random failing of test\_describe\_instances\_with\_filters\_tags * libvirt: enhance libvirt to set admin password * libvirt: rework quiesce to not share "sensitive" informations * Metadata: support proxying loadbalancers * formely is not correct * Remove 'scheduled\_at' - DB cleanup * Remove unnecessary executable permission * Neutron: add in API method for updating VNIC index * Xen: convert image auto\_disk\_config value to bool before compare * Make BaseProxyTestCase.test\_proxy deterministic wrt traffic/verbose * Cells: Handle instance\_destroy\_at\_top failure * cleanup NovaObjectDictCompat from virtual\_interface * Fix test mock that abuses objects * VMware: map one nova-compute to one VC cluster * VMware: add serial port device * Handle SSL termination proxies for version list * Use urlencode instead of dict\_to\_query\_str function * libvirt: move LibvirtSMBFSVolumeDriver into it's own module * libvirt: move LibvirtAOEVolumeDriver into it's own module * libvirt: move LibvirtGlusterfsVolumeDriver into it's own module * libvirt: move LibvirtFibreChannelVolumeDriver into it's own module * VMware: set create\_virtual\_disk\_spec method as local * Retry live migration on pre-check failure * Handle config drives being stored on rbd * Change List objects to use obj\_relationships * Fixes delayed instance lifecycle events issue * libvirt-vif: Allow to configure a script on bridge interface * Include DiskFilter in the default list * Adding support for InfiniBand SR-IOV vif type * VMware: Add support for swap disk * libvirt: Add logging for dm-crypt error conditions * Service group drivers forced\_down flag utilization * libvirt: Replace stubs with mocks for test\_dmcrypt * clarify docs on 2.9 API change * Remove db layer hard-code permission checks for instance\_get\_all\_hung\_in\_rebooting * Undo tox -e docs pip install sphinx workaround * Set autodoc\_index\_modules=True so tox -e docs builds module docs again * Allow NUMA based reporting for Monitors * libvirt: don't add filesystem disk to parallels containers unconditionally * objects: add hw\_vif\_multiqueue\_enabled image property * Prepare for unicode enums from Oslo * rootwrap: remove obsolete filters for baremetal * Create class hierarchy for tasks in conductor * return more details on assertJsonEqual fail * Fix IronicHostManager to skip get\_by\_host() call * Store correct VirtCPUTopology * Add documentation for block device mapping * Show 'locked' information in server details * VMware: add resource limits for disk * VMware: store extra\_specs object * VMware: Resource limits for memory * VMware: create common object for limits, reservations and shares * VMware: add support for cores per socket * Add DiskNotFound and VolumeNotFound test * Not check rotation at compute level * Instance destroyed if ironic node in CLEANWAIT * Ironic: Better handle InstanceNotFound on destroy() * Fix overloading of block device on boot by device name * tweak graphviz formatting for readability * libvirt: rename parallels driver to virtuozzo * libvirt: Add macvtap as virtual interface (vif) type to Nova's libvirt driver * cells: document upgrade limitations/assumptions * rebuild: make sure server is shut down before volumes are detached * Implement 
compare-and-swap for instance update * docs: add a placeholder link to mentoring docs * libvirt: Kill rsync/scp processes before deleting instance * Updated from global requirements * Add console allowed origins setting * libvirt: move the LibvirtScalityVolumeDriver into it's own module * libvirt: move the LibvirtGPFSVolumeDriver into it's own module * libvirt: move the LibvirtQuobyteVolumeDriver into the quobyte module * libvirt: move volume/remotefs/quobyte modules under volume subdir * Add missing policy for limits extension * Move to using ovo's remotable decorators * Base NovaObject on VersionedObject * Document when we should have a microversion * libvirt: do relative block rebase only with non-null base * Add DictOfListOfStrings type of field * Get py34 subunit.run test discovery to work * Enable python34 tests for nova/tests/unit/scheduler/test\*.py * libvirt: mark NUMA huge page mappings as shared access * libvirt:Add a driver API to inject an NMI * virt: convert hardware module to use nova.objects.ImageMeta 12.0.0.0b2 ---------- * Replace openssl calls with cryptography lib * libvirt: move lvm/dmcrypt/rbd\_utils modules under storage subdir * Fix Instance object usage in test\_extended\_ips tests * Fix test\_extended\_server\_attributes for proper Instance object usage * Fix test\_security\_groups to use Instance object properly * Refactor test\_servers to use instance objects * Switch to using os-brick * Updated from global requirements * VMware: remove redundant check for block devices * Remove unused decorator on attach/detach volume * libvirt: test capability for supports\_migrate\_to\_same\_host * Added removing of tags from instance after its deletion * Remove unused import of the my\_ip option from the manager * Scheduler: enhance debug messages for multitenancy aggregates * VMware: Handle missing vmdk during volume detach * Running microversion v2.6 sample tests under '/v2' endpoint * VMware: implement get\_mks\_console() * Add MKS protocol for remote consoles * Add MKS console support * libvirt: improve logging in the driver.py code * Fix serializer supported version reporting in object\_backport * Updated from global requirements * Revert "Add error message to failed block device transform" * tox: make it possible to run pep8 on current patch only * Fix seven typos on nova documentation * Add two fields to ImageMetaProps object * Check flavor type before add tenant access * Switch to the oslo\_utils.fileutils * hypervisor support matrix: fix snapshot for libvirt Xen * libvirt: implement get\_device\_name\_for\_instance * libvirt: Always default device names at boot * Remove unused import of the compute\_topic option from the DB API * Remove unused call to \_get\_networks\_by\_uuids() * libvirt: fix disk I/O QOS support with RBD * Updated from global requirements * Remove unnecessary oslo namespace import checks * VMware: Fixed redeclared CONF = cfg.CONF * Execute \_poll\_shelved\_instances only if shelved\_offload\_time is > 0 * Switch to oslo.reports * Support Network objects in set\_network\_host * Fix Filter Schedulers doc to refer to all\_filters * Fixup uses of mock in hyperv tests * Cleanup log lines in nova.image.glance * Revert "Add config drive support for Virtuozzo containers" * Virt: fix debug log messages * Virt: use flavor object and not flavor dict * Add VersionPredicate type of field * Remove unnecessary method in FilterScheduler * Use utf8\_bin collation on the flavor extra-specs table in MySQL * docs: clear between current vs future plans * cleanup 
NovaObjectDictCompat subclassing from pci\_device * libvirt: make unit tests concise by setup guest object * libvirt: introduce method to wait for block device job * Decouple instance object tests from the api fakes module * Fixed typos in self parameter * Hyper-V: restart serial console workers after instance power change * Only work with ipv4 subnet metadata if one exists * Do not import using oslo namespace * Refresh instance info cache within lock * Remove db layer hard-code permission checks for fixed\_ip\_associate\_\* * Add middleware filterout Microversions http headers * Correct backup\_type param description * Fix a request body template for secgroup tests * Images: fix invalid exception message * Updated from global requirements * rebuild: fix rebuild of server with volume attached * objects: send PciDeviceList 1.2 to all code that can handle it * Fix libguestfs failure in test\_can\_resize\_need\_fs\_type\_specified * Fix the incorrect PciDeviceList version number * objects: Don't import CellMapping from the objects module * Deprecate the osapi\_v3.enabled option * Remove conductor api from resource tracker * Fix test\_tracker object mocks * Fix Python 3 issues in nova.utils and nova.tests * Remove db layer hard-code permission checks for instance\_get\_all\_by\_host\_and\_not\_type * Support all\_tenants search\_opts for neutron * libvirt : remove broken olso\_config choices option * Convert instance\_type to object in prep\_resize * VMware: clean up exceptions * Revert "Remove useless db call instance\_get\_all\_hung\_in\_rebooting" * VMware: Use virtual disk size instead of image size * Remove db layer hard-code permission checks for provider\_fw\_rule\_\* * Remove db layer hard-code permission checks for archive\_deleted\_rows\* * Revert "Implement compare-and-swap for instance update" * Add tool to build a doc latex pdf * make test\_save\_updates\_numa\_topology stable across python versions * Update HACKING.rst for running tests and building docs * Cleanup quota\_class unittest with appropriate request context * Remove db layer hard-code permission checks for quota\_class\_create/update * Remove db layer hard-code permission checks for quota\_class\_get\_all\_by\_name * Improve functional test base for microversion * Remove db layer hard-code permission checks for reservation\_expire * Introducing new forced\_down field for a Service object * Use stevedore for loading monitor extensions * libvirt: Remove dead code path in method clear\_volume * Switch to oslo.service library * Include project\_id in instance metadata * Convert test\_compute\_utils to use Instance object * Fix for mock-1.1.0 * Port crypto to Python 3 * Add HostMapping object * Remove useless db call instance\_get\_all\_hung\_in\_rebooting * Cleanup unused method fake\_set\_snapshot\_id * Handle KeyError when volume encryption is not supported * Expose Neutron network data in metadata service * Build Neutron network data for metadata service * Implement compare-and-swap for instance update * Added method exists to the Tag object * Add DB2 support * compute: rename ResizeClaim to MoveClaim * Fix the little spelling mistake of the comment * Remove db layer hard-code permission checks for quota\_create/update * Fix the typo from \_pre\_upgrade\_294 to \_pre\_upgrade\_295 for tests/unit/db/test\_migration * Ironic:check the configuration item api\_max\_retries * Modified testscenario for micro version 2.4 * Add some notifications to the evacuate path * Make evacuate leave a record for the source compute host to 
process * Fix incorrect enum in Migration object and DB model * Refactoring of the os-services module * libvirt: update docstring in blockinfo module for disk\_info * Ignore bridge already exists error when creating bridge * libvirt: handle rescue flag first in blockinfo.get\_disk\_mapping * libvirt: update volume delete snapshot to use Guest * libvirt: update live snapshot to use Guest object * libvirt: update swap volume to use Guest * libvirt: introduce GuestBlock to wrap around Block API * libvirt: rename GuestVCPUInfo to VCPUInfo * libvirt: save the memory state of guest * removed unused method \_get\_default\_deleted\_value * Remove flavor migration from db\_api and nova-manage * Rework monitor plugin interface and API * Adds MonitorMetric object * virt: add get\_device\_name\_for\_instance to the base driver class * libvirt: return whether a domain is persistent * Cells: fix indentation for configuration variable declaration * VMware: add unit tests for vmops attach and detach interface * Remove unneeded OS\_TEST\_DBAPI\_ADMIN\_CONNECTION * Switch from MySQL-python to PyMySQL * virt: fix picking CPU topologies based on desired NUMA topology * Port test\_exception to Python 3 * devref: virtual machine states and transitions * Consolidate the APIs for getting consoles * Remove db layer hard-code permission checks for floating\_ip\_dns * Fix typo in model doc string * virt: Fix AttributeError for raw image format * log meaningful error message on download exception * Updated from global requirements * Add bandit for security static analysis testing * Handle unexpected clear events call * Make on\_shared\_storage optional in compute manager * snapshot: Add device\_name to the snapshot bdms * compute: Make swap\_volume with resize updates BDM size * Make Nova better at keeping track of volume sizes in BDM * API: make sure a blank volume with no size is rejected * Ironic: Improve driver logs * Drop MANIFEST.in - it's not needed with PBR * Libvirt: Define system\_family for libvirt guests * Convert RT compute\_node to be a ComputeNode object * glance:check the num\_retries option * tests: Move test\_resource\_tracker to Instance objects * Remove compat\_instance() * Enable python34 tests for nova/tests/unit/objects/test\*.py * Soft delete system\_metadata when destroy instance * Remove python3 specific test-requirements file * Try luksFormat up to 3 times in case the device is in use * rootwrap: update ln --symbolic filter for FS and FC type volume drivers * Add wording to error message in TestObjectVersions.test\_relationships * Close temporary files in virt/disk/test\_api.py * Add BlockDeviceType enum field * Add BlockDeviceDestinationType enum field * Add BlockDeviceSourceType enum field * Avoid recursion in object relationships test * tests: move a test to the proper class in test\_resource\_tracker * Remove db layer hard-code permission checks for network\_set\_host * Block subtractive operations in migrations for Kilo and beyond * Remove db layer hard-code permission checks for network\_disassociate * libvirt: Correct domxml node name * Test relationships of List objects * libvirt: configuration for interface driver options * Fix Python 3 issues in nova.db.sqlalchemy * Update test\_db\_api for oslo.db 2.0 * Fix is\_image\_extendable() thinko * Validate maximum limit for quota * utils: ignore block device mapping in system metadata * libvirt: add in missing doc string for hypervisor\_version * Remove useless policy rule from fake\_policy.py * Replace ascii art architecture diagram 
with svg image * Adds MonitorMetricTypeField enum field * Unfudge tox -e genconfig wrt missing versionutils module * virt: update doctrings * hypervisor support matrix: add feature "evacuate" * XenAPI: Refactor rotate\_xen\_guest\_logs to avoid races * hypervisor support matrix: add feature "serial console" * hypervisor support matrix: add CLI commands to features * Fix typos detected by toolkit misspellings * hypervisor support matrix: fix "evacuate" for s390 and hyper-v * Make live migration create a migration object record * Cells: add instance cell registration utility to nova-manage * fix typos in docs * Logging corrected * Check mac for instance before disassociate in release\_fixed\_ip * Add the rule of separate plugin for Nova REST API in devref * Use flavor object in compute manager 12.0.0.0b1 ---------- * Changes conf.py for Sphinx build because oslosphinx now contains GA * Fix testing object fields with missing instance rows * Change group controller of V2 test cases * Reduce window for allocate\_fixed\_ip / release\_fixed\_ip race in nova-net * Make NoValidHost exceptions clearer * Hyper-V: Fixes method retrieving free SCSI controller slot on V1 * Refactor network API 'get\_instance\_nw\_info' * Removed extra '-' from rest\_api\_version\_history.rst * Remove an useless variable and fix a typo in api * VMware: convert driver to use nova.objects.ImageMeta * Bypass ironic server not available issue * Fix test\_create\_security\_group\_with\_no\_name * Remove unused "id" and "rules" from secgroup body * cells: add devstack/tempest-dsvm-cells-rc for gating * Add common function for v2.1 API flavor\_get * Fix comment typo * Fix up instance flavor usage in compute and network tests * Fix up ec2 tests for flavors on instances * Fix up xenapi tests for instance flavors * Fix up some bits of resource\_tracker to use instance flavors * Register the vnc config options under group 'vnc' * Cells: cell scheduler anti-affinity filter * Cells: add in missing unit test for get\_by\_uuid * VMware driver: Increasing speed of downloading image * Hyper-V: Fix virtual hard disk detach * Add flag to force experimental run of db contract * Make readonly field tests use exception from oslo.versionedobjects * Fixes "Hyper-V destroy vm fails on Windows Server 2008R2" * Add microversion to allow server search option ip6 for non-admin * Updated from global requirements * VMware: Handle port group not found case * Imported Translations from Transifex * libvirt: use correct translation format * Add explicit alembic dependency * network: add more debug logging context for race bug 1249065 * Add virt resource update to ComputeNode object * xenapi: remove bittorrent entry point lookup code * Use oslo-config-generator instead of generate\_sample.sh * Add unit tests for PCI utils * Support flavor object in migrate\_disk\_and\_power\_off * Remove usage of WritableLogger from oslo\_log * libvirt: Don't fetch kernel/ramdisk files if you already have them * Allow non-admin to list all tenants based on policy * Remove redundant policy check from security\_group\_default\_rule * Return bandwidth usage after updating * Update version for Liberty * neutron: remove deprecated allow\_duplicate\_networks config option * Validate maximum limit for integer * Improve the ability to resolve capabilities from Ironic * Fix the wrong address ref when the fixed\_ip is invalid * The devref for Nova stable API * Fix wrong check when use image in local * Fixes TypeError when libvirt version is BAD\_LIBVIRT\_CPU\_POLICY\_VERSIONS 
12.0.0a0 -------- * Remove hv\_type translation shim for powervm * cells: remove deprecated mute\_weight\_value option * Make resize api of compute manager to send flavor object * VMware: detach cinder volume when instance destroyed * Add unit tests for the exact filters * test: add MatchType helper class as equivalent of mox.IsA * Validate int using utils.validate\_integer method * VMware: use min supported VC version in fake driver * Updated from global requirements * Added documentation around database upgrades * Avoid always saving flavor info in instance * Warn when CONF torrent\_base\_url is missing slash * Raise invalid input if use invalid ip for network to attach interface * Hyper-V: Removes old instance dirs after live migration * DB downgrades are no longer supported * Add Host Mapping table to API Database * VMware: verify vCenter server certificate * Implement online schema migrations * Hyper-V: Fixes live migration configdrive copy operation * Avoid resizing disk if the disk size doesn't change * Remove openstack/common/versionutils module * Fix TestObjEqualPrims test object registration * Remove references to suds * VMware: Remove configuration check * Remove and deprecate conductor task\_log methods * Remove unused compute utils methods * Make instance usage audit use the brand new TaskLog object * Add a TaskLog object * Updated from global requirements * Fix noVNC console access for an IPv6 setup * hypervisor support matrix: add status "unknown" * VMware: typo fix in config option help * Sync with latest oslo-incubator * Associating of floating IPs corrected * Minor refactor in nova.scheduler.filters.utils * Cleanup wording for the disable\_libvirt\_livesnapshot workaround option * Remove cell api overrides for force-delete * libvirt: convert imagebackend to support nova.virt.image.model classes * virt: convert disk API over to use nova.virt.image.model * Cells: Skip initial sync of block\_device\_mapping * Pass Down the Instance Name to Ironic Driver * Handle InstanceNotFound when sending instance update notification * Add an index to virtual\_interfaces.uuid * Updated from global requirements * Add config drive support for Virtuozzo containers * Update formatting of microversion 2.4 documentation * Consolidates scheduler utils tests into a single file * Send Instance object to cells instance\_update\_at\_top * VMware: use vCenter instead of VC * fix "down" nova-compute service spuriously marked as "up" * Improve formatting of rest\_api\_version\_history * Link to microversion history in docs * libvirt: fix live migration handling of disk\_info * libvirt: introduce method to get domain XML * libvirt: introduce method detach\_device to Guest object * Remove db layer hard-code permission checks for quota\_usage\_update * pass environment variables of proxy to tox * Remove db layer hard-code permission checks for quota\_get\_all\_\* * Fixed some misspellings * Clean up Fake\_Url for unit test of flavor\_access * Updated from global requirements * Add AggregateTypeAffinityFilter multi values support * volume: log which encryptor class is being used * VMware: Don't raise exception on resize of 0 disk * Hyper-V: sets supports\_migrate\_to\_same\_host capability * libvirt: remove \_get\_disk\_xml to use get\_disk from Guest * libvirt: introduce method to attach device * libvirt: update tests to use Mock instead of MagicMock * libvirt: Remove unnecessary JSON conversions * objects: fix parsing of NUMA cpu/mem properties * compute: remove get\_image\_metadata method * compute: 
only use non\_inheritable\_image\_properties if snapshotting * objects: add os\_require\_quiesce image property * libvirt: make default\_device\_names DRY-er * virt: Move building the block\_device\_info dict into a method * Objects: update missing adapter types * Add error handling for creating secgroup * libvirt: handle code=38 + sigkill (ebusy) in destroy() * Removed a non-conditional 'if' statement * Map uuid db field to instance\_uuid in BandwidthUsage object * Hyper-V: Fix missing WMI namespace issue on Windows 2008 R2 * Replace metaclass registry with explicit opt-in registry from oslo * Fix an objects layering violation in compute/api * Remove assertRemotes() from objects tests * Use fields from oslo.versionedobjects * Convert test objects to new field formats * Begin the transition to an explicit object registry * Set default event status to completed * Add a hacking rule for consistent HTTP501 message * Add and use raise\_feature\_not\_supported() * Objects: fix typo with exception * Remove useless volume when boot from volume failed * Hyper-V: Lock snapshot operation using instance uuid * Refactor show\_port() in neutron api * Ironic: Don't report resources for nodes without instances * libvirt: Remove unit tests for \_hard\_reboot * Adds hostutilsv2 to HyperV * libvirt: introduce method to delete domain config * libvirt: introduce method to get vcpus info * libvirt: Don't try to confine a non-NUMA instance * Removed explicit return from \_\_init\_\_ method * libvirt: introduce method resume to Guest object * libvirt: introduce method poweroff to Guest object * libvirt: make \_create\_domain return a Guest object * Raise InstanceNotFound when save FK constraint fails * Updated from global requirements * Add new VIF type VIF\_TYPE\_TAP * libvirt: Disable NUMA for broken libvirt * Handle FlavorNotFound when augmenting migrated flavors * virt: convert VFS API to use nova.virt.image.model * virt: convert disk mount API to use nova.virt.image.model * virt: introduce model for describing local image metadata * Remove unused instance\_group\_policy db calls * Improve compute swap\_volume logging * libvirt: introduce method get\_guest to Host object * libvirt: introduce a Guest to wrap around virDomain * Remove unused exceptions * Extract helper method to get image metadata from volume * Fix \_quota\_reserve test setup for incompatible type checking * Fixes referenced path in nova/doc/README.rst * Updated from global requirements * Handle cells race condition deleting unscheduled instance * Compute: tidy up legacy treatment for vif types * Allow libvirt cleanup completion when serial ports already released * objects: define the ImageMeta & ImageMetaProps objects * Ensure to store context in thread local after spawn/spawn\_n * Ironic: Parse and validate Node's properties * Hyper-V: Fix SMBFS volume attach race condition * Remove unit\_test doc * Make blueprints doc a reference for nova blueprints * Remove jenkins, launchpad and gerrit docs * Prune development.environment doc * docs: fixup libvirt NUMA testing docs to match reality * Fix some issues in devref for api\_microversions * nova response code 403 on block device quota error * Updated from global requirements * Remove unused variables from images api * Compute: improve logging using {} instead of dict * snapshot: Copy some missing attrs to the snapshot bdms * bdm: Make sure that delete\_on\_termination is a boolean * Get rid of oslo-incubator copy of middleware * Make nova-manage handle completely missing flavor information * Use 
oslo\_config choices support * Let soft-deleted instance\_system\_metadata readable * Make InstanceExternalEvent use an Enum for status * Add error message to failed block device transform * network: fix instance cache refresh for empty list * Imported Translations from Transifex * Add common function for v2 API flavor\_get * Remove cell policy check * VMware: replace hardcoded strings with constants * Add missing @require\_context * Standardize on assertJsonEqual in tests * Tolerate iso style timestamps for cells rpc communication * Force the value of LC\_ALL to be en\_US.UTF-8 * libvirt: disconnect\_volume does not return anything * Remove hash seed comment from tox.ini * Allow querying for migrations by source\_compute only * libvirt: Do not cache number of CPUs of the hypervisor * Create instance\_extra entry if it doesn't update * Ignore Cinder error when shutdown instance * Remove use of builtin name * Hyper-V: Fixes cold migration / resize issue * Fix cells capacity calculation for n:1 virt drivers * VMware: Log should use uuid instead of name * VMware: fill in instance metadata when resizing instances * VMware: fill in instance metadata when launching instances * Add the swap and ephemeral BDMs if needed * Updated from global requirements * Block oslo.vmware 0.13.0 due to a backwards incompatible change * hypervisor support matrix: update libvirt KVM (s390x) * Hyper-V: ensure only one log writer is spawned per VM * Prevent access to image when filesystem resize is disabled * Share admin password func test between v2 and v2.1 * VMware: remove dead function in vim\_util * Fix version unit test on Python 3 * Resource tracker: remove invalid conductor call from tests * Remove outdated TODO comment * Disable oslo.vmware test dependency on Python 3 * Run tests with PyMySQL on Python 3 * Drop explicit suds dependency * improve speed of some ec2 keypair tests * Add nova object equivalence based on prims * Cleanups for pci stats in preparation for RT using ComputeNode * Replace dict.iteritems() with six.iteritems(dict) * Add a maintainers file * virt: make sure convert\_all\_volumes catches blank volumes too * compute utils: Remove a useless context parameter * make SchedulerV3PassthroughTestCase use NoDBTest * Don't use dict.iterkeys() * VMware: enforce minimum support VC version * Split up and improve speed of keygen tests * Replace dict(obj.iteritems()) with dict(obj) * libvirt: Fix cpu\_compare tests and a wrong method when logging * Detect empty result when calling objects.BlockDeviceMapping.save() * remove \_rescan\_iscsi from disconnect\_volume\_multipath\_iscsi * Use six.moves.range for Python 3 * Use EnumField for instance external event name * Revert "Detach volume after deleting instance with no host" * Removed unused methods and classes * Removed unused variables * Removed unused "as e/exp/error" statements * Resource tracker: use instance objects for claims * Remove db layer hard-code permission checks for security\_group\_default\_rule\_destroy * Avoid AttributeError at instance.info\_cache.delete * Remove db layer hard-code permission checks for network\_associate * Remove db layer hard-code permission checks for network\_create\_safe * Pass project\_id when create networks by os-tenant-networks * Disassociate before deleting network in os-tenant-networks delete method * Remove db layer hard-code permission checks for v2.1 cells * Move unlock\_override policy enforcement into V2.1 REST API layer * tests: libvirt: Fix test\_volume\_snapshot\_delete tests * Add a finish log * 
Add nova-idmapshift to rootwrap filters * VMware: Missing docstring on parameter * Update docs layout * Add note to doc explaining scope * Show 'reserved' status in os-fixed-ips * Split instance event/tag correctly * libvirt: deprecate libvirt version usage < 0.10.2 * Fix race between resource audit and cpu pinning * Set migration\_type for existing cold migrations and resizes * Add migration\_type to Migration object * Add migration\_type and hidden to Migration database model * libvirt: improve logging * Fix pip-missing-reqs * objects: convert HVSpec to use named enums * objects: convert VirtCPUModel to use named enums * Ironic: Fix delete instance when spawning * Retry a cell delete if host constraint fails * objects: introduce BaseEnumField to allow subclassing * Add policy to cover snapshotting of volume backed instances * objects: add a FlexibleBoolean field type * Don't update RT status when set instance to ERROR * Delete shelved\_\* keys in n-cpu unshelve call * Fix loading things in instance\_extra for old instances * VMware: remove invalid comment * neutron: log hypervisor\_macs before raising PortNotUsable * VMware: use get\_object\_properties\_dict from oslo.vmware * VMware: use get\_datastore\_by\_ref from oslo.vmware * Unshelving volume backed instance fails * Avoid useless copy in get\_instance\_metadata() * Fix raise syntax for Python 3 * Replace iter.next() with next(iter) * libvirt: use instance UUID with exception InstanceNotFound * devref: add information to clarify nova scope * Refactor an unit test to use urlencode() * Additional cleanup after compute RPC 3.x removal * Drop compute RPC 3.x support * libvirt: deprecate the remove\_unused\_kernels config option * Updated from global requirements * libvirt: Use 'relative' flag for online snapshot's commit/rebase operations * Remove db layer hard-code permission checks for quota\_destroy\_all\_\* * Replace unicode with six.text\_type * Replace dict.itervalues() with six.itervalues(dict) * Use compute\_node consistently in ResourceTracker * Fix the wrong comment in the test\_servers.py file * Move ebrctl to compute.filter * libvirt: handle NotSupportedError in compareCPU * Hypervisor Support Matrix renders links in notes * Update fake flavor's root and ephemeral disk size * Code clean up db.instance\_get\_all\_by\_host() * use block\_dev.get\_bdm\_swap\_list in compute api * Catch SnapshotNotFound exception at os-volumes * Rename \_CellProxy.iteritems method to items on py3 * Overwrite NovaException message * API: remove unuseful expected error code from v2.1 service delete api * Fix quota-update of instances stuck in deleting when nova-compute startup finish * API: remove admin require from certificate\_\* from db layer * API: Add policy enforcement test cases for pci API * API: remove admin require for compute\_node(get\_all/search\_by\_hyperviso) from db * API: remove admin require for compute\_node\_create/update/delete from db layer * API: remove admin require from compute\_node\_get\_all\_by\_\* from db layer * Share deferred\_delete func tests between v2 and v2.1 * VMware: add support for NFS 4.1 * Compute: remove reverts\_task\_state from interface attach/detach * VMware: ensure that the adapter type is used * Fix failure of stopping instances during init host * Share assisted vol snapshots test between v2 and v2.1 * Compute: use instance object for \_deleted\_old\_enough method * API: remove instance\_get\_all\_by\_host(\_and\_node) hard-code admin check from db * Remove db layer hard-code permission checks for 
service\_get\_by\_host\* * Remove db layer hard-code permission checks for service\_get\_by\_compute\_host * Detach volume after deleting instance with no host * libvirt: safe\_decode xml for i18n logging * Fix scheduler issue when multiple-create failed * Move our ObjectListBase to subclass from the Oslo one * Fix cinder v1 warning with cinder\_catalog\_info option reference * Deprecate nova ironic driver's admin\_auth\_token * Handle return code 2 from blkid calls * Drop L from literal integer numbers for Python 3 * Libvirt: Use tpool to invoke guestfs api * Minor edits to support-matrix doc * hacking: remove unused variable author\_tag\_re * Update kilo version alias * Refactor tests that use compute's deprecated run\_instance() method * Helper scripts for running under Apache2 * downgrade log messages for memcache server (dis)connect events * don't report service group connection events as errors in dbdriver * Updated from global requirements * Switch to \_set\_instance\_obj\_error\_state in build\_and\_run\_instance * Add SpawnFixture * Log the actual instance.info\_cache when empty in floating ip associate * unify libvirt driver checks for qemu * VMware: Allow other nested hypervisors (HyperV) * servicegroup: remove get\_all method never used as public * libvirt: add todo note to avoid call to libvirt from the driver * libvirt: add method to compare cpu to Host * libvirt: add method to list pci devices to Host * libvirt: add method to get device by name to Host * libvirt: add method to define instance to host * libvirt: add method to get cpu stats to host * monitor: remove dependance with libvirt * Clean up ComputeManager.\_get\_instance\_nw\_info * Updated from global requirements * Cells: Call compute api methods with instance objects * Correct docstring info on two parameters * Start the conversion to oslo.versionedobjects * Cleanup conductor unused methods * Revert "Ironic: do not destroy if node is in maintenance" * fix network setup on evacuate * Reschedules sometimes do not allocate networks * Incorrect argument order passed to swap\_volume * Mark ironic credential config as secret * Fix missing format arg in compute manager * objects: remove field ListOfEnumField * Cleaning up debug messages from previous change in vmops.py * Remove orphaned tables - iscsi\_targets, volumes * console: clean tokens do not happen for all kind of consoles * Fix import order * Skip only one host weight calculation * Fix typo for test cases * VMWare: Isolate unit tests from requests * Imported Translations from Transifex * Cleanup docs landing page * Updated from global requirements * Add ability to inject routes in interfaces.template * tests: make API signature test also check static function * Make test\_version\_string\_with\_package\_is\_good work with pbr 0.11 * Fix disconnect\_volume issue when find\_multipath\_device returns None * Updated from global requirements * Fix assert on call count for encodeutils.safe\_decode mock * Don't wait for an event on a resize-revert * minor edit to policy\_enforcement.rst * Update self with db result in InstanceInfoCache.save * libvirt: retry to undefine network filters during \_post\_live\_migration * Wedge DB migrations if flavor migrations are not complete * Removed twice declared variables * Removed variables used not in the scope that they are declared * libvirt: add method to get hardware info to Host * libvirt: avoid call of listDefinedDomains when post live migration * Remove unused db.aggregate\_metadata\_get\_by\_metadata\_key() call * Removed 
'PYTHONHASHSEED=0' from tox.ini * Changed logic in \_compare\_result api\_samples\_test\_base * Convert bandwidth\_usage related timestamp to UTC native datetime * Drop use of 'oslo' namespace package 2015.1.0 -------- * Add a method to skip cells syncs on instance.save * Add some testing for flavor migrations with deleted things * Add support for forcing migrate\_flavor\_data * Virt: update shared storage log information message * Fixed functional in tests\_servers, to pass with random PYTHONHASHSEED * Adds toctree to v2 section of docs * Fixes X509 keypair creation failure * Update rpc version aliases for kilo * libvirt/utils.py: Remove 'encryption' flag from create\_cow\_image * Libvirt: Correct logging information and progress when LM * libvirt/utils.py: Remove needless code from create\_cow\_image * libvirt/utils.py: Clarify comment in create\_cow\_image function * Fix documentation for scheduling filters * libvirt: check qemu version for NUMA & hugepage support * Add security group calls missing from latest compute rpc api version bump * Add security group calls missing from latest compute rpc api version bump * Make objects serialize\_args() handle datetimes in positional args * Imported Translations from Transifex * view hypervisor details rest api should be allowed for non-admins * n-net: turn down log level when vif isn't found in deallocate\_fixed\_ip * Associate floating IPs with first v4 fixed IP if none specified * Correct the help text for the compute option * Convert NetworkDuplicated to HTTPBadRequest for v2.1 API * Remove comment inconsistent with code * Remove db layer hard-code permission checks for fixed\_ip\_get\_\* * Fixed nova-network dhcp-hostsfile update during live-migration * Remove db layer hard-code permission checks for network\_get\_all\_by\_host * Remove db layer hard-code permission checks for security\_group\_default\_rule\_create * Remove db layer hard-code permission checks for floating\_ips\_bulk * sync oslo: service child process normal SIGTERM exit * Remove downgrade support from the cellsv2 api db * Fix migrate\_flavor\_data() to catch instances with no instance\_extra rows * libvirt: use importutils instead of python built-in 2015.1.0rc2 ----------- * Imported Translations from Transifex * Updated from global requirements * Control create/delete flavor api permissions using policy.json * Add config option to disable handling virt lifecycle events * Ironic: pass injected files through to configdrive * libvirt: Allow discrete online pCPUs for pinning * Fix migrate\_flavor\_data() to catch instances with no instance\_extra rows * libvirt: unused imported option default\_ephemeral\_format * libvirt: introduce new method to guest tablet device * Fix migrate\_flavor\_data string substitution * Object: Fix incorrect parameter set in flavor save\_extra\_specs * Fix max\_number for migrate\_flavor data * remove downgrade support from our database migrations * Add policy check for extension\_info * Cleanup unnecessary session creation in floating\_ip\_deallocate * Fix inefficient transaction usage in floating\_ip\_bulk\_destroy * Control create/delete flavor api permissions using policy.json * Fix handling of pci\_requests in consume\_from\_instance * Use list of requests in InstancePCIRequests.obj\_from\_db * Add numa\_node field to PciDevicePool * scheduler: re-calculate NUMA on consume\_from\_instance * VMware: remove unused method * VMware: enable configuring of console delay * Don't query compute\_node through service object in nova-manage * Fixed test 
in test\_tracker to work with random PYTHONHASHSEED * Update rpc version aliases for kilo * remove the CONF.allow\_migrate\_to\_same\_host * Fix kwargs['migration'] KeyError in @errors\_out\_migration decorator * Add equality operators to PciDeviceStats and PciDevice objects * libvirt: Add option to ssh to prevent prompting * Validate server group affinity policy * VMware: use oslo.vmware methods for handling tokens * Remove db layer hard-code permission checks for network\_get\_associated\_fixed\_ips * tests: use numa xml automatic generation in libvirt tests * Resource tracker: unable to restart nova compute * Include supported version information * Release Import of Translations from Transifex * Fixed tests in test\_glance to pass with random PYTHONHASHSEED * Refactored tests in test\_neutron\_driver to pass with random PYTHONHASHSEED * refactored test in vmware test\_read\_write\_util to pass with random PYTHONHASHSEED * fixed tests in test\_matchers to pass with random PYTHONHASHSEED * fix for vmware test\_driver\_api to pass with random PYTHONHASHSEED * Update hypervisor support matrix with kvm on system z * Fix kwargs['migration'] KeyError in @errors\_out\_migration decorator * VMware: remove unused parameter for VMOPS spawn * libvirt: make \_get\_instance\_disk\_info conservative * refactored tests to pass in test\_inject to pass with random PYTHONHASHSEED * fixed tests in test\_iptables\_network to work with random PYTHONHASHSEED * refactored tests in test\_objects to pass with random PYTHONHASHSEED * fixed tests in test\_instance to pass with random PYTHONHASHSEED * Replace ssh exec calls with paramiko lib * Fix handling of pci\_requests in consume\_from\_instance * Use list of requests in InstancePCIRequests.obj\_from\_db * Share hide server add tests between v2 and v2.1 * Share V2 and V2.1 images functional tests * change the reboot rpc call to local reboot * 'deleted' filter does not work properly * Spelling mistakes in nova/compute/api.py * Use kwargs from compute v4 proxy change\_instance\_metadata * Delay STOPPED lifecycle event for all domains, not just Xen * Use kwargs from compute v4 proxy change\_instance\_metadata * compute: stop handling virt lifecycle events in cleanup\_host() * Replace BareMetalDriver with IronicDriver in option help string * tests: introduce a NUMAServersTest class * Fix test\_set\_admin\_password\_bad\_state() * Fix test\_attach\_interface\_failure() * Fix test\_swap\_volume\_api\_usage() * Resource tracker: unable to restart nova compute * Forbid booting of QCOW2 images with virtual\_size > root\_gb * Pass migrate\_data to pre\_live\_migration * Fixed order of arguments during execution live\_migrate() * update .gitreview for stable/kilo * Add min/max of API microversions to version API * VMware: Fix attribute error in resize * Release bdm constraint source and dest type * Fix check\_can\_live\_migrate\_destination() in ComputeV4Proxy * compute: stop handling virt lifecycle events in cleanup\_host() * Store context in local store after spawn\_n * Fixed incorrect dhcp\_server value during nova-network creation * Share multiple create server tests between v2 and v2.1 * Remove power\_state.BUILDING * libvirt: cleanup unused lifecycle event handling variables from driver * Add min/max of API microversions to version API * Pass migrate\_data to pre\_live\_migration * libvirt: add debug logging to pre\_live\_migration * Don't ignore template argument in get\_injected\_network\_template * Refactor some service tests and make them not require db * Remove 
and deprecate unused conductor service calls * Convert service and servicegroup to objects * Add numa\_node field to PciDevicePool * Ironic: do not destroy if node is in maintenance * libvirt: remove unnecesary quotes * VMware: fix log warning * libvirt: quit early when mempages requested found * VMware: validate CPU limits level * Remove and deprecate conductor get\_ec2\_ids() * Remove unused metadata conductor parameter * Replace conductor get\_ec2\_ids() with new Instance.ec2\_ids attribute * Add EC2Ids object and link to Instance object as optional attribute * neutron: reduce complexity of allocate\_for\_instance (security\_groups) * neutron: reduce complexity of allocate\_for\_instance (requested\_networks) * Avoid indexing into an empty list in getcallargs * Fixed order of arguments during execution live\_migrate() * Fix check\_can\_live\_migrate\_destination() in ComputeV4Proxy 2015.1.0rc1 ----------- * Add compute RPC API v4.0 * Reserve 10 migrations for backports * Honor uuid parameter passed to nova-network create * Update compute version alias for kilo * Refactor nova-net cidr validation in prep for bug fix * Fix how service objects are looked up for Cells * websocketproxy: Make protocol validation use connection\_info * scheduler: re-calculate NUMA on consume\_from\_instance * Prevent scheduling new external events when compute is shutdown * Print choices in the config generator * Manage compute node that exposes no pci devices * libvirt: make fakelibvirt more customizable * Use cells.utils.ServiceProxy object within cells\_api * Fix Enum field, which allows unrestricted values * consoleauth: Store access\_url on token authorization * tests: add a ServersTestBase class * tests: enhance functional tests primitives * libvirt: Add version check when pinning guest CPUs * Open Liberty development * xenapi: pull vm\_mode and auto\_disk\_config from image when rescue * VMware: Fix attribute error in resize * Allow \_exec\_ebtables to parse stderr * Fix rebuild of an instance with a volume attached * Imported Translations from Transifex * Stacktrace on live migration monitoring * Add 'docker' to the list of known hypervisor types * Respect CONF.scheduler\_use\_baremetal\_filters * Make migration 274 idempotent so it can be backported * Add 'suspended' lifecycle event * Fix how the Cells API is returning ComputeNode objects * Ironic: fix log level manipulation * Fix serialization for Cells Responses * libvirt: fix disablement of NUMA & hugepages on unsupported platforms * Optimize periodic call to get\_by\_host * Fix multipath device discovery when UFN is enabled * Use retrying decorator from oslo\_db * virt: Make sure block device info is persisted * virt: Fix block\_device tests * instance termination with update\_dns\_entries set fails * Filter fixed IPs from requested\_networks in deallocate\_for\_instance * Fixes \_cleanup\_rbd code to capture ImageBusy exception * Remove old relation in Cells for ComputeNode and Service * consoleauth: remove an instance of mutation while iterating * Add json-schema for v2.1 fixed-ips * Share V2 and V2.1 tenant-networks functional tests * Share migrations tests between V2 and V2.1 * Merging instance\_actions tests between V2 and V2.1 * Share V2 and V2.1 hosts functional tests * Add serialization of context to FakeNotifier * Handle nova-network tuple format in legacy RPC calls * remove usage of policy.d which isn't cached * Update check before migrating flavor * Expand Origin header check for serial console * libvirt: reuse unfilter\_instance 
pass-through method * No need to create APIVersionRequest every time * Libvirt: preallocate\_images CONFIG can be arbitrary characters * Add some tests for the error path(s) in RBD cleanup\_volumes() * VMware: add instance to log messages * Hyper-V: checks for existent Notes in list\_instance\_notes * Fix incorrect statement in inline neutronv2 docs * Imported Translations from Transifex * Vmware:Find a SCSI adapter type for attaching iSCSI disk * Avoid MODULEPATH environment var in config generator * Be more forgiving to empty context in notification * Store cells credentials in transport\_url properly * Fix API links and labels * Stale rc.local file - vestige from cloudpipe.rst * Remove stale test + opensssl information from docs * Add the last of the oslo libraries to hacking check * Cancel all waiting events during compute node shutdown * Update hypervisor support matrix for ironic wrt pause/suspend * Scheduler: deprecate mute\_weight\_value option on weigher * Pass instance object to add\_instance\_fault\_from\_exc * Remove dead vmrc code * Add vnc\_keymap support for vmware compute * Remove compute/api.py::update() * add ironic hypervisor type * Removes XML MIME types from v2 API information * API: fix typo in unit tests * Add field name to error messages in object type checking * Remove obsolete TODO in scheduler filters * Expand valid server group name character set * Raise exception when backup volume-backed instance * Libvirt SMB volume driver: fix volume attach * Adds Compute API v2 docs * PCI tracker: make O(M \* N) clean\_usage algo linear * Fix v2.1 list-host to remove 'services' filter * Fix incorrect http\_conflict error message * Link to devstack guide for appropriate serial\_console instructions * Skip socket related unit tests on OSX * Add debug logging to quota\_reserve flow * Fix missing the cpu\_pinning request * Hyper-V: Sets \*DataRoot paths for instances * Refactored test in test\_neutron\_driver to pass with random PYTHONHASHSEED * fixed tests in test\_neutrounv2 to pass with random PYTHONHASHSEED * Refactored test in linux\_net to pass with random PYTHONHASHSEED * refactored tests in test\_wsgi to pass with random PYTHONHASHSEED * fixed tests in test\_simple\_tenant\_usage to pass with random PYTHONHASHSEED * Refactored test\_availability\_zone to work properly with random PYTHONHASHSEED * fixed test in test\_disk\_config to work with random PYTHONHASHSEED * Fixed test to work with random PYTHONHASHSEED * Fix \_instance\_action call for resize\_instance in cells * Add some logging in the quota.reserve flow * Check host cpu\_info if no cpu\_model for guest * Move ComputeNode creation at init stage in ResourceTracker * Releasing DHCP in nova-network fixed * Fix PCIDevicePool.to\_dict() when the object has no tags * Convert pci\_device\_pools dict to object before passing to scheduler * Sync from Oslo-Incubator - reload config files * Fix v2.1 hypervisor servers to return empty list * Add support for cleaning in Ironic driver * Adjust resource tracker for new Ironic states * Ironic: Remove passing Flavor's deploy\_{kernel, ramdisk} * don't 500 on invalid security group format * Adds cleanup on v2.2 keypair api and tests * Set conductor use\_local flag in compute manager tests * Use migration object in resource\_tracker * Move suds into test-requirements.txt * Make refresh\_instance\_security\_rules() handle non-object instances * Add a fixture for the NovaObject indirection API * Add missing \`shows\` to the RPC casts documentation * Override 
update\_available\_resources interval * Fix for deletes first preexisting port if second was attached to instance * Avoid load real policy from policy.d when using fake policy fixture * Neutron: simplify validate\_networks * Switch to newer cirros image in docs * Fix common misspellings * Scheduler: update doctring to use oslo\_config * Skip 'id' attribute to be explicitly deleted in TestCase * Remove unused class variables in extended\_volumes * libvirt: remove volume\_drivers config param * Make conductor use instance object * VMware: add VirtualVmxnet3 to the supported network types * Fix test cases still use v3 prefix * Typo in oslo.i18n url * Fix docs build break * Updated from global requirements * Fix typo in nova/tests/unit/test\_availability\_zones.py * mock out build\_instances/rebuild\_instance when not used * Make ComputeAPIIpFilterTestCase a NoDBTestCase * Remove vol\_get\_usage\_by\_time from conductor api/rpcapi * default tox cmd should also run 'functional' target * VMware: Consume the oslo.vmware objects * Release bdm constraint source and dest type * VMware: save instance object creation in test\_vmops * libvirt: Delay only STOPPED event for Xen domain * Remove invalid hacking recheck for baremetal driver * Adds Not Null constraint to KeyPair name * Fix orphaned ports on build failure * VMware: Fix volume relocate during detach 2015.1.0b3 ---------- * Fix AggregateCoreFilter return incorrect value * Remove comments on API policy, remove core param * Add policy check for consoles * Sync from oslo-incubator * Rename and move the v2.1 api policy into separated files * Disable oslo\_messaging debug logging * heal\_instance\_info\_cache\_interval help clearer * Forbid booting of QCOW2 images with virtual\_size > root\_gb * don't use oslo.messaging in mock * BDM: Avoiding saving if there were no changes * Tidy up sentinel comparison in pop\_instance\_event * Tidy up dict.setdefault() usage in prepare\_for\_instance\_event * Remove duplicate InvalidBDMVolumeNotBootable * libvirt: make default value of numa cell memory to 0 when not defined * Add the instance update calls from Compute * Save bdm.connection\_info before calling volume\_api.attach\_volume * Add InstanceMapping object * Add CellMapping object * load ram\_allocation\_ratio when asked * Remove pci\_device.update\_device helper function * Tox: reduce complexity level to 35 * Remove db layer hard-code permission checks for service\_get\_all * Expand help message on some quota config options * Test fixture for the api database * remove duplicate calls to cfg.get() * Remove context from remotable call signature * Actually stop passing context to remotable methods * Remove usage of remotable context parameter in service, tag, vif * Remove usage of remotable context parameter in security\_group\* * Remove usage of remotable context parameter in pci\_device, quotas * let fake virt track resources * doc: fix a docstext formatting * Update unique constraint of compute\_nodes with deleted column * Modify filters to get instance info from HostState * Add the RPC calls for instance updates * Implement instance update logic in Scheduler * Log exception from deallocate\_port\_for\_instance for triage * Remove usage of remotable context parameter in migration, network * Remove usage of remotable context parameter in compute\_node, keypair * Remove usage of remotable context parameter in instance\* objects * Remove usage of remotable context parameter in fixed\_ip, flavor, floating\_ip * Remove usage of remotable context parameter in 
ec2 object * libvirt: partial fix for live-migration with config drive * Added assertJsonEqual method to TestCase class * VMware: Improve reporting of path test failures * libvirt test\_cpu\_info method fixed random PYTHONHASHSEED compatibility * Remove usage of remotable context parameter in bandwidth, block\_device * Remove usage of remotable context parameter in agent, aggregate * Remove db layer hard-code permission checks for pci * Objects: use setattr rather than dict syntax in remotable * Split out NovaTimestampObject * libvirt: Resize down an instance booted from a volume * add neutron api NotImplemented test cases for Network V2.1 * Stop using exception.message * Remove unused oslo logging fixture * libvirt: don't allow to resize down the default ephemeral disk * Add api microvesion unit test case for wsgi.action * Change some comments for instance param * Hyper-V: Adds VMOps unit tests (part 2) * Add get\_api\_session to db api * Use the proper database engine for nova-manage * Add support for multiple database engines * Virt: update fake driver to use UUID as lookup key * VMware: use instance UUID as instance name * VMware: update test\_vm\_util to use instance object * Handle exception when doing detach\_interface * Variable 'name' already declared in 'for' loop * Handle RESIZE\_PREP status when nova compute do init\_instance * Move policy enforcement into REST API layer for v2.1 api volume\_attachment * Remove the elevated context when get network * Handles exception when unsupported virt-type given * Fix confusing log output in nova/nova/network/linux\_net.py * Workaround for race condition in libvirt * remove unneeded teardown related code * Fixed archiving of deleted records * libvirt: Remove minidom usage in driver.py * Stop spamming logs when creating context * Fix ComputeNode backport for Service.obj\_make\_compatible * Break out the child version calculation logic from obj\_make\_compatible() * Fix PciDeviceDBApiTestCase with referential constraint checking * Verify all quotas before updating the db * Add shadow table empty verification * Add @wrap\_exception() for 3 compute functions * Remove FK on service\_id and make service\_id nullable * Using Instance object instead of db call * Revert "Removed useless method \_get\_default\_deleted\_value." 
* Remove db layer hard-code permission checks for network\_count\_reserved\_ips * implement user negative testing for flavor manage * refactor policy fixtures to allow use of real policy * libvirt: remove unnecessary flavor parameter * Compute: no longer need to pass flavor to the spawn method * Update some ResizeClaimTestCase tests * Move InstanceClaimTestCase.test\_claim\_and\_audit * Handle exception when attaching interface failed * Deprecate Nova in tree EC2 APIs * cells: don't pass context to instance.save in instance\_update\_from\_api * ensure DatabaseFixture removes db on cleanup * objects: introduce numa topology limits objects * Add a test that validates object backports and child object versions * Fix ArchiveTestCase on MySQL due to differing exceptions * VMware: fix VM rescue problem with VNC console * VMware: Deprecation warning - map one nova-compute to one VC cluster * compute: don't trace on InstanceNotFound in reverts\_task\_state * Fix backporting objects with sub-objects that can look falsey * neutron: deprecate 'allow\_duplicate\_networks' config option * Fix Juno nodes checking service.compute\_node * Fix typo in \_live\_migration\_cleanup\_flags method * libvirt: add in missing translation for exception * Move policy enforcement into REST API layer for v2.1 extended\_volumes * Remove useless policy rules for v2.1 api which removed/disabled * Remove db layer hard-code permission checks for service\_get\_all\_by\_\* * Fix infinite recursion caused by unnecessary stub * Websocket Proxy should verify Origin header * Improve 'attach interface' exception handling * Remove unused method \_make\_stub\_method * Remove useless get\_one() method in SG API * Fix up join() and leave() methods of servicegroup * network: Fix another IPv6 test for Mac * Add InstanceList.get\_all method * Use session with neutronclient * Pass correct context to get\_by\_compute\_node() * Revert "Allow force-delete irrespective of VM task\_state" * Fix kwargs['instance'] KeyError in @reverts\_task\_state decorator * Fix copy configdrive during live-migration on HyperV * Move V2 sample files to respective directory * V2 tests -Reuse server post req/resp sample file * V2.1 tests - Reuse server post req/resp sample file * Remove an unused config import in nova-compute * Raise HTTPNotFound for Port/NetworkNotFound * neutronv2: only create client once when adding/removing fixed IPs * Stop stacktracing in \_get\_filter\_uuid * libvirt: Fix live migration failure cleanup on ceph * Sync with latest oslo-incubator * Better logging of resources * Preserve preexisting ports on server delete * Move oslo.vmware into test-requirements.txt * Remove db layer hard-code permission checks for network\_get\_by\_uuid * Refactor \_regex\_instance\_filter for testing * Add instance\_mappings table to api database * ec2: clean up in test\_cinder\_cloud * Remove unused method queue\_get\_for * Remove make\_ip\_dict method which is not used * Remove unused method delete\_subnet * Remove unused method disable\_vlan * Remove unused method get\_request\_extensions * Fix wrong log output in nova/nova/tests/unit/fake\_volume.py * Updated from global requirements * Remove db layer hard-code permission checks for network\_get\_by\_cidr * Add cell\_mappings table to api database * Ban passing contexts to remotable methods * Fix a remaining case of passing context to a remotable in scheduler * Fix several cases of passing context to quota-related remotable methods * Fix some cases of passing context to remotables with security groups * 
Replace RPC topic-based service queries with binary-based in cells * Replace RPC topic-based service queries with binary-based in scheduler * Fix some straggling uses of passing context to remotable methods in tests * VMware: remove code invoking deprecation warning * Fix typo in nova/scheduler/filters/utils.py * Remove db layer hard-code permission checks for network\_delete\_safe * Don't add exception instance in LOG.exception * Move policy enforcement into REST API layer for v2.1 servers * Move policy enforcement into REST API layer for v2.1 api attach\_interfaces * Remove db layer hard-code permission checks for flavor-manager * Remove db layer hard-code permission checks for service\_delete/service\_get * Remove db layer hard-code permission checks for service\_update * Fix 'nova show' return incorrect mac info * Use controller method in all admin actions tests * Remove db layer hard-code permission checks for flavor\_access * Modify filters so they can look to HostState * let us specify when samples tests need admin privs * Updated from global requirements * Remove cases of passing context to remotable methods in Flavor * Remove cases of passing context to remotable methods in Instance * Fix up PciDevice remotable context usage * libvirt: add comment for vifs\_already\_plugged=True in finish\_migration * neutron: check for same host in \_update\_port\_binding\_for\_instance * Move policy enforcement into REST API layer for v2.1 security groups * Keep instance state if lvm backend not impl * Replace RPC topic-based service queries in nova/api with binary-based * Remove service\_get\_by\_args from the DB API * Remove usage of db.service\_get\_by\_args * Make unit tests inherit from test.NoDBTestCase * Fixed incorrect behavior of method sqlalchemy.api.\_check\_instance\_exists * Remove db layer hard-code permission checks for migrations\_get\* * vmware: support both hard and soft reboot * xenapi: Fix session tests leaking state * libvirt: Cleanup snapshot tests * Change instance disappeared during destroy from Warning to Info * Replace instance flavor delete hacks with proper usage * Add delattr support to base object * Use flavor stored with instance in vmware driver * Use flavor stored with instance in ironic driver * Modify AggregateAPI methods to call the Scheduler client methods * Create Scheduler client methods for aggregates * Add update and delete \_aggregate() method to the Scheduler RPC API * Instantiate aggregates information when HostManager is starting * Add equivalence operators to NUMACell and NUMAPagesTopology * Adds x509 certificate keypair support * Better round trip for RequestContext<->Dict conversion * Make scheduler client reporting use ComputeNode object * Prevent update of ReadOnlyDict * Copy the default value for field * neutron: add logging during nw info\_cache refresh when port is gone * Add info for Standalone EC2 API to cut access to Nova DB * VMware: Fix disk UUID in instance's extra config * Update config generator to use new style list\_opts discovery * Avoid KeyError Exception in extract\_flavor() * Imported Translations from Transifex * Updated from global requirements * Move policy enforcement into REST API layer for v2.1 create backup * Truncate encoded instance sys meta to 255 or less * Adds keypair type in nova-api * Switch nova.virt.vmwareapi.\* to instance dot notation * Allow disabling the evacuate cleanup mechanism in compute manager * Change queries for network services to use binary instead of topic * Add Service.get\_by\_host\_and\_binary 
and ServiceList.get\_by\_binary * Compute: update config drive settings on instance * Fix docstrings for assorted methods * Config driver: update help text for force\_config\_drive * libvirt-numa.rst: trivial spelling fixes * Ensure bridge deleted with brctl delbr * create noauth2 * enhance flavor manage functional tests * Add API Response class for more complex testing * Add more log info around 'not found' error * Remove extended addresses from V2.1 update & rebuild * Switch nova.virt.hyperv.\* to instance dot notation * Revert instance task\_state when compareCPU fails * Libvirt: Fix error message when unable to preallocate image * Switch nova.virt.libvirt.\* to instance dot notation * Add nova-manage commands for the new api database * Add second migrate\_repo for cells v2 database migrations * Updated from global requirements * Force LANGUAGE=en\_US in test runs * neutron: consolidate common unbind ports logic * Sync oslo policy change * Remove compute\_node field from service\_get\_by\_compute\_host * Fix how the Service object is loading the compute\_node field * Remove compute\_node from service\_get\_by\_cn Cells API method * Remove want\_objects kwarg from nova.api.openstack.common.get\_instance * Switch nova.virt.\* to use the object dot notation * add string representation for context * Remove db layer hard-code permission checks for migration\_create/update * Disables pci plugin for v2.1 & microversions * Fix logic for checking if az can be updated * Add obj\_alternate\_context() helper * libvirt: remove libvirt import from tests so we only use fakelibvirt * capture stdout and logging for OSAPIfixture test * remove unused \_authorize\_context from security\_group\_default\_rules.py * Switch nova.context to actually use oslo.context * Fixed incorrect indent of test\_config\_read\_only\_disk * Fixed incorrect assertion in test\_db\_api * Remove TranslationFixture * Replace fanout to False for CastAsCall fixture * Make ConsoleAuthTokensExtensionTestV21 inherit from test.NoDBTestCase * Remove db layer hard-code permission checks for task\_log\_get\* * Remove db layer hard-code permission checks for task\_log\_begin/end\_task * Api: remove unusefull compute api from cells * Remove db layer hard-code permission checks for service\_create * Imported Translations from Transifex * Change v3 import to v21 in 2.1 api unit test * Fix NotImplementedError handling in interfaces API * Support specifing multiple values for aggregate keys * Remove attach/detach/swap from V2.1 extended\_volumes * Make metadata cache time configurable * Remove db layer hard-code permission checks for fixed\_ip\_disassociate\_all\_by\_timeout * Move policy enforcement into REST API layer for v2.1 api assisted\_volume\_snapshots * Fix tiny typo in api microversions doc * Fixes Hyper-V: configdrive is not migrated to destination * ensure that ram is >= 1 in random flavor creation * Fixes 500 error message and traces when no free ip is left * db: Add index on fixed\_ips updated\_at * Display host chosen for instance by scheduler * PYTHONHASHSEED bug fix in test\_utils * fixed tests in test\_vm\_util to work with random PYTHONHASHSEED * Add microversion allocation on devref * Remove OS-EXT-IPS attributes from V2.1 server ips * Remove 'locked\_by' from V2.1 extended server status * Remove 'id' from V2.1 update quota\_set resp * Fix bad exception logging * VMware: Ensure compute\_node.hypervisor\_hostname is unique * Inherit exceptions correctly * Remove en\_US translation * Move policy enforcement into REST API layer 
for v2.1 cloudpipe * Move policy enforcement into REST API layer for v2.1 security\_group\_default\_rules * linux\_net.metadata\_accept(): IPv6 support * Enforce in REST API layer on v2.1 api remote consoles * Remove accessips attribute from V2.1 POST server resp * Move policy enforcement into REST API layer for v2.1 floating\_ip\_dns * Fix bad interaction between @wsgi.extends and @wsgi.api\_version * Enforce in REST API layer on v2.1 shelve api * Move policy enforcement into REST API layer for v2.1 api evacuate * Add manual version comparison to microversion devref document * Switch to uuidutils from oslo\_utils library * Add developer documentation for writing V2.1 API plugins * Convert nova.compute.\* to use instance dot notation * Better power\_state logging in \_sync\_instance\_power\_state * Use instance objects in fping/instance\_actions/server\_metadata * Fix misspellings words in nova * Fix KeyErrors from incorrectly formatted NovaExceptions in unit tests * Move policy enforcement into REST API layer for v2.1 floating ips * Switch nova.network.\* to use instance dot notation * Revert : Switch off oslo.\* namespace check temporarily * Move policy enforcement into REST API layer for v2.1 networks related * Remove db layer hard-code permission checks for v2.1 agents * Move v2.1 virtual\_interfaces api policy enforcement into REST API layer * fix 'Empty module name' exception attaching volume * Use flavor stored with instance in libvirt driver * Handle 404 in os-baremetal-nodes GET * API: Change the API cpu\_info to be meaning ful * Updated from global requirements * Make compute unit tests inherit from test.NoDBTestCase * Request objects in security\_groups api extensions * Reuse is\_int\_like from oslo\_utils * VMware: fix network connectivity problems * Move policy enforcement into REST API layer for v2.1 admin password * Fix the order of base classes in migrations test cases * Libvirt: Allow missing volumes during delete * Move policy enforcement into REST API layer for v2.1 server\_diagnostics * Fix wrong log when reschedule is disabled * Replace select-for-update in fixed\_ip\_associate * Move policy enforcement into REST API layer for v2.1 fping * Consolidate use api request version header * Copy image from source host when ImageNotFound * VMware: update get\_available\_datastores to only use clusters * Add useful debug logging when policy checks fail * Remove unused conductor methods * Call notify\_usage\_exists() without conductor proxying * Updated from global requirements * Make notifications use BandwidthUsageList object * libvirt: Fix migration when image doesn't exist * Fix a typo of devref document for api\_plugin * console: add unit tests for baseproxy * libvirt: log host capabilities on startup * Allow configuring proxy\_host and proxy\_port in nova.conf * Fixes novncproxy logging.setup() * Add descriptions to some assertBooleans * Remove update\_store usage * Enforce policy checking in REST API layer for v2.1 server\_password * Add methods that convert any volume BDM to driver format * Split scheduler weight test on ram * Split scheduler weight test on metrics * Split scheduler weight test on ioops * Fix 500 when deleting a not existing ec2 security group * Remove backwards compat oslo.messaging entries from setup.cfg * Change utils.vpn\_ping() to return a Boolean * Enable retry when there are multiple force hosts/nodes * Use oslo.log * switch LOG.audit to LOG.info * Add catch FlavorExtraSpecsNotFound in V2 API * tests: remove duplicate keys from dictionary * Add 
blkid rootwrap filter * Fix idempotency of migration 269 * objects: fix issue in test cases for instance numa * VMware: Accept image and block device mappings * nova flavor manage functional test * extract API fixture * Fix V2 hide server address functional tests * Remove unused touch command filter * Add a test for block\_device\_make\_list\_from\_dicts * Move policy enforcement into REST API layer for v2.1 floating\_ip\_pools * libvirt: address test comments for zfcp volume driver changes * libvirt: Adjust Nova to support FCP on System z systems * Fix BM nodes extension to deal with missing node properties * VMware: update the support matrix for security groups * Ignore 'dynamic' addr flag on gateway initialization * Adds xend to rootwrap.d/compute.filters * Create volume in the same availability zone as instance * Wrap IPv6 address in square brackets for scp/rsync * fake: fix public API signatures to match virt driver * Added retries in 'network\_set\_host' function * Use NoDBTestCase instead of TestCase * Change microversion header name * VMware: ensure that resize treats CPU limits correctly * Compute: pass flavor object to migrate\_disk\_and\_power\_off * extract method from fc volume discovery * Set instance NUMA topology on HostState * Support live-migrate of instances in PAUSED state * Fix DB access by FormatMappingTestCase * api: report progress when instance is migrating * libvirt: proper monitoring of live migration progress * libvirt: using instance like object * libvirt: convert tests from mox to mock * XenAPI: Fix data loss on resize up * Delete instance files from dest host in revert-resize * Pass the capabilities to ironic node instance\_info * No need to re-fetch instance with sysmeta * Switch nova.api.\* to use instance dot notation * Objectify calls to service\_get\_by\_compute\_host * Refactor how to remove compute nodes when service is deleted * Move policy enforcement into REST API layer for v2.1 admin actions * Contrail VIF Driver changes for Nova-Compute * libvirt : Fix slightly misleading parameter name, validate param * libvirt: cleanup setattr usage in test\_host * libvirt: add TODOs for removing libvirt attribute stubs * Expand try/except for get\_machine\_ips * Switch nova.compute.manager to use instance dot notation * libvirt: stub out VIR\_CONNECT\_LIST\_DOMAINS\_INACTIVE * libvirt: stub out VIR\_SECRET\_USAGE\_TYPE\_ISCSI for older libvirt * Change calls to service information for Hypervisors API * Add handling for offlined CPUs to the nova libvirt driver * Make compute API create() use BDM objects * Remove redundant tearDown from ArchiveTestCase * libvirt: switch LibvirtConnTestCase back to NoDBTestCase * Replace usage of LazyPluggable by stevedore driver * Don't mock time.sleep with None * Libvirt: Support ovs plug in vhostuser vif * Removed duplicate key from dictionary * Fixes Attribute Error when trying to spawn instance from vhd on HyperV * Remove computenode relationship on service\_get * Remove nested service from DB API compute\_nodes * libvirt: Use XPath instead of loop in \_get\_interfaces * fixed tests to work with random PYTHONHASHSEED * Imported Translations from Transifex * Make the method \_op\_method() public * Quiesce boot from volume instances during live snapshot * Fix "Host Aggregate" section of the Nova Developer Guide * network: Fix another IPv6 test for Mac * Pre-load default filters during scheduler initialization * Libvirt: Gracefully Handle Destroy Error For LXC * libvirt: stub VIR\_CONNECT\_LIST\_DOMAINS\_ACTIVE for older 
libvirts * Fix VNC access when reverse DNS lookups fail * Remove now useless requirements wsgiref * Add JSON schema for v2.1 add network API * Handle MessagingException in unshelving instance * Compute: make use of dot notation for console access * Compute: update exception handling for spice console * Add missing api samples for floating-ips api(v2) * Move v2.1 rescue api policy enforcement into REST API layer * Move policy enforcement into REST API layer for v2.1 ips * Move policy enforcement into REST API layer for v2.1 multinic * Move policy enforcement into REST API layer for v2.1 server\_metadata * VMware: fix resize of ephemeral disks * VMware: add in a utility method for detaching devices * VMware: address instance resize problems * Fixes logic in compute\_node\_statistics * Cover ListOfObjectField for relationship test * Replace oslo-incubator with oslo\_context * Libvirt: add in unit tests for driver capabilities * Ironic: add in unit tests for driver capabilities * Tests: Don't require binding to port 4444 * libvirt: fix overly strict CPU model comparison in live migration * Libvirt: vcpu\_model support * IP filtering is not accurate when used with limit * Change how the API is getting a list of compute nodes * Change how Cells are getting the list of compute nodes * Change how HostManager is calling the service information * Move scheduler.host\_manager to use ComputeNode object * patch out nova libvirt driver event thread in tests * Change outer to inner join in fixed IP DB API func * Small cleanup in pci\_device\_update * Remove useless NotFound exception catching for v2/v2.1 fping * V2.1 cleanup: Use concrete NotFound exception instead of generic * Drop deprecated namespace for oslo.rootwrap * Add vcpu\_model to instance object * Pass instance primitive to instance\_update\_at\_top() * Adds infrastructure for microversioned api samples * Libvirt: Support for generic vhostuser vif * Pull singleton config check cruft out of SG API * hacking: Got rid of unnecessary TODO * Remove unused function in test * Remove unused function * hardware: fix reported host mempages in numa cell * objects: fix numa obj relationships * objects: remove default values for numa cell * Move policy enforcement into REST API layer for v2.1 suspend/resume server * Move policy enforcement into REST API layer for v2.1 api console-output * Move policy enforcement into REST API layer for v2.1 deferred\_delete * Move migrate-server policy enforce into REST API * Add API schema for v2.1 tenant networks API * Move policy enforcement into REST API layer for v2.1 lock server * Libvirt: cleanup rescue lvm when unrescue * Sync simple\_tenant\_usage V2.1 exception with V2 and add test case * IP filtering can include duplicate instances * Add recursive flag to obj\_reset\_changes() * Compute: use dot convention for \_poll\_rescued\_instances * Add tests for nova-manage vm list * libvirt: add libvirt/parallels to hypervisor support matrix * Compute: update reboot\_instance to use dot instance notation * Fix incorrect compute api config indentation * libvirt: fix emulator thread pinning when doing strict CPU pinning * libvirt: rewrite NUMA topology generator to be more flexible * libvirt: Fix logically inconsistent host NUMA topology * libvirt: utils now canonicalize the image architecture property * A couple of grammar fixes in help strings * Implement api samples test for os-baremetal-nodes Part 2 * Compute: use consistent instance dot notation * Log warning if CONF.my\_ip is not found on system * libvirt: remove
\_destroy\_instance\_files shim * virt: Fix interaction between disk API tests * network: Fix IPv6 tests for Mac * Use dot notation on instance object fields in \_delete\_instance * libvirt: memnodes should be set to a list instead of None * Cleanup add\_fixed\_ip\_to\_instance tests * Cleanup test\_instance\_dns * Fix detach\_sriov\_ports to get context to be able to get image metadata * Implement api samples test for os-baremetal-nodes * Fix description of parameters in nova functions * Stop making the database migration backend lazy pluggable * Updated from global requirements * Libvirt: Created Nova driver for Quobyte * Adds keypair type database migration * libvirt: Enable serial\_console feature for system z * Make tests use sha256 as openssl default digest algorithm * Improved performance of db method network\_in\_use\_on\_host * Replace select-for-update in floating\_ip\_allocate\_address * Move policy enforcement into REST API layer for v2.1 pause server * Libvirt: update log message * Update usage of exception MigrationError * Extract preserve ephemeral on rebuild from servers plugin * VMware: update get\_vm\_resize\_spec interface * VMware: Enable spawn from OVA image * Raise bad request for missing 'label' in tenant network * CWD is incorrectly set if exceptions are thrown * VMware: add disk device information to VmdkInfo * Use controller methods directly in test\_rescue * Call controller methods directly in test\_multinic * Add version specific test cases for microversion * Change v2.1 API status to CURRENT * Remove wsgi\_app usage from test\_server\_actions * Change some v2.1 extension names to v2 * Add VirtCPUModel nova objects * Add enum fieldtype field * Convert v2.1 extension\_info to show V2 API extension list * Remove compatibility check for ratelimit\_v3 * Keep instance state if ssh failed during migration * Cleanup and removal of unused code in scheduler unit tests * Fix incorrect use of mock in scheduler test * Make test re-use HTTPRequest part 5 * Refactor test\_filter\_scheduler use of fakes * consolidate set\_availability\_zones usage * Warn about zookeeper service group driver usage * Updated from global requirements * Update matrix for kvm on ppc64 * Switch off oslo.\* namespace check temporarily * Switch to using oslo\_\* instead of oslo.\* * Adjust object\_compat wrapper order * Add more tests for tenant network API * Sync with oslo-incubator * Make compute use objects usage 'best practice' * Enable BIOS bootmenu on AMI-based images 2015.1.0b2 ---------- * libvirt: fix console device for system z for log file * Fix references to non-existent "pause" section * libvirt: generate proper config for PCS containers * libvirt: add ability to add file and block based filesystem * libvirt: add ploop disks format support * Fix improper use of Stevedore * libvirt: Fail when live block migrating instance with volumes * Add notification for suspend * Add API schema for v2.1 networks API * Remove v1.1 from v2.1 extension description * Add \_LW for missing translations * Treat LOG.warning and LOG.warn same * Add JSON schema for v2.1 'quota\_class' API * Add missing setup.cfg entry for os-user-data plugin * Add api\_version parameter for API sample test base class * Add suggestion to dev docs for debugging odd test failures * Add max\_concurrent\_builds limit configuration * Fixes Hyper-V configdrive network injection issue * Update Power State after deleting instance * Remove temporary power state variables * Make obj\_set\_defaults() more useful * Adds devref for API
Microversions * PCI NUMA filtering * Ensure publisher\_id is set correctly in notifications * libvirt: Use XPath instead of loop in \_get\_all\_block\_devices * libvirt: Use XPath instead of loop in get\_instance\_diagnostics * fix typo in rpcapi docstring * Fix conductor servicegroup joining when zk driver is used * Do not treat empty key\_name as None * Failed to discovery when iscsi multipath and CHAP both enabled * Fix network tests response code checking * Remove unused error from v2.1 create server * Fix corrupting the object repository with test instance objects * Change cell\_type values in nova-manage * Fix bad mocking of methods on Instance * Updated from global requirements * VMware: fix resume\_state\_on\_host\_boot * Fix cells rpc connection leak * Remove redundant assert of mock volume save call * Don't create block device mappings in the API cell * Add formal doc recording hypervisor feature capability matrix * Ironic: Adds config drive support * libvirt-xen: Fix block device prefix and disk bus * libvirt-xen: don't request features ACPI or APIC with PV guest * Make EC2 compatible with current AWS CLI * libvirt: remove pointless loop after live migration finishes * Remove useless argparse requirement * add asserts of DriverBlockDevice save call parameters * fix call of DriverVolumeBlockDevice save in swap\_volume * Use a workarounds group option to disable live snaphots * libvirt : Add support for --interface option in iscsiadm * Cells: Fix service\_get\_by\_compute\_host * Expand instances project\_id index to cover deleted as well * Remove unused conductor parameter from get\_host\_availability\_zone() * Fixes Hyper-V instance snapshot * Add more status when do \_poll\_rebooting\_instances * Adds barbican keymgr wrapper * libvirt: avoid setting the memnodes where when it's not a supported option * Make code compatible with v4 auth and workaround webob bug * Fix likely undesired use of redirection * Save bdm in swap\_volume * doc: document manual testing procedure for serial-console * nova net-delete network is not informative enough * Improvement in 'network\_set\_host' function * Fix typo in nova/virt/disk/vfs/localfs.py * Fix expected error in V2.1 add network API * libvirt: fix failure when attaching volume to iso instance * Add log message to is\_luks function * Access migration fields like an object in finish\_revert\_resize * Remove unused migration parameter from \_cleanup\_stored\_instance\_types * object: serialize set to list * Fix leaking exceptions from scheduler utils * Adds tests for Hyper-V LiveMigration utils * Adds tests for Hyper-V VHD utils * libvirt: fix missing block device mapping parameter * libvirt: add QEMU built-in iSCSI initiator support * Add update\_or\_create flag to BDM objects create() * Typos fixed * Remove unused method from test\_metadata * libvirt: Support iSCSI live migration for different iSCSI target * Add JSON schema for "associate\_host" API * Add migrate\_flavor\_data to nova-manage * Adds logging to ComputeCapabilitiesFilter failures * Add flavor fields to Instance object * Fix up some instance object creation issues in tests * Fix misspellings in hardware.py * VMware: add in utility methods for copying and deleting disks * Apply v2.1 API to href of version API * Revert "Raise if sec-groups and port id are provided on boot" * libvirt: always pass image\_meta when getting guest XML * libvirt: assume image\_meta is non-None in blockinfo module * libvirt: always pass image meta when getting disk info from bdm * Calls to superclass' 
\_\_init\_\_ function is optional * Enforce DB model matches results of DB migrations * Add missing foreign keys for sqlite * Fix an indentation in server group api samples template * Allow instances to attach to shared external nets * Handle ironic\_client non-existent case * Cells: Record initial database split in devref * Use a workarounds option to disable rootwrap * virt: Fix images test interaction * libvirt: add parallels virt\_type * Convert nova-manage list to use Instance objects * Create a 'workarounds' config group * Updated from global requirements * don't use exec cat when we can use read * don't assert\_called\_once\_with with a real time * Network: correct VMware DVS port group name lookup * Refactor ComputeCapabilitiesFilter as bugfix preparation * libvirt: Set SCSI as the default cdrom bus on System z * Adds common policy authorizer helper functions for Nova V2.1 API * Adds skip\_policy\_check flag to Compute/Network/SecurityGroup API * Make test re-use HTTPRequest part 4 * libvirt: update uri\_whitelist in fakelibvirt.Connection * Revert "Adds keypair type database migration" * Support for ext4 as default filesystem for ephemeral disks * Raise NotFound if attach interface with invalid net id or port id * Change default value of multi\_instance\_display\_name\_template * Check for LUKS device via 'isLuks' subcommand * disk: use new vfs method and option to extend * Replace select-for-update in fixed\_ip\_associate\_pool * Remove unused content\_type\_params() * libvirt: always pass image meta when getting disk mapping * libvirt: always pass image meta when getting disk info * Add API schema for v2.1 server reboot actions * objects: fix typo in changelog of compute\_node * Add API schema for v2.1 'removeFloatingIp' * Add API schema for v2.1 'addFloatingIp' * Add parameter\_types.ip\_address for cleanup * Reply with a meaningful exception when ports are over the quota limit * Adds keypair type database migration * A minor change of CamelCase parameter * Imported Translations from Transifex * Remove N331 hacking rules * GET details REST API next link missing 'details' * Add missing indexes in SQLite and PostgreSQL * libvirt: cleanup warning log formatting in \_set\_host\_enabled * Revert temporary hack to monkey patch the fake rpc timeout * Remove H238 comment from tox.ini * libvirt: use image\_meta when looking up default device names * Fix bdm transformation for volume backed servers * Removed host\_id check in ServersController.update * Fix policy validation in JSONSchema * Adds assert\_has\_no\_errors check * Removed useless method \_get\_default\_deleted\_value * virt: make tests pass instance object to get\_instance\_disk\_info * libvirt: rename conn variable in LibvirtConnTestCase * Raise if sec-groups and port id are provided on boot * Begin using ironic's "AVAILABLE" state * Transform IPAddress to string when creating port * Break base service group driver class out from API * Remove unused \_get\_ip\_and\_port() * Updated from global requirements * Add method for getting the CPU pinning constraint * libvirt: Consider CPU pinning when booting * Make ec2/cloud.py use get\_instance\_availability\_zone() helper * HACKING.rst: Update the location of unit tests' README.rst * Remove unused method log\_db\_contents * Make use of controller method in test\_flavor\_manage * libvirt: Use XPath instead of loop in \_get\_disk\_xml * Avoid bdms db call when cleaning deleted instance * Ignore warnings from contextlib.nested * Cleanup bad JSON files * Switch to oslo.vmware API for 
reading and writing files * Make test re-use HTTPRequest part 1 * Make test re-use HTTPRequest part 2 * Make test re-use HTTPRequest part 3 * Remove HTTPRequestV3 in scheduler\_hints test * Hyper-V: Adds instance missing metrics enabling * ephemeral file names should reflect fs type and mkfs command * Reschedule queries to nova-scheduler after a timeout occurs * libvirt: remove use of utils.instance\_sys\_meta * libvirt: remove use of fake\_instance.fake\_instance\_obj * Remove redundant catch for InstanceNotFound * Add to\_dict() method to PciDevicePool object * libvirt: rename self.conn in LibvirtVolume{Snapshot||Usage}TestCase * libvirt: rename self.libvirtconnection in LibvirtDriverTestCase * libvirt: convert LibvirtConnTestCase to use fakelibvirt fixture * Remove unused network rpcapi calls * Added hacking rule for assertEqual(a in b, True/False) * Add API schema for v2.1 createImage API * Fix errors in string formatting operations * libvirt: Create correct BDM object type for conn info update * Fixes undocumented commands * Make \_get\_instance\_block\_device\_info preserve root\_device\_name * Convert tests to NoDBTestCase * Fixes Hyper-V should log a clear error message * Provide compatibliity for db.compute\_node\_statistics * Update network resource when shelve offload instance * Update network resource when rescheduling instance * libvirt: Expanded test libvirt driver * Adds "file" disk driver support to Xen libvirt driver * Virt: remove unused 'host' parameter from get\_host\_uptime * Don't translate logs in tests * Don't translate exceptions in tests * disk/vfs: introduce new option to setup * disk/vfs: introduce new method get\_image\_fs * initialize objects with context in block device * Remove unused controller instance in test\_config\_drive * Fix v2.1 os-tenant-networks/networks API * Use controller methods in test\_floating\_ips * Cleanup in test\_admin\_actions * Calling controller methods directly in test\_snapshots * Add checking changePassword None in \_action\_change\_password(v2) * Add more exceptions handle when change server password (v2) * Share admin\_password unit test between V2 & V2.1 * Share server\_actions unit test between V2 & V2.1 * Fix server\_groups schema on v2.1 API * Implement a safe copy.copy() operation for Nova models * clean up extension loading logging * Hyper-V: Fixes wrong hypervisor\_version * console: introduce baseproxy and update consoles cmd * libvirt: update get\_capabilities to Host class * libvirt: add get\_connection doc string in Host class * Enable check for H238 rule * Call ComputeNode instead of Service for getting the nodes * Remove mox dependency * Fix JSONFilter docs * libvirt: move \_get\_hypervisor\_\* functions to Host class * libvirt: don't turn time.sleep into a no-op in tests * Adds Hyper-V generation 2 VMs implementation * VMware: ensure that correct disk details are returned * Improve api-microversion hacking check * Add unit test for getting project quota remains * Fix py27 gate failure - test\_create\_instance\_both\_bdm\_formats * Reduce complexity of the \_get\_guest\_config method * Cleanups in preparation of flavor attributes on Instance * Add flavor column to instance\_extra table * docs: document manual testing procedure for NUMA support * Add setup/cleanup\_instance\_network\_on\_host api for neutron/nova-network * Remove useless requirements * Make get\_best\_cpu\_topology consider NUMA requested CPU topology * Make libvirt driver expose sibling info in NUMA topology * VMware: snapshot as stream-optimized 
image * VMware: refactor utility functions related to VMDK * Get settable user quota maximum correctly * Add missing policy for nova in policy.json * Fix typo in nfs\_mount\_options option description * increase fake rpc POLL\_TIMEOUT to 0.1s * work around for until-failure * Fix inconsistencies in the ComputeNode object about service * Fixed incorrect initialization of availability zone tests * Revert "initialize objects with context in block device" * Fix wrong instructions for rebuilding API samples * Performance: leverage dict comprehension in PEP-0274 * Sync with latest oslo-incubator * initialize objects with context in VirtualInterface object tests * initialize objects with context in Tag object tests * initialize objects with context in Service object tests * Fixes Hyper-V boot from volume live migration * Expansion of matching XML strings logic * Xenapi: Attempt clean shutdown when deleting instance * don't use debug logs for object validation * create some unit of work logging in n-net * Make service-update work in API cells * oslo: remove useless modules * Do not use deprecated assertRaisesRegexp() * Honor shared storage on resize revert * Stub out instance action events in test\_compute\_mgr * Remove unused instance\_group\_metadata\_\* DB APIs * initialize objects with context in block device * Reduce the complexity of the create() method * speed up tests setting fake rpc polling timeout * xenapi: don't send terminating chunk on errors * Make service-delete work in API cells * Add version as request param for fake HTTPRequest * Fix OverQuota headroom KeyError in nova-network allocate\_fixed\_ip * Updated from global requirements * Make numa\_usage\_from\_instances consider CPU pinning * Cleanup in admin\_actions(v2.1api) * Cache ironic-client in ironic driver * tests: fix handling of TIMEOUT\_SCALING\_FACTOR * libvirt: remove/revert pointless logic for getVersion call * libvirt: move capabilities helper into host.py * libvirt: move domain list helpers into Host class * libvirt: move domain lookup helpers into Host class * Fix live migration RPC compatibility with older versions * Added \_get\_volume\_driver method in libvirt driver * fix wrong file path in docstring of hacking.checks * Make ec2 auth support v4 signature format * VMware: driver not handling port other than 443 * libvirt: use XPath in \_get\_serial\_ports\_from\_instance * Remove non existent rule N327 from HACKING.rst * Replace Hacking N315 with H105 * Enable W292 * Fix and re-gate on H306 * Move to hacking 0.10 * Fix nova-manage shell ipython * Make service-list output consistent * Updated from global requirements * Make V2.1 servers filtering (--tenant-id) same as V2 * Fix failure rebuilding instance after resize\_revert * Move WarningsFixture after DatabaseFixture so emit once * libvirt: Use arch.from\_host instead of platform.processor * Cells: Improve invalid hostname handling * Fix obj\_to\_primitive() expecting the dict interface methods * Remove unused XML\_WARNING variable in servers API * Guard against missing X-Instance-ID-Signature header * libvirt: not setting membacking when mempages are empty host topology * remove pylint source code annotations * Cleanup XML for api samples tests for Nova REST API * remove all traces of pylint testing infrastructure * initialize objects with context in SecurityGroupRule object tests * initialize objects with context in SecurityGroup object tests * initialize objects with context in base object tests * initialize objects with context in Migration object tests * 
initialize objects with context in KeyPair object tests * initialize objects with context in InstanceNUMATopology object tests * initialize objects with context in InstanceGroup object tests * initialize objects with context in InstanceFault object tests * Fix error message when no IP addresses available * Update WSGI SSL IPv6 test and SSL certificates * Catch more specific exception in \_get\_power\_state * Add WarningsFixture to only emit DeprecationWarning once in a test run * Maintain the creation order for vifs * Update docstring for wrap\_exception decorator * Doc: Adds python-tox to Ubuntu dependencies * Added hacking rule for assertTrue/False(A in B) * ironic: use instance object in driver.py * Add LibvirtGPFSVolumeDriver class * Make pagination work with deleted marker * Return 500 when unexpected exception raising when live migrate v2 * Remove no need LOG.exception on attach\_interface * Make LOG exception use format\_message * make IptablesRule debug calls meaningful * Switch to tempest-lib's packaged subunit-trace * Update eventlet API in libvirt driver * initialize objects with context in Instance object tests * initialize objects with context in Flavor object tests * initialize objects with context in FixedIP object tests * initialize objects with context in EC2 object tests * initialize objects with context in ComputeNode object tests * initialize objects with context in BlockDeviceMapping object tests * Nuke XML support from Nova REST API - Phase 3 * Return floating\_ip['fixed\_ip']['instance\_uuid'] from neutronv2 API * Add handling of BadRequest from Neutron * Add numa\_node to PCIDevice * Nuke XML support from Nova REST API - Phase 2 * Remove unused methods in nova utils * Use get\_my\_ipv4 from oslo.utils * Add cpu pinning check to numa\_fit\_instance\_to\_host * Add methods for calculating CPU pinning * Remove duplicated policy check at nova-network FlatManager * boot instance with same net-id for multiple --nic * XenAPI: Check image status before uploading data * XenAPI: Refactor message strings to remove locals * Cellsv2 devref addition * Nuke XML support from Nova REST API - Phase 1 * hardware: fix numa topology from image meta data * Support both list and dict for pci\_passthrough\_whitelist * libvirt: Add balloon period only if it is not None * Don't assume contents of values after aggregate\_update * Add API schema for server\_groups API * Remove unused function \_get\_flavor\_refs in flavor\_access extension * Make rebuild server schema 'additionalProperties' False * Tests with controller methods in test\_simple\_tenant\_usage * Convert wsgi call to controller in test\_virtual\_interfaces * Fix the comment of host index api * Imported Translations from Transifex * Use controller methods directly in test\_admin\_password * Drop workarounds for python2.6 * VMware: add in utility method for copying files * Remove lock files when remove libvirt images * Change log when set\_admin\_password failed * Catch InstanceInvalidState for start/stop action * Unshelving a volume backed instance doesn't work * Cache empty results in libvirt get\_volume\_connector * VMware: improve the performance of list\_instances * VMware: use power\_off\_instance instead of power\_off * VMware: refactor unit tests to use \_get\_info * libvirt: clean instance's directory when block migration fails * Remove unused scheduler driver methods * Reuse methods from netutils * VMware: make use of oslo.vmware logout * Remove unused directory nova/tests/unit/bundle * Prevent new code from using 
namespaced oslo imports * Move metadata filtering logic to utils.py * Make test\_consoles to directly call controller methods * Catch expected exceptions in remote console controller * Make direct call to controller test\_server\_password * Cleanup in test\_keypairs not to use wsgi\_app * Add ipv6 support to fake network models * Add host field when missing from compute\_node * Remove condition check for python2.6 in test\_glance * Cleanup in test\_availability\_zone not to use wsgi\_app * Call controller methods directly in test\_evacuate * VMware: Use datastore\_regex for disk stats * Add support for clean\_shutdown to resize in compute api layer * Fix Instance relationships in two objects * objects: remove NovaObjectDictCompat from Tag object * libvirt: introduce new helper for getting libvirt domain * libvirt: remove pointless \_get\_host\_uuid method * libvirt: pass Host object into firewall class * Cleanup in server group unit tests * Enhance EvacuateHostTestCase test cases * Call controller methods directly in test\_console\_output * Make direct call to controller in test\_console\_auth\_tokens * Populates retry info when unshelve offloaded instance * Catch NUMA related exceptions for create server v2.1 API * Remove unnecessary cleanup from ComputeAPITestCase * extract RPC setup into a fixture 2015.1.0b1 ---------- * Fix recent regression filling in flavor extra\_specs * remove detail method from LimitsController * Remove instance\_uuids from request\_spec * libvirt: remove unused get\_connection parameter from VIF driver * libvirt: sanitize use of mocking in test\_host.py * libvirt: convert test\_host.py to use FakeLibvirtFixture * libvirt: introduce a fixture for mocking out libvirt connections * Expand valid resource name character set * Set socket options in correct way * Make resize server schema 'additionalProperties' False * Make lock file use same function * Remove unused db.api.dnsdomain\_list * Remove unused db.api.instance\_get\_floating\_address * Remove unused db.api.aggregate\_host\_get\_by\_metadata\_key * Remove unused db.api.get\_ec2\_instance\_id\_by\_uuid * Join instances column before expecting it to exist * ec2: Change FormatMappingTestCase to NoDBTestCase * libvirt: enhance driver to configure guests based on hugepages * Fix ironic delete fails when flavor deleted * virt: pass instance object to block\_stats & get\_instance\_disk\_info * Add pci\_device\_pools to ComputeNode object * Handle invalid sort keys/dirs gracefully * hardware: determine whether a pagesize request is acceptable * objects: add method to verify requested hugepages * hardware: make get\_constraints to return topology for hugepages * hardware: add method to return requested memory page size * Cleanup in ResourceExtension ALIAS(v2.1api) * Replace use of handle\_schedule\_error() with set\_vm\_state\_and\_notify() * Fix set\_vm\_state\_and\_notify passing SQLA objects to send\_update() * Imported Translations from Transifex * Libvirt: use strutils.bool\_from\_string * Use constant for microversions header name (cleanup) * Adds support for versioned schema validation for microversions api * Add support for microversions API special version latest * Adds API microversion response headers * Use osapi\_compute worker for api v2 service * initialize objects with context in Aggregate object tests * Replace the rest of the non-object-using test\_compute tests * Fix using anyjson in fake\_notifier * Fix a bug in \_get\_instance\_nw\_info() where we re-query for sysmeta * Corrects link to API 
Reference on landing page * libvirt: disk\_bus setting is being lost when migration is reverted * libvirt: enable hyperv enlightenments for windows guests * libvirt: enhance to return avail free pages on cells * libvirt: move setting of guest features out into helper method * libvirt: add support for configuring hyperv enlightenments in XML * libvirt: change representation of guest features * libvirt: add support for hyperv timer source with windows guests * libvirt: move setting of clock out into helper method * libvirt: don't pass a module import into methods * Reject non existent mock assert calls * VMware: remove unused method in the fake module * Use oslo db concurrency to generate nova.conf.sample * Make instance\_get\_all\_\*() funtions support the smart extra.$foo columns * Make cells send Instance objects in build\_instance() * Fix spelling error in compute api * objects: fix changed fields for instance numa cell * Hyper-V: Fix volume attach issue caused by wrong constant name * Move test\_extension\_info from V3 dir to V2.1 * Make create server schema 'additionalProperties' False * Make update server schema 'additionalProperties' False * Updated from global requirements * Update devref with link to kilo priorities * Add vision of nova rest API policy improvement in devref * objects: remove dict compat support from all XXXList() objects * objects: stop conductor manager using dict field access on objects * objects: allow creation of objects without dict item compat * Remove duplicated constant DISK\_TYPE\_THIN * Hyper-V: Fix retrieving console logs on live migration * Remove FlavorExtraSpecsNotFound catch in v3 API * Add API schema for v2.1 block\_device\_mapping\_v1 * Add API schema for v2.1 block\_device\_mapping extension * VMware: Support volume hotplug * fix import of oslo.concurrency * libvirt: set guest cpu\_shares value as a multiple of guest vCPUs * Make objects use the generalized backport scheme * Fix base obj\_make\_compatible() handling ListOfObjectsField * VMware: make use of oslo.vmware pbm\_wsdl\_loc\_set * Replace stubs with mocks * Updated from global requirements * use more specific error messages in ec2 keystone auth * Add backoff to ebtables retry * Add support for clean\_shutdown to rescue in compute api layer * Add support for clean\_shutdown to shelve in compute api layer * Add support for clean\_shutdown to stop in compute api layer * Extend clean\_shutdown to the compute rpc layer * initialize objects with context in compute manager * Add obj\_as\_admin() to NovaPersistentObject * Bump major version of Scheduler RPC API to 4.0 * Use model\_query from oslo.db * Only check db/api.py for session in arguments * Small cleanup in db.sqlalchemy.api.action\_finish() * Inline \_instance\_extra\_get\_by\_instance\_uuid\_query * libvirt: Convert more tests to use instance objects * virt: Convert more tests to use instance objects * virt: delete unused 'interface\_stats' method * objects: fix version changelog in numa * libvirt: have \_get\_guest\_numa\_config return a named tuple * simplify database fixture to the features we use * extract the timeout setup as a fixture * Stop neutron.api relying on base neutron package * Move pci unit test from V3 to V2.1 * Clarify point of setting dirname in load\_standard\_extensions * Remove support for deprecated header X\_ROLE * move all conf overrides to conf\_fixture * move ServiceFixture and TranslationFixture * extract fixtures from nova.test to nova.test.fixtures * libvirt: Fix NUMA memnode assignments to host cells * 
libvirt: un-cruft \_get\_guest\_numa\_config * Make scheduler filters/weighers only load once * Refactor unit tests for scheduler weights * Fix cells RPC version 1.30 compatibility with dict-based Flavors * Objects: add in missing translation * network:Separate the translatable messages into different catalogs * objects: introduce numa pages topology as an object * check the configuration num\_vbd\_unplug\_retries * Doc: minor fixes to unit testing devref * Doc: Update i18n devref * VMware: remove flag in tests indicating VC is supported * virt: use instance object for attach in block\_device * VMware: clean up unit tests * Do not compute deltas when doing migration * Modify v21 alias name for compatible with v2 * Clean bdms and networks after deleting shelved VM * move eventlet GREENDNS override to top level * fix pep8 errors that apparently slipped in * include python-novaclient in abandon policy * replace httplib.HTTPSConnection in EC2KeystoneAuth * Re-revert "libvirt: add version cap tied to gate CI testing" * ironic: remove non-standard info in get\_available\_resource dict * hyperv: use standard architecture constants for CPU model * xenapi: fix structure of data reported for cpu\_info * ironic: delete cpu\_info data from get\_available\_resource * vmware: delete cpu\_info data from get\_available\_resource * pci: move filtering of devices up into resource tracker * Libvirt: Fsfreeze during live-snapshot of qemu/kvm instances * libvirt: Fixes live migration for volume backed instances * Updated from global requirements * Remove unused db.api.fixed\_ip\_get\_by\_address\_detailed * VMware: Remove unused \_check\_if\_folder\_file\_exists from vmops * VMware: Remove unused \_get\_orig\_vm\_name\_label from vmops * VMware: enable a cache prefix configuration parameter * Hyper-V: attach volumes via SMB * etc: replace NullHandler by Python one * Add cn\_get\_all\_by\_host and cn\_get\_by\_host\_and\_node to ComputeNode * Add host field to ComputeNode * Reject unsupported image to local BDM * Update LVM lockfile name identical to RAW and Qcow * Fix invalid read\_deleted value in \_validate\_unique\_server\_name() * Adds hacking check for api\_version decorator * Parse "networks" attribute if loading os-networks * Fixes interfaces template identification issue * VMware: support passing flavor object in spawn * Libvirt: make use of flavor passed by spawn method * Virt: change instance\_type to flavor * rename oslo.concurrency to oslo\_concurrency * Support macvtap for vif\_type being hw\_veb * downgrade 'No network configured!' 
to debug log level * Remove unnecessary timeutils override cleanup * Cleanup timeutils override in tests/functional/test\_servers * Downgrade quota exceeded log messages * libvirt: Decomposition plug hybrid methods in vif * Remove unused cinder code * Libvirt normalize numa cell ids * Remove needless workaround in utils module * Check for floating IP pool in nova-network * Remove except Exception cases * Fixes multi-line strings with missing spaces * Fix incorrectly formatted log message * libvirt: check value of need\_legacy\_block\_device\_info * Fixed typo in testcase and comment * Share server access ips tests between V2 & V2.1 * Workflow documentation is now in infra-manual * Add a validation format "cidr" * Use a copy of NEW\_NETWORK for test\_networks * Adds global API version check for microversions * Implement microversion support on api methods * Fix long hostname in dnsmasq * This patch fixes the check that 'options' object is empty correctly * Assert order of DB index members * Updated from global requirements * object-ify flavors manager side of the RPC * Add CPU pinning data to InstanceNUMACell object * Enforce unique instance uuid in data model * libvirt: Handle empty context on \_hard\_reboot * Move admin\_only\_action\_common out of v3 directory(cleanup) * Compute Add build\_instance hook in compute manager * SQL scripts should not manage transactions * Clear libvirt test on LibvirtDriverTestCase * Replacement \`\_\` on \`\_LW\` in all LOG.warning part 4 * Replacement \`\_\` on \`\_LW\` in all LOG.warning part 3 * Convert v3/v2.1 extension info to present v2 API format * Adds NUMA CPU Pinning object modeling * objects: Add several complex field types * VMware: ephemeral disk support * Imported Translations from Transifex * Fix disconnecting necessary iSCSI sessions issue * VMware: ensure that fake VM deletion returns a task * Compute: Catch binding failed exception while init host * libvirt: Fix domain creation for LXC * Xenapi: Allow volume backed instances to migrate * Break V2 XML Support * Libvirt: SMB volume driver * libvirt: Enable console and log for system z guests * libvirt: Set guest machine type on system z * Drop support for legacy server groups * Libvirt: Don't let get\_console\_output crash on missing console file * Hyper-V: Adds VMOps unit tests (part 1) * VMware: allow selection of vSAN datastores * libvirt: enhance config memory backing to handle hugepages * VMware: support spawn of stream-optimized image * libvirt: reuse defined method to return instance numa topology * Remove the volume api related useless policy rules * Error code for creating secgroup default rule * Don't mock external locks with Semaphore * Add shelve and unshelve info into devref doc * VMware: optimize resource pool usage * Added objects Tag and TagList * libvirt: video RAM setting should be passed in kb to libvirt * Switch to moxstubout and mockpatch from oslotest * Check that volume != root device during boot by image * Imported Translations from Transifex * Make a flavorRef validation strict * Add missing indexes from 203 migration to model * Fix type of uniq\_security\_groups0project\_id0name0deleted * Correct columns covered in migrations\_instance\_uuid\_and\_status\_idx * Add debug log for url not found * Optimize 'floating\_ip\_bulk\_create' function * factor out \_setup\_logging in test.py * extract \_setup\_timeouts in test.py * Scheduler: return a namedtuple from \_get\_group\_details * Use "is\_neutron\_security\_groups" check * Fix function name mismatch in test case * 
VMware: prevent exception with migrate\_disk\_and\_power\_off * Fix URL mapping of image metadata PUT request * Compute: catch correct exception when host does not exists * Fix URL mapping of server metadata PUT request * objects: move numa host and cell to objects * objects: introduce numa objects * Code cleanup: quota limit validation * Add api validation schema for image\_metadata * Correct InvalidAggregateAction translation&format * Remove blanks before ':' * Port virtual-interfaces plugin to v2.1(v3) API * Catch ComputeServiceUnavailable on v2 API * GET servers API sorting REST API updates * Add API validation schema for volume\_attachments * Changed testcase 'test\_send\_on\_vm\_change' to test vm change * VMware: associate instance with storage policy * VMware: use storage policy in datastore selection * VMWare: get storage policy from flavor * Share CreateBackup unit test between V2 & V2.1 * Share suspend\_server unit test between V2 & V2.1 * Share pause\_server unit test between V2 & V2.1 * Share lock\_server unit test between V2 & V2.1 * VMware: enable VMware driver to use new BDM format * Use admin only common test case in admin action unit test cases * objects: move virt numa instance to objects * Fix v2.1 API os-simple-tenant-usage policy * Set vm state error when raising unexpected exception in live migrate * Add delete not found unit testcase for floating\_ip api * Improve error return code of floating\_ips in v2/v2.1 api * Port floating\_ips extension to v2.1 * Removing the headroom calculation from db layer * Make multiple\_create unit tests share between v2 and v2.1 * Set API version request information on request objects * Change definition of API\_EXTENSION\_NAMESPACE to method * Adds APIVersionRequest class for API Microversions * Updated from global requirements * remove test.ReplaceModule from test.py * Added db API layer to add instance tag-list filtering support * Added db API layer for CRUD operations on instance tags * Implement 'personality' plugin for V2.1 * Fix API samples/templates of multinic-add-fixed-ip * move the integrated tests into the functional tree * Sync latest from oslo-incubator * Fix use of conf\_fixture * Make network/\* use Instance.get\_flavor() * Make metadata server use Instance.get\_flavor() * Fix use of extract\_flavor() in hyper-v driver * Check server group policy on migrate/evacuate * VMware: fix exception when multiple compute nodes are running * Add API json schema for server\_external\_event(v2.1) * Port v2 quota\_classes extension to work in v2.1(v3) framework * Share unit test case for server\_external\_events api * Add API schema for v2.1/v3 scheduler\_hints extension * Make compute/api.py::resize() use Instance.get\_flavor() * Make get\_image\_metadata() use Instance.get\_flavor() * Fix instance\_update() passing SQLA objects to send\_update() * Fix EC2 volume attachment state at attaching stage * Fixes Hyper-V agent IDE/SCSI related refactoring * dummy patch to let tox functional pass * Remove Python 2.6 classifier * Make aggregate filters use objects * hardware: clean test to use well defined fake flavor * Enable pep8 on ./tools directory * objects: Add test for instance \_save methods * Error code for creating duplicate floating\_ip\_bulk * Use HTTPRequest instead of HTTPRequestV3 for v2/v2.1 tests * objects: make instance numa topology versioned in db * Clean up in test\_server\_diagnostics unit test case * Add "x-compute-request-id" to a response header * Prevent admin role leak in context.elevated * Hyper-V: Refactors 
Hyper-V VMOps unit tests * Hyper-V: Adds Hyper-V SnapshotOps tests * Introduce a .z version element for backportable objects * Adds new RT unit tests for \_sync\_compute\_node * Fix for extra\_specs KeyError * Remove old Baremetal Host Manager * Remove unused network\_api.get\_instance\_uuids\_by\_ip\_filter() * Remove unused network\_api.get\_floating\_ips\_by\_fixed\_address() * add abandon\_old\_reviews script * Remove havana compat from nova.cert.rpcapi * Retry ebtables on race * Eventlet green threads not released back to pool * Hyper-V: Adds LiveMigrationOps unit tests * Hyper-V: Removes redundant utilsfactory tests from test\_hypervapi * Hyper-V: Adds HostOps unit tests * Make nova-api use quotas object for create\_security\_group * Make nova-api use quotas object for count() and limit\_check() * Add count and limit\_check methods to quota object * Make neutronapi get networks operations return objects * Hyper-V: fix tgt iSCSI targets disconnect issue * Network object: add missing translations * Adapting pylint runner to the new message format * Cleanup v2.1 controller inheritance * Load extension 2 times fix load sequence issue * Make get\_next\_device\_name() handle an instance object * Add obj\_set\_defaults() to NovaObject * Switch to oslo.config fixture * Remove VirtNUMAHostTopology.claim\_test() method * Instances with NUMA will be packed onto hosts * Make Instance.save() update numa\_topology * objects: remove VirtPageSize from hardware.py * VMware: enable backward compatibility with existing clusters * Make notifications use Instance.get\_flavor() * Make notify\_usage\_exists() take an Instance object * Convert hardware.VirtCPUTopology to nova object * Updated from global requirements * Replacement \`\_\` on \`\_LW\` in all LOG.warning part 2 * compute: rename hvtype.py to hv\_type.py * Replacement \`\_\` on \`\_LW\` in all LOG.warning part 1 * Replacement \`\_\` on \`\_LE\` in all LOG.exception * Use opportunistic approach for migration testing * Replacement \`\_\` on \`\_LI\` in all LOG.info - part 2 * Replacement \`\_\` on \`\_LI\` in all LOG.info - part 1 * Add ALL-IN operator to extra spec ops * Sync server\_external\_events v2 to v2.1 Part 2 * Sync server\_external\_events v2 to v2.1 Part 1 * Fix connecting unnecessary iSCSI sessions issue * Add API validation schema for services v2.1 plugin * Fix exception handling in \_get\_host\_metrics() * initialize objects with context in network manager tests * initialize objects with context in flavors * initialize objects with context in compute api * initialize objects with context in resource tracker * Use common get\_instance call in API plugins part 3 * Clean the test cases for service plugins * initialize objects with context in server groups api * initialize objects with context in cells * tests: update \_get\_instance\_xml to accept custom flavor object * libvirt: vif tests should use a flavor object * Compute: improve test\_compute\_utils time * Compute: improve usage of Xen driver support * libvirt: introduce new 'Host' class to manage the connection * Add CHAP credentials support * Document the upgrade plans * Move test\_hostops into nova/tests/unit * Fix get\_all API to pass search option filter to cinder api * VMware: remove ESX support for getting resource pool * objects: Makes sure Instance.\_save methods are called * Add support for fitting instance NUMA nodes onto a host * VMware: remove unnecessary brackets * Imported Translations from Transifex * Port volume\_attachments extension to v2.1 API * Only filter 
once for trusted filters * Indicate whether service is down for mc driver * Port assisted-volume-snapshots extension to v2.1 * Updated from global requirements * Add custom is\_backend\_avail() method * Fixes differencing VHDX images issue on Hyper-V * Add debug log when over quota exception occurs * Fix rule not found error in sec grp default rule API * Convert service v3 plugin to v2.1 API * Decrease admin context usage in \_get\_guest\_config * Catch NotImplemented nova exceptions in API extension * Add API json schema to volumes api(v2.1) * Don't modify columns\_to\_join formal parameter in \_manual\_join\_columns * Limit tcp/udp port to be empty string in json-schema * Fix the cell API with string rpc\_port failed * Add decorator expected\_errors for security\_group extension * Fix bulk floating ip ext to show uuid and fixed\_ip * Use session in cinderclient * Make objects.Flavor.\_orig\_projects a list * Refactor more compute tests to use Instance objects * Use Instance.get\_flavor() in more places * Support instance\_extra fields in expected\_attrs on Instance object * Adds host power actions support for Hyper-V * Exceptions: finish sentence with fullstop * Type conflict in trusted\_filter.py using attestation\_port default value * Get EC2 metadata localip return controller node ip * Rename private functions in db.sqla.api * Enable hard-reboot on more states * Better error message when check volume status * libvirt: use qemu (qdisk) disk driver for Xen >= 4.2.0 * Add resource types for JSON-Schema validation * Add integer types for JSON-Schema * Revert pause/unpause state when host restart * Extends use of ServiceProxy to more methods in HostAPI in cells * Nova devref: Fix the rpc documentation typos * Remove duplicated code in services api integrated test case * Share server\_password unit test between V2 & V2.1 * Key manager: ensure exception reason is translated * Virt: update spawn signature to pass instance\_type * Compute: set instance to ERROR if resume fails * Limit InstanceList join to system\_metadata in os-simple-tenant-usage * Pass expected\_attrs to instance\_get\_active\_by\_window\_joined * VMware: remove unused parameter (mountpoint) * Truncate encoded instance message to 255 or fewer * Only load necessary instance info for use in sync power state * Revert "Truncate encoded instance message to 255" * VMware: refactor cpu allocations * Fixes spawn issue on Hyper-V * Refine HTTP error code for os-interface * Share migrations unit test between V2 & V2.1 * Use common get\_instance call in API plugins part 2 * make get\_by\_host use slave in periodic task * Add update\_cells to BandwidthUsage.create() * Fix usage of BandwidthUsage.create() * Updated from global requirements * Hard reboot doesn't re-create instance folder * object-ify flavors api and compute/api side of RPC * Allow passing columns\_to\_join to instance\_get\_all\_by\_host\_and\_node() * Don't make a no-op DB call * Remove deprecated affinity filters * Generalize dependent object backporting * GET servers API sorting compute/instance/DB updates * Hyper-V: cleanup basevolumeutils * Specify storage IP for iscsi connector * Fix conductor processes race trying to join servicegroup (zk driver) * Remove unused db.api.floating\_ip\_set\_auto\_assigned * Remove unused db.api.flavor\_extra\_specs\_get\_item * Remove unused oslo.config import * Create instance\_extra items atomically with the instance itself * Shelve\_offload() should give guests a chance to shutdown * Fixes Hyper-V driver WMI issue on 2008 R2 * Fix 
circular reference error when live migration failed * Fix live migration api stuck when migrating to old nova node * Remove native security group api class * libvirt: pin emulator threads to union of vCPU cpuset * libvirt: add classes for emulator thread CPU pinning configuration * libvirt: set NUMA memory allocation policy for instances * Fixed quotas double decreasing problem * Convert v3 console plugin to v2.1 * Virt: make use of the InstanceInfo object * Virt: create an object InstanceInfo * Metadata service: make use of get\_instance\_availability\_zone * Metadata service: remove check for the instance object type * Metadata: use instance objects instead of dictionary * VMware: Fix problem transferring files with ipv6 host * VMware: pass vm\_ref to \_set\_machine\_id * VMware: pass vm\_ref to \_get\_and\_set\_vnc\_config * Add API schema for aggregates set\_metadata API * Compute: Add start notification for resume * VMware: fix regression for 'TaskInProgress' * Remove havana compat from nova.console.rpcapi * Remove havana compat from nova.consoleauth.rpcapi * Share console-auth-tokens tests between V2 & V2.1 * Raise HTTPNotFound in V2 console extension * Add 'instance-usage-audit-log' plugin for V2.1 * Truncate encoded instance message to 255 * Deduplicate some INFO and AUDIT level messages * move all tests to nova/tests/unit * Add tox -e functional * Don't touch info\_cache after refreshing it in Instance.refresh() * Drop max-complexity to 47 * Aggregate.save() shouldn't return a value * Remove useless host parameter in virt * Use real disk size to consider a resize down * Add virtual interface before add fixed IP on nova-network * image cache clean-up to clean swap disk * Make unit test floating ips bulk faster * Remove flush\_operations in the volume usage output * Updated from global requirements * xenapi plugins must target only Python 2.4 features * libvirt: add classes for NUMA memory binding configuration * libvirt: add in missing translation for LVM migration * Config bindings: remove redundant brackets * Config drive: delete deprecated config var config\_drive\_tempdir * Refactor Ironic driver tests as per review comment * Switch default cinder API to V2 * Remove deprecated spicehtml5 options * Fix xen plugin to retry on upload failure * Log sqlalchemy exception message in migration.py * Use six.text\_type instead of unicode * XENAPI add duration measure to log message * Quotas: remove deprecated configuration variable * Glance: remove deprecated config options * Cinder: remove deprecated configuration options * Neutron: remove deprecated config options * object: update instance numa object to handle pagesize * hardware: make cell instance topology to handle memory pages * hardware: introduce VirtNUMATopologyCellInstance * hardware: fix in docstring the memory unit used * virt: introduce types VirtPageSize and VirtPagesTopology * Clearer default implementation for dhcp\_options.. 
* Fix instance\_usage\_audit\_log test to use admin context * VMware: remove unused method \_get\_vmfolder\_ref * libvirt: safe\_decode domain.XMLDesc(0) for i18n logging * VMware: trivial fix for comment * Fix the uris in documentation * Make test\_security\_groups nose compatible * Make test\_quotas compatible with nosetests * Return HTTP 400 if use invalid fixed ip to attach interface * Fixed typos in nova.objects.base docstrings * Add note on running single tests to HACKING.rst * Use sizelimit from oslo.middleware * Use oslo.middleware * Make resource tracker always use Flavor objects * maint:Don't translate debug level logs * Make console show and delete exception msg better * Change error code of floating\_ip\_dns api(v2.1) * Make scheduler code use object with good practice * Switch Nova to use oslo.concurrency * scheduler: Remove assert on the exact number of weighers * Update docstring for check\_instance\_shared\_storage\_local * remove use of explicit lockutils invocation in tests * Delay STOPPED lifecycle event for Xen domains * Remove warning & change @periodic\_task behaviour * Fix nova-compute start issue after evacuate * Ignore DiskNotFound error on detaching volumes * Move setup\_instance\_group to conductor * Small doc fix in compute test * libvirt: introduce config to handle cells memory pages caps * Fixes DOS issue in instance list ip filter * Use 404 instead of 400 when security\_group is non-existed * Port security-group-default-rules extension into v2.1 * Port SecurityGroupRules controller into v2.1 * error if we don't run any tests * Revert "Switch Nova to use oslo.concurrency" * Updated from global requirements * Remove admin context which is not needed * Add API validation schema for disk\_config * Make test\_host\_filters a NoDBTestCase * Move group affinity filters tests to own test file * Split out metrics filter unit tests * Splits out retry filter unit tests * Split out compute filters unit tests * Update hooks from oslo-incubator copy * Split out aggregate disk filter unit tests * Split out core filter unit tests * Split out IO Ops filter unit tests * Split out num instances filter unit tests * Split and fix the type filters unit tests * Split and fix availability zone filter unit tests * Split out PCI passthrough filter unit tests * Use common get\_instance call in API plugins * Fix nova evacuate issues for RBD * DB API: Pass columns\_to\_join to instance\_get\_active\_by\_window\_joined * Read flavor even if it is already deleted * Use py27 version of assertRaisesRegexp * update retryable errors & instance fault on retry * xenapi: upload/download params consistency change * Use assertRaisesRegexp * Drop python26 support for Kilo nova * Switch Nova to use oslo.concurrency * Remove param check for backup type on v2.1 API * Set error state when unshelve an instance due to not found image * fix the error log print in encryptor \_\_init\_\_.py * Remove unused compute\_api in extend\_status * Compute: maint: adjust code to use instance object format * VMware: use instance.uuid instead of instance['uuid'] * Network: manage neutron client better in allocate\_for\_instance * Split out agg multitenancy isolation unit tests * Split agg image props isolation filter unit tests * Separate isolated hosts filter unit tests * Separate NUMA topology filter unit tests * resource-tracker: Begin refactor unit tests * Faster get\_attrname in nova/objects/base.py * Hyper-V: Skip logging out in-use targets * Compute: catch more specific exception for \_get\_instance\_nw\_info * 
typo in the policy.json "rule\_admin\_api" * Fix the unittest using wrong controller for SecurityGroups V2 * host manager: Log the host generating the warning * Add API validation schema for floating\_ip\_dns * Remove \`domain\` from floating-ip-dns-create-or-update-req body * Port floating\_ip\_dns extension to v2.1 * Remove LOG outputs from v2.1 API layer * Run build\_and\_run\_instance in a separate greenthread * VMware: Improve the efficiency of vm\_util.get\_host\_name\_for\_vm * VMware: Add fake.create\_vm() * Use wsgi.response for v2.1 API * Use wsgi.response for v2.1 unrescue API * Add API schema for v2.1 "resize a server" API * Remove use of unicode on exceptions * Fix error in comments * Make pci\_requests a proper field on Instance object * libvirt: fully parse PCI vendor/product IDs to integer data type * Remove unnecessary instance.save in nova compute * api: add serial console API calls v2.1/v3 * Add API validation schema for cloudpipe api * Remove project id in ViewBuilder alternate link * Handle exception better in v2.1 attach\_interface * Cleanup of tenant network tests * Port floating\_ips\_bulk extension to v2.1 * Make v2.1 tests use wsgi\_app\_v21 and remove wsgi\_app\_v3 * Translate 'powervm' hypervisor\_type to 'phyp' for scheduling * Give a reason why NoValidHost in select\_destinations * ironic: use instance object for \`\_add\_driver\_fields\` * ironic: use instance object for \`\_wait\_for\_active\` * ironic: use instance object for \`get\_info\` * ironic: use instance object for \`rebuild\` * ironic: use instance object for plug\_vifs * Revert "Replace outdated oslo-incubator middleware" * Set logging level for glanceclient to WARN * Nova should be in charge of its log defaults * Reduce the complexity of \_get\_guest\_config() * VMware: fix compute node exception when no hosts in cluster * libvirt: use instance object for detach\_volume * libvirt: use instance object for attach\_volume * libvirt: use instance object for resume\_state\_on\_host\_boot * libvirt: treat suspend instance as an object * VMware: Remove redundant fake.reset() in test\_vm\_util * VMware: add tests for spawn with config drive enabled * Adds tests for Hyper-V Network utils * Adds tests for Hyper-V Host utils * Fix order of arguments in assertEqual * Replace custom patching in \`setUp\` on HypervisorsSampleJsonTests * Console: delete code for VMRCConsole and VMRCSessionConsole * VMware: delete the driver VMwareESXDriver * Replacement \`\_\` on \`\_LE\` in all LOG.error * VMware: rename vmware\_images to images * Remove useless parameter in cloudpipe api(v2/v2.1) * Moves trusted filter unit tests into own file * Port update method of cloudpipe\_update to v2.1(v3) * Clean up iSCSI multipath devices in Post Live Migration * Check fixed-cidr is within fixed-range-v4 * Porting baremetal\_nodes extension to v2.1/v3 * Port fixed\_ip extension to v2.1 * Separate filter unit tests for agg extra specs * Move JSON filter unit tests into own file * Separate compute caps filter unit tests * Separate image props filter unit tests * Separate disk filters out from test\_host\_filters * Separate and refactor RAM filter unit tests * Remove duplicate test * Reduce the complexity of stub\_out\_db\_network\_api() * Remove duplicate index from model * Remove useless join in nova.virt.vmwareapi.vm\_util * fixed typo in test name * Separate and refactor affinity filter tests * Pull extra\_specs\_ops tests from test\_host\_filters * Remove outdated docstring for XenApi driver's options * VMware: attach config drive if 
booting from a volume * Remove duplicated comments in virt/storage\_users * Compute: use instance object for vm\_state * libvirt: use six.text\_type when setting text node value in guest xml * Allow strategic loading of InstanceExtra columns * Create Nova Scheduler IO Ops Weighter * Put a cap on our cyclomatic complexity * Add notification for server group operations * Clean up the naming of PCI python modules * Port os-networks-associate plugin to v2.1(v3) infrastructure * Port os-tenant-networks plugin to v2.1(v3) infrastructure * Cleanup of exception handling in network REST API plugin * Fix instance\_extra backref * Refactor compute tests to not use \_objectify() * Refactor compute and conductor tests to use objects * Fix genconfig - missed one import from oslo cleanup * Handle Forbidden error from network\_api.show\_port in os-interface:show * Replace outdated oslo-incubator middleware * VMware: Improve logging on failure due to invalid guestId * Ironic: Continue pagination when listing nodes * Fix unit test failure due to tests sharing mocks * libvirt: fully parse PCI addresses to integer data type * libvirt: remove pointless HostState class * Porting SecurityGroup related controller into v2.1 * Allow force-delete irrespective of VM task\_state * Use response.text for returning unicode EC2 metadata * Remove unused modules copied from oslo-incubator * Remove unused code in pci\_manager.get\_instance\_pci\_devs() * VMWare: Remove unused exceptions * Switch to nova's jsonutils in oslo.serialization * VMware: mark virtual machines as 'belonging' to OpenStack * XenAPI: Inform XAPI who is connecting to it * Rename cli variable in ironic driver * Add more input validation of bdm param in server creation * Return HTTP 400 if using an in-use fixed ip to attach interface * VMware: get\_all\_cluster\_refs\_by\_name default to {} * Minor refactor of \_setup\_instance\_group() * add InstanceGroup.get\_by\_instance\_uuid * Add instance\_group\_get\_by\_instance to db.api * Updated from global requirements * Add supported\_hv\_specs to ComputeNode object * Pass block device info in pre\_live\_migration * Use 400 instead of 422 for security\_groups v2 API * Port floating\_ip\_pools extension to v2.1 * Imported Translations from Transifex * Sync with latest oslo-incubator * Don't translate unit test logs * Optimize get\_instance\_nw\_info and remove ipam * Convert migrate requests to use joins * Use database joins for fixed ips to other objects * Keep migration status if instance still resizing * Don't log every (friggin) migration version step during unit tests * Remove init for object list in api layer * Revise compute API schemas and add tests * Add Quota roll back for deallocate fixed ip in nova-network * Update README for openstack/common * Fix libvirt watchdog support * VMware: add support for default pbm policy * Remove unused imports from neutron api * Cleanup tenant networks plugin config creation * Port os-networks plugin to v2.1(v3) infrastructure * Use reasonable timeout for rpc service\_update() * Finish objects conversion in the os-interface API 2014.2 ------ * Fix pci\_request\_id break the upgrade from icehouse to juno * Fix pci\_request\_id break the upgrade from icehouse to juno * Updated translations * vfs: guestfs logging integration * Fix broken cert revocation * Port cloudpipe extension to v2.1 * Cleanup log marker in neutronv2 api * Add 'zvm' to the list of known hypervisor types * Fix wrong exception return in fixed\_ips v2 extension * Extend XML unicode test coverage * Remove
unnecessary debug/info logs of normal API ops * Refactor of test case of floating\_ips * Make v2.1 API tests use v2 URLs(test\_[r-v].\*) * Make v2.1 API tests use v2 URLs(test\_[f-m].\*) * Break out over-quota calculation code from quota\_reserve() * Fix image metadata returned for volumes * Log quota refresh in\_use message at INFO level for logstash * Break out over-quota processing from quota\_reserve() * Remove obsolete vmware/esx tools * Fix broken cert revocation * Remove baremetal virt driver * Update rpc version aliases for juno * VMware: Set vmPathName properly in fake driver * Port disk\_config extension for V2.1 * Allow backup operation in paused and suspend state * Update NoMoreFixedIps message description * Make separate calls to libvirt volume * Correct VERSION of NetworkRequest * Break out quota usage refresh code from quota\_reserve() * libvirt: abort init\_host method on libvirt that is too old * Mask passwords in exceptions and error messages * Support message queue clusters in inter-cell communication * neutronv2: translate 401 and 404 neutron client errors in show\_port * Log id in raise\_http\_conflict\_for\_instance\_invalid\_state() * Use image metadata from source volume of a snapshot * Fix KeyError for euca-describe-images * Optimize 'fixed\_ip\_bulk\_create' function * Remove 'get\_host\_stats' virt driver API method * Suppressed misleading log in unshelve, resize api * Imported Translations from Transifex * libvirt: add \_get\_launch\_flags helper method in unit test * Refactoring of contrib.test\_networks tests * Make v2.1 API tests use v2 URLs(test\_[a-e].\*) * Port fping extension to work in v2.1/v3 framework * Use oslo.utils * Correctly catch InstanceExists in servers create API * Fix the os\_networks display to show cidr properly * Avoid using except Exception in unit test * nova-net: add more useful logging before raising FixedIpLimitExceeded * libvirt: convert conn test case to avoid DB usage * libvirt: convert driver test suite to avoid DB usage * Mask passwords in exceptions and error messages * Disable libvirt NUMA topology support if libvirt < 1.0.4 * Resource tracker: use brackets for line wrap * VMWare: Remove unnecessery method * console: make unsupported ws scheme in python < 2.7.4 * VMWare: Fix nova-compute crash when instance datastore not available * Disable libvirt NUMA topology support if libvirt < 1.0.4 * VMware: remove \_get\_vim() from VMwareAPISession * Compute: use an instance object in terminate\_instance * VMware: remove unnecessary deepcopy * Destroy orig VM during resize if triggered by user * Break out quota refresh check code from quota\_reserve() * move integrated api client to requests library * Fix unsafe SSL connection on TrustedFilter * Update rpc version aliases for juno * Fix the os\_networks display to show cidr properly * libvirt: convert mox to mock in test\_utils * Remove kombu as a dependency for Nova * Adds missing exception handling in resize and rebuild servers API * Remove keystoneclient requirement * Destroy orig VM during resize if triggered by user * VMware: Fix deletion of an instance with no files * console: introduce a new exception InvalidConnectionInfo * Remove the nova-manage flavor sub-command * support TRACE\_FAILONLY env variable * Ensure files are closed promptly when generating a key pair * libvirt: convert volume snapshot test case to avoid DB usage * libvirt: convert volume usage test case to avoid DB usage * libvirt: convert LibvirtNonblockingTestCase to avoid DB usage * libvirt: convert firewall 
tests to avoid DB usage * libvirt: convert HostStateTestCase to avoid DB usage * libvirt: split firewall tests out into test\_firewall.py * libvirt: convert utils test case to avoid DB usage * Add VIR\_ERR\_CONFIG\_UNSUPPORTED to fakelibvirt * Remove indexes that are prefix subsets of other indexes * remove scary error message in tox * Cleanup \_convert\_block\_devices * Enhance V2 disk\_config extension Unit Test * Add developer policy about contractual APIs * Reserve 10 migrations for backports * libvirt: Make sure volumes are well detected during block migration * Remove websocketproxy workaround * Fix unsafe SSL connection on TrustedFilter 2014.2.rc1 ---------- * Remove xmlutils module * libvirt: Make sure NUMA cell memory is in Kb in XML * Fix disk\_allocation\_ratio on filter\_scheduler.rst * Remove unused method within filter\_scheduler test * Open Kilo development * Correct missing vcpupin elements for numa case * VMware: remove unused variable from tests * Imported Translations from Transifex * VMWare: Fix VM leak when deletion of VM during resizing * Logging detail when attach interface failed * Removes unused code from wsgi \_to\_xml\_node * Fix XML UnicodeEncode serialization error * Add @\_retry\_on\_deadlock to \_instance\_update() * Remove duplicate entry from .gitignore file * console: fix bug when invalid connection info * console: introduce a new exception InvalidToken * cmd: update the default behavior of serial console * console: make websocketproxy handles token from path * VMware: Remove tests for None in fake.\_db\_content['files'] * Fix creating bdm for failed volume attachment * libvirt: consider vcpu\_pin\_set when choosing NUMA cells * Fix hook documentation on entry\_points config * Remove local version of generate\_request\_id * fix usage of obj\_reset\_changes() call in flavor * Fix Bad except clauses order * Typo in exception name - CellsUpdateProhibited * Log original error when attaching volume fails * Retry on closing of luks encrypted volume in case device is busy * VMware: Remove VMwareImage.file\_size\_in\_gb * VMware: remove unused argument from \_delete\_datastore\_file() * xenapi: deal with reboots while talking to agent * Ironic: Do not try to unplug VIF if not associated * Fix Typo in method name - parse\_Dom * Adds openSUSE support for developer documentation * VMware: Remove class orphaned by ESX driver removal * Fixes missing ec2 api address disassociate error on failure * Fixes potential reliablity issue with missing CONF import * Updated from global requirements * Port extended\_ips/extended\_ips\_mac extension to V2.1 * the value of retries is error in \_allocate\_network * Ironic driver must wait for power state changes * Fallback to legacy live migration if config error * libvirt: log exception info when interface detach failed * libvirt: support live migration with shared instances dir * Fix SecurityGroupExists error when booting instances * Undo changes to obj\_make\_compatible * Clarify virt driver test comments & log statement * move integrated api client to requests library * VMware: Make DatastorePath hashable * Remove usage of self.\_\_dict\_\_ for message var replacement * VMware: trivial formatting fix in fake driver * VMware: Improve logging of DatastorePath in error messages * VMware: Use vm\_mode constants * Imported Translations from Transifex * Updated from global requirements * do not use unittest.TestCase for tests * Neutron: Atomic update of instance info cache * Reduce the scope of RT work while holding the big lock * 
libvirt: convert CacheConcurrencyTestCase to avoid DB usage * Give context to the warning in \_sync\_power\_states * remove test\_multiprocess\_api * add time to logging in unit tests * XenAPI: clean up old snapshots before create new * Return vcpu pin set as set rather than list * Fix start/stop return active/stopped immediately in EC2 API * consistently set status as REBUILD when rebuilding * Add test case for vim header check * Add missing instance action record for start of live migration * Reduce the log level for the guestfs being missing * Sync network\_info if instance not found before \_build\_resources yield * Remove the AUDIT log message about loaded ext * Fix unset extra\_spec for a flavor * Add further debug logging for multiprocess test * Revert "libvirt: support live migrate of instances with conf drives" * Revert "libvirt: Uses correct imagebackend for configdrive" * Fixes server list filtering on metadata * Add good path test cases of osapi\_compute\_workers * Be less confusing about notification states * Remove unused py33 tox env * fix\_typo\_in\_heal\_instance\_info\_cache * Refactor test\_get\_port\_vnic\_info 2 and 3 * Revert "libvirt: reworks configdrive creation" * Making nova.compute.api to return Aggregate Objects * Scheduler: add log warning hints * Change test function from snapshot to backup * Fixes Hyper-V dynamic memory issue with vNUMA * Update InstanceInvalidState output * Add unit test for glanceclient ssl options * Fix Broken links in devref/filter\_scheduler.rst * Change "is lazy loaded" detection method in db\_api test * Handle VolumeBDMPathNotFound in \_get\_disk\_over\_committed\_size\_total * Handle volume bdm not found in lvm.get\_volume\_size * Updated from global requirements * Address nits in I6b4123590 * Add exists check to fetch\_func\_sync in libvirt imagebackend * libvirt: avoid changing UUID when redefining nwfilters * Vmware:Add support for ParaVirtualSCSIController * Fix floating\_ips\_bulk unit test name * refactor flavor manage tests in prep for object-ify flavors * refactor flavor db fakes in prep for object-ify flavors * move dict copy in prep for object-ify flavors * tests: kill worker pids as well on timeouts * Close standard fds in test child process * Mitigating performance impact with getting pci requests from DB * Return None from get\_swap() if input is not swap * Require tests for DB migrations * VMware: fix broken mock of ds\_util.mkdir * Fix KeyError for euca-describe-images * Fixes HyperV VM Console Log * FIX: Fail to remove the logical volume * correct \_sync\_instance\_power\_state log message * Add support for hypervisor type in IronicHostManager * Don't list entire module autoindex on docs index * Add multinic API unit test * Add plan for kilo blueprints: project priorities * make flavors use common limit and marker * Raise an exception if qemu-img fails * Libvirt: Always teardown lxc container on destroy * Mark nova-baremetal driver as deprecated in Juno, removed in K * libvirt: Unnecessary instance.save(s) called * Add progress and cell\_name into notifications * XenAPI: run vhd-util repair if VHD check fails * Get instance\_properties from request\_spec * libvirt: convert encrypted LVM test to avoid DB usage * libvirt: convert test\_dmcrypt to avoid DB usage * libvirt: convert test\_blockinfo.py to avoid DB usage * libvirt: convert test\_vif.py to avoid DB usage * libvirt: remove pointless class in util test suite * libvirt: avoid need for lockutils setup running test cases * VMware: Remove host argument to 
ds\_util.get\_datastore() * Fix DB migration 254 by adding missing unittest * postgresql: use postgres db instead of template1 * Assume VNIC\_NORMAL if binding:vnic\_type not set * mock.assert\_called\_once() is not a valid method * db: Add @\_retry\_on\_deadlock to service\_update() * Update ironic states and documentation * XenAPI improve post snapshot coalesce detection * Catch NotImplementedError on reset\_network for xen * VMware: Fix usage of assertEqual in test\_vmops * Add more information to generic \_add\_floating\_ip exception message * bring over pretty\_tox.sh from tempest * Console: warn that the Nova VMRC console driver will be deprecated in K * virt: use compute.vm\_mode constants and validate vm mode type * compute: tweaks to vm\_mode APIs to align with arch/hvtype * Fix NUMA fit testing in claims and filter class * consolidate apirequest tests to single file * ensure that we safely encode ec2 utf8 responses * instance\_topology\_from\_instance handles request\_spec properly * NUMA \_get\_constraint auto assumed Flavor object * Imported Translations from Transifex * Fix 'force' parameter for quota-update * Update devref * default=None is unneeded in config definitions * Remove unused elevated context param from quota helper methods * Remove stale code from ObjectListBase * Split up libvirt volume's connect\_volume method * Record instance faults during boot process * ironic/baremetal: add validation of host manager/state APIs * virt: move assertPublicAPISignatures into base test class * libvirt: avoid 30 second long test in LXC mount setup * Remove all redundant \`setUp\` methods * fix up assertEqual(None...) check to catch more cases * Fix object version hash test * disk/vfs: make docstring conventional to python * disk/vfs: ensure guestfs capabilities * NIST: increase RSA key length to 2048 bit * Fix incorrect exception when bdm with error state volume * ironic: Clean LOG usage * Improve secgroup create error message * Always log the releasing, even under failure * Fix race condition in update\_dhcp * Make obj\_make\_compatible consistent * Correct baremetal/ironic consume\_from\_instance.. 
* Fix parsing sloppiness from iscsiadm discover * correct inverted subtraction in quota check * Add quotas for Server Groups (quota checks) * Add quotas for Server Groups (V2 API change) * check network ambiguity before external network auth * Updated from global requirements * libvirt: Consider numa\_topology when booting * Add the NUMATopologyFilter * Make HostManager track NUMA usage * API boot process sets NUMA topology for instances * Make resource tracker track NUMA usage * Hook NUMA topology checking into claims * Stash numa-related flavor extra\_spec items in system\_metadata * Fixes network\_get\_all\_by\_host to use indexes * Add plan for kilo blueprints: when is a blueprint needed * Bump FakeDriver's resource numbers * delete python bytecode before every test run * Stop using intersphinx * Don't swallow exceptions in deallocate\_port\_for\_instance * neutronv2: attempt to delete all ports * Proxy nova baremetal commands to Ironic * Increase sleeps in baremetal driver * Improve logging of external events on the compute node * virt: use compute.virttype constants and validate virt type * compute: Add standard constants for hypervisor virt types * Fix test\_create\_instance\_invalid\_key\_name * Fix \`confirmResize\` action status code in V2 * Remove unnecessary imageRef setting from tests * Add unit test for add\_floating\_ip API * Remove unused config "service\_down\_time" reference * Clarify logging in lockutils * Make sure libvirt VIR\_ERR\_NO\_DOMAIN errors are handled correctly * Adds LOG statements in multiprocess API test * Block sqlalchemy migrate 0.9.2 as it breaks all of nova * Xen: Attempt to find and cleanup orphaned SR during delete * Nova-net: fix server side deallocate\_for\_instance() * Method for getting NUMA usage from an instance * Ironic: save DB calls for getting flavor * Imported Translations from Transifex * Fix 'os-interface' resource name for Nova V2.1 * Add new unit tests for PCI stats * Fixes AttributeError with api sample test fail * Fix "revertResize/confirmResize" for V2.1 API * Add unit test to os-agent API * check the block\_device\_allocate\_retries * Support SR-IOV networking in libvirt * Support SR-IOV networking in nova compute api and nova neutronv2 * Support SR-IOV networking in the PCI modules * Add request\_id in PciDevice * Replace pci\_request flavor storage with proper object usage * Adds a test for raw\_cpu\_arch in \_node\_resource * Stop stack tracing when trying to auto-stop a stopped instance * Add quotas for Server Groups (V2 API compatibility & V2.1 support) * Fixes Hyper-V volume mapping issue on reboot * Libvirt-Enable support for discard option for disk device * libvirt: set pae for Xen PVM and HVM * Add warning to periodic\_task with interval 0 * document why we disable usb\_tablet in code * Fix 'os-start/os-stop' server actions for V2.1 API * Fix 'createImage' server actions for V2.1 API * Add unit test to aggregate api * Handle exception better in v2 attach\_interface * Fix integrated test cases for assisted-volume-snapshots * libvirt: start lxc from block device * Remove exclude coverage regex from coverage job * Pass instance to set\_instance\_error\_state vs. 
uuid * Add InstancePCIRequests object * Drop verbose and useless nova-api log information * Add instance\_extra\_update\_by\_uuid() to DB API * Add pci\_requests to instance\_extra table * Add claims testing to VirtNUMAHostTopology class * Expose numa\_topology to the resource tracker * libvirt: fix bug when releasing port(s) * Specify correct operation type when NVH is raised * Ironic: don't canonicalize extra\_specs data * VMware: add tests for image fetch/cache functions * VMware: spawn refactor image fetch/cache * Ironic: Fix direct use of flavor and instance module objects * Ironic driver fetches extra\_specs when needed * Maint: neutronclient exceptions from a more appropriate module * Check requirements.txt files for missing (used) requirements * Sync oslo-incubator module log: * Add amd64 to arch.canonicalize() * Sync oslo lockutils to nova * libvirt: deprecated volume\_drivers config parameter * VMware: Remove get\_copy\_virtual\_disk\_spec from vmops and vm\_util * maint: various spelling fixes * Fix config generator to use keystonemiddleware * libvirt: improve unit test time * VMware: prevent race condition with VNC port allocation * VMware: Fix return type of get\_vnc\_console() * VMware: Remove VMwareVCVMOps * Network: enable instance deletion when dhcp release fails * Adds ephemeral storage encryption for LVM back-end images * Don't elevate context when rescheduling * Ironic driver backports: patch 7 * Improve Ironic driver performance: patch 6 * Import Ironic Driver & supporting files - part 5 * Import Ironic Driver & supporting files - part 4 * Import Ironic Driver & supporting files - part 3 * Import Ironic Driver & supporting files - part 2 * Import Ironic Driver & supporting files - part 1 * Add sqlite dev packages to devref env setup doc * Add user namespace support for libvirt-lxc * Move to oslo.db * api: add serial console API calls v2 * compute: add get\_serial\_console rpc and cells api calls * compute: add get\_serial\_console in manager.py * virt: add method get\_serial\_console to driver * Clean up LOG import in floating\_ips\_bulk v2 api 2014.2.b3 --------- * Update invalid state error message on reboot API * Fix race condition with vif plugging in finish migrate * Fix service groups with zookeeper * xenapi: send chunk terminator on subprocess exc * Add support for ipv6 nameservers * Remove unused oslo.config import * Support image property for config drive * warn against sorting requirements * VMware: remove unused \_get\_vmdk\_path from vmops * virt: use compute.arch constants and validate architectures * Change v3 quota-sets API to v2.1 * always set --no-hosts for dnsmasq * Allow \_poll\_bandwidth\_usage task to hit slave * Add bandwidth usage object * VMware: spawn refactor enlist image * VMware: image user functions for spawn() * Change v3 flavor\_manage API to v2.1 * Port used\_limits & used\_limits\_for\_admin into v2.1 * Add API schema for v2.1 access\_ips extension * Add API schema for v2.1 "rebuild a server" API * Add API schema for v2.1 "update a server" API * Enabled qemu memory balloon stats * Reset task state 'migrating' on nova compute restart * Pass certificate, key and cacert to glanceclient * Add a policy for handling retrospective vetos * Adds Hyper-V soft shutdown implementation * Fix swap\_volumes * Add API schema for v2.1/v3 multiple\_create extension * Return hydrated net info from novanet add/remove\_fixed\_ip calls * Add API schema for v2.1/v3 availability\_zone extension * Add API schema for v2.1/v3 server\_metadata API * Fixes a 
Hyper-V list\_instances localization issue * Adds list\_instance\_uuids to the Hyper-V driver * Change v3 admin\_actions to v2.1 * Change v3 aggregate API to v2.1 * Convert v3 ExtendedAvailabilityZone api to v2.1 * Convert v3 hypervisor plugin to v2.1 * Convert server\_usage v3 plugin to v2.1 API * Convert v3 servers return\_reservation\_id behaviour to v2.1 * the headroom infomation is incomplete * Port volumes extension to work in v2.1/v3 framework * vmwareapi oslo.vmware library integration * Allow forceDelete to delete running instances * Port limits extension to work in v2.1/v3 framework * Port image-size extension to work in v2.1/v3 framework * Port v2 image\_metadata extension to work in v2.1(v3) framework * Port v2 images extension to work in v2.1(v3) framework * Convert migrate\_server v3 plugin to v2.1 * Changes V3 evacuate extension into v2.1 * console: add typed console objects * virt: setup TCP chardevice in libvirt driver * Remove snapshot\_id from \_volume\_snapshot\_create() * Check min\_ram and min\_disk when boot from volume * Add API schema for v2.1 "create a server" API * InstanceNUMAToplogy object create remove uuid param * Change v3 flavor\_access to v2.1 * Convert rescue v3 plugin to v2.1 API * Change v3 security\_groups API to v2.1 * Changes V3 remote\_console extension into v2.1 * Use common get\_instance function in v2 consoles extension * Add API schema for v2.1/v3 user\_data extension * Convert v3 cells API to v2.1 * Convert v3 server metadata plugin to v2.1 * Convert multiple-create v3 plugin to v2.1 * Convert v3 flavor extraspecs plugin to v2.1 * Fix scheduler\_available\_filters help * cmd: add nova-serialproxy service * console: add serial console module * Changes V3 server\_actions extension into v2.1 * Change v3 version API to v2.1 * Change v3 shelve to v2.1 * Process power state syncs asynchronously * Made unassigned networks visible in flat networking * Add functions to setup user namespaced filesystems * Adds nova-idmapshift cli utility * Add idmap to libvirt config * Allow hard reboots when still attempting a soft reboot * Decrease amount of queries while adding aggregate metadata * Adds Hyper-V serial console log * Store original state when suspending * Fix NoopQuotasDriver.get\_settable\_quotas() * Use instance objects consistently in suspend tests * Instance objects: fix indentation issue * libvirt: Add method for getting host NUMA topology * Add instance\_extra table and related objects * Change v3 availability-zone API to v2.1 * Move and generalize decorator serialize\_args to nova.objects.base * Convert v3 certificate API to v2.1 * Make neutronapi use NetworkRequest for allocate\_for\_instance() * Use NetworkRequest objects through to nova-network * Add extension block\_device\_mapping\_v1 for v2.1 * Catch BDM related InvalidBDM exceptions for server create v2.1 * Changes block\_device\_mapping extension into v2.1 * Fix rootwrap for non openstack.org iqn's * Let update\_available\_resource hit slave * Plumb NetworkRequest objects through conductor and compute RPC * Updates available resources after live migration * Convert compute/api to use NetworkRequest object and list * Refactor the servers API to use NetworkRequest * Cells: Update set\_admin\_password for objects * Remove libvirt legacy LVM code * libvirt: reworks configdrive creation * Handle non dict metadata in server metadata V2 API * Fix wrong disk type limitation for disk IO throttling * Use v2.1 URLs instead of v3 ones in API unit tests * VMware: Add in support for CPU shares in event 
of resource contention * VMware: add resource limits for CPU * Refactor admin\_action plugin and test cases * Fix error in log when log exception in guestfs.py * Remove concatenation with translated messages * Port simple\_tenant\_usage into v2.1 * Convert console\_output v3 plugin to v2.1 * GET servers API sorting enhancements common utilities * Add \_security\_group\_ensure\_default() DBAPI method * Fix instance boot when Ceph is used for ephemeral storage * Add NetworkRequest object and associated list * Remove use of str on exceptions * Fix the current state name as 'shutting-down' * Explicitly handle exception ConsoleTypeUnavailable for v2 consoles * Convert v3 server diagnostics plugin to v2.1 * Porting v3 evacuate testcases to v2 * libvirt: Uses correct imagebackend for configdrive * Add v2.1 API router and endpoint * Change v3 keypairs API to v2.1 * Backport V3 hypervisor plugin unit tests to V2 * Remove duplicated negative factors in keypair test * filter: add per-aggregate filter to configure max\_instances\_per\_host * Updated from global requirements * Mask passwords in exceptions and error messages * Make strutils.mask\_password more secure * A minor change to a comments * Check vlan parameter is valid * filter: add per-aggregate filter to configure disk\_allocation\_ratio * Deprecate cinder\_\* configuration settings * Allow attaching external networks based on configurable policy * Fix CellStateManagerFile init to failure * Change v3 extended\_status to v2.1 * Fixes Hyper-V volume discovery exception message * Use default quota values in test\_quotas * libvirt: add validation of migration hostname * Add a Set and SetOfIntegers object fields * Add numa\_topology column to the compute\_node table * Preserve exception text during schedule retries * Change v3 admin-password to v2.1 * Make Object FieldType from\_primitive pass objects * Change V3 access\_ips extension into v2.1 * Update RESP message when failed to create flavor * Cleanup of V2 console output tests and add missing tests * Convert multinic v3 plugin to v2.1 * Change 'changes\_since'/'changes-since' into v2.1 style for servers * Backport v3 multinic tests to v2 * Change ViewBuilder into v2.1 for servers * Change v3 agents API to v2.1 * Change v3 attach\_interface to v2.1 * Backport V3 flavor extraspecs API unit tests to V2 * Return BadRequest instead of UnprocessableEntity for volumes API * Convert create\_backup v3 plugin to v2.1 API * Update instance state after compute service died for rebuilded instance * Make floatingip-ip-delete atomic with neutron * Add v3 versions plugin unit test to v2 * Remove duplicated code in test\_versions * Change v3 hosts to v2.1 * Change v3 extended\_server\_attibutes to v2.1 * Make test\_killed\_worker\_recover faster * Change v3 flavor\_rxtx to v2.1 * fix typo in docstring * libvirt: driver used memory tests cleanup * Avoid refreshing PCI devices on instance.save() * Updated from global requirements * Change v3 flavors to v2.1 * neutronv2: treat instance as object in deallocate\_for\_instance * Fix class name for ServerGroupAffinityFilter * Adds Hyper-V Compute Driver soft reboot implementation * Add QuotaError handling to servers rebuild API * Allow to create a flavor without specifying id * XenAPI: Remove interrupted snapshots * Fix typo in comment * Fix V2 unit tests to test hypervisor API as admin * Create compute api var at \_\_init\_\_ * libvirt: support live migrations of instances with config drives * Change v3 os-user-data extension to v2.1 * Remove duplicated code in 
test\_user\_data * Convert v3 server SchedulerHints plugin to v2.1 * Convert deferred\_delete v3 plugin to v2.1 API * Backport some v3 scheduler hints API UT to v2 API * Change error status code for out of quota to be 403 instead of 413 * Correct seconds of a day from 84400 to 86400 * VMware: add adapter type constants * Fix comment typo * scheduler sends select\_destinations notifications * Fix for volume detach error when use NFS as the cinder backend * objects: Add base test for obj\_make\_compatible() * objects: Fix InstanceGroup.obj\_make\_compatible() * Restore backward compat for int/float in extra\_specs * Convert v3 config drive plugin to v2.1 * Fix sample files miss for os-aggregates * Backport v3 config\_drive API unittest to v2 API * Backport some v3 availability zones API UT to v2 API * Handle non-ascii characters in spawn exception msg * Log warning message if volume quota is exceeded * Remove \_instance\_update usage in \_build\_instance * Treat instance like an object in \_build\_instance * Remove \_instance\_update usage in \_default\_block\_device\_names * Add missing flags to fakelibvirt for migration * Adds tests for Hyper-V Volume utils * Fix ability to generate object hashes in test\_objects.py * Fix expected error details from jsonschema * Extend the docstring for obj\_make\_compatible() with examples * HyperV Driver - Fix to implement hypervisor-uptime * Port os-server-groups extension to work in v2.1/v3 framework * Fix the exception for a nonexistent flavor * Add api extension for new network fields * Use real exceptions for network create and destroy * Support reserving ips at network create time * Adds get\_instance\_disk\_info to compute drivers * Use rfc3986 library to validate URL paths and URIs * Send create.end notification even if instance is deleted * Allow three periodic tasks to hit slave * Fixes Hyper-V unit test path separator issue * Share common test settings in test\_flavor\_manage * Shelve should give guests a chance to shutdown * Rescue should give guests a chance to shutdown * Resize should give guests a chance to shutdown * Power off commands should give guests a chance to shutdown * objects: Make use of utils.convert\_version\_to\_tuple() * tests: fix test\_compute to have predictable service list * libvirt: make sysinfo serial number configurable * Fixes Hyper-V resize down exception * Make usage\_from\_instances consider current usage * VMware: ensure test case for init\_host in driver * Add some v2 agents API tests * Libvirt: Do not raise ENOENT exception * Add missing create() method on SecurityGroupRule object * Add test for get\_instance\_disk\_info to test\_virt\_drivers * Move fake\_quotas and fake\_get\_quotas into a class * Objectify association in neutronapi * Objectify last uses of direct db access in network/floating\_ips * Update migration defaults * libvirt: reduce indentation in is\_vif\_model\_valid\_for\_virt * Fixes Hyper-V boot from volume root device issue * Fixes Hyper-V vm state issue * Imported Translations from Transifex * Share unittest between v2 and v2.1 for hide\_server\_addresses extension * Check compulsory flavor create parameters exist * Treat instance like an object in \_default\_block\_device\_names * Change 'image\_ref'/'flavor\_ref' into v2 style for servers * Change 'admin\_password' into v2 style for servers extension * Image caching tests: use list comprehension * Move \_is\_mapping to more central location * Stop augmenting oslo-incubators default log levels * Track object version relationships * Remove 
final use of glance\_stubs * Removes GlanceClient stubs * Pull transfer module unit tests from glance tests * VMware: remove specific VC support from class VMwareVolumeOps * VMware: remove Host class * Image cache tests: ensure that assertEquals has expected param first * VMware: spawn refactor \_configure\_config\_drive * VMware: refactor spawn() code to build a new VM * VMware: Fix type of VM's config.hardware.device in fake * VMware: Create fake VM with given datastore * VMware: Remove references to ebs\_root from spawn() * VMware: Create VMwareImage object for image metadata * Image caching: update image caching to use objects * Report all objects with hash mismatches in a single go * Include child\_versions in object hashes * Direct-load Instance.fault when lazy-loading * VMware: Remove unused variable in test\_configdrive * Raise HTTPNotFound error from V2 cert show API * Add dict and json methods to VirtNUMATopology classes * virt: helper for processing NUMA topology configuration * Raise Not Implemented error from V2 diagnostics API * Make NovaObjectSerializer work with dicts * Updated from global requirements * neutronv2: treat instance like object in allocate\_for\_instance * nova-network: treat instance like object in allocate\_for\_instance * Treat instance like object in \_validate\_instance\_group\_policy * Treat instance like an object in \_prebuild\_instance * Treat instance like an object in \_start\_building * Add graphviz to list of distro packages to install * Fixes Hyper-V agent force\_hyperv\_utils\_v1 flag issue * ec2: Use S3ImageMapping object * ec2: Add S3ImageMapping object * Remove unused db api methods * Get EC2 snapshot mappings with nova object * Use EC2SnapshotMapping for creating mappings * Add EC2SnapshotMapping object * Fix NotImplementedError in floating-ip-list * filter: add per-aggregate filter to configure max\_io\_ops\_per\_host * Hacking: a new hacking check was added that used an existing number * Fix hacking check for jsonutils * VMware: revert deletion of cleanup\_host * Use flavor in confirm-resize to drop claim * Add new db api get functions for ec2\_snapshot * Partial oslo-incubator sync -- log.py * Add unit tests for libvirt domain creation * Fix Trusted Filter to work with Mt. 
Wilson \`vtime\` * Fix 202 responses to contain valid content * Fix EC2 instance type for a volume backed instance * libvirt: add serial ports config * Split EC2 ID validator to validator per resource type * libvirt: do not fail instance destroy, if mount\_device is missing * libvirt: persist lxc attached volumes across reboots and power down * Resize block device after swap to larger volume * Make API name validation failure deterministic * VMware: spawn refactor add VirtualMachineConfigInfo * libvirt: Fix kwargs for \_create\_image * VMware: fix crash when VC driver boots * baremetal: Remove dependency on libvirt's fetch\_image method * libvirt: Remove unnecessary suffix defaulting * Drop instance\_group\_metadata from the database * Neutron v2 API: fix get\_floating\_ip\_pools * libvirt: Allow specification of default machine type * Fix rebuild with cells * Added hacking check for jsonutils * Consistently use jsonutils instead of specific implementation * Convert network/api.py uses of vif database functions to objects * Convert last use of direct database instance fetching from network api * libvirt: skip disk resize when resize\_instance is False * libvirt: fix \_disk\_resize to make sure converted image will be restored * Backport some v3 certificate API unittest to v2 API * Backport some v3 aggregate API unittest to v2 API * Imported Translations from Transifex * More informative nova-scheduler log after NoValidHost is caught * Remove metadata/metadetails from instance/server groups * Prepend /dev/ to root\_device\_name in get\_next\_device\_name * Lock attach\_volume * Adjust audit logs to avoid negative disk info * Convert network/api.py to use FloatingIP object * Correct some IPAddress DB interaction in objects * docs - Set pbr 'warnerrors' option for doc build * docs - Fix errors, warnings from document generation * Provide a quick way to run flake8 * Add support for select\_destinations in Scheduler client * Create a Scheduler client library * VMware: handle case when VM snapshot delete fails * Use common get\_instance function in v2 attach\_interface * Add some v2 flavor\_manage API tests * Backport v3 api unittest into v2 api for attach\_interface extension * Fix the error status code of duplicated agents * Handle ExternalNetworkAttachForbidden exception * Allow empty volumes to be created * docs - Fix errors, warnings from document generation * docs - Fix exception in docs generation * docs - Fix docstring issues in virt tree * VMware: test\_driver\_api: Use local variables in closures * VMware: Remove ds\_util.build\_datastore\_path() * Use v1 as default for cinder\_catalog\_info * Fix live-migration failure in FC multipath case * Optimize instance\_floating\_address\_get\_all * Enhance PCI whitelist * Add a missing instance=instance in compute/mgr * Correct returned HTTP status code (Use 403 instead of 413) * Fix wrong command for \_rescan\_multipath * add log exception hints in some modules * Fix extension parameters in test\_multiple\_create * Standardize logging for v3 api extensions * Standardize logging for v2 api extensions * Add ListOfDictOfNullableString field type * Enable terminate for EC2 InstanceInitiatedShutdownBehavior * Remove duplicate test of passing glance params * Convert glance unit tests to not use stubs * Add decorator expected\_errors for ips v3 extension * Return 404 instead of 501 for unsupported actions * Return 404 when floating IP pool not found * Makes versions API output deterministic * Work on document structure and doc building * Catch
NeutronClientException when showing a network * Add API schema for v2.1/v3 security\_groups extension * Add API schema for v2.1/v3 config\_drive extension * Remove pre-icehouse rpc client API compat * makes sure correct PCI device allocation * Adds tests for Hyper-V VM Utils * Make nova-api use quotas object for reservations * VMware: implement get\_host\_ip\_addr * Boot an instance with multiple vnics on same network * Optimize db.floating\_ip\_deallocate * Fixes wrong usage of mock.assert\_not\_called() * Code change for nova support cinder client v2 * libvirt: saving the lxc rootfs device in instance metadata * Add method for deallocating networks on reschedule * DB: use assertIsNotNone for unit test * Add expire reservations in backport position * Make network/api.py use Network object for associations * Migrate test\_glance from mox to mock * Add instanceset info to StartInstance response * Adds verbosity to child cell update log messages * Removes unnecessary instructions in test\_hypervapi * Diagnostics: add validation for types * Add missed discoverable policy rules for flavor-manage v3 * Rename rbd.py to rbd\_utils.py in libvirt driver directory * Correct a maybe-typo in pci\_manager * libvirt: make guestfs methods always return list of tuples * Revert "Deallocate the network if rescheduling for * libvirt: volume snapshot delete for network-attached disks * libvirt: parse disk backing chains from domain XML * Handle MacAddressInUseClient exception from Neutron when creating port * Updated from global requirements * Remove instance\_info\_cache\_delete() from conductor * Make spawn\_n() stub properly ignore errors in the child thread work * Update devref out-of-tree policy grammar error * Compute: add log exception hints * Handle NetworkAmbiguous error when booting a new instance with v3 api * Handle FloatingIpPoolNotFound exception in floating ip creation * Add policy on how patches and reviews go hand in hand * Add hacking check for explicit import of \_() * VMware: Do not read opaque type for DVS network * VMware: add in DVS VXLAN support * Network: add in a new network type - DVS * Network: interface attach and detach raised confusing exception * Deprecate metadata\_neutron\_\* configuration settings * Log cleanups for nova.network.neutron.api * Remove ESXDriver from Juno * Only get image location attributes if including locations * Use JSON instead of json in the parameter descriptions * Add a retry\_on\_deadlock to reservations\_expire * docs - Fix doc build errors with SQLAlchemy 0.9 * docs - Fix indentation for RPC API's * docs - Prevent eventlet exception during docs generation * docs - Add an index for the command line utilities * docs - fix missing references * Change LOG.warn to LOG.debug in \_shutdown\_instance * EC2: fixed AttributeError when metadata is not found * Import Ironic scheduler filters and host manager * EndpointNotFound deleting volume backend instance * Fix nova boot failure using admin role for another tenant * docs - Fix docstring issues * Update scheduler after instance delete * Remove duplicate index from block\_device\_mapping table * Fix ownership checking in get\_networks\_by\_uuid * Raises NotImplementedError for LVM migration * Convert network/api.py fixedip calls to use FixedIP object * Convert network/api.py get calls to use Network object * Add extensible resources to resource tracker (2) * Make DriverBlockDevice save() context arg optional * Improved error logging in nova-network for allocate\_fixed\_ip() * Issue multiple SQL statements in 
separate engine.execute() calls * Move check\_image\_exists out of try in \_inject\_data * Fix fake\_update in test\_update\_missing\_server * Add unit tests to cells conductor link * Revert "libvirt: add version cap tied to gate CI testing" * Use Ceph cluster stats to report disk info on RBD * Add trace logging to allocate\_fixed\_ip * Update devref setup docs for latest libvirt on ubuntu * libvirt re-define guest with wrong XML document * Improve logging when python-guestfs/libguestfs isn't working * Update dev env docs on libvirt-dev(el) requirement * Parse unicode cpu\_info as json before using it * Fix Resource tracker should report virt driver stats * Fix \_parse\_datetime in simple tenant usage extension * Add API schema for v2.1/v3 cells API * Fix attaching config drive issue on Hyper-V when migrate instances * Allow to unshelve instance booted from volume * libvirt: add support for guest NUMA topology in XML config * libvirt: remove pointless LibvirtBaseVIFDriver class * libvirt: remove 'vif\_driver' config parameter * libvirt: remove use of CONF.libvirt.virt\_type in vif.py * Handle NotImplementedError in server\_diagnostics v3 api * Remove useless check in \_add\_retry\_host * Initialize Ironic virt driver directory * Live migration is broken for NFS shared storage * Fix ImportError during docs generation * Updated from global requirements * Extend API schema for "create a server" extensions * Enable cloning for rbd-backed ephemeral disks * Add include\_locations kwarg to nova.image.API.get() * Add index for reservations on (deleted, expire) * VMWare Driver - Ignore datastore in maintenance mode * Remove outdated docstring for nova.network.manager * libvirt: remove 3 unused vif.py methods * Turn on pbr's autodoc feature * Remove api reference section in devref * Deduplicate module listings in devref * VMware: Resize operation fails to change disk size * Use library instead of CLI to cleanup RBD volumes * Move libvirt RBD utilities to a new file * Properly handle snatting for external gateways * Only use dhcp if enable\_dhcp is set on the network * Allow dhcp\_server to be set from new field * Set python hash seed to 0 in tox.ini * Make devref point to official devstack vagrant repo * Stop depending on sitepackages libvirt-python * libvirt: driver tests use non-mocked BDMs * Fix doc build errors in models.py * Make several ec2 API tests inherit from NoDBTestCase * Stub out rpc notifications in ec2 cloud unit tests * Add standard constants for CPU architectures * virt: switch order of args to assertEqual in guestfs test * virt: move disk tests into a sub-directory * virt: force TCG with libguestfs unless KVM is enabled in libvirt * Do not pass instances without host to compute API * Pass errors from detach methods back to api proc * libvirt: add tests for \_live\_snapshot and \_swap\_volume methods * libvirt: fill in metadata when launching instances * Increase min required libvirt to 0.9.11 * Rollback quota when confirm resize concurrently completed * API: Enable support for tenant option in nova absolute-limits * libvirt: removing lxc specific disk mapping * Method to filter non-root block device mappings * VMware: remove local variable * Use hypervisor hostname for compute trust level * Remove unused cell\_scheduler\_method * Fix the i18n for some warnings in compute utils * Fix FloatingIP.save() passing FixedIP object to sqlalchemy * Scheduler: throw exception if no configured affinity filter * xenapi: Attach original local disks during rescue * libvirt: remove VIF driver 
classes deprecated in Icehouse * Move logs of restore state to inner logic * Clean nova.compute.resource\_tracker:\_update\_usage\_from\_instances * Fix and Gate on E265 * Log translation hint for nova.api * Fix duplicated images in test\_block\_device\_mapping * Add Hyper-V driver in the "compute\_driver" option description * reduce network down time during live-migration * Augment oslo's default log levels with nova specific ones * Make the coding style consistent with other Controller in plugins/v3 * Fix extra metadata didn't assign into snapshot image * Add i18n log markers in disk api * VMware: improve log message for attachment of CDROM * Raise NotImplemented default-security-group-rule api with neutron * vmwareapi: remove some unused fake vim methods * Correct image\_metadata API use of nova.image.glance * Revert "Add extensible resources to resource tracker" * Update database columns nullable to match model * Updated from global requirements * Make quotas APIv3 extension use Quotas object for create/update * Make quotas APIv2 extension use Quotas object for create/update * Add quota limit create/update methods to Quotas object 2014.2.b2 --------- * libvirt: VM diagnostics (v3 API only) * Add ibmveth model as a supported network driver for KVM * libvirt: add support for memory tuning in config * libvirt: add support for memory backing parameters * libvirt: add support for per-vCPU pinning in guest XML * libvirt: add parsing of NUMA topology in capabilities XML * handle AutoDiskConfigDisabledByImage at API layer * Rollback quota in os\_tenant\_network * Raise specific error of network IP allocation * Convert to importutils * Catch CannotResizeDisk exception when resize to zero disk * VMware: do not cache image when root\_gb is 0 * Turn periodic tasks off in all unit tests * Rename virtutils to the more common libvirt\_utils * Check for resize path on libvirt instance delete * Return status for compute node * servers list API support specify multi-status * Deprecate scheduler prep\_resize * Updated from global requirements * Fix nova cells exiting on db failure at launch * Remove unneeded calls in test\_shelve to start instances * Correct InvalidAggregateAction reason for Xen * Handle a flavor create failed better * Add valid method check for quota resources * VMware: power\_off\_instance support * Add debug log for availability zone filter * Fix typo * Fix last of direct use of object modules * Check instance state before attach/detach interface * Fix error status code for cloudpipe\_update * Fix unit tests related to cloudpipe\_update * Add API schema for v2.1/v3 reset\_server\_state API * Adjust audit logs to avoid negative mem/cpu info * Re-add H803 to flake8 ignore list * Fix nova/pci direct use of object modules * Gate on F402/pep8 * Inject expected results for IBM Power when testing bus devices * Add extensible resources to resource tracker * libvirt: define XML schema for recording nova instance metadata * Sync loopingcall from oslo * Add APIv2 support to make host optional on evacuate * Add differencing vhdx resize support in Hyper-V Driver * Imported Translations from Transifex * Add context as param to cleanup function * Downgrade the warn log in network to debug * Correct use of nova.image.glance in compute API * Keep Migration status in automatic confirm-resize * Removes useless stub of glanceclient create * Remove rescue/unrescue NotImplementedError handle * Add missing foreign key on pci\_devices.compute\_node\_id * Revert "Add missing image to instance booted from 
volume" * Add debug log for pci passthrough filter * Cleanup and gate on hacking E711 and E712 rule * Keep resizing&resized instances when compute init * Commit quota when deallocate floating ip * Remove unnecessary error log in cell API * Remove stubs in favor of mock in test\_policy * Remove translation for debug message * Fix error status code for agents * Remove warn log for over quota * Use oslo.i18n * Cleanup: remove unused argument * Implement methods to modify volume metadata * Minor tweaks to hypervisor\_version to int * update ignore list for pep8 * Add decorator expected\_errors for v3 attach\_interfaces * Add instance to debug log at compute api * Don't truncate osapi\_glance\_link or osapi\_compute\_link prefixes * Add decorator expected\_errors to V3 servers core * Correctly reject request to add lists of hosts to an aggregate * Do not process events for instances without host * Fix Cells ImagePropertiesFilter can raise exceptions * libvirt: remove flawed get\_num\_instances method impl * libvirt: remove unused list\_instance\_ids method * libvirt: speed up \_get\_disk\_over\_committed\_size\_total method * Partial oslo-incubator sync * VMware: Remove unnecessary deepcopy()s in test\_configdrive * VMware: Convert vmops to use instance as an object * VMware: Trivial indentation cleanups in vmops * VMware: use datastore classes in file\_move/delete/exists, mkdir * VMware: use datastore classes get\_allowed\_datastores/\_sub\_folder * VMware: DatastorePath join() and \_\_eq\_\_() * VMware: consolidate datastore code * VMware: Consolidate fake\_session in test\_(vm|ds)\_util * Make BDM dict \_\_init\_\_ behave more like a dict * VMware: support the hotplug of a neutron port * Deallocate the network if rescheduling for Ironic * Make sure that metadata handler uses constant\_time\_compare() * Enable live migration unit test use instance object * Move volume\_clear option to where it's used * move the cloudpipe\_update API v2 extension to use objects * Avoid possible timing attack in metadata api * Move injected\_network\_template config to where it's used * Don't remove delete\_on\_terminate volumes on a reschedule * Defer raising an exception when deleting volumes * Xen: Cleanup orphan volume connections on boot failure * Adds more policy control to cells ext * shelve doesn't work on nova-cells environment * libvirt: add migrateToURI2 method to fakelibvirt * libvirt: fix recent test changes to work on libvirt < 0.9.13 * Update requirements to include decorator>=3.4.0 * Cleanup and gate on hacking E713 rule * libvirt: add version cap tied to gate CI testing * Small grammar fix in libvirt/driver.py. 
fix all occurrences * Correct exception for flavor extra spec create/update * Fixes Hyper-V SCSI slot selection * xenapi: Use netuils.get\_injected\_network\_template * libvirt: Support IPv6 with LXC * Improve shared storage checks for live migration * XenAPI: VM diagnostics for v3 API * Move retry of prep\_resize to conductor instead of scheduler * Retry db.api.instance\_destroy on deadlock * Translations: add LC to all LOG.critical messages * Remove redundant code in Libvirt driver * Virt: fix typo (flavour should be flavor) * Fix and gate on H305 and H307 * Remove unused instance variables from HostState * Send compute.instance.create.end after launched\_at is set * VMware: validate the network\_info is defined * Security groups: add missing translation * Standardization of nova.image.API.download * Catch InvalidAggregateAction when deleting an aggregate * Restore ability to delete aggregate metadata * Nova-api service throws error when SIGHUP is sent * Remove cell api overrides for lock and unlock * Don't mask out HostState details in WeighedHost * vmware: VM diagnostics (v3 API only) * Use pool/volume\_name notation when deleting RBD volumes * Add instanceset info to StopInstance response * Change compute updates from periodic to on demand * Store volume backed snapshot in current tenant * libvirt+lxc: Unmount guest FS from host on error * libvirt: speed up get\_memory\_mb\_used method * libvirt: speed up get\_vcpus method * libvirt: speed up get\_all\_block\_devices method * libvirt: speed up list\_instances method * libvirt: speed up list\_instance\_uuids method * Updated from global requirements * Fix interfaces template for two interfaces and IPv6 * Fix error status code for multinic * libvirt: fix typo in fakelibvirt listAllDomains() * Refactors VIF configuration logic * Add missing test coverage for MultiplePortsNotApplicable compute/api * Make the block device mapping retries configurable * Catch image and flavor exceptions in \_build\_and\_run\_instance * Restore instance flavor info when driver finish\_migration fails * synchronize 'stop' and power state periodic task * Fix more re-definitions and enable F811/F813 in gate * Prepend '/dev/' to supplied dev names in the API * Handle over quota exception from Neutron * Remove pause/unpause NotImplementedError API layer * Add test cases for 2 block\_device functions * Make compute api use util.check\_string\_length * add comment about why snapshot/backup have no lock check * VM diagnostics (v3 API only) * VM diagnostics: add serializer to Diagnostics object * VM diagnostics: add methods to class to update diagnotics * object-ify API v2 availability\_zone extension * object-ify availability\_zones * add get\_by\_metadata\_key to AggregateList object * xenapi: make boot from volume use volumeops * libvirt: Avoid Glance.show on hard\_reboot * Add host\_ip to compute node object * VMware: move fake.py to the test directory * libvirt: convert cpuset XML handling to use set instead of string * virt: add method for formatting CPU sets to strings * Fixes rbd backend image size * Prevent max\_count > 1 and specified ip address as input * Add aggregates.rst to devref index * VMware: virt unrescue method now supports objects * VMware: virt rescue method now supports objects * Remove duplicate python-pip from Fedora devref setup doc * Do not fail cell's instance deletion, if it's missing info\_cache * libvirt: more efficient method to list domains on host * vmwareapi: make method signatures match parent class * Remove duplicate keys from 
dictionaries * virt: split CPU spec parsing code out into helper method * virt: move get\_cpuset\_ids into nova.virt.hardware * Fix duplicate definitions of variables/methods * change the firewall debugging for clarity * VMware: consolidate common constants into one file * Require posix\_ipc for lockutils * hyperv: make method signatures match parent class * Format eph disk with specified format in libvirt * Resolve import dependency in consoleauth service * Add 'anon' kwarg to FakeDbBlockDeviceDict class * Make cells rpc bdm\_update\_or\_create\_at\_top use BDM objects * Improve BlockDeviceMapping object cells awareness * Add support for user\_id based authentication with Neutron * VMware: add in test utility to get correct VM backing * Change instance disappeared during destroy from Error to Warning * VMware: Fix race in spawn() when resizing cached image * VMware: add support for driver method instance\_exists * Object-ify APIv3 agents extension * Object-ify APIv2 agents extension * Avoid re-adding iptables rules for instances that have disappeared * libvirt: Save device\_path in connection\_info when booting from volume * sync periodic\_task fix from incubator * Fix virt BDM \_\_setattr\_\_ and \_\_getattr\_\_ * Handle InstanceUserDataTooLarge at api layer * Updated from global requirements * Mask node.session.auth.password in volume.py \_run\_iscsiadm debug logs * Nova api service doesn't handle SIGHUP properly * check ephemeral disk format at libvirt before use * Avoid referencing stale instance/network\_info dicts in firewall * Use mtu setting from table instead of flag * Add debug log for core\_filter * VMware: optimize VM spawn by caching the vm\_ref after creating VM * libvirt: Add configuration of guest VCPU topology * virt: add helper module for determining VCPU topology * Change the comments of SOFT\_DELETED race condition * Fix bad log message with glance client timeout * Move the instance\_type\_id judgment to the except-block * Update port binding when unshelve instance * Libvirt: Added suffix to configdrive\_path required for rescue * sync policy logging fix from incubator * Sync process utils from olso * Remove instance\_uuids argument to \_schedule * Add \_\_repr\_\_ handler for NovaObjects * Pass instance to \_reschedule rather than instance\_uuid * Pass instance to \_set\_instance\_error\_state * Pass instance to \_error\_out\_instance\_on\_exception * Add APIv3 support to make host optional on evacuate * Move rebuild to conductor and add find host logic * VMware: validate that VM exists on backend prior to deletion * VMware: remove duplicate key from test\_instance dict * ConfigDriveBuilder refactor for tempdir cleanliness * VMware: cleanup the constructors of the compute drivers * Fix wrong lock name for operating instance external events * VMware: remove unused parameter 'network\_info' * VM diagnostics: introduce Diagnostics model object * Fixes internal server error for add/remove tenant flavor access request * add repr for event objects * Sync oslo lockutils to nova * Neutronv2 api does not support neutron without port quota * Be explicit about objects in \_shutdown\_instance() * Pass instance object into \_shutdown\_instance() * Skip none value attributes for ec2 image bdm output * Fixed wrong assertion in test\_vmops.py * Remove a not used function \_get\_ip\_by\_id * make lifecycle event logs more clear * xenapi: make method signatures match parent class * libvirt: make method signatures match parent class * virt: add test helper for checking public driver API 
method names * virt: fix signature of set\_admin\_password method * virt: use context & instance as param names in migrate APIs * virt: add get\_instance\_disk\_info to virt driver API * vmwareapi: remove unused update\_host\_status method * libvirt: remove hack from ensure\_filtering\_rules\_for\_instance * libvirt: remove volume\_driver\_method API * libvirt: add '\_' prefix to remaining internal methods * Imported Translations from Transifex * Fake driver: remove unused method get\_disk\_available\_least * Baremetal driver: remove unused states * Fix nova/network direct use of object modules * Fix rest of API objects usage * Fix rest of compute objects usage * Clean conntrack records when removing floating ip * Updated from global requirements * Enforce task\_state is None in ec2 create\_image stop instance wait loop * Update compute rpcapi tests to use instance object instead of dict * Fix run\_instance() rpc method to pass instance object * Fix error in rescue rpcapi that prevents sending objects * add checksums to udp independent of /dev/vhost-net * Use dot notation to access instance object fields in ec2 create\_image * vmwareapi: remove unused fake vim logout method * vmware: remove unused delete\_disk fake vim method * Revert "Sync revert and finish resize on instance.uuid" * Add test cases for block\_device * Add assert\_called check for "brclt addif" test * Log when nova-conductor connection established * Xen: Remove extraneous logging of type information * Fix agent\_id with string type in API samples files for os-agents v2 * Fix update agent return agent\_id with string for os-agents v3 * VMware: Fix fake raising the wrong exception in \_remove\_file * VMware: refactor get\_datastore\_ref\_and\_name * libvirt: introduce separate class for cpu tune XML config * libvirt: test setting of CPU tuning data * Make Evacuate API use Instance objects * VMware: create utility function for reconfiguring a VM * effectively disable libvirt live snapshotting * Fix exception raised when a requested console type is disabled * Add missing image to instance booted from volume * Use default rpc\_response\_timeout in unit tests * vmware: Use exc\_info when logging exceptions * vmware: Reuse existing StorageError class * vmware: Refactor: fold volume\_util.py into volumeops.py * Use ebtables to isolate dhcp traffic * Replace nova.utils.cpu\_count() with processutils.get\_worker\_count() * Sync log and processutils from oslo * libvirt: add '\_' prefix to host state information methods * libvirt: add '\_' prefix to some get\_host\_\* methods * Deprecate and remove agent\_build\_get\_by\_triple() * Object-ify xenapi driver's use of agent\_build\_get\_by\_triple() * Add Agent object * Move the error check for "brctl addif" * Add API schema for v2.1/v3 quota\_sets API * Add API schema for v2.1/v3 flavors\_extraspecs API * Add API schema for v2.1/v3 attach\_interfaces API * Add API schema for v2.1/v3 remote\_consoles API * Use auth\_token from keystonemiddleware * Use \_set\_instance\_obj\_error\_state in compute manager set\_admin\_password * api: remove unused function * api: remove useless get\_actions() in consoles * Do not allow resize to zero disk flavor * api: remove dead code in WSGI XML serializer * Updated from global requirements * Standardize logging for nova.virt.libvirt * Fix log debug statement in compute manager * Add API schema for v2.1/v3 aggregates API * Fix object code direct use of other object modules * Fix the rest of direct uses of instance module objects * Imported Translations 
from Transifex * Add API schema for v2.1/v3 flavor\_manage API * Forcibly set libvirt uri in baremetal virtual power driver * Synced jsonutils and its dependencies * Sync revert and finish resize on instance.uuid * Object-ify APIv3 availability\_zone extension * Fix bug in TestObjectVersions * libvirt: add '\_' prefix to all get\_guest\_\*\_config methods * libvirt: remove unused 'get\_disks' method * Downgrade some exception LOG messages in the ec2 API * Conductor: remove irrelevant comment * Added statement for ... else * Avoid traceback logs from simple tenant usage extension * Fix detaching pci device failed * Adds instance lock check for live migrate * Don't follow HTTP\_PROXY when talking to localhost test server * Correct the variable name in trusted filter * Target host in evacuate can't be the original one * Add API schema for v2.1/v3 hosts API * Object-ify APIv3 flavor\_extraspecs extension * Object-ify APIv2 flavorextraspecs extension * Catch permission denied exception when update host * Fix resource cleanup in NetworkManager.allocate\_fixed\_ip * libvirt: Support snapshot creation via libgfapi * Allow evacuate from vm\_state=Error * xenapi: reorder volume\_utils * Replace assertTrue/False with assertEqual/NotEqual * Replace assert\* with more suitable asserts in tests * Replace assertTrue/False with assertIn/NotIn * VMware: remove unused code in vm\_util.py * Not count disabled compute node for statistics * Instance and volume cleanup when a build fails * wrap\_instance\_event() shouldn't swallow return codes * Don't replace instance object with dict in \_allocate\_network() * Determine shared ip from table instead of flag * Set reasonable defaults for new network values * Adds network fields to object * Add new fields to the networks table * Log exception if max scheduling attempts exceeded * Make remove\_volume\_connection() use objects * Create lvm.py module containing helper API for LVM * Reduce unit test times for glance * Should not delete active snapshot when instance is terminated * Add supported file system type check at virt layer * Don't store duplicate policies for server\_group * Make exception handling in get\_image\_metadata more specific * live migrate conductor tasks to use nova.image.API * Fix Flavor object extra\_specs and projects handling * Drop support for scheduler 2.x rpc interface * Drop support for conductor 1.x rpc interface * Deprecate glance\_\* configuration settings * Update websocketproxy to work with websockify 0.6 * XenAPI: disable/enable host will be failed when using XenServer * Remove traces of now unused host capabilities from scheduler * Fix BaremetalHostManager node detection logic * Add missing stats info to BaremetalNodeState * Replace assertTrue(not \*) with assertFalse(\*) * Clean nova.compute.api.API:\_check\_num\_instances\_quota * Fix the duplicated image params in a test * Imported Translations from Transifex * Fix "fixed\_ip" parameters in unit tests * Removes the use of mutables as default args * Add API schema for v2.1/v3 create\_backup API * Catch ProcessExecutionError in revoke\_cert * Updated from global requirements * Sync oslo lockutils to nova * devref policy: code is canonical source of truth for API * Log cleanups for nova.virt.libvirt.volume * Log cleanups for nova.virt.libvirt.imagecache * Rename VolumeMapping to EC2VolumeMapping * ec2: Convert to use EC2InstanceMapping object * Add EC2InstanceMapping object for use in EC2 * Add hook for network info update * Enhance and test exception safety in hooks * Object-ify 
server_password APIv3 extension * Object-ify server_password APIv2 extension * Move the fixed_ips APIv2 extension to use objects * Completely object-ify the floating_ips_bulk V2 extension * Add bulk create/destroy functionality to FloatingIP * Cleanup and gate on pep8 rules that are stricter in hacking 0.9 * VMware: update file permissions and mode * Downgrade log level when create network failed * Updated from global requirements * libvirt: Use VIR_DOMAIN_AFFECT_LIVE for paused instances * Initialize objects field in ObjectsListBase class * Remove bdms from run_instance RPC conductor call * Sync "Prevent races in opportunistic db test cases" * Imported Translations from Transifex * Check the network_info obj type before invoke wait function * Migrate nvp-qos to generic name qos-queue * Add test for HypervisorUnavailable on conductor * Test force_config_drive as a boolean as last resort * Add helper functions for getting local disk * Add more logging to nova-network * Make resize raise exception when no valid host found * Fix doc for service list * Handle service creation race by service workers * Add configurable HTTP timeout to cinder API calls * Prevent clean-up of migrating instances on compute init * Deprecate neutron_* configuration settings * Skip migrations test_walk_versions instead of pass * Remove duplicate code in Objects create() function * Fix object change detection * Fix object leak in nova.tests.objects.test_fields.TestObject * Failure during termination should always leave state as error() * Make check_instance_shared_storage() use objects * Save connection info in libvirt after volume connect * Remove unused code from test_compute_cells * libvirt: Don't pass None for image_meta parameter in tests * Revert "Allow admin user to get all tenant's floating IPs" * libvirt: Remove use of db for flavor extra specs in tests * libvirt: Close opened file explicitly * Network: ensure that ports are 'unset' when instance is deleted * Don't translate debug level logs in nova * maint: Fixes wrong docstring of method get_memory_mb_used * Ensure changes to api.QUOTA_SYNC_FUNCTIONS are restored * Fix the wrong dest of 'vlan' option and add new 'vlan_start' option * Add deprecation warning to nova baremetal virt driver * Fixes typo error in Nova * Attach/detach interface to paused instance with affect live flag * Block device API missing translations for exceptions * Enabled swap disk to be resized when resizing instance * libvirt: return the correct instance path while cleanup_resize * Remove the device handling from pci device object * Use new pci device handling code in pci_manager * Separate the PCI device object handling code * xenapi: move find_vbd_by_number into volume utils * Virt: remove unnecesary return code * Fixes hyper-v volume attach when host is AD member * Remove variability from object change detection unit test * Remove XML namespace from some v3 extensions

2014.2.b1
---------

* xenapi: Do not retry snapshot upload on 500 * Fix H401,H402 violations and re-enable gating * Bump hacking to 0.9.x series * Change listen address on libvirt live-migration * Make get_console_output() use objects * Add testing for hooks * Handle string types for InstanceActionEvent exc_tb serialization * Revert "Remove broken quota-classes API" * Revert "Remove quota-class logic from context and make unit tests pass" * Fix cold-migrate missing retry info after scheduling * Disable rescheduling instance when no retry info * Fix infinitely reschedule instance
due to miss retry info * Use VIF details dictionary to get physical\_network * Fix live\_migration method's docstring * Add subnet routes to network\_info when Neutron is used * fix nova test\_enforce\_http\_true unit test * novncproxy: Setup log when start nova-novncproxy * Make sure domain exists before referencing it * Network: add instance to the debug statement * V3 Pause: treat case when driver does not implement the operation * Don't translate debug level logs in nova.virt * Remove duplicate method * websocketproxy: remove leftover debug output * Remove unnecessary else block in compute manager set\_admin\_password * Treat instance objects like objects in set\_admin\_password flow * Move set\_admin\_password tests from test\_compute.py to api/mgr modules * Fix a wrong comment in the code * maint: correct docstring parameter description * libvirt: Remove dated docstring * Add unit tests for ipv4/ipv6 format validation * Cleanup allocating networks when InstanceNotFound is raised * Add test to verify ironic api contracts * VMware: spawn refactor - phase 1 - test for spawn * Revert "Fix migration and instance resize update order" * Simplify filter\_scheduler.populate\_retry() * libvirt: Use os\_command\_line when kernel\_id is set * libvirt: Make nwfilter driver use right filterref * libvirt: convert cpu features attribute from list to a set * Don't log TRACE info in notify\_about\_instance\_usage * xenapi: add tests for find\_bad\_volumes * Revert "Remove traces of now unused host capabilities from scheduler" * Check the length of aggregate metadata * Add out of tree support dev policy * Deprecate instance\_get\_by\_uuid() from conductor * Make metadata password routines use Instance object * Make SecurityGroupAPI use Object instead of instance\_get\_by\_uuid() * Add development policies section to devref * Add read\_only field attribute * Fix api direct use of instance module objects * Fix direct use of block\_device module objects * Fix InstanceActionEvent traceback parameter not serializable * Fix state mutation in cells image filter * libvirt: split and test finish\_migration disk resize * Use no\_timer\_check with soft-qemu * Add missing translation support * Update HACKING.rst to include N320 * Add tests to avoid inconsistent extension names * VMware: spawn refactor - Datastore class * VMware: remove dsutil.split\_datastore\_path * VMware: spawn refactor - DatastorePath class * Updated from global requirements * VMware: Fix memory leaks caused by caches * Allow user to specify image to use during rescue - V3 API changes * VMware: create utility functions * Check if volume is bootable when creating an instance * VMware: remove unused parameters in imagecache * xenapi: virt unrescue method now supports objects * libvirt: virt unrescue method now supports objects * libvirt: virt rescue method now supports objects * xenapi: virt rescue method now supports objects * Remove useless codes for server\_group * Catch InstanceInfoCacheNotFound during build\_instances * Do not replace the aggregate metadata when updating az * Move oslotest to test only requirements * libvirt: merge two utils tests files * libvirt: remove redundant 'libvirt\_' prefix in test case names * xenapi: refactor detach volume * Add API schema for v2.1/v3 migrate\_server API * Adds IVS unit tests for new VIF firewall logic * Don't set CONF options directly in unit tests * Fix docstring typo in need\_legacy\_block\_device\_info * Revert "Partially remove quota-class logic from nova.quotas" * Revert "Remove 
quota\_class params from rest of nova.quota" * Revert "Remove quota\_class db API calls" * Revert "Convert address to str in fixed\_ip\_obj.associate" * String-convert IPAddr objects for FixedIP.attach() * Updated from global requirements * Run instance root device determination fix * xenapi: tidy up volumeops tests * Don't return from a finally block * Support detection of fixed ip already in use * Rewrite nova policy to use the new changes of common policy * Treat instance objects as objects in unrescue API flow * Treat instance objects as objects in rescue API flow * Refactor test\_rescue\_unrescue into compute api/manager unit tests * Sync oslo network utils * Fix EC2 not found errors for volumes and snapshots * xenapi: refactor volumeops attach * xenapi: remove calls to call\_xenapi in volumeops * xenapi: move StorageError into global exception.py * Virt: ensure that instance\_exists uses objects * Use objects through the run\_instance() path * Deprecate run\_instance and remove unnecessary code * Change conductor to cast to build\_and\_run\_instance * Fix migration and instance resize update order * remove cpu feature duplications in libvirt * Add unit test trap for object change detection * Sync periodic\_task from oslo-incubator * VCDriver - Ignore host in Maintenance mode in stats update * Enable flake8 F841 checking * Imported Translations from Transifex * Reverse order of cinder.detach() and bdm.delete() * Correct exception info format of v3 flavor manage * Imported Translations from Transifex * Handle NetworkInUse exception in api layer * Correct exception handling when create aggregate * Properly skip coreutils readlink tests * Record right action name while migrate * Imported Translations from Transifex * Fix for multiple misspelled words * Refactor test to ensure file is closed * VM in rescue state must have a restricted set of actions * versions API: ignore request with a body * xenapi: fix live-migrate with volume attached * Add helper methods to convert disk * XenAPI: Tolerate multiple coalesces * Add helpers to create per-aggregate filters * Ensure live-migrate reverts if server not running * Raise HTTPInternalServerError when boot\_from\_volume with cinder down * Imported Translations from Transifex * [EC2]Correct the return status of attaching volume * Fix security group race condition while creating rule * VMware: spawn refactor - phase 1 - copy\_virtual\_disk * Catch InstanceNotFound exception if migration fails * Inject expected results for IBM Power when testing bus * Fix InstanceActionTestCase on PostgreSQL/MySQL * Fix ReservationTestCase on PostgreSQL * VMware: deprecate ESX driver from virt configuration * Add new ec2 instance db API calls * Remove two unused db.api methods * Fix direct use of aggregate module objects * Fix tests/compute direct use of instance module objects * share neutron admin auth tokens * Fix nova image-show with queued image * Catch missing Glance image attrs with None * Align internal image API with volume and network * Do not wait for neutron event if not powering on libvirt domain * Mask block\_device\_info auth\_password in virt driver debug logs * Remove all mostly untranslated PO files * Payload meta\_data is empty when remove metadata * Handle situation when key not memcached * Fix nova/compute direct use of instance module objects * Address issues with objects of same name * Register objects in more services * Imported Translations from Transifex * Default dhcp lease time of 120s is too short * Add VIF mac address to fixed\_ips in 
notifications * Call \_validate\_instance\_group\_policy in \_build\_and\_run\_instance * Add refresh=True to get\_available\_nodes call in build\_and\_run\_instance * Add better coverage support under tox * remove unneeded call to network\_api on detach\_interface * Cells: Pass instance objects to build\_instances * XenAPI: Add logging information for cache/download duration * Remove spaces from SSH public key comment * Make hacking test more accurate * Fix security group race condition while listing and deleting rules * On rebuild check for null image\_ref * Add a reference to the nova developer documentation * VMware: use default values in get\_info() when properties are missing * VMware: uncaught exception during snapshot deletion * Enforce query order for getting VIFs by instance * Fix typo in comment * Allow admin user to get all tenant's floating IPs * Defer applying iptable changes when nova-network start * Remove traces of now unused host capabilities from scheduler * Add log translation hints * Imported Translations from Transifex * Fix CIDR values denoting hosts in PostgreSQL * Sync common db and db/sqlalchemy * Remove quota\_class db API calls * Remove quota\_class params from rest of nova.quota * Fix wrong quota calculation when deleting a resizing instance * Ignore etc/nova/nova.conf.sample * Fix wrong method name assert\_called\_once * Correct pci resources log * Downgrade log when attach interface can't find resources * Fixes Hyper-V iSCSI target login method * VMware: spawn refactor - phase 1 - fetch\_image * vmware:Don't shadow builtin function type * Partially remove quota-class logic from nova.quotas and test\_quotas * Convert address to str in fixed\_ip\_obj.associate * Accurate exception info in api layer for aggregate * minor corrections to devref rpc page * libvirt: Handle unsupported host capabilities * Fix the duplicated extension summaries * Imported Translations from Transifex * Raise more information on V2 API volumes when resource not found * Remove comments since it's pointless * Downgrade and fix log message for floating ip already disassociated * Fix wrong method name for test\_hacking * Imported Translations from Transifex * Add specific regexp for timestamps in v2 xml * VMWare: spawn refactor - phase 1 - create\_virtual\_disk * VMware: spawn refactor - phase 1 - power\_on\_vm * Move tests into test\_volume\_utils * Tidy up xenapi/volume\_utils.py * Updated from global requirements * VMware: Fix usage of an alternate ESX/vCenter port * VMware: Add check for datacenter with no datastore * Remove unused instance\_update() method from virtapi * Make baremetal driver use Instance object for updates * Rename quota\_injected\_file\_path\_bytes * Imported Translations from Transifex * Fixes arguments parsing when executing command * Remove explicit dependency on amqplib * Deprecate action\_event\_\*() from conductor * Remove conductor usage from compute.utils.EventReporter * Unit test case for more than 1 ephemeral disks in BDM * Network: replace neutron check with decorator * Update links in README * Add mailmap entry * XenAPI: Remove unneeded instance argument from image downloading * XenAPI: adjust bittorrent settings * Fix a minor comments error * Code Improvement * Fix the explanation of HTTPNotFound for cell showing v2 API * Add Nova API Sample file & test for get keypair * Add a docstring to hacking unit tests * Make libvirt driver use instance object for updates * Make vmwareapi/vmops use Instance object for updates * Convert xenapi/vmops uses of 
instance\_update to objects * Make xenapi agent code use Instance object for updates * Check object's field * Use Field in fixed\_ip * Remove logging in libvirt \_connect\_auth\_cb to avoid eventlet locking * Fix v3 API extension names for camelcase * VMware: prevent image snapshot if no root disk defined * Remove unnecessary cleanup in test * Raise HTTPForbidden from os-floating-ips API rather than 404 * Improve hacking rule to avoid author markers * Remove and block DB access in dhcpbridge * Improve conductor error cases when unshelving * Dedup devref on unit tests * Shrink devref.unit\_tests, since info is in wiki * Fix calls to mock.assert\_not\_called() * VMware: reduce unit test times * Fix wrong used ProcessExecutionError exception * Clean up openstack-common.conf * Revert "Address the comments of the merged image handler patch" * Remove duplicated import in unit test * Fix security group list when not defined for an instance * Include pending task in log message on skip sync\_power\_state * Make cells use Fault obj for create * libvirt: Handle \`listDevices\` unsupported exception * libvirt: Stub O\_DIRECT in test if not supported * Deprecate instance\_fault\_create() from conductor * Remove conductor usage from add\_instance\_fault\_from\_exc() * Add create() method to InstanceFault object * Remove use of service\_\* conductor calls from xenapi host.py * Updated from global requirements * Optimize validate\_networks to query neutron only when needed * Remove quota-class logic from context and make unit tests pass * VMware: spawn refactor - phase 1 - execute\_create\_vm * xenapi: fixup agent tests * Don't translate debug level logs in nova.spice, storage, tests and vnc * libvirt: Refresh volume connection\_info after volume snapshot * Fix instance cross AZ check when attaching volumes * Raise descriptive error for over volume quota * Fix broken version responses * Don't translate debug level logs in objectstore, pci, rdp, servicegroup * Don't translate debug level logs in cloudpipe, hacking, ipv6, keymgr * Don't translate debug level logs in nova.cert, console and consoleauth * Don't translate debug level logs in nova.cmd and nova.db * Don't translate debug level logs in nova.objects * Don't translate debug level logs in nova.compute * Fix bad Mock calls to assert\_called\_once() * VCDriver - No longer returns uptime due to multiple hosts * Make live\_migration use instance objects * wrap\_check\_security\_groups\_policy is already defined * Updated from global requirements * Don't translate debug level logs in nova.conductor * Don't translate debug level logs in nova.cells * Use strtime() specific timestamp regexp * Use datetime object for fake network timestamps * Use datetime object for stub created\_at timestamp * Verify created\_at cloudpipe timestamp is isotime * Verify next-available limit timestamps are isotime * Verify created/updated timestamps are isotime * Use timeutils.isotime() in images view builder * Use actual fake timestamp in API templates * Normalize API extension updated timestamp format * Regenerate API samples for GET /extensions * objects: remove unused utils module * objects: restore some datetime field comments * Add fault wrapper for rescue function * Add x-openstack-request-id to nova v3 responses * Remove unnecessary wrapper for 5 compute APIs * Update block\_device\_info to contain swap and ephemeral disks * Hacking: add rule number to HACKING.rst * Create the image mappings BDMs earlier in the boot * Delete in-process snapshot when deleting instance * 
Imported Translations from Transifex * Fixed many typos * VMware: remove unneeded code * Rename NotAuthorized exception to Forbidden * Add warning to periodic\_task with interval 0 * Fix typo in unit tests * Remove a bogus and unnecessary docstring * Don't translate debug level logs in nova.api * Don't translate debug level logs in nova.volume * VMware: remove duplicate \_fake\_create\_session code * libvirt: Make \`fakelibvirt.libvirtError\` match * ec2utils: Use VolumeMapping object * ec2: create volume mapping using nova object * Add VolumeMapping object for use in EC2 * Add new ec2 volume db API calls * Remove legacy block device usage in ec2 API * Deprecate instance\_get\_active\_by\_window\_joined() from conductor * Deprecate instance\_get\_all\_by\_filters() from conductor * Don't translate debug level logs in nova.network * Fix bad param name in method docstring * Nova should pass device\_id='' instead of None to neutron.update\_port() * Set default auth\_strategy to keystone * Support multi-version pydevd * replace NovaException with VirtualInterfaceCreate when neutron fails * Spice proxy config setting to be read from the spice group in nova.conf * xenapi: make auto\_config\_disk persist boot flag * Deprecate compute\_unrescue() from conductor * Deprecate instance\_destroy() from conductor * libvirt: fix comment for get\_num\_instances * Fix exception message being changed by nested exception * DescribeInstances in ec2 shows wrong image-message * Imported Translations from Transifex * Remove unused nova.crypto.compute\_md5() * VMware: spawn refactor - phase 1 - get\_vif\_info * Remove comments and to-do for quota inconsistency * Set the volume access mode during volume attach * Fix a typo in compute/manager::remove\_volume\_connection() * XenAPI: Use local rsync rather than remote if possible * Delete image when backup operation failed on snapshot step * Fix migrate\_instance\_\*() using DB for floating addresses * Ignore errors when deleting non-existing vifs * Use eventlet.tpool.Proxy for DB API calls * Improve performance for checking hosts AZs * Correct the log in conductor unshelve\_instance * Imported Translations from Transifex * Make instance\_exists() take an instance, not instance\_name * Xen: Retry plugin call after connection reset * Remove metadata's network-api dependence on the database * Add helper method to determine disk size from instance properties * Deprecate nova-manage flavor subcommand * Updated from global requirements * Imported Translations from Transifex * VMware: remove unused variable * Scheduler: enable scheduler hint to pass the group name * Loosen import\_exceptions to cover all of gettextutils * Don't translate debug level scheduler logs * VMWare - Check for compute node before triggering destroy * Update version aliases for rpc version control * make ec2 errors not useless * VMware: ensure rescue instance is deleted when instance is deleted * Ensure info cache updates don't overwhelm cells * Remove utils.reset\_is\_neutron() to avoid races * Remove unnecessary call to fetch info\_cache * Remove deprecated config option names: Juno Edition * Don't overwrite instance object with dict in \_init\_instance() * Add specific doc build option to tox * Fix up import of conductor * Use one query instead of two for quota\_usages * VMware: Log additional details of suds faults * Disable nova-manage network commands with Neutron V2 * Fix the explanations of HTTPNotFound for keypair's API * remove unneeded call to network\_api on rebuild\_instance * Deprecate 
network\_migrate\_instance\_\* from conductor * Deprecate aggregate\_host\_\* operations in conductor * Convert instance\_usage\_audit() periodic task to objects * Return to using network\_api directly for migrations * Make \_is\_multi\_host() use objects * Remove unneeded call to fetch network info on shutdown * Instance groups: add method get\_by\_hint * Imported Translations from Transifex * GET details REST API next link missing 'details' * Don't explode if we fail to unplug VIFs after a failed boot * nit: correct docstring for FilterScheduler.schedule\_run\_instance * Revert "Fix network-api direct database hits in metadata server" * ec2: use BlockDeviceMappingList object * ec2: use SecurityGroup object * ec2: get services using ServiceList object * ec2: remove db.instance\_system\_metadata usage * Remove nova-clear-rabbit-queues * Allow -1 as the length of "get console output" API * Fix AvailabilityZone check for hosts in multiple aggregates * Move \_get\_locations to module level plus tests * Define constants for the VIF model types * Imported Translations from Transifex * Make aggregate host operations use Aggregate object * Convert poll\_rescued\_instances() periodic task to objects * Make update\_available\_resource() use objects * Add get\_by\_service() method to ComputeNodeList object * Add with\_compute\_node to service\_get() * Make \_get\_compute\_info() use objects * Pass configured auth strategy to neutronclient * Imported Translations from Transifex * Make quota rollback checks more robust in conductor tests * Updated from global requirements * Remove duplicate code from nova.db.sqlalchemy.utils * Downgrade the log level when automatic confirm\_resize fails * Refactor unit tests for image service CRUD * Finish \_delete\_instance() object conversion * Make detach\_volume() use objects * Add lock on API layer delete floating IP * ec2: Convert instance\_get\_by\_uuid calls to objects * Fix network-api direct database hits in metadata server * Scheduler: remove test scheduling methods that are not used * Add info\_cache as expected attribute when evacuate instance * Make compute manager use network api method return values * Allow user to specify image to use during rescue - V2 API changes * Allow user to specify image to use during rescue * Use debug level logging in unit tests, but don't save them * Update user\_id length to match Keystone schema in volume\_usage\_cache * Avoid the possibility of truncating disk info file * Read deleted instances during lifecycle events * Add RBAC policy for ec2 API security groups calls * compute: using format\_message() to convert exception to string * support local debug logging * Fix bug detach volume fails with "KeyError" in EC2 * Fix straggling uses of direct-to-database queries in nova-network * Xen: Do not resize root volumes * Remove mention of nova-manage.conf from nova-manage.rst * XenAPI: Add host information to glance download logs * Check image exists before calling inject\_data * xenapi: Cleanup tar process on glance error * Missing catch InstanceNotFound in v3 API * Recover from POWERING-\* state on compute manager start-up * Remove the unused \_validate\_device\_name() * Adds missing expected\_errors for V3 API multinic extension * Correct test boundary for libvirt\_driver.get\_info * Updated from global requirements * Update docs to reflect new default filters * Enable ServerGroup scheduler filters by default * Revert "Use debug level logging during unit tests" * Remove redundant tests from Qcow2TestCase * libvirt: 
remove_logical_volumes should remove each separately * VMware: Fixes the instance resize problem * Fix anti-affinity server-group boot failure * Nova utils: add in missing translation * Add exception handling in "nova diagnostics" * mark vif_driver as deprecated and log warning * Revert object-assuming changes to _post_live_migration() * Revert object-assuming changes to _post_live_migration() * libvirt: optimize pause mode support * Check for None or timestamp in availability zone api sample * Refactor Network API * Require admin context for interfaces on ext network * remove redundant copy of test_cache_base_dir_exists * Make sure leases are maintained until release * Add tests for remaining expected conductor exceptions * Fix Jenkins translation jobs * libvirt: pause mode is not supported by all drivers * Reduce config access in scheduler * VMWare: add power off vm before detach disk during unrescue * Reduce logging in scheduler * xenapi: add a test for _get_partitions * Refactor network_utils to new call_xenapi pattern * Sync request_id middleware bug fix from oslo * Make example 'entry_points' parameter a dictionary * Localized error exception message on delete host aggregate * Note that XML support *may* be removed * Change errors_out_migration decorator to work with RPC * low hanging fruit oslo-incubator sync * Fix description of ServerGroupAffinityFilter * Added test cases in ConfKeyManagerTestCase to verify fixed_key * Moved the registration of lifecycle event handler in init_host() * Change NotFound to InstanceNotFound in server_diagnostics.py * Remove unnecessary passing of task_state to check_instance_state * Rename instance_actions v3 to server_actions * Drop nova-rpc-zmq-receiver man-page * Correct the keypairs-get-resp.json API sample file * Make hypervisor_version an int in fakeVirt driver * Ensure network interfaces are in requested order * Reserve 10 migrations for backports * XenAPI: Calculate disk_available_least * Open Juno development

2014.1.rc1
----------

* Fix getting instance events on subsequent attempts * Improved logs for add/remove security group rules * VMware: remove double import * VMware: clean up VNC console handling * Make conductor expect ActionEventNotFound for action methods * Remove zmq-receiver from setup.cfg * Add a note about deprecated group filters * Fix the section name in CONTRIBUTING.rst * Fix display of server group members * Add new style instance group scheduler filters * Automatically create groups that do not exist * Add InstanceGroup.get_by_name() * Remove unnecessary check for CONF.notify_on_state_change * Add nova.conf.sample to gitignore * Use binding:vif_details to control firewall * Disable volume attach/detach for suspended instances * Updated from global requirements * Persist image format to a file, to prevent attacks based on changing it * Add test cases for validate_extra_spec_keys * Catch InstanceInLocked exception for rescue and instance metadata APIs * Imported Translations from Transifex * Make 'VDI too big' more verbose * Use osapi_glance_link_prefix for image location header * postgres incompatibility in InstanceGroup.get_hosts() * Add missing test for None in sqlalchemy query filter * Use instance data instead of flavor in simple_tenant_usage extension * Sync oslo imageutils, strutils to Nova * Use correct project/user id in conductor.manager * fix the extension of README in etc/nova * Tell pip to install packages it sees globally * Change exception type from
HTTPBadRequest to HTTPForbidden * Don't attempt to fill faults for instance\_list if FlavorNotFound * Bypass the database if limit=0 for server-list requests * Fix availability-zone option miss when creates an instance * No longer any need to pass admin context to aggregate DB API methods * Updated Setting up Developer Environment for Ubuntu * Change libvirt close callback to use green thread * Re-work how debugger CLI opts are registered * Imported Translations from Transifex * \_translate\_from\_glance() can cause an unnecessary HTTP request * Add UNSHELVING and RESCUING into IoOPSFilter consideration state * VMware: fix booting from volume * Do not add current tenant to private flavor access * Disable oslo.messaging debug logs * Update vm\_mode when rebuilding instance with new image * VMware: fix list\_instances for multi-node driver * VMware: Add utility method to retrieve remote objects * Use project/user from instance for quotas * Refactors unit tests of image service detail() * Refactors nova.image.glance unit tests for show() * Revert deprecation warning on Neutron auth * V2 API: remove unused imports * Change HTTPUnprocessableEntity to HTTPBadRequest * Rename \_post\_live\_migration instance\_ref arg * Add a decorator decorator that checks func args * Updated from global requirements * Instance groups: cleanup * Use the list when get information from libvirt * Remove unused quota\_\* calls from conductor * Use correct project/user for quotas * Include proper Content-Type in the HTTP Headers * Fix inconsistent quota usage for security group * Handling unlimited values when updating quota * Fix service API and cells * Remove unnecessary stubbing in test\_services * InvalidCPUInfo exception added to except block * VMware: fix exception when no objects are returned * Don't allow empty or 0 volume size for images * Wait till message handling is done on service stop * Remove PciDeviceList usage in pci manager * Fix the rpc module import in the service module * Revert "VMware Driver update correct disk usage stat" * Catch HostBinaryNotFound exception in V2 API * Ignore InstanceNotFound while getting console output * Raise error on nova-api if missing subnets/fixed\_ips on networks/port * Fix the explanations of HTTPNotFound for new APIs * Remove the nova.config.sample file * Refuse to block migrate instances with config drive * Include next link when default limit is reached * Catch NotImplementedError on Network Associate * VMware: add a file to help config the firewall for vnc * Change initial delay for servicegroup api reporting * Fix KeyError if neutron security group is not TCP/UDP/ICMP and no ports * Prevent rescheduling on block device failure * Check if nfs/glusterfs export is already mounted * Make compute API resize methods use Quotas objects * Remove commented out code in test\_cinder\_cloud * Update quantum to neutron in comment * Add deleted\_at attribute in glance stub on delete() * Add API sample files of "unshelve a server" API * Remove unused method from fake\_network.py * Don't refresh network cache for instances building or deleting * GlanceImageService static methods to module scope * Remove XenAPI driver deprecation warning log message * VMware: bug fix for host operations when using VMwareVCDriver * xenapi: boot from volume without image\_ref * Use HTTPRequestV3 instead of HTTPRequest in v3 API tests * Cells: Send instance object for instance\_delete\_everywhere * Fix "computeFault" when v3 API "GET /versions/:(id)" is called * VMware: ensure that the task 
completed for resize operation * Change parameters of add\_timestamp in ComputeDriverCPUMonitor class * Cells API calls return 501 when cells disabled * Add version 2.0 of conductor rpc interface * Added missing raise statement when checking the config driver format * Make NovaObject report changed-ness of its children * Increase volume creation max waiting time * Remove action-args from nova-manage help * VMware: fix rescue disk location when image is not linked clone * Fix comment for block\_migration in nova/virt/libvirt/driver.py * Don't import library guestfs directly * Correct inheritance of nova.volume.cinder.API * VMware: enable booting an ISO with root disk size 0 * Remove bad log message in get\_remote\_image\_service * Raise NotImplementedError in NeutronV2 API * Remove block\_device\_mapping\_destroy() from conductor API * Make sure instance saves network\_info when we go ACTIVE * Fix sqlalchemy utils test cases for SA 0.9.x * Fix equal\_any() DB API helper * Remove migration\_update() from conductor API * Remove instance\_get() from conductor API * Remove aggregate\_get\_by\_host() from conductor API * add support for host driver cleanup during shutdown * Add security\_group\_rule to objects registry * Remove aggregate\_get() from conductor API * Delete meaningless lines in test\_server\_metadata.py * Imported Translations from Transifex * Move log statement to expose actually info\_cache value * Fix input validation for V2 API server group API extension * Adds test for rebuild in compute api * Specify spacing on periodic\_tasks in manager.py * network\_info cache should be cleared before being rescheduled * Don't sync [system\_]metadata down to cells on instance.save() * Fixes the Hyper-V agent individual disk metrics * VMware: remove unused code (\_delete method in vmops.py) * Fix docstring for shelve\_offload\_instance in compute manager * Block database access in nova-network binary * Make nova-network use conductor for security groups refresh * Make nova-network use quotas object * Reverts change to default state\_path * Fix raise\_http\_conflict\_for\_instance\_invalid\_state docstring * Cells: Pass instance objects to update/delete\_instance\_metadata * Don't detach root device volume * Revert "Adding image multiple location support" * Revert "Move libvirt RBD utilities to a new file" * Revert "enable cloning for rbd-backed ephemeral disks" * Add helper method for injecting data in an image * Add helper method for checking if VM is booting from a volume * Libvirt: Repair metadata injection into guests * Make linux\_net use objects for last fixed ip query * Add get\_by\_network() to FixedIPList * Update aggregate should not allow duplicated names * Recover from REBOOT-\* state on compute manager start-up * VMware: raise an exception for unsupported disk formats * VMware: ensure that deprecation does not appear for VC driver * rename ExtensionsResource to ExtensionsController * Ensure is\_image\_available handles V2 Glance API * libvirt: fix blockinfo get\_device\_name helper * Log Content-Type/Accept API request info * Remove the docker driver * xenapi: Speed up tests by not waiting on conductor * Updated from global requirements * xenapi: Fix test\_rescue test to ensure assertions are valid * VMware: image cache aging * Add py27local tox target * Fix broken API os-migrations * Catch FloatingIpNotFoundForHost exception * Fix get\_download\_hander() typo * Handle IpAddressGenerationClient neutron * Delete ERROR+DELETING VMs during compute startup * VMware: delete vm 
snapshot after nova snapshot * Fix difference between mysql & psql of flavor-show * Add version 3.0 of scheduler rpc interface * Make libvirt wait for neutron to confirm plugging before boot * Task cleanup_running_deleted_instances can now use slave * Do not add HPET timer config to non x86 targets * Make test computations explicit * Instance groups: only display valid instances for policy members * Don't allow reboot when instance in rebooting_hard * VMware: add missing translations * Fix typo and add test for refresh_instance_security_rules * Add declaration of 'refresh_instance_security_rules' to virt driver * Remove mention of removed dhcp_options_enabled * Fix compute_node stats * Fix: Unshelving an instance uses original image * Noted that tox is the preferred unit tester * Updated development.environment.rst * Use instance object instead of _instance_update() * Remove compute virtapi BDM methods * enable cloning for rbd-backed ephemeral disks * Move libvirt RBD utilities to a new file * Fixup debug log statements in the nova compute manager * Use debug level logging during unit tests * Fix debug message formatting in server_external_events * VMware: VimException __str__ attempts to concatenate string to list * Mark ESX driver as deprecated * Volume operations should be blocked for non-null task state * xenapi: fix spawn servers with ephemeral disks * Fixes NoneType vcpu list returned by Libvirt driver * Add conversion type to LOG.exception's string * Remove compute API get_instance_bdms method * Move run_instance compute to BDM objects * Move live migration callbacks to BDM objects * Instance groups: validate policy configuration * Add REST API for instance group api extension * VMware: boot from iso support * Store neutron port status in VIF model * Correct network_model tests and __eq__ operator * Make network_cache more robust with neutron * Error out failed migrations * Fix BDM legacy usage with objects * Fix anti-affinity race condition on boot * Initial scheduler support for instance_groups * Add get_hosts to InstanceGroup object * Add instance to instance group in compute.api * Add add_members to InstanceGroup object * Remove run-time dependency on fixtures module by the nova baremetal * Make compute manager prune instance events on delete and migrate * Make compute manager's virtapi support waiting for events * Add os-server-external-events V3 API * Add os-server-external-events API * Add external_instance_event() method to compute manager * Fix invalid vim call in vim_util.get_dynamic_properties() * Rescue API handle NotImplementedError * VMware: Add a test helper to mock the suds client * VMware: Ensure test VM is running in rescue tests * Move _poll_volume_usage periodic task to BDM objects * Move instance_resize code paths to BDM objects * Make swap_volume code path use BDM objects * Fix log messages typos in rebuild_instance function * Move detach_volume and remove_vol_connection to BDM objects * Move instance delete to new-world BDM objects * VMware ESX: Boot from volume must not relocate vol * Fix development environment docs for redhat-based systems * neutron_metadata_proxy_shared_secret should not be written to log file * VMware: create datastore utility functions * Address the comments of the merged image handler patch * Ignore the image name when booting from volume

2014.1.b3
---------

* Fix typo in devref * VMware: refactor _get_volume_uuid * Add return value to some network API methods * Fixing host_ip
configuration help message * No longer call check\_uptodate.sh in pep8 * notifier middleware broken by oslo.messaging * regenerate the config file to support 1.3.0a9 * Add doc update for 4 filters which is missing in filter\_scheduler.rst * Remove 3 unnecessary variables in scheduler * Adding image multiple location support * Move all shelve code paths to BDM objects * Move rebuild to BDM objects * sync sslutils to not conflict with oslo.messaging * Accurate comment in compute layer * Refactor xenapi/host.py to new call\_xenapi pattern * Add a missing space in a log message * VMware: iscsi target discovery fails while attaching volumes * Remove warn log in quota function on API layer * Sync the latest DB code from oslo-incubator * Prevent thrashing when deploying many bm instances * Support configuring libvirt watchdog from flavors * Add watchdog device support to libvirt driver * Remove extra space at the end of help string * Port libvirt copy\_image tests to mock * Updated from global requirements * Sync latest Guru Meditation Reports from Oslo * Skip sqlite-specific tests if sqlite is not configured * VMware: add in debug information for network selection * vmwareapi:Fix nova compute service down issue when injecting pure IPv6 * Make compute use quota object existing function * Fixes api samples for V2 os-assisted-volume-snapshots * Raise exception if volume snapshot id not found instead of return * Added os-security-groups prefix * VMware Driver update correct disk usage stat * attach/detach interface should raise exception when instance is locked * Restore get\_available\_resource method in docker driver * Make compute manager use InstanceInfoCache object for deletes * Deprecate conductor instance\_type\_get() and remove from VirtAPI * Make restore\_instance pass the Instance object to compute manager * Use uuid instead of name for lvm backend * Adds get\_console\_connect\_info API * Remove log\_handler module from oslo-incubator sync * Remove deleted module flakes from openstack-common.conf * When a claim is rejected, explain why * Move xenapi/agent.py to new call\_xenapi style * xenapi plugins: Make sure subprocesses finish executing * Update Oslo wiki link in README * Refactor pool.py to remove calls to call\_xenapi * Move vbd plug/unplug into session object * xenapi: make session calls more discoverable * Make error notifications more consistent * Adds unit test for etc/nova/policy.json data * Support IPv6 when booting instances * xenapi: changes the debug log formatting * libvirt: raises exception when attempt to resize disk down * xenapi: stop destroy\_vdi errors masking real error * Make resource\_tracker use Flavor object * Make compute manager use Flavor object * Make baremetal driver use Flavor object instead of VirtAPI * Sync latest config file generator from oslo-incubator * Fixes evacuate doesn't honor enable password conf for v3 * Removed copyright from empty files * Fix the explanations of HTTPNotFound response * VMware: support instance objects * Add support for tenant\_id based authentication with Neutron * Remove and recreate interface if already exists * Prevent caller from specifying id during Aggregate.create() * Enable flake8 H404 checking * Imported Translations from Transifex * Fix logic for aggregate\_metadata\_get\_by\_host\_with\_key test case * Use oslo-common's logging fixture * Re-Sync oslo-incubator fixtures * Updated from global requirements * Update pre\_live\_migration to take instance object * Remove unused method inject\_file() * Remove db query 
from deallocate\_fixed\_ip * update deallocate\_for\_instance to take instance obj * Update server\_diagnostics to use instance object * Move the metrics update to get\_metrics * Unmount the NFS and GlusterFS shares on detach * Add a caching scheduler driver * libvirt: image property variable already defined * Replaces exception re-raising in Hyper-V * Remove blank space after print * VMware: add instance detail to detach log message * libvirt: Enable custom video RAM setting * Remove trailing comma from sample JSON * Add pack\_action\_start/finish helper to InstanceAction object * Rewrite InstanceActionEvent object testcase using mock * Clean up \_make\_\*\_list in object models to use base.obj\_make\_list * libvirt: remove explicit /dev/random rng default * Document virt driver methods that take Instance objects * Make interface attach and detach use objects * Pass instance object to soft\_delete() and get\_info() * libvirt: setting a correct driver name for iscsi volumes * libvirt: host specific virtio-rng backend * Fix HTTP methods for test\_attach\_interfaces * Fix the calls of webob exception classes * VMware: remove unused parameter from \_wait\_for\_task * Downgrade the log level for floating IP associate * Removing redundant validation for rebuild request * VMware: add a test for driver capabilities * Catch HostBinaryNotFound exception when updating a service * VMware: ensure that datastore name exists prior to deleting disk * Move compute's \_get\_instance\_volume\_block\_device\_info to BDM objects * Use disk\_bus and device\_type in attaching volumes * Add device bus and type to virt attach\_volume call * Make volume attach use objects * compute: invalid gettext message format * VMware: fix the VNC port allocation * VMware: fix datastore selection when token is returned * Hyper-V log cleanups * vmware: driver races to create instance images * Introduce Guru Meditation Reports into Nova * Updated from global requirements * Revert "VMware: fix race for datastore directory existence" * Use instance object for delete * Update ubuntu dev env instructions * VMware: fix race for datastore directory existence * libvirt: adding a random number generator device to instances * Add 'use\_slave' to instance\_get\_all\_by\_filter in conductor * Fix the validation of flavor\_extraspecs v2 API * Make webob.exc.HTTPForbidden return correct message * Use image from the api in run\_instance, if present * Remove unused variables in the xenapi.vmops module * Describe addresses in ec2 api broken with neutron * Cleanup v3 test\_versions * Fix import order in log\_handler * Emit message which merged user-supplied argument in log\_handler * Adds service request parameter filter for V3 API os-hosts request * Fix comment typo in nova/compute/api.py * stop throwing deprecation warnings on init * Remove broken quota-classes API * VMware: fix instance lookup against vSphere * Add a new compute API method for deleting retired services * Fix instance\_get\_all\_by\_host to actually use slave * Periodic task poll\_bandwidth\_usage can use slave * Partially revert "XenAPI: Monitor the GC when coalescing" * Mark XML as deprecated in the v2 API * adjust version definition for v3 to be only json * Fix option indenting in compute manager * Adds create backup server extension for the V3 API * Catch InstanceNotFound exceptions for V2 API instance\_actions * Sync log.py from oslo * Make floating\_ips module use FloatingIP for associations * Remove \_\_del\_\_ usage in vmwareapi driver * Fixed spelling errors in nova * 
LibVirt: Disable hairpin when using Neutron * VMware: optimize instance reference access * Serialize the notification payload in json * Add resource tracking to unshelve\_instance() * Typo in the name 'libvirt\_snapshot\_compression' * Refactor driver BDM attach() to cover all uses * Fix assertEqual parameter order post V3 API admin-actions-split * Fix copyright messages after admin actions split for V3 API * Catch InstanceNotFound exceptions for V2 API virtual interfaces * Correct the assert() order in test\_libvirt\_blockinfo * Use disk\_bus when guessing the device name for vol * libvirt: add virtio-scsi disk interface support * libvirt: configuration element for virtual controller * VMware: factor out management of controller keys and unit numbers * Remove unused notifier and rpc modules from oslo sync * Imported Translations from Transifex * Remove XML support from schemas v3 * Treat port attachment failures correctly * Add experimental warning for Cells * Add boolean convertor to "create multiple servers" API * VMware: prevent race for vmdk deletion * VMware: raise more specific exceptions * Disable IGMP snooping on hybrid Linux bridge * libvirt: remove retval from libvirt \_set\_host\_enabled() * VMware: remove unused class * compute: format\_message is a method not an attribute * MetricsWeigher: Added support of unavailable metrics * Fix incorrect kwargs 'reason' for HTTPBadRequest * Fix the indents of v3 API sample docs * Refactor get\_iscsi\_initiator to a common location * Fix compute\_node\_update() compatibility with older clients * XenAPI: Add the mechanism to attach a pci device to a VM * Remove underscore for the STATE\_MAP variable * XenAPI: Add the support for updating the status of the host * libvirt: support configurable wipe methods for LVM backed instances * Fix InstanceNotFound error in \_delete\_instance\_files * Ensure parent dir exists while injecting files * Convert post\_live\_migration\_at\_destination to objects * Convert remove\_fixed\_ip\_to\_instance to objects * Convert add\_fixed\_ip\_to\_instance to objects * Fix invalid facilities documented in rootwrap.conf * VMware: improve unit test time * Replace assertEqual(None, \*) with assertIsNone in tests * Add comment/doc about utils.mkfs in rootwrap * Add mkfs to the baremetal-deploy-helper rootwrap * libvirt-volume: improve unit test time * Move consoleauth\_manager option into nova.service and fix imports * libvirt: improve unit test time * Imported Translations from Transifex * Make is\_neutron() thread-safe * Update the mailmap * Rewrite InstanceAction object test cases using mock * Make floating\_ips module use FloatingIP for updates * Make floating\_ips module use FloatingIP for (de-)allocations * Make floating\_ips module use FloatingIP for all get queries * Make floating\_ips module use Service object * Make floating\_ips module use Instance object * Make floating\_ips module use Network object * Make floating\_ips module use FixedIP object * Fix break in vm\_vdi\_cleaner after oslo changes * Fixes the Hyper-V VolumeOpsTestCase base class * libvirt: Uses available method get\_host\_state * Add V3 api for pci support * Update docstring for baremetal opportunistic tests * Fix upper bound checking for flavor create parameters * Fixed check in image cache unit test * Count memory and disk slots once in cells state manager * changed quantum to neutron in vif-openstack * Convert unrescue\_instance to objects * Don't allow compute\_node free\_disk\_gb to be None * compute: removes unnecessary condition * 
Rename Openstack to OpenStack * Support setting a machine type to enable ARMv7/AArch64 guests to boot * Catch InstanceNotFound exceptions for V2 API floating\_ips * Explicity teardown on error in libguestfs setup() * Catch InstanceNotFound exceptions for V2 API deferred delete * Replace oslo.sphinx with oslosphinx * Change assertTrue(isinstance()) by optimal assert * Make nova\_ipam\_lib use Network, FixedIP, and FloatingIP objects * Make nova-network use FixedIP for timeouts * Make nova-network use FixedIP object for updates * Make nova-network use FixedIP object for disassociations * Use six.moves.urllib.parse instead of urlparse * Add "body=" argument to v3 API unit tests * Remove unused methods * Adds migrate server extension for V3 API * Move policy check of start/stop to api layer * Refactor stats to avoid bad join * Remove @author from copyright statements * Remove character filtering from V3 API console\_output * DB: logging exceptions should use save\_and\_reraise * Fix incorrect check in aggregate/az test * xenapi: set viridian=false for linux servers * Delete baremetal image files after deployment * Make sure "volumeId" in req body on volume actions * Removes console output plugin from the core list * Using six.add\_metaclass * Fix bad log formatting * Remove quota classes extension from the V3 API * Group kvm image\_meta tests for get\_disk\_bus * Prefix private methods with \_ in docker driver * Fix the sample and unittest params of v3 scheduler-hints * Add a instance lookup helper to v3 plugins * Use raw string notation for regexes in hacking checks * Improve detection of imports in hacking check * Renumber some nova hacking checks * Docker cannot start a new instance because of an internal error * libvirt: configuration element for a random number generator device * VMware: fix instance rescue bug * Fix run\_tests.sh lockutils when run with -d * Adds tests to sqlachemy.api.\_retry\_on\_deadlock * Replace detail for explanation msgs on webob exceptions * Allow operators to customize max header size * Prevent caller from specifying id during Migration.create() * Prevent caller from specifying id during KeyPair.create() * Prevent caller from specifying id during Service.create() * Prevent caller from specifying id during ComputeNode.create() * Clean IMAGE\_SNAPSHOT\_PENDING state on compute manager start up * Fix trivial typo in libvirt test comment * Refactoring metadata/base * Removes XML namespace from V3 API test\_servers * correct the bugs reference url in man documents * Objectify instance\_action for cell scheduler * Remove tox locale overrides * libvirt: use to\_xml() in post\_live\_migration\_at\_destination * Removes os-instance-usage-audit-log from the V3 API * VMware: update test name * VMware: improve unit test performance * Fix english grammar in the quota error messages * Removes os-simple-tenant-usage from the V3 API * Fix a couple of unit test typos * Add HEAD api response for test s3 server BucketHandler * Removes XML support from security\_groups v3 API * Hyper-V driver RDP console access support * Make consoleauth token verification pass an Instance object * Adds RDP console support * Fix migrations changing the type of deleted column * Add hpet option for time drifting * Typo in backwards compat names for notification drivers * Support building wheels (PEP-427) * Fix misspellings in nova * Disable file injection in baremetal by default * Drop unused dump\_ SQL tables * Convert rescue\_instance to objects * Convert set\_admin\_password to objects * The 
object\_compat decorator should come first * Default video type to 'vga' for PowerKVM * Sync latest db.sqlalchemy from oslo-incubator * Guard against oversize flavor rxtx\_factor float * Make libvirt use Flavor object instead of using VirtAPI * Fix instance metadata tracking during resets * Make delete\_instance\_metadata() use objects * Break out the meat of the object hydration process * V2 Pause: treat case when driver does not implement the operation * VMware: fix bug for exceptions thrown in \_wait\_for\_task * Nova Docker: Metadata service doesn't work * nova: use RequestContextSerializer for notifications * Fix auto instance unrescue after poll period * Fix typos in hacking check warning numbers * Fix exception handling miss in remote\_consoles * Don't try to restore VM's in state ERROR * Make it possible to disable polling for bandwidth usage * XenAPI: Monitor the GC when coalescing * Revert "Allow deleting instances while uuid lock is held" * report port number for address already in use errors * Update my mailmap * libvirt: Adds missing tests to copy\_image * Sync latest gettextutils from oslo-incubator * Make change\_instance\_metadata() use objects * Add XenAPI driver deprecation warning log message * Adds host\_ip to hypervisor show API * VMware: update the default 'task\_poll\_interval' time * Fixes Hyper-V VHDX snapshot bigger than instance * Define common "name" parameter for Nova v3 API * Stacktrace on error from libvirt during unfilter * Disable libvirt driver file injection by default * Add super call to db Base class * Fix baremetal stats type * Fix bittorrent URL configuration option * Fix VirtualInterfaceMacAddressException message * Add serializer capability to fake\_notifier * Avoid deadlock when stringifying NetworkInfo model * Add hacking test to block cross-virt driver code usage * Hyper-V: Change variable in debug log message * Rename API schema modules with removing "\_schema" * Fixed naming issue of variable in a debug statement formatting * Use new images when spawning BM instances * Remove get\_instance\_type and get\_active\_by\_window from nova compute API * Make the simple\_tenant\_usage API use objects * Add instance\_get\_active\_by\_window\_joined to InstanceList * Update nova.conf.sample for python-keystoneclient 0.5.0 * Add ESX quality warning * Set SCSI as the default cdrom bus for PowerKVM * Enforce FlavorExtraSpecs Key format * Fix scheduler\_hints parameter of v3 API * Remove vi modelines * VMware: Remove some unused variables * Fix a bug in v3 API doc * Move logging out of BDM attach method * Add missing translation support * libvirt: making set\_host\_enabled to be a private methods * Remove unused variable * Call get\_pgsql\_connection\_info from \_test\_postgresql\_opportunistically * Port to oslo.messaging * Sync latest config file generator from oslo-incubator * Test guestfs without support for close\_on\_exit * Make nova-network use FixedIP object for vif queries and bulk create * Make nova-network use FixedIP for host and instance queries * Make nova-network use FixedIP object for associations * Make nova-network use FixedIP for get\_by\_address() queries * Add FixedIP.floating\_ips dynamic property * Add FloatingIP object implementation * Add FixedIP Object implementation * Deal with old versions of libguestfs * Destroy docker container if spawn fails to set up network * Adds suspend server extension for V3 API * Adds pause server extension for V3 API * Removes XML namespace definitions from V3 API plugins * Remove XML support from 
migrations pci multiple\_create v3 API plugins * Remove extra space in log message * Allow deleting instances while uuid lock is held * Add 'icehouse-compat' to [upgrade\_levels] compute= * Make os-service API return correct error messages * Make fixed\_ip\_get\_by\_address() take columns\_to\_join * Refactor return value of fixed\_ip\_associate calls * Make nova-network use Network object for deleting networks * Make nova-network use Network for associations * Make nova-network use Network object for set\_host() operation * Make nova-network use Network object for updates * Make nova-network use Network object for remaining "get" queries * Make nova-network use NetworkList for remaining "all" queries * Make nova-network use Network object for get-all-by-host query * Make nova-network a "conductor-using service" * Ignore 'dynamic' addr flag on bridge configuration * Remove XML support from some v3 API plugins * xenapi: clean up step decorator fake steps * Use objects internally in DriverBlockDevice class * Make snapshot\_volume\_backed use new-world objects * Make volume\_snapshot\_{create,delete} use objects * Move compute API is\_volume\_backed to BDM objects * Add block device mapping objects implementation * XenAPI: Wait for VDI on introduce * Shelve: The snapshot should be removed when delete instance * Revert "Allow deleting instances while uuid lock is held" * Retry reservation commit and rollback on deadlock * Adds lock server extension for V3 API * Remove duplicated method in mock\_key\_mgr * Add quality warning for non-standard libvirt configurations * Add docker driver removal warning * Remove V3 API XML entry points * Remove XML support from admin\_password V3 API plugin * Remove XML support from certificates v3 API * Remove XML support from some v3 API plugins(e.g. 
services) * Remove XML support from some extension v3 API plugins * Remove XML support from some server v3 API plugins * Remove XML support from quota and scheduler\_hints v3 API plugins * Remove XML support from flavor v3 API plugins * Revert "Fix race conditions between imagebackend and imagecache" * Remove XML support from v3 API plugins * Remove unused methods * Remove trace XML from unittests * removing xml from servers.py * Remove xml unit tests for v3 api plugins * Remove v3 xml API sample tests * Adds dmcrypt utility module * Adds ephemeral\_key\_uuid field to instance * Error message is malformed when removing a sec group from an instance * Do not set root device for libvirt+Xen * Docker Set Container name to Instance ID * Fix init of pci\_stats in resource tracker * Catch NotImplementedError in get\_spice\_console in v2/v3 API * Minor changes to make certificates test cases use HTTPRequestV3 * VMware: Only include connected hosts in cluster stats * disk/api.py: refactors extends and adds missing tests * Make nova-network use Network to create networks * Make obj\_to\_primitive() handle netaddr types * Add Network object * Make service workers gracefully handle service creation race * support stevedore >= 0.14 * Increase the default retry for iscsi connects * Finish compacting pre-Icehouse database migrations * Compact pre-Icehouse database migrations <= 210 * Compact pre-Icehouse database migrations <= 200 * Compact pre-Icehouse database migrations <= 190 * Fix cache lock for image not consistent * VMware: fix image snapshot with attached volume * Use block\_device\_info at post\_live\_migration\_at\_destination * Update policy check on each action for certificates * Use (# of CPUs) workers by default * Remove policy check in db layer for aggregates * Remove unused configurations * VMware: fix exception when using multiple compute nodes * Remove copyright from empty files in nova * disk/api.py: resize2fs fails silently + adds tests * remove 2 unused function in test\_volumes.py * Update log message to support translations * PCI address should be uniform * Remove flavor-disabled related policy rules for v3 api * Remove get\_all\_networks from nova.network.rpcapi * Remove get\_network from nova.network.rpcapi * Update nova.network to use DNSDomain object * Remove some dead dnsdomain code * Add DNSDomain object * Add db.dnsdomain\_get\_all() method * Update linux\_net to use VirtualInterface * Update nova\_ipam\_lib to use VirtualInterface * libvirt: Review of the code to use module units * Update network.manager to use VirtualInterface * Imported Translations from Transifex * Updated from global requirements * Define "supported\_instances" for fake compute * Remove get\_vif\_by\_mac\_address from network rpcapi * Remove unused method from network rpcapi * Allow delete when InstanceInfoCache entry is missing * libvirt: Fix root disk leak in live mig * Additional check for qemu-nbd hang * Correct host managers free disk calculation * Correct the state for PAUSED instances on reboot * XenAPI: Use get\_VALUE in preference to get\_record()['VALUE'] * XenAPI: Speedup get\_vhd\_parent\_uuid * XenAPI: Report the CPU details correctly * XenAPI: Tidy calls to get\_all\_ref\_and\_rec * XenAPI: get\_info was very expensive * Fix bug with not implemented virConnect.registerCloseCallback * Make test\_poll\_volume\_usage\_with\_data more reliable * Re-write sqlite BigInteger mapping test * Small edits on help strings * Make floating\_ip\_bulk\_destroy deallocate quota if not auto\_assigned * Sync 
processutils from oslo-incubator * Create common method for MTU treatment * Move fake\_network config option to linux\_net * libvirt: move unnecesary comment * Sync log.py from oslo-incubator * hyperv: Retry after WMI query fails to find dev * vmwareapi:remove unused variables in volumeops * Fix docstring in libvirt.driver.LibvirtDriver.get\_instance\_disk\_info() * Hide VIR\_CONNECT\_BASELINE\_CPU\_EXPAND\_FEATURES where needed * Make test\_different\_fname\_concurrency less racy * VMware: improve exception logging in driver.py 2014.1.b2 --------- * Add instance faults during live\_migrate errors * VMware: use .get() to access 'summary.accessible' * Nova Docker driver must remove network namespace * Added a new scheduler filter for metrics * Sync module units from oslo * Join pci\_devices for servers API * VMware: fix missing datastore regex with ESX driver * Fix the flavor\_ref type of unit tests * Sync unhandled exception logging change from Oslo * Fix race conditions between imagebackend and imagecache * Add explicit discussion of dependencies to README.rst * Add host and details column to instance\_actions\_events table * Join pci\_devices when getting all servers in API * Add sort() method to ObjectListBase * Add VirtualInterface object * VMware: Fix incorrect comment indentation * vmwareapi: simple refactor of config drive tests * Fix multi availability zone issue part 2 * Make exception message more friendly * disable debug in eventlet.wsgi server * Alphabetize core list for V3 API plugins * Ensure MTU is set when the OVS vif driver is used * remove redundant \_\_init\_\_() overwriting when getting ExtensionResources * Fix bug for neutron network-name * Fix rbd backend not working for none admin ceph user * Set objects indirection API in network service * Use oslo.rootwrap library instead of local copy * Remove admin auth when getting the list of Neutron API extensions * Fix the test parameter order for v3 evacuate test * Add API schema for v3 evacuate API * Remove unused code * Take a vm out of SNAPSHOTTING after Glance error * Corrected typo in metrics * libvirt: handle exception while get vcpu info * Fixed incorrect test case of test\_server\_metadata.py * Add API schema for v3 rescue API * Support preserve\_ephemeral in baremetal * Show bm deploy how to preserve ephemeral content * Add preserve\_ephemeral option to rebuild * Fix string formatting of exception.NoUniqueMatch message * docstring fix * xenapi: stop server destroy on live\_migrate errors * Ensure that exception raised in neutron are handled correctly * Fix updating device names when defaulting * libvirt: Fix confusing use of mox.StubOutWithMock * Sync request\_id middleware for nova * Calculate default security group into quota usage * Allow run\_image\_cache\_manager\_pass to hit db slave * Consolidate the blockdev related filters * VMware: upload images to temporary directory * Refactor CIDR field to use netaddr.IPNetwork * Make nova-network use Instance objects * Make nova-network use Service object * Allow \_check\_instance\_build\_time to hit db slave * Set objects indirection API in metadata service * libvirt: Configuration element for sVirt support * VMware: unnecessary session reconnection * Add API schema for v3 multinic API * API schema for v3 console\_output API * Workers verification for WSGI service * Remove unused dict BYTE\_MULTIPLIERS * Optimize libvirt live migration workflow at source * libvirt, fix test tpool\_execute\_calls\_libvirt * Using staticmethod to mock 
LibvirtDriver.\_supports\_direct\_io * Use the mangle checksum fill rule regardless to the multi\_host * Enabled Libvirt driver to read 'os\_command\_line' from image properties * Update nova.conf.sample * Capture exception for JSON load in virt.storage\_users * Ensure that headers are utf8, not unicode * Attribute snapshot not defined in libvirt/config.py * ec2 api should check 'max\_count'&'min\_count' para * nova docker driver cannot find cgroup in /proc/mounts on RHEL * VMware: fix rescue with disks are not hot-addable * VMware: bug fix for VM rescue when config drive is configured * Define common API parameter types * Fixed a problem in iSCSI multipath * Fix unhandled InvalidServerState exceptions in server start/stop * Cells rebuild regression fix * Fix potential fd leak * Rename instance\_type to flavor in libvirt virt driver tests * Rename instance\_type to flavor in vmware virt driver tests * Improve error message in services API * Make image props filter handle old vm\_modes * XenAPI: Use direct IO for writing config drive * Avoid unnecessary use of rootwrap for some network commands * Remove unused copyright from nova.api.\_\_init\_\_ * replace type() to isinstance() in nova * Make availability\_zone optional in create for aggregates * libvirt: Fix infinite loop waiting for block job * baremetal: stop deployment if block devices are not available * Cleanup 'deleting' instances on restart * Ignore duplicate delete requests * Let drivers override default rebuild() behaviour * Enable compute\_node\_update to tolerate deadlocks * xenapi: resize up ephemeral disks * xenapi: refactor generate\_ephemeral * xenapi: refactor resize\_up\_root\_vdi * Abstract add\_timestamp out of ComputeDriverCPUMonitor class * Revert "Whitelist external netaddr requirement" * The private method \_text\_node should be used as function * Add finer granularity to host aggregate APIs * Remove unused import * Adds new method nova.utils.get\_hash\_str * Make nova/quota use keypair objects * VMware: update test file names * Ensure instance action event list in order * Docker Driver doesn't respect CPU limit * libvirt: stop overwriting LibvirtConfigCPU in get\_host\_capabilities * Cleanup the flake8 section of tox.ini * Use the full string for localisation * Don't deallocate/reallocate networks on reschedules * Cleanup object usage in the rebuild path * Fix test case with wrong parameter in test\_quota\_classes * Remove unused variables in imagebackend.py * Remove unused code in test\_attach\_interfaces.py * Whitelist external netaddr requirement * Better exception handling for deletes during build * Translate the snapshot\_pending state for old instances * Prevent Instance.refresh() from returning a new info cache * Extends V3 os-hypervisor api for pci support * Sync config generator from oslo-incubator * Imported Translations from Transifex * Remove uneeded dhcp\_opts initialization * Update class/function name for test\_extended\_availability\_zone.py * Allow deleting instances while uuid lock is held * xenapi: add support for vcpu\_pin\_set * xenapi: more info when assert\_can\_migrate fails * fix ips to 'ips' in APIRouter * Hyper-V:Preserve config drive image after the instance is resized * fix log message in APIRouter * VMware: use session.call\_method to invoke api's * Rename instance\_type to flavor in hyper-v virt driver * Rename instance\_type to flavor in xenapi virt driver * Compact pre-Icehouse database migrations <= 180 * Change when exists notification is sent for rescue * Revert change of default 
FS from ext3 to etx4 * Convert nova.compute.manager's \_spawn to objects * Add alias as prefix for flavor\_rxtx v3 * Remove unused code in nova/api/ec2/\_\_init\_\_.py * Remove unused import * VMware: improve connection issue diagnostic * Fixes messages logged on Glance plugin retries * Aggregate: Hosts isolation based on image properties * Fix for qemu-nbd hang * Return policy error, not generic error * Fix lxc rootfs attached two devices in some action * Removes disk-config extension from v3 api * Fix typo'ed deprecated flag names in libvirt.imagebackend * Disable libguestfs' default atexit handlers * Add API schema for v3 extended\_volumes API * Catch InstanceIsLocked exception on server actions * Fix inconsistent "image" value on \_get\_image() * Add API schema for v3 keypairs API * Add API schema for v3 flavor\_access API * Add API schema for v3 agents API * Add API schema for v3 admin\_password API * Adds a PREPARED state after baremetal node power on * Make scheduler rpcapi use object serializer * Update log message when remove pci device * Add unit test for ListOfStrings field in object models * Sync oslo db.sqlalchemy.utils to nova * Remove duplicated test * Fixing availability-zone not take effect error * Fix image cache periodic task concurrent access bug * Fix interprocess locks for run\_tests.sh * lxc: Fix a bug of baselineCPU parse failure * platform independence for test\_virt unit tests * Imagecache: fix docstring * libvirt: Set "Disabled Reason" to None when enable nova compute * Change log from ERROR to WARNING when instance absent * VMware: clean up unnecessary help message of options * Don't use deprecated module commands * Add apache2 license header to appropriate files for enabling H102 * XenAPI: Allow use of clone\_vdi on all SR types * Remove unused variables in test\_conductor.py * Do not use contextlib.nested if only mock one function * Remove update\_service\_capabilities from nova * Adds user\_data extension to nova.api.v3.extensions * Add wsgiref to requirements.txt * pass the empty body into the controller * Imported Translations from Transifex * Revert recent change to ComputeNode * sync oslo service to fix SIGHUP handling * Fix parameter checking about quota update api * Spelling fix resouce=>resource * Change default ephemeral FS to ext4 * When inject admin password, no need to generate temp file * Make \_change\_index\_columns use existing utility methods * Fix interprocess locks when running unit-tests * Cleanup object usage in the delete path * Change RPC post\_live\_migration\_at\_destination from call to cast * Pass rbd\_user id and conf path as part of RBD URI for qemu-img * Allow some instance polling periodic tasks to hit db slave * Sync timeutils from oslo-incubator * Catch NotImplementedError for vnc in the api * List NotImplementedError as a client exception for vnc * remove vmwareapi.vmops.get\_console\_output() * Object-ify build\_and\_run\_instance * Retry on deadlock in instance\_metadata\_update * use 'os\_type' in ephemeral filename only if mkfs defined * ValueError should use '%' instead of ',' * Setting the xen vm device id on vm record * Rename instance\_type to flavor in nova.utils and nova.compute.utils * Rename instance\_type to flavor in nova.cloudpipe * Serialize instance object while building request\_spec * Make rebuild use Instance objects * Remove deprecated config aliases * Changed error message to match usage * Add configurable 120s timeout ovs-vsctl calls * Clarify rebuild\_instance's recreate parameter * Clean swap\_volume 
rollback, on libvirt exception * Image cache: move all of the variables to a common place * baremetal: set capabilites explicitly * Remove docker's unsupported capabilities * Set a sane default for state\_path * Fix incorrect exception on os-migrateLive * barematal: Cleanup the calls to assertEqual * Refactor time conversion helper function for objects in db api * Fixes ConfigDrive bug on Windows * Remove smoketests * Revert graceful shutdown patch * Handle InstanceUserDataMalformed in create server v2 api * Enable remote debugging for nova * Fix race in unit tests, which can cause gate job to fail * Add boolean convertor to cells sync\_instances API * Initialize iptables rules on initialization of MetadataManager * vmwareapi: raise on get\_console\_output * hyperv: remove get\_console\_output method * List NotImplementedError as client exception * api: handle NotImplementedError for console output * Make Serializer/Conductor able to backlevel objects * Make ec2 use Flavor object * Move restore and rebuild operations to Flavor objects * Add flavor access methods to Instance object * Rename instance\_type to flavor in nova.network tree * Stop, Rescue, and Delete should give guest a chance to shutdown * Remove middleware ratelimits from v3 api * Remove unused variables in neutron api interface and neutron tests * Remove unneeded call to conductor in network interface * Return client tokens in EC2 DescribeInstances * Require List objects to be able to backlevel their contents * Make Instance object compatible with older compute nodes * Deprecate/remove scheduler select\_hosts() * Pass Instance object to console output virt driver api * Send Instance object to validate\_console\_port * Pass Instance object to compute vnc rpc api * Update vnc virt driver api to take Instance object * Add error as not-in-progress migration status * Don't replace instance.info\_cache on each save * Add boolean convertors for migrate\_live API * VMWare: bug fix for Vim exception handling * XenAPI: Synchronize on all VBD plug/unplug per VM * Add IPAddress field type in object models * Fixes errors on start/stop unittest * Use a dictionary to eliminate the inner loop in \_choose\_host\_filters() * Correct uses of :params in docstrings * Delete iSCSI devices after volume detached * Prevent spoofing instance\_id from neutron to nova * Replaces call to lvs with blockdev * Refactor PXE DHCP Option support * Normalize the weights instead of using raw values * Compact pre-Icehouse database migrations <= 170 * XenAPI: Speedup host\_ref cannot change - get it once * Updated from global requirements * Rename instance\_type to flavor in test\_utils and nova.tests.utils * Rename instance\_type to flavor in baremetal virt driver * VMware: fix bug when more than one datacenter exists * Sync oslo lockutils for "fix lockutils.lock() to make it thread-safe" * Move calls to os.path.exists() in libvirt imagebackend * Ensure api\_paste\_conf is an absolute path * Log exception in \_heal\_instance\_info\_cache * Raise better exception if duplicate security groups * Remove the largely obsolete basepath helper * libvirt: Custom disk\_bus setting is being lost on hard\_reboot * Libvirt: Making the video driver element configurable * Give migrations tests more time to run * Remove the api\_thread\_pool option from libvirt driver * baremetal: volume driver refactoring and tests * Sync middleware audit, base, and notifier from oslo * Get test\_openAuth\_can\_refuse\_None\_uri to cleanup after itself * Hide injected\_file related quotas for 
V3 API * Make obj\_from\_primitive() preserve version information * Cells: check states on resize/rebuild updates * Make flavor\_access extension use Flavor object * libvirt: add a test to guard against set\_host\_enabled raising an error * Fix UnboundLocalError in libvirt.driver.\_close\_callback * Quota violations should not cause a stacktrace in the logs * Enforce permissions in snapshots temporary dir * Sync rpc fix from oslo-incubator * Fix changes-since filter for list-servers API * Make it possible to override test timeout value * Imported Translations from Transifex * libvirt: consider minimal I/O size when selecting cache type * Setup destination disk from virt\_disk\_size * Add Flavor object * Add atomic flavor access creation * Add extra\_resources field to compute\_nodes table * Recommend the right call instead of datetime.now() * libvirt: remove unused imports from fake libvirt utils * VMware: fix disk extend bug when no space on datastore * Fix monkey\_patch docstring bug * Change unit test for availability\_zones.reset\_cache * Make compute support monitors and store metrics * Added a new scheduler metrics weight plugin * LXC: Image device should be reset in mount() and teardown() * Add shutdown option to cleanup running periodic * xenapi: Update VM memory overhead estimation * Misc typos in nova * Add default arguments for Connection class * Update Instance from database after destroy * Libvirt: Adding video device to instances * Configuration element for describing video drivers * Don't log stacktrace for UnexpectedTaskStateError * Extends V3 servers api for pci support 2014.1.b1 --------- * LOG.warn() and LOG.error() should support translation * Minor change for typo from patch 80b11279b * network\_device\_mtu should be IntOpt * Fix HTTP response code for network APIs and improve error message * Use password masking utility provided in Oslo * Sync log.py from Oslo-incubator * xenapi: stop hang during glance download * Clean up test cases for compute.manager.\_check\_instance\_build\_time * Recover from IMAGE-\* state on compute manager start-up * Document when config options were deprecated * VMware: Fix unhandled session failure issues * Use utils method when getting instance metadata and system metadata * Add status mapping for shutoff instance when resize * Fix docstring on SnapshotController * Fix trivial typo 'descirption' * Compact pre-Icehouse database migrations <= 160 * Compact pre-Icehouse database migrations <= 150 * Compact pre-Icehouse database migrations <= 140 * Remove redundant body validation for createBackup * Change evacuate test hostnames to preferable ones * Change conductor live migrate task to use select\_destinations() * Ensure proper notifications are sent when build finishes * Periodic task \_heal\_instance\_info\_cache can now use slave db * docker: access system\_metadata as a dict * Don't overwrite marker when checking if it exists * There is no need to set VM status to ERROR on a failed migration * DB migration 209: Clean up child rows as well * Cleanup ec2/metadata/osapi address/port listen config option help * Recover from build state on compute manager start-up * Comply with new hacking 0.8 release * Correct network\_device\_mtu help string * Remove last of AssertEquals * Fix Neutron Authentication for Metadata Service * Update help for osapi\_compute\_listen\_port * libvirt: host update disable/enable report HTTP 400 * Catch InstanceIsLocked exception on server actions * VMware: enable driver to work with postgres database * Make 
test\_evacuate from compute API DRYer * Fix testcase config option imports * Fix "in" comparisons with one element tuples * Remove \_security\_group\_chain\_name from nova/virt/firewall.py * Remove duplicate setting of os\_type in libvirt config builder * Fix logic in LibvirtConnTestCase.\_check\_xml\_and\_uri * Remove unused flag 'host\_state\_interval' * Make object compat work with more positional args * Fix LibvirtGenericVIFDriver.get\_config() for quota * Fix a tiny double quote matching in field obj model * Move flags in libvirt's volume to the libvirt group * Check Neutron port quota during validate\_networks in API * Failure during termination should always leave state as Error(Deleting) * Remove duplicate FlavorNotFound exception handling in server create API * Make check more pythonic * Make sure report\_interval is less than service\_down\_time * Set is\_public to False by default for volume backed snapshots * Delete instance faults when deleting instance * Pass Instance object to spice compute rpc api * Pass Instance object to get\_spice\_console virt api * Remove update\_service\_capabilities from scheduler rpc api * Remove SchedulerDependentManager * powervm: remove powervm virt driver from nova * libvirt: Provide a port field for GlusterFS network disks * Add API input validation framework * Remove duplicate BuildAbortException block * Remove compute 2.x rpc api * Add v3 of compute rpc API * Fix incorrect argument position in DbQuotaDriver * Change ConductorManager to self.db when record cold\_migrate event * instance state will be stuck in unshelving when unshelve fails * Fix some i18n issue in nova/compute/manager.py * Don't gate on E125 * Supplement 'os-migrateLive' in actions list * Corrected typo in host\_manager * Fix a lazy-load exception in security\_group\_update() * fakevirt: return hypervisor\_version as an int instead of a string * Bump to sqlalchemy-migrate 0.8.2 * ComputeFilter shouldn't generate a warning for disabled hosts * Remove cert 1.X rpc api * Add V2 rpc api for cert * Remove console 1.X rpc api * Do not hide exception in update\_instance\_cache\_with\_nw\_info * Wrong handling of Instance expected\_task\_state * XenAPI: Fix caching of images * Extend LibvirtConfigGuest to parse guest cpu element info * Rename instance\_type parameter in migrate\_disk\_and\_power\_off to flavor * convert min\_count and max\_count to type int in nova v3 api * Add decorator expected\_errors for flavors\_extraspecs v3 * Remove nullable=True in models.py which is set by default * baremetal: Make api validate mac address * Use 204 instead of 202 for delete of keypairs v3 * Fix log message format issue for api * Remove "set()" from CoreAPIMissing exception * Move flag in libvirt's vif to the libvirt group * Move flag in libvirt's utils to the libvirt group * Move flags in libvirt's imagebackend to the libvirt group * Extend the scheduler HostState for metrics from compute\_node * docker: return hypervisor\_version as an int rather than a string * Sync Log Levels from OSLO * Removes check CONF.dhcp\_options\_enabled from nova * Improved debug ability for log message of cold migration * Adjust the order of notification for shelve instance * Add FloatField for objects * XenAPI: Fix config section usage * Fix performance of Server List with Neutron for Admins * Add context as parameter for two libvirt APIs * Add context as parameter for resume * xenapi: move session into new client module * xenapi: stop key\_init timeout failing set password * xenapi: workaround vbd.plug race * 
Address infinite loop in nova compute when getting network info * Use of logging in native thread causes deadlock connecting to libvirtd * Add v3 api samples for shelve * Imported Translations from Transifex * libvirt: Fix log message when disable/enable a host * Fix missing format specifier in ImagePropertiesFilter log message * Sync the DB2 communication error code change from olso * baremetal: refactor out powervm dependency * handle migration errors * Make compute manager \_init\_instance use native objects * Fix for reading the xenapi\_device\_id from image metadata * Check if reboot request type is None * Use model\_query() instead of session.query in db.instance\_destroy * Fix up spelling mistake * Periodic task \_poll\_unconfirmed\_resizes can now use slave db * Include image block device maps in info * Sync local from oslo * objects: declare some methods as static * Handle UnicodeEncodeError in validate\_integer * Remove traces of V3 personality extension from api samples * Removes os-personalities extension from the V3 API * VMware: add support for VM diagnostics * Remove useless api sample template files for flavor-rxtx v3 * Fix libvirt evacuate instance on shared storage fails * Fixes get\_vm\_storage\_paths issue for Hyper-V V2 API * Clean up how test env variables are parsed * Add missing argument max\_size in libvirt driver * VMware: Always upload a snapshot as a preallocated disk * Fix empty selector XML bug * Libvirt:Instance resize confirm issue against NFS * Add V2 rpc api for console * Fix sample parameter of agent API * VMware: fix snapshot failure when host in maintenance mode * Clean up unused variables * Add a driver method to toggle instance booting * Fix cells instance\_create extra kwarg * handle empty network info in instance cache * Remove deprecated instance\_type alias from nova-manage * xenapi: kernel and ramdisk missing after live-migrate * Remove V2 API version of coverage extensions * Remove V3 API version of coverage extension * Update openstack/common/periodic\_task * Use 201 instead of 200 for action create of flavor-manage v3 * Enforce metadata string type on key/value pairs * Fixes RequestContext initialization failure * Move flags in libvirt's imagecache to the libvirt group * Move base\_dir\_name option to somewhere more central * Move some libvirt specific flags into a group * Removed unused instance object helper function * Update openstack/common/lockutils * Rename InstanceType exceptions to Flavor * Added monitor (e.g. 
CPU) to monitor and collect data * Conditionalise automatic enabling of disabled host * Users with admin role in Nova should not re-auth with Neutron * Use 400 instead of 422 for invalid input in v3 servers core * Fix limits v3 follow API v3 rules * Remove used\_limits extension from the V3 API * Remove reduntant call to update\_instance\_info\_cache * Add flavor-extra-specs to core for V3 API * Add flavor-access to core for V3 API * Remove unused libvirt\_ovs\_bridge flag * Fix AttributeError(s) from get\_v4/6\_ips\_by\_interface * Raising exception for invalid floating\_ip's ID * libvirt: Allow delete to complete when a volume disconnect fails * replace assertNotEquals with assertNotEqual * Add V3 api samples for access\_ips * Add v3 api samples for scheduler-hints * Add v3 api samples for availability\_zone * Add V3 API sample for server's actions * Cache Neutron Client for Admin Scenarios * More instance\_type -> flavor renames in db.api * Cache compute node info in Hypervisor api * Reverse the quota reservation in revert\_resize * Rename virtapi.instance\_type\_get to flavor\_get * Xenapi: Allow windows builds with xentools 6.1 and 6.2 * Make baremetal support metadata for ephemeral block-device-mapping * Make baremetal\_deploy\_helper understand ephemeral disks * Removed unused methods from db.api * Fix type mismatch errors in NetworkTestCase * VMware: Detach volume should not delete vmdk * xenapi: Fix agent update message format * xenapi: Fix regression issue in agent update * Shrink the exception handling range * Moved quota headroom calculations into quota\_reserve * Remove dup of LibvirtISCSIVolumeDriver in LibvirtISERVolumeDriver * Replace assertEquals with assertEqual - tests/etc * libvirt: pass instance to a log() call in the standard way * xenapi: Move settings to their own config section * domainEventRegisterAny called too often * Allow configuring the wsgi pool size * driver tests (loose ends): replace assertEquals with assertEqual * baremetal: replace assertEquals with assertEqual * image tests: replace assertEquals with assertEqual * virt root tests: replace assertEquals with assertEqual * Remove unnecessary steps for cold snapshots * baremetal: Make volume driver use a correct source device * Update quota-class-set/quota-set throw 500 error * Add log\_handler to implement the publish\_errors config option * Imported Translations from Transifex * Enable non-ascii characters in flavor names * Move docker specific options into a group * Check return code of command instead of checking stderr * Added tests for get\_disk\_bus\_for\_disk\_dev function * Checking existence of index before dropping * add hints to api\_samples documentation * xenapi: check for IP address in live migration pre check * Remove live\_snapshot plumbing * Remove unused local variable in test\_compute * Make v3 admin\_password parameters consistent * Flavor name should not contain only white spaces * fix a typo error in test\_libvirt\_vif.py * Remove unused local variables in test case * Rename \_get\_vm\_state to \_get\_vm\_status * Ensure deleted instances' status is always DELETED * Let resource\_tracker report right migration status * Imported Translations from Transifex * nit: fix indentation * Always pass context to compute driver destroy() * Imported Translations from Transifex * db tests: replace assertEquals with assertEqual * compute tests: replace assertEquals with assertEqual * Catch exception while building due to instance being deleted * Refactor UnexpectedTaskStateError for handling of 
deleting instances * Parted 'invalid option' in XenAPI driver * Specify DB URL on command-line for schema\_diff.py * Fix \`NoopQuotaDriver.get\_(project|user)\_quotas\` format * Send delete.end with latest instance state * Add missing fields in DriverBlockDevice * Fix the boto version comparison * Add test for class InsertFromSelect * Process image BDM earlier to avoid duplicates * Clean BDM when snapshoting volume-backed instances * Remove superflous 'instances' joinedload * Fix OLE error for HyperV * Make the vmware pause/unpause unit tests actually test something * Fixes the destroy() method for the Docker virt driver * xenapi: converting XenAPIVolumeTestCase to NoDB * Move \`diff\_dict\` to compute API * Add compatibility for InstanceMetadata and primitives * Issue brctl/delif only if the bridge exists * ensure we don't boot oversized images * Add V3 API samples for config-drive * Remove duplicated test * Add notification for host operation * Sync log from oslo * Replace assertEquals with assertEqual - tests/scheduler * Make non-admin users can unshelve a server * Fix interface-attach removes existing interfaces from db * Correct exception handling * Utilizes assertIsNone and assertIsNotNone - tests/etc * Use elevated context in resource\_tracker.instance\_claim * Add updates and notifications to build\_and\_run\_instance * Add network handling to build\_and\_run\_instance * Make unshelve use new style BDM * Make \_get\_instance\_nw\_info() use Instance object * Convert evacuation code to use objects * Deprecate two security\_group-related methods from conductor * Make metadata server use objects for Instance and Security Groups * Replace assertEquals with assertEqual - tests/api * Remove security\_group-related methods from VirtAPI * Make virt/firewall use objects for Security Groups and Rules * Drop auth\_token configs for api-paste.ini * Add auth\_token settings to nova.conf.sample * Use \_get\_server\_admin\_password() * Pass volume\_api to get\_encryption\_metadata * Comments for db.api.compute\_node\_\*() methods * Fix migration 185 to work with old fkey names * Adds V3 API samples for user-data * Enforce compute:update policy in V3 API * tenant\_id implies all\_tenants for servers list in V3 API * Move get\_all\_tenants policy enforcement to API * all\_tenants=0 should not return instances from all tenants * Utilizes assertIsNone and assertIsNotNone - tests/virt * xenapi: workaround for failing vbd detach * xenapi: strip base\_mirror after live-migrate * xenapi: refactor get\_all\_vdis\_in\_sr * Remove unused expected\_sub\_attrs * Remove useless variable from libvirt/driver.py * Add a metadata type validation when creating vm * Update schema\_diff.py to use 'postgresql' URLs * Disable nova-compute on libvirt connectivity exceptions * Make InstanceInfoCache load base attributes * Add SecurityGroupRule object * Add ephemeral\_mb record to bm\_nodes * Stylistic improvement of models.ComputeNodeStat * clean up numeric expressions in tests * replaced e.message with unicode(e) * Add DeleteFromSelect to avoid database's limit * Imported Translations from Transifex * Utilizes assertIsNone and assertIsNotNone - tests/api * Include name/level in unit test log messages * Remove instance\_type\* proxy methods from nova.db.api * Add InstanceList.get\_by\_security\_group() * Make security\_group\_rule\_get\_by\_security\_group() honor columns * Claim IPv6 is unsupported if no interface with IPv6 configured * Pass thru credentials to allow re-authentication * network tests: replace 
assertEquals with assertEqual * Nova-all: Replace basestring by six for python3 compatability * clean up numeric expressions with byte constants * Adds upper bound checking for flavor create parameters * Remove fake\_vm\_ref from test\_vmwareapi.py * xen tests: replace assertEquals with assertEqual * Fix tests to work with mysql+postgres concurrently * Enable extension access\_ips for v3 API * Correct update extension point's check\_func for v3 server's controller * Updates the documentation for nova unit tests * Remove consoleauth 1.X rpc api * consoleauth: retain havana rpc client compat * Pull system\_metadata for notifications on instance.save() * Allow \_sync\_power\_states periodic task to hit slave DB * Fix power manager hangs while executing ipmitool * Update my mailmap * Stored metrics into compute\_nodes as a json dictionary * Bad except clauses order causes wrong text in http response * Add nova.db.migration.db\_initial\_version() * Fix consoleauth check\_token for rpcapi v2 * Nova db/api.py docstring cleanups.. * Adds XML namespace example for disk config extension * Remove multipath mapping device descriptor * VMware: fix VM resize bug * VMware: fix bug for reporting instance UUID's * Remove extra space in tox.ini * Fix migrate w/ cells * Add tests for compute (child) cell * Call baselineCPU for full feature list * Change testing of same flavor resize * Fix bad typo in cloudpipe.py * Fix compute\_api tests for migrate * Replace basestring by six for python3 compatability * Add flavor-manage to core for V3 API * Refactor unit tests code for python3 compatability * make libvirt driver get\_connection thread-safe * Remove duplicates from exceptions list * Apply six for metaclass * Add byte unit constants * Add block device handling to build\_and\_run\_instance * Reply with a meaningful exception, when libvirt connection is broken * Fix getting nwinfo for Instance obj * Make cells info\_cache updates more tolerant * Raise an error if module import fails * Drop RPC securemessage.py and crypto module * Remove deprecated libvirt VIF driver code * nova.exception does not have a ProcessExecutionError * Fix setting backdoor port in service start * Sync lockutils from oslo * Fix wrong description when updating quotas * Expose additional status in baremetal API extension * migrate server doesn't raise correct exception * Make security\_group\_get() more flexible about joins * Make Object FieldType take an object name instead of a class * Hyper-v: Change the hyper-v error log for debug when resize failed * Adds V3 API samples for the disk-config extension * Utilizes assertIn - tests/etc * Fix all scripts to honor the enabled\_ssl\_apis flag * Updated from global requirements * Fix i18n issue for nova/compute/manager.py * Change tab to blank space in hypervisors-detail-resp * Fixing ephemeral disk creation * Merging two mkfs commands * xenapi: ephemeral disk partition should fill disk * Fix the ConsolesController class doc string * xenapi: Speeding up the easy cases of test\_xenapi * xenapi: Speeding up more tests by switching to NoDB * Remove .pyc files before generating sample conf * xenapi: migrate multiple ephemeral disks * Fail quickly if file injection for boot volume * Add obj\_make\_compatible() * Updated from global requirements * Make cells 'flavorid' for resizes * Fixes unicode issue in the Hyper-V driver * Add missing ' to extra\_specs debug message * VMware: Fix ValueError unsupported format character in log message * graceful-shutdown: add graceful shutdown into compute * 
remove unused network module from certificates api extension * Sync fixture module from oslo * Fixes Invalid tag name error when using k:v tagname * Fix tests for migration 227 to check sqlite * Adds V3 API samples for console output * Add V2 rpc api for consoleauth * Update version aliases for rpc version control * Improve object instantiation syntax in some tests * A nicer calling convention for object instantiation * Updates OpenStack Style Commandments link * Updated from global requirements * Adding support for multiple hypervisor versions * Manage None value for the 'os\_type' property * Add CIDR field type * Validate parameters of agent API * Adding Read-Only volume attaching support to Nova * Update timeutils.py from oslo * Fix docstring related to create\_backup API * powervm tests: replace assertEquals with assertEqual * Add V3 API sample for admin-password * Remove duplicated test cases * Add extension access\_ips for v3 API * Ensure migration 209 works with NULL fkey values * Cells: Fix instance deletes * Uses oslo.imageutils * Add testr concurrency option for run\_tests.sh * Fix the image name of a shelved server * xenapi: test\_driver should use NoDBTestCase * xenapi: Speedup vm\_util and vmops tests * xenapi: speedup test\_wait\_for\_instance\_to\_start * Remove xenapi rpm building code * Fixes datastore selection bug * Fixes Hyper-V snapshot spawning issue * Make SecurityGroup receive context * Fix DB API mismatch with sqlalchemy API * Remove aggregate metadata methods from conductor and virtapi * Make XenAPI use Aggregate object * libvirt: add missing i18n support * Adds V3 API samples for attach-interfaces * Make aggregate methods use new-world objects * Add missing key attribute to AggregateList.get\_by\_host() * Fix i18n issue for nova/virt/baremetal/virtual\_power\_driver.py * Fix scheduler rpcapi deprecated method comment * Send notifications on keypair create/delete * Use \`versionutils.is\_compatible\` for Dom0 plugin * Use \`versionutils.is\_compatible\` for Nova Objects * Improve logging messages in libvirt driver * xenapi: stop agent errors stopping build * Fix NovaObject versioning attribute usage * xenapi: removes sleep after final upload retry * xenapi: stop using get\_all\_vdis\_in\_sr in spawn * populate local-ipv4 address in config drive * Harden version checking for boto * Handle MarkerNotFound better in Flavor API * Sanitize passwords when logging payload in wsgi * Remove unnecessary "LOG.error()" statement * xenapi: simplify \_migrate\_disk\_resizing\_up * xenapi: revert on \_migrate\_disk\_resizing\_up error * xenapi: make \_migrate\_disk\_resizing\_up use @step * libvirt tests: replace assertEquals with assertEqual * Use the oslo fixture module * Port server actions unittests to V3 API Part 2 * Remove unused method \_get\_res\_pool\_ref from VMware * Imported Translations from Transifex * Check for None when cleaning PCI dev usage * Fix vmwareapi driver get\_diagnostics calls * Remove instance\_info\_cache\_update() from conductor * compute api should throw exception if soft reboot invalid state VM * Make a note about Object deepcopy helper * Avoid caching quota.QUOTAS in Quotas object * Remove transitional callable field interface * Make the base object infrastructure use Fields * Migrate some tests that were using callable fields * Migrate NovaPersistentObject and ObjectListBase to Fields * Migrate Instance object to Fields * Utilizes assertIn - tests/api/etc * Utilizes assertIn - tests/virt * Utilizes assertIn - tests/api/contrib * Utilizes assertIn 
- tests/api/v3 * Make scheduler disk\_filter take swap into account * Add variable to expand for format string * Make quota sets update type handling a bit safer * Add test\_instance\_get\_active\_by\_window\_joined * Fixes error on live-migration of volume-backed vm * Migrate PciDevice object to Fields * Migrate InstanceInfoCache object to Fields * Migrate InstanceFault object to Fields * Migrate Service object to Fields * Migrate ComputeNode object to Fields * Migrate Quotas object to Fields * Migrate InstanceGroup object to Fields * Migrate InstanceAction and InstanceActionEvent objects to Fields * Move exception definitions out of db api * Remove unused scheduler rpcapi from compute api * Libvirt: disallow live-mig for volume-backed with local disk * xeanpi: pass network\_info to generate\_configdrive * Replace incorrect Null checking to return correctly * Fix nova DB 215 migration script logic error * Xenapi: set hostname when performing a network reset * Fix "resource" length in project\_user\_quotas table * Migrate SecurityGroup object to Fields * Migrate Migration object to Fields * VMware: fix regression attaching iscsi cinder volumes * Remove whitespace from cfg options * cleanup after boto 2.14 fix * Add boto special casing for param changes in 2.14 * xenapi: simplify PV vs HVM selection logic * fix missing host when unshelving * Fix a typo of tabstop * Fix error message of os-cells sync\_instances api * Log which filter failed when on log level INFO * Migrate KeyPair object to Fields * Migrate Aggregate object to Fields * Make field object support transitional call-based interface * Add Field model and tests * Fix conductor's object change detection * Remove obsolete redhat-eventlet.patch * Move is\_volume\_backed\_instance to new style BDM * Add a get\_root\_bdm utility function * Libvirt: allow more than one boot device * Libvirt: make boot dev a list in GuestConfig * Remove compute\_api\_class config option * Libvirt: add boot\_index to block device info dicts * Fixes Hyper-V issue with VHD file format * Update log message for add\_host\_to\_aggregate * Correct use of ConfigFilesNotFoundError * hyperv tests: replace assertEquals with assertEqual * Utilizes assertNotIn * VMware tests: replace assertEquals with assertEqual * Fix incorrect root partition size and compatible volume name * Imported Translations from Transifex * Utilize assertIsInstance * Fix typos in nova/api code * Make \`update\_test\` compatible with nose * Add a custom iboot power driver for nova bm * Fix FK violation errors in InstanceActionTestCase * Fix test\_shadow\_tables() on PostgreSQL/MySQL * Fix PCI devices DB API tests * Fix DB API tests depending on the order of rows * Use print function rather than print statement * Update default for running\_deleted\_instance\_action * Drop unused BM start\_console/stop\_console methods * VMware: Network fallback in case specified one not found * baremetal: Add missing method to volume driver * baremetal: Use network API to get fixed IPs * Replace decprecated method aliases in tests * catch exception in start and stop server api * Ensure that the netaddr import is in the 3rd party section * Fix status code of server's action confirm\_resize for v3 * Remove duplicated method in test\_compute\_api.py * Create flavor-access for the tenant when creating a private flavor * Fix root disk not be detached after deleting lxc container * fallocate image only when user has write access * Fixes typo in ListTargets CLI in hyperv driver * Fixes typos in nova/db code * Fixes 
typos in the files in the nova folder * Avoid clobbering {system\_,}metadata dicts passed to instance update * Baremetal: Be more patient with IPMI and BMC * VMware: fix bug with booting from volumes * Fixes typos in nova/compute files * Fixes typos in virt files * Fix docstring for disk\_cachemodes * Plug Vif into Midonet using Neutron port binding * VMware: remove deprecated configuration variable * Fix races in v3 cells extension tests * Add V3 API samples for consoles * Update allowvssprovider in xenstore\_data * Fix races in cells extension tests * Move \`utils.hash\_file\` -> \`imagecache.\_hash\_file\` * Remove \`utils.timefunc\` function * Remove \`utils.total\_seconds\` * Remove \`utils.get\_from\_path\` * Fix divergence in attach\_interfaces extensions * Replace assert\_ with assertTrue * Fixes several misc typos in scheduler code * Fix libvirt test on systems with real iSCSI devices * Reserve 10 migrations for backports * Sync three-part RPC versions support from Oslo * Remove unused dict functions from utils * Avoid mutable default args in \_test\_populate\_filter\_props * XenAPI: Add versioning for plugins * Add Docstring to some scheduler/driver.py methods * Libvirt: default device bus for floppy block devs * Fix filter\_properties of unshelve API * hyperv: Initialize target\_iqn in attach\_volume * Log if a quota\_usages sync updates usage information 2013.2.rc1 ---------- * Open Icehouse development * baremetal: Fix misuse of "instance" parameter of attach/detach\_volume * Fix the wrong params of attach/detach interface for compute driver * Imported Translations from Transifex * Adds missing entry in setup.cfg for V3 API shelve plugin * Avoid spamming conductor logs with object exceptions * Prefix \`utils.get\_root\_helper\` with underscore * Remove \`utils.debug\` * Remove \`utils.last\_octet\` * Remove \`utils.parse\_mailmap\` * Updated from global requirements * Remove unecessary \`get\_boolean\` function * Make Exception.format\_message aware of Messages * Disable lazy gettext * VMware: Check for the propSet attribute's existence before using * VMware: fix bug for invalid data access * Make rbd.libvirt\_info parent class compatible * Host aggregate configuration throws exception * VMware: Handle cases when there are no hosts in cluster * VMWare: Disabling linked clone doesn't cache images * Fixes inconsistency in flavors list with marker * Fix indentation in virt.libvirt.blockinfo module * Update jsonutils.py from oslo * Fix loading instance fault in servers view * Refactor test cases related to instance object * Use system locale for default request language * Update attach interface api to use new network model * Adds V3 API specific urlmap tests * Catch volume errors during local delete * Fix processutils.execute errors on windows * Fixes rescue doesn't honor enable password conf for v3 * VMware: Fix bug for root disk size * Fix incorrect exception raised during evacuate * Full sync of quota\_usages * Fix log format error in lazy-load message * xenapi: reduce impact of errors during SR.scan * Forced scheduling should be logged as Audit not Debug * xenapi: Resize operations could be faster * Resource limits check sometimes enforced for forced scheduling * Skip test if sqlite3 not installed * Add notification for pause/unpause instance * Make LiveMigrateTask use build\_request\_spec() * Ensure image property not set to None in build\_request\_spec() * Make sure periodic task sync\_power\_states continues on error * get\_all\_flavors uses id as key to be unique * fix 
the an Unexpected API Error issue in flavor API * Adds V3 API samples for srvcs, tenant usage, server\_diagnostics * VMware: Fix SwitchNotFound error when network exists * Fix unicode string values missing in previous patch * Fix stopping instance in sync\_power\_states * Remove deprecated task states * plug\_vif raise NotImplementedError instead of pass * Check instance exists or not when evacuate * xenapi: ignore 500 errors from agent resetnetwork * Add flavor name validation when create flavor * xenapi: enforce filters after live-migration * xenapi: set vcpu cap to ensure weight is applied * Get image metadata in to\_xml for generating xml * Add notification on deleting instance without host * Fix V3 API flavor returning empty string for attributes * Fix v3 server rebuild deserializer checking with wrong access\_ip key * Windows instances require the timezone to be "localtime" * Don't wrap Glance exceptions in NovaExceptions * Update rootwrap with code from oslo * fix typo & grammer in comment 363-364 * Make Instance.refresh() extra careful about recursive loads * Log object lazy-loads * Ensure we don't end up with invalid exceptions again * Fix console db can't load attribute pool * Fix HTTP response for PortNotFound during boot (v3 API) * Fixes assertion bug in test\_cells\_weights.py * Remove \_get\_compute\_info from filter\_scheduler.py * VMware: fix bug for incorrect cluster access * Add V3 API samples for security-groups * Correct lock path for storage-registry-lock * Moved registration of lifcycle events handler in init\_host() * Rebuilding stopped instance should not set terminated\_at * Require oslo.config 1.2.0 final * Removes pre\_live\_migration need for Fixed IPs * Move call to \_default\_block\_device\_names() inside try block * Fix several flake8 issues in the plugins/xenserver code * Fix type is overwritten when UPDATE cell without type specified * Adds v3 API samples for hide server addresses and keypairs * Always filter out multicast from reflection * VMware: fix bug with booting from volume * VMware: enable VNC access without user having to enter password * Remove exceptions.Duplicate * Add v3 API samples for rescue * Added 'page\_size' param to image list * Fix SecurityGroupsOutputTest v3 security group tests * Fixes file mode bits of compute/manager.py * Adds v3 API samples for hosts extension * Only update PCI stats if they are reported from the host * xenapi: Cleanup pluginlib\_nova * Fix Instance object assumptions about joins * Bring up interface when enslaving to a bridge * v3 API samples for servers * xenapi: refactor: move UpdateGlanceImage to common * Imported Translations from Transifex * Fixes modules with wrong file mode bits in virt package * Adds v3 API samples for ips and server\_metadata extensions * Fix V3 API server metadata XML serialization * libvirt: add test case for \_hard\_reboot * Add tests for pre\_live\_migration * Adds V3 API samples for evacuate,ext-az,ext-serv-attrs * Add V3 API samples for ext-status,hypervisor,admin-actions * Code change for regex filter matching * Convert TestCases to NoDBTestCase * VMware: ensure that resource exists prior to accessing * Fixes modules with wrong file mode bits * Fixes test scripts with wrong bitmode * Update sample config generator script * Instance object incorrectly handles None info\_cache * Don't allow pci\_devices/security\_groups to be None * Allow for nested object fields that cannot be None * Object cleanups * Convert TestCases to NoDBTestCase * Convert TestCases to NoDBTestCase * Actually 
fix info\_cache healing lazy load * Fixes host stats for VMWareVCDriver * libvirt: ignore false exception due to slow NFS on resize-revert * Syncs install\_venv\_common.py from oslo-incubator * Correct deleted\_at value in notification messages * VMwareVCDriver Fix sparse disk copy error on spawn * Remove unused \_instance\_update() method in compute api * Change service id to compute for compute/api.py * XenAPI raise InstanceNotFound in \_get\_vm\_opaque\_ref * Replace OpenStack LLC with OpenStack Foundation * Send notification for any updates to instance objects * Add flag to make baremetal.pxe file injection optional * Force textmode consoles on baremetal * Typo: certicates=>certificates in nova.conf * Remove print statement from test\_quotas that fails H233 check * Fix for os-availability-zone/detail returning 500 * Convert TestCases to NoDBTestCase * Fixes the usage of PowerVMFileTransferFailed class * MultiprocessWSGITest wait for workers to die bug * Prune node stats at compute node delete time * VMware: datastore regex not honoured * VMware: handle exceptions from RetrievePropertiesEx correctly * VMware: Fix volume detach failure * Remove two unused config options in baremetal * Adds API samples and unitests for os-server-usage V3 extension * xenapi: Make rescue safer * Add V3 API samples for quota-sets/class-sets,inst-usage-audit-log * Fix problem with starting Windows 7 instances using VMware Driver * VMware: bug fix for instance deletion with attached volume * Fix migration 201 tests to actually test changes * Don't change the default attach-method * Fix snapshot failure with VMwareVCDriver * Fix quota direct DB access in compute * Add new-world Quota object * Fix use of bare list/dict types in instance\_group object * Fix non-unicode string values on objects * Add missing get\_available\_nodes() refresh arg * Make Instance.Name() not lazy-load things * Add debugging to ComputeCapabilitiesFilter * xenapi: fix pep8 violations in nova plugins * Retry on deadlock in instance\_metadata\_delete * Make virt drivers use a consistent hostname * [VMware] Fix problem transferring files with ipv6 host * VMware: Fix ensure\_vlan\_bridge to work properly with existing DVS * Fix network info injection in pure IPv6 environment * delete a non existent flavor extra spec returns 204 * Don't use ModelBase.save() inside of transaction * send the good binding to neutron after live-migration * Add linked clone related unit tests for VMware Hyper * Ensure anti affinity scheduling works * pci passthrough bug fix:hasattr dones not work for dict * Fix rename q\_exc to n\_exc (from quantum to neutron) * Improve "keypair data is invalid" error message * Enable fake driver can live migration * Don't use sudo to discover ipv4 address * xenapi: Fix rescue * Fix create's response is different with requested for sec-grps V3 * Fix logging of failed baremetal commands * Add v3 API samples for os-extended-volumes * Better help for generate config * Fix hyper-v vhd real size bigger than flavor issue * Remove unused and duplicate code * Policy check for forced\_host should be before the instance is created * Remove cached console auth token after migration * Don't generate notifications when reaping running\_deleted instances * Add instance\_flavor\_id to the notification message * Edits for nova.conf.sample * xenapi: fix where root\_gb=0 causes problems * Wire in ConfKeyManager.\_generate\_hex\_key! 
* Drop unused logger from keymgr/\_\_init\_\_.py * Move required keymgr classes out of nova/tests * Translate more REST API error messages * pci passthrough fails while trying to decode extra\_info * Update requirements not to boto 2.13.0 * Port server actions unittests to V3 API Part 1 * Remove unused method in scheduler driver * Ignore H803 from Hacking * Fixes misuse of assertTrue in virt test scripts * Add missing notifications for rescue/unrescue * Libvirt: volume driver set correct device type * Make v3 API versions extensions core * Make Instance.save() log missing save handlers * Don't fail if volume has no image metadata * Get image properties instead of the whole image * Remove extra 'console' key for index in extensions consoles v3 * Fix V3 API server extension point exception propagation * VMware: nova-compute crashes if VC not available * Update mailmap for jhesketh * Code change for nova support glance ipv6 address * disassociate\_address response should match ec2 * Adds V3 API samples for remote consoles, deferred delete * Fix asymmetric view of object fields * Use test.TestingException where possible * Add encryption support for volumes to libvirt * VMware: fix driver support for hypervisor uptime * Wrong arguments when calling safe\_utils.getcallargs() * Add key manager implementation with static key * Remove duplication in disk checks * Change the duplicate class name TestDictMatches in test\_matches.py * Add alias as prefix to request params for config\_drive v3 * xenapi: Add per-instance memory overhead values * Fixes misuse of assertTrue in test scripts * Remove unused and wrong code in test\_compute.py * Remove cases of 'except Exception' in tests.image * Remove \_assert\_compute\_node\_has\_enough\_memory from filter\_scheduler.py * Fix regression issues with cells target filter * Remove out of date list of jenkins jobs * Don't lose exception info * Add filter for soft-deleted instances to periodic cleanup task * Don't return query from db API * Update fedora dev env instructions * Only return requested network ID's * Ensure get\_all\_flavors returns deleted items * Fix the order of query output for postgres * Fix migration 211 to downgrade with MySQL * Removed duplicated class in exception.py * Fix console api pass tuple as topic to console rpc api * Enable test\_create\_multiple\_servers test for V3 API * VMware image clone strategy settings and overrides * Reduce DB load caused by heal instance info cache * Clean up object comparison routines in tests * Clean up duplicated change-building code in objects * disable direct mounting of qcow2 images by default * xenapi: ensure finish\_migration cleans on errors * xenapi: regroup spawn steps for better progress * xenapi: stop injecting the hostname during resize * xenapi: add tests for finish\_migration and spawn * xenapi: tidy ups to some spawn related methods * xenapi: move kernel/ramdisk methods to vm\_utils * xenapi: ensure pool based migrate is live * Fix live-migrate when source image deleted * Adds v3 API samples for limits and simple tenant usage * Return a NetworkInfo object instead of a list * Fix compute\_node\_get\_all() for Nova Baremetal * Add Neutron port check for the creation of multiple instances * Remove unused exceptions * Add V3 API samples for flavor-manage,flavor-extra-specs * Add V3 API samples for flavors,flavor-rxtx,flavor-access * Catch more accuracy exception for \_lookup\_by\_name * Fixes race cond between delete and confirm resize * Fixes unexpected exception message in 
ProjectUserQuotaNotFound * Fixes unexpected exception message in PciConfigInvalidWhitelist * Add missing indexes back in from 152 * Fix the bootfile\_name method call in baremetal * update .mailmap * Don't stacktrace on ImageNotFound in image\_snapshot * Fix PCIDevice ignoring missing DB attributes * Revert "Call safe\_encode() instead of str()" * Avoid errors on some actions when image not usable * Add methods to get image metadata from instance * Fix inconsistent usages for network resources * Revert baremetal v3 API extension * Fixes misuse of assertTrue in compute test scripts * add conf for number of conductor workers * xenapi: Add efficient impl of instance\_exists()

2013.2.b3
---------

* Updated from global requirements * Fix failure to emit notification on Instance.save() * MultiprocessWSGITest wait for workers to die bug * Synchronize the key manager interface with Cinder * Remove indirect dependency from requirements.txt * Clean up check for migration 213 * Add V3 API samples for instance-actions,extensions * fix conversion type missing * Enable libvirt driver to use the new BDM format * Allow block devices without device\_name * Port to oslo.messaging.Notifier API * Add expected\_errors for extension aggregates v3 * Refresh network info cache for secgroups * Port "Make flavors is\_public option .." to v3 tree * Add missing Aggregate object tests * Generalize the \_make\_list() function for objects * PCI passthrough Libvirt vm config * Add columns\_to\_join to instance\_update\_and\_get\_original * XenAPI: Allow 10GB overhead on VHD file check size * Adds ephemeral storage support for Hyper-V * Adds Hyper-V VHDX support * Create mixin class for common DB fields * Deprecate conductor migration\_get() * Change finish\_revert\_resize paths to use objects * Change finish\_resize paths to use objects * Change resize\_instance paths to use objects * VMware: Nova boot from cinder volume * VMware: Multiple cluster support using single compute service * Nova support for vmware cinder driver * Adds Hyper-V dynamic memory support * xenapi: Fix download\_handler fallback * Ensure old style images can be resized * Add nova.utils.get\_root\_helper() * Inherit base image properties on instance creation * Use utils.execute instead of subprocess * Fixes misuse of assertTrue in Cells test scripts * Remove versioning from IOVisor APIs PATH * Revert "Importing correlation\_id middleware from oslo-incubator" * update neutronclient to 2.3.0 minimum * Adds metrics collection support in Hyper-V * Port all rpcapi modules to oslo.messaging interface * Fix a gross duplication of context code in objects tests * Make compute\_api use Aggregate objects * Add Aggregate object model * Add dict and list utility functions for object typing * VMware: remove conditional suds validation * Limit instance fault messages to 255 characters * Add os-assisted-volume-snapshots extension * Scheduler rpcapi 2.9 is not backwards compatible * Adds support for Hyper-V WMI V2 namespace * Port flavormanage extension to v3 API Part 2 * Add os-block-device-mapping to v3 API * Improves Hyper-V vmutils module for subclassing * xenapi: add support for auto\_disk\_config=disabled * Check ephemeral and swap size in the API * Adds V3 API samples for cells and multinic * Increase volume created checking retries to 60 * Fix changes\_since for V3 API * Make v3 API console-output extension core * Makes v3 API keypairs extension core * Add support for API message localization * Fix typo and indent error in isolated\_hosts\_filter.py * Adds
'instance\_type' param to build\_request\_spec * Guest-assisted-snaps libvirt implementation * Improve EC2 API error responses * Remove EC2 postfix from InvalidInstanceIDMalformedEC2 * Introduce InternalError EC2 error code * Introduce UnsupportedOperation EC2 error code * Introduce SecurityGroupLimitExceeded EC2 error code * Introduce IncorrectState EC2 error code * Introduce AuthFailure EC2 error code * Fix ArchiveTestCase on PostgreSQL * Fix AggregateDBApiTestCase on PostreSQL and MySQL * Port Cheetah templates to Jinja2 * Libvirt: call capabilites before getVersion() * Remove \_report\_driver\_status from compute/manager.py * Interpret BDM None size field as 0 on compute side * Add test cases for resume\_state\_on\_host\_boot * Add scheduler support for PCI passthrough * Fix v3 swap volume with wrong signature * vm\_state and task\_state not updated during instance delete * VMware: use VM uuid for volume attach and detach * xenapi: support raw tgz image download * xenapi: refactor - extract image\_utils * Add block\_device\_mapping\_get\_all\_by\_instance to virtapi * Sync rpc from oslo-incubator * Fix the multi-instance quota message * Fix virtual power driver fails silently * VMware: Config Drive Support * xenapi: skip metadata updates when VM not found * Make resource\_tracker record host\_ip * Disable compute fanout to scheduler * Make image\_props\_filter use information from DB not RPC * Make compute\_capabilities\_filter use information from DB not RPC * XenAPI: More operations with LVM-based SRs * XenAPI: make\_partition fixes for Dom0 * Fix wrong method call in baremetal * powervm: make start\_lpar timeout * Disable retry filter with force\_hosts or force\_nodes * Call safe\_encode() instead of str() * Fix usage of classmethod in various places * Fix V3 API quota\_set tests using V3 url and request * Handle port over-quota when allocating network for instance * Fix warning log message typo in resource\_tracker.instance\_claim * Sync filetuils from oslo-incubator * Fix VMware fakes * DRY up use of @wrap\_exception() decorator * Remove unused fake run\_instance() method * Use ExceptionHelper to bypass @client\_exceptions * Added new hypervisor to support Docker containers * Introduce InvalidPermission.Duplicate EC2 error code * Fix and gate on H302 (import only modules) * On snapshot errors delete the image * Remove dis/associate actions from security\_groups v3 * Add volume snapshot delete API test case * Assisted snapshots compute API plumbing * Adds V3 API samples for agents, aggregates and certificates * Adds support for security\_groups for V3 API server create * powervm: Use FixedIntervalLoopingCall for polling LPAR status * xenapi: agent not inject ssh-key if cloud-init * Tenant id filter test is not correct * Add PCI device tracker to compute resource tracker * PCI devices resource tracker * PCI device auto discover * Add PCI device filters support * Avoid swallowing exceptions in network manager * Make compute\_api use Service and ComputeNode objects * Adding VIF Driver to support Mellanox Plugin * Change prep\_resize paths to use objects * Make backup and snapshot use objects * Deprecate conductor migration\_create() * Make inject\_network\_info use objects * Convert reset\_network to use instance object * Make compute\_api use objects for lock/unlock * Add REUSE\_EXT in \_swap\_volume call to blockRebase * Remove unused \_decompress\_image\_file from powervm operator class * powervm: actually remove files after migration * Fix to disallow server name with all blank 
spaces (v3 API) * Add mock to test-requirements * xenapi: Improve test\_xenapi unit testing performance * Sets policy settings so V3 API extensions are discoverable * Pass objects for revert and confirm resizes * Convert \_poll\_unconfirmed\_resizes to use Migration object * Make compute\_api confirm/revert resize use objects * Make compute\_api migrate/resize paths use instance objects * Fix race when running initialize\_gateway\_device() * fix bad usage of exc\_info=True * Use implicit nullable=True in sqlalchemy model * Introduce Invalid\* EC2 error codes * Improve parameter related EC2 error codes * Disconnect from iSCSI volume sessions after live migration * Correct default ratelimits for v3 * Improve db\_sqlalchemy\_api test coverage * Safe db.api.compute\_node\_get\_all() performance improvement * Remove a couple of unused stubs * Fix Instance object issues * Adds API version discovery support for V3 * Port multiple\_create extension to V3 API * Add context information to download plugins * Adds V3 API samples for migrations * Filter network by project id * Added qemu guest agent support for qemu/kvm * PCI alias support * Add PCI stats * Raise timeout in fake RPC if no consumers found * Stub out instance\_update() in build instance tests * Mock out action event calls in build instance test * powervm: revert driver to pass for plug\_vifs * Remove capabilities.enabled from test\_host\_filters * xenapi: through-dev raw-tgz image upload to glance * Add PCI device object support * Store CONF.baremetal.instance\_type\_extra\_specs in DB * Pci Device DB support * VMware: remove redundant default=None for config options * Move live-migration control flow from scheduler to conductor * Fix v3 extensions inherit from wrong controller * Fix network creation in Vlan mode * compute rpcapi 2.29 is not backwards compatible * Fix the message of coverage directory error * Fix error messages in v3 aggregate API * compute rpcapi 2.37 is not backwards compatible * use 'exc\_info=True' instead of import traceback * Add env to make\_subprocess * Remove unused nova.common module * Adds Flavor ID validations * Imported Translations from Transifex * Add DocStrings for function allocate\_for\_instance * Removes V3 API images and image\_metadata extensions * Powervm driver now logs ssh stderr to warning * Update availability\_zone on time if it was changed * Add db.block\_device\_mapping\_get\_by\_id * Add volume snapshot APIs to driver interface * Pass the destination file name to download modules * Fix typo in baremetal docs * VMware: clean up get\_network\_with\_the\_name * Stylistic improvement of compute.api.API.update() * Removes fixed ips extension from V3 API * Libvirt: fix KeyError in set\_vif\_bandwidth\_config * Add expected\_errors for migrations v3 * Add alias as prefix to request params for user\_data v3 * Fix migrations index * Should finish allocating network before VM reaches ACTIVE * Fixes missing host in Hyper-V get\_volume\_connector * Fix various cells issues due to object changes * Document CONF.default\_flavor is for EC2 only * Revert task state when terminate\_instance fails * Revert "Make compute\_capabilities\_filter use ..." 
* Add resource tracking to build\_and\_run\_instance * Link Service.compute\_node with ComputeNode object * Add ComputeNode object implementation * Add Service object implementation * Make compute\_api use KeyPair objects * Add KeyPair object * Fix spice/vnc console api samples tests * Fix network manager tests to use correct network host * Stub out get\_console\_topic() in test\_create\_console * Stub out instance\_fault\_create() in compute tests * Fix confirm\_resize() mock in compute tests * Fix rpc calls on pre/post live migration tests * Stub out setup\_networks\_on\_host() in compute tests * maint: remove redundant disk\_cachemode validation entry * Fix unicode key of azcache can't be stored to memcache * XenAPI: SR location should default to location stored in PBD * XenAPI: Generic Fake.get\_all\_records\_where implementation * XenAPI: Return platform\_version if no product\_version * XenAPI: Support local connections * Delete expired instance console auth tokens * Fix aggregate creation/update with null or too long name * Fix live migration test for no scheduler running * Fix get\_diagnostics() test for no compute consumer * Stubout reserve\_block\_device\_name() in test * Stubout deallocate\_for\_instance() in compute tests * Stub out net API sooner in servers API test * PCI utils * Object support for instance groups * Add RBD supporting to libvirt for creating local volume * Add alias as prefix to request params for availability\_zone v3 * Remove deprecated legacy network info model in Hypervisor drivers * Correct the authorizer for extended-volumes v3 * emit warning while running flake8 without virtual env * Adds Instance UUID to rsync debug logging * Fixes sync issue for user level resources * Fix Fibre Channel attach for single WWN * nova.conf configurable gzip compression level * Stub out more net API methods floating IP DNS test * Enable CastAsCall for test\_api\_samples * Stub out attach\_volume() in test\_api\_samples * Fix remove\_fixed\_ip test with CastAsCall * Add add\_aggregate\_to\_host() to FakeDriver * Fix api samples image service stub * Add CastAsCall fixture * Enable consoleauth service during ec2 tests * Disable periodic tasks during integration tests * Use ExceptionHelper to bypass @client\_exceptions * Clean up some unused wrap\_exception() stuff * Add new compute method for building an instance * VMware: provide a coherent message to user when viewing console log * Use new BDM syntax when determining boot metadata * Allow more than one ephemeral device in the DB * Port flavormanage extension to v3 API part 1 * Correct the status code to 201 for create v3 * Pop extra keys from context in from\_dict() * Don't initialize neutronv2 state at module import * Remove instance exists check from rebuild\_instance * Remove unused variables in test\_compute\_cells * Fix fake image\_service import in v3 test\_disk\_config * Updates tools/config/README * xenapi: Added iPXE ISO boot support * Log exception details setting vm\_state to error * Fix instance metadata access in xenapi * Fix prep\_resize() stale system\_metadata issue * Implement hard reboot for powervm driver * Use the common function is\_neutron in servers.py * Make xenapi capabilities['enabled'] use service enabled * Remove duplicate test from V3 version of test\_hosts * Remove unused nova.tests.image.fake code * Remove unused fake run\_instance() method * Remove use of fake\_rabbit in Nova * libvirt: fix {attach,detach}\_interface() * Added test case in test\_migrations for migration 208 * Add flag to make 
IsolatedHostsFilter less restrictive * Add unique constraint to AggregateMetadata * Fix a typo in test\_migrations for migration 209 * Remove duplicate variable \_host\_state * enhance description of share\_dhcp\_address option * Adds missing V3 API scheduler hints testcase * [v3] Show detail of an quota in API os-quota-sets * Remove legacy network model in tests and compute manager * Remove redundant \_create\_instance method from test\_compute * Add jsonschema to Nova requirements.txt * Remove docstrings in tests * Fix scheduler prep\_resize deprecated comments * Search filters for get\_all\_system\_metadata should use lists * fix volume swap exception cases * Set VM back to its original state if cold migration failed * Enforce flavor access during instance boot * Stub out entry points in LookupTorrentURLTestCase * Port volumes swap to the new API-v3 * correct the name style issue of ExtendedServerAttributes in v3 api * Fix IVS vif to correctly delete interfaces on unplug * Adding support for iSER transport protocol * libvirt: allow passing 'os\_type' property to glance * Fixes auto confirm invalid error * Fix ratelimiting * quantum pxeboot-port support for baremetal * baremetal: Log IPMI power on/off timeouts * VMware: Added check for datastore state before selection * Boot from image destination - volume * Virt driver flag for different BDM formats * Refactor how BDMs are handled when booting * Change RPC to use new BDM format for instance boot * Make API part of instance boot use new BDM format * Add Migration object * Fix untranslated log messages in libvirt driver * Fix migration 210 tests for PostgreSQL * Handle InstanceInvalidState of soft\_delete * Don't pass RPC connection to pre\_start\_hook * VMware: Ensure Neutron networking works with VMware drivers * Unimplemented suspend/resume should not change vm state * Fix project\_user\_quotas\_user\_id\_deleted\_idx index * Allow Cinder to specify file format for NFS/GlusterFS * Add migration with missing fkeys * Implement front end rate-limiting for Cinder volume * Update mailmap * Fixup some non-unity-ness to conductor tests * Add scheduler utils unit tests * Convert admin\_actions ext tests to unit tests * Unit-ify the compute API resize tests * Raises masked AssertionError in \_test\_network\_api * Have tox install via setup.py develop * Set launch\_index to right value * Add passing a logging level to processutils.execute * Clear out service disabled reason on enable for V3 API * Fix HTTP response for PortInUse during boot (v3 API) * Adds infra for v3 API sample creation * Remove deprecated CONF.fixed\_range * Offer a paginated version of flavor\_get\_all * Port integrated tests for V3 API * Refactor integrated tests to support V2 and V3 API testing Part 2 * Refactor integrated tests to support V2 and V3 API testing * Fix cells manager RPC version * Upgrade to Hacking 0.7 * Fix logic in add\_host\_to\_aggregate() * Enforce compute:update policy in API * Removed the duplicated \_host\_state = None in libvirt driver * Sync gettextutils from oslo-incubator * Fix typo in exception message * Fix message for server name with whitespace * Demote personalities from core of API v3 as extensions os-personality * Port disk\_config API to v3 Part 2 * remove \_action\_change\_password the attribute in V3 server API * Fix exception handling in V3 API coverage extension * Remove "N309 Python 3.x incompatible construct" * Allow swap\_volume to be called by Cinder * Remove trivial cases of unused variables * Handle NeutronClientException in 
secgroup create * Fix bad check for openstack versions (vendor\_data/config drive) * Make compute\_capabilities\_filter use information from DB not RPC * Make affinity\_filters use host\_ip from DB not RPC * db: Add host\_ip and supported\_instances to compute\_nodes * Add supported\_instances to get\_available\_resource to all virt drivers * libvirt: sync get\_available\_resources and get\_host\_stats * Clean up unimplemented methods in the powervm driver * Make InvalidInstanceIDMalformed an EC2 exception * Fix one port can be attached to more devices * Removed code duplication in test\_get\_server\_\*\_by\_id * Add option for QEMU Gluster libgfapi support * Moves compute.rpcapi.prep\_resize call to conductor * Fix get\_available\_resource docstrings * Fix spelling in image\_props\_filter * Fix FK violation in ConsoleTestCase * Fix ReservationTestCase on PostgreSQL * Fix instance\_group\_delete() DB API method * Fix capitalization, it's OpenStack * Add test cases to validate neutron ports * Add expected\_errors for extension quota\_classes v3 * Fix leaking of image BDMs * Moved tests for server.delete * Fix VMwareVCDriver to support multi-datastore * Fixes typo in \_\_doc\_\_ of /libvirt/blockinfo.py * User quota update should not exceed project quota * Port "Accept is\_public=None .." to v3 tree * Remove clear\_rabbit\_queues script * Don't need to init testr in run\_tests.sh * Imported Translations from Transifex * Deprecate conductor's compute\_reboot() interface * Deprecate conductor's compute\_stop() interface * Make compute\_api use InstanceAction object * Add basic InstanceAction object * Add delete() operation to InstanceInfoCache * Make compute\_api use Instance.destroy() * Add Instance.destroy() * Make compute\_api use Instance.create() * Change swap\_volume volume\_api calls to use ID * Fix H501: Do not use locals() for string formatting * fix libguestfs mount order when inspecting * Imported Translations from Transifex * powervm: add test case for get\_available\_resource * Fix to allow ipv6 in host\_ip for ESX/vSphere driver * Improve performance of driver's get\_available\_nodes * Cleanup exception handling on evacuate * Removed code for modular exponentiation, pow() already does this * Remove unsafe XML parsing * Fix typo with network manager service\_name * Remove old legacy network info model in libvirt driver * maint: remove redundant default=None for config options * Fix simultaneous timeout with smart iptables usage * xenapi: send identity headers from glance plugin * Catch ldap ImportError * xenapi: refactor - extract get\_virtual\_size * xenapi: refactor - extract get\_stream\_funct\_for * xenapi: test functions for \_stream\_disk * Check host exists before evacuate * Fix EC2 API Fault wrapper * Fix deferred delete use of objects * Remove unsafe XML parsing * Update BareMetal driver to current nova.network.model * Personality files can be injected during server rebuild * Need to allow quota values to be set to zero * Merged flavor\_disabled extension into V3 core api * Merged flavorsextraspecs extension into core API * Code dedup in test\_update\_\* * Move tests test\_update\_\* to separate class * VMware: fix rescue/unrescue instance * Add an exception when doesn't have permissions to operate vm on hyper-v * Remove dead capabilities code * Spelling correction in test\_glance.py * Enhance object inheritance * Enable no\_parent and file\_only security * Add Instance.create() * Pull out instance object handling for use by create also * Make fake\_instance handle 
security groups * Fix instance actions testing * Sync models with migrations * Wrong unique key name in 200 migration * Remove unused variable * Make NovaObject.get() avoid lazy-load when defaulting * Fix migration downgrade 146 with mysql * Remove the indexes on downgrade to work with MySQL * Downgrade MySQL to the same state it used to be * Format CIDR strings as per storage * Fix migration downgrade 147 with mysql * Fix typo in compute.rpcapi comments * Imported Translations from Transifex * Avoid extra glance v2 locations call! * xenapi: Adding BitTorrent download handler * xenapi: remove dup code in make\_step\_decorator * Retry failed instance file deletes * xenapi: retry when plugin killed by signal * Do not use context in db.sqla.api private methods * Finish DB session cleanup * Clean up session in db.sqla.api.migration\_\* methods * Clean up session in db.sqla.api.network\_\* and sec\_groups\_\* methods * Don't inject files while resizing instance * Convert CamelCase attribute naming to camel\_case for servers V3 API * Convert camelCase attribute naming to camel\_case * Add plug-in modules for direct downloads of glance locations * Allow user and admin lock of an instance * Put fault message in the correct field * Fix Instance objects with empty security groups * db: Remove deprecated assert\_unicode attribute * VlanManager creates superfluous quota reservations * xenapi: allow non rsa key injection * Add expected\_errors for extensions simple\_tenant\_usage v3 * Clean destroy for project quota * Add expected\_errors for extension Console v3 * Add expected\_errors for extension baremetal v3 * Clean up session in db.sqla.api.get\_ec2 methods * Clean up db.sqla.api.instance\_\* methods * remove improper usage of 'assert' * Support networks without gateway * Raise 404 when instance not found in admin\_actions API * Switch to Oslo-Incubator's EnvFilter rootwrap * xenapi: Moving Glance fetch code into image/glance:download\_vhd * Performs hard reboot if libvirt soft reboot raises libvirtError * xenapi: Rename imageupload image * Make nbd reservation thread-safe * Code dedup in class QuotaReserveSqlAlchemyTestCase * Fix multi availability zone issue part 1 * Fix instance\_usage\_audit\_log v3 follow REST principles * Update mailmap * Add obj\_attr\_is\_set() method to NovaObject * Add ObjectActionFailed exception and make Instance use it * Fix change detection logic in conductor * Convert pause/unpause to use objects * Make delete/soft\_delete use objects * Refactor compute API's delete to properly do local soft\_deletes * Add identity headers while calling glanceclient * xenapi: Reduce code duplication in vmops * vendor-data minor format / style cleanups * maint: remove unused exceptions * Add support for Neutron https endpoint * Add index to reservations.uuid column * Refactor EC2 API error handling code * Cleanup copy/paste in test\_quota\_sets * Make EvacuateTest DRYer * Add expected\_errors for extensions quota\_sets and hypervisors * Remove generic exception catching for admin\_actions API v3 * Demote admin-passwd from core of API v3 as extensions os-admin-password * handle auto assigned flag on allocate floating ip * Add expected\_errors for extension shelve v3 * Use cached nwinfo for secgroup rules * Sync config.generator from Oslo * Remove \* import from xenserver plugins * EC2-API: Fix ambiguous ipAddress/dnsName issue * xenapi: no image upload retry on certain errors * Add error checking around host service checking * add vendor\_data to the md service and config drive * 
Moves compute.rpcapi.prep\_resize call to scheduler.manager * Removed scheduler doc costs section * Fix formatting on scheduler documentation * Add expected\_errors for extension server\_diagnostics V3 * Fix extensions agent follow API v3 rules * XenAPI: Change the default SR to be the pool default * Fix flavor\_access extension follow API V3 rules * Add notification for live migration call * Correct status code and response for quota\_sets API v3 * Fixes for v3 API servers tests * Remove sleep from service group db and mc tests * [xenapi] Unshadow an important test case class * Fix and Gate on H303 (no wildcard imports) * Remove unreachable code * powervm: pass on unimplemented aggregate operations * Fix timing issue in SimpleTenantUsageSample test * Code dedup in virt.libvirt.test\_imagecache.test\_verify\_checksum\_\* * Move tests test\_verify\_checksum\_\* to separate class * Logging virtual size of the QCOW2 * Add expected\_errors for extension certificates v3 * Support setting block size for block devices * Set the image\_meta for the instance booted from a volume * return 429 on API rate limiting occur * Add task\_state filter for nova list * Port server\_usage API to v3 part 2 * Port server\_usage API to v3 part 1 * Adds factory methods to load Hyper-V utils classes * Fix 2 pep8 errors in tests * Enabled hacking check for Python3 compatible print (H233) * Fix race between aggregate list and delete * Enforce authenticated connections to libvirt * Enabled the hacking warning for Py3 compatible octal literals (H232) * Remove fping plugin from V3 API * Moves scheduler.rpcapi.prep\_resize call on compute.api to conductor * Fix some Instance object class usage errors * xenapi: remove pv detection * Add expected\_errors for extension keypair and availablity\_zone * Add expected\_errors for extension console\_output v3 * Fix extension hosts follow API v3 rules * Use project quota as default user quota * Adds NoAuthMiddleware for V3 * xenapi: remove propagate xenapi\_use\_agent key * Update references with new Mailing List location * MinDisk size based on the flavor's Disk size * Use RetrievePropertiesEx instead of RetrieveProperties * Speed up test BareMetalPduTestCase.test\_exec\_pdutool * Port ips-extended to API-v3 ips core API Part 2 * Disable per-user rate limiting by default * Support EC2 API wildcards for DescribeTags filters * Remove the monkey patching of \_ into the builtins * Sync lockutils from Oslo * Set lock\_path in tests * Port ips-extended to API-v3 ips core API Part 1 * Fix postgresql failures related to Data type to API-v3 fixed-ip * Bypass queries which cause a contradiction * Add basic BDM format validation in the API layer * Servers API for the new BDM format * Fixes Hyper-V issues on versions prior to 2012 * Add expected\_errors for extension instance\_actions v3 * Fix extension server\_meta follow API v3 rules * Ensure that uuid is returned with mocked instance * Code dedup in class InstanceTypeExtraSpecsTestCase * Add expected\_errors for extension cells V3 * Add expected\_errors for extension\_info V3 * Add latest oslo DB support * Add note why E712 is ignored * Clarify instance\_type vs flavor in nova-manage * Fix leaky network tests * Fix HTTP response for PortNotFound during boot * Don't pass empty image to filter on live migration * Start using hacking 0.6 * Set VM back to its original state if cold migration failed * xenapi: ensure vcpu\_weight configured correctly * Fix failing network manager unit tests * Add expected\_errors for extensions services and 
server\_password v3 * Update oslo.config.generator * Fix the is\_volume\_backed\_instance check * Add support for volume swap * Fix policy failure on image\_metadata calls * Sync models for AgentBuild, Aggregates, AggregateHost tables * Imported Translations from Transifex * Make ServerXMLSerializationTest DRYer * Port migrations extension to v3 API part 2 * Port migrations extension to v3 API part 1 * xenapi: Fix console rotate script * Sync some of Instance\* models with migrations * Fix extension rescue follow API v3 rules * Per-project-user-quotas for more granularity * Add unique constraint to InstanceTypeExtraSpecs * Remove instance\_metadata\_get\_all\* from db api * Merged flavorextradata extension (ephemeral disk size) into core API * Fixed tests for flavor swap extension after merging in core API * Remove hostname param from XenApi after first boot * Cell Scheduler support for hypervisor versions * Fix flavor v3 follow API v3 rules * Sync sample config file generator with Oslo * Allow exceptions to propagate through stevedore map * Create vmware section * Sync latest rpc changes from oslo-incubator * Check instance on dest once during block migration * Revert "Add requests requirement capped <1.2.1." * Unit-ify compute\_api delete tests * Convert network API to use InfoCache object * Make InfoCache.network\_info be the network model * Make shelve pass old-world instance object to conductor * Make admin API state resets use Instance.save() * Deduplicate data in TestAddressesXMLSerialization * Move \_validate\_int\_value controller func to utils * Correct the action name for admin\_actions API v3 * Fixing dnsdomain\_get call in nova.network.manager * Raise exception if both port and fixed-ip are in requested networks * Sync eventlet\_backdoor from oslo-incubator * Fix up trivial license mismatches * Implements host uptime API call for cell setup * Ensure dates are dates, not strings * Use timeutils.utcnow() throughout the code * Add indexes to sqlite * Fix iptables rules when metadata\_host=127.0.0.1 * Sync gettextutils from oslo * Handle instance objects in conductor compute\_stop * Config drive attached as cdrom * Change EC2 client tokens to use system\_metadata * Check that the configuration file sample is up to date * Make servers::update() use Instance.save() to do the work * Make Instance.save() handle cells DB updates * Convert suspend/resume to use objects * Make compute\_api.reboot() use objects * Fix HTTP response for PortInUse during boot * Fix DB access when refreshing the network cache * Use valid IP addresses values in tests * Add ability to factor in per-instance overheads * Send updated aggregate to compute on add/rm host * Fix inconsistency between Nova-Net and Neutron * Fix parse\_transport\_url when url has query string * xenapi: no glance upload retry on 401 error * Code dedup in test\_libvirt\_vif * Raise exceptions when Spice/VNC are unavailable * xenapi: Pass string arguments to popen * Add rpcapi tests for shelving calls * Create key manager interface * Remove duplicate cells\_rpcapi test * ec2-api: Disable describing of instances using deleted tags as filter * Disable ssl layer compression for glance requests * Missed message -> msg\_fmt conversion * Refresh network cache when reassigning a floating IP in Neutron * Remove unnecessary comments for instance rebuild tests * Add missing tests for console\_\* methods * Force reopening eventlet's hub after fork * Remove project\_id from alternate image link path * Fixes wrong action comment 'lock' to 'unlock' * 
Add expected\_errors for extension extended\_volumes v3 * port BaremetalNodes API into v3 part2 * port baremetal\_nodes API into v3 part1 * Add validation of available\_zone during instance create * Move resource usage sync functions to db backend * Remove locals() from various places * Add expected\_errors for extension evacuate v3 * Add expected\_errors for extension deferred\_delete v3 * Fix accessing to '/' of metadata server without any checks to work * Fix duplicate osapi\_hide\_server\_address\_states config option * API for shelving * Fix shelve's use of system\_metadata * Fix Instance object handling of implied fields * Make Instance object properly update \*metadata * Support Client Token for EC2 RunInstances * Change get\_all\_instance\_metadata to use \_get\_instances\_by\_filters * Add a new GroupAffinityFilter * Move a migration test to MigrationTestCase * Use db.flavor\_ instead of db.instance\_type\_ * Periodic task for offloading shelved instances * Shelve/unshelve an instance * Code dedup in class ImagesControllerTest * Assert backing\_file should exist before attempting to create it * Add API-v3 merged core API into core API list * Don't ignore 'capabilities' flavor extra\_spec * Support scoped keys in aggregate extra specs filter * Fix blocking issue when powervm calculate checksum * Avoid shadowing Exception 'message' attribute * Code dedup in class TestServerActionRequestXMLDeserializer * Fix mig 186 downgrade when using sqlalchemy >= 0.8 * Move test\_stringified\_ips to InstanceTestCase * Move \*\_ec2\_\* tests in test\_db\_api to own test case * Code dedup in class ImageXMLSerializationTest * Fix malformed format string * Fix EC2 DescribeTags filter * Code dedup in test\_libvirt\_volume * Port AttachInterfaces API to v3 Part 2 * Make ServersViewBuilderTest DRYer * Move test\_security\_group\_update to SecurityGroupTestCase * Code dedup in class ServersControllerCreateTest * Code dedup in tests for server.\_action\_rebuild * Moved tests for server.\_action\_rebuild * Move bw\_usage\_\* tests in test\_db\_api to own test case * Move dnsdomain\_\* tests in test\_db\_api to own test case * Remove redundant if statements in cells.state * Move special cells logic for start/stop * Port used limits extension to v3 API Part 2 * Avoid deleting user-provided Neutron ports if VM spawn fails * Fix nic order not correct after reboot * Porting os-aggregates extensions to API v3 Part 2 * Porting os-aggregates extensions to API v3 Part 1 * Porting server metadata core API to API v3 Part 2 * Porting server metadata core api to API v3 Part 1 * Port limits core API to API-v3 Part 2 * xenapi: Only coalesce VHDs if needed * Don't attach to multiple Quantum networks by default * Load cell data from a configuration file * Fix filtering aggregate metadata by key * remove python-glanceclient cap * Remove duplicated key\_pair\* tests from test\_db\_api * Porting limits core api to API v3 Part 1 * Add missing tests for db.api.instance\_\* methods * Fix IPAddress and CIDR type decorators * Complete deletion when compute manager start-up * Port user\_data API to v3 Part 2 * Add legacy flag to get\_instance\_bdms * XenAPI: Refactor Fake to create pools, SRs and VIFs automatically * Port flavor\_rxtx extension to v3 API Part 2 * Port flavor\_rxtx extension to v3 API Part 1 * Fix aggregate\_get\_by\_host host filtering * Fix v3 hypervisor extension servers action follow REST principles * xenapi:populating hypervisor version in host state * Port attach and detach of volume-attachment into 
os-extended-volume v3 * Port deferredDelete API to v3 Part 2 * Fix status code for coverage API v3 * Port instance\_actions API to v3 Part 2 * port instance\_actions API into v3 part1 * Prompt error message when creating aggregate without aggregate name * Port used limits extension to v3 API Part 1 * Makes \_PATH\_CELL\_SEP a public global variable * port disk\_config API into v3 part1 * Imported Translations from Transifex * Remove locals() from virt directory * Handle ImageNotAuthorized exception * Port AvailabilityZone API to v3 Part 2 * port AvailabilityZone API into v3 part1 * Port service API to v3 Part 2 * Imported Translations from Transifex * Unify duplicate code for powering on an instance * Port hide srvr addresses extension to v3 API Pt2 * Sync v2/v3 console\_output API extensions * Port extended status extension to v3 API Part 2 * Port os-console-output extension to API v3 Part 2 * Changes select\_destinations to return dicts instead of objects * Better start/stop handling for cells * Make notifications properly string-convert instance datetimes * Fix default argument values on get\_all\_by\_filters() * Make db/api strip timezones for datetimes * Fix object\_compat decorator for non-kwargs * Imported Translations from Transifex * Remove unused recreate-db options from run\_test.sh * update Quantum usage to Neutron * Convert cells to use a transport URL * Fix aggregate update * Passing volume ID as id to InvalidBDMVolume exception * Handle instance being deleted while in filter scheduler * Port extended-availability-zone API into v3 part2 * Fix extensions os-remote-consoles to follow API v3 rules * Add unique constraints to AggregateHost * Unimplemented pause should not change vm state on PowerVM * Port server password extension to v3 API Part 2 * xenapi: Add disk config value to xenstore * Port hide srvr addresses extension to v3 API Pt1 * Add -U to the command line for pip * xenapi: support ephemeral disks bigger than 2TB * Cells: Make bandwidth\_update\_interval configurable * Add \_set\_instance\_obj\_error\_state() to compute manager * Update v3 servers API with objects changes * xenapi: enable attach volumes to non-running VM * Change force\_dhcp\_release default to True * Revert "Sync latest rpc changes from oslo-incubator" * Sync 10 DB models and migrations * Make compute\_api.get() use objects natively * port Host API into v3 part2 * Port admin-actions API into v3 part2 * Fix cells manager rpc api version * Allow ::/0 for IPv6 security group rules * Fix issue with pip installing oslo.config-1.2.0 * Sort output for unit tests in test\_describe\_tags before compare * Document rate limiting is per process * Properly pin pbr and d2to1 in setup.py * Add support for live\_snapshot in compute * xenapi: Stub out \_add\_torrent\_url for Vhd tests * Add Instance.get\_by\_id() query method * Fix duplicate fping\_path config option * Port images metadata functionality to v3 API Part 2 * Add unique constraint to ConsolePool * Enable core API-v3 to be optional when unit testing * Clarify flavorid vs instance\_type\_id in db * Sync db.models.Security\* and db.models.Volume\* * Sync db.models.Instance\* with migrations * Add "ExtendedVolumes" API extension * Fix misc issues with os-multinic v3 API extension * Port multinic extension to v3 API Part 2 * Port security groups extension to v3 API Part 2 * Port security groups extension to v3 API Part 1 * Add missing help messages for nova-manage command * Validate volume\_size in block\_device\_mapping * Imported Translations from 
Transifex * Fix info\_cache and bw\_usage update race * xenapi: glance plugin should close connections * Change db.api.instance\_type\_ to db.api.flavor\_ * Replace get\_instance\_metadata call in api.ec2.cloud.\_format\_instances * Add unique constraint to AgentBuild * Ensure flake8 tests run on all api code * Sync notifier change from oslo-incubator * Sync harmless changes from oslo-incubator * Sync latest rpc changes from oslo-incubator * Add missing matchmaker\_ring * Port extended-server-attributes API into v3 part2 * List migrations through Admin API * Add a VIF driver for IOVisor engine * port Service API into v3 part1 * Port admin-actions API into v3 part1 * Port fping extension to v3 API Part 2 * Disassociate fixed IPs not known to dnsmasq * Imported Translations from Transifex * Allow filters to only run once per request if their data is static * Port extended-availability-zone API into v3 part1 * Update openstack.common.config * Export just the volume metadata for the database to be populated * port Deferred\_delete API into v3 part1 * Misc fixes for v3 evacuate API extension * Imported Translations from Transifex * Baremetal ensures node is off before powering on * Remove references to deprecated DnsMasqFilter * Port user\_data API to v3 Part 1 * Update instance.node on evacuate * Fix formatting errors in documentation * Use oslo.sphinx and remove local copy of doc theme * Remove doc references to distribute * Sync install\_venv\_common from oslo * Make EC2 API request objects instead of converting them * Make instance show and index use objects * Remove conductor usage from consoleauth service * xenapi: Stub out entry points for BitTorrent tests * Fix debug message for GroupAntiAffinityFilter * Add unique constraints to Service * Add unique constraint to FixedIp * Fixed columns list in indexes * Add cinder cleanup to migrations * Change unique constraint in VirtualInterface * Changes ComputeTaskManager class to inherit base.Base * Moves populate retry logic to the scheduler utils * Exceptions raised by quantum validate\_networks result in 500 error * Fix and gate on E125 * Add object (de)serialization support to cells * Add cells get\_cell\_type() method * Add fill\_faults() batch operation to InstanceList * Make api\_samples reboot test use a plausible scenario * Fix compute\_api object handling code in cells messaging * Fix power\_state lookup in confirm\_resize * Make flavors is\_public option actually work * Imported Translations from Transifex * hyperv: Fix vmops.get\_info raises InstanceNotFound KeyError * Make instance\_update() string-convert IP addresses * Refactor compute\_api reboot tests to be unit-y * Refactors select\_destinations to return HostState objects * PowerVM resize and migrate test cases * Clear out service disabled reason on enable * Port agent API to v3 Part 2 * Fix v3 hypervisor extension search action follow REST principles * Fix resize ordering for COW VHD * Add inst\_type parameter * Store volume metadata as key/value pairs * Fixes a typo on AggregateCoreFilter documentation * xenapi: Tidy up Popen calls to avoid command injection attacks * Remove notify\_on\_any\_change option * Add unique constraints to Quota * Port images metadata functionality to v3 API Part 1 * Port scheduler hints extension to v3 API Part 2 * Adding action based authorization for keypairs * Port multinic extension to v3 API Part 1 * Port hypervisor API into v3 part2 * port Instance\_usage\_audit\_log API into v3 part2 * port Instance\_usage\_audit\_log API into v3 part1 * 
Fix metadata for create in child cell * update xen/vmware virt drivers not to hit db directly * Reduce nesting in instance\_usage\_audit * Port os-console-output extension to API v3 Part 1 * Fix to integer cast of length in console output extension * Imported Translations from Transifex * Add notifiers to both attach and detach volumes * Make test\_deferred\_delete() be deterministic * Added functionality for nova hooks pass functions * Fix compatibility with older confirm\_resize() calls * Pass instance host-id to Quantum using port bindings extension * libvirt: Fix spurious backing file existence check * Add unique constraint for security groups * powervm: make get\_host\_uptime output consistent with other virt drivers * Remove locals() from virt/vmwareapi package * Add HACKING check for db session param * Select disk driver for libvirt+Xen according to the Xen version * Port coverage API into v3 part2 * Port coverage API into v3 part1 * Fix grizzly compat issue in conducor rpc api * Xenapi shutdown should return True if vm is shutdown * Break out Compute Manager unit tests * Break out compute API unit tests * port Host API into v3 part1 * Imported Translations from Transifex * Standardize use of nova.db * Check system\_metadata type in \_populate\_instance\_for\_create * Clean up and make HACKING.rst DRYer * Sync db.models with migrations * Refactor ServerStatusTest class * Move tests db.api.instance\_\* to own class * Add tests for \`db.console\_pool\_\*()\` functions * Fix binding of SQL query params in DB utils * Make db.fakes stub out API not sqlalchemy * Reassign MAC address for vm when resize\_revert * test\_xmlutil.py covers more code in xmlutil.py * Handle UnexpectedTaskState and InstanceNotFound exceptions * Port quota classes extension to v3 API Part 2 * Ports image\_size extension to v3 API * xenapi: Add configurable BitTorrent URL fetcher * remove locals() from virt/hyperv package * Add resume state on host boot function to vmware Hyper * Port server\_diagnostics extension to v3 API Part2 * Port images functionality to v3 API Part 2 * Port cells extension to v3 API Part 2 * Notification support for host aggregate related operation * Fix vol\_usage\_update() DB API tests * Port consoles extension API into v3 part2 * Port consoles extension API into v3 part1 * Imported Translations from Transifex * New select\_destinations scheduler call * Session cleanup for db.security\_group\_\* methods * fix invalid logging * Port scheduler hints extension to v3 API Part 1 * Port config\_drive API to v3 Part 2 * Port config drive API to v3 Part 1 * Port images functionality to v3 API Part 1 * Moves scheduler.manager.\_set\_vm\_state\_and\_notify to scheduler.utils * VNC console does not work with VCDriver * Sane rest API rate limit defaults * Ignore lifecycle events for non-existent instances * Fix resizes with attached file-based volumes * Remove trivial cases of unused variables (3) * Remove locals() from compute directory * Hypervisor uptime fails if service is disabled * Fix metadata access in prep for instance objects * Sync to\_primitive() IPAddress support from Oslo * Merged flavor\_swap extension into core API * Fix typo for instance\_get\_all\_by\_filters() function * Implement get\_host\_uptime for powervm driver * Port flavor\_disabled extension to v3 API Part 2 * Fix sqlalchemy utils * Port flavor\_disabled extension to v3 API Part 1 * Port flavor\_access extension to v3 API Part 2 * Port flavor\_access extension to v3 API Part 1 * Fixes for quota\_sets v3 extension * Port 
server password extension to v3 API Part 1 * Port Simple\_tenant\_usage API to v3 Part 2 * xenapi: Remove vestigial \`compile\_metrics\` code * Add update() method to NovaObject for dict compatibility * Add obj\_to\_primitive() to recursively primitiveize objects * Make sure periodic instance reclaims continues on error * Remove broken config\_drive image\_href support * Report the az based on the value in the instance table * Allow retrying network allocations separately * Imported Translations from Transifex * Better default for my\_ip if 8.8.8.8 is unreachable * Fix a couple typos in the nova.exception module * Make fake\_network tolerant of objects * Prepare fake instance stubs for objects * Make info\_cache handle when network\_info is None * Fix instance object's use of a db query method parameter * Make NovaObject support the 'in' operator * Add Instance.fault * Add basic InstanceFault model * xenapi: Make BitTorrent url more flexible * xenapi: Improve cross-device linking error message * db.compute\_node\_update: ignore values['update\_at'] * Make sure periodic cleanup of instances continues on error * Fix for failure of periodic instance cleanup * Update instance properties values in child cells to create instance * port Attach\_interface API into v3 part1 * Sync models.Console\* with migrations * Port quota API into v3 part2 * Stop creating folders in virt unit tests * Imported Translations from Transifex * Refresh volume connections when starting instances * Fix trivial mismatch of license header * Exeption message of 'live migration' is not appropriate * Sync rpc from oslo-incubator * Fix types in test\_ec2\_ids\_not\_found\_are\_printable * Port quota API into v3 part1 * Skip security group code when there is no network * Sync db.models and migrations * Update pyparsing to 1.5.7 * Make InstanceList filter non-column extra attributes * Add Instance.security\_groups * Add basic SecurityGroup model * Revert XenApi virt driver should throw exception * Imported Translations from Transifex * Avoid redefining host to none in get\_instance\_nw\_info(...) * Extract live-migration scheduler logic from the scheduler driver * Fix the filtered characters list from console-log * Add invalid number checking in flavor creation api * Port quota classes extension to v3 API Part 1 * Remove usage of locals() from powervm virt package * Fix xenstore-rm race condition * Refactor db.security\_group\_get() instance join behavior * Fix serialization of iterable types * Fix orphaned instance from get\_by\_uuid() and \_from\_db\_object() * refactor security group api not to raise http exceptions * Perform additional check before live snapshotting * Do not raise NEW exceptions * Baremetal\_deploy\_helper error message formatting * Fix sys\_meta access in prep for instance object * Cells: Pass object for start/stop * Clarify the compute API is\_volume\_backed\_instance method * Add AggregateCoreFilter * Port extended-server-attributes into v3 part1 * Add AggregateRamFilter * Fix KeyError exception when scheduling to child cell * Port missing bits from httplib2 to requests * Revert "fixes nova resize bug when force\_config\_drive is set." 
* Port extended status extension to v3 API Part 1 * Fix quota logging on exceptions * XenApi virt driver should throw exception on failure * Retry quota\_reserve on DBDeadlock * Handle NoMoreFixedIps in \_shutdown\_instance * Make sure instance\_type has extra\_specs * Remove locals() from nova/virt/libvirt package * Fix importing InstanceInfoCache during register\_all() * Make \_poll\_unconfirmed\_resizes() use objects * Revert "Add oslo-config-1.2.0a2 and pbr>=0.5.16 to requirements." * Preserve network order when using ConfigDrive * Revert "Initial scheduler support for instance\_groups" * fixes nova resize bug when force\_config\_drive is set * Add troubleshoot to baremetal PXE template * Sync db.models.Quota\* with migrations * Modify \_assertEqualListsOfObjects() function * Port hypervisor API into v3 part1 * Remove a layer of nesting in \_poll\_unconfirmed\_resizes() * Use InstanceList for \_heal\_instance\_info\_cache() * Remove straggling use of all-kwarg object methods * Allow scheduler manager NoValidHost exception to pass over RPC * Imported Translations from Transifex * Add oslo-config-1.2.0a2 and pbr>=0.5.16 to requirements * Remove usage of locals() for formatting from nova.scheduler.\* * Libvirt driver: normalize variable names (part1) * xenapi: script to rotate the guest logs * Clean up scheduler tests * Drop unused \_virtual\_power\_settings global * Remove junk file when ftp transfer failure * xenapi: revisit error handling around calls to agent * Remove the unused plugins framework * Added unit tests for vmware cluster driver * Adds expected\_errors decorator for API v3 * Sync oslo-incubator gettextutils * port Simple\_tenant\_usage API into v3 part1 * Remove db session hack from conductor's vol\_usage\_update() * Converts scheduler.utils.build\_request\_spec return to json primitive * Revert "Delegate authentication to quantumclient" * Retry the sfdisk command up to 3 times * No support for double nested 64 bit guest using VCDriver * Fill context on objects in lists * Setting static ip= for baremetal PXE boot * Add tests for libvirt's reboot functionality * Check the instance ID before creating it * Add missing tests for nova.db.api.instance\_system\_metadata\_\* * Add err\_msg param to baremetal\_deploy\_helper * Remove \_is\_precooked pre-cells Zones hacks * Raise max header size to accommodate large tokens * Make NovaObject support extra attributes in items() * Imported Translations from Transifex * Fix instance obj refresh() * Fix overzealous conductor test for vol\_usage\_update * Add missing tests for certificate\_\* methods * Log xml in libvirt \_create\_domain failures * Add unique constraints to Cell * Accept is\_public=None when listing all flavors * Add missing tests for cell\_\* methods * Add missing tests for nova.db.api.instance\_metadata\_\* * Don't deallocate network if destroy time out * Port server\_diagnostics extension to v3 API Part1 * Add old display name to update notification * Port fping extension to v3 API Part 1 * libvirt fix resize/migrates with swap or ephemeral * Allow reboot or rebuild from vm\_state=Error * Initial scheduler support for instance\_groups * Fix the ServerPasswordController class doc string * Imported Translations from Transifex * Cleanup certificate API extension * Enforce sqlite-specific flow in drop\_unique\_constraint * Remove unused cert db method * Fix bad vm\_state change in reboot\_instance() * Add rpc client side version control * xenapi: ensure agent check respects image flags * Drop \`bm\_pxe\_ips\` table from 
baremetal database * Adding fixed\_ip in create.end notification * Improved tests for instance\_actions\_\* * Refactored tests for instance\_actions\_\* * Add missing tests for provider\_fw\_rule\_\* methods * Session cleanup for db.security\_group\_rule\_\* methods * Add tests for nova.db.api.security\_group\_rule\_\* methods * Refactors qemu image info parsing logic * Port cells extension to v3 API Part 1 * Organize limits units and per-units constants * Fix flavor extra\_specs filter doesn't work for number * Replace utils.to\_bytes() with strutils.to\_bytes() * Updates nova.conf.sample * Remove bin lookup in conf sample generator * Refactor conf sample generator script * Remove unused arg from make\_class\_properties.getter method * Fix obj\_load() in NovaObject base class * Backup and restore object registry for tests * Fix the wrong reference by CONF * Port flavors core API to v3 tree * Remove usage of locals() from xenapi package * Remove trivial cases of unused variables (1) * Don't make nova-compute depend on iSCSI * Change resource links when url has no project id * Make sync\_power\_state routines use InstanceList * Enhance the validation of the quotas update * Add missing tests for compute\_node\_\* methods * Fix VMware Hyper can't honor hw\_vif\_model image property * Remove use of locals() in db migrations * Don't advertise mute cells capabilities upwards * Allow confirm\_resize if instance is in 'deleting' status * Port certificates API to v3 Part 2 * port agent API into v3 part1 * Port certificates API to v3 Part 1 * Naming instance directory by uuid in VMware Hyper * Revert "Fix local variable 'root\_uuid' ref before assign" * Use Python 3.x compatible octal literals * Fix and enable H403 tests * Remove usage of locals() from manager.py * Fix local variable 'root\_uuid' ref before assign * Improve the performance of migration 186 * Update to the latest stevedore * Quantum API \_get\_floating\_ip\_by\_address mismatch with Nova-Net * xenapi: remove auto\_disk\_config check during resize * xenapi: implement get\_console\_output for XCP/XenServer * Check libvirt version earlier * update\_dns() method optimization * Sync can\_send\_version() helper from oslo-incubator * Remove unused db api call * Quantumapi returns an empty network list * Add missing tests for nova.db.api.network\_\* * Cleanup overshadowing in test\_evacuate.py * Give a way to save why a service has been disabled * Cells: Add support for global cinder * Fix race conditions with xenstore * Imported Translations from Transifex * Remove explicit distribute depend * Fix assumed port has port\_security\_enabled * Rename functions in nova.compute.flavors from instance\_type * Remove redundant architecture property update in powervm snapshot * Use an inner join on aggregate\_hosts in aggregate\_get\_by\_host * xenapi: ensure instance metadata always injected into xenstore * Nova instance group DB support * Fix to disallow server name with all blank spaces * Replace functions in utils with oslo.fileutils * Refactors get\_instance\_security\_groups to only use instance\_uuid * Create an image BDM for every instance * DB migration to the new BDM data format * Fix dangling LUN issue under load with multipath * Imported Translations from Transifex * Add missing tests for s3\_image\_\* methods * Register libvirt driver with closed connection callback * Enhance group handling in extract\_opts * Removed code duplication in conductor.api * Refactored tests for instance\_fault\_\* * Added verbose error message in tests helper 
mixin * Adds v3 API extension discovery filtering * Adds support for the Indigo Virtual Switch (IVS) * Some libvirt driver lookups lacks proper exception handling * Put VM UUID to live migration error notification * Fix db.models.Instance description * Fix db.models.Certificate description * Fix db.models.ComputeNodeStats description * Fix db.models.ComputeNode description * Fix db.models.Service description * BDM class and transformation functions * Remove unused method in VMware driver * Cleanup nova exception message conversion * Update analyze\_opts to work with new nova.conf sample format * Remove unused methods from VirtAPI * Make xenapi use Instance object for host\_maintenance\_mode() * Make xenapi/host use instance objects for \_uuid\_find * Use InstanceList object for init\_host * Add Instance.info\_cache * Use Instance Objects for Start/Stop * Add lists of instance objects * Add base mixin class for object lists * Add deleted flag to NovaObject base * Export volume metadata to new instances * Sending volume IO usage broken * Rename unique constraints due to new convention * Replace openstack-common with oslo in HACKING.rst * Fixes test\_config\_drive unittest * Port evacuate API to v3 Part 2 * Port evacuate API to v3 Part 1 * Speeding up scheduler tests * Port rescue API to v3 Part 2 * Port rescue API to v3 Part 1 * Handle security group quota exceeded gracefully * Adds check that the core V3 API is loaded * Call virt.driver.destroy before deallocating network * More KeypairAPI cleanups * Improve Keypair error messages in osapi * Fix Keypair exception messages * Moving more tests to appropriate locations * Skip ipv6 tests on system without ipv6 support * Keypair API test cleanup * Alphabetize v3 API extension entry point list * Add missing exception to cell\_update() * Refactors scheduler.chance.select\_hosts to raise NoValidHost * Enhance unit test code coverage for availability zone * Converts 'image' to json primitive on compute.rpcapi.prep\_resize * Import osapi\_v3/enabled option in nova/test * Regenerate missing resized backing files * Moving \`test\_misc\` tests to better locations * Allocate networks in the background * Make the datetime utility function coerce to UTC * API to get the Cell Capacity * Update rpc/impl\_qpid.py from oslo * More detailed log in failing aggregate extra filter * xenapi: Added logging for sparse copy * Make object actions pass positional arguments * Don't snat all traffic when force\_snat\_range set * Add x-compute-request-id header when no response body * Call scheduler for run\_instance from conductor * correctly set iface-id in vmware driver * Fix a race where a soft deleted instance might be removed by mistake * Fix quota checks while resizing up by admin * Refactor libvirt driver exception handling * Avoiding multiple code loops in filter scheduler * Don't log warn if v3 API is disabled * Link to explanation of --checksum-full rule * Imported Translations from Transifex * Stop libvirt errors from outputting to strerr * Delete unused bin directory * Make instance object tolerate isotime strings * Add fake\_instance.py * Fix postgresql failures related to Data type * hardcode pbr and d2to1 versions * Silence exceptions from qpid connection.close() (from oslo) * Add Davanum to the mailmap * Fix VMwareVCdriver reporting incorrect stats * Adds ability to black/whitelist v3 API extensions * Clean up vmwareapi.network\_util.get\_network\_with\_the\_name * Imported Translations from Transifex * Normalize path for finding api\_samples dir * Add 
yolanda to the mailmap * Add notes about how doc generation works * python3: Add py33 to tox.ini * Improve Python 3.x compatibility * Ports consoles API to v3 API * Fix nova-compute fails to start if quantum is down * Handle instance directories correctly for migrates * Remove unused launch\_time from instance * Launch\_at and terminated\_at on server(s) response * Fixed two minor docs niggles * Adds v3 API disable config option * Fix bug where consoleauth depended on remote conductor service * Only update cell capabilites once * Ports ips api to v3 API * Make pylint ignore nova/objects/ * Set resized instance back to original vm\_state * Add power\_on flag to virt driver finish/revert migration methods * Cosmetic fix to parameter name in DB API * compute.api call conductor ComputeTaskManager for live-migrate * Removed session from reservation\_create() * Raise exception instances not exception classes * \_s3\_create handles image being deleted * Imported Translations from Transifex * Add instance object * Add base object model * Enhance multipath parsing * Don't delete sys\_meta on instance delete * Fix volume IO usage notifications been sent too often * Add missing os.path.abspath around csrfile * Fix colorizier thowing exception when a test fails * Add db test that checks that shadow tables are up-to-date * Sync shadow table for 159 migration * Sync shadow table for 157 migration * Sync shadow table for 156 migration * Add missing tests for nova.db.api.quota\_\* methods * Add tests for some db.security\_group\_\* methods * Fix \_drop\_unique\_constraint\_in\_sqlite() function * Clean up failed image transfers in instance spawn * Make testr preserve existing OS\_\* env vars values * Fix msg version type sent to cells RPC API * Verify that CONF.compute\_driver is defined * Fix EC2 RegisterImage ImageLocation starts with / * Support Cinder mount options for NFS/GlusterFS * Raise exception instances, not exception classes * Add update method of security group name and description * Cell weighing class to handle mute child cells * Add posargs support to flake8 call * Enumerate Flake8 E12x ignores * Fix and enable flake8 F823 * Fix and enable flake8 F812 * libvirt: improve the specification of network disks * Imported Translations from Transifex * In utils.tempdir, pass CONF.tempdir as an argument * Delegate authentication to quantumclient * Pull binary name from sys.argv[0] * Rename policy auth for V3 os-fixed-ips * Fix internationalization for some LOG messages * Enumerate Flake8 Fxxx ignores * Enable flake8 E721 * Removing misleading error message * No relevant message when stop a stopped VM * Cells: Add filtering and weight support * API Extensions framework for v3 API Part 2 * fix a misleading docstring * xenapi: make the xenapi agent optional per image * Fix config drive code logical error * Add missing conversion specifier to ServiceGroupUnavailable * Deprecate compute\_api\_class option in the config * Add node as instance attribute for notification * removes project\_id/tenant\_id from v3 api urls * Set up 'compute\_task' conductor namespace * Removed superflous eval usage * Fix log message * Sync shadow table for 179 migration * Remove copy paste from 179 migration * Sync shadow table for 175 and 176 migration * Change db \`deleted\` column type utils * Fix tests for sqlalchemy utils * Add missing tests for nova.db.api.quota\_class\_\* * Moved sample network creation out of unittest base class constructor * Add missing tests for db.api.reservation\_\* * add xml api sample tests to 
os-tenant-network * Remove locals() usage from nova.virt.libvirt.utils * IPMI driver sets bootdev option persistently * update mailmap * Imported Translations from Transifex * Remove tempest hack for create/rebuild checks * Better error message on malformed request url * virt: Move generic virt tests to nova/tests/virt/ * vmwareapi: Move tests under tests/virt/vmwareapi/ * hyperv: Move tests under nova/tests/virt/hyperv * Fix UnboundLocalError in powervm lvm cleanup code * Delete a quota through admin api * Remove locals() usage from nova.virt.libvirt.volume * Importing correlation\_id middleware from oslo-incubator * Make a few places tolerant of sys\_meta being a dict * Remove locals() from scheduler filters * Rename requires files to standard names * Imported Translations from Transifex * translates empty remote\_ip\_prefix to valid cidr for nova * Reset task\_state when resetting vm\_state to ACTIVE * xenapi: Moving tests under tests/virt/xenapi/ * xenapi: Disable VDI size check when root\_gb=0 * Remove ImageTooLarge exception * Move ImageTooLarge check to Compute API * Share checks between create and rebuild * Remove path\_exists from NFS/GlusterFS drivers * Removed session from fixed\_ip\_\*() functions * Catch InstanceNotFound in instance\_actions GET * Using unicode() to handle image's properties * Adds live migration support to cells API * Raise AgentBuildNotFound on updating/destroying deleted object * Add missing tests for nova.db.api.agent\_build\_\* methods * Don't update API cell on get\_nwinfo * Optimize SecurityGroupsOutputController by len(servers) * get\_instance\_security\_groups() fails if no name on security group * libvirt: Moving tests under tests/virt/libvirt * Make it easier to add namespaced rpc APIs * baremetal: Move tests under tests/virt/baremetal * Disallow resize if image not available * powervm: Move tests under tests/virt/powervm * Sync RPC serializer changes from Oslo * Fix missing argument to logging warning call * set ERROR state when scheduler hits max attempts * Sync latest RPC changes from oslo * Add notification for live migration * Add requests requirement capped <1.2.1 * Adding tests for rebuild image checks * Add ImageNotActive check for instance rebuild * Fix error in instance\_get\_all\_by\_filters() use of soft\_deleted filter * Fix resize when instance has no image * Fixes encoding issues for nova api req body * Update run\_tests.sh to run flake8 too * Added validation for networks parameter value * Added attribute 'ip' to server search options * Make nova-api use servicegroup.API.service\_is\_up() * Add memorycache import into the oslo config * Fix require\_context() decorators * Imported Translations from Transifex * Remove locals() from nova/cells/\* * Update mailmap * Strip exec\_dirs prefix from rootwrap filters * Clean up test\_api\_samples a bit * Remove unnecessary parens in test\_volumes * Use strict=True instead of \`is\_valid\_boolstr\` * Editable default quota support * Remove usage of locals() for formatting from nova.api.\* * Switch to flake8+hacking * Fix flake8 errors in anticipation of flake8 * Don't update DB records for unchanged stats * baremetal: drop 'prov\_mac\_address' column * The vm\_state should not be modified until the task is complete * Return Customer's Quota Usage through Admin API * Use prettyxml output * Remove locals() from messages in virt/disk/api.py * 'm1.tiny' now has root\_gb=1 * Cast \`size\` to int before comparison * Don't raise unnecessary stack traces in EC2 API * Mox should cleanup before stubs * 
Reverse compare arguments in filters tests * Don't inject settings for dynamic network * Add ca cert file support to cinder client requests * libvirt: Catch VIR\_ERR\_NO\_DOMAIN in list\_instances * Revert "Include list of attached volumes with instance info" * Sync rpc from oslo * Remove openstack.common.version * Fix for missing multipath device name * Add missing tests for db.fixed\_ip\_\*(). functions * xenapi: ensure vdi is not too big when resizing down * Cells: Don't allow active -> build * Fix whitespace issue in indent * Pass the proper admin context to update\_dhcp * Fix quantum security group driver to accept none for from/to\_port * Reverse path SNAT for DNAT floating-ip * Use Oslo's \`bool\_from\_string\` * Handle IPMI transient failures better * Improve unit tests for DB archiving * Remove "#!/usr/bin/env python" from .py files under nova/cmd * Add missing unique constraint to KeyPair model * Refactored tests for db.key\_pair\_\*() functions * Refactor nova.volume.cinder.API to reduce roundtrips with Cinder * Fix response from snapshot create stub * Hide lock\_prefix argument using synchronized\_with\_prefix() * Cleanups for create-flavor * Cleanup create flavor tests * Imported Translations from Transifex * Test for remote directory creation before shutting down instance * Fix run\_tests.sh usage of tools/colorizer.py * Move get\_table() from test\_migrations to sqlalchemy.utils * Convert Nova to use Oslo service infrastructure * Show the cause of virt driver error * Detach volume fails when using multipath iscsi * API extensions framework for v3 API * Sync service and threadgroup modules from oslo * Fix header issue for baremetal\_deploy\_helper.py * Extract getting instance's AZ into a helper module * Allow different paths for deploy-helper helpers * Show exception details for failed deploys * Imported Translations from Transifex * Check QCOW2 image size during root disk creation * Adds useful debug logging to filter\_scheduler * fix non reporting of failures with floating IP assignment * Improve message and logging for corrupt VHD footers * Cleanup for test\_create\_server\_with\_deleted\_image * Check cached SSH connection in PowerVM driver * Allow a floating IP to be associated to a specific fixed IP * Record smoketest dependency on gFlags * Make resize/migrated shared storage aware * Imported Translations from Transifex * Add pointer to compute driver matrix wiki page * xenapi: cleanup vdi when disk too big exception raised * Update rootwrap with code from oslo * Fixes typo in server-evacuate-req.xml * Fix variable referenced before assginment in vmwareapi code * Remove invalid block\_device\_mapping volume\_size of '' * Architecture property updated in snapshot libvirt * Add sqlalchemy migration utils.create\_shadow\_table method * Add sqlalchemy migration utils.check\_shadow\_table method * Change type of cells.deleted from boolean to integer * Pass None to image if booted from volume in live migration * Raise InstanceInvalidState for double hard reboot * Removes duplicate assertEqual * Remove insecure default for signing\_dir option * Removes unnecessary check for admin context in evacuate * Fix zookeeper import and tests * Make sure that hypervisor nodename is set correctly in FakeDriver * Optimize db.instance\_floating\_address\_get\_all method * Session cleanup for db.floating\_ip\_\* methods * Optimize instance queries in compute manager * Remove duplicate gettext.install() calls * Include list of attached volumes with instance info * Catch volume create 
exception * Fixes KeyError bug with network api associate * Add unitests for VMware vif, and fix code logical error * Fix format error in claims * Fixes mock calls in Hyper-V test method * Adds instance root disk size checks during resize * Rename nova.compute.instance\_types to flavors * Convert to using newly imported processutils * Import new additions to oslo's processutils * Imported Translations from Transifex * Enable live block migration when using iSCSI volumes * Nova evacuate failed when VM is in SHUTOFF status * Transition from openstack.common.setup to pbr * Remove random print statements * Remove security\_group\_handler * Add cpuset attr to vcpu conf in libvirt xml * Imported Translations from Transifex * Remove referances to LegacyFormatter in example logging.conf * libvirt: ignore NOSTATE in resume\_state\_on\_host\_boot() method * Sync oslo-incubator print statement changes * Fix stub\_instance() to include missing attributes * Add an index to compute\_node\_stats * Convert to using oslo's execute() method * Import latest log module from oslo * Being more defensive around the use\_ipv6 config option * Update hypervisor\_hostname after live migration * Make nova-network support requested nic ordering * nova coverage creates lots of empty folders * fix broken WSDL logic * Remove race condition (in FloatingIps) * Add missing tests for db.floating\_ip\_\* methods * Deprecate show\_host\_resources() in scheduler manager * Add force\_nodes to filter properties * Adds --addn-hosts to the dnsmasq arg list * Update our import of oslo's processutils * Update oslo-incubator import * Delete InstanceSystemMetadata on instance deletion * vmwareapi: Add supported\_instances to host state * xenapi: Always set other\_config for VDIs * Copy the RHEL6 eventlet workaround from Oslo * Move db.fixed\_ip\_\* tests from DbApiTestCase to FixedIpTestCase * Checks if volume can be attached * Call format\_message on InstanceTypeNotFound exception * xenapi: Don't swallow missing SR exception * Prevent rescuing a VM with a partially mounted volume * Fix key error when create lpar instance failed * Reset migrating task state for MigrationError exceptions * Volume IO usage gets reset to 0 after a reboot / crash * Sync small and safe changes from oslo * Sync jsonutils from oslo * Fix EC2 instance bdm response * Rename \_check\_image\_size to \_get\_and\_check\_image\_metadata * Convert the cache key from unicode to a string * Catch glance image create exceptions * Update to using oslo periodic tasks implementation * Import oslo periodic tasks support * import and install gettext in vm\_vdi\_cleaner.py * Fix baremetal get\_available\_nodes * Fix attach when running as root without sysfsutils * Make \_build\_network\_info\_model testable * Fix building quantumapi network model with network list * Add the availability\_zone to the volume.usage notifications * Add delete\_net\_interface function * Performance optimization for contrib.flavorextraspecs * Small whitespace tweak * Kill off usage of locals() in the filter\_scheduler * Remove local variable only used in logging * Create instance with deleting image * Refactor work with db.instance\_type\_\* methods * Fix flakey TestS3ImageService bug * Add missing snapshot image properties for VMware Hyper * Imported Translations from Transifex * Fix VMware Hyper console url parameter error * Update NovaBase model per changes on oslo.db.sqlalchemy * Send a instance create error notification * Refactor \_run\_instance() to unify control flow * set bdm['volume\_id'] 
to None rather than delete it * Destroy conntrack table on source host during migration * Adds tests for isolated\_hosts\_filter * Fixes race condition of deleting floating ip * Imported Translations from Transifex * Wrong proxy port in nova.conf for Spice proxy * Fix missing kernel output via VNC/Spice on boot * Fix bug in db.instance\_type\_destroy * Move get\_backdoor\_port to base rpc API * Move db.instance\_type\_extra\_specs\_\* to db.instance\_type\_\* methods * Add missing test for db.instance\_type\_destroy method * Fix powervm driver resize instance error * Support FlatDHCP network for VMware Hyper * Imported Translations from Transifex * Deprecate conductor ping method * Add an rpc API common to all services * If rescue fails don't error the instance * Make os.services.update work with cells * Fix fixed\_ip\_count\_by\_project in DB API * Add unit tests for /db/api.py#fixed\_ip\_\* * Add option to exclude joins from instance\_get\_by\_uuid * Remove unnecessary method argument * Improve Python 3.x compatibility * ec2 CreateVolumes/DescribeVolumes status mapping * Can now reboot rescued instances in xenapi * Allows xenapi 'lookup' to look for rescue mode VMs * Adds tests to xenapi.vm\_utils's 'lookup' method * Imported Translations from Transifex * Stop vm\_state reset on reboot of rescued vm * Fix hyperv copy file error logged incorrect * Fix ec2 CreateVolumes/DescribeVolumes status * Imported Translations from Transifex * Don't swallow PolicyNotAuthorized for resize/reboot actions * Remove unused exception and variable from scheduler * Remove unnecessary full resource audits at the end of resizes * Update the log module from oslo-incubator * Translate NoMoreFloatingIps exception * Imported Translations from Transifex * Fix up regression tester * Delete extra space to api/volumes message * Map internal S3 image state to EC2 API values * removing unused variable from a test * Translate cinder NotFound exception * hypervisor tests more accurate db * Added comments to quantum api client * Cleanup and test volume usage on volume detach * Import and convert to oslo loopingcall * Remove orphaned db method instance\_test\_and\_set * baremetal: VirtualPowerDriver uses mac addresses in bm\_interfaces * Sync rpc from oslo-incubator * Correct disk's over committed size computing error * Imported Translations from Transifex * Allow listing fixed\_ips for a given compute host * Imported Translations from Transifex * baremetal: Change input for sfdisk * Make sure confirm\_resize finishes before setting vm\_state to ACTIVE * Completes the power\_state mapping from compute driver and manager * Make compute/manager use conductor for unrescue() * Add an extension to show the mac address of a ip in server(s) * Cleans up orphan compute\_nodes not cleaned up by compute manager * Allow for the power state interval to be configured * Imported Translations from Transifex * Fix bug in os-availability-zone extension * Remove unnecessary db call in scheduler driver live-migration code * baremetal: Change node api related to prov\_mac\_address * Don't join metadata twice in instance\_get\_all() * Imported Translations from Transifex * Don't hide stacktraces for unexpected errors in rescue * Fix issues with check\_instance\_shared\_storage * Remove "undefined name" pyflake errors * Optimize some of compute/manager's periodic tasks' DB queries * Optimize some of the periodic task database queries in n-cpu * Change DB API instance functions for selective metadata fetching * Replace metadata joins with another 
query * xenapi: Make \_connect\_volume exc handler eventlet safe * Fix typo: libvir => libvirt * Remove multi scheduler * Remove unnecessary LOG initialisation * Remove unnecessary parens * Simplify random host choice * Add NOVA\_LOCALEDIR env variable * Imported Translations from Transifex * Clarify volume related exception message * Cleanup trailing whitespace in api samples * Add tenant/ user id to volume usage notifications * Security groups may be unavailable * Encode consoleauth token in utf-8 to make it a str * Catch NoValidHost exception during live-migration * Evacuated instance disk not deleted * Fix a bad tearDown method in test\_quantumv2.py * Import eventlet in \_\_init\_\_.py * Raise correct exception for duplicate networks * Add an extension to show the network id of a virtual interface * Fix error message in pre\_live\_migration * Add reset function to nova coverage * Imported Translations from Transifex * nova-consoleauth start failed by consoleauth\_manager option missing * set timeout for paramiko ssh connection * Define LOG globally in baremetal\_deploy\_helper * Allow describe\_instances to use tags for searches * Correct network uuid field for os-network extension * Only call getLogger after configuring logging * Add SecurityGroups API sample tests * Cannot boot vm if quantum plugin does not support L3 api * Add missing tests for instance\_type\_extra\_specs\_\* methods * Remove race condition (in InstanceTypeProjects) * Deprecate old vif drivers * Optimize resource tracker queries for instances * baremetal: Integrate provisioning and non-provisioning interfaces * Move console scripts to entrypoints * Remove deprecated Grizzly code * Fallback to conductor if types are not stashed * Imported Translations from Transifex * Resolve conflicting mac address in resize * Simplify and correct the bm partition sizes * Fix legacy\_net\_info guard * Fix SecurityGroups XML sample tests * Modify \_verify\_response to validate response codes * Fix a typo in attach\_interface error path * After migrate, catch and remove deleted instances * Grab instance for migration before updating usage * Explain why the give methods are whitelisted * libvirt: Get driver type from base image type * Guard against content being None * Limit the checks for block device becoming available * Fix \_error\_out\_instance exception handler * Raise rather than generating millions of IPs * Add unit tests for nova.volume.cinder.API * Update latest oslo.setup * baremetal: Drop unused columns in bm\_nodes * Remove print statements * Imported Translations from Transifex * Fix the python version comparison * Remove gettext.install() from nova/\_\_init\_\_.py * Sync latest gettextutils from oslo-incubator * Return 409 on creating/importing same name keypair * Delete tests.baremetal.util.new\_bm\_deployment() * Return proper error message when network conflicts * Better iptables DROP removal * Query quantum once for instance's security groups * quantum security group driver nova list shows same group * Sync in matchmaker and qpid Conf changes from oslo * improve handling of an empty dnsmasq --domain * Fix automatic confirmation of resizes for no-db-compute * 'injected\_files' should be base 64 encoded * Add missing unit tests for FlavorActionController * Set default fixed\_ip quota to unlimited * Accepts aws-sdk-java timestamp format * Imported Translations from Transifex * get context from req rather than getting a new admin context * Use Cluster reference to reduce SDK calls * Fix missing punctuation in docstring 
* xenapi: fix support for iso boot * Ensure only pickle-able objects live in metadata * sync oslo db/sqlalchemy module * Convert host value from unicode to a string * always quote dhcp-domain, otherwise dnsmasq can fail to start * Fix typo in the XML serialization os-services API * Add CRUD methods for tags to the EC2 API * Fix migrating instance to the same host * Rework time handling in periodic tasks * Show quota 'in\_use' and 'reserved' info * Imported Translations from Transifex * Fix quantum nic allocation when only portid is specified * Make tenant\_usage fall back to instance\_type\_id * Use format\_message on exceptions instead of str() * Add a format\_message method to the Exceptions * List AZs fails if there are disabled services * Switch nova-baremetal-deploy-helper to use sfdisk * Bring back colorizer again with error results * Imported Translations from Transifex * Adds Tilera back-end for baremetal * Always store old instance\_type during a migration * Make more readable error msg on quantum client authentication failure * Adding netmask to dnsmasq argument --dhcp-range * Add missing tests for db.instance\_type\_access\_\* methods * Remove race condition (in InstanceTypes) * Add missing tests for db.instance\_type\_\* methods * Imported Translations from Transifex * set up FakeLogger for root logger * Fix /servers/os-security-groups using quantum * NoneType exception thrown if driver live-migration check returns None * Add missing info to docstring * Include Co-authored-by entries in AUTHORS * Do not test foreign keys with SQLite version < 3.7 * Avoid using whitespace in test\_safe\_parse\_xml * xenapi: Retrieve VM uuid from xenstore * Reformat openstack-common.conf * Imported Translations from Transifex * Fixes Nova API /os-hosts missing element "zone" * disable colorizer as it swallows fails * Make iptables drop action configurable * Fixes argument order of quantumv2.api.get\_instance\_nw\_info * Make \_downsize\_quota\_delta() use stashed instance types * py2.6 doesn't support TextTestRunner resultclass * Reset ec2 image cache between S3 tests * Sync everything from oslo-incubator * Sync rpc from oslo-incubator * Don't log traceback on rpc timeout * Adds return-type in two functions' docstrings * Remove unnecessary checks in api.py * translate cinder BadRequest exception * Initialize compute manager before loading driver * Add a comment to placeholder migrations * xenapi: fix console for rescued instance * Fixes passing arbitrary conductor\_api argument * Make nova.virt.fake.FakeDriver useable in integration testing * Remove unnecessary DB call to find EC2 AZs * Remove outdated try except block in ec2 code * nova-manage vm list fails looking 'instance\_type' * Update instance network info cache to include vif\_type * Bring back sexy colorized test results * Don't actually connect to libvirtd in unit tests * Add placeholder migrations to allow backports * Change arguments to volume\_detach() * Change type of ssh\_port option from Str to Int * xenapi: rpmbuild fixes * Set version to 2013.2

2013.1.rc1
----------

* Fix Hyper V instance conflicts * Add caching for ec2 mapping ids * Imported Translations from Transifex * fix add-fixed-ip with quantum * Update the network info when using quantum * List InstanceNotFound as a client exception * Refactor db.service\_destroy and db.service\_update methods * Fix console support with cells * Fix missing argument to QemuImageInfo * Add missing tests for db.virtual\_interface\_\* methods * Fix multiple fixed-ips with quantum * Add
missing tests for db.service\_\* methods * Ensure that headers are returned as strings, not integers * Enable tox use of site-packages for libvirt * Require netaddr>=0.7.6 to avoid UnboundLocalError * Pass project id in quantum driver secgroup list * Fixes PowerVM spawn failed as missing attr supported\_instances * Fix RequestContext crashes w/ no service catalog * Prevent volume-attach/detach from instances in rescue state * Fix XenAPI performance issue * xenapi: Adding logging for migration plugin * libvirt: Tolerate existing vm(s) with cdrom(s) * Remove dead code * Remove unused virt.disk.api methods bind/unbind * Imported Translations from Transifex * Revert "Remove the usage of instance['extra\_specs' * Add standard methods to the Limits API * Store project\_id for instance actions * rstrip() strips characters, not strings * Fix use of libvirt\_disk\_prefix * Revert 1154253 causes XenServer image compat issue * Reset migrating task state for more Exceptions * Fix db archiving bug with foreign key constraints * Imported Translations from Transifex * Update migration 153 for efficiency * Don't include traceback when wrapping exceptions * Fix exception message in Networks API extension * Make conductor's quota methods pass project\_id properly * Fix: improve API error responses from os-hosts extension * Add missing API doc for networks-post-req * Make os-services API extensions consistent * Fix system\_metadata "None" and created\_at values * Add the serial to connection info for boot volumes * Do not accept invalid keys in quota-update * Add quotas for fixed ips * Makes safe xml data calls raise 400 http error instead of 500 * Fixes an iSCSI connector issue in the Hyper-V driver * Check keypair destroy result operation * Resize/Migrate refactoring fixes and test cases * Fixes Hyper-V live migration with attached volumes * Force nova to use keystone v2.0 for auth\_token * Fix issues with cells and resize * Fix copyright - from LLC to Foundation * Don't log traceback on expected console error * Generalize console error handling during build * Remove sqlalchemy calling back to DB API * Make ssh key injection work with xenapi agent * Fix use of potentially-stale instance\_type in tenant\_usage * Drop gzip flag from tar command for OVF archives * Fix reconnecting to libvirt * List ComputeHostNotFound as a client exception * Fix: Nova aggregate API throws an uncaught exception on invalid host * Do cleaning up resource before rescheduling * nova-manage: remove unused import * Read instance resource quota info from "quota" namespace * LibvirtGenericVIFDriver update for stp * Switch to final 1.1.0 oslo.config release * Skip deleted fixed ip address for os-fixed-ips extension * Return error details to users in "dns-create-private-domain" * Lazy load CONF.quota\_driver * Fix cells instance deletion * Don't load system\_metadata when it isn't joined * List ConsoleTypeInvalid as a client exception * Make run\_instance() bail quietly if instance has been deleted * Delete instance metadata when delete VM * Virtual Power Driver list running vms quoting error * Refactor work with session in db.block\_device\_mapping\_\* methods * Add missing tests for db.block\_device\_mapping\_\* methods * websockify 0.4 is busted * Sync rpc from oslo-incubator * Fix: nova-manage throws uncaught exception on invalid host/service * Fix more OS-DCF:diskConfig XML handling * Fix: Managers that incorrectly derive from SchedulerDependentManager * Fix nova-manage --version * Pin SQLAlchemy to 0.7.x * Deprecate 
CONF.fixed\_range, do dynamic setup * Remove the usage of instance['extra\_specs'] * Fix behaviour of split\_cell\_and\_item * Fix quota issues with instance deletes * Fixes instance task\_state being left as migrating * Force resource updates to update updated\_at * Prepare services index method for use with cells * Handle vcpu counting failures gracefully * Return XML message with objectserver 404 * xenapi: Fix reboot with hung volumes * Rename LLC to Foundation * Pass migration\_ref when when auto-confirming * Revert changing to FQDN for hostnames * Add numerous fixes to test\_api\_samples * Fixes instance action exception in "evacuate" API * Remove instance['instance\_type'] relationship from db api * Refactor db tests to ensure that notdb driver is used * Rewrap two lines * Server create will only process "networks" if os-networks is loaded * Fixes nbd device can't be released error * Correct exception args in vfs/guestfs * Imported Translations from Transifex * Prevent nova services' coverage data from combining into nova-api's * Check if flavor id is an empty string * Simple syntax fix up * Fixes volume attach on Hyper-V with IPv6 * Add ability to control max utilization of a cell * Extended server attributes can show wrong hypervisor\_hostname * Imported Translations from Transifex * Remove uses of instance['instance\_type'] from nova/notifications * Libvirt driver create images even without meta * Prevent rescue for volume-backed instances * Fix OS-DCF:diskconfig XML handling * Imported Translations from Transifex * Compile BigInteger to INTEGER for sqlite * Add conductor to nova-all * Make bm model's deleted column match database * Update to Quantum Client 2.2.0 * Remove uses of instance['instance\_type'] from nova/scheduler * Remove uses of instance['instance\_type'] from nova/api * Remove uses of instance['instance\_type'] from nova/network * Remove uses of instance['instance\_type'] from nova/compute * Correct substring matching of baremetal VPD node names * Fix Wrong syntax for set:tag in dnsmasq startup option * Fix instance evacuate with shared storage * nova-manage: remove redundant 'dest' args * clear up method parameters for \_modify\_rules * Check CONF values \*after\* command line args are parsed * Make nova-manage db archive\_deleted\_rows more explicit * Fix for delete error in Hyper-V - missing CONF imports * add .idea folder to .gitignore pycharm creates this folder * Make 'os-hosts/node1' case sensitivity defer to DB * Fix access\_ip\_\* race * Add MultipleCreate template and fix conflict with other templates * Update tox.ini to support RHEL 6.x * Fix instance type cleanup when doing a same-id migration * Tiny typo * Remove unnecessary setUp() and tearDown() methods * Remove duplicate API logging * Remove uses of instance['instance\_type'] from libvirt driver * Remove uses of instance['instance\_type'] from powervm driver * Remove uses of instance['instance\_type'] from xenapi driver * Fixed image filter support for vmware * Switch to oslo.config * Fix instance\_system\_metadata deleted columns * Remove parameters containing passwords from Notifications * Add missing action\_start if deleting resized inst * Fix issues with re-raising exceptions * Don't traceback in the API on invalid keypair * delete deleted image 500 bug * Moves Hyper-V options to the hyperv section * Fix 'to integer' conversion of max and min count values * Standarize ip validation along the code * Adjusts reclaim instance interval of deferred delete tests * Fix Network object encoding issue 
when using qpid * Rename VMWare to VMware * Put options in a list * Bump instance updated\_at on network change * Catching InstanceNotFound exception during reboot instance * Imported Translations from Transifex * Remove completed FIXME * quantum security\_group driver queries db regression * Prevent reboot of rescued instance * Baremetal deploy helper sets ODIRECT * Read baremetal images from extra\_specs namespace * Rename source\_(group\_id/ip\_prefix) to remote\_(group\_id/ip\_prefix) * docs should indicate proper git commit limit * Imporove db.sqlalchemy.api.\_validate\_unique\_server\_name method * Remove unused db calls from nova.db.api * Fixes oslo-config update for deprecated\_group * fix postgresql drop race * Compute manager should remove dead resources * Fix an error in compute api snapshot\_volume\_backed bdm code * Fixes disk size issue during image boot on Hyper-V * Updating powervm driver snapshot with update\_task\_state flow * Imported Translations from Transifex * Add ssh port and key based auth to VPD * Make ComputeManager \_running\_deleted\_instances query by uuid * Refactor compute manager \_get\_instances\_by\_driver * Fix target host variable from being overwritten * Imported Translations from Transifex * Fixes live migration with attached volumes issue * Don't LOG.error on max\_depth (by default) * Set vm\_state to ERROR on net deallocate failure * validate security\_groups on server create * Fix IBM copyright strings * Implement rules\_exist method for quantum security group driver * Switch to using memorycache from oslo * Remove pylint errors for undefined GroupException members * Sync timeutils and memorycache from oslo * instance\_info\_cache\_update creates wrongly * Tone down logging while waiting for conductor * Add os-volumes extension to api samples * Regenerate nova.conf.sample * Fix ephemeral devices on LVM don't get mkfs'd * don't stack trace if long ints are passed to db * Pep8/pyflakes cleanup of deprecated\_api * Fix deprecated network api * Fixes the Hyper-V driver's method signature * Imported Translations from Transifex * Fixes a Hyper-V live migration issue * Don't use instance['instance\_type'] for scheduler filters in migration * Fallback coverage backdoor telnet connection to lo * Add instance\_type\_get() to virt api * Make compute manager revert crashed migrations on init\_host() * Adds API Sample tests for Volume Attachments * Ensure that FORWARD rule also supports DHCP * Remove duplicate options(joinedload) from aggregates db code * Shrink size of aggregate\_metadata\_get\_by\_host sql query * Remove old commented out code in sqlalchemy models * Return proper error messages while disassociating floating IP * Don't blindly skip first migration * Imported Translations from Transifex * Suppress retries on UnexpectedTaskStateErrors * Fix \`with\_data\` handling in test-migrations * BM Migration 004: Actually drop column * Actually run baremetal migration tests * Adds retry on upload\_vhd for xapi glance plugin * ec2 \_format\_security\_group() accesses db when using quantum\_driver * Remove un-needed methods * Prevent hacking.py from crashing on unexpected import exception * Bump python-quantumclient version to 2.1.2 * Improve output msgs for \_compare\_result * Add a 'hw\_' namespace to glance hardware config properties * Makes sure required powervm config options are set * Update OpenStack LLC to Foundation * Improve hackings docstring detection * Make sure no duplicate forward rules can exist * Use min\_ram of original image for snapshot, 
even with VHD * Revert IP Address column length to 39 * Additional tests for safe parsing with minidom * Make allocate\_for\_instance() return only info about ports allocated * Fix crash in quantumapi if no network or port id is specified * Unpin PasteDeploy dependency version * Unpin routes dependency version * Unpin suds dependency version * Unpin Cheetah dependency version * Allow zk driver be imported without zookeeper * Retry floating\_ip\_fixed\_ip\_associate on deadlock * Fix hacking.py to handle 'cannot import x' * Add missing import to fakelibvirt * Migration 148: Fix drop table dependency order * Minor code optimization in \_compute\_topic * Fix hacking.py to handle parenthesise in from import as * Fix redefinition of function test\_get\_host\_uptime * Migration 147: Prevent duplicate aggregate\_hosts * Rework instance actions to work with cells * Fix incorrect zookeeper group name * Sync nova with oslo DB exception cleanup * Fix broken baremetal migration tests * if reset fails, display the command that failed * Remove unused nova.db.api:instance\_get\_all\_by\_reservation * Add API Sample tests for Snapshots extension * Run libguestfs API calls in a thread pool * Change nova-dhcpbridge FLAGFILE to a list of files * Imported Translations from Transifex * Readd run\_tests.sh --debug option * Clean unused kernels and ramdisks from image cache * Imported Translations from Transifex * Ensure macs can be serialized * Remove Print Statement * Prevent default security group deletion * libvirt: lxml behavior breaks version check * Add missing import\_opt for flat\_injected * Add processutils from oslo * Updates to OSAPI sizelimit middleware * Remove compat cfg wrapper * Fix exception handling in baremetal API * Make guestfs use same libvirt URI as Nova * Make LibvirtDriver.uri() a staticmethod * Enable VM DHCP request to reach DHCP agent * Don't set filter name if we use Noop driver * Removes unnecessary qemu-img dependency on powervm driver * Migration 146: Execute delete call * Add \`post\_downgrade\` hook for migration tests * Fix migration snake-walk * BM Migrations 2 & 3: Fix drop\_column statements * Migration 144: Fix drop index statement * Remove function redefinitions * Migration 135: Fix drop\_column statement * Add missing ec2 security group quantum mixin * Fix baremetal migration skipping * Add module prefix to exception types * Flush tokens on instance delete * Fix launching libvirt instances with swap * Spelling: compatable=>compatible * import base\_dir\_name config option into vmwareapi * Fix ComputeAPI.get\_host\_uptime * Move DB thread pooling to DB API * Use a fake coverage module instead of real one * Standardize the coverage initializations * Sync eventlet\_backdoor from oslo-incubator * Sync rpc from oslo-incubator * Fix message envelope keys * Remove race condition (in Networks) * Move some context checking code from sqlalchemy * Baremetal driver returns accurate list of instance * Identify baremetal nodes by UUID * Improve performance of baremetal list\_instances * Better error handling in baremetal spawn & destroy * Wait for baremetal deploy inside driver.spawn * cfg should be imported from oslo.config * Add Nova quantum security group proxy * Add a volume driver in Nova for Scality SOFS * Make nova security groups more pluggable * libvirt: fix volume walk of /dev/disk/by-path * Add better status to baremetal deployments * Fix handling of source\_groups with no-db-compute * Improve I/O performance for periodic tasks * Allow exit code 21 for 'iscsiadm -m session' 
* Removed duplicate spawn code in PowerVM driver * Add API Sample tests for Hypervisors extension * Log lifecycle events to log INFO (not ERROR) * Sync rpc from oslo-incubator * sync oslo log updates * Adding ability to specify the libvirt cache mode for disk devices * Sync latest install\_venv\_common.py * Make add-fixed-ip update nwfilter wth in libvirt * Refactor nwfilter parameters * ensure we run db tests in CI * More gracefully handle TimeoutException in test * Multi-tenancy isolation with aggregates * Fix pep8 issues with test\_manager.py * Fix broken logging imports * Fix hacking test to handle namespace packages * Use oslo-config-2013.1b4 * support preallocated VM images * Fix instance directory path for lxc * Add snapshot methods to fakes.py * PowerVMDiskAdapter detach/cleanup refactoring * Make ComputeTestCase.test\_state\_revert faster * Add an extension to show image size * libvirt: Use uuid for instance directory name * Support running periodic tasks immediately at startup * Fix XMLMatcher error reporting * Fix XML config tests for disk/net/cpu tuning * Add support for network adapter hotplug * Handle lifecycle events in the compute manager * Add support for lifecycle events in the libvirt driver * Enhance IPAdresses migration tests * Add basic infrastructure for compute driver async events * Fix key check in instance actions formatter * Add a safe\_minidom\_parse\_string function * Documentation cleanups for nova devref * Fix leak of loop/nbd devices in injection using localfs * Add support for instance CPU consumption control * Add support for instance disk IO control * Retry bw\_usage\_update() on innodb Deadlock * Change CIDR column size on migration version 149 * Provide way to pass rxtx factor to quantum * Fibre channel block storage support (nova changes) * Default SG rules for the Security Group "Default" * create new cidr type for data storage * Ensure rpc result is primitive types * Change all instances of the non-word "inteface" to "interface" * Remove unused nova.db.api:network\_get\_by\_bridge * Fix a typo in two comments. 
networksa -> networks * Live migration with an auto selection of dest * Remove unused nova.db.api:network\_get\_by\_instance * Fix network list and show with quantum * Remove unused db calls from nova.db.sqlalchemy.api * Remove unused db calls * Small spelling fix in sqlalchemy utils * Fix \_get\_instance\_volume\_block\_device\_info call parameter * Do not use abbreviated config group names (zookeeper) * Prevent the unexpected with nova-manage network modify * Fix hacking tests on osx * Enable multipath for libvirt iSCSI Volume Driver * Add select\_hosts to scheduler manager rpc * Add and check data functions for test\_migrations 141 * fix incorrectly defined ints as strs * Remove race condition (in TaskLog) * Add generic dropper for duplicate rows * Imported Translations from Transifex * Fix typo/bug in generic UC dropper * remove intermediate libvirt downloaded images * Add support for instance vif traffic control * Add libvirt XML schema support for resource tuning parameters * Fix instance can not be deleted after soft reboot * Correct spelling of quantum * Make pep8 tests run inside virtualenv * Remove tests for non-existing SimpleScheduler * libvirt: Fix LXC container creation * Rename 'connection' to 'driver' in libvirt HostState * Ensure there is only one instance of LibvirtDriver * Stop unit test for prompting for a sudo password * clean up missing whitespace after ':' * Push 'Error' result from event to instance action * Speedup the revert\_state test * Add image to request\_spec during resize * Ensure start time is earlier than end time in simple\_tenant\_usage * Split out body of loop in \_sync\_power\_states in compute manager * Remove dead variable assignment in compute manager * Assign unique names with os-multiple-create * Nova network needs to take care of existing alias * Delete baremetal interfaces when their parent node is deleted * Harmonize PEP8 checking between tox and run\_tests.sh * VirtualPowerDriver catches ProcessExecutionError * [xenapi] Cooperatively yield during sparse copy * Allow archiving deleted rows to shadow tables, for performance * Adds API Sample tests for FlavorAccess extension * Add an update option to run\_tests.sh * filter\_scheduler: Select from a subset of hosts * use nova-conductor for live-migration * Fix script argument parsing * Add option to allow cross AZ attach configurable * relocatable roots doesn't handle testr args/opts * Remove a log message in test code * add config drive to api\_samples * Don't modify injected\_files inside PXE driver * Synchronize code from oslo * Canonizes IPv6 before insert it into the db * Only dhcp the first ip for each mac address * Use connection\_info on resize * Fix add-fixed-ip and remove-fixed-ip * API extension for accessing instance\_actions * Use joinedload for system\_metadata in db * Add migration with data test for migration 151 * Correct misspelling in PowerVM comment * Add GlusterFS libvirt volume connector * Module import style checking changes * Stub additional FloatingIP methods in FlatManager * Resize/Migrate functions for PowerVM driver * Added a service heartbeat driver using Memcached * Use a more specific error reporting invalid disk hardware * Allow VIF model to be chosen per image * Check the length of flavor name in "flavor-create" * Add API sample tests to Services extension * VMWare driver to use current nova.network.model * Add "is not" test to hacking.py * Update tools/regression\_tester * Fix passing conductor to get\_instance\_nw\_info() * Imported Translations from Transifex * 
Make compute manager use conductor for stopping instances * Move allowvssprovider=false to vm-data field * Allow aggregate create to have None as the az * Forces flavorRef to be string in servers resize api * xenapi: Remove unecessary exception handling * Sync jsonutils from openstack-common * Simplify and optimize az server output extension * Add an extension to show the type of an ip * Ensure that only one IP address is allocated * Make the metadata paths use conductor * Fix nova-compute use of missing DBError * Adding support for AoE block storage SANs * Update docs about testing * Allow generic rules in context\_is\_admin rule in policy * Implements resize / cold migration on Hyper-V * test\_(dis)associate\_by\_non\_existing\_security\_group\_name missing stub * Make scheduler remove dead nodes from its cache * More conductor support for resizes * Allow fixed to float ping with external gateway * Add generic UC dropper * Remove locking declarator in ServiceGroup \_\_new\_\_() * Use ServiceGroup API to show node liveness * Refine PowerVM MAC address generation algorithm * Fixes a bug in attaching volumes on Hyper-V * Fix unconsumed column name warning in test\_migrations * Fix regression in non-admin simple\_usage:show * Ensure 'subunit2pyunit' is run in venv from run\_tests.sh * Fix inaccuracies in the development environment doc * preserve order of pre-existing iptables chains * Adds API Sample tests for FloatingIPDNS extension * Don't call 'vif.plug' twice during VM startup * Disallow setting /0 for network other than 0.0.0.0 * Fix spelling in comment * Imported Translations from Transifex * make vmwareapi driver pass quantum port-id to ESX * Add control-M to list of characters to strip out * Update to simplified common oslo version code * Libvirt: Implement snapshots for LVM-backed roots * Properly write non-raw LVM images on creation * Changes GA code for tracking cross-domain * Return dest\_check\_data as expected by the Scheduler * Simplify libvirt snapshot code path * fix VM power state to be NOSTATE when instance not found * Fix missing key error in libvirt.driver * Update jsonutils from oslo-incubator * Update nova/compute/api to handle instance as dict * Use joined version of db.api calls * l3.py,add\_floating\_ip: setup NAT before binding * Regenerate nova.conf.sample * Fixes a race condition on updating security group rules * Ensure that LB VIF drivers creates the bridge if necessary * Remove nova.db call from baremetal PXE driver * Support for scheduler hints for VM groups * Fixed FlavorAccess serializer * Add a virtual PowerDriver for Baremetal testing * Optimize rpc handling for allocate and deallocate * Move floating ip db access to calling side * Implement ZooKeeper driver for ServiceGroup API * Added the build directory to the tox.ini list pep8 ignores * support reloctable venv roots in testing framework * Change to support custom nw filters * Allow multiple dns servers when starting dnsmasq * Clean up extended server output samples * maint: remove unused imports from bin/nova-\* * xenapi: Cleanup detach\_volume code * Access DB as dict not as attributes part 5 * Introduce support for 802.1qbg and 802.1qbh to Nova VIF model * Adds \_(prerun|check)\_134 functions to test\_migrations * Extension for rebuild-for-ha * Support hypervisor supplied macs in nova-network * Recache or rebuild missing images on hard\_reboot * Cells: Add cells support to hypervisors extension * Cells: Add cells support to instance\_usage\_audit\_log api extension * Update modules from common 
required for rpc with lock detection * Fix lazy load 'system\_metadata' failed problem * Ban database access in nova-compute * Move security\_groups refreshes to conductor * Fix inject\_files for storing binary file * Add regression testing tool * Change forward\_bridge\_interface to MultiStrOpt * Imported Translations from Transifex * hypervisor-supplied-nics support in PowerVM * Default the last parameter (state) in task\_log\_get to None * Sync latest install\_venv\_common from oslo * Remove strcmp\_const\_time * Adds original copyright notice to refactored files * Update .coveragerc * Allow disk driver to be chosen per image * Refactor code for setting up libvirt disk mappings * Refactor instance usage notifications for compute manager * Flavor Extra Specs should require admin privileges * Remove unused methods * Return to skipping filters when using force\_hosts * Refactor server password metadata to avoid direct db usage * lxc: Clean up namespace mounts * Move libvirt volume driver tests to separate test case * Move libvirt NFS volume driver impl into volume.py * replace ssh-keygen -m with a python equivalent * Allow connecting to self-signed quantum endpoints * Sync latest db and importutils from oslo * Use oslo database code * Fix check instance host for instance action * Make get\_dev\_name\_for\_instance() use stashed instance\_type info * Added Postgres CI opportunistic test case * Remove remaining instance\_types query from compute/manager * Make cells\_api fetch stashed instance\_type info * Teach resource tracker about stashed instance types * Fix up instance types in sys meta for resizes * lxc: virDomainGetVcpus is not supported by driver * Fix incorrect device name being raised * VMware VC Compute Driver * Default value of monkey\_patch\_modules is broken * Adds evacuate method to compute.api * Fix import for install\_venv.py * allow disabling file injection completely * separate libvirt injection and configdrive config variables * Add API sample tests to os-network * Fix incorrect logs in network * Update HACKING.rst per recent changes * Allow for specifying nfs mount options * Add REST API to show availability\_zone of instance * Make NFS mount hashes consistent with Cinder * Parse testr output through subunit2pyunit * Imported Translations from Transifex * Optimize floating ip list to make one db query * Remove hardcoded topic strings in network manager * Reimplement is\_valid\_ipv4() * Tweakify is\_valid\_boolstr() * Fix update quota with invalid value * Make system\_metadata update in place * Mark password config options with secret * Record instance actions and events * Postgres does not like empty strings for type inet * Add 'not in' test to tools/hacking.py * Split floating ip functionality into new file * Optimize network calls by moving them to api * Fixes unhandled exception in detach\_volume * Fixes FloatingIPDNS extension 'show' method * import tools/flakes from oslo * Use conductor for instance\_info\_cache\_update * Quantum metadata handler now uses X-Forwarded-For * instance.update notifications don't always identify the service * Handle compute node not available for live migration * Fixes 'not in' operator usage * Fixes "is not" usage * Make scheduler modules pass conductor to add\_instance\_fault * Condense multiple authorizers into a single one * Extend extension\_authorizer to enable cleaner code * Remove unnecessary deserializer test * Added sample tests to FlavorExtraSpecs API * Fix rebuild with volumes attached * DRYing up volume\_in\_mapping code * 
Use \_prep\_block\_device in rebuild * xenapi: Ax unecessary \`block\_device\_info\` params * Code cleanup for rebuild block device mapping * Fix eventlet/mysql db pooling code * Add support for compressing qcow2 snapshots * Remove deprecation notice in LibvirtBridgeDriver * Fix boto capabilities check * Add api samples to fping extension * Fix SQL Error with fixed ips under devstack/postgresql * Pass testropts in to setup.py in run\_tests.sh * Nova Hyper-V driver refactoring * Fixed grammar problems and typos in doc strings * Add option to control where bridges forward * xenapi: Add support for different image upload drivers * Removed print stmts in test cases * Fix get and update in FlavorExtraSpecs * Libvirt: Add support for live snapshots * Move task\_log functions to conductor * erase outdated comment * Keep flavor information in system\_metadata * Add instance\_fault\_create() to conductor * Adds API Sample tests for os-instance\_usage\_audit\_log extension * validate specified volumes to boot from at the API layer * Refactor libvirt volume driver classes to reduce duplication * Change ''' to """ in bin/nova-{novncproxy,spicehtml5proxy} * Pass parameter 'filter' back to model layer * Fix boot with image not active * refactored data upgrade tests in test\_migrations * Fix authorized\_keys file permissions * Finer access control in os-volume\_attachments * Stop including full service catalog in each RPC msg * Make sure there are no unused import * Fix missing wrap\_db\_error for Session.execute() method * Use install\_venv\_common.py from oslo * Add Region name to quantum client * Removes retry of set\_admin\_password * fix nova-baremetal-manage version printing * Refactoring/cleanup of compute and db apis * Fix an error in affinity filters * Fix a typo of log message in \_poll\_unconfirmed\_resizes * Allow users to specify a tmp location via config * Avoid hard dependency on python-coverage * iptables-restore error when table not loaded * Don't warn up front about libvirt loading issues in NWFilterFirewall * Relax API restrictions around the use of reboot * Strip out Traceback from HTTP response * VMware Compute Driver OVF Support * VMware Compute Driver Host Ops * VMware Compute Driver Networking * Move policy checks to calling side of rpc * Add api-samples to multinic extension * Add system\_metadata to db.instance\_get\_active\_by\_window\_joined * Enable N302: Import modules only * clean up api\_samples documentation * Fix bad imports that cause nova-novncproxy to fail * populate dnsmasq lease db with valid leases * Support optional 4 arg for nova-dhcpbridge * Add debug log when call out to glance * Increase maximum URI size for EC2 API to 16k * VMware Compute Driver Glance improvement * Refactored run\_command for better naming * Fix rendering of FixedIpNotFoundForNetworkHost * Fix hacking N302 import only modules * Avoid db lookup in info\_from\_instance() * Fixes task\_log\_get and task\_log\_get\_all signatures * Make failures in the periodic tests more detailed * Clearer debug when test\_terminate\_sigterm fails * Skip backup files when running pep8 * Added sample tests to floating-ip-pools API * \_sync\_compute\_node should log host and nodename * Don't pass the entire list of instances to compute * VMware Compute Driver Volume Management * Bump the base rpc version of the network api to 1.7 * Remove compute api from scheduler driver * Remove network manager from compute manager * Adds SSL support for API server * Provide creating real unique constraints for columns * Add 
version constraint for coverage * Correct a format string in virt/baremetal/ipmi.py * Add REST api to manage bare-metal nodes * Adding REST API to show all availability zones of an region * Fixed nova-manage argument parsing error * xenapi: Add cleanup\_sm\_locks script * Fix double reboot during resume\_state\_on\_host\_boot * Add support for memory overcommit in live-migration * Adds conductor support for instance\_get\_active\_by\_window\_joined * Make compare\_result show the difference in lists * Don't limit SSH keys generation to 1024 bits * Ensure service's servicegroup API is created first * Drop volume API * Fix for typo in xml API doc sample in nova * Avoid stuck task\_state on snapshot image failure * ensure failure to inject user files results in startup error * List servers having non-existent flavor should return empty list * Add version constraint for cinder * Remove duplicated tapdev creation code from libvirt VIF * Move helper APIs for OVS ports into linux\_net * Add 'ovs\_interfaceid' to nova network VIF model * Replace use of mkdtemp with fixtures.TempDir * Add trust level cache to trusted\_filter * Fix the wrong datatype in task\_log table * Cleanup of extract\_opts.py * Baremetal/utils should not log certain exceptions * Use setup.py testr to run testr in run\_tests.sh * Fix nova coverage * PXE driver should rmtree directories it created * Fix floating ips with external gateway * Add support for Option Groups in LazyPluggable * Fix incorrect use of context object * Unpin testtools * fix misspellings in logs, comments and tests * fix mysql race in tests * Fix get Floating ip pools action name to match with its policy * Generate coverage even if tests failed * Allow snapshots of paused and suspended instances * Update en\_US message translations * Sync latest cfg from oslo-incubator * Avoid testtools 0.9.25 * Cells: Add support for compute HostAPI() * Refactor compute\_utils to avoid db lookup * ensure zeros are written out when clearing volumes * fix service\_ref undefined problem * Add rootwrap filters for password injection with localfs * fix floating ip test that wasn't running * Prevent metadata updates until instance is active * More consistent libvirt XML handling and cleanup * pick up eventlet backdoor fix from oslo * Run\_as\_root to ensure resize2fs succeed for all image backends * libvirt: Fix typo in configdrive implementation * Refactor EC2 keypairs exception * Directly copy a file URL from glance * Remove restoring soft deleted entries part 2 * Remove restoring soft deleted entries part 1 * Use conductor in the servicegroup db driver * Add service\_update to conductor * Remove some db calls from db servicegroup driver * XenAPI: Fix volume detach * Refactor: extract method: driver\_dict\_from\_config * Cells: Fix for relaying instance info\_cache updates * Fix wrong quota reservation when deleting resizing instance * Go back to the original branch after pylint check * Ignore auto-generated files by lintstack * Add host to instance\_faults table * Clean up db network db calls for fixed and float * Remove obsolete baremetal override of MAC addresses * Fix multi line docstring tests in hacking.py * PXE driver should not accept empty kernel UUID * Use common rootwrap from oslo-incubator * Remove network\_host config option * Better instance fault message when rescheduling * libvirt: Optimize test\_connection and capabilities * don't allow crs in the code * enforce server\_id can only be uuid or int * Allow nova to use insecure cinderclient * Makes sure compute 
doesn't crash on failed resume * Fix fallback when Quantum doesn't provide a 'vif_type' * Move compute node operations to conductor * correcting for proper use of the word 'an' * Correcting improper use of the word 'an' * Save password set through xen agent * Add encryption method using an ssh public key * Make resource tracker use conductor for listing instances * Make resource tracker use conductor for listing compute nodes * Updates prerequisite packages for fedora * Expose a get_spice_console RPC API method * Add a get_spice_console method to nova.virt.ComputeDriver API * Add nova-spicehtml5proxy helper * Pull NovaWebSocketProxy class out of nova-novncproxy binary * Add support for configuring SPICE graphics with libvirt * Add support for setting up elements in libvirt config * Add common config options for SPICE graphics * Create ports in quantum matching hypervisor MAC addresses * Make nova-api logs more useful * Override floating interface on callee side * Reject user ports that have MACs the hypervisor cannot use * Remove unused import * Reduce number of iptable-save restore loops * Clean up get_instance_id_by_floating_address * Move migration_get_..._by_host_and_node to conductor * Make resource tracker use conductor for migration updates * minor improvements to nova/tests/test_metadata.py * Cells: Add some cells support to admin_actions extension * Populate service list with availability zone and correct unit test * Correct misspelling of fake_service_get_all * Add 'devname' to nova.network.model.VIF class * Use testrepository setuptools support * Cleaning up exception handling * libvirt: use tap for non-blockdevice images on Xen * Export the MAC addresses of nodes for bare-metal * Cells: Add cells API extension * More HostAPI() cleanup for cells * Break out a helper function for working with bare metal nodes * Renames the new os-networks extension * Define a hypervisor driver method for getting MAC addresses * enables admin to view instance fault "details" * Revert "Use testr setuptools commands." 
* Revert "Populate service list with availability zone" * Fix typos in docstring * Fix problem with ipv6 link-local address(es) * Adds support for Quantum networking in Hyper-V * enable hacking.py self tests * Correct docstring on sizelimit middleware * sync latest log and lockutils from oslo * Fix addition of CPU features when running against legacy libvirt * Fix nova.availability\_zones docstring * Fix uses of service\_get\_all\_compute\_by\_host * VMware Compute Driver Rename * use postgresql INET datatype for storing IPs * Extract validation and provision code to separate method * Implement Quantum support for addition and removal of fixed IPs * Keep self and context out of error notification payload * Populate service list with availability zone * Add Compute API validations for block device map * Cells: Commit resize quota reservations immediately * Cells: Reduce the create\_image call depth for cells * Clean up compute API image\_create * Fix logic error in periodic task wait code * Centralize instance directory logic * Chown doesn't work on mounted vfat * instances\_path is now defined here * Convert ConfigDriveHelper to being a context manager itself * Use testr setuptools commands * Move migration\_create() to conductor * Move network call from compute API to the manager * Fix incorrect comment, and move a variable close to use * Make sure reboot\_instance uses updated instance * Cleanup reboot\_instance tests * Fix use of stale instance data in compute manager * Implements getPasswordData for ec2 * Add service\_destroy to conductor * Make nova.service get service through conductor * Add service\_create to conductor * Handle waiting for conductor in nova.service * Allow forcing local conductor * Make pinging conductor a part of conductor API * Fix some conductor manager return values * Handle directory conflicts with html output * Fix error in NovaBase.save() method * Skip domains on libvirt errors in get\_vcpu\_used() * Fix state sync logic related to the PAUSED VM state * Remove more unused opts from nova.scheduler.driver * Fix quota updating when admin deletes common user's instance * Tests for PXE bare-metal provisioning helper server * Correct the calculating of disk size when using lvm disk backend * Adding configdrive to xenapi * Validated device\_name value in block device map * Fix libvirt resume function call to get\_domain\_xml * Make it clearer that network.api.API is nova-network specific * Access instance as dict, not object in xenapi * Expand quota logging * Move logic from os-api-host into compute * Create a directory for servicegroup drivers * Move update\_instance\_info\_cache to conductor * Change ComputerDriver.legacy\_nwinfo to raise by default * Cleanup pyflakes in nova-manage * Add user/tenant shim to RequestContext * make runtests -p act more like tox * fix new N402 errors * Add host name to log message for \_local\_delete * Try out a new nova.conf.sample format * Regenerate nova.conf.sample * Make Quantum plugin fill in the 'bridge' name * Make nova network manager fill in vif\_type * Add some constants to the network model for drivers to use * Move libvirt VIF XML config into designer.py * Remove bogus 'unplug' calls from libvirt VIF test * Fix bash syntax error in run\_tests.sh * Update instance's cell\_name in API cell * Fix init\_host checking moved instances * Fix test cases in integrated.test\_multiprocess\_api * Map libvirt error to InstanceNotFound in get\_instance\_disk\_info * Fixed comment typo * Added sample tests to FlavorSwap API * Remove 
unused baremetal PXE options * Remove unused opt import in scheduler.driver * Move global service networking opts to new module * Move memcached\_servers opt into common.memorycache * Move service\_down\_time to nova.service * Move vpn\_key\_suffix into pipelib * fix N402 on tools/ * fix N402 for nova-manage * fix N402 for rest of nova * fix N402 for nova/c\* * fix N402 for nova/db * don't clear the database dicts in the tearDown method * Fixed typos in doc strings * Enhance wsgi to listen on ipv6 address * Adds a flag to allow configuring a region * Fix double reboot issue during soft reboot * Remove baremetal-compute-pxe.filters * Fix pyflakes issues in integrated tests * Adds option to rebuild instance with existing disk * Move common virt driver options to virt.driver * Move vpn\_image\_id to pipelib * Move enabled\_apis option into nova.service * Move default\_instance\_type into nova.compute * Move osapi\_compute\_unique\_server\_name\_scope to db * Move api\_class options to where they are used * Move manager options into nova.service * Move compute\_topic into nova.compute.rpcapi * fix N402 for nova/network * fix N402 for nova/scheduler * fix N402 for nova/tests * Fix N402 for nova/virt * Fix N402 for nova/api * New instance\_actions and events table, model, and api * Cope better with out of sync bm data * Import latest timeutils from oslo-incubator * Remove availability\_zones from service table * Enable Aggregate based availability zones * Sync log from oslo-incubator * Clarify the DBApi object in cells fakes * Fix lintstack check for multi-patch reviews * Adds to manager init\_host validation for instances location * Add to libvirt driver instance\_on\_disk method * add to driver option to keep disks when instance destroyed * Fix serialization in impl\_zmq * Added sample tests to FlavorRxtx API * Refresh instance metadata in-place * xenapi: Remove dead code, moves, tests * Fix baremetal VIFDriver * Adds a new tenant-centric network extension * CLI for bare-metal database sync * Move scheduler\_topic into nova.scheduler.rpcapi * Move console\_topic into nova.console.rpcapi * Move network\_topic into nova.network.rpcapi * Move cert\_topic into nova.cert.rpcapi * Move global s3 opts into nova.image.s3 * Move global glance opts into nova.image.glance * Remove unused osapi\_path option * attach/detach\_volume() take instance as a parameter * fix N401 errors, stop ignoring all N4\* errors * Add api extension to get and reset password * powervm: Implement snapshot for local volumes * Add exception handler for previous deleted flavor * Add NoopQuotaDriver * Conductor instance\_get\_all replaces \_by\_filters * Support cinderclient http retries * Sync rpc and notifier from oslo-incubator * PXE bare-metal provisioning helper server * Added sample tests to QuotaClasses API * Changed 'OpenStack, LLC' message to 'OpenStack Foundation' * Convert short doc strings to be on one line * Get instances from conductor in init\_host * Invert test stream capture logic for debugging * Upgrade WebOb to 1.2.3 * Make WebOb version specification more flexible * Refactor work with TaskLog in sqlalchemy.api * Check admin context in bm\_interface\_get\_all() * Provide a PXE NodeDriver for the Baremetal driver * Handle compute node records with no timestamp * config\_drive is missing in xml deserializer * Imported Translations from Transifex * NovaBase.delete() rename to NovaBase.soft\_delete() * livbirt: have a single source of console log file naming * Remove the global DATA * Add ping to conductor * Add two 
tests for resize action in ServerActionsControllerTest * Move service\_get\_all operations to conductor * Move migration\_get\_unconfirmed\_by\_dest\_compute to conductor * Move vol\_usage methods to conductor * Add test for resize server in ComputeAPITestCase * Allow pinging own float when using fixed gateway * Use full instance in virt driver volume usage * Imported Translations from Transifex * Refactor periodic tasks * Cells: Add periodic instance healing * Timeout individual tests after one minute * Fix regression in RetryFilter * Cells: Add the main code * Adding two snapshot related task states * update version urls to working v2 urls * Add helper methods to nova.paths * Move global path opts in nova.paths * Remove unused aws access key opts * Move fake\_network opt to nova.network.manager * Allow larger encrypted password posts to metadata * Move instance\_type\_get() to conductor * Move instance\_info\_cache\_delete() to conductor * Move instance\_destroy() to conductor * Move instance\_get\_\*() to conductor * Sync timeutils changes from Oslo * Remove system\_metadata db calls from compute manager * Move block\_device\_mapping destroy operations to conductor * Clean up setting of control\_exchange default * fix floating-ip in multihost case * Invalid EC2 ids should make the entire request fail * improve libguestfs exception handling * fix resize of unpartitioned images with libguestfs * xenapi: Avoid hotplugging volumes on resize * Remove unused VMWare VIF driver abstraction * Delete pointless nova.virt.VIFDriver class * Clarify & fix docs for nova-novncproxy * Removes unused imports * Imported Translations from Transifex * Fix spelling mistakes in nova.virt * Cells: Add cells commands to nova-manage * Add remaining get\_backdoor\_port() rpc calls to coverage * Fix race in resource tracker * Move block\_device\_mapping get operations to conductor * Move block\_device\_mapping update operations to conductor * Improve baremetal driver error handling * Add unit test to update server metadata * Add unit test to revert resize server action * Add compute build/resize errors to instance faults * Add unit test for too long metadata for server rebuild action * Adds os-volume\_attachments 'volume\_id' validation * Raise BadRequest when updating 'personality' * Imported Translations from Transifex * Ensure that Quantum uses configured fixed IP * Add conditions in compute APIRouter * Imported Translations from Transifex * CRUD on flavor extra spec extension should be admin-only * Report failures to mount in localfs correctly * Add API sample tests to FixedIPs extension * baremetal power driver takes \*\*kwargs * Implement IPMI sub-driver for baremetal compute * Fix tests/baremetal/test\_driver.py * Move baremetal options to [BAREMETAL] OptGroup * Adds test for HTTPUnprocessableEntity when rebooting * Make sure the loadables path is the absolute path * Fix bug and remove update lock in db.instance\_test\_and\_set() * Periodic update of DNS entries * Fix error in test\_get\_all\_by\_multiple\_options\_at\_once() * Remove session.flush() and session.query() monkey patching * Update nova-cert man page * Allow new XML API sample file generation * Remove unused imports * spelling in test\_migrations * Imported Translations from Transifex * Check for image\_meta in libvirt.driver.spawn * Adds test for 'itemNotFound' errors in 'Delete server' * Remove improper NotFound except block in list servers * Spelling: Compatability=>Compatibility * Imported Translations from Transifex * Ensure we add a new 
line when appending to rc.local * Verify the disk file exists before running qemu-img on it * Remove lxc attaching/detaching of volumes * Teardown container rootfs in host namespace for lxc * Fix cloudpipe instances query * Ensure datetimes can be properly serialized * Imported Translations from Transifex * Database metadata performance optimizations * db.network\_delete\_safe() method performance optimization * db.security\_group\_rule\_destroy() method performance optimization * Import missing exception * Ignore double messages to associate the same ip * Imported Translations from Transifex * Database reservations methods performance optimization * Using query.soft\_delete() method insead of soft deleting by hand * Create and use subclass of sqlalchemy Query with soft\_delete() method * Remove inconsistent usage of variable from hyperv * Log last compute error when rescheduling * Removed unused imports * Make libvirt driver default to virtio for KVM/QEMU NICs * Refactor libvirt VIF classes to reduce duplicate code * Makes sure to call crypto scripts with abspath * Enable nova exception format checking in tests * Eliminate race conditions in floating association * Imported Translations from Transifex * Provide a configdrive helper which uses contextlib * Add extension to allow hiding of addresses * Add html reports to report action in coverage extension * Add API samples tests for the coverage extension * Fix \_find\_ports() for when backdoor\_port is None * Parameterize database connection in test.py * fixing the typo of the error message from nbd * add 'random\_seed' entry to instance metadata * Baremetal VIF and Volume sub-drivers * Fix revert resize failure with disk.local not found * Fix a test isolation error in compute.test\_compute * New Baremetal provisioning framework * Move baremetal database tests to fixtures * address uuid overwriting * Add get\_backdoor\_port to cert * Add get\_backdoor\_port to scheduler * Add get\_backdoor\_port to console * Make libvirt driver.listinstances return defined * Add get\_backdoor\_port to consoleauth * Export custom SMBIOS info to QEMU/KVM guests * Make configdrive.py use version.product\_string() * Allow loading of product/vendor/package info from external file * Remove obsolete VCS version info completely * Trap exception when trying to write csr * Define a product, vendor & package strings in version.py * Extract image metadata from Cinder * Add expected exception to aggregate\_metadata\_delete() * Move aggregate\_get() to conductor * Add .testrepository/ directory to gitginore * Make load\_network\_driver load passed in driver * Fix race condition of resize confirmation * libvirt: Make vif\_driver.plug() returns None * Add an iptables mangle rule per-bridge for DHCP * Make NBD retry logic more generic, add retry to loop * Reliably include OS type in ephemeral filenames * Allow specification of libvirt guest interface backend driver * Fix "image\_meta" data passed in libvirt test case * Fix typos in vncserver\_listen config param help description * Traceback when user doesn't have permission * removed duplicate function definitions * network/api add\_fixed\_ip correctly passes uuid * Import cfg module in extract\_opts.py * Raise old exception instance instead of new one * Update exceptions to pass correct kwargs * Add option to make exception format errors fatal * allow for the ability to run partial coverage * Remove fake\_tests opt from test.py * Execute pygrub using nova-rootwrap in xenapi * Add DBDuplicateEntry exception for unique 
constraint violations * Fix stack trace on incorrect nova-manage args * Use service fixture in DB servicegroup tests * fix instance rescue without cmdline params in xml.rescue * Added sample tests to FlavorDisabled API * Reset the IPv6 API backend when resetting the conf stack * libvirt: Skip intermediate base files with qcow2 * fix test\_nbd using stubs * Imported Translations from Transifex * Properly remove the time override in quota tests * Fix API samples generation * Move TimeOverride to the general reusable-test-helper place * Added conf support for security groups * Add accounting for orphans to resource tracker * Add more association support to network API * Remove the WillNotSchedule exception * Replace fixtures.DetailStream with fixtures.StringStream * Move network\_driver into new nova.network.driver * Move DNS manager options into network.manager * Refactor xvp console * Move agent\_build\_get\_by\_triple to conductor * Move provider\_fw\_rule\_get\_all to conductor * Move security\_group operations in VirtAPI to conductor * Retry NBD device allocation * Use testr to run nova unittests * Add a developer trap for api samples * Update command on devref doc * Fixed deleting instance booted from invalid vol * Add general mechanism for testing api coverage * Add the missing replacement text in devref doc * Allow xenapi to work with empty image metadata * Imported Translations from Transifex * Fix for broken switch for config\_drive * Fix use of osapi\_compute\_extension option in api\_samples * Remove sleep in test\_consoleauth * Fix errors in used\_limits extension * Fix poll\_rescued\_instances periodic task * Add syslogging to nova-rootwrap * Clean up run\_tests.sh * Ensure that sql\_dbpool\_enable is a boolean value * Stop nbd leaks, remove pid race * Fixes KeyError: 'sr\_uuid' when booting from volume on xenapi * Add VirtAPI tests * Move remaining aggregate operations to conductor * remove session param from instance\_get * remove session param from instance\_get\_by\_uuid * Use nova.test.TestCase as the base test class * Ensure datetimes can be properly serialized * Fixes string formatting error * Adds API Sample tests for DiskConfig extension * Fix for correctly parsing snapshot uuid in ec2api * Autodetect nbd devices * Add Jian Wen to .mailmap * Move metadata\_{host,port} to network.linux\_net * Move API extension opts to api.openstack.compute * Move osapi\_max\_limit into api.openstack.common * Move link\_prefix options into api.openstack.common * Move some opts into nova.utils * Properly scope password options * Remove the deprecated quantum v1 code and directory * add and removed fixed ip now refresh cache * Implement an XML matcher * Add support for parsing the from libvirt host capabilities * Add support for libvirt domain XML config * Add support for libvirt domain XML config * Add coverage extension to nova API * Allow rpc-silent FloatingIP exceptions in n-net * Allow conductor exceptions to pass over RPC silently * Don't leak info from libvirt LVM backed instances * Add get\_backdoor\_port to nova-conductor * Properly scope isolated hosts config opts * Move monkey patch config opts into nova.utils * Move zombie\_instance\_updated\_at\_window option * Move some options into nova.image.glance * Move cache\_images to nova.virt.xenapi.vm\_utils * Move api\_rate\_limit and auth\_strategy to nova.api * Move api\_paste\_config option into nova.wsgi * Port to argparse based cfg * Cleanup the test DNS managers * Move all temporary files into a single /tmp subdir * Modified 
sample tests to FlavorExtraData API * Fix KeyError of log message in virt/libvirt/utils.py * Allows an instance to post encrypted password * Make nova/virt use aggregate['metadetails'] * Revert "Simplify how ephemeral disks are created and named." * Fix bw\_usage\_update issue with conductor * Correctly init XenAPIDriver in vm\_vdi\_cleaner.py * Set instance\_ref['node'] in \_set\_instance\_host\_and\_node * Consider reserved count in os-user-limits extension * Make DNS drivers inherit interface * Map cinder snapshot statuses to ec2 * i18n raise Exception messages * Set default DNS driver to No-op * Access DB values as dict not as attributes. Part 4 * Use conductor for bw\_usage operations * libvirt: enable apic setting for Xen or KVM guest * Improve virt/disk/mount/nbd test coverage * Add NFS to the libvirt volume driver list * Use admin user to read Quantum port * Add vif\_type to the VIF model * Make the nbd mounter respect CONF.max\_nbd\_devices * Imported Translations from Transifex * Raise NotImplementedError in dns\_driver.DNSDriver * Unpin lxml requirements * Added sample tests to FlavorManage API * Use fixtures library for nova test fixtures * Catch ProcessExecutionError when building config drives * Fix fname concurrency tests * Imported Translations from Transifex * Make ignore\_hosts and force\_hosts work again * Run test objectstore server on arbitrary free port * Fix network manager ipv6 tests * Prevent creation of extraneous resource trackers * Remove unused bridge interfaces * Use conductor for migration\_get() * Reset node to source in finish\_revert\_resize() * Simplify how ephemeral disks are created and named * Order instance faults by created\_at and id * Sync RPC logging-related bits from oslo * Fix bugs in test\_migrations.py * Fix regression allowing quotas to be applied to projects * Improve nova-manage usability * Add new cliutils code from oslo-incubator * Update tools/flakes to work with pydoc * Fix pep8 exclude logic for 1.3.3 * Avoid vm instance shutdown when power state is NOSTATE * Fix handling of unimplemented host actions * Fix positional arg swallow decorator * Fix minidns delete\_entry to work for hostname with mixed case chars * powervm: Refactored run\_command for better naming * Sync latest openstack.common.rpc * Ensure prep\_resize arguments can be serialized * Add host to get\_backdoor\_port() for network api * Add agent build API support for list/create/delete/modify agent build * Added sample tests to extended status API * Imported Translations from Transifex * Make policy.json not filesystem location specific * Use conductor for resourcetracker instance\_update * network managers: Pass elevated cxtx to update\_dhcp * Volume backed live migration w/o shared storage * Add pyflakes option to tox * Adds API Sample tests for Quotas extension * Boot from volume without image supplied * Implements volume usage metering * Configurable exec\_dirs to find rootwrap commands * Allow newer boto library versions * Add notifications when libvirtd goes down * Make update\_service\_capabilities() accept a list of capabilities * update mailmap to add my perferred mail * Fix test suite to use MiniDNS * Add support for new WMI iSCSI initiator API * Added sample tests to deferred delete API * On confirm\_resize, update correct resource tracker * Renaming xml test class in sample tests of consoles API * remove session param from certificate\_get * improve sessions for key\_pair\_(create,destroy) * powervm: add DiskAdapter for local volumes * Access DB values as dict 
not as attributes. Part 3 * Patch fake\_libvirt\_utils with fixtures.MonkeyPatch * Open test xenapi/vm\_rrd.xml relative to tests * Reset notifier\_api before each test * Reset volume\_api before cinder cloud tests * Fix rpc control\_exchange regression * Add generic customization hooks via decorator * add metadata support for overlapping networks * Split out part of compute's init\_host * Use elevated cxtx in resource\_tracker.resize\_claim * Fix test\_migrations for postgres * Add vpn ip/port setting support for CloudPipe * Access DB values as dict not as attributes. Part 2 * Enable debug in run\_tests using pdb * Add POWERVM\_STARTING state to powervm driver * Fix test\_inject\_admin\_password for OSX * Multi host DHCP networking and local DNS resolving * use file instead of tap for non-blockdevice images on Xen * use libvirt getInfo() to receive number of physical CPUs * Don't run the periodic task if ticks\_between\_runs is below zero * Fix args to AggregateError exception * Fix typo in inherit\_properties\_from\_image * Access DB values as dict not as attributes * Fix KeyError of log message in compute/api.py * Fix import problem in test\_virt\_disk\_vfs\_localfs * Remove start\_guests\_on\_host\_boot config option * Add aggregate\_host\_add and \_delete to conductor * Imported Translations from Transifex * Call plug\_vifs() for all instances in init\_host * Make compute manager use conductor for instance\_gets * Fixes HyperV compute "resume" tests * Convert datetimes for conductor instance\_update * Update migration so it supports PostgreSQL * Include 'hosts' and 'metadetails' in aggregate * Verify doc/api\_samples files along with the templates * Remove default\_image config option * Move ec2 config opts to nova.api.ec2.cloud * Move imagecache code from nova.virt.libvirt.utils * Use flags() helper method to override config in tests * RetryFilter checks 'node' as well as 'host' * Make resize and multi-node work properly together * Migration model update for multi-node resize fix * Add version to conductor migration\_update message * Validate rxtx\_factor as a float * Display errors when running nosetests * Respect the base\_dir\_name flag in imagebackend * Add exceptions to baremetal/db/api * Clean up unused methods in scheduler/driver * Provide better error message for aggregate-create * Imported Translations from Transifex * Allow multi\_host compute nodes to share dhcp ip * Add blank nova/virt/baremetal/\_\_init\_\_.py * Add migration\_update to conductor * Remove unnecessary topic argument * Add pluggable ServiceGroup monitoring APIs * Add SSL support to utils.generate\_glance\_url() * Add eventlet db\_pool use for mysql * Make compute manager use nova-conductor for instance\_update * Missing instance\_uuid in floating\_ip notifications * Make nova-dhcpbridge use CONFIG\_FILE over FLAGFILE * Rename instance\_info\_cache unique key constraints * Cleanup compute multi-node assignment of node * Imported Translations from Transifex * maint: remove an unused import from libvirt.utils * Encode consoleauth token in utf-8 to make it a str * nova-dhcpbridge should require the FLAGFILE is set * Added cpu\_info report to HyperV Compute driver * Remove stale flags unit tests * Truncate large console logs in libvirt * Move global fixture setup into nova/test.py * Complete API samples for Hosts extension * Fix HostDeserializer to enable multiple line xml * adjust rootwrap filters for recent file injection changes * Don't hard code the xen hvmloader path * Don't update arch twice when create 
server * remove db access in xen driver * Imported Translations from Transifex * Move compute\_driver into nova.virt.driver * Re-organize compute opts a bit * Move compute opts from nova.config * Add a CONTRIBUTING file * Compute doesn't set the 'host' field in instance * Xenapi: Don't resize down if not auto\_disk\_config * Cells: Re-add DB model and calls * Use more specific SecurityGroupHandler calls * Fix wait\_for\_deleted function in SmokeTests * Wrap log messages with \_() * Add methods to Host operations to fake hypervisor * Move sql options to nova.db.sqlalchemy.session * Add debug logging to disk mount modules * Remove the libguestfs disk mount API implementation * Remove img\_handlers config parameter usage * Convert file injection code to use the VFS APIs * Introduce a VFS implementation backed by the libguestfs APIs * Introduce a VFS implementation mapped to the host filesystem * Adds API for bulk creation/deletion of floating IPs * Remove obsolete config drive init.d example * Imported Translations from Transifex * Rename sql\_pool\_size to sql\_max\_pool\_size * Detect shared storage; handle base cleanup better * Allow VMs to be resumed after a hypervisor reboot * Fix non-primitive uses of instance in compute/manager * Remove extra space in exception * Adds missing index migrations by instance/status * Convert migrations.instance\_uuid to String(36) * Add missing binary * Change all tenants servers listing as policy-based * Fixes a bug in get\_info in the Hyper-V Driver * refactor: extract method: connect\_volume * Handle instances not being found in EC2 API responses * Pin pep8 to 1.3.3 * Return an error response if the specified flavor does not exists. (v4) * Send block device mappings to rebuild\_instance * Move db lookup for block device mappings * Use CONF.import\_opt() for nova.config opts * Imported Translations from Transifex * Remove nova.config.CONF * Add keystoneclient to pip-requires * Pass rpc connection to pre\_start\_hook * Fix typo: hpervisor=> hypervisor * Fix reversed args to call to \_reschedule * Add the beginnings of the nova-conductor service * remove old baremetal driver * Remove useless function quota\_usage\_create * Fix calls to private method in linux\_net * Drop unused PostgreSQL sequences from Folsom * Compact pre-Grizzly database migrations * Fix os-hosts extension can't return xml response correctly * Set node\_availability\_zone in XenAPIAggregateTestCase * Ignore editor backup files * Imported Translations from Transifex * Remove nova.flags * Remove FLAGS * Make fping extension use CONF * Use disk image path to setup lxc container * Use the auth\_token middleware from keystoneclient * improve session handling around instance\_ methods * add index to fixed\_ips * add instance\_type\_extra\_specs to instances * Change a toplevel function comment to a docstring * Ensure cat process is terminated * Add some sqlalchemy tweakables * Fixes an error reporting bug on Hyper-V * update api\_samples add os-server-start-stop * update api\_samples add os-services module * Switch to using eventlet\_backdoor from oslo * Sync eventlet\_backdoor from oslo * Added sample tests to consoles API * Fix use of 'volume' variable name * Ditch unused import and variable * Make ec2\_instance\_create db method consistant across db apis * Adds documentation for Hyper-V testing * Adds support for ConfigDriveV2 in Hyper-V * don't explode if a 413 didn't set Retry-After * Fix a couple uses of FLAGS * Remove nova.flags imports from scheduler code * Remove some unused imports 
from compute/\* * Remove importing of flags from compute/\* * Remove nova.flags imports from bin/\* * Move nova shared config options to nova.config * Fix use\_single\_default\_gateway * Update api\_samples README.rst to use tox * Do not alias stdlib uuid module as uuidutils, since nova has uuidutils * Allow group='foo' in self.flags() for tests * updated api\_samples with real hypervisor\_hostname * Issue a hard shutdown if clean fails on resize up * Introduce a VFS api abstraction for manipulating disk images * Fix network RPC API backwards compat * create\_db\_entry\_for\_new\_instance did not call sgh for default * Add support for backdoor\_port to be returned with a rpc call * Refactor scheduling filters * Unpin amqplib and kombu requirements * Add module for loading specific classes * Make sure instance data is always refreshed * Move all mount classes into a subdirectory * Add support for resizes to resource tracker * Fixes create instance \*without\* config drive test * Update db entry before upate the DHCP host file * Remove gen\_uuid() * Enhance compute capability filter to check multi-level * API extension for fpinging instances * Allow controller extensions to extend update/show * Isolate tests from the environment variable http\_proxy * Handle image cache hashing on shared storage * fix flag type define error * Simplify libvirt volume testing code * Migrate floating ip addresses in multi\_host live\_migration * Add DB query to get in-progress migrations * Try hard shutdown if clean fails on resize down * Restore self.test\_instance at LibvirtConnTestCase.setUp() * Fixes usage of migrate\_instance\_start * added getter methods for quantumv2 api * fix LVM backed VM logial volumes can't be deleted * Clean up \_\_main\_\_ execution from two tests for consistency * Imported Translations from Transifex * Update uuidutils from openstack common * Remove volume.driver and volume.iscsi * Use base image for rescue instance * Make xenapi shutdown mode explicit * Fix a bug in XenAPISession's use of virtapi * Ban db import from nova/virt * Update vol mount smoketest to wait for volume * Add missing webob to exc * Add missing exception NetworkDuplicated * Fix misuse of exists() * Rename config to vconfig * Move agent\_build\_get\_by\_triple to VirtAPI * Fix \_setup\_routes() signature in APIRouter * Move libvirt specific cgroups setup code out of nova.virt.disk.api * make libvirt with Xen more workable * script for configuring a vif in Xen in non-bridged mode * Upgrade pylint version to 0.26.0 * Removes fixed\_ip\_get\_network * improve session handling around virtual\_interfaces * improve sessions for reservation * improve session handling around quotas * Remove custom test assertions * Add nova option osapi\_compute\_unique\_server\_name\_scope * Add REST API support for list/enable/disable nova services * Switch from FLAGS to CONF in nova.compute * Switch from FLAGS to CONF in tests * Get rid of pylint E0203 in filter\_scheduler.py * Updated scheduler and compute for multiple capabilities * Switch from FLAGS to CONF in nova.db * Removed two unused imports * Remove unused functions * Fixes a bug in api.metadata.base.lookup() on Windows * Fixes a bug in nova.utils, due to Windows compatibility issues * improve session handling of dnsdomain\_list * Make tox.ini run pep8/hacking checks on bin * Fix import ordering in /bin scripts * add missing opts to test\_db\_api.py * clean up dnsdomain\_unregister * Make utils.mkfs() set label when fs=swap * Another case of dictionary access * Remove 
generic topic support from filter scheduler * Clarify server\_name, hostname, host * Refactor scheduling weights * update nova.conf.sample * Check instance\_type in compute capability filter * Sync latest code from oslo-incubator * Adds REST API support for Fixed IPs * Added separate bare-metal MySQL DB * Added bare-metal host manager * Remove unused volume exceptions * Adds a conf option for custom configdrive mkisofs * Fixed HyperV to get disk stats of instances drive * powervm: failed spawn should raise exception * Enable Quantum linux bridge VIF driver to use "bridge" type * Remove nova-volume DB * make diagnostics workable for libvirt with Xen * Avoid unnecessary system\_metadata db lookup * Make instance\_system\_metadata load with instance * Add some xenapi Bittorrent tests * Move security groups and firewall ops to VirtAPI * Move host aggregate operations to VirtAPI * Simplify topic handling in network rpcapi * Sync rpc from openstack-common * Send instance\_type to resize\_instance * Remove instance\_type db lookup in prep\_resize * Send all aggregate data to remove\_aggregate\_host * Fix incorrect LOG.error usage in \_compare\_cpu * Limit formatting routes when adding resources * Removes unnecessary db query for instance type * Fix verification in test\_api\_samples.py * Yield in between hash runs for the image cache manager * Remove unused function require\_instance\_exists * Refactor resource tracker claims and test logic * Remove out-of-date comment * Make HostManager.get\_all\_host\_states() return an iterator * Switch from FLAGS to CONF in nova.virt * 'BackupCreate' rotation parameter >= 0 * Corrects usage of db.api.network\_get * Switch from FLAGS to CONF in nova.console * Map NotAuthorized to 403 in floating ips extension * Decouple EC2 API from using instance id * libvirt: Regenerates xml instead of using on-disk * Imported Translations from Transifex * Fix to include error message in instance faults * Include hostname in notification payloads * Fix quota updating during soft delete and restore * Fix warnings found with pyflakes * make utils.mkfs() more general * Fixes snapshot instance failure on libvirt * Make ComputeDrivers send hypervisor\_hostname * Fixed instance deletion issue from Nova API * De-duplicate option: console\_public\_hostname * Don't verify image hashes if checksumming is disabled * Imported Translations from Transifex * Look up stuck-in-rebooting instances in manager * Use chance scheduler in EC2 tests * Send all aggregate data to add\_aggregate\_host * Send all migration data to finish\_revert\_resize * Send all migration data to revert\_resize * Fix migrations when not using multi-host network * Fix bandwidth polling exception * Fixes volume attach issue on Hyper-V * Shorten self.compute.resource\_tracker in test\_compute.py * Cleanup nova.db.sqlalchemy.api import * Use uuidutils.is\_uuid\_like for uuid validation * Add uuidutils module * Imported Translations from Transifex * Switch from FLAGS to CONF in nova.scheduler * Switch from FLAGS to CONF in nova.network * Switch from FLAGS to CONF in misc modules * Switch from FLAGS to CONF in nova.api * Switch from FLAGS to CONF in bin * Remove flags.DECLARE * Move parse\_args to nova.config * Forbid resizing instance to deleted instance types * Imported Translations from Transifex * Fix unused variables and wrong indent in test\_compute * Remove unnecessary db call from xenapi/vmops * xenapi: place boot lock when doing soft delete * Detangle soft delete and power off * Fix signing\_dir option for 
auth\_token middleware * Fix no attribute 'STD\_OUT\_HANDLE' on windows * Use elevated context in disassociate\_floating\_ip * Remove db.instance\_get\* from nova/virt * sync deprecated log method from openstack-common * move python-cinderclient to pip-requires * Tiny resource tracker cleanup * Fix Quantum v2 API method signatures * add doc to standardize session usage * improve sessions around floating\_ip\_get\_by\_address * Bump the base rpc version of the network api * Eliminates simultaneous schedule race * Introduce VirtAPI to nova/virt * Add some hooks for managers when service starts * Fix backwards compat of rpc to compute manager * xenapi: Make agent optional * Add xenapi host\_maintenance\_mode() test * refactor: extract \_attach\_mapped\_block\_devices * Make bdms primitive in rpcapi.terminate\_instance * Ability to specify a host restricted to admin * Improve EC2 describe\_security\_groups performance * Increased MAC address range to reduce conflicts * Move to a more canonicalized output from qemu-img info * Read deleted flavors when using to\_xml() * Fix copy-paste bug in block\_device\_info\_generation * Remove nova-volume scheduling support * Remove duplicate api\_paste\_config setting * Fixes hypervisor based image filtering on Hyper-V * make QuantumV2 support requested nic ordering * Add rxtx\_factor to network migration logic * Add scheduler retries for prep\_resize operations * Add call to reset quota usage * Make session.py reusable * Remove redundant code from PowerVM driver * Force earlier version of sqlalchemy * refactor: extract method vm\_ref\_or\_raise * Use env to set environ when starting dnsmasq * pep8 fixes for nova-manage * Fix VM deletion from down compute node * Remove database usage from libvirt check\_can\_live\_migrate\_destination * Clean up xenapi VM records on failed disk attaches * Remove nose detailed error reporting * Validate is-public parameter to flavor creation * refactor: extract \_terminate\_volume\_connections * improve sessions around compute\_node\_\* * Fix typo in xenapi/host.py * Remove extra print line in hacking.py * Ensures compute\_driver flag can be used by bdm * Add call to trigger\_instance[add/remove]\_security\_group\_refresh quantum * Validates Timestamp or Expiry time in EC2 requests * Add API samples to Admin Actions * Add live migration helper methods to fake hypervisor driver * Use testtools as the base testcase class * Clean up quantumv2.get\_client * Fix getattr usage * Imported Translations from Transifex * removes the nova-volume code from nova * Don't elevate context when calling run\_instance * remove session parameter from fixed\_ip\_get * Make instance\_get\_all() not require admin context * Fix compute tests abusing admin context * Fix use of elevated context for resize methods * Fix check for memory\_mb * Imported Translations from Transifex * Fix nova-network MAC collision logic * Fix rpcapi version for new methods * Remove useless return * Change hacking.py N306 to use logical\_lines * Add missing live migration methods to ComputeDriver base class * Fix hacking.py naivete regarding lines that look like imports * details the reboot behavior that a virt driver should follow * xenapi: refactor: Agent class * Send usage event on revert\_resize * Fix config-file overrides for nova-dhcpbridge * Make nova-rootwrap optional * Remove duplicated definition of is\_loaded() * Let scheduler know services' capabilities at startup * fetch\_images() method no more needed * Fix hardcoded topic strings with constants * Save 
exceptions earlier in finish\_resize * Correct \_extract\_query\_params in image.glance * Fix Broken XML Namespace Handling * More robust checking for empty requested\_networks * Imported Translations from Transifex * Rehydrate NetworkInfo in reboot\_instance() * Update common * Use cat instead of sleep for rootwrap test * Addtional 2 packages for dev environment on ubuntu * Let VlanManager keep network's DNS settings * Improve the performance of quantum detection * Support for nova client list hosts with specific zone * Remove unused imports in setup.py * Fixes fake for testing without qemu-img * libvirt: persist volume attachments into config * Extend IPv6 subnets to /64 if network\_size is set smaller than /64 * Send full migration data to finish\_resize * Send full migration to confirm\_resize * Send full migration to resize\_instance * Migrate to fileutils and lockutils * update sample for common logging * Add availability zone extension to API samples test * Refactor: config drive related functions * Fix live migration volume assignment * Remove unused table options dicts * Add better log line for undefined compute\_driver * Remove database usage from libvirt imagecache module * Return empty list when listing servers with bad status value * Consistent Rollback for instance creation failures * Refactor: move find\_guest\_agent to xenapi.agent * Fix Incorrect Exception when metadata is over 255 characters * Speed up volume and routing tests * Speed up api.openstack.compute.contrib tests * Allow loading only selected extensions * Migrate network of an instance * Don't require quantumclient when running nova-api * Handle the case where we encounter a snap shot correctly * Remove deprecated root\_helper config * More specific exception handling in migration 091 * Add virt driver capabilities definition * Remove is\_admin\_context from sqlalchemy.api * Remove duplicate methods from network/rpcapi.py * SanISCSIDriver SSH execution fixes * Fix bad Log statement in nova-manage * Move mkfs from libvirt.utils to utils * Fixes bug Snapshotting LXC instance fails * Fix bug in a test for the scheduler DiskFilter * Remove mountpoint from parse\_volume\_info * limit the usage of connection\_info * Sync with latest version of openstack.common.timeutils * nova-compute sends its capabilities to schedulers ASAP * Enable custom eventlet.wsgi.server log\_format * Fix the fail-on-zero-tests case so that it is tolerant of no output * add port support when QuantumV2 subclass is used * Add trove classifiers for PyPI * Fix and enable pep8 E502, E712 * Declare vpn client option in pipelib * Fix nova-volume-usage-audit * Fix error on invalid delete\_on\_termination value * Add Server diagnostics extension api samples * Add meaningful server diagnostic information to fake hypervisor * Use instance\_exists to check existence * Fix nova-volume-usage-audit * Imported Translations from Transifex * Avoid leaking BDMs for deleted instances * Deallocate network if instance is deleted in spawn * Create Flavors without Optional Arguments * Update policies * Add DNS records on IP allocation in VlanManager * update kwargs with args in wrap\_instance\_fault * Remove ComputeDriver.update\_host\_status() * Do not call directly vmops.attach\_volume * xenapi: fix bfv behavior when SR is not attached * Use consoleauth rpcapi in nova-novncproxy * Change install\_venv to use setup.py develop * Fixes syntax error in nova.tools.esx.guest\_tools.py * Allow local rbd user and secret\_uuid configuration * Set host prior to allocating 
network information * Remove db access for block devices and network info on reboot * Remove db access for block devices on terminate\_instance * Check parameter 'marker' before make request to glance * Imported Translations from Transifex * Internationalize nova-manage * Imported Translations from Transifex * Fixes live\_migration missing migrate\_data parameter in Hyper-V driver * handles empty dhcp\_domain with hostname in metadata * xenapi: Tag volumes in boot from volume case * Stops compute api import at import time * Fix imports in openstack compute tests * Make run\_tests.sh fail if no tests are actually run * Implement snapshots for raw backend * Used instance uuid rather than id in remove-fixed-ip * Migrate DHCP host info during resize * read\_deleted snapshot and volume id mappings * Make sure sleep can be found * Pass correct task\_state on snapshot * Update run\_tests.sh pep8 ignore list for pep8 1.2 * Clean up imports in test\_servers * Revert "Tell SQLite to enforce foreign keys." * Add api samples to simple tenant usage extension * Avoid RPC calls while holding iptables lock * Add util for image conversion * Add util for disk type retrieval * Fixes test\_libvirtr spawn\_with\_network\_info test * Remove unneeded temp variable * Add version to network rpc API * Remove cast\_to\_network from scheduler * Tell SQLite to enforce foreign keys * Use paramiko.AutoAddPolicy for the smoketests * nova-manage doesn't validate key to update the quota * Dis-associate an auto-assigned floating IP should return proper warning * Proxy floating IP calls to quantum * Handle invalid xml request to return BadRequest * Add api-samples to Used limits extension * handle IPv6 race condition due to hairpin mode * Imported Translations from Transifex * XenAPI should only snapshot root disk * Clarify trusted\_filter conf options * Fix pep8 error in bin/nova-manage * Set instance host field after resource claim * powervm: add polling timeout for LPAR stop command * Drop claim timeouts from resource tracker * Update kernel\_id and ramdisk\_id while rebuilding instance * Add Multiple Create extension to API sample tests * Fix typo in policy docstring * Fix reserve\_block\_device\_name while attach volume * Always use bdm in instance\_block\_mapping on Xen * Centralize sent\_meta definition * Move snapshot image property inheritance * Set read\_deleted='yes' for instance\_id\_mappings * Fix XML response for return\_reservation\_id * Stop network.api import on network import * libvirt: ignore deleted domain while get block dev * xenapi: Refactor snapshots during resize * powervm: remove broken instance filtering * Add ability to download images via BitTorrent * powervm: exception handling improvements * Return proper error messages while associating floating IP * Create util for root device path retrieval * Remove dependency on python-ldap for tests * Add api samples to Certificates extension * Add nova-cert service to integrated\_helpers * Compare lists in api samples against all matches * ip\_protocol for ec2 security groups * Remove unneeded lines from aggregates extension API sample tests * Remove deprecated Folsom code: config convert * Make resource tracker uses faster DB query * Remove deprecated Folsom code: bandwith\_poll\_interval * Add TestCase.stub\_module to make stubbing modules easier * Imported Translations from Transifex * Update tools hacking for pep8 1.2 and beyond * Remove outdated moduleauthor tags * remove deprecated connection\_type flag * Add aggregates extension to API samples test 
* Update RPM SPEC to include new bandwidth plugin * Remove TestCase.assertNotRaises * Imported Translations from Transifex * Imported Translations from Transifex * Use self.flags() instead of manipulating FLAGS by hand * Use test.TestCase provided self.mox and self.stubs * Remove unnecessary setUp, tearDown and \_\_init\_\_ in tests * xenapi: implement resume\_state\_on\_host\_boot * Revert "Add full test environment." * Synchronize docstring with actual implementation * Num instances scheduler filter * Add api samples to cloudpipe extension * Fix CloudPipe extension XML serialization * Max I/O ops per host scheduler filter * libvirt: continue detach if instance not found * libvirt: allows attach and detach from all domains * Fixes csv list required for qemu-img create * Added compute node stats to HostState * libvirt: Improve the idempotency of iscsi detach * Pass block\_device\_info to destroy in revert\_resize * Enable list with no dict objects to be sorted in api samples * Fixes error message for flavor-create duplicate ID * Loosen anyjson dependency to avoid clash with ceilometer * xenapi: make it easier to recover from failed migrations * Remove unnecessary check if migration\_ref is not None * Bump the version of SQLAlchemy in pip-requires * optimize slightly device lookup with LXC umounts * Support for several HA RabbitMQ servers * xenapi: Removing legacy swap-in-image * xenapi: increase timeout for resetnetwork agent request * Replaced default hostname function from gethostname to getfqdn * Fix issues deleting instances in RESIZED state * Modified 404 error response to show specific message * Updated code to update attach\_time of a volume while detaching * Check that an image is active before spawning instances * Fix issues with device autoassignment in xenapi * Deleting security group does not mark rules as deleted * Collect more accurate bandwidth data for XenServer * Zmq register opts fix in receiver * Revert explicit usage of tgt-adm --conf option * Fix booting a raw image on XenServer * Add servers/ips api\_samples tests * LOG.exception() should only be used in exception handler * Fix XenServer's ability to boot xen type images * all\_extensions api\_samples testing for server actions * Fixes remove\_export for IetAdm * libvirt: Fix \_cleanup\_resize * Imported Translations from Transifex * xenapi: fix undefined variable in logging message * Spelling: ownz=>owns * Fix NetAppCmodeISCSIDriver.\_get\_lun\_handle() method * Integration tests virtual interfaces API extension * Allow deletion of instance with failed vol cleanup * Fixes snapshotting of instances booted from volume * Move fakeldap.py from auth dir to tests * Remove refs to ATAoE from nova docs * Imported Translations from Transifex * Set volume status to error if scheduling fails * Update volume detach smoke test to check status * Fix config opts for Storwize/SVC volume driver * Ensure hybrid driver creates veth pair only once * Cleanup exception handling * Imported Translations from Transifex * Add lun number (0) to model\_update in HpSanDriver * libvirt: return after soft reboot successfully completes * Fixes to the SolarisISCSI Driver * Fix live migration when volumes are attached * Clarify dangerous use of exceptions in unit tests * Cleanup test\_api\_samples:\_compare\_result * Fix testContextClaimWithException * Fix solidfire unit tests * Stop double logging to the console * Recreate nw\_info after auto assigning floating ip * Re-generate sample config file * Use test.TestingException instead of duplicating it 
* Fix startup with DELETED instances * Fix solidfire option declaration * Restore SIGPIPE default action for subprocesses * Raise NotFound for non-existent volume snapshot create * Catch NotFound exception in FloatingIP add/remove * Adds API sample testing for rescue API extension * Fix bugs in resource tracker and cleanup * Replace builtin hash with MD5 to solve 32/64-bit issues * Properly create and delete Aggregates * No stack trace on bad nova aggregate-\* command * Clean up test\_state\_revert * Fix aggregate\_hosts.host migration for sqlite * Call compute manager methods with instance as keyword argument * Adds deserialization for block\_device\_mapping * Fix marker pagination for /servers * Send api.fault notification on API service faults * Always yield to other greenthreads after database calls * fix unused import * Don't include auto\_assigned ips in usage * Correct IetAdm remove\_iscsi\_target * Cleanup unused import in manager.py * xapi: fix create hypervisor pool * Bump version to 2013.1 * Add Keypairs extension to API samples test * sample api testing for os-floating-ips extension * Update quota when deleting volume that failed to be scheduled * Update scheduler rpc API version * Added script to find unused config options * Make sure to return an empty subnet list for a network without sunbet * Fix race condition in CacheConcurrencyTestCase * Makes scheduler hints and disk config xml correct * Add lookup by ip via Quantum for metadata service * Fix over rate limit error response * Add deserialization for multiple create and az * Fix doc/README.rst to render properly * Add user-data extension to API samples tests * Adds API sample testing for Extended server attributes extension * Inherit the base images qcow2 properties * Correct db migration 91 * make ensure\_default\_security\_group() call sgh * add ability to clone images * add get\_location method for images * Adds new volume API extensions * Add console output extension to API samples test * Raise BadRequest while creating server with invalid personality * Update 'unlimited' quota value to '-1' in db * Modified 404 error response for server actions * Fix volume id conversion in nova-manage volume * Improve error handling of scheduler * Fixes error handling during schedule\_run\_instance * Include volume\_metadata with object on vol create * Reset the task state after backup done * Allows waiting timers in libvirt to raise NotFound * Improve entity validation in volumes APIs * Fix volume deletion when device mapper is used * Add man pages * Make DeregisterImage respect AWS EC2 specification * Deserialize user\_data in xml servers request * Add api samples to Scheduler hints extension * Include Schedule Hints deserialization to XML API * Add admin actions extension * Allow older versions of libvirt to delete vms * Add security groups extension to API samples test * Sync a change to rpc from openstack-common * Add api\_samples tests for servers actions * Fix XML deserialization of rebuild parameters * All security groups not returned to admins by default * libvirt: Cleanup L2 and L3 rules when confirm vm resize * Corrects use of instance\_uuid for fixed ip * Clean up handling of project\_only in network\_get * Add README for doc folder * Correct typo in memory\_mb\_limit filter property * Add more useful logging around the unmount fail case * Imported Translations from Transifex * Make compute/manager.py use self.host instead of FLAGS.host * Add a resume delete on volume manager startup * Remove useless \_get\_key\_name() 
in servers API * Add entity body validation helper * Add 422 test unit test for servers API * Use tmpdir and avoid leaving test files behind * Includes sec group quota details in limits API response * Fixes import issue on Windows * Overload comment in generated SSH keys * Validate keypair create request body * Add reservations parameter when cast "create\_volume" to volume manager * Return 400 if create volume snapshot force parameter is invalid * Fix FLAGS.volumes\_dir help message * Adds more servers list and servers details samples * Makes key\_name show in details view of servers * Avoid VM task state revert on instance termination * Avoid live migrate overwriting the other task\_state * Backport changes from Cinder to Nova-Volume * Check flavor id on resize * Rename \_unplug\_vifs to unplug\_vifs * PowerVM: Establish SSH connection at use time * libvirt: Fix live block migration * Change comment for function \_destroy * Stop fetch\_ca from throwing IOError exceptions * Add 'detaching' to volume status * Reset task state before rescheduling * workaround lack of quantum/nova floatingip integration * fix rpcapi version * Added description of operators for extra\_specs * Convert to ints in VlanManager.create\_networks * Remove unused AddressAlreadyAllocated exception * Remove an unused import * Make ip block splitting a bit more self documenting * Prevent Partial terminations in EC2 * Add flag cinder\_endpoint\_template to volume.cinder * Handle missing network\_size in nova-manage * Adds API sample test for Flavors Extra Data extension * More specific lxml versions in tools/pip-requires * Fixes snat rules in complex networking configs * Fix flavor deletion when there is a deleted flavor * Make size optional when creating a volume from a snapshot * Add documentation for scheduler filters scope * Add and fix tests for attaching volumes * Fix auth parameter passed to libvirt openAuth() method * xapi: Fix live block migration * Add a criteria to sort a list of dict in api samples * delete a module never used * Update SolidFire volume driver * Adds get\_available\_resource to hyperv driver * Create image of volume-backed instance via native API * Improve floating IP delete speed * Have device mapping use autocreated device nodes * remove a never used import * fix unmounting of LXC containers in the presence of symlinks * Execute attach\_time query earlier in migration 98 * Add ServerStartStop extension API test * Set install\_requires in setup.py * Add Server Detail and Metadata tests * xenapi: Make dom0 serialization consistent * Refer to correct column names in migration 98 * Correct ephemeral disk cache filename * Stop lock decorator from leaving tempdirs in tests * Handle missing 'provider\_location' in rm\_export * Nail the pip requirement at 1.1 * Fix typo in tgtadm LOG.error() call * Call driver for attach/detach\_volume * rbd: implement create\_volume\_from\_snapshot * Use volume driver specific exceptions * Fake requests in tests should be to v1 * Implement paginate query use marker in nova-api * Simplify setting up test notifier * Specify the conf file when creating a volume * Generate a flavorid if needed at flavor creation * Fix EC2 cinder volume creation as an admin user * Allow cinder catalog match values to be configured * Fix synchronized decorator path cleanup * Fix and cleanup compute node stat tracking * avoid the buffer cache when copying volumes * Add missing argument to novncproxy websockify call * Use lvs instead of os.listdir in \_cleanup\_lvm * Fixing call to 
hasManagedSaveImage * Fix typo in simple\_tenant\_usage tests * Move api\_samples to doc dir * Add a tunable to control how many ARPs are sent * Get the extension alias to compose the path to save the api samples * Add scope to extra\_specs entries * Use bare container format by default * Sync some updates from openstack-common * Fix simple\_tenant\_usage's handing of future end times * Yield to another greenthread when some time-consuming task finished * Automatically convert device names * Fix creation of iscsi targets * Makes sure new flavors default to is\_public=True * Optimizes flavor\_access to not make a db request * Escape ec2 XML error responses * Skip tests in OSX due to readlink compat * Allow admins to de-allocate any floating IPs * Fix xml metadata for volumes api in nova-volume * Re-attach volumes after instance resize * Speed up creating floating ips * Adds API sample test for limits * Fix vmwareapi driver spawn() signature * Fix hyperv driver spawn() signature * Add API samples to images api * Add method to manage 'put' requests in api-sample tests * Add full python path to test stubbing modules for libvirt * Rename imagebackend arguments * Fixes sqlalchemy.api.compute\_node\_get\_by\_host * Fix instances query for compute stats * Allow hard reboot of a soft rebooting instance * On rebuild, the compute.instance.exists * Fix quota reservation expiration * Add api sample tests for flavors endpoint * Add extensions for flavor swap and rxtx\_factor * Address race condition from concurrent task state update * Makes sample testing handle out of order output * Avoid leaking security group quota reservations * Save the original base image ref for snapshots * Fixed boot from snapshot failure * Update zmq context cleanup to use term * Fix deallocate\_fixed\_ip invocation * fix issues with Nova security groups and Quantum * Clear up the .gitignore file * Allow for deleting VMs from down compute nodes * Update nova-rpc-zmq-receiver to load nova.conf * FLAG rename: bandwith\_poll\_\*=>bandwidth\_poll\_\* * Spelling: Persistant=>Persistent * Fix xml metadata for volumes extension * delete unused valiables * Clean up non-spec output in flavor extensions * Adds api sample testing for extensions endpoint * Makes api extension names consistent * Fixes spawn method signature for PowerVM driver * Spelling fix Retrive=> Retrieve * Update requires to glanceclient >=0.5.0 * Sort API extensions by alias * Remove scheduler RPC API version 1.x * Add version 2.0 of the scheduler RPC API * Remove some remnants of VSA support * hacking: Add driver prefix recommendation * Implements PowerVM get\_available\_resource method * Add a new exception for live migration * Assume virt disk size is consumed by instances * External locking for image caching * Stop using scheduler RPC API magic * Adds api sample testing for versions * Do not run pylint by default * Remove compute RPC API version 1.x * Add version 2.0 of compute RPC API * Accept role list from either X-Roles or X-Role * Fix PEP8 issues * Fix KeyError when test\_servers\_get fails * Update nova.conf.sample * Fixes backwards compatible rpc schedule\_run * Include launch-index in openstack style metadata * Port existing code to utils.ensure\_tree * Correct utils.execute() to check 0 in check\_exit\_code * Add the self parameter to NoopFirewallDriver methods * request\_spec['instance\_uuids'] as list in resize * Fix column variable typo * Add ops to aggregate\_instance\_extra\_specs filter * Implement project specific flavors API * Correct 
live\_migration rpc call in test * Allow connecting to a ssl-based glance * Move ensure\_tree to utils * Define default mode and device\_id\_string in Mount * Update .mailmap * Fix path to example extension implementation * Remove test\_keypair\_create\_quota\_limit() * Remove duplicated test\_migrate\_disk\_and\_power\_off() * Add missing import webob.exc * Fix broken SimpleScheduler.schedule\_run\_instance() * Add missing user\_id in revoke\_certs\_by\_user\_and\_project() * Rename class\_name to project\_id * Use the compute\_rpcapi instance not the module * Remove duplicated method VM\_migrate\_send * Add missing context argument to start\_transfer calls * Remove unused permitted\_instance\_types * Add lintstack error checker based on pylint * Make pre block migration create correct disk files * Remove unused and old methods in hyperv and powervm driver * Trap iscsiadm error * Check volume status before detaching * Simplify network create logic * Clean up network create exception handling * Adding indexes to frequently joined database columns * Ensure hairpin\_mode is set whenever vifs is added to bridge * Returns hypervisor\_hostname in xml of extension * Adds integration testing for api samples * Fix deallocate\_fixed\_ip() call by unifying signature * Make instance\_update\_and\_get\_original() atomic * Remove unused flags * Remove test\_instance\_update\_with\_instance\_id test * Remove unused instance id-to-uuid function * Re-work the handling of firewall\_driver default * Include CommonConfigOpts options in sample config * Re-generate nova.conf.sample * Ensure log formats are quoted in sample conf * Don't include hostname and IP in generated sample conf * Allow generate\_sample.sh to be run from toplevel dir * Let admin list instances in vm\_states.DELETED * Return actual availability zones * Provide a hint for missing EC2 image ids * Check association when removing floating ip * Add public network support when launching an instance * Re-define libvirt domain on "not found" exception * Add two prereq pkgs to nova devref env guide * Fix hyperv Cfgs: StrOpt to IntOpt * continue deleting instance even if quantum port delete fails * Typo fix: existant => existent * Fix hacking.py git checks to propagate errors * Don't show user-data when its not sent * Clarify nwfilter not found error message * Remove unused \_create\_network\_filters() * Adds missing assertion to FloatingIP tests * Restore imagebackend in test\_virt\_drivers.py * Add nosehtmloutput as a test dependency * Remove unused exceptions from nova/exception.py * Cleanup pip dependencies * Make glance image service check base exception classes * Add deprecated warning to SimpleScheduler * Have compute\_node\_get() join 'service' * XCP-XAPI version fix * add availability\_zone to openstack metadata * Remove stub\_network flag * Implements sending notification on metadata change * Code clean up * Implement network creation in compute API * Debugged extra\_specs\_ops.py * Fix typo in call in cinder.API unreserve\_volume * xenapi: Tag nova volumes during attach\_volume * Allow network to call get\_fixed\_ip\_by\_address * Add key\_name attribute in XML servers API * Fix is\_admin check via policy * Keep the ComputeNode model updated with usage * Remove hard-coded 'admin' role checking and use policy instead * Introduce ImagePropertiesFilter scheduler filter * Return HTTP 422 on bad server update PUT request * Makes sure instance deletion ok with deleted data * OpenStack capitalization added to HACKING.rst * Fix get\_vnc\_console 
race * Fix a TypeError that occurs in \_reschedule * Make missing imports flag in hacking settable * Makes sure tests don't leave lockfiles around * Update FilterScheduler doc * Disable I18N in Nova's test suites * Remove logging in volume tests * Refactor extra specs matching into a new module * Fix regression in compute\_capabilities filter * Refactor ComputeCapabilitiesFilter test cases * Revert per-user-quotas * Remove unused imports * Fix PEP8 issues * Sync changes from openstack common * Implement GET (show) in OS API keypairs extension * Fix spelling typos * Ignoring \*.sw[op] files * xenapi: attach root disk during rescue before boot * Allows libvirt to set a serial number for a volume * Adds support for serial to libvirt config disks * Remove unused variables * Always create the run\_instance records locally * Fix use of non-existent var pool * Adds Hyper-V support in nova-compute (with new network\_info model), including unit tests * Update sqlite to use PoolEvents for regexp * Remove unused function in console api * Allow nova to guess device if not passed to attach * Update disk config to check for 'server' in req * Changes default behavior of ec2 * Make ComputeFilter verify compute-related instance properties * Collect instance capabilities from compute nodes * Move volume size validation to api layer * Change IPtablesManager to preserve packet:byte counts * Add get\_key\_pair to compute API * Defined IMPL in global ipv6 namespace * xenapi: remove unnecessary json decoding of injected\_files * Remove unnecessary try/finally from snapshot * Port pre\_block\_migration to new image caching * Adding port attribute in network parameter of boot * Add support for NFS-based virtual block devices * Remove assigned, but unused variables from nova/db/sqlalchemy/api.py * xenapi: Support live migration without pools * Restore libvirt block storage connections on reboot * Added several operators on instance\_type\_extra\_specs * Revert to prior method of executing a libvirt hard\_reboot * Set task\_state=None when finished snapshotting * Implement get\_host\_uptime in libvirt driver * continue config-drive-v2, add openstack metadata api * Return values from wrapped functions in decorators * Allow XML payload for volume creation * Add PowerVM compute driver and unit tests * Revert task\_state on failed instance actions * Fix uuid related bug in console/api * Validate that min\_count & max\_count parameters are numeric * Allow stop API to be called in Error * Enforce quota limitations for instance resize * Fix rpc error with live\_migration * Simple checks for instance user data * Change time.sleep to greenthread.sleep * Add missing self. 
for parent * Rewrite image code to use python-glanceclient * Fix rpc error with live\_migration * volumes: fix check\_for\_export() in non-exporting volume drivers * Avoid {} and [] as default arguments * Improve bw\_usage\_update() performance * Update extra specs calls to use deleted: False * Don't stuff non-db data into instance dict * Fix type error in state comparison * update python-quantumclient dependency to >=2.0 * Key auto\_disk\_config in create server off of ext * Implement network association in OS API * Fix TypeError conversion in API layer * Key requested\_networks off of network extension * Key config\_drive off of config-drive extension * Make sure reservations is initialized * import module, not type * Config drive v2 * Don't accept key\_name if not enabled * Fix HTTP 500 on bad server create * Default behavior should restrict admins to tenant for volumes * remove nova code related to Quantum v1 API * Make sure ec2 mapping raises proper exceptions * Send host not ComputeNode into uptime RPC call * Making security group refresh more specific * Sync with latest version of openstack.common.cfg * Sync some cleanups from openstack.common * maint: compare singletons with 'is' not '==' * Compute restart causes period of network 'blackout' * Revert "Remove unused add\_network\_to\_project() method" * Add error log for live migration * Make FaultWrapper handle exception code = None * Don't accept scheduler\_hints if not enabled * Avoid double-reduction of quota for repeated delete * Traceback when over allocating IP addresses * xenapi: ensure all calls to agent get logged * Make update\_db an opt arg in scheduler manager * Key min\_count, max\_count, ret\_res\_id off of ext * Key availability\_zone in create server off of ext * Fix the inject\_metadata\_into\_fs in the disk API * Send updated instance model to schedule\_prep\_resize * Create unique volumes\_dir for testing * Fix stale instances being sent over rpc * Fix setting admin\_pass in rescue command * Key user\_data in create server off of extension * Key block\_device\_mapping off of volume extension * Moves security group functionality into extension * Adds ability to inherit wsgi extensions * Fixes KeyError when trying to rescue an instance * Make TerminateInstances compatible with EC2 api * Uniqueness checks for floating ip addresses * Driver for IBM Storwize and SVC storage * scheduler prep\_resize should not update instance['host'] * Add a 50 char git title limit test to hacking * Fix a bug on remove\_volume\_connection in compute/manager.py * Fix a bug on db.instance\_get\_by\_uuid in compute/manager.py * Make libvirt\_use\_virtio\_for\_bridges flag works for all drivers * xenapi: reduce polling interval for agent * xenapi: wait for agent resetnetwork response * Fix invalid exception format strings * General host aggregates part 2 * Update devref for general host aggregates * Cleanup consoles test cases * Return 409 error if get\_vnc\_console is called before VM is created * Move results filtering to db * Prohibit file injection writing to host filesystem * Added updated locations for iscsiadm * Check against unexpected method call * Remove deprecated use Exception.message * Remove temporary hack from checks\_instance\_lock * Remove temporary hack from wrap\_instance\_fault * Fix up some instance\_uuid usage * Update vmops to access metadata as dict * Improve external locking on Windows * Fix traceback when detaching volumes via EC2 * Update RPC code from common * Fixes parameter passing to tgt-admin for iscsi * 
Solve possible race in semaphor creation * Rename private methods of compute manager * Send full instance to compute live\_migration * Add underscore in front of post\_live\_migration * Send full instance to scheduler live\_migration * Send full instance to run\_instance * Use dict style access for image\_ref * Use explicit arguments in compute manager run\_instance * Remove topic from scheduler run\_instance * Use explicit args in run\_instance scheduler code * Update args to \_set\_vm\_state\_and\_notify * Reduce db access in prep\_resize in the compute manager * Remove instance\_id fallback from cast\_to\_compute\_host() * Remove unused InstanceInfo class * Adds per-user-quotas support for more detailed quotas management * Remove list\_instances\_detail from compute drivers * Move root\_helper deprecation warning into execute * Flavor extra specs extension use instance\_type id * Fix test\_resize\_xcp testcase - it never ran * tests: avoid traceback warning in test\_live\_migration * ensure\_tree calls mkdir -p * Only log deprecated config warnings once * Handle NetworkNotFound in \_shutdown\_instance * Drop AES functions and pycrypto dependency * Simplify file hashing * Allow loaded extensions to be checked from servers * Make extension aliases consistent * Remove old exception type * Fix test classes collision * Remove unused variables * Fix notification logic * Improve external lock implementation * maint: remove an unused import in libvirt.driver * Require eventlet >= 0.9.17 * Remove \*\*kwargs from prep\_resize in compute manager * Updates to the prep\_resize scheduler rpc call * Migrate a notifier patch from common: * Update list\_instances to catch libvirtError * Audit log messages in nova/compute/api.py * Rename \_self to self according to Python convention * import missing module time * Remove unused variables * Handle InstanceNotFound in libvirt list\_instances * Fix broken pep8 exclude processing * Update reset\_db to call setup if \_DB is None * Migrate a logging change from common: * Send 'create volume from snapshot' to the proper host * Fix regression with nova-manage floating list * Remove unused imports * Simple refactor of some db api tests * fix unmounting of LXC containers * Update usage of 'ip' to handle more return codes * Use function registration for policy checks * Check instance lock in compute/api * Fix a comment typo in db api * Audit log messages in nova/compute/manager.py * XenAPI: Add script to destroy cached images * Fix typo in db test * Fix issue with filtering where a value is unicode * Avoid using logging in signal handler * Fix traceback when using s3 * Don't pass kernel args to Xen HVM instances * Sync w/ latest openstack common log.py * Pass a full instance to rotate\_backups() * Remove agent\_update from the compute manager * Move tests.test\_compute\_utils into tests.compute * Send a full instance in terminate\_instance * maint: don't require write access when reading files * Fix get\_diagnostics RPC arg ordering * Fix failed iscsi tgt delete errors with new tgtadm * Deprecate root\_helper in favor of rootwrap\_config * Use instance\_get instead of instance\_by * Clarify TooManyInstances exception message * Setting root passwd no longer fails silently * XenAPI: Fix race-condition with cached images * Prevent instance\_info\_cache from being altered post instance * Update targets information when creating target * Avoid recursion from @refresh\_cache * Send a full instance in change\_instance\_metadata * Send a full instance in unrescue\_instance 
* Add check exit codes for vlans * Compute: Error out instance on rebuild and resize * Partially revert "Remove unused scheduler functions" * Use event.listen() instead of deprecated listeners kwarg * Avoid associating floating IP with two instances * Tidy up nova.image.glance * Fix arg to get\_instance\_volume\_block\_device\_info() * Send a full instance in snapshot\_instance * Send a full instance in set\_admin\_password * Send a full instance in revert\_resize * Send a full instance in rescue\_instance * Send a full instance in remove\_volume\_connection * Send a full instance in rollback\_live\_migration\_at\_destination * Send a full instance in resume\_instance * Send a full instance in resize\_instance * Send a full instance in reset\_network * Convert virtual\_interfaces to using instance\_uuid * Compute: VM-Mode should use instance dict * Fix image\_type=base after snapshot * Send a full instance in remove\_fixed\_ip\_from\_instance * Send a full instance in rebuild\_instance * Reverts fix lp1031004 * sync openstack-common log changes with nova * Set default keystone auth\_token signing\_dir loc * Resize.end now includes the correct instance\_type * Fix rootwrapper with tgt-admin * Use common parse\_isotime in GlanceImageService * Xen: VHD sequence validation should handle swap * Revert "Check for selinux before setting up selinux." * reduce debugging from utils.trycmd() * Avoid error during snapshot of ISO booted instance * Add a link from HACKING to wiki GitCommitMessages page * Instance cleanups from detach\_volumes * Check for selinux before setting up selinux * Prefer instance in reboot\_instance * maint: libvirt imagecache: remove redundant interpreter spec * Support external gateways in VLAN mode * Turn on base image cleanup by default * Make compute only auto-confirm its own instances * Fix state logic for auto-confirm resizes * Explicitly send primitive instances via rpc * Allow \_destroy\_vdis if a mapping has no VDI * Correct host count in instance\_usage\_audit\_log extension * Return location header on volume creation * Add persistent volumes for tgtd * xenapi: Use instance uuid when calling DB API * Fix HACKING violation in nova/api/openstack/volume/types.py * Remove ugly instance.\_rescue hack * Convert to using dict style key lookups in XenAPI * Implements notifications for more instance changes * Fix ip6tables support in xenapi bug 934603 * Moving where the fixed ip deallocation happens * Sanitize xenstore keys for metadata injection * Don't store system\_metadata in xenstore * use REDIRECT to forward local metadata request * Only enforce valid uuids if a uuid is passed * Send a full instance in pre\_live\_migration * Send a full instance in power\_on\_instance and start\_instance * Send a full instance in power\_off\_instance and stop\_instance * Make instance\_uuid backwards compat actually work * Send a full instance via rpc for post\_live\_migration\_at\_destination * Send a full instance via rpc for inject\_network\_info * Send a full instance via rpc for inject\_file * Send a full instance via rpc for get\_vnc\_console * Remove get\_instance\_disk\_info from compute rpcapi * Send a full instance via rpc for get\_diagnostics * Send a full instance via rpc for finish\_revert\_resize * Ensure instance is moved to ERROR on suspend failure * Avoid using 'is' operator when comparing strings * Revert "Add additional capabilities for computes" * Allow power\_off when instance doesn't exist * Fix resizing VDIs on XenServer >= 6 * Refactor glance image service code * 
Don't import libvirt\_utils in disk api * Call correct implementation for quota\_destroy\_all\_by\_project * Remove return values from some compute RPC methods * Reinstate instance locked error logging * Send a full instance via rpc for finish\_resize * Fix exception handling in libvirt attach\_volume() * Convert fixed\_ips to using instance\_uuid * Trim volume type representation * Fix a couple of PEP8 nits * Replace subprocess.check\_output with Popen * libvirt driver: set os\_type to support xen hvm/pv * Include architecture in instance base options passed to the scheduler * Fix typo of localhost's IP * Enhance nova-manage to set flavor extra specs * Send a full instance via rpc for detach\_volume * Remove unused methods from compute rpcapi * Send a full instance via rpc for confirm\_resize * Send a full instance via rpc for check\_can\_live\_migrate\_source * Send a full instance via rpc for check\_can\_live\_migrate\_destination * Remove unused scheduler functions * Send a full instance via rpc for attach\_volume * Send a full instance via rpc for add\_fixed\_ip\_to\_instance * Send a full instance via rpc for get\_console\_output * Send a full instance via rpc for suspend\_instance * Send a full instance via rpc for (un)pause\_instance * Don't use rpc to lock/unlock an instance * Convert reboot\_instance to take a full instance * Update decorators in compute manager * Include name in a primitive Instance * Shrink Simple Scheduler * Allow soft deletes from any state * Handle NULL deleted\_at in migration 112 * Add support for snapshots and volume types to netapp driver * Inject instance metadata into xenstore * Add missing tempfile import to libvirt driver * Fix docstring for SecurityGroupHandlerBase * Don't log debug auth token when using cinder * Remove temporary variable * Define cross-driver standardized vm\_mode values * Check for exception codes in openstack API results * Add missing parameters to novas cinder api * libvirt driver: set driver name consistently * Allow floating IP pools to be deleted * Fixes console/vmrc\_manager.py import error * EC2 DescribeImageAttribute by kernel/ramdisk * Xen: Add race-condition troubleshooting script * Return 400 in get\_console\_output for bad length * update compute\_fill\_first\_cost\_fn docstring * Xen: Validate VHD footer timestamps * Xen: Ensure snapshot is torn down on error * Provide rootwrap filters for nova-api-metadata * Fix a bug in compute\_node\_statistics * refactor all uses of the \`qemu-img info\` command * Xen: Fix snapshots when use\_cow=True * tests: remove misleading docstrings on libvirt tests * Update NovaKeystoneContext to use jsonutils * Use compute\_driver in vmware driver help messages * Use compute\_driver in xenapi driver help messages * Add call to get hypervisor statistics * Adds xcp disk resize support * Log snapshot UUID and not OpaqueRef * Remove unused user\_id and project\_id arguments * Fix wrong regex in cleanup\_file\_locks * Update jsonutils from openstack-common * Return 404 when attempting to remove a non-existent floating ip * Implements config\_drive as extension * use boto's HTTPResponse class for versions of boto >=2.5.2 * Migrations for deleted data for previously deleted instances * Add image\_name to create and rebuild notifications * Make it clear subnet\_bits is unused in ipam case * Remove unused add\_network\_to\_project() method * Adding networking rules to vm's on compute service startup * Avoid unrecognized content-type message * Updates migration 111 to work w/ Postgres * fixes for 
nova-manage not returning a full list of fixed IPs * Adds non\_inheritable\_image\_properties flag * Add git commit message validation to hacking.py * Remove unnecessary use of with\_lockmode * Improve VDI chain logging * Remove profane words * Adds logging for renaming and hardlinking * Don't create volumes if an incorrect size was given * set correct SELinux context for injected ssh keys * Fixes nova-manage fixed list with deleted networks * Move libvirt disk config setup out of main get\_guest\_config method * Refactor libvirt imagebackend module to reduce code duplication * Move more libvirt disk setup into the imagebackend module * Don't hardcode use of 'virtio' for root disk in libvirt driver * Ensure to use 'hdN' for IDE disk device in libvirt driver * Don't set device='cdrom' for all disks in libvirt driver * Move setup of libvirt disk cachemode into imagebackend module * Get rid of pointless 'suffix' parameter in libvirt imagebackend * Revert "Attach ISO as separate disk if given proper instruction" * Ensure VHDs in staging area are sequenced properly * Fix error in error handler in instance\_usage\_audit task * Fix SQL deadlock in quota reservations * Ensure 413 response for security group over-quota * fixes for nova-manage network list if network has been deleted * Allow NoMoreFloatingIps to bubble up to FaultWrapper * Fix cloudpipe keypair creation. Add pipelib tests * Don't let failure to delete filesystem block deletion of instances in libvirt * Static FaultWrapper status\_to\_type map * Make flavorextradata ignore deleted flavors * Tidy up handling of exceptions in floating\_ip\_dns * Raise NotImplementedError, not NotImplemented singleton * Fix the mis-use of NotImplemented * Update FilterSchedulerTestCase docstring * Remove unused testing.fake * Make snapshot work for stopped VMs * Split ComputeFilter up * Show all absolute quota limits in /limits * Info log to see which compute driver has loaded * Rename get\_lock() to \_get\_lock() * Remove obsolete line in host\_manager * improve efficiency of image transfer during migration * Remove unused get\_version\_from\_href() * Add debug output to RamFilter * Fixes bare-metal spawn error * Adds generic retries for build failures * Fix docstring typo * Fixes XenAPI driver import in vm\_vdi\_cleaner * Display key\_name only if keypairs extension is used * Fix EC2 CreateImage no\_reboot logic * Reject EC2 CreateImage for instance-store * EC2 DescribeImages reports correct rootDeviceType * Support EC2 CreateImage API for boot-from-volume * remove unused clauses[] variable * Partially implements blueprint xenapi-live-migration * Improved VM detection for bandwidth polling (XAPI) * Sync jsonutils from openstack-common * Adding granularity for quotas to list and update * Remove VDI chain limit for migrations * Refactoring required for blueprint xenapi-live-migration * Add the plugin framework from common; use and test * Catch rpc up to the common state-of-the-art * Support requested\_networks with quantum v2 * Return 413 status on over-quota in the native API * Fix venv wrapper to clean \*.pyc * Use all deps for tools/hacking.py tests in tox * bug 1024557 * General-host-aggregates part 1 * Attach ISO as separate disk if given proper instruction * Extension to show usage of limited resources in /limits response * Fix SADeprecationWarning: useexisting is deprecated * Fix spelling in docstrings * Fix RuntimeWarning nova\_manage not found * Exclude openstack-common from pep8 checks * Use explicit destination user in xenapi rsync call * 
Sync gettextutils fixes from openstack-common * Sync importutils from openstack-common * Sync cfg from openstack-common * Add SKIP\_WRITE\_GIT\_CHANGELOG to setup.py * Remove unnecessary logging from API * Sync a commit from openstack-common * Fix typo in docstring * Remove VDI chain limit for snapshots * Adds snapshot\_attached\_here contextmanager * Change base rpc version to 1.0 in compute rpcapi * Use \_lookup\_by\_name instead of \_conn.lookupByName * Use the dict syntax instead of attribute to access db objects * Raise HTTP 500 if service catalog is not json * Floating\_ip create /31,32 shouldn't silent error * Convert remaining network API casts to calls * network manager returns empty list, not raise an exception * add network creation call to network.api.API * overriden VlanManager.create\_networks must return a result * When over quota for floating ips, return HTTPRequestEntityTooLarge * Remove deprecated auth-related db code * Fix .mailmap to generate unique AUTHORS list * Imports base64 to fix xen file injection * Remove deprecated auth from GlanceImageService * Adds bootlocking to the xenserver suspend and resume * ensure libguestfs mounts are cleaned up * Making docs pretty! * allows setting accessIPvs to null via update call * Re-add nova.virt.driver import to xenapi driver * Always attempt to delete entire floating IP range * Adds network labels to the fixed ips in usages * only mount guest image once when injecting files * Remove unused find\_data\_files function in setup.py * Use compute\_api.get\_all in affinity filters * Refactors more snapshot code into vm\_utils * Clarifying which vm\_utils functions are private * Refactor instance\_usage\_audit. Add audit tasklog * Fixes api fails to unpack metadata using cinder * Remove deprecated auth docs * Raise Failure exception when setting duplicate other\_config key * Split xenapi agent code out to nova.virt.xenapi.agent * ensure libguestfs has completed before proceeding * flags documentation to deprecate connection\_type * refactor baremetal/proxy => baremetal/driver * refactor xenapi/connection => xenapi/driver * refactor vmwareapi\_conn => vmwareapi/driver * Don't block instance delete on missing block device volume * Adds diagnostics command for the libvirt driver * associate\_floating\_ip an ip already in use * When deleting an instance, avoid freakout if iscsi device is gone * Expose over-quota exceptions via native API * Fix snapshots tests failing bug 1022670 * Remove deprecated auth code * Remove deprecated auth-related api extensions * Make pep8 test work on Mac * Avoid lazy-loading errors on instance\_type * Fetch kernel/ramdisk images directly * Ignore failure to delete kernel/ramdisk in xenapi driver * Boot from volume for Xen * Fix 'instance %s: snapshotting' log message * Fix KeyError 'key\_name' when KeyPairExists raised * Propagate setup.py change from common * Properly name openstack.common.exception * Janitorial: Catch rpc up with a change in common * Make reboot work for halted xenapi instances * Removed a bunch of cruft files * Update common setup code to latest * fix metadata file injection with xen * Switch to common notifiers * Implements updating complete bw usage data * Fix rpc import path in nova-novncproxy * This patch stops metadata from being deleted when an instance is deleted * Set the default CPU mode to 'host-model' for Libvirt KVM/QEMU guests * Fallback to fakelibvirt in test\_libvirt.py test suite * Properly track VBD and VDI connections in xenapi fake * modify hacking.py to not choke on 
the def of \_() * sort .gitignore for readability * ignore project files for eclipse/pydev * Add checks for retrieving deleted instance metadata for notification events * Allow network\_uuids that begin with a prefix * Correct typo in tools/hacking.py l18n -> i18n * Add \*.egg\* to .gitignore * Remove auth-related nova-manage commands * Remove unnecessary target\_host flag in xenapi driver tests * Remove unnecessary setUp() method in xenapi driver tests * Finish AUTHORS transition * Don't catch & ignore exceptions when setting up LXC container filesystems * Ensure system metadata is sent on new image creation * Distinguish over-quota for volume size and number * Assign service\_catalog in NovaKeystoneContext * Fix some hacking violations in the quantum tests * Fix missing nova.log change to nova.openstack.common.log * Add Cinder Volume API to Nova * Modifies ec2/cloud to be able to use Cinder * Fix nova-rpc-zmq-receiver * Drop xenapi session.get\_imported\_xenapi() * Fix assertRaises(Exception, ...) HACKING violation * Make possible to store snapshots not in /tmp directory * Prevent file injection writing to host filesystem * Implement nova network API for quantum API 2.0 * Expand HACKING with commit message guidelines * Add ServiceCatalog entries to enable Cinder usage * Pass vdi\_ref to fake.create\_vbd() not a string * Switch to common logging * use import\_object\_ns for compute\_driver loading * Add compatibility for CPU model config with libvirt < 0.9.10 * Sync rpc from openstack-common * Redefine the domain's XML on volume attach/detach * Sync jsonutils from openstack-common * Sync iniparser from openstack-common * Sync latest importutils from openstack-common * Sync excutils from openstack-common * Sync cfg from openstack-common * Add missing gettextutils from openstack-common * Run hacking tests as part of the gate * Remove duplicate volume\_id * Make metadata content match the requested version of the metadata API * Create instance in DB before block device mapping * Get hypervisor uptime * Refactoring code to kernel Dom0 plugin * Ability to read deleted system metadata records * Add check for no domains in libvirt driver * Remove passing superfluous read\_deleted argument * Flesh out the README file with a little more useful information * Remove unused 'get\_open\_port' method from libvirt utils * deallocate\_fixed\_ip attempts to update deleted ip * Dom0 plugin now returns data in proper format * Add PEP8 checks back for Dom0 plugins * Add missing utils declaration to RPM spec * Fixes bug 1014194, metadata keys are incorrect for kernel-id and ramdisk-id * Clean up cruft in nova.image.glance * Retry against different Glance hosts * Fix some import ordering HACKING violations * Deal with unknown instance status * OS API should return SHUTOFF, not STOPPED * Implement blueprint ec2-id-compatibilty * Add multi-process support for API services * Allow specification of the libvirt guest CPU model per host * Refactor Dom0 Glance plugin * Switch libvirt get\_cpu\_info method over to use config APIs * Remove tpool stub in xenapi tests * Use setuptools-git plugin for MANIFEST * Remove duplicate check of server\_dict['name'] * Add missing nova-novncproxy to tarballs * Add libvirt config classes for handling capabilities XML doc * Refactor libvirt config classes for representing CPU models/features * Fix regression in test\_connection\_to\_primitive libvirt testcase * Rename the instance\_id column in instance\_info\_caches * Rename GlanceImageService.get to download * Use LOG.exception 
instead of logging.exception * Align run\_tests.py pep8 with tox * Add hypervisor information extension * Remove GlanceImageService.index in favor of detail * Swap VDI now uses correct name label * Remove image service show\_by\_name method * Cleanup of image service code * Adds default fall-through to the multi scheduler. Fixes bug 1009681 * Add missing netaddr import * Make nova list/show behave nicely on instance\_type deletion * refactor libvirt from connection -> driver * Switch to using new config parsing for vm\_vdi\_cleaner.py * Adds missing 'owner' attribute to image * Ignore floatingIpNotAssociated during disassociation * Avoid casts in network manager to prevent races * Stop nova\_ipam\_lib from changing the timeout setting * Remove extra DB calls for instances from OS API extensions * Allow single uuid to be specified for affinity * Fix invalid variable reference * Avoid reset on hard reboot if not supported * Fix several PEP-8 issues * Allow access to metadata server '/' without IP check * Fix db calls for snapshot and volume mapping * Removes utils.logging\_error (no longer used) * Removes utils.fetch\_file (no longer used) * Improve filter\_scheduler performance * Remove unnecessary queries for network info in notifications * Re-factor instance DB creation * Fix hacking.py failures.. * fix libvirt get\_memory\_mb\_total() with xen * Migrate existing routes from flat\_interface * Add full test environment * Another killfilter test fix for Fedora 17 * Remove unknown shutdown kwarg in call to vmops.\_destroy * Refactor vm\_vdi\_cleaner.py connection use * Remove direct access to glance client * Fix import order of openstack.common * metadata: cleanup pubkey representation * Make tgtadm the default iscsi user-land helper * Move rootwrap filters definition to config files * Fixes ram\_allocation\_ratio based over subscription * Call libvirt\_volume\_driver with right mountpoint * XenAPI: Fixes Bug 1012878 * update refresh\_cache on compute calls to get\_instance\_nw\_info * vm state and task state management * Update pylint/pep8 issues jenkins job link * Additional CommandFilters to fix rootwrap on SLES * Tidy up exception handling in contrib api consoles * do sync before fusermount to avoid busyness * Fix bug 1010581 * xenapi tests: changes size='0' to size=0 * fixes a bug in xenapi tests where a string should be int * Minor HACKING.rst exception fix * Make libvirt LoopingCalls actually wait() * Add instance\_id in Usage API response * Set libvirt\_nonblocking to true by default for Folsom * Admin action to reset states * Use rpc from openstack-common * add nova-manage bash completion script * Spelling fixes * Fix bug 1014925: fix os-hosts * Adjust the libvirt config classes' API contract for parsing * Move libvirt version comparison code into separate function helper * Remove two obsolete libvirt cheetah templates from MANIFEST.in * Propose nova-novncproxy back into nova core * Fix missing import in compute/utils.py * Add instance details to notifications * Xen Storage Manager: tests for xensm volume driver * SM volume driver: DB changes and tests * moved update cache functionality to the network api * Handle missing server when getting security groups * Imports cleanup * added deprecated.warn helper method * Enforce an instance uuid for instance\_test\_and\_set * Replaces functions in utils.py with openstack/common/timeutils.py * Add CPU arch filter scheduler support * Present correct ec2id format for volumes and snaps * xensm: Fix xensm volume driver after uuid changes * 
Cleanup instance\_update so it only takes a UUID * Updates the cache * Add libvirt min version check * Ensure dnsmasq accept rules are preset at startup * Re-add private \_compute\_node\_get call to sql api * bug #996880 change HostNotFound in hosts to HTTPNotFound * Unwrap httplib.HTTPConnection after WsgiLimiterProxyTest * Log warnings instead of full exceptions for AMQP reconnects * Add missing ack to impl\_qpid * blueprint lvm-disk-images * Remove unused DB calls * Update default policies for KVM guest PIT & RTC timers * Add support for configuring libvirt VM clock and timers * Dedupe native and EC2 security group APIs * Add two missing indexes for instance\_uuid columns * Revert "Fix nova-manage backend\_add with sr\_uuid" * Adds property to selectively enable image caching * Remove utils.deprecated functions * Log connection\_type deprecation message as WARNING * add unit tests for new virt driver loader * Do not attempt to kill already-dead dnsmasq * Only invoke .lower() on non-None protocols * Add indexes to new instance\_uuid columns * instance\_destroy now only takes a uuid * Do not always query deleted instance\_types * Rename image to image\_id * Avoid partially finished cache files * Fix power\_state mis-use bug 1010586 * Resolve unittest error in rpc/impl\_zmq * Fix whitespace in sqlite steps * Make eventlet backdoor play nicer with gettext * Add user\_name project\_name and color option to log * fixes bug 1010200 * Fixes affinity filters when hints is None * implement sql-comment-string stack traces * Finalize tox config * Fixes bug lp:999928 * Convert consoles to use instance uuid * Use OSError instead of ProcessExecutionError * Replace standard json module with openstack.common.jsonutils * Don't query nova-network on startup * Cleans up power\_off and power\_on semantics * Refactor libvirt create calls * Fix whitespace in sqlite steps * Update libvirt imagecache to support resizes * separate Metadata logic away from the web service * Fix bug 1006664: describe non existent ec2 keypair * Make live\_migration a first-class compute API * Add zeromq driver. 
Implements blueprint zeromq-rpc-driver * Fix up protocol case handling for security groups * Prefix all nova binaries with 'nova-' * Migrate security\_group\_instance\_association to use a uuid to refer to instances * Migrate instance\_metadata to use a uuid to refer to instances * Adds \`disabled\` field for instance-types * More meaningful help messages for libvirt migration options * fix the instance quota overlimit message * fix bug lp:1009041,add option "-F" to make mkfs non-interactive * Finally ack consumed message * Revert "blueprint " * Use openstack-common's policy module * Use openstack.common.cfg.CONF * bug #1006094 correct typo in addmethod.openstackapi.rst * Correct use of uuid in \_get\_instance\_volume\_bdm * Unused imports cleanup (folsom-2) * Quantum Manager disassociate floating-ips on instance delete * defensive coding against None inside bdm resolves bug 1007615 * Add missing import to quantum manager * Add a comment to rpc.queue\_get\_for() * Add shared\_storage\_test methods to compute rpcapi * Add get\_instance\_disk\_info to the compute rpcapi * Add remove\_volume\_connection to the compute rpcapi * blueprint * Implements resume\_state\_on\_host\_boot for libvirt * Fix libvirt rescue to work with whole disk images * Finish removing xenapi.HelperBase class * Remove network\_util.NetworkHelper class * Remove volume\_util.VolumeHelper class * Remove vm\_utils.VMHelper class * Start removing unnecessary classes from XenAPI driver * XenAPI: Don't hardcode userdevice for VBDs * convert virt drivers to fully dynamic loading * Add compare\_cpu to the compute rpcapi * Add get\_console\_topic() to the compute rpcapi * Add refresh\_provider\_fw\_rules() to compute rpcapi * Use compute rpcapi in nova-manage * Add post\_live\_migration\_at\_destination() to compute rpcapi * Add pre\_live\_migration() to the compute rpcapi * Add rollback\_live\_migration\_at\_destination() to compute rpcapi * Add finish\_resize() to the compute rpcapi * Add resize\_instance() to the compute rpcapi * Add finish\_revert\_resize() to the compute rpcapi * Add get\_console\_pool\_info() to the compute rpcapi * Fix destination host for remove\_volume\_connection * Don't deepcopy RpcContext * Remove resize function from virt driver * Cleans up extraneous volume\_api calls * Remove list\_disks/list\_interfaces from virt driver * Remove duplicate words in comments * Implement blueprint host-topic-matchmaking * Remove unnecessary setting of XenAPI module attribute * Prevent task\_state changes during VERIFY\_RESIZE * Eliminate a race condition on instance deletes * Make sure an exception is logged when config file isn't found * Removing double quotes from sample config file * Backslash continuation removal (Nova folsom-2) * Update .gitignore * Add a note on why quota classes are unused in Nova * Move queue\_get\_for() from db to rpc * Sample config file tool updates * Fix instance update notification publisher id * Use cfg's new global CONF object * Make xenapi fake match real xenapi a bit closer * Align ApiEc2TestCase to closer match api-paste.ini * Add attach\_time for EC2 Volumes * fixing issue with db.volume\_update not returning the volume\_ref * New RPC tests, docstring fixes * Fix reservation\_commit so it works w/ PostgreSQL * remove dead file nova/tests/db/nova.austin.sqlite * Fix the conf argument to get\_connection\_pool() * Remove Deprecated auth from EC2 * Revert "API users should not see deleted flavors." 
* Grammar fixes * Record instance architecture types * Grammar / spelling corrections * cleanup power state (partially implements bp task-management) * [PATCH] Allow [:print:] chars for security group names * Add scheduler filter for trustedness of a host * Remove nova.log usage from nova.rpc * Remove nova.context dependency from nova.rpc * \_s3\_create update only pertinent metadata * Allow adding fixed IPs by network UUID * Fix a minor spelling error * Run coverage tests via xcover for jenkins * Localize rpc options to rpc code * clean-up of the bare-metal framework * Use utils.utcnow rather than datetime.utcnow * update xen to use network\_model * fixes bug 1004153 * Bugfix in simple\_tenant\_usage API detail view * removed a dead db function register\_models() * add queue name argument to TopicConsumer * Cleanup tools/hacking using flake8 * Expose a limited networks API for users * Added a instance state update notification * Remove deprecated quota code * Update pep8 dependency to v1.1 * Nail pep8 dependencies to 1.0.1 * API users should not see deleted flavors * Add scheduler filter: TypeAffinityFilter * Add help string to option 'osapi\_max\_request\_body\_size' * Permit deleted instance types to be queried for active instances * Make validate\_compacted\_migration into general diff tool * Remove unused tools/rfc.sh * Finish quota refactor * Use utils.parse\_strtime rather than datetime.strptime * Add version to compute rpc API * Add version to scheduler rpc API * Add version to console rpc API * Remove wsgiref from requirements * More accurate rescue mode testing for XenAPI * Add tenant id in self link in /servers call for images * Add migration compaction validation tool * Enable checking for imports in alphabetical order * Include volume-usage-audit in tarballs * Fix XenServer diagnostics to provide correct details * Use cfg's new behavior of reset() clearing overrides * Sync with latest version of openstack.common.cfg * Only permit alpha-numerics and .\_- for instance type names * Use memcache to store consoleauth tokens * cert/manager.py not using crypto.fetch\_crl * Cleanup LOG.getLoggers to use \_\_name\_\_ * Imported Translations from Launchpad * Alphabetize imports in nova/tests/ * Fix Multi\_Scheduler to process host capabilities * fixed\_ip\_get\_by\_address read\_deleted from context * Fix for Quantum LinuxBridge Intf driver plug call * Add additional logging to compute filter * use a RequestContext object instead of context module * make get\_all\_bw\_usage() signature match for fake virt driver * Add unit test coverage for bug 1000261 * Moving network tests into the network folder * Add version to consoleauth rpc API * Add version to the cert rpc API * Add base support for rpc API versioning * fixes typo that completely broken Quantum/Nova integration * Make Iptables FW Driver handle dhcp\_server None * Add aliases to .mailmap for comstud and belliott * Add eventlet backdoor to facilitate troubleshooting * Update nova's copy of image metadata on rebuild * Optional timeout for servers stuck in build * Add configurable timeout to Quantum HTTP connections * Modify vm\_vdi\_cleaner to handle \`-orig\` * Add \_\_repr\_\_ to least\_cost scheduler * Bump XenServer plugin version * handle updated qemu-img info output * Rearchitect quota checking to partially fix bug 938317 * Add s3\_listen and s3\_listen\_port options * Misused and not used config options * Remove XenAPI use of eventlet tpool * Fixed compute periodic task. 
Fixes bug 973331 * get instance details results in volumes key error * Fix bug 988034 - Quantum Network Manager - not clearing ips * Stop using nova.exception from nova.rpc * Make use of openstack.common.jsonutils * Alphabetize imports in nova/api/ * Remove unused \_get\_target code from xenapi * Implement get\_hypervisor\_hostname for libvirt * Alphabetize imports * Alphabetize imports in nova/virt/ * Adding notifications for volumes * Pass 'nova' project into ConfigOpts * fixes bug 999206 * Create an internal key pair API * Make allocation failure a bit more friendly * Avoid setting up DHCP firewall rules with FlatManager * Migrate missing license info * Imported Translations from Launchpad * Fix libvirt Connection.get\_disks method * Create a utf8 version of the dns\_domains table * Setup logging, particularly for keystone middleware * Use default qemu-img cluster size in libvirt connection driver * Added img metadata validation. Fixes bug 962117 * Remove unnecessary stubout\_loopingcall\_start * Actually use xenapi fake setter * Provide a transition to new .info files * Store image properties with instance system\_metadata * Destroy system metadata when destroying instance * Fix XenServer windows agent issue * Use ConfigOpts.find\_file() to find paste config * Remove instance Foreign Key in volumes table, replace with instance\_uuid * Remove old flagfile support * Removed unused snapshot\_instance method * Report memory correctly on Xen. Fixes bug 997014 * Added image metadata to compute.instance.exists * Update PostgreSQL sequence names for zones/quotas * Minor help text related changes * API does need new image\_ref on rebuild immediately * Avoid unnecessary inst lookup in vmops \_shutdown * implement blueprint floating-ip-notification * Defer image\_ref update to manager on rebuild * fix bug 977007,make nova create correct size of qcow2 disk file * Remove unnecessary shutdown argument to \_destroy() * Do not fail on notify when quantum and melange are out of sync * Remove instance action logging mechanism * httplib throw "TypeError: an integer is required" when run quantum * fix bug 992008, we should config public interface on compute * A previous patch decoupled the RPC drivers from the nova.flags, breaking instance audit usage in the process. 
This configures the xvpvncproxy to configure the RPC drivers properly with FLAGS so that xvpvncproxy can run * Fix bug 983206 : \_try\_convert parsing string * pylint cleanup * Fix devref docs * Remove Deprecated AuthMiddleware * Allow sitepackages on jenkins * Replaces exceptions.Error with NovaException * Docs for vm/task state transitions * Fix a race with rpc.register\_opts in service.py * Mistake with the documentation about cost function's weight corrected * Remove state altering in live-migration code * Register fake flags with rpc init function * Generate a Changelog for Nova * Find context arg by type rather than by name * Default auto-increment for int primary key columns * Adds missing copyright to migration 082 * Add instance\_system\_metadata modeling * Use fake\_libvirt\_utils for libvirt console tests * Fix semantics for migration test environment var * Clean up weighted\_sum logic * Use ConfigOpts.find\_file() to locate policy.json * Sync to newer openstack.common.cfg * Fix test\_mysql\_innodb * Implement key pair quotas * Ensure that the dom0 we're connected to is the right one * Run ip link show in linux\_net.\_device\_exists as root * Compact pre-Folsom database migrations * Remove unused import * Pass context to notification drivers when we can * Use save\_and\_reraise\_exception() from common * Fix innodb tests again * Convert Volume and Snapshot IDs to use UUID * Remove unused images * Adding 'host' info to volume-compute connection information * Update common.importutils from openstack-common * Provide better quota error messages * Make kombu support optional for running unit tests * Fix nova.tests.test\_nova\_rootwrap on Fedora 17 * Xen has to create its own tap device if using libvirt and QuantumLinuxBridgeVIFDriver * Fix test\_migrations to work with python 2.6 * Update api-paste.ini to remove unused settings * Fix test\_launcher\_app to ensure service actually got started * Minor refactor of servers viewbuilder * A previous patch decoupled the RPC drivers from the nova.flags, breaking instance audit usage in the process. 
This configures the instance audit usage to configure the RPC drivers properly with FLAGS so that the job can run * Allow blank passwords in changePassword action * Allow blank adminPass on server create * Return a BadRequest on bad flavors param values * adjust logging levels for utils.py * Update integration tests to listen on 127.0.0.1 * Log instance consistently * Create name\_label local variable for logging message * Remove hack for xenapi driver tests * Migrate block\_device\_mapping to use instance uuids * Remove unnecessary return statements * Clean up ElementTree usage * Adds better bookending and robustness around the instance audit usage generation * Pass instance to resize\_disk() to fix exception * Minor spelling fix * Removes RST documentation and moves it to openstack-manuals * Trivial spelling fix * Remove workaround for sqlalchemy-migration < 0.6.4 * Remove unnecessary references to resize\_confirm\_window flag * Fix InnoDB migration bug in migrate script 86 * Use openstack.common.importutils * Ignore common code in coverage calculations * Use additional task states during resize * Add libvirt get\_console\_output tests: pty and file * Keep uuid with bandwidth usage tracking to handle the case where a MAC address could be recycled between instances * Added the validation for name check for rebuild of a server * Make KillFilter to handle 'deleted' w/o rstrip * Fix instance delete notifications * Disconnect stale instance VDIs when starting nova-compute * Fix timeout in EC2 CloudController.create\_image() * Add additional capabilities for computes * Move image checksums into a generic file * Add instance to several log messages * Imports to human alphabetical order * Fixes bug 989271, fixes launched\_at date on notifications * Enable InnoDB checking * make all mysql tables explicitly innodb * Use instance\_get\_by\_uuid since we're looking up a UUID * Use nova\_uuid attribute instead of trying to parse out name\_label * Add a force\_config\_drive flag * Fix 986922 * Improvement for the correct query extraction * Fixes bug 983024 * Make updating hostId raises BadRequest * Disallow network creation when label > 255. Fixes bug 965008 * Introduced \_atomic\_restart\_dhcp() Fixes Bug 977875 * Make the filename that image hashes are written to configurable * Xen: Pass session to destroy\_vdi * Add instance logging to vmware\_images.py * Add instance logging to vmops.py * fix bug #980452 set net.ipv4.ip\_forward=1 on network * Log instance * Log instance information for baremetal * Include instance in log message * Log instance * Ensure all messages include instance * Add instance to log messages * Include instance in log message * Refactor nova.rpc config handling * Don't leak RPC connections on timeouts or other exceptions * Small cleanup to attach\_volume logging * Implements EC2 DescribeAddresses by specific PublicIp * Introduced flag base\_dir\_name. 
Fixes bug 973194 * Set a more reasonable default RPC thread pool size * Number of missing imports should always be shown * Typo fix in bin/instance-usage-audit * Improved tools/hacking.py * Scope coverage report generation to nova module * Removes unnecessary code in \_run\_instance * Validate min\_ram/min\_disk on rebuild * Adding context to usage notifications * Making \`usage\_from\_instance\` private * Remove \_\_init\_\_.py from locale dir * Fixes bug 987335 * allow power state "BLOCKED" for live migrations if using Xen by libvirt * Exclude xenapi plugins from pep8/hacking checks * Imported Translations from Launchpad * Remove unnecessary power state translation messages * Add instance logging * Use utils.save\_and\_reraise\_exception * Removing XenAPI class variable, use session instead * Log instance consistently * Keep nova-manage commands sorted * Log instances consistently * Moves \`usage\_from\_instance\` into nova.compute.utils * Log instance * nova.virt.xenapi\_conn -> nova.virt.xenapi.connection * Remove unused time keyword arg * Remove unused variable * support a configurable libvirt injection partition * Refactor instance image property inheritance out to a method * Refactor availability zone handling out to a method * Include name being searched for in exception message * Be more tolerant of deleting failed builds * Logging updates in IptablesFirewallDriver * Implement security group quotas * Do not allow blank adminPass attribute on set password * Make rebuilds with an emtpy name raise BadRequest * Updates launched\_at in the finish and revert\_migration calls * Updated instance state on resize error * Reformat docstrings in n/c/a/o/servers as per HACKING * fix bug 982360, multi ip block for dmz\_cidr * Refactor checking instance count quota * Small code cleanup for config\_disk handling * Refactors kernel and ramdisk handling into their own method * Improve instance logging in compute/manager * Add deleted\_at to instance usage notification * Simplify \_get\_vm\_opaque\_ref in xenapi driver * Test unrescue works as well * Remove unused variable * Port types and extra specs to volume api * Make exposed methods clearer in xenapi.vmops * Fix error message to report correct operation * Make run\_tests.sh just a little bit less verbose * Log more information when sending notifications * xenapi\_conn -> xenapi.connection * Renamed current\_audit\_period function to last\_completed\_audit\_period to clarify its purpose * QuantumManager will start dnsmasq during startup. Fixes bug 977759 * Fixed metadata validation err. 
Fixes bug 965102 * Remove python-novaclient dependency from nova * Extend instance UUID logging * Remove references to RemoteError in os-networks * Fix errors in os-networks extension * Removes dead code around start\_tcp in Server * Improve grammar throughout nova * Improved localization testing * Log kwargs on a failed String Format Operation * Standardize quota flag format * Remove nova Direct API * migration\_get\_all\_unconfirmed() now uses lowercase "finished" Fixes bug 977719 * Run tools/hacking.py instead of pep8 mandatory * Delete fixed\_ips when network is deleted * Remove unecessary --repeat option for pep8 * Create compute.api.BaseAPI for compute APIs to use * Give all VDIs a reasonable name-label and name-description * Remove last two remaining hyperV references * bug 968452 * Add index to fixed\_ips.address * Use 'root' instead of 'os' in XenAPI driver * Information about DifferentHostFilter and SameHostFilter added * HACKING fixes, sqlalchemy fix * Add test to check extension timestamps * Fixes bug 952176 * Update doc to mention nova tool for type creation * Change Diablo document reference to trunk * Imported Translations from Launchpad * Cloudpipe tap vpn not always working * Allow instance logging to use just a UUID * Add the serialization of exceptions for RPC calls * Cleanup xenapi driver logging messages to include instance * Stop libvirt test from deleting instances dir * Move product\_version to XenAPISession * glance plugin no longer takes num\_retries parameter * Remove unused user\_id and project\_id parameters to fetch\_image() * Cleanup \_make\_plugin\_call() * Push id generation into \_make\_agent\_call() * Remove unused path argument for \_make\_agent\_call() * Remove unused xenstore methods * Combine call\_xenapi and call\_xenapi\_request * Fixed bug 962840, added a test case * Use -1 end-to-end for unlimited quotas * fix bug where nova ignores glance host in imageref * Remove unused \_parse\_xmlrpc\_value * Fix traceback in image cache manager * Fixes regression in release\_dhcp * Use thread local storage from openstack.common * Extend FilterScheduler documentation * Add validation on quota limits (negative numbers) * Get unit tests functional in OS X * Make sure cloudpipe extension can retrieve network * Treat -1 quotas as unlimited * Auto-confirming resizes would bail on exceptions * Grab the vif directly on release instead of lookup * Corrects an AttributeError in the quota API * Allow unprivileged RADOS users to access rbd volumes * Remove nova.rpc.impl\_carrot * Sync openstack.common.cfg from openstack-common * add libvirt\_inject\_key flag fix bug #971640 * Do not fail to build a snapshot if base image is not found * fix TypeError with unstarted threads in nova-network * remove unused flag: baremetal\_injected\_network\_template baremetal\_uri baremetal\_allow\_project\_net\_traffic * Imported Translations from Launchpad * fixed postgresql flavor-create * Add rootwrap for touch * Ensure floating ips are recreated on reboot * Handle instances being missing while listing floating IPs * Allow snapshots in error state to be deleted * Ensure a functional database connection * Add a faq to vnc docs * adjust logging levels for linux\_net * Handle not found in check for disk availability * Acccept metadata ip so packets aren't snatted * bug 965335 * Export user id as password to keystone when using noauth * Check that DescribeInstance works with deleted image * Check that volume has no snapshots before deletion * Fix libvirt rescue * Check vif exists before 
releasing ip * Make kombu failures retry on IOError * Adds middleware to limit request body sizes * Add validation for OSAPI server name length * adjust logging levels for libvirt error conditions * Fix exception type in \_get\_minram\_mindisk\_params * fixed bug lp:968019 ,fix network manager init floating ip problem * When dnsmasq fails to HUP log an error * Update KillFilter to handle 'deleted' exe's * Fix disassociate query to remove foreign keys * Touch in use image files when they're checked * Base image signature files are not images * Support timestamps as prefixes for traceback log lines * get\_instance\_uuids\_by\_ip\_filter to QM * Updated docstrings in /tools as per HACKING * Minor xenapi driver cleanups * Continue on the the next tenant\_id on 400 codes * Fix marker behavior for flavors * Remove auth\_uri, already have auth\_host, auth\_port * A missing checksum does not mean the image is corrupt * Default scheduler to spread-first * Reduce the image cache manager periodic interval * Handle Forbidden and NotAuthenticated glance exc * Destroy src and dest instances when deleting in RESIZE\_VERIFY * Allow self-referential groups to be created * Fix unrescue in invalid state * Clean up the shared storage check (#891756) * Don't set instance ACTIVE until it's really active * Fix traceback when sending invalid data * Support sql\_connection\_debug to get SQL diagnostic information * Improve performance of safe\_log() * Fix 'nova-manage config convert' * Add another libvirt get\_guest\_config() test case * Fix libvirt global name 'xml\_info' is not defined * Clean up read\_deleted support in host aggregates code * ensure atomic manipulation of libvirt disk images * Import recent openstack-common changes * makes volume versions display properly * Reordered the alphabet * Add periodic\_fuzzy\_delay option * Add a test case for generation of libvirt guest config * Convert libvirt connection class to use config APIs for CPU comparisons * Introduce a class for storing libvirt CPU configuration * Convert libvirt connection class to use config APIs for guests * Convert libvirt connection class to use config APIs for filesystem devices * Introduce a class for storing libvirt snapshot configuration * Move NIC devices back after disk devices * Convert libvirt connection class to use config APIs for disk devices * Convert libvirt connection class to use config APIs for input devices * Convert libvirt connection class to use config APIs for serial/console devices * Convert libvirt connection class to use config APIs for graphics * Convert libvirt vif classes over to use config API * Convert libvirt volume classes over to use config API * Delete the test\_preparing\_xml\_info libvirt test * Introduce a set of classes for storing libvirt guest configuration * Send a more appropriate error response for 403 in osapi * Use key in locals() that actually exists * Fix launching of guests where instances\_path is on GlusterFS * Volumes API now uses underscores for attrs * Remove unused certificate SQL calls * Assume migrate module missing \_\_version\_\_ is old * Remove tools/nova-debug * Inlining some single-use methods in XenAPI vmops * Change mycloud.com to example.com (RFC2606) * Remove useless dhcp\_domain flags in EC2 * Handle correctly QuotaError in EC2 API * Avoid unplugging VBDs for rescue instances * Imported Translations from Launchpad * Rollback create\_disks handles StorageError exception * Capture SIGTERM and Shut down python services cleanly * Fixed status validation. 
Fixes bug 960884 * Clarify HACKING's shadow built-in guidance * Strip auth token from log output * Fail-fast for invalid read\_deleted values * Only shutdown rescue instance if it's not already shutdown * Modify nova.wsgi.start() should check backlog parameter * Fix unplug\_vbd to retry a configurable number of times * Don't send snapshot requests through the scheduler * Implement quota classes * Fixes bug 949038 * Open Folsom * Fixes bug 957708 * Improvements/corrections to vnc docs * Allow rate limiting to be disabled via flag * Improve performance of generating dhcp leases * Fix lxc console regression * Strip out characters that should be escaped from console output * Remove unnecessary data from xenapi test * Correct accessIPv6 error message * Stop notifications from old leases * Fix typo in server diagnostics extension * Stub-implement floating-ip functions on FlatManager * Update etc/nova.conf.sample for ship * Make sqlite in-memory-db usable to unittest * Fix run/terminate race conditions * Workaround issue with greenthreads and lockfiles * allow the compute service to start with missing libvirt disks * Destroy rescue instance if main instance is destroyed * Tweak security port validation for ICMP * Debug messages for host filters * various cleanups * Remove Virtual Storage Array (VSA) code * Re-instate security group delete test case * db api: Remove check for security groups reference * Allow proper instance cleanup if state == SHUTOFF * Use getLogger for nova-all * Stop setting promisc on bridge * Fix OpenStack Capitalization * Remove improper use of redirect for hairpin mode * Fix OpenStack Capitalization * HACKING fixes, TODO authors * Keep context for logging intact in greenthreads * fix timestamps to match documented ec2 api * Include babel.cfg in tarballs * Fix LXC volume attach issue * Make extended status not admin-only by default * Add ssl and option to pass tenant to s3 register * Remove broken bin/\*spool\* tools * Allow errored volumes to be deleted * Fix up docstring * libvirt/connection.py: Set console.log permissions * nonblocking libvirt mode using tpool * metadata speed - revert logic changes, just caching * Refix mac change to work around libvirt issue * Update transfer\_vhd to handle unicode correctly * Fixes bug 954833 By adding the execute bit to the xenhost xenapi plugin * Cleanup flags * fix bug 954488 * Fix backing file cp/resize race condition * Use a FixedIp subquery to find networks by host * Changes remove\_fixed\_ip to pass the instance host * Map image ids to ec2 ids in metadata service * Remove date\_dhcp\_on\_disassociate comment and docs * Make fixed\_ip\_disassociate\_all\_by\_timeout work * Refactor glance id<->internal id conversion for s3 * Sort results from describe\_instances in EC2 API * virt/firewall: NoopFirewallDriver::instance\_filter\_exists must return True * fix nova-manage floating delete * fixed list warn when ip allocated to missing inst * Removes default use of obsolete ec2 authorizor * Additional extensions no longer break unit-tests * Use cPickle and not just pickle * Move (cast|call)\_compute\_message methods back into compute API class * Fix libvirt get\_console\_output for Python < 2.7 * doc/source/conf.py: Fix man page building * Update floating auto assignment to use the model * Make nova-manage syslog check /var/log/messages * improve speed of metadata * Fix linux\_net.py interface-driver loading * Change default of running\_deleted\_instance\_action * Nuke some unused SQL api calls * Avoid nova-manage floating create /32 
* Add a serializer for os-quota-sets/defaults * Import nova.exception so exception can be used * refactoring code, check connection in Listener. refer to Bug #943031 * Fix live-migration in multi\_host network * add convert\_unicode to sqlalchemy connection arguments * Fixes xml representation of ext\_srv\_attr extension * Sub in InstanceLimitExceeded in overLimit message * Remove update lockmode from compute\_node\_get\_by\_host * Set 'dhcp\_server' in \_teardown\_network\_on\_host * Bug #922356 QuantumManager does not initiate unplug on the linux\_net driver * Clean up setup and teardown for dhcp managers * Display owner in ec2 describe images * EC2 KeyName validation * Fix issues with security group auths without ports * Replaced use of webob.Request.str\_GET * Allow soft\_reboot to work from more states: * Make snapshots with qemu-img instead of libvirt * Use utils.temporary\_chown to ensure permissions get reset * Add VDI chain cleanup script * Reduce duplicated code in xenapi * Since 'net' is of nova.network.model.VIF class and 'ips' is an empty list, net needs to be pulled from hydrated nw\_info.fixed\_ips(), and appended to ips * Fix nova-manage backend\_add with sr\_uuid * Update values in test\_flagfile to be different * Switch all xenapi async plugin calls to be sync * Hack to fixup absolute pybasedir in nova.conf.sample * fixup ldapdns default config * Use cache='none' for all disks * Update cfg from openstack-common * Add pybasedir and bindir options * Simply & unify console handling for libvirt drivers * Cleanup XenAPI tests * fix up nova-manage man page * Don't use glance when verifying images * Fixes os-volume/snapshot delete * Use a high number for our default mac addresses * Simplify unnecessary XenAPI Async calls to be synchronous * Remove an obsolete FIXME comment * Fixing image snapshots server links * Wait for rescue VM shutdown to complete before destroying it * Renaming user friendly fault name for HTTP 409 * Moving nova/network tests to more logical home * Change a fake classes variable to something other than id * Increase logging for xenapi plugin glance uploads * Deprecate carrot rpc code * Improve vnc proxy docs * Require a more recent version of glance * Make EC2 API a bit more user friendly * Add kwargs to RequestContext \_\_init\_\_ * info\_cache is related to deleted instance * Handle kwargs in deallocate\_fixed\_ip for FlatDHCP * Add a few missing tests regarding exception codes * Checks image virtual size before qemu-img resize * Set logdir to a tempdir in test\_network * Set lock\_path to a tempdir in TestLockCleanup * Exceptions unpacking rpc messages shouldn't hang the daemon * Use sqlalchemy reflection in migration 080 * Late load rabbit\_notifier in test\_notifier * boto shouldn't be required for production deploys * Don't use ec2 IDs in scheduler driver * pyflakes cleanups on libvirt/connection.py * Validate VDI chain before moving into SR * Fix racey snapshots * Don't swallow snapshot exceptions * allow block migration to talk to glance/keystone * Remove cruft and broken code from nova-manage * Update paste file to use service tenant * Further cleanup of XenAPI * Fix XML namespaces for limits extensions and versions * Remove the feature from UML/LXC guests * setup.py: Fix doc building * Add adjustable offset to audit\_period * nova-manage: allow use of /32 IP range * Clear created attributes when tearing down tests * Fix multi\_host column name in setup\_networks.. 
* HACKING fixes, all but sqlalchemy * Remove trailing whitespaces in regular file * remove undocumented, unused mpi 'extension' to ec2 metadata * Minor clarifications for the help strings in nova config options * Don't use \_ for variable name * Make test\_compute console tests more robust * test\_compute stubs same thing multiple times * Ignore InstanceNotFound when trying to set instance to ERROR * Cleans up the create\_conf tool * Fix bug 948611. Fix 'nova-manage logs errors' * api-paste.ini: Add /1.0 to default urlmap * Adds nova-manage command to convert a flagfile * bug 944145: race condition causes VM's state to be SHUTOFF * Cleanup some test docstrings * Cleans up a bunch of unused variables in XenAPI * Shorten FLAGS.rpc\_response\_timeout * Reset instance to ACTIVE when no hosts found * Replaces pipelines with flag for auth strategy * Setup and teardown networks during migration * Better glance exception handling * Distinguish rootwrap Authorization vs Not found errors * Bug #943178: aggregate extension lacks documentation * Rename files/dirs from 'rabbit' to 'rpc' * Change references to RabbitMQ to include Qpid * Avoid running code that uses logging in a thread * No longer ignoring man/novamanage * Fixing incorrect use of instance keyword in logging * Fix rst formatting and cross-references * Provide a provider for boto.utils * Only pass image uuids to compute api rebuild * Finally fix the docs venv bug * Get rid of all of the autodoc import errors * Rename DistributedScheduler as FilterScheduler * Allows new style config to be used for --flagfile * Add support for lxc consoles * Fix references to novncproxy\_base\_url in docs * Add assertRaises check to tools/hacking.py as N202 * fix restructuredtext formatting in docstrings that show up in the developer guide * Raise 409 when rescuing instance in RESCUE mode * Log a certain rare instance termination exception * Update fixed\_ip\_associate to not use relationships * Remove unnecessary code in test setUp/tearDown * Imported Translations from Launchpad * Only raw string literals should be used with \_() * assertRaises(Exception, ...) 
considered harmful * Added docs on MySQL queries blocking main thread * Fix test\_attach\_volume\_raise\_exception * Fix test\_unrescue to actually test unrescue * bug #941794 VIF and intf drivers for Quantum Linux Bridge plugin * Ensures that we don't exceed iptables chain max * Allows --flat\_interface flag to override db * Use self.mox instead of create a new self.mocker * Fix test\_migrate\_disk\_and\_power\_off\_exception * fakes.fake\_data\_store doesn't exist, so don't reset it * populate glance 'name' field through ec2-register * Remove unused \_setup\_other\_managers method from test case * Remove unused test\_obj parameter to setUp() * Use stubout instead of manually stubbing out os.path.exists * Remove superfluous \_\_init\_\_ from test case * Use test.TestCase instead of manually managing stubout * Handle InstanceNotFound during server update * Use stubout instead of manually stubbing out versions.VERSIONS * Remove unused session variable in test setup * Cleanup swap in \_create\_vm undo * Do not invoke kill dnsmasq if no pid file was found * Fixes for ec2 images * Retry download\_vhd with different glance host each time * Display error for invalid CIDR * Remove empty setUp/tearDown methods * Call super class tearDown correctly * Fixes bug 942556 and bug 944105 * update copyright, add version information to footer * Refactor spawn to use UndoManager * Fail gracefully when the db doesn't speak unicode * Remove unnecessary setting up and down of mox and stubout * Remove unnecessary variables from tests * Ensure image status filter matches glance format * fix for bug 821252. Smarter default scheduler * blueprint sphinx-doc-cleanup bug 944381 * Adds soft-reboot support to libvirt * Minor cleanup based on HACKING * libvirt driver calls unplug() twice on vm reboot * Add missing format string type on some exception messages * Fixing a request-id header bug * Test creating a server with metadata key too long * Fixes lp931801 and a key\_error * notifications for delete, snapshot and resize * Ensure that context read\_deleted is only one of 'no', 'yes' or 'only' * register Cell model, not Zone model * Option expose IP instead of dnshost in ec2 desc' * Fix \_sync\_power\_states to obtain correct 'state' * Ensures that keypair names are only AlphaNumeric * Cast vcpu\_weight to string before calling xen api * Add missing filters for new root commands * Destroy VM before VDIs during spawn cleanup * Include hypervisor\_hostname in the extended server attributes * Remove old ratelimiting code * Perform image show early in the resize process * Adds netapp volume driver * Fixes bug 943188 * Remove unused imports and variables from OS API * Return empty list when volume not attached * Be consistent with disabling periodic tasks * Cast volume-related ids to str * Fix for bug 942896: Make sure network['host'] is set * Allow xvd\* to be supplied for volume in xenapi * Initialize progress to 0 for build and resize * Fix issue starting nova-compute w/ XenServer * Provide retry-after guidance on throttled requests * Use constant time string comparisons for auth * Rename zones table to cells and Instance.zone\_name to cell\_name * Ensure temporary file gets cleaned up after test * Fixes bug 942549 * Use assertDictMatch to keep 2.6 unit tests passing * Handle case where instance['info\_cache'] is None * sm volume driver: fix backend adding failure * sm vol driver: Fix regression in sm\_backend\_conf\_update * TypeError API exceptions get logged incorrectly * Add NoopFirewallDriver * Add utils.tempdir() 
context manager for easy temp dirs * Check all migrations have downgrade in test\_misc * Remove monkey patching in carrot RPC driver * Call detach\_volume when attach fails * Do not hit the network\_api every poll * OS X Support fixed, bug 942352 * Make scheduler filters more pluggable * Adds temporary chown to sparse\_copy * make nova-network usable with Python < 2.6.5 * Re-adds ssl to kombu configuration and adds flags that are needed to pass through to kombu * Remove unused import * Make sure detail view works for volume snaphots * Imported Translations from Launchpad * Decode nova-manage args into unicode * Cleanup .rescue files in libvirt driver unrescue * Fixes cloudpipe extension to work with keystone * Add missing directive to tox.ini * Update EC2KeystoneAuth to grab tenant 'id' * Monkey patch migrate < 0.7.3 * Fixes bug lp#940734 - Adding manager import so AuthMiddleware works * Clean stale lockfiles on service startup : fixes bug 785955 * Fix nova-manage floating create docs * Fix MANIFEST.in to include missing files * Example config\_drive init script, label the config drive * fix unicode triggered failure in AuthManager * Fix bug 900864 Quantum Manager flag for IP injection * Include launch\_index when creating instances * Copy data when migration dst is on a different FS * bigger-than-unit test for cleanup\_running\_deleted\_instances * Nova options tool enhancements * Add hypervisor\_hostname to compute\_nodes table and use it in XenServer * Fixes error if Melange returns no networks * Print error if nova-manage should be run as root * Don't delete security group in use from OS API * nova-network can't deallocate ips from deleted instances * Making link prefixes support https * Prevent infinite loop in PublishErrorsHandler * blueprint host-aggregates: host maintenance - xenapi implementation * bug 939480 * libvirt vif-plugging fixes. 
Fixes bug 939252, bug 939254 * Speeding up resize down with sparse\_copy * Remove network\_api fallback for info\_cache from APIs * Improve unit test coverage per bug/934566 * Return 40x for flavor.create duplicate * refactor a conditional for testing and understanding * Disable usb tablet support for LXC * Add Nexenta volume driver * Improve unit test coverage per bug/934566 * nova-manage: Fix 'fixed list' * Add lun number to provider\_location in create\_volume \* Fixes bug 938876 * Fix WeightedHost * Fix instance stop in EC2 create\_image * blueprint host-aggregates: improvements and clean-up * Move get\_info to taking an instance * Support fixed\_ip range that is a subnet of the network block * xenapi: nova-volume support for multiple luns * Fix error that causes 400 in flavor create * Makes HTTP Location Header return as utf-8 as opposed to Unicode * blueprint host-aggregates: host maintenance * blueprint host-aggregates: xenapi implementation * Rework base file checksums * Avoid copying file if dst is a directory * Add 'nova-manage export auth' * Alter output format of volume types resources * Scheduler notifications added * Don't store connection pool in RpcContext * Fix vnc docs: novaclient now supports vnc consoles * Clarify use of deprecated md5 library * Extract get\_network in quantum manager * Add exception SnapshotIsBusy to be handled as VolumeIsBusy * Exception cleanup * Stop ignoring E202 * Support tox-based unittests * Add attaching state for Volumes * Fix quantum get\_all\_networks() signature (lp#936797) * Escape apostrophe in utils.xhtml\_escape() (lp#872450) * Backslash continuations (nova.api.openstack) * Fix broken method signature * Handle OSError which can be thrown when removing tmpdir. Fixes bug 883326 * Update api-paste.ini with new auth\_token settings * Imported Translations from Launchpad * Don't tell Qpid to reconnect in a busy loop * Don't inherit controllers from each other, we don't want the methods of our parent * Improve unit test coverage per bug/934566 * Setting access ip values on server create * nova.conf sample tool * Imported Translations from Launchpad * Add support for admin\_password to LibVirt * Add ephemeral storage to flavors api * Resolve bug/934566 * Partial fix for bug 919051 * fix pre\_block\_migration() interaction with libvirt cache * Query directly for just the ip * bug 929462: compile\_diagnostics in xenapi erroneously catches XenAPI.Failure * Use new style instance logging in compute api * Fix traceback running instance-usage-audit * Actual fix for bug 931608 * Support non-UTC timestamps in changes-since filter * Add additional information to servers output * Adding traceback to async faults * Pulls the main components out of deallocate * Add JSONFormatter * Allow file logging config * LOG.exception does not take an exc\_info keyword * InstanceNotFound exceptions for terminate\_instance now log a warning instead of throwing exceptions * bug 933620: Error during ComputeManager.\_poll\_bandwidth\_usage * Make database downgrade work * Run ovs-ofctl as root * 077\_convert\_to\_utf8: Convert \*all\* FK tables early * Fix bug 933147 Security group trigger notifications * Fixes nova-volume support for multiple luns * Normalize odd date formats * Remove all uniqueness constraints in migration 76 * Add RPC serialization checking, fix exposed problems * Don't send a SQLAlchemy model over rpc * Adds back e2fsck exit code checking * Syncs vncviewer mouse cursor when connected to Windows VMs * Backslash continuations (nova.tests) * The
security\_group name should be an XML attribute * Core modifications for future zones service * Remove instance\_get stubs from server action tests * removed unused method and added another test * Enables hairpin\_mode for virtual bridge ports, allowing NAT reflection * Removed zones from api and distributed scheduler * Fix bug 929427 * Tests for a melange\_ipam\_lib, who is missing tests * Create a flag for force\_to\_raw for images * Resolve bug/927714 -- get instance names from db * Fix API extensions documentation, bug 931516 * misc networking fixes * Print friendly message if no floating IPs exist * Catch httplib.HTTPException as well * Expand Quantum Manager Unit Tests + Associated Fixes * bw\_usage takes a MAC address now * Adding tests for NovaException printing * fix a syntax error in libvirt.attach\_volume() with lxc * Prevent Duplicate VLAN IDs * tests: fix LdapDNS to allow running test\_network in isolation * Fix the description of the --vnc\_enabled option * Different exit code in new versions of iscsiadm * improve injection diagnostics when nbd unavailable. Bug 755854 * remove unused nwfilter methods and tests * LOG.exception only works while in an exception handler * \_() works best with string literals * Remove unnecessary constructors for exceptions * Don't allow EC2 removal of security group in use * improve stale libvirt images handling fix. Bug 801412 * Added resize support for Libvirt/KVM * Update migration 076 so it supports PostgreSQL * Replace ApiError with new exceptions * Simple way of returning per-server security groups * Declare deprecated auth flag before its used * e2fsck needs -y * Standardize logging delaration and use * Changing nova-manage error message * Fix WADL/PDF docs referenced in describedby links * bug 931604: improve how xenapi RRD records are retrieved * Resolve bug/931794 -- add uuid to fake * Use new style instance logging in compute manager * clean pyc files before running unit tests * Adding logging for 500 errors * typo fix * run\_tests.sh fix * get\_user behavior in ldapdriver * Fsck disk before removing journal * Don't query database with an empty list for IN clause * Use stubs in libvirt/utils get\_fs\_info test * Adding (-x | --stop) option back to runner.py * Remove duplicate variable * Fixing a unicode related metadata bug * bug 931356: nova-manage prints libvirt related warnings if libvirt isn't installed * Make melange\_port an integer * remove a private duplicate function * Changes for supporting fast cloning on Xenserver. Implements blueprint fast-cloning-for-xenserver 1. use\_cow\_images flag is reused for xenserver to check if copy on write images should be used. 2. image-id is used to tag an image which has already been streamed from glance. 3. If cow is true, when an instance of an image is created for the first time on a given xenserver, the image is streamed from glance and copy on write disk is created for the instance. 4. For subsequent instance creation requests (of the same image), a copy on write disk is created from the base image that is already present on the host. 5. If cow is false, when an instance of an image is created for the first time on a host, the image is streamed from glance and its copy is made to create a virtual disk for the instance. 6. For subsequent instance creation requests, a copy of disk is made for creating the disk for the instance. 7. Snapshot creation code was updated to handle cow=true. Now there can be upto 3 disks in the chain. The base disk needs to be uploaded too. 8. 
Also added a cache\_images flag. Depending on whether the flag is turned on on not, images will be cached on the host * Completes fix for LP #928910 - libvirt performance * Add some more comments to \_get\_my\_ip() * remove unused and buggy function from S3ImageService * Fix minor typo in runner.py * Remove relative imports from scheduler/filters * Converting db tables to utf8 * remove all instance\_type db lookups from network * Remedies LP Bug #928910 - Use libvirt lookupByName() to check existence * Force imageRef to be a string * Retry on network failure for melange GET requests * Handle network api failures more gracefully * Automatic confirmation of resizes on libvirt * Fix exception by passing timeout as None * Extend glance retries to show() as well * Disable ConfigParser interpolation (lp#930270) * fix FlatNetworkTestCase.test\_get\_instance\_nw\_info * remove unused and buggy function from baremetal proxy * Remove unused compute\_service from images controller * Backslash continuations (nova.virt.baremetal) * fixed bug 928749 * Log instance id consistently inside the firewall code * Remove the last of the gflags shim layer * Fix disk\_config typo * Pass instance to log messages * Fix logging in xenapi vmops * Ensures that hostId's are unique * Fix confirm\_resize policy handling * optimize libvirt image cache usage * bug 929428: pep8 validation on all xapi plugins * Move translations to babel locations * Get rid of distutils.extra * Backslash continuations (network, scheduler) * Remove unnecessary use of LoopingCall in nova/virt/xenapi/vm\_utils.py * Stop using LoopingCall in nova.virt.xenapi\_conn:wait\_for\_task() * Handle refactoring of libvirt image caching * linux\_net: Also ignore shell error 2 from ip addr * Consistently update instance in nova/compute/manager.py * Use named logger when available * Fix deprecated warning * Add support for LXC volumes * Added ability to load specific extensions * Add flag to include link local in port security * Allow e2fsck to exit with 1 * Removes constraints from instance and volume types * Handle service failures during finish\_resize gracefully * Set port security for all allocated ips * Move connection pool back into impl\_kombu/qpid * pep8 check on api-paste.ini when using devstack * Allows test\_virt\_drivers to work when run alone * Add an alias to the ServerStartStop extension * tests.integrated fails with devstack * Backslash continuations (nova.virt) * Require newer versions of SA and SA-Migrate * Optimizes ec2 keystone usage and handles errors * Makes sure killfilter doesn't raise ValueError * Fixes volume snapshotting issues and tests * Backslash continuations (misc.) * nova-rootwrap: wait() for return code before exit * Fix bug 921814 changes handling of adminPass in API * Send image properties to Glance * Check return code instead of output for iscsiadm * Make swap default to vdb if there is no ephemeral * Handle --flagfile by converting to .ini style * Update cfg from openstack-common * Fix xvpvncproxy error in nova-all (lp#928489) * Update MANIFEST.in to account for moved schemas * Remove ajaxterm from Nova * Adding the request id to response headers. Again * Update migration to work when data already exists * Fix support for --flagfile argument * Implements blueprint heterogeneous-tilera-architecture-support * Add nova/tests/policy.json to tarball * Fix quantum client filters * Store the correct tenant\_id/project\_id * dont show blank endpoint headers * Pass in project\_id in ext. 
authorizer * Fix \_poll\_bandwidth\_usage if no network on vif * Fix nova.virt.firewall debugging message to use UUID * Fix debugging log message to print instance UUID * mkfs takes vfat, not fat32 * Pass partition into libvirt file injection * bug 924266: connection\_type and firewall\_driver flags mismatch * bug 927507: fix quantum manager get\_port\_by\_attachment * Fix broken flag in test\_imagecache * Don't write a dns directive if there are no dns records in /etc/network/interfaces * Imported Translations from Launchpad * Backslash continuations (nova.db) * Add initiator to initialize\_connection * Allows nova to read files as root * Re-run nova-manage under sudo if unable to read conffile * Fix status transition when reverting resize * Adds flags for href prefixes * X\_USER is deprecated in favor of X\_USER\_ID * Move cfg to nova.openstack.common * Use Keystone Extension Syntax for EC2 Creds * Remove duplicate instances\_path option * Delete swap VDI if not used * Raise ApiError in response to InstanceTypeNotFound * Rename inst in \_create\_image, and pass instance to log msgs * Fix bug #924093 * Make sure tenant\_id is populated * Fix for bug 883310 * Increased coverage of nova/auth/dbdriver.py to 100%. Fixes 828609 * Make crypto use absolute imports * Remove duplicate logging\_debug\_format option * blueprint nova-image-cache-management phase1 * Set rescue instance hostnames appropriately * Throw an user error on creating duplicate keypairs Fixes bug 902162 * Fixes uuid lookup in virtual interfaces extension * Add comments to injected keys and network config * Remove hard coded m1.tiny behavior * Fix disassociation of fixed IPs when using FlatManager * Provides flag override for vlan interface * remove auto fsck feature from file injection. Bug 826794 * DRYing up Volume/Compute APIRouters * Excise M2Crypto! * Add missing dev. Fixes LP: #925607 * Capture bandwidth usage data before resize * Get rid of DeprecationWarning during db migration * Don't block forever for rpc.(multi)call response * Optionally disable file locking * Avoid weird test error when mox is missing * fix stale libvirt images on download failure. Bug 801412 * cleanup test case to use integers not strings * Respect availability\_zone parameter in nova api * Fix admin password skip check * Add support for pluggable l3 backends * Improve dom0 and template VM avoidance * Remove Hyper-V support * Fix logging to log correct filename and line numbers * Support custom routes for extensions * Make parsing of usage stats from XS more robust * lockfile.FileLock already appends .lock * Ties quantum, melange, and nova network model * Make sure multiple calls to \_get\_session() aren't nested * bug 921087: i18n-key and local-storage hard-coded in xenapi * optimize libvirt raw image handling. Bug 924970 * Boto 2.2.x failes. 
Capping pip-requires at 2.1.1 * fixed bug 920856 * Expand policies for admin\_actions extension * Correct checking existence of security group rule * Optionally pass an instance uuid to log methods * remove unsupported ec2 extensions * Fix VPN ping packet length * Use single call in ExtendedStatus extension * Add mkswap to rootwrap * Use "display\_name" in "nova-manage vm list" * Fix broken devref docs * Allow for auditing of API calls * Use os.path.basename() instead of string splitting * Remove utils.runthis() * Empty connection pool after test\_kombu * Clear out RPC connection pool before exit * Be more explicit about emptying connection pool * fixes melange ipam lib * bug 923798: On XenServer the DomU firewall driver fails with NotImplementedError * Return instancesSet in TerminateInstances ec2 api * Fix multinode libvirt volume attachment lp #922232 * Bug #923865: (xenapi driver)instance creation fails if no guest agent is available for admin password configuration * Implementation of new Nova Volume driver for SolidFire ISCSI SAN * Handle keypair delete when not found * Add 'all\_tenants' filter to GET /servers * Use name filter in GlanceImageService show\_by\_name * Raise 400 if bad keypair data is provided * Support file injection on boot w/ Libvirt * Refactor away the flags.DEFINE\_\* helpers * Instances to be created with a bookmark link * fix \`nova-manage image convert\` exception * Added validation of name when creating a new keypair * Ignore case in policy role checks * Remove session arg from sm\_backend\_conf\_update * Remove session arguments from db.api * Add a note explaining why unhandled exceptions shouldn't be returned to users * Remove fetching of networks that weren't created via nova-manage * uses the instance uuid in libvirt by introducing a new variable 'uuid' for the used template instead of using a random uuid in libvirt * Fixing a rebuild race condition bug * Fixes bug 914418 * Remove LazySerializationMiddleware * Bug #921730: plugins/xenserver/xenapi/etc/xapi.d/plugins/objectstore no longer in use * Adding live migration server actions * bug 921931: fix Quantum Manager VM launch race condition * Fix authorization checks for simple\_usage.show * Simplify somewhat complicated reduce() into sum() * Ignore connection\_type when no instances exist * Add authorization checks to flavormanage extension * rootwrap: Fix KillFilter matching * Fix uptime calculation in simple\_usage * Fixing rebuilds on libvirt, seriously * Don't pass filter\_properties to managers * Fixing rebuilds on libvirt * Fix bug 921715 - 'nova x509-create-cert' fails * Return 403 instead of 401 when policies reject * blueprint host-aggregates: OSAPI extensions * blueprint host-aggregates: OSAPI/virt integration, via nova.compute.api * Fixes bug 921265 - i'nova-manage flavor create|list' * Remove unused flags.Help\*Flag * Convert vmwareapi code to UNIX style line endings * Blueprint xenapi-provider-firewall and Bug #915403 * Adds extension for retrieving certificates * Add os-start/os-stop server actions to OSAPI * Create nova cert worker for x509 support * Bug #916312: nova-manage network modify --network flag is inconsistent * Remove unused nova/api/mapper.py * Add nova.exception.InvalidRPCConnectionReuse * Add support for Qpid to nova.rpc * Add HACKING compliance testing to run\_test.sh * Remove admin\_only ext attr in favor of authz * usage: Fix time filtering * Add an API extension for creating+deleting flavors * extensions: Allow registering actions for create + delete * Explicitly encode string
to utf8 before passing to ldap * Make a bunch of dcs into single-entry lists * Abstract out \_exact\_match\_filter() * Adds a bandwidth filter DB call * KVM and XEN Disk Management Parity * Tweak api-paste.ini to prepare for a devstack change * Remove deprecated serialization code * Add affinity filters updated to use scheduler\_hints and have non-douchey names * Do not output admin\_password in debug logs * Handle error in associate floating IP (bug 845507) * Brings back keystone middleware * Remove sensitive info from rpc logging * Error out instance on set password failure * Fixed limiting for flavors * Adds availability zone filter * Fixes nova-manage fixed list * API version check cleanups * ComputeNode Capacity support * blueprint host-aggregates: maintenance operations to host OSAPI exts * Add a specific filter for kill commands * Fix environment passing in DnsmasqFilter * Cleanups for rootwrap module * Fix 'nova-manage config list' * Add context and request spec to filter\_properties * Allow compute manager prep\_resize to accept kwargs * Adds isolated hosts filter * Make start\_instance cast directly to compute host * Refactor compute api messaging calls to compute manager * Refactor test\_scheduler into unit tests * Forgot to update chance scheduler for ignore\_hosts change * Add SchedulerHints compute extension * Add floating IP support to Quantum Manager * Support filter based on CPU core (over)allocation * bug 917397 * Add option to force hosts to scheduler * Change the logic for deleting a record dns\_domains * Handle FlavorNotFound on server list w/ filter * ERROR out instance if unrescue fails * Fix xenapi rescue without swap * Pull out ram\_filter into a separate filter * pass filter\_properties into scheduling requests for run\_instance * Fixes bug #919390 - Block Migration fails when keystone is un use * Fix nova-manage floating list (fixes bug 918804) * Imported Translations from Launchpad * scheduler host\_manager needs service for filters * Allow Quantum Manager to run in "Flat" mode * aws/ec2 api validation * Fix for bug 918502 * Remove deprecated extension code * Validating image id for rebuild * More cleanup of Imports to match HACKING * chmod nova-logspool * nova/network: pass network\_uuid to linuxnet\_interface\_driver and vif driver * Clean up crypto.py * Fix missing imports and bad call caught by pyflakes * Clarify error messages for admin passwords * Log uuid when instances fail to spawn * Removed references to FLAGS.floating\_ip\_dns\_domains * Removed some vestigial default args from DNS drivers * Allow config of vncserver\_proxyclient\_address * Rename 'zone' to 'domain.' * disk\_config extension now uses OS prefix * Do not write passwords to verbose logs. 
bug 916167 * Automatically clean up DNS when a floating IP is deallocated * Fix disassociating of auto assigned floating ips * Cleanup Imports to match HACKING guidelines * Added an LDAP/PowerDNS driver * Add dns domain manipulation to nova * fixes bug lp914962 * Fixed bug 912701 * Fix bug #917615 * Separate scheduler host management * Set instance\_ref property when creating snapshots * Implements blueprint vnc-console-cleanup * Rebuild/Resize support for disk-config * Allow instances in 'BUILD' state to be deleted * Stop allowing blank image names on snapshot/backup * Only update if there are networks to update * Drop FK constraint if it exists in migration 064 * Fix an error that prevents message from getting substituted * blueprint host-aggregates * Add missing scripts to setup.py (lp#917676) * Fixes bug 917128 * Clean up generate fingerprint * Add policy checking to nova.network.api.API * Add default policy rule * Super is not so super * Fixed the log line * Add tests for volume list and detail through new volume api, and fix error that the tests caught * Typofix for impl\_kombu * Refactoring logging \_log function * Update some extensions (1) * DECLARE osapi\_compute\_listen\_port for auth manager * Increase robustness of image filtering by server * Update some extensions (2) * Implement BP untie-nova-network-models * Add ipv4 and ipv6 validation * greenlet version inconsistency * Add policy checks to Volume.API * Remove unused extension decorator require\_admin * Fix volume api typo * Convert nova.volume.api.API to use volume objects * Remove a whole bunch of unused imports * have all quota errors return an http 413 * This import is not used * Refactor request and action extensions * Prefixing the request id with 'req-' to decrease confusion when looking at logs * Fixing a bug that was causing the logging to display the context info for the wrong user. 
bug: 915608 * Modify the fake ldap driver to fix compatibility * Create an instance DNS record based on instance UUID * Implements blueprint separate-nova-volumeapi * Implement more complete kombu reconnecting * First implementation of bp/live-migration-resource-calc * Remove 'status' from default snapshot properties * Clean up disk\_format mapping in xenapi.vm\_utils * Remove skipping of 2 tests * Make authz failures use proper response code * Remove compute.api.API.add\_network\_to\_project * Adds test for local.py * Fix policy import in nova.compute.api * Remove network\_api from Servers Controller * minor fix in comment * Updates linux\_net to ignore some shell errors * Add policy checks to Compute.API * Ensure nova is compatible with WebOb 1.2+ * improve handling of the img\_handlers config list * Unbreak start instance and fixes bug 905270 * catch InstanceInvalidState in more places * Fix some cfg test case naming conflicts * Remove 'location' from GlanceImageService * Makes common/cfg.py raise AttributeError * Call to instance\_info\_cache\_delete to use uuid * Bug #914907: register\_models in db/sqlalchemy/models.py references non-existent ExportDevice * Update logging in compute manager to use uuids * Do not overwrite project\_id from request params * Add optional revision field to version number * Imported Translations from Launchpad * nova-manage floating ip fixes * Add a modify function to the floating ip dns api * Adding the request id to response headers * Add @utils.deprecated() * Blueprint xenapi-security-groups * Fix call to compute\_api.resize from \_migrate * Fix metadata mapping in s3.\_s3\_parse\_manifest * Fix libguestfs operation with specified partitions * fix reboot\_instance typo * Fix bad test cases in smoketest * fix bug 914049: private key in log * Don't overwrite local context on elevated * Bug 885267: Fix GET /servers during instance delete * Adds support for floating ip pools * Adds simple policy engine support * Refactors utils.load\_cached\_file * Serialization, deserialization, and response code decorators * Isolate certain images on certain hosts * Workaround bug 852095 without importing mox * Bug #894683: nova.service does not handle attribute specific exceptions and client hangs * Bug #912858: test\_authors\_up\_to\_date does not deal with capitalized names properly * Adds workaround check for mox in to\_primitive * preload cache table and keep it up to date * Use instance\_properties in resize * Ensure tests are python 2.6 compatible * Return 409s instead of 500s when deleting certain instances * Update HACKING.rst * Tell users what is about to be installed via sudo * Fix LP912092 * Remove small unneeded code from impl\_kombu * Add missing space between XML attributes * Fix except format to match HACKING * Set VLAN MTU size when creating the vlan interface * Add instance\_name field to console detail command which will give the caller the necessary information to actually connect * Fix spelling of variable * Remove install\_requires processing * Send event notifications for suspend and resume * Call mkfs with the correct order of arguments * Fix bug 901899 * Fix typo in nova/rootwrap/compute.py. 
Fixes LP: #911880 * Make quantum\_use\_dhcp falsifiable * Fixing name not defined * PEP8 type comparison cleanup * Add cloudpipe/vpn api to openstack api contrib * Every string does not need to be internationalized * Adds running\_deleted\_instance\_reaper task * libvirt: implements boot from ISO images * Unused db.api cleanup * use name gateway\_v6 instead of gateway6 * PEP8 remove direct type comparisons * Install a good version of pip in the venv * Bug #910045: UnboundLocalError when failing to get metrics from XenAPI hosts * re-raising exceptions fix * use dhcp\_lease\_time for dnsmasq. Fix bug 894218 * Clean up pylint errors in top-level files * Ensure generated passwords meet minimum complexity * Fixing novaclient\_converter NameError * Bug 820059: bin/nova-manage.py VpnCommands.spawn calls non-existent method VpnCommands.\_vpn\_for - fixed * Bug 751229: Floating address range fixed * Brings some more files up to HACKING standards * Ensure queue is declared durable so messages aren't dropped * Create notification queues as durable * Adding index to instances project\_id column * Add an API for associating floating IPs with DNS entries * 'except:' to 'except Exception:' as per HACKING * Adds EC2 ImportKeyPair API support * Take the availability zone from the instance if available * Update glance Xen plugin w/ purge props header * Converting zones into true extension * Converting /users to admin extension * Add a DECLARE for dhcp\_domain flag to metadata handler * Support local target for Solaris, use 'safe' command-line processing * Add 'os-networks' extension * Converting accounts resource to admin extension * Add exit\_code, stdout, stderr etc to ProcessExecutionException * Fixes LP bug #907898 * Switch extension namespace * Refactor Xen Vif drivers. Fixes LP907850 * Remove code in migration 064 to drop an fkey that does not exist. Fixes LP bug #907878 * Help clarify rpc API with docs and a bit of code * Use SQLAlchemy to drop foreign key in DB migrate * Move createBackup server action into extension * Bug#898257 support handling images with libguestfs * Bug#898257 abstract out disk image access methods * Move 'actions' subresource into extension * Make os-server-diagnostics extension admin-only * Remove unneeded broken test case * Fix spelling typos in comments * Allow accessIPv4 and accessIPv6 on rebuild action * Move 'diagnostics' subresource to admin extension * Cleaning up imports in compute and virt * Cleaning up imports in nova.api * Make reroute\_compute use functools.wraps.
Fixes LP bug #906945 * Removing extra code from servers controller * Generate instance faults when instance errors * Clarify NoValidHost messages * Fix one last bug in os-console-output extension * Fix os-console-output extension integration * Set Location header in server create and rebuild actions * Consistently use REBUILDING vm\_state * Improve the minidns tests to handle zone matching * Remove unused FLAGS.block\_size * Make UUID format checking more correct * Set min\_ram and min\_disk on snapshot * Add support for port security to QuantumManager * Add a console output action to servers * Creating mechanism that loads Admin API extensions * Document return type from utils.execute() * Renamed the instance\_dns\_driver to dns\_driver for more general use * Specify -t rsa when calling ssh-keygen * create\_export and ensure\_export should pass up the return value, to update the database * Imported Translations from Launchpad * avoid error and trace on dom.vcpus() in lxc * Properly passes arg to run\_iscsiadm to fix logout * Makes disassociate by timeout work with multi-host * Call get\_instance\_nw\_info with elevated context, as documented in nova/network/manager.py * Adds missing joinedload for vif loading * missing comments about extensions to ec2 * Pull resource extensions into APIRouter * IPAM drivers aren't homogenous bug 903230 * use env to find 'false'. Fix for OS X * Fix scheduler error handler * Starting work on exposing service functionality * Bugfix for lp904932 * Ensure fkey is dropped before removing instance\_id * Fixes bug 723235 * nova.virt.libvirt.firewall: set static methods * Expose Asynchronous Fault entity in the OSAPI * Fix nova-manage flags declaration * Remove useless flags declaration * Remove useless input\_chain flags * Make XenAPI agent configuration synchronous * Switch disk\_config extension to use one DB query * Update utils.execute so that check\_exit\_code handles booleans. Fixes LP bug #904560 * Rename libvirt\_uri to uri * Make libvirt\_uri a property * Making pep8 output less verbose * Refactors handling of detach volume * Fixes bug 887402 * Bug 902626 * Make various methods static * Pass additional information from nova to Quantum * Refactor vm\_state and task\_state checking * Updates OVS rules applied to IPv4 VIFs * Follow-on to I665f402f to convert rxtx\_quota to rxtx\_factor in nova-manage and a couple of tests * Make sure the rxtx\_cap is used to set qos info * Fix some errors found by pychecker * Fix tgtadm off by one error. Fixes bug #871278 * Sanitize EC2 manifests and image tarballs * floating-ip: return UUID of instance rather than ID * Renaming instance\_actions.instance\_id column to instance\_uuid. blueprint: internal-uuids * Fix for bug 902175 * fixed typos. 
removed an unused import * Vm state management and error states * Added support for creating nova volume snapshots using OS API * Fix error when subnet doesn't have a cidr set * bug 899767: fix vif-plugging with live migration * Fixing snapshot failure task\_state * Imported Translations from Launchpad * Moves find config to utils because it is useful * fixed\_ips by vif does not raise * Add templates for selected resource extensions * Fix network forwarding rule initialization in QuantumManager * \_check\_image\_size returns are consistent * Fixed the perms on the linux test case file so that nose will run it * Add preparation for asynchronous instance faults * Add templates for selected resource extensions * Use more informative message when violating quota * Log it when we get a lock * removing TODO as we support Windows+XenServer and have no plans to support quiesce or VSS at the moment * Adds network model and network info cache * Rename .nova-venv to .venv * revert using git for novaclient * Port nova.flags to cfg * Make cfg work on python 2.6 * Relax novaclient and remove redis dependency * Relax dependency on boto 1.9b and nova-adminclient * Make QuantumManager no longer depend on the projects table * Imported Translations from Launchpad * Fix for bug 901459 * Updated the test runner module with a sys.path insert so that tests run in and outside a virtual environment * Add ability to see deleted and active records * Set instance['host'] to the original host value on revert resize * Fix race condition in XenAPI when using .get\_all * Clean up snapshot metadata * Handle the 'instance' half of blueprint public-and-private-dns * Refactors periodic tasks to use a decorator * Add new cfg module * Remove extra\_context support in Flags * A more secure root-wrapper alternative * Remove bzr related code in tests/test\_misc * Change cloudServersFault to computeFault * Update associate\_floating\_ip to use instance objs * vm\_state:=error on driver exceptions during resize * Use system M2Crypto package on Oneiric, bug 892271 * Update compute manager so that finish\_revert\_resize runs on the source compute host. Fixes bug #900849 * First steps towards consolidating testing infrastructure * Remove some remnants of ChangeLog and vcsversion.py generation * Pass '-r' option to 'collie cluster status' * Remove remnants of babel i18n infrastructure * Fixes a typo preventing attaching RBD volumes * Remove autogenerated pot file * Make admin\_password keyword in compute manager run\_instance method match what we send in the compute API. Fixes bug #900591 * remove duplicate netaddr in nova/utils * cleanup: remove .bzrignore * add index to instance\_uuid column in instances * Add missing documentation for shared folder issue with unit tests and Python lock file * Updated nova-manage to work with uuid images Fixes bug 899299 * Add availabity\_zone to the refresh list * Document nova-tarball Jenkins job * Adds extension documentation for some but not all extensions * Add templates for selected resource extensions * EC2 rescue/unrescue is broken, bug 899225 * Better exception handling during run\_instance * Implement resize down for XenAPI * Fix for EC2 API part of bug 897164 * Remove some unused imports from db * Replacing instance id's in in xenapi.vmops and the xen plugin with instance uuids. The only references to instance id's left are calls to the wait\_for\_task() method. I will address that in another branch. 
blueprint: internal-uuids * Convert get\_lock in compute to use uuids * Fix to correctly report memory on Linux 3.X * Replace more cases of instance ids with uuids * Make run\_instance only support instance uuids * Updates simple scheduler to allow strict availability\_zone scheduling * Remove VIF<->Network FK dependancy * Adds missing image\_meta to rescue's spawn() calls * Bug #898290: iSCSI volume backend treats FLAGS.host as a hostname * split rxtx\_factor into network and instance\_type * Fixing get\_info method implementations in virt drivers to accept instance\_name instead of instance\_id. The abstract class virt.ComputeDriver defines get\_info as: def get\_info(self, instance\_name). blueprint: internal-uuids * Fixes bug 767947 * Remove unused ec2.action\_args * Fix typo: priviledges -> privileges * Bug #896997: nova-vncproxy's flash socket policy port is not configurable * Convert compute manager delete methods to objects * Removing line dos line endings in vmwareapi\_conn.py * reboot & rebuild to use uuids in compute manager * Fix for bug 887712 * Add NAT/gateway support to QuantumManager * Fix QuantumManager update\_dhcp calls * Fix RPC responses to allow None response correctly * Use uuids for compute manager agent update * power\_on/power\_off in compute manager to use uuids * Use uuids for file injection * removed logic of throwing exception if no floating ip * Adding an install\_requires to the setup call. Now you can pip install nova on a naked machine * Removing obsolete bzr-related clauses in setup.py * Makes rpc\_allocate\_fixed\_ip return properly * Templatize extension handling * start/stop in compute manager to use uuids * Updating {add,remove}\_fixed\_ip\_from\_instance in compute.api and compute.manager to use instance uuid instead of instance id. blueprint internal-uuids * Use instance uuids for consoles and diagnostics * Fixes bug 888649 * Fix Bug #891718 * Bug #897091: "nova actions" fails with HTTP 400 / TypeError if a server action has been performed * Bug #897054: stack crashes with AttributeError on e.reason if the server returns an error * Refactor a few things inside the xenapi unit tests * New docs: unit tests, Launchpad, Gerrit, Jenkins * Fix trivial fourth quote in docstring * Fix deprecation warnings * Fix for bug 894431 * Remove boot-from-volume unreachable code path (#894172) * reset/inject network info in compute to use uuid * Updating set\_admin\_password in compute.api and compute.manager to use instance uuids instead of instance ids. Blueprint internal-uuids * rescue/unrescue in compute manager to use uuids * Updated development environment docs * Call df with -k instead of -B1 * Make fakelibvirt python2.6 compatible * Clean up compute api * Updating attach/detach in compute.api and compute.manager to use instance uuid instead of instance id. blueprint internal-uuids * Change compute API.update() to take object+params * Use XMLDictSerializer for resource extensions * Updating {add,remove}\_security\_group in compute.api to use instance uuids instead of instance ids. 
blueprint internal-uuids * Extend test\_virt\_driver to also test libvirt driver * poll\_rebooting\_instances passes an instance now * Revert "Fixes bug 757033" * Put instances in ERROR state when scheduler fails * Converted README to RST format * Workaround xenstore race conditions * Fix a minor memory leak * Implement schedule\_prep\_resize() * Fixes bug 886263 * snapshot/backup in compute manager to use uuids * Fixes bug 757033 * Converting tests to use v2 * lock/unlock in compute manager to use uuids * suspend/resume in compute manager to use uuids * Refactor metadata code out of ec2/cloud.py * pause/unpause in compute manager to use uuids * Creating new v2 namespace in nova.api.openstack * Add a "libvirt\_disk\_prefix" flag to libvirt driver * Added RST docs on how to use gettext * Refactoring/cleanup of some view builders * Convert remaining calls to use instance objects * Make run instances respect availability zone * Replacing disk config extension to match spec * Makes sure gateways forward properly * Convert security\_group calls to use instance objs * Remove hostname update() logic in compute.API * Fixes bug 890206 * Follow hostname RFCs * Reference Ron Pedde's cleanup script for DevStack * Remove contrib/nova.sh and other stale docs * Separate metadata api into its own service * Add logging, error handling to the xenstore lib * Converting lock/unlock to use instance objects * Deepcopy optparse defaults to avoid re-appending multistrings (#890489) * install\_venv: apply eventlet patch correctly with python 2.7 (#890461) * Fix multistring flags default handling (#890489) * Fixing image create in S3ImageService * Defining volumes table to allow FK constraint * Converting network methods to use instance objects * Handle null ramdisk/kernel in euca-describe-images * Bind engine to metadata in migration 054 * Adding downgrade for migration 57 plus test * Log the URL to an image\_ref and not just the ID * Converting attach\_volume to use instance object * Converting rescue/unrescue to use instance objects * Converting inject\_file to use instance objects * Bug #888719: openvswitch-nova runs after firstboot scripts * Bug #888730: vmwareapi suds debug logging very verbose * Converting consoles calls to use instance objects * Converting fixed ip calls to use instance objects * Convert pause/unpause, sus/res to use instance obj * fix rebuild sha1 not string error * Verify security group parameters * Converting set password to use instance objects * Converting snapshot/backup to use instance objects * Refactor of QuotaError * Fix a notification bug when creating instances * Converting metadata calls to use instance objects * nova-manage: exit with status 1 if an image registration fails * Converting start and stop to use instance objects * Converting delete to use instance objects * Capture exceptions happening in API layer * Removed some old cruft * Add more error handling to glance xenapi plugin * Fixes bug 871877 * Replace libvirt driver's use of libxml2 with ElementTree * Extend fake image service to let it hold image data * Bug #887805 Error during report\_driver\_status(): 'LibvirtConnection' object has no attribute '\_host\_state' * More spelling fixes inside of nova * Fixes LP878319 * Fix exception reraising in volume manager * Adding Chuck Short to .mailmap * Undefine libvirt saved instances * Split compute api/manager tests within module * Workaround for eventlet bug with unit tests in RHEL6.1 * Apply M2Crypto fix for all Fedora-based distributions * Fix failing libvirt test (bug 
888250) * Spelling fixes in nova/api comments * Get MAC addresses from Melange * Refactor logging\_error into utils * Converting rebuild to use instance objects * Converting resize to use instance objects * Converting reboot to use instance objects * Reducing the number of compute calls to Glance * Remove duplicate method (2) * Move tests for extensions to contrib directory * Remove duplicate method * Remove debugging print * Adds extended status information via the Admin API to the servers calls * Wait until the instance is booted before setting VCPU\_params * changes logging reference in zone\_manager.py * Exception cleanup in scheduler * Fixing create\_vbd call per VolumeHelper refactoring * Switch glance XenAPI plugin to use urllib2 * Blueprint lasterror * Move failed instances to error state * Adding task\_states.REBOOTING\_HARD * Set task state to UPDATING\_PASSWORD when needed * Clean up docstrings for faults.Fault and it's usage * Fix typo in docstring * Add DHCP support to the QuantumManager and break apart dhcp/gateway * Change network delete to delete by uuid or cidr * Bug #886353: Faults raised by OpenStack API Resource handlers fail to be reported properly * Define faults.Fault.\_\_str\_\_ * Speed up tests a further 35 seconds * Removing duplicate kernel/ramdisk check in OSAPI * Remove unnecessary image list in OSAPI * Add auto-reloading JSON config file support to scheduler * Change floating-snat to float-snat * Allows non-admin users to use simple scheduler * Skip libvirt tests when libvirt not present * Correcting libvirt tests that were failing * Fix for launchpad bug #882568 * Gracefully handle Xen resize failure * Don't update database before resize * fix bug 816630 * Set nova-manage to executable Fixes LP885778 * Fixing immediate delete after boot on Libvirt * exception.KeypairNotFound usage correction * Add local storage of context for logging * Reserve memory/disk for dom0/host OS * Speed up tests yet another 45 seconds * APIs should not wait on scheduler for builds in single zone deployment * Added some documentation to db.api module docstring * Updated rst docs to include threading model * Adds documentation for Xen Storage Manager * Xen Storage Manager Volume Driver * Drop extra XML tests and remove \_json suffix from names * Fix empty group\_id to be considered invalid * Stop nova-ajax-console-proxy configuring its own logging * Bug 884863: nova logs everything to syslog twice * Log the exception when we get one * Use fat32 for Windows, linux-swap for Linux swap partitions * Fix KeyError when passed unknown format of time * flatten distributed scheduler * Bug #884534: nova-ajax-console-proxy crashes on shutdown * Bug 884527: ajax\_console\_proxy\_port needs to be an integer * Too much information is returned from POST /servers * Disable SQLite synchronous mode during tests * Creating uuid -> id mapping for S3 Image Service * Fix 'begining' typo in system usage data bug 884307 * Fixes lp883279 * Log original dropped exception when a new exception occurs * Fix lp:861160 -- newly created network has no uuid * Bug #884018: "stack help" prints stacktrace if it cannot connect to the server * Optional --no-site-packages in venv * fixes bug 883233. 
Added to Authors fix typo in scheduler/driver.py assert\_compute\_node\_has\_enough\_memory * Updated NoAuth to account for requests ending in / * Retry failed SQL connections (LP #876663) * Removed autogenerated API .rst files * Fix to a documentation generation script * Added code to libvirt backend to report state info * Adding bulk create fixed ips. The true issue here is the creation of IPs in the DB that are not currently used(we are building the entire block). This fix is just a bandaid, but it does cut ~25 seconds off of the quantum tests on my laptop * Fix overzealous use of faults.Fault() wrapper * Revert how APIs get IP address info for instances * Support server uuids with security groups * Support using server uuids when accessing consoles * Adding support for retrying glance image downloads * Fix deletion of instances without fixed ips * Speed up test suite by 20 seconds * Removed callback concept on VM driver methods: * Fix file injection for OSAPI rebuilds. Fixes 881649 * Replaces all references to nova.db.api with nova.db * venv: update distribute as well as pip * Fix undefined glance\_host in get\_glance\_client * Fix concurrency of XenAPI sessions * Server metadata must support server uuids * Add .gitreview config file for gerrit * Convert instancetype.flavorid to string * Make sure networks returned from get\_instance\_nw\_info have a label * Use UUIDs instead of IDs for OSAPI servers * Improve the liveness checking for services * Refactoring of extensions * Moves a-zone scheduling into simple scheduler * Adds ext4 and reiserfs to \_mount\_filesystem() * Remove nova dependency on vconfig on Linux * Upgrade pip in the venv when we build it * Fixes bug 872459 * Repartition and resize disk when marked as managed * Remove dead DB API call * Only log instance actions once if instance action logging is enabled (now disabled by default) * Start switching from gflags to optparse * Don't leak exceptions out to users * Fix EC2 test\_cloud timing issues * Redirects requests from /v#.# to /v#.#/ * Chain up to superclass tearDown in ServerActionsTest * Updated RST docs: bzr/launchpad -> git/github * Refactoring nova.tests.api.openstack.test\_flavors * Refactoring image and server metadata api tests * Refactoring nova.tests.api.openstack.test\_servers * Refactoring nova.tests.api.openstack.test\_images * Utility script that makes enforcing PEP8 within git's pre-commit hook as easy as possible * Add XML templates * Remove OSAPI v1.0 * Remove unused flag\_overrides from TestCase * Cancel any clean\_reboot tasks before issuing the hard\_reboot * Makes snapshots work for amis. Fixes bug 873156 * Xenapi driver can now generate swap from instance\_type * Adds the ability to automatically issue a hard reboot to instances that have been stuck in a 'rebooting' state for longer than a specified window * Remove redundant, dead code * Added vcpu\_weight to models * Updated links in the README that were out of date * Add INPUT chain rule for EC2 metadata requests (lp:856385) * Allow the user to choose either ietadm or tgtadm (lp:819997) * Remove VolumeDriver.sync\_exec method (lp:819997) * Adds more usage data to Nova's usage notifications * Fixes bug 862637 -- make instance\_name\_template more flexible * Update EC2 get\_metadata calls to search 'deleted': False. Fixes nova smoke\_tests!!! 
* Use new ip addr del syntax * Updating HACKING to make split up imports into three blocks * Remove RateLimitingMiddlewareTest * Remove AoE, Clean up volume code * Adds vcpu\_weight column to instance\_types table and uses this value when building XenServer instances * Further changes to the cleaner * Remove duplicated functions * Reference orphaned\_instance instead of instance * Continue to the next iteration of the loop if an instance is not found * Explicit errors on confirm/revertResize failures * Include original exception in ClassNotFound exception * Enable admin access to EC2 API server * Make sure unknown extensions return 404 * Handle pidfile exception for dnsmasq * Stop returning correct password on api calls * Restructure host filtering to be easier to use * Add support for header version parameter to specify API version * Set error state on spawn error + integration test * Allow db schema downgrades * moved floating ip db access and sanity checking from network api into network manager added floating ip get by fixed address added fixed\_ip\_get moved floating ip testing from osapi into the network tests where they belong * Adds a script that can automatically delete orphaned VDIs. Also had to move some flags around to avoid circular imports * Improve access check on images * Updating image progress to be more granular. Before, the image progress had only 2 states, 0 and 100. Now it can be 0, 25, 50 or 100 * Deallocate ip if build fails * Ensure non-default FLAGS.logfile\_mode is properly converted to an octet * Moving admin actions to extension * Fixes bug 862633 -- OS api consoles create() broken * Adds the tenant id to the create images response Location header Fixes bug 862672 * Fixes bug 862658 -- ec2 metadata issue getting IPs * Added ==1.0.4 version specifier to kombu in pip-requires to ensure tests pass in a clean venv * bug lp845714 * install\_venv: pip install M2Crypto doesn't work on Fedora * install\_venv: add support for distro specific code * install\_venv: remove versioned M2Crypto dependency * install\_venv: don't use --no-site-packages with virtualenv * install\_venv: pass the --upgrade argument to pip install * install\_venv: refactor out pip\_install helper * Replace socat with netcat * api.ec2.admin unit tests * Fixes Bug #861293 nova.auth.signer.Signer now honors the SignatureMethod parameter for SHA1 when creating signatures * Enforce snapshot cleanup * bug 861310 * Change 'recurse\_zones' to 'local\_zone\_only' * Fixes euca-describe-instances failing or not showing IPs * Fixes a test failure in master * Fixed bug lp850602. Adding backing file copy operation on kvm block migration * Add nova-all to run all services * Snapshots/backups can no longer happen simultaneously. Tests included * Accept message as sole argument to NovaException * Use latest version of SQLAlchemy * Fix 047 migration with SQLAlchemy 0.7.2 * Beef up nova/api/direct.py tests * Signer no longer fails if hashlib.sha256 is not available. test\_signer unit test added * Make snapshots private by default * use git config's review.username for rfc.sh * Raise InsufficientFreeMemory * Adding run\_test.sh artifacts to .gitignore * Make sure options is set before checking managed\_disk setting. Fixes bug 860520 * compute\_api create\*() and schedulers refactoring * Removed db\_pool complexities from nova.db.sqlalchemy.session. Fixes bug 838581 * Ensure minRam and minDisk are always integers * Call endheaders when auth\_token is None. 
Fixes bug 856721 * Catch ImageNotFound on image delete in OSAPI * Fix the grantee group loading for source groups * Add next links to images requests * put fully qualified domain name in local-hostname * Removing old code that snuck back in * Makes sure to recreate gateway for moved ip * Fix some minor issues due to premature merge of original code * \* Rework osapi to use network API not FK backref \* Fixes lp854585 * Allow tenant networks to be shared with domain 0 * Use ovs-vsctl iface-to-br to look up the bridge associated with the given VIF. This avoids assuming that vifX.Y is attached to xenbrY, which is untrue in the general case * Made jenkins email pruning more resilient * Fixing bug 857712 * Adds disk config * Adding xml schema validation for /versions resource * Fix bug 856664 overLimit errors now return 413 * Don't use GitPython for authors check * Fix outstanding pep8 errors for a clean trunk * Add minDisk and minRam to OSAPI image details * Fix rfc.sh's check for the project * Add rfc.sh to help with gerrit workflow * This patch adds flavor filtering, specifically the ability to flavor on minRam, minDisk, or both, per the 1.1 OSAPI spec * Add next links for server lists in OSAPI 1.1. This adds servers\_links to the json responses, and an extra atom:link element to the servers node in the xml response * Update exception.wrap\_exception so that all exceptions (not just Error and NovaException types) get logged correctly * Merging trunk * Adding OSAPI tests for flavor filtering * This patch adds instance progress which is used by the OpenStack API to indicate how far along the current executing action is (BUILD/REBUILD, MIGRATION/RESIZE) * Merging trunk * Fixes lp:855115 -- issue with disassociating floating ips * Renumbering instance progress migration * Fixing tests * Keystone support in Nova across Zones * trunk merge fixup * Fix keys in ec2 conversion to make sure not to use unicode * Adds an 'alternate' link to image views per 3.10 and 3.11 of http://docs.openstack.org/cactus/openstack-compute/developer/openstack-compute-api-1.1/content/LinksReferences.html * Typo * Fixing tests * Fixing tests * make sure kwargs are strings and not unicode * Merging trunk * Adding flavor filtering * Instance deletions in Openstack are immediate. 
This can cause data to be lost accidentally * Makes sure ips are moved on the bridge for nodes running dnsmasq so that the gateway ip is always first * pep8 * add tests and fix bug when no ip was set * fix diverged branch * migration conflict fixed * clean up based on cerberus review * clean up based on cerberus review * Remove keystone middlewares * fix moving of ips on flatdhcp bridge * Merged trunk * merged trunk * update floating ips tests * floating ip could have no project and we should allow access * actions on floating IPs in other projects for non-admins should not be allowed * floating\_ip\_get\_by\_address should check user's project\_id * Pep8 fixes * Merging trunk * Refactoring instance\_type\_get\_all * remove keystone url flag * merge trunk, fix conflicts * remove keystone * Include 'type' in XML output * Minor cleanup * Added another unit test * Fixed unit tests with some minor refactoring * Fix the display of swap units in nova manage * Refactored alternate link generation * pep8 fixes * Added function to construct a glance URL and unit test * merge from trunk * convert images that are not 'raw' to 'raw' during caching to node * show swap in Mb in nova manage * Address Soren's comments: \* clean up temp files if an ImageUnacceptable is going to be raised Note, a qemu-img execution error will not clean up the image, but I think thats reasonable. We leave the image on disk so the user can easily investigate. \* Change final 2 arguments to fetch\_to\_raw to not start with an \_ \* use 'env' utility to change environment variables LC\_ALL and LANG so that qemu-img output parsing is not locale dependent. Note, I considered the following, but found using 'env' more readable out, err = utils.execute('sh', '-c', 'export LC\_ALL=C LANG=C && exec "$@"', 'qemu-img', 'info', path) * Add iptables filter rules for dnsmasq (lp:844935) * create disk.local the same way ephemerals are created (LP: #851145) * merge with trunk r1601 * fix call to gettext * Fixed --uuid network command in nova-manage to desc to "uuid" instead of "net\_uuid" * removes warning set forth in d3 for deprecated setting of bridge automagically * Update migration 047 to dynamically lookup the name of the instance\_id fkey before dropping it. We can't hard code the name of the fkey since we didn't name it explicitly on create * added to authors cuz trey said I cant patch otherwise! * Fixed --uuid network command in nova-manage to desc to "uuid" instead of "net\_uuid" * merged with trunk * Update migration 047 to dynamically lookup the name of the instance\_id fkey before dropping it. We can't hard code the name of the fkey since we didn't name it explicitly on create * oops, add project\_id and 'servers' to next links * Fixes migration for Mysql to drop the FK on the right table * Reverted some changes to instance\_get\_all\_by\_filters() that was added in rev 1594. An additional argument for filtering on instance uuids is not needed, as you can add 'uuid: uuid\_list' into the filters dictionary. Just needed to add 'uuid' as an exact\_match\_filter. This restores the filtering to do a single DB query * fix syntax error in exception, remove "Dangerous!" comment * merged trunk and resolved conflict * run the alter on the right table * fix unrelated pep8 issue in trunk * use dictionary format for exception message * fix a test where list order was assumed * Removed the extra code added to support filtering instances by instance uuids. Instead, added 'uuid' to the list of exact\_filter\_match names. 
Updated the caller to add 'uuid: uuid\_list' to the filters dictionary, instead of passing it in as another argument. Updated the ID to UUID mapping code to return a dictionary, which allows the caller to be more efficient... It removes an extra loop there. A couple of typo fixes * Reworked the export command to be nova-manage shell export --filename=somefile * Adds the ability to automatically confirm resizes after the \`resize\_confirm\_window\` (0/disabled by default) * use '\_(' for exception messages * PEP8 cleanup * convert images that are not 'raw' to 'raw' during caching to node * now raising instead of setting bridge to br100 and warning as was noted * \* Remove the foreign key and backrefs tying vif<->instance \* Update instance filtering to pass ip related filters to the network manager \* move/update tests * Adds an optional flag to force dhcp releases on instance termination. This allows ips to be reused without having to wait for the lease to timeout * remove urllib import * Fixing case where OSAPI server create would return 500 on malformed body * Fix the issue with the new dnsmasq where it tries and fails to bind to ipv6 addresses * Merging trunk * Renaming progress migration to 47 * merge with trunk * Added unit test * Corrected the status in DB call * don't try to listen on ipv6 addresses, or new dnsmasq goes boom * make our own function instead of using urllib.urlencode since we apparently don't suppor urlencoded strings yet * Merged trunk * remove unused import * merge the sknurt * remove the polymorph * Fix typo in comment * Fixes the handling of snapshotting in libvirt driver to actually use the proper image type instead of using raw for everything. Also cleans up an unneeded flag. Based on doude's initial work * merge with trunk * removing extra newline * catching AttributeError and adding tests * Remove vestigial db call for fixed\_ips * Fixes the user credentials for installing a config-drive from imageRef * Some Linux systems can also be slow to start the guest agent. This branch extends the windows agent timeout to apply to all systems * remove extra line * get the interface using the network and instance * flag typo * add an optional flag to force dhcp release using dnsmasq-utils * Fix user\_id, project\_id reference for config\_drive with imageRefs * Fix a bug that would make spawning new instances fail if no port/protocol is given (for rules granting access for other security groups) * When swap is specified as block device mapping, its size becomes 0 wrongly. This patch make it set to correct size according to instance\_type * Fix pep8 issues * fixed grant user, added stdout support * This changes the interpretation of 'swap' for an instance-type to be in MB rather than GB * Fixing list prepend * Merging trunk * create disk.local the same way ephemerals are created (LP: #851145) * Fix failing test * Authorize to start a LXC instance withour, key, network file to inject or metadata * Update the v1.0 rescue admin action and the v1.1 rescue extension to generate 'adminPass'. Fixes an issue where rescue commands were broken on XenServer. 
lp#838518 * pep8 * merge the trunks * update tests to return fake\_nw\_info that is valid for the pre\_live\_migrate * make sure to raise since the tests require it * Pep8 Fix * Update test\_volumes to use FLAGS.password\_length * Zero out the progress when beginning a resize * Adding migration progress * Only log migration info if they exist * remove getting fixed\_ips directly from the db * removed unused import * Fixes libvirt rescue to use the same strategy as xen. Use a new copy of the base image as the rescue image. It leaves the original rescue image flags in, so a hand picked rescue image can still be used if desired * Fixing tests, PEP8 failures * fix permissions * Add a FakeVirDomainSnapshot and return it from snapshotCreateXML. Fixes libvirt snapshot tests * merge the trunks * Merged trunk * I am using iputils-arping package to send arping command. You will need to install this package on the network nodes using apt-get command apt-get install iputils-arping * Removed sudo from the arguments * Add a FakeVirDomainSnapshot and return it from snapshotCreateXML. Fixes libvirt snapshot tests * merge from trunk * Make sure grantee\_group is eagerly loaded * Merged trunk * compute/api: swap size issue * Update exception.wrap\_exception so that all exceptions (not just Error and NovaException types) get logged correctly * Removes the on-disk internal libvirt snapshot after it has been uploaded to glance * cleaned up * remove debugging * Merging trunk * Allowing resizes to the same machine * trunk merge * updates Exception.NoMoreFixedIps to subclass NovaException instead of Error * NoMoreFixedIps now subclasses NovaException instead of Error * merge trunk * was trying to create the FK when Should have been dropping * pep8 * well since sqlalchemy-migrate and sqlalchemy can't agree on what the FK is called, we fall back on just manually dropping it * tests working again * the table is the table for the reason its a table * uhh dialect doesn't exist, beavis * update comment * if no public-key is given (--key), do not show public-keys in metadata service * it merges the trunk; or else it gets the conflicts again * exceptions properly passed around now * merge with trunk at revno 1573 * add the fake\_network Manager to prevent rpc calls * This makes the OS api extension for booting from volumes work. 
The \_get\_view\_builder method was replaced in the parent class, but the BootFromVolume controller was not updated to use the new method * remove undedded imports and skips * pep8 fixes * Added a unit test * pass-through all other parameters in next links as well * update for the id->uuid flip * Merged trunk * Adding flavor extra data extension * Merged trunk * fix test * revert last change * Added virt-level support for polling unconfirmed resizes * build the query with the query builder * Removing toprettyxml from OSAPI xml serialization in favor of toxml * use uuids everywhere possible * make sure to use the uuid * update db api for split filterings searches * update tests * delete the internal libvirt snapshot after it is saved to glance * cleanup prints in tests * cleanup prints in tests * Add a simple test for the OS boot from volume api * get rid of debugs * Merged from trunk and resolved conflicts * Execute arping command using run\_as\_root=True instead of sudo * Return three rules for describe\_security\_groups if a rule refers to a foreign group, but does not specify protocol/port * pep8 issues * added xml support for servers\_list in response with tests * Merged trunk * added servers\_links in v1.1 with tests * added build\_list to servers controllers and view builder and kept all old tests passing * The 1.1 API specifies that two vendor content types are allowed in addition to the standard JSON and XML content types * pep8 * tests are back * PEP8 fix * Adding progress * In the unlikely case of an instance losing a host, make sure we still delete the instance when a forceDelete is done * 0 for the instance id is False ;) * Cleanup state management to use vm\_state instead of task\_state Add schedule\_delete() method so delete() actually does what it says it does * merge trunk * write out xml for rescue * fix up the filtering so it does not return duplicates if both the network and the db filters match * fix rescue to use the base image, reset firewall rules, and accept network\_info * make sure to pass in the context * move the FakeNetworkManager into fake\_network * Fix issue where floating ips don't get recreated when a network host reboots * ip tests were moved to networking * add tests * fix typo * allow matching on fixed\_ip without regex and don't break so all results are reported * add case where vif may not have an instance\_id associated with it * fix typo * Initial pass at automatically confirming resizes after a given window * Use the correct method to get a builder * merge trunks * pep8 * move ip filtering over to the network side * fix pep8 whitespace error * add necessary fields to flavor.rng schema * get all the vifs * get all the vifs * make sure we are grabbing out just the ids * flavor\_elem.setAttribute -> flavor\_elem.set, flavor -> flavor\_dict * minor changes to credentials for the correct format * Don't report the wrong content type if a mapped type doesn't exist * add stubs for future tests that need to be written * Test both content types for JSON and XML * Remove unnecessary vendor content types now that they are mapped to standard content types automatically * Add copyright * Map vendor content types to their standard content type before serializing or deserializing. 
This is so we don't have to litter the code with both types when they are treated identically * exporting auth to keystone (users, projects/tenants, roles, credentials) * make xml-api tests pass * update variable name after merge: flavor\_node -> flavor\_elem * resolve conflicts / merge with trunk revno 1569 * Fixes an issue where 'invalid literal for int' would occur when listing images after making a v1.1 server snapshot (with a UUID) * fixed tests * removing toprettyxml * add attributes to xml api * Remove debugging * Update test\_libvirt so that flags and fakes are used instead of mocks for utils.import\_class and utils.import\_object. Fixes #lp849329 * fix the test so that it fakes out the network * fix white space for pep8 * fix test\_extensions test to know of new extension FlavorExtraData * add extension description for FlavorExtraData * Adding migration for instance progress * Make tests pass * no need for the instance at all or compute * bump the migration * remove unused import, make call to network api to get vifs for the instance * merge the trunk * skip a bunch of tests for the moment since we will need to rework them * remove the vif joins, some dead code, and the ability to take in some instances for filtering * allow passing in of instances already * run the instances filter through the network api first, then through the db * add get\_vifs\_by\_instance and stub get\_instance\_ids\_by\_ip\_filter * change vifs to rpc call and add instance ids by ip * Multi-NIC support for vmwareapi virt driver in nova. Does injection of Multi-NIC information to instances with Operating system flavors Ubuntu, Windows and RHEL. vmwareapi virt driver now relies on calls to network manager instead of nova db calls for network configuration information of instance. Re-oranized VMWareVlanBridgeDriver and added session parmeter to methods to use existing session. Also removed session creation code as session comes as argument. Added check for flat\_inject flag before attempting an inject operation * last of the api.openstack.test\_images merge fixes * pep8 fixes * trunk merge * makes sure floating addresses are associated with host on associate so they come back * Deprecate aoe in preperation for removal in essex * Only allow up to 15 chars for a Windows hostname * pep8 * deprecate aoe * Fix instance rebooting (lp847604) by correcting a malformed cast in compute.api and an incorrect method signature in the libvirt driver * Fix mismerge * make tests pass * This patch teaches virt/libvirt how to format filesystem on ephemeral device depending on os\_type so that the behaviour matches with EC2's. Such behaviour isn't explicitly described in the documentation, but it is confirmed by checking realy EC2 instances. This patch introduces options virt\_mkfs as multistring. Its format is --virt\_mkfs== When creating ephemeral device, format it according to the option depending on os\_type. This addresses the bugs, https://bugs.launchpad.net/nova/+bug/827598 https://bugs.launchpad.net/nova/+bug/828357 * Test new vendor content types as well * Only allow up to 15 chars for a Windows hostname * Split accept tests to better match the name of the test * Remove debugging print * Inject hostname to xenstore upon creation * Update test\_libvirt so that flags and fakes are used instead of mocks for utils.import\_class and utils.import\_object. 
Fixes #lp849329 * interpret 'swap' to be in MB, not in GB * Actually test expected matches received * Test new content-types * This branch changes XML Serializers and their tests to use lxml.etree instead of minidom * add additional data to flavor's ViewBuilder * Inject hostname to xenstore upon creation * drop the virtual\_interfaces key back to instances * - remove translation of non-recognized attributes to user metadata, now just ignored - ensure all keys are defined in image dictionaries, defaulting to None if glance client doesn't provide one - remove BaseImageService - reorganize some GlanceImageService tests * And again * Update MANIFEST.in to match directory moves from rev1559 * we're back * Update MANIFEST.in to match directory moves from rev1559 * Moving tests/test\_cloud.py to tests/api/ec2/test\_cloud.py. They are EC2-specific tests, so this makes sense * Same as last time * Made tests version version links more robust * PEP8 cleanup * PEP8 cleanup * PEP8 cleanups * zone manager tests working * fixing import * working on getting tests back * relocating ec2 tests * merging trunk; resolving conflicts * Correctly map image statuses from Glance to OSAPI v1.1 * pep8 fixes in nova/db/sqlalchemy/api.py and nova/virt/disk.py * Add support for vendor content types * pep8 fixes * merging trunk; resolving conflicts * Update GlanceClient, GlanceImageService, and Glance Xen plugin to work with Glance keystone * Fix typo (woops) * pep8 fix * Some arches dont have dmidecode, check to see if libvirt is capable of running rather getInfo of the arch its running on * merging parent branch lp:~rackspace-titan/nova/glance-client-keystone * adding tests for deleted and pending\_delete statuses * Fixes rogue usage of sudo that crept in * fixups * remove unused dep * add test for method sig * parent merge * migration move * bug fixes * merging trunk * Fixes shutdown of lxc containers * Make quoting consistent * Fix rogue usage of 'sudo' bypassing the run\_as\_root=True method * trunk merge * region name * tweaks * fix for lp847604 to unbreak instance rebooting * use 'qemu-image resize' rather than 'truncate' to grow image files * When vpn=true in allocate ip, it attempts to allocate the ip that is reserved in the network. Unfortunately fixed\_ip\_associate attempts to ignore reserved ips. This fix allows to filter reserved ip address only when vpn=True * Do not require --bridge\_interface for FlatDHCPManager (lp:844944) * Makes nova-vncproxy listen for requests on the queue like it did before the bin files were refactored * Update GlanceClient, GlanceImageService, and Glance Xen plugin to work with Glance keystone * api/ec2/ebs: make metadata returns correct swap and ephemeral0 * api/ec2: make get\_metadata() return correct mappings * virt/libvirt: format ephemeral device and add fs label when formating ext3 fs * Fix spelling mistake * Stock zones follows a fill-first methodology—the current zone is filled with instances before other zones are considered. This adds a flag to nova to select a spread-first methodology. The implementation is simply adding a random.shuffle() prior to sorting the list of potential compute hosts by weights * Pass reboot\_type (either HARD or SOFT) to the virt layers from the API * merging trunk * fixing image status mapping * don't need random in abstract\_scheduler.py anymore.. 
* pull-up from trunk; move spread\_first into base\_scheduler.py * trunk merge * adding auth tokens to child zone calls * Add comment to document why random.shuffle() works * Merged trunk * Make whitespace consistent * Use triple quotes for docstrings to be consistent * Remove the unnecessary sudo from qemu-img as it is unneeded and doesn't work with our current packaging * Remove chanes\_since and key\_name from basic server entity * Merged trunk * remove extra line for pep8 * remove unnecessary qemu-img flag, use base image type by default * shorten comment to < 79 chars * merged rbp * remove sudo from qemu-img commands * adds a fake\_network module to tests to generate sensible network info for tests. It does not require using the db * Adding a can\_read\_deleted filter back to db.api.instance\_get\_all\_by\_filters that was removed in a recent merge * removing key\_name and config\_drive from non-detailed server entity * Authorize to start a LXC instance withour, key, network file to inject or metadata * Open Essex (switch version to 2012.1) * Last Diablo translations for Nova * Open Essex (switch version to 2012.1) * Last Diablo translations * pep 8 * Fixing security groups stuff * put key into meta-data, not top level 'data' * metadata key is 'public-keys', not 'keys' * fix for lp844364: fix check for fixed\_ip association in os-floating-ips * if no public-key is given (--key), do not show public-keys in metadata service * NetworkManager's add\_fixed\_ip\_to\_instance calls \_allocate\_fixed\_ips without vpn or requested\_networks parameters. If vpn or requested\_networks is not provided to the \_allocate\_fixed\_ips method, it throws an exception. This issue is fixed now * Merged trunk * First pass at adding reboot\_type to reboot codepath * child zone queries working with keystone now * Added docstring to explain usage of reserved keyword argument * One more bug fix to make zones work in trunk. Basic problem is that in novaclient using the 1.0 OSAPI, servers.create() takes an ipgroups argument, but when using the 1.1 OSAPI, it doesn't, which means booting instances in child zones won't work with OSAPI v1.0. This fix works around that by using keyword arguments for all the arguments after the flavor, and dropping the unused ipgroups argument * Fixes the reroute\_compute decorator in the scheduler API so that it properly: * make check for fixed\_ip association more defensive * Fix lp:844155 * Changing a behavior of update\_dhcp() to write out dhcp options file. This option file make dnsmasq offer a default gateway to only NICs of VM belonging to a network that the first NIC of VM belongs to. So, first NIC of VM must be connected to a network that a correct default gateway exists in. 
By means of this, VM will not get incorrect default gateways * merged trunk * merging trunk * merging trunk * merged trunk * Make weigh\_hosts() return a host per instance, instead of just a list of hosts * converting fix to just address ec2; updating test * Do not attempt to mount the swap VDI for file injection * Add a NOTE() * Merged trunk * Use .get instead * Do not attempt to mount the swap VDI for file injection * pull-up from trunk * pull-up from trunk * pull-up from trunk * adding can\_read\_deleted back to db api * Clean up shutdown of lxc containers * Cleanup some more comments * Cleanup some comments * fixes vncproxy service listening on rabbit * added tests for failure cases talking with zones * This code contains contains a new NetworkManager class that can leverage Quantum + Melange * comment fix * typo trying to raise InstanceNotFound when all zones returned nothing * create a new exception ZoneRequestError to use for returning errors when zone requests couldn't complete * pep8 fix for tests/api/openstack/test\_servers.py which is an issue in trunk * catch exceptions from novaclient when talking to child zones. store them and re-raise if no other child zones return any results. If no exceptions are raised but no results are returned, raise a NotFound exception * added test to cover case where no local hosts are available but child hosts are * remove the short circuit in abstract scheduler when no local hosts are available * fix for lp844364: improve check for fixed\_ip association * Ensure restore and forceDelete don't do anything unless the server is waiting to be reclaimed * actually shuffle the weighted\_hosts list.. * Check task\_state for queued delete * spread-first strategy * Make sure instance is deleted before allowing restore or forceDelete * Add local hostname to fix Authors test * delete\_instance\_interval -> reclaim\_instance\_interval * PEP8 cleanup * Restart compute with a lower periodic\_interval to make test run faster * merge trunk * properly handle the id resetters * removed vestige * pull-up from trunk * fix a couple of typos in the added unit test * modified unit tests, set use\_single\_default\_gateway flag to True whereever needed instead of setting it in the init method * exclude net tag from host\_dhcp if use\_single\_default\_gateway flag is set to false * forgot \_id * had used wrong variable * Fixes a case where if a VIF is returned with a NULL network it might not be able to be deleted. Added test case for that fix * Fix for LP Bug #837867 * weigh\_hosts() needs to return a list of hosts for the instances, not just a list of hosts * Merged trunk * Set flat\_injected to False by default * changed the fixed\_ip\_generator * PEP8 cleanup * Wait longer for all agents, not just Windows * merged trunk * updated floating\_ip generation * Tests for deferred delete, restore and forceDelete * An AMI image without ramdisk image should start * Added use\_single\_default\_gateway to switch from multiple default gateways to single default gateway * Fixed unit test * reverting change to GlanceImageService.\_is\_image\_available * At present, the os servers.detail api does not return server.user\_id or server.tenant\_id. 
This is problematic, since the servers.detail api defaults to returning all servers for all users of a tenant, which makes it impossible to tell which user is associated with which server * reverting xenapi change * Micro-fix; "exception" was misspelled as "exceptions" * Fix a misspelling of "exception" * revert changes to display description * merged trunk * novaclient v1\_0 has an ipgroups argument, but novaclient v1\_1 doesn't * Set flat\_injected to False by default * Fixes an issue where 'invalid literal for int' would occur when listing images after making a v1.1 server snapshot (with a UUID) * further cleanup * Default to 0 seconds (off) * PEP8 cleanups * Include new extension * Implement deferred delete of instances * trunk merge * cleaning up tests * zone name not overwritten * Update the v1.0 rescue admin action and the v1.1 rescue extension to generate 'adminPass'. Fixes an issue where rescue commands were broken on XenServer. lp#838518 * fix a mistaking of dataset and expected values on small test * fix a mistaking of deletion in ensure\_floating\_forward * revert codes for db * correct a method to collect instances from db add interface data to test * added me to Authors * meeging trunk * format for pep8 * format for pep8 * implement unit test for linux\_net * Adjust test\_api to account to multiple rules getting returned for a single set rule * Clean up security groups after use * Make a security group rule that references another security group return ipPermission for each of tcp, udp, and icmp * Multi-NIC support for vmwareapi virt driver in nova. Does injection of Multi-NIC information to instances with Operating system flavors Ubuntu, Windows and RHEL. vmwareapi virt driver now relies on calls to network manager instead of nova db calls for network configuration information of instance. Ensure if port group is properly associated with vlan\_interface specified in case of VLAN networking for instances. Re-oranized VMWareVlanBridgeDriver and added session parmeter to methods to use existing session. Also removed session creation code as session comes as argument. Added check for flat\_inject flag before attempting an inject operation. Removed stale code from vmwareapi stubs. Also updated some comments to be more meaningful. Did pep8 and pylint checks. Tried to improve pylint score for newly added lines of code * Fix bug #835919 that output a option file for dnsmasq not to offer a default gateway on second vif * Accidentally added instance to security group twice in the test. 
Fixed * Minor cleanup * Fixing xml serialization of limits resource * correct floating ip id to increment in fake\_network * Add iptables filter rules for dnsmasq * Merged trunk * Change non E ascii characte * Launchpad automatic translations update * Instance record is not inserted in db if the security group passed to the RunInstances API doesn't exists * Added unit tests to check instance record is not inserted in db when security groups passed to the instances are not existing * removed unneeded import * rick nits * alex meade issues * Added list of security groups to the newly added extension (Createserverext) for the Create Server and Get Server detail responses * default description to name * use 'qemu-image resize' rather than 'truncate' to grow image files * remove extra description stuff * fix pep8 violation * feedback from jk0's review, including removing a lot of spaces from docstrings * revert description changes, use metadata['description'] if it is set to populate field in db * merged trunk * change db migrate script again to match other similar scripts * Fix for LP Bug #839269 * move networks declarations within upgrade/downgrade methods * more review cleanup * remove import of 'fake' from nova manager, now that we've moved that to test\_quantum.py * Fixes a small bug which causes filters to not work at all. Also reworks a bit of exception handling to allow the exception related to the bug to propagate up * Email error again. Tired * Email error * Fixed review comments * Add documentation comment * pull-up from trunk * Forgot to handle return value * Add tests for flags 'snapshot\_image\_format' * Update snapshot image metada 'disk\_format' * Add flag 'snapshot\_image\_format' to select the disk format of the snapshot image generated with the libvirt driver * missing migration * Email contact error * Update Authors file * Merged trunk * Correct tests associated * Fix protocol-less security groups * Adding feedparser to pip-requires * Removing xml functions that are no longer called * Launchpad automatic translations update * Glance can now perform its own authentication/authorization checks when we're using keystone * import filters in scheduler/host\_filter.py so default\_host\_filter gets added to FLAGS; rework SchedulerManager() to only catch missing 'schedule\_' attribute and report other missing attributes * move content of quantum/fake.py to test\_quantum.py in unit testing class (most original content has been removed anyway) * melange testing cleanup, localization cleanup * remove references to MelangeIPAMTest, as they cannot be used yet * Deleted debug messages * Resolved conflicts and fixed pep8 errors * Fix a few references to state\_description that slipped through * added unit tests and cleanup of import statements * renamed fake\_network\_info.py * trunk merge * moved cidr\_v6 back * Probably shouldn't leave that commented out * Added test for NULL network * Fixed lp835242 * Fixes for minor network manager issues centered around deleting/accessing instances which don't have network information set * remove extra references to state\_description * pull-up from trunk * merge unit test from Chris MacGown * Adds test for image.glance.GlanceImageService.\_is\_image\_available * - implements changes-since for servers resource - default sort is now created\_at desc for instances * undo change in setting q\_tenant\_id in quantum\_manager.create\_network * additional review cleanup * docstring cleanup * merging trunk * Fixes NotFound exceptions to show the proper 
instance id in the ec2 api * typo * more review cleanup * another commit from brad * add specific exceptions for quantum client. Fix doc-strings in client.py * merge brad's changes that address most review feedback * fix for lp838583 - fixes bug in os-floating-ips view code that prevents instance\_id from being returned for associated addresses * Accept keypair when you launch a new server. These properties would be stored along with the other server properties in the database (like they are currently for ec2 api) * Launchpad automatic translations update * merge trunk, fix tests * fix for lp838583 - return instance\_id for associated floating\_ips, add test * removing unnecessary imports * remove BaseImageService * pep8 * move GlanceImageService tests to proper module; remove translation of non-standard image attributes to properties; ensure all image properties are available, defaulting to None if not provided * merge trunk * Add comment for an uncommon failure case that we need to fix * Fix for LP Bug #838466 * Correctly yield images from glance client through image service * Simple usage extension for nova. Uses db to calculate tenant\_usage for specified time periods * Fix for LP Bug #838251 * merge trunk, fix conflict * Validates that user-data is b64 encoded * Updated VersionsAtomSerializer.index to use lxml.etree to generate atom feed * remove extra test * merged trunk * Fixed and improved the way instance "states" are set. Instead of relying on solely the power\_state of a VM, there are now explicitly defined VM states and VM task states which respectively define the current state of the VM and the task which is currently being performed by the VM * Updating test for xml to use lxml * expect key\_name attribute in 1.1 * change to use \_get\_key\_name to retrieve the key * Implements lp:798876 which is 'switch carrot to kombu'. Leaves carrot as the default for now... decision will be made later to switch the default to kombu after further testing. There's a lot of code duplication between carrot and kombu, but I left it that way in preparation for ripping carrot out later and to keep minimal changes to carrot * Disassociated previously associated floating ips when calling network\_api.associate\_floating\_ip. 
Also guard against double-association in the network.manager * adding support for limiting in image service; updating tests with fixture ids and marker support * trunk merge * merging trunk * fix keypairs stubs * add explicit message for NoMoreFloatingIps exception * fix for chris behrens' comment - move tenant\_id => project\_id mapping to compute.api.get\_all * moved key\_name per review * zone\_add fixed to support zone name * kludge for kombu 1.1.3 memory transport bug * merged trunk * Removed extraneous import and s/vm\_state.STOP/vm\_states.STOPPED/ * Merged trunk * Code cleanup * Use feedparser to parse the generated atom feeds in the tests for the versions resource * add test to verify 400 response when out of addresses * switched default to kombu per vishy * use kombu.connection.BrokerConnection vs kombu.connection.Connection so that older versions of kombu (1.0.4) work as well as newer * fix FloatingIpAlreadyInUse to use correct string pattern, convert ApiErrors to 400 responses * Fix for LP Bug #782364 * Fix for LP Bug #782364 * more logging info to help identify bad payloads * Removed test\_parallel\_builds in the XenAPI tests due to it frequently hanging indefinitely * logging change when rpc pool creates new connection * pep8 fix * make default carrot again and delay the import in rpc/\_\_init\_\_.py * Removed debug messages * Fix for LP Bug #837534 * add kombu to pip-requires and contrib/nova.sh * restore old way FLAGS.rpc\_backend worked.. no short name support for consistency * fix remaining tests * Update RequestContext so that it correctly sets self.is\_admin from the roles array. Additionally add a bit of code to ignore case as well * pep8, fix fakes * fix a bunch of direct usages of db in compute api * make two functions instead of fast flag and add compute api commands instead of hitting db directly * fixing bug * fixing short-ciruit condition * yielding all the images * merged trunk * changing default sort to created\_at * The exception 'RamdiskNotFoundForImage' is no longer used * With OS API, if the property 'ramdisk\_id' isn't set on the AMI image, Nova can not instantiate it. With EC2 API, the AMI image can be instantiate * adding an assert * Use getCapabilities rather than getInfo() since some versions of libvirt dont provide dmi information * supporting changes-since * Fix a bad merge on my part, this fixes rebuilds\! * disassociate floating ips before re-associating, and prevent re-association of already associated floating ips in manager * Update RequestContext so that it correctly sets self.is\_admin from the roles array. Additionally add a bit of code to ignore case as well * Merged trunk * remove unneeded connection= in carrot Consumer init * pep8 fix for test\_rpc\_common.py * fix ajax console proxy for new create\_consumer method * doc string cleanup * created nova/tests/test\_rpc\_common.py which contains a rpc test base class so we can share tests between the rpc implementations * ditched rpc.create\_consumer(conn) interface... instead you now do conn.create\_consumer(. * Update the EC2 ToToken middleware to use eventlet.green.httplib instead of httplib2. 
Fixes issues where the JSON request body wasn't getting sent to Keystone * remove brackets from mailmap entry * access db directly in networkmanagers's delete\_network method, so stubbed test call works correctly * more logging info to help identify bad payloads * In the XenAPI simulator, set VM.domid, when creating the instance initially, and when starting the VM * remove 'uuid' param for nova-manage network delete that I had add previously * add alias to mailmap * update file name for db migrate script after merge (again) * update file name for db migrate script after merge * merged trunk * Fixes this bug by removing the test. The test has no asserts and seems to be raising more problems than it could solve * Removed test\_parallel\_builds * Merged trunk * Increased migration number * Fixes lp:813864 by removing the broken assert. The assert was a check for isinstance of 'int' that should have been 'long'. But it doesn't appear this assert really belongs, anyway * Merged trunk * Adds assertIn and assertNotIn support to TestCase for compatibility with python 2.6 This is a very minimal addition which doesn't require unittest2 * support the extra optional arguments for msg to assertIn and assertNotIn * removed broken assert for abstract\_scheduler * pep8 fixes * fix for assertIn and assertNotIn use which was added in python 2.7. this makes things work on 2.6 still * merge trunk * restore fixed\_ip\_associate\_pool in nova/db/sqlalchemy.py to its original form before this branch. Figured out how to make unit tests pass without requiring that this function changes * remove unused rpc connections in test\_cloud and test\_adminapi * carrot consumer thread fix * add carrot/kombu tests... small thread fix for kombu * add doc-strings for all major modules * remove fake IPAM lib, since qmanager must now access nova DB directly * Update the EC2 ToToken middleware to use eventlet.green.httplib instead of httplib2. Fixes issues where the JSON request body wasn't getting sent to Keystone * fix nova/tests/test\_test.py * fix nova-ajax-console-proxy * fix test\_rpc and kombu stuff * always set network\_id in virtual\_interfaces table, otherwise API commands that show IP addresses get confused * start to rework some consumer stuff * update melange ipam lib to use network uuid, not bridge * fix issue with setting 'Active' caused by Quantum API changes. Other misc fixes * Bug #835952: pep8 failures do not cause the tests to fail * Start domid's at 1, not 0, to avoid any confusion with dom0 * use 'uuid' field in networks table rather than 'bridge'. Specify project\_id when creating instance in unit test * Bug #835964: pep8 violations in IPv6 code * In the XenAPI simulator, set VM.domid, when creating the instance initially, and when starting the VM * Bug #835952: pep8 failures do not cause the tests to fail * Bug #835964: pep8 violations in IPv6 code * Virtual Storage Array (VSA) feature. 
- new Virtual Storage Array (VSA) objects / OS API extensions / APIs / CLIs - new schedulers for selecting nodes with particular volume capabilities - new special volume driver - report volume capabilities - some fixes for volume types * fix FALGS typo * changes a few double quotes to be single, as the rest in the vicinity are * Default rabbit max\_retries to forever Modify carrot code to handle retry backoffs and obey max\_retries = forever Fix some kombu issues from cut-n-paste Service should make sure to close the RPC connection * Updated VersionsXMLSerializer and corresponding tests to use lxml * v1.0 of server create injects first user's keypair * add tests to verify NotFound exceptions are wrapped with the proper ids * use db layer for aggregation * merged trunk * flag for kombu connection backoff on retries * more fixes * more work done to restore original rpc interfaces * merge changes from brad due to recent quantum API changes * Minor changes based on recent quantum changes * start of kombu implementation, keeping the same RPC interfaces * double quotes to single * changed format string in nova-manage * removed self.test ip and \_setup\_networking from libvirt * updated libvirt test * merge trunk * stubbed some stuff in test\_libvirt * removed create\_volumes, added log & doc comment about experimental code * reverted CA files * couple of pep8s * Tiny tweaks to the migration script * updated fake values * updated fake values * Merged trunk and fixed conflicts * updated fake values * updated fake values * forgot ) * update libvirt tests * Update compute API and manager so that the image\_ref is set before spawning the rebuilt instance. Fixes issue where rebuild didn't actually change the image\_id * added debug prints for scheduler * update libvirt * updated instance type fake model * added vcpus to instance flavor test model * added memory\_mb to instance flavor test model * forgot test print statements * misplaced comma.. * Update compute API and manager so that the image\_ref is set before spawning the rebuilt instance. Fixes issue where rebuild didn't actually change the image\_id * Add brad to Authors file * replace accidental deletion in nova-manage * rearrange imports * fix for quantum api changes, change nova-manage to have quantum\_list command * merge brad's fixes * add priority for static networks * driver: added vsa\_id parameter for SN call * merged with rev.1499 * cosmetic cleanup * Updated server and image XML serializers to take advantage of the addresses and metadata serializers * VSA code redesign. Drive types completely replaced by Volume types * merged trunk * Just a couple of small changes I needed to get the migrations working with SQLAlchemy 0.7.x on Fedora 16 * Minor fixes * check log file's mode prior to calling chmod * The fix for run\_iscsiadm in rev 1489 changed the call to use a tuple because values were being passed as tuples. Unfortunately a few calls to the method were still passing strings * Add a set of generic tests for the virt drivers. 
Update a bit of documentation to match reality * updated LimitsXMLSerializer to use etree and supply the xml declaration * merge underlying fix for testing * merged trunk * updated additional limits test * pep8 * pass all commands to run\_iscsiadm as a tuple * altered fake network model * Updated limits serialization tests to use etree and added limits schema * Test fixup after last review feedback commit * Fix glance image authorization check now that glance can do authorization checks on its own; use correct image service when looking for ramdisk, etc.; fix a couple of PEP8 errors * forget a return * review feedback * Fixed integrated.test\_xml to be more robust * typo * fixed a couple of syntax errors * Add bug reference * updated tests * updated libvirt tests to use fake\_network\_info * Bumped migration number * Merged trunk * Review feedback * pep8 * DRYed up code by moving \_to\_xml into XMLDictSerializer * updated addresses serializer to use etree instead of minidom * Added addresses schema * updated addresses xml serialization tests to use etree instead of minidom * Updated ServerXMLSerializer to use etree instead of minidom * added unit tests to instance\_types for rainy day paths * Reverted two mistakes when looking over full diff * Updated MetadataXMLSerializer to use etree instead of minidom * Added: - volume metadata - volume types - volume types extra\_specs * Added schemas Updated metadata tests to use etree instead of minidom * Servers with metadata will now boot on xenserver with flat\_injected==False * moved import up * Verify resize needs to be set * changing comment * fixing bug * merged trunk * Updated ImagesXMLSerializer to use etree instead of minidom * Set error state when migration prep fails * Removed invalid test * Removed RESIZE-CONFIRM hack * Set state to RESIZING during resizing.. * Merged trunk * Another attempt at fixing hanging test * Once a network is associated with a project, it can't be deleted with 'nova-manage network delete'. The network can be removed by scrubbing the project with 'nova-manage project scrub', but that is too heavy-handed. The cause of this problem is that there is no command to modify network attributes * Update paste config so that EC2 admin API defaults to noauth * merged with volume types (based on rev.1490). no code rework yet * merged with volume\_types. no code refactoring yet * merged with nova 1490 * added new tables to list of DBs in migration.py * removes french spellings to satisfy american developers * added virtio flag; associate address for VSA; cosmetic changes. Prior to volume\_types merge * stub\_instance fix from merge conflict * moved import to the top * fixing inappropriate rubyism in test code * Added fix for parallel build test * Fixed silly ordering issue which was causing tons of test failures * merged trunk * change snapshot msg too * forgot to add new extension to test\_extensions * Add me to Authors * added Openstack APIs for volume types & extradata * Add comments for associate/dissociate logic * Updated ImageXMLSerialization tests to use etree instead of minidom Fixed incorrect server entity ids in tests * Merged from trunk * Add names to placeholders of formatting * The notifiers API was changed to take a list of notifiers. 
Some people might want to use more than one notifier so hopefully this will be accepted into trunk * use dict.get for user\_id, project\_id, and display\_description in servers view as suggested by ed leaf, so that not all tests require these fields * Updated flavors xml serialization to use lxml instead of minidom * merge trunk, fix tests * fix more tests * Removed unused imports * Updated FlavorsXMLSerialization tests to use etree and validation instead of minidom * Merged from trunk * split test\_modify() into specific unit tests * Added DELETED status to OSAPI just in case * Fixes iscsiadm commands to run properly * Fixed issue where we were setting the state to DELETED before it's actually deleted * merged with rev.1488 * Merged trunk and fixed conflicts * added volume type search by extra\_spec * Fix for trying rebuilds when instance is not active * Fixed rebuild naming issue and reverted other fix which didn't fix anythin * Attempt to fix issue when deleting an instance when it's still in BUILD * Fix default hostname generator so that it won't use underscores, and use minus signs instead * merged with 1487 * pep8 compliant * Merged from trunk * - rebuilds are functional again - OSAPI v1.1 rebuild will accept adminPass or generate a new one, returning it in a server entity - OSAPI v1.0 will generate a new password, but it doesn't communicate it back to the user * Fix flag override in unit test * merged with rev.1485 * add rainy day test to to\_global fixed to\_global to catch correct error from incorrect mac addresses * Let's be more elegant * similar to lp828614: add rainy day test and fix exception error catch to AddrFormatError * check log file mode prior to chmod * added unit tests for version.py * Merged trunk * Fix for migrations * Conversion to SQLAlchemy-style * dict formatting * Commit without test data in migration * Commit with test data in migration * Do not require --bridge\_interface for FlatDHCPManager * Fix quotas migration failure * Fix flavorid migration failure * fixed indentation * adding xml serialization and handling instance not found * removing extraneous imports * pep8 * Thou shalt not use underscores in hostnames * Catch exception for instances that aren't there * pep8 fixes * Couple of fixes to the review feedback changes * Launchpad automatic translations update * Address code review feedback from Rick and Matt * removing print statement * added volume metadata APIs (OS & volume layers), search volume by metadata & other * Update paste config so that EC2 admin API defaults to noauth * cleanup * updating tests * fix iscsi adm command * Fix pep8 * Merged from trunk * added volume\_types APIs * Fix not found exceptions to properly use ec2\_ips for not found * Stub out the DB in unit test. Fix 'nova-manage network modify' to use db.network\_update() * rebuilds are functional again * Adds a use\_deprecated\_auth flag to make sure creds generated using nova-manage commands will work with noauth * Merged from upstream * Fixed some pep8 and pylint issues * Forgot to set the flag for the test * I added notifications decorator for each API call using monkey\_patching. 
With this merge, users can get API call notifications from any module * Fixes bug that causes 400 status code when an instance wasn't attached to a network * fix for rc generation using noauth * Fixed doc string * Merged from upstream * Switched list\_notifier to log an exception each time notify is called, for each notification driver that failed to import * updating tests * merging trunk * Fixed some docstrings. Added default publisher\_id flag * Removed blank line * Merged with trunk * Fixed typo and docstring and example class name * Updated migration number * Move use\_ipv6 into flags. It's used in multiple places (network manager and the OSAPI) and should be defined at the top level * Merged trunk * PEP8 fixes * 'use the ipv6' -- 'use ipv6' * Move use\_ipv6 into flags. It's used in multiple places (network manager and the OSAPI) and should be defined at the top level * Refresh translations * This branch does the final tear out of AuthManager from the main code. The NoAuth middlewares (active by default) allow a user to specify any user and project id through headers (os\_api) or access key (ec2\_api) * Implements first-pass of config-drive that adds a vfat format drive to a vm when config\_drive is True (or an image id) * Launchpad automatic translations update * pulling all qmanager changes into a branch based on trunk, as they were previously stacked on top of melange * Moved migration and fixed tests from upstream * Merged trunk * Added the fixes suggested by Eric Windisch from cloudscaling.. * removing unnecessary thing * merge trunk, resolve conflicts, fix tests * unindented per review, added a note about auth v2 * Our goal is to add an optional parameter to the Create server OS 1.0 and 1.1 API to achieve the following objectives: * fixing exception logging * Fixes bug 831627 where nova-manage does not exit when given a non-existent network address * Move documentation from nova.virt.fake into nova.virt.driver * initial cut on volume type APIs * fix pep8 issue * Change parameters of 'nova-manage network modify'. Move common test code into private method * Merged from trunk, resolved conflicts and fixed broken unit tests due to changes in the extensions which now include ProjectMapper * xml deserialization, and test fixes * syntax * update test\_network test\_get\_instance\_nw\_info() * remove extra spaces * Fixed conflict with branch * merged trunk * The FixedIpCommandsTestCase in test\_nova\_manage previously accessed the database. This branch stubs out the database for these tests, lowering their run time from 104 secs -> .02 secs total * some readability fixes per ja feedback * fix comment * Update a few doc strings. Address a few pep8 issues. Add nova.tests.utils which provides a couple of handy methods for testing stuff * Make snapshot raise InstanceNotRunning when the instance isn't running * change NoAuth to actually use a tenant and user * Added Test Code, doc string, and fixed pip-requires * Merged trunk * Ensure that reserve and unreserve exit when an address is not found * Simple usage extension for nova. 
Uses db to calculate tenant\_usage for specified time periods * Stubbed out the database in order to improve tests * logging as exception rather than error * Merged from upstream * Changed list\_notifier to call sys.exit if a notification driver could not be found * merged trunk * implemented tenant ids to be included in request uris * Add a generic set of tests for hypervisor drivers * Upstream merge * Added ability to detect import errors in list\_notifier if one or more drivers could not be loaded * Fix pep8 * delete debug code * Fixes for a number of tests * Use 'vm\_state' instead of 'state' in instance filters query * Merged with Dan to fix some EC2 cases * Add 'nova-manage network modify' command * Fixes/updates to make test\_cloud pass * Fix scheduler and integrated tests * Update migration number * Merged with Dan * Merged task\_state -> task\_states and fixed test\_servers test * Update virt/fake to correct power state issue * fix test\_servers tests * update test\_security\_group tests that have been added * Merged trunk * Renamed task\_state to task\_states.. * Ec2 API updates * merge with trunk * Fixing merge conflicts * Launchpad automatic translations update * Adds accessIPv4 and accessIPv6 to servers requests and responses as per the current spec * adding import * Fixes utils.to\_primitive (again) to handle modules, builtins and whatever other crap might be hiding in an object * fixing bug lp:830817 * added test for bad project\_id ... although it may not be used * added exception catch and test for bad project\_id * added exception catch for bad prefix and matching test * added exception catch and test for bad prefix * comment strings * added unit tests for versions.py * Added OS APIs to associate/disassociate security groups to/from instances * add/remove security groups to/from the servers as server actions * lp:828610 * removed leftover netaddr import * added rainy day test for ipv6 tests. fixed ipv6.to\_global to trap correct exception * Merged from trunk * pep8 * improve test coverage for instance types / flavors * Launchpad automatic translations update * Assorted fixes to os-floating-ips to make it play nicely with an in-progress novaclient implementation, as well as some changes to make it more consistent with other os rest apis. Changes include: * finished fake network info, removed testing shims * updated a maths * updated a maths * Merged trunk * Lots of modifications surrounding the OSAPI to remove any mention of dealing with power states and exclusively using vm\_states and task\_state modules. Currently there are still a number of tests failing, but this is a stopping place for today * who cares * added return * Merged from trunk and fixed review comments * fixed formatting string * typo * typo * typo * typo * typo * typo * added fake network info * Fixed review comments * Fixed typo * better handle malformed input, and add associated tests * Fixed typo * initial committ * Fixed NoneType returned bugw * merged trunk * Updated accessIPv4 and accessIPv6 to always be in a servers response * Fixed mistake on mergew * tweak to comment * Merged with trunkw * a few tweaks - remove unused member functions, add comment * incorporate feedback from brian waldon and brian lamar. 
Move associate/disassociate to server actions * merge from trunk * pep8 * Finished changing ServerXMLSerializationTest to use XML validation and lxml * Added monkey patching notification code function w * Updated test\_show in ServerXMLSerializationTest to use XML validation * vm\_state --> vm\_states * Next round of prep for keystone integration * merge from trunk * Removes the incorrect hard-coded filter path * Revert irrelevant changes that accidentally crept into this patch :( * add tenant\_id to api. without tenant\_id, admins can't tell which servers belong to which tenants when retrieving lists * Merged from trunk * Fixes primitive with builtins, modules, etc * fix test\_virtual interfaces for tenant\_id stuff * fix test\_rescue tests for tenant\_id changes * Fix unit test for the change of 'nova-manage network list' format * Add copyright notices * merged trunk * Define FLAGS.default\_local\_format. By default it's None, to match current expected \_create\_local * Fix config\_drive migration, per Matt Dietz * updated migration number * merge with trunk * Bump migration number * pep8 * Start improving documentation * Added uuid column in virtual\_interfaces table, and an OpenStack extension API for virtual interfaces to expose these IDs. Also set this UUID as one of the external IDs in the OVS vif driver * Move documentation from nova.virt.fake to nova.virt.driver * add key\_name/data support to server stub * add user\_id and description. without user\_id, there is no way for a tenant to tell which user created the server. description should be added for ec2 parity * merge * Bugfix for lp 828429. Its still not clear to me exactly how this code path is actually invoked when nova is used, so I'm looking for input on whether we should be adding a test case for this, removing the code as unused, etc. Thanks * remove security groups, improve exception handling, add tests * Merged trunk * merged trunk * Currently, rescue/unrescue is only available over the admin API. Non-admin tenants also need to be able to access this functionality. This patch adds rescue functionality over an API extension * Makes all of the binary services launch using the same strategy.  \* Removes helper methods from utils for loading flags and logging  \* Changes service.serve to use Launcher  \* Changes service.wait to actually wait for all the services to exit  \* Changes nova-api to explicitly load flags and logging and use service.serve \* Fixes the annoying IOError when /etc/nova/nova.conf doesn't exist * tests pass * Fixes issue where ServersXMLSerializer was missing a method for update actions * follow same pattern as userdata (not metadata apporach) * rename the test method * Updated docs for the recent scheduler class changes * Passes empty string instead of None to MySQLdb driver if the DB password isn't set * merged trunk * added volume metadata. Fixed test\_volume\_types\_extra\_specs * declare the use\_forwarded\_for flag * merge trunk * Fixes lp828207 * Added unit test * allow specification of key pair/security group info via metadata * Fixed bug in which DescribeInstances was returning deleted instances. Added tests for pertinent api methods * Accept binary user\_data in radix-64 format when you launch a new server using OSAPI. This user\_data would be stored along with the other server properties in the database. 
Once the VM instance boots you can query for the user-data to do any custom installation of applications/servers or do some specific job like setting up networking route table * added unittests for volume\_extra\_data * Removed extra parameter from the call to \_provision\_resource\_locally() * resolve conflicts after upstream merge * Change the call name * Cleanup the '\_base' directory in libvirt tests * Oops * Review feedback * Added 'update' method to ServersXMLSerializer * Added more unit testcases for userdata functionality * Remove instances.admin\_pass column * merged trunk * Merged with trunk * typo * updated PUT to severs/id to handle accessIPv4 and accessIPv6 * DB password should be an empty string for MySQLdb * first cut on types & extra-data (only DB work, no tests) * merge from trunk * Better docstring for \_unrescue() * Review feedback * Need to pass the action * Updated the distributed scheduler docs with the latest changes to the classes * Syntax error * Moved compute calls to their own handler * Remove old comment * Don't send 'injected\_files' and 'admin\_pass' to db.update * fix docstrings in new api bins * one more * fix typo * remove signal handling and clean up service.serve * add separate api binaries * more cleanup of binaries per review * Changed the filter specified in \_ask\_scheduler\_to\_create\_instance() to None, since the value isn't used when creating an instance * Minor housecleaning * Fix to return 413 for over limit exceptions with instances, metadata and personality * Refactored a little and updated unit test * minor cleanup * dhcpbridge: add better error if NETWORK\_ID is not set, convert locals() to static dict * Added the fix for the missing parameter for the call to create\_db\_entry\_for\_new\_instance() * Updated a number of items to pave the way for new states * Corrected the hardcoded filter path. Also simplified the filter matching code in host\_filter.py * Added rescue mode extension * Fixed issue where accessIP was added in none detail responses * Updated ServersXMLSerializer to allow accessIPv4 and accessIPv6 in XML responses * Merged trunk * Added accessIPv4 and accessIPv6 to servers view builder Updated compute api to handle accessIPv4 and 6 * Fixed several logical errors in the scheduling process. Renamed the 'ZoneAwareScheduler' to 'AbstractScheduler', since the zone-specific designation is no longer relevant. Created a BaseScheduler class that has basic filter\_hosts() and weigh\_hosts() capabilities. Moved the filters out of one large file and into a 'filters' subdirectory of nova/scheduler * Merged trunk * Adds the enabled status of a host when XenServer reports its host's capabilities. This allows the scheduler to ignore hosts whose enabled is False when considering where to place a new instance * merge trunk and fix unit test errors * in dhcpbridge, only grab network id from env if needed * bug #828429: remove references to interface in nova-dhcpbridge * pep8 * remove extra reference in pipelib * clean up fake auth from server actions test * fix integration tests * make admin context the default, clean up pipelib * merged trunk * Merged with trunk and fixed broken testcases * merged with nova-1450 * nova-manage VSA print & forced update\_cap changes; fixed bug with report capabilities; added IP address to VSA APIs; added instances to APIs * Make all services use the same launching strategy * Updated compute manager/API to use vm/task states. 
Updated vm/task states to cover a few more cases I encountered * Updated server create XML deserializer to account for accessIPv4 and accessIPv6 * Added the host 'enabled' status to the host\_data returned by the plugin * Added accessip to models pep8 * Added migration for accessIPv4 and accessIPv6 * Fixed broken unit testcases * Initial instance states migration * pep8 fix * fix some naming inconsistencies, make associate/disassociate PUTs * Add NetworkCommandsTestCase into unit test of nova-manage * very minor cleanup * Undo an unecessary change * Merged trunk * Pep8 fixes * Split set state into vm, task, and power state functions * Add modules for task and vm states * Updated tests to correctly use the tenant id * DB object was being casted to dict() in API code. This did not work as intended and logic has been updated to reflect a more accurate way of getting information out of DB objects * merge from trunk * Cleaned up the extension metadata API data * Updated get\_updated time * Cleaned up the file * Fixed vif test to match the JSON key change * Added XML support and changed JSON output keys * Added virtual interfaces API test * Removed serverId from the response * Merged trunk * Merged Dan's branch to add VIF uuid to VIF drivers for Quantum * Removed a change from faults.py that was not required." * Changed return code to 413 for metadata, personality and instance quota issues * Append the project\_id to the SERVER-MANAGEMENT-URL header for v1.1 requests. Also, ensure that the project\_id is correctly parsed from the request * add new vif uuid for OVS vifplug for libvirt + xenserver * Remove instances.admin\_pass column * merge trunk * all tests passing * fix unit tests * Resolved conflicts and merged with trunk * Added uuid for networks and made changes to the Create server API format to accept network as uuid instead of id * I'm taking Thierry at his word that I should merge early and merge often :) * Fixes issue with exceptions getting eaten in image/s3.py if there is a failure during register. The variables referenced with locals() were actually out of scope * Allow local\_gb size to be 0. libvirt uses local\_gb as a secondary drive, but XenServer uses it as the root partition's size. Now we support both * Merged trunk * merge from trunk * make project\_id authorization work properly, with test * Use netaddr's subnet features to calculate subnets * make delete more consistant * Review feedback * Updated note * Allow local\_gb to be 0; PEP8 fixes * Updated ViewBuilderV10 as per feedback * \* Added search instance by metadata. \* instance\_get\_all\_by\_filters should filter deleted * This branch implements a nova api extension which allows you to manage and update tenant/project quotas * test improvements per peer review * fixing pep8 issue * defaults now is referred to using a tenant * fixing up the show quotas tests, and extension * making get project quotas require context which has access to the project/tenant) * fixing pep8 issues again * fixing spacing issues * cleaning up a few things from pyflakes * fixing pep8 errors * refactoring tests to not use authmanager, and now returning 403 when non admin user tries to update quotas * removed index, and separated out defaults into its own action * merging test\_extensions.py * another trunk merge * another trunk merge... 
a new change made it into nova before the code was merged * Cleanup the '\_base' directory in libvirt tests * Small bug fix...don't cast DB objects to dicts * merge from trunk * Updated the EC2 metadata controller so that it returns the correct value for instance-type metadata * Fix test\_metadata tests * merge the trunk * Merged with upstream * Added list\_notifier, a driver for the notifer api which calls a list of other drivers * merge with trunk * Refactored the HostFilterScheduler and LeastCostScheduler classes so that they can be combined into a single class that can do both host filtering and host weighting, allowing subclasses to override those processes as needed. Also renamed the ZoneAwareScheduler to AbstractScheduler, for two reasons: one, the 'zone-aware' designation was necessary when the zone code was being developed; now that it is part of nova, it is not an important distinction. Second, the 'Abstract' part clearly indicates that this is a class that is not designed to be used directly, but rather as the basis for specific scheduler subclasses * cosmetic change in test\_extensions. Avoids constant merge conflicts between proposals with new extensions * Validate the size of VHD files in OVF containers * Include vif UUID in the network info dictionary * Added uuid to allocate\_mac\_address * Fixed the naming of the extension * redux of floating ip api * Merged trunk * Merged trunk * log the full exception so we don't lose traceback through eventlet * fix error logging in s3.py * pep8 cleanup * Merged trunk * Removed newly added userdatarequesthandler for OS API, there is no need to add this handler since the existing Ec2 API metadatarequesthandler does the same job * got tests passing with logic changes * pep8 * pep8 * add note * have the tests call create\_networks directly * allow for finding a network that fits the size, also format string correctly * adding sqlalchemi api tests for test\_instance\_get\_all\_by\_filter to ensure doesn't return deleted instances * added cloud unit test for describe\_instances to ensure doesn't return deleted instances * return the created networks * pep8 fix * merge trunk * Adding kvm-block-migration feature * i hate these exceptions where it should just return an empty list * fix typo where I forgot a comma * merge trunk, remove \_validate\_cidrs and replace functionality with a double for loop * fix bug which DescribeInstances in EC2 api was returning deleted instances * We don't have source for open-wrt in the source tree, so we shouldn't use the images. 
Since the images are only there for uploading smoketests, They are now replaced with random images * Make response structure for list floating ips conform with rest of openstack api * put tenant\_id back in places where it was * This branch allows the standard inclusion of a body param which most http clients will send along with a POST request * Libvirt has some autogenerated network info that is breaking ha network * making body default to none * pep8 fix * Adding standard inclusion of a body param which most http clients will send along with a POST request * Fixed merging issue * Merged with trunk * Updated rate limiting tests to use tenants * Corrected names in TODO/FIXME * remove openwrt image * Fix the tests when libvirt actually exists * Merged trunk * Add durable flag for rabbit queues * Fixed merge conflict * merged trunk * Merged trunk * Dryed up contructors * make list response for floating ip match other apis * fix missing 'run\_as\_root' from bad merge * Added ability too boot VM from install ISO. System detects an image of type iso. Images is streamed to a VDI and mounted to the VM. Blank disk allocated to VM based on instance type * Add source-group filtering * added logic to make the creation of networks (IPv4 only) validation a bit smarter: - detects if the cidr is already in use - detects if any existing smaller networks are within the range of requested cidr(s) - detects if splitting a supernet into # of num\_networks && network\_size will fit - detects if requested cidr(s) are within range of already existing supernet (larger cidr) * fix InvalidPortRange exception shows up in euca2ools instead of UnknownError when euca-authorize is specified w/ invalid port # * Changes requests with an invalid server action to return an HTTP 400 instead of a 501 * Currently OS API doesn't accept availability zone parameter so there is no way to instruct scheduler (SimpleScheduler) to launch VM instance on specific host of specified zone * typo fix * Fix v1.1 /servers/ PUT request to match API documentation by returning 200 code and the server data in the body * Allow different schedulers for compute and volume * have NetworkManager generate MAC address and pass it to the driver for plugging. Sets the stage for being able to do duplicate checks on those MACs as well * make sure security groups come back on restart of nova-compute * fix all of the tests * rename project\_net to same\_net * use dhcp server instead of gateway for filter exception * get rid of network\_info hack and pass it everywhere * fix issue introduced in merge * merge trunk, fix conflict frim dprince's branch to remove hostname from bin/nova-dhcpbridge * merge in trunk, resolving conflicts with ttx's branch to switch from using sudo to run\_as\_root=True * remerge trunk * Added durable option for nova rabbit queues added queueu delete script for admin/debug purposes * Added add securitygroup to instance and remove securitygroup from instance functionality * Fix ugly little violations before someone says anything * Merged trunk * Updated logging * end of day * Check uncompressed VHD size * reworked test\_extensions code to avoid constant merge conflicts with newly added ext * nova-manage: fixed instance type in vsa creation * Stub out instance\_get as well so we can show the results of the name change * removed VSA/drive\_type code from EC2 cloud. 
changed nova-manage not to use cloud APIs * Merged with trunk and fixed broken unit testcases * merged rev1418 and fixed code so that less than 1G image can be migrated * Created the filters directory in nova/scheduler * removed admincontext middleware * updates from review * merge from trunk * fix merges from trunk * Nuke hostname from nova-dhcpbridge. We don't use it * merge the trunk * need to actually assign the v4 network * Fixes to the OSAPI floating API extension DELETE. Updated to use correct args for self.disassociate (don't sweep exceptions which should cause test cases to fail under the rug). Additionally updated to pass network\_api.release\_floating\_ip the address instead of a dict * Merged trunk * Fixed unit tests * only run if the subnet and cidr exist * only run if the subnet and cidr exist * merge from trunk * make sure network\_size gets set * merge from trunk * don't require ipv4 * forgot the closing paren * use subnet iteration from netaddr for subnet calculation * Fix a typo that causes ami images to launch with a kernel as ramdisk when using xen * Fixing a 500 error when -1 is supplied for flavorRef on server create * rewriting parsing * fix typo that causes ami instances to launch with a kernal as ramdisk * Merged trunk * Allows for a tunable number of SQL connections to be maintained between services and the SQL server using new configuration flags. Only applies when using the MySQLdb dialect in SQLAlchemy * Merged trunk * Fixes pep8 issues in test\_keypairs.py * Merged trunk * start of day * Fixes to the OSAPI floating API extension DELETE. Updated to use correct args for self.disassociate (don't sweep exceptions which should cause test cases to fail under the rug). Additionally updated to pass network\_api.release\_floating\_ip the address instead of a dict * API needs virtual\_interfaces.instance joined when pulling instances from the DB. Updated instance\_get\_all() to match instance\_get\_all\_by\_filters() even though the former is only used by nova-manage now. (The latter is used by the API) * remove extra log statements * join virtual\_interfaces.instance for DB queries for instances. updates instance\_get\_all to match instance\_get\_all\_by\_filters * remove accidentally duplicated flag * merged trunk * add keystone middlewares for ec2 api * Merged with trunk * added userdata entry in the api paste ini * Initial version * Accidentally added inject\_files to merge * Support for management of security groups in OS API as a new extension * Updates to libvirt, write metadata, net, and key to the config drive * prefixed with os- for the newly added extensions * Merged with trunk * Author added * allow scheduling topics to multiple drivers * Check compressed image size and PEP8 cleanup * v1.1 API also requires the server be returned in the body * capabilities fix, run\_as\_root fix * lp824780: fixed typo in update\_service\_capabilities * fix pep8 * spacing fixes * fixed pep8 issue * merge from trunk * fixed v1.0 stuff with X-Auth-Project-Id header, and fixed broken integrated tests * merged with 1416 * fixing id parsing * moved vsa\_id to metadata. Added search my meta * Refactored the scheduler classes without changing functionality. Removed all 'zone-aware' naming references, as these were only useful during the zone development process. Also fixed some PEP8 problems in trunk code * Added search instance by metadata. 
get\_all\_by\_filters should filter deleted * got rid of tenant\_id everywhere, got rid of X-Auth-Project-Id header support (not in the spec), and updated tests * Silly fixes * v1.0 and v1.1 API differs for PUT, so split them out Update tests to match API * Removed postgres, bug in current ubuntu package which won't allow it to work easily. Will add a bug in LP * minor cleanup * Added availability zone support to the Create Server API * Make PUT /servers/ follow the API specs and return a 200 status * More logging * removed extra paren * Logging for SQLAlchemy type * merged trunk * Fixed per HACKING * \* Removes rogue direct usage of subprocess module by proper utils.execute calls \* Adds a run\_as\_root parameter to utils.execute, that prefixes your command with FLAG.root\_helper (which defaults to 'sudo') \* Turns all sudo calls into run\_as\_root=True calls \* Update fakes accordingly \* Replaces usage of "sudo -E" and "addl\_env" parameter into passing environment in the command (allows it to be compatible with alternative sudo\_helpers) \* Additionally, forces close\_fds=True on all utils.execute calls, since it's a more secure default * Remove doublequotes from env variable setting since they are literally passed * Changed bad server actions requests to raise an HTTP 400 * removed typos, end of line chars * Fixed broken unit testcases * Support for postgresql * merge from trunk * tenant\_id -> project\_id * Adding keypair support to the openstack contribute api * elif and FLAG feedback * Removed un-needed log line * Make sure to not use MySQLdb if you don't have it * get last extension-based tests to pass * Allows multiple MySQL connections to be maintained using eventlet's db\_pool * Removed verbose debugging output when capabilities are reported. This was clogging up the logs with kbytes of useless data, preventing actual helpful information from being retrieved easily * Removed verbose debugging output when capabilities are reported * Updated extensions to use the TenantMapper * fix pep8 issues * Fixed metadata PUT routing * These fixes are the result of trolling the pylint violations here * Pass py\_modules=[] to setup to avoid installing run\_tests.py as a top-level module * Add bug reference * Pass py\_modules=[] to setup to avoid installing run\_tests.py as a top-level module * fix servers test issues and add a test * added project\_id for flavors requests links * added project\_id for images requests * merge trunk * fix so that the exception shows up in euca2ools instead of UnknownError * Dropped vsa\_id from instances * import formatting - thx * List security groups project wise for admin users same as other users * Merged with trunk * merge with nova-1411. fixed * pep8 fix * use correct variable name * adding project\_id to flavor, server, and image links for /servers requests * Merged with trunk * tests pass * merge from trunk * merged with nova-1411 * This branch makes sure to detach fixed ips when their associated floating ip is deallocated from a project/tenant * adding other emails to mailmap * add Keypairs to test\_extensions * adding myself to authors * This adds the servers search capabilities defined in the OS API v1.1 spec.. and more for admins * Be more tolerant of agent failures. 
It is often the case there is only a problem with the agent, not with the instance, so don't claim it failed to boot so quickly * Updated the EC2 metadata controller so that it returns the correct value for instance-type metadata * added tests - list doesn't pass due to unicode issues * initial port * merged trunk * Be more tolerant of agent failures. The instance still booted (most likely) so don't treat it like it didn't * Updated extensions to expect tenant ids Updated extensions tests to use tenant ids * Update the OSAPI v1.1 server 'createImage' and 'createBackup' actions to limit the number of image metadata items based on the configured quota.allowed\_metadata\_items that is set * Fix pep8 error * fixing one pep8 failure * I think this restores the functionality .. * Adds missing nova/api/openstack/schemas to tarball * Instance metadata now functionally works (completely to spec) through OSAPI * updated v1.1 flavors tests to use tenant id * making usage of 'delete' argument more clear * Fix the two pep8 issues that sneaked in while the test was disabled * Fix remaining two pep8 violations * Updated TenantMapper to handle resources with parent resources * updating tests; fixing create output; review fixes * OSAPI v1.1 POST /servers now returns a 202 rather than a 200 * Include missing nova/api/openstack/schemas * Rename sudo\_helper FLAG into root\_helper * Minor fix to reduce diff * Initial validation for ec2 security groups name * Remove old commented line * Command args can be a tuple, convert them to list * Fix usage of sudo -E and addl\_env in dnsmasq/radvd calls, remove addl\_env support, fix fake\_execute allowed kwargs * Use close\_fds by default since it's good for you * Fix ajaxterm's use of shell=True, prevent vmops.py from running its own version of utils.execute * With this branch, boot-from-volume can be marked as completed in some sense. The remaining is minor if any and will be addressed as bug fixes * Update the curl command in the \_\_public\_instance\_is\_accessible function of test\_netadmin to return an error code which we can then check for and handle properly. This should allow calling functions to properly retry and timeout if an actual test failure happens * updating more test cases * changing server create response to 202 * Added xml schema validation for extensions resources. Added corresponding xml schemas. Added lxml dep, which is needed for doing xml schema validation * Fixing a bug in nova.utils.novadir() * Adds the ability to read/write to a local xenhost config. No changes to the nova codebase; this will be used only by admin tools that have yet to be created * fixed conditional because jk0 is very picky :) * Fixed typo found in review * removing log lines * added --purge optparse for flavor delete * making server metadata work functionally * cleaning up instance metadata api code * Updated servers tests to use tenant id * Set image progress to 100 if the image is active * Cleaned up merge messes * Merged trunk * cleaned up unneeded line * nova.exception.wrap\_exception will re-raise some exceptions, but in the process of possibly notifying that an exception has occurred, it may clobber the current exception information. nova.utils.to\_primitive in particular (used by the notifier code) will catch and handle an exception clobbering the current exception being handled in wrap\_exception. 
Eventually when using the bare 'raise', it will attempt to raise None resulting a completely different and unhelpful exception * remove obsolete script from setup.py * assert that vmops.revert\_migration is called * Import sys as well * Resolve conflicts and fixed broken unit testcases * This branch adds additional capability to the hosts API extension. The new options allow an admin to reboot or shutdown a host. I also added code to hide this extension if the --allow-admin-api is False, as regular users should have no access to host API calls * adding forgotten import for logging * Adds OS API 1.1 support * Updated test\_images to use tenant ids * Don't do anything with tenant\_id for now * Review fixes * fixed wrong syntax * Assign tenant id in nova.context * another trunk merge * Merged trunk * Merged trunk * Cleaned up some old code added by the last merge * Fixed some typos from the last refactoring * Moved the restriction on host startup to the xenapi layer.: * Remove nova/tests/network, which was accidentally included in commit * upper() is even better * merged with 1383 * Updated with code changes on LP * Merged trunk * Save exception and re-raise that instead of depending on thread local exception that may have been clobbered by intermediate processing * Adding \_\_init\_\_.py files * Adds ability to disable snapshots in the Openstack API * Sync trunk * Set image progress to 100 if the image is active * Sync trunk * Update the curl command in the \_\_public\_instance\_is\_accessible function of test\_netadmin to return an error code which we can then check for and handle properly. This should allow calling functions to properly retry and timout if an actual test failure happens * ZoneAwareScheduler classes couldn't build local instances due to an additional argument ('image') being added to compute\_api.create\_db\_entry\_for\_new\_instance() at some point * simplified test cases further, thanks to trunk changes * Added possibility to mark fixed ip like reserved and unreserved * Update the OSAPI v1.1 server 'createImage' and 'createBackup' actions to limit the number of image metadata items based on the configured quota.allowed\_metadata\_items that is set * Pep8 fix * zone\_aware\_scheduler classes couldn't build instances due to a change to compute api's create\_db\_entry\_for\_new\_instance call. now passing image argument down to the scheduler and through to the call. updated a existing test to cover this * Adding check to stub method * moving try/except block, and changing syntax of except statement * Fixes broken image\_convert. The context being passed to glance image service was not a real context * Using decorator for snapshots enabled check * Disable flag for V1 Openstack API * adding logging to exception in delete method * Pass a real context object into image service calls * Adding flag around image-create for v1.0 * Refactored code to reduce lines of code and changed method signature * If ip is deallocated from project, but attached to a fixed ip, it is now detached * Glance Image Service now understands how to use glance client to paginate through images * Allow actions queries by UUID and PEP8 fixes * Fixed localization review comment * Allow actions queries by UUID and PEP8 fixes * Fixed review comments * fixing filters get * fixed per peer review * fixed per peer review * re-enabling sort\_key/sort\_dir and fixing filters line * Make sure mapping['dns'] is formatted correctly before injecting via template into images. 
mapping['dns'] is retrieved from the network manager via info['dns'], which is a list constructed of multiple DNS servers * Add a generic image service test and run it against the fake image service * Implemented @test.skip\_unless and @test.skip\_if functionality in nova/test.py * merged with 1382 * Updates v1.1 servers/id/action requests to comply with the 1.1 spec * fix typo * Moving from assertDictEqual to assertDictMatch * merging trunk * merging trunk * Add exception logging for instance IDs in the \_\_public\_instance\_is\_accessible smoke test function. This should help troubleshoot an intermittent failure * adding --fixes * glance image service pagination * Pass tenant ids through on on requests * methods renamed * Add exception logging for instance IDs in the \_\_public\_instance\_is\_accessible smoke test function. This should help troubleshoot an intermittent failure * Removed most direct sudo calls, make them use run\_as\_root=True instead * pep8 violations sneaking into trunk? * pep8 violations sneaking into trunk? * trunk merge * Fixes lp821144 * Make disk\_format and container\_format optional for libvirt's snapshot implementation * pep8 * fixed up zones controller to properly work with 1.1 * Add generic image service tests * Add run\_as\_root parameter to utils.execute, uses new sudo\_helper FLAG to prefix command * Remove spurious direct use of subprocess * Added virtual interfaces REST API extension controller * Trunk contained PEP8 errors. Fixed * Trunk merge * fix mismerge * Added migration to add uuid to virtual interfaces. Added uuid column to models * merged trunk * merged with nova trunk * Launchpad automatic translations update * fixed pep8 issue * utilized functools.wraps * added missing tests * tests and merge with trunk * removed redundant logic * merged trunk * For nova-manage network create cmd, added warning when size of subnet(s) being created are larger than FLAG.network\_size, in attempt to alleviate confusion. For example, currently when 'nova-manage network create foo 192.168.0.0/16', the result is that it creates a 192.168.0.0/24 instead without any indication to why * Remove instances of the "diaper pattern" * Read response to reset the connection state-machine for the next request/response cycle * Added explanations to exceptions and cleaned up reboot types * fix pep8 issues * fixed bug , when logic searched for next avail cidr it would return cidrs that were out of range of original requested cidr block. added test for it * Adding missing module xmlutil * fixed bug, wasn't detecting smaller subnet conflict properly added test for it * Properly format mapping['dns'] before handing off to template for injection (Fixes LP Bug #821203) * Read response to reset HTTPConnection state machine * removed unnecessary context from test I had left there from prior * move ensure\_vlan\_bridge,ensure\_bridge,ensure\_vlan to the bridge/vlan specific vif-plugging driver * re-integrated my changes after merging trunk. fixed some pep8 issues. sorting the list of cidrs to create, so that it will create x.x.0.0 with a lower 'id' than x.x.1.0 (as an example). <- was causing libvirtd test to fail * Revert migration now finishes * The OSAPI v1.0 image create POST request should store the instance\_id as a Glance property * There was a recent change to how we should flip FLAGS in tests, but not all tests were fixed. This covers the rest of them. I also added a method to test.UnitTest so that FLAGS.verbose can be set. 
This removes the need for flags to be imported from a lot of tests * Bad method call * Forgot the instance\_id parameter in the finish call * Merged in the power action changes * Removed test show() method * Fixed rescue/unrescue since the swap changes landed in trunk. Minor refactoring (renaming callback to \_callback since it's not used here) * Updates to the XenServer glance plugin so that it obtains the set of existing headers and sends them along with the request to PUT a snapshotted image into glance * Added admin-only decorator * This updates nova-ajax-console-proxy to correctly use the new syntax introduced last week by Zed Shaw * Merged trunk * Changed all references to 'power state' to 'power action' as requested by review * Added missing tests for server actions Updated reboot to verify the reboot type is HARD or SOFT Fixed case of having an empty flavorref on resize * Added more informative docstring * Added XML serialization for server actions * Removed debugging code * Updated create image server action to respect 1.1 * Fixes lp819397 * Fixed rescue unit tests * Nuke hostname. We don't use it * Split serverXMLDeserializers into v1.0 and v1.1 * another merge * Removed temporary debugging raise * Merged trunk * modify \_setup\_network for flatDHCP as well * Merged trunk * Added xenhost config get/setting * fix syntax error * Fixed rescue and unrescue * remove storing original flags verbosity * remove set\_flags\_verbosity.. it's not needed * Merged trunk * OS v1.1 is now the default into novarc * added NOVA\_VERSION to novarc * remove unused reference to exception object * Add a test for empty dns list in network\_info * Fix comments * uses 2.6.0 novaclient (OS API 1.1 support) * Fix to nova-ajax-console-proxy to use the new syntax * Update the OS API servers metadata resource to match the current v1.1 specification - move /servers//meta to /servers//metadata - add PUT /servers//metadata * fix pep8 issues that are in trunk * test\_host\_filter setUp needs to call its super * fix up new test\_server\_actions.py file for flags verbosity change * merged trunk * fixing typo * Sync with latest tests * The logic for confirming and reverting resizes was flipped. As a result, reverting a resize would end up deleting the source (instead of the destination) instance, and confirming would end up deleting the destination (instead of the source) instance * Found a case where an UnboundLocalError would be raised in xenapi\_conn.py's wait\_for\_task() method. This fixes the problem by moving the definition of the unbound name outside of the conditional * Moves code restarting instances after compute node reboot from libvirt driver to compute manager; makes start\_guests\_on\_host\_boot flag global * Moved server actions tests to their own test file. Updated stubbing and how flags are set to be in line with how they're supposed to be set in tests * merging trunk * add test for spawning a xenapi instance with an empty dns list * Nova uses instance\_type\_id and flavor\_id interchangeably when they almost always different values. This can often lead to an instance changing instance\_type during migration because the values passed around internally are wrong. This branch changes nova to use instance\_type\_id internally and flavor\_id in the API. 
This will hopefully avoid confusion in the future * The OSAPI v1.0 image create POST request should store the instance\_id as a Glance property * Linked to bug * Changed the definition of the 'action' dict to always occur * Updates to the XenServer glance plugin so that it obtains the set of existing headers and sends them along with the request to PUT a snapshotted image into glance * Fixed rescue and unrescue * Added in tests that verify tests are skipped appropriately * Merged trunk * Merged dietz' branch * Update HACKING: - Make imports more explicit - Add some dict/list formatting guidelines - Add some long method signature/call guidelines - Add explanation of i18n * Pep8 cleanup * Defaults \`dns\` to '' if not present, just as we do with the other network info data * Removes extraneous bodies from certain actions in the OSAPI servers controller * Revert should be sent to destination node and confirm should be sent to source node * Conditionals were not actually runing the tests when they were supposed to. Renamed example testcases * fix pylint W0102 errors * Remove whitespaces from name and description before creating security group * Remove instances of the "diaper pattern" * Fixes lp819397 * Initial version * Load instance\_types in downgrade method too * Fix trailing whitespace (PEP8) * fix test\_cloud FLAGS setting * dist scheduler flag setting fixes * fix scheduler tests that set FLAGS * fix more tests that use FLAGS setting * all subclasses of ComputeDriver should fully implement the interface of the destroy method * align multi-line string * fix test\_s3 FLAGS uses * switch FLAGS.\* = in tests to self.flags(...) remove unused cases of FLAGS from tests modified test.TestCase's flags() to allow multiple overrides added missing license to test\_rpc\_amqp.py * follow convention when raising exceptions * pep8 fixes * use an existing exception * use correct exception name * fix duplicate function name * fix undefined variable error * fix potential runtime exception * remove unused imports * remove bit-rotted code * more cleanup of API tests regarding FLAGS * fix use of FLAGS in openstack API servers tests to use the new way * Removes extraneous body argument from server controller methods * Merged trunk * Merged trunk * Default dns to '' if not present * replaced raise Exception with self.fail() * Removed dependancy on os.getenv. Test cases now raise Exception if they are not properly skipped * PEP8 issue * whoops, got a little comma crazy * Merged trunk and fixed conflicts to make tests pass * fumigate non-pep8 code * Use flavorid only at the API level and use instance\_type\_id internally * Yet another conflict resolved * forgot to remove comment * updated to work w/ changes after merged trunk fixing var renaming. the logic which forces default to FLAGS.network\_size if requested cidr was larger, was also applying to requested cidrs smaller than FLAGS.network\_size. 
Requested cidrs smaller than FLAGS.network\_size should be ignored and not overriden * merged from trunk * merged from trunk * merge trunk * Launchpad automatic translations update * Resolved pep8 errors * renaming test\_skip\_unless\_env\_foo\_exists() * merging trunk * Removed trailing whitespace that somehow made it into trunk * Merged trunk * Removed duplicate methods created by previous merge * Fixes lp819523 * Fix for bug #798298 * fix for lp816713: In instance creation, when nova-api is passed imageRefs generated by itself, strip the url down to an id so that default glance connection params are used * Added check for --allow-admin-api to the host API extension code * Another unittest * Merged trunk * Add support for 300 Multiple Choice responses when no version identifier is used in the URI (or no version header is present) * Merged trunk * Glance has been updated for integration with keystone. That means that nova needs to forward the user's credentials (the auth token) when it uses the glance API. This patch, combined with a forth-coming patch for nova\_auth\_token.py in keystone, establishes that for nova itself and for xenapi; other hypervisors will need to set up the appropriate hooks for their use of glance * Added changes from mini server * raise correct error * Minor test fixes * fix failing tests * fix pep8 complaints * merge from trunk * Fixed a missing space * Bad merge res * merge the trunk * fix missing method call and add failing test * Removed duplicate xattr from pip-requires * Fixed merge issues * Merged trunk * merged trunk * remove unused parameter * Merged trunk * Merged from lab * fix pylint errors * fix pylint errors * merge from trunk * Moves image creation from POST /images to POST /servers//action * Fixed several typos * Changed migration to be an admin only method and updated the tests * - Remove Twisted dependency from pip-requires - Remove Twisted patch from tools/install\_venv.py - Remove eventlet patch from tools/install\_venv.py - Remove tools/eventlet-patch - Remove nova/twistd.py - Remove nova/tests/test\_twistd.py - Remove bin/nova-instancemonitor - Remove nova/compute/monitor.py - Add xattr to pip-requires until glance setup.py installs it correctly - Remove references to removed files from docs/translations/code * Fix an error in fetch\_image() * Get instance by UUID instead of id * Merged trunk * Added the powerstate changes to the plugin * pull-up from trunk/fix merge conflict * fixing typo * refactored tests * pull-up from trunk * Removing the xenapi\_image\_service flag in favor of image\_service * cleanup * Merged trunk * abstraction of xml deserialization * fixing method naming problem * removing compute monitor * merge from trunk * code was checking for key in sqlalchemy instance and will ignore if value is None, but wasn't working if floating\_ip was a non-sqlalchemy dict obj. Therefore, updated the error checking to work in both caes * While we currently trap JSON encoding exceptions and bail out, for error notification it's more important that \*some\* form of the message gets out. So, we take complex notification payloads and convert them to something we know can be expressed in JSON * Better error handling for resizing * Adds the auth token to nova's RequestContext. 
This will allow for delegation, i.e., use of a nova user's credentials when accessing other services such as glance, or perhaps for zones * merged trunk rev1348 * Launchpad automatic translations update * added some tests for network create & moved the ipv6 logic back into the function * merged with nova trunk * Added host shutdown/reboot conditioning * avoid explicit type checking, per brian waldon's comment * Added @test.skip\_unless and @test.skip\_if functionality. Also created nova/tests/test\_skip\_examples.py to show the skip cases usage * fix LinuxBridgeInterfaceDriver * merge trunk, resolve conflict in net/manater.py in favor of vif-plug * initial commit of vif-plugging for network-service interfaces * Merged trunk * pep8 fixes * Controller -> self * Added option for rebooting or shutting down a host * removed redundant logic * merged from trunk * adding a function with logic to make the creation of networks validation a bit smarter: - detects if the cidr is already in use - when specifying a supernet to be split into smaller subnets via num\_networks && network\_size, ensures none of the returned subnets are in use by either a subnet of the same size and range, nor a SMALLER size within the same range. - detects if splitting a supernet into # of num\_networks && network\_size will fit - detects if the supernet/cidr specified is conflicting with a network cidr that currently exists that may be a larger supernet already encompassing the specified cidr. " * Carry auth\_token in nova's RequestContext * merge with trunk, resolve conflicts * Revert hasattr() check on 'set\_auth\_token' for clients * it makes the pep8, or else it gets the vim again * merge from trunk * Fixes this issue that I may have introduced * Update compute tests to use new exceptions * Resync to trunk * Remove copy/paste error * Launchpad automatic translations update * Launchpad automatic translations update * Fixed review comments: Put parsing logic of network information in create\_instance\_helper module and refactored unit testcases as per the changed code * pep8 * wow, someone whent all crazy with exceptions, why not just return an empty list? * Only call set\_auth\_token() on the glance client if there's one available * Make unit tests pass * merging * only attempt to get a fixed\_up from a v4 subnet if there is a v4 subnet * FlavorNotFound already existed, no need to create another exception * Created exceptions for accepting in OSAPI, and handled them appropriately * only create fixed\_ips if we have an ipv4 range * Revert to using context; to avoid conflict, we import context module as nova\_context; add context to rescue * You see what happens Danny when you forget to close the parenthesis * Merged with trunk * Merged trunk * allow the manager to try to do the right thing * allow getting by the cidr\_v6 * the netmask is implied by the cidr, so use that to display the v6 subnet * either v4 or v6 is required * merging trunk * pull-up from trunk and conflict resolution * merge trunk * stwart the switch to just fixed\_range * typo * Round 1 of changes for keystone integration. \* Modified request context to allow it to hold all of the relevant data from the auth component. 
\* Pulled out access to AuthManager from as many places as possible \* Massive cleanup of unit tests \* Made the openstack api fakes use fake Authentication by default * require either v4 or v6 * pull-up from trunk * Fix various errors discovered by pylint and pyflakes * fixing underline * removing extra verbage * merged trunk * This change creates a minimalist API abstraction for the nova/rpc.py code so that it's possible to use other queue mechanisms besides Rabbit and/or AMQP, and even use other drivers for AMQP rather than Rabbit. The change is intended to give the least amount of interference with the rest of the code, fixes several bugs in the tests, and works with the current branch. I also have a small demo driver+server for using 0MQ which I'll submit after this patch is merged * removing dict() comment * adding more on return\_type in docstrings * Fixes issue with OSAPI passing compute API a flavorid instead of an instance identifier. Added tests * made the whole instance handling thing optional * Reorganize the code to satisfy review comments * pull-up from trunk; fix problem obscuring context module with context param; fix conflicts and no-longer-skipped tests * remove unused import * --Stolen from https://code.launchpad.net/~cerberus/nova/lp809909/+merge/68602 * removing 'Defining Methods' paragraph * rewording * Use the util.import\_object to import a module * rewording * one last change * upgrades * expanding * merged trunk and fix time call * updating HACKING * Fixing lxml version requirement * Oops, I wasn't actually being compatible with the spec here * bumping novaclient version * Fixes lp:818050 * Updated resize to call compute API with instance\_type identifiers instead of flavor identifiers. Updated tests * fix run\_tests.sh * merge trunk * Fixed changes missed in merge * fix more spacing issues, and removed self link from versions template data * merged trunk * added instance support to to\_primitive and tests * merged trunk and fixed post\_live\_migratioin\_at\_destination to get nw\_info * Removing unnecessary imports * Added xml schema validation for extensions resources. Added corresponding xml schemas. Added lxml dep, which is needed for doing xml schema validation * remove extra log statement * api/ec2: rename CloudController.\_get\_instance\_mapping into \_format\_instance\_mapping * fixed typo * merge with trunk * fixed pep8 issues and removed unnecessary factory function * returned vsa\_manager, nova-manage arg and print changes * Added the config values to the return of the host\_data method * Adds XML serialization for servers responses that match the current v1.1 spec * Added methods to read/write values to a config file on the XenServer host * fix pep8 errors * minor cleanup * Removed unused Duplicate catch * Fix to\_dict() and elevated() to preserve auth\_token; revert an accidental change from context.get\_admin\_context() to simply context * Fixes bug 816604, which is the problem that timeformat in server responses for updated and created are incorrect. This fix just converts the datetime into the correct format * merging trunk * pep8 * moving server backup to /servers//action instead of POST /images * Simplified test cases * Rewrite ImageType enumeration to be more pythonic * refactoring and make self links correct (not hard coded) * Fix tests for checking pylint errors * Use utils.utcnow. 
Use True instead of literal 1 * Some tests for resolved pylint errors * simplify if statement * merge trunk * use wsgi XMLNS/ATOM vars * Updated deserialization of POST /servers in the OSAPI to match the latest v1.1 spec * Removed unused Duplicate catch * pull-up from trunk * Catch DBError for duplicate projects * Catch DBError for duplicate projects * Make network\_info truly optional * trunk infected with non-pep8 code * unicode instead of str() * Add a flag to set the default file mode of logs * merge trunk * make payload json serializable * moved test * Removed v1\_1 from individual tests * merge from trunk * merge to trunk * more commented code removed * some minor cosmetic work. addressed some dead code section * merged with nova-1336 * prior to nova-1336 merge * remove authman from images/s3.py and replace with flags * fix tests broken in the merge * merged trunk * fix undeclared name error * fix undeclared name error * fix undeclared name error * fix undeclared name errors * remove unused assignment which causes undeclared name error * fix undefined variable errors * fix call to nonexistant method to\_global\_ipv6. Add myself to authors file * Make network\_info truly optional * updates handling of arguments in nova-manage network create. updates a few of the arguments to nova-manage and related help. updates nova-manage to raise proper exceptions * forgot a line * fixed create\_networks ipv6 management * Fail silently * typo * --bridge defaults to br100 but with a deprecation warning and to be removed in d4 * Reverting to original code * use ATOM\_XMLNS everywhere * merge trunk * added unit testcase to increase code coverage * stub out VERSIONS for the tests * put run\_tests.sh back to how it was * Fixed conflict * Fail silently * Merged with trunk and fixed broken unit test cases * Fix the skipped tests in vmwareapi and misc spots. The vmware networking stuff is stubbed out, so the tests can be improved there by fixing the fakes * pep8 issue * refactoring MetadataXMLDeserializer in wsgi/common * move viewbuilder and serializer tests into their own test cases * Fix all of the skipped libvirt tests * fix typo * merged trunk * Fixes typo in attach volume * utilize \_create\_link\_nodes base class function * default the paramater to None, not sure why it was required to begin with * pass None in for nw\_info * added test for accept header of atom+xml on 300 responses to make sure it defaults back to json, and reworked some of the logic to make how this happens clearer * Drop FK before dropping instance\_id column * moved rest of build logic into builder * Drop FK before dropping instance\_id column * Removed FK import * Delete FK before dropping instance\_id column * oops! moved ipv6 block back into the for loop in network manager create\_networks * update everything to use global VERSIONS * merged trunk * change local variable name * updated handling of v6 in network manager create\_networks to it can receive None for v6 args * added ipv6 requirements to nova-manage network create. changed --network to --fixed\_range\_v4 * remove unexpected parameter * fixed xmlns issue * updated the bridge arg requirements based on manager * this change will require that local urls be input with a properly constructed local url: http://localhost/v1.1/images/[id]. Such urls are translated to ids at the api layer. 
Previously, any url ending with and int was ok * make atom+xml accept header be ignored on 300 responses in the VersionsRequestDeserializer * Removed superfluous parameter * Use auth\_token to set x-auth-token header in glance requests * Fixed the virt driver base * Some work on testing. Two cases related to lp816713 have some coverage already: using an id as an imageRef (test\_create\_instance\_v1\_1\_local\_href), and using a nova href as a url (test\_create\_instance\_v1\_1) * Remove xenapi\_inject\_image flag * Add a flag to set the default file mode of logs * fixed issue with factory for Versions Resource * Fix context argument in a test; add TODOs * improved the code per peer review * Add context argument a lot more places and make unit tests work * fix hidden breakage in test * Remove xenapi\_inject\_image flag * removed unused import * pep8 * pep8 * updated nova-manage create network. better help, handling of required args, and exceptions. Also updated FLAG flat\_network\_bridge to default to None * Re-enables and fixes test\_cloud tests that broke from multi\_nic * Fix for boto2 * Re-enables and fixes test\_cloud tests that broke from multi\_nic * add invalid device test and make sure NovaExceptions don't get wrapped * merge from trunk * pep8 * pep8 * updating common metadata xml serializer tests * Cleaned up test\_servers * Moved server/actions tests to test\_server\_actions.py * updating servers metadata resource * pull-up from trunk * Address merge review concerns * Makes security group rules with the newer version of the ec2 api and correctly supports boto 2.0 * merging parent branch servers-xml-serialization * updating tests * updated serializer tests for multi choice * pep8 cleanup * multi choice XML responses with tests * merged recent trunk * merge with trunk * Cherry-pick of tr3buchet's fix for add\_fixed\_ip\_to\_instance * Resolved conflicts with trunk * fix typo in attach\_volume * fix the last of them * fake plug for vif driver * couple more fixes * cleanup network create * code was checking for key in sqlalchemy instance but if floating\_ip is a non-sqlalchemy dict instance instead, value=None will cause NoneType exception * fix more tests * fix the first round of missing data * fix the skipped tests in vmwareapi xenapi and quota * Add myself to authors * Implements a simplified messaging abstraction with the least amount of impact to the code base * fix for lp816713: In instance creation, when nova-api is passed imageRefs generated by itself, strip the url down to an id so that default glance connection params are used * cloud tests all passing again * added multi\_choice test just to hit another resource * pep8 fixes * initial working 300 multiple choice stuff * cherry-pick tr3buchet's fix for milestone branch * cleanup * pep8 * pep8 * First pass at converting this stuff--pass context down into vmops. Still need to fix unit tests and actually use auth\_token from the context.. 
* pep8 and simplify rule refresh logic * pep8 * merging parent branch lp:~rackspace-titan/nova/osapi-create-server * adding xml deserialization for createImage action * remove some logging, remove extra if * compute now appends self.host to the call to add an additional fixed ip to an instance * Update security gropu rules to properly support new format and boto 2.0 * Updated test stubs to contain the correct data Updated created and updated in responses to use correct time format * pep8 compliance * VSA volume creation/deletion changes * moved v1.1 image creation from /images to /servers//action * fixed per peer review * passing host from the compute manager for add\_fixed\_ip\_to\_instance() * adding assert to check for progress attribute * removing extra function * Remove debugging code * cleanup * fixed minor issues * reverting tests to use imageRef, flavorRef * updating imageRef and flavorRef parsing * Updates to the compute API and manager so that rebuild, reboot, snapshots, and password resets work with the most recent versions of novaclient * merging trunk; resolving conflicts * Add OpenStack API support for block\_device\_mapping * queries in the models.Instance context need to reference the table by name (fixed\_ips) however queries in the models.FloatingIp context alias the tables out properly and return the data as fixed\_ip (which is why you need to reference it by fixed\_ip in that context) * added warning when size of subnet(s) being created are larger than FLAG.network\_size in attempt to alleviate confusion. For example, currently when 'nova-manage network create foo 192.168.0.0/16', the result is that it creates a 192.168.0.0/24 instead without any indication to why * xml deserialization works now * merged from trunk * merged trunk * merging trunk * pull-up from trunk * got rid of print * got rid of more xml string comparisons * atom test updates * got rid of some prints * got rid of string comparisons in serializer tests * removing objectstore and image\_service flag checking * Updates /servers requests to follow the v1.1 spec. Except for implementation of uuids replacing ids and access ips both of which are not yet implemented. Also, does not include serialized xml responses * fixed detail xml and json tests that got broken * updated atom tests * Updated ServerXMLSerializer to utilize the IPXMLSerializer * merged trunk * merge from trunk * fix pep8 issues * fix issue with failing test * merged trunk * I'm sorry, for my fail with rebasing. Any way previous branch grew to many other futures, so I supersede it. 1. Used optparse for parsing arg string 2. Added decorator for describe method params 3. Added option for assigning network to certain project. 4. Added field to "network list" for showing which project owns network * Moved the VIF network connectivity logic('ensure\_bridge' and 'ensure\_vlan\_bridge') from the network managers to the virt layer. In addition, VIF driver class is added to allow customized VIF configurations for various types of VIFs and underlying network technologies * merge with trunk, resolve conflicts * fix pep8 * Launchpad automatic translations update * removing rogue print * removing xenapi\_image\_service flag * adding to authors * fixing merge conflict * merge from trunk * initial stuff to get away from string comparisons for XML, and use ElementTree * merged with 1320 * volume name change. 
some cleanup * - Updates /images//meta and /images//meta/ to respect the latest specification - Renames ../meta to ../metadata - Adds PUT on ../metadata to set entire container (controller action is called update\_all) * Adds proper xml serialization for /servers//ips and /servers//ips/ * some cleanup. VSA flag status changes. returned some files * Pass on auth\_token * Warn user instead of ignoring * Added ensuring filter rules for all VMs * atom and xml\_detail working, with tests * Adds the -c|--coverage flag to run\_tests.sh to generate a local code coverage report * Estetic fix * Fix boot from volume failure for network block devices * Bug #796813: vmwareapi does not support distributed vswitch * modified to conform to latest AWS EC2 API spec for authorize & revoke ingress params using the IpPermissions data structure, which nests lists of CIDR blocks (IpRanges) as well as lists of Group data * Fixes faults to use xml serializers based on api version. This fixed bug 814228 * Fixes a typo in rescue instance in ec2 api. This is mnaser's fix, I just added a test to verify the change * Fixes bug 797250 where a create server request with the body '{"name":"server1"}' results in a HTTP 500 instead of HTTP 422 * adding xml serialization for /servers//ips and /servers//ips/ * add a simple broken test to verify the bug * Fixed old libvirt semantics, added resume\_guests\_state\_on\_host\_boot flag * xml version detail working with tests * adding testing to solidify handling of None in wsgi serialization * Added check to make sure there is a server entity in the create server request * Fixed some typos in log lines * removed prints, got versions detail tests passing, still need to do xml/atom * reverting some wsgi-related changes * merged trunk * removed print lines * This fixes the xml serialization of the /extensions and /extensions/foo resources. Add an ExtensionsXMLSerializer class and corresponding unit tests * added 1.0 detail test, added VersionRequestDeserializer to support Versions actions properly, started 300/multiple choice work * fix for reviews * Fixed bad test Fixed using wrong variable * Moved the exception handling of unplugging VIF from virt driver to VIF driver. Added better comments. Added OpenStack copyrights to libivrt vifs.py * pep8 + spelling fixes * Floating IP DB tests * Updated Faults controller to choose an xml serializer based on api version found in the request url * removing unnecessary assignments * Hotfix * Some estetic refactoring * Fixing PEP8 compliance issues * adding --fixes * fixing typos * add decorator for 'dns' params * merge with trunk, resolve conflicts * pep8 * Fixed logging * Fixed id * Fixed init\_host context name * Removed driver-specific autostart code * fix 'version' command * Add bug reference * Use admin context when fetching instances * Use subscript rather than attribute * Make IP allocation test work again * Adjust and re-enable relevant unit tests * some file attrib changes * some cosmetic changes. 
Prior to merge proposal * Added test\_serialize\_extenstions to test ExtensionsXMLSerializer.index() * tests: unit tests for describe instance attribute * tests: an unit test for nova.compute.api.API.\_ephemeral\_size() * tests: unit tests for nova.virt.libvirt.connection.\_volume\_in\_mapping() * tests/glance: unit tests for glance serializer * tests: unit tests for nova.virt * tests: unit tests for nova.block\_device * db/api: fix network\_get\_by\_cidr() * image/glance: teach glance block device mapping * tests/test\_cloud:test\_modify\_image: make it pass * nova/tests/test\_compute.py: make test\_compute.test\_update\_block\_device\_mapping happy * test\_metadata: make test\_metadata pass * test\_compute: make test\_compute pass * test\_libvirt: fix up for local\_gb * virt/libvirt: teach libvirt driver swap/ephemeral device * virt/libvirt: teach libvirt driver root device name * compute/api: pass down ephemeral device info * compute/manager, virt: pass down root device name/swap/ephemeral to virt driver * ec2/get\_metadata: teach block device mapping to get\_metadata() * api/ec2: implement describe\_instance\_attribute() * db/api: block\_device\_mapping\_update\_or\_create() * block\_device: introduce helper function to check swap or ephemeral device * ec2utils: factor generic helper function into generic place * Launchpad automatic translations update * Config-Drive happiness, minus smoketest * merged with latest nova-1308 * more unittest changes * Last patch broke libvirt mapping of network info. This fixes it * Fixes an issue with out of order operations in setup\_network for vlan mode in new ha-net code * Merged with 1306 + fix for dns change * update netutils in libvirt to match the 2 dns setup * merge * merge with 1305 * make sure dhcp\_server is available in vlan mode * Adds ability to set DNS entries on network create. Also allows 2 dns servers per network to be specified * pep8-compliant. Prior to merge with 1305 * Reverted volume driver part * pep cleanup * remove auth manager from instance helper * docstring update * pass in the right argument * pull out auth manager from db * merge trunk * default to None in the method signature * merged trunk * remove some more stubouts and fakes * clean up fake auth manager in other places * same as: https://code.launchpad.net/~tr3buchet/nova/lp812489/+merge/68448 fixes: https://bugs.launchpad.net/nova/+bug/812489 but in a slightly different context * pep8 * updating images metadata resource * ...and this is me snapping back into reality removing all trace of ipsets. Go me * fixed networks not defined error when creating instances when no networks exist * fix test\_access * This is me being all cocky, thinking I'll make it use ipsets.. 
* fix auth tests * Add i18n for logging, changed create\_bridge/vlan to should\_create\_bridge/vlan, changed unfilter\_instance's keyword param to positional, and added Dan's alternate ID to .mailmap * fix extensions tests * merge trunk * fix all tests * pep8 fixes * Updated the comments for VMWare VIF driver * initial test for v1.1 detail request * Moved restaring instances from livbirt driver to ComputeManager * Added network\_info to unfilter\_instance to avoid exceptions when shutting down instances * Removed unused exception object * Fixed the missing quotes for 802.1Qbh in libvirt template * add decorator for multi host option * Merged Dan's branch * Merged trunk * use new 'create\_vlan' field in XenAPIBridgeDriver * merge with trunk, resolve conflicts * remove IPy * for libvirt OVS driver, do not make device if it exists already * refactor xenapi vif plug to combine plug + get\_vif\_rec, tested and fixed XenAPIBridgeDriver * Correctly add xml namespaces to extensions xml * Added xml serialization for GET => /extensions. Added corresponding tests * merge ryu's branch * remove debugging * fix a whole bunch of tests * start removing references to AuthManager * change context to maintain exact time, store roles, use ids instead of objects and use a uuid for request\_id * Resolved conflict with trunk * Adds an XML serializer for limits and adds tests for the Limits view builder * pep8 * add in the right number of fields * pep8 * updated next-available to use utc time * merge trunk * rename in preperation for trunk merge * only include dns entries if they are not None in the database * Updated the compute API so that has\_finished\_migration uses instance\_uuid. Fixes some regressions with 1295-1296 * only use the flag if it evaluates true * Catch the FixedIpNotFoundForInstance exception when no fixed IP is mapped to instance * Updated time-available to be correct format Fixed old tests to respect this * This fixes issues with invalid flavorRef's being passed in returning a 500 instead of a 400, and adds tests to verify that two separate cases work * merge from trunk * Moving lp:~rackspace-titan/nova/extensions-xml-serialization to new branch based off of trunk. To remove dep on another branch * Perform fault wrapping in the openstack WSGI controller. This allows us to just raise webob Exceptions in OS API controllers with the appropriate explanations set. This resolves some inconsistencies with exception raising and returning that would cause HTML output to occur when faults weren't being handled correctly * pep8 and stuff * Some code was recently added to glance to allow the is\_public filter to be overridden. This allows us to get all images and filter properly on the nova side until keystone support is in glance. This fixes the issue with private images and snapshots disappearing from the image list * pep8 * Merged with trunk which includes ha-net changes * Updated the compute API so that has\_finished\_migration uses instance\_uuid. 
Fixes some regressions with 1295-1296 * Updating the /images and /images/detail OSAPI v1.1 endpoints to match spec w/ regards to query params * Ensure valid json/xml/atom responses for versions requests * Update OSAPI v1.1 /flavors, /flavors/detail, and /flavors/ to return correct xml responses * Renamed the virt driver resize methods to migration for marginally more understandable code * allow 2 dns servers to be specified on network create * allow 2 dns servers to be specified on network create * Fixes lp813006 * Fixes lp808949 - "resize doesn't work with recent novaclient" * minor fix * Some broken tests from my other merge * Fixed import issue * added tests, updated pep8 fixes * Changed test\_live\_migration\_raises\_exception to use mock for compte manager method * fixed another issue with invalid flavor\_id parsing, and added tests * minor cleanup * pep8 issue * cleanup * merge with trunk * Fixed the localization unit test error in the vif driver logging * cleanup tests and fix pep8 issues * removed vif API extension * Fixed Xenapi unit test error of test\_rescue * Slight indentation change * Merged Dan Wendlandt's branch and fixed pep8 errors * Added call to second coverage invocation * Fixed an issue where was invoked before it was defined in the case of a venv * - Add 'fixed\_ipv6' property to VirtualInterface model - Expose ipv6 addresses in each network in OSAPI v1.1 * forgot to add xenapi/vif.py * Perform fault wrapping in the openstack WSGI controller. This allows us to just raise webob Exceptions in OS API controllers with the appropriate explanations set. This resolves some inconsistencies with exception raising and returning that could cause HTML output to occur when an exception was raised * Added LimitsXMLSerializer Added LimitsViewBuidlerV11Test test case * Added create\_vlan/bridge in network unit test * Add OpenStack API support for block\_device\_mapping * Changed the default of VIF driver * Fixed PEP8 issues * Combined bridige and vlan VIF driver to allow better transition for current Nova users * Merged trunk * Merged lp:~~danwent/nova/network-refactoring * Adds HA networking (multi\_host) option to networks * CHanges based on feedback * Older Windows agents are very picky about the data sent to it. It also requires the public key for the password exchange to be in a string format and not an integer * adding flavors xml serialization * added versions list atom test and it passes * Set the status\_int on fault wrapped exceptions. Fixes WSGI logging issues when faults are returned * Fix plus passing tests * remove debug prints * merge ryu's branch * update for ryu's naming changes, fix some bugs. tested with OVSDriver only so far * Fixes bug #807764. Please disregard previous proposal with incorrect bug # * Whoops * Added LP bug num to TODO * Split tests into 2 * Fix email address in Author * Make sure reset\_network() call happens after we've determined the agent is running * pep8 * Merged trunk * Added Dan Wendlandt to Authors, and fixed failing network unit tests * merged trunk * Made all but one test pass for libvirt * Moved back allow\_project\_net\_traffic to libvirt conn * Set the status\_int on fault wrapped exceptions. 
Fixes WSGI logging issues when faults are returned * lp812489: better handling of periodic network host setup to prevent exception * add smoketests to verify image listing * default image to private on register * correct broken logic for lxc and uml to avoid adding vnc arguments (LP: #812553) * Stupid merge and fixed broken test * Most of the XenServer plugin files need the execute bit set to run properly. However, they are inconsistent as it is, with one file having the execute bit set, but the another having it set when it is not needed * Made the compute unit tests to pass * Host fix * Created \_get\_instance\_nw\_info method to clean up duplicate code * initial changes for application/atom+xml for versions * Update Authors file * network api release\_floating\_ip method will now check to see if an instance is associated to it, prior to releasing * merge from lp:~midokura/nova/network-refactoring-l2 * Corrects a bad model lookup in nova-manage * correct indentation * Fixes lp809587 * Fix permissions for plugins * Ya! Apparently sleep helps me fix failing tests * Some older windows agents will crash if the public key for the keyinit command is not a string * added 'update' field to versions * First attempt at vmware API VIF driver integration * Removed unnecessary context parameter * Merged get\_configurations and plug of VIF drivers * Moved ensure\_vlan\_bridge of vmware to VIF driver * Added network\_info parameter to all the appropriate places in virt layers and compute manager * remove xenapi\_net.py from network directory, as this functionality is now moved to virt layer * first cut of xenserver vif-plugging, some minor tweaks to libvirt plugging * Refactor device type checking * Modified alias ^Cd minor fixes * Merged with trunk * Reverted to original code, after network binding to project code is in integration code for testing new extension will be added * Fixed broken unit testcases after adding extension and minor code refactoring * Added a new extension instead of directly making changes to OS V1.1. API * have to use string 'none' and add a note * tell glance to not filter out private images * updated links to use proper atom:link per spec * Renamed setup\_vif\_network to plug\_vif * Fixes lp813006 - inconsistent DB API naming * move import network to the top * Merged lp:~danwent/nova/network-refactoring-l2 * merged from trunk * network api release\_floating\_ip method checks if an instance associated to the floating prior to releasing. added test * Added detroy\_vif\_network * Functionality fixed and new test passing * Updates to the compute API and manager so that rebuild, reboot, snapshots, and password resets work with the most recent versions of novaclient * better handling of periodic network host setup * Merged trunk * Removed blank lines * Fix unchecked key reference to mappings['gateway6']. Fixes LP #807764 * add downgrade * correct broken logic for lxc and uml to avoid adding vnc arguments (LP: #812553) * Beginnings of the patch * Fixed equality comparison bug in libvirt XML * Fixed bad parameters to setup\_vif\_networks * Zapped an extra newline * Merged with trunk * Add support for generating local code coverage report * respecting use\_ipv6 flag if set to False * merged trunk * merged trunk * fixed reviewer's comment. 1. ctxt -> context, 2. 
erase unnecessary exception message from nova.scheduler.driver * cleanup * merge of ovs L2 branch * missed the vpn kwarg in rpc * fix bad merge * change migration number * merged trunk * This change adds the basic boot-from-volume support to the image service * Fixed the broken tests again * Merging from upstream * Some missed instance\_id casts * pep8 cleanup * adding --fixes * adding fixed\_ipv6 property to VirtualInterface model; exposing ipv6 in api * VSA schedulers reorg * Merged with trunk * fix issues that were breaking vlan mode * fixing bad lookup * Updates to the XenServer agent plugin to fix file injection: * Don't jsonify the inject\_file response. It is already json * localization changes. Removed vsa params from volume cloud API. Alex changes * Added auth info to XML * returncode is an integer * - Fixed the conflict in vmops.py * Check returncode in get\_agent\_features * resolved pep8 issues * merged from trunk * Updated servers to choose XML serializer based on api version * pep8 * updated servers to use ServerXMLSerializer * added 'create' to server XML serializer * added 'detail' to server XML serializer * convert group\_name to string, in case it's a long * nova/api/ec2/cloud.py: Rearranged imports to be alphabetical as per HACKING * pep8'd * Extended test to check for error specific error code and test cover for bad chars * Some basic validation for creating ec2 security groups. (LP: #715443) * changed to avoid localization test failure * Initial test case proving we have a bug of, ec2 security group name can exceed 255 chars * added index to servers xml serializer * Change \_agent\_has\_method to \_get\_agent\_features. Update the inject files function so that it calls \_get\_agent\_features only once per injected file * pep8 * Moved Metadata Serialization Test * Added ServerXMLSerializer with working 'show' method Factored out MetadataXMLSerializer from images and servers into common * added missing drive\_types.py * added missing instance\_get\_all\_by\_vsa * merged with 1280 * VSA: first cut. merged with 1279 * Added some unit and integration tests for updating the server name via the openstack api * renamed priv method arg\_to\_dict since it's not just used for revoke. modified to conform to latest AWS EC2 API spec for authorize & revoke ingress params using the IpPermissions data structure, which nests lists of CIDR blocks (IpRanges) as well as lists of Group data * got rid of return\_server\_with\_interfaces and added return\_server\_with\_attributes * Added ServerXMLSerializationTest * take out print statements * Ensures a bookmark link is returned in GET /images. Before, it was only returned in GET /images/detail * One last nit * Tests passing again * put maxDiff in setUp * remove get\_uuid\_from\_href and tests * stop using get\_uuid\_from\_href for now * Updated with some changes from manual testing * Updates to the XenServer agent plugin to fix file injection: * merging trunk * use id in links instead of uuid * pep8 fixes * fix ServersViewBuilderV11Tests * Adds greater configuration flexibility to rate limiting via api-paste.ini.
In particular: * return id and uuid for now * merge with trunk * Adds distributed scheduler and multinic docs to the Developer Reference page * Added more view builder tests * merged wills revisions * Added ViewBuilderV11 tests Fixed bug with build detail * fix issues with uuid and old tests * - Present ip addresses in their actual networks, not just a static public/private - Floating ip addresses are grouped into the networks with their associated fixed ips - Add addresses attribute to server entities * Update the agent plugin so that it gets 'b64\_contents' from the args dict instead of 'b64\_file' (which isn't what nova sends) * Adding unit and integration tests for updating the server name via the 1.1 api * merge with trunk, resolve conflicts * remove argument help from docstrings + minor fix * Fixes Bug #810149 that had an incomplete regex * Existing Windows agent behaves differently than the Unix agents and require some workarounds to operate properly. Fixes are going into the Windows agent to make it behave better, but workarounds are needed for compatibility with existing installed base * Add possibility to call commands without subcommands * fix redundency * Updated Authors * Fixed remove\_version\_from\_href Added tests * mistakenly commited this code into my branch, reverting it to original from trunk * Merged with trunk and fixed pep errors * added integrated unit testcases and minor fixes * First pass * corrected catching NoNetworksDefined exception in host setup and getting networks for instance * catching the correct exception * Added ServersTestv1\_1 test case Changed servers links to use uuid instead of id * pep8 * Updated old tests * add support to write to stdout rather than file if '-' is specified. see bug 810157 * merging trunk * removed self links from flavors * added commands * exposing floating ips * updated image entity for servers requests * Update the agent plugin so that it gets 'b64\_contents' from the args dict instead of 'b64\_file' (which isn't what nova sends) * Use assertRaises instead of try/except--stupid brain-o * Added progress attribute to servers responses * fixing bad merge * pull-up from trunk, while we're at it * Comment on parse\_limits(); expand an exception message; add unit tests; fix a minor discovered bug * adding bookmark to images index * add updated and created to servers detail test, and make it work * removing mox object instantiation from each test; renaming \_param to filter\_name * add self to authors * use 'with' so that close is called on file handle * adding new query parameters * support '-' to indicate stdout in nova-manage project 'environment' and 'zip' * Improvements to nova-manage: 1. nova-manage network list now shows what belongs to what project, and what's the vlan id, simplifying management in case of several networks/projects 2. nova-manage server list [zone] - shows servers. 
Useful if you have many servers and want to list them in particular zone, instead of grep'ing nova-manage service list * Minor fixes * Merged with Trunk * updated to support and check for flavor links in server detail response * Updated responses for GET /images and GET /images/detail to respect the OSAPI v1.1 spec * merge * beginning server detail spec 1.1 fixup * Augment rate limiting to allow greater flexibility through the api-paste.ini configuration * merge from trunk * added unit testcases for validating the requested networks * Extends the exception.wrap\_exception decorator to optionally send an update to the notification system in the event of a failure * trunk merge * merging trunk * updating testing; simplifying instance-level code * pep8 * adding test; casting instance to dict to prevent sqlalchemy errors * merged branch lp:~rackspace-titan/nova/images-response-formatting * Add multinic doc and distributed scheduler doc to developer guide front page * merged trunk * Don't pop 'vpn' on kwargs inside a loop in RPCAllocateFixedIP.\_allocate\_fixed\_ips (fixes KeyError) * Added Mohammed Naser to Authors file * merge with trunk * fix reviewer's comment * Starting part of multi-nic support in the guest. Adds the remove\_fixed\_ip code, but is incomplete as it needs the API extension that Vek is working on * Don't pop 'vpn' on kwargs inside a loop in RPCAllocateFixedIP.\_allocate\_fixed\_ips (fixes KeyError's) * added unit test cases and minor changes (localization fix and added fixed\_ip validation) * Made sure the network manager accepts kwargs for FlatManager * Fix bug 809316. While attempting to launch cloudpipe instance via 'nova-manage vpn run' command, it comes up with IP from instances DHCP pool and not the second IP from the subnet, which break the forwarding rules that allow users to access the vpn. This is due 'allocate\_fixed\_ip' method in VlanManager doesn't receive 'vpn' as an argument from caller method and cloudpipe instances always considers as 'common' instances * cleanup * server create deserialization functional and tested * added xml deserialization unit test cases and fixe some pep errors * Updated some common.py functions to raise ValueErrors instead of HTTPBadRequests * Renamed 'nova-manage server list' -> 'nova-manage host list' to differentiate physical hosts from VMs * Allowed empty networks, handled RemoteError properly, implemented xml format for networks and fixed broken unit test cases * minor cleanup * Updated ImageXMLSerializer to serialize links in the server entity * Updated images viewbuilder to return links in server entity * updated images tests * merged trunk * pep8 * Updated remove\_version\_from\_href to be more intelligent Added tests * Fix PEP8 for 809316 bugfix * Fix 809316 bug which prevent cloudpipe to get valid IP * fix reviewer's comment * stray debug * pep8 * fixed marshalling problem to cast\_compute.. * fixed all failed unit test cases * This doesn't actually fix anything anymore, as the wsgi\_refactor branch from Waldon took care of the issue. 
However, a couple rescue unit tests would have caught this originally, so I'm proposing this to include those * fixes an issue where network host fails to start because a NoNetworksFound exception wasn't being handled correctly * Bad test * unknowingly made these changes, reverting to original * catch raise for networks not found in network host and instance setup * Merged with Trunk * add optional parameter networks to the Create server OS API * Changed broken perms * Tests * Made xen plugins rpm noarch * Set the proper return code for server delete requests * Making the xen plugins rpm to be noarch * merging trunk * Expanding OSAPI wsgi module to allow handling of headers and status codes * Updates some of the extra scripts in contrib and tools to current versions * updating code to implement tests * merging parent wsgi-refactor * allowing controllers to return Nonew * adding headers serializer * pep8 * minor refactoring * minor tweaks * Adds an extension which makes add\_fixed\_ip() available through an OpenStack extension * Comment out these two asserts; Sandy will uncomment in his merge-prop * Fix the bug 800759 * merging wsgi-refactor * adding 204 response code * pre trunk merge * Missing Author updated * Allows for ports in serverRef in image create through the openstack api * Adds security groups to metadata server. Also adds some basic tests for metadata code * fix comments * fix conflict * Added vif OS API extension to get started on it * Moved 'setup\_compute\_network' logic into the virt layer * Added myself to authors file * Fixed two typos in rescue API command * flaw in ec2 cloud api, \_get\_image method , if doing a search for aki-0000009, yet that image name doesn't exist, it strips off aki- and looks for any image\_id 0000009 and if there was an image match that happens to be an ami instead of aki, it will go ahead and deregister the ami instead. That behavior is unintended, so added logic to ensure that the original request image\_id matches the type of image being returned from database by matching against container\_format attr * Fixed up an incorrect key being used to check Zones * merged trunk * fix tests * make sure that old networks get the same dhcp ip so we don't break existing deployments * cleaned up on set network host to \_setup\_network and made networks allocate ips dynamically * Make the instance migration calls available via the API * Add a flag to disable ec2 or osapi * Add a flag to disable ec2 or osapi * refactor * easing up content-type restrictions * peer review fix - per vish: 'This method automatically converts unknown formats to ami, which is the same logic used to display unknown images in the ec2 api. This will allow you to properly deregister raw images, etc.' * Updated resize docstring * removing Content-Length requirement * Add docstrings for multinic extension * Add support for remove\_fixed\_ip() * Merged trunk * pull-up from trunk * Added unit tests * First take at migrations * Fixes bug #805604 "Multiprocess nova-api does not handles SIGTERM correctly." 
* image/fake: added teardown method * Updated mailmap due to wrong address in commit message * tests/test\_cloud: make an unit test, test\_create\_image, happy * nova/compute/api.py: fixed mismerge * ec2 api \_get\_image method logic flaw that strips the hex16 digit off of the image name, and does a search against the db for it and ignores that it may not be the correct image, such as if doing a search for aki-0000009, yet that image name doesn't exist, it strips off aki- and looks for any image\_id 0000009 and if there was an image match that happens to be an ami instead of aki, it will go ahead and deregister that. That behavior is unintended, so added logic to ensure that the original request image\_id matches the type of image being returned from database by matching against container\_format attr * sqlalchemy/migrate: resolved version conflict * merge with trunk * pull-up from trunk * unit test suite for the multinic extension * pull-up from trunk * Added server entity to images that only has id * Merging issues * Updated \_create\_link\_nodes to be consistent with other create\_\*\_nodes * Changed name of xml\_string to to\_xml\_string * Merging issuse * Temporarily moved create server node functionality into images.py Temporarily changed image XML tests to expect server entities with only ids * Removed serverRef from some tests and viewbuilder * Comments for bugfix800759 and pep8 * Removed bookmark link from non detailed image viewbuilder * implemented clean-up logic when VM fails to spawn for xenapi back-end * Adds the os-hosts API extension for interacting with hosts while performing maintenance. This differs from the previous merge prop as it uses a RESTful design instead of GET-based actions * Added param to keep current things from breaking until we update all of the xml serializers and view builders to reflect the current spec * Fixes Bug #805083: "libvirtError: internal error cannot determine default video type" when using UML * Dried up images XML serialization * Dried up images XML serialization * stricter zone\_id checking * trunk merge * cleanup * Added image index * pep8 fixes * Comments Incorporated for Bug800759 * Added API and supporting code for rebooting or shutting down XenServer hosts * fixed image create response test * Updated test\_detail * Merged trunk * make server and image metadata optional * Updated the links container for flavors to be compliant with the current spec * pep8 * Renamed function * moved remove\_version to common.py * unit tests * progress and server are optional * merged trunk * Add a socket server responding with an allowing flash socket policy for all requests from flash on port 843 to nova-vncproxy * pep8 compliance * Pull-up from trunk (post-multi\_nic) * changed calling signature to be (instance\_id, address) * correct test\_show * first round * removed extra comment * Further test update and begin correcting serialization * Removed a typo error in libvirt connection.py * updated expected xml in images show test to represent current spec * pep8 fixes * Added VIF driver concept * Added the missing 'self' parameter * after trunk merge * Changed the exception type for invalid requests to webob.exc.HTTPBadRequest * Added net\_attrs argument for ensure\_bridge/vlan methods * Added a L2 network driver for bridge/vlan creation * wrap list comparison in test with set()s * slightly more fleshed out call path * merged trunk * merge code i'd split from instance\_get\_fixed\_addresses\_v6 that's no longer needed to be split * fix metadata test 
since fixed\_ip searching now goes thru filters db api call instead of the get\_by\_fixed\_ip call * clean up compute\_api.get\_all filter name remappings. ditch fixed\_ip one-off code. fixed ec2 api call to this to compensate * clean up OS API servers getting * rename \_check\_servers\_options, add some comments and small cleanup in the db get\_by\_filters call * pep8 fix * convert filter value to a string just in case before running re.compile * add comment for servers\_search\_options list in the OS API Controllers * pep8 fixes * fix ipv6 search test and add test for multiple options at once * test fixes.. one more to go * resolved conflict incorrectly from trunk merge * merged trunk * doc string fix * fix OS API tests * test fixes and typos * typos * cleanup checking of options in the API before calling compute\_api's get\_all() * a lot of major re-work.. still things to finish up * merged trunk * remove debug from failing test * remove faults.Fault wrapper on exceptions * rework OS API checking of search options * merged trunk * missing doc strings for fixed\_ip calls I renamed * clarify a couple comments * test fixes after unknown option string changes * minor fixups * merged trunk * pep8 fixes * test fix for renamed get\_by\_fixed\_ip call * ec2 fixes * added API tests for search options fixed a couple of bugs the tests caught * allow 'marker' and 'limit' in search options. fix log format error * another typo * merged trunk * missed power\_state import in api fixed reversed compare in power\_state * more typos * typos * flavor needs to be converted to int from query string value * add image and flavor searching to v1.0 api fixed missing updates from cut n paste in some doc strings * added searching by 'image', 'flavor', and 'status' reverted ip/ip6 searching to be admin only * compute's get\_all should accept 'name' not 'display\_name' for searching Instance.display\_name. Removed 'server\_name' searching.. Fixed DB calls for searching to filter results based on context * Refactored OS API code to allow checking of invalid query string paremeters and admin api/context to the index/detail calls. v1.0 still ignores unknown parameters, but v1.1 will return 400/BadRequest on unknown options. admin\_api only commands are treated as unknown parameters if FLAGS.enable\_admin\_api is False. If enable\_admin\_api is True, non-admin context requests return 403/Forbidden * clean up checking for exclusive search options fix a cut n paste error with instance\_get\_all\_by\_name\_regexp * merged trunk * python-novaclient 2.5.8 is required * fix bugs with fixed\_ip returning a 404 instance searching needs to joinload more stuff * added searching by instance name added unit tests * pep8 fixes * Replace 'like' support with 'regexp' matching done in python. Since 'like' would result in a full table scan anyway, this is a bit more flexible. 
Make search options and matching a little more generic Return 404 when --fixed\_ip doesn't match any instance, instead of a 500 only when the IP isn't in the FixedIps table * start of re-work of compute/api's 'get\_all' to handle more search options * Silence warning in case tests.sqlite doesn't exist * fix libvirt test * update tests * don't set network host for multi\_host networks * add ability to set multi\_host in nova-manage and remove debugging issues * filter the dhcp to only respond to requests from this host * pass in dhcp server address, fix a bunch of bugs * PEP8 passed * Formatting fix * Proper Author section insertion (thx Eldar) * Signal handler cleanup, proper ^C handling * copy paste * make sure to filter out ips associated by host and add some sync for allocating ip to host * fixed zone id check * it is multi\_host not multi\_gateway * First round of changes for ha-flatdhcp * Updated the plugin to return the actual enabled status instead of just 'true' or 'false' * UML doesn't do vnc as well * fixed a bug which prevents suspend/resume after block-migration * Graceful shutdown of nova-api * properly displays addresses in each network, not just public/private; adding addresses attribute to server entities * Graceful shutdown of nova-api * Removing import of nova.test added to nova/\_\_init\_\_.py as problem turned out to be somewhere else (not in nova source code tree) * Fixing weird error while running tests. Fix required patching nova/tests/\_\_init\_\_.py explicitly importing nova.test * Added missing extension file and tests. Also modified the get\_host\_list() docstring to be more accurate about the return value * Silence warning in case tests.sqlite doesn't exist * Fix boot from volume failure for network block devices * Improvements to nova-manage: network list now includes vlan and projectID, added servers list filtered by zone if needed * removed unneeded old commented code * removed more stray debug output * removed debugging output * after trunk merge * Updated unit tests * remove logging statement * Found some additional fixed\_ip. entries in the Instance model context that needed to be updated * use url parse instead of manually splitting * Changed fixed\_ip.network to be fixed\_ips.network, which is the correct DB field * Added the GroupId param to any pertinent security\_group methods that support it in the official AWS API * Removes 'import IPy' introduced in recent commit * removing IPy import * trunk merge * Fixed the case where an exception was thrown when trying to get a list of flavors via the api yet there were no flavors to list * fix up tests * tweak * review fixes * completed api changes. still need plugin changes * Update the fixed\_ip\_disassociate\_all\_by\_timeout in nova.db.api so that it supports Postgres. Fixes casting errors on postgres with this function * after trunk merge * Fixes MANIFEST.in so that migrate\_repo/versions/\*.sql files are now included in tarball * Include migrate\_repo/versions/\*.sql in tarball * Ensure auto-delete is false on Topic Queues * refactored the security\_group tests a bit and broke up a few of them into smaller tests * Reverses the self.auto\_delete = True that was added to TopicPublisher in the bugfix for lp804063.
That bugfix should have only added auto\_delete = True to FanoutPublisher to match the previous change to FanoutConsumer * Added 'self.auto\_delete = True' to the two Publisher subclasses that lacked that setting * Added the '--fixes' tag to link to bug * Added self.auto\_delete = True to the Publisher subclasses that did not have that set * added multi-nic support * osapi test\_servers fixed\_ip -> fixed\_ips * updated osapi 1.0 addresses view to work with multiple fixed ips * trunk merge with migration renumbering * Allows subdirectory tests to run even if sqlite database doesn't exist * fix bug 800759 * Child Zone Weight adjustment available when adding Child Zones * trunk merge * blah * merge trunk * merged trunk * Windows instances will often take a few minutes setting up the image on first boot and then reboot. We should be more patient for those systems as well check if the domid changes so we can send agent requests to the current domid * These changes eliminate dependency between hostname and ec2-id. As I understand, there already was no such dependency, but still we had confusing names in code. Also I added more sophisticated generation of default hostname to give user possibility to set the custom one * updated images * updated servers * refactored flavors viewbuilder * fixes lp:803615 * added FlavorRef exception handling on create instance * refactored instance type code * Update the ec2 get\_metadata handler so it works with the most recent version of the compute API get\_all call which now returns a list if there is only a single record * - add metadata container to /images/detail and /images/ responses - update xml serialization to encode image entities properly * merging trunk * PEP8 fix * Adapt flash socket policy branch to new nova/wsgi.py refactoring * clean up * Update the ec2 get\_metadata handler so it works with the most recent version of the compute API get\_all call which now returns a list if there is only a single record * trunk merge * pep8 * pep8 * done and done * Update the fixed\_ip\_disassociate\_all\_by\_timeout in nova.db.api so that it supports Postgres. Fixes casting errors on postgres with this function * phew ... working * compute\_api.get\_all should be able to recurse zones (bug 744217). Also, allow to build more than one instance at once with zone\_aware\_scheduler types. Other cleanups with regards to zone aware scheduler.. * Updated v1.1 links in flavors to represent the current spec * fix issue of recurse\_zones not being converted to bool properly add bool\_from\_str util call add test for bool\_from\_str slight rework of min/max\_count check * fixed incorrect assumption that nullable defaults to false * removed port\_id from virtual interfaces and set network\_id to nullable * changes a few instance refs * merged trunk * Rename one use of timeout to expiration to make the purpose clearer * pulled in koelkers test changes * merge with trey * major refactor of the network tests for multi-nic * Merged trunk * Fixes Bug #803563 by changing how nova passes options in to glance. Before, if limit or marker were not set, we would pass limit=0 and marker=0 in to glance. However, marker is supposed to be an image id. With this change, if limit or marker are not set, they are simply not passed into glance.
Glance is free then to choose the default behavior * Fixed indentation issues Fixed min/max\_count checking issues Fixed a wrongly log message when zone aware scheduler finds no suitable hosts * Fixes Bug #803563 by changing how nova passes options in to glance. Before, if limit or marker were not set, we would pass limit=0 and marker=0 in to glance. However, marker is supposed to be an image id. With this change, if limit or marker are not set, they are simply not passed into glance. Glance is free then to choose the default behavior * Sets 'exclusive=True' on Fanout amqp queues. We create the queues with uuids, so the consumer should have exclusive access and they should get removed when done (service stop). exclusive implies auto\_delete. Fixes lp:803165 * don't pass zero in to glance image service if no limit or marker are present * more incorrect list type casting in create\_network * removed the list type cast in create\_network on the NETADDR projects * renumbered migrations again * Make sure test setup is run for subdirectories * merged trunk, fixed the floating\_ip fixed\_ip exception stupidity * trunk merge * "nova-manage vm list" was still referencing the old "image\_id" column, which was renamed to "image\_ref" at revision 1144 * Implement backup with rotation and expose this functionality in the OS API * Allow a port name in the server ref for image create * Fanout queues use unique queue names, so the consumer should have exclusive access. This means that they also get auto deleted when we're done with them, so they're not left around on a service restart. Fixes lp:803165 * pep8 fix * removed extra stubout, switched to isinstance and catching explicit exception * get latest branch * Deprecate -r for run\_tests.sh and adds -n, switching the default back to recreate * check\_domid\_changes is superfluous right now since it's only used when timeout is used. So simplify code a little bit * updated pip-requires for novaclient * Merged trunk * pip requires * adopt merge * clean up logging for iso SR search * moved to wrap\_exception approach * Fix 'undefined name 'e'' pylint error * change the default to recreate the db but allow -n for faster tests * Fix nova-manage vm list * Adding files for building an rpm for xenserver xenapi plugins * moved migration again & trunk merge * Brought back that encode under condition * Add test for hostname generation * Remove unnessesary (and possibly failing) encoding * Fix for bug 803186 that fixes the ability for nova-api to run from a source checkout * moved to wrap\_exception decorator * Review feedback * Merged trunk * Put possible\_topdir back in nova-api * Use milestone cut * Merged trunk * Let glance handle sorting * merging trunk * Review feedback * This adds system usage notifications using the notifications framework. 
These are designed to feed an external billing or similar system that subscribes to the nova feed and does the analysis * Refactored usage generation * pep8 * remove zombie file * remove unnecessary cast to list * merge with trey * OOPS * Whoops * Review feedback * skipping another libvirt test * Fix merge issue in compute unittest * adding unicode support to image metadata * Fix thinko in previous fix :P * change variable names to remove future conflict with sandy's zone-offsets branch * Fix yet more merge-skew * merge with trey * This branch allows LdapDriver to reconnect to LDAP server if connection is lost * Fix issues due to renaming of image\_id attrib * Re-worked some of the WSGI and WSGIService code to make launching WSGI services easier, less error prone, and more testable. Added tests for WSGI server, new WSGI loader, and modified integration tests where needed * Merged trunk * update a test docstring to make it clear we're testing multiple instance builds * log formatting typo pep8 fixes * Prevent test case from ruining other tests. Make it work in earlier python versions * pep8 fix * I accidently the whole unittest2 * Adds support for "extra specs", additional capability requirements associated with instance types * refactoring to compute from scheduler * remove network to project bind * resync with trunk * Add test for spawn from an ISO * Add fake SR with ISO content type * Revise key used to identify the SR used to store ISO images streamed from Glance * remerged trunk * Fix pep8 nits in audit script * Re-merging code for generating system-usages to get around bzr merge braindeadness * getting started * Added floating IP support in OS API * This speeds up multiple runs of tests to start up much faster because it only runs db migrations if the test db doesn't exist. It also adds the -r/--recreate-db option to run\_tests.sh to delete the tests db so it will be recreated * small formatting change * breaking up into individual tests for security\_groups * Proposing this because it is a critical fix before milestone. Suggestions on testing it are welcome * logging fixes * removed unneeded mac parameter to lease and release fixed ip functions * Made \_issue\_novaclient\_command() behave better. Fixed a bunch of tests * Review feedback * merge with trey * trunk merge, getting fierce. * Merged trunk * Added nova.version to utils.py * - Modified NOTE in vm\_util.py - Changed gettext line to nova default in guest\_tool.py * renaming tests * make sure basic filters are set up on instance restart * typo * changed extension alias to os-floating-ips * missed the bin line * Updating license to ASL 2.0 * update nova.sh * make nova-debug work with new style instances * Changed package name to openstack-xen-plugins per dprince's suggestion. All the files in /etc/xapi.d/plugins must be executable. Added dependency on parted. 
Renamed build.sh to build-rpm.sh * remove extra stuff from clean vlans * Clarify help verbiage * making key in images metadata xml serialization test null as well * making image metadata key in xml serialization test unicode * extracting images metadata xml serialization tests into specific class; adding unicode image metadata value test * merged blamar's simpler test * Pulled changes, passed the unit tests * Pulled trunk, merged boot from ISO changes * Removed now un-needed fake\_connection * Use webob to test WSGI app * fixed pep style * review issues fixed * sqlalchmey/migration: resolved version conflict * merge with trunk * Adding files for building an rpm for xenserver xenapi plugins * Upstream merge * merging trunk; adding error handling around image xml serialization * adding xml serialization test of zero images * pep8 * add metadata tests * add fake connection object to wsgi app * add support to list security groups * only create the db if it doesn't exist, add an option -r to run\_tests.py to delete it * Fix for bug #788265. Remove created\_at, updated\_at and deleted\_at from instance\_type dict returned by methods in sqlalchemy API * PEP8 fix * pep8 * Updated \_dict\_with\_extra\_specs docstring * Renamed \_inst\_type\_query\_to\_dict -> \_dict\_with\_extra\_specs * Merged from trunk * Add api methods to delete provider firewall rules * This small change restores single quotes and double quotes as they were before in the filter expression for retrieving the PIF (physical interface) xenapi should use for creating VLAN interfaces * Remove the unnecessary insertion of whitespace. This happens to be enough to make this patch apply on recent versions of XenServer / Xen Cloud Platform * Removes the usage of the IPy module in favor of the netaddr module * - update glance image fixtures with expected checksum attribute - ensure checksum attribute is handled properly in image service * mailmap * mailmap * configure number of attempts to create unique mac address * merged * trunk merged. conflicts resolved * added disassociate method to tests * fixes * tests * PEP8 cleanup * parenthesis issue in the migration * merge * some tests and refactoring * Trunk merge fixes * Merging trunk * implement list test * some tests * fix tests for extensions * Fixed snapshot logic * PEP8 cleanup * Refactored backup rotate * conflict resolved * stub tests * add stubs for flating api os api testing * merge with kirill * associate diassociate untested, first attept to test * Pep8 fix * Adding tests for backup no rotation, invalid image type * Fixed the default arguments to None instead of an empty list * Fixing PEP8 compliance issues * Trailing whitespace * Adding tests for snapshot no-name and backup no-name * Edited the host filter test case for extra specs * Removed an import * Merged from trunk * Remove extra debug line * Merged with trunk * Add reconnect test * Use simple\_bind\_s instead of bind\_s * Add reconnect on server fail to LDAP driver * ec2/cloud: typo * image/s3: typo * same typo i made before! * on 2nd run through filter\_hosts, we've already accounted for the topic memory needs converted to Bytes from MB * LeastCostScheduler wasn't checking for topic cost functions correctly. Added support so that --least\_cost\_scheduler\_cost\_functions only needs to have method names specified, instead of the full blown version with module and class name. 
Still works the old way, too * requested\_mem typo * more typos * typo in least cost scheduler * Unwind last commit, force anyjson to use our serialization methods * debug logging of number of instances to build in scheduler * missed passing in min/max\_count into the create/create\_all\_at\_once calls * Dealing with cases where extra\_specs wasn't defined * pep8 fixes * Renamed from flavor\_extra\_specs to extra\_specs * All tests passing * missed passing an argument to consume\_resources * Committing some broken code in advance of trying a different strategy for specifying args to extensions.ResourceExtensions, using parent * Starting to transition instance type extra specs API to an extension API * Now automatically populates the instance\_type dict with extra\_specs upon being retrieved from the database * pep8 * Created Bootstrapper to handle Nova bootstrapping logic * alter test, alter some debug statements * altered some tests * freakin migration numbering * trunk merge * removing erroneous block, must've been a copy and paste fat finger * specify keyword, or direct\_api proxy method blows up * updated the way vifs/fixed\_ips are deallocated and their relationships, altered lease/release fixed\_ip * Fixed syntax errors * This adds a way to create global firewall blocks that apply to all instances in your nova installation * Accept a full serverRef to OSAPI POST /images (snapshot) * Cast rotation to int * PEP8 cleanup * Fixed filter property and added logging * added tests * Implemented view and added tests * Adding missing import * Fixed issue with zero flavors returning HTTP 500 * Adding dict with single 'meta' key to /images//meta/ GET and PUT * fixing 500 error on v1.0 images xml * Small refactoring around getting params * libvirt test for deleting provider firewall rules * Make firewall rules tests idempotent, move IPy=>netaddr, add delete test * merge from trunk * although security\_group authorize & revoke tests already exist in test\_api, adding some direct ec2 api method tests. 
added group\_id param support to the pertinent security group methods * Make sure there are actually rules to test against * Add test for listing provider firewall rules * pep8: remove newline at end of file * Add admin api test case (like cloud test case) with a test for fw rules * Move migration to newer version * an int() was missed being removed from UUID changes when zone rerouting kicks in * fixing 500 on None metadata value * proper xml serialization for images * "nova-manage checks if user is member of proj, prior to adding role for that project" * adding metadata container to /images/detail and /images/ calls * Add xml serialization for all /images//meta and /images//meta/ responses * trunk merge and migration bump * handle errors for listing an instance by IP address * Merged markwash's fixes * Merged list-zone-recurse * str\_GET is a property * Fixed typo * Merged trunk * minor fixups * fixes for recurse\_zones and None instances with compute's get\_all * typo * add support for compute\_api.get\_all() recursing zones for more than just reservation\_id * Change so that the flash socket policy server is using eventlet instead of twisted and is running in the same process as the main vnx proxy * ec2/cloud: address review * compute/api: an unit test for \_update\_{image\_}bdm * ec2/cloud: unit tests for parser/formatter of block device mapping * ec2/cloud: an unit test for \_format\_instance\_bdm() * ec2utils: an unit test for mapping\_prepend\_dev() * ec2: bundle block device mapping * ec2utils: introduce helper function to prepend '/dev/' in mappings * volume/api: an unit test for create\_snapshot\_force() * Add some resource checking for memory available when scheduling Various changes to d-sched to plan for scheduling on different topics, which cleans up some of the resource checking. Re-compute weights when building more than 1 instance, accounting for resources that would be consumed * Returned code to original location * Merged from trunk * run launcher first since it initializes global flags and logging * Now passing unit tests * Two tests passing * Now stubbing nova.db instead of nova.db.api * Bug fixing * Added flavor extra specs controller * Initial unit test (failing) * This catches the InstanceNotFound exception on create, and ignores it. This prevents errors in the compute log, and causes the server to not be built (it should only get InstanceNotFound if the server was deleted right after being created). 
This is a temporary fix that should be fixed correctly once no-db-messaging stuff is complete * allocate and release implementation * fixed pep8 issues * merge from trunk * image -> instance in comment * added virtual\_interface\_update method * Fixes issues with displaying exceptions regarding flavors in nova-manage * better debug statement around associating floating ips when multiple fixed\_ips exist * pep8 fixes * merging trunk * added fixed ip filtering by null virtual interface\_id to network get associated fixed ips * fixed ip gets now have floating IPs correctly loaded * reverting non-xml changes * Adding backup rotation * moving image show/update into 'meta' container * Check API request for min\_count/max\_count for number of instances to build * updated libvirt tests network\_info to be correct * fixed error * skipping more ec2 tests * skipping more ec2 tests * skipping more ec2 tests * skipping test\_run\_with\_snapshot * updated test\_cloud to set stub\_network to true * fixed incorrect exception * updating glance image fixtures with checksum attribute; fixing glance image service to use checksum attribute * Round 1 of backup with rotation * merge from trunk * fix some issues with flags and logging * Add a socket server responding with an allowing flash socket policy for all requests from flash on port 843 to nova-vncproxy * api/ec2: an unit test for create image * api/ec2, boot-from-volume: an unit test for describe instances * unittest: an unit test for ec2 describe image attribute * test\_cloud: an unit test for describe image with block device mapping * ec2utils: an unit test for ec2utils.properties\_root\_defice\_name * unittest, image/s3: unit tests for s3 image handler * image/s3: factor out \_s3\_create() for testability * ec2utils: unit tests for case insensitive true/false conversion * ec2utils: add an unit test for dict\_from\_dotted\_str() * test\_api: unit tests for ec2utils.id\_to\_ec2\_{snap, vol}\_id() * api/ec2: make CreateImage pass unit tests * volume/api: introduce create\_snapshot\_force() * api/ec2/image: make block device mapping pass unit tests * db/block\_device\_mapping/api: introduce update\_or\_create * db/migration: resolve version conflict * merge with trunk * ec2 api describe\_security\_groups allow group\_id param , added tests for create/delete security group in test\_cloud although also exists in test\_api this tests directly the ec2 method * pip-requires * pep8 * fixed zone update * Stop trying to set a body for HTTP methods that do not allow it. It renders the unit tests useless (since they're testing a situation that can never arise) and webob 1.0.8 fails if you do this * fixed local db create * omg stop making new migrations.. * trunk merge * merge from trunk * added try except around floating ip get by host in host init * This branch adds support to the xenapi driver for updating the guest agent on creation of a new instance. This ensures that the guest agent is running the latest code before nova starts configuring networking, setting root password or injecting files * renamed migrations again * merge from trunk * if we get InstanceNotFound error on create, ignore (means it has been deleted before we got the create message) * some libvirt multi-nic just to get it to work, from tushar * Removed whitespace * Fixed objectstore test * merge with trey * Very small alterations, switched from using start() to pass host/port, to just defining them up front in init. 
Doesn't make sense to set them in start because we can't start more than once any way. Also, unbroke binaries * Bump WebOb requirement to 1.0.8 in pip-requires * Oops, I broke --help on nova-api, fixed now * pep8 fix * Monkey patching 'os' kills multiprocessing's .join() functionality. Also, messed up the name of the eventlet WSGI logger * Filter out datetime fields from instance\_type * erase unnecessary TODO: statement * fixed reviewer's comment. 1. adding dest-instance-dir deleting operation to nova.compute.manager, 2. fix invalid raise statement * fix comment line * Stop trying to set a body for HTTP methods that do not allow it. It renders the unit tests useless (since they're testing a situation that can never arise) and webob 1.0.8 fails if you do this * log -> logging to keep with convention * Removed debugging and switched eventlet to monkey patch everything * Removed unneeded import * Tests for WSGI/Launcher * Remove the unnecessary insertion of whitespace. This happens to be enough to match this patch apply on recent versions of XenServer / Xen Cloud Platform * trunk merge * fix lp 798361 * Removed logging logic from \_\_init\_\_, added concept of Launcher...no tests for it yet * nova-manage checks if user is member of proj, prior to adding role for that project * Other migrations have been merged in before us, so renumber * Merged trunk * pep8 fixes * assert\_ -> assertTrue since assert\_ is deprecated * added adjust child zone test * tests working again * updated the exceptions around virtual interface creation, updated flatDHCP manager comment * more trunks * another trunk merge * This patch adds support for working with instances by UUID in addition to integer IDs * importing sqlalchemy IntegrityError * Moving add\_uuid migration to 025 * Merging trunk, fixing conflicts * Enclosing tokens for xenapi filter in double quotes * working commit * Fix objectstore test * Cleanup and addition of tests for WSGI server * Merged trunk * Check that server exists when interacting with /v1.1/servers//meta resource * No, really. Added tests for WSGI loader * Added tests for WSGI loader * nova.virt.libvirt.connection.\_live\_migration is changed * Cleanup * merged rev trunk 1198 * Introduced Loader concept, for paste decouple * fix pep8 check * fix comments at nova.virt.libvirt.connection * Cleanup of the cleanup * Further nova-api cleanup * Cleaned up nova-api binary and logging a bit * Removed debugging, made objectstore tests pass again * General cleanup and refactor of a lot of the API/WSGI service code * Adding tests for is\_uuid\_like * Using proper UUID format for uuids * Implements a portion of ec2 ebs boot. What's implemented - block\_device\_mapping option for run instance with volume (ephemeral device and no device isn't supported yet) - stop/start instance * updated fixed ip and floating ip exceptions * pep8: white space/blank lines * Merging trunk * renamed VirtualInterface exception and extend NovaException * moving instance existance logic down to api layer * Ensure os\_type and architecture get set correctly * Make EC2 update\_instance() only update updatable\_fields, rather than all fields. Patch courtesy of Vladimir Popovski * Fixes two minor bugs (lp795123 and lp795126) in the extension mechanism. The first bug is that each extension has \_check\_extension() called twice on it; this is a minor cosmetic problem, but the second is that extensions which flunk \_check\_extension() are still added. 
The proposed fix is to make \_check\_extensions() return True or False, then make \_add\_extension() call it from the top and return immediately if \_check\_extensions() returns False * Fixes a bug where a misleading error message is outputted when there's a sqlalchemy-migrate version conflict * Result is already in JSON format from \_wait\_for\_agent * Fix PEP8 * Fix for lp:796834 * Add new architecture attribute along with os\_type * bunch of docstring changes * adding check for serverRef hostname matching app url * Fix for Bug lp:796813 * Fix the volumes extension resource to have a proper prefix - /os-volumes * Fixes lp797017, which is broken as a result of a fragile method in the xenapi drivers that assumed there would only ever be one VBD attached to an instance * adding extra image service properties to compute api snapshot; adding instance\_ref property * Missed a pep8 fix * Remove thirdwheel.py and do the test with a now-public ExtensionManager.add\_extension() * Removes nova/image/local.py (LocalImageService) * Add some documentation for cmp\_version Add test cases for cmp\_version * Increased error message readability for the OpenStack API * fixing test case * Updated "get\_all\_across\_zones" in nova/compute/api.py to have "context = context.elevated()", allowing it to be run by non-admin users * merging trunk * more words * Cleaned up some pep8 issues in nova/api/openstack/create\_instance\_helper.py and nova/api/openstack/\_\_init\_\_.py * Pull-up from trunk * Add a test to ensure invalid extensions don't get added * Update xenapi/vm\_utils.py so that it calls find\_sr instead of get\_sr. Remove the old get\_sr function which by default looked for an SR named 'slices' * add vlan diagram and some text * Added context = context.elevated() to get\_all\_across\_zones * auto load table schema instead of stubbing it out * Fixed migration per review feedback * Made hostname independent from ec2 id. 
Add generation of hostnames based on display name * Fix for a problem where run\_tests.sh would output a seemingly unrelated error message when there was a sqlalchemy-migrate version number conflict * stub api methods * Missed an InstanceTypeMetadata -> InstanceTypeExtraSpecs rename in register\_models * Fix unittest so that it actually fails without the fix * Make $my\_ip Glance's default host, not localhost * We don't check result in caller, so don't set variable to return value * Remove debugging statement * Fix lp795123 and lp795126 by making \_check\_extension() return True or False and checking the result only from the top of \_add\_extension() * Glance host defaults to $my\_ip rather than localhost * Upstream merge * add in dhcp drawing * Rename: instance\_type\_metadata -> instance\_type\_extra\_specs * erroneous self in virtual\_interface\_delete\_by\_instance() sqlalchemy api * Fixes a bug where a unit test sometimes fails due to a race condition * remove the network-host from the flat diagram * add multinic diagram * add the actual image * Renaming to \_build\_instance\_get * merged trunk * returned two files to their trunk versions, odd that they were altered in the first place * Added a new test for confirming failure when no primary VDI is present * Unit tests pass again * more doc (and by more I mean like 2 or 3 sentences) * Fix copyright date * PEP8 cleanup * Attempting to retrieve the correct VDI for snapshotting * Fixing another test * Fixing test\_servers\_by\_uuid * floating\_ips extension is loading to api now * initial commit of multinic doc * generated files should not be in source control * Fixed UUID migration * Added UUID migration * Clean up docstrings to match HACKING * merge with trey * Small tweaks * Merged reldan changes * First implementation of FloatingIpController * First implementation of FloatingIpController * compute/api: fix mismerge due to instance creation change * ec2/cloud.py: fix mismerge * fix conflict with rebasing * api/ec2: support CreateImage * api/ec2/image: support block device mapping * db/model: add root\_device\_name column to instances table * ec2utils: consolidate 'vol-%08x' and 'snap-%08x' * api/ec2: check user permission for start/stop instances * ec2utils: consolidate 'vol-%08x' and 'snap-%08x' * api/ec2: check user permission for start/stop instances * api/ec2: check user permission for start/stop instances * Adds 'joinedload' statements where they need to be to prevent access of a 'detached' object * novaclient changed to support projectID in authentication. Caused some minor issues with distributed scheduler. This fixes them up * Add trailing LF (\n) to password for compatibility with old agents * Work around windows agent bugs where some responses have trailing \\r\\n * removed commented out shim on Instance class * Windows instances will often take a few minutes setting up the image on first boot and then reboot. We should be more patient for those systems as well check if the domid changes so we can send agent requests to the current domid * Split patch off to new branch instead * Add --fixes * First attempt to rewrite reroute\_compute * syntax * Merged trunk * Windows instances will often take a few minutes setting up the image on first boot and then reboot. 
We should be more patient for those systems as well check if the domid changes so we can send agent requests to the current domid * Fixed bug * Added metadata joinedloads * Prep-work to begin on reroute\_compute * specify mysql\_engine for the virtual\_interfaces table in the migration * Passed in explanation to 400 messages * Fixing case of volumes alias * The volumes resource extension should be prefixed by its alias - os-volumes * Adding uuid test * Pep8 Fixes * Fixing test\_servers.py * pep8 * Fixing private-ips test * adding server existence check to server metadata resource * Fixing test\_create\_instance * made the test\_xenapi work * test xenapi injected set to True * something else with tests * something with tests * i dont even care anymore * network\_info has injected in xenapi tests * Adding UUID test * network\_info passed in test\_xenapi, mac\_address no longer in instance values dict * added network injected to stub * added injected to network dict oportion of tuple returned by get\_instance\_nw\_info * don't provision to all child zones * network info to \_create\_vm * fix mismerge * updated xenapi\_conn finish\_resize arguments * stubbed out get\_instance\_nw\_info for compute\_test * pip novaclient bump * merge with nova trunk * fixed up some little project\_id things with new novaclient * typo * updated finish\_resize to accept network\_info, updated compute and tests in accordance * \_setup\_block\_device\_mapping: raise ApiError when db inconsistency found * db/block\_device\_mapping\_get\_all\_by\_instance: don't raise * Print list of agent builds a bit prettier * PEP8 cleanups * Rename to 024 since 023 was added already * pep8 * The Xen driver supports running instances in PV or HVM modes, but the method it uses to determine which to use is complicated and doesn't work in all cases. The result is that images that need to use HVM mode (such as FreeBSD 64-bit) end up setting a property named 'os' set to 'windows' * typo * None project\_id now default * Adds code to run\_tests.py which: * Fixing code to ensure unit tests for objectstore, vhd & snapshots pass * ec2utils: minor optimize \_try\_convert() * block\_device\_mapping: don't use [] as default argument * api/ec2: make the parameter parser an independent method * Show only if we have slow tests, elapsed only if test success * Showing elapsed time is now default * Ensuring pep8 runs even when nose optons are passed * network tests now teardown user * Removing seconds unit * network user only set if doesnt exist * net base project id now from context, removed incorrect floatnig ip host assignment * fixed instance[fixed\_ip] in ec2 api, removed fixed\_ip shim * various test fixes * Updated so that we use a 'tmp' subdirectory under the Xen SR when staging migrations. Fixes an issue where you would get a 'File exists' error because the directory under 'images' already existed (created via the rsync copy) * db fakes silly error fix * debug statements * updated db fakes * updated db fakes * Changed requests with malformed bodies to return a HTTP 400 Bad Request instead of a HTTP 500 error * updated db fakes and network base to work with virtual\_interface instead of mac\_address * Phew ... ok, this is the last dist-scheduler merge before we get into serious testing and minor tweaks. 
The heavy lifting is largely done * db fakes * db fakes * updated libvirt test * updated libvirt test * updated libvirt test * updated libvirt test * updated libvirt test * getting the test\_host\_filter.py file from trunk, mine is jacked somehow * removed extra init calls * fixed HACKING * Changed requests with malformed bodies to return a HTTP 400 Bad Request instead of a HTTP 500 error * duplicate routes moved to base class * fixed scary diff from trunk that shouldnt have been there * version passing cleanup * refactored out controller base class to use aggregation over inheritance * Move ipy commands to netaddr * merged trunk * mp fixes * Really PEP8? A tab is inferior to 2 spaces? * pep8 fix * upstream merge * Stub out the rpc call in a unit test to avoid a race condition * merged trunk rev 1178 * Making timing points stricter, only show slow/sluggish tests in summary * Improved errors * added kernel/ramdisk migrate support * Added faults wrapper * remove file that got ressurected * Cleaned up pep8 errors using the current version of pep8 located in pip-requires. This is to remove the cluttered output when using the virtualenv to run pep8 (as you should). This will make development easier until the virtualenv requires the latest version of pep8 (see bug 721867) * merge with trey * autoload with the appropriate engine during upgrade/downgrade * Created new exception for handling malformed requests Wrote tests Raise httpBadRequest on malformed request bodies * Fixed bug 796619 * Adds --show-elapsed option for run\_tests * pep8 * Alias of volumes extension should be OS-VOLUMES * Illustrations now added to Distributed Scheduler documentation (and fixed up some formatting) * Load table schema automatically instead of stubbing out * Removed clocksource=jiffies from PV\_args * Test now passes even if the rpc call does not complete on time * - fixes bug that prevented custom wsgi serialization * Removed clocksource=jiffies from PV\_args * merging trunk, fixing pep8 * pep8 * Improved tests * removing unnecessary lines * wsgi can now handle dispatching action None more elegantly * This fixes the server\_metadata create and update functions that were returning req.body (as a string) instead of body (deserialized body dictionary object). It also adds checks where appropriate to make sure that body is not empty (and return 400 if it is). Tests updated/added where appropriate * removed yucky None return types * merging trunk * trunk merge * zones image\_id/image\_href support for 1.0/1.1 * Update xenapi/vm\_utils.py so that it calls find\_sr instead of get\_sr. Remove the old get\_sr function which by default looked for an SR named 'slices' * fixed bug 796619 * merge trunk * check for none and empty string, this way empty dicts/lists will be ok * Updated so that we use a 'tmp' subdirectory under the Xen SR when staging migrations. 
Fixes an issue where you would get a 'File exists' error because the directory under 'images' already existed (created via the rsync copy) * fix method chaining in database layer to pass right parameters * Add a method to delete provider firewall rules * Add ability to list ip blocks * pep 8 whitespace fix * Move migration * block migration feature added * Reorder firewall rules so the common path is shorter * ec2 api method allocate\_address ; raises exception.NoFloatingIpsDefined instead of UnknownError when there aren't any floating ips available * in XML Serialization of output, the toprettyxml() call would sometimes return a str() and sometimes unicode(), I've forced encoding to utf-8 to ensure that we always get str(). This fixes the related bug * A recent commit added a couple of directories that don't belong in version control. Remove them again * adding support for cusom serialization methods * forgot a comma * floating ips can now move around the network hosts * A recent commit added a couple of directories that don't belong in version control. Remove them again * 'network list' prints project id * got rid of prints for debugging * small pep8 fixes * return body correctly as object instead of a string, with tests, also check for empty body on requests that need a body * adding xml support to /images//meta resource; moving show/update entities into meta container * removed posargs decorator, all methods decorated * Allows Nova to talk to multiple Glance APIs (without the need for an external load-balancer). Chooses a random Glance API for each request * forgot a comma * misc argument alterations * force utf-8 encoding on toprettyxml call for XMLDictSerializer * added new exception more descriptive of not having available floating addresses avail for allocation * raise instance instead of class * Fix copyright year * style change * Only update updateable fields * removing LocalImageService from nova-manage * rebase from trunk * decorators for action methods added * source illustrations added & spelling/grammar based on comstud's feedback * fixed reraise in trap\_error * forgot some debugging statements * trunk merge and ec2 tests fixed * Add some docstrings for new agent build DB functions * Add test for agent update * Multiple position dependent formats and internationalization don't work well together * Adding caveat * Fixing code per review comments * removed fixed\_ips virtual\_interface\_id foreignkey constraint from multi\_nic migration, and added it as a standalone migration with special sqlite files * Record architecture of image for matching to agent build later. 
Add code to automatically update agent running on instance on instance creation * Add version and agentupdate commands * Add an extension to allow for an addFixedIp action on instances * further changes * tests working after merge-3 update * 022 migration has already been added, so make ours 023 now * parse options with optparse, options prepended '--' * renamed migration again * Pull-up from multi\_nic * merged koelkers tests branch * remove file that keeps popping up * Merging trunk * Fixing the tests * matched the inner exception specifically, instead of catching all RemoteError exceptions * Support multiple glance-api servers * Merged trunk * Fix merge conflict * removing custom exception, instead using NoFloatingIpsDefined * raises exception.NoFloatingIpsDefined instead of UnknownError * Normalize and update database with used vm\_mode * added a test for allocate\_address & added error handling for api instead of returning 'UnknownError', will give information 'AllocateAddressError: NoMoreAddresses * merged trunk again * updated docstring for nova-manage network create * Now forwards create instance requests to child zones. Refactored nova.compute.api.create() to support deferred db entry creation * MySQL database tables are currently using the MyISAM engine. Created migration script nova/db/sqlalchemy/migrate\_repo/versions/021\_set\_engine\_mysql\_innodb.py to change all current tables to InnoDB * merged trunk again * Support for header "X-Auth-Project-Id" in osapi * Cleaned up some pylint errors * tweaks * PEP8 fix * removed network\_info shims in vmops * Fix for bug#794239 to allow pep8 in run\_tests.sh to use the virtual environment * adding Authorizer key for ImportPublicKey * fix exception type catched * Look for vm\_mode property on images and use that if it exists to determine if image should be run in PV or HVM mode. If it doesn't exist, fall back to existing logic * removed straggler code * trunk merge * merge trunk * pep8 * removed autogen file * added field NOVA\_PROJECT\_ID to template for future using * added tests for X-Auth-Project-Id header * fix fake driver for using string project * adding Authorizer key for ImportPublicKey * Cleaned up some of the larger pylint errors. Set to ignore some lines that pylint just couldn't understand * DRY up the image\_state logic. Fix an issue where glance style images (which aren't required to have an 'image\_state' property) couldn't be used to run instances on the EC2 controller * remove the debuging lines * remove the old stuff * tests all pass * Added virtual environment to PEP8 tests * Added test\_run\_instances\_image\_status\_active to test\_cloud * Add the option to specify a default IPv6 gateway * pep8 * Removed use of super * Added illustrations for Distributed Scheduler and fixed up formatting * Disabled pylint complaining about no 'self' parameter in a decorator function * DRY up the image\_state logic. 
Fix an issue where glance style images (which aren't required to have an 'image\_state' property) couldn't be used to run instances on the EC2 controller * Fixed incorrect error message Added missing import Fixed Typo (pylint "undefined variable NoneV") * removing local image service * Remove unnecessary docstrings * Add the option to specify a default IPv6 gateway * port the floating over to storing in a list * Make libvirt snapshotting work with images that don't have an 'architecture' property * take out the host * Removed empty init * Use IPNetwork rather than IPRange * Fixed type causing pylint "exception is not callable" Added param to fake\_instance\_create, fake objects should appear like the real object. pylint "No value passed for parameter 'values' in function call" * sanity check * run\_instances will check image for 'available' status before attempting to create a new instance * fixed up tests after trunk merge * Use True/False instead of 1/0 when setting updating 'deleted' column attributes. Fixes casting issues when running nova with Postgres * merged from trunk * Remove more stray import IPy * Dropped requirement for IPy * Convert stray import IPy * Use True/False instead of 1/0 when setting updating 'deleted' column attributes.Fixes casting issues when running nova with Postgres * Removed commented code * Added test case for snapshoting base image without architecture * Remove ipy from virt code and replace with netaddr * Remove ipy from network code and replace with netaddr * Remove ipy from nova/api/ec2/cloud.py and use netaddr * Remove ipy from nova-manage and use netaddr * This branch allows marker and limit parameters to be used on image listing (index and detail) requests. It parses the parameters from the request, and passes it along to the glance\_client, which can now handle these parameters. Essentially all of the logic for the pagination is handled in glance, we just pass along the correct parameters and do some error checking * merge from trunk, resolved conflicts * Update the OSAPI images controller to use 'serverRef' for image create requests * Changed the error raise to not be AdminRequired when admin is not, in fact, required * merge with trey * Change to a more generic error and update documentation * make some of the tests * Merged trunk * merge trunk * Ignore complaining about dynamic definition * Removed Duplicate method * Use super on an old style class * Removed extraneous code * Small pylint fixes * merge with trunk * Fixed incorrect exception * This branch removes nwfilter rules when instances are terminated to prevent resource leakage and serious eventual performance degradation. Without this patch, launching instances and restarting nova-compute eventually become very slow * merge with trunk * resolve conflicts with trunk * Update migrate script version to 22 * Added 'config list' to nova-manage. This function will output all of the flags and their values * renamed migration * trunk merge after 2b hit * Distributed Scheduler developer docs * Updated to use the '/v1/images' URL when uploading images to glance in the Xen glance plugin. Fixes the issue where snapshots fail to upload correctly * merged trunk again * added 'nova-manage config list' which will list out all of the flags and their values. I also alphabetized the list of available categories * Updated to use the '/v1/images' URL when uploading images to glance in the Xen glance plugin. 
Fixes issue where snapshots failed to get uploaded * Removed "double requirement" from tools/pip-requires file * merged koelker migration changes, renumbered migration filename * fix comment * Fixed pip-requires double requirement * Added a test case for XML serialization * Removed unused and erroneous (yes, it was both) function * paramiko is not installed into the venv, but is required by smoketests/base.py. Added paramiko to tools/pip-requires * Changes all uses of utcnow to use the version in utils. This is a simple wrapper for datetime.datetime.utcnow that allows us to use fake values for tests * Set pylint to ignore correct lines that it could not determine were correct, due to the means by which eventlet.green imported subprocess Minimized the number of these lines to ignore * LDAP optimization and fix for one small bug caused huge performance leak. Dashboard's benchmarks showed overall x22 boost in page request completion time * Adds LeastCostScheduler which uses a series of cost functions and associated weights to determine which host to provision to * Make libvirt snapshotting work with images that don't have an 'architecture' property * Add serverRef to image metadata serialization list * Fixed pylint: no metadata member in models.py * Implement OSAPI v1.1 style image create * trunk merge * little tweaks * Flush AuthManager's cache before each test * Fixed FakeLdapDriver, made it call LdapDriver.\_\_init\_\_ * Merged with trunk * This change set adds the ability to create new servers with an href that points to a server image on any glance server (not only the default one configured). This means you can create a server with imageRef = http://glance1:9292/images/3 and then also create one with imageRef = http://glance2:9292/images/1. Using the old way of passing in an image\_id still works as well, and will use the default configured glance server (imageRef = 3 for instance) * added nova\_adminclient to tools/pip-requires * merged trunk * Added paramiko to tools/pip-requires * Tests that all exceptions can be raised properly, and fix the couple of instances where they couldn't be constructed due to typos * merge trunk... yay.. * switch zones to use utcnow * make all uses of utcnow use our testable utils.utcnow * Fix error with % as replacement string * Fixing conflicts * Tests to assure all exceptions can be raised as well as fixing NotAuthorized * use %% because % is a replacement string character * some comment docstring modifications * Makes novarc work properly on a mac and also for zsh in addition to bash. Other shells are not guaranteed to work * This adds the ability to publish nova errors to an error queue * don't use python if readlink is available * Sudo chown the vbd device to the nova user before streaming data to it. This resolves an issue where nova-compute required 'root' privs to successfully create nodes with connection\_type=xenapi * Bugfix #780784. 
KeyError when creating custom image * Remove some of the extra image service calls from the OS API images controller * pep8 fixes * merge with trey * make it pass for the demo * Merged with Will * Minor comment formatting changes * got rid of more test debugging stuff that shouldnt have made it in * Remove comment about imageRef not being implemented * Remove a rogue comment * more tests (empty responses) * get\_all with reservation id across zone tests * move index and detail functions to v10 controller * got rid of prints * Refactored after review, fixed merge * image href should be passed through the rebuild pipeline, not the image id * merge from trunk * got rid of print debugs * cleanup based on waldon's comments, also caught a few other issues * missed a couple chars * Little cleanups * pep8 and all that * tests all passing again * list --reservation now works across zones * fix novarc to work on mac and zsh * merged, with trunk, fixed the test failure, and split the test into 3 as per peer review * Fixes nova-manage bug. When a nova-network host has allocated floating ips \*AND\* some associated, the nova-manage floating list would throw exception because was expecting hash with 'ec2\_id' key , however, the obj returned is a sqlalchemy obj and the attr we need is 'hostname' * start the flat network * more testing fun * fixed as per peer review to make more consistent * merged from trunk * Implement the v1.1 style resize action with support for flavorRef * Updates to the 018\_rename\_server\_management\_url migration to avoid adding and dropping a column. Just simply rename the column * Support SSL AMQP connections * small fixes * Allow SSL AMQP connections * reservation id's properly forwarded to child zones on create * merge from trunk * fix pep8 issue from merge * coose the network\_manager based on instance variable * fix the syntax * forgot a comma * This just fixes a bunch of pep8 issues that have been lingering around for a while and bothering me :) * touch ups * Updates to the 018\_rename\_server\_management\_url to avoid adding and dropping a column. Just simply rename the column * basic reservation id support to GET /servers * - move osapi-specific wsgi code from nova/wsgi.py to nova/api/openstack/wsgi.py - refactor wsgi modules to use more object-oriented approach to wsgi request handling: - Resource object steps up to original Controller position - Resource coordinates deserialization, dispatch to controller, serialization - serialization and deserialization broken down to be more testable/flexible * merge from trunk * make the stubs * use the host * da stubs * Bumped migration number * Merged from trunk * updates to keep things looking better * merge from trunk * fix pep8 issues * PEP8 fix * Moved memcached driver import to the top of modules * fix pep8 issues * pep8 fixes * Cleanup instances\_path in the test\_libvirt test\_spawn\_with\_network\_info test. 
Fixes issue where the nova/tests/instance-00000001/ is left in the nova source tree when running run\_test.sh -N * fix filtering tests * Renamed migration to 020 * osapi: added support for header X-Auth-Project-Id * added /zones/boot reservation id tests * Adds hooks for applying ovs flows when vifs are created and destroyed for XenServer instances * Logs the exception if metadata fails and returns a 500 with an error message to the client * Fixing a bunch of conflicts * add new base * refator existing fakes, and start stubbing out the network for the new manager tests * pep8 * Incremented version of migration script to reflect changes in trunk * basic zone-boot test in place * Incremented version of migration script to reflect changes in trunk * Incremented version of migration script to reflect changes in trunk * switch to using webob exception * Added new snapshots table to InnoDB migrations * Adds a few more status messages to error states on image register for the ec2 api. This will hopefully provide users of the ec2 api with a little more info if their registration fails * Cleaned up bug introduced after fixing pep8 errors * Fixing Scheduler Tests * Cleaned up bug introduced after fixing ^Cp8 errors * Basic hook-up to HostFilter and fixed up the passing of InstanceType spec to the scheduler * make the old tests still pass * rename da stuffs * rename da stuffs * Resolving conflict and finish test\_images * merge * added tests for image detail requests * Merged trunk * Merged trunk and fixed conflicts * Whitespace cleanups * added pause/suspend implementation to nova.virt.libvirt\_conn * Change version number of migration * Update the rebuild\_instance function in the compute manager so that it accepts the arguments that our current compute API sends * Moved everything from thread-local storage to class attributes * Added the filtering of image queries with image metadata. This is exposing the filtering functionality recently added to Glance. Attempting to filter using the local image service will be ignored * This enables us to create a new volume from a snapshot with the EC2 api * Use a new instance\_metadata\_delete\_all DB api call to delete existing metadata when updating a server * added tests for GlanceImageService * Add vnc\_keymap flag, enable setting keymap for vnc console and fix bug #782611 * Add refresh\_provider\_fw\_rules to virt/driver.py#ComputeDriver so virtualization drivers other than libvirt will raise NotImplemented * Rebased to trunk rev 1120 * trunk merge * added get\_pagination\_params function in common with tests, allow fake and local image services to accept filters, markers, and limits (but ignore them for now) * Cleaned up text conflict * pep8 fixed * pep8 fixes * Cleaned up text conflict * removing semicolon * Cleaned up text conflict * skip the vlam test, not sure why it doesn't work * Cleaned up pep8 errors * Fixed the APIError typo * MySQL database tables are currently using the MyISAM engine. Created migration script nova/db/sqlalchemy/migrate\_repo/versions/020\_set\_engine\_mysql\_innodb.py to change all current tables to InnoDB * MySQL database tables are currently using the MyISAM engine. 
Created migration script nova/db/sqlalchemy/migrate\_repo/versions/020\_set\_engine\_mysql\_innodb.py to change all current tables to InnoDB * Handle the case when a v1.0 api tries to list servers that contain image hrefs * Added myself to Authors file * edits based on ed's feedback * More specific error messages for resize requests * pep8 fixes * merge trunk * tests passing again * Actually remove the \_action\_resize code from the base Servers controller. The V11 and V10 controllers implement these now * merge from trunk * This adds a volume snapshot support with the EC2 api * Fixed the typo of APIError with ApiError * nova/auth/novarc.template: Changed NOVA\_KEY\_DIR to allow symlink support * Updated compute api and manager to support image\_refs in rebuild * zone-boot working * regular boot working again * regular boot working again * first pass at reservation id support * Updates so that 'name' can be updated when doing a OS API v1.1 rebuild. Fixed issue where metadata wasn't getting deleted when an empty dict was POST'd on a rebuild * first cut complete * project\_id moved to be last * add support for keyword arguments * fixed nova.virt.libvirt\_conn.resume() method - removing try-catch * reservation\_id's done * basic flow done * lots more * starting * boot-from-volume: some comments and NOTE(user name) * Use metadata variable when calling \_metadata\_refs * Implement the v1.1 style resize action with support for flavorRef * Fixes to the SQLAlchmeny API such that metadata is saved on an instance\_update. Added integration test to test that instance metadata is updated on a rebuild * Update the rebuild\_instance function in the compute manager so that it accepts the arguments that our current compute API sends * Cleanup instances\_path in test\_libvirt test\_spawn\_with\_network\_info test * Added missing nova import to image/\_\_init\_\_.py * Another image\_id location in hyperv * Fixing nova.tests.api.openstack.fakes.stub\_out\_image\_service. It now stubs out the get\_image\_service and get\_default\_image\_service functions. Also some pep8 whitespace fixes * Fixing xen and vmware tests by correctly mocking glance client * Fixing integration tests by correctly stubbing image service * More image\_id to image\_ref stuff. Also fixed tests in test\_servers * When encrypting passwords in xenapi's SimpleDH(), we shouldn't send a final newline to openssl, as it'll use that as encryption data. However, we do need to make sure there's a newline on the end when we write the base64 string for decoding.. Made these changes and updated the test * Fixes the bug introduced by rpc-multicall that caused some test\_service.py tests to fail by pip-requiring a later version of mox * added \n is not needed with -A * now pip-requires mox version 0.5.3 * added -A back in to pass to openssl * merge with dietz * merge with dietz * XenAPI tests pass * fixed so all the new encryption tests pass.. 
including data with newlines and so forth * Glance client updates for xenapi and vmware API to work with image refs * Merged lp:~rackspace-titan/nova/lp788979 * get the right args * Fixing pep8 problems * Modified instance\_type\_create to take metadata * Added test for instance type metadata create * merge with trey * Added test for instance type metadata update * Added delete instance metadata unit test * Added a unit test * Adding test code * Changed metadata to meta to avoid sqlalchemy collision * Adding accessor methods for instance type metadata * remove errant print statement * prevent encryption from adding newlines on long messages * trunk merge * nova/auth/novarc.template: Changed NOVA\_KEY\_DIR to allow symlink support * docstrings again and import ordering * fix encryption handling of newlines again and restructure the code a bit * Libvirt updates for image\_ref * Commit the migration script * fixed docstrings and general tidying * remove \_take\_action\_to\_instance * fix calls to openssl properly now. Only append \n to stdin when decoding. Updated the test slightly, also * fixed read\_only check * Fix pep8 errors * Fix pep8 violations * Fix a description of 'snapshot\_name\_template' * unittest: make unit tests happy * unittest: tests for boot from volume and stop/start instances * compute: implement ec2 stop/start instances * compute, virt: support boot-from-volume without ephemeral device and no device * db: add a table for block device mapping * volume/api: allow volume clone from snapshot without size * api/ec2: parse ec2 block device mapping and pass it down to compute api * teach ec2 parser multi dot-separted argument * api/ec2: make ec2 api accept true/false * Adds the ability to make a call that returns multiple times (a call returning a generator). This is also based on the work in rpc-improvements + a bunch of fixes Vish and I worked through to get all the tests to pass so the code is a bit all over the place * fix a minor bug unrelated to this change * updated the way allocate\_for\_instance and deallocate\_for\_instance handle kwargs * Rename instances.image\_id to instances.image\_ref * changes per review * merge with dietz * stub out passing the network * Virt tests passing while assuming the old style single nics * adding TODOs per dabo's review * Fixes from Ed Leafe's review suggestions * merge trunk * move udev file so it follows the xen-backend.rules * Essentially adds support for wiring up a swap disk when building * add a comment when calling glance:download\_vhd so it's clear what is returned * make the fakes be the correct * skip vmware tests, since they need to be updated for multi-nic by someone who knows the backend * put back the hidden assert check i accidentally removed from glance plugin * fix image\_path in glance plugin * Merged trunk * skip the network tests for now * Change the return from glance to be a list of dictionaries describing VDIs Fix the rest of the code to account for this Add a test for swap * cleaning up getattr calls with default param * branch 2a merge (including trunk) * trunk merge * remerged with 2a * tests pass and pep8'ed * review fixups * Expanded tests * In vmwareapi\_net.py removed the code that defines the flag 'vlan\_interface' and added code to set default value for the flag 'vlan\_interface' to 'vmnic0'. 
This will now avoid flag re-definition issue * missed a driver reference * exceptions are logged via the raise, so just log an error message * log upload errors * instance obj returned is not a hash, instead is sqlalchemy obj and hostname attr is what the logic is looking for * we don't need the mac or the host anymore * Test tweaks * instances don't need a mac\_address to be created anymore * Make a cleaner log message and use [] instead of . to get database fields * use the skip decorator rather than comment out * merging trunk * Adding some pluralization * Double quotes are ugly #3 * merge with dietz * fix typo introduced during merge conflict resolution * Remove spurious newline at end of file * Move migration to fix ordering * remove dead/duplicate code * Double quotes are ugly #2 * Double quotes are ugly * refactoring compute.api.create() * Fix test\_cloud tests * Restricted image filtering by name and status only * Switch the run\_instances call in the EC2 back to 'image\_id'. Incoming requests use 'imageId' so we shouldn't modify this for image HREF's * Switching back to chown. I'm fine w/ setfacl too but nova already has 'chown' via sudoers so this seems reasonable for now * replace double quatation to single quatation at nova.virt.libvirt\_conn * remove unnecessary import inspect at nova.virt.libvirt\_conn * creating \_take\_action\_to\_instance to nova.virt.libvirt\_conn.py * Instead of redefining the flag 'vlan\_interface', just setting a default value (vmnic0) in vmwareapi\_net.py * Renamed image\_ref variables to image\_href. Since the convention is that x\_ref vars may imply that they are db objects * Added test skipper class * change the behavior of calling a multicall * move consumerset killing into stop * don't put connection back in pool * replace removed import * cleanups * cleanup the code for merging * make sure that using multicall on a call with a single result still functions * lots of fixes for rpc and extra imports * don't need to use a separate connection * almost everything working with fake\_rabbit * bring back commits lost in merge * connection pool tests and make the pool LIFO * Add rpc\_conn\_pool\_size flag for the new connection pool * Always create Service consumers no matter if report\_interval is 0 Fix tests to handle how Service loads Consumers now * catch greenlet.GreenletExit when shutting service down * fix consumers to actually be deleted and clean up cloud test * fakerabbit's declare\_consumer should support more than 1 consumer. also: make fakerabbit Backend.consume be an iterator like it should be. 
* convert fanout\_cast to ConnectionPool * pep8 and comment fixes * Add a connection pool for rpc cast/call Use the same rabbit connection for all topic listening and wait to be notified vs doing a 0.1 second poll for each * add commented out unworking code for yield-based returns * make the test more expicit * add support to rpc for multicall * merge with dietz * Fixing divergence * Merged trunk * Added params to local and base image service * Fixed the mistyped line referred to in bug 787023 * Merged trunk and resolved conflicts * Fixed a typo * make the test work * Merged with trunk * Several changes designed to bring the openstack api 1.1 closer to spec - add ram limits to the nova compute quotas - enable injected file limits and injected file size limits to be overridden in the quota database table - expose quota limits as absolute limits in the openstack api 1.1 limits resource - add support for controlling 'unlimited' quotas to nova-manage * During the API create call, the API would kick off a build and then loop in a greenthread waiting for the scheduler to pick a host for the instance. After API would see a host was picked, it would cast to the compute node's set\_admin\_password method * starting breakdown of nova.compute.api.create() * fix test. instance is not updated in DB with admin password in the API anymore * Merged upstream * pep8 fixes * Initial tests * fix forever looping on a password reset API call * updating admin\_pass moved down to compute where the password is actually reset. only update if it succeeds * merged trunk * change install\_ref.admin\_password to instance\_ref.admin\_pass to match the DB * Merged trunk * remove my print * we're getting a list of tuples now' * we have a list of tuples, not a list of dicts * pep8 fixes * return the result of the function * Updated tests to use mox pep8 * InstanceTypesMetadata is now registered * make some changes to the manager so dupe keywords don't get passed * Fixing the InstanceTypesMetadata table definition * try out mox for testing image request filters * Adding the migrate code to add the new table * dist-sched-2a merge * Created new libvirt directory, moved libvirt\_conn.py to libvirt/connection.py, moved libvirt templates, broke out firewall and network utilities * make the column name correct * The code for getting an opaque reference to an instance assumed that there was a reference to an instance obj available when raising an exception. I changed this from raising an InstanceNotFound exception to a NotFound, as this is more appropriate for the failure, and doesn't require an instance ID * merge against 2a * trunk merge * simplified the limiting differences for different versions of the API * New tests added * Changed the exception type to not require an instance ID * Added model for InstanceTypeMetadata * Added test * Avoid wildcard import * Add unittests for cloning volumes * merged recent trunk * merged recent trunk * Make snapshot\_id=None a default value in VolumeManager:create\_volume(). It is not a regular case to create a volume from a snapshot * Don't need to import json * Fix wrong call of the volume api create() * pep8 fix in nova/compute/api.py * instead of the API spawning a greenthread to wait for a host to be picked, the instance to boot, etc for setting the admin password... 
let's push the admin password down to the scheduler so that compute can just take care of setting the password as a part of the build process * tests working again * eventlet.spawn\_n() expects the function and arguments, but it expects the arguments unpacked since it uses \*args * Don't pass a tuple since spawn\_n will get the arguments with \*args anyway * move devices back * Using the root-password subcommand of the nova client results in the password being changed for the instance specified, but to a different unknown password. The patch changes nova to use the password specified in the API call * Pretty simple. We call openssl to encrypt the admin password, but the recent changes around this code forgot to strip the newline off the read from stdout * DHSimple's decrypt needs to append \n when writing to stdin * need to strip newline from openssl stdout data * merge with trey * work on * merge trunk * moved auto assign floating ip functionality from compute manager to network manager * create a mac address entry and blindly use the first network * create a mac address entry and blindly use the first network * create a mac address entry and blindly use the first network * need to return the ref * Added filtering on image properties * Fixes a bug related to incorrect reparsing of flags and prevents many extra reparses * no use mac * comment out the direct cloud case * make fake\_flags set defaults instead of runtime values * add a test from vish and fix the issues * Properly reparse flags when adding dynamic flags * no use mac * instances don't have mac's anymore and address is now plural * let the fake driver accept the network info * Comment out the 2 tests that require the instance to contain mac/ip * initial use of limited\_by\_marker * more fix up * many tests pass now * its a dict, not a class * we don't get the network in a tuples anymore * specified image\_id keyword in exception arg * When adding a keypair with ec2 API that already exists, give a friendly error and no traceback in nova-api * added imageid string to exception, per peer review * Fixes some minor doc issues - misspelled flags in zones doc and also adds zones doc to an index for easier findability * removed most of debugging code * Fixing docstring * Synchronise with Diablo development * make \_make\_fixture respect name passed in * zone1 merge * sending calls * accepting calls * Fixing \_get\_kernel\_ramdisk\_from\_image to use the correct image service * Fixing year of copyright * merge * select partially going through * merge from trunk * make image\_ref and image\_id usage more consistant, eliminate redundancy in compute\_api.create() call * take out irrelevant TODO * blah * uhhh yea * local tweaks * getting closer to working select call * swap should use device 1 and rescue use device 2 * merged from trunk * fix tests, have glance plugin return json encoded string of vdi uuids * make sure to get a results, not the query * merged from trunk * Removing code duplication between parse\_image\_ref and get\_image service. Made parse\_image\_ref private * Changed ec2 api dupe key exception log handler info->debug * Added test case for attempting to create a duplicate keypair * Removing debug print line * Renaming service\_image\_id vars to image\_id to reduce confusion. 
Also some minor cleanup * cleanup and fixes * got rid of print statement * initial fudging in of swap disk * make the test\_servers pass by removing the address tests for 1.1, bug filed * port the current create\_networks over to the new network scheme * need to have the complete table def since sqlalchemy/sqlite won't reload the model * must have the class defined before referencing it * make the migration run with tests * get rid of all mention of drivers ... it's filter only now * merge trunk * Fixes euca-attach-volume for iscsi using Xenserver * fix typo * merge branch lp:~rackspace-titan/nova/ram-limits * Added test * Fixes missing space * Fixed mistyped line * Rebased to trunk rev 1101 * merge from trunk * moved utils functions into nova/image/ * Trunk merge * Fix bug #744150 by starting nova-api on an unused port * Removing utils.is\_int() * Added myself to Authors * When adding a keypair that already exists, give a friendly error and no traceback in nova-api * --dhcp-lease-max=150 by default. This prevents >150 instances in one network * Minor cleanup * No reason to modify the way file names are generated for kernel and ramdisk, since the kernel\_id and ramdisk\_id is still guaranteed to be ints * found a typo in the xenserver glance plugin that doesn't work with glance trunk. Also modified the image url to fetch from /v1/image/X instead of /image/X as that returned a 300 * fixing glance plugin bug and setting the plugin to use /v1 of the glance api * merge trunk * move init start position to 96 to allow openvswitch time to fully start * Include data files for public key tests in the tarball * minor cleanup * Makes sure vlan creation locks so we don't race and fail to create a vlan * merging trunk * Include data files for public key tests in the tarball * Merged with trunk * renaming resource\_factory to create\_resource * combined the exception catching to eliminate duplication * synchronize vlan creation * print information about nova-manage project problems * merge from trunk * fix comments * make nwfilter mock more 'realistic' by having it remember which filters have been defined * fix pep8 issue * fixed silly issue with variable needing to be named 'id' for the url mapper, also caught new exception type where needed * This is the groundwork for the upcoming distributed scheduler changes. Nothing is actually wired up here, so it shouldn't break any existing code (and all tests pass) * Merging trunk * Get rid of old virt/images.py functions that are no longer needed. Checked for any loose calls to these functions and found none. All tests pass for me * Update OSAPI v1.1 extensions so that it supports RequestExtensions. ResponseExtensions were removed since the new RequestExtension covers both use cases. This branch also removes some of the odd serialization code in the RequestExtensionController that converted dictionary objects into webob objects. RequestExtension handlers should now always return proper webob objects * Addressing bug #785763. Usual default for maximum number of DHCP leases in dnsmasq is 150. This prevents instances to obtain IP addresses from DHCP in case we have more than 150 in our network. Adding myself to Authors * foo * syntax errors * temp fixes * added support for reserving certain network for certain project * Fixed some tests * merge with trunk * Added an EC2 API endpoint that'll allow import of public key. 
Prior, api only allowed generation of new keys * This fix ensures that kpartx -d is called in the event that tune2fs fails during key injection, as it does when trying to inject a key into a windows instance. Bug #760921 is a symptom of this issue, as if kpartx -d is not called then partitions remain mapped that prevent the underlying nbd from being reused * Add new flag 'max\_kernel\_ramdisk\_size' to specify a maximum size of kernel or ramdisk so we don't copy large files to dom0 and fill up /boot/guest * The XenAPI driver uses openssl as part of the nova-agent implementation to set the password for root. It uses a temporary file insecurely and unnecessarily. Change the code to write the password directly to stdin of the openssl process instead * The tools/\* directory is now included in pep8 runs. Added an opt-out system for excluding files/dirs from pep8 (using GLOBIGNORE) * fill out the absolute limit tests for limits v1.0 controller * add absolute limits support to 1.0 api as well * Merged with trunk * fixed pep8 issue * merge from trunk * Fail early if requested imageRef does not exist when creating a server * Separate out tests for when unfilter is called from iptables vs. nwfilter driver. Re: lp783705 * Moved back templates and fixed pep8 issue. Template move was due to breaking packaging with template moves. That will need to happen in a later merge * further refactoring of wsgi module; adding documentation and tests * don't give instance quota errors with negative values * Merged trunk and resolved horrible horrible conflicts * No reason to hash ramdisk\_id and kernel\_id. They are ints * temp * waldon's naming feedback * Fixing role names to match code * Merging trunk * updated the hypervisors and ec2 api to support receiving lists from pluralized mac\_addresses and fixed\_ips * fname should have been root\_fname * minor cleanup, plus had to merge because of diverged-branches issue * Minor cleanup * merge from trunk * Fix comments * Add a unitest to test EC2 snapshot APIs * Avoid wildcard import * Simple change to sort the list of controllers/methods before printing to make it easier to read * missed the new wsgi test file * removing controller/serializer code from wsgi.py; updating other code to use new modules * merge lp:nova * fixup absolute limits to latest 1.1 spec * refactoring wsgi to separate controller/serialization/deserialization logic; creating osapi-specific module * default to port 80 if it isnt in the href/uri * return dummy id per vishs suggestion * hackish patch to fix hrefs asking for their metadata in boot (this really shouldnt be in ec2 api?) * Sort list of controllers/methods before printing * use a manual 500 with error text instead of traceback for failure * log any exceptions that get thrown trying to retrieve metadata * skeleton of forwarding calls to child zones * fix typo in udev rule * merge trunk * libvirt fixes to use new image\_service stuff * On second thought, removing decorator * Adding FlagNotSet exception * Implements a basic mechanism for pushing notifications out to interested parties. The rationale for implementing notifications this way is that the responsibility for them shouldn't fall to Nova. As such, we simply will be pushing messages to a queue where another worker entirely can be written to push messages around to subscribers * Spacing changes * get real absolute limits in openstack api and verify absolute limit responses * Added missing xenhost plugin. 
This was causing warnings to pop up in the compute logs during periodic\_task runs. It must have not been bzr add'd when this code was merged * fixed bug with compute\_api not having actual image\_ref to use proper image service * Adding xenhost plugin * Merging trunk * Added missing xenhost plugin * Fix call to spawn\_n() instead. It expects a callable * fix pep8 issues * oops, took out commented out tests in integrated.test\_servers and made tests pass again * fixed api.openstack.test\_servers tests...again * fixed QuotaTestCases * fixed ComputeTestCase tests * made ImageControllerWithGlanceServiceTests pass * fixed test\_servers small tests as well * get integrated server\_tests passing * Removed all utils.import\_object(FLAGS.image\_service) and replaced with utils.get\_default\_image\_service() * MySQL database tables are using the MyISAM engine. Created migration script to change all current tables to InnoDB, updated version to 019 * MySQL database tables are using the MyISAM engine. Created migration script to change all current tables to InnoDB, updated version to 019 * Small cleanups * Moving into scheduler subdir and refactoring out common code * Moving tests into scheduler subdirectory * added is\_int function to utils * Pep8 fixes * made get\_image\_service calls in servers.py * use utils.get\_image\_service in compute\_api * updates to utils methods, initial usage in images.py * added util functions to get image service * Using import\_class to import filter\_host driver * Adding fill first cost function * add more statuses for ec2 image registration * Add --fixes * Add --fixes * Fixes the naming of the server\_management\_url in auth and tests * Merging in Sandy's changes adding Noop Cost Fn with tests * merged trunk * move migration 017 to 018 * merge ram-limits * Removed extra serialization metadata * Docstring cleanup and formatting (nova/network dir). Minor style fixes as well * pep8 * Fixes improper attribute naming around instance types that broke Resizes * merge ram-limits * support unlimited quotas in nova-manage and flags * fix test * Changed builder to match specs and added test * add migration for proper name * Update test case to ensure password gets set correctly * make token use typo that is in database. Also fix now -> utcnow and stop using . syntax for dealing with tokens * Added missing metadata join to instance\_get calls * Avoid using spawn\_n to fix LP784132 * add ram limits to instance quotas * Convert instance\_type\_ids in the instances table from strings to integers to enable joins with instance\_types. This in particular fixes a problem when using postgresql * Set password to one requested in API call * don't throw type errors on NoneType int conversions * Added network\_info into refresh\_security\_group\_rules That fixs https://bugs.launchpad.net/nova/+bug/773308 * Improved error notification in network create * Instead of using a temp file with openssl, just write directly to stdin * First cut at least cost scheduler * merge lp:nova * Implemented builder for absolute limits and updated tests * provision\_resource no longer returns value * provision working correctly now * Re-pull changed notification branch * PEP8 fixes * adding --fixes lp:781429 * Fixed mistyped key, caused huge performance leak * Moved memcached connection in AuthManager to thread-local storage. Added caching of LDAP connection in thread-local storage. Optimized LDAP queries, added similar memcached support to LDAPDriver. Add "per-driver-request" caching of LDAP results. 
(should be per-api-request) * ugh, fixed again * tests fixed and pep8'ed * Update comment on RequestExtension class * failure conditions are being sent back properly now * Added opt-out system for excluding files/dirs from pep8 (using GLOBIGNORE) * MySQL database tables are using the MyISAM engine. Created migration script to change all current tables to InnoDB * MySQL database tables are using the MyISAM engine. Created migration script to change all current tables to InnoDB * fix for lp783705 - remove nwfilters when instance is terminated * basic call going through * Added missing metadata join to instance\_get calls * add logging to migration and fix migration version * Migrate quota schema from hardcoded columns to a key-value approach. The hope is that this change would make it easier to change the quota system without future schema changes. It also adds the concept of quotas that are unlimited * Conceded :-D * updated the mac\_address delete function to actually delete the rows, and update fixed\_ips * Added missing flavorRef and imageRef checks in the os api xml deserialization code along with tests * Fixed minor pylint errors * This branch splits out the IPv6 address generation into pluggable backends. A new flag named ipv6\_backend specifies which backend to use * Reduce indentation to avoid PEP8 failures * merge koelker migration changes * using mac\_address from fixed\_ip instead of instance * PEP8 cleanups * Use new 3-argument API * add a todo * style fixing * Removed obsolete method and test * renamed test cases in nova/tests/api/openstack/test\_servers.py to use a consistent naming convention as used in nova/tests/api/openstack/test\_images.py. also fixed a couple of pylint #C0103 errors in test\_servers.py * make the migration work like we expect it to * Fixed all pep8 errors in tools/install\_venv.py. All tests pass * Added the imageRef and flavorRef attributes in the xml deserialization * Add vnc\_keymap flag and enable setting keymap for vnc console * Review changes and merge from trunk * Pep8 cleaning * Added response about error in nova-manage project operations * Removed tools/clean\_vlans and tools/nova-debug from pep8 tests as they are shell scripts * Added lines to include tools/\* (except ajaxterm) in pep8 tests * Add a unit test for snapshot\_volume * Define image state during snapshotting. Name snapshot to the name provided, not generate * Unit test for snapshotting (creating custom image) * fixed a few C0103 errors in test\_servers.py * renamed test cases to use a consistent naming convention as used in nova/tests/api/openstack/test\_images.py * fix sys.argv requirement * first cut at weighted-sum tests * merge trunk * add udev rules and modified ovs\_configure\_vif\_flows.py to work with udev rules * Adds proper error handling for images that can't be found and a test for deregister image * added |fixed\_ip\_get\_all\_by\_mac\_address| and |mac\_address\_get\_by\_fixed\_ip| to db and sqlalchemy APIs * started on integrating HostFilter * Add support for rbd snapshots * Merging in trunk * I'm assuming that openstack doesnt work with python < 2.6 here (which I read somewhere on the wiki). This patch will check to make sure python >= 2.6 is installed, and also allow it to work with python 2.7 (and greater in the future) * merge lp:nova * XenAPI was not implemented to allow for multiple simultaneous XenAPI requests. A single XenAPIConnection (and thus XenAPISession) is used for all queries. 
XenAPISession's wait\_for\_task method would set a self.loop = for looping calls to \_poll\_task until task completion. Subsequent (parallel) calls to wait\_for\_task for another query would overwrite this. XenAPISession.\_poll\_task was pulled into the XenAPISession.wait\_for\_task method to avoid having to store self.loop * pep8 fixes * Merged trunk * volume/driver: make unit test, test\_volume, pass * Make set\_admin\_password non-blocking to API * Merged trunk * Review feedback * Lost a flag pulling from another branch. Whoops * Update the compute manager so that it breaks out of a loop if set\_admin\_password is not implemented by the driver. This avoids excessively logging NotImplementedError exceptions * Merging in Sandy's changes * Make host timeout configurable * Make set\_admin\_password non-blocking to API * volume/driver: implement basic snapshot * merge trunk * Update the compute manager so that it breaks out of a loop if set\_admin\_password is not implemented by the driver * Add init script and sysconfig file for openvswitch-nova * volume/driver: factor out lvm opration * Authors: add myself to Authers file * trunk merge * Adding zones doc into index of devref plus a bug fix for flag spellings * fixup based on Lorin's feedback * added flag lost in migration * merge trunk * pep8 * Adding basic tests for call\_zone\_method * fixed\_ip disassociate now also unsets mac\_address\_id * Make sure imports are in alphabetical order * updated previous calls referring to the flags to use the column from the networks table instead * merged from trunk * handle instance\_type\_ids that are NULL during upgrade to integers * fix for lp760921. Previously, if tune2fs failed, as it does on windows hosts, kpartx -d also failed to be called which leaves mapped partitions that retain holds on the nbd device. These holds cause the observed errors * if a LoopingCall has canceled the loop, break out early instead of sleeping any more than needed * Add a test for parallel builds. verified this test fails before this fix and succeeds after this fix * incorporated ImageNotFound instead of NotFound * merged from trunk * misc related network manager refactor and cleanup * changed NotFound exception to ImageNotFound * Update comment * Variable renaming * Add test suite for IPv6 address generation * Accept and ignore project\_id * Make it so that ExtensionRequest objects now return proper webob objects. This avoids the odd serialization code in the RequestExtensionController class which converts JSON dicts to webobs for us * merged from trunk * Remove ResponseExtensions. 
The new RequestExtension covers both use cases * Initial work on request extensions * Added network\_info into refresh\_security\_group\_rules * fixed pep8 spacing issue * merge from trunk * rename quota column to 'hard\_limit' to make it simpler to avoid collisions with sql keyword 'limit' * Fix remote volume code * 1 Set default paths for nova.conf and api-paste.ini to /etc/nova/ 2 Changed countryName policy because https://bugs.launchpad.net/nova/+bug/724317 still affected * Implement IPv6 address generation that includes account identifier * messing around with the flow of create() and specs * Redundant line * changes per review * docstring cleanup, nova/network dir * make instance.instance\_type\_id an integer to support joins in postgres * merge from trunk and update .mailmap file * Merged trunk * Updated MANIFEST for template move * NoValidHost exception test * Fixes an issue with conversion of images that was introduced by exception refactoring. This makes the exceptions when trying to locate an ec2 id clearer and also adds some tests for the conversion methods * oops fixed a docstring * Pep8 stuff * Bluprint URL: https://blueprints.launchpad.net/nova/+spec/improve-pylint-scores/ * start of zone\_aware\_scheduler test * Moved everything into notifier/api * make sure proper exceptions are raised for ec2 id conversion and add tests * better function name * Updated the value of the nova-manager libvirt\_type * more filter alignment * Removed commented out 'from nova import log as logging' line, per request from Brian Lamar * merge trunk * align filters on query * better pylint scores on imports * Code cleanup * Merged trunk * Abstract out IPv6 address generation to pluggable backends * Merged trunk * First cut with tests passing * changing Authors file * removed unused wild card imports, replaced sqlalchemy wildcard import with explicit imports * removed unused wild card imports, replaced sqlalchemy wildcard import with explicit imports * Fix for #780276 (run\_tests.sh fails test\_authors\_up\_to\_date when using git repo) * extracted xenserver capability reporting from dabo's dist-scheduler branch and added tests * migrate back updated\_at correctly * added in log\_notifier for easier debugging * Add priority based queues to notifications. Remove duplicate json encoding in notifier (rpc.cast does encoding... ) make no\_op\_notifier match rabbit one for signature on notify() * Bugfix #780784. KeyError when creating custom image * removed unused wild card imports, replaced sqlalchemy wildcard import with explicit imports * removed unused wild card imports, replaced sqlalchemy wildcard import with explicit imports * removed unused wild card imports, replaced sqlalchemy wildcard import with explicit imports * Better tests * Add example * give a more informative message if pre-migration assertions fail * Whoops * fix migration bug * Pep8 * Test * remove stubbing of XenAPISession.wait\_for\_task for xenapi tests as it doesn't need to be faked. 
Also removed duplicate code that stubbed xenapi\_conn.\_parse\_xmlrpc\_value * migration bug fixes * Change xenapi's wait\_for\_task to handle multiple simultaenous queries to fix lp:766404 * Added GitPython to [install\_dir]/tools/pip-requires * got rid of unnecessary imports * Enable RightAWS style signature checking using server\_string without port number, add test cases for authenticate() and a new helper routine, and fix lp753660 * Better message format description * unified underscore/dash issue * update tests to handle unlimited resources in the db * pep8 * capabilities flattened and tests fixed * Set root password upon XenServer instance creation * trunk merge * clean up unused functions from virt/images.py * Removing a rogue try/catch expecting a non-existant exception.TimeoutException that is never raised * basic test working * db: fix db versioning * fix mismerge by 1059 * volume/driver: implement basic snapshot/clone * volume/driver: factor out lvm opration * Host Filtering for Distributed Scheduler (done before weighing) * Rebased to trunk rev 1057 * Adds coverage-related packages to the tools/pip-requires to allows users to generate coverage reporting when running unit tests with virtulenv * merge from trunk * Set publish\_errors default to False * convert quota table to key-value * Simple fix for this issue. Tries to raise an exception passing in a variable that doesn't exist, which causes an error * Fixed duplicate function * Review feedback * Review feedback * Fixed method in flavors * Review feedback * Review feedback * Merged trunk * Set root password upon XenServer instance creation * Added Python packages needed for coverage reports to virtualenv packages * Added interface functions * merge from trunk * added test for show\_by\_name ImageNotFound exception * tests pass again * Sanitize get\_console\_output results. See bug #758054 * revised file docs * New author in town * Changes to allow a VM to boot from iso image. A blank HD is also attached with a size corresponding to the instance type * Added stub function for a referenced, previously non-existant function * Merged trunk * grabbed from dist-sched branch * Explicitly casted a str to a str to please pylint * Removed incorrect, unreachable code * spacing fix * pep8 fix * Improved error notification in network create * Add two whitespaces to conform PEP8 * Publish errors via nova.notifier * Added myself to Authors file * terminology: no more plug-ins or queries. They are host filters and drivers * Added interface function to ViewBilder * Added interfaces to server controller * added self to authors * fixed issue with non-existent variable being passed to ImageNotFound exception * removing rogue TimeoutException * merge prop fixes * Merged trunk * print statements removed * merge with trunk * flipped service\_state in ZoneManager and fixed tests * pep8 * not = * not = * and or test * and or test * merge from trunk * Removed extra newline after get\_console\_output in fake virt driver * Moved all reencoding to compute manager to satisfy both Direct API and internal cloud call * Merged with current trunk * added myself to Authors * Adding a test case to show the xml deserialization failure for imageRef and flavorRef * Fixes for nova-manage vpn list * json parser * Don't fail the test suite in the absence of VCS history * It's ok if there's no commit history. 
Otherwise the test suite in the tarball will fail * Merged trunk * flavor test * Fix indentation * tests and better driver loading * Add missed hyphen * Adding OSAPI v1.1 limits resource * Adding support for server rebuild to v1.0 and v1.1 of the Openstack API * reduce policy for countyname * looking for default flagfile * adding debug log message * merging trunk * merging trunk * removing class imports * Merged trunk * Merged trunk * Moved reencoding logic to compute manager and cloud EC2 API * ensure create image conforms to OS API 1.1 spec * merge updates from trunk * Added support in the nova openstack api for requests with local hrefs, e.g., "imageRef":"2" Previously, it only supported "imageRef":"http://foo.com/images/2". The 1.1 api spec defines both approaches * Add a flag to allow the user to specify a dnsmasq configuration file for nova-network to use when starting dnsmasq. Currently the command line option is set to "--config-fil=" with nothing specified. This branch will leave it as it is if the user does not specify a config file, but will utilize the specific file if they do * merged from trunk * implemented review suggestion EAFP style, and fixed test stub fake\_show needs to have image\_state = available or other tests will fail * got rid of extra whitespace * Update tools/pip-requires and tools/install\_venv.py for python2.7 support (works in ubuntu 11.04) * No need to test length of admin password in local href test * merging trunk; resolving conflicts; fixing issue with ApiError test failing since r1043 * Added support in osapi for requests with local hrefs, e.g., "imageRef":"2" * initial pass * Implement get\_host\_ip\_addr in the libvirt compute driver * merging trunk; resolving conflicts * Modified the instance status returned by the OS api to more accurately represent its power state * Fixed 2 lines to allow pep8 check to pass * Since run\_tests.sh utilizes nose to run its tests, the -x, --stop flag works correctly for halting tests on the first failed test. 
The usage information for run\_tests.sh now includes the --stop flag * add support for git checking and a default of failing if the history can't be read * ApiError 'code' arg set to None, and will only display a 'code' as part of the str if specified * Fixed: Check for use of IPv6 missing * removed unused method and fixed imports * Change the links in the sidebar on the docs pages * Use my\_ip for libvirt version of get\_host\_ip\_addr * fix typo in import * removed unused method and fixed imports * small changes in libvirt tests * place ipv6\_rules creation under if ip\_v6 section * Added checking ip\_v6 flag and test for it * merging trunk * adding view file * Expose AuthManager.list\_projects user filter to nova-manage * Final cleanup of nova/exceptions.py in my series of refactoring branches * Uses memcached to cache roles so that ldap is actually usable * added nova version to usage output of bin/nova-manage for easy identification of installed codebase * Changing links in sidebar to previous release * Rebased to trunk rev 1035 * converted 1/0 comparison in db to True/False for Postgres cast compatibility * Changed test\_cloud and fake virt driver to show out the fix * converted 1/0 comparison to True/False for Postgres compatibility * pep8 * fixed docstring per jsb * added version list command to nova-manage * Added more unit-test for multi-nic-nova libvirt * Sanitize get\_console\_output in libvirt\_conn * added nova version output to usage printout for nova-manage * Make the import of distutils.extra non-mandatory in setup.py. Just print a warning that i18n commands are not available.. * Correcting exception case * further cleanup of nova/exceptions.py * added eagerloading mac adddresses for instance * merge with trunk and resolve conflicts * Added myself to authors file * pep8 fixes * Refactoring usage of nova.exception.NotFound * Let nova-mange limit project list by user * merging trunk * Make the import of distutils.extra non-mandatory in setup.py. Just print a warning that i18n commands are not available.. * Updated run\_tests.sh usage info to reflect the --stop flag * Fixed formatting to align with PEP 8 * Modified instance status for shutoff power state in OS api * Refactoring the usage of nova.exception.Duplicate * Rebased to trunk rev 1030 * removed extra newline * merged from trunk * updated tests to reflect serverRef as href (per Ilya Alekseyev) and refactored \_build\_server from ViewBuilder (per Eldar Nugaev) * Add a test checking spawn() works when network\_info is set, which currently doesn't. The following patch would fix parameter mismatch calling \_create\_image() from spawn() in libvirt\_conn.py * removed unused imports and renamed template variables * pep8 * merging trunk * Renamed test\_virt.py to test\_libvirt.py as per suggestion * fixing bad merge * Merged trunk and fixed simple exception conflict * merging trunk * Refactoring nova.exception.Invalid usage * adding gettext to setup.py * Use runtime XML instead of VM creation time XML for createXML() call in order to ensure volumes are attached after RebootInstances as a workaround, and fix bug #747922 * Created new libvirt directory, moved libvirt\_conn.py to libvirt/connection.py, moved libvirt templates, broke out firewall and network utilities * Rebased to trunk rev 1027, and resolved a conflict in nova/virt/libvirt\_conn.py * Rebased to trunk rev 1027 * clarifies error when trying to add duplicate instance\_type names or flavorids via nova-manage instance\_type * merge trunk * Rework completed. 
Added test cases, changed helper method name, etc * pep8 * merge trunk, resolved conflict * merge trunk * Abstracted libvirt's lookupByName method into \_lookup\_by\_name * Provide option of auto assigning floating ip to each instance. Depend on auto\_assign\_floating\_ip boolean flag value. False by default * Fixes per review * Restore volume state on migration failure to fix lp742256 * Fixes cloudpipe to get the proper ip address * merging trunk * Fix bug with content-type and small OpenStack API actions refactor * merge with trunk * merge trunk * merged trunk * -Fixed indent for \_get\_ip\_version -Added LoopingCall to destroy as suggested by earlier bug report -Standardized all LoopingCall uses to include useful logging and better error handling * Create a dictionary of instance\_types before executing SQL updates in the instance\_type\_id migration (014). This should resolve a "cannot commit transaction - SQL statements in progress" error with some versions of sqlite * create network now takes bridge for flat networks * Adapt DescribeInstances to EC2 API spec * Change response of the EC2 API CreateVolume method to match the API docs for EC2 * Merged trunk and fixed api servers conflict * pep8 * Fixes and reworkings based on review * pep8 * Addressing exception.NotFound across the project * fix logging in reboot OpenStack API * eager loaded mac\_address attributes for mac address get functions * updated image builder and tests for OS API 1.1 compatibility (serverRef) * forgot import * change action= to actions= * typo * forgot to save * moved get\_network\_topic to network.api * style cleaning * Fixed network\_info creation in libvirt driver. Now creating same dict as in xenapi driver * Modified instance status for shutdown power state in OS api * rebase trunk * altered imports * commit to push for testing * Rebased to trunk rev 1015 * Utility method reworked, etc * Docstring cleanup and formatting (nova/image dir). Minor style fixes as well * Docstring cleanup and formatting (nova/db dir). Minor style fixes as well * Docstring cleanup and formatting (nova dir). Minor style fixes as well * use vpn filter in basic filtering so cloudpipe works with iptables driver * use simpler interfaces * Docstring cleanup and formatting (console). Minor style fixes as well * Docstring cleanup and formatting (compute). Minor style fixes as well * merge trunk * Add privateIpAddress and ipAddress to EC2 API DescribeInstances response * style fixing * Fix parameter mismatch calling \_create\_image() from spawn() in libvirt\_conn.py * Add a test checking spawn() works when network\_info is set, which currently doesn't. The following patch would fix it * put up and down in the right dir * Makes metadata correctly display kernel-id and ramdisk-id * pep8 cleaning * style fix * revert changes that doesn't affect the bug * in doesn't work properly on instance\_ref * Another small round of pylint clean-up * Added an option to run\_tests.sh so you can run just pep8. So now you can: ./run\_tests.sh --just-pep8 or ./run\_tests.sh -p * merge trunk * fix display of vpn instance id and add output rule so it can be tested from network host * Exit early if tests fail, before pep8 is run * more changes per review * fixes per review * docstring cleanup, nova/image dir * Docstring cleanup and formatting. 
Minor style fixes as well * cleanups per code review * docstring cleanup, nova dir * fixed indentation * docstring cleanup, console * docstring cleanup, nova/db dir * attempts to make the docstring rules clearer * fix typo * docstring cleanup compute manager * bugfix signature * refactor the way flows are deleted/reset * remove ambiguity in test * Pylinted nova-compute * Pylinted nova-manage * replaced regex to webob.Request.content\_type * fix after review: style, improving tests, replacing underscore * merge with trunk * fix Request.get\_content\_type * Reverted bad merge * Rebased to trunk rev 1005 * Removed no longer relevant comment * Removed TODO we don't need * Removed \_ and replaced with real variable name * instance type get approach changed. tests fixed * Merged trunk * trunk merged * fix: mark floating ip as auto assigned * Add to Authors * Change response format of CreateVolume to match EC2 * revamped spacing per Rick Harris suggestion. Added exact error to nova-manage output * only apply ipv6 if the data exists in xenstore * Create a dictionary of instance\_types before executing SQL updates in the instance\_type\_id migration (014). This should resolve a "cannot commit transaction - SQL statements in progress" error with some versions of sqlite * add support for git checking and a default of failing if the history can't be read * strip output, str() link local * merging lp:~rackspace-titan/nova/exceptions-refactor-invalid * Round 1 of pylint cleanup * Review feedback * Implement quotas for the new v1.1 server metadata controller * fix doc typo * fix logging in reboot OpenStack API * make geninter.sh use the right tmpl file * pep8 fix * refactoring usage of exception.Duplicate errors * rename all versions of image\_ec2\_id * Abstracted lookupByName calls to \_lookup\_by\_name for centralized error handling * actually use the ec2\_id * remove typo * merging lp:~rackspace-titan/nova/exceptions-refactor-invalid * Fixes cloudpipe to get the proper ip address * add include file for doc interfaces * add instructions for setting up interfaces * Merged trunk and fixed small comment * Fixed info messages * Tweak to destroy loop logic * Pretty critical spelling error * Removed extra calls in exception handling and standardized the way LoopingCalls are done * one last i18n string * Merged trunk * multi-line string spacing * removing rogue print * moving dynamic i18n to static * refractoring * Add support for cloning a Sheepdog volume * Add support for cloning a Sheepdog volume * Add support for creating a new volume from a existing snapshot with EC2 API * Add support for creating a new volume from a existing snapshot with EC2 API * Add support for creating a Sheepdog snapshot * Add support for creating a Sheepdog snapshot * Add support for creating a snapshot of a nova volume with euca-create-snapshot * Add support for creating a snapshot of a nova volume with euca-create-snapshot * trunk merged * Implement get\_host\_ip\_addr in the libvirt compute driver * Adding projectname username to the nova-manage project commands to fix a doc bug, plus some edits and elimination of a few doc todos * pep8 fixes * Remove zope.interface from the requires file since it is not used anywhere * use 'is not None' instead of '!= None' * Fix loggin in creation server in OpenStack API 1.0 * Support admin password when specified in server create requests * First round of pylint cleanup * merge lp:nova and resolve conflicts * Change '== None' to 'is None' * remove zope.interface requires * use 'is not None' 
instead of '!= None' * pep8 fixes * Change '== None' to 'is None' * Fixes nova-manage image convert when the source directory is the same one that local image service uses * trunk merged * pep8 fixed * calc link local * not performing floating ip operation with auto allocated ips * it is rename not move * pep8 fix * Rebased to trunk rev 995 * Rebased to trunk rev 995 * merge trunk * add fault as response * Fix logging in openstack api * Fix logging in openstack api * Fix logging in openstack api * trunk merged. conflict resolved * trunk merged. conflict resolved * The change to utils.execute's call style missed this call somehow, this should get libvirt snapshots working again * Fix parameter mismatch calling to\_xml() from spawn() in libvirt\_conn.py * move name into main metadata instead of properties * change libvirt snapshot to new style execute * Add additional logging for WSGI and OpenStack API authentication * Rename the id * Added period to docstring for metadata test * Merged trunk * Empty commit to hopefully regenerate launchpad diff * Explicitly tell a user that they need to authenticate against a version root * Merged trunk * merging trunk * adding documentation & error handling * correcting tests; pep8 * Removed the unused self.interfaces\_xml variable * Only poll for instance states that compute should care about * Diablo versioning * Diablo versioning * Rebased to trunk rev 989 * Rebased to trunk rev 989 2011.2 ------ * Final versioning for Cactus * initial roundup of all 'exception.Invalid' cases * merge trunk * set the bridge on each OvsFlow * merge with trunk * bugfix * bugfix * Fix parameter mismatch calling to\_xml() from spawn() in libvirt\_conn.py * add kvm-pause and kvm-suspend 2011.2rc1 --------- * Rework GlanceImageService.\_translate\_base() to not call BaseImageService.\_translate\_base() otherwise the wrong class attributes are used in properties construction.. * Updated following to RIck's comments * Rebased to trunk rev 987 * Rework GlanceImageService.\_translate\_base() to not call BaseImageService.\_translate\_base() otherwise the wrong class attributes are used in properties construction.. * Try to be nicer to the DB when destroying a libvirt instance * pep8 * merge trunk * fixed error message i18n-ization. added test * Don't hammer on the DB * Debug code clean up * Rebased to trunk rev 986 * An ultimate workaround workd... :( * Zero out volumes during deletion to prevent data leaking between users * Minor formatting cleanup * jesse@aire.local to mailmap * Changed pep8 command line option from --just-pep8 to --pep8 * re-add broken code * merge trunk * Final versioning * Updates the documentation on creating and using a cloudpipe image * iSCSI/KVM test completed * Minor fixes * Fix RBDDriver in volume manager. discover\_volume was raising exception. Modified local\_path as well * Fixes VMware Connection to inherit from ComputeDriver * Fixes s3.py to allow looking up images by name. Smoketests run unmodified again with this change! * move from try\_execute to \_execute * Make VMWare Connection inherit from ComputeDriver * add up and down .sh * fix show\_by\_name in s3.py and give a helpful error message if image lookup fails * remove extra newline * dots * Rebased to trunk rev 980 * Rework importing volume\_manager * Blushed up a little bit * Merged trunk * Only warn about rouge instances that compute should know about * Added some tests * Dangerous whitespace mistake! 
:) * Cleanup after prereq merge * Add new flag 'max\_kernel\_ramdisk\_size' to specify a maximum size of kernel or ramdisk so we don't copy large files to dom0 and fill up /boot/guest * Rebased to trunk rev 980 * Merged lp:~rackspace-titan/nova/server\_metadata\_quotas as a prereq * Merged trunk * Docstring cleanup and formatting. Minor style fixes as well * Updated to use setfacl instead of chown * Commit for merge of metadata\_quotas preq * merge trunk * Removed extra call from try/except * Reverted some superfluous changes to make MP more concise * Merged trunk * Reverted some superfluous changes to make MP more concise * Replace instance ref from compute.api.get\_all with one from instance\_get. This should ensure it gets fully populated with all the relevant attributes * Add a unit test for terminate\_instances * pep8 * Fix RBDDriver in volume manager. discover\_volume was raising exception. Modified local\_path as well * pep8 fixes * migaration and pep8 fixes * update documentation on cloudpipe * Makes genvpn path actually refer to genvpn.sh instead of geninter.sh * typo * Merged trunk * Updating the runnova information and fixing bug 753352 * merge trunk * network manager changes, compute changes, various other * Floating ips auto assignment * Sudo chown the vbd device to the nova user before streaming data to it. This resolves an issue where nova-compute required 'root' privs to successfully create nodes with connection\_type=xenapi * Minor blush ups * A minor blush up * A minor blush up * Remove unused self.interfaces\_xml * Rebased to trunk rev 977 * Rebase to trunk rev 937 * debug tree status checkpoint 2 * docstring cleanup, direct api, part of compute * bzr ignore the top level CA dir that is created when running 'run\_tests.sh -N' * fix reference to genvpn to point to the right shell script * Set default stateOrProvice to 'supplied' in openssl.cnf.tmpl * merge trunk * This branch fixes https://bugs.launchpad.net/bugs/751231 * Replace instance ref from compute.api.get\_all with one from instance\_get. This should ensure it gets fully populated with all the relevant attributes * When using libvirt, remove the persistent domain definition when we call destroy, so that behavior on destroy is as it was when we were using transient instances * Rebased to trunk rev 973 * Currently terminating an instance will hang in a loop, this allows for deletion of instances when using a libvirt backend. Also I couldn't help add a debug log where an exception is caught and ignored * merge trunk * resolved lazy\_match conflict between bin/nova-manage instance and instance\_type by moving instance subcommand under vm command. documented vm command in man page. removed unused instance\_id from vm list subcommand * Ooops - redefining the \_ variable seems like a \_really\_ bad idea * Handle the case when the machine is already SHUTOFF * Split logic on shutdown and undefine, so that even if the machine is already shutdown we will be able to proceed * Remove the XML definition when we destroy a machine * Rebased to trunk rev 971 * debug tree status checkpoint * Reabased to trunk rev 971 * Fixed log message gaffe * pylintage * typo - need to get nova-volumes working on this machine :-/ * dd needs a count to succeed, and remove unused/non-working special case for size 0 * There is a race condition when a VDI is mounted and the device node is created. 
Sometimes (depending on the configuration of the Linux distribution) nova loses the race and will try to open the block device before it has been created in /dev * zero out volumes on delete using dd * Added RST file on using Zones * Fixes euca-attach-volume for iscsi using Xenserver * pep8 * merge trunk * removes log command from nova-manage as it no longer worked in multi-log setup * Added error message to exception logging * Fixes bug which hangs nova-compute when terminating an instance when using libvirt backend * missing 'to' * Short circuit non-existant device during unit tests. It won't ever be created because of the stubs used during the unit tests * Added a patch for python eventlet, when using install\_venv.py (see FAQ # 1485) * fixed LOG level and log message phrase * merge prop tweaks 2 * Set default stateOrProvice to 'supplied' in openssl.cnf.tmpl * This branch fixes https://bugs.launchpad.net/nova/+bug/751242 * Ignore errors when deleting the default route in the ensure\_bridge function * bzr ignore the CA dir * merge prop tweaks * Import translations from Launchpad * added Zones doc * Update the describe\_image\_attribute and modify\_image\_attribute functions in the EC2 API so they use the top level 'is\_public' attribute of image objects. This brings these functions in line with the base image service * Import from lp:~nova-core/nova/translations * corrects incorrect openstack api responses for metadata (numeric/string conversion issue) and image format status (not uppercase) * Implement a mechanism to enforce a configurable quota limit for image metadata (properties) within the OS API image metadata controller * Update the describe\_image\_attribute and modify\_image\_attribute functions in the ec2 API so they use the top level 'is\_public' attribute of image objects. This brings these functions in line with the base image service * Ignore errors when deleting the default route in the ensure\_bridge function * merge trunk * removed log command from nova-manage. no longer applicable with multiple logfiles * merge trunk * reminde admins of --purge option * Fixes issues with describe instances due to improperly set metadata * Keep guest instances when libvirt host restarts * fix tests from moving access check into update and delete * Added support for listing addresses of a server in the openstack api. Now you can GET \* /servers/1/ips \* /servers/1/ips/public \* /servers/1/ips/private Supports v1.0 json and xml. Added corresponding tests * Log libvirt errcode on exception * This fixes how the metadata and addresses collections are serialized in xml responses * Fix to correct libvirt error code when the domain is not found * merged trunk * Removed commented-out old 'delete instance on SHUTOFF' code * Automatically add the metadata address to the network host. This allows guests to ARP for the address properly * merged trunk and resolved conflict * slight typo * clarified nova-manage instance\_type create error output on duplicate flavorid * This branch is a patch for fixing below issue. > Bug #746821: live\_migration failing due to network filter not found Link a bug report * fix pep8 violation * Update instances table to use instance\_type\_id instead of the old instance\_type column which represented the name (ex: m1.small) of an instance type * Drop extra 'None' arg from dict.get call * Some i18n fixes to instance\_types * Renamed computeFault back to cloudServersFault in an effort to maintain consistency with the 1.0 API spec. 
We can look into distinguishing the two in the next release. Held off for now to avoid potential regression
* adds a timeout on session.login_with_password()
* Drop unneeded Fkey on InstanceTypes.id
* Bypass a potential security vulnerability by not setting shell=True in xenstore.py, using johannes.erdfelt's patch
* Renamed computeFault to cloudServersFault
* fixed the way ip6 address were retrieved/returned in _get_network_info in nova/virt/xenapi/vmops
* added -manage vm [list|live-migration] to man page
* removed unused instance parameter from vm list ... as it is unused. added parameters to docstring for vm list
* moved -manage instance list command to -manage vm list to avoid lazy match conflict with instance_types
* Simplify by always adding to loopback
* Remove and from AllocateAddress response, and fix bug #751176
* remove unused code
* better error message
* Blush up a bit
* Rebased to trunk rev 949
* pep8
* adds timeout to login_with_password
* test provider fw rules at the virt/ipteables layer. lowercase protocol names in admin api to match what the firewall driver expects. add provider fw rule chain in iptables6 as well. fix a couple of small typos and copy-paste errors
* fixed based on reviewer's comment - 1. erase unnecessary blank line, 2. adding LOG.debug
* Rebased to trunk rev 949
* fixed based on reviewer's comment - 'locals() should be off from _()
* Make description of volume_id more generic
* add the tests
* pep8 cleanup
* ApiError code should default to None, and will only display a code if one exists. Prior was output an 'ApiError: ApiError: error message' string, which is confusing
* ec2 api run_instances checks for image status must be 'available'. Overhauled test_run_instances for working set of test assertions
* if we delete the old route when we move it we don't need to check for exists
* merged trunk
* removed comment on API compliance
* Added an option to run_tests.sh so you can run just pep8. So now you can: ./run_tests.sh --just-pep8 or ./run_tests.sh -p
* Add automatic metadata ip to network host on start. Also fix race where gw is readded twice
* Controllers now inherit from nova.api.openstack.common.OpenstackController
* Merged trunk
* Support providing an XML namespace on the XML output from the OpenStack API
* Merged with trunk, fixed up test that wasn't checking namespace
* Added support for listing addresses of a server in the openstack api. Now you can GET * /servers/1/ips * /servers/1/ips/public * /servers/1/ips/private Supports v1.0 json and xml. Added corresponding tests
* check visibility on delete and update
* YADU (Yet Another Docstring Update)
* Make sure ca_folder is created before chdir()ing into it
* another syntax error
* Use a more descriptive name for the flag to make it easier to understand the purpose
* Added logging statements for generic WSGI and specific OpenStack API requests
* syntax error
* Incorprate johannes.erdfelt's patch
* updated check_vm_record in test_xenapi to check the gateway6 correctly
* updated get_network_info in libvirt_conn to correctly insert ip6s and gateway6 into the network info, also small style fixes
* add docstrings
* updated _prepare_injectables() to use info[gateway6] instead of looking inside the ip6 address dict for the gateway6 information
* Enable RightAWS style signing on server_string without port number portion
* modified behavior of inject_network_info and reset_network related to a vm_ref not being passed in
* Create ca_folder if it does not already exist
* Wait for device node to be created after mounting image VDI
* Improved unit tests Fixed docstring formatting
* Only create ca_path directory if it does not already exist
* Added bug reference
* Only create ca_path directory if it does not already exist
* Make "setup.py install" much more thorough. It now installs tools/ into /usr/share/nova and makes sure api-paste.conf lands in /etc/nova rather than /etc
* fixed based on reviwer's comment
* return image create response as image dict
* Add a patch for python eventlet, when using install_venv.py (see FAQ # 1485)
* Undo use of $ in chain name where not needed
* Testing for iptables manager changes
* Don't double-apply provider fw rules in NWFilter and Iptables. Don't create provider fw rules for each instance, use a chain and jump to it. Fix docstrings
* typo
* remove -None for user roles
* pep8
* fallback to status if image_state is not set
* update and fix tests
* unite the filtering done by glance client and s3
* Removing naughty semicolon
* merged trunk
* remove extraneous empty lines
* move error handling down into get_password function
* refactor to handle invalid adminPass
* fixed comment
* merged trunk
* add support for specifying adminPass for JSON only in openstack api 1.1
* add tests for adminPass on server create
* Fix a giant batch of copypasta
* Remove file leftover from conflict
* adding support for OSAPI v1.1 limits resource
* Moved 'name' from to , corrected and fixes bug # 750482
* This branch contains the fix for lp:749973. VNC is assumed that is default for all in libvirt which LXC does not support yet
* Remove comments
* Separate CA/ dir into code and state
* removed blank lines for pep8 fix
* pep8 fixed
* Fixed the addresses and metadata collections in xml responses. Added corresponding tests
* Dont configure vnc if we are using lxc
* Help paste_config_file find the api config now that we moved it
* Add bug reference
* Move api-paste.ini into a nova/ subdir of etc/
* Add a find_data_files method to setup.py. Use it to get tools/ installed under /usr/(local/)/share/nova
* Nits
* Add missing underscore
* fix bug lp751242
* fix bug lp751231
* Automatically create CA state dir, and make sure the CA scripts look for the templates in the right places
* fix bug 746821
* Remove and from AllocateAddress response, and fix bug #751176
* Allow CA code and state to be separated, and make sure CA code gets installed by setup.py install
* Rebased to trunk 942
* fix bug lp:682888 - DescribeImages has no unit tests
* Correct variable name
* correct test for numeric/string metadata value conversion
* openstack api metadata responses must be strings
* openstack api requires uppercase image format status responses
* merge trunk
* Refactor so that instances.instance_type is now instances.instance_type_id
* splitting test_get_nic_for_xml into two functions
* Network injection check fixed in libvirt driver
* merging trunk
* fixing log message
* working with network_ref like with mapping
* add test for NWFilterFirewall
* Removed adminclient.py and added reference to the new nova-adminclient project in tools/pip-requires
* Don't prefix adminPass with the first 4 chars of the instance name
* Declares the flag for vncproxy_topic in compute.api
* Fixes bug 741246. Ed Leafe's inject_file method for the agent plugin was mistakenly never committed after having to fix commits under wrong email address. vmops makes calls to this (previously) missing method
* Attempt to circumvent errors in the API from improper/malformed responses from image service
* fixes incorrect case of OpenStack API status response
* Fixed network_info creating
* Moved 'name' property from to , corrected and fixes bug # 750482
* corrected capitalization of openstack api status and added tests
* libvirt_con log fix
* Ensure no errors for improper responses from image service
* merge trunk
* Fixes error which occurs when no name is specified for an image
* improving tests
* network injection check fixed
* Only define 'VIMMessagePlugin' class if suds can be loaded
* Make euca-get-ajax-console work with Euca2ools 1.3
* Add bug reference
* Use keyword arguments
* add multi_nic_test
* added preparing_xml test
* split up to_xml to creation xml_info and filling the template
* use novalib for vif_rules.py, fix OvsFlow class
* extract execute methods to a library for reuse
* Poller needs to check for BUILDING not NOSTATE now, since we're being more explict about what is going on
* Add checking if the floating_ip is allocated or not before appending to result array in DescribeAddresses
* Added synchronize_session parameter to a query in fixed_ip_disassociate_all_by_timeout() and fix #735974
* Made the fix simpler
* Add checking if the floating_ip is allocated or not before appending to result array
* Added updated_at field to update statement according to Jay's comment
* change bridge
* Add euca2ools import
* Rebased to trunk 930
* Rebased to trunk 726
* lots of updates to ovs scripts
* Make euca-get-ajax-console work with Euca2ools 1.3
* merge trunk
* Hopefully absolved us of the suds issue?
* Removes excessive logging message in the event of a rabbitmq failure * Add a change password action to /servers in openstack api v1.1, and associated tests * Removal of instance\_set\_state from driver code, it shouldnt be there, but instead should be in the compute manager * Merged trunk * Don't include first 4 chars of instance name in adminPass * Friendlier error message if there are no compute nodes are available * merge lp:nova * Merged waldon * Adding explanation keyword to HTTPConflict * Merged waldon * makes sure s3 filtering works even without metadata set properly * Merged waldon * Didn't run my code. Syntax error :( * Now using the new power state instead of string * adding servers view mapping for BUILDING power state * removes excessive logging on rabbitmq failure * Review feedback * Friendlier error message if there are no compute nodes are available * Merged with Waldon * Better error handling for spawn and destroy in libvirt * pep8 * adding 'building' power state; testing for 409 from OSAPI when rebuild requested on server being rebuild * More friendly error message * need to support python2.4, so can't use uuid module * If the floating ip address is not allocated or is allocated to another project, then the user trying to associate the floating ip address to an instance should get a proper error message * Update state between delete and spawn * adding metadata support for v1.1 * Rebuild improvements * Limit image metadata to the configured metadata quota for a project * Add volume.API.remove\_from\_compute instead of compute.API.remove\_volume * Rebased to trunk rev 925 * Removed adminclient and referred to pypi nova\_adminclient module * fixed review comment for i18n string multiple replacement strings need to use dictionary format * fixed review comment for i18n string multiple replacement strings need to use dictionary format * Add obviously-missing method that prevents an Hyper-V compute node from even starting up * Avoid any hard dependencies in nova.virt.vmwareapi.vim * review cleanup * Handles situation where Connection.\_instances doesn't exist (ie. production) * localize NotImplementedError() * Change '"%s" % e' to 'e' * Fix for LP Bug #745152 * Merged waldon * adding initial v1.1 rebuild action support * Add ed leafe's code for the inject\_file agent plugin method that somehow got lost (fixes bug 741246). Update TimeoutError string for i18n * submitting a unit test for terminate\_instance * Update docstrings and spacing * fixed ordering and spacing * removed trailing whitespace * updated per code review, replaced NotFound with exception.NotFound * Merged Waldon's API code * remove all references to image\_type and change nova-manage upload to set container format more intelligently * Rough implementation of rebuild\_instance in compute manager * adding v1.0 support for rebuild; adding compute api rebuild support * Key type values in ec2\_api off of container format * Whoops * Handle in vim.py * Refixed unit test to check XML ns * Merged with trunk (after faults change to return correct content-type) * OpenStack API faults have been changed to now return the appropriated Content-Type header * More tests that were checking for no-namespace * Some tests actually tested for the lack of a namespace :-) * pep8 fixes * Avoid hard dependencies * Implement quotas for the new v1.1 server metadata controller. Modified the compute API so that metadata is a dict (not an array) to ensure we are using unique key values for metadata. 
This is isn't explicit in the SPECs but it is implied by the new v1.1 spec since PUT requests modify individual items * Add XML namespaces to the OpenStack API * Merged with trunk * Fixed mis-merge: OS API version still has to be v1.1 * Store socket\_info as a dictionary rather than an array * Merged with trunk * Added synchronize\_session parameter to a query in fixed\_ip\_disassociate\_all\_by\_timeout() and fix #735974 * Key was converted through str() even if None, resulting in "None" being added to authorized\_keys when no key was specified * queues properly reconnect if rabbitmq is restarted * Moving server update adminPass support to be v1.0-specific OS API servers update tests actually assert and pass now Enforcing server name being a string of length > 0 * Adding Content-Type code to openstack.api.versions.Versions wsgi.Application * Fixes metadata for ec2\_api to specify owner\_id so that it filters properly * Makes the image decryption code use the per-project private key to decrpyt uploaded images if use\_project\_ca is set. This allows the decryption code to work properly when we are using a different ca per project * exception -> Fault * Merged trunk * Do not push 'None' to authorized\_keys when no key is specified * Add missing method that prevent HyperV compute nodes from starting up * TopicAdapterConsumer uses a different callback model than TopicConsumer. This patch updates the console proxy to use this pattern * merge trunk * Uses the proc filesystem to check the volume size in volume smoketests so that it works with a very limited busybox image * merged trunk * The VNC Proxy is an OpenStack component that allows users of Nova to access their instances through a websocket enabled browser (like Google Chrome) * make sure that flag is there in compute api * fix localization for multiple replacement strings * fix doc to refer to nova-vncproxy * Support for volumes in the OpenStack API * Deepcopy the images, because the string formatting transforms them in-place * name, created\_at, updated\_at are required * Merged with trunk * "Incubator" is no more. Long live "contrib" * Rename MockImageService -> FakeImageService * Removed unused super\_verbose argument left over from previous code * Renamed incubator => contrib * Wipe out the bad docstring on get\_console\_pool\_info * use project key for decrypting images * Fix a docstring * Found a better (?) docstring from get\_console\_pool\_info * Change volume so that it returns attachments in the same format as is used for the attachment object * Removed commented-out EC2 code from volumes.py * adding unit tests for describe\_images * Fix unit test to reflect fact that instance is no longer deleted, just marked SHUTOFF * Narrowly focused bugfix - don't lose libvirt instances on host reboot or if they crash * fix for lp742650 * Added missing blank line at end of multiline docstring * pep8 fixes * Reverted extension loading tweaks * conversion of properties should set owner as owner\_id not owner * add nova-vncproxy to setup.py * clarify test * add line * incorporate feedback from termie * Make dnsmasq\_interface configurable * Stop nova-manage from reporting an error every time. 
Apparently except: catches sys.exit(0) * add comment * switch cast to a call * move functions around * move flags per termie's feedback * initial unit test for describe images * don't print the error message on sys.exit(0) * added blank lines in between functions & removed the test\_describe\_images (was meant for a diff bug lp682888) * Make Dnsmasq\_interface configurable * fix flag names * Now checking that exists at least one network marked injected (libvirt and xenapi) * This branch adds support for linux containers (LXC) to nova. It uses the libvirt LXC driver to start and stop the instance * use manager pattern for auth token proxy * Style fixes * style fix * Glance used to return None when a date field wasn't set, now it returns ''. Glance used to return dates in format "%Y-%m-%dT%H:%M:%S", now it returns "%Y-%m-%dT%H:%M:%S.%f" * Fix up docstring * Added content\_type to OSAPI faults * accidentally dropped a sentence * Added checks that exists at least one network marked inhected in libvirt and xenapi * Adds support for versioned requests on /images through the OpenStack API * Import order * Switch string concat style * adding xml test case * adding code to explicitly set the content-type in versions controller; updating test * Merged trunk * Added VLAN networking support for XenAPI * pep8 * adding server name validation to create method; adding tests * merge lp:nova * use informative error messages * adding more tests; making name checks more robust * merge trunk * Fix pep8 error * Tweaking docstrings just in case * Catch the error that mount might through a bit better * sorted pep8 errors that were introduced during previous fixes * merge trunk * make all openstack status uppercase * Add remove\_volume to compute API * Pass along the nbd flags although we dont support it just yet * cleaned up var name * made changes per code review: 1) removed import of image from objectstore 2) changed to comments instaed of triple quotes * Displays an error message to the user if an exception is raised. This is vital because if logfile is set, the exception shows up in the log and the user has no idea something went wrong * Yet more docstring fixes * More style changes * Merged with trunk * Multi-line comments should end in a blankline * add note per review * More fixes to keep the stylebot happy * Cleaned up images/fake.py, including move to Duplicate exception * Code cleanup to keep the termie-bot happy * displays an error message if a command fails, so that the user knows something went wrong * Fixes volume smoketests to work with ami-tty * address some of termie's recommendations * add period, test github * pep8 * osapi servers update tests actually assert now; enforcing server name being a string of length > 0; moving server update adminPass support to be v1.0-specific * Moving shared\_ip\_groups controller to APIRouterV10 Replacing all shared\_ip\_groups contoller code with HTTPNotImplemented Adding shared\_ip\_groups testing * fix docstrings * Merged trunk * Updated docstrings to satisfy * Updated docstrings to satisfy * merge trunk * merge trunk * minor fix and comment * style fixes * merging trunk * Made param descriptions sphinx compatible * Toss an \_\_init\_\_ in the test extensions dir. 
This gets it included in the tarball * pep8 * Fix up libvirt.xml.template * This fixes EC2 API so that it returns image displayName and description properly * merged from trunk * Moving backup\_schedule route out of base router to OS API v1.0 All controller methods return HTTPNotImplemented to prevent further confusion Correcting tests that referred to incorrect url * Fixed superfluous parentheses around locals() * Added image name and description mapping to ec2 api * use self.flags in virt test * Fixed DescribeUser in the ec2 admin client to return None instead of an empty UserInfo object * Remove now useless try/except block * Dont make the test fail * backup\_schedule tests corrected; controller moved to APIRouterV10; making controller fully HTTPNotImplemented * when image\_id provided cannot be found, returns more informative error message * Adds support for snapshotting (to a new image) in the libvirt code * merge lp:nova * More pep8 corrections * adding shared\_ip\_groups testing; replacing all shared\_ip\_groups contoller code with HTTPNotImplemented; moving shared\_ip\_groups controller to APIRouterV10 * Merged trunk * pep8 whitespace * Add more unit tests for lxc * Decided to not break old format so this should work with the way Glance used to work and the way glace works now..The best of both worlds? * update glance params per review * add snapshot support for libvirt * HACKING update for docstrings * merge trunk * Fix libvirt merge mistake * lock down requirements for change password * merge trunk * Changed TopicConsumer to TopicAdapterConsumer in bin/nova-ajax-console-proxy to allow it to start up once again * style changes * Removed iso8601 dep from pip-requires * Merged trunk * Removed extra dependency as per suggestion, although it fixes the issue much better IMO, we should be safe sticking with using the format from python's isoformat() * Assume that if we don't find a VM for an instance in the DB, and the DB state is NOSTATE, that the db instance is in the process of being spawned, and don't mark it SHUTOFF * merge with trunk * Added MUCH more flexiable iso8601 parser dep for added stability * Fix formatting of TODO and NOTE - should be a space after the # * merge lp:nova * Mixins for tests confuse pylint no end, and aren't necessary... you can stop the base-class from being run as a test by prefixing the class name with an underscore * Merged the two periodic\_tasks functions, that snuck in due to parallel merges in compute.manager * Start up nova-api service on an unused port if 0 is specified. Fixes bug 744150 * Removed 'is not None' to do more general truth-checking. 
Added rather verbose testing * Merged with trunk * merge trunk * merge trunk, fixed conflicts * TopicConsumer -> TopicAdapterConsumer * Fix typo in libvirt xml template * Spell "warn" correctly * Updated Authors file * Removed extraneous white space * Add friendlier message if an extension fails to include a correctly named class or factory * addressed reviewers' concerns * addressed termies review (third round) * addressed termie's review (second round) * Do not load extensions that start with a "\_" * addressed termies review (first round) * Clarified note about scope of the \_poll\_instance\_states function * Fixed some format strings * pep8 fixes * Assume that if we don't find a VM for an instance in the DB, and the DB state is NOSTATE, that the db instance is in the process of being spawned * pep8 fixes * Added poll\_rescued\_instances to virt driver base class * There were two periodic\_tasks functions, due to parallel merges in compute.manager * pep8 fixes * Bunch of style fixes * Fix utils checking * use\_ipv6 now passing to interfaces.template as first level variable in libvirt\_conn * Replaced import of an object with module import as per suggestion * Updates to the newest version of nova.sh, which includes:  \* Installing new python dependencies  \* Allows for use of interfaces other than eth0  \* Adds a run\_detached mode for automated testing * Now that it's an extension, it has to be v1.1. Also fixed up all the things that changed in v1.1 * merge trunk addressing Trey's comments * Initial extensification of volumes * Merged with trunk, resolved conflicts & code-flicts * Removed print * added a simple test for describe\_images with mock for detail funciton * merged trunk * merge trunk * merge lp:nova * Adding links container to openstack api v1.1 servers entities * Merged trunk * Add license and copyright to nova/tests/api/openstack/extensions/\_\_init\_\_.py * Fixed a typo on line 677 where there was no space between % and FLAGS * fix typos * updated nova.sh * Added a flag to allow a user to specify a dnsmasq\_config\_file is they would like to fine tune the dnsmasq settings * disk\_format is now an ImageService property. Adds tests to prevent regression * Merged trunk * Merged trunk * merging trunk * merge trunk * Merged trunk and fixed broken/conflicted tests * - add a "links" container to versions entities for Openstack API v1.1 - add testing for the openstack api versions resource and create a view builder * merging trunk * This is basic network injection for XenServer, and includes: * merging trunk * Implement image metadata controller for the v1.1 OS API * merging trunk * Changed use\_ipv6 passing to interfaces.template * merging trunk, resolving conflicts * Add a "links" container to flavors entities for Openstack API v1.1 * Toss an \_\_init\_\_ in the test extensions dir. This gets it included in the tarball * Use metadata = image.get('properties', {}) * merge trunk * Revert dom check * merge trunk * Fix unit tests w/ latest trunk merge * merging trunk and resolving conflicts * Fix up destroy container * Fix up templating * Implement metadata resource for Openstack API v1.1. 
Includes: -GET /servers/id/meta -POST /servers/id/meta -GET /servers/id/meta/key -PUT /servers/id/meta/key -DELETE /servers/id/meta/key * Dont always assume qemu * Removed partition from setup\_container * pep8 fix * disk\_format is now an ImageService property * Restore volume state on migration failure * merge trunk, add unit test * merge trunk * merge trunk addressing reviewer's comments * clarify comment * add documentation * Empty commit? * minor pep8 fix in db/fakes.py * Support for markers for pagination as defined in the 1.1 spec * add hook for osapi * merge trunk * Ports the Tornado version of an S3 server to eventlet and wsgi, first step in deprecating the twistd-based objectstore * Merged with trunk Updated net injection for xenapi reflecting recent changes for libvirt * Fix lp741415 by splitting arguments of \_execute in the iSCSI driver * make everything work with trunk again * Support for markers for pagination as defined in the 1.1 spec * add descriptive docstring * don't require integrated tests to recycle connections * remove twisted objectstore * port the objectstore tests to the new tests * update test base class to monkey patch wsgi * rename objectstore tests * port s3server to eventlet/wsgi * add s3server, pre-modifications * merge trunk * Added detail keywork and i18n as per suggestions * incorporate feedback from termie * Implementation of blueprint hypervisor-vmware-vsphere-support. (Link to blueprint: https://blueprints.launchpad.net/nova/+spec/hypervisor-vmware-vsphere-support) * fix typo * Addressing Trey's comments. Removed disk\_get\_injectables, using \_get\_network\_info's return value * Adds serverId to OpenStack API image detail per related\_image blueprint * Fix for bug #740947 Executing parted with sudo in \_write\_partition (vm\_utils.py) * Implement API extensions for the Openstack API. Based on the Openstack 1.1 API the following types of extensions are supported: * Merging trunk * Adds unit test coverage for XenAPI Rescue & Unrescue * libvirt driver multi\_nic support. In this phase libvirt can work with and without multi\_nic support, as in multi\_nic support for xenapi: https://code.launchpad.net/~tr3buchet/nova/xs\_multi\_nic/+merge/53458 * Merging trunk * Review feedback * Merged trunk * Additions to the Direct API: * Merged trunk * Added test\_get\_servers\_with\_bad\_limit, test\_get\_servers\_with\_bad\_offset and test\_get\_servers\_with\_bad\_marker * pep8 cleanups * Added test\_get\_servers\_with\_limit\_and\_marker to test pagination with marker and limit request params * style and spacing fixed * better error handling and serialization * add some more docs and make it more obvious which parts are examples * add an example of a versioned api * add some more docs to direct.py * add Limited, an API limiting/versioning wrapper * improve the formatting of the stack tool * support volume and network in the direct api * Merged with trunk, fix problem with behaviour of (fake) virt driver when instance doesn't reach scheduling * In this branch we are forwarding incoming requests to child zones when the requested resource is not found in the current zone * trunk merge * Fixes a bug that was causing tests to fail on OS X by ensuring that greenthread sleep is called during retry loops * Merged trunk * Fix some errors that pylint found in nova/api/openstack/servers.py * Fix api logging to show proper path and controller:action * Merged trunk * Pylint 'Undefined variable' E0602 error fixes * Made service\_get\_all()'s disabled parameter default to None. 
Pass False for enabled services; True for disabled services. Calls to this method have been updated to remain consistent * Merged with trunk * Reconcile tests with latest trunk merges * Merged trunk and resolved conflict in nova/db/sqlalchemy/api.py * Don't try to parse the empty string as a datetime * change names for consistency with existing db api * Merged with trunk * Forgot one set of flags * Paginated results should not include the item starting at marker. Improved implementation of common.limited\_by\_marker as suggested by Matt Dietz. Added flag osapi\_max\_limit * Detect if user is running the default Lucid version of libvirt, and give a nicer error message * Updated to use new APIRouterV11 class in tests * Fix lp741514 by declaring libvirt\_type in nova-manage * Docstring fixes * get image metadata tests working after the datetime interface change in image services * adding versioned controllers * Addressed issues raised by Rick Harris' review * Stubbing out utils.execute for migrate tests * Aggregates capabilities from Compute, Network, Volume to the ZoneManager in Scheduler * merged trunk r864 * removing old Versions application and correcting fakes to use new controller * Renamed \_\_image and \_\_compute to better describe their purposes. Use os.path.join to create href as per suggestion. Added base get\_builder as per pychecker suggestion * merging trunk r864 * trunk merged. conflicts resolved * Merged trunk * merge trunk * merge trunk * Small refactor * Merged trunk and fixed tests * Couple of pep8 fixes * pep8 clearing * making servers.generate\_href more robust * merging trunk r863 * Fixes lp740322: cannot run test\_localization in isolation * couple of bugs fixed * Merged trunk * Dont use popen in dettaching the lxc loop * Fix up formatting of libvirt.xml.template * trunk merge * fix based on sirp's comments * Grrr... because we're not recycling the API yet, we have to configure flags the first time it's called * merge trunk * Fake out network service as well, otherwise we can't terminate the instance in test\_servers now that we've started a compute service * merge trunk * Sorted out a problem occurred with units tests for VM migration * pep8 fixes * Test for attach / detach (and associated fixes) * Pass a fake timing source to live\_migration\_pre in every test that expectes it to fail, shaving off a whole minute of test run time * merge trunk * Poll instance states periodically, so that we can detect when something changes 'behind the scenes' * Merged with conflict and resolved conflict (with my own patch, no less) * Added simple nova volume tests * Created simple test case for server creation, so that we can have something to attach to.. * Merged with trunk * Added volume\_attachments * Declare libvirt\_type to avoid AttributeError in live\_migration * minor tweak from termie feedback * Added a mechanism for versioned controllers for openstack api versions 1.0/1.1. 
Create servers in the 1.1 api now supports imageRef/flavorRef instead of imageId/flavorId * Fixed the docstring for common.get\_id\_from\_href * better logging of exceptions * Merged trunk * Merged trunk * Fix issues with certificate updating & whitespace removal * Offers the ability to run a periodic\_task that sweeps through rescued instances older than 24 hours and forcibly unrescues them * Merged trunk * Added hyperv stub * Don't try to parse a datetime if it is the empty string (or None) * Remove a blank line * pep8 fix * Split arguments of \_execute in the iSCSI driver * merge trunk * Added revert\_resize to base class * Addressing Rick Clark's comments * Merged with lp:nova, fixed conflicts * boto\_v6 module is imported if the flag "use\_ipv6" is set to True * pep8 fixes, backported some important fixes that didn't make it over from my testing system :-( * Move all types of locking into utils.synchronize decorator * Doh! Missed two places which were importing the old driver location * Review feedback * make missing noVNC error condition a bit more fool-proof * clean some pep8 issues * general cleanup, use whitelist for webserver security * Better method name * small fix * Added docstring * Updates the previously merged xs\_migration functionality to allow upsizing of the RAM and disk quotas for a XenServer instance * Fix lp735636 by standardizing the format of image timestamp properties as datetime objects * migration gateway\_v6 to network\_info * merge prop fixes * Should not call super \_\_init\_\_ twice in APIRouter * fix utils.execute retries for osx * Keep the fallback code - we may want to do better version checking in future * Give the user a nicer error message if they're using the Lucid libvirt * Only run periodic task when rescue\_timeout is greater than 0 * Fixed some typos * Forgot extraneous module import again * Merged trunk * Forgot extraneous module import * Automatically unrescue instances after a given timeout * trunk merge * indenting cleanup * fixing some dictionary get calls * Unit test cleanup * one more minor fix * Moving the migration yet again * xml template fixed * merge prop changes * pep8 fixed * trunk merged * added myself to authors file * Using super to call parent \_setup\_routes in APIRouter subclasses * Merged trunk * pep8 fix * Implement v1.1 image metadata * This branch contains the fix for bug #740929 It makes sure cidr\_v6 is not null before building the 'ip6s' key in the network info dictionary. This way utils.to\_global\_ipv6 does not fail because of cidr==None * review comments fixed * add changePassword action to os api v1.1 * Testing of XML and JSON for show(), and conformance to API spec for JSON * Fixed tests * Merged trunk * Removed some un-needed code, and started adding tests for show(), which I forgot\! * id -> instance\_id * Checking whether cidr\_v6 is not null before populating ipv6 key in network info map (VMOps.\_get\_network\_info) * Executing parted with sudo in \_write\_partition * We update update\_ra method to synchronize, in order to prevent crash when we request multiple instance at once * merged with trunk Updated xenapi network injection for IPv6 Updated unit tests * merge trunk * merge trunk * removed excess debug line * more progress * use the nova Server object * separating out components of vnc console * Earlier versions of the python libvirt binding had getVersion in the libvirt namespace, not on the connection object. 
Check both * Report the exception (happens when can't import libvirt) * Use subset\_dict * Removing dead code * Touching up comment * Merging trunk * Pep8 fixes * Adding tests for owned and non-existent images * More small cleanups * Fix for #740742 - format describe\_instance\_output correctly to prevent errors in dashboard * Cleaning up make\_image\_fixutres * Merged with lp:nova * Small cleanup of openstack/images.py * Fixed up the new location of driver.py * Fix for lp740742 - format describe\_instance\_output correctly to prevent errors in dashboard * Merged with lp:nova * Filtering images by user\_id now * Clarified my "Yuk" comment * Cleaned up comment about virsh domain.info() return format * Added space in between # and TODO in #TODO * Added note about the advantages of using a type vs using a set of global constants * Filled out the base-driver contract, so it's not a false-promise * Enable flat manager support for ipv6 * Adding a talk bubble to the nova.openstack.org site that points readers to the 2011.1 site and the docs.openstack.org site - similar to the swift.openstack.org site. I believe it helps people see more sites are available, plus they can get to the Bexar site if they want to. Going forward it'll be nice to use this talk bubble to point people to the trunk site from released sites * Correctly imports greenthread in libvirt\_conn.py. It is used by live\_migrate() * Forgot this in the rename of check\_instance -> check\_isinstance * Test the login behavior of the OpenStack API. Uncovered bug732866 * trunk merge * Renamed check\_instance -> check\_isinstance to make intent clearer * Fix some crypto strangeness (\n in file\_name field of certificates, wrong IMPL method for certificate\_update) * Added note agreeing with Brian Lamar that the namespace doesn't belong in wsgi * Fix to avoid db migration failure in virtualenv * Fixed up unit tests and direct api that was also calling \_serialize (naughty!) * Fix the describe\_vpns admin api call * pep8 and fixed up zone-list * Support setting the xmlns intelligently * get\_all cleanup * Refactored out \_safe\_translate code * Set XML namespace when returning XML * Fix for LP Bug #704300 * Fix a typo in the ec2 admin api * typo fix * Pep8 fix * Merging trunk * make executable * Adding BASE\_IMAGE\_ATTRS to ImageService * intermediate progress on vnc-nova integration. checking in to show vish * add in eventlet version of vnc proxy * Updating doc strings in accordance with PEP 257. Fixing order of imports in common.py * one more copyright fix * pep8 stupidness * Tweak * fixing copyright * tweak * tweak * Whoops * Changed default for disabled on service\_get\_all to None. Changed calls to service\_get\_all so that the results should still be as they previously were * Now using urlparse to parse a url to grab id out of it * Resolved conflicts * Fix * Remove unused global semaphore * Addressed reviewer's comments * pep8 fix * Apparantly a more common problem than first thought * Adding more docstrings. image\_id and instance\_type fields of an instance will always exist, so no reason to check if keys exist * Pass a fake timing source to test\_ensure\_filtering\_rules\_for\_instance\_timeout, shaving off 30 seconds of test run time * pep8 * Merged trunk * Add a test for leaked semaphores * Remove checks in \_cache\_image tests that were too implementation specific * adding view builder tests * Add correct bug fixing metadata * When updating or creating set 'delete = 0'. 
(thus reactivating a deleted row) Filter by 'deleted' on delete * merging trunk r843 * making Controller.\_get\_flavors is\_detail a keyword argument * merging trunk r843 * Fix locking problem in security group refresh code * merging trunk r843 * Add unit test and code updates to ensure that a PUT requests to create/update server metadata only contain a single key * Add call to unset all stubs * IptablesManager.semaphore is no more * Get rid of IptablesManager's explicit semaphore * Add --fixes lp: metadata * Convert \_cache\_image to use utils.synchronized decorator. Disable its test case, since I think it is no longer needed with the tests for synchronized * Make synchronized decorator not leak semaphores, at the expense of not being truly thread safe (but safe enough for Eventlet style green threads) * merge trunk * Wrap update\_ra in utils.synchronized * Make synchronized support both external (file based) locks as well as internal (semaphore based) locks. Attempt to make it native thread safe at the expense of never cleaning up semaphores * merge with trunk * vpn changes * added zone routing flag test * routing test coverage * routing test coverage * xenapi support for multi\_nic. This is a phase of multi\_nic which allows xenapi to work as is and with multi\_nic. The other virt driver(s) need to be updated with the same support * better comments. First redirect test * better comments. First redirect test * Remove \_get\_vm\_opaque\_ref() calls in rescue/unrescue * Remove dupe'd code * Wrap update\_dhcp in utils.synchronized * if fingerprint data not provided, added logic to calculate it using the pub key * get rid of another datetime alias * import greenthread in libvirt * merge lp:nova * make bcwaldon happy * fix licenses * added licenses * wrap and log errors getting image ids from local image store * merge lp:nova * merging trunk * Fix for LP Bug #739641 * pep8; various fixes * Provide more useful exception messages when unable to load the virtual driver * Added Gabe to Authors file. He helped code this up too * Added XenAPI rescue unit tests * added an enumerate to track device in vmops.create\_vifs() * pep8 * Openstack api 1.0 flavors resource now implemented to match the spec * more robust extraction of arguments * Updated comment per the extension naming convention we actually use * Added copyright header * Fix pep8 issues in nova/api/openstack/extensions.py * Fix limit unit tests (reconciles w/ trunk changes) * Changed fixed\_range (CIDR) to be required in the nova-manage command; changed default num\_networks to 1 * merging trunk r837 * zones3 and trunk merge * Added space * trunk merge * remove scheduler.api.API. 
naming changes * Changed error to TypeError so that we get the arguments list * Added my name to Authors Added I18n for network create string * merge with trunk * merge trunk * merge trunk * merge trunk * Add bug metadata * Wrap update\_dhcp in utils.synchronized * fixes nova-manage instance\_type compatibility with postgres db * Tell PyLint not to complain about the "\_" function * Make smoketests' exit code reveal whether they were succesful * pep8 * Added run\_instances method to the connection.py of the contrib/boto\_v6/ec2 which would return ReservationV6 object instead of Reservation in order to access attribute dns\_name\_v6 of an instance * cleanup another inconsistent use of 1 for True in nova-manage * Changed Copyright to NTT for newly added files for flatmanager ipv6 * merge trunk * \* committing ovs scripts * fix nova-manage instance\_type list for postgres compatibility * fixed migration instance\_types migration to support postgres correctly * comment more descriptive * Seriously? * Fixed netadmin smoketests for ipv6 * Merged trunk * Better errors when virt driver isn't loaded * merge lp:nova * fix date formatting in images controller show * huh * fix ups * merge trunk * uses True/False instead of 1/0 for Postgres compatibility * cleaned up tests stubs that were accidentally checked in * works again. woo hoo * created api endpoint to allow uploading of public key * api decorator * Cleanup of FakeAuthManager * Replaced all pylint "disable-msg=" with "disable=" and "enable-msg=" with "enable=" * Change cloud.id\_to\_ec2\_id to ec2utils.id\_to\_ec2\_id. Fixes EC2 API error handling when invalid instances and volume names are specified * A few more single-letter variable names bite the dust * Re-implementation (or just implementation in many cases) of Limits in the OpenStack API. Limits is now available through /limits and the concept of a limit has been extended to include arbitrary regex / http verb combinations along with correct XML/JSON serialization. Tests included * Avoid single-letter variable names * auth\_data is a list now (thanks Rick!) * merge with trunk * Mark instance metadata as deleted when we delete the instance * results * fixed up novaclient usage to include managers * Added test case * Minor fixes to replace occurances of "VI" by "VIM" in 2 comments * whoopsy2 * whoopsy * Fixed 'Undefined variable' errors generated by pylint (E0602) * Merged trunk * Change cloud.id\_to\_ec2\_id to ec2utils.id\_to\_ec2\_id. Fixes EC2 API error handling when invalid instances and volume names are specified * enable-msg -> enable * disable-msg -> disable * enable\_zone\_routing flag * PEP-8 * Make flag parsing work again * Using eventlets greenthreads for optimized image processing. Fixed minor issues and style related nits * Fixed issue arisen from recent feature update (utils.execute) * Make proxy.sh work with both openbsd and traditional variants of netcat * Query the size of the block device, not the size of the filesystem * merge trunk * Ensuring kernel/ramdisk files are always removed in case of failures * merge trunk * merge trunk * Implement metadata resource for Openstack API v1.1. Includes: -GET /servers/id/meta -POST /servers/id/meta -GET /servers/id/meta/key -PUT /servers/id/meta/key -DELETE /servers/id/meta/key * Make "ApiError" the default error code for ApiError instances, rather than "Unknown." 
* When changing the project manager, if the new manager is not yet a project member, be sure to make them be a project member * Make the rpc cast/call debug calls show what topic they are sending to. This aides in debuugging * Final touches and bug/pep8 fixes * Support for markers for pagination as defined in the 1.1 spec * Merged trunk * Become compatible with ironcamel and bcwaldon's implementations for standardness * pep8 * Merged dependant branch lp:~rackspace-titan/nova/openstack-api-versioned-controllers * Updated naming, removed some prints, and removed some invalid tests * adding servers container to openstack api v1.1 servers entities * decorator more generic now * Images now v1.1 supported...mostly * fixed up bzr mess * Fix for LP Bug #737240 * refactored out middleware, now it's a decorator on service.api * Fix for LP Bug #737240 * Add topic name to cast/call logs * Changing project manager should make sure that user is a project member * Invert some of the original logic and fix a typo * Make the smoketests pep8 compliant (they weren't when I started working on them..) * Update the Openstack API to handle case where personality is set but null in the request to create a server * Fix a couple of things that assume that libvirt == kvm/qemu * Made fixed\_range a required parameter for nova-manage network create. Changed default num\_networks to 1; 1000 seems large * Fix a number of place in the volume driver where the argv hadn't been fully split * fix for lp712982, and likely a variety of other dashboard error handling issues. This fix simply causes the default error code for ApiError to be 'ApiError' rather than 'Unknown', which makes dashboard handle the error gracefully, and makes euca error output slightly prettier * Fix mis-merge * pep8 is hard * syntax error * create vifs before inject network info to remove rxtx\_cap from network info (don't need to inject it) * Make utils.execute not overwrite std{in,out,err} args to Popen on retries. Make utils.execute reject unknown kwargs * merged trunk, merged qos, slight refactor regarding merges * - general approach for openstack api versioning - openstack api version now preserved in request context - added view builder classes to handle os api responses - added imageRef and flavorRef to os api v1.1 servers - modified addresses container structure in os api v1.1 servers * Pep8 * Test changes * pep8 * Adjust test cases * pep8 * merge * Mark instance metadata as deleted when we delete the instance * Backfix of bugfix of issue blocking creating servers with metadata * Better comment for fault. Improved readability of two small sections * Add support for network QoS (ratelimiting) for XenServer. Rate is pulled from the flavor (instance\_type) when constructing a vm * pep8 * I suck at merging * Now returns a 400 for a create server request with invalid hrefs for imageRef/flavorRef values. 
Also added tests * moving Versions app out of \_\_init\_\_.py into its own module; adding openstack versions tests; adding links to version entities * fixed code formatting nit * handle create and update requests, and update the base image service documentation to reflect the (defacto) behavior * Move the check for None personalities into the create method * Get the migration out * get api openstack test\_images working * merge trunk * Improved exception handling * better implementation of try..except..else * merging parent branch lp:~bcwaldon/nova/osapi-flavors-1\_1 * merging parent branch lp:~rackspace-titan/nova/openstack-api-version-split * iptables filter firewall changes merged * merged trunk * pep8 * adding serialization\_metadata to encode links on flavors * merge with libvirt\_multinic\_nova * pep8 * teach glance image server get to handle timestamps * merge trunk * merge trunk * fixes for NWFilterFirewall and net injection * moving code out of try/except that would never trigger NotFound * handle timestamps in glance service detail * fixed IpTablesFirewal * Fixes lp736343 - Incorrect mapping of instance type id to flavor id in Openstack API * Comparisons to None should not use == or != * Pep8 error, oddly specific to pep8 v0.5 < x > v0.6 * Remove unconditional raise, probably left over from debugging * Mapping the resize status * Mapping the resize status * Fixed pep8 violation * adding comments; removing returns from build\_extra; removing unnecessary backslash * refactor to simpler implementation * Foo * glance image service show testcases * oh come on * refactoring * Add tests and code to handle multiple ResponseExtension objects * Just use 'if foo' instead of 'if len(foo)'. It will fail as spectacularly if its not acting on a sequence anyways * bugfix * Remove unconditional raise, probably left over from debugging * No need to modify this test case function as well * refactored: network\_info creation extracted to method * Call \_create\_personality\_request\_dict within the personalities\_null test * Foo * more pep8 fixes * Switch back to 'is not None' for personality\_files check. (makes mark happy) * pep8 fixes * 1) Update few comments where whitespace is missing after '#' 2) Update document so that copy right notice doesn't appear in generated document 3) Now using self.flag(...) instead of setting the flags like FLAGS.vmwareapi\_username by direct assignment. 4) Added the missing double quote at the end a string in vim\_util.py * more pep8 fixes * Fix up tests * Replaced capability flags with List * Fix more pep8 errors * Remove me from mailmap * Fix up setup container * Merged trunk * Update the Openstack API to handle case where personality is set but null in the request to create a server * Make smoketests' exit code reveal whether they were succesful * merge with trunk. moved scheduler\_manager into manager. 
fixed tests * Set nbd to false when mounting the image * Fixed typo when I was trying to add test cases for lxc * Remove target\_partition for setup\_container but still hardcode because its needed when you inject the keys into the image * Remove nbd=FLAGS.use\_cow\_images for destroy container * Update mailmap * Fix a number of place in the volume driver where the argv hadn't been fully split * Fix pep8 errors * Update authors again * Improved exception handling: - catching appropriate errors (OSError, IOError, XenAPI.Failure) - reduced size of try blocks - moved exception handling code in separate method - verifing for appropriate exeception type in unit tests * get\_console\_output is not supported by lxc and libvirt * Update Authors and testsuite * Comparisons to None should not use == or != * Make error message match the check * Setting the api verion in the request in the auth middle is no longer needed. Also, common.get\_api\_version is no longer needed. As Eric Day noted, having versioned controllers will make that unnecessary * moving code out of try/except that would never trigger NotFound * Added mechanism for versioned controllers for openstack api versions 1.0/1.1. Create servers in the 1.1 api now supports imageRef/flavorRef instead of imageId/flavorId * fix up copyright * removed dead method * pep8 * pep8 * Remerge trunk * cleanup * added in network qos support for xenserver. Pull qos settings from flavor, use when creating instance * moved scheduler API check into db.api decorator * Add basic tests for lxc containers * Revert testsuite changes * MErge trunk * Fix a few of the more obvious non-errors while we're in here * hacks in place * Fix the errors that pylint was reporting on this file * foo * foo * commit before monster * Fix \_\_init\_\_ method on unit tests (they take a method\_name kwarg) * Don't warn about C0111 (No docstrings) * In order to disable the messages, we have to use disable, not disable-msg * Avoid mixins on image tests, keeping pylint much happier * Use \_ trick to hide base test class, thereby avoiding mixins and helping PyLint * hurr * hurr * get started testing * foo * Don't complain about the \_ function being used * Again * pep8 * converted new lines from CRLF to LF * adding bookmarks links to 1.1 flavor entities * Reverting * Log the use of utils.synchronized * expanding osapi flavors tests; rewriting flavors resource with view builders; adding 1.1 specific links to flavors resources * Dumb * Unit test update * Fix lp727225 by adding support for personality files to the openstack api * Changes * fixes bug 735298: start of nova-compute not possible because of wrong xml paths to the //host/cpu section in "virsh capabilities", used in nova/virt/libvirt\_conn.py * update image service documentation * merge lp:nova and resolve conflicts * User ids are strings, and are not necessarily == name. Also fix so that non-existent user gives a 404, not a 500 * Fudge * Keypairs are not required in the OpenStack API; don't require them! 
* Merging trunk * Add missing fallback chain for ipv6 * Typo fix * fixed pep8 issue * chchchchchanges * libvirt template and libvirt\_conn.spawn modified in way that was proposed for xenapi multinic support * Re-commit r805 * Re-commit r804 * Refactored ZoneRedirect into ZoneChildHelper so ZoneManager can use this too * Don't generate insecure passwords where it's easy to use urandom instead * merging openstack-api-version-split * chchchchchanges * chchchchchanges * Fixes euca-get-ajax-console returning Unknown Error, by using the correct exception in get\_open\_port() logic. Patch from Tushar Patil * chchchchchanges * Revert commit that modified CA/openssl.cnf.tmpl * Comment update * Derped again * Move mapper code into the \_action\_ext\_controllers and \_response\_ext\_controllers methods * The geebees * forgot to return network info - teehee * refactored, bugfixes * merge trunk * moving code out of try/except that would never trigger NotFound * merge trunk * Logging statements * added new class Instances for managaging instances added new method list in class Instances: * tweak * Stuff * Removing io\_util.py. We now use eventlets library instead * Some typos * \* Updated document vmware\_readme.rst to mention VLAN networking \* Corrected docstrings as per pep0257 recommentations. \* Stream-lined the comments. \* Updated code with locals() where ever applicable. \* VIM : It stands for VMware Virtual Infrastructure Methodology. We have used the terminology from VMware. we have added a question in FAQ inside vmware\_readme.rst in doc/source \* New fake db: vmwareapi fake module uses a different set of fields and hence the structures required are different. Ex: bridge : 'xenbr0' does not hold good for VMware environment and bridge : 'vmnic0' is used instead. Also return values varies, hence went for implementing separate fake db. \* Now using eventlet library instead and removed io\_utils.py from branch. \* Now using glance.client.Client instead of homegrown code to talk to Glance server to handle images. \* Corrected all mis-spelled function names and corresponding calls. Yeah, an auto-complete side-effect! 
* Implement top level extensions * Added i18n to error message * Checks locally before routing * Really fix testcase * More execvp fallout * Fix up testsuite for lxc * Error codes handled properly now * merge trunk * Adding unit test * Fix instance creation fail under use\_ipv6=false and FlatManager * pep8 clean * Fix a couple of things that assume that libvirt == kvm/qemu * Updating gateway\_v6 in \_on\_set\_network\_host() is not required for FlatManager * added correct path to cpu information (tested on a system with 1 installed cpu package) * Fix unknown exception error in euca-get-ajax-console * fixed pep8 errors (with version 0.5.0) * Use integer ids for (fake) users * req envirom param 'nova.api.openstack.version' should be 'api.version' * pep8 fixes * Fixed DescribeUser in ec2 admin client * openstack api 1.0 flavors resource now implemented; adding flavors request value testing * response working * Added tests back for RateLimitingMiddleware which now throw correctly serialized errors with correct error codes * Add ResponseExtensions * revised per code review * first pass openstack redirect working * Adding newlines for pep8 * Removed VIM specific stuff and changed copyright from 2010 to 2011 * Limits controller and testing with XML and JSON serialization * adding imageRef and flavorRef attributes to servers serialization metadata * Merged with trunk (and brian's previous fixes to fake auth) * Plugin * As suggested by Eric Day: \* changed request.environ version key to more descriptive 'api.version' \* removed python3 string formatting \* added licenses to headers on new files * Tweak * A few fixes * pep8 * merge lp:nova * ignore differently-named nodes in personality and metadata parsing * wrap errors getting image ids from local image store * Moving the migration again * Updating paste config * pep8 * internationalization * Per Eric Day's suggest, the verson is not store in the request environ instead of the nova.context * s/onset\_files/injected\_files/g * pep8 fixes * Add logging to lock check * Now that the fix for 732866, stop working around the bug * Major cosmetic changes to limits, but little-to-no functional changes. MUCH better testability now, no more relying on system time to tick by for limit testing * Merged with trunk to get fix for bug 732866 * Merged trunk * modifying paste config to support v1.1; adding v1.1 entry in versions resource ( GET /) * Fixed lp732866 by catching relevant \`exception.NotFound\` exception. Tests did not uncover this vulnerability due to "incorrect" FakeAuthManager. I say "incorrect" because potentially different implementations (LDAP or Database driven) of AuthManager might return different errors from \`get\_user\_from\_access\_key\` * refactor onset\_files quota checking * Code clean up. Removing \_decorate\_response methods. Replaced them with more explicit methods, \_build\_image, and \_build\_flavor * Use random.SystemRandom for easy secure randoms, configurable symbol set by default including mixed-case * merge lp:nova * Support testing the OpenStack API without key\_pairs * merge trunk * Fixed bugs in bug fix (plugin call) * adding missing view modules; modifying a couple of servers tests to use enumerate * just fixing a small typo in nova-manage vm live-migration * exception fixup * Make Authors check account for tests being run with different os.getcwd() depending on how they're run. 
Add missing people to Authors * Removed duplicated tests * PEP8 0.5.0 cleanup * Really delete the loop * Add comments about the destroy container function * Mount the right device * Merged trunk * Always put the ipv6 fallback in place. FLAGS.use\_ipv6 does not exist yet when the firewall driver is instantiated and the iptables manager takes care not to fiddle with ipv6 if not enabled * merged with trunk and removed conflicts * Merging trunk * Reapplied rename to another file * serverId returned as int per spec * Reapplied rename of Openstack -> OpenStack. Easier to do it by hand than to ask Bazaar to do it * Merged with trunk. Had to hold bazaar's hand as it got lost again * Derive unit test from standard nova.test.TestCase * pep8 fixes * adding flavors and images barebones view code; adding flavorRef and imageRef to v1.1 servers * Fixed problem with metadata creation (backported fix) * Clarify the logic in using 32 symbols * moving addresses views to new module; removing 'Data' from 'DataViewBuilder' * Don't generate insecure passwords where it's easy to use urandom instead * Added a views package and a views.servers module. For representing the response object before it is serialized * Make key\_pair optional with OpenStack API * Moved extended resource code into the extensions.py module * Moving fixtures to a factory * Refactor setup contianer/destroy container * Fixing API per spec, to get unit-tests to pass * Implements basic OpenStack API client, ready to support API tests * Fix capitalization of ApiError (it was mistakenly called APIError) * added migration to repo * Clarified message when a VM is not running but still in DB * Implemented Hyper-V list\_instances\_detail function. Needs a cleanup by someone that knows the Hyper-V code * So the first of those tests doesn't pass. Removing as it looks like it was meant to be deleted * Added test and fixed up code so that it works * Fix for LP Bug #704300 * fixed keyword arg error * pep8 * added structure to virt.xenapi.vmops to support network info being passed in * Removed duplicated test, renamed same-named (but non-identical) tests * merge trunk * PEP8 cleanup * Fixes other half of LP#733609 * Initial implementation of refresh instance states * Add missing fallback chain for ipv6 * The exception is called "ApiError", not "APIError" * Implement action extensions * Include cpuinfo.xml.template in tarball * Adding instance\_id as Glance image\_property * Add fixes metadata * Include cpuinfo.xml.template in tarball * Merged test\_network.py properly. Before I had deleted this file and added again, but this file status should be modified when you see the merged difference * removed conflicts and merged with trunk * Create v1\_0 and v1\_1 packages for the openstack api. Added a servers module to each. 
Added tests to validate the structure of ip addresses for a 1.1 request * committing to share * small typo in nova-manage vm live-migration * NTT's live-migration branch, merged with trunk, conflicts resolved, and migrate file renamed * Reverted unmodified files * Reverted unmodified files * Only include kernel and ramdisk ID in meta-data output if they are actually set * Test fixes and some typos * Test changes * Migration moved again * Compute test * merge trunk * merge trunk * Make nova-dhcpbridge output lease information in dnsmasq's leasesfile format * Merged my doc changes with trunk * Fixed pep8 errors * Fixed failing tests in test\_xenapi * Fixes link to 2011.1 instad of just to trunk docs * fixes: 733137 * Add a unit test * Make utils.execute not overwrite std{in,out,err} args to Popen on retries. Make utils.execute reject unknown kwargs * Removed excess LOG.debug line * merge trunk * The extension name is constructed from the camel cased module\_name + 'Extension' * Merged with trunk * Fix instructions for setting up the initial database * Fix instructions for setting up the initial database * merged with latest trunk and removed unwanted files * Removed \_translate\_keys() functions since it is no longer used. Moved private top level functions to bottom of module * Use a consistent naming scheme for XenAPI variables * oops * Review feedback * Review feedback * Review feedback * Some unit tests * Change capitalization of Openstack to OpenStack * fixed conflicts after merging with trunk with 787 * Adding a sidebar element to the nova.openstack.org site to point people to additional versions of the site * oops * Review feedback * Replace raw SQL calls through session.execute() with SQLAlchemy code * Review feedback * Remove vish comment * Remove race condition when refreshing security groups and destroying instances at the same time * Removed EOL whitespace in accordance with PEP-8 * Beginning of cleanup of FakeAuthManager * Make the fallback value None instead of False * Indentation adjustment (cosmetical) * Fixed lp732866 by catching relevant \`exception.NotFound\` exception. Tests did not uncover this vulnerability due to "incorrect" FakeAuthManager. I say "incorrect" because potentially different implementations (LDAP or Database driven) of AuthManager might return different errors from \`get\_user\_from\_access\_key\` * Merged trunk * This change adds the ability to boot Windows and Linux instances in XenServer using different sets of vm-params * merge trunk * New migration * Passes net variable as value of keyword argument process\_input. Prior to the execvp patch, this was passed positionally * Changes the output of status in describe\_volumes from showing the user as the owner of the volume to showing the project as the owner * Added support for ips resource: /servers/1/ips Refactored implmentation of how the servers response model is generated * merge trunk * Adds in multi-tenant support to openstack api. Allows for multiple accounts (projects) with admin api for creating accounts & users * merge trunk * remerge trunk (again). fix issues caused by changes to deserialization calls on controllers * Add config for osapi\_extensions\_path. Update the ExtensionManager so that it loads extensions in the osapi\_extensions\_path * process\_input for tee. 
fixes: 733439 * Minor stylistic updates affecting indentation * Make linux\_net ensure\_bridge commands that add and remove ip addr's from devices/bridges work with with the latest utils.execute method (execvp) * Added volume api from previous megapatch * Made changes to xs-ipv6 code impacted because of addition of flatmanger ipv6 support * Need to set version to '1.0' in the nova.context in test code for tests to be happy * merge from trunk.. * Discovered literal\_column(), which does exactly what I need * Merged trunk * Further vmops cleanup * cast execute commands to str * Remove broken test. At least this way, it'll actually fix the problem and be mergable * \* Updated the readme file with description about VLAN Manager support & guest console support. Also added the configuration instructions for the features. \* Added assumptions section to the readme file * \* Modified raise statements to raise nova defined Exceptions. \* Fixed Console errors and in network utils using HostSystem instead of Datacenter to fetch network list \* Added support for vmwareapi module in nova/virt/connection.py so that vmware hypervisor is supported by nova \* Removing self.loop to achieve synchronization * merge trunk * Moved vlan\_interface flag in network.manager removed needless carriage return in vm\_ops * Use self.instances.pop in unfilter\_instance to make the check/removal atomic * Make Authors check account for tests being run with different os.getcwd() depending on how they're run. Add missing people to Authors * Make linux\_net ensure\_bridge commands that add and remove ip addr's from devices/bridges work with with the latest utils.execute method (execvp) * \_translate\_keys now needs one more argument, the request object * Added version attribute to RequestContext class. Set the version in the nova.context object at the middleware level. Prototyped how we can serialize ip addresses based on the version * execvp: fix params * merge lp:nova * switch to a more consistent usage of onset\_files variable names * re-added a test change I removed thinking it was related to removed code. It wasn't :> * merge trunk * Document known bug numbers by the code which is degraded until the bugs are fixed * fix minor typo * Fix a fer nits jaypipes found in review * Pep8 / Style * Re-removed the code that was deleted upstream but somehow didn't get merged in. Bizarre! * More resize * Merged with upstream * pep8 fun * Test login. Uncovered bug732866 * Merged with upstream * Better logging, be more careful about when we throw login errors re bug732866 * Don't wrap keys and volumes till they're in the API * Add a new IptablesManager that takes care of all uses of iptables * Last un-magiced session.execute() replaced with SQLAlchemy code.. * PEP8 * Add basic test case * Implements basic OpenStack API client, ready to support API tests * Initial support fo extension resources. Tests * Partial revert of one conversion due to phantom magic exception from SQLAlchemy in unrelated code; convert all deletes * merge lp:nova * add docstring * fixed formatting and redundant imports * Cleaned up vmops * merge trunk * initializing instance power state on launch to 0 (fixes EC2 API bug) * Correct a misspelling * merge lp:nova * merge trunk * Use a FLAGS.default\_os\_type if available * Another little bit of fallout from the execvp branch * Updated the code to detect the exception by fault type. SOAP faults are embedded in the SOAP response as a property. 
Certain faults are sent as a part of the SOAP body as property of missingSet. E.g. NotAuthenticated fault. So we examine the response object for missingSet and try to check the property for fault type * Another little detail. * Fix a few things that were either missed in the execvp conversion or stuff that was merged after it, but wasn't updated accordingly * Introduces the ZoneManager to the Scheduler which polls the child zones and caches their availability and capabilities * One more thing. * merge trunk * Only include ramdisk and kernel id if they are actually set * Add bugfix metadata * More execvp fallout * Make nova.image.s3 catch up with the new execute syntax * Pass argv of dnsmasq and radvd to execute as individual args, not as a list * Split dnsmasq and radvd commands into their respective argv's * s/s.getuid()/os.getuid()/ * merge lp:nova and add stub image service to quota tests as needed * merged to trunk rev781 * fix pep8 check * merge lp:nova * Modifies S3ImageService to wrap LocalImageService or GlanceImageService. It now pulls the parts out of s3, decrypts them locally, and sends them to the underlying service. It includes various fixes for image/glance.py, image/local.py and the tests * add tests to verify the serialization of adminPass in server creation response * Fixes nova.sh to run properly the first time. We have to get the zip file after nova-api is running * minor fixes from review * merged trunk * fixed based on reviewer's comment * merge lp:nova * Moved umount container to disk.py and try to remove loopback when destroying the container * Merged trunk * Replace session.execute() calls performing raw UPDATE statements with SQLAlchemy code, with the exception of fixed\_ip\_disassociate\_all\_by\_timeout() * Fixes a race condition where multiple greenthreads were attempting to resize a file at the same time. Adds tests to verify that the image caching call will run concurrently for different files, but will block other greenthreads trying to cache the same file * maybe a int instead ? * merge lp:nova * merge, resolve conflicts, and update to reflect new standard deserialization function signature * Fixes doc build after execvp patch * execvp: fix docs * initializing instance power state on launch to 0 (fixes EC2 API bug) * - Content-Type and Accept headers handled properly - Content-Type added to responses - Query extensions no long cause computeFaults - adding wsgi.Request object - removing request-specific code from wsgi.Serializer * Fixes bug 726359. Passes unit tests * merge lp:nova, fix conflicts, fix tests * fix the copyright notice in migration * execvp: cleanup * remove the semaphore when there is no one waiting on it * merge lp:nova and resolve conflicts * Hi guys * Update the create server call in the Openstack API so that it generates an 'adminPass' and calls set\_admin\_password in the compute API. This gets us closer to parity with the Cloud Servers v1.0 spec * Added naming scheme comment * Merged trunk * execvp passes pep8 * merge trunk * Add a decorator that lets you synchronise actions across multiple binaries. Like, say, ensuring that only one worker manipulates iptables at a time * renaming wsgi.Request.best\_match to best\_match\_content\_type; correcting calls to that function in code from trunk * merge lp:nova * Fixes bug #729400. Invalid values for offset and limit params in http requests now return a 400 response with a useful message in the body. 
Also added and updated tests * Add password parameter to the set\_admin\_password call in the compute api. Updated servers password to use this parameter * stuff * rearrange functions and add docstrings * Fixes uses of process\_input * update authors file * merged trunk r771 * merge lp:nova * remove unneeded stubs * move my tests into their own testcase * replaced ConnectionFailed with Exception in tools/euca-get-ajax-console was not working for me with euca2tools 1.2 (version 2007-10-10, release 31337) * Fixed pep8 issues * remerge trunk * removed uneeded \*\*kw args leftover from removed account-in-url changes * fixed lp715427 * fixed lp715427 * Fix spacing * merge lp:nova and resolve conflicts * remove superfluous trailing blank line * add override to handle xml deserialization for server instance creation * Added 'adminPass' to the serialization\_metadata * merge trunk * Merged with trunk Updated exception handling according to spawn refactoring * Fixed pep8 violation in glance plugin * Added unit tests for ensuring VDI are cleaned up upon spawn failures * Stop assuming anything about the order in which the two processes are scheduled * make static method for testing without initializing libvirt * tests and semaphore fix for image caching * execvp: unit tests pass * merged to trunk rev 769 * execvp: almost passes tests * Refactoring nova-api to be a service, so that we can reuse it in unit tests * Added documentation about needed flags * a few fixes for the tests * Renamed FLAG.paste\_config -> FLAG.api\_paste\_config * Sorted imports correctly * merge trunk * Fixes lp730960 - mangled instance creation in virt drivers due to improper merge conflict resolution * Use disk\_format and container\_format in place of image type * using get\_uuid in place of get\_record in \_get\_vm\_opaqueref changed SessionBase.\_getter in fake xenapi in order to return HANDLE\_INVALID failure when reference is not in DB (was NotImplementedException) * Merging trunk * Fixing tests * Pep8 fixes * Accidentally left some bad data around * Fix the bug where fakerabbit is doing a sort of prefix matching on the AMQP routing key * merge trunk * Use disk\_format and container\_format instead of image type * merged trunk * update manpage * update code to work with new container and disk formats from glance * modify nova manage doc * Nits * abstracted network code in the base class for flat and vlan * Remerged trunk. fixed conflict * Removes VDIs from XenServer backend if spawn process fails before vm rec is created * Added ability to remove networks on nova-manage command * Remove addition of account to service url * refactored up nova/virt/xenapi/vmops \_get\_vm\_opaque\_ref() no longer inspects the param to check to see if it is an opaque ref works better for unittests * This fix is an updated version of Todd's lp720157. Adds SignatureVersion checking for Amazon EC2 API requests, and resolves bug #720157 * \* pep8 cleanups in migrations \* a few bugfixes * Removed stale references to XenAPI * Moved guest\_tool.py from etc/esx directory to tools/esx directory * Removed excess comment lines * Fix todo comment * execvp * Merged trunk * virt.xenapi.vmops.\_get\_vm\_opaque\_ref changed vm to vm\_ref and ref to obj * virt.xenapi.vmops.\_get\_vm\_opaque\_ref assumes VM.get\_record raises * add a delay before grabbing zipfile * Some more refactoring and a tighter unit test * Moved FLAGS.paste\_config to its re-usable location * Merged with trunk and fixed conflict. 
Sigh * Converted tabs to spaces in bin/nova-api * A few more changes * Inhibit inclusion of stack traces in the logs UNLESS --verbose has been specified. This should help keep the logs compact, helping admins find the messages they're interested in (e.g., "Can't connect to MySQL server on '127.0.0.1' (111)") without having to sort through the stack traces, while still allowing developers to see those traces at will * Addresses bugs 704985 and 705453 by: * And unit tests * A few formatting niceties * First part of the bug fix * virt.xenapi.vmops.\_get\_vm\_opaque\_ref checks for basestring instance instead of str * virt.xenapi.vmops.\_get\_vm\_opaque\_ref exception caught properly * cleaned up virt.xenapi.vmops.\_get\_vm\_opaque\_ref. more reliable approach to checking if param is an opaque ref. code is cleaner * deleted network\_is\_associated from nova.db api * move the images\_dir out of the way when converting * pep8 * rework register commands based on review * added network\_get\_by\_cidr method to nova.db api * Use IptablesManager.semapahore from securitygroups driver to ensure we don't apply half a rule set * Log failed command execution if there are more retry attempts left * Make iptables rules class \_\_ne\_\_ just be inverted \_\_eq\_\_ * Invalid values for offset and limit params in http requests now return a 400 response with a useful message in the body. Also added and updated tests * Create --paste\_config flag defaulting to api-paste.ini and mv etc/nova-api.conf to match * Implementation for XenServer migrations. There are several places for optimization but I based the current implementation on the chance scheduler just to be safe. Beyond that, a few features are missing, such as ensuring the IP address is transferred along with the migrated instance. This will be added in a subsequent patch. Finally, everything is implemented through the Openstack API resize hooks, but actual resizing of the instance RAM and hard drive space is not yet implemented * Generate 'adminPass' and call set\_password when creating servers * Merged with current trunk * merge trunk * Resolving excess conflicts due to criss-cross in branch history * Make "dhcpbridge init" output correctly formatted leases information * Rebased to nova revision 761 * Fixed some more pep8 errors * \* Updated readme file with installation of suds-0.4 through easy\_install. 
\* Removed pass functions \* Fixed pep8 errors \* Few bug fixes and other commits * zipfile needs to be extracted after nova is running * make compute get the new images properly, fix a bunch of tests, and provide conversion commands * avoid possible string/int comparison problems * merge lp:nova * select cleanups * Merged to trunk rev 760, and fixed comment line indent according to Jay's comment * Fix renaming of instance fields using update\_instance api method * apirequest -> apireq * \* os\_type is no longer \`not null\` * respond well if personality attribute is incomplete * Added initial support to delete networks nova-manage * move the id wrapping into cloud layer instead of image\_service * added flatmanager unit testcases and renamed test\_network.py to test\_vlan\_network.py * remove xml testing infrastructure since it is not feasible to use at present * refactor server tests to support xml and json separately * More unit tests and rabbit hooks * Fix renaming of instance fields using update\_instance method * Fix api logging to show proper path and controller:action * merged trunk * \* Tests to verify correct vm-params for Windows and Linux instances * More fixes * delete unnecessary DECLARE * Fixed based on reviewer's comment. Main changes are below. 1. get\_vcpu\_total()/get\_memory\_mb()/get\_memory\_mb\_used() is changed for users who used non-linux environment. 2. test code added to test\_virt * merge lp:nova * merge trunk * fixed wrong local variable name in vmops * Use %s for instance-delete logging in case instance\_id comes through as a string * remove ensure\_b64\_encoding * add the ec2utils file i forgot * spawn a greenthread for image registration because it is slow * fix a couple issues with local, update the glance fake to actually return the same types as the real client, fix the image tests * make local image service work * use LocalImageServiceByDefault * Replace objectstore images with S3 image service backending to glance or local * Merged to trunk rev 759 * Merged trunk rev 758 * remove ra\_server from model and fix migration issue while running unit tests * Removed properties added to fixed\_ips by xs-ipv6 BP * altered ra\_server name to gateway\_v6 * merge lp:nova * rename onset\_files to personality\_files all the way down to compute manager * Changing output of status from showing the user as the owner, to showing the project * enforce personality quotas * localize a few error messages * Refactor wsgi.Serializer away from handling Requests directly; now require Content-Type in all requests; fix tests according to new code * pep8 * Renaming my migration yet again * Merged with Trunk * Use %s in case instance\_id came through as a string * Basic notifications drivers and tests * adding wsgi.Controller and wsgi.Request testing; fixing format keyword argument exception * This fix changes a tag contained in the DescribeKeyPairs response from to so that Amazon EC2 access libraries which does more strict syntax checking can work with Nova * some comments are modified * Merged to trunk rev 757. Main changes are below. 1. Rename db table ComputeService -> ComputeNode 2. 
nova-manage option instance\_type is reserved and we cannot use option instance, so change instance -> vm * adding wsgi.Request class to add custom best\_match; adding new class to wsgify decorators; replacing all references to webob.Request in non-test code to wsgi.Request * Remerged trunk, fixed a few conflicts * Add in multi-tenant support in openstack api * Merged to trunk rev 758 * Fix regression in the way libvirt\_conn gets its instance\_types * Updated DescribeKeyPairs response tag checked in nova/tests/test\_cloud.py * merged to trunk rev757 * Fixed based on reviewer's comments. Main changes are below. 1. Rename nova.compute.manager.ComputeManager.mktmpfile for better naming. 2. Several tests code in tests/test\_virt.py are removed. Because it only works in libvirt environment. Only db-related testcode remains * Fix regression in the way libvirt\_conn gets its instance\_types * more rigorous testing and error handling for os api personality * Updated Authors and .mailmap * Merged to rev 757 * merges dynamic instance types blueprint (http://wiki.openstack.org/ConfigureInstanceTypesDynamically) and bundles blueprint (https://blueprints.launchpad.net/nova/+spec/flavors) * moved migration to 008 (sigh) * merged trunk * catching bare except: * added logging to instance\_types for DB errors per code review * Very simple change checking for < 0 values in "limit" and "offset" GET parameters. If either are negative, raise a HTTPBadRequest exception. Relevant tests included * requested style change * Fixes Bug #715424: nova-manage : create network crashes when subnet range provided is not enough , if the network range cannot fit the parameters passed, a ValueError is raised * adding new source docs * corrected error message * changed \_context * pep8 * added in req.environ for context * pep8 * fixed \_context typo * coding style change per devcamcar review * fixed coding style per devcamcar review notes * removed create and delete method (and corresponding tests) from flavors.py * Provide the ability to rescue and unrescue a XenServer instance * Enable IPv6 injection for XenServer instances. Added addressV6, netmaskV6 and gatewayV6 columns to the fixed\_ips table via migration #007 as per NTT FlatManager IPv6 spec * Updated docstrings * add support for quotas on file injection * Added IPv6 migrations * merge fixes * Inject IPv6 data into XenStore for instance * Change DescribeKeyPairs response tag from keypairsSet to keySet, and fix lp720133 * Port Todd's lp720157 fix to the current trunk, rev 752 * Changed \_get\_vm\_opaqueref removing test-specific code paths * Removed excess TODO comments and debug line * initial commit of vnc support * merged trunk * Changed ra\_server to gateway\_v6 and removed addressv6 column from fixed\_ips db table * \* Added first cut of migration for os\_type on instances table \* Track os\_type when taking snapshots * merging trunk * \* Added ability to launch XenServer instances with per-os vm-params * test osapi server create with multiple personalities * ensure personality contents are b64 encoded * Merged trunk * Fixed pep8 issues, applied jaypipes suggestion * Rebased to nova revision 752 * Use functools.wraps to make sure wrapped method's metadata (docstring and name) doesn't get mangled * merge from trunk * Fake database module for vmware vi api. Includes false injection layer at the level of API calls. This module is base for unit tests for vmwareapi module. 
The unit tests runs regardless of presence of ESX/ESXi server as computer provider in OpenStack * Review feedback * Updated the code to include support for guest consoles, VLAN networking for guest machines on ESX/ESXi servers as compute providers in OpenStack. Removed dependency on ZSI and now using suds-0.4 to generate the required stubs for VMware Virtual Infrastructure API on the fly for calls by vmwareapi module * Added support for guest console access for VMs running on ESX/ESXi servers as computer providers in OpenStack * Support for guest consoles for VMs running on VMware ESX/ESXi servers. Uses vmrc to provide the console access to guests * Minor modification to document. Removed excess flags * Moved the guest tools script that does IP injection inside VM on ESX server to etc/esx directory from etc/ directory * support adding a single personality in the osapi * corrected copyrights for new files * Updated with flags for nova-compute, nova-network and nova-console. Added the flags, --vlan\_interface= --network\_driver=nova.network.vmwareapi\_net [Optional, only for VLAN Networking] --flat\_network\_bridge= [Optional, only for Flat Networking] --console\_manager=nova.console.vmrc\_manager.ConsoleVMRCManager --console\_driver=nova.console.vmrc.VMRCSessionConsole [Optional for OTP (One time Passwords) as against host credentials] --vmwareapi\_wsdl\_loc=/vimService.wsdl> * Fixed trunk merge issues * Merged trunk * At previous commit, I forget to erase conflict - fixed it * merged to trunk rev 752 * Rebased at lp:nova 759 * test\_compute is changed b/c lack of import instance\_types * rename db migration script * 1. merged trunk rev749 2. rpc.call returns '/' as '\/', so nova.compute.manager.mktmpfile, nova.compute.manager.confirm.tmpfile, nova.scheduler.driver.Scheduler.mounted\_on\_same\_shared\_storage are modified followed by this changes. 3. nova.tests.test\_virt.py is modified so that other teams modification is easily detected since other team is using nova.db.sqlalchemy.models.ComputeService * updated docs * updated docs * Fixed xenapi tests Gave up on clever things with map stored as string in xenstore. Used ast.liteeral\_eval instead * This branch implements the openstack-api-hostid blueprint: "Openstack API support for hostId" * refactored adminclient * No reason to initialize metadata twice * Units tests fixed partially. Still need to address checking data injected into xenstore need to convert string into dict or similar. Also todo PEP8 fixes * replaced ugly INSTANCE\_TYPE constant with (slightly less ugly) stubs * add test for instance creation without personalities * fixed pep8 * Add a lock\_path flag for lock files * refactored nova-manage list (-all, ) and fixed docs * moved nova-manage flavors docs * Edited \`nova.api.openstack.common:limited\` method to raise an HTTPBadRequest exception if a negative limit or offset is given. I'm not confident that this is the correct approach, because I guess this method could be called out of an API/WSGI context, but the method \*is\* located in the OpenStack API module and is currently only used in WSGI-capable methods, so we should be safe * merge trunk * moving nova-manage integration tests to smoke tests * Wrapped the instance\_types comparison with an int and added a test case for it. 
Removed the inadvertently added newline * Rename migration to coincide with latest trunk changes * Adds VHD build support for XenServer driver * Suppress stack traces unless --verbose is specified * Removed extraneous newline * Merging trunk to my branch. Fixed a conflict in servers.py * Fixed obvious errors with flags. Note: tests still fail * Merging trunk * Fixed default value for xenapi\_agent\_path flag * 1) merge trunk 2) removed preconfigure\_xenstore 3) added jkey for broadcast address in inject\_network\_info 4) added 2 flags: 4.1) xenapi\_inject\_image (default True) This flag allows for turning off data injection by mounting the image in the VDI (agreed with Trey Morris) 4.2) xenapi\_agent\_path (default /usr/bin/xe-update-networking) This flag specifies the path where the agent should be located. It makes sense only if the above flag is True. If the agent is found, data injection is not performed * Wrap IptablesManager.apply() calls in utils.synchronized to avoid having different workers step on each other's toes * merge trunk * Add utils.synchronized decorator to allow for synchronising method entrance across multiple workers on the same host * execvp * execvp * execvp * execute: shell=True removed * Add lxc to the libvirt tests * Clean up the mount points when it shutsdown * Add ability to mount containers * Add lxc libvirt driver * Rebased to Nova revision 749 * added listing of instances running on a specific host * fixed FIXME * beautification.. * introduced new flag "max\_nbd\_devices" to set the number of possible NBD devices * renamed flag from maximum\_... to max\_.. * replaced ConnectionFailed with Exception in tools/euca-get-ajax-console was not working for me with euca2tools 1.2 (version 2007-10-10, release 31337) * Did a pull from trunk to be sure I had the latest, then deleted the test directory. I guess it appeared when I started using venv. Doh * Deleting test dir from a pull from trunk * introduced new flag "maximum\_nbd\_devices" to set the number of possible NBD devices * reverted my changes from https://code.launchpad.net/~berendt/nova/lp722554/+merge/50579 and reused the existing db api methods to add the disabled services. 
Looks much better now :) * add timeout and retry for ssh * Makes nova-api correctly load the default flagfile * force memcache key to be str * only create auth connection if cache misses * No reason to dump a stack trace just because the AMQP server is unreachable; an error notification should be sufficient * Add error message to the error report so we know why the AMQP server is unreachable * No reason to dump a stack trace just because we can't reach the AMQP servire; it ends up being just noise * DescribeInstances modified to return ipv6 fixed ip address in case of flatmanager * Bootlock original instance during rescue * merge with zones2 fixes and trunk * check if QUERY\_STRING is empty or not before building the request URL in bin/nova-ajax-console-proxy * trunk merge * API changed to new style class * trunk merge, pip-requires and novatools to novaclient changes * Fixes FlatDHCP by making it inherit from NetworkManager and moving some methods around * fixed: bin/nova-ajax-console-proxy:66:19: W601 .has\_key() is deprecated, use 'in' * merged trunk * add a caching layer to the has\_role call to increase performance * Removed unnecessary compute import * Set rescue instance VIF device * use default flagfile in nova-api * Add tests for 718999, fix a little brittle code introduced by the committed fix * Rename test to describe what it actually does * Copy over to current trunk my tests, the 401/500 fix, and a couple of fixes to the committed fix which was actually brittle around the edges.. * I'm working on consolidating install instructions specifically (they're the most asked-about right now) and pointing to the docs.openstack.org site for admin docs * check if QUERY\_STRING is empty or not before building the request URL * Teardown rescue instance * Merged trunk * Create rescue instance * Merging trunk, conflicts fixed * Verify status of image is active * Rebased at lp:nova 740 * merged with trunk * Cleanup db method names for dealing with auth\_tokens to follow standard naming pattern * The proposed bug fix stubs out the \_is\_vdi\_pv routine for testing purposes * revert a few unnecessary changes to base.py * removed unused references to unittest * add customizable tempdir and remove extra code * Pass id of token to be deleted to the db api, not the actual object * Removing unecessary headers * Rename auth\_token db methods to follow standard * Removing unecessary nokernel stuff * Adding \_make\_subprocess function * No longer users image/ directory in tarball * Merging trunk, small fixes * make smoketests run with nose * IPV6 FlatManager changes * Make tests start with a clean database for every test * merge trunk * merge clean db * merged trunk * sorry, pep8 * adds live network injection/reconfiguration. Some refactoring * forgot to get vm\_opaque\_ref * new tests * service capabilities test * moved network injection and vif creation to above vm start in vmops spawn * Merged trunk * nothing * Removes processName from debug output since we aren't using multiprocessing and it doesn't exist in python 2.6.1 * Add some methods to the ec2 admin api to work with VPNs. 
Also implements and properly documents the get\_hosts method * Fix copypasta pep8 violation * moved migrate script to 007 (again..sigh) * Don't require metadata (hotfix for bug 724143) * merge trunk * Merged trunk * Updated email in Authors * Easy and effective fix for getting the DNS value from flag file, when working in FlatNetworking mode * Some first steps towards resolving some of the issues brought up on the mailing list related to documenting flags * Support HP/LeftHand SANs. We control the SAN by SSHing and issuing CLIQ commands. Also improved the way iSCSI volumes are mounted: try to store the iSCSI connection info in the volume entity, in preference to doing discovery. Also CHAP authentication support * This fix checks whether the boot/guest directory exists on the hypervisor. If that is not the case, it creates it * Globally exclude \*.pyc files from generated tarballs * stubbing out \_is\_vdi\_pv for test purposes * merge trunk * Globally exclude .pyc files from tarball contents * Get DNS value from Flag, when working in FlatNetworking mode. Passing the flag was ineffective previously. This is an easy fix. I think we would need nova-manage to accept dns also from command line * xenapi plugin function now checks whether /boot/guest already exists. If not, it creates the directory * capability aggregation working * fix check for existing port 22 rule * move relevant code to baseclass and make flatdhcp not inherit from flat * Hotfix to not require metadata * Documentation fixes so that output looks better * more smoketest fixes * Removed Milind from Authors file, as individual Contributer's License Agreement & Ubuntu code of conduct are not yet signed * Fixed problems found in localized string formatting. Verified the fixes by running ./run\_tests.sh -V * Change missed reference to run\_tests.err.log * PEP 257 fixes * Merged with trunk * fix missed err.log * Tests all working again * remove extra flag in admin tests * Revert commit 709. 
This fixes issues with the Openstack API causing 'No user for access key admin' errors * Adds colors to output of tests and cleans up run\_tests.py * Reverted bad-fix to sqlalchemy code * Merged with trunk * added comments about where code came from * merge and fix conflicts * Prevent logging.setup() from generating a syslog handler if we didn't request one (breaks on mac) * fix pep8 * merged upstream * Changed create from a @staticmethod to a @classmethod * revert logfile redirection and make colors work by temporarily switching stdout * merged trunk * add help back to the scripts that don't use service.py * Alphabetize imports * remove processName from debug output since we aren't using multiprocessing and it doesn't exist in python 2.6.1 * updates to nova.flags to get help working better * Helper function that supports XPath style selectors to traverse an object tree e.g * tests working again * Put back the comments I accidentally removed * Make sure there are two blank links after the import * Rename minixpath\_select to get\_from\_path * Fixes the describe\_availability\_zones to use an elevated context when getting services and the db calls to pass parameters correctly so is\_admin check works * Fix pep8 violation (trailing whitespace) * fix describe\_availability\_zones * Cope when we pass a non-list to xpath\_select - wrap it in a list * Fixes existing smoketests and splits out sysadmin tests from netadmin tests * Created mini XPath implementation, to simplify mapping logic * move the deletion of the db into fixtures * merged upstream * Revert commit 709. This fixes issues with the Openstack API causing 'No user for access key admin' errors * put the redirection back in to run\_tests.sh and fix terminal colors by using original stdout * Deleted trailing whitespace * Fixes and optimizes filtering for describe\_security\_groups. Also adds a unit test * merged trunk * fix for failing describe\_instances test * merged trunk * use flags for sqlite db names and fix flags in dhcpbridge * merged trunk * Fixes lp715424, code now checks network range can fit num\_networks \* network\_size * The proposed branch prevents FlatManager from executing network initialisation tasks contained in linux\_net.init\_host(), which are unnecessary when flat networking is used * Adds some features to run\_tests.sh: - if it crashes right away with a short erorr log, print that directly - allow specifying tests without the nova.tests part * The kernel\_id and the ramdisk\_id are optional, yet the OpenStack API was requiring them. In addition, with the ObjectStore these properties are not under 'properties' (as they are with Glance) * merged trunk * merge trunk * Initial support for per-instance metadata, though the OpenStack API. Key/value pairs can be specified at instance creation time and are returned in the details view. Support limits based on quota system * Merged trunk * Removed pass * Changed unit test to refer to compute API, per Todd's suggestion. Avoids needing to extend our implementation of the EC2 API * Fixes lots of errors in the unit tests * dump error output directly on short import errors * allow users to omit 'nova.tests' with run\_tests * Merged trunk * \* Took care of localization of strings \* Addressed all one liner docstrings \* Added Sateesh, Milind to Authors file * Fixed pep8 errors * FlatManager.init\_host now inhibits call to method in superclass. 
Floating IP methods have been redefined in FlatManager to raise NotImplementedError * speed up network tests * merged trunk * move db creation into fixtures and clean db for each test * fix failures * remove unnecessary stubout * Lots of test fixing * Update the admin client to deal with VPNs and have a function host list * Removed unused import & formatting cleanups * Exit with exit code 1 if conf cannot be read * Return null if no kernel\_id / ramdisk\_id * Reverted change to focus on the core bug - kernel\_id and ramdisk\_id are optional * Make static create method behave more like other services * merged fix-describe-groups * add netadmin smoketests * separate out smoketests and add updated nova.sh * fix and optimize security group filtering * Support service-like wait behaviour for API service * Added create static method to ApiService * fix test * Refactoring nova-api to be a service, so that we can reuse it in tests * test that shows error on filtering groups * don't make a syslog handler if we didn't ask for one * Don't blindly concatenate queue name if second portiion is None * Missing import for nova.exceptions (!) * At the moment --pidfile is still used in some scripts in contrib/puppet/. I don't use puppet, please check if there are possible side effects * We're not using prefix matching on AMQP, so fakerabbit shouldn't be doing it! * merge fixes from anso branch * merged trunk * Removed block of code that resurrected itself in the last merge * Added Andy Southgate to the Authors file * Merged with trunk, including manual conflict resolution in nova/virt/disk.py and nova/virt/xenapi/vmops.py * Put the whitespace back \*sigh\* * Remove duplicate import gained across a merge * Rename "SNATTING" chain to "snat" * Fix DescribeRegion answer by introducing '{ec2,osapi}\_listen' flags instead of overloading {ec2,osapi}\_host. Get rid of paste\_config\_to\_flags, bin/nova-combined. Adds debug FLAGS dump at start of nova-api * Also remove nova-combined from setup.py * Fixed some docstring * Get rid of nova-combined, see rationale on ML * Merged trunk * no, really fix lp721297 this time * Updated import statements according to HACKING guidelines. Added docstrings to each document. Verified pep8 over all files. Replaced some constants by enums accordingly. Still little bit more left in vm\_util.py and vim\_util.py files * Add flags for listen\_port to nova-api. This allows us to listen on one port, but return another port (for a proxy or load balancer) in calls like describe\_regions, etc * Fix tiny mitakes! (remove unnecessary comment, etc) * Fixed based on reviewer's comment. 1. Change docstrings format 2. Fix comment grammer mistake, etc * PEP8 again * Account for the fact that iptables-save outputs rules with a space at the end. Reverse the rule deduplication so that the last one takes precedence * floating-ip-snat was too long. Use floating-snat instead * PEP8 adjustments * Remove leftover from debugging * Add a bunch of tests for everything * Fixes various issues regarding verbose logging and logging errors on import * merged trunk * Add a new chain, floating-ip-snat, at the top of SNATTING, so that SNATting for floating ips gets applied before the default SNAT rule * Address some review comments * Some quick test cleanups, first step towards standardizing the way we start services in tests * use a different flag for listen port for apis * added disabled services to the list of displayed services in bin/nova-manage * merged to trunk rev709. 
NEEDS to be fixed based on 3rd reviewer's comment * just add 005\_add\_live\_migration.py * Fixed based on reviewer's comment. 1. DB schema change vcpu/memory/hdd info were stored into Service table. but reviewer pointed out to me creating new table is better since Service table has too much columns * update based on prereq branch * update based on prereq branch * fixed newline and moved import fake\_flags into run\_tests where it makes more sense * merged fix * remove changes to test db * Fixed my confusion in documenting the syntax of iSCSI discovery * pretty colors for logs and a few optimizations * Renamed db\_update to model\_update, and lots more documentation * modify tests to use specific hosts rather than default * Merged with head * remove keyword argument, per review * move test\_cloud to use start\_service, too * add a start\_service method to our test baseclass * add a test for rpc consumer isolation * Merged with trunk * The OpenStack API was using the 'secret' as the 'access key'. There is an 'access key' and there is a 'secret key'. Access key ~= username. Secret key ~= password. This fix is necessary for the OpenStack Python API bindings to log in * Add a bunch of docs for the new iptables hotness * fix pep8 and remove extra reference to reset * switch to explicit call to logging.setup() * merged trunk * Adds translation catalogs and distutils.extra glue code that automates the process of compiling message catalogs into .mo files * Merged with trunk * make sure that ec2 response times are xs:dateTime parsable * Removing pesky DS\_Store files too. Begone * Updated to remove built docs * Removing duplicate installation docs and adding flag file information, plus pointing to docs.openstack.org for Admin-audience docs * introducing a new flag timeout\_nbd for manually setting the time in seconds for waiting for an upcoming NBD device * use tests.sqlite so it doesn't conflict with running db * cleanup from review * Duh, continue skips iteration, not pass. #iamanidiot * reset to notset if level isn't in flags * Enable rescue testing * PEP8 errors and remove check in authors file for nova-core, since nova-core owns the translation export branch * Merged trunk * Stub out VM create * \* Removed VimService\_services.py & VimService\_services\_types.py to reduce the diffs to normal. These 2 files are auto-generated files containing stubs for VI SDK API end points. The stub files are generated using ZSI SOAP stub generator module ZSI.commands.wsdl2py over Vimservice.wsdl distributed as part of VMware Virtual Infrastructure SDK package. To not include them in the repository we have few options to choose from, 1) Generate the stub files in build time and make them available as packages for distribution. 2) Generate the stub files in installation/configuration time if ESX/ESXi server is detected as compute provider. 
Further to this, we can try to reduce the size of stub files by attempting to create stubs only for the API end points required by the module vmwareapi * introducing a new flag timeout\_nbd for manually setting the time in seconds for waiting for an upcoming NBD device * \* Removed nova/virt/guest-tools/guest\_tool.bat & nova/virt/guest-tools/guest\_tool.sh as guest\_tool.py can be invoked directly during guest startup * More PEP-8 * Wrap ipv6 rules, too * PEP-8 fixes * Allow non-existing rules to be removed * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ NOVA-CORE DEVELOPERS SHOULD NOT REVIEW THIS MERGE PROPOSAL ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * merged with nova trunk revision #706 * Fix typo * Unfilter instance correctly on termination * move exception hook into appropriate location and remove extra stuff from module namespace * Also remove rules that jump to deleted chains * simplify logic for parsing log level flags * reset all loggers on flag change, not just root * add docstring to reset method * removed extra comments and initialized from flags * fix nova-api as well * Fix refresh sec groups * get rid of initialized flag * clean up location of method * remove extra references to logging.basicConfig * move the fake initialized into fake flags * fixes for various logging errors and issues * fanout works * fanout kinda working * service ping working * scheduler manager * tests passing * start of fanout * merge trunk * previous trunk merge * puppet scripts only there as an example, should be moved to some other place if they are still necessary * Various optimizations of lookups relating to users * If there are no keypairs registered on a create call, output a useful error message rather than an out-of-range exception * Fixes vpn images to use kernel and ramdisk specified by the image * added elif branch to handle the conversion of datetime instances to isoformat instead of plain string conversion * Calculate time correctly for ec2 request logs * fix ec2 launchtime response not in iso format bug * pep8 leftover * move from datetime.datetime.utcnow -> utils.utcnow * pass start time as a param instead of making it an attribute * store time when RequestLogging starts instead of using context's time * Fix FakeAuthManager so that unit tests pass; I believe it was matching the wrong field * more optimizations context.user.id to context.user\_id * remove extra * replace context.user.is\_admin() with context.is\_admin because it is much faster * remove the weird is\_vpn logic in compute/api.py * Don't crash if there's no 'fixed\_ip' attribute (was returning None, which was unsubscriptable) * ObjectStore doesn't use properties collection; kernel\_id and ramdisk\_id aren't required anyway * added purge option and tightened up testing * Wrap iptables calls in a semaphore * pep8 * added instance types purge test * Security group fallback is named sg-fallback * Rename a few things for more clarity * Port libvirt\_conn.IptablesDriver over to use linux\_net.IptablesManager * merged trunk * Typo fix * added admin api call for injecting network info, added api test for inject network info * If there are no keypairs, output a useful error message * Fix typo (?) in authentication logic * Changing type -> image\_type * Pep8 cleanup * moved creating vifs to its own function, moved inject network to its own function * sandy y u no read hacking guide and import classes? * Typo fix * XenAPI tests * Introduce IptablesManager in linux\_net. 
Port every use of iptables in linux\_net to it * Use WatchedFileHandler instead of RotatingFileHandler * Resize compute tests * Support for HP SAN * Merging trunk to my branch. Fixed conflicts in Authors file and .mailmap * Rename migration 004 => 005 * Added Author and tests * Merging trunk * fixups backed on merge comments * Fixed testing mode leftover * PEP8 fix * Remove paste\_config\_to\_flags since it's now unused * Port changes to nova-combined, rename flags to API\_listen and API\_listen\_port * Set up logging once FLAGS properly read, no need to redo logging config anymore (was inoperant anyway) * Switch to API\_listen and API\_listen\_port, drop wsgi.paste\_config\_to\_flags * added new class Instances to manage instances and added a new listing method into the class * added functionality to list only fixed ip addresses of one node and added exception handling to list method * Use WatchedFileHandler instead of RotatingFileHandler * Incorporating minor cleanups suggested by Rick Harris: \* Use assertNotEqual instead of assertTrue \* Use enumerate function instead of maintaining a counter * Resize compute tests * fixed based on reviewer's comment. 1. erase wrapper function(remove/exists/mktempfile) from nova.utils. 2. nova-manage service describeresource(->describe\_resource) 3. nova-manage service updateresource(->update\_resource) 4. erase "my mistake print" statement * Tests * pep8 * merged trunk * Makes FlatDHCPManager clean up old fixed\_ips like VlanManager * Correctly pass the associate paramater for project\_get\_network through the IMPL layer in the db api * changed migration to 006 for trunk compatibility * completed doc and added --purge option to instance type delete * moved inject network info to a function which accepts only instance, and call it from reset network * Test changes * Merged with trunk * Always compare incoming flavor\_id as an int * Initial support for per-instance metadata, though the OpenStack API. Key/value pairs can be specified at instance creation time and are returned in the details view. Support limits based on quota system * a few changes and a bunch of unit tests * remove leftover periodic tasks * Added support for feature parity with the current Rackspace Cloud Servers practice of "injecting" files into newly-created instances for configuration, etc. However, this is in no way restricted to only writing files to the guest when it is first created * missing docstring and fixed copyrights * move periodic tasks to base class based on class variable as per review * Correctly pass the associate paramater to project\_get\_network * Add \*\*kwargs to VlanManager's create\_networks so that optional args from other managers don't break * Uncommitted changes using the wrong author, and re-committing under the correct author * merge with zone phase 1 again * Added http://mynova/v1.0/zones/ api options for add/remove/update/delete zones. child\_zones table added to database and migration. Changed novarc vars from CLOUD\_SERVERS\_\* to NOVA\_\* to work with novatools. 
See python-novatools on github for help testing this * pip requires novatools * copyright notice * moved 003\_cactus.py migration file to 004\_add\_instance\_types.py to avoid naming collision with new trunk migration * Add \*\*kwargs to VlanManager's create\_networks so that optional args from other managers don't break * merge with zone phase 1 * changed from 003-004 migration * merged lp:~jk0/nova/dynamicinstancetypes * Merged trunk * merge from dev * fixed strings * multi positional string fix * Use a semaphore to ensure we don't run more than one iptables-restore at a time * Fixed unit test * merge with trunk * fixed zone list tests * Make eth0 the default for the public\_interface flag * Finished flavor OS API stubs * Re-alphabetise Authors, move extra addresses into .mailmap * Re-alphabetise Authors, move extra addressses into .mailmap * Move the ramdisk logging stuff * Hi guys * fixup * zone list now comes from scheduler zonemanager * Stop blowing away the ramdisk * Rebased at lp:nova 688 * Update the Openstack API so that it returns 'addresses' * I have a bug fix, additional tests for the \`limiter\` method, and additional commenting for a couple classes in the OpenStack API. Basically I've just tried to jump in somewhere to get my feet wet. Constructive criticism welcome * added labels to networks for use in multi-nic added writing network data to xenstore param-list added call to agent to reset network added reset\_network call to openstack api * Add a command to nova-manage to list fixed ip's * Foo * comments + Englilish, changed copyright in migration, removed network\_get\_all from db.api (vestigial) * Adding myself to Authors and .mailmap files * example: * Switched mailmap entries * Supporting networks with multiple PIFs. pep8 fixes unit tests passed * Merged kpepple * Merged trunk * More testing * Block diagram for vmwareapi module * added entry in the category list * Added vmwareapi module to add support of hypervisor vmware-vsphere to OpenStack * added new functionality to list all defined fixed ips * added more I18N * Merged trunk and fixed conflict with other Brian in Authors * removing superfluous pass statements; replacing list comprehension with for loop; alphabetizing imports * Rebased at lp:nova 687 * added i18n of 'No networks defined' * Make eth0 the default for FLAGS.public\_interface * Typo fixes * Merging trunk * Adding tests * first crack at instance types docs * merge trunk * style cleanup * polling tests * Use glance image type to determine disk type * Minor change. Adding a helper function stub\_instance() inside the test test\_get\_all\_server\_details\_with\_host for readability * Fixes ldapdriver so that it works properly with admin client. It now sanitizes all unicode data to strings before passing it into ldap driver. 
This may need to be rethought to work properly for internationalization * Moved definition of return\_servers\_with\_host stub to inside the test\_get\_all\_server\_details\_with\_host test * fixed * Pep8 fixes * Merging trunk * Adding basic test * Better exceptions * Update to our HACKING doc to add examples of our docstring style * add periodic disassociate from VlanManager to FlatDHCPManager * Flipped mailmap entries * -from migrate.versioning import exceptions as versioning\_exceptions + +try: + from migrate.versioning import exceptions as versioning\_exceptions +except ImportError: + try: + # python-migration changed location of exceptions after 1.6.3 + # See LP Bug #717467 + from migrate import exceptions as versioning\_exceptions + except ImportError: + sys.exit(\_("python-migrate is not installed. Exiting.")) * Accidently removed myself from Authors * Added alternate email to mailmap * zone manager tests * Merged to trunk * added test for reset\_network to openstack api tests, tabstop 5 to 4, renamed migration * Use RotatingFileHandler instead of FileHandler * pep8 fixes * sanitize all args to strings before sending them to ldap * Use a threadpool for handling requests coming in through RPC * Typos * Derp * Spell flags correctly (i.e. not in upper case) * Fixed merge error * novatools call to child zones done * novatools call to child zones done * Putting glance plugin under pep8 control * fixed authors, import sys in migration.py * Merged trunk * First commit of working code * Stubbed out flavor create/delete API calls * This implements the blueprint 'Openstack API support for hostId': https://blueprints.launchpad.net/nova/+spec/openstack-api-hostid Now instances will have a unique hostId which for now is just a hash of the host. If the instance does not have a host yet, the hostId will be '' * Fix for bug #716847 * merge trunk * First commit for xenapi-vlan-networking. Totally untested * added functionality to nova-manage to list created networks * Add back --logdir=DIR option. If set, a logfile named after the binary (e.g. nova-api.log) will be kept in DIR * Fix PEP-8 stuff * assertIsNone is a 2.7-ism * This branch should resolve nova bug #718675 (https://bugs.launchpad.net/nova/+bug/718675) * Added myself to the authors file * I fail at sessions * I fail at sessions * Foo * hurr durr * Merging trunk part 1 * stubbed out reset networkin xenapi VM tests to solve domid problem * foo * foo * Adding vhd hidden sanity check * Fixes 718994 * Make rpc thread pool size configurable * merge with trunk * fail * Fixing test by adding stub for get\_image\_meta * this bug bit me hard today. pv can be None, which does not translate to %d and this error gets clobbered by causing errors in the business in charge of capturing output and reporting errors * More pep8 fixes * Pep8 fixes * Set name-label on VDI * Merge * Don't hide RotatingFileHandler behind FileHandler's name * Refactor code that decides which logfile to use, if any * Fixing typo * polling working * Using Nova style nokernel * changed d to s * merge with trunk * More plugin lol * moved reset network to after boot durrrrr.. 
* Don't hid RotatingFileHandler behind FileHandler's name * removed flag --pidfile from nova/services.py * Added teammate Naveed to authors file for his help * plugin lol * Plugin changes * merging trunk back in; updating Authors conflict * Adding documentation * Regrouping methods so they make sense * zone/info works * Refactoring put\_vdis * Adding safe\_find\_sr * Merged lp:nova * Fixes tarball contents by adding missing scripts and files to setup.py / MANIFEST.in * Moving SR path code outside of glance plugin * When re-throwing an exception, use "raise", not "raise e". This way we don't lose the stack trace * Adding more documentation, code-cleanup * Replace placeholders in nova.pot with some actual values * The proposed fix puts a VM which fails to spawn in a (new) 'FAILED' power state. It does not perform a clean-up. This because the user needs to know what has happened to the VM he/she was trying to run. Normally, API users do not have access to log files. In this case, the only way for the user to know what happened to the instance is to query its state (e.g.: doing euca-describe-instances). If we perform a complete clean-up, no information about the instance which failed to spawn will be left * Some trivial cleanups in context.py, mostly just a test of using the updated git-bzr-ng * Use eventlet.green.subprocess instead of standard subprocess * derp * Better host acquisition * zones merge * fixed / renamed migration scripts * Merged trunk * Update .pot file with source file and line numbers after running python setup.py build * Adds Distutils.Extra support, removes Babel support, which is half-baked at best * Pull in .po message catalogs from lp:~nova-core/nova/translations * Fix sporadically failing unittests * Missing nova/tests/db/nova.austin.sqlite file * Translations will be shipped in po/, not locale/ * Adding missing scripts and files to setup.py / MANIFEST.in * Fixes issues when running euca-run-instances and euca-describe-image-attribute against the latest nova/trunk EC2 API * initial * Naïve attempt at threading rpc requests * Beautify it a little bit, thanks to dabo * OS-55: Moved conn\_common code into disk.py * Break out of the "for group in rv" loop in security group unit tests so that we are use we are dealing with the correct group * Tons o loggin * merged trunk * Refactored * Launchpad automatic translations update * trunk merge * better filtering * Adding DISK\_VHD to ImageTypes * Updates to that S3ImageService kernel\_id and ramdisk\_id mappings work with EC2 API * fixed nova-combined debug hack and renamed ChildZone to Zone * plugin * Removing testing statements * Adds missing flag that makes use\_nova\_chains work properly * bad plugin * bad plugin * bad plugin * fixed merge conflict * First cut on XenServer unified-images * removed debugging * fixed template and added migration * better filtering * Use RotatingFileHandler instead of FileHandler * Typo fixes * Resurrect logdir option * hurr * Some refactoring * hurr * Snapshot correctly * Added try clause to handle changed location of exceptions after 1.6.3 in python-migrate LP Bug #717467 * Use eventlet.green.subprocess instead of standard subprocess * Made kernel and ram disk be deleted in xen api upon instance termination * Snapshot correctly * merged recent version. 
no conflict, no big/important change to this branch * wharrgarbl * merge jk0 branch (with trunk merge) which added additional columns for instance\_types (which are openstack api specific) * corrected model for table lookup * More fixes * Derp * fix for bug #716847 - if a volume has not been assigned to a host, then delete from db and skip rpc * added call to reset\_network from openstack api down to vmops * merging with trunk * Got rid of BadParameter, just using standard python ValueError * Merged trunk * support for multiple IPs per network * Fix DescribeRegion answer by using specific 'listen' configuration parameter instead of overloading ec2\_host * Fixed tables creation order and added clearing db after errors * Modified S3ImageService to return the format defined in BaseService to allow EC2 API's DescribeImages to work against Glance * re-add input\_chain because it got deleted at some point * Launchpad automatic translations update * Fixes a typo in the auth checking for DescribeAvailabilityZones * Fixes describe\_security\_groups by forcing it to return a list instead of a generator * return a list instead of a generator from describe\_groups * Hi guys * Added missing doc string and made a few style tweaks * fix typo in auth checking for describe\_availability\_zones * now sorting by project, then by group * Launchpad automatic translations update * Made a few tweaks to format of S3 service implementation * Merged trunk * First attempt to make all image services use similar schemas * fix :returns: and add pep-0257 * Preliminary fix for issue, need more thorough testing before pushing to lp * Launchpad automatic translations update * More typos * More typos * More typos * More typos * More typos * fixed exceptions import from python migrate * Cast to host * This fixes a lazy-load issue in describe-instances, which causes a crash. 
The solution is to specifically load the network table when retrieving an instance * added instance\_type\_purge() to actually remove records from db * updated tests and added more error checking * Merged trunk * more error checking on inputs and better errors returned * Added more columns to instance\_types tables * Added LOG line to describe groups function to find out what's going * joinedload network so describe\_instances continues to work * zone api tests passing * Create a new AMQP connection by default * First, not all * Merged to trunk and fixed merge conflict in Authors * rough cut at zone api tests * Following Rick and Jay's suggestions: - Fixed LOG.debug for translation - improved vm\_utils.VM\_Helper.ensure\_free\_mem * Create a new AMQP connection by default * after hours of tracking his prey, ken slowly crept behind the elusive wilderbeast test import hiding in the libvirt\_conn.py bushes and gutted it with his steely blade * fixed destroy calls * Forgot the metadata includes * added get IPs by instance * added resetnetwork to the XenAPIPlugin.dispatch dict * Forgot the metadata includes * Forgot the metadata includes * Typo fixes and some stupidity about the models * passing instance to reset\_network instead of vm\_ref, also not converting to an opaque ref before making plugin call * Define sql\_idle\_timeout flag to be an integer * forgot to add network\_get\_all\_by\_instance to db.api * template adjusted to NOVA\_TOOLS, zone db & os api layers added * Spawn from disk * Some more cleanup * sql\_idle\_timeout should be an integer * merged model change: flavorid needs to unique in model * testing refactor * flavorid needs to unique in model * Add forwarding rules for floating IPs to the OUTPUT chain on the network node in addition to the PREROUTING chain * typo * refactored api call to use instance\_types * Use a NullPool for sqlite connections * Get a fresh connection in rpc.cast rather than using a recycled one * Make rpc.cast create a fresh amqp connection. Each API request has its own thread, and they don't multiplex well * Only use NullPool when using sqlite * Also add floating ip forwarding to OUTPUT chain * trunk merge * removed ZoneCommands from nova-manage * Try using NullPool instead of SingletonPool * Try setting isolation\_level=immediate * This branch fixes bug #708347: RunInstances: Invalid instance type gives improper error message * Wrap line to under 79 characters * Launchpad automatic translations update * adding myself to Authors file * 1. Merged to rev654(?) 2. Fixed bug continuous request. if user continuouslly send live-migration request to same host, concurrent request to iptables occurs, and iptables complains. This version add retry for this issue * forgot to register new instance\_types table * Plugin tidying and more migration implementation * fixed overlooked mandatory changes in Xen * Renamed migration plugin * A lot of stuff * - population of public and private addresses containers in openstack api - replacement of sqlalchemy model in instance stub with dict * Fixes the ordering of init\_host commands so that iptables chains are created before they are used * Pass timestamps to the db layer in fixed\_ip\_disassociate\_all\_by\_timeout rather than converting to strings ahead of time, otherwise comparison between timestamps would often fail * Added support for 'SAN' style volumes. 
A SAN's big difference is that the iSCSI target won't normally run on the same host as the volume service * added support to pull list of ALL instance types even those that are marked deleted * Indent args to ssh\_connect correctly * Fix PEP8 violations * Added myself to Authors * 1) Moved tests for limiter to test\_common.py (from \_\_init\_\_.py) and expanded test suite to include bad inputs and tests for custom limits (#2) * Added my mail alias (Part of an experiment in using github, which got messy fast...) * Fixed pep8 error in vm\_utils.py * Add my name to AUTHORS, remove parentheses from the substitution made in the previous commit * Don't convert datetime objects to a string using .isoformat(). Leave it to sqlalchmeny (or pysqlite or whatever it is that does the magic) to work it out * Added test case for 'not enough memory' Successfully ran unit tests Fixed pep8 errors * Give a better error message if the instance type specified is invalid * Launchpad automatic translations update * added testing for instance\_types.py and refactored nova-manage to use instance\_types.py instead of going directly to db * added create and delete methods to instance\_types in preparation to call them from nova-manage * added testing for nova-manage instance\_type * additional error checking for nova-manage instance\_type * Typos and primary keys * Automates the setup for FlatDHCP regardless of whether the interface has an ip address * add docstring and revert set\_ip changes as they are unnecessary * Commas help * Changes and bug fixes * avoiding HOST\_UNAVAILABLE exception: if there is not enough free memory does not spawn the VM at all. instance state is set to "SHUTDOWN" * merge lp:nova at revision #654 * merge with lp:nova * Fixed pep8 errors Unit tests passed * merge source and remove ifconfig * fixes #713766 and probably #710959, please test the patch before committing it * use route -n instead of route to avoid chopped names * Updates to the multinode install doc based on Wayne's findings. Merged with trunk so should easily merge in * Checks whether the instance id is a list or not before assignment. This is to fix a bug relating to nova/boto. The AWK-SDK libraries pass in a string, not a list. The euca tools pass in a list * Launchpad automatic translations update * Catching all socket errors in \_get\_my\_ip, since any socket error is likely enough to cause a failure in detection * Catching all socket errors in \_get\_my\_ip, since any socket error is likely enough to cause a failure in detection * blargh * Some stuff * added INSTANCE\_TYPES to test for compatibility with current tests * Checking whether the instance id is a list or not before assignment. This is to fix a bug relating to nova/boto. The AWK-SDK libraries pass in a string, not a list. 
the euca tools pass in a list * Added data\_transfer xapi plugin * Another quick fix to multinode install doc * Made updates to multinode install doc * fixed instance\_types methods to use database backend * require user context for most flavor/instance\_type read calls * added network\_get\_all\_by\_instance(), call to reset\_network in vmops * added new parameter --dhcp\_domain to set the used domain by dnsmasq in /etc/nova/nova.conf * minor * Fix for bug #714709 * A few changes * fixed format according to PEP8 * replaced all calls to ifconfig with calls to ip * added myself to the Authors file * applied http://launchpadlibrarian.net/63698868/713434.patch * Launchpad automatic translations update * aliased flavor to instance\_types in nova-manage. will probably need to make flavor a full fledged class as users will want to list flavors by flavor name * simplified instance\_types db calls to return entire row - we may need these extra columns for some features and there seems to be little downside in including them. still need to fix testing calls * refactor to remove ugly code in flavors * updated api.create to use instance\_type table * added preliminary testing for bin/nova-manage while i am somewhat conflicted about the path these tests have taken, i think it is better than no tests at all * rewrote nova-manage instance\_type to use correct db.api returned objects and have more robust error handling * instance\_types should return in predicatable order (by name currently) * flavorid and name need to be unique in the database for the ec2 and openstack apis, repectively * corrected db.instance\_types to return expect dict instead of lists. updated openstack flavors to expect dicts instead of lists. added deleted column to returned dict * converted openstack flavors over to use instance\_types table. a few pep changes * added FIXME(kpepple) comments for all constant usage of INSTANCE\_TYPES. updated api/ec2/admin.py to use the new instance\_types db table * Launchpad automatic translations update * allow for bridge to be the public interface * Removed (newly) unused exception variables * Didn't mean to actually make changes to the glance plugin * Added a bunch of stubbed out functionality * Moved ssh\_execute to utils; moved comments to docstring * Fixes for Vish & Devin's feedback * Fixes https://bugs.launchpad.net/nova/+bug/681417 * Don't swallow exception stack traces by doing 'raise e'; just use 'raise' * Implementation of 'SAN' volumes A SAN volume is 'special' because the volume service probably won't run on the iSCSI target. Initial support is for Solaris with COMSTAR (Solaris 11) * merging * Fixed PEP8 test problems, complaining about too many blank lines at line 51 * Adds logging.basicConfig() to run\_tests.py so that attempting to log debug messages from tests will work * Launchpad automatic translations update * flagged all INSTANCE\_TYPES usage with FIXME comment. Added basic usage to nova-manage (needs formatting). created api methods * added seed data to migration * Don't need a route for guests. Turns out the issue with routing from the guests was due to duplicate macs * Changes the behavior of run\_test.sh so that pep8 is only run in the default case (when running all tests). 
It will no longer run when individual test cases are being given as in: * open cactus * some updates to HACKING to describe the docstrings * Casting to the scheduler * moves driver.init\_host into the base class so it happens before floating forwards and sets up proper iptables chains

2011.1
------

* Set FINAL = True in version.py * Open Cactus development * Set FINAL = True in version.py * pass the set\_ip from ensure\_vlan\_bridge * don't fail on ip add exists and recreate default route on ip move if needed * initial support for dynamic instance\_types: db migration and model, stub tests and stub methods * better setup for flatdhcp * added to inject networking data into the xenstore * forgot context param for network\_get\_all * Fixes bug #709057 * Add and document the provider\_fw method in virt/FakeConnection * Fix for LP Bug #709510 * merge trunk * fix pep8 error :/ * Changed default handler for uncaught exceptions. It uses logging instead of printing to stderr * Launchpad automatic translations update * rpartition sticks the rhs in [2] * Fix for LP Bug #709510 * change ensure\_bridge so it doesn't overwrite existing ips * Fix for LP Bug #709510 * Enabled modification of projects using the EC2 admin API * Reorder instance rules for provider rules immediately after base, before secgroups * Merged trunk * Match the initial db version to the actual Austin release db schema * 1. Discard nova-manage host list. Reason: nova-manage service list can be a replacement. Changes: nova-manage * Only run pep8 after tests if running all the tests * add logging.basicConfig() to tests * fix austin->bexar db migration * woops * trivial cleanup for context.py * Made adminclient get\_user return None instead of throwing EC2Exception if requested user not available * pep8 * Added modify project to ec2 admin api * incorporate feedback from devin - use sql consistently in instance\_destroy, also set deleted\_at * Fixed whitespace * Made adminclient get\_user return None instead of throwing EC2Exception if requested user not available * OS-55: Fix typo for libvirt\_conn operation * merge trunk * remove extraneous line * Fixed pep8 errors * Changed default handler for uncaught exceptions. Logging at level critical instead of printing to stderr * Disassociate all floating ips on terminate instance * Fixes simple scheduler to be able to run\_instance by admins + availability zones * Makes having sphinx to build docs a conditional thing - if you have it, you can get docs. If you don't, you can't * Fixed a pep8 spacing issue * fixes for bug #709057 * Working on api / manager / db support for zones * Launchpad automatic translations update * Adds security group output to describe\_instances * Use firewall\_driver flag as expected with NWFilterFirewall. This way, either you use NWFilterFirewall directly, or you use IptablesFirewall, which creates its own instance of NWFilterFirewall for the setup\_basic\_filtering command. This removes the requirement that LibvirtConnection would always need to know about NWFirewallFilter, and cleans up the area where the flag is used for loading the firewall class * simplify get and remove extra reference to import logging * Added a test that checks for localized strings in the source code that contain position-based string formatting placeholders. If found, an exception message is generated that summarizes the problem, as well as the location of the problematic code.
This will prevent future trunk commits from adding localized strings that cannot be properly translated * Made changes based on code review * makes sure that : is in the availability zone before it attempts to use it to send instances to a particular host * Makes sure all instance and volume commands that raise not found are changed to show the ec2\_id instead of the internal id * remove all floating addresses on terminate instance * Merged in trunk changes * Fixed formatting issues in current codebase * Added the test for localized string formatting * Fixes NotFound messages in api to show the ec2\_id * Changed cpu limit to a static value of 100000 (100%) instead of using the vcpu value of 1. There is no weight/limit variable now so I see no other solution than the static max limit * Make nova.virt.images fetch images from a Glance URL when Glance is used as the image service (rather than unconditionally fetch them from an S3/objectstore URL) * Fixed spacing... AGAIN * Make unit tests clean up their mess in /tmp after themselves * Make xml namespace match the API version requested * Missing import in xen plugin * Shortened comment for 80char limt * Added missing import * Naive, low-regression-risk fix enabling Glance to work with libvirt/hyperv * Add unit test for xmlns version matching request version * Properly pulling the name attribute from security\_group * adding testcode * Fix Bug #703037. ra\_server is None * Fix regression in s3 image service. This should be a feature freeze exception * I have a feeling if we try to migrate from imageId to id we'll be tracking it down a while * more instanceId => id fixes * Fix regression in imageId => id field rename in s3 image service * Apply lp:707675 to this branch to be able to test * merge trunk * A couple of bugfixes * Fixes a stupid mistake I made when I moved this method from a module into a class * Add dan.prince to Authors * Make xml namespace match the API version requested * Fix issue in s3.py causing where '\_fix\_image\_id' is not defined * added mapping parameter to write\_network\_config\_to\_xenstore * OS-55: Added a test case for XenAPI file-based network injection OS-55: Stubbed out utils.execute for all XenAPI VM tests, including command simulation where necessary * Simple little changes related to openstack api to work better with glance * Merged trunk * Cleaned up \_start() and \_shutdown() * Added missing int to string conversion * Simple little changes related to openstack api to work better with glance * use 'ip addr change' * Fix merge miss * Changed method signature of create\_network * merged r621 * Merged with http://bazaar.launchpad.net/~vishvananda/nova/lp703037 * Merged with vish branch * Prefixed ending multi-line docstring with a newline * Fixing documentation strings. Second attempt at pep8 * Removal of image tempdir in test tearDown. Also, reformatted a couple method comments to match the file's style * Add DescribeInstanceTypes to admin api. This lets the dashboard know what sizes can be launched (using the -t flag in euca-run-instances, for example) and what resources they provide * Rename Mock, since it wasn't a Mock * Add DescribeInstanceTypes to admin api (dashboard uses it) * Fix for LP Bug #699654 * Change how libvirt firewall drivers work to have meaningful flags * Fixed pep8 errors * This branch updates docs to reflect the db sync addition. It additionally adds some useful errors to nova-manage to help people that are using old guides. It wraps sqlalchemy errors in generic DBError. 
Finally, it updates nova.sh to use current settings * Added myself to the authors list * fix pep8 issue (and my commit hook that didn't catch it) * Add a host argument to virt drivers's init\_host method. It will be set to the name of host it's running on * merged trunk * Wraps the NotFound exception at the api layer to print the proper instance id. Does the same for volume. Note that euca-describe-volumes doesn't pass in volume ids properly, so you will get no error messages on euca-describe-volumes with improper ids. We may also need to wrap a few other calls as well * Fixes issue with SNATTING chain not getting created or added to POSTROUTING when nova-network starts * Fix for bug #702237 * Moving init\_host before metadata\_forward, as metadata\_forward modifies prerouting rules * another trunk merge * Limit all lines to a maximum of 79 characters * Perform same filtering for OUTPUT as FORWARD in iptables * Fixed up a little image\_id return * Trunk merged * This patch: * Trunk merged * In instance chains and rules for ipv4 and ipv6, ACCEPT target was missing * moved imageId change to s3 client * Migration for provider firewall rules * Updates for provider\_fw\_rules in admin api * Adds driver.init\_host() call to flatdhcp driver * Fixed pep8 errors * Fixed pep8 errors * No longer hard coding to "/tmp/nova/images/". Using tempdir so tests run by different people on the same development machine pass * Perform same filtering for OUTPUT as FORWARD in iptables. This removes a way around the filtering * Fix pep-8 problem from prereq branch * Add a host argument to virt driver's init\_host method. It will be set to the name of host it's running on * updated authors since build is failing * Adds conditional around sphinx inclusion * merge with trunk * Fixes project and role checking when a user's naming attribute is not uid * I am new to nova, and wanted to fix a fairly trivial bug in order to understand the process * Fix for LP Bug #707554 * Added iptables rule to IptablesFirewallDriver like in Hisaharu Ishii patch with some workaround * Set the default number of IP's to to reserve for VPN to 0 * Merged with r606 * Properly fixed spacing issue for pep8 * Fixed spacing issue for pep8 * Fixed merge conflict * Added myself to ./Authors file * Switches from project\_get\_network to network\_get\_by\_instance, which actually works with all networking modes. Also removes a couple duplicate lines from a bad merge * Set the default number of IP's to to reserver for VPN to 0 * Localized strings that employ formatting should not use positional arguments, as they prevent the translator from re-ordering the translated text; instead, they should use mappings (i.e., dicts). 
This change replaces all localized formatted strings that use more than one formatting placeholder with a mapping version * add ip and network to nwfilter test * merged ntt branch * use network\_get\_by\_instance * Added myself (John Dewey) to Authors * corrected nesting of the data dictionary * Updated a couple data structures to pass pep8 * Added static cpu limit of 100000 (100%) to hyperv.py instead of using the vcpu value of 1 * PEP8 fixes * Changes \_\_dn\_to\_uid to return the uid attribute from the user's object * OS-55: PEP8 fixes * merged branch to name net\_manager.create\_networks args * the net\_managers expect different args to create\_networks, so nova-manage's call to net\_manager.create\_networks was changed to use named args to prevent argument mismatching * OS-55: Post-merge fixes * Fix describe\_regions by changing renamed flags. Also added a test to catch future errors * changed nova-manage to use named arguments to net\_manager.create\_networks * Merged trunk * Removed tabs form source. Merged trunk changes * allow docs to build in virtualenv prevent setup.py from failing with sphinx in virtualenv * fixes doc build and setup.py fail in virtualenv * fix reversed assignment * fixes and refactoring of smoketests * remove extra print * add test and fix describe regions * merged trunk * This patch skips VM shutdown if already in the halted state * Use Glance to relate machine image with kernel and ramdisk * Skip shutdown if already halted * Refactoring \_destroy into steps * i18n! * merged trunk fixed whitespace in rst * wrap sqlalchemy exceptions in a generic error * Wrap instance at api layer to print the proper error. Use same logic for volumes * This patch adds two flags: * Using new style logging * Adding ability to remap VBD device * Resolved trunk merge conflicts * Adds gettext to pluginlib\_nova.py. Fixes #706029 * Adding getttext to pluginlib\_nova * Add provider\_fw\_rules awareness to iptables firewall driver * No longer chmod 0777 instance directories, since nova works just fine without them * Updated docs for db sync requirements; merged with Vish's similar doc updates * Change default log formats so that:  \* they include a timestamp (necessary to correlate logs)  \* no longer display version on every line (shorter lines)  \* use [-] instead of [N/A] (shorter lines, less scary-looking)  \* show level before logger name (better human-readability) * OS55: pylint fixes * OS-55: Added unit test for network injection via xenstore * fixed typo * OS-55: Fix current unit tests * Fixed for pep8 * Merged with rev597 * No longer chmod 0777 instance directories * Reverted log type from error to audit * undid moving argument * Fix for LP Bug #699654 * moved argument for label * fixed the migration * really added migration for networks label * added default label to nova-manage and create\_networks * syntax * syntax error * added plugin call for resetnetworking * Fix metadata using versions other than /later. 
Patch via ~ttx * should be writing some kindof network info to the xenstore now, hopefully * Use ttx's patch to be explict about paths, as urlmap doesn't work as I expected * Doc changes for db sync * Fixes issue with instance creation throwing errors when non-default groups are used * Saving a database call by getting the security groups from the instance object * Fixes issue with describe\_instances requiring an admin context * OS-55: pylint fixes * Fixing another instance of getting a list of ids instead of a list of objects * Adds security group output to describe\_instances * Finds and fixes remaining strings for i18n. Fixes bug #705186 * Pass a PluginManager to nose.config.Config(). This lets us use plugins like coverage, xcoverage, etc * i18n's strings that were missed or have been added since initial i18n strings branch * OS-55: Only modify Linux image with no or injection-incapable guest agent OS-55: Support network configuration via xenstore for Windows images * A couple of copypasta errors * Keep exception tracing as it was * Pass a PluginManager to nose.config.Config(). This lets us use plugins like coverage, xcoverage, etc * Also print version at nova-api startup, for consistency * Add timestamp to default log format, invert name and level for better readability, log version once at startup * When radvd is already running, not to hup, but to restart * fix ipv6 conditional * more smoketest fixes * Passing in an elevated context instead of making the call non-elevated * Added changes to make errors and recovery for volumes more graceful: * Fetches the security group from ID, allowing the object to be used properly, later * Changing service\_get\_all\_by\_host to not require admin context as it is used for describing instances, which any user in a project can do * Exclude vcsversion.py from pep8 check. It's not compliant, but out of our control * Exclude vcsversion.py from pep8 check. 
It's not compliant, but out of our control * Include paste config in tarball * Add etc/ directory to tarball * Fixes for bugs: * Return non-zero if either unit tests or pep8 fails * Eagerly load fixed\_ip.network in instance\_get\_by\_id * Add Rob Kost to Authors * Return non-zero if either unit tests or pep8 fails * Merged trunk * merge trunk * Add paste and paste.deploy installation to nova.sh, needed for api server * Updated trunk changes to work with localization * Implement provider-level firewall rules in nwfilter * Whitespace (pep8) cleanups * Exception string lacking 'G' for gigabytes unit * Fixes \*\*params unpacking to ensure all kwargs are strings for compatibility with python 2.6.1 * make sure params have no unicode keys * Removed unneeded line * Merged trunk * Refactor run\_tests.sh to allow us to run an extra command after the tests * update the docs to reflect db sync as well * add helpful error messages to nova-manage and update nova.sh * Fixed unit tests * Merged trunk * fixed pep8 error * Eagerly load instance's fixed\_ip.network attribute * merged trunk changes * minor code cleanup * minor code cleanup * remove blank from Authors * .mailmap rewrite * .mailmap updated * Refactor run\_tests.sh to allow us to run an extra command after the tests * Add an apply\_instance\_filter method to NWFilter driver * PEP-8 fixes * Revert Firewalldriver * Replace an old use of ec2\_id with id in describe\_addresses * various fixes to smoketests, including allowing admin tests to run as a user, better timing, and allowing volume tests to run on non-udev linux * merged trunk * replace old ec2\_id with proper id in describe\_addresses * merge vish's changes (which merged trunk and fixed a pep8 problem) * merged trunkand fixed conflicts and pep error * get\_my\_linklocal raises exception * Completed first pass at converting all localized strings with multiple format substitutions * Allows moving from the Austin-style db to the Bexar-style * move db sync into nosetests package-level fixtures so that the existing nosetests attempt in hudson will pass * previous commit breaks volume.driver. fix it. * per vish's feedback, allow admin to specify volume id in any of the acceptable manners (vol-, volume-, and int) * Merged trunk * Fixed unit tests * Fix merge conflict * add two more columns, set string lengths) * Enable the use\_ipv6 flag in unit tests by default * Fixed unit tests * merge from upstream and fix small issues * merged to trunk rev572 * fixed based on reviewer's comment * Basic stubbing throughout the stack * Enable the use\_ipv6 flag in unit tests by default * Add an apply\_instance\_filter method to NWFilter driver * update status to 'error\_deleting' on volumes where deletion fails * Merged trunk * This disables ipv6 by default. Most use cases will not need it on and it makes dependencies more complex * The live\_migration branch ( https://code.launchpad.net/~nttdata/nova/live-migration/+merge/44940 ) was not ready to be merged * merge from upstream to fix conflict * Trunk merge * s/cleanup/volume. volume commands will need their own ns in the long run * disable ipv6 by default * Merged trunk * Plug VBD to existing instance and minor cleanup * fixes related to #701749. 
Also, added nova-manage commands to recover from certain states: * Implement support for streaming images from Glance when using the XenAPI virtualization backend, as per the bexar-xenapi-support-for-glance blueprint * Works around the app-armor problem of requiring disks with backing files to be named appropriately by changing the name of our extra disks * fix test to respect xml changes * merged trunk * Add refresh\_security\_group\_\* methods to nova/virt/fake.py, as FakeConnection is the reference for documentation and method signatures that should be implemented by virt connection drivers * added paste pastedeploy to nova.sh * authors needed for test * revert live\_migration branch * This removes the need for the custom udev rule for iscsi devices. It instead attaches the device based on /dev/disk/by-path/ which should make the setup of nova-volume a little easier * Merged trunk * Risk of Regression: This patch don’t modify existing functionlities, but I have added some. 1. nova.db.service.sqlalchemy.model.Serivce (adding a column to database) 2. nova.service ( nova-compute needes to insert information defined by 1 above) * Docstrings aren't guaranteed to exist, so split() can't automatically be called on a method without first checking for the method docstring's existence. Fixes Bug #704447 * Removes circular import issues from bin/stack and replaces utils.loads with json.loads. Fixes Bug#704424 * ComputeAPI -> compute.API in bin/nova-direct-api. Fixes LP#704422 * Fixed apply\_instance\_filter is not implemented in NWFilterFirewall * pep8 * I might have gone overboard with documenting \_members * Add rules to database, cast refresh message and trickle down to firewall driver * Fixed error message in get\_my\_linklocal * openstack api fixes for glance * Stubbed-out code for working with provider-firewalls * Merged trunk * Merged with trunk revno 572 * Better shutdown handling * Change where paste.deploy factories live and how they are called. They are now in the nova.wsgi.Application/Middleware classes, and call the \_\_init\_\_ method of their class with kwargs of the local configuration of the paste file * Further decouple api routing decisions and move into paste.deploy configuration. This makes paste back the nova-api binary * Clean up openstack api test fake * Merged trunk * Add Start/Shutdown support to XenAPI * The Openstack API requires image metadata to be returned immediately after an image-create call * merge trunk * Fixing whitespace * Returning image\_metadata from snapshot() * Merging trunk * Merged trunk * merged trunk rev569 * merged to rev 561 and fixed based on reviewer's comment * Adds a developer interface with direct access to the internal inter-service APIs and a command-line tool based on reflection to interact with them * merge from upstream * pep8 fixes... largely to things from trunk? * merge from upstream * pep8 * remove print statement * This branch fixes two outstanding bugs in compute. It also fixes a bad method signature in network and removes an unused method in cloud * Re-removes TrialTestCase. It was accidentally added in by some merges and causing issues with running tests individually * removed rpc in cloud * merged trial fix again * fix bad function signature in create\_networks * undo accidental removal of fake\_flags * Merged trunk * merged lp:~vishvananda/nova/lp703012 * remove TrialTestCase again and fix merge issues * import re, remove extra call in cloud.py. 
Move get\_console\_output to compute\_api * Create and use a generic handler for RPC calls to compute * Create and use a generic handler for RPC calls to compute * Create and use a generic handler for RPC calls * Merged trunk * OS-55: Inject network settings in linux images * Merged with trunk revno 565 * use .local and .rescue for disk images so they don't make app-armor puke * Implements the blueprint for enabling the setting of the root/admin password on an instance * OpenStack Compute (Nova) IPv4/IPv6 dual stack support http://wiki.openstack.org/BexarIpv6supportReadme * Merged to rev.563 * This change introduces support for Sheepdog (distributed block storage system) which is proposed in https://blueprints.launchpad.net/nova/+spec/sheepdog-support * Sort Authors * Update Authors * merge from upstream: * pep8 fixes * update migration script to add new tables since merge * sort Authors * Merged with r562 * This modifies libvirt to use CoW images instead of raw images. This is much more efficient and allows us to use the snapshotting capabilities available for qcow2 images. It also changes local storage to be a separate drive instead of a separate partition * pep8. Someday I'll remember 2 blank lines between module methods * remove ">>>MERGE" iin nova/db/sqlalchemy/api.py * checking based on pep8 * merged trunk * Modified per sorens review * Fix for Pep-8 * Merged with r561 * Moved commands which needs sudo to nova.sh * Added netaddr for pip-requires * Marking snapshots as private for now * Merging Trunk * Fixing Image ID workaround and typo * Fixed based on the comments from code review. Merged to trunk rev 561 * Add a new method to firewall drivers to tell them to stop filtering a particular instance. Call it when an instance has been destroyed * merged to trunk rev 561 * Merged trunk * merge trunk rev560 * Fixes related to how EC2 ids are displayed and dealt with * Get reviewed and fixed based on comments. Merged latest version * Make libvirt and XenAPI play nice together * Spelling is hard. Typing even moreso * Revert changes to version.py * Minor code cleanups * Minor code cleanups * Minor code cleanups * Make driver calls compatible * Merged trunk * Stubbed out XenServer rescue/unrescue * Added unit tests for the Diffie-Hellman class. Merged recent trunk changes * Bring NWFilter driver up to speed on unfilter\_instance * Replaced home-grown Diffie-Hellman implementation with the M2Crypto version supplied by Soren * Instead of a set() to keep track of instances and security groups, use a dict(). \_\_eq\_\_ for stuff coming out of sqlalchemy does not do what I expected (probably due to our use of sessions) * Fixes broken call to \_\_generate\_rc in auth manager * Fixes bug #701055. Moves code for instance termination inline so that the manager doesn't prematurely mark an instance as deleted. Prematurely doing so causes find calls to fail, prevents instance data from being deleted, and also causes some other issues * Revert r510 and r512 because Josh had already done the same work * merged trunk * Fixed Authors * Merged with 557 * Fixed missing \_(). Fixed to follow logging to LOG changes. Fixed merge miss (get\_fixed\_ip was moved away). Update some missing comments * merge from upstream and fix leaks in console tests * make sure get\_all returns * Fixes a typo in the name of a variable * Fixes #701055. 
Move instance termination code inline to prevent manager from prematurely marking it as destroyed * fix invalid variable reference in cloud api * fix indentation * add support for database migration * fix changed call to generate\_rc * merged with r555 * fixed method signature of modify\_rules fixed unit\_test for ipv6 * standardize volume ids * standardize volume ids * standardize on hex for ids, allow configurable instance names * correct volume ids for ec2 * correct formatting for volume ids * Fix test failures on Python 2.7 by eagerly loading the fixed\_ip attribute on instances. No clue why it doesn't affect python 2.6, though * Adding TODO to clarify status * Merging trunk * Do joinedload\_all('fixed\_ip.floating\_ips') instead of joinedload('fixed\_ip') * Initialize logging in nova-manage so we don't see errors about missing handlers * \_wait\_with\_callback was changed out from under suspend/resume. fixed * Make rescue/unrescue available to API * Stop error messages for logs when running nova-manage * Fixing stub so tests pass * Merging trunk * Merging trunk, small fixes * This branch adds a backend for using RBD (RADOS Block Device) volumes in nova via libvirt/qemu. This is described in the blueprint here: https://blueprints.launchpad.net/nova/+spec/ceph-block-driver * Fix url matching for years 2010-forward * Update config for launching logger with cleaner factory * Update paste config for ec2 request logging * merged changes from trunk * cleaned up prior merge mess * Merged trunk * My previous modifications to novarc had CLOUDSERVER\_AUTH\_URL pointing to the ec2 api port. Now it's correctly pointing to os api port * Check for whole pool name in check\_for\_setup\_error * change novarc template from cc\_port to osapi\_port. Removed osapi\_port from bin scripts * Start to add rescue/unrescue support * fixed pause and resume * Fixed another issue in \_stream\_disk, as it did never execute \_write\_partition. Fixed fake method accordingly. Fixed pep8 errors * pep8 fixes * Fixing the stub for \_stream\_disk as well * Fix for \_stream\_disk * Merged with r551 * Support IPv6 firewall with IptablesFirewallDriver * Fixed syntax errors * Check whether 'device\_path' has ':' before splitting it * PEP8 fixes, and switch to using the new LOG in vm\_utils, matching what's just come in from trunk * Merged with trunk * Merged with Orlando's recent changes * Added support of availability zones for compute. models.Service got additional field availability\_zone and was created ZoneScheduler that make decisions based on this field. Also replaced fake 'nova' zone in EC2 cloud api * Eagerly load fixed\_ip property of instances * Had to abandon the other branch (~annegentle/nova/newscript) because the diffs weren't working right for me. This is a fresh branch that should be merged correctly with trunk. Thanks for your patience. :) * Added unit tests for the xenapi-glance integration. This adds a glance simulator that can stub in place of glance.client.Client, and enhances the xapi simulator to add the additional calls that the Glance-specific path requires * Merged with 549 * Change command to get link local address Remove superfluous code * This branch adds web based serial console access. Here is an overview of how it works (for libvirt): * Merged with r548 * Fixed bug * Add DescribeInstanceV6 for backward compatibility * Fixed test environments. Fixed bugs in \_fetch\_image\_objecstore and \_lookup\_image\_objcestore (objectstore was broken!) 
Added tests for glance * Fixed for pep8 Remove temporary debugging * changed exception class * Changing DN creation to do searches for entries * Fixes bug #701575: run\_tests.sh fails with a meaningless error if virtualenv is not installed. Proposed fix tries to use easy\_install to install virtualenv if not present * merge trunk, fix conflict * more useful prefix and fix typo in string * use by-path instead of custom udev script * Quick bugfix. Also make the error message more specific and unique in the equivalent code in the revoke method * remove extra whitspaces * Raise meaningful exception when there aren't enough params for a sec group rule * bah - pep8 errors * resolve pylint warnings * Removing script file * Read Full Spec for implementation details and notes on how to boot an instance using OS API. http://etherpad.openstack.org/B2RK0q1CYj * Added my name to Authors list * Changes per Edays comments * Fixed a number of issues with the iptables firewall backend: \* Port specifications for firewalls come back from the data store as integers, but were compared as strings. \* --icmp-type was misspelled as --icmp\_type (underscore vs dash) \* There weren't any unit tests for these issues * merged trunk changes * Removed unneeded SimpleDH code from agent plugin. Improved handling of plugin call failures * Now tries to install virtualenv via easy\_install if not present * Merging trunk * fixed issue in pluginlib\_nova.py * Trunk merge and conflcts resolved * Implementation of xs-console blueprint (adds support for console proxies like xvp) * Fixed a number of issues with the iptables firewall backend: \* Port specifications for firewalls come back from the data store as integers, but were compared as strings. \* --icmp-type was misspelled as --icmp\_type (underscore vs dash) \* There weren't any unit tests for these issues * Add support for EBS volumes to the live migration feature. Currently, only AoE is supported * Changed shared\_ip\_group detail routing * Changed shared\_ip\_group detail routing * A few more changes to the smoeketests. Allows smoketests to find the nova package from the checkout. Adds smoketests for security groups. Also fixes a couple of typos * Fixes the metadata forwarding to work by default * Adds support to nova-manage to modify projects * Add glance to pip-requires, as we're now using the Glance client code from Nova * Now removing kernel/ramdisk VDI after copy Code tested with PV and HVM guests Fixed pep8 errors * merged trunk changes * consolidate boto\_extensions.py and euca-get-ajax-console, fix bugs from previous trunk merge * Fixed issues raised by reviews * xenapi\_conn was not terminating utils/LoopingCall when an exception was occurring. This was causing the eventlet Event to have send\_exception() called more than once (a no-no) * merge trunk * whups, fix accidental change to nova-combined * remove uneeded superclass * Bugfix * Adds the requisite infrastructure for automating translation templates import/export to Launchpad * Added babel/gettext build support * Can now correctly launch images with external kernels through glance * re-merged in trunk to correct conflict * Fix describe\_availablity\_zones versobse * Typo fix * merged changes from trunk * Adding modify option for projects * Fixes describe\_instances to filter by a list of instance\_ids * Late import module for register\_models() so it doesn't create the db before flags are loaded * Checks for existence of volume group using vgs instead of checking to see if /dev/nova-volumes exists. 
The dev is created by udev and isn't always there even if the volume group does exist * Add a new firewall backend for libvirt, based on iptables * Create LibvirtConnection directly, rather than going through libvirt\_conn.get\_connection. This should remove the dependency on libvirt for tests * Fixed xenapi\_conn wait\_for\_task to properly terminate LoopingCall on exception * Fixed xenapi\_conn wait\_for\_task to properly terminate LoopingCall on exception * Fixed xenapi\_conn wait\_for\_task to properly terminate LoopingCall on exception * optimize to call get if instance\_id is specified since most of the time people will just be requesting one id * fix describe instances + test * Moved get\_my\_ip into flags because that is the only thing it is being used for and use it to set a new flag called my\_ip * fixes Document make configuration by updating nova version mechanism to conform to rev530 update * alphbetized Authors * added myself to authors and fixed typo to follow standard * typo correction * fixed small glitch in \_fetch\_image\_glance virtual\_size = imeta['size'] * fixed doc make process for new nova version (rev530) machanism * late import module for register\_models() so it doesn't create the db before flags are loaded * use safer vgs call * Return proper region info in describe\_regions * change API classname to match the way other API's are done * small cleanups * First cut at implementing partition-adding in combination with the Glance streaming. Untested * some small cleanups * merged from upstream and made applicable changes * Adds a mechanism to programmatically determine the version of Nova. The designated version is defined in nova/version.py. When running python setup.py from a bzr checkout, information about the bzr branch is put into nova/vcsversion.py which is conditionally imported in nova/version.py * Return region info in the proper format * Now that we aren't using twisted we can vgs to check for the existence of the volume group * s/canonical\_version/canonical\_version\_string/g * Fix indentation * s/string\_with\_vcs/version\_string\_with\_vcs/g * Some fixes to \_lookup\_image\_glance: fix the return value from lookup\_image, attach the disk read-only before running pygrub, and add some debug logging * Reverted formatting change no longer necessary * removed a merge conflict line I missed before * merged trunk changes * set the hostname factory in the service init * incorporated changes suggested by eday * Add copyright and license info to version.py * Fixes issue in trunk with downloading s3 images for instance creation * Fix pep8 errors * Many fixes to the Glance integration * Wrap logs so we can: \* use a "context" kwarg to track requests all the way through the system \* use a custom formatter so we get the data we want (configurable with flags) \* allow additional formatting for debug statements for easer debugging \* add an AUDIT level, useful for noticing changes to system components \* use named logs instead of the general logger where it makes sesnse * pep8 fixes * Bug #699910: Nova RPC layer silently swallows exceptions * Bug #699912: When failing to connect to a data store, Nova doesn't log which data store it tried to connect to * Bug #699910: Nova RPC layer silently swallows exceptions * pv/hvm detection with pygrub updated for glance * Bug #699912: When failing to connect to a data store, Nova doesn't log which data store it tried to connect to * Resolved merge differences * Additional cleanup prior to pushing * Merged with trunk * Fixing 
unescaped quote in nova-CC-install.sh script plus formatting fixes to multinode install * getting ready to push for merge prop * Fixing headers line by wrapping the headers in single quotes * Less code generation * grabbed the get\_info fix from my other branch * merged changes from trunk * Remove redundant import of nova.context. Use db instance attribute rather than module directly * Merging trunk * Removing some FIXMEs * Reserving image before uploading * merge * Half-finished implementation of the streaming from Glance to a VDI through nova-compute * Fix Nova not to immediately blow up when talking to Glance: we were using the wrong URL to get the image metadata, and ended up getting the whole image instead (and trying to parse it as json) * another merge with trunk to remedy instance\_id issues * merge * Include date in API action query * Review feedback * This branch implements lock functionality. The lock is stored in the compute worker database. Decorators have been added to the openstack API actions which alter instances in any way * Review feedback * Review feedback * Review feedback * typo * refers to instance\_id instead of instance\_ref[instance\_id] * passing the correct parameters to decorated function * accidentally left unlocked in there, it should have been locked * various cleanup and fixes * merged trunk * pep8 * altered argument handling * Got the basic 'set admin password' stuff working * Include date in action query * Let documentation get version from nova/version.py as well * Add default version file for developers * merge pep8 fixes from newlog2 * Track version info, and make available for logging * pep8 * Merged trunk * merge pep8 and tests from wsgirouter branch * Remove test for removed class * Pep8 * pep8 fix * merged trunk changes * commit before merging trunk * Fixes format\_instances error by passing reservation\_id as a kwarg instead of an arg. Also removes extraneous yields in test\_cloud that were causing tests to pass with broken code * Remove module-level factory methods in favor of having a factory class-method on wsgi components themselves. Local options from config are passed to the \_\_init\_\_ method of the component as kwargs * fix the broken tests that allowed the breakage in format to happen * Fix format\_run\_instances to pass in reservation id as a kwarg * Add factories into the wsgi classes * Add blank \_\_init\_\_ file for fixing importability. The stale .pyc masked this error locally * merged trunk changes * Introduces basic support for spawning, rebooting and destroying vms when using Microsoft Hyper-V as the hypervisor. Images need to be in VHD format. Note that although Hyper-V doesn't accept kernel and ramdisk separate from the image, the nova objectstore api still expects an image to have an associated aki and ari. You can use dummy aki and ari images -- the hyper-v driver won't use them or try to download them. Requires Python's WMI module * merged trunk changes * Renamed 'set\_root\_password' to 'set\_admin\_password' globally * merge with trunk * renamed sharedipgroups to shared\_ip\_groups and fixed tests for display\_name * Fix openstack api tests and add a FaultWrapper to turn exceptions to faults * Fixed display\_name on create\_instance * fix some glitches due to someone removing instanc.internal\_id (not that I mind) remove accidental change to nova-combined script * Fixed trunk merge conflicts as spotted by dubs * OS API parity: map image ID to numeric ID. 
Ensure all other OS operations are at least stubbed out and callable * add in separate public hostname for console hosts. flesh out console api data * allow smoketests to find nova package and add security rules * Fix a bunch of pep8 stuff * This addition to the docs clarifies that it is a requirement for contributors to be listed in the Authors file before their commits can be merged to trunk * merge trunk * another merge from trunk to the latest rev * pulled changes from trunk added console api to openstack api * Removed dependencies on nova server components for the admin client * Remove stale doc files so the autogeneration extension for sphinx will work properly * Add to Authors and mailmap * Make test case work again * This branch contains the internal API cleanup branches I had previously proposed, but combined together and with all the UUID key replacement ripped out. This allows multiple REST interfaces (or other tools) to use the internal API directly, rather than having the logic tied up in the ec2 cloud.py file * socat will need to be added to our nova sudoers * merged trunk changes * intermediate work * Created a XenAPI plugin that will allow nova code to read/write/delete from xenstore records for a given instance. Added the basic methods for working with xenstore data to the vmops script, as well as plugin support to xenapi\_conn.py * Merged trunk * Recover from a lost data store connection * Updated register\_models() docstring * simplify decorator into a wrapper fn * add in xs-console worker and tests * pep8 cleanup * more fixes, docstrings * fix injection and xml * Fixing formatting problems with multinode install document * Split internal API get calls to get and get\_all, where the former takes an ID and returns one resource, and the latter can optionally take a filter and return a list of resources * missing \_() * Fixed for pep8 * Fixed:Create instance fails when use\_ipv6=False * Removed debug message which is not needed * Fixed misspelled variable * Fixed bug in nova\_project\_filter\_v6 * The \_update method in base Instance class overides dns\_name\_v6,so fixed it * self.XENAPI.. 
* Changed Paused power state from Error to Paused * fixed json syntax error * stop using partitions and first pass at cow images * Remove stale doc files * pep8 * tests fixed up * Better method for eventlet.wsgi.server logging * Silence eventlet.wsgi.server so it doesn't go to stdout and pollute our logs * Declare a flag for test to run in isolation * Build app manually for test\_api since nova.ec2.API is gone * Recover from a lost data store connection * Added xenstore plugin changed * merged changes from trunk * some more cleanup * need one more newline * Redis dependency no longer needed * Make test\_access use ec2.request instead of .controller and .action * Revert some unneeded formatting since twistd is no longer used * pep8 fixes * Remove flags and unused API class from openstack api, since such things are specified in paste config now * i18n logging and exception strings * remove unused nova/api/\_\_init\_\_.py * Make paste the default api pattern * Rework how routing is done in ec2 endpoint * Change all 2010 Copyright statements to 2010-2011 in doc source directory only * rename easy to direct in the scripts * fix typo in stack tool * rename Easy API to Direct API * Moved \_\_init\_\_ api code to api.py and changed allowed\_instances quota method argument to accept all type data, not just vcpu count * Made the plugin output fully json-ified, so I could remove the exception handlers in vmops.py. Cleaned up some pep8 issues that weren't caught in earlier runs * merged from trunk * Renamed argument to represent possible types in volume\_utils * Removed leftover UUID reference * Removed UUID keys for instance and volume * Merged trunk * Final edits to multi-node doc and install script * Merged trunk changes * Some Bug Fix * Fixed bug in libvirt * Fixed bug * Fixed for pep8 * Fixed conflict with r515 * Merged and fiexed conflicts with r515 * some fixes per vish's feedback * Don't know where that LOG went.. * Final few log tweaks, i18n, levels, including contexts, etc * Apply logging changes as a giant patch to work around the cloudpipe delete + add issue in the original patch * dabo fix to update for password reset v2 * krm\_mapping.json sample file added * dabo fix to update for password reset * added cloudserver vars to novarc template * Update Authors * Add support for rbd volumes * Fixes LP688545 * First pass at feature parity. Includes Image ID hash * Fixing merge conflicts with new branch * merged in trunk changes * Fixing merge conflicts * Fixes LP688545 * Make sure we point to the right PPA's everywhere * Editing note about the database schema available on the wiki * Modifying based on reviewer comments * Uses paste.deploy to make application running configurable. This includes the ability to swap out middlewares, define new endpoints, and generally move away from having code to build wsgi routers and middleware chains into a configurable, extensible method for running wsgi servers * Modifications to the nova-CC-installer.sh based on review * Adds the pool\_recycle option to the sql engine startup call. This enables connection auto-timeout so that connection pooling will work properly. The recommended setting (per sqlalchemy FAQ page) has been provided as a default for a new configuration flag. What this means is that if a db connection sits idle for the configured # of seconds, the engine will automatically close the connection and return it to the available thread pool. See Bug #690314 for info * Add burnin support. 
Services are now by default disabled, but can have instances and volumes run on them using availability\_zone = nova:HOSTNAME. This lets the hardware be put through its paces without being put in the generally available pool of hardware. There is a 'service' subcommand for nova-manage where you can enable, disable, and list statuses of services * pep8 fixes * Merged compute-api-cleanup branch * Removed compute dependency in quota.py * add timeout constant, set to 5 minutes * removed extra whitespace chars at the end of the changed lines * Several documentation corrections and formatting fixes * Minor edits prior to merging changes to the script file * add stubs for xen driver * merge in trunk * merged latest trunk * merge trunk * merge trunk * temp * Stop returning generators in the refresh\_security\_group\_{rules,members} methods * Don't lie about which is the default firewall implementation * Move a closing bracket * Stub out init\_host in libvirt driver * Adjust test suite to the split between base firewall rules provided by nwfilter and the security group filtering * Fix a merge artifact * Remove references to nova-core/ppa and openstack/ppa PPA's * Updated the password generation code * Add support for Sheepdog volumes * Add support for various block device types (block, network, file) * Added agent.py plugin. Merged xenstore plugin changes * fixed pep8 issues * Added OpenStack's copyright to the xenstore plugin * fixed pep8 issues * merged in trunk and xenstore-plugin changes * Ignore CA/crl.pem * Before merge with xenstore-plugin code * Corrected the sloppy import in the xenstore plugin that was copied from other plugins * Ignore CA/crl.pem * Merged trunk * Merged trunk * deleting README.livemigration.txt and nova/livemigration\_test/\* * Merged trunk * Merged trunk * Merged with the latest version. The changes are as follows: added my affiliation to Authors; utils.py's generate\_uid was broken and instance IDs were overflowing, so that handling was temporarily removed, to be re-tested later * Merged trunk * Auth Tokens assumed the user\_id was an int, not a string * Removed dependencies on flags.py from adminclient * Make InstanceActions and live diagnostics available through the Admin API * Cleanup * Improved test * removed some debugging code left in previous push * Converted the pool\_recycle setting to be a flag with a default of 3600 seconds * completed the basic xenstore read/write/delete functionality * Removed problematic test * PEP8 fix * \* Fix bad query in \_\_project\_to\_dn \* use \_\_find\_dns instead of \_\_find\_objects in \_\_uid\_to\_dn and \_\_project\_to\_dn * Moved network operation code in ec2 api into a generic network API class. Removed a circular dependency with compute/quota * Oopsies * merge trunk * merge trunk * Make compute.api methods verbs * Fail * Review feedback * Cleans up the output of run\_tests.sh to look closer to Trial * change exit code * Changing DN creation to do searches for entries * Merged trunk * Implemented review feedback * This patch is the beginning of XenServer snapshots in nova. It adds: * Merged trunk * Calling compute api directly from OpenStack image create * Several documentation corrections * merge recent revision (version of 2010/12/28) Change: 1. Use greenthread instead of defer at nova.virt.libvirt\_conn.live\_migration. 2. Move nova.scheduler.manager.live\_migration to nova.scheduler.driver 3. Move nova.scheduler.manager.has\_enough\_resource to nova.scheduler.driver 4.
Any check routine in nova-manage.instance.live\_migration is moved to nova.scheduler.driver.schedule\_live\_migration * Merging trunk * Note that contributors are required to be listed in Authors file before work can be merged into trunk * Mention Authors and .mailmap files in Developer Guide * pep 8 * remove cloudpipe from paste config * Clean up how we determine IP to bind to * Converted a few more ec2 calls to use compute api * Cleaned up the compute API, mostly consistency with other parts of the system and renaming redundant module names * fixed the compute lock test * altered the compute lock test * removed tests.api.openstack.test\_servers test\_lock, to hell with it. i'm not even sure if testing lock needs to be at this level * fixed up the compute lock test, was failing because the context was always admin * syntax error * moved check lock decorator from the compute api to the come manager... when it rains it pours * removed db.set\_lock, using update\_instance instead * added some logging * typo, trying to hurry.. look where that got me * altered error exception/logging * altered error exception/logging * fixd variables being out of scope in lock decorator * moved check lock decorator to compute api level. altered openstack.test\_servers according and wrote test for lock in tests.test\_compute * Moved ec2 volume operations into a volume API interface for other components to use. Added attach/detach as compute.api methods, since they operate in the context of instances (and to avoid a dependency loop) * pep8 fix, and add in flags that don't refernece my laptop * apt-get install socat, which is used to connect to the console * removed lock check from show and changed returning 404 to 405 * fix lp:695182, scheduler tests needed to DECLARE flag to run standalone * removed () from if (can't believe i did that) and renamed checks\_lock decorator * Add the pool\_recycle setting to enable connection pooling features for the sql engine. The setting is hard-coded to 3600 seconds (one hour) per the recommendation provided on sqlalchemy's site * i18n * Pep-8 cleanup * Fix scheduler testcase so it knows all flags and can run in isolation * removed some code i didn't end up using * fixed merge conflict with trunk * pep8 * fixed up test for lock * added tests for EC2 describe\_instances * PEP8 cleanup * This branch fixes an issue where VM creation fails because of a missing flag definition for 'injected\_network\_template'. See Bug #695467 for more info * Added tests * added test for lock to os api * refactor * Re-added flag definition for injected\_network\_template. Tested & verified fix in the same env as the original bug * forgot import * syntax error * Merged trunk * Added implementation availability\_zones to EC2 API * Updating Authors * merge * Changes and error fixes to help ensure basic parity with the Rackspace API. 
Some features are still missing, such as shared ip groups, and will be added in a later patch set * initial lock functionality commit * Merged with trunk * Additional edits in nova.concepts.rst while waiting for script changes * Bug #694880: nova-compute now depends upon Cheetah even when not using libvirt * add ajax console proxy to nova.sh * merge trunk * Fix pep8 violations * add in unit tests * removed superfluous line * Address bug #695157 by using a blank request class and setting an empty request path * Defualt services to enabled * Address bug #695157 by using a blank request class and setting an empty request path * Add flag --enable\_new\_services to toggle default state of service when created * merge from trunk * This commit introduces scripts to apply XenServer host networking protections * Whoops * merge from upstream and fix conflicts * Update .mailmap with both email addresses for Ant and myself * Make action log available through Admin API * Merging trunk * Add some basic snapshot tests * Added get\_diagnostics placeholders to libvirt and fake * Merged trunk * Added InstanceAction DB functions * merge trunk * Bug #694890: run\_tests.sh sometimes doesn't pass arguments to nosetest * Output of run\_tests.sh to be closer to trial * I've added suspend along with a few changes to power state as well. I can't imagine suspend will be controversial but I've added a new power state for "suspended" to nova.compute.power\_states which libvirt doesn't use and updated the xenapi power mapping to use it for suspended state. I also updated the mappings in nova.api.openstack.servers to map PAUSED to "error" and SUSPENDED to "suspended". Thoughts there are that we don't currently (openstack API v1.0) use pause, so if somehow an instance were to be paused an error occurred somewhere, or someone did something in error. Either way asking the xenserver host for the status would show "paused". Support for more power states needs to be added to the next version of the openstack API * fixed a line length * Bug #694880: nova-compute now depends upon Cheetah even when not using libvirt * Bug #694890: run\_tests.sh sometimes doesn't pass arguments to nosetest * fix bug #lp694311 * Typo fix * Renamed based on feedback from another branch * Added stack command-line tool * missed a couple of gettext \_() * Cleans up nova.api.openstack.images and fix it to work with cloudservers api. Previously "cloudservers image-list" wouldn't work, now it will. There are mappings in place to handle s3 or glance/local image service. In the future when the local image service is working, we can probably drop the s3 mappings * Fixing snapshots, pep8 fixes * translate status was returning the wrong item * Fixing bad merge * Converted Volume model and operation to use UUIDs * inst -> item * syntax error * renaming things to be a bit more descriptive * Merging trunk * Converted instance references to GUID type * Added custom guid type so we can choose the most efficient backend DB type easily * backup schedule changes * Merged trunk * Merging trunk, fixing failed tests * A few fixes * removed \ * Moving README to doc/networking.rst per recommendation from Jay Pipes * Merged trunk * couple of pep8s * merge trunk * Fixed after Jay's review. 
Integrated code from Soren (we now use the same 'magic number' for images without kernel & ramdisk) * Fixed pep8 errors * launch\_at was added in the previous commit, but a column named lauched\_at already exists and was confusing, so it was renamed to lauched\_on * logs inner exception in nova/utils.py->import\_class * Fix Bug #693963 * remove requirement of sudo on tests * merge trunk * Merge * adding zones to api * Support IPv6 * test commit * Commit with the test item list added back again * Commit with the test item list temporarily removed from the local tree * The test item list somehow disappeared, so it was added again * Fixed nova.compute.manager, which had regressed due to the previous changes; added the CPUID and other check routines to nova.scheduler.manager.live\_migration * Fixed nova.compute.manager, which had regressed due to the previous changes; added the CPUID and other check routines to nova.scheduler.manager.live\_migration * Make nova work even when user has LANG or LC\_ALL configured * merged trunk, resolved trivial conflict * merged trunk, resolved conflict * Faked out handling for shared ip groups so they return something * another typo * applied power state conversion to test * trying again * typo * fixed the os api image test for glance * updated the xenstore methods to reflect that they write to the param record of xenstore, not the actual xenstore itself * fixed typo * Merged with trunk All tests passed Could not fix some pep8 errors in nova/virt/libvirt\_conn.py * fixed merge conflict * updated since dietz moved the limited function * fixed error occurring when tests used glance attributes, fixed docstrings * Merged again from trunk * fixed a few docstrings, added \_() for gettext * added \_() for gettext and a couple of pep8s * adds a reflection api * unit test - should be reworked * Moves implementation specific OpenStack API code from the middleware to the drivers. Also cleans up a few areas and ensures all the API tests are passing again * PEP8 fix * One more time * Pep8 cleanup * Resolved merge conflict * Merged trunk * Trying to remove twisted dependencies, this gets everything working under nosetests * Merged Monty's branch * Merged trunk and resolved conflicts * Working diagnostics API; removed diagnostics DB model - not needed * merged trunk * merged trunk * Superfluous images include and added basic routes for shared ip groups * Simplifies and improves ldap schema * xenapi iscsi support + unittests * Fixed trunk and PEP8 cleanup * Merged trunk * Added reference in setup.py so that python setup.py test works now * merge lp:nova * better bin name, and pep8 * pep8 fixes * some pep8 fixes * removing xen/uml specific switches.
If they need special treatment, we can add it * add license * delete xtra dir * move euca-get-ajax-console up one directory * merge trunk * move port range for ajaxterm to flag * more tweaks * add in license * some cleanup * rewrite proxy to not use twisted * added power state logging to nova.virt.xenapi.vm\_utils * added suspend as a power state * last merge trunk before push * merge trunk, fixed unittests, added i18n strings, cleanups etc etc * And the common module * minor notes, commit before rewriting proxy with eventlet * There were a few unclaimed addresses in mailmap * first merge after i18n * remove some notes * Add Ryan Lane as well * added tests to ensure the easy api works as a backend for Compute API * fix commits from Anthony and Vish that were committed with the wrong email * remove some yields that snuck in * merge from trunk * Basic Easy API functionality * Fixes reboot (and rescue) to work even if libvirt doesn't know about the instance and the network doesn't exist * merged trunk * Fixes reboot (and rescue) to work even if libvirt doesn't know about the instance and the network doesn't exist * Adds a flag to use the X-Forwarded-For header to find the ip of the remote server. This is needed when you have multiple api servers with a load balancing proxy in front. It is a flag that defaults to False because if you don't have a sanitizing proxy in front, users could masquerade as other ips by passing in the header manually * Got basic xenstore operations working * Merged trunk * Modified InstanceDiagnostics and truncate action * removed extra files * merged trunk * Moves the ip allocation requests to the from the api host into calls to the network host made from the compute host * pep8 fix * merged trunk and fixed conflicts * Accidentally yanked the datetime line in auth * remove extra files that slipped in * merged trunk * add missing flag * Optimize creation of nwfilter rules so they aren't constantly being recreated * use libvirt python bindings instead of system call * fixed more conflicts * merged trunk again * add in support of openstack api * merge trunk and upgrade to cheetah templating * Optimize nwfilter creation and project filter * Merging trunk * fixed conflicts * Adding more comments regarding XS snapshots * working connection security * WSGI middleware for lockout after failed authentications of ec2 access key * Modifies nova-network to recreate important data on start * Puts the creation of nova iptables chains into the source code and cleans up rule creation. This makes nova play more nicely with other iptables rules that may be created on the host * Forgot the copyright info * i18n support for xs-snaps * Finished moving the middleware layers and fixed the API tests again * Zone scheduler added * Moved some things for testing * Merging trunk * Abstracted auth and ratelimiting more * Getting Snapshots to work with cloudservers command-line tool * merge trunk * Minor bug fix * Populate user\_data field from run-instances call parameter, default to empty string to avoid metadata base64 decoding failure, LP: #691598 * Adding myself and Antony Messerli to the Authors file * Fixes per-project vpns (cloudpipe) and adds manage commands and support for certificate revocation * merge trunk * merge antonymesserli's changes, fixed some formatting, and added copyright notice * merged i8n and fixed conflicts * Added networking protections readme * Moved xenapi into xenserver specific directory * after trunk merge * Fixes documentation builds for gettext.. 
* committing so that I can merge trunk changes * Log all XenAPI actions to InstanceActions * Merged trunk * merging trunk * merging trunk * Fix doc building endpoint for gettext * All merged with trunk and let's see if a new merge prop (with no pre-req) works. * Problem was with a missplaced parentheses. ugh * Adding me in the Authors file * Populate user\_data field from run-instances call parameter, default to empty string to avoid metadata base64 decoding failure, LP: #691598 * connecting ajax proxy to rabbit to allow token based security * remove a debugging line * a few more fixes after merge with trunk * merging in trunk * move prototype code from api into compute worker * Burnin support by specifying a specific host via availability\_zone for running instances and volumes on * Merged trunk * This stops the nova-network dhcp ip from being added to all of the compute hosts * prototype works with kvm. now moving call from api to compute * Style correction * fix reboot command to work even if a host is rebooted * Filter templates and dom0 from list\_instances() * removed unused import and fix docstring * merge fakerabbit fix and turn fake back on for cloud unit tests * Reworked fakerabbit backend so each connection has it's own. Moved queues and exchanges to be globals * PEP8 cleanup * Refactored duplicate rpc.cast() calls in nova/compute/api.py. Cleaned up some formatting issues * Log all XenAPI actions * correct xenapi resume call * activate fake rabbit for debugging * change virtualization to not get network through project * update db/api.py as well * don't allocate networks when getting vpn info * Added InstanceDiagnostics and InstanceActions DB models * PEP8 cleanup * Merged trunk * merge trunk * 1) Merged from trunk 2) 'type' parameter in VMHelper.fetch\_image converted in enum 3) Fixed pep8 errors 4) Passed unit tests * Remove ec2 config chain and move openstack versions to top-level application * Use paste.deploy for running the api server * pep8 and removed extra imports * add missing greenthread import * add a few extra joined objects to get instance * remove extra print statements * Tests pass after cleaning up allocation process * Merging trunk * Typo fix, stubbing out to use admin project for now * Close devnull filehandle * added suspend and resume * Rewrite of vif\_rules.py to meet coding standards and be more pythonic in general. Use absolute paths for iptables/ebtables/arptables in host-rules * Add raw disk image support * Add my @linux2go.dk address to .mailmap * fixed some pep8 business * directly copy ip allocation into compute * Minor spellchecking fixes * Adds support for Pause and Unpause of xenserver instances * Make column names more generic * don't add the ip to bridge on compute hosts * PEP8 fixups * Added InstanceActions DB model * initial commit of xenserver host protections * Merged trunk * Fixed pep8 errors * Integrated changes from Soren (raw-disk-images). Updated authors file. 
All tests passed * pep8 (again again) * pep8 (again) * small clean up * Added test code to the repository. Changed nova.compute.manager.pre\_live\_migration() because it could return a success value even after failing: changed the success return value to True, raise RemoteError when no fixed\_ip is found, and updated nova.compute.manager.live\_migration accordingly * Added test code to the repository. Changed nova.compute.manager.pre\_live\_migration() because it could return a success value even after failing: changed the success return value to True, raise RemoteError when no fixed\_ip is found, and updated nova.compute.manager.live\_migration accordingly * Support proxying api by using X-Forwarded-For * eventlet merge updates * Cleaned up TODOs, using flags now * merge trunk and minor fix (for whatever reason validator\_unittest did not get removed from run\_test.py) * fixed unittests and further clean-up post-eventlet merge * All API tests finally pass * Removing unneeded Trial specific code * A few more tweaks to get the OS API tests passing * Adding new install script plus changes to multinode install doc * Removing unneeded Trial specific code * Replaced the use of redis in fakeldap with a customized dict class. Auth unittests should now run fine without a redis server running, or without python-redis installed * Adding Ed Leafe to Authors file * Some tweaks * Adding in Ed Leafe so we can land his remove-redis test branch * Add wait\_for\_vhd\_coalesce * Some typo fixes * pep8 cleanup * Fixed some old code that was merged incorrectly * Replaced redis with a modified dict class * bug fixes * first revision after eventlet merge. Currently xenapi-unittests are broken, but everything else seems to be running okay * Integrated eventlet\_merge patch * Code reviewed * XenAPI Snapshots first cut * Fixed network test (thanks Vish!) and fixed run\_tests.sh * First pass at converting run\_tests.py to nosetests. The network and objectstore tests don't yet work. Also, we need to manually remove the sqlite file between runs * remerged for pep8 * pep8 * merged in project-vpns to get flag changes * clean up use of iptables chains * move some flags around * add conditional bind to linux net * make sure all network data is recreated when nova-network is rebooted * merged trunk * merged trunk, fixed conflicts and tests * Added Instance Diagnostics DB model * Put flags back in nova.virt.xenapi/vm\_utils * Removed unnecessary blank lines * Put flags back in vm\_utils * This branch removes most of the dependencies on twisted and moves towards the plan described by https://blueprints.launchpad.net/nova/+spec/unified-service-architecture * pep8 fixes for bin * PEP8 cleanups * use getent, update docstring * pep8 fixes * reviewed the FIXMEs, and spotted an uncaught exception in volume\_utils...yay! * fixed a couple more syntax errors * Moved implementation specific stuff from the middleware into their respective modules * typo * fixed up openstack api images index and detail * fake session clean-up * Removed FakeInstance and introduced stubout for DB. Code clean-up * removed extra stuff used for debugging * Restore code which was changed for testing reasons to the original state. Kudos to Armando for spotting this * Make nova work even when user has LANG or LC\_ALL configured * Merged changes from trunk into the branch * Fixed column names in the Host table; added support for FlatManager and FlatDHCPManager * merged with trunk.
fixed compute.pause test * fixup after merge with trunk * memcached requires strings not unicode * Fix 688220 Added dependency on Twisted>=10.1.0 to pip-requires * Make sure we properly close the bzr WorkingTree in our Authors up-to-datedness unit test * fixes for xenapi (thanks sandywalsh) * clean up tests and add overriden time method to utils * merged from upstream * add missing import * Adding back in openssh-lpk schema, as keys will likely be stored in LDAP again * basic conversion of xs-pause to eventlet done * brougth clean-up from unittests branch and tests * I made pep8 happy * \* code cleanup \* revised unittest approach \* added stubout and a number of tests * clean up code to use timeout instead of two keys * final cleanup * Restore alphabetical order in Authors file * removed temporary comment lines * Lots of PEP-8 work * refresh\_security\_group renamed to refresh\_security\_group\_rules * added volume tests and extended fake to support them * Make sure the new, consolidated template gets included * Make sure we unlock the bzr tree again in the authors unit test * The ppa was moved. This updates nova.sh to reflect that * merged upstream * remove some logging * Merged from trunk and fixed merge issues. Also fixed pep8 issues * Lockout middleware for ec2 api * updates per review * Initial work on i18n. This adds the installation of the nova domain in gettext to all the "endpoints", which are all the bin/\* files and run\_tests.py * For some reason, I forgot to commit the other endpoints.. * Remove default\_{kernel,ramdisk} flags. They are not used anymore * Don't attempt to fiddle with partitions for whole-disk-images * pep8 * Includes architecture on register. Additionally removes a couple lines of cruft * nothing * nothing * nothing * support for pv guests (in progress) * merge trunk * Now that we have a templating engine, let's use it. Consolidate all the libvirt templates into one, extending the unit tests to make sure I didn't mess up * first cut of unittest framework for xenapi * Added my contacts to Authors file * final cleanup, after moving unittest work into another branch * fixup after merge with trunk * added callback param to fake\_conn * added not implemented stubs for libvirt * merge with trey tests * Fixed power state update with Twisted callback * simplified version using original logic * moving xenapi unittests changes into another branch * Adds support to the ec2 api for filtering describe volumes by volume\_ids * Added LiveCD info as well as some changes to reflect consolidation of .conf files * Fix exception throwing with wrong instance type * Add myself * removing imports that should have not been there * second round for unit testing framework * Added Twisted version dependency into pip-requires * only needs work for distinguishing pv from hvm * Move security group refresh logic into ComputeAPI * Refactored smoketests to use novarc environment and to separate user and admin specific tests * Changed OpenStack API auth layer to inject a RequestContext rather than building one everywhere we need it * Elaborate a bit on ipsets comment * Final round of marking translation strings * First round of i18n-ifying strings in Nova * Initial i18n commit for endpoints. All endpoints must install gettext, which injects the \_ function into the builtins * Fixed spelling errors in index.rst * fix pep8 * Includes kernel and ramdisk on register. 
Additionally removes a couple lines of cruft * port new patches * merge-a-tat-tat upstream to this branch * Format fixes and modification of Vish's email address * There is always the odd change that one forgets! * \* pylint fixes \* code clean-up \* first cut for xenapi unit tests * added pause and unpause to fake connection * merged changes from sandy's branch * added unittest for pause * add back utils.default\_flagfile * removed a few more references to twisted * formatting and naming cleanup * remove service and rename service\_eventlet to service * get service unittests running again * whitespace fix * make nova binaries use eventlet * Converted the instance table to use a uuid instead of an auto\_increment ID and a random internal\_id. I had to use a String(32) column with hex and not a String(16) with bytes because SQLAlchemy doesn't like non-unicode strings going in for String types. We could try another type, but I didn't want a primary\_key on blob types * remove debug messages * merge with trey * pause and unpause code/tests in place. To the point it stuffs request in the queue * import module and not class directly as per Soren's recommendation * Make XenServer VM diagnostics available through nova.virt.xenapi * Merged trunk * Added exception handling to get\_rrd() * Changed OpenStack API auth layer to inject a RequestContext rather than building one everywhere we need it * changed resume to unpause * Import module instead of function * filter describe volumes by supplied ids. Includes unittest * merging sandy's branch * Make get\_diagnostics async * raw instances can now be launched in xenapi (only as hvm at the moment) * pause from compute.manager <-> xenapi * Merged Armando's XenAPI fix * merge with trunk to pull in admin-api branch * Flag to define which operations are exposed in the OpenStack API, disabling all others * Fixed Authors conflict and re-merged with trunk * fixes exception throwing with wrong instance type * Ignore security group rules that reference foreign security groups * fixed how the XenAPI library is loaded * remove some unused files * port volume manager to eventlet also * intermediate commit to checkpoint progress * some pylint caught changes to compute * added to Authors * adds bzr to the list of dependencies in pip-require so that upon checkout using run\_tests.sh succeeds * merge conflict * merged upstream changes * add bzr to the dev dependencies * Fixed docstrings * Merged trunk * Got get\_diagnostics in working order * merged updates to trunk * merge trunk * typo fix * removing extraneous config lines * Finished cleaning up the openstack servers API, it no longer touches the database directly. Also cleaned up similar things in ec2 API and refactored a couple methods in nova.compute.api to accommodate this work * Pushed terminate instance and network manager/topic methods into network.compute.api * Merged trunk * Moved the reboot/rescue methods into nova.compute.api * PEP8 fixes * Setting the default schema version to the new schema * Adding support for choosing a schema version, so that users can more easily migrate from an old schema to the new schema * merged with trunk. All clear! * Removing novaProject from the schema. This change may look odd at first; here's how it works: * test commit * Removed comments; addressed review feedback on README.live\_migration.txt * This change adds better support for LDAP integration with pre-existing LDAP infrastructures.
A new configuration option has been added to specify that the LDAP driver should only modify/add/delete attributes for user entries * More pep8 fixes to remove deprecated functions * pep8 fix * Clarifying previously committed exception message * Raising an exception if the user doesn't exist before trying to modify its attributes * Removing redundant check * Added livecd instructions plus fixed references to .conf files * pylint fixes * Initial diagnostics import -- needs testing and cleanup * Added a script to use OpenDJ as an LDAP server instead of OpenLDAP. Also modified nova.sh to add a USE\_OPENDJ option that will be checked when USE\_LDAP is set * Reverting last change * a few more things ironed out * Make sure Authors check also works for pending merges (otherwise stuff can get merged that will make the next merge fail this check) * It looks like Soren fixed the author file, can I hit the commit button? * merge trunk * Make sure Authors check also works for pending merges (otherwise stuff can get merged that will make the next merge fail this check) * Add a helpful error message to nova-manage in case of NoMoreNetworks * Add Ryan Lucio to Authors * Adding myself to the authors list * Add Ryan Lucio to Authors * Addresses bug 677475 by changing the DB column for internal\_id in the instances table to be unsigned * importing XenAPI module loaded late * Added docstring for get\_instances * small fixes on Exception handling * first test commit * and yet another pylint fix * fixed pylint violations that slipped out from a previous check * \* merged with lp:~armando-migliaccio/nova/xenapi-refactoring \* fixed pylint score \* complied with HACKING guidelines * addressed review comments, complied with HACKING guidelines * adding README.livemigration.txt * Merged the live migration feature based on rev439; this version has no EBS support and no CPU flag check * modified a few files * Fixed conflicts with gundlach's fixes * Remove dead test code * Add iptables based security groups implementation * Merged gundlach's fixes * Don't wrap HTTPAccepted in a fault. Correctly pass kwargs to update\_instance * fixed import module in \_\_init\_\_.py * minor changes to docstrings * added interim solution for target discovery. Now info can either be passed via flags or discovered via iscsiadm. Long term solution is to add a few more fields to the db in the iscsi\_target table with the necessary info and modify the iscsi driver to set them * merge with lp:~armando-migliaccio/nova/xenapi-refactoring * merge trunk * moved XenAPI namespace definition into xenapi/\_\_init\_\_.py * pylint and pep8 fixes * Decreased the maximum value for instance-id generation from uint32 to int32 to avoid truncation when being entered into the instance table. Reverted fix to make internal\_id column a uint * Finished cleaning up the openstack servers API, it no longer touches the database directly.
Also cleaned up similar things in ec2 API and refactored a couple methods in nova.compute.api to accomodate this work * Merged reboot-rescue into network-manager * Merged trunk * Fixes a missing step (nova-manage network create IP/nn n nn) in the single-node install guide * Tired of seeing various test files in bzr stat * Updated sqlalchemy model to make the internal\_id column of the instances table as unsigned integer * \* Removes unused schema \* Removes MUST uid from novaUser \* Changes isAdmin to isNovaAdmin \* Adds two new configuration options: \*\* ldap\_user\_id\_attribute, with a default of uid \*\* ldap\_user\_name\_attribute, with a default of cn \* ldapdriver.py has been modified to use these changes * Pushed terminate instance and network manager/topic methods into network.compute.api * Fix bugs that prevented OpenStack API from supporting server rename * pep8 * Use newfangled compute\_api * Update tests to use proper id * Fixing single node install doc * Oops, update 'display\_name', not 'name'. And un-extract-method * Correctly translate instance ids to internal\_ids in some spots we neglected * Added test files to be ignored * Consolidated the start instance logic in the two API classes into a single method. This also cleans up a number of small discrepencies between the two * Moved reboot/rescue methods into nova.compute.api * Merged trunk and resolved conflicts. Again * Instances are assigned a display\_name if one is not passed in -- and now, they're assigned a display\_name even if None is explicitly passed in (as the EC2 API does.) * Merged trunk and resolved conflicts * Default Instance.display\_name to a value even when None is explicitly passed in * Refactor nwfilter code somewhat. For iptables based firewalls, I still want to leave it to nwfilter to protect against arp, mac, and ip spoofing, so it needed a bit of a split * Add a helpful error message to nova-manage in case of NoMoreNetworks * minor refactoring after merge * merge lp:~armando-migliaccio/nova/refactoring * merge trunk * typo fix * moved flags into xenapi/novadeps.py * Add a simple abstraction for firewalls * fix nova.sh to reflect new location of ppa * Changed null\_kernel flag from aki-00000000 to nokernel * Guarantee that the OpenStack API's Server-related responses will always contain a "name" value. And get rid of a redundant field in models.py * Going for a record commits per line changes ratio * Oops, internal\_id isn't available until after a save. This code saves twice; if I moved it into the DB layer we could do it in one save. However, we're moving to one sqlite db per compute worker, so I'd rather have two saves in order to keep the logic in the right layer * Todd points out that the API doesn't require a display\_name, so let's make a default. That way the OpenStack API can rest assured that its server responses will always have a name key * Adds in more documentation contributions from Citrix * Remove duplicate field and make OpenStack API return server.name for EC2-API-created instances * Move cc\_host and cc\_port flags into nova/network/linux\_net.py. They weren't used anywhere else * Add include\_package\_data=True to setup.py * With utils.default\_flagfile() in its old location, the flagfile isn't being read -- twistd.serve() loads flags earlier than that point. 
Move the utils.default\_flagfile() call earlier so the flagfile is included * Removed a blank line * Broke parts of compute manager out into compute.api to separate what gets run on the API side vs the worker side * Move default\_flagfile() call to where it will be parsed in time to load the flagfile * minor refactoring * Move cc\_host and cc\_port flags into nova/network/linux\_net.py. They weren't used anywhere else * Added a script to use OpenDJ as an LDAP server instead of OpenLDAP. Also modified nova.sh to add an USE\_OPENDJ option, that will be checked when USE\_LDAP is set * Fixed termie's tiny bits from the prior merge request * Delete unused flag in nova.sh * Moving the openldap schema out of nova.sh into it's own files, and adding sun (opends/opendj/sun directory server/fedora ds) schema files * OpenStack API returns the wrong x-server-management-url. Fix that * Cleaned up pep8 errors * brought latest changes from trunk * iscsi volumes attach/detach complete. There is only one minor issue on how to discover targets from device\_path * Fix unit tests * Fix DescribeImages EC2 API call * merged Justin Santa Barbara's raw-disk-image back into the latest trunk * If only I weren't so lazy * Rename imageSet variable to images * remove FAKE\_subdomain reference * Return the correct server\_management\_url * Default flagfile moved in trunk recently. This updates nova.sh to run properly with the new flagfile location * Correctly handle imageId list passed to DescribeImages API call * update of nova.sh because default flagfile moved * merged trunk * Add a templating mechanism in the flag parsing * Adjust state\_path default setting so that api unit tests find things where they used to find them * Import string instead of importing Template from string. This is how we do things * brought the xenapi refactoring in plus trunk changes * changes * pep8 fixes and further round of refactoring * Rename cloudServersFault to computeFault -- I missed this Rackspace branding when we renamed nova.api.rackspace to nova.api.openstack * Make sure templated flags work across calls to ParseNewFlags * Add include\_package\_data=True to setup.py * fixed deps * first cut of the refactoring of the XenAPIConnection class. Currently the class merged both the code for managing the XenAPI connection and the business logic for implementing Nova operations. If left like this, it would eventually become difficult to read, maintain and extend. The file was getting kind of big and cluttered, so a quick refactoring now will save a lot of headaches later * other round of refactoring * further refactoring * typos and pep8 fixes * first cut of the refactoring of the XenAPIConnection class. Currently the class merged both the code for managing the XenAPI connection and the business logic for implementing Nova operations. If left like this, it would eventually become difficult to read, maintain and extend. The file was getting kind of big and cluttered, so a quick refactoring now will save a lot of headaches later * PEP fixes * Adding support for modification only of user accounts * This modification should have occured in a different branch. 
Reverting * added attach\_volume implementation * work on attach\_volume, with a few things to iron out * A few more changes: \* Fixed up some flags \* Put in an updated nova.sh \* Broke out metadata forwarding so it will work in flatdhcp mode \* Added descriptive docstrings explaining the networking modes in more detail * small conflict resolution * first cut of changes for the attach\_volume call * The image server should throw not found errors, don't need to check in compute manager * Consolidated the start instance logic in the two API classes into a single method. This also cleans up a number of small discrepencies between the two * Setting "name" back to "cn", since id and name should be separate * Adding support for modification only of user accounts * don't error on edge case where vpn has been launched but fails to get a network * Make sure all workers look for their flagfile in the same spot * Fix typo "nova.util" -> "nova.utils" * Fix typo "nova.util" -> "nova.utils" * Added a .mailmap that maps addresses in bzr to people's real, preferred e-mail addresses. (I made a few guesses along the way, feel free to adjust according to what is actually the preferred e-mail) * Add a placeholder in doc/build. Although bzr handles empty directories just fine, setuptools does not, so to actually ship this directory in the tarball, we need a file in it * Add a placeholder in doc/build. Although bzr handles empty directories just fine, setuptools does not, so to actually ship this directory in the tarball, we need a file in it * Merged trunk * pep8 * merged trunk, added recent nova.sh * fix typos in docstring * docstrings, more flags, breakout of metadata forwarding * doc/build was recently accidentally removed from VCS. This adds it back, which makes the docs build again * Add doc/build dir back to bzr * Make aws\_access\_key\_id and aws\_secret\_access\_key configurable * add vpn ping and optimize vpn list * Add an alias for Armando * the serial returned by x509 is already formatted in hex * Adding developer documentation - setting up dev environment and how to add to the OpenStack API * Add a --logdir flag that will be prepended to the logfile setting. This makes it easier to share a flagfile between multiple workers while still having separate log files * Address pep8 complaints * Address PEP8 complaints * Remove FAKE\_subdomain from docs * Adding more polish * Adding developer howtos * Remove FAKE\_subdomain from docs * Make aws\_access\_key\_id and aws\_secret\_access\_key configurable * updated nova.sh * added flat\_interface for flat\_dhcp binding * changed bridge\_dev to vlan\_interface * * Add a --logdir flag that will be prepended to the logfile setting. This makes it easier to share a flagfile between multiple workers while still having separate log files * added svg files (state.svg is missing because its source is a screen snapshot) * Unify the location of the default flagfile. Not all workers called utils.default\_flagfile, and nova-manage explicitly said to use the one in /etc/nova/nova-manage.conf * Set and use AMQP retry interval and max retry FLAGS * Incorporating security groups info * Rename cloudServersFault (rackspace branding) to computeFault. 
Fixes bug lp680285 * Use FLAGS instead of constants * Incorporating more networking info * Make time.sleep() non-blocking * Removed unnecessary continue * Update Authors and add a couple of names to .mailmap (from people who failed to set bzr whoami properly) * Refactor AMQP retry loop * Allows user to specify hosts to listen on for nova-api and -objectstore * Make sure all the libvirt templates are included in the tarball (by replacing the explicitly listed set with a glob pattern) * fixed pep8 violations * Set and use AMQP retry interval and max retry constants * pep8 violations fix * added placeholders * added test for invalid handles * Make sure all templates are included (at least rescue tempaltes didn't used to be included) * Check for running AMQP instances * Use logging.exception instead * Reverted some changes * Added some comments * Adds images (only links one in), start for a nova-manage man file, and also documents all nova-manage commands. Can we merge it in even though the man page build isn't working? * Added some comments * Check for running AMQP instances * first cut of fixes for bug #676128 * Removed .DS\_Store files everywhere, begone! * Moves the EC2 API S3 image service into nova.service. There is still work to be done to make the APIs align, but this is the first step * PEP8 fixes, 2 lines were too long * First step to getting the image APIs consolidated. The EC2 API was using a one-off S3 image service wrapper, but this should be moved into the nova.image space and use the same interface as the others. There are still some mismatches between the various image service implementations, but this patch was getting large and wanted to keep it within a resonable size * Improved Pylint Score * Fixes improper display of api error messages that happen to be unicode * Make sure that the response body is a string and not unicode * Soren updated setup.py so that the man page builds. Will continue working on man pages for nova-compute and nova-network * Overwrite build\_sphinx, making it run once for each of the html and man builders * fixes flatdhcp, updates nova.sh, allows for empty bridge device * Update version to 2011.1 as that is the version we expect to release next * really adding images * adding images * Documenting all nova-manage commands * Documenting all nova-manage commands * Fixes eventlet race condition in cloud tests * fix greenthread race conditions in trunk and floating ip leakage * Testing man page build through conf.py * Improved Pylint Score * adjusting images size and bulleted list * merged with trunk * small edit * Further editing and added images * Update version to 2011.1 as that is the version we expect to release next * ec2\_api commands for describe\_addresses and associate\_address are broken in trunk. This happened during the switch to ec2\_id and internal\_id. We clearly didn't have any unit tests for this, so I've added a couple in addition to the three line change to actually fix the bugs * delete floating ips after tests * remove extra line and ref. to LOG that doesn't exist * fix leaking floating ip from network unittests and use of fakeldap driver * Adds nova-debug to tools directory, for debugging of instances that lose networking * fixes errors in describe address and associate address. 
Adds test cases * Ryan\_Lane's code to handle /etc/network not existing when we try to inject /etc/network/interfaces into an image * pep8 * First dump of content related to Nova RPC and RabbitMQ * Add docstrings to any methods I touch * pep8 * PEP8 fixes * added myself to Authors file. Enjoy spiders * Changed from fine-grained operation control to binary admin on/off setting * Changed from fine-grained operation control to binary admin on/off setting * Lots of documentation and docstring updates * The docs are just going to be wrong for now. I'll file a bug upstream * Change how wsgified doc wrapping happens to fix test * merge to trunk * pep8 * Adding contributors and names * merge with trunk * base commit * saw a duplicate import ... statement in the code while reading through unit tests - this removes the dupe * removed redundant unit test import * add in bzr link * adding a bit more networking documentation * remove tab * fix title * tweak * Fix heading * merge in anne's changes * tweak * Just a few more edits, misspellings and the like * fix spacing to enable block * merge to remote * unify env syntax * Add sample puppet scripts * fix install guide * getting started * create SPHINX\_DEBUG env var. Setting this will disable aggressive autodoc generation. Also provide some sample for P syntax * fix conf file from earlier merge * notes, and add code to enable sorted "..todo:: P[1-5] xyz" syntax * merge in more networking docs - still a work in progress * anne's changes to the networking documentation * Updated Networking doc * anne gentle's changes to community page * merge in heckj's corrections to multi-node install * Added a .mailmap that maps addresses in bzr to people's real, preferred e-mail addresses. (I made a few guesses along the way, feel free to adjust according to what is actually the preferred e-mail) * Updated community.rst to fix a link to the IRC logs * merging in changes from ~anso/nova/trunkdoc * fixed another spacing typo causing poor rendering * fixed spacing typo causing poor rendering * merge in anne's work * add docs for ubuntu 4, 10, others * Updated Cloud101 and admonition color * merge heckj's multi install notes * working on single node install * updating install notes to reference Vish' nova.sh and installing in MYSQL * Add Flat mode doc * Add Flat mode doc * Add Flat mode doc * Add VLAN Mode doc * Add VLAN Mode doc * merge in anne's changes * home page tweaks * Updated CSS and community.rst file * modifications and additions based on doc sprint * incorporate some feedback from todd and anne * merge in trunk * working on novadoc structure * add some info on authentication and keys * Since we're autodocumenting from a sphinx ext, we can scrap it in Makefile * Use the autodoc tools in the setup.py build\_sphinx toolchain * Fix include paths so setup.py build\_sphinx works again * Cleanups to doc process * quieter doc building (less warnings) * File moves from "merge" of termie's branch * back out stacked merge * Doc updates: \* quieter build (fewer warnings) \* move api reference out of root directory \* auto glob api reference into a TOC \* remove old dev entries for new-fangled auto-generated docs * Normalization of Dev reference docs * Switch to module-per-file for the module index * Allow case-by-case overriding of autodocs * add exec flags, apparently bzr shelve/unshelve does not keep track of them * Build autodocs for all our libraries * add dmz to flags and change a couple defaults * Per-project vpns, certificates, and revocation * remove finished todo 
* Fix docstrings for wsigfied methods * fix default twitter username * shrink tweet text a bit * Document nova.sh environment * add twitter feed to the home page * Community contact info * small tweaks before context switch * use include to grab todd's quickstart * add in custom todo, and custom css * Format TODO items for sphinx todo extension * additions to home page * Change order of secions so puppeting is last, add more initial setup tasks * update types of services that may run on machines * Change directory structure for great justice! * Refactored smoketests to use novarc environment and to separate user and admin specific tests * start adding info to multi-node admin guide * document purpose of documentation * Getting Started Guide * Nova quickstart: move vish's novascript into contrib, and convert reademe.md to a quickstart.rst * merge trunk * Add a templating mechanism in the flag parsing. Add a state\_path flag that will be used as the top-level dir for all other state (such as images, instances, buckets, networks, etc). This way you only need to change one flag to put all your state in e.g. /var/lib/nova * add missing file * Cleanup nova-manage section * have "contents" look the same as other headings * Enables the exclusive flag for DirectConsumer queues * Ensures that keys for context from the queue are passed to the context constructor as strings. This prevents hangs on older versions of python that can't handle unicode kwargs * Fix for bug #640400, enables the exclusive flag on the temporary queues * pep8 whitespace and line length fixes * make sure context keys are not unicode so they can be passed as kwargs * merged trunk * merged source * prettier theme * Added an extra argument to the objectstore listen to separate out the listening host from the connecting host * Change socket type in nova.utils.get\_my\_ip() to SOCK\_DGRAM. This way, we don't actually have to set up a connection. Also, change the destination host to an IP (chose one of Google's DNS's at random) rather than a hostname, so we avoid doing a DNS lookup * Fix for bug#613264, allowing hosts to be specified for nova-api and objectstore listeners * Fixes issue with security groups not being associated with instances * Doc cleanups * Fix flags help display * Change socket type in nova.utils.get\_my\_ip() to SOCK\_DGRAM. This way, we don't actually have to set up a connection. 
Also, change the destination host to an IP (chose one of Google's DNS's at random) rather than a hostname, so we avoid doing a DNS lookup * ISCSI Volume support * merged * more descriptive title for cloudpipe * update of the architecture and fix some links * Fixes after trunk merge * removed some old instructions and updated concepts * merge * Documentation on Services, Managers, and Drivers * Document final undocumented python modules * merged trunk * cloudpipe docs * Fixed --help display for non-twisted bin/\* commands * Adds support for multiple API ports, one for each API type (OS, EC2) * Fixed tests to work with new default API argument * Added support for OpenStack and EC2 APIs to run on different ports * More docs * Language change for conformity * Add ec2 api docs * Exceptions docs * API endpoint documentation * basics to get proxied ajaxterm working with virsh * :noindex: on the fakes page for virt.fakes which is included in compute.rst * Virt documentation * Change retrieval of security groups from kwargs so they are associated properly and add test to verify * don't check for vgroup in fake mode * merged trunk, just in case * Update compute/disk.py docs * Change volume TODO list * Volume documentation * Remove fakes duplication * Update database docs * Add support for google analytics to only the hudson-produced docs * Changes to conf.py * Updated location of layout.html and change conf.py to use a build variable * Update database page a bit * Fakes cleanup (stop duplicate autodoc of FakeAOEDriver) * Document Fakes * Remove "nova Packages and Dependencies" * Finished TODO item * Pep-257 * Pep-257 cleanups * Clean up todos and the like for docs * A shell script for showing modules that aren't documented in .rst files * merge trunkdoc * link binaries section to concepts * :func: links to python functions in the documentation * Todo cleanups in docs * cleanup todos * fix title levels * wip architecture, a few auth formatting fixes, binaries, and overview * volume cleanups * Remove objectstore, not referenced anywhere * Clean up volumes / storage info * Moves db writes into compute manager class. Cleans up sqlalchemy model/api to remove redundant calls for updating what is really a dict * Another heading was too distracting, use instead * Fix underlining -> heading in rst file * Whitespace and docstring cleanups * Remove outdated endpoint documentation * Clean up indentation error by preformatting * Add missing rst file * clean up the compute documentation a bit * Remove unused updated\_data variable * Fix wiki link * added nova-manage docs * merged and fixed conflicts * updates to auth, concepts, and network, fix of docstring * cleanup rrd doc generation * Doc skeleton from collaborative etherpad hack session * OK, let's try this one more time * Doc updates * updates from review, fix models.get and note about exception raising * Style cleanups and review from Eric * New structure for documentation * Fixes PEP8 violations from the last few merges * More PEP8 fixes that were introduced in the last couple commits * Adding Google Analytics code to nova.openstack.org * Fixes service unit tests after tornado excision * Added Google Analytics code * renamed target\_id to iscsi\_target * merged gundlach's excision * Oops, didn't mean to check this one in. 
Ninja-patch * Delete BaseTestCase and with it the last reference to tornado * fix completely broken ServiceTestCase * Removes some cruft from sqlalchemy/models.py like unused imports and the unused str\_id method * Adds rescue and unrescue commands * actually remove the conditional * fix tests by removing missed reference to prefix and unnecessary conditional in generate\_uid * Making net injection create /etc/network if non-existant * Documentation was missing; added * Moving the openldap schema out of nova.sh into it's own files, and adding sun (opends/opendj/sun directory server/fedora ds) schema files * validates device parameter for attach-volume * add nova-debug to setup.py * nova-debug, relaunch an instance with a serial console * Remove the last vestigial bits of tornado code still in use * pep8 cleanup * print the exception on fail, because it doesn't seem to reraise it * use libvirt connection for attaching disks and avoid the symlink * update error message * Exceptions in the OpenStack API will be converted to Faults as they should be, rather than barfing a stack trace to the user * pep8 * pep8 * Duplicate the two trivial escaping functions remaining from tornado's code and remove the dependency * more bugfixes, flag for local volumes * fix bugs, describe volumes, detach on terminate * ISCSI Volume support * Removed unused imports and left over references to str\_id * logging.warn not raise logging.Warn * whitespace * move create\_console to cloud.py from admin.py * merge lp:nova * add NotFound to fake.py and document it * add in the xen rescue template * pep 8 cleanup and typo in resize * add methods to cloud for rescue and unrescue * update tests * merged trunk and fixed conflicts/changes * part way through porting the codebase off of twisted * Another pep8 cleanup branch for nova/tests, should be merged after lp:~eday/nova/pep8-fixes-other. After this, the pep8 violation count is 0! * Changes block size for dd to a reasonable number * Another pep8 cleanup branch for nova/api, should be merged after lp:~eday/nova/pep8-fixes 2010.1 ------ * Created Authors file * Actually adding Authors file * Created Authors file and added to manifest for Austin Release * speed up disk generation by increasing block size * PEP8 cleanup in nova/tests, except for tests. There should be no functional changes here, just style changes to get violations down * PEP8 cleanup in nova/\*, except for tests. There should be no functional changes here, just style changes to get violations down * PEP8 cleanup in nova/db. There should be no functional changes here, just style changes to get violations down * PEP8 cleanup in nova/api. There should be no functional changes here, just style changes to get violations down * PEP8 and pylint cleanup. There should be no functional changes here, just style changes to get violations down * Moves db writes into compute manager class. Cleans up sqlalchemy model/api to remove redundant calls for updating what is really a dict * validate device in AttachDisk * Cleanup of doc for dependencies (redis optional, remove tornado, etc). 
Please check for accuracy * Delays the creation of the looping calls that that check the queue until startService is called * Made updates based on review comments * Authorize image access instead of just blindly giving it away * Checks the pid of dnsmasq to make sure it is actually referring to the right process * change boto version from 1.9b1 to 1.9b in pip-requires * Check the pid to make sure it refers to the correct dnsmasq process * make sure looping calls are created after service starts and add some tests to verify service delegation works * fix typo in boto line of pip-requires * Updated documentation * Update version set in setup.py to 2010.1 in preparation for Austin release * Also update version in docs * Update version to 2010.1 in preparation for Austin release * \* Fills out the Parallax/Glance API calls for update/create/delete and adds unit tests for them. \* Modifies the ImageController and GlanceImageService/LocalImageService calls to use index and detail routes to comply perfectly with the RS/OpenStack API * Makes disk.partition resize root drive to 10G, unless it is m1.tiny which just leaves it as is. Larger images are just used as is * reverted python-boto version in pip-requires to 1.9b1 * Construct exception instead of raising a class * Authorize Image before download * Add unit test for XML requests converting errors to Faults * Fixes https://bugs.launchpad.net/nova/+bug/663551 by catching exceptions at the top level of the API and turning them into Faults * Adds reasonable default local storage gb to instance sizes * reverted python-boto version in pip-requires to 1.9b1.\ * Fix typo in test case * Remember to call limited() on detail() in image controller * Makes nova-dhcpbridge notify nova-network on old network lease updates * add reasonable gb to instance types * it is flags.DEFINE\_integer, not FLAGS.define\_int * Makes disk.partition resize root drive to 10G, unless it is m1.tiny which just leaves it as is. Larger images are just used as is * update leases on old leases as well * Adds a simple nova-manage command called scrub to deallocate the network and remove security groups for a projeect * Refresh MANIFEST.in to make the tarball include all the stuff that belongs in the tarball * Added test case to reproduce bug #660668 and provided a fix by using the user\_id from the auth layer instead of the username header * Add the last few things to MANIFEST.in * Also add Xen template to manifest * Fix two problems with get\_console\_log: \* libvirt has this annoying "feature" where it chown()s your console to the uid running libvirt. That gets in the way of reading it. Add a call to "sudo chown ...." right before we read it to make sure it works out well. \* We were looking in the wrong directory for console.log. \*blush\* * This branch converts incoming data to the api into the proper type * Fixes deprecated use of context in nova-manage network create * Add a bunch of stuff to MANIFEST.in that has been added to the tree over the last couple of months * Fix the --help flag for printing help on twistd-based services * Fix two problems with get\_console\_log: libvirt has this annoying "feature" where it chown()s your console to the uid running libvirt. That gets in the way of reading it. We were looking in the wrong directory for console.log. 
\*blush\* * Fix for bug 660818 by adding the resource ID argument * Reorg the image services code to push glance stuff into its own directory * Fix some unit tests: \* One is a race due to the polling nature of rpc in eventlet based unit tests. \* The other is a more real problem. It was caused by datastore.py being removed. It wasn't caught earlier because the .pyc file was still around on the tarmac box * Add a greenthread.sleep(0.3) in get\_console\_output unit test. This is needed because, for eventlet based unit tests, rpc polls, and there's a bit of a race. We need to fix this properly later on * Perform a redisectomy on bin/nova-dhcpbridge * Removed 'and True' oddity * use context for create\_networks * Make Redis completely optional: * make --help work for twistd-based services * trivial style change * prevent leakage of FLAGS changes across tests * run\_tests.sh presents a prompt: * Also accept 'y' * A few more fixes for deprecations * make run\_tests.sh's default perform as expected * Added test case to reproduce bug #660668 and provided a fix by using the user\_id from the auth layer instead of the username header * get flags for nova-manage and fix a couple more deprecations * Fix for bug#660818, allows tests to pass since delete expects a resource ID * This branch modifies the fixes all of the deprecation warnings about empty context. It does this by adding the following fixes/features \* promotes api/context.py to context.py because it is used by the whole system \* adds more information to the context object \* passes the context through rpc \* adds a helper method for promoting to admin context (elevate()) \* modifies most checks to use context.project\_id instead of context.project.id to avoid trips to the database * timestamps are passed as unicode * Removed stray spaces that were causing an unnecessary diff line * merged trunk * Minimized diff, fixed formatting * remove nonexistent exception * Merged with trunk, fixed broken stuff * revert to generic exceptions * fix indent * Fixes LP Bug#660095 * Move Redis code into fakeldap, since it's the only thing that still uses it. Adjust auth unittests to skip fakeldap tests if Redis isn't around. Adjust auth unittests to actually run the fakeldap tests if Redis /is/ around * fix nosetests * Fixes a few concurrency issues with creating volumes and instances. Most importantly it adds retries to a number of the volume shell commands and it adds a unique constraint on export\_devices and a safe create so that there aren't multiple copies of export devices in the database * unit tests and fix * call stuff project\_id instead of project * review fixes * fix context in bin files * add scrub command to clean up networks and sec groups * merged trunk * merged concurrency * review comments * Added a unit test but not integrated it * merged trunk * fix remaining tests * cleaned up most of the issues * remove accidental paste * use context.project\_id because it is more efficient * elevate in proper places, fix a couple of typos * merged trunk * Fixes bug 660115 * Address cerberus's comment * Fix several problems keeping AuthMiddleware from functioning in the OpenStack API * Implement the REST calls for create/update/delete in Glance * Adds unit test for WSGI image controller for OpenStack API using Glance Service * Fixes LP Bug#660095 * Xen support * Adds flat networking + dhcpserver mode * This patch removes the ugly network\_index that is used by VlanManager and turns network itself into a pool. 
It adds support for creating the networks through an api command: nova-manage network create # creates all of the networks defined by flags or nova-manage network create 5 # create the first five networks * Newlines again, reorder imports * Remove extraneous newlines * Fix typo, fix import * merged upstream * cleanup leftover addresses * super teardown * fix tests * merged trunk * merged trunk * merged trunk * merged trunk * Revert the conversion to 64-bit ints stored in a PickleType column, because PickleType is incompatible with having a unique constraint * Revert 64 bit storage and use 32 bit again. I didn't notice that we verify that randomly created uids don't already exist in the DB, so the chance of collision isn't really an issue until we get to tens of thousands of machines. Even then we should only expect a few retries before finding a free ID * Add design doc, docstrings, document hyper-v wmi, python wmi usage. Adhere to pep-8 more closely * This patch adds support for EC2 security groups using libvirt's nwfilter mechanism, which in turn uses iptables and ebtables on the individual compute nodes. This has a number of benefits: \* Inter-VM network traffic can take the fastest route through the network without our having to worry about getting it through a central firewall. \* Not relying on a central firewall also removes a potential SPOF. \* The filtering load is distributed, offering great scalability * Change internal\_id from a 32 bit int to a 64 bit int * 32 bit internal\_ids become 64 bit. Since there is no 64 bit native type in SqlAlchemy, we use PickleType which uses the Binary SqlAlchemy type under the hood * Make Instance.name a string again instead of an integer * Now that the ec2 id is not the same as the name of the instance, don't compare internal\_id [nee ec2\_id] to instance names provided by the virtualization driver. Compare names directly instead * Fix bug 659330 * Catch exception.NotFound when getting project VPN data * Improve the virt unit tests * Remove spurious project\_id addition to KeyPair model * APIRequestContext.admin is no more. * Rename ec2\_id\_list back to instance\_id to conform to EC2 argument spec * Fix bug 657001 (rename all Rackspace references to OpenStack references) * Extracts the kernel and ramdisk id from manifests and puts in into images' metadata * Fix EC2 GetConsoleOutput method and add unit tests for it * Rename rsapi to osapi, and make the default subdomain for OpenStack API calls be 'api' instead of 'rs' * Fix bug 658444 * Adds --force option to run\_tests.sh to clear virtualenv. Useful when dependencies change * If machine manifest includes a kernel and/or ramdisk id, include it in the image's metadata * Rename ec2 get\_console\_output's instance ID argument to 'instance\_id'. It's passed as a kwarg, based on key in the http query, so it must be named this way * if using local copy (use\_s3=false) we need to know where to find the image * curl not available on Windows for s3 download. also os-agnostic local copy * Register the Hyper-V module into the list of virt modules * hyper-v driver created * Twisted pidfile and other flag parameters simply do not function on Windows * Renames every instance of "rackspace" in the API and test code base. 
Also includes a minor patch for the API Servers controller to use images correctly in the absence of Glance * That's what I get for not using a good vimrc * Mass renaming * Start stripping out the translators * Remove redis dependency from RS Images API * Remove redis dependency from Images controller * Since FLAGS.images\_path was not set for nova-compute, I could not launch instances due to an exception at \_fetch\_local\_image() trying to access it. I think that this is the reason for Bug655217 * Imported images\_path from nova.objectstore for nova-compute. Without its setting, it fails to launch instances by exception at \_fetch\_local\_image * Defined images\_path for nova-compute. Without its setting, it fails to launch instances by exception at \_fetch\_local\_image * Cleans up a broken servers unit test * Huge sweeping changes * Adds stubs and tests for GlanceImageService and LocalImageService. Adds basic plumbing for ParallaxClient and TellerClient and hooks that into the GlanceImageService * Typo * Missed an ec2\_id conversion to internal\_id * Cleanup around the rackspace API for the ec2 to internal\_id transition * merge prop fixes * A little more clean up * Replace model.Instance.ec2\_id with an integer internal\_id so that both APIs can represent the ID to external users * Fix clause comparing id to internal\_id * Adds unit test for calling show() on a non-existing image. Changes return from real Parallax service per sirp's recommendation for actual returned dict() values * Remove debugging code, and move import to the top * Make (some) cloud unit tests run without a full-blown set up * Stub out ec2.images.list() for unit tests * Make rpc calls work in unit tests by adding extra declare\_consumer and consume methods on the FakeRabbit backend * Add a connect\_to\_eventlet method * Un-twistedify get\_console\_output * Create and destroy user appropriately. Remove security group related tests (since they haven't been merged yet) * Run the virt tests by default * Keep handles to loggers open after daemonizing * merged trunk and fixed tests * Cleans up the unit tests that are meant to be run with nosetests * Update Parallax default port number to match Glance * One last bad line * merge from gundlach ec2 conversion * Adds ParallaxClient and TellerClient plumbing for GlanceImageService. Adds stubs FakeParallaxClient and unit tests for LocalImageService and GlanceImageService * Fix broken unit tests * Matches changes in the database / model layer with corresponding fixes to nova.virt.xenapi * Replace the embarrassingly crude string based tests for to\_xml with some more sensible ElementTree based stuff * A shiny, new Auth driver backed by SQLAlchemy. Read it and weep. I did * Move manager\_class instantiation and db.service\_\* calls out of nova.service.Service.\_\_init\_\_ into a new nova.service.Service.startService method which gets called by twisted. This delays opening db connections (and thus sqlite file creation) until after privileges have been shed by twisted * Add pylint thingamajig for startService (name defined by Twisted) * Revert r312 * Add a context of None to the call to db.instance\_get\_all * Honour the --verbose flag by setting the logging level to DEBUG * Accidentally renamed volume related stuff * More clean up and conflict resolution * Move manager\_class instantiation and db.service\_\* calls out of nova.service.Service.\_\_init\_\_ into a new nova.service.Service.startService method which gets called by twisted.
This delays opening db connections (and thus sqlite file creation) until after privileges have been shed by twisted * Bug #653560: AttributeError in VlanManager.periodic\_tasks * Bug #653534: NameError on session\_get in sqlalchemy.api.service\_update * Fixes to address the following issues: * s/APIRequestContext/get\_admin\_context/ <-- sudo for request contexts * Bug #654034: nova-manage doesn't honour --verbose flag * Bug #654025: nova-manage project zip and nova-manage vpn list broken by change in DB semantics when networks are missing * Bug #654023: nova-manage vpn commands broken, resulting in erroneous "Wrong number of arguments supplied" message * fix typo in setup\_compute\_network * pack and unpack context * add missing to\_dict * Bug #653651: XenAPI support completely broken by orm-refactor merge * Bug #653560: AttributeError in VlanManager.periodic\_tasks * Bug #653534: NameError on session\_get in sqlalchemy.api.service\_update * Adjust db api usage according to recent refactoring * Make \_dhcp\_file ensure the existence of the directory containing the files it returns * Keep handles to loggers open after daemonizing * Adds BaseImageService and flag to control image service loading. Adds unit test for local image service * Cleans up the unit tests that are meant to be run with nosetests * Refactor sqlalchemy api to perform contextual authorization * automatically convert strings passed into the api into their respective original values * Fix the deprecation warnings for passing no context * Address a few comments from Todd * Merged trunk * Locked down fixed ips and improved network tests * merged remove-network-index * Fixed flat network manager with network index gone * merged trunk * show project ids for groups instead of user ids * create a new manager for flat networking including dhcp * First attempt at a uuid generator -- but we've lost a 'topic' input so i don't know what that did * Find other places in the code that used ec2\_id or get\_instance\_by\_ec2\_id and use internal\_id as appropriate * Convert EC2 cloud.py from assuming that EC2 IDs are stored directly in the database, to assuming that EC2 IDs should be converted to internal IDs * Method cleanup and fixing the servers tests * merged trunk, removed extra quotas * Adds support for periodic\_tasks on manager that are regularly called by the service and recovers fixed\_ips that didn't get disassociated properly * Replace database instance 'ec2\_id' with 'internal\_id' throughout the nova.db package. internal\_id is now an integer -- we need to figure out how to make this a bigint or something * merged trunk * refactoring * refactoring * Includes changes for creating instances via the Rackspace API. 
Utilizes much of the existing EC2 functionality to power the Rackspace side of things, at least for now * Get rid of mention of mongo, since we are using openstack/swift * Mongo bad, swift good * Add a DB backend for auth manager * Bug #652103: NameError in exception handler in sqlalchemy API layer * Bug #652103: NameError in exception handler in sqlalchemy API layer * Bug #651887: xenapi list\_instances completely broken * Grabbed the wrong copyright info * Cleaned up db/api.py * Refactored APIRequestContext * Bug #651887: xenapi list\_instances completely broken * Simplified authorization with decorators * Removed deprecated bits from NovaBase * Wired up context auth for keypairs * Completed quota context auth * Finished context auth for network * Finished instance context auth * Finished instance context auth * Made network tests pass again * Whoops, forgot the exception handling bit * Missed a few attributes while mirroring the ec2 instance spin up * pylint and pep8 cleanup * Forgot the context module * Some minor cleanup * Servers stuff * merge rsapi\_reboot from gundlach * Wired up context auth for services * Server creation up to, but not including, network configuration * Progress on volumes. Fixed foreign keys to respect deleted flag * Support reboot in api.rackspace by extracting reboot function from api.ec2 into api.cloud * Make Fault raiseable, and add a test to verify that * Make Fault raiseable by inheriting from webob.exc.HTTPException * Related: https://code.launchpad.net/~anso/nova/authupdate/+merge/36925 * Remove debuggish print statement * Make update work correctly * Server update name and password * Support the pagination interface in RS API -- the &offset and &limit parameters are now recognized * Update from trunk to handle one-line merge conflict * Support fault notation in error messages in the RS API * Limit entity lists by &offset and &limit * After update from trunk, a few more exceptions that need to be converted to Faults * fix ordering of rules to actually allow out and drop in * fix the primary and secondary join * autocreate the models and use security\_groups * Began wiring up context authorization * Apply patch from Vish to fix a hardcoded id in the unit tests * removed a few extra items * merged with soren's branch * fix loading to ignore deleted items * Add user-editable name & notes/description to volumes, instances, and images * merged trunk * patch for test * fix join and misnamed method * fix eagerload to be joins that filter by deleted == False * \* Create an AuthManager#update\_user method to change keys and admin status. \* Refactor the auth\_unittest to not care about test order \* Expose the update\_user method via nova-manage * Updates the fix-iptables branch with a number of bugfixes * Fixes reversed arguments in nova-manage project environment * Makes sure that multiple copies of nova-network don't create multiple copies of the same NetworkIndex * Fix a few errors in api calls related to mistyped database methods for floating\_ips: specifically describe addresses and associate address * Merged Termie's branch that starts tornado removal and fixed rpc test cases for twisted.
Nothing is testing the Eventlet version of rpc.call though yet * Adds bpython support to nova-manage shell, because it is super sexy * Adds a disabled flag to service model and check for it when scheduling instances and volumes * Adds bpython support to nova-manage shell, because it is super sexy * Added random ec2 style id's for volumes and instances * fix security group revoke * Fixed tests * Removed str\_id from FixedIp references * missed a comma * improved commenting * Fault support * fix flag defaults * typo s/boo/bool * merged and removed duplicated methods * fixed merge conflicts * removed extra code that slipped in from a test branch * Fixed name property on instance model * Implementation of the Rackspace servers API controller * Added checks for uniqueness for ec2 id * fix test for editable image * Add authorization info for cloud endpoints * Remove TODO, since apparently newer boto doesn't die on extra fields * add disabled column to services and check for it in scheduler * Hook the AuthManger#modify\_user method into nova-manage commands * Refactored adminclient to support multiple regions * merged network-lease-fix * merged floating-ips * move default group creation to api * Implemented random instance and volume strings for ec2 api * Adds --force option to run\_tests.sh to clear virtualenv. Useful when dependencies change * merge from trunk * Instance & Image renaming fixes * merge from gundlach * Testing testing testing * get rid of network indexes and make networks into a pool * Add Serializer.deserialize(xml\_or\_json\_string) * merged trunk * return a value if possible from export\_device\_create\_safe * merged floating-ip-by-project * merged network-lease-fix * merged trunk * Stop trying to install nova-api-new (it's gone). Install nova-scheduler * Call out to 'sudo kill' instead of using os.kill. dnsmasq runs as root or nobody, nova may or may not be running as root, so os.kill won't work * Make sure we also start dnsmasq on startup if we're managing networks * Improve unit tests for network filtering. It now tracks recursive filter dependencies, so even if we change the filter layering, it still correctly checks for the presence of the arp, mac, and ip spoofing filters * Make sure arguments to string format are in the correct order * Make the incoming blocking rules take precedence over the output accept rules * db api call to get instances by user and user checking in each of the server actions * More cleanup, backup\_schedules controller, server details and the beginnings of the servers action route * This is getting ridiculous * Power state mapping * Set priority of security group rules to 300 to make sure they override the defaults * Recreate ensure\_security\_group\_filter. Needed for refresh * Clean up nwfilter code. Move our filters into the ipv4 chain * If neither a security group nor a cidr has been passed, assume cidr=0.0.0.0/0 * More re-work around the ORM changes and testing * Support content type detection in serializer * If an instance never got scheduled for whatever reason, its host will turn up as None. 
Filter those out to make sure refresh works * Only call \_on\_set\_network\_host on nova-network hosts * Allow DHCP requests through, pass the IP of the gateway as the dhcp server * Add a flag that specifies where to find nova-dhcpbridge * Ensure dnsmasq can read updates to dnsmasq conffile * Set up network at manager instantiation time to ensure we're ready to handle the networks we're already supposed to handle * Add db api methods for retrieving the networks for which a host is the designated network host * Apply IP configuration to bridge regardless of whether it existed before. This fixes a race condition on hosts running both compute and network where, if compute got there first, it would set up the bridge, but not do IP configuration (because that's meant to happen on the network host), and when network came around, it would see the interface already there and not configure it further * Removed extra logging from debugging * reorganize iptables clear and make sure use\_nova\_chains is a boolean * allow in and out for network and compute hosts * Modification of test stubbing to match new domain requirements for the router, and removal of the unnecessary rackspace base controller * Minor changes to be committed so trunk can be merged in * disable output drop for the moment because it is too restrictive * add forwarding ACCEPT for outgoing packets on compute host * fix a few missed calls to \_confirm\_rule and 80 char issues * allow mgmt ip access to api * flush the nova chains * Test the AuthManager interface explicitly, in case the user/project wrappers fail or change at some point. Those interfaces should be tested on their own * Update auth manager to have an update\_user method and better tests * add a reset command * Merged Termie's branch and fixed rpc test cases for twisted. Nothing is testing the Eventlet version of rpc.call though yet * improved the shell script for iptables * Finished making admin client work for multi-region * Install nova-scheduler * nova-api-new is no more. Don't attempt to install it * Add multi region support for adminclient * Merging in changes from rs\_auth, since I needed something modern to develop on while waiting for Hudson to right itself * whatever * Put EC2 API -> eventlet back into trunk, fixing the bits that I missed when I put it into trunk on 9/21 * Apply vish's patch * Applied vish's fixes * Implementation of Rackspace token based authentication for the OpenStack API * fixed a few missing params from iptables rules * removed extra line in manage * made use of nova\_ chains a flag and fixed a few typos * put setup\_iptables in the right dir * Fixed rpc consumer to use unique return connection to prevent overlap. This could be reworked to share a connection, but it should be a wait operation and not a fast poll like it was before. We could also keep a cache of opened connections to be used between requests * fixed a couple of typos * Re-added the ramdisk line I accidentally removed * Added a primary\_key to AuthToken, fixed some unbound variables, and now all unit tests pass * Missed the model include, and fixed a broken test after the merge * Some more refactoring and another unit test * Refactored the auth branch based on review feedback * Replaced the existing Rackspace Auth Mechanism with one that mirrors the implementation in the design document * Merged gundlach's branch * renamed ipchains to iptables * merged trunk * Fixed cloudpipe lib init * merged fix-iptables * When calculating timedeltas make sure both timestamps are in UTC.
For people ahead of UTC, it makes the scheduler unit tests pass. For people behind UTC, it makes their services time out after 60 seconds without a heart beat rather than X hours and 60 seconds without a heart beat (where X is the number of hours they're behind UTC) * Spot-fix endpoint reference * Wrap WSGI container in server.serve to make it properly handle command line arguments as well as daemonise properly. Moved api and wsgi imports in the main() function to delay their inclusion until after python-daemon has closed all the file descriptors. Without this, eventlet's epoll fd gets opened before daemonize is called and thus its fd gets closed leading to very, very, very confusing errors * Apply vish's patch * Added FLAGS.FAKE\_subdomain letting you manually set the subdomain for testing on localhost * Address Vishy's comments * All timestamps should be in UTC. Without this patch, the scheduler unit tests fail for anyone sufficiently East of Greenwich * Compare project\_id to '' using == (equality) rather than 'is' (identity). This is needed because '' isn't the same as u'' * Various loose ends for endpoint and tornado removal cleanup, including cloudpipe API addition, rpc.call() cleanup by removing tornado ioloop, and fixing bin/\* programs. Tornado still exists as part of some test cases and those should be reworked to not require it * Re-add root and metadata request handlers to EC2 API * Re-added the ramdisk line I accidentally removed * Soren's patch to fix part of ec2 * Add user display fields to instances & volumes * Responding to eday's feedback -- make a clearer inner wsgi app * Added a primary\_key to AuthToken, fixed some unbound variables, and now all unit tests pass * merge from trunk * typo in instance\_get * typo in instance\_get * User updatable name & description for images * merged trunk and fixed errors * cleaned up exception handling for fixed\_ip\_get * Added server index and detail differentiation * merged trunk * typo s/an/a * Reenable access\_unittest now that it works with new rbac * Rewrite rbac tests to use Authorizer middleware * Missed the model include, and fixed a broke test after the merge * Delete nova.endpoint module, which used Tornado to serve up the Amazon EC2 API. Replace it with nova.api.ec2 module, which serves up the same API via a WSGI app in Eventlet. Convert relevant unit tests from Twisted to eventlet * Remove eventlet test, now that eventlet 0.9.10 has indeed been replaced by 0.9.12 per mtaylor * In desperation, I'm raising eventlet.\_\_version\_\_ so I can see why the trunk tests are failing * merged trunk * bpython is amazing * Fix quota unittest and don't run rbac unit tests for the moment * merged trunk * Some more refactoring and another unit test * Implements quotas with overrides for instances, volumes, and floating ips * Renamed cc\_ip flag to cc\_host * Moves keypairs out of ldap and into the common datastore * Fixes server error on get metadata when instances are started without keypairs * allows api servers to have a list of regions, allowing multi-cluster support if you have a shared image store and user database * Don't use something the shell will escape as a separator. | is now = * Added modify project command to auth manager to allow changing of project manager and description * merged trunk * merged trunk * Refactored the auth branch based on review feedback * Whitespace fixes * Support querying version list, per the RS API spec. 
Fixes bug 613117 * Undo run\_tests.py modification in the hopes of making this merge * Add a RateLimitingMiddleware to the Rackspace API, implementing the rate limits as defined by the current Cloud Servers spec. The Middleware can do rate counting in memory, or (for deployments that have more than one API Server) can offload to a rate limiting service * Use assertRaises * A small fix to the install\_venv program to allow us to run it on the tarmac box as part of the tarmac build * Removes second copy of ProcessExecutionError that creeped in during a bad merge * Adds an omitted yield in compute manager detach\_volume * Move the code that extracts the console output into the virt drivers. Move the code that formats it up into the API layer. Add support for Xen console * Add Xen template and use it by default if libvirt\_type=xen * added rescue mode support and made reboot work from any state * Adds timing fields to instances and volumes to track launch times and schedule times * Fixes two errors in cloud.py in the nova\_orm branch: a) self.network is actually called network\_manager b) the logic for describe-instances check on is\_admin was reversed * Adds timing fields to instances and volumes to track launch times and schedule times * updated docstring * add in a few comments * s/\t/ /g, and add some comments * add in support for ajaxterm console access * add security and session timeout to ajaxterm * initial commit of ajaxterm * Replaced the existing Rackspace Auth Mechanism with one that mirrors the implementation in the design document * Whitespace fixes * Added missing masquerade rules * Fix things not quite merged perfectly -- all tests now pass * Better error message on the failure of a spawned process, and it's a ProcessExecutionException irrespective of how the process is run (twisted or not) * Added iptables host initial configuration * Added iptables host initial configuration * Proposing merge to get feedback on orm refactoring. I am very interested in feedback to all of these changes * Support querying version list * Add support for middleware proxying to a ratelimiting.WSGIApp, for deployments that use more than one API Server and thus can't store ratelimiting counters in memory * Test the WSGIApp * RateLimitingMiddleware tests * Address a couple of the TODO's: We now have half-decent input validation for AuthorizeSecurityGroupIngress and RevokeDitto * Clean up use of ORM to remove the need for scoped\_session * Roll back my slightly over-zealous clean up work * More ORM object cleanup * Clean up use of objects coming out of the ORM * RateLimitingMiddleware * Add ratelimiting package into Nova. After Austin it'll be pulled out into PyPI * When destroying a VM using the XenAPI backend, if the VM is still running (the usual case) the destroy fails. It needs to be powered-off first * Leave out the network setting from the interfaces template. It does not get passed anymore * Network model has network\_str attribute * Cast process input to a str. It must not be unicode, but stuff that comes out of the database might very well be unicode, so using such a value in a template makes the whole thing unicode * Make refresh\_security\_groups play well with inlineCallbacks * Fix up rule generation. It turns out nwfilter gets very, very wonky indeed if you mix rules and rules. 
Setting a TCP rule adds an early rule to ebtables that ends up overriding the rules which are last in that table * Add a bunch of TODO's to the API implementation * Multiple security group support * Remove power state constants that have ended up duplicated following a bad merge. They were moved from nova.compute.node.Instance into nova.compute.power\_state at the same time that Instance was moved into nova.compute.service. We've ended up with these constants in both places * now we can run files - thanks vish * Move vol.destroy() call out of the \_check method in test\_multiple\_volume\_race\_condition test and into a callback of the DeferredList. This should fix the intermittent failure of that test. I /think/ test\_too\_many\_volumes's failure was caused by test\_multiple\_volume\_race\_condition failure, since I have not been able to reproduce its failure after fixing this one * Adds 'shell run' to nova manage, which spawns a shell with flags properly imported * Finish pulling S3ImageService out of this mergeprop * Pull S3ImageService out of this mergeprop * Correctly pass ip\_address to templates * Fix call to listNWFilters * (Untested) Make changes to security group rules propagate to the relevant compute nodes * Filters all get defined when running an instance * added missing yield in detach\_volume * multiple network controllers will not create duplicate indexes * renamed \_get\_quota to get\_quota and moved int(size) into quota.py * add a shell to nova-manage, which respects flags (taken from django) * Move vol.destroy() call out of the \_check method in test\_multiple\_volume\_race\_condition test and into a callback of the DeferredList. This should fix the intermittent failure of that test. I /think/ test\_too\_many\_volumes's failure was caused by test\_multiple\_volume\_race\_condition failure, since I have not been able to reproduce its failure after fixing this one * removed second copy of ProcessExecutionError * move the warnings about leasing ips * simplified query * missed a space * set leased = 0 as well on disassociate update * speed up the query and make sure allocated is false * workaround for mysql select in update * Periodic callback for services and managers. 
Added code to automatically disassociate stale ip addresses * fixed typo * flag for retries on volume commands * auto all and start all exceptions should be ignored * generalized retry into try\_execute * more error handling in volume driver code * handle exceptions thrown by vblade stop and vblade destroy * merged trunk * deleting is set by cloud * re-added missing volume update * Integrity error is in a different exc file * allow multiple volumes to run ensure\_blades without creating duplicates * fixed name for unique constraint * export devices unique * merged instance time and added better concurrency * make fixed\_ip\_get\_by\_address return the instance as well so we don't run into concurrency issues where it is disassociated in between * disassociate floating is supposed to take floating\_address * speed up generation of dhcp\_hosts and don't run into None errors if instance is deleted * don't allocate the same floating ip multiple times * don't allow deletion or attachment of volume unless it is available * fixed reference to misnamed method * manage command for project quotas * merged trunk * implement floating\_ip\_get\_all\_by\_project and renamed db methods that get more than one to get\_all\_by instead of get\_by * fixed reversed args in nova-manage project environment * merged scheduler * fix instance time * move volume to the scheduler * tests for volumes work * update query and test * merged quotas * use gigabytes and cores * use a string version of key name when constructing mpi dict because None doesn't work well in lookup * db not self.db * Security Group API layer cleanup * merged trunk * added terminated\_at to volume and moved setting of terminated\_at into cloud * remerged scheduler * merged trunk * merged trunk * merged trunk * merged trunk * fixed reversed admin logic on describe instances * fixed typo network => network\_manager in cloud.py * fixed old key reference and made keypair name consistent -> key\_pair * typo fixes, add flag to nova-dhcpbridge * fixed tests, added a flag for updating dhcp on disassociate * simplified network instance association * fix network association issue * merged trunk * improved network error case handling for fixed ips * it is called regionEndpoint, and use pipe as a separator * move keypair generation out of auth and fix tests * Fixed manager\_user reference in create\_project * Finished security group / project refactor * delete keypairs when a user is deleted * remove keypair from driver * moved keypairs to db using the same interface * multi-region flag for describe regions * make api error messages more readable * Refactored the security group api to support projects * set dnsName on describe * merged orm and put instance in scheduling state * just warn if an ip was already deallocated * fix mpi 500 on fixed ip * hostname should be string id * dhcpbridge needed host instead of node name * add a simple iterator to NovaBase to support converting into dictionary * Adjust a few things to make the unit tests happy again * First pass of nwfilter based security group implementation. It is not where it is supposed to be and it does not actually do anything yet * couple more errors in metadata * typo in metadata call * fixed messed up call in metadata * added modify project command to allow project manager and description to be updated * Change "exn" to "exc" to fit with the common style * Create and delete security groups works. Adding and revoking rules works. DescribeSecurityGroups returns the groups and rules.
So, the API seems to be done. Yay * merged describe\_speed * merged scheduler * set host when item is scheduled * remove print statements * removed extra quotes around instance\_type * don't pass topic into schedule\_run\_instance * added scheduled\_at to instances and volumes * quotas working and tests passing * address test almost works * quota tests * merged orm * fix unittest * merged orm * fix rare condition where describe is called before instance has an ip * merged orm * make the db creates return refs instead of ids * add missing files for quota * kwargs don't work if you prepend an underscore * merged orm, added database methods for getting volume and ip data for projects * database support for quotas * Correct style issues brought up in termie's review * mocking out quotas * don't need to pass instance\_id to network on associate * floating\_address is the name for the cast * merged support code from orm branch * faster describe\_addresses * added floating ip commands and launched\_at terminated\_at, deleted\_at for objects * merged orm * solution that works with this version * fix describe addresses * remove extraneous get\_host calls that were requiring an extra db trip * pass volume['id'] instead of string id to delete volume * fix volume delete issue and volume hostname display * fix logging for scheduler to properly display method name * fixed logic in set\_state code to stop endless loops * Authorize and Revoke access now works * list command for floating ips * merged describe speed * merged orm * floating ip commands * removed extraneous rollback * speed up describe by loading fixed and floating ips * AuthorizeSecurityGroupIngress now works * switch to using utcnow * Alright, first hole poked all the way through. We can now create security groups and read them back * don't fail in db if context isn't a dict, since we're still using a class based context in the api * logging for backend is now info instead of error * merged orm * merged orm * set state everywhere * put soren's fancy path code in scheduler bin as well * missing deleted ref * merged orm * merged orm * consistent naming for instance\_set\_state * Tests turn things into inlineCallbacks * Missed an instance of attach\_to\_tornado * Remove tornado-related code from almost everything * It's annoying and confusing to have to set PYTHONPATH to point to your development tree before you run any of the scripts * deleted typo * merged orm * merged orm * fixed missing paren * merge orm * make timestamps for instances and volumes, includes additions to get deleted objects from db using deleted flag * merged orm * remove end of line slashes from models.py * Make the scripts in bin/ detect if they're being run from a bzr checkout or an extracted release tarball or whatever and adjust PYTHONPATH accordingly * merged orm * merged orm branch * set state moved to db layer * updated to the new orm code * changed a few unused context to \_context * a few formatting fixes and moved exception * fixed a few bugs in volume handling * merged trunk * Last of cleanup, including removing fake\_storage flage * more fixes from code review * review db code cleanup * review cleanup for compute manager * first pass at cleanup rackspace/servers.py * dhcpbridge fixes from review * more fixes to session handling * few typos in updates * don't log all sql statements * one more whitespace fix * whitespace fixes * fix for getting reference on service update * clean up of session handling * New version of eventlet handles Twisted & eventlet running 
at the same time * fix docstrings and formatting * Oops, APIRequestContext's signature has changed * merged orm * fix floating\_ip to follow standard create pattern * Add stubbed out handler for AuthorizeSecurityGroupIngress EC2 API call * merged orm\_deux * Merged trunk * Add a clean-traffic filterref to the libvirt templates to prevent spoofing and snooping attacks from the guests * Lots of fixes to make the nova commands work properly and make datamodel work with mysql properly * Bug #630640: Duplicated power state constants * Bug #630636: XenAPI VM destroy fails when the VM is still running * removed extra equals * Just a couple of UML-only fixes:  \* Due to an issue with libvirt, we need to chown the disk image to root.  \* Just point UML's console directly at a file, and don't bother with the pty. It was only used for debugging * removed extra file and updated sql note * merged fixed format instances from orm * fixed up format\_instances * merged server.py change from orm branch * reverting accidental search/replace change to server.py * merged orm * removed model from nova-manage * merged orm branch * removed references to compute.model * send ultimate topic in to scheduler * more scheduler tests * test for too many instances work * merged trunk * fix service unit tests * removed dangling files * merged orm branch * merged trunk and cleaned up test * renamed daemon to service and update db on create and destroy * pass all extra args from service to manager * fix test to specify host * inject host into manager * Servers API remodeling and serialization handling * Move nova.endpoint.images to api.ec2 and delete nova.endpoint * Cloud tests pass * OMG got api\_unittests to pass * send requests to the main API instead of to the EC2 subset -- so that it can parse out the '/services/' prefix. Also, oops, match on path\_info instead of path like we're supposed to * Remove unused APIRequestContext.handler * Use port that boto expects * merged orm branch * scheduler + unittests * removed underscores from used context * updated models a bit and removed service classes * Small typos, plus rework api\_unittest to use WSGI instead of Tornado * Replace an if/else with a dict lookup to a factory method * Nurrr * Abstractified generalization mechanism * Revert the changes to the qemu libvirt template and make the appropriate changes in the UML template where they belong * Create console.log ahead of time. This ensures that the user running nova-compute maintains read privileges * This improves the changelog generated as part of "setup.py sdist". If you look at it now, it says that Tarmac has done everything and every little commit is listed. 
With this patch, it only logs the "top-most" commit and credits the author rather than the committer * Fix simple errors to the point where we can run the tests [but not pass] * notes -- conversion 'complete' except now the unit tests won't work and surely i have bugs :) * Moved API tests into a sub-folder of the tests/ and added a stubbed-out test declarations to mirror existing API tickets * Delete rbac.py, moving @rbac decorator knowledge into api.ec2.Authorizer WSGI middleware * Break Router() into Router() and Executor(), and put Authorizer() (currently a stub) in between them * Return error Responses properly, and don't muck with req.params -- make a copy instead * merged orm branch * pylint clean of manager and service * pylint cleanup of db classes * rename node\_name to host * merged trunk * Call getInfo() instead of getVersion() on the libvirt connection object. virConnectGetVersion was not exposed properly in the python bindings until quite recently, so this makes us rather more backwards compatible * Better log formatter for Nova. It's just like gnuchangelog, but logs the author rather than the committer * Remove all Twisted defer references from cloud.py * Remove inlineCallbacks and yield from cloud.py, as eventlet doesn't need it * Move cloudcontroller and admincontroller into new api * Adjust setup.py to match nova-rsapi -> nova-api-new rename * small import cleanup * Get rid of some convoluted exception handling that we don't need in eventlet * First steps in reworking EC2 APIRequestHandler into separate Authenticate() and Router() WSGI apps * Call getInfo() instead of getVersion() on the libvirt connection object. virConnectGetVersion was not exposed properly in the python bindings until quite recently, so this makes us rather more backwards compatible * Fix up setup.py to match nova-rsapi -> nova-api-new rename * a little more cleanup in compute * pylint cleanup of tests * add missing manager classes * volume cleanup * more cleanup and pylint fixes * more pep8 * more pep8 * pep8 cleanup * add sqlalchemy to pip requires * merged trunk, fixed a couple errors * Delete \_\_init\_\_.py in prep for turning apirequesthandler into \_\_init\_\_ * Move APIRequestContext into its own file * Move APIRequest into its own file * run and terminate work * Move class into its own file * fix daemon get * Notes for converting Tornado to Eventlet * undo change to get\_my\_ip * all tests pass again * rollback on exit * merged session from devin * Added session.py * Removed get\_backup\_schedules from the image test * merged devin's sqlalchemy changes * Making tests pass * Reconnect to libvirt on broken connection * pylint fixes for /nova/virt/connection.py * pylint fixes for nova/objectstore/handler.py * ip addresses work now * Add Flavors controller supporting * Resolve conflicts and merge trunk * Detect if libvirt connection has been broken and reestablish it * instance runs * Dead code removal * remove creation of volume groups on boot * tests pass * Making tests pass * Making tests pass * Refactored orm to support atomic actions * moved network code into business layer * move None context up into cloud * split volume into service/manager/driver * moved models.py * removed the last few references to models.py * chown disk images to root for uml. Due to libvirt dropping CAP\_DAC\_OVERRIDE for uml, root needs to have explicit access to the disk images for stuff to work * Create console.log ahead of time. 
This ensures that the user running nova-compute maintains read privileges * fixed service mox test cases * Renamed test.py and moved a test as per merge proposal feedback * fixed volume unit tests * work endpoint/images.py into an S3ImageService. The translation isn't perfect, but it's a start * get to look like trunk * Set UML guests to use a file as their console. This halfway fixes get-console-output for them * network tests pass again * Fixes issue with the same ip being assigned to multiple instances * merged trunk and fixed tests * Support GET //detail * Moved API tests into a sub-folder of the tests/ and added a stubbed-out test declarations to mirror existing API tickets * Turn imageid translator into general translator for rackspace api ids * move network\_type flag so it is accesible in data layer * Use compute.instance\_types for flavor data instead of a FlavorService * more data layer breakouts, lots of fixes to cloud.py * merged jesse * Initial support for Rackspace API /image requests. They will eventually be backed by Glance * Fix a pep8 violation * improve the volume export - sleep & check export * missing context and move volume\_update to before the export * update volume create code * A few small changes to install\_venv to let venv builds work on the tarmac box * small tweaks * move create volume to work like instances * work towards volumes using db layer * merge vish * fix setup compute network * merge vish * merge vish * use vlan for network type since it works * merge vish * more work on getting running instances to work * merge vish * more cleanup * Flavors work * pep8 * Delete unused directory * Move imageservice to its own directory * getting run/terminate/describe to work * OK, break out ternary operator (good to know that it slowed you down to read it) * Style fixes * fix some errors with networking rules * typo in release\_ip * run instances works * Ensure that --gid and --uid options work for both twisted and non-twisted daemons * Fixes an error in setup\_compute\_network that was causing network setup to fail * add back in the needed calls for dhcpbridge * removed old imports and moved flags * merge and fixes to creates to all return id * bunch more fixes * moving network code and fixing run\_instances * jesse's run\_instances changes * fix daemons and move network code * Rework virt.xenapi's concurrency model. There were many places where we were inadvertently blocking the reactor thread. The reworking puts all calls to XenAPI on background threads, so that they won't block the reactor thread * merged trunk and fixed merge errors * Refactored network model access into data abstraction layer * Get the output formatting correct * Typo * Don't serialize in Controller subclass now that wsgi.Controller handles it for us * Move serialize() to wsgi.Controller so \_\_call\_\_ can serialize() action return values if they are dicts * Serialize properly * Support opaque id to rs int id as well * License * Moves auth.manager to the data layer * Add db abstraction and unittets for service.py * Clarified what the 'Mapped device not found' exception really means. Fixed TODO. Some formatting to be closer to 80 chars * Added missing "self." 
* Alphabetize the methods in the db layer * fix concurrency issue with multiple instances getting the same ip * small fixes to network * Fixed typo * Better error message on subprocess spawn fail, and it's a ProcessExecutionException irrespective of how the process is run * Check exit codes when spawning processes by default Also pass --fail to curl so that it sets exit code when download fails * PEP8/pylint cleanup in bin and nova/auth * move volume code into datalayer and cleanup * Complete the Image API against a LocalImageService until Glance's API exists (at which point we'll make a GlanceImageService and make the choice of ImageService plugin configurable.) * Added unit tests for WSGI helpers and base WSGI API * merged termies abstractions * Move deferredToThread into utils, as suggested by termie * Remove whitespace to match style guide * Data abstraction for compute service * this file isn't being used * Cleaned up pep8/pylint style issues in nova/auth. There are still a few pylint warnings in manager.py, but the patch is already fairly large * More pylintrc updates * fix report state * Removed old cloud\_topic queue setup, it is no longer used * last few test fixes * More bin/ pep8/pylint cleanup * fixing more network issues * Added '-' as possible charater in module rgx * Merged with trunk * Updated the tests to use webob, removed the 'called' thing and just use return values instead * Fix unit test bug this uncovered: don't release\_ip that we haven't got from issue\_ip * Fix to better reflect (my believed intent) as to the meaning of error\_ok (ignore stderr vs accept failure) * Merged with trunk * use with\_lockmode for concurrency issues * First in a series of patches to port the API from Tornado to WSGI. Also includes a few small style fixes in the new API code * Pull in ~eday/nova/api-port * Merged trunk * Merged api-port into api-port-1 * Since pylint=0.19 is our version, force everyone to use the disable-msg syntax * Missed one * Removed the 'controllers' directory under 'rackspace' due to full class name redundancy * pep8 typo * Changed our minds: keep pylint equal to Ubuntu Lucid version, and use disable-msg throughout * Fixed typo * Image API work * Newest pylint supports 'disable=', not 'disable-msg=' * Fix pep8 violation * tests pass * network tests pass * Added unittests for wsgi and api * almost there * progress on tests passing * remove references to deleted files so tests run * fix vpn access for auth * merged trunk * removed extra files * network datamodel code * In an effort to keep new and old API code separate, I've created a nova.api to put all new API code under. This means nova.endpoint only contains the old Tornado implementation. I also cleaned up a few pep8 and other style nits in the new API code * No longer installs a virtualenv automatically and adds new options to bypass the interactive prompt * Stylistic improvements * Add documentation to spawn, reboot, and destroy stating that those functions should return Deferreds. Update the fake implementations to do so (the libvirt ones already do, and making the xenapi ones do so is the subject of a current merge request) * start with model code * clean up linux\_net * merged refresh from sleepsonthefloor * See description of change... what's the difference between that message and this message again? 
* Move eventlet-using class out of endpoint/\_\_init\_\_.py into its own submodule, so that twisted-related code using endpoint.[other stuff] wouldn't run eventlet and make unit tests throw crazy errors about eventlet 0.9.10 not playing nicely with twisted * Remove duplicate definition of flag * The file that I create automates this step in http://wiki.openstack.org/InstallationNova20100729 : * Simpler installation, and, can run install\_venv from anywhere instead of just from checkout root * Use the argument handler specified by twistd, if any * Fixes quite a few style issues across the entire nova codebase bringing it much closer to the guide described in HACKING * merge from trunk * merged trunk * merged trunk and fixed conflicts * Fixes issues with allocation and deallocation of fixed and elastic addresses * Added documentation for the nova.virt connection interface, a note about the need to chmod the objectstore script, and a reference for the XenAPI module * Make individual disables for R0201 instead of file-level * All controller actions receive a 'req' parameter containing the webob Request * improve compatibility with ec2 clients * PEP8 and name corrections * rather comprehensive style fixes * fix launching and describing instances to work with sqlalchemy * Add new libvirt\_type option "uml" for user-mode-linux.. This switches the libvirt URI to uml:///system and uses a different template for the libvirt xml * typos * don't try to create and destroy lvs in fake mode * refactoring volume and some cleanup in model and compute * Add documentation to spawn, reboot, and destroy stating that those functions should return Deferreds. Update the fake implementations to do so (the libvirt ones already do, and making the xenapi ones do so is the subject of a current merge request) * Rework virt.xenapi's concurrency model. There were many places where we were inadvertently blocking the reactor thread. 
The reworking puts all calls to XenAPI on background threads, so that they won't block the reactor thread * add refresh on model * merge in latedt from vish * Catches and logs exceptions for rpc calls and raises a RemoteError exception on the caller side * Removes requirement of internet connectivity to run api server * Fixed path to keys directory * Update cloud\_unittest to match renamed internal function * Removes the workaround for syslog-ng of removing newlines * Fixes bug lp:616312 by reversing the order of args in nova-manage when it calls AuthManager.get\_credentials * merged trunk * Sets a hostname for instances that properly resolves and cleans up network classes * merged fix-hostname and fixed conflict * Implemented admin client / admin api for fetching user roles * Improves pep8 compliance and pylint score in network code * Bug #617776: DescribeImagesResponse contains type element, when it should be called imageType * Bug 617913: RunInstances response doesn't meet EC2 specification * remove more direct session interactions * refactor to have base helper class with shared session and engine * ComputeConnectionTestCase is almost working again * more work on trying to get compute tests passing * re-add redis clearing * make the fake-ldap system work again * got run\_tests.py to run (with many failed tests) * Bug #617776: DescribeImagesResponse contains type element, when it should be called imageType * initial commit for orm based models * Add a few unit tests for libvirt\_conn * Move interfaces template into virt/, too * Refactor LibvirtConnection a little bit for easier testing * Remove extra "uml" from os.type * Fixes out of order arguments in get\_credentials * pep8 and pylint cleanup * Support JSON and XML in Serializer * Added note regarding dependency upon XenAPI.py * Added documentation to the nova.virt interface * make rpc.call propogate exception info. Includes tests * Undo the changes to cloud.py that somehow diverged from trunk * Mergeprop cleanup * Mergeprop cleanup * Make WSGI routing support routing to WSGI apps or to controller+action * Make --libvirt\_type=uml do the right thing: Sets the correct libvirt URI and use a special template for the XML * renamed missed reference to Address * die classmethod * merged fix-dhcpbridge * remove class method * typo allocated should be relased * rename address stuff to avoid name collision and make the .all() iterator work again * keep track of leasing state so we can delete ips that didn't ever get leased * remove syslog-ng workaround * Merged with trunk * Implement the same fix as lp:~vishvananda/nova/fix-curl-project, but for virt.xenapi * Fix exception in get\_info * Move libvirt.xml template into nova/virt * Parameterise libvirt URI * Merged with trunk * fix dhcpbridge issues * Adapts the run\_tests.sh script to allow interactive or automated creation of virtualenv, or to run tests outside of a virtualenv * Prototype implementation of Servers controller * Working router that can target WSGI middleware or a standard controller+action * Added a xapi plugin that can pull images from nova-objectstore, and use that to get a disk, kernel, and ramdisk for the VM * Serializing in middleware after all... by tying to the router. maybe a good idea? 
* Merged with trunk * Actually pass in hostname and create a proper model for data in network code * Improved roles functionality (listing & improved test coverage) * support a hostname that can be looked up * updated virtualenv to add eventlet, which is now a requirement * Changes the run\_tests.sh and /tools/install\_venv.py scripts to be more user-friendly and not depend on PIP while not in the virtual environment * Fixed admin api for user roles * Merged list\_roles * fix spacing issue in ldapdriver * Fixes bug lp:615857 by changing the name of the zip export method in nova-manage * Wired up admin api for user roles * change get\_roles to have a flag for project\_roles or not. Don't show 'projectmanager' in list of roles * Throw exceptions for illegal roles on role add * Adds get\_roles commands to manager and driver classes * more pylint fixes * Implement VIF creation in the xenapi module * lots more pylint fixes * work on a router that works with wsgi and non-wsgi routing * Pylint clean of vpn.py * Further pylint cleanup * Oops, we need eventlet as well * pylint cleanup * pep8 cleanup * merged trunk * pylint fixes for nova/objectstore/handler.py * rename create\_zip to zipfile so lazy match works * Quick fix on location of printouts when trying to install virtualenv * Changes the run\_tests.sh and /tools/install\_venv.py scripts to be more user-friendly and not depend on PIP while not in the virtual environment. Running run\_tests.sh should not just work out of the box on all systems supporting easy\_install.. * 2 changes in doing PEP8 & Pylint cleaning: \* adding pep8 and pylint to the PIP requirements files for Tools \* light cleaning work (mostly formatting) on nova/endpoints/cloud.py * More changes to volume to fix concurrency issues. Also testing updates * Merge * Merged nova-tests-apitest into pylint * Merged nova-virt-connection into nova-tests-apitest * Pylint fixes for /nova/tests/api\_unittest.py * pylint fixes for nova/virt/connection.py * merged trunk, fixed an error with releasing ip * fix releasing to work properly * Add some useful features to our flags * pylint fixes for /nova/test.py * Fixes pylint issues in /nova/server.py * importing merges from hudson branch * fixing - removing unused imports per Eric & Jay review * initial cleanup of tests for network * Implement the same fix as lp:~vishvananda/nova/fix-curl-project, but for virt.xenapi * Run correctly even if called while in tools/ directory, as 'python install\_venv.py' * This branch builds off of Todd and Michael's API branches to rework the Rackspace API endpoint and WSGI layers * separated scheduler types into own modules * Fix up variable names instead of disabling pylint naming rule. 
Makes variables able to be a single letter in pylintrc * Disables warning about TODO in code comments in pylintrc * More pylint/pep8 cleanup, this time in bin/\* files * pylint fixes for nova/server.py * remove duplicated report\_state that exists in the base class more pylint fixes * Fixed docstring format per Jay's review * pylint fixes for /nova/test.py * Move the xenapi top level directory under plugins, as suggested by Jay Pipes * Pull trunk merge through lp:~ewanmellor/nova/add-contains * Pull trunk merge through lp:~ewanmellor/nova/xapi-plugin * Merged with trunk again * light cleanup - convention stuff mostly * convention and variable naming cleanup for pylint/pep8 * Used new (clearer) flag names when calling processes * Merged with trunk * Greater compliance with pep8/pylint style checks * removing what appears to be an unused try/except statement - nova.auth.manager.UserError doesn't exist in this codebase. Leftover? Something intended to be there but never added? * variable name cleanup * attempting some cleanup work * adding pep8 and pylint for regular cleanup tasks * Cleaned up pep8/pylint for bin/\* files. I did not fix rsapi since this is already cleaned up in another branch * Merged trunk * Reworked WSGI helper module and converted rackspace API endpoint to use it * Changed the network imports to use new network layout * merged with trunk * Change nova/virt/images.py's \_fetch\_local\_image to accept 4 args, since fetch() tries to call it with that many * Merged Todd and Michael's changes * pep8 and pylint cleanups * Some pylink and pep8 cleanups. Added a pylintrc file * fix copyrights for new files, etc * a few more commands were putting output on stderr. In general, exceptions on stderr output seems like a bad idea * Moved Scheduler classes into scheduler.py. Created a way to specify scheduler class that the SchedulerService uses.. * Make network its own worker! This separates the network logic from the api server, allowing us to have multiple network controllers. There a lot of stuff in networking that is ugly and should be modified with the datamodel changes. I've attempted not to mess with those things too much to keep the changeset small(ha!) * Fixed instance model associations to host (node) and added association to ip * Fixed write authorization for public images * Fixes a bug where if a user was removed from a group after he had a role, he could not be re-added * fix search/replace error * merged trunk * Start breaking out scheduler classes.. * WsgiStack class, eventletserver.serve. Trying to work toward a simple API that anyone can use to start an eventlet-based server composed of several WSGI apps * Use webob to simplify wsgi middleware * Made group membership check only search group instead of subtree. Roles in a group are removed when a user is removed from that group. Added test * Fixes bug#614090 -- nova.virt.images.\_fetch\_local\_image being called with 4 args but only has 3 * Fixed image modification authorization, API cleanup * fixed doc string * compute topic for a node is compute.node not compute:node! * almost there on random scheduler. not pushing to correct compute node topic, yet, apparently.. 
* First pass at making a file pass pep8 and pylint tests as an example * merged trunk * rename networkdata to vpn * remove extra line accidentally added * compute nodes should store total memory and disk space available for VMs * merged from trunk * added bin/nova-listinstances, which is mostly just a duplication of euca-describe-instances but doesn't go through the API * Fixes various concurrency issues in volume worker * Changed volumes to use a pool instead of globbing filesystem for concurrency reasons. Fixed broken tests * clean up nova-manage. If vpn data isn't set for user it skips it * method is called set\_network\_host * fixed circular reference and tests * renamed Vpn to NetworkData, moved the creation of data to inside network * fix rpc command line call, remove useless deferreds * fix error on terminate instance relating to elastic ip * Move the xenapi top level directory under plugins, as suggested by Jay Pipes * fixed tests, moved compute network config call, added notes, made inject option into a boolean * fix extra reference, method passing to network, various errors in elastic\_ips * use iteritems * reference to self.project instead of context.project + self.network\_model instead of network\_model * fixes in get public address and extra references to self.network * method should return network topic instead of network host * use deferreds in network * don't \_\_ module methods * inline commands use returnValue * it helps to save files BEFORE committing * Added note to README * Fixes the curl to pass in the project properly * Adds flag for libvirt type (hvm, qemu, etc) * Fix deprecation warning in AuthManager. \_\_new\_\_ isn't allowed to take args * created assocaition between project and host, modified commands to get host async, simplified calls to network * use get to retrieve node\_name from initial\_state * change network\_service flag to network\_type and don't take full class name * vblade commands randomly toss stuff into stderr, ignore it * delete instance doesn't fail if instances dir doesn't exist * Huge network refactor, Round I * Fixes boto imports to support both beta and older versions of boto * Get IP doesn't fail of you not connected to the intetnet * updated doc string and wrapper * add copyright headers * Fix exception in get\_info * Implement VIF creation * Define \_\_contains\_\_ on BasicModel, so that we can use "x in datamodel" * Fixed instance model associations to host (node) and added association to ip * Added a xapi plugin that can pull images from nova-objectstore, and use that to get a disk, kernel, and ramdisk for the VM. The VM actually boots! * Added project as parameter to admin client x509 zip file download * Turn the private \_image\_url(path) into a public image\_url(image). This will be used by virt.xenapi to instruct xapi as to which images to download * Merged in configurable libvirt\_uri, and fixes to raw disk images from the virtualbox branch * Fixed up some of the raw disk stuff that broke in the abstraction out of libvirt * Merged with raw disk image * Recognize 'magic' kernel value that means "don't use a kernel" - currently aki-00000000 * Fix Tests * Fixes nova volumes. The async commands yield properly. Simplified the call to create volume in cloud. 
Added some notes * another try on fix boto * use user.access instead of user.id * Fixes access key passing in curl statement * Accept a configurable libvirt\_uri * Added Cheetah to pip-requires * Removed duplicate toXml method * Merged with trunk * Merged with trunk, added note about suspicious behaviour * Added exit code checking to process.py (twisted process utils). A bit of class refactoring to make it work & cleaner. Also added some more instructive messages to install\_venv.py, because otherwise people that don't know what they're doing will install the wrong pip... i.e. I did :-) * Make nodaemon twistd processes log to stdout * Make nodaemon twistd processes log to stdout * use the right tag * flag for libvirt type * boto.s3 no longer imports connection, so we need to explicitly import it * Added project param to admin client zip download * boto.utils import doesn't work with new boto, import boto instead * fix imports in endpoint/images.py boto.s3 no longer imports connection, so we need to explicitly import it * Added --fail argument to curl invocations, so that HTTP request fails get surfaced as non-zero exit codes * Merged with trunk * Merged with trunk * strip out some useless imports * Add some useful features to our flags * Fixed pep8 in run\_test.py * Blank commit to get tarmac merge to pick up the tags * Fixed assertion "Someone released me too many times: too many tokens!" * Replace the second singleton unit test, lost during a merge * Merged with trunk to resolve merge conflicts * oops retry and add extra exception check * Fix deprecation warning in AuthManager. \_\_new\_\_ isn't allowed to take args * Added ChangeLog generation * Implemented admin api for rbac * Move the reading of API parameters above the call to \_get\_image, so that they have a chance to take effect * Move the reading of API parameters above the call to \_get\_image, so that they have a chance to take effect * Adds initial support for XenAPI (not yet finished) * More merges from trunk. Not everything came over the first time * Allow driver specification in AuthManager creation * pep8 * Fixed pep8 issues in setup.py - thanks redbo * Use default kernel and ramdisk properly by default * Adds optional user param to the get projects command * Ensures default redis keys are lowercase like they were in prior versions of the code * Pass in environment to dnsmasq properly * Releaed 0.9.0, now on 0.9.1 * Merged trunk * Added ChangeLog generation * Wired up get/add/remove project members * Merged lp:~vishvananda/nova/lp609749 * Removes logging when associating a model to something that isn't a model class * allow driver to be passed in to auth manager instead of depending solely on flag * make redis name default to lower case * Merged get-projects-by-user * Merged trunk * Fixed project api * Specify a filter by user for get projects * Create a model for storing session tokens * Fixed a typo from the the refactor of auth code * Makes ldap flags work again * bzr merge lp:nova/trunk * Tagged 0.9.0 and bumped the version to 0.9.1 * Silence logs when associated models aren't found. Also document methods used ofr associating things. 
And get rid of some duplicated code * Fix dnsmasq commands to pass in environment properly 0.9.0 ----- * Got the tree set for debian packaging * use default kernel and ramdisk and check for legal access * import ldapdriver for flags * Removed extra include * Added the gitignore files back in for the folks who are still on the git * Added a few more missing files to MANIFEST.in and added some placeholder files so that setup.py would carry the empty dir * Updated setup.py file to install stuff on a python setup.py install command * Removed gitignore files * Made run\_tests.sh executable * Put in a single MANIFEST.in file that takes care of things * Changed Makefile to shell script. The Makefile approach completely broke debhelper's ability to figure out that this was a python package * fixed typo from auth refactor * Add sdist make target to build the MANIFEST.in file * Removes debian dir from main tree. We'll add it back in in a different branch * Merged trunk * Wired up user:project auth calls * Bump version to 0.9.0 * Makes the compute and volume daemon workers use a common base class called Service. Adds a NetworkService in preparation for splitting out networking code. General cleanup and standardizarion of naming * fixed path to keys directory * Fixes Bug lp:610611: deleted project vlans are deleted from the datastore before they are reused * Add a 'sdist' make target. It first generates a MANIFEST.in based on what's in bzr, then calls python setup.py sdist * properly delete old vlans assigned to deleted projects * Remove debian/ from main branch * Bump version to 0.9.0. Change author to "OpenStack". Change author\_email to nova@lists.launchpad.net. Change url to http://www.openstack.org/. Change description to "cloud computing fabric controller" * Make "make test" detect whether to use virtualenv or not, thus making virtualenv optional * merged trunk * Makes the objectstore require authorization, checks it properly, and makes nova-compute provide it when fetching images * Automatically choose the correct type of test (virtualenv or system) * Ensure that boto's config has a "Boto" section before attempting to set a value in it * fixes buildpackage failing with dh\_install: missing files * removed old reference from nova-common.install and fixed spacing * Flag for SessionToken ttl setting * resolving conflict w/ merge, cleaning up virtenv setups * resolving conflict w/ merge, cleaning up virtenv setups * Fixes bug#610140. Thanks to Vish and Muharem for the patch * A few minor fixes to the virtualenv installer that were breaking on ubuntu * Give SessionToken an is\_expired method * Refactor of auth code * Fixes bug#610140. Thanks to Vish and Muharem for the patch * Share my updates to the Rackspace API * Fixes to the virtualenv installer * Ensure consistent use of filename for dhcp bridge flag file * renamed xxxservice to service * Began wiring up rbac admin api * fix auth\_driver flag to default to usable driver * Adds support scripts for installing deps into a virtualenv * In fact, it should delete them * Lookup should only not return expired tokens * Adds support scripts for installing deps into a virtualenv * default flag file full path * moved misnamed nova-dchp file * Make \_fetch\_s3\_image pass proper AWS Authorization headers so that image downloads work again * Make image downloads work again in S3 handler. 
Listing worked, but fetching the images failed because I wasn't clever enough to use twisted.web.static.File correctly * Move virtualenv installation out of the makefile * Expiry awareness for SessionToken * class based singleton for SharedPool * Basic standup of SessionToken model for shortlived auth tokens * merged trunk * merged trunk * Updated doc layout to the Sphinx two-dir layout * Replace hardcoded "nova" with FLAGS.control\_exchange * Add a simple set of tests for S3 API (using boto) * Fix references to image\_object. This caused an internal error when using euca-deregister * Set durable=False on TopicPublisher * Added missing import * Replace hardcoded example URL, username, and password with flags called xenapi\_connection\_url, xenapi\_connection\_username, xenapi\_connection\_password * Fix instance cleanup * Fix references to image\_object. This caused an internal error when using euca-deregister * removed unused assignment * More Cleanup of code * Fix references to get\_argument, fixing internal error when calling euca-deregister * Changes nova-volume to use twisted * Fixes up Bucket to throw proper NotFound and NotEmpty exceptions in constructor and delete() method, and fixes up objectstore\_unittest to properly use assertRaises() to check for proper exceptions and remove the assert\_ calls * Adds missing yield statement that was causing partitioning to intermittently fail * Merged lp:~ewanmellor/nova/lp609792 * Merged lp:~ewanmellor/nova/lp609791 * Replace hardcoded "nova" with FLAGS.control\_exchange * Set durable=False on TopicPublisher, so that it matches the flag on TopicConsumer. This ensures that either redeclaration of the control\_exchange will use the same flag, and avoid AMQPChannelException * Add an import so that nova-compute sees the images\_path flag, so that it can be used on the command line * Return a 404 when attempting to access a bucket that does not exist * Removed creation of process pools. We don't use these any more now that we're using process.simple\_execute * Fix assertion "Someone released me too many times: too many tokens!" when more than one process was running at the same time. This was caused by the override of SharedPool.\_\_new\_\_ not stopping ProcessPool.\_\_init\_\_ from being run whenever process.simple\_execute is called * Always make sure to set a Date headers, since it's needed to calculate the S3 Auth header * Updated the README file * Updated sphinx layout to a two-dir layout like swift. 
Updated a doc string to get rid of a Sphinx warning * Updated URLs in the README file to point to current locations * Add missing import following merge from trunk (cset 150) * Merged with trunk, since a lot of useful things have gone in there recently * fixed bug where partition code was sometimes failing due to initial dd not being yielded properly * Fixed bug 608505 - was freeing the wrong address (should have freed 'secondaddress', was freeing 'address') * renamed xxxnode to xxservice * Add (completely untested) code to include an Authorization header for the S3 request to fetch an image * Check signature for S3 requests * Fixes problem with describe-addresses returning all public ips instead of the ones for just the user's project * Fix for extra spaces in export statements in scripts relating to x509 certs * Adds a Makefile to fill dependencies for testing * Fix syslogging of exceptions by stripping newlines from the exception info * Merged fix for bug 608505 so unit tests pass * Check exit codes when spawning processes by default * Nobody wants to take on this twisted cleanup. It works for now, but could be much nicer if twisted has a nice hook-point for exception mapping * syslog changes * typo fixes and extra print statements removed * added todo for ABC * Fixed bug 608505 - was freeing the wrong address (should have freed 'secondaddress', was freeing 'address') * Merged trunk, fixed extra references to fake\_users * refactoring of imports for fakeldapdriver * make nova-network executable * refactor daemons to use common base class in preparation for network refactor * reorder import statement and remove commented-out test case that is the same as api\_unittest in objectstore\_unittest * Fixes up Bucket to throw proper NotFound and NotEmpty exceptions in constructor and delete() method, and fixes up objectstore\_unittest to properly use assertRaises() to check for proper exceptions and remove the assert\_ calls * Fix bug 607501. Raise 403, not exception if Authorization header not passed. Also added missing call to request.finish() & Python exception-handling style tweak * merge with twisted-volume * remove all of the unused saved return values from attach\_to\_twisted * fix for describe addresses showing everyone's public ips * update the logic for calculating network sizes * Locally administered mac addresses have the second least significant bit of the most significant byte set. If this byte is set then udev on ubuntu doesn't set persistent net rules * use a locally administered mac address so it isn't saved by udev * Convert processpool to a singleton, and switch node.py calls to use it. (Replaces passing a processpool object around all the time.) * Fixed the broken reference to * remove spaces from export statements in scripts relating to certs * Cleanups * Able to set up DNS, and remove udev network rules * Move self.ldap to global ldap to make changes easier if we ever implement settings * Cleanup per suggestions * network unittest clean up * Test cleanup, make driver return dictionaries and construct objects in manager * Able to boot without kernel or ramdisk. libvirt.xml.template is now a Cheetah template * Merged https://code.launchpad.net/~justin-fathomdb/nova/copy-error-handling * Merged bug fixes * Map exceptions to 404 / 403 codes, as was done before the move to twisted. However, I don't think this is the right way to do this in Twisted. 
For example, exceptions thrown after the render method returns will not be mapped * Merged lp:~justin-fathomdb/nova/bug607501 * Merged trunk. Fixed new references to UserManager * I put the call to request.finish() in the wrong place. :-( * More docstrings, don't autocreate projects * Raise 401, not exception if Authorization header not passed. Also minor fixes & Python exception-handling style tweak * LdapDriver cleanup: docstrings and parameter ordering * Ask curl to set exit code if resource was not found * Fixes to dhcp lease code to use a flagfile * merged trunk * Massive refactor of users.py * Hmm, serves me right for not understanding the request, eh? :) Now too\_many\_addresses test case is idempotent in regards to running in isolation and uses self.flags.network\_size instead of the magic number 32 * Redirect STDERR to output to an errlog file when running run\_tests.py * Send message ack in rpc.call and make queues durable * Fixed name change caused by remove-vendor merge * Replace tornado objectstore with twisted web * merged in trunk and fixed import merge errors * First commit of XenAPI-specific code (i.e. connections to the open-source community project Xen Cloud Platform, or the open-source commercial product Citrix XenServer) * Remove the tight coupling between nova.compute.monitor and libvirt. The libvirt-specific code was placed in nova.virt.libvirt\_conn by the last changeset. This greatly simplifies the monitor code, and puts the libvirt-specific XML record parsing in a libvirt-specific place * In preparation for XenAPI support, refactor the interface between nova.compute and the hypervisor (i.e. libvirt) * Fixed references to nova.utils that were broken by a change of import statement in the remove-vendor merge * Remove s3\_internal\_port setting. Objectstore should be able to handle the beatings now. As such, nginx is no longer needed, so it's removed from the dependencies and the configuration files are removed * Replace nova-objectstore with a twistd style wrapper. Add a get\_application method to objectstore handler * Minor post-merge fixes * Fixed \_redis\_name and \_redis\_key * Add build\_sphinx support * fix conf file to no longer have daemonize=1 because twistd daemonizes by default * make nova-volume start with twisteds daemonize stuff * Makin the queues non-durable by default * Ack messages during call so rabbit leaks less * simplify call to simple\_execute * merge extra singleton-pool changes * Added a config file to let setup.py drive building the sphinx docs * make simple method wrapper for process pool simple\_execute * change volume code to use twisted * remove calls to runthis from node * merge with singleton pool * Removed unused Pool from process.py, added a singleton pool called SharedPool, changed calls in node to use singleton pool * Fixes things that were not quite right after big merge party * Make S3 API handler more idiomatic Twisted Web-y * \_redis\_name wasn't picking up override\_type correctly, and \_redis\_key wasn't using it * Quick fix to variable names for consistency in documentation.. * Adds a fix to the idempotency of the test\_too\_many\_addresses test case by adding a simple property to the BaseNetwork class and calculating the number of available IPs by asking the network class to tell the test how many static and preallocated IP addresses are in use before entering the loop to "blow up" the address allocation.. * Adds a flag to redirect STDERR when running run\_tests.py. 
Defaults to a truncate-on-write logfile named run\_tests.err.log. Adds ignore rule for generated errlog file * no more print in storage unittest * reorder imports spacing * Fixes to dhcp lease code to use a flagfile * merged trunk * This branch fixes some unfortunate interaction between Nova and boto * Make sure we pass str objects instead of unicode objects to boto as our credentials * remove import of vendor since we have PPA now * Updates the test suite to work * Disabled a tmpdir cleanup * remove vendor * update copyrights * Volume\_ID identifier needed a return in the property. Also looking for race conditions in the destructor * bin to import images from canonical image store * add logging import to datastore * fix merge errors * change default vpn ports and remove complex vpn ip iteration * fix reference to BasicModel and imports * Cleanups related to BasicModel (whitespace, names, etc) * Updating buildbot address * Fixed buildbot * work on importing images * When destroying an Instance, disassociate with Node * Smiteme * Smiteme * Smiteme * Smiteme * Move BasicModel into datastore * Smiteme * Smiteme * Whitespace change * unhardcode the binary name * Fooish * Finish singletonizing UserManager usage * Debian package additions for simple network template * Foo * Whitespace fix * Remove debug statement * Foo * fix a typo * Added build-deps to debian/control that are needed to run test suite. Fixed an error in a test case * optimization to not load all instances when describe instances is called * More buildbot testing * More buildbot testing * More buildbot testing * More buildbot testing * More buildbot testing * More buildbot testing * Addin buildbot * Fix merge changelog and merge errors in utils.py * Fixes from code review * release 0.2.2-10 * fix for extra space in vblade-persist * Avoid using s-expr, pkcs1-conv, and lsh-export-key * release 0.2.2-9 * fixed bug in auth group\_exists * Move nova related configuration files into /etc/nova/ * move check for none before get mpi data * Refactored smoketests flags * Fixes to smoketest flags * Minor smoketest refactoring * fixes from code review * typo in exception in crypto * datetime import typo * added missing isotime method from utils * release 0.2.2-8 * missed a comma * release 0.2.2-7 * use a flag for cert subject * whitespace fixes and header changes * Fixed the os.environ patch (bogus) * Fixes as per Vish review (whitespace, import statements) * Off by one error in the allocation test (can someone check my subnet math?) * Adding more tests, refactoring for dhcp logic * Got dhcpleasor working, with test ENV for testing, and rpc.cast for real world * Capture signals from dnsmasq and use them to update network state * Relax the Twisted dependency to python-twisted-core (rather than the full stack) * releasing version 0.3.0+really0.2.2-0ubuntu0ppa3 * If set, pass KernelId and RamdiskId from RunInstances call to the target compute node * Add a default flag file for nova-manage to help it find the CA * Ship the CA directory in nova-common * Add a dependency on nginx from nova-objectsstore and install a suitable configuration file * releasing version 0.3.0+really0.2.2-0ubuntu0ppa2 * Don't pass --daemonize=1 to nova-compute. 
It's already daemonising by default * Add debian/nova-common.dirs to create var/lib/nova/{buckets,CA,images,instances,keys,networks} * keeper\_path is really caled datastore\_path * Fixed package version * Move templates from python directories to /usr/share/nova * Added --network\_path setting to nova-compute's flagfile * releasing version 0.3.0+really0.2.2-0ubuntu0ppa1 * Use rmdir instead of rm -rf to remove a tempdir * Set better defaults in flagfiles * Fixes and add interface template * Simple network injection * Simple Network avoids vlans * clean a few merge errors from network * Add curl as a dependency of nova-compute * getting started update * getting started update * Remove \_s errors from merge * fix typos in node from merge * remove spaces from default cert * Make sure get\_assigned\_vlans and BaseNetwork.hosts always return a dict, even if the key is currently empty in the KVS * Add \_s instance attribute to Instance class. It's referenced in a bunch of places, but is never set. This is unlikely to be the right fix (why have two attributes pointing to the same object?), but it seems to make ends meet * Replace spaces in x509 cert subject with underscores. It ends up getting split(' ')'ed and passed to subprocess.Popen, so it needs to not have spaces in it, otherwise openssl gets very upset * Expand somewhat on the short and long descriptions in debian/control * Use separate configuration files for the different daemons * Removed trailing whitespace from header * Updated licenses * Added flags to smoketests. General cleanup * removed all references to keeper * reformatting * Vpn ips and ports use redis * review reformat * code review reformat * We need to be able to look up Instance by Node (live migration) * Get rid of RedisModel * formatting fixes and refactoring from code review * reformatting to fit within 80 characters * simplified handling of tempdir for Fakes * fix for multiple shelves for each volume node * add object class violation exception to fakeldap * remove spaces from default cert * remove silly default from generate cert * fix of fakeldap imports and exceptions * More Comments, cleanup, and reformatting * users.py cleanup for exception handling and typo * Make fakeldap use redis * Refactor network.Vlan to be a BasicModel, since it touched Redis * bugfix: rename \_s to datamodel in Node in some places it was overlooked * fix key injection script * Fixes based on code review 27001 * added TODO * Admin API + Worker Tracking * fixed typo * style cleanup * add more info to vpn list * Use flag for vpn key suffix instead of hardcoded string * don't fail to create vpn key if dir exists * Create Volume should only take an integer between 0 and 1000 * Placeholders for missing describe commands * Set forward delay to zero (partial fix to bug #518) * more comment reformatting * fit comment within 80 lines * removed extraneous reference to rpc in objectstore unit test * Fix queue connection bugs * Fix deletion of user when he is the last member of the group * Fix error message for checking for projectmanager role * Installer now creates global developer role * Removed trailing whitespace from header * added nova-instancemonitor debian config * Updated licenses * Added flags to smoketests. 
General cleanup * A few missing files from the twisted patch * Tweaks to get instancemonitor running * Initial commit of nodemonitor * Create DescribeImageAttribute api method * release 0.2.2-6 * disk.py needed input for key injection to work * release 2.2-5 * message checking callbacks only need to run 10 times a second * release 2.2-4 * trackback formatting isn't logging correctly * documentation updates * fix missing tab in nova-manage * Release 2.2-3 * use logger to print trace of unhandled exceptions * add exit status to nova-manage * fix fakeldap so it can use redis keeper * fix is\_running failing because state was stored as a string * more commands in nova-manage for projects and roles * More volume test fixes * typo in reboot instances * Fix mount of drive for test image * don't need sudo anymore * Cleaning up smoketests * boto uses instance\_type not size * Fix to volume smoketests * fix display of project name for admin in describe instances * make sure to deexpress before we remove the host since deexpress uses the host * fix error in disassociate address * fixed reversed filtering logic * filter keypairs for vpn keys * allow multiple vpn connections with the same credentials * Added admin command to restart networks * hide vpn instances unless you are an admin and allow run\_instances to launch vpn image even if it is private * typo in my ping call * try to ping vpn instances * sensible defaults for instance types * add missing import to pipelib * Give vpns the proper ip address * Fix format addresses * Release 0.2.2-2 * fix more casing errors and make attachment set print * removed extraneous .volume\_id * don't allow volumes to be attached to the same mountpoint * fix case for volume attributes * fix sectors off by one * Don't use keeper for instances * fix default state to be 0 instead of pending * Release 0.2.2 * Fix for mpi cpu reporting * fix detach volume * fix status code printing in cloud * add project ids to volumes * add back accidentally removed bridge name. str is reserved, so don't use it as a variable name * whitespace fixes and format instances set of object fixes * Use instdir to iterate through instances * fix bridge name * Adding basic validation of volume size on creation, plus tests for it * finished gutting keeper from volume * First pass at validation unit tests. Haven't figured out class methods yet * Removing keeper sludge * Set volume status properly, first pass at validation decorators * Adding missing default values and fixing bare Redis fetch for volume list * one more handler typo * fix objectstore handler typo * fix modify image attribute typo * NetworkNode doesn't exist anymore * Added back in missing gateway property on networks * Refactored Instance to get rid of \_s bits, and fixed some bugs in state management * Delete instance files on shutdown * Flush redis db in setup and teardown of tests * Cleaning up my accidental merge of the docs branch * change pipelib to work with projects * Volumes support intermediate state. 
Don't have to cast to storage nodes for attach/detach anymore, just let node update redis with state * Adding nojekyll for directories * Fix for #437 (deleting attached volumes), plus some >9 blade\_id fixes * fix instance iteration to use self.instdir.all instead of older iterators * nasa ldap defaults * sensible rbac defaults * Tests for rbac code * Patch to allow rbac * Adding mpi data * Adding cloudpipe and vpn data back in to network.py * how we build our debs * Revert "fix a bug with AOE number generation" * re-added cloudpipe * devin's smoketests * tools to clean vlans and run our old install script * fix a bug with AOE number generation * Initial commit of nodemonitor * Create DescribeImageAttribute api method * Create DescribeImageAttribute api method * More rackspace API * git checkpoint commit post-wsgi * update spacing * implement image serving in objectstore so nginx isn't required in development * update twitter username * make a "Running" topic instead of having it flow under "Configuration" * Make nginx config be in a code block * More doc updates: nginx & pycurl * Add a README, because GitHub loves them. Update the getting started docs * update spacing * Commit what I have almost working before diverging * first go at moving from tornado to twisted * implement image serving in objectstore so nginx isn't required in development * update twitter username * Update documentation * fix for reactor.spawnProcess sending deprecation warning * patch from issue 4001 * Fix for LoopingCall failing Added in exception logging around amqp calls Creating deferred in receive before ack() message was causing IOError (interrupted system calls), probably because the same message was getting processed twice in some situations, causing the system calls to be doubled. Moving the ack() earlier fixed the problem. The code works now with an interval of 0 but that causes heavy processor usage. An interval of 0.01 keeps the cpu usage within reasonable limits * get rid of anyjson in rpc and fix bad reference to rpc.Connection * gateway undefined * fix cloud instances method * Various cloud fixes * make get\_my\_ip return 127.0.0.1 for testing * Adds a Twisted implementation of a process pool * make a "Running" topic instead of having it flow under "Configuration" * Make nginx config be in a code block * More doc updates: nginx & pycurl * Add a README, because GitHub loves them. 
Update the getting started docs * whitespace fixes for nova/utils.py * Add project methods to nova-manage * Fix novarc to use project when creating access key * removed reference to nonexistent flag * Josh's networking refactor, modified to work with projects * Merged Vish's work on adding projects to nova * missed the gitignore * initial commit ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/HACKING.rst0000664000175000017500000002051200000000000014364 0ustar00zuulzuul00000000000000Nova Style Commandments ======================= - Step 1: Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/ - Step 2: Read on Nova Specific Commandments --------------------------- - [N307] ``nova.db`` imports are not allowed in ``nova/virt/*`` - [N309] no db session in public API methods (disabled) This enforces a guideline defined in ``oslo.db.sqlalchemy.session`` - [N310] timeutils.utcnow() wrapper must be used instead of direct calls to datetime.datetime.utcnow() to make it easy to override its return value in tests - [N311] importing code from other virt drivers forbidden Code that needs to be shared between virt drivers should be moved into a common module - [N312] using config vars from other virt drivers forbidden Config parameters that need to be shared between virt drivers should be moved into a common module - [N313] capitalize help string Config parameter help strings should have a capitalized first letter - [N316] Change assertTrue(isinstance(A, B)) by optimal assert like assertIsInstance(A, B). - [N317] Change assertEqual(type(A), B) by optimal assert like assertIsInstance(A, B) - [N319] Validate that debug level logs are not translated. - [N320] Setting CONF.* attributes directly in tests is forbidden. Use self.flags(option=value) instead. - [N322] Method's default argument shouldn't be mutable - [N323] Ensure that the _() function is explicitly imported to ensure proper translations. - [N324] Ensure that jsonutils.%(fun)s must be used instead of json.%(fun)s - [N325] str() and unicode() cannot be used on an exception. Remove use or use six.text_type() - [N326] Translated messages cannot be concatenated. String should be included in translated message. - [N327] Do not use xrange(). xrange() is not compatible with Python 3. Use range() or six.moves.range() instead. - [N332] Check that the api_version decorator is the first decorator on a method - [N334] Change assertTrue/False(A in/not in B, message) to the more specific assertIn/NotIn(A, B, message) - [N335] Check for usage of deprecated assertRaisesRegexp - [N336] Must use a dict comprehension instead of a dict constructor with a sequence of key-value pairs. - [N337] Don't import translation in tests - [N338] Change assertEqual(A in B, True), assertEqual(True, A in B), assertEqual(A in B, False) or assertEqual(False, A in B) to the more specific assertIn/NotIn(A, B) - [N339] Check common raise_feature_not_supported() is used for v2.1 HTTPNotImplemented response. - [N340] Check nova.utils.spawn() is used instead of greenthread.spawn() and eventlet.spawn() - [N341] contextlib.nested is deprecated - [N342] Config options should be in the central location ``nova/conf/`` - [N343] Check for common double word typos - [N344] Python 3: do not use dict.iteritems. - [N345] Python 3: do not use dict.iterkeys. - [N346] Python 3: do not use dict.itervalues. 
- [N348] Deprecated library function os.popen() - [N349] Check for closures in tests which are not used - [N350] Policy registration should be in the central location ``nova/policies/`` - [N351] Do not use the oslo_policy.policy.Enforcer.enforce() method. - [N352] LOG.warn is deprecated. Enforce use of LOG.warning. - [N353] Validate that context objects are not passed in logging calls. - [N355] Enforce use of assertTrue/assertFalse - [N356] Enforce use of assertIs/assertIsNot - [N357] Use oslo_utils.uuidutils or uuidsentinel (in case of test cases) to generate UUID instead of uuid4(). - [N358] Return must always be followed by a space when returning a value. - [N359] Check for redundant import aliases. - [N360] Yield must always be followed by a space when yielding a value. - [N361] Check for usage of deprecated assertRegexpMatches and assertNotRegexpMatches - [N362] Imports for privsep modules should be specific. Use "import nova.privsep.path", not "from nova.privsep import path". This ensures callers know that the method they're calling is using privilege escalation. - [N363] Disallow ``(not_a_tuple)`` because you meant ``(a_tuple_of_one,)``. - [N364] Check non-existent mock assertion methods and attributes. - [N365] Check misuse of assertTrue/assertIsNone. Creating Unit Tests ------------------- For every new feature, unit tests should be created that both test and (implicitly) document the usage of said feature. If submitting a patch for a bug that had no unit test, a new passing unit test should be added. If a submitted bug fix does have a unit test, be sure to add a new one that fails without the patch and passes with the patch. For more information on creating unit tests and utilizing the testing infrastructure in OpenStack Nova, please read ``nova/tests/unit/README.rst``. Running Tests ------------- The testing system is based on a combination of tox and stestr. The canonical approach to running tests is to simply run the command ``tox``. This will create virtual environments, populate them with dependencies and run all of the tests that OpenStack CI systems run. Behind the scenes, tox is running ``stestr run``, but is set up such that you can supply any additional stestr arguments that are needed to tox. For example, you can run: ``tox -- --analyze-isolation`` to cause tox to tell stestr to add --analyze-isolation to its argument list. Python packages may also have dependencies that are outside of tox's ability to install. Please refer to `Development Quickstart`_ for a list of those packages on Ubuntu, Fedora and Mac OS X. To run a single or restricted set of tests, pass a regex that matches the class name containing the tests as an extra ``tox`` argument; e.g. ``tox -- TestWSGIServer`` (note the double-hyphen) will test all WSGI server tests from ``nova/tests/unit/test_wsgi.py``; ``-- TestWSGIServer.test_uri_length_limit`` would run just that test, and ``-- TestWSGIServer|TestWSGIServerWithSSL`` would run tests from both classes. It is also possible to run the tests inside of a virtual environment you have created, or it is possible that you have all of the dependencies installed locally already. In this case, you can interact with the stestr command directly. Running ``stestr run`` will run the entire test suite. ``stestr run --concurrency=1`` will run tests serially (by default, stestr runs tests in parallel).
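For quick reference, the invocations described above can be combined as in the following console sketch; ``TestWSGIServer`` and ``test_uri_length_limit`` are only the illustrative names already used in this section, so substitute your own test classes as needed::

    # Run the full suite the way CI does; tox builds the virtualenvs and installs dependencies.
    tox

    # Run only the tests whose class name matches the regex (note the double-hyphen separator).
    tox -- TestWSGIServer

    # Run a single test method from that class.
    tox -- TestWSGIServer.test_uri_length_limit

    # With dependencies already installed, drive stestr directly; --concurrency=1 runs tests serially.
    stestr run --concurrency=1
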
More information about stestr can be found at: http://stestr.readthedocs.io/ Since when testing locally, running the entire test suite on a regular basis is prohibitively expensive, the ``tools/run-tests-for-diff.sh`` script is provided as a convenient way to run selected tests using output from ``git diff``. For example, this allows running only the test files changed/added in the working tree:: tools/run-tests-for-diff.sh However since it passes its arguments directly to ``git diff``, tests can be selected in lots of other interesting ways, e.g. it can run all tests affected by a single commit at the tip of a given branch:: tools/run-tests-for-diff.sh mybranch^! or all those affected by a range of commits, e.g. a branch containing a whole patch series for a blueprint:: tools/run-tests-for-diff.sh gerrit/master..bp/my-blueprint It supports the same ``-HEAD`` invocation syntax as ``flake8wrap.sh`` (as used by the ``fast8`` tox environment):: tools/run-tests-for-diff.sh -HEAD By default tests log at ``INFO`` level. It is possible to make them log at ``DEBUG`` level by exporting the ``OS_DEBUG`` environment variable to ``True``. .. _Development Quickstart: https://docs.openstack.org/nova/latest/contributor/development-environment.html Building Docs ------------- Normal Sphinx docs can be built via the setuptools ``build_sphinx`` command. To do this via ``tox``, simply run ``tox -e docs``, which will cause a virtualenv with all of the needed dependencies to be created and then inside of the virtualenv, the docs will be created and put into doc/build/html. Building a PDF of the Documentation ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ If you'd like a PDF of the documentation, you'll need LaTeX and ImageMagick installed, and additionally some fonts. On Ubuntu systems, you can get what you need with:: apt-get install texlive-full imagemagick Then you can use the ``build_latex_pdf.sh`` script in tools/ to take care of both the sphinx latex generation and the latex compilation. For example:: tools/build_latex_pdf.sh The script must be run from the root of the Nova repository and it'll copy the output pdf to Nova.pdf in that directory. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/LICENSE0000664000175000017500000002363700000000000013606 0ustar00zuulzuul00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. 
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/MAINTAINERS0000664000175000017500000000135400000000000014266 0ustar00zuulzuul00000000000000Nova doesn't have maintainers in the same way as the Linux Kernel. However, we do have sub-teams who maintain parts of Nova and a series of nominated "czars" to deal with cross functional tasks. Each of these sub-teams and roles are documented on our wiki at https://wiki.openstack.org/wiki/Nova You can find helpful contacts for many parts of our code repository at https://wiki.openstack.org/wiki/Nova#Developer_Contacts We also have a page which documents tips and mentoring opportunities for new Nova developers at https://wiki.openstack.org/wiki/Nova/Mentoring Finally, you should also check out our developer reference at https://docs.openstack.org/nova/latest/contributor/index.html Thanks for your interest in Nova, please come again! ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.8144658 nova-21.2.4/PKG-INFO0000664000175000017500000000736400000000000013675 0ustar00zuulzuul00000000000000Metadata-Version: 2.1 Name: nova Version: 21.2.4 Summary: Cloud computing fabric controller Home-page: https://docs.openstack.org/nova/latest/ Author: OpenStack Author-email: openstack-discuss@lists.openstack.org License: UNKNOWN Description: ============== OpenStack Nova ============== .. image:: https://governance.openstack.org/tc/badges/nova.svg :target: https://governance.openstack.org/tc/reference/tags/index.html .. Change things from this point on OpenStack Nova provides a cloud computing fabric controller, supporting a wide variety of compute technologies, including: libvirt (KVM, Xen, LXC and more), Hyper-V, VMware, XenServer, OpenStack Ironic and PowerVM. Use the following resources to learn more. 
API --- To learn how to use Nova's API, consult the documentation available online at: - `Compute API Guide `__ - `Compute API Reference `__ For more information on OpenStack APIs, SDKs and CLIs in general, refer to: - `OpenStack for App Developers `__ - `Development resources for OpenStack clouds `__ Operators --------- To learn how to deploy and configure OpenStack Nova, consult the documentation available online at: - `OpenStack Nova `__ In the unfortunate event that bugs are discovered, they should be reported to the appropriate bug tracker. If you obtained the software from a 3rd party operating system vendor, it is often wise to use their own bug tracker for reporting problems. In all other cases use the master OpenStack bug tracker, available at: - `Bug Tracker `__ Developers ---------- For information on how to contribute to Nova, please see the contents of the CONTRIBUTING.rst. Any new code must follow the development guidelines detailed in the HACKING.rst file, and pass all unit tests. Further developer focused documentation is available at: - `Official Nova Documentation `__ - `Official Client Documentation `__ Other Information ----------------- During each `Summit`_ and `Project Team Gathering`_, we agree on what the whole community wants to focus on for the upcoming release. The plans for nova can be found at: - `Nova Specs `__ .. _Summit: https://www.openstack.org/summit/ .. _Project Team Gathering: https://www.openstack.org/ptg/ Platform: UNKNOWN Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3 :: Only Classifier: Programming Language :: Python :: Implementation :: CPython Requires-Python: >=3.6 Provides-Extra: osprofiler Provides-Extra: test ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/README.rst0000664000175000017500000000443500000000000014263 0ustar00zuulzuul00000000000000============== OpenStack Nova ============== .. image:: https://governance.openstack.org/tc/badges/nova.svg :target: https://governance.openstack.org/tc/reference/tags/index.html .. Change things from this point on OpenStack Nova provides a cloud computing fabric controller, supporting a wide variety of compute technologies, including: libvirt (KVM, Xen, LXC and more), Hyper-V, VMware, XenServer, OpenStack Ironic and PowerVM. Use the following resources to learn more. API --- To learn how to use Nova's API, consult the documentation available online at: - `Compute API Guide `__ - `Compute API Reference `__ For more information on OpenStack APIs, SDKs and CLIs in general, refer to: - `OpenStack for App Developers `__ - `Development resources for OpenStack clouds `__ Operators --------- To learn how to deploy and configure OpenStack Nova, consult the documentation available online at: - `OpenStack Nova `__ In the unfortunate event that bugs are discovered, they should be reported to the appropriate bug tracker. If you obtained the software from a 3rd party operating system vendor, it is often wise to use their own bug tracker for reporting problems. 
In all other cases use the master OpenStack bug tracker, available at: - `Bug Tracker `__ Developers ---------- For information on how to contribute to Nova, please see the contents of the CONTRIBUTING.rst. Any new code must follow the development guidelines detailed in the HACKING.rst file, and pass all unit tests. Further developer focused documentation is available at: - `Official Nova Documentation `__ - `Official Client Documentation `__ Other Information ----------------- During each `Summit`_ and `Project Team Gathering`_, we agree on what the whole community wants to focus on for the upcoming release. The plans for nova can be found at: - `Nova Specs `__ .. _Summit: https://www.openstack.org/summit/ .. _Project Team Gathering: https://www.openstack.org/ptg/ ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0464725 nova-21.2.4/api-guide/0000775000175000017500000000000000000000000014432 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.098472 nova-21.2.4/api-guide/source/0000775000175000017500000000000000000000000015732 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/api-guide/source/accelerator-support.rst0000664000175000017500000000310600000000000022462 0ustar00zuulzuul00000000000000============================== Using accelerators with Cyborg ============================== Starting from microversion 2.82, nova supports creating servers with accelerators provisioned with the Cyborg service, which provides lifecycle management for accelerators. To launch servers with accelerators, the administrator (or an user with appropriate privileges) must do the following: * Create a device profile in Cyborg, which specifies what accelerator resources need to be provisioned. (See `Cyborg device profiles API `_. * Set the device profile name as an extra spec in a chosen flavor, with this syntax: .. code:: accel:device_profile=$device_profile_name The chosen flavor may be a newly created one or an existing one. * Use that flavor to create a server: .. code:: openstack server create --flavor $myflavor --image $myimage $servername As of 21.0.0 (Ussuri), nova supports only specific operations for instances with accelerators. The lists of supported and unsupported operations are as below: * Supported operations. * Creation and deletion. * Reboots (soft and hard). * Pause and unpause. * Stop and start. * Take a snapshot. * Backup. * Rescue and unrescue. * Unsupported operations * Rebuild. * Resize. * Evacuate. * Suspend and resume. * Shelve and unshelve. * Cold migration. * Live migration. Some operations, such as lock and unlock, work as they are effectively no-ops for accelerators. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-guide/source/authentication.rst0000664000175000017500000000102300000000000021477 0ustar00zuulzuul00000000000000============== Authentication ============== Each HTTP request against the OpenStack Compute system requires the inclusion of specific authentication credentials. A single deployment may support multiple authentication schemes (OAuth, Basic Auth, Token). The authentication scheme is provided by the OpenStack Identity service. You can contact your provider to determine the best way to authenticate against the Compute API. .. 
note:: Some authentication schemes may require that the API operate using SSL over HTTP (HTTPS). ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/api-guide/source/conf.py0000664000175000017500000002240000000000000017227 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # # Compute API documentation build configuration file # # All configuration values have a default; values that are commented out # serve to show the default. # import sys # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # sys.path.insert(0, os.path.abspath('.')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = ['openstackdocstheme', 'sphinx.ext.todo'] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The 'todo' and 'todolist' directive produce output. todo_include_todos = True # The encoding of source files. # source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Compute API Guide' bug_tag = u'api-guide' repository_name = 'openstack/nova' bug_project = 'nova' # Must set this variable to include year, month, day, hours, and minutes. html_last_updated_fmt = '%Y-%m-%d %H:%M' copyright = u'2015, OpenStack contributors' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = '2.1.0' # The full version, including alpha/beta/rc tags. release = '2.1.0' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all # documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). 
# add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. # show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. # modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. # keep_warnings = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [openstackdocstheme.get_html_theme_path()] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = [] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. # html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. html_last_updated_fmt = '%Y-%m-%d %H:%M' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_domain_indices = True # If false, no index is generated. html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'compute-api-guide' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). 
# 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. # 'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ ('index', 'ComputeAPI.tex', u'Compute API Documentation', u'OpenStack contributors', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # If true, show page references after internal links. # latex_show_pagerefs = False # If true, show URL addresses after external links. # latex_show_urls = False # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'computeapi', u'Compute API Documentation', [u'OpenStack contributors'], 1) ] # If true, show URL addresses after external links. # man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'ComputeAPIGuide', u'Compute API Guide', u'OpenStack contributors', 'APIGuide', 'This guide teaches OpenStack Compute service users concepts about ' 'managing resources in an OpenStack cloud with the Compute API.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. # texinfo_appendices = [] # If false, no module index is generated. # texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. # texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. # texinfo_no_detailmenu = False # -- Options for Internationalization output ------------------------------ locale_dirs = ['locale/'] # -- Options for PDF output -------------------------------------------------- pdf_documents = [ ('index', u'ComputeAPIGuide', u'Compute API Guide', u'OpenStack ' 'contributors') ] # -- Options for openstackdocstheme ------------------------------------------- openstack_projects = [ 'glance', 'nova', 'neutron', 'placement', ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-guide/source/down_cells.rst0000664000175000017500000003572100000000000020625 0ustar00zuulzuul00000000000000=================== Handling Down Cells =================== Starting from microversion 2.69 if there are transient conditions in a deployment like partial infrastructure failures (for example a cell not being reachable), some API responses may contain partial results (i.e. be missing some keys). The server operations which exhibit this behavior are described below: * List Servers (GET /servers): This operation may give partial constructs from the non-responsive portion of the infrastructure. 
A typical response, while listing servers from unreachable parts of the infrastructure, would include only the following keys from available information: - status: The state of the server which will be "UNKNOWN". - id: The UUID of the server. - links: Links to the servers in question. A sample response for a GET /servers request that includes one result each from an unreachable and a healthy part of the infrastructure is shown below. Response:: { "servers": [ { "status": "UNKNOWN", "id": "bcc6c6dd-3d0a-4633-9586-60878fd68edb", "links": [ { "rel": "self", "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/bcc6c6dd-3d0a-4633-9586-60878fd68edb" }, { "rel": "bookmark", "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/bcc6c6dd-3d0a-4633-9586-60878fd68edb" } ] }, { "id": "22c91117-08de-4894-9aa9-6ef382400985", "name": "test_server", "links": [ { "rel": "self", "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/22c91117-08de-4894-9aa9-6ef382400985" }, { "rel": "bookmark", "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/22c91117-08de-4894-9aa9-6ef382400985" } ] } ] } * List Servers Detailed (GET /servers/detail): This operation may give partial constructs from the non-responsive portion of the infrastructure. A typical response, while listing servers from unreachable parts of the infrastructure, would include only the following keys from available information: - status: The state of the server which will be "UNKNOWN". - id: The UUID of the server. - tenant_id: The tenant_id to which the server belongs to. - created: The time of server creation. - links: Links to the servers in question. - security_groups: One or more security groups. (Optional) A sample response for a GET /servers/details request that includes one result each from an unreachable and a healthy part of the infrastructure is shown below. 
Response:: { "servers": [ { "created": "2018-06-29T15:07:29Z", "id": "bcc6c6dd-3d0a-4633-9586-60878fd68edb", "status": "UNKNOWN", "tenant_id": "940f47b984034c7f8f9624ab28f5643c", "security_groups": [ { "name": "default" } ], "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/bcc6c6dd-3d0a-4633-9586-60878fd68edb", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/bcc6c6dd-3d0a-4633-9586-60878fd68edb", "rel": "bookmark" } ] }, { "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "nova", "OS-EXT-SRV-ATTR:host": "compute", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "r-y0w4v32k", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2017-10-10T15:49:09.516729", "OS-SRV-USG:terminated_at": null, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "aa:bb:cc:dd:ee:ff", "OS-EXT-IPS:type": "fixed", "addr": "192.168.0.3", "version": 4 } ] }, "config_drive": "", "created": "2017-10-10T15:49:08Z", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": { "hw:numa_nodes": "1" }, "original_name": "m1.tiny.specs", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "host_status": "UP", "id": "569f39f9-7c76-42a1-9c2d-8394e2638a6d", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/569f39f9-7c76-42a1-9c2d-8394e2638a6d", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/569f39f9-7c76-42a1-9c2d-8394e2638a6d", "rel": "bookmark" } ], "locked": false, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "os-extended-volumes:volumes_attached": [], "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": [ "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8", "674736e3-f25c-405c-8362-bbf991e0ce0a" ], "updated": "2017-10-10T15:49:09Z", "user_id": "fake" } ] } **Edge Cases** * **Filters:** If the user is listing servers using filters, results from unreachable parts of the infrastructure cannot be tested for matching those filters and thus no minimalistic construct will be provided. Note that by default ``openstack server list`` uses the ``deleted=False`` and ``project_id=tenant_id`` filters and since we know both of these fundamental values at all times, they are the only allowed filters to be applied to servers with only partial information available. Hence only doing ``openstack server list`` and ``openstack server list --all-projects`` (admin only) will show minimalistic results when parts of the infrastructure are unreachable. 
Other filters like ``openstack server list --deleted`` or ``openstack server list --host xx`` will skip the results depending on the administrator's configuration of the deployment. Note that the filter ``openstack server list --limit`` will also skip the results and if not specified will return 1000 (or the configured default) records from the available parts of the infrastructure. * **Marker:** If the user does ``openstack server list --marker`` it will fail with a 500 if the marker is an instance that is no longer reachable. * **Sorting:** We exclude the unreachable parts of the infrastructure just like we do for filters since there is no way of obtaining valid sorted results from those parts with missing information. * **Paging:** We ignore the parts of the deployment which are non-responsive. For example if we have three cells A (reachable state), B (unreachable state) and C (reachable state) and if the marker is half way in A, we would get the remaining half of the results from A, all the results from C and ignore cell B. .. note:: All the edge cases that are not supported for minimal constructs would give responses based on the administrator's configuration of the deployment, either skipping those results or returning an error. * Show Server Details (GET /servers/{server_id}): This operation may give partial constructs from the non-responsive portion of the infrastructure. A typical response while viewing a server from an unreachable part of the infrastructure would include only the following keys from available information: - status: The state of the server which will be "UNKNOWN". - id: The UUID of the server. - tenant_id: The tenant_id to which the server belongs to. - created: The time of server creation. - user_id: The user_id to which the server belongs to. This may be "UNKNOWN" for older servers. - image: The image details of the server. If it is not set like in the boot-from-volume case, this value will be an empty string. - flavor: The flavor details of the server. - availability_zone: The availability_zone of the server if it was specified during boot time and "UNKNOWN" otherwise. - power_state: Its value will be 0 (``NOSTATE``). - links: Links to the servers in question. - server_groups: The UUIDs of the server groups to which the server belongs. Currently this can contain at most one entry. Note that this key will be in the response only from the "2.71" microversion. A sample response for a GET /servers/{server_id} request that includes one server from an unreachable part of the infrastructure is shown below. 
Response:: { "server": [ { "created": "2018-06-29T15:07:29Z", "status": "UNKNOWN", "tenant_id": "940f47b984034c7f8f9624ab28f5643c", "id": "bcc6c6dd-3d0a-4633-9586-60878fd68edb", "user_id": "940f47b984034c7f8f9624ab28f5643c", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", }, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": { "hw:numa_nodes": "1" }, "original_name": "m1.tiny.specs", "ram": 512, "swap": 0, "vcpus": 1 }, "OS-EXT-AZ:availability_zone": "geneva", "OS-EXT-STS:power_state": 0, "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/bcc6c6dd-3d0a-4633-9586-60878fd68edb", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/bcc6c6dd-3d0a-4633-9586-60878fd68edb", "rel": "bookmark" } ], "server_groups": ["0fd77252-4eef-4ec4-ae9b-e05dfc98aeac"] } ] } * List Compute Services (GET /os-services): This operation may give partial constructs for the services with :program:`nova-compute` as their binary from the non-responsive portion of the infrastructure. A typical response while listing the compute services from unreachable parts of the infrastructure would include only the following keys for the :program:`nova-compute` services from available information while the other services like the :program:`nova-conductor` service will be skipped from the result: - binary: The binary name of the service which would always be ``nova-compute``. - host: The name of the host running the service. - status: The status of the service which will be "UNKNOWN". A sample response for a GET /servers request that includes two compute services from unreachable parts of the infrastructure and other services from a healthy one are shown below. Response:: { "services": [ { "binary": "nova-compute", "host": "host1", "status": "UNKNOWN" }, { "binary": "nova-compute", "host": "host2", "status": "UNKNOWN" }, { "id": 1, "binary": "nova-scheduler", "disabled_reason": "test1", "host": "host3", "state": "up", "status": "disabled", "updated_at": "2012-10-29T13:42:02.000000", "forced_down": false, "zone": "internal" }, { "id": 2, "binary": "nova-compute", "disabled_reason": "test2", "host": "host4", "state": "up", "status": "disabled", "updated_at": "2012-10-29T13:42:05.000000", "forced_down": false, "zone": "nova" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-guide/source/extra_specs_and_properties.rst0000664000175000017500000000340000000000000024077 0ustar00zuulzuul00000000000000======================================= Flavor Extra Specs and Image Properties ======================================= Flavor extra specs and image properties are used to control certain aspects or scheduling behavior for a server. The flavor of a server can be changed during a :nova-doc:`resize ` operation. The image of a server can be changed during a :nova-doc:`rebuild ` operation. By default, flavor extra specs are controlled by administrators of the cloud. If users are authorized to upload their own images to the image service, they may be able to specify their own image property requirements. There are many cases of flavor extra specs and image properties that are for the same functionality. In many cases the image property takes precedence over the flavor extra spec if both are used in the same server. Flavor Extra Specs ================== Refer to the :nova-doc:`user guide ` for a list of official extra specs. 
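For illustration only, an administrator could attach one of the documented extra specs to an existing flavor with the OpenStack client; the flavor name ``m1.small`` and the ``hw:cpu_policy`` spec below are simply examples, and any other extra spec is set the same way:

.. code-block:: console

   $ openstack flavor set m1.small --property hw:cpu_policy=dedicated
   $ openstack flavor show m1.small -c properties

Servers created from that flavor afterwards are scheduled and configured according to the new extra spec; existing servers are not affected until they are resized.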
While there are standard extra specs, deployments can define their own extra specs to be used with host aggregates and custom scheduler filters as necessary. See the :nova-doc:`reference guide ` for more details. Image Properties ================ Refer to the image service documentation for a list of official :glance-doc:`image properties ` and :glance-doc:`metadata definition concepts `. Unlike flavor extra specs, image properties are standardized in the compute service and thus they must be `registered`_ within the compute service before they can be used. .. _registered: https://opendev.org/openstack/nova/src/branch/master/nova/objects/image_meta.py ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-guide/source/faults.rst0000664000175000017500000004120200000000000017761 0ustar00zuulzuul00000000000000====== Faults ====== This doc explains how to understand what has happened to your API request. Every HTTP request has a status code. 2xx codes signify the API call was a success. However, that is often not the end of the story. That generally only means the request to start the operation has been accepted. It does not mean the action you requested has successfully completed. Tracking Errors by Request ID ============================= There are two types of request ID. .. list-table:: :header-rows: 1 :widths: 2,8 * - Type - Description * - Local request ID - Locally generated unique request ID by each service and different between all services (Nova, Cinder, Glance, Neutron, etc.) involved in that operation. The format is ``req-`` + UUID (UUID4). * - Global request ID - User specified request ID which is utilized as common identifier by all services (Nova, Cinder, Glance, Neutron, etc.) involved in that operation. This request ID is same among all services involved in that operation. The format is ``req-`` + UUID (UUID4). It is extremely common for clouds to have an ELK (Elastic Search, Logstash, Kibana) infrastructure consuming their logs. The only way to query these flows is if there is a common identifier across all relevant messages. The global request ID immediately makes existing deployed tooling better for managing OpenStack. **Request Header** In each REST API request, you can specify the global request ID in ``X-Openstack-Request-Id`` header, starting from microversion 2.46. The format must be ``req-`` + UUID (UUID4). If not in accordance with the format, the global request ID is ignored by Nova. Request header example:: X-Openstack-Request-Id: req-3dccb8c4-08fe-4706-a91d-e843b8fe9ed2 **Response Header** In each REST API request, ``X-Compute-Request-Id`` is returned in the response header. Starting from microversion 2.46, ``X-Openstack-Request-Id`` is also returned in the response header. ``X-Compute-Request-Id`` and ``X-Openstack-Request-Id`` are local request IDs. The global request IDs are not returned. Response header example:: X-Compute-Request-Id: req-d7bc29d0-7b99-4aeb-a356-89975043ab5e X-Openstack-Request-Id: req-d7bc29d0-7b99-4aeb-a356-89975043ab5e Server Actions -------------- Most `server action APIs`_ are asynchronous. Usually the API service will do some minimal work and then send the request off to the ``nova-compute`` service to complete the action and the API will return a 202 response to the client. 
The client will poll the API until the operation completes, which could be a status change on the server but is usually at least always waiting for the server ``OS-EXT-STS:task_state`` field to go to ``null`` indicating the action has completed either successfully or with an error. If a server action fails and the server status changes to ``ERROR`` an :ref:`instance fault ` will be shown with the server details. The `os-instance-actions API`_ allows users end users to list the outcome of server actions, referencing the requested action by request id. This is useful when an action fails and the server status does not change to ``ERROR``. To illustrate, consider a server (vm1) created with flavor ``m1.tiny``: .. code-block:: console $ openstack server create --flavor m1.tiny --image cirros-0.4.0-x86_64-disk --wait vm1 +-----------------------------+-----------------------------------------------------------------+ | Field | Value | +-----------------------------+-----------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-STS:power_state | Running | | OS-EXT-STS:task_state | None | | OS-EXT-STS:vm_state | active | | OS-SRV-USG:launched_at | 2019-12-02T19:14:48.000000 | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | private=10.0.0.60, fda0:e0c4:2764:0:f816:3eff:fe03:806 | | adminPass | NgascCr3dYo4 | | config_drive | | | created | 2019-12-02T19:14:42Z | | flavor | m1.tiny (1) | | hostId | 22e88bec09a7e33606348fce0abac0ebbbe091a35e29db1498ec4e14 | | id | 344174b8-34fd-4017-ae29-b9084dcf3861 | | image | cirros-0.4.0-x86_64-disk (cce5e6d6-d359-4152-b277-1b4f1871557f) | | key_name | None | | name | vm1 | | progress | 0 | | project_id | b22597ea961545f3bde1b2ede0bd5b91 | | properties | | | security_groups | name='default' | | status | ACTIVE | | updated | 2019-12-02T19:14:49Z | | user_id | 046033fb3f824550999752b6525adbac | | volumes_attached | | +-----------------------------+-----------------------------------------------------------------+ The owner of the server then tries to resize the server to flavor ``m1.small`` which fails because there are no hosts available on which to resize the server: .. code-block:: console $ openstack server resize --flavor m1.small --wait vm1 Complete Despite the openstack command saying the operation completed, the server shows the original ``m1.tiny`` flavor and the status is not ``VERIFY_RESIZE``: .. code-block:: $ openstack server show vm1 -f value -c status -c flavor m1.tiny (1) ACTIVE Since the status is not ``ERROR`` there are is no ``fault`` field in the server details so we find the details by listing the events for the server: .. code-block:: console $ openstack server event list vm1 +------------------------------------------+--------------------------------------+--------+----------------------------+ | Request ID | Server ID | Action | Start Time | +------------------------------------------+--------------------------------------+--------+----------------------------+ | req-ea1b0dfc-3186-42a9-84ff-c4f4fb130fae | 344174b8-34fd-4017-ae29-b9084dcf3861 | resize | 2019-12-02T19:15:35.000000 | | req-4cdc4c93-0668-4ae6-98c8-a0a5fcc63d39 | 344174b8-34fd-4017-ae29-b9084dcf3861 | create | 2019-12-02T19:14:42.000000 | +------------------------------------------+--------------------------------------+--------+----------------------------+ To see details about the ``resize`` action, we use the Request ID for that action: .. 
code-block:: console $ openstack server event show vm1 req-ea1b0dfc-3186-42a9-84ff-c4f4fb130fae +---------------+------------------------------------------+ | Field | Value | +---------------+------------------------------------------+ | action | resize | | instance_uuid | 344174b8-34fd-4017-ae29-b9084dcf3861 | | message | Error | | project_id | b22597ea961545f3bde1b2ede0bd5b91 | | request_id | req-ea1b0dfc-3186-42a9-84ff-c4f4fb130fae | | start_time | 2019-12-02T19:15:35.000000 | | user_id | 046033fb3f824550999752b6525adbac | +---------------+------------------------------------------+ We see the message is "Error" but are not sure what failed. By default the event details for an action are not shown to users without the admin role so use microversion 2.51 to see the events (the ``events`` field is JSON-formatted here for readability): .. code-block:: $ openstack --os-compute-api-version 2.51 server event show vm1 req-ea1b0dfc-3186-42a9-84ff-c4f4fb130fae -f json -c events { "events": [ { "event": "cold_migrate", "start_time": "2019-12-02T19:15:35.000000", "finish_time": "2019-12-02T19:15:36.000000", "result": "Error" }, { "event": "conductor_migrate_server", "start_time": "2019-12-02T19:15:35.000000", "finish_time": "2019-12-02T19:15:36.000000", "result": "Error" } ] } By default policy configuration a user with the admin role can see a ``traceback`` for each failed event just like with an instance fault: .. code-block:: $ source openrc admin admin $ openstack --os-compute-api-version 2.51 server event show 344174b8-34fd-4017-ae29-b9084dcf3861 req-ea1b0dfc-3186-42a9-84ff-c4f4fb130fae -f json -c events { "events": [ { "event": "cold_migrate", "start_time": "2019-12-02T19:15:35.000000", "finish_time": "2019-12-02T19:15:36.000000", "result": "Error", "traceback": " File \"/opt/stack/nova/nova/conductor/manager.py\", line 301, in migrate_server\n host_list)\n File \"/opt/stack/nova/nova/conductor/manager.py\", line 367, in _cold_migrate\n raise exception.NoValidHost(reason=msg)\n" }, { "event": "conductor_migrate_server", "start_time": "2019-12-02T19:15:35.000000", "finish_time": "2019-12-02T19:15:36.000000", "result": "Error", "traceback": " File \"/opt/stack/nova/nova/compute/utils.py\", line 1410, in decorated_function\n return function(self, context, *args, **kwargs)\n File \"/opt/stack/nova/nova/conductor/manager.py\", line 301, in migrate_server\n host_list)\n File \"/opt/stack/nova/nova/conductor/manager.py\", line 367, in _cold_migrate\n raise exception.NoValidHost(reason=msg)\n" } ] } .. _server action APIs: https://docs.openstack.org/api-ref/compute/#servers-run-an-action-servers-action .. _os-instance-actions API: https://docs.openstack.org/api-ref/compute/#servers-actions-servers-os-instance-actions Logs ---- All logs on the system, by default, include the global request ID and the local request ID when available. This allows an administrator to track the API request processing as it transitions between all the different nova services or between nova and other component services called by nova during that request. When nova services receive the local request IDs of other components in the ``X-Openstack-Request-Id`` header, the local request IDs are output to logs along with the local request IDs of nova services. .. tip:: If a session client is used in client library, set ``DEBUG`` level to the ``keystoneauth`` log level. If not, set ``DEBUG`` level to the client library package. e.g. ``glanceclient``, ``cinderclient``. Sample log output is provided below. 
In this example, nova is using local request ID ``req-034279a7-f2dd-40ff-9c93-75768fda494d``, while neutron is using local request ID ``req-39b315da-e1eb-4ab5-a45b-3f2dbdaba787``:: Jun 19 09:16:34 devstack-master nova-compute[27857]: DEBUG keystoneauth.session [None req-034279a7-f2dd-40ff-9c93-75768fda494d admin admin] POST call to network for http://10.0.2.15:9696/v2.0/ports used request id req-39b315da-e1eb-4ab5-a45b-3f2dbdaba787 {{(pid=27857) request /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:640}} .. note:: The local request IDs are useful to make 'call graphs'. .. _instance-fault: Instance Faults --------------- Nova often adds an instance fault DB entry for an exception that happens while processing an API request. This often includes more administrator focused information, such as a stack trace. For a server with status ``ERROR`` or ``DELETED``, a ``GET /servers/{server_id}`` request will include a ``fault`` object in the response body for the ``server`` resource. For example:: GET https://10.211.2.122/compute/v2.1/servers/c76a7603-95be-4368-87e9-7b9b89fb1d7e { "server": { "id": "c76a7603-95be-4368-87e9-7b9b89fb1d7e", "fault": { "created": "2018-04-10T13:49:40Z", "message": "No valid host was found.", "code": 500 }, "status": "ERROR", ... } } Notifications ------------- In many cases there are also notifications emitted that describe the error. This is an administrator focused API, that works best when treated as structured logging. .. _synchronous_faults: Synchronous Faults ================== If an error occurs while processing our API request, you get a non 2xx API status code. The system also returns additional information about the fault in the body of the response. **Example: Fault: JSON response** .. code:: { "itemNotFound":{ "code": 404, "message":"Aggregate agg_h1 could not be found." } } The error ``code`` is returned in the body of the response for convenience. The ``message`` section returns a human-readable message that is appropriate for display to the end user. The ``details`` section is optional and may contain information--for example, a stack trace--to assist in tracking down an error. The ``details`` section might or might not be appropriate for display to an end user. The root element of the fault (such as, computeFault) might change depending on the type of error. The following link contains a list of possible elements along with their associated error codes. For more information on possible error code, please see: http://specs.openstack.org/openstack/api-wg/guidelines/http/response-codes.html Asynchronous faults =================== An error may occur in the background while a server is being built or while a server is executing an action. In these cases, the server is usually placed in an ``ERROR`` state. For some operations, like resize, it is possible that the operation fails but the instance gracefully returned to its original state before attempting the operation. In both of these cases, you should be able to find out more from the `Server Actions`_ API described above. When a server is placed into an ``ERROR`` state, a fault is embedded in the offending server. Note that these asynchronous faults follow the same format as the synchronous ones. The fault contains an error code, a human readable message, and optional details about the error. Additionally, asynchronous faults may also contain a ``created`` timestamp that specifies when the fault occurred. **Example: Server in error state: JSON response** .. 
code:: { "server": { "id": "52415800-8b69-11e0-9b19-734f0000ffff", "tenant_id": "1234", "user_id": "5678", "name": "sample-server", "created": "2010-08-10T12:00:00Z", "hostId": "e4d909c290d0fb1ca068ffafff22cbd0", "status": "ERROR", "progress": 66, "image" : { "id": "52415800-8b69-11e0-9b19-734f6f007777" }, "flavor" : { "id": "52415800-8b69-11e0-9b19-734f216543fd" }, "fault" : { "code" : 500, "created": "2010-08-10T11:59:59Z", "message": "No valid host was found. There are not enough hosts available.", "details": [snip] }, "links": [ { "rel": "self", "href": "http://servers.api.openstack.org/v2/1234/servers/52415800-8b69-11e0-9b19-734f000004d2" }, { "rel": "bookmark", "href": "http://servers.api.openstack.org/1234/servers/52415800-8b69-11e0-9b19-734f000004d2" } ] } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-guide/source/general_info.rst0000664000175000017500000002701500000000000021121 0ustar00zuulzuul00000000000000======================== Key Compute API Concepts ======================== The OpenStack Compute API is defined as a RESTful HTTP service. The API takes advantage of all aspects of the HTTP protocol (methods, URIs, media types, response codes, etc.) and providers are free to use existing features of the protocol such as caching, persistent connections, and content compression among others. Providers can return information identifying requests in HTTP response headers, for example, to facilitate communication between the provider and client applications. OpenStack Compute is a compute service that provides server capacity in the cloud. Compute Servers come in different flavors of memory, cores, disk space, and CPU, and can be provisioned in minutes. Interactions with Compute Servers can happen programmatically with the OpenStack Compute API. User Concepts ============= To use the OpenStack Compute API effectively, you should understand several key concepts: - **Server** A virtual machine (VM) instance, physical machine or a container in the compute system. Flavor and image are requisite elements when creating a server. A name for the server is also required. For more details, such as server actions and server metadata, please see: :doc:`server_concepts` - **Flavor** Virtual hardware configuration for the requested server. Each flavor has a unique combination of disk space, memory capacity and priority for CPU time. - **Flavor Extra Specs** Key and value pairs that can be used to describe the specification of the server which is more than just about CPU, disk and RAM. For example, it can be used to indicate that the server created by this flavor has PCI devices, etc. For more details, please see: :doc:`extra_specs_and_properties` - **Image** A collection of files used to create or rebuild a server. Operators provide a number of pre-built OS images by default. You may also create custom images from cloud servers you have launched. These custom images are useful for backup purposes or for producing "gold" server images if you plan to deploy a particular server configuration frequently. - **Image Properties** Key and value pairs that can help end users to determine the requirements of the guest operating system in the image. For more details, please see: :doc:`extra_specs_and_properties` - **Key Pair** An ssh or x509 keypair that can be injected into a server at it's boot time. This allows you to connect to your server once it has been created without having to use a password. 
If you don't specify a key pair, Nova will create a root password for you, and return it in plain text in the server create response. - **Volume** A block storage device that Nova can use as permanent storage. When a server is created it has some disk storage available, but that is considered ephemeral, as it is destroyed when the server is destroyed. A volume can be attached to a server, then later detached and used by another server. Volumes are created and managed by the Cinder service. For additional info, see :nova-doc:`Block device mapping ` - **Quotas** An upper limit on the amount of resources any individual tenant may consume. Quotas can be used to limit the number of servers a tenant creates, or the amount of disk space consumed, so that no one tenant can overwhelm the system and prevent normal operation for others. Changing quotas is an administrator-level action. For additional info, see :nova-doc:`Quotas ` - **Rate Limiting** Please see :doc:`limits` - **Availability zone** A grouping of host machines that can be used to control where a new server is created. There is some confusion about this, as the name "availability zone" is used in other clouds, such as Amazon Web Services, to denote a physical separation of server locations that can be used to distribute cloud resources for fault tolerance in case one zone is unavailable for any reason. Such a separation is possible in Nova if an administrator carefully sets up availability zones for that, but it is not the default. Networking Concepts ------------------- Networking is handled by the :neutron-doc:`networking service <>`. When working with a server in the compute service, the most important networking resource is a *port* which is part of a *network*. Ports can have *security groups* applied to control firewall access. Ports can also be linked to *floating IPs* for external network access depending on the networking service configuration. When creating a server or attaching a network interface to an existing server, zero or more networks and/or ports can be specified to attach to the server. If nothing is provided, the compute service will by default create a port on the single network available to the project making the request. If more than one network is available to the project, such as a public external network and a private tenant network, an error will occur and the request will have to be made with a specific network or port. If a network is specified the compute service will attempt to create a port on the given network on behalf of the user. More advanced types of ports, such as :neutron-doc:`SR-IOV ports `, must be pre-created and provided to the compute service. Refer to the `network API reference`_ for more details. .. _network API reference: https://docs.openstack.org/api-ref/network/ Administrator Concepts ====================== Some APIs are largely focused on administration of Nova, and generally focus on compute hosts rather than servers. - **Services** Services are provided by Nova components. Normally, the Nova component runs as a process on the controller/compute node to provide the service. These services may be end-user facing, such as the OpenStack Compute REST API service, but most just work with other Nova services. The status of each service is monitored by Nova, and if it is not responding normally, Nova will update its status so that requests are not sent to that service anymore. 
The service can also be controlled by an Administrator in order to run maintenance or upgrades, or in response to changing workloads. - **nova-osapi_compute** This service provides the OpenStack Compute REST API to end users and application clients. - **nova-metadata** This service provides the OpenStack Metadata API to servers. The metadata is used to configure the running servers. - **nova-scheduler** This service provides compute request scheduling by tracking available resources, and finding the host that can best fulfill the request. - **nova-conductor** This service provides database access for Nova and the other OpenStack services, and handles internal version compatibility when different services are running different versions of code. The conductor service also handles long-running requests. - **nova-compute** This service runs on every compute node, and communicates with a hypervisor for managing compute resources on that node. - **Services Actions** .. note:: The services actions described in this section apply only to **nova-compute** services. - **enable, disable, disable-log-reason** The service can be disabled to indicate the service is not available anymore. This is used by administrator to stop service for maintenance. For example, when Administrator wants to maintain a specific compute node, Administrator can disable nova-compute service on that compute node. Then nova won't dispatch any new compute request to that compute node anymore. Administrator also can add note for disable reason. - **forced-down** .. note:: This action is enabled in microversion 2.11. This action allows you set the state of service down immediately. Nova only provides a very basic health monitor of service status, there isn't any guarantee about health status of other parts of infrastructure, like the health status of data network, storage network and other components. If you have a more extensive health monitoring system external to Nova, and know that the service in question is dead (and disconnected from the network), this can be used to tell the rest of Nova it can trust that this service is never coming back, and allow actions such as evacuate. .. warning:: This must *only* be used if you have fully fenced the service in question, and that it can never send updates to the rest of the system. This can be done by powering off the node or completely isolating its networking. If you force-down a service that is not fenced you can corrupt the VMs that were running on that host. - **Hosts** Hosts are the *physical machines* that provide the resources for the virtual servers created in Nova. They run a **hypervisor** (see definition below) that handles the actual creation and management of the virtual servers. Hosts also run the **Nova compute service**, which receives requests from Nova to interact with the virtual servers on that machine. When compute service receives a request, it calls the appropriate methods of the driver for that hypervisor in order to carry out the request. The driver acts as the translator from generic Nova requests to hypervisor-specific calls. Hosts report their current state back to Nova, where it is tracked by the scheduler service, so that the scheduler can place requests for new virtual servers on the hosts that can best fit them. - **Host Actions** .. note:: These APIs are deprecated in Microversion 2.43. A *host action* is one that affects the physical host machine, as opposed to actions that only affect the virtual servers running on that machine. 
There are three 'power' actions that are supported: *startup*, *shutdown*, and *reboot*. There are also two 'state' actions: enabling/disabling the host, and setting the host into or out of maintenance mode. Of course, carrying out these actions can affect running virtual servers on that host, so their state will need to be considered before carrying out the host action. For example, if you want to call the 'shutdown' action to turn off a host machine, you might want to migrate any virtual servers on that host before shutting down the host machine so that the virtual servers continue to be available without interruption. - **Hypervisors** A hypervisor, or virtual machine monitor (VMM), is a piece of computer software, firmware or hardware that creates and runs virtual machines. In nova, each Host (see `Hosts`) runs a hypervisor. Administrators are able to query the hypervisor for information, such as all the virtual servers currently running, as well as detailed info about the hypervisor, such as CPU, memory, or disk related configuration. Currently nova-compute also supports Ironic and LXC, but they don't have a hypervisor running. - **Aggregates** See :nova-doc:`Aggregates Developer Information `. - **Migrations** Migrations are the process where a virtual server is moved from one host to another. Please see :doc:`server_concepts` for details about moving servers. Administrators are able to query the records in database for information about migrations. For example, they can determine the source and destination hosts, type of migration, or changes in the server's flavor. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-guide/source/index.rst0000664000175000017500000000553200000000000017600 0ustar00zuulzuul00000000000000.. Copyright 2009-2015 OpenStack Foundation Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. =========== Compute API =========== The nova project has a RESTful HTTP service called the OpenStack Compute API. Through this API, the service provides massively scalable, on demand, self-service access to compute resources. Depending on the deployment those compute resources might be Virtual Machines, Physical Machines or Containers. This guide covers the concepts in the OpenStack Compute API. For a full reference listing, please see: `Compute API Reference `__. We welcome feedback, comments, and bug reports at `bugs.launchpad.net/nova `__. Intended audience ================= This guide assists software developers who want to develop applications using the OpenStack Compute API. To use this information, you should have access to an account from an OpenStack Compute provider, or have access to your own deployment, and you should also be familiar with the following concepts: * OpenStack Compute service * RESTful HTTP services * HTTP/1.1 * JSON data serialization formats End User and Operator APIs ========================== The Compute API includes all end user and operator API calls. 
The API works with keystone and oslo.policy to deliver RBAC (Role-based access control). The default policy file gives suggestions on what APIs should not be made available to most end users but this is fully configurable. API Versions ============ Following the Mitaka release, every Nova deployment should have the following endpoints: * / - list of available versions * /v2 - the first version of the Compute API, uses extensions (we call this Compute API v2.0) * /v2.1 - same API, except uses microversions While this guide concentrates on documenting the v2.1 API, please note that the v2.0 is (almost) identical to first microversion of the v2.1 API and are also covered by this guide. Contents ======== .. toctree:: :maxdepth: 2 users versions microversions general_info server_concepts authentication extra_specs_and_properties faults limits links_and_references paginated_collections polling_changes request_and_response_formats down_cells port_with_resource_request accelerator-support ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-guide/source/limits.rst0000664000175000017500000000530500000000000017770 0ustar00zuulzuul00000000000000====== Limits ====== Accounts may be pre-configured with a set of thresholds (or limits) to manage capacity and prevent abuse of the system. The system recognizes *absolute limits*. Absolute limits are fixed. Limits are configured by operators and may differ from one deployment of the OpenStack Compute service to another. Please contact your provider to determine the limits that apply to your account. Your provider may be able to adjust your account's limits if they are too low. Also see the API Reference for `Limits `__. Absolute limits ~~~~~~~~~~~~~~~ Absolute limits are specified as name/value pairs. The name of the absolute limit uniquely identifies the limit within a deployment. Please consult your provider for an exhaustive list of absolute limits names. An absolute limit value is always specified as an integer. The name of the absolute limit determines the unit type of the integer value. For example, the name maxServerMeta implies that the value is in terms of server metadata items. **Table: Sample absolute limits** +-------------------+-------------------+------------------------------------+ | Name | Value | Description | +-------------------+-------------------+------------------------------------+ | maxTotalRAMSize | 51200 | Maximum total amount of RAM (MB) | +-------------------+-------------------+------------------------------------+ | maxServerMeta | 5 | Maximum number of metadata items | | | | associated with a server. | +-------------------+-------------------+------------------------------------+ | maxImageMeta | 5 | Maximum number of metadata items | | | | associated with an image. | +-------------------+-------------------+------------------------------------+ | maxPersonality | 5 | The maximum number of file | | | | path/content pairs that can be | | | | supplied on server build. | +-------------------+-------------------+------------------------------------+ | maxPersonalitySize| 10240 | The maximum size, in bytes, for | | | | each personality file. | +-------------------+-------------------+------------------------------------+ Determine limits programmatically ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Applications can programmatically determine current account limits. For information, see `Limits `__. 
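For example, the absolute limits that apply to the current project can be fetched either with the OpenStack client or directly from the ``/limits`` resource; the endpoint URL and token below are placeholders for your deployment's values:

.. code-block:: console

   $ openstack limits show --absolute
   $ curl -s -H "X-Auth-Token: $TOKEN" \
         http://openstack.example.com/compute/v2.1/limits

The response reports both the configured maximums and, for most resources, the amount currently in use, so an application can check remaining capacity before issuing a create request.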
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-guide/source/links_and_references.rst0000664000175000017500000001005200000000000022625 0ustar00zuulzuul00000000000000==================== Links and references ==================== Often resources need to refer to other resources. For example, when creating a server, you must specify the image from which to build the server. You can specify the image by providing an ID or a URL to a remote image. When providing an ID, it is assumed that the resource exists in the current OpenStack deployment. **Example: ID image reference: JSON request** .. code:: { "server":{ "flavorRef":"http://openstack.example.com/openstack/flavors/1", "imageRef":"http://openstack.example.com/openstack/images/70a599e0-31e7-49b7-b260-868f441e862b", "metadata":{ "My Server Name":"Apache1" }, "name":"new-server-test", "personality":[ { "contents":"ICAgICAgDQoiQSBjbG91ZCBkb2VzIG5vdCBrbm93IHdoeSBpdCBtb3ZlcyBpbiBqdXN0IHN1Y2ggYSBkaXJlY3Rpb24gYW5kIGF0IHN1Y2ggYSBzcGVlZC4uLkl0IGZlZWxzIGFuIGltcHVsc2lvbi4uLnRoaXMgaXMgdGhlIHBsYWNlIHRvIGdvIG5vdy4gQnV0IHRoZSBza3kga25vd3MgdGhlIHJlYXNvbnMgYW5kIHRoZSBwYXR0ZXJucyBiZWhpbmQgYWxsIGNsb3VkcywgYW5kIHlvdSB3aWxsIGtub3csIHRvbywgd2hlbiB5b3UgbGlmdCB5b3Vyc2VsZiBoaWdoIGVub3VnaCB0byBzZWUgYmV5b25kIGhvcml6b25zLiINCg0KLVJpY2hhcmQgQmFjaA==", "path":"/etc/banner.txt" } ] } } **Example: Full image reference: JSON request** .. code:: { "server": { "name": "server-test-1", "imageRef": "b5660a6e-4b46-4be3-9707-6b47221b454f", "flavorRef": "2", "max_count": 1, "min_count": 1, "networks": [ { "uuid": "d32019d3-bc6e-4319-9c1d-6722fc136a22" } ], "security_groups": [ { "name": "default" }, { "name": "another-secgroup-name" } ] } } For convenience, resources contain links to themselves. This allows a client to easily obtain rather than construct resource URIs. The following types of link relations are associated with resources: - A ``self`` link contains a versioned link to the resource. Use these links when the link is followed immediately. - A ``bookmark`` link provides a permanent link to a resource that is appropriate for long term storage. - An ``alternate`` link can contain an alternate representation of the resource. For example, an OpenStack Compute image might have an alternate representation in the OpenStack Image service. .. note:: The ``type`` attribute provides a hint as to the type of representation to expect when following the link. **Example: Server with self links: JSON** .. code:: { "server":{ "id":"52415800-8b69-11e0-9b19-734fcece0043", "name":"my-server", "links":[ { "rel":"self", "href":"http://servers.api.openstack.org/v2.1/servers/52415800-8b69-11e0-9b19-734fcece0043" }, { "rel":"bookmark", "href":"http://servers.api.openstack.org/servers/52415800-8b69-11e0-9b19-734fcece0043" } ] } } **Example: Server with alternate link: JSON** .. 
code:: { "image" : { "id" : "52415800-8b69-11e0-9b19-734f5736d2a2", "name" : "My Server Backup", "links": [ { "rel" : "self", "href" : "http://servers.api.openstack.org/v2.1/images/52415800-8b69-11e0-9b19-734f5736d2a2" }, { "rel" : "bookmark", "href" : "http://servers.api.openstack.org/images/52415800-8b69-11e0-9b19-734f5736d2a2" }, { "rel" : "alternate", "type" : "application/vnd.openstack.image", "href" : "http://glance.api.openstack.org/1234/images/52415800-8b69-11e0-9b19-734f5736d2a2" } ] } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-guide/source/microversions.rst0000664000175000017500000001447500000000000021401 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ============= Microversions ============= API v2.1 supports microversions: small, documented changes to the API. A user can use microversions to discover the latest API microversion supported in their cloud. A cloud that is upgraded to support newer microversions will still support all older microversions to maintain the backward compatibility for those users who depend on older microversions. Users can also discover new features easily with microversions, so that they can benefit from all the advantages and improvements of the current cloud. There are multiple cases which you can resolve with microversions: - **Older clients with new cloud** Before using an old client to talk to a newer cloud, the old client can check the minimum version of microversions to verify whether the cloud is compatible with the old API. This prevents the old client from breaking with backwards incompatible API changes. Currently the minimum version of microversions is `2.1`, which is a microversion compatible with the legacy v2 API. That means the legacy v2 API user doesn't need to worry that their older client software will be broken when their cloud is upgraded with new versions. And the cloud operator doesn't need to worry that upgrading their cloud to newer versions will break any user with older clients that don't expect these changes. - **User discovery of available features between clouds** The new features can be discovered by microversions. The user client should check the microversions firstly, and new features are only enabled when clouds support. In this way, the user client can work with clouds that have deployed different microversions simultaneously. Version Discovery ================= The Version API will return the minimum and maximum microversions. These values are used by the client to discover the API's supported microversion(s). Requests to `/` will get version info for all endpoints. 
A response would look as follows:: { "versions": [ { "id": "v2.0", "links": [ { "href": "http://openstack.example.com/v2/", "rel": "self" } ], "status": "SUPPORTED", "version": "", "min_version": "", "updated": "2011-01-21T11:33:21Z" }, { "id": "v2.1", "links": [ { "href": "http://openstack.example.com/v2.1/", "rel": "self" } ], "status": "CURRENT", "version": "2.14", "min_version": "2.1", "updated": "2013-07-23T11:33:21Z" } ] } ``version`` is the maximum microversion, ``min_version`` is the minimum microversion. If the value is the empty string, it means this endpoint doesn't support microversions; it is a legacy v2 API endpoint -- for example, the endpoint `http://openstack.example.com/v2/` in the above sample. The endpoint `http://openstack.example.com/v2.1/` supports microversions; the maximum microversion is `2.14`, and the minimum microversion is `2.1`. The client should specify a microversion between (and including) the minimum and maximum microversion to access the endpoint. You can also obtain specific endpoint version information by performing a GET on the base version URL (e.g., `http://openstack.example.com/v2.1/`). You can get more information about the version API at :doc:`versions`. Client Interaction ================== A client specifies the microversion of the API they want by using the following HTTP header:: X-OpenStack-Nova-API-Version: 2.4 Starting with microversion `2.27` it is also correct to use the following header to specify the microversion:: OpenStack-API-Version: compute 2.27 .. note:: For more detail on this newer form see the `Microversion Specification `_. This acts conceptually like the "Accept" header. Semantically this means: * If neither ``X-OpenStack-Nova-API-Version`` nor ``OpenStack-API-Version`` (specifying `compute`) is provided, act as if the minimum supported microversion was specified. * If both headers are provided, ``OpenStack-API-Version`` will be preferred. * If ``X-OpenStack-Nova-API-Version`` or ``OpenStack-API-Version`` is provided, respond with the API at that microversion. If that's outside of the range of microversions supported, return 406 Not Acceptable. * If ``X-OpenStack-Nova-API-Version`` or ``OpenStack-API-Version`` has a value of `latest` (special keyword), act as if maximum was specified. .. warning:: The `latest` value is mostly meant for integration testing and would be dangerous to rely on in client code since microversions are not following semver and therefore backward compatibility is not guaranteed. Clients should always require a specific microversion but limit what is acceptable to the microversion range that it understands at the time. This means that out of the box, an old client without any knowledge of microversions can work with an OpenStack installation with microversions support. In microversions prior to `2.27` two extra headers are always returned in the response:: X-OpenStack-Nova-API-Version: microversion_number Vary: X-OpenStack-Nova-API-Version The first header specifies the microversion number of the API which was executed. The ``Vary`` header is used as a hint to caching proxies that the response is also dependent on the microversion and not just the body and query parameters. See :rfc:`2616` section 14.44 for details. 
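The sketch below is a minimal illustration of pinning a request to a specific microversion with this pre-2.27 header form and reading the headers returned in the response; the endpoint, token and microversion value are placeholders.

.. code-block:: python

    import requests

    # Placeholders; substitute the values for your cloud.
    COMPUTE_URL = "http://openstack.example.com/v2.1"
    TOKEN = "<auth-token>"
    MICROVERSION = "2.26"

    resp = requests.get(
        COMPUTE_URL + "/servers",
        headers={
            "X-Auth-Token": TOKEN,
            # Pin the request to a known microversion rather than "latest".
            "X-OpenStack-Nova-API-Version": MICROVERSION,
        },
    )
    # A 406 Not Acceptable here means the requested microversion is outside
    # the range advertised by version discovery.
    resp.raise_for_status()

    # The response echoes the microversion that actually handled the request.
    print(resp.headers.get("X-OpenStack-Nova-API-Version"))
    print(resp.headers.get("Vary"))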
From microversion `2.27` two additional headers are added to the response:: OpenStack-API-Version: compute microversion_number Vary: OpenStack-API-Version ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-guide/source/paginated_collections.rst0000664000175000017500000000564600000000000023031 0ustar00zuulzuul00000000000000===================== Paginated collections ===================== To reduce load on the service, list operations return a maximum number of items at a time. The maximum number of items returned is determined by the compute provider. To navigate the collection, the ``limit`` and ``marker`` parameters can be set in the URI. For example: .. code:: ?limit=100&marker=1234 The ``marker`` parameter is the ID of the last item in the previous list. By default, the service sorts items by create time in descending order. When the service cannot identify a create time, it sorts items by ID. The ``limit`` parameter sets the page size. Both parameters are optional. If the client requests a ``limit`` beyond one that is supported by the deployment an overLimit (413) fault may be thrown. A marker with an invalid ID returns a badRequest (400) fault. For convenience, collections should contain atom ``next`` links. They may optionally also contain ``previous`` links but the current implementation does not contain ``previous`` links. The last page in the list does not contain a link to "next" page. The following examples illustrate three pages in a collection of servers. The first page was retrieved through a **GET** to `http://servers.api.openstack.org/v2.1/servers?limit=1`. In these examples, the *``limit``* parameter sets the page size to a single item. Subsequent links honor the initial page size. Thus, a client can follow links to traverse a paginated collection without having to input the ``marker`` parameter. **Example: Servers collection: JSON (first page)** .. code:: { "servers_links":[ { "href":"https://servers.api.openstack.org/v2.1/servers?limit=1&marker=fc45ace4-3398-447b-8ef9-72a22086d775", "rel":"next" } ], "servers":[ { "id":"fc55acf4-3398-447b-8ef9-72a42086d775", "links":[ { "href":"https://servers.api.openstack.org/v2.1/servers/fc45ace4-3398-447b-8ef9-72a22086d775", "rel":"self" }, { "href":"https://servers.api.openstack.org/v2.1/servers/fc45ace4-3398-447b-8ef9-72a22086d775", "rel":"bookmark" } ], "name":"elasticsearch-0" } ] } In JSON, members in a paginated collection are stored in a JSON array named after the collection. A JSON object may also be used to hold members in cases where using an associative array is more practical. Properties about the collection itself, including links, are contained in an array with the name of the entity an underscore (\_) and ``links``. The combination of the objects and arrays that start with the name of the collection and an underscore represent the collection in JSON. The approach allows for extensibility of paginated collections by allowing them to be associated with arbitrary properties. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-guide/source/polling_changes.rst0000664000175000017500000001004500000000000021620 0ustar00zuulzuul00000000000000================= Efficient polling ================= The REST API allows you to poll for the status of certain operations by performing a **GET** on various elements. 
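For example, a client might poll a single server until it leaves the ``BUILD`` state. The following is a minimal sketch using the Python ``requests`` library; the endpoint, token and server ID are placeholders.

.. code-block:: python

    import time

    import requests

    # Placeholders; substitute the values for your cloud.
    COMPUTE_URL = "http://openstack.example.com/v2.1"
    TOKEN = "<auth-token>"
    SERVER_ID = "<server-uuid>"

    while True:
        resp = requests.get(
            COMPUTE_URL + "/servers/" + SERVER_ID,
            headers={"X-Auth-Token": TOKEN},
        )
        resp.raise_for_status()
        status = resp.json()["server"]["status"]
        if status != "BUILD":
            # Typically ACTIVE on success or ERROR on failure.
            print("final status:", status)
            break
        time.sleep(5)

A simple loop like this re-fetches and re-parses the full server representation on every iteration, which becomes wasteful when watching many resources.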
Rather than re-downloading and re-parsing the full status at each polling interval, your REST client may use the ``changes-since`` and/or ``changes-before`` parameters to check for changes within a specified time. The ``changes-since`` time or ``changes-before`` time is specified as an `ISO 8601 `__ dateTime (`2011-01-24T17:08Z`). The form for the timestamp is **CCYY-MM-DDThh:mm:ss**. An optional time zone may be written in by appending the form ±hh:mm which describes the timezone as an offset from UTC. When the timezone is not specified (`2011-01-24T17:08`), the UTC timezone is assumed. The following situations need to be considered: * If nothing has changed since the ``changes-since`` time, an empty list is returned. If data has changed, only the items changed since the specified time are returned in the response. For example, performing a **GET** against:: https://api.servers.openstack.org/v2.1/servers?changes-since=2015-01-24T17:08Z would list all servers that have changed since Mon, 24 Jan 2015 17:08:00 UTC. * If nothing has changed earlier than or equal to the ``changes-before`` time, an empty list is returned. If data has changed, only the items changed earlier than or equal to the specified time are returned in the response. For example, performing a **GET** against:: https://api.servers.openstack.org/v2.1/servers?changes-before=2015-01-24T17:08Z would list all servers that have changed earlier than or equal to Mon, 24 Jan 2015 17:08:00 UTC. * If nothing has changed later than or equal to ``changes-since``, or earlier than or equal to ``changes-before``, an empty list is returned. If data has changed, only the items changed between ``changes-since`` time and ``changes-before`` time are returned in the response. For example, performing a **GET** against:: https://api.servers.openstack.org/v2.1/servers?changes-since=2015-01-24T17:08Z&changes-before=2015-01-25T17:08Z would list all servers that have changed later than or equal to Mon, 24 Jan 2015 17:08:00 UTC, and earlier than or equal to Mon, 25 Jan 2015 17:08:00 UTC. Microversion change history for servers, instance actions and migrations regarding ``changes-since`` and ``changes-before``: * The `2.21 microversion`_ allows reading instance actions for a deleted server resource. * The `2.58 microversion`_ allows filtering on ``changes-since`` when listing instance actions for a server. * The `2.59 microversion`_ allows filtering on ``changes-since`` when listing migration records. * The `2.66 microversion`_ adds the ``changes-before`` filter when listing servers, instance actions and migrations. The ``changes-since`` filter nor the ``changes-before`` filter change any read-deleted behavior in the os-instance-actions or os-migrations APIs. The os-instance-actions API with the 2.21 microversion allows retrieving instance actions for a deleted server resource. The os-migrations API takes an optional ``instance_uuid`` filter parameter but does not support returning deleted migration records. To allow clients to keep track of changes, the ``changes-since`` filter and ``changes-before`` filter displays items that have been *recently* deleted. Servers contain a ``DELETED`` status that indicates that the resource has been removed. Implementations are not required to keep track of deleted resources indefinitely, so sending a ``changes-since`` time or a ``changes-before`` time in the distant past may miss deletions. .. _2.21 microversion: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id19 .. 
_2.58 microversion: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id53 .. _2.59 microversion: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id54 .. _2.66 microversion: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id59 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/api-guide/source/port_with_resource_request.rst0000664000175000017500000000346400000000000024171 0ustar00zuulzuul00000000000000================================= Using ports with resource request ================================= Starting from microversion 2.72 nova supports creating servers with neutron ports having resource request visible as a admin-only port attribute ``resource_request``. For example a neutron port has resource request if it has a QoS minimum bandwidth rule attached. Deleting such servers or detaching such ports works since Stein version of nova without requiring any specific microversion. However the following API operations are still not supported in nova: * Creating servers with neutron networks having QoS minimum bandwidth rule is not supported. The user needs to pre-create the port in that neutron network and create the server with the pre-created port. * Attaching Neutron ports and networks having QoS minimum bandwidth rule is not supported. Also the following API operations are not supported in the 19.0.0 (Stein) version of nova: * Moving (resizing, migrating, live-migrating, evacuating, unshelving after shelve offload) servers with ports having resource request is not yet supported. As of 20.0.0 (Train), nova supports cold migrating and resizing servers with neutron ports having resource requests if both the source and destination compute services are upgraded to 20.0.0 (Train) and the ``[upgrade_levels]/compute`` configuration does not prevent the computes from using the latest RPC version. However cross cell resize and cross cell migrate operations are still not supported with such ports and Nova will fall back to same-cell resize if the server has such ports. As of 21.0.0 (Ussuri), nova supports evacuating, live migrating and unshelving servers with neutron ports having resource requests. See :nova-doc:`the admin guide ` for administrative details. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-guide/source/request_and_response_formats.rst0000664000175000017500000000231100000000000024444 0ustar00zuulzuul00000000000000============================ Request and response formats ============================ The OpenStack Compute API only supports JSON request and response formats, with a mime-type of ``application/json``. As there is only one supported content type, all content is assumed to be ``application/json`` in both request and response formats. Request and response example ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The example below shows a request body in JSON format: **Example: JSON request with headers** .. code:: POST /v2.1/servers HTTP/1.1 Host: servers.api.openstack.org X-Auth-Token: eaaafd18-0fed-4b3a-81b4-663c99ec1cbb .. 
code:: JSON { "server": { "name": "server-test-1", "imageRef": "b5660a6e-4b46-4be3-9707-6b47221b454f", "flavorRef": "2", "max_count": 1, "min_count": 1, "networks": [ { "uuid": "d32019d3-bc6e-4319-9c1d-6722fc136a22" } ], "security_groups": [ { "name": "default" }, { "name": "another-secgroup-name" } ] } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/api-guide/source/server_concepts.rst0000664000175000017500000011551500000000000021700 0ustar00zuulzuul00000000000000=============== Server concepts =============== For the OpenStack Compute API, a server is a virtual machine (VM) instance, a physical machine or a container. Server status ~~~~~~~~~~~~~ You can filter the list of servers by image, flavor, name, and status through the respective query parameters. Server contains a status attribute that indicates the current server state. You can filter on the server status when you complete a list servers request. The server status is returned in the response body. The server status is one of the following values: **Server status values** - ``ACTIVE``: The server is active. - ``BUILD``: The server has not yet finished the original build process. - ``DELETED``: The server is deleted. - ``ERROR``: The server is in error. - ``HARD_REBOOT``: The server is hard rebooting. This is equivalent to pulling the power plug on a physical server, plugging it back in, and rebooting it. - ``MIGRATING``: The server is migrating. This is caused by a live migration (moving a server that is active) action. - ``PASSWORD``: The password is being reset on the server. - ``PAUSED``: The server is paused. - ``REBOOT``: The server is in a soft reboot state. A reboot command was passed to the operating system. - ``REBUILD``: The server is currently being rebuilt from an image. - ``RESCUE``: The server is in rescue mode. - ``RESIZE``: Server is performing the differential copy of data that changed during its initial copy. Server is down for this stage. - ``REVERT_RESIZE``: The resize or migration of a server failed for some reason. The destination server is being cleaned up and the original source server is restarting. - ``SHELVED``: The server is in shelved state. Depends on the shelve offload time, the server will be automatically shelved off loaded. - ``SHELVED_OFFLOADED``: The shelved server is offloaded (removed from the compute host) and it needs unshelved action to be used again. - ``SHUTOFF``: The server was powered down by the user, either through the OpenStack Compute API or from within the server. For example, the user issued a :command:`shutdown -h` command from within the server. If the OpenStack Compute manager detects that the VM was powered down, it transitions the server to the SHUTOFF status. - ``SOFT_DELETED``: The server is marked as deleted but will remain in the cloud for some configurable amount of time. While soft-deleted, an authorized user can restore the server back to normal state. When the time expires, the server will be deleted permanently. - ``SUSPENDED``: The server is suspended, either by request or necessity. See the :nova-doc:`feature support matrix ` for supported compute drivers. When you suspend a server, its state is stored on disk, all memory is written to disk, and the server is stopped. Suspending a server is similar to placing a device in hibernation and its occupied resource will not be freed but rather kept for when the server is resumed. 
If an instance is infrequently used and the occupied resource needs to be freed to create other servers, it should be shelved. - ``UNKNOWN``: The state of the server is unknown. It could be because a part of the infrastructure is temporarily down (see :doc:`down_cells` for more information). Contact your cloud provider. - ``VERIFY_RESIZE``: System is awaiting confirmation that the server is operational after a move or resize. Server status is calculated from vm_state and task_state, which are exposed to administrators: - vm_state describes a VM's current stable (not transition) state. That is, if there is no ongoing compute API calls (running tasks), vm_state should reflect what the customer expect the VM to be. When combined with task states, a better picture can be formed regarding the server's health and progress. Refer to :nova-doc:`VM States `. - task_state represents what is happening to the instance at the current moment. These tasks can be generic, such as `spawning`, or specific, such as `block_device_mapping`. These task states allow for a better view into what a server is doing. Server creation ~~~~~~~~~~~~~~~ Status Transition: - ``BUILD`` While the server is building there are several task state transitions that can occur: - ``scheduling``: The request is being scheduled to a compute node. - ``networking``: Setting up network interfaces asynchronously. - ``block_device_mapping``: Preparing block devices (local disks, volumes). - ``spawning``: Creating the guest in the hypervisor. - ``ACTIVE`` The terminal state for a successfully built and running server. - ``ERROR`` (on error) When you create a server, the operation asynchronously provisions a new server. The progress of this operation depends on several factors including location of the requested image, network I/O, host load, and the selected flavor. The progress of the request can be checked by performing a **GET** on /servers/*{server_id}*, which returns a progress attribute (from 0% to 100% complete). The full URL to the newly created server is returned through the ``Location`` header and is available as a ``self`` and ``bookmark`` link in the server representation. Note that when creating a server, only the server ID, its links, and the administrative password are guaranteed to be returned in the request. You can retrieve additional attributes by performing subsequent **GET** operations on the server. Server query ~~~~~~~~~~~~ There are two APIs for querying servers ``GET /servers`` and ``GET /servers/detail``. Both of those APIs support filtering the query result by using query options. For different user roles, the user has different query options set: - For general user, there is limited set of attributes of the servers can be used as query option. 
The supported options are: - ``changes-since`` - ``flavor`` - ``image`` - ``ip`` - ``ip6`` (New in version 2.5) - ``name`` - ``not-tags`` (New in version 2.26) - ``not-tags-any`` (New in version 2.26) - ``reservation_id`` - ``status`` - ``tags`` (New in version 2.26) - ``tags-any`` (New in version 2.26) - ``changes-before`` (New in version 2.66) - ``locked`` (New in version 2.73) - ``availability_zone`` (New in version 2.83) - ``config_drive`` (New in version 2.83) - ``key_name`` (New in version 2.83) - ``created_at`` (New in version 2.83) - ``launched_at`` (New in version 2.83) - ``terminated_at`` (New in version 2.83) - ``power_state`` (New in version 2.83) - ``task_state`` (New in version 2.83) - ``vm_state`` (New in version 2.83) - ``progress`` (New in version 2.83) - ``user_id`` (New in version 2.83) Other options will be ignored by nova silently. - For administrator, most of the server attributes can be used as query options. Before the Ocata release, the fields in the database schema of server are exposed as query options, which may lead to unexpected API change. After the Ocata release, the definition of the query options and the database schema are decoupled. That is also the reason why the naming of the query options are different from the attribute naming in the servers API response. Precondition: there are 2 servers existing in cloud with following info:: { "servers": [ { "name": "t1", "OS-EXT-SRV-ATTR:host": "devstack1", ... }, { "name": "t2", "OS-EXT-SRV-ATTR:host": "devstack2", ... } ] } **Example: General user query server with administrator only options** Request with non-administrator context: ``GET /servers/detail?host=devstack1`` .. note:: The ``host`` query parameter is only for administrator users and the query parameter is ignored if specified by non-administrator users. Thus the API returns servers of both ``devstack1`` and ``devstack2`` in this example. Response:: { "servers": [ { "name": "t1", ... }, { "name": "t2", ... } ] } **Example: Administrator query server with administrator only options** Request with administrator context: ``GET /servers/detail?host=devstack1`` Response:: { "servers": [ { "name": "t1", ... } ] } There are also some special query options: - ``changes-since`` returns the servers updated after the given time. Please see: :doc:`polling_changes` - ``changes-before`` returns the servers updated before the given time. Please see: :doc:`polling_changes` - ``deleted`` returns (or excludes) deleted servers - ``soft_deleted`` modifies behavior of 'deleted' to either include or exclude instances whose vm_state is SOFT_DELETED - ``all_tenants`` is an administrator query option, which allows the administrator to query the servers in any tenant. **Example: User query server with special keys changes-since or changes-before** Request: ``GET /servers/detail`` Response:: { "servers": [ { "name": "t1", "updated": "2015-12-15T15:55:52Z", ... }, { "name": "t2", "updated": "2015-12-17T15:55:52Z", ... } ] } Request: ``GET /servers/detail?changes-since='2015-12-16T15:55:52Z'`` Response:: { { "name": "t2", "updated": "2015-12-17T15:55:52Z", ... } } Request: ``GET /servers/detail?changes-before='2015-12-16T15:55:52Z'`` Response:: { { "name": "t1", "updated": "2015-12-15T15:55:52Z", ... } } Request: ``GET /servers/detail?changes-since='2015-12-10T15:55:52Z'&changes-before='2015-12-28T15:55:52Z'`` Response:: { "servers": [ { "name": "t1", "updated": "2015-12-15T15:55:52Z", ... }, { "name": "t2", "updated": "2015-12-17T15:55:52Z", ... 
} ] } There are two kinds of matching in query options: Exact matching and regex matching. **Example: User query server using exact matching on host** Request with administrator context: ``GET /servers/detail`` Response:: { "servers": [ { "name": "t1", "OS-EXT-SRV-ATTR:host": "devstack" ... }, { "name": "t2", "OS-EXT-SRV-ATTR:host": "devstack1" ... } ] } Request with administrator context: ``GET /servers/detail?host=devstack`` Response:: { "servers": [ { "name": "t1", "OS-EXT-SRV-ATTR:host": "devstack" ... } ] } **Example: Query server using regex matching on name** Request with administrator context: ``GET /servers/detail`` Response:: { "servers": [ { "name": "test11", ... }, { "name": "test21", ... }, { "name": "t1", ... }, { "name": "t14", ... } ] } Request with administrator context: ``GET /servers/detail?name=t1`` Response:: { "servers": [ { "name": "test11", ... }, { "name": "t1", ... }, { "name": "t14", ... } ] } **Example: User query server using exact matching on host and regex matching on name** Request with administrator context: ``GET /servers/detail`` Response:: { "servers": [ { "name": "test1", "OS-EXT-SRV-ATTR:host": "devstack" ... }, { "name": "t2", "OS-EXT-SRV-ATTR:host": "devstack1" ... }, { "name": "test3", "OS-EXT-SRV-ATTR:host": "devstack1" ... } ] } Request with administrator context: ``GET /servers/detail?host=devstack1&name=test`` Response:: { "servers": [ { "name": "test3", "OS-EXT-SRV-ATTR:host": "devstack1" ... } ] } Request: ``GET /servers/detail?changes-since='2015-12-16T15:55:52Z'`` Response:: { { "name": "t2", "updated": "2015-12-17T15:55:52Z" ... } } Server actions ~~~~~~~~~~~~~~ - **Reboot** Use this function to perform either a soft or hard reboot of a server. With a soft reboot, the operating system is signaled to restart, which allows for a graceful shutdown of all processes. A hard reboot is the equivalent of power cycling the server. The virtualization platform should ensure that the reboot action has completed successfully even in cases in which the underlying domain/VM is paused or halted/stopped. - **Rebuild** Use this function to remove all data on the server and replaces it with the specified image. Server ID, flavor and IP addresses remain the same. - **Evacuate** Should a nova-compute service actually go offline, it can no longer report status about any of the servers on it. This means they'll be listed in an 'ACTIVE' state forever. Evacuate is a work around for this that lets an administrator forcibly rebuild these servers on another node. It makes no guarantees that the host was actually down, so fencing is left as an exercise to the deployer. - **Resize** (including **Confirm resize**, **Revert resize**) Use this function to convert an existing server to a different flavor, in essence, scaling the server up or down. The original server is saved for a period of time to allow rollback if there is a problem. All resizes should be tested and explicitly confirmed, at which time the original server is removed. The resized server may be automatically confirmed based on the administrator's configuration of the deployment. Confirm resize action will delete the old server in the virt layer. The spawned server in the virt layer will be used from then on. On the contrary, Revert resize action will delete the new server spawned in the virt layer and revert all changes. The original server will be used from then on. - **Pause**, **Unpause** You can pause a server by making a pause request. This request stores the state of the VM in RAM. 
A paused server continues to run in a frozen state. Unpause returns a paused server back to an active state. - **Suspend**, **Resume** Administrative users might want to suspend a server if it is infrequently used or to perform system maintenance. When you suspend a server, its VM state is stored on disk, all memory is written to disk, and the virtual machine is stopped. Suspending a server is similar to placing a device in hibernation; memory and vCPUs become available to create other servers. Resume will resume a suspended server to an active state. - **Snapshot** You can store the current state of the server root disk to be saved and uploaded back into the glance image repository. Then a server can later be booted again using this saved image. - **Backup** You can use backup method to store server's current state in the glance repository, in the mean time, old snapshots will be removed based on the given 'daily' or 'weekly' type. - **Start** Power on the server. - **Stop** Power off the server. - **Delete**, **Restore** Power off the given server first then detach all the resources associated to the server such as network and volumes, then delete the server. The configuration option 'reclaim_instance_interval' (in seconds) decides whether the server to be deleted will still be in the system. If this value is greater than 0, the deleted server will not be deleted immediately, instead it will be put into a queue until it's too old (deleted time greater than the value of reclaim_instance_interval). Administrator is able to use Restore action to recover the server from the delete queue. If the deleted server remains longer than the value of reclaim_instance_interval, it will be deleted by compute service automatically. - **Shelve**, **Shelve offload**, **Unshelve** Shelving a server indicates it will not be needed for some time and may be temporarily removed from the hypervisors. This allows its resources to be freed up for use by someone else. By default the configuration option 'shelved_offload_time' is 0 and the shelved server will be removed from the hypervisor immediately after shelve operation; Otherwise, the resource will be kept for the value of 'shelved_offload_time' (in seconds) so that during the time period the unshelve action will be faster, then the periodic task will remove the server from hypervisor after 'shelved_offload_time' time passes. Set the option 'shelved_offload_time' to -1 make it never offload. Shelve will power off the given server and take a snapshot if it is booted from image. The server can then be offloaded from the compute host and its resources deallocated. Offloading is done immediately if booted from volume, but if booted from image the offload can be delayed for some time or infinitely, leaving the image on disk and the resources still allocated. Shelve offload is used to explicitly remove a shelved server that has been left on a host. This action can only be used on a shelved server and is usually performed by an administrator. Unshelve is the reverse operation of Shelve. It builds and boots the server again, on a new scheduled host if it was offloaded, using the shelved image in the glance repository if booted from image. - **Lock**, **Unlock** Lock a server so the following actions by non-admin users are not allowed to the server. 
- Delete Server - Change Administrative Password (changePassword Action) - Confirm Resized Server (confirmResize Action) - Force-Delete Server (forceDelete Action) - Pause Server (pause Action) - Reboot Server (reboot Action) - Rebuild Server (rebuild Action) - Rescue Server (rescue Action) - Resize Server (resize Action) - Restore Soft-Deleted Instance (restore Action) - Resume Suspended Server (resume Action) - Revert Resized Server (revertResize Action) - Shelf-Offload (Remove) Server (shelveOffload Action) - Shelve Server (shelve Action) - Start Server (os-start Action) - Stop Server (os-stop Action) - Suspend Server (suspend Action) - Trigger Crash Dump In Server - Unpause Server (unpause Action) - Unrescue Server (unrescue Action) - Unshelve (Restore) Shelved Server (unshelve Action) - Attach a volume to an instance - Update a volume attachment - Detach a volume from an instance - Create Interface - Detach Interface - Create Or Update Metadata Item - Create or Update Metadata Items - Delete Metadata Item - Replace Metadata Items - Add (Associate) Fixed Ip (addFixedIp Action) (DEPRECATED) - Remove (Disassociate) Fixed Ip (removeFixedIp Action) (DEPRECATED) .. NOTE(takashin): The following APIs can be performed by administrators only by default. So they are not listed in the above list. - Migrate Server (migrate Action) - Live-Migrate Server (os-migrateLive Action) - Force Migration Complete Action (force_complete Action) - Delete (Abort) Migration - Inject Network Information (injectNetworkInfo Action) - Reset Networking On A Server (resetNetwork Action) But administrators can perform the actions on the server even though the server is locked. By default, only owner or administrator can lock the sever, and administrator can overwrite owner's lock along with the locked_reason if it is specified. Unlock will unlock a server in locked state so additional operations can be performed on the server by non-admin users. By default, only owner or administrator can unlock the server. - **Rescue**, **Unrescue** The rescue operation starts a server in a special configuration whereby it is booted from a special root disk image. This enables the tenant to try and restore a broken guest system. Unrescue is the reverse action of Rescue. The server spawned from the special root image will be deleted. - **Set administrator password** Sets the root/administrator password for the given server. It uses an optionally installed agent to set the administrator password. - **Migrate**, **Live migrate** Migrate is usually utilized by administrator, it will move a server to another host; it utilizes the 'resize' action but with same flavor, so during migration, the server will be powered off and rebuilt on another host. Live migrate also moves a server from one host to another, but it won't power off the server in general so the server will not suffer a down time. Administrators may use this to evacuate servers from a host that needs to undergo maintenance tasks. - **Trigger crash dump** Trigger crash dump usually utilized by either administrator or the server's owner, it will dump the memory image as dump file into the given server, and then reboot the kernel again. And this feature depends on the setting about the trigger (e.g. NMI) in the server. Server passwords ~~~~~~~~~~~~~~~~ You can specify a password when you create the server through the optional adminPass attribute. The specified password must meet the complexity requirements set by your OpenStack Compute provider. 
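As a minimal sketch, the request below supplies ``adminPass`` when creating a server using the Python ``requests`` library; the endpoint, token, image and flavor references are placeholders, and additional attributes may be required depending on the cloud and microversion in use.

.. code-block:: python

    import requests

    # Placeholders; substitute the values for your cloud.
    COMPUTE_URL = "http://openstack.example.com/v2.1"
    TOKEN = "<auth-token>"

    body = {
        "server": {
            "name": "new-server-test",
            "imageRef": "<image-uuid>",
            "flavorRef": "<flavor-id>",
            # Optional: request a specific administrative password instead
            # of accepting a randomly generated one.
            "adminPass": "<password-meeting-provider-policy>",
        }
    }

    resp = requests.post(
        COMPUTE_URL + "/servers",
        json=body,
        headers={"X-Auth-Token": TOKEN},
    )
    resp.raise_for_status()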
The server might enter an ``ERROR`` state if the complexity requirements are not met. In this case, a client can issue a change password action to reset the server password. If a password is not specified, a randomly generated password is assigned and returned in the response object. This password is guaranteed to meet the security requirements set by the compute provider. For security reasons, the password is not returned in subsequent **GET** calls. Server metadata ~~~~~~~~~~~~~~~ Custom server metadata can also be supplied at launch time. The maximum size of the metadata key and value is 255 bytes each. The maximum number of key-value pairs that can be supplied per server is determined by the compute provider and may be queried via the maxServerMeta absolute limit. Block Device Mapping ~~~~~~~~~~~~~~~~~~~~ Simply speaking, Block Device Mapping describes how block devices are exposed to the server. For some historical reasons, nova has two ways to mention the block device mapping in server creation request body: - ``block_device_mapping``: This is the legacy way and supports backward compatibility for EC2 API. - ``block_device_mapping_v2``: This is the recommended format to specify Block Device Mapping information in server creation request body. Users cannot mix the two formats in the same request. For more information, refer to `Block Device Mapping `_. For the full list of ``block_device_mapping_v2`` parameters available when creating a server, see the `API reference `_. **Example for block_device_mapping_v2** This will create a 100GB size volume type block device from an image with UUID of ``bb02b1a3-bc77-4d17-ab5b-421d89850fca``. It will be used as the first order boot device (``boot_index=0``), and this block device will not be deleted after we terminate the server. Note that the ``imageRef`` parameter is not required in this case since we are creating a volume-backed server. .. code-block:: json { "server": { "name": "volume-backed-server-test", "flavorRef": "52415800-8b69-11e0-9b19-734f1195ff37", "block_device_mapping_v2": [ { "boot_index": 0, "uuid": "bb02b1a3-bc77-4d17-ab5b-421d89850fca", "volume_size": "100", "source_type": "image", "destination_type": "volume", "delete_on_termination": false } ] } } Scheduler Hints ~~~~~~~~~~~~~~~ Scheduler hints are a way for the user to influence on which host the scheduler places a server. They are pre-determined key-value pairs specified as a dictionary separate from the main ``server`` dictionary in the server create request. Available scheduler hints vary from cloud to cloud, depending on the `cloud's configuration`_. .. code-block:: json { "server": { "name": "server-in-group", "imageRef": "52415800-8b69-11e0-9b19-734f6f006e54", "flavorRef": "52415800-8b69-11e0-9b19-734f1195ff37" }, "os:scheduler_hints": { "group": "05a81485-010f-4df1-bbec-7821c85686e8" } } For more information on how to specify scheduler hints refer to `the create-server-detail Request section`_ in the Compute API reference. For more information on how scheduler hints are different from flavor extra specs, refer to `this document`_. .. _cloud's configuration: https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html .. _the create-server-detail Request section: https://docs.openstack.org/api-ref/compute/?expanded=create-server-detail#create-server .. 
_this document: https://docs.openstack.org/nova/latest/reference/scheduler-hints-vs-flavor-extra-specs.html#scheduler-hints Server Consoles ~~~~~~~~~~~~~~~ Server Consoles can also be supplied after server launched. There are several server console services available. First, users can get the console output from the specified server and can limit the lines of console text by setting the length. Secondly, users can access multiple types of remote consoles. The user can use ``novnc``, ``rdp-html5``, ``spice-html5``, ``serial``, and ``webmks`` (starting from microversion 2.8) through either the OpenStack dashboard or the command line. Refer to :nova-doc:`Configure remote console access `. Server networks ~~~~~~~~~~~~~~~ Networks to which the server connects can also be supplied at launch time. One or more networks can be specified. User can also specify a specific port on the network or the fixed IP address to assign to the server interface. Server access addresses ~~~~~~~~~~~~~~~~~~~~~~~ In a hybrid environment, the IP address of a server might not be controlled by the underlying implementation. Instead, the access IP address might be part of the dedicated hardware; for example, a router/NAT device. In this case, the addresses provided by the implementation cannot actually be used to access the server (from outside the local LAN). Here, a separate *access address* may be assigned at creation time to provide access to the server. This address may not be directly bound to a network interface on the server and may not necessarily appear when a server's addresses are queried. Nonetheless, clients that must access the server directly are encouraged to do so via an access address. In the example below, an IPv4 address is assigned at creation time. **Example: Create server with access IP: JSON request** .. code-block:: json { "server": { "name": "new-server-test", "imageRef": "52415800-8b69-11e0-9b19-734f6f006e54", "flavorRef": "52415800-8b69-11e0-9b19-734f1195ff37", "accessIPv4": "67.23.10.132" } } .. note:: Both IPv4 and IPv6 addresses may be used as access addresses and both addresses may be assigned simultaneously as illustrated below. Access addresses may be updated after a server has been created. **Example: Create server with multiple access IPs: JSON request** .. code-block:: json { "server": { "name": "new-server-test", "imageRef": "52415800-8b69-11e0-9b19-734f6f006e54", "flavorRef": "52415800-8b69-11e0-9b19-734f1195ff37", "accessIPv4": "67.23.10.132", "accessIPv6": "::babe:67.23.10.132" } } Moving servers ~~~~~~~~~~~~~~ There are several actions that may result in a server moving from one compute host to another including shelve, resize, migrations and evacuate. The following use cases demonstrate the intention of the actions and the consequence for operational procedures. Cloud operator needs to move a server ------------------------------------- Sometimes a cloud operator may need to redistribute work loads for operational purposes. For example, the operator may need to remove a compute host for maintenance or deploy a kernel security patch that requires the host to be rebooted. The operator has two actions available for deliberately moving work loads: cold migration (moving a server that is not active) and live migration (moving a server that is active). Cold migration moves a server from one host to another by copying its state, local storage and network configuration to new resources allocated on a new host selected by scheduling policies. 
The operation is relatively quick as the server is not changing its state during the copy process. The user does not have access to the server during the operation. Live migration moves a server from one host to another while it is active, so it is constantly changing its state during the action. As a result it can take considerably longer than cold migration. During the action the server is online and accessible, but only a limited set of management actions are available to the user. The following are common patterns for employing migrations in a cloud: - **Host maintenance** If a compute host is to be removed from the cloud all its servers will need to be moved to other hosts. In this case it is normal for the rest of the cloud to absorb the work load, redistributing the servers by rescheduling them. To prepare the host it will be disabled so it does not receive any further servers. Then each server will be migrated to a new host by cold or live migration, depending on the state of the server. When complete, the host is ready to be removed. - **Rolling updates** Often it is necessary to perform an update on all compute hosts which requires them to be rebooted. In this case it is not strictly necessary to move inactive servers because they will be available after the reboot. However, active servers would be impacted by the reboot. Live migration will allow them to continue operation. In this case a rolling approach can be taken by starting with an empty compute host that has been updated and rebooted. Another host that has not yet been updated is disabled and all its servers are migrated to the new host. When the migrations are complete the new host continues normal operation. The old host will be empty and can be updated and rebooted. It then becomes the new target for another round of migrations. This process can be repeated until the whole cloud has been updated, usually using a pool of empty hosts instead of just one. - **Resource Optimization** To reduce energy usage, some cloud operators will try and move servers so they fit into the minimum number of hosts, allowing some servers to be turned off. Sometimes higher performance might be wanted, so servers are spread out between the hosts to minimize resource contention. Migrating a server is not normally a choice that is available to the cloud user because the user is not normally aware of compute hosts. Management of the cloud and how servers are provisioned in it is the responsibility of the cloud operator. Recover from a failed compute host ---------------------------------- Sometimes a compute host may fail. This is a rare occurrence, but when it happens during normal operation the servers running on the host may be lost. In this case the operator may recreate the servers on the remaining compute hosts using the evacuate action. Failure detection can be proved to be impossible in compute systems with asynchronous communication, so true failure detection cannot be achieved. Usually when a host is considered to have failed it should be excluded from the cloud and any virtual networking or storage associated with servers on the failed host should be isolated from it. These steps are called fencing the host. Initiating these action is outside the scope of Nova. Once the host has been fenced its servers can be recreated on other hosts without worry of the old incarnations reappearing and trying to access shared resources. It is usual to redistribute the servers from a failed host by rescheduling them. 
Please note, this operation can result in data loss for the user's server. As there is no access to the original server, if there were any disks stored on local storage, that data will be lost. Evacuate does the same operation as a rebuild. It downloads any images from glance and creates new blank ephemeral disks. Any disks that were volumes, or on shared storage, are reconnected. There should be no data loss for those disks. This is why fencing the host is important, to ensure volumes and shared storage are not corrupted by two servers writing simultaneously. Evacuating a server is solely in the domain of the cloud operator because it must be performed in coordination with other operational procedures to be safe. A user is not normally aware of compute hosts but is adversely affected by their failure. User resizes server to get more resources ----------------------------------------- Sometimes a user may want to change the flavor of a server, e.g. change the quantity of cpus, disk, memory or any other resource. This is done by restarting the server with a new flavor. As the server is being moved, it is normal to reschedule the server to another host (although resize to the same host is an option for the operator). Resize involves shutting down the server, finding a host that has the correct resources for the new flavor size, moving the current server (including all storage) to the new host. Once the server has been given the appropriate resources to match the new flavor, the server is started again. After the resize operation, when the user is happy their server is working correctly after the resize, the user calls Confirm Resize. This deletes the 'before-the-resize' server that was kept on the source host. Alternatively, the user can call Revert Resize to delete the new resized server and restore the old that was stored on the source host. If the user does not manually confirm the resize within a configured time period, the resize is automatically confirmed, to free up the space the old is using on the source host. As with shelving, resize provides the cloud operator with an opportunity to redistribute work loads across the cloud according to the operators scheduling policy, providing the same benefits as above. Resizing a server is not normally a choice that is available to the cloud operator because it changes the nature of the server being provided to the user. User doesn't want to be charged when not using a server ------------------------------------------------------- Sometimes a user does not require a server to be active for a while, perhaps over a weekend or at certain times of day. Ideally they don't want to be billed for those resources. Just powering down a server does not free up any resources, but shelving a server does free up resources to be used by other users. This makes it feasible for a cloud operator to offer a discount when a server is shelved. When the user shelves a server the operator can choose to remove it from the compute hosts, i.e. the operator can offload the shelved server. When the user's server is unshelved, it is scheduled to a new host according to the operators policies for distributing work loads across the compute hosts, including taking disabled hosts into account. This will contribute to increased overall capacity, freeing hosts that are ear-marked for maintenance and providing contiguous blocks of resources on single hosts due to moving out old servers. 
Shelving a server is not normally a choice that is available to the cloud operator because it affects the availability of the server being provided to the user. Configure Guest OS ~~~~~~~~~~~~~~~~~~ Metadata API ------------ Nova provides a metadata API for servers to retrieve server specific metadata. Neutron ensures this metadata API can be accessed through a predefined IP address, ``169.254.169.254``. For more details, refer to the :nova-doc:`user guide `. Config Drive ------------ Nova is able to write metadata to a special configuration drive that attaches to the server when it boots. The server can mount this drive and read files from it to get information that is normally available through the metadata service. For more details, refer to the :nova-doc:`user guide `. User data --------- A user data file is a special key in the metadata service that holds a file that cloud-aware applications in the server can access. This information can be accessed via the metadata API or a config drive. The latter allows the deployed server to consume it by active engines such as cloud-init during its boot process, where network connectivity may not be an option. Server personality ------------------ You can customize the personality of a server by injecting data into its file system. For example, you might want to insert ssh keys, set configuration files, or store data that you want to retrieve from inside the server. This feature provides a minimal amount of launch-time personalization. If you require significant customization, create a custom image. Follow these guidelines when you inject files: - The maximum size of the file path data is 255 bytes. - Encode the file contents as a Base64 string. The maximum size of the file contents is determined by the compute provider and may vary based on the image that is used to create the server. Considerations: - The maximum limit refers to the number of bytes in the decoded data and not the number of characters in the encoded data. - The maximum number of file path/content pairs that you can supply is also determined by the compute provider and is defined by the maxPersonality absolute limit. - The absolute limit, maxPersonalitySize, is a byte limit that is guaranteed to apply to all images in the deployment. Providers can set additional per-image personality limits. - The file injection might not occur until after the server is built and booted. - After file injection, personality files are accessible by only system administrators. For example, on Linux, all files have root and the root group as the owner and group owner, respectively, and allow user and group read access only (octal 440). ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-guide/source/users.rst0000664000175000017500000000471000000000000017627 0ustar00zuulzuul00000000000000.. Copyright 2015 OpenStack Foundation Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ===== Users ===== The Compute API includes all end user and administrator API calls. 
Role based access control ========================= Keystone middleware is used to authenticate users and identify their roles. The Compute API uses these roles, along with oslo.policy, to decide what the user is authorized to do. Refer to the to :nova-doc:`compute admin guide ` for details. Personas used in this guide =========================== While the policy can be configured in many ways, to make it easy to understand the most common use cases the API have been designed for, we should standardize on the following types of user: * application deployer: creates/deletes servers, directly or indirectly via API * application developer: creates images and applications that run on the cloud * cloud administrator: deploys, operates and maintains the cloud Now in reality the picture is much more complex. Specifically, there are likely to be different roles for observer, creator and administrator roles for the application developer. Similarly, there are likely to be various levels of cloud administrator permissions, such as a read-only role that is able to view a lists of servers for a specific tenant but is not able to perform any actions on any of them. .. note:: This is not attempting to be an exhaustive set of personas that consider various facets of the different users but instead aims to be a minimal set of users such that we use a consistent terminology throughout this document. Discovering Policy ================== An API to discover what actions you are authorized to perform is still a work in progress. Currently this reported by a HTTP 403 :ref:`error `. Refer to the :nova-doc:`configuration guide ` for a list of policy rules along with their default values. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-guide/source/versions.rst0000664000175000017500000000761700000000000020347 0ustar00zuulzuul00000000000000======== Versions ======== The OpenStack Compute API uses both a URI and a MIME type versioning scheme. In the URI scheme, the first element of the path contains the target version identifier (e.g. `https://servers.api.openstack.org/ v2.1/`...). The MIME type versioning scheme uses HTTP content negotiation where the ``Accept`` or ``Content-Type`` headers contains a MIME type that identifies the version (application/vnd.openstack.compute.v2.1+json). A version MIME type is always linked to a base MIME type, such as application/json. If conflicting versions are specified using both an HTTP header and a URI, the URI takes precedence. **Example: Request with MIME type versioning** .. code:: GET /214412/images HTTP/1.1 Host: servers.api.openstack.org Accept: application/vnd.openstack.compute.v2.1+json X-Auth-Token: eaaafd18-0fed-4b3a-81b4-663c99ec1cbb **Example: Request with URI versioning** .. code:: GET /v2.1/214412/images HTTP/1.1 Host: servers.api.openstack.org Accept: application/json X-Auth-Token: eaaafd18-0fed-4b3a-81b4-663c99ec1cbb Permanent Links ~~~~~~~~~~~~~~~ The MIME type versioning approach allows for creating of permanent links, because the version scheme is not specified in the URI path: `https://api.servers.openstack.org/224532/servers/123`. If a request is made without a version specified in the URI or via HTTP headers, then a multiple-choices response (300) follows that provides links and MIME types to available versions. **Example: Multiple choices: JSON response** .. 
code:: { "choices": [ { "id": "v2.0", "links": [ { "href": "http://servers.api.openstack.org/v2/7f5b2214547e4e71970e329ccf0b257c/servers/detail", "rel": "self" } ], "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.compute+json;version=2" } ], "status": "SUPPORTED" }, { "id": "v2.1", "links": [ { "href": "http://servers.api.openstack.org/v2.1/7f5b2214547e4e71970e329ccf0b257c/servers/detail", "rel": "self" } ], "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.compute+json;version=2.1" } ], "status": "CURRENT" } ] } The API with ``CURRENT`` status is the newest API and continues to be improved by the Nova project. The API with ``SUPPORTED`` status is the old API, where new features are frozen. The API with ``DEPRECATED`` status is the API that will be removed in the foreseeable future. Providers should work with developers and partners to ensure there is adequate time to migrate to the new version before deprecated versions are discontinued. For any API which is under development but isn't released as yet, the API status is ``EXPERIMENTAL``. Your application can programmatically determine available API versions by performing a **GET** on the root URL (i.e. with the version and everything following that truncated) returned from the authentication system. You can also obtain additional information about a specific version by performing a **GET** on the base version URL (such as, `https://servers.api.openstack.org/v2.1/`). Version request URLs must always end with a trailing slash (`/`). If you omit the slash, the server might respond with a 302 redirection request. For examples of the list versions and get version details requests and responses, see `API versions `__. The detailed version response contains pointers to both a human-readable and a machine-processable description of the API service. ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0464725 nova-21.2.4/api-ref/0000775000175000017500000000000000000000000014111 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.106472 nova-21.2.4/api-ref/source/0000775000175000017500000000000000000000000015411 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/api-ref/source/conf.py0000664000175000017500000000460400000000000016714 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # nova documentation build configuration file, created by # sphinx-quickstart on Sat May 1 15:17:47 2010. # # This file is execfile()d with the current directory set to # its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. 
extensions = [ 'openstackdocstheme', 'os_api_ref', ] # -- General configuration ---------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. # The suffix of source filenames. source_suffix = '.rst' # The master toctree document. master_doc = 'index' # General information about the project. copyright = u'2010-present, OpenStack Foundation' # openstackdocstheme options repository_name = 'openstack/nova' bug_project = 'nova' bug_tag = 'api-ref' # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # -- Options for HTML output -------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. html_theme_options = { "sidebar_mode": "toc", } # -- Options for LaTeX output ------------------------------------------------- # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass # [howto/manual]). latex_documents = [ ('index', 'Nova.tex', u'OpenStack Compute API Documentation', u'OpenStack Foundation', 'manual'), ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/diagnostics.inc0000664000175000017500000000432700000000000020421 0ustar00zuulzuul00000000000000.. -*- rst -*- ============================================ Servers diagnostics (servers, diagnostics) ============================================ Shows the usage data for a server. Show Server Diagnostics ======================= .. rest_method:: GET /servers/{server_id}/diagnostics Shows basic usage data for a server. Policy defaults enable only users with the administrative role. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), notfound(404), conflict(409), notimplemented(501) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path Response -------- Starting from **microversion 2.48** diagnostics response is standardized across all virt drivers. The response should be considered a debug interface only and not relied upon by programmatic tools. All response fields are listed below. If the virt driver is unable to provide a specific field then this field will be reported as ``None`` in the response. .. rest_parameters:: parameters.yaml - config_drive: config_drive_diagnostics - state: vm_state_diagnostics - driver: driver_diagnostics - hypervisor: hypervisor_diagnostics - hypervisor_os: hypervisor_os_diagnostics - uptime: uptime_diagnostics - num_cpus: num_cpus_diagnostics - num_disks: num_disks_diagnostics - num_nics: num_nics_diagnostics - memory_details: memory_details_diagnostics - cpu_details: cpu_details_diagnostics - disk_details: disk_details_diagnostics - nic_details: nic_details_diagnostics **Example Server diagnostics (2.48)** .. literalinclude:: ../../doc/api_samples/os-server-diagnostics/v2.48/server-diagnostics-get-resp.json :language: javascript .. warning:: Before **microversion 2.48** the response format for diagnostics was not well defined. 
Each hypervisor had its own format. **Example Server diagnostics (2.1)** Below is an example of diagnostics for a libvirt based instance. The unit of the return value is hypervisor specific, but in this case the unit of vnet1_rx* and vnet1_tx* is octets. .. literalinclude:: ../../doc/api_samples/os-server-diagnostics/server-diagnostics-get-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/extensions.inc0000664000175000017500000000411200000000000020301 0ustar00zuulzuul00000000000000.. -*- rst -*- ===================================== Extensions (extensions) (DEPRECATED) ===================================== Lists available extensions and shows information for an extension, by alias. Nova originally supported the concept of API extensions, that allowed implementations of Nova to change the API (add new resources, or attributes to existing resource objects) via extensions. In an attempt to expose to the user what was supported in a particular site, the extensions resource provided a list of extensions and detailed information on each. The net result was gratuitous differentiation in the API that required all users of OpenStack clouds to write specific code to interact with every cloud. As such, the entire extensions concept is deprecated, and will be removed in the near future. List Extensions =============== .. rest_method:: GET /extensions Lists all extensions to the API. Normal response codes: 200 Error response codes: unauthorized(401) Response -------- .. rest_parameters:: parameters.yaml - extensions: extensions - name: extension_name - alias: alias - links: extension_links - namespace: namespace - description: extension_description - updated: updated **Example List Extensions** Lists all extensions to the API. .. literalinclude:: ../../doc/api_samples/extension-info/extensions-list-resp.json :language: javascript Show Extension Details ====================== .. rest_method:: GET /extensions/{alias} Shows details for an extension, by alias. Normal response codes: 200 Error response codes: unauthorized(401), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - alias: alias Response -------- .. rest_parameters:: parameters.yaml - extension: extension - name: extension_name - alias: alias - links: extension_links - namespace: namespace - description: extension_description - updated: updated **Example Show Extension Details** Shows details about the ``os-agents`` extension. .. literalinclude:: ../../doc/api_samples/extension-info/extensions-get-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/flavors.inc0000664000175000017500000001467200000000000017572 0ustar00zuulzuul00000000000000.. -*- rst -*- ========= Flavors ========= Show and manage server flavors. Flavors are a way to describe the basic dimensions of a server to be created including how much ``cpu``, ``ram``, and ``disk space`` are allocated to a server built with this flavor. List Flavors ============ .. rest_method:: GET /flavors Lists all flavors accessible to your project. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Request ------- .. 
rest_parameters:: parameters.yaml - sort_key: sort_key_flavor - sort_dir: sort_dir_flavor - limit: limit - marker: marker - minDisk: minDisk - minRam: minRam - is_public: flavor_is_public_query Response -------- .. rest_parameters:: parameters.yaml - flavors: flavors - id: flavor_id_body - name: flavor_name - description: flavor_description_resp - links: links **Example List Flavors (v2.55)** .. literalinclude:: ../../doc/api_samples/flavors/v2.55/flavors-list-resp.json :language: javascript Create Flavor ============= .. rest_method:: POST /flavors Creates a flavor. Creating a flavor is typically only available to administrators of a cloud because this has implications for scheduling efficiently in the cloud. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - flavor: flavor - name: flavor_name - description: flavor_description - id: flavor_id_body_create - ram: flavor_ram - disk: flavor_disk - vcpus: flavor_cpus - OS-FLV-EXT-DATA:ephemeral: flavor_ephem_disk_in - swap: flavor_swap_in - rxtx_factor: flavor_rxtx_factor_in - os-flavor-access:is_public: flavor_is_public_in **Example Create Flavor (v2.55)** .. literalinclude:: ../../doc/api_samples/flavor-manage/v2.55/flavor-create-post-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - flavor: flavor - name: flavor_name - description: flavor_description_resp - id: flavor_id_body - ram: flavor_ram - disk: flavor_disk - vcpus: flavor_cpus - links: links - OS-FLV-EXT-DATA:ephemeral: flavor_ephem_disk - OS-FLV-DISABLED:disabled: flavor_disabled - swap: flavor_swap - rxtx_factor: flavor_rxtx_factor - os-flavor-access:is_public: flavor_is_public - extra_specs: extra_specs_2_61 **Example Create Flavor (v2.75)** .. literalinclude:: ../../doc/api_samples/flavor-manage/v2.75/flavor-create-post-resp.json :language: javascript List Flavors With Details ========================= .. rest_method:: GET /flavors/detail Lists flavors with details. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - sort_key: sort_key_flavor - sort_dir: sort_dir_flavor - limit: limit - marker: marker - minDisk: minDisk - minRam: minRam - is_public: flavor_is_public_query Response -------- .. rest_parameters:: parameters.yaml - flavors: flavors - name: flavor_name - description: flavor_description_resp - id: flavor_id_body - ram: flavor_ram - disk: flavor_disk - vcpus: flavor_cpus - links: links - OS-FLV-EXT-DATA:ephemeral: flavor_ephem_disk - OS-FLV-DISABLED:disabled: flavor_disabled - swap: flavor_swap - rxtx_factor: flavor_rxtx_factor - os-flavor-access:is_public: flavor_is_public - extra_specs: extra_specs_2_61 **Example List Flavors With Details (v2.75)** .. literalinclude:: ../../doc/api_samples/flavors/v2.75/flavors-detail-resp.json :language: javascript Show Flavor Details =================== .. rest_method:: GET /flavors/{flavor_id} Shows details for a flavor. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - flavor_id: flavor_id Response -------- .. 
rest_parameters:: parameters.yaml - flavor: flavor - name: flavor_name - description: flavor_description_resp - id: flavor_id_body - ram: flavor_ram - disk: flavor_disk - vcpus: flavor_cpus - links: links - OS-FLV-EXT-DATA:ephemeral: flavor_ephem_disk - OS-FLV-DISABLED:disabled: flavor_disabled - swap: flavor_swap - rxtx_factor: flavor_rxtx_factor - os-flavor-access:is_public: flavor_is_public - extra_specs: extra_specs_2_61 **Example Show Flavor Details (v2.75)** .. literalinclude:: ../../doc/api_samples/flavors/v2.75/flavor-get-resp.json :language: javascript Update Flavor Description ========================= .. rest_method:: PUT /flavors/{flavor_id} Updates a flavor description. This API is available starting with microversion 2.55. Policy defaults enable only users with the administrative role to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - flavor_id: flavor_id - flavor: flavor - description: flavor_description_required **Example Update Flavor Description (v2.55)** .. literalinclude:: ../../doc/api_samples/flavor-manage/v2.55/flavor-update-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - flavor: flavor - name: flavor_name - description: flavor_description_resp_no_min - id: flavor_id_body - ram: flavor_ram - disk: flavor_disk - vcpus: flavor_cpus - links: links - OS-FLV-EXT-DATA:ephemeral: flavor_ephem_disk - OS-FLV-DISABLED:disabled: flavor_disabled - swap: flavor_swap - rxtx_factor: flavor_rxtx_factor - os-flavor-access:is_public: flavor_is_public - extra_specs: extra_specs_2_61 **Example Update Flavor Description (v2.75)** .. literalinclude:: ../../doc/api_samples/flavor-manage/v2.75/flavor-update-resp.json :language: javascript Delete Flavor ============= .. rest_method:: DELETE /flavors/{flavor_id} Deletes a flavor. This is typically an admin-only action. Deleting a flavor that is in use by existing servers is not recommended as it can cause incorrect data to be returned to the user under some operations. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - flavor_id: flavor_id Response -------- No body content is returned on a successful DELETE.
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/images.inc0000664000175000017500000001751500000000000017362 0ustar00zuulzuul00000000000000.. -*- rst -*- ==================== Images (DEPRECATED) ==================== .. warning:: These APIs are proxy calls to the Image service. Nova has deprecated all the proxy APIs and users should use the native APIs instead. All the Image service proxy APIs except the image metadata APIs will fail with a 404 starting from microversion 2.36. The image metadata APIs will fail with a 404 starting from microversion 2.39. See: `Relevant Image APIs `__. Lists, shows details and deletes images. Also sets, lists, shows details for, creates, updates and deletes image metadata. An image is a collection of files that you use to create and rebuild a server. By default, operators provide pre-built operating system images. You can also create custom images. See: `Create Image Action `__.
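As a rough illustration of the deprecation behaviour described in the warning above, the following is a minimal client-side sketch, not part of the official API samples; the endpoint and token values are placeholders and the requested microversion is only an example:

.. code:: python

    import requests

    COMPUTE = 'https://servers.api.openstack.org/v2.1'   # hypothetical endpoint
    TOKEN = 'eaaafd18-0fed-4b3a-81b4-663c99ec1cbb'       # hypothetical token

    # Without a microversion, the deprecated image proxy still answers.
    resp = requests.get(COMPUTE + '/images',
                        headers={'X-Auth-Token': TOKEN})
    print(resp.status_code)  # expected 200

    # With microversion 2.36 or later, the same proxy call is rejected.
    resp = requests.get(COMPUTE + '/images',
                        headers={'X-Auth-Token': TOKEN,
                                 'OpenStack-API-Version': 'compute 2.36'})
    print(resp.status_code)  # expected 404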
By default, the ``policy.json`` file authorizes all users to view the image size in the ``OS-EXT-IMG-SIZE:size`` extended attribute. List Images =========== .. rest_method:: GET /images List images. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - changes-since: changes-since - server: image_server_query - name: image_name_query - status: image_status_query - minDisk: minDisk - minRam: minRam - type : image_type_query - limit : limit - marker : marker Response -------- .. rest_parameters:: parameters.yaml - images: images - id: image_id_body - name: image_name - links: links **Example List Images: JSON response** .. literalinclude:: ../../doc/api_samples/images/images-list-get-resp.json :language: javascript List Images With Details ======================== .. rest_method:: GET /images/detail List images with details. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - changes-since: changes-since - server: image_server_query - name: image_name_query - status: image_status_query - minDisk: minDisk - minRam: minRam - type : image_type_query - limit : limit - marker : marker Response -------- .. rest_parameters:: parameters.yaml - images: images - id: image_id_body - name: image_name - minRam: minRam_body - minDisk: minDisk_body - metadata: metadata_object - created: created - updated: updated - status: image_status - progress: image_progress - links: links - server: image_server - OS-EXT-IMG-SIZE:size: image_size - OS-DCF:diskConfig: OS-DCF:diskConfig **Example List Images Details: JSON response** .. literalinclude:: ../../doc/api_samples/images/images-details-get-resp.json :language: javascript Show Image Details ================== .. rest_method:: GET /images/{image_id} Shows details for an image. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - image_id: image_id Response -------- .. rest_parameters:: parameters.yaml - images: images - id: image_id_body - name: image_name - minRam: minRam_body - minDisk: minDisk_body - metadata: metadata_object - created: created - updated: updated - status: image_status - progress: image_progress - links: links - server: image_server - OS-EXT-IMG-SIZE:size: image_size - OS-DCF:diskConfig: OS-DCF:diskConfig **Example Show Image Details: JSON response** .. literalinclude:: ../../doc/api_samples/images/image-get-resp.json :language: javascript Delete Image ============ .. rest_method:: DELETE /images/{image_id} Deletes an image. Normal response codes: 204 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - image_id: image_id Response -------- There is no body content for the response of a successful DELETE action. List Image Metadata =================== .. rest_method:: GET /images/{image_id}/metadata List metadata of an image. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - image_id: image_id Response -------- .. rest_parameters:: parameters.yaml - metadata: metadata_object **Example List Image Metadata Details: JSON response** .. 
literalinclude:: ../../doc/api_samples/images/image-metadata-get-resp.json :language: javascript Create Image Metadata ===================== .. rest_method:: POST /images/{image_id}/metadata Create an image metadata. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - image_id: image_id - metadata: metadata_object **Example Create Image Metadata: JSON request** .. literalinclude:: ../../doc/api_samples/images/image-metadata-post-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - metadata: metadata_object **Example Create Image Metadata: JSON response** .. literalinclude:: ../../doc/api_samples/images/image-metadata-post-resp.json :language: javascript Update Image Metadata ===================== .. rest_method:: PUT /images/{image_id}/metadata Update an image metadata Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - image_id: image_id - metadata: metadata_object **Example Update Image Metadata: JSON request** .. literalinclude:: ../../doc/api_samples/images/image-metadata-put-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - metadata: metadata_object **Example Update Image Metadata: JSON response** .. literalinclude:: ../../doc/api_samples/images/image-metadata-put-resp.json :language: javascript Show Image Metadata Item ======================== .. rest_method:: GET /images/{image_id}/metadata/{key} Shows metadata item, by key, for an image. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - image_id: image_id - key: key Response -------- .. rest_parameters:: parameters.yaml - meta: meta **Example Show Image Metadata Item Details: JSON response** .. literalinclude:: ../../doc/api_samples/images/image-meta-key-get.json :language: javascript Create Or Update Image Metadata Item ==================================== .. rest_method:: PUT /images/{image_id}/metadata/{key} Creates or updates a metadata item, by key, for an image. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - image_id: image_id - key: key - meta: meta **Example Create Or Update Image Metadata Item: JSON request** .. literalinclude:: ../../doc/api_samples/images/image-meta-key-put-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - meta: meta **Example Create Or Update Image Metadata Item: JSON response** .. literalinclude:: ../../doc/api_samples/images/image-meta-key-put-resp.json :language: javascript Delete Image Metadata Item ========================== .. rest_method:: DELETE /images/{image_id}/metadata/{key} Deletes a metadata item, by key, for an image. Normal response codes: 204 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - image_id: image_id - key: key Response -------- There is no body content for the response of a successful DELETE action. 
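To tie the image metadata item calls above together, here is a hedged, hypothetical sketch of creating and then deleting a single metadata item through this deprecated proxy; the endpoint, token and image ID are placeholders, and remember that these calls return 404 from microversion 2.39 onwards:

.. code:: python

    import requests

    COMPUTE = 'https://servers.api.openstack.org/v2.1'   # hypothetical endpoint
    TOKEN = 'eaaafd18-0fed-4b3a-81b4-663c99ec1cbb'       # hypothetical token
    IMAGE = '70a599e0-31e7-49b7-b260-868f441e862b'       # hypothetical image ID

    # Create or update a single metadata item, by key.
    resp = requests.put(
        '{}/images/{}/metadata/auto_disk_config'.format(COMPUTE, IMAGE),
        headers={'X-Auth-Token': TOKEN},
        json={'meta': {'auto_disk_config': 'True'}})
    print(resp.status_code, resp.json().get('meta'))

    # Delete the same item; a successful DELETE returns 204 with no body.
    resp = requests.delete(
        '{}/images/{}/metadata/auto_disk_config'.format(COMPUTE, IMAGE),
        headers={'X-Auth-Token': TOKEN})
    print(resp.status_code)  # expected 204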
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/api-ref/source/index.rst0000664000175000017500000000542000000000000017253 0ustar00zuulzuul00000000000000:tocdepth: 2 ============= Compute API ============= This is a reference for the OpenStack Compute API which is provided by the Nova project. To learn more about the OpenStack Compute API concepts, please refer to the `API guide `_. .. rest_expand_all:: .. include:: versions.inc .. include:: urls.inc .. include:: request-ids.inc .. include:: servers.inc .. include:: servers-actions.inc .. include:: servers-action-fixed-ip.inc .. include:: servers-action-deferred-delete.inc .. include:: servers-action-console-output.inc .. include:: servers-action-shelve.inc .. include:: servers-action-crash-dump.inc .. include:: servers-action-remote-consoles.inc .. include:: servers-admin-action.inc .. include:: servers-action-evacuate.inc .. include:: servers-remote-consoles.inc .. include:: server-security-groups.inc .. include:: diagnostics.inc .. include:: ips.inc .. include:: metadata.inc .. include:: os-instance-actions.inc .. include:: os-interface.inc .. include:: os-server-password.inc .. include:: os-volume-attachments.inc .. include:: flavors.inc .. include:: os-flavor-access.inc .. include:: os-flavor-extra-specs.inc .. include:: os-keypairs.inc .. include:: limits.inc .. include:: os-agents.inc .. include:: os-aggregates.inc .. include:: os-assisted-volume-snapshots.inc .. include:: os-availability-zone.inc .. include:: os-hypervisors.inc .. include:: os-instance-usage-audit-log.inc .. include:: os-migrations.inc .. include:: server-migrations.inc .. include:: os-quota-sets.inc .. include:: os-quota-class-sets.inc .. include:: os-server-groups.inc .. include:: os-server-tags.inc .. include:: os-services.inc .. include:: os-simple-tenant-usage.inc .. include:: os-server-external-events.inc .. include:: server-topology.inc =============== Deprecated APIs =============== This section contains references for APIs which are deprecated and usually limited to some maximum microversion. .. include:: extensions.inc .. include:: os-networks.inc .. include:: os-volumes.inc .. include:: images.inc .. include:: os-baremetal-nodes.inc .. include:: os-tenant-network.inc .. include:: os-floating-ip-pools.inc .. include:: os-floating-ips.inc .. include:: os-security-groups.inc .. include:: os-security-group-rules.inc .. include:: os-hosts.inc ============= Obsolete APIs ============= This section contains the reference for APIs that were part of the OpenStack Compute API in the past, but no longer exist. .. include:: os-certificates.inc .. include:: os-cloudpipe.inc .. include:: os-fping.inc .. include:: os-virtual-interfaces.inc .. include:: os-fixed-ips.inc .. include:: os-floating-ips-bulk.inc .. include:: os-floating-ip-dns.inc .. include:: os-cells.inc .. include:: os-consoles.inc .. include:: os-security-group-default-rules.inc ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/ips.inc0000664000175000017500000000346200000000000016704 0ustar00zuulzuul00000000000000.. -*- rst -*- ============================ Servers IPs (servers, ips) ============================ Lists the IP addresses for an instance and shows details for an IP address. List Ips ======== .. rest_method:: GET /servers/{server_id}/ips Lists IP addresses that are assigned to an instance. 
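A minimal client-side sketch of this call is shown below; the endpoint, token and server ID are placeholders for illustration only, and the printed fields follow the response parameters documented further down:

.. code:: python

    import requests

    COMPUTE = 'https://servers.api.openstack.org/v2.1'   # hypothetical endpoint
    TOKEN = 'eaaafd18-0fed-4b3a-81b4-663c99ec1cbb'       # hypothetical token
    SERVER = '9168b536-cd40-4630-b43f-b259807c6e87'      # hypothetical server ID

    resp = requests.get('{}/servers/{}/ips'.format(COMPUTE, SERVER),
                        headers={'X-Auth-Token': TOKEN})
    # The body maps each network label to a list of address objects.
    for label, addresses in resp.json()['addresses'].items():
        for address in addresses:
            print(label, address['version'], address['addr'])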
Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path Response -------- .. rest_parameters:: parameters.yaml - addresses: addresses_obj - network_label: network_label_body - addr: ip_address - version: version_ip **Example List Ips** .. literalinclude:: ../../doc/api_samples/server-ips/server-ips-resp.json :language: javascript Show Ip Details =============== .. rest_method:: GET /servers/{server_id}/ips/{network_label} Shows IP addresses details for a network label of a server instance. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - network_label: network_label Response -------- .. rest_parameters:: parameters.yaml - network_label: network_label_body - addr: ip_address - version: version_ip **Example Show Ip Details** .. literalinclude:: ../../doc/api_samples/server-ips/server-ips-network-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/limits.inc0000664000175000017500000000274600000000000017416 0ustar00zuulzuul00000000000000.. -*- rst -*- ================= Limits (limits) ================= Shows rate and absolute limits for the project. Show Rate And Absolute Limits ============================= .. rest_method:: GET /limits Shows rate and absolute limits for the project. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - reserved: reserved_query - tenant_id: tenant_id_query Response -------- .. rest_parameters:: parameters.yaml - limits: limits - absolute: limits_absolutes - maxServerGroupMembers: server_group_members - maxServerGroups: server_groups - maxServerMeta: metadata_items - maxTotalCores: cores - maxTotalInstances: instances - maxTotalKeypairs: key_pairs - maxTotalRAMSize: ram - totalCoresUsed: total_cores_used - totalInstancesUsed: total_instances_used - totalRAMUsed: total_ram_used - totalServerGroupsUsed: total_server_groups_used - maxSecurityGroupRules: security_group_rules_quota - maxSecurityGroups: security_groups_quota - maxTotalFloatingIps: floating_ips - totalFloatingIpsUsed: total_floatingips_used - totalSecurityGroupsUsed: total_security_groups_used - maxImageMeta: image_metadata_items - maxPersonality: injected_files - maxPersonalitySize: injected_file_content_bytes - rate: limits_rates **Example Show Rate And Absolute Limits: JSON response** .. literalinclude:: ../../doc/api_samples/limits/limit-get-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/metadata.inc0000664000175000017500000001371400000000000017672 0ustar00zuulzuul00000000000000.. 
-*- rst -*- ===================================== Server metadata (servers, metadata) ===================================== Lists metadata, creates or replaces one or more metadata items, and updates one or more metadata items for a server. Shows details for, creates or replaces, and updates a metadata item, by key, for a server. List All Metadata ================= .. rest_method:: GET /servers/{server_id}/metadata Lists all metadata for a server. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path Response -------- .. rest_parameters:: parameters.yaml - metadata: metadata_object **Example List All Metadata** .. literalinclude:: ../../doc/api_samples/server-metadata/server-metadata-all-resp.json :language: javascript Create or Update Metadata Items =============================== .. rest_method:: POST /servers/{server_id}/metadata Create or update one or more metadata items for a server. Creates any metadata items that do not already exist in the server, replaces existing metadata items that match keys. Does not modify items that are not in the request. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - metadata: metadata_object **Example Update Metadata Items** .. literalinclude:: ../../doc/api_samples/server-metadata/server-metadata-all-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - metadata: metadata_object **Example Update Metadata Items** .. literalinclude:: ../../doc/api_samples/server-metadata/server-metadata-all-resp.json :language: javascript Replace Metadata Items ====================== .. rest_method:: PUT /servers/{server_id}/metadata Replaces one or more metadata items for a server. Creates any metadata items that do not already exist in the server. Removes and completely replaces any metadata items that already exist in the server with the metadata items in the request. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - metadata: metadata_object **Example Create Or Replace Metadata Items** .. literalinclude:: ../../doc/api_samples/server-metadata/server-metadata-all-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - metadata: metadata_object **Example Create Or Replace Metadata Items** .. literalinclude:: ../../doc/api_samples/server-metadata/server-metadata-all-resp.json :language: javascript Show Metadata Item Details ========================== .. rest_method:: GET /servers/{server_id}/metadata/{key} Shows details for a metadata item, by key, for a server.
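A minimal sketch of this lookup follows; the endpoint, token, server ID and the ``foo`` key are placeholders only, and the response shape matches the ``meta`` object documented below:

.. code:: python

    import requests

    COMPUTE = 'https://servers.api.openstack.org/v2.1'   # hypothetical endpoint
    TOKEN = 'eaaafd18-0fed-4b3a-81b4-663c99ec1cbb'       # hypothetical token
    SERVER = '9168b536-cd40-4630-b43f-b259807c6e87'      # hypothetical server ID

    # Look up a single metadata item by its key.
    resp = requests.get('{}/servers/{}/metadata/foo'.format(COMPUTE, SERVER),
                        headers={'X-Auth-Token': TOKEN})
    # A successful response wraps the item in a "meta" object,
    # e.g. {"meta": {"foo": "bar"}}.
    print(resp.json()['meta'])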
Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - key: key Response -------- .. rest_parameters:: parameters.yaml - meta: metadata_object **Example Show Metadata Item Details** .. literalinclude:: ../../doc/api_samples/server-metadata/server-metadata-resp.json :language: javascript Create Or Update Metadata Item ============================== .. rest_method:: PUT /servers/{server_id}/metadata/{key} Creates or replaces a metadata item, by key, for a server. Creates a metadata item that does not already exist in the server. Replaces existing metadata items that match keys with the metadata item in the request. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - key: key **Example Create Or Update Metadata Item** .. literalinclude:: ../../doc/api_samples/server-metadata/server-metadata-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - meta: metadata_object **Example Create Or Update Metadata Item** .. literalinclude:: ../../doc/api_samples/server-metadata/server-metadata-resp.json :language: javascript Delete Metadata Item ==================== .. rest_method:: DELETE /servers/{server_id}/metadata/{key} Deletes a metadata item, by key, from a server. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 204 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - key: key Response -------- If successful, this method does not return content in the response body. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/api-ref/source/os-agents.inc0000664000175000017500000000637500000000000020017 0ustar00zuulzuul00000000000000.. -*- rst -* ========================== Guest agents (os-agents) ========================== Creates, lists, updates, and deletes guest agent builds. Use guest agents to access files on the disk, configure networking, or run other applications or scripts in the guest while the agent is running. This hypervisor-specific extension is currently only for the Xen driver. Use of guest agents is possible only if the underlying service provider uses the Xen driver. List Agent Builds ================= .. rest_method:: GET /os-agents Lists agent builds. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - hypervisor: hypervisor_query Response -------- .. 
rest_parameters:: parameters.yaml - agents: agents - agent_id: agent_id - architecture: architecture - hypervisor: hypervisor_type - md5hash: md5hash - os: os - url: url - version: version **Example List Agent Builds: JSON response** .. literalinclude:: ../../doc/api_samples/os-agents/agents-get-resp.json :language: javascript Create Agent Build ================== .. rest_method:: POST /os-agents Creates an agent build. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - agent: agent - hypervisor: hypervisor_type - os: os - architecture: architecture - version: version - md5hash: md5hash - url: url **Example Create Agent Build: JSON request** .. literalinclude:: ../../doc/api_samples/os-agents/agent-post-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - agent: agent - agent_id: agent_id - architecture: architecture - hypervisor: hypervisor_type - md5hash: md5hash - os: os - url: url - version: version **Example Create Agent Build: JSON response** .. literalinclude:: ../../doc/api_samples/os-agents/agent-post-resp.json :language: javascript Update Agent Build ================== .. rest_method:: PUT /os-agents/{agent_build_id} Updates an agent build. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - agent_build_id: agent_build_id - para: para - url: url - md5hash: md5hash - version: version **Example Update Agent Build: JSON request** .. literalinclude:: ../../doc/api_samples/os-agents/agent-update-put-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - agent: agent - agent_id: agent_id_str - md5hash: md5hash - url: url - version: version **Example Update Agent Build: JSON response** .. literalinclude:: ../../doc/api_samples/os-agents/agent-update-put-resp.json :language: javascript Delete Agent Build ================== .. rest_method:: DELETE /os-agents/{agent_build_id} Deletes an existing agent build. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - agent_build_id: agent_build_id Response -------- There is no body content for the response of a successful DELETE query ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-aggregates.inc0000664000175000017500000002226600000000000020644 0ustar00zuulzuul00000000000000.. -*- rst -*- ================================ Host aggregates (os-aggregates) ================================ Creates and manages host aggregates. An aggregate assigns metadata to groups of compute nodes. Policy defaults enable only users with the administrative role to perform operations with aggregates. Cloud providers can change these permissions through `policy file configuration `__. List Aggregates =============== .. rest_method:: GET /os-aggregates Lists all aggregates. Includes the ID, name, and availability zone for each aggregate. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Response -------- .. 
rest_parameters:: parameters.yaml - aggregates: aggregates - availability_zone: aggregate_az - created_at: created - deleted_at: deleted_at - deleted: deleted - hosts: aggregate_host_list - id: aggregate_id_body - metadata: aggregate_metadata_response - name: aggregate_name - updated_at: updated_consider_null - uuid: aggregate_uuid **Example List Aggregates (v2.41): JSON response** .. literalinclude:: ../../doc/api_samples/os-aggregates/v2.41/aggregates-list-get-resp.json :language: javascript Create Aggregate ================ .. rest_method:: POST /os-aggregates Creates an aggregate. If an optional availability_zone is specified, the aggregate is created as an availability zone and the availability zone is visible to normal users. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - aggregate: aggregate - name: aggregate_name - availability_zone: aggregate_az_optional_create **Example Create Aggregate: JSON request** .. literalinclude:: ../../doc/api_samples/os-aggregates/aggregate-post-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - aggregate: aggregate - availability_zone: aggregate_az - created_at: created - deleted_at: deleted_at - deleted: deleted - id: aggregate_id_body - name: aggregate_name - updated_at: updated_consider_null - uuid: aggregate_uuid **Example Create Aggregate (v2.41): JSON response** .. literalinclude:: ../../doc/api_samples/os-aggregates/v2.41/aggregate-post-resp.json :language: javascript Show Aggregate Details ====================== .. rest_method:: GET /os-aggregates/{aggregate_id} Shows details for an aggregate. Details include hosts and metadata. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - aggregate_id: aggregate_id Response -------- .. rest_parameters:: parameters.yaml - aggregate: aggregate - availability_zone: aggregate_az - created_at: created - deleted_at: deleted_at - deleted: deleted - hosts: hosts - id: aggregate_id_body - metadata: aggregate_metadata_response - name: aggregate_name - updated_at: updated_consider_null - uuid: aggregate_uuid **Example Show Aggregate Details (v2.41): JSON response** .. literalinclude:: ../../doc/api_samples/os-aggregates/v2.41/aggregates-get-resp.json :language: javascript Update Aggregate ================ .. rest_method:: PUT /os-aggregates/{aggregate_id} Updates either or both the name and availability zone for an aggregate. If the aggregate to be updated has hosts that are already in the given availability zone, the request will fail with a 400 error. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - aggregate_id: aggregate_id - aggregate: aggregate - name: aggregate_name_optional - availability_zone: aggregate_az_optional_update **Example Update Aggregate: JSON request** .. literalinclude:: ../../doc/api_samples/os-aggregates/aggregate-update-post-req.json :language: javascript Response -------- ..
rest_parameters:: parameters.yaml - aggregate: aggregate - availability_zone: aggregate_az - created_at: created - deleted_at: deleted_at - deleted: deleted - hosts: hosts - id: aggregate_id_body - metadata: aggregate_metadata_response - name: aggregate_name - updated_at: updated_consider_null - uuid: aggregate_uuid **Example Update Aggregate (v2.41): JSON response** .. literalinclude:: ../../doc/api_samples/os-aggregates/v2.41/aggregate-update-post-resp.json :language: javascript Delete Aggregate ================ .. rest_method:: DELETE /os-aggregates/{aggregate_id} Deletes an aggregate. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - aggregate_id: aggregate_id Response -------- There is no body content for the response of a successful DELETE action. Add Host ======== .. rest_method:: POST /os-aggregates/{aggregate_id}/action Adds a host to an aggregate. Specify the ``add_host`` action and host name in the request body. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - aggregate_id: aggregate_id - add_host: aggregate_add_host - host: host_name_body **Example Add Host: JSON request** .. literalinclude:: ../../doc/api_samples/os-aggregates/aggregate-add-host-post-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - aggregate: aggregate - availability_zone: aggregate_az - created_at: created - deleted_at: deleted_at - deleted: deleted - hosts: hosts - id: aggregate_id_body - metadata: aggregate_metadata_response - name: aggregate_name - updated_at: updated_consider_null - uuid: aggregate_uuid **Example Add Host (v2.41): JSON response** .. literalinclude:: ../../doc/api_samples/os-aggregates/v2.41/aggregates-add-host-post-resp.json :language: javascript Remove Host =========== .. rest_method:: POST /os-aggregates/{aggregate_id}/action Removes a host from an aggregate. Specify the ``remove_host`` action and host name in the request body. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - aggregate_id: aggregate_id - remove_host: aggregate_remove_host - host: host_name_body **Example Remove Host: JSON request** .. literalinclude:: ../../doc/api_samples/os-aggregates/aggregate-remove-host-post-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - aggregate: aggregate - availability_zone: aggregate_az - created_at: created - deleted_at: deleted_at - deleted: deleted - hosts: hosts - id: aggregate_id_body - metadata: aggregate_metadata_response - name: aggregate_name - updated_at: updated_consider_null - uuid: aggregate_uuid **Example Remove Host (v2.41): JSON response** .. literalinclude:: ../../doc/api_samples/os-aggregates/v2.41/aggregates-remove-host-post-resp.json :language: javascript Create Or Update Aggregate Metadata =================================== .. rest_method:: POST /os-aggregates/{aggregate_id}/action Creates or replaces metadata for an aggregate. Specify the ``set_metadata`` action and metadata info in the request body. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. 
rest_parameters:: parameters.yaml - aggregate_id: aggregate_id - set_metadata: set_metadata - metadata: aggregate_metadata_request **Example Create Or Update Aggregate Metadata: JSON request** .. literalinclude:: ../../doc/api_samples/os-aggregates/aggregate-metadata-post-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - aggregate: aggregate - availability_zone: aggregate_az - created_at: created - deleted_at: deleted_at - deleted: deleted - hosts: hosts - id: aggregate_id_body - metadata: aggregate_metadata_response - name: aggregate_name - updated_at: updated_consider_null - uuid: aggregate_uuid **Example Create Or Update Aggregate Metadata (v2.41): JSON response** .. literalinclude:: ../../doc/api_samples/os-aggregates/v2.41/aggregates-metadata-post-resp.json :language: javascript Request Image Pre-caching for Aggregate ======================================= .. rest_method:: POST /os-aggregates/{aggregate_id}/images Requests that a set of images be pre-cached on compute nodes within the referenced aggregate. This API is available starting with microversion 2.81. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - aggregate_id: aggregate_id - cache: cache - cache.id: image_id_body **Example Request Image pre-caching for Aggregate (v2.81): JSON request** .. literalinclude:: ../../doc/api_samples/os-aggregates/v2.81/aggregate-images-post-req.json :language: javascript Response -------- The response body is always empty. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-assisted-volume-snapshots.inc0000664000175000017500000000473200000000000023675 0ustar00zuulzuul00000000000000.. -*- rst -*- ========================================================== Assisted volume snapshots (os-assisted-volume-snapshots) ========================================================== Creates and deletes snapshots through an emulator/hypervisor. Only qcow2 file format is supported. This API is only implemented by the libvirt compute driver. An internal snapshot that lacks storage such as NFS can use an emulator/hypervisor to add the snapshot feature. This is used to enable snapshot of volumes on backends such as NFS by storing data as qcow2 files on these volumes. This API is only ever called by Cinder, where it is used to create a snapshot for drivers that extend the remotefs Cinder driver. Create Assisted Volume Snapshots ================================ .. rest_method:: POST /os-assisted-volume-snapshots Creates an assisted volume snapshot. Normal response codes: 200 Error response codes: badRequest(400),unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - snapshot: snapshot - volume_id: volume_id - create_info: create_info - create_info.snapshot_id: snapshot_id - create_info.type: type-os-assisted-volume-snapshot - create_info.new_file: new_file - create_info.id: create_info_id **Example Create Assisted Volume Snapshots: JSON request** .. literalinclude:: ../../doc/api_samples/os-assisted-volume-snapshots/snapshot-create-assisted-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - snapshot: snapshot - id: create_info_id_resp - volumeId: volume_id **Example Create Assisted Volume Snapshots: JSON response** .. 
literalinclude:: ../../doc/api_samples/os-assisted-volume-snapshots/snapshot-create-assisted-resp.json :language: javascript Delete Assisted Volume Snapshot =============================== .. rest_method:: DELETE /os-assisted-volume-snapshots/{snapshot_id} Deletes an assisted volume snapshot. To make this request, add the ``delete_info`` query parameter to the URI, as follows: DELETE /os-assisted-volume-snapshots/421752a6-acf6-4b2d-bc7a-119f9148cd8c?delete_info='{"volume_id": "521752a6-acf6-4b2d-bc7a-119f9148cd8c"}' Normal response codes: 204 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - snapshot_id: snapshot_id_path - delete_info: delete_info Response -------- There is no body content for the response of a successful DELETE query ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-availability-zone.inc0000664000175000017500000000377200000000000022157 0ustar00zuulzuul00000000000000.. -*- rst -*- .. _os-availability-zone: =========================================== Availability zones (os-availability-zone) =========================================== Lists and gets detailed availability zone information. An availability zone is created or updated by setting the availability_zone parameter in the ``create``, ``update``, or ``create or update`` methods of the Host Aggregates API. See `Host Aggregates `_ for more details. Get Availability Zone Information ================================= .. rest_method:: GET /os-availability-zone Lists availability zone information. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Response -------- .. rest_parameters:: parameters.yaml - availabilityZoneInfo: availability_zone_info - hosts: hosts.availability_zone_none - zoneName: OS-EXT-AZ:availability_zone - zoneState: availability_zone_state - available: available | **Example Get availability zone information** .. literalinclude:: ../../doc/api_samples/os-availability-zone/availability-zone-list-resp.json :language: javascript Get Detailed Availability Zone Information ========================================== .. rest_method:: GET /os-availability-zone/detail Gets detailed availability zone information. Policy defaults enable only users with the administrative role to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Response -------- .. rest_parameters:: parameters.yaml - availabilityZoneInfo: availability_zone_info - hosts: hosts.availability_zone - zoneName: OS-EXT-AZ:availability_zone - zoneState: availability_zone_state - available: available | **Example Get detailed availability zone information** .. literalinclude:: ../../doc/api_samples/os-availability-zone/availability-zone-detail-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-baremetal-nodes.inc0000664000175000017500000000414500000000000021571 0ustar00zuulzuul00000000000000.. -*- rst -*- =================================================== Bare metal nodes (os-baremetal-nodes) (DEPRECATED) =================================================== .. warning:: These APIs are proxy calls to the Ironic service. 
They exist for legacy compatibility, but no new applications should use them. Nova has deprecated all the proxy APIs and users should use the native APIs instead. These will fail with a 404 starting from microversion 2.36. See: `Relevant Bare metal APIs `__. Bare metal nodes. List Bare Metal Nodes ===================== .. rest_method:: GET /os-baremetal-nodes Lists the bare metal nodes known by the compute environment. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), notImplemented(501) Response -------- .. rest_parameters:: parameters.yaml - nodes: baremetal_nodes - id: baremetal_id - interfaces: baremetal_interfaces - host: baremetal_host - task_state: baremetal_taskstate - cpus: baremetal_cpus - memory_mb: baremetal_mem - disk_gb: baremetal_disk **Example List Bare Metal Nodes** .. literalinclude:: ../../doc/api_samples/os-baremetal-nodes/baremetal-node-list-resp.json :language: javascript Show Bare Metal Node Details ============================ .. rest_method:: GET /os-baremetal-nodes/{node_id} Shows details for a bare metal node. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - node_id: node_id Response -------- .. rest_parameters:: parameters.yaml - node: baremetal_node - id: baremetal_id - instance_uuid: baremetal_instance_uuid - interfaces: baremetal_interfaces - host: baremetal_host - task_state: baremetal_taskstate - cpus: baremetal_cpus - memory_mb: baremetal_mem - disk_gb: baremetal_disk **Example Show Bare Metal Node Details** .. literalinclude:: ../../doc/api_samples/os-baremetal-nodes/baremetal-node-get-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/api-ref/source/os-cells.inc0000664000175000017500000000705100000000000017630 0ustar00zuulzuul00000000000000.. -*- rst -*- ============================== Cells (os-cells, capacities) ============================== Adds neighbor cells, lists neighbor cells, and shows the capabilities of the local cell. By default, only administrators can manage cells. .. warning:: These APIs refer to a Cells v1 deployment which was deprecated in the 16.0.0 Pike release. These are not used with Cells v2 which is required beginning with the 15.0.0 Ocata release where all Nova deployments consist of at least one Cells v2 cell. They were removed in the 20.0.0 Train release. List Cells ========== .. rest_method:: GET /os-cells Lists cells. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), gone(410), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - tenant_id: tenant_id - limit: limit_simple - offset: offset_simple Response -------- **Example List Cells: JSON response** .. literalinclude:: ../../doc/api_samples/os-cells/cells-list-resp.json :language: javascript Create Cell =========== .. rest_method:: POST /os-cells Create a new cell. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), gone(410), notImplemented(501) Capacities ========== .. rest_method:: GET /os-cells/capacities Retrieve capacities. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), gone(410), notImplemented(501) List Cells With Details ======================= .. rest_method:: GET /os-cells/detail Lists cells with details of capabilities. 
Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), gone(410), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - limit: limit_simple - offset: offset_simple Info For This Cell ================== .. rest_method:: GET /os-cells/info Retrieve info about the current cell. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), gone(410), notImplemented(501) Show Cell Data ============== .. rest_method:: GET /os-cells/{cell_id} Shows data for a cell. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), gone(410), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - cell_id: cell_id Response -------- **Example Show Cell Data: JSON response** .. literalinclude:: ../../doc/api_samples/os-cells/cells-get-resp.json :language: javascript Update a Cell ============= .. rest_method:: PUT /os-cells/{cell_id} Update an existing cell. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), gone(410), notImplemented(501) Delete a Cell ============= .. rest_method:: DELETE /os-cells/{cell_id} Remove a cell. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), gone(410), notImplemented(501) Show Cell Capacities ==================== .. rest_method:: GET /os-cells/{cell_id}/capacities Shows capacities for a cell. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), gone(410), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - cell_id: cell_id Response -------- **Example Show Cell Capacities: JSON response** .. literalinclude:: ../../doc/api_samples/os-cells/cells-capacities-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-certificates.inc0000664000175000017500000000271400000000000021174 0ustar00zuulzuul00000000000000.. -*- rst -*- ==================================== Root certificates (os-certificates) ==================================== Creates and shows details for a root certificate. .. warning:: This API existed solely because of the need to build euca bundles when Nova had an in tree EC2 API. It no longer interacts with any parts of the system besides its own certificate daemon. It was removed in the 16.0.0 Pike release. Create Root Certificate ======================= .. rest_method:: POST /os-certificates Creates a root certificate. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Response -------- .. rest_parameters:: parameters.yaml - certificate: certificate - data: data - private_key: private_key | **Example Create Root Certificate** .. literalinclude:: ../../doc/api_samples/os-certificates/certificate-create-resp.json :language: javascript Show Root Certificate Details ============================= .. rest_method:: GET /os-certificates/root Shows details for a root certificate. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), notImplemented(501) Response -------- .. rest_parameters:: parameters.yaml - certificate: certificate - data: data - private_key: private_key | **Example Show Root Certificate Details** .. 
literalinclude:: ../../doc/api_samples/os-certificates/certificate-get-root-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-cloudpipe.inc0000664000175000017500000000451500000000000020514 0ustar00zuulzuul00000000000000.. -*- rst -*- ========================= Cloudpipe (os-cloudpipe) ========================= .. warning:: This API only works with ``nova-network`` which is deprecated in favor of Neutron. It should be avoided in any new applications. It was removed in the 16.0.0 Pike release. Manages virtual VPNs for projects. List Cloudpipes =============== .. rest_method:: GET /os-cloudpipe Lists cloudpipes. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound (404) Response -------- .. rest_parameters:: parameters.yaml - cloudpipes: cloudpipes - created_at: created - instance_id: instance_id_cloudpipe - internal_ip: fixed_ip - project_id: project_id_server - public_ip: vpn_public_ip_resp - public_port: vpn_public_port_resp - state: vpn_state **Example List Cloudpipes: JSON response** .. literalinclude:: ../../doc/api_samples/os-cloudpipe/cloud-pipe-get-resp.json :language: javascript Create Cloudpipe ================ .. rest_method:: POST /os-cloudpipe Creates a cloudpipe. Normal response codes: 200 Error response codes: badRequest(400),unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - cloudpipe: cloudpipe - project_id: project_id **Example Create Cloudpipe: JSON request** .. literalinclude:: ../../doc/api_samples/os-cloudpipe/cloud-pipe-create-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - instance_id: instance_id_cloudpipe **Example Create Cloudpipe: JSON response** .. literalinclude:: ../../doc/api_samples/os-cloudpipe/cloud-pipe-create-resp.json :language: javascript Update Cloudpipe ================ .. rest_method:: PUT /os-cloudpipe/configure-project Updates the virtual private network (VPN) IP address and port for a cloudpipe instance. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - configure_project: configure_project_cloudpipe - vpn_ip: vpn_public_ip - vpn_port: vpn_public_port **Example Update Cloudpipe: JSON request** .. literalinclude:: ../../doc/api_samples/os-cloudpipe/cloud-pipe-update-req.json :language: javascript Response -------- There is no body content for the response of a successful PUT request ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-consoles.inc0000664000175000017500000000474200000000000020357 0ustar00zuulzuul00000000000000.. -*- rst -*- ================================================== XenServer VNC Proxy (XVP) consoles (os-consoles) ================================================== Manages server XVP consoles. .. warning:: These APIs are only applicable when using the XenServer virt driver. They were removed in the 21.0.0 (Ussuri) release. Lists Consoles ============== .. rest_method:: GET /servers/{server_id}/consoles Lists all consoles for a server. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), gone(410) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path Response -------- .. 
rest_parameters:: parameters.yaml - consoles: consoles - console: console - console_type: console_type - id: console_id_in_body | **Example List Consoles** .. literalinclude:: ../../doc/api_samples/consoles/consoles-list-get-resp.json :language: javascript Create Console ============== .. rest_method:: POST /servers/{server_id}/consoles Creates a console for a server. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), gone(410) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path Response -------- If successful, this method does not return a response body. Show Console Details ==================== .. rest_method:: GET /servers/{server_id}/consoles/{console_id} Shows console details for a server. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), gone(410) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - console_id: console_id Response -------- .. rest_parameters:: parameters.yaml - console: console - console_type: console_type - host: console_host - id: console_id_in_body - instance_name: instance_name - password: console_password - port: port_number | **Example Show Console Details** .. literalinclude:: ../../doc/api_samples/consoles/consoles-get-resp.json :language: javascript Delete Console ============== .. rest_method:: DELETE /servers/{server_id}/consoles/{console_id} Deletes a console for a server. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), gone(410) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - console_id: console_id Response -------- If successful, this method does not return a response body. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-fixed-ips.inc0000664000175000017500000000363000000000000020415 0ustar00zuulzuul00000000000000.. -*- rst -*- ========================= Fixed IPs (os-fixed-ips) ========================= .. warning:: These APIs only work with **nova-network** which is deprecated. These will fail with a 404 starting from microversion 2.36. They were removed in the 18.0.0 Rocky release. Shows data for a fixed IP, such as host name, CIDR, and address. Also, reserves and releases a fixed IP address. Show Fixed Ip Details ===================== .. rest_method:: GET /os-fixed-ips/{fixed_ip} Shows details for a fixed IP address. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), gone(410) Request ------- .. rest_parameters:: parameters.yaml - fixed_ip: fixed_ip_path Response -------- .. rest_parameters:: parameters.yaml - fixed_ip: fixed_ip_obj - address: ip_address - cidr: cidr - host: fixed_ip_host - hostname: fixed_ip_hostname - reserved: reserved_fixedip **Example Show Fixed Ip Details: JSON response** .. literalinclude:: ../../doc/api_samples/os-fixed-ips/fixedips-get-resp.json :language: javascript Reserve Or Release A Fixed Ip ============================= .. rest_method:: POST /os-fixed-ips/{fixed_ip}/action Reserves or releases a fixed IP. To reserve a fixed IP address, specify ``reserve`` in the request body. To release a fixed IP address, specify ``unreserve`` in the request body. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), gone(410) Request ------- .. 
rest_parameters:: parameters.yaml - fixed_ip: fixed_ip_path - reserve: action_reserve - unreserve: action_unreserve **Example Reserve Or Release A Fixed Ip: JSON request** .. literalinclude:: ../../doc/api_samples/os-fixed-ips/fixedip-post-req.json :language: javascript Response -------- There is no body content for the response of a successful POST operation. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-flavor-access.inc0000664000175000017500000000702700000000000021261 0ustar00zuulzuul00000000000000.. -*- rst -*- ============================================ Flavors access (flavors, os-flavor-access) ============================================ Lists tenants who have access to a private flavor and adds private flavor access to and removes private flavor access from tenants. By default, only administrators can manage private flavor access. A private flavor has ``is_public`` set to ``false`` while a public flavor has ``is_public`` set to ``true``. List Flavor Access Information For Given Flavor =============================================== .. rest_method:: GET /flavors/{flavor_id}/os-flavor-access Lists flavor access information. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - flavor_id: flavor_id Response -------- .. rest_parameters:: parameters.yaml - flavor_access: flavor_access - tenant_id: tenant_id_body - flavor_id: flavor_id_body **Example List Flavor Access Information For Given Flavor: JSON response** .. literalinclude:: ../../doc/api_samples/flavor-access/flavor-access-list-resp.json :language: javascript Add Flavor Access To Tenant (addTenantAccess Action) ==================================================== .. rest_method:: POST /flavors/{flavor_id}/action Adds flavor access to a tenant and flavor. Specify the ``addTenantAccess`` action and the ``tenant`` in the request body. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) - 400 - BadRequest - if the `tenant` is not found in your OpenStack deployment, a 400 is returned to prevent typos on the API call. Request ------- .. rest_parameters:: parameters.yaml - flavor_id: flavor_id - addTenantAccess: addTenantAccess - tenant: tenant_id_body **Example Add Flavor Access To Tenant: JSON request** .. literalinclude:: ../../doc/api_samples/flavor-access/flavor-access-add-tenant-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - flavor_access: flavor_access - tenant_id: tenant_id_body - flavor_id: flavor_id_body **Example Add Flavor Access To Tenant: JSON response** .. literalinclude:: ../../doc/api_samples/flavor-access/flavor-access-add-tenant-resp.json :language: javascript Remove Flavor Access From Tenant (removeTenantAccess Action) ============================================================ .. rest_method:: POST /flavors/{flavor_id}/action Removes flavor access from a tenant and flavor. Specify the ``removeTenantAccess`` action and the ``tenant`` in the request body. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) - 400 - BadRequest - if the `tenant` is not found in your OpenStack deployment, a 400 is returned to prevent typos on the API call. Request ------- .. 
rest_parameters:: parameters.yaml - flavor_id: flavor_id - removeTenantAccess: removeTenantAccess - tenant: tenant_id_body **Example Remove Flavor Access From Tenant: JSON request** .. literalinclude:: ../../doc/api_samples/flavor-access/flavor-access-remove-tenant-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - flavor_access: flavor_access - tenant_id: tenant_id_body - flavor_id: flavor_id_body **Example Remove Flavor Access From Tenant: JSON response** .. literalinclude:: ../../doc/api_samples/flavor-access/flavor-access-remove-tenant-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-flavor-extra-specs.inc0000664000175000017500000001041200000000000022246 0ustar00zuulzuul00000000000000.. -*- rst -*- ====================================================== Flavors extra-specs (flavors, os-flavor-extra-specs) ====================================================== Lists, creates, deletes, and updates the extra-specs or keys for a flavor. Refer to `Compute Flavors `__ for available built-in extra specs. List Extra Specs For A Flavor ============================= .. rest_method:: GET /flavors/{flavor_id}/os-extra_specs Lists all extra specs for a flavor, by ID. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - flavor_id: flavor_id Response -------- .. rest_parameters:: parameters.yaml - extra_specs: extra_specs - key: flavor_extra_spec_key2 - value: flavor_extra_spec_value **Example List Extra Specs For A Flavor: JSON response** .. literalinclude:: ../../doc/api_samples/flavor-extra-specs/flavor-extra-specs-list-resp.json :language: javascript Create Extra Specs For A Flavor =============================== .. rest_method:: POST /flavors/{flavor_id}/os-extra_specs Creates extra specs for a flavor, by ID. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - flavor_id: flavor_id - extra_specs: extra_specs - key: flavor_extra_spec_key2 - value: flavor_extra_spec_value **Example Create Extra Specs For A Flavor: JSON request** .. literalinclude:: ../../doc/api_samples/flavor-extra-specs/flavor-extra-specs-create-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - extra_specs: extra_specs - key: flavor_extra_spec_key2 - value: flavor_extra_spec_value **Example Create Extra Specs For A Flavor: JSON response** .. literalinclude:: ../../doc/api_samples/flavor-extra-specs/flavor-extra-specs-create-resp.json :language: javascript Show An Extra Spec For A Flavor =============================== .. rest_method:: GET /flavors/{flavor_id}/os-extra_specs/{flavor_extra_spec_key} Shows an extra spec, by key, for a flavor, by ID. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - flavor_id: flavor_id - flavor_extra_spec_key: flavor_extra_spec_key Response -------- .. rest_parameters:: parameters.yaml - key: flavor_extra_spec_key2 - value: flavor_extra_spec_value **Example Show An Extra Spec For A Flavor: JSON response** .. 
literalinclude:: ../../doc/api_samples/flavor-extra-specs/flavor-extra-specs-get-resp.json :language: javascript Update An Extra Spec For A Flavor ================================= .. rest_method:: PUT /flavors/{flavor_id}/os-extra_specs/{flavor_extra_spec_key} Updates an extra spec, by key, for a flavor, by ID. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403) itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - flavor_id: flavor_id - flavor_extra_spec_key: flavor_extra_spec_key - key: flavor_extra_spec_key2 - value: flavor_extra_spec_value **Example Update An Extra Spec For A Flavor: JSON request** .. literalinclude:: ../../doc/api_samples/flavor-extra-specs/flavor-extra-specs-update-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - key: flavor_extra_spec_key2 - value: flavor_extra_spec_value **Example Update An Extra Spec For A Flavor: JSON response** .. literalinclude:: ../../doc/api_samples/flavor-extra-specs/flavor-extra-specs-update-resp.json :language: javascript Delete An Extra Spec For A Flavor ================================= .. rest_method:: DELETE /flavors/{flavor_id}/os-extra_specs/{flavor_extra_spec_key} Deletes an extra spec, by key, for a flavor, by ID. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - flavor_id: flavor_id - flavor_extra_spec_key: flavor_extra_spec_key Response -------- There is no body content for the response of a successful DELETE action. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-floating-ip-dns.inc0000664000175000017500000001052000000000000021514 0ustar00zuulzuul00000000000000.. -*- rst -*- .. NOTE(gmann): These APIs are deprecated so do not update this file even body, example or parameters are not complete. ============================================= Floating IP DNS records (os-floating-ip-dns) ============================================= .. warning:: Since these APIs are only implemented for **nova-network**, they are deprecated. These will fail with a 404 starting from microversion 2.36. They were removed in the 18.0.0 Rocky release. Manages DNS records associated with floating IP addresses. The API dispatches requests to a DNS driver that is selected at startup. List DNS Domains ================ .. rest_method:: GET /os-floating-ip-dns Lists registered DNS domains published by the DNS drivers. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), gone(410), notImplemented(501) Response -------- **Example List Dns Domains: JSON response** .. literalinclude:: ../../doc/api_samples/os-floating-ip-dns/floating-ip-dns-list-resp.json :language: javascript Create Or Update DNS Domain =========================== .. rest_method:: PUT /os-floating-ip-dns/{domain} Creates or updates a DNS domain. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), gone(410), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - domain: domain **Example Create Or Update Dns Domain: JSON request** .. literalinclude:: ../../doc/api_samples/os-floating-ip-dns/floating-ip-dns-create-or-update-req.json :language: javascript Response -------- **Example Create Or Update Dns Domain: JSON response** .. 
literalinclude:: ../../doc/api_samples/os-floating-ip-dns/floating-ip-dns-create-or-update-resp.json :language: javascript Delete DNS Domain ================= .. rest_method:: DELETE /os-floating-ip-dns/{domain} Deletes a DNS domain and all associated host entries. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), gone(410), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - domain: domain Response -------- List DNS Entries ================ .. rest_method:: GET /os-floating-ip-dns/{domain}/entries/{ip} Lists DNS entries for a domain and IP. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), gone(410), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - domain: domain - ip: ip Response -------- **Example List DNS Entries: JSON response** .. literalinclude:: ../../doc/api_samples/os-floating-ip-dns/floating-ip-dns-entry-list-resp.json :language: javascript Find Unique DNS Entry ===================== .. rest_method:: GET /os-floating-ip-dns/{domain}/entries/{name} Finds a unique DNS entry for a domain and name. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), gone(410), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - domain: domain - name: name Response -------- **Example Find Unique DNS Entry: JSON response** .. literalinclude:: ../../doc/api_samples/os-floating-ip-dns/floating-ip-dns-entry-get-resp.json :language: javascript Create Or Update DNS Entry ========================== .. rest_method:: PUT /os-floating-ip-dns/{domain}/entries/{name} Creates or updates a DNS entry. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), gone(410), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - domain: domain - name: name **Example Create Or Update DNS Entry: JSON request** .. literalinclude:: ../../doc/api_samples/os-floating-ip-dns/floating-ip-dns-create-or-update-entry-req.json :language: javascript Response -------- **Example Create Or Update DNS Entry: JSON response** .. literalinclude:: ../../doc/api_samples/os-floating-ip-dns/floating-ip-dns-create-or-update-entry-resp.json :language: javascript Delete DNS Entry ================ .. rest_method:: DELETE /os-floating-ip-dns/{domain}/entries/{name} Deletes a DNS entry. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), gone(410), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - domain: domain - name: name Response -------- ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-floating-ip-pools.inc0000664000175000017500000000243200000000000022067 0ustar00zuulzuul00000000000000.. -*- rst -*- ====================================================== Floating IP pools (os-floating-ip-pools) (DEPRECATED) ====================================================== .. warning:: This API is a proxy call to the Network service. Nova has deprecated all the proxy APIs and users should use the native APIs instead. This API will fail with a 404 starting from microversion 2.36. For the equivalent functionality in the Network service, one can request:: GET /networks?router:external=True&fields=name Manages groups of floating IPs. List Floating Ip Pools ====================== .. 
rest_method:: GET /os-floating-ip-pools Lists floating IP pools. Policy defaults enable only users with the administrative role or user who is authorized to operate on tenant to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Response -------- .. rest_parameters:: parameters.yaml - floating_ip_pools: floating_ip_pools - name: floating_ip_pool_name_or_id **Example List Floating Ip Pools: JSON response** .. literalinclude:: ../../doc/api_samples/os-floating-ip-pools/floatingippools-list-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-floating-ips-bulk.inc0000664000175000017500000000734000000000000022056 0ustar00zuulzuul00000000000000.. -*- rst -*- ========================================= Floating IPs bulk (os-floating-ips-bulk) ========================================= .. warning:: Since these APIs are only implemented for **nova-network**, they are deprecated. These will fail with a 404 starting from microversion 2.36. They were removed in the 18.0.0 Rocky release. Bulk-creates, deletes, and lists floating IPs. Default pool name is ``nova``. To view available pools, use the ``os-floating-ip-pools`` extension. List Floating Ips ================= .. rest_method:: GET /os-floating-ips-bulk Lists all floating IPs. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), gone(410) Response -------- .. rest_parameters:: parameters.yaml - floating_ip_info : floating_ips_list - address : floating_ip - fixed_ip : fixed_ip_address - instance_uuid : server_id - interface : virtual_interface - pool: floating_ip_pool_name - project_id : project_id_value **Example List Floating Ips: JSON response** .. literalinclude:: ../../doc/api_samples/os-floating-ips-bulk/floating-ips-bulk-list-resp.json :language: javascript Create Floating Ips =================== .. rest_method:: POST /os-floating-ips-bulk Bulk-creates floating IPs. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), conflict(409), gone(410) Request ------- .. rest_parameters:: parameters.yaml - floating_ips_bulk_create : floating_ip_bulk_object - ip_range : ip_range - interface : virtual_interface_id_optional - pool: floating_ip_pool_name_optional **Example Create Floating Ips: JSON request** .. literalinclude:: ../../doc/api_samples/os-floating-ips-bulk/floating-ips-bulk-create-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - floating_ips_bulk_create : floating_ip_bulk_object - interface : virtual_interface - ip_range : ip_range - pool: floating_ip_pool_name **Example Create Floating Ips: JSON response** .. literalinclude:: ../../doc/api_samples/os-floating-ips-bulk/floating-ips-bulk-create-resp.json :language: javascript Bulk-Delete Floating Ips ======================== .. rest_method:: PUT /os-floating-ips-bulk/delete Bulk-deletes floating IPs. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), gone(410) Request ------- .. rest_parameters:: parameters.yaml - ip_range: ip_range_delete **Example Bulk-Delete Floating Ips: JSON request** .. literalinclude:: ../../doc/api_samples/os-floating-ips-bulk/floating-ips-bulk-delete-req.json :language: javascript Response -------- .. 
rest_parameters:: parameters.yaml - floating_ips_bulk_delete : ip_range_delete **Example Bulk-Delete Floating Ips: JSON response** .. literalinclude:: ../../doc/api_samples/os-floating-ips-bulk/floating-ips-bulk-delete-resp.json :language: javascript List Floating Ips By Host ========================= .. rest_method:: GET /os-floating-ips-bulk/{host_name} Lists all floating IPs for a host. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), gone(410) Request ------- .. rest_parameters:: parameters.yaml - host_name: host_name Response -------- .. rest_parameters:: parameters.yaml - floating_ip_info : floating_ips_list - address : floating_ip - fixed_ip : fixed_ip_address - instance_uuid : server_id - interface : virtual_interface - pool: floating_ip_pool_name - project_id : project_id_value **Example List Floating Ips By Host: JSON response** .. literalinclude:: ../../doc/api_samples/os-floating-ips-bulk/floating-ips-bulk-list-by-host-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-floating-ips.inc0000664000175000017500000001322700000000000021124 0ustar00zuulzuul00000000000000.. -*- rst -*- ============================================ Floating IPs (os-floating-ips) (DEPRECATED) ============================================ .. warning:: These APIs are proxy calls to the Network service. Nova has deprecated all the proxy APIs and users should use the native APIs instead. These will fail with a 404 starting from microversion 2.36. See: `Relevant Network APIs `__. Lists floating IP addresses for a project. Also, creates (allocates) a floating IP address for a project, shows floating IP address details, and deletes (deallocates) a floating IP address from a project. The cloud administrator configures a pool of floating IP addresses in OpenStack Compute. The project quota defines the maximum number of floating IP addresses that you can allocate to the project. After you `allocate a floating IP address `__ for a project, you can: - `Add (associate) the floating IP address `__ with an instance in the project. You can associate only one floating IP address with an instance at a time. - `Remove (disassociate) the floating IP address `__ from an instance in the project. - Delete, or deallocate, a floating IP from the project, which automatically deletes any associations for that IP address. List Floating Ip Addresses ========================== .. rest_method:: GET /os-floating-ips Lists floating IP addresses associated with the tenant or account. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Response -------- .. rest_parameters:: parameters.yaml - floating_ips: floating_ips_list - fixed_ip: fixed_ip_address - id: floating_ip_id_value - instance_id: server_id - ip: floating_ip - pool: floating_ip_pool_name_or_id **Example List Floating Ip Addresses** .. literalinclude:: ../../doc/api_samples/os-floating-ips/floating-ips-list-resp.json :language: javascript Create (Allocate) Floating Ip Address ===================================== .. rest_method:: POST /os-floating-ips Creates, or allocates, a floating IP address for the current project. 
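As an illustration only, the sketch below allocates a floating IP with the ``requests`` library. The endpoint URL, token, and pool name are assumptions made for the example (the ``pool`` attribute can be omitted, as explained next), and because this is a deprecated proxy API the call returns a 404 when made with microversion 2.36 or later.

.. code-block:: python

    import requests

    # Placeholder values -- substitute a real compute endpoint and token.
    COMPUTE_ENDPOINT = 'http://controller:8774/v2.1'
    TOKEN = 'REPLACE_WITH_A_VALID_KEYSTONE_TOKEN'

    # POST /os-floating-ips allocates an address, optionally from a named pool.
    resp = requests.post(
        COMPUTE_ENDPOINT + '/os-floating-ips',
        headers={'X-Auth-Token': TOKEN},
        json={'pool': 'public'},  # assumed pool name; omit to use the default
    )
    resp.raise_for_status()
    floating_ip = resp.json()['floating_ip']
    print(floating_ip['id'], floating_ip['ip'])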
By default, the floating IP address is allocated from the public pool. If more than one floating IP address pool is available, use the ``pool`` parameter to specify from which pool to allocate the IP address. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - pool: floating_ip_pool_name_or_id **Example Create (Allocate) Floating Ip Address** .. literalinclude:: ../../doc/api_samples/os-floating-ips/floating-ips-create-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - floating_ip: floating_ip_obj - fixed_ip: fixed_ip_address - id: floating_ip_id_value - instance_id: server_id - ip: floating_ip - pool: floating_ip_pool_name_or_id **Example Create (Allocate) Floating Ip Address: JSON response** .. literalinclude:: ../../doc/api_samples/os-floating-ips/floating-ips-create-resp.json :language: javascript Show Floating Ip Address Details ================================ .. rest_method:: GET /os-floating-ips/{floating_ip_id} Shows details for a floating IP address, by ID, that is associated with the tenant or account. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - floating_ip_id: floating_ip_id Response -------- .. rest_parameters:: parameters.yaml - floating_ip: floating_ip_obj - fixed_ip: fixed_ip_address - id: floating_ip_id_value - instance_id: server_id - ip: floating_ip - pool: floating_ip_pool_name_or_id **Example Show Floating Ip Address Details: JSON response** .. literalinclude:: ../../doc/api_samples/os-floating-ips/floating-ips-get-resp.json :language: javascript Delete (Deallocate) Floating Ip Address ======================================= .. rest_method:: DELETE /os-floating-ips/{floating_ip_id} Deletes, or deallocates, a floating IP address from the current project and returns it to the pool from which it was allocated. If the IP address is still associated with a running instance, it is automatically disassociated from that instance. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - floating_ip_id: floating_ip_id Response -------- There is no body content for the response of a successful DELETE action. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-fping.inc0000664000175000017500000000460600000000000017634 0ustar00zuulzuul00000000000000.. -*- rst -*- ========================== Ping instances (os-fping) ========================== .. warning:: This API only works with ``nova-network`` which is deprecated. It should be avoided in any new applications. These will fail with a 404 starting from microversion 2.36. 
It was removed in the 18.0.0 Rocky release. Pings instances and reports which instances are alive. Ping Instances ============== .. rest_method:: GET /os-fping Runs the fping utility to ping instances and reports which instances are alive. Specify the ``all_tenants=1`` query parameter to ping instances for all tenants. For example: :: GET /os-fping?all_tenants=1 Specify the ``include`` and ``exclude`` query parameters to filter the results. For example: :: GET /os-fping?all_tenants=1&include=uuid1,uuid2&exclude=uuid3,uuid4 Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: serviceUnavailable(503), unauthorized(401), forbidden(403), itemNotFound(404), gone(410) Request ------- .. rest_parameters:: parameters.yaml - all_tenants: all_tenants - include: include - exclude: exclude Response -------- .. rest_parameters:: parameters.yaml - servers: servers - alive: alive - id: server_id - project_id: project_id | **Example Ping Instances** .. literalinclude:: ../../doc/api_samples/os-fping/fping-get-resp.json :language: javascript Ping An Instance ================ .. rest_method:: GET /os-fping/{instance_id} Runs the fping utility to ping an instance and reports whether the instance is alive. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: serviceUnavailable(503), unauthorized(401), forbidden(403), itemNotFound(404), gone(410) Request ------- .. rest_parameters:: parameters.yaml - instance_id: instance_id Response -------- .. rest_parameters:: parameters.yaml - server: server - alive: alive - id: server_id - project_id: project_id | **Example Ping An Instance** .. literalinclude:: ../../doc/api_samples/os-fping/fping-get-details-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-hosts.inc0000664000175000017500000001601500000000000017666 0ustar00zuulzuul00000000000000.. -*- rst -*- =============================== Hosts (os-hosts) (DEPRECATED) =============================== .. warning:: The ``os-hosts`` API is deprecated as of the 2.43 microversion. Requests made with microversion >= 2.43 will result in a 404 error. To list and show host details, use the :ref:`os-hypervisors` API. To enable or disable a service, use the :ref:`os-services` API. There is no replacement for the `shutdown`, `startup`, `reboot`, or `maintenance_mode` actions as those are system-level operations which should be outside of the control of the compute service. Manages physical hosts. Some virt drivers do not support all host functions. For more information, see `nova virt support matrix `__ Policy defaults enable only users with the administrative role to perform all os-hosts related operations. Cloud providers can change these permissions through the ``policy.json`` file. List Hosts ========== .. rest_method:: GET /os-hosts Lists hosts. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Response -------- .. rest_parameters:: parameters.yaml - hosts: hosts - zone: host_zone - host_name: host_name_body - service: host_service **Example List Hosts** .. 
literalinclude:: ../../doc/api_samples/os-hosts/hosts-list-resp.json :language: javascript Show Host Details ================= .. rest_method:: GET /os-hosts/{host_name} Shows details for a host. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - host_name: host_name Response -------- .. rest_parameters:: parameters.yaml - host: host_resource_array - resource: host_resource - resource.project: host_project - resource.cpu: host_cpu - resource.memory_mb: host_memory_mb - resource.disk_gb: host_disk_gb - resource.host: host_name_body **Example Show Host Details** .. literalinclude:: ../../doc/api_samples/os-hosts/host-get-resp.json :language: javascript Update Host status ================== .. rest_method:: PUT /os-hosts/{host_name} Enables, disables a host or put a host in maintenance or normal mode. .. warning:: Putting a host into maintenance mode is only implemented by the XenServer compute driver and it has been reported that it does not actually evacuate all of the guests from the host, it just sets a flag in the Xen management console, and is therefore useless. There are other APIs that allow you to do the same thing which are supported across all compute drivers, which would be disabling a service and then migrating the instances off that host. See the `Operations Guide `_ for more information on maintenance. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), NotImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - host_name: host_name - status: host_status_body_in - maintenance_mode: host_maintenance_mode_in **Example Enable Host: JSON request** .. literalinclude:: ../../doc/api_samples/os-hosts/host-put-maintenance-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - host: host_name_body - status: host_status_body - maintenance_mode: host_maintenance_mode **Example Enable Host** .. literalinclude:: ../../doc/api_samples/os-hosts/host-put-maintenance-resp.json :language: javascript Reboot Host =========== .. rest_method:: GET /os-hosts/{host_name}/reboot Reboots a host. .. warning:: This is only supported by the XenServer and Hyper-v drivers. The backing drivers do no orchestration of dealing with guests in the nova database when performing a reboot of the host. The nova-compute service for that host may be temporarily disabled by the service group health check which would take it out of scheduling decisions, and the guests would be down, but the periodic task which checks for unexpectedly stopped instances runs in the nova-compute service, which might be dead now so the nova API would show the instances as running when in fact they are actually stopped. This API is also not tested in a live running OpenStack environment. Needless to say, it is not recommended to use this API and it is deprecated as of the 2.43 microversion. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), NotImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - host_name: host_name Response -------- .. rest_parameters:: parameters.yaml - host: host_name_body - power_action: host_power_action **Example Reboot Host: JSON response** .. literalinclude:: ../../doc/api_samples/os-hosts/host-get-reboot.json :language: javascript Shut Down Host ============== .. 
rest_method:: GET /os-hosts/{host_name}/shutdown Shuts down a host. .. warning:: This is only supported by the XenServer and Hyper-v drivers. The backing drivers do no orchestration of dealing with guests in the nova database when performing a shutdown of the host. The nova-compute service for that host may be temporarily disabled by the service group health check which would take it out of scheduling decisions, and the guests would be down, but the periodic task which checks for unexpectedly stopped instances runs in the nova-compute service, which might be dead now so the nova API would show the instances as running when in fact they are actually stopped. This API is also not tested in a live running OpenStack environment. Needless to say, it is not recommended to use this API and it is deprecated as of the 2.43 microversion. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), NotImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - host_name: host_name Response -------- .. rest_parameters:: parameters.yaml - host: host_name_body - power_action: host_power_action **Example Shut Down Host** .. literalinclude:: ../../doc/api_samples/os-hosts/host-get-shutdown.json :language: javascript Start Host ========== .. rest_method:: GET /os-hosts/{host_name}/startup Starts a host. .. warning:: This is not implemented by any in-tree compute drivers and therefore will always fail with a `501 NotImplemented` error. Needless to say, it is not recommended to use this API and it is deprecated as of the 2.43 microversion. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), NotImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - host_name: host_name Response -------- .. rest_parameters:: parameters.yaml - host: host_name_body - power_action: host_power_action **Example Start Host** .. literalinclude:: ../../doc/api_samples/os-hosts/host-get-startup.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/api-ref/source/os-hypervisors.inc0000664000175000017500000002652000000000000021125 0ustar00zuulzuul00000000000000.. -*- rst -*- .. _os-hypervisors: ============================== Hypervisors (os-hypervisors) ============================== Lists all hypervisors, shows summary statistics for all hypervisors over all compute nodes, shows details for a hypervisor, shows the uptime for a hypervisor, lists all servers on hypervisors that match the given ``hypervisor_hostname_pattern`` or searches for hypervisors by the given ``hypervisor_hostname_pattern``. List Hypervisors ================ .. rest_method:: GET /os-hypervisors Lists hypervisors. Policy defaults enable only users with the administrative role to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - limit: hypervisor_limit - marker: hypervisor_marker - marker: hypervisor_marker_uuid - hypervisor_hostname_pattern: hypervisor_hostname_pattern_query - with_servers: hypervisor_with_servers_query Response -------- .. 
rest_parameters:: parameters.yaml - hypervisors: hypervisors - hypervisor_hostname: hypervisor_hostname - id: hypervisor_id_body - id: hypervisor_id_body_uuid - state: hypervisor_state - status: hypervisor_status - hypervisor_links: hypervisor_links - servers: hypervisor_servers - servers.uuid: hypervisor_servers_uuid - servers.name: hypervisor_servers_name **Example List Hypervisors (v2.33): JSON response** .. literalinclude:: ../../doc/api_samples/os-hypervisors/v2.33/hypervisors-list-resp.json :language: javascript **Example List Hypervisors With Servers (v2.53): JSON response** .. literalinclude:: ../../doc/api_samples/os-hypervisors/v2.53/hypervisors-with-servers-resp.json :language: javascript List Hypervisors Details ======================== .. rest_method:: GET /os-hypervisors/detail Lists hypervisors details. Policy defaults enable only users with the administrative role to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - limit: hypervisor_limit - marker: hypervisor_marker - marker: hypervisor_marker_uuid - hypervisor_hostname_pattern: hypervisor_hostname_pattern_query - with_servers: hypervisor_with_servers_query Response -------- .. rest_parameters:: parameters.yaml - hypervisors: hypervisors - cpu_info: cpu_info - current_workload: current_workload - status: hypervisor_status - state: hypervisor_state - disk_available_least: disk_available_least - host_ip: host_ip - free_disk_gb: hypervisor_free_disk_gb - free_ram_mb: free_ram_mb - hypervisor_hostname: hypervisor_hostname - hypervisor_type: hypervisor_type_body - hypervisor_version: hypervisor_version - id: hypervisor_id_body - id: hypervisor_id_body_uuid - local_gb: local_gb - local_gb_used: local_gb_used - memory_mb: memory_mb - memory_mb_used: memory_mb_used - running_vms: running_vms - servers: hypervisor_servers - servers.uuid: hypervisor_servers_uuid - servers.name: hypervisor_servers_name - service: hypervisor_service - service.host: host_name_body - service.id: service_id_body_2_52 - service.id: service_id_body_2_53 - service.disabled_reason: service_disable_reason - vcpus: hypervisor_vcpus - vcpus_used: hypervisor_vcpus_used - hypervisor_links: hypervisor_links **Example List Hypervisors Details (v2.33): JSON response** .. literalinclude:: ../../doc/api_samples/os-hypervisors/v2.33/hypervisors-detail-resp.json :language: javascript **Example List Hypervisors Details (v2.53): JSON response** .. literalinclude:: ../../doc/api_samples/os-hypervisors/v2.53/hypervisors-detail-resp.json :language: javascript Show Hypervisor Statistics ========================== .. rest_method:: GET /os-hypervisors/statistics Shows summary statistics for all enabled hypervisors over all compute nodes. Policy defaults enable only users with the administrative role to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. .. note:: As noted, some of the parameters in the response representing totals do not take allocation ratios into account. This can result in a disparity between the totals and the usages. A more accurate representation of state can be obtained using `placement`__. __ https://docs.openstack.org/api-ref/placement/#list-resource-provider-usages Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Response -------- .. 
rest_parameters:: parameters.yaml - hypervisor_statistics: hypervisor_statistics - count: hypervisor_count - current_workload: current_workload - disk_available_least: disk_available_least_total - free_disk_gb: hypervisor_free_disk_gb_total - free_ram_mb: free_ram_mb_total - local_gb: local_gb_total - local_gb_used: local_gb_used_total - memory_mb: memory_mb_total - memory_mb_used: memory_mb_used_total - running_vms: running_vms_total - vcpus: hypervisor_vcpus_total - vcpus_used: hypervisor_vcpus_used_total **Example Show Hypervisor Statistics: JSON response** .. literalinclude:: ../../doc/api_samples/os-hypervisors/hypervisors-statistics-resp.json :language: javascript Show Hypervisor Details ======================= .. rest_method:: GET /os-hypervisors/{hypervisor_id} Shows details for a given hypervisor. Policy defaults enable only users with the administrative role to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. .. note:: As noted, some of the parameters in the response representing totals do not take allocation ratios into account. This can result in a disparity between the totals and the usages. A more accurate representation of state can be obtained using `placement`__. __ https://docs.openstack.org/api-ref/placement/#show-resource-provider-usages Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - hypervisor_id: hypervisor_id - hypervisor_id: hypervisor_id_uuid - with_servers: hypervisor_with_servers_query Response -------- .. rest_parameters:: parameters.yaml - hypervisor: hypervisor - cpu_info: cpu_info - state: hypervisor_state - status: hypervisor_status - current_workload: current_workload - disk_available_least: disk_available_least - host_ip: host_ip - free_disk_gb: hypervisor_free_disk_gb - free_ram_mb: free_ram_mb - hypervisor_hostname: hypervisor_hostname - hypervisor_type: hypervisor_type_body - hypervisor_version: hypervisor_version - id: hypervisor_id_body - id: hypervisor_id_body_uuid - local_gb: local_gb - local_gb_used: local_gb_used - memory_mb: memory_mb - memory_mb_used: memory_mb_used - running_vms: running_vms - servers: hypervisor_servers - servers.uuid: hypervisor_servers_uuid - servers.name: hypervisor_servers_name - service: hypervisor_service - service.host: host_name_body - service.id: service_id_body_2_52 - service.id: service_id_body_2_53 - service.disabled_reason: service_disable_reason - vcpus: hypervisor_vcpus - vcpus_used: hypervisor_vcpus_used **Example Show Hypervisor Details (v2.28): JSON response** .. literalinclude:: ../../doc/api_samples/os-hypervisors/v2.28/hypervisors-show-resp.json :language: javascript **Example Show Hypervisor Details With Servers (v2.53): JSON response** .. literalinclude:: ../../doc/api_samples/os-hypervisors/v2.53/hypervisors-show-with-servers-resp.json :language: javascript Show Hypervisor Uptime ====================== .. rest_method:: GET /os-hypervisors/{hypervisor_id}/uptime Shows the uptime for a given hypervisor. Policy defaults enable only users with the administrative role to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), NotImplemented(501) Request ------- .. 
rest_parameters:: parameters.yaml - hypervisor_id: hypervisor_id - hypervisor_id: hypervisor_id_uuid Response -------- .. rest_parameters:: parameters.yaml - hypervisor: hypervisor - hypervisor_hostname: hypervisor_hostname - id: hypervisor_id_body - id: hypervisor_id_body_uuid - state: hypervisor_state - status: hypervisor_status - uptime: uptime **Example Show Hypervisor Uptime: JSON response** .. literalinclude:: ../../doc/api_samples/os-hypervisors/hypervisors-uptime-resp.json :language: javascript **Example Show Hypervisor Uptime (v2.53): JSON response** .. literalinclude:: ../../doc/api_samples/os-hypervisors/v2.53/hypervisors-uptime-resp.json :language: javascript Search Hypervisor ================= .. rest_method:: GET /os-hypervisors/{hypervisor_hostname_pattern}/search max_version: 2.52 Search hypervisor by a given hypervisor host name or portion of it. .. warning:: This API is deprecated starting with microversion 2.53. Use `List Hypervisors`_ with the ``hypervisor_hostname_pattern`` query parameter with microversion 2.53 and later. Policy defaults enable only users with the administrative role to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response code: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - hypervisor_hostname_pattern: hypervisor_hostname_pattern Response -------- .. rest_parameters:: parameters.yaml - hypervisors: hypervisors - hypervisor_hostname: hypervisor_hostname - id: hypervisor_id_body_no_version - state: hypervisor_state - status: hypervisor_status **Example Search Hypervisor: JSON response** .. literalinclude:: ../../doc/api_samples/os-hypervisors/hypervisors-search-resp.json :language: javascript List Hypervisor Servers ======================= .. rest_method:: GET /os-hypervisors/{hypervisor_hostname_pattern}/servers max_version: 2.52 List all servers belong to each hypervisor whose host name is matching a given hypervisor host name or portion of it. .. warning:: This API is deprecated starting with microversion 2.53. Use `List Hypervisors`_ with the ``hypervisor_hostname_pattern`` and ``with_servers`` query parameters with microversion 2.53 and later. Policy defaults enable only users with the administrative role to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response code: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - hypervisor_hostname_pattern: hypervisor_hostname_pattern Response -------- .. rest_parameters:: parameters.yaml - hypervisors: hypervisors - hypervisor_hostname: hypervisor_hostname - id: hypervisor_id_body_no_version - state: hypervisor_state - status: hypervisor_status - servers: servers - servers.uuid: server_uuid - servers.name: server_name **Example List Hypervisor Servers: JSON response** .. literalinclude:: ../../doc/api_samples/os-hypervisors/hypervisors-with-servers-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-instance-actions.inc0000664000175000017500000000720700000000000021773 0ustar00zuulzuul00000000000000.. 
-*- rst -*- ================================================ Servers actions (servers, os-instance-actions) ================================================ List actions and action details for a server. List Actions For Server ======================= .. rest_method:: GET /servers/{server_id}/os-instance-actions Lists actions for a server. Action information of deleted instances can be returned for requests starting with microversion 2.21. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - limit: instance_action_limit - marker: instance_action_marker - changes-since: changes_since_instance_action - changes-before: changes_before_instance_action Response -------- .. rest_parameters:: parameters.yaml - instanceActions: instanceActions - action: action - instance_uuid: instance_id_body - message: message - project_id: project_id_server_action - request_id: request_id_body - start_time: start_time - user_id: user_id_server_action - updated_at: updated_instance_action - links: instance_actions_next_links **Example List Actions For Server: JSON response** .. literalinclude:: ../../doc/api_samples/os-instance-actions/instance-actions-list-resp.json :language: javascript **Example List Actions For Server With Links (v2.58):** .. literalinclude:: ../../doc/api_samples/os-instance-actions/v2.58/instance-actions-list-with-limit-resp.json :language: javascript Show Server Action Details ========================== .. rest_method:: GET /servers/{server_id}/os-instance-actions/{request_id} Shows details for a server action. Action details of deleted instances can be returned for requests later than microversion 2.21. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - request_id: request_id Response -------- .. rest_parameters:: parameters.yaml - instanceAction: instanceAction - action: action - instance_uuid: instance_id_body - message: message - project_id: project_id_server_action - request_id: request_id_body - start_time: start_time - user_id: user_id_server_action - events: instance_action_events_2_50 - events: instance_action_events_2_51 - events.event: event - events.start_time: event_start_time - events.finish_time: event_finish_time - events.result: event_result - events.traceback: event_traceback - events.hostId: event_hostId - events.host: event_host - events.details: event_details - updated_at: updated_instance_action **Example Show Server Action Details For Admin (v2.62)** .. literalinclude:: ../../doc/api_samples/os-instance-actions/v2.62/instance-action-get-resp.json :language: javascript **Example Show Server Action Details For Non-Admin (v2.62)** .. literalinclude:: ../../doc/api_samples/os-instance-actions/v2.62/instance-action-get-non-admin-resp.json :language: javascript **Example Show Server Action Details For System Reader (v2.84)** .. 
literalinclude:: ../../doc/api_samples/os-instance-actions/v2.84/instance-action-get-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-instance-usage-audit-log.inc0000664000175000017500000000552700000000000023325 0ustar00zuulzuul00000000000000.. -*- rst -*- ======================================================== Server usage audit log (os-instance-usage-audit-log) ======================================================== Audit server usage of the cloud. This API is dependent on the ``instance_usage_audit`` configuration option being set on all compute hosts where usage auditing is required. Policy defaults enable only users with the administrative role to perform all os-instance-usage-audit-log related operations. Cloud providers can change these permissions through the ``policy.json`` file. List Server Usage Audits ======================== .. rest_method:: GET /os-instance_usage_audit_log Lists usage audits for all servers on all compute hosts where usage auditing is configured. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Response -------- .. rest_parameters:: parameters.yaml - instance_usage_audit_logs: instance_usage_audit_logs - hosts_not_run: host_not_run - log: instance_usage_audit_log - errors: errors - instances: instances_usage_audit - message: instance_usage_audit_log_message - state: instance_usage_audit_task_state - num_hosts: host_num - num_hosts_done: host_done_num - num_hosts_not_run: host_not_run_num - num_hosts_running: host_running_num - overall_status: overall_status - period_beginning: period_beginning - period_ending: period_ending - total_errors: total_errors - total_instances: total_instances **Example List Usage Audits For All Servers** .. literalinclude:: ../../doc/api_samples/os-instance-usage-audit-log/inst-usage-audit-log-index-get-resp.json :language: javascript List Usage Audits Before Specified Time ======================================= .. rest_method:: GET /os-instance_usage_audit_log/{before_timestamp} Lists usage audits that occurred before a specified time. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - before_timestamp: before_timestamp Response -------- .. rest_parameters:: parameters.yaml - instance_usage_audit_log: instance_usage_audit_logs - hosts_not_run: host_not_run - log: instance_usage_audit_log - errors: errors - instances: instances_usage_audit - message: instance_usage_audit_log_message - state: instance_usage_audit_task_state - num_hosts: host_num - num_hosts_done: host_done_num - num_hosts_not_run: host_not_run_num - num_hosts_running: host_running_num - overall_status: overall_status - period_beginning: period_beginning - period_ending: period_ending - total_errors: total_errors - total_instances: total_instances **Example List Usage Audits Before Specified Time** .. literalinclude:: ../../doc/api_samples/os-instance-usage-audit-log/inst-usage-audit-log-show-get-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-interface.inc0000664000175000017500000001133200000000000020463 0ustar00zuulzuul00000000000000.. 
-*- rst -*- ========================================= Port interfaces (servers, os-interface) ========================================= List port interfaces, show port interface details of the given server. Create a port interface and uses it to attach a port to the given server, detach a port interface from the given server. List Port Interfaces ==================== .. rest_method:: GET /servers/{server_id}/os-interface Lists port interfaces that are attached to a server. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), NotImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path Response -------- .. rest_parameters:: parameters.yaml - interfaceAttachments: interfaceAttachments - port_state: port_state - fixed_ips: fixed_ips_resp - ip_address: ip_address - subnet_id: subnet_id - mac_addr: mac_addr - net_id: net_id_resp - port_id: port_id_resp - tag: device_tag_nic_attachment_resp **Example List Port Interfaces: JSON response** .. literalinclude:: ../../doc/api_samples/os-attach-interfaces/attach-interfaces-list-resp.json :language: javascript **Example List Tagged Port Interfaces (v2.70): JSON response** .. literalinclude:: ../../doc/api_samples/os-attach-interfaces/v2.70/attach-interfaces-list-resp.json :language: javascript Create Interface ================ .. rest_method:: POST /servers/{server_id}/os-interface Creates a port interface and uses it to attach a port to a server. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409), computeFault(500), NotImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - interfaceAttachment: interfaceAttachment - port_id: port_id - net_id: net_id - fixed_ips: fixed_ips - ip_address: ip_address_req - tag: device_tag_nic_attachment **Example Create Interface: JSON request** Create interface with ``net_id`` and ``fixed_ips``. .. literalinclude:: ../../doc/api_samples/os-attach-interfaces/attach-interfaces-create-net_id-req.json :language: javascript Create interface with ``port_id``. .. literalinclude:: ../../doc/api_samples/os-attach-interfaces/attach-interfaces-create-req.json :language: javascript **Example Create Tagged Interface (v2.49): JSON request** .. literalinclude:: ../../doc/api_samples/os-attach-interfaces/v2.49/attach-interfaces-create-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - interfaceAttachment: interfaceAttachment_resp - fixed_ips: fixed_ips_resp - ip_address: ip_address - subnet_id: subnet_id - mac_addr: mac_addr - net_id: net_id_resp - port_id: port_id_resp - port_state: port_state - tag: device_tag_nic_attachment_resp **Example Create Interface: JSON response** .. literalinclude:: ../../doc/api_samples/os-attach-interfaces/attach-interfaces-create-resp.json :language: javascript **Example Create Tagged Interface (v2.70): JSON response** .. literalinclude:: ../../doc/api_samples/os-attach-interfaces/v2.70/attach-interfaces-create-resp.json :language: javascript Show Port Interface Details =========================== .. rest_method:: GET /servers/{server_id}/os-interface/{port_id} Shows details for a port interface that is attached to a server. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - port_id: port_id_path Response -------- .. 
rest_parameters:: parameters.yaml - interfaceAttachment: interfaceAttachment_resp - port_state: port_state - fixed_ips: fixed_ips_resp - ip_address: ip_address - subnet_id: subnet_id - mac_addr: mac_addr - net_id: net_id_resp - port_id: port_id_resp - tag: device_tag_nic_attachment_resp **Example Show Port Interface Details: JSON response** .. literalinclude:: ../../doc/api_samples/os-attach-interfaces/attach-interfaces-show-resp.json :language: javascript **Example Show Tagged Port Interface Details (v2.70): JSON response** .. literalinclude:: ../../doc/api_samples/os-attach-interfaces/v2.70/attach-interfaces-show-resp.json :language: javascript Detach Interface ================ .. rest_method:: DELETE /servers/{server_id}/os-interface/{port_id} Detaches a port interface from a server. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), conflict(409), NotImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - port_id: port_id_path Response -------- No body is returned on successful request. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-keypairs.inc0000664000175000017500000000662500000000000020363 0ustar00zuulzuul00000000000000.. -*- rst -*- ===================== Keypairs (keypairs) ===================== Generates, imports, and deletes SSH keys. List Keypairs ============= .. rest_method:: GET /os-keypairs Lists keypairs that are associated with the account. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - user_id: keypair_user - limit: keypair_limit - marker: keypair_marker Response -------- .. rest_parameters:: parameters.yaml - keypairs: keypairs - keypair: keypair - name: keypair_name - public_key: keypair_public_key - fingerprint: keypair_fingerprint - type: keypair_type - keypairs_links: keypair_links **Example List Keypairs (v2.35): JSON response** .. literalinclude:: ../../doc/api_samples/os-keypairs/v2.35/keypairs-list-resp.json :language: javascript Create Or Import Keypair ======================== .. rest_method:: POST /os-keypairs Generates or imports a keypair. Normal response codes: 200, 201 .. note:: The success status code was changed from 200 to 201 in version 2.2 Error response codes: badRequest(400), unauthorized(401), forbidden(403), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - keypair: keypair - name: keypair_name - public_key: keypair_public_key_in - type: keypair_type_in - user_id: keypair_userid_in **Example Create Or Import Keypair (v2.10): JSON request** .. literalinclude:: ../../doc/api_samples/os-keypairs/v2.10/keypairs-import-post-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - keypair: keypair - name: keypair_name - public_key: keypair_public_key - fingerprint: keypair_fingerprint - user_id: keypair_userid - private_key: keypair_private_key - type: keypair_type **Example Create Or Import Keypair (v2.10): JSON response** .. literalinclude:: ../../doc/api_samples/os-keypairs/v2.10/keypairs-import-post-resp.json :language: javascript Show Keypair Details ==================== .. rest_method:: GET /os-keypairs/{keypair_name} Shows details for a keypair that is associated with the account. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. 
rest_parameters:: parameters.yaml - keypair_name: keypair_name_path - user_id: keypair_user Response -------- .. rest_parameters:: parameters.yaml - keypair: keypair - created_at: created - deleted: keypair_deleted - deleted_at: keypair_updated_deleted_at - fingerprint: keypair_fingerprint - id: keypair_id - name: keypair_name - public_key: keypair_public_key - updated_at: keypair_updated_deleted_at - user_id: keypair_userid - type: keypair_type **Example Show Keypair Details (v2.10): JSON response** .. literalinclude:: ../../doc/api_samples/os-keypairs/v2.10/keypairs-get-resp.json :language: javascript Delete Keypair ============== .. rest_method:: DELETE /os-keypairs/{keypair_name} Deletes a keypair. Normal response codes: 202, 204 .. note:: The normal return code is 204 in version 2.2 to match the fact that no body content is returned. Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - keypair_name: keypair_name_path - user_id: keypair_user Response -------- There is no body content for the response of a successful DELETE query ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-migrations.inc0000664000175000017500000000436200000000000020704 0ustar00zuulzuul00000000000000.. -*- rst -*- ========================================= Migrations (os-migrations) ========================================= Shows data on migrations. List Migrations =============== .. rest_method:: GET /os-migrations Lists migrations. Policy defaults enable only users with the administrative role to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Starting from microversion 2.59, the response is sorted by ``created_at`` and ``id`` in descending order. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - hidden: migration_hidden - host: migration_host - instance_uuid: migration_instance_uuid - migration_type: migration_type - source_compute: migration_source_compute - status: migration_status - limit: migration_limit - marker: migration_marker - changes-since: changes_since_migration - changes-before: changes_before_migration - user_id: user_id_query_migrations - project_id: project_id_query_migrations Response -------- .. rest_parameters:: parameters.yaml - migrations: migrations - created_at: created - dest_compute: migrate_dest_compute - dest_host: migrate_dest_host - dest_node: migrate_dest_node - id: migration_id - instance_uuid: server_id - new_instance_type_id: migration_new_flavor_id - old_instance_type_id: migration_old_flavor_id - source_compute: migrate_source_compute - source_node: migrate_source_node - status: migrate_status - updated_at: updated - migration_type: migration_type_2_23 - links: migration_links_2_23 - uuid: migration_uuid - migrations_links: migration_next_links_2_59 - user_id: user_id_migration_2_80 - project_id: project_id_migration_2_80 **Example List Migrations: JSON response** .. literalinclude:: ../../doc/api_samples/os-migrations/migrations-get.json :language: javascript **Example List Migrations (v2.80):** .. literalinclude:: ../../doc/api_samples/os-migrations/v2.80/migrations-get.json :language: javascript **Example List Migrations With Paging (v2.80):** .. 
literalinclude:: ../../doc/api_samples/os-migrations/v2.80/migrations-get-with-limit.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-networks.inc0000664000175000017500000001666500000000000020415 0ustar00zuulzuul00000000000000.. -*- rst -*- ====================================== Networks (os-networks) (DEPRECATED) ====================================== .. warning:: This API was designed to work with ``nova-network`` which was deprecated in the 14.0.0 (Newton) release and removed in the 21.0.0 (Ussuri) release. Some features are proxied to the Network service (neutron) when appropriate, but as with all translation proxies, this is far from perfect compatibility. These APIs should be avoided in new applications in favor of `using neutron directly`__. These will fail with a 404 starting from microversion 2.36. They were removed in the 21.0.0 (Ussuri) release. __ https://docs.openstack.org/api-ref/network/v2/#networks Creates, lists, shows information for, and deletes networks. Adds network to a project, disassociates a network from a project, and disassociates a project from a network. Associates host with and disassociates host from a network. List Networks ============= .. rest_method:: GET /os-networks Lists networks for the project. Policy defaults enable all users to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Response -------- **Example List Networks: JSON response** .. literalinclude:: ../../doc/api_samples/os-networks/networks-list-resp.json :language: javascript Create Network ============== .. rest_method:: POST /os-networks Creates a network. Policy defaults enable only users with the administrative role or the owner of the network to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), conflict(409), gone(410), notImplemented(501) Request ------- **Example Create Network: JSON request** .. literalinclude:: ../../doc/api_samples/os-networks/network-create-req.json :language: javascript Response -------- **Example Create Network: JSON response** .. literalinclude:: ../../doc/api_samples/os-networks/network-create-resp.json :language: javascript Add Network =========== .. rest_method:: POST /os-networks/add Adds a network to a project. Policy defaults enable only users with the administrative role to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), gone(410), notImplemented(501) Request ------- **Example Add Network: JSON request** .. literalinclude:: ../../doc/api_samples/os-networks/network-add-req.json :language: javascript Response -------- Show Network Details ==================== .. rest_method:: GET /os-networks/{network_id} Shows details for a network. Policy defaults enable all users to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. 
rest_parameters:: parameters.yaml - network_id: network_id Response -------- **Example Show Network Details: JSON response** .. literalinclude:: ../../doc/api_samples/os-networks/network-show-resp.json :language: javascript Delete Network ============== .. rest_method:: DELETE /os-networks/{network_id} Deletes a network. Policy defaults enable only users with the administrative role or the owner of the network to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), conflict(409), gone(410) Request ------- .. rest_parameters:: parameters.yaml - network_id: network_id Response -------- There is no body content for the response of a successful DELETE query. Associate Host ============== .. rest_method:: POST /os-networks/{network_id}/action Associates a network with a host. Specify the ``associate_host`` action in the request body. Policy defaults enable only users with the administrative role or the owner of the network to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), gone(410), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - network_id: network_id - associate_host: associate_host **Example Associate Host to Network: JSON request** .. literalinclude:: ../../doc/api_samples/os-networks-associate/network-associate-host-req.json :language: javascript Response -------- There is no body content for the response of a successful POST query. Disassociate Network ==================== .. rest_method:: POST /os-networks/{network_id}/action Disassociates a network from a project. You can then reuse the network. Specify the ``disassociate`` action in the request body. Policy defaults enable only users with the administrative role or the owner of the network to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), gone(410), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - network_id: network_id **Example Disassociate Network: JSON request** .. literalinclude:: ../../doc/api_samples/os-networks-associate/network-disassociate-req.json :language: javascript Response -------- There is no body content for the response of a successful POST query. Disassociate Host ================= .. rest_method:: POST /os-networks/{network_id}/action Disassociates a host from a network. Specify the ``disassociate_host`` action in the request body. Policy defaults enable only users with the administrative role or the owner of the network to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), gone(410), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - network_id: network_id **Example Disassociate Host from Network: JSON request** .. literalinclude:: ../../doc/api_samples/os-networks-associate/network-disassociate-host-req.json :language: javascript Response -------- There is no body content for the response of a successful POST query. Disassociate Project ==================== .. 
rest_method:: POST /os-networks/{network_id}/action Disassociates a project from a network. Specify the ``disassociate_project`` action in the request body. Policy defaults enable only users with the administrative role or the owner of the network to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), gone(410), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - network_id: network_id **Example Disassociate Project from Network: JSON request** .. literalinclude:: ../../doc/api_samples/os-networks-associate/network-disassociate-project-req.json :language: javascript Response -------- There is no body content for the response of a successful POST query. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-quota-class-sets.inc0000664000175000017500000001303600000000000021736 0ustar00zuulzuul00000000000000.. -*- rst -*- ======================================= Quota class sets (os-quota-class-sets) ======================================= Show, Create or Update the quotas for a Quota Class. Nova supports implicit 'default' Quota Class only. .. note:: Once a default limit is set via the ``default`` quota class via the API, that takes precedence over any changes to that resource limit in the configuration options. In other words, once you've changed things via the API, you either have to keep those synchronized with the configuration values or remove the default limit from the database manually as there is no REST API for removing quota class values from the database. For Example: If you updated default quotas for instances, to 20, but didn't change ``quota_instances`` in your ``nova.conf``, you'd now have default quota for instances as 20 for all projects. If you then change ``quota_instances=5`` in nova.conf, but didn't update the ``default`` quota class via the API, you'll still have a default quota of 20 for instances regardless of ``nova.conf``. Refer: `Quotas `__ for more details. .. warning:: There is a bug in the v2.1 API until microversion 2.49 and the legacy v2 compatible API which does not return the ``server_groups`` and ``server_group_members`` quotas in GET and PUT ``os-quota-class-sets`` API response, whereas the v2 API used to return those keys in the API response. There is workaround to get the ``server_groups`` and ``server_group_members`` quotas using "List Default Quotas For Tenant" API in :ref:`os-quota-sets` but that is per project quota. This issue is fixed in microversion 2.50, here onwards ``server_groups`` and ``server_group_members`` keys are returned in API response body. Show the quota for Quota Class ============================== .. rest_method:: GET /os-quota-class-sets/{id} Show the quota for the Quota Class. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - id: quota_class_id Response -------- .. 
rest_parameters:: parameters.yaml - quota_class_set: quota_class_set - cores: cores_quota_class - id: quota_class_id_body - instances: instances_quota_class - key_pairs: key_pairs_quota_class - metadata_items: metadata_items - ram: ram_quota_class - fixed_ips: fixed_ips_quota_class - floating_ips: floating_ips_quota_class - networks: networks_quota_optional - security_group_rules: security_group_rules_quota_class - security_groups: security_groups_quota_class - server_groups: server_groups_quota_class - server_group_members: server_group_members_quota_class - injected_file_content_bytes: injected_file_content_bytes - injected_file_path_bytes: injected_file_path_bytes - injected_files: injected_files_quota_class **Example Show A Quota Class: JSON response(2.50)** .. literalinclude:: ../../doc/api_samples/os-quota-class-sets/v2.50/quota-classes-show-get-resp.json :language: javascript Create or Update Quotas for Quota Class ======================================= .. rest_method:: PUT /os-quota-class-sets/{id} Update the quotas for the Quota Class. If the requested Quota Class is not found in the DB, then the API will create the one. Only 'default' quota class is valid and used to set the default quotas, all other quota class would not be used anywhere. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - id: quota_class_id - quota_class_set: quota_class_set - cores: cores_quota_class_optional - instances: instances_quota_class_optional - key_pairs: key_pairs_quota_class_optional - metadata_items: metadata_items_quota_optional - ram: ram_quota_class_optional - server_groups: server_groups_quota_class_optional - server_group_members: server_group_members_quota_optional - fixed_ips: fixed_ips_quota_class_optional - floating_ips: floating_ips_quota_class_optional - networks: networks_quota_optional - security_group_rules: security_group_rules_quota_class_optional - security_groups: security_groups_quota_class_optional - injected_file_content_bytes: injected_file_content_bytes_quota_optional - injected_file_path_bytes: injected_file_path_bytes_quota_optional - injected_files: injected_files_quota_class_optional **Example Update Quotas: JSON request(2.50)** .. literalinclude:: ../../doc/api_samples/os-quota-class-sets/v2.50/quota-classes-update-post-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - quota_class_set: quota_class_set - cores: cores_quota_class - instances: instances_quota_class - key_pairs: key_pairs_quota_class - metadata_items: metadata_items - ram: ram_quota_class - fixed_ips: fixed_ips_quota_class - floating_ips: floating_ips_quota_class - networks: networks_quota_optional - security_group_rules: security_group_rules_quota_class - security_groups: security_groups_quota_class - server_groups: server_groups_quota_class - server_group_members: server_group_members_quota_class - injected_file_content_bytes: injected_file_content_bytes - injected_file_path_bytes: injected_file_path_bytes - injected_files: injected_files_quota_class **Example Update Quotas: JSON response(2.50)** .. literalinclude:: ../../doc/api_samples/os-quota-class-sets/v2.50/quota-classes-update-post-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-quota-sets.inc0000664000175000017500000001646500000000000020644 0ustar00zuulzuul00000000000000.. -*- rst -*- .. 
_os-quota-sets: ============================ Quota sets (os-quota-sets) ============================ Permits administrators, depending on policy settings, to view default quotas, view details for quotas, revert quotas to defaults, and update the quotas for a project or a project and user. Show A Quota ============ .. rest_method:: GET /os-quota-sets/{tenant_id} Show the quota for a project or a project and a user. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403) - 400 - BadRequest - the tenant_id is not valid in your cloud, perhaps because it was typoed. Request ------- .. rest_parameters:: parameters.yaml - tenant_id: tenant_id - user_id: user_id_query_quota Response -------- .. rest_parameters:: parameters.yaml - quota_set: quota_set - cores: cores - id: quota_tenant_or_user_id_body - instances: instances - key_pairs: key_pairs - metadata_items: metadata_items - ram: ram - server_groups: server_groups - server_group_members: server_group_members - fixed_ips: fixed_ips_quota - floating_ips: floating_ips - networks: networks_quota_set_optional - security_group_rules: security_group_rules_quota - security_groups: security_groups_quota - injected_file_content_bytes: injected_file_content_bytes - injected_file_path_bytes: injected_file_path_bytes - injected_files: injected_files **Example Show A Quota: JSON response** .. literalinclude:: ../../doc/api_samples/os-quota-sets/user-quotas-show-get-resp.json :language: javascript Update Quotas ============= .. rest_method:: PUT /os-quota-sets/{tenant_id} Update the quotas for a project or a project and a user. Users can force the update even if the quota has already been used and the reserved quota exceeds the new quota. To force the update, specify the ``"force": True`` attribute in the request body, the default value is ``false``. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403) - 400 - BadRequest - the tenant_id is not valid in your cloud, perhaps because it was typoed. Request ------- .. rest_parameters:: parameters.yaml - tenant_id: tenant_id - user_id: user_id_query_set_quota - quota_set: quota_set - force: force - cores: cores_quota_optional - instances: instances_quota_optional - key_pairs: key_pairs_quota_optional - metadata_items: metadata_items_quota_optional - ram: ram_quota_optional - server_groups: server_groups_quota_optional - server_group_members: server_group_members_quota_optional - fixed_ips: fixed_ips_quota_optional - floating_ips: floating_ips_quota_optional - networks: networks_quota_set_optional - security_group_rules: security_group_rules - security_groups: security_groups_quota_optional - injected_file_content_bytes: injected_file_content_bytes_quota_optional - injected_file_path_bytes: injected_file_path_bytes_quota_optional - injected_files: injected_files_quota_optional **Example Update Quotas: JSON request** .. literalinclude:: ../../doc/api_samples/os-quota-sets/quotas-update-post-req.json :language: javascript **Example Update Quotas with the optional ``force`` attribute: JSON request** .. literalinclude:: ../../doc/api_samples/os-quota-sets/quotas-update-force-post-req.json :language: javascript Response -------- .. 
rest_parameters:: parameters.yaml - quota_set: quota_set - cores: cores - instances: instances - key_pairs: key_pairs - metadata_items: metadata_items - ram: ram - server_groups: server_groups - server_group_members: server_group_members - fixed_ips: fixed_ips_quota - floating_ips: floating_ips - networks: networks_quota_set_optional - security_group_rules: security_group_rules_quota - security_groups: security_groups_quota - injected_file_content_bytes: injected_file_content_bytes - injected_file_path_bytes: injected_file_path_bytes - injected_files: injected_files **Example Update Quotas: JSON response** .. literalinclude:: ../../doc/api_samples/os-quota-sets/quotas-update-post-resp.json :language: javascript Revert Quotas To Defaults ========================= .. rest_method:: DELETE /os-quota-sets/{tenant_id} Reverts the quotas to default values for a project or a project and a user. To revert quotas for a project and a user, specify the ``user_id`` query parameter. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - tenant_id: tenant_id - user_id: user_id_query_quota_delete Response -------- There is no body content for the response of a successful DELETE operation. List Default Quotas For Tenant ============================== .. rest_method:: GET /os-quota-sets/{tenant_id}/defaults Lists the default quotas for a project. Normal response codes: 200 Error response codes: badrequest(400), unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - tenant_id: tenant_id Response -------- .. rest_parameters:: parameters.yaml - quota_set: quota_set - cores: cores - id: quota_tenant_or_user_id_body - instances: instances - key_pairs: key_pairs - metadata_items: metadata_items - ram: ram - server_groups: server_groups - server_group_members: server_group_members - fixed_ips: fixed_ips_quota - floating_ips: floating_ips - networks: networks_quota_set_optional - security_group_rules: security_group_rules_quota - security_groups: security_groups_quota - injected_file_content_bytes: injected_file_content_bytes - injected_file_path_bytes: injected_file_path_bytes - injected_files: injected_files **Example List Default Quotas For Tenant: JSON response** .. literalinclude:: ../../doc/api_samples/os-quota-sets/quotas-show-defaults-get-resp.json :language: javascript Show The Detail of Quota ======================== .. rest_method:: GET /os-quota-sets/{tenant_id}/detail Show the detail of quota for a project or a project and a user. To show a quota for a project and a user, specify the ``user_id`` query parameter. Normal response codes: 200 Error response codes: badrequest(400), unauthorized(401), forbidden(403) - 400 - BadRequest - the {tenant_id} is not valid in your cloud, perhaps because it was typoed. Request ------- .. rest_parameters:: parameters.yaml - tenant_id: tenant_id - user_id: user_id_query_quota Response -------- .. 
rest_parameters:: parameters.yaml - quota_set: quota_set - cores: cores_quota_details - id: quota_tenant_or_user_id_body - instances: instances_quota_details - key_pairs: key_pairs_quota_details - metadata_items: metadata_items_quota_details - ram: ram_quota_details - server_groups: server_groups_quota_details - server_group_members: server_group_members_quota_details - fixed_ips: fixed_ips_quota_details - floating_ips: floating_ips_quota_details - networks: networks_quota_set_optional - security_group_rules: security_group_rules_quota_details - security_groups: security_groups_quota_details - injected_file_content_bytes: injected_file_content_bytes_quota_details - injected_file_path_bytes: injected_file_path_bytes_quota_details - injected_files: injected_files_quota_details **Example Show A Quota: JSON response** .. literalinclude:: ../../doc/api_samples/os-quota-sets/quotas-show-detail-get-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-security-group-default-rules.inc0000664000175000017500000001034700000000000024303 0ustar00zuulzuul00000000000000.. -*- rst -*- ==================================================================== Rules for default security group (os-security-group-default-rules) ==================================================================== .. warning:: This API only available with ``nova-network`` which is deprecated. It should be avoided in any new applications. These will fail with a 404 starting from microversion 2.36. They were completely removed in the 21.0.0 (Ussuri) release. Lists, shows information for, and creates default security group rules. List Default Security Group Rules ================================= .. rest_method:: GET /os-security-group-default-rules Lists default security group rules. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), gone(410), notImplemented(501) Response -------- .. rest_parameters:: parameters.yaml - security_group_default_rules: security_group_default_rules - from_port: from_port - id: secgroup_default_rule_id - ip_protocol: ip_protocol - ip_range: secgroup_rule_ip_range - ip_range.cidr: secgroup_rule_cidr - to_port: to_port **Example List default security group rules: JSON response** .. literalinclude:: ../../doc/api_samples/os-security-group-default-rules/security-group-default-rules-list-resp.json :language: javascript Show Default Security Group Rule Details ======================================== .. rest_method:: GET /os-security-group-default-rules/{security_group_default_rule_id} Shows details for a security group rule. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), gone(410), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - security_group_default_rule_id: security_group_default_rule_id Response -------- .. rest_parameters:: parameters.yaml - security_group_default_rule: security_group_default_rule - from_port: from_port - id: secgroup_default_rule_id - ip_protocol: ip_protocol - ip_range: secgroup_rule_ip_range - ip_range.cidr: secgroup_rule_cidr - to_port: to_port **Example Show default security group rule: JSON response** .. literalinclude:: ../../doc/api_samples/os-security-group-default-rules/security-group-default-rules-show-resp.json :language: javascript Create Default Security Group Rule ================================== .. 
rest_method:: POST /os-security-group-default-rules Creates a default security group rule. If you specify a source port ( ``from_port`` ) or destination port ( ``to_port`` ) value, you must specify an IP protocol ( ``ip_protocol`` ) value. Otherwise, the operation returns the ``Bad Request (400)`` response code. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), conflict(409), gone(410), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - security_group_default_rule: security_group_default_rule - ip_protocol: ip_protocol - from_port: from_port - to_port: to_port - cidr: secgroup_rule_cidr **Example Create default security group rule: JSON request** .. literalinclude:: ../../doc/api_samples/os-security-group-default-rules/security-group-default-rules-create-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - security_group_default_rule: security_group_default_rule - from_port: from_port - id: secgroup_default_rule_id - ip_protocol: ip_protocol - ip_range: secgroup_rule_ip_range - ip_range.cidr: secgroup_rule_cidr - to_port: to_port **Example Create default security group rule: JSON response** .. literalinclude:: ../../doc/api_samples/os-security-group-default-rules/security-group-default-rules-create-resp.json :language: javascript Delete Default Security Group Rule ================================== .. rest_method:: DELETE /os-security-group-default-rules/{security_group_default_rule_id} Deletes a security group rule. Normal response codes: 204 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), gone(410), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - security_group_default_rule_id: security_group_default_rule_id Response -------- ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-security-group-rules.inc0000664000175000017500000000526300000000000022662 0ustar00zuulzuul00000000000000.. -*- rst -*- ================================================================ Rules for security group (os-security-group-rules) (DEPRECATED) ================================================================ .. warning:: These APIs are proxy calls to the Network service. Nova has deprecated all the proxy APIs and users should use the native APIs instead. These will fail with a 404 starting from microversion 2.36. See: `Relevant Network APIs `__. Creates and deletes security group rules. Create Security Group Rule ========================== .. rest_method:: POST /os-security-group-rules Creates a rule for a security group. Either ``cidr`` or ``group_id`` must be specified when creating a rule. .. note:: nova-network only supports ingress rules. If you want to define egress rules you must use the Neutron networking service. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - security_group_rule: security_group_rule - parent_group_id: parent_group_id - ip_protocol: ip_protocol - from_port: from_port - to_port: to_port - cidr: secgroup_rule_cidr - group_id: group_id **Example Create security group rule: JSON request** .. literalinclude:: ../../doc/api_samples/os-security-groups/security-group-rules-post-req.json :language: javascript Response -------- The ``group`` is empty if ``group_id`` was not provided on the request. 
The ``ip_range`` is empty if ``cidr`` was not provided on the request. .. rest_parameters:: parameters.yaml - security_group_rule: security_group_rule - ip_protocol: ip_protocol - from_port: from_port - to_port: to_port - ip_range: secgroup_rule_ip_range - ip_range.cidr: secgroup_rule_cidr - id: secgroup_rule_id - parent_group_id: parent_group_id - group: group - group.name: name_sec_group_optional - group.tenant_id: secgroup_tenant_id_body **Example Create security group rule: JSON response** .. literalinclude:: ../../doc/api_samples/os-security-groups/security-group-rules-post-resp.json :language: javascript Delete Security Group Rule ========================== .. rest_method:: DELETE /os-security-group-rules/{security_group_rule_id} Deletes a security group rule. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - security_group_rule_id: security_group_rule_id Response -------- There is no body content for the response of a successful DELETE query. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-security-groups.inc0000664000175000017500000001064300000000000021713 0ustar00zuulzuul00000000000000.. -*- rst -*- .. NOTE(gmann): These APIs are deprecated so do not update this file even body, example or parameters are not complete. ================================================== Security groups (os-security-groups) (DEPRECATED) ================================================== .. warning:: These APIs are proxy calls to the Network service. Nova has deprecated all the proxy APIs and users should use the native APIs instead. These will fail with a 404 starting from microversion 2.36. See: `Relevant Network APIs `__. Lists, shows information for, creates, updates and deletes security groups. List Security Groups ==================== .. rest_method:: GET /os-security-groups Lists security groups. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - limit: limit_simple - offset: offset_simple - all_tenants: all_tenants_sec_grp_query Response -------- .. rest_parameters:: parameters.yaml - security_groups: security_groups_obj - description: description - id: security_group_id_body - name: name - rules: rules - tenant_id: tenant_id_body **Example List security groups: JSON response** .. literalinclude:: ../../doc/api_samples/os-security-groups/security-groups-list-get-resp.json :language: javascript Create Security Group ===================== .. rest_method:: POST /os-security-groups Creates a security group. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - security_group: security_group - name: name - description: description **Example Create security group: JSON request** .. literalinclude:: ../../doc/api_samples/os-security-groups/security-group-post-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - security_group: security_group - description: description - id: security_group_id_body - name: name - rules: rules - tenant_id: tenant_id_body **Example Create security group: JSON response** .. 
literalinclude:: ../../doc/api_samples/os-security-groups/security-groups-create-resp.json :language: javascript Show Security Group Details =========================== .. rest_method:: GET /os-security-groups/{security_group_id} Shows details for a security group. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - security_group_id: security_group_id Response -------- .. rest_parameters:: parameters.yaml - security_group: security_group - description: description - id: security_group_id_body - name: name - rules: rules - tenant_id: tenant_id_body **Example Show security group: JSON response** .. literalinclude:: ../../doc/api_samples/os-security-groups/security-groups-get-resp.json :language: javascript Update Security Group ===================== .. rest_method:: PUT /os-security-groups/{security_group_id} Updates a security group. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - security_group_id: security_group_id - name: name - description: description **Example Update security group: JSON request** .. literalinclude:: ../../doc/api_samples/os-security-groups/security-group-post-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - security_group: security_group - description: description - id: security_group_id_body - name: name - rules: rules - tenant_id: tenant_id_body **Example Update security group: JSON response** .. literalinclude:: ../../doc/api_samples/os-security-groups/security-groups-create-resp.json :language: javascript Delete Security Group ===================== .. rest_method:: DELETE /os-security-groups/{security_group_id} Deletes a security group. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - security_group_id: security_group_id Response -------- There is no body content for the response of a successful DELETE query. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-server-external-events.inc0000664000175000017500000000424300000000000023156 0ustar00zuulzuul00000000000000.. -*- rst -*- ==================================================== Create external events (os-server-external-events) ==================================================== .. warning:: This is an ``admin`` level service API only designed to be used by other OpenStack services. The point of this API is to coordinate between Nova and Neutron, Nova and Cinder, Nova and Ironic (and potentially future services) on activities they both need to be involved in, such as network hotplugging. Unless you are writing Neutron, Cinder or Ironic code you **should not** be using this API. Creates one or more external events. The API dispatches each event to a server instance. Run Events ========== .. rest_method:: POST /os-server-external-events Creates one or more external events, which the API dispatches to the host a server is assigned to. If the server is not currently assigned to a host the event will not be delivered. You will receive back the list of events that you submitted, with an updated ``code`` and ``status`` indicating their level of success. 
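For illustration only, a minimal sketch of how a service might submit a single ``network-vif-plugged`` event from Python over plain HTTP. The endpoint, token, server UUID and tag below are placeholders, and real OpenStack services authenticate through ``keystoneauth1``/service clients rather than hand-rolled ``requests`` calls:

.. code-block:: python

    import requests

    COMPUTE_ENDPOINT = "http://nova.example.com/v2.1"   # placeholder endpoint
    TOKEN = "<admin-scoped-token>"                      # placeholder token

    body = {
        "events": [
            {
                "name": "network-vif-plugged",
                # UUID of the server the event targets (placeholder value)
                "server_uuid": "9873dee3-2167-47ed-b737-3a9d82dcd64b",
                # tag identifying the resource, here a Neutron port (placeholder value)
                "tag": "0c04d193-70e0-4d78-8b2a-6511e66d7079",
                "status": "completed",
            }
        ]
    }

    resp = requests.post(
        COMPUTE_ENDPOINT + "/os-server-external-events",
        json=body,
        headers={"X-Auth-Token": TOKEN},
    )
    # The response echoes the submitted events with per-event ``code`` and
    # ``status`` values; the overall HTTP status is described below.
    for event in resp.json()["events"]:
        print(event["name"], event["status"], event["code"])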
Normal response codes: 200, 207 A 200 will be returned if all events succeeded, 207 will be returned if any events could not be processed. The ``code`` attribute for the event will explain further what went wrong. Error response codes: badRequest(400), unauthorized(401), forbidden(403) .. note:: Prior to the fix for `bug 1855752`_, error response code 404 may be erroneously returned when all events failed. .. _bug 1855752: https://bugs.launchpad.net/nova/+bug/1855752 Request ------- .. rest_parameters:: parameters.yaml - events: events - name: event_name - server_uuid: server_uuid - status: event_status - tag: event_tag **Example Run Events** .. literalinclude:: ../../doc/api_samples/os-server-external-events/event-create-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - events: events - code: code - name: event_name - server_uuid: server_uuid - status: event_status - tag: event_tag **Example Run Events** .. literalinclude:: ../../doc/api_samples/os-server-external-events/event-create-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-server-groups.inc0000664000175000017500000000701700000000000021353 0ustar00zuulzuul00000000000000.. -*- rst -*- ================================== Server groups (os-server-groups) ================================== Lists, shows information for, creates, and deletes server groups. List Server Groups ================== .. rest_method:: GET /os-server-groups Lists all server groups for the tenant. Administrative users can use the ``all_projects`` query parameter to list all server groups for all projects. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - all_projects: all_projects - limit: limit_simple - offset: offset_simple Response -------- .. rest_parameters:: parameters.yaml - server_groups: server_groups_list - id: server_group_id_body - name: name_server_group - policies: policies - members: members - metadata: metadata_server_group_max_2_63 - project_id: project_id_server_group - user_id: user_id_server_group - policy: policy_name - rules: policy_rules **Example List Server Groups (2.64): JSON response** .. literalinclude:: ../../doc/api_samples/os-server-groups/v2.64/server-groups-list-resp.json :language: javascript Create Server Group =================== .. rest_method:: POST /os-server-groups Creates a server group. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_group: server_group - name: name_server_group - policies: policies - policy: policy_name - rules: policy_rules_optional **Example Create Server Group (2.64): JSON request** .. literalinclude:: ../../doc/api_samples/os-server-groups/v2.64/server-groups-post-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - server_group: server_group - id: server_group_id_body - name: name_server_group - policies: policies - members: members - metadata: metadata_server_group_max_2_63 - project_id: project_id_server_group - user_id: user_id_server_group - policy: policy_name - rules: policy_rules **Example Create Server Group (2.64): JSON response** .. 
literalinclude:: ../../doc/api_samples/os-server-groups/v2.64/server-groups-post-resp.json :language: javascript Show Server Group Details ========================= .. rest_method:: GET /os-server-groups/{server_group_id} Shows details for a server group. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_group_id: server_group_id Response -------- .. rest_parameters:: parameters.yaml - server_group: server_group - id: server_group_id_body - name: name_server_group - policies: policies - members: members - metadata: metadata_server_group_max_2_63 - project_id: project_id_server_group - user_id: user_id_server_group - policy: policy_name - rules: policy_rules **Example Show Server Group Details (2.64): JSON response** .. literalinclude:: ../../doc/api_samples/os-server-groups/v2.64/server-groups-get-resp.json :language: javascript Delete Server Group =================== .. rest_method:: DELETE /os-server-groups/{server_group_id} Deletes a server group. Normal response codes: 204 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_group_id: server_group_id Response -------- There is no body content for the response of a successful DELETE action. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-server-password.inc0000664000175000017500000000414000000000000021670 0ustar00zuulzuul00000000000000.. -*- rst -*- ================================================ Servers password (servers, os-server-password) ================================================ Shows the encrypted administrative password. Also, clears the encrypted administrative password for a server, which removes it from the metadata server. Show Server Password ==================== .. rest_method:: GET /servers/{server_id}/os-server-password Shows the administrative password for a server. This operation calls the metadata service to query metadata information and does not read password information from the server itself. The password saved in the metadata service is typically encrypted using the public SSH key injected into this server, so the SSH private key is needed to read the password. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path Response -------- .. rest_parameters:: parameters.yaml - password: password **Example Show Server Password** .. literalinclude:: ../../doc/api_samples/os-server-password/get-password-resp.json :language: javascript Clear Admin Password ==================== .. rest_method:: DELETE /servers/{server_id}/os-server-password Clears the encrypted administrative password for a server, which removes it from the database. This action does not actually change the instance server password. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 204 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. 
rest_parameters:: parameters.yaml - server_id: server_id_path Response -------- If successful, this method does not return content in the response body. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-server-tags.inc0000664000175000017500000000716400000000000020775 0ustar00zuulzuul00000000000000.. -*- rst -*- ============================= Server tags (servers, tags) ============================= Lists tags, creates, replaces or deletes one or more tags for a server, checks the existence of a tag for a server. Available since version 2.26 Tags have the following restrictions: - Tag is a Unicode bytestring no longer than 60 characters. - Tag is a non-empty string. - '/' is not allowed to be in a tag name - Comma is not allowed to be in a tag name in order to simplify requests that specify lists of tags - All other characters are allowed to be in a tag name - Each server can have up to 50 tags. List Tags ========= .. rest_method:: GET /servers/{server_id}/tags Lists all tags for a server. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path Response -------- .. rest_parameters:: parameters.yaml - tags: tags_no_min **Example List Tags:** .. literalinclude:: ../../doc/api_samples/os-server-tags/v2.26/server-tags-index-resp.json :language: javascript Replace Tags ============ .. rest_method:: PUT /servers/{server_id}/tags Replaces all tags on specified server with the new set of tags. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - tags: tags_no_min **Example Replace Tags:** .. literalinclude:: ../../doc/api_samples/os-server-tags/v2.26/server-tags-put-all-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - tags: tags_no_min **Example Replace Tags:** .. literalinclude:: ../../doc/api_samples/os-server-tags/v2.26/server-tags-put-all-resp.json :language: javascript Delete All Tags =============== .. rest_method:: DELETE /servers/{server_id}/tags Deletes all tags from the specified server. Normal response codes: 204 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path Response -------- There is no body content for the response of a successful DELETE query Check Tag Existence =================== .. rest_method:: GET /servers/{server_id}/tags/{tag} Checks tag existence on the server. If tag exists response with 204 status code will be returned. Otherwise returns 404. Normal response codes: 204 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - tag: tag Add a Single Tag ================ .. rest_method:: PUT /servers/{server_id}/tags/{tag} Adds a single tag to the server if server has no specified tag. Response code in this case is 201. If the server has specified tag just returns 204. Normal response codes: 201, 204 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - tag: tag Response -------- .. 
rest_parameters:: parameters.yaml - Location: tag_location Delete a Single Tag =================== .. rest_method:: DELETE /servers/{server_id}/tags/{tag} Deletes a single tag from the specified server. Normal response codes: 204 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - tag: tag Response -------- There is no body content for the response of a successful DELETE query ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-services.inc0000664000175000017500000002457600000000000020364 0ustar00zuulzuul00000000000000.. -*- rst -*- .. _os-services: ================================ Compute services (os-services) ================================ Lists all running Compute services in a region, enables or disables scheduling for a Compute service and deletes a Compute service. For an overview of Compute services, see `OpenStack Compute `__. List Compute Services ===================== .. rest_method:: GET /os-services Lists all running Compute services. Provides details why any services were disabled. .. note:: Starting with microversion 2.69 if service details cannot be loaded due to a transient condition in the deployment like infrastructure failure, the response body for those unavailable compute services in the down cells will be missing keys. See `handling down cells `__ section of the Compute API guide for more information on the keys that would be returned in the partial constructs. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - binary: binary_query - host: host_query_service Response -------- .. rest_parameters:: parameters.yaml - services: services - id: service_id_body_2_52 - id: service_id_body_2_53 - binary: binary - disabled_reason: disabled_reason_body - host: host_name_body - state: service_state - status: service_status - updated_at: updated - zone: OS-EXT-AZ:availability_zone - forced_down: forced_down_2_11 **Example List Compute Services (v2.11)** .. literalinclude:: ../../doc/api_samples/os-services/v2.11/services-list-get-resp.json :language: javascript **Example List Compute Services (v2.69)** This is a sample response for the services from the non-responsive part of the deployment. The responses for the available service records will be normal without any missing keys. .. literalinclude:: ../../doc/api_samples/os-services/v2.69/services-list-get-resp.json :language: javascript Disable Scheduling For A Compute Service ======================================== .. rest_method:: PUT /os-services/disable Disables scheduling for a Compute service. Specify the service by its host name and binary name. .. note:: Starting with microversion 2.53 this API is superseded by ``PUT /os-services/{service_id}``. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - host: host_name_body - binary: binary **Example Disable Scheduling For A Compute Service** .. literalinclude:: ../../doc/api_samples/os-services/service-disable-put-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - service: service - binary: binary - host: host_name_body - status: service_status **Example Disable Scheduling For A Compute Service** .. 
literalinclude:: ../../doc/api_samples/os-services/service-disable-put-resp.json :language: javascript Disable Scheduling For A Compute Service and Log Disabled Reason ================================================================ .. rest_method:: PUT /os-services/disable-log-reason Disables scheduling for a Compute service and logs information to the Compute service table about why a Compute service was disabled. Specify the service by its host name and binary name. .. note:: Starting with microversion 2.53 this API is superseded by ``PUT /os-services/{service_id}``. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - host: host_name_body - binary: binary - disabled_reason: disabled_reason_body **Example Disable Scheduling For A Compute Service and Log Disabled Reason** .. literalinclude:: ../../doc/api_samples/os-services/service-disable-log-put-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - service: service - binary: binary - disabled_reason: disabled_reason_body - host: host_name_body - status: service_status **Example Disable Scheduling For A Compute Service and Log Disabled Reason** .. literalinclude:: ../../doc/api_samples/os-services/service-disable-log-put-resp.json :language: javascript Enable Scheduling For A Compute Service ======================================= .. rest_method:: PUT /os-services/enable Enables scheduling for a Compute service. Specify the service by its host name and binary name. .. note:: Starting with microversion 2.53 this API is superseded by ``PUT /os-services/{service_id}``. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - host: host_name_body - binary: binary **Example Enable Scheduling For A Compute Service** .. literalinclude:: ../../doc/api_samples/os-services/service-enable-put-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - service: service - binary: binary - host: host_name_body - status: service_status **Example Enable Scheduling For A Compute Service** .. literalinclude:: ../../doc/api_samples/os-services/service-enable-put-resp.json :language: javascript Update Forced Down ================== .. rest_method:: PUT /os-services/force-down Set or unset ``forced_down`` flag for the service. ``forced_down`` is a manual override to tell nova that the service in question has been fenced manually by the operations team (either hard powered off, or network unplugged). That signals that it is safe to proceed with ``evacuate`` or other operations that nova has safety checks to prevent for hosts that are up. .. warning:: Setting a service forced down without completely fencing it will likely result in the corruption of VMs on that host. Action ``force-down`` available as of microversion 2.11. Specify the service by its host name and binary name. .. note:: Starting with microversion 2.53 this API is superseded by ``PUT /os-services/{service_id}``. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - host: host_name_body - binary: binary - forced_down: forced_down_2_11 **Example Update Forced Down** .. 
literalinclude:: ../../doc/api_samples/os-services/v2.11/service-force-down-put-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - service: service - binary: binary - host: host_name_body - forced_down: forced_down_2_11 | **Example Update Forced Down** .. literalinclude:: ../../doc/api_samples/os-services/v2.11/service-force-down-put-resp.json :language: javascript Update Compute Service ====================== .. rest_method:: PUT /os-services/{service_id} Update a compute service to enable or disable scheduling, including recording a reason why a compute service was disabled from scheduling. Set or unset the ``forced_down`` flag for the service. This operation is only allowed on services whose ``binary`` is ``nova-compute``. This API is available starting with microversion 2.53. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - service_id: service_id_path_2_53_no_version - status: service_status_2_53_in - disabled_reason: disabled_reason_2_53_in - forced_down: forced_down_2_53_in **Example Disable Scheduling For A Compute Service (v2.53)** .. literalinclude:: ../../doc/api_samples/os-services/v2.53/service-disable-log-put-req.json :language: javascript **Example Enable Scheduling For A Compute Service (v2.53)** .. literalinclude:: ../../doc/api_samples/os-services/v2.53/service-enable-put-req.json :language: javascript **Example Update Forced Down (v2.53)** .. literalinclude:: ../../doc/api_samples/os-services/v2.53/service-force-down-put-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - service: service - id: service_id_body_2_53_no_version - binary: binary - disabled_reason: disabled_reason_body - host: host_name_body - state: service_state - status: service_status - updated_at: updated - zone: OS-EXT-AZ:availability_zone - forced_down: forced_down_2_53_out **Example Disable Scheduling For A Compute Service (v2.53)** .. literalinclude:: ../../doc/api_samples/os-services/v2.53/service-disable-log-put-resp.json :language: javascript **Example Enable Scheduling For A Compute Service (v2.53)** .. literalinclude:: ../../doc/api_samples/os-services/v2.53/service-enable-put-resp.json :language: javascript **Example Update Forced Down (v2.53)** .. literalinclude:: ../../doc/api_samples/os-services/v2.53/service-force-down-put-resp.json :language: javascript Delete Compute Service ====================== .. rest_method:: DELETE /os-services/{service_id} Deletes a service. If it's a ``nova-compute`` service, then the corresponding host will be removed from all the host aggregates as well. Attempts to delete a ``nova-compute`` service which is still hosting instances will result in a 409 HTTPConflict response. The instances will need to be migrated or deleted before a compute service can be deleted. Similarly, attempts to delete a ``nova-compute`` service which is involved in in-progress migrations will result in a 409 HTTPConflict response. The migrations will need to be completed, for example confirming or reverting a resize, or the instances will need to be deleted before the compute service can be deleted. .. important:: Be sure to stop the actual ``nova-compute`` process on the physical host *before* deleting the service with this API. Failing to do so can lead to the running service re-creating orphaned **compute_nodes** table records in the database. 
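The sequencing described in the note above (stop the ``nova-compute`` process first, then remove the service record) can be sketched from the client side roughly as follows. This is a minimal illustration only, using the generic ``requests`` library; the endpoint URL, token and service UUID are placeholder assumptions, not values defined by this reference.

.. code-block:: python

   import requests

   # Placeholder values; substitute the real ones for your deployment.
   COMPUTE_ENDPOINT = "http://controller:8774/v2.1"
   TOKEN = "<keystone-token>"
   SERVICE_ID = "e81d66a4-ddd3-4aba-8a84-171d1cb4d339"  # UUID form (microversion >= 2.53)

   headers = {
       "X-Auth-Token": TOKEN,
       # Ask for a microversion in which services are addressed by UUID.
       "OpenStack-API-Version": "compute 2.53",
   }

   # By this point the nova-compute process on the host is assumed to be stopped.
   resp = requests.delete(
       f"{COMPUTE_ENDPOINT}/os-services/{SERVICE_ID}", headers=headers)

   if resp.status_code == 204:
       print("compute service deleted")
   elif resp.status_code == 409:
       # Instances or in-progress migrations still reference this service.
       print("conflict: migrate or delete instances/migrations first")
   else:
       resp.raise_for_status()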
Normal response codes: 204 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - service_id: service_id_path_2_52 - service_id: service_id_path_2_53 Response -------- If successful, this method does not return content in the response body. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/api-ref/source/os-simple-tenant-usage.inc0000664000175000017500000001153000000000000022405 0ustar00zuulzuul00000000000000.. -*- rst -*- ======================================== Usage reports (os-simple-tenant-usage) ======================================== Reports usage statistics of compute and storage resources periodically for an individual tenant or all tenants. The usage statistics will include all instances' CPU, memory and local disk during a specific period. Microversion 2.40 added pagination (and ``next`` links) to the usage statistics via optional ``limit`` and ``marker`` query parameters. If ``limit`` isn't provided, the configurable ``max_limit`` will be used which currently defaults to 1000. Older microversions will not accept these new paging query parameters, but they will start to silently limit by ``max_limit``. .. code-block:: none /os-simple-tenant-usage?limit={limit}&marker={instance_uuid} /os-simple-tenant-usage/{tenant_id}?limit={limit}&marker={instance_uuid} .. note:: A tenant's usage statistics may span multiple pages when the number of instances exceeds ``limit``, and API consumers will need to stitch together the aggregate results if they still want totals for all instances in a specific time window, grouped by tenant. List Tenant Usage Statistics For All Tenants ============================================ .. rest_method:: GET /os-simple-tenant-usage Lists usage statistics for all tenants. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - detailed: detailed_simple_tenant_usage - end: end_simple_tenant_usage - start: start_simple_tenant_usage - limit: usage_limit - marker: usage_marker Response -------- .. rest_parameters:: parameters.yaml - tenant_usages: tenant_usages - start: start_simple_tenant_usage_body - stop: stop_simple_tenant_usage - tenant_id: tenant_id_body - total_hours: total_hours - total_local_gb_usage: total_local_gb_usage - total_memory_mb_usage: total_memory_mb_usage - total_vcpus_usage: total_vcpus_usage - server_usages: server_usages_optional - server_usages.ended_at: ended_at_optional - server_usages.flavor: flavor_name_optional - server_usages.hours: hours_optional - server_usages.instance_id: server_id_optional - server_usages.local_gb: local_gb_simple_tenant_usage_optional - server_usages.memory_mb: memory_mb_simple_tenant_usage_optional - server_usages.name: server_name_optional - server_usages.started_at: started_at_optional - server_usages.state: vm_state_optional - server_usages.tenant_id: tenant_id_optional - server_usages.uptime: uptime_simple_tenant_usage_optional - server_usages.vcpus: vcpus_optional - tenant_usages_links: usage_links **Example List Tenant Usage For All Tenants (v2.40): JSON response** If the ``detailed`` query parameter is not specified or is set to other than 1 (e.g. ``detailed=0``), the response is as follows: .. 
literalinclude:: ../../doc/api_samples/os-simple-tenant-usage/v2.40/simple-tenant-usage-get.json :language: javascript If the ``detailed`` query parameter is set to one (``detailed=1``), the response includes ``server_usages`` information for each tenant. The response is as follows: .. literalinclude:: ../../doc/api_samples/os-simple-tenant-usage/v2.40/simple-tenant-usage-get-detail.json :language: javascript Show Usage Statistics For Tenant ================================ .. rest_method:: GET /os-simple-tenant-usage/{tenant_id} Shows usage statistics for a tenant. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - tenant_id: tenant_id - end: end_simple_tenant_usage - start: start_simple_tenant_usage - limit: usage_limit - marker: usage_marker Response -------- .. rest_parameters:: parameters.yaml - tenant_usage: tenant_usage - server_usages: server_usages - server_usages.ended_at: ended_at - server_usages.flavor: flavor_name - server_usages.hours: hours - server_usages.instance_id: server_id - server_usages.local_gb: local_gb_simple_tenant_usage - server_usages.memory_mb: memory_mb_simple_tenant_usage - server_usages.name: server_name - server_usages.started_at: started_at - server_usages.state: OS-EXT-STS:vm_state - server_usages.tenant_id: tenant_id_body - server_usages.uptime: uptime_simple_tenant_usage - server_usages.vcpus: vcpus - start: start_simple_tenant_usage_body - stop: stop_simple_tenant_usage - tenant_id: tenant_id_body - total_hours: total_hours - total_local_gb_usage: total_local_gb_usage - total_memory_mb_usage: total_memory_mb_usage - total_vcpus_usage: total_vcpus_usage - tenant_usage_links: usage_links **Example Show Usage Details For Tenant (v2.40): JSON response** .. literalinclude:: ../../doc/api_samples/os-simple-tenant-usage/v2.40/simple-tenant-usage-get-specific.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-tenant-network.inc0000664000175000017500000000724700000000000021515 0ustar00zuulzuul00000000000000.. -*- rst -*- ==================================================== Project networks (os-tenant-networks) (DEPRECATED) ==================================================== .. warning:: These APIs are proxy calls to the Network service. Nova has deprecated all the proxy APIs and users should use the native APIs instead. These will fail with a 404 starting from microversion 2.36. See: `Relevant Network APIs `__. Creates, lists, shows information for, and deletes project networks. List Project Networks ===================== .. rest_method:: GET /os-tenant-networks Lists all project networks. Policy defaults enable only users with the administrative role or the owner of the network to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Response -------- **Example List Project Networks: JSON response** .. literalinclude:: ../../doc/api_samples/os-tenant-networks/networks-list-res.json :language: javascript Create Project Network ====================== .. rest_method:: POST /os-tenant-networks .. note:: This API is only implemented for the nova-network service and will result in a 503 error response if the cloud is using the Neutron networking service. Use the Neutron ``networks`` API to create a new network. 
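On Neutron-based clouds the equivalent create request therefore goes to the Networking service rather than to this deprecated proxy. The following is only a rough sketch using the generic ``requests`` library; the Neutron endpoint, token and network name are placeholder assumptions, not values defined by this reference.

.. code-block:: python

   import requests

   # Placeholder values; substitute the real ones for your deployment.
   NETWORK_ENDPOINT = "http://controller:9696/v2.0"
   TOKEN = "<keystone-token>"

   body = {"network": {"name": "demo-net", "admin_state_up": True}}

   resp = requests.post(
       f"{NETWORK_ENDPOINT}/networks",
       headers={"X-Auth-Token": TOKEN},
       json=body,
   )
   resp.raise_for_status()
   print(resp.json()["network"]["id"])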
Creates a project network. Policy defaults enable only users with the administrative role to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), conflict(409), gone(410), serviceUnavailable(503) **Example Create Project Network: JSON request** .. literalinclude:: ../../doc/api_samples/os-tenant-networks/networks-post-req.json :language: javascript Response -------- **Example Create Project Network: JSON response** .. literalinclude:: ../../doc/api_samples/os-tenant-networks/networks-post-res.json :language: javascript Show Project Network Details ============================ .. rest_method:: GET /os-tenant-networks/{network_id} Shows details for a project network. Policy defaults enable only users with the administrative role or the owner of the network to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - network_id: network_id Response -------- **Example Show Project Network Details: JSON response** .. literalinclude:: ../../doc/api_samples/os-tenant-networks/networks-post-res.json :language: javascript Delete Project Network ====================== .. rest_method:: DELETE /os-tenant-networks/{network_id} .. note:: This API is only implemented for the nova-network service and will result in a 500 error response if the cloud is using the Neutron networking service. Use the Neutron ``networks`` API to delete an existing network. Deletes a project network. Policy defaults enable only users with the administrative role or the owner of the network to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), conflict(409), gone(410) Request ------- .. rest_parameters:: parameters.yaml - network_id: network_id Response -------- There is no body content for the response of a successful DELETE query. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-virtual-interfaces.inc0000664000175000017500000000421600000000000022335 0ustar00zuulzuul00000000000000.. -*- rst -*- ============================================================ Servers virtual interfaces (servers, os-virtual-interfaces) ============================================================ Lists virtual interfaces for a server. .. warning:: Since this API is only implemented for the nova-network, the API is deprecated from the Microversion 2.44. This API will fail with a 404 starting from microversion 2.44. It was removed in the 18.0.0 Rocky release. To query the server attached neutron interface, please use the API ``GET /servers/{server_uuid}/os-interface``. .. note:: This API is only implemented for the nova-network service and will result in a 400 error response if the cloud is using the Neutron networking service. Use the Neutron ``ports`` API to list ports for a given server by filtering ports based on the port ``device_id`` which is the ``{server_id}``. List Virtual Interfaces ======================= .. rest_method:: GET /servers/{server_id}/os-virtual-interfaces Lists the virtual interfaces for an instance. 
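As the note above points out, on clouds using Neutron the same information is obtained by listing Neutron ports filtered on ``device_id``. A minimal sketch using the generic ``requests`` library is shown below; the Neutron endpoint, token and server UUID are placeholder assumptions, not values defined by this reference.

.. code-block:: python

   import requests

   # Placeholder values; substitute the real ones for your deployment.
   NETWORK_ENDPOINT = "http://controller:9696/v2.0"
   TOKEN = "<keystone-token>"
   SERVER_ID = "9aea0b43-b2b0-4c4e-9b45-12fc93b97b34"

   resp = requests.get(
       f"{NETWORK_ENDPOINT}/ports",
       headers={"X-Auth-Token": TOKEN},
       params={"device_id": SERVER_ID},
   )
   resp.raise_for_status()

   # Each port carries the MAC address and network ID that the deprecated
   # os-virtual-interfaces call used to report.
   for port in resp.json()["ports"]:
       print(port["id"], port["mac_address"], port["network_id"])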
Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), gone(410) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - limit: limit_simple - offset: offset_simple Response -------- .. rest_parameters:: parameters.yaml - virtual_interfaces: virtual_interfaces - id: virtual_interface_id - mac_address: mac_address - net_id: net_id_resp_2_12 .. note:: The API v2 returns the network ID in the "OS-EXT-VIF-NET:net_id" response attribute. But API v2.1 base version does not return the network ID. Network ID has been added in v2.12 micro-version and returns it in the "net_id" attribute. **Example List Virtual Interfaces: JSON response** .. literalinclude:: ../../doc/api_samples/os-virtual-interfaces/v2.12/vifs-list-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/api-ref/source/os-volume-attachments.inc0000664000175000017500000002002700000000000022344 0ustar00zuulzuul00000000000000.. -*- rst -*- =================================================================== Servers with volume attachments (servers, os-volume\_attachments) =================================================================== Attaches volumes that are created through the volume API to server instances. Also, lists volume attachments for a server, shows details for a volume attachment, and detaches a volume. List volume attachments for an instance ======================================= .. rest_method:: GET /servers/{server_id}/os-volume_attachments List volume attachments for an instance. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - limit: limit_simple - offset: offset_simple Response -------- .. rest_parameters:: parameters.yaml - volumeAttachments: volumeAttachments - id: attachment_id_required - serverId: server_id - volumeId: volumeId_resp - device: attachment_device_resp - tag: device_tag_bdm_attachment_resp - delete_on_termination: delete_on_termination_attachments_resp **Example List volume attachments for an instance: JSON response** .. literalinclude:: ../../doc/api_samples/os-volumes/list-volume-attachments-resp.json :language: javascript **Example List tagged volume attachments for an instance (v2.79): JSON response** .. literalinclude:: ../../doc/api_samples/os-volumes/v2.79/list-volume-attachments-resp.json :language: javascript Attach a volume to an instance ============================== .. rest_method:: POST /servers/{server_id}/os-volume_attachments Attach a volume to an instance. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) .. note:: From v2.20 attach a volume to an instance in SHELVED or SHELVED_OFFLOADED state is allowed. .. note:: From v2.60, attaching a multiattach volume to multiple instances is supported for instances that are not SHELVED_OFFLOADED. The ability to actually support a multiattach volume depends on the volume type and compute hosting the instance. Request ------- .. 
rest_parameters:: parameters.yaml - server_id: server_id_path - volumeAttachment: volumeAttachment_post - volumeId: volumeId - device: device - tag: device_tag_bdm_attachment - delete_on_termination: delete_on_termination_attachments_req **Example Attach a volume to an instance: JSON request** .. literalinclude:: ../../doc/api_samples/os-volumes/attach-volume-to-server-req.json :language: javascript **Example Attach a volume to an instance and tag it (v2.49): JSON request** .. literalinclude:: ../../doc/api_samples/os-volumes/v2.49/attach-volume-to-server-req.json :language: javascript **Example Attach a volume to an instance with "delete_on_termination" (v2.79): JSON request** .. literalinclude:: ../../doc/api_samples/os-volumes/v2.79/attach-volume-to-server-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - volumeAttachment: volumeAttachment - device: device_resp - id: attachment_id_required - serverId: server_id - volumeId: volumeId_resp - tag: device_tag_bdm_attachment_resp - delete_on_termination: delete_on_termination_attachments_resp **Example Attach a volume to an instance: JSON response** .. literalinclude:: ../../doc/api_samples/os-volumes/attach-volume-to-server-resp.json :language: javascript **Example Attach a tagged volume to an instance (v2.70): JSON response** .. literalinclude:: ../../doc/api_samples/os-volumes/v2.70/attach-volume-to-server-resp.json :language: javascript **Example Attach a volume with "delete_on_termination" (v2.79): JSON response** .. literalinclude:: ../../doc/api_samples/os-volumes/v2.79/attach-volume-to-server-resp.json :language: javascript Show a detail of a volume attachment ==================================== .. rest_method:: GET /servers/{server_id}/os-volume_attachments/{volume_id} Show a detail of a volume attachment. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - volume_id: volume_id_attached_path Response -------- .. rest_parameters:: parameters.yaml - volumeAttachment: volumeAttachment - id: attachment_id_required - serverId: server_id - volumeId: volumeId_resp - device: attachment_device_resp - tag: device_tag_bdm_attachment_resp - delete_on_termination: delete_on_termination_attachments_resp **Example Show a detail of a volume attachment: JSON response** .. literalinclude:: ../../doc/api_samples/os-volumes/volume-attachment-detail-resp.json :language: javascript **Example Show a detail of a tagged volume attachment (v2.79): JSON response** .. literalinclude:: ../../doc/api_samples/os-volumes/v2.79/volume-attachment-detail-resp.json :language: javascript Update a volume attachment ========================== .. rest_method:: PUT /servers/{server_id}/os-volume_attachments/{volume_id} Update a volume attachment. .. note:: This action only valid when the server is in ACTIVE, PAUSED and RESIZED state, or a conflict(409) error will be returned. .. warning:: When updating volumeId, this API is typically meant to only be used as part of a larger orchestrated volume migration operation initiated in the block storage service via the ``os-retype`` or ``os-migrate_volume`` volume actions. Direct usage of this API to update volumeId is not recommended and may result in needing to hard reboot the server to update details within the guest such as block storage serial IDs. Furthermore, updating volumeId via this API is only implemented by `certain compute drivers`_. .. 
_certain compute drivers: https://docs.openstack.org/nova/latest/user/support-matrix.html#operation_swap_volume Policy default role is 'rule:system_admin_or_owner', its scope is [system, project], which allow project members or system admins to change the fields of an attached volume of a server. Policy defaults enable only users with the administrative role to change ``volumeId`` via this operation. Cloud providers can change these permissions through the ``policy.json`` file. Updating, or what is commonly referred to as "swapping", volume attachments with volumes that have more than one read/write attachment, is not supported. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - volume_id: volume_id_swap_src - volumeAttachment: volumeAttachment_put - volumeId: volumeId_swap - delete_on_termination: delete_on_termination_put_req - device: attachment_device_put_req - serverId: attachment_server_id_put_req - tag: device_tag_bdm_attachment_put_req - id: attachment_id_put_req .. note:: Other than ``volumeId``, as of v2.85 only ``delete_on_termination`` may be changed from the current value. **Example Update a volume attachment (v2.85): JSON request** .. literalinclude:: ../../doc/api_samples/os-volumes/v2.85/update-volume-attachment-delete-flag-req.json :language: javascript Response -------- No body is returned on successful request. Detach a volume from an instance ================================ .. rest_method:: DELETE /servers/{server_id}/os-volume_attachments/{volume_id} Detach a volume from an instance. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) .. note:: From v2.20 detach a volume from an instance in SHELVED or SHELVED_OFFLOADED state is allowed. Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - volume_id: volume_id_to_detach_path Response -------- No body is returned on successful request. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/os-volumes.inc0000664000175000017500000002201100000000000020211 0ustar00zuulzuul00000000000000.. -*- rst -*- ========================================================= Volume extension (os-volumes, os-snapshots) (DEPRECATED) ========================================================= .. warning:: These APIs are proxy calls to the Volume service. Nova has deprecated all the proxy APIs and users should use the native APIs instead. These will fail with a 404 starting from microversion 2.36. See: `Relevant Volume APIs `__. Manages volumes and snapshots for use with the Compute API. Lists, shows details, creates, and deletes volumes and snapshots. List Volumes ============ .. rest_method:: GET /os-volumes Lists the volumes associated with the account. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - limit: limit_simple - offset: offset_simple Response -------- .. 
rest_parameters:: parameters.yaml - volumes: volumes - attachments: volumeAttachments - attachments.device: attachment_device_resp - attachments.id: attachment_id_resp - attachments.serverId: attachment_server_id_resp - attachments.volumeId: attachment_volumeId_resp - availabilityZone: OS-EXT-AZ:availability_zone - createdAt: created - displayDescription: display_description - displayName: display_name - id: volume_id_resp - metadata: metadata_object - size: size - snapshotId: snapshot_id - status: volume_status - volumeType: volume_type | **Example List Volumes** .. literalinclude:: ../../doc/api_samples/os-volumes/os-volumes-index-resp.json :language: javascript Create Volume ============= .. rest_method:: POST /os-volumes Creates a new volume. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - volume: volume - size: size - availability_zone: OS-EXT-AZ:availability_zone_optional - display_name: display_name_optional - display_description: display_description_optional - metadata: metadata - volume_type: volume_type_optional - snapshot_id: snapshot_id_optional **Example Create Volume** .. literalinclude:: ../../doc/api_samples/os-volumes/os-volumes-post-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - volume: volume - attachments: volumeAttachments - attachments.device: attachment_device_resp - attachments.id: attachment_id_resp - attachments.serverId: attachment_server_id_resp - attachments.volumeId: attachment_volumeId_resp - availabilityZone: OS-EXT-AZ:availability_zone - createdAt: created - displayName: display_name - displayDescription: display_description - id: volume_id_resp - metadata: metadata_object - size: size - snapshotId: snapshot_id - status: volume_status - volumeType: volume_type | **Example Create Volume** .. literalinclude:: ../../doc/api_samples/os-volumes/os-volumes-post-resp.json :language: javascript List Volumes With Details ========================= .. rest_method:: GET /os-volumes/detail Lists all volumes with details. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - limit: limit_simple - offset: offset_simple Response -------- .. rest_parameters:: parameters.yaml - volumes: volumes - attachments: volumeAttachments - attachments.device: attachment_device_resp - attachments.id: attachment_id_resp - attachments.serverId: attachment_server_id_resp - attachments.volumeId: attachment_volumeId_resp - availabilityZone: OS-EXT-AZ:availability_zone - createdAt: created - displayName: display_name - displayDescription: display_description - id: volume_id_resp - metadata: metadata_object - size: size - snapshotId: snapshot_id - status: volume_status - volumeType: volume_type | **Example List Volumes With Details** .. literalinclude:: ../../doc/api_samples/os-volumes/os-volumes-detail-resp.json :language: javascript Show Volume Details =================== .. rest_method:: GET /os-volumes/{volume_id} Shows details for a given volume. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - volume_id: volume_id_path Response -------- .. 
rest_parameters:: parameters.yaml - volume: volume - attachments: volumeAttachments - attachment.device: attachment_device_resp - attachments.id: attachment_id_resp - attachments.serverId: attachment_server_id_resp - attachments.volumeId: attachment_volumeId_resp - availabilityZone: OS-EXT-AZ:availability_zone - createdAt: created - displayName: display_name - displayDescription: display_description - id: volume_id_resp - metadata: metadata_object - size: size - snapshotId: snapshot_id - status: volume_status - volumeType: volume_type | **Example Show Volume Details** .. literalinclude:: ../../doc/api_samples/os-volumes/os-volumes-get-resp.json :language: javascript Delete Volume ============= .. rest_method:: DELETE /os-volumes/{volume_id} Deletes a volume. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - volume_id: volume_id_path Response -------- There is no body content for the response of a successful DELETE query List Snapshots ============== .. rest_method:: GET /os-snapshots Lists snapshots. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - limit: limit_simple - offset: offset_simple Response -------- .. rest_parameters:: parameters.yaml - snapshots: snapshots - id: snapshot_id - createdAt: created - displayName: snapshot_name - displayDescription: snapshot_description - size: size - status: snapshot_status - volumeId: volume_id | **Example List Snapshots** .. literalinclude:: ../../doc/api_samples/os-volumes/snapshots-list-resp.json :language: javascript Create Snapshot =============== .. rest_method:: POST /os-snapshots Creates a new snapshot. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - snapshot: snapshot - volume_id: volume_id - display_description: snapshot_description_optional - display_name: snapshot_name_optional - force: force_snapshot **Example Create Snapshot** .. literalinclude:: ../../doc/api_samples/os-volumes/snapshot-create-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - snapshot: snapshot - id: snapshot_id - createdAt: created - displayName: snapshot_name - displayDescription: snapshot_description - volumeId: volume_id - size: size - status: snapshot_status **Example Create Snapshot** .. literalinclude:: ../../doc/api_samples/os-volumes/snapshot-create-resp.json :language: javascript List Snapshots With Details =========================== .. rest_method:: GET /os-snapshots/detail Lists all snapshots with details. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - limit: limit_simple - offset: offset_simple Response -------- .. rest_parameters:: parameters.yaml - snapshots: snapshots - id: snapshot_id - createdAt: created - displayName: snapshot_name - displayDescription: snapshot_description - volumeId: volume_id - size: size - status: snapshot_status | **Example List Snapshots With Details** .. literalinclude:: ../../doc/api_samples/os-volumes/snapshots-detail-resp.json :language: javascript Show Snapshot Details ===================== .. rest_method:: GET /os-snapshots/{snapshot_id} Shows details for a given snapshot. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. 
rest_parameters:: parameters.yaml - snapshot_id: snapshot_id_path Response -------- .. rest_parameters:: parameters.yaml - snapshot: snapshot - id: snapshot_id - createdAt: created - displayName: snapshot_name - displayDescription: snapshot_description - volumeId: volume_id - size: size - status: snapshot_status | **Example Show Snapshot Details** .. literalinclude:: ../../doc/api_samples/os-volumes/snapshots-show-resp.json :language: javascript Delete Snapshot =============== .. rest_method:: DELETE /os-snapshots/{snapshot_id} Deletes a snapshot from the account. This operation is asynchronous. You must list snapshots repeatedly to determine whether the snapshot was deleted. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - snapshot_id: snapshot_id_path Response -------- There is no body content for the response of a successful DELETE query ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/api-ref/source/parameters.yaml0000664000175000017500000062463200000000000020455 0ustar00zuulzuul00000000000000# variables in header image_location: description: | The image location URL of the image or backup created, HTTP header "Location: " will be returned. .. note:: The URL returned may not be accessible to users and should not be relied upon. Use microversion 2.45 or simply parse the image ID out of the URL in the Location response header. in: header required: true type: string max_version: 2.44 server_location: description: | The location URL of the server, HTTP header "Location: " will be returned. in: header required: true type: string tag_location: description: | The location of the tag. It's individual tag URL which can be used for checking the existence of the tag on the server or deleting the tag from the server. in: header required: true type: string x-compute-request-id_resp: description: | The local request ID, which is a unique ID generated automatically for tracking each request to nova. It is associated with the request and appears in the log lines for that request. By default, the middleware configuration ensures that the local request ID appears in the log files. .. note:: This header exists for backward compatibility. in: header required: true type: string x-openstack-request-id_req: description: | The global request ID, which is a unique common ID for tracking each request in OpenStack components. The format of the global request ID must be ``req-`` + UUID (UUID4). If not in accordance with the format, it is ignored. It is associated with the request and appears in the log lines for that request. By default, the middleware configuration ensures that the global request ID appears in the log files. in: header required: false type: string min_version: 2.46 x-openstack-request-id_resp: description: | The local request ID, which is a unique ID generated automatically for tracking each request to nova. It is associated with the request and appears in the log lines for that request. By default, the middleware configuration ensures that the local request ID appears in the log files. in: header required: true type: string min_version: 2.46 # variables in path agent_build_id: description: | The id of the agent build. in: path required: true type: string aggregate_id: description: | The aggregate ID. 
in: path required: true type: integer api_version: in: path required: true type: string description: > The API version as returned in the links from the ``GET /`` call. before_timestamp: description: | Filters the response by the date and time before which to list usage audits. The date and time stamp format is as follows: :: CCYY-MM-DD hh:mm:ss.NNNNNN For example, ``2015-08-27 09:49:58`` or ``2015-08-27 09:49:58.123456``. in: path required: true type: string cell_id: description: | The UUID of the cell. in: path required: true type: string console_id: description: | The UUID of the console. in: path required: true type: string console_token: description: | Console authentication token. in: path required: true type: string # Used in the request path for PUT /os-services/disable-log-reason before # microversion 2.53. disabled_reason: description: | The reason for disabling a service. in: path required: false type: string domain: description: | The registered DNS domain that the DNS drivers publish. in: path required: true type: string fixed_ip_path: description: | The fixed IP of interest to you. in: path required: true type: string flavor_extra_spec_key: description: | The extra spec key for the flavor. in: path required: true type: string flavor_id: description: | The ID of the flavor. in: path required: true type: string floating_ip_id: description: | The ID of the floating IP address. in: path required: true type: string host_name: description: | The name of the host. in: path required: true type: string hypervisor_hostname_pattern: description: | The hypervisor host name or a portion of it. The hypervisor hosts are selected with the host name matching this pattern. in: path required: true type: string hypervisor_id: description: | The ID of the hypervisor. in: path required: true type: integer max_version: 2.52 hypervisor_id_uuid: description: | The ID of the hypervisor as a UUID. in: path required: true type: string min_version: 2.53 image_id: description: | The UUID of the image. in: path required: true type: string instance_id: description: | The UUID of the instance. in: path required: true type: string ip: description: | The IP address. in: path required: true type: string key: description: | The metadata item key, as a string. Maximum length is 255 characters. in: path required: true type: string keypair_name_path: description: | The keypair name. in: path required: true type: string migration_id_path: description: | The ID of the server migration. in: path required: true type: integer network_id: description: | The UUID of the network. in: path required: true type: string network_label: description: | The network label, such as ``public`` or ``private``. in: path required: true type: string node_id: description: | The node ID. in: path required: true type: string port_id_path: description: | The UUID of the port. in: path required: true type: string quota_class_id: "a_class_id description: | The ID of the quota class. Nova supports the ``default`` Quota Class only. in: path required: true type: string request_id: description: | The ID of the request. in: path required: true type: string security_group_default_rule_id: description: | The UUID of the security group rule. in: path required: true type: string security_group_id: description: | The ID of the security group. in: path required: true type: string security_group_rule_id: description: | The ID of the security group rule. in: path required: true type: string server_group_id: description: | The UUID of the server group. 
in: path required: true type: string server_id_path: description: | The UUID of the server. in: path required: true type: string service_id_path_2_52: description: | The id of the service. .. note:: This may not uniquely identify a service in a multi-cell deployment. in: path required: true type: integer max_version: 2.52 service_id_path_2_53: description: | The id of the service as a uuid. This uniquely identifies the service in a multi-cell deployment. in: path required: true type: string min_version: 2.53 service_id_path_2_53_no_version: description: | The id of the service as a uuid. This uniquely identifies the service in a multi-cell deployment. in: path required: true type: string snapshot_id_path: description: | The UUID of the snapshot. in: path required: true type: string tag: description: | The tag as a string. in: path required: true type: string tenant_id: description: | The UUID of the tenant in a multi-tenancy cloud. in: path required: true type: string volume_id_attached_path: description: | The UUID of the attached volume. in: path required: true type: string volume_id_path: description: | The unique ID for a volume. in: path required: true type: string volume_id_swap_src: description: | The UUID of the volume being replaced. in: path required: true type: string volume_id_to_detach_path: description: | The UUID of the volume to detach. in: path required: true type: string # variables in query access_ip_v4_query_server: in: query required: false type: string description: | Filter server list result by IPv4 address that should be used to access the server. access_ip_v6_query_server: in: query required: false type: string description: | Filter server list result by IPv6 address that should be used to access the server. all_projects: description: | Administrator only. Lists server groups for all projects. For example: ``GET /os-server-groups?all_projects=True`` If you specify a tenant ID for a non-administrative user with this query parameter, the call lists all server groups for the tenant, or project, rather than for all projects. Value of this query parameter is not checked, only presence is considered as request for all projects. in: query required: false type: string all_tenants: description: | Specify the ``all_tenants`` query parameter to ping instances for all tenants. By default this is only allowed by admin users. Value of this query parameter is not checked, only presence is considered as request for all tenants. in: query required: false type: string all_tenants_query: description: | Specify the ``all_tenants`` query parameter to list all instances for all projects. By default this is only allowed by administrators. If the value of this parameter is not specified, it is treated as ``True``. If the value is specified, ``1``, ``t``, ``true``, ``on``, ``y`` and ``yes`` are treated as ``True``. ``0``, ``f``, ``false``, ``off``, ``n`` and ``no`` are treated as ``False``. (They are case-insensitive.) in: query required: false type: boolean all_tenants_sec_grp_query: description: | Specify the ``all_tenants`` query parameter to list all security groups for all projects. This is only allowed for admin users. Value of this query parameter is not checked, only presence is considered as request for all tenants. in: query required: false type: string availability_zone_query_server: description: | Filter the server list result by server availability zone. This parameter is restricted to administrators until microversion 2.83. 
If non-admin users specify this parameter on a microversion less than 2.83, it will be ignored. in: query required: false type: string binary_query: description: | Filter the service list result by binary name of the service. in: query required: false type: string changes-since: description: | Filters the response by a date and time when the image last changed status. Use this query parameter to check for changes since a previous request rather than re-downloading and re-parsing the full status at each polling interval. If data has changed, the call returns only the items changed since the ``changes-since`` time. If data has not changed since the ``changes-since`` time, the call returns an empty list. To enable you to keep track of changes, this filter also displays images that were deleted if the ``changes-since`` value specifies a date in the last 30 days. Items deleted more than 30 days ago might be returned, but it is not guaranteed. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. If you omit the time zone, the UTC time zone is assumed. in: query required: false type: string changes_before_instance_action: description: | Filters the response by a date and time stamp when the instance actions last changed. Those instances that changed before or equal to the specified date and time stamp are returned. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. If you omit the time zone, the UTC time zone is assumed. When both ``changes-since`` and ``changes-before`` are specified, the value of the ``changes-before`` must be later than or equal to the value of the ``changes-since`` otherwise API will return 400. in: query required: false type: string min_version: 2.66 changes_before_migration: description: | Filters the response by a date and time stamp when the migration last changed. Those migrations that changed before or equal to the specified date and time stamp are returned. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. If you omit the time zone, the UTC time zone is assumed. When both ``changes-since`` and ``changes-before`` are specified, the value of the ``changes-before`` must be later than or equal to the value of the ``changes-since`` otherwise API will return 400. in: query required: false type: string min_version: 2.66 changes_before_server: description: | Filters the response by a date and time stamp when the server last changed. Those servers that changed before or equal to the specified date and time stamp are returned. To help keep track of changes this may also return recently deleted servers. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. If you omit the time zone, the UTC time zone is assumed. When both ``changes-since`` and ``changes-before`` are specified, the value of the ``changes-before`` must be later than or equal to the value of the ``changes-since`` otherwise API will return 400. 
in: query required: false type: string min_version: 2.66 changes_since_instance_action: description: | Filters the response by a date and time stamp when the instance action last changed. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. If you omit the time zone, the UTC time zone is assumed. When both ``changes-since`` and ``changes-before`` are specified, the value of the ``changes-since`` must be earlier than or equal to the value of the ``changes-before`` otherwise API will return 400. in: query required: false type: string min_version: 2.58 changes_since_migration: description: | Filters the response by a date and time stamp when the migration last changed. Those changed after the specified date and time stamp are returned. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. If you omit the time zone, the UTC time zone is assumed. When both ``changes-since`` and ``changes-before`` are specified, the value of the ``changes-since`` must be earlier than or equal to the value of the ``changes-before`` otherwise API will return 400. in: query required: false type: string min_version: 2.59 changes_since_server: description: | Filters the response by a date and time stamp when the server last changed status. To help keep track of changes this may also return recently deleted servers. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. If you omit the time zone, the UTC time zone is assumed. When both ``changes-since`` and ``changes-before`` are specified, the value of the ``changes-since`` must be earlier than or equal to the value of the ``changes-before`` otherwise API will return 400. in: query required: false type: string config_drive_query_server: description: | Filter the server list result by the config drive setting of the server. This parameter is restricted to administrators until microversion 2.83. If non-admin users specify this parameter on a microversion less than 2.83, it will be ignored. in: query required: false type: string created_at_query_server: description: | Filter the server list result by a date and time stamp when server was created. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. If you omit the time zone, the UTC time zone is assumed. This parameter is restricted to administrators until microversion 2.83. If non-admin users specify this parameter on a microversion less than 2.83, it will be ignored. in: query required: false type: string delete_info: description: | Information for snapshot deletion. Include the ID of the associated volume. For example: .. code-block:: javascript DELETE /os-assisted-volume-snapshots/421752a6-acf6-4b2d-bc7a-119f9148cd8c?delete_info='{"volume_id": "521752a6-acf6-4b2d-bc7a-119f9148cd8c"}' in: query required: true type: string deleted_query: in: query required: false type: boolean description: | Show deleted items only. 
In some circumstances deleted items will still be accessible via the backend database, however there is no contract on how long, so this parameter should be used with caution. ``1``, ``t``, ``true``, ``on``, ``y`` and ``yes`` are treated as ``True`` (case-insensitive). Other than them are treated as ``False``. This parameter is only valid when specified by administrators. If non-admin users specify this parameter, it is ignored. description_query_server: description: | Filter the server list result by description. This parameter is only valid when specified by administrators. If non-admin users specify this parameter, it is ignored. .. note:: ``display_description`` can also be requested which is alias of ``description`` but that is not recommended to use as that will be removed in future. in: query required: false type: string detailed_simple_tenant_usage: description: | Specify the ``detailed=1`` query parameter to get detail information ('server_usages' information). in: query required: false type: integer disk_config_query_server: description: | Filter the server list result by the ``disk_config`` setting of the server, Valid values are: - ``AUTO`` - ``MANUAL`` This parameter is only valid when specified by administrators. If non-admin users specify this parameter, it is ignored. in: query required: false type: string end_simple_tenant_usage: description: | The ending time to calculate usage statistics on compute and storage resources. The date and time stamp format is any of the following ones: :: CCYY-MM-DDThh:mm:ss For example, ``2015-08-27T09:49:58``. :: CCYY-MM-DDThh:mm:ss.NNNNNN For example, ``2015-08-27T09:49:58.123456``. :: CCYY-MM-DD hh:mm:ss.NNNNNN For example, ``2015-08-27 09:49:58.123456``. If you omit this parameter, the current time is used. in: query required: false type: string exclude: description: | Specify ``exclude=uuid[,uuid...]`` to exclude the instances from the results. in: query required: false type: string flavor_is_public_query: in: query required: false type: string description: | This parameter is only applicable to users with the administrative role. For all other non-admin users, the parameter is ignored and only public flavors will be returned. Filters the flavor list based on whether the flavor is public or private. If the value of this parameter is not specified, it is treated as ``True``. If the value is specified, ``1``, ``t``, ``true``, ``on``, ``y`` and ``yes`` are treated as ``True``. ``0``, ``f``, ``false``, ``off``, ``n`` and ``no`` are treated as ``False`` (they are case-insensitive). If the value is ``None`` (case-insensitive) both public and private flavors will be listed in a single request. flavor_query: description: | Filters the response by a flavor, as a UUID. A flavor is a combination of memory, disk size, and CPUs. in: query required: false type: string host_query_server: description: | Filter the server list result by the host name of compute node. This parameter is only valid when specified by administrators. If non-admin users specify this parameter, it is ignored. in: query required: false type: string host_query_service: description: | Filter the service list result by the host name. in: query required: false type: string hostname_query_server: description: | Filter the server list result by the host name of server. This parameter is only valid when specified by administrators. If non-admin users specify this parameter, it is ignored. 
in: query required: false type: string hypervisor_hostname_pattern_query: description: | The hypervisor host name or a portion of it. The hypervisor hosts are selected with the host name matching this pattern. .. note:: ``limit`` and ``marker`` query parameters for paging are not supported when listing hypervisors using a hostname pattern. Also, ``links`` will not be returned in the response when using this query parameter. in: query required: false type: string min_version: 2.53 hypervisor_limit: description: | Requests a page size of items. Returns a number of items up to a limit value. Use the ``limit`` parameter to make an initial limited request and use the ID of the last-seen item from the response as the ``marker`` parameter value in a subsequent limited request. in: query required: false type: integer min_version: 2.33 hypervisor_marker: description: | The ID of the last-seen item. Use the ``limit`` parameter to make an initial limited request and use the ID of the last-seen item from the response as the ``marker`` parameter value in a subsequent limited request. in: query required: false type: integer min_version: 2.33 max_version: 2.52 hypervisor_marker_uuid: description: | The ID of the last-seen item as a UUID. Use the ``limit`` parameter to make an initial limited request and use the ID of the last-seen item from the response as the ``marker`` parameter value in a subsequent limited request. in: query required: false type: string min_version: 2.53 hypervisor_query: description: | Filters the response by a hypervisor type. in: query required: false type: string hypervisor_with_servers_query: description: | Include all servers which belong to each hypervisor in the response output. in: query required: false type: boolean min_version: 2.53 image_name_query: description: | Filters the response by an image name, as a string. in: query required: false type: string image_query: description: | Filters the response by an image, as a UUID. .. note:: 'image_ref' can also be requested which is alias of 'image' but that is not recommended to use as that will be removed in future. in: query required: false type: string image_server_query: description: | Filters the response by a server, as a URL. format: uri in: query required: false type: string image_status_query: description: | Filters the response by an image status, as a string. For example, ``ACTIVE``. in: query required: false type: string image_type_query: description: | Filters the response by an image type. For example, ``snapshot`` or ``backup``. in: query required: false type: string include: description: | Specify ``include=uuid[,uuid...]`` to include the instances in the results. in: query required: false type: string instance_action_limit: description: | Requests a page size of items. Returns a number of items up to a limit value. Use the ``limit`` parameter to make an initial limited request and use the last-seen item from the response as the ``marker`` parameter value in a subsequent limited request. in: query required: false type: integer min_version: 2.58 instance_action_marker: description: | The ``request_id`` of the last-seen instance action. Use the ``limit`` parameter to make an initial limited request and use the last-seen item from the response as the ``marker`` parameter value in a subsequent limited request. in: query required: false type: string min_version: 2.58 ip6_query: description: | An IPv6 address to filter results by. Up to microversion 2.4, this parameter is only valid when specified by administrators. 
If non-admin users specify this parameter, it is ignored. Starting from microversion 2.5, this parameter is valid for no-admin users as well as administrators. in: query required: false type: string ip_query: description: | An IPv4 address to filter results by. in: query required: false type: string kernel_id_query_server: in: query required: false type: string description: | Filter the server list result by the UUID of the kernel image when using an AMI. This parameter is only valid when specified by administrators. If non-admin users specify this parameter, it is ignored. key_name_query_server: description: | Filter the server list result by keypair name. This parameter is restricted to administrators until microversion 2.83. If non-admin users specify this parameter on a microversion less than 2.83, it will be ignored. in: query required: false type: string keypair_limit: description: | Requests a page size of items. Returns a number of items up to a limit value. Use the ``limit`` parameter to make an initial limited request and use the last-seen item from the response as the ``marker`` parameter value in a subsequent limited request. in: query required: false type: integer min_version: 2.35 keypair_marker: description: | The last-seen item. Use the ``limit`` parameter to make an initial limited request and use the last-seen item from the response as the ``marker`` parameter value in a subsequent limited request. in: query required: false type: string min_version: 2.35 keypair_user: in: query required: false type: string description: | This allows administrative users to operate key-pairs of specified user ID. min_version: 2.10 launch_index_query_server: description: | Filter the server list result by the sequence in which the servers were launched. This parameter is only valid when specified by administrators. If non-admin users specify this parameter, it is ignored. in: query required: false type: integer launched_at_query_server: description: | Filter the server list result by a date and time stamp when the instance was launched. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. If you omit the time zone, the UTC time zone is assumed. This parameter is restricted to administrators until microversion 2.83. If non-admin users specify this parameter on a microversion less than 2.83, it will be ignored. in: query required: false type: string limit: description: | Requests a page size of items. Returns a number of items up to a limit value. Use the ``limit`` parameter to make an initial limited request and use the ID of the last-seen item from the response as the ``marker`` parameter value in a subsequent limited request. in: query required: false type: integer limit_simple: description: | Used in conjunction with ``offset`` to return a slice of items. ``limit`` is the maximum number of items to return. If ``limit`` is not specified, or exceeds the configurable ``max_limit``, then ``max_limit`` will be used instead. in: query required: false type: integer locked_by_query_server: description: | Filter the server list result by who locked the server, possible value could be ``admin`` or ``owner``. This parameter is only valid when specified by administrators. If non-admin users specify this parameter, it is ignored. 
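# The ``limit`` parameter above, together with ``marker`` (described below),
# implements the paging pattern shared by most list APIs in this reference.
# A minimal sketch of paging through the server list (the marker value is a
# placeholder for the ID of the last server in the previous page):
#
#   GET /servers?limit=100
#   GET /servers?limit=100&marker=<ID of the last-seen server>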
in: query required: false type: string locked_query_server: description: | Specify the ``locked`` query parameter to list all locked or unlocked instances. If the value is specified, ``1``, ``t``, ``true``, ``on``, ``y`` and ``yes`` are treated as ``True``. ``0``, ``f``, ``false``, ``off``, ``n`` and ``no`` are treated as ``False``. (They are case-insensitive.) Any other value provided will be considered invalid. in: query required: false type: boolean min_version: 2.73 marker: description: | The ID of the last-seen item. Use the ``limit`` parameter to make an initial limited request and use the ID of the last-seen item from the response as the ``marker`` parameter value in a subsequent limited request. in: query required: false type: string migration_hidden: description: | The 'hidden' setting of migration to filter. The 'hidden' flag is set if the value is 1. The 'hidden' flag is not set if the value is 0. But the 'hidden' setting of migration is always 0, so this parameter is useless to filter migrations. in: query required: false type: integer migration_host: description: | The source/destination compute node of migration to filter. in: query required: false type: string migration_instance_uuid: description: | The uuid of the instance that migration is operated on to filter. in: query required: false type: string migration_limit: description: | Requests a page size of items. Returns a number of items up to a limit value. Use the ``limit`` parameter to make an initial limited request and use the last-seen item from the response as the ``marker`` parameter value in a subsequent limited request. in: query required: false type: integer min_version: 2.59 migration_marker: description: | The UUID of the last-seen migration. Use the ``limit`` parameter to make an initial limited request and use the last-seen item from the response as the ``marker`` parameter value in a subsequent limited request. in: query required: false type: string min_version: 2.59 migration_source_compute: description: | The source compute node of migration to filter. in: query required: false type: string migration_status: description: | The status of migration to filter. in: query required: false type: string migration_type: description: | The type of migration to filter. Valid values are: * ``evacuation`` * ``live-migration`` * ``migration`` * ``resize`` in: query required: false type: string minDisk: description: | Filters the response by a minimum disk space, in GiB. For example, ``100``. in: query required: false type: integer minRam: description: | Filters the response by a minimum RAM, in MiB. For example, ``512``. in: query required: false type: integer node_query_server: description: | Filter the server list result by the node. This parameter is only valid when specified by administrators. If non-admin users specify this parameter, it is ignored. in: query required: false type: string not_tags_any_query: in: query required: false type: string description: | A list of tags to filter the server list by. Servers that don't match any tags in this list will be returned. Boolean expression in this case is 'NOT (t1 OR t2)'. Tags in query must be separated by comma. min_version: 2.26 not_tags_query: in: query required: false type: string description: | A list of tags to filter the server list by. Servers that don't match all tags in this list will be returned. Boolean expression in this case is 'NOT (t1 AND t2)'. Tags in query must be separated by comma. 
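# The migration filters above (``migration_host``, ``migration_instance_uuid``,
# ``migration_status``, ``migration_type`` and, from microversion 2.59, the
# ``limit``/``marker`` pair) apply to the os-migrations listing. A minimal
# sketch with placeholder values, assuming ``migration_type``, ``status`` and
# ``instance_uuid`` as the query key names:
#
#   GET /os-migrations?migration_type=live-migration&status=completed
#   GET /os-migrations?instance_uuid=<server UUID>&limit=50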
min_version: 2.26 offset_simple: description: | Used in conjunction with ``limit`` to return a slice of items. ``offset`` is where to start in the list. in: query required: false type: integer power_state_query_server: in: query required: false type: integer description: | Filter the server list result by server power state. Possible values are integer values that is mapped as:: 0: NOSTATE 1: RUNNING 3: PAUSED 4: SHUTDOWN 6: CRASHED 7: SUSPENDED This parameter is restricted to administrators until microversion 2.83. If non-admin users specify this parameter on a microversion less than 2.83, it will be ignored. progress_query_server: description: | Filter the server list result by the progress of the server. The value could be from 0 to 100 as integer. This parameter is restricted to administrators until microversion 2.83. If non-admin users specify this parameter on a microversion less than 2.83, it will be ignored. in: query required: false type: integer project_id_query_migrations: description: | Filter the migrations by the given project ID. in: query required: false type: string min_version: 2.80 project_id_query_server: description: | Filter the list of servers by the given project ID. This filter only works when the ``all_tenants`` filter is also specified. .. note:: 'tenant_id' can also be requested which is alias of 'project_id' but that is not recommended to use as that will be removed in future. in: query required: false type: string ramdisk_id_query_server: in: query required: false type: string description: | Filter the server list result by the UUID of the ramdisk image when using an AMI. This parameter is only valid when specified by administrators. If non-admin users specify this parameter, it is ignored. reservation_id_query: in: query required: false type: string description: | A reservation id as returned by a servers multiple create call. reserved_query: description: | Specify whether the result of resource total includes reserved resources or not. - ``0``: Not include reserved resources. - Other than 0: Include reserved resources. If non integer value is specified, it is the same as ``0``. in: query required: false type: integer server_name_query: description: | Filters the response by a server name, as a string. You can use regular expressions in the query. For example, the ``?name=bob`` regular expression returns both bob and bobb. If you must match on only bob, you can use a regular expression that matches the syntax of the underlying database server that is implemented for Compute, such as MySQL or PostgreSQL. .. note:: 'display_name' can also be requested which is alias of 'name' but that is not recommended to use as that will be removed in future. format: regexp in: query required: false type: string server_root_device_name_query: in: query required: false type: string description: | Filter the server list result by the root device name of the server. This parameter is only valid when specified by administrators. If non-admin users specify this parameter, it is ignored. server_status_query: description: | Filters the response by a server status, as a string. For example, ``ACTIVE``. Up to microversion 2.37, an empty list is returned if an invalid status is specified. Starting from microversion 2.38, a 400 error is returned in that case. in: query required: false type: string server_uuid_query: description: | Filter the server list result by the UUID of the server. This parameter is only valid when specified by administrators. 
If non-admin users specify this parameter, it is ignored. in: query required: false type: string soft_deleted_server: description: | Filter the server list by ``SOFT_DELETED`` status. This parameter is only valid when the ``deleted=True`` filter parameter is specified. in: query required: false type: boolean sort_dir_flavor: description: | Sort direction. A valid value is ``asc`` (ascending) or ``desc`` (descending). Default is ``asc``. You can specify multiple pairs of sort key and sort direction query parameters. If you omit the sort direction in a pair, the API uses the natural sorting direction of the direction of the flavor ``sort_key`` attribute. in: query required: false type: string sort_dir_server: description: | Sort direction. A valid value is ``asc`` (ascending) or ``desc`` (descending). Default is ``desc``. You can specify multiple pairs of sort key and sort direction query parameters. If you omit the sort direction in a pair, the API uses the natural sorting direction of the direction of the server ``sort_key`` attribute. in: query required: false type: string sort_key_flavor: description: | Sorts by a flavor attribute. Default attribute is ``flavorid``. You can specify multiple pairs of sort key and sort direction query parameters. If you omit the sort direction in a pair, the API uses the natural sorting direction of the flavor ``sort_key`` attribute. The sort keys are limited to: - ``created_at`` - ``description`` - ``disabled`` - ``ephemeral_gb`` - ``flavorid`` - ``id`` - ``is_public`` - ``memory_mb`` - ``name`` - ``root_gb`` - ``rxtx_factor`` - ``swap`` - ``updated_at`` - ``vcpu_weight`` - ``vcpus`` in: query required: false type: string sort_key_server: description: | Sorts by a server attribute. Default attribute is ``created_at``. You can specify multiple pairs of sort key and sort direction query parameters. If you omit the sort direction in a pair, the API uses the natural sorting direction of the server ``sort_key`` attribute. The sort keys are limited to: - ``access_ip_v4`` - ``access_ip_v6`` - ``auto_disk_config`` - ``availability_zone`` - ``config_drive`` - ``created_at`` - ``display_description`` - ``display_name`` - ``host`` - ``hostname`` - ``image_ref`` - ``instance_type_id`` - ``kernel_id`` - ``key_name`` - ``launch_index`` - ``launched_at`` - ``locked`` (New in version 2.73) - ``locked_by`` - ``node`` - ``power_state`` - ``progress`` - ``project_id`` - ``ramdisk_id`` - ``root_device_name`` - ``task_state`` - ``terminated_at`` - ``updated_at`` - ``user_id`` - ``uuid`` - ``vm_state`` ``host`` and ``node`` are only allowed for admin. If non-admin users specify them, a 403 error is returned. in: query required: false type: string start_simple_tenant_usage: description: | The beginning time to calculate usage statistics on compute and storage resources. The date and time stamp format is any of the following ones: :: CCYY-MM-DDThh:mm:ss For example, ``2015-08-27T09:49:58``. :: CCYY-MM-DDThh:mm:ss.NNNNNN For example, ``2015-08-27T09:49:58.123456``. :: CCYY-MM-DD hh:mm:ss.NNNNNN For example, ``2015-08-27 09:49:58.123456``. If you omit this parameter, the current time is used. in: query required: false type: string tags_any_query: in: query required: false type: string description: | A list of tags to filter the server list by. Servers that match any tag in this list will be returned. Boolean expression in this case is 't1 OR t2'. Tags in query must be separated by comma. 
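# The server list filters and the ``sort_key``/``sort_dir`` parameters above
# can be combined on one request. A minimal sketch listing another project's
# active servers, newest first (the project ID is a placeholder; ``project_id``
# only takes effect together with ``all_tenants``):
#
#   GET /servers/detail?all_tenants=True&project_id=<project UUID>&status=ACTIVE&sort_key=created_at&sort_dir=desc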
min_version: 2.26 tags_query: in: query required: false type: string description: | A list of tags to filter the server list by. Servers that match all tags in this list will be returned. Boolean expression in this case is 't1 AND t2'. Tags in query must be separated by comma. min_version: 2.26 task_state_query_server: in: query required: false type: string description: | Filter the server list result by task state. This parameter is restricted to administrators until microversion 2.83. If non-admin users specify this parameter on a microversion less than 2.83, it will be ignored. tenant_id_query: description: | Specify the project ID (tenant ID) to show the rate and absolute limits. This parameter can be specified by admin only. in: query required: false type: string terminated_at_query_server: description: | Filter the server list result by a date and time stamp when instance was terminated. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. If you omit the time zone, the UTC time zone is assumed. This parameter is restricted to administrators until microversion 2.83. If non-admin users specify this parameter on a microversion less than 2.83, it will be ignored. in: query required: false type: string usage_limit: description: | Requests a page size of items. Calculate usage for the limited number of instances. Use the ``limit`` parameter to make an initial limited request and use the last-seen instance UUID from the response as the ``marker`` parameter value in a subsequent limited request. in: query required: false type: integer min_version: 2.40 usage_marker: description: | The last-seen item. Use the ``limit`` parameter to make an initial limited request and use the last-seen instance UUID from the response as the ``marker`` parameter value in a subsequent limited request. in: query required: false type: string min_version: 2.40 user_id_query_migrations: description: | Filter the migrations by the given user ID. in: query required: false type: string min_version: 2.80 user_id_query_quota: description: | ID of user to list the quotas for. in: query required: false type: string user_id_query_quota_delete: description: | ID of user to delete quotas for. in: query required: false type: string user_id_query_server: description: | Filter the list of servers by the given user ID. This parameter is restricted to administrators until microversion 2.83. If non-admin users specify this parameter on a microversion less than 2.83, it will be ignored. in: query required: false type: string user_id_query_set_quota: description: | ID of user to set the quotas for. in: query required: false type: string vm_state_query_server: description: | Filter the server list result by vm state. The value could be: - ``ACTIVE`` - ``BUILDING`` - ``DELETED`` - ``ERROR`` - ``PAUSED`` - ``RESCUED`` - ``RESIZED`` - ``SHELVED`` - ``SHELVED_OFFLOADED`` - ``SOFT_DELETED`` - ``STOPPED`` - ``SUSPENDED`` This parameter is restricted to administrators until microversion 2.83. If non-admin users specify this parameter on a microversion less than 2.83, it will be ignored. in: query required: false type: string # variables in body accessIPv4: in: body required: true type: string description: | IPv4 address that should be used to access this server. May be automatically set by the provider. 
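# The ``usage_limit``/``usage_marker`` parameters above page the
# os-simple-tenant-usage results per instance starting with microversion 2.40.
# A minimal sketch (the marker is a placeholder for the last instance UUID seen
# in the previous page):
#
#   GET /os-simple-tenant-usage?limit=100
#   GET /os-simple-tenant-usage?limit=100&marker=<last-seen instance UUID>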
accessIPv4_in: in: body required: false type: string description: | IPv4 address that should be used to access this server. accessIPv6: in: body required: true type: string description: | IPv6 address that should be used to access this server. May be automatically set by the provider. accessIPv6_in: in: body required: false type: string description: | IPv6 address that should be used to access this server. action: description: | The name of the action. in: body required: true type: string action_reserve: description: | The attribute to reserve an IP with a value of ``null``. in: body required: false type: string action_unreserve: description: | The attribute to release an IP with a value of ``null``. in: body required: false type: string addFixedIp: description: | The action to add a fixed ip address to a server. in: body required: true type: object addFloatingIp: description: | The action. Contains required floating IP ``address`` and optional ``fixed_address``. in: body required: true type: object address: description: | The floating IP address. in: body required: true type: string addresses: description: | The addresses for the server. Servers with status ``BUILD`` hide their addresses information. in: body required: true type: object addresses_obj: description: | The addresses information for the server. in: body required: true type: object addSecurityGroup: description: | The action to add a security group to a server. in: body required: true type: object addTenantAccess: description: | The action. in: body required: true type: string adminPass_change_password: description: | The administrative password for the server. in: body required: true type: string adminPass_evacuate: description: | An administrative password to access the evacuated instance. If you set ``enable_instance_password`` configuration option to ``False``, the API wouldn't return the ``adminPass`` field in response. in: body required: false type: string max_version: 2.13 adminPass_evacuate_request: description: | An administrative password to access the evacuated server. If you omit this parameter, the operation generates a new password. Up to API version 2.13, if ``onSharedStorage`` is set to ``True`` and this parameter is specified, an error is raised. in: body required: false type: string adminPass_request: description: | The administrative password of the server. If you omit this parameter, the operation generates a new password. in: body required: false type: string adminPass_rescue_request: description: | The password for the rescued instance. If you omit this parameter, the operation generates a new password. in: body required: false type: string adminPass_response: description: | The administrative password for the server. If you set ``enable_instance_password`` configuration option to ``False``, the API wouldn't return the ``adminPass`` field in response. in: body required: false type: string agent: description: | The guest agent object. in: body required: true type: object agent_id: description: | The agent ID. in: body required: true type: integer agent_id_str: description: | The agent ID. (This is a bug of API, this should be integer type which is consistent with the responses of agent create and list. This will be fixed in later microversion.) in: body required: true type: string agents: description: | A list of guest agent objects. in: body required: true type: array aggregate: description: | The host aggregate object. 
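# The ``addFloatingIp`` action above takes the floating IP ``address`` and an
# optional ``fixed_address`` to pick which fixed IP the floating IP maps to.
# A minimal request body sketch (both addresses are placeholders):
#
#   POST /servers/{server_id}/action
#   {
#       "addFloatingIp": {
#           "address": "172.24.4.10",
#           "fixed_address": "192.168.0.3"
#       }
#   }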
in: body required: true type: object aggregate_add_host: description: | The add_host object used to add host to aggregate. in: body required: true type: object aggregate_az: description: | The availability zone of the host aggregate. in: body required: true type: string aggregate_az_optional_create: description: | The availability zone of the host aggregate. You should use a custom availability zone rather than the default returned by the os-availability-zone API. The availability zone must not include ':' in its name. in: body required: false type: string aggregate_az_optional_update: description: | The availability zone of the host aggregate. You should use a custom availability zone rather than the default returned by the os-availability-zone API. The availability zone must not include ':' in its name. .. warning:: You should not change or unset the availability zone of an aggregate when that aggregate has hosts which contain servers in it since that may impact the ability for those servers to move to another host. in: body required: false type: string aggregate_host_list: description: | A list of host ids in this aggregate. in: body required: true type: array aggregate_id_body: description: | The ID of the host aggregate. in: body required: true type: integer aggregate_metadata_request: description: | Metadata key and value pairs associated with the aggregate. The maximum size for each metadata key and value pair is 255 bytes. New keys will be added to existing aggregate metadata. For existing keys, if the value is ``null`` the entry is removed, otherwise the value is updated. Note that the special ``availability_zone`` metadata entry cannot be unset to ``null``. .. warning:: You should not change the availability zone of an aggregate when that aggregate has hosts which contain servers in it since that may impact the ability for those servers to move to another host. in: body required: true type: object aggregate_metadata_response: description: | Metadata key and value pairs associated with the aggregate. in: body required: true type: object aggregate_name: description: | The name of the host aggregate. in: body required: true type: string aggregate_name_optional: description: | The name of the host aggregate. in: body required: false type: string aggregate_remove_host: description: | The add_host object used to remove host from aggregate. in: body required: true type: object aggregate_uuid: description: | The UUID of the host aggregate. in: body required: true type: string min_version: 2.41 aggregates: description: | The list of existing aggregates. in: body required: true type: array alias: description: | A short name by which this extension is also known. in: body required: true type: string alive: description: | Returns true if the instance is alive. in: body required: true type: boolean architecture: description: | The name of the cpu architecture. in: body required: true type: string associate_host: description: | The name of the host to associate. in: body required: true type: string attachment_device_put_req: description: | Name of the device in the attachment object, such as, ``/dev/vdb``. in: body required: false type: string min_version: 2.85 attachment_device_resp: description: | Name of the device in the attachment object, such as, ``/dev/vdb``. in: body required: false type: string attachment_id_put_req: description: | The UUID of the attachment. in: body required: false type: string min_version: 2.85 attachment_id_required: description: | The UUID of the attachment. 
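# The aggregate parameters above are typically used in two steps: create the
# aggregate, then add hosts to it. A minimal sketch (the aggregate name,
# availability zone and host name are placeholders):
#
#   POST /os-aggregates
#   {"aggregate": {"name": "rack-1", "availability_zone": "az-1"}}
#
#   POST /os-aggregates/{aggregate_id}/action
#   {"add_host": {"host": "compute-1"}}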
in: body required: true type: string attachment_id_resp: description: | The UUID of the attachment. in: body required: false type: string attachment_server_id_put_req: description: | The UUID of the server. in: body required: false type: string min_version: 2.85 attachment_server_id_resp: description: | The UUID of the server. in: body required: false type: string attachment_volumeId_resp: description: | The UUID of the attached volume. in: body required: false type: string availability_zone: description: | The availability zone. in: body required: false type: string availability_zone_info: description: | The list of availability zone information. in: body required: true type: array availability_zone_state: description: | The current state of the availability zone. in: body required: true type: object availability_zone_unshelve: description: | The availability zone name. Specifying an availability zone is only allowed when the server status is ``SHELVED_OFFLOADED`` otherwise a 409 HTTPConflict response is returned. in: body required: false type: string min_version: 2.77 available: description: | Returns true if the availability zone is available. in: body required: true type: boolean backup_name: description: | The name of the image to be backed up. in: body required: true type: string backup_rotation: description: | The rotation of the back up image, the oldest image will be removed when image count exceed the rotation count. in: body required: true type: integer backup_type: description: | The type of the backup, for example, ``daily``. in: body required: true type: string baremetal_cpus: description: | Number of CPUs the node has. .. note:: This is a JSON string, even though it will look like an int value. in: body required: true type: string baremetal_disk: description: | Amount of disk in GiB the node has. .. note:: This is a JSON string, even though it will look like an int value. in: body required: true type: string baremetal_host: description: | This will always have the value ``IRONIC MANAGED``. in: body required: true type: string baremetal_id: description: | UUID of the baremetal node. in: body required: true type: string baremetal_instance_uuid: description: | UUID of the server instance on this node. in: body required: true type: string baremetal_interfaces: description: | A list of interface objects for active interfaces on the baremetal node. Each will have an ``address`` field with the address. in: body required: true type: array baremetal_mem: description: | Amount of memory in MiB the node has. .. note:: This is a JSON string, even though it will look like an int value. in: body required: true type: string baremetal_node: description: | A baremetal node object. in: body required: true type: object baremetal_nodes: description: | An array of baremetal node objects. in: body required: true type: array baremetal_taskstate: description: | The Ironic task state for the node. See Ironic project for more details. in: body required: true type: string binary: description: | The binary name of the service. in: body required: true type: string block_device_mapping_v2: description: | Enables fine grained control of the block device mapping for an instance. This is typically used for booting servers from volumes. An example format would look as follows: .. 
code-block:: javascript "block_device_mapping_v2": [{ "boot_index": "0", "uuid": "ac408821-c95a-448f-9292-73986c790911", "source_type": "image", "volume_size": "25", "destination_type": "volume", "delete_on_termination": true, "tag": "disk1", "disk_bus": "scsi"}] In microversion 2.32, ``tag`` is an optional string attribute that can be used to assign a tag to the block device. This tag is then exposed to the guest in the metadata API and the config drive and is associated to hardware metadata for that block device, such as bus (ex: SCSI), bus address (ex: 1:0:2:0), and serial. A bug has caused the ``tag`` attribute to no longer be accepted starting with version 2.33. It has been restored in version 2.42. in: body required: false type: array block_device_uuid: description: | This is the uuid of source resource. The uuid points to different resources based on the ``source_type``. For example, if ``source_type`` is ``image``, the block device is created based on the specified image which is retrieved from the image service. Similarly, if ``source_type`` is ``snapshot`` then the uuid refers to a volume snapshot in the block storage service. If ``source_type`` is ``volume`` then the uuid refers to a volume in the block storage service. in: body required: false type: string block_migration: description: | Set to ``True`` to migrate local disks by using block migration. If the source or destination host uses shared storage and you set this value to ``True``, the live migration fails. in: body required: true type: boolean max_version: 2.24 block_migration_2_25: description: | Migrates local disks by using block migration. Set to ``auto`` which means nova will detect whether source and destination hosts on shared storage. if they are on shared storage, the live-migration won't be block migration. Otherwise the block migration will be executed. Set to ``True``, means the request will fail when the source or destination host uses shared storage. Set to ``False`` means the request will fail when the source and destination hosts are not on the shared storage. in: body required: true type: string min_version: 2.25 boot_index: description: | Defines the order in which a hypervisor tries devices when it attempts to boot the guest from storage. Give each device a unique boot index starting from ``0``. To disable a device from booting, set the boot index to a negative value or use the default boot index value, which is ``None``. The simplest usage is, set the boot index of the boot device to ``0`` and use the default boot index value, ``None``, for any other devices. Some hypervisors might not support booting from multiple devices; these hypervisors consider only the device with a boot index of ``0``. Some hypervisors support booting from multiple devices but only if the devices are of different types. For example, a disk and CD-ROM. in: body required: true type: integer cache: description: A list of image objects to cache. in: body required: true type: array certificate: description: | The certificate object. in: body required: true type: object changePassword: description: | The action to change an administrative password of the server. in: body required: true type: object cidr: description: | The CIDR for address range. in: body required: true type: string cloudpipe: description: | The cloudpipe object. in: body required: true type: object cloudpipes: description: | The list of cloudpipe objects. in: body required: true type: array code: description: | The HTTP response code for the event. 
The following codes are currently used: * 200 - successfully submitted event * 400 - the request is missing required parameter * 404 - the instance specified by ``server_uuid`` was not found * 422 - no host was found for the server specified by ``server_uuid``, so there is no route to this server. in: body required: true type: string config_drive: description: | Indicates whether a config drive enables metadata injection. The config_drive setting provides information about a drive that the instance can mount at boot time. The instance reads files from the drive to get information that is normally available through the metadata service. This metadata is different from the user data. Not all cloud providers enable the ``config_drive``. Read more in the `OpenStack End User Guide `_. in: body required: false type: boolean config_drive_diagnostics: description: | Indicates whether or not a config drive was used for this server. in: body required: true type: boolean min_version: 2.48 config_drive_resp: description: | Indicates whether or not a config drive was used for this server. The value is ``True`` or an empty string. An empty string stands for ``False``. in: body required: true type: string config_drive_resp_update_rebuild: description: | Indicates whether or not a config drive was used for this server. The value is ``True`` or an empty string. An empty string stands for ``False``. in: body required: true type: string min_version: 2.75 configure_project_cloudpipe: description: | VPN IP and Port information to configure the cloudpipe instance.. in: body required: true type: object confirmResize: description: | The action to confirm a resize operation. in: body required: true type: none console: description: | The console object. in: body required: true type: object console_host: description: | The name or ID of the host. in: body required: false type: string console_id_in_body: description: | The UUID of the console. in: body required: true type: string console_output: description: | The console output as a string. Control characters will be escaped to create a valid JSON string. in: body required: true type: string console_password: description: | The password for the console. in: body required: true type: string console_type: description: | The type of the console. in: body required: true type: string consoles: description: | The list of console objects. in: body required: true type: array contents: description: | The file contents field in the personality object. in: body required: true type: string max_version: 2.56 cores: &cores description: | The number of allowed server cores for each tenant. in: body required: true type: integer cores_quota_class: &cores_quota_class <<: *cores description: | The number of allowed server cores for the quota class. cores_quota_class_optional: <<: *cores_quota_class required: false cores_quota_details: description: | The object of detailed cores quota, including in_use, limit and reserved number of cores. in: body required: true type: object cores_quota_optional: description: | The number of allowed server cores for each tenant. in: body required: false type: integer cpu_details_diagnostics: description: | The list of dictionaries with detailed information about VM CPUs. 
Following fields are presented in each dictionary: - ``id`` - the ID of CPU (Integer) - ``time`` - CPU Time in nano seconds (Integer) - ``utilisation`` - CPU utilisation in percents (Integer) in: body required: true type: array min_version: 2.48 cpu_info: description: | A dictionary that contains cpu information like ``arch``, ``model``, ``vendor``, ``features`` and ``topology``. The content of this field is hypervisor specific. .. note:: Since version 2.28 ``cpu_info`` field is returned as a dictionary instead of string. in: body required: true type: object create_info: description: | Information for snapshot creation. in: body required: true type: object create_info_id: description: | Its an arbitrary string that gets passed back to the user. in: body required: false type: string create_info_id_resp: description: | Its the same arbitrary string which was sent in request body. .. note:: This string is passed back to user as it is and not being used in Nova internally. So use ``snapshot_id`` instead for further operation on this snapshot. in: body required: true type: string createBackup: description: | The action. in: body required: true type: object created: description: | The date and time when the resource was created. The date and time stamp format is `ISO 8601 `_ :: CCYY-MM-DDThh:mm:ss±hh:mm For example, ``2015-08-27T09:49:58-05:00``. The ``±hh:mm`` value, if included, is the time zone as an offset from UTC. In the previous example, the offset value is ``-05:00``. in: body required: true type: string createImage: description: | The action to create a snapshot of the image or the volume(s) of the server. in: body required: true type: object current_workload: description: | The current_workload is the number of tasks the hypervisor is responsible for. This will be equal or greater than the number of active VMs on the system (it can be greater when VMs are being deleted and the hypervisor is still cleaning up). in: body required: true type: integer data: description: | The certificate. in: body required: true type: string delete_on_termination: description: | To delete the boot volume when the server is destroyed, specify ``true``. Otherwise, specify ``false``. Default: ``false`` in: body required: false type: boolean delete_on_termination_attachments_req: description: | To delete the attached volume when the server is destroyed, specify ``true``. Otherwise, specify ``false``. Default: ``false`` in: body required: false type: boolean min_version: 2.79 delete_on_termination_attachments_resp: description: | A flag indicating if the attached volume will be deleted when the server is deleted. in: body required: true type: boolean min_version: 2.79 delete_on_termination_put_req: description: | A flag indicating if the attached volume will be deleted when the server is deleted. in: body required: false type: boolean min_version: 2.85 deleted: description: | A boolean indicates whether this aggregate is deleted or not, if it has not been deleted, ``false`` will appear. in: body required: true type: boolean deleted_at: description: | The date and time when the resource was deleted. If the resource has not been deleted yet, this field will be ``null``, The date and time stamp format is `ISO 8601 `_ :: CCYY-MM-DDThh:mm:ss±hh:mm For example, ``2015-08-27T09:49:58-05:00``. The ``±hh:mm`` value, if included, is the time zone as an offset from UTC. In the previous example, the offset value is ``-05:00``. in: body required: true type: string description: description: | Security group description. 
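# The ``createBackup`` action above combines the ``backup_name``,
# ``backup_type`` and ``backup_rotation`` fields described earlier. A minimal
# request body sketch (the backup name is a placeholder):
#
#   POST /servers/{server_id}/action
#   {"createBackup": {"name": "nightly", "backup_type": "daily", "rotation": 2}}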
in: body required: true type: string destination_type: description: | Defines where the block device mapping will reside. Valid values are: * ``local``: The ephemeral disk resides local to the compute host on which the server runs * ``volume``: The persistent volume is stored in the block storage service in: body required: false type: string device: description: | Name of the device such as, ``/dev/vdb``. Omit or set this parameter to null for auto-assignment, if supported. If you specify this parameter, the device must not exist in the guest operating system. Note that as of the 12.0.0 Liberty release, the Nova libvirt driver no longer honors a user-supplied device name. This is the same behavior as if the device name parameter is not supplied on the request. in: body required: false type: string device_name: description: | A path to the device for the volume that you want to use to boot the server. Note that as of the 12.0.0 Liberty release, the Nova libvirt driver no longer honors a user-supplied device name. This is the same behavior as if the device name parameter is not supplied on the request. in: body required: false type: string device_resp: description: | Name of the device such as, ``/dev/vdb``. in: body required: true type: string device_tag_bdm: description: | A device role tag that can be applied to a block device. The guest OS of a server that has devices tagged in this manner can access hardware metadata about the tagged devices from the metadata API and on the config drive, if enabled. .. note:: Due to a bug, block device tags are accepted in version 2.32 and subsequently starting with version 2.42. in: body required: false type: string min_version: 2.32 device_tag_bdm_attachment: description: | A device role tag that can be applied to a volume when attaching it to the VM. The guest OS of a server that has devices tagged in this manner can access hardware metadata about the tagged devices from the metadata API and on the config drive, if enabled. .. note:: Tagged volume attachment is not supported for shelved-offloaded instances. in: body required: false type: string min_version: 2.49 device_tag_bdm_attachment_put_req: description: | The device tag applied to the volume block device or ``null``. in: body required: true type: string min_version: 2.85 device_tag_bdm_attachment_resp: description: | The device tag applied to the volume block device or ``null``. in: body required: true type: string min_version: 2.70 device_tag_nic: description: | A device role tag that can be applied to a network interface. The guest OS of a server that has devices tagged in this manner can access hardware metadata about the tagged devices from the metadata API and on the config drive, if enabled. .. note:: Due to a bug, network interface tags are accepted between 2.32 and 2.36 inclusively, and subsequently starting with version 2.42. in: body required: false type: string min_version: 2.32 device_tag_nic_attachment: description: | A device role tag that can be applied to a network interface when attaching it to the VM. The guest OS of a server that has devices tagged in this manner can access hardware metadata about the tagged devices from the metadata API and on the config drive, if enabled. in: body required: false type: string min_version: 2.49 device_tag_nic_attachment_resp: description: | The device tag applied to the virtual network interface or ``null``. in: body required: true type: string min_version: 2.70 device_type: description: | The device type. For example, ``disk``, ``cdrom``. 
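# ``device_tag_bdm_attachment`` (microversion 2.49) and
# ``delete_on_termination_attachments_req`` (microversion 2.79) above both
# apply to the volume attach request. A minimal body sketch at microversion
# 2.79 or later (the volume UUID and tag are placeholders):
#
#   POST /servers/{server_id}/os-volume_attachments
#   {
#       "volumeAttachment": {
#           "volumeId": "<volume UUID>",
#           "tag": "data1",
#           "delete_on_termination": true
#       }
#   }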
in: body required: false type: string device_volume_type: description: | The device ``volume_type``. This can be used to specify the type of volume which the compute service will create and attach to the server. If not specified, the block storage service will provide a default volume type. See the `block storage volume types API `_ for more details. There are some restrictions on ``volume_type``: - It can be a volume type ID or name. - It is only supported with ``source_type`` of ``blank``, ``image`` or ``snapshot``. - It is only supported with ``destination_type`` of ``volume``. in: body required: false type: string min_version: 2.67 # Optional input parameter in the body for PUT /os-services/{service_id} added # in microversion 2.53. disabled_reason_2_53_in: description: | The reason for disabling a service. The minimum length is 1 and the maximum length is 255. This may only be requested with ``status=disabled``. in: body required: false type: string disabled_reason_body: description: | The reason for disabling a service. in: body required: true type: string disk_available_least: description: | The actual free disk on this hypervisor(in GiB). If allocation ratios used for overcommit are configured, this may be negative. This is intentional as it provides insight into the amount by which the disk is overcommitted. in: body required: true type: integer disk_available_least_total: description: | The actual free disk on all hypervisors(in GiB). If allocation ratios used for overcommit are configured, this may be negative. This is intentional as it provides insight into the amount by which the disk is overcommitted. in: body required: true type: integer disk_bus: description: | Disk bus type, some hypervisors (currently only libvirt) support specify this parameter. Some example disk_bus values can be: ``fdc``, ``ide``, ``sata``, ``scsi``, ``usb``, ``virtio``, ``xen``, ``lxc`` and ``uml``. Support for each bus type depends on the virtualization driver and underlying hypervisor. in: body required: false type: string disk_config: description: | Disk configuration. The value is either: - ``AUTO``. The API builds the server with a single partition the size of the target flavor disk. The API automatically adjusts the file system to fit the entire partition. - ``MANUAL``. The API builds the server by using the partition scheme and file system that is in the source image. If the target flavor disk is larger, The API does not partition the remaining disk space. in: body required: true type: string disk_details_diagnostics: description: | The list of dictionaries with detailed information about VM disks. Following fields are presented in each dictionary: - ``read_bytes`` - Disk reads in bytes (Integer) - ``read_requests`` - Read requests (Integer) - ``write_bytes`` - Disk writes in bytes (Integer) - ``write_requests`` - Write requests (Integer) - ``errors_count`` - Disk errors (Integer) in: body required: true type: array min_version: 2.48 disk_over_commit: description: | Set to ``True`` to enable over commit when the destination host is checked for available disk space. Set to ``False`` to disable over commit. This setting affects only the libvirt virt driver. in: body required: true type: boolean max_version: 2.25 display_description: description: | The volume description. in: body required: true type: string display_description_optional: description: | The volume description. in: body required: false type: string display_name: description: | The volume name. 
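# ``disabled_reason_2_53_in`` above is only accepted together with
# ``status=disabled`` on the microversion 2.53 service update call. A minimal
# request body sketch (the reason text is a placeholder):
#
#   PUT /os-services/{service_id}
#   {"status": "disabled", "disabled_reason": "maintenance window"}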
in: body required: true type: string display_name_optional: description: | The volume name. in: body required: false type: string driver_diagnostics: description: | The driver on which the VM is running. Possible values are: - ``libvirt`` - ``xenapi`` - ``hyperv`` - ``vmwareapi`` - ``ironic`` in: body required: true type: string min_version: 2.48 ended_at: description: | The date and time when the server was deleted. The date and time stamp format is as follows: :: CCYY-MM-DDThh:mm:ss.NNNNNN For example, ``2015-08-27T09:49:58.123456``. If the server hasn't been deleted yet, its value is ``null``. in: body required: true type: string ended_at_optional: description: | The date and time when the server was deleted. The date and time stamp format is as follows: :: CCYY-MM-DDThh:mm:ss.NNNNNN For example, ``2015-08-27T09:49:58.123456``. If the server hasn't been deleted yet, its value is ``null``. in: body required: false type: string errors: description: | The number of errors. in: body required: true type: integer evacuate: description: | The action to evacuate a server to another host. in: body required: true type: object event: description: | The name of the event. in: body required: true type: string event_details: min_version: 2.84 description: | Details of the event. May be ``null``. in: body required: true type: string event_finish_time: description: | The date and time when the event was finished. The date and time stamp format is `ISO 8601 `_ :: CCYY-MM-DDThh:mm:ss±hh:mm For example, ``2015-08-27T09:49:58-05:00``. The ``±hh:mm`` value, if included, is the time zone as an offset from UTC. In the previous example, the offset value is ``-05:00``. in: body required: true type: string event_host: min_version: 2.62 description: | The name of the host on which the event occurred. Policy defaults enable only users with the administrative role to see an instance action event host. Cloud providers can change these permissions through the ``policy.json`` file. in: body required: false type: string event_hostId: min_version: 2.62 description: | An obfuscated hashed host ID string, or the empty string if there is no host for the event. This is a hashed value so will not actually look like a hostname, and is hashed with data from the project_id, so the same physical host as seen by two different project_ids will be different. This is useful when within the same project you need to determine if two events occurred on the same or different physical hosts. in: body required: true type: string event_name: description: | The event name. A valid value is: - ``network-changed`` - ``network-vif-plugged`` - ``network-vif-unplugged`` - ``network-vif-deleted`` - ``volume-extended`` (since microversion ``2.51``) - ``power-update`` (since microversion ``2.76``) - ``accelerator-request-bound`` (since microversion ``2.82``) in: body required: true type: string event_result: description: | The result of the event. in: body required: true type: string event_start_time: description: | The date and time when the event was started. The date and time stamp format is `ISO 8601 `_ :: CCYY-MM-DDThh:mm:ss±hh:mm For example, ``2015-08-27T09:49:58-05:00``. The ``±hh:mm`` value, if included, is the time zone as an offset from UTC. In the previous example, the offset value is ``-05:00``. in: body required: true type: string event_status: description: | The event status. A valid value is ``failed``, ``completed``, or ``in-progress``. Default is ``completed``. 
in: body required: false type: string event_tag: description: | A string value that identifies the event. Certain types of events require specific tags: - For the ``accelerator-request-bound`` event, the tag must be the accelerator request UUID. - For the ``power-update`` event the tag must be either be ``POWER_ON`` or ``POWER_OFF``. - For the ``volume-extended`` event the tag must be the volume id. in: body required: false type: string event_traceback: description: | The traceback stack if an error occurred in this event. Policy defaults enable only users with the administrative role to see an instance action event traceback. Cloud providers can change these permissions through the ``policy.json`` file. in: body required: true type: string events: description: | List of external events to process. in: body required: true type: array extension: description: | An ``extension`` object. in: body required: true type: object extension_description: description: | Text describing this extension's purpose. in: body required: true type: string extension_links: description: | Links pertaining to this extension. This is a list of dictionaries, each including keys ``href`` and ``rel``. in: body required: true type: array extension_name: description: | Name of the extension. in: body required: true type: string extensions: description: | List of ``extension`` objects. in: body required: true type: array extra_specs: description: | A dictionary of the flavor's extra-specs key-and-value pairs. It appears in the os-extra-specs' "create" REQUEST body, as well as the os-extra-specs' "create" and "list" RESPONSE body. in: body required: true type: object extra_specs_2_47: min_version: 2.47 description: | A dictionary of the flavor's extra-specs key-and-value pairs. This will only be included if the user is allowed by policy to index flavor extra_specs. in: body required: false type: object extra_specs_2_61: min_version: 2.61 description: | A dictionary of the flavor's extra-specs key-and-value pairs. This will only be included if the user is allowed by policy to index flavor extra_specs. in: body required: false type: object fault: description: | A fault object. Only displayed when the server status is ``ERROR`` or ``DELETED`` and a fault occurred. in: body required: false type: object fault_code: description: | The error response code. in: body required: true type: integer fault_created: description: | The date and time when the exception was raised. The date and time stamp format is `ISO 8601 `_ :: CCYY-MM-DDThh:mm:ss±hh:mm For example, ``2015-08-27T09:49:58-05:00``. The ``±hh:mm`` value, if included, is the time zone as an offset from UTC. In the previous example, the offset value is ``-05:00``. in: body required: true type: string fault_details: description: | The stack trace. It is available if the response code is not 500 or you have the administrator privilege in: body required: false type: string fault_message: description: | The error message. in: body required: true type: string fixed_address: description: | The fixed IP address with which you want to associate the floating IP address. in: body required: false type: string fixed_ip: description: | A fixed IPv4 address for the NIC. Valid with a ``neutron`` or ``nova-networks`` network. in: body required: false type: string fixed_ip_address: description: | Fixed IP associated with floating IP network. 
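# The ``events``, ``event_name``, ``event_tag`` and ``event_status`` parameters
# above make up an os-server-external-events request. A minimal sketch for a
# ``volume-extended`` event, where the tag must be the volume ID (both UUIDs
# are placeholders):
#
#   POST /os-server-external-events
#   {
#       "events": [
#           {
#               "name": "volume-extended",
#               "server_uuid": "<server UUID>",
#               "tag": "<volume UUID>"
#           }
#       ]
#   }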
in: body required: true type: string fixed_ip_host: description: | The hostname of the host that manages the server that is associated with this fixed IP address. in: body required: true type: string fixed_ip_hostname: description: | The hostname of the server that is associated with this fixed IP address. in: body required: true type: string fixed_ip_obj: description: | A fixed IP address object. in: body required: true type: object fixed_ips: description: | Fixed IP addresses. If you request a specific fixed IP address without a ``net_id``, the request returns a ``Bad Request (400)`` response code. in: body required: false type: array fixed_ips_quota: description: | The number of allowed fixed IP addresses for each tenant. Must be equal to or greater than the number of allowed servers. in: body required: true type: integer max_version: 2.35 fixed_ips_quota_class: &fixed_ips_quota_class description: | The number of allowed fixed IP addresses for the quota class. Must be equal to or greater than the number of allowed servers. in: body required: true type: integer max_version: 2.49 fixed_ips_quota_class_optional: <<: *fixed_ips_quota_class required: false fixed_ips_quota_details: description: | The object of detailed fixed ips quota, including in_use, limit and reserved number of fixed ips. in: body required: true type: object max_version: 2.35 fixed_ips_quota_optional: description: | The number of allowed fixed IP addresses for each tenant. Must be equal to or greater than the number of allowed servers. in: body required: false type: integer max_version: 2.35 fixed_ips_resp: description: | Fixed IP addresses with subnet IDs. in: body required: true type: array flavor: description: | The ID and links for the flavor for your server instance. A flavor is a combination of memory, disk size, and CPUs. in: body required: true type: object flavor_access: description: | A list of objects, each with the keys ``flavor_id`` and ``tenant_id``. in: body required: true type: array flavor_cpus: in: body required: true type: integer description: | The number of virtual CPUs that will be allocated to the server. flavor_cpus_2_47: min_version: 2.47 in: body required: true type: integer description: | The number of virtual CPUs that were allocated to the server. flavor_description: type: string in: body required: false min_version: 2.55 description: | A free form description of the flavor. Limited to 65535 characters in length. Only printable characters are allowed. flavor_description_required: type: string in: body required: true description: | A free form description of the flavor. Limited to 65535 characters in length. Only printable characters are allowed. flavor_description_resp: description: | The description of the flavor. in: body required: true type: string min_version: 2.55 flavor_description_resp_no_min: description: | The description of the flavor. in: body required: true type: string flavor_disabled: in: body required: false type: boolean description: | Whether or not the flavor has been administratively disabled. This is typically only visible to administrative users. flavor_disk: in: body required: true type: integer description: | The size of the root disk that will be created in GiB. If 0 the root disk will be set to exactly the size of the image used to deploy the instance. However, in this case filter scheduler cannot select the compute host based on the virtual image size. Therefore, 0 should only be used for volume booted instances or for testing purposes. 
Volume-backed instances can be enforced for flavors with zero root disk via the ``os_compute_api:servers:create:zero_disk_flavor`` policy rule. flavor_disk_2_47: min_version: 2.47 in: body required: true type: integer description: | The size of the root disk that was created in GiB. flavor_ephem_disk: in: body required: true type: integer description: | The size of the ephemeral disk that will be created, in GiB. Ephemeral disks may be written over on server state changes. So should only be used as a scratch space for applications that are aware of its limitations. Defaults to 0. flavor_ephem_disk_2_47: min_version: 2.47 in: body required: true type: integer description: | The size of the ephemeral disk that was created, in GiB. flavor_ephem_disk_in: in: body required: false type: integer description: | The size of the ephemeral disk that will be created, in GiB. Ephemeral disks may be written over on server state changes. So should only be used as a scratch space for applications that are aware of its limitations. Defaults to 0. flavor_extra_spec_key2: description: | The extra spec key of a flavor. It appears in the os-extra-specs' "create" and "update" REQUEST body, as well as the os-extra-specs' "create", "list", "show", and "update" RESPONSE body. in: body required: true type: string flavor_extra_spec_key_2_47: description: | The extra spec key of a flavor. in: body required: true type: string min_version: 2.47 flavor_extra_spec_value: description: | The extra spec value of a flavor. It appears in the os-extra-specs' "create" and "update" REQUEST body, as well as the os-extra-specs' "create", "list", "show", and "update" RESPONSE body. in: body required: true type: string flavor_extra_spec_value_2_47: description: | The extra spec value of a flavor. in: body required: true type: string min_version: 2.47 flavor_id_body: description: | The ID of the flavor. While people often make this look like an int, this is really a string. in: body required: true type: string flavor_id_body_2_46: description: | The ID of the flavor. While people often make this look like an int, this is really a string. in: body required: true type: string max_version: 2.46 flavor_id_body_create: description: | The ID of the flavor. While people often make this look like an int, this is really a string. If not provided, this defaults to a uuid. in: body required: false type: string flavor_is_public: description: | Whether the flavor is public (available to all projects) or scoped to a set of projects. Default is True if not specified. in: body required: true type: boolean flavor_is_public_in: description: | Whether the flavor is public (available to all projects) or scoped to a set of projects. Default is True if not specified. in: body required: false type: boolean flavor_links_2_46: description: | Links to the flavor resource. See `API Guide / Links and References `_ for more info. in: body required: true type: array max_version: 2.46 flavor_name: description: | The display name of a flavor. in: body required: true type: string flavor_name_optional: description: | The display name of a flavor. in: body required: false type: string flavor_original_name: description: | The display name of a flavor. in: body required: true type: string min_version: 2.47 flavor_ram: description: | The amount of RAM a flavor has, in MiB. in: body required: true type: integer flavor_ram_2_47: description: | The amount of RAM a flavor has, in MiB. 
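# The flavor body parameters above (``flavor_name``, ``flavor_ram``,
# ``flavor_cpus``, ``flavor_disk`` and, from microversion 2.55,
# ``flavor_description``) are used when creating a flavor. A minimal request
# body sketch (name and sizes are placeholders; the flavor ``id`` defaults to
# a UUID when omitted):
#
#   POST /flavors
#   {"flavor": {"name": "m1.custom", "ram": 512, "vcpus": 1, "disk": 1}}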
in: body required: true type: integer min_version: 2.47 flavor_rxtx_factor: description: | The receive / transmit factor (as a float) that will be set on ports if the network backend supports the QOS extension. Otherwise it will be ignored. It defaults to 1.0. in: body required: true type: float flavor_rxtx_factor_in: description: | The receive / transmit factor (as a float) that will be set on ports if the network backend supports the QOS extension. Otherwise it will be ignored. It defaults to 1.0. in: body required: false type: float flavor_server: description: | Before microversion 2.47 this contains the ID and links for the flavor used to boot the server instance. This can be an empty object in case flavor information is no longer present in the system. As of microversion 2.47 this contains a subset of the actual flavor information used to create the server instance, represented as a nested dictionary. in: body required: true type: object flavor_swap: description: | The size of a dedicated swap disk that will be allocated, in MiB. If 0 (the default), no dedicated swap disk will be created. Currently, the empty string ('') is used to represent 0. As of microversion 2.75 default return value of swap is 0 instead of empty string. in: body required: true type: integer flavor_swap_2_47: description: | The size of a dedicated swap disk that was allocated, in MiB. in: body required: true type: integer min_version: 2.47 flavor_swap_in: description: | The size of a dedicated swap disk that will be allocated, in MiB. If 0 (the default), no dedicated swap disk will be created. in: body required: false type: integer flavorRef: description: | The flavor reference, as an ID (including a UUID) or full URL, for the flavor for your server instance. in: body required: true type: string flavorRef_resize: description: | The flavor ID for resizing the server. The size of the disk in the flavor being resized to must be greater than or equal to the size of the disk in the current flavor. If a specified flavor ID is the same as the current one of the server, the request returns a ``Bad Request (400)`` response code. in: body required: true type: string flavors: description: | An array of flavor objects. in: body required: true type: array floating_ip: description: | The floating ip address. in: body required: true type: string floating_ip_bulk_object: description: | The floating ip bulk address object. in: body required: true type: object floating_ip_id_value: description: | The floating IP id value. .. note:: For nova-network, the value will be of type integer, whereas for neutron, the value will be of type string. in: body required: true type: string floating_ip_obj: description: | A floating IP address object. in: body required: true type: object floating_ip_pool_name: description: | The name of the floating IP pool. in: body required: true type: string floating_ip_pool_name_optional: description: | The name of the floating IP pool in: body required: false type: string floating_ip_pool_name_or_id: description: | The name or ID of the floating IP pool. in: body required: true type: string floating_ip_pools: description: | The ``floating_ip_pools`` object. in: body required: true type: array floating_ips: description: | The number of allowed floating IP addresses for each tenant. in: body required: true type: integer max_version: 2.35 floating_ips_list: description: | An array of floating ip objects. 
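# Illustrative example (not a parameter definition): a resize action request
# body using ``flavorRef_resize`` as described above; the flavor ID shown is a
# placeholder.
#
#   POST /servers/{server_id}/action
#   {
#       "resize": {
#           "flavorRef": "2"
#       }
#   }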
in: body required: true type: array floating_ips_quota_class: &floating_ips_quota_class description: | The number of allowed floating IP addresses for the quota class. in: body required: true type: integer max_version: 2.49 floating_ips_quota_class_optional: <<: *floating_ips_quota_class required: false floating_ips_quota_details: description: | The object of detailed floating ips quota, including in_use, limit and reserved number of floating ips. in: body required: true type: object max_version: 2.35 floating_ips_quota_optional: description: | The number of allowed floating IP addresses for each tenant. in: body required: false type: integer max_version: 2.35 force: description: | You can force the update even if the quota has already been used and the reserved quota exceeds the new quota. To force the update, specify the ``"force": "True"``. Default is ``False``. in: body required: false type: boolean force_evacuate: description: | Force an evacuation by not verifying the provided destination host by the scheduler. .. warning:: This could result in failures to actually evacuate the instance to the specified host. It is recommended to either not specify a host so that the scheduler will pick one, or specify a host without ``force=True`` set. Furthermore, this should not be specified when evacuating instances managed by a clustered hypervisor driver like ironic since you cannot specify a node, so the compute service will pick a node randomly, which may not be able to accommodate the instance. in: body required: false type: boolean min_version: 2.29 max_version: 2.67 force_live_migrate: description: | Force a live-migration by not verifying the provided destination host by the scheduler. .. warning:: This could result in failures to actually live migrate the instance to the specified host. It is recommended to either not specify a host so that the scheduler will pick one, or specify a host without ``force=True`` set. in: body required: false type: boolean min_version: 2.30 max_version: 2.67 force_migration_complete: description: | The action to force an in-progress live migration to complete. in: body required: true type: none force_snapshot: description: | Indicates whether to create a snapshot, even if the volume is attached. in: body required: false type: boolean # This is both the request and response parameter for # PUT /os-services/force-down which was added in 2.11. forced_down_2_11: description: | Whether or not this service was forced down manually by an administrator after the service was fenced. This value is useful to know that some 3rd party has verified the service should be marked down. in: body required: true type: boolean min_version: 2.11 # This is the optional request input parameter for # PUT /os-services/{service_id} added in 2.53. forced_down_2_53_in: description: | ``forced_down`` is a manual override to tell nova that the service in question has been fenced manually by the operations team (either hard powered off, or network unplugged). That signals that it is safe to proceed with ``evacuate`` or other operations that nova has safety checks to prevent for hosts that are up. .. warning:: Setting a service forced down without completely fencing it will likely result in the corruption of VMs on that host. in: body required: false type: boolean # This is the response output parameter for # PUT /os-services/{service_id} added in 2.53. forced_down_2_53_out: description: | Whether or not this service was forced down manually by an administrator after the service was fenced.
This value is useful to know that some 3rd party has verified the service should be marked down. in: body required: true type: boolean forceDelete: description: | The action. in: body required: true type: none free_ram_mb: description: | The free RAM in this hypervisor(in MiB). This does not take allocation ratios used for overcommit into account so this value may be negative. in: body required: true type: integer free_ram_mb_total: description: | The free RAM on all hypervisors(in MiB). This does not take allocation ratios used for overcommit into account so this value may be negative. in: body required: true type: integer from_port: description: | The port at start of range. in: body required: true type: integer group: description: | A ``group`` object. Includes the ``tenant_id`` and the source security group ``name``. in: body required: true type: object group_id: description: | The source security group ID. in: body required: false type: string guest_format: description: | Specifies the guest server disk file system format, such as ``ext2``, ``ext3``, ``ext4``, ``xfs`` or ``swap``. Swap block device mappings have the following restrictions: * The ``source_type`` must be ``blank`` * The ``destination_type`` must be ``local`` * There can only be one swap disk per server * The size of the swap disk must be less than or equal to the ``swap`` size of the flavor in: body required: false type: string host: description: | The name or ID of the host to which the server is evacuated. If you omit this parameter, the scheduler chooses a host. .. warning:: Prior to microversion 2.29, specifying a host will bypass validation by the scheduler, which could result in failures to actually evacuate the instance to the specified host, or over-subscription of the host. It is recommended to either not specify a host so that the scheduler will pick one, or specify a host with microversion >= 2.29 and without ``force=True`` set. in: body required: false type: string host_cpu: description: | The number of virtual CPUs on the host. in: body required: true type: integer host_disk_gb: description: | The disk size on the host (in GiB). in: body required: true type: integer host_done_num: description: | The number of the hosts whose instance audit tasks have been done. in: body required: true type: integer host_ip: description: | The IP address of the hypervisor's host. in: body required: true type: string host_maintenance_mode: description: | Mode of maintenance state, either ``on_maintenance`` or ``off_maintenance``. in: body required: false type: string host_maintenance_mode_in: description: | Mode of maintenance state, either ``enable`` or ``disable``. in: body required: false type: string host_memory_mb: description: | The memory size on the host (in MiB). in: body required: true type: integer host_migration: description: | The host to which to migrate the server. If this parameter is ``None``, the scheduler chooses a host. .. warning:: Prior to microversion 2.30, specifying a host will bypass validation by the scheduler, which could result in failures to actually migrate the instance to the specified host, or over-subscription of the host. It is recommended to either not specify a host so that the scheduler will pick one, or specify a host with microversion >= 2.30 and without ``force=True`` set. in: body required: true type: string host_migration_2_56: description: | The host to which to migrate the server. If you specify ``null`` or don't specify this parameter, the scheduler chooses a host. 
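# Illustrative example (not a parameter definition): one entry of a server
# create ``block_device_mapping_v2`` list that follows the ``guest_format``
# swap restrictions described above. The 1 GiB size is a placeholder and
# assumes the flavor's ``swap`` value allows it.
#
#   {
#       "source_type": "blank",
#       "destination_type": "local",
#       "guest_format": "swap",
#       "boot_index": -1,
#       "volume_size": 1
#   }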
in: body required: false type: string min_version: 2.56 host_name_body: description: | The name of the host. in: body required: true type: string host_not_run: description: | A list of the hosts whose instance audit tasks have not run. in: body required: true type: array host_not_run_num: description: | The number of the hosts whose instance audit tasks have not run. in: body required: true type: integer host_num: description: | The number of the hosts. in: body required: true type: integer host_power_action: description: | The power action on the host. in: body required: true type: string host_project: description: | The project id (or special name like total, used_now, used_max). in: body required: true type: string host_resource: description: | The resource info of the host. in: body required: true type: object host_resource_array: description: | The array that includes resource info of the host. in: body required: true type: array host_running_num: description: | The number of the hosts whose instance audit tasks are running. in: body required: true type: integer host_service: description: | The name of the service which is running on the host. in: body required: true type: string host_status: description: | The host status. Values where next value in list can override the previous: - ``UP`` if nova-compute up. - ``UNKNOWN`` if nova-compute not reported by servicegroup driver. - ``DOWN`` if nova-compute forced down. - ``MAINTENANCE`` if nova-compute is disabled. - Empty string indicates there is no host for server. This attribute appears in the response only if the policy permits. By default, only administrators can get this parameter. in: body required: false type: string min_version: 2.16 host_status_body: description: | The status of the current host, either ``enabled`` or ``disabled``. in: body required: false type: string host_status_body_in: description: | The status of the host, either ``enable`` or ``disable``. in: body required: false type: string host_status_update_rebuild: description: | The host status. Values where next value in list can override the previous: - ``UP`` if nova-compute up. - ``UNKNOWN`` if nova-compute not reported by servicegroup driver. - ``DOWN`` if nova-compute forced down. - ``MAINTENANCE`` if nova-compute is disabled. - Empty string indicates there is no host for server. This attribute appears in the response only if the policy permits. By default, only administrators can get this parameter. in: body required: false type: string min_version: 2.75 host_zone: description: | The available zone of the host. in: body required: true type: string hostId: description: | An ID string representing the host. This is a hashed value so will not actually look like a hostname, and is hashed with data from the project_id, so the same physical host as seen by two different project_ids, will be different. It is useful when within the same project you need to determine if two instances are on the same or different physical hosts for the purposes of availability or performance. in: body required: true type: string hosts: description: | An array of host information. in: body required: true type: array hosts.availability_zone: description: | An object containing a list of host information. The host information is comprised of host and service objects. The service object returns three parameters representing the states of the service: ``active``, ``available``, and ``updated_at``. 
in: body required: true type: object hosts.availability_zone_none: description: | It is always ``null``. in: body required: true type: none hours: description: | The duration that the server exists (in hours). in: body required: true type: float hours_optional: description: | The duration that the server exists (in hours). in: body required: false type: float hypervisor: description: | The hypervisor object. in: body required: true type: object hypervisor_count: description: | The number of hypervisors. in: body required: true type: integer hypervisor_diagnostics: description: | The hypervisor on which the VM is running. Examples for libvirt driver may be: ``qemu``, ``kvm`` or ``xen``. in: body required: true type: string min_version: 2.48 hypervisor_free_disk_gb: description: | The free disk remaining on this hypervisor(in GiB). This does not take allocation ratios used for overcommit into account so this value may be negative. in: body required: true type: integer hypervisor_free_disk_gb_total: description: | The free disk remaining on all hypervisors(in GiB). This does not take allocation ratios used for overcommit into account so this value may be negative. in: body required: true type: integer hypervisor_hostname: description: | The hypervisor host name provided by the Nova virt driver. For the Ironic driver, it is the Ironic node uuid. in: body required: true type: string hypervisor_id_body: description: | The id of the hypervisor. in: body required: true type: integer max_version: 2.52 hypervisor_id_body_no_version: description: | The id of the hypervisor. in: body required: true type: integer hypervisor_id_body_uuid: description: | The id of the hypervisor as a UUID. in: body required: true type: string min_version: 2.53 hypervisor_links: description: | Links to the hypervisors resource. See `API Guide / Links and References `_ for more info. in: body type: array min_version: 2.33 required: false hypervisor_os_diagnostics: description: | The hypervisor OS. in: body type: string required: true min_version: 2.48 hypervisor_servers: description: | A list of ``server`` objects. This field has become mandatory in microversion 2.75. If there are no servers on the hypervisor, an empty list is returned. in: body required: true type: array min_version: 2.53 hypervisor_servers_name: description: | The server name. in: body required: false type: string min_version: 2.53 hypervisor_servers_uuid: description: | The server ID. in: body required: false type: string min_version: 2.53 hypervisor_service: description: | The hypervisor service object. in: body required: true type: object hypervisor_state: description: | The state of the hypervisor. One of ``up`` or ``down``. in: body required: true type: string hypervisor_statistics: description: | The hypervisors statistics summary object. in: body required: true type: object hypervisor_status: description: | The status of the hypervisor. One of ``enabled`` or ``disabled``. in: body required: true type: string hypervisor_type: in: body required: true type: string description: | The hypervisor type for the agent. Currently only ``xen`` is supported. hypervisor_type_body: description: | The hypervisor type. in: body required: true type: string hypervisor_vcpus: description: | The number of vcpu in this hypervisor. This does not take allocation ratios used for overcommit into account so there may be disparity between this and the used count. in: body required: true type: integer hypervisor_vcpus_total: description: | The number of vcpu on all hypervisors.
This does not take allocation ratios used for overcommit into account so there may be disparity between this and the used count. in: body required: true type: integer hypervisor_vcpus_used: description: | The number of vcpu used in this hypervisor. in: body required: true type: integer hypervisor_vcpus_used_total: description: | The number of vcpu used on all hypervisors. in: body required: true type: integer hypervisor_version: description: | The hypervisor version. in: body required: true type: integer hypervisors: description: | An array of hypervisor information. in: body required: true type: array image: description: | The UUID and links for the image for your server instance. The ``image`` object will be an empty string when you boot the server from a volume. in: body required: true type: object image_id_body: description: | The ID of the Image. in: body required: true type: string image_metadata: description: | Metadata key and value pairs for the image. The maximum size for each metadata key and value pair is 255 bytes. in: body required: false type: object image_metadata_items: description: | The number of allowed metadata items for each image. Starting from version 2.39 this field is dropped from 'os-limits' response, because 'image-metadata' proxy API was deprecated. in: body required: true type: integer max_version: 2.38 image_name: description: | The display name of an Image. in: body required: true type: string image_progress: description: | A percentage value of the image save progress. This can be one of: - ``ACTIVE``: 100 - ``SAVING``: 25 or 50 in: body required: true type: integer image_server: description: | The server booted from image. in: body required: false type: object image_size: description: | The size of the image. in: body required: true type: integer image_status: description: | The status of image, as a string. This can be one of: - ``ACTIVE``: image is in active state - ``SAVING``: image is in queued or in saving process - ``DELETED``: image is deleted or in progress of deletion - ``ERROR``: image is in error state - ``UNKNOWN``: image is in unknown state in: body required: true type: string imageRef: description: | The UUID of the image to use for your server instance. This is not required in case of boot from volume. In all other cases it is required and must be a valid UUID otherwise API will return 400. in: body required: false type: string imageRef_rebuild: description: | The UUID of the image to rebuild for your server instance. It must be a valid UUID otherwise API will return 400. If rebuilding a volume-backed server with a new image (an image different from the image used when creating the volume), the API will return 400. For non-volume-backed servers, specifying a new image will result in validating that the image is acceptable for the current compute host on which the server exists. If the new image is not valid, the server will go into ``ERROR`` status. in: body required: true type: string images: description: | An array of Image objects. in: body required: true type: array injected_file_content_bytes: description: | The number of allowed bytes of content for each injected file. in: body required: true type: integer max_version: 2.56 injected_file_content_bytes_quota_details: description: | The object of detailed injected file content bytes quota, including in_use, limit and reserved number of injected file content bytes. 
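# Illustrative example (not a parameter definition): a minimal rebuild action
# request body using ``imageRef_rebuild`` as described above; the image UUID
# is a placeholder.
#
#   POST /servers/{server_id}/action
#   {
#       "rebuild": {
#           "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b"
#       }
#   }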
in: body required: true type: object max_version: 2.56 injected_file_content_bytes_quota_optional: description: | The number of allowed bytes of content for each injected file. in: body required: false type: integer max_version: 2.56 injected_file_path_bytes: description: | The number of allowed bytes for each injected file path. in: body required: true type: integer max_version: 2.56 injected_file_path_bytes_quota_details: description: | The object of detailed injected file path bytes quota, including in_use, limit and reserved number of injected file path bytes. in: body required: true type: object max_version: 2.56 injected_file_path_bytes_quota_optional: description: | The number of allowed bytes for each injected file path. in: body required: false type: integer max_version: 2.56 injected_files: &injected_files description: | The number of allowed injected files for each tenant. in: body required: true type: integer max_version: 2.56 injected_files_quota_class: &injected_files_quota_class <<: *injected_files description: | The number of allowed injected files for the quota class. injected_files_quota_class_optional: <<: *injected_files_quota_class required: false injected_files_quota_details: description: | The object of detailed injected files quota, including in_use, limit and reserved number of injected files. in: body required: true type: object max_version: 2.56 injected_files_quota_optional: description: | The number of allowed injected files for each tenant. in: body required: false type: integer max_version: 2.56 injectNetworkInfo: description: | The action. in: body required: true type: none instance_action_events_2_50: description: | The events which occurred in this action in descending order of creation. Policy defaults enable only users with the administrative role to see instance action event information. Cloud providers can change these permissions through the ``policy.json`` file. in: body required: false type: array max_version: 2.50 instance_action_events_2_51: description: | The events which occurred in this action in descending order of creation. Policy defaults enable only users with the administrative role or the owner of the server to see instance action event information. Cloud providers can change these permissions through the ``policy.json`` file. in: body required: true type: array min_version: 2.51 instance_actions_next_links: description: | Links pertaining to the instance action. This parameter is returned when paging and more data is available. See `Paginated collections `__ for more info. in: body required: false type: array min_version: 2.58 instance_id_body: description: | The UUID of the server. in: body required: true type: string instance_id_cloudpipe: description: | The UUID of the cloudpipe instance. in: body required: true type: string instance_name: description: | The name of the instance. in: body required: true type: string instance_usage_audit_log: description: | The object of instance usage audit logs. in: body required: true type: object instance_usage_audit_log_message: description: | The log message of the instance usage audit task. in: body required: true type: string instance_usage_audit_logs: description: | The object of instance usage audit log information. in: body required: true type: object instance_usage_audit_task_state: description: | The state of the instance usage audit task. ``DONE`` or ``RUNNING``. in: body required: true type: string instanceAction: description: | The instance action object. 
in: body required: true type: object instanceActions: description: | List of the actions for the given instance in descending order of creation. in: body required: true type: array instances: &instances description: | The number of allowed servers for each tenant. in: body required: true type: integer instances_quota_class: &instances_quota_class <<: *instances description: | The number of allowed servers for the quota class. instances_quota_class_optional: <<: *instances_quota_class required: false instances_quota_details: description: | The object of detailed servers quota, including in_use, limit and reserved number of instances. in: body required: true type: object instances_quota_optional: description: | The number of allowed servers for each tenant. in: body required: false type: integer instances_usage_audit: description: | The number of instances. in: body required: true type: integer interfaceAttachment: description: | Specify the ``interfaceAttachment`` action in the request body. in: body required: true type: string interfaceAttachment_resp: description: | The interface attachment. in: body required: true type: object interfaceAttachments: description: | List of the interface attachments. in: body required: true type: array internal_access_path: description: | The id representing the internal access path. in: body required: false type: string ip_address: description: | The IP address. in: body required: true type: string ip_address_req: description: | The IP address. It is required when ``fixed_ips`` is specified. in: body required: true type: string ip_host: description: | The name or ID of the host associated to the IP. in: body required: true type: string ip_protocol: description: | The IP protocol. A valid value is ICMP, TCP, or UDP. in: body required: true type: string ip_range: description: | The range of IP addresses to use for creating floating IPs. in: body required: true type: string ip_range_delete: description: | The range of IP addresses from which to bulk-delete floating IPs. in: body required: true type: string key_name: description: | Key pair name. .. note:: The ``null`` value was allowed in the Nova legacy v2 API, but due to strict input validation, it is not allowed in the Nova v2.1 API. in: body required: false type: string key_name_rebuild_req: description: | Key pair name for rebuild API. If ``null`` is specified, the existing keypair is unset. .. note:: Users within the same project are able to rebuild other user's instances in that project with a new keypair. Keys are owned by users (which is the only resource that's true of). Servers are owned by projects. Because of this a rebuild with a key_name is looking up the keypair by the user calling rebuild. in: body required: false type: string min_version: 2.54 key_name_rebuild_resp: description: | The name of associated key pair, if any. in: body required: true type: string min_version: 2.54 key_name_resp: description: | The name of associated key pair, if any. in: body required: true type: string key_name_resp_update: description: | The name of associated key pair, if any. in: body required: true type: string min_version: 2.75 key_pairs: &key_pairs description: | The number of allowed key pairs for each user. in: body required: true type: integer key_pairs_quota_class: &key_pairs_quota_class <<: *key_pairs description: | The number of allowed key pairs for the quota class. 
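# Illustrative example (not a parameter definition): importing an existing
# public key using ``keypair_name``, ``keypair_public_key_in`` and
# ``keypair_type_in``; the ``type`` field requires microversion 2.2 or later
# and the key material shown is truncated placeholder data.
#
#   POST /os-keypairs
#   {
#       "keypair": {
#           "name": "my-key",
#           "type": "ssh",
#           "public_key": "ssh-ed25519 AAAA... user@example.com"
#       }
#   }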
key_pairs_quota_class_optional: <<: *key_pairs_quota_class required: false key_pairs_quota_details: description: | The object of detailed key pairs quota, including in_use, limit and reserved number of key pairs. .. note:: ``in_use`` field value for keypair quota details is always zero. In Nova, key_pairs are a user-level resource, not a project- level resource, so for legacy reasons, the keypair in-use information is not counted. in: body required: true type: object key_pairs_quota_optional: description: | The number of allowed key pairs for each user. in: body required: false type: integer keypair: in: body type: object required: true description: | Keypair object keypair_deleted: description: | A boolean indicates whether this keypair is deleted or not. The value is always ``false`` (not deleted). in: body required: true type: boolean keypair_fingerprint: in: body required: true type: string description: | The fingerprint for the keypair. keypair_id: description: | The keypair ID. in: body required: true type: integer keypair_links: description: | Links pertaining to keypair. See `API Guide / Links and References `_ for more info. in: body type: array required: false min_version: 2.35 keypair_name: in: body required: true type: string description: | A name for the keypair which will be used to reference it later. keypair_private_key: description: | If you do not provide a public key on create, a new keypair will be built for you, and the private key will be returned during the initial create call. Make sure to save this, as there is no way to get this private key again in the future. in: body required: false type: string keypair_public_key: description: | The keypair public key. in: body required: true type: string keypair_public_key_in: description: | The public ssh key to import. If you omit this value, a keypair is generated for you. in: body required: false type: string keypair_type: in: body required: true type: string description: | The type of the keypair. Allowed values are ``ssh`` or ``x509``. min_version: 2.2 keypair_type_in: in: body required: false type: string description: | The type of the keypair. Allowed values are ``ssh`` or ``x509``. min_version: 2.2 keypair_updated_deleted_at: description: | It is always ``null``. in: body required: true type: none # NOTE(mriedem): This is the user_id description for the keypair create/show # response which has always been returned. keypair_userid: in: body required: true type: string description: | The user_id for a keypair. keypair_userid_in: in: body required: false type: string description: | The user_id for a keypair. This allows administrative users to upload keys for other users than themselves. min_version: 2.10 keypairs: in: body type: array required: true description: | Array of Keypair objects length: description: | The number of lines to fetch from the end of console log. All lines will be returned if this is not specified. .. note:: This parameter can be specified as not only 'integer' but also 'string'. in: body required: false type: integer limits: description: | Data structure that contains both absolute limits within a deployment. in: body required: true type: object limits_absolutes: description: | Name/value pairs that set quota limits within a deployment and Name/value pairs of resource usage. in: body required: true type: object limits_rate_uri: description: | A human readable URI that is used as a friendly description of where the api rate limit is applied. 
in: body required: true type: string limits_rates: description: | An empty list for backwards compatibility purposes. in: body required: true type: array links: description: | Links to the resources in question. See `API Guide / Links and References `_ for more info. in: body required: true type: array local_gb: description: | The disk in this hypervisor(in GiB). This does not take allocation ratios used for overcommit into account so there may be disparity between this and the used count. in: body required: true type: integer local_gb_simple_tenant_usage: description: | The sum of the root disk size of the server and the ephemeral disk size of it (in GiB). in: body required: true type: integer local_gb_simple_tenant_usage_optional: description: | The sum of the root disk size of the server and the ephemeral disk size of it (in GiB). in: body required: false type: integer local_gb_total: description: | The disk on all hypervisors(in GiB). This does not take allocation ratios used for overcommit into account so there may be disparity between this and the used count. in: body required: true type: integer local_gb_used: description: | The disk used in this hypervisor(in GiB). in: body required: true type: integer local_gb_used_total: description: | The disk used on all hypervisors(in GiB). in: body required: true type: integer lock: description: | The action to lock a server. This parameter can be ``null``. Up to microversion 2.73, this parameter should be ``null``. in: body required: true type: object locked: description: | True if the instance is locked, otherwise False. in: body required: true type: boolean min_version: 2.9 locked_reason_req: description: | The reason behind locking a server. Limited to 255 characters in length. in: body required: false type: string min_version: 2.73 locked_reason_resp: description: | The reason behind locking a server. in: body required: true type: string min_version: 2.73 mac_addr: description: | The MAC address. in: body required: true type: string mac_address: description: | The MAC address. in: body required: true type: string md5hash: description: | The MD5 hash. in: body required: true type: string media_types: description: | The `media types `_. It is an array of a fixed dict. .. note:: It is vestigial and provides no useful information. It will be deprecated and removed in the future. in: body required: true type: array members: description: | A list of members in the server group. in: body required: true type: array memory_details_diagnostics: description: | The dictionary with information about VM memory usage. Following fields are presented in the dictionary: - ``maximum`` - Amount of memory provisioned for the VM in MiB (Integer) - ``used`` - Amount of memory that is currently used by the guest operating system and its applications in MiB (Integer) in: body required: true type: array min_version: 2.48 memory_mb: description: | The memory of this hypervisor(in MiB). This does not take allocation ratios used for overcommit into account so there may be disparity between this and the used count. in: body required: true type: integer memory_mb_simple_tenant_usage: description: | The memory size of the server (in MiB). in: body required: true type: integer memory_mb_simple_tenant_usage_optional: description: | The memory size of the server (in MiB). in: body required: false type: integer memory_mb_total: description: | The memory of all hypervisors(in MiB).
This does not take allocation ratios used for overcommit into account so there may be disparity between this and the used count. in: body required: true type: integer memory_mb_used: description: | The memory used in this hypervisor(in MiB). in: body required: true type: integer memory_mb_used_total: description: | The memory used on all hypervisors(in MiB). in: body required: true type: integer message: description: | The related error message for when an action fails. in: body required: true type: string meta: description: | The object of detailed key metadata items. in: body required: true type: object metadata: description: | Metadata key and value pairs. The maximum size of the metadata key and value is 255 bytes each. in: body required: false type: object metadata_compat: description: | A dictionary of metadata key-and-value pairs, which is maintained for backward compatibility. in: body required: true type: object metadata_items: description: | The number of allowed metadata items for each server. in: body required: true type: integer metadata_items_quota_details: description: | The object of detailed key metadata items quota, including in_use, limit and reserved number of metadata items. in: body required: true type: object metadata_items_quota_optional: description: | The number of allowed metadata items for each server. in: body required: false type: integer metadata_object: description: | Metadata key and value pairs. The maximum size for each metadata key and value pair is 255 bytes. in: body required: true type: object metadata_server_group_max_2_63: description: | Metadata key and value pairs. The maximum size for each metadata key and value pair is 255 bytes. It's always empty and only used for keeping compatibility. in: body required: true type: object max_version: 2.63 migrate: description: | The action to cold migrate a server. This parameter can be ``null``. Up to microversion 2.55, this parameter should be ``null``. in: body required: true type: object migrate_dest_compute: description: | The target compute for a migration. in: body required: true type: string migrate_dest_host: description: | The target host for a migration. in: body required: true type: string migrate_dest_node: description: | The target node for a migration. in: body required: true type: string migrate_disk_processed_bytes: description: | The amount of disk, in bytes, that has been processed during the migration. in: body required: true type: integer migrate_disk_remaining_bytes: description: | The amount of disk, in bytes, that still needs to be migrated. in: body required: true type: integer migrate_disk_total_bytes: description: | The total amount of disk, in bytes, that needs to be migrated. in: body required: true type: integer migrate_memory_processed_bytes: description: | The amount of memory, in bytes, that has been processed during the migration. in: body required: true type: integer migrate_memory_remaining_bytes: description: | The amount of memory, in bytes, that still needs to be migrated. in: body required: true type: integer migrate_memory_total_bytes: description: | The total amount of memory, in bytes, that needs to be migrated. in: body required: true type: integer migrate_source_compute: description: | The source compute for a migration. in: body required: true type: string migrate_source_node: description: | The source node for a migration. in: body required: true type: string migrate_status: description: | The current status of the migration. 
in: body required: true type: string migration: description: | The server migration object. in: body required: true type: object migration_id: description: | The ID of the server migration. in: body required: true type: integer migration_links_2_23: description: | Links to the migration. This parameter is returned if the migration type is ``live-migration`` and the migration status is one of ``queued``, ``preparing``, ``running`` and ``post-migrating``. See `Paginated collections `__ for more info. in: body required: false type: array min_version: 2.23 migration_new_flavor_id: description: | In ``resize`` case, the flavor ID for resizing the server. In the other cases, this parameter is same as the flavor ID of the server when the migration was started. .. note:: This is an internal ID and is not exposed in any other API. In particular, this is not the ID specified or automatically generated during flavor creation or returned via the ``GET /flavors`` API. in: body required: true type: integer migration_next_links_2_59: description: | Links pertaining to the migration. This parameter is returned when paging and more data is available. See `Paginated collections `__ for more info. in: body required: false type: array min_version: 2.59 migration_old_flavor_id: description: | The flavor ID of the server when the migration was started. .. note:: This is an internal ID and is not exposed in any other API. In particular, this is not the ID specified or automatically generated during flavor creation or returned via the ``GET /flavors`` API. in: body required: true type: integer migration_type_2_23: description: | The type of the server migration. This is one of ``live-migration``, ``migration``, ``resize`` and ``evacuation``. in: body required: true type: string min_version: 2.23 migration_uuid: description: | The UUID of the migration. in: body required: true type: string min_version: 2.59 migrations: description: | The list of server migration objects. in: body required: true type: array minDisk_body: description: | The minimum amount of disk space an image requires to boot, in GiB. For example, ``100``. in: body required: true type: integer minRam_body: description: | The minimum amount of RAM an image requires to function, in MiB. For example, ``512``. in: body required: true type: integer name: description: | The security group name. in: body required: true type: string name_sec_group_optional: description: | The security group name. in: body required: false type: string name_server_group: description: | The name of the server group. in: body required: true type: string name_update_rebuild: description: | The security group name. in: body required: true type: string min_version: 2.75 namespace: description: | A URL pointing to the namespace for this extension. in: body required: true type: string net_id: description: | The ID of the network for which you want to create a port interface. The ``net_id`` and ``port_id`` parameters are mutually exclusive. If you do not specify the ``net_id`` parameter, the OpenStack Networking API v2.0 uses the network information cache that is associated with the instance. in: body required: false type: string net_id_resp: description: | The network ID. in: body required: true type: string net_id_resp_2_12: description: | The network ID. in: body required: true type: string min_version: 2.12 network_label_body: description: | List of IP address and IP version pairs. The ``network_label`` stands for the name of a network, such as ``public`` or ``private``. 
in: body required: true type: array network_uuid: description: | To provision the server instance with a NIC for a network, specify the UUID of the network in the ``uuid`` attribute in a ``networks`` object. Required if you omit the ``port`` attribute. Starting with microversion 2.37, this value is strictly enforced to be in UUID format. in: body required: false type: string networks: description: | A list of ``network`` objects. Required parameter when there are multiple networks defined for the tenant. When you do not specify the networks parameter, the server attaches to the only network created for the current tenant. Optionally, you can create one or more NICs on the server. To provision the server instance with a NIC for a network, specify the UUID of the network in the ``uuid`` attribute in a ``networks`` object. To provision the server instance with a NIC for an already existing port, specify the port-id in the ``port`` attribute in a ``networks`` object. If multiple networks are defined, the order in which they appear in the guest operating system will not necessarily reflect the order in which they are given in the server boot request. Guests should therefore not depend on device order to deduce any information about their network devices. Instead, device role tags should be used: introduced in 2.32, broken in 2.37, and re-introduced and fixed in 2.42, the ``tag`` is an optional string attribute that can be used to assign a tag to a virtual network interface. This tag is then exposed to the guest in the metadata API and the config drive and is associated to hardware metadata for that network interface, such as bus (ex: PCI), bus address (ex: 0000:00:02.0), and MAC address. A bug has caused the ``tag`` attribute to no longer be accepted starting with version 2.37. Therefore, network interfaces could only be tagged in versions 2.32 to 2.36 inclusively. Version 2.42 has restored the ``tag`` attribute. Starting with microversion 2.37, this field is required and the special string values *auto* and *none* can be specified for networks. *auto* tells the Compute service to use a network that is available to the project, if one exists. If one does not exist, the Compute service will attempt to automatically allocate a network for the project (if possible). *none* tells the Compute service to not allocate a network for the instance. The *auto* and *none* values cannot be used with any other network values, including other network uuids, ports, fixed IPs or device tags. These are requested as strings for the networks value, not in a list. See the associated example. in: body required: true type: array networks_quota_optional: &networks_quota_optional description: | The number of private networks that can be created per project. in: body required: false type: integer max_version: 2.49 networks_quota_set_optional: <<: *networks_quota_optional max_version: 2.35 new_file: description: | The name of the qcow2 file that Block Storage creates, which becomes the active image for the VM. in: body required: true type: string nic_details_diagnostics: description: | The list of dictionaries with detailed information about VM NICs.
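# Illustrative example (not a parameter definition): two ways of populating
# the ``networks`` parameter described above. The special string form
# requires microversion 2.37 or later, and the ``tag`` attribute is only
# accepted in 2.32 to 2.36 and 2.42 or later; the UUID is a placeholder.
#
#   "networks": "auto"
#
#   "networks": [
#       {"uuid": "27aa8c1c-d6b8-4474-b7f7-6080da866306", "tag": "nic1"}
#   ]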
Following fields are presented in each dictionary: - ``mac_address`` - Mac address of the interface (String) - ``rx_octets`` - Received octets (Integer) - ``rx_errors`` - Received errors (Integer) - ``rx_drop`` - Received packets dropped (Integer) - ``rx_packets`` - Received packets (Integer) - ``rx_rate`` - Receive rate in bytes (Integer) - ``tx_octets`` - Transmitted Octets (Integer) - ``tx_errors`` - Transmit errors (Integer) - ``tx_drop`` - Transmit dropped packets (Integer) - ``tx_packets`` - Transmit packets (Integer) - ``tx_rate`` - Transmit rate in bytes (Integer) in: body required: true type: array min_version: 2.48 no_device: description: | It is no device if ``True``. in: body required: false type: boolean num_cpus_diagnostics: description: | The number of vCPUs. in: body required: true type: integer min_version: 2.48 num_disks_diagnostics: description: | The number of disks. in: body required: true type: integer min_version: 2.48 num_nics_diagnostics: description: | The number of vNICs. in: body required: true type: integer min_version: 2.48 on_shared_storage: description: | Server on shared storage. .. note:: Starting since version 2.14, Nova automatically detects whether the server is on shared storage or not. Therefore this parameter was removed. in: body required: true type: boolean max_version: 2.13 os: description: | The name of the operating system. in: body required: true type: string os-availability-zone:availability_zone: description: | The availability zone from which to launch the server. When you provision resources, you specify from which availability zone you want your instance to be built. Typically, an admin user will use availability zones to arrange OpenStack compute hosts into logical groups. An availability zone provides a form of physical isolation and redundancy from other availability zones. For instance, if some racks in your data center are on a separate power source, you can put servers in those racks in their own availability zone. Availability zones can also help separate different classes of hardware. By segregating resources into availability zones, you can ensure that your application resources are spread across disparate machines to achieve high availability in the event of hardware or other failure. See `Availability Zones (AZs) `_ for more information. You can list the available availability zones by calling the :ref:`os-availability-zone` API, but you should avoid using the `default availability zone `_ when creating the server. The default availability zone is named ``nova``. This AZ is only shown when listing the availability zones as an admin. in: body required: false type: string OS-DCF:diskConfig: description: | Controls how the API partitions the disk when you create, rebuild, or resize servers. A server inherits the ``OS-DCF:diskConfig`` value from the image from which it was created, and an image inherits the ``OS-DCF:diskConfig`` value from the server from which it was created. To override the inherited setting, you can include this attribute in the request body of a server create, rebuild, or resize request. If the ``OS-DCF:diskConfig`` value for an image is ``MANUAL``, you cannot create a server from that image and set its ``OS-DCF:diskConfig`` value to ``AUTO``. A valid value is: - ``AUTO``. The API builds the server with a single partition the size of the target flavor disk. The API automatically adjusts the file system to fit the entire partition. - ``MANUAL``. 
The API builds the server by using whatever partition scheme and file system is in the source image. If the target flavor disk is larger, the API does not partition the remaining disk space. in: body required: false type: string OS-EXT-AZ:availability_zone: description: | The availability zone name. in: body required: true type: string OS-EXT-AZ:availability_zone_optional: description: | The availability zone name. in: body required: false type: string OS-EXT-AZ:availability_zone_update_rebuild: description: | The availability zone name. in: body required: true type: string min_version: 2.75 OS-EXT-SRV-ATTR:host: description: | The name of the compute host on which this instance is running. Appears in the response for administrative users only. in: body required: true type: string OS-EXT-SRV-ATTR:host_update_rebuild: description: | The name of the compute host on which this instance is running. Appears in the response for administrative users only. in: body required: true type: string min_version: 2.75 OS-EXT-SRV-ATTR:hypervisor_hostname: description: | The hypervisor host name provided by the Nova virt driver. For the Ironic driver, it is the Ironic node uuid. Appears in the response for administrative users only. in: body required: true type: string OS-EXT-SRV-ATTR:hypervisor_hostname_update_rebuild: description: | The hypervisor host name provided by the Nova virt driver. For the Ironic driver, it is the Ironic node uuid. Appears in the response for administrative users only. in: body required: true type: string min_version: 2.75 OS-EXT-SRV-ATTR:instance_name: description: | The instance name. The Compute API generates the instance name from the instance name template. Appears in the response for administrative users only. in: body required: true type: string OS-EXT-SRV-ATTR:instance_name_update_rebuild: description: | The instance name. The Compute API generates the instance name from the instance name template. Appears in the response for administrative users only. in: body required: true type: string min_version: 2.75 OS-EXT-STS:power_state: description: | The power state of the instance. This is an enum value that is mapped as:: 0: NOSTATE 1: RUNNING 3: PAUSED 4: SHUTDOWN 6: CRASHED 7: SUSPENDED in: body required: true type: integer OS-EXT-STS:power_state_update_rebuild: description: | The power state of the instance. This is an enum value that is mapped as:: 0: NOSTATE 1: RUNNING 3: PAUSED 4: SHUTDOWN 6: CRASHED 7: SUSPENDED in: body required: true type: integer min_version: 2.75 OS-EXT-STS:task_state: description: | The task state of the instance. in: body required: true type: string OS-EXT-STS:task_state_update_rebuild: description: | The task state of the instance. in: body required: true type: string min_version: 2.75 OS-EXT-STS:vm_state: description: | The VM state. in: body required: true type: string OS-EXT-STS:vm_state_update_rebuild: description: | The VM state. in: body required: true type: string min_version: 2.75 os-extended-volumes:volumes_attached: description: | The attached volumes, if any. in: body required: true type: array os-extended-volumes:volumes_attached.delete_on_termination: description: | A flag indicating if the attached volume will be deleted when the server is deleted. By default this is False. in: body required: true type: boolean min_version: 2.3 os-extended-volumes:volumes_attached.delete_on_termination_update_rebuild: description: | A flag indicating if the attached volume will be deleted when the server is deleted. By default this is False. 
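# Illustrative example (not a parameter definition): setting
# ``OS-DCF:diskConfig`` in a server create request body as described above;
# the other required ``server`` fields are omitted here for brevity.
#
#   "server": {
#       "OS-DCF:diskConfig": "AUTO"
#   }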
in: body required: true type: boolean min_version: 2.75 os-extended-volumes:volumes_attached.id: description: | The attached volume ID. in: body required: true type: string os-extended-volumes:volumes_attached.id_update_rebuild: description: | The attached volume ID. in: body required: true type: string min_version: 2.75 os-extended-volumes:volumes_attached_update_rebuild: description: | The attached volumes, if any. in: body required: true type: array min_version: 2.75 os-getConsoleOutput: description: | The action to get console output of the server. in: body required: true type: object os-getRDPConsole: description: | The action. in: body required: true type: object os-getRDPConsole-type: description: | The type of RDP console. The only valid value is ``rdp-html5``. in: body required: true type: string os-getRDPConsole-url: description: | The URL used to connect to the RDP console. in: body required: true type: string os-getSerialConsole: description: | The action. in: body required: true type: object os-getSerialConsole-type: description: | The type of serial console. The only valid value is ``serial``. in: body required: true type: string os-getSerialConsole-url: description: | The URL used to connect to the Serial console. in: body required: true type: string os-getSPICEConsole: description: | The action. in: body required: true type: object os-getSPICEConsole-type: description: | The type of SPICE console. The only valid value is ``spice-html5``. in: body required: true type: string os-getSPICEConsole-url: description: | The URL used to connect to the SPICE console. in: body required: true type: string os-getVNCConsole: description: | The action. in: body required: true type: object os-getVNCConsole-type: description: | The type of VNC console. The only valid value is ``novnc``. in: body required: true type: string os-getVNCConsole-url: description: | The URL used to connect to the VNC console. in: body required: true type: string os-migrateLive: description: | The action. in: body required: true type: object os-resetState: description: | The action. in: body required: true type: object os-resetState_state: description: | The state of the server to be set. Valid values are ``active`` and ``error``. in: body required: true type: string OS-SRV-USG:launched_at: description: | The date and time when the server was launched. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm For example, ``2015-08-27T09:49:58-05:00``. The ``±hh:mm`` value, if included, is the time zone as an offset from UTC. If the ``deleted_at`` date and time stamp is not set, its value is ``null``. in: body required: true type: string OS-SRV-USG:launched_at_update_rebuild: description: | The date and time when the server was launched. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm For example, ``2015-08-27T09:49:58-05:00``. The ``±hh:mm`` value, if included, is the time zone as an offset from UTC. If the ``deleted_at`` date and time stamp is not set, its value is ``null``. in: body required: true type: string min_version: 2.75 OS-SRV-USG:terminated_at: description: | The date and time when the server was deleted. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm For example, ``2015-08-27T09:49:58-05:00``. The ``±hh:mm`` value, if included, is the time zone as an offset from UTC. If the ``deleted_at`` date and time stamp is not set, its value is ``null``.
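# Illustrative example (not a parameter definition): an os-resetState action
# request body using ``os-resetState_state`` as described above.
#
#   POST /servers/{server_id}/action
#   {
#       "os-resetState": {
#           "state": "error"
#       }
#   }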
in: body required: true type: string OS-SRV-USG:terminated_at_update_rebuild: description: | The date and time when the server was deleted. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm For example, ``2015-08-27T09:49:58-05:00``. The ``±hh:mm`` value, if included, is the time zone as an offset from UTC. If the ``deleted_at`` date and time stamp is not set, its value is ``null``. in: body required: true type: string min_version: 2.75 os-start: description: | The action to start a stopped server. in: body required: true type: none os-stop: description: | The action to stop a running server. in: body required: true type: none os:scheduler_hints: description: | The dictionary of data to send to the scheduler. Alternatively, you can specify ``OS-SCH-HNT:scheduler_hints`` as the key in the request body. .. note:: This is a top-level key in the request body, not part of the `server` portion of the request body. There are a few caveats with scheduler hints: * The request validation schema is per hint. For example, some require a single string value, and some accept a list of values. * Hints are only used based on the cloud scheduler configuration, which varies per deployment. * Hints are pluggable per deployment, meaning that a cloud can have custom hints which may not be available in another cloud. For these reasons, it is important to consult each cloud's user documentation to know what is available for scheduler hints. in: body required: false type: object os:scheduler_hints_build_near_host_ip: description: | Schedule the server on a host in the network specified with this parameter and a cidr (``os:scheduler_hints.cidr``). It is available when ``SimpleCIDRAffinityFilter`` is available on cloud side. in: body required: false type: string os:scheduler_hints_cidr: description: | Schedule the server on a host in the network specified with an IP address (``os:scheduler_hints:build_near_host_ip``) and this parameter. If ``os:scheduler_hints:build_near_host_ip`` is specified and this parameter is omitted, ``/24`` is used. It is available when ``SimpleCIDRAffinityFilter`` is available on cloud side. in: body required: false type: string os:scheduler_hints_different_cell: description: | A list of cell routes or a cell route (string). Schedule the server in a cell that is not specified. It is available when ``DifferentCellFilter`` is available on cloud side that is cell v1 environment. in: body required: false type: array os:scheduler_hints_different_host: description: | A list of server UUIDs or a server UUID. Schedule the server on a different host from a set of servers. It is available when ``DifferentHostFilter`` is available on cloud side. in: body required: false type: array os:scheduler_hints_group: description: | The server group UUID. Schedule the server according to a policy of the server group (``anti-affinity``, ``affinity``, ``soft-anti-affinity`` or ``soft-affinity``). It is available when ``ServerGroupAffinityFilter``, ``ServerGroupAntiAffinityFilter``, ``ServerGroupSoftAntiAffinityWeigher``, ``ServerGroupSoftAffinityWeigher`` are available on cloud side. in: body required: false type: string os:scheduler_hints_query: description: | Schedule the server by using a custom filter in JSON format. For example:: "query": "[\">=\",\"$free_ram_mb\",1024]" It is available when ``JsonFilter`` is available on cloud side. in: body required: false type: string os:scheduler_hints_same_host: description: | A list of server UUIDs or a server UUID.
Schedule the server on the same host as another server in a set of servers. It is available when ``SameHostFilter`` is available on cloud side. in: body required: false type: array os:scheduler_hints_target_cell: description: | A target cell name. Schedule the server in a host in the cell specified. It is available when ``TargetCellFilter`` is available on cloud side that is cell v1 environment. in: body required: false type: string overall_status: description: | The overall status of instance audit tasks. :: M of N hosts done. K errors. The ``M`` value is the number of hosts whose instance audit tasks have been done in the period. The ``N`` value is the number of all hosts. The ``K`` value is the number of hosts whose instance audit tasks cause errors. If instance audit tasks have been done at all hosts in the period, the overall status is as follows: :: ALL hosts done. K errors. in: body required: true type: string para: description: | The parameter object. in: body required: true type: object parent_group_id: description: | Security group ID. in: body required: true type: string password: description: | The password returned from metadata server. in: body required: false type: string path: description: | The path field in the personality object. in: body required: true type: string max_version: 2.56 pause: description: | The action to pause a server. in: body required: true type: none period_beginning: description: | The beginning time of the instance usage audit period. For example, ``2016-05-01 00:00:00``. in: body required: true type: string period_ending: description: | The ending time of the instance usage audit period. For example, ``2016-06-01 00:00:00``. in: body required: true type: string personality: description: | The file path and contents, text only, to inject into the server at launch. The maximum size of the file path data is 255 bytes. The maximum limit is the number of allowed bytes in the decoded, rather than encoded, data. in: body required: false type: array max_version: 2.56 policies: description: | A list of exactly one policy name to associate with the server group. The current valid policy names are: - ``anti-affinity`` - servers in this group must be scheduled to different hosts. - ``affinity`` - servers in this group must be scheduled to the same host. - ``soft-anti-affinity`` - servers in this group should be scheduled to different hosts if possible, but if not possible then they should still be scheduled instead of resulting in a build failure. This policy was added in microversion 2.15. - ``soft-affinity`` - servers in this group should be scheduled to the same host if possible, but if not possible then they should still be scheduled instead of resulting in a build failure. This policy was added in microversion 2.15. in: body required: true type: array max_version: 2.63 policy_name: description: | The ``policy`` field represents the name of the policy. The current valid policy names are: - ``anti-affinity`` - servers in this group must be scheduled to different hosts. - ``affinity`` - servers in this group must be scheduled to the same host. - ``soft-anti-affinity`` - servers in this group should be scheduled to different hosts if possible, but if not possible then they should still be scheduled instead of resulting in a build failure. - ``soft-affinity`` - servers in this group should be scheduled to the same host if possible, but if not possible then they should still be scheduled instead of resulting in a build failure. 
in: body required: true type: string min_version: 2.64 policy_rules: description: | The ``rules`` field, which is a dict, can be applied to the policy. Currently, only the ``max_server_per_host`` rule is supported for the ``anti-affinity`` policy. The ``max_server_per_host`` rule allows specifying how many members of the anti-affinity group can reside on the same compute host. If not specified, only one member from the same anti-affinity group can reside on a given host. in: body required: true type: object min_version: 2.64 policy_rules_optional: description: | The ``rules`` field, which is a dict, can be applied to the policy. Currently, only the ``max_server_per_host`` rule is supported for the ``anti-affinity`` policy. The ``max_server_per_host`` rule allows specifying how many members of the anti-affinity group can reside on the same compute host. If not specified, only one member from the same anti-affinity group can reside on a given host. Requesting policy rules with any other policy than ``anti-affinity`` will be 400. in: body required: false type: object min_version: 2.64 pool: description: | Pool from which to allocate the IP address. If you omit this parameter, the call allocates the floating IP address from the public pool. If no floating IP addresses are available, the call returns the ``400`` response code with an informational message. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. in: body required: false type: string port: description: | To provision the server instance with a NIC for an already existing port, specify the port-id in the ``port`` attribute in a ``networks`` object. The port status must be ``DOWN``. Required if you omit the ``uuid`` attribute. Requested security groups are not applied to pre-existing ports. in: body required: false type: string port_id: description: | The ID of the port for which you want to create an interface. The ``net_id`` and ``port_id`` parameters are mutually exclusive. If you do not specify the ``port_id`` parameter, the OpenStack Networking API v2.0 allocates a port and creates an interface for it on the network. in: body required: false type: string port_id_resp: description: | The port ID. in: body required: true type: string port_number: description: | The port number. in: body required: true type: integer port_state: description: | The port state. in: body required: true type: string preserve_ephemeral: description: | Indicates whether the server is rebuilt with the preservation of the ephemeral partition (``true``). .. note:: This only works with baremetal servers provided by Ironic. Passing it to any other server instance results in a fault and will prevent the rebuild from happening. in: body required: false type: boolean previous: description: | Moves to the previous metadata item. format: uri in: body required: false type: string private_key: description: | The secret key. in: body required: true type: string progress: description: | A percentage value of the operation progress. This parameter only appears when the server status is ``ACTIVE``, ``BUILD``, ``REBUILD``, ``RESIZE``, ``VERIFY_RESIZE`` or ``MIGRATING``. in: body required: false type: integer project_id: description: | The UUID of the project. If omitted, the project ID defaults to the calling tenant. 
in: body required: false type: string project_id_migration_2_80: description: | The ID of the project which initiated the server migration. The value may be ``null`` for older migration records. in: body required: true type: string min_version: 2.80 project_id_server: description: | The ID of the project that this server belongs to. in: body required: true type: string project_id_server_action: description: | The ID of the project which initiated the server action. in: body required: true type: string project_id_server_group: description: | The ID of the project that owns the server group. min_version: 2.13 in: body required: true type: string project_id_value: description: | The ID of the project under which the bulk IP addresses are created. in: body required: true type: string quota_class_id_body: <<: *quota_class_id in: body quota_class_set: description: | A ``quota_class_set`` object. in: body required: true type: object quota_set: description: | A ``quota_set`` object. in: body required: true type: object quota_tenant_or_user_id_body: description: | The UUID of the tenant/user the quotas are listed for. in: body required: true type: string ram: &ram description: | The amount of allowed server RAM, in MiB, for each tenant. in: body required: true type: integer ram_quota_class: &ram_quota_class <<: *ram description: | The amount of allowed instance RAM, in MiB, for the quota class. ram_quota_class_optional: <<: *ram_quota_class required: false ram_quota_details: description: | The object of detailed RAM quota, including in_use, limit and reserved amount of RAM. in: body required: true type: object ram_quota_optional: description: | The amount of allowed server RAM, in MiB, for each tenant. in: body required: false type: integer reboot: description: | The action to reboot a server. in: body required: true type: object reboot_type: description: | The type of the reboot action. The valid values are ``HARD`` and ``SOFT``. A ``SOFT`` reboot attempts a graceful shutdown and restart of the server. A ``HARD`` reboot attempts a forced shutdown and restart of the server. The ``HARD`` reboot corresponds to the power cycles of the server. in: body required: true type: string rebuild: description: | The action to rebuild a server. in: body required: true type: object remote_console: description: | The remote console object. in: body required: true type: object remote_console_protocol: description: | The protocol of remote console. The valid values are ``vnc``, ``spice``, ``rdp``, ``serial`` and ``mks``. The protocol ``mks`` is added since Microversion ``2.8``. in: body required: true type: string remote_console_type: description: | The type of remote console. The valid values are ``novnc``, ``rdp-html5``, ``spice-html5``, ``serial``, and ``webmks``. The type ``webmks`` is added since Microversion ``2.8``. in: body required: true type: string remote_console_url: description: | The URL used to connect to the console. in: body required: true type: string removeFixedIp: description: | The action to remove a fixed IP address from a server. in: body required: true type: object removeFloatingIp: description: | The action to remove or disassociate a floating IP address from the server. in: body required: true type: object removeSecurityGroup: description: | The action to remove a security group from the server. in: body required: true type: object removeTenantAccess: description: | The action. in: body required: true type: string request_id_body: description: | The request ID generated when executing the API of this action.
in: body required: true type: string rescue: description: | The action to rescue a server. in: body required: true type: object rescue_image_ref: description: | The image reference to use to rescue your server instance. Specify the image reference by ID or full URL. If you omit an image reference, default is the base image reference. in: body required: false type: string reservation_id: description: | The reservation id for the server. This is an id that can be useful in tracking groups of servers created with multiple create, that will all have the same reservation_id. in: body required: true type: string reserved: description: | The reserved quota value. in: body required: true type: integer reserved_fixedip: description: | True if the fixed ip is reserved, otherwise False. in: body required: true type: boolean min_version: 2.4 resetNetwork: description: | The action. in: body required: true type: none resize: description: | The action to resize a server. in: body required: true type: object restore: description: | The action. in: body required: true type: none resume: description: | The action to resume a suspended server. in: body required: true type: none return_reservation_id: description: | Set to ``True`` to request that the response return a reservation ID instead of instance information. Default is ``False``. in: body required: false type: boolean revertResize: description: | The action to revert a resize operation. in: body required: true type: none rules: description: | The list of security group rules. in: body required: true type: array running_vms: description: | The number of running vms on this hypervisor. in: body required: true type: integer running_vms_total: description: | The total number of running vms on all hypervisors. in: body required: true type: integer secgroup_default_rule_id: description: | The security group default rule ID. in: body required: true type: string secgroup_rule_cidr: description: | The CIDR for address range. in: body required: false type: string secgroup_rule_id: description: | The security group rule ID. in: body required: true type: string secgroup_rule_ip_range: description: | An IP range object. Includes the security group rule ``cidr``. in: body required: true type: object secgroup_tenant_id_body: description: | The UUID of the tenant that owns this security group. in: body required: false type: string security_group: description: | Specify the ``security_group`` action in the request body. in: body required: true type: string security_group_default_rule: description: | A ``security_group_default_rule`` object. in: body required: true type: object security_group_default_rules: description: | A list of the ``security_group_default_rule`` object. in: body required: true type: array security_group_id_body: description: | The ID of the security group. in: body required: true type: string security_group_rule: description: | A ``security_group_rule`` object. in: body required: true type: object security_group_rules: description: | The number of allowed rules for each security group. in: body required: false type: integer max_version: 2.35 security_group_rules_quota: description: | The number of allowed rules for each security group. in: body required: true type: integer max_version: 2.35 security_group_rules_quota_class: &security_group_rules_quota_class description: | The number of allowed rules for each security group. 
in: body required: true type: integer max_version: 2.49 security_group_rules_quota_class_optional: <<: *security_group_rules_quota_class required: false security_group_rules_quota_details: description: | The object of detailed security group rules quota, including in_use, limit and reserved number of security group rules. in: body required: true type: object max_version: 2.35 security_groups: description: | One or more security groups. Specify the name of the security group in the ``name`` attribute. If you omit this attribute, the API creates the server in the ``default`` security group. Requested security groups are not applied to pre-existing ports. in: body required: false type: array security_groups_obj: description: | One or more security groups objects. in: body required: true type: array security_groups_obj_optional: description: | One or more security groups objects. in: body required: false type: array security_groups_obj_update_rebuild: description: | One or more security groups objects. in: body required: false type: array min_version: 2.75 security_groups_quota: description: | The number of allowed security groups for each tenant. in: body required: true type: integer max_version: 2.35 security_groups_quota_class: &security_groups_quota_class description: | The number of allowed security groups for the quota class. in: body required: true type: integer max_version: 2.49 security_groups_quota_class_optional: <<: *security_groups_quota_class required: false security_groups_quota_details: description: | The object of detailed security groups, including in_use, limit and reserved number of security groups. in: body required: true type: object max_version: 2.35 security_groups_quota_optional: description: | The number of allowed security groups for each tenant. in: body required: false type: integer max_version: 2.35 server: description: | A ``server`` object. in: body required: true type: object server_description: type: string in: body required: false min_version: 2.19 description: | A free form description of the server. Limited to 255 characters in length. Before microversion 2.19 this was set to the server name. server_description_resp: description: | The description of the server. Before microversion 2.19 this was set to the server name. in: body required: true type: string min_version: 2.19 server_group: description: | The server group object. in: body required: true type: object server_group_id_body: description: | The UUID of the server group. in: body required: true type: string server_group_members: &server_group_members description: | The number of allowed members for each server group. in: body required: true type: integer server_group_members_quota_class: <<: *server_group_members min_version: 2.50 server_group_members_quota_details: description: | The object of detailed server group members, including in_use, limit and reserved number of server group members. in: body required: true type: object server_group_members_quota_optional: description: | The number of allowed members for each server group. in: body required: false type: integer server_groups: &server_groups description: | The number of allowed server groups for each tenant. in: body required: true type: integer server_groups_2_71: description: | The UUIDs of the server groups to which the server belongs. Currently this can contain at most one entry. in: body required: true type: array min_version: 2.71 server_groups_list: description: | The list of existing server groups. 
in: body required: true type: array server_groups_quota_class: <<: *server_groups description: | The number of allowed server groups for the quota class. min_version: 2.50 server_groups_quota_class_optional: <<: *server_groups description: | The number of allowed server groups for the quota class. required: false server_groups_quota_details: description: | The object of detailed server groups, including in_use, limit and reserved number of server groups. in: body required: true type: object server_groups_quota_optional: description: | The number of allowed server groups for each tenant. in: body required: false type: integer # This is the host in a POST (create instance) request body. server_host_create: description: | The name of the compute service host on which the server is to be created. The API will return 400 if no compute services are found with the given host name. By default, it can be specified by administrators only. in: body required: false type: string min_version: 2.74 server_hostname: in: body required: false type: string description: | The hostname set on the instance when it is booted. By default, it appears in the response for administrative users only. min_version: 2.3 server_hostname_update_rebuild: in: body required: false type: string description: | The hostname set on the instance when it is booted. By default, it appears in the response for administrative users only. min_version: 2.75 # This is the hypervisor_hostname in a POST (create instance) request body. server_hypervisor_hostname_create: description: | The hostname of the hypervisor on which the server is to be created. The API will return 400 if no hypervisors are found with the given hostname. By default, it can be specified by administrators only. in: body required: false type: string min_version: 2.74 server_id: description: | The UUID of the server. in: body required: true type: string server_id_optional: description: | The UUID of the server. in: body required: false type: string server_kernel_id: in: body required: false type: string description: | The UUID of the kernel image when using an AMI. Will be null if not. By default, it appears in the response for administrative users only. min_version: 2.3 server_kernel_id_update_rebuild: in: body required: false type: string description: | The UUID of the kernel image when using an AMI. Will be null if not. By default, it appears in the response for administrative users only. min_version: 2.75 server_launch_index: in: body required: false type: integer description: | When servers are launched via multiple create, this is the sequence in which the servers were launched. By default, it appears in the response for administrative users only. min_version: 2.3 server_launch_index_update_rebuild: in: body required: false type: integer description: | When servers are launched via multiple create, this is the sequence in which the servers were launched. By default, it appears in the response for administrative users only. min_version: 2.75 server_links: description: | Links pertaining to the server. See `API Guide / Links and References `_ for more info. in: body type: array required: true server_name: description: | The server name. in: body required: true type: string server_name_optional: description: | The server name. in: body required: false type: string server_ramdisk_id: in: body required: false type: string description: | The UUID of the ramdisk image when using an AMI. Will be null if not. 
By default, it appears in the response for administrative users only. min_version: 2.3 server_ramdisk_id_update_rebuild: in: body required: false type: string description: | The UUID of the ramdisk image when using an AMI. Will be null if not. By default, it appears in the response for administrative users only. min_version: 2.75 server_reservation_id: in: body required: false type: string description: | The reservation id for the server. This is an id that can be useful in tracking groups of servers created with multiple create, that will all have the same reservation_id. By default, it appears in the response for administrative users only. min_version: 2.3 server_reservation_id_update_rebuild: in: body required: false type: string description: | The reservation id for the server. This is an id that can be useful in tracking groups of servers created with multiple create, that will all have the same reservation_id. By default, it appears in the response for administrative users only. min_version: 2.75 server_root_device_name: in: body required: false type: string description: | The root device name for the instance. By default, it appears in the response for administrative users only. min_version: 2.3 server_root_device_name_update_rebuild: in: body required: false type: string description: | The root device name for the instance. By default, it appears in the response for administrative users only. min_version: 2.75 server_status: description: | The server status. in: body required: true type: string server_tags_create: description: | A list of tags. Tags have the following restrictions: - Tag is a Unicode bytestring no longer than 60 characters. - Tag is a non-empty string. - '/' is not allowed to be in a tag name - Comma is not allowed to be in a tag name in order to simplify requests that specify lists of tags - All other characters are allowed to be in a tag name - Each server can have up to 50 tags. in: body required: false type: array min_version: 2.52 server_topology_nodes: description: | NUMA nodes information of a server. in: body required: true type: array server_topology_nodes_cpu_pinning: description: | The mapping of server cores to host physical CPUs. For example:: cpu_pinning: { 0: 0, 1: 5} This means vcpu 0 is mapped to physical CPU 0, and vcpu 1 is mapped to physical CPU 5. By default the ``cpu_pinning`` field is only visible to users with the administrative role. You can change the default behavior via the policy rule:: compute:server:topology:host:index in: body required: false type: dict server_topology_nodes_cpu_siblings: description: | A mapping of host CPU thread siblings. For example:: siblings: [[0,1],[2,3]] This means vcpu 0 and vcpu 1 belong to the same CPU core, and vcpu 2 and vcpu 3 belong to another CPU core. By default the ``siblings`` field is only visible to users with the administrative role. You can change the default behavior via the policy rule:: compute:server:topology:host:index in: body required: false type: list server_topology_nodes_host_node: description: | The host NUMA node the virtual NUMA node is mapped to. By default the ``host_node`` field is only visible to users with the administrator role. You can change the default behavior via the policy rule:: compute:server:topology:host:index in: body required: false type: integer server_topology_nodes_memory_mb: description: | The amount of memory assigned to this NUMA node in MB.
in: body required: false type: integer server_topology_nodes_vcpu_set: description: | A list of IDs of the virtual CPU assigned to this NUMA node. in: body required: false type: list server_topology_pagesize_kb: description: | The page size in KB of a server. This field is ``null`` if the page size information is not available. in: body required: true type: integer server_trusted_image_certificates_create_req: description: | A list of trusted certificate IDs, which are used during image signature verification to verify the signing certificate. The list is restricted to a maximum of 50 IDs. This parameter is optional in server create requests if allowed by policy, and is not supported for volume-backed instances. in: body required: false type: array min_version: 2.63 server_trusted_image_certificates_rebuild_req: description: | A list of trusted certificate IDs, which are used during image signature verification to verify the signing certificate. The list is restricted to a maximum of 50 IDs. This parameter is optional in server rebuild requests if allowed by policy, and is not supported for volume-backed instances. If ``null`` is specified, the existing trusted certificate IDs are either unset or reset to the configured defaults. in: body required: false type: array min_version: 2.63 server_trusted_image_certificates_resp: description: | A list of trusted certificate IDs, that were used during image signature verification to verify the signing certificate. The list is restricted to a maximum of 50 IDs. The value is ``null`` if trusted certificate IDs are not set. in: body required: true type: array min_version: 2.63 server_usages: description: | A list of the server usage objects. in: body required: true type: array server_usages_optional: description: | A list of the server usage objects. in: body required: false type: array server_user_data: in: body required: false type: string description: | The user_data the instance was created with. By default, it appears in the response for administrative users only. min_version: 2.3 server_user_data_update: in: body required: false type: string description: | The user_data the instance was created with. By default, it appears in the response for administrative users only. min_version: 2.75 server_uuid: description: | The UUID of the server instance to which the API dispatches the event. You must assign this instance to a host. Otherwise, this call does not dispatch the event to the instance. in: body required: true type: string servers: description: | A list of ``server`` objects. in: body required: true type: array servers_links: description: | Links to the next server. It is available when the number of servers exceeds ``limit`` parameter or ``[api]/max_limit`` in the configuration file. See `Paginated collections `__ for more info. in: body type: array required: false servers_max_count: in: body required: false type: integer description: | The max number of servers to be created. Defaults to the value of ``min_count``. servers_min_count: in: body required: false type: integer description: | The min number of servers to be created. Defaults to 1. servers_multiple_create_name: in: body required: true type: string description: | A base name for creating unique names during multiple create. service: description: | Object representing a compute service. in: body required: true type: object service_disable_reason: description: | The disable reason of the service, ``null`` if the service is enabled or disabled without reason provided. 
in: body required: true type: string service_id_body: description: | The id of the service. in: body required: true type: integer service_id_body_2_52: description: | The id of the service. in: body required: true type: integer max_version: 2.52 service_id_body_2_53: description: | The id of the service as a uuid. in: body required: true type: string min_version: 2.53 service_id_body_2_53_no_version: description: | The id of the service as a uuid. in: body required: true type: string service_state: description: | The state of the service. One of ``up`` or ``down``. in: body required: true type: string service_status: description: | The status of the service. One of ``enabled`` or ``disabled``. in: body required: true type: string # This is an optional input parameter to PUT /os-services/{service_id} added # in microversion 2.53. service_status_2_53_in: description: | The status of the service. One of ``enabled`` or ``disabled``. in: body required: false type: string services: description: | A list of service objects. in: body required: true type: array set_metadata: description: | The set_metadata object used to set metadata for host aggregate. in: body required: true type: object shelve: description: | The action. in: body required: true type: none shelveOffload: description: | The action. in: body required: true type: none size: description: | The size of the volume, in gibibytes (GiB). in: body required: true type: integer snapshot: description: | A partial representation of a snapshot that is used to create a snapshot. in: body required: true type: object snapshot_description: description: | The snapshot description. in: body required: true type: string snapshot_description_optional: description: | The snapshot description. in: body required: false type: string snapshot_id: description: | The UUID for a snapshot. in: body required: true type: string snapshot_id_optional: description: | The UUID for a snapshot. in: body required: false type: string snapshot_id_resp_2_45: description: | The UUID for the resulting image snapshot. in: body required: true type: string min_version: 2.45 snapshot_name: description: | The snapshot name. in: body required: true type: string snapshot_name_optional: description: | The snapshot name. in: body required: false type: string snapshot_status: description: | The status of the snapshot. Valid status values are: - ``available`` - ``creating`` - ``deleting`` - ``error`` - ``error_deleting`` in: body required: true type: string snapshots: description: | A list of snapshot objects. in: body required: true type: array source_type: description: | The source type of the block device. Valid values are: * ``blank``: Depending on the ``destination_type`` and ``guest_format``, this will either be a blank persistent volume or an ephemeral (or swap) disk local to the compute host on which the server resides * ``image``: This is only valid with ``destination_type=volume``; creates an image-backed volume in the block storage service and attaches it to the server * ``snapshot``: This is only valid with ``destination_type=volume``; creates a volume backed by the given volume snapshot referenced via the ``block_device_mapping_v2.uuid`` parameter and attaches it to the server * ``volume``: This is only valid with ``destination_type=volume``; uses the existing persistent volume referenced via the ``block_device_mapping_v2.uuid`` parameter and attaches it to the server This parameter is required unless ``block_device_mapping_v2.no_device`` is specified. 
See `Block Device Mapping in Nova `_ for more details on valid source and destination types. in: body required: false type: string start_simple_tenant_usage_body: description: | The beginning time to calculate usage statistics on compute and storage resources. The date and time stamp format is as follows: :: CCYY-MM-DDThh:mm:ss.NNNNNN For example, ``2015-08-27T09:49:58.123456``. in: body required: true type: string start_time: description: | The date and time when the action was started. The date and time stamp format is `ISO 8601 `_ :: CCYY-MM-DDThh:mm:ss±hh:mm For example, ``2015-08-27T09:49:58-05:00``. The ``±hh:mm`` value, if included, is the time zone as an offset from UTC. In the previous example, the offset value is ``-05:00``. in: body required: true type: string started_at: description: | The date and time when the server was launched. The date and time stamp format is as follows: :: CCYY-MM-DDThh:mm:ss.NNNNNN For example, ``2015-08-27T09:49:58.123456``. in: body required: true type: string started_at_optional: description: | The date and time when the server was launched. The date and time stamp format is as follows: :: CCYY-MM-DDThh:mm:ss.NNNNNN For example, ``2015-08-27T09:49:58.123456``. in: body required: false type: string stop_simple_tenant_usage: description: | The ending time to calculate usage statistics on compute and storage resources. The date and time stamp format is as follows: :: CCYY-MM-DDThh:mm:ss.NNNNNN For example, ``2015-08-27T09:49:58.123456``. in: body required: true type: string subnet_id: description: | The UUID of the subnet. in: body required: true type: string suspend: description: | The action to suspend a server. in: body required: true type: none tags: description: | A list of tags. The maximum count of tags in this list is 50. in: body required: true type: array min_version: 2.26 tags_no_min: description: | A list of tags. The maximum count of tags in this list is 50. in: body required: true type: array tenant_id_body: description: | The UUID of the tenant in a multi-tenancy cloud. in: body required: true type: string tenant_id_optional: description: | The UUID of the tenant in a multi-tenancy cloud. in: body required: false type: string tenant_usage: description: | The tenant usage object. in: body required: true type: object tenant_usages: description: | A list of the tenant usage objects. in: body required: true type: array to_port: description: | The port at end of range. in: body required: true type: integer total_cores_used: description: | The number of used server cores in each tenant. If ``reserved`` query parameter is specified and it is not 0, the number of reserved server cores are also included. in: body required: true type: integer total_errors: description: | The total number of instance audit task errors. in: body required: true type: integer total_floatingips_used: description: | The number of used floating IP addresses in each tenant. If ``reserved`` query parameter is specified and it is not 0, the number of reserved floating IP addresses are also included. in: body required: true type: integer max_version: 2.35 total_hours: description: | The total duration that servers exist (in hours). in: body required: true type: float total_instances: description: | The total number of VM instances in the period. in: body required: true type: integer total_instances_used: description: | The number of servers in each tenant. If ``reserved`` query parameter is specified and it is not 0, the number of reserved servers are also included. 
in: body required: true type: integer total_local_gb_usage: description: | Multiplying the server disk size (in GiB) by hours the server exists, and then adding that all together for each server. in: body required: true type: float total_memory_mb_usage: description: | Multiplying the server memory size (in MiB) by hours the server exists, and then adding that all together for each server. in: body required: true type: float total_ram_used: description: | The amount of used server RAM in each tenant. If ``reserved`` query parameter is specified and it is not 0, the amount of reserved server RAM is also included. in: body required: true type: integer total_security_groups_used: description: | The number of used security groups in each tenant. If ``reserved`` query parameter is specified and it is not 0, the number of reserved security groups are also included. in: body required: true type: integer max_version: 2.35 total_server_groups_used: description: | The number of used server groups in each tenant. If ``reserved`` query parameter is specified and it is not 0, the number of reserved server groups are also included. in: body required: true type: integer total_vcpus_usage: description: | Multiplying the number of virtual CPUs of the server by hours the server exists, and then adding that all together for each server. in: body required: true type: float trigger_crash_dump: in: body required: true type: none description: | Specifies the trigger crash dump action should be run type-os-assisted-volume-snapshot: description: | The snapshot type. A valid value is ``qcow2``. in: body required: true type: string unlock: description: | The action to unlock a locked server. in: body required: true type: none unpause: description: | The action to unpause a paused server. in: body required: true type: none unrescue: description: | The action to unrescue a server in rescue mode. in: body required: true type: none unshelve: description: | The action. in: body required: true type: none updated: description: | The date and time when the resource was updated. The date and time stamp format is `ISO 8601 `_ :: CCYY-MM-DDThh:mm:ss±hh:mm For example, ``2015-08-27T09:49:58-05:00``. The ``±hh:mm`` value, if included, is the time zone as an offset from UTC. In the previous example, the offset value is ``-05:00``. in: body required: true type: string updated_consider_null: description: | The date and time when the resource was updated, if the resource has not been updated, this field will show as ``null``. The date and time stamp format is `ISO 8601 `_ :: CCYY-MM-DDThh:mm:ss±hh:mm For example, ``2015-08-27T09:49:58-05:00``. The ``±hh:mm`` value, if included, is the time zone as an offset from UTC. In the previous example, the offset value is ``-05:00``. in: body required: true type: string updated_instance_action: description: | The date and time when the instance action or the action event of instance action was updated. The date and time stamp format is `ISO 8601 `_ :: CCYY-MM-DDThh:mm:ss±hh:mm For example, ``2015-08-27T09:49:58-05:00``. The ``±hh:mm`` value, if included, is the time zone as an offset from UTC. In the previous example, the offset value is ``-05:00``. in: body required: true type: string min_version: 2.58 updated_version: description: | This is a fixed string. It is ``2011-01-21T11:33:21Z`` in version 2.0, ``2013-07-23T11:33:21Z`` in version 2.1. .. note:: It is vestigial and provides no useful information. It will be deprecated and removed in the future. 
in: body required: true type: string uptime: description: | The total uptime of the hypervisor and information about average load. in: body required: true type: string uptime_diagnostics: description: | The amount of time in seconds that the VM has been running. in: body required: true type: integer min_version: 2.48 uptime_simple_tenant_usage: description: | The uptime of the server. in: body required: true type: integer uptime_simple_tenant_usage_optional: description: | The uptime of the server. in: body required: false type: integer url: description: | The URL associated with the agent. in: body required: true type: string usage_links: description: | Links pertaining to usage. See `API Guide / Links and References `_ for more info. in: body type: array required: false min_version: 2.40 user_data: description: | Configuration information or scripts to use upon launch. Must be Base64 encoded. Restricted to 65535 bytes. .. note:: The ``null`` value allowed in Nova legacy v2 API, but due to the strict input validation, it isn't allowed in Nova v2.1 API. in: body required: false type: string user_data_rebuild_req: description: | Configuration information or scripts to use upon rebuild. Must be Base64 encoded. Restricted to 65535 bytes. If ``null`` is specified, the existing user_data is unset. in: body required: false type: string min_version: 2.57 user_data_rebuild_resp: in: body required: true type: string description: | The current user_data for the instance. min_version: 2.57 user_id: description: | The user ID of the user who owns the server. in: body required: true type: string user_id_migration_2_80: description: | The ID of the user which initiated the server migration. The value may be ``null`` for older migration records. in: body required: true type: string min_version: 2.80 user_id_server_action: description: | The ID of the user which initiated the server action. in: body required: true type: string user_id_server_group: description: | The user ID who owns the server group. min_version: 2.13 in: body required: true type: string vcpus: description: | The number of virtual CPUs that the server uses. in: body required: true type: integer vcpus_optional: description: | The number of virtual CPUs that the server uses. in: body required: false type: integer version: description: | The version. in: body required: true type: string version_id: type: string in: body required: true description: > A common name for the version in question. Informative only, it has no real semantic meaning. version_ip: description: | The IP version of the address associated with server. in: body required: true type: integer version_max: type: string in: body required: true description: > If this version of the API supports microversions, the maximum microversion that is supported. This will be the empty string if microversions are not supported. version_min: type: string in: body required: true description: > If this version of the API supports microversions, the minimum microversion that is supported. This will be the empty string if microversions are not supported. version_status: type: string in: body required: true description: | The status of this API version. 
This can be one of: - ``CURRENT``: this is the preferred version of the API to use - ``SUPPORTED``: this is an older, but still supported version of the API - ``DEPRECATED``: a deprecated version of the API that is slated for removal versions: type: array in: body required: true description: > A list of version objects that describe the API versions available. virtual_interface: description: | Virtual interface for the floating ip address. in: body required: true type: string virtual_interface_id: description: | The UUID of the virtual interface. in: body required: true type: string virtual_interface_id_optional: description: | Virtual interface for the floating ip address in: body required: false type: string virtual_interfaces: description: | An array of virtual interfaces. in: body required: true type: array vm_state_diagnostics: description: | A string enum denoting the current state of the VM. Possible values are: - ``pending`` - ``running`` - ``paused`` - ``shutdown`` - ``crashed`` - ``suspended`` in: body required: true type: string min_version: 2.48 vm_state_optional: description: | The VM state. in: body required: false type: string volume: description: | The ``volume`` object. in: body required: true type: object volume_id: description: | The source volume ID. in: body required: true type: string volume_id_resp: description: | The UUID of the volume. in: body required: true type: string volume_size: description: | The size of the volume (in GiB). This is integer value from range 1 to 2147483647 which can be requested as integer and string. This parameter must be specified in the following cases: - An image to volume case * ``block_device_mapping_v2.source_type`` is ``image`` * ``block_device_mapping_v2.destination_type`` is ``volume`` - A blank to volume case * ``block_device_mapping_v2.source_type`` is ``blank`` * ``block_device_mapping_v2.destination_type`` is ``volume`` in: body required: false type: integer volume_status: description: | The status of the volume. in: body required: true type: string volume_type: description: | The name or unique identifier for a volume type. in: body required: true type: string volume_type_optional: description: | The unique identifier for a volume type. in: body required: false type: string # This is the volumeAttachment in a response body. volumeAttachment: description: | A dictionary representation of a volume attachment containing the fields ``device``, ``id``, ``serverId`` and ``volumeId``. in: body required: true type: object # This is the volumeAttachment in a POST (attach volume) request body. volumeAttachment_post: description: | A dictionary representation of a volume attachment containing the fields ``device`` and ``volumeId``. in: body required: true type: object # This is the volumeAttachment in a PUT (swap volume) request body. volumeAttachment_put: description: | A dictionary representation of a volume attachment containing the field ``volumeId`` which is the UUID of the replacement volume, and other fields to update in the attachment. in: body required: true type: object volumeAttachments: description: | The list of volume attachments. in: body required: true type: array volumeId: description: | The UUID of the volume to attach. in: body required: true type: string volumeId_resp: description: | The UUID of the attached volume. in: body required: true type: string volumeId_swap: description: | The UUID of the volume to attach instead of the attached volume. 
in: body required: true type: string volumes: description: | The list of ``volume`` objects. in: body required: true type: array vpn_public_ip: description: | The VPN IP address. in: body required: true type: string vpn_public_ip_resp: description: | The VPN public IP address. in: body required: false type: string vpn_public_port: description: | The VPN port. in: body required: true type: string vpn_public_port_resp: description: | The VPN public port. in: body required: false type: string vpn_state: description: | The VPN state. in: body required: false type: string ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/request-ids.inc0000664000175000017500000000107300000000000020352 0ustar00zuulzuul00000000000000.. -*- rst -*- =========== Request IDs =========== Users can specify the global request ID in the request header. Users can receive the local request ID in the response header. For more details about Request IDs, please reference: `Faults `_ **Request** .. rest_parameters:: parameters.yaml - X-Openstack-Request-Id: x-openstack-request-id_req **Response** .. rest_parameters:: parameters.yaml - X-Compute-Request-Id: x-compute-request-id_resp - X-Openstack-Request-Id: x-openstack-request-id_resp ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/server-migrations.inc0000664000175000017500000001660700000000000021576 0ustar00zuulzuul00000000000000.. -*- rst -*- ========================================= Server migrations (servers, migrations) ========================================= List, show, perform actions on and delete server migrations. List Migrations =============== .. rest_method:: GET /servers/{server_id}/migrations Lists in-progress live migrations for a given server. .. note:: Microversion 2.23 or greater is required for this API. Policy defaults enable only users with the administrative role to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path Response -------- .. rest_parameters:: parameters.yaml - migrations: migrations - created_at: created - dest_compute: migrate_dest_compute - dest_host: migrate_dest_host - dest_node: migrate_dest_node - disk_processed_bytes: migrate_disk_processed_bytes - disk_remaining_bytes: migrate_disk_remaining_bytes - disk_total_bytes: migrate_disk_total_bytes - id: migration_id - memory_processed_bytes: migrate_memory_processed_bytes - memory_remaining_bytes: migrate_memory_remaining_bytes - memory_total_bytes: migrate_memory_total_bytes - server_uuid: server_id - source_compute: migrate_source_compute - source_node: migrate_source_node - status: migrate_status - updated_at: updated - uuid: migration_uuid - user_id: user_id_migration_2_80 - project_id: project_id_migration_2_80 **Example List Migrations (2.80)** .. literalinclude:: ../../doc/api_samples/server-migrations/v2.80/migrations-index.json :language: javascript Show Migration Details ====================== .. rest_method:: GET /servers/{server_id}/migrations/{migration_id} Show details for an in-progress live migration for a given server. .. note:: Microversion 2.23 or greater is required for this API. Policy defaults enable only users with the administrative role to perform this operation. 
Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - migration_id: migration_id_path Response -------- .. rest_parameters:: parameters.yaml - migration: migration - created_at: created - dest_compute: migrate_dest_compute - dest_host: migrate_dest_host - dest_node: migrate_dest_node - disk_processed_bytes: migrate_disk_processed_bytes - disk_remaining_bytes: migrate_disk_remaining_bytes - disk_total_bytes: migrate_disk_total_bytes - id: migration_id - memory_processed_bytes: migrate_memory_processed_bytes - memory_remaining_bytes: migrate_memory_remaining_bytes - memory_total_bytes: migrate_memory_total_bytes - server_uuid: server_id - source_compute: migrate_source_compute - source_node: migrate_source_node - status: migrate_status - updated_at: updated - uuid: migration_uuid - user_id: user_id_migration_2_80 - project_id: project_id_migration_2_80 **Example Show Migration Details (2.80)** .. literalinclude:: ../../doc/api_samples/server-migrations/v2.80/migrations-get.json :language: javascript Force Migration Complete Action (force_complete Action) ======================================================= .. rest_method:: POST /servers/{server_id}/migrations/{migration_id}/action Force an in-progress live migration for a given server to complete. Specify the ``force_complete`` action in the request body. .. note:: Microversion 2.22 or greater is required for this API. .. note:: Not all `compute back ends`_ support forcefully completing an in-progress live migration. .. _compute back ends: https://docs.openstack.org/nova/latest/user/support-matrix.html#operation_force_live_migration_to_complete Policy defaults enable only users with the administrative role to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. **Preconditions** The server OS-EXT-STS:vm_state value must be ``active`` and the server OS-EXT-STS:task_state value must be ``migrating``. If the server is locked, you must have administrator privileges to force the completion of the server migration. The migration status must be ``running``. **Asynchronous Postconditions** After you make this request, you typically must keep polling the server status to determine whether the request succeeded. **Troubleshooting** If the server status remains ``MIGRATING`` for an inordinate amount of time, the request may have failed. Ensure you meet the preconditions and run the request again. If the request fails again, investigate the compute back end. More details can be found in the `admin guide `_. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - migration_id: migration_id_path - force_complete: force_migration_complete **Example Force Migration Complete (force_complete Action)** .. literalinclude:: ../../doc/api_samples/server-migrations/v2.22/force_complete.json :language: javascript Response -------- There is no body content for the response of a successful POST operation. Delete (Abort) Migration ======================== .. rest_method:: DELETE /servers/{server_id}/migrations/{migration_id} Abort an in-progress live migration. .. note:: Microversion 2.24 or greater is required for this API. .. 
note:: With microversion 2.65 or greater, you can abort live migrations also in ``queued`` and ``preparing`` status. .. note:: Not all `compute back ends`__ support aborting an in-progress live migration. .. __: https://docs.openstack.org/nova/latest/user/support-matrix.html#operation_abort_in_progress_live_migration Policy defaults enable only users with the administrative role to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. **Preconditions** The server OS-EXT-STS:task_state value must be ``migrating``. If the server is locked, you must have administrator privileges to force the completion of the server migration. For microversions from 2.24 to 2.64 the migration status must be ``running``, for microversion 2.65 and greater, the migration status can also be ``queued`` and ``preparing``. **Asynchronous Postconditions** After you make this request, you typically must keep polling the server status to determine whether the request succeeded. You may also monitor the migration using:: GET /servers/{server_id}/migrations/{migration_id} **Troubleshooting** If the server status remains ``MIGRATING`` for an inordinate amount of time, the request may have failed. Ensure you meet the preconditions and run the request again. If the request fails again, investigate the compute back end. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - migration_id: migration_id_path Response -------- There is no body content for the response of a successful DELETE operation. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/server-security-groups.inc0000664000175000017500000000170100000000000022573 0ustar00zuulzuul00000000000000.. -*- rst -*- ====================================================== Servers Security Groups (servers, os-security-groups) ====================================================== Lists Security Groups for a server. List Security Groups By Server ============================== .. rest_method:: GET /servers/{server_id}/os-security-groups Lists security groups for a server. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path Response -------- .. rest_parameters:: parameters.yaml - security_groups: security_groups_obj - description: description - id: security_group_id_body - name: name - rules: rules - tenant_id: tenant_id_body **Example List security groups by server** .. literalinclude:: ../../doc/api_samples/os-security-groups/server-security-groups-list-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/server-topology.inc0000664000175000017500000000264700000000000021275 0ustar00zuulzuul00000000000000.. -*- rst -*- ===================================== Servers Topology (servers, topology) ===================================== Shows the NUMA topology information for a server. Show Server Topology ==================== .. rest_method:: GET /servers/{server_id}/topology .. versionadded:: 2.78 Shows NUMA topology information for a server. 
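For orientation only, a trimmed, hypothetical response might look like the sketch below; the field names mirror the parameters documented in the Response section of this page, the values are invented, and the canonical sample is referenced at the end of this section::

    {
        "nodes": [
            {
                "cpu_pinning": {"0": 0, "1": 5},
                "host_node": 0,
                "memory_mb": 1024,
                "siblings": [[0, 1]],
                "vcpu_set": [0, 1]
            }
        ],
        "pagesize_kb": 4
    }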
Policy defaults enable only users with the administrative role or the owners of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 200 Error response codes: unauthorized(401), notfound(404), forbidden(403) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path Response -------- All response fields are listed below. If some information is not available or not allowed by policy, the corresponding key value will not exist in the response. .. rest_parameters:: parameters.yaml - nodes: server_topology_nodes - nodes.cpu_pinning: server_topology_nodes_cpu_pinning - nodes.vcpu_set: server_topology_nodes_vcpu_set - nodes.siblings: server_topology_nodes_cpu_siblings - nodes.memory_mb: server_topology_nodes_memory_mb - nodes.host_node: server_topology_nodes_host_node - pagesize_kb: server_topology_pagesize_kb **Example Server topology (2.78)** .. literalinclude:: ../../doc/api_samples/os-server-topology/v2.78/servers-topology-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/servers-action-console-output.inc0000664000175000017500000000252700000000000024054 0ustar00zuulzuul00000000000000.. -*- rst -*- Show Console Output (os-getConsoleOutput Action) ================================================ .. rest_method:: POST /servers/{server_id}/action Shows console output for a server. This API returns the text of the console since boot. The content returned may be large. Limit the lines of console text, beginning at the tail of the content, by setting the optional ``length`` parameter in the request body. The server from which to get the console log should set ``export LC_ALL=en_US.UTF-8`` in order to avoid incorrect unicode errors. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), notFound(404), conflict(409), methodNotImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - os-getConsoleOutput: os-getConsoleOutput - length: length **Example Show Console Output (os-getConsoleOutput Action)** This example requests the last 50 lines of console content from the specified server. .. literalinclude:: ../../doc/api_samples/os-console-output/console-output-post-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - output: console_output **Example Show Console Output (os-getConsoleOutput Action)** .. literalinclude:: ../../doc/api_samples/os-console-output/console-output-post-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/servers-action-crash-dump.inc0000664000175000017500000000264200000000000023115 0ustar00zuulzuul00000000000000.. -*- rst -*- Trigger Crash Dump In Server ============================ .. rest_method:: POST /servers/{server_id}/action .. versionadded:: 2.17 Trigger a crash dump in a server. When a server starts behaving oddly at a fundamental level, it may be useful to get a kernel-level crash dump to debug further. The crash dump action forces a crash dump followed by a system reboot of the server. Once the server comes back online, you can find a Kernel Crash Dump file in a certain location of the filesystem. For example, for Ubuntu you can find it in the ``/var/crash`` directory. .. warning:: This action can cause data loss.
Also, network connectivity can be lost both during and after this operation. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) * 400 is returned if the server does not support a crash dump (either by configuration or because the backend does not support it) * 409 is returned if the server is not in a state where a crash dump action is allowed. Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - trigger_crash_dump: trigger_crash_dump **Example Trigger crash dump: JSON request** .. literalinclude:: ../../doc/api_samples/servers/v2.17/server-action-trigger-crash-dump.json :language: javascript Response -------- No body is returned on a successful submission. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/servers-action-deferred-delete.inc0000664000175000017500000000360100000000000024066 0ustar00zuulzuul00000000000000.. -*- rst -*- Force-Delete Server (forceDelete Action) ======================================== .. rest_method:: POST /servers/{server_id}/action Force-deletes a server before deferred cleanup. Specify the ``forceDelete`` action in the request body. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - forceDelete: forceDelete **Example Force-Delete Server (forceDelete Action): JSON request** .. literalinclude:: ../../doc/api_samples/os-deferred-delete/force-delete-post-req.json :language: javascript Response -------- No body is returned on a successful submission. Restore Soft-Deleted Instance (restore Action) ============================================== .. rest_method:: POST /servers/{server_id}/action Restores a previously soft-deleted server instance. You cannot use this method to restore deleted instances. Specify the ``restore`` action in the request body. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - restore: restore **Example Restore Soft-Deleted Instance (restore Action): JSON request** .. literalinclude:: ../../doc/api_samples/os-deferred-delete/restore-post-req.json :language: javascript Response -------- No body is returned on a successful submission. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/api-ref/source/servers-action-evacuate.inc0000664000175000017500000000260100000000000022642 0ustar00zuulzuul00000000000000.. -*- rst -*- Evacuate Server (evacuate Action) ================================= .. rest_method:: POST /servers/{server_id}/action Evacuates a server from a failed host to a new host. - Specify the ``evacuate`` action in the request body. - In the request body, if ``onSharedStorage`` is set, then do not set ``adminPass``. - The target host should not be the same as the instance host. 
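For illustration only, a minimal ``evacuate`` request body might look like the following; the target host name and administrative password are placeholder values, and the full set of accepted parameters is listed in the Request table below.

.. code-block:: json

   {
       "evacuate": {
           "host": "compute-02",
           "adminPass": "MySecretPass"
       }
   }
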
Starting from API version 2.68, the ``force`` parameter is no longer accepted as this could not be meaningfully supported by servers with complex resource allocations. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - evacuate: evacuate - host: host - adminPass: adminPass_evacuate_request - onSharedStorage: on_shared_storage - force: force_evacuate | **Example Evacuate Server (evacuate Action)** .. literalinclude:: ../../doc/api_samples/os-evacuate/server-evacuate-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - adminPass: adminPass_evacuate .. note:: API does not return any Response for Microversion 2.14 or greater. **Example Evacuate Server (evacuate Action)** .. literalinclude:: ../../doc/api_samples/os-evacuate/server-evacuate-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/servers-action-fixed-ip.inc0000664000175000017500000000500300000000000022551 0ustar00zuulzuul00000000000000.. -*- rst -*- Add (Associate) Fixed Ip (addFixedIp Action) (DEPRECATED) ========================================================== .. warning:: This API is deprecated and will fail with a 404 starting from microversion 2.44. This is replaced with using the Neutron networking service API. .. rest_method:: POST /servers/{server_id}/action Adds a fixed IP address to a server instance, which associates that address with the server. The fixed IP address is retrieved from the network that you specify in the request. Specify the ``addFixedIp`` action and the network ID in the request body. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - addFixedIp: addFixedIp - networkId: net_id_resp **Example Add (Associate) Fixed Ip (addFixedIp Action)** .. literalinclude:: ../../doc/api_samples/os-multinic/multinic-add-fixed-ip-req.json :language: javascript Response -------- No response body is returned after a successful addFixedIp action. Remove (Disassociate) Fixed Ip (removeFixedIp Action) (DEPRECATED) =================================================================== .. warning:: This API is deprecated and will fail with a 404 starting from microversion 2.44. This is replaced with using the Neutron networking service API. .. rest_method:: POST /servers/{server_id}/action Removes, or disassociates, a fixed IP address from a server. Specify the ``removeFixedIp`` action in the request body. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - removeFixedIp: removeFixedIp - address: ip_address **Example Remove (Disassociate) Fixed Ip (removeFixedIp Action)** .. 
literalinclude:: ../../doc/api_samples/os-multinic/multinic-remove-fixed-ip-req.json :language: javascript Response -------- No response body is returned after a successful removeFixedIp action. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/servers-action-remote-consoles.inc0000664000175000017500000001302000000000000024160 0ustar00zuulzuul00000000000000.. -*- rst -*- Get RDP Console (os-getRDPConsole Action) (DEPRECATED) ====================================================== .. rest_method:: POST /servers/{server_id}/action max_version: 2.5 Gets an `RDP `__ console for a server. .. warning:: This action is deprecated in microversion 2.5 and superseded by the API `Server Consoles`_ in microversion 2.6. The new API offers a unified API for different console types. The only supported connect type is ``rdp-html5``. The ``type`` parameter should be set as ``rdp-html5``. Specify the ``os-getRDPConsole`` action in the request body. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - os-getRDPConsole: os-getRDPConsole - type: os-getRDPConsole-type **Example Get RDP Console (os-getRDPConsole Action)** .. literalinclude:: ../../doc/api_samples/os-remote-consoles/get-rdp-console-post-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - console: remote_console - type: os-getRDPConsole-type - url: os-getRDPConsole-url **Example Get RDP Console (os-getRDPConsole Action)** .. literalinclude:: ../../doc/api_samples/os-remote-consoles/get-rdp-console-post-resp.json :language: javascript Get Serial Console (os-getSerialConsole Action) (DEPRECATED) ============================================================ .. rest_method:: POST /servers/{server_id}/action max_version: 2.5 Gets a serial console for a server. .. warning:: This action is deprecated in microversion 2.5 and superseded by the API `Server Consoles`_ in microversion 2.6. The new API offers a unified API for different console types. Specify the ``os-getSerialConsole`` action in the request body. The only supported connection type is ``serial``. The ``type`` parameter should be set as ``serial``. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - os-getSerialConsole: os-getSerialConsole - type: os-getSerialConsole-type **Example Get Serial Console (os-getSerialConsole Action)** .. literalinclude:: ../../doc/api_samples/os-remote-consoles/get-serial-console-post-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - console: remote_console - type: os-getSerialConsole-type - url: os-getSerialConsole-url **Example Get Serial Console (os-getSerialConsole Action)** .. literalinclude:: ../../doc/api_samples/os-remote-consoles/get-serial-console-post-resp.json :language: javascript Get SPICE Console (os-getSPICEConsole Action) (DEPRECATED) ========================================================== .. rest_method:: POST /servers/{server_id}/action max_version: 2.5 Gets a SPICE console for a server. .. warning:: This action is deprecated in microversion 2.5 and superseded by the API `Server Consoles`_ in microversion 2.6. 
The new API offers a unified API for different console types. Specify the ``os-getSPICEConsole`` action in the request body. The only supported connection type is ``spice-html5``. The ``type`` parameter should be set to ``spice-html5``. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - os-getSPICEConsole: os-getSPICEConsole - type: os-getSPICEConsole-type **Example Get Spice Console (os-getSPICEConsole Action)** .. literalinclude:: ../../doc/api_samples/os-remote-consoles/get-spice-console-post-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - console: remote_console - type: os-getSPICEConsole-type - url: os-getSPICEConsole-url **Example Get SPICE Console (os-getSPICEConsole Action)** .. literalinclude:: ../../doc/api_samples/os-remote-consoles/get-spice-console-post-resp.json :language: javascript Get VNC Console (os-getVNCConsole Action) (DEPRECATED) ====================================================== .. rest_method:: POST /servers/{server_id}/action max_version: 2.5 Gets a VNC console for a server. .. warning:: This action is deprecated in microversion 2.5 and superseded by the API `Server Consoles`_ in microversion 2.6. The new API offers a unified API for different console types. Specify the ``os-getVNCConsole`` action in the request body. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - os-getVNCConsole: os-getVNCConsole - type: os-getVNCConsole-type **Example Get Vnc Console (os-getVNCConsole Action)** .. literalinclude:: ../../doc/api_samples/os-remote-consoles/get-vnc-console-post-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - console: remote_console - type: os-getVNCConsole-type - url: os-getVNCConsole-url **Example Get VNC Console (os-getVNCConsole Action)** .. literalinclude:: ../../doc/api_samples/os-remote-consoles/get-vnc-console-post-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/api-ref/source/servers-action-shelve.inc0000664000175000017500000001267500000000000022347 0ustar00zuulzuul00000000000000.. -*- rst -*- Shelve Server (shelve Action) ============================= .. rest_method:: POST /servers/{server_id}/action Shelves a server. Specify the ``shelve`` action in the request body. All associated data and resources are kept but anything still in memory is not retained. To restore a shelved instance, use the ``unshelve`` action. To remove a shelved instance, use the ``shelveOffload`` action. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. **Preconditions** The server status must be ``ACTIVE``, ``SHUTOFF``, ``PAUSED``, or ``SUSPENDED``. If the server is locked, you must have administrator privileges to shelve the server. **Asynchronous Postconditions** After you successfully shelve a server, its status changes to ``SHELVED`` and the image status is ``ACTIVE``. The server instance data appears on the compute node that the Compute service manages. 
If you boot the server from volumes or set the ``shelved_offload_time`` option to 0, the Compute service automatically deletes the instance on compute nodes and changes the server status to ``SHELVED_OFFLOADED``. **Troubleshooting** If the server status does not change to ``SHELVED`` or ``SHELVED_OFFLOADED``, the shelve operation failed. Ensure that you meet the preconditions and run the request again. If the request fails again, investigate whether another operation is running that causes a race condition. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - shelve: shelve | **Example Shelve server (shelve Action)** .. literalinclude:: ../../doc/api_samples/os-shelve/os-shelve.json :language: javascript Response -------- If successful, this method does not return content in the response body. Shelf-Offload (Remove) Server (shelveOffload Action) ==================================================== .. rest_method:: POST /servers/{server_id}/action Shelf-offloads, or removes, a shelved server. Specify the ``shelveOffload`` action in the request body. Data and resource associations are deleted. If an instance is no longer needed, you can remove that instance from the hypervisor to minimize resource usage. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. **Preconditions** The server status must be ``SHELVED``. If the server is locked, you must have administrator privileges to shelve-offload the server. **Asynchronous Postconditions** After you successfully shelve-offload a server, its status changes to ``SHELVED_OFFLOADED``. The server instance data appears on the compute node. **Troubleshooting** If the server status does not change to ``SHELVED_OFFLOADED``, the shelve-offload operation failed. Ensure that you meet the preconditions and run the request again. If the request fails again, investigate whether another operation is running that causes a race condition. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - shelveOffload: shelveOffload | **Example Shelf-Offload server (shelveOffload Action)** .. literalinclude:: ../../doc/api_samples/os-shelve/os-shelve-offload.json :language: javascript Response -------- If successful, this method does not return content in the response body. Unshelve (Restore) Shelved Server (unshelve Action) =================================================== .. rest_method:: POST /servers/{server_id}/action Unshelves, or restores, a shelved server. Specify the ``unshelve`` action in the request body. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. **Preconditions** The server status must be ``SHELVED`` or ``SHELVED_OFFLOADED``. If the server is locked, you must have administrator privileges to unshelve the server. **Asynchronous Postconditions** After you successfully unshelve a server, its status changes to ``ACTIVE``. The server appears on the compute node. The shelved image is deleted from the list of images returned by an API call. 
**Troubleshooting** If the server status does not change to ``ACTIVE``, the unshelve operation failed. Ensure that you meet the preconditions and run the request again. If the request fails again, investigate whether another operation is running that causes a race condition. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - unshelve: unshelve - availability_zone: availability_zone_unshelve | **Example Unshelve server (unshelve Action)** .. literalinclude:: ../../doc/api_samples/os-shelve/os-unshelve.json :language: javascript **Example Unshelve server (unshelve Action) (v2.77)** .. literalinclude:: ../../doc/api_samples/os-shelve/v2.77/os-unshelve.json :language: javascript Response -------- If successful, this method does not return content in the response body. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/api-ref/source/servers-actions.inc0000664000175000017500000010275600000000000021246 0ustar00zuulzuul00000000000000.. -*- rst -*- .. needs:body_verification =========================================== Servers - run an action (servers, action) =========================================== Enables all users to perform an action on a server. Specify the action in the request body. You can associate a fixed or floating IP address with a server, or disassociate a fixed or floating IP address from a server. You can create an image from a server, create a backup of a server, and force-delete a server before deferred cleanup. You can lock, pause, reboot, rebuild, rescue, resize, resume, confirm the resize of, revert a pending resize for, shelve, shelf-offload, unshelve, start, stop, unlock, unpause, and unrescue a server. You can also change the password of the server and add a security group to or remove a security group from a server. You can also trigger a crash dump into a server since Mitaka release. You can get an RDP, serial, SPICE, or VNC console for a server. Add (Associate) Floating Ip (addFloatingIp Action) (DEPRECATED) ================================================================ .. warning:: This API is deprecated and will fail with a 404 starting from microversion 2.44. This is replaced with using the Neutron networking service API. .. rest_method:: POST /servers/{server_id}/action Adds a floating IP address to a server, which associates that address with the server. A pool of floating IP addresses, configured by the cloud administrator, is available in OpenStack Compute. The project quota defines the maximum number of floating IP addresses that you can allocate to the project. After you `create (allocate) a floating IPaddress `__ for a project, you can associate that address with the server. Specify the ``addFloatingIp`` action in the request body. If an instance is connected to multiple networks, you can associate a floating IP address with a specific fixed IP address by using the optional ``fixed_address`` parameter. **Preconditions** The server must exist. You can only add a floating IP address to the server when its status is ``ACTIVE`` or ``STOPPED`` Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. 
rest_parameters:: parameters.yaml - server_id: server_id_path - addFloatingIp: addFloatingIp - address: address - fixed_address: fixed_address **Example Add (Associate) Floating Ip (addFloatingIp Action)** .. literalinclude:: ../../doc/api_samples/servers/server-action-addfloatingip-req.json :language: javascript Response -------- If successful, this method does not return content in the response body. Add Security Group To A Server (addSecurityGroup Action) ======================================================== .. rest_method:: POST /servers/{server_id}/action Adds a security group to a server. Specify the ``addSecurityGroup`` action in the request body. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - addSecurityGroup: addSecurityGroup - name: name **Example Add Security Group To A Server (addSecurityGroup Action)** .. literalinclude:: ../../doc/api_samples/os-security-groups/security-group-add-post-req.json :language: javascript Response -------- If successful, this method does not return content in the response body. Change Administrative Password (changePassword Action) ====================================================== .. rest_method:: POST /servers/{server_id}/action Changes the administrative password for a server. Specify the ``changePassword`` action in the request body. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - changePassword: changePassword - adminPass: adminPass_change_password **Example Change Administrative Password (changePassword Action)** .. literalinclude:: ../../doc/api_samples/os-admin-password/admin-password-change-password.json :language: javascript Response -------- If successful, this method does not return content in the response body. Confirm Resized Server (confirmResize Action) ============================================= .. rest_method:: POST /servers/{server_id}/action Confirms a pending resize action for a server. Specify the ``confirmResize`` action in the request body. After you make this request, you typically must keep polling the server status to determine whether the request succeeded. A successfully confirming resize operation shows a status of ``ACTIVE`` or ``SHUTOFF`` and a migration status of ``confirmed``. You can also see the resized server in the compute node that OpenStack Compute manages. **Preconditions** You can only confirm the resized server where the status is ``VERIFY_RESIZE``. If the server is locked, you must have administrator privileges to confirm the server. **Troubleshooting** If the server status remains ``VERIFY_RESIZE``, the request failed. Ensure you meet the preconditions and run the request again. If the request fails again, the server status should be ``ERROR`` and a migration status of ``error``. Investigate the compute back end or ask your cloud provider. 
There are some options for trying to correct the server status: * If the server is running and networking works, a user with proper authority could reset the status of the server to ``active`` using the :ref:`os-resetState` API. * If the server is not running, you can try hard rebooting the server using the :ref:`reboot` API. Note that the cloud provider may still need to cleanup any orphaned resources on the source hypervisor. Normal response codes: 204 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - confirmResize: confirmResize **Example Confirm Resized Server (confirmResize Action)** .. literalinclude:: ../../doc/api_samples/servers/server-action-confirm-resize.json :language: javascript Response -------- If successful, this method does not return content in the response body. Create Server Back Up (createBackup Action) =========================================== .. rest_method:: POST /servers/{server_id}/action Creates a back up of a server. .. note:: This API is not supported for volume-backed instances. Specify the ``createBackup`` action in the request body. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. .. note:: Starting from version 2.39 the image quota enforcement with Nova `metadata` is removed and quota checks should be performed using Glance API directly. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - createBackup: createBackup - name: backup_name - backup_type: backup_type - rotation: backup_rotation - metadata: metadata **Example Create Server Back Up (createBackup Action)** .. literalinclude:: ../../doc/api_samples/os-create-backup/create-backup-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - Location: image_location - image_id: snapshot_id_resp_2_45 **Example Create Server Back Up (v2.45)** .. literalinclude:: ../../doc/api_samples/os-create-backup/v2.45/create-backup-resp.json :language: javascript Create Image (createImage Action) ================================= .. rest_method:: POST /servers/{server_id}/action Creates an image from a server. Specify the ``createImage`` action in the request body. After you make this request, you typically must keep polling the status of the created image to determine whether the request succeeded. If the operation succeeds, the created image has a status of ``active`` and the server status returns to the original status. You can also see the new image in the image back end that OpenStack Image service manages. .. note:: Starting from version 2.39 the image quota enforcement with Nova `metadata` is removed and quota checks should be performed using Glance API directly. **Preconditions** The server must exist. You can only create a new image from the server when its status is ``ACTIVE``, ``SHUTOFF``, ``SUSPENDED`` or ``PAUSED`` (``PAUSED`` is only supported for image-backed servers). The project must have sufficient volume snapshot quota in the block storage service when the server has attached volumes. If the project does not have sufficient volume snapshot quota, the API returns a 403 error. 
**Asynchronous Postconditions** A snapshot image will be created in the Image service. In the image-backed server case, volume snapshots of attached volumes will not be created. In the volume-backed server case, volume snapshots will be created for all volumes attached to the server and then those will be represented with a ``block_device_mapping`` image property in the resulting snapshot image in the Image service. If that snapshot image is used later to create a new server, it will result in a volume-backed server where the root volume is created from the snapshot of the original root volume. The volumes created from the snapshots of the original other volumes will be attached to the server. **Troubleshooting** If the image status remains uploading or shows another error status, the request failed. Ensure you meet the preconditions and run the request again. If the request fails again, investigate the image back end. If the server status does not go back to an original server's status, the request failed. Ensure you meet the preconditions, or check if there is another operation that causes race conditions for the server, then run the request again. If the request fails again, investigate the compute back end or ask your cloud provider. If the request fails due to an error on OpenStack Compute service, the image is purged from the image store that OpenStack Image service manages. Ensure you meet the preconditions and run the request again. If the request fails again, investigate OpenStack Compute service or ask your cloud provider. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - createImage: createImage - name: image_name - metadata: image_metadata **Example Create Image (createImage Action)** .. literalinclude:: ../../doc/api_samples/servers/server-action-create-image.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - Location: image_location - image_id: snapshot_id_resp_2_45 **Example Create Image (v2.45)** .. literalinclude:: ../../doc/api_samples/servers/v2.45/server-action-create-image-resp.json :language: javascript Lock Server (lock Action) ========================= .. rest_method:: POST /servers/{server_id}/action Locks a server. Specify the ``lock`` action in the request body. Most actions by non-admin users are not allowed to the server after this operation is successful and the server is locked. See the "Lock, Unlock" item in `Server actions `_ for the restricted actions. But administrators can perform actions on the server even though the server is locked. Note that from microversion 2.73 it is possible to specify a reason when locking the server. The `unlock action `_ will unlock a server in locked state so additional actions can be performed on the server by non-admin users. You can know whether a server is locked or not and the ``locked_reason`` (if specified, from the 2.73 microversion) by the `List Servers Detailed API `_ or the `Show Server Details API `_. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Administrators can overwrite owner's lock. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. 
rest_parameters:: parameters.yaml - server_id: server_id_path - lock: lock - locked_reason: locked_reason_req **Example Lock Server (lock Action)** .. literalinclude:: ../../doc/api_samples/os-lock-server/lock-server.json :language: javascript **Example Lock Server (lock Action) (v2.73)** .. literalinclude:: ../../doc/api_samples/os-lock-server/v2.73/lock-server-with-reason.json :language: javascript Response -------- If successful, this method does not return content in the response body. Pause Server (pause Action) =========================== .. rest_method:: POST /servers/{server_id}/action Pauses a server. Changes its status to ``PAUSED``. Specify the ``pause`` action in the request body. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), conflict(409), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - pause: pause **Example Pause Server (pause Action)** .. literalinclude:: ../../doc/api_samples/os-pause-server/pause-server.json :language: javascript Response -------- If successful, this method does not return content in the response body. .. _reboot: Reboot Server (reboot Action) ============================= .. rest_method:: POST /servers/{server_id}/action Reboots a server. Specify the ``reboot`` action in the request body. **Preconditions** The preconditions for rebooting a server depend on the type of reboot. You can only *SOFT* reboot a server when its status is ``ACTIVE``. You can only *HARD* reboot a server when its status is one of: * ``ACTIVE`` * ``ERROR`` * ``HARD_REBOOT`` * ``PAUSED`` * ``REBOOT`` * ``SHUTOFF`` * ``SUSPENDED`` If the server is locked, you must have administrator privileges to reboot the server. **Asynchronous Postconditions** After you successfully reboot a server, its status changes to ``ACTIVE``. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - reboot: reboot - type: reboot_type **Example Reboot Server (reboot Action)** .. literalinclude:: ../../doc/api_samples/servers/server-action-reboot.json :language: javascript Response -------- If successful, this method does not return content in the response body. Rebuild Server (rebuild Action) =============================== .. rest_method:: POST /servers/{server_id}/action Rebuilds a server. Specify the ``rebuild`` action in the request body. This operation recreates the root disk of the server. For a volume-backed server, this operation keeps the contents of the volume. **Preconditions** The server status must be ``ACTIVE``, ``SHUTOFF`` or ``ERROR``. **Asynchronous Postconditions** If the server was in status ``SHUTOFF`` before the rebuild, it will be stopped and in status ``SHUTOFF`` after the rebuild, otherwise it will be ``ACTIVE`` if the rebuild was successful or ``ERROR`` if the rebuild failed. .. note:: There is a `known limitation`_ where the root disk is not replaced for volume-backed instances during a rebuild. .. _known limitation: https://bugs.launchpad.net/nova/+bug/1482040 Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. 
rest_parameters:: parameters.yaml - server_id: server_id_path - rebuild: rebuild - imageRef: imageRef_rebuild - accessIPv4: accessIPv4_in - accessIPv6: accessIPv6_in - adminPass: adminPass_request - metadata: metadata - name: server_name_optional - OS-DCF:diskConfig: OS-DCF:diskConfig - personality: personality - personality.path: path - personality.contents: contents - preserve_ephemeral: preserve_ephemeral - description: server_description - key_name: key_name_rebuild_req - user_data: user_data_rebuild_req - trusted_image_certificates: server_trusted_image_certificates_rebuild_req **Example Rebuild Server (rebuild Action) (v2.63)** .. literalinclude:: ../../doc/api_samples/servers/v2.63/server-action-rebuild.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - Location: server_location - server: server - accessIPv4: accessIPv4 - accessIPv6: accessIPv6 - addresses: addresses_obj - created: created - flavor: flavor_server - flavor.id: flavor_id_body_2_46 - flavor.links: flavor_links_2_46 - flavor.vcpus: flavor_cpus_2_47 - flavor.ram: flavor_ram_2_47 - flavor.disk: flavor_disk_2_47 - flavor.ephemeral: flavor_ephem_disk_2_47 - flavor.swap: flavor_swap_2_47 - flavor.original_name: flavor_original_name - flavor.extra_specs: extra_specs_2_47 - flavor.extra_specs.key: flavor_extra_spec_key_2_47 - flavor.extra_specs.value: flavor_extra_spec_value_2_47 - hostId: hostId - id: server_id - image: image - image.id: image_id_body - image.links: links - links: server_links - metadata: metadata_object - name: server_name - OS-DCF:diskConfig: disk_config - status: server_status - tenant_id: tenant_id_body - updated: updated - user_id: user_id - adminPass: adminPass_response - progress: progress - locked: locked - description: server_description_resp - tags: tags - key_name: key_name_rebuild_resp - user_data: user_data_rebuild_resp - trusted_image_certificates: server_trusted_image_certificates_resp - server_groups: server_groups_2_71 - locked_reason: locked_reason_resp - config_drive: config_drive_resp_update_rebuild - OS-EXT-AZ:availability_zone: OS-EXT-AZ:availability_zone_update_rebuild - OS-EXT-SRV-ATTR:host: OS-EXT-SRV-ATTR:host_update_rebuild - OS-EXT-SRV-ATTR:hypervisor_hostname: OS-EXT-SRV-ATTR:hypervisor_hostname_update_rebuild - OS-EXT-SRV-ATTR:instance_name: OS-EXT-SRV-ATTR:instance_name_update_rebuild - OS-EXT-STS:power_state: OS-EXT-STS:power_state_update_rebuild - OS-EXT-STS:task_state: OS-EXT-STS:task_state_update_rebuild - OS-EXT-STS:vm_state: OS-EXT-STS:vm_state_update_rebuild - OS-EXT-SRV-ATTR:hostname: server_hostname_update_rebuild - OS-EXT-SRV-ATTR:reservation_id: server_reservation_id_update_rebuild - OS-EXT-SRV-ATTR:launch_index: server_launch_index_update_rebuild - OS-EXT-SRV-ATTR:kernel_id: server_kernel_id_update_rebuild - OS-EXT-SRV-ATTR:ramdisk_id: server_ramdisk_id_update_rebuild - OS-EXT-SRV-ATTR:root_device_name: server_root_device_name_update_rebuild - os-extended-volumes:volumes_attached: os-extended-volumes:volumes_attached_update_rebuild - os-extended-volumes:volumes_attached.id: os-extended-volumes:volumes_attached.id_update_rebuild - os-extended-volumes:volumes_attached.delete_on_termination: os-extended-volumes:volumes_attached.delete_on_termination_update_rebuild - OS-SRV-USG:launched_at: OS-SRV-USG:launched_at_update_rebuild - OS-SRV-USG:terminated_at: OS-SRV-USG:terminated_at_update_rebuild - security_groups: security_groups_obj_update_rebuild - security_group.name: name_update_rebuild - host_status: host_status_update_rebuild 
**Example Rebuild Server (rebuild Action) (v2.75)** .. literalinclude:: ../../doc/api_samples/servers/v2.75/server-action-rebuild-resp.json :language: javascript Remove (Disassociate) Floating Ip (removeFloatingIp Action) (DEPRECATED) ========================================================================= .. warning:: This API is deprecated and will fail with a 404 starting from microversion 2.44. This is replaced with using the Neutron networking service API. .. rest_method:: POST /servers/{server_id}/action Removes, or disassociates, a floating IP address from a server. The IP address is returned to the pool of IP addresses that is available for all projects. When you remove a floating IP address and that IP address is still associated with a running instance, it is automatically disassociated from that instance. Specify the ``removeFloatingIp`` action in the request body. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - removeFloatingIp: removeFloatingIp - address: address **Example Remove (Disassociate) Floating Ip (removeFloatingIp Action)** .. literalinclude:: ../../doc/api_samples/servers/server-action-removefloatingip-req.json :language: javascript Response -------- If successful, this method does not return content in the response body. Remove Security Group From A Server (removeSecurityGroup Action) ================================================================ .. rest_method:: POST /servers/{server_id}/action Removes a security group from a server. Specify the ``removeSecurityGroup`` action in the request body. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - removeSecurityGroup: removeSecurityGroup - name: name **Example Remove Security Group From A Server (removeSecurityGroup Action)** .. literalinclude:: ../../doc/api_samples/os-security-groups/security-group-remove-post-req.json :language: javascript Response -------- If successful, this method does not return content in the response body. Rescue Server (rescue Action) ============================= .. rest_method:: POST /servers/{server_id}/action Puts a server in rescue mode and changes its status to ``RESCUE``. .. note:: This API is not supported for volume-backed instances. Specify the ``rescue`` action in the request body. If you specify the ``rescue_image_ref`` extended attribute, the image is used to rescue the instance. If you omit an image reference, the base image reference is used by default. **Asynchronous Postconditions** After you successfully rescue a server and make a ``GET /servers/​{server_id}​`` request, its status changes to ``RESCUE``. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - rescue: rescue - adminPass: adminPass_rescue_request - rescue_image_ref: rescue_image_ref **Example Rescue server (rescue Action)** .. literalinclude:: ../../doc/api_samples/os-rescue/server-rescue-req-with-image-ref.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - adminPass: adminPass_response **Example Rescue server (rescue Action)** .. 
literalinclude:: ../../doc/api_samples/os-rescue/server-rescue.json :language: javascript Resize Server (resize Action) ============================= .. rest_method:: POST /servers/{server_id}/action Resizes a server. Specify the ``resize`` action in the request body. **Preconditions** You can only resize a server when its status is ``ACTIVE`` or ``SHUTOFF``. If the server is locked, you must have administrator privileges to resize the server. **Asynchronous Postconditions** A successfully resized server shows a ``VERIFY_RESIZE`` status and ``finished`` migration status. If the cloud has configured the `resize_confirm_window`_ option of the Compute service to a positive value, the Compute service automatically confirms the resize operation after the configured interval. .. _resize_confirm_window: https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.resize_confirm_window .. note:: There is a `known limitation `__ that ephemeral disks are not resized. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - resize: resize - flavorRef: flavorRef_resize - OS-DCF:diskConfig: OS-DCF:diskConfig **Example Resize Server (Resize Action)** .. literalinclude:: ../../doc/api_samples/servers/server-action-resize.json :language: javascript Response -------- If successful, this method does not return content in the response body. Resume Suspended Server (resume Action) ======================================= .. rest_method:: POST /servers/{server_id}/action Resumes a suspended server and changes its status to ``ACTIVE``. Specify the ``resume`` action in the request body. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - resume: resume **Example Resume Suspended Server (Resume Action)** .. literalinclude:: ../../doc/api_samples/os-suspend-server/server-resume.json :language: javascript Response -------- If successful, this method does not return content in the response body. Revert Resized Server (revertResize Action) =========================================== .. rest_method:: POST /servers/{server_id}/action Cancels and reverts a pending resize action for a server. Specify the ``revertResize`` action in the request body. **Preconditions** You can only revert the resized server where the status is ``VERIFY_RESIZE`` and the OS-EXT-STS:vm_state is ``resized``. If the server is locked, you must have administrator privileges to revert the resizing. **Asynchronous Postconditions** After you make this request, you typically must keep polling the server status to determine whether the request succeeded. A reverting resize operation shows a status of ``REVERT_RESIZE`` and a task_state of ``resize_reverting``. If successful, the status will return to ``ACTIVE`` or ``SHUTOFF``. You can also see the reverted server in the compute node that OpenStack Compute manages. **Troubleshooting** If the server status remains ``VERIFY_RESIZE``, the request failed. Ensure you meet the preconditions and run the request again. If the request fails again, investigate the compute back end. 
The server is not reverted in the compute node that OpenStack Compute manages. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - revertResize: revertResize **Example Revert Resized Server (revertResize Action)** .. literalinclude:: ../../doc/api_samples/servers/server-action-revert-resize.json :language: javascript Response -------- If successful, this method does not return content in the response body. Start Server (os-start Action) ============================== .. rest_method:: POST /servers/{server_id}/action Starts a stopped server and changes its status to ``ACTIVE``. Specify the ``os-start`` action in the request body. **Preconditions** The server status must be ``SHUTOFF``. If the server is locked, you must have administrator privileges to start the server. **Asynchronous Postconditions** After you successfully start a server, its status changes to ``ACTIVE``. **Troubleshooting** If the server status does not change to ``ACTIVE``, the start operation failed. Ensure that you meet the preconditions and run the request again. If the request fails again, investigate whether another operation is running that causes a race condition. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - os-start: os-start **Example Start server** .. literalinclude:: ../../doc/api_samples/servers/server-action-start.json :language: javascript Response -------- If successful, this method does not return content in the response body. Stop Server (os-stop Action) ============================ .. rest_method:: POST /servers/{server_id}/action Stops a running server and changes its status to ``SHUTOFF``. Specify the ``os-stop`` action in the request body. **Preconditions** The server status must be ``ACTIVE`` or ``ERROR``. If the server is locked, you must have administrator privileges to stop the server. **Asynchronous Postconditions** After you successfully stop a server, its status changes to ``SHUTOFF``. This API operation does not delete the server instance data and the data will be available again after ``os-start`` action. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - os-stop: os-stop **Example Stop server** .. literalinclude:: ../../doc/api_samples/servers/server-action-stop.json :language: javascript Response -------- If successful, this method does not return content in the response body. Suspend Server (suspend Action) =============================== .. rest_method:: POST /servers/{server_id}/action Suspends a server and changes its status to ``SUSPENDED``. Specify the ``suspend`` action in the request body. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - suspend: suspend **Example Suspend Server (suspend Action)** .. 
literalinclude:: ../../doc/api_samples/os-suspend-server/server-suspend.json :language: javascript Response -------- If successful, this method does not return content in the response body. Unlock Server (unlock Action) ============================= .. rest_method:: POST /servers/{server_id}/action Unlocks a locked server. Specify the ``unlock`` action in the request body. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - unlock: unlock **Example Unlock Server (unlock Action)** .. literalinclude:: ../../doc/api_samples/os-lock-server/unlock-server.json :language: javascript Response -------- If successful, this method does not return content in the response body. Unpause Server (unpause Action) =============================== .. rest_method:: POST /servers/{server_id}/action Unpauses a paused server and changes its status to ``ACTIVE``. Specify the ``unpause`` action in the request body. Policy defaults enable only users with the administrative role or the owner of the server to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), conflict(409), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - unpause: unpause **Example Unpause Server (unpause Action)** .. literalinclude:: ../../doc/api_samples/os-pause-server/unpause-server.json :language: javascript Response -------- If successful, this method does not return content in the response body. Unrescue Server (unrescue Action) ================================= .. rest_method:: POST /servers/{server_id}/action Unrescues a server. Changes status to ``ACTIVE``. Specify the ``unrescue`` action in the request body. **Preconditions** The server must exist. You can only unrescue a server when its status is ``RESCUE``. **Asynchronous Postconditions** After you successfully unrescue a server and make a ``GET /servers/​{server_id}​`` request, its status changes to ``ACTIVE``. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), conflict(409), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - unrescue: unrescue **Example Unrescue server** .. literalinclude:: ../../doc/api_samples/os-rescue/server-unrescue-req.json :language: javascript Response -------- If successful, this method does not return content in the response body. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/api-ref/source/servers-admin-action.inc0000664000175000017500000001645300000000000022147 0ustar00zuulzuul00000000000000.. -*- rst -*- ========================================================== Servers - run an administrative action (servers, action) ========================================================== Enables administrators to perform an action on a server. Specify the action in the request body. You can inject network information into, migrate, live-migrate, reset networking on, reset the state of a server, and evacuate a server from a failed host to a new host. 
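Each of these administrative operations is submitted as a ``POST /servers/{server_id}/action`` request whose body names the action to perform. As an illustration only (the state shown is just an example value), a reset-state request body could look like the following; the individual actions and their parameters are described in the sections that follow.

.. code-block:: json

   {
       "os-resetState": {
           "state": "active"
       }
   }
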
Inject Network Information (injectNetworkInfo Action) ===================================================== .. rest_method:: POST /servers/{server_id}/action Injects network information into a server. Specify the ``injectNetworkInfo`` action in the request body. Policy defaults enable only users with the administrative role to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. .. warning:: There is very limited support on this API, For more information, see `nova virt support matrix `__ Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - injectNetworkInfo: injectNetworkInfo **Example Inject Network Information (injectNetworkInfo Action)** .. literalinclude:: ../../doc/api_samples/os-admin-actions/admin-actions-inject-network-info.json :language: javascript Response -------- If successful, this method does not return content in the response body. Migrate Server (migrate Action) =============================== .. rest_method:: POST /servers/{server_id}/action Migrates a server to a host. Specify the ``migrate`` action in the request body. Up to microversion 2.55, the scheduler chooses the host. Starting from microversion 2.56, the ``host`` parameter is available to specify the destination host. If you specify ``null`` or don't specify this parameter, the scheduler chooses a host. **Asynchronous Postconditions** A successfully migrated server shows a ``VERIFY_RESIZE`` status and ``finished`` migration status. If the cloud has configured the `resize_confirm_window`_ option of the Compute service to a positive value, the Compute service automatically confirms the migrate operation after the configured interval. .. _resize_confirm_window: https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.resize_confirm_window Policy defaults enable only users with the administrative role to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403) itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - migrate: migrate - host: host_migration_2_56 **Example Migrate Server (migrate Action) (v2.1)** .. literalinclude:: ../../doc/api_samples/os-migrate-server/migrate-server.json :language: javascript **Example Migrate Server (migrate Action) (v2.56)** .. literalinclude:: ../../doc/api_samples/os-migrate-server/v2.56/migrate-server.json :language: javascript Response -------- If successful, this method does not return content in the response body. Live-Migrate Server (os-migrateLive Action) =========================================== .. rest_method:: POST /servers/{server_id}/action Live-migrates a server to a new host without rebooting. Specify the ``os-migrateLive`` action in the request body. Use the ``host`` parameter to specify the destination host. If this param is ``null``, the scheduler chooses a host. If a scheduled host is not suitable to do migration, the scheduler tries up to ``migrate_max_retries`` rescheduling attempts. Starting from API version 2.25, the ``block_migration`` parameter could be to ``auto`` so that nova can decide value of block_migration during live migration. Policy defaults enable only users with the administrative role to perform this operation. 
Cloud providers can change these permissions through the ``policy.json`` file. Starting from REST API version 2.34 pre-live-migration checks are done asynchronously, results of these checks are available in ``instance-actions``. Nova responds immediately, and no pre-live-migration checks are returned. The instance will not immediately change state to ``ERROR``, if a failure of the live-migration checks occurs. Starting from API version 2.68, the ``force`` parameter is no longer accepted as this could not be meaningfully supported by servers with complex resource allocations. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403) itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - os-migrateLive: os-migrateLive - host: host_migration - block_migration: block_migration - block_migration: block_migration_2_25 - disk_over_commit: disk_over_commit - force: force_live_migrate **Example Live-Migrate Server (os-migrateLive Action)** .. literalinclude:: ../../doc/api_samples/os-migrate-server/v2.30/live-migrate-server.json :language: javascript Response -------- If successful, this method does not return content in the response body. Reset Networking On A Server (resetNetwork Action) ================================================== .. rest_method:: POST /servers/{server_id}/action Resets networking on a server. .. note:: Only the XenServer driver implements this feature and only if the guest has the XenAPI agent in the targeted server. Specify the ``resetNetwork`` action in the request body. Policy defaults enable only users with the administrative role to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - resetNetwork: resetNetwork **Example Reset Networking On A Server (resetNetwork Action)** .. literalinclude:: ../../doc/api_samples/os-admin-actions/admin-actions-reset-network.json :language: javascript Response -------- If successful, this method does not return content in the response body. .. _os-resetState: Reset Server State (os-resetState Action) ========================================= .. rest_method:: POST /servers/{server_id}/action Resets the state of a server. Specify the ``os-resetState`` action and the ``state`` in the request body. Policy defaults enable only users with the administrative role to perform this operation. Cloud providers can change these permissions through the ``policy.json`` file. Normal response codes: 202 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - os-resetState: os-resetState - os-resetState.state: os-resetState_state **Example Reset Server State (os-resetState Action)** .. literalinclude:: ../../doc/api_samples/os-admin-actions/admin-actions-reset-server-state.json :language: javascript Response -------- If successful, this method does not return content in the response body. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/servers-remote-consoles.inc0000664000175000017500000000465500000000000022723 0ustar00zuulzuul00000000000000.. 
-*- rst -*- ================= Server Consoles ================= Manage server consoles. Create Console ============== .. rest_method:: POST /servers/{server_id}/remote-consoles .. note:: Microversion 2.6 or greater is required for this API. The API provides a unified request for creating a remote console. The user can get a URL to connect the console from this API. The URL includes the token which is used to get permission to access the console. Servers may support different console protocols. To return a remote console using a specific protocol, such as RDP, set the ``protocol`` parameter to ``rdp``. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409), notImplemented(501) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - remote_console: remote_console - protocol: remote_console_protocol - type: remote_console_type **Example Get Remote VNC Console** .. literalinclude:: ../../doc/api_samples/os-remote-consoles/v2.6/create-vnc-console-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - remote_console: remote_console - protocol: remote_console_protocol - type: remote_console_type - url: remote_console_url **Example Get Remote VNC Console** .. literalinclude:: ../../doc/api_samples/os-remote-consoles/v2.6/create-vnc-console-resp.json :language: javascript Show Console Connection Information =================================== .. rest_method:: GET /os-console-auth-tokens/{console_token} Given the console authentication token for a server, shows the related connection information. This method used to be available only for the ``rdp-html5`` console type before microversion 2.31. Starting from microversion 2.31 it's available for all console types. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - console_token: console_token Response -------- .. rest_parameters:: parameters.yaml - console: console - instance_uuid: instance_id_body - host: console_host - port: port_number - internal_access_path: internal_access_path **Example Show Console Authentication Token** .. literalinclude:: ../../doc/api_samples/os-console-auth-tokens/get-console-connect-info-get-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/api-ref/source/servers.inc0000664000175000017500000010427700000000000017610 0ustar00zuulzuul00000000000000.. -*- rst -*- .. needs:body_verification =================== Servers (servers) =================== Lists, creates, shows details for, updates, and deletes servers. **Passwords** When you create a server, you can specify a password through the optional adminPass attribute. The password must meet the complexity requirements set by your OpenStack Compute provider. The server might enter an ``ERROR`` state if the complexity requirements are not met. In this case, a client might issue a change password action to reset the server password. If you do not specify a password, the API generates and assigns a random password that it returns in the response object. This password meets the security requirements set by the compute provider. For security reasons, subsequent GET calls do not require this password. **Server metadata** You can specify custom server metadata at server launch time. 
The maximum size for each metadata key-value pair is 255 bytes. The compute provider determines the maximum number of key-value pairs for each server. You can query this value through the ``maxServerMeta`` absolute limit. **Server networks** You can specify one or more networks to which the server connects at launch time. Users can also specify a specific port on the network or the fixed IP address to assign to the server interface. .. note:: You can use both IPv4 and IPv6 addresses as access addresses, and you can assign both addresses simultaneously. You can update access addresses after you create a server. **Server personality** .. note:: The use of personality files is deprecated starting with the 2.57 microversion. Use ``metadata`` and ``user_data`` to customize a server instance. To customize the personality of a server instance, you can inject data into its file system. For example, you might insert ssh keys, set configuration files, or store data that you want to retrieve from inside the instance. This customization method provides minimal launch-time personalization. If you require significant customization, create a custom image. Follow these guidelines when you inject files: - The maximum size of the file path data is 255 bytes. - Encode the file contents as a Base64 string. The compute provider determines the maximum size of the file contents. The image that you use to create the server determines this value. .. note:: The maximum limit refers to the number of bytes in the decoded data and not to the number of characters in the encoded data. - The ``maxPersonality`` absolute limit defines the maximum number of file path and content pairs that you can supply. The compute provider determines this value. - The ``maxPersonalitySize`` absolute limit is a byte limit that applies to all images in the deployment. Providers can set additional per-image personality limits. The file injection might not occur until after the server builds and boots. After file injection, only system administrators can access personality files. For example, on Linux, all files have root as the owner and the root group as the group owner, and allow only user and group read access (``chmod 440``). **Server access addresses** In a hybrid environment, the underlying implementation might not control the IP address of a server. Instead, the access IP address might be part of the dedicated hardware; for example, a router/NAT device. In this case, you cannot use the addresses that the implementation provides to access the server from outside the local LAN. Instead, the API might assign a separate access address at creation time to provide access to the server. This address might not be directly bound to a network interface on the server and might not necessarily appear when you query the server addresses. However, clients should use an access address to access the server directly. List Servers ============ .. rest_method:: GET /servers Lists IDs, names, and links for servers. By default the servers are filtered using the project ID associated with the authenticated request. Servers contain a status attribute that indicates the current server state. You can filter on the server status when you complete a list servers request. The server status is returned in the response body. The possible server status values are: - ``ACTIVE``. The server is active. - ``BUILD``. The server has not finished the original build process. - ``DELETED``. The server is permanently deleted. - ``ERROR``. The server is in error. 
- ``HARD_REBOOT``. The server is hard rebooting. This is equivalent to pulling the power plug on a physical server, plugging it back in, and rebooting it. - ``MIGRATING``. The server is being migrated to a new host. - ``PASSWORD``. The password is being reset on the server. - ``PAUSED``. In a paused state, the state of the server is stored in RAM. A paused server continues to run in frozen state. - ``REBOOT``. The server is in a soft reboot state. A reboot command was passed to the operating system. - ``REBUILD``. The server is currently being rebuilt from an image. - ``RESCUE``. The server is in rescue mode. A rescue image is running with the original server image attached. - ``RESIZE``. Server is performing the differential copy of data that changed during its initial copy. Server is down for this stage. - ``REVERT_RESIZE``. The resize or migration of a server failed for some reason. The destination server is being cleaned up and the original source server is restarting. - ``SHELVED``: The server is in shelved state. Depending on the shelve offload time, the server will be automatically shelved offloaded. - ``SHELVED_OFFLOADED``: The shelved server is offloaded (removed from the compute host) and it needs unshelved action to be used again. - ``SHUTOFF``. The server is powered off and the disk image still persists. - ``SOFT_DELETED``. The server is marked as deleted but the disk images are still available to restore. - ``SUSPENDED``. The server is suspended, either by request or necessity. When you suspend a server, its state is stored on disk, all memory is written to disk, and the server is stopped. Suspending a server is similar to placing a device in hibernation and its occupied resource will not be freed but rather kept for when the server is resumed. If a server is infrequently used and the occupied resource needs to be freed to create other servers, it should be shelved. - ``UNKNOWN``. The state of the server is unknown. Contact your cloud provider. - ``VERIFY_RESIZE``. System is awaiting confirmation that the server is operational after a move or resize. There is whitelist for valid filter keys. Any filter key other than from whitelist will be silently ignored. - For non-admin users, whitelist is different from admin users whitelist. The valid whitelist can be configured using the ``os_compute_api:servers:allow_all_filters`` policy rule. By default, the valid whitelist for non-admin users includes - ``changes-since`` - ``flavor`` - ``image`` - ``ip`` - ``ip6`` (New in version 2.5) - ``name`` - ``not-tags`` (New in version 2.26) - ``not-tags-any`` (New in version 2.26) - ``reservation_id`` - ``status`` - ``tags`` (New in version 2.26) - ``tags-any`` (New in version 2.26) - ``changes-before`` (New in version 2.66) - ``locked`` (New in version 2.73) - ``availability_zone`` (New in version 2.83) - ``config_drive`` (New in version 2.83) - ``key_name`` (New in version 2.83) - ``created_at`` (New in version 2.83) - ``launched_at`` (New in version 2.83) - ``terminated_at`` (New in version 2.83) - ``power_state`` (New in version 2.83) - ``task_state`` (New in version 2.83) - ``vm_state`` (New in version 2.83) - ``progress`` (New in version 2.83) - ``user_id`` (New in version 2.83) - For admin user, whitelist includes all filter keys mentioned in :ref:`list-server-request` Section. .. 
note:: Starting with microversion 2.69 if server details cannot be loaded due to a transient condition in the deployment like infrastructure failure, the response body for those unavailable servers will be missing keys. See `handling down cells `__ section of the Compute API guide for more information on the keys that would be returned in the partial constructs. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403) .. _list-server-request: Request ------- .. rest_parameters:: parameters.yaml - access_ip_v4: access_ip_v4_query_server - access_ip_v6: access_ip_v6_query_server - all_tenants: all_tenants_query - auto_disk_config: disk_config_query_server - availability_zone: availability_zone_query_server - changes-since: changes_since_server - config_drive: config_drive_query_server - created_at: created_at_query_server - deleted: deleted_query - description: description_query_server - flavor: flavor_query - host: host_query_server - hostname: hostname_query_server - image: image_query - ip: ip_query - ip6: ip6_query - kernel_id: kernel_id_query_server - key_name: key_name_query_server - launch_index: launch_index_query_server - launched_at: launched_at_query_server - limit: limit - locked_by: locked_by_query_server - marker: marker - name: server_name_query - node: node_query_server - power_state: power_state_query_server - progress: progress_query_server - project_id: project_id_query_server - ramdisk_id: ramdisk_id_query_server - reservation_id: reservation_id_query - root_device_name: server_root_device_name_query - soft_deleted: soft_deleted_server - sort_dir: sort_dir_server - sort_key: sort_key_server - status: server_status_query - task_state: task_state_query_server - terminated_at: terminated_at_query_server - user_id: user_id_query_server - uuid: server_uuid_query - vm_state: vm_state_query_server - not-tags: not_tags_query - not-tags-any: not_tags_any_query - tags: tags_query - tags-any: tags_any_query - changes-before: changes_before_server - locked: locked_query_server Response -------- .. rest_parameters:: parameters.yaml - servers: servers - id: server_id - links: links - name: server_name - servers_links: servers_links **Example List Servers** .. literalinclude:: ../../doc/api_samples/servers/servers-list-resp.json :language: javascript **Example List Servers (2.69)** This is a sample response for the servers from the non-responsive part of the deployment. The responses for the available server records will be normal without any missing keys. .. literalinclude:: ../../doc/api_samples/servers/v2.69/servers-list-resp.json :language: javascript Create Server ============= .. rest_method:: POST /servers Creates a server. The progress of this operation depends on the location of the requested image, network I/O, host load, selected flavor, and other factors. To check the progress of the request, make a ``GET /servers/{id}`` request. This call returns a progress attribute, which is a percentage value from 0 to 100. The ``Location`` header returns the full URL to the newly created server and is available as a ``self`` and ``bookmark`` link in the server representation. When you create a server, the response shows only the server ID, its links, and the admin password. You can get additional attributes through subsequent ``GET`` requests on the server. Include the ``block_device_mapping_v2`` parameter in the create request body to boot a server from a volume. 
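To make the boot-from-volume case concrete, the following is a minimal, illustrative sketch of such a create request using the Python ``requests`` library; the service URL, token, flavor, network and volume identifiers are placeholders rather than values defined by this document.

.. code-block:: python

    import requests

    # Placeholder values -- substitute the endpoint from your Identity service
    # catalog, a valid token, and real flavor/network/volume identifiers.
    COMPUTE_URL = "http://mycompute.pvt/compute/v2.1"
    TOKEN = "<auth-token>"

    body = {
        "server": {
            "name": "bfv-server",
            "flavorRef": "<flavor-id>",
            "networks": [{"uuid": "<network-uuid>"}],
            # Boot from an existing bootable volume instead of an image.
            "block_device_mapping_v2": [{
                "boot_index": 0,
                "source_type": "volume",
                "destination_type": "volume",
                "uuid": "<bootable-volume-uuid>",
                "delete_on_termination": False,
            }],
        }
    }

    resp = requests.post(f"{COMPUTE_URL}/servers", json=body,
                         headers={"X-Auth-Token": TOKEN})
    resp.raise_for_status()  # the normal response code is 202
    print(resp.json()["server"]["id"])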
Include the ``key_name`` parameter in the create request body to add a keypair to the server when you create it. To create a keypair, make a `create keypair `__ request. .. note:: Starting with microversion 2.37 the ``networks`` field is required. **Preconditions** - The user must have sufficient server quota to create the number of servers requested. - The connection to the Image service is valid. **Asynchronous postconditions** - With correct permissions, you can see the server status as ``ACTIVE`` through API calls. - With correct access, you can see the created server in the compute node that OpenStack Compute manages. **Troubleshooting** - If the server status remains ``BUILDING`` or shows another error status, the request failed. Ensure you meet the preconditions then investigate the compute node. - The server is not created in the compute node that OpenStack Compute manages. - The compute node needs enough free resource to match the resource of the server creation request. - Ensure that the scheduler selection filter can fulfill the request with the available compute nodes that match the selection criteria of the filter. Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) .. TODO(sdague): leave these notes for later when fixing the body language. They are commented out so they won't render, but are useful to not have to look this up again later. A conflict(409) is returned in the event of trying to allocated already allocated resources (such as networks) to the server in question. entityTooLarge(413) is returned if the ``user_data`` exceeds what is allowed by the backend. All other failure conditions map to 400, and will need to be disambiguated by the error string returned. Request ------- .. 
rest_parameters:: parameters.yaml - server: server - flavorRef: flavorRef - name: server_name - networks: networks - networks.uuid: network_uuid - networks.port: port - networks.fixed_ip: fixed_ip - networks.tag: device_tag_nic - accessIPv4: accessIPv4_in - accessIPv6: accessIPv6_in - adminPass: adminPass_request - availability_zone: os-availability-zone:availability_zone - block_device_mapping_v2: block_device_mapping_v2 - block_device_mapping_v2.boot_index: boot_index - block_device_mapping_v2.delete_on_termination: delete_on_termination - block_device_mapping_v2.destination_type: destination_type - block_device_mapping_v2.device_name: device_name - block_device_mapping_v2.device_type: device_type - block_device_mapping_v2.disk_bus: disk_bus - block_device_mapping_v2.guest_format: guest_format - block_device_mapping_v2.no_device: no_device - block_device_mapping_v2.source_type: source_type - block_device_mapping_v2.uuid: block_device_uuid - block_device_mapping_v2.volume_size: volume_size - block_device_mapping_v2.tag: device_tag_bdm - block_device_mapping_v2.volume_type: device_volume_type - config_drive: config_drive - imageRef: imageRef - key_name: key_name - metadata: metadata - OS-DCF:diskConfig: OS-DCF:diskConfig - personality: personality - security_groups: security_groups - user_data: user_data - description: server_description - tags: server_tags_create - trusted_image_certificates: server_trusted_image_certificates_create_req - host: server_host_create - hypervisor_hostname: server_hypervisor_hostname_create - os:scheduler_hints: os:scheduler_hints - os:scheduler_hints.build_near_host_ip: os:scheduler_hints_build_near_host_ip - os:scheduler_hints.cidr: os:scheduler_hints_cidr - os:scheduler_hints.different_cell: os:scheduler_hints_different_cell - os:scheduler_hints.different_host: os:scheduler_hints_different_host - os:scheduler_hints.group: os:scheduler_hints_group - os:scheduler_hints.query: os:scheduler_hints_query - os:scheduler_hints.same_host: os:scheduler_hints_same_host - os:scheduler_hints.target_cell: os:scheduler_hints_target_cell **Example Create Server** .. literalinclude:: ../../doc/api_samples/servers/server-create-req.json :language: javascript **Example Create Server With Networks(array) and Block Device Mapping V2 (v2.32)** .. literalinclude:: ../../doc/api_samples/servers/v2.32/server-create-req.json :language: javascript **Example Create Server With Automatic Networking (v2.37)** .. literalinclude:: ../../doc/api_samples/servers/v2.37/server-create-req.json :language: javascript **Example Create Server With Trusted Image Certificates (v2.63)** .. literalinclude:: ../../doc/api_samples/servers/v2.63/server-create-req.json :language: javascript **Example Create Server With Host and Hypervisor Hostname (v2.74)** .. literalinclude:: ../../doc/api_samples/servers/v2.74/server-create-req-with-host-and-node.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - Location: server_location - server: server - id: server_id - links: links - OS-DCF:diskConfig: disk_config - security_groups: security_groups_obj - security_groups.name: name - adminPass: adminPass_response **Example Create Server** .. literalinclude:: ../../doc/api_samples/servers/server-create-resp.json :language: javascript Create Multiple Servers ======================= .. rest_method:: POST /servers There is a second kind of create call which can build multiple servers at once. 
This supports all the same parameters as create with a few additional attributes specific to multiple create. Error handling for multiple create is not as consistent as for single server create, and there is no guarantee that all the servers will be built. This call should generally be avoided in favor of clients doing direct individual server creates. Request (Additional Parameters) ------------------------------- These are the parameters beyond single create that are supported. .. rest_parameters:: parameters.yaml - name: servers_multiple_create_name - min_count: servers_min_count - max_count: servers_max_count - return_reservation_id: return_reservation_id **Example Multiple Create with reservation ID** .. literalinclude:: ../../doc/api_samples/os-multiple-create/multiple-create-post-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - reservation_id: reservation_id If ``return_reservation_id`` is set to ``true`` only the ``reservation_id`` will be returned. This can be used as a filter with list servers detailed to see the status of all the servers being built. **Example Create multiple servers with reservation ID** .. literalinclude:: ../../doc/api_samples/os-multiple-create/multiple-create-post-resp.json :language: javascript If ``return_reservation_id`` is set to ``false`` a representation of the ``first`` server will be returned. **Example Create multiple servers without reservation ID** .. literalinclude:: ../../doc/api_samples/os-multiple-create/multiple-create-no-resv-post-resp.json :language: javascript List Servers Detailed ===================== .. rest_method:: GET /servers/detail For each server, shows server details including config drive, extended status, and server usage information. The extended status information appears in the OS-EXT-STS:vm_state, OS-EXT-STS:power_state, and OS-EXT-STS:task_state attributes. The server usage information appears in the OS-SRV-USG:launched_at and OS-SRV-USG:terminated_at attributes. HostId is unique per account and is not globally unique. .. note:: Starting with microversion 2.69 if server details cannot be loaded due to a transient condition in the deployment like infrastructure failure, the response body for those unavailable servers will be missing keys. See `handling down cells `__ section of the Compute API guide for more information on the keys that would be returned in the partial constructs. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403) Request ------- .. 
rest_parameters:: parameters.yaml - access_ip_v4: access_ip_v4_query_server - access_ip_v6: access_ip_v6_query_server - all_tenants: all_tenants_query - auto_disk_config: disk_config_query_server - availability_zone: availability_zone_query_server - changes-since: changes_since_server - config_drive: config_drive_query_server - created_at: created_at_query_server - deleted: deleted_query - description: description_query_server - flavor: flavor_query - host: host_query_server - hostname: hostname_query_server - image: image_query - ip: ip_query - ip6: ip6_query - kernel_id: kernel_id_query_server - key_name: key_name_query_server - launch_index: launch_index_query_server - launched_at: launched_at_query_server - limit: limit - locked_by: locked_by_query_server - marker: marker - name: server_name_query - node: node_query_server - power_state: power_state_query_server - progress: progress_query_server - project_id: project_id_query_server - ramdisk_id: ramdisk_id_query_server - reservation_id: reservation_id_query - root_device_name: server_root_device_name_query - soft_deleted: soft_deleted_server - sort_dir: sort_dir_server - sort_key: sort_key_server - status: server_status_query - task_state: task_state_query_server - terminated_at: terminated_at_query_server - user_id: user_id_query_server - uuid: server_uuid_query - vm_state: vm_state_query_server - not-tags: not_tags_query - not-tags-any: not_tags_any_query - tags: tags_query - tags-any: tags_any_query - changes-before: changes_before_server - locked: locked_query_server Response -------- .. rest_parameters:: parameters.yaml - server: server - accessIPv4: accessIPv4 - accessIPv6: accessIPv6 - addresses: addresses - config_drive: config_drive_resp - created: created - flavor: flavor_server - flavor.id: flavor_id_body_2_46 - flavor.links: flavor_links_2_46 - flavor.vcpus: flavor_cpus_2_47 - flavor.ram: flavor_ram_2_47 - flavor.disk: flavor_disk_2_47 - flavor.ephemeral: flavor_ephem_disk_2_47 - flavor.swap: flavor_swap_2_47 - flavor.original_name: flavor_original_name - flavor.extra_specs: extra_specs_2_47 - flavor.extra_specs.key: flavor_extra_spec_key_2_47 - flavor.extra_specs.value: flavor_extra_spec_value_2_47 - hostId: hostId - id: server_id - image: image - key_name: key_name_resp - links: links - metadata: metadata_compat - name: server_name - OS-DCF:diskConfig: disk_config - OS-EXT-AZ:availability_zone: OS-EXT-AZ:availability_zone - OS-EXT-SRV-ATTR:host: OS-EXT-SRV-ATTR:host - OS-EXT-SRV-ATTR:hypervisor_hostname: OS-EXT-SRV-ATTR:hypervisor_hostname - OS-EXT-SRV-ATTR:instance_name: OS-EXT-SRV-ATTR:instance_name - OS-EXT-STS:power_state: OS-EXT-STS:power_state - OS-EXT-STS:task_state: OS-EXT-STS:task_state - OS-EXT-STS:vm_state: OS-EXT-STS:vm_state - os-extended-volumes:volumes_attached: os-extended-volumes:volumes_attached - os-extended-volumes:volumes_attached.id: os-extended-volumes:volumes_attached.id - os-extended-volumes:volumes_attached.delete_on_termination: os-extended-volumes:volumes_attached.delete_on_termination - OS-SRV-USG:launched_at: OS-SRV-USG:launched_at - OS-SRV-USG:terminated_at: OS-SRV-USG:terminated_at - status: server_status - tenant_id: tenant_id_body - updated: updated - user_id: user_id - fault: fault - fault.code: fault_code - fault.created: fault_created - fault.message: fault_message - fault.details: fault_details - progress: progress - security_groups: security_groups_obj_optional - security_group.name: name - servers_links: servers_links - OS-EXT-SRV-ATTR:hostname: server_hostname - 
OS-EXT-SRV-ATTR:reservation_id: server_reservation_id - OS-EXT-SRV-ATTR:launch_index: server_launch_index - OS-EXT-SRV-ATTR:kernel_id: server_kernel_id - OS-EXT-SRV-ATTR:ramdisk_id: server_ramdisk_id - OS-EXT-SRV-ATTR:root_device_name: server_root_device_name - OS-EXT-SRV-ATTR:user_data: server_user_data - locked: locked - host_status: host_status - description: server_description_resp - tags: tags - trusted_image_certificates: server_trusted_image_certificates_resp - locked_reason: locked_reason_resp **Example List Servers Detailed (2.73)** .. literalinclude:: /../../doc/api_samples/servers/v2.73/servers-details-resp.json :language: javascript **Example List Servers Detailed (2.69)** This is a sample response for the servers from the non-responsive part of the deployment. The responses for the available server records will be normal without any missing keys. .. literalinclude:: ../../doc/api_samples/servers/v2.69/servers-details-resp.json :language: javascript Show Server Details =================== .. rest_method:: GET /servers/{server_id} Shows details for a server. Includes server details including configuration drive, extended status, and server usage information. The extended status information appears in the ``OS-EXT-STS:vm_state``, ``OS-EXT-STS:power_state``, and ``OS-EXT-STS:task_state`` attributes. The server usage information appears in the ``OS-SRV-USG:launched_at`` and ``OS-SRV-USG:terminated_at`` attributes. HostId is unique per account and is not globally unique. **Preconditions** The server must exist. .. note:: Starting with microversion 2.69 if the server detail cannot be loaded due to a transient condition in the deployment like infrastructure failure, the response body for the unavailable server will be missing keys. See `handling down cells `__ section of the Compute API guide for more information on the keys that would be returned in the partial constructs. Normal response codes: 200 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path Response -------- .. 
rest_parameters:: parameters.yaml - server: server - accessIPv4: accessIPv4 - accessIPv6: accessIPv6 - addresses: addresses - config_drive: config_drive_resp - created: created - flavor: flavor_server - flavor.id: flavor_id_body_2_46 - flavor.links: flavor_links_2_46 - flavor.vcpus: flavor_cpus_2_47 - flavor.ram: flavor_ram_2_47 - flavor.disk: flavor_disk_2_47 - flavor.ephemeral: flavor_ephem_disk_2_47 - flavor.swap: flavor_swap_2_47 - flavor.original_name: flavor_original_name - flavor.extra_specs: extra_specs_2_47 - flavor.extra_specs.key: flavor_extra_spec_key_2_47 - flavor.extra_specs.value: flavor_extra_spec_value_2_47 - hostId: hostId - id: server_id - image: image - key_name: key_name_resp - links: links - metadata: metadata_compat - name: server_name - OS-DCF:diskConfig: disk_config - OS-EXT-AZ:availability_zone: OS-EXT-AZ:availability_zone - OS-EXT-SRV-ATTR:host: OS-EXT-SRV-ATTR:host - OS-EXT-SRV-ATTR:hypervisor_hostname: OS-EXT-SRV-ATTR:hypervisor_hostname - OS-EXT-SRV-ATTR:instance_name: OS-EXT-SRV-ATTR:instance_name - OS-EXT-STS:power_state: OS-EXT-STS:power_state - OS-EXT-STS:task_state: OS-EXT-STS:task_state - OS-EXT-STS:vm_state: OS-EXT-STS:vm_state - os-extended-volumes:volumes_attached: os-extended-volumes:volumes_attached - os-extended-volumes:volumes_attached.id: os-extended-volumes:volumes_attached.id - os-extended-volumes:volumes_attached.delete_on_termination: os-extended-volumes:volumes_attached.delete_on_termination - OS-SRV-USG:launched_at: OS-SRV-USG:launched_at - OS-SRV-USG:terminated_at: OS-SRV-USG:terminated_at - status: server_status - tenant_id: tenant_id_body - updated: updated - user_id: user_id - fault: fault - fault.code: fault_code - fault.created: fault_created - fault.message: fault_message - fault.details: fault_details - progress: progress - security_groups: security_groups_obj_optional - security_group.name: name - OS-EXT-SRV-ATTR:hostname: server_hostname - OS-EXT-SRV-ATTR:reservation_id: server_reservation_id - OS-EXT-SRV-ATTR:launch_index: server_launch_index - OS-EXT-SRV-ATTR:kernel_id: server_kernel_id - OS-EXT-SRV-ATTR:ramdisk_id: server_ramdisk_id - OS-EXT-SRV-ATTR:root_device_name: server_root_device_name - OS-EXT-SRV-ATTR:user_data: server_user_data - locked: locked - host_status: host_status - description: server_description_resp - tags: tags - trusted_image_certificates: server_trusted_image_certificates_resp - server_groups: server_groups_2_71 - locked_reason: locked_reason_resp **Example Show Server Details (2.73)** .. literalinclude:: ../../doc/api_samples/servers/v2.73/server-get-resp.json :language: javascript **Example Show Server Details (2.69)** This is a sample response for a server from the non-responsive part of the deployment. The responses for available server records will be normal without any missing keys. .. literalinclude:: ../../doc/api_samples/servers/v2.69/server-get-resp.json :language: javascript Update Server ============= .. rest_method:: PUT /servers/{server_id} Updates the editable attributes of an existing server. Normal response codes: 200 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path - server: server - accessIPv4: accessIPv4_in - accessIPv6: accessIPv6_in - name: server_name_optional - OS-DCF:diskConfig: OS-DCF:diskConfig - description: server_description .. note:: You can specify parameters to update independently. e.g. 
``name`` only, ``description`` only, ``name`` and ``description``, etc. **Example Update Server (2.63)** .. literalinclude:: ../../doc/api_samples/servers/v2.63/server-update-req.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - server: server - accessIPv4: accessIPv4 - accessIPv6: accessIPv6 - addresses: addresses - created: created - flavor: flavor_server - flavor.id: flavor_id_body_2_46 - flavor.links: flavor_links_2_46 - flavor.vcpus: flavor_cpus_2_47 - flavor.ram: flavor_ram_2_47 - flavor.disk: flavor_disk_2_47 - flavor.ephemeral: flavor_ephem_disk_2_47 - flavor.swap: flavor_swap_2_47 - flavor.original_name: flavor_original_name - flavor.extra_specs: extra_specs_2_47 - flavor.extra_specs.key: flavor_extra_spec_key_2_47 - flavor.extra_specs.value: flavor_extra_spec_value_2_47 - hostId: hostId - id: server_id - image: image - links: links - metadata: metadata_compat - name: server_name - OS-DCF:diskConfig: disk_config - status: server_status - tenant_id: tenant_id_body - updated: updated - user_id: user_id - fault: fault - fault.code: fault_code - fault.created: fault_created - fault.message: fault_message - fault.details: fault_details - progress: progress - locked: locked - description: server_description_resp - tags: tags - trusted_image_certificates: server_trusted_image_certificates_resp - server_groups: server_groups_2_71 - locked_reason: locked_reason_resp - config_drive: config_drive_resp_update_rebuild - OS-EXT-AZ:availability_zone: OS-EXT-AZ:availability_zone_update_rebuild - OS-EXT-SRV-ATTR:host: OS-EXT-SRV-ATTR:host_update_rebuild - OS-EXT-SRV-ATTR:hypervisor_hostname: OS-EXT-SRV-ATTR:hypervisor_hostname_update_rebuild - OS-EXT-SRV-ATTR:instance_name: OS-EXT-SRV-ATTR:instance_name_update_rebuild - OS-EXT-STS:power_state: OS-EXT-STS:power_state_update_rebuild - OS-EXT-STS:task_state: OS-EXT-STS:task_state_update_rebuild - OS-EXT-STS:vm_state: OS-EXT-STS:vm_state_update_rebuild - OS-EXT-SRV-ATTR:hostname: server_hostname_update_rebuild - OS-EXT-SRV-ATTR:reservation_id: server_reservation_id_update_rebuild - OS-EXT-SRV-ATTR:launch_index: server_launch_index_update_rebuild - OS-EXT-SRV-ATTR:kernel_id: server_kernel_id_update_rebuild - OS-EXT-SRV-ATTR:ramdisk_id: server_ramdisk_id_update_rebuild - OS-EXT-SRV-ATTR:root_device_name: server_root_device_name_update_rebuild - OS-EXT-SRV-ATTR:user_data: server_user_data_update - os-extended-volumes:volumes_attached: os-extended-volumes:volumes_attached_update_rebuild - os-extended-volumes:volumes_attached.id: os-extended-volumes:volumes_attached.id_update_rebuild - os-extended-volumes:volumes_attached.delete_on_termination: os-extended-volumes:volumes_attached.delete_on_termination_update_rebuild - OS-SRV-USG:launched_at: OS-SRV-USG:launched_at_update_rebuild - OS-SRV-USG:terminated_at: OS-SRV-USG:terminated_at_update_rebuild - security_groups: security_groups_obj_update_rebuild - security_group.name: name_update_rebuild - host_status: host_status_update_rebuild - key_name: key_name_resp_update **Example Update Server (2.75)** .. literalinclude:: ../../doc/api_samples/servers/v2.75/server-update-resp.json :language: javascript Delete Server ============= .. rest_method:: DELETE /servers/{server_id} Deletes a server. 
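A minimal, illustrative sketch of issuing this request with the Python ``requests`` library is shown below; the service URL, token and server UUID are placeholders.

.. code-block:: python

    import requests

    # Placeholder values -- substitute the endpoint from your Identity service
    # catalog, a valid token, and the UUID of the server to delete.
    COMPUTE_URL = "http://mycompute.pvt/compute/v2.1"
    TOKEN = "<auth-token>"
    SERVER_ID = "<server-uuid>"

    resp = requests.delete(f"{COMPUTE_URL}/servers/{SERVER_ID}",
                           headers={"X-Auth-Token": TOKEN})
    # A successful request returns 204 with no response body; a locked server
    # returns 409, as described below.
    resp.raise_for_status()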
By default, the instance is going to be (hard) deleted immediately from the system, but you can set ``reclaim_instance_interval`` > 0 to make the API soft delete the instance, so that the instance won't be deleted until the ``reclaim_instance_interval`` has expired since the instance was soft deleted. The instance marked as ``SOFT_DELETED`` can be recovered via ``restore`` action before it's really deleted from the system. **Preconditions** - The server must exist. - Anyone can delete a server when the status of the server is not locked and when the policy allows. - If the server is locked, you must have administrator privileges to delete the server. **Asynchronous postconditions** - With correct permissions, you can see the server status as ``deleting``. - The ports attached to the server, which Nova created during the server create process or when attaching interfaces later, are deleted. - The server does not appear in the list servers response. - If hard delete, the server managed by OpenStack Compute is deleted on the compute node. **Troubleshooting** - If server status remains in ``deleting`` status or another error status, the request failed. Ensure that you meet the preconditions. Then, investigate the compute back end. - The request returns the HTTP 409 response code when the server is locked even if you have correct permissions. Ensure that you meet the preconditions then investigate the server status. - The server managed by OpenStack Compute is not deleted from the compute node. Normal response codes: 204 Error response codes: unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) .. TODO(sdague): for later phase of updating body. conflict is returned under 2 conditions. When the instance is locked, so can't be deleted, or if the instance is in some other state which makes it not possible to delete. Request ------- .. rest_parameters:: parameters.yaml - server_id: server_id_path Response -------- There is no body content for the response of a successful DELETE query ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/urls.inc0000664000175000017500000000210300000000000017065 0ustar00zuulzuul00000000000000.. -*- rst -*- ============== Service URLs ============== All API calls described throughout the rest of this document require authentication with the OpenStack Identity service. After authentication, a base ``service url`` can be extracted from the Identity token of type ``compute``. This ``service url`` will be the root url that every API call uses to build a full path. For instance, if the ``service url`` is ``http://mycompute.pvt/compute/v2.1`` then the full API call for ``/servers`` is ``http://mycompute.pvt/compute/v2.1/servers``. Depending on the deployment, the Compute ``service url`` might be http or https, a custom port, a custom path, and include your tenant id. The only way to know the urls for your deployment is by using the service catalog. The Compute URL should never be hard coded in applications, even if they are only expected to work at a single site. It should always be discovered from the Identity token. As such, for the rest of this document we will be using short hand where ``GET /servers`` really means ``GET {your_compute_service_url}/servers``. 
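As a small, illustrative sketch of this shorthand in Python (using the ``requests`` library), the example endpoint from this section stands in for a URL discovered from the Identity service catalog, and the token is a placeholder:

.. code-block:: python

    import requests

    # In a real deployment this URL is discovered from the Identity service
    # catalog; the value below is only the example endpoint used above.
    compute_service_url = "http://mycompute.pvt/compute/v2.1"
    token = "<auth-token>"

    # ``GET /servers`` in this document really means:
    resp = requests.get(f"{compute_service_url}/servers",
                        headers={"X-Auth-Token": token})
    resp.raise_for_status()
    for server in resp.json()["servers"]:
        print(server["id"], server["name"])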
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/api-ref/source/versions.inc0000664000175000017500000000535600000000000017765 0ustar00zuulzuul00000000000000.. -*- rst -*- ============== API Versions ============== In order to bring new features to users over time, the Nova API supports versioning. There are two kinds of versions in Nova. - ''major versions'', which have dedicated urls - ''microversions'', which can be requested through the use of the ``X-OpenStack-Nova-API-Version`` header, or since microversion 2.27 the ``OpenStack-API-Version`` header may also be used. For more details about Microversions, please reference: `Microversions `_ .. note:: The maximum microversion supported by each release varies. Please reference: `API Microversion History `__ for API microversion history details. The Version APIs work differently from other APIs as they *do not* require authentication. List All Major Versions ======================= .. rest_method:: GET / This fetches all the information about all known major API versions in the deployment. Links to more specific information will be provided for each API version, as well as information about supported min and max microversions. Normal Response Codes: 200 Response -------- .. rest_parameters:: parameters.yaml - versions: versions - id: version_id - links: links - min_version: version_min - status: version_status - updated: updated_version - version: version_max Response Example ---------------- This demonstrates the expected response from a bleeding edge server that supports up to the current microversion. When querying OpenStack environments you will typically find the current microversion on the v2.1 API is lower than listed below. .. literalinclude:: /../../doc/api_samples/versions/versions-get-resp.json :language: javascript Show Details of Specific API Version ==================================== .. rest_method:: GET /{api_version}/ This gets the details of a specific API at its root. Nearly all this information exists at the API root, so this is mostly a redundant operation. .. TODO(sdague) we should probably deprecate this call as everything that's needed is really in the root now Normal Response Codes: 200 Request ------- .. rest_parameters:: parameters.yaml - api_version: api_version Response -------- .. rest_parameters:: parameters.yaml - version: version - id: version_id - links: links - media-types: media_types - min_version: version_min - status: version_status - updated: updated_version - version: version_max Response Example ---------------- This is an example of a ``GET /v2.1/`` on a relatively current server. .. literalinclude:: /../../doc/api_samples/versions/v21-version-get-resp.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/babel.cfg0000664000175000017500000000002100000000000014305 0ustar00zuulzuul00000000000000[python: **.py] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/bindep.txt0000664000175000017500000000276700000000000014604 0ustar00zuulzuul00000000000000# This is a cross-platform list tracking distribution packages needed for install and tests; # see https://docs.openstack.org/infra/bindep/ for additional information. build-essential [platform:dpkg test] gcc [platform:rpm test] # gettext and graphviz are needed by doc builds only. 
For transition, # have them in both doc and test. # TODO(jaegerandi): Remove test once infra scripts are updated. gettext [doc test] graphviz [doc test] # libsrvg2 is needed for sphinxcontrib-svg2pdfconverter in docs builds. librsvg2-tools [doc platform:rpm] librsvg2-bin [doc platform:dpkg] language-pack-en [platform:ubuntu] libffi-dev [platform:dpkg test] libffi-devel [platform:rpm test] libmysqlclient-dev [platform:dpkg] libpq-dev [platform:dpkg test] libsqlite3-dev [platform:dpkg test] libxml2-dev [platform:dpkg test] libxslt-devel [platform:rpm test] libxslt1-dev [platform:dpkg test] locales [platform:debian] mysql [platform:rpm] mysql-client [platform:dpkg] mysql-devel [platform:rpm test] mysql-server openssh-client [platform:dpkg] openssh-clients [platform:rpm] pkg-config [platform:dpkg test] pkgconfig [platform:rpm test] postgresql postgresql-client [platform:dpkg] postgresql-devel [platform:rpm test] postgresql-server [platform:rpm] python-dev [platform:dpkg test] python-devel [platform:rpm test] python3-all [platform:dpkg] python3-all-dev [platform:dpkg] python3-devel [platform:fedora] python34-devel [platform:centos] sqlite-devel [platform:rpm test] libpcre3-dev [platform:dpkg test] pcre-devel [platform:rpm test] ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.106472 nova-21.2.4/devstack/0000775000175000017500000000000000000000000014372 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/devstack/nova-multi-cell-blacklist.txt0000664000175000017500000000111300000000000022105 0ustar00zuulzuul00000000000000# --blacklist-file contents for the nova-multi-cell job defined in .zuul.yaml # See: https://stestr.readthedocs.io/en/latest/MANUAL.html#test-selection # Exclude tempest.scenario.test_network tests since they are slow and # only test advanced neutron features, unrelated to multi-cell testing. ^tempest.scenario.test_network # Also exlude resize and migrate tests with qos ports as qos is currently # not supported in cross cell resize case . See # https://bugs.launchpad.net/nova/+bug/1907511 for details test_migrate_with_qos_min_bw_allocation test_resize_with_qos_min_bw_allocation ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.110472 nova-21.2.4/doc/0000775000175000017500000000000000000000000013333 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/README.rst0000664000175000017500000000066400000000000015030 0ustar00zuulzuul00000000000000OpenStack Nova Documentation README =================================== Both contributor developer documentation and REST API documentation are sourced here. Contributor developer docs are built to: https://docs.openstack.org/nova/latest/ API guide docs are built to: https://docs.openstack.org/api-guide/compute/ For more details, see the "Building the Documentation" section of doc/source/contributor/development-environment.rst. 
././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0584724 nova-21.2.4/doc/api_samples/0000775000175000017500000000000000000000000015630 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.110472 nova-21.2.4/doc/api_samples/consoles/0000775000175000017500000000000000000000000017455 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/consoles/consoles-get-resp.json0000664000175000017500000000030600000000000023720 0ustar00zuulzuul00000000000000{ "console": { "console_type": "fake", "host": "fake", "id": 1, "instance_name": "instance-00000001", "password": "C4jBpJ6x", "port": 5999 } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/consoles/consoles-list-get-resp.json0000664000175000017500000000022600000000000024672 0ustar00zuulzuul00000000000000{ "consoles": [ { "console": { "console_type": "fake", "id": 1 } } ] }././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.110472 nova-21.2.4/doc/api_samples/extension-info/0000775000175000017500000000000000000000000020575 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/extension-info/extensions-get-resp.json0000664000175000017500000000040400000000000025411 0ustar00zuulzuul00000000000000{ "extension": { "alias": "os-agents", "description": "Agents support.", "links": [], "name": "Agents", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/extension-info/extensions-list-resp-v21-compatible.json0000664000175000017500000007632200000000000030344 0ustar00zuulzuul00000000000000{ "extensions": [ { "alias": "NMN", "description": "Multiple network support.", "links": [], "name": "Multinic", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-DCF", "description": "Disk Management Extension.", "links": [], "name": "DiskConfig", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-AZ", "description": "Extended Availability Zone support.", "links": [], "name": "ExtendedAvailabilityZone", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IMG-SIZE", "description": "Adds image size to image listings.", "links": [], "name": "ImageSize", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IPS", "description": "Adds type parameter to the ip list.", "links": [], "name": "ExtendedIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IPS-MAC", "description": "Adds mac address parameter to the ip list.", "links": [], "name": "ExtendedIpsMac", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-SRV-ATTR", "description": "Extended Server Attributes support.", "links": [], "name": 
"ExtendedServerAttributes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-STS", "description": "Extended Status support.", "links": [], "name": "ExtendedStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-VIF-NET", "description": "Adds network id parameter to the virtual interface list.", "links": [], "name": "ExtendedVIFNet", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-FLV-DISABLED", "description": "Support to show the disabled status of a flavor.", "links": [], "name": "FlavorDisabled", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-FLV-EXT-DATA", "description": "Provide additional data for flavors.", "links": [], "name": "FlavorExtraData", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-SCH-HNT", "description": "Pass arbitrary key/value pairs to the scheduler.", "links": [], "name": "SchedulerHints", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-SRV-USG", "description": "Adds launched_at and terminated_at on Servers.", "links": [], "name": "ServerUsage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-access-ips", "description": "Access IPs support.", "links": [], "name": "AccessIPs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-admin-actions", "description": "Enable admin-only server actions\n\n Actions include: resetNetwork, injectNetworkInfo, os-resetState\n ", "links": [], "name": "AdminActions", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-admin-password", "description": "Admin password management support.", "links": [], "name": "AdminPassword", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-agents", "description": "Agents support.", "links": [], "name": "Agents", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-aggregates", "description": "Admin-only aggregate administration.", "links": [], "name": "Aggregates", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-assisted-volume-snapshots", "description": "Assisted volume snapshots.", "links": [], "name": "AssistedVolumeSnapshots", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-attach-interfaces", "description": "Attach interface support.", "links": [], "name": "AttachInterfaces", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-availability-zone", "description": "1. Add availability_zone to the Create Server API.\n 2. 
Add availability zones describing.\n ", "links": [], "name": "AvailabilityZone", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-baremetal-ext-status", "description": "Add extended status in Baremetal Nodes v2 API.", "links": [], "name": "BareMetalExtStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-baremetal-nodes", "description": "Admin-only bare-metal node administration.", "links": [], "name": "BareMetalNodes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-block-device-mapping", "description": "Block device mapping boot support.", "links": [], "name": "BlockDeviceMapping", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-block-device-mapping-v2-boot", "description": "Allow boot with the new BDM data format.", "links": [], "name": "BlockDeviceMappingV2Boot", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cell-capacities", "description": "Adding functionality to get cell capacities.", "links": [], "name": "CellCapacities", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cells", "description": "Enables cells-related functionality such as adding neighbor cells,\n listing neighbor cells, and getting the capabilities of the local cell.\n ", "links": [], "name": "Cells", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-certificates", "description": "Certificates support.", "links": [], "name": "Certificates", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cloudpipe", "description": "Adds actions to create cloudpipe instances.\n\n When running with the Vlan network mode, you need a mechanism to route\n from the public Internet to your vlans. This mechanism is known as a\n cloudpipe.\n\n At the time of creating this class, only OpenVPN is supported. 
Support for\n a SSH Bastion host is forthcoming.\n ", "links": [], "name": "Cloudpipe", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cloudpipe-update", "description": "Adds the ability to set the vpn ip/port for cloudpipe instances.", "links": [], "name": "CloudpipeUpdate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-config-drive", "description": "Config Drive Extension.", "links": [], "name": "ConfigDrive", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-console-auth-tokens", "description": "Console token authentication support.", "links": [], "name": "ConsoleAuthTokens", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-console-output", "description": "Console log output support, with tailing ability.", "links": [], "name": "ConsoleOutput", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-consoles", "description": "Interactive Console support.", "links": [], "name": "Consoles", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-create-backup", "description": "Create a backup of a server.", "links": [], "name": "CreateBackup", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-create-server-ext", "description": "Extended support to the Create Server v1.1 API.", "links": [], "name": "Createserverext", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-deferred-delete", "description": "Instance deferred delete.", "links": [], "name": "DeferredDelete", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-evacuate", "description": "Enables server evacuation.", "links": [], "name": "Evacuate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-evacuate-find-host", "description": "Enables server evacuation without target host. 
Scheduler will select one to target.", "links": [], "name": "ExtendedEvacuateFindHost", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-floating-ips", "description": "Adds optional fixed_address to the add floating IP command.", "links": [], "name": "ExtendedFloatingIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-hypervisors", "description": "Extended hypervisors support.", "links": [], "name": "ExtendedHypervisors", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-networks", "description": "Adds additional fields to networks.", "links": [], "name": "ExtendedNetworks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-quotas", "description": "Adds ability for admins to delete quota and optionally force the update Quota command.", "links": [], "name": "ExtendedQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-rescue-with-image", "description": "Allow the user to specify the image to use for rescue.", "links": [], "name": "ExtendedRescueWithImage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-services", "description": "Extended services support.", "links": [], "name": "ExtendedServices", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-services-delete", "description": "Extended services deletion support.", "links": [], "name": "ExtendedServicesDelete", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-status", "description": "Extended Status support.", "links": [], "name": "ExtendedStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-volumes", "description": "Extended Volumes support.", "links": [], "name": "ExtendedVolumes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-fixed-ips", "description": "Fixed IPs support.", "links": [], "name": "FixedIPs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-access", "description": "Flavor access support.", "links": [], "name": "FlavorAccess", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-extra-specs", "description": "Flavors extra specs support.", "links": [], "name": "FlavorExtraSpecs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-manage", "description": "Flavor create/delete API support.", "links": [], "name": "FlavorManage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-rxtx", "description": "Support to show the rxtx status of a flavor.", "links": [], "name": "FlavorRxtx", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-swap", "description": "Support to show the swap status of a flavor.", "links": [], "name": "FlavorSwap", 
"namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ip-dns", "description": "Floating IP DNS support.", "links": [], "name": "FloatingIpDns", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ip-pools", "description": "Floating IPs support.", "links": [], "name": "FloatingIpPools", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ips", "description": "Floating IPs support.", "links": [], "name": "FloatingIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ips-bulk", "description": "Bulk handling of Floating IPs.", "links": [], "name": "FloatingIpsBulk", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-fping", "description": "Fping Management Extension.", "links": [], "name": "Fping", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hide-server-addresses", "description": "Support hiding server addresses in certain states.", "links": [], "name": "HideServerAddresses", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hosts", "description": "Admin-only host administration.", "links": [], "name": "Hosts", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hypervisor-status", "description": "Show hypervisor status.", "links": [], "name": "HypervisorStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hypervisors", "description": "Admin-only hypervisor administration.", "links": [], "name": "Hypervisors", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-instance-actions", "description": "View a log of actions and events taken on an instance.", "links": [], "name": "InstanceActions", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-instance_usage_audit_log", "description": "Admin-only Task Log Monitoring.", "links": [], "name": "OSInstanceUsageAuditLog", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-keypairs", "description": "Keypair Support.", "links": [], "name": "Keypairs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-lock-server", "description": "Enable lock/unlock server actions.", "links": [], "name": "LockServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-migrate-server", "description": "Enable migrate and live-migrate server actions.", "links": [], "name": "MigrateServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-migrations", "description": "Provide data on migrations.", "links": [], "name": "Migrations", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-multiple-create", "description": "Allow multiple create in the Create Server v2.1 API.", "links": [], "name": "MultipleCreate", "namespace": 
"http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-networks", "description": "Admin-only Network Management Extension.", "links": [], "name": "Networks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-networks-associate", "description": "Network association support.", "links": [], "name": "NetworkAssociationSupport", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-pause-server", "description": "Enable pause/unpause server actions.", "links": [], "name": "PauseServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-personality", "description": "Personality support.", "links": [], "name": "Personality", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-preserve-ephemeral-rebuild", "description": "Allow preservation of the ephemeral partition on rebuild.", "links": [], "name": "PreserveEphemeralOnRebuild", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-quota-class-sets", "description": "Quota classes management support.", "links": [], "name": "QuotaClasses", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-quota-sets", "description": "Quotas management support.", "links": [], "name": "Quotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-rescue", "description": "Instance rescue mode.", "links": [], "name": "Rescue", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-security-group-default-rules", "description": "Default rules for security group support.", "links": [], "name": "SecurityGroupDefaultRules", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-security-groups", "description": "Security group support.", "links": [], "name": "SecurityGroups", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-diagnostics", "description": "Allow Admins to view server diagnostics through server action.", "links": [], "name": "ServerDiagnostics", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-external-events", "description": "Server External Event Triggers.", "links": [], "name": "ServerExternalEvents", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-group-quotas", "description": "Adds quota support to server groups.", "links": [], "name": "ServerGroupQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-groups", "description": "Server group support.", "links": [], "name": "ServerGroups", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-list-multi-status", "description": "Allow to filter the servers by a set of status values.", "links": [], "name": "ServerListMultiStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": 
"os-server-password", "description": "Server password support.", "links": [], "name": "ServerPassword", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-sort-keys", "description": "Add sorting support in get Server v2 API.", "links": [], "name": "ServerSortKeys", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-start-stop", "description": "Start/Stop instance compute API support.", "links": [], "name": "ServerStartStop", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-services", "description": "Services support.", "links": [], "name": "Services", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-shelve", "description": "Instance shelve mode.", "links": [], "name": "Shelve", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-simple-tenant-usage", "description": "Simple tenant usage extension.", "links": [], "name": "SimpleTenantUsage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-suspend-server", "description": "Enable suspend/resume server actions.", "links": [], "name": "SuspendServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-tenant-networks", "description": "Tenant-based Network Management Extension.", "links": [], "name": "OSTenantNetworks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-used-limits", "description": "Provide data on limited resources that are being used.", "links": [], "name": "UsedLimits", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-used-limits-for-admin", "description": "Provide data to admin on limited resources used by other tenants.", "links": [], "name": "UsedLimitsForAdmin", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-user-data", "description": "Add user_data to the Create Server API.", "links": [], "name": "UserData", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-user-quotas", "description": "Project user quota support.", "links": [], "name": "UserQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-virtual-interfaces", "description": "Virtual interface support.", "links": [], "name": "VirtualInterfaces", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-volume-attachment-update", "description": "Support for updating a volume attachment.", "links": [], "name": "VolumeAttachmentUpdate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-volumes", "description": "Volumes support.", "links": [], "name": "Volumes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/api_samples/extension-info/extensions-list-resp.json0000664000175000017500000007560700000000000025626 0ustar00zuulzuul00000000000000{ "extensions": [ { "alias": "NMN", "description": "Multiple network support.", "links": [], "name": "Multinic", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-DCF", "description": "Disk Management Extension.", "links": [], "name": "DiskConfig", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-AZ", "description": "Extended Availability Zone support.", "links": [], "name": "ExtendedAvailabilityZone", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IMG-SIZE", "description": "Adds image size to image listings.", "links": [], "name": "ImageSize", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IPS", "description": "Adds type parameter to the ip list.", "links": [], "name": "ExtendedIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IPS-MAC", "description": "Adds mac address parameter to the ip list.", "links": [], "name": "ExtendedIpsMac", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-SRV-ATTR", "description": "Extended Server Attributes support.", "links": [], "name": "ExtendedServerAttributes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-STS", "description": "Extended Status support.", "links": [], "name": "ExtendedStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-FLV-DISABLED", "description": "Support to show the disabled status of a flavor.", "links": [], "name": "FlavorDisabled", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-FLV-EXT-DATA", "description": "Provide additional data for flavors.", "links": [], "name": "FlavorExtraData", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-SCH-HNT", "description": "Pass arbitrary key/value pairs to the scheduler.", "links": [], "name": "SchedulerHints", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-SRV-USG", "description": "Adds launched_at and terminated_at on Servers.", "links": [], "name": "ServerUsage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-access-ips", "description": "Access IPs support.", "links": [], "name": "AccessIPs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-admin-actions", "description": "Enable admin-only server actions\n\n Actions include: resetNetwork, injectNetworkInfo, os-resetState\n ", "links": [], "name": "AdminActions", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-admin-password", "description": "Admin password management support.", "links": [], "name": "AdminPassword", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-agents", 
"description": "Agents support.", "links": [], "name": "Agents", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-aggregates", "description": "Admin-only aggregate administration.", "links": [], "name": "Aggregates", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-assisted-volume-snapshots", "description": "Assisted volume snapshots.", "links": [], "name": "AssistedVolumeSnapshots", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-attach-interfaces", "description": "Attach interface support.", "links": [], "name": "AttachInterfaces", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-availability-zone", "description": "1. Add availability_zone to the Create Server API.\n 2. Add availability zones describing.\n ", "links": [], "name": "AvailabilityZone", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-baremetal-ext-status", "description": "Add extended status in Baremetal Nodes v2 API.", "links": [], "name": "BareMetalExtStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-baremetal-nodes", "description": "Admin-only bare-metal node administration.", "links": [], "name": "BareMetalNodes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-block-device-mapping", "description": "Block device mapping boot support.", "links": [], "name": "BlockDeviceMapping", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-block-device-mapping-v2-boot", "description": "Allow boot with the new BDM data format.", "links": [], "name": "BlockDeviceMappingV2Boot", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cell-capacities", "description": "Adding functionality to get cell capacities.", "links": [], "name": "CellCapacities", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cells", "description": "Enables cells-related functionality such as adding neighbor cells,\n listing neighbor cells, and getting the capabilities of the local cell.\n ", "links": [], "name": "Cells", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-certificates", "description": "Certificates support.", "links": [], "name": "Certificates", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cloudpipe", "description": "Adds actions to create cloudpipe instances.\n\n When running with the Vlan network mode, you need a mechanism to route\n from the public Internet to your vlans. This mechanism is known as a\n cloudpipe.\n\n At the time of creating this class, only OpenVPN is supported. 
Support for\n a SSH Bastion host is forthcoming.\n ", "links": [], "name": "Cloudpipe", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cloudpipe-update", "description": "Adds the ability to set the vpn ip/port for cloudpipe instances.", "links": [], "name": "CloudpipeUpdate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-config-drive", "description": "Config Drive Extension.", "links": [], "name": "ConfigDrive", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-console-auth-tokens", "description": "Console token authentication support.", "links": [], "name": "ConsoleAuthTokens", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-console-output", "description": "Console log output support, with tailing ability.", "links": [], "name": "ConsoleOutput", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-consoles", "description": "Interactive Console support.", "links": [], "name": "Consoles", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-create-backup", "description": "Create a backup of a server.", "links": [], "name": "CreateBackup", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-create-server-ext", "description": "Extended support to the Create Server v1.1 API.", "links": [], "name": "Createserverext", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-deferred-delete", "description": "Instance deferred delete.", "links": [], "name": "DeferredDelete", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-evacuate", "description": "Enables server evacuation.", "links": [], "name": "Evacuate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-evacuate-find-host", "description": "Enables server evacuation without target host. 
Scheduler will select one to target.", "links": [], "name": "ExtendedEvacuateFindHost", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-floating-ips", "description": "Adds optional fixed_address to the add floating IP command.", "links": [], "name": "ExtendedFloatingIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-hypervisors", "description": "Extended hypervisors support.", "links": [], "name": "ExtendedHypervisors", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-networks", "description": "Adds additional fields to networks.", "links": [], "name": "ExtendedNetworks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-quotas", "description": "Adds ability for admins to delete quota and optionally force the update Quota command.", "links": [], "name": "ExtendedQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-rescue-with-image", "description": "Allow the user to specify the image to use for rescue.", "links": [], "name": "ExtendedRescueWithImage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-services", "description": "Extended services support.", "links": [], "name": "ExtendedServices", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-services-delete", "description": "Extended services deletion support.", "links": [], "name": "ExtendedServicesDelete", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-status", "description": "Extended Status support.", "links": [], "name": "ExtendedStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-volumes", "description": "Extended Volumes support.", "links": [], "name": "ExtendedVolumes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-fixed-ips", "description": "Fixed IPs support.", "links": [], "name": "FixedIPs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-access", "description": "Flavor access support.", "links": [], "name": "FlavorAccess", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-extra-specs", "description": "Flavors extra specs support.", "links": [], "name": "FlavorExtraSpecs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-manage", "description": "Flavor create/delete API support.", "links": [], "name": "FlavorManage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-rxtx", "description": "Support to show the rxtx status of a flavor.", "links": [], "name": "FlavorRxtx", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-swap", "description": "Support to show the swap status of a flavor.", "links": [], "name": "FlavorSwap", 
"namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ip-dns", "description": "Floating IP DNS support.", "links": [], "name": "FloatingIpDns", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ip-pools", "description": "Floating IPs support.", "links": [], "name": "FloatingIpPools", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ips", "description": "Floating IPs support.", "links": [], "name": "FloatingIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ips-bulk", "description": "Bulk handling of Floating IPs.", "links": [], "name": "FloatingIpsBulk", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-fping", "description": "Fping Management Extension.", "links": [], "name": "Fping", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hide-server-addresses", "description": "Support hiding server addresses in certain states.", "links": [], "name": "HideServerAddresses", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hosts", "description": "Admin-only host administration.", "links": [], "name": "Hosts", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hypervisor-status", "description": "Show hypervisor status.", "links": [], "name": "HypervisorStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hypervisors", "description": "Admin-only hypervisor administration.", "links": [], "name": "Hypervisors", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-instance-actions", "description": "View a log of actions and events taken on an instance.", "links": [], "name": "InstanceActions", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-instance_usage_audit_log", "description": "Admin-only Task Log Monitoring.", "links": [], "name": "OSInstanceUsageAuditLog", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-keypairs", "description": "Keypair Support.", "links": [], "name": "Keypairs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-lock-server", "description": "Enable lock/unlock server actions.", "links": [], "name": "LockServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-migrate-server", "description": "Enable migrate and live-migrate server actions.", "links": [], "name": "MigrateServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-migrations", "description": "Provide data on migrations.", "links": [], "name": "Migrations", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-multiple-create", "description": "Allow multiple create in the Create Server v2.1 API.", "links": [], "name": "MultipleCreate", "namespace": 
"http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-networks", "description": "Admin-only Network Management Extension.", "links": [], "name": "Networks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-networks-associate", "description": "Network association support.", "links": [], "name": "NetworkAssociationSupport", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-pause-server", "description": "Enable pause/unpause server actions.", "links": [], "name": "PauseServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-personality", "description": "Personality support.", "links": [], "name": "Personality", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-preserve-ephemeral-rebuild", "description": "Allow preservation of the ephemeral partition on rebuild.", "links": [], "name": "PreserveEphemeralOnRebuild", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-quota-class-sets", "description": "Quota classes management support.", "links": [], "name": "QuotaClasses", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-quota-sets", "description": "Quotas management support.", "links": [], "name": "Quotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-rescue", "description": "Instance rescue mode.", "links": [], "name": "Rescue", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-security-group-default-rules", "description": "Default rules for security group support.", "links": [], "name": "SecurityGroupDefaultRules", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-security-groups", "description": "Security group support.", "links": [], "name": "SecurityGroups", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-diagnostics", "description": "Allow Admins to view server diagnostics through server action.", "links": [], "name": "ServerDiagnostics", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-external-events", "description": "Server External Event Triggers.", "links": [], "name": "ServerExternalEvents", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-group-quotas", "description": "Adds quota support to server groups.", "links": [], "name": "ServerGroupQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-groups", "description": "Server group support.", "links": [], "name": "ServerGroups", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-list-multi-status", "description": "Allow to filter the servers by a set of status values.", "links": [], "name": "ServerListMultiStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": 
"os-server-password", "description": "Server password support.", "links": [], "name": "ServerPassword", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-sort-keys", "description": "Add sorting support in get Server v2 API.", "links": [], "name": "ServerSortKeys", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-start-stop", "description": "Start/Stop instance compute API support.", "links": [], "name": "ServerStartStop", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-services", "description": "Services support.", "links": [], "name": "Services", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-shelve", "description": "Instance shelve mode.", "links": [], "name": "Shelve", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-simple-tenant-usage", "description": "Simple tenant usage extension.", "links": [], "name": "SimpleTenantUsage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-suspend-server", "description": "Enable suspend/resume server actions.", "links": [], "name": "SuspendServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-tenant-networks", "description": "Tenant-based Network Management Extension.", "links": [], "name": "OSTenantNetworks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-used-limits", "description": "Provide data on limited resources that are being used.", "links": [], "name": "UsedLimits", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-used-limits-for-admin", "description": "Provide data to admin on limited resources used by other tenants.", "links": [], "name": "UsedLimitsForAdmin", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-user-data", "description": "Add user_data to the Create Server API.", "links": [], "name": "UserData", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-user-quotas", "description": "Project user quota support.", "links": [], "name": "UserQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-virtual-interfaces", "description": "Virtual interface support.", "links": [], "name": "VirtualInterfaces", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-volume-attachment-update", "description": "Support for updating a volume attachment.", "links": [], "name": "VolumeAttachmentUpdate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-volumes", "description": "Volumes support.", "links": [], "name": "Volumes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" } ] } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.110472 nova-21.2.4/doc/api_samples/flavor-access/0000775000175000017500000000000000000000000020360 
5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-access/flavor-access-add-tenant-req.json0000664000175000017500000000010300000000000026577 0ustar00zuulzuul00000000000000{ "addTenantAccess": { "tenant": "fake_tenant" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-access/flavor-access-add-tenant-resp.json0000664000175000017500000000017200000000000026767 0ustar00zuulzuul00000000000000{ "flavor_access": [ { "flavor_id": "10", "tenant_id": "fake_tenant" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-access/flavor-access-list-resp.json0000664000175000017500000000017200000000000025723 0ustar00zuulzuul00000000000000{ "flavor_access": [ { "flavor_id": "10", "tenant_id": "fake_tenant" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-access/flavor-access-remove-tenant-req.json0000664000175000017500000000010600000000000027347 0ustar00zuulzuul00000000000000{ "removeTenantAccess": { "tenant": "fake_tenant" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-access/flavor-access-remove-tenant-resp.json0000664000175000017500000000004000000000000027526 0ustar00zuulzuul00000000000000{ "flavor_access": [ ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-access/flavor-create-req.json0000664000175000017500000000026600000000000024576 0ustar00zuulzuul00000000000000{ "flavor": { "name": "test_flavor", "ram": 1024, "vcpus": 2, "disk": 10, "id": "10", "os-flavor-access:is_public": false } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.110472 nova-21.2.4/doc/api_samples/flavor-access/v2.7/0000775000175000017500000000000000000000000021054 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-access/v2.7/flavor-access-add-tenant-req.json0000664000175000017500000000010300000000000027273 0ustar00zuulzuul00000000000000{ "addTenantAccess": { "tenant": "fake_tenant" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-access/v2.7/flavor-create-req.json0000664000175000017500000000026500000000000025271 0ustar00zuulzuul00000000000000{ "flavor": { "name": "test_flavor", "ram": 1024, "vcpus": 2, "disk": 10, "id": "10", "os-flavor-access:is_public": true } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.110472 nova-21.2.4/doc/api_samples/flavor-extra-specs/0000775000175000017500000000000000000000000021355 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-extra-specs/flavor-extra-specs-create-req.json0000664000175000017500000000013700000000000030024 0ustar00zuulzuul00000000000000{ "extra_specs": { "hw:cpu_policy": "shared", "hw:numa_nodes": "1" } } 
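The flavor-extra-specs-create-req.json body that closes the file above is the request payload for the flavor's os-extra_specs resource. The following sketch is not part of the archive; it only illustrates one way to send that body with Python's requests library, and the compute endpoint URL, token and flavor id are placeholder assumptions rather than values taken from these samples.

# Illustrative sketch only: COMPUTE_URL, TOKEN and FLAVOR_ID are placeholders,
# not values defined anywhere in this archive.
import requests

COMPUTE_URL = "http://openstack.example.com/v2.1"   # assumed compute endpoint
TOKEN = "<keystone-token>"                           # assumed pre-obtained Keystone token
FLAVOR_ID = "10"

# Body matches doc/api_samples/flavor-extra-specs/flavor-extra-specs-create-req.json
body = {"extra_specs": {"hw:cpu_policy": "shared", "hw:numa_nodes": "1"}}

resp = requests.post(
    f"{COMPUTE_URL}/flavors/{FLAVOR_ID}/os-extra_specs",
    json=body,
    headers={"X-Auth-Token": TOKEN},
)
# On success the specs are echoed back, matching flavor-extra-specs-create-resp.json
print(resp.json())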
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-extra-specs/flavor-extra-specs-create-resp.json0000664000175000017500000000013700000000000030206 0ustar00zuulzuul00000000000000{ "extra_specs": { "hw:cpu_policy": "shared", "hw:numa_nodes": "1" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-extra-specs/flavor-extra-specs-get-resp.json0000664000175000017500000000003500000000000027517 0ustar00zuulzuul00000000000000{ "hw:numa_nodes": "1" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-extra-specs/flavor-extra-specs-list-resp.json0000664000175000017500000000013700000000000027716 0ustar00zuulzuul00000000000000{ "extra_specs": { "hw:cpu_policy": "shared", "hw:numa_nodes": "1" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-extra-specs/flavor-extra-specs-update-req.json0000664000175000017500000000003500000000000030040 0ustar00zuulzuul00000000000000{ "hw:numa_nodes": "2" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-extra-specs/flavor-extra-specs-update-resp.json0000664000175000017500000000003500000000000030222 0ustar00zuulzuul00000000000000{ "hw:numa_nodes": "2" } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.110472 nova-21.2.4/doc/api_samples/flavor-manage/0000775000175000017500000000000000000000000020347 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-manage/flavor-create-post-req.json0000664000175000017500000000024500000000000025545 0ustar00zuulzuul00000000000000{ "flavor": { "name": "test_flavor", "ram": 1024, "vcpus": 2, "disk": 10, "id": "10", "rxtx_factor": 2.0 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-manage/flavor-create-post-resp.json0000664000175000017500000000123200000000000025724 0ustar00zuulzuul00000000000000{ "flavor": { "OS-FLV-DISABLED:disabled": false, "disk": 10, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "10", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/10", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/10", "rel": "bookmark" } ], "name": "test_flavor", "ram": 1024, "swap": "", "rxtx_factor": 2.0, "vcpus": 2 } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.110472 nova-21.2.4/doc/api_samples/flavor-manage/v2.55/0000775000175000017500000000000000000000000021126 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-manage/v2.55/flavor-create-post-req.json0000664000175000017500000000032000000000000026316 0ustar00zuulzuul00000000000000{ "flavor": { "name": "test_flavor", "ram": 1024, "vcpus": 2, "disk": 10, "id": "10", "rxtx_factor": 2.0, "description": "test description" } } 
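The v2.55 flavor-create-post-req.json body shown above includes a description field, which the compute API only accepts at microversion 2.55 or later. As a minimal illustrative sketch (not part of the archive), posting it to the flavors collection could look like the following; the endpoint, token and microversion header value are assumptions chosen for the example.

# Illustrative sketch only; endpoint and token are placeholders.
import requests

COMPUTE_URL = "http://openstack.example.com/v2.1"
TOKEN = "<keystone-token>"

# Body matches doc/api_samples/flavor-manage/v2.55/flavor-create-post-req.json;
# the "description" field requires compute microversion >= 2.55.
body = {
    "flavor": {
        "name": "test_flavor",
        "ram": 1024,
        "vcpus": 2,
        "disk": 10,
        "id": "10",
        "rxtx_factor": 2.0,
        "description": "test description",
    }
}

resp = requests.post(
    f"{COMPUTE_URL}/flavors",
    json=body,
    headers={"X-Auth-Token": TOKEN, "X-OpenStack-Nova-API-Version": "2.55"},
)
# A successful response has the shape of v2.55/flavor-create-post-resp.json
print(resp.json())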
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-manage/v2.55/flavor-create-post-resp.json0000664000175000017500000000130500000000000026504 0ustar00zuulzuul00000000000000{ "flavor": { "OS-FLV-DISABLED:disabled": false, "disk": 10, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "10", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/10", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/10", "rel": "bookmark" } ], "name": "test_flavor", "ram": 1024, "swap": "", "rxtx_factor": 2.0, "vcpus": 2, "description": "test description" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-manage/v2.55/flavor-update-req.json0000664000175000017500000000010700000000000025355 0ustar00zuulzuul00000000000000{ "flavor": { "description": "updated description" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-manage/v2.55/flavor-update-resp.json0000664000175000017500000000130100000000000025534 0ustar00zuulzuul00000000000000{ "flavor": { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "1", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/flavors/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny", "ram": 512, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": "updated description" } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.114472 nova-21.2.4/doc/api_samples/flavor-manage/v2.61/0000775000175000017500000000000000000000000021123 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-manage/v2.61/flavor-create-post-req.json0000664000175000017500000000032000000000000026313 0ustar00zuulzuul00000000000000{ "flavor": { "name": "test_flavor", "ram": 1024, "vcpus": 2, "disk": 10, "id": "10", "rxtx_factor": 2.0, "description": "test description" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-manage/v2.61/flavor-create-post-resp.json0000664000175000017500000000134000000000000026500 0ustar00zuulzuul00000000000000{ "flavor": { "OS-FLV-DISABLED:disabled": false, "disk": 10, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "10", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/10", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/10", "rel": "bookmark" } ], "name": "test_flavor", "ram": 1024, "swap": "", "rxtx_factor": 2.0, "vcpus": 2, "description": "test description", "extra_specs": {} } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-manage/v2.61/flavor-update-req.json0000664000175000017500000000010700000000000025352 0ustar00zuulzuul00000000000000{ "flavor": { "description": "updated description" } } 
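The v2.61 flavor-update-req.json body above changes only the flavor description via a PUT on the flavor resource; the description field again needs microversion 2.55+, and from 2.61 the response also carries extra_specs (see the flavor-update-resp.json sample that follows). An illustrative sketch under the same placeholder assumptions as the earlier examples:

# Illustrative sketch only; endpoint, token and flavor id are placeholders.
import requests

COMPUTE_URL = "http://openstack.example.com/v2.1"
TOKEN = "<keystone-token>"
FLAVOR_ID = "1"

# Body matches doc/api_samples/flavor-manage/v2.61/flavor-update-req.json
body = {"flavor": {"description": "updated description"}}

resp = requests.put(
    f"{COMPUTE_URL}/flavors/{FLAVOR_ID}",
    json=body,
    headers={
        "X-Auth-Token": TOKEN,
        # Pin a microversion that supports description (>= 2.55); 2.61 also
        # returns extra_specs in the flavor body.
        "X-OpenStack-Nova-API-Version": "2.61",
    },
)
# A successful response has the shape of v2.61/flavor-update-resp.json
print(resp.json())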
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-manage/v2.61/flavor-update-resp.json0000664000175000017500000000133400000000000025537 0ustar00zuulzuul00000000000000{ "flavor": { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "1", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/flavors/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny", "ram": 512, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": "updated description", "extra_specs": {} } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.114472 nova-21.2.4/doc/api_samples/flavor-manage/v2.75/0000775000175000017500000000000000000000000021130 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-manage/v2.75/flavor-create-post-req.json0000664000175000017500000000032000000000000026320 0ustar00zuulzuul00000000000000{ "flavor": { "name": "test_flavor", "ram": 1024, "vcpus": 2, "disk": 10, "id": "10", "rxtx_factor": 2.0, "description": "test description" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-manage/v2.75/flavor-create-post-resp.json0000664000175000017500000000133700000000000026513 0ustar00zuulzuul00000000000000{ "flavor": { "OS-FLV-DISABLED:disabled": false, "disk": 10, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "10", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/10", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/10", "rel": "bookmark" } ], "name": "test_flavor", "ram": 1024, "swap": 0, "rxtx_factor": 2.0, "vcpus": 2, "description": "test description", "extra_specs": {} } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-manage/v2.75/flavor-update-req.json0000664000175000017500000000010700000000000025357 0ustar00zuulzuul00000000000000{ "flavor": { "description": "updated description" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavor-manage/v2.75/flavor-update-resp.json0000664000175000017500000000133300000000000025543 0ustar00zuulzuul00000000000000{ "flavor": { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "1", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/flavors/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1, "rxtx_factor": 1.0, "description": "updated description", "extra_specs": {} } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.114472 nova-21.2.4/doc/api_samples/flavors/0000775000175000017500000000000000000000000017304 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 
xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavors/flavor-get-resp.json0000664000175000017500000000122100000000000023210 0ustar00zuulzuul00000000000000{ "flavor": { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "1", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny", "ram": 512, "swap": "", "vcpus": 1, "rxtx_factor": 1.0 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavors/flavors-detail-resp.json0000664000175000017500000001053000000000000024061 0ustar00zuulzuul00000000000000{ "flavors": [ { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "1", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny", "ram": 512, "swap": "", "vcpus": 1, "rxtx_factor": 1.0 }, { "OS-FLV-DISABLED:disabled": false, "disk": 20, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "2", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/2", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/2", "rel": "bookmark" } ], "name": "m1.small", "ram": 2048, "swap": "", "vcpus": 1, "rxtx_factor": 1.0 }, { "OS-FLV-DISABLED:disabled": false, "disk": 40, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "3", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/3", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/3", "rel": "bookmark" } ], "name": "m1.medium", "ram": 4096, "swap": "", "vcpus": 2, "rxtx_factor": 1.0 }, { "OS-FLV-DISABLED:disabled": false, "disk": 80, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "4", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/4", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/4", "rel": "bookmark" } ], "name": "m1.large", "ram": 8192, "swap": "", "vcpus": 4, "rxtx_factor": 1.0 }, { "OS-FLV-DISABLED:disabled": false, "disk": 160, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "5", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/5", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/5", "rel": "bookmark" } ], "name": "m1.xlarge", "ram": 16384, "swap": "", "vcpus": 8, "rxtx_factor": 1.0 }, { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "6", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/6", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/6", "rel": "bookmark" } ], "name": "m1.tiny.specs", "ram": 512, "swap": "", "vcpus": 1, "rxtx_factor": 1.0 } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 
xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavors/flavors-list-resp.json0000664000175000017500000000542200000000000023576 0ustar00zuulzuul00000000000000{ "flavors": [ { "id": "1", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny" }, { "id": "2", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/2", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/2", "rel": "bookmark" } ], "name": "m1.small" }, { "id": "3", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/3", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/3", "rel": "bookmark" } ], "name": "m1.medium" }, { "id": "4", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/4", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/4", "rel": "bookmark" } ], "name": "m1.large" }, { "id": "5", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/5", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/5", "rel": "bookmark" } ], "name": "m1.xlarge" }, { "id": "6", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/6", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/6", "rel": "bookmark" } ], "name": "m1.tiny.specs" } ] } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.114472 nova-21.2.4/doc/api_samples/flavors/v2.55/0000775000175000017500000000000000000000000020063 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavors/v2.55/flavor-get-resp.json0000664000175000017500000000131300000000000023771 0ustar00zuulzuul00000000000000{ "flavor": { "OS-FLV-DISABLED:disabled": false, "disk": 20, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "7", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/7", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/7", "rel": "bookmark" } ], "name": "m1.small.description", "ram": 2048, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": "test description" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavors/v2.55/flavors-detail-resp.json0000664000175000017500000001247000000000000024645 0ustar00zuulzuul00000000000000{ "flavors": [ { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "1", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny", "ram": 512, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": null }, { "OS-FLV-DISABLED:disabled": false, "disk": 20, "OS-FLV-EXT-DATA:ephemeral": 0, 
"os-flavor-access:is_public": true, "id": "2", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/2", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/2", "rel": "bookmark" } ], "name": "m1.small", "ram": 2048, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": null }, { "OS-FLV-DISABLED:disabled": false, "disk": 40, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "3", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/3", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/3", "rel": "bookmark" } ], "name": "m1.medium", "ram": 4096, "swap": "", "vcpus": 2, "rxtx_factor": 1.0, "description": null }, { "OS-FLV-DISABLED:disabled": false, "disk": 80, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "4", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/4", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/4", "rel": "bookmark" } ], "name": "m1.large", "ram": 8192, "swap": "", "vcpus": 4, "rxtx_factor": 1.0, "description": null }, { "OS-FLV-DISABLED:disabled": false, "disk": 160, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "5", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/5", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/5", "rel": "bookmark" } ], "name": "m1.xlarge", "ram": 16384, "swap": "", "vcpus": 8, "rxtx_factor": 1.0, "description": null }, { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "6", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/6", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/6", "rel": "bookmark" } ], "name": "m1.tiny.specs", "ram": 512, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": null }, { "OS-FLV-DISABLED:disabled": false, "disk": 20, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "7", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/7", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/7", "rel": "bookmark" } ], "name": "m1.small.description", "ram": 2048, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": "test description" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavors/v2.55/flavors-list-resp.json0000664000175000017500000000674600000000000024367 0ustar00zuulzuul00000000000000{ "flavors": [ { "id": "1", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny", "description": null }, { "id": "2", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/2", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/2", "rel": "bookmark" } ], "name": "m1.small", "description": null }, { "id": "3", "links": [ { "href": 
"http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/3", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/3", "rel": "bookmark" } ], "name": "m1.medium", "description": null }, { "id": "4", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/4", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/4", "rel": "bookmark" } ], "name": "m1.large", "description": null }, { "id": "5", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/5", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/5", "rel": "bookmark" } ], "name": "m1.xlarge", "description": null }, { "id": "6", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/6", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/6", "rel": "bookmark" } ], "name": "m1.tiny.specs", "description": null }, { "id": "7", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/7", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/7", "rel": "bookmark" } ], "name": "m1.small.description", "description": "test description" } ] } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.114472 nova-21.2.4/doc/api_samples/flavors/v2.61/0000775000175000017500000000000000000000000020060 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavors/v2.61/flavor-get-resp.json0000664000175000017500000000146700000000000024000 0ustar00zuulzuul00000000000000{ "flavor": { "OS-FLV-DISABLED:disabled": false, "disk": 20, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "7", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/7", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/7", "rel": "bookmark" } ], "name": "m1.small.description", "ram": 2048, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": "test description", "extra_specs": { "hw:cpu_policy": "shared", "hw:numa_nodes": "1" } } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavors/v2.61/flavors-detail-resp.json0000664000175000017500000001324000000000000024636 0ustar00zuulzuul00000000000000{ "flavors": [ { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "1", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny", "ram": 512, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": null, "extra_specs": {} }, { "OS-FLV-DISABLED:disabled": false, "disk": 20, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "2", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/2", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/2", "rel": "bookmark" } ], "name": "m1.small", 
"ram": 2048, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": null, "extra_specs": {} }, { "OS-FLV-DISABLED:disabled": false, "disk": 40, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "3", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/3", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/3", "rel": "bookmark" } ], "name": "m1.medium", "ram": 4096, "swap": "", "vcpus": 2, "rxtx_factor": 1.0, "description": null, "extra_specs": {} }, { "OS-FLV-DISABLED:disabled": false, "disk": 80, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "4", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/4", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/4", "rel": "bookmark" } ], "name": "m1.large", "ram": 8192, "swap": "", "vcpus": 4, "rxtx_factor": 1.0, "description": null, "extra_specs": {} }, { "OS-FLV-DISABLED:disabled": false, "disk": 160, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "5", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/5", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/5", "rel": "bookmark" } ], "name": "m1.xlarge", "ram": 16384, "swap": "", "vcpus": 8, "rxtx_factor": 1.0, "description": null, "extra_specs": {} }, { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "6", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/6", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/6", "rel": "bookmark" } ], "name": "m1.tiny.specs", "ram": 512, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": null, "extra_specs": { "hw:numa_nodes": "1" } }, { "OS-FLV-DISABLED:disabled": false, "disk": 20, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "7", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/7", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/7", "rel": "bookmark" } ], "name": "m1.small.description", "ram": 2048, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": "test description", "extra_specs": { "hw:cpu_policy": "shared", "hw:numa_nodes": "1" } } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavors/v2.61/flavors-list-resp.json0000664000175000017500000000674600000000000024364 0ustar00zuulzuul00000000000000{ "flavors": [ { "id": "1", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny", "description": null }, { "id": "2", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/2", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/2", "rel": "bookmark" } ], "name": "m1.small", "description": null }, { "id": "3", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/3", "rel": "self" }, { "href": 
"http://openstack.example.com/6f70656e737461636b20342065766572/flavors/3", "rel": "bookmark" } ], "name": "m1.medium", "description": null }, { "id": "4", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/4", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/4", "rel": "bookmark" } ], "name": "m1.large", "description": null }, { "id": "5", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/5", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/5", "rel": "bookmark" } ], "name": "m1.xlarge", "description": null }, { "id": "6", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/6", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/6", "rel": "bookmark" } ], "name": "m1.tiny.specs", "description": null }, { "id": "7", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/7", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/7", "rel": "bookmark" } ], "name": "m1.small.description", "description": "test description" } ] } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.114472 nova-21.2.4/doc/api_samples/flavors/v2.75/0000775000175000017500000000000000000000000020065 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavors/v2.75/flavor-get-resp.json0000664000175000017500000000146600000000000024004 0ustar00zuulzuul00000000000000{ "flavor": { "OS-FLV-DISABLED:disabled": false, "disk": 20, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "7", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/7", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/7", "rel": "bookmark" } ], "name": "m1.small.description", "ram": 2048, "swap": 0, "vcpus": 1, "rxtx_factor": 1.0, "description": "test description", "extra_specs": { "hw:cpu_policy": "shared", "hw:numa_nodes": "1" } } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavors/v2.75/flavors-detail-resp.json0000664000175000017500000001323100000000000024643 0ustar00zuulzuul00000000000000{ "flavors": [ { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "1", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1, "rxtx_factor": 1.0, "description": null, "extra_specs": {} }, { "OS-FLV-DISABLED:disabled": false, "disk": 20, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "2", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/2", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/2", "rel": "bookmark" } ], "name": "m1.small", "ram": 2048, "swap": 0, "vcpus": 1, "rxtx_factor": 1.0, "description": null, "extra_specs": {} }, { 
"OS-FLV-DISABLED:disabled": false, "disk": 40, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "3", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/3", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/3", "rel": "bookmark" } ], "name": "m1.medium", "ram": 4096, "swap": 0, "vcpus": 2, "rxtx_factor": 1.0, "description": null, "extra_specs": {} }, { "OS-FLV-DISABLED:disabled": false, "disk": 80, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "4", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/4", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/4", "rel": "bookmark" } ], "name": "m1.large", "ram": 8192, "swap": 0, "vcpus": 4, "rxtx_factor": 1.0, "description": null, "extra_specs": {} }, { "OS-FLV-DISABLED:disabled": false, "disk": 160, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "5", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/5", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/5", "rel": "bookmark" } ], "name": "m1.xlarge", "ram": 16384, "swap": 0, "vcpus": 8, "rxtx_factor": 1.0, "description": null, "extra_specs": {} }, { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "6", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/6", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/6", "rel": "bookmark" } ], "name": "m1.tiny.specs", "ram": 512, "swap": 0, "vcpus": 1, "rxtx_factor": 1.0, "description": null, "extra_specs": { "hw:numa_nodes": "1" } }, { "OS-FLV-DISABLED:disabled": false, "disk": 20, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "7", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/7", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/7", "rel": "bookmark" } ], "name": "m1.small.description", "ram": 2048, "swap": 0, "vcpus": 1, "rxtx_factor": 1.0, "description": "test description", "extra_specs": { "hw:cpu_policy": "shared", "hw:numa_nodes": "1" } } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/flavors/v2.75/flavors-list-resp.json0000664000175000017500000000674600000000000024371 0ustar00zuulzuul00000000000000{ "flavors": [ { "id": "1", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny", "description": null }, { "id": "2", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/2", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/2", "rel": "bookmark" } ], "name": "m1.small", "description": null }, { "id": "3", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/3", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/3", "rel": "bookmark" } ], "name": "m1.medium", "description": 
null }, { "id": "4", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/4", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/4", "rel": "bookmark" } ], "name": "m1.large", "description": null }, { "id": "5", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/5", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/5", "rel": "bookmark" } ], "name": "m1.xlarge", "description": null }, { "id": "6", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/6", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/6", "rel": "bookmark" } ], "name": "m1.tiny.specs", "description": null }, { "id": "7", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/flavors/7", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/7", "rel": "bookmark" } ], "name": "m1.small.description", "description": "test description" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1184719 nova-21.2.4/doc/api_samples/images/0000775000175000017500000000000000000000000017075 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/images/image-get-resp.json0000664000175000017500000000226600000000000022604 0ustar00zuulzuul00000000000000{ "image": { "OS-DCF:diskConfig": "AUTO", "OS-EXT-IMG-SIZE:size": "74185822", "created": "2011-01-01T01:02:03Z", "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "metadata": { "architecture": "x86_64", "auto_disk_config": "True", "kernel_id": "nokernel", "ramdisk_id": "nokernel" }, "minDisk": 0, "minRam": 0, "name": "fakeimage7", "progress": 100, "status": "ACTIVE", "updated": "2011-01-01T01:02:03Z" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/images/image-meta-key-get.json0000664000175000017500000000006700000000000023344 0ustar00zuulzuul00000000000000{ "meta": { "kernel_id": "nokernel" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/images/image-meta-key-put-req.json0000664000175000017500000000007400000000000024160 0ustar00zuulzuul00000000000000{ "meta": { "auto_disk_config": "False" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/images/image-meta-key-put-resp.json0000664000175000017500000000007300000000000024341 0ustar00zuulzuul00000000000000{ "meta": { "auto_disk_config": "False" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/api_samples/images/image-metadata-get-resp.json0000664000175000017500000000024300000000000024353 0ustar00zuulzuul00000000000000{ "metadata": { "architecture": "x86_64", "auto_disk_config": "True", "kernel_id": "nokernel", "ramdisk_id": "nokernel" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/images/image-metadata-post-req.json0000664000175000017500000000013100000000000024373 0ustar00zuulzuul00000000000000{ "metadata": { "kernel_id": "False", "Label": "UpdatedImage" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/images/image-metadata-post-resp.json0000664000175000017500000000030100000000000024554 0ustar00zuulzuul00000000000000{ "metadata": { "Label": "UpdatedImage", "architecture": "x86_64", "auto_disk_config": "True", "kernel_id": "False", "ramdisk_id": "nokernel" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/images/image-metadata-put-req.json0000664000175000017500000000013200000000000024217 0ustar00zuulzuul00000000000000{ "metadata": { "auto_disk_config": "True", "Label": "Changed" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/images/image-metadata-put-resp.json0000664000175000017500000000013200000000000024401 0ustar00zuulzuul00000000000000{ "metadata": { "Label": "Changed", "auto_disk_config": "True" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/images/images-details-get-resp.json0000664000175000017500000002114600000000000024410 0ustar00zuulzuul00000000000000{ "images": [ { "OS-DCF:diskConfig": "AUTO", "OS-EXT-IMG-SIZE:size": "74185822", "created": "2011-01-01T01:02:03Z", "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "metadata": { "architecture": "x86_64", "auto_disk_config": "True", "kernel_id": "nokernel", "ramdisk_id": "nokernel" }, "minDisk": 0, "minRam": 0, "name": "fakeimage7", "progress": 100, "status": "ACTIVE", "updated": "2011-01-01T01:02:03Z" }, { "OS-EXT-IMG-SIZE:size": "74185822", "created": "2011-01-01T01:02:03Z", "id": "155d900f-4e14-4e4c-a73d-069cbf4541e6", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/images/155d900f-4e14-4e4c-a73d-069cbf4541e6", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/155d900f-4e14-4e4c-a73d-069cbf4541e6", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/155d900f-4e14-4e4c-a73d-069cbf4541e6", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "metadata": { "architecture": "x86_64", "kernel_id": "nokernel", "ramdisk_id": "nokernel" }, "minDisk": 0, "minRam": 0, "name": "fakeimage123456", "progress": 100, "status": "ACTIVE", "updated": "2011-01-01T01:02:03Z" }, { "OS-EXT-IMG-SIZE:size": "74185822", 
"created": "2011-01-01T01:02:03Z", "id": "a2459075-d96c-40d5-893e-577ff92e721c", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/images/a2459075-d96c-40d5-893e-577ff92e721c", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/a2459075-d96c-40d5-893e-577ff92e721c", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/a2459075-d96c-40d5-893e-577ff92e721c", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "metadata": { "kernel_id": "nokernel", "ramdisk_id": "nokernel" }, "minDisk": 0, "minRam": 0, "name": "fakeimage123456", "progress": 100, "status": "ACTIVE", "updated": "2011-01-01T01:02:03Z" }, { "OS-DCF:diskConfig": "MANUAL", "OS-EXT-IMG-SIZE:size": "74185822", "created": "2011-01-01T01:02:03Z", "id": "a440c04b-79fa-479c-bed1-0b816eaec379", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/images/a440c04b-79fa-479c-bed1-0b816eaec379", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/a440c04b-79fa-479c-bed1-0b816eaec379", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/a440c04b-79fa-479c-bed1-0b816eaec379", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "metadata": { "architecture": "x86_64", "auto_disk_config": "False", "kernel_id": "nokernel", "ramdisk_id": "nokernel" }, "minDisk": 0, "minRam": 0, "name": "fakeimage6", "progress": 100, "status": "ACTIVE", "updated": "2011-01-01T01:02:03Z" }, { "OS-EXT-IMG-SIZE:size": "74185822", "created": "2011-01-01T01:02:03Z", "id": "c905cedb-7281-47e4-8a62-f26bc5fc4c77", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/images/c905cedb-7281-47e4-8a62-f26bc5fc4c77", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/c905cedb-7281-47e4-8a62-f26bc5fc4c77", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/c905cedb-7281-47e4-8a62-f26bc5fc4c77", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "metadata": { "kernel_id": "155d900f-4e14-4e4c-a73d-069cbf4541e6", "ramdisk_id": null }, "minDisk": 0, "minRam": 0, "name": "fakeimage123456", "progress": 100, "status": "ACTIVE", "updated": "2011-01-01T01:02:03Z" }, { "OS-EXT-IMG-SIZE:size": "74185822", "created": "2011-01-01T01:02:03Z", "id": "cedef40a-ed67-4d10-800e-17455edce175", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/images/cedef40a-ed67-4d10-800e-17455edce175", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/cedef40a-ed67-4d10-800e-17455edce175", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/cedef40a-ed67-4d10-800e-17455edce175", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "metadata": { "kernel_id": "nokernel", "ramdisk_id": "nokernel" }, "minDisk": 0, "minRam": 0, "name": "fakeimage123456", "progress": 100, "status": "ACTIVE", "updated": "2011-01-01T01:02:03Z" }, { "OS-EXT-IMG-SIZE:size": "74185822", "created": "2011-01-01T01:02:03Z", "id": "76fa36fc-c930-4bf3-8c8a-ea2a2420deb6", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/images/76fa36fc-c930-4bf3-8c8a-ea2a2420deb6", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/76fa36fc-c930-4bf3-8c8a-ea2a2420deb6", "rel": "bookmark" }, { "href": 
"http://glance.openstack.example.com/images/76fa36fc-c930-4bf3-8c8a-ea2a2420deb6", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "metadata": { "architecture": "x86_64", "kernel_id": "nokernel", "ramdisk_id": "nokernel" }, "minDisk": 0, "minRam": 0, "name": "fakeimage123456", "progress": 100, "status": "ACTIVE", "updated": "2011-01-01T01:02:03Z" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/images/images-list-get-resp.json0000664000175000017500000001325400000000000023737 0ustar00zuulzuul00000000000000{ "images": [ { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "name": "fakeimage7" }, { "id": "155d900f-4e14-4e4c-a73d-069cbf4541e6", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/images/155d900f-4e14-4e4c-a73d-069cbf4541e6", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/155d900f-4e14-4e4c-a73d-069cbf4541e6", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/155d900f-4e14-4e4c-a73d-069cbf4541e6", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "name": "fakeimage123456" }, { "id": "a2459075-d96c-40d5-893e-577ff92e721c", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/images/a2459075-d96c-40d5-893e-577ff92e721c", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/a2459075-d96c-40d5-893e-577ff92e721c", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/a2459075-d96c-40d5-893e-577ff92e721c", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "name": "fakeimage123456" }, { "id": "a440c04b-79fa-479c-bed1-0b816eaec379", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/images/a440c04b-79fa-479c-bed1-0b816eaec379", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/a440c04b-79fa-479c-bed1-0b816eaec379", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/a440c04b-79fa-479c-bed1-0b816eaec379", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "name": "fakeimage6" }, { "id": "c905cedb-7281-47e4-8a62-f26bc5fc4c77", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/images/c905cedb-7281-47e4-8a62-f26bc5fc4c77", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/c905cedb-7281-47e4-8a62-f26bc5fc4c77", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/c905cedb-7281-47e4-8a62-f26bc5fc4c77", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "name": "fakeimage123456" }, { "id": "cedef40a-ed67-4d10-800e-17455edce175", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/images/cedef40a-ed67-4d10-800e-17455edce175", "rel": "self" }, { "href": 
"http://openstack.example.com/6f70656e737461636b20342065766572/images/cedef40a-ed67-4d10-800e-17455edce175", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/cedef40a-ed67-4d10-800e-17455edce175", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "name": "fakeimage123456" }, { "id": "76fa36fc-c930-4bf3-8c8a-ea2a2420deb6", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/images/76fa36fc-c930-4bf3-8c8a-ea2a2420deb6", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/76fa36fc-c930-4bf3-8c8a-ea2a2420deb6", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/76fa36fc-c930-4bf3-8c8a-ea2a2420deb6", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "name": "fakeimage123456" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1184719 nova-21.2.4/doc/api_samples/limits/0000775000175000017500000000000000000000000017131 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/limits/limit-get-resp.json0000664000175000017500000000141200000000000022664 0ustar00zuulzuul00000000000000{ "limits": { "absolute": { "maxImageMeta": 128, "maxPersonality": 5, "maxPersonalitySize": 10240, "maxSecurityGroupRules": -1, "maxSecurityGroups": -1, "maxServerMeta": 128, "maxTotalCores": 20, "maxTotalFloatingIps": -1, "maxTotalInstances": 10, "maxTotalKeypairs": 100, "maxTotalRAMSize": 51200, "maxServerGroups": 10, "maxServerGroupMembers": 10, "totalCoresUsed": 0, "totalInstancesUsed": 0, "totalRAMUsed": 0, "totalSecurityGroupsUsed": 0, "totalFloatingIpsUsed": 0, "totalServerGroupsUsed": 0 }, "rate": [] } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1184719 nova-21.2.4/doc/api_samples/limits/v2.36/0000775000175000017500000000000000000000000017707 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/limits/v2.36/limit-get-resp.json0000664000175000017500000000110400000000000023440 0ustar00zuulzuul00000000000000{ "limits": { "absolute": { "maxImageMeta": 128, "maxPersonality": 5, "maxPersonalitySize": 10240, "maxServerMeta": 128, "maxTotalCores": 20, "maxTotalInstances": 10, "maxTotalKeypairs": 100, "maxTotalRAMSize": 51200, "maxServerGroups": 10, "maxServerGroupMembers": 10, "totalCoresUsed": 0, "totalInstancesUsed": 0, "totalRAMUsed": 0, "totalServerGroupsUsed": 0 }, "rate": [] } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1184719 nova-21.2.4/doc/api_samples/limits/v2.39/0000775000175000017500000000000000000000000017712 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/limits/v2.39/limit-get-resp.json0000664000175000017500000000104300000000000023445 0ustar00zuulzuul00000000000000{ "limits": { "absolute": { "maxPersonality": 5, "maxPersonalitySize": 10240, "maxServerMeta": 128, "maxTotalCores": 20, "maxTotalInstances": 10, "maxTotalKeypairs": 100, "maxTotalRAMSize": 51200, "maxServerGroups": 10, "maxServerGroupMembers": 10, "totalCoresUsed": 0, "totalInstancesUsed": 0, "totalRAMUsed": 0, "totalServerGroupsUsed": 0 }, "rate": [] } } 
././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1184719 nova-21.2.4/doc/api_samples/limits/v2.57/0000775000175000017500000000000000000000000017712 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/limits/v2.57/limit-get-resp.json0000664000175000017500000000073100000000000023450 0ustar00zuulzuul00000000000000{ "limits": { "absolute": { "maxServerMeta": 128, "maxTotalCores": 20, "maxTotalInstances": 10, "maxTotalKeypairs": 100, "maxTotalRAMSize": 51200, "maxServerGroups": 10, "maxServerGroupMembers": 10, "totalCoresUsed": 0, "totalInstancesUsed": 0, "totalRAMUsed": 0, "totalServerGroupsUsed": 0 }, "rate": [] } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1184719 nova-21.2.4/doc/api_samples/os-admin-actions/0000775000175000017500000000000000000000000020775 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-admin-actions/admin-actions-inject-network-info.json0000664000175000017500000000004200000000000030304 0ustar00zuulzuul00000000000000{ "injectNetworkInfo": null } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-admin-actions/admin-actions-reset-network.json0000664000175000017500000000003500000000000027223 0ustar00zuulzuul00000000000000{ "resetNetwork": null } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-admin-actions/admin-actions-reset-server-state.json0000664000175000017500000000007300000000000030160 0ustar00zuulzuul00000000000000{ "os-resetState": { "state": "active" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1184719 nova-21.2.4/doc/api_samples/os-admin-password/0000775000175000017500000000000000000000000021177 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-admin-password/admin-password-change-password.json0000664000175000017500000000007700000000000030111 0ustar00zuulzuul00000000000000{ "changePassword" : { "adminPass" : "foo" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1184719 nova-21.2.4/doc/api_samples/os-agents/0000775000175000017500000000000000000000000017530 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-agents/agent-post-req.json0000664000175000017500000000035700000000000023276 0ustar00zuulzuul00000000000000{ "agent": { "hypervisor": "xen", "os": "os", "architecture": "x86", "version": "8.0", "md5hash": "add6bb58e139be103324d04d82d8f545", "url": "http://example.com/path/to/resource" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-agents/agent-post-resp.json0000664000175000017500000000040600000000000023453 0ustar00zuulzuul00000000000000{ "agent": { "agent_id": 1, "architecture": "x86", "hypervisor": "xen", "md5hash": "add6bb58e139be103324d04d82d8f545", "os": "os", "url": 
"http://example.com/path/to/resource", "version": "8.0" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-agents/agent-update-put-req.json0000664000175000017500000000023600000000000024375 0ustar00zuulzuul00000000000000{ "para": { "url": "http://example.com/path/to/resource", "md5hash": "add6bb58e139be103324d04d82d8f545", "version": "7.0" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-agents/agent-update-put-resp.json0000664000175000017500000000027000000000000024555 0ustar00zuulzuul00000000000000{ "agent": { "agent_id": "1", "md5hash": "add6bb58e139be103324d04d82d8f545", "url": "http://example.com/path/to/resource", "version": "7.0" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-agents/agents-get-resp.json0000664000175000017500000000046700000000000023437 0ustar00zuulzuul00000000000000{ "agents": [ { "agent_id": 1, "architecture": "x86", "hypervisor": "xen", "md5hash": "add6bb58e139be103324d04d82d8f545", "os": "os", "url": "http://example.com/path/to/resource", "version": "8.0" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1184719 nova-21.2.4/doc/api_samples/os-aggregates/0000775000175000017500000000000000000000000020360 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/aggregate-add-host-post-req.json0000664000175000017500000000011700000000000026451 0ustar00zuulzuul00000000000000{ "add_host": { "host": "21549b2f665945baaa7101926a00143c" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/aggregate-metadata-post-req.json0000664000175000017500000000021200000000000026522 0ustar00zuulzuul00000000000000{ "set_metadata": { "metadata": { "key": "value" } } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/aggregate-post-req.json0000664000175000017500000000013700000000000024752 0ustar00zuulzuul00000000000000{ "aggregate": { "name": "name", "availability_zone": "london" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/aggregate-post-resp.json0000664000175000017500000000036200000000000025134 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "2013-08-18T12:17:55.751757", "deleted": false, "deleted_at": null, "id": 1, "name": "name", "updated_at": null } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/aggregate-remove-host-post-req.json0000664000175000017500000000012200000000000027212 0ustar00zuulzuul00000000000000{ "remove_host": { "host": "bf1454b3d71145d49fca2101c56c728d" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/aggregate-update-post-req.json0000664000175000017500000000014000000000000026224 0ustar00zuulzuul00000000000000{ "aggregate": { "name": "newname", 
"availability_zone": "nova2" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/aggregate-update-post-resp.json0000664000175000017500000000055300000000000026416 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "nova2", "created_at": "2013-08-18T12:17:56.259751", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "nova2" }, "name": "newname", "updated_at": "2013-08-18T12:17:56.286720" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/aggregates-add-host-post-resp.json0000664000175000017500000000061200000000000027016 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "2013-08-18T12:17:56.297823", "deleted": false, "deleted_at": null, "hosts": [ "21549b2f665945baaa7101926a00143c" ], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/aggregates-get-resp.json0000664000175000017500000000052200000000000025107 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "2013-08-18T12:17:56.380226", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/aggregates-list-get-resp.json0000664000175000017500000000066500000000000026070 0ustar00zuulzuul00000000000000{ "aggregates": [ { "availability_zone": "london", "created_at": "2013-08-18T12:17:56.856455", "deleted": false, "deleted_at": null, "hosts": ["21549b2f665945baaa7101926a00143c"], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/aggregates-metadata-post-resp.json0000664000175000017500000000060600000000000027076 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "2013-08-18T12:17:55.959571", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "london", "key": "value" }, "name": "name", "updated_at": "2013-08-18T12:17:55.986540" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/aggregates-remove-host-post-resp.json0000664000175000017500000000052200000000000027563 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "2013-08-18T12:17:56.990581", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1224718 nova-21.2.4/doc/api_samples/os-aggregates/v2.41/0000775000175000017500000000000000000000000021132 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/api_samples/os-aggregates/v2.41/aggregate-add-host-post-req.json0000664000175000017500000000006500000000000027225 0ustar00zuulzuul00000000000000{ "add_host": { "host": "compute" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.41/aggregate-metadata-post-req.json0000664000175000017500000000021200000000000027274 0ustar00zuulzuul00000000000000{ "set_metadata": { "metadata": { "key": "value" } } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.41/aggregate-post-req.json0000664000175000017500000000013400000000000025521 0ustar00zuulzuul00000000000000{ "aggregate": { "name": "name", "availability_zone": "nova" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.41/aggregate-post-resp.json0000664000175000017500000000045200000000000025706 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "2016-12-27T22:51:32.877711", "deleted": false, "deleted_at": null, "id": 1, "name": "name", "updated_at": null, "uuid": "86a0da0e-9f0c-4f51-a1e0-3c25edab3783" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.41/aggregate-remove-host-post-req.json0000664000175000017500000000007000000000000027766 0ustar00zuulzuul00000000000000{ "remove_host": { "host": "compute" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.41/aggregate-update-post-req.json0000664000175000017500000000014000000000000026776 0ustar00zuulzuul00000000000000{ "aggregate": { "name": "newname", "availability_zone": "nova2" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.41/aggregate-update-post-resp.json0000664000175000017500000000064200000000000027167 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "nova2", "created_at": "2016-12-27T23:47:32.897139", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "nova2" }, "name": "newname", "updated_at": "2016-12-27T23:47:33.067180", "uuid": "6f74e3f3-df28-48f3-98e1-ac941b1c5e43" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.41/aggregates-add-host-post-resp.json0000664000175000017500000000065100000000000027573 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "2016-12-27T23:47:30.594805", "deleted": false, "deleted_at": null, "hosts": [ "compute" ], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null, "uuid": "d1842372-89c5-4fbd-ad5a-5d2e16c85456" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.41/aggregates-get-resp.json0000664000175000017500000000061200000000000025661 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "2016-12-27T23:47:30.563527", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { 
"availability_zone": "london" }, "name": "name", "updated_at": null, "uuid": "fd0a5b12-7e8d-469d-bfd5-64a6823e7407" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.41/aggregates-list-get-resp.json0000664000175000017500000000076600000000000026644 0ustar00zuulzuul00000000000000{ "aggregates": [ { "availability_zone": "london", "created_at": "2016-12-27T23:47:32.911515", "deleted": false, "deleted_at": null, "hosts": [ "compute" ], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null, "uuid": "6ba28ba7-f29b-45cc-a30b-6e3a40c2fb14" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.41/aggregates-metadata-post-resp.json0000664000175000017500000000067600000000000027657 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "2016-12-27T23:59:18.623100", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "london", "key": "value" }, "name": "name", "updated_at": "2016-12-27T23:59:18.723348", "uuid": "26002bdb-62cc-41bd-813a-0ad22db32625" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.41/aggregates-remove-host-post-resp.json0000664000175000017500000000061200000000000030335 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "2016-12-27T23:47:30.594805", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null, "uuid": "d1842372-89c5-4fbd-ad5a-5d2e16c85456" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1224718 nova-21.2.4/doc/api_samples/os-aggregates/v2.81/0000775000175000017500000000000000000000000021136 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.81/aggregate-add-host-post-req.json0000664000175000017500000000006500000000000027231 0ustar00zuulzuul00000000000000{ "add_host": { "host": "compute" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.81/aggregate-images-post-req.json0000664000175000017500000000012300000000000026766 0ustar00zuulzuul00000000000000{ "cache": [ {"id": "70a599e0-31e7-49b7-b260-868f441e862b"} ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.81/aggregate-metadata-post-req.json0000664000175000017500000000021200000000000027300 0ustar00zuulzuul00000000000000{ "set_metadata": { "metadata": { "key": "value" } } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.81/aggregate-post-req.json0000664000175000017500000000013600000000000025527 0ustar00zuulzuul00000000000000{ "aggregate": { "name": "name", "availability_zone": "london" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/api_samples/os-aggregates/v2.81/aggregate-post-resp.json0000664000175000017500000000045100000000000025711 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "2019-10-08T15:15:27.988513", "deleted": false, "deleted_at": null, "id": 1, "name": "name", "updated_at": null, "uuid": "a25e34a2-4fc1-4876-82d0-cf930fa04b82" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.81/aggregate-remove-host-post-req.json0000664000175000017500000000007000000000000027772 0ustar00zuulzuul00000000000000{ "remove_host": { "host": "compute" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.81/aggregate-update-post-req.json0000664000175000017500000000014000000000000027002 0ustar00zuulzuul00000000000000{ "aggregate": { "name": "newname", "availability_zone": "nova2" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.81/aggregate-update-post-resp.json0000664000175000017500000000064200000000000027173 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "nova2", "created_at": "2019-10-11T14:19:00.718841", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "nova2" }, "name": "newname", "updated_at": "2019-10-11T14:19:00.785838", "uuid": "4e7fa22f-f6cf-4e81-a5c7-6dc485815f81" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.81/aggregates-add-host-post-resp.json0000664000175000017500000000065000000000000027576 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "2019-10-11T14:19:05.250053", "deleted": false, "deleted_at": null, "hosts": [ "compute" ], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null, "uuid": "47832b50-a192-4900-affe-8f7fdf2d7f22" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.81/aggregates-get-resp.json0000664000175000017500000000061100000000000025664 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "2019-10-11T14:19:07.366577", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null, "uuid": "7c5ff84a-c901-4733-adf8-06875e265080" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.81/aggregates-list-get-resp.json0000664000175000017500000000076500000000000026647 0ustar00zuulzuul00000000000000{ "aggregates": [ { "availability_zone": "london", "created_at": "2019-10-11T14:19:07.386637", "deleted": false, "deleted_at": null, "hosts": [ "compute" ], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null, "uuid": "070cb72c-f463-4f72-9c61-2c0556eb8c07" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.81/aggregates-metadata-post-resp.json0000664000175000017500000000067500000000000027662 0ustar00zuulzuul00000000000000{ 
"aggregate": { "availability_zone": "london", "created_at": "2019-10-11T14:19:03.103465", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "london", "key": "value" }, "name": "name", "updated_at": "2019-10-11T14:19:03.169058", "uuid": "0843db7c-f161-446d-84c8-d936320da2e8" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-aggregates/v2.81/aggregates-remove-host-post-resp.json0000664000175000017500000000061100000000000030340 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "2019-10-11T14:19:05.250053", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null, "uuid": "47832b50-a192-4900-affe-8f7fdf2d7f22" } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1224718 nova-21.2.4/doc/api_samples/os-assisted-volume-snapshots/0000775000175000017500000000000000000000000023413 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-assisted-volume-snapshots/snapshot-create-assisted-req.json0000664000175000017500000000047600000000000032017 0ustar00zuulzuul00000000000000{ "snapshot": { "volume_id": "521752a6-acf6-4b2d-bc7a-119f9148cd8c", "create_info": { "snapshot_id": "421752a6-acf6-4b2d-bc7a-119f9148cd8c", "type": "qcow2", "new_file": "new_file_name", "id": "421752a6-acf6-4b2d-bc7a-119f9148cd8c" } } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-assisted-volume-snapshots/snapshot-create-assisted-resp.json0000664000175000017500000000021400000000000032167 0ustar00zuulzuul00000000000000{ "snapshot": { "id": "421752a6-acf6-4b2d-bc7a-119f9148cd8c", "volumeId": "521752a6-acf6-4b2d-bc7a-119f9148cd8c" } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1264718 nova-21.2.4/doc/api_samples/os-attach-interfaces/0000775000175000017500000000000000000000000021634 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-attach-interfaces/attach-interfaces-create-net_id-req.json0000664000175000017500000000031200000000000031376 0ustar00zuulzuul00000000000000{ "interfaceAttachment": { "fixed_ips": [ { "ip_address": "192.168.1.3" } ], "net_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-attach-interfaces/attach-interfaces-create-req.json0000664000175000017500000000014000000000000030135 0ustar00zuulzuul00000000000000{ "interfaceAttachment": { "port_id": "ce531f90-199f-48c0-816c-13e38010b442" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-attach-interfaces/attach-interfaces-create-resp.json0000664000175000017500000000062200000000000030324 0ustar00zuulzuul00000000000000{ "interfaceAttachment": { "fixed_ips": [ { "ip_address": "192.168.1.3", "subnet_id": "f8a6e8f8-c2ec-497c-9f23-da9616de54ef" } ], "mac_addr": "fa:16:3e:4c:2c:30", "net_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "port_id": 
"ce531f90-199f-48c0-816c-13e38010b442", "port_state": "ACTIVE" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-attach-interfaces/attach-interfaces-list-resp.json0000664000175000017500000000071700000000000030041 0ustar00zuulzuul00000000000000{ "interfaceAttachments": [ { "fixed_ips": [ { "ip_address": "192.168.1.3", "subnet_id": "f8a6e8f8-c2ec-497c-9f23-da9616de54ef" } ], "mac_addr": "fa:16:3e:4c:2c:30", "net_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "port_id": "ce531f90-199f-48c0-816c-13e38010b442", "port_state": "ACTIVE" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-attach-interfaces/attach-interfaces-show-resp.json0000664000175000017500000000062200000000000030041 0ustar00zuulzuul00000000000000{ "interfaceAttachment": { "fixed_ips": [ { "ip_address": "192.168.1.3", "subnet_id": "f8a6e8f8-c2ec-497c-9f23-da9616de54ef" } ], "mac_addr": "fa:16:3e:4c:2c:30", "net_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "port_id": "ce531f90-199f-48c0-816c-13e38010b442", "port_state": "ACTIVE" } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1264718 nova-21.2.4/doc/api_samples/os-attach-interfaces/v2.49/0000775000175000017500000000000000000000000022416 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-attach-interfaces/v2.49/attach-interfaces-create-req.json0000664000175000017500000000016700000000000030730 0ustar00zuulzuul00000000000000{ "interfaceAttachment": { "port_id": "ce531f90-199f-48c0-816c-13e38010b442", "tag": "foo" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-attach-interfaces/v2.49/attach-interfaces-create-resp.json0000664000175000017500000000062300000000000031107 0ustar00zuulzuul00000000000000{ "interfaceAttachment": { "fixed_ips": [ { "ip_address": "192.168.1.3", "subnet_id": "f8a6e8f8-c2ec-497c-9f23-da9616de54ef" } ], "mac_addr": "fa:16:3e:4c:2c:30", "net_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "port_id": "ce531f90-199f-48c0-816c-13e38010b442", "port_state": "ACTIVE" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1264718 nova-21.2.4/doc/api_samples/os-attach-interfaces/v2.70/0000775000175000017500000000000000000000000022410 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-attach-interfaces/v2.70/attach-interfaces-create-net_id-req.json0000664000175000017500000000034200000000000032155 0ustar00zuulzuul00000000000000{ "interfaceAttachment": { "fixed_ips": [ { "ip_address": "192.168.1.3" } ], "net_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "tag": "public" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-attach-interfaces/v2.70/attach-interfaces-create-req.json0000664000175000017500000000017100000000000030715 0ustar00zuulzuul00000000000000{ "interfaceAttachment": { "port_id": "ce531f90-199f-48c0-816c-13e38010b442", "tag": "public" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/api_samples/os-attach-interfaces/v2.70/attach-interfaces-create-resp.json0000664000175000017500000000065300000000000031104 0ustar00zuulzuul00000000000000{ "interfaceAttachment": { "fixed_ips": [ { "ip_address": "192.168.1.3", "subnet_id": "f8a6e8f8-c2ec-497c-9f23-da9616de54ef" } ], "mac_addr": "fa:16:3e:4c:2c:30", "net_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "port_id": "ce531f90-199f-48c0-816c-13e38010b442", "port_state": "ACTIVE", "tag": "public" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-attach-interfaces/v2.70/attach-interfaces-list-resp.json0000664000175000017500000000075400000000000030616 0ustar00zuulzuul00000000000000{ "interfaceAttachments": [ { "fixed_ips": [ { "ip_address": "192.168.1.3", "subnet_id": "f8a6e8f8-c2ec-497c-9f23-da9616de54ef" } ], "mac_addr": "fa:16:3e:4c:2c:30", "net_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "port_id": "ce531f90-199f-48c0-816c-13e38010b442", "port_state": "ACTIVE", "tag": "public" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-attach-interfaces/v2.70/attach-interfaces-show-resp.json0000664000175000017500000000065300000000000030621 0ustar00zuulzuul00000000000000{ "interfaceAttachment": { "fixed_ips": [ { "ip_address": "192.168.1.3", "subnet_id": "f8a6e8f8-c2ec-497c-9f23-da9616de54ef" } ], "mac_addr": "fa:16:3e:4c:2c:30", "net_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "port_id": "ce531f90-199f-48c0-816c-13e38010b442", "port_state": "ACTIVE", "tag": "public" } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1264718 nova-21.2.4/doc/api_samples/os-availability-zone/0000775000175000017500000000000000000000000021672 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-availability-zone/availability-zone-detail-resp.json0000664000175000017500000000207300000000000030421 0ustar00zuulzuul00000000000000{ "availabilityZoneInfo": [ { "hosts": { "conductor": { "nova-conductor": { "active": true, "available": true, "updated_at": null } }, "scheduler": { "nova-scheduler": { "active": true, "available": true, "updated_at": null } } }, "zoneName": "internal", "zoneState": { "available": true } }, { "hosts": { "compute": { "nova-compute": { "active": true, "available": true, "updated_at": null } } }, "zoneName": "nova", "zoneState": { "available": true } } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-availability-zone/availability-zone-list-resp.json0000664000175000017500000000030200000000000030123 0ustar00zuulzuul00000000000000{ "availabilityZoneInfo": [ { "hosts": null, "zoneName": "nova", "zoneState": { "available": true } } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1264718 nova-21.2.4/doc/api_samples/os-baremetal-nodes/0000775000175000017500000000000000000000000021311 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-baremetal-nodes/baremetal-node-get-resp.json0000664000175000017500000000046400000000000026613 0ustar00zuulzuul00000000000000{ "node": { "cpus": "2", "disk_gb": "10", "host": "IRONIC 
MANAGED", "id": "058d27fa-241b-445a-a386-08c04f96db43", "instance_uuid": "1ea4e53e-149a-4f02-9515-590c9fb2315a", "interfaces": [], "memory_mb": "1024", "task_state": "active" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-baremetal-nodes/baremetal-node-list-resp.json0000664000175000017500000000106100000000000027001 0ustar00zuulzuul00000000000000{ "nodes": [ { "cpus": "2", "disk_gb": "10", "host": "IRONIC MANAGED", "id": "058d27fa-241b-445a-a386-08c04f96db43", "interfaces": [], "memory_mb": "1024", "task_state": "active" }, { "cpus": "2", "disk_gb": "10", "host": "IRONIC MANAGED", "id": "e2025409-f3ce-4d6a-9788-c565cf3b1b1c", "interfaces": [], "memory_mb": "1024", "task_state": "active" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1264718 nova-21.2.4/doc/api_samples/os-cells/0000775000175000017500000000000000000000000017351 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-cells/cells-capacities-resp.json0000664000175000017500000000116200000000000024420 0ustar00zuulzuul00000000000000{ "cell": { "capacities": { "disk_free": { "total_mb": 1052672, "units_by_mb": { "0": 0, "163840": 5, "20480": 46, "40960": 23, "81920": 11 } }, "ram_free": { "total_mb": 7680, "units_by_mb": { "16384": 0, "2048": 3, "4096": 1, "512": 13, "8192": 0 } } } } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-cells/cells-get-resp.json0000664000175000017500000000023500000000000023072 0ustar00zuulzuul00000000000000{ "cell": { "name": "cell3", "rpc_host": null, "rpc_port": null, "type": "child", "username": "username3" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-cells/cells-list-resp.json0000664000175000017500000000160400000000000023267 0ustar00zuulzuul00000000000000{ "cells": [ { "name": "cell1", "rpc_host": null, "rpc_port": null, "type": "child", "username": "username1" }, { "name": "cell3", "rpc_host": null, "rpc_port": null, "type": "child", "username": "username3" }, { "name": "cell5", "rpc_host": null, "rpc_port": null, "type": "child", "username": "username5" }, { "name": "cell2", "rpc_host": null, "rpc_port": null, "type": "parent", "username": "username2" }, { "name": "cell4", "rpc_host": null, "rpc_port": null, "type": "parent", "username": "username4" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1264718 nova-21.2.4/doc/api_samples/os-certificates/0000775000175000017500000000000000000000000020714 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-certificates/certificate-create-resp.json0000664000175000017500000000672300000000000026311 0ustar00zuulzuul00000000000000{ "certificate": { "data": "Certificate:\n Data:\n Version: 1 (0x0)\n Serial Number: 1018 (0x3fa)\n Signature Algorithm: md5WithRSAEncryption\n Issuer: O=NOVA ROOT, L=Mountain View, ST=California, C=US\n Validity\n Not Before: Aug 12 07:20:30 2013 GMT\n Not After : Aug 12 07:20:30 2014 GMT\n Subject: C=US, ST=California, O=OpenStack, OU=NovaDev, CN=openstack-fake-2013-08-12T07:20:30Z\n Subject Public Key 
Info:\n Public Key Algorithm: rsaEncryption\n Public-Key: (1024 bit)\n Modulus:\n 00:ac:ff:b1:d1:ed:54:4e:35:6c:34:b4:8f:0b:04:\n 50:25:a3:e2:4f:02:4c:4f:26:59:bd:f3:fd:eb:da:\n 18:c2:36:aa:63:42:72:1f:88:4f:3a:ec:e7:9f:8e:\n 44:2a:d3:b8:94:7b:20:41:f8:48:02:57:91:4c:16:\n 62:f1:21:d4:f2:40:b5:86:50:d9:61:f0:be:ff:d8:\n 8d:9f:4b:aa:6a:07:38:a2:7f:87:21:fc:e6:6e:1d:\n 0a:95:1a:90:0e:60:c2:24:e9:8e:e8:68:1b:e9:f3:\n c6:b0:7c:da:c5:20:66:9b:85:ea:f5:c9:a7:de:ee:\n 16:b1:51:a0:4d:e3:95:98:df\n Exponent: 65537 (0x10001)\n Signature Algorithm: md5WithRSAEncryption\n 15:42:ca:71:cc:32:af:dc:cf:45:91:df:8a:b8:30:c4:7f:78:\n 80:a7:25:c2:d9:81:3e:b3:dd:22:cc:3b:f8:94:e7:8f:04:f6:\n 93:04:9e:85:d4:10:40:ff:5a:07:47:24:b5:ae:93:ad:8d:e1:\n e6:54:4a:8d:4a:29:53:c4:8d:04:6b:0b:f6:af:38:78:02:c5:\n 05:19:89:82:2d:ba:fd:11:3c:1e:18:c9:0c:3d:03:93:6e:bc:\n 66:70:34:ee:03:78:8a:1d:3d:64:e8:20:2f:90:81:8e:49:1d:\n 07:37:15:66:42:cb:58:39:ad:56:ce:ed:47:c6:78:0b:0e:75:\n 29:ca\n-----BEGIN CERTIFICATE-----\nMIICNDCCAZ0CAgP6MA0GCSqGSIb3DQEBBAUAME4xEjAQBgNVBAoTCU5PVkEgUk9P\nVDEWMBQGA1UEBxMNTW91bnRhaW4gVmlldzETMBEGA1UECBMKQ2FsaWZvcm5pYTEL\nMAkGA1UEBhMCVVMwHhcNMTMwODEyMDcyMDMwWhcNMTQwODEyMDcyMDMwWjB2MQsw\nCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTESMBAGA1UECgwJT3BlblN0\nYWNrMRAwDgYDVQQLDAdOb3ZhRGV2MSwwKgYDVQQDDCNvcGVuc3RhY2stZmFrZS0y\nMDEzLTA4LTEyVDA3OjIwOjMwWjCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA\nrP+x0e1UTjVsNLSPCwRQJaPiTwJMTyZZvfP969oYwjaqY0JyH4hPOuznn45EKtO4\nlHsgQfhIAleRTBZi8SHU8kC1hlDZYfC+/9iNn0uqagc4on+HIfzmbh0KlRqQDmDC\nJOmO6Ggb6fPGsHzaxSBmm4Xq9cmn3u4WsVGgTeOVmN8CAwEAATANBgkqhkiG9w0B\nAQQFAAOBgQAVQspxzDKv3M9Fkd+KuDDEf3iApyXC2YE+s90izDv4lOePBPaTBJ6F\n1BBA/1oHRyS1rpOtjeHmVEqNSilTxI0Eawv2rzh4AsUFGYmCLbr9ETweGMkMPQOT\nbrxmcDTuA3iKHT1k6CAvkIGOSR0HNxVmQstYOa1Wzu1HxngLDnUpyg==\n-----END CERTIFICATE-----\n", "private_key": "-----BEGIN RSA PRIVATE KEY-----\nMIICXgIBAAKBgQCs/7HR7VRONWw0tI8LBFAlo+JPAkxPJlm98/3r2hjCNqpjQnIf\niE867OefjkQq07iUeyBB+EgCV5FMFmLxIdTyQLWGUNlh8L7/2I2fS6pqBziif4ch\n/OZuHQqVGpAOYMIk6Y7oaBvp88awfNrFIGabher1yafe7haxUaBN45WY3wIDAQAB\nAoGBAIrcr2I/KyWf0hw4Nn10V9TuyE/9Gz2JHg3QFKjFJox2DqygADT5WAeHc6Bq\nNKNf0NA2SL1LSpm+ql01tvOw4VjE5TF6OHiIzHuTTnXggG6vuA8rxp6L24HtkAcC\n0CBno9ggSX6jVornJPBfxpkwITYSvH57BUFVD7ovbPyWGzS5AkEA1JeUtL6zxwps\nWRr1aJ8Ill2uQk/RUIvSZOU61s+B190zvHikFy8LD8CI6vvBmjC/IZuZVedufjqs\n4vX82uDO3QJBANBSh2b2dyB4AGVFY9vXMRtALAspJHbLHy+zTKxlGPFiuz7Se3ps\n8Kehz4C/CBXgQkk194dwFSGE19/PQfyJROsCQQCFFDJZhrtBUMwMZ2zSRiN5BUGt\nbwuncS+OS1Su3Yz5VRYq2BZYEPHKtYrAFkLWQ8eRwTaWaN5pFE/fb38OgQXdAkA4\nDm0W/K0zlHbuyUxEpNQ28/6mBi0ktiWvLT0tioq6sYmXLwZA/D2JrhXrG/xt/ol3\nr8jqrfNRsLByLhAgh0N/AkEAl2eR0O97lTEgFNqzIQwVmIAn9mBO3cnf3tycvlDU\nm6eb2CS242y4QalfCCAEjxoJURdfsm3/D1iFo00X+IWF+A==\n-----END RSA PRIVATE KEY-----\n" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-certificates/certificate-get-root-resp.json0000664000175000017500000000214400000000000026577 0ustar00zuulzuul00000000000000{ "certificate": { "data": "-----BEGIN 
CERTIFICATE-----\nMIICyzCCAjSgAwIBAgIJAJ8zSIxUp/m4MA0GCSqGSIb3DQEBBAUAME4xEjAQBgNV\nBAoTCU5PVkEgUk9PVDEWMBQGA1UEBxMNTW91bnRhaW4gVmlldzETMBEGA1UECBMK\nQ2FsaWZvcm5pYTELMAkGA1UEBhMCVVMwHhcNMTIxMDE3MDEzMzM5WhcNMTMxMDE3\nMDEzMzM5WjBOMRIwEAYDVQQKEwlOT1ZBIFJPT1QxFjAUBgNVBAcTDU1vdW50YWlu\nIFZpZXcxEzARBgNVBAgTCkNhbGlmb3JuaWExCzAJBgNVBAYTAlVTMIGfMA0GCSqG\nSIb3DQEBAQUAA4GNADCBiQKBgQDXW4QfQQxJG4MqurqK8nU/Lge0mfNKxXj/Gwvg\n2sQVwxzmKfoxih8Nn6yt0yHMNjhoji1UoWI03TXUnPZRAZmsypGKZeBd7Y1ZOCPB\nXGZVGrQm+PB2kZU+3cD8fVKcueMLLeZ+LRt5d0njnoKhc5xjqMlfFPimHMba4OL6\nTnYzPQIDAQABo4GwMIGtMAwGA1UdEwQFMAMBAf8wHQYDVR0OBBYEFKyoKu4SMOFM\ngx5Ec7p0nrCkabvxMH4GA1UdIwR3MHWAFKyoKu4SMOFMgx5Ec7p0nrCkabvxoVKk\nUDBOMRIwEAYDVQQKEwlOT1ZBIFJPT1QxFjAUBgNVBAcTDU1vdW50YWluIFZpZXcx\nEzARBgNVBAgTCkNhbGlmb3JuaWExCzAJBgNVBAYTAlVTggkAnzNIjFSn+bgwDQYJ\nKoZIhvcNAQEEBQADgYEAXuvXlu1o/SVvykSLhHW8QiAY00yzN/eDzYmZGomgiuoO\n/x+ayVzbrz1UWZnBD+lC4hll2iELSmf22LjLoF+s/9NyPqHxGL3FrfatBkndaiF8\nAx/TMEyCPl7IQWi+3zzatqOKHSHiG7a9SGn/7o2aNTIWKVulfy5GvmbBjBM/0UE=\n-----END CERTIFICATE-----\n", "private_key": null } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1264718 nova-21.2.4/doc/api_samples/os-cloudpipe/0000775000175000017500000000000000000000000020233 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-cloudpipe/cloud-pipe-create-req.json0000664000175000017500000000013200000000000025211 0ustar00zuulzuul00000000000000{ "cloudpipe": { "project_id": "059f21e3-c20e-4efc-9e7a-eba2ab3c6f9a" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-cloudpipe/cloud-pipe-create-resp.json0000664000175000017500000000007500000000000025401 0ustar00zuulzuul00000000000000{ "instance_id": "1e9b8425-34af-488e-b969-4d46f4a6382e" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-cloudpipe/cloud-pipe-get-resp.json0000664000175000017500000000056500000000000024721 0ustar00zuulzuul00000000000000{ "cloudpipes": [ { "created_at": "2012-11-27T17:18:01Z", "instance_id": "27deecdb-baa3-4a26-9c82-32994b815b01", "internal_ip": "192.168.1.30", "project_id": "fa1765bd-a352-49c7-a6b7-8ee108a3cb0c", "public_ip": "127.0.0.1", "public_port": 22, "state": "down" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-cloudpipe/cloud-pipe-update-req.json0000664000175000017500000000014000000000000025227 0ustar00zuulzuul00000000000000{ "configure_project": { "vpn_ip": "192.168.1.1", "vpn_port": "2000" } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1264718 nova-21.2.4/doc/api_samples/os-console-auth-tokens/0000775000175000017500000000000000000000000022151 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-console-auth-tokens/get-console-connect-info-get-resp.json0000664000175000017500000000032600000000000031370 0ustar00zuulzuul00000000000000{ "console": { "instance_uuid": "b48316c5-71e8-45e4-9884-6c78055b9b13", "host": "localhost", "port": 5900, "internal_access_path": "51af38c3-555e-4884-a314-6c8cdde37444" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 
xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-console-auth-tokens/get-rdp-console-post-req.json0000664000175000017500000000010000000000000027605 0ustar00zuulzuul00000000000000{ "os-getRDPConsole": { "type": "rdp-html5" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1264718 nova-21.2.4/doc/api_samples/os-console-output/0000775000175000017500000000000000000000000021247 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-console-output/console-output-post-req.json0000664000175000017500000000007400000000000026713 0ustar00zuulzuul00000000000000{ "os-getConsoleOutput": { "length": 50 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-console-output/console-output-post-resp.json0000664000175000017500000000007300000000000027074 0ustar00zuulzuul00000000000000{ "output": "FAKE CONSOLE OUTPUT\nANOTHER\nLAST LINE" }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1264718 nova-21.2.4/doc/api_samples/os-create-backup/0000775000175000017500000000000000000000000020755 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-create-backup/create-backup-req.json0000664000175000017500000000016200000000000025142 0ustar00zuulzuul00000000000000{ "createBackup": { "name": "Backup 1", "backup_type": "daily", "rotation": 1 } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1264718 nova-21.2.4/doc/api_samples/os-create-backup/v2.45/0000775000175000017500000000000000000000000021533 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-create-backup/v2.45/create-backup-req.json0000664000175000017500000000016300000000000025721 0ustar00zuulzuul00000000000000{ "createBackup": { "name": "Backup 1", "backup_type": "weekly", "rotation": 1 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-create-backup/v2.45/create-backup-resp.json0000664000175000017500000000007200000000000026102 0ustar00zuulzuul00000000000000{ "image_id": "0e7761dd-ee98-41f0-ba35-05994e446431" }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1304717 nova-21.2.4/doc/api_samples/os-deferred-delete/0000775000175000017500000000000000000000000021267 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-deferred-delete/force-delete-post-req.json0000664000175000017500000000003400000000000026265 0ustar00zuulzuul00000000000000{ "forceDelete": null } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-deferred-delete/restore-post-req.json0000664000175000017500000000002700000000000025414 0ustar00zuulzuul00000000000000{ "restore": null }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1304717 
nova-21.2.4/doc/api_samples/os-evacuate/0000775000175000017500000000000000000000000020044 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-evacuate/server-evacuate-find-host-req.json0000664000175000017500000000014400000000000026515 0ustar00zuulzuul00000000000000{ "evacuate": { "adminPass": "MySecretPass", "onSharedStorage": "False" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-evacuate/server-evacuate-find-host-resp.json0000664000175000017500000000004400000000000026676 0ustar00zuulzuul00000000000000{ "adminPass": "MySecretPass" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-evacuate/server-evacuate-req.json0000664000175000017500000000023000000000000024620 0ustar00zuulzuul00000000000000{ "evacuate": { "host": "b419863b7d814906a68fb31703c0dbd6", "adminPass": "MySecretPass", "onSharedStorage": "False" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-evacuate/server-evacuate-resp.json0000664000175000017500000000004400000000000025005 0ustar00zuulzuul00000000000000{ "adminPass": "MySecretPass" } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1304717 nova-21.2.4/doc/api_samples/os-evacuate/v2.14/0000775000175000017500000000000000000000000020616 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-evacuate/v2.14/server-evacuate-find-host-req.json0000664000175000017500000000010000000000000027257 0ustar00zuulzuul00000000000000{ "evacuate": { "adminPass": "MySecretPass" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-evacuate/v2.14/server-evacuate-req.json0000664000175000017500000000013400000000000025375 0ustar00zuulzuul00000000000000{ "evacuate": { "host": "testHost", "adminPass": "MySecretPass" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1304717 nova-21.2.4/doc/api_samples/os-evacuate/v2.29/0000775000175000017500000000000000000000000020624 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-evacuate/v2.29/server-evacuate-find-host-req.json0000664000175000017500000000007700000000000027302 0ustar00zuulzuul00000000000000{ "evacuate": { "adminPass": "MySecretPass" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-evacuate/v2.29/server-evacuate-req.json0000664000175000017500000000016400000000000025406 0ustar00zuulzuul00000000000000{ "evacuate": { "host": "testHost", "adminPass": "MySecretPass", "force": false } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1304717 nova-21.2.4/doc/api_samples/os-evacuate/v2.68/0000775000175000017500000000000000000000000020627 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/api_samples/os-evacuate/v2.68/server-evacuate-find-host-req.json0000664000175000017500000000007700000000000027305 0ustar00zuulzuul00000000000000{ "evacuate": { "adminPass": "MySecretPass" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-evacuate/v2.68/server-evacuate-req.json0000664000175000017500000000013300000000000025405 0ustar00zuulzuul00000000000000{ "evacuate": { "host": "testHost", "adminPass": "MySecretPass" } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1304717 nova-21.2.4/doc/api_samples/os-fixed-ips/0000775000175000017500000000000000000000000020137 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-fixed-ips/fixedip-post-req.json0000664000175000017500000000002700000000000024231 0ustar00zuulzuul00000000000000{ "reserve": null }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-fixed-ips/fixedips-get-resp.json0000664000175000017500000000023700000000000024373 0ustar00zuulzuul00000000000000{ "fixed_ip": { "address": "192.168.1.1", "cidr": "192.168.1.0/24", "host": "host", "hostname": "compute.host.pvt" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1304717 nova-21.2.4/doc/api_samples/os-fixed-ips/v2.4/0000775000175000017500000000000000000000000020630 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-fixed-ips/v2.4/fixedip-post-req.json0000664000175000017500000000002700000000000024722 0ustar00zuulzuul00000000000000{ "reserve": null }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-fixed-ips/v2.4/fixedips-get-resp.json0000664000175000017500000000027200000000000025063 0ustar00zuulzuul00000000000000{ "fixed_ip": { "address": "192.168.1.1", "cidr": "192.168.1.0/24", "host": "host", "hostname": "compute.host.pvt", "reserved": false } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1304717 nova-21.2.4/doc/api_samples/os-floating-ip-dns/0000775000175000017500000000000000000000000021242 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-floating-ip-dns/floating-ip-dns-create-or-update-entry-req.json0000664000175000017500000000012400000000000032210 0ustar00zuulzuul00000000000000{ "dns_entry": { "ip": "192.168.53.11", "dns_type": "A" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-floating-ip-dns/floating-ip-dns-create-or-update-entry-resp.json0000664000175000017500000000024700000000000032400 0ustar00zuulzuul00000000000000{ "dns_entry": { "domain": "domain1.example.org", "id": null, "ip": "192.168.1.1", "name": "instance1", "type": "A" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-floating-ip-dns/floating-ip-dns-create-or-update-req.json0000664000175000017500000000013100000000000031047 
0ustar00zuulzuul00000000000000{ "domain_entry": { "scope": "public", "project": "project1" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-floating-ip-dns/floating-ip-dns-create-or-update-resp.json0000664000175000017500000000024400000000000031236 0ustar00zuulzuul00000000000000{ "domain_entry": { "availability_zone": null, "domain": "domain1.example.org", "project": "project1", "scope": "public" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-floating-ip-dns/floating-ip-dns-entry-get-resp.json0000664000175000017500000000025000000000000030010 0ustar00zuulzuul00000000000000{ "dns_entry": { "domain": "domain1.example.org", "id": null, "ip": "192.168.1.1", "name": "instance1", "type": null } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-floating-ip-dns/floating-ip-dns-entry-list-resp.json0000664000175000017500000000032200000000000030204 0ustar00zuulzuul00000000000000{ "dns_entries": [ { "domain": "domain1.example.org", "id": null, "ip": "192.168.1.1", "name": "instance1", "type": null } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-floating-ip-dns/floating-ip-dns-list-resp.json0000664000175000017500000000031200000000000027044 0ustar00zuulzuul00000000000000{ "domain_entries": [ { "availability_zone": null, "domain": "domain1.example.org", "project": "project1", "scope": "public" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1304717 nova-21.2.4/doc/api_samples/os-floating-ip-pools/0000775000175000017500000000000000000000000021612 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-floating-ip-pools/floatingippools-list-resp.json0000664000175000017500000000020500000000000027633 0ustar00zuulzuul00000000000000{ "floating_ip_pools": [ { "name": "pool1" }, { "name": "pool2" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1344717 nova-21.2.4/doc/api_samples/os-floating-ips/0000775000175000017500000000000000000000000020643 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-floating-ips/floating-ips-create-req.json0000664000175000017500000000003100000000000026152 0ustar00zuulzuul00000000000000{ "pool": "public" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-floating-ips/floating-ips-create-resp.json0000664000175000017500000000030200000000000026335 0ustar00zuulzuul00000000000000{ "floating_ip": { "fixed_ip": null, "id": "8baeddb4-45e2-4c36-8cb7-d79439a5f67c", "instance_id": null, "ip": "172.24.4.17", "pool": "public" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-floating-ips/floating-ips-get-resp.json0000664000175000017500000000030200000000000025651 0ustar00zuulzuul00000000000000{ "floating_ip": { "fixed_ip": null, "id": "8baeddb4-45e2-4c36-8cb7-d79439a5f67c", "instance_id": null, "ip": 
"172.24.4.17", "pool": "public" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-floating-ips/floating-ips-list-empty-resp.json0000664000175000017500000000003300000000000027202 0ustar00zuulzuul00000000000000{ "floating_ips": [] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-floating-ips/floating-ips-list-resp.json0000664000175000017500000000066700000000000026063 0ustar00zuulzuul00000000000000{ "floating_ips": [ { "fixed_ip": null, "id": "8baeddb4-45e2-4c36-8cb7-d79439a5f67c", "instance_id": null, "ip": "172.24.4.17", "pool": "public" }, { "fixed_ip": null, "id": "05ef7490-745a-4af9-98e5-610dc97493c4", "instance_id": null, "ip": "172.24.4.78", "pool": "public" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1344717 nova-21.2.4/doc/api_samples/os-floating-ips-bulk/0000775000175000017500000000000000000000000021576 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-floating-ips-bulk/floating-ips-bulk-create-req.json0000664000175000017500000000020600000000000030044 0ustar00zuulzuul00000000000000{ "floating_ips_bulk_create": { "ip_range": "192.168.1.0/24", "pool": "nova", "interface": "eth0" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-floating-ips-bulk/floating-ips-bulk-create-resp.json0000664000175000017500000000020500000000000030225 0ustar00zuulzuul00000000000000{ "floating_ips_bulk_create": { "interface": "eth0", "ip_range": "192.168.1.0/24", "pool": "nova" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-floating-ips-bulk/floating-ips-bulk-delete-req.json0000664000175000017500000000004400000000000030043 0ustar00zuulzuul00000000000000{ "ip_range": "192.168.1.0/24" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-floating-ips-bulk/floating-ips-bulk-delete-resp.json0000664000175000017500000000006400000000000030227 0ustar00zuulzuul00000000000000{ "floating_ips_bulk_delete": "192.168.1.0/24" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-floating-ips-bulk/floating-ips-bulk-list-by-host-resp.json0000664000175000017500000000037100000000000031324 0ustar00zuulzuul00000000000000{ "floating_ip_info": [ { "address": "10.10.10.3", "instance_uuid": null, "fixed_ip": null, "interface": "eth0", "pool": "nova", "project_id": null } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-floating-ips-bulk/floating-ips-bulk-list-resp.json0000664000175000017500000000124700000000000027744 0ustar00zuulzuul00000000000000{ "floating_ip_info": [ { "address": "10.10.10.1", "instance_uuid": null, "fixed_ip": null, "interface": "eth0", "pool": "nova", "project_id": null }, { "address": "10.10.10.2", "instance_uuid": null, "fixed_ip": null, "interface": "eth0", "pool": "nova", "project_id": null }, { "address": "10.10.10.3", "instance_uuid": null, "fixed_ip": null, "interface": "eth0", "pool": "nova", 
"project_id": null } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1344717 nova-21.2.4/doc/api_samples/os-fping/0000775000175000017500000000000000000000000017352 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-fping/fping-get-details-resp.json0000664000175000017500000000024000000000000024513 0ustar00zuulzuul00000000000000{ "server": { "alive": false, "id": "f5e6fd6d-c0a3-4f9e-aabf-d69196b6d11a", "project_id": "6f70656e737461636b20342065766572" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-fping/fping-get-resp.json0000664000175000017500000000030100000000000023066 0ustar00zuulzuul00000000000000{ "servers": [ { "alive": false, "id": "1d1aea35-472b-40cf-9337-8eb68480aaa1", "project_id": "6f70656e737461636b20342065766572" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1344717 nova-21.2.4/doc/api_samples/os-hosts/0000775000175000017500000000000000000000000017407 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hosts/host-get-reboot.json0000664000175000017500000000012100000000000023316 0ustar00zuulzuul00000000000000{ "host": "9557750dbc464741a89c907921c1cb31", "power_action": "reboot" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hosts/host-get-resp.json0000664000175000017500000000140600000000000023004 0ustar00zuulzuul00000000000000{ "host": [ { "resource": { "cpu": 2, "disk_gb": 1028, "host": "c1a7de0ac9d94e4baceae031d05caae3", "memory_mb": 8192, "project": "(total)" } }, { "resource": { "cpu": 0, "disk_gb": 0, "host": "c1a7de0ac9d94e4baceae031d05caae3", "memory_mb": 512, "project": "(used_now)" } }, { "resource": { "cpu": 0, "disk_gb": 0, "host": "c1a7de0ac9d94e4baceae031d05caae3", "memory_mb": 0, "project": "(used_max)" } } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hosts/host-get-shutdown.json0000664000175000017500000000012300000000000023701 0ustar00zuulzuul00000000000000{ "host": "77cfa0002e4d45fe97f185968111b27b", "power_action": "shutdown" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hosts/host-get-startup.json0000664000175000017500000000012200000000000023527 0ustar00zuulzuul00000000000000{ "host": "4b392b27930343bbaa27fd5d8328a564", "power_action": "startup" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hosts/host-put-maintenance-req.json0000664000175000017500000000007600000000000025135 0ustar00zuulzuul00000000000000{ "status": "enable", "maintenance_mode": "disable" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hosts/host-put-maintenance-resp.json0000664000175000017500000000016700000000000025320 0ustar00zuulzuul00000000000000{ "host": "65c5d5b7e3bd44308e67fc50f362aee6", "maintenance_mode": "off_maintenance", "status": "enabled" } 
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hosts/hosts-list-resp.json0000664000175000017500000000072100000000000023362 0ustar00zuulzuul00000000000000{ "hosts": [ { "host_name": "b6e4adbc193d428ea923899d07fb001e", "service": "conductor", "zone": "internal" }, { "host_name": "09c025b0efc64211bd23fc50fa974cdf", "service": "compute", "zone": "nova" }, { "host_name": "abffda96592c4eacaf4111c28fddee17", "service": "scheduler", "zone": "internal" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1344717 nova-21.2.4/doc/api_samples/os-hypervisors/0000775000175000017500000000000000000000000020644 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/hypervisors-detail-resp.json0000664000175000017500000000175600000000000026354 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "cpu_info": "{\"arch\": \"x86_64\", \"model\": \"Nehalem\", \"vendor\": \"Intel\", \"features\": [\"pge\", \"clflush\"], \"topology\": {\"cores\": 1, \"threads\": 1, \"sockets\": 4}}", "current_workload": 0, "status": "enabled", "state": "up", "disk_available_least": 0, "host_ip": "1.1.1.1", "free_disk_gb": 1028, "free_ram_mb": 7680, "hypervisor_hostname": "fake-mini", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": 1, "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "service": { "host": "e6a37ee802d74863ab8b91ade8f12a67", "id": 2, "disabled_reason": null }, "vcpus": 2, "vcpus_used": 0 } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/hypervisors-list-resp.json0000664000175000017500000000026300000000000026055 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "id": 1, "state": "up", "status": "enabled" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/hypervisors-search-resp.json0000664000175000017500000000026300000000000026347 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "id": 1, "state": "up", "status": "enabled" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/hypervisors-show-resp.json0000664000175000017500000000157100000000000026065 0ustar00zuulzuul00000000000000{ "hypervisor": { "cpu_info": "{\"arch\": \"x86_64\", \"model\": \"Nehalem\", \"vendor\": \"Intel\", \"features\": [\"pge\", \"clflush\"], \"topology\": {\"cores\": 1, \"threads\": 1, \"sockets\": 4}}", "state": "up", "status": "enabled", "current_workload": 0, "disk_available_least": 0, "host_ip": "1.1.1.1", "free_disk_gb": 1028, "free_ram_mb": 7680, "hypervisor_hostname": "fake-mini", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": 1, "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "service": { "host": "043b3cacf6f34c90a7245151fc8ebcda", "id": 2, "disabled_reason": null }, "vcpus": 2, "vcpus_used": 0 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/api_samples/os-hypervisors/hypervisors-statistics-resp.json0000664000175000017500000000055700000000000027302 0ustar00zuulzuul00000000000000{ "hypervisor_statistics": { "count": 1, "current_workload": 0, "disk_available_least": 0, "free_disk_gb": 1028, "free_ram_mb": 7680, "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "vcpus": 2, "vcpus_used": 0 } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/hypervisors-uptime-resp.json0000664000175000017500000000035200000000000026404 0ustar00zuulzuul00000000000000{ "hypervisor": { "hypervisor_hostname": "fake-mini", "id": 1, "state": "up", "status": "enabled", "uptime": " 08:32:11 up 93 days, 18:25, 12 users, load average: 0.20, 0.12, 0.14" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/hypervisors-with-servers-resp.json0000664000175000017500000000100200000000000027534 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "id": 1, "state": "up", "status": "enabled", "servers": [ { "name": "test_server1", "uuid": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" }, { "name": "test_server2", "uuid": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb" } ] } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/hypervisors-without-servers-resp.json0000664000175000017500000000026300000000000030274 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "id": 1, "state": "up", "status": "enabled" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1384716 nova-21.2.4/doc/api_samples/os-hypervisors/v2.28/0000775000175000017500000000000000000000000021423 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/v2.28/hypervisors-detail-resp.json0000664000175000017500000000227000000000000027123 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "cpu_info": { "arch": "x86_64", "model": "Nehalem", "vendor": "Intel", "features": [ "pge", "clflush" ], "topology": { "cores": 1, "threads": 1, "sockets": 4 } }, "current_workload": 0, "status": "enabled", "state": "up", "disk_available_least": 0, "host_ip": "1.1.1.1", "free_disk_gb": 1028, "free_ram_mb": 7680, "hypervisor_hostname": "fake-mini", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": 1, "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "service": { "host": "e6a37ee802d74863ab8b91ade8f12a67", "id": 2, "disabled_reason": null }, "vcpus": 2, "vcpus_used": 0 } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/v2.28/hypervisors-list-resp.json0000664000175000017500000000026300000000000026634 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "id": 1, "state": "up", "status": "enabled" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/v2.28/hypervisors-search-resp.json0000664000175000017500000000026300000000000027126 
0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "id": 1, "state": "up", "status": "enabled" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/v2.28/hypervisors-show-resp.json0000664000175000017500000000201700000000000026640 0ustar00zuulzuul00000000000000{ "hypervisor": { "cpu_info": { "arch": "x86_64", "model": "Nehalem", "vendor": "Intel", "features": [ "pge", "clflush" ], "topology": { "cores": 1, "threads": 1, "sockets": 4 } }, "state": "up", "status": "enabled", "current_workload": 0, "disk_available_least": 0, "host_ip": "1.1.1.1", "free_disk_gb": 1028, "free_ram_mb": 7680, "hypervisor_hostname": "fake-mini", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": 1, "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "service": { "host": "043b3cacf6f34c90a7245151fc8ebcda", "id": 2, "disabled_reason": null }, "vcpus": 2, "vcpus_used": 0 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/v2.28/hypervisors-statistics-resp.json0000664000175000017500000000055700000000000030061 0ustar00zuulzuul00000000000000{ "hypervisor_statistics": { "count": 1, "current_workload": 0, "disk_available_least": 0, "free_disk_gb": 1028, "free_ram_mb": 7680, "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "vcpus": 2, "vcpus_used": 0 } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/v2.28/hypervisors-uptime-resp.json0000664000175000017500000000035200000000000027163 0ustar00zuulzuul00000000000000{ "hypervisor": { "hypervisor_hostname": "fake-mini", "id": 1, "state": "up", "status": "enabled", "uptime": " 08:32:11 up 93 days, 18:25, 12 users, load average: 0.20, 0.12, 0.14" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/v2.28/hypervisors-with-servers-resp.json0000664000175000017500000000100200000000000030313 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "id": 1, "state": "up", "status": "enabled", "servers": [ { "name": "test_server1", "uuid": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" }, { "name": "test_server2", "uuid": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb" } ] } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/v2.28/hypervisors-without-servers-resp.json0000664000175000017500000000026300000000000031053 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "id": 1, "state": "up", "status": "enabled" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1384716 nova-21.2.4/doc/api_samples/os-hypervisors/v2.33/0000775000175000017500000000000000000000000021417 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/v2.33/hypervisors-detail-resp.json0000664000175000017500000000255200000000000027122 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "cpu_info": { "arch": "x86_64", "model": "Nehalem", "vendor": 
"Intel", "features": [ "pge", "clflush" ], "topology": { "cores": 1, "threads": 1, "sockets": 4 } }, "current_workload": 0, "status": "enabled", "state": "up", "disk_available_least": 0, "host_ip": "1.1.1.1", "free_disk_gb": 1028, "free_ram_mb": 7680, "hypervisor_hostname": "host1", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": 2, "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "service": { "host": "host1", "id": 6, "disabled_reason": null }, "vcpus": 2, "vcpus_used": 0 } ], "hypervisors_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/os-hypervisors/detail?limit=1&marker=2", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/v2.33/hypervisors-list-resp.json0000664000175000017500000000057100000000000026632 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "host1", "id": 2, "state": "up", "status": "enabled" } ], "hypervisors_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/os-hypervisors?limit=1&marker=2", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1384716 nova-21.2.4/doc/api_samples/os-hypervisors/v2.53/0000775000175000017500000000000000000000000021421 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/v2.53/hypervisors-detail-resp.json0000664000175000017500000000272700000000000027130 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "cpu_info": { "arch": "x86_64", "model": "Nehalem", "vendor": "Intel", "features": [ "pge", "clflush" ], "topology": { "cores": 1, "threads": 1, "sockets": 4 } }, "current_workload": 0, "status": "enabled", "state": "up", "disk_available_least": 0, "host_ip": "1.1.1.1", "free_disk_gb": 1028, "free_ram_mb": 7680, "hypervisor_hostname": "host2", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": "1bb62a04-c576-402c-8147-9e89757a09e3", "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "service": { "host": "host1", "id": "62f62f6e-a713-4cbe-87d3-3ecf8a1e0f8d", "disabled_reason": null }, "vcpus": 2, "vcpus_used": 0 } ], "hypervisors_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/os-hypervisors/detail?limit=1&marker=1bb62a04-c576-402c-8147-9e89757a09e3", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/v2.53/hypervisors-detail-with-servers-resp.json0000664000175000017500000000306600000000000031565 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "cpu_info": { "arch": "x86_64", "model": "Nehalem", "vendor": "Intel", "features": [ "pge", "clflush" ], "topology": { "cores": 1, "threads": 1, "sockets": 4 } }, "current_workload": 0, "status": "enabled", "servers": [ { "name": "test_server1", "uuid": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" }, { "name": "test_server2", "uuid": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb" } ], "state": "up", "disk_available_least": 0, "host_ip": "1.1.1.1", "free_disk_gb": 1028, "free_ram_mb": 7680, "hypervisor_hostname": "fake-mini", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": "b1e43b5f-eec1-44e0-9f10-7b4945c0226d", 
"local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "service": { "host": "host1", "id": "5d343e1d-938e-4284-b98b-6a2b5406ba76", "disabled_reason": null }, "vcpus": 2, "vcpus_used": 0 } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/v2.53/hypervisors-list-resp.json0000664000175000017500000000070100000000000026627 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "host2", "id": "1bb62a04-c576-402c-8147-9e89757a09e3", "state": "up", "status": "enabled" } ], "hypervisors_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/os-hypervisors?limit=1&marker=1bb62a04-c576-402c-8147-9e89757a09e3", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/v2.53/hypervisors-search-resp.json0000664000175000017500000000033000000000000027117 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "id": "b1e43b5f-eec1-44e0-9f10-7b4945c0226d", "state": "up", "status": "enabled" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/v2.53/hypervisors-show-resp.json0000664000175000017500000000213100000000000026633 0ustar00zuulzuul00000000000000{ "hypervisor": { "cpu_info": { "arch": "x86_64", "model": "Nehalem", "vendor": "Intel", "features": [ "pge", "clflush" ], "topology": { "cores": 1, "threads": 1, "sockets": 4 } }, "state": "up", "status": "enabled", "current_workload": 0, "disk_available_least": 0, "host_ip": "1.1.1.1", "free_disk_gb": 1028, "free_ram_mb": 7680, "hypervisor_hostname": "fake-mini", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": "b1e43b5f-eec1-44e0-9f10-7b4945c0226d", "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "service": { "host": "043b3cacf6f34c90a7245151fc8ebcda", "id": "5d343e1d-938e-4284-b98b-6a2b5406ba76", "disabled_reason": null }, "vcpus": 2, "vcpus_used": 0 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/v2.53/hypervisors-show-with-servers-resp.json0000664000175000017500000000260000000000000031274 0ustar00zuulzuul00000000000000{ "hypervisor": { "cpu_info": { "arch": "x86_64", "model": "Nehalem", "vendor": "Intel", "features": [ "pge", "clflush" ], "topology": { "cores": 1, "threads": 1, "sockets": 4 } }, "state": "up", "status": "enabled", "servers": [ { "name": "test_server1", "uuid": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" }, { "name": "test_server2", "uuid": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb" } ], "current_workload": 0, "disk_available_least": 0, "host_ip": "1.1.1.1", "free_disk_gb": 1028, "free_ram_mb": 7680, "hypervisor_hostname": "fake-mini", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": "b1e43b5f-eec1-44e0-9f10-7b4945c0226d", "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "service": { "host": "043b3cacf6f34c90a7245151fc8ebcda", "id": "5d343e1d-938e-4284-b98b-6a2b5406ba76", "disabled_reason": null }, "vcpus": 2, "vcpus_used": 0 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/api_samples/os-hypervisors/v2.53/hypervisors-statistics-resp.json0000664000175000017500000000055700000000000030057 0ustar00zuulzuul00000000000000{ "hypervisor_statistics": { "count": 1, "current_workload": 0, "disk_available_least": 0, "free_disk_gb": 1028, "free_ram_mb": 7680, "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "vcpus": 2, "vcpus_used": 0 } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/v2.53/hypervisors-uptime-resp.json0000664000175000017500000000041700000000000027163 0ustar00zuulzuul00000000000000{ "hypervisor": { "hypervisor_hostname": "fake-mini", "id": "b1e43b5f-eec1-44e0-9f10-7b4945c0226d", "state": "up", "status": "enabled", "uptime": " 08:32:11 up 93 days, 18:25, 12 users, load average: 0.20, 0.12, 0.14" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/v2.53/hypervisors-with-servers-resp.json0000664000175000017500000000104700000000000030322 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "id": "b1e43b5f-eec1-44e0-9f10-7b4945c0226d", "state": "up", "status": "enabled", "servers": [ { "name": "test_server1", "uuid": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" }, { "name": "test_server2", "uuid": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb" } ] } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-hypervisors/v2.53/hypervisors-without-servers-resp.json0000664000175000017500000000033000000000000031044 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "id": "b1e43b5f-eec1-44e0-9f10-7b4945c0226d", "state": "up", "status": "enabled" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1384716 nova-21.2.4/doc/api_samples/os-instance-actions/0000775000175000017500000000000000000000000021511 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/instance-action-get-resp.json0000664000175000017500000000121500000000000027206 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "events": [ { "event": "compute_stop_instance", "finish_time": "2018-04-25T01:26:29.585618", "result": "Success", "start_time": "2018-04-25T01:26:29.299627", "traceback": null } ], "instance_uuid": "e0a7ed34-899c-4b4d-8637-11ca627346ef", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-14122cb1-4256-4a16-a4f9-6faf494afaa7", "start_time": "2018-04-25T01:26:29.074293", "user_id": "admin" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/instance-actions-list-resp.json0000664000175000017500000000140300000000000027564 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "fcd19ef2-b593-40b1-90a5-fc31063fa95c", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-f8a59f03-76dc-412f-92c2-21f8612be728", "start_time": "2018-04-25T01:26:29.092892", "user_id": "admin" }, { "action": "create", "instance_uuid": "fcd19ef2-b593-40b1-90a5-fc31063fa95c", "message": null, "project_id": 
"6f70656e737461636b20342065766572", "request_id": "req-50189019-626d-47fb-b944-b8342af09679", "start_time": "2018-04-25T01:26:25.877278", "user_id": "admin" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1384716 nova-21.2.4/doc/api_samples/os-instance-actions/v2.21/0000775000175000017500000000000000000000000022261 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.21/instance-action-get-resp.json0000664000175000017500000000121500000000000027756 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "events": [ { "event": "compute_stop_instance", "finish_time": "2018-04-25T01:26:29.262877", "result": "Success", "start_time": "2018-04-25T01:26:29.012774", "traceback": null } ], "instance_uuid": "a53525ef-9ed5-4169-9f2e-dd141d575d87", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-343506f4-4dc3-4153-8de5-de6a60cb26ab", "start_time": "2018-04-25T01:26:28.757301", "user_id": "admin" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.21/instance-actions-list-resp.json0000664000175000017500000000140300000000000030334 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "07cacdbb-2e7f-4048-b69c-95cbdc47af6f", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-c022e6dc-d962-426e-b623-1cdbac0da64b", "start_time": "2018-04-25T01:26:28.752049", "user_id": "admin" }, { "action": "create", "instance_uuid": "07cacdbb-2e7f-4048-b69c-95cbdc47af6f", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-862ef1ff-da4f-4b3b-9a29-6d621442c76c", "start_time": "2018-04-25T01:26:25.595858", "user_id": "admin" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1384716 nova-21.2.4/doc/api_samples/os-instance-actions/v2.51/0000775000175000017500000000000000000000000022264 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.51/instance-action-get-non-admin-resp.json0000664000175000017500000000115100000000000031636 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "events": [ { "event": "compute_stop_instance", "finish_time": "2018-04-25T01:26:29.565338", "result": "Success", "start_time": "2018-04-25T01:26:29.294207" } ], "instance_uuid": "11a932ff-48b8-46ed-a409-7d9e50ec75d0", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-fad89d20-0311-44bd-a8c2-ee3f2411bcf0", "start_time": "2018-04-25T01:26:29.073738", "user_id": "fake" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.51/instance-action-get-resp.json0000664000175000017500000000121500000000000027761 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "events": [ { "event": "compute_stop_instance", "finish_time": "2018-04-25T01:26:30.798227", "result": "Success", "start_time": "2018-04-25T01:26:30.526590", "traceback": null } ], "instance_uuid": "07afdfe5-3791-48e3-9bda-1a0804796bab", "message": null, "project_id": 
"6f70656e737461636b20342065766572", "request_id": "req-f574c934-6f67-4945-b357-5a52a28a46a6", "start_time": "2018-04-25T01:26:30.301030", "user_id": "admin" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.51/instance-actions-list-resp.json0000664000175000017500000000140300000000000030337 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "a0dbc3b0-6f14-4fb7-8500-172e82584d05", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-ca4313e0-b514-46ea-a9b9-e49f8a1ad344", "start_time": "2018-04-25T01:26:29.206664", "user_id": "admin" }, { "action": "create", "instance_uuid": "a0dbc3b0-6f14-4fb7-8500-172e82584d05", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-a897e43f-8733-499d-a9cc-78993de2b8e8", "start_time": "2018-04-25T01:26:25.910998", "user_id": "admin" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1424716 nova-21.2.4/doc/api_samples/os-instance-actions/v2.58/0000775000175000017500000000000000000000000022273 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.58/instance-action-get-non-admin-resp.json0000664000175000017500000000123500000000000031650 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "events": [ { "event": "compute_stop_instance", "finish_time": "2018-04-25T01:26:30.518082", "result": "Success", "start_time": "2018-04-25T01:26:30.261571" } ], "instance_uuid": "ee4c91a6-f214-486d-8e2a-efa29ad91ecd", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-ada57283-2dd7-4703-9781-d287aaa4eb95", "start_time": "2018-04-25T01:26:30.041225", "updated_at": "2018-04-25T01:26:30.518082", "user_id": "fake" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.58/instance-action-get-resp.json0000664000175000017500000000130100000000000027764 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "events": [ { "event": "compute_stop_instance", "finish_time": "2018-04-25T01:26:29.409773", "result": "Success", "start_time": "2018-04-25T01:26:29.203170", "traceback": null } ], "instance_uuid": "cab10fb8-6702-40ba-a91c-18009cec0a09", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-0514d54b-0f2c-4611-963d-fc24afb57f1f", "start_time": "2018-04-25T01:26:28.996024", "updated_at": "2018-04-25T01:26:29.409773", "user_id": "admin" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.58/instance-actions-list-resp.json0000664000175000017500000000156100000000000030353 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "247cc793-7cf4-424a-a529-11bd62f960b6", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-1448f44f-490d-42ff-8781-c3181d103a7c", "start_time": "2018-04-25T01:26:28.793416", "updated_at": "2018-04-25T01:26:29.292649", "user_id": "fake" }, { "action": "create", "instance_uuid": "247cc793-7cf4-424a-a529-11bd62f960b6", "message": null, "project_id": "6f70656e737461636b20342065766572", 
"request_id": "req-de0561a0-09d9-4902-b1fc-23ee95b14c67", "start_time": "2018-04-25T01:26:25.527791", "updated_at": "2018-04-25T01:26:28.749039", "user_id": "fake" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.58/instance-actions-list-with-changes-since.json0000664000175000017500000000071100000000000033056 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "84fb2511-ed79-418c-ac0d-11337e1a1d76", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-0176a4e5-15ae-4038-98a9-5444aa277c31", "start_time": "2018-04-25T01:26:29.051607", "updated_at": "2018-04-25T01:26:29.538648", "user_id": "admin" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.58/instance-actions-list-with-limit-resp.json0000664000175000017500000000134000000000000032433 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "ca3d3be5-1a40-427f-9515-f5e181f479d0", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-4dbefbb7-d743-4d42-b0a1-a79cbe256138", "start_time": "2018-04-25T01:26:28.909887", "updated_at": "2018-04-25T01:26:29.400606", "user_id": "admin" } ], "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/ca3d3be5-1a40-427f-9515-f5e181f479d0/os-instance-actions?limit=1&marker=req-4dbefbb7-d743-4d42-b0a1-a79cbe256138", "rel": "next" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.58/instance-actions-list-with-marker-resp.json0000664000175000017500000000071200000000000032600 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "create", "instance_uuid": "31f35617-317d-4688-8046-bb600286e6b6", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-922232c3-faf8-4628-9c40-0e8f0cdab020", "start_time": "2018-04-25T01:26:33.694447", "updated_at": "2018-04-25T01:26:35.944525", "user_id": "fake" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1424716 nova-21.2.4/doc/api_samples/os-instance-actions/v2.62/0000775000175000017500000000000000000000000022266 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.62/instance-action-get-non-admin-resp.json0000664000175000017500000000136300000000000031645 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "events": [ { "event": "compute_stop_instance", "finish_time": "2018-04-25T01:26:34.784165", "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "result": "Success", "start_time": "2018-04-25T01:26:34.612020" } ], "instance_uuid": "79edaa44-ad4f-4af7-b994-154518c2b927", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-8eb28d4a-db6c-4337-bab8-ce154e9c620e", "start_time": "2018-04-25T01:26:34.388280", "updated_at": "2018-04-25T01:26:34.784165", "user_id": "fake" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/api_samples/os-instance-actions/v2.62/instance-action-get-resp.json0000664000175000017500000000147200000000000027770 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "events": [ { "event": "compute_stop_instance", "finish_time": "2018-04-25T01:26:36.790544", "host": "compute", "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "result": "Success", "start_time": "2018-04-25T01:26:36.539271", "traceback": null } ], "instance_uuid": "4bf3473b-d550-4b65-9409-292d44ab14a2", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-0d819d5c-1527-4669-bdf0-ffad31b5105b", "start_time": "2018-04-25T01:26:36.341290", "updated_at": "2018-04-25T01:26:36.790544", "user_id": "admin" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.62/instance-actions-list-resp.json0000664000175000017500000000156300000000000030350 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "15835b6f-1e14-4cfa-9f66-1abea1a1c0d5", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-f04d4b92-6241-42da-b82d-2cedb225c58d", "start_time": "2018-04-25T01:26:36.036697", "updated_at": "2018-04-25T01:26:36.525308", "user_id": "admin" }, { "action": "create", "instance_uuid": "15835b6f-1e14-4cfa-9f66-1abea1a1c0d5", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-d8790618-9bbf-4df0-8af8-fc9e24de29c0", "start_time": "2018-04-25T01:26:33.692125", "updated_at": "2018-04-25T01:26:35.993821", "user_id": "admin" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.62/instance-actions-list-with-changes-since.json0000664000175000017500000000071100000000000033051 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "2150964c-30fe-4214-9547-8822375aa7d0", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-0c3b2079-0a44-474d-a5b2-7466d4b4c642", "start_time": "2018-04-25T01:26:29.594237", "updated_at": "2018-04-25T01:26:30.065061", "user_id": "admin" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.62/instance-actions-list-with-limit-resp.json0000664000175000017500000000133700000000000032434 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "7a580cc0-3469-441a-9736-d5fce91003f9", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-b8ffb713-61a2-4e7c-a705-37052cba9d6e", "start_time": "2018-04-25T01:26:28.955571", "updated_at": "2018-04-25T01:26:29.414973", "user_id": "fake" } ], "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/7a580cc0-3469-441a-9736-d5fce91003f9/os-instance-actions?limit=1&marker=req-b8ffb713-61a2-4e7c-a705-37052cba9d6e", "rel": "next" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.62/instance-actions-list-with-marker-resp.json0000664000175000017500000000071200000000000032573 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "create", "instance_uuid": "9bde1fd5-8435-45c5-afc1-bedd0605275b", 
"message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-4510fb10-447f-4572-a64d-c2324547d86c", "start_time": "2018-04-25T01:26:33.710291", "updated_at": "2018-04-25T01:26:35.374936", "user_id": "fake" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1424716 nova-21.2.4/doc/api_samples/os-instance-actions/v2.66/0000775000175000017500000000000000000000000022272 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.66/instance-action-get-non-admin-resp.json0000664000175000017500000000136300000000000031651 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "events": [ { "event": "compute_stop_instance", "finish_time": "2018-04-25T01:26:34.784165", "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "result": "Success", "start_time": "2018-04-25T01:26:34.612020" } ], "instance_uuid": "79edaa44-ad4f-4af7-b994-154518c2b927", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-8eb28d4a-db6c-4337-bab8-ce154e9c620e", "start_time": "2018-04-25T01:26:34.388280", "updated_at": "2018-04-25T01:26:34.784165", "user_id": "fake" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.66/instance-action-get-resp.json0000664000175000017500000000147200000000000027774 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "events": [ { "event": "compute_stop_instance", "finish_time": "2018-04-25T01:26:36.790544", "host": "compute", "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "result": "Success", "start_time": "2018-04-25T01:26:36.539271", "traceback": null } ], "instance_uuid": "4bf3473b-d550-4b65-9409-292d44ab14a2", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-0d819d5c-1527-4669-bdf0-ffad31b5105b", "start_time": "2018-04-25T01:26:36.341290", "updated_at": "2018-04-25T01:26:36.790544", "user_id": "admin" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.66/instance-actions-list-resp.json0000664000175000017500000000156300000000000030354 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "15835b6f-1e14-4cfa-9f66-1abea1a1c0d5", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-f04d4b92-6241-42da-b82d-2cedb225c58d", "start_time": "2018-04-25T01:26:36.036697", "updated_at": "2018-04-25T01:26:36.525308", "user_id": "admin" }, { "action": "create", "instance_uuid": "15835b6f-1e14-4cfa-9f66-1abea1a1c0d5", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-d8790618-9bbf-4df0-8af8-fc9e24de29c0", "start_time": "2018-04-25T01:26:33.692125", "updated_at": "2018-04-25T01:26:35.993821", "user_id": "admin" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.66/instance-actions-list-with-changes-before.json0000664000175000017500000000156400000000000033225 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "2150964c-30fe-4214-9547-8822375aa7d0", "message": null, "project_id": 
"6f70656e737461636b20342065766572", "request_id": "req-0c3b2079-0a44-474d-a5b2-7466d4b4c642", "start_time": "2018-04-25T01:26:29.594237", "updated_at": "2018-04-25T01:26:30.065061", "user_id": "admin" }, { "action": "create", "instance_uuid": "15835b6f-1e14-4cfa-9f66-1abea1a1c0d5", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-d8790618-9bbf-4df0-8af8-fc9e24de29c0", "start_time": "2018-04-25T01:26:33.692125", "updated_at": "2018-04-25T01:26:35.993821", "user_id": "admin" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.66/instance-actions-list-with-changes-since.json0000664000175000017500000000071100000000000033055 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "2150964c-30fe-4214-9547-8822375aa7d0", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-0c3b2079-0a44-474d-a5b2-7466d4b4c642", "start_time": "2018-04-25T01:26:29.594237", "updated_at": "2018-04-25T01:26:30.065061", "user_id": "admin" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.66/instance-actions-list-with-limit-resp.json0000664000175000017500000000134000000000000032432 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "ca3d3be5-1a40-427f-9515-f5e181f479d0", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-4dbefbb7-d743-4d42-b0a1-a79cbe256138", "start_time": "2018-04-25T01:26:28.909887", "updated_at": "2018-04-25T01:26:29.400606", "user_id": "admin" } ], "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/ca3d3be5-1a40-427f-9515-f5e181f479d0/os-instance-actions?limit=1&marker=req-4dbefbb7-d743-4d42-b0a1-a79cbe256138", "rel": "next" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.66/instance-actions-list-with-marker-resp.json0000664000175000017500000000071200000000000032577 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "create", "instance_uuid": "9bde1fd5-8435-45c5-afc1-bedd0605275b", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-4510fb10-447f-4572-a64d-c2324547d86c", "start_time": "2018-04-25T01:26:33.710291", "updated_at": "2018-04-25T01:26:35.374936", "user_id": "fake" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1424716 nova-21.2.4/doc/api_samples/os-instance-actions/v2.84/0000775000175000017500000000000000000000000022272 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.84/instance-action-get-non-admin-resp.json0000664000175000017500000000136300000000000031651 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "events": [ { "event": "compute_stop_instance", "finish_time": "2018-04-25T01:26:34.784165", "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "result": "Success", "start_time": "2018-04-25T01:26:34.612020" } ], "instance_uuid": "79edaa44-ad4f-4af7-b994-154518c2b927", "message": null, "project_id": "6f70656e737461636b20342065766572", 
"request_id": "req-8eb28d4a-db6c-4337-bab8-ce154e9c620e", "start_time": "2018-04-25T01:26:34.388280", "updated_at": "2018-04-25T01:26:34.784165", "user_id": "fake" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.84/instance-action-get-resp.json0000664000175000017500000000153400000000000027773 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "events": [ { "event": "compute_stop_instance", "finish_time": "2018-04-25T01:26:36.790544", "host": "compute", "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "result": "Success", "start_time": "2018-04-25T01:26:36.539271", "traceback": null, "details": null } ], "instance_uuid": "4bf3473b-d550-4b65-9409-292d44ab14a2", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-0d819d5c-1527-4669-bdf0-ffad31b5105b", "start_time": "2018-04-25T01:26:36.341290", "updated_at": "2018-04-25T01:26:36.790544", "user_id": "admin" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.84/instance-actions-list-resp.json0000664000175000017500000000156300000000000030354 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "15835b6f-1e14-4cfa-9f66-1abea1a1c0d5", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-f04d4b92-6241-42da-b82d-2cedb225c58d", "start_time": "2018-04-25T01:26:36.036697", "updated_at": "2018-04-25T01:26:36.525308", "user_id": "admin" }, { "action": "create", "instance_uuid": "15835b6f-1e14-4cfa-9f66-1abea1a1c0d5", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-d8790618-9bbf-4df0-8af8-fc9e24de29c0", "start_time": "2018-04-25T01:26:33.692125", "updated_at": "2018-04-25T01:26:35.993821", "user_id": "admin" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.84/instance-actions-list-with-changes-before.json0000664000175000017500000000156400000000000033225 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "2150964c-30fe-4214-9547-8822375aa7d0", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-0c3b2079-0a44-474d-a5b2-7466d4b4c642", "start_time": "2018-04-25T01:26:29.594237", "updated_at": "2018-04-25T01:26:30.065061", "user_id": "admin" }, { "action": "create", "instance_uuid": "15835b6f-1e14-4cfa-9f66-1abea1a1c0d5", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-d8790618-9bbf-4df0-8af8-fc9e24de29c0", "start_time": "2018-04-25T01:26:33.692125", "updated_at": "2018-04-25T01:26:35.993821", "user_id": "admin" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.84/instance-actions-list-with-changes-since.json0000664000175000017500000000071100000000000033055 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "2150964c-30fe-4214-9547-8822375aa7d0", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-0c3b2079-0a44-474d-a5b2-7466d4b4c642", "start_time": "2018-04-25T01:26:29.594237", "updated_at": "2018-04-25T01:26:30.065061", "user_id": "admin" } ] 
}././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.84/instance-actions-list-with-limit-resp.json0000664000175000017500000000134000000000000032432 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "ca3d3be5-1a40-427f-9515-f5e181f479d0", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-4dbefbb7-d743-4d42-b0a1-a79cbe256138", "start_time": "2018-04-25T01:26:28.909887", "updated_at": "2018-04-25T01:26:29.400606", "user_id": "admin" } ], "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/ca3d3be5-1a40-427f-9515-f5e181f479d0/os-instance-actions?limit=1&marker=req-4dbefbb7-d743-4d42-b0a1-a79cbe256138", "rel": "next" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-actions/v2.84/instance-actions-list-with-marker-resp.json0000664000175000017500000000071200000000000032577 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "create", "instance_uuid": "9bde1fd5-8435-45c5-afc1-bedd0605275b", "message": null, "project_id": "6f70656e737461636b20342065766572", "request_id": "req-4510fb10-447f-4572-a64d-c2324547d86c", "start_time": "2018-04-25T01:26:33.710291", "updated_at": "2018-04-25T01:26:35.374936", "user_id": "fake" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1424716 nova-21.2.4/doc/api_samples/os-instance-usage-audit-log/0000775000175000017500000000000000000000000023040 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-usage-audit-log/inst-usage-audit-log-index-get-resp.json0000664000175000017500000000225300000000000032530 0ustar00zuulzuul00000000000000{ "instance_usage_audit_logs": { "hosts_not_run": [ "samplehost3" ], "log": { "samplehost0": { "errors": 1, "instances": 1, "message": "Instance usage audit ran for host samplehost0, 1 instances in 0.01 seconds.", "state": "DONE" }, "samplehost1": { "errors": 1, "instances": 2, "message": "Instance usage audit ran for host samplehost1, 2 instances in 0.01 seconds.", "state": "DONE" }, "samplehost2": { "errors": 1, "instances": 3, "message": "Instance usage audit ran for host samplehost2, 3 instances in 0.01 seconds.", "state": "DONE" } }, "num_hosts": 4, "num_hosts_done": 3, "num_hosts_not_run": 1, "num_hosts_running": 0, "overall_status": "3 of 4 hosts done. 
3 errors.", "period_beginning": "2012-06-01 00:00:00", "period_ending": "2012-07-01 00:00:00", "total_errors": 3, "total_instances": 6 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-instance-usage-audit-log/inst-usage-audit-log-show-get-resp.json0000664000175000017500000000225200000000000032400 0ustar00zuulzuul00000000000000{ "instance_usage_audit_log": { "hosts_not_run": [ "samplehost3" ], "log": { "samplehost0": { "errors": 1, "instances": 1, "message": "Instance usage audit ran for host samplehost0, 1 instances in 0.01 seconds.", "state": "DONE" }, "samplehost1": { "errors": 1, "instances": 2, "message": "Instance usage audit ran for host samplehost1, 2 instances in 0.01 seconds.", "state": "DONE" }, "samplehost2": { "errors": 1, "instances": 3, "message": "Instance usage audit ran for host samplehost2, 3 instances in 0.01 seconds.", "state": "DONE" } }, "num_hosts": 4, "num_hosts_done": 3, "num_hosts_not_run": 1, "num_hosts_running": 0, "overall_status": "3 of 4 hosts done. 3 errors.", "period_beginning": "2012-06-01 00:00:00", "period_ending": "2012-07-01 00:00:00", "total_errors": 3, "total_instances": 6 } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1424716 nova-21.2.4/doc/api_samples/os-keypairs/0000775000175000017500000000000000000000000020076 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/keypairs-get-resp.json0000664000175000017500000000140000000000000024337 0ustar00zuulzuul00000000000000{ "keypair": { "fingerprint": "44:fe:29:6e:23:14:b9:53:5b:65:82:58:1c:fe:5a:c3", "name": "keypair-6638abdb-c4e8-407c-ba88-c8dd7cc3c4f1", "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC1HTrHCbb9NawNLSV8N6tSa8i637+EC2dA+lsdHHfQlT54t+N0nHhJPlKWDLhc579j87vp6RDFriFJ/smsTnDnf64O12z0kBaJpJPH2zXrBkZFK6q2rmxydURzX/z0yLSCP77SFJ0fdXWH2hMsAusflGyryHGX20n+mZK6mDrxVzGxEz228dwQ5G7Az5OoZDWygH2pqPvKjkifRw0jwUKf3BbkP0QvANACOk26cv16mNFpFJfI1N3OC5lUsZQtKGR01ptJoWijYKccqhkAKuo902tg/qup58J5kflNm7I61sy1mJon6SGqNUSfoQagqtBH6vd/tU1jnlwZ03uUroAL Generated-by-Nova\n", "user_id": "fake", "deleted": false, "created_at": "2014-05-07T12:06:13.681238", "updated_at": null, "deleted_at": null, "id": 1 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/keypairs-import-post-req.json0000664000175000017500000000053100000000000025677 0ustar00zuulzuul00000000000000{ "keypair": { "name": "keypair-d20a3d59-9433-4b79-8726-20b431d89c78", "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDx8nkQv/zgGgB4rMYmIf+6A4l6Rr+o/6lHBQdW5aYd44bd8JttDCE/F/pNRr0lRE+PiqSPO8nDPHw0010JeMH9gYgnnFlyY3/OcJ02RhIPyyxYpv9FhY+2YiUkpwFOcLImyrxEsYXpD/0d3ac30bNH6Sw9JD9UZHYcpSxsIbECHw== Generated-by-Nova" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/keypairs-import-post-resp.json0000664000175000017500000000067600000000000026073 0ustar00zuulzuul00000000000000{ "keypair": { "fingerprint": "1e:2c:9b:56:79:4b:45:77:f9:ca:7a:98:2c:b0:d5:3c", "name": "keypair-803a1926-af78-4b05-902a-1d6f7a8d9d3e", "public_key": "ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAAAgQDx8nkQv/zgGgB4rMYmIf+6A4l6Rr+o/6lHBQdW5aYd44bd8JttDCE/F/pNRr0lRE+PiqSPO8nDPHw0010JeMH9gYgnnFlyY3/OcJ02RhIPyyxYpv9FhY+2YiUkpwFOcLImyrxEsYXpD/0d3ac30bNH6Sw9JD9UZHYcpSxsIbECHw== Generated-by-Nova", "user_id": "fake" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/keypairs-list-resp.json0000664000175000017500000000124100000000000024536 0ustar00zuulzuul00000000000000{ "keypairs": [ { "keypair": { "fingerprint": "7e:eb:ab:24:ba:d1:e1:88:ae:9a:fb:66:53:df:d3:bd", "name": "keypair-50ca852e-273f-4cdc-8949-45feba200837", "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCkF3MX59OrlBs3dH5CU7lNmvpbrgZxSpyGjlnE8Flkirnc/Up22lpjznoxqeoTAwTW034k7Dz6aYIrZGmQwe2TkE084yqvlj45Dkyoj95fW/sZacm0cZNuL69EObEGHdprfGJQajrpz22NQoCD8TFB8Wv+8om9NH9Le6s+WPe98WC77KLw8qgfQsbIey+JawPWl4O67ZdL5xrypuRjfIPWjgy/VH85IXg/Z/GONZ2nxHgSShMkwqSFECAC5L3PHB+0+/12M/iikdatFSVGjpuHvkLOs3oe7m6HlOfluSJ85BzLWBbvva93qkGmLg4ZAc8rPh2O+YIsBUHNLLMM/oQp Generated-by-Nova\n" } } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/keypairs-post-req.json0000664000175000017500000000013100000000000024363 0ustar00zuulzuul00000000000000{ "keypair": { "name": "keypair-ab9ff2e6-a6d7-4915-a241-044c369c07f9" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/keypairs-post-resp.json0000664000175000017500000000445500000000000024562 0ustar00zuulzuul00000000000000{ "keypair": { "fingerprint": "7e:eb:ab:24:ba:d1:e1:88:ae:9a:fb:66:53:df:d3:bd", "name": "keypair-50ca852e-273f-4cdc-8949-45feba200837", "private_key": "-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEApBdzF+fTq5QbN3R+QlO5TZr6W64GcUqcho5ZxPBZZIq53P1K\ndtpaY856ManqEwME1tN+JOw8+mmCK2RpkMHtk5BNPOMqr5Y+OQ5MqI/eX1v7GWnJ\ntHGTbi+vRDmxBh3aa3xiUGo66c9tjUKAg/ExQfFr/vKJvTR/S3urPlj3vfFgu+yi\n8PKoH0LGyHsviWsD1peDuu2XS+ca8qbkY3yD1o4Mv1R/OSF4P2fxjjWdp8R4EkoT\nJMKkhRAgAuS9zxwftPv9djP4opHWrRUlRo6bh75CzrN6Hu5uh5Tn5bkifOQcy1gW\n772vd6pBpi4OGQHPKz4djvmCLAVBzSyzDP6EKQIDAQABAoIBAQCB+tU/ZXKlIe+h\nMNTmoz1QfOe+AY625Rwx9cakGqMk4kKyC62VkgcxshfXCToSjzyhEuyEQOFYloT2\n7FY2xXb0gcS861Efv0pQlcQhbbz/GnQ/wC13ktPu3zTdPTm9l54xsFiMTGmYVaf4\n0mnMmhyjmKIsVGDJEDGZUD/oZj7wJGOFha5M4FZrZlJIrEZC0rGGlcC0kGF2no6B\nj1Mu7HjyK3pTKf4dlp+jeRikUF5Pct+qT+rcv2rZ3fl3inxtlLEwZeFPbp/njf/U\nIGxFzZsuLmiFlsJar6M5nEckTB3p25maWWaR8/0jvJRgsPnuoUrUoGDq87DMKCdk\nlw6by9fRAoGBANhnS9ko7Of+ntqIFR7xOG9p/oPATztgHkFxe4GbQ0leaDRTx3vE\ndQmUCnn24xtyVECaI9a4IV+LP1npw8niWUJ4pjgdAlkF4cCTu9sN+cBO15SfdACI\nzD1DaaHmpFCAWlpTo68VWlvWll6i2ncCkRJR1+q/C/yQz7asvl4AakElAoGBAMId\nxqMT2Sy9xLuHsrAoMUvBOkwaMYZH+IAb4DvUDjVIiKWjmonrmopS5Lpb+ALBKqZe\neVfD6HwWQqGwCFItToaEkZvrNfTapoNCHWWg001D49765UV5lMrArDbM1vXtFfM4\nDRYM6+Y6o/6QH8EBgXtyBxcYthIDBM3wBJa67xG1AoGAKTm8fFlMkIG0N4N3Kpbf\nnnH915GaRoBwIx2AXtd6QQ7oIRfYx95MQY/fUw7SgxcLr+btbulTCkWXwwRClUI2\nqPAdElGMcfMp56r9PaTy8EzUyu55heSJrB4ckIhEw0VAcTa/1wnlVduSd+LkZYmq\no2fOD11n5iycNXvBJF1F4LUCgYAMaRbwCi7SW3eefbiA5rDwJPRzNSGBckyC9EVL\nzezynyaNYH5a3wNMYKxa9dJPasYtSND9OXs9o7ay26xMhLUGiKc+jrUuaGRI9Asp\nGjUoNXT2JphN7s4CgHsCLep4YqYKnMTJah4S5CDj/5boIg6DM/EcGupZEHRYLkY8\n1MrAGQKBgQCi9yeC39ctLUNn+Ix604gttWWChdt3ozufTZ7HybJOSRA9Gh3iD5gm\nzlz0xqpGShKpOY2k+ftvja0poMdGeJLt84P3r2q01IgI7w0LmOj5m0W10dHysH27\nBWpCnHdBJMxnBsMRPoM4MKkmKWD9l5PSTCTWtkIpsyuDCko6D9UwZA==\n-----END RSA PRIVATE KEY-----\n", "public_key": "ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQCkF3MX59OrlBs3dH5CU7lNmvpbrgZxSpyGjlnE8Flkirnc/Up22lpjznoxqeoTAwTW034k7Dz6aYIrZGmQwe2TkE084yqvlj45Dkyoj95fW/sZacm0cZNuL69EObEGHdprfGJQajrpz22NQoCD8TFB8Wv+8om9NH9Le6s+WPe98WC77KLw8qgfQsbIey+JawPWl4O67ZdL5xrypuRjfIPWjgy/VH85IXg/Z/GONZ2nxHgSShMkwqSFECAC5L3PHB+0+/12M/iikdatFSVGjpuHvkLOs3oe7m6HlOfluSJ85BzLWBbvva93qkGmLg4ZAc8rPh2O+YIsBUHNLLMM/oQp Generated-by-Nova\n", "user_id": "fake" } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1464715 nova-21.2.4/doc/api_samples/os-keypairs/v2.10/0000775000175000017500000000000000000000000020644 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/v2.10/keypairs-get-resp.json0000664000175000017500000000142700000000000025116 0ustar00zuulzuul00000000000000{ "keypair": { "fingerprint": "44:fe:29:6e:23:14:b9:53:5b:65:82:58:1c:fe:5a:c3", "name": "keypair-6638abdb-c4e8-407c-ba88-c8dd7cc3c4f1", "type": "ssh", "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC1HTrHCbb9NawNLSV8N6tSa8i637+EC2dA+lsdHHfQlT54t+N0nHhJPlKWDLhc579j87vp6RDFriFJ/smsTnDnf64O12z0kBaJpJPH2zXrBkZFK6q2rmxydURzX/z0yLSCP77SFJ0fdXWH2hMsAusflGyryHGX20n+mZK6mDrxVzGxEz228dwQ5G7Az5OoZDWygH2pqPvKjkifRw0jwUKf3BbkP0QvANACOk26cv16mNFpFJfI1N3OC5lUsZQtKGR01ptJoWijYKccqhkAKuo902tg/qup58J5kflNm7I61sy1mJon6SGqNUSfoQagqtBH6vd/tU1jnlwZ03uUroAL Generated-by-Nova\n", "user_id": "fake", "deleted": false, "created_at": "2014-05-07T12:06:13.681238", "updated_at": null, "deleted_at": null, "id": 1 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/v2.10/keypairs-import-post-req.json0000664000175000017500000000061400000000000026447 0ustar00zuulzuul00000000000000{ "keypair": { "name": "keypair-d20a3d59-9433-4b79-8726-20b431d89c78", "type": "ssh", "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDx8nkQv/zgGgB4rMYmIf+6A4l6Rr+o/6lHBQdW5aYd44bd8JttDCE/F/pNRr0lRE+PiqSPO8nDPHw0010JeMH9gYgnnFlyY3/OcJ02RhIPyyxYpv9FhY+2YiUkpwFOcLImyrxEsYXpD/0d3ac30bNH6Sw9JD9UZHYcpSxsIbECHw== Generated-by-Nova", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/v2.10/keypairs-import-post-resp.json0000664000175000017500000000072500000000000026634 0ustar00zuulzuul00000000000000{ "keypair": { "fingerprint": "1e:2c:9b:56:79:4b:45:77:f9:ca:7a:98:2c:b0:d5:3c", "name": "keypair-803a1926-af78-4b05-902a-1d6f7a8d9d3e", "type": "ssh", "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDx8nkQv/zgGgB4rMYmIf+6A4l6Rr+o/6lHBQdW5aYd44bd8JttDCE/F/pNRr0lRE+PiqSPO8nDPHw0010JeMH9gYgnnFlyY3/OcJ02RhIPyyxYpv9FhY+2YiUkpwFOcLImyrxEsYXpD/0d3ac30bNH6Sw9JD9UZHYcpSxsIbECHw== Generated-by-Nova", "user_id": "fake" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/v2.10/keypairs-list-resp.json0000664000175000017500000000130000000000000025300 0ustar00zuulzuul00000000000000{ "keypairs": [ { "keypair": { "fingerprint": "7e:eb:ab:24:ba:d1:e1:88:ae:9a:fb:66:53:df:d3:bd", "name": "keypair-50ca852e-273f-4cdc-8949-45feba200837", "type": "ssh", "public_key": "ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQCkF3MX59OrlBs3dH5CU7lNmvpbrgZxSpyGjlnE8Flkirnc/Up22lpjznoxqeoTAwTW034k7Dz6aYIrZGmQwe2TkE084yqvlj45Dkyoj95fW/sZacm0cZNuL69EObEGHdprfGJQajrpz22NQoCD8TFB8Wv+8om9NH9Le6s+WPe98WC77KLw8qgfQsbIey+JawPWl4O67ZdL5xrypuRjfIPWjgy/VH85IXg/Z/GONZ2nxHgSShMkwqSFECAC5L3PHB+0+/12M/iikdatFSVGjpuHvkLOs3oe7m6HlOfluSJ85BzLWBbvva93qkGmLg4ZAc8rPh2O+YIsBUHNLLMM/oQp Generated-by-Nova\n" } } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/v2.10/keypairs-post-req.json0000664000175000017500000000021400000000000025133 0ustar00zuulzuul00000000000000{ "keypair": { "name": "keypair-ab9ff2e6-a6d7-4915-a241-044c369c07f9", "type": "ssh", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/v2.10/keypairs-post-resp.json0000664000175000017500000000450500000000000025324 0ustar00zuulzuul00000000000000{ "keypair": { "fingerprint": "7e:eb:ab:24:ba:d1:e1:88:ae:9a:fb:66:53:df:d3:bd", "name": "keypair-ab9ff2e6-a6d7-4915-a241-044c369c07f9", "type": "ssh", "private_key": "-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEApBdzF+fTq5QbN3R+QlO5TZr6W64GcUqcho5ZxPBZZIq53P1K\ndtpaY856ManqEwME1tN+JOw8+mmCK2RpkMHtk5BNPOMqr5Y+OQ5MqI/eX1v7GWnJ\ntHGTbi+vRDmxBh3aa3xiUGo66c9tjUKAg/ExQfFr/vKJvTR/S3urPlj3vfFgu+yi\n8PKoH0LGyHsviWsD1peDuu2XS+ca8qbkY3yD1o4Mv1R/OSF4P2fxjjWdp8R4EkoT\nJMKkhRAgAuS9zxwftPv9djP4opHWrRUlRo6bh75CzrN6Hu5uh5Tn5bkifOQcy1gW\n772vd6pBpi4OGQHPKz4djvmCLAVBzSyzDP6EKQIDAQABAoIBAQCB+tU/ZXKlIe+h\nMNTmoz1QfOe+AY625Rwx9cakGqMk4kKyC62VkgcxshfXCToSjzyhEuyEQOFYloT2\n7FY2xXb0gcS861Efv0pQlcQhbbz/GnQ/wC13ktPu3zTdPTm9l54xsFiMTGmYVaf4\n0mnMmhyjmKIsVGDJEDGZUD/oZj7wJGOFha5M4FZrZlJIrEZC0rGGlcC0kGF2no6B\nj1Mu7HjyK3pTKf4dlp+jeRikUF5Pct+qT+rcv2rZ3fl3inxtlLEwZeFPbp/njf/U\nIGxFzZsuLmiFlsJar6M5nEckTB3p25maWWaR8/0jvJRgsPnuoUrUoGDq87DMKCdk\nlw6by9fRAoGBANhnS9ko7Of+ntqIFR7xOG9p/oPATztgHkFxe4GbQ0leaDRTx3vE\ndQmUCnn24xtyVECaI9a4IV+LP1npw8niWUJ4pjgdAlkF4cCTu9sN+cBO15SfdACI\nzD1DaaHmpFCAWlpTo68VWlvWll6i2ncCkRJR1+q/C/yQz7asvl4AakElAoGBAMId\nxqMT2Sy9xLuHsrAoMUvBOkwaMYZH+IAb4DvUDjVIiKWjmonrmopS5Lpb+ALBKqZe\neVfD6HwWQqGwCFItToaEkZvrNfTapoNCHWWg001D49765UV5lMrArDbM1vXtFfM4\nDRYM6+Y6o/6QH8EBgXtyBxcYthIDBM3wBJa67xG1AoGAKTm8fFlMkIG0N4N3Kpbf\nnnH915GaRoBwIx2AXtd6QQ7oIRfYx95MQY/fUw7SgxcLr+btbulTCkWXwwRClUI2\nqPAdElGMcfMp56r9PaTy8EzUyu55heSJrB4ckIhEw0VAcTa/1wnlVduSd+LkZYmq\no2fOD11n5iycNXvBJF1F4LUCgYAMaRbwCi7SW3eefbiA5rDwJPRzNSGBckyC9EVL\nzezynyaNYH5a3wNMYKxa9dJPasYtSND9OXs9o7ay26xMhLUGiKc+jrUuaGRI9Asp\nGjUoNXT2JphN7s4CgHsCLep4YqYKnMTJah4S5CDj/5boIg6DM/EcGupZEHRYLkY8\n1MrAGQKBgQCi9yeC39ctLUNn+Ix604gttWWChdt3ozufTZ7HybJOSRA9Gh3iD5gm\nzlz0xqpGShKpOY2k+ftvja0poMdGeJLt84P3r2q01IgI7w0LmOj5m0W10dHysH27\nBWpCnHdBJMxnBsMRPoM4MKkmKWD9l5PSTCTWtkIpsyuDCko6D9UwZA==\n-----END RSA PRIVATE KEY-----\n", "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCkF3MX59OrlBs3dH5CU7lNmvpbrgZxSpyGjlnE8Flkirnc/Up22lpjznoxqeoTAwTW034k7Dz6aYIrZGmQwe2TkE084yqvlj45Dkyoj95fW/sZacm0cZNuL69EObEGHdprfGJQajrpz22NQoCD8TFB8Wv+8om9NH9Le6s+WPe98WC77KLw8qgfQsbIey+JawPWl4O67ZdL5xrypuRjfIPWjgy/VH85IXg/Z/GONZ2nxHgSShMkwqSFECAC5L3PHB+0+/12M/iikdatFSVGjpuHvkLOs3oe7m6HlOfluSJ85BzLWBbvva93qkGmLg4ZAc8rPh2O+YIsBUHNLLMM/oQp Generated-by-Nova\n", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1464715 nova-21.2.4/doc/api_samples/os-keypairs/v2.2/0000775000175000017500000000000000000000000020565 
5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/v2.2/keypairs-get-resp.json0000664000175000017500000000142700000000000025037 0ustar00zuulzuul00000000000000{ "keypair": { "fingerprint": "44:fe:29:6e:23:14:b9:53:5b:65:82:58:1c:fe:5a:c3", "name": "keypair-6638abdb-c4e8-407c-ba88-c8dd7cc3c4f1", "type": "ssh", "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC1HTrHCbb9NawNLSV8N6tSa8i637+EC2dA+lsdHHfQlT54t+N0nHhJPlKWDLhc579j87vp6RDFriFJ/smsTnDnf64O12z0kBaJpJPH2zXrBkZFK6q2rmxydURzX/z0yLSCP77SFJ0fdXWH2hMsAusflGyryHGX20n+mZK6mDrxVzGxEz228dwQ5G7Az5OoZDWygH2pqPvKjkifRw0jwUKf3BbkP0QvANACOk26cv16mNFpFJfI1N3OC5lUsZQtKGR01ptJoWijYKccqhkAKuo902tg/qup58J5kflNm7I61sy1mJon6SGqNUSfoQagqtBH6vd/tU1jnlwZ03uUroAL Generated-by-Nova\n", "user_id": "fake", "deleted": false, "created_at": "2014-05-07T12:06:13.681238", "updated_at": null, "deleted_at": null, "id": 1 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/v2.2/keypairs-import-post-req.json0000664000175000017500000000056000000000000026370 0ustar00zuulzuul00000000000000{ "keypair": { "name": "keypair-d20a3d59-9433-4b79-8726-20b431d89c78", "type": "ssh", "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDx8nkQv/zgGgB4rMYmIf+6A4l6Rr+o/6lHBQdW5aYd44bd8JttDCE/F/pNRr0lRE+PiqSPO8nDPHw0010JeMH9gYgnnFlyY3/OcJ02RhIPyyxYpv9FhY+2YiUkpwFOcLImyrxEsYXpD/0d3ac30bNH6Sw9JD9UZHYcpSxsIbECHw== Generated-by-Nova" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/v2.2/keypairs-import-post-resp.json0000664000175000017500000000072500000000000026555 0ustar00zuulzuul00000000000000{ "keypair": { "fingerprint": "1e:2c:9b:56:79:4b:45:77:f9:ca:7a:98:2c:b0:d5:3c", "name": "keypair-803a1926-af78-4b05-902a-1d6f7a8d9d3e", "type": "ssh", "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDx8nkQv/zgGgB4rMYmIf+6A4l6Rr+o/6lHBQdW5aYd44bd8JttDCE/F/pNRr0lRE+PiqSPO8nDPHw0010JeMH9gYgnnFlyY3/OcJ02RhIPyyxYpv9FhY+2YiUkpwFOcLImyrxEsYXpD/0d3ac30bNH6Sw9JD9UZHYcpSxsIbECHw== Generated-by-Nova", "user_id": "fake" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/v2.2/keypairs-list-resp.json0000664000175000017500000000130000000000000025221 0ustar00zuulzuul00000000000000{ "keypairs": [ { "keypair": { "fingerprint": "7e:eb:ab:24:ba:d1:e1:88:ae:9a:fb:66:53:df:d3:bd", "name": "keypair-50ca852e-273f-4cdc-8949-45feba200837", "type": "ssh", "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCkF3MX59OrlBs3dH5CU7lNmvpbrgZxSpyGjlnE8Flkirnc/Up22lpjznoxqeoTAwTW034k7Dz6aYIrZGmQwe2TkE084yqvlj45Dkyoj95fW/sZacm0cZNuL69EObEGHdprfGJQajrpz22NQoCD8TFB8Wv+8om9NH9Le6s+WPe98WC77KLw8qgfQsbIey+JawPWl4O67ZdL5xrypuRjfIPWjgy/VH85IXg/Z/GONZ2nxHgSShMkwqSFECAC5L3PHB+0+/12M/iikdatFSVGjpuHvkLOs3oe7m6HlOfluSJ85BzLWBbvva93qkGmLg4ZAc8rPh2O+YIsBUHNLLMM/oQp Generated-by-Nova\n" } } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/v2.2/keypairs-post-req.json0000664000175000017500000000016000000000000025054 0ustar00zuulzuul00000000000000{ "keypair": { "name": "keypair-ab9ff2e6-a6d7-4915-a241-044c369c07f9", "type": "ssh" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 
mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/v2.2/keypairs-post-resp.json0000664000175000017500000000450400000000000025244 0ustar00zuulzuul00000000000000{ "keypair": { "fingerprint": "7e:eb:ab:24:ba:d1:e1:88:ae:9a:fb:66:53:df:d3:bd", "name": "keypair-50ca852e-273f-4cdc-8949-45feba200837", "type": "ssh", "private_key": "-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEApBdzF+fTq5QbN3R+QlO5TZr6W64GcUqcho5ZxPBZZIq53P1K\ndtpaY856ManqEwME1tN+JOw8+mmCK2RpkMHtk5BNPOMqr5Y+OQ5MqI/eX1v7GWnJ\ntHGTbi+vRDmxBh3aa3xiUGo66c9tjUKAg/ExQfFr/vKJvTR/S3urPlj3vfFgu+yi\n8PKoH0LGyHsviWsD1peDuu2XS+ca8qbkY3yD1o4Mv1R/OSF4P2fxjjWdp8R4EkoT\nJMKkhRAgAuS9zxwftPv9djP4opHWrRUlRo6bh75CzrN6Hu5uh5Tn5bkifOQcy1gW\n772vd6pBpi4OGQHPKz4djvmCLAVBzSyzDP6EKQIDAQABAoIBAQCB+tU/ZXKlIe+h\nMNTmoz1QfOe+AY625Rwx9cakGqMk4kKyC62VkgcxshfXCToSjzyhEuyEQOFYloT2\n7FY2xXb0gcS861Efv0pQlcQhbbz/GnQ/wC13ktPu3zTdPTm9l54xsFiMTGmYVaf4\n0mnMmhyjmKIsVGDJEDGZUD/oZj7wJGOFha5M4FZrZlJIrEZC0rGGlcC0kGF2no6B\nj1Mu7HjyK3pTKf4dlp+jeRikUF5Pct+qT+rcv2rZ3fl3inxtlLEwZeFPbp/njf/U\nIGxFzZsuLmiFlsJar6M5nEckTB3p25maWWaR8/0jvJRgsPnuoUrUoGDq87DMKCdk\nlw6by9fRAoGBANhnS9ko7Of+ntqIFR7xOG9p/oPATztgHkFxe4GbQ0leaDRTx3vE\ndQmUCnn24xtyVECaI9a4IV+LP1npw8niWUJ4pjgdAlkF4cCTu9sN+cBO15SfdACI\nzD1DaaHmpFCAWlpTo68VWlvWll6i2ncCkRJR1+q/C/yQz7asvl4AakElAoGBAMId\nxqMT2Sy9xLuHsrAoMUvBOkwaMYZH+IAb4DvUDjVIiKWjmonrmopS5Lpb+ALBKqZe\neVfD6HwWQqGwCFItToaEkZvrNfTapoNCHWWg001D49765UV5lMrArDbM1vXtFfM4\nDRYM6+Y6o/6QH8EBgXtyBxcYthIDBM3wBJa67xG1AoGAKTm8fFlMkIG0N4N3Kpbf\nnnH915GaRoBwIx2AXtd6QQ7oIRfYx95MQY/fUw7SgxcLr+btbulTCkWXwwRClUI2\nqPAdElGMcfMp56r9PaTy8EzUyu55heSJrB4ckIhEw0VAcTa/1wnlVduSd+LkZYmq\no2fOD11n5iycNXvBJF1F4LUCgYAMaRbwCi7SW3eefbiA5rDwJPRzNSGBckyC9EVL\nzezynyaNYH5a3wNMYKxa9dJPasYtSND9OXs9o7ay26xMhLUGiKc+jrUuaGRI9Asp\nGjUoNXT2JphN7s4CgHsCLep4YqYKnMTJah4S5CDj/5boIg6DM/EcGupZEHRYLkY8\n1MrAGQKBgQCi9yeC39ctLUNn+Ix604gttWWChdt3ozufTZ7HybJOSRA9Gh3iD5gm\nzlz0xqpGShKpOY2k+ftvja0poMdGeJLt84P3r2q01IgI7w0LmOj5m0W10dHysH27\nBWpCnHdBJMxnBsMRPoM4MKkmKWD9l5PSTCTWtkIpsyuDCko6D9UwZA==\n-----END RSA PRIVATE KEY-----\n", "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCkF3MX59OrlBs3dH5CU7lNmvpbrgZxSpyGjlnE8Flkirnc/Up22lpjznoxqeoTAwTW034k7Dz6aYIrZGmQwe2TkE084yqvlj45Dkyoj95fW/sZacm0cZNuL69EObEGHdprfGJQajrpz22NQoCD8TFB8Wv+8om9NH9Le6s+WPe98WC77KLw8qgfQsbIey+JawPWl4O67ZdL5xrypuRjfIPWjgy/VH85IXg/Z/GONZ2nxHgSShMkwqSFECAC5L3PHB+0+/12M/iikdatFSVGjpuHvkLOs3oe7m6HlOfluSJ85BzLWBbvva93qkGmLg4ZAc8rPh2O+YIsBUHNLLMM/oQp Generated-by-Nova\n", "user_id": "fake" } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1464715 nova-21.2.4/doc/api_samples/os-keypairs/v2.35/0000775000175000017500000000000000000000000020653 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/v2.35/keypairs-list-resp.json0000664000175000017500000000166000000000000025320 0ustar00zuulzuul00000000000000{ "keypairs": [ { "keypair": { "fingerprint": "7e:eb:ab:24:ba:d1:e1:88:ae:9a:fb:66:53:df:d3:bd", "name": "keypair-5d935425-31d5-48a7-a0f1-e76e9813f2c3", "type": "ssh", "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCkF3MX59OrlBs3dH5CU7lNmvpbrgZxSpyGjlnE8Flkirnc/Up22lpjznoxqeoTAwTW034k7Dz6aYIrZGmQwe2TkE084yqvlj45Dkyoj95fW/sZacm0cZNuL69EObEGHdprfGJQajrpz22NQoCD8TFB8Wv+8om9NH9Le6s+WPe98WC77KLw8qgfQsbIey+JawPWl4O67ZdL5xrypuRjfIPWjgy/VH85IXg/Z/GONZ2nxHgSShMkwqSFECAC5L3PHB+0+/12M/iikdatFSVGjpuHvkLOs3oe7m6HlOfluSJ85BzLWBbvva93qkGmLg4ZAc8rPh2O+YIsBUHNLLMM/oQp 
Generated-by-Nova\n" } } ], "keypairs_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/os-keypairs?limit=1&marker=keypair-5d935425-31d5-48a7-a0f1-e76e9813f2c3", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/v2.35/keypairs-list-user1-resp.json0000664000175000017500000000130000000000000026344 0ustar00zuulzuul00000000000000{ "keypairs": [ { "keypair": { "fingerprint": "7e:eb:ab:24:ba:d1:e1:88:ae:9a:fb:66:53:df:d3:bd", "name": "keypair-5d935425-31d5-48a7-a0f1-e76e9813f2c3", "type": "ssh", "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCkF3MX59OrlBs3dH5CU7lNmvpbrgZxSpyGjlnE8Flkirnc/Up22lpjznoxqeoTAwTW034k7Dz6aYIrZGmQwe2TkE084yqvlj45Dkyoj95fW/sZacm0cZNuL69EObEGHdprfGJQajrpz22NQoCD8TFB8Wv+8om9NH9Le6s+WPe98WC77KLw8qgfQsbIey+JawPWl4O67ZdL5xrypuRjfIPWjgy/VH85IXg/Z/GONZ2nxHgSShMkwqSFECAC5L3PHB+0+/12M/iikdatFSVGjpuHvkLOs3oe7m6HlOfluSJ85BzLWBbvva93qkGmLg4ZAc8rPh2O+YIsBUHNLLMM/oQp Generated-by-Nova\n" } } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/v2.35/keypairs-list-user2-resp.json0000664000175000017500000000167600000000000026365 0ustar00zuulzuul00000000000000{ "keypairs": [ { "keypair": { "fingerprint": "7e:eb:ab:24:ba:d1:e1:88:ae:9a:fb:66:53:df:d3:bd", "name": "keypair-5d935425-31d5-48a7-a0f1-e76e9813f2c3", "type": "ssh", "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCkF3MX59OrlBs3dH5CU7lNmvpbrgZxSpyGjlnE8Flkirnc/Up22lpjznoxqeoTAwTW034k7Dz6aYIrZGmQwe2TkE084yqvlj45Dkyoj95fW/sZacm0cZNuL69EObEGHdprfGJQajrpz22NQoCD8TFB8Wv+8om9NH9Le6s+WPe98WC77KLw8qgfQsbIey+JawPWl4O67ZdL5xrypuRjfIPWjgy/VH85IXg/Z/GONZ2nxHgSShMkwqSFECAC5L3PHB+0+/12M/iikdatFSVGjpuHvkLOs3oe7m6HlOfluSJ85BzLWBbvva93qkGmLg4ZAc8rPh2O+YIsBUHNLLMM/oQp Generated-by-Nova\n" } } ], "keypairs_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/os-keypairs?limit=1&marker=keypair-5d935425-31d5-48a7-a0f1-e76e9813f2c3&user_id=user2", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/v2.35/keypairs-post-req.json0000664000175000017500000000021400000000000025142 0ustar00zuulzuul00000000000000{ "keypair": { "name": "keypair-ab9ff2e6-a6d7-4915-a241-044c369c07f9", "type": "ssh", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-keypairs/v2.35/keypairs-post-resp.json0000664000175000017500000000450500000000000025333 0ustar00zuulzuul00000000000000{ "keypair": { "fingerprint": "7e:eb:ab:24:ba:d1:e1:88:ae:9a:fb:66:53:df:d3:bd", "name": "keypair-ab9ff2e6-a6d7-4915-a241-044c369c07f9", "type": "ssh", "private_key": "-----BEGIN RSA PRIVATE 
KEY-----\nMIIEpAIBAAKCAQEApBdzF+fTq5QbN3R+QlO5TZr6W64GcUqcho5ZxPBZZIq53P1K\ndtpaY856ManqEwME1tN+JOw8+mmCK2RpkMHtk5BNPOMqr5Y+OQ5MqI/eX1v7GWnJ\ntHGTbi+vRDmxBh3aa3xiUGo66c9tjUKAg/ExQfFr/vKJvTR/S3urPlj3vfFgu+yi\n8PKoH0LGyHsviWsD1peDuu2XS+ca8qbkY3yD1o4Mv1R/OSF4P2fxjjWdp8R4EkoT\nJMKkhRAgAuS9zxwftPv9djP4opHWrRUlRo6bh75CzrN6Hu5uh5Tn5bkifOQcy1gW\n772vd6pBpi4OGQHPKz4djvmCLAVBzSyzDP6EKQIDAQABAoIBAQCB+tU/ZXKlIe+h\nMNTmoz1QfOe+AY625Rwx9cakGqMk4kKyC62VkgcxshfXCToSjzyhEuyEQOFYloT2\n7FY2xXb0gcS861Efv0pQlcQhbbz/GnQ/wC13ktPu3zTdPTm9l54xsFiMTGmYVaf4\n0mnMmhyjmKIsVGDJEDGZUD/oZj7wJGOFha5M4FZrZlJIrEZC0rGGlcC0kGF2no6B\nj1Mu7HjyK3pTKf4dlp+jeRikUF5Pct+qT+rcv2rZ3fl3inxtlLEwZeFPbp/njf/U\nIGxFzZsuLmiFlsJar6M5nEckTB3p25maWWaR8/0jvJRgsPnuoUrUoGDq87DMKCdk\nlw6by9fRAoGBANhnS9ko7Of+ntqIFR7xOG9p/oPATztgHkFxe4GbQ0leaDRTx3vE\ndQmUCnn24xtyVECaI9a4IV+LP1npw8niWUJ4pjgdAlkF4cCTu9sN+cBO15SfdACI\nzD1DaaHmpFCAWlpTo68VWlvWll6i2ncCkRJR1+q/C/yQz7asvl4AakElAoGBAMId\nxqMT2Sy9xLuHsrAoMUvBOkwaMYZH+IAb4DvUDjVIiKWjmonrmopS5Lpb+ALBKqZe\neVfD6HwWQqGwCFItToaEkZvrNfTapoNCHWWg001D49765UV5lMrArDbM1vXtFfM4\nDRYM6+Y6o/6QH8EBgXtyBxcYthIDBM3wBJa67xG1AoGAKTm8fFlMkIG0N4N3Kpbf\nnnH915GaRoBwIx2AXtd6QQ7oIRfYx95MQY/fUw7SgxcLr+btbulTCkWXwwRClUI2\nqPAdElGMcfMp56r9PaTy8EzUyu55heSJrB4ckIhEw0VAcTa/1wnlVduSd+LkZYmq\no2fOD11n5iycNXvBJF1F4LUCgYAMaRbwCi7SW3eefbiA5rDwJPRzNSGBckyC9EVL\nzezynyaNYH5a3wNMYKxa9dJPasYtSND9OXs9o7ay26xMhLUGiKc+jrUuaGRI9Asp\nGjUoNXT2JphN7s4CgHsCLep4YqYKnMTJah4S5CDj/5boIg6DM/EcGupZEHRYLkY8\n1MrAGQKBgQCi9yeC39ctLUNn+Ix604gttWWChdt3ozufTZ7HybJOSRA9Gh3iD5gm\nzlz0xqpGShKpOY2k+ftvja0poMdGeJLt84P3r2q01IgI7w0LmOj5m0W10dHysH27\nBWpCnHdBJMxnBsMRPoM4MKkmKWD9l5PSTCTWtkIpsyuDCko6D9UwZA==\n-----END RSA PRIVATE KEY-----\n", "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCkF3MX59OrlBs3dH5CU7lNmvpbrgZxSpyGjlnE8Flkirnc/Up22lpjznoxqeoTAwTW034k7Dz6aYIrZGmQwe2TkE084yqvlj45Dkyoj95fW/sZacm0cZNuL69EObEGHdprfGJQajrpz22NQoCD8TFB8Wv+8om9NH9Le6s+WPe98WC77KLw8qgfQsbIey+JawPWl4O67ZdL5xrypuRjfIPWjgy/VH85IXg/Z/GONZ2nxHgSShMkwqSFECAC5L3PHB+0+/12M/iikdatFSVGjpuHvkLOs3oe7m6HlOfluSJ85BzLWBbvva93qkGmLg4ZAc8rPh2O+YIsBUHNLLMM/oQp Generated-by-Nova\n", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1464715 nova-21.2.4/doc/api_samples/os-lock-server/0000775000175000017500000000000000000000000020503 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-lock-server/lock-server.json0000664000175000017500000000002400000000000023626 0ustar00zuulzuul00000000000000{ "lock": null }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-lock-server/unlock-server.json0000664000175000017500000000002600000000000024173 0ustar00zuulzuul00000000000000{ "unlock": null }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1464715 nova-21.2.4/doc/api_samples/os-lock-server/v2.73/0000775000175000017500000000000000000000000021262 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-lock-server/v2.73/lock-server-with-reason.json0000664000175000017500000000007100000000000026645 0ustar00zuulzuul00000000000000{ "lock": {"locked_reason": "I don't want to work"} }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 
mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-lock-server/v2.73/lock-server.json0000664000175000017500000000002400000000000024405 0ustar00zuulzuul00000000000000{ "lock": null }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-lock-server/v2.73/unlock-server.json0000664000175000017500000000002600000000000024752 0ustar00zuulzuul00000000000000{ "unlock": null }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1464715 nova-21.2.4/doc/api_samples/os-migrate-server/0000775000175000017500000000000000000000000021203 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrate-server/live-migrate-server.json0000664000175000017500000000023200000000000025764 0ustar00zuulzuul00000000000000{ "os-migrateLive": { "host": "01c0cadef72d47e28a672a76060d492c", "block_migration": false, "disk_over_commit": false } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrate-server/migrate-server.json0000664000175000017500000000002700000000000025031 0ustar00zuulzuul00000000000000{ "migrate": null }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1464715 nova-21.2.4/doc/api_samples/os-migrate-server/v2.25/0000775000175000017500000000000000000000000021757 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrate-server/v2.25/live-migrate-server.json0000664000175000017500000000017000000000000026541 0ustar00zuulzuul00000000000000{ "os-migrateLive": { "host": "01c0cadef72d47e28a672a76060d492c", "block_migration": "auto" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1464715 nova-21.2.4/doc/api_samples/os-migrate-server/v2.30/0000775000175000017500000000000000000000000021753 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrate-server/v2.30/live-migrate-server.json0000664000175000017500000000022000000000000026531 0ustar00zuulzuul00000000000000{ "os-migrateLive": { "host": "01c0cadef72d47e28a672a76060d492c", "block_migration": "auto", "force": false } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1464715 nova-21.2.4/doc/api_samples/os-migrate-server/v2.56/0000775000175000017500000000000000000000000021763 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrate-server/v2.56/migrate-server-null.json0000664000175000017500000000002700000000000026561 0ustar00zuulzuul00000000000000{ "migrate": null }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrate-server/v2.56/migrate-server.json0000664000175000017500000000006300000000000025611 0ustar00zuulzuul00000000000000{ "migrate": { "host": "host1" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1464715 
nova-21.2.4/doc/api_samples/os-migrate-server/v2.68/0000775000175000017500000000000000000000000021766 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrate-server/v2.68/live-migrate-server.json0000664000175000017500000000017000000000000026550 0ustar00zuulzuul00000000000000{ "os-migrateLive": { "host": "01c0cadef72d47e28a672a76060d492c", "block_migration": "auto" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1464715 nova-21.2.4/doc/api_samples/os-migrations/0000775000175000017500000000000000000000000020423 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrations/migrations-get.json0000664000175000017500000000206200000000000024247 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2012-10-29T13:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1234, "instance_uuid": "8600d31b-d1a1-4632-b2ff-45c2be1a70ff", "new_instance_type_id": 2, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "status": "done", "updated_at": "2012-10-29T13:42:02.000000" }, { "created_at": "2013-10-22T13:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 5678, "instance_uuid": "9128d044-7b61-403e-b766-7547076ff6c1", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "done", "updated_at": "2013-10-22T13:42:02.000000" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1464715 nova-21.2.4/doc/api_samples/os-migrations/v2.23/0000775000175000017500000000000000000000000021175 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrations/v2.23/migrations-get.json0000664000175000017500000000534100000000000025024 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-01-29T13:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "instance_uuid": "8600d31b-d1a1-4632-b2ff-45c2be1a70ff", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": "bookmark" } ], "new_instance_type_id": 2, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "status": "running", "migration_type": "live-migration", "updated_at": "2016-01-29T13:42:02.000000" }, { "created_at": "2016-01-29T13:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 2, "instance_uuid": "8600d31b-d1a1-4632-b2ff-45c2be1a70ff", "new_instance_type_id": 2, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "status": "error", "migration_type": "live-migration", "updated_at": "2016-01-29T13:42:02.000000" }, { "created_at": "2016-01-22T13:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 3, "instance_uuid": "9128d044-7b61-403e-b766-7547076ff6c1", 
"new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "error", "migration_type": "resize", "updated_at": "2016-01-22T13:42:02.000000" }, { "created_at": "2016-01-22T13:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 4, "instance_uuid": "9128d044-7b61-403e-b766-7547076ff6c1", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "migrating", "migration_type": "resize", "updated_at": "2016-01-22T13:42:02.000000" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1504717 nova-21.2.4/doc/api_samples/os-migrations/v2.59/0000775000175000017500000000000000000000000021206 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrations/v2.59/migrations-get-with-changes-since.json0000664000175000017500000000237700000000000030521 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-06-23T14:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 4, "instance_uuid": "9128d044-7b61-403e-b766-7547076ff6c1", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "migrating", "migration_type": "resize", "updated_at": "2016-06-23T14:42:02.000000", "uuid": "42341d4b-346a-40d0-83c6-5f4f6892b650" }, { "created_at": "2016-06-23T13:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 3, "instance_uuid": "9128d044-7b61-403e-b766-7547076ff6c1", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "error", "migration_type": "resize", "updated_at": "2016-06-23T13:42:02.000000", "uuid": "32341d4b-346a-40d0-83c6-5f4f6892b650" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrations/v2.59/migrations-get-with-limit.json0000664000175000017500000000157300000000000027125 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-06-23T14:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 4, "instance_uuid": "9128d044-7b61-403e-b766-7547076ff6c1", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "migrating", "migration_type": "resize", "updated_at": "2016-06-23T14:42:02.000000", "uuid": "42341d4b-346a-40d0-83c6-5f4f6892b650" } ], "migrations_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/os-migrations?limit=1&marker=42341d4b-346a-40d0-83c6-5f4f6892b650", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrations/v2.59/migrations-get-with-marker.json0000664000175000017500000000217400000000000027266 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-01-29T11:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "instance_uuid": "8600d31b-d1a1-4632-b2ff-45c2be1a70ff", "links": [ { "href": 
"http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": "bookmark" } ], "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "status": "running", "migration_type": "live-migration", "updated_at": "2016-01-29T11:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrations/v2.59/migrations-get.json0000664000175000017500000000572100000000000025037 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-06-23T14:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 4, "instance_uuid": "9128d044-7b61-403e-b766-7547076ff6c1", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "migrating", "migration_type": "resize", "updated_at": "2016-06-23T14:42:02.000000", "uuid": "42341d4b-346a-40d0-83c6-5f4f6892b650" }, { "created_at": "2016-06-23T13:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 3, "instance_uuid": "9128d044-7b61-403e-b766-7547076ff6c1", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "error", "migration_type": "resize", "updated_at": "2016-06-23T13:42:02.000000", "uuid": "32341d4b-346a-40d0-83c6-5f4f6892b650" }, { "created_at": "2016-01-29T12:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 2, "instance_uuid": "8600d31b-d1a1-4632-b2ff-45c2be1a70ff", "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "status": "error", "migration_type": "live-migration", "updated_at": "2016-01-29T12:42:02.000000", "uuid": "22341d4b-346a-40d0-83c6-5f4f6892b650" }, { "created_at": "2016-01-29T11:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "instance_uuid": "8600d31b-d1a1-4632-b2ff-45c2be1a70ff", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": "bookmark" } ], "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "status": "running", "migration_type": "live-migration", "updated_at": "2016-01-29T11:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1504717 nova-21.2.4/doc/api_samples/os-migrations/v2.66/0000775000175000017500000000000000000000000021204 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrations/v2.66/migrations-get-with-changes-before.json0000664000175000017500000000217400000000000030653 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-01-29T11:42:02.000000", "dest_compute": "compute2", "dest_host": 
"1.2.3.4", "dest_node": "node2", "id": 1, "instance_uuid": "8600d31b-d1a1-4632-b2ff-45c2be1a70ff", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": "bookmark" } ], "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "status": "running", "migration_type": "live-migration", "updated_at": "2016-01-29T11:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrations/v2.66/migrations-get-with-changes-since.json0000664000175000017500000000237700000000000030517 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-06-23T14:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 4, "instance_uuid": "9128d044-7b61-403e-b766-7547076ff6c1", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "migrating", "migration_type": "resize", "updated_at": "2016-06-23T14:42:02.000000", "uuid": "42341d4b-346a-40d0-83c6-5f4f6892b650" }, { "created_at": "2016-06-23T13:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 3, "instance_uuid": "9128d044-7b61-403e-b766-7547076ff6c1", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "error", "migration_type": "resize", "updated_at": "2016-06-23T13:42:02.000000", "uuid": "32341d4b-346a-40d0-83c6-5f4f6892b650" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrations/v2.66/migrations-get-with-limit.json0000664000175000017500000000157400000000000027124 0ustar00zuulzuul00000000000000 { "migrations": [ { "created_at": "2016-06-23T14:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 4, "instance_uuid": "9128d044-7b61-403e-b766-7547076ff6c1", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "migrating", "migration_type": "resize", "updated_at": "2016-06-23T14:42:02.000000", "uuid": "42341d4b-346a-40d0-83c6-5f4f6892b650" } ], "migrations_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/os-migrations?limit=1&marker=42341d4b-346a-40d0-83c6-5f4f6892b650", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrations/v2.66/migrations-get-with-marker.json0000664000175000017500000000217400000000000027264 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-01-29T11:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "instance_uuid": "8600d31b-d1a1-4632-b2ff-45c2be1a70ff", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": 
"bookmark" } ], "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "status": "running", "migration_type": "live-migration", "updated_at": "2016-01-29T11:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrations/v2.66/migrations-get.json0000664000175000017500000000572100000000000025035 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-06-23T14:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 4, "instance_uuid": "9128d044-7b61-403e-b766-7547076ff6c1", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "migrating", "migration_type": "resize", "updated_at": "2016-06-23T14:42:02.000000", "uuid": "42341d4b-346a-40d0-83c6-5f4f6892b650" }, { "created_at": "2016-06-23T13:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 3, "instance_uuid": "9128d044-7b61-403e-b766-7547076ff6c1", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "error", "migration_type": "resize", "updated_at": "2016-06-23T13:42:02.000000", "uuid": "32341d4b-346a-40d0-83c6-5f4f6892b650" }, { "created_at": "2016-01-29T12:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 2, "instance_uuid": "8600d31b-d1a1-4632-b2ff-45c2be1a70ff", "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "status": "error", "migration_type": "live-migration", "updated_at": "2016-01-29T12:42:02.000000", "uuid": "22341d4b-346a-40d0-83c6-5f4f6892b650" }, { "created_at": "2016-01-29T11:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "instance_uuid": "8600d31b-d1a1-4632-b2ff-45c2be1a70ff", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": "bookmark" } ], "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "status": "running", "migration_type": "live-migration", "updated_at": "2016-01-29T11:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1504717 nova-21.2.4/doc/api_samples/os-migrations/v2.80/0000775000175000017500000000000000000000000021200 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrations/v2.80/migrations-get-with-changes-before.json0000664000175000017500000000237500000000000030652 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-01-29T11:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "instance_uuid": "8600d31b-d1a1-4632-b2ff-45c2be1a70ff", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": "self" }, { "href": 
"http://openstack.example.com/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": "bookmark" } ], "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "status": "running", "migration_type": "live-migration", "updated_at": "2016-01-29T11:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "5c48ebaa-193f-4c5d-948a-f559cc92cd5e", "project_id": "ef92ccff-00f3-46e4-b015-811110e36ee4" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrations/v2.80/migrations-get-with-changes-since.json0000664000175000017500000000300100000000000030474 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-06-23T14:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 4, "instance_uuid": "9128d044-7b61-403e-b766-7547076ff6c1", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "migrating", "migration_type": "resize", "updated_at": "2016-06-23T14:42:02.000000", "uuid": "42341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "78348f0e-97ee-4d70-ad34-189692673ea2", "project_id": "9842f0f7-1229-4355-afe7-15ebdbb8c3d8" }, { "created_at": "2016-06-23T13:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 3, "instance_uuid": "9128d044-7b61-403e-b766-7547076ff6c1", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "error", "migration_type": "resize", "updated_at": "2016-06-23T13:42:02.000000", "uuid": "32341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "78348f0e-97ee-4d70-ad34-189692673ea2", "project_id": "9842f0f7-1229-4355-afe7-15ebdbb8c3d8" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrations/v2.80/migrations-get-with-limit.json0000664000175000017500000000177500000000000027123 0ustar00zuulzuul00000000000000 { "migrations": [ { "created_at": "2016-06-23T14:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 4, "instance_uuid": "9128d044-7b61-403e-b766-7547076ff6c1", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "migrating", "migration_type": "resize", "updated_at": "2016-06-23T14:42:02.000000", "uuid": "42341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "78348f0e-97ee-4d70-ad34-189692673ea2", "project_id": "9842f0f7-1229-4355-afe7-15ebdbb8c3d8" } ], "migrations_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/os-migrations?limit=1&marker=42341d4b-346a-40d0-83c6-5f4f6892b650", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrations/v2.80/migrations-get-with-marker.json0000664000175000017500000000237500000000000027263 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-01-29T11:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "instance_uuid": "8600d31b-d1a1-4632-b2ff-45c2be1a70ff", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": 
"self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": "bookmark" } ], "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "status": "running", "migration_type": "live-migration", "updated_at": "2016-01-29T11:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "5c48ebaa-193f-4c5d-948a-f559cc92cd5e", "project_id": "ef92ccff-00f3-46e4-b015-811110e36ee4" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrations/v2.80/migrations-get-with-user-or-project-id.json0000664000175000017500000000376100000000000031434 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-01-29T12:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 2, "instance_uuid": "8600d31b-d1a1-4632-b2ff-45c2be1a70ff", "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "status": "error", "migration_type": "live-migration", "updated_at": "2016-01-29T12:42:02.000000", "uuid": "22341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "5c48ebaa-193f-4c5d-948a-f559cc92cd5e", "project_id": "ef92ccff-00f3-46e4-b015-811110e36ee4" }, { "created_at": "2016-01-29T11:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "instance_uuid": "8600d31b-d1a1-4632-b2ff-45c2be1a70ff", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": "bookmark" } ], "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "status": "running", "migration_type": "live-migration", "updated_at": "2016-01-29T11:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "5c48ebaa-193f-4c5d-948a-f559cc92cd5e", "project_id": "ef92ccff-00f3-46e4-b015-811110e36ee4" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-migrations/v2.80/migrations-get.json0000664000175000017500000000672500000000000025036 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-06-23T14:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 4, "instance_uuid": "9128d044-7b61-403e-b766-7547076ff6c1", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "migrating", "migration_type": "resize", "updated_at": "2016-06-23T14:42:02.000000", "uuid": "42341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "78348f0e-97ee-4d70-ad34-189692673ea2", "project_id": "9842f0f7-1229-4355-afe7-15ebdbb8c3d8" }, { "created_at": "2016-06-23T13:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 3, "instance_uuid": "9128d044-7b61-403e-b766-7547076ff6c1", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "error", "migration_type": "resize", "updated_at": "2016-06-23T13:42:02.000000", "uuid": "32341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": 
"78348f0e-97ee-4d70-ad34-189692673ea2", "project_id": "9842f0f7-1229-4355-afe7-15ebdbb8c3d8" }, { "created_at": "2016-01-29T12:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 2, "instance_uuid": "8600d31b-d1a1-4632-b2ff-45c2be1a70ff", "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "status": "error", "migration_type": "live-migration", "updated_at": "2016-01-29T12:42:02.000000", "uuid": "22341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "5c48ebaa-193f-4c5d-948a-f559cc92cd5e", "project_id": "ef92ccff-00f3-46e4-b015-811110e36ee4" }, { "created_at": "2016-01-29T11:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "instance_uuid": "8600d31b-d1a1-4632-b2ff-45c2be1a70ff", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": "bookmark" } ], "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "status": "running", "migration_type": "live-migration", "updated_at": "2016-01-29T11:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "5c48ebaa-193f-4c5d-948a-f559cc92cd5e", "project_id": "ef92ccff-00f3-46e4-b015-811110e36ee4" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1504717 nova-21.2.4/doc/api_samples/os-multinic/0000775000175000017500000000000000000000000020073 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-multinic/multinic-add-fixed-ip-req.json0000664000175000017500000000013100000000000025623 0ustar00zuulzuul00000000000000{ "addFixedIp": { "networkId": "e1882e38-38c2-4239-ade7-35d644cb963a" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-multinic/multinic-remove-fixed-ip-req.json0000664000175000017500000000007700000000000026401 0ustar00zuulzuul00000000000000{ "removeFixedIp": { "address": "10.0.0.4" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1504717 nova-21.2.4/doc/api_samples/os-multiple-create/0000775000175000017500000000000000000000000021343 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-multiple-create/multiple-create-no-resv-post-req.json0000664000175000017500000000041700000000000030473 0ustar00zuulzuul00000000000000{ "server": { "name": "new-server-test", "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b", "flavorRef": "1", "metadata": { "My Server Name": "Apache1" }, "min_count": "2", "max_count": "3" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-multiple-create/multiple-create-no-resv-post-resp.json0000664000175000017500000000124500000000000030655 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "wfksH3GTTseP", "id": "440cf918-3ee0-4143-b289-f63e1d2000e6", "links": [ { "href": 
"http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/440cf918-3ee0-4143-b289-f63e1d2000e6", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/440cf918-3ee0-4143-b289-f63e1d2000e6", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-multiple-create/multiple-create-post-req.json0000664000175000017500000000047000000000000027103 0ustar00zuulzuul00000000000000{ "server": { "name": "new-server-test", "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b", "flavorRef": "1", "metadata": { "My Server Name": "Apache1" }, "return_reservation_id": "True", "min_count": "2", "max_count": "3" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-multiple-create/multiple-create-post-resp.json0000664000175000017500000000004600000000000027264 0ustar00zuulzuul00000000000000{ "reservation_id": "r-3fhpjulh" }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1504717 nova-21.2.4/doc/api_samples/os-networks/0000775000175000017500000000000000000000000020123 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-networks/network-add-req.json0000664000175000017500000000002200000000000024014 0ustar00zuulzuul00000000000000{ "id": "1" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-networks/network-create-req.json0000664000175000017500000000044500000000000024540 0ustar00zuulzuul00000000000000{ "network": { "label": "new net 111", "cidr": "10.20.105.0/24", "mtu": 9000, "dhcp_server": "10.20.105.2", "enable_dhcp": false, "share_address": true, "allowed_start": "10.20.105.10", "allowed_end": "10.20.105.200" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-networks/network-create-resp.json0000664000175000017500000000173300000000000024723 0ustar00zuulzuul00000000000000{ "network": { "bridge": null, "bridge_interface": null, "broadcast": "10.20.105.255", "cidr": "10.20.105.0/24", "cidr_v6": null, "created_at": null, "deleted": null, "deleted_at": null, "dhcp_server": "10.20.105.2", "dhcp_start": "10.20.105.2", "dns1": null, "dns2": null, "enable_dhcp": false, "gateway": "10.20.105.1", "gateway_v6": null, "host": null, "id": "d7a17c0c-457e-4ab4-a99c-4fa1762f5359", "injected": null, "label": "new net 111", "mtu": 9000, "multi_host": null, "netmask": "255.255.255.0", "netmask_v6": null, "priority": null, "project_id": null, "rxtx_base": null, "share_address": true, "updated_at": null, "vlan": null, "vpn_private_address": null, "vpn_public_address": null, "vpn_public_port": null } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-networks/network-show-resp.json0000664000175000017500000000163200000000000024436 0ustar00zuulzuul00000000000000{ "network": { "bridge": null, "bridge_interface": null, "broadcast": null, "cidr": null, "cidr_v6": null, "created_at": null, "deleted": null, "deleted_at": null, "dhcp_server": null, "dhcp_start": null, "dns1": null, "dns2": null, "enable_dhcp": 
null, "gateway": null, "gateway_v6": null, "host": null, "id": "20c8acc0-f747-4d71-a389-46d078ebf047", "injected": null, "label": "private", "mtu": null, "multi_host": null, "netmask": null, "netmask_v6": null, "priority": null, "project_id": null, "rxtx_base": null, "share_address": null, "updated_at": null, "vlan": null, "vpn_private_address": null, "vpn_public_address": null, "vpn_public_port": null } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-networks/networks-list-resp.json0000664000175000017500000000205700000000000024616 0ustar00zuulzuul00000000000000{ "networks": [ { "bridge": null, "bridge_interface": null, "broadcast": null, "cidr": null, "cidr_v6": null, "created_at": null, "deleted": null, "deleted_at": null, "dhcp_server": null, "dhcp_start": null, "dns1": null, "dns2": null, "enable_dhcp": null, "gateway": null, "gateway_v6": null, "host": null, "id": "20c8acc0-f747-4d71-a389-46d078ebf047", "injected": null, "label": "private", "mtu": null, "multi_host": null, "netmask": null, "netmask_v6": null, "priority": null, "project_id": null, "rxtx_base": null, "share_address": null, "updated_at": null, "vlan": null, "vpn_private_address": null, "vpn_public_address": null, "vpn_public_port": null } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1504717 nova-21.2.4/doc/api_samples/os-networks-associate/0000775000175000017500000000000000000000000022074 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-networks-associate/network-associate-host-req.json0000664000175000017500000000004400000000000030167 0ustar00zuulzuul00000000000000{ "associate_host": "testHost" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-networks-associate/network-disassociate-host-req.json0000664000175000017500000000004100000000000030664 0ustar00zuulzuul00000000000000{ "disassociate_host": null }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-networks-associate/network-disassociate-project-req.json0000664000175000017500000000004400000000000031360 0ustar00zuulzuul00000000000000{ "disassociate_project": null }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-networks-associate/network-disassociate-req.json0000664000175000017500000000003400000000000027713 0ustar00zuulzuul00000000000000{ "disassociate": null }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1504717 nova-21.2.4/doc/api_samples/os-pause-server/0000775000175000017500000000000000000000000020670 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-pause-server/pause-server.json0000664000175000017500000000002500000000000024201 0ustar00zuulzuul00000000000000{ "pause": null }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-pause-server/unpause-server.json0000664000175000017500000000002700000000000024546 0ustar00zuulzuul00000000000000{ "unpause": null 
}././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1544716 nova-21.2.4/doc/api_samples/os-preserve-ephemeral-rebuild/0000775000175000017500000000000000000000000023466 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=nova-21.2.4/doc/api_samples/os-preserve-ephemeral-rebuild/server-action-rebuild-preserve-ephemeral-resp.json 22 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-preserve-ephemeral-rebuild/server-action-rebuild-preserve-ephemeral-r0000664000175000017500000000343100000000000033647 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "adminPass": "seekr3t", "created": "2013-12-30T12:28:14Z", "flavor": { "id": "1", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ] }, "hostId": "ee8ea077f8548ce25c59c2d5020d0f82810c815c210fd68194a5c0f8", "id": "810e78d5-47fe-48bf-9559-bfe5dc918685", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/810e78d5-47fe-48bf-9559-bfe5dc918685", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/810e78d5-47fe-48bf-9559-bfe5dc918685", "rel": "bookmark" } ], "metadata": { "meta_var": "meta_val" }, "name": "foobar", "progress": 0, "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2013-12-30T12:28:15Z", "user_id": "fake" } }././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/doc/api_samples/os-preserve-ephemeral-rebuild/server-action-rebuild-preserve-ephemeral.json 22 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-preserve-ephemeral-rebuild/server-action-rebuild-preserve-ephemeral.j0000664000175000017500000000037100000000000033640 0ustar00zuulzuul00000000000000{ "rebuild": { "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b", "name": "foobar", "adminPass": "seekr3t", "metadata": { "meta_var": "meta_val" }, "preserve_ephemeral": false } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1544716 nova-21.2.4/doc/api_samples/os-quota-class-sets/0000775000175000017500000000000000000000000021457 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-class-sets/quota-classes-show-get-resp.json0000664000175000017500000000064700000000000027647 0ustar00zuulzuul00000000000000{ "quota_class_set": { "cores": 20, "fixed_ips": -1, "floating_ips": -1, "id": "test_class", "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "security_group_rules": -1, "security_groups": -1 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-class-sets/quota-classes-update-post-req.json0000664000175000017500000000061300000000000030166 0ustar00zuulzuul00000000000000{ "quota_class_set": { 
"instances": 50, "cores": 50, "ram": 51200, "floating_ips": -1, "fixed_ips": -1, "metadata_items": 128, "injected_files": 5, "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "security_groups": -1, "security_group_rules": -1, "key_pairs": 100 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-class-sets/quota-classes-update-post-resp.json0000664000175000017500000000061300000000000030350 0ustar00zuulzuul00000000000000{ "quota_class_set": { "cores": 50, "fixed_ips": -1, "floating_ips": -1, "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 50, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "security_group_rules": -1, "security_groups": -1 } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1544716 nova-21.2.4/doc/api_samples/os-quota-class-sets/v2.50/0000775000175000017500000000000000000000000022231 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-class-sets/v2.50/quota-classes-show-get-resp.json0000664000175000017500000000056000000000000030413 0ustar00zuulzuul00000000000000{ "quota_class_set": { "cores": 20, "id": "test_class", "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-class-sets/v2.50/quota-classes-update-post-req.json0000664000175000017500000000052400000000000030741 0ustar00zuulzuul00000000000000{ "quota_class_set": { "instances": 50, "cores": 50, "ram": 51200, "metadata_items": 128, "injected_files": 5, "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "key_pairs": 100, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-class-sets/v2.50/quota-classes-update-post-resp.json0000664000175000017500000000052400000000000031123 0ustar00zuulzuul00000000000000{ "quota_class_set": { "cores": 50, "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 50, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1544716 nova-21.2.4/doc/api_samples/os-quota-class-sets/v2.57/0000775000175000017500000000000000000000000022240 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-class-sets/v2.57/quota-classes-show-get-resp.json0000664000175000017500000000037400000000000030425 0ustar00zuulzuul00000000000000{ "quota_class_set": { "cores": 20, "id": "test_class", "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/api_samples/os-quota-class-sets/v2.57/quota-classes-update-post-req.json0000664000175000017500000000034000000000000030744 0ustar00zuulzuul00000000000000{ "quota_class_set": { "instances": 50, "cores": 50, "ram": 51200, "metadata_items": 128, "key_pairs": 100, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-class-sets/v2.57/quota-classes-update-post-resp.json0000664000175000017500000000034000000000000031126 0ustar00zuulzuul00000000000000{ "quota_class_set": { "cores": 50, "instances": 50, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1544716 nova-21.2.4/doc/api_samples/os-quota-sets/0000775000175000017500000000000000000000000020354 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/quotas-show-defaults-get-resp.json0000664000175000017500000000074300000000000027076 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "fixed_ips": -1, "floating_ips": -1, "id": "fake_tenant", "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "security_group_rules": -1, "security_groups": -1, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/quotas-show-detail-get-resp.json0000664000175000017500000000321100000000000026522 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": { "in_use": 0, "limit": 20, "reserved": 0 }, "fixed_ips": { "in_use": 0, "limit": -1, "reserved": 0 }, "floating_ips": { "in_use": 0, "limit": -1, "reserved": 0 }, "id": "fake_tenant", "injected_file_content_bytes": { "in_use": 0, "limit": 10240, "reserved": 0 }, "injected_file_path_bytes": { "in_use": 0, "limit": 255, "reserved": 0 }, "injected_files": { "in_use": 0, "limit": 5, "reserved": 0 }, "instances": { "in_use": 0, "limit": 10, "reserved": 0 }, "key_pairs": { "in_use": 0, "limit": 100, "reserved": 0 }, "metadata_items": { "in_use": 0, "limit": 128, "reserved": 0 }, "ram": { "in_use": 0, "limit": 51200, "reserved": 0 }, "security_group_rules": { "in_use": 0, "limit": -1, "reserved": 0 }, "security_groups": { "in_use": 0, "limit": -1, "reserved": 0 }, "server_group_members": { "in_use": 0, "limit": 10, "reserved": 0 }, "server_groups": { "in_use": 0, "limit": 10, "reserved": 0 } } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/quotas-show-get-resp.json0000664000175000017500000000074300000000000025271 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "fixed_ips": -1, "floating_ips": -1, "id": "fake_tenant", "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "security_group_rules": -1, "security_groups": -1, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/api_samples/os-quota-sets/quotas-update-force-post-req.json0000664000175000017500000000011500000000000026704 0ustar00zuulzuul00000000000000{ "quota_set": { "force": "True", "instances": 45 } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/quotas-update-force-post-resp.json0000664000175000017500000000070600000000000027074 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "fixed_ips": -1, "floating_ips": -1, "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 45, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "security_group_rules": -1, "security_groups": -1, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/quotas-update-post-req.json0000664000175000017500000000006100000000000025610 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 45 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/quotas-update-post-resp.json0000664000175000017500000000070600000000000026000 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 45, "fixed_ips": -1, "floating_ips": -1, "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "security_group_rules": -1, "security_groups": -1, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/user-quotas-show-get-resp.json0000664000175000017500000000074300000000000026245 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "fixed_ips": -1, "floating_ips": -1, "id": "fake_tenant", "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "security_group_rules": -1, "security_groups": -1, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/user-quotas-update-post-req.json0000664000175000017500000000011400000000000026563 0ustar00zuulzuul00000000000000{ "quota_set": { "force": "True", "instances": 9 } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/user-quotas-update-post-resp.json0000664000175000017500000000070500000000000026753 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "fixed_ips": -1, "floating_ips": -1, "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 9, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "security_group_rules": -1, "security_groups": -1, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1584716 nova-21.2.4/doc/api_samples/os-quota-sets/v2.36/0000775000175000017500000000000000000000000021132 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 
mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/v2.36/quotas-show-defaults-get-resp.json0000664000175000017500000000055300000000000027653 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "id": "fake_tenant", "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/v2.36/quotas-show-detail-get-resp.json0000664000175000017500000000227500000000000027311 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": { "in_use": 0, "limit": 20, "reserved": 0 }, "id": "fake_tenant", "injected_file_content_bytes": { "in_use": 0, "limit": 10240, "reserved": 0 }, "injected_file_path_bytes": { "in_use": 0, "limit": 255, "reserved": 0 }, "injected_files": { "in_use": 0, "limit": 5, "reserved": 0 }, "instances": { "in_use": 0, "limit": 10, "reserved": 0 }, "key_pairs": { "in_use": 0, "limit": 100, "reserved": 0 }, "metadata_items": { "in_use": 0, "limit": 128, "reserved": 0 }, "ram": { "in_use": 0, "limit": 51200, "reserved": 0 }, "server_group_members": { "in_use": 0, "limit": 10, "reserved": 0 }, "server_groups": { "in_use": 0, "limit": 10, "reserved": 0 } } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/v2.36/quotas-show-get-resp.json0000664000175000017500000000055300000000000026046 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "id": "fake_tenant", "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/v2.36/quotas-update-force-post-req.json0000664000175000017500000000011500000000000027462 0ustar00zuulzuul00000000000000{ "quota_set": { "force": "True", "instances": 45 } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/v2.36/quotas-update-force-post-resp.json0000664000175000017500000000051600000000000027651 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 45, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/v2.36/quotas-update-post-req.json0000664000175000017500000000006400000000000026371 0ustar00zuulzuul00000000000000{ "quota_set": { "instances": 45 } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/v2.36/quotas-update-post-resp.json0000664000175000017500000000051600000000000026555 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 45, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, 
"server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/v2.36/user-quotas-show-get-resp.json0000664000175000017500000000055300000000000027022 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "id": "fake_tenant", "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/v2.36/user-quotas-update-post-req.json0000664000175000017500000000011400000000000027341 0ustar00zuulzuul00000000000000{ "quota_set": { "force": "True", "instances": 9 } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/v2.36/user-quotas-update-post-resp.json0000664000175000017500000000051500000000000027530 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 9, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1584716 nova-21.2.4/doc/api_samples/os-quota-sets/v2.57/0000775000175000017500000000000000000000000021135 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/v2.57/quotas-show-defaults-get-resp.json0000664000175000017500000000036700000000000027661 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "id": "fake_tenant", "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/v2.57/quotas-show-detail-get-resp.json0000664000175000017500000000151200000000000027305 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": { "in_use": 0, "limit": 20, "reserved": 0 }, "id": "fake_tenant", "instances": { "in_use": 0, "limit": 10, "reserved": 0 }, "key_pairs": { "in_use": 0, "limit": 100, "reserved": 0 }, "metadata_items": { "in_use": 0, "limit": 128, "reserved": 0 }, "ram": { "in_use": 0, "limit": 51200, "reserved": 0 }, "server_group_members": { "in_use": 0, "limit": 10, "reserved": 0 }, "server_groups": { "in_use": 0, "limit": 10, "reserved": 0 } } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/v2.57/quotas-show-get-resp.json0000664000175000017500000000036700000000000026054 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "id": "fake_tenant", "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/v2.57/quotas-update-force-post-req.json0000664000175000017500000000011500000000000027465 0ustar00zuulzuul00000000000000{ "quota_set": { "force": "True", 
"instances": 45 } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/v2.57/quotas-update-force-post-resp.json0000664000175000017500000000033200000000000027650 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "instances": 45, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/v2.57/quotas-update-post-req.json0000664000175000017500000000006400000000000026374 0ustar00zuulzuul00000000000000{ "quota_set": { "instances": 20 } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/v2.57/quotas-update-post-resp.json0000664000175000017500000000033200000000000026554 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "instances": 20, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/v2.57/user-quotas-show-get-resp.json0000664000175000017500000000036700000000000027030 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "id": "fake_tenant", "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/v2.57/user-quotas-update-post-req.json0000664000175000017500000000006300000000000027347 0ustar00zuulzuul00000000000000{ "quota_set": { "instances": 9 } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets/v2.57/user-quotas-update-post-resp.json0000664000175000017500000000033100000000000027527 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "instances": 9, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1584716 nova-21.2.4/doc/api_samples/os-quota-sets-noop/0000775000175000017500000000000000000000000021325 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets-noop/quotas-show-defaults-get-resp.json0000664000175000017500000000073300000000000030046 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": -1, "fixed_ips": -1, "floating_ips": -1, "id": "fake_tenant", "injected_file_content_bytes": -1, "injected_file_path_bytes": -1, "injected_files": -1, "instances": -1, "key_pairs": -1, "metadata_items": -1, "ram": -1, "security_group_rules": -1, "security_groups": -1, "server_group_members": -1, "server_groups": -1 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets-noop/quotas-show-detail-get-resp.json0000664000175000017500000000323500000000000027501 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": { "in_use": -1, "limit": -1, "reserved": -1 }, "fixed_ips": { "in_use": -1, 
"limit": -1, "reserved": -1 }, "floating_ips": { "in_use": -1, "limit": -1, "reserved": -1 }, "id": "fake_tenant", "injected_file_content_bytes": { "in_use": -1, "limit": -1, "reserved": -1 }, "injected_file_path_bytes": { "in_use": -1, "limit": -1, "reserved": -1 }, "injected_files": { "in_use": -1, "limit": -1, "reserved": -1 }, "instances": { "in_use": -1, "limit": -1, "reserved": -1 }, "key_pairs": { "in_use": -1, "limit": -1, "reserved": -1 }, "metadata_items": { "in_use": -1, "limit": -1, "reserved": -1 }, "ram": { "in_use": -1, "limit": -1, "reserved": -1 }, "security_group_rules": { "in_use": -1, "limit": -1, "reserved": -1 }, "security_groups": { "in_use": -1, "limit": -1, "reserved": -1 }, "server_group_members": { "in_use": -1, "limit": -1, "reserved": -1 }, "server_groups": { "in_use": -1, "limit": -1, "reserved": -1 } } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets-noop/quotas-show-get-resp.json0000664000175000017500000000073300000000000026241 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": -1, "fixed_ips": -1, "floating_ips": -1, "id": "fake_tenant", "injected_file_content_bytes": -1, "injected_file_path_bytes": -1, "injected_files": -1, "instances": -1, "key_pairs": -1, "metadata_items": -1, "ram": -1, "security_group_rules": -1, "security_groups": -1, "server_group_members": -1, "server_groups": -1 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets-noop/quotas-update-force-post-req.json0000664000175000017500000000011500000000000027655 0ustar00zuulzuul00000000000000{ "quota_set": { "force": "True", "instances": 45 } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets-noop/quotas-update-force-post-resp.json0000664000175000017500000000067600000000000030053 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": -1, "fixed_ips": -1, "floating_ips": -1, "injected_file_content_bytes": -1, "injected_file_path_bytes": -1, "injected_files": -1, "instances": -1, "key_pairs": -1, "metadata_items": -1, "ram": -1, "security_group_rules": -1, "security_groups": -1, "server_group_members": -1, "server_groups": -1 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets-noop/quotas-update-post-req.json0000664000175000017500000000007200000000000026563 0ustar00zuulzuul00000000000000{ "quota_set": { "security_groups": 45 } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets-noop/quotas-update-post-resp.json0000664000175000017500000000067600000000000026757 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": -1, "fixed_ips": -1, "floating_ips": -1, "injected_file_content_bytes": -1, "injected_file_path_bytes": -1, "injected_files": -1, "instances": -1, "key_pairs": -1, "metadata_items": -1, "ram": -1, "security_group_rules": -1, "security_groups": -1, "server_group_members": -1, "server_groups": -1 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets-noop/user-quotas-show-get-resp.json0000664000175000017500000000073300000000000027215 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 
-1, "fixed_ips": -1, "floating_ips": -1, "id": "fake_tenant", "injected_file_content_bytes": -1, "injected_file_path_bytes": -1, "injected_files": -1, "instances": -1, "key_pairs": -1, "metadata_items": -1, "ram": -1, "security_group_rules": -1, "security_groups": -1, "server_group_members": -1, "server_groups": -1 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets-noop/user-quotas-update-post-req.json0000664000175000017500000000011400000000000027534 0ustar00zuulzuul00000000000000{ "quota_set": { "force": "True", "instances": 9 } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-quota-sets-noop/user-quotas-update-post-resp.json0000664000175000017500000000067600000000000027733 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": -1, "fixed_ips": -1, "floating_ips": -1, "injected_file_content_bytes": -1, "injected_file_path_bytes": -1, "injected_files": -1, "instances": -1, "key_pairs": -1, "metadata_items": -1, "ram": -1, "security_group_rules": -1, "security_groups": -1, "server_group_members": -1, "server_groups": -1 } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1584716 nova-21.2.4/doc/api_samples/os-remote-consoles/0000775000175000017500000000000000000000000021365 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-remote-consoles/get-rdp-console-post-req.json0000664000175000017500000000010000000000000027021 0ustar00zuulzuul00000000000000{ "os-getRDPConsole": { "type": "rdp-html5" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-remote-consoles/get-rdp-console-post-resp.json0000664000175000017500000000021300000000000027210 0ustar00zuulzuul00000000000000{ "console": { "type": "rdp-html5", "url": "http://127.0.0.1:6083/?token=191996c3-7b0f-42f3-95a7-f1839f2da6ed" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-remote-consoles/get-serial-console-post-req.json0000664000175000017500000000010000000000000027513 0ustar00zuulzuul00000000000000{ "os-getSerialConsole": { "type": "serial" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-remote-consoles/get-serial-console-post-resp.json0000664000175000017500000000020500000000000027703 0ustar00zuulzuul00000000000000{ "console": { "type": "serial", "url":"ws://127.0.0.1:6083/?token=f9906a48-b71e-4f18-baca-c987da3ebdb3" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-remote-consoles/get-spice-console-post-req.json0000664000175000017500000000010300000000000027342 0ustar00zuulzuul00000000000000{ "os-getSPICEConsole": { "type": "spice-html5" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-remote-consoles/get-spice-console-post-resp.json0000664000175000017500000000023300000000000027530 0ustar00zuulzuul00000000000000{ "console": { "type": "spice-html5", "url": 
"http://127.0.0.1:6082/spice_auto.html?token=a30e5d08-6a20-4043-958f-0852440c6af4" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-remote-consoles/get-vnc-console-post-req.json0000664000175000017500000000007300000000000027033 0ustar00zuulzuul00000000000000{ "os-getVNCConsole": { "type": "novnc" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-remote-consoles/get-vnc-console-post-resp.json0000664000175000017500000000023500000000000027215 0ustar00zuulzuul00000000000000{ "console": { "type": "novnc", "url": "http://127.0.0.1:6080/vnc_auto.html?path=%3Ftoken%3Ddaae261f-474d-4cae-8f6a-1865278ed8c9" } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1624715 nova-21.2.4/doc/api_samples/os-remote-consoles/v2.6/0000775000175000017500000000000000000000000022060 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-remote-consoles/v2.6/create-vnc-console-req.json0000664000175000017500000000012500000000000027225 0ustar00zuulzuul00000000000000{ "remote_console": { "protocol": "vnc", "type": "novnc" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-remote-consoles/v2.6/create-vnc-console-resp.json0000664000175000017500000000030200000000000027404 0ustar00zuulzuul00000000000000{ "remote_console": { "protocol": "vnc", "type": "novnc", "url": "http://example.com:6080/vnc_auto.html?path=%3Ftoken%3Db60bcfc3-5fd4-4d21-986c-e83379107819" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1624715 nova-21.2.4/doc/api_samples/os-remote-consoles/v2.8/0000775000175000017500000000000000000000000022062 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-remote-consoles/v2.8/create-mks-console-req.json0000664000175000017500000000012600000000000027234 0ustar00zuulzuul00000000000000{ "remote_console": { "protocol": "mks", "type": "webmks" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-remote-consoles/v2.8/create-mks-console-resp.json0000664000175000017500000000026400000000000027421 0ustar00zuulzuul00000000000000{ "remote_console": { "protocol": "mks", "type": "webmks", "url": "http://example.com:6090/mks.html?token=b60bcfc3-5fd4-4d21-986c-e83379107819" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1624715 nova-21.2.4/doc/api_samples/os-rescue/0000775000175000017500000000000000000000000017535 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/os-rescue/server-get-resp-rescue.json0000664000175000017500000000500500000000000024746 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "2013-09-18T07:22:09Z", "flavor": { "id": "1", "links": [ { 
"href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ] }, "hostId": "f04994c5b4aac1cacbb83b09c2506e457d97dd54f620961624574690", "id": "2fd0c66b-50af-41d2-9253-9fa41e7e8dd8", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/2fd0c66b-50af-41d2-9253-9fa41e7e8dd8", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/2fd0c66b-50af-41d2-9253-9fa41e7e8dd8", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "status": "RESCUE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2013-09-18T07:22:11Z", "user_id": "fake", "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "b8b357f7100d4391828f2177c922ef93", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-STS:power_state": 4, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "rescued", "os-extended-volumes:volumes_attached": [], "OS-SRV-USG:launched_at": "2013-09-23T13:37:00.880302", "OS-SRV-USG:terminated_at": null, "security_groups": [ { "name": "default" } ] } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/os-rescue/server-get-resp-unrescue.json0000664000175000017500000000503300000000000025312 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "2013-09-18T07:22:09Z", "flavor": { "id": "1", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ] }, "hostId": "53cd4520a6cc639eeabcae4a0512b93e4675d431002e0b60e2dcfc04", "id": "edfc3905-1f3c-4819-8fc3-a7d8131cfa22", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/edfc3905-1f3c-4819-8fc3-a7d8131cfa22", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/edfc3905-1f3c-4819-8fc3-a7d8131cfa22", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "progress": 0, "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2013-09-18T07:22:12Z", "user_id": "fake", "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "b8b357f7100d4391828f2177c922ef93", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [], "OS-SRV-USG:launched_at": "2013-09-23T13:37:00.880302", "OS-SRV-USG:terminated_at": null, "security_groups": [ { "name": "default" } ] } } 
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-rescue/server-rescue-req-with-image-ref.json0000664000175000017500000000020200000000000026604 0ustar00zuulzuul00000000000000{ "rescue": { "adminPass": "MySecretPass", "rescue_image_ref": "70a599e0-31e7-49b7-b260-868f441e862b" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-rescue/server-rescue-req.json0000664000175000017500000000007600000000000024012 0ustar00zuulzuul00000000000000{ "rescue": { "adminPass": "MySecretPass" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-rescue/server-rescue.json0000664000175000017500000000004400000000000023220 0ustar00zuulzuul00000000000000{ "adminPass": "MySecretPass" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/os-rescue/server-unrescue-req.json0000664000175000017500000000003000000000000024343 0ustar00zuulzuul00000000000000{ "unrescue": null }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1624715 nova-21.2.4/doc/api_samples/os-rescue/v2.87/0000775000175000017500000000000000000000000020321 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/os-rescue/v2.87/server-get-resp-rescue.json0000664000175000017500000000602500000000000025535 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "compute", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "r-d0bls59j", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 4, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "rescued", "OS-SRV-USG:launched_at": "2020-02-07T17:39:49.259481", "OS-SRV-USG:terminated_at": null, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "192.168.1.30", "version": 4 } ] }, "config_drive": "", "created": "2020-02-07T17:39:48Z", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "host_status": "UP", "id": "69bebe1c-3bdb-4feb-9b79-afa3d4782d95", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/69bebe1c-3bdb-4feb-9b79-afa3d4782d95", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/69bebe1c-3bdb-4feb-9b79-afa3d4782d95", "rel": "bookmark" } ], "locked": false, "locked_reason": null, 
"metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "os-extended-volumes:volumes_attached": [], "security_groups": [ { "name": "default" } ], "server_groups": [], "status": "RESCUE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": null, "updated": "2020-02-07T17:39:49Z", "user_id": "fake" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/os-rescue/v2.87/server-get-resp-unrescue.json0000664000175000017500000000605300000000000026101 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "compute", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "r-g20x6pwt", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2020-02-07T17:39:55.632592", "OS-SRV-USG:terminated_at": null, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "192.168.1.30", "version": 4 } ] }, "config_drive": "", "created": "2020-02-07T17:39:54Z", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "host_status": "UP", "id": "5a0ffa96-ae59-4f82-b7a6-e0c9007cd576", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/5a0ffa96-ae59-4f82-b7a6-e0c9007cd576", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/5a0ffa96-ae59-4f82-b7a6-e0c9007cd576", "rel": "bookmark" } ], "locked": false, "locked_reason": null, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "os-extended-volumes:volumes_attached": [], "progress": 0, "security_groups": [ { "name": "default" } ], "server_groups": [], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": null, "updated": "2020-02-07T17:39:56Z", "user_id": "fake" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-rescue/v2.87/server-rescue-req-with-image-ref.json0000664000175000017500000000020100000000000027367 0ustar00zuulzuul00000000000000{ "rescue": { "adminPass": "MySecretPass", "rescue_image_ref": "70a599e0-31e7-49b7-b260-868f441e862b" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-rescue/v2.87/server-rescue-req.json0000664000175000017500000000007500000000000024575 0ustar00zuulzuul00000000000000{ "rescue": { "adminPass": "MySecretPass" } 
}././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-rescue/v2.87/server-rescue.json0000664000175000017500000000004300000000000024003 0ustar00zuulzuul00000000000000{ "adminPass": "MySecretPass" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-rescue/v2.87/server-unrescue-req.json0000664000175000017500000000003000000000000025127 0ustar00zuulzuul00000000000000{ "unrescue": null }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1624715 nova-21.2.4/doc/api_samples/os-security-group-default-rules/0000775000175000017500000000000000000000000024022 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/doc/api_samples/os-security-group-default-rules/security-group-default-rules-create-req.json 22 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-security-group-default-rules/security-group-default-rules-create-req.0000664000175000017500000000024000000000000033620 0ustar00zuulzuul00000000000000{ "security_group_default_rule": { "ip_protocol": "TCP", "from_port": "80", "to_port": "80", "cidr": "10.10.10.0/24" } }././@PaxHeader0000000000000000000000000000021100000000000011447 xustar0000000000000000115 path=nova-21.2.4/doc/api_samples/os-security-group-default-rules/security-group-default-rules-create-resp.json 22 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-security-group-default-rules/security-group-default-rules-create-resp0000664000175000017500000000032100000000000033724 0ustar00zuulzuul00000000000000{ "security_group_default_rule": { "from_port": 80, "id": 1, "ip_protocol": "TCP", "ip_range": { "cidr": "10.10.10.0/24" }, "to_port": 80 } }././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/doc/api_samples/os-security-group-default-rules/security-group-default-rules-list-resp.json 22 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-security-group-default-rules/security-group-default-rules-list-resp.j0000664000175000017500000000040200000000000033664 0ustar00zuulzuul00000000000000{ "security_group_default_rules": [ { "from_port": 80, "id": 1, "ip_protocol": "TCP", "ip_range": { "cidr": "10.10.10.0/24" }, "to_port": 80 } ] }././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/doc/api_samples/os-security-group-default-rules/security-group-default-rules-show-resp.json 22 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-security-group-default-rules/security-group-default-rules-show-resp.j0000664000175000017500000000032100000000000033671 0ustar00zuulzuul00000000000000{ "security_group_default_rule": { "from_port": 80, "id": 1, "ip_protocol": "TCP", "ip_range": { "cidr": "10.10.10.0/24" }, "to_port": 80 } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1624715 nova-21.2.4/doc/api_samples/os-security-groups/0000775000175000017500000000000000000000000021433 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-security-groups/security-group-add-post-req.json0000664000175000017500000000007200000000000027624 0ustar00zuulzuul00000000000000{ "addSecurityGroup": { "name": "test" } 
}././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-security-groups/security-group-post-req.json0000664000175000017500000000013600000000000027077 0ustar00zuulzuul00000000000000{ "security_group": { "name": "test", "description": "description" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-security-groups/security-group-remove-post-req.json0000664000175000017500000000007500000000000030374 0ustar00zuulzuul00000000000000{ "removeSecurityGroup": { "name": "test" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-security-groups/security-group-rules-post-req.json0000664000175000017500000000032500000000000030227 0ustar00zuulzuul00000000000000{ "security_group_rule": { "parent_group_id": "21111111-1111-1111-1111-111111111112", "ip_protocol": "tcp", "from_port": 22, "to_port": 22, "cidr": "10.0.0.0/24" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-security-groups/security-group-rules-post-resp.json0000664000175000017500000000050400000000000030410 0ustar00zuulzuul00000000000000{ "security_group_rule": { "from_port": 22, "group": {}, "id": "00000000-0000-0000-0000-000000000000", "ip_protocol": "tcp", "ip_range": { "cidr": "10.0.0.0/24" }, "parent_group_id": "11111111-1111-1111-1111-111111111111", "to_port": 22 } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-security-groups/security-groups-create-resp.json0000664000175000017500000000027400000000000027725 0ustar00zuulzuul00000000000000{ "security_group": { "description": "default", "id": 1, "name": "default", "rules": [], "tenant_id": "6f70656e737461636b20342065766572" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-security-groups/security-groups-get-resp.json0000664000175000017500000000027400000000000027241 0ustar00zuulzuul00000000000000{ "security_group": { "description": "default", "id": 1, "name": "default", "rules": [], "tenant_id": "6f70656e737461636b20342065766572" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-security-groups/security-groups-list-get-resp.json0000664000175000017500000000034500000000000030211 0ustar00zuulzuul00000000000000{ "security_groups": [ { "description": "default", "id": 1, "name": "default", "rules": [], "tenant_id": "6f70656e737461636b20342065766572" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-security-groups/server-security-groups-list-resp.json0000664000175000017500000000034500000000000030740 0ustar00zuulzuul00000000000000{ "security_groups": [ { "description": "default", "id": 1, "name": "default", "rules": [], "tenant_id": "6f70656e737461636b20342065766572" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1624715 nova-21.2.4/doc/api_samples/os-server-diagnostics/0000775000175000017500000000000000000000000022062 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 
xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-server-diagnostics/server-diagnostics-get-resp.json0000664000175000017500000000060300000000000030313 0ustar00zuulzuul00000000000000{ "cpu0_time": 17300000000, "memory": 524288, "vda_errors": -1, "vda_read": 262144, "vda_read_req": 112, "vda_write": 5778432, "vda_write_req": 488, "vnet1_rx": 2070139, "vnet1_rx_drop": 0, "vnet1_rx_errors": 0, "vnet1_rx_packets": 26701, "vnet1_tx": 140208, "vnet1_tx_drop": 0, "vnet1_tx_errors": 0, "vnet1_tx_packets": 662 } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1664715 nova-21.2.4/doc/api_samples/os-server-diagnostics/v2.48/0000775000175000017500000000000000000000000022643 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-server-diagnostics/v2.48/server-diagnostics-get-resp.json0000664000175000017500000000201500000000000031073 0ustar00zuulzuul00000000000000{ "config_drive": true, "cpu_details": [ { "id": 0, "time": 17300000000, "utilisation": 15 } ], "disk_details": [ { "errors_count": 1, "read_bytes": 262144, "read_requests": 112, "write_bytes": 5778432, "write_requests": 488 } ], "driver": "libvirt", "hypervisor": "kvm", "hypervisor_os": "ubuntu", "memory_details": { "maximum": 524288, "used": 0 }, "nic_details": [ { "mac_address": "01:23:45:67:89:ab", "rx_drop": 200, "rx_errors": 100, "rx_octets": 2070139, "rx_packets": 26701, "rx_rate": 300, "tx_drop": 500, "tx_errors": 400, "tx_octets": 140208, "tx_packets": 662, "tx_rate": 600 } ], "num_cpus": 1, "num_disks": 1, "num_nics": 1, "state": "running", "uptime": 46664 } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1664715 nova-21.2.4/doc/api_samples/os-server-external-events/0000775000175000017500000000000000000000000022677 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-server-external-events/event-create-req.json0000664000175000017500000000031600000000000026741 0ustar00zuulzuul00000000000000{ "events": [ { "name": "test-event", "tag": "foo", "status": "completed", "server_uuid": "3df201cf-2451-44f2-8d25-a4ca826fc1f3" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-server-external-events/event-create-resp.json0000664000175000017500000000035400000000000027125 0ustar00zuulzuul00000000000000{ "events": [ { "code": 200, "name": "network-changed", "server_uuid": "ff1df7b2-6772-45fd-9326-c0a3b05591c2", "status": "completed", "tag": "foo" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1664715 nova-21.2.4/doc/api_samples/os-server-groups/0000775000175000017500000000000000000000000021072 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-server-groups/server-groups-get-resp.json0000664000175000017500000000030300000000000026330 0ustar00zuulzuul00000000000000{ "server_group": { "id": "5bbcc3c4-1da2-4437-a48a-66f15b1b13f9", "name": "test", "policies": ["anti-affinity"], "members": [], "metadata": {} } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/api_samples/os-server-groups/server-groups-list-resp.json0000664000175000017500000000035400000000000026532 0ustar00zuulzuul00000000000000{ "server_groups": [ { "id": "616fb98f-46ca-475e-917e-2563e5a8cd19", "name": "test", "policies": ["anti-affinity"], "members": [], "metadata": {} } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-server-groups/server-groups-post-req.json0000664000175000017500000000013600000000000026360 0ustar00zuulzuul00000000000000{ "server_group": { "name": "test", "policies": ["anti-affinity"] } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-server-groups/server-groups-post-resp.json0000664000175000017500000000030300000000000026536 0ustar00zuulzuul00000000000000{ "server_group": { "id": "5bbcc3c4-1da2-4437-a48a-66f15b1b13f9", "name": "test", "policies": ["anti-affinity"], "members": [], "metadata": {} } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1664715 nova-21.2.4/doc/api_samples/os-server-groups/v2.13/0000775000175000017500000000000000000000000021643 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-server-groups/v2.13/server-groups-get-resp.json0000664000175000017500000000043000000000000027102 0ustar00zuulzuul00000000000000{ "server_group": { "id": "5bbcc3c4-1da2-4437-a48a-66f15b1b13f9", "name": "test", "policies": ["anti-affinity"], "members": [], "metadata": {}, "project_id": "6f70656e737461636b20342065766572", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-server-groups/v2.13/server-groups-list-resp.json0000664000175000017500000000051100000000000027276 0ustar00zuulzuul00000000000000{ "server_groups": [ { "id": "616fb98f-46ca-475e-917e-2563e5a8cd19", "name": "test", "policies": ["anti-affinity"], "members": [], "metadata": {}, "project_id": "6f70656e737461636b20342065766572", "user_id": "fake" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-server-groups/v2.13/server-groups-post-req.json0000664000175000017500000000013600000000000027131 0ustar00zuulzuul00000000000000{ "server_group": { "name": "test", "policies": ["anti-affinity"] } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-server-groups/v2.13/server-groups-post-resp.json0000664000175000017500000000043000000000000027310 0ustar00zuulzuul00000000000000{ "server_group": { "id": "5bbcc3c4-1da2-4437-a48a-66f15b1b13f9", "name": "test", "policies": ["anti-affinity"], "members": [], "metadata": {}, "project_id": "6f70656e737461636b20342065766572", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1664715 nova-21.2.4/doc/api_samples/os-server-groups/v2.64/0000775000175000017500000000000000000000000021651 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/api_samples/os-server-groups/v2.64/server-groups-get-resp.json0000664000175000017500000000045100000000000027113 0ustar00zuulzuul00000000000000{ "server_group": { "id": "5bbcc3c4-1da2-4437-a48a-66f15b1b13f9", "name": "test", "policy": "anti-affinity", "rules": {"max_server_per_host": 3}, "members": [], "project_id": "6f70656e737461636b20342065766572", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-server-groups/v2.64/server-groups-list-resp.json0000664000175000017500000000053200000000000027307 0ustar00zuulzuul00000000000000{ "server_groups": [ { "id": "616fb98f-46ca-475e-917e-2563e5a8cd19", "name": "test", "policy": "anti-affinity", "rules": {"max_server_per_host": 3}, "members": [], "project_id": "6f70656e737461636b20342065766572", "user_id": "fake" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-server-groups/v2.64/server-groups-post-req.json0000664000175000017500000000020700000000000027136 0ustar00zuulzuul00000000000000{ "server_group": { "name": "test", "policy": "anti-affinity", "rules": {"max_server_per_host": 3} } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-server-groups/v2.64/server-groups-post-resp.json0000664000175000017500000000045100000000000027321 0ustar00zuulzuul00000000000000{ "server_group": { "id": "5bbcc3c4-1da2-4437-a48a-66f15b1b13f9", "name": "test", "policy": "anti-affinity", "rules": {"max_server_per_host": 3}, "members": [], "project_id": "6f70656e737461636b20342065766572", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1664715 nova-21.2.4/doc/api_samples/os-server-password/0000775000175000017500000000000000000000000021415 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-server-password/get-password-resp.json0000664000175000017500000000055600000000000025704 0ustar00zuulzuul00000000000000{ "password": "xlozO3wLCBRWAa2yDjCCVx8vwNPypxnypmRYDa/zErlQ+EzPe1S/Gz6nfmC52mOlOSCRuUOmG7kqqgejPof6M7bOezS387zjq4LSvvwp28zUknzy4YzfFGhnHAdai3TxUJ26pfQCYrq8UTzmKF2Bq8ioSEtVVzM0A96pDh8W2i7BOz6MdoiVyiev/I1K2LsuipfxSJR7Wdke4zNXJjHHP2RfYsVbZ/k9ANu+Nz4iIH8/7Cacud/pphH7EjrY6a4RZNrjQskrhKYed0YERpotyjYk1eDtRe72GrSiXteqCM4biaQ5w3ruS+AcX//PXk3uJ5kC7d67fPXaVz4WaQRYMg==" }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0544724 nova-21.2.4/doc/api_samples/os-server-tags/0000775000175000017500000000000000000000000020511 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1664715 nova-21.2.4/doc/api_samples/os-server-tags/v2.26/0000775000175000017500000000000000000000000021266 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-server-tags/v2.26/server-tags-index-resp.json0000664000175000017500000000004100000000000026472 0ustar00zuulzuul00000000000000{ "tags": ["tag1", "tag2"] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/api_samples/os-server-tags/v2.26/server-tags-put-all-req.json0000664000175000017500000000004100000000000026557 0ustar00zuulzuul00000000000000{ "tags": ["tag1", "tag2"] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-server-tags/v2.26/server-tags-put-all-resp.json0000664000175000017500000000004100000000000026741 0ustar00zuulzuul00000000000000{ "tags": ["tag1", "tag2"] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/os-server-tags/v2.26/server-tags-show-details-resp.json0000664000175000017500000000602400000000000027775 0ustar00zuulzuul00000000000000{ "server": { "tags": ["tag1", "tag2"], "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "2012-12-02T02:11:55Z", "flavor": { "id": "1", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ] }, "hostId": "c949ab4256cea23b6089b710aa2df48bf6577ed915278b62e33ad8bb", "id": "5046e2f2-3b33-4041-b3cf-e085f73e78e7", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/5046e2f2-3b33-4041-b3cf-e085f73e78e7", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/5046e2f2-3b33-4041-b3cf-e085f73e78e7", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "progress": 0, "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2012-12-02T02:11:55Z", "key_name": null, "user_id": "fake", "locked": false, "description": null, "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "b8b357f7100d4391828f2177c922ef93", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:reservation_id": "r-00000001", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:hostname": "fake-hostname", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [], "OS-SRV-USG:launched_at": "2013-09-23T13:37:00.880302", "OS-SRV-USG:terminated_at": null, "security_groups": [ { "name": "default" } ], "host_status": "UP" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/os-server-tags/v2.26/servers-tags-details-resp.json0000664000175000017500000000655500000000000027213 0ustar00zuulzuul00000000000000{ "servers": [ { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "2013-09-03T04:01:32Z", "flavor": { "id": "1", "links": [ { "href": 
"http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ] }, "hostId": "bcf92836fc9ed4203a75cb0337afc7f917d2be504164b995c2334b25", "id": "f5dc173b-6804-445a-a6d8-c705dad5b5eb", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/f5dc173b-6804-445a-a6d8-c705dad5b5eb", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/f5dc173b-6804-445a-a6d8-c705dad5b5eb", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "progress": 0, "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2013-09-03T04:01:32Z", "user_id": "fake", "locked": false, "tags": ["tag1", "tag2"], "description": null, "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "c3f14e9812ad496baf92ccfb3c61e15f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:reservation_id": "r-00000001", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:hostname": "fake-hostname", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [], "OS-SRV-USG:launched_at": "2013-09-23T13:53:12.774549", "OS-SRV-USG:terminated_at": null, "security_groups": [ { "name": "default" } ], "host_status": "UP" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0544724 nova-21.2.4/doc/api_samples/os-server-topology/0000775000175000017500000000000000000000000021427 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1664715 nova-21.2.4/doc/api_samples/os-server-topology/v2.78/0000775000175000017500000000000000000000000022213 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-server-topology/v2.78/servers-topology-resp-user.json0000664000175000017500000000104400000000000030373 0ustar00zuulzuul00000000000000{ "nodes": [ { "memory_mb": 1024, "siblings": [ [ 0, 1 ] ], "vcpu_set": [ 0, 1 ] }, { "memory_mb": 2048, "siblings": [ [ 2, 3 ] ], "vcpu_set": [ 2, 3 ] } ], "pagesize_kb": 4 } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-server-topology/v2.78/servers-topology-resp.json0000664000175000017500000000142200000000000027417 0ustar00zuulzuul00000000000000{ "nodes": [ { "cpu_pinning": { "0": 0, "1": 5 }, "host_node": 0, "memory_mb": 1024, "siblings": [ [ 0, 1 ] ], "vcpu_set": [ 0, 1 ] }, { "cpu_pinning": { "2": 1, "3": 8 }, "host_node": 1, "memory_mb": 2048, "siblings": [ [ 2, 3 ] ], "vcpu_set": [ 2, 3 ] } ], "pagesize_kb": 4 } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1664715 
nova-21.2.4/doc/api_samples/os-services/0000775000175000017500000000000000000000000020072 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/service-disable-log-put-req.json0000664000175000017500000000012500000000000026176 0ustar00zuulzuul00000000000000{ "host": "host1", "binary": "nova-compute", "disabled_reason": "test2" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/service-disable-log-put-resp.json0000664000175000017500000000022600000000000026362 0ustar00zuulzuul00000000000000{ "service": { "binary": "nova-compute", "disabled_reason": "test2", "host": "host1", "status": "disabled" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/service-disable-put-req.json0000664000175000017500000000006500000000000025422 0ustar00zuulzuul00000000000000{ "host": "host1", "binary": "nova-compute" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/service-disable-put-resp.json0000664000175000017500000000016200000000000025602 0ustar00zuulzuul00000000000000{ "service": { "binary": "nova-compute", "host": "host1", "status": "disabled" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/service-enable-put-req.json0000664000175000017500000000006500000000000025245 0ustar00zuulzuul00000000000000{ "host": "host1", "binary": "nova-compute" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/service-enable-put-resp.json0000664000175000017500000000016100000000000025424 0ustar00zuulzuul00000000000000{ "service": { "binary": "nova-compute", "host": "host1", "status": "enabled" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/services-list-get-resp.json0000664000175000017500000000227300000000000025311 0ustar00zuulzuul00000000000000{ "services": [ { "id": 1, "binary": "nova-scheduler", "disabled_reason": "test1", "host": "host1", "state": "up", "status": "disabled", "updated_at": "2012-10-29T13:42:02.000000", "zone": "internal" }, { "id": 2, "binary": "nova-compute", "disabled_reason": "test2", "host": "host1", "state": "up", "status": "disabled", "updated_at": "2012-10-29T13:42:05.000000", "zone": "nova" }, { "id": 3, "binary": "nova-scheduler", "disabled_reason": null, "host": "host2", "state": "down", "status": "enabled", "updated_at": "2012-09-19T06:55:34.000000", "zone": "internal" }, { "id": 4, "binary": "nova-compute", "disabled_reason": "test4", "host": "host2", "state": "down", "status": "disabled", "updated_at": "2012-09-18T08:03:38.000000", "zone": "nova" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1704714 nova-21.2.4/doc/api_samples/os-services/v2.11/0000775000175000017500000000000000000000000020641 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/api_samples/os-services/v2.11/service-disable-log-put-req.json0000664000175000017500000000012500000000000026745 0ustar00zuulzuul00000000000000{ "host": "host1", "binary": "nova-compute", "disabled_reason": "test2" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/v2.11/service-disable-log-put-resp.json0000664000175000017500000000022600000000000027131 0ustar00zuulzuul00000000000000{ "service": { "binary": "nova-compute", "disabled_reason": "test2", "host": "host1", "status": "disabled" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/v2.11/service-disable-put-req.json0000664000175000017500000000006500000000000026171 0ustar00zuulzuul00000000000000{ "host": "host1", "binary": "nova-compute" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/v2.11/service-disable-put-resp.json0000664000175000017500000000016200000000000026351 0ustar00zuulzuul00000000000000{ "service": { "binary": "nova-compute", "host": "host1", "status": "disabled" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/v2.11/service-enable-put-req.json0000664000175000017500000000006500000000000026014 0ustar00zuulzuul00000000000000{ "host": "host1", "binary": "nova-compute" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/v2.11/service-enable-put-resp.json0000664000175000017500000000016100000000000026173 0ustar00zuulzuul00000000000000{ "service": { "binary": "nova-compute", "host": "host1", "status": "enabled" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/v2.11/service-force-down-put-req.json0000664000175000017500000000011700000000000026627 0ustar00zuulzuul00000000000000{ "host": "host1", "binary": "nova-compute", "forced_down": true } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/v2.11/service-force-down-put-resp.json0000664000175000017500000000016200000000000027011 0ustar00zuulzuul00000000000000{ "service": { "binary": "nova-compute", "host": "host1", "forced_down": true } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/v2.11/services-list-get-resp.json0000664000175000017500000000250300000000000026054 0ustar00zuulzuul00000000000000{ "services": [ { "id": 1, "binary": "nova-scheduler", "disabled_reason": "test1", "host": "host1", "state": "up", "status": "disabled", "updated_at": "2012-10-29T13:42:02.000000", "forced_down": false, "zone": "internal" }, { "id": 2, "binary": "nova-compute", "disabled_reason": "test2", "host": "host1", "state": "up", "status": "disabled", "updated_at": "2012-10-29T13:42:05.000000", "forced_down": false, "zone": "nova" }, { "id": 3, "binary": "nova-scheduler", "disabled_reason": null, "host": "host2", "state": "down", "status": "enabled", "updated_at": "2012-09-19T06:55:34.000000", "forced_down": false, "zone": "internal" }, { "id": 4, "binary": "nova-compute", "disabled_reason": "test4", 
"host": "host2", "state": "down", "status": "disabled", "updated_at": "2012-09-18T08:03:38.000000", "forced_down": false, "zone": "nova" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1704714 nova-21.2.4/doc/api_samples/os-services/v2.53/0000775000175000017500000000000000000000000020647 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/v2.53/service-disable-log-put-req.json0000664000175000017500000000010200000000000026746 0ustar00zuulzuul00000000000000{ "status": "disabled", "disabled_reason": "maintenance" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/v2.53/service-disable-log-put-resp.json0000664000175000017500000000052300000000000027137 0ustar00zuulzuul00000000000000{ "service": { "id": "e81d66a4-ddd3-4aba-8a84-171d1cb4d339", "binary": "nova-compute", "disabled_reason": "maintenance", "host": "host1", "state": "up", "status": "disabled", "updated_at": "2012-10-29T13:42:05.000000", "forced_down": false, "zone": "nova" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/v2.53/service-disable-put-req.json0000664000175000017500000000003400000000000026173 0ustar00zuulzuul00000000000000{ "status": "disabled" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/v2.53/service-disable-put-resp.json0000664000175000017500000000051200000000000026356 0ustar00zuulzuul00000000000000{ "service": { "id": "e81d66a4-ddd3-4aba-8a84-171d1cb4d339", "binary": "nova-compute", "disabled_reason": null, "host": "host1", "state": "up", "status": "disabled", "updated_at": "2012-10-29T13:42:05.000000", "forced_down": false, "zone": "nova" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/v2.53/service-enable-put-req.json0000664000175000017500000000003300000000000026015 0ustar00zuulzuul00000000000000{ "status": "enabled" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/v2.53/service-enable-put-resp.json0000664000175000017500000000051100000000000026200 0ustar00zuulzuul00000000000000{ "service": { "id": "e81d66a4-ddd3-4aba-8a84-171d1cb4d339", "binary": "nova-compute", "disabled_reason": null, "host": "host1", "state": "up", "status": "enabled", "updated_at": "2012-10-29T13:42:05.000000", "forced_down": false, "zone": "nova" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/v2.53/service-force-down-put-req.json0000664000175000017500000000003300000000000026632 0ustar00zuulzuul00000000000000{ "forced_down": true }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/v2.53/service-force-down-put-resp.json0000664000175000017500000000051600000000000027022 0ustar00zuulzuul00000000000000{ "service": { "id": "e81d66a4-ddd3-4aba-8a84-171d1cb4d339", "binary": "nova-compute", "disabled_reason": "test2", "host": "host1", "state": "down", "status": "disabled", 
"updated_at": "2012-10-29T13:42:05.000000", "forced_down": true, "zone": "nova" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/v2.53/services-list-get-resp.json0000664000175000017500000000272700000000000026072 0ustar00zuulzuul00000000000000{ "services": [ { "id": "c4726392-27de-4ff9-b2e0-5aa1d08a520f", "binary": "nova-scheduler", "disabled_reason": "test1", "host": "host1", "state": "up", "status": "disabled", "updated_at": "2012-10-29T13:42:02.000000", "forced_down": false, "zone": "internal" }, { "id": "e81d66a4-ddd3-4aba-8a84-171d1cb4d339", "binary": "nova-compute", "disabled_reason": "test2", "host": "host1", "state": "up", "status": "disabled", "updated_at": "2012-10-29T13:42:05.000000", "forced_down": false, "zone": "nova" }, { "id": "bbd684ff-d3f6-492e-a30a-a12a2d2db0e0", "binary": "nova-scheduler", "disabled_reason": null, "host": "host2", "state": "down", "status": "enabled", "updated_at": "2012-09-19T06:55:34.000000", "forced_down": false, "zone": "internal" }, { "id": "13aa304e-5340-45a7-a7fb-b6d6e914d272", "binary": "nova-compute", "disabled_reason": "test4", "host": "host2", "state": "down", "status": "disabled", "updated_at": "2012-09-18T08:03:38.000000", "forced_down": false, "zone": "nova" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1704714 nova-21.2.4/doc/api_samples/os-services/v2.69/0000775000175000017500000000000000000000000020656 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-services/v2.69/services-list-get-resp.json0000664000175000017500000000041300000000000026067 0ustar00zuulzuul00000000000000{ "services": [ { "binary": "nova-compute", "host": "host1", "status": "UNKNOWN" }, { "binary": "nova-compute", "host": "host2", "status": "UNKNOWN" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1704714 nova-21.2.4/doc/api_samples/os-shelve/0000775000175000017500000000000000000000000017535 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-shelve/os-shelve-offload.json0000664000175000017500000000003600000000000023744 0ustar00zuulzuul00000000000000{ "shelveOffload": null } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-shelve/os-shelve.json0000664000175000017500000000002600000000000022333 0ustar00zuulzuul00000000000000{ "shelve": null }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-shelve/os-unshelve.json0000664000175000017500000000003100000000000022672 0ustar00zuulzuul00000000000000{ "unshelve": null } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1704714 nova-21.2.4/doc/api_samples/os-shelve/v2.77/0000775000175000017500000000000000000000000020320 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-shelve/v2.77/os-shelve.json0000664000175000017500000000002600000000000023116 0ustar00zuulzuul00000000000000{ "shelve": null 
}././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-shelve/v2.77/os-unshelve-null.json0000664000175000017500000000003000000000000024424 0ustar00zuulzuul00000000000000{ "unshelve": null }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-shelve/v2.77/os-unshelve.json0000664000175000017500000000010200000000000023454 0ustar00zuulzuul00000000000000{ "unshelve": { "availability_zone": "us-west" } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1704714 nova-21.2.4/doc/api_samples/os-simple-tenant-usage/0000775000175000017500000000000000000000000022131 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-simple-tenant-usage/simple-tenant-usage-get-detail.json0000664000175000017500000000174600000000000030733 0ustar00zuulzuul00000000000000{ "tenant_usages": [ { "start": "2012-10-08T20:10:44.587336", "stop": "2012-10-08T21:10:44.587336", "tenant_id": "6f70656e737461636b20342065766572", "total_hours": 1.0, "total_local_gb_usage": 1.0, "total_memory_mb_usage": 512.0, "total_vcpus_usage": 1.0, "server_usages": [ { "ended_at": null, "flavor": "m1.tiny", "hours": 1.0, "instance_id": "1f1deceb-17b5-4c04-84c7-e0d4499c8fe0", "local_gb": 1, "memory_mb": 512, "name": "new-server-test", "started_at": "2012-10-08T20:10:44.541277", "state": "active", "tenant_id": "6f70656e737461636b20342065766572", "uptime": 3600, "vcpus": 1 } ] } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-simple-tenant-usage/simple-tenant-usage-get-specific.json0000664000175000017500000000156500000000000031255 0ustar00zuulzuul00000000000000{ "tenant_usage": { "server_usages": [ { "ended_at": null, "flavor": "m1.tiny", "hours": 1.0, "instance_id": "1f1deceb-17b5-4c04-84c7-e0d4499c8fe0", "local_gb": 1, "memory_mb": 512, "name": "new-server-test", "started_at": "2012-10-08T20:10:44.541277", "state": "active", "tenant_id": "6f70656e737461636b20342065766572", "uptime": 3600, "vcpus": 1 } ], "start": "2012-10-08T20:10:44.587336", "stop": "2012-10-08T21:10:44.587336", "tenant_id": "6f70656e737461636b20342065766572", "total_hours": 1.0, "total_local_gb_usage": 1.0, "total_memory_mb_usage": 512.0, "total_vcpus_usage": 1.0 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-simple-tenant-usage/simple-tenant-usage-get.json0000664000175000017500000000056100000000000027465 0ustar00zuulzuul00000000000000{ "tenant_usages": [ { "start": "2012-10-08T21:10:44.587336", "stop": "2012-10-08T22:10:44.587336", "tenant_id": "6f70656e737461636b20342065766572", "total_hours": 1.0, "total_local_gb_usage": 1.0, "total_memory_mb_usage": 512.0, "total_vcpus_usage": 1.0 } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1704714 nova-21.2.4/doc/api_samples/os-simple-tenant-usage/v2.40/0000775000175000017500000000000000000000000022702 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/api_samples/os-simple-tenant-usage/v2.40/simple-tenant-usage-get-all.json0000664000175000017500000000474500000000000031014 0ustar00zuulzuul00000000000000{ "tenant_usages": [ { "server_usages": [ { "ended_at": null, "flavor": "m1.tiny", "hours": 1.0, "instance_id": "1f1deceb-17b5-4c04-84c7-e0d4499c8f06", "local_gb": 1, "memory_mb": 512, "name": "instance-3", "started_at": "2018-10-09T11:29:04.166194", "state": "active", "tenant_id": "0000000e737461636b20342065000000", "uptime": 3600, "vcpus": 1 } ], "start": "2018-10-09T11:29:04.166194", "stop": "2018-10-09T12:29:04.166194", "tenant_id": "0000000e737461636b20342065000000", "total_hours": 1.0, "total_local_gb_usage": 1.0, "total_memory_mb_usage": 512.0, "total_vcpus_usage": 1.0 }, { "server_usages": [ { "ended_at": null, "flavor": "m1.tiny", "hours": 1.0, "instance_id": "1f1deceb-17b5-4c04-84c7-e0d4499c8f00", "local_gb": 1, "memory_mb": 512, "name": "instance-1", "started_at": "2018-10-09T11:29:04.166194", "state": "active", "tenant_id": "6f70656e737461636b20342065766572", "uptime": 3600, "vcpus": 1 }, { "ended_at": null, "flavor": "m1.tiny", "hours": 1.0, "instance_id": "1f1deceb-17b5-4c04-84c7-e0d4499c8f03", "local_gb": 1, "memory_mb": 512, "name": "instance-2", "started_at": "2018-10-09T11:29:04.166194", "state": "active", "tenant_id": "6f70656e737461636b20342065766572", "uptime": 3600, "vcpus": 1 } ], "start": "2018-10-09T11:29:04.166194", "stop": "2018-10-09T12:29:04.166194", "tenant_id": "6f70656e737461636b20342065766572", "total_hours": 2.0, "total_local_gb_usage": 2.0, "total_memory_mb_usage": 1024.0, "total_vcpus_usage": 2.0 } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-simple-tenant-usage/v2.40/simple-tenant-usage-get-detail.json0000664000175000017500000000245300000000000031500 0ustar00zuulzuul00000000000000{ "tenant_usages": [ { "start": "2012-10-08T20:10:44.587336", "stop": "2012-10-08T21:10:44.587336", "tenant_id": "6f70656e737461636b20342065766572", "total_hours": 1.0, "total_local_gb_usage": 1.0, "total_memory_mb_usage": 512.0, "total_vcpus_usage": 1.0, "server_usages": [ { "ended_at": null, "flavor": "m1.tiny", "hours": 1.0, "instance_id": "1f1deceb-17b5-4c04-84c7-e0d4499c8fe0", "local_gb": 1, "memory_mb": 512, "name": "instance-2", "started_at": "2012-10-08T20:10:44.541277", "state": "active", "tenant_id": "6f70656e737461636b20342065766572", "uptime": 3600, "vcpus": 1 } ] } ], "tenant_usages_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/os-simple-tenant-usage?detailed=1&end=2016-10-12+18%3A22%3A04.868106&limit=1&marker=1f1deceb-17b5-4c04-84c7-e0d4499c8fe0&start=2016-10-12+18%3A22%3A04.868106", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-simple-tenant-usage/v2.40/simple-tenant-usage-get-specific.json0000664000175000017500000000231700000000000032022 0ustar00zuulzuul00000000000000{ "tenant_usage": { "server_usages": [ { "ended_at": null, "flavor": "m1.tiny", "hours": 1.0, "instance_id": "1f1deceb-17b5-4c04-84c7-e0d4499c8fe0", "local_gb": 1, "memory_mb": 512, "name": "instance-2", "started_at": "2012-10-08T20:10:44.541277", "state": "active", "tenant_id": "6f70656e737461636b20342065766572", "uptime": 3600, "vcpus": 1 } ], "start": "2012-10-08T20:10:44.587336", "stop": "2012-10-08T21:10:44.587336", "tenant_id": "6f70656e737461636b20342065766572", "total_hours": 
1.0, "total_local_gb_usage": 1.0, "total_memory_mb_usage": 512.0, "total_vcpus_usage": 1.0 }, "tenant_usage_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/os-simple-tenant-usage/6f70656e737461636b20342065766572?end=2016-10-12+18%3A22%3A04.868106&limit=1&marker=1f1deceb-17b5-4c04-84c7-e0d4499c8fe0&start=2016-10-12+18%3A22%3A04.868106", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-simple-tenant-usage/v2.40/simple-tenant-usage-get.json0000664000175000017500000000126000000000000030233 0ustar00zuulzuul00000000000000{ "tenant_usages": [ { "start": "2012-10-08T21:10:44.587336", "stop": "2012-10-08T22:10:44.587336", "tenant_id": "6f70656e737461636b20342065766572", "total_hours": 1.0, "total_local_gb_usage": 1.0, "total_memory_mb_usage": 512.0, "total_vcpus_usage": 1.0 } ], "tenant_usages_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/os-simple-tenant-usage?end=2016-10-12+18%3A22%3A04.868106&limit=1&marker=1f1deceb-17b5-4c04-84c7-e0d4499c8fe0&start=2016-10-12+18%3A22%3A04.868106", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1744714 nova-21.2.4/doc/api_samples/os-suspend-server/0000775000175000017500000000000000000000000021234 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-suspend-server/server-resume.json0000664000175000017500000000002600000000000024731 0ustar00zuulzuul00000000000000{ "resume": null }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-suspend-server/server-suspend.json0000664000175000017500000000002700000000000025113 0ustar00zuulzuul00000000000000{ "suspend": null }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1744714 nova-21.2.4/doc/api_samples/os-tenant-networks/0000775000175000017500000000000000000000000021412 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-tenant-networks/networks-list-res.json0000664000175000017500000000024500000000000025722 0ustar00zuulzuul00000000000000{ "networks": [ { "cidr": "None", "id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "label": "private" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-tenant-networks/networks-post-req.json0000664000175000017500000000024500000000000025732 0ustar00zuulzuul00000000000000{ "network": { "label": "public", "cidr": "172.0.0.0/24", "vlan_start": 1, "num_networks": 1, "network_size": 255 } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-tenant-networks/networks-post-res.json0000664000175000017500000000021300000000000025727 0ustar00zuulzuul00000000000000{ "network": { "cidr": "172.0.0.0/24", "id": "5bbcc3c4-1da2-4437-a48a-66f15b1b13f9", "label": "public" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1744714 nova-21.2.4/doc/api_samples/os-virtual-interfaces/0000775000175000017500000000000000000000000022056 
5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1744714 nova-21.2.4/doc/api_samples/os-virtual-interfaces/v2.12/0000775000175000017500000000000000000000000022626 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-virtual-interfaces/v2.12/vifs-list-resp.json0000664000175000017500000000034100000000000026406 0ustar00zuulzuul00000000000000{ "virtual_interfaces": [ { "id": "cec8b9bb-5d22-4104-b3c8-4c35db3210a6", "mac_address": "fa:16:3e:3c:ce:6f", "net_id": "cec8b9bb-5d22-4104-b3c8-4c35db3210a7" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-virtual-interfaces/vifs-list-resp-v2.json0000664000175000017500000000036000000000000026164 0ustar00zuulzuul00000000000000{ "virtual_interfaces": [ { "id": "cec8b9bb-5d22-4104-b3c8-4c35db3210a6", "mac_address": "fa:16:3e:3c:ce:6f", "OS-EXT-VIF-NET:net_id": "cec8b9bb-5d22-4104-b3c8-4c35db3210a7" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-virtual-interfaces/vifs-list-resp.json0000664000175000017500000000024200000000000025636 0ustar00zuulzuul00000000000000{ "virtual_interfaces": [ { "id": "cec8b9bb-5d22-4104-b3c8-4c35db3210a6", "mac_address": "fa:16:3e:3c:ce:6f" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1744714 nova-21.2.4/doc/api_samples/os-volumes/0000775000175000017500000000000000000000000017741 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/attach-volume-to-server-req.json0000664000175000017500000000017400000000000026120 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "device": "/dev/sdb" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/attach-volume-to-server-resp.json0000664000175000017500000000035600000000000026304 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "device": "/dev/sdb", "id": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "serverId": "802db873-0373-4bdd-a433-d272a539ba18", "volumeId": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/list-volume-attachments-resp.json0000664000175000017500000000100300000000000026366 0ustar00zuulzuul00000000000000{ "volumeAttachments": [ { "device": "/dev/sdc", "id": "227cc671-f30b-4488-96fd-7d0bf13648d8", "serverId": "4b293d31-ebd5-4a7f-be03-874b90021e54", "volumeId": "227cc671-f30b-4488-96fd-7d0bf13648d8" }, { "device": "/dev/sdb", "id": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "serverId": "4b293d31-ebd5-4a7f-be03-874b90021e54", "volumeId": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/os-volumes-detail-resp.json0000664000175000017500000000141500000000000025155 0ustar00zuulzuul00000000000000{ "volumes": [ { "attachments": [ { "device": "/", "id": 
"a26887c6-c47b-4654-abb5-dfadf7d3f803", "serverId": "3912f2b4-c5ba-4aec-9165-872876fe202e", "volumeId": "a26887c6-c47b-4654-abb5-dfadf7d3f803" } ], "availabilityZone": "zone1:host1", "createdAt": "1999-01-01T01:01:01.000000", "displayDescription": "Volume Description", "displayName": "Volume Name", "id": "a26887c6-c47b-4654-abb5-dfadf7d3f803", "metadata": {}, "size": 100, "snapshotId": null, "status": "in-use", "volumeType": "Backup" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/os-volumes-get-resp.json0000664000175000017500000000125700000000000024476 0ustar00zuulzuul00000000000000{ "volume": { "attachments": [ { "device": "/", "id": "a26887c6-c47b-4654-abb5-dfadf7d3f803", "serverId": "3912f2b4-c5ba-4aec-9165-872876fe202e", "volumeId": "a26887c6-c47b-4654-abb5-dfadf7d3f803" } ], "availabilityZone": "zone1:host1", "createdAt": "2013-02-18T14:51:18.528085", "displayDescription": "Volume Description", "displayName": "Volume Name", "id": "a26887c6-c47b-4654-abb5-dfadf7d3f803", "metadata": {}, "size": 100, "snapshotId": null, "status": "in-use", "volumeType": "Backup" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/os-volumes-index-resp.json0000664000175000017500000000141400000000000025021 0ustar00zuulzuul00000000000000{ "volumes": [ { "attachments": [ { "device": "/", "id": "a26887c6-c47b-4654-abb5-dfadf7d3f803", "serverId": "3912f2b4-c5ba-4aec-9165-872876fe202e", "volumeId": "a26887c6-c47b-4654-abb5-dfadf7d3f803" } ], "availabilityZone": "zone1:host1", "createdAt": "2013-02-19T20:01:40.274897", "displayDescription": "Volume Description", "displayName": "Volume Name", "id": "a26887c6-c47b-4654-abb5-dfadf7d3f803", "metadata": {}, "size": 100, "snapshotId": null, "status": "in-use", "volumeType": "Backup" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/os-volumes-post-req.json0000664000175000017500000000026600000000000024521 0ustar00zuulzuul00000000000000{ "volume": { "availability_zone": "zone1:host1", "display_name": "Volume Name", "display_description": "Volume Description", "size": 100 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/os-volumes-post-resp.json0000664000175000017500000000125700000000000024704 0ustar00zuulzuul00000000000000{ "volume": { "attachments": [ { "device": "/", "id": "a26887c6-c47b-4654-abb5-dfadf7d3f803", "serverId": "3912f2b4-c5ba-4aec-9165-872876fe202e", "volumeId": "a26887c6-c47b-4654-abb5-dfadf7d3f803" } ], "availabilityZone": "zone1:host1", "createdAt": "2013-02-18T14:51:17.970024", "displayDescription": "Volume Description", "displayName": "Volume Name", "id": "a26887c6-c47b-4654-abb5-dfadf7d3f803", "metadata": {}, "size": 100, "snapshotId": null, "status": "in-use", "volumeType": "Backup" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/snapshot-create-req.json0000664000175000017500000000030300000000000024515 0ustar00zuulzuul00000000000000{ "snapshot": { "display_name": "snap-001", "display_description": "Daily backup", "volume_id": "521752a6-acf6-4b2d-bc7a-119f9148cd8c", "force": false } }././@PaxHeader0000000000000000000000000000002600000000000011453 
xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/snapshot-create-resp.json0000664000175000017500000000044100000000000024702 0ustar00zuulzuul00000000000000{ "snapshot": { "createdAt": "2013-02-25T16:27:54.680544", "displayDescription": "Daily backup", "displayName": "snap-001", "id": 100, "size": 100, "status": "available", "volumeId": "521752a6-acf6-4b2d-bc7a-119f9148cd8c" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/snapshots-detail-resp.json0000664000175000017500000000156500000000000025074 0ustar00zuulzuul00000000000000{ "snapshots": [ { "createdAt": "2013-02-25T16:27:54.671372", "displayDescription": "Default description", "displayName": "Default name", "id": 100, "size": 100, "status": "available", "volumeId": 12 }, { "createdAt": "2013-02-25T16:27:54.671378", "displayDescription": "Default description", "displayName": "Default name", "id": 101, "size": 100, "status": "available", "volumeId": 12 }, { "createdAt": "2013-02-25T16:27:54.671381", "displayDescription": "Default description", "displayName": "Default name", "id": 102, "size": 100, "status": "available", "volumeId": 12 } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/snapshots-list-resp.json0000664000175000017500000000156500000000000024605 0ustar00zuulzuul00000000000000{ "snapshots": [ { "createdAt": "2013-02-25T16:27:54.684999", "displayDescription": "Default description", "displayName": "Default name", "id": 100, "size": 100, "status": "available", "volumeId": 12 }, { "createdAt": "2013-02-25T16:27:54.685005", "displayDescription": "Default description", "displayName": "Default name", "id": 101, "size": 100, "status": "available", "volumeId": 12 }, { "createdAt": "2013-02-25T16:27:54.685008", "displayDescription": "Default description", "displayName": "Default name", "id": 102, "size": 100, "status": "available", "volumeId": 12 } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/snapshots-show-resp.json0000664000175000017500000000041200000000000024600 0ustar00zuulzuul00000000000000{ "snapshot": { "createdAt": "2013-02-25T16:27:54.724209", "displayDescription": "Default description", "displayName": "Default name", "id": "100", "size": 100, "status": "available", "volumeId": 12 } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/update-volume-req.json0000664000175000017500000000013600000000000024210 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "227cc671-f30b-4488-96fd-7d0bf13648d8" } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1744714 nova-21.2.4/doc/api_samples/os-volumes/v2.49/0000775000175000017500000000000000000000000020523 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/v2.49/attach-volume-to-server-req.json0000664000175000017500000000016400000000000026701 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "tag": "foo" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/api_samples/os-volumes/v2.49/attach-volume-to-server-resp.json0000664000175000017500000000035600000000000027066 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "device": "/dev/sdb", "id": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "serverId": "69d19439-fa5f-4d6e-8b78-1868e7eb93a5", "volumeId": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/v2.49/list-volume-attachments-resp.json0000664000175000017500000000100300000000000027150 0ustar00zuulzuul00000000000000{ "volumeAttachments": [ { "device": "/dev/sdc", "id": "227cc671-f30b-4488-96fd-7d0bf13648d8", "serverId": "1453a6a8-10ec-4797-9b9e-da3c703579d5", "volumeId": "227cc671-f30b-4488-96fd-7d0bf13648d8" }, { "device": "/dev/sdb", "id": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "serverId": "1453a6a8-10ec-4797-9b9e-da3c703579d5", "volumeId": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/v2.49/update-volume-req.json0000664000175000017500000000013600000000000024772 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "227cc671-f30b-4488-96fd-7d0bf13648d8" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/v2.49/volume-attachment-detail-resp.json0000664000175000017500000000035600000000000027266 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "device": "/dev/sdb", "id": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "serverId": "9ad0352c-48ff-4290-9db8-3385a676f035", "volumeId": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113" } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1744714 nova-21.2.4/doc/api_samples/os-volumes/v2.70/0000775000175000017500000000000000000000000020515 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/v2.70/attach-volume-to-server-req.json0000664000175000017500000000016400000000000026673 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "tag": "foo" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/v2.70/attach-volume-to-server-resp.json0000664000175000017500000000040400000000000027052 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "device": "/dev/sdb", "id": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "serverId": "70f5c62a-972d-4a8b-abcf-e1375ca7f8c0", "tag": "foo", "volumeId": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/v2.70/list-volume-attachments-resp.json0000664000175000017500000000106600000000000027153 0ustar00zuulzuul00000000000000{ "volumeAttachments": [ { "device": "/dev/sdc", "id": "227cc671-f30b-4488-96fd-7d0bf13648d8", "serverId": "68426b0f-511b-4cb3-8169-bba2e7a8bc89", "tag": null, "volumeId": "227cc671-f30b-4488-96fd-7d0bf13648d8" }, { "device": "/dev/sdb", "id": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "serverId": "68426b0f-511b-4cb3-8169-bba2e7a8bc89", "tag": "foo", "volumeId": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113" } ] 
}././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/v2.70/update-volume-req.json0000664000175000017500000000013600000000000024764 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "227cc671-f30b-4488-96fd-7d0bf13648d8" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/v2.70/volume-attachment-detail-resp.json0000664000175000017500000000040400000000000027252 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "device": "/dev/sdb", "id": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "serverId": "d989feee-002d-40f6-b47d-f0dbee48bbc1", "tag": "foo", "volumeId": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113" } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1784713 nova-21.2.4/doc/api_samples/os-volumes/v2.79/0000775000175000017500000000000000000000000020526 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/v2.79/attach-volume-to-server-req.json0000664000175000017500000000023300000000000026701 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "tag": "foo", "delete_on_termination": true } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/v2.79/attach-volume-to-server-resp.json0000664000175000017500000000045300000000000027067 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "delete_on_termination": true, "device": "/dev/sdb", "id": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "serverId": "09b3b9d1-b8c5-48e1-841d-62c3ef967a88", "tag": "foo", "volumeId": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/v2.79/list-volume-attachments-resp.json0000664000175000017500000000121500000000000027160 0ustar00zuulzuul00000000000000{ "volumeAttachments": [ { "delete_on_termination": false, "device": "/dev/sdc", "id": "227cc671-f30b-4488-96fd-7d0bf13648d8", "serverId": "d5e4ae35-ac0e-4311-a8c5-0ee863e951d9", "tag": null, "volumeId": "227cc671-f30b-4488-96fd-7d0bf13648d8" }, { "delete_on_termination": true, "device": "/dev/sdb", "id": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "serverId": "d5e4ae35-ac0e-4311-a8c5-0ee863e951d9", "tag": "foo", "volumeId": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/v2.79/update-volume-req.json0000664000175000017500000000013600000000000024775 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "227cc671-f30b-4488-96fd-7d0bf13648d8" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/v2.79/volume-attachment-detail-resp.json0000664000175000017500000000045300000000000027267 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "delete_on_termination": true, "device": "/dev/sdb", "id": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "serverId": "2aad99d3-7aa4-41e9-b4e6-3f960b115d68", "tag": "foo", "volumeId": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113" } 
}././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1784713 nova-21.2.4/doc/api_samples/os-volumes/v2.85/0000775000175000017500000000000000000000000020523 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/v2.85/attach-volume-to-server-req.json0000664000175000017500000000023300000000000026676 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "tag": "foo", "delete_on_termination": true } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/v2.85/attach-volume-to-server-resp.json0000664000175000017500000000045300000000000027064 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "delete_on_termination": true, "device": "/dev/sdb", "id": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "serverId": "09b3b9d1-b8c5-48e1-841d-62c3ef967a88", "tag": "foo", "volumeId": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/v2.85/list-volume-attachments-resp.json0000664000175000017500000000121500000000000027155 0ustar00zuulzuul00000000000000{ "volumeAttachments": [ { "delete_on_termination": false, "device": "/dev/sdc", "id": "227cc671-f30b-4488-96fd-7d0bf13648d8", "serverId": "d5e4ae35-ac0e-4311-a8c5-0ee863e951d9", "tag": null, "volumeId": "227cc671-f30b-4488-96fd-7d0bf13648d8" }, { "delete_on_termination": true, "device": "/dev/sdb", "id": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "serverId": "d5e4ae35-ac0e-4311-a8c5-0ee863e951d9", "tag": "foo", "volumeId": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/v2.85/update-volume-attachment-delete-flag-req.json0000664000175000017500000000020600000000000031265 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "delete_on_termination": true } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/v2.85/update-volume-req.json0000664000175000017500000000013600000000000024772 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "227cc671-f30b-4488-96fd-7d0bf13648d8" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/v2.85/volume-attachment-detail-resp.json0000664000175000017500000000045300000000000027264 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "delete_on_termination": true, "device": "/dev/sdb", "id": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "serverId": "2aad99d3-7aa4-41e9-b4e6-3f960b115d68", "tag": "foo", "volumeId": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/os-volumes/volume-attachment-detail-resp.json0000664000175000017500000000035600000000000026504 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "device": "/dev/sdb", "id": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "serverId": "1ad6852e-6605-4510-b639-d0bff864b49a", "volumeId": 
"a07f71dc-8151-4e7d-a0cc-cd24a3f11113" } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1784713 nova-21.2.4/doc/api_samples/server-ips/0000775000175000017500000000000000000000000017727 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/server-ips/server-ips-network-resp.json0000664000175000017500000000015300000000000025356 0ustar00zuulzuul00000000000000{ "private": [ { "addr": "192.168.1.30", "version": 4 } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/server-ips/server-ips-resp.json0000664000175000017500000000023400000000000023667 0ustar00zuulzuul00000000000000{ "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1784713 nova-21.2.4/doc/api_samples/server-metadata/0000775000175000017500000000000000000000000020714 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/server-metadata/server-metadata-all-req.json0000664000175000017500000000006700000000000026231 0ustar00zuulzuul00000000000000{ "metadata": { "foo": "Foo Value" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/server-metadata/server-metadata-all-resp.json0000664000175000017500000000006600000000000026412 0ustar00zuulzuul00000000000000{ "metadata": { "foo": "Foo Value" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/server-metadata/server-metadata-req.json0000664000175000017500000000006300000000000025457 0ustar00zuulzuul00000000000000{ "meta": { "foo": "Bar Value" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/server-metadata/server-metadata-resp.json0000664000175000017500000000006300000000000025641 0ustar00zuulzuul00000000000000{ "meta": { "foo": "Foo Value" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0584724 nova-21.2.4/doc/api_samples/server-migrations/0000775000175000017500000000000000000000000021310 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1784713 nova-21.2.4/doc/api_samples/server-migrations/v2.22/0000775000175000017500000000000000000000000022061 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/server-migrations/v2.22/force_complete.json0000664000175000017500000000003700000000000025742 0ustar00zuulzuul00000000000000{ "force_complete": null } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/server-migrations/v2.22/live-migrate-server.json0000664000175000017500000000023200000000000026642 0ustar00zuulzuul00000000000000{ "os-migrateLive": { "host": "01c0cadef72d47e28a672a76060d492c", "block_migration": false, "disk_over_commit": false } } ././@PaxHeader0000000000000000000000000000003400000000000011452 
xustar000000000000000028 mtime=1636736378.1784713 nova-21.2.4/doc/api_samples/server-migrations/v2.23/0000775000175000017500000000000000000000000022062 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/server-migrations/v2.23/migrations-get.json0000664000175000017500000000120500000000000025704 0ustar00zuulzuul00000000000000{ "migration": { "created_at": "2016-01-29T13:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "server_uuid": "4cfba335-03d8-49b2-8c52-e69043d1e8fe", "source_compute": "compute1", "source_node": "node1", "status": "running", "memory_total_bytes": 123456, "memory_processed_bytes": 12345, "memory_remaining_bytes": 111111, "disk_total_bytes": 234567, "disk_processed_bytes": 23456, "disk_remaining_bytes": 211111, "updated_at": "2016-01-29T13:42:02.000000" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/server-migrations/v2.23/migrations-index.json0000664000175000017500000000133200000000000026235 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-01-29T13:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "server_uuid": "4cfba335-03d8-49b2-8c52-e69043d1e8fe", "source_compute": "compute1", "source_node": "node1", "status": "running", "memory_total_bytes": 123456, "memory_processed_bytes": 12345, "memory_remaining_bytes": 111111, "disk_total_bytes": 234567, "disk_processed_bytes": 23456, "disk_remaining_bytes": 211111, "updated_at": "2016-01-29T13:42:02.000000" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1784713 nova-21.2.4/doc/api_samples/server-migrations/v2.24/0000775000175000017500000000000000000000000022063 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/server-migrations/v2.24/live-migrate-server.json0000664000175000017500000000023200000000000026644 0ustar00zuulzuul00000000000000{ "os-migrateLive": { "host": "01c0cadef72d47e28a672a76060d492c", "block_migration": false, "disk_over_commit": false } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1784713 nova-21.2.4/doc/api_samples/server-migrations/v2.59/0000775000175000017500000000000000000000000022073 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/server-migrations/v2.59/migrations-get.json0000664000175000017500000000127500000000000025724 0ustar00zuulzuul00000000000000{ "migration": { "created_at": "2016-01-29T13:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "server_uuid": "4cfba335-03d8-49b2-8c52-e69043d1e8fe", "source_compute": "compute1", "source_node": "node1", "status": "running", "memory_total_bytes": 123456, "memory_processed_bytes": 12345, "memory_remaining_bytes": 111111, "disk_total_bytes": 234567, "disk_processed_bytes": 23456, "disk_remaining_bytes": 211111, "updated_at": "2016-01-29T13:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/api_samples/server-migrations/v2.59/migrations-index.json0000664000175000017500000000142600000000000026252 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-01-29T13:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "server_uuid": "4cfba335-03d8-49b2-8c52-e69043d1e8fe", "source_compute": "compute1", "source_node": "node1", "status": "running", "memory_total_bytes": 123456, "memory_processed_bytes": 12345, "memory_remaining_bytes": 111111, "disk_total_bytes": 234567, "disk_processed_bytes": 23456, "disk_remaining_bytes": 211111, "updated_at": "2016-01-29T13:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1784713 nova-21.2.4/doc/api_samples/server-migrations/v2.65/0000775000175000017500000000000000000000000022070 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/server-migrations/v2.65/live-migrate-server.json0000664000175000017500000000013200000000000026650 0ustar00zuulzuul00000000000000{ "os-migrateLive": { "host": null, "block_migration": "auto" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1784713 nova-21.2.4/doc/api_samples/server-migrations/v2.80/0000775000175000017500000000000000000000000022065 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/server-migrations/v2.80/live-migrate-server.json0000664000175000017500000000013200000000000026645 0ustar00zuulzuul00000000000000{ "os-migrateLive": { "host": null, "block_migration": "auto" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/server-migrations/v2.80/migrations-get.json0000664000175000017500000000146600000000000025720 0ustar00zuulzuul00000000000000{ "migration": { "created_at": "2016-01-29T13:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "server_uuid": "4cfba335-03d8-49b2-8c52-e69043d1e8fe", "source_compute": "compute1", "source_node": "node1", "status": "running", "memory_total_bytes": 123456, "memory_processed_bytes": 12345, "memory_remaining_bytes": 111111, "disk_total_bytes": 234567, "disk_processed_bytes": 23456, "disk_remaining_bytes": 211111, "updated_at": "2016-01-29T13:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "8dbaa0f0-ab95-4ffe-8cb4-9c89d2ac9d24", "project_id": "5f705771-3aa9-4f4c-8660-0d9522ffdbea" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/server-migrations/v2.80/migrations-index.json0000664000175000017500000000162700000000000026247 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-01-29T13:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "server_uuid": "4cfba335-03d8-49b2-8c52-e69043d1e8fe", "source_compute": "compute1", "source_node": "node1", "status": "running", "memory_total_bytes": 123456, "memory_processed_bytes": 12345, "memory_remaining_bytes": 111111, "disk_total_bytes": 234567, "disk_processed_bytes": 23456, "disk_remaining_bytes": 211111, "updated_at": 
"2016-01-29T13:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "8dbaa0f0-ab95-4ffe-8cb4-9c89d2ac9d24", "project_id": "5f705771-3aa9-4f4c-8660-0d9522ffdbea" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1824713 nova-21.2.4/doc/api_samples/servers/0000775000175000017500000000000000000000000017321 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/server-action-addfloatingip-req.json0000664000175000017500000000015300000000000026364 0ustar00zuulzuul00000000000000{ "addFloatingIp" : { "address": "10.10.10.10", "fixed_address": "192.168.1.30" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/server-action-confirm-resize.json0000664000175000017500000000003600000000000025726 0ustar00zuulzuul00000000000000{ "confirmResize" : null }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/server-action-create-image.json0000664000175000017500000000020000000000000025306 0ustar00zuulzuul00000000000000{ "createImage" : { "name" : "foo-image", "metadata": { "meta_var": "meta_val" } } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/server-action-reboot.json0000664000175000017500000000006200000000000024263 0ustar00zuulzuul00000000000000{ "reboot" : { "type" : "HARD" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/server-action-rebuild-resp.json0000664000175000017500000000343100000000000025371 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "OS-DCF:diskConfig": "AUTO", "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "adminPass": "seekr3t", "created": "2013-11-14T06:29:00Z", "flavor": { "id": "1", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ] }, "hostId": "28d8d56f0e3a77e20891f455721cbb68032e017045e20aa5dfc6cb66", "id": "a0a80a94-3d81-4a10-822a-daa0cf9e870b", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/a0a80a94-3d81-4a10-822a-daa0cf9e870b", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/a0a80a94-3d81-4a10-822a-daa0cf9e870b", "rel": "bookmark" } ], "metadata": { "meta_var": "meta_val" }, "name": "foobar", "progress": 0, "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2013-11-14T06:29:02Z", "user_id": "fake" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/server-action-rebuild.json0000664000175000017500000000156400000000000024427 0ustar00zuulzuul00000000000000{ "rebuild" : { "accessIPv4" : "1.2.3.4", "accessIPv6" : "80fe::", "OS-DCF:diskConfig": "AUTO", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "name" : "foobar", "adminPass" : "seekr3t", 
"metadata" : { "meta_var" : "meta_val" }, "personality": [ { "path": "/etc/banner.txt", "contents": "ICAgICAgDQoiQSBjbG91ZCBkb2VzIG5vdCBrbm93IHdoeSBp dCBtb3ZlcyBpbiBqdXN0IHN1Y2ggYSBkaXJlY3Rpb24gYW5k IGF0IHN1Y2ggYSBzcGVlZC4uLkl0IGZlZWxzIGFuIGltcHVs c2lvbi4uLnRoaXMgaXMgdGhlIHBsYWNlIHRvIGdvIG5vdy4g QnV0IHRoZSBza3kga25vd3MgdGhlIHJlYXNvbnMgYW5kIHRo ZSBwYXR0ZXJucyBiZWhpbmQgYWxsIGNsb3VkcywgYW5kIHlv dSB3aWxsIGtub3csIHRvbywgd2hlbiB5b3UgbGlmdCB5b3Vy c2VsZiBoaWdoIGVub3VnaCB0byBzZWUgYmV5b25kIGhvcml6 b25zLiINCg0KLVJpY2hhcmQgQmFjaA==" } ] } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/server-action-removefloatingip-req.json0000664000175000017500000000010500000000000027126 0ustar00zuulzuul00000000000000{ "removeFloatingIp": { "address": "172.16.10.7" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/server-action-resize.json0000664000175000017500000000013100000000000024267 0ustar00zuulzuul00000000000000{ "resize" : { "flavorRef" : "2", "OS-DCF:diskConfig": "AUTO" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/server-action-revert-resize.json0000664000175000017500000000003500000000000025577 0ustar00zuulzuul00000000000000{ "revertResize" : null }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/server-action-start.json0000664000175000017500000000003100000000000024122 0ustar00zuulzuul00000000000000{ "os-start" : null }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/server-action-stop.json0000664000175000017500000000003000000000000023751 0ustar00zuulzuul00000000000000{ "os-stop" : null }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/server-create-req-v237.json0000664000175000017500000000233400000000000024251 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "name" : "new-server-test", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "flavorRef" : "1", "availability_zone": "us-west", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "personality": [ { "path": "/etc/banner.txt", "contents": "ICAgICAgDQoiQSBjbG91ZCBkb2VzIG5vdCBrbm93IHdoeSBp dCBtb3ZlcyBpbiBqdXN0IHN1Y2ggYSBkaXJlY3Rpb24gYW5k IGF0IHN1Y2ggYSBzcGVlZC4uLkl0IGZlZWxzIGFuIGltcHVs c2lvbi4uLnRoaXMgaXMgdGhlIHBsYWNlIHRvIGdvIG5vdy4g QnV0IHRoZSBza3kga25vd3MgdGhlIHJlYXNvbnMgYW5kIHRo ZSBwYXR0ZXJucyBiZWhpbmQgYWxsIGNsb3VkcywgYW5kIHlv dSB3aWxsIGtub3csIHRvbywgd2hlbiB5b3UgbGlmdCB5b3Vy c2VsZiBoaWdoIGVub3VnaCB0byBzZWUgYmV5b25kIGhvcml6 b25zLiINCg0KLVJpY2hhcmQgQmFjaA==" } ], "security_groups": [ { "name": "default" } ], "user_data" : "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "networks": "auto" }, "OS-SCH-HNT:scheduler_hints": { "same_host": "48e6a9f6-30af-47e0-bc04-acaed113bb4e" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/server-create-req-v257.json0000664000175000017500000000114700000000000024254 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "name" : 
"new-server-test", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "flavorRef" : "http://openstack.example.com/flavors/1", "availability_zone": "us-west", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "security_groups": [ { "name": "default" } ], "user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "networks": "auto" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/server-create-req.json0000664000175000017500000000230000000000000023543 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "name" : "new-server-test", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "flavorRef" : "1", "availability_zone": "us-west", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "personality": [ { "path": "/etc/banner.txt", "contents": "ICAgICAgDQoiQSBjbG91ZCBkb2VzIG5vdCBrbm93IHdoeSBp dCBtb3ZlcyBpbiBqdXN0IHN1Y2ggYSBkaXJlY3Rpb24gYW5k IGF0IHN1Y2ggYSBzcGVlZC4uLkl0IGZlZWxzIGFuIGltcHVs c2lvbi4uLnRoaXMgaXMgdGhlIHBsYWNlIHRvIGdvIG5vdy4g QnV0IHRoZSBza3kga25vd3MgdGhlIHJlYXNvbnMgYW5kIHRo ZSBwYXR0ZXJucyBiZWhpbmQgYWxsIGNsb3VkcywgYW5kIHlv dSB3aWxsIGtub3csIHRvbywgd2hlbiB5b3UgbGlmdCB5b3Vy c2VsZiBoaWdoIGVub3VnaCB0byBzZWUgYmV5b25kIGhvcml6 b25zLiINCg0KLVJpY2hhcmQgQmFjaA==" } ], "security_groups": [ { "name": "default" } ], "user_data" : "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==" }, "OS-SCH-HNT:scheduler_hints": { "same_host": "48e6a9f6-30af-47e0-bc04-acaed113bb4e" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/server-create-resp.json0000664000175000017500000000124500000000000023734 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "6NpUwoz2QDRN", "id": "f5dc173b-6804-445a-a6d8-c705dad5b5eb", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/f5dc173b-6804-445a-a6d8-c705dad5b5eb", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/f5dc173b-6804-445a-a6d8-c705dad5b5eb", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/server-get-resp.json0000664000175000017500000000524300000000000023252 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "2013-09-03T04:01:32Z", "flavor": { "id": "1", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ] }, "hostId": "92154fab69d5883ba2c8622b7e65f745dd33257221c07af363c51b29", "id": "0e44cc9c-e052-415d-afbf-469b0d384170", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/0e44cc9c-e052-415d-afbf-469b0d384170", "rel": "self" }, { "href": 
"http://openstack.example.com/6f70656e737461636b20342065766572/servers/0e44cc9c-e052-415d-afbf-469b0d384170", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "b8b357f7100d4391828f2177c922ef93", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ { "id": "volume_id1" }, { "id": "volume_id2" } ], "OS-SRV-USG:launched_at": "2013-09-23T13:37:00.880302", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2013-09-03T04:01:33Z", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/server-update-req.json0000664000175000017500000000024200000000000023565 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "4.3.2.1", "accessIPv6": "80fe::", "OS-DCF:diskConfig": "AUTO", "name" : "new-server-test" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/server-update-resp.json0000664000175000017500000000340700000000000023755 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "created": "2012-12-02T02:11:57Z", "flavor": { "id": "1", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ] }, "hostId": "6e84af987b4e7ec1c039b16d21f508f4a505672bd94fb0218b668d07", "id": "324dfb7d-f4a9-419a-9a19-237df04b443b", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/324dfb7d-f4a9-419a-9a19-237df04b443b", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/324dfb7d-f4a9-419a-9a19-237df04b443b", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "progress": 0, "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2012-12-02T02:11:58Z", "user_id": "fake" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/servers-details-resp.json0000664000175000017500000000632500000000000024305 0ustar00zuulzuul00000000000000{ "servers": [ { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "2013-09-03T04:01:32Z", "flavor": { "id": "1", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ] }, "hostId": "bcf92836fc9ed4203a75cb0337afc7f917d2be504164b995c2334b25", "id": "f5dc173b-6804-445a-a6d8-c705dad5b5eb", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { 
"href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/f5dc173b-6804-445a-a6d8-c705dad5b5eb", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/f5dc173b-6804-445a-a6d8-c705dad5b5eb", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "c3f14e9812ad496baf92ccfb3c61e15f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ { "id": "volume_id1" }, { "id": "volume_id2" } ], "OS-SRV-USG:launched_at": "2013-09-23T13:53:12.774549", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2013-09-03T04:01:32Z", "user_id": "fake" } ], "servers_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/detail?limit=1&marker=f5dc173b-6804-445a-a6d8-c705dad5b5eb", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/servers-list-resp.json0000664000175000017500000000147600000000000023635 0ustar00zuulzuul00000000000000{ "servers": [ { "id": "22c91117-08de-4894-9aa9-6ef382400985", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/22c91117-08de-4894-9aa9-6ef382400985", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/22c91117-08de-4894-9aa9-6ef382400985", "rel": "bookmark" } ], "name": "new-server-test" } ], "servers_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers?limit=1&marker=22c91117-08de-4894-9aa9-6ef382400985", "rel": "next" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/servers-list-status-resp.json0000664000175000017500000000151400000000000025147 0ustar00zuulzuul00000000000000{ "servers": [ { "id": "22c91117-08de-4894-9aa9-6ef382400985", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/22c91117-08de-4894-9aa9-6ef382400985", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/22c91117-08de-4894-9aa9-6ef382400985", "rel": "bookmark" } ], "name": "new-server-test" } ], "servers_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers?limit=1&status=error&marker=22c91117-08de-4894-9aa9-6ef382400985", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1824713 nova-21.2.4/doc/api_samples/servers/v2.16/0000775000175000017500000000000000000000000020075 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.16/server-get-resp.json0000664000175000017500000000630300000000000024024 
0ustar00zuulzuul00000000000000{ "server": { "addresses": { "private": [ { "addr": "192.168.1.30", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "2013-09-16T02:55:07Z", "flavor": { "id": "1", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ] }, "hostId": "3bf189131c61d0e71b0a8686a897a0f50d1693b48c47b721fe77155b", "id": "c278163e-36f9-4cf2-b1ac-80db4c63f7a8", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/c278163e-36f9-4cf2-b1ac-80db4c63f7a8", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/c278163e-36f9-4cf2-b1ac-80db4c63f7a8", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "c5f474bf81474f9dbbc404d5b2e4e9b3", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:reservation_id": "r-12345678", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:kernel_id": null, "OS-EXT-SRV-ATTR:ramdisk_id": null, "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ { "id": "volume_id1", "delete_on_termination": false }, { "id": "volume_id2", "delete_on_termination": false } ], "OS-SRV-USG:launched_at": "2013-09-23T13:37:00.880302", "OS-SRV-USG:terminated_at": null, "security_groups": [ { "name": "default" } ], "locked": false, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "progress": 0, "status": "ACTIVE", "host_status": "UP", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2013-09-16T02:55:08Z", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.16/servers-details-resp.json0000664000175000017500000000744100000000000025061 0ustar00zuulzuul00000000000000{ "servers": [ { "addresses": { "private": [ { "addr": "192.168.1.30", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "2013-09-16T02:55:03Z", "flavor": { "id": "1", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ] }, "hostId": "63cf07a9fd82e1d2294926ec5c0d2e1e0ca449224246df75e16f23dc", "id": "a8c1c13d-ec7e-47c7-b4ff-077f72c1ca46", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/a8c1c13d-ec7e-47c7-b4ff-077f72c1ca46", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/a8c1c13d-ec7e-47c7-b4ff-077f72c1ca46", "rel": "bookmark" } ], "metadata": { "My 
Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "bc8efe4fdb7148a4bb921a2b03d17de6", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:reservation_id": "r-12345678", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:kernel_id": null, "OS-EXT-SRV-ATTR:ramdisk_id": null, "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "locked": false, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ { "id": "volume_id1", "delete_on_termination": false }, { "id": "volume_id2", "delete_on_termination": false } ], "OS-SRV-USG:launched_at": "2013-09-23T13:53:12.774549", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "host_status": "UP", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2013-09-16T02:55:05Z", "user_id": "fake" } ], "servers_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/detail?limit=1&marker=a8c1c13d-ec7e-47c7-b4ff-077f72c1ca46", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.16/servers-list-resp.json0000664000175000017500000000147600000000000024411 0ustar00zuulzuul00000000000000{ "servers": [ { "id": "22c91117-08de-4894-9aa9-6ef382400985", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/22c91117-08de-4894-9aa9-6ef382400985", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/22c91117-08de-4894-9aa9-6ef382400985", "rel": "bookmark" } ], "name": "new-server-test" } ], "servers_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers?limit=1&marker=22c91117-08de-4894-9aa9-6ef382400985", "rel": "next" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1824713 nova-21.2.4/doc/api_samples/servers/v2.17/0000775000175000017500000000000000000000000020076 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.17/server-action-trigger-crash-dump.json0000664000175000017500000000004300000000000027251 0ustar00zuulzuul00000000000000{ "trigger_crash_dump": null } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1824713 nova-21.2.4/doc/api_samples/servers/v2.19/0000775000175000017500000000000000000000000020100 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.19/server-action-rebuild-resp.json0000664000175000017500000000354400000000000026155 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "adminPass": "seekr3t", "created": "2013-11-14T06:29:00Z", "flavor": { "id": "1", "links": [ { "href": 
"http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ] }, "hostId": "28d8d56f0e3a77e20891f455721cbb68032e017045e20aa5dfc6cb66", "id": "a0a80a94-3d81-4a10-822a-daa0cf9e870b", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/a0a80a94-3d81-4a10-822a-daa0cf9e870b", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/a0a80a94-3d81-4a10-822a-daa0cf9e870b", "rel": "bookmark" } ], "locked": false, "metadata": { "meta_var": "meta_val" }, "name": "foobar", "description" : "description of foobar", "progress": 0, "status": "ACTIVE", "OS-DCF:diskConfig": "AUTO", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2013-11-14T06:29:02Z", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.19/server-action-rebuild.json0000664000175000017500000000051600000000000025202 0ustar00zuulzuul00000000000000{ "rebuild" : { "accessIPv4" : "1.2.3.4", "accessIPv6" : "80fe::", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "name" : "foobar", "description" : "description of foobar", "adminPass" : "seekr3t", "metadata" : { "meta_var" : "meta_val" } } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.19/server-create-req.json0000664000175000017500000000057000000000000024331 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "name" : "new-server-test", "description" : "new-server-description", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "flavorRef" : "http://openstack.example.com/flavors/1", "metadata" : { "My Server Name" : "Apache1" } } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.19/server-create-resp.json0000664000175000017500000000124400000000000024512 0ustar00zuulzuul00000000000000{ "server": { "adminPass": "rySfUy7xL4C5", "OS-DCF:diskConfig": "AUTO", "id": "19923676-e78b-46fb-af62-a5942aece2ac", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/19923676-e78b-46fb-af62-a5942aece2ac", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/19923676-e78b-46fb-af62-a5942aece2ac", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.19/server-get-resp.json0000664000175000017500000000626300000000000024034 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "192.168.1.30", "version": 4 } ] }, "created": "2015-12-07T17:24:14Z", "description": "new-server-description", "flavor": { "id": "1", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ] }, "hostId": "c656e68b04b483cfc87cdbaa2346557b174ec1cb6be6afbd2a0133a0", "id": 
"ddb205dc-717e-496e-8e96-88a3b31b075d", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/ddb205dc-717e-496e-8e96-88a3b31b075d", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/ddb205dc-717e-496e-8e96-88a3b31b075d", "rel": "bookmark" } ], "locked": false, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "nova", "OS-EXT-SRV-ATTR:host": "b8b357f7100d4391828f2177c922ef93", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:reservation_id": "r-00000001", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:hostname": "fake-hostname", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ { "id": "volume_id1", "delete_on_termination": false }, { "id": "volume_id2", "delete_on_termination": false } ], "OS-SRV-USG:launched_at": "2013-09-23T13:37:00.880302", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "host_status": "UP", "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2015-12-07T17:24:15Z", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.19/server-put-req.json0000664000175000017500000000017000000000000023672 0ustar00zuulzuul00000000000000{ "server" : { "name" : "updated-server-test", "description" : "updated-server-description" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.19/server-put-resp.json0000664000175000017500000000353100000000000024060 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "OS-DCF:diskConfig": "AUTO", "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "created": "2015-12-07T19:19:36Z", "description": "updated-server-description", "flavor": { "id": "1", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ] }, "hostId": "4e17a358ca9bbc8ac6e215837b6410c0baa21b2463fefe3e8f712b31", "id": "c509708e-f0c6-461f-b2b3-507547959eb2", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/c509708e-f0c6-461f-b2b3-507547959eb2", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/c509708e-f0c6-461f-b2b3-507547959eb2", "rel": "bookmark" } ], "locked": false, "metadata": { "My Server Name": "Apache1" }, "name": "updated-server-test", "progress": 0, "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2015-12-07T19:19:36Z", 
"user_id": "fake" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.19/servers-details-resp.json0000664000175000017500000000742500000000000025066 0ustar00zuulzuul00000000000000{ "servers": [ { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "192.168.1.30", "version": 4 } ] }, "created": "2015-12-07T19:54:48Z", "description": "new-server-description", "flavor": { "id": "1", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ] }, "hostId": "a672ab12738567bfcb852c846d66a6ce5c3555b42d73db80bdc6f1a4", "id": "91965362-fd86-4543-8ce1-c17074d2984d", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/91965362-fd86-4543-8ce1-c17074d2984d", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/91965362-fd86-4543-8ce1-c17074d2984d", "rel": "bookmark" } ], "locked": false, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "nova", "OS-EXT-SRV-ATTR:host": "c3f14e9812ad496baf92ccfb3c61e15f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:reservation_id": "r-00000001", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:hostname": "fake-hostname", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ { "id": "volume_id1", "delete_on_termination": false }, { "id": "volume_id2", "delete_on_termination": false } ], "OS-SRV-USG:launched_at": "2013-09-23T13:53:12.774549", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "host_status": "UP", "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2015-12-07T19:54:49Z", "user_id": "fake" } ], "servers_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/detail?limit=1&marker=91965362-fd86-4543-8ce1-c17074d2984d", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.19/servers-list-resp.json0000664000175000017500000000147600000000000024414 0ustar00zuulzuul00000000000000{ "servers": [ { "id": "78d95942-8805-4597-b1af-3d0e38330758", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/78d95942-8805-4597-b1af-3d0e38330758", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/78d95942-8805-4597-b1af-3d0e38330758", "rel": "bookmark" } ], "name": "new-server-test" } ], "servers_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers?limit=1&marker=22c91117-08de-4894-9aa9-6ef382400985", "rel": "next" } ] 
}././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1824713 nova-21.2.4/doc/api_samples/servers/v2.26/0000775000175000017500000000000000000000000020076 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.26/server-action-rebuild-resp.json0000664000175000017500000000360600000000000026152 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "adminPass": "seekr3t", "created": "2013-11-14T06:29:00Z", "flavor": { "id": "1", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ] }, "hostId": "28d8d56f0e3a77e20891f455721cbb68032e017045e20aa5dfc6cb66", "id": "a0a80a94-3d81-4a10-822a-daa0cf9e870b", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/a0a80a94-3d81-4a10-822a-daa0cf9e870b", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/a0a80a94-3d81-4a10-822a-daa0cf9e870b", "rel": "bookmark" } ], "metadata": { "meta_var": "meta_val" }, "name": "foobar", "OS-DCF:diskConfig": "AUTO", "progress": 0, "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2013-11-14T06:29:02Z", "user_id": "fake", "locked": false, "description" : "description of foobar", "tags": ["tag1", "tag2"] } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.26/server-action-rebuild.json0000664000175000017500000000171500000000000025202 0ustar00zuulzuul00000000000000{ "rebuild" : { "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "accessIPv4" : "1.2.3.4", "accessIPv6" : "80fe::", "adminPass" : "seekr3t", "metadata" : { "meta_var" : "meta_val" }, "name" : "foobar", "OS-DCF:diskConfig": "AUTO", "personality" : [ { "path" : "/etc/banner.txt", "contents" : "ICAgICAgDQoiQSBjbG91ZCBkb2VzIG5vdCBrbm93IHdoeSBp dCBtb3ZlcyBpbiBqdXN0IHN1Y2ggYSBkaXJlY3Rpb24gYW5k IGF0IHN1Y2ggYSBzcGVlZC4uLkl0IGZlZWxzIGFuIGltcHVs c2lvbi4uLnRoaXMgaXMgdGhlIHBsYWNlIHRvIGdvIG5vdy4g QnV0IHRoZSBza3kga25vd3MgdGhlIHJlYXNvbnMgYW5kIHRo ZSBwYXR0ZXJucyBiZWhpbmQgYWxsIGNsb3VkcywgYW5kIHlv dSB3aWxsIGtub3csIHRvbywgd2hlbiB5b3UgbGlmdCB5b3Vy c2VsZiBoaWdoIGVub3VnaCB0byBzZWUgYmV5b25kIGhvcml6 b25zLiINCg0KLVJpY2hhcmQgQmFjaA==" } ], "preserve_ephemeral": false, "description" : "description of foobar" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1864712 nova-21.2.4/doc/api_samples/servers/v2.3/0000775000175000017500000000000000000000000020011 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.3/server-get-resp.json0000664000175000017500000000620700000000000023743 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": 
"2013-09-03T04:01:32Z", "flavor": { "id": "1", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ] }, "hostId": "92154fab69d5883ba2c8622b7e65f745dd33257221c07af363c51b29", "id": "0e44cc9c-e052-415d-afbf-469b0d384170", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/0e44cc9c-e052-415d-afbf-469b0d384170", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/0e44cc9c-e052-415d-afbf-469b0d384170", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "b8b357f7100d4391828f2177c922ef93", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:reservation_id": "r-00000001", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:hostname": "fake-hostname", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ { "id": "volume_id1", "delete_on_termination": false }, { "id": "volume_id2", "delete_on_termination": false } ], "OS-SRV-USG:launched_at": "2013-09-23T13:37:00.880302", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2013-09-03T04:01:33Z", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.3/servers-details-resp.json0000664000175000017500000000733500000000000024777 0ustar00zuulzuul00000000000000{ "servers": [ { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "2013-09-03T04:01:32Z", "flavor": { "id": "1", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ] }, "hostId": "bcf92836fc9ed4203a75cb0337afc7f917d2be504164b995c2334b25", "id": "f5dc173b-6804-445a-a6d8-c705dad5b5eb", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/f5dc173b-6804-445a-a6d8-c705dad5b5eb", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/f5dc173b-6804-445a-a6d8-c705dad5b5eb", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "c3f14e9812ad496baf92ccfb3c61e15f", "OS-EXT-SRV-ATTR:hypervisor_hostname": 
"fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:reservation_id": "r-00000001", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:hostname": "fake-hostname", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ { "id": "volume_id1", "delete_on_termination": false }, { "id": "volume_id2", "delete_on_termination": false } ], "OS-SRV-USG:launched_at": "2013-09-23T13:53:12.774549", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2013-09-03T04:01:32Z", "user_id": "fake" } ], "servers_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/detail?limit=1&marker=f5dc173b-6804-445a-a6d8-c705dad5b5eb", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.3/servers-list-resp.json0000664000175000017500000000147600000000000024325 0ustar00zuulzuul00000000000000{ "servers": [ { "id": "22c91117-08de-4894-9aa9-6ef382400985", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/22c91117-08de-4894-9aa9-6ef382400985", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/22c91117-08de-4894-9aa9-6ef382400985", "rel": "bookmark" } ], "name": "new-server-test" } ], "servers_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers?limit=1&marker=22c91117-08de-4894-9aa9-6ef382400985", "rel": "next" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1864712 nova-21.2.4/doc/api_samples/servers/v2.32/0000775000175000017500000000000000000000000020073 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.32/server-create-req.json0000664000175000017500000000102300000000000024316 0ustar00zuulzuul00000000000000{ "server" : { "name" : "device-tagging-server", "flavorRef" : "http://openstack.example.com/flavors/1", "networks" : [{ "uuid" : "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "tag": "nic1" }], "block_device_mapping_v2": [{ "uuid": "70a599e0-31e7-49b7-b260-868f441e862b", "source_type": "image", "destination_type": "volume", "boot_index": 0, "volume_size": "1", "tag": "disk1" }] } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.32/server-create-resp.json0000664000175000017500000000124700000000000024510 0ustar00zuulzuul00000000000000{ "server": { "adminPass": "rojsEujtu7GB", "OS-DCF:diskConfig": "AUTO", "id": "05ec6bde-40bf-47e8-ac07-89c12b2eee03", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/05ec6bde-40bf-47e8-ac07-89c12b2eee03", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/05ec6bde-40bf-47e8-ac07-89c12b2eee03", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } } 
././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1864712 nova-21.2.4/doc/api_samples/servers/v2.37/0000775000175000017500000000000000000000000020100 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.37/server-create-req.json0000664000175000017500000000033100000000000024324 0ustar00zuulzuul00000000000000{ "server": { "name": "auto-allocate-network", "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b", "flavorRef": "http://openstack.example.com/flavors/1", "networks": "auto" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.37/server-create-resp.json0000664000175000017500000000124400000000000024512 0ustar00zuulzuul00000000000000{ "server": { "adminPass": "rySfUy7xL4C5", "OS-DCF:diskConfig": "AUTO", "id": "19923676-e78b-46fb-af62-a5942aece2ac", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/19923676-e78b-46fb-af62-a5942aece2ac", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/19923676-e78b-46fb-af62-a5942aece2ac", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1864712 nova-21.2.4/doc/api_samples/servers/v2.42/0000775000175000017500000000000000000000000020074 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.42/server-create-req.json0000664000175000017500000000102300000000000024317 0ustar00zuulzuul00000000000000{ "server" : { "name" : "device-tagging-server", "flavorRef" : "http://openstack.example.com/flavors/1", "networks" : [{ "uuid" : "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "tag": "nic1" }], "block_device_mapping_v2": [{ "uuid": "70a599e0-31e7-49b7-b260-868f441e862b", "source_type": "image", "destination_type": "volume", "boot_index": 0, "volume_size": "1", "tag": "disk1" }] } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.42/server-create-resp.json0000664000175000017500000000124600000000000024510 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "S5wqy9sPYUvU", "id": "97108291-2fd7-4dc2-a909-eaae0306a6a9", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/97108291-2fd7-4dc2-a909-eaae0306a6a9", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/97108291-2fd7-4dc2-a909-eaae0306a6a9", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1864712 nova-21.2.4/doc/api_samples/servers/v2.45/0000775000175000017500000000000000000000000020077 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.45/server-action-create-image-resp.json0000664000175000017500000000007200000000000027042 0ustar00zuulzuul00000000000000{ "image_id": "0e7761dd-ee98-41f0-ba35-05994e446431" 
}././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.45/server-action-create-image.json0000664000175000017500000000020000000000000026064 0ustar00zuulzuul00000000000000{ "createImage" : { "name" : "foo-image", "metadata": { "meta_var": "meta_val" } } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1864712 nova-21.2.4/doc/api_samples/servers/v2.47/0000775000175000017500000000000000000000000020101 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.47/server-action-rebuild-resp.json0000664000175000017500000000347000000000000026154 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "adminPass": "seekr3t", "created": "2013-11-14T06:29:00Z", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "28d8d56f0e3a77e20891f455721cbb68032e017045e20aa5dfc6cb66", "id": "a0a80a94-3d81-4a10-822a-daa0cf9e870b", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/a0a80a94-3d81-4a10-822a-daa0cf9e870b", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/a0a80a94-3d81-4a10-822a-daa0cf9e870b", "rel": "bookmark" } ], "locked": false, "metadata": { "meta_var": "meta_val" }, "name": "foobar", "description" : null, "progress": 0, "status": "ACTIVE", "OS-DCF:diskConfig": "AUTO", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "updated": "2013-11-14T06:29:02Z", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.47/server-action-rebuild.json0000664000175000017500000000156700000000000025212 0ustar00zuulzuul00000000000000{ "rebuild" : { "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "accessIPv4" : "1.2.3.4", "accessIPv6" : "80fe::", "adminPass" : "seekr3t", "metadata" : { "meta_var" : "meta_val" }, "name" : "foobar", "OS-DCF:diskConfig": "AUTO", "personality" : [ { "path" : "/etc/banner.txt", "contents" : "ICAgICAgDQoiQSBjbG91ZCBkb2VzIG5vdCBrbm93IHdoeSBp dCBtb3ZlcyBpbiBqdXN0IHN1Y2ggYSBkaXJlY3Rpb24gYW5k IGF0IHN1Y2ggYSBzcGVlZC4uLkl0IGZlZWxzIGFuIGltcHVs c2lvbi4uLnRoaXMgaXMgdGhlIHBsYWNlIHRvIGdvIG5vdy4g QnV0IHRoZSBza3kga25vd3MgdGhlIHJlYXNvbnMgYW5kIHRo ZSBwYXR0ZXJucyBiZWhpbmQgYWxsIGNsb3VkcywgYW5kIHlv dSB3aWxsIGtub3csIHRvbywgd2hlbiB5b3UgbGlmdCB5b3Vy c2VsZiBoaWdoIGVub3VnaCB0byBzZWUgYmV5b25kIGhvcml6 b25zLiINCg0KLVJpY2hhcmQgQmFjaA==" } ] } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.47/server-create-req.json0000664000175000017500000000236300000000000024334 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "name" : "new-server-test", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "flavorRef" : "6", "availability_zone": "us-west", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server 
Name" : "Apache1" }, "personality": [ { "path": "/etc/banner.txt", "contents": "ICAgICAgDQoiQSBjbG91ZCBkb2VzIG5vdCBrbm93IHdoeSBp dCBtb3ZlcyBpbiBqdXN0IHN1Y2ggYSBkaXJlY3Rpb24gYW5k IGF0IHN1Y2ggYSBzcGVlZC4uLkl0IGZlZWxzIGFuIGltcHVs c2lvbi4uLnRoaXMgaXMgdGhlIHBsYWNlIHRvIGdvIG5vdy4g QnV0IHRoZSBza3kga25vd3MgdGhlIHJlYXNvbnMgYW5kIHRo ZSBwYXR0ZXJucyBiZWhpbmQgYWxsIGNsb3VkcywgYW5kIHlv dSB3aWxsIGtub3csIHRvbywgd2hlbiB5b3UgbGlmdCB5b3Vy c2VsZiBoaWdoIGVub3VnaCB0byBzZWUgYmV5b25kIGhvcml6 b25zLiINCg0KLVJpY2hhcmQgQmFjaA==" } ], "security_groups": [ { "name": "default" } ], "user_data" : "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "networks": "auto" }, "OS-SCH-HNT:scheduler_hints": { "same_host": "[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.47/server-create-resp.json0000664000175000017500000000124600000000000024515 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "S5wqy9sPYUvU", "id": "97108291-2fd7-4dc2-a909-eaae0306a6a9", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/97108291-2fd7-4dc2-a909-eaae0306a6a9", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/97108291-2fd7-4dc2-a909-eaae0306a6a9", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.47/server-get-resp.json0000664000175000017500000000634400000000000024035 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "compute", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ov3q80zj", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2017-02-14T19:23:59.895661", "OS-SRV-USG:terminated_at": null, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "192.168.1.30", "version": 4 } ] }, "config_drive": "", "created": "2017-02-14T19:23:58Z", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": { "hw:numa_nodes": "1" }, "original_name": "m1.tiny.specs", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "host_status": "UP", "id": "9168b536-cd40-4630-b43f-b259807c6e87", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/9168b536-cd40-4630-b43f-b259807c6e87", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/9168b536-cd40-4630-b43f-b259807c6e87", "rel": 
"bookmark" } ], "locked": false, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "os-extended-volumes:volumes_attached": [ { "delete_on_termination": false, "id": "volume_id1" }, { "delete_on_termination": false, "id": "volume_id2" } ], "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "updated": "2017-02-14T19:24:00Z", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.47/server-update-req.json0000664000175000017500000000031700000000000024350 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "OS-DCF:diskConfig": "AUTO", "name": "new-server-test", "description": "Sample description" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.47/server-update-resp.json0000664000175000017500000000346500000000000024541 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "created": "2012-12-02T02:11:57Z", "description": "Sample description", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "6e84af987b4e7ec1c039b16d21f508f4a505672bd94fb0218b668d07", "id": "324dfb7d-f4a9-419a-9a19-237df04b443b", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/324dfb7d-f4a9-419a-9a19-237df04b443b", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/324dfb7d-f4a9-419a-9a19-237df04b443b", "rel": "bookmark" } ], "locked": false, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "progress": 0, "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "updated": "2012-12-02T02:11:58Z", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.47/servers-details-resp.json0000664000175000017500000000752200000000000025065 0ustar00zuulzuul00000000000000{ "servers": [ { "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "compute", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "r-iffothgx", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2017-02-14T19:24:43.891568", "OS-SRV-USG:terminated_at": null, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "192.168.1.30", "version": 4 } ] }, 
"config_drive": "", "created": "2017-02-14T19:24:42Z", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": { "hw:numa_nodes": "1" }, "original_name": "m1.tiny.specs", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "host_status": "UP", "id": "764e369e-a874-4401-b7ce-43e4760888da", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/764e369e-a874-4401-b7ce-43e4760888da", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/764e369e-a874-4401-b7ce-43e4760888da", "rel": "bookmark" } ], "locked": false, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "os-extended-volumes:volumes_attached": [ { "delete_on_termination": false, "id": "volume_id1" }, { "delete_on_termination": false, "id": "volume_id2" } ], "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "updated": "2017-02-14T19:24:43Z", "user_id": "fake" } ], "servers_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/detail?limit=1&marker=764e369e-a874-4401-b7ce-43e4760888da", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.47/servers-list-resp.json0000664000175000017500000000150000000000000024401 0ustar00zuulzuul00000000000000{ "servers": [ { "id": "6e3a87e6-a133-452e-86e1-a31291c1b1c8", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/6e3a87e6-a133-452e-86e1-a31291c1b1c8", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/6e3a87e6-a133-452e-86e1-a31291c1b1c8", "rel": "bookmark" } ], "name": "new-server-test" } ], "servers_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers?limit=1&marker=6e3a87e6-a133-452e-86e1-a31291c1b1c8", "rel": "next" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1864712 nova-21.2.4/doc/api_samples/servers/v2.52/0000775000175000017500000000000000000000000020075 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.52/server-create-req.json0000664000175000017500000000244300000000000024327 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "name" : "new-server-test", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "flavorRef" : "http://openstack.example.com/flavors/1", "availability_zone": "us-west", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "personality": [ { "path": "/etc/banner.txt", "contents": "ICAgICAgDQoiQSBjbG91ZCBkb2VzIG5vdCBrbm93IHdoeSBp dCBtb3ZlcyBpbiBqdXN0IHN1Y2ggYSBkaXJlY3Rpb24gYW5k IGF0IHN1Y2ggYSBzcGVlZC4uLkl0IGZlZWxzIGFuIGltcHVs c2lvbi4uLnRoaXMgaXMgdGhlIHBsYWNlIHRvIGdvIG5vdy4g QnV0IHRoZSBza3kga25vd3MgdGhlIHJlYXNvbnMgYW5kIHRo ZSBwYXR0ZXJucyBiZWhpbmQgYWxsIGNsb3VkcywgYW5kIHlv dSB3aWxsIGtub3csIHRvbywgd2hlbiB5b3UgbGlmdCB5b3Vy 
c2VsZiBoaWdoIGVub3VnaCB0byBzZWUgYmV5b25kIGhvcml6 b25zLiINCg0KLVJpY2hhcmQgQmFjaA==" } ], "security_groups": [ { "name": "default" } ], "user_data" : "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "networks": "auto", "tags": ["tag1", "tag2"] }, "OS-SCH-HNT:scheduler_hints": { "same_host": "48e6a9f6-30af-47e0-bc04-acaed113bb4e" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.52/server-create-resp.json0000664000175000017500000000124600000000000024511 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "S5wqy9sPYUvU", "id": "97108291-2fd7-4dc2-a909-eaae0306a6a9", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/97108291-2fd7-4dc2-a909-eaae0306a6a9", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/97108291-2fd7-4dc2-a909-eaae0306a6a9", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.52/server-get-resp.json0000664000175000017500000000627200000000000024031 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "compute", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ov3q80zj", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2017-02-14T19:23:59.895661", "OS-SRV-USG:terminated_at": null, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "192.168.1.30", "version": 4 } ] }, "config_drive": "", "created": "2017-02-14T19:23:58Z", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "host_status": "UP", "id": "9168b536-cd40-4630-b43f-b259807c6e87", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/9168b536-cd40-4630-b43f-b259807c6e87", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/9168b536-cd40-4630-b43f-b259807c6e87", "rel": "bookmark" } ], "locked": false, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "os-extended-volumes:volumes_attached": [ { "delete_on_termination": false, "id": "volume_id1" }, { "delete_on_termination": false, "id": "volume_id2" } ], "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tags": ["tag1", "tag2"], "tenant_id": "6f70656e737461636b20342065766572", "updated": "2017-02-14T19:24:00Z", 
"user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.52/servers-details-resp.json0000664000175000017500000000744000000000000025060 0ustar00zuulzuul00000000000000{ "servers": [ { "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "compute", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "r-iffothgx", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2017-02-14T19:24:43.891568", "OS-SRV-USG:terminated_at": null, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "192.168.1.30", "version": 4 } ] }, "config_drive": "", "created": "2017-02-14T19:24:42Z", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "host_status": "UP", "id": "764e369e-a874-4401-b7ce-43e4760888da", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/764e369e-a874-4401-b7ce-43e4760888da", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/764e369e-a874-4401-b7ce-43e4760888da", "rel": "bookmark" } ], "locked": false, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "os-extended-volumes:volumes_attached": [ { "delete_on_termination": false, "id": "volume_id1" }, { "delete_on_termination": false, "id": "volume_id2" } ], "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tags": ["tag1", "tag2"], "tenant_id": "6f70656e737461636b20342065766572", "updated": "2017-02-14T19:24:43Z", "user_id": "fake" } ], "servers_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/detail?limit=1&marker=764e369e-a874-4401-b7ce-43e4760888da", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.52/servers-list-resp.json0000664000175000017500000000150000000000000024375 0ustar00zuulzuul00000000000000{ "servers": [ { "id": "6e3a87e6-a133-452e-86e1-a31291c1b1c8", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/6e3a87e6-a133-452e-86e1-a31291c1b1c8", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/6e3a87e6-a133-452e-86e1-a31291c1b1c8", "rel": "bookmark" } ], "name": "new-server-test" } ], "servers_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers?limit=1&marker=6e3a87e6-a133-452e-86e1-a31291c1b1c8", "rel": "next" } ] 
}././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1864712 nova-21.2.4/doc/api_samples/servers/v2.54/0000775000175000017500000000000000000000000020077 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.54/server-action-rebuild-resp.json0000664000175000017500000000355200000000000026153 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "adminPass": "seekr3t", "created": "2013-11-14T06:29:00Z", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "28d8d56f0e3a77e20891f455721cbb68032e017045e20aa5dfc6cb66", "id": "a0a80a94-3d81-4a10-822a-daa0cf9e870b", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/a0a80a94-3d81-4a10-822a-daa0cf9e870b", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/a0a80a94-3d81-4a10-822a-daa0cf9e870b", "rel": "bookmark" } ], "locked": false, "metadata": { "meta_var": "meta_val" }, "name": "foobar", "key_name": "new-key", "description" : "description of foobar", "progress": 0, "status": "ACTIVE", "OS-DCF:diskConfig": "AUTO", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2013-11-14T06:29:02Z", "user_id": "fake", "tags": [] } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.54/server-action-rebuild.json0000664000175000017500000000055500000000000025204 0ustar00zuulzuul00000000000000{ "rebuild" : { "accessIPv4" : "1.2.3.4", "accessIPv6" : "80fe::", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "name" : "foobar", "key_name": "new-key", "description" : "description of foobar", "adminPass" : "seekr3t", "metadata" : { "meta_var" : "meta_val" } } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1864712 nova-21.2.4/doc/api_samples/servers/v2.57/0000775000175000017500000000000000000000000020102 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.57/server-action-rebuild-resp.json0000664000175000017500000000363200000000000026155 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "adminPass": "seekr3t", "created": "2013-11-14T06:29:00Z", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "28d8d56f0e3a77e20891f455721cbb68032e017045e20aa5dfc6cb66", "id": "a0a80a94-3d81-4a10-822a-daa0cf9e870b", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "links": [ { "href": 
"http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/a0a80a94-3d81-4a10-822a-daa0cf9e870b", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/a0a80a94-3d81-4a10-822a-daa0cf9e870b", "rel": "bookmark" } ], "locked": false, "metadata": { "meta_var": "meta_val" }, "name": "foobar", "key_name": "new-key", "description": "description of foobar", "progress": 0, "status": "ACTIVE", "OS-DCF:diskConfig": "AUTO", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2013-11-14T06:29:02Z", "user_id": "fake", "tags": [], "user_data": "ZWNobyAiaGVsbG8gd29ybGQi" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.57/server-action-rebuild.json0000664000175000017500000000062700000000000025207 0ustar00zuulzuul00000000000000{ "rebuild" : { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b", "name": "foobar", "key_name": "new-key", "description": "description of foobar", "adminPass": "seekr3t", "metadata" : { "meta_var": "meta_val" }, "user_data": "ZWNobyAiaGVsbG8gd29ybGQi" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.57/server-create-req.json0000664000175000017500000000114700000000000024334 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "name" : "new-server-test", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "flavorRef" : "http://openstack.example.com/flavors/1", "availability_zone": "us-west", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "security_groups": [ { "name": "default" } ], "user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "networks": "auto" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.57/server-create-resp.json0000664000175000017500000000124600000000000024516 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "S5wqy9sPYUvU", "id": "97108291-2fd7-4dc2-a909-eaae0306a6a9", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/97108291-2fd7-4dc2-a909-eaae0306a6a9", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/97108291-2fd7-4dc2-a909-eaae0306a6a9", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1904712 nova-21.2.4/doc/api_samples/servers/v2.63/0000775000175000017500000000000000000000000020077 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.63/server-action-rebuild-resp.json0000664000175000017500000000416000000000000026147 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "adminPass": "seekr3t", "created": "2017-10-10T16:06:02Z", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": { "hw:numa_nodes": "1" }, "original_name": "m1.tiny.specs", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": 
"28d8d56f0e3a77e20891f455721cbb68032e017045e20aa5dfc6cb66", "id": "a0a80a94-3d81-4a10-822a-daa0cf9e870b", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/a4baaf2a-3768-4e45-8847-13becef6bc5e", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/a4baaf2a-3768-4e45-8847-13becef6bc5e", "rel": "bookmark" } ], "locked": false, "metadata": { "meta_var": "meta_val" }, "name": "foobar", "key_name": "new-key", "description" : "description of foobar", "progress": 0, "status": "ACTIVE", "tags": [], "user_data": "ZWNobyAiaGVsbG8gd29ybGQi", "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": [ "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8", "674736e3-f25c-405c-8362-bbf991e0ce0a" ], "updated": "2017-10-10T16:06:03Z", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.63/server-action-rebuild.json0000664000175000017500000000113500000000000025177 0ustar00zuulzuul00000000000000{ "rebuild" : { "accessIPv4" : "1.2.3.4", "accessIPv6" : "80fe::", "OS-DCF:diskConfig": "AUTO", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "name" : "foobar", "key_name": "new-key", "description" : "description of foobar", "adminPass" : "seekr3t", "metadata" : { "meta_var" : "meta_val" }, "user_data": "ZWNobyAiaGVsbG8gd29ybGQi", "trusted_image_certificates": [ "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8", "674736e3-f25c-405c-8362-bbf991e0ce0a" ] } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.63/server-create-req.json0000664000175000017500000000155100000000000024330 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "name" : "new-server-test", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "flavorRef" : "6", "availability_zone": "%(availability_zone)s", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "security_groups": [ { "name": "default" } ], "user_data" : "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "networks": "auto", "trusted_image_certificates": [ "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8", "674736e3-f25c-405c-8362-bbf991e0ce0a" ] }, "OS-SCH-HNT:scheduler_hints": { "same_host": "[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.63/server-create-resp.json0000664000175000017500000000124600000000000024513 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "wKLKinb9u7GM", "id": "aab35fd0-b459-4b59-9308-5a23147f3165", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/aab35fd0-b459-4b59-9308-5a23147f3165", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/aab35fd0-b459-4b59-9308-5a23147f3165", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 
nova-21.2.4/doc/api_samples/servers/v2.63/server-get-resp.json0000664000175000017500000000622600000000000024032 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "compute", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ov3q80zj", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2017-02-14T19:23:59.895661", "OS-SRV-USG:terminated_at": null, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "192.168.1.30", "version": 4 } ] }, "config_drive": "", "created": "2017-02-14T19:23:58Z", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": { "hw:numa_nodes": "1" }, "original_name": "m1.tiny.specs", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "host_status": "UP", "id": "9168b536-cd40-4630-b43f-b259807c6e87", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/9168b536-cd40-4630-b43f-b259807c6e87", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/9168b536-cd40-4630-b43f-b259807c6e87", "rel": "bookmark" } ], "locked": false, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "os-extended-volumes:volumes_attached": [], "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": [ "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8", "674736e3-f25c-405c-8362-bbf991e0ce0a" ], "updated": "2017-02-14T19:24:00Z", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.63/server-update-req.json0000664000175000017500000000031700000000000024346 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "OS-DCF:diskConfig": "AUTO", "name": "new-server-test", "description": "Sample description" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.63/server-update-resp.json0000664000175000017500000000401700000000000024531 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "created": "2012-12-02T02:11:57Z", "description": "Sample description", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": { "hw:numa_nodes": "1" }, "original_name": "m1.tiny.specs", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "6e84af987b4e7ec1c039b16d21f508f4a505672bd94fb0218b668d07", "id": 
"324dfb7d-f4a9-419a-9a19-237df04b443b", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/324dfb7d-f4a9-419a-9a19-237df04b443b", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/324dfb7d-f4a9-419a-9a19-237df04b443b", "rel": "bookmark" } ], "locked": false, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "progress": 0, "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": [ "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8", "674736e3-f25c-405c-8362-bbf991e0ce0a" ], "updated": "2012-12-02T02:11:58Z", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.63/servers-details-resp.json0000664000175000017500000000736000000000000025063 0ustar00zuulzuul00000000000000{ "servers": [ { "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "compute", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "r-y0w4v32k", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2017-10-10T15:49:09.516729", "OS-SRV-USG:terminated_at": null, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "192.168.1.30", "version": 4 } ] }, "config_drive": "", "created": "2017-10-10T15:49:08Z", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": { "hw:numa_nodes": "1" }, "original_name": "m1.tiny.specs", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "host_status": "UP", "id": "569f39f9-7c76-42a1-9c2d-8394e2638a6d", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/569f39f9-7c76-42a1-9c2d-8394e2638a6d", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/569f39f9-7c76-42a1-9c2d-8394e2638a6d", "rel": "bookmark" } ], "locked": false, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "os-extended-volumes:volumes_attached": [], "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": [ "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8", "674736e3-f25c-405c-8362-bbf991e0ce0a" ], "updated": "2017-10-10T15:49:09Z", "user_id": "fake" } ], "servers_links": [ { "href": 
"http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/detail?limit=1&marker=569f39f9-7c76-42a1-9c2d-8394e2638a6d", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1904712 nova-21.2.4/doc/api_samples/servers/v2.66/0000775000175000017500000000000000000000000020102 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.66/server-create-req.json0000664000175000017500000000153300000000000024333 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "name" : "new-server-test", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "flavorRef" : "6", "availability_zone": "us-west", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "security_groups": [ { "name": "default" } ], "user_data" : "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "networks": "auto", "trusted_image_certificates": [ "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8", "674736e3-f25c-405c-8362-bbf991e0ce0a" ] }, "OS-SCH-HNT:scheduler_hints": { "same_host": "[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.66/server-create-resp.json0000664000175000017500000000124600000000000024516 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "wKLKinb9u7GM", "id": "aab35fd0-b459-4b59-9308-5a23147f3165", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/aab35fd0-b459-4b59-9308-5a23147f3165", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/aab35fd0-b459-4b59-9308-5a23147f3165", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.66/servers-details-with-changes-before.json0000664000175000017500000000700600000000000027733 0ustar00zuulzuul00000000000000{ "servers": [ { "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "compute", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "r-y0w4v32k", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2018-10-10T15:49:09.516729", "OS-SRV-USG:terminated_at": null, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "192.168.0.1", "version": 4 } ] }, "config_drive": "", "created": "2018-10-10T15:49:08Z", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": { "hw:numa_nodes": "1" }, "original_name": "m1.tiny.specs", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "host_status": "UP", "id": 
"569f39f9-7c76-42a1-9c2d-8394e2638a6e", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/569f39f9-7c76-42a1-9c2d-8394e2638a6d", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/569f39f9-7c76-42a1-9c2d-8394e2638a6d", "rel": "bookmark" } ], "locked": false, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "os-extended-volumes:volumes_attached": [], "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": [ "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8", "674736e3-f25c-405c-8362-bbf991e0ce0a" ], "updated": "2018-10-10T15:49:09Z", "user_id": "fake" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.66/servers-list-with-changes-before.json0000664000175000017500000000113700000000000027260 0ustar00zuulzuul00000000000000{ "servers": [ { "id": "6e3a87e6-a133-452e-86e1-a31291c1b1c8", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/6e3a87e6-a133-452e-86e1-a31291c1b1c8", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/6e3a87e6-a133-452e-86e1-a31291c1b1c8", "rel": "bookmark" } ], "name": "new-server-test" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1904712 nova-21.2.4/doc/api_samples/servers/v2.67/0000775000175000017500000000000000000000000020103 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.67/server-create-req.json0000664000175000017500000000107500000000000024335 0ustar00zuulzuul00000000000000{ "server" : { "name" : "bfv-server-with-volume-type", "flavorRef" : "http://openstack.example.com/flavors/1", "networks" : [{ "uuid" : "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "tag": "nic1" }], "block_device_mapping_v2": [{ "uuid": "70a599e0-31e7-49b7-b260-868f441e862b", "source_type": "image", "destination_type": "volume", "boot_index": 0, "volume_size": "1", "tag": "disk1", "volume_type": "lvm-1" }] } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.67/server-create-resp.json0000664000175000017500000000124600000000000024517 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "S5wqy9sPYUvU", "id": "97108291-2fd7-4dc2-a909-eaae0306a6a9", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/97108291-2fd7-4dc2-a909-eaae0306a6a9", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/97108291-2fd7-4dc2-a909-eaae0306a6a9", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1904712 nova-21.2.4/doc/api_samples/servers/v2.69/0000775000175000017500000000000000000000000020105 
5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.69/server-create-req.json0000664000175000017500000000107700000000000024341 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "name" : "new-server-test", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "flavorRef" : "http://openstack.example.com/flavors/1", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "security_groups": [ { "name": "default" } ], "user_data" : "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "networks": "auto" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.69/server-create-resp.json0000664000175000017500000000124600000000000024521 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "mqtDAwb2y7Zh", "id": "6f81aefe-472a-49d8-ba8d-758a5082c7e5", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/6f81aefe-472a-49d8-ba8d-758a5082c7e5", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/6f81aefe-472a-49d8-ba8d-758a5082c7e5", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.69/server-get-resp.json0000664000175000017500000000237700000000000024043 0ustar00zuulzuul00000000000000{ "server": { "OS-EXT-AZ:availability_zone": "UNKNOWN", "OS-EXT-STS:power_state": 0, "created": "2018-12-03T21:06:18Z", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "id": "33748c23-38dd-4f70-b774-522fc69e7b67", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "status": "UNKNOWN", "tenant_id": "project", "user_id": "fake", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/33748c23-38dd-4f70-b774-522fc69e7b67", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/33748c23-38dd-4f70-b774-522fc69e7b67", "rel": "bookmark" } ] } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.69/servers-details-resp.json0000664000175000017500000000130500000000000025062 0ustar00zuulzuul00000000000000{ "servers": [ { "created": "2018-12-03T21:06:18Z", "id": "b6b0410f-b65f-4473-855e-5d82a71759e0", "status": "UNKNOWN", "tenant_id": "6f70656e737461636b20342065766572", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/b6b0410f-b65f-4473-855e-5d82a71759e0", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/b6b0410f-b65f-4473-855e-5d82a71759e0", "rel": "bookmark" } ] } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.69/servers-list-resp.json0000664000175000017500000000113000000000000024404 0ustar00zuulzuul00000000000000{ "servers": [ { "id": 
"2e136db7-b4a4-4815-8a00-25d9bfe59617", "status": "UNKNOWN", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/2e136db7-b4a4-4815-8a00-25d9bfe59617", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/2e136db7-b4a4-4815-8a00-25d9bfe59617", "rel": "bookmark" } ] } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1904712 nova-21.2.4/doc/api_samples/servers/v2.71/0000775000175000017500000000000000000000000020076 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.71/server-action-rebuild-resp.json0000664000175000017500000000401100000000000026141 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "adminPass": "seekr3t", "created": "2019-02-28T03:16:19Z", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "id": "36b2afd5-1684-4d18-a49c-915bf0f5344c", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/36b2afd5-1684-4d18-a49c-915bf0f5344c", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/36b2afd5-1684-4d18-a49c-915bf0f5344c", "rel": "bookmark" } ], "locked": false, "metadata": { "meta_var": "meta_val" }, "name": "foobar", "progress": 0, "server_groups": [ "f3d86fe6-4246-4be8-b87c-eb894626c741" ], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": null, "updated": "2019-02-28T03:16:20Z", "user_data": "ZWNobyAiaGVsbG8gd29ybGQi", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.71/server-action-rebuild.json0000664000175000017500000000056200000000000025201 0ustar00zuulzuul00000000000000{ "rebuild" : { "accessIPv4" : "1.2.3.4", "accessIPv6" : "80fe::", "OS-DCF:diskConfig": "AUTO", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "name" : "foobar", "adminPass" : "seekr3t", "metadata" : { "meta_var" : "meta_val" }, "user_data": "ZWNobyAiaGVsbG8gd29ybGQi" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.71/server-create-req.json0000664000175000017500000000117500000000000024331 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "name" : "new-server-test", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "flavorRef" : "1", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "security_groups": [ { "name": "default" } ], "user_data" : "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "networks": "auto" }, "OS-SCH-HNT:scheduler_hints": { "group": "f3d86fe6-4246-4be8-b87c-eb894626c741" } }././@PaxHeader0000000000000000000000000000002600000000000011453 
xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.71/server-create-resp.json0000664000175000017500000000124600000000000024512 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "DB2bQBhxvq8a", "id": "84e2b49d-39a9-4d32-9100-e62161c236db", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/84e2b49d-39a9-4d32-9100-e62161c236db", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/84e2b49d-39a9-4d32-9100-e62161c236db", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.71/server-get-down-cell-resp.json0000664000175000017500000000253100000000000025706 0ustar00zuulzuul00000000000000{ "server": { "OS-EXT-AZ:availability_zone": "UNKNOWN", "OS-EXT-STS:power_state": 0, "created": "2019-02-28T03:16:19Z", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "id": "2669556b-b4a3-41f1-a0c1-f9c7ff75e53c", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "server_groups": [ "f3d86fe6-4246-4be8-b87c-eb894626c741" ], "status": "UNKNOWN", "tenant_id": "project", "user_id": "fake", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/2669556b-b4a3-41f1-a0c1-f9c7ff75e53c", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/2669556b-b4a3-41f1-a0c1-f9c7ff75e53c", "rel": "bookmark" } ] } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.71/server-get-resp.json0000664000175000017500000000610500000000000024025 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "nova", "OS-EXT-SRV-ATTR:host": "compute", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "r-0scisg0g", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2019-02-28T03:16:19.600768", "OS-SRV-USG:terminated_at": null, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "192.168.1.30", "version": 4 } ] }, "config_drive": "", "created": "2019-02-28T03:16:18Z", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "host_status": "UP", "id": "84e2b49d-39a9-4d32-9100-e62161c236db", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": 
"bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/84e2b49d-39a9-4d32-9100-e62161c236db", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/84e2b49d-39a9-4d32-9100-e62161c236db", "rel": "bookmark" } ], "locked": false, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "os-extended-volumes:volumes_attached": [], "progress": 0, "security_groups": [ { "name": "default" } ], "server_groups": [ "f3d86fe6-4246-4be8-b87c-eb894626c741" ], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": null, "updated": "2019-02-28T03:16:19Z", "user_id": "fake" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.71/server-groups-post-req.json0000664000175000017500000000012400000000000025361 0ustar00zuulzuul00000000000000{ "server_group": { "name": "test", "policy": "affinity" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.71/server-groups-post-resp.json0000664000175000017500000000041300000000000025544 0ustar00zuulzuul00000000000000{ "server_group": { "id": "f3d86fe6-4246-4be8-b87c-eb894626c741", "members": [], "name": "test", "policy": "affinity", "project_id": "6f70656e737461636b20342065766572", "rules": {}, "user_id": "fake" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.71/server-update-req.json0000664000175000017500000000031600000000000024344 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "OS-DCF:diskConfig": "AUTO", "name": "new-server-test", "description": "Sample description" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.71/server-update-resp.json0000664000175000017500000000367400000000000024540 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "created": "2019-02-28T03:16:19Z", "description": "Sample description", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "id": "60e840f8-dd17-476b-bd1d-33785066c496", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/60e840f8-dd17-476b-bd1d-33785066c496", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/60e840f8-dd17-476b-bd1d-33785066c496", "rel": "bookmark" } ], "locked": false, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "progress": 0, "server_groups": [ "f3d86fe6-4246-4be8-b87c-eb894626c741" ], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": null, "updated": "2019-02-28T03:16:19Z", "user_id": "fake" } } 
././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1944711 nova-21.2.4/doc/api_samples/servers/v2.73/0000775000175000017500000000000000000000000020100 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.73/lock-server-with-reason.json0000664000175000017500000000007100000000000025463 0ustar00zuulzuul00000000000000{ "lock": {"locked_reason": "I don't want to work"} }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.73/server-action-rebuild-resp.json0000664000175000017500000000375300000000000026157 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "adminPass": "seekr3t", "created": "2019-04-23T17:10:22Z", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "id": "0c37a84a-c757-4f22-8c7f-0bf8b6970886", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/0c37a84a-c757-4f22-8c7f-0bf8b6970886", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/0c37a84a-c757-4f22-8c7f-0bf8b6970886", "rel": "bookmark" } ], "locked": false, "locked_reason": null, "metadata": { "meta_var": "meta_val" }, "name": "foobar", "progress": 0, "server_groups": [], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": null, "updated": "2019-04-23T17:10:24Z", "user_data": "ZWNobyAiaGVsbG8gd29ybGQi", "user_id": "fake" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.73/server-action-rebuild.json0000664000175000017500000000056200000000000025203 0ustar00zuulzuul00000000000000{ "rebuild" : { "accessIPv4" : "1.2.3.4", "accessIPv6" : "80fe::", "OS-DCF:diskConfig": "AUTO", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "name" : "foobar", "adminPass" : "seekr3t", "metadata" : { "meta_var" : "meta_val" }, "user_data": "ZWNobyAiaGVsbG8gd29ybGQi" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.73/server-create-req.json0000664000175000017500000000103200000000000024323 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "name" : "new-server-test", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "flavorRef" : "1", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "security_groups": [ { "name": "default" } ], "user_data" : "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "networks": "auto" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/api_samples/servers/v2.73/server-create-resp.json0000664000175000017500000000124600000000000024514 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "kJTmMkszoB6A", "id": "ae10adbb-9b5e-4667-9cc5-05ebdc80a941", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/ae10adbb-9b5e-4667-9cc5-05ebdc80a941", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/ae10adbb-9b5e-4667-9cc5-05ebdc80a941", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.73/server-get-resp.json0000664000175000017500000000607200000000000024032 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "nova", "OS-EXT-SRV-ATTR:host": "compute", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "r-t61j9da6", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2019-04-23T15:19:10.855016", "OS-SRV-USG:terminated_at": null, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "192.168.1.30", "version": 4 } ] }, "config_drive": "", "created": "2019-04-23T15:19:09Z", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "host_status": "UP", "id": "0e12087a-7c87-476a-8f84-7398e991cecc", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/0e12087a-7c87-476a-8f84-7398e991cecc", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/0e12087a-7c87-476a-8f84-7398e991cecc", "rel": "bookmark" } ], "locked": true, "locked_reason": "I don't want to work", "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "os-extended-volumes:volumes_attached": [], "progress": 0, "security_groups": [ { "name": "default" } ], "server_groups": [], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": null, "updated": "2019-04-23T15:19:11Z", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.73/server-update-req.json0000664000175000017500000000031600000000000024346 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "OS-DCF:diskConfig": "AUTO", "name": "new-server-test", "description": "Sample description" } 
}././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.73/server-update-resp.json0000664000175000017500000000363600000000000024540 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "created": "2019-04-23T17:37:48Z", "description": "Sample description", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "id": "f9a6c4fe-28e0-48a9-b02c-164e4d04d0b2", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/f9a6c4fe-28e0-48a9-b02c-164e4d04d0b2", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/f9a6c4fe-28e0-48a9-b02c-164e4d04d0b2", "rel": "bookmark" } ], "locked": false, "locked_reason": null, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "progress": 0, "server_groups": [], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": null, "updated": "2019-04-23T17:37:48Z", "user_id": "fake" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.73/servers-details-resp.json0000664000175000017500000000657500000000000025073 0ustar00zuulzuul00000000000000{ "servers": [ { "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "nova", "OS-EXT-SRV-ATTR:host": "compute", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "r-l0i0clt2", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2019-04-23T15:19:15.317839", "OS-SRV-USG:terminated_at": null, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "192.168.1.30", "version": 4 } ] }, "config_drive": "", "created": "2019-04-23T15:19:14Z", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "host_status": "UP", "id": "2ce4c5b3-2866-4972-93ce-77a2ea46a7f9", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/2ce4c5b3-2866-4972-93ce-77a2ea46a7f9", "rel": "self" }, { "href": 
"http://openstack.example.com/6f70656e737461636b20342065766572/servers/2ce4c5b3-2866-4972-93ce-77a2ea46a7f9", "rel": "bookmark" } ], "locked": true, "locked_reason": "I don't want to work", "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "os-extended-volumes:volumes_attached": [], "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": null, "updated": "2019-04-23T15:19:15Z", "user_id": "fake" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1944711 nova-21.2.4/doc/api_samples/servers/v2.74/0000775000175000017500000000000000000000000020101 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.74/server-create-req-with-host-and-node.json0000664000175000017500000000123000000000000027733 0ustar00zuulzuul00000000000000{ "server" : { "adminPass": "MySecretPass", "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "name" : "new-server-test", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "flavorRef" : "6", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "security_groups": [ { "name": "default" } ], "user_data" : "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "networks": "auto", "host": "openstack-node-01", "hypervisor_hostname": "openstack-node-01" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.74/server-create-req-with-only-host.json0000664000175000017500000000114400000000000027233 0ustar00zuulzuul00000000000000{ "server" : { "adminPass": "MySecretPass", "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "name" : "new-server-test", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "flavorRef" : "6", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "security_groups": [ { "name": "default" } ], "user_data" : "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "networks": "auto", "host": "openstack-node-01" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.74/server-create-req-with-only-node.json0000664000175000017500000000116300000000000027204 0ustar00zuulzuul00000000000000{ "server" : { "adminPass": "MySecretPass", "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "name" : "new-server-test", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "flavorRef" : "6", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "security_groups": [ { "name": "default" } ], "user_data" : "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "networks": "auto", "hypervisor_hostname": "openstack-node-01" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.74/server-create-resp.json0000664000175000017500000000124600000000000024515 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "DB2bQBhxvq8a", "id": "84e2b49d-39a9-4d32-9100-e62161c236db", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/84e2b49d-39a9-4d32-9100-e62161c236db", "rel": "self" }, { "href": 
"http://openstack.example.com/6f70656e737461636b20342065766572/servers/84e2b49d-39a9-4d32-9100-e62161c236db", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1944711 nova-21.2.4/doc/api_samples/servers/v2.75/0000775000175000017500000000000000000000000020102 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.75/server-action-rebuild-resp.json0000664000175000017500000000601600000000000026154 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "compute", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "r-t61j9da6", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2019-04-23T15:19:10.855016", "OS-SRV-USG:terminated_at": null, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "192.168.1.30", "version": 4 } ] }, "adminPass": "seekr3t", "config_drive": "", "created": "2019-04-23T17:10:22Z", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "host_status": "UP", "id": "0c37a84a-c757-4f22-8c7f-0bf8b6970886", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/0c37a84a-c757-4f22-8c7f-0bf8b6970886", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/0c37a84a-c757-4f22-8c7f-0bf8b6970886", "rel": "bookmark" } ], "locked": false, "locked_reason": null, "metadata": { "meta_var": "meta_val" }, "name": "foobar", "os-extended-volumes:volumes_attached": [], "progress": 0, "security_groups": [ { "name": "default" } ], "server_groups": [], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": null, "updated": "2019-04-23T17:10:24Z", "user_data": "ZWNobyAiaGVsbG8gd29ybGQi", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.75/server-action-rebuild.json0000664000175000017500000000056200000000000025205 0ustar00zuulzuul00000000000000{ "rebuild" : { "accessIPv4" : "1.2.3.4", "accessIPv6" : "80fe::", "OS-DCF:diskConfig": "AUTO", "imageRef" : "70a599e0-31e7-49b7-b260-868f441e862b", "name" : "foobar", "adminPass" : "seekr3t", "metadata" : { "meta_var" : "meta_val" }, "user_data": "ZWNobyAiaGVsbG8gd29ybGQi" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/api_samples/servers/v2.75/server-update-req.json0000664000175000017500000000031700000000000024351 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "OS-DCF:diskConfig": "AUTO", "name": "new-server-test", "description": "Sample description" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.75/server-update-resp.json0000664000175000017500000000607200000000000024537 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "compute", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "r-t61j9da6", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2019-04-23T15:19:10.855016", "OS-SRV-USG:terminated_at": null, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "192.168.1.30", "version": 4 } ] }, "config_drive": "", "created": "2012-12-02T02:11:57Z", "description": "Sample description", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "6e84af987b4e7ec1c039b16d21f508f4a505672bd94fb0218b668d07", "host_status": "UP", "id": "324dfb7d-f4a9-419a-9a19-237df04b443b", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/324dfb7d-f4a9-419a-9a19-237df04b443b", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/324dfb7d-f4a9-419a-9a19-237df04b443b", "rel": "bookmark" } ], "locked": false, "locked_reason": null, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "os-extended-volumes:volumes_attached": [], "progress": 0, "security_groups": [ { "name": "default" } ], "server_groups": [], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": null, "updated": "2012-12-02T02:11:58Z", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1944711 nova-21.2.4/doc/api_samples/servers/v2.9/0000775000175000017500000000000000000000000020017 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.9/server-get-resp.json0000664000175000017500000000624000000000000023746 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "2013-09-03T04:01:32Z", "flavor": { "id": "1", "links": [ { 
"href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ] }, "hostId": "92154fab69d5883ba2c8622b7e65f745dd33257221c07af363c51b29", "id": "0e44cc9c-e052-415d-afbf-469b0d384170", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/0e44cc9c-e052-415d-afbf-469b0d384170", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/0e44cc9c-e052-415d-afbf-469b0d384170", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "b8b357f7100d4391828f2177c922ef93", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:reservation_id": "r-00000001", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:hostname": "fake-hostname", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ { "id": "volume_id1", "delete_on_termination": false }, { "id": "volume_id2", "delete_on_termination": false } ], "OS-SRV-USG:launched_at": "2013-09-23T13:37:00.880302", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2013-09-03T04:01:33Z", "user_id": "fake", "locked": false } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/servers/v2.9/servers-details-resp.json0000664000175000017500000000737200000000000025006 0ustar00zuulzuul00000000000000{ "servers": [ { "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "2013-09-03T04:01:32Z", "flavor": { "id": "1", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ] }, "hostId": "bcf92836fc9ed4203a75cb0337afc7f917d2be504164b995c2334b25", "id": "f5dc173b-6804-445a-a6d8-c705dad5b5eb", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/f5dc173b-6804-445a-a6d8-c705dad5b5eb", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/f5dc173b-6804-445a-a6d8-c705dad5b5eb", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "c3f14e9812ad496baf92ccfb3c61e15f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": 
"instance-00000001", "OS-EXT-SRV-ATTR:reservation_id": "r-00000001", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:hostname": "fake-hostname", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ { "id": "volume_id1", "delete_on_termination": false }, { "id": "volume_id2", "delete_on_termination": false } ], "OS-SRV-USG:launched_at": "2013-09-23T13:53:12.774549", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "2013-09-03T04:01:32Z", "user_id": "fake", "locked": false } ], "servers_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/detail?limit=1&marker=f5dc173b-6804-445a-a6d8-c705dad5b5eb", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers/v2.9/servers-list-resp.json0000664000175000017500000000147600000000000024333 0ustar00zuulzuul00000000000000{ "servers": [ { "id": "22c91117-08de-4894-9aa9-6ef382400985", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/22c91117-08de-4894-9aa9-6ef382400985", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/22c91117-08de-4894-9aa9-6ef382400985", "rel": "bookmark" } ], "name": "new-server-test" } ], "servers_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers?limit=1&marker=22c91117-08de-4894-9aa9-6ef382400985", "rel": "next" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1824713 nova-21.2.4/doc/api_samples/servers-sort/0000775000175000017500000000000000000000000020306 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/servers-sort/server-sort-keys-list-resp.json0000664000175000017500000000113500000000000026365 0ustar00zuulzuul00000000000000{ "servers": [ { "id": "e08e6d34-fcc1-480e-b11e-24a675b479f8", "links": [ { "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/e08e6d34-fcc1-480e-b11e-24a675b479f8", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/e08e6d34-fcc1-480e-b11e-24a675b479f8", "rel": "bookmark" } ], "name": "new-server-test" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1944711 nova-21.2.4/doc/api_samples/versions/0000775000175000017500000000000000000000000017500 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_samples/versions/v2-version-get-resp.json0000664000175000017500000000122400000000000024130 0ustar00zuulzuul00000000000000{ "version": { "id": "v2.0", "links": [ { "href": "http://openstack.example.com/v2/", "rel": "self" }, { "href": "http://docs.openstack.org/", "rel": "describedby", "type": "text/html" } ], "media-types": [ { "base": "application/json", "type": 
"application/vnd.openstack.compute+json;version=2" } ], "status": "SUPPORTED", "version": "", "min_version": "", "updated": "2011-01-21T11:33:21Z" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/versions/v21-version-get-resp.json0000664000175000017500000000123500000000000024213 0ustar00zuulzuul00000000000000{ "version": { "id": "v2.1", "links": [ { "href": "http://openstack.example.com/v2.1/", "rel": "self" }, { "href": "http://docs.openstack.org/", "rel": "describedby", "type": "text/html" } ], "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.compute+json;version=2.1" } ], "status": "CURRENT", "version": "2.87", "min_version": "2.1", "updated": "2013-07-23T11:33:21Z" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_samples/versions/versions-get-resp.json0000664000175000017500000000135600000000000023774 0ustar00zuulzuul00000000000000{ "versions": [ { "id": "v2.0", "links": [ { "href": "http://openstack.example.com/v2/", "rel": "self" } ], "status": "SUPPORTED", "version": "", "min_version": "", "updated": "2011-01-21T11:33:21Z" }, { "id": "v2.1", "links": [ { "href": "http://openstack.example.com/v2.1/", "rel": "self" } ], "status": "CURRENT", "version": "2.87", "min_version": "2.1", "updated": "2013-07-23T11:33:21Z" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1944711 nova-21.2.4/doc/api_schemas/0000775000175000017500000000000000000000000015607 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/api_schemas/config_drive.json0000664000175000017500000000122200000000000021135 0ustar00zuulzuul00000000000000{ "anyOf": [ { "type": "object", "properties": { "meta_data": { "type": "object" }, "network_data": { "type": "object" }, "user_data": { "type": [ "object", "array", "string", "null" ] } }, "additionalProperties": false }, { "type": [ "string", "null" ] } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/api_schemas/network_data.json0000664000175000017500000004025100000000000021166 0ustar00zuulzuul00000000000000{ "$schema": "http://openstack.org/nova/network_data.json#", "id": "http://openstack.org/nova/network_data.json", "type": "object", "title": "OpenStack Nova network metadata schema", "description": "Schema of Nova instance network configuration information", "required": [ "links", "networks", "services" ], "properties": { "links": { "$id": "#/properties/links", "type": "array", "title": "L2 interfaces settings", "items": { "$id": "#/properties/links/items", "oneOf": [ { "$ref": "#/definitions/l2_link" }, { "$ref": "#/definitions/l2_bond" }, { "$ref": "#/definitions/l2_vlan" } ] } }, "networks": { "$id": "#/properties/networks", "type": "array", "title": "L3 networks", "items": { "$id": "#/properties/networks/items", "oneOf": [ { "$ref": "#/definitions/l3_ipv4_network" }, { "$ref": "#/definitions/l3_ipv6_network" } ] } }, "services": { "$ref": "#/definitions/services" } }, "definitions": { "l2_address": { "$id": "#/definitions/l2_address", "type": "string", "pattern": "(?i)^([0-9A-F]{2}[:-]){5}([0-9A-F]{2})$", "title": "L2 interface address", "examples": [ "fa:16:3e:9c:bf:3d" ] }, "l2_id": { "$id": "#/definitions/l2_id", "type": "string", 
"title": "L2 interface ID", "examples": [ "eth0" ] }, "l2_mtu": { "$id": "#/definitions/l2_mtu", "title": "L2 interface MTU", "anyOf": [ { "type": "number", "minimum": 1, "maximum": 65535 }, { "type": "null" } ], "examples": [ 1500 ] }, "l2_vif_id": { "$id": "#/definitions/l2_vif_id", "type": "string", "title": "Virtual interface ID", "examples": [ "cd9f6d46-4a3a-43ab-a466-994af9db96fc" ] }, "l2_link": { "$id": "#/definitions/l2_link", "type": "object", "title": "L2 interface configuration settings", "required": [ "ethernet_mac_address", "id", "type" ], "properties": { "id": { "$ref": "#/definitions/l2_id" }, "ethernet_mac_address": { "$ref": "#/definitions/l2_address" }, "mtu": { "$ref": "#/definitions/l2_mtu" }, "type": { "$id": "#/definitions/l2_link/properties/type", "type": "string", "enum": [ "bridge", "dvs", "hw_veb", "hyperv", "ovs", "tap", "vhostuser", "vif", "phy" ], "title": "Interface type", "examples": [ "bridge" ] }, "vif_id": { "$ref": "#/definitions/l2_vif_id" } } }, "l2_bond": { "$id": "#/definitions/l2_bond", "type": "object", "title": "L2 bonding interface configuration settings", "required": [ "ethernet_mac_address", "id", "type", "bond_mode", "bond_links" ], "properties": { "id": { "$ref": "#/definitions/l2_id" }, "ethernet_mac_address": { "$ref": "#/definitions/l2_address" }, "mtu": { "$ref": "#/definitions/l2_mtu" }, "type": { "$id": "#/definitions/l2_bond/properties/type", "type": "string", "enum": [ "bond" ], "title": "Interface type", "examples": [ "bond" ] }, "vif_id": { "$ref": "#/definitions/l2_vif_id" }, "bond_mode": { "$id": "#/definitions/bond/properties/bond_mode", "type": "string", "title": "Port bonding type", "enum": [ "802.1ad", "balance-rr", "active-backup", "balance-xor", "broadcast", "balance-tlb", "balance-alb" ], "examples": [ "802.1ad" ] }, "bond_links": { "$id": "#/definitions/bond/properties/bond_links", "type": "array", "title": "Port bonding links", "items": { "$id": "#/definitions/bond/properties/bond_links/items", "type": "string" } } } }, "l2_vlan": { "$id": "#/definitions/l2_vlan", "type": "object", "title": "L2 VLAN interface configuration settings", "required": [ "vlan_mac_address", "id", "type", "vlan_link", "vlan_id" ], "properties": { "id": { "$ref": "#/definitions/l2_id" }, "vlan_mac_address": { "$ref": "#/definitions/l2_address" }, "mtu": { "$ref": "#/definitions/l2_mtu" }, "type": { "$id": "#/definitions/l2_vlan/properties/type", "type": "string", "enum": [ "vlan" ], "title": "VLAN interface type", "examples": [ "vlan" ] }, "vif_id": { "$ref": "#/definitions/l2_vif_id" }, "vlan_id": { "$id": "#/definitions/l2_vlan/properties/vlan_id", "type": "integer", "title": "VLAN ID" }, "vlan_link": { "$id": "#/definitions/l2_vlan/properties/vlan_link", "type": "string", "title": "VLAN link name" } } }, "l3_id": { "$id": "#/definitions/l3_id", "type": "string", "title": "Network name", "examples": [ "network0" ] }, "l3_link": { "$id": "#/definitions/l3_link", "type": "string", "title": "L2 network link to use for L3 interface", "examples": [ "99e88329-f20d-4741-9593-25bf07847b16" ] }, "l3_network_id": { "$id": "#/definitions/l3_network_id", "type": "string", "title": "Network ID", "examples": [ "99e88329-f20d-4741-9593-25bf07847b16" ] }, "l3_ipv4_type": { "$id": "#/definitions/l3_ipv4_type", "type": "string", "enum": [ "ipv4", "ipv4_dhcp" ], "title": "L3 IPv4 network type", "examples": [ "ipv4_dhcp" ] }, "l3_ipv6_type": { "$id": "#/definitions/l3_ipv6_type", "type": "string", "enum": [ "ipv6", "ipv6_dhcp", "ipv6_slaac" ], "title": "L3 IPv6 
network type", "examples": [ "ipv6_dhcp" ] }, "l3_ipv4_host": { "$id": "#/definitions/l3_ipv4_host", "type": "string", "pattern": "^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$", "title": "L3 IPv4 host address", "examples": [ "192.168.81.99" ] }, "l3_ipv6_host": { "$id": "#/definitions/l3_ipv6_host", "type": "string", "pattern": "^(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]))(/[0-9]{1,2})?$", "title": "L3 IPv6 host address", "examples": [ "2001:db8:3:4::192.168.81.99" ] }, "l3_ipv4_netmask": { "$id": "#/definitions/l3_ipv4_netmask", "type": "string", "pattern": "^(254|252|248|240|224|192|128|0)\\.0\\.0\\.0|255\\.(254|252|248|240|224|192|128|0)\\.0\\.0|255\\.255\\.(254|252|248|240|224|192|128|0)\\.0|255\\.255\\.255\\.(254|252|248|240|224|192|128|0)$", "title": "L3 IPv4 network mask", "examples": [ "255.255.252.0" ] }, "l3_ipv6_netmask": { "$id": "#/definitions/l3_ipv6_netmask", "type": "string", "pattern": "^(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7})|(::))$", "title": "L3 IPv6 network mask", "examples": [ "ffff:ffff:ffff:ffff::" ] }, "l3_ipv4_nw": { "$id": "#/definitions/l3_ipv4_nw", "type": "string", "pattern": "^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$", "title": "L3 IPv4 network address", "examples": [ "0.0.0.0" ] }, "l3_ipv6_nw": { "$id": "#/definitions/l3_ipv6_nw", "type": "string", "pattern": "^(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7})|(::))$", "title": "L3 IPv6 network address", "examples": [ "8000::" ] }, "l3_ipv4_gateway": { "$id": "#/definitions/l3_ipv4_gateway", "type": "string", "pattern": "^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$", "title": "L3 IPv4 gateway address", "examples": [ "192.168.200.1" ] }, "l3_ipv6_gateway": { "$id": "#/definitions/l3_ipv6_gateway", "type": "string", "pattern": 
"^(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]))$", "title": "L3 IPv6 gateway address", "examples": [ "2001:db8:3:4::192.168.81.99" ] }, "l3_ipv4_network_route": { "$id": "#/definitions/l3_ipv4_network_route", "type": "object", "title": "L3 IPv4 routing configuration item", "required": [ "gateway", "netmask", "network" ], "properties": { "network": { "$ref": "#/definitions/l3_ipv4_nw" }, "netmask": { "$ref": "#/definitions/l3_ipv4_netmask" }, "gateway": { "$ref": "#/definitions/l3_ipv4_gateway" }, "services": { "$ref": "#/definitions/ipv4_services" } } }, "l3_ipv6_network_route": { "$id": "#/definitions/l3_ipv6_network_route", "type": "object", "title": "L3 IPv6 routing configuration item", "required": [ "gateway", "netmask", "network" ], "properties": { "network": { "$ref": "#/definitions/l3_ipv6_nw" }, "netmask": { "$ref": "#/definitions/l3_ipv6_netmask" }, "gateway": { "$ref": "#/definitions/l3_ipv6_gateway" }, "services": { "$ref": "#/definitions/ipv6_services" } } }, "l3_ipv4_network": { "$id": "#/definitions/l3_ipv4_network", "type": "object", "title": "L3 IPv4 network configuration", "required": [ "id", "link", "network_id", "type" ], "properties": { "id": { "$ref": "#/definitions/l3_id" }, "link": { "$ref": "#/definitions/l3_link" }, "network_id": { "$ref": "#/definitions/l3_network_id" }, "type": { "$ref": "#/definitions/l3_ipv4_type" }, "ip_address": { "$ref": "#/definitions/l3_ipv4_host" }, "netmask": { "$ref": "#/definitions/l3_ipv4_netmask" }, "routes": { "$id": "#/definitions/l3_ipv4_network/routes", "type": "array", "title": "L3 IPv4 network routes", "items": { "$ref": "#/definitions/l3_ipv4_network_route" } } } }, "l3_ipv6_network": { "$id": "#/definitions/l3_ipv6_network", "type": "object", "title": "L3 IPv6 network configuration", "required": [ "id", "link", "network_id", "type" ], "properties": { "id": { "$ref": "#/definitions/l3_id" }, "link": { "$ref": "#/definitions/l3_link" }, "network_id": { "$ref": "#/definitions/l3_network_id" }, "type": { "$ref": "#/definitions/l3_ipv6_type" }, "ip_address": { "$ref": "#/definitions/l3_ipv6_host" }, "netmask": { "$ref": "#/definitions/l3_ipv6_netmask" }, "routes": { "$id": "#/definitions/properties/l3_ipv6_network/routes", "type": "array", "title": "L3 IPv6 network routes", "items": { "$ref": "#/definitions/l3_ipv6_network_route" } } } }, "ipv4_service": { "$id": "#/definitions/ipv4_service", "type": "object", "title": "Service on a IPv4 network", "required": [ "address", "type" ], "properties": { "address": { "$ref": "#/definitions/l3_ipv4_host" }, "type": { "$id": "#/definitions/ipv4_service/properties/type", "type": "string", "enum": [ "dns" ], "title": "Service type", "examples": [ "dns" ] } } }, "ipv6_service": { "$id": "#/definitions/ipv6_service", "type": "object", "title": "Service on a IPv6 network", "required": [ "address", "type" ], "properties": { "address": { "$ref": "#/definitions/l3_ipv6_host" }, "type": { 
"$id": "#/definitions/ipv4_service/properties/type", "type": "string", "enum": [ "dns" ], "title": "Service type", "examples": [ "dns" ] } } }, "ipv4_services": { "$id": "#/definitions/ipv4_services", "type": "array", "title": "Network services on IPv4 network", "items": { "$id": "#/definitions/ipv4_services/items", "$ref": "#/definitions/ipv4_service" } }, "ipv6_services": { "$id": "#/definitions/ipv6_services", "type": "array", "title": "Network services on IPv6 network", "items": { "$id": "#/definitions/ipv6_services/items", "$ref": "#/definitions/ipv6_service" } }, "services": { "$id": "#/definitions/services", "type": "array", "title": "Network services", "items": { "$id": "#/definitions/services/items", "anyOf": [ { "$ref": "#/definitions/ipv4_service" }, { "$ref": "#/definitions/ipv6_service" } ] } } } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.1944711 nova-21.2.4/doc/ext/0000775000175000017500000000000000000000000014133 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/ext/__init__.py0000664000175000017500000000000000000000000016232 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/ext/extra_specs.py0000664000175000017500000001517300000000000017034 0ustar00zuulzuul00000000000000# Copyright 2020, Red Hat, Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Display extra specs in documentation. Provides a single directive that can be used to list all extra specs validators and, thus, document all extra specs that nova recognizes and supports. """ import typing as ty from docutils import nodes from docutils.parsers import rst from docutils.parsers.rst import directives from docutils import statemachine from sphinx import addnodes from sphinx import directives as sphinx_directives from sphinx import domains from sphinx import roles from sphinx.util import logging from sphinx.util import nodes as sphinx_nodes from nova.api.validation.extra_specs import base from nova.api.validation.extra_specs import validators LOG = logging.getLogger(__name__) class ExtraSpecXRefRole(roles.XRefRole): """Cross reference a extra spec. Example:: :nova:extra-spec:`hw:cpu_policy` """ def __init__(self): super(ExtraSpecXRefRole, self).__init__( warn_dangling=True, ) def process_link(self, env, refnode, has_explicit_title, title, target): # The anchor for the extra spec link is the extra spec name return target, target class ExtraSpecDirective(sphinx_directives.ObjectDescription): """Document an individual extra spec. Accepts one required argument - the extra spec name, including the group. Example:: .. 
extra-spec:: hw:cpu_policy """ def handle_signature(self, sig, signode): """Transform an option description into RST nodes.""" # Insert a node into the output showing the extra spec name signode += addnodes.desc_name(sig, sig) signode['allnames'] = [sig] return sig def add_target_and_index(self, firstname, sig, signode): cached_options = self.env.domaindata['nova']['extra_specs'] signode['ids'].append(sig) self.state.document.note_explicit_target(signode) # Store the location of the option definition for later use in # resolving cross-references cached_options[sig] = self.env.docname def _indent(text, count=1): if not text: return text padding = ' ' * (4 * count) return padding + text def _format_validator_group_help( validators: ty.Dict[str, base.ExtraSpecValidator], summary: bool, ): """Generate reStructuredText snippets for a group of validators.""" for validator in validators.values(): for line in _format_validator_help(validator, summary): yield line def _format_validator_help( validator: base.ExtraSpecValidator, summary: bool, ): """Generate reStucturedText snippets for the provided validator. :param validator: A validator to document. :type validator: nova.api.validation.extra_specs.base.ExtraSpecValidator """ yield f'.. nova:extra-spec:: {validator.name}' yield '' # NOTE(stephenfin): We don't print the pattern, if present, since it's too # internal. Instead, the description should provide this information in a # human-readable format yield _indent(f':Type: {validator.value["type"].__name__}') if validator.value.get('min') is not None: yield _indent(f':Min: {validator.value["min"]}') if validator.value.get('max') is not None: yield _indent(f':Max: {validator.value["max"]}') yield '' if not summary: for line in validator.description.splitlines(): yield _indent(line) yield '' if validator.deprecated: yield _indent('.. warning::') yield _indent( 'This extra spec has been deprecated and should not be used.', 2 ) yield '' class ExtraSpecGroupDirective(rst.Directive): """Document extra specs belonging to the specified group. Accepts one optional argument - the extra spec group - and one option - whether to show a summary view only (omit descriptions). Example:: .. 
extra-specs:: hw_rng :summary: """ required_arguments = 0 optional_arguments = 1 option_spec = { 'summary': directives.flag, } has_content = False def run(self): result = statemachine.ViewList() source_name = self.state.document.current_source group = self.arguments[0] if self.arguments else None summary = self.options.get('summary', False) if group: group_validators = { n.split(':', 1)[1]: v for n, v in validators.VALIDATORS.items() if ':' in n and n.split(':', 1)[0].split('{')[0] == group } else: group_validators = { n: v for n, v in validators.VALIDATORS.items() if ':' not in n } if not group_validators: LOG.warning("No validators found for group '%s'", group or '') for count, line in enumerate( _format_validator_group_help(group_validators, summary) ): result.append(line, source_name, count) LOG.debug('%5d%s%s', count, ' ' if line else '', line) node = nodes.section() node.document = self.state.document sphinx_nodes.nested_parse_with_titles(self.state, result, node) return node.children class NovaDomain(domains.Domain): """nova domain.""" name = 'nova' label = 'nova' object_types = { 'configoption': domains.ObjType( 'extra spec', 'spec', ), } directives = { 'extra-spec': ExtraSpecDirective, } roles = { 'extra-spec': ExtraSpecXRefRole(), } initial_data = { 'extra_specs': {}, } def resolve_xref( self, env, fromdocname, builder, typ, target, node, contnode, ): """Resolve cross-references""" if typ == 'option': return sphinx_nodes.make_refnode( builder, fromdocname, env.domaindata['nova']['extra_specs'][target], target, contnode, target, ) return None def setup(app): app.add_domain(NovaDomain) app.add_directive('extra-specs', ExtraSpecGroupDirective) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/ext/feature_matrix.py0000664000175000017500000005230400000000000017530 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ This provides a sphinx extension able to render from an ini file (from the doc/source/* directory) a feature matrix into the developer documentation. It is used via a single directive in the .rst file .. feature_matrix:: feature_classification.ini """ import re import sys from six.moves import configparser from docutils import nodes from docutils.parsers import rst class Matrix(object): """Matrix represents the entire feature matrix parsed from an ini file This includes: * self.features is a list of MatrixFeature instances, the rows and cells * self.targets is a dict of (MatrixTarget.key, MatrixTarget), the columns """ def __init__(self): self.features = [] self.targets = {} class MatrixTarget(object): def __init__(self, key, title, driver, hypervisor=None, architecture=None, link=None): """MatrixTarget modes a target, a column in the matrix This is usually a specific CI system, or collection of related deployment configurations. 
:param key: Unique identifier for the hypervisor driver :param title: Human friendly name of the hypervisor :param driver: Name of the Nova driver :param hypervisor: (optional) Name of the hypervisor, if many :param architecture: (optional) Name of the architecture, if many :param link: (optional) URL to docs about this target """ self.key = key self.title = title self.driver = driver self.hypervisor = hypervisor self.architecture = architecture self.link = link class MatrixImplementation(object): STATUS_COMPLETE = "complete" STATUS_PARTIAL = "partial" STATUS_MISSING = "missing" STATUS_UKNOWN = "unknown" STATUS_ALL = [STATUS_COMPLETE, STATUS_PARTIAL, STATUS_MISSING, STATUS_UKNOWN] def __init__(self, status=STATUS_MISSING, notes=None, release=None): """MatrixImplementation models a cell in the matrix This models the current state of a target for a specific feature :param status: One off complete, partial, missing, unknown. See the RST docs for a definition of those. :param notes: Arbitrary string describing any caveats of the implementation. Mandatory if status is 'partial'. :param release: Letter of the release entry was last updated. i.e. m=mitaka, c=cactus. If not known it is None. """ self.status = status self.notes = notes self.release = release class MatrixFeature(object): MATURITY_INCOMPLETE = "incomplete" MATURITY_EXPERIMENTAL = "experimental" MATURITY_COMPLETE = "complete" MATURITY_DEPRECATED = "deprecated" MATURITY_ALL = [MATURITY_INCOMPLETE, MATURITY_EXPERIMENTAL, MATURITY_COMPLETE, MATURITY_DEPRECATED] def __init__(self, key, title, notes=None, cli=None, maturity=None, api_doc_link=None, admin_doc_link=None, tempest_test_uuids=None): """MatrixFeature models a row in the matrix This initialises ``self.implementations``, which is a dict of (MatrixTarget.key, MatrixImplementation) This is the list of cells for the given row of the matrix. :param key: used as the HTML id for the details, and so in the URL :param title: human friendly short title, used in the matrix :param notes: Arbitrarily long string describing the feature :param cli: list of cli commands related to the feature :param maturity: incomplete, experimental, complete or deprecated for a full definition see the rst doc :param api_doc_link: URL to the API ref for this feature :param admin_doc_link: URL to the admin docs for using this feature :param tempest_test_uuids: uuids for tests that validate this feature """ if cli is None: cli = [] if tempest_test_uuids is None: tempest_test_uuids = [] self.key = key self.title = title self.notes = notes self.implementations = {} self.cli = cli self.maturity = maturity self.api_doc_link = api_doc_link self.admin_doc_link = admin_doc_link self.tempest_test_uuids = tempest_test_uuids class FeatureMatrixDirective(rst.Directive): """The Sphinx directive plugin Has single required argument, the filename for the ini file. The usage of the directive looks like this:: .. feature_matrix:: feature_matrix_gp.ini """ required_arguments = 1 def run(self): matrix = self._load_feature_matrix() return self._build_markup(matrix) def _load_feature_matrix(self): """Reads the feature-matrix.ini file and populates an instance of the Matrix class with all the data. 
:returns: Matrix instance """ # SafeConfigParser was deprecated in Python 3.2 if sys.version_info >= (3, 2): cfg = configparser.ConfigParser() else: cfg = configparser.SafeConfigParser() env = self.state.document.settings.env filename = self.arguments[0] rel_fpath, fpath = env.relfn2path(filename) with open(fpath) as fp: cfg.readfp(fp) # This ensures that the docs are rebuilt whenever the # .ini file changes env.note_dependency(rel_fpath) matrix = Matrix() matrix.targets = self._get_targets(cfg) matrix.features = self._get_features(cfg, matrix.targets) return matrix def _get_targets(self, cfg): """The 'targets' section is special - it lists all the hypervisors that this file records data for. """ targets = {} for section in cfg.sections(): if not section.startswith("target."): continue key = section[7:] title = cfg.get(section, "title") link = cfg.get(section, "link") driver = key.split("-")[0] target = MatrixTarget(key, title, driver, link=link) targets[key] = target return targets def _get_features(self, cfg, targets): """All sections except 'targets' describe some feature of the Nova hypervisor driver implementation. """ features = [] for section in cfg.sections(): if section == "targets": continue if section.startswith("target."): continue if not cfg.has_option(section, "title"): raise Exception( "'title' field missing in '[%s]' section" % section) title = cfg.get(section, "title") maturity = MatrixFeature.MATURITY_INCOMPLETE if cfg.has_option(section, "maturity"): maturity = cfg.get(section, "maturity").lower() if maturity not in MatrixFeature.MATURITY_ALL: raise Exception( "'maturity' field value '%s' in ['%s']" "section must be %s" % (maturity, section, ",".join(MatrixFeature.MATURITY_ALL))) notes = None if cfg.has_option(section, "notes"): notes = cfg.get(section, "notes") cli = [] if cfg.has_option(section, "cli"): cli = cfg.get(section, "cli") api_doc_link = None if cfg.has_option(section, "api_doc_link"): api_doc_link = cfg.get(section, "api_doc_link") admin_doc_link = None if cfg.has_option(section, "admin_doc_link"): admin_doc_link = cfg.get(section, "admin_doc_link") tempest_test_uuids = [] if cfg.has_option(section, "tempest_test_uuids"): tempest_test_uuids = cfg.get(section, "tempest_test_uuids") feature = MatrixFeature(section, title, notes, cli, maturity, api_doc_link, admin_doc_link, tempest_test_uuids) # Now we've got the basic feature details, we must process # the hypervisor driver implementation for each feature for item in cfg.options(section): key = item.replace("driver-impl-", "") if key not in targets: # TODO(johngarbutt) would be better to skip known list if item.startswith("driver-impl-"): raise Exception( "Driver impl '%s' in '[%s]' not declared" % (item, section)) continue impl_status_and_release = cfg.get(section, item) impl_status_and_release = impl_status_and_release.split(":") impl_status = impl_status_and_release[0] release = None if len(impl_status_and_release) == 2: release = impl_status_and_release[1] if impl_status not in MatrixImplementation.STATUS_ALL: raise Exception( "'%s' value '%s' in '[%s]' section must be %s" % (item, impl_status, section, ",".join(MatrixImplementation.STATUS_ALL))) noteskey = "driver-notes-" + item[12:] notes = None if cfg.has_option(section, noteskey): notes = cfg.get(section, noteskey) target = targets[key] impl = MatrixImplementation(impl_status, notes, release) feature.implementations[target.key] = impl for key in targets: if key not in feature.implementations: raise Exception("'%s' missing in '[%s]' section" % (key, 
section)) features.append(feature) return features def _build_markup(self, matrix): """Constructs the docutils content for the feature matrix""" content = [] self._build_summary(matrix, content) self._build_details(matrix, content) return content def _build_summary(self, matrix, content): """Constructs the docutils content for the summary of the feature matrix. The summary consists of a giant table, with one row for each feature, and a column for each hypervisor driver. It provides an 'at a glance' summary of the status of each driver """ summarytitle = nodes.subtitle(text="Summary") summary = nodes.table() cols = len(matrix.targets.keys()) cols += 2 summarygroup = nodes.tgroup(cols=cols) summarybody = nodes.tbody() summaryhead = nodes.thead() for i in range(cols): summarygroup.append(nodes.colspec(colwidth=1)) summarygroup.append(summaryhead) summarygroup.append(summarybody) summary.append(summarygroup) content.append(summarytitle) content.append(summary) # This sets up all the column headers - two fixed # columns for feature name & status header = nodes.row() blank = nodes.entry() blank.append(nodes.emphasis(text="Feature")) header.append(blank) blank = nodes.entry() blank.append(nodes.emphasis(text="Maturity")) header.append(blank) summaryhead.append(header) # then one column for each hypervisor driver impls = sorted(matrix.targets.keys()) for key in impls: target = matrix.targets[key] implcol = nodes.entry() header.append(implcol) if target.link: uri = target.link target_ref = nodes.reference("", refuri=uri) target_txt = nodes.inline() implcol.append(target_txt) target_txt.append(target_ref) target_ref.append(nodes.strong(text=target.title)) else: implcol.append(nodes.strong(text=target.title)) # We now produce the body of the table, one row for # each feature to report on for feature in matrix.features: item = nodes.row() # the hyperlink target name linking to details id = re.sub("[^a-zA-Z0-9_]", "_", feature.key) # first the to fixed columns for title/status keycol = nodes.entry() item.append(keycol) keyref = nodes.reference(refid=id) keytxt = nodes.inline() keycol.append(keytxt) keytxt.append(keyref) keyref.append(nodes.strong(text=feature.title)) maturitycol = nodes.entry() item.append(maturitycol) maturitycol.append(nodes.inline( text=feature.maturity, classes=["fm_maturity_" + feature.maturity])) # and then one column for each hypervisor driver impls = sorted(matrix.targets.keys()) for key in impls: target = matrix.targets[key] impl = feature.implementations[key] implcol = nodes.entry() item.append(implcol) id = re.sub("[^a-zA-Z0-9_]", "_", feature.key + "_" + key) implref = nodes.reference(refid=id) impltxt = nodes.inline() implcol.append(impltxt) impltxt.append(implref) impl_status = "" if impl.status == MatrixImplementation.STATUS_COMPLETE: impl_status = u"\u2714" elif impl.status == MatrixImplementation.STATUS_MISSING: impl_status = u"\u2716" elif impl.status == MatrixImplementation.STATUS_PARTIAL: impl_status = u"\u2714" elif impl.status == MatrixImplementation.STATUS_UKNOWN: impl_status = u"?" implref.append(nodes.literal( text=impl_status, classes=["fm_impl_summary", "fm_impl_" + impl.status])) if impl.release: implref.append(nodes.inline(text=" %s" % impl.release)) summarybody.append(item) def _build_details(self, matrix, content): """Constructs the docutils content for the details of the feature matrix. This is generated as a bullet list of features. 
Against each feature we provide the description of the feature and then the details of the hypervisor impls, with any driver specific notes that exist """ detailstitle = nodes.subtitle(text="Details") details = nodes.bullet_list() content.append(detailstitle) content.append(details) # One list entry for each feature we're reporting on for feature in matrix.features: item = nodes.list_item() # The hypervisor target name linked from summary table id = re.sub("[^a-zA-Z0-9_]", "_", feature.key) # Highlight the feature title name item.append(nodes.strong(text=feature.title, ids=[id])) if feature.notes is not None: para_notes = nodes.paragraph() para_notes.append(nodes.inline(text=feature.notes)) item.append(para_notes) self._add_feature_info(item, feature) if feature.cli: item.append(self._create_cli_paragraph(feature)) para_divers = nodes.paragraph() para_divers.append(nodes.strong(text="drivers:")) # A sub-list giving details of each hypervisor target impls = nodes.bullet_list() for key in feature.implementations: target = matrix.targets[key] impl = feature.implementations[key] subitem = nodes.list_item() id = re.sub("[^a-zA-Z0-9_]", "_", feature.key + "_" + key) subitem += [ nodes.strong(text=target.title + ": "), nodes.literal(text=impl.status, classes=["fm_impl_" + impl.status], ids=[id]), ] if impl.release: release_letter = impl.release.upper() release_text = \ ' (updated in "%s" release)' % release_letter subitem.append(nodes.inline(text=release_text)) if impl.notes is not None: subitem.append(self._create_notes_paragraph(impl.notes)) impls.append(subitem) para_divers.append(impls) item.append(para_divers) details.append(item) def _add_feature_info(self, item, feature): para_info = nodes.paragraph() para_info.append(nodes.strong(text="info:")) info_list = nodes.bullet_list() maturity_literal = nodes.literal(text=feature.maturity, classes=["fm_maturity_" + feature.maturity]) self._append_info_list_item(info_list, "Maturity", items=[maturity_literal]) self._append_info_list_item(info_list, "API Docs", link=feature.api_doc_link) self._append_info_list_item(info_list, "Admin Docs", link=feature.admin_doc_link) tempest_items = [] if feature.tempest_test_uuids: for uuid in feature.tempest_test_uuids.split(";"): base = "https://github.com/openstack/tempest/search?q=%s" link = base % uuid inline_ref = self._get_uri_ref(link, text=uuid) tempest_items.append(inline_ref) tempest_items.append(nodes.inline(text=", ")) # removing trailing punctuation tempest_items = tempest_items[:-1] self._append_info_list_item(info_list, "Tempest tests", items=tempest_items) para_info.append(info_list) item.append(para_info) def _get_uri_ref(self, link, text=None): if not text: text = link ref = nodes.reference("", text, refuri=link) inline = nodes.inline() inline.append(ref) return inline def _append_info_list_item(self, info_list, title, text=None, link=None, items=None): subitem = nodes.list_item() subitem.append(nodes.strong(text="%s: " % title)) if items: for item in items: subitem.append(item) elif link: inline_link = self._get_uri_ref(link, text) subitem.append(inline_link) elif text: subitem.append(nodes.literal(text=text)) info_list.append(subitem) def _create_cli_paragraph(self, feature): """Create a paragraph which represents the CLI commands of the feature The paragraph will have a bullet list of CLI commands. 
""" para = nodes.paragraph() para.append(nodes.strong(text="CLI commands:")) commands = nodes.bullet_list() for c in feature.cli.split(";"): cli_command = nodes.list_item() cli_command += nodes.literal(text=c, classes=["fm_cli"]) commands.append(cli_command) para.append(commands) return para def _create_notes_paragraph(self, notes): """Constructs a paragraph which represents the implementation notes The paragraph consists of text and clickable URL nodes if links were given in the notes. """ para = nodes.paragraph() # links could start with http:// or https:// link_idxs = [m.start() for m in re.finditer('https?://', notes)] start_idx = 0 for link_idx in link_idxs: # assume the notes start with text (could be empty) para.append(nodes.inline(text=notes[start_idx:link_idx])) # create a URL node until the next text or the end of the notes link_end_idx = notes.find(" ", link_idx) if link_end_idx == -1: # In case the notes end with a link without a blank link_end_idx = len(notes) uri = notes[link_idx:link_end_idx + 1] para.append(nodes.reference("", uri, refuri=uri)) start_idx = link_end_idx + 1 # get all text after the last link (could be empty) or all of the # text if no link was given para.append(nodes.inline(text=notes[start_idx:])) return para def setup(app): app.add_directive('feature_matrix', FeatureMatrixDirective) app.add_stylesheet('feature-matrix.css') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/ext/versioned_notifications.py0000664000175000017500000001302500000000000021435 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ This provides a sphinx extension able to list the implemented versioned notifications into the developer documentation. It is used via a single directive in the .rst file .. versioned_notifications:: """ import os from docutils import nodes from docutils.parsers import rst import importlib from oslo_serialization import jsonutils import pkgutil from nova.notifications.objects import base as notification from nova.objects import base from nova.tests import json_ref import nova.utils class VersionedNotificationDirective(rst.Directive): SAMPLE_ROOT = 'doc/notification_samples/' TOGGLE_SCRIPT = """ """ def run(self): notifications = self._collect_notifications() return self._build_markup(notifications) def _import_all_notification_packages(self): list(map(lambda module: importlib.import_module(module), ('nova.notifications.objects.' + name for _, name, _ in pkgutil.iter_modules(nova.notifications.objects.__path__)))) def _collect_notifications(self): # If you do not see your notification sample showing up in the docs # be sure that the sample filename matches what is registered on the # versioned notification object class using the # @base.notification_sample decorator. 
self._import_all_notification_packages() base.NovaObjectRegistry.register_notification_objects() notifications = {} ovos = base.NovaObjectRegistry.obj_classes() for name, cls in ovos.items(): cls = cls[0] if (issubclass(cls, notification.NotificationBase) and cls != notification.NotificationBase): payload_name = cls.fields['payload'].objname payload_cls = ovos[payload_name][0] for sample in cls.samples: if sample in notifications: raise ValueError('Duplicated usage of %s ' 'sample file detected' % sample) notifications[sample] = ((cls.__name__, payload_cls.__name__, sample)) return sorted(notifications.values()) def _build_markup(self, notifications): content = [] cols = ['Event type', 'Notification class', 'Payload class', 'Sample'] table = nodes.table() content.append(table) group = nodes.tgroup(cols=len(cols)) table.append(group) head = nodes.thead() group.append(head) for _ in cols: group.append(nodes.colspec(colwidth=1)) body = nodes.tbody() group.append(body) # fill the table header row = nodes.row() body.append(row) for col_name in cols: col = nodes.entry() row.append(col) text = nodes.strong(text=col_name) col.append(text) # fill the table content, one notification per row for name, payload, sample_file in notifications: event_type = sample_file[0: -5].replace('-', '.') row = nodes.row() body.append(row) col = nodes.entry() row.append(col) text = nodes.literal(text=event_type) col.append(text) col = nodes.entry() row.append(col) text = nodes.literal(text=name) col.append(text) col = nodes.entry() row.append(col) text = nodes.literal(text=payload) col.append(text) col = nodes.entry() row.append(col) with open(os.path.join(self.SAMPLE_ROOT, sample_file), 'r') as f: sample_content = f.read() sample_obj = jsonutils.loads(sample_content) sample_obj = json_ref.resolve_refs( sample_obj, base_path=os.path.abspath(self.SAMPLE_ROOT)) sample_content = jsonutils.dumps(sample_obj, sort_keys=True, indent=4, separators=(',', ': ')) event_type = sample_file[0: -5] html_str = self.TOGGLE_SCRIPT % ((event_type, ) * 3) html_str += ("" % event_type) html_str += ("
<div id='%s-div'><pre>%s</pre></div>
" % (event_type, sample_content)) raw = nodes.raw('', html_str, format="html") col.append(raw) return content def setup(app): app.add_directive('versioned_notifications', VersionedNotificationDirective) ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.218471 nova-21.2.4/doc/notification_samples/0000775000175000017500000000000000000000000017545 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/aggregate-add_host-end.json0000664000175000017500000000041400000000000024714 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "$ref": "common_payloads/AggregatePayload.json#", "nova_object.data": { "hosts": ["compute"] } }, "event_type": "aggregate.add_host.end", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/aggregate-add_host-start.json0000664000175000017500000000030400000000000025301 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "$ref": "common_payloads/AggregatePayload.json#" }, "event_type": "aggregate.add_host.start", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/aggregate-cache_images-end.json0000664000175000017500000000042000000000000025514 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "$ref": "common_payloads/AggregatePayload.json#", "nova_object.data": { "hosts": ["compute"] } }, "event_type": "aggregate.cache_images.end", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/aggregate-cache_images-progress.json0000664000175000017500000000115700000000000026622 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "nova_object.version": "1.0", "nova_object.namespace": "nova", "nova_object.name": "AggregateCachePayload", "nova_object.data": { "name": "my-aggregate", "uuid": "788608ec-ebdc-45c5-bc7f-e5f24ab92c80", "host": "compute", "total": 1, "index": 1, "images_cached": ["155d900f-4e14-4e4c-a73d-069cbf4541e6"], "images_failed": [], "id": 1 } }, "event_type": "aggregate.cache_images.progress", "publisher_id": "nova-conductor:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/aggregate-cache_images-start.json0000664000175000017500000000042200000000000026105 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "$ref": "common_payloads/AggregatePayload.json#", "nova_object.data": { "hosts": ["compute"] } }, "event_type": "aggregate.cache_images.start", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/aggregate-create-end.json0000664000175000017500000000030000000000000024364 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "$ref": "common_payloads/AggregatePayload.json#" }, "event_type": "aggregate.create.end", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/notification_samples/aggregate-create-start.json0000664000175000017500000000043500000000000024764 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "$ref": "common_payloads/AggregatePayload.json#", "nova_object.data": { "hosts": null, "id": null } }, "event_type": "aggregate.create.start", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/aggregate-delete-end.json0000664000175000017500000000030000000000000024363 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "$ref": "common_payloads/AggregatePayload.json#" }, "event_type": "aggregate.delete.end", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/aggregate-delete-start.json0000664000175000017500000000030200000000000024754 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "$ref": "common_payloads/AggregatePayload.json#" }, "event_type": "aggregate.delete.start", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/aggregate-remove_host-end.json0000664000175000017500000000030500000000000025460 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "$ref": "common_payloads/AggregatePayload.json#" }, "event_type": "aggregate.remove_host.end", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/aggregate-remove_host-start.json0000664000175000017500000000042100000000000026046 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "$ref": "common_payloads/AggregatePayload.json#", "nova_object.data": { "hosts": ["compute"] } }, "event_type": "aggregate.remove_host.start", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/aggregate-update_metadata-end.json0000664000175000017500000000050600000000000026253 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "$ref": "common_payloads/AggregatePayload.json#", "nova_object.data": { "metadata": { "availability_zone": "AZ-1" } } }, "event_type": "aggregate.update_metadata.end", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/aggregate-update_metadata-start.json0000664000175000017500000000031300000000000026636 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "$ref": "common_payloads/AggregatePayload.json#" }, "event_type": "aggregate.update_metadata.start", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/aggregate-update_prop-end.json0000664000175000017500000000042500000000000025453 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "$ref": "common_payloads/AggregatePayload.json#", "nova_object.data": { "name": "my-new-aggregate" } }, "event_type": "aggregate.update_prop.end", "publisher_id": "nova-api:fake-mini" } 
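The aggregate samples above carry only their own overrides inline and pull every shared field from common_payloads/AggregatePayload.json via "$ref", the same convention the versioned_notifications extension resolves before rendering a sample. A minimal stand-alone sketch of that expansion, assuming only the file layout visible in this tree (the helper name and the merge-one-level-deep behaviour are assumptions of the sketch, not Nova code):

import json
import os


def expand_refs(node, base_path):
    # Replace {"$ref": "<file>#"} nodes with the referenced JSON document,
    # letting any sibling keys (e.g. a partial "nova_object.data") override
    # the referenced content one level deep.
    if isinstance(node, list):
        return [expand_refs(item, base_path) for item in node]
    if not isinstance(node, dict):
        return node
    if '$ref' in node:
        ref_path = os.path.join(base_path, node['$ref'].split('#')[0])
        with open(ref_path) as f:
            resolved = expand_refs(json.load(f), os.path.dirname(ref_path))
        for key, value in node.items():
            if key == '$ref':
                continue
            value = expand_refs(value, base_path)
            if isinstance(value, dict) and isinstance(resolved.get(key), dict):
                resolved[key] = {**resolved[key], **value}
            else:
                resolved[key] = value
        return resolved
    return {key: expand_refs(value, base_path) for key, value in node.items()}


if __name__ == '__main__':
    root = 'doc/notification_samples'
    with open(os.path.join(root, 'aggregate-add_host-end.json')) as f:
        sample = json.load(f)
    print(json.dumps(expand_refs(sample, root), indent=4, sort_keys=True))

Run from the tree root, this prints the aggregate.add_host.end sample with the shared AggregatePayload fields inlined and the "hosts" override applied.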
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/aggregate-update_prop-start.json0000664000175000017500000000042700000000000026044 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "$ref": "common_payloads/AggregatePayload.json#", "nova_object.data": { "name": "my-new-aggregate" } }, "event_type": "aggregate.update_prop.start", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.222471 nova-21.2.4/doc/notification_samples/common_payloads/0000775000175000017500000000000000000000000022731 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/AggregatePayload.json0000664000175000017500000000053500000000000027027 0ustar00zuulzuul00000000000000{ "nova_object.version": "1.1", "nova_object.namespace": "nova", "nova_object.name": "AggregatePayload", "nova_object.data": { "name": "my-aggregate", "metadata": { "availability_zone": "nova" }, "uuid": "788608ec-ebdc-45c5-bc7f-e5f24ab92c80", "hosts": [], "id": 1 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/AuditPeriodPayload.json0000664000175000017500000000041100000000000027343 0ustar00zuulzuul00000000000000{ "nova_object.data": { "audit_period_beginning": "2012-10-01T00:00:00Z", "audit_period_ending": "2012-10-29T13:42:11Z" }, "nova_object.name": "AuditPeriodPayload", "nova_object.namespace": "nova", "nova_object.version": "1.0" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/notification_samples/common_payloads/BandwidthPayload.json0000664000175000017500000000035000000000000027040 0ustar00zuulzuul00000000000000{ "nova_object.data": { "network_name": "private", "out_bytes": 0, "in_bytes": 0 }, "nova_object.name": "BandwidthPayload", "nova_object.namespace": "nova", "nova_object.version": "1.0" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/BlockDevicePayload.json0000664000175000017500000000052100000000000027306 0ustar00zuulzuul00000000000000{ "nova_object.data": { "boot_index": null, "delete_on_termination": false, "device_name": "/dev/sdb", "tag": null, "volume_id": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113" }, "nova_object.name": "BlockDevicePayload", "nova_object.namespace": "nova", "nova_object.version": "1.0" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/ComputeTaskPayload.json0000664000175000017500000000137300000000000027401 0ustar00zuulzuul00000000000000{ "nova_object.version": "1.0", "nova_object.namespace": "nova", "nova_object.name": "ComputeTaskPayload", "nova_object.data": { "instance_uuid": "d5e6a7b7-80e5-4166-85a3-cd6115201082", "reason": {"$ref": "ExceptionPayload.json#"}, "request_spec": { "$ref": "RequestSpecPayload.json#", "nova_object.data": { "flavor": { "nova_object.data": { "extra_specs": { "hw:numa_cpus.0": "0", "hw:numa_mem.0": "512", "hw:numa_nodes": "1" } } }, "numa_topology": {"$ref": "InstanceNUMATopologyPayload.json#"} } }, "state": "error" } } 
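Every payload under common_payloads/ wraps its fields in the same versioned envelope of nova_object.name, nova_object.namespace, nova_object.version and nova_object.data. A rough consistency check over that convention, written against the paths in this tree only; the walk-and-flag approach is an assumption of the sketch, not something Nova ships:

import json
import os

ENVELOPE_KEYS = {
    'nova_object.name',
    'nova_object.namespace',
    'nova_object.version',
    'nova_object.data',
}


def check_envelope(node, path, problems):
    # Flag any dict that uses the nova_object.* envelope but is missing one
    # of the four keys, skipping partial "$ref" override nodes on purpose.
    if isinstance(node, list):
        for item in node:
            check_envelope(item, path, problems)
        return
    if not isinstance(node, dict):
        return
    uses_envelope = any(key in node for key in ENVELOPE_KEYS)
    if uses_envelope and '$ref' not in node and not ENVELOPE_KEYS <= node.keys():
        missing = sorted(ENVELOPE_KEYS - node.keys())
        problems.append('%s: missing %s' % (path, ', '.join(missing)))
    for value in node.values():
        check_envelope(value, path, problems)


if __name__ == '__main__':
    root = 'doc/notification_samples'
    problems = []
    for dirpath, _, filenames in os.walk(root):
        for name in sorted(filenames):
            if name.endswith('.json'):
                full = os.path.join(dirpath, name)
                with open(full) as f:
                    check_envelope(json.load(f), full, problems)
    print('\n'.join(problems) or 'all envelopes complete')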
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/ExceptionPayload.json0000664000175000017500000000074500000000000027102 0ustar00zuulzuul00000000000000{ "nova_object.version": "1.1", "nova_object.namespace": "nova", "nova_object.name": "ExceptionPayload", "nova_object.data": { "function_name": "_schedule_instances", "module_name": "nova.conductor.manager", "exception": "NoValidHost", "exception_message": "No valid host was found. There are not enough hosts available.", "traceback": "Traceback (most recent call last):\n File \"nova/conductor/manager.py\", line ..." } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/FlavorPayload.json0000664000175000017500000000112400000000000026365 0ustar00zuulzuul00000000000000{ "nova_object.name": "FlavorPayload", "nova_object.data": { "flavorid": "a22d5517-147c-4147-a0d1-e698df5cd4e3", "name": "test_flavor", "root_gb": 1, "vcpus": 1, "ephemeral_gb": 0, "memory_mb": 512, "disabled": false, "rxtx_factor": 1.0, "extra_specs": { "hw:watchdog_action": "disabled" }, "projects": null, "swap": 0, "is_public": true, "vcpu_weight": 0, "description": null }, "nova_object.version": "1.4", "nova_object.namespace": "nova" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/ImageMetaPayload.json0000664000175000017500000000143000000000000026765 0ustar00zuulzuul00000000000000{ "nova_object.namespace": "nova", "nova_object.data": { "checksum": null, "container_format": "raw", "created_at": "2011-01-01T01:02:03Z", "direct_url": null, "disk_format": "raw", "id": "155d900f-4e14-4e4c-a73d-069cbf4541e6", "min_disk": 0, "min_ram": 0, "name": "fakeimage123456", "owner": null, "properties": {"$ref":"ImageMetaPropsPayload.json#"}, "protected": false, "size": 25165824, "status": "active", "tags": [ "tag1", "tag2" ], "updated_at": "2011-01-01T01:02:03Z", "virtual_size": null, "visibility": "public" }, "nova_object.name": "ImageMetaPayload", "nova_object.version": "1.0" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/notification_samples/common_payloads/ImageMetaPropsPayload.json0000664000175000017500000000030000000000000030004 0ustar00zuulzuul00000000000000{ "nova_object.namespace": "nova", "nova_object.data": { "hw_architecture": "x86_64" }, "nova_object.name": "ImageMetaPropsPayload", "nova_object.version": "1.3" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/InstanceActionPayload.json0000664000175000017500000000032100000000000030034 0ustar00zuulzuul00000000000000{ "$ref": "InstancePayload.json", "nova_object.data":{ "fault":null }, "nova_object.name":"InstanceActionPayload", "nova_object.namespace":"nova", "nova_object.version":"1.8" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/InstanceActionRebuildPayload.json0000664000175000017500000000056700000000000031357 0ustar00zuulzuul00000000000000{ "$ref": "InstanceActionPayload.json", "nova_object.data": { "architecture": null, "image_uuid": "a2459075-d96c-40d5-893e-577ff92e721c", 
"trusted_image_certificates": [ "rebuild-cert-id-1", "rebuild-cert-id-2" ] }, "nova_object.name": "InstanceActionRebuildPayload", "nova_object.version": "1.9" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/InstanceActionRescuePayload.json0000664000175000017500000000035200000000000031207 0ustar00zuulzuul00000000000000{ "$ref": "InstanceActionPayload.json", "nova_object.data": { "rescue_image_ref": "a2459075-d96c-40d5-893e-577ff92e721c" }, "nova_object.name": "InstanceActionRescuePayload", "nova_object.version": "1.3" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/InstanceActionResizePrepPayload.json0000664000175000017500000000174700000000000032062 0ustar00zuulzuul00000000000000{ "$ref": "InstanceActionPayload.json", "nova_object.data":{ "new_flavor": { "nova_object.name": "FlavorPayload", "nova_object.data": { "description": null, "disabled": false, "ephemeral_gb": 0, "extra_specs": { "hw:watchdog_action": "reset" }, "flavorid": "d5a8bb54-365a-45ae-abdb-38d249df7845", "is_public": true, "memory_mb": 256, "name": "other_flavor", "projects": null, "root_gb": 1, "rxtx_factor": 1.0, "swap": 0, "vcpu_weight": 0, "vcpus": 1 }, "nova_object.namespace": "nova", "nova_object.version": "1.4" }, "task_state": "resize_prep" }, "nova_object.name": "InstanceActionResizePrepPayload", "nova_object.version": "1.3" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/InstanceActionSnapshotPayload.json0000664000175000017500000000041600000000000031561 0ustar00zuulzuul00000000000000{ "$ref": "InstanceActionPayload.json", "nova_object.data":{ "snapshot_image_id": "d2aae36f-785c-4518-8016-bc9534d9fc7f" }, "nova_object.name":"InstanceActionSnapshotPayload", "nova_object.namespace":"nova", "nova_object.version":"1.9" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/InstanceActionVolumePayload.json0000664000175000017500000000040600000000000031230 0ustar00zuulzuul00000000000000{ "$ref": "InstanceActionPayload.json", "nova_object.data":{ "volume_id": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113" }, "nova_object.name": "InstanceActionVolumePayload", "nova_object.namespace": "nova", "nova_object.version": "1.6" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/InstanceActionVolumeSwapPayload.json0000664000175000017500000000052100000000000032061 0ustar00zuulzuul00000000000000{ "$ref": "InstanceActionPayload.json", "nova_object.data": { "new_volume_id": "227cc671-f30b-4488-96fd-7d0bf13648d8", "old_volume_id": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113" }, "nova_object.name": "InstanceActionVolumeSwapPayload", "nova_object.namespace": "nova", "nova_object.version": "1.8" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/InstanceCreatePayload.json0000664000175000017500000000210200000000000030021 0ustar00zuulzuul00000000000000{ "$ref":"InstanceActionPayload.json", "nova_object.data": { "block_devices": [], "keypairs": [ { "nova_object.version": "1.0", 
"nova_object.namespace": "nova", "nova_object.name": "KeypairPayload", "nova_object.data": { "user_id": "fake", "name": "my-key", "fingerprint": "1e:2c:9b:56:79:4b:45:77:f9:ca:7a:98:2c:b0:d5:3c", "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDx8nkQv/zgGgB4rMYmIf+6A4l6Rr+o/6lHBQdW5aYd44bd8JttDCE/F/pNRr0lRE+PiqSPO8nDPHw0010JeMH9gYgnnFlyY3/OcJ02RhIPyyxYpv9FhY+2YiUkpwFOcLImyrxEsYXpD/0d3ac30bNH6Sw9JD9UZHYcpSxsIbECHw== Generated-by-Nova", "type": "ssh" } } ], "tags": ["tag"], "trusted_image_certificates": [ "cert-id-1", "cert-id-2" ], "instance_name": "instance-00000001" }, "nova_object.name":"InstanceCreatePayload", "nova_object.version": "1.12" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/notification_samples/common_payloads/InstanceExistsPayload.json0000664000175000017500000000051200000000000030100 0ustar00zuulzuul00000000000000{ "$ref": "InstancePayload.json", "nova_object.data":{ "audit_period": {"$ref": "AuditPeriodPayload.json#"}, "bandwidth": [ {"$ref": "BandwidthPayload.json#"} ] }, "nova_object.name":"InstanceExistsPayload", "nova_object.namespace":"nova", "nova_object.version":"1.2" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/notification_samples/common_payloads/InstanceNUMACellPayload.json0000664000175000017500000000062500000000000030166 0ustar00zuulzuul00000000000000{ "nova_object.version": "1.0", "nova_object.namespace": "nova", "nova_object.name": "InstanceNUMACellPayload", "nova_object.data": { "cpu_pinning_raw": null, "cpu_policy": null, "cpu_thread_policy": null, "cpu_topology": null, "cpuset": [0], "cpuset_reserved": null, "id": 0, "memory": 512, "pagesize": null } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/InstanceNUMATopologyPayload.json0000664000175000017500000000053700000000000031125 0ustar00zuulzuul00000000000000{ "nova_object.version": "1.0", "nova_object.namespace": "nova", "nova_object.name": "InstanceNUMATopologyPayload", "nova_object.data": { "cells": [ {"$ref": "InstanceNUMACellPayload.json#"} ], "emulator_threads_policy": null, "instance_uuid": "75cab9f7-57e2-4bd1-984f-a0383d9ee60e" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/InstancePCIRequestsPayload.json0000664000175000017500000000037000000000000030772 0ustar00zuulzuul00000000000000{ "nova_object.version": "1.0", "nova_object.namespace": "nova", "nova_object.name": "InstancePCIRequestsPayload", "nova_object.data":{ "instance_uuid": "d5e6a7b7-80e5-4166-85a3-cd6115201082", "requests": [] } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/InstancePayload.json0000664000175000017500000000300300000000000026676 0ustar00zuulzuul00000000000000{ "nova_object.data":{ "architecture":"x86_64", "availability_zone": "nova", "block_devices": [ {"$ref": "BlockDevicePayload.json#"} ], "created_at":"2012-10-29T13:42:11Z", "deleted_at":null, "display_name":"some-server", "display_description":"some-server", "host":"compute", "host_name":"some-server", "ip_addresses": [ {"$ref": "IpPayload.json#"} ], "kernel_id":"", "key_name": "my-key", "launched_at":"2012-10-29T13:42:11Z", "image_uuid": 
"155d900f-4e14-4e4c-a73d-069cbf4541e6", "metadata":{}, "locked":false, "node":"fake-mini", "os_type":null, "progress":0, "ramdisk_id":"", "reservation_id":"r-npxv0e40", "state":"active", "task_state":null, "power_state":"running", "tenant_id":"6f70656e737461636b20342065766572", "terminated_at":null, "auto_disk_config":"MANUAL", "flavor": {"$ref": "FlavorPayload.json#"}, "updated_at": "2012-10-29T13:42:11Z", "user_id":"fake", "uuid":"178b0921-8f85-4257-88b6-2e743b5a975c", "request_id": "req-5b6c791d-5709-4f36-8fbe-c3e02869e35d", "action_initiator_user": "fake", "action_initiator_project": "6f70656e737461636b20342065766572", "locked_reason": null }, "nova_object.name":"InstancePayload", "nova_object.namespace":"nova", "nova_object.version":"1.8" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/notification_samples/common_payloads/InstanceUpdatePayload.json0000664000175000017500000000207500000000000030051 0ustar00zuulzuul00000000000000{ "$ref": "InstancePayload.json", "nova_object.data": { "audit_period": { "nova_object.data": { "audit_period_beginning": "2012-10-01T00:00:00Z", "audit_period_ending": "2012-10-29T13:42:11Z" }, "nova_object.name": "AuditPeriodPayload", "nova_object.namespace": "nova", "nova_object.version": "1.0" }, "bandwidth": [], "block_devices": [], "old_display_name": null, "state_update": { "nova_object.data": { "new_task_state": null, "old_state": "active", "old_task_state": null, "state": "active" }, "nova_object.name": "InstanceStateUpdatePayload", "nova_object.namespace": "nova", "nova_object.version": "1.0" }, "tags": [], "updated_at": "2012-10-29T13:42:11Z" }, "nova_object.name": "InstanceUpdatePayload", "nova_object.namespace": "nova", "nova_object.version": "1.9" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/IpPayload.json0000664000175000017500000000060100000000000025503 0ustar00zuulzuul00000000000000{ "nova_object.name": "IpPayload", "nova_object.namespace": "nova", "nova_object.version": "1.0", "nova_object.data": { "mac": "fa:16:3e:4c:2c:30", "address": "192.168.1.3", "port_uuid": "ce531f90-199f-48c0-816c-13e38010b442", "meta": {}, "version": 4, "label": "private", "device_name": "tapce531f90-19" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/KeypairPayload.json0000664000175000017500000000105100000000000026537 0ustar00zuulzuul00000000000000{ "nova_object.version": "1.0", "nova_object.namespace": "nova", "nova_object.name": "KeypairPayload", "nova_object.data": { "user_id": "fake", "name": "my-key", "fingerprint": "1e:2c:9b:56:79:4b:45:77:f9:ca:7a:98:2c:b0:d5:3c", "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDx8nkQv/zgGgB4rMYmIf+6A4l6Rr+o/6lHBQdW5aYd44bd8JttDCE/F/pNRr0lRE+PiqSPO8nDPHw0010JeMH9gYgnnFlyY3/OcJ02RhIPyyxYpv9FhY+2YiUkpwFOcLImyrxEsYXpD/0d3ac30bNH6Sw9JD9UZHYcpSxsIbECHw== Generated-by-Nova", "type": "ssh" } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/RequestSpecPayload.json0000664000175000017500000000146700000000000027411 0ustar00zuulzuul00000000000000{ "nova_object.namespace": "nova", "nova_object.data": { "availability_zone": null, "flavor": {"$ref": "FlavorPayload.json#"}, "ignore_hosts": null, "image": {"$ref": 
"ImageMetaPayload.json#"}, "instance_uuid": "d5e6a7b7-80e5-4166-85a3-cd6115201082", "num_instances": 1, "numa_topology": null, "pci_requests": {"$ref": "InstancePCIRequestsPayload.json#"}, "project_id": "6f70656e737461636b20342065766572", "scheduler_hints": {}, "security_groups": ["default"], "force_hosts": null, "force_nodes": null, "instance_group": null, "requested_destination": null, "retry": null, "user_id": "fake" }, "nova_object.name": "RequestSpecPayload", "nova_object.version": "1.1" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/ServerGroupPayload.json0000664000175000017500000000100400000000000027414 0ustar00zuulzuul00000000000000{ "nova_object.version": "1.1", "nova_object.namespace": "nova", "nova_object.name": "ServerGroupPayload", "nova_object.data": { "uuid": "788608ec-ebdc-45c5-bc7f-e5f24ab92c80", "name": "test-server-group", "project_id": "6f70656e737461636b20342065766572", "user_id": "fake", "policies": [ "anti-affinity" ], "policy": "anti-affinity", "rules": {"max_server_per_host": "3"}, "members": [], "hosts": null } }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/common_payloads/ServiceStatusPayload.json0000664000175000017500000000076700000000000027754 0ustar00zuulzuul00000000000000{ "nova_object.name": "ServiceStatusPayload", "nova_object.namespace": "nova", "nova_object.version": "1.1", "nova_object.data": { "host": "host2", "disabled": false, "binary": "nova-compute", "topic": "compute", "disabled_reason": null, "forced_down": false, "version": 23, "availability_zone": null, "uuid": "fa69c544-906b-4a6a-a9c6-c1f7a8078c73", "last_seen_up": null, "report_count": 0 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/compute-exception.json0000664000175000017500000000121300000000000024105 0ustar00zuulzuul00000000000000{ "event_type": "compute.exception", "payload": { "nova_object.data": { "exception": "AggregateNameExists", "exception_message": "Aggregate versioned_exc_aggregate already exists.", "function_name": "_aggregate_create_in_db", "module_name": "nova.objects.aggregate", "traceback": "Traceback (most recent call last):\n File \"nova/compute/manager.py\", line ..." 
}, "nova_object.name": "ExceptionPayload", "nova_object.namespace": "nova", "nova_object.version": "1.1" }, "priority": "ERROR", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/compute_task-build_instances-error.json0000664000175000017500000000031000000000000027423 0ustar00zuulzuul00000000000000{ "event_type": "compute_task.build_instances.error", "payload": {"$ref":"common_payloads/ComputeTaskPayload.json#"}, "priority": "ERROR", "publisher_id": "nova-conductor:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/compute_task-migrate_server-error.json0000664000175000017500000000043300000000000027301 0ustar00zuulzuul00000000000000{ "event_type": "compute_task.migrate_server.error", "payload": { "$ref":"common_payloads/ComputeTaskPayload.json#", "nova_object.data":{ "state": "active" } }, "priority": "ERROR", "publisher_id": "nova-conductor:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/compute_task-rebuild_server-error.json0000664000175000017500000000032600000000000027300 0ustar00zuulzuul00000000000000{ "event_type": "compute_task.rebuild_server.error", "payload": { "$ref": "common_payloads/ComputeTaskPayload.json#" }, "priority": "ERROR", "publisher_id": "nova-conductor:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/flavor-create.json0000664000175000017500000000134700000000000023177 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "nova_object.namespace": "nova", "nova_object.version": "1.4", "nova_object.name": "FlavorPayload", "nova_object.data": { "name": "test_flavor", "memory_mb": 1024, "ephemeral_gb": 0, "disabled": false, "vcpus": 2, "swap": 0, "rxtx_factor": 2.0, "is_public": true, "root_gb": 10, "vcpu_weight": 0, "flavorid": "a22d5517-147c-4147-a0d1-e698df5cd4e3", "extra_specs": null, "projects": [], "description":null } }, "event_type": "flavor.create", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/flavor-delete.json0000664000175000017500000000135100000000000023171 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "nova_object.namespace": "nova", "nova_object.version": "1.4", "nova_object.name": "FlavorPayload", "nova_object.data": { "name": "test_flavor", "memory_mb": 1024, "ephemeral_gb": 0, "disabled": false, "vcpus": 2, "swap": 0, "rxtx_factor": 2.0, "is_public": true, "root_gb": 10, "vcpu_weight": 0, "flavorid": "a22d5517-147c-4147-a0d1-e698df5cd4e3", "extra_specs": null, "projects": null, "description":null } }, "event_type": "flavor.delete", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/flavor-update.json0000664000175000017500000000144500000000000023215 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "nova_object.namespace": "nova", "nova_object.version": "1.4", "nova_object.name": "FlavorPayload", "nova_object.data": { "name": "test_flavor", "memory_mb": 1024, "ephemeral_gb": 0, "disabled": false, 
"vcpus": 2, "extra_specs": { "hw:numa_nodes": "2" }, "projects": ["fake_tenant"], "swap": 0, "rxtx_factor": 2.0, "is_public": false, "root_gb": 10, "vcpu_weight": 0, "flavorid": "a22d5517-147c-4147-a0d1-e698df5cd4e3", "description":null } }, "event_type": "flavor.update", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-create-end.json0000664000175000017500000000026300000000000024252 0ustar00zuulzuul00000000000000{ "event_type":"instance.create.end", "payload":{"$ref":"common_payloads/InstanceCreatePayload.json#"}, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-create-error.json0000664000175000017500000000201500000000000024632 0ustar00zuulzuul00000000000000{ "event_type":"instance.create.error", "payload":{ "$ref":"common_payloads/InstanceCreatePayload.json#", "nova_object.data": { "fault": { "nova_object.data": { "exception": "FlavorDiskTooSmall", "exception_message": "The created instance's disk would be too small.", "function_name": "_build_resources", "module_name": "nova.tests.functional.notification_sample_tests.test_instance", "traceback": "Traceback (most recent call last):\n File \"nova/compute/manager.py\", line ..." }, "nova_object.name": "ExceptionPayload", "nova_object.namespace": "nova", "nova_object.version": "1.1" }, "ip_addresses": [], "launched_at": null, "power_state": "pending", "state": "building" } }, "priority":"ERROR", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-create-start.json0000664000175000017500000000070700000000000024644 0ustar00zuulzuul00000000000000{ "event_type":"instance.create.start", "payload":{ "$ref":"common_payloads/InstanceCreatePayload.json#", "nova_object.data": { "host": null, "ip_addresses": [], "launched_at": null, "node": null, "updated_at": null, "power_state": "pending", "state": "building" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-delete-end.json0000664000175000017500000000066000000000000024252 0ustar00zuulzuul00000000000000{ "event_type":"instance.delete.end", "payload":{ "$ref":"common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "deleted_at":"2012-10-29T13:42:11Z", "ip_addresses":[], "power_state":"pending", "state":"deleted", "terminated_at":"2012-10-29T13:42:11Z" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-delete-end_compute_down.json0000664000175000017500000000065100000000000027035 0ustar00zuulzuul00000000000000{ "event_type":"instance.delete.end", "payload":{ "$ref":"common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "block_devices":[], "deleted_at":"2012-10-29T13:42:11Z", "ip_addresses":[], "state":"deleted", "terminated_at":"2012-10-29T13:42:11Z" } }, "priority":"INFO", "publisher_id":"nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 
mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-delete-end_not_scheduled.json0000664000175000017500000000110700000000000027147 0ustar00zuulzuul00000000000000{ "event_type":"instance.delete.end", "payload":{ "$ref":"common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "availability_zone": null, "block_devices":[], "deleted_at":"2012-10-29T13:42:11Z", "host":null, "ip_addresses":[], "launched_at":null, "node":null, "power_state":"pending", "state":"deleted", "terminated_at":"2012-10-29T13:42:11Z" } }, "priority":"INFO", "publisher_id":"nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-delete-start.json0000664000175000017500000000041700000000000024641 0ustar00zuulzuul00000000000000{ "event_type":"instance.delete.start", "payload":{ "$ref":"common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "task_state":"deleting" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-delete-start_compute_down.json0000664000175000017500000000041500000000000027422 0ustar00zuulzuul00000000000000{ "event_type":"instance.delete.start", "payload":{ "$ref":"common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "task_state":"deleting" } }, "priority":"INFO", "publisher_id":"nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-delete-start_not_scheduled.json0000664000175000017500000000100700000000000027535 0ustar00zuulzuul00000000000000{ "event_type":"instance.delete.start", "payload":{ "$ref":"common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "availability_zone": null, "block_devices":[], "host":null, "ip_addresses":[], "launched_at":null, "node":null, "power_state":"pending", "state":"error", "task_state":"deleting" } }, "priority":"INFO", "publisher_id":"nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-evacuate.json0000664000175000017500000000056600000000000024046 0ustar00zuulzuul00000000000000{ "event_type": "instance.evacuate", "payload": { "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data": { "host": "host2", "node": "host2", "task_state": "rebuilding", "action_initiator_user": "admin" } }, "priority": "INFO", "publisher_id": "nova-api:host2" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-exists.json0000664000175000017500000000055500000000000023566 0ustar00zuulzuul00000000000000{ "event_type":"instance.exists", "payload":{ "$ref":"common_payloads/InstanceExistsPayload.json#", "nova_object.data":{ "architecture":null, "image_uuid":"a2459075-d96c-40d5-893e-577ff92e721c", "task_state":"rebuilding" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-interface_attach-end.json0000664000175000017500000000274700000000000026304 0ustar00zuulzuul00000000000000{ "publisher_id": "nova-compute:compute", "event_type": 
"instance.interface_attach.end", "priority": "INFO", "payload": { "$ref":"common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "ip_addresses": [ { "nova_object.data": { "device_name": "tapce531f90-19", "address": "192.168.1.3", "version": 4, "label": "private", "port_uuid": "ce531f90-199f-48c0-816c-13e38010b442", "mac": "fa:16:3e:4c:2c:30", "meta": {} }, "nova_object.name": "IpPayload", "nova_object.namespace": "nova", "nova_object.version": "1.0" }, { "nova_object.data": { "device_name": "tap88dae9fa-0d", "address": "192.168.1.30", "version": 4, "label": "private", "port_uuid": "88dae9fa-0dc6-49e3-8c29-3abc41e99ac9", "mac": "00:0c:29:0d:11:74", "meta": {} }, "nova_object.name": "IpPayload", "nova_object.namespace": "nova", "nova_object.version": "1.0" } ] } } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-interface_attach-error.json0000664000175000017500000000157100000000000026661 0ustar00zuulzuul00000000000000{ "priority": "ERROR", "event_type": "instance.interface_attach.error", "payload":{ "$ref":"common_payloads/InstanceActionPayload.json#", "nova_object.data": { "fault": { "nova_object.data": { "exception": "InterfaceAttachFailed", "exception_message": "dummy", "function_name": "_unsuccessful_attach_interface", "module_name": "nova.tests.functional.notification_sample_tests.test_instance", "traceback": "Traceback (most recent call last):\n File \"nova/compute/manager.py\", line ..." }, "nova_object.name": "ExceptionPayload", "nova_object.namespace": "nova", "nova_object.version": "1.1" } } }, "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-interface_attach-start.json0000664000175000017500000000030200000000000026654 0ustar00zuulzuul00000000000000{ "priority": "INFO", "event_type": "instance.interface_attach.start", "payload":{"$ref":"common_payloads/InstanceActionPayload.json#"}, "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-interface_detach-end.json0000664000175000017500000000030000000000000026247 0ustar00zuulzuul00000000000000{ "publisher_id": "nova-compute:compute", "event_type": "instance.interface_detach.end", "priority": "INFO", "payload":{"$ref":"common_payloads/InstanceActionPayload.json#"} } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-interface_detach-start.json0000664000175000017500000000275100000000000026652 0ustar00zuulzuul00000000000000{ "priority": "INFO", "event_type": "instance.interface_detach.start", "payload": { "$ref":"common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "ip_addresses": [ { "nova_object.data": { "device_name": "tapce531f90-19", "address": "192.168.1.3", "version": 4, "label": "private", "port_uuid": "ce531f90-199f-48c0-816c-13e38010b442", "mac": "fa:16:3e:4c:2c:30", "meta": {} }, "nova_object.name": "IpPayload", "nova_object.namespace": "nova", "nova_object.version": "1.0" }, { "nova_object.data": { "device_name": "tap88dae9fa-0d", "address": "192.168.1.30", "version": 4, "label": "private", "port_uuid": "88dae9fa-0dc6-49e3-8c29-3abc41e99ac9", "mac": "00:0c:29:0d:11:74", "meta": {} }, "nova_object.name": 
"IpPayload", "nova_object.namespace": "nova", "nova_object.version": "1.0" } ] } }, "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-live_migration_abort-end.json0000664000175000017500000000051300000000000027204 0ustar00zuulzuul00000000000000{ "event_type":"instance.live_migration_abort.end", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "task_state": "migrating", "action_initiator_user": "admin" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-live_migration_abort-start.json0000664000175000017500000000051500000000000027575 0ustar00zuulzuul00000000000000{ "event_type":"instance.live_migration_abort.start", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "task_state": "migrating", "action_initiator_user": "admin" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-live_migration_force_complete-end.json0000664000175000017500000000053200000000000031064 0ustar00zuulzuul00000000000000{ "event_type": "instance.live_migration_force_complete.end", "payload": { "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data": { "action_initiator_user": "admin", "task_state": "migrating" } }, "priority": "INFO", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-live_migration_force_complete-start.json0000664000175000017500000000053400000000000031455 0ustar00zuulzuul00000000000000{ "event_type": "instance.live_migration_force_complete.start", "payload": { "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data": { "action_initiator_user": "admin", "task_state": "migrating" } }, "priority": "INFO", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-live_migration_post-end.json0000664000175000017500000000052000000000000027060 0ustar00zuulzuul00000000000000{ "event_type": "instance.live_migration_post.end", "payload": { "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data": { "action_initiator_user": "admin", "task_state": "migrating" } }, "priority": "INFO", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-live_migration_post-start.json0000664000175000017500000000052200000000000027451 0ustar00zuulzuul00000000000000{ "event_type": "instance.live_migration_post.start", "payload": { "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data": { "action_initiator_user": "admin", "task_state": "migrating" } }, "priority": "INFO", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/notification_samples/instance-live_migration_post_dest-end.json0000664000175000017500000000054100000000000030102 0ustar00zuulzuul00000000000000{ "event_type":"instance.live_migration_post_dest.end", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "host": "host2", "node": "host2", "action_initiator_user": "admin" } }, "priority":"INFO", "publisher_id":"nova-compute:host2" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-live_migration_post_dest-start.json0000664000175000017500000000052000000000000030466 0ustar00zuulzuul00000000000000{ "event_type":"instance.live_migration_post_dest.start", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "task_state": "migrating", "action_initiator_user": "admin" } }, "priority":"INFO", "publisher_id":"nova-compute:host2" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-live_migration_pre-end.json0000664000175000017500000000051500000000000026665 0ustar00zuulzuul00000000000000{ "event_type": "instance.live_migration_pre.end", "payload": { "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data": { "task_state": "migrating", "action_initiator_user": "admin" } }, "priority": "INFO", "publisher_id": "nova-compute:host2" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-live_migration_pre-start.json0000664000175000017500000000051700000000000027256 0ustar00zuulzuul00000000000000{ "event_type": "instance.live_migration_pre.start", "payload": { "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data": { "task_state": "migrating", "action_initiator_user": "admin" } }, "priority": "INFO", "publisher_id": "nova-compute:host2" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-live_migration_rollback-end.json0000664000175000017500000000044700000000000027674 0ustar00zuulzuul00000000000000{ "event_type":"instance.live_migration_rollback.end", "payload":{ "$ref":"common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "action_initiator_user": "admin" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-live_migration_rollback-start.json0000664000175000017500000000052000000000000030253 0ustar00zuulzuul00000000000000{ "event_type":"instance.live_migration_rollback.start", "payload":{ "$ref":"common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "action_initiator_user": "admin", "task_state": "migrating" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-live_migration_rollback_dest-end.json0000664000175000017500000000052700000000000030712 0ustar00zuulzuul00000000000000{ "event_type": "instance.live_migration_rollback_dest.end", "payload": { "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data": { "action_initiator_user": "admin", "task_state": "migrating" } }, 
"priority": "INFO", "publisher_id": "nova-compute:host2" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-live_migration_rollback_dest-start.json0000664000175000017500000000053100000000000031274 0ustar00zuulzuul00000000000000{ "event_type": "instance.live_migration_rollback_dest.start", "payload": { "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data": { "action_initiator_user": "admin", "task_state": "migrating" } }, "priority": "INFO", "publisher_id": "nova-compute:host2" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-lock-with-reason.json0000664000175000017500000000045200000000000025431 0ustar00zuulzuul00000000000000{ "event_type":"instance.lock", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "locked":true, "locked_reason":"global warming" } }, "priority":"INFO", "publisher_id":"nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-lock.json0000664000175000017500000000043700000000000023176 0ustar00zuulzuul00000000000000{ "event_type":"instance.lock", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "locked":true, "locked_reason": null } }, "priority":"INFO", "publisher_id":"nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-pause-end.json0000664000175000017500000000041000000000000024116 0ustar00zuulzuul00000000000000{ "event_type":"instance.pause.end", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data": { "state": "paused" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-pause-start.json0000664000175000017500000000042000000000000024506 0ustar00zuulzuul00000000000000{ "event_type":"instance.pause.start", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data": { "task_state": "pausing" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-power_off-end.json0000664000175000017500000000046100000000000024775 0ustar00zuulzuul00000000000000{ "event_type":"instance.power_off.end", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "state":"stopped", "power_state":"shutdown" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-power_off-start.json0000664000175000017500000000042700000000000025366 0ustar00zuulzuul00000000000000{ "event_type":"instance.power_off.start", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "task_state":"powering-off" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/notification_samples/instance-power_on-end.json0000664000175000017500000000026600000000000024642 0ustar00zuulzuul00000000000000{ "event_type":"instance.power_on.end", "payload":{"$ref": "common_payloads/InstanceActionPayload.json#"}, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-power_on-start.json0000664000175000017500000000053200000000000025225 0ustar00zuulzuul00000000000000{ "event_type":"instance.power_on.start", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "task_state":"powering-on", "state":"stopped", "power_state":"shutdown" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-reboot-end.json0000664000175000017500000000026300000000000024301 0ustar00zuulzuul00000000000000{ "event_type":"instance.reboot.end", "payload":{"$ref":"common_payloads/InstanceActionPayload.json#"}, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-reboot-error.json0000664000175000017500000000171100000000000024663 0ustar00zuulzuul00000000000000{ "event_type":"instance.reboot.error", "payload":{ "$ref":"common_payloads/InstanceActionPayload.json#", "nova_object.data": { "fault": { "nova_object.data": { "exception": "UnsupportedVirtType", "exception_message": "Virtualization type 'FakeVirt' is not supported by this compute driver", "function_name": "_hard_reboot", "module_name": "nova.tests.functional.notification_sample_tests.test_instance", "traceback": "Traceback (most recent call last):\n File \"nova/compute/manager.py\", line ..." 
}, "nova_object.name": "ExceptionPayload", "nova_object.namespace": "nova", "nova_object.version": "1.1" }, "task_state":"reboot_started_hard" } }, "priority":"ERROR", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-reboot-start.json0000664000175000017500000000043200000000000024666 0ustar00zuulzuul00000000000000{ "event_type":"instance.reboot.start", "payload":{ "$ref":"common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "task_state":"reboot_pending_hard" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-rebuild-end.json0000664000175000017500000000030000000000000024425 0ustar00zuulzuul00000000000000{ "event_type": "instance.rebuild.end", "publisher_id": "nova-compute:compute", "payload": {"$ref": "common_payloads/InstanceActionRebuildPayload.json#"}, "priority": "INFO" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-rebuild-error.json0000664000175000017500000000171100000000000025017 0ustar00zuulzuul00000000000000{ "priority": "ERROR", "payload": { "$ref": "common_payloads/InstanceActionRebuildPayload.json#", "nova_object.data": { "fault": { "nova_object.name": "ExceptionPayload", "nova_object.data": { "module_name": "nova.tests.functional.notification_sample_tests.test_instance", "exception_message": "Virtual Interface creation failed", "function_name": "_virtual_interface_create_failed", "exception": "VirtualInterfaceCreateException", "traceback": "Traceback (most recent call last):\n File \"nova/compute/manager.py\", line ..." 
}, "nova_object.version": "1.1", "nova_object.namespace": "nova" }, "task_state": "rebuilding" } }, "publisher_id": "nova-compute:compute", "event_type": "instance.rebuild.error" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-rebuild-start.json0000664000175000017500000000044000000000000025021 0ustar00zuulzuul00000000000000{ "priority": "INFO", "event_type": "instance.rebuild.start", "publisher_id": "nova-compute:compute", "payload": { "$ref": "common_payloads/InstanceActionRebuildPayload.json#", "nova_object.data": { "task_state": "rebuilding" } } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-rebuild_scheduled.json0000664000175000017500000000044600000000000025714 0ustar00zuulzuul00000000000000{ "priority": "INFO", "event_type": "instance.rebuild_scheduled", "publisher_id": "nova-conductor:compute", "payload": { "$ref": "common_payloads/InstanceActionRebuildPayload.json#", "nova_object.data": { "task_state": "rebuilding" } } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-rescue-end.json0000664000175000017500000000047100000000000024276 0ustar00zuulzuul00000000000000{ "event_type": "instance.rescue.end", "payload": { "$ref": "common_payloads/InstanceActionRescuePayload.json#", "nova_object.data": { "state": "rescued", "power_state": "shutdown" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-rescue-start.json0000664000175000017500000000043400000000000024664 0ustar00zuulzuul00000000000000{ "event_type": "instance.rescue.start", "payload": { "$ref": "common_payloads/InstanceActionRescuePayload.json#", "nova_object.data": { "task_state": "rescuing" } }, "priority": "INFO", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-resize-end.json0000664000175000017500000000042700000000000024312 0ustar00zuulzuul00000000000000{ "event_type":"instance.resize.end", "payload": { "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "task_state": "resize_migrated" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-resize-error.json0000664000175000017500000000170500000000000024675 0ustar00zuulzuul00000000000000{ "event_type":"instance.resize.error", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "fault":{ "nova_object.data":{ "exception":"FlavorDiskTooSmall", "exception_message":"The created instance's disk would be too small.", "function_name":"_build_resources", "module_name":"nova.tests.functional.notification_sample_tests.test_instance", "traceback":"Traceback (most recent call last):\n File \"nova/compute/manager.py\", line ..." 
}, "nova_object.name":"ExceptionPayload", "nova_object.namespace":"nova", "nova_object.version":"1.1" }, "block_devices": [], "task_state": "resize_prep" } }, "priority":"ERROR", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-resize-start.json0000664000175000017500000000043100000000000024674 0ustar00zuulzuul00000000000000{ "event_type":"instance.resize.start", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "task_state": "resize_migrating" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-resize_confirm-end.json0000664000175000017500000000074700000000000026034 0ustar00zuulzuul00000000000000{ "event_type":"instance.resize_confirm.end", "payload": { "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data": { "flavor": { "nova_object.data": { "flavorid": "2", "memory_mb": 2048, "name": "m1.small", "root_gb":20 } } } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-resize_confirm-start.json0000664000175000017500000000101100000000000026404 0ustar00zuulzuul00000000000000{ "event_type":"instance.resize_confirm.start", "payload": { "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data": { "flavor": { "nova_object.data": { "flavorid": "2", "memory_mb": 2048, "name": "m1.small", "root_gb":20 } }, "state": "resized" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-resize_finish-end.json0000664000175000017500000000111400000000000025644 0ustar00zuulzuul00000000000000{ "event_type":"instance.resize_finish.end", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "flavor": { "nova_object.data": { "name": "other_flavor", "flavorid": "d5a8bb54-365a-45ae-abdb-38d249df7845", "extra_specs": {"hw:watchdog_action": "reset"}, "memory_mb": 256 } }, "state": "resized" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-resize_finish-start.json0000664000175000017500000000113100000000000026232 0ustar00zuulzuul00000000000000{ "event_type":"instance.resize_finish.start", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "flavor": { "nova_object.data": { "name": "other_flavor", "flavorid": "d5a8bb54-365a-45ae-abdb-38d249df7845", "extra_specs": {"hw:watchdog_action": "reset"}, "memory_mb": 256 } }, "task_state": "resize_finish" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-resize_prep-end.json0000664000175000017500000000030600000000000025334 0ustar00zuulzuul00000000000000{ "event_type": "instance.resize_prep.end", "payload": {"$ref":"common_payloads/InstanceActionResizePrepPayload.json#"}, "priority": 
"INFO", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-resize_prep-start.json0000664000175000017500000000031000000000000025716 0ustar00zuulzuul00000000000000{ "event_type": "instance.resize_prep.start", "payload": {"$ref":"common_payloads/InstanceActionResizePrepPayload.json#"}, "priority": "INFO", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-resize_revert-end.json0000664000175000017500000000027200000000000025677 0ustar00zuulzuul00000000000000{ "event_type":"instance.resize_revert.end", "payload":{"$ref":"common_payloads/InstanceActionPayload.json#"}, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-resize_revert-start.json0000664000175000017500000000124700000000000026271 0ustar00zuulzuul00000000000000{ "event_type":"instance.resize_revert.start", "payload":{ "$ref":"common_payloads/InstanceActionPayload.json#", "nova_object.data": { "flavor":{ "nova_object.data": { "flavorid": "d5a8bb54-365a-45ae-abdb-38d249df7845", "name": "other_flavor", "memory_mb": 256, "extra_specs": { "hw:watchdog_action": "reset" } } }, "state":"resized", "task_state":"resize_reverting" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-restore-end.json0000664000175000017500000000030100000000000024463 0ustar00zuulzuul00000000000000{ "event_type":"instance.restore.end", "payload":{ "$ref":"common_payloads/InstanceActionPayload.json#" }, "priority":"INFO", "publisher_id":"nova-compute:compute" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-restore-start.json0000664000175000017500000000046500000000000025065 0ustar00zuulzuul00000000000000{ "event_type":"instance.restore.start", "payload":{ "$ref":"common_payloads/InstanceActionPayload.json#", "nova_object.data": { "state":"soft-delete", "task_state":"restoring" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-resume-end.json0000664000175000017500000000030100000000000024300 0ustar00zuulzuul00000000000000{ "event_type":"instance.resume.end", "payload":{ "$ref":"common_payloads/InstanceActionPayload.json#" }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-resume-start.json0000664000175000017500000000046700000000000024704 0ustar00zuulzuul00000000000000{ "event_type": "instance.resume.start", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data": { "state": "suspended", "task_state": "resuming" } }, "priority": "INFO", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/notification_samples/instance-shelve-end.json0000664000175000017500000000046000000000000024274 0ustar00zuulzuul00000000000000{ "event_type":"instance.shelve.end", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "state": "shelved", "power_state": "shutdown" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-shelve-start.json0000664000175000017500000000042100000000000024660 0ustar00zuulzuul00000000000000{ "event_type":"instance.shelve.start", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "task_state": "shelving" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-shelve_offload-end.json0000664000175000017500000000063500000000000025772 0ustar00zuulzuul00000000000000{ "event_type":"instance.shelve_offload.end", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "availability_zone": null, "state": "shelved_offloaded", "power_state": "shutdown", "host": null, "node": null } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-shelve_offload-start.json0000664000175000017500000000055400000000000026361 0ustar00zuulzuul00000000000000{ "event_type":"instance.shelve_offload.start", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "state": "shelved", "task_state": "shelving_offloading", "power_state": "shutdown" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-shutdown-end.json0000664000175000017500000000046100000000000024662 0ustar00zuulzuul00000000000000{ "event_type":"instance.shutdown.end", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "ip_addresses": [], "task_state": "deleting" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-shutdown-start.json0000664000175000017500000000042300000000000025247 0ustar00zuulzuul00000000000000{ "event_type":"instance.shutdown.start", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "task_state": "deleting" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-snapshot-end.json0000664000175000017500000000031400000000000024643 0ustar00zuulzuul00000000000000{ "event_type":"instance.snapshot.end", "payload":{ "$ref": "common_payloads/InstanceActionSnapshotPayload.json#" }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/notification_samples/instance-snapshot-start.json0000664000175000017500000000044000000000000025232 0ustar00zuulzuul00000000000000{ "event_type":"instance.snapshot.start", "payload":{ "$ref": "common_payloads/InstanceActionSnapshotPayload.json#", "nova_object.data":{ "task_state":"image_snapshot" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-soft_delete-end.json0000664000175000017500000000050700000000000025305 0ustar00zuulzuul00000000000000{ "event_type":"instance.soft_delete.end", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data": { "deleted_at": "2012-10-29T13:42:11Z", "state": "soft-delete" } }, "priority":"INFO", "publisher_id":"nova-compute:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-soft_delete-start.json0000664000175000017500000000052000000000000025667 0ustar00zuulzuul00000000000000{ "event_type":"instance.soft_delete.start", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data": { "deleted_at": "2012-10-29T13:42:11Z", "task_state": "soft-deleting" } }, "priority":"INFO", "publisher_id":"nova-compute:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-suspend-end.json0000664000175000017500000000041400000000000024466 0ustar00zuulzuul00000000000000{ "event_type":"instance.suspend.end", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "state": "suspended" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-suspend-start.json0000664000175000017500000000042400000000000025056 0ustar00zuulzuul00000000000000{ "event_type":"instance.suspend.start", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "task_state": "suspending" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-trigger_crash_dump-end.json0000664000175000017500000000032000000000000026651 0ustar00zuulzuul00000000000000{ "event_type":"instance.trigger_crash_dump.end", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#" }, "priority": "INFO", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-trigger_crash_dump-start.json0000664000175000017500000000032200000000000027242 0ustar00zuulzuul00000000000000{ "event_type":"instance.trigger_crash_dump.start", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#" }, "priority": "INFO", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-unlock.json0000664000175000017500000000027400000000000023540 0ustar00zuulzuul00000000000000{ "event_type":"instance.unlock", "payload":{ "$ref": 
"common_payloads/InstanceActionPayload.json#" }, "priority":"INFO", "publisher_id":"nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-unpause-end.json0000664000175000017500000000026500000000000024471 0ustar00zuulzuul00000000000000{ "event_type":"instance.unpause.end", "payload":{"$ref": "common_payloads/InstanceActionPayload.json#"}, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-unpause-start.json0000664000175000017500000000046300000000000025060 0ustar00zuulzuul00000000000000{ "event_type":"instance.unpause.start", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data": { "state": "paused", "task_state": "unpausing" } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-unrescue-end.json0000664000175000017500000000027100000000000024637 0ustar00zuulzuul00000000000000{ "event_type": "instance.unrescue.end", "payload":{"$ref": "common_payloads/InstanceActionPayload.json#"}, "priority": "INFO", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-unrescue-start.json0000664000175000017500000000054000000000000025225 0ustar00zuulzuul00000000000000{ "event_type": "instance.unrescue.start", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data": { "power_state": "shutdown", "task_state": "unrescuing", "state": "rescued" } }, "priority": "INFO", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-unshelve-end.json0000664000175000017500000000030400000000000024634 0ustar00zuulzuul00000000000000{ "event_type":"instance.unshelve.end", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#" }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-unshelve-start.json0000664000175000017500000000063200000000000025227 0ustar00zuulzuul00000000000000{ "event_type":"instance.unshelve.start", "payload":{ "$ref": "common_payloads/InstanceActionPayload.json#", "nova_object.data":{ "state": "shelved_offloaded", "power_state": "shutdown", "task_state": "unshelving", "host": null, "node": null } }, "priority":"INFO", "publisher_id":"nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-update-tags-action.json0000664000175000017500000000040500000000000025732 0ustar00zuulzuul00000000000000{ "event_type": "instance.update", "payload": { "$ref":"common_payloads/InstanceUpdatePayload.json#", "nova_object.data": { "tags": ["tag1"] } }, "priority": "INFO", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/doc/notification_samples/instance-update.json0000664000175000017500000000132700000000000023527 0ustar00zuulzuul00000000000000{ "event_type": "instance.update", "payload": { "$ref":"common_payloads/InstanceUpdatePayload.json#", "nova_object.data": { "ip_addresses": [], "updated_at": null, "power_state": "pending", "launched_at": null, "task_state": "scheduling", "state_update": { "nova_object.data": { "old_state": "building", "state": "building"}, "nova_object.name": "InstanceStateUpdatePayload", "nova_object.namespace": "nova", "nova_object.version": "1.0"} } }, "priority": "INFO", "publisher_id": "nova-compute:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-volume_attach-end.json0000664000175000017500000000032300000000000025637 0ustar00zuulzuul00000000000000{ "event_type": "instance.volume_attach.end", "payload": { "$ref": "common_payloads/InstanceActionVolumePayload.json#" }, "priority": "INFO", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-volume_attach-error.json0000664000175000017500000000163700000000000026233 0ustar00zuulzuul00000000000000{ "event_type": "instance.volume_attach.error", "payload": { "$ref": "common_payloads/InstanceActionVolumePayload.json#", "nova_object.data": { "fault": { "nova_object.data": { "exception": "CinderConnectionFailed", "exception_message": "Connection to cinder host failed: Connection timed out", "function_name": "attach_volume", "module_name": "nova.tests.functional.notification_sample_tests.test_instance", "traceback": "Traceback (most recent call last):\n File \"nova/compute/manager.py\", line ..." 
}, "nova_object.name": "ExceptionPayload", "nova_object.namespace": "nova", "nova_object.version": "1.1" } } }, "priority": "ERROR", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-volume_attach-start.json0000664000175000017500000000032500000000000026230 0ustar00zuulzuul00000000000000{ "event_type": "instance.volume_attach.start", "payload": { "$ref": "common_payloads/InstanceActionVolumePayload.json#" }, "priority": "INFO", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-volume_detach-end.json0000664000175000017500000000032300000000000025623 0ustar00zuulzuul00000000000000{ "event_type": "instance.volume_detach.end", "payload": { "$ref": "common_payloads/InstanceActionVolumePayload.json#" }, "priority": "INFO", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-volume_detach-start.json0000664000175000017500000000032500000000000026214 0ustar00zuulzuul00000000000000{ "event_type": "instance.volume_detach.start", "payload": { "$ref": "common_payloads/InstanceActionVolumePayload.json#" }, "priority": "INFO", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-volume_swap-end.json0000664000175000017500000000136200000000000025351 0ustar00zuulzuul00000000000000{ "event_type": "instance.volume_swap.end", "payload": { "$ref": "common_payloads/InstanceActionVolumeSwapPayload.json#", "nova_object.data": { "block_devices": [{ "nova_object.data": { "boot_index": null, "delete_on_termination": false, "device_name": "/dev/sdb", "tag": null, "volume_id": "227cc671-f30b-4488-96fd-7d0bf13648d8" }, "nova_object.name": "BlockDevicePayload", "nova_object.namespace": "nova", "nova_object.version": "1.0" }] } }, "priority": "INFO", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-volume_swap-error.json0000664000175000017500000000177000000000000025737 0ustar00zuulzuul00000000000000{ "event_type": "instance.volume_swap.error", "payload": { "$ref": "common_payloads/InstanceActionVolumeSwapPayload.json#", "nova_object.data": { "new_volume_id": "9c6d9c2d-7a8f-4c80-938d-3bf062b8d489", "old_volume_id": "828419fa-3efb-4533-b458-4267ca5fe9b1", "fault": { "nova_object.data": { "exception": "TypeError", "exception_message": "'tuple' object does not support item assignment", "function_name": "_init_volume_connection", "module_name": "nova.compute.manager", "traceback": "Traceback (most recent call last):\n File \"nova/compute/manager.py\", line ..." 
}, "nova_object.name": "ExceptionPayload", "nova_object.namespace": "nova", "nova_object.version": "1.1" } } }, "priority": "ERROR", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/instance-volume_swap-start.json0000664000175000017500000000032700000000000025740 0ustar00zuulzuul00000000000000{ "event_type": "instance.volume_swap.start", "payload": { "$ref": "common_payloads/InstanceActionVolumeSwapPayload.json#" }, "priority": "INFO", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/keypair-create-end.json0000664000175000017500000000025600000000000024114 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": {"$ref": "common_payloads/KeypairPayload.json#"}, "event_type": "keypair.create.end", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/keypair-create-start.json0000664000175000017500000000044700000000000024505 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "$ref": "common_payloads/KeypairPayload.json#", "nova_object.data": { "fingerprint": null, "public_key": null } }, "event_type": "keypair.create.start", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/keypair-delete-end.json0000664000175000017500000000025500000000000024112 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": {"$ref": "common_payloads/KeypairPayload.json#"}, "event_type": "keypair.delete.end", "publisher_id": "nova-api:fake-mini" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/keypair-delete-start.json0000664000175000017500000000025700000000000024503 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": {"$ref": "common_payloads/KeypairPayload.json#"}, "event_type": "keypair.delete.start", "publisher_id": "nova-api:fake-mini" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/keypair-import-end.json0000664000175000017500000000025500000000000024162 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": {"$ref": "common_payloads/KeypairPayload.json#"}, "event_type": "keypair.import.end", "publisher_id": "nova-api:fake-mini" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/keypair-import-start.json0000664000175000017500000000040600000000000024547 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "$ref": "common_payloads/KeypairPayload.json#", "nova_object.data": { "fingerprint": null } }, "event_type": "keypair.import.start", "publisher_id": "nova-api:fake-mini" }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/libvirt-connect-error.json0000664000175000017500000000170300000000000024672 0ustar00zuulzuul00000000000000{ "event_type": "libvirt.connect.error", "payload": { "nova_object.data": { "reason": { "nova_object.data": { "exception": 
"libvirtError", "exception_message": "Sample exception for versioned notification test.", "function_name": "_get_connection", "module_name": "nova.virt.libvirt.host", "traceback": "Traceback (most recent call last):\n File \"nova/virt/libvirt/host.py\", line ..." }, "nova_object.name": "ExceptionPayload", "nova_object.namespace": "nova", "nova_object.version": "1.1" }, "ip": "10.0.2.15" }, "nova_object.name": "LibvirtErrorPayload", "nova_object.namespace": "nova", "nova_object.version": "1.0" }, "priority": "ERROR", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/metrics-update.json0000664000175000017500000001161000000000000023365 0ustar00zuulzuul00000000000000{ "event_type": "metrics.update", "payload": { "nova_object.version": "1.0", "nova_object.name": "MetricsPayload", "nova_object.namespace": "nova", "nova_object.data": { "host_ip": "10.0.2.15", "host": "compute", "nodename": "fake-mini", "metrics":[ { "nova_object.version": "1.0", "nova_object.name": "MetricPayload", "nova_object.namespace": "nova", "nova_object.data": { "timestamp": "2012-10-29T13:42:11Z", "source": "fake.SmallFakeDriver", "numa_membw_values": null, "name": "cpu.iowait.percent", "value": 0 } }, { "nova_object.version": "1.0", "nova_object.name": "MetricPayload", "nova_object.namespace": "nova", "nova_object.data": { "timestamp": "2012-10-29T13:42:11Z", "source": "fake.SmallFakeDriver", "numa_membw_values": null, "name": "cpu.frequency", "value": 800 } }, { "nova_object.version": "1.0", "nova_object.name": "MetricPayload", "nova_object.namespace": "nova", "nova_object.data": { "timestamp": "2012-10-29T13:42:11Z", "source": "fake.SmallFakeDriver", "numa_membw_values": null, "name": "cpu.idle.percent", "value": 97 } }, { "nova_object.version": "1.0", "nova_object.name": "MetricPayload", "nova_object.namespace": "nova", "nova_object.data": { "timestamp": "2012-10-29T13:42:11Z", "source": "fake.SmallFakeDriver", "numa_membw_values": null, "name": "cpu.iowait.time", "value": 6121490000000 } }, { "nova_object.version": "1.0", "nova_object.name": "MetricPayload", "nova_object.namespace": "nova", "nova_object.data": { "timestamp": "2012-10-29T13:42:11Z", "source": "fake.SmallFakeDriver", "numa_membw_values": null, "name": "cpu.kernel.percent", "value": 0 } }, { "nova_object.version": "1.0", "nova_object.name": "MetricPayload", "nova_object.namespace": "nova", "nova_object.data": { "timestamp": "2012-10-29T13:42:11Z", "source": "fake.SmallFakeDriver", "numa_membw_values": null, "name": "cpu.kernel.time", "value": 5664160000000 } }, { "nova_object.version": "1.0", "nova_object.name": "MetricPayload", "nova_object.namespace": "nova", "nova_object.data": { "timestamp": "2012-10-29T13:42:11Z", "source": "fake.SmallFakeDriver", "numa_membw_values": null, "name": "cpu.percent", "value": 2 } }, { "nova_object.version": "1.0", "nova_object.name": "MetricPayload", "nova_object.namespace": "nova", "nova_object.data": { "timestamp": "2012-10-29T13:42:11Z", "source": "fake.SmallFakeDriver", "numa_membw_values": null, "name": "cpu.user.percent", "value": 1 } }, { "nova_object.version": "1.0", "nova_object.name": "MetricPayload", "nova_object.namespace": "nova", "nova_object.data": { "timestamp": "2012-10-29T13:42:11Z", "source": "fake.SmallFakeDriver", "numa_membw_values": null, "name": "cpu.user.time", "value": 26728850000000 } }, { "nova_object.version": "1.0", "nova_object.name": "MetricPayload", 
"nova_object.namespace": "nova", "nova_object.data": { "timestamp": "2012-10-29T13:42:11Z", "source": "fake.SmallFakeDriver", "numa_membw_values": null, "name": "cpu.idle.time", "value": 1592705190000000 } } ] } }, "priority": "INFO", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/scheduler-select_destinations-end.json0000664000175000017500000000030700000000000027223 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": {"$ref": "common_payloads/RequestSpecPayload.json#"}, "event_type": "scheduler.select_destinations.end", "publisher_id": "nova-scheduler:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/scheduler-select_destinations-start.json0000664000175000017500000000031100000000000027605 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": {"$ref": "common_payloads/RequestSpecPayload.json#"}, "event_type": "scheduler.select_destinations.start", "publisher_id": "nova-scheduler:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/server_group-add_member.json0000664000175000017500000000045600000000000025244 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "$ref": "common_payloads/ServerGroupPayload.json#", "nova_object.data": { "members": ["54238a20-f9be-47a7-897e-d7cb0e4c03d0"] } }, "event_type": "server_group.add_member", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/server_group-create.json0000664000175000017500000000026300000000000024424 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": {"$ref": "common_payloads/ServerGroupPayload.json#"}, "event_type": "server_group.create", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/server_group-delete.json0000664000175000017500000000026300000000000024423 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": {"$ref": "common_payloads/ServerGroupPayload.json#"}, "event_type": "server_group.delete", "publisher_id": "nova-api:fake-mini" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/service-create.json0000664000175000017500000000027700000000000023347 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "$ref": "common_payloads/ServiceStatusPayload.json#" }, "event_type": "service.create", "publisher_id": "nova-compute:host2" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/service-delete.json0000664000175000017500000000027700000000000023346 0ustar00zuulzuul00000000000000{ "priority": "INFO", "payload": { "$ref": "common_payloads/ServiceStatusPayload.json#" }, "event_type": "service.delete", "publisher_id": "nova-compute:host2" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/service-update.json0000664000175000017500000000052700000000000023364 0ustar00zuulzuul00000000000000{ 
"priority": "INFO", "payload": { "$ref": "common_payloads/ServiceStatusPayload.json#", "nova_object.data": { "host": "host1", "last_seen_up": "2012-10-29T13:42:05Z", "report_count": 1 } }, "event_type": "service.update", "publisher_id": "nova-compute:host1" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/notification_samples/volume-usage.json0000664000175000017500000000132700000000000023054 0ustar00zuulzuul00000000000000{ "event_type": "volume.usage", "payload": { "nova_object.data": { "availability_zone": "nova", "instance_uuid": "88fde343-13a8-4047-84fb-2657d5e702f9", "last_refreshed": "2012-10-29T13:42:11Z", "project_id": "6f70656e737461636b20342065766572", "read_bytes": 0, "reads": 0, "user_id": "fake", "volume_id": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113", "write_bytes": 0, "writes": 0 }, "nova_object.name": "VolumeUsagePayload", "nova_object.namespace": "nova", "nova_object.version": "1.0" }, "priority": "INFO", "publisher_id": "nova-compute:compute" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/requirements.txt0000664000175000017500000000120700000000000016617 0ustar00zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. sphinx>=1.8.0,!=2.1.0 # BSD sphinxcontrib-actdiag>=0.8.5 # BSD sphinxcontrib-seqdiag>=0.8.4 # BSD sphinxcontrib-svg2pdfconverter>=0.1.0 # BSD sphinx-feature-classification>=0.2.0 # Apache-2.0 os-api-ref>=1.4.0 # Apache-2.0 openstackdocstheme>=1.30.0 # Apache-2.0 # releasenotes reno>=2.5.0 # Apache-2.0 # redirect tests in docs whereto>=0.3.0 # Apache-2.0 # needed to generate osprofiler config options osprofiler>=1.4.0 # Apache-2.0 ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.222471 nova-21.2.4/doc/source/0000775000175000017500000000000000000000000014633 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.222471 nova-21.2.4/doc/source/_extra/0000775000175000017500000000000000000000000016115 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/_extra/.htaccess0000664000175000017500000001537000000000000017721 0ustar00zuulzuul00000000000000redirectmatch 301 ^/nova/([^/]+)/addmethod.openstackapi.html$ /nova/$1/contributor/api-2.html redirectmatch 301 ^/nova/([^/]+)/admin/flavors2.html$ /nova/$1/admin/flavors.html redirectmatch 301 ^/nova/([^/]+)/admin/numa.html$ /nova/$1/admin/cpu-topologies.html redirectmatch 301 ^/nova/([^/]+)/admin/quotas2.html$ /nova/$1/admin/quotas.html redirectmatch 301 ^/nova/([^/]+)/aggregates.html$ /nova/$1/user/aggregates.html redirectmatch 301 ^/nova/([^/]+)/api_microversion_dev.html$ /nova/$1/contributor/microversions.html redirectmatch 301 ^/nova/([^/]+)/api_microversion_history.html$ /nova/$1/reference/api-microversion-history.html redirectmatch 301 ^/nova/([^/]+)/api_plugins.html$ /nova/$1/contributor/api.html redirectmatch 301 ^/nova/([^/]+)/architecture.html$ /nova/$1/user/architecture.html redirectmatch 301 ^/nova/([^/]+)/block_device_mapping.html$ /nova/$1/user/block-device-mapping.html redirectmatch 301 ^/nova/([^/]+)/blueprints.html$ 
/nova/$1/contributor/blueprints.html redirectmatch 301 ^/nova/([^/]+)/cells.html$ /nova/$1/user/cells.html redirectmatch 301 ^/nova/([^/]+)/code-review.html$ /nova/$1/contributor/code-review.html redirectmatch 301 ^/nova/([^/]+)/conductor.html$ /nova/$1/user/conductor.html redirectmatch 301 ^/nova/([^/]+)/development.environment.html$ /nova/$1/contributor/development-environment.html redirectmatch 301 ^/nova/([^/]+)/devref/api.html /nova/$1/contributor/api.html redirectmatch 301 ^/nova/([^/]+)/devref/cells.html /nova/$1/user/cells.html redirectmatch 301 ^/nova/([^/]+)/devref/filter_scheduler.html /nova/$1/user/filter-scheduler.html # catch all, if we hit something in devref assume it moved to # reference unless we have already triggered a hit above. redirectmatch 301 ^/nova/([^/]+)/devref/([^/]+).html /nova/$1/reference/$2.html redirectmatch 301 ^/nova/([^/]+)/feature_classification.html$ /nova/$1/user/feature-classification.html redirectmatch 301 ^/nova/([^/]+)/filter_scheduler.html$ /nova/$1/user/filter-scheduler.html redirectmatch 301 ^/nova/([^/]+)/gmr.html$ /nova/$1/reference/gmr.html redirectmatch 301 ^/nova/([^/]+)/how_to_get_involved.html$ /nova/$1/contributor/how-to-get-involved.html redirectmatch 301 ^/nova/([^/]+)/i18n.html$ /nova/$1/reference/i18n.html redirectmatch 301 ^/nova/([^/]+)/man/index.html$ /nova/$1/cli/index.html redirectmatch 301 ^/nova/([^/]+)/man/nova-api-metadata.html$ /nova/$1/cli/nova-api-metadata.html redirectmatch 301 ^/nova/([^/]+)/man/nova-api-os-compute.html$ /nova/$1/cli/nova-api-os-compute.html redirectmatch 301 ^/nova/([^/]+)/man/nova-api.html$ /nova/$1/cli/nova-api.html redirectmatch 301 ^/nova/([^/]+)/man/nova-cells.html$ /nova/$1/cli/nova-cells.html # this is gone and never coming back, indicate that to the end users redirectmatch 301 ^/nova/([^/]+)/man/nova-compute.html$ /nova/$1/cli/nova-compute.html redirectmatch 301 ^/nova/([^/]+)/man/nova-conductor.html$ /nova/$1/cli/nova-conductor.html redirectmatch 301 ^/nova/([^/]+)/man/nova-dhcpbridge.html$ /nova/$1/cli/nova-dhcpbridge.html redirectmatch 301 ^/nova/([^/]+)/man/nova-manage.html$ /nova/$1/cli/nova-manage.html redirectmatch 301 ^/nova/([^/]+)/man/nova-network.html$ /nova/$1/cli/nova-network.html redirectmatch 301 ^/nova/([^/]+)/man/nova-novncproxy.html$ /nova/$1/cli/nova-novncproxy.html redirectmatch 301 ^/nova/([^/]+)/man/nova-rootwrap.html$ /nova/$1/cli/nova-rootwrap.html redirectmatch 301 ^/nova/([^/]+)/man/nova-scheduler.html$ /nova/$1/cli/nova-scheduler.html redirectmatch 301 ^/nova/([^/]+)/man/nova-serialproxy.html$ /nova/$1/cli/nova-serialproxy.html redirectmatch 301 ^/nova/([^/]+)/man/nova-spicehtml5proxy.html$ /nova/$1/cli/nova-spicehtml5proxy.html redirectmatch 301 ^/nova/([^/]+)/man/nova-status.html$ /nova/$1/cli/nova-status.html redirectmatch 301 ^/nova/([^/]+)/notifications.html$ /nova/$1/reference/notifications.html redirectmatch 301 ^/nova/([^/]+)/placement.html$ /nova/$1/user/placement.html redirectmatch 301 ^/nova/([^/]+)/placement_dev.html$ /nova/$1/contributor/placement.html redirectmatch 301 ^/nova/([^/]+)/policies.html$ /nova/$1/contributor/policies.html redirectmatch 301 ^/nova/([^/]+)/policy_enforcement.html$ /nova/$1/reference/policy-enforcement.html redirectmatch 301 ^/nova/([^/]+)/process.html$ /nova/$1/contributor/process.html redirectmatch 301 ^/nova/([^/]+)/project_scope.html$ /nova/$1/contributor/project-scope.html redirectmatch 301 ^/nova/([^/]+)/quotas.html$ /nova/$1/user/quotas.html redirectmatch 301 ^/nova/([^/]+)/releasenotes.html$ 
/nova/$1/contributor/releasenotes.html redirectmatch 301 ^/nova/([^/]+)/rpc.html$ /nova/$1/reference/rpc.html redirectmatch 301 ^/nova/([^/]+)/sample_config.html$ /nova/$1/configuration/sample-config.html redirectmatch 301 ^/nova/([^/]+)/sample_policy.html$ /nova/$1/configuration/sample-policy.html redirectmatch 301 ^/nova/([^/]+)/scheduler_evolution.html$ /nova/$1/reference/scheduler-evolution.html redirectmatch 301 ^/nova/([^/]+)/services.html$ /nova/$1/reference/services.html redirectmatch 301 ^/nova/([^/]+)/stable_api.html$ /nova/$1/reference/stable-api.html redirectmatch 301 ^/nova/([^/]+)/support-matrix.html$ /nova/$1/user/support-matrix.html redirectmatch 301 ^/nova/([^/]+)/test_strategy.html$ /nova/$1/contributor/testing.html redirectmatch 301 ^/nova/([^/]+)/testing/libvirt-numa.html$ /nova/$1/contributor/testing/libvirt-numa.html redirectmatch 301 ^/nova/([^/]+)/testing/serial-console.html$ /nova/$1/contributor/testing/serial-console.html redirectmatch 301 ^/nova/([^/]+)/testing/zero-downtime-upgrade.html$ /nova/$1/contributor/testing/zero-downtime-upgrade.html redirectmatch 301 ^/nova/([^/]+)/threading.html$ /nova/$1/reference/threading.html redirectmatch 301 ^/nova/([^/]+)/upgrade.html$ /nova/$1/user/upgrade.html redirectmatch 301 ^/nova/([^/]+)/user/aggregates.html$ /nova/$1/admin/aggregates.html redirectmatch 301 ^/nova/([^/]+)/user/cellsv2_layout.html$ /nova/$1/user/cellsv2-layout.html redirectmatch 301 ^/nova/([^/]+)/user/config-drive.html$ /nova/$1/user/metadata.html redirectmatch 301 ^/nova/([^/]+)/user/metadata-service.html$ /nova/$1/user/metadata.html redirectmatch 301 ^/nova/([^/]+)/user/placement.html$ /placement/$1/ redirectmatch 301 ^/nova/([^/]+)/user/user-data.html$ /nova/$1/user/metadata.html redirectmatch 301 ^/nova/([^/]+)/user/vendordata.html$ /nova/$1/user/metadata.html redirectmatch 301 ^/nova/([^/]+)/vendordata.html$ /nova/$1/user/metadata.html redirectmatch 301 ^/nova/([^/]+)/vmstates.html$ /nova/$1/reference/vm-states.html redirectmatch 301 ^/nova/([^/]+)/wsgi.html$ /nova/$1/user/wsgi.html redirectmatch 301 ^/nova/([^/]+)/admin/adv-config.html$ /nova/$1/admin/index.html redirectmatch 301 ^/nova/([^/]+)/admin/system-admin.html$ /nova/$1/admin/index.html redirectmatch 301 ^/nova/([^/]+)/admin/port_with_resource_request.html$ /nova/$1/admin/ports-with-resource-requests.html redirectmatch 301 ^/nova/([^/]+)/admin/manage-users.html$ /nova/$1/admin/arch.html ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.222471 nova-21.2.4/doc/source/_static/0000775000175000017500000000000000000000000016261 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/_static/feature-matrix.css0000664000175000017500000000100400000000000021723 0ustar00zuulzuul00000000000000.fm_maturity_complete, .fm_impl_complete { color: rgb(0, 120, 0); font-weight: normal; } .fm_maturity_deprecated, .fm_impl_missing { color: rgb(120, 0, 0); font-weight: normal; } .fm_maturity_experimental, .fm_impl_partial { color: rgb(170, 170, 0); font-weight: normal; } .fm_maturity_incomplete, .fm_impl_unknown { color: rgb(170, 170, 170); font-weight: normal; } .fm_impl_summary { font-size: 2em; } .fm_cli { font-family: monospace; background-color: #F5F5F5; } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.226471 
nova-21.2.4/doc/source/_static/images/0000775000175000017500000000000000000000000017526 5ustar00zuulzuul00000000000000
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/_static/images/architecture.dia0000664000175000017500000001527200000000000022676 0ustar00zuulzuul00000000000000[binary Dia diagram data omitted]
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/_static/images/architecture.svg0000664000175000017500000013026000000000000022733 0ustar00zuulzuul00000000000000 image/svg+xml oslo.messaging DB HTTP Nova service External service API Conductor API API Conductor Conductor Scheduler Scheduler Scheduler DB Compute Compute Compute Keystone Glance & Cinder Hypervisor Neutron Placement
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/_static/images/create-vm-states.dia0000664000175000017500000000177300000000000023401 0ustar00zuulzuul00000000000000seqdiag { edge_length = 250; span_height = 40; node_width=200; default_note_color = lightblue; // Use note (put note on rightside) api [label="Compute.api"]; manager [label="Compute.manager"]; api -> manager [label = "create_db_entry_for_new_instance", note = "VM: Building Task: Scheduling Power: No State"]; manager -> manager [label="_start_building", note ="VM: Building Task: None"]; manager -> manager [label="_allocate_network", note ="VM: Building Task: Networking"]; manager -> manager [label="_prep_block_device", note ="VM: Building Task: Block_Device_Mapping"]; manager -> manager [label="_spawn", note ="VM: Building Task: Spawning"]; api <-- manager [note ="VM: Active Task: None"]; }
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/_static/images/create-vm-states.svg0000664000175000017500000002420400000000000023435 0ustar00zuulzuul00000000000000 blockdiag [embedded seqdiag source identical to create-vm-states.dia above] Compute.api Compute.manager VM: Building Task: Scheduling Power: No State VM: Building Task: None VM: Building Task: Networking VM: Building Task: Block_Device_Mapping VM: Building Task: Spawning VM: Active Task: None create_db_entry_for_new_instance _start_building _allocate_network _prep_block_device _spawn
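The create-vm-states.dia/.svg entries above document the vm_state and task_state values that Compute.manager moves an instance through while it is being built. As a minimal illustrative sketch (hypothetical Python, not a file in this tarball; the BUILD_SEQUENCE name and tuple layout are inventions of this note, and the lower-case state strings assume Nova's usual constants), the same sequence can be restated as plain data:

# Hypothetical sketch: the instance-build sequence from create-vm-states.dia,
# expressed as (vm_state, task_state) pairs; None means task_state is cleared.
BUILD_SEQUENCE = [
    ("building", "scheduling"),            # create_db_entry_for_new_instance
    ("building", None),                    # _start_building
    ("building", "networking"),            # _allocate_network
    ("building", "block_device_mapping"),  # _prep_block_device
    ("building", "spawning"),              # _spawn
    ("active", None),                      # build complete
]

if __name__ == "__main__":
    for vm_state, task_state in BUILD_SEQUENCE:
        print("vm_state=%s task_state=%s" % (vm_state, task_state))

Running this prints the six steps in the order the sequence diagram shows them.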
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/_static/images/evolution-of-api.png0000664000175000017500000007316500000000000023441 0ustar00zuulzuul00000000000000[binary PNG image data omitted]
Ywީ^ RIޔƴk(s)uȮ{|9* 6[//+B@T0K#;#"1*E*HlVͭH[BT<Ǐs֭[nw!$h U n#aZC ]]a8|J&8۷У!>_&=caFL{6aJFJőkwGơcU5Gl[SE&B`{zȭyfncwݟW{Ȍ0ՏH$Yܒ[b74oam3 vI%MC8 0D^u ® PB`%7pA c/f*!q)7RN| ܳ@'#BDvf!9)mv#@ 2Az}Rr|l~G,VG]bHL;yl}o)'ra#L_'{T6_KR 8r$|CO\\(Z5_`HmH?U]u tj_5U,yӲW/tytxb>C^/:u.MC<K0~`xK-˼ ^/_^V5ZA뀏%{Bb[ۄPHc >SK7;_Pbs?ss0SCW)>콛"#yDGw~T:!euwMB玑K ݏbJG_$Bj:Z+(/`G˰6Uj!2񡫯!#LvW(t;(vbY0 SuCsq޵n\^s_oW$?"5?OR\ezftz-_bcЅJaddn^ﲭ9/S&`ĭ.y07>{s)2{<G6 av!@_f( i*_Ğ:*mo='曤Wo|oW1TQ47z T%9!ZWy+?Hҕh:=gnɮZDu\#7XLvb1̙32<b]f >(~*ֹKAu:|^;MiJSҔ@%9t) b1줵P(</4}MiJS*q\ɹyJ9(U7hAONNP,ܹcFs0@v`WԋDhM|SrC0gd9MKc;w$‵jʾ} k.6lpѥtd&)j 9ΊHtTZ͖HFƁp" #$*?Γm75MSr0فEKins"xEɖRJFGG9~8>UV144tK`~>rwh D.)abؘ˶m?bxxm5ONgk5O̸kabAaUPr)M~-mʗ¼F0}INq T*:trLgg'|7WuΞG>6E9I)(Q,fzUB„/rrֻ!a>7=H- {hJSK%̛eH cTiΞ=(tttqSouB.6}} Ç(JX3vxa~r35PyڄySr0 yk ds'OR,I$ ޔuFG?!{ƃy/n 0HAl830ѬxՔ KM/a-Hq/|ER߱@./?F*7 ̏`\w,f9v; cÁfI֦\SscLWY|| sd9fǩ^q)l&HJXX)0)4a@OPPRцTFeLᎃÖ %¸ֺ" 4ΑB[JTq-Wb\7j!7\ .ر7sLM 8w.adDQJH|dw2B5\/N3V5̛RE\FVL-AFK)%`cNj"l)%Pb) 7鴋m[Re#]Z6U4)ŭ?4mc{[)P&nt*lOeFgybp]e~~|>%L(7*b1Tj1,eiA5e)8sS'|Xo0HS[R/~Jj(ߡfKP y㻦}f H.s2?7W?`KuYMY ;DR,, O4,UIMY]IM~MR&y귘YtaZ Bu;"f蔣kɇF_!9vh#ݽpW Ӑw hVBfJH:Q>x^SK.Ckn)t ¿L&ܥ5?BH Q9CjmI|S`t0^~MHg+"eqPg"TpJbU)VKY\`1bF(c%_}`T_8RJ0i6 4еt_r# H/+<y4WH!@.b{[p=X:q4ȦUSS2W掳V]P 8|Xty25/vK8X4SVl})ԭafWd(QitKzhi2<Y F)QjsCş,]j͒T%1(94G<7RJza,M/YVFQL2THEІAT0U6B=R!].%z% F B`pe öo{$f2=koz9fא 5(D'8b!P&)\#k>k爟>e5Uz}#rnh7t}E.9Ep-8S_NԱF.#"'(2̫MӾ!l8o){nw[#0&3`V[*Q c%uST"eYH)Qeŀ.$7gQVZ5X.SvC )%z:. s6RIDH5O}H7ޙ0DljUv&VT9pWЩDgZe!޶4v7l]Eq@C(h8k0B|BFXG-<(k# \XH_?ԝ2(M6[k%Aա$&bbꘋ1KP;Tg(PvFu$'NSEqvOa! j?{\ו{~I7 0(Ԓ(R'wkeTMux5So]kV؊T$E9D7sΞ. R$VD\޻k{BA*Q4NRsO~@!;|YFxh/0[zKŦhOG-D+#9!gI7&8W)F:)%e!ehv7~f@uF?uigmuE,[p QxkMTg_|i\{y=d['>Oxi?oɀT 9]~ߒ͙Xvc_YL4~4EmJf6K!դVLeDZtV'<)([عV b=t!-m-,d%¶.ZNRHf|4Jx_HL`*ªQ$E7*7Rllx G(wv-̯:mXDwG~G\eo+A0&M'OШƷiހUR&2>UohgK-Y &7 |?* #7ulO'IU+tU–ڟZ?M(}A!+rhzǬa.RYM$Ѯ L7Gȵ>g7Ӕ-ɶwx\Dɣ?#$*תDΑ^EOlc2h%]"XO)-zhG1/Q UeD^PT5/Gj̅l=E WuJݎviK, LuYhs@PGk΁W3*.6^?(;YbZ8uKK̋GkHΖ( ֶ$'07NWejeYY%<?h>7ćߧ}ȓtߏV[2[)FQi]\Iϒfh3騋] t&7t2S X{*=vHaA-`/UG`k^{=/(L̥Yq S wp>JPpqb{_aAD4]~LtjAkD Oɇ[" J|gvs /JI(|e&P 9%<AFaOniT5B! mM%1s`a{(HӦj%h\W]V 6˾Z\nrX8#{IRޏD,CS@Mg:G(gkL%Ҳ9NEa8oYđ|l6.k$~vųsdn}B֤X;6]t)TjM ٞƼQc+P2ދB#! 2H l}lS-I Z~_fLt?B^bnkY S=pS!qT]C1shZօhRgF/S\`6Ã5=a:*5ʊBj>:k{ݬ%]U+!]y(c^rlw9@_n37-%ihP.hLZ!=ʉg BtÛ䬌݇#qPcxΓ8{=OPzK| 1e_vB pȚ|sXeWvjabOϜ%q`iOP(KF?F6[y!vmЌ/Uý^SirTU!-T-,VvKdLwifqj^k/p*uވ.Aەb{VÆ(+X%,5/*loSч֧K al8*j A*M!ecAzöqcA\3l(z6Fۺ 'O90qWOh撼_>ur$_QuɅ|lOPqPb)rR*Z&"z|>bgDfP B"})Lm~i>UAl V_z-hʗ 7 Lw %Lb6SqC +:@h[$8-GKUPke ϐ6аjx5a_jȅ-=m[H` ,kS0@<҂d[w_t Z`Fi@GRh% {v?wq.~tM=a\ӔWh :g.T6D]šeb<祿;&B@fS$LWͫ=qON p!wEoK?5m|ǣNC1ۻSwc`nzr/O{/tǽ"'|󎋓0l")r~,}qZ6&4=C:$i+ T2 /I H-\3bkQ:"9:],c6IEZ%:6r䒏RsŨx\-j$w{.َރo:حmٞݬmFבMC hLL% ͎Z˄JVd۞Zbct\PojKBPnadSϏlgj+ zĶ;Nz(Q"#"63=^Ltr|3/tɻ75 mO+zawRj|ejnNץ>``\bM-"$ڶJDHPFz|ޑm&mzae?[zBb19IEG꼸}-m;7E]U@Aʱ 6ZN-S}!,{1`7مY-xz6Wڶh$n\fG bz|/b2{nذtEASO{46n@Ԡ?j\5 ƚW#ټ6jnX\lڶJ:F"+[`GxxCsF́z6KCHRLyiB]awW,K%i#*70nYs~\hw83 ĠN/֭YK}wjB U}0wu0O$JN\.M?d>nהj_=ˎ*tmt*YSmw|)aw.^vw:3|oW/2ͥzGqVK38>vJCOSRХ/]"lkdЭ<=29BNVyt;蓓S_3.m?J|svT1Ў>o/65(qKY8Yfݮ%,νdj:U5-09 Хc/]̶f%+=Ie|C|8дfI/aUyfc}}!68@ҁ12FyDu?.\tw t}[Rԩ"릱w̥ٖIS ,&!gؑyzb3(r͕0Yy0?ztx<|FDzХ/ sIOU.i2ҳ( R] OtaHUu8$ _gU{mX,Il~%]̶ZQ0wN` ff3 6\ߓ\*RQ@BeS@)N#rft+,-'8L90A;ȑ33KR‘\_ 垝:)/ӎG+<]Խ.4k"'Q,AX(lMDO˂QQ*#qx;|: ;'ƚe(^*Y&G`s``,.%-V3gi[mw\/>Sqi ?JqP{;#VT M'3ELQ}>ֽKB% us %T>3^!) 0(Ym$lj/cKʡc>,O(_‘Is49N~8AO@2L;irH|YFF998D= ;ēe^A15N=cvBcJ L18NǛ3}M`@p؆WYK"UdSnTH[v۴ڽʉ.cY*^)9csY]s)8rdei֙(~k`<=t oÁUI!\9{}VBkzKQjiB>d\:~u|ܚbʕwurU\w}d$`i/^)um[_xlLط W;+nRBYC):6Ε)E| K}7gy78xx)Q*'YU*_qr@yъ%n02hr=j10MlOɵ^6%Zad hZZB[ELD`!/qkL>ѻ{{aZ9vӚ}. 
crnK/N/ T)%G2jqȿev>"w]]k%7AQ\2'BafʱuGHA5CСIcG<Cg ^dze!d:L x0EyqBJjYN(ڲ 1|ɋG\>ƗK Lsª!@* Vz"T?/ c":X֫ bWtۏoxjfDv #W ZK]{헄F&PL˗dɧOOhRNBLyloW*L< 4?3;^8ǰڻP.9j$RrZq+WnۘցDH [U^ 3N* rkyyvf-8U*u[lwyŹ:ѽ*k7z VkcDZ4wc,l*6v/~O\:;h⮈ i`xEZ_s%MoQچghg"喝VA5FŌa5bYA#;}wխ`-&W릟cے @(l'#(>E)˜8Eajm5>$xO~(NotZshJc=""u ƚ>]W@=C+cc#IS37uu|S3T0(^$N07(FTo3}s(i< q?" {_)2`Ь8_%mM%0Ҫ5/nE9r xqZZ]{(/f -?k>ŗJŲ.oѳ2ZRb=4z>M^YTS!I݃\CV1)-l6m.@^rP9=D o?34y4ajɡ"Ll(nK@BbQ7mN,`Yd5.;]tmvW}/i mO`"t~?I48bh\[ қBR?8W4u.p `̎8?{DB<[۸ݮw1?$HoQl!kv3ajncĎT"573?O1C1ݓm˄p蘁s۾BzӦHkt`3a^h ֢^3"}qqb4UpU\dVǞ: EQGZ"x9X˜@̀Z:^_8۰J-knJvw#~j!=_JX}$jf(_6C_$] =Žz鉒~\j&fu_ۚ'40* *kIwU%Se9&ʗ},F6%XJykdyxAf+ぇD<0hjՙzBlk6L tfoho<r.SM 3O+,桒hNMXQ*鹗0Oj#?5.u]7loZЇ>v߈~PBu_EPl_4_?Z7]~~A*OY&I&ׯU䛿IlBT <`m -AՑz0XE-ost7|.vێGw}v9Kgʁ?E%D:~`ĊuQ Wk $Z1e#]EV`{OvD E~u!nMTZСw̆:ۼRR\(uS"=PM$`桯Qha[ FD3:Z}i6n`V|:.H+İ.G%ª#a[WVInjg+byT:z39F>@B*MM:~a팆*MfvّTBm15Q{9^ X3c P5a}XzwR}SۼQ8X g|/5TV*V堯%%9~3k#pռdMy<X?2V[%2SU4ߓ.G#|t?v}96ع% +"Q,n- gj|j4X/=~Ɋq+F90lGRi*LB*: *1B K1"1=i ΃rE$-wY"o/=A%[hT2Gwݏ4gszw0sg f~"Y&>V{_KǨb2"t9tn OVY}oæ^5=.(R7%*ȤLM:*Bl[r|Lz:LLwDKI[:)7\66DagH9Bۯa^fy|KUkB8L7 Q|ڙ۱|W׍ E# ĺ"ȯF GK0=5ՄBLՋJ}T%?NQkIVC5\n{3M>嚺(FiͶk,lB6&wg)UΞ5xqjB4?shsI/X<>^(A~ 6й9b(=H(UE0|>aC4tc"W/ľƂ@|.U!3 Z P ζRZž{6Rs튢 0o~hÏ[/In]?M_#D)lE$rszw\V4sl$lwwp-™vlea0\Lտ%~[h {wXk^!;>G5@OQiSDIxDS~g.0_|Gwt!,]X.׊;Œxzؼ EqV$! ( 75;!mY}0K{`9cxfgA1&;(72~e+r kSz  vOU UNTl6)R;fmeԽR^C{V_v2mQliE9&voY{ݨ\.E/Nމ<ʳ[;rhi`^+̻ gat}k>E2ٺ e\Y&e6lYhjg#!Ys-ڸaU?-I6RsC5k<~vBM07MͲ٬Ph=I"Ѻh.cTJS+ÔS[/@Ydzi UfS2 ?Pv.-ڳ\ afײ,ZZX[0rU[2WX"%]p]E&oK6n4 nhN )%1\WzM2?O{eϞ=N\wXVWh+{Ѵ[WGdY.ErWG'Ӝ !P,MABNCM@wȑaX'_qׯܚ$,*GпTzlHv&#G7hFf 8vbߗi3BfxʘL6UU{n*A[iuplFN/Mui-:8>sTyvrɑ'0kc'+1/ٹtCd9:0?>ٿ4mfO156LM 53 04<P.< 186ǎD,M^J-Ϟm^t̡R>U̿O^ ۠lv<[\/[K)O_s+tv.2R>\b̡P躏{,Zb,V57ϸ4JgKw(uj14Z(̎5n#̥r 6`I8$*/KK Y>E5뢚ϬIU29>LI垹eY| ,+-]tv3p9'3S թQAJ?hoϼTر7qFYfӧ\:05=|-<=rVJaJ\7l)(ҜrX{1&7!|f)A\Z73넺cruv9w߈:ι(oEw (62b3;+9UR.#W GA,mkfEjS|gX?]|~/S{ M*5)uI R 9@Jvk̜yO I8>2v=FgŁ#GI>%PT4WOտ:W]g%ڶ89FضkcA [>+c{a]T'q|o1mdncfƆ"lǩ"p}#`2Q◥)-kYd;rcV6M`8yPS'Djvb.h !)'LTO@4E=s{Q[xBEmRAw .օP%{II[ :IUmO_%ս{ق#>@ULXe7k~ z: H a'|[W*39rhIg6lFxaz}# obժ{'03zHj`%rU&(ΎSc:V{\D:yBL,rSX{uau*AB[#(5EOijh(=:KQ( 2&Β=WT-wQb&Yf)-_!}iRAUω H>B)DVc*Mݸus-lF]hOP%2A|jIe(3oba߂ɹTe(U4[lFX[װ籧Qk[p?oC^B&"M]cPĚQ,1XZnBm Bo Mk[ dn(GKϣ1(FukH΁,vm^M2\q=WqH{:,{{\xH"%JTRuwm5eoوnb7bwg{H*e { E$5'ss;svd,2e2 gx",$Q _`ϰ>OmKe#oPR깇TgS^zϴ?`Cj;ZQ5ljv2o41@:h9,f{\J`3tmlQ'1yj~ i ܶ\Y%J=R#=*Y>Yc5]wZdzhʓvZ9[%ڰ9wGBB|^R$Fj,;UoE* |x(T+,Aܥ%̇pWT:;r x+[շ35=JQAVF1H :ܵGG.Hɗ-3L~[.j^t~b*$'Qt.lf!Ҳʞ)ob˘G.0:tF@~%.ΐ= z1`Pl}L,╶3xoQXai6NDzsЋ*~0Ȼz)܈(>uQPdQ-sw@jmorYUr#~; J*KٜeOHSU !|Ԟx 9ėFT5ƾ)0>UYD8oâ۱[ >"4tmӡHЯ˶+Eqd Pc嗝GvO?2vuwzS}C0# HQrUUv|Ir$9:;L2o|]+ٚge:H5v7fcFL)f,Ϡۍ@oUF08?)Y$tq=~OMzν+]$ID*:ʵ?E-_:8~6pZoҁQ:]d,5UR@їZ 1?K&A29.3c=f*Dn"Sc3t?M*Z#v:%[ܻ.bKskDU@ͬ:=0Uح0ҿ@.ߍB> cpQ 3@"\Vy.:J;^Y}ŢQ`4 t/1Jqʑ$ҶW06@cTbx=oYPS8\@M㋢AX"ldfifg5^'ea7a}FdsRG)ѭu<-g]#aHd37_,NK}3~YGɆ&R3q,/(w> E;N&$x{koVHfޑ3.ژCV,^Ϥ1:_۬~]*j5ݕj_觶+AUHE6Q .N `tU/H$O(?@J>[O )(r#/%:ӯ zD1;"~cY0PRVl=z.xyJv9KrE0gUқӬ]޽&ZGVTPe6cvlb^IJǑv${NjZZu_U#UJ<ŢҕB&ARseBw/O/EAR*f,ŗ ͂CރN[ez=\X86??i*R~ #츫:Mwu޻eV&HDB,k"uФ* G噚HgP$.}6?'Y%9I#CqS 7QUVJxOdd=6Q/`u8XZ%{ޜ']re.2#;-pSu)7dbI<$ tw4թL>#a<2)>$7TbQ,U/Q~!&w" PHH'`b21V;#,mZf#&/yM g3Jo I$@g)p" st6A2b-l1#?^XPЙ]8| /{J61ʉ'n˗q՟IȇYM^;v3G.$ Q;P9UEz&seuLSQG[[Ӝi9AgHkY`{>b*^R(gjym`Kb~)+8;ևYa>nrznYuQޙ&H:qg9dnh:Iajnl6x-!kv;U_UJkl̐O'-McR%9,G,7`kYBV!,N( F;/bU~?!vNHNJl:6vn=5tAUt8{ 7Wl& uxU_p퓏Qt}QNP4-7~ϼ,{c?F"ɂ[Eu z 18k҃ Qoh߳1{#+TYAOj]I!Ml}T"`U( I?|5G-dQ>n=9/kv{/J-,# * >#hK C<3MPDgr| }|o0SYװ "N.`=/^xs?3@$fěDsMFFYռlpc6JDxW:%+Z$M]R/P2:QA.$]J1%4G8PځeǶ(,فQɱ2:%K\H 6Lhns<N:уmVFOGdwUZBw']"+Rx/җ 
)<Դ_~4[FVuWn|ˁ~fj肨CvZ 1;D[Ȋ@L,B"O;u/=$#1z"iۻDW+0$ d|&k祫}Aɓ+L%uQo"]TzV3 |r UXɇ|dXH#tyFj 꿰stWy&.}¼=@mޯ052B[TUWn8AiEAf;iCKXdP5ӕ䝪k쾷fnUݔei-L$Lj*^GxzXZOqG0/b`5|H'ꏾYyـl4ci_qXV(iy;GSoSS "QIBr E='/6,p_I b9|m$ ;]v#U NT.`JD"ʯ&+ _ޏsN/=kU\,~!n0wTz7 : n3}݄$mg я7PF*`s31X_'PY$ (25=gm9>s%bIRFx@N6!+S$]8 IDATzK#Ʉ Wlx|=eob1PU\6/>.2: (t|aaWV}cgRķWfAImLs)i~݋NDcv1 {*gI9TFmz߶SwePrfި?CWuCwn*z^A\++)gXJ8 m߇ 8ЮX$HEszr{ fo3_fu~DT]G kS4wHIID[R ,[wsa^(<5L<*zcwwVS18+h9h@4Yiz/XJb{[ig0;J ?rQDhz+}xC]GX2~!Zo Ya6,"}Si8jmxVRc/H2X\z̘uZ=ǰSyɈl@G_Q҃^'Y%81^?RycC!%.$c | ~?yLfC?o$]xAw"@a'"ekeW%>ߎ׬w<$&46_5d, ^?P1q!-bHN]'{ %I:P,x뢈^'ax;nvF@o#K225q*KP yr;7(҈J_Ǯ (i@C[پ>[cܿ΢3!#6^6uZs4j:%&ʺ&p i 5v~π4;$HJ_ (mRަ( xY_[K0}&nEM{#;}#{ \.xͦSYJ3;ԲS:P 9~8&g 2,,[ʞ]*+SSJ܈ Ȫ,v"&k$>E"  {s~+0p.O_׆_s ν #I6gHE#XˏRtJ_B!}{s+-B f4 ΄;<˲,;s&&PȅŲCNgN#M9lrjŠ?؏KQzH5;=zI5fCI Kh %yZ`45Cp\UUTUE$Fy=.2Usm8Ez{{tl$i ho!K֗C4XrU<2 Ջ "ԴbPBu&/_ z#UݯasVV2GyTq$)ZDA3, zh(zL&&ć Wq8\L5B?JB한ix'9r9nܸ˗IR8 ԜkʻbyID{<¡&r hsLR4ߐM%P`dYOe*E~2uw^ @t@NGQVMx>%Ҝ;NGso2$͢o"W케~$H(OJ ̅mNu5p`_OR\vk׮pNs^E ̓Igr39qDicс͸XGj)[o$PLMESIqȃUNj+hETl6dnUUmyy9GL&^KsB1 1VAIiണaAf`0xtbX~RYYO4 LKHR;~z333CYYX dz qfDO})L|߾VQ5<<<׮]chd|B`ZvS(Ķ' 3u5i =11^ t:|Ȅ$ 65kJU~ ?EGnܸ+Wp?;-}DØL&1hm.]DKs MMMY{'J,#NkVd佰:n)))ٗ Lݯ42e~._S鞚]N"d2{o@<ҥK,,.7&>M311ǟ~\z֑dsk+4ؤ!.]D2uB>"M\tA&&ӳ7OJ67ob2q\5OOOH&9qz^ ӏoh4clu:, :Q B,--!I)qn޼,ugrʠIW-nܸ ,SS[X@?kƢ SWWGYY& C2("" ",1yPb.`0o!)q׮]X,3r=jzzk׮1;;l6gA#51>>JG AhkkpQdTK4errUU|= JUU&''z*\OM&'tUU?&jkd$:;gbb@ @C}myFUUBjjjhll|lIp:b6f=$tAA&AQ|>fY${e9y6U!"/vw?5h4ʵkr |mmm78KZIC6erj`0Hmm-2Z-:נ 2x<S᠄B!._CQWW@>gjj χݜEOEaxx-Ξ=Kuu5L&dPÜ{EP>}UUIӌ099$@GGc*;v9$Iz9@$ehkkA{d20+*4I 8l'^W666ꫯd2X,9rn@zffM xGt~^/OLLdhmmdAÏ  HOOS#fKkk+---.UUfeuǎ=sXxBeb(ϟjf"? jrrf|>k#$333NA׿d2aT,&Ȅa}}p:@4h3A,QG ݸyX,K6pOV+'yp:  4l *h~r]UU!JJJOr_\\dmmv_A?/KKK34weB*A>IfMQ L***AÏ 1>1AEy8(SZZx=ׇ(h;"LPԠaemmh$X?+d;EMO ߺDWWׁ 311ACCv@^/嚾!dYfjrJyy&'777c CQFGGm [[[lnnԤeHeZ'г,}}->iZZZpݚ#Aß'&&ZPk;4;;$TUUiyVAOw6}0f-:נ"ƕ{d2,8y:~:0TUUim<ڮs  BE&>p|ں:M Og2q\Ԗ#r##~ D" }O+*(z5t ֎h4J,ta}mvm׹ 3yѣ{.뵵5(Z),,,0:6F{{;ھD:fllmk 3Ǽj:weIZ@說H&Ѯ]wJnh:D<^+ 1C!666hllԮtBWҙ {zl"444PZZ E?|> >OF*Z&[۩i>eEޮeF4hx֖8KK464h}UУ(}}}+*+03==Mss3C bI|>Z^ۨٹ9v-zPEQ:ari5hx@lmoJCCVcxt:5TBWUɩ)8|0^W{,yiooRY4<nu[;N5'xm2x:x }{{~j먯cfL3L4< ( kͻ{B4~L\zNOOF҆4<Earj ^mlnnI}]vÇq:b38###TWW4kHRRWW9cj(cbbEQв#4<` 25=MPQ1ba45DBOܼyơC42y Fik{{gLc#Gs4hxdYn} :&''tڌJ芢022v(-eelllgysehX&-هoqqQ+mTBu`ll6*}(-,3?7G[[v] B,Uh)РSx^::::CW F6wh"3;;KMu5n[OeAϟ}wSUUM?CȷMΞ=lAگi]k;5-ikk?M.=#[Ȧ&' A(S,'`0H QQQA0AbcceN9s׼B BA1&`0xeN*d Ed=F*++osO]UU8WTz^ j߇"7Zۨ*SSSi%UU Br9M4v~W#2333\{v.--G75PK(byi鶝|{2fM)) qD'.2r9kJ>G$КP}H$ W&I>3"8:m!gTC &#mCAo4[ZZ[od Lʖ|UP5<Ә$UUL&ޅQamx<x6I3 UUu:rH$MK3AEmQ##uw<}}}@ww6T1X,244bAS `PUU d2RUU8H*,..ݍ҈1ť%VWW9t萶\$)*++q'ڪMMMtP]UU677IK|Xk"y뼶 -:נ!B8E#=ƭ^1 ??kMUUVVVJTUX,"2*fY{` r9Z[[5AC֘WC vRbEQX, {B,..rQFm޸5Ad2ioBfQ, IF$INSNx4iﱂLMM҂P( 2`45נG /,Ex<N:ua?"* z<B!677Fp1f'J,cssbɴ3_6vp8G###D2 W^q $cGR]] J36G"#JvNss3>gVUkI#-u%U)0?8@觥%(( SCMTD /1^mN6^*T lEzLck43lnmS(X].Z[[)//32WU: K$TQ sPUt*jA c~l]GeH@dBRzM z=f^ &LjnxI%Rvb\>df}cłd@s]Ȧ hdY1!a1o,BTVf3$eSUl*h{<ܧI&2X6Dn&b $2vL2lǠ6_bXX,X{l&U5||DlT5NDjko%l2F4$_PK2+cWfE]s;b1_8rRjJD}]`ߏnc4Eq셪X~DVO2g.ss~*[o`4 |Ʀ_,wRs̭dVR.qNU%8'\U-Ι ӧNQڏ+}10EBpPYYsq__c/^z*T-_oNz UCɄ<ǩ ?3+4sS6*1>ޢyl.BIrÌ߸8ٚCvG|gi=Al }_e 磬;MזH%&4ry҂Rr-&%Ե41,X*zI$brdd?D&PUb~bFƶ6npU9\Rى,J=6L&?AʬMy?njZ2q2QA!_TH01FAg_@UsyJl 6{~P2;1E$`PҌaFƙ#bwd { 喎lMwuT[++eP 9R4  i#WUl2A._DXvbK߲yp[ E`j|/bO{$<*,~g]8c LU#LTE)dI&Ө-6LFݎL2A # X6lo&jYaI'e`j3G*\T*jhYcrbo9*+}*N rQN:Sa67M;ٝTQonJgI(nǠGF*5,x`90u-H |T9b6-Lpӯ48imFL*Λߑ1݆$J."Ɂ(a;BxJbt:3bsXU$ 2΀aCv҆Y"zl;AmK38k!DflR zSwQLCpTE&LP\eD"W j,a}qSu%V֩U߿G`Wbg~t xE+\z 
.bʚ2\敿ﰢ5;̬_@R,\D(6պbQQ I})3enXHkc7-5|5>,uWeQg9nxH)H¢dܸ_ aGY* qFOYgPUJ쵋\ Dy ]Y{IYÔ]HEf7eIo+T|EYT̮2=fzY]"l03t9,Օx &~>M oKGLxK^d#BQ̫/S0Eꪪ8te8ֲF^y5~ +;x8v5*yֶ(I"#} BY%vU'VwΛ@8Swd|ѧn%@? O]Ͼ%- rO^Ӌcx}>L d蛤 ڞ3<Ѿ$qyqh5#.# 4얛-\̈́O]`cDnL0!z'7fmgEn ReQGiΜ@Dꘀ즪:@;aj~tO_cz!_ Agl9PLh_;?\׌,|R_PfS7>W5BcW%fytcD2fY.(k=87/2us,./fڏƣK0;F᷸p 5,.0t_x:ia8]m\Kx޿t-3#S]d֦X{fsjWRb"(E~bWy~կx7o {:yQk3,ljjב9U(bBQ^9x4ZLN@)LnHίqIIvmEWyW 8JjV>r~őfäA^J!WYyWob\p>ESC. ]~*.8r(mm9y̕g&h3G5-.!7yn;j!b29tݗF5#E^}]-׾ddbzӿk c1lz'(e;/8dmzX1C 7U.0̄\k9]ae)V‰֧/nkbqqi-ct $YjNX/r2mXB:3_ec+j!DpzGb{[A*J7) ɍY.y9T| P߾?F5, HNQA9WE8` V`@!W(Q-O> /k`5[c [ۀA#[m]ˣV_ ^FG]@ yv/;z 1>kʏOaPl6694eKc7MBޚ!;9O2:4.lf^}6~~@o#} ܼF4/|Vi4Eپ˲Vȣ")m(IŨVSPfDA\%HONQbnBGl zq[˪q&ygv?`Nh5j;c!μs}xՓq :s;<\hElc=ʭ"o)F$PJٷax:ͣ H*l D݂E4 2:9A,^Bgz{rMV^}ED) ۸M7pYd%BipoᶩRRsX{lJfsy9F{۫{d "Yqg>*JW&UA,J#{\]؋gHD}h5&xx#%~&a&@_bf@M[)U uW GӔkH 1>^O(fɑ^ x[*Y29 0;&䙙'\=Rt jhBgaS7jT7Vq:"-A p.N/etfl+qxKP43Kq&]޾j iٍes4|Yznl b+YrTcŅy&LCC7z .+6x9Y,+od17;C&a ,cNu &qԃI6;<.n=`Al^ y&[hA.s_""3/`s< K s`sC:҆+gRR;K:_,R zn%rE+/()n^^#Ԅo&[X'ٱeJNp8w.{YN&6ky0 EDu ,TՔ#޲}<[ӁZ)o;&:D}XUb ᶸq`q)0u4@OM$EIfQP$E]x[ 8Q4P?.7Li [X}8A8\ Xy3 ݈ƃA=_'~oI.b8u[X"w<W$jB4v)(4yāg?0[m8ʏKu\N?ơʉFvR "!M'_ZSq8TWNjs564QV;ߩOw7ƁtѪ$Xlv,j1W+('!H26Yq6W. xʚx(E3E\B$}5Di#P-8&K$IWWW%R^j|m ZlKJ ۑВIH85E Kt4D%X* (͆Ţn]PV3r:]74?@P`vSuKhIfCjnיzdYa_C6=N4 j-j".`ٶ ۮ%u qX-۹w\%Պ,5){l)ۦ`M ><-OB ɥ%w?&H|)+++^9-T;E~=;xe_xY~>AۙSX?(b5F)+*=Kvje7O~|d0dhfǶ-?{I]?6==|?Ay)-Ze 8?G={_Y ?ʏ* l6AIRxy֯!4Mt@?Ǒ\1&:g|Mf'1,>)LOLዕ}f$-+ⷋMBM&Z7xյa~K I2 M3"nk躱:Zȷ_ >¿X_IMov.WIrww)|Ȭr?u{A>ܾE?51]9)w /;j^k 4Mpmf>k75INa8؍e.}rʃg*s703t~z {AZKePdf9s=% LANWHy8qj?F!Გ.k:KJٜǍ; Yw&5; qZQ5 iz_!mЁ2_u]m4 ''GX,w^mJ +膀+%o,19*$B69@YI/+K HY)VV7yCbg}gyvZQqw8XSj:9Hg1p"^`enl!/㴂27jr$@ FأaJ&w75ޯrtRJpeL +ǥݼh]%M 榊iVhIme{Ab%OX8L L5%z`Y\ZD90H-α7REW(?HsK=W]$="0s{av3I3'J)2`#"vU$Q$q֥#b;AD% ]0LDYFz"BϦcôY@qByǴb۸QzY444]@&Aox4bD[7IE0SC9} hhjȊk I!454[a2,b:Emo.P@Vԭ$&ZACV"4E>$y;vMqÏQrHc~7"HE/ ؚ,o=9Q&aKd+D 9Hve{4$^VFƦ|pj#ŚeDڣK1M1%lħ/lxl "Rlc&Yl&Iאb|o&ّLA˯^++]iOlUGk2?1(/ N0/o&9m 4F]wFMp9}&I..Ǧp{sӛH4瘹 IDAT Xn\7_0;4=,̌2 0?WȂN&/p7h|aM[\L+zxOau80t['ZYe~Ac~vRkQ@6?2a^Ki>FkeR$Wk M^Ƥ]cmq)R|X~V7Q ){-1>*Չ v:8Q;U,2d9jNcjKzn X@Lgh4BGCܿ~tC)S#\vNnc&S]_ K+8 ~GEիLLJ(k9-ȂLm}f+@3T%;Й%6͵$Txí^76UGhi6KW_Bu9s?ǽ4 ܾ@:*9h  @Y-GNyJ C g0S'ɍwс2Qm1hec2Eg~pU]t-r~`#r q߶@,q((ϜDX=R9ʶ#4,6xu]lzTG ZKHͧ_6exyiOCgOp4_1 i]=\ƣ4[|ULQ%^¡-H>e|zǏE)+qs=NDEL?W vEB,/_ejvHu#GOc v;L:[Q(7;:/q4@JX7;ﲶAR픷!L,XkP_cƇɩ!~pYFGTU3 :Eܦ@ǒ72/7?7hj[jj+QJiz*4+kxl w/o`xVy?ѵjO~sB1ܑֆ:"Sv?ɩe?Z`ej~nE[C`Yrյ6"ًKP][x:nOPzHts-un$#i9rcyO/tPg*s;eI&)o"Ya=Qvʨ]J>=ȅ.'3qj\xcVӅݮHr;+ƁSǰ䦹~*\|Crj#uO/r?AñӴ61u?W BC'hkfez #KT74Q<5K \=%=Y䏴Fnm{~(br{XC=yE`zb%/+"wΓ*X 'h Ex.Ajé d*"xoz}dJJQg8ߥLnx 㖫B6R`e /VTxM9UXMmj@Mί'W.Fqp9,HXH2*J?$~dwQRV7`MO勷~NuK߱nkY_tȲB)"#nI SxY%vCu)UV఩.?XD-!K8Ur VnY*y_y0T =Q[.A@uz8mX~BA(h9vh@e fh8##V}M$)Eqx5QQ[`m3cb(xvYuR޸ƖF4"7,0i Yaiu DBZV&Fa6 'OSYWKۉXL224MIah8rʨqr A#8B}1l.dwu&G ns4`5VV1 O0SD(o_`rjlj*jYb~5)9curcAV"!6秙^i>uNVXfv~i ~B0*8}A:Bmcu 6ג䳛'CŔ*SClhJ!Jޏ\g "hX>:C#L bVzTƶj?{c7W?a Ta|p"`zNibUU<n$$+4C8{ J}5T4`azt~5 ӆ!Zpl bb`OM3mZ`"p9-S{١AL_w,?kTWDW2k2h )l^?F[݆4G"?By]#Zؘgu#'q *}, Y 0x8iWGWï81\Rko[7G 0,2Pti4MT4WND-pwvXwOq8@ԧ{=>:9ik](edIoT:Ams8(ibbQQՊ$niVBR-8fݚA&e#d]e0WSL$[ݯTCϰŐdvbG ӛdswl lVi|ՍE0k!jEt#E%յ]2UVe0 Sg#Z]DEi aEV`31?A0 H/Q|@;ULm1F2ź4Mdqkq]n}.RuGۂ>,Nc.v^epu~Bbe;U!ϰZ;]L2kd6sXvDb/̡HRV0Qƃ |kDM >ggi!]JQZNв:@V][.5Y0uoy#g~.[K۹Whi(yz\ҨoG͹uDm&AgfjB9E˻,-R0@pJL<&bgsjMP8WBnfy@g8-GL=C~aŵenv9Դ$gZr R(dHӤǘ\Шom ^QFz~٥~Z;}d-!jZPR`k+IܢV$P^$r p@t Ahɡ!-e(fVU54K2k8 aDL20@M[3J+ܟ]$nX. 
)vƗ QX[]Eh<դ/A _EiЎ$* {BϬsH'W-n^'cm-CG%JNd*U A˲Sş DIc[yS("l$Aux@gvh%›o5R],$':Y1p;.R3~+Zv `fnC6%Ә> U8mg.fQ{vU$Z!kKE ?+!:k+Ng2FA7+KJIDĠoX^lG Y ko=FIm [s3T_zF>&S %X_[`(Z C^[mUҚ#ؙ{ <ē3ɸr+:V`\`/?KEON0zҮ4'#H5Y "\T8T kjlClqލ{İLs7७HHǵw&5p܎Ds7؝$WCkS q笲J8x(7owljF%lNcw<]5Ni>u6`*ۈ >IG]~L]SFkK w/sF78vwXb^`zhk5ځUE$CGe\{蹦":D- %̵Uνt׊ k[imHhgL[XYWpl"dӧH~rL[ɑ#{qbBg |,6>*0x.$k: 8_DaAR\`%f_fAjK8Z6;Da U5OX46H֖˧:_]b@#QQr K8ԇ7na!n3.z[&Y4bg.x#3T:CB18}7o2K9ҹIaJ߿ho04`"?_a$ : a(8*8 <:+ms{ݽoLPUGh,G VPQxD(ېIs*[K JES3vs%^auG8vU~MܹsNR(ƦR{hM tsOqiƦf(v/!훨aGY(uMtvq]a7R-K{JXE2Z8l}wHL-y +Xn, : cCܸp9dZ*T'߹O/ؘ]yǛhg=.|``/12Ɖ7~V]Q&^߀2-I_!΀qz:;Xc=oІE*N7[ۖT"Vۈ#:?D~f?C[_VgzV&r jx5XB2s!?ۿey%dq#n{o FD!~ikqr71\Q|NW ӹ95ͬ-v EY* 9է? VWU;h"h?I&b aUe'_$ZfQRB8L ~@lDc!J+1ޯLm~N#Cøm{µͬod%w0a%AAsAm^ Kdr6$@Q%ڱ_TJ 7LA$\$Viߢ,&\G0B,v?1TYMuH.,3)k>āM(h肕r6B6"^A0u4C TVeid!XU O˺PRQ?& M"~.&5Yg*iOrqG#g'"6,/':s` T`e~R-M &x9'`,Vȡ:CG%_M`uaFHU-(hr"0b!Cre ?Fvd=%@q{<!T,`Uތ":}c2k,/#4l'ۉHPȐCR˫((Ϝ4yfņF? *  %47)OhGX܏$ɄʱIP__/ũ.-a.\NI%qVPt5 ӍM+kX|1:I,Ǫl6DBFDbTH.i:~C=wQ}T#Y_^deewvode~Mvt$ua+ r" v|+HuZ[ }hIRjbrxD66628av Hba+$8zVm$PP[Qɱ<>/vNjr;Q-vn;(xP{OR,icxĦţ}W:1S`M)OxmG|, 3ny3;BO~ԊE'_<^5v6[(}=k> zsˬ5GtDq+{<ءccxj @1(K,aל嫅o}sg HMa%Jծ _h2T;n OD|;S?OxA.|'뮿ŧÒϢ{\|ϵlEbLDRxrrgy9QylB^]w(>9'_uOL|Wa9W [}{,%@KUlʟ!X<[k{{/ &<~mwV!3gb3[gA-w 4I/MrwK6)G&}SklT@»c ͮ_OϮqS>;]>ss'lczk~b2|or7M! )`umkC]ֳ.BM7o)M q}Mx] MMwɩE dU4֣eё$*y·GO&?78ٜ@;ug{{{7o3:r9َ xQF2r)ƧD u;>.L }IDATqw`/ v3:BXb@j}rVw8Gϝtc```9sF[xRq/)NITה> ^duM|0#GV}g_9bVy0@Z"3Tvǚ\55Fz#t_5-;w?߃=.3=ekCp Dأ/n X YRo!(v|Ve- Z>Cj-%,!3NE/{}.S1WqQݻ;:p48=؃Df}]H^B!G/"AE9]== yB$6 f.K.a:$ MtAƪHd]ɢ(ͦ](4u24GVm4 c%Ï,m9Ae1V#:_%x[Fjɴ=؃5O~%q)D8?ʭ{֐ft:#] UхYe-,r~c":⠼{/a/32m?Kx8-XNe^#bbd ͡q>%M=bM6b=؃1l wS̑\!>  N=ξNO &8{!r8N\>]TTWVm7$ vϼL+<켌L#O35y!Vje,QSqN&2t2 DZN|{OvKKc)#{ $# w)F%{MoSs/ i膉()(JQM,jEr,oZ0 ?1f% ?kAjB4b5u`70gC;MsqLҦ/%#4f$MRnRٯdX$ݵ&mק6{ gf&__?(p{kڔŐcrb8{=5ί,إݯ<{xqvK)v>c: : i㔛 6i|~WUwTky`ZlLzm\.s>K: : :: : : G rIENDB`././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/_static/images/nova-spec-process.graphml0000664000175000017500000012701500000000000024457 0ustar00zuulzuul00000000000000 API Cell Folder 1 nova-cells nova-cells rabbit cell slots cell slots nova-api child cell Folder 2 rabbit nova-scheduler mysql - hoststate hoststate nova-cells nova-compute <?xml version="1.0" encoding="utf-8"?> <svg version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px" width="40px" height="48px" viewBox="0 0 40 48" enable-background="new 0 0 40 48" xml:space="preserve"> <defs> </defs> <linearGradient id="SVGID_1_" gradientUnits="userSpaceOnUse" x1="370.2002" y1="655.0938" x2="409.4502" y2="655.0938" gradientTransform="matrix(1 0 0 1 -370.2002 -614.5742)"> <stop offset="0" style="stop-color:#4D4D4D"/> <stop offset="0.0558" style="stop-color:#5F5F5F"/> <stop offset="0.2103" style="stop-color:#8D8D8D"/> <stop offset="0.3479" style="stop-color:#AEAEAE"/> <stop offset="0.4623" style="stop-color:#C2C2C2"/> <stop offset="0.5394" style="stop-color:#C9C9C9"/> <stop offset="0.6247" style="stop-color:#C5C5C5"/> <stop offset="0.7072" style="stop-color:#BABABA"/> <stop offset="0.7885" style="stop-color:#A6A6A6"/> <stop offset="0.869" style="stop-color:#8B8B8B"/> <stop offset="0.9484" style="stop-color:#686868"/> <stop offset="1" style="stop-color:#4D4D4D"/> </linearGradient> <path fill="url(#SVGID_1_)" d="M19.625,37.613C8.787,37.613,0,35.738,0,33.425v10c0,2.313,8.787,4.188,19.625,4.188 
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/_static/images/nova-spec-process.svg0000664000175000017500000013200100000000000023613 0ustar00zuulzuul00000000000000
[SVG flowchart (Nova spec/blueprint process); recoverable node labels: launchpad, create a bug, create a blueprint, End states, out of scope, code merged, bug fix?, idea, REST API change?, submit spec for review, a feature?, spec merged, blueprint approved for release, spec required?, add link on nova meeting agenda, blueprint hit by feature freeze, re-submit for next release, blueprint unapproved, apply procedural -2, upload code for review, remove procedural -2, review blueprint in nova meeting, no, yes; SVG markup omitted]
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/_static/images/nova-weighting-hosts.png0000664000175000017500000022303000000000000024320 0ustar00zuulzuul00000000000000
[binary PNG image data omitted; PNG tEXt comment: "Created with GIMP"]
p0pc*!p\_4I{ B C8NqN8; C8|&i/ p0XKMQ2 8@8 K{s8@8I{Cːg( p\F!peH(-pg+#=} :#i/ pq'ه'#CǾt_s۫_N}7qGs.wi I{C8h{-Y&&BҒ8_"p.iiJfϱ&??Z}0M"u<[o ܑ2'p$m/,w&m#ozs)?ҝw4Y8;-&Gܴ)5Ko=i굛e~/} 8䤥TYri~"y{ byڟ9w'#{N8NxC8DZG&VtsS|'DN?=KGA8!q8‰O Y8^$E8!q8 騒s(!pC#pS!prHڋ!pC#S!p6sBw<8=7'ed@Zfڐ4 Z 3fg7~~מ"25.ud8!q8m*6o/$GG8!űMexMC^/BhNI-g>B\wHdE4q> 8|AhN4x=\+Y&!pAn*eM)ZEc/&GN8c!pm\nZ"p pMn`By ceEhB 5q|p؄:M%85Mj}m$pc/ѵ2=)g^U=T\)amho{8ܚ/`~6v8;cn"cZbIͮyzqY֗8%)C$uȴ<385UEl3vG'Ix;=w FT-є5^3sQysЭSX0f4%?Jcs}]G?xg=pxi}w.E!p5iuhSmNS;7kg?7>֍S+h.$#p]nx˭."nP~tQ \hQ^q+ѫLkȱ1;:Y+\c]tAC8n v/nJ?ZOG|نVY}mC b6`f`ќJf;%nzNj%Jq;n|9^Cw'|I~w>̍{K$`c-p˧N)Rg+ _DRJF&T=4ݧD⒍m٨8]?TTM]7 ,8[0 70ȼ<.pI&{ jXϻ>:&C*IQ~pi' =v3MxY~EX_ܠFZyMV\Ɋ ڞ~TtZ{j.i7d>[i+/yilBO/6 7QgC=HRG$1lG.t3& K8˻6Q9<5MoH4 kO'?hD'.  _)/˽{6&hGM 9!\j 7Z4{sSFɛ]= \^ q֎1Vgf#֎#)'&vΗa$$=$ä"8}3FB{l4=_CWX~ŏN?\wMUݼm:Pň2.cCCAͺi$mGNE"JO}j2/&%M[KZ5Z 'fIzh]j%)6{.kIe_n˖󣽡h} @F"pClBuIMX>OBQ4tqY"նoM9% 5}jGAH \:ǹ8:Bk7W;6@/^ky5t}k.A4Ku¯Gg`;%pk"PRӈu칦sPkz[)ϵӈlٲiW2^JV K7ȒY!p}( T;SqM~"P h}ŴSd/5o{sE71d_I()AJ!}#ceͩr~K42Y;6mܱ{MȡLIe\ۚ^Y1 M |nu lbP\oB6RPs?tNjtzct?Cǰ} i)Nsv 3gW$p"#p f "p_#6!*~Iuա=iKLisD- F[?+NEfԁK78/I2)O85߱=)GD$č"Ljw؅dI>rX:(AaN4!$R p4C 鹥N8ӇͥX^lr(>oR)PscoSih6srIlS$5}gG&vJ8V=?A/pXz/t"ycr!p pCS4 дM 1G*IC NK)-=*Eo~6 &ˮ:{ pU3i^$%ErmʵzٶowF`h7L[1{@H oC:9^{j@7h:$NqcMu.n8 bP#!p9j"uE".x!p p,KsRk IjlWN /͸}u"pMY?IBe۽J*<8='p˒,4'[^#wv]nٚ~6N$p~~^j(#pC#pc)p:9%2Vp9SJZ4T*Uv%+O~Ύ:Wi ?G+lY>zqc!p)˵'v]ۋ%7qv'+Ez!t_!S1Gk%#8!q8PG)pK!p{=^{ٗ8'#G8X rT5P;=^.'\rT-&tϒi3h鶚K\ŕq:^8%7M1gRO :+TCٗ3iZ}8!q8QS*[n"NQ;}٬iH:8+uݗ?AL/<8=7K}Hj8/89.]M`:˝+nn :x9(J4"pKrjϱf&dt_qPrILkx厛>>G? :? C8sx87 .'ܹI.p}}GNm_<OC8w&ۑ]F/t"V.p:y9םǞB8[NCk{%/ pMMpҜ!p/ps+.!( pH{ght5nj7*;}͋.!i2%-U4=ѽ,i e{ρIõ~C 86߄1g 7ZK?C8J8;\; -< p7'ͫړT~er!p!p!pA'H!p!p8y3C8@;OʨT8CC#p˒C8@+pC8@\\]=!5R8@8C8.+TqC8C2"p.'ܹI(p!pMh+p85_C8! RmGtΞᜁn8C8=#"y!p .;wPR9oC8oeDNJNI+ppB󩧞 !p%Eͻo^}#p!pIݵؗs8n$6^dC8k{p!p7=\;"p!p\FN[99C*pZ5=!p@q8}}ӟnEgy4"!p2wv z<97Tѧ|>8qC8D^*pBN8@ DBR!puY[y78CpKy|8nh8|8C>Sι7pm/7Ji!pu}/G<7p MIE97pӺ6&B8n=G|I88Cz:q> \$&i*!p=.'9  E%?%'Є!p|4  p.r3S!psiIy!p p8)%8C"p2Uj5C8.5;9 7ț: "tRJ C8~,b? p@ΎFѧ"sA8󼾻wr -@ ܉'V+3ڨ!p6 pܘ DDz-99kG/@TN1< b@8K9'7 3w00o* RZ>!p@nF#\y啟߼yM6>7} Š[7oٲ傿e]l_9cZNFJ7k]}G8Cz>ˈT1ȬEK 1331.K/G*[z{m>-8}@Ξ?}A'?'ᵯ}"yߎNFl4/8}1 C8.\ 2!p"<lr/ȚҭGoٳǏ_kwDJ'El C8K 8`,E܈}O<^+++wnݺLC!p p-p.'9iN圃\.{[ltN:gle~ ۔biH⾙ Cos˜smذo{yE[iwĉ־5tƿ뮻n:yIF$C8@_$ᜃTNFJn߾z]gO&#'m-MgH[GyH@&_9ZrAj'?|TNNOSaבmؔv<ɲ2]~R%?]]_hX7F@88)%8 ^۶lrxR􏋋hYђyϙ|yt;S>%MLҧMiTqWfY8i%S%M"1Q#v?>coMm''$p{h(N% FgDb(jh+Kj%}6mWc+RC , \*x3{jie 9[ ,X'LW[_"^ *0i Q]UHJ6v"p=x0Μ9TMkN埡Y{GDÊXGT#i Uqϲ^rބۄڍ٨,/͠@麡"~tivW۾vJc*M4\or=5] ~2M8nNkA%L{5o:UmTt_ĹfčZA,C8FFo'H mSdsѰ_>|ͫ LTn"p pFj;$3gaBb?\J΍lU}fN#p  `@90l[pѪbH.h{BWksfCN _!_7=?҃!pr= ]CZyLҊʵrwm$o8䄻M0VpQ p!p^""z~0d!L~.jC&V91|Y?si=lʀWJ`F8`:سT=\} ps*޴F(Ȁ_BC&õ=!p8|V>0ꕽ*xӋ*1)s#{cwߪI[rIx~ {[hBOyh pc#p/_eN8n:PKF*Fƴ"&-!V4/M;Vn(8}yߵn=,7~\B%D:#D$!pEom#I m.gWer~=++yR;MpKѵsC%m!Q*y͐K WFLkAD81vSInzMkZ-f]ҡSw1Aw(֯k-W?p,K^85r$2eU쳑$On[ŀ)cdۊr@?!p*)&S`}7iF iM )@qC8 #p]CCFD8 #-vDKЈPzaK`3"r.an +g3]Mo. 
W\knre?ݚ'?݇08D~E}`:@Q;,׎uMpq ܺT^z7 foa/Fz\ Y i{>x& :¢W̾`*鶶iTCuL%[!nX~KKtT~tэLjɖ`k{w/מ@.gZ px=$'u*qO⋝\ޥյ*L\;" p\ZWtѰ7=Xʦ1T;9 Y p\KrM!p!,j{׊dõ=!p@V%7 pA2Hu@[팮.oI _҆ pxmCzKRvWf9+.j48 #@^#`TimTVk2_3mƑɢM\|?8I@*2*p~Ky侥?Mi_I؛ E\ݒWdWLbޒMŅ&u˗#u?*脳A_k_D3;ŗ]/BCG{Cs۫_7qGDN[/qU'Fk}"勞^qrXtSWt4B\g;|>O4b !}l}UZR'dµ,C\ƒF#_dɫaZJhYhKhUJlǟ^ @,r ܧ!pܿ/Vd̓eykeBӺn'l3MJjy'K5} g=fڼ[?ׅ8@ 8 .Y(֏?[)e5z=ѓn :VBr& :_ewM]'\ ڢ67秊=yf'oz͛gE p0Q{j" șHO͎"jg<=0]F$LI5g?vN'-#h4yr6gϘռFE7dRZylgA G͈)tU/O4O}="4"d6Y i+.2$Z:Me+6jnNӁz]ƒ\ p@H8'oʏw'=*IF▵"'k.VDԴ֧iM.{Y2U[G p}kxN%ITZn?}s}FΏMGFJ1˭ۏC#|kh<jB{8~MqҴګte pN/מ |(uNH>[)ҦJګUrfQ$gbuVNu`0HI8DoXJ *V*Gqԋv/2HU3q۝h9̴I b@8N%=1WRDlXR34CJ3O+qq42'ےmE |Yn(.~IJ,֢Il|ش !pɸ (sD(B7d)o=]+Y3yBhrUL~K[{ V ~t݊Nfy}ُ-gߨ/ p\ % 1 RrL%Z*:TwjEtb pyj-_ "QrHڤ91t*p1 杘j4ڊ%]\y>%BǤ~?`k/>k1.Egͩ +yNi m Hu"p%;xҴlL4[ݿ0B}5eUL_ӄ`G2&CElM8y:?:'1{~f+ !pU#ju#ꁁeoz$팋pi$oeJ&4[>4ɵ-?biHFMAI71|-xK5@:,l3Utb4I!& p5aOqvZd7qkJJV?h7 >.޼Ri[ެIDͭR;"LCH`puN;)iSMC/[\̀B׶|ݦFBQ=]&TP/0x봫ڍ:uH8_hqDGʶ{T.3f _i^nG>ډ b%UY22M뚪 ٮz}::O+Ifa{j?_ه~^ !p0T\(b'pzz>|R3 0Xay9SJZ%R:aD"pON#zf[XvL6]ͯw7.^\ oFv҄$I%WnIQ? $z%BBbASw&Y ^n'q~t7oƔG}tpM YHiz3{w#pzMvK=~H^ezopnuΛy @I[:r[ v8?YlWgж6ӆEF`0I8'_NuМqOC畴ļyjKbed6nW^[6Yy.|v"76: Ok'\IT V8$È!p0D1M1 BR^e.N~bM5P՝-[>k (SkQ8JfE%]].onFnjqi8mF$e!Tl׍:3#78mfИ^"%/WrQ ̳ͯ"pWiCҬ ^TfXJ ;D"K''"[}N viWN89I7o_ p7eyC`JGp uy]0rVItZ=$Q&lnޭS67om5 FmE3h\#p7ßo?zAo$y:\/oRm7'_e7Nk-SԨٶOW4Ӌ^@z"^*p!pϧrd@Ϧe +.!p%`G|٬2b.=8C`*k%HR?=Ȱ*# p&&[\i|hٸ}pQ7< KM3kn%ifeVs+*2O-*P!pW}*3׈@r7DElNmZC5MUdyM)mO_8`M FTD$7+y5:~mC8ۦSur\HMp~'lZFLK,BԡO-:ge04MZp5F+^\[k/_pڥ bѽ1ZRKrA \R=A]._wQI>! pᇌVTh fS[0^-Y#\gHe\-SCb(Ҥ5-4 p\zurȪi(pu͆)VG|Yo{fl 'pYH6Nde4MQ$ FIW4Q8/Z{[#gG+FCj_9mJ@Cĩt "j2 >P!Sl:*U0.JRu=:;@%ai12`͇:1+(`Gq$^>j~j6Tt 5"78>pc+0xٲ#OYx}+r!Ls8@8ٰeȹ A!p0 pTY~ e+njޤh/+Y4B)O}l#f+es-俜K9 ^b.꼙em-ՆB'gJ}/󾧢"NEWaQmO@8 [:n~/mTU'V4-}?w1f%JDI܂}rZU.H p(J[ɫP7Fl5-YjxߣNO %\MHa@8)._54ڧ%/JsM߹$R\U/ -m.e2=+RX=n0h ̜rk.ЯnMG4& N5Iӯ|'ܢ,8`XXz6Ħ_˙A%AOt|zW5uui~\L/@8Q \tL@lsZ[i޴Jq9,|@8 \3v) px0 8@@^BS$q$p;D`d<8IQSi3IENDB`././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/_static/images/rpc-arch.svg0000664000175000017500000004404400000000000021754 0ustar00zuulzuul00000000000000 Page-1 Box.8 Compute Compute Box.2 Volume Storage VolumeStorage Box Auth Manager Auth Manager Box.4 Cloud Controller CloudController Box.3 API Server API Server Box.6 Object Store ObjectStore Box.7 Node Controller NodeController Dynamic connector Dynamic connector.11 Dynamic connector.12 http http Circle Nova-Manage Nova-Manage Circle.15 Euca2ools Euca2ools Dynamic connector.16 Dynamic connector.17 Sheet.15 Project User Role Network VPN ProjectUserRoleNetworkVPN Sheet.16 VM instance Security group Volume Snapshot VM image IP address... 
VM instanceSecurity groupVolumeSnapshotVM imageIP addressSSH keyAvailability zone Box.20 Network Controller Network Controller Box.5 Storage Controller Storage Controller Dot & arrow Dot & arrow.14 Dynamic connector.13 Sheet.22 AMQP AMQP Sheet.23 AMQP AMQP Sheet.24 AMQP AMQP Sheet.25 REST REST Sheet.26 local method local method Sheet.27 local method local method Sheet.28 local method local method ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/_static/images/rpc-flow-1.png0000664000175000017500000012002600000000000022124 0ustar00zuulzuul00000000000000PNG  IHDRYiásRGBgAMA a cHRMz&u0`:pQ< pHYs&?IDATx^ ?pEPWJHb"nqGçDTB *"ʧĨ QED%cHL&1Im=3=<̽oWw_SU P*@T P*@T P*@T P*@T P*@T P*@T g}N۷oFm3Q̶xn+[)G!{84T P* mm}馛}|?1r;Z5e @v7Q^=ˇy7h555QCe\< *@PdžnD1WKW>  <6o~;wf 7VzMxT P*@@eP-|166P zٹ|Դv5T P*@@;v\cF Zllllh+WG;v9ϝ ύ P*@Ȏ=6d?~`6666.ḩb鮭1;kJT MMѥ-ו!llllͷM4bf |T dCF6Gh6666.]RU&]Ґ-gWebӿoZ8+C[m:.6666 r;O|VUOl ^,2fMNj0NȓC1X.&7S| _ ŹeI&fT P*Z'4<h}Fw33~*+@e T P*V7^8e N"iRmQ`.yT T@۶>XtE$FKoY=@lllnnMuhYLړrdTfW?}4m0 L9[mFKjXT P*@,W @dmQ=aKT P hhM'mmQ=]u.%?T $@> CғњGT `llllDlϮ=OQ1eOZK.grDe*@@h磾O`Nx# jyE!F"ˠT  `u ( kdlvEl F@> `"md ,V8z`x4*@@F =2DyxjX^ u{(A@ 6666i :u9N9 Ϩ^DQ6;eP*P4r)DѱKߵ[=աCL뮻s#)_9/QF5L= N/`25휅8 %YW SF$}Pkk/]w\>ǏWs{E]䔿K.jΜ9Ny0uSܦ͜g5֩'` N 3Q*@Z+)Pf>B@K&+ ` X> U(@e66H ` ^lp)JoXa&&I,6,` l޻5uVmm 6@%ֹK |`vW>T dBD פ==<lkXl&z*I%`x$*@r(@"mKQũ`Ū\zj~΢k>;>v!euh%&G lb}|/-_#a(@"mKc'Z9`[n%!X Ib/e=8B)r8hօa>C9U&_Y"GM{QxoXlnXl;ĄG+)i3g{ `3FG\_LwNG9KYiC u A CJ,oll,Pk`y[=a/+y&b?nf7JVs`zX,(˶g/׏KB`c!OSS oL}77*E9/P AN{@lj/ +R᥽aONNXl̨O)2Eaq)@KYK7y x Xv(edzYhzQj^as"'Ӌm⽕j6{" `pX*@GD׎e$K8'B&$N,!L4<#9ݳiEVU"*ٸSz# ɉ‰qd\ d ` 97pl,@ R:N{B'b+ rK[!¶E(|}ӌ~YS T " m/ 66H `F&²(1+[ <6USF` Kا=\XP: .ix^,>TV!mvnցrecK Ј1kk7m#*eZQ&ʐ|?VQlJ#̱ $0Ɉ NW>y|',[x/ {Y7yX7 4'X*W9A@,ʇAl/^&pF{PyL$ vɉ`vǝv"U_W>װ?`5g&&2[[ "l(pP~0><sHX,; vQCd=@Գ>I! !NJLµ|靧1B\h.8[Bm20dX.~/VC&sL@_^?")OnMuܹsziV%c5l Xly`^X+}b?٬+Kk/_'Qs6b{YbЉ2n:_f5Gƽ^BlN |f9~^W_Z.L 0H%zt.%p}?|RZ`4hP)OϭVSΈ^Y̋PO,Ao"jVg_y]_{냊 Wypd?˟^K^foCHfyvR/y>\]|W(,-7So}OX)C`cMyx] JPڵkWuM7zl{U޽K eYVWГK%֧e,<GVB!M@yڰ}k2-+Ʒ*;h&c.s\,ݲu+6N!,:KSz_:tu/{7Ϩ,k3:6g!<6 J^Q `voB_%E߉Gɱfn VR|ئ7ooг> S]` -mn=hQ8W,#کS'u 7իWGLݾ8~5$IL]u&` 9E?,Ҍv9cccc`mlgYs|,l[ظDf]}!`kj@ݙ@Xޠ8#F7xC^5Gj|숑VYr,Ko2E4+<ƕ:GOP9Sc3ks=!K؄*!:ֆ ^Wm~{Yj˱#fh8fxxe^!,:d,s;ҋ/=Xc|lguf jKKZyd`Ye#|a2k^ g dR&.Z֞1uAqƈVhpFaÖ EA5 ` Q.eCYW6lO'xBׯ@` oT;w.,r|l4,_NMMMS'L㐠8`j6o|7~CB6M<[>Kyr2eNuvkypw&e(?ͺ0-028i9,h}G1z~59c&o=8;Z $` X,Ee <럸\uH6QQ9@Lo@10ȋQvC2E7)PjzPy emB|^f} ,Va"(,G?zմin<@Ϫ?uMX?~SuMx,˞8J0 pg{FB|@s9$m(6Ӌ[ 4weJXt%- 8f5!=>SG/H\pBˠ6c)t^NH]pZh0aѣ6lZjU]!=sLէOj)Y, ` 6a_`ˁOMϪYF_ޠyoxXMFJ%vvi'/aaE(gl_ji9ۗS%وfmƎ>裺qƵ{ҩUz\QD%`s֓t6nJքz,f*L!)komӠR]v)o7=ǥL S0zY_ kN%X?)i{+;6'NrYM(A:czV2i V~McA~G{,pPE<&@+chM>+p6]l6M+B,8_Bul^}ݞVZt:=k_׺_޹Ոd٢l{k`c ?|?8qBh&7L m*Ui}>W{fcmw rqb@(<+~V&C۱ iXl-RvK{G=?\}]۶׿ZmV\,XShNX l~(seHHr%؉[US$Ne#ٳgP_)Mjv*+xq;?\T|I]ɠDXg紟 6fVZGuO\+C3^a3Vj>GGqSj-(XLd{LBًLdzq,n֘{Kl{'ڴU+Z1;0v[O<PI,E6fgHu FDѮ(^BӜI[/azZ,,$N2iI1&b(Ij'OP͜;B"c`1h=uu oxٳ:KCM:U}駉>LV?O?.Rl (le=f[Y2 Nv{>)[`3i륇x4.vj,.e`ᕑL)3 與՜^F.Ky8n~XoȌwy%胔rP1Ly*q#ۥI]r%&q1B,X,f&~ bR(f ?k^Ń>H `+[l}bAUk5x| )clSk6h S4@Sk ԬԼhaRX3,Nt CSR͚+[miُ[sT@dW]oT2lX`1 6mxa]ӏ5vpCue)BZz>}ݻ:՛oQ%_y['pLM'zȞQ=.oGNGc+@@v4b,jf?26.2`Y6PȎ! p^s5j 6 v9 ;n8oJX?O>PW]uٳ3fn,B}MqG^x {I 7<Փ|؉nYróJasGʳ}69?֩&:L `M`v\xmLM,Ʊ9 8R+m'zrFS%mYaEK#;!wɘ+Iخ`-ZcȧB|g-,oHou6 @f><.k{c]|X'S$39* `fyϣv"9x6lvI9! 
p9?mzBWP R gV:&w}j;=&MZ}քZl^HJ]ClcKmaQB<^rG km7l!xTk{zD0Dy_]#bRW=.5%wXFtjTsϩoQ=%,}: /r `}% 7:l0Ld/\O>K%BI7ݦ `ǃYW h.Ab;|E8&62W`su9~2mSH\xM]#t11BI{uPA,p}ꩧu1qSW\qE}#lP t uh:I{EݫJ8!IB o~ҞY<[wʖ !~]wݥ0$,cilKՏVڴ|=Iױǝ:q&K-7U+-3 ^Z$OUM+9~vjY!ZR9x~ͬ?fEVj`]ղN=Ƿf`1#0f6w3br0h E~|W_uޝ[~s`TuI{@8kv;YDL{:PCnnԬY2q輪.eK,\PaJXĉ` ̩ |ro+&YXշ~[]ry-e= u 7WꤣVWn-z"\x^7*[D8`zi>a귖jr^(/x~/d+o_Wc`csqw眣~zjwb‹uy]?6xaO<(;h5iXz<_l /&r lNeoNBXmN@"+0o9H"#)a~+!dIʖr\lC9&dC sT'z`. ZWj/j,ׄ}/}%@I~!P`'( aHXJ `S٧  cõSl6 T  d`1k5& ʨ: 96hmT*`cK 1ʕ+ը#=//'NPC[MK:~D߫N;[>yd V`|Me,,a  P/ꕵ6@fT d@,`PBY]֞؄Jke,$-C1#>*`c^ucZW{3'ք׫/l-{ԀexPuܸq(V1v"v;o,6.ǡ%`'% d`-&0(o VJPl{Rs0m/ z'[Y1<:ýz=X@묣~{UWy XK8&f ɘAy޼y^3^r= ɢ v@dwM 41@ DK:hq\mu_y&lnze+ȏ0`^w9AXWgl.2KXQu`_"bҦFH0†ixⷻᄏٓĽvU=kR`Xjc`W ^I[Hx)ݧErA2> `V,Ac`1v/`++kڳ'BmsgKXV:D&:L `T6,KƸժ !~%/:u^a}hu\U.S/f̘+ 0+/y`A)\/sZ½D6U];Lu : lX,[-RE0rh=t=>o_o&cdXٰc1GP \Cu\"D&:L `T6,KƐ,wq5g!;ݛmsgϮ'N&r_x)`Lt]j ܧ5$:X'QPF%` _c[;"P8eo A`u]^X,V\.o̘1jСjN.`"zT\>`l"<6,K0F :먣:J=c>sjᄏ6hyI{_&zgJK<}-D,Bh"ZTL>׏X'K:2m׮/G1eOQ9j="K%Vkxbc?d{b %KZ,ւuI#>[ >'O5ar.ӳ uM`v?lzEkn׎dFMMMSӭ}IN & 6 #,c.Z#`[&kvt-UW d{s56hmVs;kޚ=&[nEmmCLX3&lz lzE <Յd%`Cy?s`a'% DFAmQ Ɣ! pj0%m2O VсN:Cm.%Wbvߠ5Yz"P=O#}xc|kBMu5^H[/Ν;7t"`ueOMj{Z\"φn赕]o K6m8CYaA,֩s< I\JuW2JgDlo'~gtJ~V<4uV؇5 n>NQ=PݻP.D1%go ]tS~9s>vs᭭@~d-lgd$FMRmh d(Hz?~Ho\,:F=gyt7zc`` 7n,`#|Pa2(oϽD"R##a$x'&kJ`wd$FMRmh \bL:В7vM7U^{4N_.אq~4G?<$XeKM!`-ZW`jP>lF=K'S{>%; 1t\zm"4rԩjžiҤIb9z)-:vTM Ξ=[U>1mOd;HH{ ` 삣TKmma)9d}hS`U,ŀF$@$BnMB{O~ /԰ ̘FpҩgK\yN%`Gb'% `Sr!rR l IpI(,vv:rHhѢRz衇=T] E%lw@: dVXlj;O=JXO?* >LX;s/Ǟc|~H%`jBDZ9QuWp+ !:q-c3KYi/&Fq ΪVTKM=ar]wۣ| kΟ?K;m4/r-jРAxj< ٝxkqݻW/cܔ' vE͙3ǩh0uS{0ԠO;UhW=E[|X~;C(„G7 ӳ{SRVJ%AXlf ALԱc|jƌva "ZtjQk2si4(=HQZ $bx_x,`+Ʒa/+y6p N 3Q+@%f .Tc۴i*+\GH.Xl6v V?Pe,[Z?~fr#QH `3҉b#:vUzt^&))!m!` Y-[%xXl߬9TK4aR=[n{52}.iˤC%`sh]?%X{<+æU6'r2&[yG\O\=oAKHm\C%` ɛihXCXթGG="VEd D/PI֐E>7'63<#9ՠ)WK%`Xl򖂦%I7H)X ե@<+$Q,'t,"ٳrb˕W uyh*!غK:O'Ym XlV ~av%_f! 
u<+[McOTMiއ[;/C6 x|+p>=+5&6p ` ~kyeWk& i:6Qq 6.?)Q$%W"hiDv%ʎeep!` a˵WvmX+∣쁕tE޻͐{ĥBǥ.-X,@mK~۶m} yy>6ɲjT[ܝ TP^玘gT>)l6V}s٩.Cs?^)/2.]9s8 a6Ե}GB`szE]̉ -ey3؟`k`{myfW^:"최cfdĒZ41/]wA 1k-t~_tHtqM'l&|1t`R(ס [XdQXg#} l=`kR;6U˼؄'ݢ7mC؆e;;%lΩeCg/vcer,f~۰eoX,n..f9ucP-ͲN;_ٯ^(H6#z߾aa ?,]gu`hhKCP=IpTlI}&W= s`/6;jO$[l Ap.+0[I󵽺c;ز]gMn ݐl uu ZkOkz!c 3ܳ(9r|Am/7kn󋜲˗')yUR>g-l\~R6%"'@ ?_(;?fg 5y3T:U̐S@]FGǬ ?`K^U RtGlkQ F*aQ@K&ZONvq ն4\sáܾ.לyJ` JQ n zI6/4 `!*/ZK.04엲Ug_AHC4\ԁh1;ZVe9:yclϘ҉TW4e˸]s*3#|MPm28xq,bs68$`iģ@oB' z,x`74J$,4$sOTlB~@3ʸF|WX 5={fDž}e ~G2UfD.!ɘYׄF\(KB]rm85˖I!eH'.0.,Vf/tRGݾ9I$L`^fX˯ I_]Ji)ZJ`LrQ6fXVū߄Dk8ZZ~%dva6K9fÝέs˭Ejau~E9] F=>7:(:Ko3%%cP+"\p#{d#P)@%:r"`g;$ű`\Tnrbx[t%`sfOz:Xbu\1Hok.@~${"'&Ƕqc>R&kdt=Fܟ C,67KJuZ` 032gaBSCŒT ` XF#$B[k<}78Yɤ`ՇP*X,F@a_ux%.x<<*X,F@#` ʤ`3uϲTn ` 9K$݁xŞL#ueM%֭O Xd@ `[,f11@YTZ>hnO?T&yYK=uz^$Z8@IMX4st#G}~_%ݡO?vX`;wD͙3'0~j뭷'eW577;tM"ϥa}3w ` Q,l[ظDf]Ϊj?_*Ph!`Kr xw}4 3X{q^Sw+0ͷU;찓S{mvmwoS>#Lw=(+A%F htqnd2`љG)26X4bsf}#7ܷ9N: vxlXR&'6zV*V xl@%`nlf"fPk~cNMw/W3$Ya%`i2 0WYGʳё~-y9),i1YtuL1ј \j:9ԒqˀϢma)9d}hS|@6wQq/}&q\sj~͸K6/\t`MR`ibP /%.yd:+!1CMϤ{Z ͨ^:q@cO0Y;Y=H$ʶL6U~qQ>c}<灜,F>\ruvꀲ$ïiX,*@^JUIxuV4`Ő4biR?㽖o1 ^-QmUvh]Q"yt%4CʳNj#(7^W.+k2K=L Ica_9>`O֖{qKϢ䥔]5^]WK-9-F.O04=koڍ 6RX{oԣ\x1 6_Au[!ߘSR=96/H.HƫC%`Mxq= ^Q3$_/h3Q,3x|À[,7vHzuʥR¤;ӝr2VQ9xA;#` oXۀ@WՄVs `즷Ƌ0»k$n[ףv'<X'[j5)'3Q`K%>ZDȃY,#$i$X" !MViV_ tۃtl}7@lS(@MϵCM&@ hh*xƷt\/"ꫯzU3g]ۭ[w5GAW `[_@p1pq,X6C4Yg^{5oWLSNU{.l>+U˸ΛqKϢ䥔]5^]WKJhg`Xu=|b\a;*][nXa fRby)eW$e$!` /qypX.=0&`b؃7!I'Kw}j*u:@NTeRjrɌTjL]WKqJe@9JfS 0}Yzʕ+= 6C,„;d&*y)XDs$*wF%^2/UY3_2Jչ&%p=蠃ŋ{epbJ(ǁ>qyQdK Թ8d&*y)XDs4Mkt8s^6<":L `T6=ȠqnC]:aʤwkA#`ix^n{`:u}<Л?[sN뙐Ȝ`LQLe3Pbʮ4:,KeHi0駟V .T\sѣ+\5g;73$c)L<_,Z!:L `T6؅{vS45Ël,4kQi<=묳fmƎ>裺yۯݺuWnӖ)֩ `4х.DTtmrj13`#1uE.Ԋ+26[mm 5m>sjܹꨣb5k5UmŸ]}WfNzCu#Xm`tav [N-L+]h'zЪf;k5 ^-YD=jwVV/_Rt7頃Ju#'uFf(Ѡ}KuenZL%!<$ڞra ,H0Bq6 /Zl0a|~3[o?n kՎ=ZuС=ղ9Ku"ا Ff(Ѡqz`aFNhY0NM,ߝ gXԘeԅ#Gag=20uOv;1/gaPIjX@ !~ǔ,b|,@kbq7|Svi%mll"MDu2Ψ F NS[<Zn6D'LQm߷袋7tQC 'ekN ><0_|(wwT{vʋaZ_S~8~66P `^_1V3g_Z>CJ ۩&jk#+@` l[}L؂XBSAFcgm{ɓ'do[#SO;ep jСNy)L]s"to7tF>|~3^O4D+WT/~;?p[5=j/=~{iso` n`f"'zސr1 _0PrĕXxQキj?MJ' W_}kVu]v'|RkظKKX~[ptd6786{BLK%L9sǏ~aQb<[n^L:T&y"ڟeMtZiYF8eol?` Q֞|VXlZcZO:HuzyW}wۭbx`o7… ՠAڣh"駟ƞx'|R͘1|#`26߷'K'` )5{ yT]!K\s52ط~[&@;{ァ>0ٳլYc.K- 5סɞyG;)PCc;Sc1[cShhvN1]w.zj/+TݺuSƍSΨqߘkV ) 0ŘvKH\pA Vy7|qEhzTK,=9PnB h$NmUuNKXz&"ƼV DOzիzƪV>s\.>uXXlKCm ({x^ u<*#bO?_(õ^(z`遭g+{`yu 7Uf!~)Z|~ k9XLGycbw_o,_sp>Zdzg }w|YfLsFl;L!Ee3fFKI٢# osuJSƍᑕ Dxb<|)o.6K(a|4郺`c Klj<}SCetnf_/-YX\=?ϥY{F>80A(!/"^2:ۭ O3¾])+v[+մcAasǩ<lrI=THm-7xL rcFv#첪ڇmõ^(z`遭g㱃=X,fuZUӞX٦1|x(|wG,^dL淀+f2Lflٻ ۙXۙ;4ɨ?ؾvWÖKM>Ǒ LxEZm6 n{ W\qECѬSsL,=@%s=Տ=2:V&x /k^577:,^]V,b -f6&/;EulM{`ќ? U28[8kL`0Űe~, $lk޸>lN |Zv{ pXz`xl7,bM^xA-[[FcFbxa%wNo&sXYFǜ * [wL t"\?BXn{Υ'ծE@?TVV6Ӌ+b3c;I^8`kki߻kߜ.+y!9'qj}`遥<sO5.IJp[l `NА -JmW[4̚:^D, IO$? %)tOʀVs.:>qi5oݶ>B}skghK߄hbtv=MiF='% 4znB6?G)iڕP▥qZ2dp l|XlF}[jNag)lg>Fcam G(cB죈j8>V{m{oo~*/#J'tTD: lqEt&͂ծcLضE$oYuZ]{e>,FkBME7 m4zc^g=:va)ST{n OcoyeBrwV#< >"D\`1YӱǝzO']ho:c8#0-}K.JqXLi̩v( uYF{TwrK 53ACpe{`Q@<*D5 %KI5V@cnO}Z/E慓& 8&hv]A G՞W=#ӽf7\P0!$f͊ev{ Hf~@_xD;oك%Nb Vxִ` +Mg7X؟~a?w`+٥vbNJV&?=Ys~Z34ﭔyCg־pӸae4es;i_]+C'`[Ծ}o pBi έ^y$NǏW[n:g!AŋYzꙇpS6gĉLĘ燂XXx\`ًὝ={5k1cׯ"%e=Qwbf' v 0'rǹXN䚑vx1% RM\,l[ظDL`1 )7,b? ! 
~RDʉ` )~P b,{+.\P=3 PxUWJ+c#9D;K0]OtO:نyF3&rI!xW A@ .kqgf9@+ٹ)!ftvmL /EB[b$D0Wj%DA7Frùx`˅d70|(țrӘgKm,!K,=yqԱ%/G=iU<2:AK4^1 0 5L%` IXaksTMY`a,6ٹoRzL6^JX\$Vbm+Ltxk"dr,}mLcr>oܲy RcӻE,=l<ݕSR bs9`L^vB,to[{cefG}v}`j!WQb} SnY\z`km?sT/qޠ3\።z;3xL픩37t->{챪I-K&[`m]tZ+a'qbX~*}4I8U ٕhC`ـل%"obs GtɜX %M}o V2$|юAWܟK,=9pɘ+Ucz~Ue=Sp^Y%whƛ]WmoW]Lb|]?c0K 3 lP Dl#eiQȋy[f16㶷 ڷu/QנZ`W6yU78O tKlP`oi-[2'yj;vl(y晪^J:dNgFe|9ڸ5XFyD%`T>6V6` --Sy(FXlLjHcRXV[mtRc~U 0/h;Hy=M\66dLgIl3s%tgX,{G` !fq&oWuةw=$O.aHWRƬY<;wnD͓%`uoܐ S+݇Cc3iFɢبˋרgb|l3֎x8u^;b`u]>j&Ϙ1A}rbKUsF3hYg@!?yRKrF!u`(z1>vq7V]wK$N:g)ƾXC m5'k2s!` R9,K%`+}A]!Cr bCwqGջwoή9uF}矌UYS|w dqw} kVJ&b6>#S}Ob3gz Om5IVÄ|X'CK%:*D%` S#FrJ?>'jOG8E5յl׼g ?['冩kpNUNxF}=&tС~xb$o;,Vnu_zKM%`ng"` بdoŋj|j.z=.:Z_kƮu"$q+m` n,td3S/]AVKmiC{C*8q-Ԥ̬zz2Y3/]7[nnP-*kj&?b9t饗M7ݴt!8ݰϣ':Y Xl 6.?)QYףYؔU0! [lYXMs`u˞y?X҆'^i3g{7H>}7cq<X$~_]v٥t c\n֩'`XFL)@- 2:ɵlpU%C{;`asakU۶m=\GzQGy橅 iӦyr K`r7N;+n^4p%:؂۷)9"عbʦ' sX~e6ئ6Šk{I%8K.D-X`t{~SOm5#/X'"R}_Wn)CN-2@Tۨa0eR5nȲ L`4Cf# a=0.CAn P/c6'.ƴBlp)yin-UW db u7~ f8מ={= m|IՍdD td9[-LlN)@c5V`RP&y„F:3hWO F H3 ]2R/9:<:u2^q<(Ko1ҠvS%w-^{ ϟ?_M:X#M4Il06W N}xIdtd994bZ~r6g*ht ~0} 7r[ hխE& ˱2BM/pjlݬ,o3KzmڴQ'|2ed;cK\gϘqlyV P:(@5h0cʱ /H.HƫCm} si)7=W*2Cih+PNJ]q%pXewv1,rΙ]㵋 `3n)YT ` BLa2Қ3%']p6P6@͸gQRʮ/H.HƫC%~F4o+Xs-3)'YW`^NuuN9 +@^ ( ʝ` ,Bl1:Hc`)wr,μf :Ru9N9 +@^ (CnHSX'3$39*@u*lMIʵX,֘6mm 6@uN21XG dQ,Ke`HUu{t*(#:GX'QPF*5Jeѓ666&G:`swIzBغσSx ` d `5P:ؔ]> Z: ~P sF=L0a&z-c&/m嫔:[vx)]ʽΩ6$g7kac P:YeYi<\!N&F/Giװ*Q60 P*v +cy'ּAmIaʼnn)=?tw'a'e|Mx8'*uR''`e ^*6 ]*Hȧ;%xZNOx-\͹s*{~cebxi!T7X/a_%| r̝uǐmm,)rpa_M;2D&v"ʅ}o!#яaP*PAj7\M `0@,XM煰RLXuugoːh9[xd v9AфvDo=&'8,d3IW@5+q02nլ<0dxQ y@`{zz`ۇKe``` `g DPcQS  ՜c QeWʰ!:SgT TP/A`|$ DEB 1`"5Z荠Gmmmm[ ,%RPwt4Zֶmq~\jjp#rOT P)9(jcxmmm5asFgc;)ed ъ%iU&]1<|t.y&0cԕֻ ء7nTo12~Z(cN~cBfI0;1Ca=rN86cHPWoy|f⑮5 s| 6IBRh]0#QNj,5a9;hf#n3Gō$J(׬ B5Ȣ79K Ģa% It9{$@&x@E{醲r^ԡ>2q<_<6aĊ l/}>(+M}ӡ Ûd,Va^Sl͡t0ZU)KhfJ'8 85!֮jY lOL^s"ۀx#]`v:c[KYI{ҩ>С=HPA] V+`2ZE%I|J]cx6OaS˛3, רa-;fWUv_2txzA{v",&J7ؠßuXUĝR\$gGuŎ =,;,E 3F}ze(f<8Dx-iFFN%F+I~KC&bpxH(`M[ۨT_$V }f>^|^J(5GnfWZ?M{fE7جH;E.Cx(@ `Qaإ=-"uOM;a|6l70m\q,F7AkB,SA y`f|['Vn80BM&aĵ< ޱc/ḩi1X| ++@~~]vd+Z< M"=S#foa`gͺӨ vʝK8Gi㧍YI[hwUNnP;ҩc yÌY{ 9aT9?wh⍓E3?&%%@mQ;l5ss6ӛ +6XIJy~}.1:[5+ǐ}Źoh2=4bsbxX&+_2y Y$jM1dB͘YWd\1qƝ:=rɧsL,'|B8hI8cH \s?cI1ܜ0Eډw]רQ f 7n@y[d{tÔ)y,n^?Tmvxd̕0l#7@=u#?2k:\-|1{| =T> 4gd JԞ v;V Gyrؖ@,uَ`˹͖lf8/V43Vǽ³ A`ޤaD #j˱7IvFzZWa΃ym^ml\Z!?S}zcD 2\кm.GƳ LaQdHoaʌ+ohŸ(˥T D@S 8E'.`, P*P'z6:UY*VsfcEwC[1ݧ {ъ^1GT P*@Q L;X!axLbn$?8TǢT P*@T P*@T P*@T P*@T P*@T P*@T P*@T P*@T P*@T P*@fĬXi\ẏSODT P*@@)%,uy֟eO ([9 ,>AL,K1d|rJ'\ʋݨT P*@FF@+jKj!ؕM]SΥ`fQOW~f+=ʍ. o, P*@T P0=Â7@(ҮJ+N~y ƒP,M^m 6]rKuQyT P*@ȼBvb]O$EpXxŻX),X~}P%@#1`d*@T P*@@%d Wp B~h,N~gy`_6 8Ts8=h@.QGW(:+?`ECeV5BrQ~pHnT P*@T +P `l(ȘN'߆#6!$^H? 3cL˘ScHMhϻ̺PNyhtuL5.Wwk?Zü[`%4Zݢ{1AT P*@T P_,\+J .M57˒ c?+k1pi?J1 jX_{WlzM-5?~!2|u2CŇ%T P*@TYs s< RBfz<()Pm' c6t1ߣXYmmv9WW>B)c'Ow:.۠\Q*@T P*@JKRUni9L`]ARH3+^a36 m ٨6hvg?5V4)w [ m% njT P*@T Psd9,V/i{* r". 
e{/XZ a~\",`yZG8f {рl5CT P*@(M$ڢ شZ[ !A;JwY:*E9rΑ۩T P*@ Txx[.&,03VF偅h8o$ X5k^.;78@2O /Pgc%5F@ J=b3]onkkyubM P*@T P(UZRԘ=gSpr8rk"O]}ʕe ǖ8 Page-1 Rounded rectangle ATM switch name: control_exchange (type: topic) Sheet.3 Sheet.4 Sheet.5 Sheet.6 Sheet.7 Sheet.8 name: control_exchange(type: topic) Sheet.9 Rectangle Rectangle.10 Rectangle.11 Rectangle.12 Rectangle.13 Rectangle.14 Rectangle.15 Sheet.17 Rectangle Rectangle.10 Rectangle.11 Rectangle.12 Rectangle.13 Rectangle.14 Rectangle.15 Sheet.25 Sheet.26 key: topic key: topic Sheet.27 key: topic.host key: topic.host Sheet.28 Rectangle Topic Consumer Topic Consumer Rectangle.30 Topic Consumer Topic Consumer Sheet.31 Sheet.32 Sheet.33 Rectangle.34 Rectangle.35 Direct Publisher DirectPublisher Sheet.36 Worker (e.g. compute) Worker(e.g. compute) ATM switch.37 name: msg_id (type: direct) Sheet.38 Sheet.39 Sheet.40 Sheet.41 Sheet.42 Sheet.43 name: msg_id(type: direct) Sheet.44 Rectangle Rectangle.10 Rectangle.11 Rectangle.12 Rectangle.13 Rectangle.14 Rectangle.15 Sheet.52 key: msg_id key: msg_id Sheet.53 Sheet.54 Rectangle.57 Rectangle.56 Direct Consumer DirectConsumer Sheet.57 Invoker (e.g. api) Invoker(e.g. api) Rectangle.55 Topic Publisher Topic Publisher Sheet.59 Sheet.60 Sheet.61 RabbitMQ Node RabbitMQ Node Sheet.62 Sheet.64 rpc.call (topic.host) rpc.call(topic.host) Sheet.63 Sheet.66 Sheet.67 Sheet.68 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/_static/images/rpc-flow-2.png0000664000175000017500000007367200000000000022143 0ustar00zuulzuul00000000000000PNG  IHDRWcsRGBgAMA a cHRMz&u0`:pQ< pHYs&?w#IDATx^ $E!CfoA@<("Ȣ0(=,.l31 #8 ( "*SdVFUeUETu]y9OƲj|PPPPPPPPPPPPPPPPPPPPPPPP`p ,v۩fϞ}Zkuc6 `fͺqbbҩ}+(((+ h=rbo1cpЋyq'.3o64l ?8 3>?%l-~~|xu{Ofzq(((c y|!/`eClF;X+ˆX, dfMܿWF c60~6pg*}xkfdۢQp@@@U9*oxv m݀GY:wqΜq-m3@@@3g/y٢ݿx^lmn)6xӥ8 h*~]ț?n<;6 ܳb-|ێfK] d̙kj_+v>_z /3w]6EV!T2o[[_7wW\3MiPJa^[džM45C`["MNmFܒݖoo]hGIPP 3"?&pU2^U̞=mHHADǵ)Ǖ|%l`«ޓ@.(0j 8@6 `M;P/`epD`q1@+6帒 `@u`q1@$`q8l`6p+[bK.B_>eWDw6K<;Ц`؁>r\ P Xe5b{>Q۬Yw޹6;Xtݍ6ڨX*n`ATz7lT7޸2lb酗<;AvM; } @ Š[3޶N*N9"+V&9sԦS#8XhQTZ%dMe˖Ew/tg}.Dh!f# =>D(xCPIͰOWl5 =>D(,3(:mk,a"gm"o@Woh!&l(Xvn`d*SNCB @& G |;X6fqP`XvPOA-XW0|l(XvZϞ`+5ڳE |i^m`siTΑأYRn!.ndV}_p@w0ox Ālˠ @ 0o9~vizɔbw(M`;NA>~i#y Mj`1j (Q?tC1 Q"# {{[+[AW`, _σ*|~jš{rקl,pѯ|-X6نp8{=-|t7cGTZ}ݏC0T!v8ݰ*HP<,uX #rm 0Hi*ES} VitLii=,<]y8*$}g6ƶX4}.Xi9vݔS|'6nU;@`w4zv  6]lٸ:`:;zQ) Kl0u+.UX%iS6_{hR3,X`'EءLg,s[ه5xVj ֑lj4UZyO|譥O-i >5ԒO VJE`>Nv@!r]/G寫y]\+SVOՌ,6DN,XF 2gDOڎcT 5gt`l`V`ʩ}3U.#f*^XEPe~MGbYlp/KZ >`dׇK< [_:]" l]\w;gyYl1yl)lhXucRk)  *{3ܾmU* .mvetF`#}#P  o,60@`a1ljeC4f-a @gm0|Ɇy_Y__Ua=XϏ8relv8߉-i މ(#ƪO]NX`bor;Ϻ*VXQ/̙SN 8bѢEQihM6)-[^u zCGߨ}*)\ݡGXk?y+vgT+Cl6/n[~<#?O[m3=度iS66v"O}vD`31 ;>u=;`؁l҆d6&NYD`g}WݾG֛= y}F.^Hw^*kƆz^5 9`}ē Mz+ !ύ!{ݒ73fhu=jm 1֡`N1u뢬ZNb\ډޯ둼Z7Thk]8QH$j `OVTl5~Xv /)nA \\sbM7?ˮX 2uescu[cWv*j==-|* ^}i+ՑM܂{s&ٲ`pb= XtMSN)f͚:xU' `XV, `Zu w/mU6JuȡizI|_]wÖS?:`&s-?ox+_9 4-GpL, ڄwv{n: ejaw(惮޹уtLinU_oDQ^_5-?-|[9²*d xfLquQl,.LaVVk;裋}{ŏ~MKt(:k;W\}]_LO lmY9&?un7Si xRZ㣎YJ ,|yj1;m:_u;SkJcmʧ!6>.2/7MDqmʍ\@FN̄]xw,?qvy9ldzK dI^u,H{&lD@A`؝X(4X] X]5A _@qܪ,~&Uy4 ;@PXo C w]3(~t?(N8UǪ;φn`XV)0[ 9}cfl_R׺[:=_wN5 8y= k9{a$?y[|[-Y5. 
-HK`ս7˲U.U9eY䶮 q-:\ Gݟ`XVAjۮ?_owvֳv߳[f~#i, ^6\VﶺI..mX$O!]+['[ [x;v$;IKZ~F\6`]x֬Pjos9f31,;j-iW2:;16&Mc,݊V۹#Jg+t g6ەIw[w>vu`;4l=j-r _ַg۴ϛ鷿t, ¶,Vhg ~Վ2q:ơ3[uuMOs4I6ѓ/6 qjIOwQp~s;NvVBu Ewu #29t v ^9`#ql{yu5,N??6Nôjqn*_~%}tܺ+?c!6Dޮue~[K^奁mMR3q`;؛okE.6+z;W}g:G, {ۺ6wtϷa=ٓ8 b|"1cF.vdIe][@VbӪ}86ŕFF&J`bwܾ/ ܟԧ7߼ws[v gS`o[o-.]W{zhh㪼ہ>;5fW݌>w:*>6t}0n~~T|h97Yy>bw.MnZ;׿6Eciv)lT&1nxcxWF{+/8*]e^9,ԁ`h`HiIoV݄} {&8뮻Z׼5nşgGyd_z.myyqt`_ [#f]M>[B\fH&%wp1wpM:Iu`6*@m6p0vHť`08䠃M]zqM&&JV3^[<.絢oiO<5.VewM9/n'ظT,+SB<*k]czΚv݇u}v `È߭8URe)Yzأr9:sTj{^Z?я[ne񖷼noۡlńoA,ASN| mM`؞V]dqf7Ty1]ݬdW`4~bw,ji:E`؎i'vbm`⁑Qn|[Zu+V(4_l%\2pXmFm`7Z1q`v~;wSq*]jbNj徏w1Vi3 ,>ZV3 c`-??}<i2gIfoWn۟r-a⧻gf`w~_nyI4ֺ[V{}?я ->/_PT>C5\Ӛ-,5՚_-|H`q¿wSC>lt3}KMȿk~gdK'm#n}®7tf# ft/v[-zscƦfaB(5X,˷*mlo`WU/V3^tgr$C'>Qlu!=_M^xÑG7Z\V]xӟ]vYV[{xxG_mV|K_j;ëM` 6on2`iy`Ѥˇ-O E[ [@:}mKj4$t7usbx zEakF9uM'`<a-[FcXΗ&)m=Z:lc^׬FH?m s=7`?ϊw-G>҂^7{ō7ؚL (E~tS}AD`}o+v;mfޝ-?uX]}n1񲚨hǝvx٘u`ΟJ1B:V5+sY8d}v!>؊r!Ů/fʕ+[}kl"cx=}_p*?|i]vݭ}㗶)ʩ:"׹گs۵_O۵_ ԔʢoS:uR2~,04- k`6b|MdZ |/6hbxn:[r@;Z6 `52:6LGfwON27*vi? MOj!7Dl}ۏ WL;QA F-iw4 @:fI+[WS׳4VVQY<5+_K n{ylt; FKE¾&=rɋݶ,mE'fo3`:viy:hV_vQIEF{ꢐݼC2wⷯ=-;8̍ύu[o-^W/]sU^VVbeV}of ifWu[3+2f{!ROG>kp((RTZT p-@Wf8.BoOgye SNFlT$QbwUGqr>9+M|x57UvBO}W<د녺oS,,;lE{f3gƝF`Ս8t" .lu"<#ٍMu'7`Bܐ558֚+/d ^ Bu{EOSVSN9xH O(:vZ˖-ku+^hQUXU:E^SX?~KwQHgPhm|G7?ܰp6lsvFlT$D6 t@OL|;GL$, d.jfb-7)JNU׬XM5]-RuוºunVd_6m/6[o?>;;wnZU)f~>ǭ 1U&cy i5]ֈ tnl8fVleyB>jRzXur:ֆ>sXո;Vڰe0)ϘtLdlWI?mMk9_\wKtI'x`5{Y)Q*žzVf$ֶ/(]}n<7tbDhS2k[+8 x ǜ*u-MdyҔN_WQ\g]O׳Q_>JdTMR nhHԲ9ro[D '|r曯 o^Ʀn]sܶk 2<lOy`&Nr`vmw `nbqد~n@ws7Io|cq}Cnp X߾Jt6VE[n&J6R+-IcqT;G"oݷ2/cݖ_x-) 2:ͷgfVa7Q ֻzm.== UV?(x`uYlf42`nߟ uS Xt~O8a.7u\<\o\lRnO&UeA0Vp&4QS]v#A㨕l~cٸ~\'25 ZgncnW]y{c]S\W`Ea7sT= 4r a57֕=5=96S+2NX=~Գ;I .ݒRU`$V0qT9yw6??:BtڧS0ky:}]YMj]>lpۉS^+_Jk)+~-=(~mwn|yrJCه)uh5~]_?k6lzl DIiTR'lt; FKEBWXBlQXQi`} yj_Z 뀳 >R^{)Y`m?]ϋ\4nWp@kb[dϫ9ն&X[sun8[ Q>Ѿ- k8'IPl"6X &(*Hֵp AuÞA6|بνVպoU]vokp*nU Vwviuk_Z+n,PPWU Ѯ%ڼ$Dj+dQbJԠ]Vk]` f{6;1لu`5A&qRv1JǻtT5Qԗ_k'ԧ*elu댓-l=?;J^6*`Җu 2,4е_E`mlٌ51P9k]٦b +n//|akYR?<ȖEQ_ᆱj&6ؠַ؛nmw AZ+Z{7^{m/}>WlF;HG]{Zcy]UohF|;M(P$Zw`fIlV&`R:jZ:LON4udPY-}Cn=\w`EJ7qcZ"׺I5לؽ]w^qnҪf0~'":GalK` LTomm&G`󵁑XE誺m&`עekhׅ؟MX{[ɚ`zs=*vw^zn?U]˺ھ<;ZGV3 ˶Tv酗T.ޠQ'|;M(kYl,Yv=m׭^"^Lbtl0[;TMU `b+UXiok.g} `5V٪m]Y@+zЋ^RVv1c8mW\}ݦz)l49 4 vr> ;y>~,"\=8j&}[u.Nq9裋]mXgDQU۠Wo^w->ymvs6s95|;M(kB8.J=l}[dAt>v[kVagQ?kt^wZ=uZ_lٲnÚIsλ4ŽM7%;Qxvp7WB))aS`؂lO(9^wuŖ믿J`8@aZ{v밺 enmM'yҾ7߼&W׿^̝95{~vk__r1Tw5Io}G݂rb"\NFYWO6>åNMBVllPޙZL t\7uccWz3ֈdKW@VݎMTvL{5״R\v~g5ɷ*`ϟl"uC9>4WݹM/^?);U|A]Sk]WnV@V/| {\p \WGث;j7 lZ1ȺkѮ- k`D|'`Bzg `o𗽬x/1~Ӌ\%X8)I&\vL{UW~{ M>Yp-6`"! 1 ;.`iMZFon*馛9._ܚj;5i-^vemӆy`Fl_ZݘΗֹ6ڨ5C0˺ASN| vhHblciG-e%aݠz;\xqq׷V2}+635U*iVmt\u39G AgU>+U w +l:uhע pW;-[ &}BFmCyM %m\D`_Yubu7I@v]7~Oo~X`떷{k2'U>E{]K`nD`Qqnɛ9s>N*e={[ D*2ΜnY^O?[O84/u?uMU/:4 >c\g`~O,t奁mMB0۝3K 4vaf![5<Κ՚xG|}nkwL `?϶Ƶv1nA`_M'yl .q>m lMQœ6U%o = `\N`sW]uUkv[^gZ `/B0/~>Gz6 2+<.J Mnʌն9e y.!S Fba:1c,[G]h X7qYkPsNum^}]mx`q콐7`N67xwxXY87am'a4[Mx FA>[o}&umOkNjcER;ݎ=BKLklq. cy4lQ\!Tk&rР튫+ g>Ph9q(*pXZ|WFoZvivQ]`t,5 ,&pZ؄+'â@;XM̤H⨣j-6E]T\q_nZgT0`X6C!"' ߼kEqWn*nFڟsF-6@%ՏRPu|`${Vk= 1C:3U,;Gw=*jYV阺+w?VaEzu[´lOB WNE`dN4jN;q{_~yk[ti{NfK/${L͙dyXkHYUn~b~}JouUzKc)w/`|Q*o,ls_Fmb#"߀O|-/~X'!؄+'âl6wy&`Rbm/vK̝;zYZ)7ڔ` |rlتqTE[ `n:n`X6`Um V0 fړa}' :0v, &dx۴ H ƥPFO`kK^V;m.X;[3'SPP`krlFS鸓Fb;GFnn;.^]uםUX6__̙36qE'&˖-۽SuZ?g4Nfp۩lm5XـN͝7w{{'C?v|p_4>˝``ky׊kmQo^xI60RDKx\qK_(S"1}ф^Ko`pnPAV GyvAk9. 
r)lQ[\yُ&J> ҅XA} ]k`O]sT\'| ,׺wڅ`8Ku $N` j0#@V [#$cepgt^x*?k)NAVٹo}ؾ `>Uÿ0ϪkuU~G>ƏVՃPVŮ>l&`~`ؑmHnևW]u /8GfIoM&dzk-2}n TemP; /mU`Ď۠ EkX#mZٷq5*.ku҅Xl۷ozƿײzu5xViUvl~`5  d +m/k]15 NP +bM*  2 |4ТzR??5>)x;Tf+lS:ǠJe+Ҕu!6k*]̵bGAu>CmοgrڏUꃿiaew; CgI6g*-Pr>XFw&zԤ|mIaQCȮe=бAaYuOv_M i 2gG=IOŽiT`k]2'aΠz;ԢeN5дb}K;ԶVUCiPH(E VW~i`)q>{`17y}ز/|t;~({MُC˗.kFjPaztY]~|,M>r/ ZwRWo3`bB`AߘAY;X ZUECMFۢcPU^NʉkʁPՏd???mwV^Н-(, 8$XlDmMLv=n,nXo`^?vZ U >M}*'a4[6 E-c%"a~4vzviҮZN" Eu~8 l *Ch?xCbC?~@<o?l?`ӪK&x^#@ ?.Fp.l\8~ +ˢ\-I_m4c[5 %5'F:f~:'x`˟:2wu{X{&h#P$v*1[`XcѶ^#l6&,_hCX4~7ܘ?35e͵? v_VkUVUׅαz`.V^~7eHit`؄|Q)J_"6PSIMh?HcN"zYuQ׻.ģ`~ZW? ҇il=аQjf1,kFlLmx{W~LVk5y9az+[/@?tB` (E; t`gRu@~-nz s{z6Yc˺c`;pO#ilT$PQ:"XcT4L>X6m&Ig `.g^1#c7=| g' ׉gج{ {Όq q:*N]e2rf]H3RX6y&U `Vl΁Nz =xgY)jȩ]STf^YI:̓W`b`ؑ&R>܍Clb~s+ 10w 3:ͨ2(* WA6\k` |Aq|^>lۋwnz~wL o6p j36(0dXc<.q8RC]`5qCn.l$,{0R Y2dCy @ pZ3ywu4R,Q[|H[.-ǁv6mҿ 6;suBJ(hk*}MS~GP̎#_H.^XlYԶ_|qmڳ:M~^{V7p3ΈJީQ:-Oq&>{כ=|ZZX)56 H)0{9w֥su|Ӡ n酗g[m>s7 oYN~…SJ哷*vQ7|tYW\}]6",P<`.6n֦Ȭ"8:ǠN\EJu҇cG혎oWޚ!tct]+gg]}6&Tcڴ . я QP(8 paw[1HU_ȭR4-s$ʣtSl׷%G4lYذˮҰ۱`1P*EU}ؕ"ԇ)lyX޺2PZֆݬ꒺l`%߄]AhP]Ɯ[ !$[ Sӵ+U?6 p elt loB6ߺ +>~2g #uk -16 btk?ln'm#֋z3尽]g`|]%Wt>Nq* g!dʢÏj Nu S12 %+SԺmCz ]`]&vެYn9s iPu||o};w-a&qH>eXyjMo$Q~e'n~7L!iL??S31TY{yfMh=?{beh  GZ+-;،+/k\""-'3)1GflE;met옮<:gxGX~2:ue}O)_Rs k7ۏ@gGyT}6U6FgѩI ,s&(M4%RS57XBiCgvX6U&ӧrp^t `5X{Lo'6Q`<6ɰh,J0Y8u 짳ډvkAup|կInQ fe}vnLԢ3*Iܰ;-{T6]콡rW/ ܦ )hdz fQ`A6ɰh,;AV`Թq퇓XgSe*sqֺǺ]m4'Ͷ=sַƆzaxn\DMΦ6a^mMe`݆6cߧ6Bg#ȹ+&\l•aXvHK?:ol8qA'󗪒f7F-YgYb7}ʹU+ʆTMtoTpv #ȹ+&\l•aXESsG<6X89} d2-k" #Uaڰ|׆ȾVm06C"@`oO=Ɂ, @lecam~7\ְpWl8N $Y}r 4 C`gP؄MMr2, /}:w6 -Yڃut* ' `OD`Iʺ{? fqPdl)w'8l 2:u<:#}$q>+E^Vllb#wjkα ,kYT\U?) ֵíRqf@3 ;p/kEz%K k@je4ialB y~fkww6ͱZ~~gX:P5.3lxzd*{^tj@l֡GR`>K 4elĵ^UKTuNUxS?i:N֕]^M^M`.3|PW^}ΟG`Xl 1nƃZW9MA `)6Z*(b"(0& , ZiFiU_k.F{QlT$`@), `@6F;YlT$`@)p*2<%¹)6 h5 3!f(0> 8rt. - QPPS=(Nl`X6FlT$D@@qo {u5l -`] 6Z* 6 h(3\яϷT.iE/&Q6 m=- Q _滢a(Р,r2e~lnhH*[wեNMBV@;@i*Y/ 96-gm-+&\yl•aX5 dΎ7em#ȹ+&\l•aX`lh ="l5&\9 `q\v\^BC4 ="l5&\9 `Xm#ȹ+&\l•aXǵa5' nlE]6` , `@6fQ`A6ɰh,ڰJa6F{D)I`6ɰh, 6 4llGtK9/:5 QZ6aC~Ghy)8 ;G(?Olw`.3|PW^3?ُ, 6 4ll#FKEXLDqmq%z{ 0w`(6Z* `Xla`,6Z* X )0s:޶nܣ(H 6 tclwFKEr'P PF''sl(6}`"! x 86 `MbRPPXLe>bRPPX>8MEpȇh 6 .- Q _&\ȷT.:86 flt[FKEBWz 4LyIl _`h6Z*@ %OX6_G'lnhH*[w)\)Slh5 &l"l•aъ ܗ":86 fltS FKE㓨lkة`qSs)6 kl[tK9#:5 QZ6a` ,R:6 4llE]6` 8 ;DQw]S6fQ`A6ɰh, 6 4llE]6` 8 ;MEpȇh 6 zD9w؄kMr2, ,6 ` GDsWM؄+'â,kÎ+Q|fuה zD9w؄kMr2, ,6 ` GDsWM؄+'â,kÎkS! klG4/:% Ql&\9 `Xmr)A^`{UOUA|P XǵaǕYQ3ꎺkhe b(N`Xl6`"aD`1Xǵaǵ 6ڋ`"! H`Xl6`"! H1f\ۖߍװG)uGa]RF}(c۽v6 jlFKEB@@`qqlhh(((e.6 h(((qm*C>D|mv1hH*0ኾ4SrHS(Sw6 m=- Q _滢a(Р,pj0&|mnhH*[wD|mvhHb(g3gm9lmvhHb(R`S7Al.hHX TB@QN<$6 - QPPSQɧl 5`] 6Z* 8ԜF`@ 6FlT$D@@el`lFKEBW W->%G4 1Q6;F(]AhPv4FR6`"! 
oQ`qzSpz)v >- Q _|.ŒbQ&v4FR6%`"aVL*2-w`qzSpz)v 4- kXOR iK`j zlvEKEB6_`K,ч%4Rp) ]4 M؄+'â, b6а zD9w؄kMr2, 6^ >`3(r pdX4`Xl6C"p  WNE`X׆T"@h$60<`3(r pdX4`Xl6C"p  WNE`X׆W^Ëz=ڧblE]6` , `@6F{D)I`6ɰh,ڰJrglG¥ozUU>?mnk6_e*`Xla`=a]W巶_9^MF[ SJt6^Qga=E3yw5^ ѮN}YK/C5}((0: spF@@HRzhglM=B@zJvW60j6 ((0,Nw6ω"B\#?XQsl PbP *Zt=lQv yrk\ P`$`qǴ~il`x6&Ѵm\P= ˂4>v r@UVSq{!og%#o\Š|reg`d=6 # ݊~j+eiTlT `OFouG6|9h>֥€X?_kܣmRz][ūAH%+zhgl*!U]TS;Ǣa[j+ >762,)_23.4J wa _ayv^av{q spFرs-kA,ł$,jc>~1x<}P1cǮaU2E t=p/{X(塏;Em-=O *@`lMo,j(U~da@kUBǂ/CJgk t ?+Շb[&M+)ڋ@5XZʻW;eׂؕ2\nVtp̬e%%;M7K$@is |~tR Ee_0M@^CVŁ5;F}xZ^f蛆KD3ކ<-~WfS"6Gvx6•QP9z8h `fl3msRգ/ z*Wt{6uXa6FL " @[XQsl ct(Yz >ۯway>`T93LKul͓G)fMLu8|sl`Tl)A@@9s؇pGŁ>el`x6S"@@5?%Çh `b8H>((((7pЋl yZkpZ+2F@@@[oJ8|60*6pu?eE@&ݭ a6н lrB[@@Q@ XUwª@ M:R`a7C;lgm݅!P0A_t '?Y>>9&+[]rBVT־cؿ?]4?JcW3ohƬYg{@l̘1M Kdռl*(چލ=t(eЧ_d^,UWS1u_9:麖Fy?je4q5&_5 v{p޺s ݰl`\m@_gF-'[}nq*~oY ܉Q4Y) '?xE#vUH~A_re+*i #5:a7 `@6:qoC&&TO@h`%(0{Ծگsѷ҄ gst;u$P!K^,UXh'k%å `>zX(=9ݩY)p-}bl>rE3gu5fq r (h60>/zr@>o|_Wi /-;ς;~9]ߘon `~ G=P;׺ۋ?K#g6k֬/Bt@34qq*'dO@P)Pϕ#*>\Tn:CW Au!u` EqrĸWxœrk'քQr {VE&gVrW/u2<^~9{DRbi]oAu/Gk;"?ֺI H ÞN;v!w l wa޼ ַ f>5ȞXom `AZ(Pu~[7a_pv'u$Y{u=<&FS`bb6b.U@,clso>G8g-j"h+R >6螀6/ZY4պ6QAm,-fX$@(, tjz0zUVČՃ]"GSmgϞ}Q#6 `g[̚xx?k4p.{*Njd0:[ClKP镶]/&ݧV\6l?z䍍-ln6Ef=s:akq^ D +GFva@.6pGO͞Vrkۄ}(i'`0,CO ,rփ&<ձ0O|529ul FO͝q+PS(ؼl79y+pZ᭷vMǝTh{;=`3V6 `y#^{[{u~"wxϻ٢S~3zukw=SwͲj_y(( * :-vߓlh `@v6`%SuG2zϞMS6LWA5PPPPPFu޴~WON.a&/k=? G3=AJsf2|PPPPPP` N­Bi=miXtLZBB+L^`*AYݢ~(n;qjoaܮAꤓ:V9PPPPPd Mĺ{X^Gi5T ([*u<:VIaewv F5A@@@@h\XEa=40w/~RFR ؘA,oyZDW=2Vb(((((zeQH[UUs= NFi@Ri>eZ:++#ۃqr* @^ vq:WA}WMQ(F+#k{:Yqm:f+ʤtK*ZG]ں!D|UNXi2_YX=>Wi.~:LU#0QPPPPW `}5ˢ|u  ,iװ|>U3UZ2{SY>v~̤Nu?Wj`rZ^~ުVki.u _QPPPPPT*U,<c?!G] ȧcS+'^VkyYe,Y>e[$Q_O!l[}?B\hGM&&E@@@@@U0@3X#ief@,i08a@Vآ&V)ӲX,[qP76.jݮ1 t@ZӪe윲n"M>`}.ezuQ=\~t8N].' ehv5u1FԕE"{ʫ@cN@[ǫ`} jϱ"޷p|o: q(((((+PUi죁:+P궚 J8+Ֆ rݏEЯ: ӖMskPPPPPPU[5VҺѵnWuޕ]4xentK;QU7]vV_X+lB\C@@@@@(PXժbl/j5*j]ҲTU*w6҄P\UUePz®nY]lvi/Fa];yY]@PPPPPLv[Yl7pXc(\ZzKmUe͇*ѪL++ge5mMTf@Դ@418!DZM[іQ xPPPPPPQ.d>뎪og0vXAct MYwU,bە߇[rVu:Oԅ״y*+}+:gj̮3YQ5u:_*CcUzv6tW:+WF:ڪ4`UnHxkn((((((*Gہa\((((((c@UW1E@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@1Pߎő>7IENDB`././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/_static/images/rpc-flow-2.svg0000664000175000017500000005775000000000000022155 0ustar00zuulzuul00000000000000 Page-1 Rounded rectangle ATM switch name: control_exchange (type: topic) Sheet.3 Sheet.4 Sheet.5 Sheet.6 Sheet.7 Sheet.8 name: control_exchange(type: topic) Sheet.9 Rectangle Rectangle.10 Rectangle.11 Rectangle.12 Rectangle.13 Rectangle.14 Rectangle.15 Sheet.17 Rectangle Rectangle.10 Rectangle.11 Rectangle.12 Rectangle.13 Rectangle.14 Rectangle.15 Sheet.25 Sheet.26 key: topic key: topic Sheet.27 key: topic.host key: topic.host Sheet.28 Rectangle Topic Consumer Topic Consumer Rectangle.30 Topic Consumer Topic Consumer Sheet.31 Sheet.32 Sheet.33 Rectangle.34 Sheet.36 Worker (e.g. compute) Worker(e.g. compute) Rectangle.57 Sheet.57 Invoker (e.g. api) Invoker(e.g. api) Rectangle.55 Topic Publisher Topic Publisher Sheet.59 Sheet.61 RabbitMQ Node RabbitMQ Node Sheet.62 Sheet.63 rpc.cast(topic) rpc.cast(topic) Sheet.64 Sheet.65 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/_static/images/rpc-rabt.png0000664000175000017500000012764400000000000021764 0ustar00zuulzuul00000000000000PNG  IHDR6sRGBgAMA a cHRMz&u0`:pQ< pHYs&? 
IDATx^ U H dWV5"AЈ#,FAAq (O"@%&D@v1 (*:3sm{9TuuuuZ6 `m`g͡޳f. @M2:qU_7M/*}șa]Fv lvs?sne +|ml*@@@J+tJ+'9_z{/l6Y\ykV]uo## `66[OwG +jk|ȏ~wXh?N76 `؀o;=5*GC`~МGl@6p#n׽2;,WÄQ~{'ˆȂ( $6y WslhВpZ׼4PP[e}͑ BAl`l`:!yo5OPPS/?w?@qs<l/M\eئ(((0 ?<8=qN:6 `aZO9~4IIW@ ,W^`yl`(m '>\uH`7؆^+@E٨a;606e4*@`t5h T89`6P9xWl;(0 GZԏ e, v՛Y7)}q5> n7M*l_a`9+ RRmVU~UWn3aC$ 0?0O<֒ t z[uUc_ٳg)SDh~\/n޸fx׻nA|tzӒoP! ,kcVHלl>9s8CKb4G2~} 6%S`zRU?5|0\{/ǕW-T O=;s_pTiӦnɝ}nݾF/iMӪu],[w-CT|) n7{xv;(; pp쬊).BS%̹7y: .M?;0o׷.ziU^:`X:,.Km{)[Y?rV4akֵr<i5cm[Qv7OSCE3 B-*8>ә?{׻5zݑ3q.~* ;X-VLLJR7*?"8#*UV{xcοR7[ەG)#_ByL dԵ@c6Y-qڧj{PX;70V;_,ɏ`Î3r{{gUW]z2&igD=p;i;g'j l-;d.++jXEXP!8'UbEfW}6.xZ몬w=׭mJMmqX~ץunVNu< U`ql1G&aǹnnpկJ}knҤI 5qq\=vP, ؂CvXA--bdw. jnVjbVGPUL+cpmuYR  leŃ_[) z[1MSY\~awGeYf˲;Cޥ`bleNED=AM9a *3ܦE|cV.Jk( ufYjuQH~`Ǎ3+IbN쏱&YsGRyGŋg}Ҵ`{RW2>V3k;v ]X-v?99} :Tɚq'z6݌.1]3zlݡc|u`%eVdj>ZĐi'RY%*Q-x 5&Y1vӧOww{纚.b֫5q^_ ZO`leRE'w= `ET~bAm]Mgkd(`֜<`yVwvpr98MdfsOf 1xc~ǝݍCҵ`ble~F+x Oj݃f*N돉MB![bOei~ ;WQFP[C蘴zÉXs[Nϱ^|U vhKll1G'G{UVi⪫9O^xQw'O 7ڸ'n<(s ,[wt L)vTW/6ߴ#slde`#`;6=ݡYg/Q+~g'>1Lid͘\r:N5`X}F: !kT,)LƤ1On`9:"_W>G?rozVF-ɌP/Z& e2/`K`XJ|r`9:1;|ww;mܣ>~5*]nʔ)L>̠;*5`X= V*li`ӟ/<3Ug?ӀleR"{]w uk;[oT:j?QY3+W Uչjl ->}f=[c,UB[ovmJˬ}E`+{p*-{?^׻|;_lTz'̙3nj41٘Zgόb6ݏz [o}ckSN9e еh|:\u^p}l? `;(,6.Ċw}wsql;C/FA.sm(Ns%8}m7|IgqK~򪼢sᇻ78MEME9zklm` usW~3~]F]V謺ݎ}@~ۧϣJYӶ=1[]S7=[kܩ?6.|n„ [vw-Xj&foOq;6J`167oϯrvS&oxct$7xjScN9M`5eMmFrUm`V&4@4UF7c) Zv`}+;kX؇~)R;cƌVD+tѨ㏻~v4ԩ`;U}6YSoQ"lYXEDZXu/#\?낫nFIYvvz޼QB;& neq'pB*zǜy{VF%iƌeٝj'm5S^me tjrqymVPv 63MS v޼y$bk^K^0%7{r"OOg}nTַ56̹`;m6ӭ.D` -F`($S`L9*F(h8c[ ._#p#}=[^ v~OQooh7 `jlYc`yc`:1]*a w˝Rȫ"XŋSO=4AN~򓟸Y/??}7v,l3K75eVsF5,U`f_u 'cRuX,XHCusjewi lW} `;wҺns>&c`?k]};<[{'kؗ^z}ݭ|`Ts}3'Ы h`ZuI/m՛^$ t!:8C"K-ԍU>[n'\VDžj)l٭2>8`v6mydE`\~y[fg!I~+sϹ:˭ڭ߀_| .׿oG?Tlg /^[1X>`+C&&%.[R6 i5تmTl6S) vnAlW_=<7+դN'reN}O?>O 7]veNYu2pUwaD>H/[bek#li"ۧ)߱ۨ60luDž^p!leRRP] lش:3ßYs=+nYmwfƬCu;KYt, b-ۿ23hM9m $}| Wy Nm'l8V)z8[A_Vmm!@Tum:^AS`k ^VIIA( 6W  `/H7[ɎH*hS#Xnn9Gs.pmove;Fg!3g"[EaՕ(o¤YMyM6q?Zլ$pGg5 Kt ׫WNr] f ~0) KD[ ԮyX}/n;5 *?Z-k6 vNuk`\lTdAIli5y+J*jWYj~&2"Vss4_IօVKwK^\1=, 뮻m?׿R=eVr9kO^ nMpsM CPմhQ"El# `6ܞ5a楃PLJ @in=]B<+1|%>9 ЅgϷ@ēN-Cy{=C_\0f/M_B~z{+b6wۚku5VWa?-x<O=b+B[v]V.9"DW44Sd3**p?cUWܢq^Bg2W LJ @(T$ai$ɭkU*7H7Vsϔ-}Qm"mtubV"i5ցtM~ j=guV4tŭɔ` YMn+GWCUa.[> <~>2T^O(E҅;9&sǿS=et> >㏻6޸>*ɚQ=s[?Y;[oeb9?dkl}݌ګnl{`+*P {>QS'cq!/86:U"/@Uweଲ p'X+S[0Zx["Ν*sΘ5ӓg$*{T!>?w?.ܘ|žs9j+XMKZGmx衇ZhsAo7~뮻57 =<&a 'b<|P  , |P,,*"yȫpb16u-6Ej<Ϫ`{{;\UwъYf̤MXE^ ^H@v{p5BEa-i 4!Ԏ]R `m`fU 2:>E*iKt 6V6`:l ꏱXK~6]ƖVuה+XE/^ ~|Aett(+{r0Uz=UDu7q x7:Fp߶ŋ&s;^O` W `Umnx̚؟ *` :k]  jl۹Õ̜VL"\7ovfEO=XEc7JgΞ:ooTR?v!2HZW20[AU(PMqi4)d6{+ykWegc ږQY~TyOOQa;Ͻa5Qǻt 'L/'|m2ՄNZSVsm~C8+ jYO:ia*1 qv<9c7W RYwfCZ\NdPk@),<~],MiPo]`# ;p!{mtXdGbTr4a2ɢieذMlS9Wdh_ZbǯVKc`N,xlNkMN_j݀#r_K"}XT0qj߾{>EO~Ҋ>mSS_"gҤRբD`;a![i ; U@e<2f=vz%`;wB=]* 8P$bkzH2v~Z3x≎ֆ6^șr7j?i}\{ CC"CzD;mǍ`;wBk -ksHY=_%u '-s„Qx=)9vV[fVM_cWۍk}rK `y jmYszʶQ0KWXθá![?Q l)`{yU8i8QXomn…|u;(^c{[K쬝@ dw*7/5`r#Ofswedҙvv tUE/ [21)iNeQ Ԡ 5hUWAsg7 `%I$v5pW_}ܔd Uex6r<@k_kMXMn7܊j4쬱D`c leźII++PXF s/kN>:k t >͓I`8yV[}ݣ_ynqۮK Ә.7[`ͮ|u׵h$T}{νpPЫsϪltӮ)5`+,ʤ`Xz,XX-GկvR,UUo׶0%["Io],Y tMf)I8K/=] leN[T,$̺`;wԲVݭ(m9`7yw(DOLn=k_Kf͚4@W+}{oT`;^Aۿ׮W63y1leRRP Mg.M4+1X%>9 ;V,+ Or8'p,ĚTdM%pdg@7cn7Xil.x?=] le[T fN*lw[E;j7zwƐjǍ1ZvK-x*r$w)<_jml`;^۟׭BB\V,(uwՌlG7g;;nܹcV /LG`yo}[nɜ4+7X+Cm c۹mn`;v"-U4~DԤdj>7'$pZk3eK`;wvNW%vm-X{'ft\ph"Z6Y+V3+ ;)鎬oiYPl+(-> 188a t!.A,q\7v檫Nzi"i5XկKerAQ㫪=Xhتr9Ze7qd'$3tI-UwauZؑGcً5Q {cʘ&nUWmՕV_ߊ |$: 56'6rCQY>%$O4VV) vhL(VPigItؽ op}\uX'Fd78]tEe`L"~qhj:85D`ÆMS2W =cߏ؟#Hߒ8K*|yd_jd:xmS޵^yJl}vkOYkÎt;sT~vMʫ2/s7ol]tWˮ,vjUF: XoD`r۩ 9}3>}o+2Zb@v$.X͓w `e]뷕29} 
g$%u+-Z6NƻǶh~[;A26^1lu~xH>Ul`{%0mFAvsG'h)ik5Ubu5r4nUXM"U6";`]GcWMͯr!SLmRkB*1s`#d+ 2Iw @`#{(9 +2 믿; Af!V qvZoѕMlm%dX(Nթ (^u 8`#mmh|cd2ҵ;쮺*wwm7h F dW9 B;I6 f!nSluG: 6YBHߧJ+`TjQ`zR Ol1GȏqZf5 J@tƌ[.\ `uxd<컦MsZՎ1]'.s{+)|9W,[`*lYD'زr PStͽz8 hiIG}Úk;oC{x(ܦ=ߴ_-}0w9 V|P l1G?EW};`ۯTޱo 9s渝vi8#g4h[ [`XؽV#U"nK[g98mjyMԏ>y{ح/(UHybN??hp\k`lÐ XUY LqX1`|O `͚(Pll1GD ~H!?;T>3Nr;Zp'V~Mn}l FWl1Gq) ~H!?[ZAͿ1 - {`X} F: :IC,;s6>c`#1[Ы6l)6a)b`8ϝ*`|k st 4 ,[7tU!vybNiגetۀϭV]uh~\ٳg)SDh~\9s8(C~ևoa}79_6@ϰ>,=y;tz7+|~l>/n5,|-d_XIipkߋj8/)}dP5U鸓m`#//;/֛aIyMkO'Ҵc]r \V _&~|[IFj&OѣjoWn>(ȫ>6mJpAyrͿ,]#dU`bQ|97_Vo 4A6*z0[AeͺU]W)~]X6a lPOMj'iHu,蔵Ɂs`b O'GMpFtm5P68Zt<+82X*x`˚`b? 4P.u8ݾVq۷"íϴOIHU:VyE4G#G-[y/^1]vn{Y)+OWS;Vm QI+-lWYUv>vLzk;ZK?_Otva'kb?Eg.Yc[D`쟴dbКZV=#[Ѥg ς߯}%P:QCX(=0Pҏ`P "zog0npp]ʷvXUiPC4hdD*G49(lN_ڦ:^ Ͽ F*OUP2iH~meZyt W=>N@ nZIwQ9a8.w\Q[!y sK%Ex6V`? u4bWI zBPAy  6i~\( X3PkAV,2@+S[C- :mR@hD0صs#jsxja~0:c|K!эwr:`l^ygKϠg2np2`*3fu[[:ҶOoվ:m}HPY hVd~-|aTYHgvQ, nkiE#,4xjږuJ{=tGdW럧{ܐ l|Kڌ=#%ې%늬m>K[+6*|Y4eX%ް~àzL`# @AjXb(`+ FCS՛_!kcpp986F:VLxu`XĤY_ y/ד={e{>c~$4`{fÉV+ga(?ígz 0+$+Ut Ml^vSV6`BBdaN7-*:"k!6@ KkgҔ;`XϺI+yN3%|akz{ļ\ {v aOzf+IWlwš +$ 4UD`j61gq YD2|V6`}nNcl`Nb $ۛ_ [jpx]C 6ζ]u|XvyQN`XM"ÅbVeY6[k޷AoޱlWM GRIkRߺv:և:+;i?En>?ipipo]Ah8Em5+Xp[ڟ!l= cu'/3u]®u{ZP`X>Ll+Rƴb5P=*ǯ#X/+wl;H#YƖ j\3k̨a6ɢWGtL(ZSO`-ZW4sÞ#ip8 ~xU]۽W]gY$y؏5)0+GOCN'0vy"uONQyQVy`XnQ_=,gtZ.U /3VZ>Z^VmɵɝT oT,NwyJlr$o3VYgJɸWSJ{,k9iCГp6 l@S|_T蘴zGacTn/fs9< XPll3.аkw\ `H\X~{+*rQ(Pll~yP=6P ,&cH]/As tG6RW-(tXte`8j`#'N85 Al, w8 Z stXte`XܽQ(pk `x,[xE"w5-.ot: `H\6`;D`kQ7K˔!I=Ui~e΁ Yvʔ)={+r̜9s1'Og l`8zFeNeR tU"ِC\/&E8~D^ iuN0QRtZjWEUE=9ԑÎ,66H!?$ 嘥; } 2l`mT7XmJM}5}8H(/6F: ITWn>(ȫf;r9:n oYAc8Ӻ> .{Mw׎ #H;aY-뗫\EdlЛl,[f$v!օneq(FH &k M ®Ƃ:ZY֝V 4Vx[F}ǛZ9Y#eO[d,mЪUVZ0kqEA63`ill,[ زq\Ԭ'G6`& J+8 ZC8A^15߰I|NsڧrԾ"MC.ayc\^6 46U`ز:59м)~hl#~$3tvFj{Yc] os6\i݊<~1p'`@7mmsJX vU^ G(;YKdM ljuMV~?qX"B^x/**u;ܺF@/+B"3 4C6~ l)i w'B6cʬx[+.6n291`7u $[%gBDS' t? *M.OzjNdT5VU?n&vjkVTl4,VT5d`\Jgb@N ݛuou卣󩢽l@+ţë`IlI4P5NΕE! .ѱHԍ_`.ѓ6ɱ>i G٘p4) ngif(.~@r-6 B `+"Pnl h׭7ʁLX|E@TvΤOmiW_iݪcl@VዯN)L^%;nۋ.BH_ U-ǕvM `mWjZ>) ڂo+Seile[Tӓ2f):TeF!YN wQ>h͈mk3iy=2Һ[ `pDǪ fZL$Iu=ء<Ir'`y톶Ql'LG@`mo%HԒ"@ ]4ov@=M~WkVJ4x`# kgҶ-$:C[l^.>a]AZltӏfu !`MXn:.l]~>(*9+,i@)F^N^Aol{6`$U&R5ӘkTy!Ԫ,"! ӺEt!ni_u#lxwMOl5`L8Al:HvԆp^[)-kc`K4c+еq9Z+h7p__$bΌNӮv$NC~66_#rԧ[Tllus4u#a4j]ӖQ9f!ƂFVݟXkcy?߀7mc.vmȯ_ߦ4}flÐ ׈)֧55U)$ 4͙=dY熓9 e%`Fk>#`}W6/ǏZ7mU]H[(m[ɖ*J[c6OC~66_#rԧ[Tl,PH}\m \fat>`#`aw^9l)#lt>`#`{4Nh 49n*+o{lÐ ׈)֧55U)$<z؝I:mtMLܜ(Pl-2SI j8u:ԅa@ltȆ@ۀ 4A6*8Ms&i6 `u0 K(rkȫ ((R `M6a  l]/Z(0}$!F,dӜIڃMb@6*@*059CΌlBG*((R `M MB`kjPJXɦ9:mҫ,/^Z 8u:ԅa@l!BUkϙ,dӜIڃMb@6CQCp,b"uaofs'ݰ?/[11)iNeQ Ԡ[TU+`Fly Zf &:SϽ96 VY(jPAd@`DN{l`IX7}}ǀ{6`mV[͝u@lC ̳`+RKiA ` :uƀf}ln„ yf[w ,[SV&%ե[$W4v?~(n /tO?t/2te|llU`+*P`zRr<_ʔD7zz瘣}tך̶ s9gMrcjw/JleRRP HȝT @iL6RJ"q t4p]{ٳݯ~ctx0g`5"G} iMM)F  pݰmēNu"&Mr'p[x{g;J*GYٛm<\_g6akDح\Q_ *F}y[{:r8G|IsU}QwGeYf=zPlt>P +kv)D[lm`΅8EB-**vÍ6v_z% [DOlÐ ׈)֧55U)$6I^iWw?яo~FuQc֛-,d;Y6akD`Ӛ*R&o01c[c5ܧ?8/B#OS}LÎ`|l F: |Q[ƍIC7>ll; `կo}l9soƤ+M2śyc|l9[`#llFO6 :h\*i6RL4606ϟ﮽Z]vzw]c7M6g}C~IĝP';MWtwqi5uZ/*_>O;-d#C%t` , `M* @CmG! 
wl-[Z(vrk @;sOk\_zw'Ig1cymlQϡ LLX%ih4zӒ +6ު%`[ DJJ|@+p/bcfOfmƌ=s_pO=" l 硆C47ޘX s*4-(pK;=(CtqxEi2wb?4&{cNf|l*t|}?7pёUIJͳn"lX.lD6ޱ |( ~o[y駟v~_;nحl뮽aؑh,L^~}Q ގQW2qg>N;GmW_wm]|W{&Nt$3s}ZN1{'>~aO}gqƤܽ3>v>2>6خ P, F,M9ƓY};i k*_9FtrT4Sٳg 蠌8;F|E]Zw]7UjM>hv[m?5&m;(.3~| y|,$䕶l lwZu~ŋ]GpYS4.r&YuUT4 .z?>V"wt˻ɽk =SꫯvSNuz׻C=P:~GvHV;`+v,**`+2,[ǝVqlc /gK r!~ss믴;}֨b'p/g_}~u _Q`+s,&&%U6  Vfm `X;:8`تfʋ)7n\O>>^{mw/=M_ֵ`*~cq&'`تl]9, qU\Dl E}qf:o~MI `z{oK'?qYkz饗vvR;,[kXY1RSf몎a%`d(/ޫ[1n‹^L~2\Bg.>>3WkI[p+vש]{uB\{H= CJq2`L9]ؙO8'`)ꋻ7) `V\ѽA;+[nqN,>swzp I'?8u:)*pڴwq馛Xo*V*pH)NO,{vuebcјK AS槁ne_J V[̚,[5(Q^o5Oet4 MWt+~'I&KT}#i[`߸n̙nM6q]vJ">h ^mk{le .IJs'Jjzu!@j`y;nSUleRsX6|w/5],UwMWY=72 60)b>n-,/~-Zk/{x#8WM$@O.֋vr`Rl8XsVyE1.rL/=8J9,t}qrNY+뷢ߨ-;lS>{`Ѕ.nY"28?':֌PoTȪ.ĭY#_A-srX6|w/5]<n<}~޿+#Y(q va7=LJjVݙX-C2 8zKS 6) %t ,H5ձ0x˜]v؆i. 3,'4-(`m:xѾ{9O ,hu!u`F (UW]8W?]q-{'ƺn FtNl.i`32+Oízr*rh CULe)جR@f7 <`MQ3p0o|l-(ƍ`^KM)`5)f Vz$عs3L;n緿=u}nu`0g#pHؕ7 2pjZoK`ZM0B2I]F'߬ 1z7< {%jXl9=,t}qrN?ywH2KևDcN_=䓭O?{gZR9'4͙3=i6s܍78fX\`_6aklͣ#Zq0 ɴYp8X xװkf&y`Eohl&q"t3l?^jN{v_ߚXi˱&waMdۓ1&-# #-gle^llЧ3l8!eD``t7P{BϢ~x?*+.kڊXX6&Z&6`ئ틻t`~a{>k-raӦ^5^֒'MrIO؞wyg4V-[\IIVI Z>Pz?nr0Yc`>zl2#,"D`0~H,$<5}#ہ7k lթm9З-sxX6|w/5]NVW\]vYO}j pq ~I#YEf6A`[_ذ1qDZMlaهz9 ha%~[U7a.'` +Dz8`^Ծ9^┺qΝ$ԇպ'Nt>hk 췓hwXDb'IVC}_ó҂SK[gtvfz+aoLlHX-B6Ӱe8iw.V 4Mn)) l7`2P85Tfeƻr$H%Q`9(>iw-VYŽAIÇzgZZU@)͟?5& [ܦHl~o̬>Ԇ&cubz\j~ڞIּN.~ «~bn:6`*2< `~~ǝ݉'n[4VՒnXmlfkيZ?(YVK~cJ VӒ`k֩UCMtw'nrZs{y_waG?] `oJS>4ϺϥkZ%C~Uy~O6Ρ`تfʋtXf7pH]{üB0U :]1%0{^2+{&\"m7&q,=+UVC^U/m[>Rʯn iG6ʰXԍ&]1yP:nܳ#X`GIZ7rt5X5q/Χr|WEфjtWEJ+qI6&FbRAoL7<[}o[}׊pg?s>,r8ց-]K9]R˛&qzܸ/<<4OM q6 v\w(Hs/~1;mD@6&5`w˾uXa7« VuGS Rۥ:&qzvSwuRHzSz `E{#0U:.oʸq9o4Oʃ?qn) `ӢGZFG>æk]آ%:ޫ {z_k[dzRkXuSJZzdv+ӂb #9C/U γJ}مXF"y!#1Cq_(wm֛˧aWdڮr|hqͯ'{cfEfkKu]:Nos¶uƻUVHlSmQi[ITEXLmwv%pjzTi=dօ1jIk *-<@n`>\XEmk3so6ƸdOݍ̸}c.ȂLIm3hQ(4pRmKoz_A)Oگm*˺Lox87Oћ9ZO6`~tisn'u;ݷѱ[f7!I X&~reG^4idccNf~馛ZyiL qN@Gs{nPV{`+3 `ȦE}ƈɓkC l@LFi@6?<FoVWo Exa4эrS(-r<[كTAl ygE`oI"I8}pK u$/#cmR3fMWܵ5b3]hIlyv2zD=1}?To}^.?2~'tNaO_6oGE`J,9]Sgz;nO{olZ" aY7rCݏpYC,r[=SMCSVcYSuq63l`/`/V֟+t;o-Y";3ۂʴفoF7yd⣒ΛI8{.3&`zK7ÝA|ߐȾ!K}!~ HYP^i=?TY~Pcɩ"~}ʬ_ 1VUFaƣrt!. v34B\:9)\ q6/F;;v…(jwOtw,gq[1 RAM6iɞ6Hf2`XPش^El[;3Ap+36;qT'>ֱlYmvolҥP0z7#3ó7 ~;Xzԃp҅8?K6Ζf'ƍ7؝}٭%o4V([C}sђ7'tR+L_WUw}wt`|Xc~#1p' {m+&`m VVNhe%w%.N/oT\9l ^ܽ2,:unWٵ^{kllt9,_j jD5~XM&U&tMNK=sk']> %N4u5K<l`rVXK/= o~[{e7%@֢GS[nT`^S~HVl}pK.QO,ꖨW|s7wHq6NIcx^d|Ig}{ĉ< bOO"h9\sM `uI, vhS+,\ y8l>qNsm7d>ڱEMd<,7pC?N&)BmKuX& ui9, x+igV^yd79s渻;*mF; b2:j ؗmر8ȔlwZuqNmny]ҫ_|VRIز[VkMF_5/6=f-royt!Nho4xtșǸWo}[_ >jyE_Їw1`$]|NcvJ+ ;۔΃q{f9 ==06]DU^Q,cqI6=Fci~N:f|2:Z'- `$@˻7߼5[+lE`e}ahE~HV,[e%K;:ph?v6pWu_(0&SOM½.?|%uh={_.-{Mwͪ9Cm3[)ܯ!jiy|_GUuնSկo"r&ʗ6VTgǨ qEf:>4r( ӜYZۧ6% ScjOUKlFtWtsUW~~5]yDQFmC~Jv$.d&V(GnyEbɌSsp4_?lhہeFhU":΢VZykt8!Z?1l-Fw gs[׿͚5=tzʶNCatUU2XC&)F^ cQİk ч\ߏpg]8eV ~ 5^U6i9 `CX`Z `k񱂬^z4"?aD\K.QF: |Ql}ZSSE B_H~wa`$Er8)F+!Zqi6-fk1g4]3>vpIpt>`#`9$N6Iuni3a*hL^EA5f.ayy2-Ӵ(.c`p^Y6^b' 0g`5"G} iMM)F s~et6yVPiqx3Ӵ樖Zڤ[Mڤr-8!ZdSN x`#llFO>"H!x'G **@󣗊k"Ӣayiδe(!XDmipE:~[M(6d?B/lz`#llFO>"H!ꝡ( ƔY6=Yup\o ȷ6akD`Ӛ*Rwn Hn7-ӺZg-Yx67v Ԛ9ydoәÓ<@#`qhDH&v.TF˱`ͱzmtȆ@۫Z6:J8 `@ltȆ@N@]o-yf9R8\l6a  l]/Z(0}$!Fl)zc6,`qP` աygF6O!@`Hr=li6ڤ21]^[l5)$4ՠ@ 8E=c3؀&tK9K+ڀfVמ;`*( KE8LNlh (\S(Џ l7tY9'=1"g>x4lEw6P-4tn ou߰m5StQzZ}V-irl[VVvֹ 6&OpSOZ(PlygcJ~kM}䈙NTJߏjm<_tOu18c i_nx=_<[ePV}nCqY'( a~+bM:ԁ((C]`A_l>0pɄ!|k#AS&v "'-qgSg l6``UvL+ׇTw+׏>}PC}7}LCrG@@:`scEqMr6 i|} (PH\dX[N!b[uۀ͟WoxQ}#0ni.lA[FGyնp=oEd:16˲AͻN?WϟY՞Prom<0iܔll#e7!0jD2bҩIQ)P#COt},šVX+W7}6?ڦK琶LhI;ElZj\f"A6le.Š$[=-O3`ºt~! 
`Nʀm)h <i TujeZ;غ%(`uܰll@{3ݚ.ƊZd6 kҘ6ʟPj@ d=rC~\jw{ .}PaXArVt5m x/%o ɹa@6 Uw_n̶̎`ֶ M` (U/ T'-H ,, bqY&C CtY|my@,aFb}F5! B-[~ V0ya+606tU_?mFufaP*V+|zMoZln*P`}6Enu!#~u, :v `6 T )`2U %g4_>-7}60‚߳X34lYiՅXMi݊n.[AǨm&oC}=mk5jYA'̆[c]~O4-xdG Hj{-F-c Hֈ&>doUB1og-].e`qb؆KfZ& 3jGJG:^ـ` ix)]3T[liUyCUg@RSX`닗aD7,ۢ*dQ``Aæd뭿{hImkjڼJkJײ68v枚! +*P2P &>p#41=6;H-E B˦iMݨKtWYe~#4 PPP`KU{CF\OlVҏTۭ g Vٴq~7Y*{Ë"Cueu,] ڡV鐥v.0M8a@@@>P`Vx,SĹ&//6`jpi[ x!*Ϗ1MꯒU>BiWhuvn~bMOU;i, :oCDZ;\x[inG6MD@@Xy-%gaۀ`,:u5aTpçAOI58w֧V( / 4`UO[emh9B܏]4?Sr-5<@@@>P`+_{9Ѕ.60P6`p.^| !ASDg9 D6@LJ0kX#G[>4PPP Q`OD?5 >XF- `|lu7v5iZ]g+oo(FobT<@SS^5Ŷև0{_/r606ۅXiMռl^43沠 f}ȓne0>9g@T@k( TfWGpE }Q08XǟuHvi,3I kc` f/mcHjXwf^Ӵ6뱵V2ӗN;í &r"{s)r*e?~g;/ 9^6`6^˰?q 8#uXeڌ!̽;IFƝZYUi+˖Q9~j[Vȟ9 7Mp~n̳c4 BZoV)U ]rSԎ)zmǥuoTKqӶ17/ߩ+U:!h"jKɜ}Lij1Im֡2~mWڕ!VYvNa{*29US-p'N? r+(lh.SdVIfiхcWoD72}>_# SeH!nAl(jYfc3 O\=NtY3w^=w+ 0<:Y-8HC0{ .aRӇGƬɐ"O+pou -h_OhYT+ڧn[(,˩`؀u.xRETc-zVw; rZ?% $m8RowX/P9w+Rc.pS7lzE4¥tK5R}p9VN.3yJx> 7[I1[5z5vb],Xf]3M>h_v3}ԆAGui^3\_Lߠڱu3ѥEvNV[TUF*ol׾ct(6 `ՋqϺ[I}P 4m(`?g*_umJ ;CȠiӼ|LشYOSQi|v-xFiŰLJݼ12Lu|,ݝZꪓO^8ԉS `6- Zmⳗ6u0teUӱ]sgi-YŏԆ+X7c7s줭R:6 X4+CLZЀޒ0(-bc-'rqPlE^ߺ~k~yٜшaSY=CùflioO+#wYuL6l$*҂tlo˴'v2DXq&`ONg"f~7F۲?Pzn[qi{t3.gzp̱lh ,\H<҆]aj³6tMHC?SVC woMM+cV:Z]3ea\GY^3lUŶ~ւd8v /4&aպ+s- m &Zflk3u:`6o[uҤgVC O&ZD:N?3p8C擫>}:u!qVIM6Z/BP A?7ʪ1MOVT9G|rmɋ"8QN6 `W<ۭj/N8=qӜaжav0d+)Y-Xծ/oijpgeZy~WN_ZS ֵڇh) @]G<vf!- ^"MhԺ b,}8VaSW^y{Xc?~șq, `6PO8//_p4yp{g跇˾KP>oj@kvk3 P~Lruc#i5?3\/GF̎umq *c4՝#\1vۑꍌ9)Iq 9 ows?snvl6`=sv:Ə_)<Mf};_qr-{9@@@@@X>)GEn_Tt2kФE?lB:h=BW/5츲8+K[t~e{ɋ(((((0 &YIQBXK?. ԥAky;gvv_yŜ{<l r< @KXA4m 1S'vEt v1GR"WTo)Xдv|1W<(((((Sذq^l0TENm't,>Eu!NXt&1݆4PPPPP Fvu~Eo"vݍCTF'-ke+Ϗ8eGcifg2&im^<4λiۂE۵ʻ1L@@@@@V `Ɍ|P ZwP6*Auv]~V7+ kQSՍZ2mjV9k;k_uUNkw]fZ~Ktژ`ٕ_Kyɩ X+ XZɴҢMб>M^P~g &X #ifAFuOx]Z4S~0rx6TyO(l 1~lY^]GficZ鬺 ,× iv⃯-¬sL%;@@@@@h)``֭VpQfЬm*fz> U )+j],=L7^K:㳺 !녅I>67k4]eQPPPP\?2h]j53P6oW+ˠ&EL>٘6mتtg>6"t6i^0OLd9\Hk ۖG@@@@T @;5 R$UaFqCP[+*X#UldNֵ8'eq-pM!=9]@@@@@_]OТ(((((}mσr]G6 _ce{U10N+W/j† @* pM8v%#x ۮV_nj. YPcF ڭYݙ^aLC;6jjf9_BMUF Ĵ!.Tڷ3ߞAu>mZ:ԝ(((((mH*smR%e߾ U *Q!ژU}Y3qaQW>JjYGtvܬˡsY9w V~pib6L.Ŵ!.o߀6\Z&,˺lG@@@@P@PNdvx3yT`Ӡ %BX lڸ߬(i AݱX@^0&FK6(iFѭ6X=ۆłoi܇UE#n 4Am18оp,mI!LBuv3e bISUWeHkC<]p[xiggڤ`^;cڐVFڶMɿo:<6Mա (((((PR0;ƕX Ni|Ӝ Ӗ`U+tlܱ̃S]> 9 PPPPU!OF% %`Mjp)pI2 ` " D?cam 1]v.YjB _b!ͮm>Q~Ft=qj?XV3 qv@@@@l_|Jn3kG- -QP^nYT"t?Vg8mW.;,2=m[x}ٙv/\{}YHp;_}^b#u#,_ٔl(C|V&(ە& duLZT-pF[ՙ,_EwϬxڢ㲎-]^Ӯm;߼6䝃owl[ Ŷ|((((( P mңe3}hNA4PPPPPPP4T~7>wFꖜ PPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP KEmWIENDB`././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/_static/images/rpc-rabt.svg0000664000175000017500000010200700000000000021761 0ustar00zuulzuul00000000000000 Page-1 Rounded rectangle ATM switch name: control_exchange (type: topic) Sheet.3 Sheet.4 Sheet.5 Sheet.6 Sheet.7 Sheet.8 name: control_exchange(type: topic) Sheet.17 Rectangle Rectangle.10 Rectangle.11 Rectangle.12 Rectangle.13 Rectangle.14 Rectangle.15 Sheet.9 Rectangle Rectangle.10 Rectangle.11 Rectangle.12 Rectangle.13 Rectangle.14 Rectangle.15 Sheet.25 Sheet.27 key: topic key: topic Sheet.28 key: topic.host key: topic.host Sheet.26 Rectangle Topic Consumer Topic Consumer Rectangle.30 Topic Consumer Topic Consumer Sheet.31 Sheet.32 Sheet.33 Rectangle.34 Rectangle.35 Direct Publisher DirectPublisher Sheet.36 Worker (e.g. compute) Worker(e.g. 
compute) ATM switch.37 name: msg_id (type: direct) Sheet.38 Sheet.39 Sheet.40 Sheet.41 Sheet.42 Sheet.43 name: msg_id(type: direct) Sheet.44 Rectangle Rectangle.10 Rectangle.11 Rectangle.12 Rectangle.13 Rectangle.14 Rectangle.15 Sheet.52 key: msg_id key: msg_id Sheet.53 Sheet.54 Rectangle.57 Rectangle.58 Direct Consumer DirectConsumer Sheet.59 Invoker (e.g. api) Invoker(e.g. api) Rectangle.55 Topic Publisher Topic Publisher Sheet.56 Sheet.60 Sheet.62 RabbitMQ Node (single virtual host context) RabbitMQ Node(single virtual host context) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/_static/images/rpc-state.png0000664000175000017500000011321700000000000022143 0ustar00zuulzuul00000000000000PNG  IHDR `JmsRGBgAMA a cHRMz&u0`:pQ< pHYsodIDATx^E]`vDfB[ IAN$$\qAeq]Yv{[շns={zoΩsРAbۦѶ 9snWڟ%S Г!K6nLcB_|} /ұ/9ߏ/5)հ9?>T>?D~ۧYSO @>i}mӇGA!m ۶lm+mnlMClfĶ16ц~F n}]ښ6,Žn&vAEF>li&>ͬ[K[ne }?|L}(sk39?t)Fxy?5l94zXeƴMF[ShtmoM?]3&QF(׍Y=w75#oEWgzq׮kk3<bX~wĘ>V+v5v#)cߎ]~յvوkvW{vW+lg]z%]xrve.(οȱ.Dٹ);*;sΣNy|tg*;ؓNcN(4dh{CC [ـ!{*?x07h(ͱ>PgAzۍH Pֵҫ?uWY}u֛g$X.=]؍`%*kݮZDtTֲMjQҞSyyλAW[ԝu҃5ӓӫ 6Di?knko=iX{>VB/tǾRàvrv0C>tңÏ>^ݢyvv '+;S{ҩܾc'5p{⩧ٞIɧ)[3)ʞy9f-*5Xfs9~.Ɩ?ٚu ΠV)+[3aZ;ͱ l]ʷ>6!l&1ߔ3Glme]ک)isOEソn.Zkt+\ά^N /q@R4T-`{ X*0^wF]u3l,0z,8`nY6 q46F>&00GmHk +lVA9W1.`R%W_ye^@ K'=L:3踓j"=/=8@ `MW ^w 뀕/ll~>₋ o" E m۩gLCV,̈́l@pIL Ӱ?1^d b`A[3vi f۵9×MW5/ tx1x+*@JZxF`]{3p={z1 \Bl{Сt^ ߃x yv G27ߢv>SlO+s [I_q% QyhGծϷk' 淚lٶNr#]o8~]}̏iʶmqh=Z5V'}ッ 8CswPC{-\7$V32] `U@s <6}k9~R~'Z~ʣ $6;ȴ18Fz;[|twn]÷f nM&F0dzǀM6@:Y_v`}5]^3ep籷{.Cs3q˰==ۣr'/eؚb'BV{/6'뇬b kzړ5AkBV{ jS ,ɺ^dK2,@M{a- 7j.gUQു5aukl5p;<^Xt=]Cw@.]7 .`k][ׁ [m^oe֬/TVgD^|޼Neﲭf{;i Z I6muwMϜI}wmx.Z˶lﲽŶ%t^C<W^}fd`;c `OkM@ [GkMk\V+yw;﹏~1{ࡌ[v}RÏ w?^Ox{)nRvݞƃ @X(dNcn875B8o_zT<~C{ O'qs<[!cj"dgÓY+T Ț&dŚ.=(ȶC-ȓ(u@ 0nlüX&tnkCtyvY{ 5ˀ^!f xݟY0*/k E}hcK|豥[lڒe1mK,[qr λtڋiOdٲ'd?DvM<8Ц =ԣiջlouzm5mw mw4}Ǯ > <M8J~eςl6u}]}Xz-c -c ʂm:*4T?<$֭bxa7r^*3w}>L?=8~=$?q2=2qJ0l{x$Jb=2#pax~<`zp/lӃsf0hO-| 9ƜE,.sp=l5lp37cY.dMXnXXd="dֆDzs6l[sf1?f,ȳ5iGk ޸ڰyۤsIA fw3$e ݘ]g^+^t t1\3&)rZtB .p۶;Л|?%pl&MAN${M9 \Ņ)i^-|` Țs6dMldh1?Bքo۳pM64at7jqUetܤsnfN3UxܨTAfhYCrETвb.7h>!e հ}i*ze&ΊQdע|MmҔ|-eiNVDvO|Ӧ@^;Sg /;ߏ_zmk+V]w1ue6;l,/ul]D]N:=xYg:흑[lo6zmյKȷTLҾ=`P2vT mпW^yƎufx&W/&3*xQFbu-,{$ٜf=1r1jY1m6C6>9 h),>{<~,kx;gϟ ~pD^U>۝X<2V-c2V>q̉l0Ze06zu1FdBldl`moVV\iz [ӫ5C札 %yUYBDVM CZlyA lo7B*żnkjOW /eXl*ؾlqO=۱qCCDm"ClyI' # sAϰ7x5<2=‹t!{{ӳ9V\y>|pe1b8Ui5W;޵u|{-]Y{չ=k>V߾uyk#Σl/t"lޚ1Օ=5WkZju\رd¶nC' Є,B],XX"ڳEwa ߣm>BB[ѱ|gT]_@} R{mle(tc"T Of" yn;45wER7|:ldZ17 U Y6+u؅ 0ȚaQa m H6(t5ʞ. [Y <ݠ&t5p|oXhٞ5P[^eضiA-TN}̓imX s/3oE0`9SO?CaVݻ}klT0_~Q1]{LCO]tgؖ=Yg Ul]r֥g;lﱽ ZkN^jeg*{v+I/B۳lO1:{apl5hYh1g. 
ڛwyG>O/]2{ D&sinKQ,2uoKQkLan wscs͞ac7q7xhg7<^\| »=a{1ǫQTs&kX2b9 jV-fM/Vd]bKhhL}miA6 -PBO ,g؞g{%n~sc9_ag[nm %GyQEv4_{6mr+ijMͳY e-kz;W=.V-Re?+ m{M3+H?\AvMsyknʏ{V92]~:Y"sv@lx71¦uCM%3h/e::m E{c:#ɼp *e=2:osEH^/>`coO66ŚmȚՐfMzgyۈ Zu Z:6XosLf[{mV 9)lu)ŭ`V!*l/mB̼p~]&3W.csXr ]rIYD6CIkS핗~*5`Dx2{P%j>Aאg|[LGvtỐAK$UA7?lŋ@iVz[+zmBV4mی-gK׳-ٜjNK{7ؖ=>7ۘTiXG/rg؞b[ƶw-UD{9Zhj4m"ۄ驫/Ailuv$, Ѕ>ز bjN,¹$'_2q;v=4/St΃ Ychٞ Ai)/Y%ʳ~f؞})[ʞ2Y{ ~HׄZF_23QiTr1 öm;ݓ#M{ll3O+: 4dE+6ۅQ r+]H 駟 s0Y!vMh{zh|bX1ì,l3z^nAKe}[Slٞg{0'wN8/Yؖ-\ػ%݊.iZ6g c =} [^yAk5zBvj0$gu@810C{Y:YfٓF:6|Yc1 ߇i8Cw Zt!{8=]{ҹsw=`˞xȁ-cH/a(/yA+D=Ʃigxι3i԰?mϡ{?^{vֽ|tjB&^Oןv'Mvfo>s)O\,zLa%=AŶ-zK!=Jy{{{:i*S5]0}wЄh4m ی:Ah.|ElKg{rpzvp{znH{zqH;p\~9&vt xq/0=-g{fHzmh "w9lfHSwHS&={Õ4h[ Z@@V-?(*HM7:l7\s<9dIcp:!}q'!cǟOLȿOƚf϶8'mv&ٛՇ`VWo϶p^h gAϲ=ۋ>q>}t/c{mkOϟ϶m1w}hLlSٖaGh[:CyZg1ӣ^-Gye 3y3psryr^χ-{R~_x ~:ۇ}x\e4 c˿܇<"Ǘ7<,* xj~x<}Jә|L2lG!| j!` 4 Nɏ Ex#';٣ݽ`f'|6mJVے*gwkpgóN^KNqQ|GP@ ~\@Ffb52rќlUݍS-P@X!kjj!i.~A Z X_ ڪm]mu@[Bo^5W-AUAs|ўϵ3Q \ˀ-l'M Loئw#Eiǜ|wNŽ)X#]N<:vi񚛉4g=iji|z ε~a]ď;v;2fm=2FvhF&?HJ3f6*v0ZؑCi)lO3=fp± V۴}[ ӏÖ;DSO5-f[t0Z6=t ;jz[iX]*[s~lǦk" lu4v<<@v*%`oOx鰱.5z02sJ:QI k/|2sa2NYg^4@׆nzAۀ] r `7*հl"${q.raruC̺PVV*-U:|5QU tJt<^x | @rb3xwi S6õs.<h_,m/Hed: /Wz c#"~ܵ0]cQ&MM#3'ܓylOҢSؖ=~ҳlϟy0p9˰sW\{om8UJE,gJٞa{Ciibz8 ?kJ9Amc<=]XX3q2onCW#c ]yNY}O I='t&@ a:@f8 Uyz 醂g!j{Y億ϺO/rW'z1ەHuX;' Ggw`F*3`O*:U{xVm2:T;=\G5؝Fw@v4exrhЈ-?ziU*$~BIBXUkyqY1B!9!#UEC8 #Z\$P\X.ٚ>6Ɓ ɅBZBY■lH6aq;[Tls]'5ݠ4f!{P|nV!"y-tY]3@ι+3`<7ܵ||+a$%*:c +y?uTvYv1KE]:vU7rͬX@7TfҢNEL"-DzɋN}'֓腋O='˗H.;^eo\~i[|?ˮ5~+NWV^Οɉv-w?uI vi e.ͼU]T70Eh5l*csձ.ȃ\w57I[*k %߮` bEnb١i:«d=w3M9gq$؞z#o9x;xOA9gU^'oQɾe@ {5[zyWx SPrVϻLwLOf*oN96cKC=ƞ;FO.؀\j[J_˫٫ŖdR9Z\uQ  -W{ _a$VQb{@kʚ{eò=%ʢ̼l[%к^NkBH%lmjO׮kg V&c\8->vC<ݨUne U +JM?^]bG<4~);/[ {-Zߖ>\ڞ⿟i0ǖ,{1\AjǞx֞ۓ|ߵǟ\ΎQ=ŏSמ[n߽mz%zszл3{hl6lf׶mmdzl.?`[l]w@kg6w-6ƍQ J [f^Nla(hN6h5i6'JjrJw;LGHnu(^V>M%}So5 Y~R 'HgoN--mQZQ[Z1Z1].%t\Wnmm]@'3 xz ǟq\tlvtL׶t4QlGv*#:-ճ=bZ \4nh5tV6(rl1.'y[7[0`=\qaS6R.: Hȣy0ާK 0zD7j@esK8ͼɘC^W/?2ICv\HQwVqޮ^,m=6r伮 [7 t3j tc"tѷZ-%0kF T\^Qz_0$9Syiʰ`0LJ! #r٪l' {mCm2sh[amiH1k*%1$ iAVLz&hMFC5#-Wy7ߢLmw xd`¾Pdebery%0ib4Y&G<0(ETu=N]ORaH24501Z΅afZUVGqym*|ԯEEZ^uPhXo9 ^SNixAV^T@/2Kus Tfa ݪеAcaЭpߪN mcj~*ǮzJ{7 7UV8s;@[rP ȣ.cp!Ze gc̹b`hn|]ʋP> .<Q^.<\n[f l/q[U{ ]DG3\</N(垜Ш< 7 ;3V+m=m|2im|ln1GYtA T Rs0 H`&V;t ]zjlW2+Չ(k='U^թǝgO- &P{8՝! Cjo֞)Ak9$#hyq T 2AFӞ͚ݥ)Cq!CuHݠ𲙍L6$U@٧]aa^nQ^nU<\xu!l h&1l ,O.x^ĐhΜ.6;'x.o|3aR W'0CAsI\3l{GI3ܸyܠ2`[a$!e&Wsڦa#jm_$Y ۠[ ] 8=^/8 _ /x|ks=@<BաT iouz$8a?9[2eS d/4X{mDE l"8]^'+ag ;E-YP@q O@!v-DRp^ ,vbpbTрEc4d{m=.dmІA^K\N8+qxYI0W)NV>lޓ[Eky|vٞ^Dce2knJ*,&,l`{$^VNg?hy-y:4Q8|e[f+oðO[J{%ezA {K={a^xQrop~aO/~ֱYɧUpqlRT-.| !C*r s#S^WdrmU*ϲQ^m2s-E (s0&ajy)js2LciV׈q9ӲWs2ma{&}qƋGY+}꾻@߆dn#6_Myw Az='`/6Gm=DC?DT6Er?!}i.c])_}e||y~^Sk+{E&4 o_}1 V0+kvXآy=5nZ$&m  D @$@V˔ DuS/o~~g/>nM_~~vm\y¶w [MHg& *4ak/ J`[% DuSin6lXq,.M4  TNN[lV ܼmi9X±2OFX2(/܎e+|VV4 CUb{M^-(VTPYw0+4lU(@'}}V-ڣ&jq{|=^[ ZaU?jD9ɮs{ݪklZ6#AgTl*ʩ4a|Jzg5ϋ{5- "$Wt\ IuՎI?ykjC9poCZ 괏addX mY2l?kPo68CW^6nM﫴\_K}|f<)Yﱎ Q[އ7.7?xG' #wzbგxU"{#|~J7H.+Ѿ)/uh#_!G;TaOBJ(,0e<Bm_5\Wk7})}cT?%w(bk@۫W,[Lr}~/|:agOH|_B=Os^5y [/|lymf;tlOUL;:v̅|ls>3{@c \*9ʌ8 NfTn?o@Nq|q AB?AOh1^8m=7': Ʀ^جVu<[;2' n3χ۞x"7קW}vo|<տ?:?$j]zMl5hKJJhvN;8>~[N{5g o]FD[{҄U0a-B_}I:ΔpöP؆7ƴ _噃Vվ8v{7A `w{y|Dog^gk3}sοm%NW[ vѡJFqtwxis׮>Ta+m\6+^w`Q AfJ {>tK3W*h4vvX5o>OKҾU=O wqПl b0JpNlu990rgƄBMYO| mپЕ"*sЪ{q4'5՟H~%/PRo^ՃWl#o2C3YaʠOھ~y1 A}yaH&d'i k$_ק , >'*`/@LOr|\s1GؚFvl͂l?grt S>Ysil)tA9R,픬jEjxr?I`۸2-JW+Eڿ^2KIǬ^ǴH{V=Ju~ = lJG㬿Yν{@~kfa{ϫwʺي[r~ 6(M QتNKV3ad҉EрhH Ͽ}E >\V% lEUO# % ۳ϻnjd*MYRgcDU@ZE)sO>ϽYleTc.(, s/8"FC0_٪8TG/יJrщᚬ1[S.Pr>|RBlQK. .L‚l>TF#E 7JG! m sϿek`뀶*5;UN=ͨ<ðzIU -R3RN\25! 
[binary image data omitted: tail of a graphics file (image/svg+xml) that begins before this section]

nova-21.2.4/doc/source/_static/images/vmware-nova-driver-architecture.jpg
[binary JPEG data omitted]

nova-21.2.4/doc/source/_static/images/xenserver_architecture.png
[binary PNG data omitted; recoverable diagram labels: xapi plug-ins, nova-compute, nova.virt.xenapi, XenAPI, nova-network, dhcpd, Xen Domain 0, OpenStack VM, Tenant VM, OpenStack add-ons, eth0/eth1/eth2, xenbr0, Storage Repository, Physical Host, Management Network, Public Network, Tenant Network; captions: "Networks connected to physical interfaces according to the selected configuration.", "Virtual block devices, usually on local disk.", "OpenStack is using the XenAPI Python module to communicate with dom0 through the management network."]

nova-21.2.4/doc/source/_static/support-matrix.css

.sp_feature_mandatory { font-weight: bold; }
.sp_feature_optional { }
.sp_feature_choice { font-style: italic; font-weight: bold; }
.sp_feature_condition { font-style: italic; font-weight: bold; }
.sp_impl_complete { color: rgb(0, 120, 0); font-weight: normal; }
.sp_impl_missing { color: rgb(120, 0, 0); font-weight: normal; }
.sp_impl_partial { color: rgb(170, 170, 0); font-weight: normal; }
.sp_impl_unknown { color: rgb(170, 170, 170); font-weight: normal; }
.sp_impl_summary { font-size: 2em; }
.sp_cli { font-family: monospace; background-color: #F5F5F5; }

nova-21.2.4/doc/source/admin/ (directory)

nova-21.2.4/doc/source/admin/admin-password-injection.rst

====================================
Injecting the administrator password
====================================

Compute can generate a random administrator (root) password and inject that
password into an instance. If this feature is enabled, users can run
:command:`ssh` to an instance without an :command:`ssh` keypair. The random
password appears in the output of the :command:`openstack server create`
command. You can also view and set the admin password from the dashboard.

.. rubric:: Password injection using the dashboard

By default, the dashboard will display the ``admin`` password and allow the
user to modify it. If you do not want to support password injection, disable
the password fields by editing the dashboard's ``local_settings.py`` file.

.. code-block:: none

   OPENSTACK_HYPERVISOR_FEATURES = {
       ...
       'can_set_password': False,
   }

.. rubric:: Password injection on libvirt-based hypervisors

For hypervisors that use the libvirt back end (such as KVM, QEMU, and LXC),
admin password injection is disabled by default.
To enable it, set this option in ``/etc/nova/nova.conf``:

.. code-block:: ini

   [libvirt]
   inject_password=true

When enabled, Compute will modify the password of the admin account by
editing the ``/etc/shadow`` file inside the virtual machine instance.

.. note:: Linux distribution guest only.

.. note::

   Users can only use :command:`ssh` to access the instance by using the
   admin password if the virtual machine image is a Linux distribution, and
   it has been configured to allow users to use :command:`ssh` as the root
   user with password authorization. This is not the case for
   `Ubuntu cloud images `_, which by default do not allow users to use
   :command:`ssh` to access the root account, or `CentOS cloud images `_,
   which by default do not allow :command:`ssh` access to the instance with
   a password.

.. rubric:: Password injection and XenAPI (XenServer/XCP)

When using the XenAPI hypervisor back end, Compute uses the XenAPI agent to
inject passwords into guests. The virtual machine image must be configured
with the agent for password injection to work.

.. rubric:: Password injection and Windows images (all hypervisors)

For Windows virtual machines, configure the Windows image to retrieve the
admin password on boot by installing an agent such as `cloudbase-init `_.

nova-21.2.4/doc/source/admin/aggregates.rst

===============
Host aggregates
===============

Host aggregates are a mechanism for partitioning hosts in an OpenStack cloud,
or a region of an OpenStack cloud, based on arbitrary characteristics.
Examples where an administrator may want to do this include where a group of
hosts have additional hardware or performance characteristics.

Host aggregates started out as a way to use Xen hypervisor resource pools,
but have been generalized to provide a mechanism to allow administrators to
assign key-value pairs to groups of machines. Each node can have multiple
aggregates, each aggregate can have multiple key-value pairs, and the same
key-value pair can be assigned to multiple aggregates. This information can
be used in the scheduler to enable advanced scheduling, to set up Xen
hypervisor resource pools or to define logical groups for migration.

Host aggregates are not explicitly exposed to users. Instead administrators
map flavors to host aggregates. Administrators do this by setting metadata on
a host aggregate, and matching flavor extra specifications. The scheduler
then endeavors to match user requests for instances of the given flavor to a
host aggregate with the same key-value pair in its metadata. Compute nodes
can be in more than one host aggregate. Weight multipliers can be controlled
on a per-aggregate basis by setting the desired ``xxx_weight_multiplier``
aggregate metadata.

Administrators are able to optionally expose a host aggregate as an
:term:`Availability Zone`. Availability zones are different from host
aggregates in that they are explicitly exposed to the user, and hosts can
only be in a single availability zone. Administrators can configure a default
availability zone where instances will be scheduled when the user fails to
specify one. For more information on how to do this, refer to
:doc:`/admin/availability-zones`.
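As a concrete illustration of the per-aggregate weight multipliers mentioned
above, the override is set as ordinary aggregate metadata. The aggregate name
and the ``2.0`` value below are only placeholders for this sketch; any of the
supported ``xxx_weight_multiplier`` keys can be set the same way:

.. code-block:: console

   $ openstack aggregate set --property ram_weight_multiplier=2.0 my-aggregate

Hosts in that aggregate are then weighed with the per-aggregate value instead
of the scheduler's global ``ram_weight_multiplier`` setting.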
.. _config-sch-for-aggs:

Configure scheduler to support host aggregates
----------------------------------------------

One common use case for host aggregates is when you want to support
scheduling instances to a subset of compute hosts because they have a
specific capability. For example, you may want to allow users to request
compute hosts that have SSD drives if they need access to faster disk I/O,
or access to compute hosts that have GPU cards to take advantage of
GPU-accelerated code.

To configure the scheduler to support host aggregates, the
:oslo.config:option:`filter_scheduler.enabled_filters` configuration option
must contain the ``AggregateInstanceExtraSpecsFilter`` in addition to the
other filters used by the scheduler. Add the following line to ``nova.conf``
on the host that runs the ``nova-scheduler`` service to enable host
aggregates filtering, as well as the other filters that are typically
enabled:

.. code-block:: ini

   [filter_scheduler]
   enabled_filters=...,AggregateInstanceExtraSpecsFilter

Example: Specify compute hosts with SSDs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This example configures the Compute service to enable users to request nodes
that have solid-state drives (SSDs). You create a ``fast-io`` host aggregate
in the ``nova`` availability zone and you add the ``ssd=true`` key-value pair
to the aggregate. Then, you add the ``node1`` and ``node2`` compute nodes to
it, as shown below.

.. code-block:: console

   $ openstack aggregate create --zone nova fast-io
   +-------------------+----------------------------+
   | Field             | Value                      |
   +-------------------+----------------------------+
   | availability_zone | nova                       |
   | created_at        | 2016-12-22T07:31:13.013466 |
   | deleted           | False                      |
   | deleted_at        | None                       |
   | id                | 1                          |
   | name              | fast-io                    |
   | updated_at        | None                       |
   +-------------------+----------------------------+

   $ openstack aggregate set --property ssd=true 1
   +-------------------+----------------------------+
   | Field             | Value                      |
   +-------------------+----------------------------+
   | availability_zone | nova                       |
   | created_at        | 2016-12-22T07:31:13.000000 |
   | deleted           | False                      |
   | deleted_at        | None                       |
   | hosts             | []                         |
   | id                | 1                          |
   | name              | fast-io                    |
   | properties        | ssd='true'                 |
   | updated_at        | None                       |
   +-------------------+----------------------------+

   $ openstack aggregate add host 1 node1
   +-------------------+--------------------------------------------------+
   | Field             | Value                                            |
   +-------------------+--------------------------------------------------+
   | availability_zone | nova                                             |
   | created_at        | 2016-12-22T07:31:13.000000                       |
   | deleted           | False                                            |
   | deleted_at        | None                                             |
   | hosts             | [u'node1']                                       |
   | id                | 1                                                |
   | metadata          | {u'ssd': u'true', u'availability_zone': u'nova'} |
   | name              | fast-io                                          |
   | updated_at        | None                                             |
   +-------------------+--------------------------------------------------+

   $ openstack aggregate add host 1 node2
   +-------------------+--------------------------------------------------+
   | Field             | Value                                            |
   +-------------------+--------------------------------------------------+
   | availability_zone | nova                                             |
   | created_at        | 2016-12-22T07:31:13.000000                       |
   | deleted           | False                                            |
   | deleted_at        | None                                             |
   | hosts             | [u'node1', u'node2']                             |
   | id                | 1                                                |
   | metadata          | {u'ssd': u'true', u'availability_zone': u'nova'} |
   | name              | fast-io                                          |
   | updated_at        | None                                             |
   +-------------------+--------------------------------------------------+
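If you want to sanity-check the aggregate before wiring up a flavor, it can
be re-displayed at any time. This verification step is an illustrative
addition; its output repeats the fields shown in the tables above:

.. code-block:: console

   $ openstack aggregate show fast-io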
Use the :command:`openstack flavor create` command to create a flavor called
``ssd.large`` with an ID of 6, 8 GB of RAM, an 80 GB root disk, and 4 vCPUs.

.. code-block:: console

   $ openstack flavor create --id 6 --ram 8192 --disk 80 --vcpus 4 ssd.large
   +----------------------------+-----------+
   | Field                      | Value     |
   +----------------------------+-----------+
   | OS-FLV-DISABLED:disabled   | False     |
   | OS-FLV-EXT-DATA:ephemeral  | 0         |
   | disk                       | 80        |
   | id                         | 6         |
   | name                       | ssd.large |
   | os-flavor-access:is_public | True      |
   | ram                        | 8192      |
   | rxtx_factor                | 1.0       |
   | swap                       |           |
   | vcpus                      | 4         |
   +----------------------------+-----------+

Once the flavor is created, specify one or more key-value pairs that match
the key-value pairs on the host aggregates with scope
``aggregate_instance_extra_specs``. In this case, that is the
``aggregate_instance_extra_specs:ssd=true`` key-value pair. Setting a
key-value pair on a flavor is done using the :command:`openstack flavor set`
command.

.. code-block:: console

   $ openstack flavor set \
       --property aggregate_instance_extra_specs:ssd=true ssd.large

Once it is set, you should see the ``extra_specs`` property of the
``ssd.large`` flavor populated with a key of ``ssd`` and a corresponding
value of ``true``.

.. code-block:: console

   $ openstack flavor show ssd.large
   +----------------------------+-------------------------------------------+
   | Field                      | Value                                     |
   +----------------------------+-------------------------------------------+
   | OS-FLV-DISABLED:disabled   | False                                     |
   | OS-FLV-EXT-DATA:ephemeral  | 0                                         |
   | disk                       | 80                                        |
   | id                         | 6                                         |
   | name                       | ssd.large                                 |
   | os-flavor-access:is_public | True                                      |
   | properties                 | aggregate_instance_extra_specs:ssd='true' |
   | ram                        | 8192                                      |
   | rxtx_factor                | 1.0                                       |
   | swap                       |                                           |
   | vcpus                      | 4                                         |
   +----------------------------+-------------------------------------------+

Now, when a user requests an instance with the ``ssd.large`` flavor, the
scheduler only considers hosts with the ``ssd=true`` key-value pair. In this
example, these are ``node1`` and ``node2``.

Aggregates in Placement
-----------------------

Aggregates also exist in placement and are not the same thing as host
aggregates in nova. These aggregates are defined (purely) as groupings of
related resource providers. Since compute nodes in nova are represented in
placement as resource providers, they can be added to a placement aggregate
as well. For example, get the UUID of the compute node using
:command:`openstack hypervisor list` and add it to an aggregate in placement
using :command:`openstack resource provider aggregate set`.

.. code-block:: console

   $ openstack --os-compute-api-version=2.53 hypervisor list
   +--------------------------------------+---------------------+-----------------+---------------+-------+
   | ID                                   | Hypervisor Hostname | Hypervisor Type | Host IP       | State |
   +--------------------------------------+---------------------+-----------------+---------------+-------+
   | 815a5634-86fb-4e1e-8824-8a631fee3e06 | node1               | QEMU            | 192.168.1.123 | up    |
   +--------------------------------------+---------------------+-----------------+---------------+-------+

   $ openstack --os-placement-api-version=1.2 resource provider aggregate set \
       --aggregate df4c74f3-d2c4-4991-b461-f1a678e1d161 \
       815a5634-86fb-4e1e-8824-8a631fee3e06

Some scheduling filter operations can be performed by placement for increased
speed and efficiency.
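To confirm which placement aggregates a compute node's resource provider
belongs to, the osc-placement plugin also offers a listing command. This is
an illustrative extra step and assumes the plugin is installed in your client
environment; the UUID is the resource provider UUID from above:

.. code-block:: console

   $ openstack --os-placement-api-version=1.2 resource provider aggregate list \
       815a5634-86fb-4e1e-8824-8a631fee3e06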
This should alleviate the need to manually create those association records in the placement API using the ``openstack resource provider aggregate set`` CLI invocation. .. _tenant-isolation-with-placement: Tenant Isolation with Placement ------------------------------- In order to use placement to isolate tenants, there must be placement aggregates that match the membership and UUID of nova host aggregates that you want to use for isolation. The same key pattern in aggregate metadata used by the :ref:`AggregateMultiTenancyIsolation` filter controls this function, and is enabled by setting :oslo.config:option:`scheduler.limit_tenants_to_placement_aggregate` to ``True``. .. code-block:: console $ openstack --os-compute-api-version=2.53 aggregate create myagg +-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | availability_zone | None | | created_at | 2018-03-29T16:22:23.175884 | | deleted | False | | deleted_at | None | | id | 4 | | name | myagg | | updated_at | None | | uuid | 019e2189-31b3-49e1-aff2-b220ebd91c24 | +-------------------+--------------------------------------+ $ openstack --os-compute-api-version=2.53 aggregate add host myagg node1 +-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | availability_zone | None | | created_at | 2018-03-29T16:22:23.175884 | | deleted | False | | deleted_at | None | | hosts | [u'node1'] | | id | 4 | | name | myagg | | updated_at | None | | uuid | 019e2189-31b3-49e1-aff2-b220ebd91c24 | +-------------------+--------------------------------------+ $ openstack project list -f value | grep 'demo' 9691591f913949818a514f95286a6b90 demo $ openstack aggregate set \ --property filter_tenant_id=9691591f913949818a514f95286a6b90 myagg $ openstack --os-placement-api-version=1.2 resource provider aggregate set \ --aggregate 019e2189-31b3-49e1-aff2-b220ebd91c24 \ 815a5634-86fb-4e1e-8824-8a631fee3e06 Note that the ``filter_tenant_id`` metadata key can be optionally suffixed with any string for multiple tenants, such as ``filter_tenant_id3=$tenantid``. Usage ----- Much of the configuration of host aggregates is driven from the API or command-line clients. For example, to create a new aggregate and add hosts to it using the :command:`openstack` client, run: .. code-block:: console $ openstack aggregate create my-aggregate $ openstack aggregate add host my-aggregate my-host To list all aggregates and show information about a specific aggregate, run: .. code-block:: console $ openstack aggregate list $ openstack aggregate show my-aggregate To set and unset a property on the aggregate, run: .. code-block:: console $ openstack aggregate set --property pinned=true my-aggregrate $ openstack aggregate unset --property pinned my-aggregate To rename the aggregate, run: .. code-block:: console $ openstack aggregate set --name my-awesome-aggregate my-aggregate To remove a host from an aggregate and delete the aggregate, run: .. code-block:: console $ openstack aggregate remove host my-aggregate my-host $ openstack aggregate delete my-aggregate For more information, refer to the :python-openstackclient-doc:`OpenStack Client documentation `. 
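To verify that a compute node's resource provider ended up in the expected placement aggregate, for example after relying on the automatic mirroring performed by ``nova-api`` described above, you can list the aggregates of the provider. This is a sketch assuming the osc-placement plugin is installed; the UUIDs shown are the illustrative ones used in the earlier examples:

.. code-block:: console

   $ openstack --os-placement-api-version=1.2 resource provider aggregate list \
       815a5634-86fb-4e1e-8824-8a631fee3e06
   +--------------------------------------+
   | uuid                                 |
   +--------------------------------------+
   | df4c74f3-d2c4-4991-b461-f1a678e1d161 |
   +--------------------------------------+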
Configuration ------------- In addition to CRUD operations enabled by the API and clients, the following configuration options can be used to configure how host aggregates and the related availability zones feature operate under the hood: - :oslo.config:option:`default_schedule_zone` - :oslo.config:option:`scheduler.limit_tenants_to_placement_aggregate` - :oslo.config:option:`cinder.cross_az_attach` Finally, as discussed previously, there are a number of host aggregate-specific scheduler filters. These are: - :ref:`AggregateCoreFilter` - :ref:`AggregateDiskFilter` - :ref:`AggregateImagePropertiesIsolation` - :ref:`AggregateInstanceExtraSpecsFilter` - :ref:`AggregateIoOpsFilter` - :ref:`AggregateMultiTenancyIsolation` - :ref:`AggregateNumInstancesFilter` - :ref:`AggregateRamFilter` - :ref:`AggregateTypeAffinityFilter` The following configuration options are applicable to the scheduler configuration: - :oslo.config:option:`cpu_allocation_ratio` - :oslo.config:option:`ram_allocation_ratio` - :oslo.config:option:`filter_scheduler.max_instances_per_host` - :oslo.config:option:`filter_scheduler.aggregate_image_properties_isolation_separator` - :oslo.config:option:`filter_scheduler.aggregate_image_properties_isolation_namespace` .. _image-caching-aggregates: Image Caching ------------- Aggregates can be used as a way to target multiple compute nodes for the purpose of requesting that images be pre-cached for performance reasons. .. note:: `Some of the virt drivers`_ provide image caching support, which improves performance of second-and-later boots of the same image by keeping the base image in an on-disk cache. This avoids the need to re-download the image from Glance, which reduces network utilization and time-to-boot latency. Image pre-caching is the act of priming that cache with images ahead of time to improve performance of the first boot. .. _Some of the virt drivers: https://docs.openstack.org/nova/latest/user/support-matrix.html#operation_cache_images Assuming an aggregate called ``my-aggregate`` where two images should be pre-cached, running the following command will initiate the request: .. code-block:: console $ nova aggregate-cache-images my-aggregate image1 image2 Note that image pre-caching happens asynchronously in a best-effort manner. The images and aggregate provided are checked by the server when the command is run, but the compute nodes are not checked to see if they support image caching until the process runs. Progress and results are logged by each compute, and the process sends ``aggregate.cache_images.start``, ``aggregate.cache_images.progress``, and ``aggregate.cache_images.end`` notifications, which may be useful for monitoring the operation externally. References ---------- - `Curse your bones, Availability Zones! (Openstack Summit Vancouver 2018) `__ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/arch.rst0000664000175000017500000003327200000000000017401 0ustar00zuulzuul00000000000000=================== System architecture =================== OpenStack Compute contains several main components. - The cloud controller represents the global state and interacts with the other components. The ``API server`` acts as the web services front end for the cloud controller. The ``compute controller`` provides compute server resources and usually also contains the Compute service. 
- The ``object store`` is an optional component that provides storage services; you can also use OpenStack Object Storage instead. - An ``auth manager`` provides authentication and authorization services when used with the Compute system; you can also use OpenStack Identity as a separate authentication service instead. - A ``volume controller`` provides fast and permanent block-level storage for the compute servers. - The ``network controller`` provides virtual networks to enable compute servers to interact with each other and with the public network. You can also use OpenStack Networking instead. - The ``scheduler`` is used to select the most suitable compute controller to host an instance. Compute uses a messaging-based, ``shared nothing`` architecture. All major components exist on multiple servers, including the compute, volume, and network controllers, and the Object Storage or Image service. The state of the entire system is stored in a database. The cloud controller communicates with the internal object store using HTTP, but it communicates with the scheduler, network controller, and volume controller using Advanced Message Queuing Protocol (AMQP). To avoid blocking a component while waiting for a response, Compute uses asynchronous calls, with a callback that is triggered when a response is received. Hypervisors ~~~~~~~~~~~ Compute controls hypervisors through an API server. Selecting the best hypervisor to use can be difficult, and you must take budget, resource constraints, supported features, and required technical specifications into account. However, the majority of OpenStack development is done on systems using KVM and Xen-based hypervisors. For a detailed list of features and support across different hypervisors, see :doc:`/user/support-matrix`. You can also orchestrate clouds using multiple hypervisors in different availability zones. Compute supports the following hypervisors: - :ironic-doc:`Baremetal <>` - `Hyper-V `__ - `Kernel-based Virtual Machine (KVM) `__ - `Linux Containers (LXC) `__ - `PowerVM `__ - `Quick Emulator (QEMU) `__ - `User Mode Linux (UML) `__ - `Virtuozzo `__ - `VMware vSphere `__ - `Xen (using libvirt) `__ - `XenServer `__ - `zVM `__ For more information about hypervisors, see :doc:`/admin/configuration/hypervisors` section in the Nova Configuration Reference. Projects, users, and roles ~~~~~~~~~~~~~~~~~~~~~~~~~~ To begin using Compute, you must create a user with the :keystone-doc:`Identity service <>`. The Compute system is designed to be used by different consumers in the form of projects on a shared system, and role-based access assignments. Roles control the actions that a user is allowed to perform. Projects are isolated resource containers that form the principal organizational structure within the Compute service. They consist of an individual VLAN, and volumes, instances, images, keys, and users. A user can specify the project by appending ``project_id`` to their access key. If no project is specified in the API request, Compute attempts to use a project with the same ID as the user. For projects, you can use quota controls to limit the: - Number of volumes that can be launched. - Number of processor cores and the amount of RAM that can be allocated. - Floating IP addresses assigned to any instance when it launches. This allows instances to have the same publicly accessible IP addresses. - Fixed IP addresses assigned to the same instance when it launches. This allows instances to have the same publicly or privately accessible IP addresses. 
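For example (the project name and values below are illustrative only), an administrator can typically inspect and adjust these per-project limits with the :command:`openstack` client:

.. code-block:: console

   $ openstack quota show demo
   $ openstack quota set --instances 20 --cores 40 --ram 96000 demo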
Roles control the actions a user is allowed to perform. By default, most actions do not require a particular role, but you can configure them by editing the ``policy.json`` file for user roles. For example, a rule can be defined so that a user must have the ``admin`` role in order to be able to allocate a public IP address. A project limits users' access to particular images. Each user is assigned a user name and password. Keypairs granting access to an instance are enabled for each user, but quotas are set, so that each project can control resource consumption across available hardware resources. .. note:: Earlier versions of OpenStack used the term ``tenant`` instead of ``project``. Because of this legacy terminology, some command-line tools use ``--tenant_id`` where you would normally expect to enter a project ID. Block storage ~~~~~~~~~~~~~ OpenStack provides two classes of block storage: ephemeral storage and persistent volume. .. rubric:: Ephemeral storage Ephemeral storage includes a root ephemeral volume and an additional ephemeral volume. The root disk is associated with an instance, and exists only for the life of this very instance. Generally, it is used to store an instance's root file system, persists across the guest operating system reboots, and is removed on an instance deletion. The amount of the root ephemeral volume is defined by the flavor of an instance. In addition to the ephemeral root volume, all default types of flavors, except ``m1.tiny``, which is the smallest one, provide an additional ephemeral block device sized between 20 and 160 GB (a configurable value to suit an environment). It is represented as a raw block device with no partition table or file system. A cloud-aware operating system can discover, format, and mount such a storage device. OpenStack Compute defines the default file system for different operating systems as Ext4 for Linux distributions, VFAT for non-Linux and non-Windows operating systems, and NTFS for Windows. However, it is possible to specify any other filesystem type by using ``virt_mkfs`` or ``default_ephemeral_format`` configuration options. .. note:: For example, the ``cloud-init`` package included into an Ubuntu's stock cloud image, by default, formats this space as an Ext4 file system and mounts it on ``/mnt``. This is a cloud-init feature, and is not an OpenStack mechanism. OpenStack only provisions the raw storage. .. rubric:: Persistent volume A persistent volume is represented by a persistent virtualized block device independent of any particular instance, and provided by OpenStack Block Storage. Only a single configured instance can access a persistent volume. Multiple instances cannot access a persistent volume. This type of configuration requires a traditional network file system to allow multiple instances accessing the persistent volume. It also requires a traditional network file system like NFS, CIFS, or a cluster file system such as GlusterFS. These systems can be built within an OpenStack cluster, or provisioned outside of it, but OpenStack software does not provide these features. You can configure a persistent volume as bootable and use it to provide a persistent virtual instance similar to the traditional non-cloud-based virtualization system. It is still possible for the resulting instance to keep ephemeral storage, depending on the flavor selected. In this case, the root file system can be on the persistent volume, and its state is maintained, even if the instance is shut down. 
For more information about this type of configuration, see :cinder-doc:`Introduction to the Block Storage service `. .. note:: A persistent volume does not provide concurrent access from multiple instances. That type of configuration requires a traditional network file system like NFS, or CIFS, or a cluster file system such as GlusterFS. These systems can be built within an OpenStack cluster, or provisioned outside of it, but OpenStack software does not provide these features. Building blocks ~~~~~~~~~~~~~~~ In OpenStack the base operating system is usually copied from an image stored in the OpenStack Image service. This is the most common case and results in an ephemeral instance that starts from a known template state and loses all accumulated states on virtual machine deletion. It is also possible to put an operating system on a persistent volume in the OpenStack Block Storage volume system. This gives a more traditional persistent system that accumulates states which are preserved on the OpenStack Block Storage volume across the deletion and re-creation of the virtual machine. To get a list of available images on your system, run: .. code-block:: console $ openstack image list +--------------------------------------+-----------------------------+--------+ | ID | Name | Status | +--------------------------------------+-----------------------------+--------+ | aee1d242-730f-431f-88c1-87630c0f07ba | Ubuntu 14.04 cloudimg amd64 | active | | 0b27baa1-0ca6-49a7-b3f4-48388e440245 | Ubuntu 14.10 cloudimg amd64 | active | | df8d56fc-9cea-4dfd-a8d3-28764de3cb08 | jenkins | active | +--------------------------------------+-----------------------------+--------+ The displayed image attributes are: ``ID`` Automatically generated UUID of the image ``Name`` Free form, human-readable name for image ``Status`` The status of the image. Images marked ``ACTIVE`` are available for use. ``Server`` For images that are created as snapshots of running instances, this is the UUID of the instance the snapshot derives from. For uploaded images, this field is blank. Virtual hardware templates are called ``flavors``. By default, these are configurable by admin users, however that behavior can be changed by redefining the access controls for ``compute_extension:flavormanage`` in ``/etc/nova/policy.json`` on the ``compute-api`` server. For more information, refer to :doc:`/configuration/policy`. For a list of flavors that are available on your system: .. code-block:: console $ openstack flavor list +-----+-----------+-------+------+-----------+-------+-----------+ | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is_Public | +-----+-----------+-------+------+-----------+-------+-----------+ | 1 | m1.tiny | 512 | 1 | 0 | 1 | True | | 2 | m1.small | 2048 | 20 | 0 | 1 | True | | 3 | m1.medium | 4096 | 40 | 0 | 2 | True | | 4 | m1.large | 8192 | 80 | 0 | 4 | True | | 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True | +-----+-----------+-------+------+-----------+-------+-----------+ Compute service architecture ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ These basic categories describe the service architecture and information about the cloud controller. .. rubric:: API server At the heart of the cloud framework is an API server, which makes command and control of the hypervisor, storage, and networking programmatically available to users. The API endpoints are basic HTTP web services which handle authentication, authorization, and basic command and control functions using various API interfaces under the Amazon, Rackspace, and related models. 
This enables API compatibility with multiple existing tool sets created for interaction with offerings from other vendors. This broad compatibility prevents vendor lock-in. .. rubric:: Message queue A messaging queue brokers the interaction between compute nodes (processing), the networking controllers (software which controls network infrastructure), API endpoints, the scheduler (determines which physical hardware to allocate to a virtual resource), and similar components. Communication to and from the cloud controller is handled by HTTP requests through multiple API endpoints. A typical message passing event begins with the API server receiving a request from a user. The API server authenticates the user and ensures that they are permitted to issue the subject command. The availability of objects implicated in the request is evaluated and, if available, the request is routed to the queuing engine for the relevant workers. Workers continually listen to the queue based on their role, and occasionally their type host name. When an applicable work request arrives on the queue, the worker takes assignment of the task and begins executing it. Upon completion, a response is dispatched to the queue which is received by the API server and relayed to the originating user. Database entries are queried, added, or removed as necessary during the process. .. rubric:: Compute worker Compute workers manage computing instances on host machines. The API dispatches commands to compute workers to complete these tasks: - Run instances - Delete instances (Terminate instances) - Reboot instances - Attach volumes - Detach volumes - Get console output .. rubric:: Network Controller The Network Controller manages the networking resources on host machines. The API server dispatches commands through the message queue, which are subsequently processed by Network Controllers. Specific operations include: - Allocating fixed IP addresses - Configuring VLANs for projects - Configuring networks for compute nodes ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/availability-zones.rst0000664000175000017500000003073500000000000022273 0ustar00zuulzuul00000000000000================== Availability Zones ================== .. note:: This section provides deployment and admin-user usage information about the availability zone feature. For end-user information about availability zones, refer to the :doc:`user guide
`. Availability Zones are an end-user visible logical abstraction for partitioning a cloud without knowing the physical infrastructure. Availability zones are not modeled in the database; rather, they are defined by attaching specific metadata information to an :doc:`aggregate
`.
The addition of this specific metadata to an aggregate makes the aggregate
visible from an end-user perspective and consequently allows users to schedule
instances to a specific set of hosts, the ones belonging to the aggregate.

However, despite their similarities, there are a few additional differences to
note when comparing availability zones and host aggregates:

- A host can be part of multiple aggregates but it can only be in one
  availability zone.

- By default, a host is part of a default availability zone even if it doesn't
  belong to an aggregate. The name of this default availability zone can be
  configured using the :oslo.config:option:`default_availability_zone` config
  option.

  .. warning::

     The use of the default availability zone name in requests can be very
     error-prone. Since the user can see the list of availability zones, they
     have no way to know whether the default availability zone name (currently
     ``nova``) is provided because a host belongs to an aggregate whose AZ
     metadata key is set to ``nova``, or because there is at least one host
     that does not belong to any aggregate. Consequently, it is highly
     recommended that users never boot an instance by explicitly requesting an
     AZ named ``nova`` and that operators never set the AZ metadata for an
     aggregate to ``nova``. Doing so can cause problems because the instance
     AZ information is then explicitly attached to ``nova``, which could break
     further move operations when either the host is moved to another
     aggregate or the user wants to migrate the instance.

.. note::

   Availability zone names must NOT contain ``:`` since it is used by admin
   users to specify hosts where instances are launched in server creation.
   See `Using availability zones to select hosts`_ for more information.

In addition, other services, such as the :neutron-doc:`networking service <>`
and the :cinder-doc:`block storage service <>`, also provide an availability
zone feature. However, the implementation of these features differs vastly
between these different services. Consult the documentation for these other
services for more information on their implementation of this feature.

.. _availability-zones-with-placement:

Availability Zones with Placement
---------------------------------

In order to use placement to honor availability zone requests, there must be
placement aggregates that match the membership and UUID of nova host
aggregates that you assign as availability zones. The same key in aggregate
metadata used by the `AvailabilityZoneFilter` filter controls this function,
and is enabled by setting
:oslo.config:option:`scheduler.query_placement_for_availability_zone` to
``True``.

..
code-block:: console $ openstack --os-compute-api-version=2.53 aggregate create myaz +-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | availability_zone | None | | created_at | 2018-03-29T16:22:23.175884 | | deleted | False | | deleted_at | None | | id | 4 | | name | myaz | | updated_at | None | | uuid | 019e2189-31b3-49e1-aff2-b220ebd91c24 | +-------------------+--------------------------------------+ $ openstack --os-compute-api-version=2.53 aggregate add host myaz node1 +-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | availability_zone | None | | created_at | 2018-03-29T16:22:23.175884 | | deleted | False | | deleted_at | None | | hosts | [u'node1'] | | id | 4 | | name | myagg | | updated_at | None | | uuid | 019e2189-31b3-49e1-aff2-b220ebd91c24 | +-------------------+--------------------------------------+ $ openstack aggregate set --property availability_zone=az002 myaz $ openstack --os-placement-api-version=1.2 resource provider aggregate set --aggregate 019e2189-31b3-49e1-aff2-b220ebd91c24 815a5634-86fb-4e1e-8824-8a631fee3e06 With the above configuration, the `AvailabilityZoneFilter` filter can be disabled in :oslo.config:option:`filter_scheduler.enabled_filters` while retaining proper behavior (and doing so with the higher performance of placement's implementation). Implications for moving servers ------------------------------- There are several ways to move a server to another host: evacuate, resize, cold migrate, live migrate, and unshelve. Move operations typically go through the scheduler to pick the target host *unless* a target host is specified and the request forces the server to that host by bypassing the scheduler. Only evacuate and live migrate can forcefully bypass the scheduler and move a server to a specified host and even then it is highly recommended to *not* force and bypass the scheduler. With respect to availability zones, a server is restricted to a zone if: 1. The server was created in a specific zone with the ``POST /servers`` request containing the ``availability_zone`` parameter. 2. If the server create request did not contain the ``availability_zone`` parameter but the API service is configured for :oslo.config:option:`default_schedule_zone` then by default the server will be scheduled to that zone. 3. The shelved offloaded server was unshelved by specifying the ``availability_zone`` with the ``POST /servers/{server_id}/action`` request using microversion 2.77 or greater. 4. :oslo.config:option:`cinder.cross_az_attach` is False, :oslo.config:option:`default_schedule_zone` is None, the server is created without an explicit zone but with pre-existing volume block device mappings. In that case the server will be created in the same zone as the volume(s) if the volume zone is not the same as :oslo.config:option:`default_availability_zone`. See `Resource affinity`_ for details. If the server was not created in a specific zone then it is free to be moved to other zones, i.e. the :ref:`AvailabilityZoneFilter ` is a no-op. Knowing this, it is dangerous to force a server to another host with evacuate or live migrate if the server is restricted to a zone and is then forced to move to a host in another zone, because that will create an inconsistency in the internal tracking of where that server should live and may require manually updating the database for that server. 
For example, if a user creates a server in zone A and then the admin force live migrates the server to zone B, and then the user resizes the server, the scheduler will try to move it back to zone A which may or may not work, e.g. if the admin deleted or renamed zone A in the interim. Resource affinity ~~~~~~~~~~~~~~~~~ The :oslo.config:option:`cinder.cross_az_attach` configuration option can be used to restrict servers and the volumes attached to servers to the same availability zone. A typical use case for setting ``cross_az_attach=False`` is to enforce compute and block storage affinity, for example in a High Performance Compute cluster. By default ``cross_az_attach`` is True meaning that the volumes attached to a server can be in a different availability zone than the server. If set to False, then when creating a server with pre-existing volumes or attaching a volume to a server, the server and volume zone must match otherwise the request will fail. In addition, if the nova-compute service creates the volumes to attach to the server during server create, it will request that those volumes are created in the same availability zone as the server, which must exist in the block storage (cinder) service. As noted in the `Implications for moving servers`_ section, forcefully moving a server to another zone could also break affinity with attached volumes. .. note:: ``cross_az_attach=False`` is not widely used nor tested extensively and thus suffers from some known issues: * `Bug 1694844 `_. This is fixed in the 21.0.0 (Ussuri) release by using the volume zone for the server being created if the server is created without an explicit zone, :oslo.config:option:`default_schedule_zone` is None, and the volume zone does not match the value of :oslo.config:option:`default_availability_zone`. * `Bug 1781421 `_ .. _using-availability-zones-to-select-hosts: Using availability zones to select hosts ---------------------------------------- We can combine availability zones with a specific host and/or node to select where an instance is launched. For example: .. code-block:: console $ openstack server create --availability-zone ZONE:HOST:NODE ... SERVER .. note:: It is possible to use ``ZONE``, ``ZONE:HOST``, and ``ZONE::NODE``. .. note:: This is an admin-only operation by default, though you can modify this behavior using the ``os_compute_api:servers:create:forced_host`` rule in ``policy.json``. However, as discussed `previously `_, when launching instances in this manner the scheduler filters are not run. For this reason, this behavior is considered legacy behavior and, starting with the 2.74 microversion, it is now possible to specify a host or node explicitly. For example: .. code-block:: console $ openstack --os-compute-api-version 2.74 server create \ --host HOST --hypervisor-hostname HYPERVISOR ... SERVER .. note:: This is an admin-only operation by default, though you can modify this behavior using the ``compute:servers:create:requested_destination`` rule in ``policy.json``. This avoids the need to explicitly select an availability zone and ensures the scheduler filters are not bypassed. Usage ----- Creating an availability zone (AZ) is done by associating metadata with a :doc:`host aggregate `. For this reason, the :command:`openstack` client provides the ability to create a host aggregate and associate it with an AZ in one command. For example, to create a new aggregate, associating it with an AZ in the process, and add host to it using the :command:`openstack` client, run: .. 
code-block:: console

   $ openstack aggregate create --zone my-availability-zone my-aggregate
   $ openstack aggregate add host my-aggregate my-host

.. note::

   While it is possible to add a host to multiple host aggregates, it is not
   possible to add it to multiple availability zones. Attempting to add a host
   to multiple host aggregates associated with differing availability zones
   will result in a failure.

Alternatively, you can set this metadata manually for an existing host
aggregate. For example:

.. code-block:: console

   $ openstack aggregate set \
       --property availability_zone=my-availability-zone my-aggregate

To list all host aggregates and show information about a specific aggregate,
in order to determine which AZ the host aggregate(s) belong to, run:

.. code-block:: console

   $ openstack aggregate list --long
   $ openstack aggregate show my-aggregate

Finally, to disassociate a host aggregate from an availability zone, run:

.. code-block:: console

   $ openstack aggregate unset --property availability_zone my-aggregate

Configuration
-------------

Refer to :doc:`/admin/aggregates` for information on configuring both host
aggregates and availability zones.

==================
CellsV2 Management
==================

This section describes recommended practices and tips for running and
maintaining CellsV2 for admins and operators. For more details regarding the
basic concept of CellsV2 and its layout, please see the main
:doc:`/user/cellsv2-layout` page.

.. _handling-cell-failures:

Handling cell failures
----------------------

For an explanation of how ``nova-api`` handles cell failures, please see the
`Handling Down Cells `__ section of the Compute API guide. Below, you can find
some recommended practices and considerations for effectively tolerating cell
failure situations.

Configuration considerations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Since a cell being reachable or not is determined through timeouts, it is
suggested to provide suitable values for the following settings based on your
requirements.

#. :oslo.config:option:`database.max_retries` is 10 by default, meaning that
   every time a cell becomes unreachable, nova retries 10 times before it
   declares the cell a "down" cell.

#. :oslo.config:option:`database.retry_interval` is 10 seconds and
   :oslo.config:option:`oslo_messaging_rabbit.rabbit_retry_interval` is
   1 second by default, meaning that every time a cell becomes unreachable
   nova retries every 10 seconds or every 1 second, depending on whether it is
   a database or a message queue problem.

#. Nova also has a timeout value called ``CELL_TIMEOUT`` which is hardcoded to
   60 seconds; this is the total time that nova-api waits before returning
   partial results for the "down" cells.

The values of the above settings will affect the time required for nova to
decide whether a cell is unreachable and then take the necessary actions, such
as returning partial results. The operator can also control the results of
certain actions, like listing servers and services, depending on the value of
the :oslo.config:option:`api.list_records_by_skipping_down_cells` config
option. If this is true, the results from the unreachable cells are skipped,
and if it is false, the request fails with an API error in situations where
partial constructs cannot be computed. An illustrative combination of these
options is shown below.
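As an illustration only, an operator who wants cells to be declared "down"
more quickly and who prefers partial results over API errors might combine
the options discussed above in ``nova.conf`` roughly as follows (the values
are examples, not recommendations):

.. code-block:: ini

   [database]
   # Fewer retries so an unreachable cell database is detected sooner.
   max_retries = 5
   retry_interval = 5

   [oslo_messaging_rabbit]
   rabbit_retry_interval = 1

   [api]
   # Skip unreachable cells instead of failing the whole request.
   list_records_by_skipping_down_cells = true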
Disabling down cells ~~~~~~~~~~~~~~~~~~~~ While the temporary outage in the infrastructure is being fixed, the affected cells can be disabled so that they are removed from being scheduling candidates. To enable or disable a cell, use :command:`nova-manage cell_v2 update_cell --cell_uuid --disable`. See the :ref:`man-page-cells-v2` man page for details on command usage. Known issues ~~~~~~~~~~~~ 1. **Services and Performance:** In case a cell is down during the startup of nova services, there is the chance that the services hang because of not being able to connect to all the cell databases that might be required for certain calculations and initializations. An example scenario of this situation is if :oslo.config:option:`upgrade_levels.compute` is set to ``auto`` then the ``nova-api`` service hangs on startup if there is at least one unreachable cell. This is because it needs to connect to all the cells to gather information on each of the compute service's version to determine the compute version cap to use. The current workaround is to pin the :oslo.config:option:`upgrade_levels.compute` to a particular version like "rocky" and get the service up under such situations. See `bug 1815697 `__ for more details. Also note that in general during situations where cells are not reachable certain "slowness" may be experienced in operations requiring hitting all the cells because of the aforementioned configurable timeout/retry values. .. _cells-counting-quotas: 2. **Counting Quotas:** Another known issue is in the current approach of counting quotas where we query each cell database to get the used resources and aggregate them which makes it sensitive to temporary cell outages. While the cell is unavailable, we cannot count resource usage residing in that cell database and things would behave as though more quota is available than should be. That is, if a tenant has used all of their quota and part of it is in cell A and cell A goes offline temporarily, that tenant will suddenly be able to allocate more resources than their limit (assuming cell A returns, the tenant will have more resources allocated than their allowed quota). .. note:: Starting in the Train (20.0.0) release, it is possible to configure counting of quota usage from the placement service and API database to make quota usage calculations resilient to down or poor-performing cells in a multi-cell environment. See the :doc:`quotas documentation
` for more details. ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2344708 nova-21.2.4/doc/source/admin/common/0000775000175000017500000000000000000000000017213 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/common/nova-show-usage-statistics-for-hosts-instances.rst0000664000175000017500000002014400000000000031070 0ustar00zuulzuul00000000000000============================================= Show usage statistics for hosts and instances ============================================= You can show basic statistics on resource usage for hosts and instances. .. note:: For more sophisticated monitoring, see the `Ceilometer `__ project. You can also use tools, such as `Ganglia `__ or `Graphite `__, to gather more detailed data. Show host usage statistics ~~~~~~~~~~~~~~~~~~~~~~~~~~ The following examples show the host usage statistics for a host called ``devstack``. * List the hosts and the nova-related services that run on them: .. code-block:: console $ openstack host list +-----------+-------------+----------+ | Host Name | Service | Zone | +-----------+-------------+----------+ | devstack | conductor | internal | | devstack | compute | nova | | devstack | network | internal | | devstack | scheduler | internal | +-----------+-------------+----------+ * Get a summary of resource usage of all of the instances running on the host: .. code-block:: console $ openstack host show devstack +----------+----------------------------------+-----+-----------+---------+ | Host | Project | CPU | MEMORY MB | DISK GB | +----------+----------------------------------+-----+-----------+---------+ | devstack | (total) | 2 | 4003 | 157 | | devstack | (used_now) | 3 | 5120 | 40 | | devstack | (used_max) | 3 | 4608 | 40 | | devstack | b70d90d65e464582b6b2161cf3603ced | 1 | 512 | 0 | | devstack | 66265572db174a7aa66eba661f58eb9e | 2 | 4096 | 40 | +----------+----------------------------------+-----+-----------+---------+ The ``CPU`` column shows the sum of the virtual CPUs for instances running on the host. The ``MEMORY MB`` column shows the sum of the memory (in MB) allocated to the instances that run on the host. The ``DISK GB`` column shows the sum of the root and ephemeral disk sizes (in GB) of the instances that run on the host. The row that has the value ``used_now`` in the ``PROJECT`` column shows the sum of the resources allocated to the instances that run on the host, plus the resources allocated to the host itself. The row that has the value ``used_max`` in the ``PROJECT`` column shows the sum of the resources allocated to the instances that run on the host. .. note:: These values are computed by using information about the flavors of the instances that run on the hosts. This command does not query the CPU usage, memory usage, or hard disk usage of the physical host. Show instance usage statistics ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * Get CPU, memory, I/O, and network statistics for an instance. #. List instances: .. code-block:: console $ openstack server list +----------+----------------------+--------+------------------+--------+----------+ | ID | Name | Status | Networks | Image | Flavor | +----------+----------------------+--------+------------------+--------+----------+ | 84c6e... | myCirrosServer | ACTIVE | private=10.0.0.3 | cirros | m1.tiny | | 8a995... 
| myInstanceFromVolume | ACTIVE | private=10.0.0.4 | ubuntu | m1.small | +----------+----------------------+--------+------------------+--------+----------+ #. Get diagnostic statistics: .. note:: As of microversion v2.48, diagnostics information for all virt drivers will have a standard format as below. Before microversion 2.48, each hypervisor had its own format. For more details on diagnostics response message see `server diagnostics api `__ documentation. .. code-block:: console $ nova diagnostics myCirrosServer +----------------+------------------------------------------------------------------------+ | Property | Value | +----------------+------------------------------------------------------------------------+ | config_drive | False | | cpu_details | [] | | disk_details | [{"read_requests": 887, "errors_count": -1, "read_bytes": 20273152, | | | "write_requests": 89, "write_bytes": 303104}] | | driver | libvirt | | hypervisor | qemu | | hypervisor_os | linux | | memory_details | {"used": 0, "maximum": 0} | | nic_details | [{"rx_packets": 9, "rx_drop": 0, "tx_octets": 1464, "tx_errors": 0, | | | "mac_address": "fa:16:3e:fa:db:d3", "rx_octets": 958, "rx_rate": null, | | | "rx_errors": 0, "tx_drop": 0, "tx_packets": 9, "tx_rate": null}] | | num_cpus | 0 | | num_disks | 1 | | num_nics | 1 | | state | running | | uptime | 5528 | +----------------+------------------------------------------------------------------------+ ``config_drive`` indicates if the config drive is supported on the instance. ``cpu_details`` contains a list of details per vCPU. ``disk_details`` contains a list of details per disk. ``driver`` indicates the current driver on which the VM is running. ``hypervisor`` indicates the current hypervisor on which the VM is running. ``nic_details`` contains a list of details per vNIC. ``uptime`` is the amount of time in seconds that the VM has been running. | Diagnostics prior to v2.48: .. code-block:: console $ nova diagnostics myCirrosServer +---------------------------+--------+ | Property | Value | +---------------------------+--------+ | memory | 524288 | | memory-actual | 524288 | | memory-rss | 6444 | | tap1fec8fb8-7a_rx | 22137 | | tap1fec8fb8-7a_rx_drop | 0 | | tap1fec8fb8-7a_rx_errors | 0 | | tap1fec8fb8-7a_rx_packets | 166 | | tap1fec8fb8-7a_tx | 18032 | | tap1fec8fb8-7a_tx_drop | 0 | | tap1fec8fb8-7a_tx_errors | 0 | | tap1fec8fb8-7a_tx_packets | 130 | | vda_errors | -1 | | vda_read | 2048 | | vda_read_req | 2 | | vda_write | 182272 | | vda_write_req | 74 | +---------------------------+--------+ * Get summary statistics for each project: .. code-block:: console $ openstack usage list Usage from 2013-06-25 to 2013-07-24: +---------+---------+--------------+-----------+---------------+ | Project | Servers | RAM MB-Hours | CPU Hours | Disk GB-Hours | +---------+---------+--------------+-----------+---------------+ | demo | 1 | 344064.44 | 672.00 | 0.00 | | stack | 3 | 671626.76 | 327.94 | 6558.86 | +---------+---------+--------------+-----------+---------------+ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/config-drive.rst0000664000175000017500000001122200000000000021027 0ustar00zuulzuul00000000000000============= Config drives ============= .. note:: This section provides deployment information about the config drive feature. For end-user information about the config drive feature and instance metadata in general, refer to the :doc:`user guide
`. Config drives are special drives that are attached to an instance when it boots. The instance can mount this drive and read files from it to get information that is normally available through :doc:`the metadata service `. There are many use cases for the config drive. One such use case is to pass a networking configuration when you do not use DHCP to assign IP addresses to instances. For example, you might pass the IP address configuration for the instance through the config drive, which the instance can mount and access before you configure the network settings for the instance. Another common reason to use config drives is load. If running something like the OpenStack puppet providers in your instances, they can hit the :doc:`metadata servers ` every fifteen minutes, simultaneously for every instance you have. They are just checking in, and building facts, but it's not insignificant load. With a config drive, that becomes a local (cached) disk read. Finally, using a config drive means you're not dependent on the metadata service being up, reachable, or performing well to do things like reboot your instance that runs `cloud-init`_ at the beginning. Any modern guest operating system that is capable of mounting an ISO 9660 or VFAT file system can use the config drive. Requirements and guidelines --------------------------- To use the config drive, you must follow the following requirements for the compute host and image. .. rubric:: Compute host requirements The following virt drivers support the config drive: libvirt, XenServer, Hyper-V, VMware, and (since 17.0.0 Queens) PowerVM. The Bare Metal service also supports the config drive. - To use config drives with libvirt, XenServer, or VMware, you must first install the :command:`genisoimage` package on each compute host. Use the :oslo.config:option:`mkisofs_cmd` config option to set the path where you install the :command:`genisoimage` program. If :command:`genisoimage` is in the same path as the :program:`nova-compute` service, you do not need to set this flag. - To use config drives with Hyper-V, you must set the :oslo.config:option:`mkisofs_cmd` config option to the full path to an :command:`mkisofs.exe` installation. Additionally, you must set the :oslo.config:option:`hyperv.qemu_img_cmd` config option to the full path to an :command:`qemu-img` command installation. - To use config drives with PowerVM or the Bare Metal service, you do not need to prepare anything. .. rubric:: Image requirements An image built with a recent version of the `cloud-init`_ package can automatically access metadata passed through the config drive. The cloud-init package version 0.7.1 works with Ubuntu, Fedora based images (such as Red Hat Enterprise Linux) and openSUSE based images (such as SUSE Linux Enterprise Server). If an image does not have the cloud-init package installed, you must customize the image to run a script that mounts the config drive on boot, reads the data from the drive, and takes appropriate action such as adding the public key to an account. For more details about how data is organized on the config drive, refer to the :ref:`user guide `. 
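As a rough illustration of what such a script needs to do on a Linux guest,
the config drive is normally labelled ``config-2`` and exposes its data under
an ``openstack/`` directory; the exact device path and filesystem type can
vary by hypervisor and image:

.. code-block:: console

   # mkdir -p /mnt/config
   # mount /dev/disk/by-label/config-2 /mnt/config
   # cat /mnt/config/openstack/latest/meta_data.json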
Configuration ------------- The :program:`nova-compute` service accepts the following config drive-related options: - :oslo.config:option:`api.config_drive_skip_versions` - :oslo.config:option:`force_config_drive` - :oslo.config:option:`config_drive_format` If using the HyperV compute driver, the following additional options are supported: - :oslo.config:option:`hyperv.config_drive_cdrom` For example, to ensure nova always provides a config drive to instances but versions ``2018-08-27`` (Rocky) and ``2017-02-22`` (Ocata) are skipped, add the following to :file:`nova.conf`: .. code-block:: ini [DEFAULT] force_config_drive = True [api] config_drive_skip_versions = 2018-08-27 2017-02-22 .. note:: The ``img_config_drive`` image metadata property can be used to force enable the config drive. In addition, users can explicitly request a config drive when booting instances. For more information, refer to the :ref:`user guide `. .. note:: If using Xen with a config drive, you must use the :oslo.config:option:`xenserver.disable_agent` config option to disable the agent. .. _cloud-init: https://cloudinit.readthedocs.io/en/latest/ ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2344708 nova-21.2.4/doc/source/admin/configuration/0000775000175000017500000000000000000000000020572 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/configuration/api.rst0000664000175000017500000000225400000000000022100 0ustar00zuulzuul00000000000000========================= Compute API configuration ========================= The Compute API, is the component of OpenStack Compute that receives and responds to user requests, whether they be direct API calls, or via the CLI tools or dashboard. Configure Compute API password handling ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The OpenStack Compute API enables users to specify an administrative password when they create, rebuild, rescue or evacuate a server instance. If the user does not specify a password, a random password is generated and returned in the API response. In practice, how the admin password is handled depends on the hypervisor in use and might require additional configuration of the instance. For example, you might have to install an agent to handle the password setting. If the hypervisor and instance configuration do not support setting a password at server create time, the password that is returned by the create API call is misleading because it was ignored. To prevent this confusion, set the ``enable_instance_password`` configuration to ``False`` to disable the return of the admin password for installations that do not support setting instance passwords. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/configuration/cross-cell-resize.rst0000664000175000017500000003242600000000000024700 0ustar00zuulzuul00000000000000================= Cross-cell resize ================= This document describes how to configure nova for cross-cell resize. For information on :term:`same-cell resize `, refer to :doc:`/admin/configuration/resize`. Historically resizing and cold migrating a server has been explicitly `restricted`_ to within the same cell in which the server already exists. The cross-cell resize feature allows configuring nova to allow resizing and cold migrating servers across cells. 
The full design details are in the `Ussuri spec`_ and there is a `video`_ from a summit talk with a high-level overview. .. _restricted: https://opendev.org/openstack/nova/src/tag/20.0.0/nova/conductor/tasks/migrate.py#L164 .. _Ussuri spec: https://specs.openstack.org/openstack/nova-specs/specs/ussuri/approved/cross-cell-resize.html .. _video: https://www.openstack.org/videos/summits/denver-2019/whats-new-in-nova-cellsv2 Use case -------- There are many reasons to use multiple cells in a nova deployment beyond just scaling the database and message queue. Cells can also be used to shard a deployment by hardware generation and feature functionality. When sharding by hardware generation, it would be natural to setup a host aggregate for each cell and map flavors to the aggregate. Then when it comes time to decommission old hardware the deployer could provide new flavors and request that users resize to the new flavors, before some deadline, which under the covers will migrate their servers to the new cell with newer hardware. Administrators could also just cold migrate the servers during a maintenance window to the new cell. Requirements ------------ To enable cross-cell resize functionality the following conditions must be met. Minimum compute versions ~~~~~~~~~~~~~~~~~~~~~~~~ All compute services must be upgraded to 21.0.0 (Ussuri) or later and not be pinned to older RPC API versions in :oslo.config:option:`upgrade_levels.compute`. Policy configuration ~~~~~~~~~~~~~~~~~~~~ The policy rule ``compute:servers:resize:cross_cell`` controls who can perform a cross-cell resize or cold migrate operation. By default the policy disables the functionality for *all* users. A microversion is not required to opt into the behavior, just passing the policy check. As such, it is recommended to start by allowing only certain users to be able to perform a cross-cell resize or cold migration, for example by setting the rule to ``rule:admin_api`` or some other rule for test teams but not normal users until you are comfortable supporting the feature. Compute driver ~~~~~~~~~~~~~~ There are no special compute driver implementations required to support the feature, it is built on existing driver interfaces used during resize and shelve/unshelve. However, only the libvirt compute driver has integration testing in the ``nova-multi-cell`` CI job. Networking ~~~~~~~~~~ The networking API must expose the ``Port Bindings Extended`` API extension which was added in the 13.0.0 (Rocky) release for Neutron. Notifications ------------- The types of events and their payloads remain unchanged. The major difference from same-cell resize is the *publisher_id* may be different in some cases since some events are sent from the conductor service rather than a compute service. For example, with same-cell resize the ``instance.resize_revert.start`` notification is sent from the source compute host in the `finish_revert_resize`_ method but with cross-cell resize that same notification is sent from the conductor service. Obviously the actual message queue sending the notifications would be different for the source and target cells assuming they use separate transports. .. _finish_revert_resize: https://opendev.org/openstack/nova/src/tag/20.0.0/nova/compute/manager.py#L4326 Instance actions ---------------- The overall instance actions named ``resize``, ``confirmResize`` and ``revertResize`` are the same as same-cell resize. 
However, the *events* which make up those actions will be different for cross-cell resize since the event names are generated based on the compute service methods involved in the operation and there are different methods involved in a cross-cell resize. This is important for triage when a cross-cell resize operation fails. Scheduling ---------- The :ref:`CrossCellWeigher ` is enabled by default. When a scheduling request allows selecting compute nodes from another cell the weigher will by default *prefer* hosts within the source cell over hosts from another cell. However, this behavior is configurable using the :oslo.config:option:`filter_scheduler.cross_cell_move_weight_multiplier` configuration option if, for example, you want to drain old cells when resizing or cold migrating. Code flow --------- The end user experience is meant to not change, i.e. status transitions. A successfully cross-cell resized server will go to ``VERIFY_RESIZE`` status and from there the user can either confirm or revert the resized server using the normal `confirmResize`_ and `revertResize`_ server action APIs. Under the covers there are some differences from a traditional same-cell resize: * There is no inter-compute interaction. Everything is synchronously `orchestrated`_ from the (super)conductor service. This uses the :oslo.config:option:`long_rpc_timeout` configuration option. * The orchestration tasks in the (super)conductor service are in charge of creating a copy of the instance and its related records in the target cell database at the beginning of the operation, deleting them in case of rollback or when the resize is confirmed/reverted, and updating the ``instance_mappings`` table record in the API database. * Non-volume-backed servers will have their root disk uploaded to the image service as a temporary snapshot image just like during the `shelveOffload`_ operation. When finishing the resize on the destination host in the target cell that snapshot image will be used to spawn the guest and then the snapshot image will be deleted. .. _confirmResize: https://docs.openstack.org/api-ref/compute/#confirm-resized-server-confirmresize-action .. _revertResize: https://docs.openstack.org/api-ref/compute/#revert-resized-server-revertresize-action .. _orchestrated: https://opendev.org/openstack/nova/src/branch/master/nova/conductor/tasks/cross_cell_migrate.py .. _shelveOffload: https://docs.openstack.org/api-ref/compute/#shelf-offload-remove-server-shelveoffload-action Sequence diagram ---------------- The following diagrams are current as of the 21.0.0 (Ussuri) release. .. NOTE(mriedem): These diagrams could be more detailed, for example breaking down the individual parts of the conductor tasks and the calls made on the source and dest compute to the virt driver, cinder and neutron, but the diagrams could (1) get really complex and (2) become inaccurate with changes over time. If there are particular sub-sequences that should have diagrams I would suggest putting those into separate focused diagrams. Resize ~~~~~~ This is the sequence of calls to get the server to ``VERIFY_RESIZE`` status. .. 
seqdiag:: seqdiag { API; Conductor; Scheduler; Source; Destination; edge_length = 300; span_height = 15; activation = none; default_note_color = white; API ->> Conductor [label = "cast", note = "resize_instance/migrate_server"]; Conductor => Scheduler [label = "MigrationTask", note = "select_destinations"]; Conductor -> Conductor [label = "TargetDBSetupTask"]; Conductor => Destination [label = "PrepResizeAtDestTask", note = "prep_snapshot_based_resize_at_dest"]; Conductor => Source [label = "PrepResizeAtSourceTask", note = "prep_snapshot_based_resize_at_source"]; Conductor => Destination [label = "FinishResizeAtDestTask", note = "finish_snapshot_based_resize_at_dest"]; Conductor -> Conductor [label = "FinishResizeAtDestTask", note = "update instance mapping"]; } Confirm resize ~~~~~~~~~~~~~~ This is the sequence of calls when confirming `or deleting`_ a server in ``VERIFY_RESIZE`` status. .. seqdiag:: seqdiag { API; Conductor; Source; edge_length = 300; span_height = 15; activation = none; default_note_color = white; API ->> Conductor [label = "cast (or call if deleting)", note = "confirm_snapshot_based_resize"]; // separator to indicate everything after this is driven by ConfirmResizeTask === ConfirmResizeTask === Conductor => Source [label = "call", note = "confirm_snapshot_based_resize_at_source"]; Conductor -> Conductor [note = "hard delete source cell instance"]; Conductor -> Conductor [note = "update target cell instance status"]; } .. _or deleting: https://opendev.org/openstack/nova/src/tag/20.0.0/nova/compute/api.py#L2171 Revert resize ~~~~~~~~~~~~~ This is the sequence of calls when reverting a server in ``VERIFY_RESIZE`` status. .. seqdiag:: seqdiag { API; Conductor; Source; Destination; edge_length = 300; span_height = 15; activation = none; default_note_color = white; API ->> Conductor [label = "cast", note = "revert_snapshot_based_resize"]; // separator to indicate everything after this is driven by RevertResizeTask === RevertResizeTask === Conductor -> Conductor [note = "update records from target to source cell"]; Conductor -> Conductor [note = "update instance mapping"]; Conductor => Destination [label = "call", note = "revert_snapshot_based_resize_at_dest"]; Conductor -> Conductor [note = "hard delete target cell instance"]; Conductor => Source [label = "call", note = "finish_revert_snapshot_based_resize_at_source"]; } Limitations ----------- These are known to not yet be supported in the code: * Instances with ports attached that have :doc:`bandwidth-aware ` resource provider allocations. Nova falls back to same-cell resize if the server has such ports. * Rescheduling to alternative hosts within the same target cell in case the primary selected host fails the ``prep_snapshot_based_resize_at_dest`` call. These may not work since they have not been validated by integration testing: * Instances with PCI devices attached. * Instances with a NUMA topology. Other limitations: * The config drive associated with the server, if there is one, will be re-generated on the destination host in the target cell. Therefore if the server was created with `personality files`_ they will be lost. However, this is no worse than `evacuating`_ a server that had a config drive when the source and destination compute host are not on shared storage or when shelve offloading and unshelving a server with a config drive. If necessary, the resized server can be rebuilt to regain the personality files. 
* The ``_poll_unconfirmed_resizes`` periodic task, which can be :oslo.config:option:`configured ` to automatically confirm pending resizes on the target host, *might* not support cross-cell resizes because doing so would require an :ref:`up-call ` to the API to confirm the resize and cleanup the source cell database. .. _personality files: https://docs.openstack.org/api-guide/compute/server_concepts.html#server-personality .. _evacuating: https://docs.openstack.org/api-ref/compute/#evacuate-server-evacuate-action Troubleshooting --------------- Timeouts ~~~~~~~~ Configure a :ref:`service user ` in case the user token times out, e.g. during the snapshot and download of a large server image. If RPC calls are timing out with a ``MessagingTimeout`` error in the logs, check the :oslo.config:option:`long_rpc_timeout` option to see if it is high enough though the default value (30 minutes) should be sufficient. Recovering from failure ~~~~~~~~~~~~~~~~~~~~~~~ The orchestration tasks in conductor that drive the operation are built with rollbacks so each part of the operation can be rolled back in order if a subsequent task fails. The thing to keep in mind is the ``instance_mappings`` record in the API DB is the authority on where the instance "lives" and that is where the API will go to show the instance in a ``GET /servers/{server_id}`` call or any action performed on the server, including deleting it. So if the resize fails and there is a copy of the instance and its related records in the target cell, the tasks should automatically delete them but if not you can hard-delete the records from whichever cell is *not* the one in the ``instance_mappings`` table. If the instance is in ``ERROR`` status, check the logs in both the source and destination compute service to see if there is anything that needs to be manually recovered, for example volume attachments or port bindings, and also check the (super)conductor service logs. Assuming volume attachments and port bindings are OK (current and pointing at the correct host), then try hard rebooting the server to get it back to ``ACTIVE`` status. If that fails, you may need to `rebuild`_ the server on the source host. Note that the guest's disks on the source host are not deleted until the resize is confirmed so if there is an issue prior to confirm or confirm itself fails, the guest disks should still be available for rebuilding the instance if necessary. .. _rebuild: https://docs.openstack.org/api-ref/compute/#rebuild-server-rebuild-action ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/configuration/fibre-channel.rst0000664000175000017500000000154200000000000024023 0ustar00zuulzuul00000000000000================================= Configuring Fibre Channel Support ================================= Fibre Channel support in OpenStack Compute is remote block storage attached to compute nodes for VMs. .. todo:: This below statement needs to be verified for current release Fibre Channel supported only the KVM hypervisor. Compute and Block Storage support Fibre Channel automatic zoning on Brocade and Cisco switches. On other hardware Fibre Channel arrays must be pre-zoned or directly attached to the KVM hosts. KVM host requirements ~~~~~~~~~~~~~~~~~~~~~ You must install these packages on the KVM host: ``sysfsutils`` Nova uses the ``systool`` application in this package. ``sg3-utils`` or ``sg3_utils`` Nova uses the ``sg_scan`` and ``sginfo`` applications. 
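For example, on Ubuntu the required packages can typically be installed with:

.. code-block:: console

   # apt-get install sysfsutils sg3-utils

and on Red Hat Enterprise Linux, Fedora, or CentOS with:

.. code-block:: console

   # yum install sysfsutils sg3_utils

The package names above are indicative only and may differ between distributions and releases.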
Installing the ``multipath-tools`` or ``device-mapper-multipath`` package is optional. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/configuration/hypervisor-basics.rst0000664000175000017500000000106200000000000024777 0ustar00zuulzuul00000000000000=============================== Hypervisor Configuration Basics =============================== The ``nova-compute`` service is installed on and operates from the same node that runs all of the virtual machines. This node is referred to as the compute node in this guide. By default, the selected hypervisor is KVM. To change to another hypervisor, change the ``virt_type`` option in the ``[libvirt]`` section of ``nova.conf`` and restart the ``nova-compute`` service. Specific options for particular hypervisors can be found in the following sections. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/configuration/hypervisor-hyper-v.rst0000664000175000017500000003567100000000000025142 0ustar00zuulzuul00000000000000=============================== Hyper-V virtualization platform =============================== .. todo:: This is really installation guide material and should probably be moved. It is possible to use Hyper-V as a compute node within an OpenStack deployment. The ``nova-compute`` service runs as ``openstack-compute``, a 32-bit service directly upon the Windows platform with the Hyper-V role enabled. The necessary Python components as well as the ``nova-compute`` service are installed directly onto the Windows platform. Windows Clustering Services are not needed for functionality within the OpenStack infrastructure. The use of the Windows Server 2012 platform is recommended for the best experience and is the platform for active development. The following Windows platforms have been tested as compute nodes: - Windows Server 2012 - Windows Server 2012 R2 Server and Core (with the Hyper-V role enabled) - Hyper-V Server Hyper-V configuration ~~~~~~~~~~~~~~~~~~~~~ The only OpenStack services required on a Hyper-V node are ``nova-compute`` and ``neutron-hyperv-agent``. When sizing this host, consider that Hyper-V will require 16 GB - 20 GB of disk space for the OS itself, including updates. Two NICs are required, one connected to the management network and one to the guest data network. The following sections discuss how to prepare the Windows Hyper-V node for operation as an OpenStack compute node. Unless stated otherwise, any configuration information should work for the Windows 2012 and 2012 R2 platforms. Local storage considerations ---------------------------- The Hyper-V compute node needs to have ample storage for storing the virtual machine images running on the compute node. You may use a single volume for all, or partition it into an OS volume and a VM volume. .. _configure-ntp-windows: Configure NTP ------------- Network time services must be configured to ensure proper operation of the OpenStack nodes. To set network time on your Windows host, you must run the following commands: .. code-block:: bat C:\>net stop w32time C:\>w32tm /config "/manualpeerlist:pool.ntp.org,0x8" /syncfromflags:MANUAL C:\>net start w32time Keep in mind that the node will have to be time synchronized with the other nodes of your OpenStack environment, so it is important to use the same NTP server.
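To verify that the host is actually synchronizing with the configured time source, one option is to query the Windows Time service status; the exact output varies by Windows version:

.. code-block:: bat

   C:\>w32tm /query /status

The reported source should be the NTP server configured above rather than the local CMOS clock.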
Note that in case of an Active Directory environment, you may do this only for the AD Domain Controller. Configure Hyper-V virtual switching ----------------------------------- Information regarding the Hyper-V virtual Switch can be found in the `Hyper-V Virtual Switch Overview`__. To quickly enable an interface to be used as a Virtual Interface the following PowerShell may be used: .. code-block:: none PS C:\> $if = Get-NetIPAddress -IPAddress 192* | Get-NetIPInterface PS C:\> New-VMSwitch -NetAdapterName $if.ifAlias -Name YOUR_BRIDGE_NAME -AllowManagementOS $false .. note:: It is very important to make sure that when you are using a Hyper-V node with only 1 NIC the -AllowManagementOS option is set on ``True``, otherwise you will lose connectivity to the Hyper-V node. __ https://technet.microsoft.com/en-us/library/hh831823.aspx Enable iSCSI initiator service ------------------------------ To prepare the Hyper-V node to be able to attach to volumes provided by cinder you must first make sure the Windows iSCSI initiator service is running and started automatically. .. code-block:: none PS C:\> Set-Service -Name MSiSCSI -StartupType Automatic PS C:\> Start-Service MSiSCSI Configure shared nothing live migration --------------------------------------- Detailed information on the configuration of live migration can be found in `this guide`__ The following outlines the steps of shared nothing live migration. #. The target host ensures that live migration is enabled and properly configured in Hyper-V. #. The target host checks if the image to be migrated requires a base VHD and pulls it from the Image service if not already available on the target host. #. The source host ensures that live migration is enabled and properly configured in Hyper-V. #. The source host initiates a Hyper-V live migration. #. The source host communicates to the manager the outcome of the operation. The following three configuration options are needed in order to support Hyper-V live migration and must be added to your ``nova.conf`` on the Hyper-V compute node: * This is needed to support shared nothing Hyper-V live migrations. It is used in ``nova/compute/manager.py``. .. code-block:: ini instances_shared_storage = False * This flag is needed to support live migration to hosts with different CPU features. This flag is checked during instance creation in order to limit the CPU features used by the VM. .. code-block:: ini limit_cpu_features = True * This option is used to specify where instances are stored on disk. .. code-block:: ini instances_path = DRIVELETTER:\PATH\TO\YOUR\INSTANCES Additional Requirements: * Hyper-V 2012 R2 or Windows Server 2012 R2 with Hyper-V role enabled * A Windows domain controller with the Hyper-V compute nodes as domain members * The instances_path command-line option/flag needs to be the same on all hosts * The ``openstack-compute`` service deployed with the setup must run with domain credentials. You can set the service credentials with: .. code-block:: bat C:\>sc config openstack-compute obj="DOMAIN\username" password="password" __ https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/manage/Use-live-migration-without-Failover-Clustering-to-move-a-virtual-machine How to setup live migration on Hyper-V -------------------------------------- To enable 'shared nothing live' migration, run the 3 instructions below on each Hyper-V host: .. 
code-block:: none PS C:\> Enable-VMMigration PS C:\> Set-VMMigrationNetwork IP_ADDRESS PS C:\> Set-VMHost -VirtualMachineMigrationAuthenticationTypeKerberos .. note:: Replace the ``IP_ADDRESS`` with the address of the interface which will provide live migration. Additional Reading ------------------ This article clarifies the various live migration options in Hyper-V: `Hyper-V Live Migration of Yesterday `_ Install nova-compute using OpenStack Hyper-V installer ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In case you want to avoid all the manual setup, you can use Cloudbase Solutions' installer. You can find it here: `HyperVNovaCompute_Beta download `_ The tool installs an independent Python environment in order to avoid conflicts with existing applications, and dynamically generates a ``nova.conf`` file based on the parameters provided by you. The tool can also be used for an automated and unattended mode for deployments on a massive number of servers. More details about how to use the installer and its features can be found here: `Cloudbase `_ .. _windows-requirements: Requirements ~~~~~~~~~~~~ Python ------ Python 2.7 32bit must be installed as most of the libraries are not working properly on the 64bit version. **Setting up Python prerequisites** #. Download and install Python 2.7 using the MSI installer from here: `python-2.7.3.msi download `_ .. code-block:: none PS C:\> $src = "https://www.python.org/ftp/python/2.7.3/python-2.7.3.msi" PS C:\> $dest = "$env:temp\python-2.7.3.msi" PS C:\> Invoke-WebRequest -Uri $src -OutFile $dest PS C:\> Unblock-File $dest PS C:\> Start-Process $dest #. Make sure that the ``Python`` and ``Python\Scripts`` paths are set up in the ``PATH`` environment variable. .. code-block:: none PS C:\> $oldPath = [System.Environment]::GetEnvironmentVariable("Path") PS C:\> $newPath = $oldPath + ";C:\python27\;C:\python27\Scripts\" PS C:\> [System.Environment]::SetEnvironmentVariable("Path", $newPath, [System.EnvironmentVariableTarget]::User Python dependencies ------------------- The following packages need to be downloaded and manually installed: ``setuptools`` https://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11.win32-py2.7.exe ``pip`` https://pip.pypa.io/en/latest/installing/ ``PyMySQL`` http://codegood.com/download/10/ ``PyWin32`` https://sourceforge.net/projects/pywin32/files/pywin32/Build%20217/pywin32-217.win32-py2.7.exe ``Greenlet`` http://www.lfd.uci.edu/~gohlke/pythonlibs/#greenlet ``PyCryto`` http://www.voidspace.org.uk/downloads/pycrypto26/pycrypto-2.6.win32-py2.7.exe The following packages must be installed with pip: * ``ecdsa`` * ``amqp`` * ``wmi`` .. code-block:: none PS C:\> pip install ecdsa PS C:\> pip install amqp PS C:\> pip install wmi Other dependencies ------------------ ``qemu-img`` is required for some of the image related operations. You can get it from here: http://qemu.weilnetz.de/. You must make sure that the ``qemu-img`` path is set in the PATH environment variable. Some Python packages need to be compiled, so you may use MinGW or Visual Studio. You can get MinGW from here: http://sourceforge.net/projects/mingw/. You must configure which compiler is to be used for this purpose by using the ``distutils.cfg`` file in ``$Python27\Lib\distutils``, which can contain: .. code-block:: ini [build] compiler = mingw32 As a last step for setting up MinGW, make sure that the MinGW binaries' directories are set up in PATH. Install nova-compute ~~~~~~~~~~~~~~~~~~~~ Download the nova code ---------------------- #. 
Use Git to download the necessary source code. The installer to run Git on Windows can be downloaded here: https://github.com/msysgit/msysgit/releases/download/Git-1.9.2-preview20140411/Git-1.9.2-preview20140411.exe #. Download the installer. Once the download is complete, run the installer and follow the prompts in the installation wizard. The default should be acceptable for the purposes of this guide. .. code-block:: none PS C:\> $src = "https://github.com/msysgit/msysgit/releases/download/Git-1.9.2-preview20140411/Git-1.9.2-preview20140411.exe" PS C:\> $dest = "$env:temp\Git-1.9.2-preview20140411.exe" PS C:\> Invoke-WebRequest -Uri $src -OutFile $dest PS C:\> Unblock-File $dest PS C:\> Start-Process $dest #. Run the following to clone the nova code. .. code-block:: none PS C:\> git.exe clone https://opendev.org/openstack/nova Install nova-compute service ---------------------------- To install ``nova-compute``, run: .. code-block:: none PS C:\> cd c:\nova PS C:\> python setup.py install Configure nova-compute ---------------------- The ``nova.conf`` file must be placed in ``C:\etc\nova`` for running OpenStack on Hyper-V. Below is a sample ``nova.conf`` for Windows: .. code-block:: ini [DEFAULT] auth_strategy = keystone image_service = nova.image.glance.GlanceImageService compute_driver = nova.virt.hyperv.driver.HyperVDriver volume_api_class = nova.volume.cinder.API fake_network = true instances_path = C:\Program Files (x86)\OpenStack\Instances use_cow_images = true force_config_drive = false injected_network_template = C:\Program Files (x86)\OpenStack\Nova\etc\interfaces.template policy_file = C:\Program Files (x86)\OpenStack\Nova\etc\policy.json mkisofs_cmd = C:\Program Files (x86)\OpenStack\Nova\bin\mkisofs.exe allow_resize_to_same_host = true running_deleted_instance_action = reap running_deleted_instance_poll_interval = 120 resize_confirm_window = 5 resume_guests_state_on_host_boot = true rpc_response_timeout = 1800 lock_path = C:\Program Files (x86)\OpenStack\Log\ rpc_backend = nova.openstack.common.rpc.impl_kombu rabbit_host = IP_ADDRESS rabbit_port = 5672 rabbit_userid = guest rabbit_password = Passw0rd logdir = C:\Program Files (x86)\OpenStack\Log\ logfile = nova-compute.log instance_usage_audit = true instance_usage_audit_period = hour [glance] api_servers = http://IP_ADDRESS:9292 [neutron] endpoint_override = http://IP_ADDRESS:9696 auth_strategy = keystone project_name = service username = neutron password = Passw0rd auth_url = http://IP_ADDRESS:5000/v3 auth_type = password [hyperv] vswitch_name = newVSwitch0 limit_cpu_features = false config_drive_inject_password = false qemu_img_cmd = C:\Program Files (x86)\OpenStack\Nova\bin\qemu-img.exe config_drive_cdrom = true dynamic_memory_ratio = 1 enable_instance_metrics_collection = true [rdp] enabled = true html5_proxy_base_url = https://IP_ADDRESS:4430 Prepare images for use with Hyper-V ----------------------------------- Hyper-V currently supports only the VHD and VHDX file format for virtual machine instances. Detailed instructions for installing virtual machines on Hyper-V can be found here: `Create Virtual Machines `_ Once you have successfully created a virtual machine, you can then upload the image to `glance` using the `openstack-client`: .. code-block:: none PS C:\> openstack image create --name "VM_IMAGE_NAME" --property hypervisor_type=hyperv --public \ --container-format bare --disk-format vhd .. 
note:: VHD and VHDX file sizes can be bigger than their maximum internal size; as such, you need to boot instances using a flavor with a slightly bigger disk size than the internal size of the disk file. To create VHDs, use the following PowerShell cmdlet: .. code-block:: none PS C:\> New-VHD DISK_NAME.vhd -SizeBytes VHD_SIZE Inject interfaces and routes ---------------------------- The ``interfaces.template`` file describes the network interfaces and routes available on your system and how to activate them. You can specify the location of the file with the ``injected_network_template`` configuration option in ``/etc/nova/nova.conf``. .. code-block:: ini injected_network_template = PATH_TO_FILE A default template exists in ``nova/virt/interfaces.template``. Run Compute with Hyper-V ------------------------ To start the ``nova-compute`` service, run this command from a console in the Windows server: .. code-block:: none PS C:\> C:\Python27\python.exe c:\Python27\Scripts\nova-compute --config-file c:\etc\nova\nova.conf Troubleshoot Hyper-V configuration ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * I ran the :command:`nova-manage service list` command from my controller; however, I'm not seeing smiley faces for Hyper-V compute nodes, what do I do? Verify that you are synchronized with a network time source. For instructions about how to configure NTP on your Hyper-V compute node, see :ref:`configure-ntp-windows`. * How do I restart the compute service? .. code-block:: none PS C:\> net stop nova-compute && net start nova-compute * How do I restart the iSCSI initiator service? .. code-block:: none PS C:\> net stop msiscsi && net start msiscsi ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/configuration/hypervisor-ironic.rst0000664000175000017500000000412600000000000025022 0ustar00zuulzuul00000000000000Ironic ====== Introduction ------------ The ironic hypervisor driver wraps the Bare Metal (ironic) API, enabling Nova to provision baremetal resources using the same user-facing API as for server management. This is the only driver in nova where one compute service can map to many hosts, meaning a ``nova-compute`` service can manage multiple ``ComputeNodes``. An ironic-driver-managed compute service uses the ironic ``node uuid`` for the compute node ``hypervisor_hostname`` (nodename) and ``uuid`` fields. The relationship of ``instance:compute node:ironic node`` is ``1:1:1``. Scheduling of bare metal nodes is based on custom resource classes, specified via the ``resource_class`` property on a node and a corresponding resource property on a flavor (see the `flavor documentation`_). The RAM and CPU settings on a flavor are ignored, and the disk is only used to determine the root partition size when a partition image is used (see the `image documentation`_). .. _flavor documentation: https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html .. _image documentation: https://docs.openstack.org/ironic/latest/install/configure-glance-images.html Configuration ------------- - `Configure the Compute service to use the Bare Metal service `_. - `Create flavors for use with the Bare Metal service `__. - `Conductors Groups `_. Scaling and Performance Issues ------------------------------ - The ``update_available_resource`` periodic task reports all the resources managed by Ironic. Depending on the number of nodes, it can take a lot of time.
The nova-compute will not perform any other operations when this task is running. You can use conductor groups to help scale, by setting :oslo.config:option:`ironic.partition_key`. Known limitations / Missing features ------------------------------------ * Migrate * Resize * Snapshot * Pause * Shelve * Evacuate ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/configuration/hypervisor-kvm.rst0000664000175000017500000007500500000000000024340 0ustar00zuulzuul00000000000000=== KVM === .. todo:: Some of this is installation guide material and should probably be moved. KVM is configured as the default hypervisor for Compute. .. note:: This document contains several sections about hypervisor selection. If you are reading this document linearly, you do not want to load the KVM module before you install ``nova-compute``. The ``nova-compute`` service depends on qemu-kvm, which installs ``/lib/udev/rules.d/45-qemu-kvm.rules``, which sets the correct permissions on the ``/dev/kvm`` device node. To enable KVM explicitly, add the following configuration options to the ``/etc/nova/nova.conf`` file: .. code-block:: ini compute_driver = libvirt.LibvirtDriver [libvirt] virt_type = kvm The KVM hypervisor supports the following virtual machine image formats: * Raw * QEMU Copy-on-write (QCOW2) * QED Qemu Enhanced Disk * VMware virtual machine disk format (vmdk) This section describes how to enable KVM on your system. For more information, see the following distribution-specific documentation: * `Fedora: Virtualization Getting Started Guide `_ from the Fedora 22 documentation. * `Ubuntu: KVM/Installation `_ from the Community Ubuntu documentation. * `Debian: Virtualization with KVM `_ from the Debian handbook. * `Red Hat Enterprise Linux: Installing virtualization packages on an existing Red Hat Enterprise Linux system `_ from the ``Red Hat Enterprise Linux Virtualization Host Configuration and Guest Installation Guide``. * `openSUSE: Installing KVM `_ from the openSUSE Virtualization with KVM manual. * `SLES: Installing KVM `_ from the SUSE Linux Enterprise Server ``Virtualization Guide``. .. _enable-kvm: Enable KVM ~~~~~~~~~~ The following sections outline how to enable KVM based hardware virtualization on different architectures and platforms. To perform these steps, you must be logged in as the ``root`` user. For x86 based systems --------------------- #. To determine whether the ``svm`` or ``vmx`` CPU extensions are present, run this command: .. code-block:: console # grep -E 'svm|vmx' /proc/cpuinfo This command generates output if the CPU is capable of hardware-virtualization. Even if output is shown, you might still need to enable virtualization in the system BIOS for full support. If no output appears, consult your system documentation to ensure that your CPU and motherboard support hardware virtualization. Verify that any relevant hardware virtualization options are enabled in the system BIOS. The BIOS for each manufacturer is different. If you must enable virtualization in the BIOS, look for an option containing the words ``virtualization``, ``VT``, ``VMX``, or ``SVM``. #. To list the loaded kernel modules and verify that the ``kvm`` modules are loaded, run this command: .. code-block:: console # lsmod | grep kvm If the output includes ``kvm_intel`` or ``kvm_amd``, the ``kvm`` hardware virtualization modules are loaded and your kernel meets the module requirements for OpenStack Compute. 
If the output does not show that the ``kvm`` module is loaded, run this command to load it: .. code-block:: console # modprobe -a kvm Run the command for your CPU. For Intel, run this command: .. code-block:: console # modprobe -a kvm-intel For AMD, run this command: .. code-block:: console # modprobe -a kvm-amd Because a KVM installation can change user group membership, you might need to log in again for changes to take effect. If the kernel modules do not load automatically, use the procedures listed in these subsections. If the checks indicate that required hardware virtualization support or kernel modules are disabled or unavailable, you must either enable this support on the system or find a system with this support. .. note:: Some systems require that you enable VT support in the system BIOS. If you believe your processor supports hardware acceleration but the previous command did not produce output, reboot your machine, enter the system BIOS, and enable the VT option. If KVM acceleration is not supported, configure Compute to use a different hypervisor, such as ``QEMU`` or ``Xen``. See :ref:`compute_qemu` or :ref:`compute_xen_api` for details. These procedures help you load the kernel modules for Intel-based and AMD-based processors if they do not load automatically during KVM installation. .. rubric:: Intel-based processors If your compute host is Intel-based, run these commands as root to load the kernel modules: .. code-block:: console # modprobe kvm # modprobe kvm-intel Add these lines to the ``/etc/modules`` file so that these modules load on reboot: .. code-block:: console kvm kvm-intel .. rubric:: AMD-based processors If your compute host is AMD-based, run these commands as root to load the kernel modules: .. code-block:: console # modprobe kvm # modprobe kvm-amd Add these lines to ``/etc/modules`` file so that these modules load on reboot: .. code-block:: console kvm kvm-amd For POWER based systems ----------------------- KVM as a hypervisor is supported on POWER system's PowerNV platform. #. To determine if your POWER platform supports KVM based virtualization run the following command: .. code-block:: console # cat /proc/cpuinfo | grep PowerNV If the previous command generates the following output, then CPU supports KVM based virtualization. .. code-block:: console platform: PowerNV If no output is displayed, then your POWER platform does not support KVM based hardware virtualization. #. To list the loaded kernel modules and verify that the ``kvm`` modules are loaded, run the following command: .. code-block:: console # lsmod | grep kvm If the output includes ``kvm_hv``, the ``kvm`` hardware virtualization modules are loaded and your kernel meets the module requirements for OpenStack Compute. If the output does not show that the ``kvm`` module is loaded, run the following command to load it: .. code-block:: console # modprobe -a kvm For PowerNV platform, run the following command: .. code-block:: console # modprobe -a kvm-hv Because a KVM installation can change user group membership, you might need to log in again for changes to take effect. Configure Compute backing storage ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Backing Storage is the storage used to provide the expanded operating system image, and any ephemeral storage. Inside the virtual machine, this is normally presented as two virtual hard disks (for example, ``/dev/vda`` and ``/dev/vdb`` respectively). 
However, inside OpenStack, this can be derived from one of these methods: ``lvm``, ``qcow``, ``rbd`` or ``flat``, chosen using the :oslo.config:option:`libvirt.images_type` option in ``nova.conf`` on the compute node. .. note:: The option ``raw`` is acceptable but deprecated in favor of ``flat``. The Flat back end uses either raw or QCOW2 storage. It never uses a backing store, so when using QCOW2 it copies an image rather than creating an overlay. By default, it creates raw files but will use QCOW2 when creating a disk from a QCOW2 if :oslo.config:option:`force_raw_images` is not set in configuration. QCOW is the default backing store. It uses a copy-on-write philosophy to delay allocation of storage until it is actually needed. This means that the space required for the backing of an image can be significantly less on the real disk than what seems available in the virtual machine operating system. Flat creates files without any sort of file formatting, effectively creating files with the plain binary one would normally see on a real disk. This can increase performance, but means that the entire size of the virtual disk is reserved on the physical disk. Local `LVM volumes `__ can also be used. Set the :oslo.config:option:`libvirt.images_volume_group` configuration option to the name of the LVM group you have created. Specify the CPU model of KVM guests ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The Compute service enables you to control the guest CPU model that is exposed to KVM virtual machines. Use cases include: * To maximize performance of virtual machines by exposing new host CPU features to the guest * To ensure a consistent default CPU across all machines, removing reliance of variable QEMU defaults In libvirt, the CPU is specified by providing a base CPU model name (which is a shorthand for a set of feature flags), a set of additional feature flags, and the topology (sockets/cores/threads). The libvirt KVM driver provides a number of standard CPU model names. These models are defined in the ``/usr/share/libvirt/cpu_map.xml`` file for libvirt prior to version 4.7.0 or ``/usr/share/libvirt/cpu_map/*.xml`` files thereafter. Make a check to determine which models are supported by your local installation. Two Compute configuration options in the :oslo.config:group:`libvirt` group of ``nova.conf`` define which type of CPU model is exposed to the hypervisor when using KVM: :oslo.config:option:`libvirt.cpu_mode` and :oslo.config:option:`libvirt.cpu_models`. The :oslo.config:option:`libvirt.cpu_mode` option can take one of the following values: ``none``, ``host-passthrough``, ``host-model``, and ``custom``. See `Effective Virtual CPU configuration in Nova`_ for a recorded presentation about this topic. .. _Effective Virtual CPU configuration in Nova: https://www.openstack.org/videos/summits/berlin-2018/effective-virtual-cpu-configuration-in-nova Host model (default for KVM & QEMU) ----------------------------------- If your ``nova.conf`` file contains ``cpu_mode=host-model``, libvirt identifies the CPU model in ``/usr/share/libvirt/cpu_map.xml`` for version prior to 4.7.0 or ``/usr/share/libvirt/cpu_map/*.xml`` for version 4.7.0 and higher that most closely matches the host, and requests additional CPU flags to complete the match. This configuration provides the maximum functionality and performance and maintains good reliability. With regard to enabling and facilitating live migration between compute nodes, you should assess whether ``host-model`` is suitable for your compute architecture. 
In general, using ``host-model`` is a safe choice if your compute node CPUs are largely identical. However, if your compute nodes span multiple processor generations, you may be better advised to select a ``custom`` CPU model. Host pass through ----------------- If your ``nova.conf`` file contains ``cpu_mode=host-passthrough``, libvirt tells KVM to pass through the host CPU with no modifications. The difference from ``host-model`` is that instead of just matching feature flags, every last detail of the host CPU is matched. This gives the best performance, and can be important to some apps which check low level CPU details, but it comes at a cost with respect to migration. In ``host-passthrough`` mode, the guest can only be live-migrated to a target host that matches the source host extremely closely. This definitely includes the physical CPU model and running microcode, and may even include the running kernel. Use this mode only if * your compute nodes have a very large degree of homogeneity (i.e. substantially all of your compute nodes use the exact same CPU generation and model), and you make sure to only live-migrate between hosts with exactly matching kernel versions, *or* * you decide, for some reason and against established best practices, that your compute infrastructure should not support any live migration at all. Custom ------ If :file:`nova.conf` contains :oslo.config:option:`libvirt.cpu_mode`\ =custom, you can explicitly specify an ordered list of supported named models using the :oslo.config:option:`libvirt.cpu_models` configuration option. It is expected that the list is ordered so that the more common and less advanced cpu models are listed earlier. An end user can specify required CPU features through traits. When specified, the libvirt driver will select the first cpu model in the :oslo.config:option:`libvirt.cpu_models` list that can provide the requested feature traits. If no CPU feature traits are specified, then the instance will be configured with the first cpu model in the list. For example, if specifying CPU features ``avx`` and ``avx2`` as follows: .. code-block:: console $ openstack flavor set FLAVOR_ID --property trait:HW_CPU_X86_AVX=required \ --property trait:HW_CPU_X86_AVX2=required and :oslo.config:option:`libvirt.cpu_models` is configured like this: .. code-block:: ini [libvirt] cpu_mode = custom cpu_models = Penryn,IvyBridge,Haswell,Broadwell,Skylake-Client Then ``Haswell``, the first cpu model supporting both ``avx`` and ``avx2``, will be chosen by libvirt. In selecting the ``custom`` mode, along with a :oslo.config:option:`libvirt.cpu_models` that matches the oldest of your compute node CPUs, you can ensure that live migration between compute nodes will always be possible. However, you should ensure that the :oslo.config:option:`libvirt.cpu_models` you select passes the correct CPU feature flags to the guest. If you need to further tweak your CPU feature flags in the ``custom`` mode, see `Set CPU feature flags`_. .. note:: If :oslo.config:option:`libvirt.cpu_models` is configured, the CPU models in the list need to be compatible with the host CPU. Also, if :oslo.config:option:`libvirt.cpu_model_extra_flags` is configured, all flags need to be compatible with the host CPU. If incompatible CPU models or flags are specified, the nova service will raise an error and fail to start.
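If you are unsure which named CPU models your local libvirt installation provides, and which of them the host can actually run, one way to check is with ``virsh`` (shown as a sketch; the exact output depends on the libvirt version and host CPU):

.. code-block:: console

   $ virsh cpu-models x86_64
   $ virsh domcapabilities | grep -i usable

The first command lists the named models libvirt knows for the given architecture, while the second reports which of them are usable on this host.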
None (default for all libvirt-driven hypervisors other than KVM & QEMU) ----------------------------------------------------------------------- If your ``nova.conf`` file contains ``cpu_mode=none``, libvirt does not specify a CPU model. Instead, the hypervisor chooses the default model. Set CPU feature flags ~~~~~~~~~~~~~~~~~~~~~ Regardless of whether your selected :oslo.config:option:`libvirt.cpu_mode` is ``host-passthrough``, ``host-model``, or ``custom``, it is also possible to selectively enable additional feature flags. Suppose your selected ``custom`` CPU model is ``IvyBridge``, which normally does not enable the ``pcid`` feature flag --- but you do want to pass ``pcid`` into your guest instances. In that case, you would set: .. code-block:: ini [libvirt] cpu_mode = custom cpu_models = IvyBridge cpu_model_extra_flags = pcid Nested guest support ~~~~~~~~~~~~~~~~~~~~ You may choose to enable support for nested guests --- that is, allow your Nova instances to themselves run hardware-accelerated virtual machines with KVM. Doing so requires a module parameter on your KVM kernel module, and corresponding ``nova.conf`` settings. Nested guest support in the KVM kernel module --------------------------------------------- To enable nested KVM guests, your compute node must load the ``kvm_intel`` or ``kvm_amd`` module with ``nested=1``. You can enable the ``nested`` parameter permanently, by creating a file named ``/etc/modprobe.d/kvm.conf`` and populating it with the following content: .. code-block:: none options kvm_intel nested=1 options kvm_amd nested=1 A reboot may be required for the change to become effective. Nested guest support in ``nova.conf`` ------------------------------------- To support nested guests, you must set your :oslo.config:option:`libvirt.cpu_mode` configuration to one of the following options: Host pass through In this mode, nested virtualization is automatically enabled once the KVM kernel module is loaded with nesting support. .. code-block:: ini [libvirt] cpu_mode = host-passthrough However, do consider the other implications that `Host pass through`_ mode has on compute functionality. Host model In this mode, nested virtualization is automatically enabled once the KVM kernel module is loaded with nesting support, **if** the matching CPU model exposes the ``vmx`` feature flag to guests by default (you can verify this with ``virsh capabilities`` on your compute node). If your CPU model does not pass in the ``vmx`` flag, you can force it with :oslo.config:option:`libvirt.cpu_model_extra_flags`: .. code-block:: ini [libvirt] cpu_mode = host-model cpu_model_extra_flags = vmx Again, consider the other implications that apply to the `Host model (default for KVM & Qemu)`_ mode. Custom In custom mode, the same considerations apply as in host-model mode, but you may *additionally* want to ensure that libvirt passes not only the ``vmx``, but also the ``pcid`` flag to its guests: .. code-block:: ini [libvirt] cpu_mode = custom cpu_models = IvyBridge cpu_model_extra_flags = vmx,pcid Nested guest support limitations -------------------------------- When enabling nested guests, you should be aware of (and inform your users about) certain limitations that are currently inherent to nested KVM virtualization. Most importantly, guests using nested virtualization will, *while nested guests are running*, * fail to complete live migration; * fail to resume from suspend. See `the KVM documentation `_ for more information on these limitations. .. 
_amd-sev: AMD SEV (Secure Encrypted Virtualization) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ `Secure Encrypted Virtualization (SEV)`__ is a technology from AMD which enables the memory for a VM to be encrypted with a key unique to the VM. SEV is particularly applicable to cloud computing since it can reduce the amount of trust VMs need to place in the hypervisor and administrator of their host system. __ https://developer.amd.com/sev/ Nova supports SEV from the Train release onwards. Requirements for SEV -------------------- First the operator will need to ensure the following prerequisites are met: - At least one of the Nova compute hosts must be AMD hardware capable of supporting SEV. It is entirely possible for the compute plane to be a mix of hardware which can and cannot support SEV, although as per the section on `Permanent limitations`_ below, the maximum number of simultaneously running guests with SEV will be limited by the quantity and quality of SEV-capable hardware available. - An appropriately configured software stack on those compute hosts, so that the various layers are all SEV ready: - kernel >= 4.16 - QEMU >= 2.12 - libvirt >= 4.5 - ovmf >= commit 75b7aa9528bd 2018-07-06 .. _deploying-sev-capable-infrastructure: Deploying SEV-capable infrastructure ------------------------------------ In order for users to be able to use SEV, the operator will need to perform the following steps: - Ensure that sufficient memory is reserved on the SEV compute hosts for host-level services to function correctly at all times. This is particularly important when hosting SEV-enabled guests, since they pin pages in RAM, preventing any memory overcommit which may be in normal operation on other compute hosts. It is `recommended`__ to achieve this by configuring an ``rlimit`` at the ``/machine.slice`` top-level ``cgroup`` on the host, with all VMs placed inside that. (For extreme detail, see `this discussion on the spec`__.) __ http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html#memory-reservation-solutions __ https://review.opendev.org/#/c/641994/2/specs/train/approved/amd-sev-libvirt-support.rst@167 An alternative approach is to configure the :oslo.config:option:`reserved_host_memory_mb` option in the ``[DEFAULT]`` section of :file:`nova.conf`, based on the expected maximum number of SEV guests simultaneously running on the host, and the details provided in `an earlier version of the AMD SEV spec`__ regarding memory region sizes, which cover how to calculate it correctly. __ https://specs.openstack.org/openstack/nova-specs/specs/stein/approved/amd-sev-libvirt-support.html#proposed-change See `the Memory Locking and Accounting section of the AMD SEV spec`__ and `previous discussion for further details`__. __ http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html#memory-locking-and-accounting __ https://review.opendev.org/#/c/641994/2/specs/train/approved/amd-sev-libvirt-support.rst@167 - A cloud administrator will need to define one or more SEV-enabled flavors :ref:`as described in the user guide `, unless it is sufficient for users to define SEV-enabled images. Additionally the cloud operator should consider the following optional steps: .. _num_memory_encrypted_guests: - Configure the :oslo.config:option:`libvirt.num_memory_encrypted_guests` option in :file:`nova.conf` to represent the number of guests an SEV compute node can host concurrently with memory encrypted at the hardware level. For example: .. 
code-block:: ini [libvirt] num_memory_encrypted_guests = 15 This option exists because on AMD SEV-capable hardware, the memory controller has a fixed number of slots for holding encryption keys, one per guest. For example, at the time of writing, earlier generations of hardware only have 15 slots, thereby limiting the number of SEV guests which can be run concurrently to 15. Nova needs to track how many slots are available and used in order to avoid attempting to exceed that limit in the hardware. At the time of writing (September 2019), work is in progress to allow QEMU and libvirt to expose the number of slots available on SEV hardware; however until this is finished and released, it will not be possible for Nova to programmatically detect the correct value. So this configuration option serves as a stop-gap, allowing the cloud operator the option of providing this value manually. It may later be demoted to a fallback value for cases where the limit cannot be detected programmatically, or even removed altogether when Nova's minimum QEMU version guarantees that it can always be detected. .. note:: When deciding whether to use the default of ``None`` or manually impose a limit, operators should carefully weigh the benefits vs. the risk. The benefits of using the default are a) immediate convenience since nothing needs to be done now, and b) convenience later when upgrading compute hosts to future versions of Nova, since again nothing will need to be done for the correct limit to be automatically imposed. However the risk is that until auto-detection is implemented, users may be able to attempt to launch guests with encrypted memory on hosts which have already reached the maximum number of guests simultaneously running with encrypted memory. This risk may be mitigated by other limitations which operators can impose, for example if the smallest RAM footprint of any flavor imposes a maximum number of simultaneously running guests which is less than or equal to the SEV limit. - Configure :oslo.config:option:`libvirt.hw_machine_type` on all SEV-capable compute hosts to include ``x86_64=q35``, so that all x86_64 images use the ``q35`` machine type by default. (Currently Nova defaults to the ``pc`` machine type for the ``x86_64`` architecture, although `it is expected that this will change in the future`__.) Changing the default from ``pc`` to ``q35`` makes the creation and configuration of images by users more convenient by removing the need for the ``hw_machine_type`` property to be set to ``q35`` on every image for which SEV booting is desired. .. caution:: Consider carefully whether to set this option. It is particularly important since a limitation of the implementation prevents the user from receiving an error message with a helpful explanation if they try to boot an SEV guest when neither this configuration option nor the image property are set to select a ``q35`` machine type. On the other hand, setting it to ``q35`` may have other undesirable side-effects on other images which were expecting to be booted with ``pc``, so it is suggested to set it on a single compute node or aggregate, and perform careful testing of typical images before rolling out the setting to all SEV-capable compute hosts. 
__ https://bugs.launchpad.net/nova/+bug/1780138 Launching SEV instances ----------------------- Once an operator has covered the above steps, users can launch SEV instances either by requesting a flavor for which the operator set the ``hw:mem_encryption`` extra spec to ``True``, or by using an image with the ``hw_mem_encryption`` property set to ``True``. These do not inherently cause a preference for SEV-capable hardware, but for now SEV is the only way of fulfilling the requirement for memory encryption. However in the future, support for other hardware-level guest memory encryption technology such as Intel MKTME may be added. If a guest specifically needs to be booted using SEV rather than any other memory encryption technology, it is possible to ensure this by adding ``trait:HW_CPU_X86_AMD_SEV=required`` to the flavor extra specs or image properties. In all cases, SEV instances can only be booted from images which have the ``hw_firmware_type`` property set to ``uefi``, and only when the machine type is set to ``q35``. This can be set per image by setting the image property ``hw_machine_type=q35``, or per compute node by the operator via :oslo.config:option:`libvirt.hw_machine_type` as explained above. Impermanent limitations ----------------------- The following limitations may be removed in the future as the hardware, firmware, and various layers of software receive new features: - SEV-encrypted VMs cannot yet be live-migrated or suspended, therefore they will need to be fully shut down before migrating off an SEV host, e.g. if maintenance is required on the host. - SEV-encrypted VMs cannot contain directly accessible host devices (PCI passthrough). So for example mdev vGPU support will not currently work. However technologies based on `vhost-user`__ should work fine. __ https://wiki.qemu.org/Features/VirtioVhostUser - The boot disk of SEV-encrypted VMs can only be ``virtio``. (``virtio-blk`` is typically the default for libvirt disks on x86, but can also be explicitly set e.g. via the image property ``hw_disk_bus=virtio``). Valid alternatives for the disk include using ``hw_disk_bus=scsi`` with ``hw_scsi_model=virtio-scsi`` , or ``hw_disk_bus=sata``. - QEMU and libvirt cannot yet expose the number of slots available for encrypted guests in the memory controller on SEV hardware. Until this is implemented, it is not possible for Nova to programmatically detect the correct value. As a short-term workaround, operators can optionally manually specify the upper limit of SEV guests for each compute host, via the new :oslo.config:option:`libvirt.num_memory_encrypted_guests` configuration option :ref:`described above `. Permanent limitations --------------------- The following limitations are expected long-term: - The number of SEV guests allowed to run concurrently will always be limited. `On the first generation of EPYC machines it will be limited to 15 guests`__; however this limit becomes much higher with the second generation (Rome). __ https://www.redhat.com/archives/libvir-list/2019-January/msg00652.html - The operating system running in an encrypted virtual machine must contain SEV support. 
Non-limitations --------------- For the sake of eliminating any doubt, the following actions are *not* expected to be limited when SEV encryption is used: - Cold migration or shelve, since they power off the VM before the operation at which point there is no encrypted memory (although this could change since there is work underway to add support for `PMEM `_) - Snapshot, since it only snapshots the disk - ``nova evacuate`` (despite the name, more akin to resurrection than evacuation), since this is only initiated when the VM is no longer running - Attaching any volumes, as long as they do not require attaching via an IDE bus - Use of spice / VNC / serial / RDP consoles - `VM guest virtual NUMA (a.k.a. vNUMA) `_ For further technical details, see `the nova spec for SEV support`__. __ http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html Guest agent support ~~~~~~~~~~~~~~~~~~~ Use guest agents to enable optional access between compute nodes and guests through a socket, using the QMP protocol. To enable this feature, you must set ``hw_qemu_guest_agent=yes`` as a metadata parameter on the image you wish to use to create the guest-agent-capable instances from. You can explicitly disable the feature by setting ``hw_qemu_guest_agent=no`` in the image metadata. KVM performance tweaks ~~~~~~~~~~~~~~~~~~~~~~ The `VHostNet `_ kernel module improves network performance. To load the kernel module, run the following command as root: .. code-block:: console # modprobe vhost_net Troubleshoot KVM ~~~~~~~~~~~~~~~~ Trying to launch a new virtual machine instance fails with the ``ERROR`` state, and the following error appears in the ``/var/log/nova/nova-compute.log`` file: .. code-block:: console libvirtError: internal error no supported architecture for os type 'hvm' This message indicates that the KVM kernel modules were not loaded. If you cannot start VMs after installation without rebooting, the permissions might not be set correctly. This can happen if you load the KVM module before you install ``nova-compute``. To check whether the group is set to ``kvm``, run: .. code-block:: console # ls -l /dev/kvm If it is not set to ``kvm``, run: .. code-block:: console # udevadm trigger ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/configuration/hypervisor-lxc.rst0000664000175000017500000000310000000000000024314 0ustar00zuulzuul00000000000000====================== LXC (Linux containers) ====================== LXC (also known as Linux containers) is a virtualization technology that works at the operating system level. This is different from hardware virtualization, the approach used by other hypervisors such as KVM, Xen, and VMware. LXC (as currently implemented using libvirt in the Compute service) is not a secure virtualization technology for multi-tenant environments (specifically, containers may affect resource quotas for other containers hosted on the same machine). Additional containment technologies, such as AppArmor, may be used to provide better isolation between containers, although this is not the case by default. For all these reasons, the choice of this virtualization technology is not recommended in production. If your compute hosts do not have hardware support for virtualization, LXC will likely provide better performance than QEMU. 
In addition, if your guests must access specialized hardware, such as GPUs, this might be easier to achieve with LXC than other hypervisors. .. note:: Some OpenStack Compute features might be missing when running with LXC as the hypervisor. See the `hypervisor support matrix `_ for details. To enable LXC, ensure the following options are set in ``/etc/nova/nova.conf`` on all hosts running the ``nova-compute`` service. .. code-block:: ini compute_driver = libvirt.LibvirtDriver [libvirt] virt_type = lxc On Ubuntu, enable LXC support in OpenStack by installing the ``nova-compute-lxc`` package. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/configuration/hypervisor-powervm.rst0000664000175000017500000000453400000000000025241 0ustar00zuulzuul00000000000000PowerVM ======= Introduction ------------ OpenStack Compute supports the PowerVM hypervisor through `NovaLink`_. In the NovaLink architecture, a thin NovaLink virtual machine running on the Power system manages virtualization for that system. The ``nova-compute`` service can be installed on the NovaLink virtual machine and configured to use the PowerVM compute driver. No external management element (e.g. Hardware Management Console) is needed. .. _NovaLink: https://www.ibm.com/support/knowledgecenter/en/POWER8/p8eig/p8eig_kickoff.htm Configuration ------------- In order to function properly, the ``nova-compute`` service must be executed by a member of the ``pvm_admin`` group. Use the ``usermod`` command to add the user. For example, to add the ``stacker`` user to the ``pvm_admin`` group, execute:: sudo usermod -a -G pvm_admin stacker The user must re-login for the change to take effect. To enable the PowerVM compute driver, set the following configuration option in the ``/etc/nova/nova.conf`` file: .. code-block:: ini [Default] compute_driver = powervm.PowerVMDriver The PowerVM driver supports two types of storage for ephemeral disks: ``localdisk`` or ``ssp``. If ``localdisk`` is selected, you must specify which volume group should be used. E.g.: .. code-block:: ini [powervm] disk_driver = localdisk volume_group_name = openstackvg .. note:: Using the ``rootvg`` volume group is strongly discouraged since ``rootvg`` is used by the management partition and filling this will cause failures. The PowerVM driver also supports configuring the default amount of physical processor compute power (known as "proc units") which will be given to each vCPU. This value will be used if the requested flavor does not specify the ``powervm:proc_units`` extra-spec. A factor value of 1.0 means a whole physical processor, whereas 0.05 means 1/20th of a physical processor. E.g.: .. code-block:: ini [powervm] proc_units_factor = 0.1 Volume Support -------------- Volume support is provided for the PowerVM virt driver via Cinder. Currently, the only supported volume protocol is `vSCSI`_ Fibre Channel. Attach, detach, and extend are the operations supported by the PowerVM vSCSI FC volume adapter. :term:`Boot From Volume` is not yet supported. .. _vSCSI: https://www.ibm.com/support/knowledgecenter/en/POWER8/p8hat/p8hat_virtualscsi.htm ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/configuration/hypervisor-qemu.rst0000664000175000017500000000256000000000000024506 0ustar00zuulzuul00000000000000.. 
_compute_qemu: ==== QEMU ==== From the perspective of the Compute service, the QEMU hypervisor is very similar to the KVM hypervisor. Both are controlled through libvirt, both support the same feature set, and all virtual machine images that are compatible with KVM are also compatible with QEMU. The main difference is that QEMU does not support native virtualization. Consequently, QEMU has worse performance than KVM and is a poor choice for a production deployment. The typical use cases for QEMU are * Running on older hardware that lacks virtualization support. * Running the Compute service inside of a virtual machine for development or testing purposes, where the hypervisor does not support native virtualization for guests. To enable QEMU, add these settings to ``nova.conf``: .. code-block:: ini compute_driver = libvirt.LibvirtDriver [libvirt] virt_type = qemu For some operations you may also have to install the :command:`guestmount` utility: On Ubuntu: .. code-block:: console # apt-get install guestmount On Red Hat Enterprise Linux, Fedora, or CentOS: .. code-block:: console # yum install libguestfs-tools On openSUSE: .. code-block:: console # zypper install guestfs-tools The QEMU hypervisor supports the following virtual machine image formats: * Raw * QEMU Copy-on-write (qcow2) * VMware virtual machine disk format (vmdk) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/configuration/hypervisor-virtuozzo.rst0000664000175000017500000000210200000000000025622 0ustar00zuulzuul00000000000000========= Virtuozzo ========= Virtuozzo 7.0.0 (or newer), or its community edition OpenVZ, provides both types of virtualization: Kernel Virtual Machines and OS Containers. The type of instance to spawn is chosen depending on the ``hw_vm_type`` property of an image. .. note:: Some OpenStack Compute features may be missing when running with Virtuozzo as the hypervisor. See :doc:`/user/support-matrix` for details. To enable Virtuozzo Containers, set the following options in ``/etc/nova/nova.conf`` on all hosts running the ``nova-compute`` service. .. code-block:: ini compute_driver = libvirt.LibvirtDriver force_raw_images = False [libvirt] virt_type = parallels images_type = ploop connection_uri = parallels:///system inject_partition = -2 To enable Virtuozzo Virtual Machines, set the following options in ``/etc/nova/nova.conf`` on all hosts running the ``nova-compute`` service. .. code-block:: ini compute_driver = libvirt.LibvirtDriver [libvirt] virt_type = parallels images_type = qcow2 connection_uri = parallels:///system ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/configuration/hypervisor-vmware.rst0000664000175000017500000010635500000000000025043 0ustar00zuulzuul00000000000000============== VMware vSphere ============== Introduction ~~~~~~~~~~~~ OpenStack Compute supports the VMware vSphere product family and enables access to advanced features such as vMotion, High Availability, and Dynamic Resource Scheduling (DRS). This section describes how to configure VMware-based virtual machine images for launch. The VMware driver supports vCenter version 5.5.0 and later. The VMware vCenter driver enables the ``nova-compute`` service to communicate with a VMware vCenter server that manages one or more ESX host clusters.
The driver aggregates the ESX hosts in each cluster to present one large hypervisor entity for each cluster to the Compute scheduler. Because individual ESX hosts are not exposed to the scheduler, Compute schedules to the granularity of clusters and vCenter uses DRS to select the actual ESX host within the cluster. When a virtual machine makes its way into a vCenter cluster, it can use all vSphere features. The following sections describe how to configure the VMware vCenter driver. High-level architecture ~~~~~~~~~~~~~~~~~~~~~~~ The following diagram shows a high-level view of the VMware driver architecture: .. rubric:: VMware driver architecture .. figure:: /_static/images/vmware-nova-driver-architecture.jpg :width: 100% As the figure shows, the OpenStack Compute Scheduler sees three hypervisors that each correspond to a cluster in vCenter. ``nova-compute`` contains the VMware driver. You can run with multiple ``nova-compute`` services. It is recommended to run with one ``nova-compute`` service per ESX cluster thus ensuring that while Compute schedules at the granularity of the ``nova-compute`` service it is also in effect able to schedule at the cluster level. In turn the VMware driver inside ``nova-compute`` interacts with the vCenter APIs to select an appropriate ESX host within the cluster. Internally, vCenter uses DRS for placement. The VMware vCenter driver also interacts with the Image service to copy VMDK images from the Image service back-end store. The dotted line in the figure represents VMDK images being copied from the OpenStack Image service to the vSphere data store. VMDK images are cached in the data store so the copy operation is only required the first time that the VMDK image is used. After OpenStack boots a VM into a vSphere cluster, the VM becomes visible in vCenter and can access vSphere advanced features. At the same time, the VM is visible in the OpenStack dashboard and you can manage it as you would any other OpenStack VM. You can perform advanced vSphere operations in vCenter while you configure OpenStack resources such as VMs through the OpenStack dashboard. The figure does not show how networking fits into the architecture. For details, see :ref:`vmware-networking`. Configuration overview ~~~~~~~~~~~~~~~~~~~~~~ To get started with the VMware vCenter driver, complete the following high-level steps: #. Configure vCenter. See :ref:`vmware-prereqs`. #. Configure the VMware vCenter driver in the ``nova.conf`` file. See :ref:`vmware-vcdriver`. #. Load desired VMDK images into the Image service. See :ref:`vmware-images`. #. Configure the Networking service (neutron). See :ref:`vmware-networking`. .. _vmware-prereqs: Prerequisites and limitations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Use the following list to prepare a vSphere environment that runs with the VMware vCenter driver: Copying VMDK files In vSphere 5.1, copying large image files (for example, 12 GB and greater) from the Image service can take a long time. To improve performance, VMware recommends that you upgrade to VMware vCenter Server 5.1 Update 1 or later. For more information, see the `Release Notes `_. DRS For any cluster that contains multiple ESX hosts, enable DRS and enable fully automated placement. Shared storage Only shared storage is supported and data stores must be shared among all hosts in a cluster. It is recommended to remove data stores not intended for OpenStack from clusters being configured for OpenStack. Clusters and data stores Do not use OpenStack clusters and data stores for other purposes. 
If you do, OpenStack displays incorrect usage information. Networking The networking configuration depends on the desired networking model. See :ref:`vmware-networking`. Security groups If you use the VMware driver with OpenStack Networking and the NSX plug-in, security groups are supported. .. note:: The NSX plug-in is the only plug-in that is validated for vSphere. VNC The port range 5900 - 6105 (inclusive) is automatically enabled for VNC connections on every ESX host in all clusters under OpenStack control. .. note:: In addition to the default VNC port numbers (5900 to 6000) specified in the above document, the following ports are also used: 6101, 6102, and 6105. You must modify the ESXi firewall configuration to allow the VNC ports. Additionally, for the firewall modifications to persist after a reboot, you must create a custom vSphere Installation Bundle (VIB) which is then installed onto the running ESXi host or added to a custom image profile used to install ESXi hosts. For details about how to create a VIB for persisting the firewall configuration modifications, see `Knowledge Base `_. .. note:: The VIB can be downloaded from `openstack-vmwareapi-team/Tools `_. To use multiple vCenter installations with OpenStack, each vCenter must be assigned to a separate availability zone. This is required as the OpenStack Block Storage VMDK driver does not currently work across multiple vCenter installations. VMware vCenter service account ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ OpenStack integration requires a vCenter service account with the following minimum permissions. Apply the permissions to the ``Datacenter`` root object, and select the :guilabel:`Propagate to Child Objects` option. .. list-table:: vCenter permissions tree :header-rows: 1 :widths: 12, 12, 40, 36 * - All Privileges - - - * - - Datastore - - * - - - Allocate space - * - - - Browse datastore - * - - - Low level file operation - * - - - Remove file - * - - Extension - - * - - - Register extension - * - - Folder - - * - - - Create folder - * - - Host - - * - - - Configuration - * - - - - Maintenance * - - - - Network configuration * - - - - Storage partition configuration * - - Network - - * - - - Assign network - * - - Resource - - * - - - Assign virtual machine to resource pool - * - - - Migrate powered off virtual machine - * - - - Migrate powered on virtual machine - * - - Virtual Machine - - * - - - Configuration - * - - - - Add existing disk * - - - - Add new disk * - - - - Add or remove device * - - - - Advanced * - - - - CPU count * - - - - Change resource * - - - - Disk change tracking * - - - - Host USB device * - - - - Memory * - - - - Modify device settings * - - - - Raw device * - - - - Remove disk * - - - - Rename * - - - - Set annotation * - - - - Swapfile placement * - - - Interaction - * - - - - Configure CD media * - - - - Power Off * - - - - Power On * - - - - Reset * - - - - Suspend * - - - Inventory - * - - - - Create from existing * - - - - Create new * - - - - Move * - - - - Remove * - - - - Unregister * - - - Provisioning - * - - - - Clone virtual machine * - - - - Customize * - - - - Create template from virtual machine * - - - Snapshot management - * - - - - Create snapshot * - - - - Remove snapshot * - - Profile-driven storage - - * - - - Profile-driven storage view - * - - Sessions - - * - - - - Validate session * - - - - View and stop sessions * - - vApp - - * - - - Export - * - - - Import - .. 
_vmware-vcdriver: VMware vCenter driver ~~~~~~~~~~~~~~~~~~~~~ Use the VMware vCenter driver (VMwareVCDriver) to connect OpenStack Compute with vCenter. This recommended configuration enables access through vCenter to advanced vSphere features like vMotion, High Availability, and Dynamic Resource Scheduling (DRS). VMwareVCDriver configuration options ------------------------------------ Add the following VMware-specific configuration options to the ``nova.conf`` file: .. code-block:: ini [DEFAULT] compute_driver = vmwareapi.VMwareVCDriver [vmware] host_ip = host_username = host_password = cluster_name = datastore_regex = .. note:: * Clusters: The vCenter driver can support only a single cluster. Clusters and data stores used by the vCenter driver should not contain any VMs other than those created by the driver. * Data stores: The ``datastore_regex`` setting specifies the data stores to use with Compute. For example, ``datastore_regex="nas.*"`` selects all the data stores that have a name starting with "nas". If this line is omitted, Compute uses the first data store returned by the vSphere API. It is recommended not to use this field and instead remove data stores that are not intended for OpenStack. * Reserved host memory: The ``reserved_host_memory_mb`` option value is 512 MB by default. However, VMware recommends that you set this option to 0 MB because the vCenter driver reports the effective memory available to the virtual machines. * The vCenter driver generates instance name by instance ID. Instance name template is ignored. * The minimum supported vCenter version is 5.5.0. Starting in the OpenStack Ocata release any version lower than 5.5.0 will be logged as a warning. In the OpenStack Pike release this will be enforced. A ``nova-compute`` service can control one or more clusters containing multiple ESXi hosts, making ``nova-compute`` a critical service from a high availability perspective. Because the host that runs ``nova-compute`` can fail while the vCenter and ESX still run, you must protect the ``nova-compute`` service against host failures. .. note:: Many ``nova.conf`` options are relevant to libvirt but do not apply to this driver. .. _vmware-images: Images with VMware vSphere ~~~~~~~~~~~~~~~~~~~~~~~~~~ The vCenter driver supports images in the VMDK format. Disks in this format can be obtained from VMware Fusion or from an ESX environment. It is also possible to convert other formats, such as qcow2, to the VMDK format using the ``qemu-img`` utility. After a VMDK disk is available, load it into the Image service. Then, you can use it with the VMware vCenter driver. The following sections provide additional details on the supported disks and the commands used for conversion and upload. Supported image types --------------------- Upload images to the OpenStack Image service in VMDK format. The following VMDK disk types are supported: * ``VMFS Flat Disks`` (includes thin, thick, zeroedthick, and eagerzeroedthick). Note that once a VMFS thin disk is exported from VMFS to a non-VMFS location, like the OpenStack Image service, it becomes a preallocated flat disk. This impacts the transfer time from the Image service to the data store when the full preallocated flat disk, rather than the thin disk, must be transferred. * ``Monolithic Sparse disks``. Sparse disks get imported from the Image service into ESXi as thin provisioned disks. 
Monolithic Sparse disks can be obtained from VMware Fusion or can be created by converting from other virtual disk formats using the ``qemu-img`` utility. * ``Stream-optimized disks``. Stream-optimized disks are compressed sparse disks. They can be obtained from VMware vCenter/ESXi when exporting vm to ovf/ova template. The following table shows the ``vmware_disktype`` property that applies to each of the supported VMDK disk types: .. list-table:: OpenStack Image service disk type settings :header-rows: 1 * - vmware_disktype property - VMDK disk type * - sparse - Monolithic Sparse * - thin - VMFS flat, thin provisioned * - preallocated (default) - VMFS flat, thick/zeroedthick/eagerzeroedthick * - streamOptimized - Compressed Sparse The ``vmware_disktype`` property is set when an image is loaded into the Image service. For example, the following command creates a Monolithic Sparse image by setting ``vmware_disktype`` to ``sparse``: .. code-block:: console $ openstack image create \ --disk-format vmdk \ --container-format bare \ --property vmware_disktype="sparse" \ --property vmware_ostype="ubuntu64Guest" \ ubuntu-sparse < ubuntuLTS-sparse.vmdk .. note:: Specifying ``thin`` does not provide any advantage over ``preallocated`` with the current version of the driver. Future versions might restore the thin properties of the disk after it is downloaded to a vSphere data store. The following table shows the ``vmware_ostype`` property that applies to each of the supported guest OS: .. note:: If a glance image has a ``vmware_ostype`` property which does not correspond to a valid VMware guestId, VM creation will fail, and a warning will be logged. .. list-table:: OpenStack Image service OS type settings :header-rows: 1 * - vmware_ostype property - Retail Name * - asianux3_64Guest - Asianux Server 3 (64 bit) * - asianux3Guest - Asianux Server 3 * - asianux4_64Guest - Asianux Server 4 (64 bit) * - asianux4Guest - Asianux Server 4 * - darwin64Guest - Darwin 64 bit * - darwinGuest - Darwin * - debian4_64Guest - Debian GNU/Linux 4 (64 bit) * - debian4Guest - Debian GNU/Linux 4 * - debian5_64Guest - Debian GNU/Linux 5 (64 bit) * - debian5Guest - Debian GNU/Linux 5 * - dosGuest - MS-DOS * - freebsd64Guest - FreeBSD x64 * - freebsdGuest - FreeBSD * - mandrivaGuest - Mandriva Linux * - netware4Guest - Novell NetWare 4 * - netware5Guest - Novell NetWare 5.1 * - netware6Guest - Novell NetWare 6.x * - nld9Guest - Novell Linux Desktop 9 * - oesGuest - Open Enterprise Server * - openServer5Guest - SCO OpenServer 5 * - openServer6Guest - SCO OpenServer 6 * - opensuse64Guest - openSUSE (64 bit) * - opensuseGuest - openSUSE * - os2Guest - OS/2 * - other24xLinux64Guest - Linux 2.4x Kernel (64 bit) (experimental) * - other24xLinuxGuest - Linux 2.4x Kernel * - other26xLinux64Guest - Linux 2.6x Kernel (64 bit) (experimental) * - other26xLinuxGuest - Linux 2.6x Kernel (experimental) * - otherGuest - Other Operating System * - otherGuest64 - Other Operating System (64 bit) (experimental) * - otherLinux64Guest - Linux (64 bit) (experimental) * - otherLinuxGuest - Other Linux * - redhatGuest - Red Hat Linux 2.1 * - rhel2Guest - Red Hat Enterprise Linux 2 * - rhel3_64Guest - Red Hat Enterprise Linux 3 (64 bit) * - rhel3Guest - Red Hat Enterprise Linux 3 * - rhel4_64Guest - Red Hat Enterprise Linux 4 (64 bit) * - rhel4Guest - Red Hat Enterprise Linux 4 * - rhel5_64Guest - Red Hat Enterprise Linux 5 (64 bit) (experimental) * - rhel5Guest - Red Hat Enterprise Linux 5 * - rhel6_64Guest - Red Hat Enterprise Linux 6 (64 bit) * - 
rhel6Guest - Red Hat Enterprise Linux 6 * - sjdsGuest - Sun Java Desktop System * - sles10_64Guest - SUSE Linux Enterprise Server 10 (64 bit) (experimental) * - sles10Guest - SUSE Linux Enterprise Server 10 * - sles11_64Guest - SUSE Linux Enterprise Server 11 (64 bit) * - sles11Guest - SUSE Linux Enterprise Server 11 * - sles64Guest - SUSE Linux Enterprise Server 9 (64 bit) * - slesGuest - SUSE Linux Enterprise Server 9 * - solaris10_64Guest - Solaris 10 (64 bit) (experimental) * - solaris10Guest - Solaris 10 (32 bit) (experimental) * - solaris6Guest - Solaris 6 * - solaris7Guest - Solaris 7 * - solaris8Guest - Solaris 8 * - solaris9Guest - Solaris 9 * - suse64Guest - SUSE Linux (64 bit) * - suseGuest - SUSE Linux * - turboLinux64Guest - Turbolinux (64 bit) * - turboLinuxGuest - Turbolinux * - ubuntu64Guest - Ubuntu Linux (64 bit) * - ubuntuGuest - Ubuntu Linux * - unixWare7Guest - SCO UnixWare 7 * - win2000AdvServGuest - Windows 2000 Advanced Server * - win2000ProGuest - Windows 2000 Professional * - win2000ServGuest - Windows 2000 Server * - win31Guest - Windows 3.1 * - win95Guest - Windows 95 * - win98Guest - Windows 98 * - windows7_64Guest - Windows 7 (64 bit) * - windows7Guest - Windows 7 * - windows7Server64Guest - Windows Server 2008 R2 (64 bit) * - winLonghorn64Guest - Windows Longhorn (64 bit) (experimental) * - winLonghornGuest - Windows Longhorn (experimental) * - winMeGuest - Windows Millennium Edition * - winNetBusinessGuest - Windows Small Business Server 2003 * - winNetDatacenter64Guest - Windows Server 2003, Datacenter Edition (64 bit) (experimental) * - winNetDatacenterGuest - Windows Server 2003, Datacenter Edition * - winNetEnterprise64Guest - Windows Server 2003, Enterprise Edition (64 bit) * - winNetEnterpriseGuest - Windows Server 2003, Enterprise Edition * - winNetStandard64Guest - Windows Server 2003, Standard Edition (64 bit) * - winNetEnterpriseGuest - Windows Server 2003, Enterprise Edition * - winNetStandard64Guest - Windows Server 2003, Standard Edition (64 bit) * - winNetStandardGuest - Windows Server 2003, Standard Edition * - winNetWebGuest - Windows Server 2003, Web Edition * - winNTGuest - Windows NT 4 * - winVista64Guest - Windows Vista (64 bit) * - winVistaGuest - Windows Vista * - winXPHomeGuest - Windows XP Home Edition * - winXPPro64Guest - Windows XP Professional Edition (64 bit) * - winXPProGuest - Windows XP Professional Convert and load images ----------------------- Using the ``qemu-img`` utility, disk images in several formats (such as, qcow2) can be converted to the VMDK format. For example, the following command can be used to convert a `qcow2 Ubuntu Trusty cloud image `_: .. code-block:: console $ qemu-img convert -f qcow2 ~/Downloads/trusty-server-cloudimg-amd64-disk1.img \ -O vmdk trusty-server-cloudimg-amd64-disk1.vmdk VMDK disks converted through ``qemu-img`` are ``always`` monolithic sparse VMDK disks with an IDE adapter type. Using the previous example of the Ubuntu Trusty image after the ``qemu-img`` conversion, the command to upload the VMDK disk should be something like: .. code-block:: console $ openstack image create \ --container-format bare --disk-format vmdk \ --property vmware_disktype="sparse" \ --property vmware_adaptertype="ide" \ trusty-cloud < trusty-server-cloudimg-amd64-disk1.vmdk Note that the ``vmware_disktype`` is set to ``sparse`` and the ``vmware_adaptertype`` is set to ``ide`` in the previous command. 
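As a quick sanity check before uploading, you can also ask ``qemu-img`` to
describe the converted file; the ``file format`` field in its output should
report ``vmdk``. This extra step is purely illustrative and reuses the file
name from the conversion example above:

.. code-block:: console

   $ qemu-img info trusty-server-cloudimg-amd64-disk1.vmdk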
If the image did not come from the ``qemu-img`` utility, the ``vmware_disktype`` and ``vmware_adaptertype`` might be different. To determine the image adapter type from an image file, use the following command and look for the ``ddb.adapterType=`` line: .. code-block:: console $ head -20 Assuming a preallocated disk type and an iSCSI lsiLogic adapter type, the following command uploads the VMDK disk: .. code-block:: console $ openstack image create \ --disk-format vmdk \ --container-format bare \ --property vmware_adaptertype="lsiLogic" \ --property vmware_disktype="preallocated" \ --property vmware_ostype="ubuntu64Guest" \ ubuntu-thick-scsi < ubuntuLTS-flat.vmdk Currently, OS boot VMDK disks with an IDE adapter type cannot be attached to a virtual SCSI controller and likewise disks with one of the SCSI adapter types (such as, busLogic, lsiLogic, lsiLogicsas, paraVirtual) cannot be attached to the IDE controller. Therefore, as the previous examples show, it is important to set the ``vmware_adaptertype`` property correctly. The default adapter type is lsiLogic, which is SCSI, so you can omit the ``vmware_adaptertype`` property if you are certain that the image adapter type is lsiLogic. Tag VMware images ----------------- In a mixed hypervisor environment, OpenStack Compute uses the ``hypervisor_type`` tag to match images to the correct hypervisor type. For VMware images, set the hypervisor type to ``vmware``. Other valid hypervisor types include: ``hyperv``, ``ironic``, ``lxc``, ``qemu``, ``uml``, and ``xen``. Note that ``qemu`` is used for both QEMU and KVM hypervisor types. .. code-block:: console $ openstack image create \ --disk-format vmdk \ --container-format bare \ --property vmware_adaptertype="lsiLogic" \ --property vmware_disktype="preallocated" \ --property hypervisor_type="vmware" \ --property vmware_ostype="ubuntu64Guest" \ ubuntu-thick-scsi < ubuntuLTS-flat.vmdk Optimize images --------------- Monolithic Sparse disks are considerably faster to download but have the overhead of an additional conversion step. When imported into ESX, sparse disks get converted to VMFS flat thin provisioned disks. The download and conversion steps only affect the first launched instance that uses the sparse disk image. The converted disk image is cached, so subsequent instances that use this disk image can simply use the cached version. To avoid the conversion step (at the cost of longer download times) consider converting sparse disks to thin provisioned or preallocated disks before loading them into the Image service. Use one of the following tools to pre-convert sparse disks. vSphere CLI tools Sometimes called the remote CLI or rCLI. Assuming that the sparse disk is made available on a data store accessible by an ESX host, the following command converts it to preallocated format: .. code-block:: console vmkfstools --server=ip_of_some_ESX_host -i \ /vmfs/volumes/datastore1/sparse.vmdk \ /vmfs/volumes/datastore1/converted.vmdk Note that the vifs tool from the same CLI package can be used to upload the disk to be converted. The vifs tool can also be used to download the converted disk if necessary. ``vmkfstools`` directly on the ESX host If the SSH service is enabled on an ESX host, the sparse disk can be uploaded to the ESX data store through scp and the vmkfstools local to the ESX host can use used to perform the conversion. After you log in to the host through ssh, run this command: .. 
code-block:: console vmkfstools -i /vmfs/volumes/datastore1/sparse.vmdk /vmfs/volumes/datastore1/converted.vmdk ``vmware-vdiskmanager`` ``vmware-vdiskmanager`` is a utility that comes bundled with VMware Fusion and VMware Workstation. The following example converts a sparse disk to preallocated format: .. code-block:: console '/Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager' -r sparse.vmdk -t 4 converted.vmdk In the previous cases, the converted vmdk is actually a pair of files: * The descriptor file ``converted.vmdk``. * The actual virtual disk data file ``converted-flat.vmdk``. The file to be uploaded to the Image service is ``converted-flat.vmdk``. Image handling -------------- The ESX hypervisor requires a copy of the VMDK file in order to boot up a virtual machine. As a result, the vCenter OpenStack Compute driver must download the VMDK via HTTP from the Image service to a data store that is visible to the hypervisor. To optimize this process, the first time a VMDK file is used, it gets cached in the data store. A cached image is stored in a folder named after the image ID. Subsequent virtual machines that need the VMDK use the cached version and don't have to copy the file again from the Image service. Even with a cached VMDK, there is still a copy operation from the cache location to the hypervisor file directory in the shared data store. To avoid this copy, boot the image in linked_clone mode. To learn how to enable this mode, see :ref:`vmware-config`. .. note:: You can also use the ``img_linked_clone`` property (or legacy property ``vmware_linked_clone``) in the Image service to override the linked_clone mode on a per-image basis. If spawning a virtual machine image from ISO with a VMDK disk, the image is created and attached to the virtual machine as a blank disk. In that case ``img_linked_clone`` property for the image is just ignored. If multiple compute nodes are running on the same host, or have a shared file system, you can enable them to use the same cache folder on the back-end data store. To configure this action, set the ``cache_prefix`` option in the ``nova.conf`` file. Its value stands for the name prefix of the folder where cached images are stored. .. note:: This can take effect only if compute nodes are running on the same host, or have a shared file system. You can automatically purge unused images after a specified period of time. To configure this action, set these options in the :oslo.config:group`image_cache` section in the ``nova.conf`` file: * :oslo.config:option:`image_cache.remove_unused_base_images` * :oslo.config:option:`image_cache.remove_unused_original_minimum_age_seconds` .. _vmware-networking: Networking with VMware vSphere ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The VMware driver supports networking with the Networking Service (neutron). Depending on your installation, complete these configuration steps before you provision VMs: #. Before provisioning VMs, create a port group with the same name as the ``vmware.integration_bridge`` value in ``nova.conf`` (default is ``br-int``). All VM NICs are attached to this port group for management by the OpenStack Networking plug-in. Volumes with VMware vSphere ~~~~~~~~~~~~~~~~~~~~~~~~~~~ The VMware driver supports attaching volumes from the Block Storage service. The VMware VMDK driver for OpenStack Block Storage is recommended and should be used for managing volumes based on vSphere data stores. 
For more information about the VMware VMDK driver, see Cinder's manual on the VMDK Driver (TODO: this has not yet been imported and published). Also an iSCSI volume driver provides limited support and can be used only for attachments. .. _vmware-config: Configuration reference ~~~~~~~~~~~~~~~~~~~~~~~ To customize the VMware driver, use the configuration option settings below. .. TODO(sdague): for the import we just copied this in from the auto generated file. We probably need a strategy for doing equivalent autogeneration, but we don't as of yet. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _nova-vmware: .. list-table:: Description of VMware configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[vmware]** - * - ``api_retry_count`` = ``10`` - (Integer) Number of times VMware vCenter server API must be retried on connection failures, e.g. socket error, etc. * - ``ca_file`` = ``None`` - (String) Specifies the CA bundle file to be used in verifying the vCenter server certificate. * - ``cache_prefix`` = ``None`` - (String) This option adds a prefix to the folder where cached images are stored This is not the full path - just a folder prefix. This should only be used when a datastore cache is shared between compute nodes. .. note:: This should only be used when the compute nodes are running on same host or they have a shared file system. Possible values: * Any string representing the cache prefix to the folder * - ``cluster_name`` = ``None`` - (String) Name of a VMware Cluster ComputeResource. * - ``console_delay_seconds`` = ``None`` - (Integer) Set this value if affected by an increased network latency causing repeated characters when typing in a remote console. * - ``datastore_regex`` = ``None`` - (String) Regular expression pattern to match the name of datastore. The datastore_regex setting specifies the datastores to use with Compute. For example, datastore_regex="nas.*" selects all the data stores that have a name starting with "nas". .. note:: If no regex is given, it just picks the datastore with the most freespace. Possible values: * Any matching regular expression to a datastore must be given * - ``host_ip`` = ``None`` - (String) Hostname or IP address for connection to VMware vCenter host. * - ``host_password`` = ``None`` - (String) Password for connection to VMware vCenter host. * - ``host_port`` = ``443`` - (Port number) Port for connection to VMware vCenter host. * - ``host_username`` = ``None`` - (String) Username for connection to VMware vCenter host. * - ``insecure`` = ``False`` - (Boolean) If true, the vCenter server certificate is not verified. If false, then the default CA truststore is used for verification. Related options: * ca_file: This option is ignored if "ca_file" is set. * - ``integration_bridge`` = ``None`` - (String) This option should be configured only when using the NSX-MH Neutron plugin. This is the name of the integration bridge on the ESXi server or host. This should not be set for any other Neutron plugin. Hence the default value is not set. 
Possible values: * Any valid string representing the name of the integration bridge * - ``maximum_objects`` = ``100`` - (Integer) This option specifies the limit on the maximum number of objects to return in a single result. A positive value will cause the operation to suspend the retrieval when the count of objects reaches the specified limit. The server may still limit the count to something less than the configured value. Any remaining objects may be retrieved with additional requests. * - ``pbm_default_policy`` = ``None`` - (String) This option specifies the default policy to be used. If pbm_enabled is set and there is no defined storage policy for the specific request, then this policy will be used. Possible values: * Any valid storage policy such as VSAN default storage policy Related options: * pbm_enabled * - ``pbm_enabled`` = ``False`` - (Boolean) This option enables or disables storage policy based placement of instances. Related options: * pbm_default_policy * - ``pbm_wsdl_location`` = ``None`` - (String) This option specifies the PBM service WSDL file location URL. Setting this will disable storage policy based placement of instances. Possible values: * Any valid file path e.g file:///opt/SDK/spbm/wsdl/pbmService.wsdl * - ``serial_port_proxy_uri`` = ``None`` - (String) Identifies a proxy service that provides network access to the serial_port_service_uri. Possible values: * Any valid URI Related options: This option is ignored if serial_port_service_uri is not specified. * serial_port_service_uri * - ``serial_port_service_uri`` = ``None`` - (String) Identifies the remote system where the serial port traffic will be sent. This option adds a virtual serial port which sends console output to a configurable service URI. At the service URI address there will be virtual serial port concentrator that will collect console logs. If this is not set, no serial ports will be added to the created VMs. Possible values: * Any valid URI * - ``task_poll_interval`` = ``0.5`` - (Floating point) Time interval in seconds to poll remote tasks invoked on VMware VC server. * - ``use_linked_clone`` = ``True`` - (Boolean) This option enables/disables the use of linked clone. The ESX hypervisor requires a copy of the VMDK file in order to boot up a virtual machine. The compute driver must download the VMDK via HTTP from the OpenStack Image service to a datastore that is visible to the hypervisor and cache it. Subsequent virtual machines that need the VMDK use the cached version and don't have to copy the file again from the OpenStack Image service. If set to false, even with a cached VMDK, there is still a copy operation from the cache location to the hypervisor file directory in the shared datastore. If set to true, the above copy operation is avoided as it creates copy of the virtual machine that shares virtual disks with its parent VM. * - ``wsdl_location`` = ``None`` - (String) This option specifies VIM Service WSDL Location If vSphere API versions 5.1 and later is being used, this section can be ignored. If version is less than 5.1, WSDL files must be hosted locally and their location must be specified in the above section. Optional over-ride to default location for bug work-arounds. Possible values: * http:///vimService.wsdl * file:///opt/stack/vmware/SDK/wsdl/vim25/vimService.wsdl Troubleshooting ~~~~~~~~~~~~~~~ Operators can troubleshoot VMware specific failures by correlating OpenStack logs to vCenter logs. 
Every RPC call which is made by an OpenStack driver has an ``opID`` which can be traced in the vCenter logs. For example consider the following excerpt from a ``nova-compute`` log: .. code-block:: console Aug 15 07:31:09 localhost nova-compute[16683]: DEBUG oslo_vmware.service [-] Invoking Folder.CreateVM_Task with opID=oslo.vmware-debb6064-690e-45ac-b0ae-1b94a9638d1f {{(pid=16683) request_handler /opt/stack/oslo.vmware/oslo_vmware/service.py:355}} In this case the ``opID`` is ``oslo.vmware-debb6064-690e-45ac-b0ae-1b94a9638d1f`` and we can grep the vCenter log (usually ``/var/log/vmware/vpxd/vpxd.log``) for it to find if anything went wrong with the ``CreateVM`` operation. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/configuration/hypervisor-xen-api.rst0000664000175000017500000004375400000000000025112 0ustar00zuulzuul00000000000000.. _compute_xen_api: ============================================= XenServer (and other XAPI based Xen variants) ============================================= .. deprecated:: 20.0.0 The xenapi driver is deprecated and may be removed in a future release. The driver is not tested by the OpenStack project nor does it have clear maintainer(s) and thus its quality can not be ensured. If you are using the driver in production please let us know in freenode IRC and/or the openstack-discuss mailing list. .. todo:: os-xenapi version is 0.3.1 currently. This document should be modified according to the new version. This todo has been reported as `bug 1718606`_. .. _bug 1718606: https://bugs.launchpad.net/nova/+bug/1718606 This section describes XAPI managed hypervisors, and how to use them with OpenStack. Terminology ~~~~~~~~~~~ Xen --- A hypervisor that provides the fundamental isolation between virtual machines. Xen is open source (GPLv2) and is managed by `XenProject.org `_, a cross-industry organization and a Linux Foundation Collaborative project. Xen is a component of many different products and projects. The hypervisor itself is very similar across all these projects, but the way that it is managed can be different, which can cause confusion if you're not clear which toolstack you are using. Make sure you know what `toolstack `_ you want before you get started. If you want to use Xen with libvirt in OpenStack Compute refer to :doc:`hypervisor-xen-libvirt`. XAPI ---- XAPI is one of the toolstacks that could control a Xen based hypervisor. XAPI's role is similar to libvirt's in the KVM world. The API provided by XAPI is called XenAPI. To learn more about the provided interface, look at `XenAPI Object Model Overview `_ for definitions of XAPI specific terms such as SR, VDI, VIF and PIF. OpenStack has a compute driver which talks to XAPI, therefore all XAPI managed servers could be used with OpenStack. XenAPI ------ XenAPI is the API provided by XAPI. This name is also used by the python library that is a client for XAPI. A set of packages to use XenAPI on existing distributions can be built using the `xenserver/buildroot `_ project. XenServer --------- An Open Source virtualization platform that delivers all features needed for any server and datacenter implementation including the Xen hypervisor and XAPI for the management. For more information and product downloads, visit `xenserver.org `_. XCP --- XCP is not supported anymore. XCP project recommends all XCP users to upgrade to the latest version of XenServer by visiting `xenserver.org `_. 
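Regardless of which XAPI-based product is in use, a quick way to confirm from
dom0 that a host is managed by XAPI is to query it with the ``xe`` CLI. This
is only an illustrative check run as root on the host, not a required
configuration step:

.. code-block:: console

   # xe host-list params=name-label,software-version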
Privileged and unprivileged domains ----------------------------------- A Xen host runs a number of virtual machines, VMs, or domains (the terms are synonymous on Xen). One of these is in charge of running the rest of the system, and is known as domain 0, or dom0. It is the first domain to boot after Xen, and owns the storage and networking hardware, the device drivers, and the primary control software. Any other VM is unprivileged, and is known as a domU or guest. All customer VMs are unprivileged, but you should note that on XenServer (and other XenAPI using hypervisors), the OpenStack Compute service (``nova-compute``) also runs in a domU. This gives a level of security isolation between the privileged system software and the OpenStack software (much of which is customer-facing). This architecture is described in more detail later. Paravirtualized versus hardware virtualized domains --------------------------------------------------- A Xen virtual machine can be paravirtualized (PV) or hardware virtualized (HVM). This refers to the interaction between Xen, domain 0, and the guest VM's kernel. PV guests are aware of the fact that they are virtualized and will co-operate with Xen and domain 0; this gives them better performance characteristics. HVM guests are not aware of their environment, and the hardware has to pretend that they are running on an unvirtualized machine. HVM guests do not need to modify the guest operating system, which is essential when running Windows. In OpenStack, customer VMs may run in either PV or HVM mode. However, the OpenStack domU (that's the one running ``nova-compute``) must be running in PV mode. xapi pool --------- A resource pool comprises multiple XenServer host installations, bound together into a single managed entity which can host virtual machines. When combined with shared storage, VMs could dynamically move between XenServer hosts, with minimal downtime since no block copying is needed. XenAPI deployment architecture ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ A basic OpenStack deployment on a XAPI-managed server, assuming that the network provider is neutron network, looks like this: .. figure:: /_static/images/xenserver_architecture.png :width: 100% Key things to note: * The hypervisor: Xen * Domain 0: runs XAPI and some small pieces from OpenStack, the XAPI plug-ins. * OpenStack VM: The ``Compute`` service runs in a paravirtualized virtual machine, on the host under management. Each host runs a local instance of ``Compute``. It is also running neutron plugin-agent (``neutron-openvswitch-agent``) to perform local vSwitch configuration. * OpenStack Compute uses the XenAPI Python library to talk to XAPI, and it uses the Management Network to reach from the OpenStack VM to Domain 0. Some notes on the networking: * The above diagram assumes DHCP networking. * There are three main OpenStack networks: * Management network: RabbitMQ, MySQL, inter-host communication, and compute-XAPI communication. Please note that the VM images are downloaded by the XenAPI plug-ins, so make sure that the OpenStack Image service is accessible through this network. It usually means binding those services to the management interface. * Tenant network: controlled by neutron, this is used for tenant traffic. * Public network: floating IPs, public API endpoints. * The networks shown here must be connected to the corresponding physical networks within the data center. In the simplest case, three individual physical network cards could be used. 
It is also possible to use VLANs to separate these networks. Please note, that the selected configuration must be in line with the networking model selected for the cloud. (In case of VLAN networking, the physical channels have to be able to forward the tagged traffic.) * With the Networking service, you should enable Linux bridge in ``Dom0`` which is used for Compute service. ``nova-compute`` will create Linux bridges for security group and ``neutron-openvswitch-agent`` in Compute node will apply security group rules on these Linux bridges. To implement this, you need to remove ``/etc/modprobe.d/blacklist-bridge*`` in ``Dom0``. Further reading ~~~~~~~~~~~~~~~ Here are some of the resources available to learn more about Xen: * `Citrix XenServer official documentation `_ * `What is Xen? by XenProject.org `_ * `Xen Hypervisor project `_ * `Xapi project `_ * `Further XenServer and OpenStack information `_ Install XenServer ~~~~~~~~~~~~~~~~~ Before you can run OpenStack with XenServer, you must install the hypervisor on `an appropriate server `_. .. note:: Xen is a type 1 hypervisor: When your server starts, Xen is the first software that runs. Consequently, you must install XenServer before you install the operating system where you want to run OpenStack code. You then install ``nova-compute`` into a dedicated virtual machine on the host. Use the following link to download XenServer's installation media: * http://xenserver.org/open-source-virtualization-download.html When you install many servers, you might find it easier to perform `PXE boot installations `_. You can also package any post-installation changes that you want to make to your XenServer by following the instructions of `creating your own XenServer supplemental pack `_. .. important:: When using ``[xenserver]image_handler=direct_vhd`` (the default), make sure you use the EXT type of storage repository (SR). Features that require access to VHD files (such as copy on write, snapshot and migration) do not work when you use the LVM SR. Storage repository (SR) is a XAPI-specific term relating to the physical storage where virtual disks are stored. On the XenServer installation screen, choose the :guilabel:`XenDesktop Optimized` option. If you use an answer file, make sure you use ``srtype="ext"`` in the ``installation`` tag of the answer file. Post-installation steps ~~~~~~~~~~~~~~~~~~~~~~~ The following steps need to be completed after the hypervisor's installation: #. For resize and migrate functionality, enable password-less SSH authentication and set up the ``/images`` directory on dom0. #. Install the XAPI plug-ins. #. To support AMI type images, you must set up ``/boot/guest`` symlink/directory in dom0. #. Create a paravirtualized virtual machine that can run ``nova-compute``. #. Install and configure ``nova-compute`` in the above virtual machine. #. To support live migration requiring no block device migration, you should add the current host to a xapi pool using shared storage. You need to know the pool master ip address, user name and password: .. code-block:: console xe pool-join master-address=MASTER_IP master-username=root master-password=MASTER_PASSWORD Install XAPI plug-ins --------------------- When you use a XAPI managed hypervisor, you can install a Python script (or any executable) on the host side, and execute that through XenAPI. These scripts are called plug-ins. The OpenStack related XAPI plug-ins live in OpenStack os-xenapi code repository. 
These plug-ins have to be copied to dom0's filesystem, to the appropriate directory, where XAPI can find them. It is important to ensure that the version of the plug-ins are in line with the OpenStack Compute installation you are using. The plugins should typically be copied from the Nova installation running in the Compute's DomU (``pip show os-xenapi`` to find its location), but if you want to download the latest version the following procedure can be used. **Manually installing the plug-ins** #. Create temporary files/directories: .. code-block:: console $ OS_XENAPI_TARBALL=$(mktemp) $ OS_XENAPI_SOURCES=$(mktemp -d) #. Get the source from the openstack.org archives. The example assumes the latest release is used, and the XenServer host is accessible as xenserver. Match those parameters to your setup. .. code-block:: console $ OS_XENAPI_URL=https://tarballs.openstack.org/os-xenapi/os-xenapi-0.1.1.tar.gz $ wget -qO "$OS_XENAPI_TARBALL" "$OS_XENAPI_URL" $ tar xvf "$OS_XENAPI_TARBALL" -d "$OS_XENAPI_SOURCES" #. Copy the plug-ins to the hypervisor: .. code-block:: console $ PLUGINPATH=$(find $OS_XENAPI_SOURCES -path '*/xapi.d/plugins' -type d -print) $ tar -czf - -C "$PLUGINPATH" ./ | > ssh root@xenserver tar -xozf - -C /etc/xapi.d/plugins #. Remove temporary files/directories: .. code-block:: console $ rm "$OS_XENAPI_TARBALL" $ rm -rf "$OS_XENAPI_SOURCES" Prepare for AMI type images --------------------------- To support AMI type images in your OpenStack installation, you must create the ``/boot/guest`` directory on dom0. One of the OpenStack XAPI plugins will extract the kernel and ramdisk from AKI and ARI images and put them to that directory. OpenStack maintains the contents of this directory and its size should not increase during normal operation. However, in case of power failures or accidental shutdowns, some files might be left over. To prevent these files from filling up dom0's filesystem, set up this directory as a symlink that points to a subdirectory of the local SR. Run these commands in dom0 to achieve this setup: .. code-block:: console # LOCAL_SR=$(xe sr-list name-label="Local storage" --minimal) # LOCALPATH="/var/run/sr-mount/$LOCAL_SR/os-guest-kernels" # mkdir -p "$LOCALPATH" # ln -s "$LOCALPATH" /boot/guest Modify dom0 for resize/migration support ---------------------------------------- To resize servers with XenServer you must: * Establish a root trust between all hypervisor nodes of your deployment: To do so, generate an ssh key-pair with the :command:`ssh-keygen` command. Ensure that each of your dom0's ``authorized_keys`` file (located in ``/root/.ssh/authorized_keys``) contains the public key fingerprint (located in ``/root/.ssh/id_rsa.pub``). * Provide a ``/images`` mount point to the dom0 for your hypervisor: dom0 space is at a premium so creating a directory in dom0 is potentially dangerous and likely to fail especially when you resize large servers. The least you can do is to symlink ``/images`` to your local storage SR. The following instructions work for an English-based installation of XenServer and in the case of ext3-based SR (with which the resize functionality is known to work correctly). .. code-block:: console # LOCAL_SR=$(xe sr-list name-label="Local storage" --minimal) # IMG_DIR="/var/run/sr-mount/$LOCAL_SR/images" # mkdir -p "$IMG_DIR" # ln -s "$IMG_DIR" /images XenAPI configuration reference ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The following section discusses some commonly changed options when using the XenAPI driver. 
The table below provides a complete reference of all configuration options available for configuring XAPI with OpenStack. The recommended way to use XAPI with OpenStack is through the XenAPI driver. To enable the XenAPI driver, add the following configuration options to ``/etc/nova/nova.conf`` and restart ``OpenStack Compute``: .. code-block:: ini compute_driver = xenapi.XenAPIDriver [xenserver] connection_url = http://your_xenapi_management_ip_address connection_username = root connection_password = your_password ovs_integration_bridge = br-int These connection details are used by OpenStack Compute service to contact your hypervisor and are the same details you use to connect XenCenter, the XenServer management console, to your XenServer node. .. note:: The ``connection_url`` is generally the management network IP address of the XenServer. Networking configuration ------------------------ The Networking service in the Compute node is running ``neutron-openvswitch-agent``. This manages ``dom0``\'s OVS. You should refer to the :neutron-doc:`openvswitch_agent.ini sample ` for details, however there are several specific items to look out for. .. code-block:: ini [agent] minimize_polling = False root_helper_daemon = xenapi_root_helper [ovs] of_listen_address = management_ip_address ovsdb_connection = tcp:your_xenapi_management_ip_address:6640 bridge_mappings = :, ... integration_bridge = br-int [xenapi] connection_url = http://your_xenapi_management_ip_address connection_username = root connection_password = your_pass_word .. note:: The ``ovsdb_connection`` is the connection string for the native OVSDB backend, you need to enable port 6640 in dom0. Agent ----- The agent is a piece of software that runs on the instances, and communicates with OpenStack. In case of the XenAPI driver, the agent communicates with OpenStack through XenStore (see `the Xen Project Wiki `_ for more information on XenStore). If you don't have the guest agent on your VMs, it takes a long time for OpenStack Compute to detect that the VM has successfully started. Generally a large timeout is required for Windows instances, but you may want to adjust: ``agent_version_timeout`` within the ``[xenserver]`` section. VNC proxy address ----------------- Assuming you are talking to XAPI through a management network, and XenServer is on the address: 10.10.1.34 specify the same address for the vnc proxy address: ``server_proxyclient_address=10.10.1.34`` Storage ------- You can specify which Storage Repository to use with nova by editing the following flag. To use the local-storage setup by the default installer: .. code-block:: ini sr_matching_filter = "other-config:i18n-key=local-storage" Another alternative is to use the "default" storage (for example if you have attached NFS or any other shared storage): .. code-block:: ini sr_matching_filter = "default-sr:true" Use different image handler --------------------------- We support three different implementations for glance image handler. You can choose a specific image handler based on the demand: * ``direct_vhd``: This image handler will call XAPI plugins to directly process the VHD files in XenServer SR(Storage Repository). So this handler only works when the host's SR type is file system based e.g. ext, nfs. * ``vdi_local_dev``: This image handler uploads ``tgz`` compressed raw disk images to the glance image service. * ``vdi_remote_stream``: With this image handler, the image data streams between XenServer and the glance image service. 
As it uses the remote APIs supported by XAPI, this plugin works for all SR types supported by XenServer. ``direct_vhd`` is the default image handler. If want to use a different image handler, you can change the config setting of ``image_handler`` within the ``[xenserver]`` section. For example, the following config setting is to use ``vdi_remote_stream`` as the image handler: .. code-block:: ini [xenserver] image_handler=vdi_remote_stream ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/configuration/hypervisor-xen-libvirt.rst0000664000175000017500000002533600000000000026010 0ustar00zuulzuul00000000000000=============== Xen via libvirt =============== OpenStack Compute supports the Xen Project Hypervisor (or Xen). Xen can be integrated with OpenStack Compute via the `libvirt `_ `toolstack `_ or via the `XAPI `_ `toolstack `_. This section describes how to set up OpenStack Compute with Xen and libvirt. For information on how to set up Xen with XAPI refer to :doc:`hypervisor-xen-api`. Installing Xen with libvirt ~~~~~~~~~~~~~~~~~~~~~~~~~~~ At this stage we recommend using the baseline that we use for the `Xen Project OpenStack CI Loop `_, which contains the most recent stability fixes to both Xen and libvirt. `Xen 4.5.1 `_ (or newer) and `libvirt 1.2.15 `_ (or newer) contain the minimum required OpenStack improvements for Xen. Although libvirt 1.2.15 works with Xen, libvirt 1.3.2 or newer is recommended. The necessary Xen changes have also been backported to the Xen 4.4.3 stable branch. Please check with the Linux and FreeBSD distros you are intending to use as `Dom 0 `_, whether the relevant version of Xen and libvirt are available as installable packages. The latest releases of Xen and libvirt packages that fulfil the above minimum requirements for the various openSUSE distributions can always be found and installed from the `Open Build Service `_ Virtualization project. To install these latest packages, add the Virtualization repository to your software management stack and get the newest packages from there. More information about the latest Xen and libvirt packages are available `here `__ and `here `__. Alternatively, it is possible to use the Ubuntu LTS 14.04 Xen Package **4.4.1-0ubuntu0.14.04.4** (Xen 4.4.1) and apply the patches outlined `here `__. You can also use the Ubuntu LTS 14.04 libvirt package **1.2.2 libvirt_1.2.2-0ubuntu13.1.7** as baseline and update it to libvirt version 1.2.15, or 1.2.14 with the patches outlined `here `__ applied. Note that this will require rebuilding these packages partly from source. For further information and latest developments, you may want to consult the Xen Project's `mailing lists for OpenStack related issues and questions `_. Configuring Xen with libvirt ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To enable Xen via libvirt, ensure the following options are set in ``/etc/nova/nova.conf`` on all hosts running the ``nova-compute`` service. .. code-block:: ini compute_driver = libvirt.LibvirtDriver [libvirt] virt_type = xen Additional configuration options ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Use the following as a guideline for configuring Xen for use in OpenStack: #. **Dom0 memory**: Set it between 1GB and 4GB by adding the following parameter to the Xen Boot Options in the `grub.conf `_ file. .. code-block:: ini dom0_mem=1024M .. note:: The above memory limits are suggestions and should be based on the available compute host resources. 
For large hosts that will run many hundreds of instances, the suggested values may need to be higher. .. note:: The location of the grub.conf file depends on the host Linux distribution that you are using. Please refer to the distro documentation for more details (see `Dom 0 `_ for more resources). #. **Dom0 vcpus**: Set the virtual CPUs to 4 and employ CPU pinning by adding the following parameters to the Xen Boot Options in the `grub.conf `_ file. .. code-block:: ini dom0_max_vcpus=4 dom0_vcpus_pin .. note:: Note that the above virtual CPU limits are suggestions and should be based on the available compute host resources. For large hosts, that will run many hundred of instances, the suggested values may need to be higher. #. **PV vs HVM guests**: A Xen virtual machine can be paravirtualized (PV) or hardware virtualized (HVM). The virtualization mode determines the interaction between Xen, Dom 0, and the guest VM's kernel. PV guests are aware of the fact that they are virtualized and will co-operate with Xen and Dom 0. The choice of virtualization mode determines performance characteristics. For an overview of Xen virtualization modes, see `Xen Guest Types `_. In OpenStack, customer VMs may run in either PV or HVM mode. The mode is a property of the operating system image used by the VM, and is changed by adjusting the image metadata stored in the Image service. The image metadata can be changed using the :command:`openstack` commands. To choose one of the HVM modes (HVM, HVM with PV Drivers or PVHVM), use :command:`openstack` to set the ``vm_mode`` property to ``hvm``. To choose one of the HVM modes (HVM, HVM with PV Drivers or PVHVM), use one of the following two commands: .. code-block:: console $ openstack image set --property vm_mode=hvm IMAGE To chose PV mode, which is supported by NetBSD, FreeBSD and Linux, use one of the following two commands .. code-block:: console $ openstack image set --property vm_mode=xen IMAGE .. note:: The default for virtualization mode in nova is PV mode. #. **Image formats**: Xen supports raw, qcow2 and vhd image formats. For more information on image formats, refer to the `OpenStack Virtual Image Guide `__ and the `Storage Options Guide on the Xen Project Wiki `_. #. **Image metadata**: In addition to the ``vm_mode`` property discussed above, the ``hypervisor_type`` property is another important component of the image metadata, especially if your cloud contains mixed hypervisor compute nodes. Setting the ``hypervisor_type`` property allows the nova scheduler to select a compute node running the specified hypervisor when launching instances of the image. Image metadata such as ``vm_mode``, ``hypervisor_type``, architecture, and others can be set when importing the image to the Image service. The metadata can also be changed using the :command:`openstack` commands: .. code-block:: console $ openstack image set --property hypervisor_type=xen vm_mode=hvm IMAGE For more information on image metadata, refer to the `OpenStack Virtual Image Guide `__. #. **Libguestfs file injection**: OpenStack compute nodes can use `libguestfs `_ to inject files into an instance's image prior to launching the instance. libguestfs uses libvirt's QEMU driver to start a qemu process, which is then used to inject files into the image. When using libguestfs for file injection, the compute node must have the libvirt qemu driver installed, in addition to the Xen driver. In RPM based distributions, the qemu driver is provided by the ``libvirt-daemon-qemu`` package. 
In Debian and Ubuntu, the qemu driver is provided by the ``libvirt-bin`` package. Troubleshoot Xen with libvirt ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ **Important log files**: When an instance fails to start, or when you come across other issues, you should first consult the following log files: * ``/var/log/nova/nova-compute.log`` * ``/var/log/libvirt/libxl/libxl-driver.log``, * ``/var/log/xen/qemu-dm-${instancename}.log``, * ``/var/log/xen/xen-hotplug.log``, * ``/var/log/xen/console/guest-${instancename}`` (to enable see `Enabling Guest Console Logs `_) * Host Console Logs (read `Enabling and Retrieving Host Console Logs `_). If you need further help you can ask questions on the mailing lists `xen-users@ `_, `wg-openstack@ `_ or `raise a bug `_ against Xen. Known issues ~~~~~~~~~~~~ * **Live migration**: Live migration is supported in the libvirt libxl driver since version 1.2.5. However, there were a number of issues when used with OpenStack, in particular with libvirt migration protocol compatibility. It is worth mentioning that libvirt 1.3.0 addresses most of these issues. We do however recommend using libvirt 1.3.2, which is fully supported and tested as part of the Xen Project CI loop. It addresses live migration monitoring related issues and adds support for peer-to-peer migration mode, which nova relies on. * **Live migration monitoring**: On compute nodes running Kilo or later, live migration monitoring relies on libvirt APIs that are only implemented from libvirt version 1.3.1 onwards. When attempting to live migrate, the migration monitoring thread would crash and leave the instance state as "MIGRATING". If you experience such an issue and you are running on a version released before libvirt 1.3.1, make sure you backport libvirt commits ad71665 and b7b4391 from upstream. Additional information and resources ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The following section contains links to other useful resources. * `wiki.xenproject.org/wiki/OpenStack `_ - OpenStack Documentation on the Xen Project wiki * `wiki.xenproject.org/wiki/OpenStack_CI_Loop_for_Xen-Libvirt `_ - Information about the Xen Project OpenStack CI Loop * `wiki.xenproject.org/wiki/OpenStack_via_DevStack `_ - How to set up OpenStack via DevStack * `Mailing lists for OpenStack related issues and questions `_ - This list is dedicated to coordinating bug fixes and issues across Xen, libvirt and OpenStack and the CI loop. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/configuration/hypervisor-zvm.rst0000664000175000017500000001414000000000000024350 0ustar00zuulzuul00000000000000zVM === z/VM System Requirements ~~~~~~~~~~~~~~~~~~~~~~~~ * The appropriate APARs installed, the current list of which can be found: z/VM OpenStack Cloud Information (http://www.vm.ibm.com/sysman/osmntlvl.html). .. note:: IBM z Systems hardware requirements are based on both the applications and the load on the system. Active Engine Guide ~~~~~~~~~~~~~~~~~~~ Active engine is used as an initial configuration and management tool during deployed machine startup. Currently the z/VM driver uses ``zvmguestconfigure`` and ``cloud-init`` as a two stage active engine. Installation and Configuration of zvmguestconfigure --------------------------------------------------- Cloudlib4zvm supports initiating changes to a Linux on z Systems virtual machine while Linux is shut down or the virtual machine is logged off. 
The changes to Linux are implemented using an activation engine (AE) that is run when Linux is booted the next time. The first active engine, ``zvmguestconfigure``, must be installed in the Linux on z Systems virtual server so it can process change request files transmitted by the cloudlib4zvm service to the reader of the virtual machine as a class X file. .. note:: An additional activation engine, cloud-init, should be installed to handle OpenStack related tailoring of the system. The cloud-init AE relies on tailoring performed by ``zvmguestconfigure``. Installation and Configuration of cloud-init -------------------------------------------- OpenStack uses cloud-init as its activation engine. Some Linux distributions include cloud-init either already installed or available to be installed. If your distribution does not include cloud-init, you can download the code from https://launchpad.net/cloud-init/+download. After installation, if you issue the following shell command and no errors occur, cloud-init is installed correctly:: cloud-init init --local Installation and configuration of cloud-init differs among different Linux distributions, and cloud-init source code may change. This section provides general information, but you may have to tailor cloud-init to meet the needs of your Linux distribution. You can find a community-maintained list of dependencies at http://ibm.biz/cloudinitLoZ. As of the Rocky release, the z/VM OpenStack support has been tested with cloud-init 0.7.4 and 0.7.5 for RHEL6.x and SLES11.x, 0.7.6 for RHEL7.x and SLES12.x, and 0.7.8 for Ubuntu 16.04. During cloud-init installation, some dependency packages may be required. You can use zypper and python setuptools to easily resolve these dependencies. See https://pypi.python.org/pypi/setuptools for more information. Image guide ~~~~~~~~~~~ This guideline will describe the requirements and steps to create and configure images for use with z/VM. Image Requirements ------------------ * The following Linux distributions are supported for deploy: * RHEL 6.2, 6.3, 6.4, 6.5, 6.6, and 6.7 * RHEL 7.0, 7.1 and 7.2 * SLES 11.2, 11.3, and 11.4 * SLES 12 and SLES 12.1 * Ubuntu 16.04 * A supported root disk type for snapshot/spawn. The following are supported: * FBA * ECKD * An image deployed on a compute node must match the disk type supported by that compute node, as configured by the ``zvm_diskpool_type`` property in the `zvmsdk.conf`_ configuration file in `zvm cloud connector`_ A compute node supports deployment on either an ECKD or FBA image, but not both at the same time. If you wish to switch image types, you need to change the ``zvm_diskpool_type`` and ``zvm_diskpool`` properties in the `zvmsdk.conf`_ file, accordingly. Then restart the nova-compute service to make the changes take effect. * If you deploy an instance with an ephemeral disk, both the root disk and the ephemeral disk will be created with the disk type that was specified by ``zvm_diskpool_type`` property in the `zvmsdk.conf`_ file. That property can specify either ECKD or FBA. * The network interfaces must be IPv4 interfaces. * Image names should be restricted to the UTF-8 subset, which corresponds to the ASCII character set. In addition, special characters such as ``/``, ``\``, ``$``, ``%``, ``@`` should not be used. For the FBA disk type "vm", capture and deploy is supported only for an FBA disk with a single partition. Capture and deploy is not supported for the FBA disk type "vm" on a CMS formatted FBA disk. 
* The virtual server/Linux instance used as the source of the new image should meet the following criteria: 1. The root filesystem must not be on a logical volume. 2. The minidisk on which the root filesystem resides should be a minidisk of the same type as desired for a subsequent deploy (for example, an ECKD disk image should be captured for a subsequent deploy to an ECKD disk). 3. The minidisks should not be a full-pack minidisk, since cylinder 0 on full-pack minidisks is reserved, and should be defined with virtual address 0100. 4. The root disk should have a single partition. 5. The image being captured should not have any network interface cards (NICs) defined below virtual address 1100. In addition to the specified criteria, the following recommendations allow for efficient use of the image: * The minidisk on which the root filesystem resides should be defined as a multiple of full gigabytes in size (for example, 1GB or 2GB). OpenStack specifies disk sizes in full gigabyte values, whereas z/VM handles disk sizes in other ways (cylinders for ECKD disks, blocks for FBA disks, and so on). See the appropriate online information if you need to convert cylinders or blocks to gigabytes; for example: http://www.mvsforums.com/helpboards/viewtopic.php?t=8316. * During subsequent deploys of the image, the OpenStack code will ensure that a disk image is not copied to a disk smaller than the source disk, as this would result in loss of data. The disk specified in the flavor should therefore be equal to or slightly larger than the source virtual machine's root disk. .. _zvmsdk.conf: https://cloudlib4zvm.readthedocs.io/en/latest/configuration.html#configuration-options .. _zvm cloud connector: https://cloudlib4zvm.readthedocs.io/en/latest/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/configuration/hypervisors.rst0000664000175000017500000000636600000000000023734 0ustar00zuulzuul00000000000000=========== Hypervisors =========== .. TODO: Add UML (User-Mode Linux) hypervisor to the following list when its dedicated documentation is ready. .. toctree:: :maxdepth: 1 hypervisor-basics hypervisor-kvm hypervisor-qemu hypervisor-xen-api hypervisor-xen-libvirt hypervisor-lxc hypervisor-vmware hypervisor-hyper-v hypervisor-virtuozzo hypervisor-powervm hypervisor-zvm hypervisor-ironic OpenStack Compute supports many hypervisors, which might make it difficult for you to choose one. Most installations use only one hypervisor. However, you can use :ref:`ComputeFilter` and :ref:`ImagePropertiesFilter` to schedule different hypervisors within the same installation. The following links help you choose a hypervisor. See :doc:`/user/support-matrix` for a detailed list of features and support across the hypervisors. The following hypervisors are supported: * `KVM`_ - Kernel-based Virtual Machine. The virtual disk formats that it supports is inherited from QEMU since it uses a modified QEMU program to launch the virtual machine. The supported formats include raw images, the qcow2, and VMware formats. * `LXC`_ - Linux Containers (through libvirt), used to run Linux-based virtual machines. * `QEMU`_ - Quick EMUlator, generally only used for development purposes. * `VMware vSphere`_ 5.1.0 and newer - Runs VMware-based Linux and Windows images through a connection with a vCenter server. 
* `Xen (using libvirt)`_ - Xen Project Hypervisor using libvirt as management interface into ``nova-compute`` to run Linux, Windows, FreeBSD and NetBSD virtual machines. * `XenServer`_ - XenServer, Xen Cloud Platform (XCP) and other XAPI based Xen variants runs Linux or Windows virtual machines. You must install the ``nova-compute`` service in a para-virtualized VM. * `Hyper-V`_ - Server virtualization with Microsoft Hyper-V, use to run Windows, Linux, and FreeBSD virtual machines. Runs ``nova-compute`` natively on the Windows virtualization platform. * `Virtuozzo`_ 7.0.0 and newer - OS Containers and Kernel-based Virtual Machines supported via libvirt virt_type=parallels. The supported formats include ploop and qcow2 images. * `PowerVM`_ - Server virtualization with IBM PowerVM for AIX, IBM i, and Linux workloads on the Power Systems platform. * `zVM`_ - Server virtualization on z Systems and IBM LinuxONE, it can run Linux, z/OS and more. * `UML`_ - User-Mode Linux is a safe, secure way of running Linux versions and Linux processes. * `Ironic`_ - OpenStack project which provisions bare metal (as opposed to virtual) machines. .. _KVM: https://www.linux-kvm.org/page/Main_Page .. _LXC: https://linuxcontainers.org .. _QEMU: https://wiki.qemu.org/Manual .. _VMware vSphere: https://www.vmware.com/support/vsphere-hypervisor.html .. _Xen (using libvirt): https://www.xenproject.org .. _XenServer: https://xenserver.org .. _Hyper-V: https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/hyper-v-technology-overview .. _Virtuozzo: https://www.virtuozzo.com/products/vz7.html .. _PowerVM: https://www.ibm.com/us-en/marketplace/ibm-powervm .. _zVM: https://www.ibm.com/it-infrastructure/z/zvm .. _UML: http://user-mode-linux.sourceforge.net .. _Ironic: https://docs.openstack.org/ironic/latest/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/configuration/index.rst0000664000175000017500000000152400000000000022435 0ustar00zuulzuul00000000000000=============== Configuration =============== To configure your Compute installation, you must define configuration options in these files: * ``nova.conf`` contains most of the Compute configuration options and resides in the ``/etc/nova`` directory. * ``api-paste.ini`` defines Compute limits and resides in the ``/etc/nova`` directory. * Configuration files for related services, such as the Image and Identity services. A list of config options based on different topics can be found below: .. toctree:: :maxdepth: 1 /admin/configuration/api /admin/configuration/resize /admin/configuration/cross-cell-resize /admin/configuration/fibre-channel /admin/configuration/iscsi-offload /admin/configuration/hypervisors /admin/configuration/schedulers /admin/configuration/logs /admin/configuration/samples/index ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/configuration/iscsi-offload.rst0000664000175000017500000000562600000000000024057 0ustar00zuulzuul00000000000000=============================================== Configuring iSCSI interface and offload support =============================================== Compute supports open-iscsi iSCSI interfaces for offload cards. Offload hardware must be present and configured on every compute node where offload is desired. 
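Broadly, the end state on such a compute node is an open-iscsi iface created for the offload transport plus a matching ``iscsi_iface`` setting for nova's libvirt driver. A minimal sketch, assuming the ``bnx2i`` iface name shown later in this document and assuming the option is placed in the ``[libvirt]`` section of ``nova.conf``, might look like: .. code-block:: ini [libvirt] iscsi_iface = bnx2i.00:05:b5:d2:a0:c2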
Once an open-iscsi interface is configured, the iface name (``iface.iscsi_ifacename``) should be passed to libvirt via the ``iscsi_iface`` parameter for use. All iSCSI sessions will be bound to this iSCSI interface. Currently supported transports (``iface.transport_name``) are ``be2iscsi``, ``bnx2i``, ``cxgb3i``, ``cxgb4i``, ``qla4xxx``, ``ocs``. Configuration changes are required on the compute node only. iSER is supported using the separate iSER LibvirtISERVolumeDriver and will be rejected if used via the ``iscsi_iface`` parameter. iSCSI iface configuration ~~~~~~~~~~~~~~~~~~~~~~~~~ * Note the distinction between the transport name (``iface.transport_name``) and iface name (``iface.iscsi_ifacename``). The actual iface name must be specified via the ``iscsi_iface`` parameter to libvirt for offload to work. * The default name for an iSCSI iface (open-iscsi parameter ``iface.iscsi_ifacename``) is in the format transport_name.hwaddress when generated by ``iscsiadm``. * ``iscsiadm`` can be used to view and generate current iface configuration. Every network interface that supports an open-iscsi transport can have one or more iscsi ifaces associated with it. If no ifaces have been configured for a network interface supported by an open-iscsi transport, this command will create a default iface configuration for that network interface. For example: .. code-block:: console # iscsiadm -m iface default tcp,,,, iser iser,,,, bnx2i.00:05:b5:d2:a0:c2 bnx2i,00:05:b5:d2:a0:c2,5.10.10.20,, The output is in the format:: iface_name transport_name,hwaddress,ipaddress,net_ifacename,initiatorname * Individual iface configuration can be viewed via: .. code-block:: console # iscsiadm -m iface -I IFACE_NAME # BEGIN RECORD 2.0-873 iface.iscsi_ifacename = cxgb4i.00:07:43:28:b2:58 iface.net_ifacename = iface.ipaddress = 102.50.50.80 iface.hwaddress = 00:07:43:28:b2:58 iface.transport_name = cxgb4i iface.initiatorname = # END RECORD Configuration can be updated as desired via: .. code-block:: console # iscsiadm -m iface -I IFACE_NAME --op=update -n iface.SETTING -v VALUE * All iface configurations need a minimum of ``iface.iscsi_ifacename``, ``iface.transport_name`` and ``iface.hwaddress`` to be correctly configured to work. Some transports may require ``iface.ipaddress`` and ``iface.net_ifacename`` as well to bind correctly. Detailed configuration instructions can be found at: https://github.com/open-iscsi/open-iscsi/blob/master/README ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/configuration/logs.rst0000664000175000017500000000152500000000000022273 0ustar00zuulzuul00000000000000================= Compute log files ================= The corresponding log file of each Compute service is stored in the ``/var/log/nova/`` directory of the host on which each service runs. ..
list-table:: Log files used by Compute services :widths: 35 35 30 :header-rows: 1 * - Log file - Service name (CentOS/Fedora/openSUSE/Red Hat Enterprise Linux/SUSE Linux Enterprise) - Service name (Ubuntu/Debian) * - ``nova-api.log`` - ``openstack-nova-api`` - ``nova-api`` * - ``nova-compute.log`` - ``openstack-nova-compute`` - ``nova-compute`` * - ``nova-conductor.log`` - ``openstack-nova-conductor`` - ``nova-conductor`` * - ``nova-manage.log`` - ``nova-manage`` - ``nova-manage`` * - ``nova-scheduler.log`` - ``openstack-nova-scheduler`` - ``nova-scheduler`` ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/configuration/resize.rst0000664000175000017500000000302700000000000022627 0ustar00zuulzuul00000000000000====== Resize ====== Resize (or Server resize) is the ability to change the flavor of a server, thus allowing it to upscale or downscale according to user needs. For this feature to work properly, you might need to configure some underlying virt layers. This document describes how to configure hosts for standard resize. For information on :term:`cross-cell resize `, refer to :doc:`/admin/configuration/cross-cell-resize`. Virt drivers ------------ .. todo:: This section needs to be updated for other virt drivers, shared storage considerations, etc. KVM ~~~ Resize on KVM is implemented currently by transferring the images between compute nodes over ssh. For KVM you need hostnames to resolve properly and passwordless ssh access between your compute hosts. Direct access from one compute host to another is needed to copy the VM file across. Cloud end users can find out how to resize a server by reading :doc:`/user/resize`. XenServer ~~~~~~~~~ To get resize to work with XenServer (and XCP), you need to establish a root trust between all hypervisor nodes and provide an ``/image`` mount point to your hypervisors dom0. Automatic confirm ----------------- There is a periodic task configured by configuration option :oslo.config:option:`resize_confirm_window` (in seconds). If this value is not 0, the ``nova-compute`` service will check whether servers are in a resized state longer than the value of :oslo.config:option:`resize_confirm_window` and if so will automatically confirm the resize of the servers. ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2344708 nova-21.2.4/doc/source/admin/configuration/samples/0000775000175000017500000000000000000000000022236 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/configuration/samples/api-paste.ini.rst0000664000175000017500000000026700000000000025436 0ustar00zuulzuul00000000000000============= api-paste.ini ============= The Compute service stores its API configuration settings in the ``api-paste.ini`` file. .. literalinclude:: /../../etc/nova/api-paste.ini ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/configuration/samples/index.rst0000664000175000017500000000040700000000000024100 0ustar00zuulzuul00000000000000========================================== Compute service sample configuration files ========================================== Files in this section can be found in ``/etc/nova``. .. 
toctree:: :maxdepth: 2 api-paste.ini policy.yaml rootwrap.conf ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/configuration/samples/policy.yaml.rst0000664000175000017500000000031500000000000025227 0ustar00zuulzuul00000000000000=========== policy.yaml =========== The ``policy.yaml`` file defines additional access controls that apply to the Compute service. .. literalinclude:: /_static/nova.policy.yaml.sample :language: yaml ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/configuration/samples/rootwrap.conf.rst0000664000175000017500000000070600000000000025574 0ustar00zuulzuul00000000000000============= rootwrap.conf ============= The ``rootwrap.conf`` file defines configuration values used by the rootwrap script when the Compute service needs to escalate its privileges to those of the root user. It is also possible to disable the root wrapper, and default to sudo only. Configure the ``disable_rootwrap`` option in the ``[workaround]`` section of the ``nova.conf`` configuration file. .. literalinclude:: /../../etc/nova/rootwrap.conf ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/configuration/schedulers.rst0000664000175000017500000013637700000000000023506 0ustar00zuulzuul00000000000000================== Compute schedulers ================== Compute uses the ``nova-scheduler`` service to determine how to dispatch compute requests. For example, the ``nova-scheduler`` service determines on which host a VM should launch. In the context of filters, the term ``host`` means a physical node that has a ``nova-compute`` service running on it. You can configure the scheduler through a variety of options. Compute is configured with the following default scheduler options in the ``/etc/nova/nova.conf`` file: .. code-block:: ini [scheduler] driver = filter_scheduler [filter_scheduler] available_filters = nova.scheduler.filters.all_filters enabled_filters = AvailabilityZoneFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter By default, the scheduler ``driver`` is configured as a filter scheduler, as described in the next section. In the default configuration, this scheduler considers hosts that meet all the following criteria: * Are in the requested :term:`Availability Zone` (``AvailabilityZoneFilter``). * Can service the request (``ComputeFilter``). * Satisfy the extra specs associated with the instance type (``ComputeCapabilitiesFilter``). * Satisfy any architecture, hypervisor type, or virtual machine mode properties specified on the instance's image properties (``ImagePropertiesFilter``). * Are on a different host than other instances of a group (if requested) (``ServerGroupAntiAffinityFilter``). * Are in a set of group hosts (if requested) (``ServerGroupAffinityFilter``). The scheduler chooses a new host when an instance is migrated. When evacuating instances from a host, the scheduler service honors the target host defined by the administrator on the :command:`nova evacuate` command. If a target is not defined by the administrator, the scheduler determines the target host. For information about instance evacuation, see :ref:`Evacuate instances `. .. 
_compute-scheduler-filters: Prefiltering ~~~~~~~~~~~~ As of the Rocky release, the scheduling process includes a prefilter step to increase the efficiency of subsequent stages. These prefilters are largely optional, and serve to augment the request that is sent to placement to reduce the set of candidate compute hosts based on attributes that placement is able to answer for us ahead of time. In addition to the prefilters listed here, also see :ref:`tenant-isolation-with-placement` and :ref:`availability-zones-with-placement`. Compute Image Type Support -------------------------- Starting in the Train release, there is a prefilter available for excluding compute nodes that do not support the ``disk_format`` of the image used in a boot request. This behavior is enabled by setting :oslo.config:option:`[scheduler]/query_placement_for_image_type_support=True `. For example, the libvirt driver, when using ceph as an ephemeral backend, does not support ``qcow2`` images (without an expensive conversion step). In this case (and especially if you have a mix of ceph and non-ceph backed computes), enabling this feature will ensure that the scheduler does not send requests to boot a ``qcow2`` image to computes backed by ceph. Compute Disabled Status Support ------------------------------- Starting in the Train release, there is a mandatory `pre-filter `_ which will exclude disabled compute nodes similar to (but does not fully replace) the `ComputeFilter`_. Compute node resource providers with the ``COMPUTE_STATUS_DISABLED`` trait will be excluded as scheduling candidates. The trait is managed by the ``nova-compute`` service and should mirror the ``disabled`` status on the related compute service record in the `os-services`_ API. For example, if a compute service's status is ``disabled``, the related compute node resource provider(s) for that service should have the ``COMPUTE_STATUS_DISABLED`` trait. When the service status is ``enabled`` the ``COMPUTE_STATUS_DISABLED`` trait shall be removed. If the compute service is down when the status is changed, the trait will be synchronized by the compute service when it is restarted. Similarly, if an error occurs when trying to add or remove the trait on a given resource provider, the trait will be synchronized when the ``update_available_resource`` periodic task runs - which is controlled by the :oslo.config:option:`update_resources_interval` configuration option. .. _os-services: https://docs.openstack.org/api-ref/compute/#compute-services-os-services Isolate Aggregates ------------------ Starting in the Train release, there is an optional placement pre-request filter :doc:`/reference/isolate-aggregates` When enabled, the traits required in the server's flavor and image must be at least those required in an aggregate's metadata in order for the server to be eligible to boot on hosts in that aggregate. Filter scheduler ~~~~~~~~~~~~~~~~ The filter scheduler (``nova.scheduler.filter_scheduler.FilterScheduler``) is the default scheduler for scheduling virtual machine instances. It supports filtering and weighting to make informed decisions on where a new instance should be created. When the filter scheduler receives a request for a resource, it first applies filters to determine which hosts are eligible for consideration when dispatching a resource. Filters are binary: either a host is accepted by the filter, or it is rejected. 
Hosts that are accepted by the filter are then processed by a different algorithm to decide which hosts to use for that request, described in the :ref:`weights` section. **Filtering** .. figure:: /_static/images/filtering-workflow-1.png The ``available_filters`` configuration option in ``nova.conf`` provides the Compute service with the list of the filters that are available for use by the scheduler. The default setting specifies all of the filters that are included with the Compute service: .. code-block:: ini [filter_scheduler] available_filters = nova.scheduler.filters.all_filters This configuration option can be specified multiple times. For example, if you implemented your own custom filter in Python called ``myfilter.MyFilter`` and you wanted to use both the built-in filters and your custom filter, your ``nova.conf`` file would contain: .. code-block:: ini [filter_scheduler] available_filters = nova.scheduler.filters.all_filters available_filters = myfilter.MyFilter The :oslo.config:option:`filter_scheduler.enabled_filters` configuration option in ``nova.conf`` defines the list of filters that are applied by the ``nova-scheduler`` service. Compute filters ~~~~~~~~~~~~~~~ The following sections describe the available compute filters. .. _AggregateCoreFilter: AggregateCoreFilter ------------------- .. deprecated:: 20.0.0 ``AggregateCoreFilter`` is deprecated since the 20.0.0 Train release. As of the introduction of the placement service in Ocata, the behavior of this filter :ref:`has changed ` and no longer should be used. In the 18.0.0 Rocky release nova `automatically mirrors`_ host aggregates to placement aggregates. In the 19.0.0 Stein release initial allocation ratios support was added which allows management of the allocation ratios via the placement API in addition to the existing capability to manage allocation ratios via the nova config. See `Allocation ratios`_ for details. .. _`automatically mirrors`: https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/placement-mirror-host-aggregates.html Filters host by CPU core count with a per-aggregate ``cpu_allocation_ratio`` value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and more than one value is found, the minimum value will be used. Refer to :doc:`/admin/aggregates` for more information. .. important:: Note the ``cpu_allocation_ratio`` :ref:`bug 1804125 ` restriction. .. _AggregateDiskFilter: AggregateDiskFilter ------------------- .. deprecated:: 20.0.0 ``AggregateDiskFilter`` is deprecated since the 20.0.0 Train release. As of the introduction of the placement service in Ocata, the behavior of this filter :ref:`has changed ` and no longer should be used. In the 18.0.0 Rocky release nova `automatically mirrors`_ host aggregates to placement aggregates. In the 19.0.0 Stein release initial allocation ratios support was added which allows management of the allocation ratios via the placement API in addition to the existing capability to manage allocation ratios via the nova config. See `Allocation ratios`_ for details. Filters host by disk allocation with a per-aggregate ``disk_allocation_ratio`` value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and more than one value is found, the minimum value will be used. Refer to :doc:`/admin/aggregates` for more information. .. 
important:: Note the ``disk_allocation_ratio`` :ref:`bug 1804125 ` restriction. .. _AggregateImagePropertiesIsolation: AggregateImagePropertiesIsolation --------------------------------- Matches properties defined in an image's metadata against those of aggregates to determine host matches: * If a host belongs to an aggregate and the aggregate defines one or more metadata that matches an image's properties, that host is a candidate to boot the image's instance. * If a host does not belong to any aggregate, it can boot instances from all images. For example, the following aggregate ``myWinAgg`` has the Windows operating system as metadata (named 'windows'): .. code-block:: console $ openstack aggregate show myWinAgg +-------------------+----------------------------+ | Field | Value | +-------------------+----------------------------+ | availability_zone | zone1 | | created_at | 2017-01-01T15:36:44.000000 | | deleted | False | | deleted_at | None | | hosts | [u'sf-devel'] | | id | 1 | | name | myWinAgg | | properties | os_distro='windows' | | updated_at | None | +-------------------+----------------------------+ In this example, because the following Win-2012 image has the ``windows`` property, it boots on the ``sf-devel`` host (all other filters being equal): .. code-block:: console $ openstack image show Win-2012 +------------------+------------------------------------------------------+ | Field | Value | +------------------+------------------------------------------------------+ | checksum | ee1eca47dc88f4879d8a229cc70a07c6 | | container_format | bare | | created_at | 2016-12-13T09:30:30Z | | disk_format | qcow2 | | ... | | name | Win-2012 | | ... | | properties | os_distro='windows' | | ... | You can configure the ``AggregateImagePropertiesIsolation`` filter by using the following options in the ``nova.conf`` file: .. code-block:: ini [scheduler] # Considers only keys matching the given namespace (string). # Multiple values can be given, as a comma-separated list. aggregate_image_properties_isolation_namespace = # Separator used between the namespace and keys (string). aggregate_image_properties_isolation_separator = . .. note:: This filter has limitations as described in `bug 1677217 `_ which are addressed in placement :doc:`/reference/isolate-aggregates` request filter. Refer to :doc:`/admin/aggregates` for more information. .. _AggregateInstanceExtraSpecsFilter: AggregateInstanceExtraSpecsFilter --------------------------------- Matches properties defined in extra specs for an instance type against admin-defined properties on a host aggregate. Works with specifications that are scoped with ``aggregate_instance_extra_specs``. Multiple values can be given, as a comma-separated list. For backward compatibility, also works with non-scoped specifications; this action is highly discouraged because it conflicts with :ref:`ComputeCapabilitiesFilter` filter when you enable both filters. Refer to :doc:`/admin/aggregates` for more information. .. _AggregateIoOpsFilter: AggregateIoOpsFilter -------------------- Filters host by disk allocation with a per-aggregate ``max_io_ops_per_host`` value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and more than one value is found, the minimum value will be used. Refer to :doc:`/admin/aggregates` and :ref:`IoOpsFilter` for more information. .. 
_AggregateMultiTenancyIsolation: AggregateMultiTenancyIsolation ------------------------------ Ensures hosts in tenant-isolated host aggregates will only be available to a specified set of tenants. If a host is in an aggregate that has the ``filter_tenant_id`` metadata key, the host can build instances from only that tenant or comma-separated list of tenants. A host can be in different aggregates. If a host does not belong to an aggregate with the metadata key, the host can build instances from all tenants. This does not restrict the tenant from creating servers on hosts outside the tenant-isolated aggregate. For example, consider there are two available hosts for scheduling, HostA and HostB. HostB is in an aggregate isolated to tenant X. A server create request from tenant X will result in either HostA *or* HostB as candidates during scheduling. A server create request from another tenant Y will result in only HostA being a scheduling candidate since HostA is not part of the tenant-isolated aggregate. .. note:: There is a `known limitation `_ with the number of tenants that can be isolated per aggregate using this filter. This limitation does not exist, however, for the :ref:`tenant-isolation-with-placement` filtering capability added in the 18.0.0 Rocky release. .. _AggregateNumInstancesFilter: AggregateNumInstancesFilter --------------------------- Filters host by number of instances with a per-aggregate ``max_instances_per_host`` value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and thus more than one value is found, the minimum value will be used. Refer to :doc:`/admin/aggregates` and :ref:`NumInstancesFilter` for more information. .. _AggregateRamFilter: AggregateRamFilter ------------------ .. deprecated:: 20.0.0 ``AggregateRamFilter`` is deprecated since the 20.0.0 Train release. As of the introduction of the placement service in Ocata, the behavior of this filter :ref:`has changed ` and no longer should be used. In the 18.0.0 Rocky release nova `automatically mirrors`_ host aggregates to placement aggregates. In the 19.0.0 Stein release initial allocation ratios support was added which allows management of the allocation ratios via the placement API in addition to the existing capability to manage allocation ratios via the nova config. See `Allocation ratios`_ for details. Filters host by RAM allocation of instances with a per-aggregate ``ram_allocation_ratio`` value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and thus more than one value is found, the minimum value will be used. Refer to :doc:`/admin/aggregates` for more information. .. important:: Note the ``ram_allocation_ratio`` :ref:`bug 1804125 ` restriction. .. _AggregateTypeAffinityFilter: AggregateTypeAffinityFilter --------------------------- This filter passes hosts if no ``instance_type`` key is set or the ``instance_type`` aggregate metadata value contains the name of the ``instance_type`` requested. The value of the ``instance_type`` metadata entry is a string that may contain either a single ``instance_type`` name or a comma-separated list of ``instance_type`` names, such as ``m1.nano`` or ``m1.nano,m1.small``. Refer to :doc:`/admin/aggregates` for more information. AllHostsFilter -------------- This is a no-op filter. It does not eliminate any of the available hosts. .. 
_AvailabilityZoneFilter: AvailabilityZoneFilter ---------------------- Filters hosts by availability zone. You must enable this filter for the scheduler to respect availability zones in requests. Refer to :doc:`/admin/availability-zones` for more information. .. _ComputeCapabilitiesFilter: ComputeCapabilitiesFilter ------------------------- Matches properties defined in extra specs for an instance type against compute capabilities. If an extra specs key contains a colon (``:``), anything before the colon is treated as a namespace and anything after the colon is treated as the key to be matched. If a namespace is present and is not ``capabilities``, the filter ignores the namespace. For backward compatibility, also treats the extra specs key as the key to be matched if no namespace is present; this action is highly discouraged because it conflicts with :ref:`AggregateInstanceExtraSpecsFilter` filter when you enable both filters. Some virt drivers support reporting CPU traits to the Placement service. With that feature available, you should consider using traits in flavors instead of ComputeCapabilitiesFilter, because traits provide consistent naming for CPU features in some virt drivers and querying traits is efficient. For more detail, please see `Support Matrix `_, :ref:`Required traits `, :ref:`Forbidden traits ` and `Report CPU features to the Placement service `_. Also refer to `Compute capabilities as traits`_. .. _ComputeFilter: ComputeFilter ------------- Passes all hosts that are operational and enabled. In general, you should always enable this filter. DifferentHostFilter ------------------- Schedules the instance on a different host from a set of instances. To take advantage of this filter, the requester must pass a scheduler hint, using ``different_host`` as the key and a list of instance UUIDs as the value. This filter is the opposite of the ``SameHostFilter``. Using the :command:`openstack server create` command, use the ``--hint`` flag. For example: .. code-block:: console $ openstack server create --image cedef40a-ed67-4d10-800e-17455edce175 \ --flavor 1 --hint different_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 \ --hint different_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1 With the API, use the ``os:scheduler_hints`` key. For example: .. code-block:: json { "server": { "name": "server-1", "imageRef": "cedef40a-ed67-4d10-800e-17455edce175", "flavorRef": "1" }, "os:scheduler_hints": { "different_host": [ "a0cf03a5-d921-4877-bb5c-86d26cf818e1", "8c19174f-4220-44f0-824a-cd1eeef10287" ] } } .. _ImagePropertiesFilter: ImagePropertiesFilter --------------------- Filters hosts based on properties defined on the instance's image. It passes hosts that can support the specified image properties contained in the instance. Properties include the architecture, hypervisor type, hypervisor version (for Xen hypervisor type only), and virtual machine mode. For example, an instance might require a host that runs an ARM-based processor, and QEMU as the hypervisor. You can decorate an image with these properties by using: .. code-block:: console $ openstack image set --architecture arm --property hypervisor_type=qemu \ img-uuid The image properties that the filter checks for are: ``architecture`` describes the machine architecture required by the image. Examples are ``i686``, ``x86_64``, ``arm``, and ``ppc64``. ``hypervisor_type`` describes the hypervisor required by the image. Examples are ``xen``, ``qemu``, and ``xenapi``. .. 
note:: ``qemu`` is used for both QEMU and KVM hypervisor types. ``hypervisor_version_requires`` describes the hypervisor version required by the image. The property is supported for Xen hypervisor type only. It can be used to enable support for multiple hypervisor versions, and to prevent instances with newer Xen tools from being provisioned on an older version of a hypervisor. If available, the property value is compared to the hypervisor version of the compute host. To filter the hosts by the hypervisor version, add the ``hypervisor_version_requires`` property on the image as metadata and pass an operator and a required hypervisor version as its value: .. code-block:: console $ openstack image set --property hypervisor_type=xen --property \ hypervisor_version_requires=">=4.3" img-uuid ``vm_mode`` describes the hypervisor application binary interface (ABI) required by the image. Examples are ``xen`` for Xen 3.0 paravirtual ABI, ``hvm`` for native ABI, ``uml`` for User Mode Linux paravirtual ABI, ``exe`` for container virt executable ABI. IsolatedHostsFilter ------------------- Allows the admin to define a special (isolated) set of images and a special (isolated) set of hosts, such that the isolated images can only run on the isolated hosts, and the isolated hosts can only run isolated images. The flag ``restrict_isolated_hosts_to_isolated_images`` can be used to force isolated hosts to only run isolated images. The logic within the filter depends on the ``restrict_isolated_hosts_to_isolated_images`` config option, which defaults to True. When True, a volume-backed instance will not be put on an isolated host. When False, a volume-backed instance can go on any host, isolated or not. The admin must specify the isolated set of images and hosts in the ``nova.conf`` file using the ``isolated_hosts`` and ``isolated_images`` configuration options. For example: .. code-block:: ini [filter_scheduler] isolated_hosts = server1, server2 isolated_images = 342b492c-128f-4a42-8d3a-c5088cf27d13, ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09 .. _IoOpsFilter: IoOpsFilter ----------- The IoOpsFilter filters hosts by concurrent I/O operations on it. Hosts with too many concurrent I/O operations will be filtered out. The ``max_io_ops_per_host`` option specifies the maximum number of I/O intensive instances allowed to run on a host. A host will be ignored by the scheduler if more than ``max_io_ops_per_host`` instances in build, resize, snapshot, migrate, rescue or unshelve task states are running on it. JsonFilter ---------- .. warning:: This filter is not enabled by default and not comprehensively tested, and thus could fail to work as expected in non-obvious ways. Furthermore, the filter variables are based on attributes of the `HostState`_ class which could change from release to release so usage of this filter is generally not recommended. Consider using other filters such as the :ref:`ImagePropertiesFilter` or :ref:`traits-based scheduling `. The JsonFilter allows a user to construct a custom filter by passing a scheduler hint in JSON format. The following operators are supported: * = * < * > * in * <= * >= * not * or * and The filter supports any attribute in the `HostState`_ class such as the following variables: * ``$free_ram_mb`` * ``$free_disk_mb`` * ``$hypervisor_hostname`` * ``$total_usable_ram_mb`` * ``$vcpus_total`` * ``$vcpus_used`` Using the :command:`openstack server create` command, use the ``--hint`` flag: .. 
code-block:: console $ openstack server create --image 827d564a-e636-4fc4-a376-d36f7ebe1747 \ --flavor 1 --hint query='[">=","$free_ram_mb",1024]' server1 With the API, use the ``os:scheduler_hints`` key: .. code-block:: json { "server": { "name": "server-1", "imageRef": "cedef40a-ed67-4d10-800e-17455edce175", "flavorRef": "1" }, "os:scheduler_hints": { "query": "[\">=\",\"$free_ram_mb\",1024]" } } .. _HostState: https://opendev.org/openstack/nova/src/branch/master/nova/scheduler/host_manager.py MetricsFilter ------------- Filters hosts based on meters ``weight_setting``. Only hosts with the available meters are passed so that the metrics weigher will not fail due to these hosts. NUMATopologyFilter ------------------ Filters hosts based on the NUMA topology that was specified for the instance through the use of flavor ``extra_specs`` in combination with the image properties, as described in detail in the `related nova-spec document `_. Filter will try to match the exact NUMA cells of the instance to those of the host. It will consider the standard over-subscription limits for each host NUMA cell, and provide limits to the compute host accordingly. .. note:: If instance has no topology defined, it will be considered for any host. If instance has a topology defined, it will be considered only for NUMA capable hosts. .. _NumInstancesFilter: NumInstancesFilter ------------------ Hosts that have more instances running than specified by the ``max_instances_per_host`` option are filtered out when this filter is in place. PciPassthroughFilter -------------------- The filter schedules instances on a host if the host has devices that meet the device requests in the ``extra_specs`` attribute for the flavor. RetryFilter ----------- .. deprecated:: 20.0.0 Since the 17.0.0 (Queens) release, the scheduler has provided alternate hosts for rescheduling so the scheduler does not need to be called during a reschedule which makes the ``RetryFilter`` useless. See the `Return Alternate Hosts`_ spec for details. Filters out hosts that have already been attempted for scheduling purposes. If the scheduler selects a host to respond to a service request, and the host fails to respond to the request, this filter prevents the scheduler from retrying that host for the service request. This filter is only useful if the :oslo.config:option:`scheduler.max_attempts` configuration option is set to a value greater than one. .. _Return Alternate Hosts: https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/return-alternate-hosts.html SameHostFilter -------------- Schedules the instance on the same host as another instance in a set of instances. To take advantage of this filter, the requester must pass a scheduler hint, using ``same_host`` as the key and a list of instance UUIDs as the value. This filter is the opposite of the ``DifferentHostFilter``. Using the :command:`openstack server create` command, use the ``--hint`` flag: .. code-block:: console $ openstack server create --image cedef40a-ed67-4d10-800e-17455edce175 \ --flavor 1 --hint same_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 \ --hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1 With the API, use the ``os:scheduler_hints`` key: .. code-block:: json { "server": { "name": "server-1", "imageRef": "cedef40a-ed67-4d10-800e-17455edce175", "flavorRef": "1" }, "os:scheduler_hints": { "same_host": [ "a0cf03a5-d921-4877-bb5c-86d26cf818e1", "8c19174f-4220-44f0-824a-cd1eeef10287" ] } } .. 
_ServerGroupAffinityFilter: ServerGroupAffinityFilter ------------------------- The ServerGroupAffinityFilter ensures that an instance is scheduled on to a host from a set of group hosts. To take advantage of this filter, the requester must create a server group with an ``affinity`` policy, and pass a scheduler hint, using ``group`` as the key and the server group UUID as the value. Using the :command:`openstack server create` command, use the ``--hint`` flag. For example: .. code-block:: console $ openstack server group create --policy affinity group-1 $ openstack server create --image IMAGE_ID --flavor 1 \ --hint group=SERVER_GROUP_UUID server-1 .. _ServerGroupAntiAffinityFilter: ServerGroupAntiAffinityFilter ----------------------------- The ServerGroupAntiAffinityFilter ensures that each instance in a group is on a different host. To take advantage of this filter, the requester must create a server group with an ``anti-affinity`` policy, and pass a scheduler hint, using ``group`` as the key and the server group UUID as the value. Using the :command:`openstack server create` command, use the ``--hint`` flag. For example: .. code-block:: console $ openstack server group create --policy anti-affinity group-1 $ openstack server create --image IMAGE_ID --flavor 1 \ --hint group=SERVER_GROUP_UUID server-1 SimpleCIDRAffinityFilter ------------------------ Schedules the instance based on host IP subnet range. To take advantage of this filter, the requester must specify a range of valid IP address in CIDR format, by passing two scheduler hints: ``build_near_host_ip`` The first IP address in the subnet (for example, ``192.168.1.1``) ``cidr`` The CIDR that corresponds to the subnet (for example, ``/24``) Using the :command:`openstack server create` command, use the ``--hint`` flag. For example, to specify the IP subnet ``192.168.1.1/24``: .. code-block:: console $ openstack server create --image cedef40a-ed67-4d10-800e-17455edce175 \ --flavor 1 --hint build_near_host_ip=192.168.1.1 --hint cidr=/24 server-1 With the API, use the ``os:scheduler_hints`` key: .. code-block:: json { "server": { "name": "server-1", "imageRef": "cedef40a-ed67-4d10-800e-17455edce175", "flavorRef": "1" }, "os:scheduler_hints": { "build_near_host_ip": "192.168.1.1", "cidr": "24" } } .. _weights: Weights ~~~~~~~ When resourcing instances, the filter scheduler filters and weights each host in the list of acceptable hosts. Each time the scheduler selects a host, it virtually consumes resources on it, and subsequent selections are adjusted accordingly. This process is useful when the customer asks for the same large amount of instances, because weight is computed for each requested instance. All weights are normalized before being summed up; the host with the largest weight is given the highest priority. **Weighting hosts** .. figure:: /_static/images/nova-weighting-hosts.png Hosts are weighted based on the following options in the ``/etc/nova/nova.conf`` file: .. list-table:: Host weighting options :header-rows: 1 :widths: 10, 25, 60 * - Section - Option - Description * - [DEFAULT] - ``ram_weight_multiplier`` - By default, the scheduler spreads instances across all hosts evenly. Set the ``ram_weight_multiplier`` option to a negative number if you prefer stacking instead of spreading. Use a floating-point value. If the per aggregate ``ram_weight_multiplier`` metadata is set, this multiplier will override the configuration option value. 
* - [DEFAULT] - ``disk_weight_multiplier`` - By default, the scheduler spreads instances across all hosts evenly. Set the ``disk_weight_multiplier`` option to a negative number if you prefer stacking instead of spreading. Use a floating-point value. If the per aggregate ``disk_weight_multiplier`` metadata is set, this multiplier will override the configuration option value. * - [DEFAULT] - ``cpu_weight_multiplier`` - By default, the scheduler spreads instances across all hosts evenly. Set the ``cpu_weight_multiplier`` option to a negative number if you prefer stacking instead of spreading. Use a floating-point value. If the per aggregate ``cpu_weight_multiplier`` metadata is set, this multiplier will override the configuration option value. * - [DEFAULT] - ``scheduler_host_subset_size`` - New instances are scheduled on a host that is chosen randomly from a subset of the N best hosts. This property defines the subset size from which a host is chosen. A value of 1 chooses the first host returned by the weighting functions. This value must be at least 1. A value less than 1 is ignored, and 1 is used instead. Use an integer value. * - [DEFAULT] - ``scheduler_weight_classes`` - Defaults to ``nova.scheduler.weights.all_weighers``. Hosts are then weighted and sorted with the largest weight winning. * - [DEFAULT] - ``io_ops_weight_multiplier`` - Multiplier used for weighing host I/O operations. A negative value means a preference to choose light workload compute hosts. If the per aggregate ``io_ops_weight_multiplier`` metadata is set, this multiplier will override the configuration option value. * - [filter_scheduler] - ``soft_affinity_weight_multiplier`` - Multiplier used for weighing hosts for group soft-affinity. Only a positive value is allowed. If the per aggregate ``soft_affinity_weight_multiplier`` metadata is set, this multiplier will override the configuration option value. * - [filter_scheduler] - ``soft_anti_affinity_weight_multiplier`` - Multiplier used for weighing hosts for group soft-anti-affinity. Only a positive value is allowed. If the per aggregate ``soft_anti_affinity_weight_multiplier`` metadata is set, this multiplier will override the configuration option value. * - [filter_scheduler] - ``build_failure_weight_multiplier`` - Multiplier used for weighing hosts which have recent build failures. A positive value increases the significance of build failures reported by the host recently, making them less likely to be chosen. If the per aggregate ``build_failure_weight_multiplier`` metadata is set, this multiplier will override the configuration option value. * - [filter_scheduler] - ``cross_cell_move_weight_multiplier`` - Multiplier used for weighing hosts during a cross-cell move. By default, prefers hosts within the same source cell when migrating a server. If the per aggregate ``cross_cell_move_weight_multiplier`` metadata is set, this multiplier will override the configuration option value. * - [metrics] - ``weight_multiplier`` - Multiplier for weighting meters. Use a floating-point value. If the per aggregate ``metrics_weight_multiplier`` metadata is set, this multiplier will override the configuration option value. * - [metrics] - ``weight_setting`` - Determines how meters are weighted. Use a comma-separated list of metricName=ratio. For example: ``name1=1.0, name2=-1.0`` results in: ``name1.value * 1.0 + name2.value * -1.0`` * - [metrics] - ``required`` - Specifies how to treat unavailable meters: * True - Raises an exception.
To avoid the raised exception, you should use the scheduler filter ``MetricFilter`` to filter out hosts with unavailable meters. * False - Treated as a negative factor in the weighting process (uses the ``weight_of_unavailable`` option). * - [metrics] - ``weight_of_unavailable`` - If ``required`` is set to False, and any one of the meters set by ``weight_setting`` is unavailable, the ``weight_of_unavailable`` value is returned to the scheduler. For example: .. code-block:: ini [DEFAULT] scheduler_host_subset_size = 1 scheduler_weight_classes = nova.scheduler.weights.all_weighers ram_weight_multiplier = 1.0 io_ops_weight_multiplier = 2.0 soft_affinity_weight_multiplier = 1.0 soft_anti_affinity_weight_multiplier = 1.0 [metrics] weight_multiplier = 1.0 weight_setting = name1=1.0, name2=-1.0 required = false weight_of_unavailable = -10000.0 Utilization aware scheduling ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ It is possible to schedule VMs using advanced scheduling decisions. These decisions are made based on enhanced usage statistics encompassing data like memory cache utilization, memory bandwidth utilization, or network bandwidth utilization. This is disabled by default. The administrator can configure how the metrics are weighted in the configuration file by using the ``weight_setting`` configuration option in the ``nova.conf`` configuration file. For example to configure metric1 with ratio1 and metric2 with ratio2: .. code-block:: ini weight_setting = "metric1=ratio1, metric2=ratio2" XenServer hypervisor pools to support live migration ---------------------------------------------------- When using the XenAPI-based hypervisor, the Compute service uses host aggregates to manage XenServer Resource pools, which are used in supporting live migration. Allocation ratios ~~~~~~~~~~~~~~~~~ The following configuration options exist to control allocation ratios per compute node to support over-commit of resources: * :oslo.config:option:`cpu_allocation_ratio`: allows overriding the VCPU inventory allocation ratio for a compute node * :oslo.config:option:`ram_allocation_ratio`: allows overriding the MEMORY_MB inventory allocation ratio for a compute node * :oslo.config:option:`disk_allocation_ratio`: allows overriding the DISK_GB inventory allocation ratio for a compute node Prior to the 19.0.0 Stein release, if left unset, the ``cpu_allocation_ratio`` defaults to 16.0, the ``ram_allocation_ratio`` defaults to 1.5, and the ``disk_allocation_ratio`` defaults to 1.0. Starting with the 19.0.0 Stein release, the following configuration options control the initial allocation ratio values for a compute node: * :oslo.config:option:`initial_cpu_allocation_ratio`: the initial VCPU inventory allocation ratio for a new compute node record, defaults to 16.0 * :oslo.config:option:`initial_ram_allocation_ratio`: the initial MEMORY_MB inventory allocation ratio for a new compute node record, defaults to 1.5 * :oslo.config:option:`initial_disk_allocation_ratio`: the initial DISK_GB inventory allocation ratio for a new compute node record, defaults to 1.0 Scheduling considerations ------------------------- The allocation ratio configuration is used both during reporting of compute node `resource provider inventory`_ to the placement service and during scheduling. .. _bug-1804125: .. 
note:: Regarding the `AggregateCoreFilter`_, `AggregateDiskFilter`_ and `AggregateRamFilter`_, starting in 15.0.0 (Ocata) there is a behavior change where aggregate-based overcommit ratios will no longer be honored during scheduling for the FilterScheduler. Instead, overcommit values must be set on a per-compute-node basis in the Nova configuration files. If you have been relying on per-aggregate overcommit, during your upgrade, you must change to using per-compute-node overcommit ratios in order for your scheduling behavior to stay consistent. Otherwise, you may notice increased NoValidHost scheduling failures as the aggregate-based overcommit is no longer being considered. See `bug 1804125 `_ for more details. .. _resource provider inventory: https://docs.openstack.org/api-ref/placement/?expanded=#resource-provider-inventories Usage scenarios --------------- Since allocation ratios can be set via nova configuration, host aggregate metadata and the placement API, it can be confusing to know which should be used. This really depends on your scenario. A few common scenarios are detailed here. 1. When the deployer wants to **always** set an override value for a resource on a compute node, the deployer would ensure that the ``[DEFAULT]/cpu_allocation_ratio``, ``[DEFAULT]/ram_allocation_ratio`` and ``[DEFAULT]/disk_allocation_ratio`` configuration options are set to a non-None value (or greater than 0.0 before the 19.0.0 Stein release). This will make the ``nova-compute`` service overwrite any externally-set allocation ratio values set via the placement REST API. 2. When the deployer wants to set an **initial** value for a compute node allocation ratio but wants to allow an admin to adjust this afterwards without making any configuration file changes, the deployer would set the ``[DEFAULT]/initial_cpu_allocation_ratio``, ``[DEFAULT]/initial_ram_allocation_ratio`` and ``[DEFAULT]/initial_disk_allocation_ratio`` configuration options and then manage the allocation ratios using the placement REST API (or `osc-placement`_ command line interface). For example: .. code-block:: console $ openstack resource provider inventory set --resource VCPU:allocation_ratio=1.0 815a5634-86fb-4e1e-8824-8a631fee3e06 Note the :ref:`bug 1804125 ` restriction. 3. When the deployer wants to **always** use the placement API to set allocation ratios, then the deployer should ensure that ``[DEFAULT]/xxx_allocation_ratio`` options are all set to None (the default since 19.0.0 Stein, 0.0 before Stein) and then manage the allocation ratios using the placement REST API (or `osc-placement`_ command line interface). This scenario is the workaround for `bug 1804125 `_. .. _osc-placement: https://docs.openstack.org/osc-placement/latest/index.html .. _hypervisor-specific-considerations: Hypervisor-specific considerations ---------------------------------- Nova provides three configuration options, :oslo.config:option:`reserved_host_cpus`, :oslo.config:option:`reserved_host_memory_mb`, and :oslo.config:option:`reserved_host_disk_mb`, that can be used to set aside some number of resources that will not be consumed by an instance, whether these resources are overcommitted or not. Some virt drivers may benefit from the use of these options to account for hypervisor-specific overhead. HyperV Hyper-V creates a VM memory file on the local disk when an instance starts. The size of this file corresponds to the amount of RAM allocated to the instance. 
You should configure the :oslo.config:option:`reserved_host_disk_mb` config option to account for this overhead, based on the amount of memory available to instances. XenAPI XenServer memory overhead is proportional to the size of the VM and larger flavor VMs become more efficient with respect to overhead. This overhead can be calculated using the following formula:: overhead (MB) = (instance.memory * 0.00781) + (instance.vcpus * 1.5) + 3 You should configure the :oslo.config:option:`reserved_host_memory_mb` config option to account for this overhead, based on the size of your hosts and instances. For more information, refer to https://wiki.openstack.org/wiki/XenServer/Overhead. Cells considerations ~~~~~~~~~~~~~~~~~~~~ By default cells are enabled for scheduling new instances but they can be disabled (new schedulings to the cell are blocked). This may be useful for users while performing cell maintenance, failures or other interventions. It is to be noted that creating pre-disabled cells and enabling/disabling existing cells should either be followed by a restart or SIGHUP of the nova-scheduler service for the changes to take effect. Command-line interface ---------------------- The :command:`nova-manage` command-line client supports the cell-disable related commands. To enable or disable a cell, use :command:`nova-manage cell_v2 update_cell` and to create pre-disabled cells, use :command:`nova-manage cell_v2 create_cell`. See the :ref:`man-page-cells-v2` man page for details on command usage. .. _compute-capabilities-as-traits: Compute capabilities as traits ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Starting with the 19.0.0 Stein release, the ``nova-compute`` service will report certain ``COMPUTE_*`` traits based on its compute driver capabilities to the placement service. The traits will be associated with the resource provider for that compute service. These traits can be used during scheduling by configuring flavors with :ref:`Required traits ` or :ref:`Forbidden traits `. For example, if you have a host aggregate with a set of compute nodes that support multi-attach volumes, you can restrict a flavor to that aggregate by adding the ``trait:COMPUTE_VOLUME_MULTI_ATTACH=required`` extra spec to the flavor and then restrict the flavor to the aggregate :ref:`as normal `. Here is an example of a libvirt compute node resource provider that is exposing some CPU features as traits, driver capabilities as traits, and a custom trait denoted by the ``CUSTOM_`` prefix: .. code-block:: console $ openstack --os-placement-api-version 1.6 resource provider trait list \ > d9b3dbc4-50e2-42dd-be98-522f6edaab3f --sort-column name +---------------------------------------+ | name | +---------------------------------------+ | COMPUTE_DEVICE_TAGGING | | COMPUTE_NET_ATTACH_INTERFACE | | COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG | | COMPUTE_TRUSTED_CERTS | | COMPUTE_VOLUME_ATTACH_WITH_TAG | | COMPUTE_VOLUME_EXTEND | | COMPUTE_VOLUME_MULTI_ATTACH | | CUSTOM_IMAGE_TYPE_RBD | | HW_CPU_X86_MMX | | HW_CPU_X86_SSE | | HW_CPU_X86_SSE2 | | HW_CPU_X86_SVM | +---------------------------------------+ **Rules** There are some rules associated with capability-defined traits. 1. The compute service "owns" these traits and will add/remove them when the ``nova-compute`` service starts and when the ``update_available_resource`` periodic task runs, with run intervals controlled by config option :oslo.config:option:`update_resources_interval`. 2. 
The compute service will not remove any custom traits set on the resource provider externally, such as the ``CUSTOM_IMAGE_TYPE_RBD`` trait in the example above. 3. If compute-owned traits are removed from the resource provider externally, for example by running ``openstack resource provider trait delete ``, the compute service will add its traits again on restart or SIGHUP. 4. If a compute trait is set on the resource provider externally which is not supported by the driver, for example by adding the ``COMPUTE_VOLUME_EXTEND`` trait when the driver does not support that capability, the compute service will automatically remove the unsupported trait on restart or SIGHUP. 5. Compute capability traits are standard traits defined in the `os-traits`_ library. .. _os-traits: https://opendev.org/openstack/os-traits/src/branch/master/os_traits/compute :ref:`Further information on capabilities and traits ` can be found in the :doc:`Technical Reference Deep Dives section `. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/configuring-migrations.rst0000664000175000017500000003513500000000000023150 0ustar00zuulzuul00000000000000.. _section_configuring-compute-migrations: ========================= Configure live migrations ========================= Migration enables an administrator to move a virtual machine instance from one compute host to another. A typical scenario is planned maintenance on the source host, but migration can also be useful to redistribute the load when many VM instances are running on a specific physical machine. This document covers live migrations using the :ref:`configuring-migrations-kvm-libvirt` and :ref:`configuring-migrations-xenserver` hypervisors. .. :ref:`_configuring-migrations-kvm-libvirt` .. :ref:`_configuring-migrations-xenserver` .. note:: Not all Compute service hypervisor drivers support live-migration, or support all live-migration features. Similarly not all compute service features are supported. Consult :doc:`/user/support-matrix` to determine which hypervisors support live-migration. See the :doc:`/configuration/index` for details on hypervisor configuration settings. The migration types are: - **Non-live migration**, also known as cold migration or simply migration. The instance is shut down, then moved to another hypervisor and restarted. The instance recognizes that it was rebooted, and the application running on the instance is disrupted. This section does not cover cold migration. - **Live migration** The instance keeps running throughout the migration. This is useful when it is not possible or desirable to stop the application running on the instance. Live migrations can be classified further by the way they treat instance storage: - **Shared storage-based live migration**. The instance has ephemeral disks that are located on storage shared between the source and destination hosts. - **Block live migration**, or simply block migration. The instance has ephemeral disks that are not shared between the source and destination hosts. Block migration is incompatible with read-only devices such as CD-ROMs and Configuration Drive (config\_drive). - **Volume-backed live migration**. Instances use volumes rather than ephemeral disks. Block live migration requires copying disks from the source to the destination host. It takes more time and puts more load on the network. Shared-storage and volume-backed live migration does not copy disks. .. 
note:: In a multi-cell cloud, instances can be live migrated to a different host in the same cell, but not across cells. The following sections describe how to configure your hosts for live migrations using the KVM and XenServer hypervisors. .. _configuring-migrations-kvm-libvirt: KVM-libvirt ~~~~~~~~~~~ .. _configuring-migrations-kvm-general: General configuration --------------------- To enable any type of live migration, configure the compute hosts according to the instructions below: #. Set the following parameters in ``nova.conf`` on all compute hosts: - ``server_listen=0.0.0.0`` You must not make the VNC server listen to the IP address of its compute host, since that addresses changes when the instance is migrated. .. important:: Since this setting allows VNC clients from any IP address to connect to instance consoles, you must take additional measures like secure networks or firewalls to prevent potential attackers from gaining access to instances. - ``instances_path`` must have the same value for all compute hosts. In this guide, the value ``/var/lib/nova/instances`` is assumed. #. Ensure that name resolution on all compute hosts is identical, so that they can connect each other through their hostnames. If you use ``/etc/hosts`` for name resolution and enable SELinux, ensure that ``/etc/hosts`` has the correct SELinux context: .. code-block:: console # restorecon /etc/hosts #. Enable password-less SSH so that root on one compute host can log on to any other compute host without providing a password. The ``libvirtd`` daemon, which runs as root, uses the SSH protocol to copy the instance to the destination and can't know the passwords of all compute hosts. You may, for example, compile root's public SSH keys on all compute hosts into an ``authorized_keys`` file and deploy that file to the compute hosts. #. Configure the firewalls to allow libvirt to communicate between compute hosts. By default, libvirt uses the TCP port range from 49152 to 49261 for copying memory and disk contents. Compute hosts must accept connections in this range. For information about ports used by libvirt, see the `libvirt documentation `_. .. important:: Be mindful of the security risks introduced by opening ports. .. _`configuring-migrations-securing-live-migration-streams`: Securing live migration streams ------------------------------- If your compute nodes have at least libvirt 4.4.0 and QEMU 2.11.0, it is strongly recommended to secure all your live migration streams by taking advantage of the "QEMU-native TLS" feature. This requires a pre-existing PKI (Public Key Infrastructure) setup. For further details on how to set this all up, refer to the :doc:`secure-live-migration-with-qemu-native-tls` document. .. _configuring-migrations-kvm-block-and-volume-migration: Block migration, volume-based live migration -------------------------------------------- If your environment satisfies the requirements for "QEMU-native TLS", then block migration requires some setup; refer to the above section, `Securing live migration streams`_, for details. Otherwise, no additional configuration is required for block migration and volume-backed live migration. Be aware that block migration adds load to the network and storage subsystems. .. _configuring-migrations-kvm-shared-storage: Shared storage -------------- Compute hosts have many options for sharing storage, for example NFS, shared disk array LUNs, Ceph or GlusterFS. 
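Whichever storage backend is chosen, the end result must be that ``instances_path`` refers to the same data on every compute host involved in live migration. Once the shared storage is in place, a quick sanity check such as the sketch below can confirm this; the host names ``compute1`` and ``compute2`` are placeholders for two of your compute hosts:

.. code-block:: console

   $ ssh compute1 "touch /var/lib/nova/instances/shared-storage-test"
   $ ssh compute2 "ls /var/lib/nova/instances/shared-storage-test"
   /var/lib/nova/instances/shared-storage-test
   $ ssh compute1 "rm /var/lib/nova/instances/shared-storage-test"

If the file created on the first host is not visible on the second, the storage is not actually shared and only block migration or volume-backed live migration would work.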
The next steps show how a regular Linux system might be configured as an NFS v4 server for live migration. For detailed information and alternative ways to configure NFS on Linux, see instructions for `Ubuntu`_, `RHEL and derivatives`_ or `SLES and OpenSUSE`_. .. _`Ubuntu`: https://help.ubuntu.com/community/SettingUpNFSHowTo .. _`RHEL and derivatives`: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/nfs-serverconfig.html .. _`SLES and OpenSUSE`: https://www.suse.com/documentation/sles-12/book_sle_admin/data/sec_nfs_configuring-nfs-server.html #. Ensure that UID and GID of the nova user are identical on the compute hosts and the NFS server. #. Create a directory with enough disk space for all instances in the cloud, owned by user nova. In this guide, we assume ``/var/lib/nova/instances``. #. Set the execute/search bit on the ``instances`` directory: .. code-block:: console $ chmod o+x /var/lib/nova/instances This allows qemu to access the ``instances`` directory tree. #. Export ``/var/lib/nova/instances`` to the compute hosts. For example, add the following line to ``/etc/exports``: .. code-block:: ini /var/lib/nova/instances *(rw,sync,fsid=0,no_root_squash) The asterisk permits access to any NFS client. The option ``fsid=0`` exports the instances directory as the NFS root. After setting up the NFS server, mount the remote filesystem on all compute hosts. #. Assuming the NFS server's hostname is ``nfs-server``, add this line to ``/etc/fstab`` to mount the NFS root: .. code-block:: console nfs-server:/ /var/lib/nova/instances nfs4 defaults 0 0 #. Test NFS by mounting the instances directory and check access permissions for the nova user: .. code-block:: console $ sudo mount -a -v $ ls -ld /var/lib/nova/instances/ drwxr-xr-x. 2 nova nova 6 Mar 14 21:30 /var/lib/nova/instances/ .. _configuring-migrations-kvm-advanced: Advanced configuration for KVM and QEMU --------------------------------------- Live migration copies the instance's memory from the source to the destination compute host. After a memory page has been copied, the instance may write to it again, so that it has to be copied again. Instances that frequently write to different memory pages can overwhelm the memory copy process and prevent the live migration from completing. This section covers configuration settings that can help live migration of memory-intensive instances succeed. #. **Live migration completion timeout** The Compute service will either abort or force complete a migration when it has been running too long. This behavior is configurable using the :oslo.config:option:`libvirt.live_migration_timeout_action` config option. The timeout is calculated based on the instance size, which is the instance's memory size in GiB. In the case of block migration, the size of ephemeral storage in GiB is added. The timeout in seconds is the instance size multiplied by the configurable parameter :oslo.config:option:`libvirt.live_migration_completion_timeout`, whose default is 800. For example, shared-storage live migration of an instance with 8GiB memory will time out after 6400 seconds. #. **Instance downtime** Near the end of the memory copy, the instance is paused for a short time so that the remaining few pages can be copied without interference from instance memory writes. The Compute service initializes this time to a small value that depends on the instance size, typically around 50 milliseconds. 
When it notices that the memory copy does not make sufficient progress, it increases the time gradually. You can influence the instance downtime algorithm with the help of three configuration variables on the compute hosts: .. code-block:: ini live_migration_downtime = 500 live_migration_downtime_steps = 10 live_migration_downtime_delay = 75 ``live_migration_downtime`` sets the maximum permitted downtime for a live migration, in *milliseconds*. The default is 500. ``live_migration_downtime_steps`` sets the total number of adjustment steps until ``live_migration_downtime`` is reached. The default is 10 steps. ``live_migration_downtime_delay`` sets the time interval between two adjustment steps in *seconds*. The default is 75. #. **Auto-convergence** One strategy for a successful live migration of a memory-intensive instance is slowing the instance down. This is called auto-convergence. Both libvirt and QEMU implement this feature by automatically throttling the instance's CPU when memory copy delays are detected. Auto-convergence is disabled by default. You can enable it by setting ``live_migration_permit_auto_converge=true``. .. caution:: Before enabling auto-convergence, make sure that the instance's application tolerates a slow-down. Be aware that auto-convergence does not guarantee live migration success. #. **Post-copy** Live migration of a memory-intensive instance is certain to succeed when you enable post-copy. This feature, implemented by libvirt and QEMU, activates the virtual machine on the destination host before all of its memory has been copied. When the virtual machine accesses a page that is missing on the destination host, the resulting page fault is resolved by copying the page from the source host. Post-copy is disabled by default. You can enable it by setting ``live_migration_permit_post_copy=true``. When you enable both auto-convergence and post-copy, auto-convergence remains disabled. .. caution:: The page faults introduced by post-copy can slow the instance down. When the network connection between source and destination host is interrupted, page faults cannot be resolved anymore and the instance is rebooted. .. TODO Bernd: I *believe* that it is certain to succeed, .. but perhaps I am missing something. The full list of live migration configuration parameters is documented in the :doc:`Nova Configuration Options ` .. _configuring-migrations-xenserver: XenServer ~~~~~~~~~ .. :ref:Shared Storage .. :ref:Block migration .. _configuring-migrations-xenserver-shared-storage: Shared storage -------------- **Prerequisites** - **Compatible XenServer hypervisors**. For more information, see the `Requirements for Creating Resource Pools `_ section of the XenServer Administrator's Guide. - **Shared storage**. An NFS export, visible to all XenServer hosts. .. note:: For the supported NFS versions, see the `NFS and SMB `_ section of the XenServer Administrator's Guide. To use shared storage live migration with XenServer hypervisors, the hosts must be joined to a XenServer pool. .. rubric:: Using shared storage live migrations with XenServer Hypervisors #. Add an NFS VHD storage to your master XenServer, and set it as the default storage repository. For more information, see NFS VHD in the XenServer Administrator's Guide. #. Configure all compute nodes to use the default storage repository (``sr``) for pool operations. Add this line to your ``nova.conf`` configuration files on all compute nodes: .. code-block:: ini sr_matching_filter=default-sr:true #. 
To add a host to a pool, you need to know the pool master ip address, user name and password. Run below command on the XenServer host: .. code-block:: console $ xe pool-join master-address=MASTER_IP master-username=root master-password=MASTER_PASSWORD .. note:: The added compute node and the host will shut down to join the host to the XenServer pool. The operation will fail if any server other than the compute node is running or suspended on the host. .. _configuring-migrations-xenserver-block-migration: Block migration --------------- - **Compatible XenServer hypervisors**. The hypervisors must support the Storage XenMotion feature. See your XenServer manual to make sure your edition has this feature. .. note:: - To use block migration, you must use the ``--block-migrate`` parameter with the live migration command. - Block migration works only with EXT local storage storage repositories, and the server must not have any volumes attached. VMware ~~~~~~ .. :ref:`_configuring-migrations-vmware` .. _configuring-migrations-vmware: vSphere configuration --------------------- Enable vMotion on all ESX hosts which are managed by Nova by following the instructions in `this `_ KB article. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/cpu-topologies.rst0000664000175000017500000006414200000000000021435 0ustar00zuulzuul00000000000000============== CPU topologies ============== The NUMA topology and CPU pinning features in OpenStack provide high-level control over how instances run on hypervisor CPUs and the topology of virtual CPUs available to instances. These features help minimize latency and maximize performance. .. include:: /common/numa-live-migration-warning.txt SMP, NUMA, and SMT ------------------ Symmetric multiprocessing (SMP) SMP is a design found in many modern multi-core systems. In an SMP system, there are two or more CPUs and these CPUs are connected by some interconnect. This provides CPUs with equal access to system resources like memory and input/output ports. Non-uniform memory access (NUMA) NUMA is a derivative of the SMP design that is found in many multi-socket systems. In a NUMA system, system memory is divided into cells or nodes that are associated with particular CPUs. Requests for memory on other nodes are possible through an interconnect bus. However, bandwidth across this shared bus is limited. As a result, competition for this resource can incur performance penalties. Simultaneous Multi-Threading (SMT) SMT is a design complementary to SMP. Whereas CPUs in SMP systems share a bus and some memory, CPUs in SMT systems share many more components. CPUs that share components are known as thread siblings. All CPUs appear as usable CPUs on the system and can execute workloads in parallel. However, as with NUMA, threads compete for shared resources. Non-Uniform I/O Access (NUMA I/O) In a NUMA system, I/O to a device mapped to a local memory region is more efficient than I/O to a remote device. A device connected to the same socket providing the CPU and memory offers lower latencies for I/O operations due to its physical proximity. This generally manifests itself in devices connected to the PCIe bus, such as NICs or vGPUs, but applies to any device support memory-mapped I/O. In OpenStack, SMP CPUs are known as *cores*, NUMA cells or nodes are known as *sockets*, and SMT CPUs are known as *threads*. 
For example, a quad-socket, eight core system with Hyper-Threading would have four sockets, eight cores per socket and two threads per core, for a total of 64 CPUs. PCPU and VCPU ------------- PCPU Resource class representing an amount of dedicated CPUs for a single guest. VCPU Resource class representing a unit of CPU resources for a single guest approximating the processing power of a single physical processor. Customizing instance NUMA placement policies -------------------------------------------- .. important:: The functionality described below is currently only supported by the libvirt/KVM and Hyper-V driver. The Hyper-V driver may require :ref:`some host configuration ` for this to work. When running workloads on NUMA hosts, it is important that the vCPUs executing processes are on the same NUMA node as the memory used by these processes. This ensures all memory accesses are local to the node and thus do not consume the limited cross-node memory bandwidth, adding latency to memory accesses. Similarly, large pages are assigned from memory and benefit from the same performance improvements as memory allocated using standard pages. Thus, they also should be local. Finally, PCI devices are directly associated with specific NUMA nodes for the purposes of DMA. Instances that use PCI or SR-IOV devices should be placed on the NUMA node associated with these devices. NUMA topology can exist on both the physical hardware of the host and the virtual hardware of the instance. In OpenStack, when booting a process, the hypervisor driver looks at the NUMA topology field of both the instance and the host it is being booted on, and uses that information to generate an appropriate configuration. By default, an instance floats across all NUMA nodes on a host. NUMA awareness can be enabled implicitly through the use of huge pages or pinned CPUs or explicitly through the use of flavor extra specs or image metadata. If the instance has requested a specific NUMA topology, compute will try to pin the vCPUs of different NUMA cells on the instance to the corresponding NUMA cells on the host. It will also expose the NUMA topology of the instance to the guest OS. In all cases where NUMA awareness is used, the ``NUMATopologyFilter`` filter must be enabled. Details on this filter are provided in :doc:`/admin/configuration/schedulers`. .. caution:: The NUMA node(s) used are normally chosen at random. However, if a PCI passthrough or SR-IOV device is attached to the instance, then the NUMA node that the device is associated with will be used. This can provide important performance improvements. However, booting a large number of similar instances can result in unbalanced NUMA node usage. Care should be taken to mitigate this issue. See this `discussion`_ for more details. .. caution:: Inadequate per-node resources will result in scheduling failures. Resources that are specific to a node include not only CPUs and memory, but also PCI and SR-IOV resources. It is not possible to use multiple resources from different nodes without requesting a multi-node layout. As such, it may be necessary to ensure PCI or SR-IOV resources are associated with the same NUMA node or force a multi-node layout. When used, NUMA awareness allows the operating system of the instance to intelligently schedule the workloads that it runs and minimize cross-node memory bandwidth. To restrict an instance's vCPUs to a single host NUMA node, run: .. 
code-block:: console $ openstack flavor set [FLAVOR_ID] --property hw:numa_nodes=1 Some workloads have very demanding requirements for memory access latency or bandwidth that exceed the memory bandwidth available from a single NUMA node. For such workloads, it is beneficial to spread the instance across multiple host NUMA nodes, even if the instance's RAM/vCPUs could theoretically fit on a single NUMA node. To force an instance's vCPUs to spread across two host NUMA nodes, run: .. code-block:: console $ openstack flavor set [FLAVOR_ID] --property hw:numa_nodes=2 The allocation of an instance's vCPUs and memory from different host NUMA nodes can be configured. This allows for asymmetric allocation of vCPUs and memory, which can be important for some workloads. To spread the 6 vCPUs and 6 GB of memory of an instance across two NUMA nodes and create an asymmetric 1:2 vCPU and memory mapping between the two nodes, run: .. code-block:: console $ openstack flavor set [FLAVOR_ID] --property hw:numa_nodes=2 # configure guest node 0 $ openstack flavor set [FLAVOR_ID] \ --property hw:numa_cpus.0=0,1 \ --property hw:numa_mem.0=2048 # configure guest node 1 $ openstack flavor set [FLAVOR_ID] \ --property hw:numa_cpus.1=2,3,4,5 \ --property hw:numa_mem.1=4096 .. note:: Hyper-V does not support asymmetric NUMA topologies, and the Hyper-V driver will not spawn instances with such topologies. For more information about the syntax for ``hw:numa_nodes``, ``hw:numa_cpus.N`` and ``hw:numa_mem.N``, refer to the :ref:`NUMA topology ` guide. Customizing instance CPU pinning policies ----------------------------------------- .. important:: The functionality described below is currently only supported by the libvirt/KVM driver and requires :ref:`some host configuration ` for this to work. Hyper-V does not support CPU pinning. .. note:: There is no correlation required between the NUMA topology exposed in the instance and how the instance is actually pinned on the host. This is by design. See this `invalid bug `_ for more information. By default, instance vCPU processes are not assigned to any particular host CPU; instead, they float across host CPUs like any other process. This allows for features like overcommitting of CPUs. In heavily contended systems, this provides optimal system performance at the expense of performance and latency for individual instances. Some workloads require real-time or near real-time behavior, which is not possible with the latency introduced by the default CPU policy. For such workloads, it is beneficial to control which host CPUs are bound to an instance's vCPUs. This process is known as pinning. No instance with pinned CPUs can use the CPUs of another pinned instance, thus preventing resource contention between instances. CPU pinning policies can be used to determine whether an instance should be pinned or not. There are two policies: ``dedicated`` and ``shared`` (the default). The ``dedicated`` CPU policy is used to specify that an instance should use pinned CPUs. To configure a flavor to use the ``dedicated`` CPU policy, run: .. code-block:: console $ openstack flavor set [FLAVOR_ID] --property hw:cpu_policy=dedicated This works by ensuring ``PCPU`` allocations are used instead of ``VCPU`` allocations. As such, it is also possible to request this resource type explicitly. To configure this, run: .. code-block:: console $ openstack flavor set [FLAVOR_ID] --property resources:PCPU=N where ``N`` is the number of vCPUs defined in the flavor. ..
note:: It is not currently possible to request ``PCPU`` and ``VCPU`` resources in the same instance. The ``shared`` CPU policy is used to specify that an instance **should not** use pinned CPUs. To configure a flavor to use the ``shared`` CPU policy, run: .. code-block:: console $ openstack flavor set [FLAVOR_ID] --property hw:cpu_policy=shared .. note:: For more information about the syntax for ``hw:cpu_policy``, refer to the :doc:`/user/flavors` guide. It is also possible to configure the CPU policy via image metadata. This can be useful when packaging applications that require real-time or near real-time behavior by ensuring instances created with a given image are always pinned regardless of flavor. To configure an image to use the ``dedicated`` CPU policy, run: .. code-block:: console $ openstack image set [IMAGE_ID] --property hw_cpu_policy=dedicated Likewise, to configure an image to use the ``shared`` CPU policy, run: .. code-block:: console $ openstack image set [IMAGE_ID] --property hw_cpu_policy=shared .. note:: For more information about image metadata, refer to the `Image metadata`_ guide. .. important:: Flavor-based policies take precedence over image-based policies. For example, if a flavor specifies a CPU policy of ``dedicated`` then that policy will be used. If the flavor specifies a CPU policy of ``shared`` and the image specifies no policy or a policy of ``shared`` then the ``shared`` policy will be used. However, if the flavor specifies a CPU policy of ``shared`` and the image specifies a policy of ``dedicated``, or vice versa, an exception will be raised. This is by design. Image metadata is often configurable by non-admin users, while flavors are only configurable by admins. By setting a ``shared`` policy through flavor extra-specs, administrators can prevent users from configuring CPU policies in images and impacting resource utilization. Customizing instance CPU thread pinning policies ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. important:: The functionality described below requires the use of pinned instances and is therefore currently only supported by the libvirt/KVM driver and requires :ref:`some host configuration ` for this to work. Hyper-V does not support CPU pinning. When running pinned instances on SMT hosts, it may also be necessary to consider the impact that thread siblings can have on the instance workload. The presence of an SMT implementation like Intel Hyper-Threading can boost performance `by up to 30%`__ for some workloads. However, thread siblings share a number of components and contention on these components can diminish performance for other workloads. For this reason, it is also possible to explicitly request hosts with or without SMT. __ https://software.intel.com/en-us/articles/how-to-determine-the-effectiveness-of-hyper-threading-technology-with-an-application To configure whether an instance should be placed on a host with SMT or not, a CPU thread policy may be specified. For workloads where sharing benefits performance, you can request hosts **with** SMT. To configure this, run: .. code-block:: console $ openstack flavor set [FLAVOR_ID] \ --property hw:cpu_policy=dedicated \ --property hw:cpu_thread_policy=require This will ensure the instance gets scheduled to a host with SMT by requesting hosts that report the ``HW_CPU_HYPERTHREADING`` trait. It is also possible to request this trait explicitly. To configure this, run: ..
code-block:: console $ openstack flavor set [FLAVOR_ID] \ --property resources:PCPU=N \ --property trait:HW_CPU_HYPERTHREADING=required For other workloads where performance is impacted by contention for resources, you can request hosts **without** SMT. To configure this, run: .. code-block:: console $ openstack flavor set [FLAVOR_ID] \ --property hw:cpu_policy=dedicated \ --property hw:cpu_thread_policy=isolate This will ensure the instance gets scheduled to a host without SMT by requesting hosts that **do not** report the ``HW_CPU_HYPERTHREADING`` trait. It is also possible to request this trait explicitly. To configure this, run: .. code-block:: console $ openstack flavor set [FLAVOR_ID] \ --property resources:PCPU=N \ --property trait:HW_CPU_HYPERTHREADING=forbidden Finally, for workloads where performance is minimally impacted, you may use thread siblings if available and fall back to not using them if necessary. This is the default, but it can be set explicitly: .. code-block:: console $ openstack flavor set [FLAVOR_ID] \ --property hw:cpu_policy=dedicated \ --property hw:cpu_thread_policy=prefer This does not utilize traits and, as such, there is no trait-based equivalent. .. note:: For more information about the syntax for ``hw:cpu_thread_policy``, refer to the :doc:`/user/flavors` guide. As with CPU policies, it is also possible to configure the CPU thread policy via image metadata. This can be useful when packaging applications that require real-time or near real-time behavior by ensuring instances created with a given image are always pinned regardless of flavor. To configure an image to use the ``require`` CPU thread policy, run: .. code-block:: console $ openstack image set [IMAGE_ID] \ --property hw_cpu_policy=dedicated \ --property hw_cpu_thread_policy=require Likewise, to configure an image to use the ``isolate`` CPU thread policy, run: .. code-block:: console $ openstack image set [IMAGE_ID] \ --property hw_cpu_policy=dedicated \ --property hw_cpu_thread_policy=isolate Finally, to configure an image to use the ``prefer`` CPU thread policy, run: .. code-block:: console $ openstack image set [IMAGE_ID] \ --property hw_cpu_policy=dedicated \ --property hw_cpu_thread_policy=prefer If the flavor does not specify a CPU thread policy then the CPU thread policy specified by the image (if any) will be used. If both the flavor and image specify a CPU thread policy then they must specify the same policy, otherwise an exception will be raised. .. note:: For more information about image metadata, refer to the `Image metadata`_ guide. Customizing instance emulator thread pinning policies ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. important:: The functionality described below requires the use of pinned instances and is therefore currently only supported by the libvirt/KVM driver and requires :ref:`some host configuration ` for this to work. Hyper-V does not support CPU pinning. In addition to the work of the guest OS and applications running in an instance, there is a small amount of overhead associated with the underlying hypervisor. By default, these overhead tasks - known collectively as emulator threads - run on the same host CPUs as the instance itself and will result in a minor performance penalty for the instance. This is not usually an issue, however, for things like real-time instances, it may not be acceptable for emulator threads to steal time from instance CPUs.
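Before choosing one of the policies described next, it can be useful to see where the emulator threads of an existing instance are currently allowed to run. The sketch below assumes a libvirt/KVM host with ``virsh`` available; the domain name ``instance-00000001`` is a placeholder and the exact output format varies slightly between libvirt versions:

.. code-block:: console

   # virsh emulatorpin instance-00000001
   emulator: CPU Affinity
   ----------------------------------
          *: 0-47

An affinity covering all host CPUs, as in this example, means the emulator threads may float anywhere on the host, which is exactly the behaviour the policies below are intended to restrict.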
Emulator thread policies can be used to ensure emulator threads are run on cores separate from those used by the instance. There are two policies: ``isolate`` and ``share``. The default is to run the emulator threads on the same core. The ``isolate`` emulator thread policy is used to specify that emulator threads for a given instance should be run on their own unique core, chosen from one of the host cores listed in :oslo.config:option:`compute.cpu_dedicated_set`. To configure a flavor to use the ``isolate`` emulator thread policy, run: .. code-block:: console $ openstack flavor set [FLAVOR_ID] \ --property hw:cpu_policy=dedicated \ --property hw:emulator_threads_policy=isolate The ``share`` policy is used to specify that emulator threads from a given instance should be run on the pool of host cores listed in :oslo.config:option:`compute.cpu_shared_set`. To configure a flavor to use the ``share`` emulator thread policy, run: .. code-block:: console $ openstack flavor set [FLAVOR_ID] \ --property hw:cpu_policy=dedicated \ --property hw:emulator_threads_policy=share .. note:: For more information about the syntax for ``hw:emulator_threads_policy``, refer to the :doc:`/user/flavors` guide. Customizing instance CPU topologies ----------------------------------- .. important:: The functionality described below is currently only supported by the libvirt/KVM driver. .. note:: Currently it also works with libvirt/QEMU driver but we don't recommend it in production use cases. This is because vCPUs are actually running in one thread on host in qemu TCG (Tiny Code Generator), which is the backend for libvirt/QEMU driver. Work to enable full multi-threading support for TCG (a.k.a. MTTCG) is on going in QEMU community. Please see this `MTTCG project`_ page for detail. In addition to configuring how an instance is scheduled on host CPUs, it is possible to configure how CPUs are represented in the instance itself. By default, when instance NUMA placement is not specified, a topology of N sockets, each with one core and one thread, is used for an instance, where N corresponds to the number of instance vCPUs requested. When instance NUMA placement is specified, the number of sockets is fixed to the number of host NUMA nodes to use and the total number of instance CPUs is split over these sockets. Some workloads benefit from a custom topology. For example, in some operating systems, a different license may be needed depending on the number of CPU sockets. To configure a flavor to use a maximum of two sockets, run: .. code-block:: console $ openstack flavor set [FLAVOR_ID] --property hw:cpu_sockets=2 Similarly, to configure a flavor to use one core and one thread, run: .. code-block:: console $ openstack flavor set [FLAVOR_ID] \ --property hw:cpu_cores=1 \ --property hw:cpu_threads=1 .. caution:: If specifying all values, the product of sockets multiplied by cores multiplied by threads must equal the number of instance vCPUs. If specifying any one of these values or the multiple of two values, the values must be a factor of the number of instance vCPUs to prevent an exception. For example, specifying ``hw:cpu_sockets=2`` on a host with an odd number of cores fails. Similarly, specifying ``hw:cpu_cores=2`` and ``hw:cpu_threads=4`` on a host with ten cores fails. For more information about the syntax for ``hw:cpu_sockets``, ``hw:cpu_cores`` and ``hw:cpu_threads``, refer to the :doc:`/user/flavors` guide. It is also possible to set upper limits on the number of sockets, cores, and threads used. 
Unlike the hard values above, it is not necessary for this exact number to be used because it only provides a limit. This can be used to provide some flexibility in scheduling, while ensuring certain limits are not exceeded. For example, to ensure no more than two sockets are defined in the instance topology, run: .. code-block:: console $ openstack flavor set [FLAVOR_ID] --property hw:cpu_max_sockets=2 For more information about the syntax for ``hw:cpu_max_sockets``, ``hw:cpu_max_cores``, and ``hw:cpu_max_threads``, refer to the :doc:`/user/flavors` guide. Applications are frequently packaged as images. For applications that prefer certain CPU topologies, configure image metadata to hint that created instances should have a given topology regardless of flavor. To configure an image to request a two-socket, four-core per socket topology, run: .. code-block:: console $ openstack image set [IMAGE_ID] \ --property hw_cpu_sockets=2 \ --property hw_cpu_cores=4 To constrain instances to a given limit of sockets, cores or threads, use the ``max_`` variants. To configure an image to have a maximum of two sockets and a maximum of one thread, run: .. code-block:: console $ openstack image set [IMAGE_ID] \ --property hw_cpu_max_sockets=2 \ --property hw_cpu_max_threads=1 The value specified in the flavor is treated as the absolute limit. The image limits are not permitted to exceed the flavor limits; they can only be equal to or lower than what the flavor defines. By setting a ``max`` value for sockets, cores, or threads, administrators can prevent users from configuring topologies that might, for example, incur additional licensing fees. For more information about image metadata, refer to the `Image metadata`_ guide. .. _configure-libvirt-pinning: Configuring libvirt compute nodes for CPU pinning ------------------------------------------------- .. versionchanged:: 20.0.0 Prior to 20.0.0 (Train), it was not necessary to explicitly configure hosts for pinned instances. However, it was not possible to place pinned instances on the same host as unpinned instances, which typically meant hosts had to be grouped into host aggregates. If this was not done, unpinned instances would continue floating across all enabled host CPUs, even those that some instance CPUs were pinned to. Starting in 20.0.0, it is necessary to explicitly identify the host cores that should be used for pinned instances. Nova treats host CPUs used for unpinned instances differently from those used by pinned instances. The former are tracked in placement using the ``VCPU`` resource type and can be overallocated, while the latter are tracked using the ``PCPU`` resource type. By default, nova will report all host CPUs as ``VCPU`` inventory, however, this can be configured using the :oslo.config:option:`compute.cpu_shared_set` config option, to specify which host CPUs should be used for ``VCPU`` inventory, and the :oslo.config:option:`compute.cpu_dedicated_set` config option, to specify which host CPUs should be used for ``PCPU`` inventory. Consider a compute node with a total of 24 host physical CPU cores with hyperthreading enabled. The operator wishes to reserve 1 physical CPU core and its thread sibling for host processing (not for guest instance use). Furthermore, the operator wishes to use 8 host physical CPU cores and their thread siblings for dedicated guest CPU resources.
The remaining 15 host physical CPU cores and their thread siblings will be used for shared guest vCPU usage, with an 8:1 allocation ratio for those physical processors used for shared guest CPU resources. The operator could configure ``nova.conf`` like so:: [DEFAULT] cpu_allocation_ratio=8.0 [compute] cpu_dedicated_set=2-17 cpu_shared_set=18-47 The virt driver will construct a provider tree containing a single resource provider representing the compute node and report inventory of ``PCPU`` and ``VCPU`` for this single provider accordingly:: COMPUTE NODE provider PCPU: total: 16 reserved: 0 min_unit: 1 max_unit: 16 step_size: 1 allocation_ratio: 1.0 VCPU: total: 30 reserved: 0 min_unit: 1 max_unit: 30 step_size: 1 allocation_ratio: 8.0 For instances using the ``dedicated`` CPU policy or an explicit ``PCPU`` resource request, ``PCPU`` inventory will be consumed. Instances using the ``shared`` CPU policy, meanwhile, will consume ``VCPU`` inventory. .. note:: ``PCPU`` and ``VCPU`` allocations are currently combined to calculate the value for the ``cores`` quota class. .. _configure-hyperv-numa: Configuring Hyper-V compute nodes for instance NUMA policies ------------------------------------------------------------ Hyper-V is configured by default to allow instances to span multiple NUMA nodes, regardless of whether the instances have been configured to only span N NUMA nodes. This behaviour allows Hyper-V instances to have up to 64 vCPUs and 1 TB of memory. Checking NUMA spanning can easily be done by running the following PowerShell command: .. code-block:: console (Get-VMHost).NumaSpanningEnabled In order to disable this behaviour, the host will have to be configured to disable NUMA spanning. This can be done by executing the following PowerShell commands: .. code-block:: console Set-VMHost -NumaSpanningEnabled $false Restart-Service vmms In order to restore this behaviour, execute these PowerShell commands: .. code-block:: console Set-VMHost -NumaSpanningEnabled $true Restart-Service vmms The *Virtual Machine Management Service* (*vmms*) is responsible for managing the Hyper-V VMs. The VMs will still run while the service is down or restarting, but they will not be manageable by the ``nova-compute`` service. In order for the host NUMA spanning configuration to take effect, the VMs will have to be restarted. Hyper-V does not allow instances with a NUMA topology to have dynamic memory allocation turned on. The Hyper-V driver will ignore the configured ``dynamic_memory_ratio`` from the given ``nova.conf`` file when spawning instances with a NUMA topology. .. Links .. _`Image metadata`: https://docs.openstack.org/image-guide/introduction.html#image-metadata .. _`discussion`: http://lists.openstack.org/pipermail/openstack-dev/2016-March/090367.html .. _`MTTCG project`: http://wiki.qemu.org/Features/tcg-multithread ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/default-ports.rst0000664000175000017500000000210400000000000021243 0ustar00zuulzuul00000000000000========================================== Compute service node firewall requirements ========================================== Console connections for virtual machines, whether direct or through a proxy, are received on ports ``5900`` to ``5999``. The firewall on each Compute service node must allow network traffic on these ports. This procedure modifies the iptables firewall to allow incoming connections to the Compute services.
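The steps that follow assume the classic ``iptables`` service configured through ``/etc/sysconfig/iptables``. On compute nodes where the firewall is managed by ``firewalld`` instead, an equivalent change can usually be made with the commands below; adapt the zone to your deployment if the default zone is not the one facing console traffic:

.. code-block:: console

   # firewall-cmd --permanent --add-port=5900-5999/tcp
   # firewall-cmd --reload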
**Configuring the service-node firewall** #. Log in to the server that hosts the Compute service, as root. #. Edit the ``/etc/sysconfig/iptables`` file, to add an INPUT rule that allows TCP traffic on ports from ``5900`` to ``5999``. Make sure the new rule appears before any INPUT rules that REJECT traffic: .. code-block:: console -A INPUT -p tcp -m multiport --dports 5900:5999 -j ACCEPT #. Save the changes to the ``/etc/sysconfig/iptables`` file, and restart the ``iptables`` service to pick up the changes: .. code-block:: console $ service iptables restart #. Repeat this process for each Compute service node. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/evacuate.rst0000664000175000017500000000743000000000000020256 0ustar00zuulzuul00000000000000================== Evacuate instances ================== If a hardware malfunction or other error causes a cloud compute node to fail, you can evacuate instances to make them available again. To preserve user data on the server disk, configure shared storage on the target host. When you evacuate the instance, Compute detects whether shared storage is available on the target host. Also, you must validate that the current VM host is not operational. Otherwise, the evacuation fails. There are two different ways to evacuate instances from a failed compute node. The first one using the :command:`nova evacuate` command can be used to evacuate a single instance from a failed node. In some cases where the node in question hosted many instances it might be easier to use :command:`nova host-evacuate` to evacuate them all in one shot. Evacuate a single instance ~~~~~~~~~~~~~~~~~~~~~~~~~~ The procedure below explains how to evacuate a single instance from a failed compute node. Please be aware that these steps describe a post failure scenario and should not be used if the instance is still up and running. #. To find a host for the evacuated instance, list all hosts: .. code-block:: console $ openstack host list #. Evacuate the instance. You can use the ``--password PWD`` option to pass the instance password to the command. If you do not specify a password, the command generates and prints one after it finishes successfully. The following command evacuates a server from a failed host to ``HOST_B``. .. code-block:: console $ nova evacuate EVACUATED_SERVER_NAME HOST_B The command rebuilds the instance from the original image or volume and returns a password. The command preserves the original configuration, which includes the instance ID, name, uid, IP address, and so on. .. code-block:: console +-----------+--------------+ | Property | Value | +-----------+--------------+ | adminPass | kRAJpErnT4xZ | +-----------+--------------+ Optionally you can omit the ``HOST_B`` parameter and let the scheduler choose a new target host. #. To preserve the user disk data on the evacuated server, deploy Compute with a shared file system. To configure your system, see :ref:`section_configuring-compute-migrations`. The following example does not change the password. .. code-block:: console $ nova evacuate EVACUATED_SERVER_NAME HOST_B --on-shared-storage .. note:: Starting with the 2.14 compute API version, one no longer needs to specify ``--on-shared-storage`` even if the server is on a compute host which is using shared storage. The compute service will automatically detect if it is running on shared storage. 
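Before evacuating, it is worth confirming that the failed host's ``nova-compute`` service is actually reported as down, since the evacuation is rejected while the service is still seen as up. One way to check, and to force the service down if the host has crashed but has not yet been detected as such, is sketched below; ``FAILED_HOST`` is a placeholder, and the ``--down`` flag requires compute API microversion 2.11 or later:

.. code-block:: console

   $ openstack compute service list --service nova-compute
   $ openstack --os-compute-api-version 2.11 compute service set \
       --down FAILED_HOST nova-compute

Only force the service down when you are certain the host is no longer running instances; this command does not fence the host itself.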
Evacuate all instances ~~~~~~~~~~~~~~~~~~~~~~ The procedure below explains how to evacuate all instances from a failed compute node. Please note that this method should not be used if the host still has instances up and running. #. To find a host for the evacuated instances, list all hosts: .. code-block:: console $ openstack host list #. Evacuate all instances from ``FAILED_HOST`` to ``TARGET_HOST``: .. code-block:: console $ nova host-evacuate --target_host TARGET_HOST FAILED_HOST The option ``--target_host`` is optional and can be omitted to let the scheduler decide where to place the instances. The above argument ``FAILED_HOST`` can also be a pattern to search for instead of an exact hypervisor hostname but it is recommended to use a fully qualified domain name to make sure no hypervisor host is getting evacuated by mistake. As long as you are not using a pattern you might want to use the ``--strict`` flag which got introduced in version 10.2.0 to make sure nova matches the ``FAILED_HOST`` exactly. ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2424707 nova-21.2.4/doc/source/admin/figures/0000775000175000017500000000000000000000000017367 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/figures/SCH_5007_V00_NUAC-multi_nic_OpenStack-Flat-DHCP-manager.jpg0000664000175000017500000054140400000000000031364 0ustar00zuulzuul00000000000000
dcnY.l,eU+,3 tpzWd[ZwA}[d 1~ܿjm _ayqB<mc2H.N+{K I$h,ǀzWGOzm[F" ⶵ?t/k)7 1@Nc y 7$C(ҩ<غO.f 9yK5V 7o(ʥ?*omS> 7IokLmH4VPVgG~6Ừ5ƭ."3mX3S8sc߲3o_Tp:d l洔`( د!cs)*ޔ֧|\ ,oUKY)T}ZJa}OE }~HfjT(c zT)t9q1skJY&7 +c ( ( ( ( ( ( ( ( ( ~|h_7kKԢyS ꦽbʽ uR*Iz\n[+VTAJ-ŧ֧of3T玾xVg˫[HyC'm՟):am}/dmxyUFГ>:~ǿ 6Gwo$_ҡPf[Y>8EˤezӖ_G\ÜcJ8N7֎.Q{H%jy+n}YOmȓC"D`FAuwZW kmB|sN73iI# ['l~>qҵ?Fdj*Nǿvu~_Zx+'J}_lT;Uj[7}袊3񀢊(k|<ߌuϙ'`34goeކM=)U5V۲Kͳ;"OǺ O"KB8O61_xx/5}6 ֿ_ȝ7TH^ b.-e9slNICpw^\+>\!c*yGhӷKQ^8QEVV~ ɯhVkot\~UEy|45ޜ~|3Һۏ و2x;9\OEе~G?Qqc,C$g8G, E~Q@VgEm_Oo/A>.|?Nd;x8ȇQ"s?kC@ucwĖcW %@nːsƵZj1LYl#v٬B]O' ejJc(KM{&Y'k37:M?@#U5Y$|H^I;UGQZQEQEQEQEQEQEQEQEQEQEQEQEQEW|n/bVrקט|n/bVrܴ=_ZV=PI ѴaX rt?@|I(m }O <-??Kgysk{oc\FHtYSit%w/* z?YzVq=3[{VWBv a0#]F&F5Ś7ZVD"TY^+V~ϲcw|Vds.Y,(@Xv7??[~uVɦz~uahc~W;Ha5 i~FNj7!Xҷ?Apϊth.a$bO# B"O ¾3烼MxD Pϴ)/Jl崿:UjA6Zdtu5|KRk=\evn[rz9&|s{ι'^RݚƚEKJo;kmOr:rZo:xg-y޼cĿ%x/5߉+jK%y?>_9_"?~ ~8ioり((((((((((w_#>OmfT2H0T _wj~|U6uYZ`C z$Z(n,(D)$r(eu#x 1__c$ -1\j-)j:p-ph'Sp,hc$/Ux |ٳšx[YĿXI[?9\ILR5]ea+PPSԾ৊o<]6 K Nn5τד]3le9)ڢ|N9Dx{Xk -F\}M5Ծ=f[}ǒ7oP>!V'"ʉ'_j h~9_O+lF]_Qf|~?+/SV "Țoc<]߃~:|~{-|FWficԣ=J=Sz֭+s.%+;'oSWş~|k!x;R\bKoil#lthmd|q]Z|:vGe$՟P[e\aA$޼/R~M|6ozC+vZX=1?_oi^8p5ѢӤxN6#.]J?_Q?w^[ν^ډMyNּ||~F_GG=:\<4{tyZ7s)~^NwQӟ?k/MX;v>vB:gq݃_?OƯ<3“xkZ0Y8jT\|wm\ws_"5mӡfj>ҵ _uL `4G`sA5^zRռan uu*|~-vAݱҼTJKmZ~wOHRZ4lq8B.S嬡m!Vgw|{F zL6Hɺn)&1 9k{67q^Q"0U/;P^[i,{lX#ou9Zt׬okvkĹ_Ln6i`1m>vu՛Q򏡣>"nX)#8=E;sl>:F$ѭGzQ4yGև(>SongGCZPr3h򏡭(Q 9CۙQ5Mo.0#T](]9NTrha%C8IYI4fOgƿ|Gi :ߖK)1>\cH'!kX'sx#ӭ~x0|Qht9{G2NZ1| |Iu͸TPAӯ8E~qHZ1]2įϋ9I>?3SuYj6ciW֗S\F*J j" ~:~?NJO5=Α0';%9&@1_yk:~j^5O xAHO|;~_I "Q*[6*"JJI?s̃1ɱrcʕH]}}eQ]QEQEQEQEQEQEQEUk=;J.,m2Oqq(8rYS_+x?KlS̭BlE:J9<)iw[>6?ot~t7fo~ }9֦~L~vs\k77nx>SXvO+~~џ %Ccx,5~敫Gk?\73*VslcEVUHwM~|(@N I1чIe<Gnn|Qn) .ņ4y{r͞7 ~^ve1+G> ?-EC 7s.L['A}ό *|={ .,E̪vcaIAxj='_kR{xoߞMRvۆ3Ϩ+١RSE5fnxw["F8]7f3R7|=e5udSU Al' 'ğ-O||&da҇3-#O57¿>ָ҅Sl%#+c|k٫xgYbOp?9WWQ(G)xOC~c˂|1G9i$WE**JTcJ/$(lQEQEQ_=Һ'~6_<#C\[5DȪL< %w?> x^>8 \[\]\n(nQ#/dğ 3/%7N|#!|M7dF?5;!>KBh=?n+g¯Ck}^5I҉`V| jÕ((((((((((((((+>7ɗ^+U9k>7ɗ^+U9h# S$+ Կר|J~9|co5Y5]JK$6WRZY<䏴 ,W_J!??`_d@ѐ2 `zk+tMNOOG}0x(zo1ň9ǁj/_^x_f]sYм!Yn fNا&Iї9fwNM7w-Hd;&KB}(CYcX5߄:!WM)Em:^A!NrFk.4S8!o>! ,rE%f0a1;G_Z#N U4|IUIGc&K[: [\[$,zK%o/>aޞ!zSI+>pSG}My~ѝ|J ¼y5VG**-y\V[έ^Ms^w&/mCIdg^1Mw_ڇ^O:W&ƫO%5}_?D/55Z)|ࢊ((((((((((( [k6 ;)0ʌ0x*A lo&G<>>~ iN){W䴺a99=I@ugo(N Jߴ ~ͼdmrh̪vB#  _\G=sC*M `FAuOߵVs_ZSXxHS$ yUnS%dT+X7/+qȚKɏ7O`[$ @|7$Rz1gּ_!x/BJ1o=8?1$|q|omm4k%N(H?<J9}@9vpBvi\*-5GW׍7h~x"҂G< ^~"ѯ*Y]-5YK :1)1̣̩"S^ƒ[GR?7aMO3޼^}?Vv׺a>+WȆFqrGr5hωV!=+{eR11>M|1Zqo܎1tyྊfbMUmb{LDʧU}~,< :x'YCW2^DmPm#,##pe}8WZe˟xyo03H8vX ~Kt 'z'7b-4&-ORy959ayGjThٗ Զ>-ƠNVg'r$9.9 zuJ#ѷo 'xn5tkXe1nS)S_|u7V^kzMƗA7x'p9V 9t|$9o.C 0ʡ1G`q{NxI<-,:ģdN6zynFzt'_NJ_uCv|PYM&4U-ch@Tzxv?3t:f " ;X0??H_qVyW|ni۾ Öʨe_[cWOzi=eA֗$Y/=x_vKi2GayɰBǮI< xkLWΙ,5ż&yYcaXzږ-,< {+c>"=O*&{!gc}kϋ/5VZ+uyp:xS㫒m].Oݗ[qNY(êTiu#NMJ0qakJJqA+)BSIGOf}զelm uL`)&}q}BULG[ OC{;K& KGOW*|#{fN}}amoMYZմY-dAUI?vMחkIҿ?$8ˆWBJ4O~Xڕ՜g:s E(giG| K |D,w1b(q^{X֭giq(pS#`FyUMN #r8s* kkIu˾]]iy^Ə+!K5 ~&x+ީ"oj?4ʃ 'ҕ ԒmG&;6`J&#I/6<˯lu=5}lehǸ)8r0YIG?~(G&99 xT#5ŏ>/u-rIt6:=l,0pc](ᰟ|X*]=ߚ>g Br{ _Ջ_ꓳrǪz߃tKg-~ xXǾ+B;G C0pj^GF9hBp [?&De8Agv8 $UʺW-VJn?3RG[9ZDTcE]]_HXcx[$|.W>9^jLhqdqH@Zz+-%EZWq`p c(=0y.W*rK~OM_/< t*w+&}n@Z|~2seIMz]~AWő^B⾇t .=O86${xm{ZQpoI~'5|>K,Pj.@%&Fiʯx5_joᾑs_ q?w1k5? 
?b׿*ݮMZgҭďqp~y\r>+ #GҴ&civɲ(ǠU bq?|~USa'¿w/z5ÿÔSxWu^ôzuɴ[1LµFWWy|{KA?Y=ǝ$N+v8k Q8(WZ>qS,)Ukѩ(ϥܝ̖S۷irmE:}?u9BOIs_s  "4|-׺ ?̹s?<[+zrNIn65MT7\yVt=\-ݺ):/y8受1~s#ºذ(uχ2ǁ*"Tތx8j))EGyfY62x<„Ui}aEWAQ@~hP)hb |w-ޡg/^,kk^h`>-c~ ~1_ h>;x]h> .4F" 4N{cz5D|)bd7aUU͊7 qnx}KTv^/\+]Gyqs!esA^;i~)??a麟#!k ,mlnrhXQ 0."I Kݶ)8Q I\ +[|KN|=m:rũȖ^Dnn>W߿z|f/>ŇVw4uo]z{ȁy7;S敹8 ઐOsK2Ju 1a_k|*`obQ_*_~G?lW-Oc\'/[C r8XU[ *pNxoO;ZƳwڊ^VuBx7(M}EPEPEPEPEPEPEPEPET,//./.-$E ʜsV(((((0&_{ĭWHO0&_{ĭWH%O0[_v?O%O0[_v?{T?+h3Ja<0#5cſ<7+&9Zt~OɿpTΗ]tu%t C.$Z fhD$v;Xq]Tlhc*TwfIymV [kFil8$k.yp|;H#! \x_H9 _7gfDŽtc?gZtTvӅ;ty?<g){ΐˑk+j?=rW+yի}9bWsXz_kV?J{~%ح,| _k??jS_BYcUώ ( ( ( ( ( ( ( ( ( ( ( ( ygiiW67WsF$TaP 0kVqqq&+`Jh/R ~ok?HC?ÝNǀ>寇 9pxq(NV*2qwLd<'34MdjzFj]#Plv>o¾w5ĭ;÷oZhϡyG~|@Iⶆ4߉<EBCtЏfΥg??ODK}7,a/i3ܾ]y ~ⰥØeQNn<ʫ -#obV謯?PZK8tv]_VW ^0&Ifoz+ߌTUn(b ( >~Z]猼64xAUPA65-QM7Eޭ:NkuPugv (&i+ӔvIu?_Ox^W>)PjעTveFKyN# `ޟ"x_|a]~IC& d#9 BcNo^O-Z^wʡe=dŝbel<.!#:kx 4zH&;c, ?x $_Ϊaquvmj2ZI{&Xcz2 R1Ng8-o ҏ&J!xP:o'myݼM']x|׺z 9q%pymI/ .ɼQp]OAu6iqn#?/4N .?z(hKO9 օZbԏxtM',r1GIQ_Gh:FZVWZ>3;N*9#s^)wgeygݬ[iJ=YW2 zʾAZ_|a/C+ca`^t9&PJǩTkkJ{ۛ˹2M<F'%$5_>WNy"_謼T6$LsEV0Ax:i>)u[^|!'?Z-~q*|Cm=/ ]ϕ,$e983_IVgor㖆>+>ܓOIz&|_?^2?/|3xangr¨,q~.ڞ m񆶤jn|-|Nzҫ!x?gߍ`wTдY1ITN{^8G+יW1N\W4?+rsNdΗ?xρz1-Hy01|scdVWߎZ̾|4[؜ss"2IAÏUϰx;Ci-֡7/.FFv 9ŒפR\s>ϗo_1oWE$׫nbÚ<+kkHM&q"4I)xp1Эc˻xyr5TˍateG8Kt3߳ş>1o|->3Р}ؒksGn'Z߄?G[m 61tӆ<}_iο?f__Q8%u:0 4|,^+x*6oc pIb8n ~CxkxD'PVAE>+ue3߆l.K`T~ϡ–[/0Y^(eMILǿva* |'.IYv~#X<6%oNz_/i._EmO}+*ׇgOմWVw,̇0!zM&˧RP]5/5g-a8$Dz$N@we|r8g|_sMxS\.*jQ&p$|y;_5@i_~þ=֛=1c m،AB |+]*]?e>?U90pʸ 23O4r׬%x◁bIxKl $ FvKa\]~Bx3/FY[G(MN$JmǗyPOcuK+үXGo>&q ۾x$n\"0(3QPaz>?Yn+eS=bFڟU7SӵU5 -WLKmwi: z2:Pj}bip$%fAES$((((((((((k[Ѽ7MC^Z&{CPK{{hWG!UG WKu}ooJ>#f07u+g΅ۜ=8w@ J8ɤ?Y~1vM|/]#:O+l^ϗoIrp2k??:kWfYg ʺnG*(hZ>9|P'ǯxZ.\yl$F-4PUq3xoDM;DӭQXח>ǖ>_ q zxO%_>/豞q&/8o zٯyE zGϾx qi0$˩>rI>[s, ]!+(((㜱|`i(:gv9z^[an;/c42%.%aaɎkk?8NŸ 4 !;t($|Fܟ)T6yNHOO-A׻u.]4s̰pHc3$8UrM~|(> }s:[Nm)h/4Ԑ\4=",t:;K+h QDA@76&l–9pN[:DEg?u[Qoǁty[V5]QGh>>hvXGh>>h5fo:Iy|dTNwCH;׌|b]bWw,P~$Xx^v 2mH3!ɬpIM~W MgVJk_>8(((((((((((((((((((B|.um~1{4;Mu,r93ߍ|#o<+-.(2| _4<6~7Wc?]Os#ü}W. :ʽW{9_G$}[}t=e㘲L'gxU Dk]3? ]w \dm| uuIJug׿a/ >ůxAƑ^YPA93kPW|~f C}O>#t8?R+F?sM_RZ>~| c P1mψ0ɑlb$?݌ nFk ?kֹTw#xÚ|K[(4m4K8 @ 2Q_WCBcN G?͸63c13]ܟ2kUg],:<#m4|NP-6feq"1?xAI^Ǚ@ߨ=^~_J HU:BfӼɒK2FO>djTG⯎_[^/jj$c"=Jp2T߅*[)$%y==O/:Rx\K*[K(J:\hmsdUЯ9TF%${f}D 6Q oсM:쎺JSגoKy1}q_<֭KN?5\>y;ق{)|ʝv= 5qBUB7dǐ}0k/]FOeN֧9Vk?KKB,3V.zABXO?ocr8:x&\~'5 ^&iWVqTd&Ml6 IV=`j)yԄhnǒteuKJסHʹ)~8L~,oDz?4ePڝ6 /XW=3XXj_ή_I5 ;r{W>,G-%xKu^~iOAIY{p嗜o;u>$=֙i ֧}pa$Xw|M>xc+(STTWzgxӛcr^ 1)UW,[WfnUYc5)ѭ4^ 2s9: U3Í-3Ş񭿄5 Kkq9"_1B/p%1^ώᷴgb4Ǘȷ \mF^\a_=uI-oe7ʴNd쥤ڬh[S/**tk k[{yʌ28 AgWNb9ӓѧfGZ_ҏ/KngTyU(s<wO|OQOv-bB͞`(B+`?`ό>Iu 1;?Bd+/L9TXJ:X s}/xxѝۍ/mA}=e}5+E>~Ӑh /Ï2WrgqK,3_}OG׉[}< |ϲEO:__l?>,(ռ*H.$Sدis(6iG.O>[' exfߕMy;@M%^.Ӧnx\+~Aك\.t_i0̃'/ؿ Ohuh6yᘙrx[y#`"l0`JN KH#1HFelF'Zϖ=pK}_M~Y/ك¯jfC cR~0Ec qώ+ (JR,O`2:*p]+S ( ( ( ( ( (!kka KH#-~>k .bWCyluʎ0}EaSM\!l<u%5%~n~bcx/㇇]wJOŌq)l3?|*5붚*-؟3)8m~񧄧мW/-owfSXgRk~2EŹ2.^UCXe_5ˉ_~}~)_෎¯]G͹D~9Zo m^6m}Y5> 宯V:r໳eAq]|]*.pY'<K|QzN>*(O _㦗$(цem^ǁ8? 8TtW>+ GMӭ(,2qeJv[O>kQQ]N'> |ۯ*œf6A }+YYhx2?9X0pc5KUe#;~? >'{Ǎ$˗%׎y ?:U|['?a7U0_X1ojkI.*cо6ӟ׋l!C7庸3!ϝ@e+-c^_޵G}/=(|((((((((<jg/^ÿ|uw|1|IukֱRKcydcp;/|4gKk-{5fiF_)grzׄi֒?ŭ*+m5yb 0I~Ur澱*|O/x'|)@aqHHc`(˲,%02k8t#QGz'[u?<^WUbcC66jJ4߆h2zo)8>z{|Yľ֏>j49aelw 5~hb}w/y gч89Y3x ^[~+i?Xާ?|AOoUɇXi>N~T}sOĩ<>GqԵA |;E| F>|*Tq՟$.> irH,@Oj<7oo%&SFIa5馮P?,ncڇὪ >).=X&;,;mg~_>\dF~&$oI9h-c|NZǿwԧ_ ^xC³9Mw]zU3 7J BVl?nUNsRRLVvVzvimae !q(ῂm5[Cb'Clx'5ƨ(HD YJR0&_{ĭWH%.}gG/nqsoJ?i;;O2}l5<}@ͺ6 R21_. 
@%)w<]ifKJc[ʰO^ @־xo~:N@02F-2^4xh<^ETn*4u;aդҺj0]7P_k;PU1lsqk7i٧u/Z8fzv7[8P<3vku5I^ofl|?]BEׁ?h!x֮(Xάw91Tu\|:]JEW%Ҟ掊Z?C'y?hwd?Qp3T,OZ o఺.5/Lw*jך'_4f#!aJ߲{v^% ~5Ωk%y ga%) Fq/ji-~~/|Jl~ Sii ͭY# #!ץ~`>ފCǯWF|-~d߀7Ƨ.v/ucU=MT\*(u,IcWrQ@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@յ=Fdy,xI5  \cݜ?|R[]U6 [=6mCCJ, ^v.Wc>F'bOHWw6}?xCo~]wH{W鎃h~𥞅#NдkDmean }Mk_AL^߫<_U3:~qivM.&l(O ( -ľKqp-|[Ң+}Ao*(>O"?k9u.ۤ;eeD2ߍ_A+}!4xqSI ʫ^YZE֟[_\\[\$daFV2H k֊_F{9y&28̺Uҋ}.Toc S,+K%u[`ZU({1 +[E|:&/Góü~h/fp4cӖ]/95?Nj/又.lCxcH*B MFt%7kioo#+*Bg[ ̰MbiUIʋnX^/T_JZ;~6mRQkK+WG/ӓ_A sL6HyWdPAp.8P-@vK$\[iEsO~~gx 8f \oguuϐe? 43IZDҴۛ(Wp魏o͟|H_{It2km~mRJMo|: ?[-ƚnѨqMѷr9TVa^S3+/&m^̈Q 澓s[U߯o7ˆ"S/m5示/ zo| >"lu=..-] k~+}v?]ھ@RVF#w $oTW lb߆?.KxzQS"["@%wg}*(<2Jx< ӳWOɤEUEPEPEPEPEPE`^W5x_/>#-%zBN+w)s V2C! K0 >@Wmw >=xoxznuu+ XMc ӗތALմ t/mvL{A'IDlI 51d[4R++ƮeX=ijyW߂,ff~UWɎW ٯfឯuo.ȥdyLRh,FW̴t|/fV* -I_&>t|  ~c ?f5 AWe?~׫i|]&\&fe9$,cC~վo'mLZ 7vwoV]]2Q1ڧRzW[z:.f~Q^-ÿz'Ae*nD";㒣8GqԅHE幦0NQVw—4;]Ѯe͕ ЫߌO/g|7Zn Чx7K !=(Rh+4p-Z:ѯG#!CW%SNkwZ]ZIl-_5uhDmBM :HHĄOB=</ĺ$%Ք¶2QGVp+֠P~u.яE28~_?f|ASEBZ{+ebsc˼NsDkׁ3lN|q^k[O89ĿjK]7[%&~_^]gN;kfg܃.+BN IC<2IVG <6cН5GkxϪgM-$T=Q^EPEPEPEPEPEP-&뿶_tMb-/RF.61aeH`p>|S׼ , mlmu;)׳*[6!>_ݟsd+?7WiV0<]glηq < G$#Ne׃]5_Z{sG?00N_T9-a+iOɿuH>e3eTipRmԋJJrZJ)R榭zj(,O |vjږmxqA =uhsciqj P;4,7;m^0+mf_q=|B $_fK=]NOkZ$j%c7KupNS(+Ut{)٪' iETUQתڏ {V6Q{uMue<}`+p1һk+u&,j}Ury3#yڵ}(9>y7~GW[XMVK#y({ڃ&#O⋝cM:TƙvZg1<`8q_xZ&& | {E@p=2#ⷵ_ >M^/C?Z1a30`d3-iڗijV=#o;xsq50vtV oMxx3ޱEC_485"IqW޵%WSyI# 6'ҿ^:q4k(ZcsP5KXi-IJ0*H?ZE1 Qҗ.G*Thʟً #wSi)soQBSB ]`na,֫KK[ .%G(ªTV(t ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (' jĖl!kM6/cE+*уI(C+ʱ*\)T7e&ߒZ_(|t|J{/F L>Ls_ xj/o xU˺eϔn-Ԏv!_F |G)#xq xU9WNYwIiQÜJ88~׎R{I'ji=VҺzYrq{/!? L`ZzslǜWw7G{}CK?%p]阢 X(4 qUE8%zo PW.֧qqiZ*4Mh[=ھEWџQ@Q@Q@U=CQRo>=Q,@$u4r^hͻɻYkm"EPEP_~l>4.%iK"_x晨%5Ȧ:+7Ph[?7H*kk^4:+6UB02d9 8CÍOEqx:wnq |+P< )  >7/|?k 4^mu=p\~6N2 ?UW7E+K^z?|RMN򏡣>\஫epri+-үdgo񶖣Vǯxd25L_ii3/Q{U.fҎhQVnow} Q5?\x{Z_@M Q>1&#l~[P_aoL>Ow7J/K~IB4Ep(aou יS/=?-̼8xlDo{zSzzE}Dg<# I;^4Đ"4mKĿ.|coL>qH6Z~}΢ɦ?uؤ_Uܸ9x[=H9YrW\G wGg?8ϧzuwҊaXj5^j:}bK{YHSѕGu1M;QE1|q~|qFf1NO8l Z_M madYg!Q,G2cSQSG4!I#C+ v̸j"*u?hslS;WT; t+|3S-45i4[s_VO ~; &qLbD?wO/60Tc^NkO ߴWk_|}֯ +l}ϖn.vHKt}+3/R:Z[xƔ|WQ97jy~Q^Qoo'D wK2xİFx2zւ9)'jkj_eO>&mQ$VA2Kv汯¾!s|1R/4? IoG66 z{AẔL-S3|Y4ӎ墂kP W漇C ۴Mcp8̀08F+_: d1ۼ~=F K x;熾].%c*V./n?U##޾'/-dӷ+l-Ӫ?]þἾ~BsɾZ8,.iR t%4VI)6gks5O x:?ɎR{fǗ,R? Ѿ0Ϣht+}#vi4Þ>s $p+_VA'%O 钾jZvQ@$~^e|AkZE}١OU4>{>JsG9o 8N|)Xjг4Q+IY?敬}Iᗌ>*kqGZyo@y&?*2G9`O7<h=o0˔M!۟2Al3_Eύ-bŪh)0o̎ETH2I>b6?$#O֍\MJRm|Q^i[ |iWz7<=U&|bPdLs5f8/ sG/>~ּ\nD+iмx&;(?- U$>zti$m%ݶGWucŨkƩ#7 -'Z˒L[vrVLWW|M3@ 01t| %M:h. -5(>`__Uޣ{OΤǭ<'éZL#[8 _y\ό|6ռ%:-OD`1\B㑞a M'0E>;5ׁ5^;u'18o2#aM###S5ˢh~ oN*}BHO'%x!q;5ox.qubK#K֗89+[r.]<PI OR's\SpwtѺ[H2ϊbSkY𮙩}~nr p8&i,tiUhyC(_Gt$g׾%<-#p!15HpUO3eG|$W׈BhB{>y'ˏ07!kbي&n4oSsivr$;D,yUmsI,^c4%Ѿ}S6 iN>|]5Fג W>?m<9 Ll?sgcy9czkHB0U?qVNRmݶm[ (((((+?)yu+?)yu`Og7?"_|;FX多T2y09ʅ#9W~,1o~4˲Giw"Xî7Fde5goL?|9xĞ'y59WF `Qn!u+HY pg {x CJ S+{ E,U2L @2? ]7/bĦO.Uw*$Fd)#(7'bld3r_,ki, K$嫰<a|N_Z(ϫ\xrHbnJ'f g>$g߷?:fgA ic-ԳEs7:ݍ= v~~ܟG액~XFGm/-o.mԎe⺉eT+P'knOյ9Vl:HI%U$kॆ1AeR  _F? 
<ȑMU,8tە6(>տ'LïhGQu[c )WvH9e᰺tIcS(lffom0=Qw6F޵Ɵ?-uj2xS%ĖX+`)G8#1^}l*\4V~mxxb |=]~c o;_亱ž(}k!k;Da\7o |AvL4ow%n$ bnX-rWoo|+]x'Ѿ%?t/,e`hm+*/B>?iYᛣ)+C2نG]˂evaΚKONkO?Nq}}X`[NqIKmO~_Z/ \I 6^laո`1ǁ|i|~=ܗ^E‰ 2wcw>'|=>-?lxŞ+%ЇO~~9.%ůju 5fmL@݈=5r2,:켽l"ץ*QW z]7{j}m}WSG=M|?gqdxvk-?#a[ƣ-`7GDOZwZ]8#`G >F9VВz/O[Mw3g͎xrL׬5.vmyRB_9jk5x ^~Xei̙P-e-'' K Ya[ğux4 "OF7Bsa~^/4]F, iVH_[XiƜ`ꩴ-!hŷuѹCp8,1eꔱx|UHAʥ5)(pr'UJ/⦶?JAM9Fυe@ekgݨ^d}gNGMq^ rĜk%V܏KYZZgUoz#5fuS:F)%ъbE`t5epH2#fĺoh}ry<($QF.X2zns&H4~iCi,'ֿ[P;ؕih< ~ڿQ_x7᦯SN,mNEa24(lm]͊MH߃xMѴ-2;M>ơ4Aª ,JF[^|h vѾvIuӢ+?E ( (O< u6Kb2Xhjz2~vo8OeWǟ?~m͟|RV= LH!mҲdlb!#@k_'% ';.uU17e"+8F Qj0M'eftcҜKt$BI\sXկ {9<~%-we0^u%VR*s&k o|Ea㸱 mJ.fujI|ųB!=#Ct Yhh=B+; d t#@TzhuVNz- I#""44P @aOU..MnSQ]]roҍߥ\ {~l*GG(\uh~(ML;،W~j·]a7 [WX߳_k%0hW2*\l}l7GZSœ$a W^?a_>8_/ /H/0 3՘<F1_?iς_?~xMק <3DχP ۼ~7Z|J}wOx[DwnGlu;UޱJnl?wͻYjS,2R62ZҢ)E>~DW^+!ӅtMnvuDP4yxG@˳j 4zkmZGQE}W; xc -3zw 3Xg 36*ƭ)8.f٦Lx,2Um.`[]doJA#9v*o_?<5 tui}g~אY_y{ۘЭoogcnBzupl}bt|B~5`s,28nVx8K\s^ 53:% 6t*yh/I~^1j^.h>eeqǒo_?%-ȌO4.|0scmx{U_g^ Y c{OR]W),)$nFNCЃSOZQEQEq:ெ~ľ;&>R|Ҷ34yTT qr]YӃ1uC9F)dx7Cg_f= LUN k4W:{V v'SR&Z>m#7ßZ-=YxZͱx#$VԆV+>8&l֏C<$.'Ԥ97c2~&i1lcN=M>7ωYh^(%"u]6' |3h֕Կ(~/w.:uV8E?ҏɧG(ܟunU\2Pzbt(绺>Aыj>A3;9r: 5KO CLjmeY^`n#cIp$c'{ƶ~6u^_|}W!}4]NY# c\/RgG%%lgs(qΣrOj%N7oܾjW;]oBÃ7UBT xW~ x.Z7/  <7_gڋ /M_ i1pe۾A< ׃Lf?0h뭾잯w?ZȾ7\Ff:٧-kiԜ/N7M?v@ wUG?4q/TqU})B ɻtKc̾ |I*~@u˩jڞ~ Kwu3P"@H +OxW_1mQ5?ppr#CH8ۯ|ox&Y \VALYcy'9#?| + ?~" 3JX]?CW+8(Ui+ EnOD.g +'~cڄdʆvuB5|it hUFA&aR9l '+1ylCeuĕ?ڪr꺿7+W0]mu76ҡIbC# _eM2Q"^ Q]~$-~|_ƚOt]G칱r3@ؒu-O5_ ɣxF4$FznFdo_?fxQ,6W]iǗu1ѫ_]p'Foˇm?&Iiʊ@௞Th?>Ɵ+1wo%竮I"%R~:=߂E,na YY ^j_/dZj ]nmo_/-iZ)hXZ6;nmԟC:BG+Q@Q@Q@Q@Q@Q@Q@Q@V~iS3~ ޓiOjC #-p%W ]~ǿ~[A}aKsmj_Ϛ[2YeBw(w?7kýsNbiwBv6o)uS(?R zov 5u 6Dr\Kp5]dd3;Qz(ibχ4u(<7'vPVIWT+g/oͮ[дs& $W;w#g9%(5/ /^9XĐ0~JiP#odIG>jiޜZ+wd}C߄A:¾-LWiSWVC?7X#[Laa|[X<|D7m"sqqP ¾Ҵ/CiZVl--!X}T+bq?|~d7M]K)^=Oym_ĩ|~mߛa_)ڷtZ[_銱:$݌MI&)23pO?og~ŗچ7DRE}ŧCunS/ЉC|6ХԙT{ ߍcU3lq-o[2q ;?޺1Gz⭀R\YZ332LҿgMӭ?)7_L^AݛKa!R9 NƷ^xSšXkq]n&_/a x$U^l  >M/qFs׌q~>= IuxxgZ]@:!`88f}Zq/֤d.?(< de-|]໔Q=Cwn#$ 7Z/x? <[=jZ h+EL.;.ߔp{jW\ź*ӣ,F9[8 P͜ xu_(T| J mg([[;IF!NQrMp:5W[l?=.?#ΰ䄪CTzS;)o[sЬw@ʊWiZl kDh{|ڃ^ #Ѽ*"bc-<ס|jZw>2j:¯[iuKBWqe`JR1!W*|coyuS◰{_ݏOY[L?~Ӿ$m:|C/kk` 1iǯ޾2׈~$~'ԭ>s>.Pw`{yPm#z~~ g[Zj"|IWP6r Hˌ$`B }OL{<OY_+Ï)aaχ<{1?K&]8f\1 4dj͎qVI>-:W'YY%oG|/~&AgM+H4V}fXȈ. 󐊤 ć1ZEU ?bwW:pծ82[ڍ |K$DEU-WrlO"(Ww{^ߖ$!'bQҿ[> S#e_O elݔl9[>lݔl9[>lݔl9[>lݔl9[>lݕzGl~5Fů{XY}0H}oxg]𦷧T,nê0cᧂ~&xT3Э5hTFˋb9̇CAW>(1| mǍ>xU4xvK6\\睃.lN~?3}aW_Ƃ_m.m#{/B?7u \pw-,:Wv]uk<76Ҡx# ;Z7NWa궒i^&+sG*.mGs&[sJv?<,}qErcp|]'Nh} ^orD]b]KiWjw^O-[? MK0<98FI$~O۟/ho%VV~诡fCȯnm,&ÝE#YOaw_Yz)X᫡ LHdTG(205T$zKzz8sqVXlW\F$ԤyFGzo mHMg2zrGc\ߊ>3|&]?4o,aV *_J:_mB-2)Ͼ?*&O[|J~1PY_irS?k>#8$pӌ9Gk +ۿsZv8Uށsycm"~Ҿ,4[ Ӱ:-"Ǘds_ g um_V-sz8BYO{]L8wKhF ׫&r^ BRV&t{/SϾ,/?wwOtc|Ҷ^Fbq`q^EB K|v"xUGRܤۓ}z+C((+~2M@|/Z^(nkYgBV.Fwj(Cqhׄq[} n8쌌_vhXYiZ]+ 0F `U( 8?zLgH'W[x&;MmwƑm 6s+&Ǩ]i_?fWAKN^lI>y)$*?v?H=n+ϱBS$A=:97 nM5⡦[k^w &餖k$%ЅliqvU&ݥ毢?XC2uꗺJK+z3i~-moiz\[`3;Gjk3|$Ǫ6>՞}[º'tREBpd>NO6T # {:u#+Uև0q6N]OѳEG99o$Qǘo>OOky"$QY<#wSy5;LC.S7} |yHl]O_ C ÌInI4E7JVE5dp0FA`sm/xc<J^ւr{/ӟ2|HWៅ:40|ռk~֮ǩ~U%\$*3_|Y|3s"Kx@o_Dy8dk{ Q|>j>DVW%fL̻1WsJ={=[ (jNZIwI/8?hovM[IK[.뀪?ue9RY5蟤Q@Q@Q@Q@U{nHG$($ߊqM!6,QUnteY\C;U$v`j(C [j뮋mY>Hg?ŏl/{[{>kK{yWkt#~ݗ~[ qf?p(NТ((((((((ٶXv*x.XW7:׿~G~~~?iڴ{-㿝Gȷt^f;K q \[RGe4^?uy Ւ Ӄ{{X|5Rxno3Jʅx$B4p}һUaR pwOT~Wjaq$M;4((((((((lP;V:dwŞ-+L1 zO/+ >\sYU Q曲<;eXgT຿-WgїWV:t0YBy$jK3 hh%Sd2ԝXF#QNcjf^#Dڴw"L,c;\sTҾM;|:Ik ֕l&%5>qϔ>Kׅj#%?3X8=ܢհ_M{_zg>.nl5-_G2ÍW?s'Ï5ѼZh<7Mrp3讬6 V<$n4VRomw ( ( ( ( ()vW^cgp7Vp 2:0!A`w? #&k^m>6OЪǒv ?9e&tVW? 
-RsgjK$DHum~QEQEQE~qͿo xwO?^#~)N_K畋HW ]xÿ|N/-gLveۘ#nKRT ]~̪FA`Lyw6-ô*ŧRI8Xn\;ӛF\C||<8J<^]牵BZ9A;s})y>~7?gVKc_^-xbO7[QtuPdHd`_'|M4|Cŏ ;Z Oi֓yѺ J*][<:n홪Sfϼٷ'^o_FmzV-wP1Dg >C~fϼSgOA;C տMd-M>0p 7#`+?^oWýO3W ,Pw/ $(X| >뿳o?Oҏ.uVV J42{ui p2GH=86}+wY~_ m xV÷pjk]Kp],Q5bRoۑGa_7a;KڇFh)cVHcN0j˿ٷ'GmW. \a?ݷ]G!7 M~i?ot!l}iz[G)KL+ݤѼ2ShCڳ6}+ w*{h2TΝyxRn:lV19W*5Y|XyZ/Z\|KtXGCXd-uD/tCn#>y>~:cgxeTQ,95|[s@3Vj"D(4:4jf>2B}y>j>>Agy$$9|'=x~otK_)cOtERHT`ZF]Sτu'I5+?zp,[icweJk䐮饎t֑Gi7?#kϵTuv)Ё26C_|_-{~Ė촕u  fkf #HcTHkNaFs:a02mڞ:9Z?%dZ/Mw_v*A5"pk߳V o~l>+4(((?_ïV6^(e޹ Iݔt68 D?O쑫r|6ͧOVOk!Rq:d,UCw_ ˞/];]M+jZ)'׫m~+C-pѵ@20~\s_FWc_xu6_쨇>O|^?goxCá[\ɺ,+2$Sbh>\B>DžKlj6U_[S_<Oß mD sc'}x25q] Ql<-wO悊(:((((+_~Ӟ.V~!h>-={ڭ% J$fW,(ļz}_8?nऺċ-jܵcM0h_l,[ |3|7OL[.fh6"k2pA'+W]-wgo i<5c|@؎O %lzN6#9 i/P7 O|q>Ԓ{hfyX*e$Xfe w:g.~+~Ӿn-M7썠DE-pM~I|M|&Ԧy\|`GHM^:fC3\[pi FUxic?>|zм3a};xz<]~qYȭdҞ~m@դWťk,iwM)Y6cNz:K IVS Wʟ FOnekm7gV6w60=~[ \БXOk57|u۹/!=O!:Z\iI$p $O=ji?g5KoH!x=rGtP1cm߈ ׇ5FèirЧ[ˍ³n=3x^d;iCGj,> @T aD.P` H 0?cw45-m)r]jvv> ݱ&K) +;7VZ'Z*)5gI__I4٦@mJK* @pÂCc W_x_K׾ש35sj$m/pۋ7猒[j?xB<_3ox|{HFsqn>N*#yVg:WQћD19fQ4dT(= _qY_ x]ZYzs28jQùk%'{-Tw?žm!#E3 9u>O|'_ ze>ߧE{ۏ\5™ (|2jſ¯aҾ?PK\` HI&{?VMʤҳj 䓖Km' K N>OO|'8=o.)@x$f) W|:kۄw ƸEڶ &,2H)*`M%ۧf|//TcR)%{=S(+jNDWp4כriGc"?~ ~8ioC% ( ( ( ( ?J?_֥E<\m:XH#*FiJj3R}"S̾[᧹hϑEϚ㌧:z`*dqP$Q"G(TD @aOط5k,*T{Q\XQEQEQEQEQEQEQEQEQEQEW +*|O I֦& pCG2gXv#"**҅H8M]=:8F OH5(i5枧⇀<}/|C^[LgGLom6HF? τ"=WWCw6O̎30 ;>Wzɖ5r _*ǿGxjm-c.DC'eէ)po@Xrc_c&~NӘO W8MJM:O]>~~͗'mDKx컷0'}+ Y^5tetae9z4{J_]8π}O6úSޱ-$;1QEzQ@Q@Q@V'KxK—:牵{ I~``3Տe'JέXRV^lۯ4w? =kNZNw?DqqŽW_l/x51GIde |9Ujoγ?g^u+ˤEzeSqq}zc:W<|K~!s:po5-)Cus׾6uQu/0N˩'x0G>Z`G^8_ZԐxīKKۯs)jNh~𵮉&Em#E8IO$MlU7=g/z#'ޗV;9X_| T‘ƋjUT`((D4((((>~t?KUk+ZŠYEq0,ۘI,`&ؘgq9#{?]ৃyA}mxDaEY@+@f-oHy^K*ßGᎏ/?< i/|auy-᳂9T\2S#u9_OJo~Ծu+ z\Ayg-KP)"rUh_ |=uip..5mhܪ?iAU_5M3Dj6FeOy}{po $yor?o x}bmM4˛a)qlved| 'x5ǃ46ƫ~3wa2gY0Tqj$[r26DjXB~}2((( )K? ~芯9C)@Anw̟ NM4i-zY$ip{&k=-?zF6g#lh,_ac߁u_xa=/Mjn'.]~x7E-m3[s6 wJd-yn+Vʜ 'M#Yh~.m}pVVPC)־b)?r~k\6BYd,׆)숾'׊_ix9IOUMك'<&~^=}G[v4 ~:dML( Q-"*. ywo;||Wᾙ[5ZżL0 K!0Ἵ"5!-%>W?Hku~^6^PgpԘScךgd! _Hh6ŏ? ~9x/Ji2I5+.?"dV#89 7̳EA-GupE+C~Go'_3,x^_3ϕGwsS##~~Ng᎕iZA4QI xy^C\~l]y~!Ҭ.u3~NicNrǕGv ==nGk=ީqa htrr9'_z/ï{㏍>$[ ^"Vi3f*|_ |:/;xb4X7H7n߸䜜ϷIv?5|Ex6c"{|]7ǐ4zUޭnRvXFnˑ$W[?@4h!o2s_/~lnti <ftH#g&oʞmH+Wry`) <7-~>Au;M{L7Zi^ᵖV2bi2GM,kE˯Zj旿Ѭ-(?y <^s]R_,|)'Ew6+;E3DHbi88澻%OZH>u/5 .%V! 쪤+E 8_!j>.sOA~In%Cca밍w7SA_z }_>"+v'i \VҺ7#fy{F>`p?5׶?i #CF˥\Md! *g$VpF}լzƉ BĂhTE|biOh7u=,Zt?*dZ/Mw_v*A5"pk߳V [_袊3 ( ( ( ( xVD.Lg0Gc[tRi5fgV*i4i꟪?:~"Ǟ 4 7eIs#pB8]퍭xsg+;Qq VF)>!_U ⯇~ɑ j;_(<9Rz2x S|wt~WV/+ ]7)[lv:t?Z[a7 n#zt V~]~:~'o=wP/Au [9ů"@H?l_G|JA<ľ|71d|Y䬹e?Fu$RXi<.#/8Oo~I}E5$$HC+)`z})蟦Q@Q@Q@~PCc*~:Z>Ro ү7tP2t?W3#oٟ_ e=+`觕#4&8?h8'i5Y;jn.n&Oʈ 8QKO(ME{S ó[5GOlLƼ|WKVjWh~~D\rKɤXѩw'BIžzwv? oL=eдHn]9N3J~D|\|{~>|TTњuɼ[˃YOYX(q>U|y5-]hLsyAx{W1#wM ]R:ߴpȗw3]Ÿ>\Hߏ>۠r|6B%NQwdw? 8|ֿxcqŢxMG¾'>Vy<!X1>|WoZhn?0 Ƕ޼pO|>=;oiq\A͞9 R:A!xڷck1F1tj^|v<ӧ*>FQ D-~/wH~?S}xo/m/5 in@ w|y2/&PRBOݜ kԡ4`, &ppy7:6d,Dϗ:XKY6'bQ[%-]🎾&9mdl_o?Cw/S֓|HB[OJ3)> O5\+U+ KFI?a٠ÿâw=SVM^(J}ɯ R2< |mu: 3EB;.HO0bA,@xrg}a[隅QcYg*YUvJ9𕬗2ƟQ{c[ 9;?!'ZեT{5ws:P.(a9k<<ސ$QM٧_挾FG"7?WyʺuْkR ~Py5}]B=k?>yq8ߣ[KQjKɅQ]Q@2I#I"@OA^}7⇅>6Ğ*Ӕ\dG>E~vͫ|}|Q= l$xc8>P05ʼnƓPKO>x ֎7_?7+ڟB|]>Eh쒲_$QEQQEQEQEQEQE~3l Go /6y|\GOh sv>|Wshx xuuHb-e볻'|#xo-|*+S7Z )y#ncu,ԔnHʟ|ZAoζyheZ?}Z𗆭?9񵾍cu/\i׺-%DVV˒s}_^-j:^-R+=wR^<`@#@Hs7e}Sᅿ7~$Y]FM$X4~lyEgFۇ8ƿ"?ړi_>Ckطů xFmG:=Cq#J4m 9xE|]o ]xo񷇧*Ri{f%KYLZȢ{ izmΫqigܜrȁLѠ(( )K? }*?x'>(Ҭ<'"cC1 2B&RfOC_6|zҫM}C+G_ZGʼng? 
|gCWc,wr6vڃo|%&D?V~X/ۯ+0X>3|Tv~(mZEƭd`3;n#iW *+8\>kno2Zum":8 EiZ*sY͚5`6zFEE >(?|WCir印>;UA]D!ivc7~!l~po65܃TALoL[S[6=ʇREpݪ=_Զkw+'ẽ:uk.OYBb.4i9Bo}cZλ{&ufy#FD|'VZI^Qgu/k+,>q,=>rwUʇy22'cI"܍ d\eS9<ѪF%5?V5?WC&-bbןT艇qֱ>񭛎qyNkmuKT: ~}D?ȴ_-i?aoCbUko !&EkuI go?_p͟;}_EWўhQEQEQEQEQEQEWf~ө"--Dg͎++*aVQ^WrUk.mmt-໴3MtHVSv5şؿ̾!Ux k}G F2zם1mi>h{\/e>_oފs>梿2<)I|]!|ڶy6>ܨ76v]'l+|G_į clS{f?,g Q VJG}w qWJtG54_;Q]مQ@Q@Q@TS73EooY\*"I<=hi+ZQ x/S+ִ Ih'{*Oa_!|Z1ɧ>_1 A4= x~2x|3WBp1\睇96`(i~ ՟f#Ʈ!r:?Zkp'Ls/#u7&5ͼt-\Ks">~w: =j[dͣxoy>tq_eែ>xTi> Э4\c}$cׂp3)C/s>!>êqoTZ-)C?k=F|?]#Clt.6ig cTb$GpEF*lW//x;/|+ >0Husc ÐyWQE2_FW׺?;KE+i矖Yf>я@++"M6m5BI;fy7G?F7=EtRi=ȝ8ZJ\>-8k٫χ_t?6YnaA,BF>$>gM_H~ GJ3xsTq$ڷlMj?OM˦_%RSŚ\?B\V;d [ N7p_FNQVm%(5$l?c>__>w\EK3;87+aZG]tx}ᇇ'Å5i{m*sȸۺw%c/5~F'RkkdA^e'傈:1WƉ𖟠xsH4  ZcG+YmUue~dxLҮg:~Ҵyjww٣Wd| Ew;d]gZ_XH9ͥ,AH^@FTq_Q]AEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPE^ s~>V ZԵ`Rs 2X4Q_Sb(<[&$Tk-=s+ds_V>*BzOh[ϝV" .m;p21 ?xw?ž8&}qnv#O yjS{Ljk(&m.4]NH43$ͳ2ÎYtGdA-N[4Nޤ+xF4Kkw?~=UngsJOhEY;;+$믑?3]+UsJO}_k_QočMW\o(My᱐m`_O+}N {Ϳnt.igOYdu3**)^=eqI5v՟jf=/^oMRq 9Ue`A h2X`2 C WAxQujoFߦH25PB?og,OiR<7 dHp;Bz@>> ='U:!V5"D@|W5~B^O9$"|_DgtY?,6~_#B"aVizhD}#d9OT.'?9$ܒWm~ۡs'E:v85{#/Z,vau+EyG+c *ʵG+Dɽ">Ե=7En]cPtX̗70£;G5_(O? uImZi<?ıZgïڣ[a-g9#}oG93}4 Z }PcD«L $I90/2Ѫj/u.LA^ʯ#0+ = uK!N^ 5υ_$u7:gRqUF w}3'ߍotZ 5Y_3PaqaBWzv7:m鷖se)PVRC |k;#bi5oY5Ė=.\w Uo(YL8}8yiȇnp7J晦Xa)<Q#| |op%౏l>"WjUuײ}@g+=x2ڤܹm? '=sH0X>2BjK? ~8w,i^^qmyQ]g΅Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@R2&:('&5ٓ0M!~9O_36zOq߭ݥ[JLwAUgU&EW>qWĿS|E|iiR V|7@Av6U;;|CwF|I*[h.|6$eiGR$1.,qb*m!H֛TAK9 mڨQF?Q(C\RXnI/kYW_| yO;ڷb<9ehO `+yR+TI_zs־O&RM7@na-2:b? ekUtI\N4g9(xd8e 8:|"tkx~-xN[mճ}PBX& P&I;rkjoI`;3KI| kl~O𧈯M˥u&]}w? SM?WJ^+5hB!W91=yQg/}.#`5*3%oAmWnX=nW\iůvF|n}rڴxR'k{}rź?/qb%Z`;Z7m``њK9YRğn~MbOUP&-bbןT艇qֱ>񭛎qyNkmuKT: ~}D?ȴ_-i?aoCbUko !&EkuI go?_p͟;}_EWўhQEQEQEQEQEQEQEQExτм[k\a%MFe Z㯇+o|[}˾zY1_W.'Jĵ'=JXrԏR>];myj4j8ž7F}J2~JմsjeoeUVRA4GŽWTHkDEyze5v+|SD7ܷ3< W`P#zqG9b05Tkt(d(9c_ >1l%d-oa.}|Ǟ7(E| /?ho?ރxsv\\۾ۏ,煅HjǑW'N]q7e<ZiAsM4>L;QƟ-76(-y9Xsš-hVY_nW!ߺz=5?\48!oO;6 iHQp: |N,~U|%{*5 *RG_cƽ+ҥJd~e,JJD}vaEVQEQEQ^OO_ &|_/^woE'BF42 X y+_?}+1R:pԯ|? o1i8VRw.~`/WaԒ\6wvI,ۨ?0'4^aOzl^9×)mc݋́uN*L$(((((((((((((((no?Hu L'j4O iɃ vv( ; WK>=x%iJZþ2i@7ATz#^ah>kMko)q{8#xQ}/al¾XT'*qJ-zWVt º^C:uwS$ƥ*OҿfOi.< g{%<8R;{`H&qȣ|~AڂTd/6^!'tUӴȶW w Gɰ~ f4y}Y?N$!p~bgs_O]]bxWGOܪx_>34BRBEkᎄ7@~;> x3)yό5L֟Ot9.etDurdDu?.ܲ,pT~OR 5rKwemuۧ]u?1 հ}l 8šyVw#'to_]焾h~ z_ʽᢄ!r0G¿ 5GYom9 < }hZ/+e{JM1 J;*Vkqjl*[SKԩI7!@:_?Q WfҥUڄ'W?hⲝtkz߆Jsoy]ƞ1dVy~QHe  ;צ~UxUцXdiP`?M޿|6[C˷¿XIF5ׇ>߲ڇ˲18ʓ$ G_5h-}M[qenE Vdu;mZфRkq/`J !_9h%\$Ο#{Mk3? <ˁ_ZWv"D}3?Xk^2<kOg;UWK_[ qRg' [VTAqvikt0*((((((((( )K? 
~tw̟ NJm W/du}sM[MYfU.?m?{kxwBZL`JFup_viY\ Z x;Dž/RM_~?~?[Q֯lؤ#m-k%2Eyr${6VzD!ۯ9kB(u\9!TW>$hm e"]+P}8 RFzm=+ொOF>7m LQoH!H n2Q]ԣைIxź-m?$<8p+_υ/ᾳmSߴ~ŬeKX@ah/9*o?f#~*|+xw\/.|3O-on[[{6$+uCe@ Jz~S[j?/ok#?/k/p覵 G}Z{{9ahT|z DfaD{ 9%rIqaϊ;PX% mk׿W.9sF}ևdžyF3xw,5gKU?4KK =^U߷$PyEqow k 7ڿ;dh6Q-޸ViEgQ*ұ?s*s]òj//Ow?^.ׯq$RHmu'}Pց/ [h~,4M&[HB =rOs[tWVJϩ7FE:VOY?Y=~J(>(((+O}*:ï|>KZ{|.6K"poVrs_U?_st߇/L3Uyѧ't@ax*O?e#w>!5OZn]H3;9J'\_xoksg^dJn.KLI9| O_Y`_Zx~ XULޅTW Ce6Gjtn"iw|a_G-u?ǯ@ fGтesҿ]<~iJ{i_9\xz-؀a0]\&L vfYo>@M_ > ?-@IKkxg#0I$I$$~S?&?R_*V_4QEQEQEQEQEQEQEQEQEQEQEQER 1 dx[_-~kcĺuQyXP)+V  ;VRGiv<)xY|8Y,3 0g8.RvHچzJ.RI]%gQoڗkn= ѵmg?,<ąpy < ˸]mK$ =v7g9GNzW~~=If4&+ybGW#Û5 mfĉ&rh9 Ydr*5SZGU@gx6Zelka%t\0 ڇ|gI} v񯵛˴XR~TE iQV|mL|~k_'_xv3Oso41zcCbUH<1F=]}L;Zӆos>p__< aŔ84SqK˚MEmQOǦhv1YۨL}n*A5k>yg޿9ԛ`ᩨSRI.-Ɵh>0<9}N2G_=̿~-xVRR5 2QYi^J^aG$vB_N.k$׾ 7nzj:JRxhJV]jݫj<(EW'ڜ\371b4_gW$ڮ]?j18դ:'R]bUmbjbkEuWL0YMYOPq]W$5ȻvXS8\5fygŽupa涷d'*|Cƾ&x]3IY±jѩ?/cp= K>9+żG7ydǖtQq:s݃U~SxFuTSyuoF|$dnvMoQE(Twe3.*u $qoC~7}g_^;.-0RF~NkPxk_ Yu/.=%F>j ~9~ȿ ~7u] @JxIVI#g2N|^_'S,iXGW?,Ҏ0jǷV-P<7VPMż)bpFC8 A-~+nal d[YJs셏~| >pM??uiUeϑ' 8 (%^J˯ٟ)ƞ fNW,vZCYA[6tQE}!QEQEQEQEQEQEQEQE])'`ӡқn5ٓ0MA_3t n[v~۵tݫʤt-VN6QCwjNLtϮC0k۵kֽ:Rh暹?;q/lns~Z{N1]e{|FE[F \BWohʧ!oS_cэ]ֹ?>> k?F5sbe|/x L Kq޶:%z*?>dZ/Mw_v*A5"pk߳V и[ϝ>+4(((((((((((((((((+Pjτчn~?[-HU|LN/c|!of,|7w%F4NP|sm.N3^w#i=FHRxW$Zͥ oGo,I濜@·vtP_?fh>I8]V_vv-N4q a?|^qq=/o_$[b?ߴpuW𥮮ڦ+u{$s?@Q@}{k<rx@51/V6rV0yTiN(+&} ˶I5@g*.NN5j(((((((((((+8b?c?AiYog?,6ZK!THodhϺ-I|E r_ܯ::h}.~fxǟk3WnrE?,Q\嘒Iff?Zx;4vh;~xq< j3÷Yc[ec<ğ@qc^մUqPwoV?ӯ8Nfi+JkN2WQfӼ|-k>OV6ʍ򯌲?+_1^zH{76bEEv:}0%K Q`VQUSZ8ӜیvM#eLe\e ZےX"plF@w{!4HI0rj'HSjOcsWZGY5z5\GZ_;⡉yַ< *qSbJVժ"GV*8W?jDV3.#ۧN=~m>taF?ϥam>eR;Rj}O9&=x5fs<#$y?_-W>7η] 4cFisp&XvoOŤv3ZAʹRX@F A k^:)ksb?Ԓ=2}/Ms=vYG9uzY[R2T)ԛm٦ڲkm?o<එ ᧉl+=O h-c_yGQW9T`Jj ngT8du#!Gq^kqun 5x# 3jGOc_$j=ſ ĶՓ}F +64{]g_?63R7[tǂ?LgֲK.xv^^륙ڏ|mi׉:%ӆY)tź W>1;g%"|2{K |@lm8;ISSߌ 3Ě!=OTqٔ??'^ь_y7`ɂ@g'6z+kCF9Nc/R_ޏoչ=:+>% V'`)"o{e=D;OAgl nݦk))##A׽xL7-Vת'c\W% $wח2/m,>)ovoxÒbK i iYVhnQȁYP|oxGozOC-# Ke88 = 7Nc-ΛpL;:،H WeN]pM̰aaҽD!x y'Fkkk?}7{{=Ѽz>Ӕ{#IF~ѫ=Mr'>!F5sb(/0y>a>0y>a>s){>aj}]/k6~A)'ڦN%qֱ>񭛎qqU: b\uKqֹRNno k x̓\J#$f$k̞>ɑh8Z5~~īOOҵ/xBE^5Y7+'QW-(~JM~qY;sF{QE}Q@Q@Q@Q@Q@|+x?o3_ |0.xRyj~Lb 9el3?[W/+[ٺt}]/_ ~S<;wz/m?64qK42.9~ fگχb_KahnEk, ATo|x 폍?฿σ3w_bKĒ>$O|YxB ]GX2zFU5BǾf;_xNڝZiZ ky<1Jd3TOf3|2 c|J#ƾ.׃~;>kF]\ĪTƟ'+D~ IcGh񟅴aXIF @&n+sm/WísQkz 3X>UɄ 0kG1<6ςUi ;[{ʸ-&e 22' xWCao^&M[}qǵn#IG+ mnU=.Ou?`7&`[hbBYݻ@/O$'Y[\^F- 9p|*Lj$;tŞ>_D)_[GE~LxIb:G[/u/^nokI&,8d8rItg${jhOߴ?>|>xŖڔPxb+x6Z+ETyh[82@?DUoxBT񿋼12i뚤6P!+(,pxW<⸥v\Ih9XyDW_OĿ;Lbtַf/ > Ir@ vpÜKS/Y|OQY&vԩ%;Iz+ϳ ƒ*7NJy5{E]>XF0JvV-Ɏ q|k8YЧ:k)~ ƽcya2jdž :ZMwP@VH9aghĚ&ˏ=E !8;W~WGuGk-K$宿y#MUSŧvr:W tIDM|WDˏ>DGk]N$uJNJ:+0c4Nu%NUorSmmō;DYTO*XJݟ5i^[R 65Ċ:{X4}w]IE6[4?W1uunnJC?~5-lmJxK .f9~T|ܜU8p*|b>ykBΖ6ejėiZiR ,', Ox~(KߵhpOn$GpsӜ׉k(JBվo$%b"|p<0'K}ce~:x2_ܭ#Iy,F:z^F֫P[tZ}h|8 V<Ŭ@&89?ۂ$zuKxV.661RqIk!P^~?Cc昹(SXB;M75e\\/ĭRxnYn/,icd*U;XwSK>gq&t_~ ]՚D;fMv[c6 (NkɞGIS?[ڨ:vM(T˼ݒfF\HKT~OlmnWP0x[?LѴԱI0u2*Mխӳ?82G/?k^I%cI{|;o _+^xk»n|I;MIdGw7C_vgکK[XWUB a8`h{r?Ɵq8\Gf5prU4+5y ( d5KWG\Eϋ.ƾyIHIsj'>Qׇ̜x{všj>#{7OⷺfP9t*xVD.Lg0Gc^m\ss|?17Kd^חs6[6r,x⟇?<ȈOI eObkѫ/y xOg}lX/ W0Aqv!g,Kּ9#ߏZ8]^+#[|1C֦S&!r͂W{ E)e=cJ2Z_P[_K|= d{+/~?=sL#2콖c9;W|+|yȯ*==W};;z%inQeWʏj+_Ÿzl0xsXObBcl?cp\⾊idfy&6x<ƄUOWO (((((xٓ0MA_WJ_`?k۠:?rFj-W?mڷ5R:dp+32>-r ٯ26cAM# )hƊTtv[֍ZuA't.oѿTh;7kL|MjkcKģ#B5/֊0Q sƇ(gQQG8X`0z<( `."yQQG8Xc\}^wwOk=$:C^^0>f(s_m? | 5xEIegrXr2P]< ͵-H J&0znIF1Mɾ-O>>~? 
> ]<дs7+g`*|i Gy a.4prLq'8*8 hOj~d>=5D@ 0 L၏<[/'E f;TT`9#w|g!.<9u6`UZFٷ98#=Z?ট |1>]v4/,&rHAt1NB!4c 1H[ccjۇG^} עvH xCiap>j?}8=Q77;zŏc?o(xvoukwOX0Ff_?Q=c>"'xU4VP%*lm*o?/c̿ /_|Amns'ؾj97ơx\k&Kϋ>3o|K;e=GOP[I 1fCIGlZ#Kj6q]ڻ!RȁАypkJ>Z_9SVNjpjNzLDi5-K$hLh0eW 6W$@\DwM>Xؠv\L3H[-k(((((((((((((((((((((_- )j2ZP]=B?O"?#sU_;[Sy:GtXԱ*7s9 vϝ€<#e$6>7+,XF{E\4DEIRyjX$mZnCQ'<}1+Ol)d9˰I sdM;Gē.\?}"W>ws&8z|hRkGQ-H\nccm&3=tj:{Uө8FQOIh4|f FHT]ލPoݔZ{w߇~xC&8 @-601i> > X,ij$rhʙap=8t_]㸼QmvOkp[l {{Ea@O ӊډY-$O__e{vn>3*?|~o{4Myյ#xc1mɍSs9yx8AoOg-ofX1w]8E/mB !A&m׼kKA{ jDhیGHӥuWRj+|g;t:Tbw+Is%iZjg)[O׼h Q۴~b2tpz#MvWW?n"{/aq8'1!4+kA_$8;NTץS8Fq咺9q>2b n&~]k~:~>(g=sP/Au [9ů"@L+޾~ h;A<xߋÿsl<'2t#7p}~Wc|;^.Ux'79Zt*;OXW:I IVS S cuh[K>b =Los@Rrv3_n&~|[J?LW2z8?vUJm!R\\{|DVz).kEWyQEUK WFuK+MKNqku x*ЊE&VeBr j^?b=4iqe#%{wOy_?lO_wÿcšg=JdQG8Lwq9̍ҿ_ï|N4߆i/Gc|n0ѿHApϳS_?]C* /i1Sw'VЫ⏀+#~~-֯[ g&5<}V7D~V K\ZzqIT%c=0-6eppvz,x'~*m<$6ֶ-XBNS!FIWџ xb  Vy.px8 5f(eTJu$K^:xSV3q1J:u$f ݫx?1VKMFkB(zW:Fh:$ZvaoYGb03{O&_/G3\Kl|S?&[C{%Iz/(\yZyJO=h(r2EFV\" 4Zg<#2wX B2-1gx_HJS0E|/ %Z|CwBlدLTp@xpCI|M}w/yk:9y+x ^[~+i?XjeW ?z+Wſ&U,[Q#2B  >' :oj0u=..-HM5t*dѻEx~yE_i"ﮤEPII׉|]|љ_žC}ʯ+ooď2o߳wلв')o<0s"mi@|)g???੖~ÿ5w?j:q592D8Xb#yj)l]| <@?q7?"<(bY>=ᐮ͞V;nˎHa>#GX~ڟ >Al߅l$ԵM7Gwq!i A8jߋU;|I¿7tznX > i׾ lOWƏw#L{{ e 1I[HJ+BYw4uK$ߎg/kZ˪,v-XnGʂY0 o^)?>24CKmϊ~! r=#ٙfX l?08n1_|~<[?7^WU'2lQG2s< DQu ٛ~~-xsljI]d%[ xci[pP|?^f ʚ[x ؚ_6˫jpvS˶ $),ҝv'p U&m?I>=?Iz^ke"[ITbG˟i"Ǩf}}wB#9t$`Nr:Y\zkuss0qgߪ}OUju^rk|.u#sq_˚Q|̩O)ׇ57qʹ5eVW4WҬ*Ыh9XTUsh:*~2fLeG0?ZDRQ_Y5528IdhTK۸IIn*{Y SH;0GQG^s[[p_>BOCD᳾V=8W?՚#ҜR9hhG\HHe^<^)"^J nD1]#گG +B ʥw._Y[@}+4-]Aim@bzZ(UخaTG5k$/nLE7_qMwSG1K#d'Ҵ{>Cc$ZfejNmm O_S˹yVzGKW~-#1֗:ۍԳ@Z#oi~epmC>(Uk ( ( ( ( ( ( ӧd)4$R0UA5şؿ@«n殛#7ؤq1@^=# Nm5s縋rRRiG5T ~x|siۑ;.ܶO Ax7^-|Jvld0q#g|/͡xDtzsJ# 27)zd|4;y2f,/^ca8V008]+}aGh/'Ҋ+⟀l>3ס' g1a7C|c6)Zu~üKEϚ;5%ë+s ( Og¿z3hkm4аC/$A0y{h)ubўIf9>2̾Uҋi=SsUgR~>%Z|2okBh[c8_g [z$ F}8RI菵p7#_|v>Un4׍˛~rI!o&叟.4?ܟ|w?|qCQ֒]UTw>ߢ_c{ Qn|iK=4 BȆQqs-7E,, ב5}ރb\ǒG,A<ع^Uew>{ .8 z/KtQE}QE])'`ӡ#$imڮ6h`2qdҿ5k=nf%W ؖbxI5Ye"Ko:IrA:?2GҞ? } vk[KR2U#͏Æn5Ź>ExB,g ĽHW|T1ߋZ_h*ҟiyǪ6Q .ᄔCc_!ԭ/e.H0<5NjJIYi?i\c'\=Oh Om < R1\7<#qr\7rno?>|Lm>+|8E߀ww>&(z 2$͟__LE;6|#g/SNvIrw4;'~V`Wkk3/^-:OTBclșNF}-aiowz:u)WYI{^O>;vmHb/<7D\(`9Ldk/C>x?݇~x7bMη=Yg쪽2j6z~MEh>hG^W9^Eh>hGG8G*=:<+Հeid]X<}2wMg]hPغ ymPN^iS?j/C{yoi%vW ? ޿pa}wmiw}Ƴx~P/ip*̇ 5(NYml\ѫ9wxjx8¤=/&RH5R(Q~)=k! fUEIJӯ}zl2ƽtA5~>"xWMm|3wYܵaayfG4G%@9Pz\­f^֥ygҴ|h򏡯ۙY>CG} ,QJ򏡣>Pw}(ϥhGCG({s|k¾Sd u=*XLQ?V:28ih>9Ih7o-RMkf\<Ǥn@NGO#ևKGc998خs,~=|B1?Qxg^lxcFیt'ˎ䆯J {ӟ޳/M?QUf1>7P1 5QXnx~҇J7X?)i}?_ xÖ:Ht,?Y[,x08gkE&"MG:}ZǦKI܀<Ǔ⾘x(((((*)k ,@bO&>9>ֿzx#_Nмe]kPe.Tu;QuU;>+. %_ xcŷ4_{JO,$o!;9rk˾l"/ׁ,hGϜHXͤM#|!?WhoѦ~V߳|15 OO k{GK{k9PhW$\)8\6|a։i>`柩xrMFQ@pcdP!h+Yo|sstu7q[J-+4\ܓ#8Z,+xh_?ᮕ{Hz>Ijk4ID"yFRE|_OG~!{&sjWa7K[(oGGπ^  g⿉ 7zR^[uLG#dn0j~$|KGwk߄zt= sIվMtbH%ddfXn!1ʾ:?lKm.3OկQ7r-QR; d)$pH QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQECUsZjFk%BHnaJIᕔAj~O_'ŭO=7Y)OԴ$>pǫ~~|gg=7+O|@2#c lQ^6GP d_7?g⧂4EQzfYlp,8ٕ>^kpt>hPS VR%kWVfR)"Yڣa\χo"?f~S8=;T#Z:o|UiڄܹwL.ݰ+99C"6u# :;%l#)EvӹMBuR^3,${s$cϛDLц9j:WF}.*Ο)Sia,/TsƁh۟YIaCӥ%FW.gFv4̤D9$$1ʘqCU J]G[+|*dUF}OjT%RvL6 SZv7E+FHvc@$+ka'Oh6S~鉿kuOxmO_LFw8Aw_]ā_]CҝW{?O? 
ùaiԫ>Q_6۲MB|o4۟6;05mJJϓÃq#?cxjo{65Z5l%ߕ<|YT &5((Dࢊ((((((((Oڏv߄|}+dbA(+ulm9v~.>'%Z}.'׶"9l}yjο/5|i׶m/{HײN89D'֍hqF<52ZJx7RntQEzaEPEPEP^h# +O{wVW43)> m|u׮|)It)5Y9\q_WdLv{5}<#ur1S(=_I%GoW_?5ajej0/@G\DA#'2_~$xľMnIc;$CupzVWğ-%j^imţU:`8gOOnVq|iH Ǘy]?_?ϛeW_~~W?9u/'.)s!Ljƽ:?x$Wi[/lr Y:W轝힣_v7--G*AYno;]W?o8VͰ xh3g/}|;+AgɎ'p}y .H˟_?/G5z~Dt3I)?#3غ鸨W_"[doxNloGWcJs̄ m`x<+Chu% ]]jv-[G,M.a,3*He#AZb8T)$r.u#x 1_-{P36|B{θ鷠[)?gKISS~ {i]Gy,HJW{,9`c nx=zT^g:|SyٻŰ+.d3jk jzW<2ܯNMC+x=/u|1;k+~᫖<Dk[=d[}IHnl巸+yPHԌ Ab#W󹯳á>$a׆-5c>_di: a uSw|B.|x"ڇ=OlokԄ|nzq~]^F֍Ra'.h5ch:{iљ..XVgbM~}b-^ᯃz_,DzOI K0TҺ:rҍ/Ƶjt}_xs3-ח+k2O,{(' ~q|Mux/~|W;~g˦ ~^)~1ȒRj_= 8ov^bԩMII|G_F?ٯ a'KQ&i75𴓦 Z(;3O,YkkŵC{j!i$ 3(u'5w^YQo鱜jY7'tNO9xgt?H5Cci@>U!:bχ&`_+[hHGq~8_^XWݔ$vҵ~vg^#%q58ZPޯ4rtKJqq|Q^iuꆆ9\m?>{iW^<]WډYnDA<&QG\`69;B ,V,Q۬ז7715ͨnb7~2=\\w_$dvWuOC&9\5 ~ͨPy#*^뙸e ̡ }~n2\|]|7 u;/*t yj,V3Bs^7¿ȵğ^%1 mxjKx[;Xdr+_?y.j o< m o`4UC76JČ $Ը{:ƴ0ɧ)lho~dAøyB+:h˚J اII]{$?M;YYa6Ȍ2pA?/ؿe 4((((+2xVoSº uz~neXiji,q!W,dy"">oi~x{gIxJx_aya۵Z[#&6X~mλ~߄o| xe<[K4(]fRۮ.#]%7~Va,|ZTsq"4I0!O̪@&ψ߶_G' Du4YMŊM@n ZxPqqiE|=9>3~Ӻ4l<_B]o 7Vf!ġ/ 8o?gsdsR-KR4>&^$:5%leӑ*@$2xg1x rp#sL=\p N`|̗O?sʙliIRk5ouJFEXQ?x+QV6{~&OᏇ!ٹOxmm`4ҰT@f'__W|fV iڮ"iӬkdJ 3䐠nQڵ\k %OZ|QWt<>dE ݺ09f!>ˆxnXVމv_6x]( ,Ma&[ջh3O- 7)Cis1Ѳw }~R+?w {ċIs5nC( 1'ʟi/>#|έ2jv\iw7nsK9bi==td&7-es-gr-2*{q}aS\m3𷃬e o0FgXAO]MQEQEQEQEQEQEQ_ I-+.?"}im]=6r.$Ieg V$w)P\7_06nN?oAr8LL9#QZ3"oxaAw,88vpAE^'C>$6Uw];Bsc9Nj|/+wt>}:Mg? ʏe ?+^#3sX|j\;0սy_? + <~5_@~/Aɡkk B!4nC1)8 w-tPQE}XC |0a/S|^zͿP}Cƿ %h;g(Xs{}V_/YAp)Kwo^x~7jѯq$60@ZX&7Lgk ǩA*z1?KEO{+QL((((㝤ךt/l>"ҕc F6̽8q8VZ?js\C>8L%1HM*O&XsYx˩ I{QL8XJ8`x s2>&:>|k\'^>qWIӹ;5I}O~? 45#2N6? /_Ɂ|=nAɐ g./0qEbs_3xkdY#㟇u?<.{|[VU*L0?w^|sf\38^=*GoytW 9b#jּuUe65+1_!|7|+ Lh&^VpC  A' oػڤxJS6R쐳'o*bhy Kƿ K^"4 y\&?src^_[F,E5:rRGk㲼\RV2N-z|0x~/SwD(աͭ OvL?]s-|U =X&ejOKOſFRQx:;Cm=WgO#ɟs[:~|I3嚼VGJ|":qZee~:__?챩&D''Qլ<bԭC[}ȑA''Q|SIui5' ~%1Tחs2+㗌;w޸ ~e/+xRZfvׂ~ybxK^liM|y Cۗib0Òm~]:},q4:(%&(r|4q|YPX[?O\ aYMx_k@R_嘤ڍIcl0eqU(`2#kGbe߳ӼEiĨ6|CjAVyh s$JJĻ/qbsfgÿO|Bo \Mks=Fz5"K)U JzW~Ͽ 47oGO3TK{GpG9hNx={ܬu Sp_w4S{_c0UwFoIKM~j?԰y)SQ0xN5* {+k:ma(UV/jF;zMjLl1-?ͤ/hJy9T`q[/_GRZis2WnF57[1RRxYlhg~\nu\=<%wB+х9Qj46֗+e+$5~s&xo'|Atn<%6"6fgnabJI)!:Xizkؿ2TyOUopAEcvtZ]Ԯɫ|-ŴK2I-*+Qwjڎ1AmXdpvhy).M7>P:W&.ukn.@:*ʠWq5UJ.X{?OElaR8ӺT"{Z5KLBP{i1{o su;K 6{,"R4'_|^z?mI-d–rlcA7< -j:q+>k?9^f x­UH6Vm-m׽/4O-xMͣvVEAź|뗐r _?೟(Ͷ%{xŭڥHWkң^XA_a'UT&OٟQEjzEPEPX"?|Cznijۣ06YPXEc k>,xkczi<{4kQ2F\\%2MϟΑN׎[5m=RI6 tKey3o!Pq)`~#f ǯnm״+{`Y?$W@Oq+ܿ>|16^{}WB4X,mgV- %f)Ѧjo6YZIK{ͣS~ĉVVh&𖡠xGA_aӵ+TI#pUE)$ՙtN֩CsMcO))rI+ޢzm<\]xGg/?|D-ƷKll HUf U|wsA_OjRM=J`pnx-@d_IN,ʍe SO$W}Yk_Zeŷ-nMtHPx/_?mtyS{V4 ۓN9i_"­P7gBt2r>GrM"#lsO aMg mte%=vijI'r̓ _??x>T֪V}vmt?Q;}F'_ smhWPL㨒+6 z?C*-__]m+_:iygڿ{\_A_@O:W羃ɽ'4n<9EN;2ĤIY@lvh|bxD %,'HZ~k_Nn.dtku+cRGQ"7.ʊ\UP 3"4Ȣf֣w#3XbZ+xpoՓW`kն{տ-־B_6^5 3M,V7NRcv~󪌂+ks5kFE6,pv&5lg>@<5}q_5;}K̏$VZ;%9l)voB5|*oE8YcE:!ʣ 8P9=I'Eƚi|ycrz|ZגKU}qUBqӵRӮ,o P2H0U;U双sJE_3{ #zV!n 7I4}F sWJI1(r+yC+FAz'UuG: k70IymuMLnv1*\N܂F ~PP;_򵈞\xZ|R7)Te[m _/_.m [8]2}7Io__h:IT !Qg]ӯ+MIS߭3OE_5ώ4'fGr_a%Ua]6B4S\FqmiZ`H6MR85C?DŽkOXxc⇇m 9XPynX۾%l6;0ǚ.aqx0-SNϸ [nf̒#TP2XrM~ZxsVF\Em췟kC6-rbMC[/?g &mE.yǮPg'0- ?eH7g/yGFx'Fn-׭t2#xywr S+e%-춹gk-~1},OFP{מy|.|)gwWKC%XׇY|n'lx>#χ6l9k-\#(Qꌥm ?Gu ᛉں͋F/԰i΋6::u)ſ7/m;?/.KQu_fr Y̺&l2Ha7$ KbV*K#I#^to[w?R(؞ɶѼlxM/)VOdƾVW]]eYNACNy~&[azuQEl{EPEPEPX>%'{ Ok,߅+1_"?*-;~M 76rEIbC+*A8|}G.|ƕK:F R=S,sA"H2AGz"oMT/?3V=^V6U}G>↣/oC(-OOzE:$&gd<-H~Q@-mm{++x-,XG(ª( (((A__OOxLޏkpҷ]H'ڿ/#>0~~֭Sp]drH9׍LI^Ohd_~^U0t(CQR['ݭڶh/qx/[r!ݚ-ڰ2vBy"8ӭ>Ѿ l'kk:v6Lhda6<`~ ?!}m[IB:BMbBQWA"/\ў>Sb7,?y@A5<Bi o=&ӟlW#mFK麷uL1۳e8G#O^$ |# Fl<~w &lʅyN\ӍZ~ο7,bGMr1Z)^#oxq-b 0=9VtGqna 
L-._,Ƽ~9YCă8Kᇆ幋PZA=v]}A>C+eRqO*v)j}u\G^XJ05)Μc~M'{PI c!'8WN?5/'o֭qcZIi22dG+#x?ڳĺr'5@m`5PfXyi6#E42G1T]')$I[;%h+(ɮYQaJQ^ҕ)TE)J-ŸJ5x^.׌>Ɵ< je@D)sXW q]g?ֿ:\¼bg)~QdJXzj䋴RJ)eս_viZ' xEOk6*r2?2xZ|LĿxAt i3^5ڬ1\3p~E띸suWo>ͫx _vL?xOS_aj4!{ꢿW_y ye hU)ݶnz$ڟ  /~-?| [|3OHv3lq_o:5zjMVM`Iu/݌"rybtU{qPT.EM׊8Ra,ޱn^Et3|Qʴ$|x"EoiLz_G[PT.h;382q+gy4|C?bwNG->¨Ckg]c|obP1 5οڃ ?jNWGߧӁ>yvaˇңSnvzu?j+17~ԯٖ +P]eŵ3Bs_{J:_Gm!~'jh:]>`0p$@N a?ŐE}i.XT5k3𯀼q/oth>A=GWcT{ k7_^蘓W{?:qд02Lq{?_7v<mgÚ#YܨdjՔ6y~|6K/ƍ_2suWd98xqLO߰MhmzԂm 敤sWYXoGm 2Vbog.,c6r"'l&J('C]'}'Ol'<&m1\/>+ UC~oLIVTd,F j^ ~Z<5G4bLkL^,sw#'¾:Q2X\W$Ȥ+{MvP'x  🆮cӣJ u d( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( +_/|)5/+jp^B߱aS6oN1!Ls|b>$*b@$+𿌼!k/^~otMN3f\57l~wojc$]\Ml儑1R<Ek? f(}{Yf<{tseo(NT>c8I _Z;.y|Ie:vw !P{~Uf>t%+M~KS@zFx"4 qUE8zWF|Wks&6X%V4nOUmu$ߞ>lzz'Ŀ@M m5:;PTA?m?WYQjO ]x[ESi-&QbYch 䲝˹ ~t?VDii );0 ƿ'7"OYW袽jIcu׿?$xHaR%վP0";9OJ,Ei߻?8խc%Y^we亍]EU 1r:LX5UHGRR]b^ߝZXerH+hE* p#2IKR)\/h;yxkscr0<."̧H˟j'ZKY-аք ;Q$8eC}gxe->"xOpĩe%(e'ܬ'{&#/?|&%PḲsFO_?f>ލ՚umݦqkUGPrT;*+W oFhv3x~qgpXk4!J7+un+}Rˡ,#īו$-{|-_/{|Gx?_GMFnel1Qyi?|]~;sWլc|7t=+(jQ|o-Ex[[ndSnZٳ00־W72ʵKI=We~lqq|OE`qlU}S%bъ+w٫Vh k$P避Xx.s'͜.1_ 0|:[j al:0zG5e\C7ZxI/_?#[ ?'JI/ҵ7QE{E?;OKߍʍIR;\O35ɍtJQ^ֿ#xc"Z{Eyɥ}e{}e:wkaaoU( fb:_ࠞ5|yy$W'6;@<\q_>Zcu?ޟ|'2IAh`8Pr7J%,(c ׇl#Ue$a^ |giZLpwx#wڵNuI3߲P|D|WviI'>ZC.?v1Ï~Md<5~Ã;ĻasJy݉Az YysM)k'#^=w81ךBVw@+?/wx={813N\;ҹ/ ᯉ5jdngrrPvVC|Dּ9MKǏ_隥g!Դֆ2zZo=s|x-Gw._9!CT+!a/d%NzzOn8,գ/ƕ)-)cpUHtp>eˢ ]A阔;J4 o37`x&I?xK]~;qebs쑀Dh%۠kdxQP薽Ev~\J:PqjK[G.u]ZMu{-?yH#94;߈? O_ [TGO6tcq*k> xi_豙S۾t\\#y;xls_!#xVZԖGΉq=;IVT`J[[V|k=4RQ惴WvWN.3Oi/T).lttYj{Btg( z_um6Wp{/xUxH^[*=F - e[pIP@4/Lu[ᕆyAAWJ{==o|;<7~J{_F(PmyJ򽫇ۙ_ҏ/Z^WW/G/+ڏ+ڎ`w%xw?Ś<E>9=y>5|UKçHe Iq*GI=Yi#z d/_WWz8,־Gxpp9-ci*Y|[:w7aw b]FY!+3ڒbPqN+x%x$@WR2#_̟ŏٛΧ +|clMI 88QK>iߓؘ">2YHyca$1:'ivs[b8F9*OWZ?z+_ً Ri~džnn"C~ Aۘ'tdYS8?W ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( r?Zs^K4|RfռC:t^/K9Onyv^ᦵcWKCJuP*ʆgOD6SG3 {vrJ KX|N& nsz=O_d9}RoK|MoKlE,QXĶ|R!77~-1_:u[iub8- J:**_'[k^I?+BMs;yܑ&|1B1Ú|snGTԤH#AOd|gG3Y!K/Xr!8F*mex ȻԌA;W~cղг@;{v`C4q8$g8 |3sxI\iG}*Nr#k]gƺxW^?ȗWՑ7PFOQ_~א|Qސ}!b=͖"orp`gIao'#WcrpdޮKӬ_Q:fZjZr&Yb}UGүW喧xTŞdHai`*;}Wko|FO$,|(}~cyִE˒嗟G$W97,y{4}QEz!EPEPEPEP+/G|5oi%'n :7G8u!b+C~o@o 7qv ^.EyEǤC[5_3^W8JR]_Rsӗ{VȾ;~_ 6Xm-ث]_xmЪIeBp ēךOA}LIz| :{Ø{K&9xb[ *\%:q~8i?P/)M#վ^q+¶ UJTr=~#|2mwkxT藖4g딀t"ʊ ¸JuUjufދox]ҧǑS[DHXUHNsQEQEQEQEQE|AʿicT E4XUӳn'ny4dC_+gֵooc_xCfnsbXw 08ꢊӌդ6*7jOUg~G 8.<~tuM kwQ a"aHza)A_?gF/NRQa_ ?{˿9!rrv+hF *XUlMGVܤm < ݟKGk F Hp pA{_Y oat$xgTp" լo18 [E'z_|7ᗉRXCRvs|͔Y # ͧ;x)y,<'|GA*I1^gJQ_(eeগ2nkD(NVv[q=sǍ%6>ZWe%z[4<2UYSt;pHYg|ǪCýN [IdBv+'̙n[|<>3)M]Y4>oEkNvk)$4~$읬gy^+Q6ZΘo4{o0s(b: d{֗} vBg(|%IΜfFf^+V} Q5w#ۙWCG} nf^«^i֚qcik}e:1$r)꬧[[ynfpCyP9$½& _ ?N%կ(g,9 oBJעEeTo vmr{$gKTbgt4(IWXՁG89 `85|AixL?@/eG,,)-2RylHr.u~56UCHX:5+&zK15I!Cd}ށ4*v#9\F ;c^_#P䶶~c,ʘUHdVdgSpn36*m? X~$hl|9ijS_d,O%f0|0;8# 2[_i(veLa3IaOqNJI95mVcK58WEp0p7}=25\ωtOOd{ֵ In8T<@{~5i~+}Ο(Q}I{$p Ei  !*F7-9r׾ֺoƵz5\ºCZfo4m> ,$/$oatzF7,w'QtcEea-t*C\=upn\vX֮z6>,/3?Nߐ%}n==*wx>'IKk+vߙ9N=kNvyf1ӕOf7n?۹q\H̼o+A\Nujǜ}bRC;a9A'+w`/u0/۴NKKW1APHg?.X/m-~[s{nz@Y$rU.p q֜ `O Qlu .H6+W|#}*+/xMmb,D0˞#e+H;3kƭIliFG#:txm_ a,Cbp&68_HoC 1Pj?d [_5)u/;!\byfDO|I?M-Awvqx|m%R>{4O4-7+붺ɼW1 晥(;[]c1#p}ϋzR5uXXh׸S8uBÑ:Z֗YjmleUe`AZGa՛_5i5@tzLNo>wDށ?yEyggK]7¯u7ȯK]W?]Ꮛ>sUhC;藠f^?R 9s_|yu'3k#jciV/^sc28П-XTg]EQ]gׅQ@Q@Q@Q@Q@Q@Q@Q@Q@m cCOȻJ7KEyAn=+rOťꚭYws,PxX;@NoOX+o5LaA$31n٠(/?] o^>灣֮ZJ:ȌȠ:'2yP|/v3y'NykwyMUShY ]{4M7(jρ+*}C&&if($3$rH.6 أX~_oJ-M_Wy0K#A=v)a~QEx^Q|5x7ɲLl GbUPÐy>Yj_?do蚤 n#f2mnN,C3_Pɜ#8]3~1xWG~/CW? 
j^ZZK0$Nj^١xd=W3V8@?n@Cc>x7ƿaj-?J;3nUȒ| p\~=|6[p7<%:,~?9]QFxxQf<JT`[5?arqgMI%5%>Rz%6g~hΪ4zqF-o"p?xC~ kHK(er` G ;Xx:7:PZVR*oJ߈wtJ4~FpI&opŴz?(5g-Uq<ҭՌZUae(Rz^t2~2 ⿁v.-/JĒNy䓞{cq ǿn8CXzZK[[`f9p`5r_y!ڮ!ɭ/5Ko<*3il/u?Q> Ry1NBk|S?[ j5tMt+,'\*H8cM! ^^Pǎy!&r:?lFk a'nKw%zׄkZ& 헁4֭ &3)=ؓ|3|yߋ%/7o|&{iZ}ԋ̯" A,@ SsW"":uFPOv=|>'5_ G Su̷^z<1tZUYƝ)?f$JԢԔi^j#}#ஈ?LsYdDdb5GU|j֣yW k9'{5zxԲz%ly$*45 Pir>]{Of5MXǟl5藲$pG`I$w-djV WLNеmȆXHaku?i'I}iXNDO袊((((((((((((((((((((((((((((((tU1YZ4DrJ.f&C!`0YYM/AZ~Ѿ!{ZG;`cPUP t\?h_,*MGD𽨓1 L&W:AQ_m7ދ)Q}.vI[ݛ88̞QsO eI+_Il޺|&'r]V{u}$B34U~Uڠ(d9JVwW ^}^YWʈkd|3MNS9Zjߦ#B[[3=Gr>&յo7M{UGÚd~lwmܥs3tBNǷՄtMy>euMᏍ"ھVax2otmsX~혃"Idx Q2u(\~-iuu%{ܪyn  98 W%a\~ayg/Ib BLw>,]'.f_r?^ELG*>IgR)5'ec.[nd×>;OxÚMy.•hP9=x?4o]~Ė:׎gLMu^[v*BA+mzWCW|aiWVțm۾]:`=Nw[=ZX9ÖT/qT4(kזo<7EE=]K,WtWU Y{dl֧EK:y6 :4 =CI#ǵ}5C㿅%m5m D$kyE Twm?e~]_{o.-C43HfٻZB0޻?c#A֟SV򓜝ۍص'#3܉^1a%B6qq"fy19vWDP}j[ bRʖlpUz\M¿xZU<-=h}s$.&I! zS:CY[ Ŏ2u F58R䔔eP5QWY^sk7\H'nn׭d_ >o'_=W^^5rCE$n6zkt_z7tk7h.FDy6MAjP<ctӾIMo{Y6Lwwq _:X}UrTdc w,~ x~/ڬ#nir;wKg:VU2zO2s[h11cA1߭fxW^&NºٺZ oǒx=\Rq% ٯ,~)Jb-7oݳWY x~OԵUUmn'EPuG#>^N:u_k_|BF]g@zdkh*!Xr6q]ߌ<Ş4oGGB{ӅIƱT _jAmH|grt[G?9XԱ. Ǧ4ZJ+zٮݿC pcBgMO]Ԝ⓷廋25{&c[pnob8C`:2WQ}J&z>dY\q+ Ƥe V2IlyXB 8IV~ g\ʉLM"[v? gȯ~> x_|YŖ" cGI9zɉR[ mU jA>|+5k9j|#- 7ppDr6z+ԒP qo 9eaW|X>\ZGx܌bU@V%xȐ#yJ,9l2@\Ҫu#|/x2^W<ͷE|wHZw7 I&'s xY=s־FhU4fM`3\2`}W;0+S ( ( ( ( ( ( (?_k.O?bjR8Y(%cLw8N1ZT߁k[[jsβȱy\/#rd}3k(—kvZufݞXr 1[$BR@7~Uol*Q qq(8uff =M|1[t<7O>,d%5YacjN1Ӣ8|F*y82QyK1Zuh>9<%& vBӗ!^y[eoT_5~#Us5kE:E l$)|a|^7=[M 8kNce~A־W<Ix7A,F ̳)$9i݉\?8G|7Vض5oZ4,y&y3g\. ~H~ݟdߴO37F𾥦[^?<7dcnv?T-7N|?}Z{Y^@Cq H vѥgexP\Vs{ݒ?:xs^uCWf''ʃ|Ԯ~/|?6k_ : ?X-՜hn+kڧ #,1g셪AMm%72Y:'$e)2vȀk_^''[hk<7=8=wyTvz񗄘V[5?g^>JN5ιd{Mςz6=&ˉ59^q޾zou/]xJvտr2vʯ쐖kW#QbU9\!h?|CN\M6?x?ۇwx~Oٮ;~ NkҼo:y4>YO[eozoPc_U':k:mnulZzy6_XO#5w?,uգk' D Q#"澯yZIH(<iЌd:M^*v{;wSF4$qwcu$~>Z_cAsb">8)%ksH# K:HAoF>9O_g{< y.5}_ݎ>eOYjrMwl*ka ڿU_o>1#ĞwZ/LFIAm)}?7?az;ٲG}O X|w wͬ'w'/$ ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( u4<2.,fer?JլcOM[z#KI-ي:'d>$%s\]_򜴒Ȯ}ͭWmĩme-ħƹ<ggQoiҥޑtqcG_Zͼr@q$֣}6 }'UO 4v߁Wơr#?$@'@ VFYYY]NX`jE_μ?(cN)Eh NҪu:KfS29VUlA:=VUlB02T:P: L#juOƧXj[9Tƽ5<[8'@fw9f$&sMviFE<?޹ӥoM\MRqcڮG:4ѥ6Zćڭ>4q6rAVVگ$^m jboJf8sZ[~:R%cy$M//reW\|oxOӔM>'[ˑ4;l{iUa/X8SS `>(S_L(Š(((((((ʯ#0+ = :w}YI'|Hc@o|u%X'o_-|+_E?lmPw8|0>JFsA|K᛭U\.٭o!Y#PyU7='/<Ê/=Ww/)gik?>o2:޴RsG_S{_% JلRjfbѦ8Q|Kķ?*GNl WPwm#/ߵO߆iҬf_z8Jdc%׵@k|>(+7|)_ ˡn/?YӢ~*ʺ(3bojs]|{_s>_F䅆)%8/$akvd¾5Ԑ<vT=+Ί{i/$g\|FK_ bfCQjYo.]F ~pN5~ӾԴ_rc9X|¾md߁>)xF ^Lzo4[R<9L|O%s\]VrۖUH3}$;3_?|8wT׾ bL[ VyDRL2}W-^{|9|Jqo9~x=?>"ao7c:qԯ9+3$@t&0H1E?Ï_ <'7o-46[%sB̒31k9ei'=?mWGmVK.U(߲X? 5?hB⯌'-rAGj?IЬ*L-!Xmm-!X5TDPrOҰQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEIs_^jg||SW "ChK6$־A3IGKF$+|PCY(ޘ'rA?^kg_왨x\h4Zb]㖆@6H|ʄ)+wҷ/4Ks$&m+\OD30Icd#7>M[I-w? PR{4O[5ٳ@e# ^^pUΝt*?$~V: {/?^L0"2sau_SPGC2ȱp8Ztݜ[kTiaEzwmߕ {Aq40$6ϸ0(Fp;ri}aN68%\U_δCE mݍ`=}qS.ln,˕ʟ֫w:JNIrj­1FXQ&g!꽪1%H8%V|r Ee 0G\Ԫ-#}ؘ0osj9STWɑHc1n?kKp~cڤm2PrS,nZ"U*%ֳ1Ajv})vU*EGM#E,}ymD8:kKe*\aW)4(%58\tle+{q^HH칮]txTB ny6h"8IBC?((((((((((=|1- Qؤ[^'I^y*q5TT͌ateF-WO>x?SZەGdrv3^BDl,ovn`>Jھ֯~-?,Gq\Y7oYe   ͇z+ ~nTmb{SEqkKH6 dGޤʱ'$jadş 2^}??~}x~_-E#*ah%ïL6Jچa ˒k]{=. 2 [Ӟ}6o4QEwQ@Q@Q@TS 3EooY*I<zJ얹xÞ ƻjBarUYePI+?w3=ƃ/xq6Ns0~3{m7xUtvP.geV睍כW0N\W4s_S9W[8y{|kox|SOr\g:DK{rBi^%ukVq4D7#iX+J/ <^ mt`>s7$wS78 C.|C}# U*XT֔fݮi:>l`c1죏ǽiEzI$Ӎ8Y- (aEPEPE7K7O77i~8 [=>l엉g';;fI\A'}sEPE?h?A?>*x;:&k[F 0q\Rx @()bXd@񺜆R2I@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@x/w{/(XV'N !o3QnB1p?_WҘ4KQ-.?1FߥW;lmnbVvzXoҦ2ۻe}mQEW(*ߥ~\9X_JA#R ж)eg۫6o{<[oiَ0?9U)YV~f? 
i\7f ha%Pcs,$M{<NcR ˫?x3r= ~^rjեZlkɿ~ ~?K)|?uAFӁo3 R?X_oIxEַs}.Smoi6hrR-s@'h[KK[ 2 +k{+8$0AHQUQU!Z* 4 좒K-QEG@QEQEQEQEQEQEQEQEQEQEQEQECsg-1\[ʅ%E `ھ)x[ .aGrd|m @s2 _mXWSM\8qԔG^iW `o,7 ،qpsza7]@E;.m$G_L]+xJBNa3 ̧A> ~~.??6J;f2&~U#Xf/mGrq\7_~yn)]h7¯]ZWO.Iׇ:w.l7c*{w _y^ O}gúS]YNޣ*x#<ۇҮ} qUrռzN>zיEWIEy/^_O2ȇFxp?>|lUpq;Ϊ}_7_}|Y ?f_ `F0ҩ݄u`౯(*TNfyXK ]^ *K.oCo<2AU_ 7(%Q~vƾյñ7%=$1ɷP3*d1^M: ;>A‡+:j\%:cݏo>^8oZٻ~(-j.3A&r`) C@~_olt|G'uIIJ^v 𣒻`o(?+W/[R_zeXUIO.]pWoooCmk k0ā4Qt.O_~k@^y:Xy:@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@;VاōQk~m#N OQB pAEz/CI 6g/,KP .xF`:3ܓ^_^5]HSqt/@5wvI7;}fGæzfW߳U|761Xf1|sxGxW,,i^` k) :c`AcI(%Ϳxn=,M*88F8E.dN5z-Sxi^/к[O2,dee7xivV̓Owg=J1W꿵7M#K{~!iV&Ϡ {{_mWM߇>Y_xgI3Ƴy"Ub!,Erar|]yYA| 5*4JgR.K7k_(-B]{Z-e=sAi'xݚE%px<~q'v1K_Oc?"kɿcO&?h_|Bk sn؇5iNbmpd8ZL,?h"[=߹kHh 35k9;?!U7 ֭Η(+vpK[w{ۧᶏtg")&8 Ϩ/U'|)2>elZIKyXoß"oOH]Ht{Y&iJ$??>i^}7Neq'pȓrI쏞G B.<  ,K yܾMr&<MZZj,j:OmlH޿JO|6nOz;'rFa4 vn8"@72I_&]r;W28/3B=Jy_j_AxOÞ ,Z=7DӢV #P  2qɮ.[;Ӆ[~ Uur"ޜRdoNc>P|tWӣu<иe1*$g!QT,|RhniQ^K(klsWK!N3_巄C Y4-.q`oslMvtQ@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@_'8iS>UfA.{Wĺwf> |DT-}>IQ븨?ݯҪ+W՟u>+|?sj|VӃ޷~g.xQ >m/SJS]D?RI+mc˞(@x goVCdгfxہ((((((((((((((((((((((((((((((((((((((4~?hO\"F[;c1^ؖ`B% 6G ۭF_[G.LgD'nax?⢀?7Mn5> Ӌ>k?O*} ca_h?:o6I犼t4Q#S)S P0AEPEPEPEPEPEPEPEPEPEPEPEPEPEPEnj! $u Ũoe_Ij;V(ѐAbv@ JOg gO?~1Gmị=g:H[kR7'L~Ο>$>!i,56 2ķɣFiga[5CF/9Pb㸴~;XEf–T@|/ٍx++]v ./ ih[|j7 0ݵHt?*+'Ck>8[3njCr>{%Yψ4HHX7޾|_Xx#<#1h :e#k;MSźı$$8PIE~~_6>0aҾ {UӬFʹ&daE 29 Z+VIk4o^3tK="N X[QI]%U {ᶭ_%ދk%֩ ]HٝR-[?+h^2[7~ 4HqbAҠ(((((((((((((((((((((((((((((((((((((((((((((((?ෟj?lZO-.<|:W?ī&ni %gv?.=\~:g.i4𼶱+ m1&q:<߲s iW?xxPR-٥hRCYkV xwpkSMqrI'nǦ@YVO_?YVO@=A|T#Nx]G0&۲#|F>4q:2=3D=9cпf/]Kux岴bV ;ƾä~Ku0׬|/\[φbM5ɬ;ºgeĀsk㟸+կ\,,~^<9ծ4k5h nD1}+~1ǟ&ŸNgnd%[f b:x̘< |UOxoNZ\6$u_(:Y?,{Rt"*Q#A΀?X(L򰗂o2?&%k񿎼ᮧG4KGRX@yg "x(3~4 $V|-mQ$qڿX?__Q#}~K\]ݟ_tK]YxMyl{PEmO8_?[/G(?6d)V%7? l~i8mzV:#k6hO/sҏ U֟Z^Ն9ͺp|st$Ÿ7Y:>x~:/uZN,#*32lSK=;W_oCVV=[9&eEbTzPgcOxNZ}~+~v#3 ՟^+#x̌4[p{WOQ=O):iyjW \~"j !iRv9쾊𯊼7㏇z?!$Ϋl:va8 ۣ+AV |+|a?4_V[Ka*EJ"?b cSwExzi4z}:I O@+ sZ?|G4[]oCmH$̟P [s/Fq?:_O8_ 5 #?.ۛxb&x+.F{Wou6]m&]r(mnlmrA]jiIH村 eAְ<+_ x/Z_3ۋ}?Lӭ-TQ$NI&)|_w^a[-Bᶚv wéV\L/^[???OK.~3^ 7|#CNe4й>\?Ҁ>4?r_}ǑjIl:_ ڏc{}_'  i'ۮt߆Zau22j73LnX)N.kiëGG?5wǮgexG5Z5T6iv Aʀ??_sVҖ*# '[K Ƶf:yG I5՗̇ 5BbrX__࡞ ᯴jc5niuMC˓bc>O@}= $ wǖ,uό q"`|'c k'/ZLski(6<F;_Jcg_,K_hyQ 2ǒY$I$}txwg_ G⍷}͔Vkvrk@PvqOk}A_lOîh  /-FH#Vi)M6Yy8͙IT:1H;G$qLZm^>[7#Ry)4Z:*no6̐2k$=R~|0ƽuo6S#̬ '?<3I xf b[;Z;aBp8G_\f :gEpG`89u4P4}?kw@Өf+&>| Tcrڭϙs,Gۏ7zfh?b|KOoڨma(E3+?zu}b[kmHTQ9QϯZ( JpyO࿉_ tĞ#XTݦa#+AW,V9 smEuoE6'9~5Pr|hAei ^`m-$Eˌ]+Ll4o >k cmC QTEPU(۳[b#:=?[4=CWW񲖎dwZdrʙ=eg W|}Ra4.ne,? {Cʅ$d>@/К(/e_ < c@-|YmsK2@b&PbFL@ی@l~W?eO؂ᗎ5 k'E·$n#(W2ƍ?.9ڴPEP?i];>/|eg.s&ziz n*rUr2H?i%~Eޛ);GTU&~ӮDϙux$eu^ ?d'~ xzS66ˉ&$9P?;+'>W;o7m?~ۊ(((OqKx+ ~?Yj$𞣤Yv̰-ͬHT3 8^E~S24nwx>yy%D\rc+ _o|<h6:~>$v$ 9|E~s[~ɟ/ߛ/-=kpI;dfH6Ģ,**C[}_᾵NƉCFCg(oGUѿk_Y~0EwLL`k` 1~,͟{~>wi:Fy V: K<@aU@ ~@?oO/Oφ dӭc }*F&S1| -'$ [X!5b@(U:TP왨?. 
^мoj:NV<,#œiӭf?f⿆sY]3lk  { :  h ;  n^VV_p?nD-~*: !_""#$Y%%&'v(C))*+,f-A../0123{4h5W6H7<829+:%;"<"=$>(?.@7ABBOC_DqEFGHIKL3MXNOPRS5TiUVXYOZ[]^\_`b?cdf8ghjGkmnloqsY]3lk  { :  h ;  n^VV_p?nD-~*: !_""#$Y%%&'v(C))*+,f-A../0123{4h5W6H7<829+:%;"<"=$>(?.@7ABBOC_DqEFGHIKL3MXNOPRS5TiUVXYOZ[]^\_`b?cdf8ghjGkmnloqsY]3lk  { :  h ;  n^VV_p?nD-~*: !_""#$Y%%&'v(C))*+,f-A../0123{4h5W6H7<829+:%;"<"=$>(?.@7ABBOC_DqEFGHIKL3MXNOPRS5TiUVXYOZ[]^\_`b?cdf8ghjGkmnloq?@ABCEFGH!I+J4K=LFMONXOaPjQsR|STUVWXYZ[\]^_abcd'e3f@gMh[ihjwklmnopqrt uv.w@xSyezw{|}~Ҁ(8HWftÑϒܓ!0@Qcv̢8VtѬ (C\tĸ˹κ̻ƼyX/ÕWƆ9ȘDʘA̒<Α>ОQҾx4ձr4ؼقHܠi2W r4j!FN?_,P\Ag  N 1FPSOF9* !"#y$g%R&8''()*+d,;--./0j1@22345o6F7789:};W<0= =>?@zAWB6CCDEFGHfILJ3KLLMNOPQ{ReSOT8U"V VWXYZ[w\\]@^$__`abcldLe*ffghizjUk0l lmnojpAqqrstiu8V9m:;<=>?ABC/DCEUFhGzHIJKLMNOPQSTU VWXYZ[ \ ]]^_`abcdefgzhgiTj@k+lmmnopqrvs^tFu-vvwxyz{|i}Q~9! ܂ŃhQ9!֌lP3הuR/ ›wP*۠c;nF˫yO%ͰrAܴn4v2뺠T[I!S~ģ4SnȆɝ(ʳ?Xw Ξ5nѲVӝ <"Luminance" = "0"> <"Gamma" = "2.2"> <"Whitepoint" = "6500 K"> <"ChromaticAdaptation" = "2"> <"BlackLuminance" = "0"> } descDell G2410.iccDell G2410.icctextCopyright by ColorLogictExifMM*>F(iNHHnC     C  n" }!1AQa"q2#BR$3br %&'()*456789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz w!1AQaq"2B #3Rbr $4%&'()*56789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz ?w)|-fײ׋~<_Jv_%kiAER(((((((((((((((((((((((((((((((((((((((((((((((((((((((g0)蕯j?~اeVQE ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (ԊL~)`wg\lKWZ願iQ|a?P//0=Oe2w0[c(+.kpg{Hx=3>VmuHmIYwxY;OBqY(ϭ袊Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@k<#xĺΗ5]c#_Q|>~:|D4k5v2G$ꧬbxm3Cv/ď|)i{-мQVaB$"pp 7@ ~A|D|]CV'Rb؅OXBe^܏'5/?n:׎^IE8+UQ5kYx[|)A}?LX"S݈QI(to xr?2SoiO ?xپ~̞,\mN 02쎤C#?_;߱UCW_?NwܢGhE"׼?3V)OX:lpPguE e\oX>&_g.<7k~ah^U ۬Q 'Ï?~|9ohb0XCd9y_]٘}W[9i Inr< OA|7YGim[ * s7|)/'Ɖ=󍜸шI`$ّTנ1jǪQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEG4[< y$($w }"s)tا3Ƭ,:2!oYĿ+oU x¿I8uMoO w+]j"Fuܢo$%I<~O-,wKs_s@1zim?;_ccV,No3 `69V(pkO~Go %![G$O}?vSn=|SQ''#lzayʝhG )8^g@Vau+o54;IZ9H,.6?򃟄_j1ؑPs;^WWbׅG7~FY(qQ]((((((((((((?~اeVWp ?bZlQE((((((((?io۫4/xknu$|w!-wJHUr1_ y?e_پX o[v"TJ*i6}IK>|n|!YW<>Ha.2nLq$ A־)ݩ럴n|E4F W}lgدO٧u lxGŚ[E06XF7<_*i.>Ͼ _?DH_j?P93B)'jJ/گ~Ə/Mt(y~y~4y~Ɨ s=iI7)e*85eԫcJ ww#P)S ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (? mZgcwk^.׺ 07Wk[ |9Gb`+LNҍv׼=D?? 
_kV$|{s%ƭ[+[QO(TTzt*'^Μv=Sҏ9$s~="OC2D%,vmörzc{f}.#p" _+kb.1O^Iwڔ.PN@8ݾ\]VW)t;(A#*WqU#o LM_J+??Xӵ|Aɓb#U)_%g}v/_f}(Š(((((((((((?~اeVWp ?bZlQE((((((𯏿/٣4^/umn]-3jr+"]̹5Ϣ~~~#CqNf0pnTn L 6kk gm~_YTxf:Hu++)A 0kߌl*wijЈc17Q 1(J1oNI|XK"oZJ sCd1裿޸n>鯗Sw=JqQI#WtWtR,1vGEW'ʅQ@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@31;J5>xxV`\k wĨ4)p堜2<^ah?]Ѝ1쥴(ngwG(#U JM]46zh7M۹W'j ijC<_@H->}bj q ;վ/U}q\'tofq\>_J/}CGߴ?/5_"V~ޱk'bFSWҿJA/?y_E_é#(QE}iQ@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@+8ɄND{Ux&;/T )QEQEQEQEQEQEloԿC&U˩7(ÐXA"A~?7_Wv־_?$= cfHtLd$cA8|!d-6\ALUkF# OisiwG}rDp) w?yTs%Fм\%NzJ*.KCfOCs1(~wG5ݮOA 0$m%s|V9kh^h-Oq,d慎C:W__ k??m\ϨxO\|;y=w]]S#к11Lngwy|Zu0e) ~8BG*+轜{o3?e})U{8k28峊VCjHg d;xğ k7ΗVuσm?}A89lRoxe U}=Vu{mCyw{d` 5 9{8f~6-:JDrTّWWbo o5׈/Y.0{p󮬮yr2Kgk~d蟳|HqI|qr; @vvqG`gpassq 3m8Vh<068ٵ)lԭ弌fHU2 㒠u~3 ?Ƶ~ֺ=bf]Yu J@x̪Ud9?x:65H`_mG>'~$x /WZ%/=6]4`!4C, Z>iះ<:I.$񧈯udM!S#D"m 4{8f~jv7+Cxt R䜳?*ᎌλ0' fqo\g<>#пf WIk~Ѻ'Žkj+5PSn6*2wm }Ww&?d(-GWFu]E#?q1rI< pQQq8Ϸ2 ~|giMN8gbn3 Sd` z{EH{xE~Ә^>xLIԦ?hB}+S̈́Z_Od3)+:*w_[~t/?/ÏjP54OxB5Dw;n3 J[߂:*vQ7[4ѕ X`־Q9i63CѾx][Lde Ѳ IR>$QkӨ3sիGRM JN:M}sիGRM JN:My\?#*_?%y_?*(Q>T(((((((((((((Ӯ//.!6yp(31$ W !'AcP͇\Hk?Xs_Ө#1XB:t} |ynǤSm9eUi G7-t<-uEWp|ۚ۟P rCEYQEQEQEQE~d(kLh?]m3Wg?Y`I%/>&xBEnuҼm}@ 1\We/L_Ox?_tZ6K(C 2#9k|#x]i}}s-#b0Gj`7p=iQ\ԫV2u!kt28!s 5bv!W}^n.Jt7~?A`a}G5My5jO2N+?\8szh'bFSWҿJA/?y_5~_dHjW Y(9E_z+Ku=WyٿC>(>(>4 'R[o_c 𿂿/_> Nuk"CjZ !oPvȤ3 2M*10ϷUS?w44χSxGLDŽ Mhc"MVT!Wہm9ɪSiÁ|^wohAfv]m 9:3_[ǾYi17$H_id0Ezej5g&C ^>:e >8u9'@]|'_6x#[Y\jzI+ģ I7[p A |{%Սƅw#n}[_e^Z#SWM;F՛P|5үQQk? C!GpYcRVO 6&+H<˙{gz5_XK|Eoȃp#V8rzƚ[̳z6j猲R w"<9)|;7u]AvZ\ ۱#Ċ\g1*>=kgm%;nzG=Zxi4D)C a?*/>UQ(lJjŹXնeƜ{8|*ំxɼA+N|Dcׄ}"[扚ZC!Z w\d?Ϥ?xiS?ozR&Uo-p~Q*Z*ck;:#F {5v/=\Kqgee,ŷl kѮ>Zx ? n| jۭ|9/_M3e-yjrrxk ૘bRtGKQxx. x;þ )vm?DᲃsVm]3=+̮>돾k1m&NUK_?oST?qk _?oST?qk^/ݥ!QEEPEPEPEPEPEPEPEPEPEPEPEgC^.opxRm ӡ4LWqYcq1Y0 Oڃ.i:o5Vm|Ȳ#7 r9sG+O MY_$m5:=pp^[yYS u ˃t*˙7 о%B˖$ȹ}y qiŻ<x+ AV/<,zӧ߯_\?gVpgo=$-?ςtC6fOӠSgxN7*Cyh{j#)f]JTGG&3Vs]Ѵdۻup s\8?Yӵ}+W3:B:irI?|->j:o6x~߫=,#7}Ohi/E!=(Eg=Eyd8?g(~ X5w?kѮg/7Yv`:\I3ӵޟ諚}:5_ Zu^eּm_yz~ G>a}>qS~Cs5?J!xgQƺH/--S\&ЫlAkC^𿀾^cUNu t8U%I$I56VTrg{y&tTQE{QEQEQEQEQEQEQEQEQEQEQE⿳OJ׵Wa?S+^M (Q@Q@Q@Q@Q@Q@V;'KJ:K@6Xw;Kr@#=zd}E|[loԿCYkn .Z~1 v]Hw|t}Wv]r|Omsgi^ 3x{zT+.o0[nͷ|"0; g3{cLJu4C]+mv^"A۾ئWY^3 e _|I<PfI-l$?=dT |Q㧇MMꋶ wH2 5 zJ&.[3Mj~̟xKφJuNv]FpJȸ=+Ծ |MΓkZB5{1v #+.YIRHj!7}QvNj\jWWQvLѱ{&^:)-̣̪ggs̙sޗmW@qְn>뎵qMp:"b\}+kva\}^mm| 5q`}^uc&%ViI7)e*85ViI7)e*852Yc/ (O ( ( ( ( ( ( ( ( ( ( (  ((p_>i|`|#K i/Q4OigYn% 4# 4~3nˆWKfm-[pY>XU,BkMZF4"qYRa`Gf Ve|¾!4 Xu)#% ],˔)aH=cd U_ڳ79PVD 1~UM:u)S]Rٮg=Q2L%~$)jC1m/2yUo\r->-.W qKoj5ltk}Fkh|K30猌MZ( <$l1Fk)%Q.Mo~?||UƵUZR\nnWR~ٕ7fJ>}+IQqz Y*+I-NRFtJe,{^Y |XFO/OkB}w|GJ(8'!ԁ^c^࿃^"_Fux'틿k ßy>V#_&cٟ=-Ğ\ M֭-.gY_V5_jVO4;' KG7+ Ẳh-@KGR2rx< ^"x#sSxn,A#=%_ibIiO8_乤qht~&pn\ܕ"C(E9E7fWQEz__w? l$V\i]u 1ŽG5:Ck񿂼9n.jPwgk)tou` r >\|~薟 ? eٻKw0;rL/1A`TO=SR|ҼmbMt;)RZh!F2 Bcؗ\>(ηw,̱.$nT爜RpQž,?uxiDZZ?o>Y+Q røl 1yEH_G:_h1@׮5H+RCkV rT\(}Ư>_7+d8M9,ZWmV~#X8钛)}$h0`60p͞kzh|ۘZz-}"0v JP;WVUhT^zpmק1?5f? y_w ,vs؏{_lo 7I EПE*K0+x֯V0~N &{m湹{xQ $ko?߇l|i>sQ&S y{: qW ׿ݶE' \Kl)ǜe<85]!5'#?4X_wCg]ZLTm,CfwҚ?hi:>!j'{@FB]qR2kC?o~E> (_5,a3f>WS qQxI խ˿H?z3sҾ]C2Ir/FS+ ]ƕiTqAm ڗ i=^"b'&8.aa0^jF1}w 'w>[}ʛHIC`pR;Xz1-_ׅy޹yyKgF>a>s(¸]-c\}XT&%?Z¸n>5V鉋q`}[| 5V:bb\j?~ŸRӎ_x\j?~ŸRӎ]|?#*_?%?~hDP(((((((((((߈<'"OtgdQ.^FPqsYի PsI.c؈aJvQnMIjboZ=; ?mH00D?眡dyO}_d|a_ψ{ vvN(f),A4y< 6Uj^9w᱐yMQ\2#o PN8_J xM^/.~?Y;Z]xԟ.)##r7 frpMzE~?]"qY60Ƨ^-Fݡw'{5O-,me$#V:Z]6,ṃex * ?Q+o #ѭ4#Y݉5Ot[Rm鲺JC~[}18g8d4WJ~)˙vQM赔^Ŭomo R_jGN/cu[sI 6^񗈭~~4>'+,'P%랇{֙斏|Qb? 51B> KHXÏwC'x~~ڋO;|GuzEpcnr3D+UM|=^yXeR=m?&|Ah)N~ OEmeȌ0GW?=ƣϊ_Kyimxϥ 0 ;w&m7A𾣭7iM^]NRK3&Ə_. 
M&mpG<= [/ğ 07CΌrcVI '#|Oozx?0LEoj#y\zg>M&NI |2_kj.}R5G3Y_k'w g[mgكNGW[Ƕ.7$am-Fؕr1h)7 (((((((((>DxKxk m֩.FTd,z|o/syZY7ٖ7@.Yb9K/V 9MOǟZ 9?e +ȗڭAkA<'A,G?D|:u~^;{}WA Zh80|p[NXQ/^O kG|=S$Jگ+k}؏↩xyF39P'8Cʼ,M}/ >4 i? ~ΟK$zT$ 1mJ*+œ88Oj*妡[_][XIJ[$TauaAb ((_L'v_%kګg0)蕯jQH((((((ď+gc~z}k}k+gc~z}k}k3Cў_譻WAmڹn[v6=ڎgwף) =Okl]^uEx,/c۹ q tgzv9.+}aitf>^uSGZI~.XI )v%%Tolea#c'9ǙGZGbyj9j>*2k7_]J!;pGfVQU0Q 6/yyGyp/yJyFi9] ɸXWt?Z¸klo>kάtĹ#)&?C&#)&?C&FTK1ݥ!QEEPEPEPEPEPEPEPEPEPEPYWλi6ia>Di|>w| m7o[:#`cY%@ۛ?0͑x7vMeڇŗJ'62e 1l N+9]GGkQoo?%v~xQV8f|E]`pLⷎ&!O/dPF5~x+េa߁<9oG4o^Gi>Rȥ^j>~KhV?ZVxü)+JjXJVK*Xz=}b;{khV8@0U@ j+IYrgB ŒwFbsYb`dws]"<iX?C- `~QJP(b*КJN2]S @2mxsVӥ2 帕 $!ֿ*+=sԓMJRniI$z~=k:V$}F|Z/]jEGLʹ A/>2`~*iVBWIԦ!_K ];,6o>Z/otOxnE yļd9&ۂ;8 %%ZZ=:^4(#<| 7F6ow<lj昇i%hE>H6Nэ|MɦvSZo?JK kX6B^**yF@"0Ax?|~5+5<;j@VV7R<D)^Oho_Oi*es0#q u>S5tiEM-}Wk?͟x-#E4j? -զMH/Qj/= y?[0^+ww X[PT"7@;7hhR4dSep59 lד>C%O3ÞaL~4kuM;6ebw\eRڵ ?]п|Kk -[HEՓ[ #s XdkzNӧd)4$SVS_/st|AJf,GU@o3 +0E 5~燿I<5 z*O-\_ ?+/.~(VS5~ +i]ځ I1Ju 4nH)4?%%uQE (+~/\ozOOo;kYZ)x"5/Q%t;PTWoƟۻ Q mWm#^ݥ=Gvc"] $-_|w?o׎uo|c;uodRϹp>?cH׭|mX-'E{(Ѥb̡X:cWo_ iZ_%ukſԮo.5E,P#9TbDaqǍʾ%RMIkCu_v2<1͉U6T8S@W֞c/2Uu3^ͯN.,%i` 3$[5~5$m5i~ҧ8Rh1F$ 2Fە$:zg/?|oU]{?QXi|d׋oLuE7Cq˧k A$,Ȫ_.t [w/ȥ3]|>#3O dn޲Z4r1b`1_k̿R_[mcZvbK+hm،GP"+[}J>#/W MVՆE+) N+ hmbX-`b5芣`OV2l; ī&W^L==Ꟙ}rWʋOWy4Ҙkͼ]˓הSy\x{Y`C>c~R(P==R(P==RIy;kY+M5]q`}^1U'L 2<GuS_ď:ZxSկbM?vޭ_ _x;@Ҽ7[glrp2c$׊fNo~R![唩w`7O(K[!"ÓQM|@ž0AnC}覱}~ukj's'Ko` c'ѿލBL,ƚ桍7i$46xx{ۆOPԏ]I1>^w\uя a#9Jo??~<fj݋ǤI~-NJў90H&LA~Կ xDcK7Mk!@dSCg̩92fXlN1cin趣(;sJZ4WwQ/My폎5Iݷѕ]$Uk.h?ڃ_|8/5̂##  kh ҊWD~C繎q3ZUjyIZ-QEQ@c^!<7amw-sGmnn㴷Q%CVrHDžQ=xWo=>'~@ִé#Ǝ0#2Xq O;Mφ!|GߍvZ_˯:n; VOMr 2PĿ_ kO(Þ5b6)r@;AFq[9m­+6)[7v"Z\C2F!p95Hx]nFO-/K,G3\>fBx/^+~hƏº&;~VW.LXXU ت@Rҋ+-OI4]B1$6ҬJ+) +; 9 Hc A-_:t[Q;2I }4e%*6W,o::m'6=?^ЖRֺDH-nK.9$l@H)4"8ԃџֹW^/Oi)w[UVk+ ͲyHTǚhبr{º~6tZ?cbp5 - a&~ akx㽣F|+yZ$Zt*[I8#wbNV)?>)i ku{f VXa`LL?vV K>cp{?2E *sT;˕JKuJcN_uz<&eӬ.?4K顆iT0 d$n.3.=V+ς״n5 .fNtSa O)\'8dzsg̋ ))9:@, |G{k"tK؝N3ʴ e9W׵ޖ?1"K*|[&U(-z/Uw x^Aơ@T%eٗ+>+|3緹۠ :aє[1[d=0A:/AMxW5:p?* ѧ%o7g0{?+:ӥ5{$Vno䟌wWy.1mf<}MB53?oiz|.}48>(јp ue>SrE&A>?n RaUZJ4 ( ( {x.Z !-NGЀjj(2]-tnIf`~>aG=>_MiS^#b2|Z(f6kͧ˪[F< f\@<⠲д=6YtJ[6yXa%@ˑnh KgonKoo  '1rI5q/N:ЇfOf7mlh -ïc_ۦ/(?dO癍}}kj TInX ##ԓZTPEP_qD>2?i\z`dr/J|%ybN09d?̋$OXdKzk΍xtta|5i xv?.MT@'<9׋(w] p}l=?QOח8Z[мGd&k,glȍ!oȇ j)Gڭ)SJ>}+kҏJHǜ9ҵg=mċvS!=\[vsyk;U5XmQE} ,y?)+"@2ÚlF+ͧxtKRO֍ZטRO֍Z9pBxӌUy^p@ӄWo^Y_ָqz G_(*}Z<9bGT<_֎p̣̬BL3$8 f<Rk?Eӵ~8gW(L= &0}ViNVwzӧ:#N\'dmI-[>Ŀ/ lj}m/,Yk0Ҷ34y_?{oFg|]([GBc=7Ђ"2M;xQ`s-BUs\{56:|Vv.ء0#X\/A{I _k?|5-fy&/fwq] )4m^Mw]gq,d&C0 I,O;9P/GqʞҼܟo/p 8<4H[&o ~'Z\࿷39S xW ~,9n|sxĐk^yr8!H}+(Cˤ]_5卥^Uakviĵlu&}D3wH>𵗄@-O|-狥̏ѰxuKU`Av"f<-P2C(ݬe9 xN+6|W>txsyCvKXRl'A!L=*[˽ݺ%Żan l5EPE>83+/OxOD-goq"dc{a#s UӣRbeYN73C *&~jz|UMmLL+#p 5oڇO-#Cu 'Pytqp[\Ba(9zWkcy3;?ߵ틩[k_u$ik]'Nَ~MΣR/$|eьp(4).Z~R]\޺QE{gEP\< Þ7mSm{m7}pC k8.2WO:0ZѭBni?C?Ŀ1 ." L { d46j &'Yk?67MMnmG2yy%lY=/u8¼~_y]V8wVߥ.Jݤϡ(˂((O. |5Mr//^ Zhtv+m#*% ܪOBsU/whc_~=;Og"%;r@fn~@ aѵ.mZΑc-V[3I PdxXO&ş NЮ/4SMK_C$ye!H%q#^%+qX=&˛ɂ&{{(=|]~ѿ>-xo&OwF(>9L`篊KG>k?̿)i9UN+reO?k_ x+Xޟ?fImkp1;IH4|uE;$73x6_ ;_V\^Gq#?,{S__KTfq4 ZG'ɘIL"⾶DXTETE*v#e9c7%O*-tѳ~~?u&sQQ%':DxQ:?m'eO?c-r;@>\rn,FQQ#d}%`pTьiܩh߭~<_XǛ)4Mak&i>s:WC2v1^|?I{mMlJu D'.4hVN1=hJ=?nc?2&> <%i p1¢*]>7$N/4ԄdEc.q Vm[|J]\m ;|mɣ>ؓ:}`l9=uVG?f%߆Z:#FqxT IKV6o=kL/-vӭ;g7ֳG%j׶7 ݜq$,CʿlDӆ_h -1hm-̤Ij|hBIs+~Oi$''xZ wORz dstW$ w ɎC"#! 
Q3 gWvriKxs$lcKg d2@ ˼XےN@5AaJڤ(/ޫ,y$4H5j}yy ϙ?Pj>qz1'}#;3)+C"BLJa?)PHr?S,w_?(J>`(_L'v_%kګg0)蕯jQH((((((ď+gc~z}EI{WgćrG>,$JwW7 ʹ~RFqf/iR+Yt+N{1/c#qϗ&W4YDy8ty>Mcsk}+FLOHƉ5cOZſ~'i177kfo7HYkr@VfPu2;[Uxу )#֝^ E,)۵ވޢ [,`c}|;"XyAx_ @O`zow;oMei6u嶡c:U9T*H#UZϙbſY|Wo^Sν.wyGgpGgF:?_Ψyo^GG=:{{8AygҲ=ngy^+V}(ϥ+S$X)bX!ZY>(R-"| vVd-2td! 8+3kl? 5"ΕhZ1#>xoF.&鶺eSꧪpAx:?eϊm=l:P^$𶷤"kKLKkFIa WG<3^xs5= 2X5|6?&_u/ K/x#Y&Ԙ(f|f9p kO߃mOڔX{{(om` "% YKWZ<'.gR PG[_okI87gooÏk<5ȷFs.$]QCja?k%_Vtf/˰xG VKU~O~JvIͦVӴk >%!p**U(I$~#)96ۻaES$((((+ߍ_Ï:)2Kí,4ЩP[8?{&ރs)Ky9~+< S^$ӼE6-p?,ҺKLӵJ,]23ͥ ,3!ꮌ`}~ Z|s/w a\r`Qؤr {.u{K~e611^A7M^T[h^~Q_^5O x/'257#87mRe''5M7Zj:uudu$0l5cb)jOFS|3V1Ҵ'*ESqd]R/QExş;txևyk}؇O`)ڵaNi{ጎ \]'Qc>)' ;:WX$;Fvzwڙ k_ܦJ}O;5xomiHSN_ \O;'ʃbL|?_'t I\Ckg58IIMlQ]40jV}.A™vQ*r/r|ӗ+/ +@(|K_~'k?kKrX[Z噎]wm0gsW?ߎ_z=חb̖No!pHJD62 U_%Y kSi}n?a5tٍCNIvYc(((((((((+Vɦ|A%EU+?> ޼C?Ğ ďxoEODVSp*Â=6?7cxsgQT,A- $,I4G}mψ|Qh5Q5ԛT`31 ,H@^'|S?4sDB׶S bTpA"ytV\ϖ~BljK,={as=CVҵY#[+:Vh/*ֱ alēo#^|BW{+k-.vglrA#hxp1q]-:xʩBϴjOjHiޯ3ڝ\vCuJ$%w| u.<] B/ ӵ.>}*&%L(%yf|5n~xAC%ݻ߅h5F]= )(+$lvJ=~1&]&YYAgˑyY̒)]He< |g{p7]v[O4 蒤8 \ӫ>8弘l|J᷻Fkky!O3kҲ~7.p~*xY9]7WX5U7N<-~A%v֮!"~)u,cb Fۈy"2G5woW%_6qxr3%<|SL"!?9N8xSKYx>3q>mo-V-m A;88kMh?>ş $^RBi -)|ш°xE񎹡x'Etя66QgvJS Υ]l;-ό9l_[m^E{רm/E ~/yC֍ZΎRGT7h9OCE?R+F%в*(Rr:+ߍ xtȎΰ5:A9x$0 ~ ɱ&g8$w\M5J|J)9-,/tߪڛAqn[¾N3ЏA^Ţv 𖝭/`Yn2Gb=+ɾ:U߃ djIyq5Rѧ{eYG,MGSl7'jxo/Y>|_ ϙo_Zy2F|1[t1|Mciy:%?ko^Þ=ޝ=-1bHoAB ~jxįط^)Xj~o|#KC1ޖ#na]z% m7ߎ Hw؆ty;} /ll>&"JJQ}Q9&?) *Ucdkv{>EVQEWѮ@{-g_5X¸񎟮xHd-oFDW"$P,7ƾծ{y -ܓ4%YLA@OS7/A}mowݷo%?iѰy^v88LԴQEQE2I#Ie"@OA@C^?O/~ D,7:tħ qE1kG|/cxE?(0ldy>HNޠʄbs?;L^8O3?*qG+%Wjq}mۜ{8N??>>^X;ƌ> |Yf6-X*'$‡O}iQd%f(HQEQEQEy_~S|u+-ԃx]\]03rHLcKzQE~:>hw>\?=l&+} ̤LmLh'U5MM-Xd{m:p-mlr 8†$X?!wMO:}ܞ\7r3-mb0!9%2t53;3)+C"BGz=G&l]>WӾ#>e ¼֫.fE)c8W*?ɓ~h袊((g0)蕯j?~اeVQE ( ( ( ( ( ( _~ 4i)AeK1H\~hX9$ml8v @l>~oOo G\(},yjd"d&DF|--#[1д'`!U<siM<| 8-$s Yn23"8* ?> j2vyj$}6X3ϙ3HgJ.*P@p+ra){#f5*hG}?jzaYi2\{Xe-C\֬w0Ƨo'%(¿Ġz_w+| 97-;źC }RA%WYcʮsدZm{P\Zd >{|2KIRp<~&jV nxϕp?A6|ֽڴ8.'8,F<<<%e?8nSvCWi煭owzmț7\pA.GNmb whW, 8$SDx]m8k|;MzΝ~dH!'ͬZM`903H0l rw>ƴ8ѻ?OGݥ`'Ir82x ;zo]qvxШJrUHJQII^Zڛ~PZ^_ҏ/\{s7P4-[#4-2Xn[lp4qA'`+''ÿ}. Pn>VX`\)Ȯ89N[Enމ]ߑ|KTy՗N rK)5Ho5Mz[ _%ス,%晰NN"1jݝ_^i6ϥxz#,]J 8fThi}"n7{Gj x_]SW] t,TH%F&Lk'\tOVBk5{.lohe_FV_jFBQMo_5f~x:HeIX,tjKU_.h;%d|oOxK.7 qg.퍌u8hwVq]m~sx6\|E|g}oxZu-L/s\\[ mkOn#{t ;nBe FMTfc𗤺z;>#b*Mri_[sO>^Zj]սbH.-G*ÂV+ӹԢjQA!EPEPEPEP_/)O_k+m=tƨqciq#̹ #$ FRoSD?/$@K I*%px$޹h OhF*Rۙ.J;?`<ǀ%ǂO[mVF8ea!vJ㗅Y*j C}᫋KNu{y,|,,吡rdVޗ/cweFXaف-t:nwzt[O<;9 q e.XZP$c\endQwեWҏJ7fg_]< -&Z :zdlLW|?i^.GǯcXogo*n@1dXTǒj?{RPAA5/rE>h+OKxgTTĿ{vE$~R٧?-|M_X/fj oF@Ʒl*5mn>;ⷴWHCXNCJv/,iYq sci2 )qHG5O*ozo'7wy"?g`?YK1;ky|p}G,U>zҬ.~~Rg ޟ|k)7l=t 8ai_Ƌ[&oY@ꨳ<{<3_sx>'4h~1;]1e q ½E+_#}58.CiPk%g(u_kLz&r%^I@7cퟪ]Ftei:fg֩:+u"O;*n9$~rrG'|u׿+>뚜|V{ {,=ӵ|⯀5׿? N׌t|?:Il X,-jD'$ɯ q C 6LZ#ݷ`fȦ)9$l]H #55o;?jIĚ_xsǖgrXRG*~,~`U?3)j*Y(FMQm^^&Ӽj?`O|T j1WyӏݏW_E[w%Sh|:1skQ($ n$s`cῲ6~8TҼA<v,oXآF&s_4_=;8`h^)|;wF/2;*zG3?8k麭ums\M3TB,ceF$\dd;q]#C?aPWTɝs֏\O'#ᗃ_߉ti"6QkɄ6Uik+|{c[^>=;kM2_]8W#QD*I mX\ߴ_< wdZL[ neV"V2$ ]|k-_=[S7g,v@2~>eK; ,_wJOتz?2'6W=ʸy@G_A}O+ ݊l44!ORy$ŸaT[k=%j:\>>"'&VYC[9yvl~nzVaeiv6vp,0*"( Whu$Š( o|74$ψ h{pdlgdh2?,}**T8M.`F9N1M$oƞ /i /W(:4>LZ;dh#HC)r~Jx7Vѿҵ8-{∳4Z}Xc_//K|3^M?]gʒ TC5dfq$!T=ξ#?hTy ş'6O6)a)(uJw]aq4h5R&\ҧtiUOih ?H!m߈JdS Hq/ Wiڝ^L=Ѿq^Ke W7 >tT](rG9LwZO%K=V&TyjNʼʜ q/ V/ƑX+, i?, k*-%^4_nmP|rLZ#M/yjX%JhC+/%HkpGgcjy,rOL.k֯S-XiT"N0wO4YpY-%U߼[ZUl\lk7w_Z+_ L]ޢyިCSˋj$d_8] rO, ;tI5:?]CY#zeS pgߌ?$W3|9! 
stFPzLPl%V)oEno<;SR=9Iɾi7I-˳Yxg֛798<6F:_ܤ8wKѴB{ =3Nvmm{U h~(Gt-_Mqwv#WWQP4] ʼ+*S~g +x/_EA.}]e|PcbI@q__ƝZt0]+]e.\.'` N^g~&i[}NCl|{W FS#'.9 ƽV^KL?9Vůkj@!F{+}Ji!c d lᯊ_EQ𶫶U-H@".d=:r>OmSss%Dž^cqjOX+[U|Ld^/bh?-ℱiN.]Tpx~:J*G\9uS&OمQZWKGB5e .m2N +(QV"}˳,^ Nԃ[ROkT~`?e N[V^n![lKXf2}C] w'yrwY+I SY8?&kㅴ|9-&8ν>²μܞ췏<qZT%wъSMZ5Wy.Yiv}/E~^Gڃ?Vq,#uaAC$Bp(ri~k2_Y紐3'gt1RNѯ4|x_dUPqX)|5>hzM|T3K]gQE쟚Q@Q@Q@/S ඗ɥDB I> Ѽe|qb{7a%ˉ!|FHqXbpJT殚d'*iӒ{j}A^?%x_^ˢRkZ s1 !e}ΰZ[[#R[hTJ4{w>`SzWg`MU1̓*E0e]<PUsW WDnwqθyW2}^ FRy5'\Ҕ!Fe֍iZޘVK$cbVER9q5 |#z^Womb1-d11A*2 ǕVс^-?9 k0ҡ'%Uk_~_qSgmK U*U㇨w)ьS|wn1Q=ckmni๷i7^=9}?Qv9&8 _}!sM>3Xk:xcEмI.|Ct,L$֑8@Cls^rU\:z<y^E-Tܖ/+^ 4Z0I$:ױApZxWD֮gym Js]5ŏ-{Oz?Qv٫LlCɹ/Z_k~!u oW}}ى?Aڢs5_5Yy.հѹ$=҄4JVt= ?/Wo|+^Q/干5MX1tL, if*pJ,.h^]kIfN-89 $ :Y?$ij>} DnB#(fؿgX_K? G)K#±M^я,1J`C$ /jZݶ?ȏ1R*uƜovEEjމkVNUm5-2RӮYfYbe]I  [ᗎj[k߂#2x\(1߃[ji8x'5ɢbr}$A"xm+CK&v" ?$ qy]>飊=7Gxg }xCoMC;D[9.q9+<#+s֛hF^_\,Q '{<^)~,|?7]xÖEm:o 6nSc^5~6E<'rog?jo_%M=i|Bg͑[K5'hkH[E }eG S;oW//@ Yʺe;r򼪤E9S_=G <+Vj"Wld ǹ5׷0HڜuᯊY=𦅤xk.- F*((((_L'v_%kګg0)蕯jQH((((((((((((YiEΡX[dc$%ʟlOn,> b *Rpy@"36d_hگV׿h/]1hdZx3K_*yT!=A*>֧e_AF&x(uQ7RKޟwm>㙉j1.`rR!²x`W^"o Zݕϖ}b"dgsk? >|$R>ΟY>,k+enx ++yLLL|_/x(KwVyXӷD9-@GGkecn ʊ^^E20L 2HaAYq?Vu&7vm|o~9~Þ(Ze_:"l,sA|8jH |N.!{v'{Z7zu}#I ]4kFl/x.baI#`U o$GOk7+mWo&I'j[; x^"rB^o}//Gn~;2l=,mAb ­/U{ӥ8YZ< oOx">Y4k>=Z9rv ?镔s?P ѡmy1MJ[_o@/ev$  3#N+{Os oݥeLթV Iʣ}_][JaO'c=EWԵ E|+e'mc2ka{e@ϊJVv?W1XJ/4$޼7UxL:rY5JJ vdSv_,H ^_O3?4 }+ ][VD$o,ܐA,xKi_t41S4۬ǘ{u64Mf`HEO>JՓ+n:x_l}_uF8V1eRUUmCR3,LR+:s5^M1?MeZ^gX,~"Vm TrYLɎF53ۏ=]V橨ƦtoDU|0* ~ij~Jִ=WMMBG e-[RI)e @zN+IIwn@9¬.pxз]\_Oح/ƿ& xX?gDPOa9X8b7˾^0W5x7DŽƋE834jZ[K#e<'2f*|c[o>(#Ź_RMV>>9XkZM9ޣ#M5t~J*AN0)2HHf%ԫUQ_ X7߂:ψgx&krzˌ3ll9+FۺOt}W uX+m%fpwOSc2iBkGJ1h0 Y&A/ x"𖻦xDu2{# 'iC(+; '}]?hȋ|p"r7έXSݒݽCTSroKVwO*KuxǺÚc,(zr231/CxknL77Տ1yɯz%|5S}|#+?6br^X|. 3כqϗw~KV<<_ѼN k͑EYQok^IN$zNC#K+8ly`Zko<-߇z|k#>ԡ0i 8  dJq79OvKN^&bX=|Ν8ɴ49;&t4y^ghjA~尺ybU%zMlC0zLy~դZE__ŗ$~,k n{Wg%ʸ#,ǯ=K&?ou$ 㟂:boq`r;q_9J^R^vdt-R5%b0/朝?I޷)}\§tHG-=;t杫d񎩤_ͤ{&+|ǧQ^q'/ٗZ|j۹6X,^ %e\k85iSiҦWҖ,o@ Uê6V%k[ ssT,ZP'䢺7*|+O5cafdb@Yk´}; 6FGg5 4@ڝAbEs4W=fk'$2; Q Ic{9l.k~_"/h{ 1u _&iS2c٦AeG'Ƀ]'&`V+M]l=},>e'pf;Ҕ4ZRvK);YkjǬ8':raii6ϫ^bK8u@<תڼ'i!^Et_[Y]o ⸷K Do9(T>j1RUݟWͩcMʶm]5t=-ZYlaڼ_O?FcKJkzh 0a.8`㓆'?E,LK~%XpU9]Ӻgd |/:[KVn'fbp~п?h+⯄3|Mhm _iݰ6 RÚpIm! HA|Tz+e`yFwDp־0_Gߡa_H&' Goi-ꮼ/f5/̃Kdzl׶u?¯^ gfeԶ0`.fvUu PE~?w߂Ɵ_'ě:wkk5vJ R3/#x@ox[[|GB5]ŵg$J3.*&kQ%iћ4QEQQEQEQEQEQEQEx&;/U⿳OJ׵S`(EPEPEPEPEPEPEPEPEPEPEs+ ໯x^9[_N#@{(,Dz/cW㏋<&x/Po.ڭH,cd30¼8`v>9θN )u>JP_ޛWd}o/~爡 Y6M}y9xP+Eawiue#ח#?$[c!]KW*>q< &届OPe#xR+TD4XE @+?1/eO")t}\;"uލ5u:ٟ7|D_#Օd6½xA:jjtGy6XƼU]蕒肊( ( ( (^6GP d_?/FaOqd!+K2ۦ ǃ$h&S(˥VtfMh/W~3: ݝ4V>$\iښ$Q1TbEn^gៈV#ž oYB=?HҭFv{Af% aF:fx'>^׆_b=zfh(v8a0S#5_duS?}/Bds&u4ױ睝lR-_3Wr{FM7W~?o"8NhԫJ3**U#s?ܪjOW{oz&^ 4M}E5goU#Ddu~VXu3|A'f{T;xa; ,m|]+]N$B蚚烟.<+57߳{OM"љGnE 0MnͶV 1(YJ{猣_+a$ED$nf=cQJOO_??uae\E^4#;MF:TIZ媕pZ_Z>oh7?me\)P7B2r Lo4x///>ul5C/R_A]fZ։#Lf2 'JԛFl5?fk__Gsgsm%וkr3p$Q$r7rW5' }+nvoFsQB8*4^[Is^3*QwߒjVMIGg+M׌f+^' 3z&s3]ßڿAռH|[J!q O\Is7_Ut8o%E"9G g*{\/ :NϪ~/*TK=i:vǣ(ee9 B: ~4ukɾ-|,FVk82Ɂ,Vh}wf+M3P>hZ2:gZX\\G25,6?{Kf5=ڊ(B(%?Ɵ eÖ[=J^0 ꦾߵ}%BoܼF{$vƹt T+ % D[EkɟMy}hB懬>9vwEokƶ>Iu{''uY 0=dXt٧iw?ş[Xg8ُ^? 
}z~2u߶7^iqo8^a$~n]۶ n#$v\{b< Nڒrv~x /O ؄N+NqR>k'o!_M~!qNe< < 4x{z z}m*G8 q1UI {~/<^|-ՏҖWY>&6w*wl቏~t_)>#>iwmʹ<9Zƍƒ\]Y@-(U6HSp',exF+}D$bLFy%IR2?~p=+4S6cٙ5=F[ VpR pOy#|S-cuVsHswQ{`G*ɸOaN(VOMQJ}#7s~=(8ž<-H#˹e'uhX^O>"񀸸ӭy5 E&I[2>QRyK/iޖcU xsh~*lu2OeqJcr0H#ּa|L~ 5/t,>~0:X bA134^\ _{_x b]?G}Ǚq$* U?~ʾ:ּQkzc{7mmQvU½즴c{ݥe}>ù ?m8TԩǚJ-N^z _߲'lz5NJ-aaU{tW  \Hze|(f&/_G!EPEPEPEPEPEPa?S+^^+8ɄND{U6(@QEQEQEQEQEQEQEQEQX"Ok&bӴM.K:EjY$p$9)(QRHӧ)IնIwfCmg-ıAoY*ĞŸm|.>1Dc-a˧OOBc oOW֣qkx-mIFPIJci,َ#/W|7i[FOYaA袾]cqٟ;O1T1_Aڝ7U-e%֝=?dO|omp?iA`S,#&Rs_|?xWE|?Z.{+(u''y$ݢl(`'c&uĎF?5J-VŠ(L`((((㥇 x^5̚CQ&U;b9@ v[RNCL0J^zW4y\iCjH!T0~n+ߵ~|W;| ŢI_j;?v1D&@t,~K m,P15I g⿍_4O |3k+^_,g48K^X;UU ~߿?h+ד@NoIklfXd.HTVy$g M~__I^+տfwOjZs& rxc_sV2JOrcuSMOij |Ad'zȬ(O 9@?v鶚DkЯeM i!LErZb7 7H_C6~uVU=."帵`G1?w9u7hTYh}Y5IRf߽(rJcR˚>E%/[}O<K-JΆYd#$Ȥ2 =+2 [o xHđ˪0ڬ̏$:|JAto!ַk֨hiPD;R\cxHdin-lfY@"+NnӴmv/mҝ97-t-]l֜l͏g$1)>{'oX=_^6iMCn&џݝ0}yֽl}DŽ/Ihj852}?iwxwQ.[f]D^?#d8=F2 m{W+}e@EI`9!ȭφ~9t~2k2hoxFc-ˤ2Mh0!XdzVzبJ-}g3U19d5'did(͞g^jV7׳8Hm2I#TI /+.<>ѥI/>')n2\patˍ6'ldSƹr]bxe@^qq%IJ4,ŝܒI$TiR? 򜩭U1~IU֧yo|:ml~] H$cuğ~w 88&oX=YML驙ULDܞ˲]KH$zƿ?zkzO9k'Ѱ_| $_^i (O ( ( ( ( ( ˬ2-oK@FU7(хX]^osL;))}ҭVf(e$w >h?o!_ mmXS{]E!ˀ0f r2p@?Ko?~eѺG'ե}wZFj&ZGk_E1>=Xx6Kω:e֡1wnnY |K-z?'㦱U5ދI,yIR+q𯈼gi:PulǼxoړ7 ≼׌"j#HՆڏo4Oj7K66rW#dq\CxgaCchֺlI+)*H >%d}~.֣umo.܏<{⏊> 'bltmCWL& v9/<=I&㩜M=Τ%rz|Xt&zMy:Oq~76[:Ӻ֣|~~ۿu#ᦽ[d֗BDf23#12|Gg<֚='ᯉYɊ37Q*AOX濭xsAW_ Gӵjvii,\뵣t< -@,'|8o:M/ɗPkyRI^ΠgiveMF+Sߟ_>fX qi^*ME)6q^2׈Ǿ[M{.~H/$R(xjveem|;º1`&5=?ѥwEFq_%:ֻS7ӯSBO8H1U,<+.3g]ޘM G^7"Tk7 Gk'ʼSm u WNnyxDPn1G)KIP HcҾ?DֵZV/U0~NxTA⋋ijd[].U׉AB>H pnkoxMKXzqG\Yڣ7W.*HPM.cȨ)T83Q=Ԣvd{FWK.d&Aɯki_7K[_[>'XbE˳T>c=O|Iqk~4"lAZF8)+"UfO%z1_<չkYhrLΧnR2+ʲJjU۩/Xlm 4ڦԣ s[H>9> mO'L5XaJ\Ckrue##J(?Š((((((?~اeVWp ?bZlQE((((((((_\|?C^40SstD 79?[tok)tW#z) Y?oZLox &tQլ.rZ5w>8|2,xEoqA6B;K8z5S})⼻*/$-ChWcFQixb4!R,k+Q݅eWo 85/+藈aVE];dMI'Vq18H:]Mu]c^!=VmRNO} (c(((((ޯwwwy~xǞ>zɦF F0P9\uqW^$'?5MkNkc\Xo>lmU#],?#ZUޙ)uk6O77t ɯ~^?mύ ?#TpPoՀ#ҊX񯁿e!|MCoå^b+% avBA]4*_tGXU,@I#?CiSOb4i,\g.1XS7c8/(\eUKyIV$}CB𷅮u&n3-4_AԞrOW!*DXHo~x+1_69>?h"$M}7?$x?+a[qQIU1tʜ`V4.DSQ<rROSj|wZ|ii2i7o6h"#ݢ d9 H¿ڝ_៉/<4|?u |{|Cуһy%^_8??X (Ky;Zs"8*CWs_'RB]Y-gPp-|@_;_.E'nG^}wUӟ+xuڳi8.>>l=7㇈"0atÀ3br=Gq^5=x sE{{tWFapEyŔ!i4'??AׅP35b:v5佴W)VckrXce:WRwt>)a4>]7/}6vj~&~/wҵwSs7S|lj?w\/POv𞟤 me.߳Eppu¾氵^wP]x']> KnW\a2Euԋ7v_e+}h_S*rBmuAdks7}hϭcľ/1.mbFӣq&7Uz_9nNaF(IWmInv6<Mg2nKėSz|A=~C~8k_(TƮE=IH9drٓ wέm㏊SYĚ:nrc-f,c5x rhu??]ҥ y>daٔ=k W/>!G[[Dgo|#HJPe~I3_y\}g#沍Z´~|y+|0!koZMQIaH8*8? Sz_Zњ/躢ǶXWza(_jX~п͛?soD=;2 HRkmZW4{yCyM~2t~^7ÿb mqX$.q"! w⽒iԍHEwfxLF;~tQEYQEQEs🅼iׇ-]5FR5 8Ig xxk1ŽVlky8^?gyn.ľ-,zV=:pr)N+ > !\nzn".˧|Fm)b Mq xh0^~GQoO^ g_Z0v-&$< $ڜ_yÿ{M>(?N]onmnH$o\}ўAu5g%LJ ^cQWk{NԣuJ>zZot=~Ѵ .Fҭl6p,Qu=S޵hIYS qP[% (cdDTY#u*!+|qğWρ|3O^iwV)Cb+(M|Av-[N-׉|)+ CwR"%PHƽ\^@ 6YxOǘh~B S|gmlDqkgHG2BOT9I\75o'~|@yuR] e2$bByi2¡9$ujִBPu e #08 9c Gnڗf| jCarr}+SU=k1C6Ԥ*V꒽ߣv?eln#69=,E}QA/k^=Q_P~QEQEQEQEQEQ\G#3ᯄYg|03@RQWnbaJi:ݒl~,~Kij^'N`E$32MkT@\IsY?78ԭ=K3߳ڟZQpnѿs¿9"]xO %; Vpdcjte?>xWI3H;Q +Z88A=eZG<oU{Ԟ Zw (ϯ ( ωߌvҧ>.߶1rJtP瞫5=DJf$XSTbNs/p=>{/kw `xہ-Z.0?d6E%ZgxMAQ˸sd$kt5π|mp ]~YЮ"YK %99ol=NԬu >&%)XA 0E~TaB~ο>6X~iCWHI''z9>P񁀱/9eܹ3 ژWA)8JqI_IF~~YĻ5_>%!-xs˒>)OO۟]&+awzor38$W?<+z_,We./+rp ڹ…WU28u-{;;ìRN>Yw0|3o x+o(((((((((((oO K/jzRYG!HgUrKt(Ӿ7|b wnMvqt$f7 !Nꖣsj6zr.-nYbOfV*5I3eU^^6~s9%v]ӛxoX]]|McUnj$⋯rh1Ec)rBq@oڻ?%H^ҮH(|>% T,kuo=Ojg]S)zI&>좾>,xbsboq_t? 
[+y=^0=r: MT?ٕe_BӜI~m#?< ڞ3׭tpM;nHCoLh$W|BI˧|/>6v)?g,؇쟧'?4ӭ7Ƴ0Hg@Cs0`l50oiQH8Pp}z8s\t$50[9m{ũEl'vwIjhE~Eynyc_AEvR q傲>)Yfat8.}[waEVQET-"*J6FTGA/4]iLl%ia 1#vԆQп-x⎑M |)nOYY֤-B2o\̙PI?uĿhotO<xu{KĺƆuAekA8` $ S_~_ ?!xV\:Iw;4qx^E| E/ooz>0j;_.>!~_<}ueu_ ֑9t{h1HRz/Jb|sx4?ս b4T{Gmހ=((((((((?~اeVWp ?bZlQE((jzW+ Iu{}p*rq@诀g;|]O_ʞn댭HNRF}3.d'CoƏ.ZvL葛Y_ٍ}GoxX?*Fa ;+O:55.i23E$x!Tb:Ʀ Kxgea*x ~xIm.Rd)nv,3|7м E\Z$n=|% sN[/վsN=UQcu.T?Zo-[<:u1Xdd g,xu?okFn5iEիA-Y/^Hۏ5O%O6㫮xsG&.["U |oĞ ;ahSh/"PAo0VUL@ r3Q5(v]q̰U#7TM NiڄJŘE0'~v#sJd_v"3k}^gY>1W[ :n1~K{^mm/,;(o_CEWOiwqcnwc $Z.|¿o9G? ܴҠYo_dL(UUk15b~L>SGm+P|mp =ƿJ+C˘YN+w{]wcLU 5iv):uME+h`+?- ( ( ( (^[Zl27]̱E1Wjx;Vo x*_:O& /2A"9%kt8'.]Ͳ ` \qR2 R+59rR\^,ǎ!:4Yogjp_WoIxt狵ɯ\[mn F M%*uOAoX̰SV8&PJ`w:lV6vp H`0ƣQ]1J(QEQEQ_6g8WkqxxxnI#{+xB,IڪZg3o~u9tOYKwk E$x`ے f 8Qٛ Ꮑ,vm}N , _i\ϓ[]}{Ex/ DxS~B񅇉"4w[BbmGſ?7]kZ?VӢ kD;HO Ry^E|O5[]i7|.nͤ:X]EEKE-xFZOSk\ߊ,O4DFV6YZ5=QEQEQEQEQEQEQEQE⿳OJ׵Wa?S+^M (Q@_~~#TΏi6.c8!O؅Q$G#toᆟv_M崬~69mЖ#RE}Ag|\;5N&wngr8"$Tўqߏ/🅴AE1I wß^4/xbѤk]+HKkh9T@,}I4EQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQE⿳OJ׵Wa?S+^M (yG |)Ɵ|eCCck=܀WSO|x^Nд}>kFOHUX|]ۻ  NDӋf=N߈Eyvw-9XxzRQ+sr^q&o ~2Uj:%}ݟWNJUo)}M**km :nxo-{#'UuGt2Gw&u#dV<4IX I^N-5-3EN:0ć}՛]1XȣWk*5k󵼯saTRSEknyVψ1G4 S.],j1agj].q>o?3/ cySgo$/5"C5_FRzp}+DW ,Ӟyb#d6@pGwUӅJ^'fk{cY39byjҦMBz''o{+)uzݼ/$24nBc+s+?Q |f:^GIJYdOQ\߈!:/GckzdMv HXRV߳jJ5OR7:>jz.  a nH8",/.-ocow=fUm2S.F2I:\u7,j6Z`̒-yma@;'# G45Lk$W{wKkrCGg!xL;9C{&=}tXXn"#Я]ImNnTy(EY)5k|C=کW?(g,Tds975;0Q." X{Hkȼ_yj"Z.? !+ Q$`tx:F?1[J.l rő jTa?i{}{{3So|qvr^6ud%$d~aFT;gڴnnwnּQ7>x'Yjnl5|ax Z&uFytn=[P?dD3Gmi&ieX ʼnϟ< }^c<m7Hi6rg#M5,Ԋb{/&},3 NZp]W.^c+# ( ( (>N^~OҖ]Cx'zeeOAhy?X#_t++Ʈ2 ZiV{Goџc/vTѯEWyQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQE⿳OJ׵Wa?S+^M (O_6o~V1219vW'VXպ>_^;ۼ+u5j] WC cUJrNI>siBKas_Ք+3[RNt^Քm]J?jTm#TN V/j5ac+kUrMK_6rUI䓂qVY-F yђBAQihcJ%Q)#Y.^W'h*hNu*pF> ]8HS*PVq[kӊ dxW.g5q~%I=%]v55_ލE}XQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQE⿳OJ׵Wa?S+^M (+OKQ}J5;rq0Q <:nq舅,e!+ ާLW^Z ^j:}@؏ڿ+gzß烞1R⬯JRtzwk3\&ڥ^;fDH^_r,{߽{~!р}+]W6}B _ԡK<.r˗ӎqW]b~ '[VʊJF'S@9fp3_K,ƴpO~)H> (Nq{iB vRnݮIj]Ox7;Ssgp6RC#wn~ kN ʻ~($$~<&~(|RvPhmᶗζ弊7{Y%\D%K4U[ƌ6_viŸ%kxmWI/=ެ(>X((((wgii_N&Iz)koMs7_<[dzN(=уFxTW4VIg3gqTi(?ýϐ~ڗgM)2E21$n6#*)ioIMXu)C_ه`fc `,|R-qaR.cbGp+Iө;}>w!̱vd|ƧNT=2tw_F}EWyQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQE⿳OJ׵Wa?S+^M (Q@.7o$>a5`>'ӂ͂I&Jn2]Si8?iث oˍ3Q/}j]J_-Ej,v(ᧉ޿ovƗ3ѠYC{z2 ?Ǐ >"ij֠ 7'='":)`RA|Mlj7ߊOJޡ^iY~o.c!Yx9C ” ӧS XF"'YVrNwivY->ֽkcGsId;pz tX5wk VHb4^+-@SsfRXە'."7ox[b5rG-\oș>bE~wLJQ}dK.J~ZX𾒏IFm%zycCV <7E5ZI#&A[r<<.&J7lK=<M3USݷ-YX4ycw//M𶟯_iφCeM"O9=W~8Gׇ7p<~\?LzXV`gnsIԊ?>ǡ?Eou~=?LMH{*bxrMy5{։Z^IJ&+m+LrG<.;N9K+.~9|4SfkgUP Ѿgm# c<Js\[X`^O ID6֯]n|9Kω?/-WC7ip)FڌQXYn4W^6L5_ܫp d,VU(|X]\'#Z(B(((((((((((((((((((((((((((((((((((((?~اeVWp ?bZlQE(((~&.?Z^ӼWME1qevuv4rr! 
)G<?`4i=.b_*aH`覊M&˧Rtfkt+kN5;{V&Zƫ{~[qv㻈YYM'RnTUTE@H>ʖ/)z+֫J擕.VLl6Oӭ H-mmX4PUP j+c ( ( ( ( +i75j?>!^I}Hm7ZRR{ f?*($'ǿ_RT kᗃ#4 =mWQ;fR1A"uVQ3H}uk}k,~ <{/?^'}o.x}[2E ,^MrAG_ ?ࢿOM~9K|T$/ʒޔ%gq+HNňzoo5ZַLR崀˟dE{lUk?xcRN#RM+O[ի+\ߵ­gz_u[fMˉ/qk,yI I|oxNvq hzi$W:g?ojl4XC#) iϒ buP@U9T7Z|Ҹ|c3$SRw.02riݷEuo M3*|=i//EmL޷\'9?ZZ~.|&I0XuE FbDO< M L Af>I[EU݅LEiЭGܣ pڼ(%'+8;]pIxJ9Ɲ.d߫\߷AuyCwDtʢKWR@q_Nj8CY9eLApTAX=묚=Rv޹톜$ktρ%O,/]E5]T2H2E{Qww S?.fƋ6q,:_'~bX9ctr#H236[ӿc/y@+(((((((((((((((((((((((((((((((((((((g0)蕯j?~اeVQE ( ( ( ( ( ( ( (l쏢[hsY㟊Oŕ- ɈYB<{w[rixGöf6;#љI8UVMehzno+ ,h*+OZ֡K- ^-# ],YIdLݾھ#1\D}"xx"XN+-'ꕗWhhԴ+\N3e+ƼCP~"xAדMs}'6na72G;qkDž6,V8M&۽}j(+S~'|E u N|>pռե@3?V==Mg!h `N?j[Y3N*p>l~n7X9wkv=\Fu#Q>}ԜwQkV߫HF(ZI"(&-턧!W\xźWh~%tbX{Eզb9$% $\zvĩͭ[Ah}nI{(^xeA"Kg,$oo`Еh$iIߙ=,M{1$"<W^x:׆7f&Ou}{PK?82] qxg6"ľS/xᾅ@З$& co =/rJqsKg%-fgkj{Kȭ« <8_~ ^V?Ʀضg #X_:E\#dcO4"bi" NNUɻgUJte[mAT}饣aQE}QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEx&;/U⿳OJ׵S`(EPEPEPEPEPEPEP|e|~!Sgsvgd2c7fMw|K՞9r_HYnbrED@_-~"uޑdM=.cctb`nkF-aO ]B_^\N@.dG^>y +qݻxXg<4gVJ&&gh~"8#8a"*"@A\FIsy,I>lwqKjȰڌ7SCu\jt?O7'K? řUomV|ۉ\F9 $NQmm{*%YTtB>;Ҷ}M'4_ ^)D~gsz"J̌޺hz߉~5-z]+ u8UlLzzGV:);j+qMyN#N\fmw'm.yO߀!ñI]/Qtdm,N2ppH*s6/iZ3YGj(QwY>f-q8ѫ L1sIt9r/)+ZrWWիkmG :Whqm:_I]7VdV)r 2E*pF xK^Y鶞"4/LK9- EY䕙0^Oo/j] d!uU1IZ_1V*g)(md%++^ǔ[Z|zTaxwT^ zVI!XS$m6ҹ?G\L^ i ]DKC*}QXTi$,'$ɫCZ{y%g?GN2QQRQ[shr:jxNNfOscwr,D>fVSN1L}SEQjZvwl["*yWIzp{U*%r] -X`~Z~.<2׎j(d@3#zlY[Ojuc[xQim4:K| \8||WocO>+a$) f +񍱶>b'?/hW"?_DeG\?>JZ_iw6WDO%F2 A )Kn> V)]Fb2"cs0*|XwŮpLX8_nhGW{SjISŶD1{{{+Ư jEޣ隥jZe + `Zw3JѼ2ng?+')ғEf[̩8TOukF8EHux@XJmbISbfQhoE4JJ E*N &ȝKQ*Gӏ„^md~PA|cRW(+)c/tQd.>VإVSl[j{jor>hSigNAO+|£,Бj:ܾ5t1[f@ZqiDُ{gS)O3#?:#@:d՝ߥgڞxnXot.#-Ä$I&VrS5Kv/>-_ޥ$=G7QX[`a,6ǭ~'Zu/!$\?̶Zhg~hW9ÿ-m!ZMQ]_H:|E8S&O:O|c@Mд}> :!8_eUP>6Mxl2z.Sp SVQ]I(;QEzaEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPa?S+^^+8ɄND{U6(@QEQEQEQEQEQEQEiV\,5 g:0*}hQ@-'/ 7'jB;֮L~v\Dǖ 3_xb+᫭ae,-s#1U_|7º~][apz0 HFkG_W֭lse++N 6 iVqzne(wfbНv``/ÿxI玼-KxcA#8ZOυ53-1eX\[c3.DH8'$1t觅:Y x|ppuVWtd~;|1hooLdžcO OUt:Ee[,qf?@+|9~$|nִ\>n˳\,r/M69^/ E㧷qNrFk3ODg0$vrPq_S>^,8dVrne}~ӧdRwi~k]VSk0bAT%ն ~_|sZP1pyv㝎#8e)hhdRFYH 93|\$!:Lŧ_ gM7Pzs^ncEsxFIT"Ƭ>]o>\N>}l&Y+c¾a6S5ZAk&,XF'h !?C].?C.K_"\mm5wO|ьj`/f#᛿%`"9?4hA W=[?6񕥉qwQQ6ٻwL9 +|?Cያ=Quqwb_u% W88 h_J_⦵|_ K8Y{^S+iF2 /oSnUzcb odXUo#ּ( $"4W I^_>.2|D־+-D 1fa,HG9CѼ3/4= MK]?OaP"@jQEQEQEQEQEQEQEQEQEQEWC&|OZj4Vkڴ6b?1q#=Eyl_5qEFW`v$ߠ0τ L7uUfmfI.X)dY`3&> ++\4o 7U]?C4I.FuԳ#UTI8W=_ֹi ~*xǺB໚m2#qv1q@Ey? ~izm^+Ea.f.],7+d8?Ut>9|jV? 
>(׺|K-Cw%1!]35յk=/J{۹(mE,HB*K@(7gß2|5t\K1X6YQ^ӿRi/XOx.a,t/1T{W?ߋx^GWˢ^x+nȚ2wkۨb7ms፿X6'-]2rv۸xU5+ԵwQ4]:&Cm )gݸUUz@TW?v7Ïxo>[mGD#f@.2:ᇭgC&|OZj4Vkڴ6b?1q#=Ez%x3 ŞN/ ]_%մQt$#<]MW~&eBVqo&\Ÿ+NGq^_>}wᗏ<'*2=Ƈt f#sĚtUx+gm8<կ~ov^Ɵ0EֺV {h9Xщ;QYڀ=Š~߳ßWƟ~E#iZma t/0eܬ29| d.|87!€|rA'Z(((((((((((((((((g0)蕯j?~اeVQE ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (???'u}U {?| Gᖩq_hwL|Uyd?$:Wʿ\f :H_xS~9o ՍP3"M[8Y4$gTD\*xA_e:ZO=sbWQ@W /&tt\x>;nqrˏVf5Sþ${sh/B453%Fkܜ3]vgz{7>O?+|t?.^h.Z->˿iV?1wK}i͙$ɆEݑ}c&?o̞4 ]MImZuOYrO N ?cQ1Vt8Ϙ`ݎ~7|_< a]M)@ci񆽯 mׇb,n5hUQ4 Ҧߴ[Idko?Q-G;տJS&3OwHnf:?/Q۟-d5dqgֿOѮu]A64=[js{ଞ~>?2Ԉy ɼ3X.tkT|m\[F}VR( y,ojOw,|QHn(r$rrfLdTwSBYmWU#Y`t;g࿏?7l6k}nnn/.K)t%1?ng=ծ^Y\ñEg8e+"[p] >~۞"<]cwmSXKy$BYI1_ؿ)nǚíWEtEvɱqxKdQ_VrߢpjVk;[Yts&l;) C_ xR~ߴr+۟<T2)s-@E~?:GIۗMm//[:ioDd/nחS`KX9\w1> ~"x4Z[j_  E yNI6ymݾ %f?|+xr;WԮu[[T%PIv噱=~ fğ(uI\kRp@Z Xf8!۵˓;0dfo/^)?Yek./Sw^,.ey *K_J<ojڦV$|%k ; @$ '¿_thO׾3V}=ƷEЯΫzw߱]Ӿ!tټuA7Ƣmh 3ऌew^?WG֢Ʃ?Kq&t\@̛]<ӭ~~/Oⶽ/7~9}'2kW|U!L6גO?"|;oŏP|5O;_ /CRPɪxM6S]-Goݱ~_ :]8>܍\dg5tPx/ÝkUM\:hسđ̤d.ˍi ws6[_6:u͔რ!EǞT6|3s5E?W՗B~QkY@yZ(>?Z=oo-3{{]E*2~Q@|/ S:~|C!yZ hnk$玕w9c:gĩOrΛ0]Ͷ2w( OS7Fk_KR1-5 &e?Jb_ #ӵHu'u6SJҘ˾.~XקJ2a_'D-m<? OrhmVӪHmxՇ$|9\*/%%񮧣ivX[`X4bԆ h 7 YG|4:K| %ӝ:['1_T~ğOe1xNV/]/ ."4I+~bWerF?IJ3Q&s,4I㼲NL n%r8+{ײ~Ϳǯߵ" i%׆5}^m y'β 0yP/_Ⅸ?: yߴ~|}8zn_غ?/G_xoUVMb<Ƕh1" ` KU☼kxwEL[a_.61lX ^EQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQE././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/figures/SCH_5007_V00_NUAC-multi_nic_OpenStack-VLAN-manager.jpg0000664000175000017500000051267400000000000030531 0ustar00zuulzuul00000000000000JFIFHHXICC_PROFILEHMSFTmntrRGB XYZ  -!acspMSFT-BEP rTRC, gTRC8 bTRCD vcgtP'lumi xrXYZ gXYZ bXYZ bkpt wtpt chad ,Infodescicprt( curv  *5AO^o+Jj%O{ >sY]3lk  { :  h ;  n^VV_p?nD-~*: !_""#$Y%%&'v(C))*+,f-A../0123{4h5W6H7<829+:%;"<"=$>(?.@7ABBOC_DqEFGHIKL3MXNOPRS5TiUVXYOZ[]^\_`b?cdf8ghjGkmnloqsY]3lk  { :  h ;  n^VV_p?nD-~*: !_""#$Y%%&'v(C))*+,f-A../0123{4h5W6H7<829+:%;"<"=$>(?.@7ABBOC_DqEFGHIKL3MXNOPRS5TiUVXYOZ[]^\_`b?cdf8ghjGkmnloqsY]3lk  { :  h ;  n^VV_p?nD-~*: !_""#$Y%%&'v(C))*+,f-A../0123{4h5W6H7<829+:%;"<"=$>(?.@7ABBOC_DqEFGHIKL3MXNOPRS5TiUVXYOZ[]^\_`b?cdf8ghjGkmnloq?@ABCEFGH!I+J4K=LFMONXOaPjQsR|STUVWXYZ[\]^_abcd'e3f@gMh[ihjwklmnopqrt uv.w@xSyezw{|}~Ҁ(8HWftÑϒܓ!0@Qcv̢8VtѬ (C\tĸ˹κ̻ƼyX/ÕWƆ9ȘDʘA̒<Α>ОQҾx4ձr4ؼقHܠi2W r4j!FN?_,P\Ag  N 1FPSOF9* !"#y$g%R&8''()*+d,;--./0j1@22345o6F7789:};W<0= =>?@zAWB6CCDEFGHfILJ3KLLMNOPQ{ReSOT8U"V VWXYZ[w\\]@^$__`abcldLe*ffghizjUk0l lmnojpAqqrstiu8V9m:;<=>?ABC/DCEUFhGzHIJKLMNOPQSTU VWXYZ[ \ ]]^_`abcdefgzhgiTj@k+lmmnopqrvs^tFu-vvwxyz{|i}Q~9! ܂ŃhQ9!֌lP3הuR/ ›wP*۠c;nF˫yO%ͰrAܴn4v2뺠T[I!S~ģ4SnȆɝ(ʳ?Xw Ξ5nѲVӝ <"Luminance" = "0"> <"Gamma" = "2.2"> <"Whitepoint" = "6500 K"> <"ChromaticAdaptation" = "2"> <"BlackLuminance" = "0"> } descDell G2410.iccDell G2410.icctextCopyright by ColorLogictExifMM*>F(iNHHx,C     C  ,x" }!1AQa"q2#BR$3br %&'()*456789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz w!1AQaq"2B #3Rbr $4%&'()*56789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz ?(((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((`l/o2x,-u]dbGC7gKi~ݞ6|7xOWk俑}sܘ۽B\&Ae)IE]&?Zi(W3jĈAU%Y.WlC&Dm_9~W,HDv! 
k2U0<wVZcMJ;4sNt_B4=SLfexe]I r85r!y9ukᾷ7įAo%S9f-YtSҿMfO*OCOn"-mMj.CT&r@RH^.2?OVW]20ʰ9zՒQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEWY[36چvG\p 1k#4_xWuM/o[<s<:Oo?g'qJվo4KQ[ۦdgj3@~-Lh_-9guβ>4D[& >2 f^&ů7%ԯ.VuDsܒ3AWgo~|:Ÿ <?B;r?Ff:HͣE߁?K >-߉Uׇ dIX>lFGq^߃|a h:G (M;MX ER{Mv^H՜;ɝ1_Q~Jߥ|D9~p[wA_^?skV-.d=m29%䜖n%J!h\I`$ԗQMœC%UAK"JM5tbՂ( (((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((/[4_ xN_h&kGQK{{hWG!UG W+|O5+~̚-_/ٓ7JTR"p1H+ǟ1ZMsAkltV6pD'6'/&ڠ___]?~>h$LᏌvK/&dA\n,yg^1|M#M~WaiG!Hwiڂ !lLMo/٧ౣ.fO$A/uyt$p; @~>y'ҹ*JS1HGH/$QJٗs7y#ҴG}(as7y#ҴG}(as7 I'fndԮDoҿ{}Nx jk{3H?~Erbq4/y֝)c|Exb7Kۃy(у>n_5/|Ep*9X_Ve%9gZqZ.=|>wݟ_m_aj+w+ox W_e_T¿#_Ɵ (p(((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((+;~x_nY؆Jto$<#IScԩ% ?붟u.O|'|A2Tb-Ž"KY 7(,Os_5*Y_`}O?G4 F6ZMИB[ݿ"Tx#H * {7Tb74c8!2עi5ǯ_ZJGpFē1[aKkou0'Ty>> \^(̶NLWQ "WIʽ8r^.4]_*Wi+ ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (??뎙p,ZLQv#v"#<i_6=GҾ_9XgWO=~w'<#֓E_!ԑHl n7Jɾ7/NAٶƃKkE4e4q'Vc?Z4j.D.i&A{VlMiեgt|- ;zm&rk^]FI7hdym%Ho(eF `l7`FAK5w78:wZi,񭷃u9<62\jKze5 @PsWQK[QѢ"S8z;736D4g_ |,h ݾA&f|!8՘_[/ˤY#e•ʓćP=I\%vVX[O3 ׍uJL޶ʷGkTsNkТ_,ŹwegN+^7u8o./]#U,,=Ԧ{^1%$S~6,|( ͬϗq bHdds^_Vϋ*C'Um2 Hb$ A*kYGW>t>:Uˎ?a\e|1_A\}>zW~?Ɯp7깶ӗGdKFTjVgCgGfYญUD%YHA օz'Q@Q@Q@Q@Q@Q@ǟ\@m?QǨWǟ\@m?QǨWgY࿆LHѾyk 5!ѴKe" fdE0At\uS#>jK'vhg$hE<#Ќ oZm_/~.Ms麎hցZk+״XG- 4w:5ٿB-_Z4n`ͪC%̐#!U@"8QR,V'Cغ7kͫKuV/=:m&c:2;ISIC4zk?G}O9M|'ֹz˼k**/yU^tjo?k΍^> V鲿?/0MUK+>Wi+ ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (h/Ǘ_!o<>mtZD!I+"xd =%g3 (-dycԴHU@OA_ˏ)'7."x?ş Lܼa}h/@o:!cq }ğZ?;:|8X9̐,i N1_R S}Qibp84:vqi枨U?K?FK]_幌x&Un\ɵU'W/u-oÚWUf#=ySeU׍ Zwpfϵ )9+>? ~/~%\3BmQeBq/2xw+ɾ%|q]Gƾ,Ӭo-.8,+`ddogmڴƱ07<\M9\4 |OŰxxOL.VRnuZU8Op\` xjĿ^UbtFq)pÌa?$k_|@4GT6Hib@7Oʃ$Nzt=Gҿ#7ZߖV6vVM x# H*p}m|*$1'xsb=<$Is@`h> KuSOI}Jنw4A.a"XJ8y}p4F^hvr? ik 5v\O_,FzyUj<^g+5ڬ4{Qj=VkyTR#Mgyڏ4{Ru\F_|s&F|eTZDa'꣭s5ZGVxÿ[{?Zj?a?+DŽ?k]{GnenQl$+E<\ÒO_iJ%I~WIIuaEWQQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEq~?O o|cź<3h?}jk {HE."x+ƾ4~?gχ-~9'dW2^_0 {t̒ڤ.rk>7_xWu /ny~!כ|*m|WV~ xTem=OzcHtE p d`eV)ɕ9l^0i? c?xFR["vs噔P[xxU|Qw/ڮt ;^Ic9 ~9Së |7 ,8i oCnֻ+ڼH莺t"z|-x6þдh6Qv~hB9h&=L״K?QKy=?[ǟ%5MfO#᷊0hww2*ι1 >D?j_ k>vTK L: v,Ic{W%o ¾=\v4yˑqԆAIjzb~~ӿil&ƛAitFN7Qf|'ľ K|wgV=V{kw8 1̽CG/P*~SOW: &;bȊϭJ*/uEy? >9|4 4;[ٕYY1d!~sЍÿ ia-l68A2`teP0ye?wr2'bB-JlkuHz6} >XKi$ܥ'z?,qsE$㴒v4L؅')FsэWV:n38i-IA3'''ēiڧW>eݭ,ȱ,۔H<'Ѣ|L3uvz=O*ۇ/eW=';vˑC*M6ֿGW~#99**@k|Aͦkzua(@=G#|ͩ|?~[u>O:-/LާG2 l^_;іSS<ㇸ7VH8K{O}->~:ƛ M<+qL/z~K7Ş  jZv/}A_MwESz,jS }=w;m kbwzt_ >1vkmlҝ6Iͳ$rU9;2Lc'(8K滯pO*t1RNQڍMfm.kGY΋x7.k:E~]͕ ,2+ i^Y+ WýPd7;H`;/jfoS,<'^K3keLtܪ?Ze?(n?gۗs=>+Kca/3]pR9cb_g_~ x?O]Fnu&ҴXgX䘱ö=kZs5i.Ċ2]&pzZ? egQk{!!pJ1\w5_N}rWVD:v0yOo { ~uWOv:?w|d4/:?N6يUʑcŸg-6\Q/؋[Klk9yV66#i펣߆_~~|چtxzkz I-0 N@<*ʒ9͔^5mz8Fo~3__+)W m?B<6\%t ^~'e>]Ce%1^Ǜk \#j)>Xg')B?_ T7bKHOޏM|\p ܀g /M獼a nr\6sSCf~>?k߄7"A$1NJI{vKG}Q]siqo4S&|WSg1sTϣV'~fhmEpPti4Gh|c؞'ԍT'w;)"X~.#*`8YE~X4yϞ=hǭdy֏8C5G=k#>?=2/ĶWF^Ig´RJ0Wdɨuǭ|G>{_kW1xwH+5qǙWs!T7J>_]{C`DlVz[ +h@=e͟?i i>+?g$Is[?7$28澫Ó]~iiM|/\(O \i}iĽ _; *I&#TxjC[d څ쟯 "R?1~C_Fx:&&eOH-#@TvZ5t0ǖ<%7y;8HD$PU>+b((((((((((((((((((((((((((()I6K,Hݰ$z }ez }3Aa3^:[F:9 =I1?i*߂d|jt=?}vd!-۳ 3/|do04=M vg`N%n2Mb~ҿWcQGɠ(~>=';AWKb  ɉNbvou kb=+j HyG`SF7k+ڸj&񤺞So?>/ᇃ4i%0ca ;TW^ա{Q{Wn4+ڏ+ڴ |03??471mb0^ D"p&ЛGď'Golc/!m5xLK' \sI  h>. l~n$/ vF9-9@s_(χ<)]\h>]>сe?G_Tr? 
GotMT*\Xݭg#e,KmʂiZ4$'=X989dC/HZe}l⺂TG*7uee`{+Rp?w]Ha8B((((((((((((((((9߰Oj}}nLRRK(tdU+3E< ggH| H9֬bn-"u#~']_A'nYyMsseɞ^,|r7?(<W#xKBnVf8 :甐du!5tُ q |?-W)nO'&f_tlK³^,fM|&Vン/זW{ٯ~?*R7Rq皿QN˚WJ|kҊj.QBh:MX\'xe89W .j:ewK!W{>A ?Ӧ-ӑw|H3ybBdzdd H#yE< k`psV*niʟ4ܟ3kt>'c3Yghp `#|׀g+z߂|_KNQy98'O|qFéfڵK2D"v;wpB˞x*{ ;[V}^Zv7s\srj0F.q'y59ˢڡ {>-M]`\$+$8H׶ gך f{md$$fc{!䛛<1'$f7$ g'[T4ϋ? -1ɦ.GmPp<63^+$w+y= X-4}?g > |w񿅭#W-$(~y)>steP_KxD RI Np $)O>PAe;>#ɮ֙Y!d#^foHo2:j&r5QcQ!B$^E\>-M{H?[~JM>Vuǿ| Ϯĺg0|;[9+k_n}w^4g5ox_ri\Op絭{ s9ql/: 7%d>[btY;HDW.]5U*'44<ͨ^} ԤLf+Xy9ܘ?g$֫(!~1&s zNXEwV:q1WwO_ 4_ x3F|g3ݲf&I#1z}C ZQlEJg%x/Û|? ٌ[M[§ 8cc$[EbQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQ_GYO~7z֣r;]:+)T+wE,H9З޳ L!|'I  m2tWf aMg >1IRDh`eQcPц9%f ~>j:ټ+kMvgewX۞G5+V vx/ oh.nJmD yVe(._np lGOaƹ)̑`s$qҿ@{uz?,9;d1{ѳ޶?u<>F>z6{uz?`F>z6{uz?`F>z6{uz?`F>z6{uRB?]{úck?S+*WiP.Zޑ/;V\l4}8~+Kg3?ړe[/nm U^I>fcI<-7<[&.pp,ZC*ʮS9 J0%Hk1،eD wF:16y$ɯ(?pI.r ?ƿ6 u~/ VzM̊6DzA|W?rp9bMƄm,MOwh'[m.W%i?K -Og6$wGq %U$( #_8$ev.X$>%#/nrco=9ne~wo{g/<Flc^,M8a$-y4OkKcq nAԏOc i<^ׄ 3Hy'g]ƒwA,iFXÿO~u&j+1j?|?p?l~3nA%o7N!Uvgp t͌3WNKɩ7ݾ/母^>c8օ=+Ah{__ri/Hciz߀d]=JZvyBGȧ5נ|x>|Boxƾ s6k7Isȶؓ8>_ x'វZpՙ1>y.d_TzsX6+-çJ/W;%|3sxS36#K}Jd2tGlf="i !PAzT' e~%|}C'˃N7?S7;g(CmjM$p;X^8\25'u{<Ύ!Y;>GıP>JiI۽$5S_,~߶Gڏ‹?ß%"IxOWo+$JzD]@$GǾ Ѿ"|Rm3V{y @&#sh]Obywm7Wmgm M<,h1>h ]SHOMP fϽ~W'?BJ V"q q_QE/KqS|K9j79Q`-5ϭ'_?emsR>׾,x)wlu.`>Ll>x+|6:U.5[nXnۑ-vD=PEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEP_Q >?򂏎@Zx#Lxxo>/NaV^Tr8 g:ѯ8:ޔڭu8U%$cwFKzݤi??>x¶qJ:E6P\)f C_?oc4I4EwQaZ<eOI;9/g.Ǯ_g9x#V:|[quk. F#0'8Q _??| 3O ;hGέt-9V/ޗ|A~]W-4oxG61^5^[A%Xݣ*89^[myԼ֗J qtA5_.oN߇|>qi7rH̐EUNT8=v/N׾ | 2׃%axůugmo#y*di0I%@# <5uoُςuK;OY2Kom.XT22|Ư ~ᖭ{ 2..ṖC[.mB$Xh1x$]#? x'x+Xxͥ/i`ϵ$2!';H7 |yejWT^FI=2'>-_-c/;歩rcM ,rY ]^sOşվh>;r^.nn#$,J@rKv4>Y>. Pj_(- Zz^>I,o'YECjbxmq1.HmsG+sxF=K iy##R!` q^π>'{jtȵ'm=^h$ګpA ]?ধ?؛diֺWV..[ŤuE9&PGR.5%gsz&/|?}lI fgaŅB_#5{aC4;ߍ|GixG(5k+KV sd]* sɇU~*[nhÙ'4UKXxYel?fHbcAO?4jcU5}SRyְ͒%gv/Io%S}+MJogDuJ~4^ּXy -θa< t&_VgpK}Vť10*׉3E|]뿊?gN{IǾg#y-4%]Vi[|Ez|iLO>+Ahl3+_[fs܋"%)y#xξ?cSY^ u~ӿ4;ڗxo f_:k?cbP0j 7.ck^0ԗOɴkKfI&)<"8}Yeo_Y⏃~4ygėz'u6Ag@E,Ӭ!@<o??6/|Q ߉*MMYi ]A J\iTD?QC75>p?w]Ha((((((((((((((*9e YgX4/$0UE$x//Ok ORj"{y`>̓_yV~ܺh|$i-fY{(A 5SWcms*]_:9OdO{|+7A1e[?OhSvE~56[[yG|%cm}/@SJ]R+!AB[xX~p\+C$fszS^8wK _JU_gM?5v)WO exv[d;F+ը t`N*)tZ#4q*xeYT7w)7&ߛzQZEPYkѵ3Ot{Zu apU hQ@_7 >*go _Jq4ܽ)pkkk^;%Ӣ]'̞[02Π?y@;+a$SGg_W;5]Y/t /~պ%zۑ8^x7vy"*$S)xexw'o k~Cd-{yvfWo*0Uab wfKiez-_pDPp})֟[/s%n׾{wŸ_Y3DGPc /uχEo\N4{ɖGw 4|.BW;ŇOIY^i֗X]<c{8 |.a2_"*xcqMgNE>ov:''⵼Gk?ɓUKEӵLլ-5=:6Omu9R0kMINO8IJ.lτu+ZQLjf6I~Iַ `fĖ ?k jݵXG,ZVmK?y'|\?~x;?<[j(|A/ӡ ׿ ھ H-ץ7_~{ÿ? m|q:<)p܇Hd4N#e|Z~Ԗ>$~ڮ05-326inWkH~Ve%Fo7>|j.|O_ _DTtxU칌ڸ~<l_oE ?#0YMIdnT}$&A!Tǚ?xo 9}eR=m}\*~ ~6Z6}^u`&eapV5x$G _G66u7vƲ<.%FVVA5tBQEQEQEQEQEQEQEW_R~ƶ_^Ec/o[dnvj u wƯe_ToÝ~S_Hn ˹+,X%vҟ K&W>xs~1_bK}Tb') a@HrگE[f9j? 
w/OIYit2 %(B0[PM&$^:^4 >|"?fg+}f'zC00<*~`Tȧi V%_ ~~ʾ/߇:Ֆ5Y}ݘsVY Kd'7ec_qƻkУw -ԚNUJ7*w@?ChQ|XD.C``ׇC=Oٳ֣w7SX,al˷?Sia|K|}/+ǀ.[sxD`.#IZ,leڮ_8@0[}U>j:Uƃ^Xqu\H%eT"5(((:VwjwvkMsuu2(,@UIos ėa?0`&k'L[P"SPc??|WhՌ{=V- u' t~mG7ǎѿZd}Eߕ5~ph߳'t7]k:>\ݬڐem$K_{ø?eogBo>)o;tx'Ě'<7|5=&.mHGPx85W1~߶g7?5x/il;5y6eDnp70uf+\YGң;mb&q'Cd9oAZQҺ)ՅEx9AٟTU3T5Yj6c7֗W 4*JЂAա!EPEPEPEPEPEPEPEPEPEPEPEPEP_Q >?򂏎@~VZާm j54M-:1R`UG8y/O|!tVHi8yDJYGT a @E'Z?>|9_h0\I[N%LѳC2c7VS2 ucEU89|y[f𾽢xZ~[O4KDw3·EuiCk#Fã#Uf_ǝSUoIxD4FuFlqgCPY _ |oבx2Yo'JTq0 _4D_ĿCSHo-Pz+3!hOjQΟq{W6Z;^=wom1vN V8'qiS[O8[\RzuzJP4~>1T|hO9UrF;\z.NA5:8[_:kG-G-d Ϋ2q֨WEuѫ.]u5\멮zy5:s= ;&"K_Q4]UEw(f&/_Gnk 5}~|QEQEQEQEQEQEQEQEQEQEQEQEQX4/ xN^&hZ-oXa{:s3Oٷ#.6+Hݾi{וgXL-^jߢ uwO,xG/voEeɟ~Nh̺#brsHFy s_>2?~8:[Zޅ&R{f]EmHt aLNX{uYaAˏ$F+xo^<7A-lf=]vbX&gft(gs{1ͳ/Zrwū ~;WV'{?1wfA&rw\prr~[[[YiZYii b8a$j诣r&%[~f3Y%kebF+HEwնQEz'ƅQ@Q@M [K\)17VK4#veX y#q<'|-yYҼ?Z(kCRKkxB$"5h~4xrߴ P*ݮ6> m*[Ƕ1:AmӐ*=*o aKxFl5⾰k=-椹"5hzOgsow$ !:vȫj?'n@׬K-: +?K^jԿ&i2ʨYk?f/ǟݏm{1p<%X$ZXl*d)1&7֊5[Y~I^&QfH_S%z0`C0GP+N?i~߱ާ M+◄|?{x%V[Ha|Dh gO sj%5&2jZ7e,I@Oro\kD\-qeztt䮵-_^ڞyKDpUӼCΖK;xbl2<|r5¯e|qggqyoIᶅ.*J6v>HW3u[Z|m֕vJgd i:d᧋DZxx=<}RK3_HFhoC̪K]F=:>[?>a}`~ŏ٧?xkN ]^0 Mecc?u}x7Eơ"ύ[RH~U'x-jШ&l:4NjvJmY_wM5 h/r|{o:=\iL$2,I)&+O~wf:ׄPլe :\7P~gI|$Үlutώ7hsaChV2H"zC| 6 ivf|3}OʶOk-D;cQU_*U[s-aƸℰ47tsZSnCV+?V ( ( ( ( ( (?0>:0ɿĭ~#^YY@ڄcֺOؿߎ |Cxú> xvPxa $0ED9\7E~ Y/u_⯇5:Υ{ }J |7 z:Om>|.åĽ ᷉4>tY5d` !x+7 Q@^ ?k]K /*K4kdG:a*z$*1R|gG#kao o^-=GxO.|cW <'E~r~w(oxC>Q÷Q&X(!/ύ#G/76QIۏZSRuk%H7y;˨ූrQ@~5~_/i|kƍas,[CUvKR0>-5[Nt[TMV@GCD(;NG%E~$~~5o4v2FBP[`h B*«?i79e.b  I1$8wPEPEw1z~Ͽ ֝>lq^1*P5OO|<+x$ú?~g AH0L1 BuI>|a-2C%Ak6$K#2'MgWo %SFFͼxn3p]UEUP)H."M6GT)&WsGsZHkey?5/ | mx~)U`f6\lpk${zsE 3o~?N]~_[|K3{kky/4Y --4-{WOU i,~!K?#X 7*'& *k/${gW=Yr\z {\eRBI$$^kg4?WRD7Y#u !A)GaEҼEaѷҭoI11"@y W*~̿&i7]9ee;n`IlÂW~GB2RWF 5QEQEQEQEQEQEQEQEQEQEQEQEPQ| ((p_>iwQy?Y-e|s4>&=Ԟ(_\2I$mZ!iwqYO kOoo}j Cwuth c8ڱkss|\1 ?_.CmǚKYq. ^ sf@+ӥNI^ nك\?f?׾/:P[9$!p^m>R1e/PWM Kz5SaHҵ!{w9gqjmon5-y|uzBKVw,NrǢ1~FRq_M,G b>2r1O'Gv$͵?7jzLs.S觯>O?=KsG4#5sެ[Dt4 \otW]۟JP9뮦u5\&T~~pGc_Iu_= ;&"K?ucEdEk;PMt_! ( ( ( ( ( ( ( (~8~? F.S7# *KgeA o`A qc65wK¼q&5`4T}"K+M#馆I%#RI#TQ$Oy}t/k)|7#}UG ?W?Y +iDž3!Py?"z[yO~ξ<_l._Ylw:kqv2KtR wMݯX{>C;JapXi kӽ+'&u?j Ko_]CNC!+?> Ncᯅ&nWSa@yPJOn>njwgZVw~o[<19s7s޶?Oقf!T Ow5`׺<3(i>47|GC_%]k߶O 1;v$g>e&_m| |z֫n?X]NyonIT ŜC.q\7s?wH/&=}#!p:[XJ BWW»Y\C9bxx$WΐAt~| |?S=z&. T<׵Q^Wø\=-_˷w.yxuGHФiYEQ^QEQEQEQEICKn7Aw+$:U]'^V{_~+j_9ß\ ЦقyIfHWߊo/ |ਵ&[wiQQdE.V27<N{Lt]3P&7t[Xt}/Vяb}*8{P? 
)oG|X| G㟋nCj>1+值c WП|s( xW&Ὲ77!tn- v%JB9FPO;~/:&Kr}K i(@ 3ES~U -2KƓwhdlI+Egk  UٸG["7bHXppI5?n^4k~Ѻng8Bǻ"&+Tw &8_ڣL,D GvjW,x6Yiտ5=;< gtl~y9x/|V?<^bu[M-IWA$#@WǏ&!0kz>ŏ`3ۆg)uc  XeO {-ԿԼ;zo7ݼ-cjʄ#Q.V_Nnkr,n6_NV.q{+w[MlM+&ϯ> m)H!%@^vpKs6խm_[+ bR^-9FrqnR囒I);r#ῂ~XE|dP .G 8O=>x.Ğ&𿇭m<dc4݉T ^Iq\^_uwq5Ԯ^Yry$'mLK~//#ϟ}^vw5O*Y6wt5JO'O;YWgJzr6ʨ>bo oOỾ ]]B'.#eybv}Q_T\Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@|3;Oⷍ,JXLs߁yVs1Wg;>t}sVOâIuk;vL ;C1m?<d'z FH>!;\~R3x8*DTNcQ{>>losQFnU(;5 d$E_8g?%<yk WO1gEtx%'$[]yC6_^ |=<3xKvk};KX!S݈P2ǩcc$SW%JҖF i6g}ZDYj0*^GiyTyU.f#ޏ${֗GG HQQo=Gi?#1*,p=I<^{NHtUyHUe Q݊e'du7 mgw{s}p=k h~08A}g6໫6FA) ֒jlP B#"HDm Yx 5{}&]d$^^Dn >n m͑YIL{^7%μ߭wɘos$5O5btc{^o֏7Yi&4\ܦR+??hR\ooۣ&.~}+z멮zt7]Ms=ʭ_Q4]Wzɣ?',goQC75>p?w]HaC(((((+?3,%4 ѝk;lgdhNR**T8M.񡇦9;(6տ$vui_4f>0V_x{M=%2hy;*+#⟋^zZFv1c]nn;g ?O}CT45GQ=@,$'vV&>oQ_[k&tY ,̸hx,EOUپ7y4~ӿoBρm4;t -/  <gk|Mt⍡@VMSS_txs>=mLhml-;Tu'O$I$oZJTG^>cyO PYn }oSmG)##E4F`Ӭ㷉@WDDY#u*!)W**h~VJu'&md߁4۟o(5ƶWH;FtV~zkiw0C^xA̷۴PO@$8+b E?^#X,fZJo~ u>/yi' xO>68g Οr*,Is_h ~R>Z3RiA9 d|)[kzT  jiȺ~a\rU+-?GG ]ƚl8VyVY{[]c}qO~xIN?+J]=W~G l;|Ti?*3n[2vV-_oW=28/;=>{X%A' Q_T6e'(JX&}aEWAQ@Q@Q@Q@Q@Q@Q@ZO/ k(GVf8S_|W4m.O #/פ%u9"chx$i=:/B +F7>w+{\mU췔V_o x /,4-58\?#uePO~Cŏ~խI*.OؒP_~̟~3?xZKKXCŏ-Snsu|=wß & EѮgaRHrN;`W 8/pď> Y|UVɤrsn''t#85_ 1xZoB%.{b" %X rAY ݇RgYgfRm#e?'Kh6gVCna>[|Wp?uǺw٣obq|@A=7e9TAzڿ}GN} Kլ,M2#ե ,3# t`C)A̟I_s⎵{N_7ϖ+#:i9Tahk㓯̾vx~'|o%(;H!աԮ-ӜI(o]uȯu_"}V' {,v&Mb3ʒnq\u?X:}#Hp-4Ilb1;kI[_vq:: ֩քvxִ?E>x~ SZ-sH$o|⺏'ӿ4}Sbv;iuaWK?Tk>-.?Nkfr MFb W|,_ƏFO^ue!6[LQbSa`.8`M]~%GsxqԲ3ZpS~_%]h_n:/Z񟉿< [mR ۘ{7N<1s@5_q8x3Y Lq$h~~˟H&ߴoK[#VF8rc^m<^9կˈ_-LqHx;E?HeޯxkF4y()` T5~ǟGd~׵?:KF*H%K;e<F>giZVi igg ¢"pvR,] 6 +|VvŠ(P((((((((((((q7>Jv]졨')GSn}(OnZHN>e>n _(ie*Mڅf|"Gm_-Ӽ(-.K.YEDuReܪ xړ9wOCr^'}nӮtGc:*]m.ۉm.+$d9NU<^xDil|Wi c9}ʔ>ưQ|_#J}o**?wr<:Ve47QWuq]/kՇ,IsEٙ^UUj&/m,N٤ /C}?m$ědbNɬ S^ӴdCYD* GU.nhѼr>OU^u5rtJjKV?1מu7^|*NMNRrZZozxug{M8EU(f&/_Q4]Ww(C } Vxߢ>+8(((((((((((+*?PQ|'ZҾx/<aGK%-+?3gOq2c/iq:V=wSk}#Eici8DFfTp9fEi|\ \zDNi$5XgH刐FbTIXim-ᦽף 5H/q@_4H>-Ěg?o߀ht?Q7zu v[DwB@.VՕI`WԣJ7Sٮ"iK xϭ綝O3ޏ3ިSYsAk[u^cנIC]p?w]HaC((((_z-֥iuM-PTI'Kh|*3|y㨉CW 88"b#(H2U)V44Km9{FnʖhPSaX}?ӂ 3<qV8 իNM?> χ$[|ebm~i6@p xW?dϏ!ĺyWMe4[1%gP2F!|) %8Ao+mo>þ=֛M-c[yH0HAp˥Q˧}|"821 ޘ~p9[A&x⧁cIxLl [v#;%XߥwU /CF|[j1[Η5891.>>ǁ5࠺mޯ~>'u2 @r>x$n\R3QPaz>?y0sx72POtrפmgEQ5M7Zj:uudu$0W&?S$%fi (EPEPEPE0FqAvZLķ?FNۘȩI~BU!vKO/~Կ>KxyXqFs_(><~%­/Mm1B #'L206ύ ::Ou"?w nj]gLfZ{:nE'WS _~^7Cb<a3zW_ ?g߇_좟D?Gk\#b%H[{zjUF Z9sޗwv>|61 x/Ð`OkLǙ,/4uFf=z%PEPEPEPEPEPEPEPEPEPEPEPEPEPEPQU m|FVfFJeŜ0C'TqpFCRA7>JvKÞ(?k6>CfXvXN٣ nTr7M;;k4| *i AN| ԯDzURIi %.7#Znjl+M5?xG}R @0MyKHT ZCմ?|e=>!mod]B\{,W巈ਟd4 ?_VlyMܘY_nWU@Oڟ3o/om/U_,-5x̟j>uRap#)bKGlϒI$cIJcJkV{B  hȮ~󩫞Q;5RWUZҹk⮦ҹk&OsVWUy\VLɣ;u?cEdEk =&"K?w]Ha4;GQE}Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@|G|s/4 >9O}|]^x/:K_~ZWOE#do')Xj6i鲂FVN>]_{䏡Vj'H[t;L3EH]ry4#aR!GT|sGhSfOQawM-yGkҬu.{בk=# -Yp?w]HaC((]/ ~%񖿦xoBF!(,ǀ o<_o@fjmuw1\_6p>k88g f|>ѽi{hd~G5˨?YߚOѼ~]lypTU5iSswռ5[}*m yzvHp"S2gC' Xƣxٜ b7FE~?^[_ x/g;~R̀ @v%_=|83t(ki9cy%N_܏iMS>O~-Z|A2F4V`aPA5~[ZZA p HFUGtr&E~/1[ͱ.-1ZEz+oP+>0((((((+??¯zLXcm0,7``l*`9}Esⰴq4:R_#, f_^TiE<ꞏ?/?K+8^$n{"Zdq̱:3k?ߋDe 7Sfw8V$\}@*Ak)u~LKet<I*Qe_O@Ď㘪cQKD>梿t?o/ |YЮi8Kh#q q30;B-x'_CGo}KGFG΅g_E&{uQ^QEG4[< y$($wM+Ğ('sX.n죻1$GfxO“\h_ #`c{29Bڼ_ß>5^*Əj~Л緶@.|-Ku?pCj ˒Ym<'xW[r$|ASO ڕU{3-ԣC{8\f|5-o_>1^ڭi^HsIf3|-7 fx7Aw7l7ܑIO34u>!>úمx8*-U5(zGzhZ73kxKѴe"8}s'ZQ^I+#:tN VKdAES,((()PYkT_޷ۯ k2i,:جnsrc5Zo /bZ fXyFGQH@=;QI4.ε쓴[*'mSpN#<7Q<\ վ'k~ψm^Lded^&aA_Sko ~m^'Q: c# 8A(?~4&:?[-VM6CtH␰snT 6ߎ_5KP?|E:Ö-ݸ@ UVP*NEytoc5:{{8^;+y7*6k=/_ufƕ6OO3}KgeY$$3O EPEPEPEPEPEPEPEPEPEPEPEPEPEPEPQU ?ek`OHbxLA< pZFO)Vn+_H: 
sg:6Sx/]\KI:ﴺsϖa#>??4_U}ݿ$|iyiX\Ibgݕ$bNLA,v=+˵u-cktvΟi۠bt8/z\LS,jC>1ht_hW?C^I}=_z0\"+8(((((((((((+*?PQ|'Z^.Ӟ-\Zkl(qP8 Uws<9oTAoa9o k}}㈿iw_?GO8xxe}` $y_@x#ω:|kZ!x"3jpUӿBԢUO_$pTBI&}yM8{z}o?4|G=;4yp4|G=;4֝R6w`$iAuBJn-RMӭ2\]]LE Ԛ x&x{|u:z-Yi#3xgޙ į'매bK0TQ|&y~տNjl> ~Ҿ'&CYp8;m>TR J7&s_O>V6|4\G hN,Ql ͵NNMuQ^Q_-s~|M?+) &Uicl>'+2/# chai:՞pkc0V_f*7%նϧk)nf$/,TE%<$/ %Rg#I`HԆ%⼟7rƣ잿 7'\tOVBk5{.lohe_FV_jO kֲ} {q;oEq_WdLZ5}<#u2N^9҃դ nO ӾZԈX[1V.t!_ ׃"xľ]sg.u?280 ;^IFY|Bm{2jS#鿊}?W|H}V6ɔCv-0p _?JcVTg5Zv87?/ 1VB8vQЏy=b7[$͟i/_ H>MM4ݼa:rhUڣ/^O}~ԃ ׃~>8շ mVZxӦְt.T|&XӥldLhT=)PqcWC6j.UV>W.#'4hiWb٨ ju(/#R[ZcVx+npXдQ_CJ)ǖ ,&xuT%B(L((((~핥,v^B:Upz7yiiCF˚ j~|#]n'qdUwy fOBc|iVz:[ 9XgT%wܻFFs^k ZW?ľZGlI T #//[SiEZeO\^'b/F2_ xF}<64ff7>hُ?s~$~|C Si38BXu*fTǷl}K_o_?|ow(]C:\VjwV0DJ&eLLI ( ( ( ( ( ( ( +| Ήr(@Lohw-d]݀A*e*O^σa:~~iN*~UFLh((((((-s7j򏯂?$iKq_ZFO)Vn+_H:sL^+{=>kᵴ6ip(%OJ?~ EO՝4#_yֲ+8(((((((((((+)G>:Zs+e{tMgs;HϲUe# WZiV;WWWm vfT' |wQh>3^sm/4Ncp1uG0#rxc[#V[.RQrj=HX7?ϊ1Ժ&D]K,wnwO#yؿ3)֧O#~4 m;ⶓuDjb5Ьk6c2 Y&6?dW R*xvU 2C9:) 2skO0W~1xƞ ׾D~ ]6IJI9m鹹Z!G?UzN^s>5lpg=>/J[",G: Tտd__4?RQӦH6zHQO6݄rɅ,`:gGڋm|a=a!Pjz&;9It)8=.n0svq0|Cj[]Ccp`Jimjϼ;~>/(^oYOLH?Au5m]Z67G_(%bPC> xs^M.nc\GA9>zܤ2`:ן_pXY*xj>uv>"cV1Y/r.Gޣ^qa%]'5/ õ~u (ԕ37Z S4f13cl . ;PV>Bfڻe"J}}ɯgo('^]P]n KkW;裤2U1 eCIӿ׹e/ FQ~||h\ E5+E>#iYЌWkJ99Qs'}sJ[%[?Y](r @L3 ⸢U:e?k> C)5|_pXZ*y(kkF+S#ǿǏ;Dz;<5sJ; iRx$qܕeے +kx[r]h -"񎲄jsn]>|O~<_x=W@Nmf YZDܬSϹÏimxőῶ58A򛃘"bq:ּ#+y2U8J1nU+t~ߴNs?>3N0&ܠt溓Cic?~xoWi/ Kd hb ȡԌAMeoK>7iKQ3_ ?o zU4x~0!>gzO1^-Q儏%ߛw0M#^#S|5_x7vm?Ba֑R{ǻIk~`P^׿Sxg/мNZ^7GO+Y﵍3^"ґc'yg^{²WY is$oq'$y,9-Lbj)GC4i,N]rz.JG]O> 6ɰy8GF/w-\k>Qb|)/G8|43xvk6o^K e ƽjzuNJQ}VirTRR2M4(Š((((((((((((Wxᾷ6Αhz{&mAf?I Ii+ֻ=Z썡^D3A a]MՕ`dI8-tJRQWoCJTVjMն$~gٓ<_maMM.5]GSn;r19Xȯfx'=2][븏헣naA*|fk>x>;Ɵk"/xwqup'j1_Bi-çv6Q G G;59a\W?/)&/?Qz/y?z |=k?jWOh(~oU׺`]Yn|S2t> ~y_OlKmo+Pcc{n׶hd1jzoմ [MaIH?_hm%X5/$0UEI@VNh޹-[O{Tqy}:cݘs_|jn|a kIΗ%~rJ0PH$}GZe ٤4zS33cN2?Ib_i@0. r~pk_uҏ ?QG=Zk͕ Y>phGQ}jpkͼo ~!񆿥sE~qgyf8W$j& -Ϗ(?pI.S;PMt_!Ow?xosE;k9$ԥIH-QKą6Tum/';ĺt?UqI$eJ*H$q# ԡQϝ*FutQE{GQEQEQEQEQEQEQEQEQEQEQEQEW?aH[^k/Y5#NdȔd9?1Ydӊ(e>:׎ўּEŦ_K%67ts)*6Wo/O*''{Я`'0ݣ.W< u3|nZ. Ѷ8a0)yF j*|ou]O:3\i|q?ZqEC2pꏰ|ʯ4V[[=H4,Te$eIĊK1|S6 ^kyc p.}RSǾ O,5+U$.@=m_?_'S]y1ɗ lV.ѵ}c\MPAZ s#sq>1jzV&Mk;3Y̪Y OǏ~7L_t;<;jj[XgbbF9c8r14T]\L#NVc~x7. z/_PixΠ &}۳+KM2N%0z8,~g.②GKoeDt"$p_WxGX>)x3k%i-Xo,df8/6)U+L lmG~{hm# Ac1<"˖uQ9&Wf?#ʶGfu}h?#OBum;V,exg2cG"QjEGFkΕzrV*QM]4M=>K >>/k^/fmwnu $mc$W1^mU{\7k<,izH8%>\W~Oٵ5%T/a.!u{Wo^<{_&ϟXz_f/M(~["DŽ~_XZ׮Քivۗ 0F\ă?yYa+@CO^2]f>.e!h!6@&Tޒb1_~,~4_|Yi"iq@ ͤx2_>5.?a5"P?$d~.l95٫:u:Yxc _3i+5Q 9_uts}[iU/IV/|FkslaAnFkߩWF 2Al-=5NTb#<ϳzE̗W\<p$&>'~ٺ5D>h]\Cdfbp3 fq8T>(l .-g/HSG|(ۼckq"`ķ]qFXW緊~>n[qτhr EI7\p!1R_^*?2jQKtitŻO7P 𷄼3AODty[{X8{bXk/WoekiKS_g0ֺ˘xPavF+ `o¶s_x◆vOf KӔN8<.#7 ~lNdR=sC"M `FAu^rw ( Q? x~nxD2{0 AQS8Fqqf|EZcVetӳMlZ~RWM|OiХ Fzl3l??x3SLmA6aZg|@q#g5^IcwO^:_|9o2![MN E}g1L@;NPS_%_fe=˧c+atҴkKN[cn)ziv6_< xwJVE$$yC*fN~TlWJ#kſz[[^ڟ}8 ^<تtӢԓi{'RTӌ<# Þ.X$V]&DsNҦ!W{ʹ3M]lt4KFҀm5q|Yo* œH[!k=[`Ԑ}VЍktTG<~Get)JriIvk]DG2$X ʜrU%N$pk1oX> Mr P)ϗujs v9 )澬hk;So#o 2勥^UKWks,?.xKC>?i{Aw+i-]n3J9R[[N5/]fm[ͺ[C9ʕ^9O%㴚#PAMS^ړm2 O> }SJYe!|%$~>Sg>}gpZpu/ w[IOBҖGu> B xrI5?d>𶧴Լ6c*VE_-ϻ>_>~Ԛeo/,7^",X۶v] 9 2ș}_F~0~R_~< "n V#?xJ߱I?/#: ^Na Dq@9kK{ֳxűjnü31 ( _(ie*Mڅy'hmG[gz2RiV q. cP%IE]Ct {ό<{u hzxSkZ6d-.y¨$4X c’<3XÛkq- O5]C|mP xʗ$$vSSCm|Tk,_@'|q@އ0 DC̨3%@]⧆~oWk^dڞJ*7WK|ooj~5})־'y! 
XGܠO CߋbJq7IkgH 7q=p$c׵yjf[i|F})]Lj!G xԶ|ydpb?+yP'0p3_KjVzo[_΁ืIz2V~gSwrү>I$kaY =ɯ~6~ߟ  E'Iۦ[7LÙqlYMyï|CxyS&mZSlUF&e9?J"?v>{|}:z-Y|dhZ~KO'ծ- b[8qR%oWl$~1־xbcAݪ$DLyV(yX2alcf؋4''U񧕲[R\# !Tq0ym 25lMJ< GKnG7iı-׾EwEPEPEPEPEPEPEPEPEPEPEPEPEPEPEP_:MfRM7?5 2PV9_TP^CyUM :^g d;= |@>~H:i-srNs͵I!B+Ҽ;7DFELjt sk祥:z:.H9s@G,47I~x>TH>@Pdb^Ofωux)%⸺I!yM $8&߳_[>5 _t/g3g;Qsgʘ&>v3.g{ZZq],AޑYLA"M靮 %'s\7dܢjjklݏJ5RZ1ZrqS\|іQtf9~7%%C:Rf6&(Gp@۬H˹Ha5zfaN"S*ÐBȻ<[ulj;kkc;K+FwgP1s^q(\OTׯ4Y\|'Fqw Ҏ]XyBӫ9[U{TQ&hF1~ߢy#|fk8-CoŖ9# W~{JgocrM̪F?c$^ɱm]B?ɻ|/Y ZTVm*BPqnMrWk z~y B I\G 1<DAbxן+} / z~>!,3x,|-L9F06 ?gOxBK ȥP֥حLy2#@|oP6 _lQz18 į@|#S-R/K_$8(+`㌍N7j5ڕIJ)|g?d?*'fcOڽ6s&`yRZNZ0|z Q4_7=ľ6YdbgRI'I5yWn/XJund\m"b7{Z׵ݽ.XQE|QEQEQEQEQExſCXn,o ZɡTldya>[ FZi#sYxeUN e i&DԴ;UP2I'o77:7|q$%ZyͷnH3\?ka۶ElDmK\H>Sn Fk쏄?߆&Vԡ_x2uGl!U#;`FA\ϭWAZ?D~S.,(4:ר=||ß;Ҿ(wjA ;OZZ T9|r ~2-.`:mկqI|r3^EtmݟW5Gw{՟&+Т(w >_x/ff}GUX"R,"4_Mhq?? ']?Ğ4~!-ݥ͵!*ڢ((.~,%ơ5#^,pJ:t`44|,Sa_DQYգ 嚺<&wRU 53nh'UX߆PKϧOEoj=8RݟO?Ũ`K/Ϣj.V=w%E{Aue-1\[ʅ%T `Jc>1w&~nߚ 0yWaǿT˚ފsϷhO ~~nivwp݈˕cq?|7VniƠ};.-"?296:ge=Ϭ=:0nxTf]/(B(L|*ַz? ĀtTyq 7e_ώ+:+:u?C8k~%F[IwM4Z gkMŹL*O9qYs° ['k5%CQ -T{3/ugX >K}ׯ;2ʵ}ݝX\Fc1$r*x 5 _M⟂|>J?S :i16yw W|nekzN -=Tc}dn1V?G(Ǐ~֟gA"?h,c̆>]cݳcm~|4W^luU0)sh&z8$s^ULsp2K~oǾg+kEVIҒ{{o~ZEW~ZQEQEQEQEQEQEQEQEQE|[ο#'M :M>ksFI^;{kJ'lp 7g}*vg[f̣tg9GH//ms E*A*2:pku?Ӫj0==[Y>owcșTea.1$a0] ߄~.NӼ N`c͍{8?N.UĬg]8O{rܖVOysj5ߗM))5N.Q =kNdז0BUB5~:,W:G{]3yKk{$Pɿ5`$w>! >\7ۗN#fx LX#5Hq^iwWoNV}7H~%ʎ[৊Q)ҍ֜c pW~~y>ߥy@~aO!.U "s4գ|^5ٛ ]bKN(%Q nm丫6rmTLȥR0YSN8ߢr#DoOGVǐ=?Z<כϴя~=GeO]^&=> Sp9pF/0k_~:ǃ f?CX] 8$ ҫÿ^x[nr`񮰋>~0(H+gzz1r.?E/S}L´y5ZJW>,k >+S_w#~i? "[os|>^NtC6 1NYNݣc W77~)"~xoUo>]RO]*v62I9'j:ꚵ橩\g3ǖgbKrkC ?(Ksa]_N|#KvUݬG'iREʵ%V|PhA7hme?EWQ@ tI"hUt`C+ PE~M-o3?c QԼX\d^ܛ^ఏUE \+o"{_?|W O) i=[^sEE|Kv4h/c[dBr3r:YOos?%ϊRymi3'I#'pL?m_O舚J&U·Z5k \=;G sx97?ֿ=>N|l>*KXNd4Yܲ`K,Zs7/[BNYN$9RkAZ>vWwR' ˏ? y5A |Et-!{&n_ٟ<;^ hf0_j1t*z7LbI =kK&ޯקsXTi=]>g)I&x1t1<|ʎG>~۟PN+o_f?oY,t3_0~_J?O7+_ȧS٣YH9ա$JA|!|QEpC j8Bھ/0[4.x،uZl~_Mo]7_I&[JK=e|9_BHϥyo#0|J=iW.1 9ZUyܨ́韙vIU4Ok:ނOiߋJL;]N|dʹ2>|p'.i룿+^\itR[}7fGRWM2RtTۿ4*R8JNRTKe{oyPj JK/TtW_l|umYo5jOR0G8:!7&S96jI*_sJ:w_44ipV_8ͿNR5TqX)YB pFu"S}KZmQ4*7;[N@ ٵS8ghžu,5>+͟p`cW10JъB WV]ZGKf ZJO4Yrqҿ@:\f I>=M~ͤ&neãw^*d" -<6~ڿGj^-  Ι4i4GLH<PtJ_J!>eULn"X.yJnZs|O{1+Q(說P=tQ^bJQAAEt&FTE1u$;W4b[׵];DѬ3^_)cB++| 5 |9?af ".e'OC 4gЏE,\X[[IiUX3r SҢu#y1>௞ׂf~,xɤ:kqH<2cel6CUq_xk@l߉V?kžcQ@/& lY! uV~~Ͽo17[HeF#3c5^]|OH/O./Z.Gl[!dU_^`O_x|vḔc5R ot-BYw_WWrՃ泣 #ٓ kⴖm_>#u+:q;|v(c~Z]v76W,H9QU A`^j? >&"X{pd$q_?sYš~1{q09f>v?4Bp ɌWG l9єOƊ߳{w˂?\ 51]'W<_ !|"Owߍx~#h@MAfrc}? fr>hUqqW~.:x,N5iI>eN:)Ew^9MOy_? i }VHop~q^Nkgx࿅"MAƲyq[N n8`JA_t% ƍ[qj \!_8qۨg+cC+^\n&ի%ֶjiZg3y09~*sJ{(|Qc?#z2z8t{~!?e|gڋd#N$*NvF  GUxZ4uNOoy(台uq5:O x wF5E/$yWxY%wr/'5%9 ˹S|]~|}x9Em0~_6vϗC|Y񕎈ͧi?RԈ{u;̣~<FiڷĺZFs&qSwx0-Ѐw唼1W?1xZ2^jd[?a?i_(6x1y]߇|*bK_αy2r2aC7AwRgzI1 O.Oknu,' !i?|S7^9_k>7,qH`{ms?1^\s,r_  ^ڸjo^%٧ oֿ?k oyFP&Hl$~ O0QheIupe< z֩ԍHE=S[3kŞshZwtYZDAz̤؊hY3b~>y{7a.R].b6,wU|K_J//'?>6ZV m|B$*#tYNFp?sa;ƿ E-^$Ml 3"=  ~m:GO_şZf"ewb=w<~3EoZ㡙$88.28׽E;{NhPl>-8EZTPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPE 3]]O (^YpOMIdM\ϋgOx"Ğ2hvw\U]ePX_nn|3, C-n灵9)zWxCZiZxNmer^ZGh6pJ6s_+e*Ym};}Z8(YY~ꔕTK׼>V:_"mZQrX׳Xu%EjZ=o/<5eF>xGwvGK3|v8N5yg~FZJs /Gcxú<%ggGt6[X[1F=@=IO&h1QI%d=֭RIT')7vۻmSwğZ<{#žc4JK 8=aR ~7k.|>s?mǨL#g8$}9?nBm;=شcveRK/#Q0;}+_?~'#o]k~ی(ڼ9Z)"K *,taw, {+GKqP2*'vqmTMhՏrZtmQ&ӥIt vPl szl7y^|𵏘uq4mb3~B}9oj?3τZ|3ev~[.<<|u&7'5[[M==1MgDZ0+[bFvɸaS#R3^./p)ߖ-.m_Ӳx&RQnk.;8%wQlX)U>|O ,oC0T"t?~OajV}>;݌s[ŦA?> ndeUMG KPHB(;W Z%oOAi(kus"r2kl ͌*tpX{RWw? 
/̱1jʵiYsJջ+~T|+eO\\7j>'KI6Amm8(ƏxSŧ<9s\Gk& CH^[+ ʮ)ӷvxULz۝ozZWWRJj?٣f A?2!NYzĥc&&ѯ5{uKc0`ku?ߊʹ͎0XAs !j[xƺƁK[ؚ;XKPA~ 0^d$V;<k 7-IZm4G=p'̑R"Qj#\CUu> 8JVtwT)/~Kwt ;EwJfɶ[ķruV#Sm@gqg)Ӆ8Yf/J݅QVwQ@a|4|L>eZߡ<=L;XW&W`Os >_躖 ĈAU&IgNncC&Dm_/ S\i1M\y}s9+8n/xH.k'S2xR4[ .1D8F\5q_lOPƱZmd,As#2s+ڏ+ڼML鍒C+ڏ+ڴx*XMk:X= C_H_>"x1Ɠ-: {NC=/##,WhzTz9NQ+b((>/~ʿ~' WO|26&<+gl'l[Cվ'o'3MfW|noDqa_5VSn,5;]B Omsz+5)s|T~qsb^;-v}{Ǐ|)Gj`nD8$Q' ҽiwڄ&? 뱿T0g9Ar6 <PP?Ú'jȁo3ᳲ8y;ۥguJ/yx~=2Zq%DUwN^b}laEq >!E5k:݉_@glpvJJa4qVԣ-SN&(nQE|_#]{ q)2GM}?RI]6NInVAiRxሐEhgd-n8E2|ZiZ^#ZӬu}*3՝ 43!ꬌaE|?iT,*W x`QPau~8yөE-#˹io__ bkj[a,UrD?`;^__v:'c`Ά"$OOTB(9B(((+~*|lkc_ھ>%4{{xGh3s ʽzt`RJ)uz#+ʱ*\)T7e&ߒZ_)t>K!t &e- \K=_"W/E~>4/ 6˻IvyGW%c}/e9#'؝U9}dzԖ^ x:q|q%UNwki]Xu,jۗ]Y/ dƒWeNT7x &|)ooiw$3U爵hF'݀u6 }5QAkE0F#5 `ڤa{zի.56Ym8p EFy-{'W*)kkb KHR\n$xP|"df͡{>{2O]`z?f=:o;>|Aj8 ^شo7i]4W.+ǖWr.(Z]IKI/&?CGyVv yRV'=$n[{iHj1(+p88lܿ-rOjJյˤB>A~ Q53_e ᱬ{ mV}JbFFol.F7#5}&SZu-żBg8ʿ>c_i48G ?e< ˩73, |[N9i+\_dMX.ۯu-=fnFlleq$txj?-]?LO4P\|IƎBO`wO=?DP֍kmFd4ue kЂfaW~+BW}oq\燤w䃗/3[M;i}'+r9kbn]`4< G6cgsoOm_6]6Lv\)~C 6q[ HB( 8yykcϸ_/>:^˅s֏ލ'ϙ>_ ~j:~ Gy-Z%y :ܾ }AEXL -5Nb#>"l>b%ZKKI%Q]GQEW[E|sj)ӿgeXxM2g ((% !xswnRo&{[ʅI&8U dg"g⟆%ة,q~Κ jo㯇5ַoKhb2$=[ʆ9 T5^|?w>Sm.T(=e G_yQ__Ggjo_WMӮ hhX(!wc̔27Pgog⏋^8%`dMẉL+r3H s_?_5?[~gYKt["pe- f^~2>/|l>,~_BڅYeУԅp=-c8Dp DW/_ |6? |x;B᱃pc̚CLqFfyKcHos[//ڿ=Eܿj펣,I+yz͝#*2?8g^τ4 #>˴ӴDe=IO&/ڏ/ڸ77QKb=*w#?{kμGcvy$sB>XU{ukڵ-<=:(gږ_ݿY%|=`=)6ҟ O/z|um?^2rY2J y?Ed ]"մGMӬQF=وw5Sč>ﯵ~G$s@"KdtlIᴯ|#6ഽH.gW<'cd;hF/S#C8޸7(:s.Ep_~'⟃^/5!:Ffb>66`3ds]x4?hV:*ѸF))J*J0ӕ*RѦL'?_G^c/N\pOkkWhYϪ/3iomr'_V O l#+ƾ*|ysNs=(Z늛`֬1xĊ?%͇v{?*?&,W wѝ)zuGhږ}iie)TVRAWhoK[ּI2܈iJ7 `s"cja_t^T_d 金pcI=Nn2%E.gxXΛ =};5Q^@QEW Ꮑ>*OZw%b1){+:RWOt:8N  FR.Qm4Lm?e3]խ%L@QS]|rA<-קږ>Ǣ+Â)/$ƊYݎԓP \u`>oo^ ွ?{xٿt IZn%u>dU#Qu iޑs}ndFھA75?)'? e50~$:uƱIfHB Mv3 |?<ueDY#Z!,N> T껽Ukoek7yrvƥ-̛pO_zQiKN>ߥOPh֟Q㑗|Oz^d*j}QVѦe>ߥOZWUe/*c~ekX\!IaG*=o^5+FJQvh3G솱kτEi0"Jr$ટ osMؗI hcyb^Sş 3KE^#^#mi nW<}޽G!Jѫ/ /5 .iO^/~|*wo|+?Kie>\0 [+ҫ7]c6<^Z0lDg; у__?b/_--u;Mյ+=ŬrHMp $ )IncK"iI^N2^N.?&zQVwQ@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@ooQo-t>&U(QFY٘$p+k>7Ӡ\\E5Jm_AF[⯋iO~I;fPhuKfXogiHh0q٘7~ >|7|3g @nR85げ?[kΆ \)oo peX|׉}g^]0OxVoM.nn.rDj:yqg1:#4ؠmh+?7|m-փZ.;' q yK7)`ku:u~[cV cWVVߋQqM?# e5+_iiks2 *7(sdsR/??]<+׳J\>[-R |?k6z׋]-s[F{us@tfϥLrOz(P{yxԘrkkJ5Ug$kg&mcʾMRmզ9K8'sEՌ/ut^W-sbGyV~(|!ZQӤX:%4y@ q\οMYBo>jv|}܌;}z+mJ]@v{XO"6 ^q5?Hݯ<\YK3qpOm%-\x?O x$G՞r(3< Moȯ`_yzfWfJ9'0nߛ_^G.h4mg 91ڃ;V'~+LTXxh>}{)--}۵ݏ??pEU e)OTTiYYɫK~)xX}js\]J]z(?l7ğO=}^Zi\ii iY{_?| ŜڤvjH!C6 ``8sK&R&2gUf\UCr?QC75>Ozk O Xohtm.̝ۓ'z-}4;n{[յ9 \ Akn&|Rb Xdqkֹqr-kAڤ[?s<<@#ҵeJRDW^,e ܿG+῅o >$%^"Դfc,8%K6FH6N+vigwZ׋kex~*yV }6Ё;MjgUwKx?~P*?z/]ŜPQ*=taB(VsEo迕Ň:0G+s9TN9A =taw0O@+&xw^>+xxN|2׷V&)Wtӽ±$n220L208-j䮫jp6yM%a?-?,cݏy5|"e$b&ɾ ꒃn&]qmx1V?Jn}9/]^BHl^'Wex_YKk$M\3[k-&=%۷q5qjQP_hW~O?75VZ7n#*8W_, w:>eH\3?ZCG>$yuMn|2ӢxDHeaoQ p=sI{öXkUbKk)hfS]I} h|Ew_:2v$oݐw1e8 ~ڟ (D4|?Qm8z#98ٔQE!_dO|BVᅋ*|Kil[B0i$5}mEc_NyfxeԔOtG妓C_|FҮG'8km4v(WCw V s/Os<9i$Ѵ-IAk+(,= P3[4xV\QOەKƞy<=JeZyEZ~h'_o-ssUH[xyX~>8)*t /_q^o8csLDU}d]+I%(>|((*/}gPt({-@YwbTIb@U'<׼+ZYӧbHA4mʳƀ%5|9^ .Եݥͼ RRD%XdzY j>3|axqADq0Pe @rȯo&woZ3^0;<*BDMnjL7cbt 3dB1lhَψn!~Z_>/\\V֣6%2`nFAȍٟ߳{JПi-CmƋ=2ovqͿ#YեzDVR#qsb|>W/#o< zg|'|OZK|3Ŕ*9\/x#3W i'n@kN`L[ˤLYg(U|-񵧇#m1Gַq n ݷi_-t8~VwO'#nC,-粼[V|I.df7/ ^| kmXmW6䴑uyaּ3SMkPwښ8 2x/tJO x~k NR- u$3^L7RKp~<1lL!^.t;(Z;{?^KSv [jBi+5%P|/h%D {5I)p^zW>(x]WYII]/m8"H[ x݂Wsוx7mî+_x_vxDж02a ;ag+h|׳LbOS S?A7{IIt#{_M7oKx1H#ەΨ}j^0G@ulj|"m~+x#!Kƕdް8zd2*?$-oSq/mRe.e/(A~ϼb?*iDž j)fg D1Ğ(~6_s| оxb8=|#lJp:F2^Qox!:uRgʍ@ƃAɯ1khW-(y]Z0Y_qZӳkƒ ޣZ~ڦMcP5+uw3K,yfbI;PMt_!?,P$yL?w]HaxTvl9ONzϣ袊B((((((((((((((?/{/4kif >\᣸}դ 
po&dGuMV#(EyyԆ2k>q~W+¥jQhyo{ὦ+UQj)C?k=̭CѼ7m@4m*vig`S=ǿh?O5E SA:nzǛo.*C#`VWQ^I+#:tN*VKdG X~Z&8|/RZivH-ޖዾ8S8 G,qN'ݍU$ \? Co Pʁu>5L=9ck/u%=%(7'ozX~yU_äeB팮GkZ7m`=& WG9Moy|A4Ϟ(g?Y-_j EͥrxEtqF 00E~&?IO|:o~#~$8CJՖsL)63)8j5t/#s?0Qr (|#ْ?'im)?bqjVG%'R.֦\}vk˦$}Q6?Yr&.jg?ܑ_y9P`k:s<Ӵh7BizŭBĊ9$NsCJ]⏎TZ]G:^N%ߒ[+oAj?e 9~2~-xCT.O }Fn$#%1,͍ܪx5$*SG|l?$⿍>V7U&g+I峁Z@[IM۷3^O ~^0֝|B?Sou$zvk=T:DEdi8MxWUl-O [j|kyy]y9lb`YI|#k^BvZbef\ppT` uk]?oڋSI o᷃׶PLjDQ+!¿O_?kh>%agUo0ibkA0*&4R~w}' &׼cQž*/$p%\IrȮ.^֭7"~Zߌ~&5 `u4ETϔŲ#ܞV{.ir~ vuitJ.<WB;Jprm08>|:ֵ7㏇m=cOxKHn/qGnJ\a[q=2+*ׅ|._tؼG{/sIOj3xvz ^E{9v>sW/nQ5L{]Dhd`$2ީex:75?|>~!Bsg_ftA+C,=+kK<)~ßhMx>.࿈6:Lkqk<-]9`&16(r+~5goǛrX{M65F3$&M;|)2g.w[XWJ/mR:p;;+ծŽpړZAo?| g?؏Vos[^6i"{Mx,omf2\8;N8/oCG/i5|!KJE߅M6ܻ}9UVi@M>|rMǿxX^";8t[8]]]XC9%f#h^.HjAW# /mX,Owm-,"  C_yJGI{ǯ[X>*x_m%U+ 2[5i,dl>DdȾx^b-J|AԿeۻߊ4k?|L<3Us.ND/)y y&tR3}CL|WH5#OH>J=MPni9 t/zwoa]C“vCCyq <͂1B1P<"ޢZΛÞռ/.|AGY ~ŵGW.&,amdQpy$6ҟ O/z > C0ok 蚆W2=%7*`F?<~SKޱbiVMӒkm˫JpEEPEPEP_mSzׇc.n d~`dM'J"|Ҿ&~׈m|7 I+GvِvhH8 D~п 5h ucu=NEB22A⽃UtwzFXUf;;XfSّ}k?5 xOEvmp[1+ $)?~~^nM mz+; sjA#'.k>>ooCxN'[Y>h\@}Q@7>x+_= W0=A!Gf|_7~xVmX;%.q88*uz>s2JD֏RioV>Mgw]-#_UUXX@W^͵ݝv xf@"Â+þ,ο~-Muiؾ&+sMPwI^{^x? iR^}Q.b1v#&)x8Q|>(1 ܑDy2g{%~:G@OR]%x$d9U[i*uңm|=߆U_\j5-GMYӥkW 885KE~AxBb&so|]n)GPԴmA3J. ];.pg#xd]j}|5>~? 5H.ny}Q:[foY)*XyK6Y_(+-~ѿOinWzv!|+Vujœy;5YnX]UN v.Zd8yet$R $W?lx=tO.SH>ݺr㙏|}v#/\>xt*8N?/ٓ„VxŨ:Σ&&}y~O͎+|NW,?q3) ;޽E_|,qT_:>yٚ, __tSsZ6ɥ6r(E}`2l*<Ɵ~'xWLwFdO''x\! 2py5C pjvϮ8#a*u'RU]9IJK]-nåRR-ZsfؼOZC.O2)9~6|x+ g|1[\r}OJ_mxSm|8ʚKm9lAz<k?x^/78ïr 6/bř2sr.YR7{y[^uvapPpIQ{'/BT1?qmIN4ak%vǛTǎ>(*r|}k /'Ke[D^ /ubH,?Snd"d|=:?uĒ+OK;FzWG>_|_wCsھG~[|Ws|2fek >% &u57IBP ۸W=ơmic56!yf"(OS^^7тWxr_RSIծ4 :qn3Z]N:I=:/{oZ6eiɟ> QGʂ}Oa /!QW[5X~iؓ0[{?&ǿ+Ķ?j?k).$HT1]Ը#W!UiTW}/xf8ż={^Q>$$kۧwjsEdkR~xTXQP]Alt <3 xwz5G4X-XPd95|}fK'zs7s†KQėd8.p+khQ*jڹI0XUjtՕ}v (=`((((((((((((((! ~(~? n Դ6+s,)2ƀC^^/Y8_Ţi!+XͼԦD69b q#Zi&rҾ[ûT(/jKG?MןSS`[ty~W{iHYTg(¿M y~4_ um0ge3ȉf.vÞ?VKo |%O\_xSͱ2yaIsY^GqX\-6]cɟd9O7K:%Vzuoo>~?l |*5<=4ph'r`JO׺~RQEQEQEQEWAm{U;nHCoLўH',ɦI҂}4[|I][>|&?!tM:@#v󟕈tS_g5lqOou T|IxWn" >Ń s9} 0EoopA8*j|N,~E|C:|;߼<|ju+{k?HVֵu tk[sX_[.fɺfm$9 (3Cս7/~O-3DR&zbL֗ *\ΤJW^F_^CĞn^$,:`8ewl`q_>3a[uυ~_.;;{x'2jkVd}2Si/= ӒOcWZncI)liw{3*{YW恀 0 nn7k<;aw'ƺEӮ+6rsY p\`{xB[xL qXZ-92ԗ\m~ >~|^Q^ Qfϊ F :SUK{ʡ,ofh4:O5QI|Wћ>L6#DwK,N0<q__o#,&K:,iF99NKRTpL}s.| nUrڅFP]q36JS5qI?_)+#ͫ 'aJ}ygSQebsGk>tW)s=~fܕs޹ƹۯkƹۯkˬt~W;uk\ݯ._iO}s޾:6ҟ O/z>|Sv?((((((/R[>:톽cɅ][1Ԍ9W濍dߌm?gOkZŌYi4\ E#;,ywܓ!_PwA/Ɲ7CDIDќgPq_WZmiv dTxNo_h˺]Y@rHe+h^)Ή=&[n%Hpz؎A뛞Qq%x욫:6V8?Wß!5̈́߻@T&&:#~Z|E ]{Pm%xF}6G˜퉯_ă f˫db[dwtCC*o_~LxpGCM+S|?=~7kzL1]Y$C wVzi)UX)S (QEQEQEQEQEQEQEgjƕxvW+#KMws,QF=K1Rm-ɝH.RvKvh ߉~ gS]a`~;.H3Hƿlﵣ5ֻ\Iëfoow>}9?~5"vכ{(YyXPq2@8yɇ\ϿD~Y:|=GoJP[ocg/-_j&,p/3ge>͜ c xzxC*i>ep ȝ+׼;xvM=&GW{-R;IIRAqZPAp8*ja)'˪O9¼+,3Z!oR\Ig#˰5i6{'fgG.N֭B2KQN/ ZUc)[Dw{t[ÿ8<<>F29EF6rzY9%hu~E?ip>\[؃ʨj"4eiI{D7 g߱۔{S\K>#isz[cD,/.ZW~ks?co[ſa}A[Ӽy.d{BByaeW;d`=SD/{[*p'T"NM[Xzi&RMt?+mw}/ׄ id=͟YOVx[߼~>&έROlgY`tuᔃW~tlQEQE|(iE ;מ{7Oڇ.w5"Pϭ@<0rO~O}:,q^m.q-cʀ{LN+#߫~`>'¿y%ٯC?߲kM>xR儒iIxeu 4"52@+ xWÞ 𝾅mOд~嵤Aǫ1Ē{ߢ0:T~ଫ";_/YyQ]GQ@Q@|-,X?I!l⮌UMl_tSAǏӿe@ CjZٷmd۩\`8o> &QүclJA1_߰7)|\nS_XpYjm2GpeɆ~Z?HMs~?hQ<fGܐ^X߭HS"7c[/;>!? 
΅ȑkw6N˄g«`V/7s#xCǿ|' U]'xnS{g, ݠ^%00eRr & #-h7:6|5ϬF?.qK AX‰%,-ݹ |1+fOxc-Y&c4pS~"wyO8_<'j >h%4C1dy&/g+((m'Va=SIi[7O$Z?ݪz3l?c~Vơi~'R[Im-'_[+kҺk?ᯎI{Rh*,1ӞGbc$/k871}2W̚NW~>"t(_'sU:D:2K!9I$R6g:Z~j|1X_xnWԔOYS c#_CFƚ_/:'{mWO[ȼX$*'^[/?>,6=gO\B=cv84.r+G"4Ѓ+ "k(ɏ^蔬?2LxOL#4{u֗%θGFGYhGYsh2+.?C3JJ訫w_x;uMtW_x;uMys?|+\vחX\~SK޾Ź_iO}|u_g;CQ_pxAEPEPEPEPEPEPMeW]aA(ύmw x-m_Dz>:k}( x0?g=BPwcxú'yH6Y{@yowuno#5ޓs?߈YOm_v<@i ׶"Yy` ##ExWǃ-CwMp?wue0uun<Gp+7o|Tt8/dD+m뜣3ӕ=Gpդ\n*8)[ɟ_.|]{\|Iw\Amۋ^VP98J?_O|b/7.>ὝHp?O}|wbwY.u x#XӢoYe>lq^cՠO{| Õ ?Iv.9#'Ibu =>+#h'VK{-n%y GpkퟄgïpCi_3&I7ݔu69*+l>>$,?xwLa* [Ӟ%f[Sި?A ( ( ( ( B@I==ZY7Ab[3в8 WkW:?<%ܘh ;zƃÈӧ.EK>BTĽhk?Nf>)L%V g_j[xNs_nZGq FZduB2;_~][T|g[1vf w쿕~pnsRU%- n䵓NS[Q 𦌃Qts][ˏ]`m_P*f^+҅8|Xyedc$jkIՋ~=>J/èSN>ZoI+|-vV.׻>(6>w-ڕSJ0vj; MMr)ڜ#f2~v9 vb??}Zt4MVHۻ\HHck#ͧT;E$~3?>ÿeX:kJ%7{hh*Ņ:TdGE\cj t2~/A4j,˥=s.35`#䍀 h\|j_U#ὅv_]A*.fKj z߱7O9?GuviN"-N5',!H!5: N^"0m;$9G}*ʰNZxjSsJ!U*I=#ugK˵g kC-<N][* JgI T"1% ]I} }FY3sPٯU#x^!JaTzxh5#J(P@(((G_)~,>m`:Ne%]N$qͅVcRO6r|`b؃J߲߈mZ \7&n"CH"#0pUʑ/~!FxW~0:V/y<h3~dEl6#rFI? >/.h-S??xLԴNiaaxDnRO5SY|b[./o&RIV0#\2&xZ|-wj[ ~7s֢-PZvA%k7 Jo>_}N UH:\Kވ 8QEQEWMٿ?'"_6+f`\OFm,}O[JJ3Io\\CM, }|e٨vqֿ7f8 w?3i\]D>0Go m$ڪ2N< %,])h8Կn#ÿ$Y-c|tqp;<"d#~u CKZ5 d\*9 (ʊ5!?L[8?iHw&-ƿ$;~?Gc_ o?G1?Iŷ+|`b kOC01m~g|'j[]ݡ=|;ySA?iHw&-ƿ$;~<__٫W^8 ]_S]ˋ\QS1o]+yNb`Gr *1jui=\-=:|Z_;C࿎>յ-د' ު7>v]'ճ+|JW cπ2~d=zpHS 3^,,5*^Z yXr;2Gc_=9W[jKx\Ңvy&yNfP0K_tiVmiVvo(tUU*݇Ҡ}pWQx*J7[^ztH(>(((+5ƿ'O&2Ow_ ]Zx4J\'z~|"X93e  W?ƺߎ~-^'7m̐A`*c |yc>.M+?f/W?oJGpޯa5ONmvǃnq4h?,R8& l0>}PoLe R2:JPBا/~87?ozJ.RI$z$շ? |5h I{]wt,>U^Ko CŘdp@%J8`OP^Wu#52ؑr=rl>,VJx;_ BI\_ 4nEM?D~*p6<3):/KϲmcX--YU7m.[$w$Ošs eW|'צ>oǗ$jԮ]DDF@!>%⿀> mfGO jE~G;-gVFX/n)ѣGUh$ FryKn&Դ.#xy&>$Ҽv։ûrIvn_< ~[]\AsTn/̊FR/m>}Z=(+v2/:ܝCE^pQ0q:QQ^}ol׈WE ֺqgЯ#\G^pprl{FU૪]x2&e>zS[4֩Mw;x'"4)-yڬTzeZ$ORkGTc ܑ/ʩƹ ^"uJmW?Წ Zt^RQWzRHӥ^>sL;ҳNEHEK~u#̫X# NդڵQ8*W)M6dtRkPEUom28µ5)'S}OdcZghi{AYd gĀw*H^ ,ڇ/?4ї٢"X2k:ĚMHɢBd,RZYO!Ǿ{oȟ"=u?cK+=~dqW@!(&h7ky6Oqp[2֗-"uy:叓14?V\pA83q;i%/?cQMvRwq#㞆x;\X5ԏ}ށS$H^FH뭥t\ͥt֟_Dj1z^$ώO *IQҤZWgZ2ʪіfd@d3 XxgJ7 3B&iBg!$2OrIO0޽ WMf-aN($_?4yzgU)ˍ\=kg]%#Oj3Ghy˲4?ڬ4zZ9=PZG 攥t;?|+\vc=s޾:6ҟ O/z|um?^|!~CQE}Q@Q@/ 3_~ M}?'m_̿o(}'M7U C@g1 w۹J| YYjEoniki~ H.nbY"7iՁ +׈~"s ?|$TfDqos+EůLdm ~G|"li Ӣоi:v+f#9+5o_~xQe+I%P6RI#+B$vil1'aPXhT?g/!GJנ{N. 
F1b϶8S7ωK0^?GY[[BdYYn<__?4~[7JiiY1|/fxߌfuß>;h,G^ȶ,VZ[\{ioRąPXn4P[Gƿ)W[_7xm'-ΖKW0 41O$>&>'h2^>MDk'1mmT.ٿM|[q ԛ,;3>PFe\yϛdvޟ7Z_ [T~7}HxOHi?XJ<h˾ |Jf?_S>SŸ " O,ZwǾEE&U?>!|?co/ T'ϋ7i2Z4axVX23D* Ǿ#TARZ_z"OYV?.xpxAV }W|c3㿏*-csLd=iI]rF_o⯂|🆜xoxZ5)GEi`r8EHÛO{&\Kww)gљI$vu,f/ r?ѣ?/|=/xVTPn ,M/<~𿎼uZot[[C{2ó)v"6~%Ğ2?MA{jXi/|RO%S˓N(*QujZρhhzŬ l)"oze=F=?w޿G^)𥦻_Nt{fRFGB: (oYb'Nڥ7?hi6?6$^کfK ]!eHxcǸSj4;_ J=wH<{kDž9KgU=7o>ҿ{k,tO3/VVL#F?^¿ثwJfɶmkPķoV3V0Kywz򌪿֭*G?z/TQEvvQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQ^sW7Ԋhnc<_ŪVxQ=8<#-ὦW>%U'׵˩kZ_1K>F_q(_~ ηwJ=TS|mchhhW==F@{BFV6z6zz6zeVvc,z:rV(s ?ZTw:WЇVU:#؀ѭhbD-'X??R!sGJSTE֯ĝ*@VKb*,F*iPƵj#kǀ QՓwe#[Hښ6j*cU)Y`x8{[E+*VAk'V8xqִTP#Fa5cJrV4ӟ?4Gx7t@4({I$ldl8=xލ޷pXץG/x[=leRvz5$f5%B~ǿ_ZBuxd7YMZ߲ڏDо(Q ұ4y%.}S, G26t8?/ MZ e ]cYd>R=+?5~xLM?=v#a~tW5i(5ɯS§p:*ǚv2qv$kE?M?h;[?R ]-F*]܈@!/缇ҿK4WO|/kZMW]\Z\FrX}AX|^ Vԁ:M䋑mr'`J6?@gڧ|mGc{rxdccCn,ݜlZq KJVps_f_=_TTpŤW<x䍃+w+?eM5tO<5ykfqhfS,#+JRI3Ju'NjpvkTGo'+7gr=bA6 \W rc:jP'@%|q"H̄y~g5ÓʻgcpʸxHC|'FOM]>_qK3+L/װHO54r׬пo> h7~~9'w\-.QG9ywܓFq߂_P_xO㆚xķ:/>@< $Vn蚴oyV1$y$g^Bڗ?Ŗ~5xWR=E~3&Ԣ q9;CۓuSקmׯtzR>==iDcj_>1`p/8hp2q^_Z?|ex"VK,G} /3{?4yÜӔ=g3Gѣ9M/3yh?4s)-\ݯ.C*%׉aTxXP?(?r~r?w?ٿׇ<0˾xcolp9 Vno kpjk·Zd\ic5ċd5~kIu,&dtbHF :֢Uᇎ]\}ޝdtM7?uWUy}0\H\nޭܒ{_|H/Oj[{޷<^!46P^&(oZ9!GhW?ӵ{Q,Ŷ߾BۥFX݂Zqo@^QEQEQEiW޿C~EPEPV7~m)i_%3M~ph?7~/A_A۸m 2ՖH`ly s~Q@||E6zD<]&ZlOsdvLƾVWEe`FA 7o{,m5-@<= d{+?~ӿ~{(̕s\ch&-^>Ntc\L|Ti }@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@l4{+hk (;1TI'8_P?)_^6> 6~tּitjK![(BS,̈́L>tm+Hj6ɪxR6~bHYDC+?{,~c &u?x#fmQ_c<> ό3h'+y?$V|7s?][IwMޫ+%zeH_X~:©DQUzV.! \:8A>³qSIZ;[D^M>Ȱ5 4+[ɹ7mlWvVWe(P4G]ku+o Ax+YZ6T' uji渜<+z{AYy)WHuGCVNN/oCu["u1}\XCiwZToGsw˪ f-fX]]^1IlZ=ׂ!%76d23r1\-]+V,-wW(PUԌR"ՖI˴Cjx^ˀՄܥFTJΔ<`&k+:yN& sy-ҹk/W~/K[+fqw1HNxXh0jȢ"8sTgO|f_s^ֆ&JI~RzE^y.djД[e^u#iVkxSZ'WY4SH5 WGn $Vspob;FEwmVvM"a=.trbiپI6 c8"n4N eG~U+KeOE"kS/J2K@Iƻ0T֥ v7-9.b2I`2)6F]Z_Ta>u/ɬX]D7ZQY Ef7<~(Kv׬_M/I2{MOY/|+L$r%"4"5>}e?=i!&Q2;dC:Rt% sX$7˩<ֆ# 'ՋIB Ӎտp+7"|/'Ú/5i5ycxـޥy#T|CᏆM_SkEH s%zFR2NA+V{'~j둿T0g 9\7b0|3.#o{*ְ\_}WdIOh~~?|z[ix|O+^?O &z[8[dofץuᱴh}peyѓhTgGh>+ŸvS]xIfC=)V+e 8$}Esbt14:Rg ,$WQv~}S}Qwگ9z8Nˍ4ݤg}3"on_7Wsg\܏x\(uI˗=eRe ftI"xU62 W_`_ɑ%o+>$_(+srLz?\qTxqoEZ^r">9@~ҿ/-<ڏ|˲1 ʓ%FǏ_|-Ouya4Eg9x޻k wEe|x7.'-c^K_6{ Q^(QEQEQEQEWMٿ?'"_ϗ;Ucmqt'> cfN|)B&Ֆ99\1TJ2)MFi~?+noA",K۱qnr7?( s⚏}? aghb@@:MU;IdQg]|Dx+Eÿm ݫxrIylpg'!>g ,Hx<1O>&.uنyVA.Jס8h;G A),yTcv+x2@5?i5σ~3ߌ4rd$J>6 pug`5CvM?RpN\eUGX:?f!OƏVM[V5hFQ"|NV/o_׫!3_ԿN|k48ɣD ,G;:_N/MoKWVeU u~ h>Z|>&Yrnu }}IVa&QWb$TҢR0~eG8Q5{3HEܳ6H?>GĖfPWz1s_zWiySYa~:~ٟ > x1?Ǘl~?sKFySWc8Wi6wZebY݈ 56 fM]SEx5MK ]9eCa~lj_%ug|ﵰZ>wN;n+h<TwW.2JDW*lc5щe{}T~ϫp7|eŘgQTh[ǟI[ϡ >ZxU-asmt'{:Q]g@9ϟ~hoP}5*/ƦBy>pdv;x;>05/^i/t(;VہzwZ?o_->i >|V%:m >LO!D+2\.mJ~~^~S?p5<4&#!K1ahurɞS_kt?+a+'<mkzJͧ]M0ܠaYv>UxZY^eDE%`f_N7g=c֮ uON|>m|A}ZK9g&#ω^/(9t ,׉h? 7^c,FZU[/V$¡1'KE~l~_f_b >π'7-YdƑe7 $JerIIRw![_>,/ď׼|:ï &`M㑤!2HElEq_tPo|M1 JaW )3:ͥİ_C`\xـ-}xo>T NJMSÞ#Ӽ3fl\HF2#+*((((~'_z(((XzEΟY-bY"Ue`CEZ?:2}/ j_x'Hgr89^ݳ.py?k_|i/º֯g5 T.c&|kvId=+?oď{N+4Mo7iH>[ GixvUn%#qC_f[Q:SwQ06lں?֏/$<r7T9T?H+VZj.ciWQ-,YaO+ hPEPEPEPEPE࿵/BKHtHRT2:+oR|E>AM ̖Iyٮ7/g|`b kOC01m5!??E~pL[?iHw&-~_ƿ$;~c_ o?@W1?Iŷ|`b kOC01m5!??E~pL[?iHw&-~_ƿ$;~fۗYk^x4=[Ft[{x Qf<3Nx#hڊ((((((u v o-ݓ hٷ:w$6Eoi  HAU_#hm_E}#M2j7)#0'(kPi-;Ȑ.Ê_ԫ)ya04߸&T*UJjS~"*uJu*~2HV3Z(B52S{TꟉQ9TJyGe9RW$jhr΢j̪"_SK--H[Coj ,^Y@x_5:vg[@ǓT(Cdɬ??? vi(lӒ-QEQEr17)4?hV* 7F 'G$OjX(K9(?8xXmɉRuOVyi׋X5"gon?B]I|/K2y/Cl'zPWݚ~-_ZjZudU=ՔG&9ψk*mZWqp c(lkMCߴ/짮~ egC.DP{ls"c?j،74{ת>7kx[xe/`y^kն?T(>?%m'X<‹+[p '6־Fzuct~g~oXURO&OɤŠ(O\=x:. 
6Y_۬8#yk>7'߳W5XhSj аK2_tEy9I#jl֍z?E:ruRUեm?񏀼b>~ӞlC6F)EG!_9 ;M~#g&{caU`x WO5Sg~'Sy+ʜ kş-ǿxiH{KX]JsӔQn@Xdּme?N|iy]B(Yܟ5Iv_ֶFOS>ZsXH.rK[HJ}W<3CsgżOo*X2#eٮOPW)mt.K̖(@((9m"{yPHԌ Ab? l%x_Þ'x/O3$,f#ną9"28𐟆͌ }A5F)m9? |mP~j1cGg ,7R`AK,2$.+KM|iV7tq:~ahgi1-fYb!Ԑc|B<1Hsjvk:yw z2ƿ*|#6?*/Ͻxxkk51p`{ߐ} A5قb0uUJq~_OÙ?`4ƭ7KTg|jسT3>$|#qhz)eu|ڗñt$;!-2̉'u",#vvEޛkg}k˶Xf@Ѓg]" u ks X#xZ)< 'uԲ^9Zk]?^ߗwW0˹|='^G/E5夺%'_V+9i_>(YOOa:/Ei \3rIkoaT T`OKMV_H! ڕh7:WBqTաVTQvi4F^/໏x_|;\^KqvsAcWS<^f kV6u˜3E+GdV?e"mc[epW쇝@?whUݑ5KO~4h힍n} 7L?Y[,gF'Ÿ aC_55k2 H/]*;=@F]F߆,OZ~Zk+(_EU Ԣ (?^{gxF߉<9QtY5hh%\>V/ u/+N;Fh6 e0+`W_x# 7LO|g.i.:Gx!t@#i10|YaG~?~x]qSHY ,%K36Q@p<xo tVZGouqwIq!+,qRp~*?.0]i u=W+6d*T+#$eH=|xo8:očþ)1z&Vky$erRwಷ0Nk/A_+gV? ;/JP~蒾( $X")qPꆑM),-R++TXQ^F $f?"C@1S/㹶pĨN ~W>1~~ >s+L熴siRjha1Ȟ+u z'|JG_B|YP5h.mkgt|[=E}|qZ+~ ՁZ{bzy'bvUg#!G:G|i#mPF9K(~'_z(((((7Vz*km{OWp (`^E~=j i^"7 k 5A%8~g/CCgxH90P}d['wZ17o>\X>t}M(+GJ%ĭ_UN?)9_?Z G{;C}ٓGвxl(((?mDdSw_QF??] M's|k޼3uouz|IJI|fcJ0Wi|}Ulוּ9/4[G,{yVDbU`H8 AWßc ~-*'/EsD|gw]ter #+ Cq n?tω:ieǀ|`Eemc!H r=v{#II8 zu8+7F~=ENuK F$}P^i [NagPH]/nc ?DŗSXj6@kylh]8d;r0j+W5soĿ[ KKX>;ňeuEĊbĬYrA` > |^IKgRkGō? }Q}2[=A 4rC,8b9 n_!"| LkӮlNevXUw*(r+HV+o@j~/L?$G/mM^XYcX,heG!+m`6h)J|wO~-{co}40k3[HgԲ݂}Y@~x~{7`o+o@QEQEQEQEQEQE_/d>)rZ |w.a72u`߆KMWA c[_o* EjP;ǘ"Cz*{}+Оqghp$~>eT?jkcuzMXcJՎم˃qҮǣafec֪ǵioLjE:ŜqW 8a1 gڗiU3BКt_Vv<uK1mi߉xorjd&Ab8@$vQeYtQ]V|Oq2:wWo{W}1M_,«Y_j˹Xwr~>(PdcJ𧜸:97\Ne#U=žG aGb+ԯVUj;M ((()@ '/_j0xO2Cr}0y$4eZs&?˵i2DTϵ ^W9մ/]֓iZr.-.Yb} ם[/='/-h;SMW4>yN;w<߇a/]Qcsb+=ry /z~~|OY>,!hyiݴas{$dW;??TQOUK4'k@MDqq|Ae5JɽXҗ/T~\ׅ-Ӄ|"oɎlo\ba*y,T~c>Җ3\hrn"t[ɷc6bo׀]ݡ?,-'8ppH:zuNIj,etNddz+S((+oQ &D#tEI$ ;{̲v"7)8\g(/oįÚoiMW9fLE\Ly?I h:'PFP47,[)eCk?)y,">!}ׄ@\kE)#W1<78*3HtBU)Nퟁ7:xB Cú< 41DȊe764xhkqkR|H`4ƓbGYO}pl 𺁷|8o0O,7χ/I3?9dw+?u̝ŽG`x^׫^\WN5Z ^Oc*qV5JriG+^ u"a-Xrjq4cZ%tM5Ѧk?GJAt}hÞ3N-*PHFa'8O5ޑ߳7/XuI"+&i#>]Ź+*y`kO Wоu):X$roa'mv pH*3< >\>?o87;\*u[&mV|=6.WIx7lOG&DkJx%x$@WR2#_7|\~hOgw4zu;:( ,j|PW#9S?τAc O' .PU`ӿmqq bw5Q\)r^.QR+iv[ר>X(p|,mmϊG@Iܥ"۔cEg$0ꦽUZ߅-T~0]iZUyHw)eIuk> | J-cC<%xT֐GgvWhmۗh`׳&A_|:K>Ե+V,2H1* (K/-|Z< :jln'yvHݑ5|kcX3w\Eqi7~=$s$`{1$eO̧JcV;.x;ysTeŝk/c"heFϓ]ho}KT׾ h_+|8"MsN>^]Lܮۉ'i K" `E~=&><]࿌WV|3{gJPE䆷D3(8|6,S0sG=7`OCM?4f=ك}>0sG=7`OCM?4ďڛI4oGKLڧ' O{?鿳|`?Gzo>?0ϧ!h~{?鿳|`?Gzo>?0ϧ!h~}~ ~%Vˬz'Ԭ[)u0FTg=85QEQEQEf> ai7Ql YbOfV_`Yo||1ŢOz69z+2~+{g|GUEH[WȌej@%|q"H̄O -"_mi&Lm7VJ|t?r]ȻgcQJxN`q_1>5=O||cQ`˒ywi$>3@WW(U ?u-'X&PѢ0Ұhﵛ:WO9$_'aͻ/4?m/@/\.qm;ъ,1%O <_]}-6ܑ(1wbu4 o5עHiȁgUGvڃD-|C ^O}d fW'%G7g~?Ÿ[T׼F;~nxFMy55 ex"lP,>|x~0O? |71|7s[\uؑL"F"ɱ7l~Q@i/ৌ?k ,P>|1ᫍZZ2kE( v,8` _~j~>i~$mn s>tq@v,K:+`+_ ~ ".j?Qe(|F(sk㕇_-{wxT5C}_j\G!`lCGbFOywi&ẖ/6d*fK!?OKNQ.u>mo$yXP2Kn'1t?goߴG4ڻ>-ѵ&l(q͙/ x PY  +EW'_zOOoEPEPEPEPEPEP$9TIbu*`x W/ g?Úi&x Q%ڧ3YC@CF FA(>(Cx_f-+S>,rx)I_[ T8fy7d8݀c(K;נ4m56H-0d#</-R2=IkGii#M#2u7ة|/5wii*l4l5oii9ʛ  [iV8^I#Aff8,XѰ—I GvbX#!hr~0\cUk.ao7 (!wW01x8Uc?38CZ+^TM=_uN]\53j:}bB8PX^+8l~>j;ſJa㯈~{h"9 PAKc #_Goړž{|(𝞫A~ֺz/lKZ 2F_ '#9O3< Vիid{^gF]6m7Rj:U07qtim U;%.q߃6}F |Pg+X 8*ى\^}_5|t<'sB`R `~I8>YP/}|(>+񇂼']xkƞ0d 2AԟQ@Q@Q@G41\ZG<!I#C+ v(/ڏ /*cNy%vߦQ\*K,UO5šiǥsLXdv|q82X?C&eh$Emen 9~x ~#G D' 0>߄^+/fA.4Ks@XیԒO$т˫ P-x+6^M8-5^H] qOJgi0QmIER\\ތ]xh~i>$C^4 Z'A_|D?2^ujG"6';cac_TXFk& +ٕJ^?h7v~¼6bQNݖB?݂|o'"OLH e3@ӮmlPq$ĕ(H8ǩ澒/CV_)7';={m&W-/2[f%0#dxlwfݏCG\TEɵx)맮E(??k/ڿ-Xo`E.^k isnFp$kdԺ |vv ~"p1!8R5#t~fXL~8-E8Kf (;((((OOo_ em>?>=6<>1:,6i=,;җ;V'?xZ_cit 8N-g#&u n'9/o,_x|`:寄I2wCL%'i3}˨||럴ޏ?|? jŔRxYEװA* mخKx(Q/ ycῌ0_MeLԆ_/eUQ$L>#~^95!=J7 #ȑ%*s†"4W$5)H9bW`w$6|} i ?-GO(\[A~tgM:}ڻ/lk!ԷY|1U'n0O~$x+7_ZE_o式_fX!5rIOv <#g߳76twǬX{sm=XH'ǟ''3{sm $JIo,ɍ\U#[/SzFZj?`u;3kuhl 7P@+ۥ}@c/į 5~X+M_QEQEQEQEQE^7ID|o\.-BܨeĊ= R?g{m+:eRZ|_49Ԛ6Gg]*OSk7퍐-#Bt? 
q6 Gvcrk Vo$Y+L -ܓWżv3CʹјT + `x A?3@c |-4Nk{lXX%]Xd8׫b_/N*oM>;kM'm|!x:#!x:#47¿&t<%G<%_O¦WDnT |?vؗS=ؗS=}? _ON?S|+i _7@0b_/Nb_/N*oM>;kM'm|!x:#!x:#47¿&t<%G<%_O¦WDnT |?vؗS=| @@aM?Un-lb0DL # ~¦WDn4u=(s]hC,y$pz(((((((+όOG㎽፦Z}s7/\笎&W'FtP厕{n+~'!J)>Ʌ~~৚Ƌ+9t\M|bw!HX+W;_0O2CqO xwĖ(Ix&‡(VUuRI2U9٭S῎ u߇ߊMAOX!p>eX'= ;^eګ@7t9ZCvp${pǏvg=\+6=k/xuKIBzkEw?/2x{mbD|J툤PY"3#m!@|/Â1ny}&չ#{6-w*ݳ>hk?U֍+dԡ?X/Ef?u@AYC)# ֿ6r*ΝU?4k_e|OGڃY.g+4m5gh[2ρ).}VMP|fIuv5NIF*IQzPUn.hk>)F\JM>4 cV rM?it&oD[zdE#v}y\tԹjJ>Iѫ(EJ*-KQiv-Ԗ¡f*I'\MMw/TmB6@8 *?5|L l{mHK\ Tw/:+K$iN-%jw#t5VcB烜G;:gpgma"5'yn^WW66][3$zxNzr-OXw$ "ޛetR:eԵ qK@z`d_NNjl|gOkZ7-dI##퓲L!#u%Om 5UVZw>#wì.2bSO/ë7eg͞cz[qd;e |A4OɥY'=R OJFY3y"/M%!$q"*p*M|;|U sXKtjueYY7&@%ek*(((((((>[M_5gTS%fO+cUك7neiFؑRGM^m yaK@~ןQσΧski$YXC4ە?pˌ0#Uy0A*FS-֑/>y6/]y ,OB@_`+߄? 5oxSԥuY2\HwIg9rrXWន:kR\ؤ8ikf԰'Ôp/U(ѿv[/f,G x?jNN䐜m_2ZZD%hnl⸷9@KWR2s6<~Ζ/٭i/|s&_[K_&{8W'rz+Z+):u़N#_G%F%^Gŏ?j؃67$¥Gm콖_#qa?OUL JHuLJ5FX1dBp1)8|4 [Du[=ONskw z#B+{gC5 |]'U>$O禋<,@w/n36) /kI}?y?x[si,&u7޵>~nQJm/?O𞹫@B.28Lwq27J@yG?|ObkbX{029g*kx$_~gǾ (AUOMҒ{ZKkR}.;(xmmaBM3H f<s_o|; o-ϗsl>?21^~cp4E~/nϰϸNUeխ#R~Wץ7_x*~0״h2_N#LQYe'0*~>.ٓxl ד{l%Q1)2?|{ڿ<2;-^Qm nqc~yo>4Pywc7<QGJmNr^K͟fxG5#.٩vJ1z|$x&kr СZic?wy$l~~o>=mMhm-(III&({+0|mF:տW#^hVS]{zq\OGkIJ2׌kjt?M?bƒO} 6Ǭ ~B˝h -B!|] w_3!=@"?6b\ t˩UӖ2{PN{W|QE~/O>D>ue^jHό=ίr#1z0|/n|`NogxoQT-nGn=Z'? |~ittG2GrBu 3X0K). xWwS[Zݮ3*#p99Qx Fq}_˳F~^8d8 bp&5~ZwdkΜZN_TW<aYSx˺0=޾Q'|EUτ{kU40eec矺uZ6|OGYs?{)>!}-^R9fq)Fx-|ISjh6ukwN&r!Kep֧ÿCϤWq*r3; z%i6VXr<2r)_i}aXiZ=YZi&mX}@@5NJJg\}#[h#Hվii_^΅&x"G~ ʈx?|Ck0\qw`wH0P  zXluZ~<͸o2uѣ esƚ%"OjgGʹx/?>pYo?^3Ѽgp5̶r3$2c*9+) V1$,7d>"=ofCgo%2Ac 0sM ww(T5kF{ɳ\>>zϻ$mu,^<;u/Eԃ9{G'NZ1~ږhzewp֗s *A ׺~jEP0?i$e4{0&r^Im9BFO c5؇5>!x<:q8pA93h;/h?ϩxS` dv1g;wc Ai W'|imPNa~wKZ#l7q G̨Q˞GϪ /&,k[u?tHS 09|A TH FGMEQ^ /ۏOuXc-ܿhPy{`m6,V$ =NsSQ@GD?k?`߳9ۻ3)d#41Jc}@v63}Z(ľ |= o*?H7x<>y_wn}(eaؕ&=+~%x_I> ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( iznCFt-[H{kh.bu*Ȍ 2 AQ@χwO?~ʆJ>o2۾rEħcF{G3\;$|CC~ >!񗁮-)--W|8N 濷ʡZ^M:m1-u`AEc[JyjEIyYZT2qzh&j_Z|GiVLtl{W"kڷ}Su1riCydE̠g@}ɯKIDkc2}wnl;ksEz cwzT_=狳*]J[)I7k ~ƿ_w-O-_w,|okO 9mׇ|?{}%ܾ75n큁FO آ"m#+x#PTpKEQEQEQEQEQEQE|?fc /^61=2o" Ljyɱ^l a:C<\ k`ɟ]}~"7?aw z,;3#qh?j/7rj2mr*8 4B.8fdFq~%Vl/^8k xJ_+<51̖%FrAU$pG& Mwx.OX+Vx%{RzH3{g閖j!Ǜu.>cL:fpWWlp')*_7mj;CG+㿁n4:Wm}IZ^>X!O%8t[IHoV^:ѭKxowR0o}62LAO~%jj?qswijK>s1?xWᇃ5A&]:z(<3{$O}"rq刖k}TOh?fVkr&qd?Zvg58<+}Zf>|E-1(o)[0k/ϑ d+k!iѤ٥->7?|;$)mD)72|e v;(EqLT:jTsݟY\f.*ܟwe-bk{ky]Hwq_\3m5: k7 $$f*Ѹ$`OЋڙ5 3o?1mlf_K &t϶ĺ)8$U{4WGu$' hxO]WށN;_ݢ-Rs=> _a?Z_\=GVEbW8Mn7L%-3.Clœ}%C_ mk/զӅq7K+a@8nH{'~(Aeu{eҧHl'Ho//$~dgO#33 k%ii7y5g-)bSB?ZI-jEir @4y;=B.,a A yJUih z b~|OzKYnuxn"֬,bz}qeygM_U>2FtŒG1}?} ld+ ? F7aJ\p!75e&#.~Ꮜ~Uu}6:*0GPd o11'௎X,x+LԵFe$p9u}'߂gBvvo3ZG{r#K>!;OxO?4B⮋|8, lG7[OxzWG_VG6ĚW0?Aojې+r*,|,'_RM/S-b׵o ~ z?N5:Q7 `Hzz֟ ি4o:!c>|w0ȗ:B"BHq_E|Y4:w>#|-m|Gm5Ճ&kIXHeC+1 f OxGT񞫬xn&_YLf\" Q@Q@Q@c/į 5~X+M_QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQ_9~Пwًŏǧj7 FScz.x$)<_?4wP9}3HGŵux-c$蚊_-=a<o_fHFym(9t@ϫ+t~ c4^OYЇKKre@FFx ހ;)7Y.>4xy@|?dr;+|Ff(Wt> :ujwWm&MbI.n/f'.T3w9~-ࣺu&QnwA_xlS D3Ԫ_6º~ZeY@[BESܞs|҄>'x^ Xw$vmh[*|6Mɸּ_=ܭ(_(='޾p6ᥔ'"y\D]Ո!AGצ2Bk/`IS@J,N]rWq4yJ?x/*իпZy;^/pM{.d'R/.5 ˿=ܦIgݽ݉,!rI~ø ;|_'qiOXxBC*k*O&0ҹ XBG?d.[.[,=\K8JǍ*ux38oURkx%Ev}.QE} QEQEQEQEQEQEOgY/| }ǃ5KƱ^^Agop[C&@댃7k-x7kρCSdT7YC$Pvc =~DMbx {~hmxK=5*E:GAo1cn?(oE~!~͟_n7uokW_ROEp_I RIk9X7e_0a9-?? moۋe6ߴwejڜefGFDaK=~~߷lG~!|bL^vdЙ|Eo:$QIF$$K,~ z>^~< &o!) e\YPs&'@~?_>C}Z47w]}o(Y]B%IU#?ߴ_k_A3ߥ4($KsKm1tgE%^kʎ8G_`+Z>z.b׊T{[IG-ۡ2 70v1iS=3|.+?/>&lk>{%ty%O4P3ՙ *tx{Uύ>K W[,r*m)Ȍ-1AA?~qG㶛1x+*ԧ|'}elElCh`s?1EV: Oo!~!D}ĻH;v7#m*ǐ[ Ĝ?G0%gg꭪ݯzts(R0"m-8}|R.ӌg}9|ZEƑu-XŪC-7m[šQ! 
[binary image data from a preceding archive entry omitted — not reproducible as text]
nova-21.2.4/doc/source/admin/figures/SCH_5007_V00_NUAC-multi_nic_OpenStack.vsd
[binary Microsoft Visio drawing omitted — not reproducible as text. Recoverable metadata from the embedded OLE document: title "Schéma Réseau" ("Network Diagram"), author Razique, created with Microsoft Visio from the Detailed Network Diagram template (DTLNET_M.VST); pages "Page-1" and "Background-1" use network stencil masters such as Switch, Server, Database server, Cloud, Dynamic connector, Bridge, Ethernet, Hub, Patch panel, ATM switch, Modem, Ring network, Firewall, Router, Laptop computer, PC, and assorted server shapes (Application, Web, FTP, Email, Management, E-Commerce, Proxy, Directory, Streaming media, Real-time communications). This is the multi-NIC OpenStack network figure referenced by the filename and used in the Nova admin documentation.]
IQEtABOJ-EG14%F4GaO ۃ ZUAA/B`?CopyrigPt (c)@2\0 M@c@os@fBAr@Aa@i@n.@ AlP HsRe@e@v&PdPA{BCSDCB-CC "G{$FCTVV RD#C05UbS@qF@ ^PU $fXhW.Dg'dōV%!T] DfXje H^maPaYqT  2qa8wEJtU{BUJtU{3Jt!U{UJtU{JtR!U{|JtU{&qFx&U{tSHvDU{.qFxbU{2qFxU{6uJtUwHD:B # h0>Th]]9 MTIAU@?@&*y>{?@X͂?@SSfhr~?Pps-8R?>$A n JuM{` ?e ;  Su*Jt'  $mCLmta{_n3.Jv  q?D=:%z?v(A[|Jq>JU2N贁N[?@M#J&A@bI(bUW)U+U+U&*Jy#?!lJ ?%P?AO"d+bW+$u/%B #A1A15 `?Co}p rigTt (cu)x02`09x0uMp0cn0osh0Ifv2g1rj01av0]ih0n.x0 Ul0 n8s2e0en0v0d0@:1Y>4#-#M3 M7 #(p0b5(9 DFXGdG'2WUAŔ;F!$a dF$&#l,&}>UhET]]9 M JU@xJuM` ?uJt ASt=WJRb~li>t2U?=@MJA@b + &J[N#?0a& ?AT)B!!5 g`?CopyrigTtZ(c)Z20 9ZM c. os f"!r 1ai n.Z Al90 (s=2e0ej vO0d10@!$b-# 'Ya&?6iiek59 6jX7^'0a&Z!|$0 FJlJ UhqMAA ]A !4Ja-'Z@?h4FӃ  <{Gz#A OR|VJ:6]].rM__Z)\("UEC@ ?@-bȃ S 1 `#`YR\Xk5^jZso!o ̌aX5U'BOP(?gks#dT[RUUSv?@n aX,flYwq%lSXn?Fh4  p= ףXAYfl6\`YHzmPXAV``ʼ_Z(QYAY҄agklQbE26_HU"rA HD:B # h0>Th]]9 MTIAU@B?@Dd;]?@X͂?@SSfhr~?Pps-8R?>$F> n JuM{` ?e ;  Su*Jt'  m@mta{_dA3.Jv  q?D=:%z?v(A[|Jq>JU2N贁N[?@M#J&A@bI(bUW)U+U+U&*Jy#?!lJ ?%P?AO"d+bW+$u/%B #A1A15 `?Co}p rigTt (cu)x02`09x0uMp0cn0osh0Ifv2g1rj01av0]ih0n.x0 Ul0 n8s2e0en0v0d0@:1Y>4#-#M3 M7 #(p0b5(9 DFXGdG'2WUAŔ;F!$a dF$&#l,&}>UhET]]9 MTJU@F6&?@}%4+ޱ?@5?@$D"?P6 rAJuM` ?qu#t  (LbIt=W_,MyIRFIlI#>2zGzK? HJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A/.7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6#(BO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ# $<Ef4 9b`fJ x۰Z @]0?c Q#Oc Q\}|zeE:ÕO)0:iG`iw*h<5a#j8oJiqze~EA#o>"rA #sHD:@ # h4>T]]9 MTJU@)?@Uj?@>TC?@ݿJWc^?P6 vI>JuM` ?)uJt  6|.It=WL\} IR?8IlRv\#>2zGz? HJAebM #zWb(#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A(.?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!ZA!Z!?v"rA # sHD:@ # h4>T]]9 MTJU@#{5?@ř?@BZ?@ib+iF?P6 vJ>JuM` ?qu#t  DqIt=W_{*IRUo}WIlTy6ވI#>2zGzK? HJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A/.7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6#)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ# $<Ef4B 9b`fJ:O@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij? mh~EA#o>"rA k#sHD:B # h0>Th]]9 MTIAU@Zԩ?@6#TTĶ?@X͂?@SSfhr~?Pps-8R?>$K> n JuM{` ?e ;  Su*Jt'  }$~mta{=g2.Jv  q?D=:%_z?v(A[|Jq>JU2N贁N[R?@M#J&A@bI(b*W)U+U+U&Jy#c?!J ?%P?AO"d+bRW+$u/%B#A1A15 `?Cop rigTt (c)x02`09x0Mp0cn0osh0fv2g1rj01av0ih0n.x0 l0 n8s2e0ejn0v0d0:1Y>4#-X#M3 M7#(p0b59 DFXGdG_'2WUA;FJ!$a dFD$&#l,&}>UhET ]]9 MTJU@fl6?@eb,)?@(w)٫?@Q`m ^?P6 L>JuM`lW? SuJt  LUMtA[_ʓWMVYBMpry2VI'>\??\.?lFA8 J2llf?3 HRJB& @i7b#zv t#]b( J(+&B(" ?APb)4(z$!D3//&115 k`?CopyrigTtk(c)k2009kM0c0os0f21r01a0i0n.k Al @ 8sBe0e0v @d@1$4-3 7bO&&i b~59 &&XG3'0U!R&!40 FJlJ $  (sQsQ] 1eA!Zl0|z 0 @?n' Q b7fJ:]~^PsojR֯UaE !?@q?`0 0c s!R'i ?e:}ɟh~5aef9oa|b c ؠgT]]9 MTJU@S?@ަƢ?@u'?@ݿJWc^?P6 vN>JuM` ?)uJt  cIt=W(?IRkzIlRv\#>2zGz? HJAebM #zWb(#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. 
WAl (s"Ue e v0d i!Em$+A(.?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+Z lJ]UhzЎ!A!Z!?&sA#<# $BK jfJ 'j `00>oPaYc l?]]aE:Ͷǝ0TbCcba`iK8K8h<5eg۰Zk(qQlqe~Ee@||`ai !hA'o>"rA #sHD:B # h0>Th]]9 MTIAU@h!I?@Ž?@X͂?@SSfhr~?Pps-8R?>$P> n JuM{` ?e ;  Su*Jt'  XƿHmta{0_Y3.Jv  q?D=:%z?v(A[|Jq>JU2N贁N[?@M#J&A@bI(bUW)U+U+U&Jy#?!,J ?%?AO"d+bW+$u/%B #A1A15 `?Co}p rigTt (cu)x02`09x0uMp0cn0osh0Ifv2g1rj01av0]ih0n.x0 Ul0 n8s2e0en0v0d0@:1Y>4#-#M3 M7 #(p0b5(9 DFXGdG'2WUAŔ;F!$a dF$&#l,&}>UhET]]9 MTJU@ةA!z?@B]?@ˋ>WB?@=E?P6 vR>JuM` ?)uJt  wT vIt=W?mIRݮIl$dvĤ#>2zGz?(3 HJsAeb #z (#+#+#&*Jp!p!5 g`?CopyrigTtt(wc)t20 9tM c o%s f"!r !ua i n.t_ Al (Us"e e v0 d i!Em$}! (K2?0b6 ?ALcb4($3/ 4/U%-X|# |'b6?Fiie<59 FX/G3'0UBb6!}40 aFuJl*J8>UhAA !1!Z}'- E|zo}#.<b$]K VJ:@8Qw0RS S IBf>QE !?@\BU--# i[F @ h<53n:CU9fKo]o ڵ h5a9YUU aUUwi]Q ieq'5 h2r_Uv"rA # sHD:@ # h4>T]]9 MTJU@?@zų[?@ә!?@8laxy?P6 vS>JuM` ?)uJt  {It=W ůIRE! YBIl p#>2zGz? HJAebM #zWb(#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A(.?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!A!19b $<# $B 9b`fJ: muۘ]0:mOc ?i'A~aEesDͅ?c ? V#NeFaze<5e@|lVaiU3Lh`~EA#o>"rA #sHD:B # h0>Th]]9 MTIAU@sLGv?@c?@X͂?@SSfhr~?Pps-8R?>$U> n JuM{` ?e ;  Su*Jt'  [Rw͖mta{ۡ_TL3.Jv  q?D=:%z?~v(A[|Jq>JU2N贁N[R?@M#J&A@bI(b*W)U+U+U&ՆJy#a?!J ?%P?AO"d+bRW+$u/%B#A1A15 `?Cop rigTt (c)x02`09x0Mp0cn0osh0fv2g1rj01av0ih0n.x0 l0 n8s2e0ejn0v0d0:1Y>4#-X#M3 M7#(p0b59 DFXGdG_'2WUA;FJ!$a dFD$&#l,&}>UhE9_%O+s u8+DOʅVFxZI#>B U 9? ,~d,ܮ o@+ht oKsG8ofHVz OxX#\Z&Lo>*Ͽ-1H58j<Ͽ?C~Gxҿ&KN$اkR6VH8Yhy] aKdMh lo.sw]0 z82 ^~Hh4 G'6 68 CH: ΌTO//?#?5?G?Y?k?}???X` bNdf߁i@6HkܦHxmg1ObNϮ4!NINJǶ`JNIxmNJ?@QB.OB_ J-O6UQF\)'xNE?_"P(T~X?xߺEW*O9>%_"|PU_D)Z6)oEMaUFD  h(^TYYBXCUF~?x<F BP(?P?  B+66 J] 6qa{6a!6ah9ha6a6!a&`8%!a/&6C!aM&6a!ak&6!a&`d9da&6!a&6!a&6!a6a61a6631a=66Q1a[66o1ay6a 61a6!61a6"61a6#61a6a$6AaF%6#Aa-F&6AAaKF'6_AaiF`]9]aF)6AaF*6AaF+6AaFa,6AaF-6QaV.61Qa;V/6OQaYVa06mQawV16QaV26QaV36QaVa46QaV56aa f66!aa+f86?aaIfa96]aagf:6{aaf;6aaf<6aafa=6aaf>6aaf9av@6/qa9vaA6MqaWvB6kqauvC6qav9av!E6qav9avG6a H6a)aI6=aGJ6[aeK6yaL6aaM6aN6Ӂa݆O6a9aQ06-a7R6K aUS6iasT6aU06aV6Ñ a͖W6aX6a Y06a'Z6; aE[6Yac\6wa]06a^6 a_6ѡaۦ`6aa06 ac6+ a5d6IaSe6gaqf06ag6aQ9Qa˶i6߱aj06ak6 a%l69aCm6Waan06uao6 ap6aq6ar06as6 at6)a3u6GaQv06eaow6 ax6ay6az06a{6 a|6a#}67aA~06Ua_6s a}6a6a06a6 a6 a6'a106EaO6c am6a6a06a6 a6a6a!065a?6S a]6qa{ra0rar arara0r%a/rC MraQkrQ0rQr QrQrQ&0r!Q&r3! Q=&rQ!Q[&ro!Qy&0r!Q&r! Q&r!Q&r!Q&0r1Q6r#1 Q-6rA1QK6r_1Qi60r}1Q6r1 Q6r1Q6r1Q60r1Q6rA QFr1AQ;FrOAQYF0rmAQwFrA QFrAQFrAQFrAQFJȖ SUqJ?_,V S=V Q'  T UrQS S@L&ɯd2?]ɓX?\.ҥX^ sUQV T_\ UJ%Q xS T -b@Y T`&P t>h,aeL,dbtok^`eupjet^`h r\wbgQgQ*QJ_i( RU SJWS> 0V]!? ?wwww{np0wzZp~uwpwwppw^w|{pdp^ZDrag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aab~߿~??)j࿔S5E?0FJ?nɶؿ3@ ?DHD: # h4>T]]9 MTJUF~?F͓??FtQ+/?P6 AmJuM` ?uJt At=Wrb#JRbJli:O#>[a? ?AbAw@"b $J2zGz?)@MJ7& #fbu#z$ ( (&,"*BB!!5 g`?CopyrigT}tZ(c)ZW2009ZM 0]c 0os0f2R1r0>1a0i0n.Z AUlY0 8s]2e10e 0vo0dQ0!$ę-# '?6iie59 6X7}'0UpBi!0 'F;JlJAh7!1!ZFTE <EgQ# IRSvVJ:M&d2?6VOSe\^TUR UEU@SG\BZ_ɫQ5[@_,cYU520_BU"rA #cHD: # h4>T]]9 MTJUFM&d2?FDx??Fl5?P6 AmJuM` ?uJt\  ]It=WIW$IRIl ڜI#>[ ? ?Ab A@"b$J2zGz?R=@MJ`=&#f {#z .(b)&,"*BB!!5 g`?CopyrigTt (c])020%090uM0c0os 0If21r 0D1a0i 0n.0 WAl_0 8sc2Ue70e0vu0dW0!$-# '?6iieP59 6X7F"'0UvB!0 -FAJlJ $L!(AA ]A1h1!ZFhmg  <!q}"##M _RVJL!%t?FTE\QeS , φ+#pYYSUKQUE^,n__ "+^HX5:?LV&k?SN#tTkRU5Ulf܄BP(%lgpY?BeO#X(dec]\m\cX ~.K$oZhLYUiXQaIZ^_i3qU32F_XU"rA 5#HD: # h#4>T#3 AMJUFbX?FϏ ?FM&d2?F _gw?P6 #AJuM` ?uJt  k]It=~W.*IRIl̆Rkפ#>[a? 
?AbAw@"b$J2zGz?)@MJ=&#fb{#z\ (b)&T,"*B!!5 g`?CopyrigTt (c)020%090M0c0os 0f21r 0D1a0i 0n.0 Al_0 8sc2e70e0vu0dW0!$$-# '%?6iie59 6X7'K0UvB!K0 -FAJlJAh711T?ҿP mVN  Q  . *>uA` ?Gm$m8JuQ\>XbK>tY  wetVy 5Ie>U2zGz?@9 /#>?&Aw@bq(Wb)+&b" !p$4>9#"?& "w"b)";|'b"i,b"/N9#0%B 9#11;8"`?CopyrigXt (wc)020090M0c0o%s0f21r0Aua0i0n.0_ Al/@ 8Us3Be@e0vE@d'@'"i fE 6(X7H!'_2q@&,d50 tFJul>4Ul8QQCaR 1(Q1w!-(N!R?FJ?@ `0R QyUeR /ge%$(U⤖侇_X?,_XN Uωr?F r?FOtO?P"ZO %o1~Y.$!$iA4Y!eX]?C!ebRҡe:S |we"d  1X461Boo wr$5%>aEJe_~?F.ԵN?FaV-u"ؾ?PWRrR~|`%s mK)m >'ʲl} fdjg tsC uL+t-VwfxQUv`q7&ҹ _XRQb%d2[i֥TWVQFd50_#jS aUTf imp#jpm0"XY֘t?FTEY`Qj;!lЙXbQ?,n$ xXfQYJhbPnZ\>]!h"bXjQV6W_QUnU_o_)oTQaRp v__N-aZ?Fah<b}煔M_Ophz y7vPQP]BP({ύ 贁NPE 0BXЕHD: # h4>T]]9 MTJUF~?FS BP??Fx<?P6 #AmJuM` ?uJt At=W5_v@"J]RbJl>t2U.?=@MJA@+b)+&J[P'?g& ?A)*B!!5 g`?CopyrigTtZ(wc)Z2009ZM c o%s f"!r $1uai n.Z_ Al?0 (UsC2e0e vU0 d70!H$-,# 'g&K?6i@ieq59 6X7'K0g&!$K0 F!JlJ\#Uh( #3 ]At!"3'ZF>h f <YDc#BA NR{VJ:5]dGL_^Zh?.2\QE]t?FgPg܉ S )?w#_YXq5^(l]o o e]ǂX5U%?D"?;Vfk(\#cTZRUUM\_Y7ۨ Xno ^Z~#b%{XAYM&d2fkBqlUE25_tGUSrA   UGD  3 h0TdYYBjUFțau?F73X?FGV^bd?FZ^Zp?P} X  f  f0 D fX l f  f  bd  f  & f & 4& fH& \& fp& &! f&" &# f&$ &% f&& &' f6( $6) f86* L6+ f`6, t6- f6. 6/ b6]_61 f62 63 fF4 F5 (F6 T#3 AMJUF^]P'?Fݱ?F#6?F{_ "w?P6  >JuM` ?uJt  (LbIt=W,MyIR_FIl#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ %ޜ\0?c Q#Oc Q\}|zeE:pOϫ0:iG`iw*h<5a#j8oJiqze~EA#o>"rA #sHD:  # h4>T]]9 MTJUFH[b?F:%lj?FݡD?F,6^?P6 v >JuM` ?)uJt  6|.It=WL\} IR?8IlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!ZV\槈?F9"rA #sHD:  # h#4>T#3 AMJUFiW>E?FTb?F %?FP_EyFw?P6 >JuM` ?uJt  DqIt=W{*IRUo_}WIlTy6ވ#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ:%O\0:iG`Oc qy"zeEajoJiaze<5e'fÏ 0mFaij? mh~EA#o>"rA k#sHD:  # h4>T]]9 MTJUFF ?FƢ?F%9.?F,6^?P6 v>JuM` ?)uJt  cIt=W(?IRkzIlRv\#>2zGz?@M2JAe7b #zb(J#+#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c oKs f"!r !a i n. Al (s"e e v0d i!EHm$+A?b6 ?AgJaTv6` F !!l"z۾bZA +J6ig5V181 ?2?;ߑ:C?'4/U%ęG|# b|'b6LFO ie<59 FXG}'0U`Rib6!}40I V+ZlJ]Uhz!AZֺV\?F9&sA#<# $ B bnbJ'jWĞ 00>oPaYc l]]ȱaE:P^0TbCcba`iK8K8h<5e %k?(qQlqe~EeF$||`ai? !hA'oFrA k#sHD:  # h4>T]]9 MTJUFw??FxOe?FV\?F. xy?P6 rAJuM` ? )uJt  {It=W ůIRE! YBIl p#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6L>FO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!A!19b $<# $BJ C`fJ: m$D]ᇄ0:mOc i'A~aEeTE?c  V#NeFaze<5eF$|lVaiU?3Lh~EA#o>"rA k#sHD:  # h4>T]]9 MTJUF% !(@{?FmN,']?F~ tC?F7܄E?P6 v>JuM` ?)uJt  wT vIt=W?mIRݮIl$dvĤ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?Copy_rigTtt(c)t2U0 9tM c os f"!rԙ !a i n}.t Al U (s"e e 5v0d i!Em$}! K2?b6 ?ALb4($I3/ 4/U%b-|# |'b6<Fiie<59 FjX/G'0UBib6!}40 aFuJlJ8>UhAA@ !1!Zo(. E|zo}#<bX$-B] VʼJ:F,?0ȒS S I?Bf>QEҺV\?Ft@K\BU--# i[ÏF @ h<53n?I9fKo]o ڵ h5a9YUU aUU 3yk]Q i?eq5 h2r_U"rA k#sHD:  # h8>T ]]9 MTJUFG"7?F~8)?F AE?Fzl]?P6 >JuM`lW? 
SuJt  LUMtA[_ʓWMVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @nib#zv t#b( (+&B" ?APb)4(z$CD3//&*115 k`?CopyrigTtk(wc)k2009kM0c0o%s0f21r01ua0i0n.k_ Al @ 8UsBe0e0v @ d@1H4-,3 7%bO&&i Ab~59 &&XGK"'0U!R&!40 FJlJ Q! (sQsQ]31heA!Zsꈐ43 0 @?/n' #M b7fJ:]n#vca71xp]] 8usUQPUQS^%S+ %R&bpGba8DaP`Q]`NE_TWOPKJ0WSH+`Ps RUORuIsSk1J0@^e_ erP}28\usvvP62<%-g ۏ'9_us1125q5q2A+Bhh@R)pR*) 2hqhq|b(8 1xpa[3!1Qq6`:zGQ`1f1[j2B@@7RF5VCT @nb2FE5us_4Σ&F3s͵NSYFB@UͬpU ͬQUHͬ|eA§,@Q_Ɔp׏(ɟ 8A B<F5$lAVB\KkF`RC`asPe uexPP2!"! %(P%PORJ1 '2qa` P֒ȑ8(S˱AȕE{8H5AALA˱LܥAL(ARGMAWЛtA5ЛA͞/%AELHD:  # h#4>T#3 AMJUF͹30?FY˟8?F#6?F{_ "w?P6 >JuM` ?uJt  z7It=WiIR_FIl#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ %ޜ\0?c Q#Oc Q\}ȱ~aE:pϫ0:iG`iw*h<5a#j8oJiqze~EA#o>"]rA #sHD:  # h4>T]]9 MTJUF,2?F/LΛ?FݡD?F,6^?P6 v>JuM` ?)uJt  jDFIt=W-M3}IR?8IlRv\#>2zGz?@MlAneb #zb(J#+#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c oKs f"!r !a i n. Al (s"e e v0d i!EHm$+A?b6 ?AgJaTv6` F !!l"z۾bZA +J6ig5V181 ?2?;ߑ:C?'4/U%ęG|# b|'b6LBO ie<59 FXG}'0U`Rib6!}40 V+ZlJQI0h|GЎ!A!ZV\O?F9"rA 5#sHD:  # h#4>T#3 AMJUF *.?F)s*?F %?FP_EyFw?P6 >uM` ?uJt  Ç\UIt=W @ٶIRUo_}WIlTy6ވ#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ:%O\0:iG`Oc qy"zeEajoJiaze<5e'fÏ 0mFaij? mh~EA#o>"rA k#sHD:  # h4>T]]9 MTJUFJ.?Fh ?F%9.?F,6^?P6 v>JuM` ?)uJt  ICIt=WN3sIRkzIlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 $V+ZlJ]Uhz@!A!ZֺV?\?F9&sA #<# $B jfJ'jWĞ 00>oPaYc l]]aE:P^0TbCcba`iK8K8h<5e %k(qQlqe~EeF$ǧ||`ai !hA'o>"rA 5#sHD:  # h4>T]]9 MTJUFPY?F3pG?FV\?F. xy?P6 rhAJuM` ?)uJt  ͧILIt=Wf IRE! YBIl p#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LhSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!`197b$<# h$B 9b`fJ: m$D]0:mOc i'A~aEeT7E?c  V#NeFaze<5eF$|lVaiU3Lh~EA#o>"]rA #sHD:  # h4>T]]9 MTJUF]cZ?FVIܳ?F~ tC?F7܄E?P6 v>JuM` ?)uJt  PrIt=WoDIRݮIl$dvĤ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl (s"e e v0d @i!Em$}! K2?b6 3?ALb4(&$3/ 4/U%-|# e|'b6 ?Fiie<59 FX/G'0UBb6!}40 aFuJlJ 8>UhAA !1!Z(. E|zo}#<b$] RVJ:F,a0RS S IBf>QEҺV\?Ft@K\B?U--# i[F @ h<53nI9fKo]o ڵ h5a9YUU aUU~ 3yk]Q ieq5 h2r_U"]rA #sHD:  # h8>T ]]9 MTJUFyh ?F*nK?F AE?Fzl]?P6 >JuM`lW? u t  NKMtA[X*`lMVYBMpry2V'>\¹??\.?lA8 J2l_lf?=@JMJB& @ib#ztv t#b() (+&ՆB3" 3?APb)4(z$D3//&T115 k`?CopyrigTtk(c)k2009kM0c0oKs0f21r01a0i0n.k Al @ 8sBe0e0v @d@14-X3 7JbO&&i b~59 &&XGK"/'0U!R&-!40 FJ%lJ Q! (sQsQ]3Ы1eA!Zsꈐg4 0֚ @?n'#2#M b7fJ:]T#3 AMJUF~S5e?Fk]?F#6?F{_ "w?P6 >JuM` ?uJt  @@|It=Wn56uIR_FIl#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FXGB/'0U`Rb6-!}40 V+ZlJQI0h|G!A!1l&fL# $<EfB 9b`fJ %ޜ\0?c Q菌#Oc Q\}ֱ~aE:pOϫ0:iG`iw*h<5a#j8oJiqze~EA#o>"rA #sHD:  # h4>T]]9 MTJUF3w?Fr(IY?FݡD?F,6^?P6 rlAJuM` ?)uJt  It=WX IR?8IlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. 
_ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LlQkBO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!ZV\槈?F9"rA #sHD:  # h#4>T#3 AMJUFZ)x?F~m?F %?FP_EyFw?P6 >JuM` ?uJt  1It=WOIRUo_}WIlTy6ވ#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ:%O\0:iG`Oc qy"zeEajoJiaze<5e'fÏ 0mFaij? mh~EA#o>"rA k#sHD:  # h4>T]]9 MTJUF:N[%?F ɇ?F%9.?F,6^?P6 v>JuM` ?)uJt  {d5Z\It=W2;wEIRkzIlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V+ZlJ]Uhz!A!ZֺV\?F9&sA#<# $BK jfJ'jWĞ 00>oPaYc l]]beE:P^c0TbCcba`iK8K8h<5e %k(qQlqe~EeF$||`ai !hA'o>v"rA # sHD:  # h4>T]]9 MTJUFydY?F?FV\?F. xy?P6 v>JuM` ?)uJt  V"(It=W7:LIRE! YBIl p#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!`197b$<# h$B 9b`fJ: m$D]0:mOc i'A~aEeT7E?c  V#NeFaze<5eF$|lVaiU3Lh~EA#o>"]rA #sHD:  # h4>T]]9 MTJUFEtJuM`?)uJt  7It=WܯHIRݮIl$dvĤ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl (s"e e v0d @i!Em$}! K2?b6 3?ALb4(&$3/ 4/U%-|# e|'b6 ?Fiie<59 FX/G'0UBb6!}40 aFuJlJ 8>UhAA !1!Z(. E|zo}#<bc$ R] )VJ:F,00RS S IBf>QEҺV\?Ft@K\BU--# i?[F @ h<53nI9fKo]o ڵ h5a9YUU aUU? 3yk]Q ieq5 h2r_U"rA #sHD:  # h8>T ]]9 MTJUF̉?F7|?F AE?Fzl]?P6 !>JuM`lW? SuJt  o|MtA[_"oRMVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @nib#zv t#b( (+&B" ?APb)4(z$CD3//&*115 k`?CopyrigTtk(wc)k2009kM0c0o%s0f21r01ua0i0n.k_ Al @ 8UsBe0e0v @ d@1H4-,3 7%bO&&i Ab~59 &&XGK"'0U!R& 10 FJlJ Q! (sQsQ]31heA!Zsꈐ43 0 @?ҏn'8R# b7fJ:]T#3 AMJUFJuM` ?uJt  .9BWyIt=Wnq׏8IR_FIl#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ %ޜ\0?c Q#Oc Q\}ȱ~aE:pϫ0:iG`iw*h<5a#j8oJiqze~EA#o>"]rA #sHD:  # h4>T]]9 MTJUF%?FG>'~ݡD?F,6^?P6 #>JuM` ?juJt  HIt=WByIR?8Il?Rv\#>2zGz?@MJAebM #zWb(#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!ZA!ZV\?F9"rA #sHD:  # h#4>T#3 AMJUFQK݆x?F-1?F %?FP_EyFw?P6 $>JuM` ?uJt  fd>It=WN6sgHRUo}WIlT?y6ވ#>2zGz?@MJAebM #zWb(#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!A!1l&fL# $<EfB 9b`fJ:%ޜ'\0:iG`Oc qy"zeEajoJiaze<5e'fÞ 0mFaij mh~EA#o>"rA 5#sHD:  # h4>T]]9 MTJUF|k~ؾ?F;?F%9.?F,6^?P6 v%>JuM` ?)uJt  ُuߓIt=Wί IRkzIlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTtt(c])t20 9tuM c os If"!r !a i n.t WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+Z lJ]UhzЎ!A!ZֺV\O?F9&sA#<# $B %jfJ'jWĞ 00>oPaYc l]]aE:P^c0TbCcba`iK8K8h<5e %k(qQlqe~EeF$||`ai !hA'o>v"rA # sHD:  # h4>T]]9 MTJUFFJuM` ?)uJt  L+It=WݑHIRE! YBIl p#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. 
_ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!`197b$<# h$B 9b`fJ: m$D]0:mOc i'A~aEeT7E?c  V#NeFaze<5eF$|lVaiU3Lh~EA#o>"]rA #sHD:  # h4>T]]9 MTJUFQt?FRLxw?F~ tC?F7܄E?P6 v'>JuM` ?)uJt  6j>It=WNFFIRݮIl$dvĤ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl (s"e e v0d @i!Em$}! K2?b6 3?ALb4(&$3/ 4/U%-|# e|'b6 ?Fiie<59 FX/G'0UBb6!}40 aFuJlJ 8>UhAA !1!Z(. E|zo}#<b$] RVJ:F,a0RS S IBf>QEҺV\?Ft@K\B?U--# i[F @ h<53nI9fKo]o ڵ h5a9YUU aUU~ 3yk]Q ieq5 h2r_U"]rA #sHD:  # h8>T ]]9 MTJUFXf?Fݼ?F AE?Fzl]?P6 (>JuM`lW? SuJt  ?aV_MtA[_=p'MVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @nib#zv t#b( (+&B" ?APb)4(z$CD3//&*115 k`?CopyrigTtk(wc)k2009kM0c0o%s0f21r01ua0i0n.k_ Al @ 8UsBe0e0v @ d@1H4-,3 7%bO&&i Ab~59 &&XGK"'0U!R&!40 FJlJ Q! sQsQ]31heA!Zsꈐ43 0 @?ҏn'R# b7fJ:]T#3 AMJUF;Э?Fn?F#6?F{_ "w?P6 )>JuM` ?uJt  &lIt=WAI׃IR_FIl#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L( DO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ %ޜ\0?c Q#Oc Q\}ȱ~aE:pϫ0:iG`iw*h<5a#j8oJiqze~EA#o>"]rA #sHD:  # h4>T]]9 MTJUFAYr?F L?FݡD?F,6^?P6 v*>JuM` ?)uJt  0@hIt=WmiIR?8IlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!ZV\槈?F9"rA #sHD:  # h#4>T#3 AMJUFZ?F!ќ?F %?FP_EyFw?P6 +>JuM` ?uJt  JJ&qIt=WrhIRUo_}WIlTy6ވ#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ:%O\0:iG`Oc qy"zeEajoJiaze<5e'fÏ 0mFaij? mh~EA#o>"rA k#sHD:  # h4>T]]9 MTJUF9 `?F2{ о?F%9.?F,6^?P6 v,>JuM` ?)uJt  V6>nIt=WgD~baIRkzIlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 $V+ZlJ]Uhz@!A!ZֺV?\?F9&sA #<# $B jfJ'jWĞ 00>oPaYc ?l]]|eE:P^0TbCcba`iK8K8h<5e %k(qQlqe~EeF$||`ai !hA'o>"rA #sHD:  # h4>T]]9 MTJUFq[')?F9i7?FV\?F. xy?P6 v->JuM` ?)uJt  9KIt=WYGJIRE! YBIl p#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!`197b$<# h$B 9b`fJ: m$D]0:mOc i'A~aEeT7E?c  V#NeFaze<5eF$|lVaiU3Lh~EA#o>"]rA #sHD:  # h4>T]]9 MTJUFӅN?FZDM?F~ tC?F7܄E?P6 v.>JuM` ?)uJt  1|It=WƯ:IRݮIl$dvĤ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl (s"e e v0d @i!Em$}! K2?b6 3?ALb4(&$3/ 4/U%-|# e|'b6 ?Fiie<59 FX/G'0UBb6!}40 aFuJlJ 8>UhAA !1!Z(. E|zo}#<bc$2] )VJ:F,00RS S IBf>QEҺV\?Ft@K\BU--| i[F @ h<53nI9fKo]o  h5a9YUU aUU 3yk]Q ieq5 hb2r_U"rA #sHD:  # h8>T ]]9 MTJUFv?Fz?F AE?Fzl]?P6 />JuM`lW? SuJt  y SksMtA[_eMVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @nib#zv t#b( (+&B" ?APb)4(z$CD3//&*115 k`?CopyrigTtk(wc)k2009kM0c0o%s0f21r01ua0i0n.k_ Al @ 8UsBe0e0v @ d@1H4-,3 7%bO&&i Ab~59 &&XGK"'0U!R&!40 FJlJ Q! (sQsQ]31heA!Zsꈐ43 0 @?/n' #M b7fJ:]T#3 AMJUF`;?F:7?F#6?F{_ "w?P6 0>JuM` ?uJt  4E$pIt=WLx4IR_FIl#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. 
Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L( DO ie<59 FuXG_'0U`Rb6J!}4 V"+ZlJUhh|G!A!1l&fL# $<EfB 9b`fJ %ޜ\0?c Q菌#Oc Q\}ֱ~aE:pOϫ0:iG`iw*h<5a#j8oJiqze~EA#o>"rA #sHD:  # h4>T]]9 MTJUF [kԏ?F4 c?FݡD?F,6^?P6 v1>JuM` ?)uJt  uIt=Wt]aIR?8IlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!ZV\槈?F9"rA #sHD:  # h#4>T#3 AMJUFYļ#?FŬ /?F %?FP_EyFw?P6 2>JuM` ?uJt  B?˳It=WFp\cIRUo_}WIlTy6ވ#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181a ??8:C$?'4/U%G|# |'b6L(BBO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ:%O\0:iG`Oc qy"zeEajoJiaze<5e'fÏ 0mFaij? mh~E#o>"rA k#sHD:  # h4>T]]9 MTJUF-!4?FL$$?F%9.,6^?P6 3>JuM` ?juJt  rs"It=WVIRkzIl?Rv\#>2zGz?@MJAebM #zWb(#+#+#&JBp!p!5 g`?CopyrigT}tZ(c)ZW20 9ZM ]c os f"R!r !a i n.Z AQl [&s"e e v0d @i!Em$+A?b6 ?AgJaT6` F !!l"zbAR +6ig5V181 ?2?;:C?'$4/U%G|# |'b6L3FO ie<59 FXG'K0U`Rb6!}4K0 V+ZlJ]Uhz!A!ZֺV\?F9&߮sA#`<# $B jfJ'jW?Ğ 00>oPaYc l]G]aE:P^0TbCcba`i?K8K8h<5e %k(qׁQlqe~EeF$||`ai !hA'o>"]rA #sHD:  # h4>T]]9 MTJUFX?Ş?F).?FV\?F. xy?P6 a?JuM` ?uJt  sd It=W> IRE!_ YBIl p#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6La(`BO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!19b$<# $4` 9b`fJ: m$D]0:mOc i'A~aEeTE?c  V#NeFaze<5eF?$|lVaiU3Lh~EA#o>"rA #sHD:  # h4>T]]9 MTJUFQC]?FD?b.?F~ tC?F7܄E?P6 v5>JuM` ?)uJt  [It=Wq.IRݮIl$dvĤ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl (s"e e v0d @i!Em$}! K2?b6 3?ALb4(&$3/ 4/U%-|# e|'b6 ?Fiie<59 FX/G'0UBb6!}40 aFuJlJ 8>UhAA !1!Z(. E|zo}#<b$] RVJ:F,a0RS S IBf>QEҺV\?Ft@K\B?U--# i[F @ h<53nI9fKo]o ڵ h5a9YUU aUU~ 3yk]Q ieq5 h2r_U"]rA #sHD:  # h8>T ]]9 MTJUF:??Fq12?F AE?Fzl]?P6 6>JuM`lW? SuJt  /êQPͰMtA[_Y#MVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @nib#zv t#b( (+&B" ?APb)4(z$CD3//&*115 k`?CopyrigTtk(wc)k2009kM0c0o%s0f21r01ua0i0n.k_ Al @ 8UsBe0e0v @ d@1H4-,3 7%bO&&i Ab~59 &&XGK"'0U!R&!40 FJlJ Q! (sQsQ]31heA!Zsꈐ43 0 @?/n' #M b7fJ:] f? @ fA B &C  fE  &F b & 4&H fH&I \&J fp&K &L f&M &N &&O & f&Q &R f6S $6T f86U L6V f`6W t6X f6Y 6Z f6[ 6\ f6] 6^ fF_ F` f(Fa ~L1B~`1F~t1[D1e1R~1~1Z~1^~1b~Af~Aj~(AnT#3 AMJUF6٫&?Fh%4+ޱ?F*5?F$_D"w?P6 8>JuM` ?uJt  (LbIt=W,MyIR_FIl#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6ig5V1`e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ۰Z@]0?c Q#Oc Q\}|zeE:O)0:iG`iw*h<5a#j8oJiqze~EA#o>"rA #sHD:7 # h4>T]]9 MTJUFI)?FUj?FeTC?FǿJWc^?P6 v9>JuM` ?)uJt  6|.It=WL\} IR?8IlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LQ DO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!Z!姈?F9"rA #sHD:7 # h#4>T#3 AMJUFe{5?F?Fs۰Z?FNb_+iFw?P6 :>JuM` ?uJt  DqIt=W{*IRUo_}WIlTy6ވ#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJrO@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij? mh~EA#o>"rA k#sHD:7 # h4>T]]9 MTJUFS?FަƢ?F'?FǿJWc^?P6 v;>JuM` ?)uJt  cIt=W(?IRkzIlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. 
_ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 $V+ZlJ]Uhz@!A!Z5?!?F9&sA #<# $B jfJ'j`00>oPaYc l]]aE:Ͷ0TbCcba`iK8K8h<5e۰ϿZk(qQlqe~EeFͭ||`ai !hA'o>"rA 5#sHD:7 # h4>T]]9 MTJUF蕯?Fcų[?F!?Flaxy?P6 v<>JuM` ?)uJt  {It=W ůIRE! YBIl p#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!`197b$<# h$B 9b`fJ: m٠uۘ]0:mOc i'A~aEe7D?c  V#NeFaze<5eF|lVaiU3Lh~EA#o>"]rA #sHD:7 # h4>T]]9 MTJUFA!z?FA]?F>WB?Ft=E?P6 v=>JuM` ?)uJt  wT vIt=W?mIRݮIl$dvĤ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?Copy_rigTtt(c)t2U0 9tM c os f"!rԙ !a i n}.t Al U (s"e e 5v0d i!Em$}! K2?b6 ?ALb4($I3/ 4/U%b-|# |'Yb6?Fiie<59 FjX/G'0UBib6!}40 aFuJlJ8>UhAA@ !1!Z}o'- E|zo}#<b$] VJ:FQw0RS S IBf>QE2!?Fp\BU--# i[F @ h<53n CU9fKo]o  h5a9YUU aUUwi]Q ieq5 hb2r_U"rA #sHD:7 # h8>T ]]9 MTJUF gl6?FDb,)?FUw)٫?FQ`m ^?P6 >>JuM`lW? SuJt  LUMtA[_ʓWMVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @nib#zv t#b( (+&B" ?APb)4(z$CD3//&*115 k`?CopyrigTtk(wc)k2009kM0c0o%s0f21r01ua0i0n.k_ Al @ 8UsBe0e0v @ d@1H4-,3 7%bO&&i Ab~59 &&XGK"'0U!R&!40 FJlJ Q! (sQsQ]31heA!ZU0|z3 0 @?/n' #M b7fJ:]h^PsojRփUaEQ!?!`0 0c s!R'i e:}ɟh~5aef9oa|b c gT#3 AMJUFE//?F_ 8?F*5?F$_D"w?P6 ?>JuM` ?uJt  z7It=WiIR_FIl#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+Ab6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L(a2BO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4a2 9b`fJ۰Z@]0?c Q#Oc Q\}ȱ~aE:)0:iG`iw*h<5a#j8oJiqze~EA#o>"]rA #sHD:7 # h4>T]]9 MTJUFK?Fc>,Λ?FeTC?FǿJWc^?P6 v@>JuM` ?)uJt  jDFIt=W-M3}IR?8IlRv\#>2zGz?MHJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L(BO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!A!Z!?F9v"rA # sHD:7 # h#4>T#3 AMJUFQL ?F7ԩ+?Fs۰Z?FNb_+iFw?P6 A>JuM` ?uJt  Ç\UIt=W @ٶIRUo_}WIlTy6ވ#>2zGz?N@MJ@eb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L( DO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ:rO@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij? mh~EA#o>"rA k#sHD:7 # h4>T]]9 MTJUFLli[?FQ>?F'?FǿJWc^?P6 vB>JuM` ?)uJt  ICIt=WN3sIRkzIlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 $V+ZlJ]Uhz@!A!Z5?!?F9&sA #<# $B jfJ'j`00>oPaYc l]]aE:Ͷ0TbCcba`iK8K8h<5e۰ϿZk(qQlqe~EeFͭ||`ai !hA'o>"rA 5#sHD:7 # h4>T]]9 MTJUF63?F8.G?F!?Flaxy?P6 vC>JuM` ?)uJt  ͧILIt=Wf IRE! YBIl p#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LQ DO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!`197b$<# h$B 9b`fJ: m٠uۘ]0:mOc i'A~aEe7D?c  V#NeFaze<5eF|lVaiU3Lh~EA#o>"]rA #sHD:7 # h4>T]]9 MTJUFPHY?FpN?F>WB?Ft=E?P6 rAJuM` ?)uJt  PrIt=WoDIRݮIl$dvĤ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl (s"e e v0d @i!Em$}! K2?b6 3?ALb4(&$3/ 4/U%-|# e|'b6 ?Fiie<59 FX/G'0UBb6!}40 aFuJlJ 8>UhAA !1!Z}'- E|zo}#<bc$] )VJ:FQw00RS S IBf>QE2!?Fp\BU--# i?[F @ h<53n CU9fKo]o ڵ h5a9YUU aUU?wi]Q ieq5 h2r_U"rA #sHD:7 # h8>T ]]9 MTJUFǦmڷ?FWZgL?FUw)٫?FQ`m ^?P6 E>JuM`lW? 
SuJt  NKMtA[X_*`lMVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @nib#zv t#b( (+&B" ?APb)4(z$CD3//&*115 k`?CopyrigTtk(wc)k2009kM0c0o%s0f21r01ua0i0n.k_ Al @ 8UsBe0e0v @ d@1H4-,3 7%bO&&i Ab~59 &&XGK"'0U!R&!40 FJlJ Q! (sQsQ]31heA!ZU0|z3 0 @?/n' #M b7fJ:]h^PsojRփUaEQ!?!`0 0c s!R'i e:}ɟh~5aef9oa|b c gT#3 AMJUFܹ?F cM?F*5?F$_D"w?P6 F>JuM` ?uJt  @@|It=Wn56uIR_FIl#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ۰Z@]0?c Q#Oc Q\}ȱ~aE:)0:iG`iw*h<5a#j8oJiqze~EA#o>"]rA #sHD:7 # h4>T]]9 MTJUF (?FthZ?FeTC?FǿJWc^?P6 vG>JuM` ?)uJt  It=WX IR?8IlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%`|# |'1b6LQBBO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!Z!姈?F9"rA #sHD:7 # h#4>T#3 AMJUFr!Qw?Fvk?Fs۰Z?FNb_+iFw?P6 AJuM` ?uJt  1It=WOIRUo_}WIlTy6ވ#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L(BO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4 9b`fJ:rO@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij? mh~EA#o>"rA k#sHD:7 # h4>T]]9 MTJUF!P4K?F l?F'?FǿJWc^?P6 vI>JuM` ?)uJt  {d5Z\It=W2;wEIRkzIlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 $V+ZlJ]Uhz@!A!Z5?!?F9&sA #<# $B jfJ'j`00>oPaYc ?l]]beE:Ͷǝ0TbCcba`iK8K8h<5e۰Zk(qQlqe~EeFͭ||`ai !hA'o>"rA #sHD:7 # h4>T]]9 MTJUFYzUv?FS s9?F!?Flaxy?P6 vJ>JuM` ?)uJt  V"(It=W7:LIRE! YBIl p#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!`197b$<# h$B 9b`fJ: m٠uۘ]0:mOc i'A~aEe7D?c  V#NeFaze<5eF|lVaiU3Lh~EA#o>"]rA #sHD:7 # h4>T]]9 MTJUF)"z>?F^i2VS?F>WB?Ft=E?P6 vK>JuM` ?)uJt  7It=WܯHIRݮIl$dvĤ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl (s"e e v0d @i!Em$}! K2?b6 3?ALb4(&$3/ 4/U%-|# e|'b6 ?Fiie<59 FX/G'0UBb6!}40 aFuJlJ 8>UhAA !1!Z}'- E|zo}#<b$] RVJ:FQwa0RS S IBf>QE2!?Fp\B?U--# i[F @ h<53n CU9fKo]o ڵ h5a9YUU aUU~wi]Q ieq5 h2r_U"]rA #sHD:7 # h8>T ]]9 MTJUF{KY?F#v\?FUw)٫?FQ`m ^?P6 L>JuM`lW? SuJt  o|MtA[_"oRMVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @nib#zv t#b( (+&B" ?APb)4(z$CD3//&*115 k`?CopyrigTtk(wc)k2009kM0c0o%s0f21r01ua0i0n.k_ Al @ 8UsBe0e0v @ d@1H4-,3 7%bO&&i Ab~59 &&XGK"'0U!R&!40 FJlJ Q! (sQsQ]31heA!ZU0|z3 0 @?/n' #M b7fJ:]h^PsojRփUaEQ!?!`0 0c s!R'i e:}ɟh~5aef9oa|b c gT#3 AMJUF<5^?FUP?F*5?F$_D"w?P6 M>JuM` ?uJt  .9BWyIt=Wnq׏8IR_FIl#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L( DO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ۰Z@]0?c Q#Oc Q\}ȱ~aE:)0:iG`iw*h<5a#j8oJiqze~EA#o>"]rA #sHD:7 # h4>T]]9 MTJUF?F͍~eTC?FJWc^?P6 N>JuM` ?juJt  HIt=WByIR?8Il?Rv\#>2zGz?@MJAebM #zWb(#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!ZA!Z!?F9"rA #sHD:7 # h#4>T#3 AMJUFSw?FK?Fs۰Z?FNb_+iFw?P6 O>JuM` ?uJt  fd>It=WN6sgHRUo}WIlT?y6ވ#>2zGz?@MJAebM #zWb(#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. 
WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!A!1l&fL# $<EfB 9b`fJ:r'@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij mh~EA#o>"rA 5#sHD:7 # h4>T]]9 MTJUF%u)H׾?FȐ?F'?FǿJWc^?P6 rAJuM` ?)uJt  ُuߓIt=Wί IRkzIlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTtt(c])t20 9tuM c os If"!r !a i n.t WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6LLs2BO ie<59 FXG/'0U`Rb6-!}40 V+Z lJ]UhzЎ!A!Z5!O?F9&sA#<# $s2 %jfJ'j`00>oPaYc l]]aE:Ͷc0TbCcba`iK8K8h<5e۰Zk(qQlqe~EeFͭ||`ai !hA'o>v"rA # sHD:7 # h4>T]]9 MTJUFf^oO?F]?F!?Flaxy?P6 vQ>JuM` ?)uJt  L+It=WݑHIRE! YBIl p#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!`197b$<# h$B 9b`fJ: m٠uۘ]0:mOc i'A~aEe7D?c  V#NeFaze<5eF|lVaiU3Lh~EA#o>"]rA #sHD:7 # h4>T]]9 MTJUFr,?F4w?F>WB?Ft=E?P6 vR>JuM` ?)uJt  6j>It=WNFFIRݮIl$dvĤ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl (s"e e v0d @i!Em$}! K2?b6 3?ALb4(&$3/ 4/U%-|# e|'b6 ?Fiie<59 FX/G'0UBb6!}40 aFuJlJ 8>UhAA !1!Z}'- E|zo}#<b$] RVJ:FQwa0RS S IBf>QE2!?Fp\B?U--# i[F @ h<53n CU9fKo]o ڵ h5a9YUU aUU~wi]Q ieq5 h2r_U"]rA #sHD:7 # h8>T ]]9 MTJUF_5=~?Fe,=Vݼ?FUw)٫?FQ`m ^?P6 S>JuM`lW? SuJt  ?aV_MtA[_=p'MVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @nib#zv t#b( (+&B" ?APb)4(z$CD3//&*115 k`?CopyrigTtk(wc)k2009kM0c0o%s0f21r01ua0i0n.k_ Al @ 8UsBe0e0v @ d@1H4-,3 7%bO&&i Ab~59 &&XGK"'0U!R&!40 FJlJ Q! (sQsQ]31heA!ZU0|z3 0 @?/n' #M b7fJ:]h^PsojRփUaEQ!?!`0 0c s!R'i e:}ɟh~5aef9oa|b c gT#3 AMJUF<@έ?F&¬?F*5?F$_D"w?P6 T>JuM` ?uJt  &lIt=WAI׃IR_FIl#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJa6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6 LeE O @ie<59 FXG'0U`Rb6!}40 V+ZlJQI0h|G!A !6&f# &$<EfBM 9b`fJ۰Z@]0?c Q#Oc ?Q\}ֱ~aE:)'0:iG`iw*h<5a#j8oJiqze`~EA#o>"rA #sHD:7 # h4>T]]9 MTJUFpq?Fq֌?FeTC?FǿJWc^?P6 vU>JuM` ?)uJt  0@hIt=WmiIR?8IlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LQBBO ie<59 FXG'0U`RŴb6!}40 V+ZlJQ@hh|G!A!Z!?F9v"rA # sHD:7 # h#4>T#3 AMJUF߳ڮ?F$ ?Fs۰Z?FNb_+iFw?P6 V>JuM` ?uJt  JJ&qIt=WrhIRUo_}WIlTy6ވ#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6big581e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ:rO@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij? mh~EA#o>"rA k#sHD:7 # h4>T]]9 MTJUF>=9_?FL(Ѿ?F'?FǿJWc^?P6 vW>JuM` ?)uJt  V6>nIt=WgD~baIRkzIlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 $V+ZlJ]Uhz@!A!Z5?!?F9&sA #<# $B jfJ'j`00>oPaYc ?l]]|eE:Ͷǝ0TbCcba`iK8K8h<5e۰Zk(qQlqe~EeFͭ||`ai !hA'o>"rA #sHD:7 # h4>T]]9 MTJUFBP(?FxQ8?F!?Flaxy?P6 vX>JuM` ?)uJt  9KIt=WYGJIRE! YBIl p#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FD'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!`197b$<# h$B 9b`fJ: m٠uۘ]0:mOc i'A~aEe7D?c  V#NeFaze<5eF|lVaiU3Lh~EA#o>"]rA #sHD:7 # h4>T]]9 MTJUFMW?FM6N?F>WB?Ft=E?P6 vY>JuM` ?)uJt  1|It=WƯ:IRݮIl$dvĤ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl (s"e e v0d @i!Em$}! 
K2?b6 3?ALb4(&$3/ 4/U%-|# e|'b6 ?Fiie<59 FX/G'0UBb6!}40 aFuJlJ 8>UhAA !1!Z}'- E|zo}#<b$] RVJ:FQwa0RS S IBf>QE2!?Fp\BU--| i?[F @ h<53n CU9fKo]o ڵ h5a9YUU aUU?wi]Q ieq5 h2r_U"rA #sHD:7 # h8>T ]]9 MTJUF('J?FK5+:?FUw)٫?FQ`m ^?P6 Z>JuM`lW? SuJt  y SksMtA[_eMVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @nib#zv t#b( (+&B" ?APb)4(z$CD3//&*115 k`?CopyrigTtk(wc)k2009kM0c0o%s0f21r01ua0i0n.k_ Al @ 8UsBe0e0v @ d@1H4-,3 7%bO&&i Ab~59 &&XGK"'0U!R&!40 FJlJ Q! (sQsQ]31heA! U0|z3 0 @?ҏn'R# b7fJ:]h^PsojR֯UaEQ!?ߟ!`0 0c s!R'i e:}ɟh~5aef9oa|b c gT#3 AMJUFB?F̒-8?F*5?F$_D"w?P6 [>JuM` ?uJt  4E$pIt=WLx4IR_FIl#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ۰Z@]0?c Q#Oc Q\}ȱ~aE:)0:iG`iw*h<5a#j8oJiqze~EA#o>"]rA #sHD:7 # h4>T]]9 MTJUFJ*ӏ?Fc?FeTC?FǿJWc^?P6 v\>JuM` ?)uJt  uIt=Wt]aIR?8IlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!Z!姈?F9"rA #sHD:7 # h#4>T#3 AMJUF"?F!S/?Fs۰Z?FNb_+iFw?P6 ]>JuM` ?uJt  B?˳It=WFp\cIRUo_}WIlTy6ވ#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ:rO@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij? mh~EA#o>"rA k#sHD:7 # h4>T]]9 MTJUF?F2\D?F'異JWc^?P6 ^>JuM` ?juJt  rs"It=WVIRkzIl?Rv\#>2zGz?@MJAebM #zWb(#+#+#&JBp!p!5 g`?CopyrigT}tZ(c)ZW20 9ZM ]c os f"R!r !a i n.Z AQl [&s"e e v0d @i!Em$+A?b6 ?AgJaT6` F !!l"zbAR +6ig5V181 ?2?;:C?'$4/U%G|# |'b6L3FO ie<59 FXG'K0U`Rb6!}4K0 V+ZlJ]Uhz!A!Z5!?F9&߮sA#`<# $B jfJ'j?`00>oPaYc l]G]aE:Ͷ0TbCcba`i?K8K8h<5e۰Zk(qׁQlqe~EeFͭ||`ai !hA'o>"]rA #sHD:7 # h4>T]]9 MTJUF79 Ğ?Fl_n.?F!?Flaxy?P6 v_>JuM` ?)uJt  sd It=W> IRE! YBIl p#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!`197b$<# h$B 9b`fJ: m٠uۘ]0:mOc i'A~aEe7D?c  V#NeFaze<5eF|lVaiU3Lh~EA#o>"]rA #sHD:7 # h4>T]]9 MTJUF-#&\?FZ|?F>WB?Ft=E?P6 v`>JuM` ?)uJt  [It=Wq.IRݮIl$dvĤ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl (s"e e v0d @i!Em$}! K2?b6 3?ALb4(&$3/ 4/U%-|# e|'b6 ?Fiie<59 FX/G'0UBb6!}40 aFuJlJ 8>UhAA !1!Z}'- E|zo}#<b$] RVJ:FQwa0RS S IBf>QE2!?Fp\B?U--# i[F @ h<53n CU9fKo]o ڵ h5a9YUU aUU~wi]Q ieq5 h2r_U"]rA #sHD:7 # h8>T ]]9 MTJUFW>?F7 ?FUw)٫?FQ`m ^?P6 a>JuM`lW? SuJt  /êQPͰMtA[_Y#MVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @nib#zv t#b( (+&B" ?APb)4(z$CD3//&*115 k`?CopyrigTtk(wc)k2009kM0c0o%s0f21r01ua0i0n.k_ Al @ 8UsBe0e0v @ d@1H4-,3 7%bO&&i Ab~59 &&XGK"'0U!R&!40 FJlJ Q! (sQsQ]31heA!ZU0|z3 0 @?ҏn'0S b7fJ:]h^PsojR֯UaEQ!?ߟ!`0 0c s!R'i e:}ɟh~5aef9oa|b c g~L1B~`1F~t1[D1e1R~1~1Z~1^~1b~Af~Aj~(AnT#3 AMJUF6٫&?Fh%4+ޱ?F*5?F$_D"w?P6 c>JuM` ?uJt  (LbIt=W,MyIR_FIl#>2zGz?N@MJAeb #z (#+#+T#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A ?b6 ?AgJamT6G` F !!l"zbA +6iTg5V181 ?2?;:C?'I4/U%G,|# |'b6&L D O @ie<59 FXG'0U`Rb6!}40 V+ZlJQI0h|G!A !16&f &$<EfBM 9b`fJ۰Z@]0?c Q#Oc Q\}|zeE:)0:iG`i?w*h<5a#j8oJiqze~EA#o>"rA k#sHD:b # h4>T]]9 MTJUFI)?FUj?FeTC?FǿJWc^?P6 vd>JuM` ?)uJt  6|.It=WL\} IR?8IlRv\#>2zGz?@M2JAe7b #zQ (#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. 
WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6LL0DO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!ZA!Z!?F9"rA #sHD:b # h#4>T#3 AMJUFe{5?F?Fs۰Z?FNb_+iFw?P6 e>JuM` ?uJt  DqIt=W{*IRUo_}WIlTy6ވ#>2zGz?N@MJAeb #z (#+#+T#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A ?b6 ?AgJamT6G` F !!l"zbA +6iTg5V181 ?2?;:C?'I4/U%G,|# |'b6&L0D O @ie<59 FXG'0U`Rb6!}40 V+ZlJQI0h|G!A !16&f &$<EfBM 9b`fJ:r@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij mh~EA#o>"rA #sHD:b # h4>T]]9 MTJUFS?FަƢ?F'?FǿJWc^?P6 vf>JuM` ?)uJt  cIt=W(?IRkzIlRv\#>2zGz?@M2JAe7b #zQ (#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6LL DO ie<59 FXG/'0U`Rb6-!}40 V+Z lJ]UhzЎ!A!Z5!O?F9&sA#< $B %jfJ'j`00>oPaYc l]]aE:Ͷc0TbCcba`iK8K8h<5e۰Zk(qQlqe~EeFͭ||`ai !hA'o>v"rA # sHD:b # h4>T]]9 MTJUF蕯?Fcų[?F!?Flaxy?P6 vg>JuM` ?)uJt  {It=W ůIRE! YBIl p#>2zGz?@M2JAe7b #zQ (#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A?b6 ?A mT6G` F !!l"zbA +6iTg5V181 ?2?;:C?'I4/U%G,|# |'b6&L2B O @ie<59 FXG'0U`Rb6!}40 V+ZlJQI0h|G!A !19b$< $2M 9b`fJ: m٠uۘ]ᇄ0:mOc i'A~aEeD?c  V#NeFaze<5eF|lVaiU?3Lh~EA#o>"rA k#sHD:b # h4>T]]9 MTJUFA!z?FA]?F>WB?Ft=E?P6 rUAJuM` ?)uJt  wT vIt=W?mIRݮIl$dvĤ#>2zGz?@M JAe #z (#+#+#&*Jp!p!5 g`?CopyrigTtt(wc)t20 9tM c o%s f"!r !ua i n.t_ Al (Us"e e v0 d i!Em$}! K2?0b6 ?ALcb4($3/ 4/U%-X|# |'b6?Fiie<59 FX/G_'0UBb6Z!}40 aFuJlJ8>UhAA Ў!1!Z}'[- E|zo}#;<b$T] VJ:FQw0RS S IBf>QE2!?Fp\BU--# i[F @ h<53n CU9fKo]o ?ڵ h5a9YUU aUUwi㼕]Q ieOq5 h2r_U"rA #sHD:b # h8>T ]]9 MTJUF gl6?FDb,)?FUw)٫?FQ`m ^?P6 i>JuM`lW? SuJt  LUMtA[_ʓWMVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @i #zv t#.b( (+&B"f ?APb)4(z$D3//& 115 k`?CopyrigTtk(c])k2009kuM0c0os0If21r01a0i0n.k WAl @ 8sBUe0e0v @d@14-3K 7!bO&& b~59 &&XGK"'0U!R&!40R FJlJ Q! (sQsQ]31eA-!ZU0|z 0f @?n'B# b7fJ:]h^Psoj?R֯UaEQ!?!`0 0c s!R'i e:}ɟh~5aef9oa|b c gT#3 AMJUFE//?F_ 8?F*5?F$_D"w?P6 j>JuM` ?uJt  z7It=WiIR_FIl#>2zGz?N@MJAeb #z (#+#+T#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A ?b6 ?AgJamT6G` F !!l"zbA +6iTg5V181 ?2?;:C?'I4/U%G,|# |'b6fLF O @ie<59 FXG'0U`Rb6!}40 V+ZlJQI0h|G!A !16&f &$<EfBM 9b`fJ۰Z@]0?c Q#Oc ?Q\}ֱ~aE:)'0:iG`iw*h<5a#j8oJiqze`~EA#o>"rA #sHD:b # h4>T]]9 MTJUFK?Fc>,Λ?FeTC?FǿJWc^?P6 vk>JuM` ?)uJt  jDFIt=W-M3}IR?8IlRv\#>2zGz?@M2JAe7b #zQ (#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!ZA!Z!?F9"rA #sHD:b # h#4>T#3 AMJUFQL ?F7ԩ+?Fs۰Z?FNb_+iFw?P6 l>JuM` ?uJt  Ç\UIt=W @ٶIRUo_}WIlTy6ވ#>2zGz?N@MJAeb #z (#+#+T#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A ?b6 ?AgJamT6G` F !!l"zbA +6iTg5V181 ?2?;:C?'I4/U%G,|# |'b6&L D O @ie<59 FXG'0U`Rb6!}40 V+ZMQI0h|G!A !16&f &$<EfBM 9b`fJ:r@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij mh~EA#o>"rA #sHD:b # h4>T]]9 MTJUFLli[?FQ>?F'?FǿJWc^?P6 vm>JuM` ?)uJt  ICIt=WN3sIRkzIlRv\#>2zGz?@M2JAe7b #zQ (#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+Z lJ]UhzЎ!A!Z5!O?F9&sA#< $B %jfJ'j`00>oPaYc l]]aE:Ͷc0TbCcba`iK8K8h<5e۰Zk(qQlqe~EeFͭ||`ai !hA'o>v"rA # sHD:b # h4>T]]9 MTJUF63?F8.G?F!?Flaxy?P6 vn>JuM` ?)uJt  ͧILIt=Wf IRE! YBIl p#>2zGz?@M2JAe7b #zQ (#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. 
WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6LL DO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!A!19b $< $B 9b`fJ: m٠uۘ]0:mOc ?i'A~aEeDͅ?c ? V#NeFaze<5eF|lVaiU3Lh`~EA#o>"rA #sHD:b # h4>T]]9 MTJUFPHY?FpN?F>WB?Ft=E?P6 vo>JuM` ?)uJt  PrIt=WoDIRݮIl$dvĤ#>2zGz?@M JAe #z (#+#+#&*Jp!p!5 g`?CopyrigTt (c) 2U0 9 M c os f"!rԙ !a i n}. Al U (s"e e 5v0d i!Em$}! K2?b6 ?ALb4($I3/ 4/U%b-|# |'Yb6?Fiie<59 FjX/G'0UBib6!}40 aFuJlJ8>UhAA@ !1!Z}o'- E|zo}#<bX$ [ VʼJ:F?Qw0ȒS S I?Bf>QE2!?Fp\BU--# i[ÏF @ h<53n ?CU9fKo]o ڵ h5a9YUU aUUwi]Q i?eq5 h2r_U"rA k#sHD:b # h8>T ]]9 MTJUFǦmڷ?FWZgL?FUw)٫?FQ`m ^?P6 p>JuM`lW? SuJt  NKMtA[X_*`lMVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @i #zv t#.b( (+&B"f ?APb)4(z$D3//& 115 k`?CopyrigTtk(c])k2009kuM0c0os0If21r01a0i0n.k WAl @ 8sBUe0e0v @d@14-3K 7IbO&&i bP~59 &&XGK"'0U!R&!40 FJlJ $Q! (sQsQ]31ZeA!ZU0|z 0 @?n'0S b7fJ:]h?^PsojR֯UaEQ!?!`c0 0c s!R'i e:}ɟh~5aef9oa|b c gT#3 AMJUFܹ?F cM?F*5?F$_D"w?P6 q>JuM` ?uJt  @@|It=Wn56uIR_FIl#>2zGz?N@MJAeb #z (#+#+T#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A ?b6 ?AgJamT6G` F !!l"zbA +6iTg5V181 ?2?;:C?'I4/U%G,|# |'b6fLF O @ie<59 FXG'0U`Rb6!}40 V+ZlJQI0h|G!A !16&f &$<EfBM 9b`fJ۰Z@]0?c Q#Oc ?Q\}ֱ~aE:)'0:iG`iw*h<5a#j8oJiqze`~EA#o>"rA #sHD:b # h4>T]]9 MTJUF (?FthZ?FeTC?FǿJWc^?P6 vr>JuM` ?)uJt  It=WX IR?8IlRv\#>2zGz?@M2JAe7b #zQ (#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6LL2BO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!ZA!Z!?F9"rA #sHD:b # h#4>T#3 AMJUFr!Qw?Fvk?Fs۰Z?FNb_+iFw?P6 s>JuM` ?uJt  1It=WOIRUo_}WIlTy6ވ#>2zGz?N@MJAeb #z (#+#+T#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A ?b6 ?AgJamT6G` F !!l"zbA +6iTg5V181 ?2?;:C?'I4/U%G,|# |'b6&L D O @ie<59 FXG'0U`Rb6!}40 V+ZlJQI0h|G!A !16&f &$<EfBM 9b`fJ:r@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij mh~EA#o>"rA #sHD:b # h4>T]]9 MTJUF!P4K?F l?F'?FǿJWc^?P6 vt>JuM` ?)uJt  {d5Z\It=W2;wEIRkzIlRv\#>2zGz?@M2JAe7b #zQ (#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6LL DO ie<59 FXG/'0U`Rb6-!}40 V+Z lJ]UhzЎ!A!Z5!O?F9&sA#< $B %jfJ'j`00>oPaYc l]]beE:Ͷ10TbCcba`iK8K8h<5e۰Zk(qQlqe~EeFͭ||`ai !hA'o>"rA #sHD:b # h4>T]]9 MTJUFYzUv?FS s9?F!?Flaxy?P6 vu>JuM` ?)uJt  V"(It=W7:LIRE! YBIl p#>2zGz?@M2JAe7b #zQ (#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!A!19b $< $B 9b`fJ: m٠uۘ]0:mOc ?i'A~aEeDͅ?c ? V#NeFaze<5eF|lVaiU3Lh`~EA#o>"rA #sHD:b # h4>T]]9 MTJUF)"z>?F^i2VS?F>WB?Ft=E?P6 vv>JuM` ?)uJt  7It=WܯHIRݮIl$dvĤ#>2zGz?@M JAe #z (#+#+#&*Jp!p!5 g`?CopyrigTt (c) 2U0 9 M c os f"!rԙ !a i n}. Al U (s"e e 5v0d i!Em$}! K2?b6 ?ALb4($I3/ 4/U%b-|# |'Yb6?Fiie<59 FjX/G'0UBib6!}40 aFuJlJ8>UhAA@ !1!Z}o'- E|zo}#<bX$ 0[ VʼJ:F?Qw0ȒS S I?Bf>QE2!?Fp\BU--# i[ÏF @ h<53n ?CU9fKo]o ڵ h5a9YUU aUUwi]Q i?eq5 h2r_U"rA k#sHD:b # h8>T ]]9 MTJUF{KY?F#v\?FUw)٫?FQ`m ^?P6 w>JuM`lW? SuJt  o|MtA[_"oRMVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @i #zv t#.b( (+&B"f ?APb)4(z$D3//& 115 k`?CopyrigTtk(c])k2009kuM0c0os0If21r01a0i0n.k WAl @ 8sBUe0e0v @d@14-3K 7IbO&&i bP~59 &&XGK"'0U!R&!40 FJlJ $Q! (sQsQ]31ZeA!ZU0|z 0 @?nK' # bS7fJ:]h^PsojR֯UaEQ!?!`0 0c s!R'i e:}ɟh~5aef9oa|b c gT#3 AMJUF<5^?FUP?F*5?F$_D"w?P6 x>JuM` ?uJt  .9BWyIt=Wnq׏8IR_FIl#>2zGz?N@MJAeb #z (#+#+T#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. 
Al (s"e e v0d i!Em$+A ?b6 ?AgJamT6G` F !!l"zbA +6iTg5V181 ?2?;:C?'I4/U%G,|# |'b6fLF O @ie<59 FXG'0U`Rb6!}40 V+ZlJQI0h|G!A !16&f &$<EfBM 9b`fJ۰Z@]0?c Q#Oc ?Q\}ֱ~aE:)'0:iG`iw*h<5a#j8oJiqze`~EA#o>"rA #sHD:b # h4>T]]9 MTJUF?F͍~eTC?FJWc^?P6 y>JuM` ?juJt  HIt=WByIR?8Il?Rv\#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl (s"e e v0d @i!Em$+A?b6 ?AgJaT6` F !!l"zbAR +6ig5V181 ?2?;:C?'$4/U%G|# |'b6L DO ie<59 FXG'K0U`Rb6!}4K0 V+ZlJQI0h|G!A!Z!?F9"rA #sHD:b # h#4>T#3 AMJUFSw?FK?Fs۰Z?FNb_+iFw?P6 z>JuM` ?uJt  fd>It=WN6sgHRUo}WIlT?y6ވ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl (s"e e v0d @i!Em$+A?b6 ?AgJaT6` F !!l"zbAR +6ig5V181 ?2?;:C?'$4/U%G|# |'b6L3FO ie<59 FXG'K0U`Rb6!}4K0 V+ZlJQI0h|G!A!1&f $<EfB 9b`fJ:r@]0:iG`Oc ?qy"zeEajoJiaze<5e'f`0mFaij mh~EA#o>v"rA # sHD:b # h4>T]]9 MTJUF%u)H׾?FȐ?F'?FǿJWc^?P6 v{>JuM` ?)uJt  ُuߓIt=Wί IRkzIlRv\#>2zGz?@M2JAe7b #zQ (#+#+#&JBp!p!5 g`?CopyrigT}tt(c)tW20 9tM ]c os f"R!r !a i n.t AUl (s"e e v0d @i!Em$+A?b6 ?AgJaT6` F !!l"zbAR +6ig5V181 ?2?;:C?'$4/U%G|# |'b6L3FO ie<59 FXG'K0U`Rb6!}4K0 V+ZlJ]Uhz!A!Z5!?F9&߮sA#`< $B jfJ'j?`00>oPaYc l]G]aE:Ͷ0TbCcba`i?K8K8h<5e۰Zk(qׁQlqe~EeFͭ||`ai !hA'o>"]rA #sHD:b # h4>T]]9 MTJUFf^oO?F]?F!?Flaxy?P6 v|>JuM` ?)uJt  L+It=WݑHIRE! YBIl p#>2zGz?@M2JAe7b #zQ (#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!A!19b $< $B 9b`fJ: m٠uۘ]0:mOc ?i'A~aEeDͅ?c ? V#NeFaze<5eF|lVaiU3Lh`~EA#o>"rA #sHD:b # h4>T]]9 MTJUFr,?F4w?F>WB?Ft=E?P6 v}>JuM` ?)uJt  6j>It=WNFFIRݮIl$dvĤ#>2zGz?@M JAe #z (#+#+#&*Jp!p!5 g`?CopyrigTt (c) 2U0 9 M c os f"!rԙ !a i n}. Al U (s"e e 5v0d i!Em$}! K2?b6 ?ALb4($I3/ 4/U%b-|# |'Yb6?Fiie<59 FjX/G'0UBib6!}40 aFuJlJ8>UhAA@ !1!Z}o'- E|zo}#<b$] VJ:FQw0RS S IBf>QE2!?Fp\BU--# i[F @ h<53n CU9fKo]o  h5a9YUU aUUwi]Q ieq5 hb2r_U"rA #sHD:b # h8>T ]]9 MTJUF_5=~?Fe,=Vݼ?FUw)٫?FQ`m ^?P6 ~>JuM`lW? SuJt  ?aV_MtA[_=p'MVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @i #zv t#.b( (+&B"f ?APb)4(z$D3//& 115 k`?CopyrigTtk(c])k2009kuM0c0os0If21r01a0i0n.k WAl @ 8sBUe0e0v @d@14-3K 7IbO&&i bP~59 &&XGK"'0U!R&!40 FJlJ $Q! (sQsQ]31ZeA!ZU0|z 0 @?nK' # bS7fJ:]h^PsojR֯UaEQ!?!`0 0c s!R'i e:}ɟh~5aef9oa|b c gT#3 AMJUF<@έ?F&¬?F*5?F$_D"w?P6 >JuM` ?uJt  &lIt=WAI׃IR_FIl#>2zGz?N@MJAeb #z (#+#+T#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A ?b6 ?AgJamT6G` F !!l"zbA +6iTg5V181 ?2?;:C?'I4/U%G,|# |'b6fLF O @ie<59 FXG'0U`Rb6!}40 V+ZlJQI0h|G!A !16&f &$<EfBM 9b`fJ۰Z@]0?c Q#Oc ?Q\}ֱ~aE:)'0:iG`iw*h<5a#j8oJiqze`~EA#o>"rA #sHD:b # h4>T]]9 MTJUFpq?Fq֌?FeTC?FǿJWc^?P6 v>JuM` ?)uJt  0@hIt=WmiIR?8IlRv\#>2zGz?@M2JAe7b #zQ (#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!ZA!Z!?F9"rA #sHD:b # h#4>T#3 AMJUF߳ڮ?F$ ?Fs۰Z?FNb_+iFw?P6 >JuM` ?uJt  JJ&qIt=WrhIRUo_}WIlTy6ވ#>2zGz?N@MJAeb #z (#+#+T#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A ?b6 ?AgJamT6G` F !!l"zbA +6iTg5V181 ?2?;:C?'I4/U%G,|# |'b6fLF O @ie<59 FXG'0U`Rb6!}40 V+ZlJQI0h|G!A !16&f &$<EfBM 9b`fJ:r@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij mh~EA#o>"rA #sHD:b # h4>T]]9 MTJUF>=9_?FL(Ѿ?F'?FǿJWc^?P6 v>JuM` ?)uJt  V6>nIt=WgD~baIRkzIlRv\#>2zGz?@M2JAe7b #zQ (#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. 
WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+Z lJ]UhzЎ!A!Z5!O?F9&sA#< $B %jfJ'j`00>oPaYc l]]|eE:Ͷ10TbCcba`iK8K8h<5e۰Zk(qQlqe~EeFͭ||`ai !hA'o>"rA #sHD:b # h4>T]]9 MTJUFBP(?FxQ8?F!?Flaxy?P6 v>JuM` ?)uJt  9KIt=WYGJIRE! YBIl p#>2zGz?@M2JAeWb #z@Q (#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!A!19b $< $B 9b`fJ: m٠uۘ]0:mOc ?i'A~aEeDͅ?c ? V#NeFaze<5eF|lVaiU3Lh`~EA#o>"rA #sHD:b # h4>T]]9 MTJUFMW?FM6N?F>WB?Ft=E?P6 v>JuM` ?)uJt  1|It=WƯ:IRݮIl$dvĤ#>2zGz?@M JAe #z (#+#+#&*Jp!p!5 g`?CopyrigTt (c) 2U0 9 M c os f"!rԙ !a i n}. Al U (s"e e 5v0d i!Em$}! K2?b6 ?ALb4($I3/ 4/U%b-|# |'Ib6?Fiie<59 FjX/G'0UBib6!}40 aFuJlJ8>UhAA@ !1!Z}o'- E|zo}#<bX$!B] VʼJ:F?Qw0ȒS S I?Bf>QE2!?Fp\BU--| i[F @ h<53n CU9fKo]o ڵ h5a9YUU aUUwiǤ]Q işeq5 h2r_U"rA 5#sHD:b # h8>T ]]9 MTJUF('J?FK5+:?FUw)٫?FQ`m ^?P6 >JuM`lW? SuJt  y SksMtA[_eMVYBMpry2VI'>\??\.?lA8 M2ll/f?=@MJB& @i #zv t#.b( (+&B"f ?APb)4(z$D3//& 115 k`?CopyrigTtk(c])k2009kuM0c0os0If21r01a0i0n.k WAl @ 8sBUe0e0v @d@14-3K 7IbO&&i bP~59 &&XGK"'0U!R&!40 FJlJ $Q! (sQsQ]31ZeA!ZU0|z 0 @?n'."# b7fJ:]h?^PsojR֯UaEQ!?!`c0 0c s!R'i e:}ɟh~5aef9oa|b c gT#3 AMJUFB?F̒-8?F*5?F$_D"w?P6 >JuM` ?uJt  4E$pIt=WLx4IR_FIl#>2zGz?N@MJAeb #z (#+#+T#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A ?b6 ?AgJamT6G` F !!l"zbA +6iTg5V181 ?2?;:C?'I4/U%G,|# |'b6&L72B O @ie<59 FXG'0U`Rb6!}40 V+ZlJQI0h|G!A !16&f &$<Ef72M 9b`fJ۰Z@]0?c Q#Oc ?Q\}ֱ~aE:)'0:iG`iw*h<5a#j8oJiqze`~EA#o>"rA #sHD:b # h4>T]]9 MTJUFJ*ӏ?Fc?FeTC?FǿJWc^?P6 v>JuM` ?)uJt  uIt=Wt]aIR?8IlRv\#>2zGz?@M2JAe7b #zQ (#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!ZA!Z!?F9"rA #sHD:b # h#4>T#3 AMJUF"?F!S/?Fs۰Z?FNb_+iFw?P6 >JuM` ?uJt  B?˳It=WFp\cIRUo_}WIlTy6ވ#>2zGz?N@MJAeb #z (#+#+T#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A ?b6 ?AgJamT6G` F !!l"zbA +6iTg5V181 ?2?;:C?'I4/U%G,|# |'b6fLF O @ie<59 FXG'0U`Rb6!}40 V+ZlJQI0h|G!A !16&f &$<EfBM 9b`fJ:r@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij mh~EA#o>"rA #sHD:b # h4>T]]9 MTJUF?F2\D?F'異JWc^?P6 ݉>JuM` ?juJt  rs"It=WVIRkzIl?Rv\#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 `?CopyrigTtZ(c)Z20 9ZM c. os f"!r !a i n.Z Al [&s"e ej v0d  i!Em$+A?b6 ?zAgJaۢT6` F !!l"zbjA) +6ig5UV181 ?2?F;:C?'4/U%G|# |'b6LA#BO ie<59 FXG'0U`Rb6!}40 V+ZlJ]UhzA!Z5!?F9&s7A#< X$" jfJ'j`00>oPaYc l]]aE:?Ͷ0TbCcba`iK8K8h<5e?۰Zk(qQlqe~EeFͭ||`ai !h`A'o>"rA #sHD:b # h4>T]]9 MTJUF79 Ğ?Fl_n.?F!?Flaxy?P6 v>JuM` ?)uJt  sd It=W> IRE! YBIl p#>2zGz?@M2JAe7b #zQ (#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!A!B19b$< $BM 9b`fJ: m٠uۘ]ᇄ0:mOc i'A~aEeD?c  V#NeFaze<5eF|lVaiU?3Lh~EA#o>"rA k#sHD:b # h4>T]]9 MTJUF-#&\?FZ|?F>WB?Ft=E?P6 v>JuM` ?)uJt  [It=Wq.IRݮIl$dvĤ#>2zGz?@M JAe #z (#+#+#&*Jp!p!5 g`?CopyrigTt (c) 2U0 9 M c os f"!rԙ !a i n}. Al U (s"e e 5v0d i!Em$}! K2?b6 ?ALb4($I3/ 4/U%b-|# |'Yb6?Fiie<59 FjX/G'0UBib6!}40 aFuJlJ8>UhAA@ !1!Z}o'- E|zo}#<b$] VM:FQw0RS S IBf>E2!?Fp\BU--# i[F @ h<53n CU9fKo]o  h5a9YUU aUUwi]Q ieq5 hb2r_U"rA #sHD:b # h8>T ]]9 MTJUFW>?F7 ?FUw)٫?FQ`m ^?P6 >JuM`lW? 
SuJt  /êQPͰMtA[_Y#MVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @i #zv t#.b( (+&B"f ?APb)4(z$D3//& 115 k`?CopyrigTtk(c])k2009kuM0c0os0If21r01a0i0n.k WAl @ 8sBUe0e0v @d@14-3K 7IbO&&i bP~59 &&XGK"'0U!R&!40 FJlJ $Q! (sQsQ]31ZeA!ZU0|z 0 @?nK' # bS7fJ:]h^PsojR֯UaEQ!?!`0 0c s!R'i e:}ɟh~5aef9oa|b c g~L1B~`1F~t1[D1e1R~1~1Z~1^~1b~Af~Aj~(AnT#3 AMJUF6٫&?Fh%4+ޱ?F*5?F$_D"w?P6 >JuM` ?uJt  (LbIt=W,MyIR_FIl#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L(BO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4 9b`fJ۰Z@]0?c Q#Oc Q\}|zeE:O)0:iG`iw*h<5a#j8oJiqze~EA#o>"rA #sHD: # h4>T]]9 MTJUFI)?FUj?FeTC?FǿJWc^?P6 v>JuM` ?)uJt  6|.It=WL\} IR?8IlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!Z!姈?F9"rA #sHD: # h#4>T#3 AMJUFe{5?F?Fs۰Z?FNb_+iFw?P6 >JuM` ?uJt  DqIt=W{*IRUo_}WIlTy6ވ#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ:rO@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij? mh~EA#o>"rA k#sHD: # h4>T]]9 MTJUFS?FަƢ?F'?FǿJWc^?P6 v>JuM` ?)uJt  cIt=W(?IRkzIlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 $V+ZlJ]Uhz@!A!Z5?!?F9&sA #<# $B jfJ'j`00>oPaYc l]]aE:Ͷ0TbCcba`iK8K8h<5e۰ϿZk(qQlqe~EeFͭ||`ai !hA'o>"rA 5#sHD: # h4>T]]9 MTJUF蕯?Fcų[?F!?Flaxy?P6 v>JuM` ?)uJt  {It=W ůIRE! YBIl p#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!`197b$<# h$B 9b`fJ: m٠uۘ]0:mOc i'A~aEe7D?c  V#NeFaze<5eF|lVaiU3Lh~EA#o>"]rA #sHD: # h4>T]]9 MTJUFA!z?FA]?F>WB?Ft=E?P6 v>JuM` ?)uJt  wT vIt=W?mIRݮIl$dvĤ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?Copy_rigTtt(c)t2U0 9tM c os f"!rԙ !a i n}.t Al U (s"e e 5v0d i!Em$}! K2?b6 ?ALb4($I3/ 4/U%b-|# |'Yb6?Fiie<59 FjX/G'0UBib6!}40 aFuJlJ8>UhAA@ !1!Z}o'- E|zo}#<b$] VJ:FQw0RS S IBf>QE2!?Fp\BU--# i[F @ h<53n CU9fKo]o  h5a9YUU aUUwi]Q ieq5 hb2r_U"rA #sHD: # h8>T ]]9 MTJUF gl6?FDb,)?FUw)٫?FQ`m ^?P6 >JuM`lW? SuJt  LUMtA[_ʓWMVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @nib#zv t#b( (+&B" ?APb)4(z$CD3//&*115 k`?CopyrigTtk(wc)k2009kM0c0o%s0f21r01ua0i0n.k_ Al @ 8UsBe0e0v @ d@1H4-,3 7%bO&&i b~59 && XGK"'0U!R&!40 FJlJ Q! (sQsQ]31heA!ZU0|z3 0 @?ҏn'B# b7fJ:]h^PsojR֯UaEQ!?ߟ!`0 0c s!R'i e:}ɟh~5aef9oa|b c gT#3 AMJUFE//?F_ 8?F*5?F$_D"w?P6 >JuM` ?uJt  z7It=WiIR_FIl#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ۰Z@]0?c Q#Oc Q\}ȱ~aE:)0:iG`iw*h<5a#j8oJiqze~EA#o>"]rA #sHD: # h4>T]]9 MTJUFK?Fc>,Λ?FeTC?FǿJWc^?P6 v>JuM` ?)uJt  jDFIt=W-M3}IR?8IlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!Z!姈?F9"rA #sHD: # h#4>T#3 AMJUFQL ?F7ԩ+?Fs۰Z?FNb_+iFw?P6 >JuM` ?uJt  Ç\UIt=W @ٶIRUo_}WIlTy6ވ#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. 
Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ:rO@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij? mh~EA#o>"rA k#sHD: # h4>T]]9 MTJUFLli[?FQ>?F'?FǿJWc^?P6 v>JuM` ?)uJt  ICIt=WN3sIRkzIlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 $V+ZlJ]Uhz@!A!Z5?!?F9&sA #<# $B jfJ'j`00>oPaYc l]]aE:Ͷ0TbCcba`iK8K8h<5e۰ϿZk(qQlqe~EeFͭ||`ai !hA'o>"rA 5#sHD: # h4>T]]9 MTJUF63?F8.G?F!?Flaxy?P6 v>JuM` ?)uJt  ͧILIt=Wf IRE! YBIl p#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!`197b$<# h$B 9b`fJ: m٠uۘ]0:mOc i'A~aEe7D?c  V#NeFaze<5eF|lVaiU3Lh~EA#o>"]rA #sHD: # h4>T]]9 MTJUFPHY?FpN?F>WB?Ft=E?P6 v>JuM` ?)uJt  PrIt=WoDIRݮIl$dvĤ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl (s"e e v0d @i!Em$}! K2?b6 3?ALb4(&$3/ 4/U%-|# e|'b6 ?Fiie<59 FX/G'0UBb6!}40 aFuJlJ 8>UhAA !1!Z}'- E|zo}#<b$] RVJ:FQwa0RS S IBf>QE2!?Fp\B?U--# i[F @ h<53n CU9fKo]o ڵ h5a9YUU aUU~wi]Q ieq5 h2r_U"]rA #sHD: # h8>T ]]9 MTJUFǦmڷ?FWZgL?FUw)٫?FQ`m ^?P6 AJuM`lW? SuJt  NKMtA[X_*`lMVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @nib#zv t#b( (+&B" ?APb)4(z$CD3//&*115 k`?CopyrigTtk(wc)k2009kM0c0o%s0f21r01ua0i0n.k_ Al @ 8UsBe0e0v @ d@1H4-,3 7%bO&&i Ab~59 &&XGK"'0U!R&!40 FJlJ Q! (sQsQ]31heA!ZU0|z3 0 @?ҏn'# b7fJ:]h^PsojR֯UaEQ!?ߟ!`0 0c s!R'i e:}ɟh~5aef9oa|b c gT#3 AMJUFܹ?F cM?F*5?F$_D"w?P6 >JuM` ?uJt  @@|It=Wn56uIR_FIl#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ۰Z@]0?c Q#Oc Q\}ȱ~aE:)0:iG`iw*h<5a#j8oJiqze~EA#o>"]rA #sHD: # h4>T]]9 MTJUF (?FthZ?FeTC?FǿJWc^?P6 v>JuM` ?)uJt  It=WX IR?8IlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!Z!姈?F9"rA #sHD: # h#4>T#3 AMJUFr!Qw?Fvk?Fs۰Z?FNb_+iFw?P6 >JuM` ?uJt  1It=WOIRUo_}WIlTy6ވ#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ:rO@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij? mh~EA#o>"rA k#sHD: # h4>T]]9 MTJUF!P4K?F l?F'?FǿJWc^?P6 v>JuM` ?)uJt  {d5Z\It=W2;wEIRkzIlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 $V+ZlJ]Uhz@!A!Z5?!?F9&sA #<# $B jfJ'j`00>oPaYc ?l]]beE:Ͷǝ0TbCcba`iK8K8h<5e۰Zk(qQlqe~EeFͭ||`ai !hA'o>"rA #sHD: # h4>T]]9 MTJUFYzUv?FS s9?F!?Flaxy?P6 v>JuM` ?)uJt  V"(It=W7:LIRE! YBIl p#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!`197b$<# h$B 9b`fJ: m٠uۘ]0:mOc i'A~aEe7D?c  V#NeFaze<5eF|lVaiU3Lh~EA#o>"]rA #sHD: # h4>T]]9 MTJUF)"z>?F^i2VS?F>WB?Ft=E?P6 v>JuM` ?)uJt  7It=WܯHIRݮIl$dvĤ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl (s"e e v0d @i!Em$}! 
K2?b6 3?ALb4(&$3/ 4/U%-|# e|'b6 ?Fiie<59 FX/G'0UBb6!}40 aFuJlJ 8>UhAA !1!Z}'- E|zo}#<b$] RVJ:FQwa0RS S IBf>QE2!?Fp\B?U--# i[F @ h<53n CU9fKo]o ڵ h5a9YUU aUU~wi]Q ieq5 h2r_U"]rA #sHD: # h8>T ]]9 MTJUF{KY?F#v\?FUw)٫?FQ`m ^?P6 >JuM`lW? SuJt  o|MtA[_"oRMVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @nib#zv t#b( (+&B" ?APb)4(z$CD3//&*115 k`?CopyrigTtk(wc)k2009kM0c0o%s0f21r01ua0i0n.k_ Al @ 8UsBe0e0v @ d@1H4-,3 7%bO&&i Ab~59 &&XGK"'0U!R&!40 FJlJ Q! (sQsQ]31heA!ZU0|z3 0 @?/n' #M b7fJ:]h^PsojRփUaEQ!?!`0 0c s!R'i e:}ɟh~5aef9oa|b c gT#3 AMJUF<5^?FUP?F*5?F$_D"w?P6 >JuM` ?uJt  .9BWyIt=Wnq׏8IR_FIl#>2zGz?N@MJAeb #zb(#+B#+#&Mp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L(\"BO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4\" 9b`fJ۰Z@]0?c Q#Oc Q\}ȱ~aE:)0:iG`iw*h<5a#j8oJiqze~EA#o>"]rA #sHD: # h4>T]]9 MTJUF?F͍~eTC?FJWc^?P6 ݤ>JuM` ?juJt  HIt=WByIR?8Il?Rv\#>2zGz?@MJAebM #zWb(#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6LLyBBO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!ZA!Z!?F9"]rA #sHD: # h#4>T#3 AMJUFSw?FK?Fs۰Z?FNb_+iFw?P6 >JuM` ?uJt  fd>It=WN6sgHRUo}WIlT?y6ވ#>2zGz?@MJAebM #zWb(#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!A!1l&fL# $<EfB 9b`fJ:r'@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij mh~EA#o>"rA 5#sHD: # h4>T]]9 MTJUF%u)H׾?FȐ?F'?FǿJWc^?P6 v>JuM` ?)uJt  ُuߓIt=Wί IRkzIlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTtt(c])t20 9tuM c os If"!r !a i n.t WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+Z lJ]UhzЎ!A!Z5!O?F9&sA#<# $B %jfJ'j`00>oPaYc l]]aE:Ͷc0TbCcba`iK8K8h<5e۰Zk(qQlqe~EeFͭ||`ai !hA'o>v"rA # sHD: # h4>T]]9 MTJUFf^oO?F]?F!?Flaxy?P6 v>JuM` ?)uJt  L+It=WݑHIRE! YBIl p#>2zGz?@M2JAe7b #z]b(#+%#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'b6L5*6O ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!19b$<# $4h" 9b`fJ: m٠uۇ]0:mOc i'A~aEeכD?c  V#NeFaze<5eF?|lVaiU3Lh~EA#o>"rA #sHD: # h4>T]]9 MTJUFr,?F4w?F>WB?Ft=E?P6 v>JuM` ?)uJt  6j>It=WNFFIRݮIl$dvĤ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl (s"e e v0d @i!Em$}! K2?b6 3?ALb4(&$3/ 4/U%-|# e|'b6 ?Fiie<59 FX/G'0UBb6!}40 aFuJlJ 8>UhAA !1!Z}'- E|zo}#<b$] RVJ:FQwa0RS S IBf>QE2!?Fp\B?U--# i[F @ h<53n CU9fKo]o ڵ h5a9YUU aUU~wi]Q ieq5 h2r_U"]rA #sHD: # h8>T ]]9 MTJUF_5=~?Fe,=Vݼ?FUw)٫?FQ`m ^?P6 >JuM`lW? SuJt  ?aV_MtA[_=p'MVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @nib#zv t#b( (+&B" ?APb)4(z$CD3//&*115 k`?CopyrigTtk(wc)k2009kM0c0o%s0f21r01ua0i0n.k_ Al @ 8UsBe0e0v @ d@1H4-,3 7%bO&&i Ab~59 &&XGK"'0U!R&!40 FJlJ Q! (sQsQ]31heA!ZU0|z3 0 @?/n' #M b7fJ:]h^PsojRփUaEQ!?!`0 0c s!R'i e:}ɟh~5aef9oa|b c gT#3 AMJUF<@έ?F&¬?F*5?F$_D"w?P6 >JuM` ?uJt  &lIt=WAI׃IR_FIl#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ۰Z@]0?c Q#Oc Q\}ȱ~aE:)0:iG`iw*h<5a#j8oJiqze~EA#o>"]rA #sHD: # h4>T]]9 MTJUFpq?Fq֌?FeTC?FǿJWc^?P6 v>JuM` ?)uJt  0@hIt=WmiIR?8IlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. 
_ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!Z!姈?F9"rA #sHD: # h#4>T#3 AMJUF߳ڮ?F$ ?Fs۰Z?FNb_+iFw?P6 >JuM` ?uJt  JJ&qIt=WrhIRUo_}WIlTy6ވ#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ:rO@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij? mh~EA#o>"rA k#sHD: # h4>T]]9 MTJUF>=9_?FL(Ѿ?F'?FǿJWc^?P6 v>JuM` ?)uJt  V6>nIt=WgD~baIRkzIlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 $V+ZlJ]Uhz@!A!Z5?!?F9&sA #<# $B jfJ'j`00>oPaYc ?l]]|eE:Ͷǝ0TbCcba`iK8K8h<5e۰Zk(qQlqe~EeFͭ||`ai !hA'o>"rA #sHD: # h4>T]]9 MTJUFBP(?FxQ8?F!?Flaxy?P6 v>JuM` ?)uJt  9KIt=WYGJIRE! YBIl p#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!`197b$<# h$B 9b`fJ: m٠uۘ]0:mOc i'A~aEe7D?c  V#NeFaze<5eF|lVaiU3Lh~EA#o>"]rA #sHD: # h4>T]]9 MTJUFMW?FM6N?F>WB?Ft=E?P6 v>JuM` ?)uJt  1|It=WƯ:IRݮIl$dvĤ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl (s"e e v0d @i!Em$}! K2?b6 3?ALb4(&$3/ 4/U%-|# e|'b6 ?Fiie<59 FX/G'0UBb6!}40 aFuJlJ 8>UhAA !1!Z}'- E|zo}#<b$] RVJ:FQwa0RS S IBf>QE2!?Fp\BU--| i?[F @ h<53n CU9fKo]o ڵ h5a9YUU aUU?wi]Q ieq5 h2r_U"rA #sHD: # h8>T ]]9 MTJUF('J?FK5+:?FUw)٫?FQ`m ^?P6 >JuM`lW? SuJt  y SksMtA[_eMVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @nib#zv t#b( (+&B" ?APb)4(z$CD3//&*115 k`?CopyrigTtk(wc)k2009kM0c0o%s0f21r01ua0i0n.k_ Al @ 8UsBe0e0v @ d@1H4-,3 7%bO&&i Ab~59 &&XGK"'0U!R&!40 FJlJ Q! (sQsQ]31heA!ZU0|z3 0 @?/n' #M b7fJ:]h^PsojRփUaEQ!?!`0 0c s!R'i e:}ɟh~5aef9oa|b c gT#3 AMJUFB?F̒-8?F*5?F$_D"w?P6 >JuM` ?uJt  4E$pIt=WLx4IR_FIl#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ۰Z@]0?c Q#Oc Q\}ȱ~aE:)0:iG`iw*h<5a#j8oJiqze~EA#o>"]rA #sHD: # h4>T]]9 MTJUFJ*ӏ?Fc?FeTC?FǿJWc^?P6 v>JuM` ?)uJt  uIt=Wt]aIR?8IlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!Z!姈?F9"rA #sHD: # h#4>T#3 AMJUF"?F!S/?Fs۰Z?FNb_+iFw?P6 >JuM` ?uJt  B?˳It=WFp\cIRUo_}WIlTy6ވ#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ:rO@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij? mh~EA#o>"rA k#sHD: # h4>T]]9 MTJUF?F2\D?F'異JWc^?P6 ݴ>JuM` ?juJt  rs"It=WVIRkzIl?Rv\#>2zGz?@MJAebM #zWb(#+#+#&JBp!p!5 g`?CopyrigT}tZ(c)ZW20 9ZM ]c os f"R!r !a i n.Z AQl [&s"e e v0d @i!Em$+A?b6 ?AgJaT6` F !!l"zbAR +6ig5V181 ?2?;:C?'$4/U%G|# |'b6L3FO ie<59 FXG'K0U`Rb6!}4K0 V+ZlJ]Uhz!A!Z5!?F9&߮sA#`<# $B jfJ'j?`00>oPaYc l]G]aE:Ͷ0TbCcba`i?K8K8h<5e۰Zk(qׁQlqe~EeFͭ||`ai !hA'o>"]rA #sHD: # h4>T]]9 MTJUF79 Ğ?Fl_n.?F!?Flaxy?P6 v>JuM` ?)uJt  sd It=W> IRE! YBIl p#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. 
_ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!`197b$<# h$B 9b`fJ: m٠uۘ]0:mOc i'A~aEe7D?c  V#NeFaze<5eF|lVaiU3Lh~EA#o>"]rA #sHD: # h4>T]]9 MTJUF-#&\?FZ|?F>WB?Ft=E?P6 v>JuM` ?)uJt  [It=Wq.IRݮIl$dvĤ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl (s"e e v0d @i!Em$}! K2?b6 3?ALb4(&$3/ 4/U%-|# e|'b6 ?Fiie<59 FX/G'0UBb6!}40 aFuJlJ 8>UhAA !1!Z}'- E|zo}#<b$] RVJ:FQwa0RS S IBf>QE2!?Fp\B?U--# i[F @ h<53n CU9fKo]o ڵ h5a9YUU aUU~wi]Q ieq5 h2r_U"]rA #sHD: # h8>T ]]9 MTJUFW>?F7 ?FUw)٫?FQ`m ^?P6 >JuM`lW? SuJt  /êQPͰMtA[_Y#MVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @nib#zv t#b( (+&B" ?APb)4(z$CD3//&*115 k`?CopyrigTtk(wc)k2009kM0c0o%s0f21r01ua0i0n.k_ Al @ 8UsBe0e0v @ d@1H4-,3 7%bO&&i Ab~59 &&XGK"'0U!R&!40 FJlJ Q! (sQsQ]31heA!ZU0|z3 0 @?/n' #M b7fJ:]h^PsojRփUaEQ!?!`0 0c s!R'i e:}ɟh~5aef9oa|b c g_+s +EiHG%>F8Z#B9? ~d`ܮ :@+H| <Ss񨚳 P x O   oG  ߫( L!X $ ~( >,諶 / h3(= P7 : A D] WHH膸 K OH >Sϸ V Zq X^t bHHv ex Ii#'z l)| wp `tH x {#' 2߇ z) )8 h #'@ 6B QHE ژ)8G |'hI 6>~K 00"'M 0 N882 74 )6 @8 ߹): oj  W&? \$H1 3 0%%5 17 *:  7<  )h> #' @:" RbG% 8' h) :)+ :- ެ; T82 : : H {(8 }J v((= _/8X? ( ( 2[ "Y( &YX *!( .% 17 R5I( 8nYX < 8@H C"i G QKFiXNXiR~nYAVYYYo]"eHWa(Tg0;j~FinXiȳ&ru'y9}K}4jY=aH(NjjjY#y߰o4i7jPࡾjRijT'yVĬyYQ"iH[94ix]ڷY_o1 i(4~yH6cƎix88:4i<zY?"YHxg܊yy~(xY8YhYoKnYmY85yh-FH#/8x 7o~{(HU!) n$Z(lH+~x_/2)6ߚu:Ʃ8>ZhAl@IE)BIEL 8G5P(hITHKW(Mj[~( ^z8"bxh$Mf &i(mƩ*gqZ(-ulX/xz1W|~35(8lƩX:Z<l>@z(~ X,1UFD  h(^TYYBBUF~?x<F BP(?P? V B666 6 B66h9ȅ2H??\IB@L&d2?-(\.Y'(.sUM!K&Pm/:  0oB`2Fiber], pt c t ansm t #n tw rk equ p e t h rd 0-1e!91A^| (gUGr& wr  .@ wvwpppopp wwpwxwwwwpx=pp pwvLDrag thuesapWonodwipe.bL&d2ɿ~?P?=t˿)"g??)j3@ ?DHD: # h4>T]]9 MTAUF}Da 0?FH7r?F_/++?FXϸ?P6 `A$M (#<#lPJuM{` ?#{{20{F{ZujJtg  ɟ߇{tј` 'ɭ?s>JU2zGz?@M[#|Jk&A@$7b#z #!"b(b)+&J[e#t"t"?'6 I?A$/%Be#115 "`?CopyrigTt (c)020090M0c0oKs0f21r01a0i0n.0 Al0 8sBe0e0v@d0S"{14\#-Xe#3 7'6\% ZLFiie1E9 FXG/'0UR'6-!B40 FJlJ7Q0hLG1ZA!Y(}=K?Fl@I0`fS?Ks`ZsLbL\"JTy+HlPrW?F腺Z>FbyC ua)H0Ynǿl_Vef[U>dxԷErlg uVcQ$@aELe|}?FCʞ*Nc`vna Oc>wok&O8XWleBPmlfVArieoo.HLe+pQ?FYR&`O]vol``}; PaiEz6!iDF`ޣKH_SFn.ki`"rA k#ڃjnw5}6!i{HD: # h4>T]]9 MTUF)Q~ R?F\L#?Fx?F=:?P6 AA M (<Pdx#  $<PdxmJuM` /?9"9"29"F9"Z9"n9"9"9"9"/(Su()Jt%!  _lcd6tY1s2˟ke5n7rs=e57Fq1 >JU2zGzK?=@M3J6A@n4b%Cz@C2b3H%@3HNKNFJ[3 B?F &?A9D POqEUB3AA5 2`?CopyrigTt (c)5P20AP95PM-Pc.+Pos%Pf3R$Qr'P`Qa3Pi%Pn.5P Al{P +XsReSPej+PvPdsP2@AB3b-3 S  WF5 \fiieU9 fjXg2'0UbŴF!D0 TIf]jlJ @maa ]xQQ9A8`3F+G?F| @"$#әF1 "X3]/Lֲ1r2JAn?FkS?FΘtpb N ?_)Ay*t^(|CN7ȗu$#D[DP1X3aT1 :" $z 2? I? uK?$"X2a@s(BS2}:r5qEqEtFdo?F[K ۃ?FJ(OL?P$DT!  bqt%@{rfJHG^"f~Bt2)Џ⏪32Uu5*&?Fl5?F)b&&~?Fcle?P,A&!٘UީȽ|;!GmH﹡^@pw*oAΏ ՟4ޯFUuu #?F{+.W?,b26I>P-nx:aG|ZÉ-DuH rε50!au>uM"D$.@4#ZqTmz?FD?Fb.\i ]aJ`?@~{bcEt/^Vu1 a{&DFT?@l4ϳ$3<@ƒ>Pn}11WBx~|kz}~tesDޠux)LBk(a\6 `G $#Ppztt~Pz+ݿ]s~aϔb-4쇂3~uy0cفMk?Fǒ?F 6_?P(AT~' 6>Ľ?e+P?Toa?CB1wa1 83쇖y&Nfn~bS9xvD<$Q$#QcuNI|" 0)^VUıwm2G:LB3 .쇪;utq~F>`7bt?di д9Brq 8CJ &*짦f4omGL3A/S/#HD: # h4>T]]9 MTMUF؋@?Fce?F5svL*?F7= ?P6 ]A]M (<P]d_]x#_(_<s__JuM` ?_""2"F"Z"n"""uJt  oUxJ5t!21l]5 7,ȃ5$7ǁO>JU2zG/z?=@Ms3J6A@4b3z郷0392bR808;6J[}3 w(B??F ?A4 T? EB}3AA5 2`?CopyrigTt (c)@20@9@M@c@os@fBAr@Aa@i@n.@ AlP HsRe@e@v-PdPk2ADt3-}3C eG?Ft5 r\ViieIU9 VXW2'0U.b?F!ZD0 VZl1 &@(aa 3ArQ1q8Z}3F{و>r/2"0,s\>rt2JA%)?F+?FvWZk>Fy4 wa)`Ayf/!\3s M_lu|a<1u#Ca aܵ0 lkj? 
&_8)?"a@=s(<" !rÀ5 EbqEnu3.h?Fn`?F=/*lq?FF|oĵWA|GK|Wu|V#H1CRW%7#I3Et2~IUnuj "?FBĥج|+zw~>{Iσ]X41yu,A1BaNycЍ@2Wa4y}K6#I@ZltF~fUds!'Hɇ?FQy~<?F$f㷖꿈xywF,(|pgF|'c7|,X&l x קi4bqJ+ o N9z)e~zy0l44t;Ff4t/p1znuf.ʐ Lˮ?F I`V'0'|֠{yl7#y%m"gs7#It~jyozY_ƀ%çM,fFI|^t-,捇&F9~lFV6#I.@t~3ot~F+}|$+p?F?~a@v2rA 3z9Xt|L;W-yG7IGYt#~HD: # h4>T]]9 MTMUF}W?F?F5svL*?F7= ?P6 >M (<Pd_]x#_(_<s__JuM` ?_""2"F"Z"n"""uJt  wCHݛ5t!2-8Lc5 7,ȃ5$7ǁO>JU2zG/z?=@Ms3J6A@4b3z郷0392bR808;6J[}3 w(B??F ?A4 T? EB}3AA5 2`?CopyrigTt (c)@20@9@M@c@os@fBAr@Aa@i@n.@ AlP HsRe@e@v-PdPk2ADt3-}3C eG?Ft5 r\ViieIU9 VXW2'0U.b?F!ZD0 VZl1 &@(aa ]ArQ1q8Z}3F{و>r_2h"0,sf >r)t2JA%)?F+?FvWZk>Fy4 wa)`Ayf/!\3s ޿Mlu|a<1u#Ca aܵ0 lkj? &_8S?"a@=sB(<" !r5 EbqEnu3.h?Fn`?F=/*lq?FF|oĵWA|GK|WuǢ|V#HCRW%7#I3Et2~IUnuj "?FBĥج|ﲖ+zw~>{I]X41yu,A1BϠaNycߍ@2Wa4y}K6#IZltF~fU~ds!'Hɇ?FQy~<?F$fՖxywF,(|pgF|'c7|,X&l x ק1^uJ+ o N9z)e~zy0l44t;Ff4t/p1znuf.ʐ Lˮ?F I`V'0'|֠ yl7#y%m"gs7#It~jyozY_ƀ%çM,fFI|^t-,捇&F9~lFV6#I.@t~3ot~F+}|$+p?F?~a@v2rA 3z9Xt|L;W-yG7IGYt#~HD H# h4>T]]9 MMUF:TJ3?F ?FW7")?F(GP@?P6 f@#A#M (<Pd]x#$<PdxmJuM` /?""2"F"Z"n"""u )Jt!  ߆,ߑ'7t172DJM)527HB)5L7r Dɸ!>JU2zGz?=)@M3J6߀A@E4bM3z03a2b808RKFJ[3 PB?gF ?$A4 O5E3AA5 G2`?CopyrigTt (c)@20P9@M@c@oKs@fBAr@$Qa@i@n.@ Al?P HsCRePe@vUPd7P2AED3 5 -_C %GgFViieE9 VXW2'K0UVbgF!DK0 f!jl$1N@(aa ]3AQ18`3FPʞ>FQ?͉@"#] >"3n3v1lr2JA_[3nB?F#lNa?F ?Fe a)A@y?2oW Vytu#|_ӄ 5u3?\1a u׿@ ]h? d?#t?"2%a@ks(<29tK1r55EqEu}?FG?F U?F(k=|S@ܱ|ŗ-_`ǿmgV0ÖpITf#was2Eu2* ?Fd|ڀ >{$*Jr| ZȘ*'vY9!=a,z0WJ⅏ze#wFUuUfNڿ?F6 +C?F}ۛ)NQv1Vy>ﴺkcY︉0h ӯe#wZq~zd! nS@3(}|{gu2lqZt@>% rgrܥg/?B~fȿe#wϢnuex9?Fqp1C5bb=|=I"~;Jt^E>~KAOA/0ƛOu=e#w yJG?Fާ;נpIZ=gE܎=|grE2`~%-Bg0L0;e3w0䬌3t-~F`3FgG ߈A2rA 3+j[K~@])nATGf@B7e3w7I#Ha`dgHD H# h4>T]]9 MUFQ|3?Fͪ'?FH(])?F?\s?P6 A > (<@P]dx#D $<PdxJuM` ?9"9"29"F9"Z9"n9"9"9"9"/()u()Jt%!  [4c7tY1s2&뒺e5n7-V;e57n?ͷqJ1 >JU2zGz?R=@M3J6A@4b%Cz@C.2b3H%@3HNKNFJ[3 B?F I?A9D POqE*3AA5 2`?CopyrigTt (c)5P2U0AP95PM-Pc+Pos%Pf3R$Qr'P`Qa3Pi%Pn}.5P Al{PU +XsReSPe+P5vPdsP2AED3 (5 -_ SK  WFVi@ieE9 VXW2'0UbţF!D0 If]jl J @maa ]xQQ9A8Z3F꺷S"$#P8hB1 2X2?@s6vK r2J:jtFEZ qrs%@"s )1HA\HG@HA"lNa?F|/(?Fb) a)AyF7U1 "fWmA}$# c6n>}X3?$1 a }x ?@ %? z?M?$"X2%a@s(}{^<[}̈ιƋ/90!3'x+;m3; 3)1F;+x(Nڿ?FT&B?Fb{f,-JQ1;Ejv㓌 +ߌK$ؿLrZ\b03>P1Z;~+p•%ʲSpFZ>ppd-Tfs1u񶛄D𷱄K 瓌)h:GrKr߿?#4B31;;uSq~F7ސ?FF;Kf@9Brq 8CysZLs뿇* %L3 /1#;H4B`gHD: # h4>T]]9 MTMUFM&d2?F}{?P6  >M (< Pd_#x_tt_ ___(_<JuM` ?2FZnuJt -t!"J&b$=>tJU2Uj2?R=@MY3Ji6A@%2308b9R;6J[c3r27a? F ?A?9Bc3BfAfA5 2`?CopyrigT}t (c) W20@9 M@]c@os@fBRAr@Aaa0i@n. AUl@ HsBe@e@v@d@Q2_AcDZ3ę-c3rC 2rG FZ5 >\iViieU9 iVXW\r2'0j3 FZ!&D0 VZlJ`>UhLaLa ]Rd u 1A A1AW8Pa o|2U?F,%-@Nx[Jk#b?o--hi  6vZ2JAp?F}l^?F +ْ?F G&S- a),Ayx|S<ݻuz}קu#qSBa y= 30 4*?? ?"2a@/s<"t!r55TqE`ud ?F /wpc*uGs>{T#|f{}b*yZ>voigjJ*#;%7f2pU`uTK`j0rr?F);L^fFp/Xx1ѣ?V|%O4Tqde㼕?FHyt߲Xzΐ_3]ؘ#yԄT|qTS|t]<8l j m?@P3ɡեZ:+a{c3~|+2|}ZliL~|d($7! C]{-m;BP4~ΐKjgVhߩ~D8V-l|G2y)PlƳÿտa:dBl#ꃋ-@t32y ?c$p3~,xEr l|0?FS$^ΐWqp>hmN.Ey?'<(+t^+}m܀3}Tq(*wmnp:~qмZBT<i}~۸73q1&3–(Ko|>F}ѣ ?Fs?FUޒ[g$,y~vZ؆t=<}!uɡaѩYK&h0?FL#|ߘ 1D|\ ťa8 ?Fd\Ў!ݯ|? ﮉ2ɡbҨg7+?FBl;|qyS ~L](MB-eS?F@J9F:?FrIÅ hߏ¢h|@b$yoSk9r|hͦ |ɡe2oe1Br{a 53Q2HD: H h0>Th]]9 MIAUFd^V?F߫C?F>?F?_j|w?P6 8 >M $mJuM` /?`OO.u>Jt;  sxЕt=M4&ٕQ䣯+? 
G>yJU$"#?.& ?JbA@T"Z+bR\)Z&J#2N贁Nk.#@M# J&]/o/T"Z+'T"*B#BA1A15 `?Cop rigTt (c)x0]2`09x0Mp0]cn0osh0fv2Rg1rj01av0ih0n.x0 AUl0 n8s2e0en0v0d0@:1Y>4#-'M3 M7 .&%p0b5(9 DFX'dG_'0UB.&J!I$a dFxJlJ]Uhz_1!T!Պ(Z#FF-D}ؔ?cJ:Z 4qgjGQ RV"JmTtV,Щ=?FG с aW)O YPRSW:fOtG4Pi3GYA; l'z&@B!@C?F[,P4S *?j0[OeT?UƖaU5mTV.b9 Nj?FZp^9o`isoT0e(a~Th]]9 MIAUFHm?Fʋ?FqZzS̩?Fo_ w?P6 8 >M $mJuM` /?`OO.u>Jt;  zt/ Wٕᆝ?a%?uG>yJU$"#?.& ?JbA@T"Z+bR\)Z&J#2N贁Nk.#@M# J&]/o/T"Z+'T"*B#BA1A15 `?Cop rigTt (c)x0]2`09x0Mp0]cn0osh0fv2Rg1rj01av0ih0n.x0 AUl0 n8s2e0en0v0d0@:1Y>4#-'M3 M7 .&%p0b5(9 DFX'dG_'0UB.&J!I$a dFxJlJ]Uhz_1!T!(Z#Fb?d ?F94:,X,TGZ SY RV"JmTig/?F6o?FşY] `)O 1RSx 4S x$G4:74C͐GYA; ^^s¯&@B:~mTn%y2[9)A\(aU5UF O 2Wα?FG?8LolQ,i33((?l+dUl1"Canoo2m_;T#rA $sHD: H h0>Th]]9 MIAUFk"?Fwc?Fymh?F_!sw?P6 8>M $mJuM` /?`OO.u>Jt;  aMtl#/ו1`ng n}_GT>yJU"#?.& ~?JbA@uT"Z+b\)Z&J#2N贁Nk.#S@M#J&]/o/T"Z+'T"U*B#A1A15 `?Cop rigTt (c)x02`09x0Mp0c.n0osh0fv2g1rj01av0ih0n.x0 Al0 n8s2e0ejn0v0d0:1Y>4#-X'M3 M7.&%p0b59 DFX'dG/'0UB.&%!I$a dFxJ lJ]Uhz_1!T!(Z#FFút[?J:Z 4=iLGQ RSV"JmTVoEXNx A`?F&nƜ?FӪr#> `)O 4S }#GYkhkn/e~:KF4}3>YA; ΣW$@B4mT[PRSbU5U2{6E,?F72*ʶ'?FD;6o;^}p/l@i.fMdӟ2Z[lxg҇.atoo2m_UT#rA $sHD: # h4>T]]9 MTAUF\f?FQ)Xd?Fv?FZW?P6 `>$M (#<#lPJuM{` ?#{{20{F{ZujJtg  ntL_פ| 'RGU˲s>JU2zGz?@M[#|Jk&A@$7b#z #b(b)+&J[e#t"t"?'6 I?A$/%Be#115 "`?CopyrigTt (c)020090M0c0oKs0f21r01a0i0n.0 Al0 8sBe0e0v@d0S"{14\#-Xe#3 7'6\% ZLFiie1E9 FXG/'0UR'6-!B40 FJlJ7Q0hLG1ZAY(э6?FξRI0`fS?Ks`ZsLbRP\"JT6mPD2n ?FHOX\>F->wW Wa)H0YnǿlᜯVef[U>dxԷErlg u?VcQ$@aELeANS?FZ ͉?Fޓ{lʤm>wok&O8XWleBPmlfVArieoo.HLec=}pLwS?FXʍ`ˏvol``}; PaiEz|DF`ޣKH_SF:x~ALrA k#ڃjnw5}Oi{HD: # h4>T]]9 MTAUF8XV?Fp#enD?Fv?FZW?P6 `>$M (#<#lPJuM{` ?#{{20{F{ZujJtg  ґߝ t$wOפ| 'RGU˲s>JU2zGz?@M[#|Jk&A@$7b#z #!"b(b)+&J[e#t"t"?'6 I?A$/%Be#115 "`?CopyrigTt (c)020090M0c0oKs0f21r01a0i0n.0 Al0 8sBe0e0v@d0S"{14\#-Xe#3 7'6\% ZLFiie1E9 FXG/'0UR'6-!B40 FJlJ7Q0hLG1ZA!Y(э6?FξRI0`fS?Ks`?Zs$LL!bL\"JT6mPD2n ?FHOX\>F->wW Wa)H0YnǿlVef[U>d~xԷErlg uVcQ$@aELeANS?FZ ?F{lʤm>wok&O8XWleBPmlfVArieoo.HLec=}pLwS?FXʍ`ˏvol``}; PaiEz|DF`ޣKH_SF:x}~AL"rA k#ڃjnw5}Oi{UGDF P# h @T(PYYU UF~?M&Wd2P} (-. +0@ 0T0h0|Y03 - -3 --3--& &#:u` ? T,_'@@TUThh|U|UU&&K"&5&&0"u:)!z2<7q:U? ?; 4b0{[`+:B Fy3U"'ĉQ9" L A$6 K",G'@I@@q@E?DF@[r&BCkBuq@`-A0"Ruu8 @GC0$5Q;(?RK"U& d4; :)9u `u`["[b!;$wu`񉾑4?R5ػ3a171`Vis_PRXYcPm!#5``08``R`?CopyrF`gPt0(V`)02d`090MF`c:}`oH`ofbvary`aa`iw`n 0Al` }hsbeH`e}`v`dRh13d45!B=31ZrA66 24#30Ur?P].w,-^q^$P$sT<`o0mR`s`a>rv "saI8G28\vvk`2 hY,TYYRJ )eUF,,?F6_v?P fQ#"I#M !:#Q>NIU>bIY>vI]>I M ? I ? I ? I"+M?+Q#*"I*S%&S]V#f"+YV#z"V#"IaV#"IeV#"IAV#"IqV#" IV#"I S&2"IS&2IS&.2IS&B2I V#V2 I>2!>2IS&~2 IV#2+S&2"IS&2+S&2+V#2&?PI3 B&?I("C2B?IJCZB%&"9'a'77@I2I2GUG1uM` ?"I&6BDBXBlBBVVVVVV(V$(V8(VL(V`(/Vt(CV(WV(B,B,B,B,B,B};Āi 1a` Q|Ϳs2e8-ߕq`R~,y\1BTC\Zݼc6[?F[hsF߱~Q>qˁ&t ?FgMuN?Dv` ?,?6 EPU-?-+" Ԍ/UFz[aDv>tJ~M^3;0?gt{s4)/~Z!WJ=˧ب 5Z=fO/߱&////??{?q?;d?*V-?F`xM0P01W=͝4F??=ӎC !]=@/V54, OOS9qW@0kM9~0iZ <ȝ0*DT! SU7:w$ ]UZNOW~/mà' ]: e9T ,,? 
6v?G@B1B$Gr65߱O__,_>_P_Fs[Q?F/e[p؟?M ?Sl ʟ_tfS$lߕ름Fx1cn)@aCo?20 6-Ԭ,65vٝJ w(-_q9KrUuw;y+OUF!R>7~n字/䉄5 _fL6&lLXqzݍMJ13LIo%]f Fz^(ʲS | u6)|LXX'} Ĥԭ67,M'5LmmFs]mrtoɊ!3EٿEݍv LSB@ d@}U00|Wdby#ANU 0 0 0 qH3ۓ] _WbB'P.>3]_A0hx%>Hލ?1;>s,}O+NڿW >gEn(F_>3 ]Q3}2bu$ˣSfQNFe'-$Sد|;~*YQoU(_HԪb~xBß;6H~ TEbݍ}nO*/<9n-@P,e'}eݴآm'viޞMTQދP?F/VJ$CճѵȌܷݍ1+OOĮY"ؑX:fDx~@Wt)L[~3x _AVW~IA]m_jݍX;AQq__s*~pm.@a)sI_8`}^ߔfЄ_j]~V^1P0oBoTzyE|N8^~{*ϳ}8CތҢ&s%X>F @4ύ?F*棕 L4?n~g&vn˹5so L~<@D~ew,[{ܿc=(SHǑo ,AU@y2ѝrͲs!?F N?F3?FcT(}03@ya?B᳠{j@1y8bvw ??a醼nMk?FCZώek/Ê?F@ٕ:?P(ܴ'B1/?{!K @-X;L_ @1aѶߘ>3C?FqFLc0/?Fd?<+v7D?6.zY Uco غ9JȆ߈ѿrTߣDqV\s"7G\DN,9FslD7Q-r|O╣'YQF{ F@) l$u(U)}}bQr)g|}1p1<G7t}bHA8sah77@y5?F]}H 0NXUVMFQ¡/.}=!8՞%د-+1pQ4ADF*.}K 0BoaKVWg>t}x=$GB=bvaA Q/R"i L1}/9/K/]')uw?F%ly?F*껩/ѵP﹈b@D>3bSSmlGAȔ|#h`*b5%F[ A-@zjaDi~b.x4dM?O+zOOu (?:?L?^?p??d??Fb2*.Q_ DCY?ˉFY] ) ~ zM?F!d]CZXjĞ82Dҥ4Y 0YLD|^,DYA_^F{aE`?`?Nd,/__b6mʝǧNAIY/////?r6FH?> _$s 1+/,5B/Zf#ȸB HK ~??vLl⿤愈?[G=u?aK&d23L4CUG DF P# h @T(PYY#EψUFL&d2?|>?F~?]P} . B!D:J :^:rY: Au` ?"66JJ^^rrju# ^Jb"h,h'U ?L?4b {N[`:!#"&a#'b"^DQ)3 z$2l!L1$l" l"3'@>>ÿ@q05?B46 r&23c2ui0`҈-1Bu:@ l"3-A7Bl"F L$9u`u `m"[ bb!wu`)$"%أ#Q!'1`Vis_PRXYcPm!#5XP1XP02`?Co_pyr>PgPUt (NP) 2`PW09 M>PcuPo@Pof}RnQrTqPQa}PioPn] AlP uXUsRe@PeuPvP dB!#d$%b!(=#&#Rd&& "$B##0Ub#]&g,7^aNX@cTg@ daA+_ s11"hh22)t3*tBaaR+t"qqb(!pacb"!6u\%G!!["-@@Bq253T@b"q3L [ELP@[UL0]E5 s_$\&F3s͵NSYFue7A6P,@QjPUn725^}@$1p]]( sUQŅUQbN9P %BR&RpRQQcPFL]`NETWOFPK SHPP5 DROIR7IR5S!:0@ L*FrLr]@o"( qffs_PU 7ex}PcPn2l"! 0n<@5~b -"ER"^br36 Bm Ak Adzau"u22u3/BaQaQR5Q5Q"2b3 !!E|7l!!8ԲAP! !1; 8b,/$1JbKa'q'qne!b($pǕW ]-@$ SPaCPqA9P _Equ>PpRP!ePk`25Xq\` TsPxcP`DP!vR*`3^5b` Sb8C.SwXOQEdeyMPnfPc}PuRU`1%PRd$  N$m $`E I[!" PPe5R'`y 7sRU dCeEWAe C`eV^` SRiP@Q//` T;/M/"LoPcX/Ҭ///B lPiP+q/ ?/?R% RoPo_?Pm??NwRkx"adB`K?g IDPQdPS6O@HL7/nOnsICOHL8dhOPO#%AmV2irUQ ]Q 1O+_6 fH_Z_\G` fM#AtO_BO_OM@CO(oFNRCbogq%C2mQtBQW6oHL_Roof MHHiAאe2U w'0U W0 /ix4*є"%ݩ܅uuY>2 H4ֹhńЋńЋNńЋTńЋ>ńЇHD: %# h4>T]]9 M JUF/\r?F.U?F~?F_w?P6 AJuM` ?uJt  ;uAIt=WzIRUIl7O],#>2zGz?)@MJ߀A@ebM #zwb(b%)2+2&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$[w?b6 ?A$$4/U%-|# |'b6%?6iie<59 6X7'M0Ub6!}4K0 -FAJlJAI0h7!1!ZF9|$0f2 <ڒ#  OR|VJa9Z9P] PReS[RUE:dHܣM\]\ɱQ<5U oYaU526_HU"]rA #cHD: "# h4>T]]9 M JUF ?F"]?F|>?Fuvw?P6 ]AJuM` ?uJt  ].It=W`ĆW #JRbJlLXz#>2zGz?S@MJA@eb#z(b)R,+,&Jj!j!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v d c!Eg$[;?\6L ?A$./O%-v# v'\6?6iieP659 6X7'0UpB\6!w4%0 'F;JlJAC0h@7!1!ZF?Hܣ~0, <pB#\  IRvVJ:~?6VOS sgZ^TURUEUEBQG\ZYpBaիQ65UQ[fS.aYUb520_BU"rA #cHD: ## h4>T]]9 M JUFyE+?F0]я?F~?F@?d{w?P6 >JuM` ?uJt  vr It=WP|IRsUIlO.a#>2zG_z?=@MJAw@eb #zb( (2+T2&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$[}! K2?b6 &?A$ 4/U%-|# e|'b6 ?6iie<59 6X7y'0Ub6!}40 -FAJlJAI0h7!1!効ZFHܣۄ0Y <? 6ybi  OR|VJa9PY]PPReS[RUE:]|$M\Q`YdC4U<5U=VN_YQU526_HU"rA #cHD: $# h4>T]]9 M JUF ?FDL?F|>?F~w?P6  >JuM` ?uJt  ].It=W,t@ "JRb~l>t2UK?=@MJA@bJ + &Ji[J'?a& ?A)B !!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r 1uai n. _ Al90 (Us=2e0e vO0 d10!H$-,# 'a&K?6i@iek59 6X7'K0a&!|$K0 FJlJ8>UhAA !!ZF@?d3{  </.a#]M 1R^VJ:]ɜHܣ/_AZ2Ey#xUEUVS s#FT=RxUk5U؝Qˠ\BY3QX5*n?֬BoAZ7],XU[Bl|Q2_*U"rA >sHD: ## h4>T]]9 M qAUFA[?FFw:?FE12Oz_:w?P6 L[ >  (m$JuM{` ?Agg2DuVJtS  .!tw|3s,kIɸ _>JU2N贁/Nk?=@M3#JC&A@b#zw u#bR( (+&jJ=#%'?& ?A"b${$:3/*T'B=#115 `?CopyrigTt (c)020090M0c0os0f21r01a0i0n.0 Al@ 8sBe0e0v@d0+"|144#-=#3 7&4%"XOFiie2E9 ԆFXG|L"'0URi&!40 FJlJ" >UhiQiQD ]GQ N1DJc$LBhL!l%,W2[AN11(Z=#FC[RRTFQ ucR/Q Rf4"J:GdFX_!0wbfc L|c :D)$aER!Jj^opdiUjh2E~V! 
b# 2 ?߮.hOEu<ɛ?F#Pw|hؾ6|ެ^hu?F/Lw|iU6|^^e~?EYo< thQ[T`vg2w 0M6weQT`Dgwӏ u5XPo`Iϣ4oj7::eQT`Bvu*< f|hQT`r :{T_t Pa?F3SXk͖o?F vT?P);N 90 ,)Ri0m6uR. 6u`yyb !` /K!? -hO.?Ra @cB)r5%aQ K)XiyBG\h5b-Ob?Fԇ*sR6|FsaeQ Q~B?F)qqiQhAeQ9dϢqwiaeQe񯁜ai|hQEWXYWuwZl a++hf! l?FQD0)ˑ6|P?U/hL t9?Fz2)v<>KU6|*hQyո ?ghhWyh= A=}!eQW%|6^T?`NAA_owA j;[6te UI:vdv =KylI?ε~T_Ţr+a W  /~nIX(dur|?F 4?F?PَKK :d?Ltؤ?yl!?(!@[l Xj2GtaCpzm&褯 "4gyC?); 1ecDhe2GoYe%3rQ M3#k2HD: H ih ,>T  9 MJUF ?F~?F|>һP6 m >JuM` ^?5uJHbcFFmtulyt b#tI>2zGwz?@MJA@Wb)+& !Jg"~& E"\Ab)+'A,A/UB115 `?CopyrigTt (c)I020U09I0MA0c.?0os90fG281r;0t1aG0i90n.I0 Al0 ?8s2eg0ej?0v0d0ŤM3 b7~&XXt5i b9 `!X78G'0UB~&%0R 8FLJlJ8>UhAA( 1:9m EsBU[JHU؝Hܣˠ Xm] 3Ey%U3^֬_Z7]G,XHU)a[BUQHUF@?d{lQY?.aXQ,o`ɜQۤoZ 2QXUH_Z_l_~\_H8>t"s ûdGC, FȘZߓ#'wfB C ~dܮ @+C SsGX7`t PhIK)M" 6%?8u( G00UFD  h(^TYYBBUFL&d2?x<F BP(?P? 8$ B66ȅH?j|;? 5JB@?\.ϗsU!&;/ҍ  0RB`=Bridge,lo ica ,n tw rk sy t m t p"r pPy"i"s c n"cB i2%z e^| R(* G@& ߃l?"488Q ]frsww ww ~ww w!%} Iceqs^QZQDrag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aab|>~??[q㿦)j?&Iz?&d23L4CUG DF P# h @T(PYY#EUFL&d2?@F|>?@_~w?P} Y.  !:J :^:rYDu` ?~"66JJ^^rru# *^:"@,@'酅#U2N贁Nk?H]b{# ?#^4Wb {['`:!#"h69#':"UY^"Q)+3 62D!L1$D" D"&3ɜ'>>ÿ@kq05?460r&BCku2u{0`i-1$Bu@ D"3?AIBD" $$9u`u `"[ b:!𿻐u`$i"&){#)Q!'1`Vis_PRXYcPm!y 5jP1jPAB`?CopyrPPgPt (`P) 20P9 MPPcPoRPofRQrPQaPiPn AlP XsReRPeJPvPdB !r#(d$r%:!B={#&r#dd&& t"r$,7^aN.@.cT<*Po mPsPQ>g da +_c&1&1r"hh")ht3* ut2aaB+ti"aa bo(!`ac:"!6uM\%$G!![2?@@Bq%CT@ br"q3$ 35$6Ep05E5c_r$4&F3s͵N?SYFu eIAt,@)Q ySUn%6}61`]]o(c-U)Q)U)QNqKP % TR&RpRQQuP$]`NETWuOXPK SHPUP VRO!RI S1L0@T$J$r5GYkr"o(pa&&s_<*-pq o(A֨57IABKFkq`RPQsPPU exPuP2D"! 0<@Eb Z"E*"br&3BmACAfa^ur"ku"xu3/\Ԭ20 iBGQGQi"2 b3 !!E0|qq7D!C8ArRA!!A;Jb61\b]aaae!bo(q`軕1W c]?@ SPaCPaAKP EquPPpBdPePC8%XI[m` {TPuP`DBPvR83^xmz"` SQbBRdP5d=QMPnfPcPuRiEQ`]&1PRd Nm 8E !3o! PP_q eiGR'`Q sRU<&eФ/YAeC`Ue.^Ԓ` SRiP@Q8T/%/mLPcX_/҄///B }lPiPq/p/?o* RPo7?PEd?v?YNwRkP"a =Yko HiԿAhǖ꥔ Uはw'0Ur/lAx ;,u;uM1, صpH ֍@噄ѤѤ&Ѥ,ѤHD: %# h4>T]]9 M JUFX^?@ ׬?F~?@?P.6 AJuM` ?urJt  ]It=W_7],IRUIlw#>2zGzw?@MJAw@eb #zb](b%)2+2&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$Ɇ[?b6 &?A$4/U%-|# e|'b6 ?6iie<59 6X7{'0Ub6!}40 -FAJlJAI0h7!1!効Z@9|$0Y2 <ڒ#4  OR|VJa9Z9P]PReS[RUE:d_HܣM\]\ɱQ<5U oYaUb526_HU"rA #cHD: "# h4>T]]9 M CAUF|>?@:x??@V|+q?P6 8]AI]M ] (JuM` ?#SS2uBJt? tzEUQؕKJbtU]OK>JU2zGz;?@M#J&A@bS#zI G#a(+bo)|+|&UJ#!!5 `?CopyrigTt(c)20 9M c oKs f"!r 1a i n. Al70 (s;2e0e vM0d/0!EH$#[#""?06 ?Ag$I~/%-#,# '6%KO/Fi@ie59 /FXEG'0UBŬ6!40 wFJlJ]Uhz!hAg!(Z#F@H/DžB0J>| 8Ene&KQ4\ RV"JT~?@gf_?F)?@ $C?8S sBKYbFe>J^R,UFeTGK!Q? ӧ|+@B:TPܛ\YܨMJe5$!JP/.`*7 `3iK=Dn鷯hZbU5TQ`R?@[uo0lBqaRSgdd*lxrl{!Joo2_Uvg"rA f# =HD: ## h4>T]]9 M !AUFnx?@?d{?Fg݅?@@?P6 $&> JuM` ^?1?u.Jt+  Rqtes.aqzWWqI7>JU2zGz?=@MJA@b1#zt' %#b?()1 ?(Z+Z&UJ!!5 `?CopyrigTt (c) 20 9 M c. os f"!r !a i n. Al0 (s2e ej v+0d 0 !E$[! s2?6 I?AE$ \/}%b-# 'Y6? Fiied59 FjX#G'0UBX640 UF"iJlJAq0h7!1E!`Fmr?@Hӣ۬0$*fq`7$d 6yb7  R VJa9Q|TK ST1 QE:]|$0]Y_dC4a8^] 7sP@EU < KB?x_QYFf*ɴ%Ud,Z7"_+dK lV}?? g52^_pUE"rA kD#TsHD: $# h4>T]]9 M AUF|>?@~?P6 n# >] JuM` ??Su.Jt+L iteJ]zb>tJU2U?=@MJ?A@(b')4+4&J[vr'?& ?A/@)B!!5 `?Copy_rigTt(c)2U0'09M0c0os 0f2 1r 0F1ai 0n}. Ala0U 8se2e90e05vw0dY0! $-# ',&?6iie59 65X7/'0ʼn&-!$0 /FCJ)lJ<>Uh3 C ]1!ZF@@_?d{ J*4 $d?.a7^QiBQ ]RVJ:D]ɜHܣ[_mZ2Ey7UEfU$S sTrTiRU5U؝QX\nY3QX5Vn֬nomZ7],Xf8:`ϋ iؚ~Gc2anY_Wt T}զDTk 0Q: %C?J i3l}f*i&udWl˔alb+d ?ܲVXU2D_VU"rA k#-UHLuD" # ;3h , T  JEuUF'_?@~?F|>ếP VNM A #$Ae  >uA` ?KG `mm. 
RSu#\>b]XFOmrߞuZptY b7#t&>U2N贁Nk?@9 3#>C&A@q" u(b)e+&f" !It$>=##"4#K"& "{"b)&;'f",f"/N =#11;#`?CopyrigXt (c)020090M0c0oKs0f21r01a0i0n.0 Al @ 8sBe0e0vZ!@d@+"BM=#3 7&4"Ht1EiIfE 4%!X|FGG'0UB&h50 TFJl>(fUlI 8%#IA{!1(R!gG?@c\;0R dUeR ?!c e(T@|$?Fn-?@Z %_C?YsB\Ӌ$UX?uj3dR-=wnQAY ܲVX3 e^Fx*aqRS3jIBFm_[viH{goRJdo~b KYeTGQ# #s sFR-Dn?% FZߝ4# $6wYB H }7~dܮ o@+W oMsGXJ8^ PxADHا@Lx?OPUFD # hVTYYzU?@ ?IF?h PaR"|ou]ofu2/uU ` C&nect&r` e-?{U H  ^   -}|5 p`I`V?}`xa833ސ]3σ3u 33?, ?Gxpx^& This coUnetrWaumtUcl#yrebtwe h *hp3is.HD # =h @>T(]]9 T 2AYUa? uP6 u]`u buA_ u  >3.TAuV ` lxMui"a)J@-?x;'6"wE- xo'6"y( _rq?o@I ?$m%? @horVA$"38*EL2-_br @P' Z6\11u .$n2y2^2n2u9ȑ#ULH/MM^1 # A D5 :0`Vis_SE.cTm!#20ED)`?Copy@0igTt; kc)j@DCU9j@M-@c@0o/@'ofhBYAr\@AUah@iZ@n7@ j@WAl@ `HsBUe/@e@0v@d7@N@B= #x  A G{?9#r22&0444 MG4'ALB5?_$k&(SZ6Y8# #2"qlSF,C`l>lUhr5 QA+/J *&%(og5k#p f3e'r#TA(11MJEsooR6'2Y7E5S3Wc7r516Ũ_aJ2_quU\DQ>0]b;)`R@As-@E T@xh@DiA eJi=S]UBTJg'0SV!@LA,SrWg 1Cy$A$AMJ6!Hp,@R/TAEb?RQP? _H>Ff"s O`E )FZ߂U#HUV7Bk WeHdo@XP+U \"sUFD  h(^TYYBBUF\.ߗ?x<F BP(?hP?X8 ]B66 TbH?~? ?B@L&dW2?Y (sU/!-&PO/v:  0eIB`Modemw,n tw rk p rui h"al !v c !^| f(U GT&( F?_""U*++'?MwpwtwpwDwwp tGwww G@wwDGwD@w p@wyww[wp|`npXDrag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aabi4FѿL&d2?j?oҿ=t??\.r3r\.CUG DF P# h @T(PYY#EUF\.?@i4F?@M&d2ɻ?P} č,.B ! :J :^:rYD 7 7l7u` ?"66JJ^^rr)u#",'񤔅qU ??4^b,0{][`:#A82A6#'%Q)U" ^/LV1$  d7'>>ÿ@qZ05?46F0r.B4CZ2u0`-V1ZRBu@ 3mAwB h$9u`u `"[bz!u`R$"% #WQ 1 71`Vis_PRXYcPm!#5P4o67>B`?CopWyr~PgPt"0u(P)"020PU9"0M~PcPoP'ofRQrPQUaPiPn "0WAl` Xs bUePePv`dR 1#d$%!*(=#6#@d66 2$B##0Ub3]fg,7^q+^\@V\cTg Pda+_Msd1d1"hhr2)$t3*tR+tHR@ q q"@q@qTb(!PpaFc"+m6u\%RG81>1[B2m@@Rr5.CT@Fb"31 UFHUM5Ms_$&ׄF3s͵NSYF)UW0TewA,@WQ S+epwr5}1PpP"(A2yT?+>BB).B4KCFq`R`Qs\~PU T`xPP2 0<#@4Eb_ &2CE"brd3vRrӟ(Mff.F_<1-mTwNcPpu"ur2u3/R0HRuQ$uQ"2Tb3 &1&1E|FF0RQ(dPpWQ,[[ ]m@0` USPaPePl\rsQAyP E u ~PpPePr5X 1V` ?TPEѣPH` D`vR3𯴊ߜb` SabߣPLB PMBPd`rUdKMPnafPcPubHU\nHUQi{]d1;PRd Nm.0Š)U,03!8 PPr0BQhHpcu]uR`sRUѠeANe>`pe^bt1R; SbiPa¢&T18LPcX!mFTXjB?l'`iPkqo&0Z38RPoO,d/?c dDψ(ȱȱ"(Ms3) &&&WUՁ:p/` %R&RE"ibȱ`NEWOPKPSUH`PV0 ROj2URIV0S$AQxR7 i q'U,ygm5xbb7'dũfɵR3D-҃aa8')uyqarL?aTe5x`H7vAտD?aKDKDaKЅDKDK7eDAKTeDJHD: # h4>T]]9 M JU@i4F?@??@hĻ?P6 ܠAJuM` ?juJt At=W ףp= #JRbJlQO#>2zGz?)@MJ߀A@ebM#z\(b),+),&J[(.w?& ?A$T./O%B!!5 g`?CopyrigTtZ(c)Z2009ZM 0c. 0os0f21r0>1a0i0n.Z AlY0 8s]2e10ej 0vo0dQ0@!$b-# 'Y&?6iie59 6X7'0UpBŴ&!$0 D'F;JlJAh h7!h1!YZ@x f, <#  IRvVJ:ǿxT]]9 M JU@"&d2?@ `?@BP(ĺP6 ]AJuM` ?ju#t  m۶mIt=WwIR$I$+IIl#>2zGz?@M|JA@e7b #zb(b%)2+2&J[/.?& I?A$4/U%B!!5 g`?CopyrigTt (c)020%090M0c0oKs 0f21r 0D1a0i 0n.0 Al_0 8sc2e70e0vu0dW0!$-X# '&?6iie59 6X7/'0UvBŇ&-!$0 -FAJUlJ8>UhAA 1h1!T12 $<cV5{ WR~VJd2L&?@bX,Ʃ ]S fR#mS ?ZZQ4E^0__ %zzX5:U XYePY؉X5aZjoohY}aUVeEVŠRldQY)+R+X2>_;"rA 5#dsHD: # h4>T]]9 M JU@xmJuM` ?uJt  Am۶mIt=WHzGIR]Il#>2nz?3@MJAw@eb #zb( (2+T2&J[ (p"?0& ?A$ 4/U%BB!!5 g`?CopyrigTt (c])020%090uM0c0os 0If21r 0D1a0i 0n.0 WAl_0 8sc2Ue70e0vu0dW0!$-# '&?6iieP59 6X73'0UvBŇ&!$0 -FAJlJ8>Uh@ 11!Z@?t:Né  <qqh#B] WRVJ.@ ?@L&d© ]S )Q#hY8?Kh/QE^ `__ ^BX5a9XYePXRmScRU5:]md2L&U\ahY*PX\eEVV_iaU2>_PU"rA #dsHD: # h4>T]]9 M JU@i4F?@M&ɯd2?tP6  >JuM` ?uJt ASt=WJRb~li>t2U?=@MJA@b + &J[N#?0a& ?AT)B!!5 g`?CopyrigTtZ(c)Z20 9ZM c. 
os f"!r 1ai n.Z Al90 (s=2e0ej vO0d10@!$b-# 'Ya&?6iiek59 6jX7^'0a&Z!|$0 FJlJ UhqMAA ]A !4Ja-'Z@?fl6˃  <HzGh#BA OR|VJ:6]t:ϝNM__Z(\#UEL@ _?@Lƒ S AA#`YQ(\Xk5^ s`o!o ̌aX5UxT9 M3JU@i4F?@M&d2?ҷP6 m >JuM` ^?5ulbSD_Jt bstE>2zGz;?@MJA@u+b+&K !JS"j& 1"A.b+-,-/*B!!5 `?CopyrwigTt `wc)5020A0950M-0c+0o%s%0f32$1r'0`1ua30i%0n.50_ Al{0 +8Us2eS0e+0v0ds0ib =L!XY'0U Bj&%0 :lJ Uh*M[A[A 9A 1:xTh]]9 M JU@w[2?@2?@?뇤?@bf?P6  n>JuM{` ?5 uJt  hxvEt9SYEN^Ӟ~ӞEh!D#<>>2N_Nk?<@MJA@cb]b)++&J"< u) ?AHb$z@J+%/F%B!!5 c`?CopyrigTt (c)302`0930M+0c)0oKs#0f12"1r%0^1a10i#0n.30 Aly0 )8s}2eQ0e)0v0dq0!(Y$-3 7x&+0b59 6X7G'K0UhBx&!$Ia F3JlJ]Uhz11Z@`Y3   8gF/_ M ARnVJ!49BYOPBRWSMRUE:]R.?\QRYl>Q5U/V@_YQĈU2(_:UrA cHD: H h0>Th]]9 M JU@p8?@d2L&?@αB?@.?P6  n>JuM{` ?5 uJt  YEt9SRQEN߆$1Eh'>2N贁NKk?<@MJA@cbb)++&J[ < u)| ?$%/F%BB!!5 c`?CopyrigTt (cu) 02`09 0uM0c0os If2!r 51a0i n. 0 WAlP0 8sT2Ue(0e0vf0dH0!Y$Ť-['# 'x&0b59 6X|['7'0U?B)x&!$aI 6 JlJ]Uhz!^!Z@mN/?  8gF_  REVJ49Y&PR.S$R_UE:p]"`3  \Q)Yl?>cQ5oUV_YQ_Ub2OUrAg wcHD: H h0>Th]]9 M JU@&6?@0ސ?@?뇤?@bf?P6 n>JuM{` ?5 uJt  lUUEt9S;EN^Ӟ~ӞEh!D#<>>2N_Nk?<@MJA@cb]b)++&J~"< u) Y?AHb$z@+%/F%*B!!5 c`?CopyrigTt _(c)302`W0930M+0c)0os#0f12"1r%0^1a10i#0n}.30 Aly0U )8s}2eQ0e)05v0dq0!PY$-,3 7x&+0b59 6X7G'0UhBx&!$a F3JlJ]Uhz1h1Z@`Yg   8gF__  ARnVBJ49BYOPBRWSMRUE:]?R.?\QRYl>Q5U/V@_YQU2(_:UrA c_H>vQ+s #X8L(r FXZ\f#ͱ 4^!B- U_~|dTݮ o@+( oLsG!`mP%h*8'kUh) oT+ar(#vz-yX0|2{$OUFD  h,^TYYBBUF\.?x<F BP(c?Ps?m. 9c:M:O @z0 BH ?? V@L&dW2?F-W(Yi(G.sU!&P/)7(:  0 `WB`&Ring\network ?p$0rr0pPE1al\d$0v0c$0 c1^ (J YG& r+?!) +)!q q ǿ~~|p~~ṗ ~p~;ކ~pp2AAqDrag onut he]pe,nUdu /p64lies9Lt]wrkcQm tOfTly lGo_"imdO.b\.L&d2??;IOғEMQS?=?mar3r\.CUG D  # h T,YYEUF\.?@?FL&d2?P} ϛ% ^.&N+O+.Q+R+S+T+\+]+^+_+`+a+b+c+d*+e+f+g+Q.i+j+k+l+m+n+o+p+q+r+s+t+u+v+w+x+y+z+{+|+}+~++++++++++++++++++*++++73636 3 l_Fu{` ? &CG/B>CBHCBRCB\CBfCBpCBzCBCBCBCBCBCBCBCBCBCBCBCBCBCB"CB"CB"CB$"CB."CB8"CBB"CBL"CBV"CB`"CBj"CBt"CB~"CB"CB"CB"CB"CB"CB"CB"CB"CB"CB"CB"CB"CB2CB 2CB2CB2CB(2CB22CB<2CBF2CBP2CBZ2CBd2CBn2CBx2CB2CB2CB2CB2CB2CB2T6g666U66FFMBuIUHrN|Nw1r L@Gs5 ^CB0  $sV!v q)?@4v!p`:!@ K*RGs Br(1=s0s)) %Rt؉sq1`V sPeXY)cPm!#52862F1`?UCA p r gPt3()32U0Z93M c_ Eoi oM=rK yas iA n) J3A# l3EsUei e_ vd)21wrޟҀs?@A3{d|R$Bƕ~u +@@âɸ!BMBj8}rup`i-qBu *@rw1s!8=}/B 2t@I9u`un`"p2] HqDou`t1E[B2TU뱥)8s5ä  r}vh^T`%AT<7 o3mss >XxĀu7T=n7 eMTNtT yk3Dvd{uu ƃ@Rdư5ߙ߫ƕ9'9h\nE|1CU t@ʴ#FXjHqʶq-?bt[Ű//A:/L/^/~///A///p?!?3?ʼqV?h?z????z?? O@ɯ+O=OOOoOOO9OOO =t__'_!D_V_h_ͪ___#___$o+o=oB%`oroo&ooo'o!(5GY)|*+ .HTfx{şÏƹ/&8J!0m1Ɵ؟2 3BTf͐B_5Я6);7^p ̿_9:3EW;zόϞ(Oas?ߨߺDqm4# 1T 5h86%8llc౓#"AAш]]Ѥ C){q{qpqq22EyyNNaѕѨHr/d~CU|@c` MaЧufu e nA` EqipeBtm/h~4qP o|uNmb /b XQ Pr6HZ/{+ srt_q/\A6s6!P/^za Sil/T1 LcAHR\fpz$.8BLV`jt~""0":"D"N"X"b"|"v""""""""""""""" "" "*"4">"H"R"\"f"p"z"""""""""""""""""$"."8"B"L"V"`"j"t"~""AaXT@b FTP%జJKTltφT0ƐĮbPQwr SS#TS”GLF2DFbXO%—ȩP_(_:_S…LF?Hd}OƁLQ____KFLpXO( B}KoЁ?_T_o(ooS L FY2ణ3oaoЁ !eo}cGYkS L Fft]eҼ–B" E L FsxLfO͏ߏ*H L F l_go}<*A]LF џrmБ Є!7I[EʟLF `0 毯ʯ_&tEULF OT~N򤢿ƿTؿFLF O!H$6 Hϱa6ϗ*FLF ?ZبP6HZߙBLFR5xqy?o%L%9F//"q8?onT&L&9F?ᏼJ`_-?*e'L'r?UO{O$(L(roy:uoڟ) L)r!_@4 oj|*L*r"oݿF\);a+L+r#xQwχP ,L,r$|[6qȟϨ߹-L-r%Ϗ0 fTx.L.r&BX%7*] /L/r'7Msߙ0 0r(3 W2m1  1r)˿@,bt-2 2r*>T!/3/Y=3 3r+p/Iߔ/o!/P//M4 4r,tS?.1i?/??M5 5r-O1(O/^OpO]6 6r;Q?\AO:OP_/_*Um7 7r//?E_Q_{?__}8 8r0+oOo*aeo/_oo}9  9r1oa@$ _Zl: :v2/q6LO+Q; ;v3hA?gwoP?؏<  >v6~_ɯ߯2H'*M? 
?v7'=ocsoԿ @ @v8#oGπ"]ϴ¿ϥA  Av9ϻ@RdߊB Bv:zߠߠ.D#IC Cv;`9_oP D Dvv*@  E-GV,Gv !7-[k=HLB,N5>LB_r:?C&2F3s͵NSYQFEB,N>HdDq7ME5qD68W#"U%&EfW@@cX @ S1` Token?_RiP1nD?[Q?##qh1ab /X##zQ?AzQgVD6e]r##"` %P&rPpPrtPe+sA D2`A!_`NTMPUOPKfPS A``UEfPPPObR`5ISi0QAqD_!_\X\OT.U_C0?ni4oo\XXOSw'K0Ur%V!0^ v2Zei1DAT\1sEPuQ1MEX OS%HVl2UxAUx1'ExAMEx HCDrDC  @   !"#$%&'()*+,-./0123456789:;<=>?*>ABhiATiiJ\ 3U@\.?FL&d2?JP}@YIYqX"u`W?"&&5&,"u6)~}bE@{#&t03! 7("tm!,v&b$&"qK!Y$30^?2-,"l-2""2!l"6 #",$=F#"f$6)![0$%J[LD0D=F?|" (&3AA1A1 ;m`"`?CopyrigPt(c 09M-Pc+Pos%Pf3R$Qr'P`Qa3Pi%Pn Al{P +XsRe*SPe+PvPd"!ZOAy 7$^ ?P?b?[!H${eQnB? ???O %O.O@@OROdOvO6CkaOROOOO_ %!_3_E_W_i_{_Vx<kb_b____ o %&o8oJo\onooy_ocoroooo %_.@@Rdvwd %0BTfxyheҏ %5GYk}~/mfĢן  % =Oas~_?gůɲܯ $ %?Q@cuOwhʿ) %RGYk}ϏϜ_i . %I[mߑߣoj!3 %N`r k &8 %Sew+= %Xj|)m 0BU]o9n"/#/5/G/Ub/t/////Io/2? ?(?:?L?Ug?y?@????ϟYp?B OO-O?OQOeGQoOOOOO?ߧfqOR_ _2_D_V_"u!q______yr_ro%o7oIo[o'"voooooos *<N`,#{@yt /ASe1ˏݏu "4FXj6%͟ߟ/yv'9K]o;&ү?yw, >Pbt@'@ſ׿Oyxπ1CUgyE(Ϧϸ_yy#6HZl~J)߽߫oyz$(;M_qO*y{)-@ RdvT+@y|.2EWi{Y%,y}37"J\n^5-/y~8/<2O/a/s///cE.///// ?y=?ABT? f?x???hU/??@???OyBOFRYOkO}OOOme0OOOO__yG_Kb^_p____ru1____ oo yLoPrcouoooow2oooo QUh z|3@%VZm4Џ*/[_r5՟ /? Ɇ`dw6گ"4#Oهei| Ŀ7߿@'9(_jπnҁϓϥϷϕ8,>-o',sߘߪ߼9 1C2! F1 xR'9K:)6} ;@);M<+)~%< .@RA09"5=/!/3/E/W/F5I/2/////E>??&?8?J?\?K:Y?B? ????U?OO@+O=OOOaOP?iOROOOOOeQ_!_3_E_W_dOXGv_b_____uAo#o5oGoYokoZIoroooooDžB(:L^p_NɁHD: H h <>T$]]9 MT kAU@\.?FL&d2=??F?P6 LA P>  + 4DJuM` ?"?o`0?oNu^Jt[ tgJbg>6#J?#,b/ @TZT"Z""P& T/@n_^B% 'N< ?m6 ?AeJa@ۢR4` Ac0entCol0r3z۝0Aٝ0bbbz90“2???9Nbb?1"bAYFbeDG?#E![C39 , B6"hLJ?#QE!TG; J`A`?Apy$@igTt(@)20mP9MYPc$@os@f_RArSP#Aa@i@n. AR @lXWsRePe$@v0dP@ᾡ`1O_o?#2TUam3 FD5#]BrFlL]BrF 0PuGiAbE9 6%yXGg'0bUm6!40 fjlJ qUhhUGHQA3(Z\*{ ٠0"#")b5DrD6"J Az~%"#"BA}/t?PSQ< E!?yET#-#tgaD[#,E:g@qVE`0w>]BrA \Cď'$$+HD  hj @>TA(]]9 M  eAU@\.?FL&d2+?]P6 LA > M /8HJuM` ^? C s4CLsRubJ1t_ tuJb$">JU5 JI4#=#/ @X^X"^"B"6 X/r cb!Bh@,G-v31k!k!5 J``?CopyrigTt(c)2009M0c.0os0f21r0Aa0i0n. Al3@ 8s7Be @e0vI@d+@0tv3?c- =< ?F ?AeJa[_Rf14W` A0cI@n0j1lB3z_ՁA_AbbbzY_)mEZf1T  9 MSAU@\.?Fr??F~q?P6 B >%M a*JuM` ? %U U4:uDJ@bEF&d2ɿuBltA bJt EIM>JoU2TU!?#S@M #~A@@"WbH)U+U&mJ !?v(GtI#J#"# %"& "A@"bH);E'|"/-<*B#h1h15 "`?CopyrigTt(wc)2009M0c0o%s0f21r01ua0i0n._ Al0 8Us2e0e0v0d0"MX#t3 t7&"tt Ei b@9 %!X GގG'0"UX&-50 FʢJlJAUhh)Q)Q!#A@! (:#`RB: 3R$JTV  h1?[_R: @AABUVeLٮ\2maXUe.o_jLo^m2aroo__7\HD H# h4>T]]9 MT CAU@\.?F??F~߻?P6 8M >M (JuM` ?#0SS2uBJt? tTKJWbK>[JU#?,& ?AA@R"bZ$J#B!!5 `?CopyrigT}t(c)W20 9M ]c os f"R!r !a i n. AUl0 (s2e e v0d !$#bB-## ',&%en#2TU1?"@M#R"g&a,R"g&j'\?6iieE9 6XG"7'02U,&-!G$0 NFbJqlJAUhh7!lR!(Q:#N J>83R JWTfV  !?qY_R8>qRl?38VUVL?&d2n\] aٔXEUoe6._x_j omQ2oDo5V_h_o\056 b_HR>+s 1ǡY-MSMלC FZ# !B (6K )|dݮ o@+H oMsGU5PXF:*ٮ 8'=7,UFD  h ^TYY>UFD"H$'@F @FxT]]9 #4AUFD"H$@F@ @&@F?x0>Uhhn1n1`!#!J`F6 [>A{ (FMBa!? 
OABE3RNkO}OGBERE6 O?OBE2?;.1rA _0rRxS`3HD: ##=h0>Th]]9 (3q #UFGJg:#@F@ @FWtyz@F.???Pv> u{` ?u3>(.t  ( y?_A@HNpΈ_-t#`aU"p [`JڍU5 ILҹc0U~ "?AG}cTLIb` LineCoklK rzc+"1&Z/#nFFF"?&?#+"1&&` STadK wI/#/N+$;# FC lM 72zS&eacۢR4` uA cG nt)?@#cb3zs94#+$3$I!8<>3 Q3#8.f.30FP=4B/G3B11G;;#?1pyQ i}gTt ( u)@2`09@uMC cQ osK IfB1r@P!a`0iK n.@ UA%2 HsBe@eQ vG d@0 l>]UhzЍA3+!`J!VQAU__T3^_odgU5^No_roU __ooj'EAb59 M{7/v!3rQav44CK7%iOHBTg7D'$~vv I+"688e @ Q:(F|@v5?=+$cz,Z#b~-(T{ HD: ##=h0>Th]]9 3#UF u` ?6u3`b>"6(t  <,?A@U"p [ZBH|гYYt [ՄJU5 CLT0U"?AG}eC]` LineColZI rz۷!#nF;FF{#?&?#),)"/&&` STaKdI wG/#/ZL+$9# FA lK r72Q&a` ?RR) 3#B2((236727kpvB1o31P17;9#?L1wpyO igTt (cu)J@2`09J@uMA cO osM0IfHBL1r<@N!aH@iM0n.J@ UA#2 @HsBeh@eO vE d@0 l>]Uhz1A3)!BJ@o39DYX3UD_d_vY3^__gzUf53^_V_ozU3__H\o_j5bPf59 M{o7l/5v!3HraPv4!$ 5v|mai6oozo7]'$t~kvPv v8%TE xH@L?c8 eaD@eJ@ l@Qo:0@3 9r)']z,Z#(T5-sTAcL@0P@HD: ##=h4>T]]9 P3#UFpo@F@ @F18Q[ @F.q??P> u` ?u3 >,2t  eX?A@LR[ A\c2t#`e"p [NdJU5 MLc0U "?AG}gLMb` LineColO rz$g/"5&0mddd#nFF;F"?&L?#/"5&&` STa%dO wM/#/R+-$?# FG lQ 72W&eagRa` uA cK nt 85W%#gbd3zRw98#/$h(Ph9BB3 (C#8>239FA=B8G3APAG;?#? 1wpyU igT_t ( )@W20@9@MG ]cU osO fBR 1r@T!ad0iO wn.@ A)2U HsBe@eU EvK d@@l> 0>UhhCA3/!J!VCQU__T3^oomgU5^Wo_{oUE_oo%obTG t>Q e jXHle59 M{7/v!3rX/Qz#=!$K 7%inug7D/'2 #vtx J/":UGDF P# h>,/T6D # UFD"H$@]F@&u@P} jih,D@Su`} ?.U@Nu#hQgtSftܸtRtA@3db#t%T/M/"j/+.d#%@?/"t`Make B 0c"0ground#3%~ؒ314+1`VIS_PRXY0CHM!#60z070"w`?C40py20]i00ht&0(,0u)&02009&0UM0c22s40f2R1r0Aa0i40un0 &0Al:@U 8s>Be@e20%v$0d2"e (^q5 J% 3hHA%DK%D,K3DGg $aI_9SFTv4y13TU-T[Te ZTATU J8>wSa%F1Q01$DtybgURwSi@^UR-n,?oUR1j?RRTuKrQi? eb #! >baes?,3QRoX0cD&q1cQ(QR=wS1VZi#3 D40N40UtBn0a0n`Eq#?_^9jYazPtt"(wS117eU5`>Xcy1`I!&` H0d$2&F40qUAw0޸q` SPo]wRR+="v6 ~!R)RRLf] Th 'Q0d_HV> ל ߪJAC BFʿt#( i;B= Uݮ ]U@eko+C/ ]Ka_)]m*<N`r%` "m*>GC/$UFD  h ^TYYUFD"H$'@F @FxT h]]9 # AUFD"H$@Fcѿ?&@F5 L?sFC?P6 u` ?iu#t A@t   q?7Q.7)CWb7]ARMJqmdddpnFFF? ?Agam9T6` Fil> wCo> orz9b2j9A9obdk#zY)א9S|uX9/K&` /2L%!X!#\#w&` STWadD w/" /;)#82#b5"oU y3#B&(23"I&3cg30U2?AGQ}9L"|w` L ne%?(75N582M:Bhj<:AgCy1y17;C?C!pyJ i! htj(0)j2`09jM cJ osD fBC!r@I!a@iD n.j A" HsRe@eJ v#@dPw0l>]UhzAh#j181Ja`gCVQ'  2 bM[8A_^#e33nLo^og#eE3eVo_o#e!_>82rRA w082YsASi6bE9 MzgG]'4Ō;J!$a vvET#@x@&OnAHD # =hj0>T h]]9 # AUFD"H$@F@ ?&@Fq)?sFC?P6 u` ?iu#t A@t  ^)?7.7C_(\T[>JS/qnFFFa? ?Aga9T6` Fil* Con* orz9_b9A9bW#zE)9S|u%/7&k` /28%ߵ!D!o##&` STaUd0 w/"/+oUz G3#B&( 3"&3iE530U2?AG}x "|` L n e?#2Bh8<:*ACG1G1O7;3?/!py6 i htj(c)j2`09jM c6 os0 fB/!ru@5!a@i0 n.j A" yHsBe@e6 v0d@E0l>]UhzjADJ`XC66Q'  2RM)8@ mT}_^U3^_oagUEUtVKo_ooUp!p_>[QrA E0r sC)84Aeio6AbE9 MzG'4Z;!$Da Wvkv5T0x@?AHD # =hj0>T h]]9 # AUFD"H$@Fɻ1V?&@F&7 ?sFC?P6 u` ?iu#t A@t  Biq?7.7C!_lV}T[>JS/qnFFF? ?ga9T6` Fil* Co* orz}9b 9A9bW#zE)k9S|u%/7&` /28%!ߵ!D!o##w&` STWad0 w/"/+oU QG3#B&(3"&3E530U2?AG}u "|` L 'ne?#2BhP8<:A2CBG1G1O7;3]?/!py6 i htj(cu)j2`09juM c6 os0 IfB/!ru@5!a@i0 n.j UA" yHsBe@e6 v0d@E0 l>]UhzXjADJ`C66Q' C 2RM)8A@mT}_^U3^_oagUEUtVKo_ooU!p_>[QrRA E0r sCio6bE9 MzG]'4Z;J!$a T h]]9 #  AUFR_!@F;(kY?F̅$@F_fw?P6 u` ?u#t  \ Ac?oA@'t3(?.Zd;O?M&S?- >JS/qnFFF? ?{gaCۢT6` Fil4 Co4 orz۾Cb CACbۖa#zO)5CSu//A&` /2B%ߐ!N!y##&` STad: w/"H/+oU (Q3#B&(23"&3E?30U2?AG}u "` L 'ne?-2BhB<:A!CQ1Q1Y7;3?9!py@ i ht (c)@2`09@M c@ oKs: fB9!r@?!a@i: n.@ A" HsBe@e@ v0dH@O0l>]UhztA#B1J`!C660Q'  2RM38@wT_^U3^ookgUEU~VUo_yoU!z_>eQrA O0 rsCiy6bE9 Mz!G'R4d;! $a FvZv5T0x@?(AHD # =hj0>T h]]9 #  AUFR_!@F6??F̅$@F_q)w?P6 u` ?u#t  \ Ac?oA@'t3^)?.Zd;O?Mɯ??- >JS/qnFFF? 
?AgލaCT6` Fwil4 Co4 orzCbCACbva#zO)CSu//A&`5 /2B%߿!N!y##&` STad*: w/"/+RoU= Q3# B&(!23"&3iE?30U2?AG}x "` L n e?-2BhB<:*A!CQ1Q1Y7;3?9!py@ i ht (c)@2`09@M c@ os: fB9!r@?!a@i: n.@ A" HsBe*@e@ v0d@O0l>]Uhz`tA#B1ŊJ`!C66"Q '  2RM38@wT_^U3^ookgUEU~VUo_yoU!z_>eQrA rs:Ciy6bE9 Mz!G'4Ŕd;! $a (FvZv5T0x@?(AHD # =hj0>T h]]9 #  AUF^!@Fp>?FKg@FmЁ??P6 u` 7?u#t  ?߀A@'t3;On?.Dioׅ?M@_߾?[ >JUonFFF? ?ob"ozCC]b,#&` STad_owCP lP -rz=&G/Y+[ ##e.2"#&3iE0U2?AG}x "` Li'nez/"Bh,:A!!';:3?I1py\ igTt (c)02`090MB0c\ oKsv f2I1r0[!a0iv n J0AX l07s"BUe0e\ vF0d""l>]Uhz1#!2!ŊJ`%6%6"A2  "RM(@DON(U38^Q_c_W(UE8UF_O_(U2O;2"rA 2#^cFCi%bE9 Mzl7]'4+J! $a ff95TF0x0I?s1HD # =hj0>T h]]9 #  AUF+ $@FmЁ?Fh_IŒ??P6 mu` o?u#t  $C?A@'t3;On?.N@?M? >RJUonFFF? ~?{b"zCCb,#&` STadowCjP lP rz=&G/Y+[ (##."#I&3E0U2?AG}""` LiOnez/"BPh,:A !!';u:3?I1py\ igTt _(c)02`W090MB0c\ osv f2I1rT0[!a0iv n 0AX l07s"Be0e\ vF0d$""l>]Uhz@1#!2!J`%6%6EdA  ("RM(@DON(U38^Q_c_W(UE8UF _O_(U2O;2"rA 2#^c%FCi%bE9 Mzl7'4Ŕ+! $a (ff95TF0x0I?s1HD # =hj0>T h]]9 #  AUFP0&@F'ǖ~rl?FMO?Fq_Ww?P6 u` ?u#t  ~jtx?oA@'t3ʡE?.Cl]M/$A >JUffnFF;F? ?b$zCuCb,#&` STadowCP lP rz=&HG/Y+[z ##."#&3E0U2?AG} "` Linez/"Bh,:A!!';:3?I1py\ igTt (c)02`090MB0c.\ osv f2I1r0[!a0iv n* 0AX l07Us"Be0e\ vF0Id""l>]Uhz1#!2!J`%6%6A  P"RM(@DON(U38^Q_c_W(UE8U@F_O_(U2O,;2"rA 2#J^cFCi%bPE9 Mzl7w'4)+! $aQ ff95TF0x0I?s1HD # =hj0>T h]]9 # AUF€#%@FL&d2?FM_O?ݐ?P6 mu` o?u#t  ~jth?A@t b@(.eX'?(MA>RJUffnFFF? ?b$zCCb&#&` STadowCJ lJ rz7&A/S+I[ # #.2"#&3E0U2?AG} "` Linet/"Bh,T:AB!!';43?C1pyV igTt(cu)2`09uM<0cV osp If2C1r0U!a0Uip n AR l7sBe0eJV v@0d""l>]Uhz1#!,!J`X66DA  " RMB(@DON"U32^K_]_W"UE2UF_O_"Ub2O;,"rAQ ,#Xc@Ci%bE9 Mzf7K'4Ž+!$a ff35T@0x0C?m1HD # =hj0>T h]]9 #  AUFb%@F'ǖ~rl?F2wa?Fq_W}w?P6 u` ?u#t  _LU?oA@'t3ʡE?.j+]M/$A>JUffnFF;F? ?b$zCCb,#&` STadowCjP lP rz=&G/Y+[ (##.2">"#&3E0U2?AG} "` Linez/"Bh,:A!!';:3?I1py\ igTt (c)02`090MB0c.\ osv f2I1r0[!a0iv n* 0AX l07Us"Be0e\ vF0Id""l>]Uhz1#!"J`%6%6EdA  ("RM(@DON(U38^Q_c_W(UE8UF _O_(U2O;#rA 2#^cFCi%bE9 Mzl7]'4+J! $a ff95TF0x0I?s1HD # =hj0>T h]]9 #  AUFu %@F'ǖ~rl?FMO?Fq_Ww?P6 u` ?u#t  ~jth?oA@'t3ʡE?.٬\m]M/S$AAJUnFF;F? ?b$zCuCb,#&` STadowCP lP rz=&HG/Y+[z ##. "#>3E0U2?AG} "` LiOnez/"BPh,:A !!';u:3?I1py\ igTt _(c)02`W090MB0c\ osv f2I1rT0[!a0iv n 0AX l07s"Be0e\ vF0d$""l>]Uhz@1#!2!J`%6%6EA#  R#R M(@DON(U38^Q_c_W(UE8UF_O_(U2O;2"ErA 2#^cFC i%bE9 Mzl7.'4+%! $a ff 95TF0x0I?s1HD # =hj0>T h]]9 #  AUF;%@F'ǖ~rl?FOr?Fq_Ww?P6 u` ?u#t  _vO~?oA@'t3ʡE?.]K=]M/$A>JUnFF;F? ?b$zCuCb,#&` STadowCP lP rz=&HG/Y+[z ##."#&3E0U2?AG} "` Linez/"Bh,:A!!';:3?I1py\ igTt (c)02`090MB0c.\ osv f2I1r0[!a0iv n* 0AX l07Us"Be0e\ vF0Id""l>]Uhz1#!2!J`%6%6A  P"RM(@DON(U38^Q_c_W(UE8U@F_O_(U2O,;2"rA 2#J^cFCi%bPE9 Mzl7w'4)+! $aQ ff95TF0x0I?s1HD # =hj0>T h]]9 #  AUF`9d[%@F?F{?Fߢ_-Jkw?P6 u` ?u#t  a2U0*C?oA@'t3 +?.ڊe]M~jtA>JUnFF;F? ?b$zCuCb,#&` STadowCP lP rz=&HG/Y+[z ##."#&3E0U2?AG} "` Linez/"Bh,:A!!';:3?I1py\ igTt (c)02`090MB0c.\ osv f2I1r0[!a0iv n* 0AX l07Us"Be0e\ vF0Id""l>]Uhz1#!2!J`%6%6A  P"RM(@DON(U38^Q_c_W(UE8U@F_O_(U2O,;2"rA 2#J^cFCi%bPE9 Mzl7w'4)+! 
$aQ ff95TF0x0I?s1HD: ##=h0>Th]]9 3$#UFt>!@Fp@FU^+m@F BP(??Pv> u{` ?u3>.6.t  ?A@RXqhki)t bk"p [SjJmU5 SLiT0U'"?AG}yFSb` LineColi rz!#nFFF#?&?#I,I"O&&` STadi wg/#/l+$Y# Fa lk &72q&` @?r)o 3#B..36=27.JM3117;Y#?l1pyo igTt (c)R@2`09R@Ma co osm0fPBl1rD@n!aP@im0n.R@ AC2 HHsBep@eo ve d@0l>]Uhz9Ah3I!J@35&5&hQ (;UL_q_T3^__gU5;^_^_oU;__do_j5b59 M{؏7/=v!3PrhaXv4!$A;7ai6oo74'&$|~svXv v_am A` Te xP@l>#mbzR#p8!e t@PQ:(F|'@f5ɔ?{I$m."z,Z#!b~-(T{<\B2>2 HD: ##=h0>Th]]9 3$#UFfG#@F BP(?FUiԄUV?P> u` ?iu3>W..t  x&1?A@RXʡE5i)t bk"p [TjJU5 [ SLT0U'"?AG}F^S` LineColi rz!#nFFF#?&?#I,I"O&&` STadi wg/#/l+$Y# %Fa lk &72q&'` @?r) 3#hR@BL..236=27$1311ԩ7;Y#?l1pyo igTt (c)j@]2`09j@Ma ]co osm0fhBRl1r\@n!ah@im0wn.j@ AC2U `HsBe@eo Eve d@0l>]UhzQA3I!J@X35&5&DQ(SUd__T3^__(gU5S^ov_6oUS__H|o_j5bP59 M{7l/Uv!3hrapv4!$ Uv}aiHz7]'&$Ŕ~vpv vX%Te xh@l?8 e'ad@ej@ @Q:1@3 9rI'm"z,Z#.T5-TAcl@0p@HD: ##=h4>T]]9 P3$#UFl6@Fp@Flf^J @F BP(q??P> u` ?u3 ]>2m2t  %䃞?A@V\m)t bo"p: [nՄJU5 WLT0U+"?AG}FWWb` LineCoklm rz醑!#nFFF#?0&?#M,M"S&&` S.Tadm wk/#i/p+$]# Fe lo *72u&` D?v)h9I!Bo? 3#eB223FA BGT2M3B117;]#?p1pys igTt(c])20@9uMe cs osq0IfBp1rw@r!a@iq0n. UAG2 {HsBe@es vi d@0)l>0>UhhClA3ZM!J@3,9&9&C"Q(nU__T3^__CgU5n^-o_QoUEn__o_(bTe ts0e j$Fje59 Mb{7/v!3rQv5!$;7aiPDV74'2B+#~vx _akq A` Ti x@p>#qb[zމV#qUGDF P# h4TPYY# %UFD"H$@F@&@P} j6   L 4 Hn[u` _?BU 4H9 u#p3lbtntQtRA%@> tS%tK/]'4( tTi%i/](/z.R&U@?/".g`Make BJ0cL0ground #d.#4#30U2)1_FB3!5~*؀3A$D;1`VIS_PRXY@0CHM!060<@78݉"`?C^0puy\0iZ0htP0(V0)P02b@0U9P0M@c\2s^0IfBvAry@Aa@i^0n@0 P0AUl@ }HsBe@e\0vN0d@2"e (^E 5 3r ca%A oCjA$A13 f3ba!5 cq@~n^e3jU?~bhKr;aq? |ub #h bauA1*4H f5ohc&Ac.abaEkQ,o^e= cAzf y3` D^0N^0tBn@a@nEi#_9Y@a `A1A1"( c'AAezeE>Xs A pIp1!` H@dN2&F^0YA @ޮ1` SPo]w:bb+"v 6 ~!bbbf,4T4?Had3E_HV>ל ߪJAC BFw #,ߩ6B} ޮ ]@e8ko+U ]BBG/)Ph{ q|(5&1$5.c/x6X789 (-JUFD  h ^TYYRUFD"H$'@F @FxH5@@@0ۀ HD:  # ;h4>T]]9 #U,AUFD"H$@FjZV@&@@P -u `u bu  @")lt A)0t#` bFv_n?p [_eAu` ?jurJ:26W&$##D./@%W(l"r/tvel JYO8?$"'D{(/$B}U2q(0?'?96?\.?fd?F 7`BackgroundC0l0rz\)z 5G!3M1 3?l"M@h<:A酙!30O贁Nk?mdddclak)2w` A0ce0t?A)n1bdCzI7!3117;l`?1py0i0h@ (0)2P20>P92PM*PcJ2s0f0R1r$P1a@i0n..2P A0l2P)WUs|RePPe0v@dpP0l>#$!Uh#3 pA QYŊJ*!3"Be(o1HD:  # ;h4>T]]9 #U,AUFD"H$@FVj?&@@P -u `u bu  @")lt A)0t#` bԚp [euv` ?SurAJ\tt&!$)'$"3/E X/j$=(tvel gsS?$"'*Q/$BU2q(0?'?96?\.?Efd?F 7`BackgroundC05l0rz)  5Gt!3M1 3&?l"(MPh<:A!30U_B??mdddLcla)2` ]A0ce0t?A)1b[dCzI7%!3117;l`?1py0i0h@ (0)2P20>P92PM*Pc2s0f0R1r$P1a@i0n.2P KA0l2P)Ws|RUePPe0v@dpP0l>#$!Uh#3 A Q!3 (BHA 3J*!32B'e@o1VcHD:  # ;h4>T]]9 #U,AUFD"H$@Fr߹\.@&@@P -u `u bu  @")lt A)0t#` bF_xP92PM*Pc2s0f0R1r$P1a@i0n.2P KA0l2P)Ws|RUePPe0v@dpP0l>#$!Uh#3 A Q\YJ*!32Be(o1>cHD:  # ;h4>T]]9 #U,AUFD"H$@F~߿?&@@P -u `u bu  @")lt A)0t#` bԚp [euv` ?ur>JvSȱ&$)'$"3/ E X/j$=(Etvel 37?$"'*/$BUo2q(0?'?96?\.?fd?F 7`BackgroundC0l0rz) K 5G!3:M1 3?l"Mh0<:A!300b/?mdddzcla)2` A0ce0t?A)1bdCzI7!3117;l`?1py0i0h@ u(0)2P20>PU92PM*Pc2s0If0R1r$P1a@i0n.2P A0l2P)Ws|Re*PPe0v@dpP0l>#$!Uh#3 A Q]!3 (BA 3J*X!32B'e@o1VcUGDF P# h>,/T6D # UFD"H$@]F@&u@P}  ih0D *@3-T-hc-|u` ?W΍U,@T'*hh||-"| 'u#QtSt% "{tR$t/'A@#KtT%/(/-3b/(08"A2?X>d.3@??2.`Make B0c0ground 335~J3hAmD;1`VIS_PRXY0CHM!#60@79݉j2`?C@puy@i @ht@U(@)@2@0@ @i@As@fBAr@Aa@i@n0 @AlP HsRe@eJ@v@d22e h4^hh P T3 3hHQ5T[5T[3T@[TT[ETh[TX|WndPao7T1128cpAhAhEhATh]]9 3#UFԚ?F BP(?F @F߅d\?P>u` ?iu3>k(.t  9#J{?Ao@HtSݓ_N׳_ambԚp [aJڍU5 ILc0U~!"?AG}cLIc"` LineColc rz$cC"I&nFFF#?&^?B#I&&` STa%dc wa/#/f/G(iF" d3#$..36hH1$d1d1l7;S#? 
1pyi igTt ( )@2`09@M[ ci os!0f B 1r@h!a @i!0n.@ AR#0l@GsYBe-@ei v_ dM@b0 l>]Uhz13C!`J/!F%QA(E _._@T3O^h_z_W?UI5N___?U Oa_!o_j5AbI59 M{R7/f!3 raavb5!$fAm/%Q12ooXR7w ' $9~0vx] vF88e 1@tQR:(F|@{f5Q?=C$cz,Z#bS~-(T{<B2 >, HD:  ##=h0>Th]]9 3#UFGZت$@F BP(?Fu?Fa]?P>u` ?iu3>k(.t  aTR'?Ao@HtSˡE_N4@_ambԚp [aJڍU5 ILb0U!"?@2LhcC"II&ynFFF#($(?#R,C"I&&` STadowC l rz&/+_y" QB3#.3i6h H1q$B1B1J7;v`? 1py igTt _(c)02`W090M0c os0f2 1r0!a0i0nU.0  l 8Us-Be@e v?@d!@@0l>]Uhz13 C!J/!FA(EO_T3#^<_N_WU'5N_O_UO5__Y_jD#b'59 M{07/f!3baf-A4!$fm/%ij5|oorX07Ww' $ ~vf Wv $8e'ua0e0 @PHZQ0:^2@3 9R'cz,Z#.T.'~-qT@Ac000HD:  ##=h4>T]]9 P3#UFD"H$@F@Fq!j@FC1[z??P>u` ?u3 >,2t  0*?A7@LtWzGcQR e0qbFL&d2?p [>eՄJU5 MLT0U%"?AG}FWMbg"` LineColg rzR#nwFFF#?x&?F#uM&&` STadg we/#/j+$W# F_ lDi 72o&G,`h9C!Bt! 3#.2236A2JM3117;W#?1pym igTt(c)20U@9M_ c.m osg fGB1r;@l!aG@ig n0 A22 ?HsBeg@em vc d2l>UhhC0A3G!bJ@33&3&C_QA(2VC_h_zT3^__gyUb52^_U_oyUE2__[o_(bT_ ti e j5ieb59 M{7/[v!3nrvv-4!$;7aiX7w'2%#՚~vvv vw_H<>SGל ߪJAC BFXwI|#(- T]]9 3#UFD"H$@Fh4@Fj~L$@FZV{?@F?P>mu` o?u3 #,2t  K46?A@LR e,֍t#`F BP(?pz [e"ejJU5 MKLT0U$"?AG}Mb` LineColf -rz!#nFFF#?&F?#F,F"L&&` STa%df wd/#/i+-$V# F^ lh #792n&` =?o))eCQ3#8&327A22\ތ32q@?O FM$? "fKO:F!%311Ԧ7;V#?i1p]yl igTty(c)y2U0@9yM^ cl osj0fBi1rԪ@k!a@ij0n].y A@2 HUsRe@el vb d@0l>0>Uhh7A3gBbJ@32&2&QA(U_ _T3^ o%oxeU5^`o_oU5_ oo.oi3 eB!3)m!#1'/28pȱ;!$8Q 52xbszgO#u%w#vj#lei:M{!/!3wxAJ dQ`$U8!e fQ:(F|@$v9K?F$g"z]#b?M-,T{{/ Ck\tq ĂHD  # =hj0>T h]]9 # AUFD"H$@&@FxitI%"p: [kՆJsnFFF? ?Abm$zOOSsu` FilA ConA orzOk` >/2O%(!(&\#&` STad*G wE/"/+ReU= 3#J8'32H23"&3E#0Ut2??AG}ΐ` L? n /Bh,:A311 7;3?F!pyM igTta(c)a2`09aM? cM oKsG f3BF!r'@L!a3@iG n.a AA" +HsBeS@eM v dHs@0l>]UhzA#!J`366KaDX' (32oRM(@T/_O_KQU3^__gUEU&V_A_!oUk!"_> QrA *bcCi%AbE9 Mz7's4;! $Da fv5T x3@?1HD  # =hj8>T! ]]9 #,AUFD"H$@Fx@&@@P6 -u `u `bu  @e" - A-0t#` b ̮?p [iuu`l?SuvAJ`rJ &$-'("7/ I \/n$A(tzip Q?$"'(/$񓾍U2q,0?+?=6?\.?ljh?F 7`BackgroundCj0l0rz-.~5e%3Q1 3?a t##FGB%3!#rٿnFFFf6AۚsA,? ("bCzy-lB-3&` STa0ow?(!` Fi0lrH2F1` OE=%:?Q1&gp!K7h<:AJ%3117;XQ@?1py0i0htk(0)k20P9kM@c2s0fR1rP1aPi0n.k AB Xs=be*`e0v0d1`0l>#(!Uh #3 XQ\iJQ*%3'928e&o i1HSeE9 MzW'0URr;!4o ]vqv1` T0xP_QH D:  ##=h4>T]]9 3#UFD"H$@F~@Ff>$@FZV{?@F?P>mu` o?u3 >,2t  Q|a?A@LR e,֍t#`Fi4F?pz [e"ejJU5 MKLT0U$"?AGq}2Mb` LineColf rz!#nFFF#?&?#F,F"L&&` STadf wd/#/i+$V# F^ lh #72n&` =?o)eC(3#82A"36A27\327q@?O FM$? HfKO:F! 3117;uV#?i1pyl igTty(wc)y20@9yM^ cl o%sj0fBi1r@k!ua@ij0n.yW A@2 HsRUe@el vb d@0l>0>Uhh7A3gBJ@X32&2&DQ(U__T3^ o%oxeU5^`o_oU5 _ oo.oi3 e23)pm!4/'2$#ձ;!$8Q 52xbszgO#u%t#xj#lei:M{!/!3wxAJ VdQ`$U8bT^ tl0e @he}}k\tq ĂHD:  ##=h0>Th]]9 3#UF3钿Y"@FFە @FNPQkӭ??Pn>u` ?6u3`b >"6(t  z6?A@BHK=kUYt ["p: [ZՄJU5 CLT0U"?AG}eC` LineColZI rz۷!#nF;FF{#?&?#),)"/&&` STaKdI wG/#/ZL+$9# FA lK r72Q&` ?RR) 3#B2((236727kB1o31P17;9#?L1wpyO igTt (cu)J@2`09J@uMA cO osM0IfHBL1r<@N!aH@iM0n.J@ UA#2 @HsBeh@eO vE d@0 l>]Uhz1A3)!BJ@o39DYX3UD_d_vY3^__gzUf53^_V_ozU3__H\o_j5bPf59 M{o7l/5v!3HraPv4!$ 5v|mai6oozo7]'$t~kvPv v8%TE xH@L?c8eaD@e J@ fQo:3@e3 9)']z,Z#bK~-\sTAcL@ 0P@UGDF P# h>,/T6D # CUFD"H$@]F@&u@P} ~ k ,D@Tu` ?LR,ru#|QΉtSzttRt 'A@tT$/ (L/+-3xOb ((A"i/.d#@3)5~??".s`Make B0c0ground#b314 ;1`VIS_PRXYx0CHM!#6008!"`?C0py0i0ht0(0)02@090M@cJ2s0f$BAr@QAa$@i0nx0 0All@ HspBeD@e0v0dx2"eYk} !HA%DK)5D,K3DKDTGnDAD_'7BTAy1y1 "(S11}51<>@XBr1I!&` H@d.2&F0o$@Az0#` SPowRo +<"v8d7e7 J@cbbg bg__0R11"Sa%1+bb$y1r,QHEq)5Se@~|u3{,|uVz?rxK2cxfq<`}? rjb #! 
bIgaj?@TT,3qEGxsD& 'M1sYq(q{r=SeS,fa0 D0NZbAn$@a@Q"AC@M7LqtKo]e_vH>ל @(T6D UFY,b'@F#H$ @FxT ]]9 TaAUFY,b@F#H$@&@WFF"W@P6 u`bt A@u)΍tStS<tTJ(S3d i)Ob  ADREJ#U0U8"?@$ t0 ?$( ?U kaX:`B c groundC l rzAb#z.)s$%B//#b w/bn3H? K1#5 CL.`M kek2?dQ5.1#A D5 v$`?!py;0]i90htk(50])k20S@9kUM?@c;2s=0fEBR!r9@!aE@i=0wn.k A lk>GsBee@e;0v0d@B1#lhA@=^M G5C G&dRMl>0>UhhE p.A!1 $R0SPJf l#b(.hD Ob"YbM_(F!o+g?7o3coe5noogoeV%efo+o oeE2 o;rAsCghe[+TQQM- &EtJsTl@S6V%B,QE=BAA!D)B0#` D=0N=0t*E2nE@a?@nE,15eq3FB(OqiAbV%9 M!4.'7$ň&!$Q2 $H>%1x (z)ub"`\NWFN#ô jPeB ״ ]Pa, Q@+Y #BXV+k!UFDfP h>,/T6D ؉UFY,b'@F#H$ @Fx%B2 6gdaO{'TT'T= S5Ҧ13C#C D)@N)@tC)@nsPaiyPr$^ Ph(:LV?DOowpx`,p|rag ont] hep ad ful--> siz "]bckru"im.bY,b#H$@@5pm;μ@@3F" HD  ?hZ8>T H]]9 UUFY,b@F#H$W@' P6 lA u`l?U  u(tS&A@ctTl$bsTJh=M/ m20U?@LebTqBG H#$\( ?A3`BackgrounWdC l rzuu!$)(=~ H" #{#9"Mj%7 M`M kek/dpQu` EvNO0nt!p'!~u eJع1!';1#Vis_o RXY. hm!#6050 v`?!py ui hx0 ( u)k2009kUM0c"s fBR!r0!ax0i Un0 kA lk*GsdBe0e s1Qd0 &+?"A %l>0>Uhh5@ 7A!!!JnGV lfX 2 3VM@ !Q_&1q_mQU3^__5gU%UHVoc_CoЧU"5G_ #rRA !"cCgThe'+RT!!M-Qt^t JcTl!nqz{,@{3#T811 C!## D N t"w1a0nEH>Or S-"GqS@w`F8)V"#3 KXTB ]^ax߮s ]@+a Y}BزVIPUFD  h ^TYYUFD"H$'@F @FxT]]9 #U/P%g(|"./t  {Pk?e"p _o*'6#4!Uh#3 A AYAJ*=3F"+2BU_ ASHD: # ;h4>T]]9 #U#AUF@F\.?Fl6 @@P Vt`  ?"u u u " '@l0+bA80&t# bp [5Ru7?u_eAYJUFL8~i/6a房D!@ '!"*(E,{Pko"] U?N@*(~k *Q/Bo2q0??6??S??F 7`BackgroundCq0lq0rz~8b%I=(1g 3?"&Ph<:AZ20U0B3 Lb+ 7117;Y`]?z1pyo0im0Wht(i0)W20@9M@co2sq0fBz1rԡ@1a@iq0n]. A}0lGsBe@eo0v" Pd@0l>#$>Uh,#3 A AUE 2teA}o #J*A @"BU_1SHD:  # ;ih(>TA9 35AUFi4F? <Ѕ2P $>  u` ^?3 u4#"t  q?A@g#tYsn$"Jh?hB="5 +#?A&\ ?~!~!5 `?CopyrigTt(c)20 9M c os f"!r !a i n. Al (s"e e v0d l>#UhEI !#J?Xl]b4X6WBHD:  # ;h0>Th]]9 #AUFi4F?k?@P -u `u bu  @",% AGh[ >u`] ?u#J" ѿ$ #p/$.?/ ")Z/l)t )baht Da&(/Ԯ$BU27q 0??16?\._?b\?F 7`Backgrou_ndC0l0rz%b5=t3E1 3&? ?9!J31h17;h`?1py0i0hUt (0) 2`W09 M{@c2%s0fB1ru@1ua@i0n. A0l zGsBe@e0v@d@0l># !Uh#3 A jAYbJ*3B`Uy_1HD:  # ;h0>Th]]9 #AUFi4F;? ?@P'-DT!>-u `u bu  @"E% AG[mu` 7?u#e@)J"9$' "/-%Y'%-(c/- t aht aR"/$&QbaBU2q 0??16?\.?Eb\?F 7`BackgroundC0l0rz%b_5=3E1 3? ?9!3117;h`?1py0i0ht (0) 2`09 M{@cJ2s0fB1ru@1a@i0n.. A0l zGUsBe@e0v@d@0l># !Uh#3 pA jAYŊJ*3&"2B`Uy_1SUGD  # h#$T #3 BUFL8?Fc?@Fi4F? uP} VV    $ u` ?ULV){$$ASu#L"p [A@)HtI  {Pk5?0t bF,/|񁠅dyBM#!!Z* "`?CopyrigPt (c) 20 9 M c os f"!r !a i n. Al (s2e e v0d ;"eYk}$Hp1D%|4;E|4$7UGD  # h#$T #3 BUFibAA%@Fc?@Fi4F? P}  u` ?u# VrVh -(^<c=PLVU((<<PP"p [A@Kt  6TA9 35AUFi4F? <Ѕ2P u` ?Mu# t  q?A7@t+"=&9Enm$ >  , Jh?B="5 +#?A&\ ?~!~!5 ~`?CopyrigTtl(c)l20 9lM c os f"!r !a i n.l Al (s"e e v0d. l>UhE`I !#J?l]2b6BHD:  # ;h,>T]]9 #AUFi4Fu?k?@P -vu `u bu  @"E! ACWmu` 7?~u#m>)J" ѻt b[]dt ]&( '/$$y'!t"/%'-/ BU2q0??-6?\.?^X?F 7`BackgroundCj0l0rz!?b!I=3A1g 3?3117;vd`?1py0]i0ht/ (0])/ 20\@9/ UMH@c2s0fNBR1rB@1aN@i0wn./ A0l/ GGsBen@e0v@d@0l8>#t!Uh #3 A 7AI+J*3[2B-UF_1\SHD:  # ;h,>T]]9 #AUFi4F? ?@P'-DT!ݿ>-u `u b_u  @"! ACWu` ?~u#->J"9Bt ]d!t ]"/ $-&b[]$y't"/%'-/ BU2q0??-6?\.?^X?F 7`BackgroundC0l0rz!b!=t3A1 3&?J31h17;d`?1py0i0ht (0) 2U0\@9 MH@c2%s0fNB1rB@1uaN@i0n. A0l GGsBen@e0v@d@0l>#t!Uh #3 A 7AI#J*X3[2B-UF_1\SHD: # ;h4>TB]]#AAUFD"H$@Fc?@F%@FCHy5?P6 Amu` ?A@ u#`%t  ŏ1w-#LRtA`%"p [S$J=#UG ?| ?釉\2q- ?,/>&?\.+?&i/: -4 # #20U" @ #Mb Bh,:*A!!';`?CopyrigTty(c)y2009yM0c.0os0f21r01a0i0n.y Al0 8s2e0e0v0d0 l*>,>UhE=%y1# 1J! rBA?(wN9OOEE3wEOKOY% wESO#R] JiAle%(9 MX&7WW'$ž.! 
$0 V #x"j__$f!32bGgHf_1 `5kyoobq %?1HD: ##=h0>Th]]9 3#UF( x"@FFxQ}H( @Fy0խ??Pn>u` ?A@u3`b>,2t  ?sFLR +5t ""p [NJU5 MLc0U"?AG}(LM"` LineColS rz$3"9&nFFF#?&^?2#9&&` STa%dS wQ/#/V/7(iF" T3#$2236hL1$T1T1\7;C#?1pyY igTt ( )02`090MK cY os0f21r0X!a0i0n.0 AR0l07sIBe@eY vO d=@R0 l>]Uhz133!J!9 RXEO?_)[3?^X_j_W/U95N_ __/U OQ_ou_j|5Ab959 M{B7/f!3bѿavS4!$f1m%A12(ooXB7sw'$)~ vv: sv688e !@dvQB:(F|@kf9A?=3$z,Z#bԁC~-,T{< B2 > HD: ##=h4>T]]9 P3!#UFD"H$@F<@F ҵ #@Fx萢5?@FN@?Pv> u{` ?{u3` #>,2t  ]?A@LR e0t bF@ ?p [e"eJU5 - MLT0U("?A?G}FMb` LineColj rzR#nwFFF#?x&?I#uP&&` STadj wh/#/m+$Z# Fb lDl 72r&J,`h9F!Bt! 3#.2236A2JM3117;Z#?1pyp igTty(c)y20X@9yMb c.p osj fJB1r>@o!aJ@ij n0 yA52 BHsBej@ep vf d2l>UhhC3A3J!bJ@36&6&C\WA(5UF_k_}T3^__ g|Ue55^_X_o|UE5__^o_(bTb tl e j5iee59 M{7/^v!3qryv-4!$;7ai X7w'2(#՝~vyv vwUGDF P# h>,/T6D # UFD"H$@]F@&u@P}  D 3,@+  h?u` ?U~@@T^,u#Q3tStB%: D"tRB$tK/]'A@3bI/[(#:"tTB%/](/{/A*""/.dJ3@S3y5~h?S?J2.`Make B0c0ground: @3ز3AD[;1`VIS_PRXY0CHM!#60+@892`?]C0py0i0Wht0(0)0U2Q@0W@ I@i0J1s0ftBeArh@Aat@i0n0 0Al@ lHsBe@e0v0dT2A2e4^AA @Jy5 3H=QJ5ITT[y5IT,T[3IT@T[ITT7WEIT^[@EXhTWnda:oTw7T(11J2G8hcAAEA>Xr1IC1&` Hn@d2&F0ot@A0#` SPowoTD!+"vde* @"s)r#r%_w)rgA0o^So=@ib@1$1J2ty2-t3I[t ZtE "bG8TQk`aJ5AK#r)r1{1Qg ga$y5hce@6D3,φ+?6ם ߪJAC BFg]b#i ^CB ߮ _U@eko}+j oBBVoP?nbXqeO8KgxiGClO[n:prtX?#w@zo}drUFDfP h,TPYYCUFY,b'@F"H$ @Fx siz "]bckru"im.bY,b"H$@@\L[/f&.{@ͺl@;μ3D UGD" # h <^T$(YY#UFY,b@F"H$+@' ]P} j&(<XP"u` ? ((<<PjP nu#:xtSv߀A@#tT /tb"(9'do `%  `"Mo`Make B c ground!}Q ` Ev nt!pb ~  "%JT%-%'W+3S?$B#i#k0Ud2>3F&=i$46`#4r6r6 :2g"\i#2q0??F`#F\.?y( .O@i#dA+137]D"`?C"y ui h ( ]) 20@9 UM@c"s fBRAr Aa i n. AUl@ HsBe@Ie !d@)01w`V@s_0_RXY@cPWm![06@2.P QW"&/H:L e(^!6D _] i#HZZ`%T[%T@[3TYWgda^oC +T# 1 1`"13Rd3](cvydA C08Ua%k,@odL33TdAfx\3` D N tB!a@n8UA^E `%{MW?r6!3r0v*4*P83r69MTHD # =hj0>T h]]#A UFf0!? c@F"H$@FL( jKb' P6 ]Au` ^?M u#t  ]oEt9SENQ,>ChJ#0O迴Nk?{F 2q ?/&F\.?A8@73`Backgrou_ndCp lp rz@b Q Q#t$]A -]&MB±~<# u;c/u/!O2݇ #Iob< @P2V?h?!ܔ6!P!';3?1py0i0hUtp(0)p2`W09pM@c2%s0f!B1r@!ua!@i0n.p A| lpGsmBeA@e0v@dHa@ l> ,>Uh =E A#!1ŊJ`+6R  8x\ UVMY!9#_x9_5QoU3U+>_3__E2 _U2KrA 2NcCiA1eE(9 MX'E'0UbŔ+!ra .fjvoo{'_ !3rgfc`#fI-pHD # =hj0>T h]]#A UFL( jb@F"H$+@' P6 n>u{` ?5 u#t  ]EQt9SENEEhJ#0O贁Nk?F  ޱ2q ?/&F\.?MA8@73`BackgroundCp lp rz@bI Q:'! #t*$ -&MB±}_'!<# u;c/u/!<݇ #Ib-< @P2V?h?!܂6!!';3?1py0i0htp(0)p2`09pM@c2s0f!B1r@!a!@i0n.p AR| lpGsmBeA@e0v@da@ l> ,>Uh =E A#!1J`X+6  8x" UVMY!!_v9_xoU3^9__1UoUbE2 _U2rA 2NcCiA1eE9 MŔX'E'0Ub+!ra fjvoob{'*?!3rgfc`#fI-pHD # ;hj0>T h]]#A!AUF04qy#@FǮX?Fqo#@uP6 $> lu{` ? ;u4#*t' ‡tfmta{mvm[W"m3J#U0O贁Nk?F 2\2q* ?)/;&F\.?+A,8 iG #f?pH&M<# ";`BackgroundCJ0lJ0rz@ba0a072=?O?a4i6qbWd @B !!';us3?1py0i0ht (}0) @2`0U9 @M@c2s0IfB1r0Y1a@i0n. @ A0l @GsTBe*(@e0vf@dH@ l>,>UhE=%1#!1J`Fr  $&M3 `Q 6RFxAgߊ |? YF_W&d>2 ~`_21^'` _}A_(@M9 __Ue%2OU2rA T 2cxCiA1e%9 MX'g'0EUrŻ+$a f#xB"joo!$m_!3r =w>v12y q-> Ako(#HD # =hj0>T h]]#A UFOvE%@FǮX?FU1@?P6  >u` ?j u#t  ΈEt9SsENCп+EhJ"A|7<F ?ka@X:` Ac entCol* rz} b A bQ#z?)ۀ Ib'Ak!" //.*?/b!#Z/l 񽓱2q0??6F\.?Ey =?:@73`B cwkg0 ou d/b'Q Q3$A -6MB80UB2  !?'117;`?x1py0 ik0h ( )@2`0U9@M@cm2s* IfBx1r@/!a i* n.@ A, l@GsBe*@e0 v d@0Rl> ,>Uh =EAh#1o!J`z*`1  86n8[2 VM(X1_[gU3^9o!oUUE2~_Uo"KrA 0o"c%SiAbE9 M!4E.'D;;Da f jooŕ{7v!3r_w`v`Wvv+H=R_\H>` r =T"?KF(#6 وTB9괿 -^aд o@{+C oQaGUoPxxUH͐IéٗUFDfP h>(/T6D UFY,b'@F#H$ @FxT P]]9 %AUFY,b@F#H$@'R P6 u`bA@utS!WtT6 WWb; Z RADEJ#U0U?t":#a?L& ? 
kaX:`Bw cq groundCj l rzAb#z)sx%8B//#b ܭ/b23/!5 /Lf1w`Mw kekz/d!/R:}y` Ev0'nt1p~O2%145 8`?!py i h0 ( )k20>@9kM*@cJ"s0f0B!r0!a0i0n..k A lk)Gs|BeP@e 1dp@B+lhA@=M Ga53+ 7L&AORMl> 0>UhhE A!d1 s$R S PJ8VlbhXD !:bDbM#( !0og?"ocZeX5jnoogZe%jeVoooZeE12_;rA*sCgPhe[+!TQQM- IEJCTf%qbt!%s3,@=M! E=s11 DB` D0NR0t 21a*@nE!X5Xeq3FB(qiA b%9 M!d4?#ԁ'ŔL&!g$oQ5 H> r (z)ub"`\NWFX Q#Ǵ eB ( `Pa_Tд o@+ oB+o4UFD  h$^T YYBUFL&d2?F~?Fx<F BP(?| ~P?| w VBW< 2h5h? Pm?2?2P'2ȅHBU? ? D G 0eB`-City,n_ tworko ly cak iy no si"eo pR a !iq sk a !%= \%E^  G n?~#0?688?U[fcffgf 3fwf c3gwfw` f{~6a wxxxwwp` px wpu 0226yR2gy?LDrag thuesapWonodwipe.b~??Q뿤i??"&d2}L4oҿ޿UGDF P# h @T(PYY*# U@L&d2?@~?FUP} . !P6 :T 7h7|777 ,7%A'7.u`_?U"6@RThh||"u#) ^2<7:U? ?4b0{[`:S1#26X3#"2^QHW93 2 1/L1$-"@ 2  2"3'>>ÿ@ǵqp@eE?oDeF0|r&BCkBu@`i-1Bu @  26CA (B" 2#& "0)9u `u`"[ b1 $𿻐u`񾄑W3J#N5)W3Qq1q71`Vis_PRXYcPm!#5n `13RB`?CopWyrPgPtu(`)20?`U9MPc)`oP'of1b"ar%`^aUa1`i#`n WAly` )hs}bUePe)`v`dRq1N3 dZ4V1B=Tr6N3@t|6|6 x2N4#W30U[r|3]W,7^q_@VcTg dagRsqu2+ D3hhR)^RHqqE2K8CT $0aN2{5\1pac2!6u%\ GŜ1S1J[2@@R4{35s_N4]&F3s͵NSYFwUUaU,@Qr#`er51pN2K8ڀqiviv_<׀,=pޟ &K8AWBCgE/s"ITh]]9 M IAUF~?@s?F,jג?FRW_7r6?Pps-8Rw?>$Y]A  JuM` ? ;  )u*Jt' eta{+23.JvюV?lv_ ?vx]PqE>JU43f;?4.&P?Apb3bbfz@Ave bh(bv)t&*J#!!5 `?CopyrigTt~(c)~2`09~M c o%s f"!r 1ua i n.~_ Al/0 (Us32e0e vE0 d'0!3]$$#-## '.&%d 2zoGz.#@ M#J6h/z/ BK' B*ib~59 6X7oG'K0UB.&!I$Ia oFJl6.}UhU=!1 A(# ~VIP & `!& T` V_RUGD  3 h0TdYYB qUF9r?@H$$?Fvn?@@u?P} Lk f 0u` ?00DuNrtK  jtK޹ѹMWZU;!;!`?CopyrigPt(c)2\09Mj c.h osb fp"a!rd !ap ib n. Al h(s"e ejh v d #"P4#S8$,#B-5# H" G'I?O=5#H&,#Z45656 12j,$#5#0U2%53+&j  ^K5 6X77'456!.4] 6Je$B^ 6D 0HA,%DK%D0GHD: # h 4>T  9 P#CUF0C3?@&@ P?FμW3P6 n>u{` ?5u#t  (]It=W!IVR1_IlI#>y;?? ?bA@"b])b)&2\"@ #[&/#'*+:'";*t\U$q ?/6 46?> Jh1h15 g`?CoprigTt (c)02U0090M0c0os0f21rԑ01a0i0nU.0 0l0 8Us2e0e0v0 d0a1(e4B-t3 t70ieM9 kF:X=GA_'2r"Z!0 F3j|OO{=GyV!%3RGFNDGN_:w5k&_8_Rq @ElAUhhEG1#@A-!T6j& f$<f bfI(:x<'0i`c q$q3a(z=iKquM$ufE`|aOyX+cqE2o;#rA $2H bWHD: # h 4>T  9 P#CUF?@&@ _P?FP6 v >u` ?)u#t  P]It=W!IRg]Hl#>y? ~?bA@"b)b)b$񅴍2\"@#[&*<%&$*Q+:'";*\U$q ?/6 b46?> UJh1h15 g`?CoprigTt (c)020090M0c0oKs0f21r01a0i0n.0 0l0 8s2e0e0v0d0a1e4B-t3 t70@ieM9 kFX=G'2r"!u0 F3j|OO{X=GyV!3RGFNDGN_:w5!k&_8_q @ElAUhhEGІ1#@A!Z@2r80; <qL#4 bvI(a9i@`bcbu3:+}E`lQqi?X+cqM*ufoUyQquE2oe#rA $2H bWHD H# h0>Th]]9 M#JUFvn?@&@ P??@*~w?P6  >JuM` ? u#Jt =t9S!WJNbFh.>y*<? ?AJA@+b)b$J2N7Nk@MJQ&*2%$*+0'1*T!!5 c`?CoprigTtV(c)V2`09VM0c0os0f 21r0M1a 0i0n.V Alh0 8sl2e@0e0v~0d`0!E]$-# '0AbE9 6X7G'0URWB!a F"JlJ]Uhz 1#1-Z61 K 8 0R]VJ:9 6S gPQETT]]B9 #CUF^?@z?FO&_~uP6  >u`3?ju#t  @WIt=WUr mIR~[WPHl#>? ?Lb "z@A gb(N(+b*$2g"?088r #f&#fb8!r.DF&""+E'""F* \ ?X64?+96N?UJ~1~15 g`?CopyrigTt (c)020090M0c.0os0f21r01a0i0n.0 0l0 8s2e0e0v@d0w1P{4B-,3 70ieM9 FXSGG'2WUA!0 F%jOO{SGV!%3R W VdDWN_:5k<_N_Rq VEl QUhhE+G1#VA-"!T6u&F f$<f bvT(:+/U?k`c 2g.q%:uoc xWJ#eaq*uM:uf-l6R|aeyj xx-E2o;""rA !$N*H bWHD:  %$h4>T]]B9 #CUFD%?@\f#??@k߻?P6  >u` ?ju#t  I_IIt=WPaIRIJl*w#>G?? ?b߀A@eb=!z #"b $"bI$2g"?088r b#f&*8!r%/F&"+E'"%F*\ ` ?64?6N?U J~1~15 g`?CopyrigTt (c)020090M0c0os0f21r01a0i0n.0 0l0 8s2e0ej0v@d0@w1{4B-3 70ieM9 FXSGG_'2WUAZ!0 Fw%j(OO{SGV!3R W V@dDWN_:5kH<_N_q VEql QUhhE+G1#VA!Z@+*S?F <`?"B## bv@T(a9i`bcb0u%:A}?l6lgqiQaF 4qMuЍ=7{cc )A #ougq0u-E2oe#rA 0#VN*H bWHD:  %!$h4>T]]B9 #CUFW-?@=?Fh?@uV?P6 v>u` ?)u#t  Ht=WB/aקIR_noIl3ur#>!;??| ?Lb "zo@A gwb(b*)I(&2g"?@#f&#f #-(+(+E'""F*\ ?64?6N? 
*J~1~15 g`?CopyrigTt (c)02U0090M0c0os0f21rԧ01a0i0nU.0 0l0 8Us2e0e0v@ d0w1({4B-3 70ieM9 FXSGG'2WUA30 F%jOOŕ{SGV!3R W VdDWN_:5k<_N_)q VEl QUhhE+G1#VA"!Z@%g 1?( <Kg" bSvT(:?31J9c 0_M#db0u%@u~~mlfl?"0uM@ur\8ˊc`[y뵸m}q0u-E2oe""rA 3!$Tu bWUGD  3 h0TdYYB qUF 0KR?@xՈ ?F?@_@w?P} Lf  0u` ?00D)uNtK  Uɹt0zkn̹ k*ѹWU;!;!`?CopyrigPt (c)r 2\09r Mj ch o%sb fp"a!rd !uap ib n.r _ Al h(Us"e eh v d #"4#S8$J,#B-5#6G# G'?O)=5#H&,#@Z45656 12,$u#5#0UU53@+&j ^K5 6X77'R456!.4]R 6Je$^ 6D  5#HA,%DK%D0GHD: # Th4> T]]9 #CUF/KR?@wn?F#vP.6 Au`= ? u#t  o,It=W:CIRGm Mw+Il#i>y9 ?? ? bA@"b)b%)&񭃵2\"@# [&/#'*+:'J";*\U$7q ?/6P 46?> J h1h15 g`?Co}prigTt (c])020090uM0c0os0If21r01a0]i0n.0 0Ul0 8s2e0e0v0d0a1e4bB-t3 t70ieM9 kFX=GA'2r"!0 F3jP|OO{=G,yV!3RGFNDGN_:w5k&_8_q @ElAUhhEG1h#@A Tj&1 $<fib bfʼI(:0 0i`c 498˱q3a(z=iKquM$uft:ۍ|HaOybxE2ot;#rA $V2H bWHD: # Th4> T]]9 #CUF?@wݯn?tP6 >u` ?u#t  ]yY'It=W菗:CIRHl#>Hc? ?Lbbcbz@A } b(b*) #a 2g"@#f&*G%1$*+E'""F*t\`$q0??6 *4A?> Js1s15 g`?CopyrigTt (c)0W20090M0]c0os0f2R1r01a0i0Wn.0 )0l0U 8s2e0e05v@d0l1p4B-X3 70ieM9 vFXHG/'2r 2!0 F 3jOOŕ{HGV!3RGVYDGN_:5k1_C_)q KElQUhhE G1#KA Z@~#0f < 1#4 b vT(a9i@`bcb%u3:6}t:l\qi)qM5ufo`y\q%u"E2oe""rA Y!$=H bWHD H# h0>Th]]9 M#JUFʺ?@؏pǶ??@/Qw?P6 >JuM` ? u#Jt =t9S6WBBJNbFhՁ$>:3f<? ?A A@ab3bfbz #b$b$J 2zGz@MJ\&!(<&'$*+;'<*B!!5 c`?CopyrigT}tV(c)V]2`09VM%0]c#0os0f+2R1r0X1a+0i0n.V AUls0 #8sw2eK0e#0v0dk0@!E]$-3 7 %0bE(9 6X7G_'0UbBJ!a F-JlJ]Uhz1#1 劯Z@`Y<  8KAF4 ;RhVJ:Ç9 AS ]yY'PTGRU3U2_DQL_UEUu^jYH ]U2"_UrA Y$c( bUGD  3 h0TdYYB UFZV?@[V?FRHG?@ _BPhw?P} `f  f0 Du` ?00DDX)ubt_  .HLt %+,!iUc!c!`?CopyrigPt (c) 2\09 M c os f"!r !a i n. Al (s"e e v d AK"\#S`$T#B-]#o# &o'?c=]#p&T#4]6]6 Y2T$#]#05U2]3@qS& ^Ps5 6X87.G'4]6%!V4] F(J)e(^(I ]#HAT%D[5D0[3DDWHD: % $h4>T]]B9 #CUFO/?@/`p?FqRA?@?P6 n>u`3?u#t  noIt=WoIR#U HlAw#>2?088r@2#Aeb8 !rztb(#+#+#&y<u"?& ?"#+b%)+@'\?64?+6C? *Js1s15 g`?Coph rigTt (wc)020090M0c0o%s0f21r01ua0i0n.0U 0l0 8s2Ue0e0v@d0l1p4B-j(3 7&0ieM9 vFXj'G'2WUAŇ&!$0 F%CjOO{j'/V!3RGVYDGN_:B5k1_C_q m&lQUhhE GБ1#m!!bTc# $<f b)v^(::'Ԝ?k`c S#q%/u3;c .9~#eVquM/ufflG|aZy{ґ#q"E2o;#rA $CH bWHD: %$h4>T]]B9 #CUF3;?@/`p??@w?P6 >u` ?u#t  oIt=WoWIRIlAw#>2?088r@<#Aeb{8 !rzb(J#+#+#& yL?q#?& ?w"b%)#+b4/U%\?64?6C? *Js1s15 g`?Coph rigTt (wc)020090M0c0o%s0f21r01ua0i0n.0U 0l0 8s2Ue0e0v@d0l1p4B-3 7&0ieM9 vFXHGG'2WUAŇ&!$0 F%CjOO{HG/V!3RGVYDGN_:B5k1_C_q KElQUhhE G1#KA!効Z@x<?f# <h-nѶ# b v^(a9i`bcb%u%:6}fll\qi{ґ)qM5uf o`y\q%u"E2oe:#rA $=+H bWUGD  3 h0TdYYB qUF%@?@5,k?F ?@Q_rw?P} Lbd  0u` ^?U00_uNtK  :Z[tZJ[jIӹ._jѹԒWU;!;!`?CopyrigPt (c)r 2\09r Mj ch o%sb fp"a!rd !uap ib n.r _ Al h(Us"e eh v d #"4#S8$J,#B-5#6G# G'?O)=5#H&,#@Z45656 12,$#5#0U253@+&j ^K5 6rX77'4)56!.4] 6Je$^ P6D  5#HA,%DdK%D0GHD: # Th4> T]]9 #CUFy?@vn?F#?@u?P6 > u` ^?Mu#t  %7YIt=WaR9IR_L=*Hlw#>y!;?? ?bA@"b)Kb)&Z2\"y@#A[&/#'*+:'";*\nU$q ?/6 X46?> Jh1h15 g`?CoprigTt (c)020090M0c0os0f21r01a0i0n.0 0l0 8s2e0e0v0d0a1e4ŤB-t3 t70iePM9 kFX=GA'K2r"!0 FB3j|OO{=GyV!3RGFNDGN_:Bw5k&_8_q @ElAUhhEG1#@A!ŊTj& $L<f bfI(j! ?.۾0c _*#c ˱q3aklquM:mx<ۓ|aUyM8'hxE2o;:#rA $8+H bWHD: # Th4> T]]9 #CUF_xa*?@vn??@?]P6 hvA u{` ?5u#t  ߔ@gIt=WaR9UIRIl%w#>y%? ?bA@"b);b)b$i2\"@#[&*<%&$*+:'";*t\U$q ?/6 46?> Jh1h15 g`?CoprigTt (c)02U0090M0c0os0f21rԑ01a0i0nU.0 0l0 8Us2e0e0v0 d0a1(e4B-t3 t70ieM9 kFX=G'K2r"!0 FB3j|OO{=GyV!3RGFNDGN_:Bw5k&_8_q @ElAUhhEG1#@A!効Z@ xQ0Y; <3H#4Zb bvI(a9i@`bcbu3:+}x<lQqiM8qMuuUo/] d =xd #YuQquE2oe:#rA $8+H bWHD H# h0>Th]]9 M#JUFs`m?@|nH˲?Fdvn?@MQY?P6 v> uM{` ?5 u#Jt  o\SDt9S2X2ENFYEhI5KOV4>y<? 
?AJbA@" +b )b $J2N贁NkS@MJ W& *8%"$ *+6'J"7*!!5 c`?CoprigTt (c)(02`09(0M 0c0os0f&21r0S1a&0i0n.(0 Aln0 8sr2eF0e0v0df0!E]$b-# 'A 0bPE9 6X7G'0U]BŔ!a $F(JlJ]Uhz@1#1!Z@?~ 7  >8ħ]i 6RcVJ:_3?f& .;T 1_QKTBR}U3U-_?QD_VT}UEU7&eD YZM8ʺ]a}U2_U#rA $c( bUGD  3 h0TdYYB qUFx<^?@ȵ?Fʺ?@Ot?Pn} L  0u` W?U050DuNtK Ot@ڹ *"!-U;!;!`?Copy_rigPt_(c)2\W09Mj ch osb fp"a!rd !ap ib n}. Al U h(s"e eh 5v d #"4#(S8$,#B-5#G# G'?O=5#H&,#Z45656 12,$#5#0U2&53@+&j  %cK5 6X77'R456!.4]R 6Je$^ 6D  5#HA,%DK%D0GHD: # Th4> T]]9 #CUF/KR?@}JR4?F#vnP6 >u` ?ju# t  ,It=WXBIRGm_ MwIlb>y;?? ?bA@"b.)b)&i2\"@#[&/#'*Q+:'";*\U$q ?/6 b46?> UJh1h15 g`?CoprigTt (c)020090M0c0oKs0f21r01a0i0n.0 0l0 8s2e0e0v0d0a1e4B-t3 t70@ieM9 kFX=GA/'2r"!0 F 3j|OOŕ{=GyV!3RGFNDGN_:w5k&_8_)q @ElAUhhEG1#@A!Tj& $3<fM bfI(:0i`c l$Zaq3a(z=iKquM$ufh6fҍ|aOy"?0x,bxE2o;#rA Y$2H bWHD: # Th4> T]]9 #CUF?@2L&??@@?]P6 lnAu{` ?5u# t  ]yY'It=Wȅ,d!K IRHl%mw#>y%? ?bA@"b);b)b$i2\"@#[&*<%&$*+:'";*t\U$q ?/6 46?> Jh1h15 g`?CoprigTt (c)02U0090M0c0os0f21rԑ01a0i0nU.0 0l0 8Us2e0e0v0 d0a1(e4B-t3 t70ieM9 kFX=G'K2r"!0 FB3j|OO{=GyV!3RGFNDGN_:Bw5k&_8_q @ElAUhhEG1#@A!効Z@v0Y; <B{ %#4k bvI(a9i@`bcbu3:+}76flQqi*h/qM*ufoUyQquE2oe#rA $2H bWHD H# h0>Th]]9 M#JUFʺ?@BP(T??@W0 w?P6 >JuM` ? u8# t =t9S?JNbFh?s>yr<?X ?AJ~A@+b)b$J2N贁NkS@MJ Q&*2%$*+0'J1*!!5 c`?CoprigTtV(c)V2`09VM0c.0os0f 21r0M1a 0i0n.V Alh0 8sl2e@0ej0v~0d`0!E]$-X# '0bE9 6X7G/'0UWB%!a F"J lJ]Uhz 1#1Z@~1  8[_B! 0R]VJ:9 6S ]yY'ETTh]]9 MJUF3;?@z^??@x<w?P6 >JuM` ? uJt  oEt9S֡WlENEhII>yI$<?X ?AJbA@"b )b$ )&J2zGz@M JW& /'*+6'"7*BB!!5 c`?CoprigTt (c)(0]2`09(0M 0]c0os0f&2R1r0S1a&0i0n.(0 AUln0 8sr2eF0e0v0df0@!Y$-# '  0b5(9 6X7G_'0U]BJ!a FR(JlJ,>Uh11!T0 9 F 8  2RYVJ4]1_CY?QsU3:@|>\OQ HS m`wQ52_;#rA 3$@cHD H# h0>Th]]9 M#JUFfl?@z^?F3;?@x<?P6 n>JuM{` ?5 u#Jt  KEt9S֡lENoחEhᐟI>y$<? ?AJbA@"b] )b$ )&J2z_Gz@ MJW& /'*+6'"7*T!!5 c`?CoprigTt (c)(02`09(0M 0c0oKs0f&21r0S1a&0i0n.(0 Aln0 8sr2eF0e0v0df0!E(]$-# ' 0bE9 6X7G'K0U]B!Ia F(JlJ*,>Uh1#1!Tf& 38>VM 2RYVJ:|> 3Y@P HS mۻPwQ34]1_CYQsUE2_;#rA $@c( bHD # h4>T]]9 P#CqAUFe!@b?@M?FQEV?@^?P6 L6 > #(m$uy`?Agg2Du#VtS  n4BH=t; w9_W*Ut3۶_>UF$<?Z& ?bA@b!z #"b$$"b$ 2"Z$@#&(&$:;'"*\$qy0?x?64#( Z4?V> J=#115 `?CopyrigTt (c)"@20.@9"@M@c@os@f BAr@MAa @i@n."@ 0lh@ HslBe@@e@v~@d`@+"1 44#ŤB-=#3 7Z&4%@iePM9 FXG>'K2r2Z&!u$0 VB3jO_{GZ/V!3bwWxVDnW^^Ko:B5k__q JE4"l(>Uhe Ah#A!1(Z=#F@ g0aR LoWGa"@&b$4"(qC^]?Fs(?@|0 ֿ%L|\|HD_LR4Уӟ^L-ٙ"X_a y4 pW? +?Ra@ts<ЯTh]]9 M!AUF<%?@[?FdHǩ?@W_Mw?P6 $[!>  JuM` ? ;  )u*Jt'  P&'mta{T 6Dmv޿mxb$y3>yJU]<?& ?AJbA@,"b4)b4)Sb4$J2zGzR$@MJ &5*`%J$*+^',"_*BB115 `?Cop_rigTt_(c)2`W09MH0cF0os@0fN2?1rB0{1aN0i@0n}. Al0U F8s2en0eF05v0d01PY4-,%3 %7&H0b59 FX7T]]9 MAUFvIڳ?@J?F8?@P} TP>">JuM` W?uJu u bKu `u Y-JWR]nio@&>٣~ C,ᗢeuܥ]h[o?C$ '"/'%S'-+]/' Jt  $Ӗb#"t="! Ư%'_deѬ&'YY-t#JJU2N贁Nk?<MJ6uAdbG8WbU9S;S;S6B3B115 d`?CopyrigTt (c])020090uM0c0os0If21r0Aa0i0n.0 WAl,@ 8s0BUe@e0vB@d$@14Ry36D3IP!BM2b;R?s?fzU3 7F02%je^E(9 BVXWbW_'0URšFZ!D0 bVrvZl!UhX3 1iAA! P@B#J^ewkdHD:  H h0>Th]]9 M!AUFٟx7$?@!{?FdHǩ?@W_Mw?P6 $[#>  JuM` ? ;  )u*Jt'  Y@mta{kbmv޿mxb$y3>yJU]<?& ?AJbA@,"b4)b4)Sb4$J2zGzR$@MJ &5*`%J$*+^',"_*BB115 `?Cop_rigTt_(c)2`W09MH0cF0os@0fN2?1rB0{1aN0i@0n}. Al0U F8s2en0eF05v0d01PY4-,%3 %7&H0b59 FX7T]]9 MAUF4?@R ?F8?@P} TP>$>JuM` W?uJu u bKu `u Y-JWR]ni_@"J&>kn OS?F4H⋘?"  'B"/'%S'-+]/' Jt  |3#"t="Y(ئ%'~#&' bS#JJU2N贁Nk?<MJ6AdbG8bU9JS;S;S6B3115 d`?CopyrigTt (c)020090M0c0oKs0f21r0Aa0i0n.0 Al,@ 8s0Be@e0vB@d$@14y36D3IP* M2b;R?ds?zU3 7F02%je^E9 BVXWbW'0URšF!D%0 bVvZlUhX3 h1iAA0P@#J^ewkdUHTuD" # A#A hz ,T! YJ MUFv{h?@K]`?FU UP N% # H, @ T  cO,OmJuM` /? 
O"6J^ Ocu#X"bs!Fۅb^uJt  7͉"tY!"^OJU2zGz?)@M#J&A@i"wb8b#9!;!;!62 %!4J#2#"6 p2.2b#9;/72l<2l?TN#=A=Ai"`?Copyrig`t (c)t@20@9t@Ml@cj@osd@frBcArf@Aar@id@n.t@ Al@ jHsBe@ej@v@d@"M#IC IG6"xh"t"Eiyn %1XGcW'0URũ6EK0 cVwZlJUdnUtLQQS) T =1T[A#A1(!`Iοͺ?@|0}# f ǿձm# Կ׌$8n:Moo` =[Weh$+HeQj?@)v&?FQ-?@iH?PS~>  F߈s|.Zߥ"lE _ e?h=X##Rȵ: Izu:t ɩ X"41(RCBS"r"5w5ae2oG[pF&ߞ֡h|}VhYr~!{/te,gEe DT?nxM殏Ea$6ЇK;@R d9:x,IyyߧUC6 -R&6t/}GyzνTB4gwqLAq@dR!5j;?@iwE?FmnX Űu{jlk8 fl?y>ĕl?_'t?quEt&XQV  :T;haai~?@^vνl쇢&UebhMHz?@^Z|b>jca>lTס/hi-Ya\1~|oW.f? 45=uTIy^. ?@}drPFuN?@zH?Q;|Uuelfay1| @pUmɯR^iqeoo oT##$g}i3 uoouBwoҕF})EL h0m&}~+̖ݿBmȇVQ9adv_UF ]"6l|]ΖmC\+~}i3 CQ!]!dΕ+ (al?@֤ |b$G4).&ܕ\w7f_Utpx'Wẏ,k/~? 5kՕHD: # h4>T]]9 M qAUF~?@%g?Fs7s^?@WҔh?P6 mL'>E  ($JuM` ?Agg2DuVbJtS t~Dzb 013v ';_>-JUC!C!5 `?CopyrigTt(wc)20 9Mr cp o%sj fx"i!rl !uax ij n._ Al p(Us"e ep v d +"<#T@$4#B=#uP!7?56 ?Ab`4z@9Al0b{3j8]b8a0$o3-=#O# O'564%酙t=#2mq @?2@ M3#JF?=EQ6<@B: ?6iie59 6XG2'0UR56!P40R FJlJD4Uh8ZQZQ ]   %a!@A1(A/.F?@":W0LRb O_LP)$J^2_o ꗦ`T8a.eEUZ+mQ?@_Vl}l?9+h5Uu?@w Vlm1lrU|pi5Ui焞?@Gd{y}.xbli\V)+hUrw?@wVleHl~s!2+hUpp%Vl߂'}|U'r,.?@C[>?Fi(w?@[ P?PU>b ԿftVlIֶ6l{)eRX0~eGdQR@_:M I~> zg@&? ɿI#?!$?Ra@B r552aQ:iт0;Kq𾙵QP!µ""]JT豸*e2 Z"s 7tG@-(F7# m#ʴ :+OB I eEJnOgXݻ 5 # ' . Q~ȝ +L葹 pH KH!fz'͞}K?hߔ h K B?(ݵ ` s8 G@HD TJ$, Rе (ӵ )ֵ " ? P/ p%UFDfP h,TPYYCUFY,b'@F"H$ @Fx siz "]bckru"im.bY,b"H$@@5pm;μ@@}_3 D HD  # =hj8>T! ]]9 #UFY,b@F"H$@' P6 eA [u`l?  u#(΍tS&Aw@ctTl$bsTJ20-U?FW]ebqRG # ?sakunB`BZ ckgrou_ndCy ly rzubuAub#z)u=~ " #L#9"#MBh\"C? j0>Uhh5`7A#!Jn00.lPX2 3VM(Q_&1_QU3n o2ogUEe0&oo@_oUb5_KrA -sCgSXhe_'r!TH!!M-t "J'sTl!Mq %F{,@wj1%5bPE9 M{k7l!3k'QPsBp#!J$G=1ƀHD  # =hj0>T h]]?5AUF\,b@FW5?'P6 .L]A]  u` ?Eiu#4t1  tuKˁ=JU^;:Fx& sa@`B`BE ckgroundCd ld rz}J b J AJ b#z)ۀJ bA!"b#"bx##釉2q ?/ 6F\.W?5 8?:@7oW/i/.{!'G!1 03] \NB20UB4&'$$%117;`?s1pyh0if0ht (b0)@2`09@M{@cJh2sj0fBs1ru@s!a@ij0n..@ Ap l@zGUsBe@eh0v@Qd@0l> ^Uh[A"A]@MjA#1!J4/%CrA 0"S C`Q%S1Y9**0=*Qt&2t31:?F@F̬%?PKM! [t?[WY[ZnaU0fJ$U~tr<S! YM# ? 3&M@T@0tr$5!:+oog4! `Q^__1js+TN!p_^Wo[r5V rvM,(b  jbE9 M{7! 3a򪆢P3!5$;ۏ7愕HD:  # ;h0>Th]]#TAqAUFv\.%@F4O"Ƈ@F.r?F"H$ @P6 L6>  #. .u` ?)c,c8u#RtO  ߢ.t|} [J@}@@@FV&  ;`BackgroundC l rz@b ݶ bOA!|"/Ԕ/!K)!?#2!3z29b13{z&b@F3:3BU27qm0?l?~6F+?S ? :@7//!'#Qf31 ("C$ $lSF-2f30UB5DV&$$f3"AP"A*G;#?1py0i0hUt(0)2`W09M@c2%s0fB1r@!ua@i0n. A lGsHRePe0vZPd

 4 >UhzQ9 #AA#ZA!J`f3FI' BHN MozӋ[Hİ] 2b\B qD `2biVl+qېeNI e\0q:I lӱ`gB_@ 81!@N~P@CBfxwr|5Q5Vabe9iEovhwD_CrA) @"dlSaAiW.Fsb?FyOa`?P p ? 'l)݆9t$1d$G|,6J]w : ~gBt*<wxUGD" # h <^T$(YY#IUFY,b@F"H$+@' P} B &( u` ?U U(( Fu#PtSNA@tTLb!d ^% j#"MG`Make Bh cj groundQ` E;vl nt{!pb~- J%9-H%W#??$ZB##0U2#sF =$&#M4"6"6 ""\#2qƢ0??6#F\.?QE?@)#A!'`?C"yz ix h (t )n 20W@9n MC@cz"s| fIB:Ar vAa i| n.n Al@ AHsJBei@ez !d@ 1`VC@s_u0RXY@cPm! 06JW@2@3!&H/<e#ha!#3 H%Y #H Z%|T[?%|TWgda_C +T#!!"H!#8d3 (cQA6@0EHa?%Tk,@doEd 3F3TA`f;nd 3` D| N| tsB!aC@nEG!#^E %b{Gd?"6!3lr'0tv$@b#"6=s_H8> r =T"?KFf7#(9TB ȯ|]^aϥ]@+]La?*:P= AGCF}UFD  h(^TYYBBUF\.ߗ?x<F BP(?hP?X` ]B66]  TTTTTdH?ܸ? Q?B@L&d2?-(,1(.sUW!U&(w/x  0KB` Serv",n two k p"i h"al d v0c@ 1^| (SG|& b?,>?TTwpp pqqppqwqpqwqwq+p pppwwwznw6;5wqpwwwhghw>Drag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aabjZֿ~??+ ؿ)j??E(i\.r3r\.CUG DF P# h @T(PYY#E3UF\.?@jZ?F~߻?P} ΍L .B":J :^:rY:f 7 7ָAu` ?Uč"U66JJU^^rr)u#",'tU"J%q4??4bp0{[`: #|2&6#*'Q79" z^L1$" 7'@>>ÿ@q@@5EӅ??D5F0Zr&rBxC2u0`-1Buu&@ CAB@C$9u`u `m"[ b!wu`)3kQQ1Q71`Vis_PRXY%cPm!#5jP4P2B`?CopWyrPgPtf0u(P)f020PU9f0MPcPoP'ofRQrPQUaPiPn% f0WAl` XsbUePePv/`d%SRQ1#d:4%!*@8=3R6#@d\6\6 X2$B#30Ub\3]zg,7^-q>@VpcT<`of0m`sP4a>mmdasIJ'BT1/t"0t.2QQ3QQB3 tj1j1"5hbI6% 72EC?|8LPZX116Q;sa>arExAsF@>>ý__0Ru'@`*Xr>a cc^`Roo H?䣬!e'2$r d bpf`bffb bpf"}Be%Rbk ``.e;Rkbrđf@B #[_z}R=3đđBޅrr++r,$t -tH;:@0a(IafcX(Pq`uK`AP!eqPrqbBq b%F@rBq8%5=3_tD&&|rf"e#F3s͵NSYr'X$G,@R!B衅W_i_.] Q QBHʀ=34I'F}@` %_&͒#J0 Q`wNElWODpKDH0PِRTߐRlIِSqIQiOO|1<3;K'0U?}10~U3Uee8sTeq rqqq13@H36EBM$BM4BeMLBMB3M̅BM-uBG2HD: # h0>Th]]9 MP /AU@jZ?F͂T??FSFظ?P6 .]A]   JuM` W? EKu4bJt1 ytuo[=Jb(A =>JU2zGzt?@dM#߀A@bM;#z1 /#\I(bW)d+)d&J["*(w"?& ?AO$Tf/%B115 `?CopyrigTt(c)2`09MC0c.A0os;0fI2:1r=0v1aI0i;0n. Al0 A8s2ei0ejA0v0d0 1Y4-X 3  7&C0b59 FX7}'0UB)Ź&!$aI 7FKJlJ]Uhz211O! !u6?Fh,\.>*0II=*t >J:@T&ȑ/ [WpY\QEUzU . X[:;QRuSU5UF,®\oZOnlQ 8MP@u?FR >Nk"\T`GbmMlQi_f0%=mUt.m {=4+ ߥP> D"? 5-g?T0tp@bTh]]9 MP qAU@;?F5K?@xEL?6?F6?Pn6 L>M  P$  #uM` ?=`cc.@uRJtO  bm&QȽtA@Lؽl]WI@[>JU2zGzt?+PS@M/#J?&A@bbbPzs q#b](b)+&J[9#,ɺ ("a?& ?AR$ /%B9#V1V15 `?CopyrigTt (c)02`090M0c0os}0f2|1r01a0i}0n.0 Al0 8s2e0e0v0d0'"O1YS40#-9#b3 b7&0%0AbE9 YFX+G'0UB&!4a yFJlJ]Uhzt1h.A!-(:9#t.%-+0JN H߃W%$Jq-&?F mO" >Nk'"\RSWNh[H@_[4I 뇥P> 'CK=o? nO-?N]`@ b<couor5%BUV 6?woZA^ǿXEU@ T6?F __ jv6dDlVߦyZgPRkro}oiBdalN!\z3>6>CRHS X?BnDeTGvN7VHD: # h0>Th]]9 MP JU@jpiH?F[^?@$?F]Fڻ?P6 >JuM` ?j uJt  (a&aEt9S;-FEN=ۯ=Eh?u;>2zGzt?@M|JA@a7b#z}b(b!).+.&J[-m(?& I?A$0/Q%B!!5 c`?CopyrigTt (c)02`090M 0c 0oKs0f21r0@1a0i0n.0 Al[0 8s_2e30e 0vq0dS0!(Y$-# '& 0b59 6uX7_'0UJBŃ&J!$a FJlJ]Uhz!1!劵:F% Y.  8?zY^J4 ]9$]$R9S/RJUE UH$S!\1\? GX5ZU_YQJU __-_?\HD H# Uh4>T#A]]9 M /AU@jZ?FL|+??F@{~?P6 m. >b  JuM` ?+ILKu8J1t5 }ty_7??J]bA>tJU2]U"?=@MJ&A@3(WbA)N+N&JJ[ "#?& ?A9/Z) !!5 `?CopyrigTt(c])20A09uM-0c+0os%0If32$1r'0`1ai%0n. WAl{0 +8s2UeS0e+0v0ds0!E$5 򤲝-? 3  7&6iieE9 6X7 "'0#Ŵ&!$0 TIF]JlJ8>U#9AA `1!:Nk"\T`ObmM\"f4N[7 zex-K=FAa տ[jr D"? WZvD?4xa@RB#Th]]9 M IAU@FK?F\gVb?FY%?F_wᳬ?PJwW?>$[ >  JuM` ? ;  )u*Jt'  ^Zmta{֙3.JvdE @x?"_r?v{4܂ >pJU43f<!.&P~?b3bbfz@bAe&bq$Jd 2zGzt&.#@M#T"I&r(~&i$q)Q+}'i"~*B#81815 `?Co yrigTt (c)o02`09o0Mg0ce0os_0fm2^1r 1am0i_0n.o0 Al0 e8s2e0ee0v0d011Y54#-#D3 D7.&%g0Ab59 ;FX G[G'0URB.&!I$a [FoJl&}>UhE3MV1Ai!(# jV? & `& T` V_RUHPD  # #>h0TpHeeJ EU@u6?F5c{g&?@{r`5B?FF:Iͷ?mj>  M   *  (Y JuM` ? Y ":N`u#xJtu IoI%t"' % 'ii%$'`OkŁJU2"?@Ms#J&A@"b(bU)++&RJp}#>1#19 ?#bbbz| &/5> }#11"`?Co0yrig\t (c)02h090M0c0o%s0f21r0Aua0i0n.0_ Al!@ 8Us%Be0e0v7@ d@k"1Ue4t#B-}#3 746t%0 jU FXyGG'0UR46!O4i FJlJ$jUp5 3 h19 1!q(q4}#&J,t R j# 8bD?@@)?FvP؞?PEkU O|\QnS .܁nts}ped`,of :o ,M ? &d2? 
bȩ?tP@RBBoogr"55ZQUf_x____bY@ßAc_\jF?4ևi$on&h{c8oJo\opoo T)Y oaoooo o 6$|U7I[mS Uy{䷰7h7}'g"(I%ZvShX"cwӏiďa$xUGD  3 h0TdYYB U@%)?Fxg^?@ ߯H?Fw_7Uw?P}   f  f0 D fX lu` ?00DDXXll)ut  871%t%!?"Ư)1%:'ֿTeΦ1%T'ʤU!!O"`?CopyrigPt (c) 2\09 M c oKs f"!r 1a i n. Al00 (s42e0e vF0d(0"#S$#B-#l# '?R=#ǒ 2$Z##0U'B3@& ^5 DFX7dG'R&Dŭ6!4] dFxJeYk} #HFQ%RT][b5RT0][3RTD][5RTX][RTl]WHD:  H h4>T]]9 MAU@ ߯H?Fԋz)e??F9q?P6  >IM (#<Pd`<#P#dJuM` ? #2FZnubJt E-tA!["XѮJV&bN$p'N]I֤>J)U5 J[-# ?6 ??" A@.2Sb64B#2zGzt3@M#Jc678C6.? ;B72C: #115 k"`?CopyrigTt^ (c])^ 20@@9^ uM,@c*@os$@If2B#Ar&@_Aa2@i$@n.^ WAlz@ *Hs~BUeR@e*@v@dr@"14ʺ#-/ C  G6&iieP59 &XG W'0UiRŴ6!#40 $ V4ZlJ]Uhz@A!1(`C@U&pz?FDUy{?,d@#b`sb`"J-d K-q?F@F?F F0 ?Nk"?GiE*]CpdadC0[dhZe@#eGDQ> @" i$a3n&Y+`~rrood]io;e s"yr1C82K?Fѻ4G{?F-qx>tFjR^{|ZlYZe tmP|{Mޥa 0 1iS? q1+?wa@rc(<;"̏A!rFÀ5H5aE=*[^N?F%P?@"P&@ c?F{MTh]]9 M]AU@}%?F[$,?@+#V?Fd_3֡w?P6 B>M @ $ JuM` ^?3YY.KuHJtE  L<ܩt.'_3zMҩ,wsQ>JU2N贁N[?@M#J+&Aw@b](]i+bk)+)&Jp%#4"?>"<5!?& ?M#bbbz$ Rp&"/%B%#L1L15 `?Co yrigTt (c)02`090M{0cy0oss0f2r1r 1a0is0n.0 Al0 y8s2e0ey0v0d0"E1YI4#b-%#X3 X7A&%{0bP59 OFX!GoG'0UBŔ&!$a oFJlJ(>UhoI j1h$Ac!(`%#@'$F?>D{Q>L!gfQQ R "Jg V?Fo-R?@bHQ??FT!R?P& ߤlH\f6V\P5_E+UDv;*UmʙHQ:? w8s`*%$? .l?Q? ?\?D4_!R oogr5%Q3p_S@hIL?F~Kz?P)Oy޾ F @c"rA b$^sPj+u`dl(ET+QdAf u8<$HD:  H h4>T]]9 MIAU@ ܋?FOa>?@q`5B?F1*ݷ?P6 m8> (JuM` ?#SS2)uBJt?  \Eto֙>_B|KR>JU5 Jp'>͙#<?Z& ?bbbz@b#;A&b$B 2N/N[Z$@MI #"&(&D$ :;'"*#d1d15 `?Co; yrigTt (c)02U0090M0c0os0f21r; 1a0i0n}.0 Al0U 8s2e0e05v0d0"]1a4 #-,/p3 p7Z&'&iieU59 '&X9GG'K0UBZ&!u$K0 FJlJAUhhG1!- (T9C&J> RbT$R}R( "J:]Uy{䷧_8S s_@ o4%BT@.Tw?F:^֝ä >Nk"?YUU!f>)?\(K8ӷjmKl? a\$QU5Ui,SlQ%iu~!8hE_= T?F;uQuo`"rA 4sjQ޽lğ=onHD:  H h4>T]]9 MIAU@ ܋?F<j?@q`5B?F^ݷ?P6 m8>M (JuM` ?#SS2)uBJt?  \Et[Πؙ>_B?&}>J)>JU5 J#2N迴Nk?S@M #JC&AP bu(b)J++&B1[- #L#?&| ?$H*'#O1O15 `?CopyrigTt (c)020090M~0c|0osv0f2u1rx01a0iv0n.0 Al0 |8s2e0e|0v0d0"H1L4 #-/[3 %[7&'&iie@59 '&X|$GrG'0UBi&!40 rFJlJAUhh7m1!{! (Z$CR&FUy{?X> 81cKR RV "J{T@ ![?F=$. >Nk_"?YPSW>OcK8RKl? ף_p= $B:]'i,S\6aYss9$Q@5UV+6o'lQ:ifOdD`i[Dž}soąo82{_U{"rA z$s"HD:  H h4>T]]9 M!AU@ ܋?F@_v?@q`5B?F1*ݷ?P6 m$> JuM` ??Su.Jt+  \Epteap'Eqz>Bq|7>5 Jp>G͙;?2& ?bbbz@bAi&s&Bh 2zGztM?+"@MrX"&v(bu)+s/8*T<1<15 `?Co yrigTt (c)s02009s0Mk0ci0oKsc0fq2b1r 1aq0ic0n.s0 Al0 i8s2e0ei0v0d051P94-H3 H72&iie-59 XG_G'0UB2&M$0 _FsJlJ( >Uh_I `Z1m!afR* zRQd J:`]Uy{w_$S s@ +o4B`T@.Tw?F:^֝ä >Nk?"?zYQYV*)\§(7$d?ӷjm7lb+d ha\HD:  H h8>T ]]9 M!AU@ ܋?F*?@q`5Bǯ?F1?P6 $6>  JuzM`l? C"u2Jt/  \Etti|u~>Bu뒣;>RJU5 Jp>͙#<?6& ?bbbz@b#Am&by$Bl 2zGzt6#I@M\"&Rz(&q$y)+'q"*h,#:3o1o15 `?Co yrigTtk(c)k2009kM0c0os0f21r 1a0i0n.k Al0 8s2e0e0v@d0h1l4e-:?{3 {7I6&&i bP159 &XDGG'0UBX6&Q$0 FRJlJ(>UhI 1lq!a4DC9Nk"?CYQYf.Q;(hğ=;Af/h (\UHLuD" # ;3h( TEJ kU@jZ?F@{W~?P VN- I A `*%#.>>uA` ?%i i4iHu#XpTb>tU b7>t  a>U2zGz]?@9 %#>5&A@g(bu)+&X" !f$>F/#"&#="& "m"bu);r'X"i,X"/N/#11-#`?CopyrwigXt dwc)020090M0c0o%s0f21r01ua0i0n.0_ Al0 8UsBe0e0v@d0"i f &%!X/#]'0UB&Z5%0 VJl>8fUlY(C '(#vAm!#(:/#_Cgwu?F( mO"\>sl?/h%mq.%-+?@lփ?F^Uw>o|m7yQySf|wA=$*<~a3d@9SӀaۏ펃y'e0a#FX_`H> V+s LyC{:,YFhJr#~"IL&OB |oM~d`ro@+xpWoQsG@NfFXHw\HKp`(MScXO3gQj?PUNnVpZu\y_}$⯿<.jAxCtUFD  h(^TYYBBUF\.ߗ?x<F BP(?hP?X` ]!B66U 66 66%66ȅH??f]B@L&d2?-(1(.sUW!U&w/ҍ~  0QB`#Mainfr me, etwo k p rr pP1al d v c %1^| (SUG|& p??(:L ?qpwqqq wq wwpwq  qqwwzEwwtqwqw~wzqwqwp~*?Drag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aabL&d2ٿ\.??{ڿaAv??"jr3r\.CUG DF P# h>@/T(6D # #UF\.?@L&d2W?@w?P} <.4! 
D:J D:^:r,: A7f77f7 7!7&"l70&u`?( "66JJ^^rrTU/&0&0&D"u#N) ^2<7qU ?4b@{[`:#B'FT3i"'RQ92 z1L>ÿ@q@E?BDF,@rRSBu@`҈-g d(ag;_3@JAJA2hhXB)3*ÄR+Є.bq q2&&:r816a,s2+66u\L %8WQA$A[(BBSP@RXESTO$@,r23rUrD.e53_4&F3s͵N?SYFer=@:u]Qv,@F=a`up]XE628ABTOfD"J-$RRO(R[)VO`Rpasd`.e Tpx``B2+ 0Ls'_"x#0!UJ"b_ 簋 B)U2brJC\Rr˨83vvF_<U"D"=m2Q:2Q]q2XBƅ3/nR0{.b[a[a22:raa  A AE,,f7Ԣ18Q`#A#AQ; ^rprqqMMr1"8J6=a~Ak ]\SP@` SPa`e`lBsa_` E0ud`px`e`JXEas` T``U\cam`uF3dr7` SbM`i`fbUD7SLgn*f`c`ub/.eQXJAPbd. Nm@Je@=O! P`rx7x.[b`m DraeNiu9KoAe1S`u^8 Sri`a /J T//A/^`L`ch@{/Ҡ///Bl pQ/ @?#?<R`#Q?W??*oNwbkl"b`KҸH?[Ij`ad ps*OT]]9 M JU@z^?@]z?@ZVjźP6 ܠAJuM` ?juJt  3\It=W(\I*RIl#>2zGz?@M|JA@e7b #zb(b%)2+2&J[-m(?& I?A$4/U%B!!5 g`?CopyrigTt (c)020%090M0c0oKs 0f21r 0D1a0i 0n.0 Al_0 8sc2e70e0vu0dW0!$-X# '&?6iie59 6X7/'0UvBŇ&-!$0 -FAJlJAn h71Z1!TlL2 $<[V  ORvVJ:%'d2ɩ PY]PeS YY˱QEaZ_`YQU5U=Vr\. l\QY?))bUb526_;"rA #cHD: # h4>T]]9 M JU@d2L&?@P( ??@>?P6 e]A#uM` W?SuJt  RQ͸It=W\_(\IRIlƒ_',#>[,ɺ (? ?Ab A@"Sb$J2zGz?+P@MJ=&#fbbbPz (bE)&,"*UB!!5 g`?CopyrigTt (c)020%090M0c.0os 0f21r 0D1a0i 0n.0 Al_0 8sc2e70ej0vu0dW0@!$b-# 'Y?6iie59 6X7'0UvBŴ!0 D-FAJlJAh71h1!Z@bgX,  <)gV#\ M OR|VJ a9PY]PPReS[RUE:]r\.M\Q`YO Q5U=VN_YQĖU526_HU"rA #cHD: # h4>T]]9 M JU@z^?@<0 ?@bX,?@_~w?P6 >JuM` ?uJt  ףp= It=Wk t@IR)\_(IlK&D#>[-m(? ?AbA@"b$J2zGz?@M»J=&#fb{#z (bE)&,"*UB!!5 g`?CopyrigTt (c)020%090M0c.0os 0f21r 0D1a0i 0n.0 Al_0 8sc2e70ej0vu0dW0@!$b-# 'Y?6iie59 6X7'0UvBŴ!0 D-FAJlJAh71h1!TL&1 $<[V  ORvVJ:?`0 PY]PeS a+QEaZ_`YQU5U~=Vr\.m\QY%NX526_;"]rA #cHD: # h4>T]]9 M JU@RT*J?@r\.?@zP6  >#uM` ?juJt  ףp= It=W\(\IRQIl#>[,ɺ (a? ?Ab Aw@"b$J2zGz?+P@M»J=&#fbbbPz (b)&,"*B !!5 g`?CopyrigTt (wc)020%090M0c0o%s 0f21r 0D1ua0i 0n.0_ Al_0 8Usc2e70e0vu0 dW0!H$-,# 'K?6i@ie59 6X7'0UvB!0 -FAJlJAh711-!Z@3F  <F9Kt#  OR|VJa9PY]PPReS[RUE:]M\Q`YLQ5U=V N_YQU526_HUv"rA # cHD: # h4>T]]9 M JU@L&d2?@|??@t:?P6  >JuM` W?SuJt At=Wi_6#J]RbJlGX%#R>[*(? ?AbA;@"b $J2zGz?@dM#7& #fbu#z$ ( (&,"*BB!!5 g`?CopyrigT}tZ(c)ZW2009ZM 0]c 0os0f2R1r0>1a0i0n.Z AUlY0 8s]2e10e 0vo0dQ0!$ę-# '?6iie59 6X7}'0UpBi!0 'F;JlJF!( ]A!1-!Z@ B <yO#,4#BQ YR VJF!L&?@bX,_S RQ#jYiFQEU,b?@%\\(\\aX5U@?@ՔJR\p=o ף\kQX5Ufl6?@3oF\H8!\4rOiX:=?F&[f3r|UnTeRU ugZY8oJi QyZV'j[3rJa=yUU2@_RU"rA # HD: H ih(>T9 M3AU@L&d2?@\.?JP6 t >%M  &:"S:JuM` ?S0DVhurlnb#4"Jto ob'#t3&>JU#g"X#a?~& ?AJbAW@"+7 #J!%(,Ja#2zGz~"@MW#"6/%b)_;_6" /*Ba#115 2`?CopyrwigTt `wc)020090M0c0o%s0f21r0Aua0i0n.0_ Al/@ 8Us3Be@e0vE@d'@O"ib =X%`!Xa#Y'0UB~&%0 JlJ UhMQQ A75A!U(!1L&?@bX,Ơ h # RQ{h# >_,S{J,b?@xz^?@1o=?@h?PQ ̿ A!Y\(\\tگ@ UnĿvUohú{:i dk6!d P Ln4!lBoogr5%BUfl6ˣ_nH1_Xq~?4F6H/ ݴpX =?!6@,9?@HW8?P`ڒ੿ ) !oqf%$$UjGil!j( gtoomddvg0|RQPbرɠ )8!Oy;DcwVQ]z,Nz(6abxZQ~FZVjp[3’L%q^QFvc~b-Tt$bxbQ蟯 ' OBNz)6bXfUU^@ss9 ?@u?P1r _RUj2ilpfg.ɏmbd,>gxHD: # h0>Th]]9 MP IAU@Dxb?@F+?@`Wڔ?@u5F&?Pps-8R?>$> n JuM{` ?e ;  Su*Jt'  ʼnhmta{|_3.Jv9?\̧q?v-F$>JU2N'N[?@M#J&A@bI(bW)U+RU+U&J[# (<!J m?%P?AR^$(u/%B##1#15 `?CopyrigTt c)Z02`09Z0MR0cP0osJ0fX2I1rL01aX0iJ0n.Z0 l0 P8s2ex0ejP0v0d01Y 4#-X#/3 /7#(R0b59 &FX7FG_'2WUAFJ!$a FFD$&#l,&}>UhEMA11-O!(#H UV & `&d T` V_RHD: # h4>T]]9 M JU@^z?@ `?@<?@>_w?P6 >#uM` ?uJt  QIt=WwIRGzIlƒO_,#>*5 g`?CopyrigTt (c) 2U0 9 Mcosf"r-!a in}. 
AlH U sL"e e5v^ d@ PB[,ɺ ("a?& ?Ab Aw@"b$- &2?+P@MJ\6#fbbbPz (bE)&,"*%?@6iie 59 @6X7'K0UvBŽ&!$K0 -FAJlJ A>Uh1MDH]A U H!1|4#hLjA*lX!効:@MJR Y <RyJk1LbX,?@`0  S qq#Y`?[G=`a]~mΓAdžeЕQQ0F0ZVCXQ`@iqm2.uQQ=0r"H$YZ~X#Q`kwU{qm)F&uQ'QaZS0W+Q6`V BPXUmt~QR٘6p?@B |"Y(aaX$_کȑYr%e"U1/AoJ\` V"X;Q}rܢZ;o6hׯA\a ?,H%5eQCQ.|JZ'Ja6h1Aupm +p;eQ3@ Z,6hOQ'AE;Y}R8sRYh~Df7gWQAhVǫjfjchiL&E7gjE3ɏd2L14~.yl߶7ggQA?p8;M7 AİeXkQ9QUoQA4F G`XsUWiZñeHD: # h4>T]]9 M JU@JRT*?@+@ ?@ `0?@ ?P6 >n#uM` ?uJt  zGIt=W贁NIRQjIl{^z#R>[,ɺ (? ?Ab A;@"b$UJ=!=!5 g`?CopyrigTt (c)t 20 9t Ml c.j osd fr"c!rf !ar id n.t Al j(s"e ejj v d  6!E:$-I# I'L2]z?+OP@MJ\6#fbbbPz\ (b)&T,"*?@6iie 59 @6X7'0UvBŴ!0 D-FAJlJAhK7[!h1!Z@߆gp8  <JmQ#  OR|VJa9PY]PPReS[RUE:]6BGP(M\Q`YmRbU 5U=VN_YQUM526_HU"]rA #cHD: # h4>T]]9 M JU@jZV?@@?@,b?@&_d2?P(-DTw! @>>JuM` ?uJt  QIst=WR\(\I-lU#>[-m(w?ZP?AbA@"b$J2zG/z?HJ=&#fb{#z (b)&,"*B!!5 g`?CopyrigTt (c)0W20%090M0]c0os 0f2R1r 0D1a0i 0n.0 AUl_0 8sc2e70e0vu0dW0!$ę-# '?6iie59 6X7='0Z!0 -FRAJlJ(>Uh-I 1l!`a9 $< J:@t: HYdQ]S lQE-UV_XYTQHD: # h0>Th]]9 MP 5AU@D 1!?@&?@`Wڔ?@ 6F&?Pps-8R?>$ > n JuMy`?e ;  Su*Jt'  :mta{$_#pYmvK@{3.Jv׊ߖ.?k-I?>JU2N贁N[?@MJ&A@b5(bUC)A+A+A&J[? (<!6 ?%MP?AJ$(a/%B115 `?CopyrigTt] c)F02`W09F0M>0c<0os60fD251r80q1aD0i60nU.F0 l0 <8Us2ed0e<0v0 d01Y 4-3 7 #>0b59 FX72G'2KWU|A F!$a 2F#l(&}>UhE M-11;! AVI & `!&P }T` V_RHD: # h4>T]]9 M JU@&d2?@<0 ?@|>?@_~w?P6 !>#uM` ?uJt  )\(It=Wk t@IRף_p= ׳IlK&D#>[, (? ?Ab A@"b$J2zGz?+P)@MJ=&#fbbbPz (b)&,"*B!!5 g`?CopyrigTt (c)0W20%090M0]c0os 0f2R1r 0D1a0i 0n.0 AUl_0 8sc2e70e0vu0dW0!$ę-# '?6iie59 6X7}'0UvBa!0I -FAJlJ]Uhz11!Z@`0 f <:a+ѐ#B SRԀVJ:BP(?L&YS RJ)#hT_RUEtL!ZrMP  _QdYP^CyQ5Ul6aTYa\8{?h5a=ZR_YyaĚU2:_LU"rA #sHD H# Uh4>T#A]]9 M AU@L&d2?@\.?tP6 j">  ( <$U<mJuM` /?U2FXjutJtq t ,J&b2$H &C">[JU7w$?& ?A5"bA@R"&Jtm#2U"?s"@Mc#J&(IC (/58*m#p1p15 "`?Copy_rigTt (c) 2U009 M0c0os0f21rԙ01a i0n}. Al0U 8s2e0e05v@d0["i1Em4d#(m#5 -EO|3K |7&WFi@ieE9 WFXmGs"'K0#Ŋ&!$K0 FJlJ U#%]VQVQ ]9 1HA!a(Zm#]/z JpC"j(\g}QQ R"fd"J:TZV՛jjS 3sbd dW&@aa5Le@%c~\ja}_,‘up2u?pf}Lb .lukut PF Lpa @cBr5%3q?u,b?@xz^?@oo=?@?PqQ ̿ us&r\ϑ|t@ ħ|v˽|oohwà0B  "K2[@a!fl6NwekH?zGёtm !+s KwYZHCEl#Fxy9ܕwOB í~dto@+UDZoLsGof8 $ (=hZ+P+=OaX5H1?ײ;//%/7/(3X0ϼՁo7k"آS9l%UFD  h ^TYY>UFD"H$'@F @Fx,/T6D # 9UFD"H$@]F@&u@P} jihf,l>u` ?V,@y^u#hQtSfttRt'A@y3dbr(+#tTB/(u/!/ <"/S.pd#@i#5J~?/"._`Make Bx0cz0groundT#X3Ҵ14;1`VIS_PRXYn0CHM!_#60080"`?C0py0i0ht~0(0)~02009~0M@cJ2s0fB Ar@GAa@i0nn0 ~0Alb@ HsfBe:@e0v|0d$n2"e (^C5@ #5 X3hHA%DK5D,K3D@Gg$aq_DaS T@H11"T2-T3[T ZTET@(APa%1 C1o1Do1y.bg B2Q5SiC@nU3bk,qoUcj?bhKrQi? ^ebo #!} baRe#d,UEohbcD&'C1(cQ(QR=S1fZi#3 D0N0UtDBn@a@nEEg_09YBaPPo1o1"(S11iee52>bXc1`I!]&` H@d|2&F0q}A0q` SPowُbb+e"vM- ~-!bb bL~f]Dfah 7Qbdҏ䅕HD: # ;h4>T]]9 # ,AUFD"H$@FZVj@&@@P -u `u bu  _@"f)l A)0t#` bFNt?p [eܠAuv` ?5urJϞ>Xq& $##./@%W(l"r/ tvel 46<R?$&(Q/$BU2q$0?#?56?\.?Ef`?F 7`BackgroundC05l0rz)z5=t3I1 3&?h<:JA#3+0UQB3_ 3117;l`?1py0i0ht (0)@20@9@M@c2s0fB1r@1a@i0n.@ KA0l@Gs RUe@e0vPd@0l>#$!Uh#3 A A\TYJ*3BU_1HD: ##=h0>Th]]9 3 UF>T/?FD"H$@Fn- @FN?t?P>u` o? 
u3`>$t  `"?A@>D$_~Ut W"p [VJ5 .Lc0U?AGG}YL?b` LineColA r&zY!"'&ɆX#mddd#nFFF"?&?#!"'&&` ]STadA w?/#/D+$1# F9 lC '2I&aWY*1` Ac= nt/#Ybdy3zg9*#!$)$|I!8<e Q3#8$f$3$FP3(B#GB117;1#?!pyG i}gTt (u)@2`09@uM9 cG osA IfB!r@F!aV0iA n.@ UA2 HsBe@eG v= d@0 l>]UhzЁA3!!`J !V)XU__Y3^_oXgU5^Bo_foU@__oojEb59 Mb{7/v!3rav44K7 %iPCH6Hg7w4'~vv !"608e @Q:(3O@v+?=!$Yz,Z#b~~-T{ HD: ##=h0>Th]]9 3 #UFH$$@FFJx-C-?F BP(ԭ??Pn>u` ?6u3`b>"6(t  2U0*?A@BHV-kYt ["p: [ZՄJU5 CLc0U"?AG}]LC` Line_ColI rz])"/&Cmddd#nFFF"?0&?#)$1$)"/&&` STadI wG/#/L+$9# %FA lK 72Q&a]A1` AcE nt?#]bd3zՃ~92#!c'  3#e8((3F7BGk+B$JA3117;9#?1pyO igTtW ()@2`W09@MA cO osI fB1rԗ@N!am0iI n].@ A22 HUsBe@eO vE d@0l>]U@hzA4K ?BEOJ !9"XU__Y3^o(o{gU5^eo_oU_oo$3ojEb5(9 M{7/6v!3ra4v54ve}%i6FWig74 '$~vx 1)"&8ea:@e@ @("4Q:/=@3 9)']zE,Z#(T(-ӁTjQc@0@HD ##=h4>TB]]9 3UF.L@FD"H$@F+@FNt?P>u` o? u3`>"(t L F%u?A@BHڬ\mY(t["p[ZJ5 .LT0U"?A?G}FCb` LineColE rzR˵#nwFFFw#?x&?$#u+&&` STadE wC/#/H+$5# F= lDG '2M&%,`h9!!Btn! 3#.> (3672M%11Ԓ7;5#?!pyK igTt (c)'@W203@9'@M= ]cK osE f%BR!r@J!a%@iE Un0 '@A2 HUsqBeE@eK vA Id2l>0>UhhCA3%!bJ@&)C6XU!_A_SY3g^__WWU@5^_3__WU@E_y_9o_bT= tG e j5ie@5(9 M{x7/69v!3LrA,Tv5!$;7laiooXx7w'2#x~ov]x vw_H<> Eל ߪJAC BFw#84B} uF]@ekHo+xWJ]BzaG=]#P_*<N`r(ߗ?fUFD  h ^TYYHUFD"H$'@F @FxT]]9 #Q ,AUFD"H$@F BߡP(?&@@P-DT! -u `u bu  @")lt A)0t#` bԚp [WenAu` ?urJ &@Fi#$!Uh#3 A Q3 "$BA 3ŊJ*3&"B%e>o1UGDF P# h>,/T6D # 9UFD"H$@]F@&u@P} jih,D >u`} ?VU,@y^u#hQtSft{tRt'A@3<db(+#tTDB/(u/!/<"/S.d#@4#5~%?/"._`Make Bx0cz0gro_und*#X314;1`VIS_PRXYn0CHM!#60082݉"`?C0puy0i0ht~0(0)~0200U9~0M@c2s0IfB Ar@GAa@i0nn0 ~0AUlb@ HsfBe:@e0v|0dn2"e (^C5 (#5 X3hHA%DK5D,K3DGg$aq_aS T11"IT2-T3[T ZTETɴ(Ta%1 C18o1Hy.bgBE 2Q5Si!@nU3lk,{oUZmj?bhuKrQi? ^eb #! >baeo1,UEohlc&'C1(cQPQR=S1fi#3 D0N0tDBn@a@nEEg_09YBaPo1o1"(S11see52>Xc1`I!&` H@d|2&F0}A0q` SPo]wbb+e"vU`6ȅ- ~-!b)bbVf]Dfah 7Qld܏HD: ##=h 0>T@h  9 T3UF}ð?FD"H$@FYjB @F l?uP>u` ? u3`>$t  u?A@>D9ȯvUQt W"up [VJJ5 Lc0U?AG}YL?b` Line_ColA rzY!"'&##nFFF"?&S?#!"'&&w` STadA Iw?/#/D+$1#K F9 lC '2I+'&!$)$|!$L8X< (3#.$3$3632M1P17;1#?!wpyG igT_t ()<@]2`09<@M9 ]cG osA f:BR!r.@F!a:@iA Un0 <@A2 2HUsBeZ@eG v= Id2l>]Uhz#A3!!J !,VKKX%U6_V_hY3|^__WlU5%^_H_olU%__No_j5b59 M{7/'v!3:raBv5X4;s7B %i5oog74'f~]vKx Nv!"I68e ^@Q:(3OO@f+~?!$\Yz,ZB#b ~-~T{ HD: ##=h 0>T@h  9 T3#UFG1"@FF$I @F BP(?V?P>u`} ?u3`b[>"(t  c]K?A@BHuV5Yt T["p [ZjJU5 CKLc0U"?AGQ}]LC` LineColI rzI])"/&Emddd#nFwFF"?&?#)$1$)"/&&` ]STadI wG/#/L+$9# FRA lK 72Q&eoa]R4` AcE nt83Q%#]b[d3z92#M!c'  Q3#8(f(3F7BGkBBLA3B117;9#?1pyO i}gTt (u)@2`09@uMA cO osI IfB1r@N!am0iI n.@ UA22 HsBe@eO vE d@0 l>]UhzA4 ABGOJ!9"XU__Y3^o*o}gU5^go_oU@_oo5ojEb59 Mb{7/v!3rav44vg}%iP8FYkg7w4'$~vv 3)"&P8ea@e@ @$6Q:>@e3 9)']z,Z#(T(-\ՁTQc@ 0@HD: ##=h4>T]]9 @3 UF=U~@FD"H$@F[M l_?uP>u` ? u3`>"(t  {?A@BH-ϝY(Qt ["up [ZJJ5 Li0U"?AG}FLCb` Lne_ColE rz|&.#E#mddd#nFFF"a?&?#%"+&&` S.TadE wC/#i/H+$5# FlG 72M&ea]Ra` AcX0e? t83M%#]bd3zs9.#%$,-$!]/(h9B>3z C#8> (3JF7NBIGJAA!G;5#?1pyK igTt (X0)@20@9@Mc.K osE fB1r@J!a`0iE n.@ A%2 Hs Re@eK v"A d@@l> 0>Uhh @CA3%!J@&CXU__Y3no+o~gU5^ho_oUE_oo6obTt>G e jiHle59 M{G/v!3r@QvE4,K7qgG^"D'2#x [%"6)_H<>? Dל ߪJAC BFxy#:n5B} w]@e ko+>X]B}B]@PG-&P+=Oas?Sĩx%UFD  h ^TYYRUFD"H$'@F @FxT]]9 #AUFD"H$@&@FTЃ$?P ]Au` ? un#`t A@`@I:t5ZE$:"p: [g5J=U5 ? ?B#0{GzI@hQ"*!!5 o`?CopyrigTt](wc)]20 9]M c o%s f"!r !ua i n.]_ Al0 (Us2e e v00-d0l>0>Uhh!#k!UJ`F6[48{  FMBC!??1$E34NMO_OG$E4E6O?O$E28?;1rA TRZSB3HD:  # ;h0>Th]]9 #UFzEI@FD"H$@FE I#@F)[3#Gӹ?P ]Au` ^? 
u#`t  I.!?_A@=Dgj+U-tA`WU"p [VJ#0{GzK?<@/BhJA/mdddG G) ?AgaYT6` Fil Co orzYbWYAY{bd#;z),!=D" #g#]#M%!!7;}`?!py io ht (c)y02`09y0M c os fw2!rk0!aw0i n.y0 A" o8s2e0e v0d0 l>]Uhz`1X#!J`iF  8H\ 3FM@BsOOAE3NO_WWE%EjFA_Oe_E2cO;QArA RS3HD:  # ;h0>Th]]9 #AUF_k'$@FC@F9C?Fl6_f?P n>u{` ?m u#7`t  Fx $?A@=DsU,tA`Fp5׹?p [WW"WJ#U0{Gz?<@MBh G ^) ?A_aYL!.` Fil Co orVzYAY{b#z)=/["  3~##M% 1 17;}`?!py igTt (c)02`090M c os f2!rz0!a0i n.0 A" ~8s2e0e v0d00l>]Uhzo1X#!Je`FxF ) 8HR )3FMBOOAE3N__fWE%EyFP_Ot_E2rO;`ArA bc3HD:  # ;h0>Th]]9 #AUF_k'$@FF9C?F)[3#ӹ?u?P u` ?u#`b>"(t  Fx $?A@BH_sYCt ["p [ZJ#U0{Gz%?<@?aBhA(G XO) ?A_oa]L.` Fil Co orz]A]bv#z)=/L" !#o#"(#JM!h!7;`?!wpy igTt (cu)y02`09y0uM c os Ifw2!rk0!aw0i n.y0 UA" o8s2e0e v0d0 l>]Uhz`1$ "H/1BJ!`9H""" 3FMBOOE3N __oWE%EY_O}_E2{O;rA bc3HD:  # ;h0>Th]]9 #AUFzEI@FFE I#@F)[?3#ӹ?P u` ?mu#`bn >"m(t  I.!?A@BHgj+YtP ["up [ZJ#U0U?<@/BhA^mdddG `O) ?Aga]T6` Fwil Co orz]b]A]bdv#z)4!=/L" 3o#"(#M%11 7;`?!py iw ht (c)02`090M c os f2!rs0!a0i n.0 A" w8s2e0e v0d00l>]Uhzh1Z$ "71BJ`9H"( "3FMBOOE3N_$_wWEEa_O_E2O;rA bc3UGDF P# h>,/T6D # UFD"H$@]F@&u@P}  k0g,@DTh |u`} ?LU,@TUh|u#QtSt.%& 0"tR.$t|7/I'A@%#tT.%U/I(/g-3b)5/G((""/.d\63@?3e5~???62.`Make B0c0grou/nd& ,3؞314G;1`VIS_PRXY0CHM!#w60@82"`?C0py0i0ht0(0)02=@090MZ@c2s0f`BQArT@Aa`@i0n0 0Al@ XHsBe*@e0v0d2-2 eh8^hhH-Q659TD[e59ThD[39T@D[9TD[E9ThD[5X|D[%U9TDWndaKoc7TP116238ycA1I51x>Xr1I/1&` HZ@d2&F0o`@A0#` SPowo '"0!+x"vde&s ~s!4r:r4r%pw:rgAo%UhmI {a1$162te2-t3I[ ZE*R38\yca651+4r:rB1b?xav@|5Rq'e5ycBE@G'3,'㫊?GMKc`? 3b #/1 bq3Hep@,T*h|62+EM6H'1\+U=yceq@ D0NrQAn`@aZ@5a-2] ,Th O7qouHD:  ##=h0>Th]]9 3!#UFaD@Fi4F?@F3mq?P݀>u` ?u3>(.t  (~k ?A@HrN_atR bp [_a"aJڍU5 ILc0U~$"?AG}cLI` LineCoklf rzcF"L&nFFF#?&S?#F"L&&w` STadf wd/#/i/J(i" Qg3#$.f.36h@H1$g1g1o7;V#?#1pyl igTt ( )@2`09@M^ c.l os$0fB#1r@k!a@i$0n.@ A&0lT@ Gs\Be0@el Evb dP@e0l>]Uhz143F!J2!XVD(Q(E _1_CT3R^k_}_WBUL5N___BUOd_H$o_j5bPL59 M{U7l/f!3raXve5!$fDm2%T12ooXU7w'B#$<~3v!x vI8!e fwQU:(3O@~f5T?F$c"z,Z#bV~-(T{{<B2>/ HD:  ##=h0>Th]]9 3!#UF_k'$@Fi4F?F9C?F3m]?P>u` ?iu3>k(.t  Fx $?A@HN?s_a't bp [a"aJU5 - ILT0U$"?A?G}F/I` LineColf rz!#nFFF#a?&?#F,F"L&&` ]STadf wd/#/i+$V# F^ lh #72n&` =?o)z 3#B..23`6=27kH A3117;V#?i1pyl igTt (c)g@2`09g@M^ cl oKsj0feBi1rY@k!ae@ij0n.g@ A@2 ]HsBe@el vb dH@0l>]UhzNA3F!J@32&2&}Q(PUa__T3^__%gU5P^os_3oUP__yo_ j5b59 M{7/Rv!3eramv4!$Rv}ai6z7 '#$ő~vmv vU%T"b xe@i?8Te'aa@eg@ fQ:?@,3 9F'c"z,Z#.T5-TAci@0m@HD:  ##=h4>T]]9 P3#UFD3@F:N@ @F~?Pn>u` ? vu3`[>"(t  #?A@BHY,t#`F B_P(?p [_["[JU5 Lc0U~"?AG}]TLCb` LineCokl\ rz]<"B&#nFFF#?&?#<"B&&` ]STad\ wZ/#/_+$L# FT l^ 72d+A'Ah9ѤB! Q3#.(f(36(72M3117;L#?1pyb igTt ( )M@20Y@9M@MT cb os\ fKB1r?@a!aK@i\ n0 M@A62 CHsBek@eb vX d2l>0>UhhC4A3 ל ߪJAC BF|#˲R7B} x]@eko+s]K|aIJ][8 3ͩ Ի5 ~Dz G(l P+=Oas   ߥȲs mUFD  h(^TYYBUF\.?x<F BP(?P?< 5 6IHC 6gfa6a9SBU@L&d2U?(sU*!(&J/BH8$q?/&?(H])|  0OB`"Firewal0,n0t0o0k"0p0r0pP711,d 0v0c0 U1^| a(){U5GO& h?0B?E\\\o   w w wy ߹{w yw yϹw {y{wwy w  ߛ{ 뛐*8p  } ߰ i|jDrag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aab\.ҿ~?? ;Կ)j??r]3rCUG DF P# h @T(PYY#EψUF\.?@~߻?P} ,.B ! 
:J :^:rY:k Au` ?"66JJ^^rru# *^b"h,h')qU ?3?4b {[`:#"&a#'JQ)b" ^^l!L1$l" l"7'>>ÿ@q0-5?46 rz23S2-uY0`-1Bu@ l"r3A`'Bl" L$9u`u `"[ bzb!u`R$"% أ#Q!'1`Vis_PRXYcPm!#5HP47"2`?CopyUr.PgPt (>P]) 20{P9 M.PcePo0PoIfmR^QraPQamPi_Pn AUlP eXsRe0PeePvPdB@!#d$%b!(g#>@f!^#&& "2bbbz12bGbf!cB##0Ub#]g,7^aN @V cT8fda[+_'s1 1"hh"2)t3 *tB+tBaa"qqb(!*paaSb"+6u\%G(!!["@Чcq"53T@ R"q3f uEf;uE5's_t$v&F3s͵NSYFuEf0we'Aj,@QZPUpQ"5x}L1*p"(AT?ݟZ2wB2;6q`RPQs.PU TPxmPSP^2l"+ 0^<@5b ۠۠"x5R"۫br3-PBrw('ٛ`ff_< $R-mA.AQS*pl!u "u"2u3/bBA0oB%Q%Q"2b3 !!E؁؁| NPl!#(>*pQ~ [ ]\@U0` SPaaPePl6rsRQA)P E u.PpBPeP{"5X .0` TcPѼSP"`DPvR3 `r ޹b` S;b4SP&SF.PQ!wPQJEd%TMPn;fPEcmPuR"/6HEQI[]1PRd Nm Eh! PPr"h"CU]%R`sRU3eA(ew3`tJرe^`BT , SRi+~K׻T `L_PcXG&8*JBlPiP;anIY0 -R_PoUS>b©'s)&U1:`` %2R&RviR`f`NEtWO6PKK`SHPP60 4ROJ2R*tI60S1QR`f`s7fitac'Eympx]b7'dYfjBr 3aQrf$i-QIaaL&E`v?`HvADK^DKDQKDqKED3aGHD H# h4>T]]9 MT JU@5wn?@'d2?@vP6 AJuM` W?SuJt  w]It=WuVIRY]JIl#>>@fY(? ?ALbbbz@A ebbfb)]b 6(. T#Jq!q!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d j!n$EB5 -F?}# }'X62zGz?3S@MJD6*-!./@ D)QQ&K,<"Q*i@ieE9 X6Xn73'K0UB!K0 aFuJlJAUhhc7![hz]0T ]]JU@\.?@WjZ?@* `?@JRT?P NamuM` ?,2<u#FtC mtimuV~ڻunO2zGz?>?@MA@bb?bz+ )#bC(+bQ)^+^&Ɇ"@fx?0&?x|5/-&bf^ >/`/%N B1B1`?Copyrigt(c)209Mq0co0o%si0fw2h1rk01uaw0ii0n._ Al0 o8Us2e0eo0v0 d0;1U?4-N3 N7&q045 EFXGeG'0UBų&!$$ eFyJlU`1#El,B^ PhO (!p&d2?@`0  <S ?؉O<S 5<Q3::*\QY'aQ5____Tehf'W{oaYQE##$Wuu']C_U_ kuVn_j@ղ_bl|>xoo uJSAU(:L^xSp8wag?%ehYjdcp i!rx  -?QcxS ?@ je,㨏O4Fӏ 5:vox] 2DVhxSPbtߔ尒 ? ypv y FXj|t:NѧwmN6@4~펰pdNAQcu.0ܰ+ 5P/Q&a/4EHD H# h4>T]]9 MT JU@\.?@W~?P6 >JuM` ?juJt At=WJRb~lT>5 g`?CopyrigTtZ(c)Z20 9ZMcoKsfr!!ain.Z Al< s@"e evR d4 HT5 B[}#?& ?AAw@"b4-/ &&t2Uy2?")@MJx6D8<6<2:iieE9 &5X.7"/'0y3&-!$0 FJ)lJ8>Uh ޓ$Ja7Z@?@ 0<Mbh#B] ?RlVJ:vn?6\ES YwQ#TTKRU%Ur=\PYZd;O#UEd^_OZaQU%5UPBgP([Ma\QU-V'?d2=_OZuV%hb2&_8U2rA 3LsQ% bHD H# h4>T]]9 MT JU@* `?@~?@PBWP(P6  n>JuM{` ?5u#t  5]It=WZd;IRMIl#>>@f.? ?ALbbbz@A ebbfb)]b6(. T#Jq!q!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d j!n$EB5 -F?}# }'X62zGz?S@MJP6-21/D)Q&K,<"Q*iiePE9 X6Xn7'0UB!%0 aFuJlJAUhhc7!-oiLaUe52j_|Uv<"rA ;#c8 bHD H# h4>T]]9 MT JU@\.?@@ ??@{?^?P6 v >JuM` ?)uJt At=W;On#JRbJl?ʡE#>>@f?.a? ?ALbbbz@A ebbfb)bE0(( #Jk!k!5 g`?CopyrigTtZ(c)Z20 9ZM c os f"!r !a i n.Z Al (s"e e v d d!h$B5 ɤ-@?w# Rw'R62zGzM?@MAJ6-2+/>)QK&E,6"K*i@ieE9 R6Xh7'0UB!0 [FoJl8JAUhh]7!6!Z@( <xLK#  }RSVJ:vn?6S YwQUTRU5UXz{\YR/IѫQEUPBP3([MabaiU_52d_vU6"rA 5#c8THD: H ih(>T9 M3JU@\.?@~?ҷP6 m >JuM` ^?5ulbSD_Jt bstE>2zGz;?@MJA@u+b+&K !JS"j& 1"Ab+Kb )-,-/B!!5 `?CopyrigTt `c)502U0A0950M-0c+0os%0f32$1r'0`1a30i%0n}.50 Al{0U +8s2eS0e+0v0ds0ib =L!XY/'0U Bj&%0 :l*J8>Uh@[A[A 1:vݏn? Y YwAQi@qJEr Ti Z?d;O!U3!^9_KZXQAUEPBϡP(KMQLQ I@'d2\ QLYuV_XAo@ ,oKZ?Mb_XEOOO\_H8> G*+s d:TzR4D$o F #yw LB г ~dzC @+htEBG̲_PhI" (J& 2 F, 3Ey/ 8G2(C5 UFD  h(^TYYB>UF~?x<F BP(c?P?  s 6 6DSM 6gM 6D{M6M 6TM66x6K6v6HBU?8/&?R)B@L&d2?h-y(\.gҋ(i.sU!A&/@)Y(  0B`KLaptuo>0,>0or@0ableF0nB0tV0bB0oukF0mB0biT4UcB0m>0u^2rj4nr0A11nV0twJ2i1eq0i>0mV0n@0,PaL0d01e11^| ()IU5G& p??U"6? QQQ ``pwwpw w wނ ppK_wpwpwwppwwpppp ~ w w~~@w?~MpopJpppp~ppwpkwwZDrag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aab~߿?_)jk?3@ ?DUG D # h @^T(YYEUF~K? P} 7K.B!C D :T:h1Y:| 73 7 73v7x7377&u` ? "66JTThh||j&&"uJ&)R2%X<X7eU ?0?' 
4b0{[`:1262Q3R2A"'QT9R2 ^L1$K" 7"Gd' @Bc]D?MQ0rBCSBuY@`-1"Ru$ @mGrC$Q'('R7"A& <4' &)9uv`u `"[bR1'$u.`425ؓ3a171`Vis_P_RXYcPm!#5H`o06QB`?CopWyr.`gPt0U(>`)02L`0P`W M.`ce`o0`'ofmb^ara`aUam`i_`n 0WAl` ehsbUe0`ee`v`dR13d45R1*83 Br3 66 2'QbtzV[1SBuL@`` F.`a^al^bz1 -2uߐqݐvR2CB#30U3F:w,7^F`^ P sT<`o0m`sm`a>g da;_zAA2Hhh"B)B* R+R::2mDmr81}ahcwP#6u\$ %W11%[2P@RA4"ECT'$@b2AEȜUiBȜU5z_4ɓ&F3sNSYF=U0ʚu s,@a#Z`ep"EˍLA}28ABTO0V""-sRxQ'(BKF>`R`as.`e T`xm`S`^B 0^LK'7"P#P E""b ..2EB2.˯brCRqʯܯ8z,BF3<^-""B=mQQ<*dh2"B B/R0R%a%a2qqr3 11E++0|ss7C8*PbQ11A;Q(r,`LA=k^u1MAE!ffuVҶ8}aY k q]PU@` SPUaa`e`lsRaQ)` Equ .`pB`e`β"Ep` QTc`S`&c^aEma`u9E^Mx_ ` S*bS`u`EL t_b9UdxM`n*f`cm`ub%7Uh5GAPbds Nm0{U0! P`rxqp2D@%b` Dbaew3uĮAe:fC`c'97/I/ Sbi`ao//¹T//` L_`ch/6?'?9?B"l`i`]?8(Y@?0?R_`?65?ON*h"wbk"a"R`K9B-/H`OI4`ad`cOLO/OOnh$gsC_L(t˰=_O_TQm2/atfa I4aoO_g¹f__㪲`Uk f`Pbao%ov8_No\_MPCOoNo(m Qt Q6LM5GMLvEHbdr!SC qpsY` W@DQQ:V6xHS`C@MOя^u`MpmB :V¡3EYx!O U1vqyRdnu}Ϗo߳(©`X^~"` %_@&דq|`NTWORKurH`PaRTRǰISQk:i*,Ŕ{Xo'0U¹ňaY`y}֏(!(H*ڽ(5|ѳy(!{5{ᤰdvHv˷v d,v5˧v(!&v˟"vĈ~(%vċ~Λ%vİQ5vѼd{5v5UGD  3 h0TdYYB IUF%D"?Fب̓m?Fre?F_jYfw?P} 8Yf u` ?0)u:t7  (\toLӑpG?WS̆{c٤CU!!`?CopyrigPt (c)J 2\09J MB c@ oKs: fH"9!r< u!aH i: n.J Al @(s"eh e@ v d  #S$#B- #l# '?R;= # &#24 6 6 2$Z# #0U2 3@&B ^#5 6X'7'R4 6!4] 6:e6^P#[EBM #HA%DGHD: H ih ,>T  9 M/AUF~?FtC&j+?P6 8V> a JuM` ?0K Ku:yJ6bqlt7~ bJt  $OC>JU2zGz?#@MJ?A@+(b9)F+,F&" !*$5J" "& w"\A1"b9)+6'"s,"s/B-  P3v#9 P1P1X7;`?CopyrigTt (c)020090M0c0os0f21r01a0i0n.0 Al@ 8s Be0e0v@d0N0i @b9 l6(X>7JG'0UBX&50 JF^JlJ UhMAA $ A11!:Lh4 0 tt3J!@|?Fhdzt 0 ?kCYk XEjUqA '[\<1ߑUjU!o6Pa@oVUhq`V r])m܆YaYi*X(QYaD"H?F=0 \S㥛\blXfYFt:N\[ZR ÅبX0Q^pp\.zZ ,\ 6?iwUS5?FrSGo?PE ?=TvE?!Y?.`txv6mT]]9 M 5AUF@ ?Ft:N?FM~u_7w?P6 .>M JuM` ? IKSu8Jt5  rtyV_->5^IWA>JUT5 J#/!/!5 `?CopyrigTt (c)f 20r 9f M^ c\ osV fd"U!rX !ad iV n.f Al \(s"e e\ v d (!E,$#E)edq%rVA٫llx51o>2rA 53eHD H# ih#4>TA#3 AM JUF8?FzwJHM?F,+?F%VA?P6  n>JuM{` ?5uJt  f߰nZIt=Wv%MIRG|׈,IljE7#D>5 J5 g`?CopyrigTt (c)* 206 9* M" c oKs f("!r U!a( i n.* Alp (st"eH e v dh PB~A!"#?& ?AlJa@Y;6` F" q!z!lT"z0_b0A0#bmT3z0B9Ub25181 $?25?G5\ P1u`"?46?8C_1A1b?t0bAFbI&DɤG b&LBFAL2zGz?")@MJFp'CfT<(Gb&I3F-LB3FiieE9 1X]G/'0UR&-!40 jV~Z 3Q hRG!-AT6yf3@$<f@C  bfJ: @BS+c X$}#e`e%akqcEAyHh?F~l0I#c ,aTE1vo>B]rA Cs+( bHD: # h#4>T#3 AMT JUF ?FH$D?F8}, WcSP6  n>JuM{` ?5uJt  s߿ 0eIt=W(\IR}WIl#>5 UJ5 g`?CopyrigTt (c)* 206 9* M" c. os f("!r U!a( i n.* Alp (st"eH ej v dh  E~A!"w#?X& ?AlJa@Y6` F" q!!lT"z}0b0A0#bT3z0UB9b25U181 $?25?*G5\P1u`"?46 ?8_1A1b?t0}bAF'b&D%G &LBFAL2zGz?"@MJF'CfT<(GbE&I3F-LB3Fiie%9 1X]G'0URŴ&!40 jV~Z 3Q hRG!A`Lޖ3?F\.0hO *1J#<=#,AB  f!JabdP$dreEAJT]]9 M qAUFp8?Ft:N?P6 mL >M ($JuM` ?AggL2DuVJtS  {G7zt.%_>JU5 J =#k!k!5 `?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v d +"d!Eh$4#=#|x!F2 J3?]6 ?Ab4z@0Szu< `` F !!l"z֔0` ?251ݔ6b A6b4(4"Y==/w# w']6T4"BK/C2?=@M3#JF3@C37b"9F<2Fiie759 O&5X/GB_A/'0U{R]6-!x40 2VFZlJQD0h$G!Z@!11(Z/CFF\.30AR@Lu)_G  RaCf4"J? 6Bf*}~2?FVor?PE &TvE?~1LZc Jxdid`gR*YB_Lc׏_:M m&!&Ra DBBBTfwr5UBoPb2]rA 3q+"HD: # h4>T]]9 M 5AUFVj?Fޣ ~-?Fe2L&?FZT?P6 m. >b JuM` ?+ILKu8Jt5  O7nty>+B`"$'-A>JU5 J #/!/!5 `?CopyrigTt (wc)f 20r 9f M^ c\ o%sV fd"U!rX !uad iV n.f _ Al \(Us"e e\ v d (!E,$~|y7, SzQ|nF?F.`d %TvE?B1ZBrA YCc4!*uxnG@#r & 4xavDB#Bsrb5UHD: # h4>T]]9 M 5AUF{y3I?F~??Fd2Lq?P6 . >  mJuM` ?0IKu8Jt5  n`ݴڅtyx&1UԅbX9AR>JU5 J#/!/!5 `?CopyrigTt (c)f W20r 9f M^ ]c\ osV fd"RU!rX !ad iV n.f AUl \(s"e e\ v d @(!E,$~ 0T  JUFe?F:[??FO&Q,"7?P }uM` ?j`fpu#zAtw  [{t2Oۨy`<%t2N贁Nk?@MM+&A8 b*](i+i+i&p!!`?Copyrigt (cu) 209 uM c os If"!r 1a# i n. 
WAl30 (s72Ue 0e vI0d+0"!3 $#)Ry&#‘4?6Q?\c"bk)i+Q;'-,# '6% h5 IFXGiG'K0UBŨ6!4I iF}JlU!#A(ZFj^I30vi p-ڏ|Y &V"(:!]H?F{wX@0pS l01 o(fy { xU,>Pbtv3m?F/@M\lSFp ^ \ 0+??Fް}ulHyƊqWwi?Fܾrޝ۷cl)-i]oa?F5o רo`lۗ]Cfϟy { xY^p𔟦pO4?F0eO,plkleW,^ \ -r[v?Fp =\LhlmhG N1?F[Q{ l/5:1 fz7[;KՃT0P1ey { x¿Կ{dޯ?Fv9|eޝN.l?DɨT[\[fk'?F#Y}3Zflٙ(oWU"NhLޝ].Yt?BدW>lrt~]|%߸'0(f%y{xa v ?FϘǫḿl$h|a[s!?Fۂe2'[\YRSWW ?FIǧmp~Xl-TBi9w_Ku~Սe~< 8qOfSx#5y=Qgk?FG[Pm j%a ;(|a[sjT?FjwZ̮̈́S"o& +~?F 'APm*?w?3f|o7gyP( '7Ƀp5{/$ x //-/?/Q/c/ya-x?FlS+0ƄɰOYOe[s/?F@A ǻDÄZNF7?>.N?F nMoi?~q,P>Ʉ (:^?OD*x 7OIO[OmOOOyNaZBPK+qǀ̩BF6Ks _S~MGt='7e]Վe?Fɍ#cAcaKҌVB_,ϠZp"IؿoodK eowooooo k4@\{-=(eKs;o@,P%T̆m ?;e_#ֶZpp\J)~Wf ,Padҧɻ;y ÏՏ5A ?Frg,mr2 +nrsc9az?FWn ~}\+ :|wص@ymwQkܳ 8I2=d]? ń̙ "Ͻ)i8j7i3 ͯ߯i+?Fu߫˶<yOKs@,0YvjO}ȶѭ $Y MMeܚ-O ߟ4~ <-f5]Xкj 3ʶPbtf2+ PpߚDڋp,ψyӅ猜,4p:9?Fo}!s>ybH E3Ɲp ?Fn~p6Ü~y˓lj~ݨ" : (0!϶<'y'K+HHZl~f~ӋE?F&Ý A;yƿ* ,/=r?Fegm yQEڳs/e\?Ff"h=y/k.h@̵I y|!??4JkYhs??????iHu)?F͎ l䲥aβ ,IO;o\?F!yhX ejyROm,F @^f]ywO~6bԢ`V 6y O_o%d r______??FlO?=Ȣtc&O ,wo<ڐbrqM~@ O`> s>?Fcme~Vve~sO~%HlĀW<]|F<>~zK_]]M),ͭ`7E̺ژB>mfT# ,Bg)?F[='JFmjG;B|'|`yq])tK } /:Pp0&%;Taqԃa/E-? _ [X!////??D Tw2?Fw]12%W嬨$:W{,߰ M >:6yEpV(0AzGxaNtb#P?P-yp\J~%wMIn17O~:3݃* 3%\K?|\^OT[__ { xOO__+_=_"^j^I<=ADǂr-+ű!M!?夿e{*% m9N?:ߤzf"h<=b\㭖h1boyt65o %#M#'VW?FGn5> 9ؗϽrqCug<#ֶ?Fv^݇qLJ|U[>j?F^@;# qYq(ٿ2#>#Md TH7D6D # 4#T  U U !"#$%&'()*+,-./012*3>5h T  M YUFBg?FJ:?FAؔPM.uM`?u#t  jm7}%t!" .Π%'&!%4-Ut}2N贁NKk?@M#&A b(J++&Yz}/R/$38?#b`4zt (+b/R5N}11/"`?Copyrig t(c 09M0c.0os0f21r0Aa i0n Al6@ 8s:Be@e0vL@d{"1U%4e3 756%01hEPFX|GG'0U%R!56S1I FJlY)U4FX1#Aց(`}oO40OeOW0fnj# Qˣ-Y R5b 8!EЏv?F[ =\}g5ly$Oa3[e_q[~haWWrm)T`fnӟ_C1hhE:]_cd0g`h _R"rg t4{s{"oQ#w______Vcޯ?FW9|e!|vr.l`(n :l Jo}`[Q ?F#Y!|helƳoZfuj'7czX!|ߝ fUo(oZf?M :rm3~Tɶi(R}ty { xUV ?Fǫ!|?UvʁD!iip_(n :l r}9!?Fʂߤe2!|ωY\*!ijlߐjǏȯگ` K <ZfB0񁚍axCmƉR}Bty { xYџ+BQgk?FG[!| =jlU)+R(n :l }jT?FwZ!|B9nl(Z!X󁟜 ")p Tϑ` 4/]$R}p ty { x#5GY,h+?F߼߫!| ?iOf+k:l}‚,Y?FqvjO!|6zKi%SI#Zf@,0ƪ!|~9RĞX}ߘNd̮ Q>'ەߘux x&8J\nYJa-x?FglS+0!| @m}i˜?Wz(n}?F@ D!|¶RiBQ:4i!@Se L]yEڨZf?g5 J[#PR}ty xTfxoPoof+k*\onooocN1?FEߋ]2φ gm_h{ 'aV ?FIQ|X"Z~2z}5o]Sh]CYR`-`CLVI h oo&8@6>Pbtg[`#|϶U /(+~?F0'Au=0cLas` =E ߴI h 0BTfYl~g[ 2DFd-?FGt\e(-upғh6SFOp \pH%y|C/h6+S{0 ӼvAf@g+hPљV M H<l5-W/ :FdAg?F#ccMTbzm1Gcqe~1-B?`ZAvkNngoPoboto OkH?Fɝ?) V1? zk "M ,o"r?FLgmƦ J TܪMcSS6-F؏{wKJ\M,z% #߭ҋEŁN"}3cQLYNVV䯀@.HZl~ n P?F͚Dڋ~Ͱd@~y6{:M ,"Ęp:9?Fn8ͣϕ茻yb H v_=x?vQܠK='+ #X;F.+*3 rz@M\vЯ *ͭ?F;Y ~Eؚ)~ykM ,L5?F)-ffA煤 0{l[΁F~Uk߸?I36&&d>!ё*a0tM@ޞs0COρn{϶ f5A ?Frg, {35ٰy^5PנN,z5kzX_z?F-Wn \?/qy?%b/'8a'߀x\tRʓVxK?q%im!Fg?Jq /2_D_V_hV {+,qO_6}i?FrsM<+bSykY=T=/a]e/r_`x ;8$6HZ>asu=?6|./6l,`F@^f\/j~9̿ia/nLHThmO;~?}[h<11?C?U?g?y??@妎Ė  O0B_f^O8e\?FU"hύ?@B^bybOl?Q ,Mo~SA(5 `_ - <\_n_____⩼Pο!!2o^po8p ?Fna9hfoʌn%Zm?7 ZT![!j( ""`DpHxs/ȏ]-߼LbqS?FҾ+cCXA9x1ˠWC s\3EP|`0""ʟܟy*<N##A?FyYǟìiQ9!.Q^=&J+#ͷFSPA &&Dxj!.?Fo,,M V~< XO~U?F>6S p??HUK,HY&/ ?` _/& A[&PXp//////y*ӋE?FTmcͦQl~V)a ''F? |kټ]c¡J:?YnD((!ooo&$ ?FEc:d9~.]Gl))W_?F޶P dS&9~ԫߟFRB1 > 'Nzrmw }}?=In됿Tr)˻)ڸ" 0BT Yoo?F\/8BҡdifO g **ПA Y{?FְgmƦ͌!MdihMcSS%@'9 )|z֠?H,g]r~ei8YY$m**#(:L^p Y'4?FÄ!GQJ?Zqqqdi +"++,#a?F-ߕfadi [΁S<]?~qUg  ʕ7mI[[ߣވ/ZG +' +6$Vhzߌߞ߰@ Y???6 ,,,X?j??"6?FB6"MRp4KTp#1 ?mJ,` O4PHOZG\,U+,d(% Y__of';--Z__G/_t^> s>?Fyme%]U`vdicdUʔQQ ,?w/ ?~efvoZG*/\-K-H& / YP$6HU[../uO/t|?FaqS} 4dIalBt:K7? \T-E_?!#hTXO\.k.h'??OO(O:O)@R(dvK//Oo_t͇W?FЧji QI?e_N|%ЏIMs.qe$ү䧆o*\/ߋ/(o o2oDoVoho)nπϒƗK00o"яF>t44I8?F?f_<'kԸ!Iv' Pr۱|@? 
}.q?5d䧴*\0 0)Ptlt)K?FZIl_<緬TF!Ǿ iϿ­.L.7\1 ;1J*j|į)"FM?FO4_<`s31In1)0` 202@q1.?FzT~_<ʱI (빸~u7#ߩ MǍKKRw 2 lPiCϨ6\2i2x+Ϫϼ)PK303nߒ[Pr| ?Fo3on1Ox0 ~L":rCF]KSDJlع0 MPq6>B33, aZdFy^#I1Af4b4J?A^31K404dlI?FjwX@_<\X*߰~LϡD/g\Hb1),-FuPx~K :ehDo^1GEG?7l*B4;48-*<NLIw?F`sT5]|N~LZQCݞaQK505dm%?FGn5;TW~ A,p"wQYy>)N1`B@PKB^qq_'T?B5[5h.?(?:?L?^?p???F1 4c}C0\wPN)qk6<6?d]@?`pF*;Z"}^װIAOfQ;8aVȯ#{Wn\=͘OLP$Zɺ}_.qV'@IO^6 !60/D_V_h_z___2\֟?F4vS[p-̬)K7<7o\?y1<<<:y6yrЃ&~?!1b W@=yBP(lf7?۵FSF&Oi{// bxRC//Jy<<8X5X/j/|//// ???6[=2 l=.?f&M?dsl?~?+eXO-y?Fp\JlR ?ɦp{?\s__ ~C +OOJy==ftSr BTGHD: $ Uh4> T]]9 PM /AUF~?FtC&j?P6 .v> mJuM` ?0IKu8Jt5 }ty$AJbA>JU5 *J)";/#?@& ?Abk$zo@w Su`` Fil Co orzw ` Z/2%w!w&}bA&E&UBh115 `?!wpy igT}t(c)W20[09M ]c os fM2R!rA0!aM0i wn. A"U E8s2em0e 5v0d01 4Y=$3 $7@&T"+t32UHB?=@MJGF(I(/HFiie59 &X7G'K0HC@&![$K0 FJlJ$+VA' (++ ]61h!Z3F\ O6?F9.4?.`tx@! y@c#CJVAbX,?F?&d2b iQA.$c Ȑ #Sa3:TLh4b bcitt3h5eqA 'k?\<1lOe5e4P܇a5f_eD"H?F0 wlS㥛ԍlZblheFt:ǝNl+aiR أhQ\.j ,X^F5~I?F\O;\?PcG &TvE?a!"rA #c4xLexmdAa S ύ4xa$B#T]]9 M qAUFbX?Ffl6?FI$D"?Fz^?P6 L6x> ](m$JuM{` ?Agg2DuVJtS  y&1tQ rhGzI>JU5 J0! #<l!A?% ?A^Jb @bbbz #"b9#$"b$B 2% # ?<@M3#J6(&$%"QD;'"**=#115 `?CopyrigTt (c)02U0090M0c0os0f21r0Aa0i0nU.0 l@ 8Us!Be0e0v3@ d@+"1 (44#-=/3K 73L)i@ie59 O&XuGG'0U RŚF!$0R FJlJ8>UhW^Q^Q 1@!V!1(ZuCFWF3Ф JR L?[<œ_Q3$R$4"J1n6f?Fr\. LSW_Y?)$8aE:TCVb]k&aPdd *#8d2L?Fw;?FH?PBzt &TvE?!YaYcR&ekre~RMv'(J9_b 2}&(!&Ra@cBwrÀ5%8aOEev?n?V]kAyqiQ4eDe'BP(\orlh )$a^F)?F.\?PpU#u#rA =4Pz d|oO_Y~sEzw uFtQcwHD: H h4>T]]9 MT JUF|?Ffl6?F'd2?Fz^?P6 n>JuM{` ?5uJt  bX9It=WQIRuVIlGOz|>5 Jr2?@MJA@]gb%(1+Wb3)O+O&B!!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl 0 (s2e e v 0d0!$صh2?6 ?ALb4z@'!Sgu`` F 1!l"z' ` ?25߶1ݐ6:/[)ʼ=# '66Kiie<59 XQGG'0UB6!4%0 FJlJ(>UhI !+!:6V߫j֋#0=1 JUF'ÇpС0RSO S ;[5)۱Q3a9iQY'aHD: H h4>T]]9 MT JUFt:?FFh4?Fz^?FkZV{?P9>JuM` ?uJt  lIt=W9v׾IR}?_5^IIljt#>5 UJ5 g`?CopyrigTt (c)* 206 9* M" c. os f("!r U!a( i n.* Alp (st"eH ej v dh  3Eص!"#a?&?ALb4z@0Sgu`` F" q!!lT"z0` 2?2C516bANt6b4J= &;02zGz#)@MJF86x?OK7XB 6iie%(9 X7G_'0UB&Z!40 FRJlJ(>UhI !lXAa20$)0=~0M:XbX,\QS }A_б5Y9mYQ_H8>z 9s -b@J)F9 #ߥ; OB ٳ%= ~dso@+_6o8s}G"of֛_H C*J rH]O /qS (-3W Z +_ arc Bhg | !Pas//'/9/K/]/o/////////?#?5?G?Y?k?}???????? OO1OCOUOgOyOOOOOOOO __-_?_Q_c_u________oo)o;oMo_oqoooooooo%7I[m!3EWi{Qw iX Џ*<N`r_ x?v ߖUFD  h(^TYYB UF~?x<F BP(?P?X6 Bh9U66 6 6U 6 66W66}ȅHBU??I\BU@L&ɯd2?6-G(?\.Y(7. sU!}&/)J'( G 0yB`7PC,personal 0c0m0utR2,$01i*0o.2]n0tw<2k 0Weq(0i0m0Un*0,Pa0dH0i1e!u1 ^| (g UG& b `,?FLϲppw  pw owwwwpwpw pwpwp݆~(p5;~@?CW~XZwwpwDrag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aab~߿?)j࿚i?3@ ?DUG DF P# h @T(PYY#E#UF~K? P} 7S.BB"D:J :^:rY: ,7 A 7377377&cy7&'}7f:&7N&7fb&7v&7&u`} ? U"6U6JJ^U^rrUUUU&&&&/8(:&N&N&b&b&v&v&j&&"u#L)VB%\L\GUBEqD?3? 4bP{[`:# R&VUC"'JQIL "/L*Q$" "8W!' @>>ÿ@qZPU?TVpPr&bcwRu}P`-*Q"&bu &@WS$Aa(Kb""ch@D )9u`u `"[bz$u`؂CaAG1`Vis_PRXY%cPm!#52madPa%VR%TKa/ZB0gBqq32REqEqAAB5r6$ 7”ECϔ|A8ܔspꓮQQ;< !LAE7ĂFD a,!J^hdh")xA*HKa./Hd$Gpha1]` Mpnupa2ptqrFqa` E@u"pp 6peptuEQ`ѯ]8QP}rd NmP3P9KZ! PprtX˿]r`iDr|q uϯ\/AA$pqc`#E^-% Sripq߫uT+=,LSpcxwߠ E߱Bl pipgE}Ph RSpoO"|ґyz(ޥA` qp鴜D`NwrkhaGpB`e#AI(pqdpss@|'߸ֿSbpsQx/%mF#qtvq Iq{uQf{ r (pqq@(3/+URqm6put%  SGt5QVeqσąV///(C(pA` WlګO`/iMpmr1//\Qp?!?OUpqsIyP?b?XiBplrgrTSp#"2NO)OS4paB2Cpo! 7iO{O` GT01@ `p#_OOGO _u% `_#kapTq#(?=MpCv/Qv__SH"d\"ZqCB??Pq|&Љ و3UF~?vZS\u`?d?7ruAy}5-.B"TqqՈASw>>{ÿc6Hr92u?`s`Bxrq 07K`R oF@IAeB$H|8Ex~8"7=rb ӀӀƞFSrӋb{ra@RKᙑsz}_MSaabn.{&$,"-Ѵh!PPa@iy60`ZUuu`a p*LQ. 
p`RÑ.1=f@ߒbÑ;J&;i1J5MS_dK&#„BCF3sNSYFw";(LլDQT]]9 M !AUF8?F:?Fm](?FF]?P6 e$A  JuM` ??Su.Jt+  Rqte_|tH1qzrMZqaMKI7>JU5 J["a?2& ?AbAw@X"b`$B2zGz?")@MJ&a#b#z\c a(b`)m&g,X"m* 21215 `?CopyrigTt (wc)i020u09i0Ma0c_0o%sY0fg2X1r[01uag0iY0n.i0_ Al0 _8Us2e0e_0v0 d0+1 (/4->3K >72&i@ie#59 XG'0UB2&M$K0 UFiJlJA h7P1X!ZCFt,ܿ?F9$*m ;6dm S)RJ:^T<@T R}S $S @gQEU23\T xY _5ˀ aYQU#5UFY&yw\QY!ħ[68^_zd {?FC:\4 `]3?X"rA kf#sS*<7$dFl~r7lb+d [!}UHLuD  #;3h0TlaaJ )UF~?һ P m.N >uA` W?I?u#8b>t5 }ty>b"[>U?& ?bAK@8">&>*m!m!`?CopyrigXt(c)2d09M c oKs f"!r !a i n. Al (s"e e v0d f!3(aj$B-y# y'&t2U2?@9 H>6A(A(>/J8M*ifPE p6XB7^G'03&J!-$e F/Jl>!$Ul8AAa  @!#E18!1Zb?F _4 .40nCA.xMѧnA[(:DTPFmq4 _WtY\ FهX3^&o`_sZRXECU֚|I?F'@[\b Eq\ ypQUdKq\;rdXXQϓCگ K/ڇTXK^F!}} p$?P8-DT! .3!^_p_U.~4}]~q.x~TA#3 AM JUF#H^?FՃv?Fꖞ@?FĶ?P6 n>JuM{` ?5uJt  ߀It=WKꑔIR ׼IlV #D>5 J[? & ?AbA@0"b8$2zGz?@M»Je&9#fb#z; 9(bE8)E&?,0"E*UB 1 15 g`?CopyrigTt (c)A020M09A0M90c.70os10f?201r30l1a?0i10n.A0 Al0 78s2e_0ej70v0d0@1 4Y-3 7 &iieE9 X7'0UvBŴ &!%$0 D-FAJlJAh7(1h0!T#VcS( bHD: # h#4>T#3 AMT JUFȹޘ?FOd?FoT#?FqJĻ?P6  >JuM` ?juJt  uD It=W36EIRj׌IlB?@-#>5 J[? & ?AbA@0"b8$B2zGz?@M»Je&9#fb#z; 9(bE8)E&?,0"E* 1 15 g`?CopyrigTt (c)A0W20M09A0M90]c70os10f?2R01r30l1a?0i10n.A0 AUl0 78s2e_0e70v0d01 4-3 7 &iie%9 X7}'0UvBi &!%$0 -FAJlJAh7(10!`3'e?FfAs!h, ?nN#<bydb   VJaRTT $jTRUE:]gy_eZ-ֱQ%UOfkZoaU526_HU0"rA 5>#cHD: # h4>T]]9 M JUFS6h+?Fڨ?FC7LԠ?F_oo߯w?P6  >JuM` ?uJt  k6It=WHףrIR'c_hIlX_#>5 ՆJ[;? & ?AbA@N0"b8$B2zG7z?@MJe&9#fb#z; 9(b8)E&?,0"RE* 1 15 g`?CopyrigTt (c)A020M09A0M90c70os10f?201r30l1a?0i10n.A0 Al0 78s2e_0e70v0d01 4e-3 7 &iie%9 X7'0UvB &!%$0 -FAJlJ]Uhz(10!Z'F#?F9=7Z#<E $ ĀVJt!#cGP!2, TYŏ.ZfQoS 5!;KƱQE:F1aQ, jRYSuRYFU>X%e1lER[6C/I .g\wPU5egㅪlaYpO0X2:_;0"rA >#sHD: # h#4>T#3 AMT JUFJV?FO7x?F g8t8?FxMƻ?P6  >JuM` ?juJt  .![It=WBaIIR#!xIl?7O##>5 J[? & ?AbA@0"b8$B2zGz?@M»Je&9#fb#z; 9(bE8)E&?,0"E* 1 15 g`?CopyrigTt (c)A0W20M09A0M90]c70os10f?2R01r30l1a?0i10n.A0 AUl0 78s2e_0e70v0d01 4-3 7 &iie%9 X7}'0UvBi &!%$0 -FAJlJAh7(10!ZF/, E <{O!#4  OR|VAJaN[]PPReSRUE:]?Cm{ɾ__Z"u)Q%U=V9P]Y\QĖU526_HU0"rA >#cUHLuD" # ;3h( TEJ UF~K? 
P .N  >uA` ^?A u#0p,bKp>t- bFtq>U2zGz?@9 >?A@(b),+,,&" !$i>"&p ]""b)+'"Y,"VY/N*1*1`?CopyrigXwt dc)a0W20m09a0MY0]cW0osQ0f_2RP1rS01a_0iQ0n.a0 AUl0 W8s2e0eW0v0d0:i f x!X]'0U5BX&50 Jl>$Ul8AAC U  B # A!֚|I?F'@ & b E9& y'9m(^L0T,_>_ *XX3U{?Fr\HzdKB\;rdXXXAk}pCڇ K//XTTZbP _?F92}&~`m`[ ]3PY0nC]MnBU,Nvlza&p (91n-pQYWAliPFmqٸ EY\ FXX܏&o`DZ?RXXE_&__J_|D#AD1guOKOOjkFs"DI@[sDz.q[]J,bu?kZ̯_ofd $?QRp؞lq o|T]]9 M !AUFM4_?FX({'?F8as?F'|~?P6 m$ > JuM` ??Su.Jt+  ,ئqte_GqzLxg q}KRI7>JU5 J["a?2& ?AbAw@X"b`$B2zGz?")@MJ&a#b#z\c a(b`)m&g,X"m* 21215 `?CopyrigTt (wc)i020u09i0Ma0c_0o%sY0fg2X1r[01uag0iY0n.i0_ Al0 _8Us2e0e_0v0 d0+1 (/4->3K >72&i@ie#59 XG'0UB2&M$K0 UFiJlJ]UhzP1X!ZCFFMaT J*m $d2rv 7|QL{RJ:bTVͶ$S ?!͟TT QE!K:?Fc\?7YMQ#5UC6ǎȥy\aY = GZgh5Uc?hV[\]PfaiQb_tSF菐-?FҮYT˳ ]3?X"]rA f#asS*ԍ֯JQed~V(L qAb+d ׻HD: # h4>T]]9 M JUFd $?Fs"D?tP6 >JuM` ?uJt  RpIt=WcsW.IRI%l#>Q5 J2zGzK?=@MJA@neb1#z' %#b?(1 ?(Z+Z&B[ "?& &?AE$ \/}% 1 15 g`?CopyrigTt (c)A0W20M09A0M90]c70os10f?2R01r30l1a?0i10n.A0 AUl0 78s2e_0e70v0d01 4-3 7&iie%9 X7|'0UvBiů&!$0 -FAJlJA h7(1E!bTc1 $<K $  OR)vVJ: KvŦM\lQeS -^PűQEaZ9]`Y\QU%U=V,M_Zڕ]X526_HUE"rA D#cHD: # h#4>T#3 AMT JUFEM?FZI{?F'ϸBl?Fĺ?P6 >mJuM` ?uJt  ^J˛"It=W0"شIR}*IlFw#>5 J2zGz?)@MJ߀A@eb1#z' %#b?(bJM)Z+Z&B[?& ?$AE$\/}% 1 15 g`?CopyrigTt (c)A020M09A0M90c70oKs10f?201r30l1a?0i10n.A0 Al0 78s2e_0e70v0d01 P4-3 7&iie%9 X7/'0UvBů&-!$0 -FAJlJA h7(1ZTlT]]9 M JUF5 J2zGz?S@MJA@eb1#z' %#b?(bJM)Z+Z&B[?& ?$AE$\/}% 1 15 g`?CopyrigTt (c)A020M09A0M90c70oKs10f?201r30l1a?0i10n.A0 Al0 78s2e_0e70v0d01 P4-3 7&iie%9 X7/'0UvBů&-!$0 -FAJlJA h7(1ZE!ZF" Z <] Gʼ#OR  ORS|VJ:-US DZ/`#dTK QEUHǻM\Q`YO$iQ%UY $?)/Ep?Fa_V0w?P} 8f u` ?0)u:t7  Id̝txhˑgFݑZo`ҤCU!!`?CopyrigPt (c)J 2\09J MB c@ oKs: fH"9!r< u!aH i: n.J Al @(s"eh e@ v d  #S$#B- #l# '?R;= # &#24 6 6 2$Z# #0U2 3@&B ^#5 6X'7'R4 6!4] 6:e6^P#[EBM #HA%DGH7D6D # 4# *  !"#$%&'()*+,-./012*3>5h]T]]M YUFhop?F>+?FoPp?Fsqє?PM]uM`^?u#t  EF't!"UY%'~P*%4'?"U}2N贁Nk?@dM#&A b(+++&Y}/R/`$B8?V#bo4z @/'/5N }11/"`?Copyrig t(Uc 09uM0c0os0If21r0*Aa0i0n AUlE@ 8sIBe@e0v[@d{"1U%4e,3 7D6%01wEPFXGG' 0U4RD6b1I FJlY)U4FX14#A(!L15?Fd4Mf0:O$}#?)߰=^!8U}?Fz d@ lxrNj$!la.PX7h3URWa?3am:"~vn*&^x%>awE:}JC4` k|| a(d08gh_oo)o{T#______Hv?F d1a|3`[]w|c?7oHv.?FMFa|S3w|\£|oHvJKmc~ea|{ oHv(sna|;6n5coMA}#5G~QUHv>Η4?F@va|+|w|JCMHv,f V3Sa|p )zyBr7椏Hv]DWrGsɾa|k eą﫚l2 +7a|QWgQ-<̾э_q}Ra&8JHvˁ?F~廽a|/ͼhzyoχڡHvkT?FS;a|3zy? HvJbS6̨6a|ω9?OHvS1Ɇb._ja|^~_\@{}R -?Qc&Wa#UwvnnE>wm|<jBmm*'Waɱ`nk}.V̲o~'݌ssK/8}߯mEKx6iBJP1 UB}K^~o>P/6i@A}///* ! %/7/I/[/m//&A2mo~Kf΍4D2=y|Bm;m`(=w}zNmWUȣ(K\S}|t|d\L|L+YlQn0T%%$ċ^QUHQOOOO& AOSOeOwOOO&놮 G3!(E> H}OĹ[DnXXt' U!2h`Xn b_ Pg3~qP3fbAu~R)umooo& ]oooooo|o&g-N+_!eW?1 il}1HiDYH'rg|xMMXjQ%f|ɒ= %;vro]ݓinÞ]S( & yӏ&f@e!`¥ G8M Z³绾hv0W5gS1a8|QXSbW~7mȧՠ~=߬lB|y:߾nuTѺ'9& ˯ݯﯾ+A 7>M%- q?j j dEuk uQ9)?FAަIif[WvpQ鐲9 =~OIѮ]xz~êJ 1CU& E`=U4 Xw׵\ o5|݄ erKv7(\2Έf!9"pϼRѡ{ R}\ $”ގ#q[\ƧGJ Q\BĪ&iZf}#Xr_N|1i"@w\y =~X@/n????Te/?#?5?G?Y?3㒾?Fu}A{ZϹ?$=l?Fߣ؝@鴡s! U/q2 y#iwjQ rt}\\ǟ~nxBt@fP@Q*(g˼ߙ`]N1YY͓___Z_-_?_Q_c_u_՜t?F#|IΝ} C)u18h6o]?F\ v!ۊ5,'pm~ "߹3*_CRڟ8Yq_S˼_}5#7o7I[m ռwc?F䱹!? 
Uܟ*d4IM}l+m/r?F!zxy4GXM g1rH-odip'ʽh:PF}LHM\БqWČMQyS:UŸԟSewZ=x?FrJS=W+}| 8*_ޛU z0~+WzBJ/pVD W| 2'ؚ5F] 0+!31rߠ߲߯450i;h4MLTJMUs`I}$z8.=u2fC}1 O2V{uf`5MEp4}o7o9lM* g`S}+M?=;??1O'9K&)E_i) QiğuPmk6nɣ{j}Q}L]_h?>nj^$oVLɓ߱ mkVo@0Vp > vCu!9QAdmLG~ >Qo1/C/U/g/V/ /T7} 7վkEA*w}+% B`;$gK*"V2w@96קM-nvSiF!nLLgqNy4~ QUcqÍZS)^MO_OqOO-v??OO'O|9O F1'ߢM UPiPK>´}]"}9'uH0 A5]mI;;#ǧ66hx~r JQa~&qrMi~߭Uȴ߯E~߻ io{oooI_ oo1oCo|Uo$C$g;#,b2?.x}CwMLW1b[̵yx8ج`}eM}oJjRݣS62`M\|"XB}B+<5asݴf);M_|q7;ӱ,_|#Vx~ C.t 1_!; }i틺%~?3]9r?Fn?59?趒;>L]ߛ^_u^VQ$2Ս޻}ǸПƯد3EWi{7TS@f𿟜L)l\m߇WMw~쾟K,:֑Zь?FSƊl\?i@9 <>̩#Տߘ]l\ s?BZBDȕ݇k2PU}l\q3h^ ϞOasυϗϩS#ѣn>x8Bި9ڞMh #sUx=BT.< @?F %婢?2#.5>_9K 罰5L]!g6L}MJA?**=k}1TӦy?F'sO=/ge [A\{9M?F>Y=k X*e S(N $9-g[?ow-$.Tzv )\$?3߇Nx'fl\|z $M}~' /IpLֵ?F˦6k]zFq&>}w^YM|Ԅ?F<']?0#Jh#h /xIkHw1]ǥK]P|G>q2:|@_3Y 1I l\"_@=H?` @/?$?6?H?U //////1bJx?FL!E#ԇ}Se γybE?FcN} Ɗx3c }CRD80}>~^ϝ{ª\3s./M?F^Y⣝iϟʁi{s_4I[H?FJ8oxJJϑ+ϊUϰʉhFq$ >LT ]~~[4xYpŬ0Ex}LYTfxy."ooo#5yP?Fឍ~QTJZvW9 ?FڡGSۈrQ5|橁5ǤˑIGYh($Qw~5tjl@fxF# -?Qy?FOT ʼn@-1ş9Ыva*S?FR?fB V1?ݰف]沛tk{Τ/Yv{ʙޅ݉}1MZѰb$%7I[my -}\SU=cx|[)T*Oў}|d.KnޟR{nIxߒΫ}@B j-2YpM:H`FZT=e-B~yh,?/߯}%/ASew߉ߞyf}x0 Q>zsg=ڟUbw;IS ڀ>3LI~ goSy?~wib`W?YM7+mɼ7V9H3pvsSs/M/}~iV9@gM!&K]oZBdPeF]ʼY mV6| ig5 ח9]#.Sp_xuT&~?Ftm*ԍ`_rYHp O~![ިKmU|ױ.jriwo /F'gy|yrqFYH?߁h}FUϯz5ꞵ,WCˑB`Ԟ`H}sU2w~xoӄڭfDjt! J Hna)(J6w~nv؜p oo1oCo)______ֹ2JI`H~Ͱ f tɾ4EwߗMMpW[̵yط]?F3,棣SLLh ?mu`Xr}Q͘~?Dp);M_ *ֹci+!c) x+Ҳ{&3oٶ[i?FqTƊF,]H'^ <Əٶ H;z0=}F,XQʮ}7$zLٶzTǬGWF,4$ZgsRF{^ j4Z>~[T-G^8?Fz^.H ~L?%aasS Xޟno#ά:HTFF}6CF J'bO[;| ~^-!3EWiKYYx4?F -t[|L?r69BMj-1(?FN^4>Nsz9ET~.Q^=#94o4>ݠ_J? H~&U7`dU \(5Vm H FۄܱykyU} қh?nBN 2DV3ïկ IQY1Q( &&] 6P^JY8}Soj11Qً;]n;P^u񟷱g_ G!.H?Fh\<;Qq7B1b;Y|#Kk{[oY??cgCo;M_q4' IsЪN =͐\h \[Vmq:Dyxj?FYi&?Gbx)U͖֒z/^bA/ F5f;d#\EH{=Xj|8%5 1C Id匽 轫 (Z-q͜DD D58|UBn FxV5?F+A玕R\ȚE Fk$!2 +\HC.P?%ҧnP A @6D .THD: # h0>Th]]9 MP IAUFOѬ?FA^?FMOh?F9d*C?P68R?>$y> n JuM{` ?e ;  Su*Jt'  q3 Pmta{Z_ebN3.JvH?zhǿUv?vh e:Z}.b%{>JU2N?贁N[?)@M#J&A@wbI(bW)U+U+U&J π(<!lJ ?%P?AO"d+b Y)(u/%B #A1A15 `?Co}p rigTwt c)x0]2`09x0Mp0]cn0osh0fv2Rg1rj01av0ih0Wn.x0 l0U n8s2e0en05v0d0:1PY>4#-#,M3 M7#(p0b59 DFXGdG'/2WUA;F%!$a dF$&#l,&}>UhET ]]9 M qAUFr ?F l?F?Fg!ِ3u?P6 L}>   h,(JuM`l? Ek"k6HuZJtW  7OttG97i;y4v'ʚc>JU #<H!-A?^% ?AJb^ @bbbz #"b#$"b$J 2zGz?<@M7#J&(&$%" ;'"*@!EA#5 hy<=A#B115 `?CopyrigT}tk(c)kW20@9kM@]c@os0fBR1r05Aa@i0Wn.k ] lP@U HsTBe(@e@5vf@dH@/"148#-,?3 7#9 i b59 6XGG'0U?RF!y$0 F ZlJ]Uhz 1hy1!5(`A#FB1?F#)Q PV?d"cP΄ ťc(Mb(8"J! jo0o Cƣ8-$DqaE4d\b@d Kc;d .#8磵g.p~?FTHR?PV7 /!i q7ikWPV]DT#3 AMT 5AUFr ?F l?F?Fg!ِ3?P6 .6> JuM` ?IKu8Jt5  Otyt_G97iͅ;y4vʚIA>JU5 J#2zGz?@MJ/&A@bm#ztc a#b{(Wb)+&B#!!5 `?CopyrigTt (c) 0W2009 0M0]c0os f 2R!r 61a 0i n. 0 AUlQ0 8sU2e)0e0vg0dI0!$[#8"8"?6 ?A$/%-/# '6&iie59 &XG}'0UBi6!40I iF}JlJ(>UhiI !!AaR4 Rx J:j]ݿb~)_.S buBqVy3'?F(ËE?P,a,[ /A1YQYV4hA.xeAR +3l4xa@ R#Bsoogrb5%UGD  3 h0TdYYB UFW)eK?FI{Ä?F9n;*Y?FI_tnw?P} tf  f0 D Xu` ?00DDXXl)uvts  < %t"F0" %'ӿp %,'qdX¤U!!'"`?CopyrigPt (c) 2\09 M c oKs f"!r !a i n. Al0 (s 2e e v0d0s"#S$|#B-#l# '?Rw=#&|#466 2|$Z##0U23@{& ^5 FX`7Th]]9 M]AUF?FMHR  h$ JuM` ? 3YY.KuHJtE  0oI~$tq/袋[Dmk.IQ>JU2N贁NK[?<@M#J+&A@b](bk)i+i+i&Jy%#7 ?3#) ?Ac"i+h//%*B%#U1U15 `?Cop rigTt (c)02`090M0c0o%s|0f2{1r~01ua0i|0n.0_ Al0 8Us2e0e0v0 d0"N1YR4#--0" a7&%0bE9 XFX'xG'0UB&!$$a xFJlJ]Uhzs1Z!1(`%#FXxcӔ?F>D [TQ>Q) R "J:!l?FQXP\C\6m/m$BQYsZ_?FFt%7?F{hUuW{ ]3\+B MTT0TDiQ>8aQ4? /Ȟv/x i[? 
[Unrecoverable binary data: this span appears to be embedded Microsoft Visio drawing/stencil content (network-diagram shapes such as "web server", "FTP server", "computer", and "distributed network") stored inside the tarball; it cannot be rendered as text and is omitted here.]
?@?d2tR%߀@8BoR4̧r3cu>XRVSwSy?@r$Wè ;m`X&¨r+J^j{/qO@ [jrPͦ 2XQqE>L?F,D?@ cK'ةhzq 6ѱn|3wsW\ۉ>53lҿ侨gnxP?Fa"&~̜2Ah~sJN:@8PKNWM?@0R?Fw+PuP~y`~dg~~s̜ʪ}Bk3a ,?|IF%̜O,nA3O"?̜g'R S;x+#]Ǯ.uǥ8nP\ {r|%ʿs?Fcj˰?@3?FJ?Poy=V(nxW~u3vk;*G!v+u~ݏֻA_ ^=ů׬{5oGorb&Q"P C.?@M:P̈́} ?PNWLKz Fw^=g%fuԧ!6&{]JVbGᢤ oQo׬Qhzrmٿdlƣn@"N@0?@NPXcC OwPKMgBlVܖF{VmVsOS3E_dNF+P?F_iVJ͸?@9;+0}0'Z,H3wlTG5EnOe{uF Fv=/<~R2d0Pm%s~? MvO1; ܿ;t l h-`w?F?6z?Ϋ9W7FOQ1}M4ڔC͏&ِ1Wi!~RѪV? el 3~]_>-#O &2"ܿ#˕G GWAS?@nA8 ?1ІCѵzT]4-'>T7Fd6Ĵ̏2q#~R|Rl uްJREB'+-E$RߎD~ CXн-C-X-]bd%R| j#$ 4-խ@c&Rf^+ !̠Bd% c~4-Rꕳ@ 'R[\e'? }ķp)~ s"ޗ=?]& as(R lM?FAo .bLOMQs)R ^D^*%6VIrV*R2T?F8]X^*a,eLʴ2ͭ+R ug?FdpA1DݸoL,R;0 W%r;p-RH)?F76˸ LSa@ .RPe6˜R|)yϊ&Lؾ1/|EԈ1f?FFX ر}+ ķ&OjuLīt[q>aJ.j&S"_10|*?F `Jԭ%1+.nNMcfdilFU1i1}m&H F$w|!bt1|q59>?F>FK׀*F/BIL{Hi<[cԍ(=Pї!32$f ˂}sD|2k!f{30%_ʣz?T `c`Tw4G,~LDd|`pta?FVynZ{~~-K6~Ka0?F>&Dw@}|%+)~K-r;7F?FTa|E׻LS)`qA8|}FSP`M{ r?FhPuS@/}Kdf1}-U{Ls@+C9\}K;:*N}O'| ݇Uaqyli!&,$e?@hu?FA3]{ӿP jBނ>\'xk|OVO;|Pq2.?@`Ȩ?Fn |%c_Wx ~)j |A LyU{PbE__<|*&G&?F4J?@2s+M{0xl@ף9,͟S2n?&Yۏo4=|4P `RZC?@b* o0Z d֣r~Pz;'Uo};}לSU>T|m?F3|TsDܷ[ڣ̭?|X?F _PМؓ?@1}v򅗫0z5,7-i~ dsjќD;gcZT#dfN:6RDž@2]Kc$Lŏ0r ڝ 23E솤Dʟ佴/AiऑTAX'0Ugœ 2UGD # hz0TdYYBIU@zUϰ?Fz^?@#ܡ,?Fbݱ?P} t`  ?t#H$IpZίP!::8vgpqu B`Ufpp5uU!!`?CopyrigPt (c)J 2\09J MB c@ o%s: fH"9!r< u!uaH i: n.J _ Al @(Us"eh e@ v d  #S$#- #l# '?R= # &#24 6 6 2$ZB# #0U2 3@& a#3~  #pH1%4GicP#5 6X'.EG'4 6%!4] EFYJ_dH >;s w2mUqCU_ܾF8[ # VOB UOO ~d8ouo@+_xo[sGU, f ?   X Hm xd ? F֛  6hQ ,S   7( NX@ G _U# *9( <9X . VQb1 xS@4 /Z 7$\9 (XM G&UFD  h(^TYYBBUFjZ?F~??x<F BP(?P } kXt BW66 W TTTTT9ȅH?Q?G ?B@L&ɯd2?"-3(?\.E(#. sUk!i&/J( G 0gB`.email,SNM,serv 2,co puT.3d s.r b 4d n 0tw0rk!e^| (QSG& q ?%7I??? wpwp pqwqqqwqwq}pqqqpppw,wwwqqgawqp~wwww;yDrag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aabjZֿ~??cN׿ mF1?? m{V↑T*J?@ ?UG DF P# h @T(PYY# [U@jZ?@~?tP} u` ?u#2>D>HX DHlHYHH EAEdR9.&؍>DDXXllRu'9b"",'US2&5/4&3 ?"'4Wb0{['`:[1#2!6"d^Q_9$7&9"/L6 2&!'ݯ>o@ÿ@qx@?@-pB?wDmF03Bu9@`҈-1Bu @r#CC5 ",CAB2Cm9u`u `"[ b!u`/3Qy1y7B`?CopyrigPt0(c)020P90MPc.PosPfRQrPaaPiPn 0Al ` Xs$bePePv6`d"/`VPs_SBN cPm_!#5P8k` 7Fay1&3db4&5܀MB=/3z6&3d66 2&4#/30U1r3]%ge<^AE U#"&5m Hq&5twV5t{3t{VEt{t{y {etd{|tY wmda*V_BTAJa$Ja&203V21@3 2MVB3Z1125tb6 ta$taC|èEhhPÓnQ* ϔ8ܔAPꓰMB111;@cqcqba>*Ab1%EQF^.1XA#8dTG`XQ"y`])r` M`WnuPaPt5arFaiQ` WEqiPm6`nPvd{V5QX]1PRd*ơ ]`ue`bͯ߯@30! P`r@RdSH@ȱ]C5D6bQ۱eʿܿ{"\ ϬAP9a:B`諸75^ϖϬ% S:bi`#aweT ߬ぶLPchCekT}ߏBalB`i`g̨9@߮,RPoڂ|EZ\@ =phDBU T)?ar0TPpVrKB` B`sR;IY`A Y`Q@;bmr:PeExab!gD"qQr2w`ijJSCC ` TsR\͒kXV` ?ON`1ڒPN`SbY`(|~PEe`a 蘒Z Nwbk4a+.2`\d: IpadB`5cwϯJ)nĺskAx/&/ImTat f?a ];a£rl/kf//ő R pa/a//((/&?4/M`Ci?5?'+ pu"tPf!?BkQ OO"Ȁ\=O_OOkv(OOKHdBQ#CġHC` Wc!wV_ _ C]0ah_z_yi_'`M@m@_2d__vkpoo.KOB%fy "(p5j-T^u uqE珽 ST1.i1xACw>門ÿތc$bel erłłՆ߂^B2`RoUK=e2jrb ]]br-@O"Wyrl]u``dvx~jc*<Q_c--4A4Bj(6yB1B1*P{£qqqDqcҴzQaig6``!-uA`}ż @avh{zI5_}v&ﲄПF3s͵NSYFvy߂f!@v>8M{>6M>Ocլra~g,@wBQ&8P\UqUq}zʷ@ifz}!` %P&4odAA첱UqNNTWORKJeH0P\1RRISQBito̔3GQM'0UƗ?Ϳ10WƕHD # =hj0>T h]]9  5AU@jZ?@͂T??@x?SF?P6 jt`  o?tk[>(A Vu aSuo.]&A]  *ԮJU2zGzt?@L#&A@bA#z7 5#1bO(b])j+$j&[ "*(w "?& ?AU$l/%B115 `?CopyrigTt (cu)Q02`09Q0uMI0cG0osA0IfO2@1rC0|1aO0iA0n.Q0 WAl0 G8s2Ueo0eG0v0d0"1E]4MŤ-&3 &7&l>]Uhz811U!J!u6?@+\.>djIIa? >"(EǑ I%K9E 5EU . K ;;5QIWEEE@ _KnSA?8@VBu?@DR> ?Nk"\6`ObmMM\j=am {:e 뇿P> #? D"? 5-gg?j47!B{oogrK57!ibDAbE9 MX7'0U~rN!R$a)5 5vIzHD # =hj0>T h]]9  qAU@;?@J5K?@xEL?6ѿP6 t`  bm&Qo?t~@@L]WI,#u auoL> @JU2zGzt?@L?&A@b}#zts q#1b(Wb)+&[9#H",ɺ (;H"?&L ?A$ /R%9#V1V15 `?CopyrigTt (c)02`090M0c0os}0f2|1r01a0i}0n.0 Al0 8s2e0e0v0d0"O1E]S4Mb-9#b3 b7I&l>]Uhzt1.AF!JN!n.%-+0rdjV1ax߃)(@K-&?@ mO"> ?Nk"PL[Majnha_:e 뇿P> #? 
BK=o? n-g?j4s!B{__Wr5s!D8E6V 6?wj_|_ ^XE.U6Vz T6?@ @ d_mOoT ݬ\VyWxRkrVo_bVh*g:n"'UaEjz36>C~N>?Bn]_zGvN7ViDAbE9 MX+G'0UPN!R4ae5 MaHD # =hj0>T h]]9  U@jpiH?@^?@$?@]F?P6 t`  !a&ao?t,F=۳=u;u auo>J2zGzwt?@ʐLAw@b#z1b](b!).+I.&[-m(?0& ?A$I0/Q%!!5 `?CopyrigTt (c)02`090M 0c 0os0f21r0@1a0i0n.0 Al[0 8s_2e30e 0vq0dS0T"!E]$M-# %'&l>]Uhz!1!J@% djc~YZ(_N9zOMGE%_EH$SwO"K ңHE ^&_8_WE_OqOHOOi&DAbPE9 MX7'0UUfb>!$a % f1jHD # Uh4>T#]]9 M5AU@jZ?@L|+?@?@({~?P6 JuM` ?u JYt )t7zEA2&L_D?. > >tJU2q K?=@MJ&A@-b9(+bG)T+T&J[#"#a?& ?AH?/`)#115 {`?CopyrigTt (c);020G09;0M30c10os+0f92*1r-0f1ai+0n.;0 Al0 18s2eY0e10v0dy0z"!4B#e-?3 7)& b 0K8>U# ]A]A "1!ъ!PzQ <%OYAJEU . K;;\%W%E@($L[.FX5Eai%?@ 6?wL;] W-\TB⤲8?W&SA?@ @ ?Nk"L'`bmMW_Ye[7 VK=F: [jr ? D"? X?ZvD?a@Boogr5;!QEؿx]SiLO MLXa~WOTZiAle59 XG"'0Ujc&N!$05 .HD # =hj0>T h]]9  IAU@K?@OgVb?@O?@X5?PJW?>t`  ^Z?t#ə.b?T@[ u?/CG<_Z+u u$ >  J JpU?43f<!.&P?b3bbfz@1bAe&bq$d 2zGzt.#@LT"&r(~&i$q)(+}'i"~*#81815 `?Co yrigTt (c)o02`09o0Mg0ce0os_0fm2^1r 1am0i_0n.o0 Al0 e8s2e0ee0v0d0"11E]54M-#D3 %D7.&lUh#7 V1#Ai!J# F/?B1B  VRiAAbE9 MX G{W'0U*R2N!I$a {VZUHPD  # #>h0JTpERI MU@u6?@ic{g&?@%r`5B?@n_F:Is?HNtQ`  II?t#\  _ii$"`kNu  iu%wj>M JQ T       "@&&JNU2"?H@Qs#N&A@b(bU)++&RNp}#>H1#19 ?#bbbz| R&/5R}#11`?Co0yrig\t (c)02h090M0c0os0f21r0Aa0i0n.0 Al!@ 8s%Be0e0v7@d@21a4t#-}#3 %746lJ#$JUp5 3 M19 1!q(!@oElrkRt# 8$lcD?@숒@)?@P?P"EkU |1\iOT/irvs}i>`_,of :m ,M ?? S&d2? *b?r4B!B oo1gr #5!>%(EQOOOO__@Ac1_C_ D4ևS]_o_zRX,{_,__ #R__ (YQoSo+o=osd {oEeoooooo@zUy{* hDV!VV(I X"c7I!pg bf'iMjSE t%XyG|'0UŢŞN!O4i |UGD # hz0TdYYBU@(%)?@xg^?@0߯H?@s7Uݹ?P} t` 87?t#_)TeΦu ]ukv  ̝L&UUU&I&3 U!!`?CopyrigPt _(c) 2\W09 M c os f"!r 1a i n}. Al00U (s42e0e uvF0d(0'HS$#-## 'I?k=#ǒ 2j$B##0U'BM3@&@F0Yk} #HA%DKb5DK3DK5DKED$!GiAZG5(|IDFX7yW'&Dŭ6!4YA yVZHD  # =hj4>T!]]>#AU@0߯H?@z)e??@˞9q?P6 t`z  ?5t# rXѮu)Nu;):O]ע0 &> # " "+"+"+"+" $+"T-J[U7 ??& bA@ 2b4-#2zGzt#+@ Li68!6 ?}; 72U!:B#AA5 5`?CopyrigTt(c)20F@9M2@c.0@os*@f8B)Ar,@eAa8@i*@n. Al@ 0HsBeX@ej0@v@dx@)21"DM-,;?C G& l>]UhzAE!A#!ְ1J`#@{&pz?@Uy{?NT,dKj&b&R&M/8T9K-q?x6@F?@dG0 >Nk"?YE*]\LT_hKjfGD&Qhj iU3UGe&Y_`?@/b%)h5oWKoWo;KTod}l$roҥ`nE2K?@ˌ%4G{?@ -q eZ̿R^{|\6?vodmP}l>MBe 9 N 1iS? 1?g]rl#51:O("4ga@Sq(UE^@O|ǐ[`Ρa g v2rA @3ijD&}l迲aqOz o?#WHZ'Qu=*[^N?@]%P?@O&@ c?@аM<n ?` eb0)\(Pfodm}nv}l?V5(?q$Ix[cN#W{#iA&QeE9 MX|''0UdN!0 /HD  # =hj0T h]]9 #]AU@}%?@R$,?@6#V?@c_3֡w?P6 u` ?u#t  3zM%t3ws%.<%H._'B> #PJU2N贁Nw[?@ʐL+&A@wbu](i+bk)%+&p%#4">9"<5!?& ?M#bbb͓z$ p&"/%T%#L1L15 w`?Co yrigTt (c)02`090M{0cy0oKss0f2r1r 1a0is0n.0 Al0 y8s2e0ey0v0d0"E1E(]I4M-%#X3 X7&l*>(UhE j1#$Ac!J`%#@5e$_F??!foA3nBM( V?@J-RHQ??@S!R?P| llHL6VLD5E+v;*AʙH: 8snP%$? k?Q? \?f4_!_t_Wr5_! R,U3OC@IL?@4~Kz?P)Oy޾ T!]]>#IAU@܋?@2Oa>?@#r`5B?@*ݷ?P6 t`  >B?\oEt# ŖoZu 0Nu;Y:|8>$# JpU>##<?2& ?5bbbz@b#ANi&bu$'h 2N贁N[2$@ LX"D&v(&m$ :;T'm"*B#d1d15 5`?Co yrigTt (c)020090M0c0os0f21r 1a0i0n.0 Al0 8s2e0e0v0d0"]1"a4M-/p3 p72&l>0>Uhh= 1#!eJ`#&@NTM\jh1B,RBM(!JoUy{O_ e@ So4(DF^Tw?@]֝ä ?Nk"?w_KT\T+\(Kj?ӷjmlhj -`\TPQE\Uu?i,St_o u~!XEO9= T?@L;uQ_Pm"rA @3sjQ1lğ= GoYniAlePE9 MX'w'0UrJ^N!M$|A vzHD  # =hj4>T!]]>#IAU@܋?@j?@2r`5B?@^ݷ?P6 t`  >B?\oEt# 6ΠZu 0Nu;Y~:#&}>0m8>  PJU2N贁Nwk?@ eL&A( 5bM(b[)h+h+$h&#[-M$#?& ?b$Z*'*B#O1O15 5`?CopyrigTt (c)02U0090M~0c|0osv0f2u1rx01a0iv0n}.0 Al0U |8s2e0e|05v0d0"H1P"L4M-/[3 [7&l>0>Uhh= m1#!S!J`#*&@PUy{_?NTMjC1gc RVMʣ(D#F ![?@"$. ?Nk"?+IKTLTcKjRlhj p= (*!)i,SO2Kes9T;QEGUFw6qX}__TM7fTZT\Dž}I_P^52O,ES"rA 0S# Es3iAlePE9 MX$Gxw'0UrJIN!4gA xvzHD  # =hj4>T!]]>#!AU@܋?@c_v?@#r`5B?@*ݷ?P6 t`  >B?\oEt# ap'EZu 0Nu;Y:|$># BJpU>#͙;? 
& ?5bbbz@}bAA& K&@ 2zGzt?"K@ L0"&N(bM)+TK/8Z*B<1<15 5`?CoyrigTt (c)s02009s0Mk0ci0osc0fq2b1r1aq0ic0n.s0 Al0 i8s2e0ei0v0d0b"5194M-t/H3 H7T &l>(>UhE=3Z1#E!J!@ENTMjN@1h(NoUy{OO e@ o)4(DF^Tw?@]֝ ?Nk_"?'_KD3\T+\(Kjjmlhj P\DiAlesI(=MXsG_'0Uzb6N-!%$0 sFEjHD  # =hj8>T! ]]>#!AU@܋?@*?@#r`5BW?@w?P6 t`  >B? \Et#տ|u l4RQu?]>I$>I# JpU>͙#<?& ?9bbbz@b#AE&bQ$$D 2zG[zt#@ L4"&R(^&I$Q)+]'I"^*Bh{.M#To1o15 9`?CoyrigTtk(c)k2009kM0c0os0f21r1a0i0n.k Al0 8s2e0e0v@d0f"h1l4M-:?{3 %{7&l>(>UhE= `51#I!J49fAn^ R)X f"B!Jui,SG?RSlOS u~!N]8-19= T?@L;uQ ?Nk"?YVTQRXQOnğ/=i n (\ wTiAaePE9 MXLg'0UbZiN!)$0 Lf`jUGD # hz4TYYT#BqU@ t2?@D&d2?@$p?@|>׺?P} u` ?u#Lk/89L<`(.88LL`t m۶m3tبפUC~#9#0Up"?҃@/&9#!!!G*`?CopyrigPt (c) 20 9 M c. os f"!r 1a i n. Al&0 (s*2e ej v<0d0'"@!$0#B-9## 'w#^=9$&0#466 2R0$e#$^ P#30%T% 9#LHLA0%XDcKT%XD8cGg&daO^'BTcAcA0"-()Ca0%g2 ATB_6Si@haX5 &X~7aW'o$Ŵ6!40 aVuZHD: # =h4>TB#]]9 TU@$p?@|>W?P6 u` ?Mu t  )At72(L>REJt2q?=@LA@-bb )+&񆤍[Z#?m& ?A/$)R!!5 {`?CopyrigTt:(c):20 09:M c. os f"!r *1ai n.: AlE0 (sI2e0ej v[0d=0>"@!$MBQ-?# 'm&l>0>U#3 C!*; #p _ &J`@4?g8oyƏ C?,'ߑK FMD(9IMBIGE%UZ8!<A% fjHD: # =h4>TB#]]9 TU@$p?@|>?׸?P6 u` ?u4 t  )t7J)2(LAEJ޵2q?=@L|A@y7b #z]-b( J(2+2&*;q!?& ?A"#&$bb/bI4/U% 1 15 {`?CopyrigTt:(c):20N09:M:0c80os20f@211r40m1a@0i20n.: Al0 88s2e`0e80v0d0X"14MDi!EY-?3 7&Rl>0>U#3 C)1 #p| U%J`@g8oyƩ C2,'ߑB %VM^(9IMBIG3U%CUY8#M)U@$p?@|>?tP >tA`  ?t# >b24 C>u\u!.M# *H>U2llf?@9 >A@b-(C-(b;)W+W&>yg–#?& ?3"B-*X*G,e'񩣕p%1%1`?Cop _rigXt_(c)2dW09MT0cR0osL0fZ2K1rN01aZ0iL0n}. Al0U R8s2ez0eR05v0d0}"1P^"4-,13 17&!Qi 0,fUl4#A5C1CU3! !R \box(<v9s?@^!?@sckz%?PD-DT! /[S@K0Lobp׻nxw_gBs 3CUy_@ xh ,J T  M#EMU@jZ?FH2)'*??@n>?P m>uA` o?u#VlMJl -<lA -PK#TdK2(.K<FZnf>t  t3|>b!F**JRuKJ>U2zGz?@9 ?#>O&Aw@b(Wb)+&r" !$NI#"@#W"6 "\M"b)2;'r",r"/I#11G#`?CopyrigXt (c)020090M0c0os0f21r01a0i0n.0 Al@ 8sBe0e0v-@d@7"BMI#3 76@"!"t1"=ElJ Ul4@AAC 9" =Q A(4#UA!=(^!#?@RXM(0 # g# '?DnMN:U~TR\u5i[HmvQ~X3^w='__ '-~XE:UU . S[ ;;h\gU:U@>wR9o\jZ|K5~XAjai%?@@dQ]^ W-h\_~T9;Tꤲ8?@@)#SpSA?@jK@ ?Nk"V`UY5`bmMh\ly[7 / *?<~X6YxS?LR\cpTqUQ g8oy1 cJQ6YD}uϯ?^&S[vm۶modUQ6Y ~ÀC8[H`K?)K?M~X U@:_L_^_p_DA(G!EM_"_]u?@S.~{>O]UM~1xQDuMuRw?@lփ?@2'>w9Z^|)Sf|u?U35~8q3Du$-@8S,/: CYy$'eisa"+ziM̱iٙI@%!XRGٗ'0UL6t5 0f ٖ_H>x;s \<6x,c @LaFQ #GS XOB UT ~dU @+X![sGe|8 P` (id ch Ak 6o r F֛XWv Hx 5"~ 8#  86h NP W R 3U m 9!UFD  h(^TYYBވUFjZ?F~??x<F BPG(?P } XZ B66  TTTTTTd &!&Y"/&%'D/+6,k&/TȅHBU??/&:& ?B#@L&d2?=8\.Y+8 >sUQ1O6Pq?)(:  0%$gB`.man0ge]m0ntt s0rv2,co0pu03di0t0ibDdt n0twN@rk1ea| 8(SGv6 ߃m `???(:Lpppp wpppp pqp~pqwwqwqwpwqpqqwwqqp pp w. wwwicwqwwwww}Drag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aabjZֿ~??cN׿ mF1??M7{V↑T*J?@ ?UG DF P# h @T(PYY# QU@jZ?@~??F?Pn} u{` ?Su#؍>>HX HlHYHH EO.LO4:UDDXXUllk'"&R",I'U25ĉ%4 ?"u'4b0{N[`:Q1#2T6"!^U9$"3&9"/L6 2&!'ݯ>e@ÿ@qn@?@-fB?mDcFԸ0)Bu/@`҈-1Bu@rCC 5GAB"2Cm9u`u `"[ b!u`%3Qo1o7B`?CopyrigPt0(c)020P90MPc.PosPfRQrPQaPiPn 0Al` XsbePePv,`d/`VPs_SBNcPm_!#5ab4P=`o13dX45CB^8=%3p63dz6z6 v24#%30U'rz3]ge<^AE@ U#5m Hq5twL5t{3t{LEt{t{y {et@!{|tYwm@Ada׃IUBT$1@a@a20)L2163jajaLBA3P1125jb6w 7C |hhE)*dQ8ŔIQIQA11CB!;1XrYqea>!bz1AXE:FG$1 8dT=`XQj2`]r` M`nuPaPt+arp` ?bup` B=ݧ`S,t']p"51m-݁<9orm'Qh>qݏBKT'.mt'xydf@>p{ÿ#ГobZpq [ܒrr$avvvrs{c"`RQoxi16 e5`(bb_ mmbr@ uP`TaS@u._u'U1U2A-b[aahaDauhAaYg6s(P` -u*`}`ᕁ }`ax3R5Y_dn&bpSF3s͵NSYJQbrVC@ړN-e8%dE%'XLqW,@rb:*<Տ`}aahʀYVYn ` %P&S41Ģʢa`NEWOR+Kn2H P}!RȱRISQĢʢĢ ʦio0oŌh^A%'0Uo?č!0ÎoHD # =hj0>T h]]9  5AU@jZ?F͂T??Fx?SF?P6 jt`  o?tk[>(A Vu aSuo.]&A]  *ԮJU2zGzt?@L#&A@bA#z7 5#1bO(b])j+$j&[ "*(w "?& ?AU$l/%B115 `?CopyrigTt (cu)Q02`09Q0uMI0cG0osA0IfO2@1rC0|1aO0iA0n.Q0 WAl0 G8s2Ueo0eG0v0d0"1E]4MŤ-&3 &7&l>]Uhz811U!J!u6?F+\.>djIIa? 
>"(EǑ I%K9E 5EU . K ;;5QIWEEEF _KnSA?8@@Bu?FDR> ?Nk"\6`bmMM\j=am {:e P> #? D"? 5-g?j4B7!B{oogr57!ibDAbE9 MuX7_'0U~rN!$a)5 5vIzHD # =hj0>T h]]9  qAU@;?FI5K?@xEL?6ѿP6 t`  bm&Qo?t@@L]WI,#u aiuoL> JU2zGzt;?@L?&A@b}#zs q#1b(b)+$&[9#H",ɺ (wH"?& ?A$ /%9#BV1V15 `?CopyrigTt (cu)02`090uM0c0os}0If2|1r01a0i}0n.0 WAl0 8s2Ue0e0v0d0"O1E]S4MŤ-9#b3 b7&l>]Uhzt1.A!JN!m.%-+0djV1ax߃S(FJ-&?F mO"> ?Nk"L[Majnha_:e P> #? BK=o? n-?j4Bs!B{__Wr5s!D8E6V 6?wj_|_ ^XE.U@z T6?F @' d_mOoT \VyWxRkrVo_bVh*g:n"'UaEjz36>CN>?Bn}]zGvN7V iDAbE9 MX+G'0UJPN!4ae5 MaHD # =hj0>T h]]9  U@jpiH?F^?@$?F]F?P6 t`  !a&ao?t,F=۳=u;u auo>J2zGzwt?@ʐLAw@b#z1b](b!).+I.&[-m(?0& ?A$I0/Q%!!5 `?CopyrigTt (c)02`090M 0c 0os0f21r0@1a0i0n.0 Al[0 8s_2e30e 0vq0dS0T"!E]$M-# %'&l>]Uhz!1!JF% djc~YZ(_N9zOMGE%_EH$SwO"K ңHE ^&_8_WE_OqOHOOi&DAbPE9 MX7'0UUfb>!$a % f1jHD # Uh4>T#]]9 M5AU@jZ?FL|+?@?F({~?P6 JuM` ?u JYt )t7zEA2&L_D?. > >tJU2q K?=@MJ&A@-b9(+bG)T+T&J[#"#a?& ?AH?/`)#115 {`?CopyrigTt (c);020G09;0M30c10os+0f92*1r-0f1ai+0n.;0 Al0 18s2eY0e10v0dy0z"!4B#e-?3 7)& b 0K8>U# ]A]A "1!ъ!PzQ <%OYAJEU . K;;\%W%EF($L[.FX5Eai%?F 6?wL;] W-\TB⤲8?F@&SA?F @ ?Nk"L'`bmMW_Ye[ٯ7 VK=F: [jr ? D"? XZvD?ta@Boogr5;!QEx]SiLO MLGXa~WOTZiAle59 XG"'0Uc&N!$05 .HD # =hj0>T h]]9  IAU@K?FOgVb?FY%?Fwᳬ?PJW?>t`  ^Z?t#ə.E @x?r?/{4܂ G _Z+u u$ >  J JpU?43f<!.&P?b3bbfz@1bAe&bq$d 2zGzt.#@LT"&r(~&i$q)(+}'i"~*#81815 `?Co yrigTt (c)o02`09o0Mg0ce0os_0fm2^1r 1am0i_0n.o0 Al0 e8s2e0ee0v0d0"11E]54M-#D3 %D7.&lUh#7 V1#Ai!J# F/?B1B  VRiAAbE9 MX G{W'0U*R2N!I$a {VZUHPD  # #>h0JTpERI MU@u6?Fjc{g&?@%r`5B?Fm_F:Is?HNtQ`  II?t#]  _ii$!`kNu  iu%wj>M JQ T       "@&&JNU2"?H@Qs#N&A@b(bU)++&RNp}#>H1#19 ?#bbbz| R&/5R}#11`?Co0yrig\t (c)02h090M0c0os0f21r0Aa0i0n.0 Al!@ 8s%Be0e0v7@d@21a4t#-}#3 %746lJ#$JUp5 3 M19 1!q(!FnElrkRt# 8$kcD?@숒@)?FP?P"EkU |1\iOT/irvs}i>`_,of :m ,M ?? S&d2? *b?r4B!B oo1gr #5!>%(EQOOOO__FAc1_C_ D4ևS]_o_yRX+{_,__ #R__ (YQoSo+o=osd {oEeooooooFyUy{* hDVW(I X"c7I!pg bf'iMjSE t%XyG|'0UŢŞN!O4i |UGD # hz0TdYYBU@(%)?Fxg^?@0߯H?Fs7Uݹ?P} t` 87?t#_)TeΦu ]ukv  ̝L&UUU&I&3 U!!`?CopyrigPt _(c) 2\W09 M c os f"!r 1a i n}. Al00U (s42e0e uvF0d(0'HS$#-## 'I?k=#ǒ 2j$B##0U'BM3@&@F0Yk} #HA%DKb5DK3DK5DKED$!GiAZG5(|IDFX7yW'&Dŭ6!4YA yVZHD  # =hj4>T!]]>#AU@0߯H?Fz)e??F˞9q?P6 t`z  ?5t# rXѮu)Nu;):O]ע0 &> # " "+"+"+"+" $+"T-J[U7 ??& bA@ 2b4-#2zGzt#+@ Li68!6 ?}; 72U!:B#AA5 5`?CopyrigTt(c)20F@9M2@c.0@os*@f8B)Ar,@eAa8@i*@n. Al@ 0HsBeX@ej0@v@dx@)21"DM-,;?C G& l>]UhzAE!A#!ְ1J`#@{&pz?FUy{?NT,dKj&b&R&M/8T9K-q?F@F?FdG0 >Nk"?YE*]\LThKjfGD&Qhj ?iU3UGe&Y`?@/Jb)h5oWKoWo;èKod}l$ro`nE2K?Fˌ%4G{?F -q? eZR^{|\?6?vodmP}l>MBe 9 N 1iS? 1?grl#51:PO"4ga@Sq(UE^@O|ǐ[`ߡa v2rbA @3ijD&}laqOz o?#WHZ'Qu=*[^N?F]%P?@O&@ c?FаM<n ?` eb0)\(Pfodm}nv}l?V5(?q$Ix[cN#W{ #iA@&QeE9 MX''0UjdN!0 /HD  # =hj0T h]]9 #]AU@}%?FR$,?@6#V?Fc_3֡w?P6 u` ?u#t  3zM%t3ws%.<%H._'B> #PJU2N贁Nw[?@ʐL+&A@wbu](i+bk)%+&p%#4">9"<5!?& ?M#bbb͓z$ p&"/%T%#L1L15 w`?Co yrigTt (c)02`090M{0cy0oKss0f2r1r 1a0is0n.0 Al0 y8s2e0ey0v0d0"E1E(]I4M-%#X3 X7&l*>(UhE j1#$Ac!J`%#@5e$_F??!foA3nBM( V?FJ-R?@HQ??FS!R?P| llHL6VLD5E+v;*AůʙH: 8snP%$? k?Q? \?4_!__WrA5_! 
R,U3OC@IL?F4~Kz?P)Oy޾ T!]]>#IAU@܋?F2Oa>?@#r`5B?F*ݷ?P6 t`  >B?\oEt# ŖoZu 0Nu;Y:|8>$# JpU>##<?2& ?5bbbz@b#ANi&bu$'h 2N贁N[2$@ LX"D&v(&m$ :;T'm"*B#d1d15 5`?Co yrigTt (c)020090M0c0os0f21r 1a0i0n.0 Al0 8s2e0e0v0d0"]1"a4M-/p3 p72&l>0>Uhh= 1#!eJ`#&FNTM\jh1B,RBM(!JoUy{O_ e@ So4(D@^Tw?F]֝ä ?Nk"?w_KT\T+\_(Kjӷjmplhj -`\TPQE\Uui,St_o u~!ՠXEO9= T?FL?;uQ_Pm"rA @3sjQ1lğ=GoYniAleE9 MX|'w'0Ur^N!M$|A vzHD  # =hj4>T!]]>#IAU@܋?Fj?@2r`5B?F^ݷ?P6 t`  >B?\oEt# 6ΠZu 0Nu;Y~:#&}>0m8>  PJU2N贁Nwk?@ eL&A( 5bM(b[)h+h+$h&#[-M$#?& ?b$Z*'*B#O1O15 5`?CopyrigTt (c)02U0090M~0c|0osv0f2u1rx01a0iv0n}.0 Al0U |8s2e0e|05v0d0"H1P"L4M-/[3 [7&l>0>Uhh= m1#!S!J`#*&FPUy{_?NTMjC1gc RVMʣ(D#@ ![?F"$. ?Nk"W?IKTLTcKjRlhj p=' (*!)i,SO2Kes9T;QEGUFw6qX}__TM7fTZT\Dž}_P^52XOES"rA 0S#Es3iAleE9 MX|$Gxw'0UrIN!4gA xvzHD  # =hj4>T!]]>#!AU@܋?Fc_v?@#r`5B?F*ݷ?P6 t`  >B?\oEt# ap'EZu 0Nu;Y:|$># BJpU>#͙;? & ?5bbbz@}bAA& K&@ 2zGzt?"K@ L0"&N(bM)+TK/8Z*B<1<15 5`?CoyrigTt (c)s02009s0Mk0ci0osc0fq2b1r1aq0ic0n.s0 Al0 i8s2e0ei0v0d0b"5194M-t/H3 H7T &l>(>UhE=3Z1#E!J!FENTMjN@1h(NoUy{OO e@ o)4(D@^Tw?F]֝ä ?Nk"?'_KD3\T+\(Kj?ӷjmlhj P\ DiAlePsI=MXsG'0UzbZ6N!%$0 sFEjHD  # =hj8>T! ]]>#!AU@܋?F*?@#r`5BW?Fw?P6 t`  >B? \Et#տ|u l4RQu?]>I$>I# JpU>͙#<?& ?9bbbz@b#AE&bQ$$D 2zG[zt#@ L4"&R(^&I$Q)+]'I"^*Bh{.M#To1o15 9`?CoyrigTtk(c)k2009kM0c0os0f21r1a0i0n.k Al0 8s2e0e0v@d0f"h1l4M-:?{3 %{7&l>(>UhE= `51#I!J49fAn^ R)X f"B!Jui,SG?RSlOS u~!N]8-19= T?FL;uQ ?Nk"?YVTQRXQOnğ/=i n (\ wTiAaePE9 MXLg'0UbZiN!)$0 Lf`jUGD # hz4TYYT#B#U@f?FjK?@瘛?F1.?ٺ?P} u` ?ut#nW/8d`<Lg<`9ft99b<!9f"9#9f&9&'9f&(9(&)9<&.9P&P&/9n&*9&(.88LL``tt&&&&(&(&J<&<&P/b(n&n&&&t : zBtBIE GeroBE$G?3龅U~0UB?@4sF}CAAJSB`?CopyrigPt(}c; 200P9MPcPo%sPf"RQrPOQUa"PiPnO WAljP XsnRUeBPePvPdO"kBADtCB-}CC EGC=}DFtC dVV $RtDe\^RdYQ qd*a!gtEd8HjUdd{|dh{Sft{ah{ah{ah{ah{ah{ahkah!{ah!{tah(!{edh ,J T  M#EMU@jZ?F??Fĩ!3ܾ?P m>uA` o?u#jpMp -<p=PKA -dK#hxK2(.K<FZn>t,  t"lgyPb;!Fi&d2u!J>U2zGz?@9 g#>w&A@b(b)+&" !$NFq#2h#".6 "M"b)Z;'"i,"/*q#11o#`?CopyrigXt (c)020@90M0c.0os0f21r0$Aa0i0n.0 Al?@ 8sCBe@ej0vU@d7@_"ŤBMq#3 b7.6h"P&0"tY"eElJpYUl@{OX(QYuq'`0\q ٬\:SضX,Qp? X̏ ,MgX0QYŞ6ex`WYBaZuU4QY[S FP\ߴ_\16 4X8Qo џ 4WTX&tqYˣhMGX]}/s]GwXHQYt@tԚ?Fu\ɢ|~\oDlQLQ¿԰:G޿ ˘~XPU~U%BUT__ZAA(GE]T_f_[}u?Fʢ \9=Ŭ\G_ԑuQuucߟ?@ml?F=^`> P+CzOOߎf] B]$嗿<Љq3ux5@4Sa/JYԹ6TfJ'e0k?iMiIh%1X|zG'0U]1.650/ (HD # =h#4>T#3 >U@Ac?FO|_D?@ !?Foצ ?P6 u` ?u >,2t  f?5dLtW1qcRH_Axcl^?kJ2N贁Nk?W@ LA@b ++b )'+'&񉄉ԑ"?& ?A' !zR @$)$)/J%B 31315 `?CopyrigTt (wc)j020v09j0Mb0c`0o%sZ0fh2Y1r\01uah0iZ0n.j0_ Al0 `8Us2e0e`0v0 d0,1(04M-_/?3K ?7&l> 0>Uhh= Q1Zb!!J`F9A,2 %,VMS(Jt= OMmѱQ~%+UD_!OSUE^8__ {GzoXb52O;"rAQ 0#Qc3iAleE9 MXG'0Ub-N!R$KAB5 fjHD # =hj4>T]]>U@CC?Fd?@?FԬVq?P6 u` ?)u hAk,2t  Rg(LtW\jwS}ժcRV]clMJ2N贁Nk?@ LA@]b +Wb )'+'&𩆑;"?& ?A' !zR @$I)$)/J%B31315 `?CopyrigTt (c)j020v09j0Mb0c`0oKsZ0fh2Y1r\01ah0iZ0n.j0 Al0 `8s2e0e`0v0d0,1P04M-_/?3 ?7&l>0>Uhh= Q1b!!J`Fuz T ʼ)UB ,VMS(UjExK?DZ/`D%Q~%+U\bHNOkTI?$i|UE+Ugr?F9Y7TRY_DU52OE"rA 0#Qc3iAleE9 MXG}'0Ub-N!$KAB5 fjHD # =hj4>T]]>!AU@0M,?F (Q!?@2"\?F21기?P6 mu` o?u $lAl 1@,2<UJt  gttqzh&vJU? & ?Ab5"z@AA 6$>)ub ^(6 D#@ D2N߁Nk?@ L&^(+bl)+y&s,d"y*BB[1[15 `?CopyrigTt (c])020090uM0c0os0If21r01a0i0n.0 WAl0 8s2Ue0e0v0d0T1X4M-/g3 g7 &l>0>Uhh= y1!d!J`@"G3?F9;<#R<M(!A|jL, I%% [F?gGQ%SUl"F?F7K^5˖Q YEQCUESUF]k_T Y0ħ[HOMTb^?FWȬs рY+ d"rA 0#c<Fl~rl r!}iA2gE9 MX0G'0U*arUN!%$sAj5 v,zHDJ # =h#4>T#3 >BU@m?Fyg!B?@u>71?F!A0d6u?P6 u` ?iu >k,2t  eypLtWVcRB )clOODJ? ?Ab "z@A $)Mb 6( # D2N߁Nk?@ L&6(+bD)+Q&K,J<"Q*31315 `?CopyrigTt (c)j020v09j0Mb0c`0osZ0fh2Y1r\01ah0iZ0n.j0 Al0 `8s2e0e`0v0d0,104MB-k/?3 ?7l>0>UhhE= Q1n!T#3 >U@rڌ?FlUM?@MX?FI?P6 u` ?u >,2t  ?ou mLtWWcR$_˟clɽJw? 
?Ab "zG@A $)]Mb6( # 2N贁Nk?V@ L\&6(bD)+QQ&K,<"Q*B31315 `?CopyrigTt (c)j020v09j0Mb0c`0oKsZ0fh2Y1r\01ah0iZ0n.j0 Al0 `8s2e0e`0v0d0,1P04M-k/?3 ?7l>0>Uhh= Q1n!T]]>TU@n2L?FIҮ?@y“:K?F[dg?P6 u` ?Mu [>,2t  K4LtWc4cR^rCcl{STJ?0 ?Ab "z@A $Һ)Mb6(@ # 2N贁Nk?@ L&6(bD)+Q&K,<"Q*T31315 `?CopyrigTt (c)j020v09j0Mb0c.`0osZ0fh2Y1r\01ah0iZ0n.j0 Al0 `8s2e0ej`0v0d0,1E04M-,k/?3 ?7l>#aUh[3 A Q1n!sXE/U8όKBI .O"tW/U{ؔG\b# aio?p0sX2O;<"rA (0#c3q_(RAeoiAVAeP$59 MXG'0U_r-N!0 B5 v*zHDJ # =h#4>T(#3 >U@ҙK?F<;H?@jB?FEk9?Pn6 u{` ?Su  A,2t  ןG}LtWҤcRMNcl?\zՆJ;? ?Ab "z@A $).Mb6( # 酑2N贁Nk?+@ L&6(bD)+(Q&K,<"Q*31315 `?CopyrigTt (c)j020v09j0Mb0c`0oKsZ0fh2Y1r\01ah0iZ0n.j0 Al0 `8s2e0e`0v0d0,1E(04M-k/?3K ?7l> 0>Uhh= Q1Zn!T]]>!AU@ꪓH|?Fto?@P"?F%dݳ?P6 mu` o?u $!>2 1@,2<UJt  ghttb뽘)ub^(6 D#@ 2N迴Nk?@ L&^(Wbl)+y&Ts,d"y*B[1[15 `?CopyrigTt (c)020090M0c0os0f21r01a0i0n.0 Al0 8s2e0e0v0d0T1X4M-/g3 %g7 &l>#aUh[3 A]@ y1!d!J`&F?ޓ, W_rv J<'R<M(!kuϢ?FK!TBKQ%WUZ&L#\ = GZɛXEWU8(?/&?F9I'\]\WHOC@}?F,^ рYZ+ d"rA 0#c֪J~V(L ~A ׻XWU8TL)?Fj;KL~\M2aiA~AeE9 MX0G}'0UrUN2(!0j5 svzHD # =hj4>T]]>U@gr?F^i`?@?Fyrq?P6 u` ?iu ">k,2t  !LtW( UcR]cl ;_@J2N贁Nk?*<@ LA@wbb ) + + &D  ") ?A !EzR @$)R$ )/J%B31315 `?CopyrigTt (c)j020v09j0Mb0c`0osZ0fh2Y1r\01ah0iZ0n.j0 Al0 `8s2e0e`0v0d0,104M-_/?3 %?7&l>0>Uhh= Q1b!-!J`[F%),2 ,VMыS(K=_LD^^PCűQ~%+U9IS_JUE+UF XOO ڕ]oX52OE"rA 0#Qc3iAleE9 MXG}'0Ub-N!$KAB5 fjUGD # hz0TdYYBIU@a ?Ftnm?@Me?FpWe`ݷ?P} mu` o?u#8#5$+4%5HU$*44HHt  g t3 at?Ы O` *0U"?҃@&T #M!M!`?CopyrigPt (c) 2\09 M| cz ost f"s!rv !a it n. Al z(s"e ez v d F!3YJ$#$- #Y# Y'#^ B= #Z&#l4G6PG6 C2$e^.317;( #HH1|H44Gi^P5 1&X"7.EG'$G6%!@4] EFYJUHuDh#" # J#UU U UUA>a0JTaaMU@Me̷?Fk ?@N?F0Cꮷ,"7?P AtQ`  7?`t#jn \v\fw"MGwugoquo$Kt2zG/z?@Q#+&A8 b*](i+i+i&)j#3$(Qc"bk)+'Rp7171{`?Copyrig}tn(c)n]209nMf0]cd0os^0fl2R]1r`01a# i^0n.n AUl0 d8s2e0ed0v0d0@"01Y44-C3 C7$&lU@U1#A(:!F?SD [6yw(E ?F{ŦL.D;\I`H3E^?FpJ"MC=\C"qoI5Eo9?F(bȵLvF=\#bJFHOOOO@DQKO]OoOOOOf??Fޝ,lCۆ=\)?4=(Of`ẏ?FC+mR)=\/b@QS_fFȄ?F7qm ~=\76ԡy]4։U?F*t,lI=\oooo ?E#goyooooo/?F:}W=\wzfu=pӣ?F߼<,l-~Ȩ]p|y^]x?F}+*y=\+ 3]X0l?F-Zߣh,l Bߒa=\ vl%?EYˏݏfO/?FI42,llHJ|=\HN4f:f?F`\},l>x싟f~?F3x!##E=\J&)O]챘?F/8-k%mSp=\< 2DBB]ǯٯ~S*|ǾȰq~Fn)`]]c֦]L½r]7rG2=\`_g@P 4?F8^d=\he]|J%%D2Vthz1hV~=\K m0BTfCAſ+i /НJ+mV.c|~kdny].!̛ ?FW,lJ(Jd0(fä j[+mޣ =\_?ny]~"2!GD= bY~=\6y@ mL^pCA .i}-|{zv,l3  4f #[tU?Fߞp=,l0ϓpBKg嵿?FdNw:{QBb?ɣ]gۼ?F5̅h.fQ[Ϡ mhzCA&8J 7?FG1^}a aB Kf?F@;,l^s};]K_dШ&?FfOBl?w2OKNe9@?+o)F?{qOCw_PvI4~=z2m?Gm}>ID ___Z1 (_:_L_^_p__H>ho 2 DVhzHMgz?FC_߯():?1ZSGbZ2@ҽw]6<730b837SI|~"RM?L2]-m?Fq,cɥ~)̲1%]? FZ ҟ2 `rѩ]/U Bkn|)<|4];-@uvzwWPFMǕDGt{GI"MqcB߀~@6?HPͿ]M+T/]-$A#8FNL /Y-m$+|*1 |Ŀֿ~ѩ'Z?s =-B^l?2UzVs0~ԦLnQ ~덧쭕g8A}2!"M 2:2?ְqBYZ$@SK]-?FkL}-.zus1v8 .@2ߪ߼ѩ|[s(^lU }J\!Omy͹?F$зX-R P"M^b=-s ,G4,c^lX I6g ]-BS3>-U,-A -,>Pb 1Ԧ(?F~^lAw,?Q7ZьkP?F=S~-sj3jK~)4~"M`dqhY W-߇4ĭɫ)^v 0$3^l0}S1Ix@YpB/T/f/x/"R//*/ѩ??FTmȵ^l5rЯLjوWjFQ/Zyk?F|X"\=De9gs jEo9 ^80>~sMY/|=?t۶ I4TSx?F/doM~^l ONQZW`@?[OmOOO;u??O"O4OFO??FhWy?0WvOo-uѣOd[?FSҮ- ]gWy, l Sz?Fl}WmxZWQ灝in]\h?F.j.\=/I Wdvdp_zooooboo,o>oPoboѩm4p=u_0S}Ÿj8>4W[դ?F:mq_Wş|g #Z?F'[=ݛ}T r=lfa4J0-p/"_flȏr$6HZl~ѩ;h4=(V.@ ʯܯa@Rdv`j^l8g Uw*$B_.nV?F" }"SqN̎?F-;u6D!FA\Y] œ1g ^~SI6? r1 b\nϒϤ϶`kx7|>@|SA-:N^{a*0An?F~ _1ZA}2!g-О?F\b F! qZtq*s 1Mm#)*:ݧ#VْG%E\־ |)߽BJx;mmMQs~߯X9Y Z}l@/P N?M/?1?C?U?U!///// ?7 ;{?Ft\,;) q~-a.d?~A3U'3҈>}>/Eܑp%ߠߤ@^^l{;HkOQ ҠO\ۼ?Fߙ4zf]ziO;_M___q_OOO__&_juW/?Fia~Ia{pU߀_~ Z lS;܃(Siri`RriZ;ӱhsm2Ca6.o&s ?F~f(;EyoIs6\aoZl~:oo 0BzP1c/?FAd)? 
|hAقUYܙK?FxtQمwPIr2] ~€+Xy'MፍǃaM*2S%ʰ?F+#f~;-a~p}|b:ѧ`vXT 3T %iX3ha TaPaMYU@?F{dܫ?@B?Fx?PM]tQ`  -F7?t#UY~PUu0"u;(,\"b&hUt}2zGz?@dQ#&A b(++&Y}/SV0$38?b`4z */!"!f2$R}11`?Copyriug t(cU 09M0]c0os0f2R1r0!Aa i0un Al<@U 8s@Be@e0vR@d21Y%4e3K 756lU)U4FX1#A2%!Gj0?F73W00"6#8$}-%L#1O)߰=! 8@U$h?FTۜX\rNj$n\B.PXX3@U xQwv&5}X\:"^?5&^,!UnE@UO/NPY[||m]P5Wh@_R_d_v_DQOOO __-_f/,?FRal_`[]lS_fV>ս?F)b rlS3le£ϲ_fq l+{ ~W2ofG Dv~iqhCs}6n~#coo[mE#o%7If[v6/?F2l+|lTCf% ?F92mXlp )B&ur7f}ܑ]{}k ~Xą=mg?]̧lHWg#tdf߻mAY /ASefWc{?Fu]S䱴ls :Llgu伟flo gU?F;^U=lChlx-jfF^eۙq~j0xi|m~c&Mz?lG6OɿE]+=Oas8,ǿ?l:<]izT}ۿ~*P?F=|{FlqHlο:.2~DB'ǘ}l5(3V߆ϾfD ыh殴li7>-I<̾ϲBbGYk}ߏGM~}?FF*.^/ͼi?oߛfu?Ff l3Mi[ KP~0굂c}lOF?آロfIӂr=Eldݠ_\B\n"w\㾥P8bnnE]<0j]SEj`B^ݷk. ρV̲_#u悔&ƀ ܠ\mmElxY=mu1~!Ii)\mz^.P/كYm//&*Ax"x$qQ} F4D2}{=yM].u #q2}~s}z0N?lWU$?μǵF!i}߶bܑ!n@垇D}LnXXt~#ViԂ,iߝXQ b4Oq>ad䅔tnR)u@o0oBoTon _____ o"u2$j1}Xι_!eΟ`?7-:ܐ9Ri>DYH-#Gߍ=ԑ4MXjQ9Ef׹ tt}k͟in?S(:L^pn &*~]JB&u¥ G48Ġ7-a#|GgS>a8|I>3; nP#bWU6mȧ .ֹc`}l:2?xuTѺVhzn 0B=M#p0}!:rd Q%- qOj 7-Vsb: u>O9e>(xx&?FgH??[WvpLQ?M9 Ԣ1/!ﰳm 7xzNªJrτϖϨn (:L^sjJ^/0ЧIs*!\ٖ o5|OF ȷӸ@`?F/ ߃7\!\ψf+!"p@)-_!\?$”Mގ#q|f@LS`8}!\߬BĪ&Zf<1ߒ2 2DVhz=U[[0m{TfP7-^u,b%:k#ԍ?FȘ!\(V!I/iUBg78;L5?Qcuӵ3@?F:Ě ],‰L?L/LY!ԣ?FRs;=w] PʉL?^+UϰGIm߲E,1u5w]LK>P#XrO?? vnj!\Zz -~X@////UX/j/|////=R?FQJ<}A{ХloZ ?@lsچk?Fp"mI鴡si}/q2 ym9HQ em\\g^rBt@|? [úA'5r?#7-oo#5ߕoooooo=zg?Fzl) Ud4I堹> /k͹?FV{<˽h4GX'1rn=["jyN%.:PɟLIUw ~#Q)9=Syeӄ0_6=6LTL=Us|9ks}ߵX5>2f$~1b?Uw0'/N`%;M p4~mmo7o 9Q5:+G(j=?=;J?1OnN&4'9K]Vv*RQxR]6g^ɣ|Zk%O!QڑQ^^h?g^j^}_Txk|j~WmVo@?xp >eBcN#c G!Tx5Pm|LGf.Q,ojF.@RdvSzL``qn ~GA>~wz^gqҕim~x$gK>~ V2wTx"dQᆠA(Os:!nLĥlqNCyQӫKcAahG}6ZSN?H????fF1;?M?_?q???Vv|&5}7@R"oPiyK> k#>ġ"}?A |M-¿t6j`~r_QW44zaqYX8Tȴ߯~n߻d ____T_f_x____ Y<,2?(rݾW1[̵yѿ㸗_go4͵S62M\{Qo^~(Z&T!+<5&s pBr~y>08x޻ ϛCE/RuCQݞ%~޽3]Fb?F?|~퀶5 ~ 趒;پQtoqΣ6 ޻֮Ǹ) 1Ÿԟ)e]+akDH~6 WMw׭K,@&"c?F'%|6 ξi@a > <ŗ&8`kb|-~ 6  s??BD&m/V%߇6 b3hD5^E);M̿޿Ͼ)`]|i!x899|/*a|>5i=ϭ,?F#6 K.5> ս5|[ݍvϒe*y34ݶJA)=`2DVh j@?F-/g ?hA\)/N/U?FM2e-j X*?v(N })2-OrYQ-~.Tk )\}/'?^qq|*=3laz ~} 4}ס@Rdv22(:)*?FN !kyMdFq&w^I/(Y[?Fߧ!Ma#Jh|ϻ/x9P8!il{mM{G~.2:O˕BĈy!" _뼙-Y` k/}///KU/ /2/D/V/4^F?}e_mu? y/X /?Fjnǚq7}? D?Ɗx^o߾AytÉmA~ ^{yoQfAYvӗ-ׂoNZc?OOOOgu O*O~gL<7@p?ʼn@-1ş9p6=n?FQ>~:B !~ ޾UC?i^Jak{_f/˲p6J%V{垇͉}1JMZ ۯ#l~Ư4ʩ݆!~ cx|[) T@ Ѳg.Kn>$?{nIn880ߊfr2Yp=ȓ }UptCE_}lB~?h,,$ϚϬm9eG\'1rth|~1=9zsg-?Ubw;]9 _Y*'1`-LI@0goSy?o7x>PEu{ X~=8+m?ɼ7)}^E"S0 }٫i)}.@R%m9Y; dA Z8M]Y ]V6yY+t8skA rM#\$Sp!_n8$Ye#=]*}_I}4I H~`]ت]~| .ʗjri_.@RdF&|m9;q;gi}I߂h(}(Uϯz5y+[Bq}X~v"x>?0~1>4W *wFI.i}A0p}} &N{*y/ K?]?o??+e'//??$?6?MIbLR$UFײ;nC -+}IDQ$Օi}-HsC V2wYNHCw?F3kDjt!eJ }=Zܵ|#aOQcܝw~BnW؜p f_x___F(O __._@_R_MIst@xAOK~` f ͹+ixrݧJQp~[̵yvOGo?Flg6uMh ٘]oSC`qb?':^Bo}^~^?>Dp$ςb)&8J\nMImc@xgAx+_f{&34'Hee÷@%|~g܂HV <|PFXdTX׊ćgߩXQ#֟7$zLvPFBrX}g$Zg̞Rש<͏ß՟*0BTfxMIߵ.W_=%eiJz#Ms2,MD_c|T%#5JLYXD?F5hFg y@8|ѕSCw]&.mᕼA?鯻Ϳ߿+L^pP>FS`D0 e4Z>+-G_SCa\0І?FKȝ ǶH ~%aWNwZ=!/OG#b q1{~! tIWxa@$,"-MIkO/?F|R |~A&'A;߯bc @``5('Mh-nf=!SCrRat!2vIgn/=gyzu  jf&q#=-uS),/*/NR<btB1*<|N t!ߡ..Nsz9_-ֽ'}.<94o._/?{iPא5] HV:;$HĠ﬉ қZ?nBӭ_2"4FXj ErT@y}Mo 6N?PJY_-_`o颾Mn;NOqȝR?FOZ|<;QaA1b;oIaWW6/H]ToY?vbgCt_Ϭz3,>PbtφϞ m9`Xizh \~wKVaϤ~gPϐsY6i&vb8l5' f EXe]/; ^bAߡqEiK )~E?H{=߷4HZl~-Yf"b }h 0JTlaHaMCMU@Kk?FVٖ?@uW< ?FY鷺?P >uA` ?u &J2A:2,t  #)taN_"舺m\pmv*IJ>U2zGz?@UL?A>A@Wb#q)!?(b'=&Nm ?0& ?M "b?)'7,I! O'11`?CopyrigXt (c)902d0990M10c/0oKs)0f72(1r+0d1a70i)0n.90 Al0 /8s2eW0e/0v0dw0!(]$B-3 7&lJaUl~ 1l !`@9  颥 nw&. 
ʙF,2 FFՋN@DW9_k7BLL␕U?A%E?AL9C=QIDZB3>X5EER_`y5qLWO ]7@D>XE2OE#rM) 0 " c3AAO(O:OLO^OpOFF&(LKSoW N L Od]uvx?FZl6>LI3nZձL)LA_x1ur/ fLP $e'Hٰ_FfIXDoo p_Uoi &kpiMf5kiX7!'0Uʂ>$e5 UH luD( " # #Aha0JTaaMQU@Ad{?FdT#3 >5AU@c^?F`Ld?@9x(g?F_Uw?P6 u` ?u .6(>2Q 1@;,2<_Jt  N3 trBNٟ,PޟbX"; JU2N贁Nk?@ʄ L&A@bu9(E+bG)%c+c&# #!!5 `?CopyrigTt (wc)020 090M c o%s f"!r +1ua i n.0_ AlF0 (UsJ2e0e v\0 d>0!E$M[#!2?06 ?AN$)e/%-/#K '6l> 2UhE=%!?!J!(NѠuOO u(:@¤̥?FbP;?PEB GݐAY1&YMAJFHe׵B `,a,[ /? :7i? >y4vʚ?rJ5;!:_o#ga@^AOiAlerI=MuXG_'0Ub5N!40% rFjUHLuD" # R>h0JT#3 _aRMC5MU@u{v?F ??F];\zظ?P >uA` ?u .)J2 1@;,2<_J>t  w?t&9W`ݟu㵩OJ[>U $?& ?MbA@D"J&N#y!y!`?CopyrigXt (c) 2d09 M c o%s f"!r !ua i n. _ Al (Us"e e v 0 d r!]$v$-## '&zBt#2q0[?"@9 q>6M(bL)J/8Y*lJ.-$U#8APAa A =!4Q1D!1A9 {?F%@ 0nCϿMnNEuýL9%K FH(5N?:G_&_ RHE5E#f/?FR=JvK ϽLQ ELxAEE^]R__ *HE8,;@ƫn~LnzdKL5rdXHsA8#5 뇯/tH@ED@/GpOc#?P-DT! рY?!IO F_}]~3v!9$e% UGD # hz0TdYYBqU@UJm?FBO ?@IxJC?F+9lbݠ?P} mu` o?u#L*5-+4,5HS+5\$*U44HH\\t  j@g> t~pm J)4 {ו' -U;!;! "`?CopyrigPt (c)r ]2\09r Mj ]ch osb fp"Ra!rd !ap ib n.r AUl h(s"e eh v d #"4#S8$,#4G# &G'?=5#H&,#Z45656 12,$B#5#05U253@+& $^ #E K5 5#\HA%$DH /KK5$D4/GiA^K5 6X7G'4Ŕ56!.4] FJHD* # =hj0>Th]]>]AU@ ąMu?F*]!?@v$?Fu?P6 u` ?iut  Dm%t3._袋%.0I~$%UHSB+> PJlU?B& ?AuA@o +bs(b$%#2zGz?+"@ L&s#v ##zu s/ 5&,J2*%#c1c15 w`?CopyrigTt (c)02`090M0c0os0f21r01a0i0n.0 KA# l07s2Ue0e0v0d0"\1E]`4MŤ-%#o3 o7B&$!A]UhzA 11J`%#@frq?F ["(M(!T?FyIRaZL\m۶mQ8ϐ?F@$mZK\٩?F}e?P,DT! VрYc!ITB M~E'QY5a XȞv/ ,^$ Ui[? RRT*?r.5 1:Oo aosg4 1"KUE[UF5sRŐO_ <tf ?@Gui&PbE9 M5X8G+"!'0UE]N!R]$ar5 vUHLuD*" # R>h-4JTaTa9 JB>U@@(/&?F94;??FIRaZ?P>uA` ?u>t   e~27-t!;E]t-6-P\E`,J%EJff? ?Mdb"z@A }$)1b:( #>!f!?X(d, P酑2N贁Nk@9 ^"&:(+!D:/ 5;T'2U&Ra1a1`?CopyrigXt (c)020090M0c0os0f21r01a0i0n.0 Al0 8s2e0e0v0d0"Z1E^4-/m3 m7lJ0JUllI 1!1` C  /VN!b͉(O"TYURIUD%YU+u7tYF_QER5:MT?7 T [Nё\#CTAE2OEv2rM $Fyc"mA$zG %I OUOYU`fsRsŠq_ _ wT#3 >U@+(/&?F94;??FIRaZw?P6 u` ?u t  <~2)t7E]tU)2)LXE\->J2N迴Nk?+@ LA@{b +b )'+'&D*!!5 {`?CopyrigTt (c) 2U0 9 M c os f"!rԶ !a i n}. Al 0U (s2e e 5v 0d0M"!E$M[h2?6 ?$A$)/J%,b,# '6l>(>UhE=~%!6!JXS(xNWOO D QH~%xE9OAOAiAle6I=MX7'0UjR>!40% 6FZHD:  # ;h0>T@h]]9 BIAU@Y"2?F h?F\9?F_X*qw?P68Rw?>u` ?u$.>. V- (.8:Q [t  !g?t{Yu?pvMl{~ݕvs5Q!fJU2N贁N[R?@L&A@bI(b*W)U+U+U&񩆍y#? (<!J ?%MP?AO"d+b Y)(u/%*B#A1A15 `?Cop rigTt cu)x02`09x0uMp0cn0osh0Ifv2g1rj01av0]ih0n.x0 Ul0 n8s2e0en0v0d0@:1Y>4M-#M3 M7#lUh/#7 _1AO!J # F qBd V(R iAb59 MXGW'2WUQ;N!R$aP5 VM&#HD # =hj8>T! ]]9 UqAU@c^?F`Ld?@D6 5D?6Hb?b06]DN`lt   ?N t!rBN,_PaX"; jJU #<nH!A?^%Y ?Ab^ @$bbbz #"b#$"b$ 2N_Nk?<@L&(&$%" ;'"!*@!A#BBhy?H *A#115 "`?CopyrigTtk(wc)k20@9kM@c@o%s0fB1r05Aua@i0n.kU ] lP@ HsTBUe(@e@vf@dH@/"14M-?3 7#l>]Uhz11!J`A#@SfVo?FnOo0  d" ΄ ť^R^M(!`ZQaSt__ Cƣ8-$Րj8:ۦ?@u-{?FrgY?PC d f"Y!Y1a ]D_;s ˿}nE>0F # ZOB Q ~dxϽo@+觎o[sGCY_. fX  ? u l X _G F֛# %# 6)Y +[ .  7 NH  P8^ $? B//'/ (y PX6Hapxc}1oeg.i Qk^(Eq#J&IKXMLVpY9H_d(b%K=fiulHAJo1UFD  h(^TYYB UFjZ?F~??x<F BPG(?P } X ]B66]  TTUTTT9T9ȴHBU?Q?G ?B@L&ɯd2?6-G(?\.Y(7. 
sU!}&/) *%  0eB`-ecom0er 0eW,s2v2, 4put$4dui0t0ib24ud0n0tw 0rk!% ^| (S G& q?%7I???"" wp ""ppqwqqwqw^qpi ppwpw)wwwqqd^wqp~wwwwwyDrag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aabjZֿ~??cN׿ mF1??%!{V↑T*J?@ ?UG DF P# h @T(PYY# [U@jZ?@~?tP} u` ?u#2>D>HX DHlHYHH EEdR.؍>DDXXllRu'9h",'US2&5/4&3 ?"'4Wb0{['`:[1#26"d^Q_9$7&9"/L6 2&!'ݯ>o@ÿ@qx@?@-pB?wDmF03Bu9@`҈-1Bu@r#CC5",CAB2Cm9u`u `"[ b!u`/3Qy1y7B`?CopyrigPt0(c)020P90MPc.PosPfRQrPaaPiPn 0Al ` Xs$bePePv6`d/`VPs_SBNcPm_!#5P8k` 8Fay1&3db4&5܀MB=/3z6&3d66 2&4#/30U1r3]%ge<^AE U#"&5m Hq&5twV5t{3t{VEt{t{y {etd{|tY wmda*V_BTAJa$Ja&203V21@3 2MVB3Z1125tb6 7$C|èEhhPÓnQ*ϔtataAPꓰMB111;@cqcqba>*Ab1%EQF^.1XA#8dTG`XQy`])r` M`WnuPaPt5arFaiQ` WEqiPm6`nPvd{V5QX]1PRd*ơ ]`ue`bͯ߯@30! P`r@RdSH@ȱ]C5D6bQ۱eʿܿ{"\ ϬAP9a:B`諸75^ϖϬ% S:bi`#aweT ߬ぶLPchCekT}ߏBalB`i`g̨9@߮,RPoڂ|EZ\@ =phDBU T)?ar0TPpVrKB` B`sR;IY`A Y`Q@;bmr:PeExab!gD"qQr2w`ijJSCC ` TsR\͒kXV` ?ON`1ڒBkfHZSbY`|` E@cqTZ Nwbk4qa+.2`f dDIpadB`5cϐT)nĺs kx/0/(mTa t f?a ];a£|v///ő R pa/a@/?(/0?>/M`Cs?5?'U+pu"tPf!?BkQO)O"f=OiODOkvOOKHdBQ#CġRC<` Wm!wV_*_ Cg0ar__yi_'`M@mi2n__v"kpo$o8KOB%fy"FaPoboj qt1dw:1NvcU@jZ?@ű? _v pBup` ?}rup`  B=ݐpj,U>(p5$j-T^u*uq, ST1.s1'xAMw>ÿ$revK ˆeGrςłˆTς߆L2`R"oUK=e2j"rb ggbr-}@"Wbrgu``dv~jc 4F[_c@-->A>Bj(@yL1L1*Z{­q qqqcҴQ:ai6“`k`!uA`aż@q@ʇ{„S5_&ﲄПF3s͵N?SYFy 邉f@ H8W{H5WH`Yc|ag,@ BQр0Bf_q_qʷifz!` %.P&4odAA_qNNTWOWRKeH0PRf1RRI%SQ"it!̔=GQW'70U? 10a?ƕHD # =hj0>T h]]9  5AU@jZ?@͂T??@x?SF?P6 jt`  o?tk[>(A Vu aSuo.]&A]  *ԮJU2zGzt?@L#&A@bA#z7 5#1bO(b])j+$j&[ "*(w "?& ?AU$l/%B115 `?CopyrigTt (cu)Q02`09Q0uMI0cG0osA0IfO2@1rC0|1aO0iA0n.Q0 WAl0 G8s2Ueo0eG0v0d0"1E]4MŤ-&3 &7&l>]Uhz811U!J!u6?@+\.>djIIa? >"(EǑ I%K9E 5EU . K ;;5QIWEEE@ _KnSA?8@VBu?@DR> ?Nk"\6`ObmMM\j=am {:e 뇿P> #? D"? 5-gg?j47!B{oogrK57!ibDAbE9 MX7'0U~rN!R$a)5 5vIzHD # =hj0>T h]]9  qAU@;?@J5K?@xEL?6ѿP6 t`  bm&Qo?t~@@L]WI,#u auoL> @JU2zGzt?@L?&A@b}#zts q#1b(Wb)+&[9#H",ɺ (;H"?&L ?A$ /R%9#V1V15 `?CopyrigTt (c)02`090M0c0os}0f2|1r01a0i}0n.0 Al0 8s2e0e0v0d0"O1E]S4Mb-9#b3 b7I&l>]Uhzt1.AF!JN!n.%-+0rdjV1ax߃)(@K-&?@ mO"> ?Nk"PL[Majnha_:e 뇿P> #? BK=o? n-g?j4s!B{__Wr5s!D8E6V 6?wj_|_ ^XE.U6Vz T6?@ @ d_mOoT ݬ\VyWxRkrVo_bVh*g:n"'UaEjz36>C~N>?Bn]_zGvN7ViDAbE9 MX+G'0UPN!R4ae5 MaHD # =hj0>T h]]9  U@jpiH?@^?@$?@]F?P6 t`  !a&ao?t,F=۳=u;u auo>J2zGzwt?@ʐLAw@b#z1b](b!).+I.&[-m(?0& ?A$I0/Q%!!5 `?CopyrigTt (c)02`090M 0c 0os0f21r0@1a0i0n.0 Al[0 8s_2e30e 0vq0dS0T"!E]$M-# %'&l>]Uhz!1!J@% djc~YZ(_N9zOMGE%_EH$SwO"K ңHE ^&_8_WE_OqOHOOi&DAbPE9 MX7'0UUfb>!$a % f1jHD # Uh4>T#]]9 M5AU@jZ?@L|+?@?@({~?P6 JuM` ?u JYt )t7zEA2&L_D?. > >tJU2q K?=@MJ&A@-b9(+bG)T+T&J[#"#a?& ?AH?/`)#115 {`?CopyrigTt (c);020G09;0M30c10os+0f92*1r-0f1ai+0n.;0 Al0 18s2eY0e10v0dy0z"!4B#e-?3 7)& b 0K8>U# ]A]A "1!ъ!PzQ <%OYAJEU . K;;\%W%E@($L[.FX5Eai%?@ 6?wL;] W-\TB⤲8?W&SA?@ @ ?Nk"L'`bmMW_Ye[7 VK=F: [jr ? D"? X?ZvD?a@Boogr5;!QEؿx]SiLO MLXa~WOTZiAle59 XG"'0Ujc&N!$05 .HD # =hj0>T h]]9  IAU@K?@OgVb?@O?@X5?PJW?>t`  ^Z?t#ə.b?T@[ u?/CG<_Z+u u$ >  J JpU?43f<!.&P?b3bbfz@1bAe&bq$d 2zGzt.#@LT"&r(~&i$q)(+}'i"~*#81815 `?Co yrigTt (c)o02`09o0Mg0ce0os_0fm2^1r 1am0i_0n.o0 Al0 e8s2e0ee0v0d0"11E]54M-#D3 %D7.&lUh#7 V1#Ai!J# F/?B1B  VRiAAbE9 MX G{W'0U*R2N!I$a {VZUHPD  # #>h0JTpERI MU@u6?@ic{g&?@%r`5B?@n_F:Is?HNtQ`  II?t#\  _ii$"`kNu  iu%wj>M JQ T       "@&&JNU2"?H@Qs#N&A@b(bU)++&RNp}#>H1#19 ?#bbbz| R&/5R}#11`?Co0yrig\t (c)02h090M0c0os0f21r0Aa0i0n.0 Al!@ 8s%Be0e0v7@d@21a4t#-}#3 %746lJ#$JUp5 3 M19 1!q(!@oElrkRt# 8$lcD?@숒@)?@P?P"EkU |1\iOT/irvs}i>`_,of :m ,M ?? S&d2? 
*b?r4B!B oo1gr #5!>%(EQOOOO__@Ac1_C_ D4ևS]_o_zRX,{_,__ #R__ (YQoSo+o=osd {oEeoooooo@zUy{* hDV!VV(I X"c7I!pg bf'iMjSE t%XyG|'0UŢŞN!O4i |UGD # hz0TdYYBU@(%)?@xg^?@0߯H?@s7Uݹ?P} t` 87?t#_)TeΦu ]ukv  ̝L&UUU&I&3 U!!`?CopyrigPt _(c) 2\W09 M c os f"!r 1a i n}. Al00U (s42e0e uvF0d(0'HS$#-## 'I?k=#ǒ 2j$B##0U'BM3@&@F0Yk} #HA%DKb5DK3DK5DKED$!GiAZG5(|IDFX7yW'&Dŭ6!4YA yVZHD  # =hj4>T!]]>#AU@0߯H?@z)e??@˞9q?P6 t`z  ?5t# rXѮu)Nu;):O]ע0 &> # " "+"+"+"+" $+"T-J[U7 ??& bA@ 2b4-#2zGzt#+@ Li68!6 ?}; 72U!:B#AA5 5`?CopyrigTt(c)20F@9M2@c.0@os*@f8B)Ar,@eAa8@i*@n. Al@ 0HsBeX@ej0@v@dx@)21"DM-,;?C G& l>]UhzAE!A#!ְ1J`#@{&pz?@Uy{?NT,dKj&b&R&M/8T9K-q?x6@F?@dG0 >Nk"?YE*]\LT_hKjfGD&Qhj iU3UGe&Y_`?@/b%)h5oWKoWo;KTod}l$roҥ`nE2K?@ˌ%4G{?@ -q eZ̿R^{|\6?vodmP}l>MBe 9 N 1iS? 1?g]rl#51:O("4ga@Sq(UE^@O|ǐ[`Ρa g v2rA @3ijD&}l迲aqOz o?#WHZ'Qu=*[^N?@]%P?@O&@ c?@аM<n ?` eb0)\(Pfodm}nv}l?V5(?q$Ix[cN#W{#iA&QeE9 MX|''0UdN!0 /HD  # =hj0T h]]9 #]AU@}%?@R$,?@6#V?@c_3֡w?P6 u` ?u#t  3zM%t3ws%.<%H._'B> #PJU2N贁Nw[?@ʐL+&A@wbu](i+bk)%+&p%#4">9"<5!?& ?M#bbb͓z$ p&"/%T%#L1L15 w`?Co yrigTt (c)02`090M{0cy0oKss0f2r1r 1a0is0n.0 Al0 y8s2e0ey0v0d0"E1E(]I4M-%#X3 X7&l*>(UhE j1#$Ac!J`%#@5e$_F??!foA3nBM( V?@J-RHQ??@S!R?P| llHL6VLD5E+v;*AʙH: 8snP%$? k?Q? \?f4_!_t_Wr5_! R,U3OC@IL?@4~Kz?P)Oy޾ T!]]>#IAU@܋?@2Oa>?@#r`5B?@*ݷ?P6 t`  >B?\oEt# ŖoZu 0Nu;Y:|8>$# JpU>##<?2& ?5bbbz@b#ANi&bu$'h 2N贁N[2$@ LX"D&v(&m$ :;T'm"*B#d1d15 5`?Co yrigTt (c)020090M0c0os0f21r 1a0i0n.0 Al0 8s2e0e0v0d0"]1"a4M-/p3 p72&l>0>Uhh= 1#!eJ`#&@NTM\jh1B,RBM(!JoUy{O_ e@ So4(DF^Tw?@]֝ä ?Nk"?w_KT\T+\(Kj?ӷjmlhj -`\TPQE\Uu?i,St_o u~!XEO9= T?@L;uQ_Pm"rA @3sjQ1lğ= GoYniAlePE9 MX'w'0UrJ^N!M$|A vzHD  # =hj4>T!]]>#IAU@܋?@j?@2r`5B?@^ݷ?P6 t`  >B?\oEt# 6ΠZu 0Nu;Y~:#&}>0m8>  PJU2N贁Nwk?@ eL&A( 5bM(b[)h+h+$h&#[-M$#?& ?b$Z*'*B#O1O15 5`?CopyrigTt (c)02U0090M~0c|0osv0f2u1rx01a0iv0n}.0 Al0U |8s2e0e|05v0d0"H1P"L4M-/[3 [7&l>0>Uhh= m1#!S!J`#*&@PUy{_?NTMjC1gc RVMʣ(D#F ![?@"$. ?Nk"?+IKTLTcKjRlhj p= (*!)i,SO2Kes9T;QEGUFw6qX}__TM7fTZT\Dž}I_P^52O,ES"rA 0S# Es3iAlePE9 MX$Gxw'0UrJIN!4gA xvzHD  # =hj4>T!]]>#!AU@܋?@c_v?@#r`5B?@*ݷ?P6 t`  >B?\oEt# ap'EZu 0Nu;Y:|$># BJpU>#͙;? & ?5bbbz@}bAA& K&@ 2zGzt?"K@ L0"&N(bM)+TK/8Z*B<1<15 5`?CoyrigTt (c)s02009s0Mk0ci0osc0fq2b1r1aq0ic0n.s0 Al0 i8s2e0ei0v0d0b"5194M-t/H3 H7T &l>(>UhE=3Z1#E!J!@ENTMjN@1h(NoUy{OO e@ o)4(DF^Tw?@]֝ ?Nk_"?'_KD3\T+\(Kjjmlhj P\DiAlesI(=MXsG_'0Uzb6N-!%$0 sFEjHD  # =hj8>T! ]]>#!AU@܋?@*?@#r`5BW?@w?P6 t`  >B? \Et#տ|u l4RQu?]>I$>I# JpU>͙#<?& ?9bbbz@b#AE&bQ$$D 2zG[zt#@ L4"&R(^&I$Q)+]'I"^*Bh{.M#To1o15 9`?CoyrigTtk(c)k2009kM0c0os0f21r1a0i0n.k Al0 8s2e0e0v@d0f"h1l4M-:?{3 %{7&l>(>UhE= `51#I!J49fAn^ R)X f"B!Jui,SG?RSlOS u~!N]8-19= T?@L;uQ ?Nk"?YVTQRXQOnğ/=i n (\ wTiAaePE9 MXLg'0UbZiN!)$0 Lf`jUGD # hz4TYYT#BqU@",?@~?@$p?@|>׺?P} u` ?u#L+/8<L9`( .88L``t uPu3Pt۶m!tUH~#9#[0Up"?@/& 9#!!G*`?CopyrigPt(c)20 9M c os f"!r 1a i n. Al&0 (s*2e e v<0d0'"!$0#B-9#,# 'w#^ =9$&0#4 66 20$e#$^ #3 % 9#HLA0%XD`cKT%XD8cG g&daO^'*TcATcA0"-()Ca0% g2ATBK6SihaX5 &X~7aWK'o$ţ6!4 0 aVuZHD: # =h4>TB#]]9 TU@$p?@|>W?P6 u` ?Mu t  )At72(L>REJt2q?=@LA@-bb )+&񆤍[Z#?m& ?A/$)R!!5 {`?CopyrigTt:(c):20 09:M c. 
os f"!r *1ai n.: AlE0 (sI2e0ej v[0d=0>"@!$MBQ-?# 'm&l>0>U#3 C!*; #p _ &J`@7?g8oyƏ C?,'ߑK FMD(9IMBIGE%U\8!<A% fjHD: # =h4>TB#]]9 TU@$p?@|>??@?Pn6 u{` ?u th  )t*7)2(QLAEzJ2q?=@LA@yb #zt-b() (2+2&醍N>{??& ?A"b%$$>b7b{/b4/U% 1 15 {`?Copy_rigTt:(c):2U0N09:M:0c80os20f@211r40m1a@0i20n}.: Al0U 88s2e`0e805v0d0X"14Mi!E(-?3K 7&l> 0>U#3 C)1 #p U%J`@!g8oyƩ C2,'ߑB DVM^(9IMBIG3U%~CU[8 h:0T5 AB#MU@1 g?@4?@84C?@Kk^Ӻ?P t`  WwB?t#_L&,qۗ'0 .(='u uu1t6#I    ҩ""PE&&E&U2llf?43f@#&A@)b3bbfzt #Eb(Wb) ; 6#"w?_6C?/$ ;H,7#11+`?Copyright (c)@2t09@M@c@oKs @fB Ar @EAa@i @n.@ Al`@ HsdBe8@e@vv@dX@021M(=4#-#3 7_6( v `@K( vU `A#1(Og0x~u- h'#Q  "8!}Z_]02QEUCT/\?@Wh{vž\<ˉ\78u2/k٠' QQQ,WeI_[Xg_`zT'!68?@J[8j/OW\U M "RSDrV95Uۮ4R?@ rё\!l§\Wԫ{d8m k?NR܆?@ֿK #? }+OqZx~b_Xu5sZ OZ{d+i2o,u2r @3C QAoooooouYYG?@Pb|[p o _7~I|׿zT6Q3M7f?@Cs|h')`ρ?|ώ&a@-_4o0:02i@wE%tHXG'W0UN!z4 0N  UHLuD" # I>h ,J T  M#EMU@jZ?FJX??@HX?P m>uA` o?u#VlMJl -<lA -PK#TdK2(.K<FZnf>t  tt|>b!FQ~uKJ>U2zGz?@9 ?#>O&Aw@b(Wb)+&r" !$NI#"@#W"6 "\M"b)2;'r",r"/I#11G#`?CopyrigXt (c)020090M0c0os0f21r01a0i0n.0 Al@ 8sBe0e0v-@d@7"BMI#3 76@"!"t1"=ElJ Ul0@AAC 9 =QA(#UA!=(^!j9?@+P(0 # 4.!# քOCN6ULw-iN\u5e[EYzX3^GxQ1__ pPzXE6UU . O[ ;;d\@#Q}U6U@ _\fZ__"zXAjai%?@NT_N\^ W-d\ƿFvwzT97Tꤲ8?@)#?@@SAWp>@ ?Nk"R`QY5`ߛbmMd\?E_Wqy[7 9(  UDGzX2YGf?@2CL[N\AlTzXQ2YY a?^&O[?++d\g}UQ2Y?i`hCN\,@Иg*%:׉zXU6U?_z^_p^AA(G!EM __0[u?@ N\>4t>GCc-xQ@uIuc ?@lփ?@nc>s9ZQh |)Sf|Dҍu~4q3@uɕҔ@8S7^A?$My$'eCO9c]ziMqi~I@%!XRG~'0U6t5 Ff ~_H>Y;s ~HA10Hù:zFgv#xJOB Ly~d !o@+Xo[sGD)z: PXOcюԎՎ(؎XڎF֛ݎߎ5Ϣ,Ҧ?86جmNl9aH1d7x38Ly9!UFD  h(^TYYBBUFL&d2?x<F BP(?P? t B66  TTTTTTdH?? Q?BZ@?"-3(\.E(#.sUk!i&/(  0B`PPublic,pr vUa.e0k0y0s0r01, og"a ,n.0tw*0r01y0 1m0t*0p *2r 0pP1!91Us0c*0n82c.!iz275 eA^| (SG& q ?%7I??? wpwp pqwqqqwqwq}pqqqpppw,wwwqqgawqp~wwww;yDrag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aabjZֿ~??cN׿ mF1??f&d23L4CUG DF P# h @T(PYY#E[UFL&d2?jZ?F~?]P} .B":J :^:rY:3 7 77 Au`?؍"66JJ^^*rrUu#J",':U2%&5/4f??z4b0u{[`:[1#2&6#"'V"^%Q_93 2 L1$" "3'@>>ÿ@qZx@mE?wDmF0r&BCZBu%@`-1ZBu&@ >CAB""C$9u`u `"[ bz!u`!Qy1y71`Vis_PRXY%cPm!#5P168B`?CopyUrPgPt0(P])020`90MPc`oPoIf bQrP6aa `iPn% 0AUlQ` hsUbePe`vg`d%R@y1&3db4&5! h8=/3z6&3d6P6 2&4B#/3+0U3r3]*g,^eqV5@cT<*G`o0m?`s `la>t&5 pISO`PdaTs"_BT /&20*V2QQ32DVB3Q1$125kb6x I7EC|ßgh`nQ*ƄQQA``b111;<balrqqAb1#8dsPXQ1`]+r` ]M?`nu-`aPtfarQiQ_` EqiPmg`n `xd>V5Q`N`]1yP%bd Nm0:30ȟڟZq!v P?`r'SHHZ]R`Dgb$a Be>\ЯoAPja}C`y75^GYo-%v Skbi?`Ta:eT̿o-vLPc?hϠk5.@RB$l s`iG`gpϢE%@`ϮqyRPorϱ|ߩZPm\@x px(+s&5ՈUm lparJ0TPpg`PKBg` Br"a;s;0S0LQmrPeExRg""tXqQrb2:`bv+mSJ)CQ`qsĂXC` qP@тuSbp?1PKg`yނn۠UN~w5bka.2` dHEIPNads`fc6H n $׮n~}sIH xJymQt=fpa ஑la1+fHZ㼁y +b P6a`a@lu MO`C(/F,Ÿb/ U+v3utP)Ɵ/H 9// ?Hq?(1tVEImx_dGF~?AvAvp BuJpGYB#uBGYpLJqsp(7Q7Q(xAF@>>ÿOoQ=ONRRCRIHށrA 7S>S"`AR(a1DceF"$?\[?RVRVR?R]?R">UDBb_ PPaMUZB[br@2\ zOz_sv(YqYq!!BO\ɅiqTqvxUzaO9ꂃSS{P_`u`(0y⢀@Bb 2a8RL7@(0aB{8Q|B{BQ|&Cz,@R2ivOOMoM()A)Awb8z[9 N6dՁ:w`` %c`P&!!i2<B)ANDTWORK(%H PD!R1@URIDS{QR<B<hB!i#??=ly|a'0U?Th]]9 MP /AUFjZ?F͂T??FSFظ?P6 .]A]   JuM` W? EKu4bJt1 ytuo[=Jb(A =>JU2zGzt?@dM#߀A@bM;#z1 /#\I(bW)d+)d&J["*(w"?& ?AO$Tf/%B115 `?CopyrigTt(c)2`09MC0c.A0os;0fI2:1r=0v1aI0i;0n. Al0 A8s2ei0ejA0v0d0 1Y4-X 3  7&C0b59 FX7}'0UB)Ź&!$aI 7FKJlJ]Uhz211O! !u6?Fh,\.>*0II=*t >J:@T&ȑ/ [WpY\QEUzU . X[:;QRuSU5UF,®\oZOnlQ 8MP@fu?FR >Nk"\T`bmMlQi_f0%=mUt.m {=4+ 뇥P> D"? 
5-g?0tp@bTh]]9 MP qAUF;?F5K?FxEL?6?F6?Pn6 L>M  P$  #uM` ?=`cc.@uRJtO  bm&QȽtA@Lؽl]WI@[>JU2zGzt?@M/#J?&A@b}#zs q#b(b)R+&J[9#H",ɺ (H"?0& ?A$ /%B9#BV1V15 `?CopyrigTt (cu)02`090uM0c0os}0If2|1r01a0i}0n.0 WAl0 8s2Ue0e0v0d0'"O1YS40#Ť-9#b3 b7&0%0bE9 YFX+G'K0UB&!4Ia yFJlJ]Uhzt1.A!-(:9#t.%-+0JN H߃+%$Jq-&?F mO" >Nk"\RSWNh[H@[4I 뇥P> 'CK=o? n-?N]`@ b<couo]r5%BUV 6?woZÃ^ǿXEUV T6?F __ jv6dDlVyZgPRkro}oiBdalN!\z3ˏ6>CRHS X?BnDe>TGvN7VHD: # h0>Th]]9 MP JUFjpiH?F[^?F$?F]Fڻ?P6 >JuM` ?j uJt  (a&aEt9S;-FEN=ۯ=Eh?u;>2zGzt?@M|JA@a7b#z}b(b!).+.&J[-m(?& I?A$0/Q%B!!5 c`?CopyrigTt (c)02`090M 0c 0oKs0f21r0@1a0i0n.0 Al[0 8s_2e30e 0vq0dS0!(Y$-# '& 0b59 6uX7_'0UJBŃ&J!$a FJlJ]Uhz!1!劵:F% Y.  8?zY^J4 ]9$]$R9S/RJUE UH$S!\1\? GX5ZU_YQJU __-_?\HD H# Uh4>T#A]]9 M /AUFjZ?FL|+??F@{~?P6 m. >b  JuM` ?+ILKu8J1t5 }ty_7??J]bA>tJU2wq ?=)@MJ&A@\3(bA)N+)N&J[ "#?& C?A9/Z)*!!5 `?CopyrigTt(wc)20A09M-0c+0o%s%0f32$1r'0`1uai%0n._ Al{0 +8Us2eS0e+0v0 ds0!E$$5 -? 3  7&6iiePE9 6X7 "'0UBţ&!$0 IF]JlJ 8>U#9AA 1[!:Nk"\T`bmM\"f4N[7 zex-K=FAa [jr D"? WZvDS?4xa@R#Th]]9 M IAUFFK?F\gVb?FO?F$_5?PJwW?>$[ >  JuM` ? ;  )u*Jt'  ^Zmta{֙3.Jvt?@[_ u?vC >pJU43f<!.&P~?b3bbfz@bAe&bq$Jd 2zGzt&.#@M#T"I&r(~&i$q)Q+}'i"~*B#81815 `?Co yrigTt (c)o02`09o0Mg0ce0os_0fm2^1r 1am0i_0n.o0 Al0 e8s2e0ee0v0d011Y54#-#D3 D7.&%g0Ab59 ;FX G[G'0URB.&!I$a [FoJl&}>UhE3MV1Ai!(# jV? & `& T` V_RUHPD  # #>h0TpHeeJ EUFu6?F5c{g&?F{r`5B?FF:Iͷ?mj>  M   *  (Y JuM` ? Y ":N`u#xJtu IoI%t"' % 'ii%$'`OkŁJU2"?@Ms#J&A@"b(bU)++&RJp}#>1#19 ?#bbbz| &/5> }#11"`?Co0yrig\t (c)02h090M0c0o%s0f21r0Aua0i0n.0_ Al!@ 8Us%Be0e0v7@ d@k"1Ue4t#B-}#3 746t%0 jU FXyGG'0UR46!O4i FJlJ$jUp5 3 h19 1!q(4}#FXJt R j# 8bD?F@)?FvP؞?PEkU O|Ä\QnS ?.܁nts}ped`,of :o ,M ? &d2? bS?tP@RBoogr"55ZQUf_x____bY@?Ac_\jF4ևi$on&h{ǡ8oJo\opoo T)YAoaoooo o 6$|U7I[mS Uy{䷰7ho}V(f"(I%ZvShX?"cwӏ叭iďa$xUGD  3 h0TdYYB UF%)?Fxg^?F ߯H?Fw_7Uw?P}   f  f0 D fX lu` ?00DDXXll)ut  871%t%!?"Ư)1%:'ֿTeΦ1%T'ʤU!!O"`?CopyrigPt (c) 2\09 M c oKs f"!r 1a i n. Al00 (s42e0e vF0d(0"#S$#B-#l# '?R=#ǒ 2$Z##0U'B3@& ^5 DFX7dG'R&Dŭ6!4] dFxJeYk} #HFQ%RT][b5RT0][3RTD][5RTX][RTl]WHD:  H h4>T]]9 MAUF ߯H?Fԋz)e??F9q?P6  >IM (#<Pd`<#P#dJuM` ? #2FZnubJt E-tA!["XѮJV&bN$p'N]I֤>J)U5 J[-# ?6 ??" A@.2Sb64B#2zGzt3@M#Jc678C6.? ;B72C: #115 k"`?CopyrigTt^ (c])^ 20@@9^ uM,@c*@os$@If2B#Ar&@_Aa2@i$@n.^ WAlz@ *Hs~BUeR@e*@v@dr@"14ʺ#-/ C  G6&iieP59 &XG W'0UiRŴ6!#40 $ V4ZlJ]Uhz@A!1(`CFU&pz?FDUy{?,d@#b`sb`"J-d K-q?FFF?F F0 ?Nk"?GiE*]CpdadC0[dhZe@#eGDQ> @" i$a3n&Y+`~rrood]io;e s"yr1C82K?Fѻ4G{?F-qx>tFjR^{|ZlYZe tmP|{Mޥa 0 1iS? q1+?wa@rc(<;"̏A!rFÀ5H5aE=*[^N?F%P?F"P&@ c?F{MTh]]9 M]AUF}%?F[$,?F+#V?Fd_3֡w?P6 B>M @ $ JuM` ^?3YY.KuHJtE  L<ܩt.'_3zMҩ,wsQ>JU2N贁N[?@M#J+&Aw@b](]i+bk)+)&Jp%#4"?>"<5!?& ?M#bbbz$ Rp&"/%B%#L1L15 `?Co yrigTt (c)02`090M{0cy0oss0f2r1r 1a0is0n.0 Al0 y8s2e0ey0v0d0"E1YI4#b-%#X3 X7A&%{0bP59 OFX!GoG'0UBŔ&!$a oFJlJ(>UhoI j1h$Ac!(`%#F'$F?>D{Q>L!gfQQ R "Jg V?Fo-RbHQ??FT!R?P& lH\f6V\P5E+UDv_;*UmʙHQ:? w8s`*%$? .l?Q? \ς?D4h_!R otogr5%Q3p_SFhIL?F~Kz?P)Oy޾ F @c"rA b$^sPj+u`dl(?ET+QdAf u8<$HD:  H h4>T]]9 MIAUF ܋?FOa>?Fq`5B?F1*ݷ?P6 m8> (JuM` ?#SS2)uBJt?  \Eto֙>_B|KR>JU5 Jp'>͙#<?Z& ?bbbz@b#;A&b$B 2N/N[Z$@MI #"&(&D$ :;'"*#d1d15 `?Co; yrigTt (c)02U0090M0c0os0f21r; 1a0i0n}.0 Al0U 8s2e0e05v0d0"]1a4 #-,/p3 p7Z&'&iieU59 '&X9GG'K0UBZ&!u$K0 FJlJAUhhG1! (T9CFJ> kRbT$R}R( "J:]Uy{䷧_8S s@ o4%BTV.Tw?F:^֝ä >Nk"?YUU!f>)?\(K8ӷjmKl? a\$QU5Ui,SlQ%iu~!8hE_= T?F;uQuo`"rA 4sjQ޽lğ=onHD:  H h4>T]]9 MIAUF ܋?F<j?Fq`5B?F^ݷ?P6 m8>M (JuM` ?#SS2)uBJt?  
\Et[Πؙ>_B?&}>J)>JU5 J#2N迴Nk?S@M #JC&AP bu(b)J++&B1[- #L#?&| ?$H*'#O1O15 `?CopyrigTt (c)020090M~0c|0osv0f2u1rx01a0iv0n.0 Al0 |8s2e0e|0v0d0"H1L4 #-/[3 %[7&'&iie@59 '&X|$GrG'0UBi&!40 rFJlJAUhh7m1!{!Պ (Z$CFFUy{?> 81cKR R)V "J{TV ![?F=$. >Nk_"?YPSW>OcK8RKl? ף_p= $B:]'i,S\6aYss9$Q@5UV+6o'lQ:ifOdD`i[Dž}soąo82{_U{"rA z$s"HD:  H h4>T]]9 M!AUF ܋?F@_v?Fq`5B?F1*ݷ?P6 m$> JuM` ??Su.Jt+  \Epteap'Eqz>Bq|7>5 Jp>G͙;?2& ?bbbz@bAi&s&Bh 2zGztM?+"@MrX"&v(bu)+s/8*T<1<15 `?Co yrigTt (c)s02009s0Mk0ci0oKsc0fq2b1r 1aq0ic0n.s0 Al0 i8s2e0ei0v0d051P94-H3 H72&iie-59 XG_G'0UB2&M$0 _FsJlJ( >Uh_I `Z1m!afR* zRQd J:`]Uy{w_$S s@ +o4B`TgV.Tw?F:^֝ >Nk"?zYQYV*)\(7$djm7lb+d ha\HD:  H h8>T ]]9 M!AUF ܋?F*?Fq`5Bǯ?F1?P6 $6>  JuzM`l? C"u2Jt/  \Etti|u~>Bu뒣;>RJU5 Jp>͙#<?6& ?bbbz@b#Am&by$Bl 2zGzt6#I@M\"&Rz(&q$y)+'q"*h,#:3o1o15 `?Co yrigTtk(c)k2009kM0c0os0f21r 1a0i0n.k Al0 8s2e0e0v@d0h1l4e-:?{3 {7I6&&i bP159 &XDGG'0UBX6&Q$0 FRJlJ(>UhI 1lq!a4DC9Nk"?CYQYf.Q;(hğ=;Af/h (\UGD # hz4TYYP# 9UFs2?FM&d2?F$p?F|>׺?P} 5   & >hRQ p  u` W?   *<J>R\npUu# t  mom%ty!V"!%' %'ؤUC~#30U82?҃@&T3q1q1:"`?Copy_rigPt (c) 2U009 M0c0os0f21rԚ01a0i0n}. Al0U 8s2e0e05v@d0"j1En4#-d3}3 }7Q?3^=4~6#DkFkF gB$gdaO&7*THT"(Pa%/BQ  2iRcE U6rXFGE'74ikF!dD0 EZeYk} 3 QYg%kd>vk5kd#avk3kdpvkEkdvk=Ekdvkkd vgHD H# Uh4>T#A]]9 MUAUF‰IJ?F#-c?F?FFt=?P6 &> A](# D<# P# dd# xJuM` ? #2FZnuJt  }7Fi8&t-!G"R7NA9%UB'DM"9%\'+d>tJU2wq ?=@M#&A@q"b(bJ);6J鏵#"wJ3?]6 ?A<#0Ɖ3z #V#bA38/9*#115 W"`?CopyrigTt (c)#@2U0/@9#@M@c@os@f!BAr@NAa i@n}.#@ Ali@U HsmBeA@e@5v@da@"1E4#(#5 -O3K 7]6Fi@ieE9 FXG"'0UR]6!x40 7VKZl:JdTU#& uUT ]9a x 1 A*# 2&#p +5(Z#FF<g60Q0,#0Q4"1 Rf"J! {-?F'<0c 0TYǮixxLpH$a5e r&7?FQh|a-|,CxEe}$X%lE?FV}w{|C|80}-|?"l!eEeaM?FO|7#Nqi}=ԡ$dduC_?F:s?FQ+7?FO9 uPu~0y jBqi/3-u_2O<-u,#$6v›l* ," z3H[o'?F hnd?FCsM>-|6`؄Ì~[U0!܉f;-aaiq x?F#^|P-|hP`]H<dp&B?FŜgN,5ҫi~l/WG~Ɖ|] ܉NQ*TMW.DPF)?Fa$IKbүnexC؆˄ILMƯدaWe=j$,$,hx?F3%?F+S 雖dNɛ˄@&fUٌ[?承uW[w?FqoW?Fsg:o?F)3⿗1܊5#AKƉsω5`R!Oǟ1a:dj{+ hʲdde5a:f?Foj&GCx9e2oeWcrA 3"5a5PeHD: H h0>Th]]9 MIAUFe?FM᜕?FXWF-0?FO?PDJW?A$> n JuM{` ?e ;  u*t'  Oț2mta{ԿHM3.Jv??v*Ij/I>JU2N贁NKk?<@M#J&A@bI(bW)U+U+TU&J#!!5 `?CopyrigTt (c) 2`09 M c os f"!r 1a i n. Al.0 (s22e0e vD0d&0!3]$#y#&#4#9PO"Pd+T/u/%-#,# '6% b}59 DFXGdG'0UBţ6!4a dFxJl(,&}>UhETh]]9 M9MUFJsx?F6?FI?FnjrL?P9V> $ 8 L ` t JuM` ?.BVj~SuJt  5hޅ%ty!"_:~څ%''"%'Bs>JU2N贁Nk?<@M#J6A@"b98bG9JE;E;E6J3115 "`?CopyrigTt (c)02`090M0c0oKs0f21r0Aa0i0n.0 Al@ 8s"Be0e0v4@d@"13H]4#y36#|D3I?A?2T;D?e?5-33 7F%0AbmE9 4VXWTW'0URRœF!DaR TVhZlJTUh6UT]$8]tA1 QJVb(T3RT0bLx"fA RSf"J:mGӎ!=Мoc ]rte$a31sE?Fd0!@c +HjẏvR%amE@uRU?F.$wX|'1Tn|pA-x@uӆN?FFoX|1Rqyh -x@u`CB2OX|Ә {n|O$ dCe?F_';?F4Hy ?F ! uPu@[y!Xn|kb_6nu(nux#vi8˿IQv x" YꥬM$5zG?F}Bh?u?FoyD?FYό Ə،m\ҋ:n|XAB0B5R+?F;(h?F?F;S4، nB6{n|Jqe!{q o.2ZQI{]>F˻w&t 1t?FOiL،?ö}>{} SS~DxĶȯ]z#0bb?Fwϯ،BZc4s  x̶.jlT1|t[mi 0?F_?Ft ҿ t߇R?ɨV=7Th]]9 MIAUF9PR1?Fq`?F]Pġ?F_Ӱ?PDJW?A$> n JuM{` ?e ;  Su*Jt'  Oځmta{_ t-3.JvĬN %e?7Ùf?vGOw{jyƤ>JU2N贁Nk?<@M#J&A@bI(bUW)U+U+U&J#!!5 `?CopyrigTt (c) ]2`09 M ]c os f"R!r 1a i n. AUl.0 (s22e0e vD0d&0@!3]$# y#&#4#J9PO"d+(T/u/%-## '6% g}59 DFXGdG'K0UBţ6!4Ia dFxJl,&}>UhETh]]9 MJUFâ[w?FVFc-??Fx?P9v>JuM` ? )uJt  81QEt9Siޯ2ENEh^ڻ IH^>2N贁Nk?<@MJA@cbbU)++&Jy;?u)?AbP)/%/F%B!!5 c`?CopY rigTt (c)(02`09(0M 0c0os0f&21r0S1a&0i0n.(0 Aln0 8sr2eF0e0v0df0!Y$b-# 'Ax& 0bP59 6X7G'0U]BŔx&!$a $F(JlJ]Uhz@11!ZF(?~O攚   8P[&~ 6RcVJ:)Hpv ?#V Th]]9 MMUF"q?F땘n?Fq0>?F_A!w?P9@7>M  $ 8 L ` t      8L`tJuM` ?""."B"V"j"~""pu)Jt!  
1X{%5t132m_%5.7RjX6H7!>JU2N贁Nk?<@M3J6A@C2b8b9J;;6J=y3;?3UI?A2b9?O&EB3AA5 C2`?Cop9@rigTt _(c)P2`W09PMPc@os@fRAr@3QaPi@n}.P AlNPU HsRRe&Pe@5vdPdFP2APYD3-3,C GXF5PbU9 VXWW'0U=bXF!sDa Vjl:J Uh^maa]`t AQ֍A8Z3FWY?F9#}"# C22@fCsa Ur)[v2JAURo?FN?F=!?F G< 6uPuyA/ybMv Js 9o#"u#|u32$N 4 ?A~0 f? h#|S)?"2@Ts(<2"41rÀ5&Eyq3u0,Ǥ?Fs^?Fq?FS|xX1y6>|>E<ti!Z$@nqcN#`qBsq- dM?F"7z@Er4s0{욹nZ|q9a!i[+|ҼN#`V}188| m vqt(?&4yq`y:#Z?Fvd@^=PiY$|n׽\|6 ^yEb$ ^S ݿN#`"jtyVp`iK?F3H?FڛY ʜ|B6yI^a};I| Yb[soq|U?n \*O#`7I~Fmp1BB{?F>v?F 6Qqy hq|o]^})f~^K!OM3`Xjy%gzp jh?F t?Fŷ 9X|9} |)x?pM` #e2e:~Cra D)2HD: H h0>Th]]9 M9MUF$sж?Fp b?F4tV ?F_<?P96>M  $83L 3]`3t3 JuM` ?3.BVj~)uJt  DYU%ty!"8Ӆ%'fY*%';ƐnƤ>JU2N贁Nk?<@M#J6A@"b98bUG9E;E;E6Jy3;?39?A?2bPG9D?e?5B31A1A5 "`?Cop0rigTt (c)h@2`09h@M`@c^@osX@ffBWArZ@Aaf@iX@n.h@ Al@ ^HsBe@e^@v@d@"*AY.D#b-3=C =GA6%`@bPE9 4VXWTW'0URŔ6!4a TTVhZlJ <>Uh$]T]tOA Q1(`3F ?FiXe& 0On?Tڿx#EA_0_;Jb"JaFK`E?F /?F߱)\ uPu1ad0kvX'qeDz`Ֆex#P_v^ܿ4 Sm?T0 -? YzO?x"pp1@c](Qv:y!r55a3id{ Y,?F\?F O&&`VW:E3 xui?k0M5eigJA|@ W|wv0rIQv v uB@#aE^0:O o?F^k#?FTL?FY.}*\/~kU?F#]ҝZ?G3´<]lѤqA|? yFז_t&o˿Wi}id3.aДgSh?FDR<<ddT0~B{*!ZyqH_C,qÿ́w?F?F}SR?FD?ƙ!FsU.RM .fMA.zM +M++M.++M."M+"M+."M+B" M / 9Z#j"M M / 9#"Q& / 9#"U@#"a#"y&/9# 2Q&/M#"322y&M='y&x(-63"-6(-6uM`? A7242H2\2p22222222,2$,28,2L,6d(6x(6(6(6(6(F(%F(68686,86,3B?T?f?x???>u#9|J2b&baFU*JRu0;t1 b.!ta  zKdS'& 0JU2zGz?@dMc vA@b\`;xbIyV{YVv,r !:tJsrcrv rArb"Iy{Fw,r|,rMNsTTs`?Copyright%(c])%209%uMcos{Ifzr}ai{n.% WAlр sՂUeevdɀ+bMs` `vba3Wbtbiv eqX z/'0UÒv.0 zlJvUt@S +HW!!1<!.!@B!oe#Aqhq}?FTOp2c r\2"02c JWidxȯ(U?F:IВ?F -e젌"v?PN,DT! uPuqMk 6̍B!23ץE c#)!:2 Fbk  1d2b4?Q& Bbҿr45u)3ߤi]~젙K.LL̖bTJIr^F_ l ^c ׳fif|F~1r!o` GR["hJY@]+ݾ?F6vb"BFϤ %7ɝh|ֻL~|OߔhcAXT4B;CS;YBUߤ_qt|ȝ\ {~TqUAdɼx1ֶ?F|DB젓 Sb\.B| JLǝrZ~HĢg^ޥFg?F$@Į﫶 ,3JaE̲Sι?x<#̿- B 俋}p4<'xld~1<ܬCs?FicuU FЪ֌?< 1TZ pq~Y,ͲJOuOOadO"o4orVbP}q1EʄOCZ d:d?FQC;sl_<>Д]xeW. buDSwV*ɟjoko}o(ZoUgrrPʃL?Fse9mpi1@ps?C>O"Y+Aq[*2l>M_Xp#:,?F[`XK ^$ǽhXvFLzN?F-\Z?FM+ ")?P@Μ<^'k<\ 4.iyq1Kqkjцpqx"pȯ(UP:IВMk=6̍Bᦺi(֋P:mk?FJNHP?PN,.y1",+~]s\߂~]5zn5]?]ȻEtAF5G u$6GaMș<ѶgQ>E% Jצ#{~[:ĂKǯsԪhZ*ߝVU4db62GA,ϹJLk81-s߭f>T (ז0g1!e\%zynA,=͒}s c 5P=D mOAؿvP $kL`C@Y?F,ɴZ`2a]o*"+H) $WV= ً5Od.t@//GB"I0}!!v!M ~+ ت!fLZf-?FOgE>k^:Ўmz@ܐE! cykA$i/ Ok/5BOkFBKHgJ&?كd v:Ì?JO\O1/6O1_C_y%tNw&q00[^cAD[F=k"l4 x QP*% T|0IWj&?F?0ۃ4_#BDM[ٺF1يC nf1Wob!x///oHokJ?~CvY_0 ?Fy]Gp0V~YPM'aܿuiTױ ?pNo=W/_$OpG/_قuO> >F6q0>XY'O?/?A6ُ}Deٱ 5?u_aX݆ϟs+?QcuAuu?FH_*|>Pe hV8?FeHN55/Ԡlփ~+'vc>`h?Nk"\T`bmMPyG&_Sf \l>{f`lER ͮF9S*#s B}PBA9!F8#TEXOB ~dT%@+(ݓMsG:j|fN?xXkL(@P(h ,Dؗ$U-]A8 Hr3 H !O^")9B /UFDfP h>$KT 6D UF~@x! <`>!kbs{kbsi^% X\/7'TFJ!DbA[ qʕHDB # =hj8>T ]]9 #2AUFM&d2?]?Q6 ~A@9 hpAW(4?u`l?Xuhib  #AEJU `0.UH :l$'!f$5 #!`?CopyrigTtk(c])k20 9kuM c os If"!r !a i n.k WAl (s"Ue e v d $\#2q+0?*?<6M?\.?lp*E&B=#Hn&M4?6PMM4-,(,m# m'6##0OoNk3@*6'l6 ?Uhh#3 t!,$ BWaBE@BJ!>BdCRQ, 9VM (DAAO?_SSUR%c^|__WSU5cU ___C2O;rAc#p (e jO(heoiAb59 7,X\,'w'<4N%!4!m; v$zHD # =hj8>T! ]]9 T #iAUFL&d2ٿ} ??FP!3|>2u `u bu  @[A-0' - luu`l?ubC Ae"& J0G6G'0<p'"R('%.&' '!$ +/ "&"7/"',? U`0 h:HM %B7145 11`?CopyrigTtk(c)k20-@9kM@c@oKs@fBAr@LAa@i@n.k Alg@ HskBe?@e@v}@d_@4="36MD?F-P?M-83 7ԿF#30O贁NkC @Lg@?!Uh35 A"@ aRKaSPJOQsb2t8UoAo,giAreE9 6X7g'2Uq`^!D1 5 fj_(Hgyr Ds{ 9O5LFxmBw#HaiDݑB 7]a&E@+,4aGŁiPH.KX$NPUFD  h,^TYYBBUF\.?x<F BP(c?P?tO @L^^{;B_@L&d2?sU!&"/ȍH$б?W/i&? ~t  0 ;QB`#Com ,link\netw r*"p0r pP1]a ,d0v c0 /1^ ; YP!3u &?  ;;" ;;;gr330;G;#7Drag onut heqp ad likb tw e cmGuiCaisd vOe]sBKcnTe)*`i]^o 9il^)n5k].b\.BP(?? 
[Binary Visio drawing/stencil data omitted: this span is undecodable binary content rendered as text. The only recoverable strings are embedded shape names and help text (e.g. "Terminal", "User", "Key, logical network system", "USB key", "Card key", and "Wireframe UI user-interface mockup" shapes, plus the hint "Drag onto the page. Right-click and choose an option.") together with Visio copyright notices; the remainder carries no readable content.]
~?[>G%` LineColژ rz"#zbOTf!/&A%lI D="7_c3l#"A1c1k7G`?!py igXt (c)020090M c oKs f2!r0!a0i n..0 A l07Us,Be@e v d @a01>`V s_` RXY @cXm!#50968`1\nq@?OFA<;DOJQ @C@J@u-0zLC!6 fSz"*}RCuQ#it;~B!T<7_JQlJ!JUl-a-aa) & !1da%1C1Q ZšX'.SrXY0{_Bv߇Ne> -?FI:lw(l*3J;x5e"4?Fiwpk>PlL1+ruH5e֛u7pF*tkOcrYIvu5eGylt A_yQuEe3̼6?EeunAv6C,þ¦|E&Lώ06dOa|e2Vo8Qo ya~i_k9]{WΪikuԎI݄Xu*DyD┖ua"TgASuaiP=7_vʩEeF[15GNvMs!iB5}/p^;jF?M~1sqeueoo"dA-o?oQocouooooo Pv׶?FZR6l!hՇ'Ǻyv;t=o Q< ڎCX>m"Qgb-jB?觪Sښ̙p5. P*y?FgyY߫ڹ!`&h8kl@Ew@z?_,ԙ6ﭪ8H5=st?Fib΄q$ڬ& ~h̢fܝ@d+):ayDo?},.ڈ5ա44H?rѕՋLF>L??!<vYI܏>I?O]J9VkIOq(2z?dOPQ?O(O+B.jTOz,ՌP8 /;_|lo"(//7jT_+BA4q`?֥+To(V/%/rdA?+Bu ?z,-?SDoB;M֕q zTI/+K_2n.?ar4oA $}6éT)XɅ4Xr4۵dF':H@Xf[ٚ&H鏈ʄ)+Byd*40r^;j|[_m_gBYt_ , ۱T ۵&ѡȑT%$Fү.x%("4FyէQ,ύ)ϟew=/O/a/s?'?9???O%YkOOOa5__9E?QuoooUM_)U%7%7Ie!u ՟q~u\g!d!n/$$-$#Ti!r`Icon!%Z ,@"(qhԟe'\* OBPNA5@6"<5i7i=?O? U7'2qC@߬!褿0q. 6BjN?? e{+GAO!3B2V #[FF/_M5_H8Rכ t(.$IFHL߮E#!w B 4@e kp+^Y=B@+ PUFDS  h> T6D BU?PU?Fx<F BP(?3P?|; (^-  0B`_Wirefaum,\UIusriWntfc%poy]mckp]Aplctofrsdaog$ %)!cH"be^ V% --H n????&?2x762F' 3333G333?v?3\?wq<3 ??133 @H}?bl?rUUUWſV??Ǫo??hۏ\5JF?UEUHTD &# J>hZ8JT EI/UU@l?rUUU?@VP ZnM Q $U(83A(L3(`3(t3(@3(3 %(3Q ) 3G[" ) 3#("G#<"[#P"3 NuQ`x?d "."B"V"j"~"""&&&&(&"(&6(&J(&^(4"d"j"uUt)#NU0WUU?F|Q QN CYYYKBED`H?"NbDz@bTܙ@@BOG)R CADB`?Copyrig`tw(c)w20&P9wMPc.Pos PfR Qr PEQaPi Pn.w Al`P XsdRe8PePvvPdXP1N`VPs}_s@RXYXP]c`m! @5*P3P2QV\ C2qP?H_VCAv%xQTo@@S@N+uLr M-Aj"e9} p"/bc=td"buac74x=D! Cb`&1Bt5M qliaUt(@QUmq)c8ia Sc#Pj"c e~ |!i49H?dXQ?@dȽB?@1W4p?@_- )s@,Up}?ZdTj"p#K`'>dT3q!2 UEhg ?@e?@ɱveCzL5b?h* Ќ>daVhvU-?@}HvUg"UYD ~Ev}Ќ1?mb Ÿ6ehBB-kYd;?@A<uFu~+Mb>nlЌTzU)uhT}x?@^sZTO[Ā6RگWdWj4? yUs4Č=fADV\h#?@QB.żd~EX6 ?߃Q= XL(L(\E~h*X9v۲VD1Vd~\uh#@65XvrQ9(/4eOӉ3'U}UAߌqQ0wt)u4g~؇ϰ+@VB)(sAd?|9H<vvr!z\&E@e#Xj -Y 06eRvt|@pBU`JB)@?.`jfDy?@$$5UjekC2؄h FOAPQѐa jf(`{H:$7-؄_>&h/`@@+rh@p)gG/Bt)߾`Z|Rw6// L`9P71z4 ?\/n/-<؄?QM^>7Fy?;qqNVTڷ1)S?` AI*U(T㥛D?@,SDS?;>BQ7?WxfȩU T6D BU?NT??`U??I?3P|' (^-  0B`YWirefaum,\UIusriWntfc%poy]mckp]AplctoWbedac"ssrpHbe^H J% 7U-H~ ptv$8?@K K^NC??3?x3?3~333?~?3?3  ?@8;?zb|Gz??{pܑ֡`M<+ ??迏(\37?z?UH XD  &# U>h < T$mmJ\zU@?@< ףp=?@?A{Gz?P  C  Q 6 U:JEA:^E]:rE:Em:EE:E ru` ?ΡE,2@Th|FuQUYYY"?. ? "b03z @bT;0;0uH1S000AH1#2q0?"#[+5;0G%` LineCol0rVz;0*10?2B6F3a4F9628g1G7 !F3 &#xX R=nGkC!3MB B">\0AAG+2`?1py0igdt (c).P20:P9.PM0c0os0f,R1r P1a,Pi0n..P KA0l.P%WsxRUeLPe0v0dlP@1`V0s_0RYlPcdm!#5J>P5:P2AZ \:?_V#k0d'ora1B0 ?@3vr@L2uLT5)-H1)MA(P0 @c-MBtbua?LC'$->ALG "x2lUx0BPQnD=c(`Sd/U ?!fc #:!o vv \x"u8?8YF JbF,>bHSdg$Hd?An?@J}?A&Rp s%0=)C}̣_ |ɣv?ʣd.qǷ#" |?dr;$E̅]7'3?A Giе?@^h&?AT8[A@߅" 4 KܒJSB$|۷`rH̅~fw#o?@oQPzPu?A3f ;bF4,nJz?\dP1#5Z̅zh2rs,|j񾣏d7 DJLu%BiJeu?'& jgdؐ?A~yiC\H?Ai>[zuཹ.{dqQu;̅S7?A%$?@5 ,~ ?A0Y-0+L4<,Ji$ʇm`,4>Y?AݸߎyvVE32s;J?Em`ՆߘBoqz?MVУuc a2rq `36:?G2S+I[qqQQ|tyueEyuE/,9lAA𼚙3dm۶mŸmht  ѻqqU);M_qn?Aũ4gˢ$I$It.?(/f۹?(\qqB!4FXj|ns#*Ѧ?Aw'ձ3*t"C»qqQ?Qcun\N*f"O//zpgGd%U!TU!c?k+HrTq4B`2}"uH3TAaAh^PASI2PiQT1}%3d,%@28aq:tq`]4EyxLHTYdahwCC<b 1ui(2ODON|RxXFwG7'T?M}Eqbѿ}pFRxjOON|Rx{Fw!f_!2~eV03PVV$o]}e_vHds HأKw@ FL߮G#H%wB U5@eTko+/oNa PUFD  h> T6D jBU???]Fx<F BP(?R3P|$ (^-S.:  0B`UWirefam,\U]Iusr_intfcpoytmckupAplctoT*o brMncpHle^( L% AG Ln 3 ; $3 . c.G.?σ\?3q?{u?3333Մ33?33? 33?q733??v3 333L`353?730bL&d2빿?JXߢ=tݻ?W# |5?BP( {^?M?UG DF P# h @T(PYY# UF BP(?@ ?FL&d2P} u`?Mu#8 >D ^B:DDXfqfl]}bz[@$!"d$2U"?YYYF= uuj!b| !"&#" b) !y#"񉆕#V'?36 ?/"[ԉ![%` LineColj0r["ٓ 3z[bT["(t&=/-2 3P3"/# A>A17"`?1py0igPt gc)u@20@9u@M0c0os0fsB1rg@1as@i0n.u@ KA0lu@lGsBUe@e0v0d@01`V0s_D0RXY@cPm #5@96P0t\$q.P?-_?V?\.?A!P0@7S]^@#uL%-!$/! 
$/"9fbu!a5ceOav% BZ,0o Hl((auQ9sDYhmgdaTkTAJt"PYr3 (DZqFp>A>AD4`@1CIy@ATgq3Cs,@rxqoo&TVV4<c%nirX7'?T K$Q10 ~!jV{73?!3 4ޟWUHxuD4B &# 4  >hQ(TP4 QJUF x^ъ?FG&d2? FAP t`  7?$tM.5*VW;D;YVu+ 5uE#0W-U?F YYY)">(?"bk$z@boTw w Ef"l/~'!!s`?Copyrigt c) 2009 M c oKs f"!r #1a i n. Al>0 (sB2e0e vZT0d60B( p_A{kUlJY0UYYO!M}@7?FW[&4g;i(EqeEO2ODL[N?ksOO AHHNO'OOKE5OmO-_O1Q??????__xsgˏRu -c[BϡP(İŸ 噖透W ?vECsaõJݤRC2 rqiR"Vaq8`'Saqau)K=Ofxzk sY}yT mCECK@b1ew R&J\\VECeBs&a%DMA6O]j>mtW6 /_HQכ "tD9UcFLuH#C'B `6@elko+ȯoQBG$DPhPUFD  h> T6D BU?zͅ)U??@U?Fx<F BP(?3P?|; (^-  0B`WWirefaum,\UIusriWntfc%poy]mckp]AplctoWbedac"snb e^ F% !--Hu ,33? 3?43 3?O3 ?O:KO3 <l vZWOW?63?b BP(??gXKҶtN?l?s*"iJUt"&\-kU?<$UHPD  &# R>h -P~Q("=Y*ǪePf̤Ob;aXY!]!䟿 x! !˿ïxƕ& łߛ:iX "cϝF>8/!Fϖ˦IQQVрxe A!@z0B?@9 {ޜ?:߉TGaƶȸ; &)א?F v?@cB/ڥ>J  :ͅ10uy嵼_TШߠ>ǯ?OY&ߛZ0L7+',ݠ{D?@.AΙ?Fٵ{^\>Y_Od;f܄-؉؅BSj쵼}(杻^eƕ Z,-F°u?F~vn& ?8V Y⻣E腬\)]D4q&,ex"ƕ2ߛ!]ybՠl ?F~^P$! &3'vb'n:?-!l~"™.BKB$'ߛ!0g/y$zE KU*dPFVT|?@^e%Y!sDq~5߽j֧>b 92/*™njA4ߛus/f܊6՜ Y=5F!'+!<2!_ o?ƕyeߛa/j -?Qcy#Fc瓞(qȞ؝(MH/|F%kJ= EW/{颙|ӂُ޶<} z??@z` G'13E :Z #3 Q'p蟢OL Fqۛ֟mTfFҽ@  Kҏ/9&&!Ԣ8/QXs/kG/#/?6Ų/?q '?@6 F?@59(W??k`S6VJ?_?1cTOQoR-O?O@\D;Gk1}jO"ߨS\ROnOOO c_R\/K_r˻tV&sK_@` ik _o7o6||mo+p^>fee1ѠoXxhxo*Dk>k .0QSqgtqqTqa@ɧx|T(-%TSЮmrq|`Icon,%@qtq@[y(ix\JEBP?@Рn<$i0/B/|xX*'2q60?d!0d &xjt//|ʧx{7Ai?!322i6N66"O=_vH?  cs La{POJMN FHLH#(m& wB 6]@eko+8/]]a%'!< PUFD  h> T6D BU? qq?F~?Fx<F BP(?3P?|8 (^-L.A.  0B`ZWire]fam,t]Iusr_intfcpoytmckupAplcto,UC"tl,\R !s-!bx,"]gid,"c&"%une^ j% -Gu ?&4wOYLwu YNJYdFDrag on]t pead lis bx.bi߿ǜ!?? mF1cN??U`X׿ Ta*ʼy0OໆMÌ?UGHD # =h @>T(]]9 #B 9AU@//:&RSS2 `UO;B` *UW<0iu hp8"COaQa$xboOM.{_Z #1AbS\qntuv]MZ r5L% VrC\9sT?vM?\.5?AB{*9s8AG;C?ap0rf kc)o20{9oM 2x!sy fmar` a7.o AB esel2rKv<0d@1cV0s_PR0YcTm!#52O17ABhLK`4! ;@߁T!A1 u.wV#S/J`!*! !(rB s9MH aO3_xݧOe_ǯ O0A>ObrA]Ncg eTqM Bqq3b@AArA=qqEKKGA8Ad -{ĢqJT3@AEϩɈ3~88x‰@AcQsp2BeGedق'9K߉uma{؇ŬzCoӀ#2` Y oxoi0e{ASClG2TT|?ބmbw3>(ۅA:L E By!ņ,iFbe9 ^ E 'BF!I$1 }9B4` am`l<0x1q7"^3` Al0ib"bP3B2`K TtoBqv[#VVB<s,Z - E:.-xTBjnVqDF/!32"Dn/Ue&A9ƙU-`lp'$)+E$}a'HD # =h8>TB ]]9 #C"AUFi?Fǜ!??@}P6 u`lo?u#o`bt v 1r:Aw%  b ul`A@0t! %$2AZD]3JH +@ uL u I.g  t-^"t-_k"'&!&'ZMC P3K#9A@##2Mz2q0|?(9!L+%g68!t04e2V32Fll<Y3c6c6P L"A4 D|jAgU<` Fil@wCo@or1yglgAg,[bCzOE3vFV6OIbOO).O F KF2\3B\:?WURWM?\.? lfo:5%P1P1X7;`?Apy@igTtk(c)k20`9kM)Pc.@os@fqbAre`Aaq`i@n.k A+R ihsbe`e@vB`d`7;B>0>UhhE5GxZai#U+q2)w:J`6At ~/rSD  8rM(1 E/u3~%I'u5u@wvNru7E2p,;2rA 72 cgthe>{ErT sMAA";HA3蚔"*7BJ3l@aI[c6M`+H`a`i`(H.B%Qll@Ӟq2p 8BD' "3h:ȟڗB@tq`om5k٘7EϯA28keCunk1piAPAe59 D9XL7W'c6H1L!a W jHZŕ{7]Oc6!3XX43'ÕHD: # ;h0>Th]]9 # AUFi?FL}?FL&d2?F BP(Pn u` ?u#`bu~A@!)1t t W>5JWU I?#MBhM@8V0Uq"@L#b$7"&lVVV'<H Ae!9Q+!.` Fil"0Con"0orz9A9#7bG3z9.9Z9suN]3d3$'8+ H!e2 &M ;s`?'1py.0igTtf(c)f2`09fM 0c.0oKs(0fB'1r@-1a@i(0n.Rf h #1 Hs^BUe2@e.0v0dR@P#"0?],>Uh  1j2U+#p 8Ab j0#J`FVBQ7 fRM@B!&_F_BQ|U%[_ _Q|U2_;"KrA "[cCUHLuD  T>h0JTlaaDM >UFi?Fǜ!?FL&d2?@}?P >u~A`bSu >uA@taB>t  KAV$My $yJ== ?yQzNz2q |?@9 ! 7>&>(t05 U55"bf ?"R5"t&B~GKP M#LC] >@w(D:'5"t!6loG``?CopyrigXt (c) 2d09 M0c0os0f21r01a0i0n. Al0 8s2e0e0v0dH0HlJa,JUlK  1z/+!p 8Mb 7w Q $?FUѼ >$?&`$?CA?ARB9?\.¾OIY@5"r"4Q)5G?FB?n_bX_CHOGFV2OSU5"wSHqZ GAa$Bp;O Q]I+5#r& ~SI`!&_5BbZa W   r4QQUoi pY }X3:0}ogT$ ]4ZD xu}oa_BH ך A]kEJj FL .f#po/Q d]@a70@{+U^aGh`؜7`< a@UFD<  h> T6D jBU???Fx<F BP(?R3P|$ (^VB7@E G 0B`fWirefam,`IusrintfcpoymckpAplcto,C"tl,\s!bx"im"cr"u! gWid:$elke^ n%J 7GuE 7 #(hDrawg nd]dopit l s. 
bxdiem.b࿏x#J NAUFx!ApAV0g(gu`m u#>aa "=4>|-*/!"V"kL]w$A,% @$/@6 "bfA1u{$ffV"7" -V;V"$ -w#2q@|?Oå21r+`#b.A0]n2M@[=ZEA.` LineCol@Mrz=8lobCz=>AIYD"f2fA[B\ J?OV!#?\.?A/@h-/'"HuYT?VHAafFMEA:uABPckg@ou}@dOGRDOA0H)Ff2 JJ.+R cBu2#B0@|!"Saag;2`]?apy`iPht (P])p20#p9pUM{@cRs`frRar paapi`wn.p A`lpwsare5peJ`v@dUp`1w`V{@s_P_RXYUpcTm!#5'p218ae>b%9 %%@&R(bT@xp P7heS5 WT!"=An""`!`!"aa"ɄXRR{{"MAABAb CA kDAEEA'u"v!f1(EFG'?a'G3 `7v @ " pa3n"An%]~vw,prS*䒐=L%v#TaapG rda~=%>QJ{=M@ (%c\=r'b7 *)JC\B7b-B5A` 3o%f}>XU=U*\?M@QʺJ4r8rc0kFL&d2ѿu2Ϙ d#AQQ ~#0=FXSA|G>Em˩|ώϠ=/QTظQΓc"qNO:@Z% <<,@qeU"Kq_/! P#}j<:#>!%.(:J'BH7ek{BvBa۷`AMIjMpltAtA!"n"::""䑀"Ԃ**Ⴉ(Aeaeb;^`DAp&tKrqe޲*n%s)8\`c`&LnPQbM~q !72vRHߒg@~Naf #5%VTss ec ;`!u /2/D/V/&Dr/@/ // Օ/?%s PI&MQepLdwrAs ?@E?hh?s&)Rq U>B5OGLG DXUgO?Of&9OO,OO Y!+_=_O_ a_&)wK{__k8 __ Y1#iބtPz]u#b'2qp?[`U1k0f`"r )XAia4ǁ&qU ǁu\re Չ]ea /#ޱ rѸ) 4Mǿ g.5:z?xM(Ġ  HHXp?"~H9u4YsڏJ !2ut -fjTŮoo*{1d vԟ !3%w&xw {)xHDD # =h ,>T  9 #EUF?FxuR`'LR\u'Jޭ0ql?}@Aun`A Wb č5A|/ȿM?k& AnZ4` Ac entCol r3zۼAٖ&#z}bK)אb# / "!%?08<< 3R#R&= e" 3L)u6B117;`?!py iUgTt&( )&W207@9&M#@]c os f)BR!r@!a i wn.& A l&"GsuBeI@e v di@l>R0>UhhRRA^+ p EA ;#J` 7V\b\]QLt2  3~2RMB(A_d/a_]QU5^__%gUE[>PoS_3oU2X1_;rA *cCBB(7erw:GC\!މi8QeE9 MX7#w'2;<4鿫A #v_H ך {5њ4=k\@X+FLVDg#(mEw-B _8i]Ka,85F@-k7IaGhd@ PHkOjPUFD h> ^T6D BU?@m?I?)3P|t h^-\h.8jUUK.Q LE|  0B`[Wirefam,\UIusrNin& !fcpo&)!yt$ m( ck upA$ plcto f( rsdaP ogl/`* e^ %% UGu )7wwwYk} wU{PXDrawg nd]dopo%t e)ail}frm.mb?8?I$k?%5UG HD )# Jh<> T$E>#MEAU@?@Pu` ?u#t&L>"MRbRvRJA `:RDbvAT>&Uc<~# 3#0q ?åߥA?֔nb@Z.` Li eCokl rzۺ Aٺ ,7b#z ߀A)5 lM '3'3qX2l^2A+q2?o5 ? |*&L'F l /B&/ = )N!0! ?2?? >1&^!!A` STad0w?/..?1Ʉ'9 0CJ[:OY1  S3L!# KR'3GV}"hL>4@ #Q Q(W;"`?}Apy@igTt (c)PW20P9PM ]c@os~@fRR}ArPAaPi~@Un@P PA2 XUs6be `e@v d@R1`V s_PRXY@PcTm!#5P3` ABgheTo~ lTd"GAaM"7Aa}"HAqI2IAqD8A qM9HA-qwR-7t (7cQQ SC` }AnPa$ABGqM%Xc$a6r qb #!$?:7v8CCu}%c[2UH?crixD`zDsqɈGqI5cuAz v #%!-Yrap^! ix2k M"GqgE1ix>{ +CR}ARM(MMfO1eaɃŇٌCh*wU@PHCu,ϯe(b7I5 ^!Z%iMtM%it}%iN1eP#LY#"G  ]aU#bS7@$nB %,˵ CUgŊS{dMM AV0|TՀ9 gQ# / (3h Û(7e 6"J(ߌ3ߠ}%jI5CZ~G!A>#!pA^ U cpoo`A" (J`?ͣMcrjȟ!ArƣA? 7 SS1|lΗ\a-T%%Srnt%* iKq` RHbiz Pa"ae9Ńك21Sq pn*A͚PP] 2͚MP`5:naHD: )# T=h8> T ]]A# 5AU}@?2`Uߵ?2 @YA?PBt` l?@[9BtA`1t#-u3;*B bp 7AgbThoiIuBu FA7YJBBUMbAz:"]@B/TlL!a?& ?A}Bbt`BackgroundC l re1UhhE= A$+] L"J`\S}1F7a  6[bMBa#o!9o5cqe}%ooo@jqqe5eoo6o~FE}1X3>2rA 2sISHD: )# T=h8> T ]]A# AU2?2EH, ?2 @2ߚ`?{?P6Bt` bp 7Au l?A@UBAt#@]A)GV47B bXH4RB@RIx3B##5h7A$J'^, d&N" JBB:/zbAz6"]@B/R#G +3?XA6 ?AlBb6X:`BackgrounWdC0l0r!%6b3z968 Hm1526=#M+2 A#C^39AD"#Ma0UlB?åW@ #;H07#B#A#A+G;;`?1py0i0htk(0)k20@9kM*@c2s0fB1r@1a@i0n.k Af0lTkGs;RePe0EvMPd/P!@h/?g1A0>UhhE= A# /J`rS2,f"$#h6aD"  CCc"ZbMBzAo&+%?npe5noogpeEefo,o pe\Ea o,>2rA !@2s_SHD:  )# h4>T]]9 # Mz?AU@?@H6@@@ N??P t`  o?$t# bu! 33?)uLu. @Q]R,Y]A ԛ JA #Q><"l"A*"?(% |?bAdbb$zddSu` Fwil Co orzd` /2%n!n&(Sc+.In&N"T&7DJ#"=1U&" 43E#(9A2;Z"30q}0?åX 6?"A/U!.S'j\"32}3??6\.+?4.OJ>!"3P2J"3B4141<7;`?!py igTt(c)20@9M c os fB!r@!a@i n. T0! HsRe@e v+Pd PgheY_7U+T#T"";AQ2KAQ*3SAA`H+Pa7Pi PCQ C;UQ5QQSFrQ`B t@om;RF;TοB;L+PAEd_l6*SDTjh7Qi5ienY=(9eXZGnW'@G6uWCF!tt nVmEjoo{ZGG6!3:w;x<42v!{ ,A>e>4N -S4i JMq LTA#/cd9"RnQ'D7  pHK"8  p{0am5qa5qdٟaQE!>N"rA N"=Sp"bD a"g*T t eHD:  ;h8>TB ]]>T }AU2U?2EH, ?@2ߚ`?Pu`l?uhp HAA@Ryt! 
bN]W67<bfI<hGB6Bt Si BUN` dB V>+&%"+&Y+& JBr/IbAzD"]@%j/BUl_uL?6 ?ADt`BackgrounWdCS0lS0r!yDl.%D,b3zj?|5DSu `` Fi_0l[?` ?2e511274"!13B 41CB.FGh#M2 iC%39A%"FH񡅙a0?å< @@G#iAiAqG;`?\1pyQ0iO0htk(K0)k20MP9kM0cJQ2sS0f?R\1r3Pb1a?PiS0n.k A2 7XsRe_PeQ0vPdPg@0/@l>0>UhhE= (Q%!$+1 "J`S9  a G  6N"bM(too$4oceEneweNE>PotdE1o>BrRA g@B S(]e%_\H\!$, Bt&DJIR FLST#h*UUB9]dH8s]@+_j]7aGȨVP\i(5`ZXpchUFD  h> T6D BU?VU??`Fx<F BP(K?3P?| P(^-:  0B`[Wirefam,\U]Iusr_intfcpoytmckupAplctoWbedac"snt$ae^ N% --H n?Plww?w{}}&~~262.&"{sw?@3333?3 3?3? ?3߆ 3? 3DŽ 3%  ?G3 ) B'\T  u3??|?bP#GzBP(??x%0KҶ??#L M\UUUR7&fmF7?<$UH!D &#  J>hZm@~+DqQɼ]ԿD Ξ $6u?b{?Fm%exxϊǾ01EgZ+Iπ8P! ^>2JיĜϥ0d TMipӀQ߼\3Ѥ(q5"4j?FzެhNaCd;._:33#dh6Ih&f;Nv ~?{F5^Vui8j "~Vu6z< 3j?r 86x?FxGND  z'+KY!?Fi޶?@&N?FW 0ׇ=-zLߠe u j7]6K(yr?FćN|^r'"/ݲV6mXƧ~){E3?IOzU$6H8Χ׶፥0SlD Qh~N $J V~5Xo~ñ.C}9=20zTGjϨCA5DU@qs|"+d?@#5%<;QDpߩqD S=j9C_ p%$ij oY|0m6N4zD E4`[OOE5'i?{=O1tA9??*?<:4ΧCAiᚕ///0We3O1O͐OF?NGoҳ|ʡfoLE`~#`3~MUļm?ux̲EU@ E}/*\?@U-15|W~l?dDIL)FMHF$&A1nuAD}™@OROO@-KKp 0U ]ɥi=9 @fɡd@#ߚ?F_rW?_o13F 뾆h?O<Χ$_o o_uƴ5_G_ Ƕ}ooڟ&cX =M@ao,l%w.4m# 6(t%BZ:KN\33Sm>ab*)U@` 9Ӛ`@r?@Qv,nx`?)fw?(\M?KqTDŽ.ԍ&4)A~]<-`^2QǦ:VZd<#?5d_k +ǟRဉ//:Lbt ߅Z ^`,߮yLHxf<˳mӟmOq,`+ʵ۲LN`YچZ;\YjԭOÉA{ u8?F6<,klk0y;oXa?l[=#K,;;B'{tX?f"?{}, pAtȔ{'K#y1xʗj/A~k"XWwW?F>z@w_PAn/w)*U@>:Uç?F3?@ok(Wb Aa(`kux&y8\GVdܔVU@P/mgԿ.M q] Q} g~+Ac?Jn?@z i@!Zb-t^;6NlZ? ѐoo^Ư~>Ma|i2Ac;I Q MS;[&n+=jInIS죣w@G^(d14r ttKONln/~C݀0|Nš*=BQ~jτ?FOD` @333H͙zU@} 7@_?Fˢմ`0,? tҳNljW PO~??'?^ ///////U@I @,y~ۼn|?5M?˳mfE?~n#ϝꐺ0oicn=o};ԭ0RD  JX8Psaڲ?@N*l9))G (\ mTd l fTd߻JU@Zk&P  VN m<Lֳ%~ʣ:vLaprhv ~ʣa)](:ߖߨߺU@^a"PO̡giCK mOBA5HHq~ߌMZ m˸~vf ׀F 2ߗRͦsϒoֺF_=xrh +~*]2P:a֣TUR\5_aU@'$)_̡?/ݤ ;?~Mv{/̡/ 6K%?ob <m?`?Ԓ .N{_]ߌÙP&45xAri3ò?F֕߈Ltߌ$QM"jϱұvU?FxG?@?FѭL9/|^G;}4 *tߓyC]H:|///@[84PzkKy\4Qs|/rf1<_ p.>&oHfY)2fkxMgo̡87m`?zOOOOgm97DATAOTWlH3BT6AIt-tmH~sTlq 2`m3B`IcGonqu}sD,@rxlqmq2AQIt*IRUH\qP#Gz?F ?BP(?A:-Dr<ኾP}9i-?HX_D؊'2q$u?R}!0R~ ۆHjɏۏ폕H{ ARU!3ܒ2RUps<ɖv_vH!cs g9 WJHx FHL{lB#mw.B `8nU@elko+_joXBBGoPUFD  h> T6D BU?{ͅ???Fx<F B_P(?3P|C (^-  0B`XWire]fam,\wUIus}rintUfcpoymckpApUlctoWbedac"so#wbe!^ H% --H q??%?-|2-?A~?I>>80?003?h?33_?y3333?r?) #&?>CaDD?3?b僰|b;[NBP(??hƵ/KҶ??-_s!i=O&A-?<$UHLD &# T>h uA`} ?Ε/ *>RfzutJ>U2U"?YYYF9>uM@>b 12&"327b!9 !"3 N#""%?> ?[>10G%` LineCol0rz-0"03z0bTd0"2"31?FA4D5l,I !=bG_C3$2B#0AGG2`?1py0igXt (c)"P20.P9"PM0c.0os0f R1rP1a Pi0n."P A0lT"PWslRe@Pe0v0d`P@1>`V0s_0RXY`PcXm!#52P4O68A\$qP?_V#AdoJ1pB@ ?@SJ@3Iu3-1)2AC0Bc-2Bbua1C'$-R1LGğ'QlJUl0 QbD1c(qagv[?F&qiPc Ÿitm:$N~   #_K~$VHGd]wcAa?Fʱ"β?@#ba:?Fm( ƯK7A0d vm GJ UT d0#" ? 
%񍈈E:M~?@,&¼?FB47ʏ܌  {HGr}Zi ]fÑ4FHgؕ?FsF CϷyO>e?Fy㋟ڎ߫k%J ,o:/s6UdOJ?Fle?@T ?Flȏڎ u+}w{ɝ >6lOl 2ʯR1P?Fz$ a a{S]a'ȃ=}̠Jc@Tc?F -ϟMb=wd?20~ʑϬϼl8*7O W` -f` jEP dy@Ugߐu~@F:6?FƒC ,ۍp+ ]N͊-drA|qutlUGG4`?FAvaxg(wk۞\xw6HJNֵ,&?cHYEgEx7[ExnpqgtqTTV Ʉ8TAj"`yA(F~TAAC4 PASI&P]QTAcd,@qtu0/E2lL8TMdqWf2Cu0v<o'u0ߝk GzIށ0(ؽ0N(Ix0SOaA9I uh;@!9W1;@@@IH!_sʞ π@TZdI@EdI@@I(sNIRI@o'@I,%P{>*Y%PSE8(I?k7[Peya[P.'I@AIPLʛ /kgGP HR0 8XP1%_c-8PߒCM*S7kB<~aADe / TGF !"#$*%&'CU)*+,U-./0U1234567 9:;<=>@ABRC4EFG#IJKLMNOPQRTUVXYy ?JRT@b?X, @*<g( "*^p?(4wq #5GYk}D C:\Progam FilesMcsfUtOfe 14vsocne:\$03J\ANROT_.VSbw}q"4FXj| D C:\Progam FilesMcsftOfe\ 14vUsocnue:\$03J\OPS]_.VXSyq"4FXj|D C:\Progam FilesMcsftOfe\ 14vUsocnue:\$03J\DTLNER_.7VSdyq"4FXj|D C:\Progam FilesMcsftOfe\ 14vUsocnue:\$03JERIH_.VSdyq"4FXj|D C:\Progam FilesMcsftOfe\ 14vUsocnue:\$03J\NETL C_.7VSdyq"4FXj|D C:\Progam FilesMcsftOfe\ 14vUsocnue:\$03J\NETSY_.VVSyq"4FXj|D C:\Progam FilesMcsftOfe\ 14vUsocnue:\$03J\SERUVR_.VSPy}q"4FXj|D C:\Progam FilesMcsft*Ofe W14vsUocne:\$03J\}RKSVP_.XSVuq "4FXj|D C:\Progam FilesMcsftOfe\ 14vUsocnue:\$03J\SWA]_.VPSuq "4FXj|D C:\Progam FilesMcsftOfe\ 14vUsocnue:\$03JW\SW_.VPSuq "4FXj|D C:\Progam FilesMcsftOfe\ 14vUsocnue:\$03J\SWT]_.VPSyq "4FXj|D C:\Progam FilesMcsftOfe\ 14vUsocnue:\$03J\WEBSIT_.VVS]q  2DVhz user_(Oq*dSC!+/h/He/na/~b 2A@(AmE/9A9 *Nb 10Su U Ur`8?JRT@bX, @TFDyC ?P uh,T]UFzE8:&@FbN%@Fǿx187 #&,\6e &2{?'11 #)5 #.1:1t7z1 #5$A$A* #:C;LATZG #39GFHJJA #f[FdOOFA\ V(V((VabZ8gVj &k &l &<QBQAQ22?a>a=Q #n &o & &p &q &okrbookT3ll  U2)}?8;}0x{2"u `D412ws"#uy#s@L&_d2?}x\.x~sU#!CJ1u #s1?x :!@#oonsȇ"ȏrCU Mf6?\00"(p1\1\1(\1!  7:!D!!N!!!Ta1 1 # !41Q"n>1$%UmAGdY dYAGiYdYD!YYp1YN!TeY$AUAU@aUMe0%YD!Udb%YdUd1UdU(̾U-dU1Y*aUud!U!U1UQ!UQ!UBA 1UtAdUA*5YTad UmdUAuYQR5YJa-YiaUSaAU@^aEYhaɿxPGTaЅ6LdAwJau0iռ NA+`HpH\#` Cnt_ectr)R" o1d@aEW>3` UAo|a|i s >2 1Ae Flw zhr|siDVVAmSAm(aUd(Azc}s) _@to /.1ZS[3`SxuJi|y2 /z//6mEy!cl (Fx|!p12 ?2?D2QaCKAHCXI&`hQiю?p5/Q˱Pqp1rxT^$!A!AP uPPmWTRpEsk(O D"^absň)q^_qTD֔ Ta`d8TexUaQMuqQ#tf3r?@p@d?@B-hUpsS?_znsry?uVRtq P7UtqRUWa<"LUg<'4Pxrho eV3fomzJA&qj\?@q`qX3v_PgQ?ym? $@sq`t B-mt7Q@yQBP(??OPQ8@Uiрq,YYENőg! EQ@'n?@qTn74qa[´?@vh?@^G9}kQԚ!W ~u]?u v4.Bl@<#jໍ|߆lsoy ,Ĉ]bv8U@ f@装t8t)%4Uݱ@a7I[B?@u?*us?@Cd ^cPNྏVX4Х|.R@?@2JK6?@\_-]xa?ɇ*51tl" x6~/]a̧ %!PS?@J4^e]_Sk[̧EU)Y vSY4?@~7WYC F:[̧A9mS@NK{?@+uYIJ3ķyrDf{JL]K7j}YҡU˕ؤŭƣ@Лap]ѐEq(:L^pbGG$hï\ k#Y>?>iS?@[}WʽoC !z:ψ P]ύoϧMÅǪz LV>}yP *&8fSECɒaߍfrf$ڔ߯,z <E4"yW/{Oq5X W })X0jXg!A*WခACB#%s9/XeEA4 WB1ziA/@_c$c?@UUh㿁 ?@|BʨEfCV1!_Q_VSSFX% :n[!R?N~k!R(\;c!RARMdaXeoXb8IL`Obgd0cl?YYYfP*"7bbSz`b#`ULq`1%bXc3_:h|?PcJSbvdRcSeRu`` FilwCoor*[z`` o2eafBYFR\~32S?vS?\._?p0SAG~xY=_3 7EN`va-2p` `d??Q?X;Hc3??._'t{*e!Wd}1"Bj- V3_PiR8r#A,j/O/}j#Ay;H/T"B!CFF?\jPGY;H.l@v!3a "Bݡ@zɢ4a=Tw> k䠿̈h;L1cG*14:?3ݕ˹I(,>E-nM~4Ǵ @@2Օl @D?@p`?dG'1MCAR@YBFN֕F WOAuUzQ j_w-.SwXUe*YQ[\,P=FR0A1YUkvaP2A%F(:L~߂06kob"߬߾OSFA"%Uw D :LVh5zkFA1fA8+OdC*@fAiuUCrI~4n"@@T$ @@*dr @w37HU}6irQU*UU2_ Q@q_co O#t&a/[7b2`B2c{g`oundobgqzYPs!a3Qu!$ -LEUY*ȟBi⟮ņU=[u\[/m//']OxZ@@au4k?@pP~@@ńe,-e?dy1΀1v[v fv vvv\nhsoojqgq/%,q@"$dN'V1<doq4Uoٓ!u8 @/-Éo߹ǭR(LiuktR1V4Qtg 2鵕q(rt?k?@EtV}D2www»FFSF$1Se#D33ktHu__0*W)g\_zQ7M3Á?@o}DQ‘DLSev˿aHUQ$6HZl~<Ƴ"l?@࢟\ީD?Կu޼]l[A%E[g5@t__\I l]AuEs0l__o2&=)R+_IBA[xooV;ABKAυEfKo VCAafOGAI6f#vVKA#OA&8iЫֵdAWAI\p2?@IH1 ]][Aɳc>%A=Iuď֏\cA`;AgA~Io>:i@I[kAɱA%EfȟڟsA_fwAx{AVAAYVhK]2EOJq80DAAT! ň II%(!Gܒ=g3vX!g@dA!ϙ<T`=I-JB(!x3q&eQ? BP(?+AG,D?ADBBT!s/A8X@!Ou!h$!,}"%!%PEA (TV|4w%?@N=ǃ @@Itw~u, ?PINB/AgAu@` #?u@ٿ!HxGˆOnF7FFI%%!I%X[APBB`2X@҅8#S2N贁Nk?I}3vAZIQe\.?L?&d2/AGAE/AfiPDx<(zG&&&!'2rq%!.Ԩ!" l",bz:pb :p:pp  ZeM%% O4vпdd܋hE Ѿa%n,D  TA. 
2AE%H19/A*$HD$(R+Q$2ZZ")k$Rx$&b@QbHKA9e!x/Hd?ATOAsaa@͊1Ou+M a$bٓEeťe[U@]Hr6?@'GO @@Yp?@ W?45 F@Qe-לhaET<ΰq84v?麧Q /0πBTU1 А9/շ?x{_@Za"!Q1afUAZa9X ;v!$Q9щ<QQ@NnΰƁ▔,(ags;?@SoBAKΰR7 K7A$E$eZ4yGcQ…E$e@B?\+^g=g;OeO荄]?@6d@ D?@@wofCa@UOe:~%J?@ˍGp3T(@_% Q0w0™#wEL?"5bSPu~\@`` Fi! lCo! omrY@z۶@` ,1BTH2O!"U\_2E/"&#?\.?@&AQ_X!Y@eh= >4Br&}///AEQ0Ayϋ8V/ͯA1IP??KGv>5O?YOԝ v??OO TT5mp_utendCP J\nQ8"?]?'"$+ʭ !b 1W2Z iV2  8"X1,fTabPaW-X1Q8cT W2V3FYeoo jZ_l_X,Q8K&o!3rW2!vQYK*Rl_'>C`! gkrooBX,F .?*at=mH5VUu}_=,sH@@6 @@^m @G Yiqs $lU V K!_ D /.$:/] 7 `Bacpgo?PnKPbp!!|*l/'EP$qEu/,$7o!zLuP?Ud7*OB24O9u-FlgEQG4 -,>P9?t~qY4t @@ddֈg@@z?Qs ƈu*<N`rQ/)/ܟ_/q/[mg/// ??9B?T?f?/OO _?GO#OG ^pOsODOY}Z-XTT DQa=PiEPsCP%s eAP_vGPccr$n%eEPmCPsaP________oo(o:oLo^ooo?o/#/oo$6H Ol~p??ana#@òUƿCXAA 8NGpAQ,YSoo6ex?@f}í?@vU??@=O2 PSߑUߥdBtl! kg3tRV9&UW7vW7C]PWR_<=HolTAԈ;oۢXEgoUҫWwww3fs12(0upmYEt@rb#w[sOs-o9\U#VTYA8E :dDOSZ14U !4aq$@q%JD!T|!!qСU%d'()*+$zq.1*4356 Q9B3Q@LXq(#?@ K/"@1?@pKcSk,}?=Еw Y0bp>H֊ YJ/X?L;~YYQð,o)1\UYt^is?@>Ex<"gJ+#p㜟xE~gןB圯8UYv PWs^boUf< ?@ф_%q&AU;?@DVbV[[?@{ƃ{UR) `n59q([/ߍÿտ䃖 &#?@?RPJmaမ2d ZřǖYO D9a\µx.\?@+￞Qso[?@;'ACU׆U?@ws AdxMA+߆bd2{dvλ㟌?@$ZS ο z?@؂--pj|kÁ~fo3?2x>kp&vAbt) |$߿5XݭS_q!U$+Ș?@n3yxӁ[]7Y^pkׁw ve=?@d2/D/ہq?@iWMsՆ//aU7qX?@oH%x぀ˡ1D3^//UwSǤU!X?@>??p(~õvvꢄսY?k?󁀙cu4-lOOSwt.G)5^OpO6zqUF/c@"!(k$_nz شEOO6U.&NPĮvt:d7Is ~l͹VEHPS?@OpmS&ȋߝ}P)cU-~1 l &Nʵz{7Oanko&NPx̀>?@>Caв}G <]~`vC}X@~PhPDZm?@볨masāZ^m~OӘ9 qȁPV[jO#k}́P LfK?@?Bѻ=Ё%dn( ^*hhԁPzc-0Tf}؁P|b}$7&&N܁:=SwT?͐*'/9/}P( QT;z//&NP"mpPX" 1gy֫?O>P3<b(~1a.IO[O>PCv=2 v5OOf%e AyEpysx"5qP1E@W Z|/_>P2xqm~ lp__> P`NtbBw^&/_>PqcJ3b׍Co*o>Pfޢp =rtooV%vU?d`SqPle1pV@ oo} Pa륥4p?`?Q>$~P#&K6p?L;(T^ql9U SVNy!T$$m0vl!u쟉>LΟbUYbφ?@ubdvd} YUٯqف4TمB#ՙ xJsE О&gفLe س T قMaMaAŝL{Gz?M_9MUHƢTDcC}h$[!Lԣ#A ؎;#՛ƨjb_$@@kkW@@7LR!mfryP(-DT!FQ7}c +uz0`[$?u 㰤 &SlbX "bƨ q9Uba !U9@ BP(?R&GUz[o#z؆L#սـ1 ! #@@"@@?@OC "&%w?P[_1ca&*e#u&b&&e#&M&b&&8 e#&b&&*Q"W&a&a&Pu&+3&&/*(&&&+3&&02&ѿ@ƨ*A?zeƤ4IHٟŃ&у Vc၅WGمiD+tKiD!tKXiDtKiD!tKiD8tKiD1tGW\ngL 7O&"@@0$TR @6|f@@r)?d[bA-/?-HQnbtf???$Ԅ}oK&р$ԅၟ Zg_ف_y[E@O"O4OFOD}U?t(`| A&ơAoSot([ epHoT>pit]wd+Hiri*kfa&}da&aɥoɢQLRE0q@l?YYYzb"hz]@@b#$EA㱛ɣ#UpB)(#c~bq~S}`` FiUlCoUorUzց` 2߂{{nbФ-1&чj\2@?{ɣ?\.?A$w#F nbZaDŞoL:Vурf }DUH2' &ыh^S)E8Qc϶(Eᾠ(UJ nh,s a)`kt(:\|bQd:|__q_m[}QԺQQUl_U@JS?"`GxT@@f)G4GP(?-DT!b@oPdmaމR,E*8UG Zmtfeu)/7~`Bcgoudb~ۡE窂C aloɗlQSB_l&UϷe*>ߝ!X$l56;CQAq1>tP1 J;DQIMWuAlRH?@`^$?@wU??@e=S2 @CE2kK2ރ3' 4@PAQ@BSRR9RRRMRa CV MVtSRMRltSR MRP1tSRMR qVRMRtSRMRؑtSR9RRMR1tSb9RbMRtS$b9R$bMRtatSBb9RBbMRZtS`b9R`bkV CV 9V MVcbVcbMRd1cb9Rb*Rf:VfI[f\_nTf~X vXf\f\f\f\f_dfo"df.o@dfLo^dfjo|dfo4lQvhf`mgyt "ΕZƻtɂ\껅ć1F&&އ*Է $z_Gz@Ka=a K@ؒ ޑHݒa ` <8@K#&&Y\KbBaٜaٟ|H9!e!'E|1`?!py i h ( )2w09Mc"s fߢ!rӠ!]i n. z%lk רrҩхvd`&@?sJO95(F7aG wM|?U2K=Fno?@lߧQj4p߅Ľi_d*Ԑ4A% %$h?@x#z {aiW &ة?9 1Žk P҈_IH Gꈢp HrLCE+Qj?@Fyp, [[$h?P>Z ?4Έ+7,>Ͱ&fՉ: Y d;?  
BP(Fǀ4ϑxBrs5P %91ApF7XQ<AA3d1oAK0B@P4P1QQ1BaAaտ^VۡϳFC(v?@v@ Nq?@.$?@73Yϒ̗+'kܡF+`V<̾3= |HV=]?@?@R4*?@T6(Нv0kD6TЏ̡+V8V7Ͼ?@8/dyӏ?@ʢȖ9ߑ_ck~+~ Ծ0E'̯̟ɣ1d5S7s?@fAGKU߄gAk^x#5oE\B&?@ =vUOuk¦"1"ot^?@kZU Nek$rSDK ABE%eTt8aA4l& 2/@O@`Ͳ/m4p!5)?@,G?^䲦25"5b#?@߰摨Uܺc)VkܙTzLD!/ARɿ_N.@xJG0* %K ޱLO5q9nWOK AQokOۈ`D ~̗9B!{OOQRɰO0ߤ@ +/y91#OK -nk܁G`~]Ş[~49~ dQ>_P_#'xr|=@nꙟ%~/ɧ?@r[b?@}8@Z?@F^CUܯe[k)ˣqɼ1O}޺( oۯq vUg%璂4@:_?@nϯ%?@bC ?@˪B @gokܰnW.ƾ>!NrrŒҍcԖU@V/ywt~pH;c@=Q<-egJ(Οý?@;0B11&ʲEtÎT"I!1w #hlǑ#=Ϸ *Ж_ɗ`z=|V<vXX1oo  MǙ@}T?@cZ:?@siѰ?f^w ѿbܲƶhd⏏^p-eE@8,?@+K+shmdvۂ3獛f" +@O﷛sr(bHyi5!TP5նd$Ծ'0Um?!dSeR鍌 528eQ , 5%PB%_PeRdSFeQHԄ!TDő .#@$)XuÂ5h4*418 18J4 eQ9ЫA#u?@ϢS!?@9?@O_@`f5o5$$!!*1 22!2222213@221322 622!3B2A3&B2bQ3:Bk$SdSh2622eBZi\ ", r5qn%eTtʂ]rtewq} )?@kZ>HjpQvR}ŝ4R!?@ص =v•́'1T圼pA}FдzGAGK1Rh }R%?@48̮Ә {؜?Oweλİ?@1ώС ?@z[ͩвXҫkb6( vϕi80YꥬMwuN?@yw@ Nq?@)Fl?@;*m\ҋ:ʫXA B(!Ywz?@W?@^&؞?@¡E^`0o nB6{Jqe!{q o.2ZÔQ8[¡>@}8*Lo?@Bv2?}>}ӶSS~ DxĜ߮)te: wtxr?@`RIBZc4Yݛ 7x̶ .jlTbt!ASȷF?@j?Ԛ?@K\ĸϓ tVQR?ɨ ?V=7"4D~Nm :.?@1Bҵ{V9,Bi0ooӶVQ Q~/"[k9dυ`8ǵ?@=% >^ٗߵf BJXԕ my#4 9aueh{(8UDG2E}EI`*Dc1?@CZ2)V>?@9`E`55?QDJWPa$RQE F JO\A2F Feq_ ?*si݋_______ oo-o?oQo/uoooooooo);M_qxw.@Rdv,A݄Cu{=oDUXDhF>=6C-Gi .F&N`C //-/?/Q/c/u///>o//'//?"?4?F?X?@j?^ǡ??}FXspTQW?@mc?@F$0R[3 r0tC6 ZO2BBvHB\$غEG( [^EH7}RO Y'0ʓ ^?@"'x?@p5 Ɂ55vACCA67A32222!3B2B2929292'CAB2AB2-22-22-22M1'C}B2}B211'CB 2B2622B?4B?DBO7B-O?DBKO+4BiO{DBODBOпreb .t mKL6 ͸ny]B;M_qσϧϹ!%7I[mߑߣ!3E5Tfx["4FXjr5!!&Rd5!5!A=11aqq _n}"#ލ.;aq=\r xAWVA,\?@ ?@Dt*Ǖ?@ v bMvP 9o#ז"إ|_إS2$NP4B ?A~N f? h?#|S?RDH>E<tiAZ$@o=OSaK]όS%~sʂZyȲOi?@A溦~Ű8צ׿& ج֧~i$g#iA>qdOCar߄ߌ%SM6!Ln#rbU?@i 욹nZ?q9?a!i\+|ҽO7CaCSFtu@q\wIH m vr(&⇻Fdzu 7>ɧ?@LH?@ 8곝oeL=%n׽\~ج6 _~Eb%~ _S OUCa#aST:_/ɀ0f?@'ŝB6yI^a׭;I Yb[sor|Un ]+PsCa8JS߅%5@yn %mAq?@J;˼7R۩ hqo]^})f^K!PNCaY/k/S15u{fy?@Qkpx0pK49ڲ׭ ٚ)x?/q/NCa??S"v`rq s-*CkriqTLuyvRdud/'0UBŹqp!tt ]Fp\p8Ta,ΈPQ-rgQPTbScFTaAєT!DȖ a\py^#tTYщwLuuhqp8SG \ӝTa9 J&}?@W*İ57C(+48e Cc\ 7jAAmBA[qb|fAqbZUcӚAqbAtqcbb}qbHAqbfۢcQlbrfb[7ro7rl7rl7rl7rl7rl7r h4(w;tAsQ k+ut5uw? luM%bp+j` XU`?CopWyrE@gPtP_(c)P2W09PME@cosfrԠفain].P AlPsèevR dD{&G 8Yp{]`Ek_nPh|’ ƓFF]iA7yS;sP`` FE@l؂z ` A ??!FbMXzIe~"Db A!3B a=, Kҩra2zGz??YYYJA5"S"S.a_b#IGB$0 3BljDnP$EUJAQmS[aaeQA3A X~w5t,?@F',t4On?TJE?A0_;BSl#R ejz?@B]񡦁?@͗ 8T /vX߼'q#,Dz`Pv^hיSm?1-? Yz?2Vw/A1PsQa}}8 0$lPW?@uطbnpd/ 8\7k0M5eqMgJ#,@ ?wv0rRBL uB@acyXQLq/`J-?@WߴԊ أ{gIk~#,ˆ -mL^d٤y#ju?@xR.s?@,'X 8roC]Ѥq yF &o˿!|'BOAh߳{1zV' h<d$0$+*`3NL?@PP 9$O6I&)U=` q)`<1}:7kX( yl*1AH1>o?7o?__/OI-l@$?@N]FQ3WK0LGACFT}C$*k PEr zċ]jvx,!1_ig_y__MC_XA\5ЀS1k0"!A!Q!lBn~;o5Umo~`Mx"+lv#?@٢.TM?@ _*%55GYCTxA =:g * d2;ἄ|d2%o,d2/51vCtP{q9#U2? 
4FXj|8  -ϸJa B9U'OzOG \5$_6_뿦s_O_OSr91c!;!f}d<'f NfOM$aR/c!}ROFA9%% (F"6Ocim"-X%dNs4 Y=Ws=x8O<8gPE,zKvBpMMw`MFMTY\'éYYYwww8,Ԗ1 425ӹ$((!OO+O=OoaOsLsU@i "?@?p,WI~ZU?{_Y>y2A:L^pRz?N&&W'# K/ZQz¯ԯ?&k%+c`81/̿޿ϏSި?fujϧϣߠxq/e"4FXVZShߦ$n"tŹ /Y!BVh)ig}AG0ZjQ)M9@RB¬ k[ 2aza,?Szay-8.qY0AgrERWG?@|\ ݴP !a4?0aCgWi%NuSEv +Ev0o8K ) /52gdA Id?G51ɼjaTQ4B-$4R(3c52/zojBMa!&$Ma]du"{oB1I>ye@)'b8/J/\/t5U?@R?@{RʨUPSgvqqu~qxRSuqxA*nBb fwxA ԃ|2b‡-Ĩ{2b/e"Rbt y2 xrLT0rЁC*t0 lC$`vAbszT0b#T0e|1" A6ryCRpsFsbt rsSru`` FilCoorvzT0` Z2uqvbB2ш\݄c?(y?\.?j`PA O^WsqAv=oC PGdž}Sb{Z&NAE2DVf_m/&/-A?DbRY@SY-|VW=aӞUQ =|;_6=|埥 K(ցW@ULpbvprti`dpiCep ptPObOK8#s"??'(D:a52鍤 s%Copyb851Jl'eUѼ8TW52}SFcyyKjKdvLŕ8'Q!3#q52+@ѹQrMe>IC`b*k+LL\%ăGdADWO S#K,Yjaԝg?Y$:{2"@@&?r7`?TPVpdPV:~gBP f^% fg_Q ƅy֏e;as< p51 AyA1pJP1^# -?Q5@@N$_2rcLӤ'ָ:" !@R`!A!\!5Dq!{Ў.!37J\%>,Y1 M\!e1|9>y_QDo6ex?@f}?@vU??@=2 035'hr kg>kB?V9&XkB?7vqlBdhd=0 7ঀ?5ʁ'/*/禠āa"rgbwwwJcf^S€jU\MnpsYTzrRbwSSr-Oe2.rU2Ubc秀^[ȓa\%!O:E4lalbB?: ф5~;Q sqiKĐQ~RKqKKN$r$RK%4(K)K%*K+1Q.Kk113K4K5*K6K7f49Oa3pU@LXq(#3 K/ר(@1?@pKcS<ü]? Wu0bp>pH֊uuJ/XpL;@NuYQ,po)1@2ut^is?@>악qy<"3?J+#pq\%uןB5uYߧv 0Ws^b@Ryf< 4ф_%x~yU;?@生;Rx[[?@{ƃ@ayR)]`n59x#y([/ ߍ%ay &#=RPJ콯ϯJy2j vt5u&vAbt)[m<y5XSۯqy$+Ș?@n3yuqy[]7ழ^u v'qyw߲ ve=U@d҂+qyq?@iWMsAy7qX?@?oHqyˡ1~D3^*<7qywS,@;qy!X?@>?px?qyõvv?ꢄսCqycu-lS/e/GqySwt..G)'%//:Q~yF/cڰ@"!xOqyk$~nz شN5??:Sqy?@>Camq~<]o@?vC}Vhaq@~P XqDZm?@?볨u~&ai{?@a^(:nqu;k/szXJ`|q:^m?OӘ9Z%aqV[jOskmq LfK?@Bѻ=Pba qudnx^*h X$qzcL-0m(q|b}L$7&"4n,q7*=Sw͐*wm0q( Q~;n4q"mp٩=UTu|֍ղ>TuQ'=Uu|;/_/=UP|/ /0A1rNva@cnseC0ut/ mo# ??9'(l!upUTt w6L8},GTq1qp01qMMTu}Ѽo1sT}BFgE){}OyPj(s??}oQ6 _!3'Ref(p19Q 2e>7 A`.GkOOoL MsTqTsJ'm4$B ѭT/ςoчQk_r)uԴ;r@@b!gYP7LR!m$@'cgP(-DT!9ύ!vy'u Y'v1vgFygegtooRղj.A/jh$?k]M ԰sң@ BP(?})_ oى?oQo>8ATKMzzaed55pGxT@@񟙅f)븎up;yzyzEU $ QOD'54@o07ג`BC0cPgoW0n{db@v' CkhaQ5z$bH5p'9A:s.q-˖Cm%G,6[Gُ鏀[M]oooFu},@@4@@9!@@G po{?8?A9:8p/1y8AR{ڢӏ֨#Şkq{*{֟!S0}ϕFVe u8AEۡqϳ+TŒmA QS4WQbQ :x2u>d=h?@"?@??@z$*[րAџ !qt \iwfNtt2^_>-[8>GRn$jE8B(BLd'53l+%1 F(X5ϳu6CquHH\T""C[#b "CS#v/`` F_0lc` /QAFsd2Bl5\y/QR/uW"ϳ5G%V{/;꒵Q10TYQ{c տU鷖E@W( |5?i?(>??QG,|>;O?_O,@|??O Ou PWUTeRusGrRrGqeGPtGP n_RcXQ^QUR LiU__hh#"?]?H'($+u!h1%Z ;fR\8@)ͅ4P4ѽh~4T%P3FeofjP__5X'lQ&)!3Er %MvaYQ*bt]e'>^`! Lgkxoo5L%)Pu 2DV??z8?@{⃋냸& u =8n= }-X }q Ҍ .@RdvUb /&///A/e/w///,@///??$?H?Z?l?OOO_?߆MOO 7I߾OyOĆπO2V"ZN0[SWPnd]RgPu[PhizWP_kPocPseCPmmTagWP|______ oo0oBoTofoxooooo@,>P|`r9K/Q.Ä$ .7rƅwD񋄺Dŵȏr' @@u9@@wU??@f=2 K JE#L%"%47&Si\$i ""ir@#"""" & &#""R#2 "#%2" &92"#Q#M2"#a2"a2"$2"2"Y#2"2"&2"2"#2"2& & & &3B 63)B"DP3=B"=B"&& \B,\B/$pF(F 8\B<\B/<\BC<\BW<\Bk?}4\B?4\B?4\B?4\B?4pFOULF7H\BGMS[z_Gz-3H.uZ"f{@bvh!ġwgâwgfgb Gud 'b_3ѠEi'{ggblZgboom%`?pygzt()2]09McJofrrpai. xre vdhR!WEO(_ǮÀR#ϳU@Hno?@mQj4pģidՋԂA%%$h?@z#z{aig' (ة?> 1k P҈%}#IH GZꈢp' }"Z HrL(r!j?@Hyp,[[$h?P>-DT! 4Έ+q#&f :x" (+d;? BP(Y䬧ؗ4aB"蛯ؗrC56HNw!p_Xw!w!S} Qy91naGQ11gȁoC<놟+ӭ(v?@v@ NqJ.$?@7?3Yew+'УF+`ˑV<˹3= ϟ}x;=]?@ώ?@T4*?@U6(Н[w0գD6TЏӹ?+V;7Ͼ?@8gyӏ?@ʢȖw_cP~+~ ԣ0E'ٹ'̟ɈߚދڵS7sпAGK:߿gAP^xjڵ\B&?@ =v:OuP¦gSֹ"ot^?@kZ: NeP$rSDKgxa5E&%eTtpA`Q& 2g}@P@`R4Uj倱@,L';^䲦 򕓥jֹb#?@:c)VPTzLD7_O.CxJG+ %0ܼޱL4敔q9nW0AQoPOۈ補`D ع9Bk`rM17O}ߤ +y?91#ȶ0-nPG`ş[~4 d#/5/ɲ%'x~s|=pꙟ%Сɧ?@r[b?@8@Z?@F^(:e[~P)ˈA~1O}޺(?Oȯ>_P_AOB3C%:_?@nϯ%?@bC ?@̪BOk3__W_i_too_WHC@Vѿywt@H_j~RGek"cooooo𣲽Wu{fPnS6?@vd&,cr!0 1|9,:] mR J 6QMסy(0ҏ3Gڵ _遼f:+1׬$-mFj%p028@pXp紿ƿpLA,\PX34@~zp > m?c@=!!5jJ(ΟIý?@=0B11 ʲEYzÎT"I!1\#~MllǑƏ0@a :kv9~9] = 握k>{/2ڣ|q0z=|Ե&dԝ<[=X// MǙ}T?@cZ:?@Xؿ2iѰ?f^\bܲƶMld⏏ACU5*@:,?@+K+XM=I[gy3獛f"A\4F@󒚳1iTg(ܙ˕Șd m'0U?6Ŝ!𿹑q pw8B,Li't˕ȘLTFH /iTDqaڟ2hi"PgśȘcn ?@TQ@@!PS? 
wAHP Pb#"MeBmBE ?YYY vA "BLpzb#Ѐn 27q3?2D?\._?~p07`BackgrouUnClr z =XL~Bm dȔAŜ$șgUmq*c vܖnBVgŮ%B +Tfi1 rU9,1*3hș,1{5ؕ c?@~U?@2@4E81r (1 hFhrtuԑ |M`BtҠDAEG*BޙEMf!*/L q<l1 " XR1LpCbTgRS) `` FXl` o2B'2 g\,toPS]o6# "Pklؕ&oC1%E%ǟ?량/'/m'/q"F>G1). EGY-~%@u@d n^PhuChes t onci .XnfmRaoe X ew~S??d G3X_jP~:L^u `jP}x}xlf!31a f¶hjtg>#` kgtTfz|<Xci1a,?p3B?T?f?x?????;0yn7?@g5W0zEWr%O7OIO 1a ցIHw-ijKPحHhOO __/_A_S_e_w________oo+o=oTofoooooooo|%7I[mߣ5GYk!T0Mqurլԏ;$_/#>g/}B.̐owe̐@R'9K]oɯۯ8#)OG?-?}żտ?  /xOOewGx G] }ɀ$XW1]Y?  ާ?@r7`g2Vpd`Wyc ~ЦӄrRfcA9ff#ޢbfff ff߭ц4p U0UrA1?o!o3o2p xQ@4R@ ERa~ EcPF5 _@:xov?@;IJ@@U@.?@ FW?괕ƋϚ߬߾сSew+=Oasс%7I[m |c,////F ) /Mqܶߚ~/9/;@/߃C*x/5s t3euolr0tn0d 0Htp:/nJmvc 0pmUt?oJPe=yz ?Z_l_A?S?e?w????????OORO=OdaOVhOOOOMO_7&_8_J___Q7_*___ꑗ:H JiIWdK @@w=2 ?@M]F?@V3*%%kޟ&8 c16S;*62 6U66*6*6t ?c%;)tU2{5|7_Ml571:¤cq?K/]/o/EU1~k*1A*1̂1ٌ//I,'O9OKOLx" pv#?@آ.TM?@ __*ŎUU///Sv8 =:g *;R;;R%o;R/566c #|뎤rx@ڋ1a QW2zrcUW:^ߢ|UZHL,dr-bra@ƃA9dL3Oa _#]a( iaif_xS0yn7?@k˷?@x.?@~*_ ѴؽVS _Qfc_\ rDj;?`;C;?*3?Xے 8?LvezIElYYYoa# c[:;±Y@`ς ςooooo  w(Lp̿ϙϋF:dςBϾ.VVNCG}XcB A:O&O%_+U=R˻ݵ5鵂)wTg( @@)iU ?@?@zk}dzZ__`iun૕ a eSXD?\nT?\._?7P=`BacgUrudoo04J&=T ѷ7PrlլA?변1?J$ġ+-鵡eUVU1w%2߱%l.+A$*1ġd7Q*1O鱲#BQ$ٿ?X+U 1)-wTM]FPV3ɽ?p4?@i{2́5P3[]mA8'hx?X=0f7&?fVU{O0wwwZlzaՆd2 FVX5S/S Ri "?@p,GWLb^{2XU{_Y>o;n_U_ZpQQx______oz_$N&&W(oeoa^oEd6o/mTѸoooo'=dp{4i;nEdvomT3 .@RSި}uޏEd mT%-?pQ1,}ыI?ύ-8.qY;EgrEE֫WG?@|\PkV!14}1'ߧ$Ќ5e6 +64/F YLg4᠃4ɋJTݢ-&r "(3<3!p= ף? BP([?/!q:@s4N%J@|m:>F/G?@1!$0QBC//9951 2ǿ8)U?g7)$]@@5\?@wU??@Ptf,<ʊ?60Q O֡l;$"m۶m;$""2)K$")I$IO,F>_>ÿFD%EH$ &>[ET_@G}AUO&?J$1`"DV(VBAQݢALUM2ݣMONmM#bDz2A3Sh"u `` FioPl+ooPomrzے` IO-2ZE#A#F U^$0"Z:ܣ"h"bC!Bb#J񫡐*?_' :Tݢ66)TTA*aTZZѷ+{T)YAO( #T.Tݢ Tg0Mp____Xb/m&n mO _fo~QJUݢWUdU6BTBmed!l1Bz1>4ABex"@nwz1HvݥqqqpZёqePex})PKpAyz14 ! B(qJyf9G.fA,2zGzBOOO"NS`1y yQ@Hno?@jufԐ輱ߗA%\Yd?@z#zߛde ' f?nw U@Hyp,?ȟId?P -DT?! N7d^X|@:1 ԠԠ4!Ԡbү ˥(v?@XA ,m?@.$?@=q?Ǐ>CU=]?@fe'a?@T4*?@L]fBw87Ͼ`Lx@]?gyӏ?@*ߜ|Q@S7s?@a~즌\B&<B<r5"ot^?@SÏ؇5_y5LƏ^XیtF ɏ:@,׉b#?@Ew?]Aw1_HoQc CxJGRw:f:?U?@8?@=+Y'O?@у2 +EXwc%'x2n=x pꙟ%|b[k kїN \x?*Sw!c 7>ɧ?@"6D&?@8@Z?@T?@vd&,cr0J|H1NU@ _&@+`2cmw7IJLA,\hK8w jJ(Ο?@d,0B11?@?|$"kv>T&L ?@=~& e7/ n MǙ?@~?@cZ:?@ )"/ug@:,2? 2/Iq*7qQ]yFFğ?{ܳ q4~<\d4DGj}cU`p `)aIᄎTDغaB CsIF5I?\qmSu%`` FiljZ0ooRzkk` _2Us5s[QSivL,c ",gFIBG 0YYEWi{VX?χEN׈LNFZ mN ?)N埤4R!ГoFǴzGR%?@:E λĘgģ߭ d }Ԇ_Q<.NB)Fl?@/[/,UYwz?@񕄄?@b&z? C+1F[¡ߡ>@vSh$9*Lo +qwe:dek vx)1 ~)ڏgk m?Ԛ?@?q&o~Pm p^1B~H;h7kx:U@2"[kp0C&vqυ`8~sXWlcىo5B1>?P?lam?y???????6;Խ2`dկp?@(K?@V3+4?QDJW>OPObOtOOOOOOKO__(_:_L_^_p___Vǀ____okTa &g˞oU@=*CHWipaȯگD- ?@f)Y"mc?@?9 (!^a%mǿٿG /ASewωϭϦVknEorߣoF@$eo6vGaG&yp Q80?T$~6vRx%/`@ P2DA/ASew6%'x?@ݴ#)GI&# 2DVhzC!&8V7]ok|a !)dp! |a;^opi*t _l7T%YA,\ C)q?@Dt*Ǖs5I 3y&} H?@kEW I J?@@ ˦(_.vsʂZy0uEpD溦?@GbO o& T EprbU&qHkkQ@\[~ϒwĥ 7>Q[&NZ 8곲TjAůoV:_Pk.|fcg$Yx)CFq%mAqop&wExu{fP?*\-?@Qkp ˢJom_}* !B//4!(14/@/R/d/v////=ط{r(87Cc-~3m¼V`6=/ ??1?C?U?g?y???3^????OO%O7OIO[OmFOOO@OO,7Q!D$ٌ7Q_%_C_U_58t,?@[ہx]#2'WvN`]񡦁?@< ʉ(_Q@}}8N$D0\S?@x'bz?@_n_u*o~1|XQLqпtGNYߴԊڦP.^9uXwbro7y#ju?@:9]?@/'X np.̈Ox8(BO={1zPb 〥o79X*a`HG UtP‡|P9%XO;Vƾ@@3NL?@H6l%f8"!fF)t7#i4·ٌp dyR?@\l! Ynu{` ?#u"pB"!pBJot# g+ot5o11{j{ӈ+𐕳QLᡱ0ql)ɝ83/@0iw{ux&8J\C!"2\?Se?\.+?#ࠀ!F#pC2&%/E̓B؏{gU/zs'-4!e%/ +/dU>)?;?7ep.x?/?e}e/"??F?mрSend}sc}e@k_t]k|Bm|@]s@ag|@ էфps)$M`??.r'ee!́b鍱% F:L-8a,naрa]ayMcTb##R_jPzdOO\,M蒥lbo!3~b bf,YAR4M>1Wko*o\,쨵:GTmjW)I~}]+0Žq(·TMqq/0c+& @@~?@^})@@)GR]?%X#NJIHcs/CUJY ss.*% / 7*U-壀0/Q pwȡex(#tJv ;eIMqfUJ.P{!=IMq/q]tlAqKsXktp/X$/Q]Q#QmTt}uhԗ7?@4r?@k ?@%tQ[f?MqN@Տ5#8,\?@]eFxvgK{!W!ty{luԡPၥ;b88RqpwP#Gz? BP(?MqD!QL/^///v|)SQ@rԐXF(GЩ?@SF? 
K7At|F;kIۊ 2#F߈KUcei?@?R\EF 鴷U$hbxND!%A>heS;q+}ߏߡװ|ZUu`9x!u74`tN?7KU?-~շ9H]ntΉ' %sgx?@"k?@8TX?@$/SŪ A;qhm㵣S?@GYG2W/SU矱㱝>;F$5GYkN88eJuVcӛ^J#6U@`"6Ifx ^?@0?@r6O̻4?\[nQ/Yծڌm h U縡;q4;qy;qQCq i]I,|/?h?K?@~W%`Q@c ᱃,] 5?@-oy?</ ' t9?@VB>R a6R$@[H1w jXOAE>uE/?C/?A?/Z,[q?_Y"͜?^70T_L.A_0mtv̨E0ڻ0p4 :ʺZ>J0Lu9k~O/C(ƒ0׏ .Z $o U_&8OOOO޿k9cɒ0?20_ֿϓ0?Kbס=w0d1Y{?@ P&Y0?t__~־c%P0SO=o/- ġ//o?oo F qMÝ߉{Z 0^.:?@[_?)z#60b5O_U@А57@_?@> m?3p̅T ։ȡoovo;9:{0;]^)0?5kFHfl ?@5q?@~ g]Ayh>' :p2HOU@V80@*`D#k\d1a -?Qc~f=5g )_;]p\ ~+N{ȯQ@V}ꌡF``K]J ' ԆmU@>[u@}5?~ \Pggl H 0(Q 7Iz}\0c/@|Y?5v>غ2 Ȼ?@~隴]πY$`]Pc?@ =0>oĿ?@C:Q}OΟ;6"Efb$q u}Wo=A~42H@ Q@\תco̰rd(9xCفbh/,iebEOas.@RznP|?@? Gv?|**_?@?F{qQ@l0m5@Cfhj,7>/;._>_!M%xSU@rɻT `CIφ4Ϧ?h/mG%2/@zI g<+U; ' u!M5MP?@wU@?x'@kyw#0ߙfh/E! //./@/R/d/QͲR]04;]-Cu vO5w6* h75k^ @&z} ;ݠ:G=j*N(!65!z7E[unM@>tx0x w ֡!??O#O5OGOYOQ@| N ГZyb?/ſ5~Rcꐝ?@u'!:2j?; !+?5?p[ؗy6F=|~ G X@!?{7SA@Io+v?,SoE"___ oo-o?oQ@unq?@pzۏy6Mo~ 5kLa렿,*3ӫ 9}ϖ_)/Rĸ O=RtRἏG=ކF_MyNA?9#%Q@?+Av~.\ NԸ 7m-5 Ǝo~y6ZȈ?+!b߽ 4k?o3W2L.neSlF"+hɜolB)o6 (2?@,q3bDBB,!;@`_` Filk` O2))")6΅\CO_gCt_&AuZ +[\$hoJ /*<N( ooi ~%"4fqouͲo?ո3BTGxyNstrs @RRYC??K'^T[! YAZ M3OE@&&28RZ,ص {ZRTZcF뛫j|PRV9!3U QZ]Z%ݛW>N(`Q2\!kP,3SO=0c "(Y$ /[ ,9Q$ا1)!Qn, /at=?@ٹPM }?@<+5qa==ЈoپT? ԅJ]n? $S68Zѫm10{@gqǸ?@VUngy!ۤ0eAn͓ѥe-rҥ8۠(ݾ;[N? BP(?[(=a`1 !Mdd1tofcV!ƵQ*oYi̓Q@yw?@ƌDirD7ÌDZH8sQmo/P ca;=k U?@4F( ~aQ8M S#Qԍ?@掛@?@ 2`ÿpP?@,P<Z?@4$W>A'qV%r?0{{w~˒?@3/ʵp |CtL9Y躵P ~x.:CDUy@B 5Y?@~XU=adeIo[ooQcuJ]N?@%By[G@";/̓UBq1ӯwykTn$1d1q] ypՅ[g5G/1|)I lԒ yUsK5%)R+g6!3|)6yx5^k|) k̻/1/w^̿/!LQ7iЫֵ =y\?@IH1 ]ؗ//-7\6cy16Em?? 7`Ȁ;y~o>:ih;?O7ZFe;y\Յ/F;qOO? !_3Q$7a[y( `/O_K,Zlq8(TXѳeճ_Qщܒ6Egg@X)doBkɡTd-Dd}()`PoQ? BP(ġ? < qGFS FDTbqg}@}]ۈghjx &g}i_5/ @@I*Ȁ'Cmc?@iiJI%ý?y8Zq@bJ=e`qOatuy`q?PUV``t5tu gѿO&A !el_`Hlxtpa}!1PQMőt[ Lcw o֑qyqy JNy$&g8Ngщ@@)|?@=wU??@=2 ڭ@ tp Cg2(آt݁8THާ|c[=鄜OEqj!1gѣoooogT&c֑ 2u2p ֑Ep`R׃.y0BQ*;~}hp>?@䰟Wɴf $(;02K?@dG'LQ@XT{<8* Q@MSM<z.IvLQMy<՞Y0w^i,ָD?;1;D㯀L+Uw%<U&&+Q@Ĭ?쫠:+Q@3j=~u.!=D_5ϱπ /Aн`C?@H`?H_%v36__)Q%%q@#?@)q AF%ݴFwg(GY%kh73?@W& 8KK/AF52šC/U/abƗ//·A=r &//E??Q?8???EBTgЏ1/?OD8wALĎ/DOVODc:;g:/OODdwMyAnئO+𢡄1@uS$G_Y_> (___qqշ(: _W0Q {:_Vq\_ґ\̱3dWRBչu50ش9̱5e xCB ?@pTZ?@9k?@&{culuȯڦa {{﫢rhp r''#rKǫ鄅C0j?YYYA?T4җaۀ䂰YG.GbNzl@b#산グZBق+GPbf날լ@ -_(o^ebkzhn@#= ,?ȱFп4GB_7BHB‘IBϑ8Bܑ 9B5#a݃ʳzB 9NCd J4Va_Va^Vz!0BL=Õa䯑P@oa]RVa` >׊U,U@ooUevUdߔ ۨua F,a[.nJoU2uplxVςl4ոU vp[FPY|TdJdo/ ?ש hanbiXo÷oeJ$ 7E?2P24i?%ဲzv&Ӆݶ̩ŵd5GYk}Nᓿq滿̿<ϸ)jaw!0BTfx2:R@=ޯЫ2%ݩZ͇{#5GYk}_JUq/R'9 /[i/a 2Iݶ[=t//(/:/Lϛ/ľ_/4]/h_/ _y ]Al">b?n?GmA?:I/q?@gXcx(}'} uxcOH,"rzxOFMLquA";@qŲK@q">fiϤA@ nsUIy1{xXBw'@TDu!C L{x oŲ-OOOOE%ŴOT _̿lw_#s_Tҡ7< o΁{~ҥ=4Ggq{yƏ؇…}iun@@6f @@xwp3 OuXwWi244L>#)%Dm=a(!LF/y4a?;d*(m%go (:L #_q X/j/S  BƄߋɠ)aTѥ͔r-ڔ(xƀ.qP#Gz? BP(?_27.q`aE O-OFI`aW`iΈ8SMEr{le|9FA\tozuq݀q@LD'8\.1da">2yܱ#x,[,o ?.(/FݷD~ե9U@vsz@ 17[(ӏ?Uf fH*05@5Gp% Vh:&~EwΊKٳ|?@Z.P&osd1?@:CĆoS3}?@΋[˜"o:"}53@qL 9˺߇И23Au΁;v?ap: G`/Hw>?@rd)]Gj%Kyn___'M_J|H۹?@Bmpw9V׸?@?2͟U@@@@?@a DfNZR޿IUr?@j>C$`+EOl_ /SU@?9}v|'c.]ڭ@D~5|dcs uM(O??@U@/@{H oѶ8);M_q..7㱈|S}HyTmy]e~/u@l 0!>[= ~MNBʲl͓JaU5 @ {Myw ! 0BTfxU@Jpsmo}L⧳e$Xhܕ?@Gp6o/凿íH-(?&}5<l`@Kj?$`?($I?o`@h?JFj?jj"/??(?:?L?^?U@RЯ\W֖)mʑ/< :ژr㕮/HoC2lj[ X_eVe/~lԘ ݰ1oŭX_v#OOO_ _2_D_U@폫T}L߉)>y~Kş'N\Aᯀ)9}4-KOc?凎jݰel #ŭ$ooooooU@ _jSp˟\*uf9߳ b5B&Iѝ(} '8ͽ8Ew'hM `d'!k//oGHC'ޣMpU=n4PzRwd|@@7 @@vnxűT,?DEx *߶޷6F*F߶:SFÿ̅_e@ MO_O wcW̅d g5uuJ ]AG1Xj͆̈oo/A6vͬ?uK~3Wv@߯ r2E??{¡HЯ-aq7 #)1fa1botoooooo0{H$D @@P( B@@xCP[rpAф1n]D< C??uO#OsYOkNFh@@JRT @@^)ŕ@1嵰OOIRI _J/_A_S_e_w________CoiR)o;oMo_oqod E)euo&ooooSeߛ-?Q^*yj3ɏۅ(/L/k /s5şן 1C j6?H? 
b {/S\!3hB 76@7>O#% oZiqߓ5߿!H$?@'d2-(L&P‰TL^pAf/%7I[mA }5 e:.@Rd:v9xN_`_7I[///V9//:UQc/e/w%ş/M q?!66+? OOa?s???????f pOOO__&_8_n_@ D"P?@~ XOï____ w n-Dn㖷Uogoyoooooooo w>Pbtoң-?bhz ϰԏBTfs.OƮ֟_0 xp7ȯگ"4FXjqxҿDVπ^ZV@@e]P|?@@b 55qσϕY3! io6 oDo$6HZl~ߐߢߴ.y3! 0BTf_Y_:LFH$6//l~"/ 6Zon~o;l5ooSdlo,KootoC8`r//FeU'(U)*+,U-.UUAAA7A8A9A;A?AFAGAHAIAJAKALAMAuU4<$zE8:&@bN%@ xC-3ѿYAU-UU4<zE8:&@bN%@ 0UC-7"A@ gQR@"ԒKRL<(L<(E8RE"RU`8?s\.@@ LFDNTyB@ uh(TBԆ~ U?zE8:&@?bN%@FxVzqOAOAQ5 ;Hj nU+%@@ ŜK* ?SOF o TG_ƅic܏ʯܯO6aسł0﷿15/AS@dteZ~0֟ aZ*<[OAfx&@F"?$ !ΑB? vPD8 vP /U1( O-D&UA%U f -h"/T))U,   '-P/!(8(B(-$#P/D DU! (,(E(-(U((((U(+-6U?AJE(UH(R(T(e(Ug(y({(0(U3(B(UM(ENPUT]_cUlnr}UUh(t((U((Q(W(U\(`(a(c(Uf(s(v(~(U((((U((((U((((Uk(o(p(r(Uu(w(z((U((((U((((U((((U((((U((((U((((U((N((]((l( "* *+"59:<=>@CILMP5VSUE$%.).*/*xxxxC!+PS!+c!+#x&xU'x*x,x0x Jx4x7x8 x;xFx7#-PɯGxHxURxVxWxXxUYxZx[x\xU^x`xaxdxexfxgX]xix2xbxjxmx xpxtxuxvxxh/{x{xx/xK xkxqx3)Ȩ !!P)GTfpQ 77B/DUYl#~ GWg'%7gA "4U{A fU7|E?@71cu Ě'c0U?DzfeArial UncodeMiS6?/?`4 R$fSymbol$67fWingds*7 fECalibr@  ?$fSwimunt(S$fPMingLU(w$fMS PGothicj@ $fDotum"|i0@ S$fESylaen  $fEstrangeloU dsaC 9$fEVrind1a8 Q$fEShrutqi| Q$fEM_angl$fETunga<"@ Q$fGSendya*/ (   R$fERavi"o Q$fGDhenu"*/ (  R$fELath#/ R$fEGautmqi | Q$fECordia NewE R$fEArial"*Cx /@~4 R$fMalgun Gothic |w L$fETimes NwRoan*Ax /@4$fDSego UI")  7Q$ vEB.B%BD=B3B5BGB0:Btj7BuABv0Bv1BwC0Bws0Bx;Bxޠ.By 9ByE-Bzr2B;Bߡ8BIB`JB>BGuideTheDocPage-1"Gestur Fom aSwitch&msvNoAWutCn 7ecCa}loutPC SidePC TopPC FrontPC OutlineButonScren"Handhel Frot Handhe}l SieHandhel Top&Handhel Ou_tLieCo}nectrNet ormal"Periph Out5ln NetworukFonTower pTower }FontTower Sid TowerOutli nPeriph ToPeriph SdPeriph Font PeriphSadowPeopl"Peopl Outi n3D NetFacu 3DNetShadow&3D NetHighltSymbol TpSymbol Frnt"Symbol OutineSymbol} ide(Default Sv]gtye!HasText$ShapeC5ls$ShapeT ySubhapeTy$SolSH"visLegndSh7apXvisVerion1Row_1Manufctre ProductNmb ePartNumbe*ProductDes_ripinuAsetNumbrSerialNumbLocatinBuildngRoomNetworkamIPAdre sSubnetMask"AdminIterf7ac NumberofPrtsMACdre s$Comuni_tySrng*NetworwkDscWipinPropetis$Lapto cmuerHubDepart5mn HardDi_veSz CPU Memory$ OperatingSwysemIsViableSe7rvBelong7sTPC$Datbse rv CloudRow_2Row_3Row_4Row_5Row_6Row_7Row_8(Dynamic onetrTextPoWsiinBridgeu*Aplicaton servEthernt ScaleAntiScaleRow_9Row_10Row_1Row_12Row_13Row_14Row_15Row_16Row_17Row_18Row_19Row_20Row_21Row_2Row_23Row_24Row_25Row_26Row_27Row_28Row_29Row_30Row_31Row_32Row_3Row_34Row_35Row_36Row_37Row_38Row_39Row_40Row_41Row_42Row_43Row_4Row_45Row_46Row_47Row_48Row_49Row_50Row_51Row_52Row_53Row_54Row_5Row_56Row_57Row_58Row_59Row_60Row_61Row_62Row_63Row_64Patch }pnel4Fiber _optctansemtAusterTranscedvisSpecIDBlocksATM switchBridge.14.Dynamic onetr15ModemRing etwork VBackground-1RegistraionTilesF msvSDContaierExclud daegr eShowFoterBackground*+msvSha_peCtgorisCityServ.29SkinColrMainfrme$msvViioCreat dUrbanAlphabetModuleVerveNone Solid$Background FeNone .26viwsSTXTechniShwufleCom-linkFirewa lBox ca}loutKeyLockUser.698Public/prvate Ukysr Terminalu2AlowShapeMvCbckDirTypeWeb sr vFTP se7rvEmail servType(Ma}ngemnt s7rv (E-Comewrc s vDirBehavoClasGUIDModelGUIDDocumentGU IClasNmeFile s7rvParentGUIDContacUserNameUser.50$Staus briemRe_sizWdthUserWidthFo_rwad AL_Prefayout(AL_Wantuoay ,AL_ChildayoutSye&AL_Justifcaon AL_JustOfs e AL_cyBtwnSubs&AL_cyBelowParnt AL_cxBtwnSubs"AL_cxB_twns t (AL_PropJustOf s e $Staus briconList boxFlow Nrma0Check Wusrpmison6msvPreiwIconCopT Pa gvisKeyword"msvAtoicClas&visWfRuleStNam(visWfRuleStIndx"msvTheeCol r,Dynamic 
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/figures/serial-console-flow.svg0000664000175000017500000007716600000000000024005 0ustar00zuulzuul00000000000000 [SVG figure: serial console flow. A Browser/CLI/Client connects (steps 1-4) through nova-api and nova-serialproxy (nova.conf on the proxy host: [DEFAULT] my_ip=192.168.50.100, [serial_console] enabled=true, serialproxy_host=192.168.50.100, serialproxy_port=6083) to two nova-compute hosts at 192.168.50.104 and 192.168.50.105, each configured with [serial_console] enabled=true, port_range=10000:20000, base_url=ws://192.168.50.100:6083 and proxyclient_address set to the host's own IP.]
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/file-backed-memory.rst0000664000175000017500000001020200000000000022106 0ustar00zuulzuul00000000000000================== File-backed memory ================== ..
important:: As of the 18.0.0 Rocky release, the functionality described below is only supported by the libvirt/KVM driver. The file-backed memory feature in Openstack allows a Nova node to serve guest memory from a file backing store. This mechanism uses the libvirt file memory source, causing guest instance memory to be allocated as files within the libvirt memory backing directory. Since instance performance will be related to the speed of the backing store, this feature works best when used with very fast block devices or virtual file systems - such as flash or RAM devices. When configured, ``nova-compute`` will report the capacity configured for file-backed memory to placement in place of the total system memory capacity. This allows the node to run more instances than would normally fit within system memory. When available in libvirt and qemu, instance memory will be discarded by qemu at shutdown by calling ``madvise(MADV_REMOVE)``, to avoid flushing any dirty memory to the backing store on exit. To enable file-backed memory, follow the steps below: #. `Configure the backing store`_ #. `Configure Nova Compute for file-backed memory`_ .. important:: It is not possible to live migrate from a node running a version of OpenStack that does not support file-backed memory to a node with file backed memory enabled. It is recommended that all Nova compute nodes are upgraded to Rocky before enabling file-backed memory. Prerequisites and Limitations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Libvirt File-backed memory requires libvirt version 4.0.0 or newer. Discard capability requires libvirt version 4.4.0 or newer. Qemu File-backed memory requires qemu version 2.6.0 or newer. Discard capability requires qemu version 2.10.0 or newer. Memory overcommit File-backed memory is not compatible with memory overcommit. :oslo.config:option:`ram_allocation_ratio` must be set to ``1.0`` in ``nova.conf``, and the host must not be added to a :doc:`host aggregate ` with ``ram_allocation_ratio`` set to anything but ``1.0``. Reserved memory When configured, file-backed memory is reported as total system memory to placement, with RAM used as cache. Reserved memory corresponds to disk space not set aside for file-backed memory. :oslo.config:option:`reserved_host_memory_mb` should be set to ``0`` in ``nova.conf``. Huge pages File-backed memory is not compatible with huge pages. Instances with huge pages configured will not start on a host with file-backed memory enabled. It is recommended to use host aggregates to ensure instances configured for huge pages are not placed on hosts with file-backed memory configured. Handling these limitations could be optimized with a scheduler filter in the future. Configure the backing store ~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. note:: ``/dev/sdb`` and the ``ext4`` filesystem are used here as an example. This will differ between environments. .. note:: ``/var/lib/libvirt/qemu/ram`` is the default location. The value can be set via ``memory_backing_dir`` in ``/etc/libvirt/qemu.conf``, and the mountpoint must match the value configured there. By default, Libvirt with qemu/KVM allocates memory within ``/var/lib/libvirt/qemu/ram/``. To utilize this, you need to have the backing store mounted at (or above) this location. #. Create a filesystem on the backing device .. code-block:: console # mkfs.ext4 /dev/sdb #. Mount the backing device Add the backing device to ``/etc/fstab`` for automatic mounting to ``/var/lib/libvirt/qemu/ram`` Mount the device .. 
code-block:: console # mount /dev/sdb /var/lib/libvirt/qemu/ram Configure Nova Compute for file-backed memory ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #. Enable File-backed memory in ``nova-compute`` Configure Nova to utilize file-backed memory with the capacity of the backing store in MiB. 1048576 MiB (1 TiB) is used in this example. Edit ``/etc/nova/nova.conf`` .. code-block:: ini [libvirt] file_backed_memory=1048576 #. Restart the ``nova-compute`` service ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/flavors.rst0000664000175000017500000001323400000000000020134 0ustar00zuulzuul00000000000000============== Manage Flavors ============== Admin users can use the :command:`openstack flavor` command to customize and manage flavors. To see information for this command, run: .. code-block:: console $ openstack flavor --help Command "flavor" matches: flavor create flavor delete flavor list flavor set flavor show flavor unset .. note:: Configuration rights can be delegated to additional users by redefining the access controls for ``os_compute_api:os-flavor-manage:create``, ``os_compute_api:os-flavor-manage:update`` and ``os_compute_api:os-flavor-manage:delete`` in ``/etc/nova/policy.json`` on the ``nova-api`` server. .. note:: Flavor customization can be limited by the hypervisor in use. For example the libvirt driver enables quotas on CPUs available to a VM, disk tuning, bandwidth I/O, watchdog behavior, random number generator device control, and instance VIF traffic control. For information on the flavors and flavor extra specs, refer to :doc:`/user/flavors`. Create a flavor --------------- #. List flavors to show the ID and name, the amount of memory, the amount of disk space for the root partition and for the ephemeral partition, the swap, and the number of virtual CPUs for each flavor: .. code-block:: console $ openstack flavor list #. To create a flavor, specify a name, ID, RAM size, disk size, and the number of vCPUs for the flavor, as follows: .. code-block:: console $ openstack flavor create FLAVOR_NAME --id FLAVOR_ID \ --ram RAM_IN_MB --disk ROOT_DISK_IN_GB --vcpus NUMBER_OF_VCPUS .. note:: Unique ID (integer or UUID) for the new flavor. If specifying 'auto', a UUID will be automatically generated. Here is an example that creates a public ``m1.extra_tiny`` flavor that automatically gets an ID assigned, with 256 MB memory, no disk space, and one VCPU. .. code-block:: console $ openstack flavor create --public m1.extra_tiny --id auto \ --ram 256 --disk 0 --vcpus 1 #. If an individual user or group of users needs a custom flavor that you do not want other projects to have access to, you can create a private flavor. .. code-block:: console $ openstack flavor create --private m1.extra_tiny --id auto \ --ram 256 --disk 0 --vcpus 1 After you create a flavor, assign it to a project by specifying the flavor name or ID and the project ID: .. code-block:: console $ openstack flavor set --project PROJECT_ID m1.extra_tiny For a list of optional parameters, run this command: .. code-block:: console $ openstack help flavor create #. In addition, you can set or unset properties, commonly referred to as "extra specs", for the existing flavor. The ``extra_specs`` metadata keys can influence the instance directly when it is launched. If a flavor sets the ``quota:vif_outbound_peak=65536`` extra spec, the instance's outbound peak bandwidth I/O should be less than or equal to 512 Mbps. 
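As an illustrative sketch only (the flavor name reuses the ``m1.extra_tiny`` example above; the ``quota:vif_outbound_*`` keys are the libvirt VIF bandwidth-tuning extra specs, with average and peak expressed in kilobytes per second and burst in kilobytes), such a limit could be applied as follows:

.. code-block:: console

   $ openstack flavor set m1.extra_tiny \
       --property quota:vif_outbound_average=32768 \
       --property quota:vif_outbound_peak=65536 \
       --property quota:vif_outbound_burst=65536

Here 32768 kB/s corresponds to roughly 256 Mbps sustained and 65536 kB/s to the 512 Mbps peak mentioned above; the matching ``quota:vif_inbound_*`` keys limit inbound traffic.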
There are several aspects that can work for an instance including *CPU limits*, *Disk tuning*, *Bandwidth I/O*, *Watchdog behavior*, and *Random-number generator*. For information about available metadata keys, see :doc:`/user/flavors`. For a list of optional parameters, run this command: .. code-block:: console $ openstack flavor set --help Modify a flavor --------------- Only the description of flavors can be modified (starting from microversion 2.55). To modify the description of a flavor, specify the flavor name or ID and a new description as follows: .. code-block:: console $ openstack --os-compute-api-version 2.55 flavor set --description .. note:: The only field that can be updated is the description field. Nova has historically intentionally not included an API to update a flavor because that would be confusing for instances already created with that flavor. Needing to change any other aspect of a flavor requires deleting and/or creating a new flavor. Nova stores a serialized version of the flavor associated with an instance record in the ``instance_extra`` table. While nova supports `updating flavor extra_specs`_ it does not update the embedded flavor in existing instances. Nova does not update the embedded flavor as the extra_specs change may invalidate the current placement of the instance or alter the compute context that has been created for the instance by the virt driver. For this reason admins should avoid updating extra_specs for flavors used by existing instances. A resize can be used to update existing instances if required but as a resize performs a cold migration it is not transparent to a tenant. .. _updating flavor extra_specs: https://docs.openstack.org/api-ref/compute/?expanded=#update-an-extra-spec-for-a-flavor Delete a flavor --------------- To delete a flavor, specify the flavor name or ID as follows: .. code-block:: console $ openstack flavor delete FLAVOR Default Flavors --------------- Previous versions of nova typically deployed with default flavors. This was removed from Newton. The following table lists the default flavors for Mitaka and earlier. ============ ========= =============== =============== Flavor VCPUs Disk (in GB) RAM (in MB) ============ ========= =============== =============== m1.tiny 1 1 512 m1.small 1 20 2048 m1.medium 2 40 4096 m1.large 4 80 8192 m1.xlarge 8 160 16384 ============ ========= =============== =============== ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/huge-pages.rst0000664000175000017500000002333400000000000020507 0ustar00zuulzuul00000000000000========== Huge pages ========== The huge page feature in OpenStack provides important performance improvements for applications that are highly memory IO-bound. .. note:: Huge pages may also be referred to hugepages or large pages, depending on the source. These terms are synonyms. Pages, the TLB and huge pages ----------------------------- Pages Physical memory is segmented into a series of contiguous regions called pages. Each page contains a number of bytes, referred to as the page size. The system retrieves memory by accessing entire pages, rather than byte by byte. Translation Lookaside Buffer (TLB) A TLB is used to map the virtual addresses of pages to the physical addresses in actual memory. The TLB is a cache and is not limitless, storing only the most recent or frequently accessed pages. 
During normal operation, processes will sometimes attempt to retrieve pages that are not stored in the cache. This is known as a TLB miss and results in a delay as the processor iterates through the pages themselves to find the missing address mapping. Huge Pages The standard page size in x86 systems is 4 kB. This is optimal for general purpose computing but larger page sizes - 2 MB and 1 GB - are also available. These larger page sizes are known as huge pages. Huge pages result in less efficient memory usage as a process will not generally use all memory available in each page. However, use of huge pages will result in fewer overall pages and a reduced risk of TLB misses. For processes that have significant memory requirements or are memory intensive, the benefits of huge pages frequently outweigh the drawbacks. Persistent Huge Pages On Linux hosts, persistent huge pages are huge pages that are reserved upfront. The HugeTLB provides for the mechanism for this upfront configuration of huge pages. The HugeTLB allows for the allocation of varying quantities of different huge page sizes. Allocation can be made at boot time or run time. Refer to the `Linux hugetlbfs guide`_ for more information. Transparent Huge Pages (THP) On Linux hosts, transparent huge pages are huge pages that are automatically provisioned based on process requests. Transparent huge pages are provisioned on a best effort basis, attempting to provision 2 MB huge pages if available but falling back to 4 kB small pages if not. However, no upfront configuration is necessary. Refer to the `Linux THP guide`_ for more information. Enabling huge pages on the host ------------------------------- .. important:: Huge pages may not be used on a host configured for file-backed memory. See :doc:`file-backed-memory` for details Persistent huge pages are required owing to their guaranteed availability. However, persistent huge pages are not enabled by default in most environments. The steps for enabling huge pages differ from platform to platform and only the steps for Linux hosts are described here. On Linux hosts, the number of persistent huge pages on the host can be queried by checking ``/proc/meminfo``: .. code-block:: console $ grep Huge /proc/meminfo AnonHugePages: 0 kB ShmemHugePages: 0 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB In this instance, there are 0 persistent huge pages (``HugePages_Total``) and 0 transparent huge pages (``AnonHugePages``) allocated. Huge pages can be allocated at boot time or run time. Huge pages require a contiguous area of memory - memory that gets increasingly fragmented the longer a host is running. Identifying contiguous areas of memory is an issue for all huge page sizes, but it is particularly problematic for larger huge page sizes such as 1 GB huge pages. Allocating huge pages at boot time will ensure the correct number of huge pages is always available, while allocating them at run time can fail if memory has become too fragmented. To allocate huge pages at boot time, the kernel boot parameters must be extended to include some huge page-specific parameters. This can be achieved by modifying ``/etc/default/grub`` and appending the ``hugepagesz``, ``hugepages``, and ``transparent_hugepage=never`` arguments to ``GRUB_CMDLINE_LINUX``. To allocate, for example, 2048 persistent 2 MB huge pages at boot time, run: ..
code-block:: console # echo 'GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX hugepagesz=2M hugepages=2048 transparent_hugepage=never"' > /etc/default/grub $ grep GRUB_CMDLINE_LINUX /etc/default/grub GRUB_CMDLINE_LINUX="..." GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX hugepagesz=2M hugepages=2048 transparent_hugepage=never" .. important:: Persistent huge pages are not usable by standard host OS processes. Ensure enough free, non-huge page memory is reserved for these processes. Reboot the host, then validate that huge pages are now available: .. code-block:: console $ grep "Huge" /proc/meminfo AnonHugePages: 0 kB ShmemHugePages: 0 kB HugePages_Total: 2048 HugePages_Free: 2048 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB There are now 2048 2 MB huge pages totalling 4 GB of huge pages. These huge pages must be mounted. On most platforms, this happens automatically. To verify that the huge pages are mounted, run: .. code-block:: console # mount | grep huge hugetlbfs on /dev/hugepages type hugetlbfs (rw) In this instance, the huge pages are mounted at ``/dev/hugepages``. This mount point varies from platform to platform. If the above command did not return anything, the hugepages must be mounted manually. To mount the huge pages at ``/dev/hugepages``, run: .. code-block:: console # mkdir -p /dev/hugepages # mount -t hugetlbfs hugetlbfs /dev/hugepages There are many more ways to configure huge pages, including allocating huge pages at run time, specifying varying allocations for different huge page sizes, or allocating huge pages from memory affinitized to different NUMA nodes. For more information on configuring huge pages on Linux hosts, refer to the `Linux hugetlbfs guide`_. Customizing instance huge pages allocations ------------------------------------------- .. important:: The functionality described below is currently only supported by the libvirt/KVM driver. .. important:: For performance reasons, configuring huge pages for an instance will implicitly result in a NUMA topology being configured for the instance. Configuring a NUMA topology for an instance requires enablement of ``NUMATopologyFilter``. Refer to :doc:`cpu-topologies` for more information. By default, an instance does not use huge pages for its underlying memory. However, huge pages can bring important or required performance improvements for some workloads. Huge pages must be requested explicitly through the use of flavor extra specs or image metadata. To request an instance use huge pages, run: .. code-block:: console $ openstack flavor set m1.large --property hw:mem_page_size=large Different platforms offer different huge page sizes. For example: x86-based platforms offer 2 MB and 1 GB huge page sizes. Specific huge page sizes can be also be requested, with or without a unit suffix. The unit suffix must be one of: Kb(it), Kib(it), Mb(it), Mib(it), Gb(it), Gib(it), Tb(it), Tib(it), KB, KiB, MB, MiB, GB, GiB, TB, TiB. Where a unit suffix is not provided, Kilobytes are assumed. To request an instance to use 2 MB huge pages, run one of: .. code-block:: console $ openstack flavor set m1.large --property hw:mem_page_size=2MB .. code-block:: console $ openstack flavor set m1.large --property hw:mem_page_size=2048 Enabling huge pages for an instance can have negative consequences for other instances by consuming limited huge pages resources. To explicitly request an instance use small pages, run: .. code-block:: console $ openstack flavor set m1.large --property hw:mem_page_size=small .. 
note:: Explicitly requesting any page size will still result in a NUMA topology being applied to the instance, as described earlier in this document. Finally, to leave the decision of huge or small pages to the compute driver, run: .. code-block:: console $ openstack flavor set m1.large --property hw:mem_page_size=any For more information about the syntax for ``hw:mem_page_size``, refer to :doc:`flavors`. Applications are frequently packaged as images. For applications that require the IO performance improvements that huge pages provides, configure image metadata to ensure instances always request the specific page size regardless of flavor. To configure an image to use 1 GB huge pages, run: .. code-block:: console $ openstack image set [IMAGE_ID] --property hw_mem_page_size=1GB If the flavor specifies a numerical page size or a page size of "small" the image is not allowed to specify a page size and if it does an exception will be raised. If the flavor specifies a page size of ``any`` or ``large`` then any page size specified in the image will be used. By setting a ``small`` page size in the flavor, administrators can prevent users requesting huge pages in flavors and impacting resource utilization. To configure this page size, run: .. code-block:: console $ openstack flavor set m1.large --property hw:mem_page_size=small .. note:: Explicitly requesting any page size will still result in a NUMA topology being applied to the instance, as described earlier in this document. For more information about image metadata, refer to the `Image metadata`_ guide. .. Links .. _`Linux THP guide`: https://www.kernel.org/doc/Documentation/vm/transhuge.txt .. _`Linux hugetlbfs guide`: https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt .. _`Image metadata`: https://docs.openstack.org/image-guide/introduction.html#image-metadata ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/image-caching.rst0000664000175000017500000001235200000000000021134 0ustar00zuulzuul00000000000000============= Image Caching ============= Nova supports caching base images on compute nodes when using a `supported virt driver`_. .. _supported virt driver: https://docs.openstack.org/nova/latest/user/support-matrix.html#operation_cache_images What is Image Caching? ---------------------- In order to understand what image caching is and why it is beneficial, it helps to be familiar with the process by which an instance is booted from a given base image. When a new instance is created on a compute node, the following general steps are performed by the compute manager in conjunction with the virt driver: #. Download the base image from glance #. Copy or COW the base image to create a new root disk image for the instance #. Boot the instance using the new root disk image The first step involves downloading the entire base image to the local disk on the compute node, which could involve many gigabytes of network traffic, storage, and many minutes of latency between the start of the boot process and actually running the instance. When the virt driver supports image caching, step #1 above may be skipped if the base image is already present on the compute node. This is most often the case when another instance has been booted on that node from the same base image recently. 
If present, the download operation can be skipped, which greatly reduces the time-to-boot for the second and subsequent instances that use the same base image, as well as avoids load on the glance server and the network connection. By default, the compute node will periodically scan the images it has cached, looking for base images that are not used by any instances on the node that are older than a configured lifetime (24 hours by default). Those unused images are deleted from the cache directory until they are needed again. For more information about configuring image cache behavior, see the documentation for the configuration options in the :oslo.config:group:`image_cache` group. .. note:: Some ephemeral backend drivers may not use or need image caching, or may not behave in the same way as others. For example, when using the ``rbd`` backend with the ``libvirt`` driver and a shared pool with glance, images are COW'd at the storage level and thus need not be downloaded (and thus cached) at the compute node at all. Image Caching Resource Accounting --------------------------------- Generally the size of the image cache is not part of the data Nova includes when reporting available or consumed disk space. This means that when ``nova-compute`` reports 100G of total disk space, the scheduler will assume that 100G of instances may be placed there. Usually disk is the most plentiful resource and thus the last to be exhausted, so this is often not problematic. However, if many instances are booted from distinct images, all of which need to be cached in addition to the disk space used by the instances themselves, Nova may overcommit the disk unintentionally by failing to consider the size of the image cache. There are two approaches to addressing this situation: #. **Mount the image cache as a separate filesystem**. This will cause Nova to report the amount of disk space available purely to instances, independent of how much is consumed by the cache. Nova will continue to disregard the size of the image cache and, if the cache space is exhausted, builds will fail. However, available disk space for instances will be correctly reported by ``nova-compute`` and accurately considered by the scheduler. #. **Enable optional reserved disk amount behavior**. The configuration workaround :oslo.config:option:`workarounds.reserve_disk_resource_for_image_cache` will cause ``nova-compute`` to periodically update the reserved disk amount to include the statically configured value, as well as the amount currently consumed by the image cache. This will cause the scheduler to see the available disk space decrease as the image cache grows. This is not updated synchronously and thus is not a perfect solution, but should vastly increase the scheduler's visibility resulting in better decisions. (Note this solution is currently libvirt-specific) As above, not all backends and virt drivers use image caching, and thus a third option may be to consider alternative infrastructure to eliminate this problem altogether. Image pre-caching ----------------- It may be beneficial to pre-cache images on compute nodes in order to achieve low time-to-boot latency for new instances immediately. This is often useful when rolling out a new version of an application where downtime is important and having the new images already available on the compute nodes is critical. Nova provides (since the Ussuri release) a mechanism to request that images be cached without having to boot an actual instance on a node. 
This best-effort service operates at the host aggregate level in order to provide an efficient way to indicate that a large number of computes should receive a given set of images. If the computes that should pre-cache an image are not already in a defined host aggregate, that must be done first. For information on how to perform aggregate-based image pre-caching, see the :ref:`image-caching-aggregates` section of the Host aggregates documentation. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/index.rst0000664000175000017500000001103500000000000017564 0ustar00zuulzuul00000000000000======= Compute ======= The OpenStack Compute service allows you to control an Infrastructure-as-a-Service (IaaS) cloud computing platform. It gives you control over instances and networks, and allows you to manage access to the cloud through users and projects. Compute does not include virtualization software. Instead, it defines drivers that interact with underlying virtualization mechanisms that run on your host operating system, and exposes functionality over a web-based API. Overview -------- To effectively administer compute, you must understand how the different installed nodes interact with each other. Compute can be installed in many different ways using multiple servers, but generally multiple compute nodes control the virtual servers and a cloud controller node contains the remaining Compute services. The Compute cloud works using a series of daemon processes named ``nova-*`` that exist persistently on the host machine. These binaries can all run on the same machine or be spread out on multiple boxes in a large deployment. The responsibilities of services and drivers are: .. rubric:: Services ``nova-api`` Receives XML requests and sends them to the rest of the system. A WSGI app routes and authenticates requests. Supports the OpenStack Compute APIs. A ``nova.conf`` configuration file is created when Compute is installed. .. todo:: Describe nova-api-metadata, nova-api-os-compute, nova-serialproxy and nova-spicehtml5proxy nova-console, nova-dhcpbridge and nova-xvpvncproxy are all deprecated for removal so they can be ignored. ``nova-compute`` Manages virtual machines. Loads a Service object, and exposes the public methods on ComputeManager through a Remote Procedure Call (RPC). ``nova-conductor`` Provides database-access support for compute nodes (thereby reducing security risks). ``nova-scheduler`` Dispatches requests for new virtual machines to the correct node. ``nova-novncproxy`` Provides a VNC proxy for browsers, allowing VNC consoles to access virtual machines. .. note:: Some services have drivers that change how the service implements its core functionality. For example, the ``nova-compute`` service supports drivers that let you choose which hypervisor type it can use. .. toctree:: :maxdepth: 2 manage-volumes flavors default-ports admin-password-injection manage-the-cloud manage-logs root-wrap-reference configuring-migrations live-migration-usage remote-console-access service-groups node-down Advanced configuration ---------------------- OpenStack clouds run on platforms that differ greatly in the capabilities that they provide. By default, the Compute service seeks to abstract the underlying hardware that it runs on, rather than exposing specifics about the underlying host platforms. This abstraction manifests itself in many ways. 
For example, rather than exposing the types and topologies of CPUs running on hosts, the service exposes a number of generic CPUs (virtual CPUs, or vCPUs) and allows for overcommitting of these. In a similar manner, rather than exposing the individual types of network devices available on hosts, generic software-powered network ports are provided. These features are designed to allow high resource utilization and allows the service to provide a generic cost-effective and highly scalable cloud upon which to build applications. This abstraction is beneficial for most workloads. However, there are some workloads where determinism and per-instance performance are important, if not vital. In these cases, instances can be expected to deliver near-native performance. The Compute service provides features to improve individual instance for these kind of workloads. .. include:: /common/numa-live-migration-warning.txt .. toctree:: :maxdepth: 2 pci-passthrough cpu-topologies huge-pages virtual-gpu file-backed-memory ports-with-resource-requests virtual-persistent-memory Additional guides ----------------- .. TODO(mriedem): This index page has a lot of content which should be organized into groups for things like configuration, operations, troubleshooting, etc. .. toctree:: :maxdepth: 2 aggregates arch availability-zones cells config-drive configuration/index evacuate image-caching metadata-service migration migrate-instance-with-snapshot networking quotas security-groups security services ssh-configuration support-compute secure-live-migration-with-qemu-native-tls mitigation-for-Intel-MDS-security-flaws vendordata ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/live-migration-usage.rst0000664000175000017500000003253400000000000022514 0ustar00zuulzuul00000000000000====================== Live-migrate instances ====================== Live-migrating an instance means moving its virtual machine to a different OpenStack Compute server while the instance continues running. Before starting a live-migration, review the chapter :ref:`section_configuring-compute-migrations`. It covers the configuration settings required to enable live-migration, but also reasons for migrations and non-live-migration options. The instructions below cover shared-storage and volume-backed migration. To block-migrate instances, add the command-line option ``-block-migrate`` to the :command:`nova live-migration` command, and ``--block-migration`` to the :command:`openstack server migrate` command. .. _section-manual-selection-of-dest: Manual selection of the destination host ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #. Obtain the ID of the instance you want to migrate: .. code-block:: console $ openstack server list +--------------------------------------+------+--------+-----------------+------------+ | ID | Name | Status | Networks | Image Name | +--------------------------------------+------+--------+-----------------+------------+ | d1df1b5a-70c4-4fed-98b7-423362f2c47c | vm1 | ACTIVE | private=a.b.c.d | ... | | d693db9e-a7cf-45ef-a7c9-b3ecb5f22645 | vm2 | ACTIVE | private=e.f.g.h | ... | +--------------------------------------+------+--------+-----------------+------------+ #. Determine on which host the instance is currently running. In this example, ``vm1`` is running on ``HostB``: .. 
code-block:: console $ openstack server show d1df1b5a-70c4-4fed-98b7-423362f2c47c +----------------------+--------------------------------------+ | Field | Value | +----------------------+--------------------------------------+ | ... | ... | | OS-EXT-SRV-ATTR:host | HostB | | ... | ... | | addresses | a.b.c.d | | flavor | m1.tiny | | id | d1df1b5a-70c4-4fed-98b7-423362f2c47c | | name | vm1 | | status | ACTIVE | | ... | ... | +----------------------+--------------------------------------+ #. Select the compute node the instance will be migrated to. In this example, we will migrate the instance to ``HostC``, because ``nova-compute`` is running on it: .. code-block:: console $ openstack compute service list +----+------------------+-------+----------+---------+-------+----------------------------+ | ID | Binary | Host | Zone | Status | State | Updated At | +----+------------------+-------+----------+---------+-------+----------------------------+ | 3 | nova-conductor | HostA | internal | enabled | up | 2017-02-18T09:42:29.000000 | | 4 | nova-scheduler | HostA | internal | enabled | up | 2017-02-18T09:42:26.000000 | | 5 | nova-compute | HostB | nova | enabled | up | 2017-02-18T09:42:29.000000 | | 6 | nova-compute | HostC | nova | enabled | up | 2017-02-18T09:42:29.000000 | +----+------------------+-------+----------+---------+-------+----------------------------+ #. Check that ``HostC`` has enough resources for migration: .. code-block:: console $ openstack host show HostC +-------+------------+-----+-----------+---------+ | Host | Project | CPU | Memory MB | Disk GB | +-------+------------+-----+-----------+---------+ | HostC | (total) | 16 | 32232 | 878 | | HostC | (used_now) | 22 | 21284 | 422 | | HostC | (used_max) | 22 | 21284 | 422 | | HostC | p1 | 22 | 21284 | 422 | | HostC | p2 | 22 | 21284 | 422 | +-------+------------+-----+-----------+---------+ - ``cpu``: Number of CPUs - ``memory_mb``: Total amount of memory, in MB - ``disk_gb``: Total amount of space for NOVA-INST-DIR/instances, in GB In this table, the first row shows the total amount of resources available on the physical server. The second line shows the currently used resources. The third line shows the maximum used resources. The fourth line and below shows the resources available for each project. #. Migrate the instance: .. code-block:: console $ openstack server migrate d1df1b5a-70c4-4fed-98b7-423362f2c47c --live HostC #. Confirm that the instance has been migrated successfully: .. code-block:: console $ openstack server show d1df1b5a-70c4-4fed-98b7-423362f2c47c +----------------------+--------------------------------------+ | Field | Value | +----------------------+--------------------------------------+ | ... | ... | | OS-EXT-SRV-ATTR:host | HostC | | ... | ... | +----------------------+--------------------------------------+ If the instance is still running on ``HostB``, the migration failed. The ``nova-scheduler`` and ``nova-conductor`` log files on the controller and the ``nova-compute`` log file on the source compute host can help pin-point the problem. .. _auto_selection_of_dest: Automatic selection of the destination host ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To leave the selection of the destination host to the Compute service, use the nova command-line client. #. Obtain the instance ID as shown in step 1 of the section :ref:`section-manual-selection-of-dest`. #. Leave out the host selection steps 2, 3, and 4. #. Migrate the instance: .. 
code-block:: console $ nova live-migration d1df1b5a-70c4-4fed-98b7-423362f2c47c Monitoring the migration ~~~~~~~~~~~~~~~~~~~~~~~~ #. Confirm that the instance is migrating: .. code-block:: console $ openstack server show d1df1b5a-70c4-4fed-98b7-423362f2c47c +----------------------+--------------------------------------+ | Field | Value | +----------------------+--------------------------------------+ | ... | ... | | status | MIGRATING | | ... | ... | +----------------------+--------------------------------------+ #. Check progress Use the nova command-line client for nova's migration monitoring feature. First, obtain the migration ID: .. code-block:: console $ nova server-migration-list d1df1b5a-70c4-4fed-98b7-423362f2c47c +----+-------------+----------- (...) | Id | Source Node | Dest Node | (...) +----+-------------+-----------+ (...) | 2 | - | - | (...) +----+-------------+-----------+ (...) For readability, most output columns were removed. Only the first column, **Id**, is relevant. In this example, the migration ID is 2. Use this to get the migration status. .. code-block:: console $ nova server-migration-show d1df1b5a-70c4-4fed-98b7-423362f2c47c 2 +------------------------+--------------------------------------+ | Property | Value | +------------------------+--------------------------------------+ | created_at | 2017-03-08T02:53:06.000000 | | dest_compute | controller | | dest_host | - | | dest_node | - | | disk_processed_bytes | 0 | | disk_remaining_bytes | 0 | | disk_total_bytes | 0 | | id | 2 | | memory_processed_bytes | 65502513 | | memory_remaining_bytes | 786427904 | | memory_total_bytes | 1091379200 | | server_uuid | d1df1b5a-70c4-4fed-98b7-423362f2c47c | | source_compute | compute2 | | source_node | - | | status | running | | updated_at | 2017-03-08T02:53:47.000000 | +------------------------+--------------------------------------+ The output shows that the migration is running. Progress is measured by the number of memory bytes that remain to be copied. If this number is not decreasing over time, the migration may be unable to complete, and it may be aborted by the Compute service. .. note:: The command reports that no disk bytes are processed, even in the event of block migration. What to do when the migration times out ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ During the migration process, the instance may write to a memory page after that page has been copied to the destination. When that happens, the same page has to be copied again. The instance may write to memory pages faster than they can be copied, so that the migration cannot complete. There are two optional actions, controlled by :oslo.config:option:`libvirt.live_migration_timeout_action`, which can be taken against a VM after :oslo.config:option:`libvirt.live_migration_completion_timeout` is reached: 1. ``abort`` (default): The live migration operation will be cancelled after the completion timeout is reached. This is similar to using API ``DELETE /servers/{server_id}/migrations/{migration_id}``. 2. ``force_complete``: The compute service will either pause the VM or trigger post-copy depending on if post copy is enabled and available (:oslo.config:option:`libvirt.live_migration_permit_post_copy` is set to `True`). This is similar to using API ``POST /servers/{server_id}/migrations/{migration_id}/action (force_complete)``. You can also read the :oslo.config:option:`libvirt.live_migration_timeout_action` configuration option help for more details. The following remarks assume the KVM/Libvirt hypervisor. 
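For operators who script their configuration, the following is a minimal sketch of how the timeout behaviour described above could be set; it assumes the ``crudini`` utility is available on the compute host and uses purely illustrative values, so adjust them to your environment and restart ``nova-compute`` afterwards:

.. code-block:: console

   # crudini --set /etc/nova/nova.conf libvirt live_migration_completion_timeout 800
   # crudini --set /etc/nova/nova.conf libvirt live_migration_timeout_action force_complete
   # crudini --set /etc/nova/nova.conf libvirt live_migration_permit_post_copy true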
How to know that the migration timed out ---------------------------------------- To determine that the migration timed out, inspect the ``nova-compute`` log file on the source host. The following log entry shows that the migration timed out: .. code-block:: console # grep WARNING.*d1df1b5a-70c4-4fed-98b7-423362f2c47c /var/log/nova/nova-compute.log ... WARNING nova.virt.libvirt.migration [req-...] [instance: ...] live migration not completed after 1800 sec Addressing migration timeouts ----------------------------- To stop the migration from putting load on infrastructure resources like network and disks, you may opt to cancel it manually. .. code-block:: console $ nova live-migration-abort INSTANCE_ID MIGRATION_ID To make live-migration succeed, you have several options: - **Manually force-complete the migration** .. code-block:: console $ nova live-migration-force-complete INSTANCE_ID MIGRATION_ID The instance is paused until memory copy completes. .. caution:: Since the pause impacts time keeping on the instance and not all applications tolerate incorrect time settings, use this approach with caution. - **Enable auto-convergence** Auto-convergence is a Libvirt feature. Libvirt detects that the migration is unlikely to complete and slows down its CPU until the memory copy process is faster than the instance's memory writes. To enable auto-convergence, set ``live_migration_permit_auto_converge=true`` in ``nova.conf`` and restart ``nova-compute``. Do this on all compute hosts. .. caution:: One possible downside of auto-convergence is the slowing down of the instance. - **Enable post-copy** This is a Libvirt feature. Libvirt detects that the migration does not progress and responds by activating the virtual machine on the destination host before all its memory has been copied. Access to missing memory pages result in page faults that are satisfied from the source host. To enable post-copy, set ``live_migration_permit_post_copy=true`` in ``nova.conf`` and restart ``nova-compute``. Do this on all compute hosts. When post-copy is enabled, manual force-completion does not pause the instance but switches to the post-copy process. .. caution:: Possible downsides: - When the network connection between source and destination is interrupted, page faults cannot be resolved anymore, and the virtual machine is rebooted. - Post-copy may lead to an increased page fault rate during migration, which can slow the instance down. If live migrations routinely timeout or fail during cleanup operations due to the user token timing out, consider configuring nova to use :ref:`service user tokens `. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/manage-logs.rst0000664000175000017500000001656300000000000020662 0ustar00zuulzuul00000000000000======= Logging ======= Logging module ~~~~~~~~~~~~~~ Logging behavior can be changed by creating a configuration file. To specify the configuration file, add this line to the ``/etc/nova/nova.conf`` file: .. code-block:: ini log_config_append=/etc/nova/logging.conf To change the logging level, add ``DEBUG``, ``INFO``, ``WARNING``, or ``ERROR`` as a parameter. The logging configuration file is an INI-style configuration file, which must contain a section called ``logger_nova``. This controls the behavior of the logging facility in the ``nova-*`` services. For example: .. 
code-block:: ini [logger_nova] level = INFO handlers = stderr qualname = nova This example sets the debugging level to ``INFO`` (which is less verbose than the default ``DEBUG`` setting). For more about the logging configuration syntax, including the ``handlers`` and ``qualname`` variables, see the `Python documentation `__ on logging configuration files. For an example of the ``logging.conf`` file with various defined handlers, see the :oslo.log-doc:`Example Configuration File for nova `. Syslog ~~~~~~ OpenStack Compute services can send logging information to syslog. This is useful if you want to use rsyslog to forward logs to a remote machine. Separately configure the Compute service (nova), the Identity service (keystone), the Image service (glance), and, if you are using it, the Block Storage service (cinder) to send log messages to syslog. Open these configuration files: - ``/etc/nova/nova.conf`` - ``/etc/keystone/keystone.conf`` - ``/etc/glance/glance-api.conf`` - ``/etc/glance/glance-registry.conf`` - ``/etc/cinder/cinder.conf`` In each configuration file, add these lines: .. code-block:: ini debug = False use_syslog = True syslog_log_facility = LOG_LOCAL0 In addition to enabling syslog, these settings also turn off debugging output from the log. .. note:: Although this example uses the same local facility for each service (``LOG_LOCAL0``, which corresponds to syslog facility ``LOCAL0``), we recommend that you configure a separate local facility for each service, as this provides better isolation and more flexibility. For example, you can capture logging information at different severity levels for different services. syslog allows you to define up to eight local facilities, ``LOCAL0, LOCAL1, ..., LOCAL7``. For more information, see the syslog documentation. Rsyslog ~~~~~~~ rsyslog is useful for setting up a centralized log server across multiple machines. This section briefly describe the configuration to set up an rsyslog server. A full treatment of rsyslog is beyond the scope of this book. This section assumes rsyslog has already been installed on your hosts (it is installed by default on most Linux distributions). This example provides a minimal configuration for ``/etc/rsyslog.conf`` on the log server host, which receives the log files .. code-block:: console # provides TCP syslog reception $ModLoad imtcp $InputTCPServerRun 1024 Add a filter rule to ``/etc/rsyslog.conf`` which looks for a host name. This example uses COMPUTE_01 as the compute host name: .. code-block:: none :hostname, isequal, "COMPUTE_01" /mnt/rsyslog/logs/compute-01.log On each compute host, create a file named ``/etc/rsyslog.d/60-nova.conf``, with the following content: .. code-block:: none # prevent debug from dnsmasq with the daemon.none parameter *.*;auth,authpriv.none,daemon.none,local0.none -/var/log/syslog # Specify a log level of ERROR local0.error @@172.20.1.43:1024 Once you have created the file, restart the ``rsyslog`` service. Error-level log messages on the compute hosts should now be sent to the log server. Serial console ~~~~~~~~~~~~~~ The serial console provides a way to examine kernel output and other system messages during troubleshooting if the instance lacks network connectivity. Read-only access from server serial console is possible using the ``os-GetSerialOutput`` server action. Most cloud images enable this feature by default. For more information, see :ref:`compute-common-errors-and-fixes`. 
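As a quick illustration of the read-only case, the console output of an instance can typically be retrieved with the ``openstack`` client; the instance name ``myInstance`` below is only an example:

.. code-block:: console

   $ openstack console log show --lines 25 myInstance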
OpenStack Juno and later supports read-write access using the serial console using the ``os-GetSerialConsole`` server action. This feature also requires a websocket client to access the serial console. .. rubric:: Configuring read-write serial console access #. On a compute node, edit the ``/etc/nova/nova.conf`` file: In the ``[serial_console]`` section, enable the serial console: .. code-block:: ini [serial_console] # ... enabled = true #. In the ``[serial_console]`` section, configure the serial console proxy similar to graphical console proxies: .. code-block:: ini [serial_console] # ... base_url = ws://controller:6083/ listen = 0.0.0.0 proxyclient_address = MANAGEMENT_INTERFACE_IP_ADDRESS The ``base_url`` option specifies the base URL that clients receive from the API upon requesting a serial console. Typically, this refers to the host name of the controller node. The ``listen`` option specifies the network interface nova-compute should listen on for virtual console connections. Typically, 0.0.0.0 will enable listening on all interfaces. The ``proxyclient_address`` option specifies which network interface the proxy should connect to. Typically, this refers to the IP address of the management interface. When you enable read-write serial console access, Compute will add serial console information to the Libvirt XML file for the instance. For example: .. code-block:: xml .. rubric:: Accessing the serial console on an instance #. Use the :command:`nova get-serial-proxy` command to retrieve the websocket URL for the serial console on the instance: .. code-block:: console $ nova get-serial-proxy INSTANCE_NAME .. list-table:: :header-rows: 0 :widths: 9 65 * - Type - Url * - serial - ws://127.0.0.1:6083/?token=18510769-71ad-4e5a-8348-4218b5613b3d Alternatively, use the API directly: .. code-block:: console $ curl -i 'http://:8774/v2.1//servers//action' \ -X POST \ -H "Accept: application/json" \ -H "Content-Type: application/json" \ -H "X-Auth-Project-Id: " \ -H "X-Auth-Token: " \ -d '{"os-getSerialConsole": {"type": "serial"}}' #. Use Python websocket with the URL to generate ``.send``, ``.recv``, and ``.fileno`` methods for serial console access. For example: .. code-block:: python import websocket ws = websocket.create_connection( 'ws://127.0.0.1:6083/?token=18510769-71ad-4e5a-8348-4218b5613b3d', subprotocols=['binary', 'base64']) Alternatively, use a `Python websocket client `__. .. note:: When you enable the serial console, typical instance logging using the :command:`nova console-log` command is disabled. Kernel output and other system messages will not be visible unless you are actively viewing the serial console. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/manage-the-cloud.rst0000664000175000017500000000432600000000000021574 0ustar00zuulzuul00000000000000.. _section_manage-the-cloud: ================ Manage the cloud ================ .. toctree:: common/nova-show-usage-statistics-for-hosts-instances System administrators can use the :command:`openstack` to manage their clouds. The ``openstack`` client can be used by all users, though specific commands might be restricted by the Identity service. **Managing the cloud with the openstack client** #. The ``python-openstackclient`` package provides an ``openstack`` shell that enables Compute API interactions from the command line. 
Install the client, and provide your user name and password (which can be set as environment variables for convenience), for the ability to administer the cloud from the command line. For more information on ``python-openstackclient``, refer to the :python-openstackclient-doc:`documentation <>`. #. Confirm the installation was successful: .. code-block:: console $ openstack help usage: openstack [--version] [-v | -q] [--log-file LOG_FILE] [-h] [--debug] [--os-cloud ] [--os-region-name ] [--os-cacert ] [--verify | --insecure] [--os-default-domain ] ... Running :command:`openstack help` returns a list of ``openstack`` commands and parameters. To get help for a subcommand, run: .. code-block:: console $ openstack help SUBCOMMAND For a complete list of ``openstack`` commands and parameters, refer to the :python-openstackclient-doc:`OpenStack Command-Line Reference `. #. Set the required parameters as environment variables to make running commands easier. For example, you can add ``--os-username`` as an ``openstack`` option, or set it as an environment variable. To set the user name, password, and project as environment variables, use: .. code-block:: console $ export OS_USERNAME=joecool $ export OS_PASSWORD=coolword $ export OS_TENANT_NAME=coolu #. The Identity service gives you an authentication endpoint, which Compute recognizes as ``OS_AUTH_URL``: .. code-block:: console $ export OS_AUTH_URL=http://hostname:5000/v2.0 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/manage-volumes.rst0000664000175000017500000000606300000000000021402 0ustar00zuulzuul00000000000000============== Manage volumes ============== Depending on the setup of your cloud provider, they may give you an endpoint to use to manage volumes. You can use the ``openstack`` CLI to manage volumes. For the purposes of the compute service, attaching, detaching and :doc:`creating a server from a volume ` are of primary interest. Refer to the :python-openstackclient-doc:`CLI documentation ` for more information. Volume multi-attach ------------------- Nova `added support for multiattach volumes`_ in the 17.0.0 Queens release. This document covers the nova-specific aspects of this feature. Refer to the :cinder-doc:`block storage admin guide ` for more details about creating multiattach-capable volumes. :term:`Boot from volume ` and attaching a volume to a server that is not SHELVED_OFFLOADED is supported. Ultimately the ability to perform these actions depends on the compute host and hypervisor driver that is being used. There is also a `recorded overview and demo`_ for volume multi-attach. Requirements ~~~~~~~~~~~~ * The minimum required compute API microversion for attaching a multiattach-capable volume to more than one server is :ref:`2.60 `. * Cinder 12.0.0 (Queens) or newer is required. * The ``nova-compute`` service must be running at least Queens release level code (17.0.0) and the hypervisor driver must support attaching block storage devices to more than one guest. Refer to :doc:`/user/support-matrix` for details on which compute drivers support volume multiattach. * When using the libvirt compute driver, the following native package versions determine multiattach support: * libvirt must be greater than or equal to 3.10, or * qemu must be less than 2.10 * Swapping an *in-use* multiattach volume is not supported (this is actually controlled via the block storage volume retype API). 
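Putting the requirements above into practice, attaching a multiattach-capable volume to a second server might look like the following sketch; the server and volume names are purely illustrative, and the volume must already have been created as multiattach-capable in cinder:

.. code-block:: console

   $ openstack --os-compute-api-version 2.60 server add volume test-server-2 multiattach-volume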
Known issues ~~~~~~~~~~~~ * Creating multiple servers in a single request with a multiattach-capable volume as the root disk is not yet supported: https://bugs.launchpad.net/nova/+bug/1747985 * Subsequent attachments to the same volume are all attached in *read/write* mode by default in the block storage service. A future change either in nova or cinder may address this so that subsequent attachments are made in *read-only* mode, or such that the mode can be specified by the user when attaching the volume to the server. Testing ~~~~~~~ Continuous integration testing of the volume multiattach feature is done via the ``tempest-full`` and ``tempest-slow`` jobs, which, along with the tests themselves, are defined in the `tempest repository`_. .. _added support for multiattach volumes: https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/multi-attach-volume.html .. _recorded overview and demo: https://www.youtube.com/watch?v=hZg6wqxdEHk .. _tempest repository: http://codesearch.openstack.org/?q=CONF.compute_feature_enabled.volume_multiattach&i=nope&files=&repos=tempest ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/metadata-service.rst0000664000175000017500000001461100000000000021676 0ustar00zuulzuul00000000000000================ Metadata service ================ .. note:: This section provides deployment information about the metadata service. For end-user information about the metadata service and instance metadata in general, refer to the :ref:`user guide `. The metadata service provides a way for instances to retrieve instance-specific data. Instances access the metadata service at ``http://169.254.169.254``. The metadata service supports two sets of APIs - an OpenStack metadata API and an EC2-compatible API - and also exposes vendordata and user data. Both the OpenStack metadata and EC2-compatible APIs are versioned by date. The metadata service can be run globally, as part of the :program:`nova-api` application, or on a per-cell basis, as part of the standalone :program:`nova-api-metadata` application. A detailed comparison is provided in the :ref:`cells V2 guide `. .. versionchanged:: 19.0.0 The ability to run the nova metadata API service on a per-cell basis was added in Stein. For versions prior to this release, you should not use the standalone :program:`nova-api-metadata` application for multiple cells. Guests access the service at ``169.254.169.254``. The networking service, neutron, is responsible for intercepting these requests and adding HTTP headers which uniquely identify the source of the request before forwarding it to the metadata API server. For the Open vSwitch and Linux Bridge backends provided with neutron, the flow looks something like so: #. Instance sends a HTTP request for metadata to ``169.254.169.254``. #. This request either hits the router or DHCP namespace depending on the route in the instance #. The metadata proxy service in the namespace adds the following info to the request: - Instance IP (``X-Forwarded-For`` header) - Router or Network-ID (``X-Neutron-Network-Id`` or ``X-Neutron-Router-Id`` header) #. The metadata proxy service sends this request to the metadata agent (outside the namespace) via a UNIX domain socket. #. The :program:`neutron-metadata-agent` application forwards the request to the nova metadata API service by adding some new headers (instance ID and Tenant ID) to the request. 
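The end result of this flow can be observed from inside a guest by querying the metadata service directly, for example with ``curl`` (assuming it is installed in the guest image):

.. code-block:: console

   $ curl http://169.254.169.254/openstack/latest/meta_data.json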
This flow may vary if a different networking backend is used. Neutron and nova must be configured to communicate together with a shared secret. Neutron uses this secret to sign the Instance-ID header of the metadata request to prevent spoofing. This secret is configured through the :oslo.config:option:`neutron.metadata_proxy_shared_secret` config option in nova and the equivalent ``metadata_proxy_shared_secret`` config option in neutron. Configuration ------------- The :program:`nova-api` application accepts the following metadata service-related options: - :oslo.config:option:`enabled_apis` - :oslo.config:option:`enabled_ssl_apis` - :oslo.config:option:`neutron.service_metadata_proxy` - :oslo.config:option:`neutron.metadata_proxy_shared_secret` - :oslo.config:option:`api.metadata_cache_expiration` - :oslo.config:option:`api.use_forwarded_for` - :oslo.config:option:`api.local_metadata_per_cell` - :oslo.config:option:`api.dhcp_domain` .. note:: This list excludes configuration options related to the vendordata feature. Refer to :doc:`vendordata feature documentation ` for information on configuring this. For example, to configure the :program:`nova-api` application to serve the metadata API, without SSL, using the ``StaticJSON`` vendordata provider, add the following to a :file:`nova-api.conf` file: .. code-block:: ini [DEFAULT] enabled_apis = osapi_compute,metadata enabled_ssl_apis = metadata_listen = 0.0.0.0 metadata_listen_port = 0 metadata_workers = 4 [neutron] service_metadata_proxy = True [api] dhcp_domain = metadata_cache_expiration = 15 use_forwarded_for = False local_metadata_per_cell = False vendordata_providers = StaticJSON vendordata_jsonfile_path = /etc/nova/vendor_data.json .. note:: This does not include configuration options that are not metadata-specific but are nonetheless required, such as :oslo.config:option:`api.auth_strategy`. Configuring the application to use the ``DynamicJSON`` vendordata provider is more involved and is not covered here. The :program:`nova-api-metadata` application accepts almost the same options: - :oslo.config:option:`neutron.service_metadata_proxy` - :oslo.config:option:`neutron.metadata_proxy_shared_secret` - :oslo.config:option:`api.metadata_cache_expiration` - :oslo.config:option:`api.use_forwarded_for` - :oslo.config:option:`api.local_metadata_per_cell` - :oslo.config:option:`api.dhcp_domain` .. note:: This list excludes configuration options related to the vendordata feature. Refer to :doc:`vendordata feature documentation ` for information on configuring this. For example, to configure the :program:`nova-api-metadata` application to serve the metadata API, without SSL, add the following to a :file:`nova-api.conf` file: .. code-block:: ini [DEFAULT] metadata_listen = 0.0.0.0 metadata_listen_port = 0 metadata_workers = 4 [neutron] service_metadata_proxy = True [api] dhcp_domain = metadata_cache_expiration = 15 use_forwarded_for = False local_metadata_per_cell = False .. note:: This does not include configuration options that are not metadata-specific but are nonetheless required, such as :oslo.config:option:`api.auth_strategy`. For information about configuring the neutron side of the metadata service, refer to the :neutron-doc:`neutron configuration guide ` Config drives ------------- Config drives are special drives that are attached to an instance when it boots. The instance can mount this drive and read files from it to get information that is normally available through the metadata service. 
For more information, refer to :doc:`/admin/config-drive` and the :ref:`user guide `. Vendordata ---------- Vendordata provides a way to pass vendor or deployment-specific information to instances. For more information, refer to :doc:`/admin/vendordata` and the :ref:`user guide `. User data --------- User data is a blob of data that the user can specify when they launch an instance. For more information, refer to :ref:`the user guide `. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/migrate-instance-with-snapshot.rst0000664000175000017500000001243200000000000024517 0ustar00zuulzuul00000000000000================================== Use snapshots to migrate instances ================================== This guide can be used to migrate an instance between different clouds. To use snapshots to migrate instances from OpenStack projects to clouds, complete these steps. In the source project: #. :ref:`Create_a_snapshot_of_the_instance` #. :ref:`Download_the_snapshot_as_an_image` In the destination project: #. :ref:`Import_the_snapshot_to_the_new_environment` #. :ref:`Boot_a_new_instance_from_the_snapshot` .. note:: Some cloud providers allow only administrators to perform this task. .. _Create_a_snapshot_of_the_instance: Create a snapshot of the instance ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #. Shut down the source VM before you take the snapshot to ensure that all data is flushed to disk. If necessary, list the instances to view the instance name: .. code-block:: console $ openstack server list +--------------------------------------+------------+--------+------------------+--------------------+-------------------------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+------------+--------+------------------+--------------------+-------------------------+ | d0d1b7d9-a6a5-41d3-96ab-07975aadd7fb | myInstance | ACTIVE | private=10.0.0.3 | ubuntu-16.04-amd64 | general.micro.tmp.linux | +--------------------------------------+------------+--------+------------------+--------------------+-------------------------+ #. Use the :command:`openstack server stop` command to shut down the instance: .. code-block:: console $ openstack server stop myInstance #. Use the :command:`openstack server list` command to confirm that the instance shows a ``SHUTOFF`` status: .. code-block:: console $ openstack server list +--------------------------------------+------------+---------+------------------+--------------------+-------------------------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+------------+---------+------------------+--------------------+-------------------------+ | d0d1b7d9-a6a5-41d3-96ab-07975aadd7fb | myInstance | SHUTOFF | private=10.0.0.3 | ubuntu-16.04-amd64 | general.micro.tmp.linux | +--------------------------------------+------------+---------+------------------+--------------------+-------------------------+ #. Use the :command:`openstack server image create` command to take a snapshot: .. code-block:: console $ openstack server image create --name myInstanceSnapshot myInstance If snapshot operations routinely fail because the user token times out while uploading a large disk image, consider configuring nova to use :ref:`service user tokens `. #. Use the :command:`openstack image list` command to check the status until the status is ``ACTIVE``: .. 
code-block:: console $ openstack image list +--------------------------------------+---------------------------+--------+ | ID | Name | Status | +--------------------------------------+---------------------------+--------+ | ab567a44-b670-4d22-8ead-80050dfcd280 | myInstanceSnapshot | active | +--------------------------------------+---------------------------+--------+ .. _Download_the_snapshot_as_an_image: Download the snapshot as an image ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #. Get the image ID: .. code-block:: console $ openstack image list +--------------------------------------+---------------------------+--------+ | ID | Name | Status | +--------------------------------------+---------------------------+--------+ | ab567a44-b670-4d22-8ead-80050dfcd280 | myInstanceSnapshot | active | +--------------------------------------+---------------------------+--------+ #. Download the snapshot by using the image ID that was returned in the previous step: .. code-block:: console $ openstack image save --file snapshot.raw ab567a44-b670-4d22-8ead-80050dfcd280 .. note:: The :command:`openstack image save` command requires the image ID and cannot use the image name. Check there is sufficient space on the destination file system for the image file. #. Make the image available to the new environment, either through HTTP or direct upload to a machine (``scp``). .. _Import_the_snapshot_to_the_new_environment: Import the snapshot to the new environment ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In the new project or cloud environment, import the snapshot: .. code-block:: console $ openstack image create --container-format bare --disk-format qcow2 \ --file snapshot.raw myInstanceSnapshot .. _Boot_a_new_instance_from_the_snapshot: Boot a new instance from the snapshot ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In the new project or cloud environment, use the snapshot to create the new instance: .. code-block:: console $ openstack server create --flavor m1.tiny --image myInstanceSnapshot myNewInstance ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/migration.rst0000664000175000017500000000664500000000000020461 0ustar00zuulzuul00000000000000================= Migrate instances ================= .. note:: This documentation is about cold migration. For live migration usage, see :doc:`live-migration-usage`. When you want to move an instance from one compute host to another, you can migrate the instance. The migration operation, which is also known as the cold migration operation to distinguish it from the live migration operation, functions similarly to :doc:`the resize operation ` with the main difference being that a cold migration does not change the flavor of the instance. As with resize, the scheduler chooses the destination compute host based on its settings. This process does not assume that the instance has shared storage available on the target host. If you are using SSH tunneling, you must ensure that each node is configured with SSH key authentication so that the Compute service can use SSH to move disks to other nodes. For more information, see :ref:`cli-os-migrate-cfg-ssh`. To list the VMs you want to migrate, run: .. code-block:: console $ openstack server list Once you have the name or UUID of the server you wish to migrate, migrate it using the :command:`openstack server migrate` command: .. 
code-block:: console $ openstack server migrate SERVER Once an instance has successfully migrated, you can use the :command:`openstack server migrate confirm` command to confirm it: .. code-block:: console $ openstack server migrate confirm SERVER Alternatively, you can use the :command:`openstack server migrate revert` command to revert the migration and restore the instance to its previous host: .. code-block:: console $ openstack server migrate revert SERVER .. note:: You can configure automatic confirmation of migrations and resizes. Refer to the :oslo.config:option:`resize_confirm_window` option for more information. Example ------- To migrate an instance and watch the status, use this example script: .. code-block:: bash #!/bin/bash # Provide usage usage() { echo "Usage: $0 VM_ID" exit 1 } [[ $# -eq 0 ]] && usage VM_ID=$1 # Show the details for the VM echo "Instance details:" openstack server show ${VM_ID} # Migrate the VM to an alternate hypervisor echo -n "Migrating instance to alternate host " openstack server migrate ${VM_ID} while [[ "$(openstack server show ${VM_ID} -f value -c status)" != "VERIFY_RESIZE" ]]; do echo -n "." sleep 2 done openstack server migrate confirm ${VM_ID} echo " instance migrated and resized." # Show the details for the migrated VM echo "Migrated instance details:" openstack server show ${VM_ID} # Pause to allow users to examine VM details read -p "Pausing, press to exit." .. note:: If you see the following error, it means you are either running the command with the wrong credentials, such as a non-admin user, or the ``policy.json`` file prevents migration for your user:: Policy doesn't allow os_compute_api:os-migrate-server:migrate to be performed. (HTTP 403) .. note:: If you see the following error, similar to this message, SSH tunneling was not set up between the compute nodes:: ProcessExecutionError: Unexpected error while running command. Stderr: u Host key verification failed.\r\n The instance is booted from a new host, but preserves its configuration including instance ID, name, IP address, any metadata, and other properties. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/mitigation-for-Intel-MDS-security-flaws.rst0000664000175000017500000001201400000000000026054 0ustar00zuulzuul00000000000000====================================================================== Mitigation for MDS ("Microarchitectural Data Sampling") Security Flaws ====================================================================== Issue ~~~~~ In May 2019, four new microprocessor flaws, known as `MDS `_ , have been discovered. These flaws affect unpatched Nova compute nodes and instances running on Intel x86_64 CPUs. (The said MDS security flaws are also referred to as `RIDL and Fallout `_ or `ZombieLoad `_). Resolution ~~~~~~~~~~ To get mitigation for the said MDS security flaws, a new CPU flag, `md-clear`, needs to be exposed to the Nova instances. It can be done as follows. (1) Update the following components to the versions from your Linux distribution that have fixes for the MDS flaws, on all compute nodes with Intel x86_64 CPUs: - microcode_ctl - kernel - qemu-system-x86 - libvirt (2) When using the libvirt driver, ensure that the CPU flag ``md-clear`` is exposed to the Nova instances. It can be done so in one of the three following ways, given that Nova supports three distinct CPU modes: a. 
:oslo.config:option:`libvirt.cpu_mode`\ =host-model When using ``host-model`` CPU mode, the ``md-clear`` CPU flag will be passed through to the Nova guests automatically. This mode is the default, when :oslo.config:option:`libvirt.virt_type`\ =kvm|qemu is set in ``/etc/nova/nova-cpu.conf`` on compute nodes. b. :oslo.config:option:`libvirt.cpu_mode`\ =host-passthrough When using ``host-passthrough`` CPU mode, the ``md-clear`` CPU flag will be passed through to the Nova guests automatically. c. Specific custom CPU models — this can be enabled using the Nova config attributes :oslo.config:option:`libvirt.cpu_mode`\ =custom plus particular named CPU models, e.g. :oslo.config:option:`libvirt.cpu_models`\ =IvyBridge. (The list of all valid named CPU models that are supported by your host, QEMU, and libvirt can be found by running the command ``virsh domcapabilities``.) When using a custom CPU mode, you must *explicitly* enable the CPU flag ``md-clear`` to the Nova instances, in addition to the flags required for previous vulnerabilities, using the :oslo.config:option:`libvirt.cpu_model_extra_flags`. E.g.:: [libvirt] cpu_mode = custom cpu_models = IvyBridge cpu_model_extra_flags = spec-ctrl,ssbd,md-clear (3) Reboot the compute node for the fixes to take effect. (To minimize workload downtime, you may wish to live migrate all guests to another compute node first.) Once the above steps have been taken on every vulnerable compute node in the deployment, each running guest in the cluster must be fully powered down, and cold-booted (i.e. an explicit stop followed by a start), in order to activate the new CPU models. This can be done by the guest administrators at a time of their choosing. Validate that the fixes are in effect ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ After applying relevant updates, administrators can check the kernel's "sysfs" interface to see what mitigation is in place, by running the following command (on the host):: # cat /sys/devices/system/cpu/vulnerabilities/mds Mitigation: Clear CPU buffers; SMT vulnerable To unpack the message "Mitigation: Clear CPU buffers; SMT vulnerable": - The ``Mitigation: Clear CPU buffers`` bit means, you have the "CPU buffer clearing" mitigation enabled (which is mechanism to invoke a flush of various exploitable CPU buffers by invoking a CPU instruction called "VERW"). - The ``SMT vulnerable`` bit means, depending on your workload, you may still be vulnerable to SMT-related problems. You need to evaluate whether your workloads need SMT (also called "Hyper-Threading") to be disabled or not. Refer to the guidance from your Linux distribution and processor vendor. To see the other possible values for the sysfs file, ``/sys/devices/system/cpu/vulnerabilities/mds``, refer to the `MDS system information `_ section in Linux kernel's documentation for MDS. On the host, validate that KVM is capable of exposing the ``md-clear`` flag to guests:: # virsh domcapabilities kvm | grep md-clear Also, refer to the 'Diagnosis' tab in this security notice document `here `_ Performance Impact ~~~~~~~~~~~~~~~~~~ Refer to this section titled "Performance Impact and Disabling MDS" from the security notice document `here `_, under the 'Resolve' tab. (Note that although the article referred to is from Red Hat, the findings and recommendations about performance impact apply for other distributions as well.) 
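In connection with the SMT considerations above, the current SMT state of a compute host can usually be inspected through sysfs, and the presence of the mitigation flag can be confirmed from inside a Linux guest; both commands below assume a reasonably recent Linux kernel, with the first run on the host and the second run inside the guest:

.. code-block:: console

   # cat /sys/devices/system/cpu/smt/control
   $ grep -c md_clear /proc/cpuinfo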
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/networking.rst0000664000175000017500000001734200000000000020653 0ustar00zuulzuul00000000000000======================= Networking with neutron ======================= While nova uses the :neutron-doc:`OpenStack Networking service (neutron) <>` to provide network connectivity for instances, nova itself provides some additional features not possible with neutron alone. These are described below. SR-IOV ------ .. versionchanged:: 2014.2 The feature described below was first introduced in the Juno release. The SR-IOV specification defines a standardized mechanism to virtualize PCIe devices. This mechanism can virtualize a single PCIe Ethernet controller to appear as multiple PCIe devices. Each device can be directly assigned to an instance, bypassing the hypervisor and virtual switch layer. As a result, users are able to achieve low latency and near-line wire speed. A full guide on configuring and using SR-IOV is provided in the :neutron-doc:`OpenStack Networking service documentation ` .. note:: Nova only supports PCI addresses where the fields are restricted to the following maximum value: * domain - 0xFFFF * bus - 0xFF * slot - 0x1F * function - 0x7 Nova will ignore PCI devices reported by the hypervisor if the address is outside of these ranges. NUMA Affinity ------------- .. versionadded:: 18.0.0 The feature described below was first introduced in the Rocky release. .. important:: The functionality described below is currently only supported by the libvirt/KVM driver. As described in :doc:`cpu-topologies`, NUMA is a computer architecture where memory accesses to certain regions of system memory can have higher latencies than other regions, depending on the CPU(s) your process is running on. This effect extends to devices connected to the PCIe bus, a concept known as NUMA I/O. Many Network Interface Cards (NICs) connect using the PCIe interface, meaning they are susceptible to the ill-effects of poor NUMA affinitization. As a result, NUMA locality must be considered when creating an instance where high dataplane performance is a requirement. Fortunately, nova provides functionality to ensure NUMA affinitization is provided for instances using neutron. How this works depends on the type of port you are trying to use. .. todo:: Add documentation for PCI NUMA affinity and PCI policies and link to it from here. For SR-IOV ports, virtual functions, which are PCI devices, are attached to the instance. This means the instance can benefit from the NUMA affinity guarantees provided for PCI devices. This happens automatically. For all other types of ports, some manual configuration is required. #. Identify the type of network(s) you wish to provide NUMA affinity for. - If a network is an L2-type network (``provider:network_type`` of ``flat`` or ``vlan``), affinity of the network to given NUMA node(s) can vary depending on value of the ``provider:physical_network`` attribute of the network, commonly referred to as the *physnet* of the network. This is because most neutron drivers map each *physnet* to a different bridge, to which multiple NICs are attached, or to a different (logical) NIC. - If a network is an L3-type networks (``provider:network_type`` of ``vxlan``, ``gre`` or ``geneve``), all traffic will use the device to which the *endpoint IP* is assigned. This means all L3 networks on a given host will have affinity to the same NUMA node(s). 
Refer to :neutron-doc:`the neutron documentation ` for more information. #. Determine the NUMA affinity of the NICs attached to the given network(s). How this should be achieved varies depending on the switching solution used and whether the network is a L2-type network or an L3-type networks. Consider an L2-type network using the Linux Bridge mechanism driver. As noted in the :neutron-doc:`neutron documentation `, *physnets* are mapped to interfaces using the ``[linux_bridge] physical_interface_mappings`` configuration option. For example: .. code-block:: ini [linux_bridge] physical_interface_mappings = provider:PROVIDER_INTERFACE Once you have the device name, you can query *sysfs* to retrieve the NUMA affinity for this device. For example: .. code-block:: shell $ cat /sys/class/net/PROVIDER_INTERFACE/device/numa_node For an L3-type network using the Linux Bridge mechanism driver, the device used will be configured using protocol-specific endpoint IP configuration option. For VXLAN, this is the ``[vxlan] local_ip`` option. For example: .. code-block:: ini [vxlan] local_ip = OVERLAY_INTERFACE_IP_ADDRESS Once you have the IP address in question, you can use :command:`ip` to identify the device that has been assigned this IP address and from there can query the NUMA affinity using *sysfs* as above. .. note:: The example provided above is merely that: an example. How one should identify this information can vary massively depending on the driver used, whether bonding is used, the type of network used, etc. #. Configure NUMA affinity in ``nova.conf``. Once you have identified the NUMA affinity of the devices used for your networks, you need to configure this in ``nova.conf``. As before, how this should be achieved varies depending on the type of network. For L2-type networks, NUMA affinity is defined based on the ``provider:physical_network`` attribute of the network. There are two configuration options that must be set: ``[neutron] physnets`` This should be set to the list of physnets for which you wish to provide NUMA affinity. Refer to the :oslo.config:option:`documentation ` for more information. ``[neutron_physnet_{physnet}] numa_nodes`` This should be set to the list of NUMA node(s) that networks with the given ``{physnet}`` should be affinitized to. For L3-type networks, NUMA affinity is defined globally for all tunneled networks on a given host. There is only one configuration option that must be set: ``[neutron_tunneled] numa_nodes`` This should be set to a list of one or NUMA nodes to which instances using tunneled networks will be affinitized. #. Configure a NUMA topology for instance flavor(s) For network NUMA affinity to have any effect, the instance must have a NUMA topology itself. This can be configured explicitly, using the ``hw:numa_nodes`` extra spec, or implicitly through the use of CPU pinning (``hw:cpu_policy=dedicated``) or PCI devices. For more information, refer to :doc:`cpu-topologies`. Examples ~~~~~~~~ Take an example for deployment using L2-type networks first. .. code-block:: ini [neutron] physnets = foo,bar [neutron_physnet_foo] numa_nodes = 0 [neutron_physnet_bar] numa_nodes = 2, 3 This configuration will ensure instances using one or more L2-type networks with ``provider:physical_network=foo`` must be scheduled on host cores from NUMA nodes 0, while instances using one or more networks with ``provider:physical_network=bar`` must be scheduled on host cores from both NUMA nodes 2 and 3. 
For the latter case, it will be necessary to split the guest across two or more host NUMA nodes using the ``hw:numa_nodes`` :ref:`flavor extra spec `. Now, take an example for a deployment using L3 networks. .. code-block:: ini [neutron_tunneled] numa_nodes = 0 This is much simpler as all tunneled traffic uses the same logical interface. As with the L2-type networks, this configuration will ensure instances using one or more L3-type networks must be scheduled on host cores from NUMA node 0. It is also possible to define more than one NUMA node, in which case the instance must be split across these nodes. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/node-down.rst0000664000175000017500000002473100000000000020356 0ustar00zuulzuul00000000000000================================== Recover from a failed compute node ================================== If you deploy Compute with a shared file system, you can use several methods to quickly recover from a node failure. This section discusses manual recovery. .. _node-down-evacuate-instances: Evacuate instances ~~~~~~~~~~~~~~~~~~ If a hardware malfunction or other error causes the cloud compute node to fail, you can use the :command:`nova evacuate` command to evacuate instances. See :doc:`evacuate instances ` for more information on using the command. .. _nova-compute-node-down-manual-recovery: Manual recovery ~~~~~~~~~~~~~~~ To manually recover a failed compute node: #. Identify the VMs on the affected hosts by using a combination of the :command:`openstack server list` and :command:`openstack server show` commands. #. Query the Compute database for the status of the host. This example converts an EC2 API instance ID to an OpenStack ID. If you use the :command:`nova` commands, you can substitute the ID directly. This example output is truncated: .. code-block:: none mysql> SELECT * FROM instances WHERE id = CONV('15b9', 16, 10) \G; *************************** 1. row *************************** created_at: 2012-06-19 00:48:11 updated_at: 2012-07-03 00:35:11 deleted_at: NULL ... id: 5561 ... power_state: 5 vm_state: shutoff ... hostname: at3-ui02 host: np-rcc54 ... uuid: 3f57699a-e773-4650-a443-b4b37eed5a06 ... task_state: NULL ... .. note:: Find the credentials for your database in ``/etc/nova.conf`` file. #. Decide to which compute host to move the affected VM. Run this database command to move the VM to that host: .. code-block:: mysql mysql> UPDATE instances SET host = 'np-rcc46' WHERE uuid = '3f57699a-e773-4650-a443-b4b37eed5a06'; #. If you use a hypervisor that relies on libvirt, such as KVM, update the ``libvirt.xml`` file in ``/var/lib/nova/instances/[instance ID]`` with these changes: - Change the ``DHCPSERVER`` value to the host IP address of the new compute host. - Update the VNC IP to ``0.0.0.0``. #. Reboot the VM: .. code-block:: console $ openstack server reboot 3f57699a-e773-4650-a443-b4b37eed5a06 Typically, the database update and :command:`openstack server reboot` command recover a VM from a failed host. However, if problems persist, try one of these actions: - Use :command:`virsh` to recreate the network filter configuration. - Restart Compute services. - Update the ``vm_state`` and ``power_state`` fields in the Compute database. Recover from a UID/GID mismatch ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Sometimes when you run Compute with a shared file system or an automated configuration tool, files on your compute node might use the wrong UID or GID. 
This UID or GID mismatch can prevent you from running live migrations or starting virtual machines. This procedure runs on ``nova-compute`` hosts, based on the KVM hypervisor: #. Set the nova UID to the same number in ``/etc/passwd`` on all hosts. For example, set the UID to ``112``. .. note:: Choose UIDs or GIDs that are not in use for other users or groups. #. Set the ``libvirt-qemu`` UID to the same number in the ``/etc/passwd`` file on all hosts. For example, set the UID to ``119``. #. Set the ``nova`` group to the same number in the ``/etc/group`` file on all hosts. For example, set the group to ``120``. #. Set the ``libvirtd`` group to the same number in the ``/etc/group`` file on all hosts. For example, set the group to ``119``. #. Stop the services on the compute node. #. Change all files that the nova user or group owns. For example: .. code-block:: console # find / -uid 108 -exec chown nova {} \; # note the 108 here is the old nova UID before the change # find / -gid 120 -exec chgrp nova {} \; #. Repeat all steps for the ``libvirt-qemu`` files, if required. #. Restart the services. #. To verify that all files use the correct IDs, run the :command:`find` command. Recover cloud after disaster ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to manage your cloud after a disaster and back up persistent storage volumes. Backups are mandatory, even outside of disaster scenarios. For a definition of a disaster recovery plan (DRP), see `https://en.wikipedia.org/wiki/Disaster\_Recovery\_Plan `_. A disk crash, network loss, or power failure can affect several components in your cloud architecture. The worst disaster for a cloud is a power loss. A power loss affects these components: - A cloud controller (``nova-api``, ``nova-conductor``, ``nova-scheduler``) - A compute node (``nova-compute``) - A storage area network (SAN) used by OpenStack Block Storage (``cinder-volumes``) Before a power loss: - Create an active iSCSI session from the SAN to the cloud controller (used for the ``cinder-volumes`` LVM's VG). - Create an active iSCSI session from the cloud controller to the compute node (managed by ``cinder-volume``). - Create an iSCSI session for every volume (so 14 EBS volumes requires 14 iSCSI sessions). - Create ``iptables`` or ``ebtables`` rules from the cloud controller to the compute node. This allows access from the cloud controller to the running instance. - Save the current state of the database, the current state of the running instances, and the attached volumes (mount point, volume ID, volume status, etc), at least from the cloud controller to the compute node. After power resumes and all hardware components restart: - The iSCSI session from the SAN to the cloud no longer exists. - The iSCSI session from the cloud controller to the compute node no longer exists. - Instances stop running. Instances are not lost because neither ``destroy`` nor ``terminate`` ran. The files for the instances remain on the compute node. - The database does not update. .. rubric:: Begin recovery .. warning:: Do not add any steps or change the order of steps in this procedure. #. Check the current relationship between the volume and its instance, so that you can recreate the attachment. Use the :command:`openstack volume list` command to get this information. Note that the :command:`openstack` client can get volume information from OpenStack Block Storage. #. Update the database to clean the stalled state. Do this for every volume by using these queries: .. 
code-block:: mysql mysql> use cinder; mysql> update volumes set mountpoint=NULL; mysql> update volumes set status="available" where status <>"error_deleting"; mysql> update volumes set attach_status="detached"; mysql> update volumes set instance_id=0; Use :command:`openstack volume list` command to list all volumes. #. Restart the instances by using the :command:`openstack server reboot INSTANCE` command. .. important:: Some instances completely reboot and become reachable, while some might stop at the plymouth stage. This is expected behavior. DO NOT reboot a second time. Instance state at this stage depends on whether you added an `/etc/fstab` entry for that volume. Images built with the cloud-init package remain in a ``pending`` state, while others skip the missing volume and start. You perform this step to ask Compute to reboot every instance so that the stored state is preserved. It does not matter if not all instances come up successfully. For more information about cloud-init, see `help.ubuntu.com/community/CloudInit/ `__. #. If required, run the :command:`openstack server add volume` command to reattach the volumes to their respective instances. This example uses a file of listed volumes to reattach them: .. code-block:: bash #!/bin/bash while read line; do volume=`echo $line | $CUT -f 1 -d " "` instance=`echo $line | $CUT -f 2 -d " "` mount_point=`echo $line | $CUT -f 3 -d " "` echo "ATTACHING VOLUME FOR INSTANCE - $instance" openstack server add volume $instance $volume $mount_point sleep 2 done < $volumes_tmp_file Instances that were stopped at the plymouth stage now automatically continue booting and start normally. Instances that previously started successfully can now see the volume. #. Log in to the instances with SSH and reboot them. If some services depend on the volume or if a volume has an entry in fstab, you can now restart the instance. Restart directly from the instance itself and not through :command:`nova`: .. code-block:: console # shutdown -r now When you plan for and complete a disaster recovery, follow these tips: - Use the ``errors=remount`` option in the ``fstab`` file to prevent data corruption. In the event of an I/O error, this option prevents writes to the disk. Add this configuration option into the cinder-volume server that performs the iSCSI connection to the SAN and into the instances' ``fstab`` files. - Do not add the entry for the SAN's disks to the cinder-volume's ``fstab`` file. Some systems hang on that step, which means you could lose access to your cloud-controller. To re-run the session manually, run this command before performing the mount: .. code-block:: console # iscsiadm -m discovery -t st -p $SAN_IP $ iscsiadm -m node --target-name $IQN -p $SAN_IP -l - On your instances, if you have the whole ``/home/`` directory on the disk, leave a user's directory with the user's bash files and the ``authorized_keys`` file instead of emptying the ``/home/`` directory and mapping the disk on it. This action enables you to connect to the instance without the volume attached, if you allow only connections through public keys. To reproduce the power loss, connect to the compute node that runs that instance and close the iSCSI session. Do not detach the volume by using the :command:`openstack server remove volume` command. You must manually close the iSCSI session. This example closes an iSCSI session with the number ``15``: .. code-block:: console # iscsiadm -m session -u -r 15 Do not forget the ``-r`` option. Otherwise, all sessions close. .. 
warning:: There is potential for data loss while running instances during this procedure. If you are using Liberty or earlier, ensure you have the correct patch and set the options appropriately. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/pci-passthrough.rst0000664000175000017500000001772100000000000021605 0ustar00zuulzuul00000000000000======================================== Attaching physical PCI devices to guests ======================================== The PCI passthrough feature in OpenStack allows full access and direct control of a physical PCI device in guests. This mechanism is generic for any kind of PCI device, and runs with a Network Interface Card (NIC), Graphics Processing Unit (GPU), or any other devices that can be attached to a PCI bus. Correct driver installation is the only requirement for the guest to properly use the devices. Some PCI devices provide Single Root I/O Virtualization and Sharing (SR-IOV) capabilities. When SR-IOV is used, a physical device is virtualized and appears as multiple PCI devices. Virtual PCI devices are assigned to the same or different guests. In the case of PCI passthrough, the full physical device is assigned to only one guest and cannot be shared. PCI devices are requested through flavor extra specs, specifically via the ``pci_passthrough:alias=`` flavor extra spec. This guide demonstrates how to enable PCI passthrough for a type of PCI device with a vendor ID of ``8086`` and a product ID of ``154d`` - an Intel X520 Network Adapter - by mapping them to the alias ``a1``. You should adjust the instructions for other devices with potentially different capabilities. .. note:: For information on creating servers with SR-IOV network interfaces, refer to the :neutron-doc:`Networking Guide `. **Limitations** * Attaching SR-IOV ports to existing servers is not currently supported. This is now rejected by the API but previously it fail later in the process. See `bug 1708433 `_ for details. * Cold migration (resize) of servers with SR-IOV devices attached was not supported until the 14.0.0 Newton release, see `bug 1512800 `_ for details. .. note:: Nova only supports PCI addresses where the fields are restricted to the following maximum value: * domain - 0xFFFF * bus - 0xFF * slot - 0x1F * function - 0x7 Nova will ignore PCI devices reported by the hypervisor if the address is outside of these ranges. Configure host (Compute) ------------------------ To enable PCI passthrough on an x86, Linux-based compute node, the following are required: * VT-d enabled in the BIOS * IOMMU enabled on the host OS, e.g. by adding the ``intel_iommu=on`` or ``amd_iommu=on`` parameter to the kernel parameters * Assignable PCIe devices To enable PCI passthrough on a Hyper-V compute node, the following are required: * Windows 10 or Windows / Hyper-V Server 2016 or newer * VT-d enabled on the host * Assignable PCI devices In order to check the requirements above and if there are any assignable PCI devices, run the following Powershell commands: .. code-block:: console Start-BitsTransfer https://raw.githubusercontent.com/Microsoft/Virtualization-Documentation/master/hyperv-samples/benarm-powershell/DDA/survey-dda.ps1 .\survey-dda.ps1 If the compute node passes all the requirements, the desired assignable PCI devices to be disabled and unmounted from the host, in order to be assignable by Hyper-V. The following can be read for more details: `Hyper-V PCI passthrough`__. .. 
__: https://devblogs.microsoft.com/scripting/passing-through-devices-to-hyper-v-vms-by-using-discrete-device-assignment/ Configure ``nova-compute`` (Compute) ------------------------------------ Once PCI passthrough has been configured for the host, :program:`nova-compute` must be configured to allow the PCI device to pass through to VMs. This is done using the :oslo.config:option:`pci.passthrough_whitelist` option. For example, assuming our sample PCI device has a PCI address of ``41:00.0`` on each host: .. code-block:: ini [pci] passthrough_whitelist = { "address": "0000:41:00.0" } Refer to :oslo.config:option:`pci.passthrough_whitelist` for syntax information. Alternatively, to enable passthrough of all devices with the same product and vendor ID: .. code-block:: ini [pci] passthrough_whitelist = { "vendor_id": "8086", "product_id": "154d" } If using vendor and product IDs, all PCI devices matching the ``vendor_id`` and ``product_id`` are added to the pool of PCI devices available for passthrough to VMs. In addition, it is necessary to configure the :oslo.config:option:`pci.alias` option, which is a JSON-style configuration option that allows you to map a given device type, identified by the standard PCI ``vendor_id`` and (optional) ``product_id`` fields, to an arbitrary name or *alias*. This alias can then be used to request a PCI device using the ``pci_passthrough:alias=`` flavor extra spec, as discussed previously. For our sample device with a vendor ID of ``0x8086`` and a product ID of ``0x154d``, this would be: .. code-block:: ini [pci] alias = { "vendor_id":"8086", "product_id":"154d", "device_type":"type-PF", "name":"a1" } It's important to note the addition of the ``device_type`` field. This is necessary because this PCI device supports SR-IOV. The ``nova-compute`` service categorizes devices into one of three types, depending on the capabilities the devices report: ``type-PF`` The device supports SR-IOV and is the parent or root device. ``type-VF`` The device is a child device of a device that supports SR-IOV. ``type-PCI`` The device does not support SR-IOV. By default, it is only possible to attach ``type-PCI`` devices using PCI passthrough. If you wish to attach ``type-PF`` or ``type-VF`` devices, you must specify the ``device_type`` field in the config option. If the device was a device that did not support SR-IOV, the ``device_type`` field could be omitted. Refer to :oslo.config:option:`pci.alias` for syntax information. .. important:: This option must also be configured on controller nodes. This is discussed later in this document. Once configured, restart the :program:`nova-compute` service. Configure ``nova-scheduler`` (Controller) ----------------------------------------- The :program:`nova-scheduler` service must be configured to enable the ``PciPassthroughFilter``. To do this, add this filter to the list of filters specified in :oslo.config:option:`filter_scheduler.enabled_filters` and set :oslo.config:option:`filter_scheduler.available_filters` to the default of ``nova.scheduler.filters.all_filters``. For example: .. code-block:: ini [filter_scheduler] enabled_filters = ...,PciPassthroughFilter available_filters = nova.scheduler.filters.all_filters Once done, restart the :program:`nova-scheduler` service. .. _pci-passthrough-alias: Configure ``nova-api`` (Controller) ----------------------------------- It is necessary to also configure the :oslo.config:option:`pci.alias` config option on the controller. 
This configuration should match the configuration found on the compute nodes. For example: .. code-block:: ini [pci] alias = { "vendor_id":"8086", "product_id":"154d", "device_type":"type-PF", "name":"a1", "numa_policy":"preferred" } Refer to :oslo.config:option:`pci.alias` for syntax information. Refer to :ref:`Affinity ` for ``numa_policy`` information. Once configured, restart the :program:`nova-api` service. Configure a flavor (API) ------------------------ Once the alias has been configured, it can be used for an flavor extra spec. For example, to request two of the PCI devices referenced by alias ``a1``, run: .. code-block:: console $ openstack flavor set m1.large --property "pci_passthrough:alias"="a1:2" For more information about the syntax for ``pci_passthrough:alias``, refer to :ref:`Flavors `. Create instances with PCI passthrough devices --------------------------------------------- The :program:`nova-scheduler` service selects a destination host that has PCI devices available that match the ``alias`` specified in the flavor. .. code-block:: console # openstack server create --flavor m1.large --image cirros-0.3.5-x86_64-uec --wait test-pci ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/ports-with-resource-requests.rst0000664000175000017500000000625600000000000024304 0ustar00zuulzuul00000000000000================================= Using ports with resource request ================================= Starting from microversion 2.72 nova supports creating servers with neutron ports having resource request visible as a admin-only port attribute ``resource_request``. For example a neutron port has resource request if it has a QoS minimum bandwidth rule attached. The :neutron-doc:`Quality of Service (QoS): Guaranteed Bandwidth ` document describes how to configure neutron to use this feature. Resource allocation ~~~~~~~~~~~~~~~~~~~ Nova collects and combines the resource request from each port in a boot request and sends one allocation candidate request to placement during scheduling so placement will make sure that the resource request of the ports are fulfilled. At the end of the scheduling nova allocates one candidate in placement. Therefore the requested resources for each port from a single boot request will be allocated under the server's allocation in placement. Resource Group policy ~~~~~~~~~~~~~~~~~~~~~ Nova represents the resource request of each neutron port as a separate :placement-doc:`Granular Resource Request group ` when querying placement for allocation candidates. When a server create request includes more than one port with resource requests then more than one group will be used in the allocation candidate query. In this case placement requires to define the ``group_policy``. Today it is only possible via the ``group_policy`` key of the :nova-doc:`flavor extra_spec `. The possible values are ``isolate`` and ``none``. When the policy is set to ``isolate`` then each request group and therefore the resource request of each neutron port will be fulfilled from separate resource providers. In case of neutron ports with ``vnic_type=direct`` or ``vnic_type=macvtap`` this means that each port will use a virtual function from different physical functions. When the policy is set to ``none`` then the resource request of the neutron ports can be fulfilled from overlapping resource providers. 
In case of neutron ports with ``vnic_type=direct`` or ``vnic_type=macvtap`` this means the ports may use virtual functions from the same physical function. For neutron ports with ``vnic_type=normal`` the group policy defines the collocation policy on OVS bridge level so ``group_policy=none`` is a reasonable default value in this case. If the ``group_policy`` is missing from the flavor then the server create request will fail with 'No valid host was found' and a warning describing the missing policy will be logged. Virt driver support ~~~~~~~~~~~~~~~~~~~ Supporting neutron ports with ``vnic_type=direct`` or ``vnic_type=macvtap`` depends on the capability of the virt driver. For the supported virt drivers see the :nova-doc:`Support matrix ` If the virt driver on the compute host does not support the needed capability then the PCI claim will fail on the host and re-schedule will be triggered. It is suggested not to configure bandwidth inventory in the neutron agents on these compute hosts to avoid unnecessary reschedule. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/quotas.rst0000664000175000017500000003325100000000000017775 0ustar00zuulzuul00000000000000============= Manage quotas ============= .. note:: This section provides deployment information about the quota feature. For end-user information about quotas, including information about the type of quotas available, refer to the :doc:`user guide `. To prevent system capacities from being exhausted without notification, you can set up quotas. Quotas are operational limits. For example, the number of gigabytes allowed for each project can be controlled so that cloud resources are optimized. Quotas can be enforced at both the project and the project-user level. Starting in the 16.0.0 Pike release, the quota calculation system in nova was overhauled and the old reserve/commit/rollback flow was changed to `count resource usage`__ at the point of whatever operation is being performed, such as creating or resizing a server. A check will be performed by counting current usage for the relevant resource and then, if :oslo.config:option:`quota.recheck_quota` is True, another check will be performed to ensure the initial check is still valid. By default resource usage is counted using the API and cell databases but nova can be configured to count some resource usage without using the cell databases. See `Quota usage from placement`_ for details. Using the command-line interface, you can manage quotas for nova, along with :cinder-doc:`cinder ` and :neutron-doc:`neutron `. You would typically change default values because, for example, a project requires more than ten volumes or 1 TB on a compute node. __ https://specs.openstack.org/openstack/nova-specs/specs/pike/implemented/cells-count-resources-to-check-quota-in-api.html Checking quota -------------- When calculating limits for a given resource and project, the following checks are made in order: #. Project-specific limits Depending on the resource, is there a project-specific limit on the resource in either the ``quotas`` or ``project_user_quotas`` tables in the database? If so, use that as the limit. You can create these resources using: .. code-block:: console $ openstack quota set --instances 5 #. Default limits Check to see if there is a hard limit for the given resource in the ``quota_classes`` table in the database for the ``default`` quota class. If so, use that as the limit. 
You can modify the default quota limit for a resource using: .. code-block:: console $ openstack quota set --instances 5 --class default .. note:: Only the ``default`` class is supported by nova. #. Config-driven limits If the above does not provide a resource limit, then rely on the configuration options in the :oslo.config:group:`quota` config group for the default limits. .. note:: The API sets the limit in the ``quota_classes`` table. Once a default limit is set via the `default` quota class, that takes precedence over any changes to that resource limit in the configuration options. In other words, once you've changed things via the API, you either have to keep those synchronized with the configuration values or remove the default limit from the database manually as there is no REST API for removing quota class values from the database. .. _quota-usage-from-placement: Quota usage from placement -------------------------- Starting in the Train (20.0.0) release, it is possible to configure quota usage counting of cores and RAM from the placement service and instances from instance mappings in the API database instead of counting resources from cell databases. This makes quota usage counting resilient in the presence of `down or poor-performing cells`__. Quota usage counting from placement is opt-in via the ::oslo.config:option:`quota.count_usage_from_placement` config option: .. code-block:: ini [quota] count_usage_from_placement = True There are some things to note when opting in to counting quota usage from placement: * Counted usage will not be accurate in an environment where multiple Nova deployments are sharing a placement deployment because currently placement has no way of partitioning resource providers between different Nova deployments. Operators who are running multiple Nova deployments that share a placement deployment should not set the :oslo.config:option:`quota.count_usage_from_placement` configuration option to ``True``. * Behavior will be different for resizes. During a resize, resource allocations are held on both the source and destination (even on the same host, see https://bugs.launchpad.net/nova/+bug/1790204) until the resize is confirmed or reverted. Quota usage will be inflated for servers in this state and operators should weigh the advantages and disadvantages before enabling :oslo.config:option:`quota.count_usage_from_placement`. * The ``populate_queued_for_delete`` and ``populate_user_id`` online data migrations must be completed before usage can be counted from placement. Until the data migration is complete, the system will fall back to legacy quota usage counting from cell databases depending on the result of an EXISTS database query during each quota check, if :oslo.config:option:`quota.count_usage_from_placement` is set to ``True``. Operators who want to avoid the performance hit from the EXISTS queries should wait to set the :oslo.config:option:`quota.count_usage_from_placement` configuration option to ``True`` until after they have completed their online data migrations via ``nova-manage db online_data_migrations``. * Behavior will be different for unscheduled servers in ``ERROR`` state. A server in ``ERROR`` state that has never been scheduled to a compute host will not have placement allocations, so it will not consume quota usage for cores and ram. * Behavior will be different for servers in ``SHELVED_OFFLOADED`` state. A server in ``SHELVED_OFFLOADED`` state will not have placement allocations, so it will not consume quota usage for cores and ram. 
Note that because of this, it will be possible for a request to unshelve a server to be rejected if the user does not have enough quota available to support the cores and ram needed by the server to be unshelved. __ https://docs.openstack.org/api-guide/compute/down_cells.html Known issues ------------ If not :ref:`counting quota usage from placement ` it is possible for down or poor-performing cells to impact quota calculations. See the :ref:`cells documentation ` for details. Future plans ------------ Hierarchical quotas ~~~~~~~~~~~~~~~~~~~ There has long been a desire to support hierarchical or nested quotas leveraging support in the identity service for hierarchical projects. See the `unified limits`__ spec for details. __ https://review.opendev.org/#/c/602201/ Configuration ------------- View and update default quota values ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To list all default quotas for a project, run: .. code-block:: console $ openstack quota show --default .. note:: This lists default quotas for all services and not just nova. To update a default value for a new project, run: .. code-block:: console $ openstack quota set --class --instances 15 default View and update quota values for a project or class ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To list quotas for a project, run: .. code-block:: console $ openstack quota show PROJECT .. note:: This lists project quotas for all services and not just nova. To update quotas for a project, run: .. code-block:: console $ openstack quota set --QUOTA QUOTA_VALUE PROJECT To update quotas for a class, run: .. code-block:: console $ openstack quota set --class --QUOTA QUOTA_VALUE CLASS .. note:: Only the ``default`` class is supported by nova. For example: .. code-block:: console $ openstack quota set --instances 12 my-project $ openstack quota show my-project +----------------------+----------------------------------+ | Field | Value | +----------------------+----------------------------------+ | backup-gigabytes | 1000 | | backups | 10 | | cores | 32 | | fixed-ips | -1 | | floating-ips | 10 | | gigabytes | 1000 | | health_monitors | None | | injected-file-size | 10240 | | injected-files | 5 | | injected-path-size | 255 | | instances | 12 | | key-pairs | 100 | | l7_policies | None | | listeners | None | | load_balancers | None | | location | None | | name | None | | networks | 20 | | per-volume-gigabytes | -1 | | pools | None | | ports | 60 | | project | c8156b55ec3b486193e73d2974196993 | | project_name | project | | properties | 128 | | ram | 65536 | | rbac_policies | 10 | | routers | 10 | | secgroup-rules | 50 | | secgroups | 50 | | server-group-members | 10 | | server-groups | 10 | | snapshots | 10 | | subnet_pools | -1 | | subnets | 20 | | volumes | 10 | +----------------------+----------------------------------+ To view a list of options for the :command:`openstack quota show` and :command:`openstack quota set` commands, run: .. code-block:: console $ openstack quota show --help $ openstack quota set --help View and update quota values for a project user ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. note:: User-specific quotas are legacy and will be removed when migration to :keystone-doc:`unified limits ` is complete. User-specific quotas were added as a way to provide two-level hierarchical quotas and this feature is already being offered in unified limits. For this reason, the below commands have not and will not be ported to openstackclient. To show quotas for a specific project user, run: .. 
code-block:: console $ nova quota-show --user USER PROJECT To update quotas for a specific project user, run: .. code-block:: console $ nova quota-update --user USER --QUOTA QUOTA_VALUE PROJECT For example: .. code-block:: console $ projectUser=$(openstack user show -f value -c id USER) $ project=$(openstack project show -f value -c id PROJECT) $ nova quota-update --user $projectUser --instance 12 $project $ nova quota-show --user $projectUser --tenant $project +-----------------------------+-------+ | Quota | Limit | +-----------------------------+-------+ | instances | 12 | | cores | 20 | | ram | 51200 | | floating_ips | 10 | | fixed_ips | -1 | | metadata_items | 128 | | injected_files | 5 | | injected_file_content_bytes | 10240 | | injected_file_path_bytes | 255 | | key_pairs | 100 | | security_groups | 10 | | security_group_rules | 20 | | server_groups | 10 | | server_group_members | 10 | +-----------------------------+-------+ To view the quota usage for the current user, run: .. code-block:: console $ nova limits --tenant PROJECT For example: .. code-block:: console $ nova limits --tenant my-project +------+-----+-------+--------+------+----------------+ | Verb | URI | Value | Remain | Unit | Next_Available | +------+-----+-------+--------+------+----------------+ +------+-----+-------+--------+------+----------------+ +--------------------+------+-------+ | Name | Used | Max | +--------------------+------+-------+ | Cores | 0 | 20 | | Instances | 0 | 10 | | Keypairs | - | 100 | | Personality | - | 5 | | Personality Size | - | 10240 | | RAM | 0 | 51200 | | Server Meta | - | 128 | | ServerGroupMembers | - | 10 | | ServerGroups | 0 | 10 | +--------------------+------+-------+ .. note:: The :command:`nova limits` command generates an empty table as a result of the Compute API, which prints an empty list for backward compatibility purposes. To view a list of options for the :command:`nova quota-show` and :command:`nova quota-update` commands, run: .. code-block:: console $ nova help quota-show $ nova help quota-update ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/remote-console-access.rst0000664000175000017500000005305500000000000022657 0ustar00zuulzuul00000000000000=============================== Configure remote console access =============================== OpenStack provides a number of different methods to interact with your guests: VNC, SPICE, Serial, RDP or MKS. If configured, these can be accessed by users through the OpenStack dashboard or the command line. This document outlines how these different technologies can be configured. Overview -------- It is considered best practice to deploy only one of the consoles types and not all console types are supported by all compute drivers. Regardless of what option is chosen, a console proxy service is required. These proxy services are responsible for the following: - Provide a bridge between the public network where the clients live and the private network where the servers with consoles live. - Mediate token authentication. - Transparently handle hypervisor-specific connection details to provide a uniform client experience. For some combinations of compute driver and console driver, these proxy services are provided by the hypervisor or another service. For all others, nova provides services to handle this proxying. Consider a noVNC-based VNC console connection for example: #. 
A user connects to the API and gets an ``access_url`` such as, ``http://ip:port/?path=%3Ftoken%3Dxyz``. #. The user pastes the URL in a browser or uses it as a client parameter. #. The browser or client connects to the proxy. #. The proxy authorizes the token for the user, and maps the token to the *private* host and port of the VNC server for an instance. The compute host specifies the address that the proxy should use to connect through the :oslo.config:option:`vnc.server_proxyclient_address` option. In this way, the VNC proxy works as a bridge between the public network and private host network. #. The proxy initiates the connection to VNC server and continues to proxy until the session ends. This means a typical deployment with noVNC-based VNC consoles will have the following components: - One or more :program:`nova-novncproxy` service. Supports browser-based noVNC clients. For simple deployments, this service typically runs on the same machine as :program:`nova-api` because it operates as a proxy between the public network and the private compute host network. - One or more :program:`nova-compute` services. Hosts the instances for which consoles are provided. .. todo:: The below diagram references :program:`nova-consoleauth` and needs to be updated. This particular example is illustrated below. .. figure:: figures/SCH_5009_V00_NUAC-VNC_OpenStack.png :alt: noVNC process :width: 95% noVNC-based VNC console ----------------------- VNC is a graphical console with wide support among many hypervisors and clients. noVNC provides VNC support through a web browser. .. note:: It has `been reported`__ that versions of noVNC older than 0.6 do not work with the :program:`nova-novncproxy` service. If using non-US key mappings, you need at least noVNC 1.0.0 for `a fix`__. If using VMware ESX/ESXi hypervisors, you need at least noVNC 1.1.0 for `a fix`__. __ https://bugs.launchpad.net/nova/+bug/1752896 __ https://github.com/novnc/noVNC/commit/99feba6ba8fee5b3a2b2dc99dc25e9179c560d31 __ https://github.com/novnc/noVNC/commit/2c813a33fe6821f5af737327c50f388052fa963b Configuration ~~~~~~~~~~~~~ To enable the noVNC VNC console service, you must configure both the :program:`nova-novncproxy` service and the :program:`nova-compute` service. Most options are defined in the :oslo.config:group:`vnc` group. The :program:`nova-novncproxy` service accepts the following options: - :oslo.config:option:`daemon` - :oslo.config:option:`ssl_only` - :oslo.config:option:`source_is_ipv6` - :oslo.config:option:`cert` - :oslo.config:option:`key` - :oslo.config:option:`web` - :oslo.config:option:`console.ssl_ciphers` - :oslo.config:option:`console.ssl_minimum_version` - :oslo.config:option:`vnc.novncproxy_host` - :oslo.config:option:`vnc.novncproxy_port` If using the libvirt compute driver and enabling :ref:`vnc-security`, the following additional options are supported: - :oslo.config:option:`vnc.auth_schemes` - :oslo.config:option:`vnc.vencrypt_client_key` - :oslo.config:option:`vnc.vencrypt_client_cert` - :oslo.config:option:`vnc.vencrypt_ca_certs` For example, to configure this via a ``nova-novncproxy.conf`` file: .. code-block:: ini [vnc] novncproxy_host = 0.0.0.0 novncproxy_port = 6082 .. note:: This doesn't show configuration with security. For information on how to configure this, refer to :ref:`vnc-security` below. 
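After restarting the :program:`nova-novncproxy` service with this configuration, a quick sanity check is to confirm that the proxy is listening on the configured port. This is only a sketch: the exact service name and the availability of the :command:`ss` utility depend on your distribution and installation method.

.. code-block:: console

   # systemctl restart nova-novncproxy
   # ss -tlnp | grep 6082

The output should show the proxy process bound to the :oslo.config:option:`vnc.novncproxy_host` and :oslo.config:option:`vnc.novncproxy_port` values configured above.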
The :program:`nova-compute` service requires the following options to configure noVNC-based VNC console support: - :oslo.config:option:`vnc.enabled` - :oslo.config:option:`vnc.novncproxy_base_url` - :oslo.config:option:`vnc.server_listen` - :oslo.config:option:`vnc.server_proxyclient_address` - :oslo.config:option:`vnc.keymap` If using the VMware compute driver, the following additional options are supported: - :oslo.config:option:`vmware.vnc_port` - :oslo.config:option:`vmware.vnc_port_total` For example, to configure this via a ``nova.conf`` file: .. code-block:: ini [vnc] enabled = True novncproxy_base_url = http://IP_ADDRESS:6082/vnc_auto.html server_listen = 127.0.0.1 server_proxyclient_address = 127.0.0.1 keymap = en-us Replace ``IP_ADDRESS`` with the IP address from which the proxy is accessible by the outside world. For example, this may be the management interface IP address of the controller or the VIP. .. _vnc-security: VNC proxy security ~~~~~~~~~~~~~~~~~~ Deploy the public-facing interface of the VNC proxy with HTTPS to prevent attacks from malicious parties on the network between the tenant user and proxy server. When using HTTPS, the TLS encryption only applies to data between the tenant user and proxy server. The data between the proxy server and Compute node instance will still be unencrypted. To provide protection for the latter, it is necessary to enable the VeNCrypt authentication scheme for VNC in both the Compute nodes and noVNC proxy server hosts. QEMU/KVM Compute node configuration +++++++++++++++++++++++++++++++++++ Ensure each Compute node running QEMU/KVM with libvirt has a set of certificates issued to it. The following is a list of the required certificates: - :file:`/etc/pki/libvirt-vnc/server-cert.pem` An x509 certificate to be presented **by the VNC server**. The ``CommonName`` should match the **primary hostname of the compute node**. Use of ``subjectAltName`` is also permitted if there is a need to use multiple hostnames or IP addresses to access the same Compute node. - :file:`/etc/pki/libvirt-vnc/server-key.pem` The private key used to generate the ``server-cert.pem`` file. - :file:`/etc/pki/libvirt-vnc/ca-cert.pem` The authority certificate used to sign ``server-cert.pem`` and sign the VNC proxy server certificates. The certificates must have v3 basic constraints [2]_ present to indicate the permitted key use and purpose data. We recommend using a dedicated certificate authority solely for the VNC service. This authority may be a child of the master certificate authority used for the OpenStack deployment. This is because libvirt does not currently have a mechanism to restrict what certificates can be presented by the proxy server. For further details on certificate creation, consult the QEMU manual page documentation on VNC server certificate setup [1]_. Configure libvirt to enable the VeNCrypt authentication scheme for the VNC server. In :file:`/etc/libvirt/qemu.conf`, uncomment the following settings: - ``vnc_tls=1`` This instructs libvirt to enable the VeNCrypt authentication scheme when launching QEMU, passing it the certificates shown above. - ``vnc_tls_x509_verify=1`` This instructs QEMU to require that all VNC clients present a valid x509 certificate. Assuming a dedicated certificate authority is used for the VNC service, this ensures that only approved VNC proxy servers can connect to the Compute nodes. After editing :file:`qemu.conf`, the ``libvirtd`` service must be restarted: .. 
code-block:: shell $ systemctl restart libvirtd.service Changes will not apply to any existing running guests on the Compute node, so this configuration should be done before launching any instances. noVNC proxy server configuration ++++++++++++++++++++++++++++++++ The noVNC proxy server initially only supports the ``none`` authentication scheme, which does no checking. Therefore, it is necessary to enable the ``vencrypt`` authentication scheme by editing the :file:`nova.conf` file to set. .. code-block:: ini [vnc] auth_schemes=vencrypt,none The :oslo.config:option:`vnc.auth_schemes` values should be listed in order of preference. If enabling VeNCrypt on an existing deployment which already has instances running, the noVNC proxy server must initially be allowed to use ``vencrypt`` and ``none``. Once it is confirmed that all Compute nodes have VeNCrypt enabled for VNC, it is possible to remove the ``none`` option from the list of the :oslo.config:option:`vnc.auth_schemes` values. At that point, the noVNC proxy will refuse to connect to any Compute node that does not offer VeNCrypt. As well as enabling the authentication scheme, it is necessary to provide certificates to the noVNC proxy. - :file:`/etc/pki/nova-novncproxy/client-cert.pem` An x509 certificate to be presented **to the VNC server**. While libvirt/QEMU will not currently do any validation of the ``CommonName`` field, future versions will allow for setting up access controls based on the ``CommonName``. The ``CommonName`` field should match the **primary hostname of the controller node**. If using a HA deployment, the ``Organization`` field can also be configured to a value that is common across all console proxy instances in the deployment. This avoids the need to modify each compute node's whitelist every time a console proxy instance is added or removed. - :file:`/etc/pki/nova-novncproxy/client-key.pem` The private key used to generate the ``client-cert.pem`` file. - :file:`/etc/pki/nova-novncproxy/ca-cert.pem` The certificate authority cert used to sign ``client-cert.pem`` and sign the compute node VNC server certificates. The certificates must have v3 basic constraints [2]_ present to indicate the permitted key use and purpose data. Once the certificates have been created, the noVNC console proxy service must be told where to find them. This requires editing :file:`nova.conf` to set. .. code-block:: ini [vnc] vencrypt_client_key=/etc/pki/nova-novncproxy/client-key.pem vencrypt_client_cert=/etc/pki/nova-novncproxy/client-cert.pem vencrypt_ca_certs=/etc/pki/nova-novncproxy/ca-cert.pem SPICE console ------------- The VNC protocol is fairly limited, lacking support for multiple monitors, bi-directional audio, reliable cut-and-paste, video streaming and more. SPICE is a new protocol that aims to address the limitations in VNC and provide good remote desktop support. SPICE support in OpenStack Compute shares a similar architecture to the VNC implementation. The OpenStack dashboard uses a SPICE-HTML5 widget in its console tab that communicates with the :program:`nova-spicehtml5proxy` service by using SPICE-over-websockets. The :program:`nova-spicehtml5proxy` service communicates directly with the hypervisor process by using SPICE. Configuration ~~~~~~~~~~~~~ .. important:: VNC must be explicitly disabled to get access to the SPICE console. Set the :oslo.config:option:`vnc.enabled` option to ``False`` to disable the VNC console. 
To enable the SPICE console service, you must configure both the :program:`nova-spicehtml5proxy` service and the :program:`nova-compute` service. Most options are defined in the :oslo.config:group:`spice` group. The :program:`nova-spicehtml5proxy` service accepts the following options. - :oslo.config:option:`daemon` - :oslo.config:option:`ssl_only` - :oslo.config:option:`source_is_ipv6` - :oslo.config:option:`cert` - :oslo.config:option:`key` - :oslo.config:option:`web` - :oslo.config:option:`console.ssl_ciphers` - :oslo.config:option:`console.ssl_minimum_version` - :oslo.config:option:`spice.html5proxy_host` - :oslo.config:option:`spice.html5proxy_port` For example, to configure this via a ``nova-spicehtml5proxy.conf`` file: .. code-block:: ini [spice] html5proxy_host = 0.0.0.0 html5proxy_port = 6082 The :program:`nova-compute` service requires the following options to configure SPICE console support. - :oslo.config:option:`spice.enabled` - :oslo.config:option:`spice.agent_enabled` - :oslo.config:option:`spice.html5proxy_base_url` - :oslo.config:option:`spice.server_listen` - :oslo.config:option:`spice.server_proxyclient_address` - :oslo.config:option:`spice.keymap` For example, to configure this via a ``nova.conf`` file: .. code-block:: ini [spice] agent_enabled = False enabled = True html5proxy_base_url = http://IP_ADDRESS:6082/spice_auto.html server_listen = 127.0.0.1 server_proxyclient_address = 127.0.0.1 keymap = en-us Replace ``IP_ADDRESS`` with the IP address from which the proxy is accessible by the outside world. For example, this may be the management interface IP address of the controller or the VIP. Serial ------ Serial consoles provide an alternative to graphical consoles like VNC or SPICE. They work a little differently to graphical consoles so an example is beneficial. The example below uses these nodes: * controller node with IP ``192.168.50.100`` * compute node 1 with IP ``192.168.50.104`` * compute node 2 with IP ``192.168.50.105`` Here's the general flow of actions: .. figure:: figures/serial-console-flow.svg :width: 100% :alt: The serial console flow 1. The user requests a serial console connection string for an instance from the REST API. 2. The :program:`nova-api` service asks the :program:`nova-compute` service, which manages that instance, to fulfill that request. 3. That connection string gets used by the user to connect to the :program:`nova-serialproxy` service. 4. The :program:`nova-serialproxy` service then proxies the console interaction to the port of the compute node where the instance is running. That port gets forwarded by the hypervisor (or ironic conductor, for ironic) to the guest. Configuration ~~~~~~~~~~~~~ To enable the serial console service, you must configure both the :program:`nova-serialproxy` service and the :program:`nova-compute` service. Most options are defined in the :oslo.config:group:`serial_console` group. The :program:`nova-serialproxy` service accepts the following options. - :oslo.config:option:`daemon` - :oslo.config:option:`ssl_only` - :oslo.config:option:`source_is_ipv6` - :oslo.config:option:`cert` - :oslo.config:option:`key` - :oslo.config:option:`web` - :oslo.config:option:`console.ssl_ciphers` - :oslo.config:option:`console.ssl_minimum_version` - :oslo.config:option:`serial_console.serialproxy_host` - :oslo.config:option:`serial_console.serialproxy_port` For example, to configure this via a ``nova-serialproxy.conf`` file: .. 
code-block:: ini [serial_console] serialproxy_host = 0.0.0.0 serialproxy_port = 6083 The :program:`nova-compute` service requires the following options to configure serial console support. - :oslo.config:option:`serial_console.enabled` - :oslo.config:option:`serial_console.base_url` - :oslo.config:option:`serial_console.proxyclient_address` - :oslo.config:option:`serial_console.port_range` For example, to configure this via a ``nova.conf`` file: .. code-block:: ini [serial_console] enabled = True base_url = ws://IP_ADDRESS:6083/ proxyclient_address = 127.0.0.1 port_range = 10000:20000 Replace ``IP_ADDRESS`` with the IP address from which the proxy is accessible by the outside world. For example, this may be the management interface IP address of the controller or the VIP. There are some things to keep in mind when configuring these options: * :oslo.config:option:`serial_console.serialproxy_host` is the address the :program:`nova-serialproxy` service listens to for incoming connections. * :oslo.config:option:`serial_console.serialproxy_port` must be the same value as the port in the URI of :oslo.config:option:`serial_console.base_url`. * The URL defined in :oslo.config:option:`serial_console.base_url` will form part of the response the user will get when asking for a serial console connection string. This means it needs to be an URL the user can connect to. * :oslo.config:option:`serial_console.proxyclient_address` will be used by the :program:`nova-serialproxy` service to determine where to connect to for proxying the console interaction. RDP --- RDP is a graphical console primarily used with Hyper-V. Nova does not provide a console proxy service for RDP - instead, an external proxy service, such as the :program:`wsgate` application provided by `FreeRDP-WebConnect`__, should be used. __ https://github.com/FreeRDP/FreeRDP-WebConnect Configuration ~~~~~~~~~~~~~ To enable the RDP console service, you must configure both a console proxy service like :program:`wsgate` and the :program:`nova-compute` service. All options for the latter service are defined in the :oslo.config:group:`rdp` group. Information on configuring an RDP console proxy service, such as :program:`wsgate`, is not provided here. However, more information can be found at `cloudbase.it`__. The :program:`nova-compute` service requires the following options to configure RDP console support. - :oslo.config:option:`rdp.enabled` - :oslo.config:option:`rdp.html5_proxy_base_url` For example, to configure this via a ``nova.conf`` file: .. code-block:: ini [rdp] enabled = True html5_proxy_base_url = https://IP_ADDRESS:6083/ Replace ``IP_ADDRESS`` with the IP address from which the proxy is accessible by the outside world. For example, this may be the management interface IP address of the controller or the VIP. __ https://cloudbase.it/freerdp-html5-proxy-windows/ MKS --- MKS is the protocol used for accessing the console of a virtual machine running on VMware vSphere. It is very similar to VNC. Due to the architecture of the VMware vSphere hypervisor, it is not necessary to run a console proxy service. Configuration ~~~~~~~~~~~~~ To enable the MKS console service, only the :program:`nova-compute` service must be configured. All options are defined in the :oslo.config:group:`mks` group. The :program:`nova-compute` service requires the following options to configure MKS console support. - :oslo.config:option:`mks.enabled` - :oslo.config:option:`mks.mksproxy_base_url` For example, to configure this via a ``nova.conf`` file: .. 
code-block:: ini [mks] enabled = True mksproxy_base_url = https://127.0.0.1:6090/ .. _about-nova-consoleauth: About ``nova-consoleauth`` -------------------------- The now-removed :program:`nova-consoleauth` service was previously used to provide a shared service to manage token authentication that the client proxies outlined below could leverage. Token authentication was moved to the database in 18.0.0 (Rocky) and the service was removed in 20.0.0 (Train). Frequently Asked Questions -------------------------- - **Q: I want VNC support in the OpenStack dashboard. What services do I need?** A: You need ``nova-novncproxy`` and correctly configured compute hosts. - **Q: My VNC proxy worked fine during my all-in-one test, but now it doesn't work on multi host. Why?** A: The default options work for an all-in-one install, but changes must be made on your compute hosts once you start to build a cluster. As an example, suppose you have two servers: .. code-block:: bash PROXYSERVER (public_ip=172.24.1.1, management_ip=192.168.1.1) COMPUTESERVER (management_ip=192.168.1.2) Your ``nova-compute`` configuration file must set the following values: .. code-block:: ini [vnc] # These flags help construct a connection data structure server_proxyclient_address=192.168.1.2 novncproxy_base_url=http://172.24.1.1:6080/vnc_auto.html # This is the address where the underlying vncserver (not the proxy) # will listen for connections. server_listen=192.168.1.2 .. note:: ``novncproxy_base_url`` uses a public IP; this is the URL that is ultimately returned to clients, which generally do not have access to your private network. Your PROXYSERVER must be able to reach ``server_proxyclient_address``, because that is the address over which the VNC connection is proxied. - **Q: My noVNC does not work with recent versions of web browsers. Why?** A: Make sure you have installed ``python-numpy``, which is required to support a newer version of the WebSocket protocol (HyBi-07+). - **Q: How do I adjust the dimensions of the VNC window image in the OpenStack dashboard?** A: These values are hard-coded in a Django HTML template. To alter them, edit the ``_detail_vnc.html`` template file. The location of this file varies based on Linux distribution. On Ubuntu 14.04, the file is at ``/usr/share/pyshared/horizon/dashboards/nova/instances/templates/instances/_detail_vnc.html``. Modify the ``width`` and ``height`` options, as follows: .. code-block:: ini - **Q: My noVNC connections failed with ValidationError: Origin header protocol does not match. Why?** A: Make sure the ``base_url`` match your TLS setting. If you are using https console connections, make sure that the value of ``novncproxy_base_url`` is set explicitly where the ``nova-novncproxy`` service is running. References ---------- .. [1] https://qemu.weilnetz.de/doc/qemu-doc.html#vnc_005fsec_005fcertificate_005fverify .. [2] https://tools.ietf.org/html/rfc3280#section-4.2.1.10 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/root-wrap-reference.rst0000664000175000017500000001140100000000000022340 0ustar00zuulzuul00000000000000==================== Secure with rootwrap ==================== Rootwrap allows unprivileged users to safely run Compute actions as the root user. Compute previously used :command:`sudo` for this purpose, but this was difficult to maintain, and did not allow advanced filters. The :command:`rootwrap` command replaces :command:`sudo` for Compute. 
To use rootwrap, prefix the Compute command with :command:`nova-rootwrap`. For example: .. code-block:: console $ sudo nova-rootwrap /etc/nova/rootwrap.conf command A generic ``sudoers`` entry lets the Compute user run :command:`nova-rootwrap` as root. The :command:`nova-rootwrap` code looks for filter definition directories in its configuration file, and loads command filters from them. It then checks if the command requested by Compute matches one of those filters and, if so, executes the command (as root). If no filter matches, it denies the request. .. note:: Be aware of issues with using NFS and root-owned files. The NFS share must be configured with the ``no_root_squash`` option enabled, in order for rootwrap to work correctly. Rootwrap is fully controlled by the root user. The root user owns the sudoers entry which allows Compute to run a specific rootwrap executable as root, and only with a specific configuration file (which should also be owned by root). The :command:`nova-rootwrap` command imports the Python modules it needs from a cleaned, system-default PYTHONPATH. The root-owned configuration file points to root-owned filter definition directories, which contain root-owned filters definition files. This chain ensures that the Compute user itself is not in control of the configuration or modules used by the :command:`nova-rootwrap` executable. Configure rootwrap ~~~~~~~~~~~~~~~~~~ Configure rootwrap in the ``rootwrap.conf`` file. Because it is in the trusted security path, it must be owned and writable by only the root user. The ``rootwrap_config=entry`` parameter specifies the file's location in the sudoers entry and in the ``nova.conf`` configuration file. The ``rootwrap.conf`` file uses an INI file format with these sections and parameters: .. list-table:: **rootwrap.conf configuration options** :widths: 64 31 * - Configuration option=Default value - (Type) Description * - [DEFAULT] filters\_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap - (ListOpt) Comma-separated list of directories containing filter definition files. Defines where rootwrap filters are stored. Directories defined on this line should all exist, and be owned and writable only by the root user. If the root wrapper is not performing correctly, you can add a workaround option into the ``nova.conf`` configuration file. This workaround re-configures the root wrapper configuration to fall back to running commands as ``sudo``, and is a Kilo release feature. Including this workaround in your configuration file safeguards your environment from issues that can impair root wrapper performance. Tool changes that have impacted `Python Build Reasonableness (PBR) `__ for example, are a known issue that affects root wrapper performance. To set up this workaround, configure the ``disable_rootwrap`` option in the ``[workaround]`` section of the ``nova.conf`` configuration file. The filters definition files contain lists of filters that rootwrap will use to allow or deny a specific command. They are generally suffixed by ``.filters`` . Since they are in the trusted security path, they need to be owned and writable only by the root user. Their location is specified in the ``rootwrap.conf`` file. Filter definition files use an INI file format with a ``[Filters]`` section and several lines, each with a unique parameter name, which should be different for each filter you define: .. 
list-table:: **Filters configuration options** :widths: 72 39 * - Configuration option=Default value - (Type) Description * - [Filters] filter\_name=kpartx: CommandFilter, /sbin/kpartx, root - (ListOpt) Comma-separated list containing the filter class to use, followed by the Filter arguments (which vary depending on the Filter class selected). Configure the rootwrap daemon ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Administrators can use rootwrap daemon support instead of running rootwrap with :command:`sudo`. The rootwrap daemon reduces the overhead and performance loss that results from running ``oslo.rootwrap`` with :command:`sudo`. Each call that needs rootwrap privileges requires a new instance of rootwrap. The daemon prevents overhead from the repeated calls. The daemon does not support long running processes, however. To enable the rootwrap daemon, set ``use_rootwrap_daemon`` to ``True`` in the Compute service configuration file. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/secure-live-migration-with-qemu-native-tls.rst0000664000175000017500000002020400000000000026667 0ustar00zuulzuul00000000000000========================================== Secure live migration with QEMU-native TLS ========================================== Context ~~~~~~~ The encryption offered by nova's :oslo.config:option:`libvirt.live_migration_tunnelled` does not secure all the different migration streams of a nova instance, namely: guest RAM, device state, and disks (via NBD) when using non-shared storage. Further, the "tunnelling via libvirtd" has inherent limitations: (a) it cannot handle live migration of disks in a non-shared storage setup (a.k.a. "block migration"); and (b) has a huge performance overhead and latency, because it burns more CPU and memory bandwidth due to increased number of data copies on both source and destination hosts. To solve this existing limitation, QEMU and libvirt have gained (refer :ref:`below ` for version details) support for "native TLS", i.e. TLS built into QEMU. This will secure all data transports, including disks that are not on shared storage, without incurring the limitations of the "tunnelled via libvirtd" transport. To take advantage of the "native TLS" support in QEMU and libvirt, nova has introduced new configuration attribute :oslo.config:option:`libvirt.live_migration_with_native_tls`. .. _`Prerequisites`: Prerequisites ~~~~~~~~~~~~~ (1) Version requirement: This feature needs at least libvirt 4.4.0 and QEMU 2.11. (2) A pre-configured TLS environment—i.e. CA, server, and client certificates, their file permissions, et al—must be "correctly" configured (typically by an installer tool) on all relevant compute nodes. To simplify your PKI (Public Key Infrastructure) setup, use deployment tools that take care of handling all the certificate lifecycle management. For example, refer to the "`TLS everywhere `__" guide from the TripleO project. (3) Password-less SSH setup for all relevant compute nodes. 
(4) On all relevant compute nodes, ensure the TLS-related config attributes in ``/etc/libvirt/qemu.conf`` are in place:: default_tls_x509_cert_dir = "/etc/pki/qemu" default_tls_x509_verify = 1 If it is not already configured, modify ``/etc/sysconfig/libvirtd`` on both (ComputeNode1 & ComputeNode2) to listen for TCP/IP connections:: LIBVIRTD_ARGS="--listen" Then, restart the libvirt daemon (also on both nodes):: $ systemctl restart libvirtd Refer to the "`Related information`_" section on a note about the other TLS-related configuration attributes in ``/etc/libvirt/qemu.conf``. Validating your TLS environment on compute nodes ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Assuming you have two compute hosts (``ComputeNode1``, and ``ComputeNode2``) run the :command:`virt-pki-validate` tool (comes with the ``libvirt-client`` package on your Linux distribution) on both the nodes to ensure all the necessary PKI files are configured are configured:: [ComputeNode1]$ virt-pki-validate Found /usr/bin/certtool Found CA certificate /etc/pki/CA/cacert.pem for TLS Migration Test Found client certificate /etc/pki/libvirt/clientcert.pem for ComputeNode1 Found client private key /etc/pki/libvirt/private/clientkey.pem Found server certificate /etc/pki/libvirt/servercert.pem for ComputeNode1 Found server private key /etc/pki/libvirt/private/serverkey.pem Make sure /etc/sysconfig/libvirtd is setup to listen to TCP/IP connections and restart the libvirtd service [ComputeNode2]$ virt-pki-validate Found /usr/bin/certtool Found CA certificate /etc/pki/CA/cacert.pem for TLS Migration Test Found client certificate /etc/pki/libvirt/clientcert.pem for ComputeNode2 Found client private key /etc/pki/libvirt/private/clientkey.pem Found server certificate /etc/pki/libvirt/servercert.pem for ComputeNode2 Found server private key /etc/pki/libvirt/private/serverkey.pem Make sure /etc/sysconfig/libvirtd is setup to listen to TCP/IP connections and restart the libvirtd service Other TLS environment related checks on compute nodes ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ **IMPORTANT**: Ensure that the permissions of certificate files and keys in ``/etc/pki/qemu/*`` directory on both source *and* destination compute nodes to be the following ``0640`` with ``root:qemu`` as the group/user. For example, on a Fedora-based system:: $ ls -lasrtZ /etc/pki/qemu total 32 0 drwxr-xr-x. 10 root root system_u:object_r:cert_t:s0 110 Dec 10 10:39 .. 4 -rw-r-----. 1 root qemu unconfined_u:object_r:cert_t:s0 1464 Dec 10 11:08 ca-cert.pem 4 -rw-r-----. 1 root qemu unconfined_u:object_r:cert_t:s0 1558 Dec 10 11:08 server-cert.pem 4 -rw-r-----. 1 root qemu unconfined_u:object_r:cert_t:s0 1619 Dec 10 11:09 client-cert.pem 8 -rw-r-----. 1 root qemu unconfined_u:object_r:cert_t:s0 8180 Dec 10 11:09 client-key.pem 8 -rw-r-----. 1 root qemu unconfined_u:object_r:cert_t:s0 8177 Dec 11 05:35 server-key.pem 0 drwxr-xr-x. 2 root root unconfined_u:object_r:cert_t:s0 146 Dec 11 06:01 . Performing the migration ~~~~~~~~~~~~~~~~~~~~~~~~ (1) On all relevant compute nodes, enable the :oslo.config:option:`libvirt.live_migration_with_native_tls` configuration attribute and set the :oslo.config:option:`libvirt.live_migration_scheme` configuration attribute to tls:: [libvirt] live_migration_with_native_tls = true live_migration_scheme = tls .. note:: Setting both :oslo.config:option:`libvirt.live_migration_with_native_tls` and :oslo.config:option:`libvirt.live_migration_tunnelled` at the same time is invalid (and disallowed). .. 
note:: Not setting :oslo.config:option:`libvirt.live_migration_scheme` to ``tls`` will result in libvirt using the unencrypted TCP connection without displaying any error or a warning in the logs. And restart the ``nova-compute`` service:: $ systemctl restart openstack-nova-compute (2) Now that all TLS-related configuration is in place, migrate guests (with or without shared storage) from ``ComputeNode1`` to ``ComputeNode2``. Refer to the :doc:`live-migration-usage` document on details about live migration. .. _`Related information`: Related information ~~~~~~~~~~~~~~~~~~~ - If you have the relevant libvirt and QEMU versions (mentioned in the "`Prerequisites`_" section earlier), then using the :oslo.config:option:`libvirt.live_migration_with_native_tls` is strongly recommended over the more limited :oslo.config:option:`libvirt.live_migration_tunnelled` option, which is intended to be deprecated in future. - There are in total *nine* TLS-related config options in ``/etc/libvirt/qemu.conf``:: default_tls_x509_cert_dir default_tls_x509_verify nbd_tls nbd_tls_x509_cert_dir migrate_tls_x509_cert_dir vnc_tls_x509_cert_dir spice_tls_x509_cert_dir vxhs_tls_x509_cert_dir chardev_tls_x509_cert_dir If you set both ``default_tls_x509_cert_dir`` and ``default_tls_x509_verify`` parameters for all certificates, there is no need to specify any of the other ``*_tls*`` config options. The intention (of libvirt) is that you can just use the ``default_tls_x509_*`` config attributes so that you don't need to set any other ``*_tls*`` parameters, _unless_ you need different certificates for some services. The rationale for that is that some services (e.g. migration / NBD) are only exposed to internal infrastructure; while some sevices (VNC, Spice) might be exposed publically, so might need different certificates. For OpenStack this does not matter, though, we will stick with the defaults. - If they are not already open, ensure you open up these TCP ports on your firewall: ``16514`` (where the authenticated and encrypted TCP/IP socket will be listening on) and ``49152-49215`` (for regular migration) on all relevant compute nodes. (Otherwise you get ``error: internal error: unable to execute QEMU command 'drive-mirror': Failed to connect socket: No route to host``). ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/security-groups.rst0000664000175000017500000002776100000000000021656 0ustar00zuulzuul00000000000000======================= Manage project security ======================= Security groups are sets of IP filter rules that are applied to all project instances, which define networking access to the instance. Group rules are project specific; project members can edit the default rules for their group and add new rule sets. All projects have a ``default`` security group which is applied to any instance that has no other defined security group. Unless you change the default, this security group denies all incoming traffic and allows only outgoing traffic to your instance. Security groups (and their quota) are managed by :neutron-doc:`Neutron, the networking service `. Working with security groups ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From the command-line you can get a list of security groups for the project, using the :command:`openstack` commands. List and view current security groups ------------------------------------- #. 
Ensure your system variables are set for the user and project for which you are checking security group rules. For example: .. code-block:: console export OS_USERNAME=demo00 export OS_TENANT_NAME=tenant01 #. Output security groups, as follows: .. code-block:: console $ openstack security group list +--------------------------------------+---------+-------------+ | Id | Name | Description | +--------------------------------------+---------+-------------+ | 73580272-d8fa-4927-bd55-c85e43bc4877 | default | default | | 6777138a-deb7-4f10-8236-6400e7aff5b0 | open | all ports | +--------------------------------------+---------+-------------+ #. View the details of a group, as follows: .. code-block:: console $ openstack security group rule list GROUPNAME For example: .. code-block:: console $ openstack security group rule list open +--------------------------------------+-------------+-----------+-----------------+-----------------------+ | ID | IP Protocol | IP Range | Port Range | Remote Security Group | +--------------------------------------+-------------+-----------+-----------------+-----------------------+ | 353d0611-3f67-4848-8222-a92adbdb5d3a | udp | 0.0.0.0/0 | 1:65535 | None | | 63536865-e5b6-4df1-bac5-ca6d97d8f54d | tcp | 0.0.0.0/0 | 1:65535 | None | +--------------------------------------+-------------+-----------+-----------------+-----------------------+ These rules are allow type rules as the default is deny. The first column is the IP protocol (one of ICMP, TCP, or UDP). The second and third columns specify the affected port range. The third column specifies the IP range in CIDR format. This example shows the full port range for all protocols allowed from all IPs. Create a security group ----------------------- When adding a new security group, you should pick a descriptive but brief name. This name shows up in brief descriptions of the instances that use it where the longer description field often does not. For example, seeing that an instance is using security group "http" is much easier to understand than "bobs\_group" or "secgrp1". #. Ensure your system variables are set for the user and project for which you are creating security group rules. #. Add the new security group, as follows: .. code-block:: console $ openstack security group create GroupName --description Description For example: .. code-block:: console $ openstack security group create global_http --description "Allows Web traffic anywhere on the Internet." +-----------------+--------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-----------------+--------------------------------------------------------------------------------------------------------------------------+ | created_at | 2016-11-03T13:50:53Z | | description | Allows Web traffic anywhere on the Internet. 
| | headers | | | id | c0b92b20-4575-432a-b4a9-eaf2ad53f696 | | name | global_http | | project_id | 5669caad86a04256994cdf755df4d3c1 | | project_id | 5669caad86a04256994cdf755df4d3c1 | | revision_number | 1 | | rules | created_at='2016-11-03T13:50:53Z', direction='egress', ethertype='IPv4', id='4d8cec94-e0ee-4c20-9f56-8fb67c21e4df', | | | project_id='5669caad86a04256994cdf755df4d3c1', revision_number='1', updated_at='2016-11-03T13:50:53Z' | | | created_at='2016-11-03T13:50:53Z', direction='egress', ethertype='IPv6', id='31be2ad1-be14-4aef-9492-ecebede2cf12', | | | project_id='5669caad86a04256994cdf755df4d3c1', revision_number='1', updated_at='2016-11-03T13:50:53Z' | | updated_at | 2016-11-03T13:50:53Z | +-----------------+--------------------------------------------------------------------------------------------------------------------------+ #. Add a new group rule, as follows: .. code-block:: console $ openstack security group rule create SEC_GROUP_NAME \ --protocol PROTOCOL --dst-port FROM_PORT:TO_PORT --remote-ip CIDR The arguments are positional, and the ``from-port`` and ``to-port`` arguments specify the local port range connections are allowed to access, not the source and destination ports of the connection. For example: .. code-block:: console $ openstack security group rule create global_http \ --protocol tcp --dst-port 80:80 --remote-ip 0.0.0.0/0 +-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | created_at | 2016-11-06T14:02:00Z | | description | | | direction | ingress | | ethertype | IPv4 | | headers | | | id | 2ba06233-d5c8-43eb-93a9-8eaa94bc9eb5 | | port_range_max | 80 | | port_range_min | 80 | | project_id | 5669caad86a04256994cdf755df4d3c1 | | project_id | 5669caad86a04256994cdf755df4d3c1 | | protocol | tcp | | remote_group_id | None | | remote_ip_prefix | 0.0.0.0/0 | | revision_number | 1 | | security_group_id | c0b92b20-4575-432a-b4a9-eaf2ad53f696 | | updated_at | 2016-11-06T14:02:00Z | +-------------------+--------------------------------------+ You can create complex rule sets by creating additional rules. For example, if you want to pass both HTTP and HTTPS traffic, run: .. code-block:: console $ openstack security group rule create global_http \ --protocol tcp --dst-port 443:443 --remote-ip 0.0.0.0/0 +-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | created_at | 2016-11-06T14:09:20Z | | description | | | direction | ingress | | ethertype | IPv4 | | headers | | | id | 821c3ef6-9b21-426b-be5b-c8a94c2a839c | | port_range_max | 443 | | port_range_min | 443 | | project_id | 5669caad86a04256994cdf755df4d3c1 | | project_id | 5669caad86a04256994cdf755df4d3c1 | | protocol | tcp | | remote_group_id | None | | remote_ip_prefix | 0.0.0.0/0 | | revision_number | 1 | | security_group_id | c0b92b20-4575-432a-b4a9-eaf2ad53f696 | | updated_at | 2016-11-06T14:09:20Z | +-------------------+--------------------------------------+ Despite only outputting the newly added rule, this operation is additive (both rules are created and enforced). #. View all rules for the new security group, as follows: .. 
code-block:: console $ openstack security group rule list global_http +--------------------------------------+-------------+-----------+-----------------+-----------------------+ | ID | IP Protocol | IP Range | Port Range | Remote Security Group | +--------------------------------------+-------------+-----------+-----------------+-----------------------+ | 353d0611-3f67-4848-8222-a92adbdb5d3a | tcp | 0.0.0.0/0 | 80:80 | None | | 63536865-e5b6-4df1-bac5-ca6d97d8f54d | tcp | 0.0.0.0/0 | 443:443 | None | +--------------------------------------+-------------+-----------+-----------------+-----------------------+ Delete a security group ----------------------- #. Ensure your system variables are set for the user and project for which you are deleting a security group. #. Delete the new security group, as follows: .. code-block:: console $ openstack security group delete GROUPNAME For example: .. code-block:: console $ openstack security group delete global_http Create security group rules for a cluster of instances ------------------------------------------------------ Source Groups are a special, dynamic way of defining the CIDR of allowed sources. The user specifies a Source Group (Security Group name), and all the user's other Instances using the specified Source Group are selected dynamically. This alleviates the need for individual rules to allow each new member of the cluster. #. Make sure to set the system variables for the user and project for which you are creating a security group rule. #. Add a source group, as follows: .. code-block:: console $ openstack security group rule create secGroupName \ --remote-group source-group --protocol ip-protocol \ --dst-port from-port:to-port For example: .. code-block:: console $ openstack security group rule create cluster \ --remote-group global_http --protocol tcp --dst-port 22:22 The ``cluster`` rule allows SSH access from any other instance that uses the ``global_http`` group. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/security.rst0000664000175000017500000000344400000000000020331 0ustar00zuulzuul00000000000000================== Security hardening ================== OpenStack Compute can be integrated with various third-party technologies to increase security. For more information, see the `OpenStack Security Guide `_. Encrypt Compute metadata traffic ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ **Enabling SSL encryption** OpenStack supports encrypting Compute metadata traffic with HTTPS. Enable SSL encryption in the ``metadata_agent.ini`` file. #. Enable the HTTPS protocol. .. code-block:: ini nova_metadata_protocol = https #. Determine whether insecure SSL connections are accepted for Compute metadata server requests. The default value is ``False``. .. code-block:: ini nova_metadata_insecure = False #. Specify the path to the client certificate. .. code-block:: ini nova_client_cert = PATH_TO_CERT #. Specify the path to the private key. .. code-block:: ini nova_client_priv_key = PATH_TO_KEY Securing live migration streams with QEMU-native TLS ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ It is strongly recommended to secure all the different live migration streams of a nova instance—i.e. guest RAM, device state, and disks (via NBD) when using non-shared storage. For further details on how to set this up, refer to the :doc:`secure-live-migration-with-qemu-native-tls` document. 
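For quick reference, the compute node settings described in that document amount to the following ``nova.conf`` excerpt; the linked guide covers the prerequisites (libvirt/QEMU versions, PKI setup) and the TLS environment validation that must be in place before enabling them:

.. code-block:: ini

   [libvirt]
   live_migration_with_native_tls = true
   live_migration_scheme = tls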
Mitigation for MDS (Microarchitectural Data Sampling) security flaws ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ It is strongly recommended to patch all compute nodes and nova instances against the processor-related security flaws, such as MDS (and other previous vulnerabilities). For details on applying mitigation for the MDS flaws, refer to the :doc:`mitigation-for-Intel-MDS-security-flaws` document. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/service-groups.rst0000664000175000017500000000555700000000000021446 0ustar00zuulzuul00000000000000================================ Configure Compute service groups ================================ The Compute service must know the status of each compute node to effectively manage and use them. This can include events like a user launching a new VM, the scheduler sending a request to a live node, or a query to the ServiceGroup API to determine if a node is live. When a compute worker running the nova-compute daemon starts, it calls the join API to join the compute group. Any service (such as the scheduler) can query the group's membership and the status of its nodes. Internally, the ServiceGroup client driver automatically updates the compute worker status. .. _database-servicegroup-driver: Database ServiceGroup driver ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ By default, Compute uses the database driver to track if a node is live. In a compute worker, this driver periodically sends a ``db update`` command to the database, saying "I'm OK" with a timestamp. Compute uses a pre-defined timeout (``service_down_time``) to determine if a node is dead. The driver has limitations, which can be problematic depending on your environment. If a lot of compute worker nodes need to be checked, the database can be put under heavy load, which can cause the timeout to trigger, and a live node could incorrectly be considered dead. By default, the timeout is 60 seconds. Reducing the timeout value can help in this situation, but you must also make the database update more frequently, which again increases the database workload. The database contains data that is both transient (such as whether the node is alive) and persistent (such as entries for VM owners). With the ServiceGroup abstraction, Compute can treat each type separately. .. _memcache-servicegroup-driver: Memcache ServiceGroup driver ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The memcache ServiceGroup driver uses memcached, a distributed memory object caching system that is used to increase site performance. For more details, see `memcached.org `_. To use the memcache driver, you must install memcached. You might already have it installed, as the same driver is also used for the OpenStack Object Storage and OpenStack dashboard. To install memcached, see the *Environment -> Memcached* section in the `Installation Tutorials and Guides `_ depending on your distribution. These values in the ``/etc/nova/nova.conf`` file are required on every node for the memcache driver: .. code-block:: ini # Driver for the ServiceGroup service servicegroup_driver = "mc" # Memcached servers. Use either a list of memcached servers to use for caching (list value), # or "" for in-process caching (default). memcached_servers = # Timeout; maximum time since last check-in for up service (integer value). 
# Helps to define whether a node is dead service_down_time = 60 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/services.rst0000664000175000017500000000507400000000000020306 0ustar00zuulzuul00000000000000======================= Manage Compute services ======================= You can enable and disable Compute services. The following examples disable and enable the ``nova-compute`` service. #. List the Compute services: .. code-block:: console $ openstack compute service list +----+----------------+------------+----------+---------+-------+----------------------------+ | ID | Binary | Host | Zone | Status | State | Updated At | +----+----------------+------------+----------+---------+-------+----------------------------+ | 4 | nova-scheduler | controller | internal | enabled | up | 2016-12-20T00:44:48.000000 | | 5 | nova-conductor | controller | internal | enabled | up | 2016-12-20T00:44:54.000000 | | 8 | nova-compute | compute | nova | enabled | up | 2016-10-21T02:35:03.000000 | +----+----------------+------------+----------+---------+-------+----------------------------+ #. Disable a nova service: .. code-block:: console $ openstack compute service set --disable --disable-reason "trial log" compute nova-compute +----------+--------------+----------+-------------------+ | Host | Binary | Status | Disabled Reason | +----------+--------------+----------+-------------------+ | compute | nova-compute | disabled | trial log | +----------+--------------+----------+-------------------+ #. Check the service list: .. code-block:: console $ openstack compute service list +----+----------------+------------+----------+---------+-------+----------------------------+ | ID | Binary | Host | Zone | Status | State | Updated At | +----+----------------+------------+----------+---------+-------+----------------------------+ | 5 | nova-scheduler | controller | internal | enabled | up | 2016-12-20T00:44:48.000000 | | 6 | nova-conductor | controller | internal | enabled | up | 2016-12-20T00:44:54.000000 | | 9 | nova-compute | compute | nova | disabled| up | 2016-10-21T02:35:03.000000 | +----+----------------+------------+----------+---------+-------+----------------------------+ #. Enable the service: .. code-block:: console $ openstack compute service set --enable compute nova-compute +----------+--------------+---------+ | Host | Binary | Status | +----------+--------------+---------+ | compute | nova-compute | enabled | +----------+--------------+---------+ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/ssh-configuration.rst0000664000175000017500000000440200000000000022117 0ustar00zuulzuul00000000000000.. _cli-os-migrate-cfg-ssh: =================================== Configure SSH between compute nodes =================================== .. todo:: Consider merging this into a larger "migration" document or to the installation guide If you are resizing or migrating an instance between hypervisors, you might encounter an SSH (Permission denied) error. Ensure that each node is configured with SSH key authentication so that the Compute service can use SSH to move disks to other nodes. .. note:: It is not necessary that all the compute nodes share the same key pair. However for the ease of the configuration, this document only utilizes a single key pair for communication between compute nodes. 
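If a suitable key pair does not already exist on the first node, one can be generated up front. The command below is only an illustrative sketch; the key type, size, and empty passphrase are assumptions, and any existing root key works just as well:

.. code-block:: console

   # ssh-keygen -t rsa -b 4096 -N '' -f /root/.ssh/id_rsa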
To share a key pair between compute nodes, complete the following steps: #. On the first node, obtain a key pair (public key and private key). Use the root key that is in the ``/root/.ssh/id_rsa`` and ``/root/.ssh/id_rsa.pub`` directories or generate a new key pair. #. Run :command:`setenforce 0` to put SELinux into permissive mode. #. Enable login abilities for the nova user: .. code-block:: console # usermod -s /bin/bash nova Ensure you can switch to the nova account: .. code-block:: console # su - nova #. As root, create the folder that is needed by SSH and place the private key that you obtained in step 1 into this folder, and add the pub key to the authorized_keys file: .. code-block:: console mkdir -p /var/lib/nova/.ssh cp /var/lib/nova/.ssh/id_rsa echo 'StrictHostKeyChecking no' >> /var/lib/nova/.ssh/config chmod 600 /var/lib/nova/.ssh/id_rsa /var/lib/nova/.ssh/authorized_keys echo >> /var/lib/nova/.ssh/authorized_keys #. Copy the whole folder created in step 4 to the rest of the nodes: .. code-block:: console # scp -r /var/lib/nova/.ssh remote-host:/var/lib/nova/ #. Ensure that the nova user can now log in to each node without using a password: .. code-block:: console # su - nova $ ssh *computeNodeAddress* $ exit #. As root on each node, restart both libvirt and the Compute services: .. code-block:: console # systemctl restart libvirtd.service # systemctl restart openstack-nova-compute.service ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/support-compute.rst0000664000175000017500000004472500000000000021657 0ustar00zuulzuul00000000000000==================== Troubleshoot Compute ==================== Common problems for Compute typically involve misconfigured networking or credentials that are not sourced properly in the environment. Also, most flat networking configurations do not enable :command:`ping` or :command:`ssh` from a compute node to the instances that run on that node. Another common problem is trying to run 32-bit images on a 64-bit compute node. This section shows you how to troubleshoot Compute. .. todo:: Move the sections below into sub-pages for readability. .. toctree:: :maxdepth: 1 troubleshooting/orphaned-allocations.rst troubleshooting/rebuild-placement-db.rst troubleshooting/affinity-policy-violated.rst Compute service logging ----------------------- Compute stores a log file for each service in ``/var/log/nova``. For example, ``nova-compute.log`` is the log for the ``nova-compute`` service. You can set the following options to format log strings for the ``nova.log`` module in the ``nova.conf`` file: * ``logging_context_format_string`` * ``logging_default_format_string`` If the log level is set to ``debug``, you can also specify ``logging_debug_format_suffix`` to append extra formatting. For information about what variables are available for the formatter, see `Formatter Objects `_. You have two logging options for OpenStack Compute based on configuration settings. In ``nova.conf``, include the ``logfile`` option to enable logging. Alternatively you can set ``use_syslog = 1`` so that the nova daemon logs to syslog. Guru Meditation reports ----------------------- A Guru Meditation report is sent by the Compute service upon receipt of the ``SIGUSR2`` signal (``SIGUSR1`` before Mitaka). This report is a general-purpose error report that includes details about the current state of the service. The error report is sent to ``stderr``. 
For example, if you redirect error output to ``nova-api-err.log`` using :command:`nova-api 2>/var/log/nova/nova-api-err.log`, resulting in the process ID 8675, you can then run: .. code-block:: console # kill -USR2 8675 This command triggers the Guru Meditation report to be printed to ``/var/log/nova/nova-api-err.log``. The report has the following sections: * Package: Displays information about the package to which the process belongs, including version information. * Threads: Displays stack traces and thread IDs for each of the threads within the process. * Green Threads: Displays stack traces for each of the green threads within the process (green threads do not have thread IDs). * Configuration: Lists all configuration options currently accessible through the CONF object for the current process. For more information, see :doc:`/reference/gmr`. .. _compute-common-errors-and-fixes: Common errors and fixes for Compute ----------------------------------- The `ask.openstack.org `_ site offers a place to ask and answer questions, and you can also mark questions as frequently asked questions. This section describes some errors people have posted previously. Bugs are constantly being fixed, so online resources are a great way to get the most up-to-date errors and fixes. Credential errors, 401, and 403 forbidden errors ------------------------------------------------ Problem ~~~~~~~ Missing credentials cause a ``403 forbidden`` error. Solution ~~~~~~~~ To resolve this issue, use one of these methods: #. Manual method Gets the ``novarc`` file from the project ZIP file, saves existing credentials in case of override, and manually sources the ``novarc`` file. #. Script method Generates ``novarc`` from the project ZIP file and sources it for you. When you run ``nova-api`` the first time, it generates the certificate authority information, including ``openssl.cnf``. If you start the CA services before this, you might not be able to create your ZIP file. Restart the services. When your CA information is available, create your ZIP file. Also, check your HTTP proxy settings to see whether they cause problems with ``novarc`` creation. Live migration permission issues -------------------------------- Problem ~~~~~~~ When live migrating an instance, you may see errors like the below: .. code-block:: shell libvirtError: operation failed: Failed to connect to remote libvirt URI qemu+ssh://stack@cld6b16/system: Cannot recv data: Host key verification failed.: Connection reset by peer Solution ~~~~~~~~ Ensure you have completed all the steps outlined in :doc:`/admin/ssh-configuration`. In particular, it's important to note that the ``libvirt`` process runs as ``root`` even though it may be connecting to a different user (``stack`` in the above example). You can ensure everything is correctly configured by attempting to connect to the remote host via the ``root`` user. Using the above example once again: .. code-block:: shell $ su - -c 'ssh stack@cld6b16' Instance errors --------------- Problem ~~~~~~~ Sometimes a particular instance shows ``pending`` or you cannot SSH to it. Sometimes the image itself is the problem. For example, when you use flat manager networking, you do not have a DHCP server and certain images do not support interface injection; you cannot connect to them. Solution ~~~~~~~~ To fix instance errors use an image that does support this method, such as Ubuntu, which obtains an IP address correctly with FlatManager network settings. 
To troubleshoot other possible problems with an instance, such as an instance that stays in a spawning state, check the directory for the particular instance under ``/var/lib/nova/instances`` on the ``nova-compute`` host and make sure that these files are present: * ``libvirt.xml`` * ``disk`` * ``disk-raw`` * ``kernel`` * ``ramdisk`` * ``console.log``, after the instance starts. If any files are missing, empty, or very small, the ``nova-compute`` service did not successfully download the images from the Image service. Also check ``nova-compute.log`` for exceptions. Sometimes they do not appear in the console output. Next, check the log file for the instance in the ``/var/log/libvirt/qemu`` directory to see if it exists and has any useful error messages in it. Finally, from the ``/var/lib/nova/instances`` directory for the instance, see if this command returns an error: .. code-block:: console # virsh create libvirt.xml Empty log output for Linux instances ------------------------------------ Problem ~~~~~~~ You can view the log output of running instances from either the :guilabel:`Log` tab of the dashboard or the output of :command:`nova console-log`. In some cases, the log output of a running Linux instance will be empty or only display a single character (for example, the `?` character). This occurs when the Compute service attempts to retrieve the log output of the instance via a serial console while the instance itself is not configured to send output to the console. Solution ~~~~~~~~ To rectify this, append the following parameters to kernel arguments specified in the instance's boot loader: .. code-block:: ini console=tty0 console=ttyS0,115200n8 Upon rebooting, the instance will be configured to send output to the Compute service. Reset the state of an instance ------------------------------ Problem ~~~~~~~ Instances can remain in an intermediate state, such as ``deleting``. Solution ~~~~~~~~ You can use the :command:`nova reset-state` command to manually reset the state of an instance to an error state. You can then delete the instance. For example: .. code-block:: console $ nova reset-state c6bbbf26-b40a-47e7-8d5c-eb17bf65c485 $ openstack server delete c6bbbf26-b40a-47e7-8d5c-eb17bf65c485 You can also use the ``--active`` parameter to force the instance back to an active state instead of an error state. For example: .. code-block:: console $ nova reset-state --active c6bbbf26-b40a-47e7-8d5c-eb17bf65c485 Injection problems ------------------ Problem ~~~~~~~ Instances may boot slowly, or do not boot. File injection can cause this problem. Solution ~~~~~~~~ To disable injection in libvirt, set the following in ``nova.conf``: .. code-block:: ini [libvirt] inject_partition = -2 .. note:: If you have not enabled the config drive and you want to make user-specified files available from the metadata server for to improve performance and avoid boot failure if injection fails, you must disable injection. Cannot find suitable emulator for x86_64 ---------------------------------------- Problem ~~~~~~~ When you attempt to create a VM, the error shows the VM is in the ``BUILD`` then ``ERROR`` state. Solution ~~~~~~~~ On the KVM host, run :command:`cat /proc/cpuinfo`. Make sure the ``vmx`` or ``svm`` flags are set. Follow the instructions in the :ref:`enable-kvm` section in the Nova Configuration Reference to enable hardware virtualization support in your BIOS. 
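One quick way to check for these flags is to count how many processors report them; if the command below prints ``0``, hardware virtualization is either not supported by the CPU or disabled in the BIOS:

.. code-block:: console

   # grep -E -c '(vmx|svm)' /proc/cpuinfo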
Failed to attach volume after detaching --------------------------------------- Problem ~~~~~~~ Failed to attach a volume after detaching the same volume. Solution ~~~~~~~~ You must change the device name on the :command:`nova-attach` command. The VM might not clean up after a :command:`nova-detach` command runs. This example shows how the :command:`nova-attach` command fails when you use the ``vdb``, ``vdc``, or ``vdd`` device names: .. code-block:: console # ls -al /dev/disk/by-path/ total 0 drwxr-xr-x 2 root root 200 2012-08-29 17:33 . drwxr-xr-x 5 root root 100 2012-08-29 17:33 .. lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0 -> ../../vda lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part1 -> ../../vda1 lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part2 -> ../../vda2 lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part5 -> ../../vda5 lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:06.0-virtio-pci-virtio2 -> ../../vdb lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:08.0-virtio-pci-virtio3 -> ../../vdc lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:09.0-virtio-pci-virtio4 -> ../../vdd lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:09.0-virtio-pci-virtio4-part1 -> ../../vdd1 You might also have this problem after attaching and detaching the same volume from the same VM with the same mount point multiple times. In this case, restart the KVM host. Failed to attach volume, systool is not installed ------------------------------------------------- Problem ~~~~~~~ This warning and error occurs if you do not have the required ``sysfsutils`` package installed on the compute node: .. code-block:: console WARNING nova.virt.libvirt.utils [req-1200f887-c82b-4e7c-a891-fac2e3735dbb\ admin admin|req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin] systool\ is not installed ERROR nova.compute.manager [req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin\ admin|req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin] [instance: df834b5a-8c3f-477a-be9b-47c97626555c|instance: df834b5a-8c3f-47\ 7a-be9b-47c97626555c] Failed to attach volume 13d5c633-903a-4764-a5a0-3336945b1db1 at /dev/vdk. Solution ~~~~~~~~ Install the ``sysfsutils`` package on the compute node. For example: .. code-block:: console # apt-get install sysfsutils Failed to connect volume in FC SAN ---------------------------------- Problem ~~~~~~~ The compute node failed to connect to a volume in a Fibre Channel (FC) SAN configuration. The WWN may not be zoned correctly in your FC SAN that links the compute host to the storage array: .. code-block:: console ERROR nova.compute.manager [req-2ddd5297-e405-44ab-aed3-152cd2cfb8c2 admin\ demo|req-2ddd5297-e405-44ab-aed3-152cd2cfb8c2 admin demo] [instance: 60ebd\ 6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3|instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3\ d5f3] Failed to connect to volume 6f6a6a9c-dfcf-4c8d-b1a8-4445ff883200 while\ attaching at /dev/vdjTRACE nova.compute.manager [instance: 60ebd6c7-c1e3-4\ bf0-8ef0-f07aa4c3d5f3|instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3] Traceback (most recent call last):...f07aa4c3d5f3\] ClientException: The\ server has either erred or is incapable of performing the requested\ operation.(HTTP 500)(Request-ID: req-71e5132b-21aa-46ee-b3cc-19b5b4ab2f00) Solution ~~~~~~~~ The network administrator must configure the FC SAN fabric by correctly zoning the WWN (port names) from your compute node HBAs. 
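To find the WWNs (port names) to hand to the SAN administrator, they can usually be read from sysfs on the compute node. This assumes a Linux host whose Fibre Channel HBAs are exposed through the ``fc_host`` class; the exact paths can vary by driver:

.. code-block:: console

   # cat /sys/class/fc_host/host*/port_name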
Multipath call failed exit -------------------------- Problem ~~~~~~~ Multipath call failed exit. This warning occurs in the Compute log if you do not have the optional ``multipath-tools`` package installed on the compute node. This is an optional package and the volume attachment does work without the multipath tools installed. If the ``multipath-tools`` package is installed on the compute node, it is used to perform the volume attachment. The IDs in your message are unique to your system. .. code-block:: console WARNING nova.storage.linuxscsi [req-cac861e3-8b29-4143-8f1b-705d0084e571 \ admin admin|req-cac861e3-8b29-4143-8f1b-705d0084e571 admin admin] \ Multipath call failed exit (96) Solution ~~~~~~~~ Install the ``multipath-tools`` package on the compute node. For example: .. code-block:: console # apt-get install multipath-tools Failed to Attach Volume, Missing sg_scan ---------------------------------------- Problem ~~~~~~~ Failed to attach volume to an instance, ``sg_scan`` file not found. This error occurs when the sg3-utils package is not installed on the compute node. The IDs in your message are unique to your system: .. code-block:: console ERROR nova.compute.manager [req-cf2679fd-dd9e-4909-807f-48fe9bda3642 admin admin|req-cf2679fd-dd9e-4909-807f-48fe9bda3642 admin admin] [instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5|instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5] Failed to attach volume 4cc104c4-ac92-4bd6-9b95-c6686746414a at /dev/vdcTRACE nova.compute.manager [instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5|instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5] Stdout: '/usr/local/bin/nova-rootwrap: Executable not found: /usr/bin/sg_scan' Solution ~~~~~~~~ Install the ``sg3-utils`` package on the compute node. For example: .. code-block:: console # apt-get install sg3-utils Requested microversions are ignored ----------------------------------- Problem ~~~~~~~ When making a request with a microversion beyond 2.1, for example: .. code-block:: console $ openstack --os-compute-api-version 2.15 server group create \ --policy soft-anti-affinity my-soft-anti-group It fails saying that "soft-anti-affinity" is not a valid policy, even thought it is allowed with the `2.15 microversion`_. .. _2.15 microversion: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id13 Solution ~~~~~~~~ Ensure the ``compute`` endpoint in the identity service catalog is pointing at ``/v2.1`` instead of ``/v2``. The former route supports microversions, while the latter route is considered the legacy v2.0 compatibility-mode route which renders all requests as if they were made on the legacy v2.0 API. .. _user_token_timeout: User token times out during long-running operations --------------------------------------------------- Problem ~~~~~~~ Long-running operations such as live migration or snapshot can sometimes overrun the expiry of the user token. In such cases, post operations such as cleaning up after a live migration can fail when the nova-compute service needs to cleanup resources in other services, such as in the block-storage (cinder) or networking (neutron) services. For example: .. 
code-block:: console 2018-12-17 13:47:29.591 16987 WARNING nova.virt.libvirt.migration [req-7bc758de-b2e4-461b-a971-f79be6cd4703 313d1247d7b845da9c731eec53e50a26 2f693c782fa748c2baece8db95b4ba5b - default default] [instance: ead8ecc3-f473-4672-a67b-c44534c6042d] Live migration not completed after 2400 sec 2018-12-17 13:47:30.097 16987 WARNING nova.virt.libvirt.driver [req-7bc758de-b2e4-461b-a971-f79be6cd4703 313d1247d7b845da9c731eec53e50a26 2f693c782fa748c2baece8db95b4ba5b - default default] [instance: ead8ecc3-f473-4672-a67b-c44534c6042d] Migration operation was cancelled 2018-12-17 13:47:30.299 16987 ERROR nova.virt.libvirt.driver [req-7bc758de-b2e4-461b-a971-f79be6cd4703 313d1247d7b845da9c731eec53e50a26 2f693c782fa748c2baece8db95b4ba5b - default default] [instance: ead8ecc3-f473-4672-a67b-c44534c6042d] Live Migration failure: operation aborted: migration job: canceled by client: libvirtError: operation aborted: migration job: canceled by client 2018-12-17 13:47:30.685 16987 INFO nova.compute.manager [req-7bc758de-b2e4-461b-a971-f79be6cd4703 313d1247d7b845da9c731eec53e50a26 2f693c782fa748c2baece8db95b4ba5b - default default] [instance: ead8ecc3-f473-4672-a67b-c44534c6042d] Swapping old allocation on 3e32d595-bd1f-4136-a7f4-c6703d2fbe18 held by migration 17bec61d-544d-47e0-a1c1-37f9d7385286 for instance 2018-12-17 13:47:32.450 16987 ERROR nova.volume.cinder [req-7bc758de-b2e4-461b-a971-f79be6cd4703 313d1247d7b845da9c731eec53e50a26 2f693c782fa748c2baece8db95b4ba5b - default default] Delete attachment failed for attachment 58997d5b-24f0-4073-819e-97916fb1ee19. Error: The request you have made requires authentication. (HTTP 401) Code: 401: Unauthorized: The request you have made requires authentication. (HTTP 401) Solution ~~~~~~~~ Configure nova to use service user tokens to supplement the regular user token used to initiate the operation. The identity service (keystone) will then authenticate a request using the service user token if the user token has already expired. To use, create a service user in the identity service similar as you would when creating the ``nova`` service user. Then configure the :oslo.config:group:`service_user` section of the nova configuration file, for example: .. code-block:: ini [service_user] send_service_user_token = True auth_type = password project_domain_name = Default project_name = service user_domain_name = Default password = secretservice username = nova auth_url = https://104.130.216.102/identity ... And configure the other identity options as necessary for the service user, much like you would configure nova to work with the image service (glance) or networking service. .. note:: Please note that the role of the :oslo.config:group:`service_user` you configure needs to be a superset of :oslo.config:option:`keystone_authtoken.service_token_roles` (The option :oslo.config:option:`keystone_authtoken.service_token_roles` is configured in cinder, glance and neutron). 
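A minimal sketch of creating such a service user with the ``openstack`` CLI is shown below. The names and password simply mirror the example configuration above; deployments that reuse the existing ``nova`` service user only need to make sure it holds a role matching :oslo.config:option:`keystone_authtoken.service_token_roles` (``service`` by default):

.. code-block:: console

   $ openstack user create --domain Default --password secretservice nova
   $ openstack role add --project service --user nova service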
././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2424707 nova-21.2.4/doc/source/admin/troubleshooting/0000775000175000017500000000000000000000000021152 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/troubleshooting/affinity-policy-violated.rst0000664000175000017500000000631200000000000026621 0ustar00zuulzuul00000000000000Affinity policy violated with parallel requests =============================================== Problem ------- Parallel server create requests for affinity or anti-affinity land on the same host and servers go to the ``ACTIVE`` state even though the affinity or anti-affinity policy was violated. Solution -------- There are two ways to avoid anti-/affinity policy violations among multiple server create requests. Create multiple servers as a single request ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Use the `multi-create API`_ with the ``min_count`` parameter set or the `multi-create CLI`_ with the ``--min`` option set to the desired number of servers. This works because when the batch of requests is visible to ``nova-scheduler`` at the same time as a group, it will be able to choose compute hosts that satisfy the anti-/affinity constraint and will send them to the same hosts or different hosts accordingly. .. _multi-create API: https://docs.openstack.org/api-ref/compute/#create-multiple-servers .. _multi-create CLI: https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/server.html#server-create Adjust Nova configuration settings ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ When requests are made separately and the scheduler cannot consider the batch of requests at the same time as a group, anti-/affinity races are handled by what is called the "late affinity check" in ``nova-compute``. Once a server lands on a compute host, if the request involves a server group, ``nova-compute`` contacts the API database (via ``nova-conductor``) to retrieve the server group and then it checks whether the affinity policy has been violated. If the policy has been violated, ``nova-compute`` initiates a reschedule of the server create request. Note that this means the deployment must have :oslo.config:option:`scheduler.max_attempts` set greater than ``1`` (default is ``3``) to handle races. An ideal configuration for multiple cells will minimize `upcalls`_ from the cells to the API database. This is how devstack, for example, is configured in the CI gate. The cell conductors do not set :oslo.config:option:`api_database.connection` and ``nova-compute`` sets :oslo.config:option:`workarounds.disable_group_policy_check_upcall` to ``True``. However, if a deployment needs to handle racing affinity requests, it needs to configure cell conductors to have access to the API database, for example: .. code-block:: ini [api_database] connection = mysql+pymysql://root:a@127.0.0.1/nova_api?charset=utf8 The deployment also needs to configure ``nova-compute`` services not to disable the group policy check upcall by either not setting (use the default) :oslo.config:option:`workarounds.disable_group_policy_check_upcall` or setting it to ``False``, for example: .. code-block:: ini [workarounds] disable_group_policy_check_upcall = False With these settings, anti-/affinity policy should not be violated even when parallel server create requests are racing. 
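For comparison, the single-request approach described earlier under "Create multiple servers as a single request" can be exercised with a command along the following lines. This is only an illustrative sketch; the flavor, image, network, server group UUID, and counts are placeholders:

.. code-block:: console

   $ openstack server create --flavor m1.small --image cirros \
       --network private --hint group=SERVER_GROUP_UUID \
       --min 3 --max 3 test-server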
Future work is needed to add anti-/affinity support to the placement service in order to eliminate the need for the late affinity check in ``nova-compute``. .. _upcalls: https://docs.openstack.org/nova/latest/user/cellsv2-layout.html#operations-requiring-upcalls ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/troubleshooting/orphaned-allocations.rst0000664000175000017500000003265600000000000026026 0ustar00zuulzuul00000000000000Orphaned resource allocations ============================= Problem ------- There are orphaned resource allocations in the placement service which can cause resource providers to: * Appear to the scheduler to be more utilized than they really are * Prevent deletion of compute services One scenario in which this could happen is a compute service host is having problems so the administrator forces it down and evacuates servers from it. Note that in this case "evacuates" refers to the server ``evacuate`` action, not live migrating all servers from the running compute service. Assume the compute host is down and fenced. In this case, the servers have allocations tracked in placement against both the down source compute node and their current destination compute host. For example, here is a server *vm1* which has been evacuated from node *devstack1* to node *devstack2*: .. code-block:: console $ openstack --os-compute-api-version 2.53 compute service list --service nova-compute +--------------------------------------+--------------+-----------+------+---------+-------+----------------------------+ | ID | Binary | Host | Zone | Status | State | Updated At | +--------------------------------------+--------------+-----------+------+---------+-------+----------------------------+ | e3c18c2d-9488-4863-b728-f3f292ec5da8 | nova-compute | devstack1 | nova | enabled | down | 2019-10-25T20:13:51.000000 | | 50a20add-cc49-46bd-af96-9bb4e9247398 | nova-compute | devstack2 | nova | enabled | up | 2019-10-25T20:13:52.000000 | | b92afb2e-cd00-4074-803e-fff9aa379c2f | nova-compute | devstack3 | nova | enabled | up | 2019-10-25T20:13:53.000000 | +--------------------------------------+--------------+-----------+------+---------+-------+----------------------------+ $ vm1=$(openstack server show vm1 -f value -c id) $ openstack server show $vm1 -f value -c OS-EXT-SRV-ATTR:host devstack2 The server now has allocations against both *devstack1* and *devstack2* resource providers in the placement service: .. 
code-block:: console $ devstack1=$(openstack resource provider list --name devstack1 -f value -c uuid) $ devstack2=$(openstack resource provider list --name devstack2 -f value -c uuid) $ openstack resource provider show --allocations $devstack1 +-------------+-----------------------------------------------------------------------------------------------------------+ | Field | Value | +-------------+-----------------------------------------------------------------------------------------------------------+ | uuid | 9546fce4-9fb5-4b35-b277-72ff125ad787 | | name | devstack1 | | generation | 6 | | allocations | {u'a1e6e0b2-9028-4166-b79b-c177ff70fbb7': {u'resources': {u'VCPU': 1, u'MEMORY_MB': 512, u'DISK_GB': 1}}} | +-------------+-----------------------------------------------------------------------------------------------------------+ $ openstack resource provider show --allocations $devstack2 +-------------+-----------------------------------------------------------------------------------------------------------+ | Field | Value | +-------------+-----------------------------------------------------------------------------------------------------------+ | uuid | 52d0182d-d466-4210-8f0d-29466bb54feb | | name | devstack2 | | generation | 3 | | allocations | {u'a1e6e0b2-9028-4166-b79b-c177ff70fbb7': {u'resources': {u'VCPU': 1, u'MEMORY_MB': 512, u'DISK_GB': 1}}} | +-------------+-----------------------------------------------------------------------------------------------------------+ $ openstack --os-placement-api-version 1.12 resource provider allocation show $vm1 +--------------------------------------+------------+------------------------------------------------+----------------------------------+----------------------------------+ | resource_provider | generation | resources | project_id | user_id | +--------------------------------------+------------+------------------------------------------------+----------------------------------+----------------------------------+ | 9546fce4-9fb5-4b35-b277-72ff125ad787 | 6 | {u'VCPU': 1, u'MEMORY_MB': 512, u'DISK_GB': 1} | 2f3bffc5db2b47deb40808a4ed2d7c7a | 2206168427c54d92ae2b2572bb0da9af | | 52d0182d-d466-4210-8f0d-29466bb54feb | 3 | {u'VCPU': 1, u'MEMORY_MB': 512, u'DISK_GB': 1} | 2f3bffc5db2b47deb40808a4ed2d7c7a | 2206168427c54d92ae2b2572bb0da9af | +--------------------------------------+------------+------------------------------------------------+----------------------------------+----------------------------------+ One way to find all servers that were evacuated from *devstack1* is: .. 
code-block:: console $ nova migration-list --source-compute devstack1 --migration-type evacuation +----+--------------------------------------+-------------+-----------+----------------+--------------+-------------+--------+--------------------------------------+------------+------------+----------------------------+----------------------------+------------+ | Id | UUID | Source Node | Dest Node | Source Compute | Dest Compute | Dest Host | Status | Instance UUID | Old Flavor | New Flavor | Created At | Updated At | Type | +----+--------------------------------------+-------------+-----------+----------------+--------------+-------------+--------+--------------------------------------+------------+------------+----------------------------+----------------------------+------------+ | 1 | 8a823ba3-e2e9-4f17-bac5-88ceea496b99 | devstack1 | devstack2 | devstack1 | devstack2 | 192.168.0.1 | done | a1e6e0b2-9028-4166-b79b-c177ff70fbb7 | None | None | 2019-10-25T17:46:35.000000 | 2019-10-25T17:46:37.000000 | evacuation | +----+--------------------------------------+-------------+-----------+----------------+--------------+-------------+--------+--------------------------------------+------------+------------+----------------------------+----------------------------+------------+ Trying to delete the resource provider for *devstack1* will fail while there are allocations against it: .. code-block:: console $ openstack resource provider delete $devstack1 Unable to delete resource provider 9546fce4-9fb5-4b35-b277-72ff125ad787: Resource provider has allocations. (HTTP 409) Solution -------- Using the example resources above, remove the allocation for server *vm1* from the *devstack1* resource provider. If you have `osc-placement `_ 1.8.0 or newer, you can use the :command:`openstack resource provider allocation unset` command to remove the allocations for consumer *vm1* from resource provider *devstack1*: .. code-block:: console $ openstack --os-placement-api-version 1.12 resource provider allocation \ unset --provider $devstack1 $vm1 +--------------------------------------+------------+------------------------------------------------+----------------------------------+----------------------------------+ | resource_provider | generation | resources | project_id | user_id | +--------------------------------------+------------+------------------------------------------------+----------------------------------+----------------------------------+ | 52d0182d-d466-4210-8f0d-29466bb54feb | 4 | {u'VCPU': 1, u'MEMORY_MB': 512, u'DISK_GB': 1} | 2f3bffc5db2b47deb40808a4ed2d7c7a | 2206168427c54d92ae2b2572bb0da9af | +--------------------------------------+------------+------------------------------------------------+----------------------------------+----------------------------------+ If you have *osc-placement* 1.7.x or older, the ``unset`` command is not available and you must instead overwrite the allocations. Note that we do not use :command:`openstack resource provider allocation delete` here because that will remove the allocations for the server from all resource providers, including *devstack2* where it is now running; instead, we use :command:`openstack resource provider allocation set` to overwrite the allocations and only retain the *devstack2* provider allocations. If you do remove all allocations for a given server, you can heal them later. See `Using heal_allocations`_ for details. .. 
code-block:: console $ openstack --os-placement-api-version 1.12 resource provider allocation set $vm1 \ --project-id 2f3bffc5db2b47deb40808a4ed2d7c7a \ --user-id 2206168427c54d92ae2b2572bb0da9af \ --allocation rp=52d0182d-d466-4210-8f0d-29466bb54feb,VCPU=1 \ --allocation rp=52d0182d-d466-4210-8f0d-29466bb54feb,MEMORY_MB=512 \ --allocation rp=52d0182d-d466-4210-8f0d-29466bb54feb,DISK_GB=1 +--------------------------------------+------------+------------------------------------------------+----------------------------------+----------------------------------+ | resource_provider | generation | resources | project_id | user_id | +--------------------------------------+------------+------------------------------------------------+----------------------------------+----------------------------------+ | 52d0182d-d466-4210-8f0d-29466bb54feb | 4 | {u'VCPU': 1, u'MEMORY_MB': 512, u'DISK_GB': 1} | 2f3bffc5db2b47deb40808a4ed2d7c7a | 2206168427c54d92ae2b2572bb0da9af | +--------------------------------------+------------+------------------------------------------------+----------------------------------+----------------------------------+ Once the *devstack1* resource provider allocations have been removed using either of the approaches above, the *devstack1* resource provider can be deleted: .. code-block:: console $ openstack resource provider delete $devstack1 And the related compute service if desired: .. code-block:: console $ openstack --os-compute-api-version 2.53 compute service delete e3c18c2d-9488-4863-b728-f3f292ec5da8 For more details on the resource provider commands used in this guide, refer to the `osc-placement plugin documentation`_. .. _osc-placement plugin documentation: https://docs.openstack.org/osc-placement/latest/ Using heal_allocations ~~~~~~~~~~~~~~~~~~~~~~ If you have a particularly troubling allocation consumer and just want to delete its allocations from all providers, you can use the :command:`openstack resource provider allocation delete` command and then heal the allocations for the consumer using the :ref:`heal_allocations command `. For example: .. code-block:: console $ openstack resource provider allocation delete $vm1 $ nova-manage placement heal_allocations --verbose --instance $vm1 Looking for instances in cell: 04879596-d893-401c-b2a6-3d3aa096089d(cell1) Found 1 candidate instances. Successfully created allocations for instance a1e6e0b2-9028-4166-b79b-c177ff70fbb7. Processed 1 instances. $ openstack resource provider allocation show $vm1 +--------------------------------------+------------+------------------------------------------------+ | resource_provider | generation | resources | +--------------------------------------+------------+------------------------------------------------+ | 52d0182d-d466-4210-8f0d-29466bb54feb | 5 | {u'VCPU': 1, u'MEMORY_MB': 512, u'DISK_GB': 1} | +--------------------------------------+------------+------------------------------------------------+ Note that deleting allocations and then relying on ``heal_allocations`` may not always be the best solution since healing allocations does not account for some things: * `Migration-based allocations`_ would be lost if manually deleted during a resize. These are allocations tracked by the migration resource record on the source compute service during a migration. * Healing allocations does not supported nested resource allocations before the 20.0.0 (Train) release. 
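If you want to check whether a migration record is still holding migration-based allocations before deleting anything, you can query placement using the migration UUID as the consumer, for example with the migration UUID from the earlier ``nova migration-list`` output:

.. code-block:: console

   $ openstack --os-placement-api-version 1.12 resource provider allocation show \
       8a823ba3-e2e9-4f17-bac5-88ceea496b99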
If you do use the ``heal_allocations`` command to cleanup allocations for a specific trouble instance, it is recommended to take note of what the allocations were before you remove them in case you need to reset them manually later. Use the :command:`openstack resource provider allocation show` command to get allocations for a consumer before deleting them, e.g.: .. code-block:: console $ openstack --os-placement-api-version 1.12 resource provider allocation show $vm1 .. _Migration-based allocations: https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/migration-allocations.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/troubleshooting/rebuild-placement-db.rst0000664000175000017500000000477500000000000025700 0ustar00zuulzuul00000000000000Rebuild placement DB ==================== Problem ------- You have somehow changed a nova cell database and the ``compute_nodes`` table entries are now reporting different uuids to the placement service but placement already has ``resource_providers`` table entries with the same names as those computes so the resource providers in placement and the compute nodes in the nova database are not synchronized. Maybe this happens as a result of restoring the nova cell database from a backup where the compute hosts have not changed but they are using different uuids. Nova reports compute node inventory to placement using the ``hypervisor_hostname`` and uuid of the ``compute_nodes`` table to the placement ``resource_providers`` table, which has a unique constraint on the name (hostname in this case) and uuid. Trying to create a new resource provider with a new uuid but the same name as an existing provider results in a 409 error from placement, such as in `bug 1817833`_. .. _bug 1817833: https://bugs.launchpad.net/nova/+bug/1817833 Solution -------- .. warning:: This is likely a last resort when *all* computes and resource providers are not synchronized and it is simpler to just rebuild the placement database from the current state of nova. This may, however, not work when using placement for more advanced features such as :neutron-doc:`ports with minimum bandwidth guarantees ` or `accelerators `_. Obviously testing first in a pre-production environment is ideal. These are the steps at a high level: #. Make a backup of the existing placement database in case these steps fail and you need to start over. #. Recreate the placement database and run the schema migrations to initialize the placement database. #. Either restart or wait for the :oslo.config:option:`update_resources_interval` on the ``nova-compute`` services to report resource providers and their inventory to placement. #. Run the :ref:`nova-manage placement heal_allocations ` command to report allocations to placement for the existing instances in nova. #. Run the :ref:`nova-manage placement sync_aggregates ` command to synchronize nova host aggregates to placement resource provider aggregates. Once complete, test your deployment as usual, e.g. running Tempest integration and/or Rally tests, creating, migrating and deleting a server, etc. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/vendordata.rst0000664000175000017500000001621600000000000020612 0ustar00zuulzuul00000000000000========== Vendordata ========== .. note:: This section provides deployment information about the vendordata feature. 
For end-user information about the vendordata feature and instance metadata in general, refer to the :doc:`user guide `. The *vendordata* feature provides a way to pass vendor or deployment-specific information to instances. This can be accessed by users using :doc:`the metadata service ` or with :doc:`config drives `. There are two vendordata modules provided with nova: ``StaticJSON`` and ``DynamicJSON``. ``StaticJSON`` -------------- The ``StaticJSON`` module includes the contents of a static JSON file loaded from disk. This can be used for things which don't change between instances, such as the location of the corporate puppet server. It is the default provider. Configuration ~~~~~~~~~~~~~ The service you must configure to enable the ``StaticJSON`` vendordata module depends on how guests are accessing vendordata. If using the metadata service, configuration applies to either :program:`nova-api` or :program:`nova-api-metadata`, depending on the deployment, while if using config drives, configuration applies to :program:`nova-compute`. However, configuration is otherwise the same and the following options apply: - :oslo.config:option:`api.vendordata_providers` - :oslo.config:option:`api.vendordata_jsonfile_path` Refer to the :doc:`metadata service ` and :doc:`config drive ` documentation for more information on how to configure the required services. ``DynamicJSON`` --------------- The ``DynamicJSON`` module can make a request to an external REST service to determine what metadata to add to an instance. This is how we recommend you generate things like Active Directory tokens which change per instance. When used, the ``DynamicJSON`` module will make a request to any REST services listed in the :oslo.config:option:`api.vendordata_dynamic_targets` configuration option. There can be more than one of these but note that they will be queried once per metadata request from the instance which can mean a lot of traffic depending on your configuration and the configuration of the instance. The following data is passed to your REST service as a JSON encoded POST: .. list-table:: :header-rows: 1 * - Key - Description * - ``project-id`` - The ID of the project that owns this instance. * - ``instance-id`` - The UUID of this instance. * - ``image-id`` - The ID of the image used to boot this instance. * - ``user-data`` - As specified by the user at boot time. * - ``hostname`` - The hostname of the instance. * - ``metadata`` - As specified by the user at boot time. Metadata fetched from the REST service will appear in the metadata service at a new file called ``vendordata2.json``, with a path (either in the metadata service URL or in the config drive) like this:: openstack/latest/vendor_data2.json For each dynamic target, there will be an entry in the JSON file named after that target. For example: .. code-block:: json { "testing": { "value1": 1, "value2": 2, "value3": "three" } } The `novajoin`__ project provides a dynamic vendordata service to manage host instantiation in an IPA server. __ https://opendev.org/x/novajoin Deployment considerations ~~~~~~~~~~~~~~~~~~~~~~~~~ Nova provides authentication to external metadata services in order to provide some level of certainty that the request came from nova. This is done by providing a service token with the request -- you can then just deploy your metadata service with the keystone authentication WSGI middleware. This is configured using the keystone authentication parameters in the :oslo.config:group:`vendordata_dynamic_auth` configuration group. 
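As a rough illustration of how these pieces fit together, a minimal configuration might look like the following; the target name, URL and credentials are placeholders, and the individual options are described in the Configuration section below:

.. code-block:: ini

   [api]
   vendordata_providers = StaticJSON,DynamicJSON
   vendordata_dynamic_targets = testing@http://127.0.0.1:9000/vendordata

   [vendordata_dynamic_auth]
   auth_type = password
   auth_url = http://127.0.0.1/identity
   username = nova
   password = secretservice
   project_name = service
   user_domain_name = Default
   project_domain_name = Default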
Configuration ~~~~~~~~~~~~~ As with ``StaticJSON``, the service you must configure to enable the ``DynamicJSON`` vendordata module depends on how guests are accessing vendordata. If using the metadata service, configuration applies to either :program:`nova-api` or :program:`nova-api-metadata`, depending on the deployment, while if using config drives, configuration applies to :program:`nova-compute`. However, configuration is otherwise the same and the following options apply: - :oslo.config:option:`api.vendordata_providers` - :oslo.config:option:`api.vendordata_dynamic_ssl_certfile` - :oslo.config:option:`api.vendordata_dynamic_connect_timeout` - :oslo.config:option:`api.vendordata_dynamic_read_timeout` - :oslo.config:option:`api.vendordata_dynamic_failure_fatal` - :oslo.config:option:`api.vendordata_dynamic_targets` Refer to the :doc:`metadata service ` and :doc:`config drive ` documentation for more information on how to configure the required services. In addition, there are also many options related to authentication. These are provided by :keystone-doc:`keystone <>` but are listed below for completeness: - :oslo.config:option:`vendordata_dynamic_auth.cafile` - :oslo.config:option:`vendordata_dynamic_auth.certfile` - :oslo.config:option:`vendordata_dynamic_auth.keyfile` - :oslo.config:option:`vendordata_dynamic_auth.insecure` - :oslo.config:option:`vendordata_dynamic_auth.timeout` - :oslo.config:option:`vendordata_dynamic_auth.collect_timing` - :oslo.config:option:`vendordata_dynamic_auth.split_loggers` - :oslo.config:option:`vendordata_dynamic_auth.auth_type` - :oslo.config:option:`vendordata_dynamic_auth.auth_section` - :oslo.config:option:`vendordata_dynamic_auth.auth_url` - :oslo.config:option:`vendordata_dynamic_auth.system_scope` - :oslo.config:option:`vendordata_dynamic_auth.domain_id` - :oslo.config:option:`vendordata_dynamic_auth.domain_name` - :oslo.config:option:`vendordata_dynamic_auth.project_id` - :oslo.config:option:`vendordata_dynamic_auth.project_name` - :oslo.config:option:`vendordata_dynamic_auth.project_domain_id` - :oslo.config:option:`vendordata_dynamic_auth.project_domain_name` - :oslo.config:option:`vendordata_dynamic_auth.trust_id` - :oslo.config:option:`vendordata_dynamic_auth.default_domain_id` - :oslo.config:option:`vendordata_dynamic_auth.default_domain_name` - :oslo.config:option:`vendordata_dynamic_auth.user_id` - :oslo.config:option:`vendordata_dynamic_auth.username` - :oslo.config:option:`vendordata_dynamic_auth.user_domain_id` - :oslo.config:option:`vendordata_dynamic_auth.user_domain_name` - :oslo.config:option:`vendordata_dynamic_auth.password` - :oslo.config:option:`vendordata_dynamic_auth.tenant_id` - :oslo.config:option:`vendordata_dynamic_auth.tenant_name` Refer to the :keystone-doc:`keystone documentation ` for information on configuring these. References ---------- * Michael Still's talk from the Queens summit in Sydney, `Metadata, User Data, Vendor Data, oh my!`__ * Michael's blog post on `deploying a simple vendordata service`__ which provides more details and sample code to supplement the documentation above. 
__ https://www.openstack.org/videos/sydney-2017/metadata-user-data-vendor-data-oh-my __ https://www.madebymikal.com/nova-vendordata-deployment-an-excessively-detailed-guide/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/admin/virtual-gpu.rst0000664000175000017500000004476200000000000020751 0ustar00zuulzuul00000000000000======================================= Attaching virtual GPU devices to guests ======================================= The virtual GPU feature in Nova allows a deployment to provide specific GPU types for instances using physical GPUs that can provide virtual devices. For example, a single `Intel GVT-g`_ or a `NVIDIA GRID vGPU`_ physical Graphics Processing Unit (pGPU) can be virtualized as multiple virtual Graphics Processing Units (vGPUs) if the hypervisor supports the hardware driver and has the capability to create guests using those virtual devices. This feature is highly dependent on the hypervisor, its version and the physical devices present on the host. In addition, the vendor's vGPU driver software must be installed and configured on the host at the same time. Hypervisor-specific caveats are mentioned in the `Caveats`_ section. To enable virtual GPUs, follow the steps below: #. `Enable GPU types (Compute)`_ #. `Configure a flavor (Controller)`_ Enable GPU types (Compute) -------------------------- #. Specify which specific GPU type(s) the instances would get. Edit :oslo.config:option:`devices.enabled_vgpu_types`: .. code-block:: ini [devices] enabled_vgpu_types = nvidia-35 If you want to support more than a single GPU type, you need to provide a separate configuration section for each device. For example: .. code-block:: ini [devices] enabled_vgpu_types = nvidia-35, nvidia-36 [vgpu_nvidia-35] device_addresses = 0000:84:00.0,0000:85:00.0 [vgpu_nvidia-36] device_addresses = 0000:86:00.0 where you have to define which physical GPUs are supported per GPU type. If the same PCI address is provided for two different types, nova-compute will refuse to start and issue a specific error in the logs. To know which specific type(s) to mention, please refer to `How to discover a GPU type`_. .. versionchanged:: 21.0.0 Supporting multiple GPU types is only supported by the Ussuri release and later versions. #. Restart the ``nova-compute`` service. .. warning:: Changing the type is possible but since existing physical GPUs can't address multiple guests having different types, that will make Nova return you a NoValidHost if existing instances with the original type still exist. Accordingly, it's highly recommended to instead deploy the new type to new compute nodes that don't already have workloads and rebuild instances on the nodes that need to change types. Configure a flavor (Controller) ------------------------------- Configure a flavor to request one virtual GPU: .. code-block:: console $ openstack flavor set vgpu_1 --property "resources:VGPU=1" .. note:: As of the Queens release, all hypervisors that support virtual GPUs only accept a single virtual GPU per instance. The enabled vGPU types on the compute hosts are not exposed to API users. Flavors configured for vGPU support can be tied to host aggregates as a means to properly schedule those flavors onto the compute hosts that support them. See :doc:`/admin/aggregates` for more information. 
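One possible way to tie such a flavor to the hosts that actually have the GPUs is a host aggregate. The following is only a sketch: it assumes the ``AggregateInstanceExtraSpecsFilter`` scheduler filter is enabled, and the aggregate and host names are placeholders:

.. code-block:: console

   $ openstack aggregate create --property vgpu=true vgpu-hosts
   $ openstack aggregate add host vgpu-hosts compute-vgpu-1
   $ openstack flavor set vgpu_1 --property aggregate_instance_extra_specs:vgpu=true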
Create instances with virtual GPU devices ----------------------------------------- The ``nova-scheduler`` selects a destination host that has vGPU devices available by calling the Placement API for a specific VGPU resource class provided by compute nodes. .. code-block:: console $ openstack server create --flavor vgpu_1 --image cirros-0.3.5-x86_64-uec --wait test-vgpu .. note:: As of the Queens release, only the *FilterScheduler* scheduler driver uses the Placement API. How to discover a GPU type -------------------------- Depending on your hypervisor: - For libvirt, virtual GPUs are seen as mediated devices. Physical PCI devices (the graphic card here) supporting virtual GPUs propose mediated device (mdev) types. Since mediated devices are supported by the Linux kernel through sysfs files after installing the vendor's virtual GPUs driver software, you can see the required properties as follows: .. code-block:: console $ ls /sys/class/mdev_bus/*/mdev_supported_types /sys/class/mdev_bus/0000:84:00.0/mdev_supported_types: nvidia-35 nvidia-36 nvidia-37 nvidia-38 nvidia-39 nvidia-40 nvidia-41 nvidia-42 nvidia-43 nvidia-44 nvidia-45 /sys/class/mdev_bus/0000:85:00.0/mdev_supported_types: nvidia-35 nvidia-36 nvidia-37 nvidia-38 nvidia-39 nvidia-40 nvidia-41 nvidia-42 nvidia-43 nvidia-44 nvidia-45 /sys/class/mdev_bus/0000:86:00.0/mdev_supported_types: nvidia-35 nvidia-36 nvidia-37 nvidia-38 nvidia-39 nvidia-40 nvidia-41 nvidia-42 nvidia-43 nvidia-44 nvidia-45 /sys/class/mdev_bus/0000:87:00.0/mdev_supported_types: nvidia-35 nvidia-36 nvidia-37 nvidia-38 nvidia-39 nvidia-40 nvidia-41 nvidia-42 nvidia-43 nvidia-44 nvidia-45 - For XenServer, virtual GPU types are created by XenServer at startup depending on the available hardware and config files present in dom0. You can run the command of ``xe vgpu-type-list`` from dom0 to get the available vGPU types. The value for the field of ``model-name ( RO):`` is the vGPU type's name which can be used to set the nova config option ``[devices]/enabled_vgpu_types``. See the following example: .. code-block:: console [root@trailblazer-2 ~]# xe vgpu-type-list uuid ( RO) : 78d2d963-41d6-4130-8842-aedbc559709f vendor-name ( RO): NVIDIA Corporation model-name ( RO): GRID M60-8Q max-heads ( RO): 4 max-resolution ( RO): 4096x2160 uuid ( RO) : a1bb1692-8ce3-4577-a611-6b4b8f35a5c9 vendor-name ( RO): NVIDIA Corporation model-name ( RO): GRID M60-0Q max-heads ( RO): 2 max-resolution ( RO): 2560x1600 uuid ( RO) : 69d03200-49eb-4002-b661-824aec4fd26f vendor-name ( RO): NVIDIA Corporation model-name ( RO): GRID M60-2A max-heads ( RO): 1 max-resolution ( RO): 1280x1024 uuid ( RO) : c58b1007-8b47-4336-95aa-981a5634d03d vendor-name ( RO): NVIDIA Corporation model-name ( RO): GRID M60-4Q max-heads ( RO): 4 max-resolution ( RO): 4096x2160 uuid ( RO) : 292a2b20-887f-4a13-b310-98a75c53b61f vendor-name ( RO): NVIDIA Corporation model-name ( RO): GRID M60-2Q max-heads ( RO): 4 max-resolution ( RO): 4096x2160 uuid ( RO) : d377db6b-a068-4a98-92a8-f94bd8d6cc5d vendor-name ( RO): NVIDIA Corporation model-name ( RO): GRID M60-0B max-heads ( RO): 2 max-resolution ( RO): 2560x1600 ... Checking allocations and inventories for virtual GPUs ----------------------------------------------------- .. note:: The information below is only valid from the 19.0.0 Stein release and only for the libvirt driver. Before this release or when using the Xen driver, inventories and allocations related to a ``VGPU`` resource class are still on the root resource provider related to the compute node. 
If upgrading from Rocky and using the libvirt driver, ``VGPU`` inventory and allocations are moved to child resource providers that represent actual physical GPUs. The examples you will see are using the `osc-placement plugin`_ for OpenStackClient. For details on specific commands, see its documentation. #. Get the list of resource providers .. code-block:: console $ openstack resource provider list +--------------------------------------+---------------------------------------------------------+------------+ | uuid | name | generation | +--------------------------------------+---------------------------------------------------------+------------+ | 5958a366-3cad-416a-a2c9-cfbb5a472287 | virtlab606.xxxxxxxxxxxxxxxxxxxxxxxxxxx | 7 | | fc9b9287-ef5e-4408-aced-d5577560160c | virtlab606.xxxxxxxxxxxxxxxxxxxxxxxxxxx_pci_0000_86_00_0 | 2 | | e2f8607b-0683-4141-a8af-f5e20682e28c | virtlab606.xxxxxxxxxxxxxxxxxxxxxxxxxxx_pci_0000_85_00_0 | 3 | | 85dd4837-76f9-41f2-9f19-df386017d8a0 | virtlab606.xxxxxxxxxxxxxxxxxxxxxxxxxxx_pci_0000_87_00_0 | 2 | | 7033d860-8d8a-4963-8555-0aa902a08653 | virtlab606.xxxxxxxxxxxxxxxxxxxxxxxxxxx_pci_0000_84_00_0 | 2 | +--------------------------------------+---------------------------------------------------------+------------+ In this example, we see the root resource provider ``5958a366-3cad-416a-a2c9-cfbb5a472287`` with four other resource providers that are its children and where each of them corresponds to a single physical GPU. #. Check the inventory of each resource provider to see resource classes .. code-block:: console $ openstack resource provider inventory list 5958a366-3cad-416a-a2c9-cfbb5a472287 +----------------+------------------+----------+----------+-----------+----------+-------+ | resource_class | allocation_ratio | max_unit | reserved | step_size | min_unit | total | +----------------+------------------+----------+----------+-----------+----------+-------+ | VCPU | 16.0 | 48 | 0 | 1 | 1 | 48 | | MEMORY_MB | 1.5 | 65442 | 512 | 1 | 1 | 65442 | | DISK_GB | 1.0 | 49 | 0 | 1 | 1 | 49 | +----------------+------------------+----------+----------+-----------+----------+-------+ $ openstack resource provider inventory list e2f8607b-0683-4141-a8af-f5e20682e28c +----------------+------------------+----------+----------+-----------+----------+-------+ | resource_class | allocation_ratio | max_unit | reserved | step_size | min_unit | total | +----------------+------------------+----------+----------+-----------+----------+-------+ | VGPU | 1.0 | 16 | 0 | 1 | 1 | 16 | +----------------+------------------+----------+----------+-----------+----------+-------+ Here you can see a ``VGPU`` inventory on the child resource provider while other resource class inventories are still located on the root resource provider. #. Check allocations for each server that is using virtual GPUs .. 
code-block:: console $ openstack server list +--------------------------------------+-------+--------+---------------------------------------------------------+--------------------------+--------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+-------+--------+---------------------------------------------------------+--------------------------+--------+ | 5294f726-33d5-472a-bef1-9e19bb41626d | vgpu2 | ACTIVE | private=10.0.0.14, fd45:cdad:c431:0:f816:3eff:fe78:a748 | cirros-0.4.0-x86_64-disk | vgpu | | a6811fc2-cec8-4f1d-baea-e2c6339a9697 | vgpu1 | ACTIVE | private=10.0.0.34, fd45:cdad:c431:0:f816:3eff:fe54:cc8f | cirros-0.4.0-x86_64-disk | vgpu | +--------------------------------------+-------+--------+---------------------------------------------------------+--------------------------+--------+ $ openstack resource provider allocation show 5294f726-33d5-472a-bef1-9e19bb41626d +--------------------------------------+------------+------------------------------------------------+ | resource_provider | generation | resources | +--------------------------------------+------------+------------------------------------------------+ | 5958a366-3cad-416a-a2c9-cfbb5a472287 | 8 | {u'VCPU': 1, u'MEMORY_MB': 512, u'DISK_GB': 1} | | 7033d860-8d8a-4963-8555-0aa902a08653 | 3 | {u'VGPU': 1} | +--------------------------------------+------------+------------------------------------------------+ $ openstack resource provider allocation show a6811fc2-cec8-4f1d-baea-e2c6339a9697 +--------------------------------------+------------+------------------------------------------------+ | resource_provider | generation | resources | +--------------------------------------+------------+------------------------------------------------+ | e2f8607b-0683-4141-a8af-f5e20682e28c | 3 | {u'VGPU': 1} | | 5958a366-3cad-416a-a2c9-cfbb5a472287 | 8 | {u'VCPU': 1, u'MEMORY_MB': 512, u'DISK_GB': 1} | +--------------------------------------+------------+------------------------------------------------+ In this example, two servers were created using a flavor asking for 1 ``VGPU``, so when looking at the allocations for each consumer UUID (which is the server UUID), you can see that VGPU allocation is against the child resource provider while other allocations are for the root resource provider. Here, that means that the virtual GPU used by ``a6811fc2-cec8-4f1d-baea-e2c6339a9697`` is actually provided by the physical GPU having the PCI ID ``0000:85:00.0``. (Optional) Provide custom traits for multiple GPU types ------------------------------------------------------- Since operators want to support different GPU types per compute, it would be nice to have flavors asking for a specific GPU type. This is now possible using custom traits by decorating child Resource Providers that correspond to physical GPUs. .. note:: Possible improvements in a future release could consist of providing automatic tagging of Resource Providers with standard traits corresponding to versioned mapping of public GPU types. For the moment, this has to be done manually. #. Get the list of resource providers See `Checking allocations and inventories for virtual GPUs`_ first for getting the list of Resource Providers that support a ``VGPU`` resource class. #. Define custom traits that will correspond for each to a GPU type .. code-block:: console $ openstack --os-placement-api-version 1.6 trait create CUSTOM_NVIDIA_11 In this example, we ask to create a custom trait named ``CUSTOM_NVIDIA_11``. #. 
Add the corresponding trait to the Resource Provider matching the GPU

   .. code-block:: console

      $ openstack --os-placement-api-version 1.6 resource provider trait set \
          --trait CUSTOM_NVIDIA_11 e2f8607b-0683-4141-a8af-f5e20682e28c

   In this case, the trait ``CUSTOM_NVIDIA_11`` will be added to the Resource Provider with the UUID ``e2f8607b-0683-4141-a8af-f5e20682e28c`` that corresponds to the PCI address ``0000:85:00.0`` as shown above.

#. Amend the flavor to add a requested trait

   .. code-block:: console

      $ openstack flavor set --property trait:CUSTOM_NVIDIA_11=required vgpu_1

   In this example, we add the ``CUSTOM_NVIDIA_11`` trait as required information for the ``vgpu_1`` flavor we created earlier. This will allow the Placement service to only return the Resource Providers matching this trait, so only the GPUs that were decorated with it will be checked for this flavor.

Caveats
-------

.. note::

   This information is correct as of the 17.0.0 Queens release. Where improvements have been made or issues fixed, they are noted per item.

For libvirt:

* Suspending a guest that has vGPUs doesn't yet work because of a libvirt limitation (it can't hot-unplug mediated devices from a guest). Workarounds using other instance actions (like snapshotting the instance or shelving it) are recommended until libvirt gains mdev hot-unplug support. If a user attempts to suspend the instance, the libvirt driver will raise an exception that will cause the instance to be set back to ACTIVE. The ``suspend`` action in the ``os-instance-actions`` API will have an *Error* state.

* Resizing an instance with a new flavor that has vGPU resources doesn't allocate those vGPUs to the instance (the instance is created without vGPU resources). The proposed workaround is to rebuild the instance after resizing it. The rebuild operation allocates vGPUs to the instance.

  .. versionchanged:: 21.0.0

     This has been resolved in the Ussuri release. See `bug 1778563`_.

* Cold migrating an instance to another host will have the same problem as resize. If you want to migrate an instance, make sure to rebuild it after the migration.

  .. versionchanged:: 21.0.0

     This has been resolved in the Ussuri release. See `bug 1778563`_.

* Rescue images do not use vGPUs. An instance being rescued does not keep its vGPUs during rescue. During that time, another instance can receive those vGPUs. This is a known issue. The recommended workaround is to rebuild an instance immediately after rescue. However, rebuilding the rescued instance only helps if there are other free vGPUs on the host.

  .. versionchanged:: 18.0.0

     This has been resolved in the Rocky release. See `bug 1762688`_.

For XenServer:

* Suspend and live migration with vGPUs attached depends on support from the underlying XenServer version. Please see XenServer release notes for up-to-date information on when a hypervisor supporting live migration and suspend/resume with vGPUs is available. If a suspend or live migrate operation is attempted with a XenServer version that does not support that operation, an internal exception will occur that will cause nova to set the instance to ERROR status. You can use the ``openstack server set --state active`` command on the instance to set it back to ACTIVE.

* Resizing an instance with a new flavor that has vGPU resources doesn't allocate those vGPUs to the instance (the instance is created without vGPU resources). The proposed workaround is to rebuild the instance after resizing it. The rebuild operation allocates vGPUs to the instance.
* Cold migrating an instance to another host will have the same problem as resize. If you want to migrate an instance, make sure to rebuild it after the migration. * Multiple GPU types per compute is not supported by the XenServer driver. .. _bug 1778563: https://bugs.launchpad.net/nova/+bug/1778563 .. _bug 1762688: https://bugs.launchpad.net/nova/+bug/1762688 .. Links .. _Intel GVT-g: https://01.org/igvt-g .. _NVIDIA GRID vGPU: http://docs.nvidia.com/grid/5.0/pdf/grid-vgpu-user-guide.pdf .. _osc-placement plugin: https://docs.openstack.org/osc-placement/latest/index.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/virtual-persistent-memory.rst0000664000175000017500000002663700000000000023665 0ustar00zuulzuul00000000000000============================================= Attaching virtual persistent memory to guests ============================================= .. versionadded:: 20.0.0 (Train) Starting in the 20.0.0 (Train) release, the virtual persistent memory (vPMEM) feature in Nova allows a deployment using the libvirt compute driver to provide vPMEMs for instances using physical persistent memory (PMEM) that can provide virtual devices. PMEM must be partitioned into `PMEM namespaces`_ for applications to use. This vPMEM feature only uses PMEM namespaces in ``devdax`` mode as QEMU `vPMEM backends`_. If you want to dive into related notions, the document `NVDIMM Linux kernel document`_ is recommended. To enable vPMEMs, follow the steps below. Dependencies ------------ The following are required to support the vPMEM feature: * Persistent Memory Hardware One such product is Intel® Optane™ DC Persistent Memory. `ipmctl`_ is used to configure it. * Linux Kernel version >= 4.18 with the following modules loaded: ``dax_pmem``, ``nd_pmem``, ``device_dax``, ``nd_btt`` .. note:: NVDIMM support is present in the Linux Kernel v4.0 or newer. It is recommended to use Kernel version 4.2 or later since `NVDIMM support `_ is enabled by default. We met some bugs in older versions, and we have done all verification works with OpenStack on 4.18 version, so 4.18 version and newer will probably guarantee its functionality. * QEMU version >= 3.1.0 * Libvirt version >= 5.0.0 * `ndctl`_ version >= 62 * daxio version >= 1.6 The vPMEM feature has been verified under the software and hardware listed above. Configure PMEM namespaces (Compute) ----------------------------------- #. Create PMEM namespaces as `vPMEM backends`_ using the `ndctl`_ utility. For example, to create a 30GiB namespace named ``ns3``: .. code-block:: console $ sudo ndctl create-namespace -s 30G -m devdax -M mem -n ns3 { "dev":"namespace1.0", "mode":"devdax", "map":"mem", "size":"30.00 GiB (32.21 GB)", "uuid":"937e9269-512b-4f65-9ac6-b74b61075c11", "raw_uuid":"17760832-a062-4aef-9d3b-95ea32038066", "daxregion":{ "id":1, "size":"30.00 GiB (32.21 GB)", "align":2097152, "devices":[ { "chardev":"dax1.0", "size":"30.00 GiB (32.21 GB)" } ] }, "name":"ns3", "numa_node":1 } Then list the available PMEM namespaces on the host: .. code-block:: console $ ndctl list -X [ { ... "size":6440353792, ... "name":"ns0", ... }, { ... "size":6440353792, ... "name":"ns1", ... }, { ... "size":6440353792, ... "name":"ns2", ... }, { ... "size":32210157568, ... "name":"ns3", ... } ] #. Specify which PMEM namespaces should be available to instances. Edit :oslo.config:option:`libvirt.pmem_namespaces`: .. 
code-block:: ini [libvirt] # pmem_namespaces=$LABEL:$NSNAME[|$NSNAME][,$LABEL:$NSNAME[|$NSNAME]] pmem_namespaces = 6GB:ns0|ns1|ns2,LARGE:ns3 Configured PMEM namespaces must have already been created on the host as described above. The conf syntax allows the admin to associate one or more namespace ``$NSNAME``\ s with an arbitrary ``$LABEL`` that can subsequently be used in a flavor to request one of those namespaces. It is recommended, but not required, for namespaces under a single ``$LABEL`` to be the same size. #. Restart the ``nova-compute`` service. Nova will invoke `ndctl`_ to identify the configured PMEM namespaces, and report vPMEM resources to placement. Configure a flavor ------------------ Specify a comma-separated list of the ``$LABEL``\ s from :oslo.config:option:`libvirt.pmem_namespaces` to the flavor's ``hw:pmem`` property. Note that multiple instances of the same label are permitted: .. code-block:: console $ openstack flavor set --property hw:pmem='6GB' my_flavor $ openstack flavor set --property hw:pmem='6GB,LARGE' my_flavor_large $ openstack flavor set --property hw:pmem='6GB,6GB' m1.medium .. note:: If a NUMA topology is specified, all vPMEM devices will be put on guest NUMA node 0; otherwise nova will generate one NUMA node automatically for the guest. Based on the above examples, an ``openstack server create`` request with ``my_flavor_large`` will spawn an instance with two vPMEMs. One, corresponding to the ``LARGE`` label, will be ``ns3``; the other, corresponding to the ``6G`` label, will be arbitrarily chosen from ``ns0``, ``ns1``, or ``ns2``. .. note:: Using vPMEM inside a virtual machine requires the following: * Guest kernel version 4.18 or higher; * The ``dax_pmem``, ``nd_pmem``, ``device_dax``, and ``nd_btt`` kernel modules; * The `ndctl`_ utility. .. note:: When resizing an instance with vPMEMs, the vPMEM data won't be migrated. Verify inventories and allocations ---------------------------------- This section describes how to check that: * vPMEM inventories were created correctly in placement, validating the `configuration described above <#configure-pmem-namespaces-compute>`_. * allocations were created correctly in placement for instances spawned from `flavors configured with vPMEMs <#configure-a-flavor>`_. .. note:: Inventories and allocations related to vPMEM resource classes are on the root resource provider related to the compute node. #. Get the list of resource providers .. code-block:: console $ openstack resource provider list +--------------------------------------+--------+------------+ | uuid | name | generation | +--------------------------------------+--------+------------+ | 1bc545f9-891f-4930-ab2b-88a56078f4be | host-1 | 47 | | 7d994aef-680d-43d4-9325-a67c807e648e | host-2 | 67 | --------------------------------------+---------+------------+ #. Check the inventory of each resource provider to see resource classes Each ``$LABEL`` configured in :oslo.config:option:`libvirt.pmem_namespaces` is used to generate a resource class named ``CUSTOM_PMEM_NAMESPACE_$LABEL``. Nova will report to Placement the number of vPMEM namespaces configured for each ``$LABEL``. For example, assuming ``host-1`` was configured as described above: .. 
code-block:: console $ openstack resource provider inventory list 1bc545f9-891f-4930-ab2b-88a56078f4be +-----------------------------+------------------+----------+----------+-----------+----------+--------+ | resource_class | allocation_ratio | max_unit | reserved | step_size | min_unit | total | +-----------------------------+------------------+----------+----------+-----------+----------+--------+ | VCPU | 16.0 | 64 | 0 | 1 | 1 | 64 | | MEMORY_MB | 1.5 | 190604 | 512 | 1 | 1 | 190604 | | CUSTOM_PMEM_NAMESPACE_LARGE | 1.0 | 1 | 0 | 1 | 1 | 1 | | CUSTOM_PMEM_NAMESPACE_6GB | 1.0 | 3 | 0 | 1 | 1 | 3 | | DISK_GB | 1.0 | 439 | 0 | 1 | 1 | 439 | +-----------------------------+------------------+----------+----------+-----------+----------+--------+ Here you can see the vPMEM resource classes prefixed with ``CUSTOM_PMEM_NAMESPACE_``. The ``LARGE`` label was configured with one namespace (``ns3``), so it has an inventory of ``1``. Since the ``6GB`` label was configured with three namespaces (``ns0``, ``ns1``, and ``ns2``), the ``CUSTOM_PMEM_NAMESPACE_6GB`` inventory has a ``total`` and ``max_unit`` of ``3``. #. Check allocations for each server that is using vPMEMs .. code-block:: console $ openstack server list +--------------------------------------+----------------------+--------+-------------------+---------------+-----------------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+----------------------+--------+-------------------+---------------+-----------------+ | 41d3e139-de5c-40fd-9d82-016b72f2ba1d | server-with-2-vpmems | ACTIVE | private=10.0.0.24 | ubuntu-bionic | my_flavor_large | | a616a7f6-b285-4adf-a885-dd8426dd9e6a | server-with-1-vpmem | ACTIVE | private=10.0.0.13 | ubuntu-bionic | my_flavor | +--------------------------------------+----------------------+--------+-------------------+---------------+-----------------+ $ openstack resource provider allocation show 41d3e139-de5c-40fd-9d82-016b72f2ba1d +--------------------------------------+------------+------------------------------------------------------------------------------------------------------------------------+ | resource_provider | generation | resources | +--------------------------------------+------------+------------------------------------------------------------------------------------------------------------------------+ | 1bc545f9-891f-4930-ab2b-88a56078f4be | 49 | {u'MEMORY_MB': 32768, u'VCPU': 16, u'DISK_GB': 20, u'CUSTOM_PMEM_NAMESPACE_6GB': 1, u'CUSTOM_PMEM_NAMESPACE_LARGE': 1} | +--------------------------------------+------------+------------------------------------------------------------------------------------------------------------------------+ $ openstack resource provider allocation show a616a7f6-b285-4adf-a885-dd8426dd9e6a +--------------------------------------+------------+-----------------------------------------------------------------------------------+ | resource_provider | generation | resources | +--------------------------------------+------------+-----------------------------------------------------------------------------------+ | 1bc545f9-891f-4930-ab2b-88a56078f4be | 49 | {u'MEMORY_MB': 8192, u'VCPU': 8, u'DISK_GB': 20, u'CUSTOM_PMEM_NAMESPACE_6GB': 1} | +--------------------------------------+------------+-----------------------------------------------------------------------------------+ In this example, two servers were created. ``server-with-2-vpmems`` used ``my_flavor_large`` asking for one ``6GB`` vPMEM and one ``LARGE`` vPMEM. 
``server-with-1-vpmem`` used ``my_flavor`` asking for a single ``6GB`` vPMEM. .. _`PMEM namespaces`: http://pmem.io/ndctl/ndctl-create-namespace.html .. _`vPMEM backends`: https://github.com/qemu/qemu/blob/19b599f7664b2ebfd0f405fb79c14dd241557452/docs/nvdimm.txt#L145 .. _`NVDIMM Linux kernel document`: https://www.kernel.org/doc/Documentation/nvdimm/nvdimm.txt .. _`ipmctl`: https://software.intel.com/en-us/articles/quick-start-guide-configure-intel-optane-dc-persistent-memory-on-linux .. _`ndctl`: http://pmem.io/ndctl/ ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2424707 nova-21.2.4/doc/source/cli/0000775000175000017500000000000000000000000015402 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/cli/index.rst0000664000175000017500000000411100000000000017240 0ustar00zuulzuul00000000000000.. Copyright 2010-2011 United States Government as represented by the Administrator of the National Aeronautics and Space Administration. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Command-line Utilities ====================== In this section you will find information on Nova's command line utilities. Nova Management Commands ------------------------ These commands are used to manage existing installations. They are designed to be run by operators in an environment where they have direct access to the nova database. .. toctree:: :maxdepth: 1 nova-manage nova-status Service Daemons --------------- The service daemons make up a functioning nova environment. All of these are expected to be started by an init system, expect to read a nova.conf file, and daemonize correctly after starting up. .. toctree:: :maxdepth: 1 nova-api nova-compute nova-conductor nova-novncproxy nova-scheduler nova-serialproxy nova-spicehtml5proxy WSGI Services ------------- Starting in the Pike release, the preferred way to deploy the nova api is in a wsgi container (uwsgi or apache/mod_wsgi). These are the wsgi entry points to do that. .. toctree:: :maxdepth: 1 nova-api-metadata nova-api-os-compute Additional Tools ---------------- There are a few additional cli tools which nova services call when appropriate. This should not need to be called directly by operators, but they are documented for completeness and debugging if something goes wrong. .. 
toctree:: :maxdepth: 1 nova-rootwrap ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/cli/nova-api-metadata.rst0000664000175000017500000000175500000000000021434 0ustar00zuulzuul00000000000000================= nova-api-metadata ================= -------------------------------- Server for the Nova Metadata API -------------------------------- :Author: openstack@lists.openstack.org :Copyright: OpenStack Foundation :Manual section: 1 :Manual group: cloud computing Synopsis ======== :: nova-api-metadata [options] Description =========== :program:`nova-api-metadata` is a server daemon that serves the Nova Metadata API. This daemon routes database requests via the ``nova-conductor`` service, so there are some considerations about using this in a :ref:`multi-cell layout `. Options ======= **General options** Files ===== * ``/etc/nova/nova.conf`` * ``/etc/nova/api-paste.ini`` * ``/etc/nova/policy.json`` * ``/etc/nova/rootwrap.conf`` * ``/etc/nova/rootwrap.d/`` See Also ======== * :nova-doc:`OpenStack Nova <>` * :nova-doc:`Using WSGI with Nova ` Bugs ==== * Nova bugs are managed at `Launchpad `__ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/cli/nova-api-os-compute.rst0000664000175000017500000000155300000000000021743 0ustar00zuulzuul00000000000000=================== nova-api-os-compute =================== ------------------------------------------ Server for the Nova OpenStack Compute APIs ------------------------------------------ :Author: openstack@lists.openstack.org :Copyright: OpenStack Foundation :Manual section: 1 :Manual group: cloud computing Synopsis ======== :: nova-api-os-compute [options] Description =========== :program:`nova-api-os-compute` is a server daemon that serves the Nova OpenStack Compute API. Options ======= **General options** Files ===== * ``/etc/nova/nova.conf`` * ``/etc/nova/api-paste.ini`` * ``/etc/nova/policy.json`` * ``/etc/nova/rootwrap.conf`` * ``/etc/nova/rootwrap.d/`` See Also ======== * :nova-doc:`OpenStack Nova <>` * :nova-doc:`Using WSGI with Nova ` Bugs ==== * Nova bugs are managed at `Launchpad `__ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/cli/nova-api.rst0000664000175000017500000000147500000000000017655 0ustar00zuulzuul00000000000000======== nova-api ======== ------------------------------------- Server for the OpenStack Compute APIs ------------------------------------- :Author: openstack@lists.openstack.org :Copyright: OpenStack Foundation :Manual section: 1 :Manual group: cloud computing Synopsis ======== :: nova-api [options] Description =========== :program:`nova-api` is a server daemon that serves the metadata and compute APIs in separate greenthreads. 
Options ======= **General options** Files ===== * ``/etc/nova/nova.conf`` * ``/etc/nova/api-paste.ini`` * ``/etc/nova/policy.json`` * ``/etc/nova/rootwrap.conf`` * ``/etc/nova/rootwrap.d/`` See Also ======== * :nova-doc:`OpenStack Nova <>` * :nova-doc:`Using WSGI with Nova ` Bugs ==== * Nova bugs are managed at `Launchpad `__ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/cli/nova-compute.rst0000664000175000017500000000161100000000000020550 0ustar00zuulzuul00000000000000============ nova-compute ============ ------------------- Nova Compute Server ------------------- :Author: openstack@lists.openstack.org :Copyright: OpenStack Foundation :Manual section: 1 :Manual group: cloud computing Synopsis ======== :: nova-compute [options] Description =========== :program:`nova-compute` is a server daemon that serves the Nova Compute service, which is responsible for building a disk image, launching an instance via the underlying virtualization driver, responding to calls to check the instance's state, attaching persistent storage, and terminating the instance. Options ======= **General options** Files ===== * ``/etc/nova/nova.conf`` * ``/etc/nova/policy.json`` * ``/etc/nova/rootwrap.conf`` * ``/etc/nova/rootwrap.d/`` See Also ======== * :nova-doc:`OpenStack Nova <>` Bugs ==== * Nova bugs are managed at `Launchpad `__ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/cli/nova-conductor.rst0000664000175000017500000000127700000000000021104 0ustar00zuulzuul00000000000000============== nova-conductor ============== ----------------------------- Server for the Nova Conductor ----------------------------- :Author: openstack@lists.openstack.org :Copyright: OpenStack Foundation :Manual section: 1 :Manual group: cloud computing Synopsis ======== :: nova-conductor [options] Description =========== :program:`nova-conductor` is a server daemon that serves the Nova Conductor service, which provides coordination and database query support for nova. Options ======= **General options** Files ===== * ``/etc/nova/nova.conf`` See Also ======== * :nova-doc:`OpenStack Nova <>` Bugs ==== * Nova bugs are managed at `Launchpad `__ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/cli/nova-manage.rst0000664000175000017500000010012600000000000020325 0ustar00zuulzuul00000000000000=========== nova-manage =========== .. program:: nova-manage ------------------------------------------- Control and manage cloud computer instances ------------------------------------------- :Author: openstack@lists.openstack.org :Copyright: OpenStack Foundation :Manual section: 1 :Manual group: cloud computing Synopsis ======== :: nova-manage [] Description =========== :program:`nova-manage` controls cloud computing instances by managing various admin-only aspects of Nova. The standard pattern for executing a :program:`nova-manage` command is:: nova-manage [] Run without arguments to see a list of available command categories:: nova-manage You can also run with a category argument such as ``db`` to see a list of all commands in that category:: nova-manage db These sections describe the available categories and arguments for :program:`nova-manage`. Options ======= These options apply to all commands and may be given in any order, before or after commands. Individual commands may provide additional options. 
Options without an argument can be combined after a single dash. .. option:: -h, --help Show a help message and exit .. option:: --config-dir DIR Path to a config directory to pull ``*.conf`` files from. This file set is sorted, so as to provide a predictable parse order if individual options are over-ridden. The set is parsed after the file(s) specified via previous :option:`--config-file`, arguments hence over-ridden options in the directory take precedence. This option must be set from the command-line. .. option:: --config-file PATH Path to a config file to use. Multiple config files can be specified, with values in later files taking precedence. Defaults to None. This option must be set from the command-line. .. option:: --debug, -d If set to true, the logging level will be set to DEBUG instead of the default INFO level. .. option:: --log-config-append PATH, --log-config PATH, --log_config PATH The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, :option:`--log-date-format`). .. option:: --log-date-format DATE_FORMAT Defines the format string for ``%(asctime)s`` in log records. Default: None. This option is ignored if :option:`--log-config-append` is set. .. option:: --log-dir LOG_DIR, --logdir LOG_DIR (Optional) The base directory used for relative log_file paths. This option is ignored if :option:`--log-config-append` is set. .. option:: --log-file PATH, --logfile PATH (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if :option:`--log-config-append` is set. .. option:: --nodebug The inverse of :option:`--debug`. .. option:: --nopost-mortem The inverse of :option:`--post-mortem`. .. option:: --nouse-journal The inverse of :option:`--use-journal`. .. option:: --nouse-json The inverse of :option:`--use-json`. .. option:: --nouse-syslog The inverse of :option:`--use-syslog`. .. option:: --nowatch-log-file The inverse of :option:`--watch-log-file`. .. option:: --post-mortem Allow post-mortem debugging .. option:: --syslog-log-facility SYSLOG_LOG_FACILITY Syslog facility to receive log lines. This option is ignored if :option:`--log-config-append` is set. .. option:: --use-journal Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages. This option is ignored if :option:`--log-config-append` is set. .. option:: --use-json Use JSON formatting for logging. This option is ignored if :option:`--log-config-append` is set. .. option:: --use-syslog Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if :option:`--log-config-append` is set. .. option:: --version Show program's version number and exit .. option:: --watch-log-file Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if :option:`--log-file` option is specified and Linux platform is used. This option is ignored if :option:`--log-config-append` is set. 
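For example, a typical invocation that combines several of these general options might look like the following (the config file path is illustrative; adjust it for your deployment)::

    nova-manage --config-file /etc/nova/nova.conf --debug db version
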
Commands ======== Nova Database ~~~~~~~~~~~~~ ``nova-manage db version`` Print the current main database version. ``nova-manage db sync [--local_cell] [VERSION]`` Upgrade the main database schema up to the most recent version or ``VERSION`` if specified. By default, this command will also attempt to upgrade the schema for the cell0 database if it is mapped (see the ``map_cell0`` or ``simple_cell_setup`` commands for more details on mapping the cell0 database). If ``--local_cell`` is specified, then only the main database in the current cell is upgraded. The local database connection is determined by :oslo.config:option:`database.connection` in the configuration file, passed to nova-manage using the ``--config-file`` option(s). This command should be run after ``nova-manage api_db sync``. Returns exit code 0 if the database schema was synced successfully, or 1 if cell0 cannot be accessed. ``nova-manage db archive_deleted_rows [--max_rows ] [--verbose] [--until-complete] [--before ] [--purge] [--all-cells]`` Move deleted rows from production tables to shadow tables. Note that the corresponding rows in the ``instance_mappings``, ``request_specs`` and ``instance_group_member`` tables of the API database are purged when instance records are archived and thus, :oslo.config:option:`api_database.connection` is required in the config file. Specifying ``--verbose`` will print the results of the archive operation for any tables that were changed. Specifying ``--until-complete`` will make the command run continuously until all deleted rows are archived. Use the ``--max_rows`` option, which defaults to 1000, as a batch size for each iteration (note that the purged API database table records are not included in this batch size). Specifying ``--before`` will archive only instances that were deleted before the date_ provided, and records in other tables related to those instances. Specifying ``--purge`` will cause a *full* DB purge to be completed after archival. If a date range is desired for the purge, then run ``nova-manage db purge --before `` manually after archiving is complete. Specifying ``--all-cells`` will cause the process to run against all cell databases. **Return Codes** .. list-table:: :widths: 20 80 :header-rows: 1 * - Return code - Description * - 0 - Nothing was archived. * - 1 - Some number of rows were archived. * - 2 - Invalid value for ``--max_rows``. * - 3 - No connection to the API database could be established using :oslo.config:option:`api_database.connection`. * - 4 - Invalid value for ``--before``. * - 255 - An unexpected error occurred. If automating, this should be run continuously while the result is 1, stopping at 0, or use the ``--until-complete`` option. ``nova-manage db purge [--all] [--before ] [--verbose] [--all-cells]`` Delete rows from shadow tables. Specifying ``--all`` will delete all data from all shadow tables. Specifying ``--before`` will delete data from all shadow tables that is older than the date_ provided. Specifying ``--verbose`` will cause information to be printed about purged records. Specifying ``--all-cells`` will cause the purge to be applied against all cell databases. For ``--all-cells`` to work, the api database connection information must be configured. Returns exit code 0 if rows were deleted, 1 if required arguments are not provided, 2 if an invalid date is provided, 3 if no data was deleted, 4 if the list of cells cannot be obtained. 
``nova-manage db null_instance_uuid_scan [--delete]`` Lists and optionally deletes database records where instance_uuid is NULL. ``nova-manage db online_data_migrations [--max-count]`` Perform data migration to update all live data. ``--max-count`` controls the maximum number of objects to migrate in a given call. If not specified, migration will occur in batches of 50 until fully complete. Returns exit code 0 if no (further) updates are possible, 1 if the ``--max-count`` option was used and some updates were completed successfully (even if others generated errors), 2 if some updates generated errors and no other migrations were able to take effect in the last batch attempted, or 127 if invalid input is provided (e.g. non-numeric max-count). This command should be called after upgrading database schema and nova services on all controller nodes. If it exits with partial updates (exit status 1) it should be called again, even if some updates initially generated errors, because some updates may depend on others having completed. If it exits with status 2, intervention is required to resolve the issue causing remaining updates to fail. It should be considered successfully completed only when the exit status is 0. For example:: $ nova-manage db online_data_migrations Running batches of 50 until complete 2 rows matched query migrate_instances_add_request_spec, 0 migrated 2 rows matched query populate_queued_for_delete, 2 migrated +---------------------------------------------+--------------+-----------+ | Migration | Total Needed | Completed | +---------------------------------------------+--------------+-----------+ | create_incomplete_consumers | 0 | 0 | | migrate_instances_add_request_spec | 2 | 0 | | migrate_quota_classes_to_api_db | 0 | 0 | | migrate_quota_limits_to_api_db | 0 | 0 | | migration_migrate_to_uuid | 0 | 0 | | populate_missing_availability_zones | 0 | 0 | | populate_queued_for_delete | 2 | 2 | | populate_uuids | 0 | 0 | +---------------------------------------------+--------------+-----------+ In the above example, the ``migrate_instances_add_request_spec`` migration found two candidate records but did not need to perform any kind of data migration for either of them. In the case of the ``populate_queued_for_delete`` migration, two candidate records were found which did require a data migration. Since ``--max-count`` defaults to 50 and only two records were migrated with no more candidates remaining, the command completed successfully with exit code 0. ``nova-manage db ironic_flavor_migration [--all] [--host] [--node] [--resource_class]`` Perform the ironic flavor migration process against the database while services are offline. This is *not recommended* for most people. The ironic compute driver will do this online and as necessary if run normally. This routine is provided only for advanced users that may be skipping the 16.0.0 Pike release, never able to run services normally at the Pike level. Since this utility is for use when all services (including ironic) are down, you must pass the resource class set on your node(s) with the ``--resource_class`` parameter. To migrate a specific host and node, provide the hostname and node uuid with ``--host $hostname --node $uuid``. To migrate all instances on nodes managed by a single host, provide only ``--host``. To iterate over all nodes in the system in a single pass, use ``--all``. Note that this process is not lightweight, so it should not be run frequently without cause, although it is not harmful to do so. 
If you have multiple cellsv2 cells, you should run this once per cell with the corresponding cell config for each (i.e. this does not iterate cells automatically). Note that this is not recommended unless you need to run this specific data migration offline, and it should be used with care as the work done is non-trivial. Running smaller and more targeted batches (such as specific nodes) is recommended. .. _date: ``--before `` The date argument accepted by the ``--before`` option can be in any of several formats, including ``YYYY-MM-DD [HH:mm[:ss]]`` and the default format produced by the ``date`` command, e.g. ``Fri May 24 09:20:11 CDT 2019``. Date strings containing spaces must be quoted appropriately. Some examples:: # Purge shadow table rows older than a specific date nova-manage db purge --before 2015-10-21 # or nova-manage db purge --before "Oct 21 2015" # Times are also accepted nova-manage db purge --before "2015-10-21 12:00" Note that relative dates (such as ``yesterday``) are not supported natively. The ``date`` command can be helpful here:: # Archive deleted rows more than one month old nova-manage db archive_deleted_rows --before "$(date -d 'now - 1 month')" Nova API Database ~~~~~~~~~~~~~~~~~ ``nova-manage api_db version`` Print the current API database version. ``nova-manage api_db sync [VERSION]`` Upgrade the API database schema up to the most recent version or ``VERSION`` if specified. This command does not create the API database, it runs schema migration scripts. The API database connection is determined by :oslo.config:option:`api_database.connection` in the configuration file passed to nova-manage. In the 18.0.0 Rocky or 19.0.0 Stein release, this command will also upgrade the optional placement database if ``[placement_database]/connection`` is configured. Returns exit code 0 if the database schema was synced successfully. This command should be run before ``nova-manage db sync``. .. _man-page-cells-v2: Nova Cells v2 ~~~~~~~~~~~~~ ``nova-manage cell_v2 simple_cell_setup [--transport-url ]`` Setup a fresh cells v2 environment. If a ``transport_url`` is not specified, it will use the one defined by :oslo.config:option:`transport_url` in the configuration file. Returns 0 if setup is completed (or has already been done), 1 if no hosts are reporting (and cannot be mapped) and 1 if the transport url is missing or invalid. ``nova-manage cell_v2 map_cell0 [--database_connection ]`` Create a cell mapping to the database connection for the cell0 database. If a database_connection is not specified, it will use the one defined by :oslo.config:option:`database.connection` in the configuration file passed to nova-manage. The cell0 database is used for instances that have not been scheduled to any cell. This generally applies to instances that have encountered an error before they have been scheduled. Returns 0 if cell0 is created successfully or already setup. ``nova-manage cell_v2 map_instances --cell_uuid [--max-count ] [--reset]`` Map instances to the provided cell. Instances in the nova database will be queried from oldest to newest and mapped to the provided cell. A ``--max-count`` can be set on the number of instance to map in a single run. Repeated runs of the command will start from where the last run finished so it is not necessary to increase ``--max-count`` to finish. A ``--reset`` option can be passed which will reset the marker, thus making the command start from the beginning as opposed to the default behavior of starting from where the last run finished. 
If ``--max-count`` is not specified, all instances in the cell will be mapped in batches of 50. If you have a large number of instances, consider specifying a custom value and run the command until it exits with 0. **Return Codes** .. list-table:: :widths: 20 80 :header-rows: 1 * - Return code - Description * - 0 - All instances have been mapped. * - 1 - There are still instances to be mapped. * - 127 - Invalid value for ``--max-count``. * - 255 - An unexpected error occurred. ``nova-manage cell_v2 map_cell_and_hosts [--name ] [--transport-url ] [--verbose]`` Create a cell mapping to the database connection and message queue transport url, and map hosts to that cell. The database connection comes from the :oslo.config:option:`database.connection` defined in the configuration file passed to nova-manage. If a transport_url is not specified, it will use the one defined by :oslo.config:option:`transport_url` in the configuration file. This command is idempotent (can be run multiple times), and the verbose option will print out the resulting cell mapping uuid. Returns 0 on successful completion, and 1 if the transport url is missing or invalid. ``nova-manage cell_v2 verify_instance --uuid [--quiet]`` Verify instance mapping to a cell. This command is useful to determine if the cells v2 environment is properly setup, specifically in terms of the cell, host, and instance mapping records required. Returns 0 when the instance is successfully mapped to a cell, 1 if the instance is not mapped to a cell (see the ``map_instances`` command), 2 if the cell mapping is missing (see the ``map_cell_and_hosts`` command if you are upgrading from a cells v1 environment, and the ``simple_cell_setup`` if you are upgrading from a non-cells v1 environment), 3 if it is a deleted instance which has instance mapping, and 4 if it is an archived instance which still has an instance mapping. ``nova-manage cell_v2 create_cell [--name ] [--transport-url ] [--database_connection ] [--verbose] [--disabled]`` Create a cell mapping to the database connection and message queue transport url. If a database_connection is not specified, it will use the one defined by :oslo.config:option:`database.connection` in the configuration file passed to nova-manage. If a transport_url is not specified, it will use the one defined by :oslo.config:option:`transport_url` in the configuration file. The verbose option will print out the resulting cell mapping uuid. All the cells created are by default enabled. However passing the ``--disabled`` option can create a pre-disabled cell, meaning no scheduling will happen to this cell. The meaning of the various exit codes returned by this command are explained below: * Returns 0 if the cell mapping was successfully created. * Returns 1 if the transport url or database connection was missing or invalid. * Returns 2 if another cell is already using that transport url and/or database connection combination. ``nova-manage cell_v2 discover_hosts [--cell_uuid ] [--verbose] [--strict] [--by-service]`` Searches cells, or a single cell, and maps found hosts. This command will check the database for each cell (or a single one if passed in) and map any hosts which are not currently mapped. If a host is already mapped, nothing will be done. You need to re-run this command each time you add a batch of compute hosts to a cell (otherwise the scheduler will never place instances there and the API will not list the new hosts). 
If ``--strict`` is specified, the command will only return 0 if an unmapped host was discovered and mapped successfully. If ``--by-service`` is specified, this command will look in the appropriate cell(s) for any nova-compute services and ensure there are host mappings for them. This is less efficient and is only necessary when using compute drivers that may manage zero or more actual compute nodes at any given time (currently only ironic). This command should be run once after all compute hosts have been deployed and should not be run in parallel. When run in parallel, the commands will collide with each other trying to map the same hosts in the database at the same time. The meaning of the various exit codes returned by this command are explained below: * Returns 0 if hosts were successfully mapped or no hosts needed to be mapped. If ``--strict`` is specified, returns 0 only if an unmapped host was discovered and mapped. * Returns 1 if ``--strict`` is specified and no unmapped hosts were found. Also returns 1 if an exception was raised while running. * Returns 2 if the command aborted because of a duplicate host mapping found. This means the command collided with another running discover_hosts command or scheduler periodic task and is safe to retry. ``nova-manage cell_v2 list_cells [--verbose]`` By default the cell name, uuid, disabled state, masked transport URL and database connection details are shown. Use the ``--verbose`` option to see transport URL and database connection with their sensitive details. ``nova-manage cell_v2 delete_cell [--force] --cell_uuid `` Delete a cell by the given uuid. Returns 0 if the empty cell is found and deleted successfully or the cell that has hosts is found and the cell, hosts and the instance_mappings are deleted successfully with ``--force`` option (this happens if there are no living instances), 1 if a cell with that uuid could not be found, 2 if host mappings were found for the cell (cell not empty) without ``--force`` option, 3 if there are instances mapped to the cell (cell not empty) irrespective of the ``--force`` option, and 4 if there are instance mappings to the cell but all instances have been deleted in the cell, again without the ``--force`` option. ``nova-manage cell_v2 list_hosts [--cell_uuid ]`` Lists the hosts in one or all v2 cells. By default hosts in all v2 cells are listed. Use the ``--cell_uuid`` option to list hosts in a specific cell. If the cell is not found by uuid, this command will return an exit code of 1. Otherwise, the exit code will be 0. ``nova-manage cell_v2 update_cell --cell_uuid [--name ] [--transport-url ] [--database_connection ] [--disable] [--enable]`` Updates the properties of a cell by the given uuid. If a database_connection is not specified, it will attempt to use the one defined by :oslo.config:option:`database.connection` in the configuration file. If a transport_url is not specified, it will attempt to use the one defined by :oslo.config:option:`transport_url` in the configuration file. The meaning of the various exit codes returned by this command are explained below: * If successful, it will return 0. * If the cell is not found by the provided uuid, it will return 1. * If the properties cannot be set, it will return 2. * If the provided transport_url or/and database_connection is/are same as another cell, it will return 3. * If an attempt is made to disable and enable a cell at the same time, it will return 4. * If an attempt is made to disable or enable cell0 it will return 5. .. 
note:: Updating the ``transport_url`` or ``database_connection`` fields on a running system will NOT result in all nodes immediately using the new values. Use caution when changing these values. The scheduler will not notice that a cell has been enabled/disabled until it is restarted or sent the SIGHUP signal. ``nova-manage cell_v2 delete_host --cell_uuid <cell_uuid> --host <host>`` Delete a host by the given host name and the given cell uuid. Returns 0 if the empty host is found and deleted successfully, 1 if a cell with that uuid could not be found, 2 if a host with that name could not be found, 3 if a host with that name is not in a cell with that uuid, 4 if a host with that name has instances (host not empty). .. note:: The scheduler caches host-to-cell mapping information so when deleting a host the scheduler may need to be restarted or sent the SIGHUP signal. Placement ~~~~~~~~~ .. _heal_allocations_cli: ``nova-manage placement heal_allocations [--max-count <max_count>] [--verbose] [--skip-port-allocations] [--dry-run] [--instance <instance_uuid>] [--cell <cell_uuid>]`` Iterates over non-cell0 cells looking for instances which do not have allocations in the Placement service and which are not undergoing a task state transition. For each instance found, allocations are created against the compute node resource provider for that instance based on the flavor associated with the instance. Also, if the instance has a port attached with a resource request (for example, a port with a guaranteed minimum bandwidth QoS policy) but the corresponding allocation is not found then the allocation is created against the network device resource providers according to the resource request of that port. It is possible that the missing allocation cannot be created either due to not having enough resource inventory on the host the instance resides on or because more than one resource provider could fulfill the request. In this case the instance needs to be manually deleted or the port needs to be detached. When nova `supports migrating instances with guaranteed bandwidth ports`_, migration will heal missing allocations for these instances. Before the allocations for the ports are persisted in placement nova-manage tries to update each port in neutron to refer to the resource provider UUID which provides the requested resources. If any of the port updates fail in neutron or the allocation update fails in placement the command tries to roll back the partial updates to the ports. If the roll back fails then the process stops with exit code ``7`` and the admin needs to do the rollback in neutron manually according to the description in the exit code section. .. _supports migrating instances with guaranteed bandwidth ports: https://specs.openstack.org/openstack/nova-specs/specs/train/approved/support-move-ops-with-qos-ports.html There is also a special case handled for instances that *do* have allocations created before Placement API microversion 1.8 where project_id and user_id values were required. For those types of allocations, the project_id and user_id are updated using the values from the instance. Specify ``--max-count`` to control the maximum number of instances to process. If not specified, all instances in each cell will be processed in batches of 50. If you have a large number of instances, consider specifying a custom value and run the command until it exits with 0 or 4. Specify ``--verbose`` to get detailed progress output during execution. Specify ``--dry-run`` to print output but not commit any changes. The return code should be 4. *(Since 20.0.0 Train)* Specify ``--instance`` to process a specific instance given its UUID. If specified the ``--max-count`` option has no effect. *(Since 20.0.0 Train)* Specify ``--skip-port-allocations`` to skip the healing of the resource allocations of bound ports, e.g. healing bandwidth resource allocation for ports having minimum QoS policy rules attached.
If your deployment does not use such a feature then the performance impact of querying neutron ports for each instance can be avoided with this flag. *(Since 20.0.0 Train)* Specify ``--cell`` to process heal allocations within a specific cell. This is mutually exclusive with the ``--instance`` option. This command requires that the :oslo.config:option:`api_database.connection` and :oslo.config:group:`placement` configuration options are set. Placement API >= 1.28 is required. **Return Codes** .. list-table:: :widths: 20 80 :header-rows: 1 * - Return code - Description * - 0 - Command completed successfully and allocations were created. * - 1 - ``--max-count`` was reached and there are more instances to process. * - 2 - Unable to find a compute node record for a given instance. * - 3 - Unable to create (or update) allocations for an instance against its compute node resource provider. * - 4 - Command completed successfully but no allocations were created. * - 5 - Unable to query ports from neutron * - 6 - Unable to update ports in neutron * - 7 - Cannot roll back neutron port updates. Manual steps needed. The error message will indicate which neutron ports need to be changed to clean up ``binding:profile`` of the port:: $ openstack port unset --binding-profile allocation * - 127 - Invalid input. * - 255 - An unexpected error occurred. .. _sync_aggregates_cli: ``nova-manage placement sync_aggregates [--verbose]`` Mirrors compute host aggregates to resource provider aggregates in the Placement service. Requires the :oslo.config:group:`api_database` and :oslo.config:group:`placement` sections of the nova configuration file to be populated. Specify ``--verbose`` to get detailed progress output during execution. .. note:: Depending on the size of your deployment and the number of compute hosts in aggregates, this command could cause a non-negligible amount of traffic to the placement service and therefore is recommended to be run during maintenance windows. .. versionadded:: Rocky **Return Codes** .. list-table:: :widths: 20 80 :header-rows: 1 * - Return code - Description * - 0 - Successful run * - 1 - A host was found with more than one matching compute node record * - 2 - An unexpected error occurred while working with the placement API * - 3 - Failed updating provider aggregates in placement * - 4 - Host mappings not found for one or more host aggregate members * - 5 - Compute node records not found for one or more hosts * - 6 - Resource provider not found by uuid for a given host * - 255 - An unexpected error occurred. ``nova-manage placement audit [--verbose] [--delete] [--resource_provider ]`` Iterates over all the Resource Providers (or just one if you provide the UUID) and then verifies if the compute allocations are either related to an existing instance or a migration UUID. If not, it will tell which allocations are orphaned. You can also ask to delete all the orphaned allocations by specifying ``-delete``. Specify ``--verbose`` to get detailed progress output during execution. This command requires that the :oslo.config:option:`api_database.connection` and :oslo.config:group:`placement` configuration options are set. Placement API >= 1.14 is required. **Return Codes** .. 
list-table:: :widths: 20 80 :header-rows: 1 * - Return code - Description * - 0 - No orphaned allocations were found * - 1 - An unexpected error occurred * - 3 - Orphaned allocations were found * - 4 - All found orphaned allocations were deleted * - 127 - Invalid input See Also ======== * :nova-doc:`OpenStack Nova <>` Bugs ==== * Nova bugs are managed at `Launchpad `__ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/cli/nova-novncproxy.rst0000664000175000017500000000161300000000000021323 0ustar00zuulzuul00000000000000=============== nova-novncproxy =============== ------------------------------------------------------- Websocket novnc Proxy for OpenStack Nova noVNC consoles ------------------------------------------------------- :Author: openstack@lists.openstack.org :Copyright: OpenStack Foundation :Manual section: 1 :Manual group: cloud computing Synopsis ======== :: nova-novncproxy [options] Description =========== :program:`nova-novncproxy` is a server daemon that serves the Nova noVNC Websocket Proxy service, which provides a websocket proxy that is compatible with OpenStack Nova noVNC consoles. Options ======= **General options** Files ===== * ``/etc/nova/nova.conf`` * ``/etc/nova/policy.json`` * ``/etc/nova/rootwrap.conf`` * ``/etc/nova/rootwrap.d/`` See Also ======== * :nova-doc:`OpenStack Nova <>` Bugs ==== * Nova bugs are managed at `Launchpad `__ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/cli/nova-rootwrap.rst0000664000175000017500000000246300000000000020757 0ustar00zuulzuul00000000000000============= nova-rootwrap ============= --------------------- Root wrapper for Nova --------------------- :Author: openstack@lists.openstack.org :Copyright: OpenStack Foundation :Manual section: 1 :Manual group: cloud computing Synopsis ======== :: nova-rootwrap [options] Description =========== :program:`nova-rootwrap` is an application that filters which commands nova is allowed to run as another user. To use this, you should set the following in ``nova.conf``:: rootwrap_config=/etc/nova/rootwrap.conf You also need to let the nova user run :program:`nova-rootwrap` as root in ``sudoers``:: nova ALL = (root) NOPASSWD: /usr/bin/nova-rootwrap /etc/nova/rootwrap.conf * To make allowed commands node-specific, your packaging should only install ``{compute,network}.filters`` respectively on compute and network nodes, i.e. :program:`nova-api` nodes should not have any of those files installed. .. note:: :program:`nova-rootwrap` is being slowly deprecated and replaced by ``oslo.privsep``, and will eventually be removed. 
Options ======= **General options** Files ===== * ``/etc/nova/nova.conf`` * ``/etc/nova/rootwrap.conf`` * ``/etc/nova/rootwrap.d/`` See Also ======== * :nova-doc:`OpenStack Nova <>` Bugs ==== * Nova bugs are managed at `Launchpad `__ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/cli/nova-scheduler.rst0000664000175000017500000000136300000000000021056 0ustar00zuulzuul00000000000000============== nova-scheduler ============== -------------- Nova Scheduler -------------- :Author: openstack@lists.openstack.org :Copyright: OpenStack Foundation :Manual section: 1 :Manual group: cloud computing Synopsis ======== :: nova-scheduler [options] Description =========== :program:`nova-scheduler` is a server daemon that serves the Nova Scheduler service, which is responsible for picking a compute node to run a given instance on. Options ======= **General options** Files ===== * ``/etc/nova/nova.conf`` * ``/etc/nova/policy.json`` * ``/etc/nova/rootwrap.conf`` * ``/etc/nova/rootwrap.d/`` See Also ======== * :nova-doc:`OpenStack Nova <>` Bugs ==== * Nova bugs are managed at `Launchpad `__ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/cli/nova-serialproxy.rst0000664000175000017500000000161400000000000021460 0ustar00zuulzuul00000000000000================ nova-serialproxy ================ ------------------------------------------------------ Websocket serial Proxy for OpenStack Nova serial ports ------------------------------------------------------ :Author: openstack@lists.launchpad.net :Copyright: OpenStack Foundation :Manual section: 1 :Manual group: cloud computing Synopsis ======== :: nova-serialproxy [options] Description =========== :program:`nova-serialproxy` is a server daemon that serves the Nova Serial Websocket Proxy service, which provides a websocket proxy that is compatible with OpenStack Nova serial ports. Options ======= **General options** Files ===== * ``/etc/nova/nova.conf`` * ``/etc/nova/policy.json`` * ``/etc/nova/rootwrap.conf`` * ``/etc/nova/rootwrap.d/`` See Also ======== * :nova-doc:`OpenStack Nova <>` Bugs ==== * Nova bugs are managed at `Launchpad `__ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/cli/nova-spicehtml5proxy.rst0000664000175000017500000000166000000000000022257 0ustar00zuulzuul00000000000000==================== nova-spicehtml5proxy ==================== ------------------------------------------------------- Websocket Proxy for OpenStack Nova SPICE HTML5 consoles ------------------------------------------------------- :Author: openstack@lists.openstack.org :Copyright: OpenStack Foundation :Manual section: 1 :Manual group: cloud computing Synopsis ======== :: nova-spicehtml5proxy [options] Description =========== :program:`nova-spicehtml5proxy` is a server daemon that serves the Nova SPICE HTML5 Websocket Proxy service, which provides a websocket proxy that is compatible with OpenStack Nova SPICE HTML5 consoles. 
Options ======= **General options** Files ===== * ``/etc/nova/nova.conf`` * ``/etc/nova/policy.json`` * ``/etc/nova/rootwrap.conf`` * ``/etc/nova/rootwrap.d/`` See Also ======== * :nova-doc:`OpenStack Nova <>` Bugs ==== * Nova bugs are managed at `Launchpad `__ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/cli/nova-status.rst0000664000175000017500000001174100000000000020424 0ustar00zuulzuul00000000000000=========== nova-status =========== -------------------------------------- CLI interface for nova status commands -------------------------------------- :Author: openstack@lists.openstack.org :Copyright: OpenStack Foundation :Manual section: 1 :Manual group: cloud computing Synopsis ======== :: nova-status [] Description =========== :program:`nova-status` is a tool that provides routines for checking the status of a Nova deployment. Options ======= The standard pattern for executing a :program:`nova-status` command is:: nova-status [] Run without arguments to see a list of available command categories:: nova-status Categories are: * ``upgrade`` Detailed descriptions are below. You can also run with a category argument such as ``upgrade`` to see a list of all commands in that category:: nova-status upgrade These sections describe the available categories and arguments for :program:`nova-status`. Upgrade ~~~~~~~ .. _nova-status-checks: ``nova-status upgrade check`` Performs a release-specific readiness check before restarting services with new code. This command expects to have complete configuration and access to databases and services within a cell. For example, this check may query the Nova API database and one or more cell databases. It may also make requests to other services such as the Placement REST API via the Keystone service catalog. **Return Codes** .. list-table:: :widths: 20 80 :header-rows: 1 * - Return code - Description * - 0 - All upgrade readiness checks passed successfully and there is nothing to do. * - 1 - At least one check encountered an issue and requires further investigation. This is considered a warning but the upgrade may be OK. * - 2 - There was an upgrade status check failure that needs to be investigated. This should be considered something that stops an upgrade. * - 255 - An unexpected error occurred. **History of Checks** **15.0.0 (Ocata)** * Checks are added for cells v2 so ``nova-status upgrade check`` should be run *after* running the ``nova-manage cell_v2 simple_cell_setup`` command. * Checks are added for the Placement API such that there is an endpoint in the Keystone service catalog, the service is running and the check can make a successful request to the endpoint. The command also checks to see that there are compute node resource providers checking in with the Placement service. More information on the Placement service can be found at :placement-doc:`Placement API <>`. **16.0.0 (Pike)** * Checks for the Placement API are modified to require version 1.4, that is needed in Pike and further for nova-scheduler to work correctly. **17.0.0 (Queens)** * Checks for the Placement API are modified to require version 1.17. **18.0.0 (Rocky)** * Checks for the Placement API are modified to require version 1.28. * Checks that ironic instances have had their embedded flavors migrated to use custom resource classes. 
* Checks for ``nova-osapi_compute`` service versions that are less than 15 across all cell mappings which might cause issues when querying instances depending on how the **nova-api** service is configured. See https://bugs.launchpad.net/nova/+bug/1759316 for details. * Checks that existing instances have been migrated to have a matching request spec in the API DB. **19.0.0 (Stein)** * Checks for the Placement API are modified to require version 1.30. * Checks are added for the **nova-consoleauth** service to warn and provide additional instructions to set **[workarounds]enable_consoleauth = True** while performing a live/rolling upgrade. * The "Resource Providers" upgrade check was removed since the placement service code is being extracted from nova and the related tables are no longer used in the ``nova_api`` database. * The "API Service Version" upgrade check was removed since the corresponding code for that check was removed in Stein. **20.0.0 (Train)** * Checks for the Placement API are modified to require version 1.32. * Checks to ensure block-storage (cinder) API version 3.44 is available in order to support multi-attach volumes. If ``[cinder]/auth_type`` is not configured this is a no-op check. * The "**nova-consoleauth** service" upgrade check was removed since the service was removed in Train. * The ``Request Spec Migration`` check was removed. **21.0.0 (Ussuri)** * Checks for the Placement API are modified to require version 1.35. **21.0.0 (Ussuri)** * Checks for the policy files are not automatically overwritten with new defaults. * Checks for computes older than the previous major release. This check was backported from 23.0.0 (Wallaby). See Also ======== * :nova-doc:`OpenStack Nova <>` Bugs ==== * Nova bugs are managed at `Launchpad `_ ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2424707 nova-21.2.4/doc/source/common/0000775000175000017500000000000000000000000016123 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/common/numa-live-migration-warning.txt0000664000175000017500000000122600000000000024214 0ustar00zuulzuul00000000000000.. important:: In deployments older than Train, or in mixed Stein/Train deployments with a rolling upgrade in progress, unless :oslo.config:option:`specifically enabled `, live migration is not possible for instances with a NUMA topology when using the libvirt driver. A NUMA topology may be specified explicitly or can be added implicitly due to the use of CPU pinning or huge pages. Refer to `bug #1289064`__ for more information. As of Train, live migration of instances with a NUMA topology when using the libvirt driver is fully supported. __ https://bugs.launchpad.net/nova/+bug/1289064 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/conf.py0000664000175000017500000001762200000000000016142 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. # # nova documentation build configuration file # # Refer to the Sphinx documentation for advice on configuring this file: # # http://www.sphinx-doc.org/en/stable/config.html import os import sys # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath('../')) # -- General configuration ---------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = [ 'sphinx.ext.autodoc', 'sphinx.ext.todo', 'sphinx.ext.graphviz', 'openstackdocstheme', 'sphinx_feature_classification.support_matrix', 'oslo_config.sphinxconfiggen', 'oslo_config.sphinxext', 'oslo_policy.sphinxpolicygen', 'oslo_policy.sphinxext', 'ext.versioned_notifications', 'ext.feature_matrix', 'ext.extra_specs', 'sphinxcontrib.actdiag', 'sphinxcontrib.seqdiag', 'sphinxcontrib.rsvgconverter', ] # openstackdocstheme options repository_name = 'openstack/nova' bug_project = 'nova' bug_tag = 'doc' config_generator_config_file = '../../etc/nova/nova-config-generator.conf' sample_config_basename = '_static/nova' policy_generator_config_file = [ ('../../etc/nova/nova-policy-generator.conf', '_static/nova'), ] actdiag_html_image_format = 'SVG' actdiag_antialias = True seqdiag_html_image_format = 'SVG' seqdiag_antialias = True todo_include_todos = True # The master toctree document. master_doc = 'index' # General information about the project. copyright = u'2010-present, OpenStack Foundation' # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # -- Options for man page output ---------------------------------------------- # Grouping the document tree for man pages. # List of tuples 'sourcefile', 'target', u'title', u'Authors name', 'manual' _man_pages = [ ('nova-api', u'Cloud controller fabric'), ('nova-api-metadata', u'Cloud controller fabric'), ('nova-api-os-compute', u'Cloud controller fabric'), ('nova-compute', u'Cloud controller fabric'), ('nova-conductor', u'Cloud controller fabric'), ('nova-manage', u'Cloud controller fabric'), ('nova-novncproxy', u'Cloud controller fabric'), ('nova-rootwrap', u'Cloud controller fabric'), ('nova-scheduler', u'Cloud controller fabric'), ('nova-serialproxy', u'Cloud controller fabric'), ('nova-spicehtml5proxy', u'Cloud controller fabric'), ('nova-status', u'Cloud controller fabric'), ] man_pages = [ ('cli/%s' % name, name, description, [u'OpenStack'], 1) for name, description in _man_pages] # -- Options for HTML output -------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. html_theme = 'openstackdocs' # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any paths that contain "extra" files, such as .htaccess or # robots.txt. html_extra_path = ['_extra'] # -- Options for LaTeX output ------------------------------------------------- # Grouping the document tree into LaTeX files. 
List of tuples # (source start file, target name, title, author, documentclass # [howto/manual]). latex_documents = [ ('index', 'doc-nova.tex', u'Nova Documentation', u'OpenStack Foundation', 'manual'), ] # Allow deeper levels of nesting for \begin...\end stanzas latex_elements = { 'maxlistdepth': 10, 'extraclassoptions': 'openany,oneside', 'preamble': r''' \setcounter{tocdepth}{3} \setcounter{secnumdepth}{3} ''', } # Disable use of xindy since that's another binary dependency that's not # available on all platforms latex_use_xindy = False # -- Options for openstackdocstheme ------------------------------------------- # keep this ordered to keep mriedem happy # # NOTE(stephenfin): Projects that don't have a release branch, like TripleO and # reno, should not be included here openstack_projects = [ 'ceilometer', 'cinder', 'cyborg', 'glance', 'horizon', 'ironic', 'keystone', 'neutron', 'nova', 'oslo.log', 'oslo.messaging', 'oslo.i18n', 'oslo.versionedobjects', 'placement', 'python-novaclient', 'python-openstackclient', 'watcher', ] # -- Custom extensions -------------------------------------------------------- # NOTE(mdbooth): (2019-03-20) Sphinx loads policies defined in setup.cfg, which # includes the placement policy at nova/api/openstack/placement/policies.py. # Loading this imports nova/api/openstack/__init__.py, which imports # nova.monkey_patch, which will do eventlet monkey patching to the sphinx # process. As well as being unnecessary and a bad idea, this breaks on # python3.6 (but not python3.7), so don't do that. os.environ['OS_NOVA_DISABLE_EVENTLET_PATCHING'] = '1' def monkey_patch_blockdiag(): """Monkey patch the blockdiag library. The default word wrapping in blockdiag is poor, and breaks on a fixed text width rather than on word boundaries. There's a patch submitted to resolve this [1]_ but it's unlikely to merge anytime soon. In addition, blockdiag monkey patches a core library function, ``codecs.getreader`` [2]_, to work around some Python 3 issues. Because this operates in the same environment as other code that uses this library, it ends up causing issues elsewhere. We undo these destructive changes pending a fix. TODO: Remove this once blockdiag is bumped to 1.6, which will hopefully include the fix. .. [1] https://bitbucket.org/blockdiag/blockdiag/pull-requests/16/ .. [2] https://bitbucket.org/blockdiag/blockdiag/src/1.5.3/src/blockdiag/utils/compat.py # noqa """ import codecs from codecs import getreader from blockdiag.imagedraw import textfolder from blockdiag.utils import compat # noqa # oh, blockdiag. Let's undo the mess you made. codecs.getreader = getreader def splitlabel(text): """Split text to lines as generator. Every line will be stripped. If text includes characters "\n\n", treat as line separator. Ignore '\n' to allow line wrapping. """ lines = [x.strip() for x in text.splitlines()] out = [] for line in lines: if line: out.append(line) else: yield ' '.join(out) out = [] yield ' '.join(out) def splittext(metrics, text, bound, measure='width'): folded = [' '] for word in text.split(): # Try appending the word to the last line tryline = ' '.join([folded[-1], word]).strip() textsize = metrics.textsize(tryline) if getattr(textsize, measure) > bound: # Start a new line. Appends `word` even if > bound. 
folded.append(word) else: folded[-1] = tryline return folded # monkey patch those babies textfolder.splitlabel = splitlabel textfolder.splittext = splittext monkey_patch_blockdiag() ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2424707 nova-21.2.4/doc/source/configuration/0000775000175000017500000000000000000000000017502 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/configuration/config.rst0000664000175000017500000000045400000000000021504 0ustar00zuulzuul00000000000000===================== Configuration Options ===================== The following is an overview of all available configuration options in Nova. .. only:: html For a sample configuration file, refer to :doc:`sample-config`. .. show-options:: :config-file: etc/nova/nova-config-generator.conf ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/configuration/extra-specs.rst0000664000175000017500000001260500000000000022476 0ustar00zuulzuul00000000000000=========== Extra Specs =========== The following is an overview of all extra specs recognized by nova in its default configuration. .. note:: Other services and virt drivers may provide additional extra specs not listed here. In addition, it is possible to register your own extra specs. For more information on the latter, refer to :doc:`/user/filter-scheduler`. Placement --------- The following extra specs are used during scheduling to modify the request sent to placement. ``resources`` ~~~~~~~~~~~~~ The following extra specs are used to request an amount of the specified resource from placement when scheduling. All extra specs expect an integer value. .. note:: Not all of the resource types listed below are supported by all virt drivers. .. extra-specs:: resources :summary: ``trait`` ~~~~~~~~~ The following extra specs are used to request a specified trait from placement when scheduling. All extra specs expect one of the following values: - ``required`` - ``forbidden`` .. note:: Not all of the traits listed below are supported by all virt drivers. .. extra-specs:: trait :summary: Scheduler Filters ----------------- The following extra specs are specific to various in-tree scheduler filters. ``aggregate_instance_extra_specs`` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The following extra specs are used to specify metadata that must be present on the aggregate of a host. If this metadata is not present or does not match the expected value, the aggregate and all hosts within in will be rejected. Requires the ``AggregateInstanceExtraSpecsFilter`` scheduler filter. .. extra-specs:: aggregate_instance_extra_specs ``capabilities`` ~~~~~~~~~~~~~~~~ The following extra specs are used to specify a host capability that must be provided by the host compute service. If this capability is not present or does not match the expected value, the host will be rejected. Requires the ``ComputeCapabilitiesFilter`` scheduler filter. 
All extra specs expect similar types of values: * ``=`` (equal to or greater than as a number; same as vcpus case) * ``==`` (equal to as a number) * ``!=`` (not equal to as a number) * ``>=`` (greater than or equal to as a number) * ``<=`` (less than or equal to as a number) * ``s==`` (equal to as a string) * ``s!=`` (not equal to as a string) * ``s>=`` (greater than or equal to as a string) * ``s>`` (greater than as a string) * ``s<=`` (less than or equal to as a string) * ``s<`` (less than as a string) * ```` (substring) * ```` (all elements contained in collection) * ```` (find one of these) * A specific value, e.g. ``true``, ``123``, ``testing`` Examples are: ``>= 5``, ``s== 2.1.0``, `` gcc``, `` aes mmx``, and `` fpu gpu`` .. note:: Not all operators will apply to all types of values. For example, the ``==`` operator should not be used for a string value - use ``s==`` instead. .. extra-specs:: capabilities :summary: Virt driver ----------- The following extra specs are used as hints to configure internals of a instance, from the bus used for paravirtualized devices to the amount of a physical device to passthrough to the instance. Most of these are virt driver-specific. ``quota`` ~~~~~~~~~ The following extra specs are used to configure quotas for various paravirtualized devices. They are only supported by the libvirt virt driver. .. extra-specs:: quota ``accel`` ~~~~~~~~~ The following extra specs are used to configure attachment of various accelerators to an instance. For more information, refer to :cyborg-doc:`the Cyborg documentation <>`. They are only supported by the libvirt virt driver. .. extra-specs:: accel ``pci_passthrough`` ~~~~~~~~~~~~~~~~~~~ The following extra specs are used to configure passthrough of a host PCI device to an instance. This requires prior host configuration. For more information, refer to :doc:`/admin/pci-passthrough`. They are only supported by the libvirt virt driver. .. extra-specs:: pci_passthrough ``hw`` ~~~~~~ The following extra specs are used to configure various attributes of instances. Some of the extra specs act as feature flags, while others tweak for example the guest-visible CPU topology of the instance. Except where otherwise stated, they are only supported by the libvirt virt driver. .. extra-specs:: hw ``hw_rng`` ~~~~~~~~~~ The following extra specs are used to configure a random number generator for an instance. They are only supported by the libvirt virt driver. .. extra-specs:: hw_rng ``hw_video`` ~~~~~~~~~~~~ The following extra specs are used to configure attributes of the default guest video device. They are only supported by the libvirt virt driver. .. extra-specs:: hw_video ``os`` ~~~~~~ The following extra specs are used to configure various attributes of instances when using the HyperV virt driver. They are only supported by the HyperV virt driver. .. extra-specs:: os ``powervm`` ~~~~~~~~~~~ The following extra specs are used to configure various attributes of instances when using the PowerVM virt driver. They are only supported by the PowerVM virt driver. .. extra-specs:: powervm ``vmware`` ~~~~~~~~~~ The following extra specs are used to configure various attributes of instances when using the VMWare virt driver. They are only supported by the VMWare virt driver. .. extra-specs:: vmware Others (uncategorized) ---------------------- The following extra specs are not part of a group. .. 
extra-specs:: ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/configuration/index.rst0000664000175000017500000000631100000000000021344 0ustar00zuulzuul00000000000000=================== Configuration Guide =================== The static configuration for nova lives in two main files: ``nova.conf`` and ``policy.json``. These are described below. For a bigger picture view on configuring nova to solve specific problems, refer to the :doc:`Nova Admin Guide `. Configuration ------------- * :doc:`Configuration Guide `: Detailed configuration guides for various parts of your Nova system. Helpful reference for setting up specific hypervisor backends. * :doc:`Config Reference `: A complete reference of all configuration options available in the ``nova.conf`` file. .. only:: html * :doc:`Sample Config File `: A sample config file with inline documentation. .. # NOTE(mriedem): This is the section where we hide things that we don't # actually want in the table of contents but sphinx build would fail if # they aren't in the toctree somewhere. .. # NOTE(amotoki): toctree needs to be placed at the end of the secion to # keep the document structure in the PDF doc. .. toctree:: :hidden: config .. # NOTE(amotoki): Sample files are only available in HTML document. # Inline sample files with literalinclude hit LaTeX processing error # like TeX capacity exceeded and direct links are discouraged in PDF doc. .. only:: html .. toctree:: :hidden: sample-config Policy ------ Nova, like most OpenStack projects, uses a policy language to restrict permissions on REST API actions. * :doc:`Policy Concepts `: Starting in the Ussuri release, Nova API policy defines new default roles with system scope capabilities. These new changes improve the security level and manageability of Nova API as they are richer in terms of handling access at system and project level token with 'Read' and 'Write' roles. .. toctree:: :hidden: policy-concepts * :doc:`Policy Reference `: A complete reference of all policy points in nova and what they impact. .. only:: html * :doc:`Sample Policy File `: A sample nova policy file with inline documentation. .. # NOTE(mriedem): This is the section where we hide things that we don't # actually want in the table of contents but sphinx build would fail if # they aren't in the toctree somewhere. .. # NOTE(amotoki): toctree needs to be placed at the end of the secion to # keep the document structure in the PDF doc. .. toctree:: :hidden: policy .. # NOTE(amotoki): Sample files are only available in HTML document. # Inline sample files with literalinclude hit LaTeX processing error # like TeX capacity exceeded and direct links are discouraged in PDF doc. .. only:: html .. toctree:: :hidden: sample-policy Extra Specs ----------- Nova uses *flavor extra specs* as a way to provide additional information to instances beyond basic information like amount of RAM or disk. This information can range from hints for the scheduler to hypervisor-specific configuration instructions for the instance. * :doc:`Extra Spec Reference `: A complete reference for all extra specs currently recognized and supported by nova. .. 
toctree:: :hidden: extra-specs ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/configuration/policy-concepts.rst0000664000175000017500000002523600000000000023357 0ustar00zuulzuul00000000000000Understanding Nova Policies =========================== Nova supports a rich policy system that has evolved significantly over its lifetime. Initially, this took the form of a large, mostly hand-written ``policy.json`` file but, starting in the Newton (14.0.0) release, policy defaults have been defined in the codebase, requiring the ``policy.json`` file only to override these defaults. In the Ussuri (21.0.0) release, further work was undertaken to address some issues that had been identified: #. No global vs project admin. The ``admin_only`` role is used for the global admin that is able to make almost any change to Nova, and see all details of the Nova system. The rule passes for any user with an admin role, it doesn’t matter which project is used. #. No read-only roles. Since several APIs tend to share a single policy rule for read and write actions, they did not provide the granularity necessary for read-only access roles. #. The ``admin_or_owner`` role did not work as expected. For most APIs with ``admin_or_owner``, the project authentication happened in a separate component than API in Nova that did not honor changes to policy. As a result, policy could not override hard-coded in-project checks. Keystone comes with ``admin``, ``member`` and ``reader`` roles by default. Please refer to :keystone-doc:`this document ` for more information about these new defaults. In addition, keystone supports a new "system scope" concept that makes it easier to protect deployment level resources from project or system level resources. Please refer to :keystone-doc:`this document ` and `system scope specification `_ to understand the scope concept. In the Nova 21.0.0 (Ussuri) release, Nova policies implemented the scope concept and default roles provided by keystone (admin, member, and reader). Using common roles from keystone reduces the likelihood of similar, but different, roles implemented across projects or deployments (e.g., a role called ``observer`` versus ``reader`` versus ``auditor``). With the help of the new defaults it is easier to understand who can do what across projects, reduces divergence, and increases interoperability. The below sections explain how these new defaults in the Nova can solve the first two issues mentioned above and extend more functionality to end users in a safe and secure way. More information is provided in the `nova specification `_. Scope ----- OpenStack Keystone supports different scopes in tokens. These are described :keystone-doc:`here `. Token scopes represent the layer of authorization. Policy ``scope_types`` represent the layer of authorization required to access an API. .. note:: The ``scope_type`` of each policy is hardcoded and is not overridable via the policy file. Nova policies have implemented the scope concept by defining the ``scope_type`` in policies. To know each policy's ``scope_type``, please refer to the :doc:`Policy Reference ` and look for ``Scope Types`` or ``Intended scope(s)`` in :doc:`Policy Sample File ` as shown in below examples. .. rubric:: ``system`` scope Policies with a ``scope_type`` of ``system`` means a user with a ``system-scoped`` token has permission to access the resource. This can be seen as a global role. 
All the system-level operation's policies have defaulted to ``scope_type`` of ``['system']``. For example, consider the ``GET /os-hypervisors`` API. .. code:: # List all hypervisors. # GET /os-hypervisors # Intended scope(s): system #"os_compute_api:os-hypervisors:list": "rule:system_reader_api" .. rubric:: ``project`` scope Policies with a ``scope_type`` of ``project`` means a user with a ``project-scoped`` token has permission to access the resource. Project-level only operation's policies are defaulted to ``scope_type`` of ``['project']``. For example, consider the ``POST /os-server-groups`` API. .. code:: # Create a new server group # POST /os-server-groups # Intended scope(s): project #"os_compute_api:os-server-groups:create": "rule:project_member_api" .. rubric:: ``system and project`` scope Policies with a ``scope_type`` of ``system and project`` means a user with a ``system-scoped`` or ``project-scoped`` token has permission to access the resource. All the system and project level operation's policies have defaulted to ``scope_type`` of ``['system', 'project']``. For example, consider the ``POST /servers/{server_id}/action (os-migrateLive)`` API. .. code:: # Live migrate a server to a new host without a reboot # POST /servers/{server_id}/action (os-migrateLive) # Intended scope(s): system, project #"os_compute_api:os-migrate-server:migrate_live": "rule:system_admin_api" These scope types provide a way to differentiate between system-level and project-level access roles. You can control the information with scope of the users. This means you can control that none of the project level role can get the hypervisor information. Policy scope is disabled by default to allow operators to migrate from the old policy enforcement system in a graceful way. This can be enabled by configuring the :oslo.config:option:`oslo_policy.enforce_scope` option to ``True``. .. note:: [oslo_policy] enforce_scope=True Roles ----- You can refer to :keystone-doc:`this ` document to know about all available defaults from Keystone. Along with the ``scope_type`` feature, Nova policy defines new defaults for each policy. .. rubric:: ``reader`` This provides read-only access to the resources within the ``system`` or ``project``. Nova policies are defaulted to below rules: .. code:: system_reader_api Default role:reader and system_scope:all system_or_project_reader Default (rule:system_reader_api) or (role:reader and project_id:%(project_id)s) .. rubric:: ``member`` This role is to perform the project level write operation with combination to the system admin. Nova policies are defaulted to below rules: .. code:: project_member_api Default role:member and project_id:%(project_id)s system_admin_or_owner Default (role:admin and system_scope:all) or (role:member and project_id:%(project_id)s) .. rubric:: ``admin`` This role is to perform the admin level write operation at system as well as at project-level operations. Nova policies are defaulted to below rules: .. code:: system_admin_api Default role:admin and system_scope:all project_admin_api Default role:admin and project_id:%(project_id)s system_admin_or_owner Default (role:admin and system_scope:all) or (role:member and project_id:%(project_id)s) With these new defaults, you can solve the problem of: #. Providing the read-only access to the user. Polices are made more granular and defaulted to reader rules. For exmaple: If you need to let someone audit your deployment for security purposes. #. Customize the policy in better way. 
For example, you will be able to provide access to project level user to perform live migration for their server or any other project with their token. Backward Compatibility ---------------------- Backward compatibility with versions prior to 21.0.0 (Ussuri) is maintained by supporting the old defaults and disabling the ``scope_type`` feature by default. This means the old defaults and deployments that use them will keep working as-is. However, we encourage every deployment to switch to new policy. ``scope_type`` will be enabled by default and the old defaults will be removed starting in the 23.0.0 (W) release. To implement the new default reader roles, some policies needed to become granular. They have been renamed, with the old names still supported for backwards compatibility. Migration Plan -------------- To have a graceful migration, Nova provides two flags to switch to the new policy completely. You do not need to overwrite the policy file to adopt the new policy defaults. Here is step wise guide for migration: #. Create scoped token: You need to create the new token with scope knowledge via below CLI: - :keystone-doc:`Create System Scoped Token `. - :keystone-doc:`Create Project Scoped Token `. #. Create new default roles in keystone if not done: If you do not have new defaults in Keystone then you can create and re-run the :keystone-doc:`Keystone Bootstrap `. Keystone added this support in 14.0.0 (Rocky) release. #. Enable Scope Checks The :oslo.config:option:`oslo_policy.enforce_scope` flag is to enable the ``scope_type`` features. The scope of the token used in the request is always compared to the ``scope_type`` of the policy. If the scopes do not match, one of two things can happen. If :oslo.config:option:`oslo_policy.enforce_scope` is True, the request will be rejected. If :oslo.config:option:`oslo_policy.enforce_scope` is False, an warning will be logged, but the request will be accepted (assuming the rest of the policy passes). The default value of this flag is False. .. note:: Before you enable this flag, you need to audit your users and make sure everyone who needs system-level access has a system role assignment in keystone. #. Enable new defaults The :oslo.config:option:`oslo_policy.enforce_new_defaults` flag switches the policy to new defaults-only. This flag controls whether or not to use old deprecated defaults when evaluating policies. If True, the old deprecated defaults are not evaluated. This means if any existing token is allowed for old defaults but is disallowed for new defaults, it will be rejected. The default value of this flag is False. .. note:: Before you enable this flag, you need to educate users about the different roles they need to use to continue using Nova APIs. #. Check for deprecated policies A few policies were made more granular to implement the reader roles. New policy names are available to use. If old policy names which are renamed are overwritten in policy file, then warning will be logged. Please migrate those policies to new policy names. We expect all deployments to migrate to new policy by 23.0.0 release so that we can remove the support of old policies. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/configuration/policy.rst0000664000175000017500000000040600000000000021533 0ustar00zuulzuul00000000000000============= Nova Policies ============= The following is an overview of all available policies in Nova. .. 
only:: html For a sample configuration file, refer to :doc:`sample-policy`. .. show-policy:: :config-file: etc/nova/nova-policy-generator.conf ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/configuration/sample-config.rst0000664000175000017500000000112300000000000022755 0ustar00zuulzuul00000000000000========================= Sample Configuration File ========================= The following is a sample nova configuration for adaptation and use. For a detailed overview of all available configuration options, refer to :doc:`/configuration/config`. The sample configuration can also be viewed in :download:`file form `. .. important:: The sample configuration file is auto-generated from nova when this documentation is built. You must ensure your version of nova matches the version of this documentation. .. literalinclude:: /_static/nova.conf.sample ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/configuration/sample-policy.rst0000664000175000017500000000074400000000000023017 0ustar00zuulzuul00000000000000======================= Sample Nova Policy File ======================= The following is a sample nova policy file for adaptation and use. The sample policy can also be viewed in :download:`file form `. .. important:: The sample policy file is auto-generated from nova when this documentation is built. You must ensure your version of nova matches the version of this documentation. .. literalinclude:: /_static/nova.policy.yaml.sample ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2464707 nova-21.2.4/doc/source/contributor/0000775000175000017500000000000000000000000017205 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/contributor/api-2.rst0000664000175000017500000000450500000000000020653 0ustar00zuulzuul00000000000000.. Copyright 2010-2011 OpenStack Foundation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .. TODO:: This should be merged into contributor/api Adding a Method to the OpenStack API ==================================== The interface is a mostly RESTful API. REST stands for Representational State Transfer and provides an architecture "style" for distributed systems using HTTP for transport. Figure out a way to express your request and response in terms of resources that are being created, modified, read, or destroyed. Routing ------- To map URLs to controllers+actions, OpenStack uses the Routes package, a clone of Rails routes for Python implementations. See http://routes.readthedocs.io/en/latest/ for more information. URLs are mapped to "action" methods on "controller" classes in ``nova/api/openstack/__init__/ApiRouter.__init__`` . 
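As a rough, self-contained sketch of that style of mapping (the resource, URLs and controller name below are purely illustrative and are not Nova's actual route table), the Routes package is used roughly like this; the two calls shown are explained just below:

.. code-block:: python

    from routes import Mapper

    mapper = Mapper()

    # Map a single URL to a single action on a controller.
    mapper.connect('/servers/detail',
                   controller='servers',
                   action='detail',
                   conditions={'method': ['GET']})

    # Wire up the standard collection/member URLs for a resource.
    mapper.resource('server', 'servers')

    # Resolve an incoming request to a controller + action.
    print(mapper.match('/servers/detail',
                       environ={'REQUEST_METHOD': 'GET'}))
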
See http://routes.readthedocs.io/en/latest/modules/mapper.html for all syntax, but you'll probably just need these two: - mapper.connect() lets you map a single URL to a single action on a controller. - mapper.resource() connects many standard URLs to actions on a controller. Controllers and actions ----------------------- Controllers live in ``nova/api/openstack``, and inherit from nova.wsgi.Controller. See ``nova/api/openstack/compute/servers.py`` for an example. Action methods take parameters that are sucked out of the URL by mapper.connect() or .resource(). The first two parameters are self and the WebOb request, from which you can get the req.environ, req.body, req.headers, etc. Serialization ------------- Actions return a dictionary, and wsgi.Controller serializes that to JSON. Faults ------ If you need to return a non-200, you should return faults.Fault(webob.exc.HTTPNotFound()) replacing the exception as appropriate. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/contributor/api-ref-guideline.rst0000664000175000017500000002610300000000000023227 0ustar00zuulzuul00000000000000======================= API reference guideline ======================= The API reference should be updated when compute APIs are modified (microversion is bumped, etc.). This page describes the guideline for updating the API reference. API reference ============= * `Compute API reference `_ The guideline to write the API reference ======================================== The API reference consists of the following files. Compute API reference --------------------- * API reference text: ``api-ref/source/*.inc`` * Parameter definition: ``api-ref/source/parameters.yaml`` * JSON request/response samples: ``doc/api_samples/*`` Structure of inc file --------------------- Each REST API is described in the text file (\*.inc). The structure of inc file is as follows: - Title (Resource name) - Introductory text and context The introductory text and the context for the resource in question should be added. This might include links to the API Concept guide, or building other supporting documents to explain a concept (like versioning). - API Name - REST Method - URL - Description See the `Description`_ section for more details. - Response codes - Request - Parameters - JSON request body example (if exists) - Response - Parameters - JSON response body example (if exists) - API Name (Next) - ... REST Method ----------- The guideline for describing HTTP methods is described in this section. All supported methods by resource should be listed in the API reference. The order of methods ~~~~~~~~~~~~~~~~~~~~ Methods have to be sorted by each URI in the following order: 1. GET 2. POST 3. PUT 4. PATCH (unused by Nova) 5. DELETE And sorted from broadest to narrowest. So for /severs it would be: 1. GET /servers 2. POST /servers 3. GET /servers/details 4. GET /servers/{server_id} 5. PUT /servers/{server_id} 6. DELETE /servers/{server_id} Method titles spelling and case ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The spelling and the case of method names in the title have to match what is in the code. For instance, the title for the section on method "Get Rdp Console" should be "Get Rdp Console (os-getRDPConsole Action)" NOT "Get Rdp Console (Os-Getrdpconsole Action)" Description ----------- The following items should be described in each API. Or links to the pages describing them should be added. * The purpose of the API(s) - e.g. 
Lists, creates, shows details for, updates, and deletes servers. - e.g. Creates a server. * Microversion - Deprecated - Warning - Microversion to start deprecation - Alternatives (superseded ways) and their links (if document is available) - Added - Microversion in which the API has been added - Changed behavior - Microversion to change behavior - Explanation of the behavior - Changed HTTP response codes - Microversion to change the response code - Explanation of the response code * Warning if direct use is not recommended - e.g. This is an admin level service API only designed to be used by other OpenStack services. The point of this API is to coordinate between Nova and Neutron, Nova and Cinder (and potentially future services) on activities they both need to be involved in, such as network hotplugging. Unless you are writing Neutron or Cinder code you should not be using this API. * Explanation about statuses of resource in question - e.g. The server status. - ``ACTIVE``. The server is active. * Supplementary explanation for parameters - Examples of query parameters - Parameters that are not specified at the same time - Values that cannot be specified. - e.g. A destination host is the same host. * Behavior - Config options to change the behavior and the effect - Effect to resource status - Ephemeral disks, attached volumes, attached network ports and others - Data loss or preserve contents - Scheduler - Whether the scheduler choose a destination host or not * Sort order of response results - Describe sorting order of response results if the API implements the order (e.g. The response is sorted by ``created_at`` and ``id`` in descending order by default) * Policy - Default policy (the admin only, the admin or the owner) - How to change the policy * Preconditions - Server status - Task state - Policy for locked servers - Quota - Limited support - e.g. Only qcow2 is supported - Compute driver support - If very few compute drivers support the operation, add a warning and a link to the support matrix of virt driver. - Cases that are not supported - e.g. A volume-backed server * Postconditions - If the operation is asynchronous, it should be "Asynchronous postconditions". - Describe what status/state resource in question becomes by the operation - Server status - Task state - Path of output file * Troubleshooting - e.g. If the server status remains ``BUILDING`` or shows another error status, the request failed. Ensure you meet the preconditions then investigate the compute node. * Related operations - Operations to be paired - e.g. Start and stop - Subsequent operation - e.g. "Confirm resize" after "Resize" operation * Performance - e.g. The progress of this operation depends on the location of the requested image, network I/O, host load, selected flavor, and other factors. * Progress - How to get progress of the operation if the operation is asynchronous. * Restrictions - Range that ID is unique - e.g. HostId is unique per account and is not globally unique. * How to avoid errors - e.g. The server to get console log from should set ``export LC_ALL=en_US.UTF-8`` in order to avoid incorrect unicode error. * Reference - Links to the API Concept guide, or building other supporting documents to explain a concept (like versioning). * Other notices Response codes ~~~~~~~~~~~~~~ The normal response codes (20x) and error response codes have to be listed. The order of response codes should be in ascending order. The description of typical error response codes are as follows: .. 
list-table:: Error response codes :header-rows: 1 * - Response codes - Description * - 400 - badRequest(400) * - 401 - unauthorized(401) * - 403 - forbidden(403) * - 404 - itemNotFound(404) * - 409 - conflict(409) * - 410 - gone(410) * - 501 - notImplemented(501) * - 503 - serviceUnavailable(503) In addition, the following explanations should be described. - Conditions under which each normal response code is returned (If there are multiple normal response codes.) - Conditions under which each error response code is returned Parameters ---------- Parameters need to be defined by 2 subsections. The one is in the 'Request' subsection, the other is in the 'Response' subsection. The queries, request headers and attributes go in the 'Request' subsection and response headers and attributes go in the 'Response' subsection. The order of parameters in each API ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The request and response parameters have to be listed in the following order in each API in the text file. 1. Header 2. Path 3. Query 4. Body a. Top level object (i.e. server) b. Required fields c. Optional fields d. Parameters added in microversions (by the microversion they were added) Parameter type ~~~~~~~~~~~~~~ The parameters are defined in the parameter file (``parameters.yaml``). The type of parameters have to be one of followings: * ``array`` It is a list. * ``boolean`` * ``float`` * ``integer`` * ``none`` The value is always ``null`` in a response or should be ``null`` in a request. * ``object`` The value is dict. * ``string`` If the value can be specified by multiple types, specify one type in the file and mention the other types in the description. Required or optional ~~~~~~~~~~~~~~~~~~~~ In the parameter file, define the ``required`` field in each parameter. .. list-table:: :widths: 15 85 * - ``true`` - The parameter must be specified in the request, or the parameter always appears in the response. * - ``false`` - It is not always necessary to specify the parameter in the request, or the parameter does not appear in the response in some cases. e.g. A config option defines whether the parameter appears in the response or not. A parameter appears when administrators call but does not appear when non-admin users call. If a parameter must be specified in the request or always appears in the response in the micoversion added or later, the parameter must be defined as required (``true``). Microversion ~~~~~~~~~~~~ If a parameter is available starting from a microversion, the microversion must be described by ``min_version`` in the parameter file. However, if the minimum microversion is the same as a microversion that the API itself is added, it is not necessary to describe the microversion. For example:: aggregate_uuid: description: | The UUID of the host aggregate. in: body required: true type: string min_version: 2.41 This example describes that ``aggregate_uuid`` is available starting from microversion 2.41. If a parameter is available up to a microversion, the microversion must be described by ``max_version`` in the parameter file. For example:: security_group_rules: description: | The number of allowed rules for each security group. in: body required: false type: integer max_version: 2.35 This example describes that ``security_group_rules`` is available up to microversion 2.35 (and has been removed since microversion 2.36). The order of parameters in the parameter file ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The order of parameters in the parameter file has to be kept as follows: 1. 
By in type a. Header b. Path c. Query d. Body 2. Then alphabetical by name Example ------- One or more examples should be provided for operations whose request and/or response contains a payload. The example should describe what the operation is attempting to do and provide a sample payload for the request and/or response as appropriate. Sample files should be created in the ``doc/api_samples`` directory and inlined by inclusion. When an operation has no payload in the response, a suitable message should be included. For example:: There is no body content for the response of a successful DELETE query. Examples for multiple microversions should be included in ascending microversion order. Reference ========= * `Verifying the Nova API Ref `_ * `The description for Parameters whose values are null `_ * `The definition of "Optional" parameter `_ * `How to document your OpenStack API service `_ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/contributor/api.rst0000664000175000017500000002703500000000000020517 0ustar00zuulzuul00000000000000Extending the API ================= Background ---------- Nova has v2.1 API frameworks which supports microversions. This document covers how to add API for the v2.1 API framework. A :doc:`microversions specific document ` covers the details around what is required for the microversions part. The v2.1 API framework is under ``nova/api`` and each API is implemented in ``nova/api/openstack/compute``. Note that any change to the Nova API to be merged will first require a spec be approved first. See `here `_ for the appropriate repository. For guidance on the design of the API please refer to the `OpenStack API WG `_ Basic API Controller -------------------- API controller includes the implementation of API methods for a resource. A very basic controller of a v2.1 API:: """Basic Controller""" from nova.api.openstack.compute.schemas import xyz from nova.api.openstack import extensions from nova.api.openstack import wsgi from nova.api import validation class BasicController(wsgi.Controller): # Define support for GET on a collection def index(self, req): data = {'param': 'val'} return data # Define support for POST on a collection @extensions.expected_errors((400, 409)) @validation.schema(xyz.create) @wsgi.response(201) def create(self, req, body): write_body_here = ok return response_body # Defining support for other RESTFul methods based on resource. See `servers.py `_ for ref. All of the controller modules should live in the ``nova/api/openstack/compute`` directory. URL Mapping to API ~~~~~~~~~~~~~~~~~~ The URL mapping is based on the plain list which routes the API request to appropriate controller and method. Each API needs to add its route information in ``nova/api/openstack/compute/routes.py``. A basic skeleton of URL mapping in routers.py:: """URL Mapping Router List""" import functools import nova.api.openstack from nova.api.openstack.compute import basic_api # Create a controller object basic_controller = functools.partial( _create_controller, basic_api.BasicController, [], []) # Routing list structure: # ( # ('Route path': { # 'HTTP method: [ # 'Controller', # 'The method of controller is used to handle this route' # ], # ... # }), # ... # ) ROUTE_LIST = ( . . . ('/basic', { 'GET': [basic_controller, 'index'], 'POST': [basic_controller, 'create'] }), . . . ) Complete routing list can be found in `routes.py `_. 
Policy ~~~~~~ For more info about policy, see :doc:`policies `, Also look at the ``context.can(...)`` call in existing API controllers. Modularity ~~~~~~~~~~ The Nova REST API is separated into different controllers in the directory 'nova/api/openstack/compute/' Because microversions are supported in the Nova REST API, the API can be extended without any new controller. But for code readability, the Nova REST API code still needs modularity. Here are rules for how to separate modules: * You are adding a new resource The new resource should be in standalone module. There isn't any reason to put different resources in a single module. * Add sub-resource for existing resource To prevent an existing resource module becoming over-inflated, the sub-resource should be implemented in a separate module. * Add extended attributes for existing resource In normally, the extended attributes is part of existing resource's data model too. So this can be added into existing resource module directly and lightly. To avoid namespace complexity, we should avoid to add extended attributes in existing extended models. New extended attributes needn't any namespace prefix anymore. JSON-Schema ~~~~~~~~~~~ The v2.1 API validates a REST request body with JSON-Schema library. Valid body formats are defined with JSON-Schema in the directory 'nova/api/openstack/compute/schemas'. Each definition is used at the corresponding method with the ``validation.schema`` decorator like:: @validation.schema(schema.update_something) def update(self, req, id, body): .... Similarly to controller modularity, JSON-Schema definitions can be added in same or separate JSON-Schema module. The following are the combinations of extensible API and method name which returns additional JSON-Schema parameters: * Create a server API - get_server_create_schema() For example, keypairs extension(Keypairs class) contains the method get_server_create_schema() which returns:: { 'key_name': parameter_types.name, } then the parameter key_name is allowed on Create a server API. .. note:: Currently only create schema are implemented in modular way. Final goal is to merge them all and define the concluded process in this doc. These are essentially hooks into the servers controller which allow other controller to modify behaviour without having to modify servers.py. In the past not having this capability led to very large chunks of unrelated code being added to servers.py which was difficult to maintain. Unit Tests ---------- Unit tests for the API can be found under path ``nova/tests/unit/api/openstack/compute/``. Unit tests for the API are generally negative scenario tests, because the positive scenarios are tested with functional API samples tests. Negative tests would include such things as: * Request schema validation failures, for both the request body and query parameters * HTTPNotFound or other >=400 response code failures Functional tests and API Samples -------------------------------- All functional API changes, including new microversions - especially if there are new request or response parameters, should have new functional API samples tests. The API samples tests are made of two parts: * The API sample for the reference docs. These are found under path ``doc/api_samples/``. There is typically one directory per API controller with subdirectories per microversion for that API controller. The unversioned samples are used for the base v2.0 / v2.1 APIs. 
* Corresponding API sample templates found under path ``nova/tests/functional/api_sample_tests/api_samples``. These have a similar structure to the API reference docs samples, except the format of the sample can include substitution variables filled in by the tests where necessary, for example, to substitute things that change per test run, like a server UUID. The actual functional tests are found under path ``nova/tests/functional/api_sample_tests/``. Most, if not all, API samples tests extend the ``ApiSampleTestBaseV21`` class which extends ``ApiSampleTestBase``. These base classes provide the framework for making a request using an API reference doc sample and validating the response using the corresponding template file, along with any variable substitutions that need to be made. Note that it is possible to automatically generate the API reference doc samples using the templates by simply running the tests using ``tox -r -e api-samples``. This relies, of course, upon the test and templates being correct for the test to pass, which may take some iteration. In general, if you are adding a new microversion to an existing API controller, it is easiest to simply copy an existing test and modify it for the new microversion and the new samples/templates. The functional API samples tests are not the simplest thing in the world to get used to, and can be very frustrating at times when they fail in not obvious ways. If you need help debugging a functional API sample test failure, feel free to post your work-in-progress change for review and ask for help in the ``openstack-nova`` freenode IRC channel. Documentation ------------- All API changes must also include updates to the compute API reference, which can be found under path ``api-ref/source/``. Things to consider here include: * Adding new request and/or response parameters with a new microversion * Marking existing parameters as deprecated in a new microversion More information on the compute API reference format and conventions can be found in the :doc:`/contributor/api-ref-guideline`. For more detailed documentation of certain aspects of the API, consider writing something into the compute API guide found under path ``api-guide/source/``. Deprecating APIs ---------------- Compute REST API routes may be deprecated by capping a method or functionality using microversions. For example, the :ref:`2.36 microversion <2.36 microversion>` deprecated several compute REST API routes which only worked when using the since-removed ``nova-network`` service or are proxies to other external services like cinder, neutron, etc. The point of deprecating with microversions is users can still get the same functionality at a lower microversion but there is at least some way to signal to users that they should stop using the REST API. The general steps for deprecating a REST API are: * Set a maximum allowed microversion for the route. Requests beyond that microversion on that route will result in a ``404 HTTPNotFound`` error. * Update the Compute API reference documentation to indicate the route is deprecated and move it to the bottom of the list with the other deprecated APIs. * Deprecate, and eventually remove, related CLI / SDK functionality in other projects like python-novaclient. 
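As an illustration of the first step in the list above (capping a route at a maximum microversion), the cap is normally expressed with the ``api_version`` decorator on the controller method. The controller and resource below are invented for the example and are not copied from Nova's real code:

.. code-block:: python

    from nova.api.openstack import wsgi


    class WidgetController(wsgi.Controller):
        """A made-up controller, shown only to illustrate capping."""

        # Requests at microversion 2.36 or later will receive a
        # 404 HTTPNotFound response for this route.
        @wsgi.Controller.api_version('2.1', '2.35')
        def index(self, req):
            return {'widgets': []}
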
Removing deprecated APIs ------------------------ Nova tries to maintain backward compatibility with all REST APIs as much as possible, but when enough time has lapsed, there are few (if any) users or there are supported alternatives, the underlying service code that supports a deprecated REST API, like in the case of ``nova-network``, is removed and the REST API must also be effectively removed. The general steps for removing support for a deprecated REST API are: * The `route mapping`_ will remain but all methods will return a ``410 HTTPGone`` error response. This is slightly different then the ``404 HTTPNotFound`` error response a user will get for trying to use a microversion that does not support a deprecated API. 410 means the resource is gone and not coming back, which is more appropriate when the API is fully removed and will not work at any microversion. * Related configuration options, policy rules, and schema validation are removed. * The API reference documentation should be updated to move the documentation for the removed API to the `Obsolete APIs`_ section and mention in which release the API was removed. * Unit tests can be removed. * API sample functional tests can be changed to assert the 410 response behavior, but can otherwise be mostly gutted. Related \*.tpl files for the API sample functional tests can be deleted since they will not be used. * An "upgrade" :doc:`release note ` should be added to mention the REST API routes that were removed along with any related configuration options that were also removed. Here is an example of the above steps: https://review.opendev.org/567682/ .. _route mapping: https://opendev.org/openstack/nova/src/branch/master/nova/api/openstack/compute/routes.py .. _Obsolete APIs: https://docs.openstack.org/api-ref/compute/#obsolete-apis ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/contributor/blueprints.rst0000664000175000017500000000570500000000000022135 0ustar00zuulzuul00000000000000================================== Blueprints, Specs and Priorities ================================== Like most OpenStack projects, Nova uses `blueprints`_ and specifications (specs) to track new features, but not all blueprints require a spec. This document covers when a spec is needed. .. note:: Nova's specs live at: `specs.openstack.org`_ .. _`blueprints`: http://docs.openstack.org/infra/manual/developers.html#working-on-specifications-and-blueprints .. _`specs.openstack.org`: http://specs.openstack.org/openstack/nova-specs/ Specs ===== A spec is needed for any feature that requires a design discussion. All features need a blueprint but not all blueprints require a spec. If a new feature is straightforward enough that it doesn't need any design discussion, then no spec is required. In order to provide the sort of documentation that would otherwise be provided via a spec, the commit message should include a ``DocImpact`` flag and a thorough description of the feature from a user/operator perspective. Guidelines for when a feature doesn't need a spec. * Is the feature a single self contained change? * If the feature touches code all over the place, it probably should have a design discussion. * If the feature is big enough that it needs more than one commit, it probably should have a design discussion. * Not an API change. * API changes always require a design discussion. 
When a blueprint does not require a spec it still needs to be approved before the code which implements the blueprint is merged. Specless blueprints are discussed and potentially approved during the `Open Discussion` portion of the weekly `nova IRC meeting`_. See `trivial specifications`_ for more details. Project Priorities =================== * Pick several project priority themes, in the form of use cases, to help us prioritize work * Generate list of improvement blueprints based on the themes * Produce rough draft of list going into summit and finalize the list at the summit * Publish list of project priorities and look for volunteers to work on them * Update spec template to include * Specific use cases * State if the spec is project priority or not * Keep an up to date list of project priority blueprints that need code review in an etherpad. * Consumers of project priority and project priority blueprint lists: * Reviewers looking for direction of where to spend their blueprint review time. If a large subset of nova-core doesn't use the project priorities it means the core team is not aligned properly and should revisit the list of project priorities * The blueprint approval team, to help find the right balance of blueprints * Contributors looking for something to work on * People looking for what they can expect in the next release .. _nova IRC meeting: http://eavesdrop.openstack.org/#Nova_Team_Meeting .. _trivial specifications: https://specs.openstack.org/openstack/nova-specs/readme.html#trivial-specifications ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/contributor/code-review.rst0000664000175000017500000002643300000000000022160 0ustar00zuulzuul00000000000000.. _code-review: ========================== Code Review Guide for Nova ========================== OpenStack has a general set of code review guidelines: https://docs.openstack.org/infra/manual/developers.html#peer-review What follows is a very terse set of points for reviewers to consider when looking at nova code. These are things that are important for the continued smooth operation of Nova, but that tend to be carried as "tribal knowledge" instead of being written down. It is an attempt to boil down some of those things into nearly checklist format. Further explanation about why some of these things are important belongs elsewhere and should be linked from here. Upgrade-Related Concerns ======================== RPC API Versions ---------------- * If an RPC method is modified, the following needs to happen: * The manager-side (example: compute/manager) needs a version bump * The manager-side method needs to tolerate older calls as well as newer calls * Arguments can be added as long as they are optional. Arguments cannot be removed or changed in an incompatible way. * The RPC client code (example: compute/rpcapi.py) needs to be able to honor a pin for the older version (see self.client.can_send_version() calls). If we are pinned at 1.5, but the version requirement for a method is 1.7, we need to be able to formulate the call at version 1.5. * Methods can drop compatibility with older versions when we bump a major version. * RPC methods can be deprecated by removing the client (example: compute/rpcapi.py) implementation. However, the manager method must continue to exist until the major version of the API is bumped. Object Versions --------------- * If a tracked attribute (i.e. 
listed in fields) or remotable method is added, or a method is changed, the object version must be bumped. Changes for methods follow the same rules as above for regular RPC methods. We have tests to try to catch these changes, which remind you to bump the version and then correct the version-hash in the tests. * Field types cannot be changed. If absolutely required, create a new attribute and deprecate the old one. Ideally, support converting the old attribute to the new one with an obj_load_attr() handler. There are some exceptional cases where changing the type can be allowed, but care must be taken to ensure it does not affect the wireline API. * New attributes should be removed from the primitive in obj_make_compatible() if the attribute was added after the target version. * Remotable methods should not return unversioned structures wherever possible. They should return objects or simple values as the return types are not (and cannot) be checked by the hash tests. * Remotable methods should not take complex structures as arguments. These cannot be verified by the hash tests, and thus are subject to drift. Either construct an object and pass that, or pass all the simple values required to make the call. * Changes to an object as described above will cause a hash to change in TestObjectVersions. This is a reminder to the developer and the reviewer that the version needs to be bumped. There are times when we need to make a change to an object without bumping its version, but those cases are only where the hash logic detects a change that is not actually a compatibility issue and must be handled carefully. Database Schema --------------- * Changes to the database schema must generally be additive-only. This means you can add columns, but you can't drop or alter a column. We have some hacky tests to try to catch these things, but they are fragile. Extreme reviewer attention to non-online alterations to the DB schema will help us avoid disaster. * Dropping things from the schema is a thing we need to be extremely careful about, making sure that the column has not been used (even present in one of our models) for at least a release. * Data migrations must not be present in schema migrations. If data needs to be converted to another format, or moved from one place to another, then that must be done while the database server remains online. Generally, this can and should be hidden within the object layer so that an object can load from either the old or new location, and save to the new one. * Multiple Cells v2 cells are supported started in the Pike release. As such, any online data migrations that move data from a cell database to the API database must be multi-cell aware. REST API ========= When making a change to the nova API, we should always follow `the API WG guidelines `_ rather than going for "local" consistency. Developers and reviewers should read all of the guidelines, but they are very long. So here are some key points: * `Terms `_ * ``project`` should be used in the REST API instead of ``tenant``. * ``server`` should be used in the REST API instead of ``instance``. * ``compute`` should be used in the REST API instead of ``nova``. * `Naming Conventions `_ * URL should not include underscores; use hyphens ('-') instead. * The field names contained in a request/response body should use snake_case style, not CamelCase or Mixed_Case style. 
* `HTTP Response Codes `_ * Synchronous resource creation: ``201 Created`` * Asynchronous resource creation: ``202 Accepted`` * Synchronous resource deletion: ``204 No Content`` * For all other successful operations: ``200 OK`` Config Options ============== Location -------- The central place where all config options should reside is the ``/nova/conf/`` package. Options that are in named sections of ``nova.conf``, such as ``[serial_console]``, should be in their own module. Options that are in the ``[DEFAULT]`` section should be placed in modules that represent a natural grouping. For example, all of the options that affect the scheduler would be in the ``scheduler.py`` file, and all the networking options would be moved to ``network.py``. Implementation -------------- A config option should be checked for: * A short description which explains what it does. If it is a unit (e.g. timeouts or so) describe the unit which is used (seconds, megabyte, mebibyte, ...). * A long description which explains the impact and scope. The operators should know the expected change in the behavior of Nova if they tweak this. * Descriptions/Validations for the possible values. * If this is an option with numeric values (int, float), describe the edge cases (like the min value, max value, 0, -1). * If this is a DictOpt, describe the allowed keys. * If this is a StrOpt, list any possible regex validations, or provide a list of acceptable and/or prohibited values. Previously used sections which explained which services consume a specific config option and which options are related to each other got dropped because they are too hard to maintain: http://lists.openstack.org/pipermail/openstack-dev/2016-May/095538.html Third Party Tests ================= Any change that is not tested well by the Jenkins check jobs must have a recent +1 vote from an appropriate third party test (or tests) on the latest patchset, before a core reviewer is allowed to make a +2 vote. Virt drivers ------------ At a minimum, we must ensure that any technology specific code has a +1 from the relevant third party test, on the latest patchset, before a +2 vote can be applied. Specifically, changes to nova/virt/driver/ need a +1 vote from the respective third party CI. For example, if you change something in the XenAPI virt driver, you must wait for a +1 from the XenServer CI on the latest patchset, before you can give that patch set a +2 vote. This is important to ensure: * We keep those drivers stable * We don't break that third party CI Notes ----- Please note: * Long term, we should ensure that any patch a third party CI is allowed to vote on, can be blocked from merging by that third party CI. But we need a lot more work to make something like that feasible, hence the proposed compromise. * While its possible to break a virt driver CI system by changing code that is outside the virt drivers, this policy is not focusing on fixing that. A third party test failure should always be investigated, but the failure of a third party test to report in a timely manner should not block others. * We are only talking about the testing of in-tree code. Please note the only public API is our REST API, see: :doc:`policies` Should I run the experimental queue jobs on this change? ======================================================== Because we can't run all CI jobs in the check and gate pipelines, some jobs can be executed on demand, thanks to the experimental pipeline. 
To run the experimental jobs, you need to comment your Gerrit review with "check experimental". The experimental jobs aim to test specific features, such as LXC containers or DVR with multiple nodes. Also, it might be useful to run them when we want to test backward compatibility with tools that deploy OpenStack outside Devstack (e.g. TripleO, etc). They can produce a non-voting feedback of whether the system continues to work when we deprecate or remove some options or features in Nova. The experimental queue can also be used to test that new CI jobs are correct before making them voting. Database Schema =============== * Use the ``utf8`` charset only where necessary. Some string fields, such as hex-stringified UUID values, MD5 fingerprints, SHA1 hashes or base64-encoded data, are always interpreted using ASCII encoding. A hex-stringified UUID value in ``latin1`` is 1/3 the size of the same field in ``utf8``, impacting performance without bringing any benefit. If there are no string type columns in the table, or the string type columns contain **only** the data described above, then stick with ``latin1``. Microversion API ================ If a new microversion API is added, the following needs to happen: * A new patch for the microversion API change in python-novaclient side should be submitted before the microversion change in Nova is merged. See :python-novaclient-doc:`Adding support for a new microversion ` in python-novaclient for more details. * If the microversion changes the response schema, a new schema and test for the microversion must be added to Tempest. The microversion change in Nova should not be merged until the Tempest test is submitted and at least passing; it does not need to be merged yet as long as it is testing the Nova change via Depends-On. The Nova microversion change commit message should reference the Change-Id of the Tempest test for reviewers to identify it. Notifications ============= * Every new notification type shall use the new versioned notification infrastructure documented in :doc:`/reference/notifications` Release Notes ============= A release note is required on changes that have upgrade impact, security impact, introduce a new feature, fix Critical bugs, or fix long-standing bugs with high importance. See :doc:`releasenotes` for details on how to create a release note, each available section and the type of content required. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/contributor/contributing.rst0000664000175000017500000000370100000000000022447 0ustar00zuulzuul00000000000000============================ So You Want to Contribute... ============================ For general information on contributing to OpenStack, please check out the `contributor guide `_ to get started. It covers all the basics that are common to all OpenStack projects: the accounts you need, the basics of interacting with our Gerrit review system, how we communicate as a community, etc. Below will cover the more project specific information you need to get started with nova. Communication ~~~~~~~~~~~~~ :doc:`how-to-get-involved` Contacting the Core Team ~~~~~~~~~~~~~~~~~~~~~~~~ The overall structure of the Nova team is documented on `the wiki `_. New Feature Planning ~~~~~~~~~~~~~~~~~~~~ If you want to propose a new feature please read the :doc:`blueprints` page. Task Tracking ~~~~~~~~~~~~~ We track our tasks in `Launchpad `__. 
If you're looking for some smaller, easier work item to pick up and get started on, search for the 'low-hanging-fruit' tag. Reporting a Bug ~~~~~~~~~~~~~~~ You found an issue and want to make sure we are aware of it? You can do so on `Launchpad `__. More info about Launchpad usage can be found on `OpenStack docs page `_. Getting Your Patch Merged ~~~~~~~~~~~~~~~~~~~~~~~~~ All changes proposed to the Nova requires two ``Code-Review +2`` votes from Nova core reviewers before one of the core reviewers can approve patch by giving ``Workflow +1`` vote. More detailed guidelines for reviewers of Nova patches are available at :doc:`code-review`. Project Team Lead Duties ~~~~~~~~~~~~~~~~~~~~~~~~ All common PTL duties are enumerated in the `PTL guide `_. For the Nova specific duties you can read the Nova PTL guide :doc:`ptl-guide` ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/contributor/development-environment.rst0000664000175000017500000001541500000000000024631 0ustar00zuulzuul00000000000000.. Copyright 2010-2011 United States Government as represented by the Administrator of the National Aeronautics and Space Administration. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ======================= Development Quickstart ======================= This page describes how to setup and use a working Python development environment that can be used in developing nova on Ubuntu, Fedora or Mac OS X. These instructions assume you're already familiar with git. Following these instructions will allow you to build the documentation and run the nova unit tests. If you want to be able to run nova (i.e., launch VM instances), you will also need to --- either manually or by letting DevStack do it for you --- install libvirt and at least one of the `supported hypervisors`_. Running nova is currently only supported on Linux, although you can run the unit tests on Mac OS X. .. _supported hypervisors: http://wiki.openstack.org/HypervisorSupportMatrix .. note:: For how to contribute to Nova, see HowToContribute_. Nova uses the Gerrit code review system, GerritWorkflow_. .. _GerritWorkflow: http://docs.openstack.org/infra/manual/developers.html#development-workflow .. _HowToContribute: http://docs.openstack.org/infra/manual/developers.html .. _`docs.openstack.org`: http://docs.openstack.org Setup ===== There are two ways to create a development environment: using DevStack, or explicitly installing and cloning just what you need. Using DevStack -------------- See `Devstack`_ Documentation. If you would like to use Vagrant, there is a `Vagrant`_ for DevStack. .. _`Devstack`: http://docs.openstack.org/developer/devstack/ .. _`Vagrant`: https://github.com/openstack-dev/devstack-vagrant/blob/master/README.md .. Until the vagrant markdown documents are rendered somewhere on .openstack.org, linking to github Explicit Install/Clone ---------------------- DevStack installs a complete OpenStack environment. 
Alternatively, you can explicitly install and clone just what you need for Nova development. Getting the code ```````````````` Grab the code from git:: git clone https://opendev.org/openstack/nova cd nova Linux Systems ````````````` The first step of this process is to install the system (not Python) packages that are required. Following are instructions on how to do this on Linux and on the Mac. .. note:: This section is tested for Nova on Ubuntu (14.04-64) and Fedora-based (RHEL 6.1) distributions. Feel free to add notes and change according to your experiences or operating system. Install the prerequisite packages listed in the ``bindep.txt`` file. On Debian-based distributions (e.g., Debian/Mint/Ubuntu):: sudo apt-get install python-pip sudo pip install tox tox -e bindep sudo apt-get install On Fedora-based distributions (e.g., Fedora/RHEL/CentOS/Scientific Linux):: sudo yum install python-pip sudo pip install tox tox -e bindep sudo yum install On openSUSE-based distributions (SLES, openSUSE Leap / Tumbleweed):: sudo zypper in python-pip sudo pip install tox tox -e bindep sudo zypper in Mac OS X Systems ```````````````` Install virtualenv:: sudo easy_install virtualenv Check the version of OpenSSL you have installed:: openssl version The stock version of OpenSSL that ships with Mac OS X 10.6 (OpenSSL 0.9.8l) or Mac OS X 10.7 (OpenSSL 0.9.8r) or Mac OS X 10.10.3 (OpenSSL 0.9.8zc) works fine with nova. OpenSSL versions from brew like OpenSSL 1.0.1k work fine as well. Brew is very useful for installing dependencies. As a minimum for running tests, install the following:: brew install python3 postgres python3 -mpip install tox Building the Documentation ========================== Install the prerequisite packages: graphviz To do a full documentation build, issue the following command while the nova directory is current. .. code-block:: bash tox -edocs That will create a Python virtual environment, install the needed Python prerequisites in that environment, and build all the documentation in that environment. Running unit tests ================== See `Running Python Unit Tests`_. .. _`Running Python Unit Tests`: https://docs.openstack.org/project-team-guide/project-setup/python.html#running-python-unit-tests Note that some unit and functional tests use a database. See the file ``tools/test-setup.sh`` on how the databases are set up in the OpenStack CI environment and replicate it in your test environment. Using the pre-commit hook ========================= Nova makes use of the `pre-commit framework `__ to allow running of some linters on each commit. This must be enabled locally to function: .. code-block:: shell $ pip install --user pre-commit $ pre-commit install --allow-missing-config Using a remote debugger ======================= Some modern IDE such as pycharm (commercial) or Eclipse (open source) support remote debugging. In order to run nova with remote debugging, start the nova process with the following parameters:: --remote_debug-host --remote_debug-port Before you start your nova process, start the remote debugger using the instructions for that debugger: * For pycharm - http://blog.jetbrains.com/pycharm/2010/12/python-remote-debug-with-pycharm/ * For Eclipse - http://pydev.org/manual_adv_remote_debugger.html More detailed instructions are located here - https://wiki.openstack.org/wiki/Nova/RemoteDebugging Using fake computes for tests ============================= The number of instances supported by fake computes is not limited by physical constraints. 
It allows you to perform stress tests on a deployment with few resources (typically a laptop). Take care to avoid using scheduler filters that will limit the number of instances per compute, such as ``AggregateCoreFilter``. Fake computes can also be used in multi hypervisor-type deployments in order to take advantage of fake and "real" computes during tests: * create many fake instances for stress tests * create some "real" instances for functional tests Fake computes can be used for testing Nova itself but also applications on top of it. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/contributor/documentation.rst0000664000175000017500000000704200000000000022613 0ustar00zuulzuul00000000000000======================== Documentation Guidelines ======================== These are some basic guidelines for contributing to documentation in nova. Review Guidelines ================= Documentation-only patches differ from code patches in a few ways. * They are often written by users / operators that aren't plugged into daily cycles of nova or on IRC * Outstanding patches are far more likely to trigger merge conflict in Git than code patches * There may be wide variation on points of view of what the "best" or "clearest" way is to say a thing This all can lead to a large number of practical documentation improvements stalling out because the author submitted the fix, and does not have the time to merge conflict chase or is used to the Gerrit follow up model. As such, documentation patches should be evaluated in the basic context of "does this make things better than the current tree". Patches are cheap, it can always be further enhanced in future patches. Typo / trivial documentation only fixes should get approved with a single +2. How users consume docs ====================== The current primary target for all documentation in nova is the web. While it is theoretically possible to generate PDF versions of the content, the tree is not currently well structured for that, and it's not clear there is an audience for that. The main nova docs tree ``doc/source`` is published per release, so there will be copies of all of this as both the ``latest`` URL (which is master), and for every stable release (e.g. ``pike``). .. note:: This raises an interesting and unexplored question about whether we want all of ``doc/source`` published with stable branches that will be stale and unimproved as we address content in ``latest``. The ``api-ref`` and ``api-guide`` publish only from master to a single site on `docs.openstack.org`. As such, they are effectively branchless. Guidelines for consumable docs ============================== * Give users context before following a link Most users exploring our documentation will be trying to learn about our software. Entry and subpages that provide links to in depth topics need to provide some basic context about why someone would need to know about a ``filter scheduler`` before following the link named filter scheduler. Providing these summaries helps the new consumer come up to speed more quickly. * Doc moves require ``.htaccess`` redirects If a file is moved in a documentation source tree, we should be aware that it might be linked from external sources, and is now a ``404 Not Found`` error for real users. All doc moves should include an entry in ``doc/source/_extra/.htaccess`` to redirect from the old page to the new page. 
* Images are good, please provide source An image is worth a 1000 words, but can go stale pretty quickly. We ideally want ``png`` files for display on the web, but that's a non modifiable format. For any new diagram contributions we should also get some kind of source format (``svg`` is ideal as it can be modified with open tools) along with ``png`` formats. Long Term TODOs =============== * Sort out our toctree / sidebar navigation During the bulk import of the install, admin, config guides we started with a unified toctree, which was a ton of entries, and made the navigation sidebar in Nova incredibly confusing. The short term fix was to just make that almost entirely hidden and rely on structured landing and sub pages. Long term it would be good to reconcile the toctree / sidebar into something that feels coherent. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/contributor/evacuate-vs-rebuild.rst0000664000175000017500000001273600000000000023617 0ustar00zuulzuul00000000000000=================== Evacuate vs Rebuild =================== The `evacuate API`_ and `rebuild API`_ are commonly confused in nova because the internal `conductor code`_ and `compute code`_ use the same methods called ``rebuild_instance``. This document explains some of the differences in what happens between an evacuate and rebuild operation. High level ~~~~~~~~~~ *Evacuate* is an operation performed by an administrator when a compute service or host is encountering some problem, goes down and needs to be fenced from the network. The servers that were running on that compute host can be rebuilt on a **different** host using the **same** image. If the source and destination hosts are running on shared storage then the root disk image of the servers can be retained otherwise the root disk image (if not using a volume-backed server) will be lost. This is one example of why it is important to attach data volumes to a server to store application data and leave the root disk for the operating system since data volumes will be re-attached to the server as part of the evacuate process. *Rebuild* is an operation which can be performed by a non-administrative owner of the server (the user) performed on the **same** compute host to change certain aspects of the server, most notably using a **different** image. Note that the image does not *have* to change and in the case of volume-backed servers the image `currently cannot change`_. Other attributes of the server can be changed as well such as ``key_name`` and ``user_data``. See the `rebuild API`_ reference for full usage details. When a user rebuilds a server they want to change it which requires re-spawning the guest in the hypervisor but retain the UUID, volumes and ports attached to the server. For a non-volume-backed server the root disk image is rebuilt. Scheduling ~~~~~~~~~~ Evacuate always schedules the server to another host and rebuild always occurs on the same host. Note that when `rebuilding with a different image`_, the request is run through the scheduler to ensure the new image is still valid for the current compute host. Image ~~~~~ As noted above, the image that the server uses during an evacuate operation does not change. The image used to rebuild a server *may* change but does not have to and in the case of volume-backed servers *cannot* change. 
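To make the distinction concrete, below is a minimal, illustrative sketch of how a client triggers each operation; both are actions POSTed to the same ``/servers/{server_id}/action`` endpoint of the compute API. The endpoint URL, token and UUIDs are placeholders, and the exact parameters each action accepts vary by microversion, so treat this as a sketch rather than authoritative usage (see the `evacuate API`_ and `rebuild API`_ references for details)::

    import requests

    COMPUTE_ENDPOINT = 'http://controller:8774/v2.1'  # placeholder endpoint
    TOKEN = '<keystone-token>'                        # placeholder token
    SERVER_ID = '<server-uuid>'                       # placeholder server UUID

    headers = {
        'X-Auth-Token': TOKEN,
        # Pin a microversion; the parameters these actions accept vary
        # across microversions.
        'OpenStack-API-Version': 'compute 2.14',
    }
    action_url = '%s/servers/%s/action' % (COMPUTE_ENDPOINT, SERVER_ID)

    # Evacuate (admin-only): rebuild the server on a *different* host using
    # the *same* image; with no host given, the scheduler picks one.
    requests.post(action_url, headers=headers, json={'evacuate': {}})

    # Rebuild (server owner): re-spawn the server on the *same* host,
    # optionally with a different image.
    requests.post(action_url, headers=headers,
                  json={'rebuild': {'imageRef': '<image-uuid>'}})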
Resource claims ~~~~~~~~~~~~~~~ The compute service ``ResourceTracker`` has a `claims`_ operation which is used to ensure resources are available before building a server on the host. The scheduler performs the initial filtering of hosts to ensure a server can be built on a given host and the compute claim is essentially meant as a secondary check to prevent races when the scheduler has out of date information or when there are concurrent build requests going to the same host. During an evacuate operation there is a `rebuild claim`_ since the server is being re-built on a different host. During a rebuild operation, since the flavor does not change, there is `no claim`_ made since the host does not change. Allocations ~~~~~~~~~~~ Since the 16.0.0 (Pike) release, the scheduler uses the `placement service`_ to filter compute nodes (resource providers) based on information in the flavor and image used to build the server. Once the scheduler runs through its filters and weighers and picks a host, resource class `allocations`_ are atomically consumed in placement with the server as the consumer. During an evacuate operation, the allocations held by the server consumer against the source compute node resource provider are left intact since the source compute service is down. Note that `migration-based allocations`_, which were introduced in the 17.0.0 (Queens) release, do not apply to evacuate operations but only resize, cold migrate and live migrate. So once a server is successfully evacuated to a different host, the placement service will track allocations for that server against both the source and destination compute node resource providers. If the source compute service is restarted after being evacuated and fixed, the compute service will `delete the old allocations`_ held by the evacuated servers. During a rebuild operation, since neither the host nor flavor changes, the server allocations remain intact. .. _evacuate API: https://docs.openstack.org/api-ref/compute/#evacuate-server-evacuate-action .. _rebuild API: https://docs.openstack.org/api-ref/compute/#rebuild-server-rebuild-action .. _conductor code: https://opendev.org/openstack/nova/src/tag/19.0.0/nova/conductor/manager.py#L944 .. _compute code: https://opendev.org/openstack/nova/src/tag/19.0.0/nova/compute/manager.py#L3052 .. _currently cannot change: https://specs.openstack.org/openstack/nova-specs/specs/train/approved/volume-backed-server-rebuild.html .. _rebuilding with a different image: https://opendev.org/openstack/nova/src/tag/19.0.0/nova/compute/api.py#L3414 .. _claims: https://opendev.org/openstack/nova/src/tag/19.0.0/nova/compute/claims.py .. _rebuild claim: https://opendev.org/openstack/nova/src/tag/19.0.0/nova/compute/manager.py#L3104 .. _no claim: https://opendev.org/openstack/nova/src/tag/19.0.0/nova/compute/manager.py#L3108 .. _placement service: https://docs.openstack.org/placement/latest/ .. _allocations: https://docs.openstack.org/api-ref/placement/#allocations .. _migration-based allocations: https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/migration-allocations.html .. _delete the old allocations: https://opendev.org/openstack/nova/src/tag/19.0.0/nova/compute/manager.py#L627 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/contributor/how-to-get-involved.rst0000664000175000017500000003465300000000000023570 0ustar00zuulzuul00000000000000.. 
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .. _getting_involved: ===================================== How to get (more) involved with Nova ===================================== So you want to get more involved with Nova? Or you are new to Nova and wondering where to start? We are working on building easy ways for you to get help and ideas on how to learn more about Nova and how the Nova community works. Any questions, please ask! If you are unsure who to ask, then please contact the `PTL`__. __ `Nova People`_ How do I get started? ===================== There are quite a few global docs on this: - https://docs.openstack.org/contributors/ - https://www.openstack.org/community/ - https://www.openstack.org/assets/welcome-guide/OpenStackWelcomeGuide.pdf - https://wiki.openstack.org/wiki/How_To_Contribute There is more general info, non Nova specific info here: - https://wiki.openstack.org/wiki/Mentoring - https://docs.openstack.org/upstream-training/ What should I work on? ~~~~~~~~~~~~~~~~~~~~~~ So you are starting out your Nova journey, where is a good place to start? If you'd like to learn how Nova works before changing anything (good idea!), we recommend looking for reviews with -1s and -2s and seeing why they got downvoted. There is also the :ref:`code-review`. Once you have some understanding, start reviewing patches. It's OK to ask people to explain things you don't understand. It's also OK to see some potential problems but put a +0. Once you're ready to write code, take a look at some of the work already marked as low-hanging fruit: * https://bugs.launchpad.net/nova/+bugs?field.tag=low-hanging-fruit How do I get my feature in? ~~~~~~~~~~~~~~~~~~~~~~~~~~~ The best way of getting your feature in is... well it depends. First concentrate on solving your problem and/or use case, don't fixate on getting the code you have working merged. It's likely things will need significant re-work after you discuss how your needs match up with all the existing ways Nova is currently being used. The good news, is this process should leave you with a feature that's more flexible and doesn't lock you into your current way of thinking. A key part of getting code merged, is helping with reviewing other people's code. Great reviews of others code will help free up more core reviewer time to look at your own patches. In addition, you will understand how the review is thinking when they review your code. Also, work out if any on going efforts are blocking your feature and helping out speeding those up. The spec review process should help with this effort. For more details on our process, please see: :ref:`process`. What is expected of a good contributor? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ TODO - need more info on this Top Tips for working with the Nova community ============================================ Here are some top tips around engaging with the Nova community: - IRC - we talk a lot in #openstack-nova - do ask us questions in there, and we will try to help you - not sure about asking questions? 
feel free to listen in around other people's questions - we recommend you setup an IRC bouncer: https://docs.openstack.org/contributors/common/irc.html - Email - Use the [nova] tag in the mailing lists - Filtering on [nova] and [all] can help tame the list - Be Open - i.e. don't review your teams code in private, do it publicly in gerrit - i.e. be ready to talk about openly about problems you are having, not "theoretical" issues - that way you can start to gain the trust of the wider community - Got a problem? Please ask! - Please raise any problems and ask questions early - we want to help you before you are frustrated or annoyed - unsure who to ask? Just ask in IRC, or check out the list of `Nova people`_. - Talk about problems first, then solutions - Nova is a big project. At first, it can be hard to see the big picture - Don't think about "merging your patch", instead think about "solving your problem" - conversations are more productive that way - It's not the decision that's important, it's the reason behind it that's important - Don't like the way the community is going? - Please ask why we were going that way, and please engage with the debate - If you don't, we are unable to learn from what you have to offer - No one will decide, this is stuck, who can help me? - it's rare, but it happens - it's the `Nova PTL`__'s job to help you - ...but if you don't ask, it's hard for them to help you __ `Nova People`_ Process ======= It can feel like you are faced with a wall of process. We are a big community, to make sure the right communication happens, we do use a minimal amount of process. If you find something that doesn't make sense, please: - ask questions to find out \*why\* it happens - if you know of a better way to do it, please speak up - one "better way" might be to remove the process if it no longer helps To learn more about Nova's process, please read :ref:`process`. Why bother with any process? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Why is it worth creating a bug or blueprint to track your code review? This may seem like silly process, but there is usually a good reason behind it. We have lots of code to review, and we have tools to try and get to really important code reviews first. If yours is really important, but not picked up by our tools, it's possible you just get lost in the bottom of a big queue. If you have a bug fix, you have done loads of work to identify the issue, and test out your fix, and submit it. By adding a bug report, you are making it easier for other folks who hit the same problem to find your work, possibly saving them the hours of pain you went through. With any luck that gives all those people the time to fix different bugs, all that might have affected you, if you had not given them the time go fix it. It's similar with blueprints. You have worked out how to scratch your itch, lets tell others about that great new feature you have added, so they can use that. Also, it stops someone with a similar idea going through all the pain of creating a feature only to find you already have that feature ready and up for review, or merged into the latest release. Hopefully this gives you an idea why we have applied a small layer of process to what we are doing. Having said all this, we need to unlearn old habits to move forward, there may be better ways to do things, and we are open to trying them. Please help be part of the solution. .. _why_plus1: Why do code reviews if I am not in nova-core? 
============================================= Code reviews are the life blood of the Nova developer community. There is a good discussion on how you do good reviews, and how anyone can be a reviewer: http://docs.openstack.org/infra/manual/developers.html#peer-review In the draft process guide, I discuss how doing reviews can help get your code merged faster: :ref:`process`. Lets look at some of the top reasons why participating with code reviews really helps you: - Doing more reviews, and seeing what other reviewers notice, will help you better understand what is expected of code that gets merged into master. - Having more non-core people do great reviews, leaves less review work for the core reviewers to do, so we are able get more code merged. - Empathy is one of the keys to a happy community. If you are used to doing code reviews, you will better understand the comments you get when people review your code. As you do more code reviews, and see what others notice, you will get a better idea of what people are looking for when then apply a +2 to your code. - If you do quality reviews, you'll be noticed and it's more likely you'll get reciprocal eyes on your reviews. What are the most useful types of code review comments? Well here are a few to the top ones: - Fundamental flaws are the biggest thing to spot. Does the patch break a whole set of existing users, or an existing feature? - Consistency of behaviour is really important. Does this bit of code do things differently to where similar things happen else where in Nova? - Is the code easy to maintain, well tested and easy to read? Code is read order of magnitude times more than it is written, so optimise for the reader of the code, not the writer. - TODO - what others should go here? Let's look at some problems people hit when starting out doing code reviews: - My +1 doesn't mean anything, why should I bother? - So your +1 really does help. Some really useful -1 votes that lead to a +1 vote helps get code into a position - When to use -1 vs 0 vs +1 - Please see the guidelines here: http://docs.openstack.org/infra/manual/developers.html#peer-review - I have already reviewed this code internally, no point in adding a +1 externally? - Please talk to your company about doing all code reviews in the public, that is a much better way to get involved. showing how the code has evolved upstream, is much better than trying to 'perfect' code internally, before uploading for public review. You can use Draft mode, and mark things as WIP if you prefer, but please do the reviews upstream. - Where do I start? What should I review? - There are various tools, but a good place to start is: https://etherpad.openstack.org/p/nova-runways-ussuri - Depending on the time in the cycle, it's worth looking at NeedsCodeReview blueprints: https://blueprints.launchpad.net/nova/ - Custom Gerrit review dashboards often provide a more manageable view of the outstanding reviews, and help focus your efforts: - Nova Review Inbox: https://goo.gl/1vTS0Z - Small Bug Fixes: http://ow.ly/WAw1J - Maybe take a look at things you want to see merged, bug fixes and features, or little code fixes - Look for things that have been waiting a long time for a review: https://review.opendev.org/#/q/project:openstack/nova+status:open+age:2weeks - If you get through the above lists, try other tools, such as: http://status.openstack.org/reviews How to do great code reviews? 
============================= http://docs.openstack.org/infra/manual/developers.html#peer-review For more tips, please see: `Why do code reviews if I am not in nova-core?`_ How do I become nova-core? ========================== You don't have to be nova-core to be a valued member of the Nova community. There are many, many ways you can help. Every quality review that helps someone get their patch closer to being ready to merge helps everyone get their code merged faster. The first step to becoming nova-core is learning how to be an active member of the Nova community, including learning how to do great code reviews. For more details see: https://wiki.openstack.org/wiki/Nova/CoreTeam#Membership_Expectations If you feel like you have the time to commit to all the nova-core membership expectations, reach out to the Nova PTL who will be able to find you an existing member of nova-core to help mentor you. If all goes well, and you seem like a good candidate, your mentor will contact the rest of the nova-core team to ask them to start looking at your reviews, so they are able to vote for you, if you get nominated for join nova-core. We encourage all mentoring, where possible, to occur on #openstack-nova so everyone can learn and benefit from your discussions. The above mentoring is available to every one who wants to learn how to better code reviews, even if you don't ever want to commit to becoming nova-core. If you already have a mentor, that's great, the process is only there for folks who are still trying to find a mentor. Being admitted to the mentoring program no way guarantees you will become a member of nova-core eventually, it's here to help you improve, and help you have the sort of involvement and conversations that can lead to becoming a member of nova-core. How to do great nova-spec reviews? ================================== https://specs.openstack.org/openstack/nova-specs/specs/ussuri/template.html :doc:`/contributor/blueprints`. Spec reviews are always a step ahead of the normal code reviews. Follow the above links for some great information on specs/reviews. The following could be some important tips: 1. The specs are published as html documents. Ensure that the author has a proper render of the same via the .rst file. 2. More often than not, it's important to know that there are no overlaps across multiple specs. 3. Ensure that a proper dependency of the spec is identified. For example - a user desired feature that requires a proper base enablement should be a dependent spec. 4. Ask for clarity on changes that appear ambiguous to you. 5. Every release nova gets a huge set of spec proposals and that's a huge task for the limited set of nova cores to complete. Helping the cores with additional reviews is always a great thing. How to do great bug triage? =========================== https://wiki.openstack.org/wiki/Nova/BugTriage Sylvain Bauza and Stephen Finucane gave a nice `presentation`_ on this topic at the Queens summit in Sydney. .. _presentation: https://www.openstack.org/videos/sydney-2017/upstream-bug-triage-the-hidden-gem How to step up into a project leadership role? ============================================== There are many ways to help lead the Nova project: * Mentoring efforts, and getting started tips: https://wiki.openstack.org/wiki/Nova/Mentoring * Info on process, with a focus on how you can go from an idea to getting code merged Nova: :ref:`process` * Consider leading an existing `Nova subteam`_ or forming a new one. * Consider becoming a `Bug tag owner`_. 
* Contact the PTL about becoming a Czar `Nova People`_. .. _`Nova people`: https://wiki.openstack.org/wiki/Nova#People .. _`Nova subteam`: https://wiki.openstack.org/wiki/Nova#Nova_subteams .. _`Bug tag owner`: https://wiki.openstack.org/wiki/Nova/BugTriage#Tag_Owner_List ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/contributor/index.rst0000664000175000017500000001254200000000000021052 0ustar00zuulzuul00000000000000=========================== Contributor Documentation =========================== Contributing to nova gives you the power to help add features, fix bugs, enhance documentation, and increase testing. Contributions of any type are valuable, and part of what keeps the project going. Here are a list of resources to get your started. Basic Information ================= .. toctree:: :maxdepth: 2 contributing Getting Started =============== * :doc:`/contributor/how-to-get-involved`: Overview of engaging in the project * :doc:`/contributor/development-environment`: Get your computer setup to contribute .. # NOTE(amotoki): toctree needs to be placed at the end of the secion to # keep the document structure in the PDF doc. .. toctree:: :hidden: how-to-get-involved development-environment Nova Process ============ The nova community is a large community. We have lots of users, and they all have a lot of expectations around upgrade and backwards compatibility. For example, having a good stable API, with discoverable versions and capabilities is important for maintaining the strong ecosystem around nova. Our process is always evolving, just as nova and the community around nova evolves over time. If there are things that seem strange, or you have ideas on how to improve things, please bring them forward on IRC or the openstack-discuss mailing list, so we continue to improve how the nova community operates. This section looks at the processes and why. The main aim behind all the process is to aid communication between all members of the nova community, while keeping users happy and keeping developers productive. * :doc:`/contributor/project-scope`: The focus is on features and bug fixes that make nova work better within this scope * :doc:`/contributor/policies`: General guidelines about what's supported * :doc:`/contributor/process`: The processes we follow around feature and bug submission, including how the release calendar works, and the freezes we go under * :doc:`/contributor/blueprints`: An overview of our tracking artifacts. * :doc:`/contributor/ptl-guide`: A chronological PTL reference guide .. # NOTE(amotoki): toctree needs to be placed at the end of the secion to # keep the document structure in the PDF doc. .. toctree:: :hidden: project-scope policies process blueprints ptl-guide Reviewing ========= * :doc:`/contributor/releasenotes`: When we need a release note for a contribution. * :doc:`/contributor/code-review`: Important cheat sheet for what's important when doing code review in Nova, especially some things that are hard to test for, but need human eyes. * :doc:`/reference/i18n`: What we require for i18n in patches * :doc:`/contributor/documentation`: Guidelines for handling documentation contributions .. # NOTE(amotoki): toctree needs to be placed at the end of the secion to # keep the document structure in the PDF doc. .. 
toctree:: :hidden: releasenotes code-review /reference/i18n documentation Testing ======= Because Python is a dynamic language, code that is not tested might not even be Python code. All new code needs to be validated somehow. * :doc:`/contributor/testing`: An overview of our test taxonomy and the kinds of testing we do and expect. * **Testing Guides**: There are also specific testing guides for features that are hard to test in our gate. * :doc:`/contributor/testing/libvirt-numa` * :doc:`/contributor/testing/serial-console` * :doc:`/contributor/testing/zero-downtime-upgrade` * :doc:`/contributor/testing/down-cell` * **Profiling Guides**: These are guides to profiling nova. * :doc:`/contributor/testing/eventlet-profiling` .. # NOTE(amotoki): toctree needs to be placed at the end of the secion to # keep the document structure in the PDF doc. .. toctree:: :hidden: testing testing/libvirt-numa testing/serial-console testing/zero-downtime-upgrade testing/down-cell testing/eventlet-profiling The Nova API ============ Because we have many consumers of our API, we're extremely careful about changes done to the API, as the impact can be very wide. * :doc:`/contributor/api`: How the code is structured inside the API layer * :doc:`/contributor/api-2`: (needs update) * :doc:`/contributor/microversions`: How the API is (micro)versioned and what you need to do when adding an API exposed feature that needs a new microversion. * :doc:`/contributor/api-ref-guideline`: The guideline to write the API reference. .. # NOTE(amotoki): toctree needs to be placed at the end of the secion to # keep the document structure in the PDF doc. .. toctree:: :hidden: api api-2 microversions api-ref-guideline Nova Major Subsystems ===================== Major subsystems in nova have different needs. If you are contributing to one of these please read the :ref:`reference guide ` before diving in. * Move operations * :doc:`/contributor/evacuate-vs-rebuild`: Describes the differences between the often-confused evacuate and rebuild operations. * :doc:`/contributor/resize-and-cold-migrate`: Describes the differences and similarities between resize and cold migrate operations. .. # NOTE(amotoki): toctree needs to be placed at the end of the secion to # keep the document structure in the PDF doc. .. toctree:: :hidden: evacuate-vs-rebuild resize-and-cold-migrate ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/contributor/microversions.rst0000664000175000017500000003763300000000000022655 0ustar00zuulzuul00000000000000API Microversions ================= Background ---------- Nova uses a framework we call 'API Microversions' for allowing changes to the API while preserving backward compatibility. The basic idea is that a user has to explicitly ask for their request to be treated with a particular version of the API. So breaking changes can be added to the API without breaking users who don't specifically ask for it. This is done with an HTTP header ``OpenStack-API-Version`` which has as its value a string containing the name of the service, ``compute``, and a monotonically increasing semantic version number starting from ``2.1``. The full form of the header takes the form:: OpenStack-API-Version: compute 2.1 If a user makes a request without specifying a version, they will get the ``DEFAULT_API_VERSION`` as defined in ``nova/api/openstack/wsgi.py``. This value is currently ``2.1`` and is expected to remain so for quite a long time. 
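As a purely illustrative sketch (not an official client), this is what opting in to a specific microversion looks like from client code using the ``requests`` library; the endpoint URL, token and chosen version number below are placeholders, not values from a real deployment::

    import requests

    COMPUTE_ENDPOINT = 'http://controller:8774/v2.1'  # placeholder endpoint
    TOKEN = '<keystone-token>'                        # placeholder token

    headers = {
        'X-Auth-Token': TOKEN,
        # Explicitly request microversion 2.28; omitting this header means
        # the request is handled at the DEFAULT_API_VERSION (2.1).
        'OpenStack-API-Version': 'compute 2.28',
    }

    resp = requests.get(COMPUTE_ENDPOINT + '/servers', headers=headers)
    # The microversion actually used is echoed back in the response headers.
    print(resp.status_code, resp.headers.get('OpenStack-API-Version'))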
There is a special value ``latest`` which can be specified, which will allow a client to always receive the most recent version of API responses from the server. .. warning:: The ``latest`` value is mostly meant for integration testing and would be dangerous to rely on in client code since Nova microversions are not following semver and therefore backward compatibility is not guaranteed. Clients, like python-novaclient, should always require a specific microversion but limit what is acceptable to the version range that it understands at the time. .. warning:: To maintain compatibility, an earlier form of the microversion header is acceptable. It takes the form:: X-OpenStack-Nova-API-Version: 2.1 This form will continue to be supported until the ``DEFAULT_API_VERSION`` is raised to version ``2.27`` or higher. Clients accessing deployments of the Nova API which are not yet providing microversion ``2.27`` must use the older form. For full details please read the `Kilo spec for microversions `_ and `Microversion Specification `_. When do I need a new Microversion? ---------------------------------- A microversion is needed when the contract to the user is changed. The user contract covers many kinds of information such as: - the Request - the list of resource urls which exist on the server Example: adding a new servers/{ID}/foo which didn't exist in a previous version of the code - the list of query parameters that are valid on urls Example: adding a new parameter ``is_yellow`` servers/{ID}?is_yellow=True - the list of query parameter values for non free form fields Example: parameter filter_by takes a small set of constants/enums "A", "B", "C". Adding support for new enum "D". - new headers accepted on a request - the list of attributes and data structures accepted. Example: adding a new attribute 'locked': True/False to the request body However, the attribute ``os.scheduler_hints`` of the "create a server" API is an exception to this. A new scheduler which adds a new attribute to ``os:scheduler_hints`` doesn't require a new microversion, because available schedulers depend on cloud environments, and we accept customized schedulers as a rule. - the Response - the list of attributes and data structures returned Example: adding a new attribute 'locked': True/False to the output of servers/{ID} - the allowed values of non free form fields Example: adding a new allowed ``status`` to servers/{ID} - the list of status codes allowed for a particular request Example: an API previously could return 200, 400, 403, 404 and the change would make the API now also be allowed to return 409. See [#f2]_ for the 400, 403, 404 and 415 cases. - changing a status code on a particular response Example: changing the return code of an API from 501 to 400. .. note:: Fixing a bug so that a 400+ code is returned rather than a 500 or 503 does not require a microversion change. It's assumed that clients are not expected to handle a 500 or 503 response and therefore should not need to opt-in to microversion changes that fixes a 500 or 503 response from happening. According to the OpenStack API Working Group, a **500 Internal Server Error** should **not** be returned to the user for failures due to user error that can be fixed by changing the request on the client side. See [#f1]_. - new headers returned on a response The following flow chart attempts to walk through the process of "do we need a microversion". .. graphviz:: digraph states { label="Do I need a microversion?" 
silent_fail[shape="diamond", style="", group=g1, label="Did we silently fail to do what is asked?"]; ret_500[shape="diamond", style="", group=g1, label="Did we return a 500 before?"]; new_error[shape="diamond", style="", group=g1, label="Are we changing what status code is returned?"]; new_attr[shape="diamond", style="", group=g1, label="Did we add or remove an attribute to a payload?"]; new_param[shape="diamond", style="", group=g1, label="Did we add or remove an accepted query string parameter or value?"]; new_resource[shape="diamond", style="", group=g1, label="Did we add or remove a resource url?"]; no[shape="box", style=rounded, label="No microversion needed"]; yes[shape="box", style=rounded, label="Yes, you need a microversion"]; no2[shape="box", style=rounded, label="No microversion needed, it's a bug"]; silent_fail -> ret_500[label=" no"]; silent_fail -> no2[label="yes"]; ret_500 -> no2[label="yes [1]"]; ret_500 -> new_error[label=" no"]; new_error -> new_attr[label=" no"]; new_error -> yes[label="yes"]; new_attr -> new_param[label=" no"]; new_attr -> yes[label="yes"]; new_param -> new_resource[label=" no"]; new_param -> yes[label="yes"]; new_resource -> no[label=" no"]; new_resource -> yes[label="yes"]; {rank=same; yes new_attr} {rank=same; no2 ret_500} {rank=min; silent_fail} } **Footnotes** .. [#f1] When fixing 500 errors that previously caused stack traces, try to map the new error into the existing set of errors that API call could previously return (400 if nothing else is appropriate). Changing the set of allowed status codes from a request is changing the contract, and should be part of a microversion (except in [#f2]_). The reason why we are so strict on contract is that we'd like application writers to be able to know, for sure, what the contract is at every microversion in Nova. If they do not, they will need to write conditional code in their application to handle ambiguities. When in doubt, consider application authors. If it would work with no client side changes on both Nova versions, you probably don't need a microversion. If, on the other hand, there is any ambiguity, a microversion is probably needed. .. [#f2] The exception to not needing a microversion when returning a previously unspecified error code is the 400, 403, 404 and 415 cases. This is considered OK to return even if previously unspecified in the code since it's implied given keystone authentication can fail with a 403 and API validation can fail with a 400 for invalid json request body. Request to url/resource that does not exist always fails with 404. Invalid content types are handled before API methods are called which results in a 415. .. note:: When in doubt about whether or not a microversion is required for changing an error response code, consult the `Nova API subteam`_. .. _Nova API subteam: https://wiki.openstack.org/wiki/Meetings/NovaAPI When a microversion is not needed --------------------------------- A microversion is not needed in the following situation: - the response - Changing the error message without changing the response code does not require a new microversion. - Removing an inapplicable HTTP header, for example, suppose the Retry-After HTTP header is being returned with a 4xx code. This header should only be returned with a 503 or 3xx response, so it may be removed without bumping the microversion. - An obvious regression bug in an admin-only API where the bug can still be fixed upstream on active stable branches. 
Admin-only APIs are less of a concern for interoperability and generally a regression in behavior can be dealt with as a bug fix when the documentation clearly shows the API behavior was unexpectedly regressed. See [#f3]_ for an example. Intentional behavior changes to an admin-only API *do* require a microversion, like the :ref:`2.53 microversion <2.53-microversion>` for example. **Footnotes** .. [#f3] https://review.opendev.org/#/c/523194/ In Code ------- In ``nova/api/openstack/wsgi.py`` we define an ``@api_version`` decorator which is intended to be used on top-level Controller methods. It is not appropriate for lower-level methods. Some examples: Adding a new API method ~~~~~~~~~~~~~~~~~~~~~~~ In the controller class:: @wsgi.Controller.api_version("2.4") def my_api_method(self, req, id): .... This method would only be available if the caller had specified an ``OpenStack-API-Version`` of >= ``2.4``. If they had specified a lower version (or not specified it and received the default of ``2.1``) the server would respond with ``HTTP/404``. Removing an API method ~~~~~~~~~~~~~~~~~~~~~~ In the controller class:: @wsgi.Controller.api_version("2.1", "2.4") def my_api_method(self, req, id): .... This method would only be available if the caller had specified an ``OpenStack-API-Version`` of <= ``2.4``. If ``2.5`` or later is specified the server will respond with ``HTTP/404``. Changing a method's behavior ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In the controller class:: @wsgi.Controller.api_version("2.1", "2.3") def my_api_method(self, req, id): .... method_1 ... @wsgi.Controller.api_version("2.4") # noqa def my_api_method(self, req, id): .... method_2 ... If a caller specified ``2.1``, ``2.2`` or ``2.3`` (or received the default of ``2.1``) they would see the result from ``method_1``, ``2.4`` or later ``method_2``. It is vital that the two methods have the same name, so the second of them will need ``# noqa`` to avoid failing flake8's ``F811`` rule. The two methods may be different in any kind of semantics (schema validation, return values, response codes, etc) A change in schema only ~~~~~~~~~~~~~~~~~~~~~~~ If there is no change to the method, only to the schema that is used for validation, you can add a version range to the ``validation.schema`` decorator:: @wsgi.Controller.api_version("2.1") @validation.schema(dummy_schema.dummy, "2.3", "2.8") @validation.schema(dummy_schema.dummy2, "2.9") def update(self, req, id, body): .... This method will be available from version ``2.1``, validated according to ``dummy_schema.dummy`` from ``2.3`` to ``2.8``, and validated according to ``dummy_schema.dummy2`` from ``2.9`` onward. When not using decorators ~~~~~~~~~~~~~~~~~~~~~~~~~ When you don't want to use the ``@api_version`` decorator on a method or you want to change behavior within a method (say it leads to simpler or simply a lot less code) you can directly test for the requested version with a method as long as you have access to the api request object (commonly called ``req``). Every API method has an api_version_request object attached to the req object and that can be used to modify behavior based on its value:: def index(self, req): req_version = req.api_version_request req1_min = api_version_request.APIVersionRequest("2.1") req1_max = api_version_request.APIVersionRequest("2.5") req2_min = api_version_request.APIVersionRequest("2.6") req2_max = api_version_request.APIVersionRequest("2.10") if req_version.matches(req1_min, req1_max): ....stuff.... 
elif req_version.matches(req2min, req2_max): ....other stuff.... elif req_version > api_version_request.APIVersionRequest("2.10"): ....more stuff..... The first argument to the matches method is the minimum acceptable version and the second is maximum acceptable version. A specified version can be null:: null_version = APIVersionRequest() If the minimum version specified is null then there is no restriction on the minimum version, and likewise if the maximum version is null there is no restriction the maximum version. Alternatively a one sided comparison can be used as in the example above. Other necessary changes ----------------------- If you are adding a patch which adds a new microversion, it is necessary to add changes to other places which describe your change: * Update ``REST_API_VERSION_HISTORY`` in ``nova/api/openstack/api_version_request.py`` * Update ``_MAX_API_VERSION`` in ``nova/api/openstack/api_version_request.py`` * Add a verbose description to ``nova/api/openstack/compute/rest_api_version_history.rst``. * Add a :doc:`release note ` with a ``features`` section announcing the new or changed feature and the microversion. * Update the expected versions in affected tests, for example in ``nova/tests/unit/api/openstack/compute/test_versions.py``. * Update the get versions api sample file: ``doc/api_samples/versions/versions-get-resp.json`` and ``doc/api_samples/versions/v21-version-get-resp.json``. * Make a new commit to python-novaclient and update corresponding files to enable the newly added microversion API. See :python-novaclient-doc:`Adding support for a new microversion ` in python-novaclient for more details. * If the microversion changes the response schema, a new schema and test for the microversion must be added to Tempest. * If applicable, add Functional sample tests under ``nova/tests/functional/api_sample_tests``. Also, add JSON examples to ``doc/api_samples`` directory which can be generated automatically via tox env ``api-samples`` or run test with env var ``GENERATE_SAMPLES`` True. * Update the `API Reference`_ documentation as appropriate. The source is located under `api-ref/source/`. * If the microversion changes servers related APIs, update the ``api-guide/source/server_concepts.rst`` accordingly. .. _API Reference: https://docs.openstack.org/api-ref/compute/ Allocating a microversion ------------------------- If you are adding a patch which adds a new microversion, it is necessary to allocate the next microversion number. Except under extremely unusual circumstances and this would have been mentioned in the nova spec for the change, the minor number of ``_MAX_API_VERSION`` will be incremented. This will also be the new microversion number for the API change. It is possible that multiple microversion patches would be proposed in parallel and the microversions would conflict between patches. This will cause a merge conflict. We don't reserve a microversion for each patch in advance as we don't know the final merge order. Developers may need over time to rebase their patch calculating a new version number as above based on the updated value of ``_MAX_API_VERSION``. 
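To illustrate the bookkeeping above, the allocation usually amounts to an edit along the lines of the following sketch of ``nova/api/openstack/api_version_request.py``; the layout and the version numbers shown are illustrative assumptions only, not the current contents of the file::

    # Illustrative excerpt only; check the real file for the current maximum.
    REST_API_VERSION_HISTORY = """REST API Version History:
        ...
        * 2.87 - (existing newest entry)
        * 2.88 - Adds the hypothetical ``is_yellow`` query parameter to
                 GET /servers (the new microversion being allocated)
    """

    # Bumped by one minor version; 2.88 becomes the microversion for the
    # new change.
    _MAX_API_VERSION = "2.88"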
Testing Microversioned API Methods ---------------------------------- Testing a microversioned API method is very similar to a normal controller method test, you just need to add the ``OpenStack-API-Version`` header, for example:: req = fakes.HTTPRequest.blank('/testable/url/endpoint') req.headers = {'OpenStack-API-Version': 'compute 2.28'} req.api_version_request = api_version.APIVersionRequest('2.6') controller = controller.TestableController() res = controller.index(req) ... assertions about the response ... For many examples of testing, the canonical examples are in ``nova/tests/unit/api/openstack/compute/test_microversions.py``. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/contributor/policies.rst0000664000175000017500000001645600000000000021562 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Development policies -------------------- Out Of Tree Support =================== While nova has many entrypoints and other places in the code that allow for wiring in out of tree code, upstream doesn't actively make any guarantees about these extensibility points; we don't support them, make any guarantees about compatibility, stability, etc. Furthermore, hooks and extension points in the code impede efforts in Nova to support interoperability between OpenStack clouds. Therefore an effort is being made to systematically deprecate and remove hooks, extension points, and classloading of managers and other services. Public Contractual APIs ======================== Although nova has many internal APIs, they are not all public contractual APIs. Below is a link of our public contractual APIs: * https://docs.openstack.org/api-ref/compute/ Anything not in this list is considered private, not to be used outside of nova, and should not be considered stable. REST APIs ========== Follow the guidelines set in: https://wiki.openstack.org/wiki/APIChangeGuidelines The canonical source for REST API behavior is the code *not* documentation. Documentation is manually generated after the code by folks looking at the code and writing up what they think it does, and it is very easy to get this wrong. This policy is in place to prevent us from making backwards incompatible changes to REST APIs. Patches and Reviews =================== Merging a patch requires a non-trivial amount of reviewer resources. As a patch author, you should try to offset the reviewer resources spent on your patch by reviewing other patches. If no one does this, the review team (cores and otherwise) become spread too thin. For review guidelines see: :doc:`code-review` Reverts for Retrospective Vetos =============================== Sometimes our simple "2 +2s" approval policy will result in errors. These errors might be a bug that was missed, or equally importantly, it might be that other cores feel that there is a need for more discussion on the implementation of a given piece of code. 
Rather than `an enforced time-based solution`_ - for example, a patch couldn't be merged until it has been up for review for 3 days - we have chosen an honor-based system where core reviewers would not approve potentially contentious patches until the proposal had been sufficiently socialized and everyone had a chance to raise any concerns. Recognising that mistakes can happen, we also have a policy where contentious patches which were quickly approved should be reverted so that the discussion around the proposal can continue as if the patch had never been merged in the first place. In such a situation, the procedure is: 0. The commit to be reverted must not have been released. 1. The core team member who has a -2 worthy objection should propose a revert, stating the specific concerns that they feel need addressing. 2. Any subsequent patches depending on the to-be-reverted patch may need to be reverted also. 3. Other core team members should quickly approve the revert. No detailed debate should be needed at this point. A -2 vote on a revert is strongly discouraged, because it effectively blocks the right of cores approving the revert from -2 voting on the original patch. 4. The original patch submitter should re-submit the change, with a reference to the original patch and the revert. 5. The original reviewers of the patch should restore their votes and attempt to summarize their previous reasons for their votes. 6. The patch should not be re-approved until the concerns of the people proposing the revert are worked through. A mailing list discussion or design spec might be the best way to achieve this. .. _`an enforced time-based solution`: https://lists.launchpad.net/openstack/msg08574.html Metrics Gathering ================= Nova currently has a monitor plugin to gather CPU metrics on compute nodes. This feeds into the MetricsFilter and MetricsWeigher in the scheduler. The CPU metrics monitor is only implemented for the libvirt compute driver. External projects like :ceilometer-doc:`Ceilometer <>` and :watcher-doc:`Watcher <>` consume these metrics. Over time people have tried to add new monitor plugins for things like memory bandwidth. There have also been attempts to expose these monitors over CLI, the REST API, and notifications. At the `Newton midcycle`_ it was decided that Nova does a poor job as a metrics gathering tool, especially as it's incomplete, not tested, and there are numerous other tools available to get this information as their primary function. Therefore, there is a freeze on adding new metrics monitoring plugins which also includes exposing existing monitored metrics outside of Nova, like with the nova-manage CLI, the REST API, or the notification bus. Long-term, metrics gathering will likely be deprecated within Nova. Since there is not yet a clear replacement, the deprecation is open-ended, but serves as a signal that new deployments should not rely on the metrics that Nova gathers and should instead focus their efforts on alternative solutions for placement. .. _Newton midcycle: http://lists.openstack.org/pipermail/openstack-dev/2016-August/100600.html Continuous Delivery Mentality ============================= Nova generally tries to subscribe to a philosophy of anything we merge today can be in production today, and people can continuously deliver Nova. 
In practice this means we should not merge code that will not work until some later change is merged, because that later change may never come, or not come in the same release cycle, or may be substantially different from what was originally intended. For example, if patch A uses code that is not available until patch D later in the series. The strategy for dealing with this in particularly long and complicated series of changes is to start from the "bottom" with code that is no-op until it is "turned on" at the top of the stack, generally with some feature flag, policy rule, API microversion, etc. So in the example above, the code from patch D should come before patch A even if nothing is using it yet, but things will build on it. Realistically this means if you are working on a feature that touches most of the Nova "stack", i.e. compute driver/service through to API, you will work on the compute driver/service code first, then conductor and/or scheduler, and finally the API. An extreme example of this can be found by reading the `code review guide for the cross-cell resize feature`_. Even if this philosophy is not the reality of how the vast majority of OpenStack deployments consume Nova, it is a development philosophy to try and avoid merging broken code. .. _code review guide for the cross-cell resize feature: http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006366.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/contributor/process.rst0000664000175000017500000010566100000000000021426 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .. _process: ================= Nova team process ================= Nova is always evolving its processes, but it's important to explain why we have them: so we can all work to ensure that the interactions we need to happen do happen. The process exists to make productive communication between all members of our community easier. OpenStack Wide Patterns ======================= Nova follows most of the generally adopted norms for OpenStack projects. You can get more details here: * https://docs.openstack.org/infra/manual/developers.html * https://docs.openstack.org/project-team-guide/ If you are new to Nova, please read this first: :ref:`getting_involved`. Dates overview ============== For Ussuri, please see: https://wiki.openstack.org/wiki/Nova/Ussuri_Release_Schedule .. note: Throughout this document any link which references the name of a release cycle in the link can usually be changed to the name of the current cycle to get up to date information. Feature Freeze ~~~~~~~~~~~~~~ Feature freeze primarily provides a window of time to help the horizontal teams prepare their items for release, while giving developers time to focus on stabilising what is currently in master, and encouraging users and packagers to perform tests (automated, and manual) on the release, to spot any major bugs. 
The Nova release process is aligned with the `development cycle schedule `_ used by many OpenStack projects, including the following steps. - Feature Proposal Freeze - make sure all code is up for review - so we can optimise for completed features, not lots of half completed features - Feature Freeze - make sure all feature code is merged - String Freeze - give translators time to translate all our strings .. note:: debug logs are no longer translated - Dependency Freeze - time to co-ordinate the final list of dependencies, and give packagers time to package them - generally it is also quite destabilising to take upgrades (beyond bug fixes) this late As with all processes here, there are exceptions. The exceptions at this stage need to be discussed with the horizontal teams that might be affected by changes beyond this point, and as such are discussed with one of the OpenStack release managers. Spec and Blueprint Approval Freeze ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This is a (mostly) Nova specific process. Why we have a Spec Freeze: - specs take a long time to review and reviewing specs throughout the cycle distracts from code reviews - keeping specs "open" and being slow at reviewing them (or just ignoring them) annoys the spec submitters - we generally have more code submitted that we can review, this time bounding is a useful way to limit the number of submissions By the freeze date, we expect all blueprints that will be approved for the cycle to be listed on launchpad and all relevant specs to be merged. For Ussuri, blueprints can be found at https://blueprints.launchpad.net/nova/ussuri and specs at https://specs.openstack.org/openstack/nova-specs/specs/ussuri/index.html Starting with Liberty, we are keeping a backlog open for submission at all times. .. note:: The focus is on accepting and agreeing problem statements as being in scope, rather than queueing up work items for the next release. We are still working on a new lightweight process to get out of the backlog and approved for a particular release. For more details on backlog specs, please see: http://specs.openstack.org/openstack/nova-specs/specs/backlog/index.html There can be exceptions, usually it's an urgent feature request that comes up after the initial deadline. These will generally be discussed at the weekly Nova meeting, by adding the spec or blueprint to discuss in the appropriate place in the meeting agenda here (ideally make yourself available to discuss the blueprint, or alternatively make your case on the ML before the meeting): https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting String Freeze ~~~~~~~~~~~~~ String Freeze provides an opportunity for translators to translate user-visible messages to a variety of languages. By not changing strings after the date of the string freeze, the job of the translators is made a bit easier. For more information on string and other OpenStack-wide release processes see `the release management docs `_. How do I get my code merged? ============================ OK, so you are new to Nova, and you have been given a feature to implement. How do I make that happen? You can get most of your questions answered here: - https://docs.openstack.org/infra/manual/developers.html But let's put a Nova specific twist on things... Overview ~~~~~~~~ .. image:: /_static/images/nova-spec-process.svg :alt: Flow chart showing the Nova bug/feature process Where do you track bugs? 
~~~~~~~~~~~~~~~~~~~~~~~~ We track bugs here: - https://bugs.launchpad.net/nova If you fix an issue, please raise a bug so others who spot that issue can find the fix you kindly created for them. Also before submitting your patch it's worth checking to see if someone has already fixed it for you (Launchpad helps you with that, at little, when you create the bug report). When do I need a blueprint vs a spec? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ For more details refer to :doc:`/contributor/blueprints`. To understand this question, we need to understand why blueprints and specs are useful. But here is the rough idea: - if it needs a spec, it will need a blueprint. - if it's an API change, it needs a spec. - if it's a single small patch that touches a small amount of code, with limited deployer and doc impact, it probably doesn't need a spec. If you are unsure, please ask the `PTL`_ on IRC, or one of the other nova-drivers. How do I get my blueprint approved? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ So you need your blueprint approved? Here is how: - if you don't need a spec, please add a link to your blueprint to the agenda for the next nova meeting: https://wiki.openstack.org/wiki/Meetings/Nova - be sure your blueprint description has enough context for the review in that meeting. - if you need a spec, then please submit a nova-spec for review, see: https://docs.openstack.org/infra/manual/developers.html Got any more questions? Contact the `PTL`_ or one of the other nova-specs-core who are awake at the same time as you. IRC is best as you will often get an immediate response, if they are too busy send him/her an email. How do I get a procedural -2 removed from my patch? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ When feature freeze hits, any patches for blueprints that are still in review get a procedural -2 to stop them merging. In Nova a blueprint is only approved for a single release. To have the -2 removed, you need to get the blueprint approved for the current release (see `How do I get my blueprint approved?`_). Why are the reviewers being mean to me? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Code reviews take intense concentration and a lot of time. This tends to lead to terse responses with very little preamble or nicety. That said, there's no excuse for being actively rude or mean. OpenStack has a Code of Conduct (https://www.openstack.org/legal/community-code-of-conduct/) and if you feel this has been breached please raise the matter privately. Either with the relevant parties, the `PTL`_ or failing those, the OpenStack Foundation. That said, there are many objective reasons for applying a -1 or -2 to a patch: - Firstly and simply, patches must address their intended purpose successfully. - Patches must not have negative side-effects like wiping the database or causing a functional regression. Usually removing anything, however tiny, requires a deprecation warning be issued for a cycle. - Code must be maintainable, that is it must adhere to coding standards and be as readable as possible for an average OpenStack developer (we acknowledge that this person is not easy to define). - Patches must respect the direction of the project, for example they should not make approved specs substantially more difficult to implement. - Release coordinators need the correct process to be followed so scope can be tracked accurately. Bug fixes require bugs, features require blueprints and all but the simplest features require specs. 
If there is a blueprint, it must be approved for the release/milestone the patch is attempting to merge into. Please particularly bear in mind that a -2 does not mean "never ever" nor does it mean "your idea is bad and you are dumb". It simply means "do not merge today". You may need to wait some time, rethink your approach or even revisit the problem definition but there is almost always some way forward. The core who applied the -2 should tell you what you need to do. My code review seems stuck, what can I do? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ First and foremost - address any -1s and -2s! The review load on Nova is high enough that patches with negative reviews often get filtered out entirely. A few tips: - Be precise. Ensure you're not talking at cross purposes. - Try to understand where the reviewer is coming from. They may have a very different perspective and/or use-case to you. - If you don't understand the problem, ask them to explain - this is common and helpful behaviour. - Be positive. Everyone's patches have issues, including core reviewers. No-one cares once the issues are fixed. - Try not to flip-flop. When two reviewers are pulling you in different directions, stop pushing code and negotiate the best way forward. - If the reviewer does not respond to replies left on the patchset, reach out to them on IRC or email. If they still don't respond, you can try to ask their colleagues if they're on holiday (or simply wait). Finally, you can ask for mediation in the Nova meeting by adding it to the agenda (https://wiki.openstack.org/wiki/Meetings/Nova). This is also what you should do if you are unable to negotiate a resolution to an issue. Secondly, Nova is a big project, look for things that have been waiting a long time for a review: https://review.opendev.org/#/q/project:openstack/nova+status:open+age:2weeks Eventually you should get some +1s from people working through the review queue. Expect to get -1s as well. You can ask for reviews within your company, 1-2 are useful (not more), especially if those reviewers are known to give good reviews. You can spend some time while you wait reviewing other people's code - they may reciprocate and you may learn something (:ref:`Why do code reviews when I'm not core? `). If you've waited an appropriate amount of time and you haven't had any +1s, you can ask on IRC for reviews. Please don't ask for core review straight away, especially not directly (IRC or email). Core reviewer time is very valuable and gaining some +1s is a good way to show your patch meets basic quality standards. Once you have a few +1s, be patient. Remember the average wait times. You can ask for reviews each week in IRC, it helps to ask when cores are awake. Bugs ^^^^ It helps to apply correct tracking information. - Put "Closes-Bug", "Partial-Bug" or "Related-Bug" in the commit message tags as necessary. - If you have to raise a bug in Launchpad first, do it - this helps someone else find your fix. - Make sure the bug has the correct `priority`_ and `tag`_ set. .. _priority: https://wiki.openstack.org/wiki/BugTriage#Task_2:_Prioritize_confirmed_bugs_.28bug_supervisors.29 .. _tag: https://wiki.openstack.org/wiki/Nova/BugTriage#Tags Features ^^^^^^^^ Again, it helps to apply correct tracking information. For blueprint-only features: - Put your blueprint in the commit message, EG "blueprint simple-feature". - Mark the blueprint as NeedsCodeReview if you are finished. 
- Maintain the whiteboard on the blueprint so it's easy to understand which patches need reviews. - Use a single topic for all related patches. All patches for one blueprint should share a topic. For blueprint and spec features, do everything for blueprint-only features and also: - Ensure your spec is approved for the current release cycle. If your code is a project or subteam priority, the cores interested in that priority might not mind a ping after it has sat with +1s for a week. If you abuse this privilege, you'll lose respect. If it's not a priority, but your blueprint/spec has been approved for the cycle and you have been patient, you can raise it during the Nova meeting. The outcome may be that your spec gets unapproved for the cycle, so that priority items can take focus. If this happens to you, sorry - it should not have been approved in the first place; the Nova team bit off more than it could chew, and it is their mistake, not yours. You can re-propose it for the next cycle. If it's not a priority and your spec has not been approved, your code will not merge this cycle. Please re-propose your spec for the next cycle. Nova Process Mission ==================== This section takes a high level look at the guiding principles behind the Nova process. Open ~~~~ Our mission is to have: - Open Source - Open Design - Open Development - Open Community We have to work out how to keep communication open in all areas. We need to be welcoming and mentor new people, and make it easy for them to pick up the knowledge they need to get involved with OpenStack. For more info on Open, please see: https://wiki.openstack.org/wiki/Open Interoperable API, supporting a vibrant ecosystem ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ An interoperable API that gives users on-demand access to compute resources is at the heart of :ref:`nova's mission `. Nova has a vibrant ecosystem of tools built on top of the current Nova API. All features should be designed to work with all technology combinations, so the feature can be adopted by our ecosystem. If a new feature is not adopted by the ecosystem, it will make it hard for your users to make use of those features, defeating most of the reason to add the feature in the first place. The microversion system allows users to isolate themselves from changes in the API. This is a very different aim to being "pluggable" or wanting to expose all capabilities to end users. At the same time, it is not just a "lowest common denominator" set of APIs. It should be discoverable which features are available, and while no implementation details should leak to the end users, purely admin concepts may need to understand technology specific details that back the interoperable and more abstract concepts that are exposed to the end user. This is a hard goal, and one area we currently don't do well is isolating image creators from these technology specific details. Smooth Upgrades ~~~~~~~~~~~~~~~ As part of our mission for a vibrant ecosystem around our APIs, we want to make it easy for those deploying Nova to upgrade with minimal impact to their users.
Here is the scope of Nova's upgrade support: - upgrade from any commit, to any future commit, within the same major release - only support upgrades between N and N+1 major versions, to reduce technical debt relating to upgrades Here are some of the things we require developers to do, to help with upgrades: - when replacing an existing feature or configuration option, make it clear how to transition to any replacement - deprecate configuration options and features before removing them - i.e. continue to support and test features for at least one release before they are removed - this gives time for operator feedback on any removals - End User API will always be kept backwards compatible Interaction goals ~~~~~~~~~~~~~~~~~ When thinking about the importance of process, we should take a look at: http://agilemanifesto.org With that in mind, let's look at how we want different members of the community to interact. Let's start with looking at issues we have tried to resolve in the past (currently in no particular order). We must: - have a way for everyone to review blueprints and designs, including allowing for input from operators and all types of users (keep it open) - take care to not expand Nova's scope any more than absolutely necessary - ensure we get sufficient focus on the core of Nova so that we can maintain or improve the stability and flexibility of the overall codebase - support any API we release approximately forever. We currently release every commit, so we're motivated to get the API right the first time - avoid low priority blueprints that slow work on high priority work, without blocking those forever - focus on a consistent experience for our users, rather than ease of development - optimise for completed blueprints, rather than more half completed blueprints, so we get maximum value for our users out of our review bandwidth - focus efforts on a subset of patches to allow our core reviewers to be more productive - set realistic expectations on what can be reviewed in a particular cycle, to avoid sitting in an expensive rebase loop - be aware of users that do not work on the project full time - be aware of users that are only able to work on the project at certain times that may not align with the overall community cadence - discuss designs for non-trivial work before implementing it, to avoid the expense of late-breaking design issues FAQs ==== Why bother with all this process? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ We are a large community, spread across multiple timezones, working with several horizontal teams. Good communication is a challenge and the processes we have are mostly there to try and help fix some communication challenges. If you have a problem with a process, please engage with the community, discover the reasons behind our current process, and help fix the issues you are experiencing. Why don't you remove old process? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ We do! For example, in Liberty we stopped trying to predict the milestones when a feature will land. As we evolve, it is important to unlearn new habits and explore if things get better if we choose to optimise for a different set of issues. Why are specs useful? ~~~~~~~~~~~~~~~~~~~~~ Spec reviews allow anyone to step up and contribute to reviews, just like with code. Before we used gerrit, it was a very messy review process, that felt very "closed" to most people involved in that process. As Nova has grown in size, it can be hard to work out how to modify Nova to meet your needs. 
Specs are a great way of having that discussion with the wider Nova community. For Nova to be a success, we need to ensure we don't break our existing users. The spec template helps focus the mind on the impact your change might have on existing users and gives an opportunity to discuss the best way to deal with those issues. However, there are some pitfalls with the process. Here are some top tips to avoid them: - keep it simple. Shorter, simpler, more decomposed specs are quicker to review and merge (just like code patches). - specs can help with documentation but they are only intended to document the design discussion rather than document the final code. - don't add details that are best reviewed in code; it's better to leave those things for the code review. If we have specs, why still have blueprints? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ We use specs to record the design agreement and blueprints to track progress on the implementation of the spec. Currently, in Nova, specs are only approved for one release, and must be re-submitted for each release you want to merge the spec, although that is currently under review. Why do we have priorities? ~~~~~~~~~~~~~~~~~~~~~~~~~~ To be clear, there is no "nova dev team manager"; we are an open team of professional software developers who all work for a variety of (mostly competing) companies that collaborate to ensure the Nova project is a success. Over time, a lot of technical debt has accumulated, because there was a lack of collective ownership to solve those cross-cutting concerns. Before the Kilo release, it was noted that progress felt much slower, because we were unable to get appropriate attention on the architectural evolution of Nova. This was important, partly for major concerns like upgrades and stability. We agreed it's something we all care about and it needs to be given priority to ensure that these things get fixed. Since Kilo, priorities have been discussed at the summit. This turns into a spec review which eventually means we get a list of priorities here: http://specs.openstack.org/openstack/nova-specs/#priorities Allocating our finite review bandwidth to these efforts means we have to limit the reviews we do on non-priority items. This is mostly why we now have the non-priority Feature Freeze. For more on this, see below. Blocking a priority effort is one of the few widely acceptable reasons to block someone adding a feature. One of the great advantages of being more explicit about that relationship is that people can step up to help review and/or implement the work that is needed to unblock the feature they want to get landed. This is a key part of being an Open community. Why is there a Feature Freeze (and String Freeze) in Nova? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The main reason Nova has a feature freeze is that it allows people working on docs and translations to sync up with the latest code. Traditionally this happens at the same time across multiple projects, so the docs are synced across what used to be called the "integrated release". We also use this time period as an excuse to focus our development efforts on bug fixes, ideally lower risk bug fixes, and improving test coverage. In theory, with a waterfall hat on, this would be a time for testing and stabilisation of the product. In Nova we have a much stronger focus on keeping every commit stable, by making use of extensive continuous testing.
In reality, we frequently see the biggest influx of fixes in the few weeks after the release, as distributions do final testing of the released code. It is hoped that the work on Feature Classification will lead us to better understand the levels of testing of different Nova features, so we will be able to reduce the dependency between Feature Freeze and regression testing. It is also likely that the move away from "integrated" releases will help find a more developer friendly approach to keep the docs and translations in sync. Why is there a non-priority Feature Freeze in Nova? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ We have already discussed why we have priority features. The rate at which code can be merged to Nova is primarily constrained by the amount of time able to be spent reviewing code. Given this, earmarking review time for priority items means depriving non-priority items of that time. The simplest way to make space for the priority features is to stop reviewing and merging non-priority features for a whole milestone. The idea being developers should focus on bug fixes and priority features during that milestone, rather than working on non-priority features. A known limitation of this approach is developer frustration. Many developers are left with only reviewing code, working on bug fixes or working on priority features, and so feel very unproductive upstream. An alternative approach of "slots" or "runways" has been considered, which uses a kanban style approach to regulate the influx of work onto the review queue. We are yet to get agreement on a more balanced approach, so the existing system is being continued to ensure priority items are more likely to get the attention they require. Why do you still use Launchpad? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ We are actively looking for an alternative to Launchpad's bugs and blueprints. Originally the idea was to create Storyboard. However development stalled for a while so interest waned. The project has become more active recently so it may be worth looking again: https://storyboard.openstack.org/#!/page/about When should I submit my spec? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Ideally we want to get all specs for a release merged before the summit. For things that we can't get agreement on, we can then discuss those at the summit. There will always be ideas that come up at the summit and need to be finalised after the summit. This causes a rush which is best avoided. How can I get my code merged faster? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ So no-one is coming to review your code, how do you speed up that process? Firstly, make sure you are following the above process. If it's a feature, make sure you have an approved blueprint. If it's a bug, make sure it is triaged, has its priority set correctly, has the correct bug tag and is marked as in progress. If the blueprint has all the code up for review, change it from Started into NeedsCodeReview so people know only reviews are blocking you, and make sure it hasn't accidentally been marked as implemented. Secondly, if you have a negative review (-1 or -2) and you responded to that in a comment or by uploading a new change with some updates, but that reviewer hasn't come back for over a week, it's probably a good time to reach out to the reviewer on IRC (or via email) to see if they could look again now you have addressed their comments. If you can't get agreement, and your review gets stuck (i.e.
requires mediation), you can raise your patch during the Nova meeting and we will try to resolve any disagreement. Thirdly, is it in merge conflict with master or are any of the CI tests failing? Particularly any third-party CI tests that are relevant to the code you are changing. If you're fixing something that only occasionally failed before, maybe recheck a few times to prove the tests stay passing. Without green tests, reviewers tend to move on and look at the other patches that have the tests passing. OK, so you have followed all the process (i.e. your patches are getting advertised via the project's tracking mechanisms), and your patches either have no reviews, or only positive reviews. Now what? Have you considered reviewing other people's patches? Firstly, participating in the review process is the best way for you to understand what reviewers are wanting to see in the code you are submitting. As you get more practiced at reviewing it will help you to write "merge-ready" code. Secondly, if you help review other people's code and help get their patches ready for the core reviewers to add a +2, it will free up a lot of non-core and core reviewer time, so they are more likely to get time to review your code. For more details, please see: :ref:`Why do code reviews when I'm not core? ` Please note, I am not recommending you go to ask people on IRC or via email for reviews. Please try to get your code reviewed using the above process first. In many cases multiple direct pings generate frustration on both sides and that tends to be counterproductive. Now you have got your code merged, let's make sure you don't need to fix this bug again. The fact that the bug exists means there is a gap in our testing. Your patch should have included some good unit tests to stop the bug coming back. But don't stop there, maybe it's time to add tempest tests, to make sure your use case keeps working? Maybe you need to set up a third party CI so your combination of drivers will keep working? Getting that extra testing in place should stop a whole heap of bugs, again giving reviewers more time to get to the issues or features you want to add in the future. Process Evolution Ideas ======================= We are always evolving our process as we try to improve and adapt to the changing shape of the community. Here we discuss some of the ideas, along with their pros and cons. Splitting out the virt drivers (or other bits of code) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Currently, Nova doesn't have strong enough interfaces to split out the virt drivers, scheduler or REST API. This is seen as the key blocker. Let's look at both sides of the debate here.
Reasons for the split: - can have separate core teams for each repo - this leads to quicker turn around times, largely due to focused teams - splitting out things from core means less knowledge required to become core in a specific area Reasons against the split: - loss of interoperability between drivers - this is a core part of Nova's mission, to have a single API across all deployments, and a strong ecosystem of tools and apps built on that - we can overcome some of this with stronger interfaces and functional tests - new features often need changes in the API and virt driver anyway - the new "depends-on" can make these cross-repo dependencies easier - loss of code style consistency across the code base - fear of fragmenting the nova community, leaving few to work on the core of the project - could work in subteams within the main tree TODO - need to complete analysis Subteam recommendation as a +2 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ There are groups of people with great knowledge of particular bits of the code base. It may be a good idea to give their recommendation of a merge greater strength. In addition, having the subteam focus review efforts on a subset of patches should help concentrate the nova-core reviews they get, and increase the velocity of getting code merged. Ideally this would be done with gerrit user "tags". There are some investigations by sdague into how feasible it would be to add tags to gerrit. Stop having to submit a spec for each release ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ As mentioned above, we use blueprints for tracking, and specs to record design decisions. Targeting specs to a specific release is a heavyweight solution and blurs the lines between specs and blueprints. At the same time, we don't want to lose the opportunity to revise existing blueprints. Maybe there is a better balance? What about this kind of process: - backlog has these folders: - backlog/incomplete - merge a partial spec - backlog/complete - merge complete specs (remove tracking details, such as assignee part of the template) - ?? backlog/expired - specs are moved here from incomplete or complete when they no longer seem to be given attention (after 1 year, by default) - /implemented - when a spec is complete it gets moved into the release directory and possibly updated to reflect what actually happened - there will no longer be a per-release approved spec list To get your blueprint approved: - add it to the next nova meeting - if a spec is required, update the blueprint so its URL points to the merged spec - ensure there is an assignee in the blueprint - a day before the meeting, a note is sent to the ML to review the list before the meeting - discuss any final objections in the nova-meeting - this may result in a request to refine the spec, if things have changed since it was merged - trivial cases can be approved in advance by a nova-driver, so not all folks need to go through the meeting This still needs more thought, but should decouple the spec review from the release process. It is also more compatible with a runway style system, which might be less focused on milestones. Runways ~~~~~~~ Runways are a form of Kanban, where we look at optimising the flow through the system, by ensuring we focus our efforts on reviewing a specific subset of patches.
The idea goes something like this: - define some states, such as: design backlog, design review, code backlog, code review, test+doc backlog, complete - blueprints must be in one of the above states - large or high priority bugs may also occupy a code review slot - a core reviewer moves items between the slots - must not violate the rules on the number of items in each state - states have a limited number of slots, to ensure focus - certain percentage of slots are dedicated to priorities, depending on point in the cycle, and the type of the cycle, etc Reasons for: - more focused review effort, get more things merged more quickly - more upfront about when your code is likely to get reviewed - smooth out current "lumpy" non-priority feature freeze system Reasons against: - feels like more process overhead - control is too centralised Replacing Milestones with SemVer Releases ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ You can deploy any commit of Nova and upgrade to a later commit in that same release. Making our milestones versioned more like an official release would help signal to our users that people can use the milestones in production, and get a level of upgrade support. It could go something like this: - 14.0.0 is milestone 1 - 14.0.1 is milestone 2 (maybe, because we add features, it should be 14.1.0?) - 14.0.2 is milestone 3 - we might do other releases (once a critical bug is fixed?), as it makes sense, but we will always do the time bound ones - 14.0.3 two weeks after milestone 3, adds only bug fixes (and updates to RPC versions?) - maybe a stable branch is created at this point? - 14.1.0 adds updated translations and co-ordinated docs - this is released from the stable branch? - 15.0.0 is the next milestone, in the following cycle - note the bump of the major version to signal an upgrade incompatibility with 13.x We are currently watching Ironic to see how their use of semver goes, and see what lessons need to be learnt before we look to maybe apply this technique during M. Feature Classification ~~~~~~~~~~~~~~~~~~~~~~ This is a look at moving forward the :doc:`support matrix effort `. The things we need to cover: - note what is tested, and how often that test passes (via 3rd party CI, or otherwise) - link to current test results for stable and master (time since last pass, recent pass rate, etc) - TODO - sync with jogo on his third party CI audit and getting trends, ask infra - include experimental features (untested feature) - get better at the impact of volume drivers and network drivers on available features (not just hypervisor drivers) Main benefits: - users get a clear picture of what is known to work - be clear about when experimental features are removed, if no tests are added - allows a way to add experimental things into Nova, and track either their removal or maturation .. _PTL: https://governance.openstack.org/tc/reference/projects/nova.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/contributor/project-scope.rst0000664000175000017500000003220600000000000022517 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Scope of the Nova project ========================== Nova is focusing on doing an awesome job of its core mission. This document aims to clarify that core mission. This is a living document to help record where we agree about what Nova should and should not be doing, and why. Please treat this as a discussion of interesting, and hopefully useful, examples. It is not intended to be an exhaustive policy statement. .. _nova-mission: Mission ------- Our mission statement starts with: To implement services and associated libraries to provide massively scalable, on demand, self service access to compute resources. Our official mission statement also includes the following examples of compute resources: bare metal, virtual machines, and containers. For the full official mission statement see: https://governance.openstack.org/tc/reference/projects/nova.html#mission This document aims to help clarify what the mission statement means. Compute Resources ------------------ Nova is all about access to compute resources. This section looks at the types of compute resource Nova works with. Virtual Servers **************** Nova was originally focused purely on providing access to virtual servers running on a variety of different hypervisors. The majority of users use Nova only to provide access to virtual servers from a single hypervisor; however, it's possible to have a Nova deployment include multiple different types of hypervisors, while at the same time offering containers and bare metal servers. Containers *********** The Nova API is not a good fit for a lot of container use cases. The Magnum project intends to deliver a good container experience built on top of Nova. Nova allows you to use containers in a similar way to how you would use on demand virtual machines. We want to maintain this distinction, so we maintain the integrity and usefulness of the existing Nova API. For example, Nova is not designed to spin up new containers for every apache request, nor do we plan to control what goes on inside containers. They get the same metadata provided to them as virtual machines, to do with as they see fit. Bare Metal Servers ******************* The Ironic project has been pioneering the idea of treating physical machines in a similar way to on demand virtual machines. Nova's driver is able to allow a multi-tenant cloud style use of Ironic controlled resources. While currently there are operations that are a fundamental part of our virtual machine abstraction that are not currently available in ironic, such as attaching iSCSI volumes, it does not fundamentally change the semantics of our API, and as such is a suitable Nova driver. Moreover, it is expected that gap will shrink over time. Driver Parity ************** Our goal for the Nova API is to provide a consistent abstraction to access on demand compute resources. We are not aiming to expose all features of all hypervisors. Where the details of the underlying hypervisor leak through our APIs, we have failed in this goal, and we must work towards better abstractions that are more `interoperable`_.
This is one reason why we put so much emphasis on the use of Tempest in third party CI systems. The key tenet of driver parity is that if a feature is supported in a driver, it must feel the same to users, as if they were using any of the other drivers that also support that feature. The exception is performance: drivers may, where necessary, have widely different performance characteristics, but the effect of that API call must be identical. Following on from that, should a feature only be added to one of the drivers, we must make every effort to ensure another driver could be implemented to match that behavior. It is important that drivers support enough features, so the API actually provides a consistent abstraction. For example, being unable to create a server or delete a server would severely undermine that goal. In fact, Nova only ever manages resources it creates. .. _interoperable: https://www.openstack.org/brand/interop/ Upgrades --------- Nova is widely used in production. As such we need to respect the needs of our existing users. At the same time we need to evolve the current code base, including both adding and removing features. This section outlines how we expect people to upgrade, and what we do to help existing users that upgrade in the way we expect. Upgrade expectations ********************* Our upgrade plan is to concentrate on upgrades from N-1 to the Nth release. So for someone running juno, they would have to upgrade to kilo before upgrading to liberty. This is designed to balance the need for a smooth upgrade, against having to keep maintaining the compatibility code to make that upgrade possible. We talk about this approach as users consuming the stable branch. In addition, we also support users upgrading from the master branch, technically, between any two commits within the same release cycle. In certain cases, when crossing release boundaries, you must upgrade to the stable branch, before upgrading to the tip of master. This is to support those that are doing some level of "Continuous Deployment" from the tip of master into production. Many of the public cloud providers running OpenStack use this approach so they are able to get access to bug fixes and features they work on into production sooner. This becomes important when you consider reverting a commit that turns out to have been a bad idea. We have to assume any public API change may have already been deployed into production, and as such cannot be reverted. In a similar way, a database migration may have been deployed. Any commit that will affect an upgrade gets the UpgradeImpact tag added to the commit message, so there is no requirement to wait for release notes. Don't break existing users **************************** As a community we are aiming towards a smooth upgrade process, where users are unaware you have just upgraded your deployment, except that there might be additional features available and improved stability and performance of some existing features. We don't ever want to remove features our users rely on. Sometimes we need to migrate users to a new implementation of that feature, which may require extra steps by the deployer, but the end users must be unaffected by such changes. However there are times when some features become a problem to maintain, and fall into disrepair. We aim to be honest with our users and highlight the issues we have, so we are in a position to find help to fix that situation.
Ideally we are able to rework the feature so it can be maintained, but in some rare cases, where the feature no longer works, is not tested, and no one is stepping forward to maintain it, the best option can be to remove that feature. When we remove features, we need to warn users by first marking those features as deprecated, before we finally remove the feature. The idea is to get feedback on how important the feature is to our user base. Where a feature is important we work with the whole community to find a path forward for those users. API Scope ---------- Nova aims to provide a highly interoperable and stable REST API for our users to get self-service access to compute resources. No more API Proxies ******************** The Nova API currently has some APIs that are now (in kilo) mostly just a proxy to other OpenStack services. If it were possible to remove a public API, these are some we might start with. As such, we don't want to add any more. The first example is the API that is a proxy to the Glance v1 API. As Glance moves to deprecate its v1 API, we need to translate calls from the old v1 API we expose, to Glance's v2 API. The next API to mention is the networking APIs, in particular the security groups API. Most of these APIs exist from when ``nova-network`` existed and the proxies were added during the transition. However, the security groups API in Neutron is much richer, and if you use both the Nova API and the Neutron API, the mismatch can lead to some very unexpected results, in certain cases. Our intention is to avoid adding to the problems we already have in this area. No more Orchestration ********************** Nova is a low level infrastructure API. It is plumbing upon which richer ideas can be built, with Heat and Magnum being great examples of that. While we have some APIs that could be considered orchestration, and we must continue to maintain those, we do not intend to add any more APIs that do orchestration. Third Party APIs ***************** Nova aims to focus on making a great API that is highly interoperable across all Nova deployments. We have historically done a very poor job of implementing and maintaining compatibility with third party APIs inside the Nova tree. As such, all new efforts should instead focus on external projects that provide third party compatibility on top of the Nova API. Where needed, we will work with those projects to extend the Nova API such that it's possible to add that functionality on top of the Nova API. However, we do not intend to add API calls for those services to persist third party API specific information in the Nova database. Instead we want to focus on additions that enhance the existing Nova API. Scalability ------------ Our mission includes the text "massively scalable". Let's discuss what that means. Nova has three main axes of scale: number of API requests, number of compute nodes and number of active instances. In many cases the number of compute nodes and active instances are so closely related that you rarely need to consider those separately. There are other items, such as the number of tenants, and the number of instances per tenant. But, again, these are very rarely the key scale issue. It's possible to have a small cloud with lots of requests for very short lived VMs, or a large cloud with lots of longer lived VMs. These need to scale out different components of the Nova system to reach their required level of scale.
Ideally all Nova components are either scaled out to match the number of API requests and build requests, or scaled out to match the number of running servers. If we create components that have their load increased relative to both of these items, we can run into inefficiencies or resource contention. Although it is possible to make that work in some cases, this should always be considered. We intend Nova to be usable for both small and massive deployments. Where small involves 1-10 hypervisors and massive deployments are single regions with greater than 10,000 hypervisors. That should be seen as our current goal, not an upper limit. There are some features that would not scale well for either the small scale or the very large scale. Ideally we would not accept these features, but if there is a strong case to add such features, we must work hard to ensure you can run without that feature at the scale you are required to run. IaaS not Batch Processing -------------------------- Currently Nova focuses on providing on-demand compute resources in the style of classic Infrastructure-as-a-service clouds. A large pool of compute resources that people can consume in a self-service way. Nova is not currently optimized for dealing with a larger number of requests for compute resources compared with the amount of compute resources currently available. We generally assume that a level of spare capacity is maintained for future requests. This is needed for users who want to quickly scale out, and extra capacity becomes available again as users scale in. While spare capacity is also not required, we are not optimizing for a system that aims to run at 100% capacity at all times. As such our quota system is more focused on limiting the current level of resource usage, rather than ensuring a fair balance of resources between all incoming requests. This doesn't exclude adding features to support making a better use of spare capacity, such as "spot instances". There have been discussions around how to change Nova to work better for batch job processing. But the current focus is on how to layer such an abstraction on top of the basic primitives Nova currently provides, possibly adding additional APIs where that makes good sense. Should this turn out to be impractical, we may have to revise our approach. Deployment and Packaging ------------------------- Nova does not plan on creating its own packaging or deployment systems. Our CI infrastructure is powered by Devstack. This can also be used by developers to test their work on a full deployment of Nova. We do not develop any deployment or packaging for production deployments. Being widely adopted by many distributions and commercial products, we instead choose to work with all those parties to ensure they are able to effectively package and deploy Nova. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/contributor/ptl-guide.rst0000664000175000017500000002261400000000000021636 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language governing permissions and limitations under the License. Chronological PTL guide ======================= This is just a reference guide that a PTL may use as an aid, if they choose. New PTL ------- * Update the nova meeting chair * https://github.com/openstack-infra/irc-meetings/blob/master/meetings/nova-team-meeting.yaml * Update the team wiki * https://wiki.openstack.org/wiki/Nova#People * Get acquainted with the release schedule * Example: https://wiki.openstack.org/wiki/Nova/Stein_Release_Schedule Project Team Gathering ---------------------- * Create PTG planning etherpad, retrospective etherpad and alert about it in nova meeting and dev mailing list * Example: https://etherpad.openstack.org/p/nova-ptg-stein * Run sessions at the PTG * Have a priorities discussion at the PTG * Example: https://etherpad.openstack.org/p/nova-ptg-stein-priorities * Sign up for group photo at the PTG (if applicable) * Open review runways for the cycle * Example: https://etherpad.openstack.org/p/nova-runways-stein After PTG --------- * Send PTG session summaries to the dev mailing list * Make sure the cycle priorities spec gets reviewed and merged * Example: https://specs.openstack.org/openstack/nova-specs/priorities/stein-priorities.html * Run the count-blueprints script daily to gather data for the cycle burndown chart A few weeks before milestone 1 ------------------------------ * Plan a spec review day * Periodically check the series goals others have proposed in the “Set series goals” link: * Example: https://blueprints.launchpad.net/nova/stein/+setgoals Milestone 1 ----------- * Do milestone release of nova and python-novaclient (in launchpad only) * This is launchpad bookkeeping only. With the latest release team changes, projects no longer do milestone releases. See: https://releases.openstack.org/reference/release_models.html#cycle-with-milestones-legacy * For nova, set the launchpad milestone release as “released” with the date * Release other libraries if there are significant enough changes since last release. When releasing the first version of a library for the cycle, bump the minor version to leave room for future stable branch releases * os-vif * Release stable branches of nova * ``git checkout `` * ``git log --no-merges ..`` * Examine commits that will go into the release and use it to decide whether the release is a major, minor, or revision bump according to semver * Then, propose the release with version according to semver x.y.z * X - backward-incompatible changes * Y - features * Z - bug fixes * Use the new-release command to generate the release * https://releases.openstack.org/reference/using.html#using-new-release-command Summit ------ * Prepare the project update presentation. Enlist help of others * Prepare the on-boarding session materials. Enlist help of others A few weeks before milestone 2 ------------------------------ * Plan a spec review day (optional) * Periodically check the series goals others have proposed in the “Set series goals” link: * Example: https://blueprints.launchpad.net/nova/stein/+setgoals Milestone 2 ----------- * Spec freeze * Release nova and python-novaclient * Release other libraries as needed * Stable branch releases of nova * For nova, set the launchpad milestone release as “released” with the date Shortly after spec freeze ------------------------- * Create a blueprint status etherpad to help track, especially non-priority blueprint work, to help things get done by Feature Freeze (FF). 
Example: * https://etherpad.openstack.org/p/nova-stein-blueprint-status * Create or review a patch to add the next release’s specs directory so people can propose specs for next release after spec freeze for current release Non-client library release freeze --------------------------------- * Final release for os-vif Milestone 3 ----------- * Feature freeze day * Client library freeze, release python-novaclient * Close out all blueprints, including “catch all” blueprints like mox, versioned notifications * Stable branch releases of nova * For nova, set the launchpad milestone release as “released” with the date Week following milestone 3 -------------------------- * Consider announcing the FFE (feature freeze exception process) to have people propose FFE requests to a special etherpad where they will be reviewed and possibly sponsored: * https://docs.openstack.org/nova/latest/contributor/process.html#non-priority-feature-freeze * Note: if there is only a short time between FF and RC1 (lately it’s been 2 weeks), then the only likely candidates will be low-risk things that are almost done A few weeks before RC --------------------- * Make a RC1 todos etherpad and tag bugs as ``-rc-potential`` and keep track of them, example: * https://etherpad.openstack.org/p/nova-stein-rc-potential * Go through the bug list and identify any rc-potential bugs and tag them RC -- * Do steps described on the release checklist wiki: * https://wiki.openstack.org/wiki/Nova/ReleaseChecklist * If we want to drop backward-compat RPC code, we have to do a major RPC version bump and coordinate it just before the major release: * https://wiki.openstack.org/wiki/RpcMajorVersionUpdates * Example: https://review.opendev.org/541035 * “Merge latest translations" means translation patches * Check for translations with: * https://review.opendev.org/#/q/status:open+project:openstack/nova+branch:master+topic:zanata/translations * Should NOT plan to have more than one RC if possible. RC2 should only happen if there was a mistake and something was missed for RC, or a new regression was discovered * Do the RPC version aliases just before RC1 if no further RCs are planned. Else do them at RC2. In the past, we used to update all service version aliases (example: https://review.opendev.org/230132) but since we really only support compute being backlevel/old during a rolling upgrade, we only need to update the compute service alias, see related IRC discussion: http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-08-08.log.html#t2018-08-08T17:13:45 * Example: https://review.opendev.org/642599 * More detail on how version aliases work: https://docs.openstack.org/nova/latest/configuration/config.html#upgrade-levels * Write the reno prelude for the release GA * Example: https://review.opendev.org/644412 * Write the cycle-highlights in marketing-friendly sentences and propose to the openstack/releases repo. 
Usually based on reno prelude but made more readable and friendly * Example: https://review.opendev.org/644697 Immediately after RC -------------------- * Look for bot proposed changes to reno and stable/ * Follow the post-release checklist * https://wiki.openstack.org/wiki/Nova/ReleaseChecklist * Add database migration placeholders * Example: https://review.opendev.org/650964 * Drop old RPC compat code (if there was a RPC major version bump) * Example: https://review.opendev.org/543580 * Bump the oldest supported compute service version * https://review.opendev.org/#/c/738482/ * Create the launchpad series for the next cycle * Set the development focus of the project to the new cycle series * Set the status of the new series to “active development” * Set the last series status to “current stable branch release” * Set the previous to last series status to “supported” * Repeat launchpad steps ^ for python-novaclient * Register milestones in launchpad for the new cycle based on the new cycle release schedule * Make sure the specs directory for the next cycle gets created so people can start proposing new specs * Make sure to move implemented specs from the previous release * Use ``tox -e move-implemented-specs `` * Also remove template from ``doc/source/specs//index.rst`` * Also delete template file ``doc/source/specs//template.rst`` * Create new release wiki: * Example: https://wiki.openstack.org/wiki/Nova/Train_Release_Schedule Miscellaneous Notes ------------------- How to approve a launchpad blueprint ************************************ * Set the approver as the person who +W the spec, or set to self if it’s specless * Set the Direction => Approved and Definition => Approved and make sure the Series goal is set to the current release. If code is already proposed, set Implementation => Needs Code Review * Add a comment to the Whiteboard explaining the approval, with a date (launchpad does not record approval dates). For example: “We discussed this in the team meeting and agreed to approve this for . -- ” How to complete a launchpad blueprint ************************************* * Set Implementation => Implemented. The completion date will be recorded by launchpad ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/contributor/releasenotes.rst0000664000175000017500000000516500000000000022437 0ustar00zuulzuul00000000000000.. _releasenotes: Release Notes ============= What is reno ? -------------- Nova uses `reno `__ for providing release notes in-tree. That means that a patch can include a *reno file* or a series can have a follow-on change containing that file explaining what the impact is. A *reno file* is a YAML file written in the ``releasenotes/notes`` tree which is generated using the *reno* tool this way: .. code-block:: bash $ tox -e venv -- reno new where usually ```` can be ``bp-`` for a blueprint or ``bug-XXXXXX`` for a bugfix. Refer to the `reno documentation `__ for more information. When a release note is needed ----------------------------- A release note is required anytime a reno section is needed. Below are some examples for each section. Any sections that would be blank should be left out of the note file entirely. 
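To make the layout concrete, here is a sketch of what a note file might contain for a hypothetical bug fix that also has an upgrade impact (the file name, option name and wording are invented for illustration; only include the sections that actually apply to your change):

.. code-block:: yaml

   # releasenotes/notes/bug-1234567-abcdef1234567890.yaml (hypothetical name)
   ---
   upgrade:
     - |
       The hypothetical ``[DEFAULT] example_option`` configuration option now
       defaults to ``True``. Deployers relying on the old default should set
       it explicitly before upgrading.
   fixes:
     - |
       Fixed a hypothetical bug where an instance could be left in ``ERROR``
       state after a failed resize was reverted.

Each top-level key is one of the reno sections described below, and each entry under it becomes a bullet in the published release notes.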
If no section is needed, then you know you don't need to provide a release note :-) * ``upgrade`` * The patch has an `UpgradeImpact `_ tag * A DB change needs some deployer modification (like a migration) * A configuration option change (deprecation, removal or modified default) * some specific changes that have a `DocImpact `_ tag but require further action from an deployer perspective * any patch that requires an action from the deployer in general * ``security`` * If the patch fixes a known vulnerability * ``features`` * If the patch has an `APIImpact `_ tag * For nova-manage and python-novaclient changes, if it adds or changes a new command, including adding new options to existing commands * not all blueprints in general, just the ones impacting a :doc:`/contributor/policies` * a new virt driver is provided or an existing driver impacts the :doc:`HypervisorSupportMatrix ` * ``critical`` * Bugfixes categorized as Critical in Launchpad *impacting users* * ``fixes`` * No clear definition of such bugfixes. Hairy long-standing bugs with high importance that have been fixed are good candidates though. Three sections are left intentionally unexplained (``prelude``, ``issues`` and ``other``). Those are targeted to be filled in close to the release time for providing details about the soon-ish release. Don't use them unless you know exactly what you are doing. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/contributor/resize-and-cold-migrate.rst0000664000175000017500000001755400000000000024361 0ustar00zuulzuul00000000000000======================= Resize and cold migrate ======================= The `resize API`_ and `cold migrate API`_ are commonly confused in nova because the internal `API code`_, `conductor code`_ and `compute code`_ use the same methods. This document explains some of the differences in what happens between a resize and cold migrate operation. For the most part this document describes :term:`same-cell resize `. For details on :term:`cross-cell resize `, refer to :doc:`/admin/configuration/cross-cell-resize`. High level ~~~~~~~~~~ :doc:`Cold migrate ` is an operation performed by an administrator to power off and move a server from one host to a **different** host using the **same** flavor. Volumes and network interfaces are disconnected from the source host and connected on the destination host. The type of file system between the hosts and image backend determine if the server files and disks have to be copied. If copy is necessary then root and ephemeral disks are copied and swap disks are re-created. :doc:`Resize ` is an operation which can be performed by a non-administrative owner of the server (the user) with a **different** flavor. The new flavor can change certain aspects of the server such as the number of CPUS, RAM and disk size. Otherwise for the most part the internal details are the same as a cold migration. Scheduling ~~~~~~~~~~ Depending on how the API is configured for :oslo.config:option:`allow_resize_to_same_host`, the server may be able to be resized on the current host. *All* compute drivers support *resizing* to the same host but *only* the vCenter driver supports *cold migrating* to the same host. Enabling resize to the same host is necessary for features such as strict affinity server groups where there are more than one server in the same affinity group. Starting with `microversion 2.56`_ an administrator can specify a destination host for the cold migrate operation. 
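For illustration, a targeted cold migration is a ``POST /servers/{server_id}/action`` request sent with an ``OpenStack-API-Version: compute 2.56`` (or later) header and a body along these lines (the host name is made up; see the `cold migrate API`_ reference for the authoritative request format):

.. code-block:: json

   {
       "migrate": {
           "host": "target-compute-01"
       }
   }

Omitting the host (``{"migrate": null}``) leaves the choice of destination to the scheduler, as it was before microversion 2.56.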
Resize does not allow specifying a destination host. Flavor ~~~~~~ As noted above, with resize the flavor *must* change and with cold migrate the flavor *will not* change. Resource claims ~~~~~~~~~~~~~~~ Both resize and cold migration perform a `resize claim`_ on the destination node. Historically the resize claim was meant as a safety check on the selected node to work around race conditions in the scheduler. Since the scheduler started `atomically claiming`_ VCPU, MEMORY_MB and DISK_GB allocations using Placement the role of the resize claim has been reduced to detecting the same conditions but for resources like PCI devices and NUMA topology which, at least as of the 20.0.0 (Train) release, are not modeled in Placement and as such are not atomic. If this claim fails, the operation can be rescheduled to an alternative host, if there are any. The number of possible alternative hosts is determined by the :oslo.config:option:`scheduler.max_attempts` configuration option. Allocations ~~~~~~~~~~~ Since the 16.0.0 (Pike) release, the scheduler uses the `placement service`_ to filter compute nodes (resource providers) based on information in the flavor and image used to build the server. Once the scheduler runs through its filters and weighers and picks a host, resource class `allocations`_ are atomically consumed in placement with the server as the consumer. During both resize and cold migrate operations, the allocations held by the server consumer against the source compute node resource provider are `moved`_ to a `migration record`_ and the scheduler will create allocations, held by the instance consumer, on the selected destination compute node resource provider. This is commonly referred to as `migration-based allocations`_ which were introduced in the 17.0.0 (Queens) release. If the operation is successful and confirmed, the source node allocations held by the migration record are `dropped`_. If the operation fails or is reverted, the source compute node resource provider allocations held by the migration record are `reverted`_ back to the instance consumer and the allocations against the destination compute node resource provider are dropped. Summary of differences ~~~~~~~~~~~~~~~~~~~~~~ .. list-table:: :header-rows: 1 * - - Resize - Cold migrate * - New flavor - Yes - No * - Authorization (default) - Admin or owner (user) Policy rule: ``os_compute_api:servers:resize`` - Admin only Policy rule: ``os_compute_api:os-migrate-server:migrate`` * - Same host - Maybe - Only vCenter * - Can specify target host - No - Yes (microversion >= 2.56) Sequence Diagrams ~~~~~~~~~~~~~~~~~ The following diagrams are current as of the 21.0.0 (Ussuri) release. Resize ------ This is the sequence of calls to get the server to ``VERIFY_RESIZE`` status. .. seqdiag:: seqdiag { API; Conductor; Scheduler; Source; Destination; edge_length = 300; span_height = 15; activation = none; default_note_color = white; API -> Conductor [label = "cast", note = "resize_instance/migrate_server"]; Conductor => Scheduler [label = "call", note = "select_destinations"]; Conductor -> Destination [label = "cast", note = "prep_resize"]; Source <- Destination [label = "cast", leftnote = "resize_instance"]; Source -> Destination [label = "cast", note = "finish_resize"]; } Confirm resize -------------- This is the sequence of calls when confirming `or deleting`_ a server in ``VERIFY_RESIZE`` status. Note that in the below diagram, if confirming a resize while deleting a server the API synchronously calls the source compute service. .. 
seqdiag:: seqdiag { API; Source; edge_length = 300; span_height = 15; activation = none; default_note_color = white; API -> Source [label = "cast (or call if deleting)", note = "confirm_resize"]; } Revert resize ------------- This is the sequence of calls when reverting a server in ``VERIFY_RESIZE`` status. .. seqdiag:: seqdiag { API; Source; Destination; edge_length = 300; span_height = 15; activation = none; default_note_color = white; API -> Destination [label = "cast", note = "revert_resize"]; Source <- Destination [label = "cast", leftnote = "finish_revert_resize"]; } .. _resize API: https://docs.openstack.org/api-ref/compute/#resize-server-resize-action .. _cold migrate API: https://docs.openstack.org/api-ref/compute/#migrate-server-migrate-action .. _API code: https://opendev.org/openstack/nova/src/tag/19.0.0/nova/compute/api.py#L3568 .. _conductor code: https://opendev.org/openstack/nova/src/tag/19.0.0/nova/conductor/manager.py#L297 .. _compute code: https://opendev.org/openstack/nova/src/tag/19.0.0/nova/compute/manager.py#L4445 .. _microversion 2.56: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id52 .. _resize claim: https://opendev.org/openstack/nova/src/tag/19.0.0/nova/compute/resource_tracker.py#L248 .. _atomically claiming: https://opendev.org/openstack/nova/src/tag/19.0.0/nova/scheduler/filter_scheduler.py#L239 .. _moved: https://opendev.org/openstack/nova/src/tag/19.0.0/nova/conductor/tasks/migrate.py#L28 .. _placement service: https://docs.openstack.org/placement/latest/ .. _allocations: https://docs.openstack.org/api-ref/placement/#allocations .. _migration record: https://docs.openstack.org/api-ref/compute/#migrations-os-migrations .. _migration-based allocations: https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/migration-allocations.html .. _dropped: https://opendev.org/openstack/nova/src/tag/19.0.0/nova/compute/manager.py#L4048 .. _reverted: https://opendev.org/openstack/nova/src/tag/19.0.0/nova/compute/manager.py#L4233 .. _or deleting: https://opendev.org/openstack/nova/src/tag/19.0.0/nova/compute/api.py#L2135 ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2464707 nova-21.2.4/doc/source/contributor/testing/0000775000175000017500000000000000000000000020662 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/contributor/testing/down-cell.rst0000664000175000017500000003114500000000000023304 0ustar00zuulzuul00000000000000================== Testing Down Cells ================== This document describes how to recreate a down-cell scenario in a single-node devstack environment. This can be useful for testing the reliability of the controller services when a cell in the deployment is down. Setup ===== DevStack config --------------- This guide is based on a devstack install from the Train release using an Ubuntu Bionic 18.04 VM with 8 VCPU, 8 GB RAM and 200 GB of disk following the `All-In-One Single Machine`_ guide. The following minimal local.conf was used: .. code-block:: ini [[local|localrc]] # Define passwords OS_PASSWORD=openstack1 SERVICE_TOKEN=$OS_PASSWORD ADMIN_PASSWORD=$OS_PASSWORD MYSQL_PASSWORD=$OS_PASSWORD RABBIT_PASSWORD=$OS_PASSWORD SERVICE_PASSWORD=$OS_PASSWORD # Logging config LOGFILE=$DEST/logs/stack.sh.log LOGDAYS=2 # Disable non-essential services disable_service horizon tempest .. 
_All-In-One Single Machine: https://docs.openstack.org/devstack/latest/guides/single-machine.html Populate cell1 -------------- Create a test server first so there is something in cell1: .. code-block:: console $ source openrc admin admin $ IMAGE=$(openstack image list -f value -c ID) $ openstack server create --wait --flavor m1.tiny --image $IMAGE cell1-server Take down cell1 =============== Break the connection to the cell1 database by changing the ``database_connection`` URL, in this case with an invalid host IP: .. code-block:: console mysql> select database_connection from cell_mappings where name='cell1'; +-------------------------------------------------------------------+ | database_connection | +-------------------------------------------------------------------+ | mysql+pymysql://root:openstack1@127.0.0.1/nova_cell1?charset=utf8 | +-------------------------------------------------------------------+ 1 row in set (0.00 sec) mysql> update cell_mappings set database_connection='mysql+pymysql://root:openstack1@192.0.0.1/nova_cell1?charset=utf8' where name='cell1'; Query OK, 1 row affected (0.01 sec) Rows matched: 1 Changed: 1 Warnings: 0 Update controller services ========================== Prepare the controller services for the down cell. See :ref:`Handling cell failures ` for details. Modify nova.conf ---------------- Configure the API to avoid long timeouts and slow start times due to `bug 1815697`_ by modifying ``/etc/nova/nova.conf``: .. code-block:: ini [database] ... max_retries = 1 retry_interval = 1 [upgrade_levels] ... compute = stein # N-1 from train release, just something other than "auto" .. _bug 1815697: https://bugs.launchpad.net/nova/+bug/1815697 Restart services ---------------- .. note:: It is useful to tail the n-api service logs in another screen to watch for errors / warnings in the logs due to down cells: .. code-block:: console $ sudo journalctl -f -a -u devstack@n-api.service Restart controller services to flush the cell cache: .. code-block:: console $ sudo systemctl restart devstack@n-api.service devstack@n-super-cond.service devstack@n-sch.service Test cases ========== 1. Try to create a server which should fail and go to cell0. .. code-block:: console $ openstack server create --wait --flavor m1.tiny --image $IMAGE cell0-server You can expect to see errors like this in the n-api logs: .. 
code-block:: console Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context [None req-fdaff415-48b9-44a7-b4c3-015214e80b90 None None] Error gathering result from cell 4f495a21-294a-4051-9a3d-8b34a250bbb4: DBConnectionError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on u'192.0.0.1' ([Errno 101] ENETUNREACH)") (Background on this error at: http://sqlalche.me/e/e3q8) Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context Traceback (most recent call last): Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context File "/opt/stack/nova/nova/context.py", line 441, in gather_result Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context result = fn(cctxt, *args, **kwargs) Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context File "/opt/stack/nova/nova/db/sqlalchemy/api.py", line 211, in wrapper Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context with reader_mode.using(context): Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__ Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context return self.gen.next() Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 1061, in _transaction_scope Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context context=context) as resource: Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__ Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context return self.gen.next() Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 659, in _session Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context bind=self.connection, mode=self.mode) Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 418, in _create_session Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context self._start() Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 510, in _start Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context engine_args, maker_args) Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 534, in _setup_for_connection Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context sql_connection=sql_connection, **engine_kwargs) Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context File "/usr/local/lib/python2.7/dist-packages/debtcollector/renames.py", line 43, in decorator Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context return wrapped(*args, **kwargs) Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py", line 201, in create_engine Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context test_conn = _test_connection(engine, max_retries, retry_interval) Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context 
File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py", line 387, in _test_connection Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context six.reraise(type(de_ref), de_ref) Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context File "", line 3, in reraise Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context DBConnectionError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on u'192.0.0.1' ([Errno 101] ENETUNREACH)") (Background on this error at: http://sqlalche.me/e/e3q8) Apr 04 20:48:22 train devstack@n-api.service[10884]: ERROR nova.context Apr 04 20:48:22 train devstack@n-api.service[10884]: WARNING nova.objects.service [None req-1cf4bf5c-2f74-4be0-a18d-51ff81df57dd admin admin] Failed to get minimum service version for cell 4f495a21-294a-4051-9a3d-8b34a250bbb4 2. List servers with the 2.69 microversion for down cells. .. note:: Requires python-openstackclient >= 3.18.0 for v2.69 support. The server in cell1 (which is down) will show up with status UNKNOWN: .. code-block:: console $ openstack --os-compute-api-version 2.69 server list +--------------------------------------+--------------+---------+----------+--------------------------+--------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+--------------+---------+----------+--------------------------+--------+ | 8e90f1f0-e8dd-4783-8bb3-ec8d594e60f1 | | UNKNOWN | | | | | afd45d84-2bd7-4e49-9dff-93359f742bc1 | cell0-server | ERROR | | cirros-0.4.0-x86_64-disk | | +--------------------------------------+--------------+---------+----------+--------------------------+--------+ 3. Using v2.1 the UNKNOWN server is filtered out by default due to :oslo.config:option:`api.list_records_by_skipping_down_cells`: .. code-block:: console $ openstack --os-compute-api-version 2.1 server list +--------------------------------------+--------------+--------+----------+--------------------------+---------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+--------------+--------+----------+--------------------------+---------+ | afd45d84-2bd7-4e49-9dff-93359f742bc1 | cell0-server | ERROR | | cirros-0.4.0-x86_64-disk | m1.tiny | +--------------------------------------+--------------+--------+----------+--------------------------+---------+ 4. Configure nova-api with ``list_records_by_skipping_down_cells=False`` .. code-block:: ini [api] list_records_by_skipping_down_cells = False 5. Restart nova-api and then listing servers should fail: .. code-block:: console $ sudo systemctl restart devstack@n-api.service $ openstack --os-compute-api-version 2.1 server list Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-e2264d67-5b6c-4f17-ae3d-16c7562f1b69) 6. Try listing compute services with a down cell. The services from the down cell are skipped: .. 
code-block:: console $ openstack --os-compute-api-version 2.1 compute service list +----+------------------+-------+----------+---------+-------+----------------------------+ | ID | Binary | Host | Zone | Status | State | Updated At | +----+------------------+-------+----------+---------+-------+----------------------------+ | 2 | nova-scheduler | train | internal | enabled | up | 2019-04-04T21:12:47.000000 | | 6 | nova-consoleauth | train | internal | enabled | up | 2019-04-04T21:12:38.000000 | | 7 | nova-conductor | train | internal | enabled | up | 2019-04-04T21:12:47.000000 | +----+------------------+-------+----------+---------+-------+----------------------------+ With 2.69 the nova-compute service from cell1 is shown with status UNKNOWN: .. code-block:: console $ openstack --os-compute-api-version 2.69 compute service list +--------------------------------------+------------------+-------+----------+---------+-------+----------------------------+ | ID | Binary | Host | Zone | Status | State | Updated At | +--------------------------------------+------------------+-------+----------+---------+-------+----------------------------+ | f68a96d9-d994-4122-a8f9-1b0f68ed69c2 | nova-scheduler | train | internal | enabled | up | 2019-04-04T21:13:47.000000 | | 70cd668a-6d60-4a9a-ad83-f863920d4c44 | nova-consoleauth | train | internal | enabled | up | 2019-04-04T21:13:38.000000 | | ca88f023-1de4-49e0-90b0-581e16bebaed | nova-conductor | train | internal | enabled | up | 2019-04-04T21:13:47.000000 | | | nova-compute | train | | UNKNOWN | | | +--------------------------------------+------------------+-------+----------+---------+-------+----------------------------+ Future ====== This guide could be expanded for having multiple non-cell0 cells where one cell is down while the other is available and go through scenarios where the down cell is marked as disabled to take it out of scheduling consideration. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/contributor/testing/eventlet-profiling.rst0000664000175000017500000003550700000000000025243 0ustar00zuulzuul00000000000000======================= Profiling With Eventlet ======================= When performance of one of the Nova services is worse than expected, and other sorts of analysis do not lead to candidate fixes, profiling is an excellent tool for producing detailed analysis of what methods in the code are called the most and which consume the most time. Because most Nova services use eventlet_, the standard profiling tool provided with Python, cProfile_, will not work. Something is required to keep track of changing tasks. Thankfully eventlet comes with ``eventlet.green.profile.Profile``, a mostly undocumented class that provides a similar (but not identical) API to the one provided by Python's ``Profile`` while outputting the same format. .. note:: The eventlet Profile outputs the ``prof`` format produced by ``profile``, which is not the same as that output by ``cProfile``. Some analysis tools (for example, SnakeViz_) only read the latter so the options for analyzing eventlet profiling are not always deluxe (see below). Setup ===== This guide assumes the Nova service being profiled is running devstack, but that is not necessary. What is necessary is that the code associated with the service can be changed and the service restarted, in place. 
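In practice the change is only a few lines wrapped around the method of
interest. Stripped of any nova specifics, the pattern used in the rest of
this document looks roughly like this (a minimal sketch; the method name,
the helper it calls and the output path are all placeholders):

.. code-block:: python

   from eventlet.green import profile

   def method_being_profiled(self, *args, **kwargs):
       pr = profile.Profile()
       pr.start()

       # Placeholder for the code actually being measured.
       result = do_the_real_work(*args, **kwargs)

       pr.stop()
       # dump_stats() writes "prof" format data for later analysis with pstats.
       pr.dump_stats('/tmp/method_being_profiled.prof')
       return result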
Profiling the entire service will produce mostly noise and the output will be confusing because different tasks will operate during the profile run. It is better to begin the process with a candidate task or method *within* the service that can be associated with an identifier. For example, ``select_destinations`` in the ``FilterScheduler`` can be associated with the list of ``instance_uuids`` passed to it and it runs only once for that set of instance uuids. The process for profiling is: #. Identify the method to be profiled. #. Populate the environment with sufficient resources to exercise the code. For example you may wish to use the FakeVirtDriver_ to have nova aware of multiple ``nova-compute`` processes. Or you may wish to launch many instances if you are evaluating a method that loops over instances. #. At the start of that method, change the code to instantiate a ``Profile`` object and ``start()`` it. #. At the end of that method, change the code to ``stop()`` profiling and write the data (with ``dump_stats()``) to a reasonable location. #. Restart the service. #. Cause the method being evaluated to run. #. Analyze the profile data with the pstats_ module. .. note:: ``stop()`` and ``start()`` are two of the ways in which the eventlet ``Profile`` API differs from the stdlib. There the methods are ``enable()`` and ``disable()``. Example ======= For this example we will analyze ``select_destinations`` in the ``FilterScheduler``. A known problem is that it does excessive work when presented with too many candidate results from the Placement service. We'd like to know why. We'll configure and run devstack_ with FakeVirtDriver_ so there are several candidate hypervisors (the following ``local.conf`` is also useful for other profiling and benchmarking scenarios so not all changes are relevant here): .. code-block:: ini [[local|localrc]] ADMIN_PASSWORD=secret DATABASE_PASSWORD=$ADMIN_PASSWORD RABBIT_PASSWORD=$ADMIN_PASSWORD SERVICE_PASSWORD=$ADMIN_PASSWORD VIRT_DRIVER=fake # You may use different numbers of fake computes, but be careful: 100 will # completely overwhelm a 16GB, 16VPCU server. In the test profiles below a # value of 50 was used, on a 16GB, 16VCPU server. NUMBER_FAKE_NOVA_COMPUTE=25 disable_service cinder disable_service horizon disable_service dstat disable_service tempest [[post-config|$NOVA_CONF]] rpc_response_timeout = 300 # Disable filtering entirely. For some profiling this will not be what you # want. [filter_scheduler] enabled_filters = '""' # Send only one type of notifications to avoid notification overhead. [notifications] notification_format = unversioned Change the code in ``nova/scheduler/filter_scheduler.py`` as follows: .. code-block:: diff diff --git a/nova/scheduler/filter_scheduler.py b/nova/scheduler/filter_scheduler.py index 672f23077e..cb0f87fe48 100644 --- a/nova/scheduler/filter_scheduler.py +++ b/nova/scheduler/filter_scheduler.py @@ -49,92 +49,99 @@ class FilterScheduler(driver.Scheduler): def select_destinations(self, context, spec_obj, instance_uuids, alloc_reqs_by_rp_uuid, provider_summaries, allocation_request_version=None, return_alternates=False): """Returns a list of lists of Selection objects, which represent the hosts and (optionally) alternates for each instance. 
:param context: The RequestContext object :param spec_obj: The RequestSpec object :param instance_uuids: List of UUIDs, one for each value of the spec object's num_instances attribute :param alloc_reqs_by_rp_uuid: Optional dict, keyed by resource provider UUID, of the allocation_requests that may be used to claim resources against matched hosts. If None, indicates either the placement API wasn't reachable or that there were no allocation_requests returned by the placement API. If the latter, the provider_summaries will be an empty dict, not None. :param provider_summaries: Optional dict, keyed by resource provider UUID, of information that will be used by the filters/weighers in selecting matching hosts for a request. If None, indicates that the scheduler driver should grab all compute node information locally and that the Placement API is not used. If an empty dict, indicates the Placement API returned no potential matches for the requested resources. :param allocation_request_version: The microversion used to request the allocations. :param return_alternates: When True, zero or more alternate hosts are returned with each selected host. The number of alternates is determined by the configuration option `CONF.scheduler.max_attempts`. """ + from eventlet.green import profile + pr = profile.Profile() + pr.start() + self.notifier.info( context, 'scheduler.select_destinations.start', dict(request_spec=spec_obj.to_legacy_request_spec_dict())) compute_utils.notify_about_scheduler_action( context=context, request_spec=spec_obj, action=fields_obj.NotificationAction.SELECT_DESTINATIONS, phase=fields_obj.NotificationPhase.START) host_selections = self._schedule(context, spec_obj, instance_uuids, alloc_reqs_by_rp_uuid, provider_summaries, allocation_request_version, return_alternates) self.notifier.info( context, 'scheduler.select_destinations.end', dict(request_spec=spec_obj.to_legacy_request_spec_dict())) compute_utils.notify_about_scheduler_action( context=context, request_spec=spec_obj, action=fields_obj.NotificationAction.SELECT_DESTINATIONS, phase=fields_obj.NotificationPhase.END) + pr.stop() + pr.dump_stats('/tmp/select_destinations/%s.prof' % ':'.join(instance_uuids)) + return host_selections Make a ``/tmp/select_destinations`` directory that is writable by the user nova-scheduler will run as. This is where the profile output will go. Restart the scheduler service. Note that ``systemctl restart`` may not kill things sufficiently dead, so:: sudo systemctl stop devstack@n-sch sleep 5 sudo systemctl start devstack@n-sch Create a server (which will call ``select_destinations``):: openstack server create --image cirros-0.4.0-x86_64-disk --flavor c1 x1 In ``/tmp/select_destinations`` there should be a file with a name using the uuid of the created server with a ``.prof`` extension. Change to that directory and view the profile using the pstats `interactive mode`_:: python3 -m pstats ef044142-f3b8-409d-9af6-c60cea39b273.prof .. note:: The major version of python used to analyze the profile data must be the same as the version used to run the process being profiled. 
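If the interactive prompt is inconvenient, the same information can be
extracted with a short script using the ``pstats`` module directly (a
minimal sketch operating on the example profile file above; it mirrors the
``sort cumtime`` and ``stats 10`` commands shown next):

.. code-block:: python

   import pstats

   stats = pstats.Stats('ef044142-f3b8-409d-9af6-c60cea39b273.prof')
   # Equivalent to "sort cumtime" followed by "stats 10" in interactive mode.
   stats.sort_stats('cumtime').print_stats(10)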
Sort stats by their cumulative time:: ef044142-f3b8-409d-9af6-c60cea39b273.prof% sort cumtime ef044142-f3b8-409d-9af6-c60cea39b273.prof% stats 10 Tue Aug 6 17:17:56 2019 ef044142-f3b8-409d-9af6-c60cea39b273.prof 603477 function calls (587772 primitive calls) in 2.294 seconds Ordered by: cumulative time List reduced from 2484 to 10 due to restriction <10> ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 1.957 1.957 profile:0(start) 1 0.000 0.000 1.911 1.911 /mnt/share/opt/stack/nova/nova/scheduler/filter_scheduler.py:113(_schedule) 1 0.000 0.000 1.834 1.834 /mnt/share/opt/stack/nova/nova/scheduler/filter_scheduler.py:485(_get_all_host_states) 1 0.000 0.000 1.834 1.834 /mnt/share/opt/stack/nova/nova/scheduler/host_manager.py:757(get_host_states_by_uuids) 1 0.004 0.004 1.818 1.818 /mnt/share/opt/stack/nova/nova/scheduler/host_manager.py:777(_get_host_states) 104/103 0.001 0.000 1.409 0.014 /usr/local/lib/python3.6/dist-packages/oslo_versionedobjects/base.py:170(wrapper) 50 0.001 0.000 1.290 0.026 /mnt/share/opt/stack/nova/nova/scheduler/host_manager.py:836(_get_instance_info) 50 0.001 0.000 1.289 0.026 /mnt/share/opt/stack/nova/nova/scheduler/host_manager.py:820(_get_instances_by_host) 103 0.001 0.000 0.890 0.009 /usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/query.py:3325(__iter__) 50 0.001 0.000 0.776 0.016 /mnt/share/opt/stack/nova/nova/objects/host_mapping.py:99(get_by_host) From this we can make a couple of useful inferences about ``get_by_host``: * It is called once for each of the 50 ``FakeVirtDriver`` hypervisors configured for these tests. * It (and the methods it calls internally) consumes about 40% of the entire time spent running (``0.776 / 1.957``) the ``select_destinations`` method (indicated by ``profile:0(start)``, above). Several other sort modes can be used. List those that are available by entering ``sort`` without arguments. Caveats ======= Real world use indicates that the eventlet profiler is not perfect. There are situations where it will not always track switches between greenlets as well as it could. This can result in profile data that does not make sense or random slowdowns in the system being profiled. There is no one size fits all solution to these issues; profiling eventlet services is more an art than science. However, this section tries to provide a (hopefully) growing body of advice on what to do to work around problems. General Advice -------------- * Try to profile chunks of code that operate mostly within one module or class and do not have many collaborators. The more convoluted the path through the code, the more confused the profiler gets. * Similarly, where possible avoid profiling code that will trigger many greenlet context switches; either specific spawns, or multiple types of I/O. Instead, narrow the focus of the profiler. * If possible, avoid RPC. In nova-compute --------------- The creation of this caveat section was inspired by issues experienced while profiling ``nova-compute``. The ``nova-compute`` process is not allowed to speak with a database server directly. Instead communication is mediated through the conductor, communication happening via ``oslo.versionedobjects`` and remote calls. Profiling methods such as ``update_available_resource`` in the ResourceTracker, which needs information from the database, results in profile data that can be analyzed but is incorrect and misleading. This can be worked around by temporarily changing ``nova-compute`` to allow it to speak to the database directly: .. 
code-block:: diff diff --git a/nova/cmd/compute.py b/nova/cmd/compute.py index 01fd20de2e..655d503158 100644 --- a/nova/cmd/compute.py +++ b/nova/cmd/compute.py @@ -50,8 +50,10 @@ def main(): gmr.TextGuruMeditation.setup_autorun(version, conf=CONF) - cmd_common.block_db_access('nova-compute') - objects_base.NovaObject.indirection_api = conductor_rpcapi.ConductorAPI() + # Temporarily allow access to the database. You must update the config file + # used by this process to set [database]/connection to the cell1 database. + # cmd_common.block_db_access('nova-compute') + # objects_base.NovaObject.indirection_api = conductor_rpcapi.ConductorAPI() objects.Service.enable_min_version_cache() server = service.Service.create(binary='nova-compute', topic=compute_rpcapi.RPC_TOPIC) The configuration file used by the ``nova-compute`` process must also be updated to ensure that it contains a setting for the relevant database: .. code-block:: ini [database] connection = mysql+pymysql://root:secret@127.0.0.1/nova_cell1?charset=utf8 In a single node devstack setup ``nova_cell1`` is the right choice. The connection string will vary in other setups. Once these changes are made, along with the profiler changes indicated in the example above, ``nova-compute`` can be restarted and with luck some useful profiling data will emerge. .. _eventlet: https://eventlet.net/ .. _cProfile: https://docs.python.org/3/library/profile.html .. _SnakeViz: https://jiffyclub.github.io/snakeviz/ .. _devstack: https://docs.openstack.org/devstack/latest/ .. _FakeVirtDriver: https://docs.openstack.org/devstack/latest/guides/nova.html#fake-virt-driver .. _pstats: https://docs.python.org/3/library/profile.html#pstats.Stats .. _interactive mode: https://www.stefaanlippens.net/python_profiling_with_pstats_interactive_mode/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/contributor/testing/libvirt-numa.rst0000664000175000017500000007073400000000000024040 0ustar00zuulzuul00000000000000================================================ Testing NUMA related hardware setup with libvirt ================================================ This page describes how to test the libvirt driver's handling of the NUMA placement, large page allocation and CPU pinning features. It relies on setting up a virtual machine as the test environment and requires support for nested virtualization since plain QEMU is not sufficiently functional. The virtual machine will itself be given NUMA topology, so it can then act as a virtual "host" for testing purposes. ------------------------------------------ Provisioning a virtual machine for testing ------------------------------------------ The entire test process will take place inside a large virtual machine running Fedora 24. The instructions should work for any other Linux distribution which includes libvirt >= 1.2.9 and QEMU >= 2.1.2 The tests will require support for nested KVM, which is not enabled by default on hypervisor hosts. It must be explicitly turned on in the host when loading the kvm-intel/kvm-amd kernel modules. On Intel hosts verify it with .. code-block:: bash # cat /sys/module/kvm_intel/parameters/nested N # rmmod kvm-intel # echo "options kvm-intel nested=y" > /etc/modprobe.d/dist.conf # modprobe kvm-intel # cat /sys/module/kvm_intel/parameters/nested Y While on AMD hosts verify it with .. 
code-block:: bash # cat /sys/module/kvm_amd/parameters/nested 0 # rmmod kvm-amd # echo "options kvm-amd nested=1" > /etc/modprobe.d/dist.conf # modprobe kvm-amd # cat /sys/module/kvm_amd/parameters/nested 1 The virt-install command below shows how to provision a basic Fedora 24 x86_64 guest with 8 virtual CPUs, 8 GB of RAM and 20 GB of disk space: .. code-block:: bash # cd /var/lib/libvirt/images # wget https://download.fedoraproject.org/pub/fedora/linux/releases/29/Server/x86_64/iso/Fedora-Server-netinst-x86_64-29-1.2.iso # virt-install \ --name f29x86_64 \ --ram 8000 \ --vcpus 8 \ --file /var/lib/libvirt/images/f29x86_64.img \ --file-size 20 --cdrom /var/lib/libvirt/images/Fedora-Server-netinst-x86_64-24-1.2.iso \ --os-variant fedora23 When the virt-viewer application displays the installer, follow the defaults for the installation with a couple of exceptions: * The automatic disk partition setup can be optionally tweaked to reduce the swap space allocated. No more than 500MB is required, free'ing up an extra 1.5 GB for the root disk * Select "Minimal install" when asked for the installation type since a desktop environment is not required * When creating a user account be sure to select the option "Make this user administrator" so it gets 'sudo' rights Once the installation process has completed, the virtual machine will reboot into the final operating system. It is now ready to deploy an OpenStack development environment. --------------------------------- Setting up a devstack environment --------------------------------- For later ease of use, copy your SSH public key into the virtual machine: .. code-block:: bash # ssh-copy-id Now login to the virtual machine: .. code-block:: bash # ssh The Fedora minimal install does not contain git. Install git and clone the devstack repo: .. code-block:: bash $ sudo dnf install git $ git clone https://opendev.org/openstack/devstack $ cd devstack At this point a fairly standard devstack setup can be done with one exception: we should enable the ``NUMATopologyFilter`` filter, which we will use later. For example: .. 
code-block:: bash $ cat >>local.conf < select numa_topology from compute_nodes; +----------------------------------------------------------------------------+ | numa_topology | +----------------------------------------------------------------------------+ | { | "nova_object.name": "NUMATopology", | "nova_object.namespace": "nova", | "nova_object.version": "1.2", | "nova_object.data": { | "cells": [{ | "nova_object.name": "NUMACell", | "nova_object.namespace": "nova", | "nova_object.version": "1.4", | "nova_object.data": { | "id": 0, | "cpuset": [0, 1, 2, 3, 4, 5, 6, 7], | "pcpuset": [0, 1, 2, 3, 4, 5, 6, 7], | "memory": 7975, | "cpu_usage": 0, | "memory_usage": 0, | "pinned_cpus": [], | "siblings": [ | [0], | [1], | [2], | [3], | [4], | [5], | [6], | [7] | ], | "mempages": [{ | "nova_object.name": "NUMAPagesTopology", | "nova_object.namespace": "nova", | "nova_object.version": "1.1", | "nova_object.data": { | "size_kb": 4, | "total": 2041795, | "used": 0, | "reserved": 0 | }, | "nova_object.changes": ["size_kb", "total", "reserved", "used"] | }, { | "nova_object.name": "NUMAPagesTopology", | "nova_object.namespace": "nova", | "nova_object.version": "1.1", | "nova_object.data": { | "size_kb": 2048, | "total": 0, | "used": 0, | "reserved": 0 | }, | "nova_object.changes": ["size_kb", "total", "reserved", "used"] | }, { | "nova_object.name": "NUMAPagesTopology", | "nova_object.namespace": "nova", | "nova_object.version": "1.1", | "nova_object.data": { | "size_kb": 1048576, | "total": 0, | "used": 0, | "reserved": 0 | }, | "nova_object.changes": ["size_kb", "total", "reserved", "used"] | }], | "network_metadata": { | "nova_object.name": "NetworkMetadata", | "nova_object.namespace": "nova", | "nova_object.version": "1.0", | "nova_object.data": { | "physnets": [], | "tunneled": false | }, | "nova_object.changes": ["tunneled", "physnets"] | } | }, | "nova_object.changes": ["pinned_cpus", "memory_usage", "siblings", "mempages", "memory", "id", "network_metadata", "cpuset", "cpu_usage", "pcpuset"] | }] | }, | "nova_object.changes": ["cells"] | } +----------------------------------------------------------------------------+ Meanwhile, the guest instance should not have any NUMA configuration recorded: .. code-block:: bash MariaDB [nova]> select numa_topology from instance_extra; +---------------+ | numa_topology | +---------------+ | NULL | +---------------+ ----------------------------------------------------- Reconfiguring the test instance to have NUMA topology ----------------------------------------------------- Now that devstack is proved operational, it is time to configure some NUMA topology for the test VM, so that it can be used to verify the OpenStack NUMA support. To do the changes, the VM instance that is running devstack must be shut down: .. code-block:: bash $ sudo shutdown -h now And now back on the physical host edit the guest config as root: .. code-block:: bash $ sudo virsh edit f29x86_64 The first thing is to change the `` block to do passthrough of the host CPU. In particular this exposes the "SVM" or "VMX" feature bits to the guest so that "Nested KVM" can work. At the same time we want to define the NUMA topology of the guest. To make things interesting we're going to give the guest an asymmetric topology with 4 CPUS and 4 GBs of RAM in the first NUMA node and 2 CPUs and 2 GB of RAM in the second and third NUMA nodes. So modify the guest XML to include the following CPU XML: .. code-block:: xml Now start the guest again: .. 
code-block:: bash # virsh start f29x86_64 ...and login back in: .. code-block:: bash # ssh Before starting OpenStack services again, it is necessary to explicitly set the libvirt virtualization type to KVM, so that guests can take advantage of nested KVM: .. code-block:: bash $ sudo sed -i 's/virt_type = qemu/virt_type = kvm/g' /etc/nova/nova.conf With that done, OpenStack can be started again: .. code-block:: bash $ cd devstack $ ./stack.sh The first thing is to check that the compute node picked up the new NUMA topology setup for the guest: .. code-block:: bash $ mysql -u root -p123456 nova_cell1 MariaDB [nova]> select numa_topology from compute_nodes; +----------------------------------------------------------------------------+ | numa_topology | +----------------------------------------------------------------------------+ | { | "nova_object.name": "NUMATopology", | "nova_object.namespace": "nova", | "nova_object.version": "1.2", | "nova_object.data": { | "cells": [{ | "nova_object.name": "NUMACell", | "nova_object.namespace": "nova", | "nova_object.version": "1.4", | "nova_object.data": { | "id": 0, | "cpuset": [0, 1, 2, 3], | "pcpuset": [0, 1, 2, 3], | "memory": 3966, | "cpu_usage": 0, | "memory_usage": 0, | "pinned_cpus": [], | "siblings": [ | [2], | [0], | [3], | [1] | ], | "mempages": [{ | "nova_object.name": "NUMAPagesTopology", | "nova_object.namespace": "nova", | "nova_object.version": "1.1", | "nova_object.data": { | "size_kb": 4, | "total": 1015418, | "used": 0, | "reserved": 0 | }, | "nova_object.changes": ["total", "size_kb", "used", "reserved"] | }, { | "nova_object.name": "NUMAPagesTopology", | "nova_object.namespace": "nova", | "nova_object.version": "1.1", | "nova_object.data": { | "size_kb": 2048, | "total": 0, | "used": 0, | "reserved": 0 | }, | "nova_object.changes": ["total", "size_kb", "used", "reserved"] | }, { | "nova_object.name": "NUMAPagesTopology", | "nova_object.namespace": "nova", | "nova_object.version": "1.1", | "nova_object.data": { | "size_kb": 1048576, | "total": 0, | "used": 0, | "reserved": 0 | }, | "nova_object.changes": ["total", "size_kb", "used", "reserved"] | }], | "network_metadata": { | "nova_object.name": "NetworkMetadata", | "nova_object.namespace": "nova", | "nova_object.version": "1.0", | "nova_object.data": { | "physnets": [], | "tunneled": false | }, | "nova_object.changes": ["physnets", "tunneled"] | } | }, | "nova_object.changes": ["pinned_cpus", "siblings", "memory", "id", "cpuset", "network_metadata", "pcpuset", "mempages", "cpu_usage", "memory_usage"] | }, { | "nova_object.name": "NUMACell", | "nova_object.namespace": "nova", | "nova_object.version": "1.4", | "nova_object.data": { | "id": 1, | "cpuset": [4, 5], | "pcpuset": [4, 5], | "memory": 1994, | "cpu_usage": 0, | "memory_usage": 0, | "pinned_cpus": [], | "siblings": [ | [5], | [4] | ], | "mempages": [{ | "nova_object.name": "NUMAPagesTopology", | "nova_object.namespace": "nova", | "nova_object.version": "1.1", | "nova_object.data": { | "size_kb": 4, | "total": 510562, | "used": 0, | "reserved": 0 | }, | "nova_object.changes": ["total", "size_kb", "used", "reserved"] | }, { | "nova_object.name": "NUMAPagesTopology", | "nova_object.namespace": "nova", | "nova_object.version": "1.1", | "nova_object.data": { | "size_kb": 2048, | "total": 0, | "used": 0, | "reserved": 0 | }, | "nova_object.changes": ["total", "size_kb", "used", "reserved"] | }, { | "nova_object.name": "NUMAPagesTopology", | "nova_object.namespace": "nova", | "nova_object.version": "1.1", | "nova_object.data": { | 
"size_kb": 1048576, | "total": 0, | "used": 0, | "reserved": 0 | }, | "nova_object.changes": ["total", "size_kb", "used", "reserved"] | }], | "network_metadata": { | "nova_object.name": "NetworkMetadata", | "nova_object.namespace": "nova", | "nova_object.version": "1.0", | "nova_object.data": { | "physnets": [], | "tunneled": false | }, | "nova_object.changes": ["physnets", "tunneled"] | } | }, | "nova_object.changes": ["pinned_cpus", "siblings", "memory", "id", "cpuset", "network_metadata", "pcpuset", "mempages", "cpu_usage", "memory_usage"] | }, { | "nova_object.name": "NUMACell", | "nova_object.namespace": "nova", | "nova_object.version": "1.4", | "nova_object.data": { | "id": 2, | "cpuset": [6, 7], | "pcpuset": [6, 7], | "memory": 2014, | "cpu_usage": 0, | "memory_usage": 0, | "pinned_cpus": [], | "siblings": [ | [7], | [6] | ], | "mempages": [{ | "nova_object.name": "NUMAPagesTopology", | "nova_object.namespace": "nova", | "nova_object.version": "1.1", | "nova_object.data": { | "size_kb": 4, | "total": 515727, | "used": 0, | "reserved": 0 | }, | "nova_object.changes": ["total", "size_kb", "used", "reserved"] | }, { | "nova_object.name": "NUMAPagesTopology", | "nova_object.namespace": "nova", | "nova_object.version": "1.1", | "nova_object.data": { | "size_kb": 2048, | "total": 0, | "used": 0, | "reserved": 0 | }, | "nova_object.changes": ["total", "size_kb", "used", "reserved"] | }, { | "nova_object.name": "NUMAPagesTopology", | "nova_object.namespace": "nova", | "nova_object.version": "1.1", | "nova_object.data": { | "size_kb": 1048576, | "total": 0, | "used": 0, | "reserved": 0 | }, | "nova_object.changes": ["total", "size_kb", "used", "reserved"] | }], | "network_metadata": { | "nova_object.name": "NetworkMetadata", | "nova_object.namespace": "nova", | "nova_object.version": "1.0", | "nova_object.data": { | "physnets": [], | "tunneled": false | }, | "nova_object.changes": ["physnets", "tunneled"] | } | }, | "nova_object.changes": ["pinned_cpus", "siblings", "memory", "id", "cpuset", "network_metadata", "pcpuset", "mempages", "cpu_usage", "memory_usage"] | }] | }, | "nova_object.changes": ["cells"] +----------------------------------------------------------------------------+ This indeed shows that there are now 3 NUMA nodes for the "host" machine, the first with 4 GB of RAM and 4 CPUs, and others with 2 GB of RAM and 2 CPUs each. ----------------------------------------------------- Testing instance boot with no NUMA topology requested ----------------------------------------------------- For the sake of backwards compatibility, if the NUMA filter is enabled, but the flavor/image does not have any NUMA settings requested, it should be assumed that the guest will have a single NUMA node. The guest should be locked to a single host NUMA node too. Boot a guest with the `m1.tiny` flavor to test this condition: .. code-block:: bash $ . openrc admin admin $ openstack server create --image cirros-0.4.0-x86_64-disk --flavor m1.tiny \ cirros1 Now look at the libvirt guest XML: .. code-block:: bash $ sudo virsh list Id Name State ---------------------------------------------------- 1 instance-00000001 running $ sudo virsh dumpxml instance-00000001 ... 1 ... This example shows that there is no explicit NUMA topology listed in the guest XML. 
------------------------------------------------ Testing instance boot with 1 NUMA cell requested ------------------------------------------------ Moving forward a little, explicitly tell nova that the NUMA topology for the guest should have a single NUMA node. This should operate in an identical manner to the default behavior where no NUMA policy is set. To define the topology we will create a new flavor: .. code-block:: bash $ openstack flavor create --ram 1024 --disk 1 --vcpus 4 m1.numa $ openstack flavor set --property hw:numa_nodes=1 m1.numa $ openstack flavor show m1.numa Now boot the guest using this new flavor: .. code-block:: bash $ openstack server create --image cirros-0.4.0-x86_64-disk --flavor m1.numa \ cirros2 Looking at the resulting guest XML from libvirt: .. code-block:: bash $ sudo virsh list Id Name State ---------------------------------------------------- 1 instance-00000001 running 2 instance-00000002 running $ sudo virsh dumpxml instance-00000002 ... 4 ... ... The XML shows: * Each guest CPU has been pinned to the physical CPUs associated with a particular NUMA node * The emulator threads have been pinned to the union of all physical CPUs in the host NUMA node that the guest is placed on * The guest has been given a virtual NUMA topology with a single node holding all RAM and CPUs * The guest NUMA node has been strictly pinned to a host NUMA node. As a further sanity test, check what nova recorded for the instance in the database. This should match the ```` information: .. code-block:: bash $ mysql -u root -p123456 nova_cell1 MariaDB [nova]> select numa_topology from instance_extra; +----------------------------------------------------------------------------+ | numa_topology | +----------------------------------------------------------------------------+ | { | "nova_object.name": "InstanceNUMATopology", | "nova_object.namespace": "nova", | "nova_object.version": "1.3", | "nova_object.data": { | "cells": [{ | "nova_object.name": "InstanceNUMACell", | "nova_object.namespace": "nova", | "nova_object.version": "1.4", | "nova_object.data": { | "id": 0, | "cpuset": [0, 1, 2, 3], | "memory": 1024, | "pagesize": null, | "cpu_pinning_raw": null, | "cpu_policy": null, | "cpu_thread_policy": null, | "cpuset_reserved": null | }, | "nova_object.changes": ["id"] | }], | "emulator_threads_policy": null | }, | "nova_object.changes": ["cells", "emulator_threads_policy"] | } +----------------------------------------------------------------------------+ Delete this instance: .. code-block:: bash $ openstack server delete cirros2 ------------------------------------------------- Testing instance boot with 2 NUMA cells requested ------------------------------------------------- Now getting more advanced we tell nova that the guest will have two NUMA nodes. To define the topology we will change the previously defined flavor: .. code-block:: bash $ openstack flavor set --property hw:numa_nodes=2 m1.numa $ openstack flavor show m1.numa Now boot the guest using this changed flavor: .. code-block:: bash $ openstack server create --image cirros-0.4.0-x86_64-disk --flavor m1.numa \ cirros2 Looking at the resulting guest XML from libvirt: .. code-block:: bash $ sudo virsh list Id Name State ---------------------------------------------------- 1 instance-00000001 running 3 instance-00000003 running $ sudo virsh dumpxml instance-00000003 ... 4 ... ... 
The XML shows: * Each guest CPU has been pinned to the physical CPUs associated with particular NUMA nodes * The emulator threads have been pinned to the union of all physical CPUs in the host NUMA nodes that the guest is placed on * The guest has been given a virtual NUMA topology with two nodes, each holding half the RAM and CPUs * The guest NUMA nodes have been strictly pinned to different host NUMA node As a further sanity test, check what nova recorded for the instance in the database. This should match the ```` information: .. code-block:: bash MariaDB [nova]> select numa_topology from instance_extra; +----------------------------------------------------------------------------+ | numa_topology | +----------------------------------------------------------------------------+ | { | "nova_object.name": "InstanceNUMATopology", | "nova_object.namespace": "nova", | "nova_object.version": "1.3", | "nova_object.data": { | "cells": [{ | "nova_object.name": "InstanceNUMACell", | "nova_object.namespace": "nova", | "nova_object.version": "1.4", | "nova_object.data": { | "id": 0, | "cpuset": [0, 1], | "memory": 512, | "pagesize": null, | "cpu_pinning_raw": null, | "cpu_policy": null, | "cpu_thread_policy": null, | "cpuset_reserved": null | }, | "nova_object.changes": ["id"] | }, { | "nova_object.name": "InstanceNUMACell", | "nova_object.namespace": "nova", | "nova_object.version": "1.4", | "nova_object.data": { | "id": 1, | "cpuset": [2, 3], | "memory": 512, | "pagesize": null, | "cpu_pinning_raw": null, | "cpu_policy": null, | "cpu_thread_policy": null, | "cpuset_reserved": null | }, | "nova_object.changes": ["id"] | }], | "emulator_threads_policy": null | }, | "nova_object.changes": ["cells", "emulator_threads_policy"] | } +----------------------------------------------------------------------------+ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/contributor/testing/serial-console.rst0000664000175000017500000000526600000000000024344 0ustar00zuulzuul00000000000000 ====================== Testing Serial Console ====================== The main aim of this feature is exposing an interactive web-based serial consoles through a web-socket proxy. This page describes how to test it from a devstack environment. --------------------------------- Setting up a devstack environment --------------------------------- For instructions on how to setup devstack with serial console support enabled see `this guide `_. --------------- Testing the API --------------- Starting a new instance. .. code-block:: bash # cd devstack && . openrc # nova boot --flavor 1 --image cirros-0.3.2-x86_64-uec cirros1 Nova provides a command `nova get-serial-console` which will returns a URL with a valid token to connect to the serial console of VMs. .. code-block:: bash # nova get-serial-console cirros1 +--------+-----------------------------------------------------------------+ | Type | Url | +--------+-----------------------------------------------------------------+ | serial | ws://127.0.0.1:6083/?token=5f7854b7-bf3a-41eb-857a-43fc33f0b1ec | +--------+-----------------------------------------------------------------+ Currently nova does not provide any client able to connect from an interactive console through a web-socket. A simple client for *test purpose* can be written with few lines of Python. .. 
code-block:: python # sudo easy_install ws4py || sudo pip install ws4py # cat >> client.py < cirros1 login ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/contributor/testing/zero-downtime-upgrade.rst0000664000175000017500000001367000000000000025653 0ustar00zuulzuul00000000000000===================================== Testing Zero Downtime Upgrade Process ===================================== Zero Downtime upgrade eliminates any disruption to nova API service during upgrade. Nova API services are upgraded at the end. The basic idea of the zero downtime upgrade process is to have the connections drain from the old API before being upgraded. In this process, new connections go to the new API nodes while old connections slowly drain from the old nodes. This ensures that the user sees the max_supported API version as a monotonically increasing number. There might be some performance degradation during the process due to slow HTTP responses and delayed request handling, but there is no API downtime. This page describes how to test the zero downtime upgrade process. ----------- Environment ----------- * Multinode devstack environment with 2 nodes: * controller - All services (N release) * compute-api - Only n-cpu and n-api services (N release) * Highly available load balancer (HAProxy) on top of the n-api services. This is required for zero downtime upgrade as it allows one n-api service to run while we upgrade the other. See instructions to setup HAProxy below. ----------------------------- Instructions to setup HAProxy ----------------------------- Install HAProxy and Keepalived on both nodes. .. code-block:: bash # apt-get install haproxy keepalived Let the kernel know that we intend to bind additional IP addresses that won't be defined in the interfaces file. To do this, edit ``/etc/sysctl.conf`` and add the following line: .. code-block:: INI net.ipv4.ip_nonlocal_bind=1 Make this take effect without rebooting. .. code-block:: bash # sysctl -p Configure HAProxy to add backend servers and assign virtual IP to the frontend. On both nodes add the below HAProxy config: .. code-block:: bash # cd /etc/haproxy # cat >> haproxy.cfg <> default_backend nova-api backend nova-api balance roundrobin option tcplog server controller 192.168.0.88:8774 check server apicomp 192.168.0.89:8774 check EOF .. note:: Just change the IP for log in the global section on each node. On both nodes add ``keepalived.conf``: .. code-block:: bash # cd /etc/keepalived # cat >> keepalived.conf <> 0" | sudo socat /var/run/haproxy.sock stdio * OR disable service using: .. code-block:: bash # echo "disable server nova-api/<>" | sudo socat /var/run/haproxy.sock stdio * This allows the current node to complete all the pending requests. When this is being upgraded, other api node serves the requests. This way we can achieve zero downtime. * Restart n-api service and enable n-api using the command: .. code-block:: bash # echo "enable server nova-api/<>" | sudo socat /var/run/haproxy.sock stdio * Drain connections from other old api node in the same way and upgrade. * No tempest tests should fail since there is no API downtime. After maintenance window ''''''''''''''''''''''''' * Follow the steps from general rolling upgrade process to clear any cached service version data and complete all online data migrations. 
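For reference, the online data migrations mentioned in the last step are
normally completed with ``nova-manage`` on an upgraded node (shown here as a
general illustration; see the regular rolling upgrade documentation for the
full procedure):

.. code-block:: bash

   # nova-manage db online_data_migrations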
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/contributor/testing.rst0000664000175000017500000001120000000000000021406 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ============== Test Strategy ============== A key part of the "four opens" is ensuring the OpenStack delivers well-tested and usable software. For more details see: http://docs.openstack.org/project-team-guide/introduction.html#the-four-opens Experience has shown that untested features are frequently broken, in part due to the velocity of upstream changes. As we aim to ensure we keep all features working across upgrades, we must aim to test all features. Reporting Test Coverage ======================= For details on plans to report the current test coverage, refer to :doc:`/user/feature-classification`. Running tests and reporting results =================================== Running tests locally --------------------- Please see https://opendev.org/openstack/nova/src/branch/master/HACKING.rst#running-tests Voting in Gerrit ---------------- On every review in gerrit, check tests are run on very patch set, and are able to report a +1 or -1 vote. For more details, please see: http://docs.openstack.org/infra/manual/developers.html#automated-testing Before merging any code, there is an integrate gate test queue, to ensure master is always passing all tests. For more details, please see: http://docs.openstack.org/infra/zuul/user/gating.html Infra vs Third-Party -------------------- Tests that use fully open source components are generally run by the OpenStack Infra teams. Test setups that use non-open technology must be run outside of that infrastructure, but should still report their results upstream. For more details, please see: http://docs.openstack.org/infra/system-config/third_party.html Ad-hoc testing -------------- It is particularly common for people to run ad-hoc tests on each released milestone, such as RC1, to stop regressions. While these efforts can help stabilize the release, as a community we have a much stronger preference for continuous integration testing. Partly this is because we encourage users to deploy master, and we generally have to assume that any upstream commit may already been deployed in production. Types of tests ============== Unit tests ---------- Unit tests help document and enforce the contract for each component. Without good unit test coverage it is hard to continue to quickly evolve the codebase. The correct level of unit test coverage is very subjective, and as such we are not aiming for a particular percentage of coverage, rather we are aiming for good coverage. Generally, every code change should have a related unit test: https://opendev.org/openstack/nova/src/branch/master/HACKING.rst#creating-unit-tests Integration tests ----------------- Today, our integration tests involve running the Tempest test suite on a variety of Nova deployment scenarios. 
The integration job setup is defined in the ``.zuul.yaml`` file in the root of the nova repository. Jobs are restricted by queue: * ``check``: jobs in this queue automatically run on all proposed changes even with non-voting jobs * ``gate``: jobs in this queue automatically run on all approved changes (voting jobs only) * ``experimental``: jobs in this queue are non-voting and run on-demand by leaving a review comment on the change of "check experimental" In addition, we have third parties running the tests on their preferred Nova deployment scenario. Functional tests ---------------- Nova has a set of in-tree functional tests that focus on things that are out of scope for tempest testing and unit testing. Tempest tests run against a full live OpenStack deployment, generally deployed using devstack. At the other extreme, unit tests typically use mock to test a unit of code in isolation. Functional tests don't run an entire stack, they are isolated to nova code, and have no reliance on external services. They do have a WSGI app, nova services and a database, with minimal stubbing of nova internals. Interoperability tests ----------------------- The DefCore committee maintains a list that contains a subset of Tempest tests. These are used to verify if a particular Nova deployment's API responds as expected. For more details, see: https://opendev.org/openstack/interop ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/index.rst0000664000175000017500000002310600000000000016476 0ustar00zuulzuul00000000000000.. Copyright 2010-2012 United States Government as represented by the Administrator of the National Aeronautics and Space Administration. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ======================== OpenStack Compute (nova) ======================== What is nova? ============= Nova is the OpenStack project that provides a way to provision compute instances (aka virtual servers). Nova supports creating virtual machines, baremetal servers (through the use of ironic), and has limited support for system containers. Nova runs as a set of daemons on top of existing Linux servers to provide that service. It requires the following additional OpenStack services for basic function: * :keystone-doc:`Keystone <>`: This provides identity and authentication for all OpenStack services. * :glance-doc:`Glance <>`: This provides the compute image repository. All compute instances launch from glance images. * :neutron-doc:`Neutron <>`: This is responsible for provisioning the virtual or physical networks that compute instances connect to on boot. * :placement-doc:`Placement <>`: This is responsible for tracking inventory of resources available in a cloud and assisting in choosing which provider of those resources will be used when creating a virtual machine. It can also integrate with other services to include: persistent block storage, encrypted disks, and baremetal compute instances. 
For End Users ============= As an end user of nova, you'll use nova to create and manage servers with either tools or the API directly. .. # NOTE(amotoki): toctree needs to be placed at the end of the secion to # keep the document structure in the PDF doc. .. toctree:: :hidden: user/index Tools for using Nova -------------------- * :horizon-doc:`Horizon `: The official web UI for the OpenStack Project. * :python-openstackclient-doc:`OpenStack Client <>`: The official CLI for OpenStack Projects. You should use this as your CLI for most things, it includes not just nova commands but also commands for most of the projects in OpenStack. * :python-novaclient-doc:`Nova Client `: For some very advanced features (or administrative commands) of nova you may need to use nova client. It is still supported, but the ``openstack`` cli is recommended. Writing to the API ------------------ All end user (and some administrative) features of nova are exposed via a REST API, which can be used to build more complicated logic or automation with nova. This can be consumed directly, or via various SDKs. The following resources will help you get started with consuming the API directly. * `Compute API Guide `_: The concept guide for the API. This helps lay out the concepts behind the API to make consuming the API reference easier. * `Compute API Reference `_: The complete reference for the compute API, including all methods and request / response parameters and their meaning. * :doc:`Compute API Microversion History `: The compute API evolves over time through `Microversions `_. This provides the history of all those changes. Consider it a "what's new" in the compute API. * :doc:`Block Device Mapping `: One of the trickier parts to understand is the Block Device Mapping parameters used to connect specific block devices to computes. This deserves its own deep dive. * :doc:`Metadata `: Provide information to the guest instance when it is created. Nova can be configured to emit notifications over RPC. * :ref:`Versioned Notifications `: This provides the list of existing versioned notifications with sample payloads. Other end-user guides can be found under :doc:`/user/index`. For Operators ============= Architecture Overview --------------------- * :doc:`Nova architecture `: An overview of how all the parts in nova fit together. .. # NOTE(amotoki): toctree needs to be placed at the end of the secion to # keep the document structure in the PDF doc. .. toctree:: :hidden: user/architecture Installation ------------ .. TODO(sdague): links to all the rest of the install guide pieces. The detailed install guide for nova. A functioning nova will also require having installed :keystone-doc:`keystone `, :glance-doc:`glance `, :neutron-doc:`neutron `, and :placement-doc:`placement `. Ensure that you follow their install guides first. .. # NOTE(amotoki): toctree needs to be placed at the end of the secion to # keep the document structure in the PDF doc. .. toctree:: :maxdepth: 2 install/index Deployment Considerations ------------------------- There is information you might want to consider before doing your deployment, especially if it is going to be a larger deployment. For smaller deployments the defaults from the :doc:`install guide ` will be sufficient. * **Compute Driver Features Supported**: While the majority of nova deployments use libvirt/kvm, you can use nova with other compute drivers. 
Nova attempts to provide a unified feature set across these, however, not all features are implemented on all backends, and not all features are equally well tested. * :doc:`Feature Support by Use Case `: A view of what features each driver supports based on what's important to some large use cases (General Purpose Cloud, NFV Cloud, HPC Cloud). * :doc:`Feature Support full list `: A detailed dive through features in each compute driver backend. * :doc:`Cells v2 Planning `: For large deployments, Cells v2 allows sharding of your compute environment. Upfront planning is key to a successful Cells v2 layout. * :doc:`Running nova-api on wsgi `: Considerations for using a real WSGI container instead of the baked-in eventlet web server. .. # NOTE(amotoki): toctree needs to be placed at the end of the secion to # keep the document structure in the PDF doc. .. toctree:: :hidden: user/feature-classification user/support-matrix user/cellsv2-layout user/wsgi Maintenance ----------- Once you are running nova, the following information is extremely useful. * :doc:`Admin Guide `: A collection of guides for administrating nova. * :doc:`Flavors `: What flavors are and why they are used. * :doc:`Upgrades `: How nova is designed to be upgraded for minimal service impact, and the order you should do them in. * :doc:`Quotas `: Managing project quotas in nova. * :doc:`Aggregates `: Aggregates are a useful way of grouping hosts together for scheduling purposes. * :doc:`Filter Scheduler `: How the filter scheduler is configured, and how that will impact where compute instances land in your environment. If you are seeing unexpected distribution of compute instances in your hosts, you'll want to dive into this configuration. * :doc:`Exposing custom metadata to compute instances `: How and when you might want to extend the basic metadata exposed to compute instances (either via metadata server or config drive) for your specific purposes. .. # NOTE(amotoki): toctree needs to be placed at the end of the secion to # keep the document structure in the PDF doc. .. toctree:: :hidden: admin/index user/flavors user/upgrade user/quotas user/filter-scheduler admin/vendordata Reference Material ------------------ * :doc:`Nova CLI Command References `: the complete command reference for all the daemons and admin tools that come with nova. * :doc:`Configuration Guide `: Information on configuring the system, including role-based access control policy rules. .. # NOTE(amotoki): toctree needs to be placed at the end of the secion to # keep the document structure in the PDF doc. .. toctree:: :hidden: cli/index configuration/index For Contributors ================ * :doc:`contributor/contributing`: If you are a new contributor this should help you to start contributing to Nova. * :doc:`contributor/index`: If you are new to Nova, this should help you start to understand what Nova actually does, and why. * :doc:`reference/index`: There are also a number of technical references on both current and future looking parts of our architecture. These are collected here. .. # NOTE(amotoki): toctree needs to be placed at the end of the secion to # keep the document structure in the PDF doc. .. toctree:: :hidden: contributor/index contributor/contributing reference/index .. only:: html Search ====== * :ref:`Nova document search `: Search the contents of this document. * `OpenStack wide search `_: Search the wider set of OpenStack documentation, including forums. 
././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2504706 nova-21.2.4/doc/source/install/0000775000175000017500000000000000000000000016301 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/install/compute-install-obs.rst0000664000175000017500000002160500000000000022740 0ustar00zuulzuul00000000000000Install and configure a compute node for openSUSE and SUSE Linux Enterprise ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Compute service on a compute node. The service supports several hypervisors to deploy instances or virtual machines (VMs). For simplicity, this configuration uses the Quick EMUlator (QEMU) hypervisor with the kernel-based VM (KVM) extension on compute nodes that support hardware acceleration for virtual machines. On legacy hardware, this configuration uses the generic QEMU hypervisor. You can follow these instructions with minor modifications to horizontally scale your environment with additional compute nodes. .. note:: This section assumes that you are following the instructions in this guide step-by-step to configure the first compute node. If you want to configure additional compute nodes, prepare them in a similar fashion to the first compute node in the :ref:`example architectures ` section. Each additional compute node requires a unique IP address. Install and configure components -------------------------------- .. include:: shared/note_configuration_vary_by_distribution.rst #. Install the packages: .. code-block:: console # zypper install openstack-nova-compute genisoimage qemu-kvm libvirt #. Edit the ``/etc/nova/nova.conf`` file and complete the following actions: * In the ``[DEFAULT]`` section, enable only the compute and metadata APIs: .. path /etc/nova/nova.conf .. code-block:: ini [DEFAULT] # ... enabled_apis = osapi_compute,metadata * In the ``[DEFAULT]`` section, set the ``compute_driver``: .. path /etc/nova/nova.conf .. code-block:: ini [DEFAULT] # ... compute_driver = libvirt.LibvirtDriver * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access: .. path /etc/nova/nova.conf .. code-block:: ini [DEFAULT] # ... transport_url = rabbit://openstack:RABBIT_PASS@controller Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` account in ``RabbitMQ``. * In the ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity service access: .. path /etc/nova/nova.conf .. code-block:: ini [api] # ... auth_strategy = keystone [keystone_authtoken] # ... www_authenticate_uri = http://controller:5000/ auth_url = http://controller:5000/ memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in the Identity service. .. note:: Comment out or remove any other options in the ``[keystone_authtoken]`` section. * In the ``[DEFAULT]`` section, configure the ``my_ip`` option: .. path /etc/nova/nova.conf .. code-block:: ini [DEFAULT] # ... my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address of the management network interface on your compute node, typically ``10.0.0.31`` for the first node in the :ref:`example architecture `. 
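If you are not sure which address that is, you can list the addresses configured on the compute node and pick the one on the management network. A quick check, assuming the management network is ``10.0.0.0/24`` as in the example architecture (the interface name varies by system):

.. code-block:: console

   # ip -4 addr show | grep 'inet 10.0.0.'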
* Configure the ``[neutron]`` section of **/etc/nova/nova.conf**. Refer to the :neutron-doc:`Networking service install guide ` for more details. * In the ``[vnc]`` section, enable and configure remote console access: .. path /etc/nova/nova.conf .. code-block:: ini [vnc] # ... enabled = true server_listen = 0.0.0.0 server_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html The server component listens on all IP addresses and the proxy component only listens on the management interface IP address of the compute node. The base URL indicates the location where you can use a web browser to access remote consoles of instances on this compute node. .. note:: If the web browser to access remote consoles resides on a host that cannot resolve the ``controller`` hostname, you must replace ``controller`` with the management interface IP address of the controller node. * In the ``[glance]`` section, configure the location of the Image service API: .. path /etc/nova/nova.conf .. code-block:: ini [glance] # ... api_servers = http://controller:9292 * In the ``[oslo_concurrency]`` section, configure the lock path: .. path /etc/nova/nova.conf .. code-block:: ini [oslo_concurrency] # ... lock_path = /var/run/nova * In the ``[placement]`` section, configure the Placement API: .. path /etc/nova/nova.conf .. code-block:: ini [placement] # ... region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS Replace ``PLACEMENT_PASS`` with the password you choose for the ``placement`` user in the Identity service. Comment out any other options in the ``[placement]`` section. #. Ensure the kernel module ``nbd`` is loaded. .. code-block:: console # modprobe nbd #. Ensure the module loads on every boot by adding ``nbd`` to the ``/etc/modules-load.d/nbd.conf`` file. Finalize installation --------------------- #. Determine whether your compute node supports hardware acceleration for virtual machines: .. code-block:: console $ egrep -c '(vmx|svm)' /proc/cpuinfo If this command returns a value of ``one or greater``, your compute node supports hardware acceleration which typically requires no additional configuration. If this command returns a value of ``zero``, your compute node does not support hardware acceleration and you must configure ``libvirt`` to use QEMU instead of KVM. * Edit the ``[libvirt]`` section in the ``/etc/nova/nova.conf`` file as follows: .. path /etc/nova/nova.conf .. code-block:: ini [libvirt] # ... virt_type = qemu #. Start the Compute service including its dependencies and configure them to start automatically when the system boots: .. code-block:: console # systemctl enable libvirtd.service openstack-nova-compute.service # systemctl start libvirtd.service openstack-nova-compute.service .. note:: If the ``nova-compute`` service fails to start, check ``/var/log/nova/nova-compute.log``. The error message ``AMQP server on controller:5672 is unreachable`` likely indicates that the firewall on the controller node is preventing access to port 5672. Configure the firewall to open port 5672 on the controller node and restart ``nova-compute`` service on the compute node. Add the compute node to the cell database ----------------------------------------- .. important:: Run the following commands on the **controller** node. #. 
Source the admin credentials to enable admin-only CLI commands, then confirm there are compute hosts in the database: .. code-block:: console $ . admin-openrc $ openstack compute service list --service nova-compute +----+-------+--------------+------+-------+---------+----------------------------+ | ID | Host | Binary | Zone | State | Status | Updated At | +----+-------+--------------+------+-------+---------+----------------------------+ | 1 | node1 | nova-compute | nova | up | enabled | 2017-04-14T15:30:44.000000 | +----+-------+--------------+------+-------+---------+----------------------------+ #. Discover compute hosts: .. code-block:: console # su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova Found 2 cell mappings. Skipping cell0 since it does not contain hosts. Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 .. note:: When you add new compute nodes, you must run ``nova-manage cell_v2 discover_hosts`` on the controller node to register those new compute nodes. Alternatively, you can set an appropriate interval in ``/etc/nova/nova.conf``: .. code-block:: ini [scheduler] discover_hosts_in_cells_interval = 300 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/install/compute-install-rdo.rst0000664000175000017500000002103400000000000022735 0ustar00zuulzuul00000000000000Install and configure a compute node for Red Hat Enterprise Linux and CentOS ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Compute service on a compute node. The service supports several hypervisors to deploy instances or virtual machines (VMs). For simplicity, this configuration uses the Quick EMUlator (QEMU) hypervisor with the kernel-based VM (KVM) extension on compute nodes that support hardware acceleration for virtual machines. On legacy hardware, this configuration uses the generic QEMU hypervisor. You can follow these instructions with minor modifications to horizontally scale your environment with additional compute nodes. .. note:: This section assumes that you are following the instructions in this guide step-by-step to configure the first compute node. If you want to configure additional compute nodes, prepare them in a similar fashion to the first compute node in the :ref:`example architectures ` section. Each additional compute node requires a unique IP address. Install and configure components -------------------------------- .. include:: shared/note_configuration_vary_by_distribution.rst #. Install the packages: .. code-block:: console # yum install openstack-nova-compute #. Edit the ``/etc/nova/nova.conf`` file and complete the following actions: * In the ``[DEFAULT]`` section, enable only the compute and metadata APIs: .. path /etc/nova/nova.conf .. code-block:: ini [DEFAULT] # ... enabled_apis = osapi_compute,metadata * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access: .. path /etc/nova/nova.conf .. code-block:: ini [DEFAULT] # ... transport_url = rabbit://openstack:RABBIT_PASS@controller Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` account in ``RabbitMQ``. 
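A misconfigured ``transport_url`` usually surfaces later as the ``AMQP server on controller:5672 is unreachable`` error mentioned below. A quick reachability check from the compute node, assuming ``nc`` (netcat) is installed and the default AMQP port 5672 is in use:

.. code-block:: console

   $ nc -zv controller 5672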
* In the ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity service access: .. path /etc/nova/nova.conf .. code-block:: ini [api] # ... auth_strategy = keystone [keystone_authtoken] # ... www_authenticate_uri = http://controller:5000/ auth_url = http://controller:5000/ memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in the Identity service. .. note:: Comment out or remove any other options in the ``[keystone_authtoken]`` section. * In the ``[DEFAULT]`` section, configure the ``my_ip`` option: .. path /etc/nova/nova.conf .. code-block:: ini [DEFAULT] # ... my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address of the management network interface on your compute node, typically 10.0.0.31 for the first node in the :ref:`example architecture `. * Configure the ``[neutron]`` section of **/etc/nova/nova.conf**. Refer to the :neutron-doc:`Networking service install guide ` for more details. * In the ``[vnc]`` section, enable and configure remote console access: .. path /etc/nova/nova.conf .. code-block:: ini [vnc] # ... enabled = true server_listen = 0.0.0.0 server_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html The server component listens on all IP addresses and the proxy component only listens on the management interface IP address of the compute node. The base URL indicates the location where you can use a web browser to access remote consoles of instances on this compute node. .. note:: If the web browser to access remote consoles resides on a host that cannot resolve the ``controller`` hostname, you must replace ``controller`` with the management interface IP address of the controller node. * In the ``[glance]`` section, configure the location of the Image service API: .. path /etc/nova/nova.conf .. code-block:: ini [glance] # ... api_servers = http://controller:9292 * In the ``[oslo_concurrency]`` section, configure the lock path: .. path /etc/nova/nova.conf .. code-block:: ini [oslo_concurrency] # ... lock_path = /var/lib/nova/tmp * In the ``[placement]`` section, configure the Placement API: .. path /etc/nova/nova.conf .. code-block:: ini [placement] # ... region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS Replace ``PLACEMENT_PASS`` with the password you choose for the ``placement`` user in the Identity service. Comment out any other options in the ``[placement]`` section. Finalize installation --------------------- #. Determine whether your compute node supports hardware acceleration for virtual machines: .. code-block:: console $ egrep -c '(vmx|svm)' /proc/cpuinfo If this command returns a value of ``one or greater``, your compute node supports hardware acceleration which typically requires no additional configuration. If this command returns a value of ``zero``, your compute node does not support hardware acceleration and you must configure ``libvirt`` to use QEMU instead of KVM. * Edit the ``[libvirt]`` section in the ``/etc/nova/nova.conf`` file as follows: .. path /etc/nova/nova.conf .. code-block:: ini [libvirt] # ... virt_type = qemu #. 
Start the Compute service including its dependencies and configure them to start automatically when the system boots: .. code-block:: console # systemctl enable libvirtd.service openstack-nova-compute.service # systemctl start libvirtd.service openstack-nova-compute.service .. note:: If the ``nova-compute`` service fails to start, check ``/var/log/nova/nova-compute.log``. The error message ``AMQP server on controller:5672 is unreachable`` likely indicates that the firewall on the controller node is preventing access to port 5672. Configure the firewall to open port 5672 on the controller node and restart ``nova-compute`` service on the compute node. Add the compute node to the cell database ----------------------------------------- .. important:: Run the following commands on the **controller** node. #. Source the admin credentials to enable admin-only CLI commands, then confirm there are compute hosts in the database: .. code-block:: console $ . admin-openrc $ openstack compute service list --service nova-compute +----+-------+--------------+------+-------+---------+----------------------------+ | ID | Host | Binary | Zone | State | Status | Updated At | +----+-------+--------------+------+-------+---------+----------------------------+ | 1 | node1 | nova-compute | nova | up | enabled | 2017-04-14T15:30:44.000000 | +----+-------+--------------+------+-------+---------+----------------------------+ #. Discover compute hosts: .. code-block:: console # su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova Found 2 cell mappings. Skipping cell0 since it does not contain hosts. Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 .. note:: When you add new compute nodes, you must run ``nova-manage cell_v2 discover_hosts`` on the controller node to register those new compute nodes. Alternatively, you can set an appropriate interval in ``/etc/nova/nova.conf``: .. code-block:: ini [scheduler] discover_hosts_in_cells_interval = 300 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/install/compute-install-ubuntu.rst0000664000175000017500000002010700000000000023473 0ustar00zuulzuul00000000000000Install and configure a compute node for Ubuntu ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Compute service on a compute node. The service supports several hypervisors to deploy instances or virtual machines (VMs). For simplicity, this configuration uses the Quick EMUlator (QEMU) hypervisor with the kernel-based VM (KVM) extension on compute nodes that support hardware acceleration for virtual machines. On legacy hardware, this configuration uses the generic QEMU hypervisor. You can follow these instructions with minor modifications to horizontally scale your environment with additional compute nodes. .. note:: This section assumes that you are following the instructions in this guide step-by-step to configure the first compute node. If you want to configure additional compute nodes, prepare them in a similar fashion to the first compute node in the :ref:`example architectures ` section. Each additional compute node requires a unique IP address. 
Install and configure components -------------------------------- .. include:: shared/note_configuration_vary_by_distribution.rst #. Install the packages: .. code-block:: console # apt install nova-compute 2. Edit the ``/etc/nova/nova.conf`` file and complete the following actions: * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access: .. path /etc/nova/nova.conf .. code-block:: ini [DEFAULT] # ... transport_url = rabbit://openstack:RABBIT_PASS@controller Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` account in ``RabbitMQ``. * In the ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity service access: .. path /etc/nova/nova.conf .. code-block:: ini [api] # ... auth_strategy = keystone [keystone_authtoken] # ... www_authenticate_uri = http://controller:5000/ auth_url = http://controller:5000/ memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in the Identity service. .. note:: Comment out or remove any other options in the ``[keystone_authtoken]`` section. * In the ``[DEFAULT]`` section, configure the ``my_ip`` option: .. path /etc/nova/nova.conf .. code-block:: ini [DEFAULT] # ... my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address of the management network interface on your compute node, typically 10.0.0.31 for the first node in the :ref:`example architecture `. * Configure the ``[neutron]`` section of **/etc/nova/nova.conf**. Refer to the :neutron-doc:`Networking service install guide ` for more details. * In the ``[vnc]`` section, enable and configure remote console access: .. path /etc/nova/nova.conf .. code-block:: ini [vnc] # ... enabled = true server_listen = 0.0.0.0 server_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html The server component listens on all IP addresses and the proxy component only listens on the management interface IP address of the compute node. The base URL indicates the location where you can use a web browser to access remote consoles of instances on this compute node. .. note:: If the web browser to access remote consoles resides on a host that cannot resolve the ``controller`` hostname, you must replace ``controller`` with the management interface IP address of the controller node. * In the ``[glance]`` section, configure the location of the Image service API: .. path /etc/nova/nova.conf .. code-block:: ini [glance] # ... api_servers = http://controller:9292 * In the ``[oslo_concurrency]`` section, configure the lock path: .. path /etc/nova/nova.conf .. code-block:: ini [oslo_concurrency] # ... lock_path = /var/lib/nova/tmp * In the ``[placement]`` section, configure the Placement API: .. path /etc/nova/nova.conf .. code-block:: ini [placement] # ... region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS Replace ``PLACEMENT_PASS`` with the password you choose for the ``placement`` user in the Identity service. Comment out any other options in the ``[placement]`` section. Finalize installation --------------------- #. Determine whether your compute node supports hardware acceleration for virtual machines: .. 
code-block:: console $ egrep -c '(vmx|svm)' /proc/cpuinfo If this command returns a value of ``one or greater``, your compute node supports hardware acceleration which typically requires no additional configuration. If this command returns a value of ``zero``, your compute node does not support hardware acceleration and you must configure ``libvirt`` to use QEMU instead of KVM. * Edit the ``[libvirt]`` section in the ``/etc/nova/nova-compute.conf`` file as follows: .. path /etc/nova/nova-compute.conf .. code-block:: ini [libvirt] # ... virt_type = qemu #. Restart the Compute service: .. code-block:: console # service nova-compute restart .. note:: If the ``nova-compute`` service fails to start, check ``/var/log/nova/nova-compute.log``. The error message ``AMQP server on controller:5672 is unreachable`` likely indicates that the firewall on the controller node is preventing access to port 5672. Configure the firewall to open port 5672 on the controller node and restart ``nova-compute`` service on the compute node. Add the compute node to the cell database ----------------------------------------- .. important:: Run the following commands on the **controller** node. #. Source the admin credentials to enable admin-only CLI commands, then confirm there are compute hosts in the database: .. code-block:: console $ . admin-openrc $ openstack compute service list --service nova-compute +----+-------+--------------+------+-------+---------+----------------------------+ | ID | Host | Binary | Zone | State | Status | Updated At | +----+-------+--------------+------+-------+---------+----------------------------+ | 1 | node1 | nova-compute | nova | up | enabled | 2017-04-14T15:30:44.000000 | +----+-------+--------------+------+-------+---------+----------------------------+ #. Discover compute hosts: .. code-block:: console # su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova Found 2 cell mappings. Skipping cell0 since it does not contain hosts. Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 .. note:: When you add new compute nodes, you must run ``nova-manage cell_v2 discover_hosts`` on the controller node to register those new compute nodes. Alternatively, you can set an appropriate interval in ``/etc/nova/nova.conf``: .. code-block:: ini [scheduler] discover_hosts_in_cells_interval = 300 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/install/compute-install.rst0000664000175000017500000000232700000000000022157 0ustar00zuulzuul00000000000000Install and configure a compute node ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Compute service on a compute node for Ubuntu, openSUSE and SUSE Linux Enterprise, and Red Hat Enterprise Linux and CentOS. The service supports several hypervisors to deploy instances or virtual machines (VMs). For simplicity, this configuration uses the Quick EMUlator (QEMU) hypervisor with the kernel-based VM (KVM) extension on compute nodes that support hardware acceleration for virtual machines. On legacy hardware, this configuration uses the generic QEMU hypervisor. 
You can follow these instructions with minor modifications to horizontally scale your environment with additional compute nodes. .. note:: This section assumes that you are following the instructions in this guide step-by-step to configure the first compute node. If you want to configure additional compute nodes, prepare them in a similar fashion to the first compute node in the :ref:`example architectures ` section. Each additional compute node requires a unique IP address. .. toctree:: :glob: compute-install-ubuntu compute-install-rdo compute-install-obs ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/install/controller-install-obs.rst0000664000175000017500000003271100000000000023447 0ustar00zuulzuul00000000000000Install and configure controller node for openSUSE and SUSE Linux Enterprise ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Compute service, code-named nova, on the controller node. Prerequisites ------------- Before you install and configure the Compute service, you must create databases, service credentials, and API endpoints. #. To create the databases, complete these steps: * Use the database access client to connect to the database server as the ``root`` user: .. code-block:: console $ mysql -u root -p * Create the ``nova_api``, ``nova``, and ``nova_cell0`` databases: .. code-block:: console MariaDB [(none)]> CREATE DATABASE nova_api; MariaDB [(none)]> CREATE DATABASE nova; MariaDB [(none)]> CREATE DATABASE nova_cell0; * Grant proper access to the databases: .. code-block:: console MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ IDENTIFIED BY 'NOVA_DBPASS'; Replace ``NOVA_DBPASS`` with a suitable password. * Exit the database access client. #. Source the ``admin`` credentials to gain access to admin-only CLI commands: .. code-block:: console $ . admin-openrc #. Create the Compute service credentials: * Create the ``nova`` user: .. code-block:: console $ openstack user create --domain default --password-prompt nova User Password: Repeat User Password: +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | domain_id | default | | enabled | True | | id | 8a7dbf5279404537b1c7b86c033620fe | | name | nova | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+ * Add the ``admin`` role to the ``nova`` user: .. code-block:: console $ openstack role add --project service --user nova admin .. note:: This command provides no output. * Create the ``nova`` service entity: .. 
code-block:: console $ openstack service create --name nova \ --description "OpenStack Compute" compute +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | OpenStack Compute | | enabled | True | | id | 060d59eac51b4594815603d75a00aba2 | | name | nova | | type | compute | +-------------+----------------------------------+ #. Create the Compute API service endpoints: .. code-block:: console $ openstack endpoint create --region RegionOne \ compute public http://controller:8774/v2.1 +--------------+-------------------------------------------+ | Field | Value | +--------------+-------------------------------------------+ | enabled | True | | id | 3c1caa473bfe4390a11e7177894bcc7b | | interface | public | | region | RegionOne | | region_id | RegionOne | | service_id | 060d59eac51b4594815603d75a00aba2 | | service_name | nova | | service_type | compute | | url | http://controller:8774/v2.1 | +--------------+-------------------------------------------+ $ openstack endpoint create --region RegionOne \ compute internal http://controller:8774/v2.1 +--------------+-------------------------------------------+ | Field | Value | +--------------+-------------------------------------------+ | enabled | True | | id | e3c918de680746a586eac1f2d9bc10ab | | interface | internal | | region | RegionOne | | region_id | RegionOne | | service_id | 060d59eac51b4594815603d75a00aba2 | | service_name | nova | | service_type | compute | | url | http://controller:8774/v2.1 | +--------------+-------------------------------------------+ $ openstack endpoint create --region RegionOne \ compute admin http://controller:8774/v2.1 +--------------+-------------------------------------------+ | Field | Value | +--------------+-------------------------------------------+ | enabled | True | | id | 38f7af91666a47cfb97b4dc790b94424 | | interface | admin | | region | RegionOne | | region_id | RegionOne | | service_id | 060d59eac51b4594815603d75a00aba2 | | service_name | nova | | service_type | compute | | url | http://controller:8774/v2.1 | +--------------+-------------------------------------------+ Install and configure components -------------------------------- .. include:: shared/note_configuration_vary_by_distribution.rst .. note:: As of the Newton release, SUSE OpenStack packages are shipped with the upstream default configuration files. For example, ``/etc/nova/nova.conf`` has customizations in ``/etc/nova/nova.conf.d/010-nova.conf``. While the following instructions modify the default configuration file, adding a new file in ``/etc/nova/nova.conf.d`` achieves the same result. #. Install the packages: .. code-block:: console # zypper install \ openstack-nova-api \ openstack-nova-scheduler \ openstack-nova-conductor \ openstack-nova-novncproxy \ iptables #. Edit the ``/etc/nova/nova.conf`` file and complete the following actions: * In the ``[DEFAULT]`` section, enable only the compute and metadata APIs: .. path /etc/nova/nova.conf .. code-block:: ini [DEFAULT] # ... enabled_apis = osapi_compute,metadata * In the ``[api_database]`` and ``[database]`` sections, configure database access: .. path /etc/nova/nova.conf .. code-block:: ini [api_database] # ... connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api [database] # ... connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova Replace ``NOVA_DBPASS`` with the password you chose for the Compute databases. 
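Before continuing, it can be useful to confirm that the ``nova`` database user created in the prerequisites can reach the databases with the password used in these connection strings. A minimal check, assuming the database server is reachable as ``controller``; replace ``NOVA_DBPASS`` as above:

.. code-block:: console

   $ mysql -u nova -pNOVA_DBPASS -h controller -e "SHOW DATABASES;"

The output should include ``nova``, ``nova_api`` and ``nova_cell0``; if it does not, revisit the grants created in the prerequisites.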
* In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access: .. path /etc/nova/nova.conf .. code-block:: ini [DEFAULT] # ... transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` account in ``RabbitMQ``. * In the ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity service access: .. path /etc/nova/nova.conf .. code-block:: ini [api] # ... auth_strategy = keystone [keystone_authtoken] # ... www_authenticate_uri = http://controller:5000/ auth_url = http://controller:5000/ memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in the Identity service. .. note:: Comment out or remove any other options in the ``[keystone_authtoken]`` section. * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to use the management interface IP address of the controller node: .. path /etc/nova/nova.conf .. code-block:: ini [DEFAULT] # ... my_ip = 10.0.0.11 * Configure the ``[neutron]`` section of **/etc/nova/nova.conf**. Refer to the :neutron-doc:`Networking service install guide ` for more details. * In the ``[vnc]`` section, configure the VNC proxy to use the management interface IP address of the controller node: .. path /etc/nova/nova.conf .. code-block:: ini [vnc] enabled = true # ... server_listen = $my_ip server_proxyclient_address = $my_ip * In the ``[glance]`` section, configure the location of the Image service API: .. path /etc/nova/nova.conf .. code-block:: ini [glance] # ... api_servers = http://controller:9292 * In the ``[oslo_concurrency]`` section, configure the lock path: .. path /etc/nova/nova.conf .. code-block:: ini [oslo_concurrency] # ... lock_path = /var/run/nova * In the ``[placement]`` section, configure access to the Placement service: .. path /etc/nova/nova.conf .. code-block:: ini [placement] # ... region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS Replace ``PLACEMENT_PASS`` with the password you choose for the ``placement`` service user created when installing :placement-doc:`Placement `. Comment out or remove any other options in the ``[placement]`` section. #. Populate the ``nova-api`` database: .. code-block:: console # su -s /bin/sh -c "nova-manage api_db sync" nova .. note:: Ignore any deprecation messages in this output. #. Register the ``cell0`` database: .. code-block:: console # su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova #. Create the ``cell1`` cell: .. code-block:: console # su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova #. Populate the nova database: .. code-block:: console # su -s /bin/sh -c "nova-manage db sync" nova #. Verify nova cell0 and cell1 are registered correctly: .. 
code-block:: console # su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova +-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+ | Name | UUID | Transport URL | Database Connection | Disabled | +-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+ | cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | mysql+pymysql://nova:****@controller/nova_cell0?charset=utf8 | False | | cell1 | f690f4fd-2bc5-4f15-8145-db561a7b9d3d | rabbit://openstack:****@controller:5672/nova_cell1 | mysql+pymysql://nova:****@controller/nova_cell1?charset=utf8 | False | +-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+ Finalize installation --------------------- * Start the Compute services and configure them to start when the system boots: .. code-block:: console # systemctl enable \ openstack-nova-api.service \ openstack-nova-scheduler.service \ openstack-nova-conductor.service \ openstack-nova-novncproxy.service # systemctl start \ openstack-nova-api.service \ openstack-nova-scheduler.service \ openstack-nova-conductor.service \ openstack-nova-novncproxy.service ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/install/controller-install-rdo.rst0000664000175000017500000003170500000000000023452 0ustar00zuulzuul00000000000000Install and configure controller node for Red Hat Enterprise Linux and CentOS ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Compute service, code-named nova, on the controller node. Prerequisites ------------- Before you install and configure the Compute service, you must create databases, service credentials, and API endpoints. #. To create the databases, complete these steps: * Use the database access client to connect to the database server as the ``root`` user: .. code-block:: console $ mysql -u root -p * Create the ``nova_api``, ``nova``, and ``nova_cell0`` databases: .. code-block:: console MariaDB [(none)]> CREATE DATABASE nova_api; MariaDB [(none)]> CREATE DATABASE nova; MariaDB [(none)]> CREATE DATABASE nova_cell0; * Grant proper access to the databases: .. code-block:: console MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ IDENTIFIED BY 'NOVA_DBPASS'; Replace ``NOVA_DBPASS`` with a suitable password. * Exit the database access client. #. Source the ``admin`` credentials to gain access to admin-only CLI commands: .. code-block:: console $ . admin-openrc #. Create the Compute service credentials: * Create the ``nova`` user: .. 
code-block:: console $ openstack user create --domain default --password-prompt nova User Password: Repeat User Password: +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | domain_id | default | | enabled | True | | id | 8a7dbf5279404537b1c7b86c033620fe | | name | nova | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+ * Add the ``admin`` role to the ``nova`` user: .. code-block:: console $ openstack role add --project service --user nova admin .. note:: This command provides no output. * Create the ``nova`` service entity: .. code-block:: console $ openstack service create --name nova \ --description "OpenStack Compute" compute +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | OpenStack Compute | | enabled | True | | id | 060d59eac51b4594815603d75a00aba2 | | name | nova | | type | compute | +-------------+----------------------------------+ #. Create the Compute API service endpoints: .. code-block:: console $ openstack endpoint create --region RegionOne \ compute public http://controller:8774/v2.1 +--------------+-------------------------------------------+ | Field | Value | +--------------+-------------------------------------------+ | enabled | True | | id | 3c1caa473bfe4390a11e7177894bcc7b | | interface | public | | region | RegionOne | | region_id | RegionOne | | service_id | 060d59eac51b4594815603d75a00aba2 | | service_name | nova | | service_type | compute | | url | http://controller:8774/v2.1 | +--------------+-------------------------------------------+ $ openstack endpoint create --region RegionOne \ compute internal http://controller:8774/v2.1 +--------------+-------------------------------------------+ | Field | Value | +--------------+-------------------------------------------+ | enabled | True | | id | e3c918de680746a586eac1f2d9bc10ab | | interface | internal | | region | RegionOne | | region_id | RegionOne | | service_id | 060d59eac51b4594815603d75a00aba2 | | service_name | nova | | service_type | compute | | url | http://controller:8774/v2.1 | +--------------+-------------------------------------------+ $ openstack endpoint create --region RegionOne \ compute admin http://controller:8774/v2.1 +--------------+-------------------------------------------+ | Field | Value | +--------------+-------------------------------------------+ | enabled | True | | id | 38f7af91666a47cfb97b4dc790b94424 | | interface | admin | | region | RegionOne | | region_id | RegionOne | | service_id | 060d59eac51b4594815603d75a00aba2 | | service_name | nova | | service_type | compute | | url | http://controller:8774/v2.1 | +--------------+-------------------------------------------+ Install and configure components -------------------------------- .. include:: shared/note_configuration_vary_by_distribution.rst #. Install the packages: .. code-block:: console # yum install openstack-nova-api openstack-nova-conductor \ openstack-nova-novncproxy openstack-nova-scheduler #. Edit the ``/etc/nova/nova.conf`` file and complete the following actions: * In the ``[DEFAULT]`` section, enable only the compute and metadata APIs: .. path /etc/nova/nova.conf .. code-block:: ini [DEFAULT] # ... enabled_apis = osapi_compute,metadata * In the ``[api_database]`` and ``[database]`` sections, configure database access: .. path /etc/nova/nova.conf .. code-block:: ini [api_database] # ... 
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api [database] # ... connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova Replace ``NOVA_DBPASS`` with the password you chose for the Compute databases. * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access: .. path /etc/nova/nova.conf .. code-block:: ini [DEFAULT] # ... transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` account in ``RabbitMQ``. * In the ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity service access: .. path /etc/nova/nova.conf .. code-block:: ini [api] # ... auth_strategy = keystone [keystone_authtoken] # ... www_authenticate_uri = http://controller:5000/ auth_url = http://controller:5000/ memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in the Identity service. .. note:: Comment out or remove any other options in the ``[keystone_authtoken]`` section. * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to use the management interface IP address of the controller node: .. path /etc/nova/nova.conf .. code-block:: ini [DEFAULT] # ... my_ip = 10.0.0.11 * Configure the ``[neutron]`` section of **/etc/nova/nova.conf**. Refer to the :neutron-doc:`Networking service install guide ` for more details. * In the ``[vnc]`` section, configure the VNC proxy to use the management interface IP address of the controller node: .. path /etc/nova/nova.conf .. code-block:: ini [vnc] enabled = true # ... server_listen = $my_ip server_proxyclient_address = $my_ip * In the ``[glance]`` section, configure the location of the Image service API: .. path /etc/nova/nova.conf .. code-block:: ini [glance] # ... api_servers = http://controller:9292 * In the ``[oslo_concurrency]`` section, configure the lock path: .. path /etc/nova/nova.conf .. code-block:: ini [oslo_concurrency] # ... lock_path = /var/lib/nova/tmp * In the ``[placement]`` section, configure access to the Placement service: .. path /etc/nova/nova.conf .. code-block:: ini [placement] # ... region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS Replace ``PLACEMENT_PASS`` with the password you choose for the ``placement`` service user created when installing :placement-doc:`Placement `. Comment out or remove any other options in the ``[placement]`` section. #. Populate the ``nova-api`` database: .. code-block:: console # su -s /bin/sh -c "nova-manage api_db sync" nova .. note:: Ignore any deprecation messages in this output. #. Register the ``cell0`` database: .. code-block:: console # su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova #. Create the ``cell1`` cell: .. code-block:: console # su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova #. Populate the nova database: .. code-block:: console # su -s /bin/sh -c "nova-manage db sync" nova #. Verify nova cell0 and cell1 are registered correctly: .. 
code-block:: console # su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova +-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+ | Name | UUID | Transport URL | Database Connection | Disabled | +-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+ | cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | mysql+pymysql://nova:****@controller/nova_cell0?charset=utf8 | False | | cell1 | f690f4fd-2bc5-4f15-8145-db561a7b9d3d | rabbit://openstack:****@controller:5672/nova_cell1 | mysql+pymysql://nova:****@controller/nova_cell1?charset=utf8 | False | +-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+ Finalize installation --------------------- * Start the Compute services and configure them to start when the system boots: .. code-block:: console # systemctl enable \ openstack-nova-api.service \ openstack-nova-scheduler.service \ openstack-nova-conductor.service \ openstack-nova-novncproxy.service # systemctl start \ openstack-nova-api.service \ openstack-nova-scheduler.service \ openstack-nova-conductor.service \ openstack-nova-novncproxy.service ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/install/controller-install-ubuntu.rst0000664000175000017500000003077700000000000024220 0ustar00zuulzuul00000000000000Install and configure controller node for Ubuntu ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Compute service, code-named nova, on the controller node. Prerequisites ------------- Before you install and configure the Compute service, you must create databases, service credentials, and API endpoints. #. To create the databases, complete these steps: * Use the database access client to connect to the database server as the ``root`` user: .. code-block:: console # mysql * Create the ``nova_api``, ``nova``, and ``nova_cell0`` databases: .. code-block:: console MariaDB [(none)]> CREATE DATABASE nova_api; MariaDB [(none)]> CREATE DATABASE nova; MariaDB [(none)]> CREATE DATABASE nova_cell0; * Grant proper access to the databases: .. code-block:: console MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ IDENTIFIED BY 'NOVA_DBPASS'; Replace ``NOVA_DBPASS`` with a suitable password. * Exit the database access client. #. Source the ``admin`` credentials to gain access to admin-only CLI commands: .. code-block:: console $ . admin-openrc #. Create the Compute service credentials: * Create the ``nova`` user: .. 
Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

#. Install the packages:

   .. code-block:: console

      # apt install nova-api nova-conductor nova-novncproxy nova-scheduler

#. Edit the ``/etc/nova/nova.conf`` file and complete the following actions:

   * In the ``[api_database]`` and ``[database]`` sections, configure database
     access:

     .. path /etc/nova/nova.conf
     .. code-block:: ini

        [api_database]
        # ...
        connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

        [database]
        # ...
        connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

     Replace ``NOVA_DBPASS`` with the password you chose for the Compute
     databases.
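
     If ``nova-manage`` later fails to reach these databases, it can help to
     confirm the connection credentials by hand first. The following is only
     a quick optional check, and it assumes the ``mysql`` command-line client
     is installed on the controller node:

     .. code-block:: console

        $ mysql -h controller -u nova -pNOVA_DBPASS -e "SHOW DATABASES;"

     The listing should include the ``nova_api``, ``nova``, and ``nova_cell0``
     databases created earlier; an access-denied error points at a mismatch
     between the ``GRANT`` statements and the ``connection`` options above.
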
   * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access:

     .. path /etc/nova/nova.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

     Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
     account in ``RabbitMQ``.

   * In the ``[api]`` and ``[keystone_authtoken]`` sections, configure
     Identity service access:

     .. path /etc/nova/nova.conf
     .. code-block:: ini

        [api]
        # ...
        auth_strategy = keystone

        [keystone_authtoken]
        # ...
        www_authenticate_uri = http://controller:5000/
        auth_url = http://controller:5000/
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = Default
        user_domain_name = Default
        project_name = service
        username = nova
        password = NOVA_PASS

     Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in
     the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to use the
     management interface IP address of the controller node:

     .. path /etc/nova/nova.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        my_ip = 10.0.0.11

   * Configure the ``[neutron]`` section of **/etc/nova/nova.conf**. Refer to
     the :neutron-doc:`Networking service install guide ` for more
     information.

   * In the ``[vnc]`` section, configure the VNC proxy to use the management
     interface IP address of the controller node:

     .. path /etc/nova/nova.conf
     .. code-block:: ini

        [vnc]
        enabled = true
        # ...
        server_listen = $my_ip
        server_proxyclient_address = $my_ip

   * In the ``[glance]`` section, configure the location of the Image service
     API:

     .. path /etc/nova/nova.conf
     .. code-block:: ini

        [glance]
        # ...
        api_servers = http://controller:9292

   * In the ``[oslo_concurrency]`` section, configure the lock path:

     .. path /etc/nova/nova.conf
     .. code-block:: ini

        [oslo_concurrency]
        # ...
        lock_path = /var/lib/nova/tmp

   * Due to a packaging bug, remove the ``log_dir`` option from the
     ``[DEFAULT]`` section.

   * In the ``[placement]`` section, configure access to the Placement
     service:

     .. path /etc/nova/nova.conf
     .. code-block:: ini

        [placement]
        # ...
        region_name = RegionOne
        project_domain_name = Default
        project_name = service
        auth_type = password
        user_domain_name = Default
        auth_url = http://controller:5000/v3
        username = placement
        password = PLACEMENT_PASS

     Replace ``PLACEMENT_PASS`` with the password you chose for the
     ``placement`` service user created when installing
     :placement-doc:`Placement `. Comment out or remove any other options in
     the ``[placement]`` section.

#. Populate the ``nova-api`` database:

   .. code-block:: console

      # su -s /bin/sh -c "nova-manage api_db sync" nova

   .. note::

      Ignore any deprecation messages in this output.

#. Register the ``cell0`` database:

   .. code-block:: console

      # su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

#. Create the ``cell1`` cell:

   .. code-block:: console

      # su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

#. Populate the nova database:

   .. code-block:: console

      # su -s /bin/sh -c "nova-manage db sync" nova

#. Verify nova cell0 and cell1 are registered correctly:

   .. code-block:: console

      # su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
      +-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+
      |  Name |                 UUID                 |                    Transport URL                   |                     Database Connection                      | Disabled |
      +-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+
      | cell0 | 00000000-0000-0000-0000-000000000000 |                       none:/                       | mysql+pymysql://nova:****@controller/nova_cell0?charset=utf8 |  False   |
      | cell1 | f690f4fd-2bc5-4f15-8145-db561a7b9d3d | rabbit://openstack:****@controller:5672/nova_cell1 | mysql+pymysql://nova:****@controller/nova_cell1?charset=utf8 |  False   |
      +-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+

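At this point you can optionally run the upgrade status tool as an extra
sanity check before starting the services. This is not part of the packaged
procedure; it assumes the Placement service installed earlier is reachable and
that the command is run on the controller node where ``nova.conf`` was just
edited:

.. code-block:: console

   # nova-status upgrade check

Each check should report a ``Success`` result; a failure here usually points
at the ``[placement]`` credentials or at a missed ``nova-manage`` step above.
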
Finalize installation
---------------------

* Restart the Compute services:

  .. code-block:: console

     # service nova-api restart
     # service nova-scheduler restart
     # service nova-conductor restart
     # service nova-novncproxy restart

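  If any of the services fail to come back, their log files are the quickest
  place to look. A brief optional check, assuming the default Ubuntu packaging
  log location under ``/var/log/nova/``:

  .. code-block:: console

     # service nova-api status
     # grep ERROR /var/log/nova/nova-api.log

  A clean restart shows the service as running and no new ``ERROR`` lines in
  its log.
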
nova-21.2.4/doc/source/install/controller-install.rst

Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the Compute service on
the controller node for Ubuntu, openSUSE and SUSE Linux Enterprise, and Red
Hat Enterprise Linux and CentOS.

.. toctree::

   controller-install-ubuntu
   controller-install-obs
   controller-install-rdo

nova-21.2.4/doc/source/install/figures/

nova-21.2.4/doc/source/install/figures/hwreqs.graffle
[binary OmniGraffle source for the "Hardware Requirements" figure; data omitted]

nova-21.2.4/doc/source/install/figures/hwreqs.png
[binary PNG export of the "Hardware Requirements" figure; data omitted]

nova-21.2.4/doc/source/install/figures/hwreqs.svg
["Hardware Requirements" figure (OmniGraffle 6.5.2 export, 2016-04-26): per-node
CPU, RAM, storage, and NIC sizing for the Controller Node, Compute Node 1,
Block Storage Node 1, and Object Storage Nodes 1 and 2, including the
/dev/sdb and /dev/sdc data disks and a legend for core versus optional
components; vector data omitted]

nova-21.2.4/doc/source/install/figures/network1-services.graffle
[binary OmniGraffle source for the network layout (option 1) services figure; data omitted]

nova-21.2.4/doc/source/install/figures/network1-services.png
[binary PNG image data follows]
=q Yq|t]cqasbXN}#O3Elӯ70 i!@`, ]|u|͋tc |><1((4{".jS|l 3@D% Q4Vk>k.-uMM9L}O 4<9||hf-iN׭uCC iBqC捾L܌i~Mks]Ѩw Y „ɛ'[f0昗 J9%2VE҇!"">xh4[=[[T  737|`P.4q=zh4v=JRƛ^t\u!{Dr՟{PWqX-֔(dfLKKNs;`lmbqEr2l}ri@7 )Iv24bM4:kԉ<&-.)\ 5!̣(3JmWz.ǜcR6?J;B~m]չhmGdO_EȘoc਀)0` WGԾ1۵} va"[4\^聘?0u&KD|O+D܁{2҄7tJA7fsw?Χ .p $]pX=bET@PB  $zBHOv&wlv7mw9MfvΝ;瞹{Z%dj6nAOsNBRx2>Ǭs/$`e$H&l؉ɽm+ڴ7%\5z2Y2tx}jrrEJ$hn.<%Z'[K/_D,'6U|oA\TD"Fl':4+1q'3W0.-+ZXn7bמO3Pdb˛2RS<_1mN-_d8)aIݼUXוƹrǹWMt͊2Ѡ5`~]%x%ɸc9҄;1Z:Ug~c{~͎ՄoGB1R&r}2:qD{ct}&>DZQpz/D_;~*ģ9Nd49Mma9}YԴwSeV+&L33a܀K:~۪eSR'J2p֘8w Nc5m{C{:+cǂġ&?oo3Hf#-̡@/0𓜖aY/뗾)l{j3(dh4γ:N %;Ŭi;Q`&b`F n춋8JtcqE MYNқ"ՈzW `3sQ; @>tB '.˕AJ!q`"1tM¦XL֊w(O[4SMNڷ%µ&Z7m!\CM1b@ *pƼ`4 ׆cV]eC&0D?&?zHy&]v)}{T/'!,@ٻ̇ 3ܼx|KAU6F-c$3L@N8E)@d& P6CM|~r$#xu1ص7BCO -@J@ ]:`RX:~Lȉ8Ҫh״.1i Yf5هGC{YhiLfQT3cĭyO[ev*}yDO釾@8ӿԒ>$S *^l 7(#z#l#P.#Yʭ"p$/\();Z( Q [i79u-+559Z8Iv6i۷R`o/OvK@VvVzzjjbW1*&284XjBJauWGҩ55n7.] ۆzMZԬ۴isLɀjG3/BL3?'%&Op4~Cn@ XLe$'%$ĝ>yxo) 2X)iHrE&|4&|д<'飪/}>Pc{$ (&{[],@{DMX9sA \JH"3$.3r➫:59@ 'bO9؃Pŏ$ڂ Ww7_~{cuX1j=4@:{FH!flw NA-UjG{j3.++[~ch-hkk 0E1 5vtC$3CE'vZD݃:)aHfdb=;^Zk.ygdA|/%ptkl?Q؎5ås?ԷzT3[/gGg; yJ rs2`rJXk O]t9ڣ4kUHdÑ&SNӂN֠ajr%ؙ"9uaM Kiz-f 7L T]NLSs++Q!IOMJc=GLb\k@IDATYCHiAbNK h+|BBU`u\P)Gc/ A+ǗCƜ[aCa@at󨠋"|h&G2xd8cNZ0p1iɏ9ǜA>xs<^'}ߜrZpeqi.IF"6 ɔ>t !Ʃ䦀Chn)4}¼{ qۼQ%i}b 2FjI~Jy~x][FWϋgGG?Q)O&Z!C(tALw3Ib#bѺ]f2RLԷc3)U,KӪ2ˢxhdinB2*#_:n$c08 ='[2_. ZC璀eVvK2Uw܀>aW1 CxLp9}<6W XA'ԹV ( Z2{ET -~9mZ4wL6MrG8 ZN tǰ\2YSp"VӬfi\yOߨmsAT@fUW׊:ja3[B]FĂ!J2EWcqumU|$Y#,<=nO[WA&#}(LA5uŰX f{2q܈ ~8RPJPLdr G,š[Ϝ H@2I_ćG/CS(1g6HC }4PAǤoE#L:mi)"Zj*8IRЬ~ذ+ZT#[Z#\9c9ioL4GoXTlߑӢY`e!0 T,RGXDᴘ{Rb:ce, QG9 KG%p?i 1z{-Z%zU茶M J`֖!s#Nz<(:?j`7iIC) 5&>)3ֻΌJ}ab4ƂS͗@AC!>Ѩj+jJ0AaEJt^쮾J6YۏAdxq0llf}!_ HrʘGTW;zV6ɝ '{2\eԮ+FAn#7!7bܸWw2cV6f (]S2JP2MU4kU9}=7EFIVUk†_-+ ?0$σĸ"̾MN:˾G|(81ϝ>Q^W^R 4)!5Ur\%ҼB7g0J΅KA02XUA:"FczpрPrmKNF 𝡆BEUq|夙&f@$NFt^f Ԏ#1U᥉#,FX36#iq@|q|B7M8/rh!>5ENjDʭz5Ři-^E{/[{v0 3r;>^SPS .i7UFgy:֖S9đ‰vo ?2/y}Zt0b~S;n a#k>I5Pq]+V(;\dܡMцTv;*xd-Y *}úi䣉sY`F†<שCFPS\V0J,myk zIcnғe;N#mTX.>k>f>kcqq,h)isݪ8dI,Oǣ/vl4B F93,`Y[aCi5cW7dJYTzjgNxQ 57ri~_pN$,)\d?hp[3&o9a: Ā=#V`hJ`%&[ߙWI$cx]7sf+͡y MٳСgKM-wac=Aa  s-3U1"HupC 1Z))%h2͚7y25UQ*hct0@3x|^ˉIdv0aN7>xh%Õ00q#kԹgIy?g X X`MBgȀ|GNIK w<^ߋώ{WBh>,\S !GN!P N1U~8W5¸Y 砶Mֆ4x#19ouө.lWCAN1PbR@d\G^yYOrW 0;+5K0] RG27UWf8] VWcs^gf`\ǹ~)垝8bvnY&:,14GtUiËWցNN}r)ZU9]t 1P k/ʽzٛdx[YxK+&4y![,lrُ/O٩y=G>eA'/HZ-)6-+w+Ӂ;"GuMP4q] N e̷;uaUa;E}˙C:b,b[}d0NjյɱdOsl4?r*́azlL9{^1d /L@N*39vW``鼡=k C:mӤg=;vԘ)ou3*+xUvLziukT-QtFދg1'(8t"N:cTOu{&S͈ Z",IɩbޢbE sCtܝ4oL, ٰ"D%h!uWU FjRD {v|>\nm,ʚCa޽e62Pa)y999̼l9 JjV4aTO+91m _ù s bÆ%XF+5Z7{K Ljǎ};[4 Xa#Cq4j09DN?+CYƧH=E+$ZsG>|g/V* )3L҅["Dr ʼqo7cMA50<2o Fo# |ޭVh$gl@! 0 3Uk="S쇘S!]=1rj. 3e[2Bbp~+\ PѯSw}PI N\SW6*?P=4L~F`va\A_) D .2x[FTf X0kUنy?ͦ>^F<-IcV`)f+F"q$&#ɚway=6."1 횆ldҲ3}ڿSsb_TYY˷ jS 2Iۏ+#m>硞H 7_Nr6jv:!#EE44,ɩbTvЉX\H5Dp4%hK6GĈXiB%'J}}D3CdȐ+i lD tW\ONpsRf֮!R"ش'F2=6Yi/hZY ѫ|dvmEA!xRJBˀ"Cxġ94CD*4;nO-a&x1X!"w*Q 5( yaCG]kB<{G bx6vУ@=}fIӣuIJW"855Y 4]zsJ߹̀wb.b}dѱY=q' ʜ0i﷐b{tP9ŭ̥4۷R)6Egٺ>b)l|M9t+-):Xѷ#LOx^[>>w ?^I-ntAkRGL͉YA"V/ CbʸRƀ F2Vѱy]u&=[!D(}Rd{ RRcg/zC`5qT/IayN}odxp0!;ًCЗv0d#6Y y+6VlqRLV_, A5)>#3[/ޢ~jaa·Sf/ ~>Hw>ɸB3  QDW0#yyH6SC,th"smtpVmu{RkDƕPZe0Lj:7EZZCpsZw:.:7pu($w7wBD&p,: {]7s0t뚝4S RHa[>tCAX!R|d 1`SsۋE^/'^G@ n(A? 
LC ;g(2u_l+*.WCx[sϾrJܫd?i_ ?F=ܱN#RLgѭ=4Jzpj†7ң˕J;4W0ĞS@2gWag.n( vnQ?-83 MldJcPd!_1\e5,{/F.~^.!p ԙX, )mꬅx6;^Ъx[j[Ș}`&*W!T W@EpG5=W؍tUڨ ju@C %qjdwA Rd4EML3Z6AWj@VvJczM5%&!եK؅czvm)}VZPfiߝ}ۋXx NɭۤFZJM:W8qH+.4h BLݱF>`B HquTI]k^"4X;V)PĄu@ GS"MMfͫŁUń<n\Y?LڻL1?7kS~|Y.:QV_j>r.> PTyfT&c22or C5W;iB`vi.㣥fNo{XtXб'-5!d^-ʵ0 5%=/ +&8'j+̝rܦ<$PϥegL?MW9aOi axj*T4ys:f7|?hfCY+=״SKC74,6FS3 Y[h9sìP~S@eA8lRKx䊚= 0+֩C{7H߇ CQx BP@M}p3U8YS@Q+w4ɹ i+cjr2w'ҰSTP>0 3rO\$r@K2k>+{a>Ld~&`g[ F~e' $+M_\wfRe+NX]֣ ëܥ3J(i(p@b:0$mF:e`A|Vȸ,@I}pFm5F.XnQ^j$~00y^ GIm'J ey> `c84ih&km{o9ZaƂrpPՂ!7}~ԪiAiɬ[[Ε;Oiڦ}՘=qxnЕ)Y$U^97/\-W|"~{[X<ǔ4X*{h/=.^F'MLLnj[$ tZ;[i/BN{󗉁]ZB;[/<'/ejX ,†OŊ,N[hiclx[3&\!lX*tm9KK{}do<&[F"-{BLb`L&tiYO8x Z-dŨ#L|l-|G<(s}Fj&H;>n#U؟Ln0w;px0R#gE= >6)ymED5@&ݴSlK&92Rsmӄo;vX"#j`[0̂ܣv!foΚ0@Iw0p5Z X7+춅4 aHU5H胓zR >>c@ cEzH1 Yo:k?|yN4'5) -QE +K[ SN2D(pXоix`FE*F)$Z ]ԌC0駥=oUjѱkO9 AztG&OEv˕+&n25,mB4Xu x_*1!pt0_-وU` =6i`u ]NBqJrR|5KN-_ a*'hmLc#D}C#PH5r'̶8?oo}hNm$(tj95"h흜q<8yn جk7pg}2?{:?(Gs8N ᛱ ΰ4Mf2XWNr`@nh*6>>ն3J̉X:#I2E}I9(v Z&Njpt%p[k;eK UPQ*wfX;}̍ÇB*yʎVFRZ= o2??^#!}JAA"(>X@[c[~Zg/13`F,cT _^M )f}C?{vOP`iJa Gѥ3L<*W9*X`v}:61O39}L'pX0v`g-ҷBÔ 6h&Fh^GA\{K'}éc1[QV0Gqv}d(9r+K h0ݰ@3Т sđFвkq,  =+z(UZPmݳ]$.?4燖hDQj[$bO WiPs[x~F \ipL0 i}'}ˋg"6!+!aƆ2Z#M]Z! 44gi A4eRw(lFY@ҸEiwvl)ɝ$H:jNܸ](d*mYuмж ( >4šs[˹cc®MGrlGi D WS3WMxf]Ԣiͥ=~p?'Pw[EN6ErvT2Z26 4fJf9J!(*'ͧ80𱡉[cWժq+CB{u@h^2dͼ`e( ӼU3o6Ws)krҎ{K~PY>ҹ S٩ڎq_}KAC zmwtUѳؓ'DԪ0ip,C:m f5c:Cf(\3楚y/0WOeT⥤WU/&25OHj :< _\D.34ԼR*J 7:cLh:/V!e 0T*䉌#+2 i <j$]!,CS2ޅ<4*DvW]fY&He俗0r~qUha_+zX%),F$W!-KKG>{w%t[0-M; S`1t ѷR]^HAv0j1 Mw6% c¨皽Y(ESBgw4}1k<YA f%=00TyQ PG_Mىa-Y*aC lN6A hRŭ="7JRg✅?&[ ;Q`"{pssݭ^줍Ƭp!GaC \5IkL9.3`M9y^^yxd6[Vs@6Cf_Go,)):^5 0PpAiFJJ};zRX'ܔ@=`ئ!b}O-^j# 6~j}vV ۹'We N#s]f8w&[@1SH1P|QhN i CfǼ@s}x4`lfn1/މ :jګʹni¹ϴ}/q839w7%5T*JMZj?i|ґIun j&SN) 4rX^oOmO:o@.Qg_L^[ y> Z2 اXٿJ78~T, zt KH:tp2&j" l0gr@!2Rvz7HcĐ `Prt['㇢޴-#s_.l(ՙR30 (YVeoH{A1hB2dyh3\rMf'-W #{%v: aEC'VN|Z &S†l&>[AmRc~e*-'` CQ10w[8pG\.^8ekaeNLoS.}&Rܧy@isq!BA70͈ԫ}Ue꜌p:=$94 u-\!f ye/6orT0 N@"P@z%ݧ2Cn"UZgd,~Wngo4JpHa +K>߯ u dh [򲐇 Ԛ< _ʨBtuRĖm$@}<7%l(3*E:}ZϹ npXň76Yvqo`jv"@9)0a57E>/޼%<jvA8tkpmέ €*l-1b%-#lN*f}Cxmkз2d5 ,1&d́Ct0ctioe<(J` GӰ5H ֚e !u.Ms,V9euAZfڹamXF FsS% &f\-DHlc -3ĬL؇A݀>L:~۪eSRsE:tndA:$b> HJn%p[FꋒGjbЯXT׀0&;d4etZOzf)z }"ǏK߼7s  6hIaCM8ԡ XfgyF[+,c-1=kadvh6~m 0ͼ^Yb[n_WF@2ƈhi:6w{i"+M1Kڣ͝=s9 MpCvHө0#Awa^n7ݾf9Zś 2 TpU"pd\Ͽfm;6 iܴ_Py^yⶀ,驩ׯ_9}$&*&28Wd(P1]i5HZC؁ApӚSQ8~h־s 5 :Si3Hݷ\ūZzzzV`q$def'S'VJ8iR']8}ȁ#{*(hpAs?؏+}c[+# hFaʗ }>:zJ,#y-+hFaF.^c%hxA9ΰd}hڦݓ66_&C#IRj"מ$o>l8&@?<ESnJdߒ) s *u3u*:@k^=g^lQÅ {YR|\¹~<:):V}zyh7 9np ͱEj : cb4˗Vu1{+7 W Vi1{]q+Ёw"k&D ޚ1Yd a&@MJD0tT|y~up/ c S:~%f9-F?v4}#F7~sW*TB5sp(PqA-RpP%dt1`LnI?m6.(JIW!{k@fKϊ>$WlصJ$Cf 6 j:di(pcS*a-ecghziB;8`'c WMҨbfy/sp (ܫqcP _5vh5O8ĀvނE9ʁU) tqڡ*afIǜX8P4cK:7Q`*&NLp&}(g8~X.P(C H`Cy7E<◠G7[ E;p ̊ gz_~_/ukԤ2EE rgn\fķ[ o틬Z`"gQV9P}k%UN%d N({sǪ}ڽԱ^N{[yUuNݧ` |e> od{^5X;!P8'E&13}\0ƈQɧeVhVFY`mסHNݸ$6ޓa9;tߎ37#VG,*4QRԢp,b I[$6H9aTo2 i)Qz Ӆ jN2qQ!⎡w H-Ƙ)rHJEy[?KKP!g?1v0d25DMu%U@߻iI&Lfi̞:}3fM»ݗrC;/sqzy w-Мb7B_tH01!E{AMDo[QtoHܳcWlO3@˳ߋL^pZ 1(%JxE  ?  "NQH=[g_!=< ;zZϛ=%+ا P+idBHfoh7 +FqR:&uLN%FaU x2S̬Gy)HլNW)gR79\+[V50Fv3-b̕0krBw;q*?>7DMLr8lI6%#]FUgSC{GΈ Ƣ 0-Rr3%D}:6S=8$+?俨A7gqj@jW"ŹK hC 4'Y̜GkRwLGm?4=~81k 0WEJbw #װvuѯsN["ő3E&!FUx0 ^mqٳ}zyr~h[Q 5ݻLGϜ'0OS&sT)}=7V(oǏBVYf5aC' t >w1a,&cb>uR9%Ǽ <.f#}%dƭgAR@0V' XԂF@m0wb+·Q,D`Bƽz`̘+#:,>t\% ZLd2g⻿bEuDDߢ N;LA ELNE)M97%38f4 גMWPQQ?&97\mߐ~-Lar:4X 54sX`f~vCj/ 3?s fa.  _fkx66R0Lwj^4aTOPn @h/O0>jP $ :}`-@P#0q=D/c7%ꑗ_769sXgJKInLLqسQN&)TUaT+-ʭh !@fCoI-KjmB2id]j*.^/l_s4mQWEx OtW_i , 7r~j,ӒG"IjTܴL_x%Y~XoL:ŏ;?EK :>;>}ZY[Eg3RS҅{}qʌYY3YMRcHMIrMMˀFJDz4b!  
"Ѷڣ pѭՑ*Va l OS{mɲ{w\yorrϝ8Z59\%lZ5F.ze o NxaZwyz>J\~ʕkAR?\a.~' 0 )&_OQnN}ˉ}7rş#E˅p#2g2߄2z):(y&L|1[kihl*@IDAT>_1{5?K IjۧLf`zR8{IZ<#r񝑊((am[Y٬'o8d(lYWcvnE#Ō&ÌZij63cլ[:Y3x@Y휐PZ/];ƺD#˽70 YnP6I3˷ V> ԔP;HD i]`fGtHVB.[v m'}Q!c舚lax}$|UzeAcL8&oy}#n[Ex8K}!awŘUы: MzGx߾mɭ>B}ێ>>m'ppX zV3T{ND3\uo.E]YͼvULD(U#lP ꜽ=UuMF]5-MXuoKnv rǖcˑO*iKdHAp-Ё;dQ>%nxs>bj}E1y{ Ԡ1ًeVYN3cYI`[3&sw'n,;"(Wi]d^h 1>?翿_\c˩bÒQ%K#7":u߁ľiCz~}vq9j)COrY^9>m4%aDi{-#Pˈ@D'̿X/1j" nifؓ'D$ `✜'n,ZY6Vp kD9MKo>Ln#Q|1R`eC>ex{tB?mIn;;p7Ğ8%p#8FJ9.Y׳4m0Ynf_+x{OEi\ϯђt(c> ;v|l4n:8z=n?_;~SݸTMAC [/]mJ@L#͙ne C֪Q-5/J*h(\ր#<}1qYؿgג%h6HS.ͽ6fnYEfffzi6f1FjS5PfI:}7]}|-=V8^&skе0r\?udMLfagS8_sdϒvi4Aa_Khۘ-oɏ=p'M+>EAZyhDζ5;[ń\6ּҥ$INQgQqb,]q0<e{s:M|]  Z""U%b,K8[4#(K1 C^^}@*&ǏeHŤc\6x9~8m2t^6Q1qg=Un%S-RР`$$x?ׯcbY 9Ո 'mDKO̜(<}>}%3 i"C|b#G-`UWbC!8AcH$hnnAt KHJ?~ib|dlZ,DU9tնzݑ']`MXi@)1|S* }@h gUE~.9_/љZJ-Z*|dQt$_t >˛2RSŴeG?slS1]u]i._5C7+68GShu h>E}9Dh `R.C-<_ Z:Ug~c{~͎ռ{`gDgV'h"[.?V*:|ck)0ǰ9s`KYn Xod_r%(PVpv1 ʼn%vDذ׶eew{\+><1YVEb?4b/\KT81; fS1ܣ؊_Cb52K?8vq7!;d4pZ}s;?3FSBj(xP60< o˨St7fe&El8OpR)aTiY3^U-RB07 [֮]ԱW?Yj@ ͐ u(v]+(Ӹ'^x}1Lbq6E\7>5}v ))rhR"pA%bïbr-=bQeu'm }0cq?z֪x qǘ^ݿs ċoЍOH̴? Yxki"!"$͚uw~1x=4/'qIWAFݑۉN-rNlaːO;@WO~Y쨵*T*\fQ}pDD8#6m@/UVg⯈[#Ec:1}ABQ={񂛧A|{yVo=ct&޴' j?¢+|o~hnn0Vy$S\MtlVO7j%8t ٶqD$C;Rk뚝H~v}D$Cce[4.{AG;~*q!qBF7 ίj4o[X{t2ovvVo045]T>Wcn i&fg>h 迤S1뷭Z:%zMGp77D:媶ոkط=lw{ 90xL/ 8m~lgb eV^^ajO''9-tG23 ^/}{Sx<}fP)io\1ym$`&Tb^I l9 +by&]v F0ejc6E+YG)MCDP/&n tR8wY$r keIa+߽:4ɓQ|`-V4ˋ rFqf2 +{Dѫ`d?]VTd<| Nq{-9#.HfnWQR(0Y3@- vhV¢BڑZQؼ(,O$C-3ۢi&ls7cDZB%m#G#T .-*FN]JMM΄c9NiҺ]M5XKWqSnYY驩ׯ_AƨȈh`b &JN`uqU6kҢfݦMTFȴ`M Uk?z=yiBtVffg9)1xEUL,2N<7 Ҧڨ *|c`g?yOg⃳ (xy-)p8n݀>n6wvG-0T$7:a3IАZ9R9~3++DJЉX)0yQӺ53B$375{cNG%gOϳ*̶&=3~]s;>' IwꍁPѶIϋuGГ5f0GqVzm+ړ}*JAw1.el6HR@s-+߾YlD<>x,V&a`זRF9Sؠ|%:4dɢK,٣is!796ؑobtd.ժeu;WU8oG)؇ܔ0~$b2p\1枌b\s{)4;^# ^+<Xn^S81\KZ8qN}GQ8S9xVp4~z{֫|ǔA:U~V]H@oBMt:իHPJV4Iօ盳E`<$۵)"FtkH/6C6{g3Vۊܫ0[jR7X޶Y+:jʂCU@DVlawZ`["aUh="Ѵzfi])glUʮ4RgrbVLoNJ!MF^n| !>d_r}KACi5J̤oqvW*w/)2 $߇1U~Jy~x][efy(h4>e)^+d(eS<k\/1psS β!<&}E%6+4S۱YnP ,K!D)CEO7|Fyk#XK G` 5-4ǰ>m̺;nZp+(,n(үb|Ie$/6*LFc%lp— qSz7%J LC2iܔf+jy3˩{qX@l1++ꯡGc(d 7ߏʕ %lPAߥ _|bh71GG=qNat>c%kD%.kMcA+E:΍ IT7Հ7LlZ4LJh7%0҄-^quX$g<@MfGePڷӗbSD4VPf`07 yWYL˷BcxY1~>Ҧμ!x)@h39!̓ݒ^Bs)ە+OXWϢ?qv9(լ&n$޾?3,醂yߑӢ=>)9EfRҞ?h nC G!%U+I)4.Ѵ,^X*ݫ6ZI\12lT.E (Y8؟ܴB`J >sߏO= ڍo㲶d觏p$Ȥja˨Sߦl F9xp~o$S jLR}Sa왟?[36r[Uv̨ ޓrJׄ ZI ~W%WK6^_%v9{B苁,hZ|iRD!>Ѩjw^T' .vAd4bo,^w)ٛO  ^I8aXan1!-Y[F tm~@&$UZ|_jh%U`6>Hmktc<Ҳɔ4:}i~<k`U  1e `KGӲ{SڻW췆mV;Ndʨ] V;@GnCnSŸqs0'Od42v2csgܠi,ۮ}O)UKWc4W0T}s;?K6^(SѪ.ۻ@pٻ-ۼGTC(L%lɳ|/3/f } &%C"3ĸRπ=|EQ^yxP> :wytж3')Gfh)jѼkLkঀ]=F(1gXZc,l檽Ć? pj]L>;т5W^=kֲF( !ΐ,٣iU9AIR1CnԼB5ELb̴LPܖ}֌<>sF6dXfi8߉UE dTvSe{'h˩cm9uSZ/O)h-اyw Hξ+hӔap]ذLFiVQHd1ʶ~jɜ4-=2K2rqHu a4XO:!;݄ ' %y6wER֬ *}Jc܌AT)dɤY{qA,n=mU8C.s@F& u=o݄⊦yn *O:ۺfUN\s!w[grY'k)vD2F|/lƟoCcޘӹyw&~qͨ/ .BCi73J'C߫0"OBrY> T㯾_ ,Mu }W97,9 oJ`|/4[}XǔYҏ䡓†ltcX$Y4L: S`rMH6GkQ~r3Yx[:2ri(" z$,WHwG,T?ױXI(4ӊG$eS+iXT-~^W+tE l*3 {#9]xCXϷbv Zl晹 aop4~v#L8~f~9lB}8odj >~[{{{s46^!F0 6A1<3m\90^FYgFP8Vtdq)]ذN:]^D`RCmFan6X*}F\Aa DR!\B?6D1YK 5ɿE2??7F(%s 1K77|5_#;\ު;Uo14CzH˴ؚ@ؒWMZxh!>lv1^pUS?4@%U`VY zy*?ˣZZ Am1 ?h8{G^}cs^߈2 })YTYǏ+U]ذ(<U WF$^Oo?s@ ? 
!f+.*0Dvca֥1c@@(h(-]g'pą[֧i`'2q7cdEyc19SoWI@1 6Ph]J.B`Ok.TP^j i Ƒнm#`vQ é |^7b(M71@\ڠAEKOiߢUSRĭbb_M=1?~̯ 0 ^BzMz/RE)TEwذLSBbGt{!+$6r!{S޼̾7}h _Ph Gofb{ ^&R(VT@M*¤j>rl0\<}>N0Fr bC{zt|dFq>?ilфV#97î T\N-C^@,W9 PvW)hu honA=/I'wњׯ&r z9Z+@n_Y~][_Fͨǂ| yy3P{ Q -${eIgf^nhYAXK}Zuhui-CҪqLƵHb>1فGNyj#h;˶ YxL yWv20@ˆ /aw]ٝ,/]sdBP399iS*EcΈLGı`s I%^Odg F{W+Dbd7x\ȴ+4ڴ^wש]CS uCܿtÞ㇏:'PNө?US0L5ɑ \`BݮmX4IC8;psd&)ld$nfAs]c֡suM+\ιc8wLȃ"[:҃}Odf2&eSo -m$,yk0Uxna-M{ɳL,;7A(έ/U3v'n yjO -\`wis:U˥ヌӗF|v\hCYUFy=O6Z86u_ש*!D6wk-M^EjjzBJ(ڳisW ɲ6r NZIqFS@Đxhݟ2M_i*(Z4cTC} mʤ:*9DaE&9h:C};g0nQllbR#D#mӏc bi!0=FHW u1h jjY O !kG>u.c *KVЁe2|!\/CGB) (y[.1Ӳ@o;̀8{J(XfDzNIrLX|8ۑsq/HI-B do&n-0>}s$ _Kr$ 4i K[@gzV0||??Z&7F.QJ;`RHzC ^Pu~S.r^tju Lަh.שC0Bn/mK /Ůf{qdO92hd]I0N A aA1{} 꽵M/!LT'qUiTK=zRFDu@Q|)_|Qʇb.9OPFEi:q0}j)У})0P5c^z~][I#Q QYݍ͑s:v8-瘍֥_;bēi`XQ =AXMl<%x@z%sIɐ%sOK(O׵k!KĜwΜS/`6OНI6?\M0bNG|f7 hdO`{r gܥ&$$ȳtGcFwlLbFƐ@dޣAskD>{\^| ?C2nhg4[7J + "͢hh&g6њ2kw(AjTy3O ? ZPy- +T ?4׫B& |w'ƛFa/sBnKI9nw%>Tǵ-˖ &J[[C0Lm<͛:; O(Qf5-ܲ fcؽҵ)y8,!*:wa#U|F5aNR1k2VmGO ~0rlK &{epW,9fAA*T<.͡h&뷱\^^ٺ?AVvmva#J*C)UeS l42F\NuVxТE-Lu7O2 g,k{K%'G%&kf4ClAsιOIL8gI4>-N1g rP=FqP`9pey?@#A"A8(gw"ąBXZV%BӫiβeFPc.x٘>+C^AA&M584tY)Y,&Ixs])Q,̚EgPѼ^ʹޠB|ԆPӀ"d}1 ]jPB>~Fua2F"Tʁ~=Qf_j]0OgQiQVWfhB|ƵQ7 .{.d"A Hs?\ gdt{tjgŸlԵp|R7׷j(}V hxAUe^kh&]Б"MשdYv#Sl \;N"q(|D5:$puasι?jܸ+kBpEDy0[o m{ݱ̧旿1հlz/P14wiCS*j]&sB,Ƙ}:Mx4s9~N|P^P8q(Li} ɊaUtY]]"|qo;ŗrsUTi>_ͱk7h+jV*+>2W<=YZ &@#Yt ҍvh߮(hG.5[PN7ꋟxmnz< 06)EvLL?++dSVnN85,p`%їȗ¯j3ϔ+^L +| C\[6oyfͶe7}tYpw6מ^~wYRI]ӯyL a)ڽ9syshd!{}B\p3.?'(Xs aS{a,;Ou<ݻ}&! ;N=fܕ6*r\ɃJFCfbQ7A )@Cⓟ/eyW㒐Ϡw&cߥ؄ah|BŁt@o5L 6%cުi0Bqݰ̚{j}ͪQLP҇ʍ <̼9ȬVB3#C]`~돷g;q$|32pWg~s4Qe|WsllC]SС] dcS1tQdy]L%oAd Rq~zZ&<AL6~!D ˕/=G/QFV42 [,Bx9 q>`tm'm4EaC 9:U)y)`kY`yo Ne;֘Ե=Ail_ \L.-;ې^ _a𕰡[~ lΝsF0=J/7sie_ŝVH8@AI@7DmwjMQ/ p{qӚoT-ե9wߏ9~ԗ\eںş&ڤo}Mv!|]3d0u##'(SԨD*` :AA+;} K 2O}2bb ?1+ac+fM_ZtoqCY35F!H\vpΣc8=O)l0A1 &}Wd>Ȱ^%as8زnnPa8&zp7V;XdoNv,QEaw%џU0K;8K+\.?]HgMntf\xu¨0ÈqvȨKCPhv@`иsS=cΜbphO)j4ߧ|Z &S†lp&} W \}n>_?| )]a}P v*Fv)ko/0t=ޮU| aA9F_uѓ6[.]*q!-~үhv-:W]ۙz ZѻEkF{\HW\}(A,5Gm_jgu8JV/* 2}:=?$MgVx&c aܡ];3cnBEHƦdA:p]lMR "}(aw4{a΃1忟ZE讕)QL{^(MESHyd]?1AǜnK>{0+c\7P& ŸotJ"o;Z;d`UbYcuy+SCϛ7W=u*wL h Z @5"w!Q s_qofmרנiHXEB 0#dANNNLMx=ڼ~?:Wd(P5]i5HF0ÈBrAm4veS~jتmju6 +Z>|ذ)qѐ*U)qwfKL+O,#9%9BŸٺg̀I8h:ȵ"}po RF`Zg7CqphмS6[]ƥٹ.9##UIu(8b8Bp `MJp>rj((I#S@@1 <+p%l(LHGEPAF(`pC\C~eZKxVOzQs|*s]❚ \7x u -Pyn%NH28*=kZ4 W><[lspw >B4k74glaΤGŤ v- 5S!2APd ,`҇:Q44GR gJ@Y1\OSuG Y>BN\;(d{ [{n_*s;i-E>|נY?@^Qan?Pu&^1 32 Q9,ť[0-Z M5Ǿ"Bш@P3{iHe]wY,6~պs ίZ;8/jnpY`@~n?qk\ƀ `\wHL \u+ab<5?,Pࠐa4c&搌ΥQfQ0+K =Um~ %l35+zStoC?d svb}(z#{H(gs>#>&o5H7˧bJ惵(skvyA#qd7/exs}D2<D*LW@ѽ:!C01} 䵰z>*䢬>Cfa*LQUL`T]&}gիtBAW<+0sV3=g,pbo}y,͝[g, pY1{f}4@ܯ^`cŏ3#'Έ eJཎNzo.ċ6 k~ZE:_$%S@VA;1h/0mڴ[t gm3ʟ[Ŭ4>r$@5Xםk~]<@1j|u7aj1۱g,E^YC:KeRW6yBdqHAjEZI\0* xy1AbnoXb΄#K7쮈XZu҈ 厭6 ;t XSX;jzӣK 1mQ YUA2QE36haڌTȟ#=*Z6!uxX@UbbP.DhH}Œu;ϳWo^?Z&n앢kMj=;eKW|:HcC7͡:Ni x_[fyCTEGAу#Io㱃9xcߕĊ۽v8U֭mRGkV*3`(yd||ўWGx ;S6Z/^qEh|Fdhipkխ^QآFF&, 0o{Р%=4Fs%ߟP&2 98կsߚln9u'>E<9FѬn57-P6ɪ}GqMDuM{įֈ_O'&? GwQ|Iq&6^|1;Eē[`w3DţnRkFָ;y6ҿ[-n[-Su+ `O \1a߶loxqDGmNy,O>W}믳?X^ԫQ@.ľT/Mи5uΪN7~o}paiHIC ;>.U"kU0׏,תR^Mڎ:T9u~[`{;G~tCKIN>}66_ѵ;'L]Q^Y<]HHfNtn\Xv,14d.Ba\Q7Zpy)lwM8+(ZT$zUx ))F8]``Iv1x燴#g|^ ڠw^kV6& @h=7Q:>R<>ܪ}=Jdo> )E.*)9&wlh5nulTx&Zү{͐o}gN-@IDATbc^Ԕ솦<XJ8e?)zgzM[@솟c`ر%DZoțh +†/}]b)AFܬo19Z6kۋ>v&}ӱzaٳeYm!?ȋ577k"nJ*{#c >u^#԰&ip OSj:,$XtmWQ#S4Xv4َ-7c;ιo@gp0=fU^CQt:gaX@J%Y`TسpB t* lǜ{zp54yF, .M\W٤nNVYy, 3$v! u1¡vi3-KX\;7cڷ:(7-u:/xlfb4B;9&&H,۸G h}1;O)`eFd2nݻ{ I)؁bE&pv?OSn@(r7G({ԓ(< ^<*Nq2s F!\ͤ.i&w_VyPcXXm?}n6-y۲9. 
`SKRt?}Z&3B%A5}: $Y>+TMx:51E\N,@/\ܨXv3EDJ ok|`^~ pα]hEs> ?d:B/ҤKg.$I_U|?\U_l2'S`VSIo(Vg?Sܸ]Ep oGMөjFخqXzesqE>8w4P(pR$msRؠ Eq=ꣵkW/}ٮPν;6zw{g ,#Q0˕(7Ǡ;P kŰ{{ʄeø?@!x2>dMT6|ybB% lEX oB p-BN . eQBn&%߽mc Iydʬ"hʋIlb;ҬIeJ! 6=r gbx`Qx{CKqmڲV //v1~,U*T2gMv+bgzmCqI8r|ؾ?N"+Mc|#z#raEI+~UʧA1'zŲB ONJg.GNOGQj= }?-v4{$LlLG~][!A)Y{̨͒ש"qS˰O#-1 Q\iC:'n"ȡ`):(-+X؝Q'#Mh FjSmZn5;a}|kGBBҿdj'XHO.#0Ic``xZfy !(6~R?b5M%o[8}̒*j3(d '2z%@i ALwhՠ([2 a+Ha[uwݎR8}F#1ՖGn60׷n' !DăbQt uB|iTb![Je/NY|| AAdywNP1z80F ML1فxP.!u&#^Yf&_ Q` W5 ԳЇwb'Ck$lm@-AŲf&ݱ4>}^ƕK !ZY&q^='٣5+!Iqu_xoj46:,cF岲d| ' v1^;:;9:wwAhQ:'$#K^~\FjВ7תRCͻ)T">C,:4+>i JM<1+wې}>h,(>#C>& oZ7)"ԴdЊa JS8#d-K?%Yq"l yi a¨%ΰoJ -+zf>@AFq5vxtܴ1pgz,Zr()AZ {|2nAw_Ag?4~2X֧ˀQJ`NVBeqcHGf 02|潜cW{҂qy h@i"m~E&)EuqL i2vV^}GNpo{LSURvcWPzl쥵Zo[jbhJ j -R7F 4S*w~qq~B&!'~?ou4AX ,Ϲ(AG :B-R : uNß*U?K`^xCFD49}PDa*߶ZRk"nM?,Zh[gF2ѣ2-OYAuyPkd;#@4r4V6: Ԃ^?~a0 J`@ڦaZV[X ݁c2xo'>!!$VKF:~B]q]R00E-4 (T} aOĘխX^3҇_х7(,yD0S)>+zrwW>U}qXVg׺Q)m'rFvh-UQ#']qύKraBRldEMNʨ&~ L4an>~_uR蟱j>h0;m;6ӗb܅x)Tpab\7ː Aeb,*Äk뾣2/O< }{x3K3.f:3:{׷jIWV{i]ye@(*5bFQ鉞To4[q uuWPP+W2ӂQsֺa-/]%oeZãVh6s|YwŝG2pIaְbP՘f /A,_'~Y7MsHg28}\^j4@nf^8rKO}w54bdlGXV|[3̡[{S3[MlN?8Z{;̵7a%9'a⨈ ֏K%{Vkz Rތ0fx[E 8 iDIfsK疢uM:bZWQ&#;Chհhm̕ HO̕2dhOA _!hTijڋ?KDH)%ԊegxߥBgofnEmp%n2XZNFY}-Lj>>ijbkY7M!wv,B`D&C_'zUGdujiDB/qSpo5ulQOܲO܆~͉##>A+4A}k2dTh_gERO1*kOuN9CpVsˌ }4`:MRjЂmCát}~>*␱%;L F,80i ߱Q44bt_p-tMʀW[dyZ?-l0 ᰻ԫڅy "F#@x[<\iMf窎~a6<,Vn']d9r"{AK~-hxFR>>{g'(kOLNvJN?zPxLJ?ܞ70AdR0:L=w#џvl}b4Bq~_$YPUg~-7`' ~ (*~CXw/iB` Z8|լB&7]ѣ8D )$MԳmS¬kݪ99@F=uUW1r;:5)Ń#"gC%2#-R(N0,_6˧ FZANiW%63h38(ps^i4;Hn? EًϢ<;sq5Z@[9t@-L6Q-Vժ9;d>ƟuO6(?Oup lŨ#lsaꭉ2P|'jz(| Z$<|ᏟwW Ja#!L"ْ=" $<+Y 4f b#q9}JG`C pWYAf 63qIwoX0nMgJ?1að`wMFN14>Q[kDS.g3b%\]ug'N:;r8<#,K굞 O$dx*-S˟\O2a0S cb7ٶ{zk̉FߔoݔImcc\9VMN؃h4R/G:dIǿ `\M쒫[g<GKd=cb. ayE)q_3qǜeMN' '&"~M̀*V B&B,7L PݮBt{b|f h?19'|Φ|0c15.o4mI4J됈[u+9;߻ڀ;+Of"LXX2VS2V8GZ@'$ձԝr۩eRP1;'Bkq9f6dF3ujc[oh%S&_qx羣's&c(Si0$wߙbB2;έ#MjqyœyL07O`p!|K}#Y?MG`B2Y }HᯅEVmoJI">CQ=x-dt2h2<,u]5*˳gO <{t;t@z0 HפJ?婘&|!k9b3Sv g/݂-IKP)PQ׵N==[t6͎ @&a b;ĪOQ &S^; T/%Ҳzq/+K)!Bw?ōA$sG IfiwMvepni5jEGu26~aoedݟ^}1ޞ>w4d^:zCxǠ]-]c Nl#b6ةum_w윉?DhrTV 8CʵB7 $<.XRX.õ q-|,:R6OV"]y74IJ%Q!Wvƈ mz"Zu;!óS6ǣ6̓h5ҾÀًqp 969+bÝ74jz%?]#Bu`v2\jL|4̨HHeOyLaÈ)Ā?_{?-&r ϑv=RI)EtrJ kiaMB e=vhXq,!@~Ajzl,WXkHns|5|fŜo  < "Pt l `QBvc8e}8,*F,2Bu`_VlsT]o|,p=sL AkfW 6i"f \X{4UJ]|:hzyM)Aqx-MWN.o/4Ͳ[gJM9sg^{->Yg Mv$Q15F φa|Y^Me>7 M{׈klb1igdV]s^h>\Θ[ʇBg ߇p>"L_A:U:0 .(,,}U6c9?rHvB 3i2pb;$8 kL `_EQ,l`ߒp8h1vtu쁵/x *BCw>~a\ݿ- kO*yS=knGKZ߯@3+>)>( ]-qja`D?'%B?q͊@ i^-#e[*AF{ؼ]F,~9QajBZ[Jf[oSf/oCf,Կ4,V(@[),{8U^5B4a9{] LѮ$Rclxj,'o^* n@:5䴿^BgrrҦZU1!7ͧh]F'?۠?jL (YoEۜB M6 y%V0ꀰocǮA-I{|2a9T A){ӱ |,AOw 'AYDoȈz&hD:lOrGK!#FF(~.c{xb3k45 䙌C|j2ނv$kV Y{Y x0?kHLOQuLf\*01F*~sk+  whn?T3u/y@H_Q)85Wfesr?Pi!)H2,7sʧ;y' <eK Gl9}31 u]TVq+=! j4P0rucûݴ/rR}dם߃-pqÆy@hs/oAq.%MAD5#_2x<Ľ6c[nh!LZ"^Ҝ 7zAxXvG!*tPW0^Qd=(Unu0Q @#dXU0>xz<]tȢ1Z}p&T4.O qlO#;&BF6LDKWGǨyVn ̭G:>b=f?}xhݟhszĀ #(fdRx!6ZsFUL HRiOgcKj-,aiI:36qp }p^-40 Mz'w}hddUar_61#'}4 {>>D ˕8ʗ(lx|hI ﻩ/Z7 15R0 1" 9Iپ o{ʖ #(Lx߃̛[#5ӻ`lv'HiPvC2r}kݎ+*͠jO`8f!h1ft[ )",S|+*9&4k4cP6BCAC i!Z{CS-{9 Oa$3Zhg]dΐ90@43zөYz{)6;賃_K3D+M4aØ'6>Ae~41 #6k^bYt|^Ř<ݎ/b.9κ*o臵Wc_LIȌy}J 4^֥4`K M<~M@/R>k%$\W󯺢JSu<(hpZ7G_3B ڟi&[yUFH,HI6?dL0bN%}AdBL4iQ(] 6G eY6ɐʤIQ`bEQ0C``)V縷܅sfzl&e4y$Ϳ(`4SvH#cQ xʷRIcHs_zaG;$޺Q-UjЏ7ba4$d6jTQ^%$KM^lOQ/;M$>iNg pe>21p4P )7t&Ȱߠ *bpq|l)lp /붔.ݕ}pG'0i '<)XFڕ+, Y}/* )8 S$iBԣΞq-L*c)dY;ORO\| 'wjwh |+@L胓%*:wa#$Ɋ~Wwx49  jzz *khٖz`HL;ȢC \5ʽȹd-!ɣj7sS>\SUFM\S NL'[:^m;5rK֪"PKS:+u XюQ !촣d.>/[MUFd ӭ[5,'V\կ잳)p\݊6F.\IӼzjڰhi^OmgWfvӣ7nӾ3PapdyĀYb` _h Y*W J #IO=eoIoN09ݿY=&>}ቋ! 
|h3+'}:MS4sڦL/2UaLx飩Z,Kter((Da6YQhD=v;(=Fc;GhÄ❉W0&D:/yj{1U&׮xaĮH͉G7}G6pGVi^=fFKƼ>~7C>n)>yWO{>5aɔ1?&BwE3Pо](hӇ[&L 01[е],mKsH!9IWӀB i^7މB2#o >ee_yt[4A.5_wPCI)Š5 MMmA"C1'ێ:6IcTב|~!3QJ MT`BvC!7}+Rxه_yei=x`[?<ͫ'%pƼ>Ct/3zUϽ=S! ]{=߅of?ӷ9ط8 R0%dQͳU=~5x\Sλȝ'yޣÉpu`s9?v3mR smi ]W1Gfg&hYfe8~aeO(ddW5g;q$hoi^=53ZRy}:O0d>Re;9oBx0OYhׇ/@[>tbm4,] @fL  Uf' 5|vFi5;w΍kZ(]//'`e_0MIaä лcS!A4%!!5ߨZK;~_nT^|뀟4b'i$Yjk-&01`b@€ ,5k|BΚDM($I>uyTƚHY0ȸ 4_ cplYRm~ W=+Ig[`e7 _i= [а^y%ń\4IUW@FI/\?MZ*rݷ>#O= l6[>l* <{&01 iD mi{.GcW`t?TmZ ǝXwoÐ7#ou:t>42%A#(ݩ0C& J>43N&FkS,zq_ Y JaSm1 pRMV65[UMʹ'Klۯ6q-~Cb2 :C ʌtA0#\ἊK-:]ckU)ؾ5Z5O`pGT,g=i /eƅ3j7~ b`h,;>rĂށ$}O&t..j?fBpOo|Vد5]6x&|}Ka犜+x@8J(fY4?jU;;ԷaFz`T0hP(fә swh΅+̘p; QɃ 6&T-BtEPp+#hv;*Mc?]pYTVD1~\$ H (&X0$Ì(bfOͮcNa% 1 \?(t '`^bkiE6PnBxm⨈xmǻʹ`;COKxq[|OJu՜X<%%דojÁZA: ;sg||so?3hSFMCŠ-Rd`i- rrrbblŋgٵ}Y"@FB9J,6E jLSQu2)\?f5lնA:u - W>lؔXCAE+WJ>5|$6[bZy2fy f\Oq p8S.$\;yx{6o܃)TPuM~pV &014i, jVw!#Fw?EZ;hC6=Ww-~-Rhz7u5mP~DK0zt:zeK3t ƂuKX`ڴi~+ rvc߹/9y^;gVg5;_M1Z8놎|U~z=`OĦ?Jc8|Lׄe;UmЈ}Vq\x} tC]p4*Z}L Q|}s¹+>{$ܴn&#B A& @8< ɹ%S@@1 <{+iUH:Q5 ҧc5ql3\/,xt|:y(:V{sa65j &5NׅIn~(2yKpbI\eY24Du89QO58WVM< 8P_3S0Q?`;>842c,ߖoWMk'ʼMP>Kpr웈Guy&}[7}1dDd=\3 >/;dL})#a 8 }{ s] D<G{cv{#~1*Ƞ7G_xyXQmt ^_ atI@L~hBqٓ?À?)Hj-(ANr^((s&}JAL(!sBanP  Tc}PuAmRpPƵL01`bG9mc0m;vݐ*U(t@{a1z+jO-̼J`՛`P~jb^C7ACC~ٓ Kg}M8c&>>s%/G_k8CKߢ :?װ@"yJv[Aaؐ,pqPDg׋(;8G9}. ˒ %z6,YF\g0ݞ)[OL.i:=9>d.##^ayuZè=39/@}ٖV~d,мWaC1xdB>a@}y_1{FA1 <>Իt &P]U~{S1ϹM  Zv5 ZuθI3Xu/anG^\+8J&01+ F:bԖC CG1~dw޴ .Rh:5Q RWƺ?.ٗσ`ܩix A)DkEKo=  +-^.5L-Ϻ$ &'|iX|6qSj  (C$˕!j? ܉uB8aS܏9X47fYS֖2˼6A?<k b?€?5d xp.yȜ_ayooe|I'ҖjuB JxP )M)KPGUj(n9o,czȈ10qfCw~ ߌ0哑Y~ipjye,gp$qѭɝuc\ܛea|I: aRT tI>D!]-&b1r%n]LܚGOj]yP>U0+/emu/ɇ`_۴/7r-̹hV\,lG+eKeYz%-=HeM& }d%z ǢkCb \tTKd,;R`8n4 M pq9JԁlIB_+PtGFըPk0>WH#MڣUS1pԻYG+skn &L L-77v?C  mI*+{-b7l N˅;:-Α-U<:XUO-v1}m ^u<^Qۄvjb{L=;4iv;S@~!zy<Imz1;ɗx@]jxT=` t=ƇMj ue@Mw ܵn/{~9=/ W-W}T~dԇK}Ȍ&5w\xytiT&}~^GF: ^Ϟ3SfKxW813r?%d{=lb@1mīWZ4os&qZF-hlUμ'mH%$^@BBmpJ R FRMYƿ>>%֧/bG?ZʰS^ǁd`|C&ȕ-EJ:E'TA ]d;f٭6 FKO.뮻rS0 @_hGtܛr_† 1-R*`@)lb8È0{¨>ꥤ?0)#S&ǀgu۽u1tLoJy7CC'~01yE:˹AM_Ht'&2 3c%K5r at$%h@Qi!o;+B$2tV>OrtbJrtڃ/DASc^{ 6htƒ.͢H +6.2Q#^:N<܏J*u P=f; E^zТR%BԏH<!AKH8u}bjEZ Z槨1Gxy1AbnoXb΄#K7쮸z~Zun/]ڰ[VYY| )K>c}|#F-6`^01'H tsv34vط`Z XhՑ#ozꅚF(>!AaHwGDZ%mz0qAg ͺ~(61d)ꈷquVoC{!Мy"/#_灁E_r+dl4An;Y K- ųNF[u=9(F? WUj75\BJw{\ssi9< YݭYZxD.>u~Z%L|"/k<AWs`7#w15 `TBRؖ}BnXל"Ǐ~ W]PTdXŦ?ٕŭp'{:~`s|vgƢ.I x=6~H\i%co׫C/F3Z_KfD f<6-cM (֞y#f|7M- 4wOs057z2sqIx&P@/$RL5L~yb{e}_w0@АQclh㦼fDo͓u U 4#"' U뀀VZ qnuxz@U:?}R}xk2ף| UפR\hq+xK '+j?U bKzF!Prx,BNjdtmgM]X ME\Tz &:d[u`TE(S! ƆItn4g& eBR? fý^o1\bW߯_H.tXq=U]US@@x9n>T#mL<^n8>OzfըptSׯV+Dv%`Ѷi0ƲNa"@q:wv}lj@!w_Ï>(PRP( k;r up\֌Z=$vSb(kk]Ec@9k\\S{'$gqLǪ\Uz[2WvZ)UL[ԹB@!p2,9qK7OA8e-Sqp0e6GU /nJW_1~27\\\䩅Й.RS?G=ҢSǸĊ(?ͦJ:l`S8SJPqpdpW {[ f#F5Nj_7a/>q֮N'9SSs}$(7C(AQ9}}}9VYg'a${Ԝޫu\BƂ5zb v8by7Xz{Y ApMB@!p 8KHn+ ceҝT u@%$^ZN`0^xd0xyѡgNHR-4sJzeT˫wz~h .@3;!`"}\fww_2g!b9샷`] >nH0UӒcr+Q{Ac16眩\qg1beD&o++]}WNEQÛs IMtl}n.t?jUԹtYU q -\l }gTzUs4u? Cݑ׉zgG/%_/oPwq5(>iediKdiOi6=B凜 ԡ̫H"%J5* cpb#ne:!94WhD-H'PjԸnHX˲+$QiP"]I<`o@ Z%$к]Gb8]eNwR7₏' ,C9YAA|<7 dMK{E.gyuҚw#vr11fb+kBxdl7}Nme ]' uEjI0ɮzcFzācqkFP կLɩvg ֊ڃ~\^;jߌڋ~Y숁 X^ 5䜘a7ms_FI)KIE60}kg '%]I AgjX'D0Dky0W;c6.sZ$װj\t . rC.c'M|h0c&w(ڝiߑ38чi_ccKIƕƁۺ;n kwQ~KjZVˁqԮi(YX3fG`( %%hhuA!E5=ܯIOzx? '?Z4k+hȹylMLNӯeLny.+3Ä؏ L6%Q[>XY̆(󗒨{v9r:.  
j S% `6@tm9҈'ݼ2ѓ X`0i>TJ\c1VJ$6j~ԩeq.fiWIe6x^>  7dAWZffކu("KJM K /e)!52ij,j 3g_qq޸2ddZ(ah G;NjTGs6&9C* L͌o -8bpV ZEsUkԭF;RZFXڶXͨ#onM^팷mOҡ *fPa(Љ]|(56)zz5X␒X~qǛUsEޭ-H3 >^tfR D&/^M( ?WG /HbVa"Ə((f?xdc Zi|iXtK3>Ka~H>9ِ_+g ƔPg2RH ]Jӻ> mrKm4tH4l'YgfQݐԥ2YJ&(E73*`X"dv ܻZUyB;Ys ʓ|ʌٖB5cM-d AqǕҌye[hcl@ QPz0g̨ j^VnЄ bQo :8 ]Z5njm/Hs]d屷}`N}40p8s? l43o5d <;KЅ/-EdvR;V(-A?d6#&Y5=ԶR3UB^+X McU L\:4'Tz2,R#G\oU끧JTh\P\aG**3?J1PR*j7nTZ6)jW)({\  ߓd4ztP_h<6 ǀ*=Ѯh_Oi?P.tY FU=}m-O}ғ*}җROwwj5=[yyՅh6@c|P-hɆ}"s?UPxifw=tՉd.*xiZ/y)~kJwZ2g""7g.83~4|1o3adwؕmX&| l=` ^*j7nwhC^ainu)\z@@c+&!^7Q'tFݚABQ!j*aXiq/<%_۶oڒ 1B4̆rMG( {tӄ+J1DNwp=@*Qdbo~0oG&9+½t5!BJ!oI>JG+xK6 ={iϮ{3ۣX@!P>a^hP-CXt9elAT#$n.+|iaĽ($N {/9t\)j7l-#ӝ=\`Ć DhQU0Rcذê  9(P,1h_y=ƻ_OB*=!aԡ0rTnv\Y2QX{Ϩk r@TfOA՘pdm#qq4J'.+wЭ#=\N]7۶riM~]< XLz0U Oj+c֌+/N> +Nv%/[?U.TyzxW^X!pc$6})I}+!)lHiԺcq\60rtCB@!pmrlFDDJ5orGԪ{[5A1Tm1]N(|9}=CNl b! ?zr_T!йUCSvQ Wn4U6A`vԗFPA;nh?C ;iv q==Ȍwp8g<[2qjpQTȸ1A!!wDGB%mSsl1^!P(n^FYS`6==C|tpf7U.: G6sc8m ]fc?;|z_{馫&jX!fx1Mi._:p{DZ5!M8hd\?|5+]sp!Tv B\{-uXIDȸQI;voHJ6:R /V\iЅڳS #gw]LOÒZ6/@E!c6vEQGsAHF3/ً3-A[;QKq3A+ (B1F9%q46~XIHc|0ߍȸϻTrxU0+m ƾqmoUhv#4IYRUiHFÚaN{3u!lZ|m_fSm/睯aĜ?>KXM$KW_N!PJ&Efxɼc8"ORUn?$(8CpxL% ,Rx)g C.N1J0 Aҁ$5χ:rIcRvm&c32SG\7VOq`|twVFP@sRIl1-/099w%GBO@ڗ>|,/eᕸbukk1by6LVg|,zZ:[X}թfp e·J~ ]ت| @kal #K;M*S {[()<voۘnҒ*XI؃^ZPhߔ HHBGxS2j`ֶ H*$ ,]ܽږ#DX}3yosǪ~Č \Uxﺇ iƘo&=l{D22-X Zh@_>n%M %xNbN jB+4_SQYDE3E<`id>ep`W*3r5+87z1W,Fha-êwԱ3ԤnSmX'DTrhZ?lq= +_hXtMiwE 8YI2sQ団 DiZq@[ؠ Ψ(k]f?B'rH5f|ӣD~BYQQ.hSq{## YyQ vK6^?V欄'GIh!(Ɯ[c=5qilu\H"+.]nU8K @ a)&9Wv$:<U&<H1' F.cHAEjTޘp7}Iy ;5rj%C91ʖ}ci=tRP uKEnݸ6~h)w(ӹ`أ sh8"6b<-֯L k=7wyCbY?Q\nؾsĢpV.} NfA}obG<9T_ v$ӆY9wk1x?Vsv|bRuc6 Ngl5АO>$L0?K.<{QQ~o4X}S3xg$rǛE>ul5^5VPdAja$0Ck6>Tݜuߣ&FC05jӲ-3.];PAz1"^7?e RdގA=# n1"80GҀ<%ÓlX5RYDlX@(%\27=m&+*DqW*5Hſ7%jJw$.y":. Eb~,@bq}b_fIY KH\f|ٻ*A5Zw3H7W-*BPv v[FöPrR TϷ?&ˌu<9yAE2[FCþpٖ;ikb[~<͘˼0y$T5̌1]y8|_cR ǞZ6vWp+Vjlݟ4 c{@}iDƼC@X dAƇcJј\F#Wm4U.B`vԗ\@, ԊTp[sl[p۟$]wfy]/a;u?s Y^}c}f=}2/7L=Z/Kʴ_GaXUF|_cOlj]b4_, A*+?l?b`/]Y} }ߡSMn ̂`6`>)! OπLs/Rㇽۻ9Խ‹F9LcsgH)3iK(w=VEv8=}k;\G>y^Kv7l Z& @~0Ua~dKwhݬ ʄ{.}~3c#\ F,8F'lPZ.1HvN֦&*N6|&Bgm7;*Z2 l;h Ҋ`lFm6?s\TAJ7$PƏ+N=ty%g̍b~Dȅz#v1-TriFjڅqJ!S uD`BxdqSY,fPF5[ ?.lJ>_VlXdYmֳ/k|d6$QPu-rZyMm= ?-v]=3 /0>`?[j=_,hwpȢw+ JLojevTѻu7 돍[.!6\N7UhcD8޻eçWo9"!' JQ{A*(0YiiI){m|nվY>w"I ^ðlwؚX$"^;B! aȩ+7PSq.GY>y'-k1aleסkC2g³׶S9Ld/XTK>?G5g)D" Fch'bu[Xـ " m{NԴV) ǵ*%YO{UQ=*p vb2 }Ēnum^*Ե231"Vd3u_8Gg. aR @р$ǣ⷟>IfCJ6*@UR9~`u}Q~O}eP_Gfs[n&mod64ak{+D!A@1R`W;&+4u(c6wi (б 9ƒ5+2$MAN|P`9Y`6x~oYk'%U6 #p` 8pom1c}e5_ZHIfCQ)Fyi)ʃ7_~}U/69uBܻkaɥE\pEl+|VO=c/ [xq^QƏ| |sS-5=E  w{9FϞc7M{p`< n]o+N iWP:r@`gkQ4ɑ֘Llf UX2ehR7ҁТ{h}M{Ģkً"p!1!'C&1`X :DTQXIh f(_yvBf[ .zi>a9WffbLYs7& P}lkgg[D5|U4oDz6ReC lɘU.\IĆ d6 DHfV%d߇~5祦6}l 85V-_睷sM . 
[binary image data omitted]
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/install/figures/network1-services.svg0000664000175000017500000014032300000000000024064 0ustar00zuulzuul00000000000000
[SVG figure; recoverable text labels only: Produced by OmniGraffle 6.5.2 2016-04-26 14:56:09 +0000; Canvas 1; Layer 1; title: Networking Option 1: Provider Networks - Service Layout; legend: Core component, Optional component; node titles: Controller Node, Compute Nodes, Block Storage Nodes, Object Storage Nodes; service labels: SQL Database Service, NoSQL Database Service, Message Queue, Identity, Image Service, Compute Management, Networking Management, Networking ML2 Plug-in, Networking Linux Bridge Agent, Networking DHCP Agent, Networking Metadata Agent, Block Storage Management, Block Storage Volume Service, Shared File System Management, Shared File System Service, Orchestration, Telemetry Management, Telemetry Agent, Telemetry Agent(s), Object Storage Proxy Service, Object Storage Account Service, Object Storage Container Service, Object Storage Object Service, Network Time Service, Database Management, KVM Hypervisor, Compute, iSCSI Target Service, Linux Network Utilities; vector path data omitted]
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/install/figures/network2-services.graffle0000664000175000017500000000770100000000000024676 0ustar00zuulzuul00000000000000
[binary OmniGraffle data and subsequent binary image data omitted]
W5Sffm,;9 Sa^}o8yKS!:6A%,mM ha095Mw,C>?ը_@`GO7y>ɰZcR6kU+ۺnH=6[^aw)o S\+GMC'}tOOKp=DŽ羕y-S!x=rr\pHpаVeiRa|S*2"\,ТA uq-lPcrLߥ$թE}}=^Mga68ii}Y33SSN)<`_OE UjvǨUrz#m- @HMP5tMiun{L-ծ+vZMA[ c}UKVr@ F#GKK]†f U+Q́+|)P"pjڂO1^Gίۛ?u 0-AKկQ?*r6+upHs!v,`ǼiWbx4 '#XV4Mg%wsُԽJ8e2S#u+ Ql|-h]NS&NxĬldvr)ݴn5a0ۮ@U*Qc1 |&Рf%s[W6RJ:A Z9>G0KӑUj:rN a#55ȑ?ZӅ߆/#rG*D粷<[/?дm=, YK;^W^ytHfG6=qQ/ߍA9':IrC}*eSb}u0M*nKyxA-闏zɯk2.qN:[jr҅4 5dBʖSG X0qh4xu,_GߎM0qV~5%\v3tQ-h @[R.')WUP*J/Fuyu[.-O&Z-yN_nR;R]V ym}\ZX(D<,>?sVmKXj 11FAC0|#,X[umO4<ڍQ?H-8((Q/4;0vnkb (-Qc5dHwSиAhdq64mk?%XMQͽOܿ1iC>{`w:}z:s&M5MRs.c#@Yp9'Z"LOl؃z3T<0Փ M˪0o/.ro#idFYg{-!#[@!j.']OCp>DYQ6> C B/[5-; wjوF*C=c;pUz%U*,T]]Nщ ذZa_-BNFlX~aŊShJg,1b1,SQ+ ICsޕ_Z\]c0j$` >"pxz as 4]|8Eԟ?g[ؠ_U?/٬Cv>TO@ Ɗsrٻ*_J Bg.WEbuUY@QZWĊAEz%@PC =$47{/&3oΝ;瞹{d bW%h/ Gs>XvLJM;z|VQ{p% xc-cV-Mxdӣvԯը#+wkȣL@4U`z*͕i,܎Qt5R>taJEGr}؂`OQ/+imS?M`yihyaifˣVHfB*S#H5arڛ qxU!uC[Nbtp[OE>놼 ACy.sS6  '$ۧHh.g>vdXűm)&02}1(4A 0S'C0x}\ Ν5}N. 1{#uk]8kky(e &xZٱoy3oIs꾻6ܴ@MīK}i~Iksɸ%hi4UB]K0aVY0 |}'ReaIo:}8 㥫B҇!4450tڒb !(V MAC 7W,[cJFH3 O/sK2>.6V/^YBi4ؿgҥx%l3gjNm4qx6fs"333â3 Gkʕ1 Zİ-ڣ-c Fh2f'XBEU4y&C,b_(Khw\#L:V@JdQ% 9VθàGy,}x=o8wcRF4R.}Vƫ˝o&kJd2ӱe&߈ }<<Ý ԟ\ Q%''H N-r9t ɏXw`t31ݲpkɦ־$h6 Ť|4:oJ&-.)\x-?kB w=̭$3JmWz}/}8R %t C^>>m[h>,WFdW_Ș:QSbquV6)4Ӆ ⋈jbRq7|Q#JśM2:b ?ٯ߸_vn{2҄'}Z:A[[L8 ]p2OQ4JitfgN* 38Tuz2sSě܇9fݜ`3_&a${-#qF2!g.D 6'us˾|Js ֊Dfx i))u.Wr%Ass}Y(!08iZ,O}$fa#^XqvEOCJbxvC3wt7qi_gt’ {#x"WK74)U*V)%O?/_c"c!):;ȸ9mBD%c;hRؠ)Ǿ KzŎC є>{~_h0-Nܘyްh&RIHZ:`ng9/׭[ڳß,X[EfBd>\ס`! {+o.9"}&aM)r::c%p{/S ;4 V0THIZ!=+۱.~h )FZ Zj6|'r4+i~*MY}Ӡq9/̓>_ | hF@1Z)ߜ5RAʈpqy=adO&kO1q+/\I?&\ -PxhH7ċj~0IQ.2(J==|!'<܆5yYKU{"X P"DƈFo=|M{bg&u8]?rFl!@_D6koo{1|QrhѠxsTj%Y{vh"V]nb.0h8-WD68hw"0tc ;H?ib͎2n}mсa9>2:qD{w}%fuEa" d{{ܭe׎J,GF7Mίj/{X}{v&.+aӍU"-`6e9Io<*W#&%RRe\e& #q篊 # vBs6ʕAJ!š`"1t-;¦X L։Q&^4]UNڷ%u&Z7m!\G-*ּ%`4Gv E{bV[e0D?.>yHy&]v)}{7w &!,*Aѫ̇ Ы3ܺx[|KA6f-c$3L@N8s!A%Ad&} P6CM|~r$#xv1ص7BC -@J@ }};`=RX?qd"<'i/Ea!2'La.KK0w!P`)D4*(W V0{k*N$Ȭb%JD21W;C0+ށiުz7ч3 ݣ}ZF1 KBxcK&jʱH"Jb  q$Ȑ\'a,(t=0b/ t즆C= ?Bh .%$~oǴ,۱cFh|# E֍<-RD#/Fœ]ZwՎjY f0!\VVXК&7Bab jYHf+9N P/qzwA:F!xeө9!@T7ZsL=?{i #MvGǩӂN֠ar%"y uaM Kiv=f L jW ]MJSs++Q!IOMJc=GLbRYCHiAb.H h+|RbU`u\P)Gcwʕ+X}!lbOm0ϡA0:ayTETi>4<2.J-iqj1iɏ9ǜA>xsnq |A枓-/lmsI2+ |n@bW1 CxLp9}<6W XA'ԾV YLP-us6e-K&&xpyhDtA-'D^ )8OԇZ~wECM+ʀiV@4z~uʼoͿ a!sҪ+kExOHՊ%^h!Ha[Fb*K2E)p%#AGFġҷVsH&4Q#5$ hV?Dl#*bexd:h##꺈h~a0 5WLԼK=ꞎ yBk|ΪVopI&W+:Oj|&zÒUJAj 0ɷ+j)Pٯ@筕Np:}-WPg/%_y$$q,RKl ;>Z@8ŠC͸ԫ.賱mQ5ɕ#8)C`vD?}ߔb`,3(QBr|t8 ' ܓө -`)hhb>RI`_U?*M+dW<l*T0ֳz?mRXV qBAБQ&IM N)d1I!N9)lufT ]M-Vk k:5~ | $"R2 "U \8`^ aoQ`oS@s!DZ mM&|&ruur#ΩMe.6y2T8 (`m|568x&EwpNmV;Ndʨ] V;@<GnCnSŸqs10i~lY7Ql=IdTheܫ1h>E9gs{nN>>)ׂ l+ /?28σ"̾MN:˾pG|(81ϝ:A^Wޜ8J- θ^^$by?k#,HPN`Y>R 4)!5Ur\%ҼB7g0J΅KA02XUA:"AkظǺsрPrmKNF+𝡆BEUq |e&f@$NFt^f Ԏ#1U ñ,FX36#iq@|q|B7M8/rha|j $ɉ[ik212]3Ri{;T_a46jC!w"}򽸧hUѧ.\^nvm9u-sJ#~eT_`2wFײ}('lk[V9QPYOnɌyBw m;{*xd-Y *}úi䣉s&Y`&A†ծk>f>kcqq,h) nkxdI4Oד5h2օ.sfXҌ7eCi//UϞ*6R֮+:3f9d0b#4qzw8oi3Czy`5Bk͹"Λ=mbnQ6lA?c@ c1bV[mF\b5 u= ԊWup h:u~BG(tYjR2 Lyb/1qwpe ? fZ!4Rxiʗd4@"X2 9QƀdS3d]NTG.dg<ջ3֠ s:~+SR9r LLpʭŹ&͚<mZ۴.g[f\w9†~կ00[+䞸^N2J!7xۻ̻!ULB͒.LBjen.ȽpvRO׾&4kR F˼4axC'/UԲ>MXu:-MhV14GtUiג7N'N}r)ZUYUc@ǀWb@1jgL nʼb{\f«^^fO[0~|my ^?9eh#1WEءe޽LQ|pFYEM뚠( hB:?=N%}5]ذAaNb_brAǀ(V_D?Y2 ,Zu--LYC˔)7ѽ CEdeP ڍ줂=0CaU\ytJ5!iRK_^Pgc1S1)+fTV0un V"K!`ٖLU]v$bcO9PpdtLL2F3DkYܓSļśǦH>%n#;3\3dnaHߏ%^U) ixqypɆȧsPݶ(m~y–Abp\Bd.2ojbZ/3ֲeJWJ#{ \ɉck@_M4lקϨ6-2huެ0I<8[4 X)aޣN5~Ni`r~VγO?VHHA|ޞQ}<]?U #)3L҅["Dr ʼ񗮉o7cm!50<2o Fo# |ޭVh$glB! 
0 33Uk=;j;yly8Qcy*+?\dOD$3WA@v\ W=a;kۿs 5Eg鸏;npo!uC[NΥOBqޝ'_Jz+j8vlVޡwOA*;wU1аeډ'IB ԠqH&+@@[52Va t( QG@[sA!Cvd.-Q1{>q)!?/6Nf֪.R"ز7V26Yeh&RQ ѫdvmAu5'=Eq94C箈4#iw2- O-a&x1X!<w*Q 5( yaAG]kB<{)Aģ1Gq7XǾZ㠤Q};$Epά.\S=%Bf@O71f@gr~qzmVO܋"u2'Z-|{6r8] n9j7g? Z5eҝu}8߇!6qX㷤|yO`>0a>y O_4GOj,u; ]db &nN 7Y¸50`$mʬ4!B8qG<&/?{I[ZF 1@p\,Ek#$NL~0g/ 8zB_*BR;ؓ_!:7puNo龜MX}pB2uQ LH`CE[ŢiB @;f>w|醂@=oq2Cȣ2t焑 ]&@!7 F!z=3MO yz/x`g"yf_9UoFe"ynN^)3t\%=8ua ՍY gKJ%8vVNޏ+bo)s d ܰ3]Z7L WK;&6OK2 I(y`vdڲ GO]{/dž-~^!" p ԙ WR}U 43R@sY Glfw&U[ש񀍷Զ1e;LU/\pPN 6\!UDfvUŎ\a75R^9(@BlPsZj /Kג%#&)Z7hfՄ RRSsn9.7SMjnC[g6KK#TbԂb53O^tvDln}H&5JU+UmFFh.c'XBxNFP!BJAimtm)(Zjߵ:%R3`տs aNLhY pq4%d<iּZ\j񽘐GC7`~X{IffqtJޏ} a/E'Kk NnFm2 &' x2T#z& V"o>3Jj.ho[QQ@e {cD?_ !j ot2W))yy_X!7>V\Id6͖) x.5f*q&1Ӥ paƜ_EwݐZV_y)'10mA3j]!T yM;04tCSosh4%4!k -džb%0)/T8aCisu,%O:[y ҳ:4JkEq1fNiZV^e`BU BCB4jV[uj _\QU Q0n0crshOrp~( OA] H]az+?S〓5 ||T-x\79A4mwcLKI!NZBv FaZ SlO=reXt*yC! ]Ky{75&Qtb&ʴ|Qz#fǁfwը2|Z5z{G>3/%&pаDo+ " }W5mg=fj:lh0$Ύ>,ه/uHez<˺f@K2k>+`>Ld~.~4ݵc^6ˠOHj W䝹 2N'jlKͤWNGOQ3AW+(+/>(͢XW0z`dXM jWegRX!J N%B wU`MFIS{:{-ڬYύ8x}d $K0hū"єL֎@T stŒeࠪoCoLUӂҒYk+w<ڴM*Q{+SH$+Sb^);yEBk;N!y)i:U*^z]NH`Г&r&&W`5-i :UbGiOBN{[B;[/<'/z[X ,†_^,N[hiclxiqĐ6,q TfFF|rZ6֜Yͥ= z8^+;˖!-/|+t @x dlACb QJQBaCf~m;RY}N7R/6A҄ؑl^ ZgZug_Q uH3<Z&U5wԖ&D>4c #am$6_FZM?ymR cxor +??[l6yQl: &Yyӑ{T`2Ĵ-YHA N-vBC0k$좚P}p#9`9?1 %PXR-a/=2(_-tA }d j0HKdQ`ʒ2wԯ , 9oZGD`[#C9[}6 øwł@*F傊rCҿ$h($ڔg(I:]~\燖hDQj[$a?'WiPs[x~F \ipL0 i}'|ˊ, ϫO#㍻c Ɔ2Z#M]Z! 44_'i A4eRO(lFY@Ҹiwv*Y SR};Ht.qͷ|9 "PTڲqԡy? mPA|niCasճA[Ǡrlч;k "i7WMxf]Ԣiͥ= ~^j=y8"pYj`tJjBU}4ڰHЎWD vvT2Z2K2 4fJf9J!(* ͧ80ڛ𱡉[cWU+ٱ/CB{<@<T!`~ ɘu>rƣR 7<8bscrҀ!= Hz[X[ ecRӣDM(:a f@ƋN(s7$ťiN0߁ VGȐ?_]p!~vmf?6GJT5#*jk{B&$-&}p\s7]I8 n[@ `<PP`|Bbrle?URKXB ]$W &O24/`2HP3/X9i2Bp~34o[qޒdϺr>dvc8DfR>h Ci^[GǾ>x\?P'm92 >'zN[~FBYhΐY,)0 nLμx\2WIR+pqe' $I_Qc EKZQ/Ly F j^)L(CMmng8~IMr5 @DYcc~ s ʷ\9As0y"Ȋ2%<jw$ ]!,GS2އ<4*DvW]fY&HeW1r~qUh_ϫvzX%),A$.-K#h`VV:<]S&)L0X[g/%`;QAy{@k1aTs,s"MI3fu6}1sʮ<YA f楋?00Tz^XAftp (ݣ줰?.S6DA n~s4n("}\ۯy'%†ۙ8gᏉAhf6Hbf\O;i1+_.D(l(Ӛڣ&i )'qu{̠)Q0돏{+cc (ҟ hjhnģs&Ge) _ \܋/go Thf^GLffh2x0Dp? Zpu+hl Ԃs^` F:*Fr9_Ѧֻ`$@( !9}><hwjY4<3f99y.񇳤mEĜ 2]t=fYeam1:*"xtl8@AN@E 7SSnߵׯ\X~m j=m "чz[-_⥶8jxð'l'))Ii)<h'k`Z~rU@!41;`sg0Z#35d %g rA!=\hvk ɥ}{Bm{~y|tMZԶ8<,|ڭ`J%IВY>=l岭*'ba?Ap^)ď' q vn(lFzșs:}!w >8>!#u5{ŵkT=ǃrBɝ&O^gˆ7G;<]P8ѫ3n30 (YVeoH{A1hB2dEh3\J.OH_ܓ =Me[f|r%̰b2:IGU{-=90k v`+2SmwT`N~ZN-5q,f|b9{-iZ&t8Dj\Ut&:S p)͔?yȸG_fB4Á6`5'c-ZOQ~JP ֤p@1?MZ*;j̓7&?;9;!.~?c#Dl;Za?u(*Fj "˅ l-qs0ɓtʥO#@vD0O^ mp<.DXh:VC~b+lDUY!: zN-B" kN2B|BYD?<|:e|F l ÛL)0^IiL됛H&րߕYś9)l(f3Rܪ?ҩπ.wO*BjBZÖ4@Bq['KUఱN <of2 sS†2RĬӇz5v}P v*F(uư+ x V H 䯱)={sx4S-5~УV92O 0te.764bgj=>^}-'^ C6 !MȥgdɈA4!c E<KͽE(-LB_#%hXkWvexα[1֙Fk>gצuaѻ"b1";;k@M|˙q]` @zfYᙰ}|*6fפ$u0(at`I:})8>ɑ+J(j#FjSRCa/_\߀0&;d4etZOFf)F }"Ǐ̌+߼7s  6hIaCM8ԡ XfgyF[+,c-1=kaDvh6}~m 0ͼ܈Yb[n_WFtG2i:6W{i"+M6Sڣ͝5}9 mpCn;E'Z0WׁÚXN-?S{g*ʼnDHII6E8Y&u7m\ַ|nx, rFFZZRڍ FFE@c  |Zs*JǏwjZA~~*A5MyşxU7joo ,n̴djĊSKc;fFɗN;zhԾx" 8nP# sXb -lÅ {Yʅs?yurSt~Nn8s61ctj1 #h4ʗ- 3jbPVoPc(fNH!VEMΉ͞6Qd a&@MJD0tT|y~u, c S:~%f9-F?v4}#F/~sW*TB5sp(PqA-RpP%dt1`LiI?m6}OIkǑŽ?ٝlYчÛ aVX d,м†V^\qY7a@MYP̞iP<>f9u/NVRN>v}F(pg(Ԫy=q{u]O$*f^>'½78VUcV:1@ Ho-X#X^U€ I 6h6^1Dx̉ KAC1a@1 pc_jbpX Q JEgz8T 8+zSt>ڑ!~ nqWݞpqo8﷍b`VNM(S^TF֍kیrP}U\ L{?<92O?mouo/qw-3nk7y4c0D PC1 ?(M~۩<+ N8d{u Ǎ̈`h^+*QTy^yWч;2JP^5SKo$AWA^>ŏ֢>TjsvgS6:40N6}+16d8xR0ng? klώF&6/:^mo#{LxvV]ݯ0G&nl@DQ/SmR\dҌYM_̜|6T9p㋑Į&|/2:xbTZIAեGI1]A|3 \5^p@{XOw:»‰vo ?<ʱΩ PAa̝ # LwYq%_UyˈM{unDcUd3:{bd2#HW!tt%VEN5\!INH ym1< ֧ވ'q9ڍ+b+CcA]7 (j#bIT GqhSEՇcy{I2EZ`󋔞MUߙq+U=pA_7'Z1x[3CX_i(o맻 ~i\ CL&c]A[WPaeHv;:}8 άazz; y3{f̧niτ2|/sqzyp;Dnc.jr1V(wv0F^9C8yh/ g1>q [c4+,~Ǖ;sW!! 
B )i1?q BϦ݇Eqb$qOًįv)pR^?85/[F{|0Kedf˨SJu.Qj%1ad}C$3,!0-\a`MR"nz 2`0Zk]j͚sS[V[N(ȕVw422!$?4`?UۢDx)|m:~Ja6qU!Di& ,Vn/]O{vHuH )j߳'){dXw1~̃E՘= & $ >3c7eWY!/)X1ЏIrjnSӽ0Ϝ8}yJVZϘ8MJ :}n܌O2,-&!Wժ$SϤm{Vk`zϝgZp5+a@//j.c숞岐+s*>3}[V&NY9ޤkƒ[![g; AcԆɅ20-Rr3%D8'z6S==8$|ˋAQ)&f_) ϼ ^d)ԪVI/\$ 9ɂf"]d?j+aHyMT+D8b} kU};`Z%(5KjT g˓?EKqcV/Ǩ)jհj57w^+~Y!5]O-{^|ˊ,U'׮&݀ .VU;D="Q = m}}CEpiKI١ORxk LꌉB[7i@`FVORLMπ$# {ӿ>'zwY@b͓nr.}@a4&.Ē|!L6P#dMVXdc!l]DM|tx7v6;s W| :*t|& zb!esc д)L?|?to%ͨ(bF.V}+hE m:bh1chAʋ;4b8El; _{iZ H*y!vړ{zwm#jмB[rޝ'e}|A~%~3ӔmfxO4iSg #9%]>yN<;?XohjSCJߊ-ډ A$hh~\-,1]Ӻ0Jt4j\@ jJ/CaC+p8u\-l`W8+0PJ@IDATjD wN^)ˉ.Pc3P_:ڹK>^rΛE8̃Z!PG!TZ8>3gM6[1{{a.}(-ajT?pd<]!/'ĂD,XGVSr̻q-1)%lW@fzV$qHcuEMh6/o-"|uKBt޿f*dܫU}Ss CǕ=O$ ZOAXTqL|9ѱy]y> ѷhNk{'GSP#AafQ:4#z~#@iJӯSN,e, g) Y2FS͛ (eo6oq%tN09M:c9/WG3?~;ˡPA5u󗅙9DQ„ 3t;Azzu_J Ns])lwl^4~dPn @hMݲ0>2;3(p 3q{շp}^CA@N$:h^x/üX-WO|{.i0qNuԔij(d}BU&N!̽Bd\]<6nPH>tVw+%Fݤ50Wy?K%8p5ډi[G TOGbu/< ѳij27-ɂ;rAjIn#6>%}BI1px@GzAy[m!U*noʹԋta^G_4mf?9w uTRSni7QұtMcj.6C.uڲ\r9_$ȓu^[oݗR5$9ΪVa}9W V mGAѽs^1@QbA_ޡ޻ C7+_`2L2-1H+: w0])j)_Q{9q>fԔWYVS;uK``WnPPLp@[[#EY^{ZԞdO~Y+um-RK8fG{Zcx!iaUSx ~WӨ1htDM0<$|UZEAcL8&_?y}#n[Ex:z >Ɛ0b*E&w{3c?ǯ6DVKKK!>`8 , W=W+hT{^D3\uk.E]YͼvULD(U#lP ꜽ=UuMF[&,:%7;31;jԑnS;My 3):pK-ȘSV.}#&M&G_NSy9, B@  L4*myaimkG|kj>7{Dn.ĭe"X^ej}ՋqxU!uC[N*IpXN>5Ԯ. =$mN ӧ,Ps;>Fjc}d>vdXєN)˹ oMlH@7/#.+#az1>шUapgŝm4_҇'SsrvEZקX'mޣf 8׈usN}\Qj|R`eC>exwB?mI;;p7;%p#8FJ9.]K4m0Ynf_NFi\ϯђt(e> ޯ_[j^Gc Ti7n-U{n9aSP6Hs;ȐjTS&l  O_ n\v5 lξҥh6HSv^ N3Wm~Ά9ٌ4ƈg2E'`?-Pȸ%hh?OGi4 F1;aǺ*r /\_ zGJSu- ՗Os6UQh4GP{[Uj{p>qǪKdbHWJ΀')QLǖ|#2l4` Ojb~,99j' H6$iŇȝ2~B˷-XT1R{wMecڹi׮\Iƛ(4u'ﲾxṂQzGoyq0>Ӕ?(pߍMu("RUr,&˲HE30bBL;zX[L:ƕ; ~;hODlZO7M+6*&1To[鮻~}3HmhFa4F/عq-dI n[4OtJAykvׯ_cof禃@X4#2x9SF)_>Qx]|KCg DFD(]Fm%~#RR!9𒠹y}1,19E}3@9Ut9kUWuGdt 5acٖI9N0 Y@l\8s*{>UzΤU+U@h߾P#B7e"Z;N+_t3-WL[ QvqEc6ԍYmXhq z4npW/ݬ(@ NQnѺ|;^1r h `R.CM<_ Z:Ug~czyθUk?Wa3i+׏=4y-W/^G+t E5۔IS؊ Μ9p|\l,7/'$j0#Q<>!&#UQxt= kЇ7&*BSƹK2$:t"^Ln D'r!):%;ȸqbN2d8-Ǿ KzŎC є> : 02Yɑ 7Y 7%lf?9M+X) .s8ynО}d*-74CޖJpסt` v`LިʛK`*wHIخcStʅ|cSgսi2AZ%l`IX (_ pb}{ThICLjX*40^_!;jM YT8Έ{bDƵĚ⯬dU*ՙ bն( ƘnA_zPmTp}pm>>[1KDo` aQqbh 1[&LU^d1?Dhz!]PC- p2gƵ Hݢ<HL˶hZ=Se˂ Ahvp*q!qBF7 ίj4{X}{vWc 陙&fg>h S1^&5F"MG47D:媶ոkؿ#lO]G9 9^>a);~ygb eV^^ajO''%dff|2m\ΖTQA!CKBqG 法`JR{I>&'ಁ;4@8tc背6VEBNyTdMK$ Aw&[&b6=OJa#.HI*$ʒW{vh'd:hA@ .$̪ tAbx϶WQt:Q -D}D*22$ 2v;2T0VgȧGVջ|o> YO! !8nag$ݬэ*J &+caQBX4+)l1)0ˈ¸"2>9-f6O0v+Q҆'2pD?BҢbԵLh9V&իۤY3_ x*b!+;+##--)ƍ$l흗v!DgefMyş' z\D8?>>e[bfJrbbc1QGEATSeޞoL5LOO~-ә,aC1 ^ GpGc@ 5N[{l+oM EyN /L^uE3{Դnuq%P'LMƾb[18Ybs 1k/d\>Idݎzc!TmR]ѸN5\oICQ`mE{r_y)(.fs߹ަQmws.AK7ʷoVG6"y}dm"?'@Aa0KKI#Mщ)lPpNH Yܪ,h\C, 5v=lKjhYΕD_l-|/{pJ5!7%Lɬq W'cgs<^J iHäqGS-k '+CS&Qeo|>^*1%hNQd3Iom5<Y"O.4&S5z}1ҽXOtj &Sis^ kU걏8~沈x'ԅU"7cřڅD&D]d UԚhd_r}KACi5J̤i_8;اL{7y2 $߇1U~Jy~x][efy(h4>e)^+d(eS<k\/1psS ҅!<&x}Eo-6+4S,7[(p)]ElO/|p(5F,%Dkh[he a (T}uw ޹5V=QtQXP _P00  0s{_lVUA+ x\jEs}D8s*`c w+U18fz:5v4՛Eisen8xdXw1~̃ѤE#>Fc%lp— qSz7%܈ńqO!4nJb<Խ8t37[ʺhƌ#g6*Pɶoi+Jăz7)KK,o cƏ#{}HJK\ւiׂVtl)R6 ns=LlZ4rnKe* G6oIψ8t̎.\IYCiN_-1XMNBW܄A\ef2?mdbIz:<~Vaзg. l`| loٮ|\y:}~ì+GQFUѵu#A ux}͎yaI7,)i/.n0"AKPwK_j84#Q[RbLBrYyMOҳh5ө -`)h(aFui7b@1Ϫ2X7럙:ih7 >eCáޓt",O}JA&2qBQ&IC N)d1I!Nyl8SmU1Za{ʟ58( \JEᯖna^l0 Eo /J m|iOIcIA>aІLxQՂ# -O*(\|wjijŲUoRx73F"!8t\ttfb3`$ҵ;҇WWju]>=ܽTٿӳBz6N۪9^y׹eCk௓)iuz#x^2*ր-ťk428@"+(>`e&Eo ۬&w2pQw":x؏؇اq^#ai'Nl42u4msgMߤi,ۮ}O)UKWc4W0T}s;?K6^(SѪ.ۻ@pٻ-ߺWTE(L%lɳ|/5/a KxyNvlg)ؿ߄=|yQ^|P> :wG"YPU{9 Xw ~= 1LV(<}v诣ktczpeGQBy!uY^GӪ^sTbzJڈ)/gXzelXt qYRXGae71`bĀbB? 
JK4ݺDέ4?mYи 9s0u^K>W}2 %qL TSE=9 !ej< 53!06̺L C>-ň2_hAnzr;lTqk6}`WFԴ|R}C>$B8z5ǀ倫ZU+r!jRq&Ucj0Di鈩Adqgا##0+iMS$d&t=ə vL 0BU~-=+?Ky~͏u)' av4vo/~,A=xc:Uγ?j>Gk31pi0\vsbG?Ce7f &D6Ί'Np}Le/~O>ffҵMcW^b䄯!dh6mJSo,ZXkoL/zB"?%h8RcԪR$Ps{#2dv9Wd'LsYWف0$)Z\ȴ+8)ht'loۨaɦQTSr^H߹ͨhtZh2*𬭏#ޝdd5훉mBᐉ}uʶGbR  sx(h߼Lޖ&C1/,]qF&/,@_ߨ]L :MW̮2#4qpOj9ヷޘ>ÞMnuѧ 1΃q29*vہUu)NixFp"~bmr f J( ш4~P]?̵~&\ -B[Q5Lm܁#31 FH"Sͭ wZz]ׄs!ZAFeVv}7"^iRWd%z;|쌘|؆Le!ZD]$zTvr8-̻t:1@(:bTӇ2I3k#'.n~J3?E4ukGO;6}ABvs;,rϳ9Nc;z]~2dgDYM--t&A}\QAg-X AYw疢s Zi٩Eq33I{3@Đ:x/,A<ظN P 0vO35|./v̻91"|d*71:G6S.v~D-cNK :xj3ȐCaf?뤐CdrH;lPOY9ˤlA$BOGmHr>GFN*7[ pqM=k!ێ`'|m':6̓">^{5[#4Y>a:LsCqdg}|=x-QjA)h>=:4#ׇ$ oKr$ i KE7^AZϙ绿{v 36Ap5$]E2µo*Ue+NIS(Nmu47sEW86;0 ?^)cb4n.9#1) if}QѾE}([fy4xzxKGvh>gߎz^DKƐӘ^ja[3bH?\sv怦KZh9:l ͫӵh U*2=j| h~wmUhdt߱ eäy^*Ƽ>#fsSbꉫ;5M$v=Ie V))0'$O&k!܅aNE8Bd!a ňg4.#w]M0i۳ힸ2>,GN٥`1ijG?ǑU^A} ʅ4W\]b6N3 4H6q)M?(ןvojt).o0랷/NI~9iݾ'_rsj<AV:L w'LaH3?Ϡ6 BI\ա(;񲆵!&T]ZA@{\:0 *|Y bs5 'wH`BĀ792---61%M3q>sSg^҄_Y yIq Y 9ijfs~ty?k&0ǡ\&@Шv58ҧ3򾄁=&3s,]CjQȃ>q>1aKpX3%\lp8Y}Oe0uy 5mֻ j0%%Km;YE/A}. kW0 {]2o'@[MVWE}мX3j&0btZ*b< ]cӻr_Ύ.ײS66R;R8F INJ<ֲa-6*@ƃJR!s  9#%.$V>H/hWs8:aZdvDEᏰ?˳0WU=m);x⵩ޟ)Dāȱ'XD tDZfԑۿrV8BY#<~LiS} BDӺ)_a 7okg)OXo?D1gޝ)Gs-Ws@W/fKӧq` c#ޞ&7VjDYhN.zo0׏{w.GFA#>.joF;8~]{X 06 9 :`c-" Ќp_E'h)\(Xy/R=/3ϔ-^8fibf ?Ҥm0ܵu#)0. sKAZ0/K^/ wmꖇ+pwG k#ʝ{XObt~ABe)M?yYYhcgNZ\Q25jSKx~A8 Qz,urdIS } <_ L'>\H 2懡pC !v3za,;~<ѿk&!zdžu~ݨZ jíy:'N*M>ƀ C{s  ~!80#2}ޝ>Ojh.# |we jUŘDk7tZm&սaFPÆ É@ZA: M &o SqoѴǹ ˬ1nv?ڬdJ}a+K^sD4Rp9|8.(!#\\-w~#==viN#5Pt)\` W $3¥1)P((Ow1ɸX:(AC}W .n1Z^T R[\L((%B6@'VBfؘ-IkL Z xK BP/3o][Yn>؄BaÑoaJPoa1!МbC(q͈ E?=zhʭ6JD|ߗJa!wBf6J$QE@"Kn;Of %pJH0@QgO)-jYUmVGl}X&쒙{qϐ^ Qa3lbvl\w g&` p9כV-411<b(h0H$n+^9Hf/ɂ+׀|1ktԱO)d|*<8Pvx֞QD6Aؼy:.ߏRPTmRp}{r}zMV;aA `_`0 pѴ`@Sok~_F}.r0ŁؓDf9AwRWmTX3^4n6jb1M^,W%}=4SQ.AAy%;\1>)t,d&( }a#0Kq}7(|eQRJarms-]}7eοZ;|Yzhڮ`FW|:Ātf1)\5j ?bi_?4" |1}r=aVlAq߉Hhkh\gpTjE|Zѫ _Eo C5?d6w6;}VGX;l >McPÙ %zj^2)l(fzP)l0KV̟ 9%6Aqc0gn!!/e9W.{:_J;&6,P])~HB, Gܛۢ]f6o^6LhH<xY@䴴ڽsMJ{E^sޕV`>(!bh9\, ~:7Өeaaqa cF@2Ze )67p_fK,O,Ox?!ĿHKOK=|>ġ=ބ2B \7(ptkE֜@ :< O7#nqdh޶6[_]ƥٹ.aS Iur8ϑ8Bq `Jp]><`oȤ) ss6x&}d/z> \?vkf;C%<'O[=WY>.N68|a7^+afèPM( bgd3sYNK`҇[M5ޠ"Bш@Pg1{HU|e8߮p C%`p~yQsK3d܍~kƀ `\wHL : A*fA}l!㇅ 2b p€?2Fsid I>4Ʌ7DWAqP†:sQ ^7E1{?/Aͣ8ԺB鉠hK3̽P0!`c2EYb( "Q}y6~p0w8]|#AfMi0.zVQP^EwdT]w%8l|gҨMf L\*~|wΡ0Чњk}5~ o.whVrA*J(W0$s& 6T3?@.î>d”oU9‚ˤbx/ H'pz=k?9ԵYNO1zΪjLuPԩ^Y޲†Z,^\oqW=Xr0I~;& 6T@;z{jc}|j!*#zЅHSZp,˼)"z&}x ެC6Z1:v#" 6>=ワ;Od,Ba?1׭֌wVPę'Ҕ:]r'v7c}޸X{Y.p`?cgļĶ}((qnN%o.*{DU[,"Z#ΞKZ47_QX,ÙDjM7c m?\] |{跢u¬Yl)tIh:%XwM3dĸ uy6 ѵs ̣q~H \[b6/Cڎ>nhO۝TX>I2V3oڬP@6gd=19M{uj}&q> b(QemHޢn*gɇS۰:v`ݮk]Z7zˇ?5"&z.ckBx?$֎m>fMxUԩ#hV@88xnt_ fkZWn\G ؼh߼CO ?,X%v<& )1fXh1by+؇nN'L\we; !y+E"[6[5^IDTWb:LcE1`sQFN@IDATiP_[f\q[t_M:kӢ^#aU|+zKtnݤ޹U#M\5S9J{^Y5Ll>d]{I`%^}@9gAmФnu[&_`>AKzi=ExKxC9\/+z~xYFЍz#1)U!׊6MiA t7w_wֶX[pxٻERJxpVMTPV<2kѣC3q/v o|5GTTN…5_LCM'ȣ^1|\xbp˃`Nשfỹ/?nPxvu:Ԍ;C/ZG`kGqtCKOK;u&!_ө?nm&OS[6)K M,]S\1i.hp-8r"^ ] ʅ4]Jh8(Z.HvmnςY⒃us$֧7GA_/P6ac_{ :w.>Z@P20QpofGHV \l#pQ2H(<@WQp\ѷVBոWiyYZ>`گRK/M|IߤPRΜOIRREh4BRB!bb{b@S8s|+˅gOOԫ!X$k!99mLZfɸ8ү|%~χ/tff&\1E(@nUo^~YCA=MA(^mPxr-7>=f@2=>xBkI;f[[+MA8&m V32Mk9?%=w:,wsҤi@6^q (կ r8lMTWuh.j)(3GDYh+.E!=/Й]Д'K 'zG ~Qi%4|Ef7|'M?#^@S@@Q}=y5Ybs\Z%=4,|j^EGs8LٙN Jee |(, ܬ5O `gmk㧯noZla;i!d)1tq V$OX8(fxmg #n%bOƋ<5b3. 
:@YTݎMنs)pLcbٰaV 8*ҝN',G*Ze3x-hSy`<ٿk;լߩưҤLpH ]@D`5'A0Cbp0 l^ݢ{(;\Lo9]*ƒZرa46F9co@[У}3#ښkMKt?bA)^{dHIKU+32ML{j=[{wBGN}GOjG (-63ζqy+:f M01+9 O >7  x#T&p4Nq2t`s9 }#VfG FO:uotj.ļQ1P6>1I?C`Ǚ檈&r:5AE\ؿ /?2XvuFD ok|`^| pα]͕L(dAV=8}.}GOx{ʱM*֬t^UFr3,foU1hp;Rxo٪cw`.7n7kVNm#>QvHmĵ]Z of/OMWN 'O"&vE˭,y7n6h3aXoubƛfTd,ʄ"#+Yؾ8;s^|?tYƹ˅פ nYe .zsҙҟb>,=A*z~~Ar`hnB1{{wo? T IL\G 4[!GЫ]۰a=Jܿ[k!%87ʯ0ڮLGa ҍ#^Y{ uެ'00e|䕴P)lg%lpa`!AAmB)"$Nٱ9"lq+4vjrX;ĊGfBGVݱ$Za|J(_V`R56wQr1r5rdFM\=?wZT*&^^\޲,zT cw\zG?,B *~Jfz̮u;DۦuĂ[]F$>vF[Il '@Һ5"!/ȁ>^\x@ҊO@fP I[=B(AةВw &ծzQs'3ϡc}qfqx-fpskPQֻb17zj{&iÙdfA4uEFȇo"ȡU`]):(-+X۝ d'#M}k FjSŬ^mݚ]\ٶhy5Yd2,C'V$Al00<„I=tlv Si[~}ich2iwnkq*aK€' 6]R/r m=bӒ,8 }цq*rK rL"!)YLsм0դpWDV̺n~)l=-j Ka`+:ɓS|f">d?_QRyu\RiT↫ډvɬ~&2o:/"ԴdЊa JS8#d-O?%Yq"l yiou K aߔAZV"|Fhȁ]ےp5vXl1pgz,P-qz9wA=>cZ ҋSϡ |l?@ ,{cEDt|P+Nw,¸1"rBM?vc[{҂q-|4 49Ik ֍pL i2vہש*'8wF]PC=JZؕ&5D'[8pah& ƖR3ixZ6PeNƸ8!]7'}=_~y4AfQ^~I |'<4&Sm̰- `jsoO+MuoDߓ{!poҫ@bmJ*ݫTX'puJ@L'A wCqkQO >X ,Ϟq~CF!C)aP͏9m'?ğءZ?z`^~#Fӓ54Y}Xd!Z^ZoWk".M?$:XOThB2NX22qYՑ֨.G>m5i 7@}muԂ^?~a0/J`@թEfV[X 2o`w>!$VU+eD:v\]3]Rp0Dh[hP41iR]g "$3X֓`4S=|(eu]Tešcqc[y]Ftiq( ܃ |hh'胻2 +l'SϞ³_L>6j,vY%hCGѿ#7z\i Emt=7R/ŏ H95-&#{+$\02҄'Mo8Ly1֝ ]ƪ{%숶dhcO_w璤P k"YLwpBh++B0=&\[!x)@з4p1}>3|wUМ~W6ծ&͟.C\xԈGfnLewaл\ڤnuN r)+'-5g[4"9iUPO?Z+ Ej>M㢞՝{#"'p4F!;dR)h5,T5f KPe(l E0g!_M|SR ·j W0YS]/Q{ sh/$m>6nR+vž2rMx$I0uB\ *&ےBVUkGToFu|&~1[Ѝ}} GFs}cRУxY}uNǟxٻv=V*ʁQ>}dZ!턹`Tm)H዆6ḋ?"Uĝ}벢?Q@p&RdJ 4b/ORx73nDpⷥ١畬2X~[NFY}-Lj1>ijbky7M,B`DV#]rntQ\nmZ:?DžsG ZZ̜R]9ԭ]Sr^qQP̂:T7|0'&TɬQU}5>K >ܪY>ׅs35…:[ .р4"+pZdfCA7B M"Qix[G7:+xA4۹ ^N,k`1j'ْ0:6.Іn|o 6+>Vus%oÎCb喽!N=؋iA#w$ nLPdȵtiG47H? {Lwʈ[7f;̪N>Sryx]HGgࡀ]?1AMLD(T:.s7|Hb6Twՙwwם@_ _MGIP9elndGjD!lQmE"XI?Dw'Ԥ0Z*@NBWk(AfO>U)pD߆NMs QYfVL)dzP@Z'{Ux/#nx4֫Hـy42k!4Yd$v~|cYTşXnO~m|nn}e^(9A?&P vVj՜2ѿNO8tl7.;~jɄlstnꭏ2P|jɦxZ$<OR%R(hr1Ll@x7̌MH"ob\:px?^"ySv+RL#(;,kB>0tBؐpBi"rxk/ H."rc $Եkb5}B*=%J>t }Ii:cb7\.^? > ڒ&G_?-j\oWڀ;+%Of"LXX2V2V8Gz!21wHկ[k1SbT艳1]-l x7^A 'asM?-r/@νGLP`^I348_!zthh69h O s.0ta;}h pZ>\䁏 ^~e@0/oK֋: Cߔvç2&D n%r8.edjyX,]ꞻj3Ugu,!$\*>y8ϳ94rXntMӄ~!Dr-Y,OFs5ُg`Ke Ea Ծuzj ym^c 21XR1ĪO &S; TjW(R3^VMSF#BW_ŵA$sG INiwMvek2kЊ0>dlߚɺ/ Lc]})|[_yq ÇA}ه]c Nl#b6ةξumOw윉DhrT9V 8CʵB7 $<NXRX.۵ q9|,:2!OV2uy7,Բ9Q WvzbD%Voz"Zu"óS6ǣ!̓hӢ5ҾOɀqp9b[DYDNr^Z70 ⟮m PҷL*_=83*>y!<aĔymbĀb얉€Eud]F'\LӢqm:-L5C״p&s!ɍ c=vhXq,9t~Njzf,WRk'ƒ=QO>XNy7K%p\(G `h6FO0(1|+B2s^^䔿Z#KF^+6>"2>׌BC8n߹sL A}kfW 6i"f `f6 1D5Rmc$o~x^CzC\4^FӕSYz7yK MҙCӃ`5x /$'4D4G*SiDDlƗ噄dYɱS|S`O~ڴwkX\H-fǐP1`Zu{pfjf_:cn{T:+ ov~gõffx3 l7Up@1&¶)pY #sxtPʁ̤Pr"D!C&0GQ,l`_p`HkU(c|@~Q4}YZo)֞U9{/ݎS9UfV|SP鿁@͍e@2j8T~NM!3L_2.},hZŤW/)ƽzq&l3"]%h/6E?9ۯKT.0̐Ɩ2ٸƫŌy+ )rJ`>OAw7(q2[6@D ^@^ן!Nѝ(om_aߤu~ŬwP{?:u1> d-gWOU(dp4U *_Oٰ`;{E@^ }v`cL46o'&J8mEva&BCގl : <`+*po䰱Q,I{.>vu Vw#w;{с=gXEg ا;aY =b\TS/[ZSA\c60ānn*8`vܯv&ڎ|Z,^Q^MN9vL͆mob1p;ЭxJ!d4AQba^3I1tOz׮;Ʋ!V!1rA{X}ڈ+.v hb7"QmM&S>vVCd"ϡ~{_>Fuq8lQ.%Bظq_4\!ld}cܡu]|sGFGkQ.~{xl\Q+ 013+ )p[W#dƵ&KrDo 27sj07 ٶ_ +b;_!4FLj <=>1ڛD 6#վt7;]Z{uhCYԠBojCޮ _&|)scԼ+!@_uLW~}Ԣu1 |o|pwvFCݴY>)Q>Gw>gSUkRx6Z AngI*h-,SI:6uKpS|^',0{֧ M{-gydTTmaM>wW61ק0}4\"|}6h\U+e6r|G4B f=ǞS?Ϻyutmu/ &ZypPX=Gţ!0P2֞2$/L.r02aȜ/9Coe y!HS@iRW%dec^SPBG6+ pՕ&j}ctH;PSsgU 84yoAKNMd79,M1S)߉ 1KKFrS9fG! oK3A+7h#S0IHJf&f2iɠ5ڹeCdo(v:&srDTE\F la9Q1ƲرݽwTh"OJiSPB)"o)؝sޟy4>L="'xC:dj吟Fmpg1q`9FT2T1$}$^>?*e`Sv{8ͪ 09_+b©$;|N/}S4WH;\svYR؅͡RKaLjBh^IV\^vAhj$ ,Ak\&M7BЛc1h$,sStH&LBZ6!ɼGa09_ըMjhFC&\胝p=`dxYU 4kY:f$J5Z3Z)9a09J69 @ OI&Z#SW\N޳;j*f͜wF9+1G4W\:eӜ:k"vN_'`@w:gp4SA 2-`pq">!6<0YXE$ݖNEDd*Wm/dR#*4ߡ\X{ZC#3t2wPlf&T 9 )lDT1pU"BÜh:U0{6qi`i7]Կ20CrjqYÚ^$ fYAPPF fv1Aߋ뷪kKn޶C6Ģm%pu7L (aC4gA֗vp.yrʖ%V͚|s}zJ4A4=%f!)zù7z.fRqx+A5rRKMǺW/S#R:2"[QB! [pE O|DS5Me`gp8? 
jCiXص_ޯgv ]ԌPf ľ]eԩ0j8ٴY%SϞ%Q4}֛gU_!C#"FokW:APj09CٷY=&;}0h#*C-W8@S|M֐O_F S7MRuƣ9_g ɷC?s஼'BjǏQ&̕NBRMO]]"ޚ2M1ѥ#Z~[KѧK+qlk~~Htn&ѳS3X΂sQnn\J~VÐ̽@/3 lOC0 8/m2rpo$MeԽܠNH>ʖ)S.{p>~0$S'D7Xa@s $`$QͳRTOO —γ`O~eaM(\sΏL. (\  E1*QX 4iq6GzzP^cxW_K48jXo<}&-(ҧۂvD|k'b cj5g^?hUŊo 4p/31۝ +jBs;g7V J L(0K̈ØkΗG6F'Z4 틊>i,+ &c'pԼkb0M 2@Ξ=qnƵoiLfv>gQt(/-qoӪe&&ӳ7tC1 >BIRpp>fjWڜMN˕9. rVl<^[GII L6)fPcn&L %l(HffWR/qECoy[L(B pG;\1>3l)l0A1 &} Hd,I/ cphߤFM7ZRD aҹo9.޾qٿ2_i=pUbXx`=c,7kV#^Ao QcG)[#Q]}1k Ӑ7 sͤ3~N2]t;4V 6f2X(%p Aaۤ2 ~wxh```PB0`&twVaH8vp~`bĀ70DŽm)5ioFcg`tQmZKǝXoÈnt:t2G>:&N%C1eة/+cA:98mb&?ŢOaҬ6p7:zwϫغIsFZ&E?`/]uNgpX'_%S<̨H>2Pb}(U`V^oSMyA]47-;ʙ`mpGT,g=uSϼV1y׍sgnq05@ &򋁑{qz#L?͚ж~Hӵn[hh`|=w< =YYIt}q:%}r|9#p24P \ϢmkV~PBµ ӧ#TMgVx&c a;?gAryH2D%2 J0P>&Jك VGмyrW%)q~=BU._V{dDV$,C>av )#E4]?cNaͨ)1 \?(t0- N&EfS@, Aㅩ"ժ[oт =-I{mI}#:V97 y*`S"Zj_ȁz[A: ;sgRRsOs[Ԭ^CF `G/> ȂrCwn۹i>tVi Qkλj2 &}9% &€``uAUժ.F1nb$[S:u6dS)z1rlQBӯZK>]jUM߹BqMꢡP 4Cv;8X{zx 끶^^6̚5+`]arPΎ{;S=&U2]x)#'Gp6?NʑG_ `ʦJc8 |քe;Umȉ Vvq\d tތJhU"T|1u8Ugo ?'RCnǏHڎu18vd %]#KP$L(]I>ޠ֩qm>;6وcqzaSͲ9% zuޞ< eT H9ǠaY\|GJl;-LDS3#EAyߩ6M 2ϱ:L@=.2( aQvv#Cf 6x # c(cS--~uSͱ7胈P4BCAP2FA\K\70MҨbfYu3p (ܫukP _vp^&#M>yj &U~~;ˁaN )4U{ec]O xcƜ|kԀVVB0hԴ?|ʔ q$5iJxOElq*nAbu }cL vϺ$ [ #֨/>FtW.} ^sNGN~[*F_JАoM w֍!sqoG7rCH`+?`! :h1u۰+q}72 3I)xg'kmAm!oN~vFEM~#ÜfeYC@IDATP|Urv:_ZeUB͏T6}/X#+A9-0_^)0 ZAWBԻ9,`DN r( sAla0&*AC}N yO,Q tdᢢEwdTq]3v\!4j7uø30p~\;P V`b@1ĺ|70c7m0tϩ3XaK@vX.TЉ?Xqm hnV։-6rt}Hl Yks`îFzhVԨD`=؆3 ;gIy /ܴq`Mj#ԋGHRãxW/]8-6>B nRAͻhypU}髴Ou}aj}B#>\Cf0[bCͣua5飰,>82 \^xZx=c9umw9'*SB3&L F<UPb럽@!$ޛ""X'=lD=)곋 QD; @HϽ$6777&)ΙئKNOEhO_gDMuf]|.XS:~cH0zlk~'ZG͎%03q+yBP>6J#JRy5fZ;I٦DoWxG K1Xk(zOh{mțܶbfϱxpR~,ԇM M܆3W{0Ǣs`0/ 愻wEpΝb6dk0`̏Mvj^1 X/@&PgC obXa̩Q[TK5>3񡏛Ly;Ot~X…< %t :@](@fˢ þPgއh#*`dEfLye/XOxq0l:|<&q~hTL;LIW}//{{D|56lo iH<zWOLr@/ļl_}epfbJ^Fzy"hǵxʒc?eM6슗3SH/y,3fٌ NeS]F&oDQn?Nc*c gӄ]b1M[^8WۿG,nmb;6?_q1936 ׻kJk8OE`3hӴeܧKx Gj|7z˹%K2Z7 {wjN6o(-*}8=p?47l~Թ٦ {9oGϭ>멞*@RsׇQ& /bb_D/pXf:\2ʶ7crwESD'ZppFzWf0o̅@s.2y./kuqmr==Kǟ蔫{ۢe4dd1рrY@1\N\< |a}L)',yz=z,99ě[؍̩t|n薻Qɍ^$zn%`AҕP1bNr%r`v%e݈ wpOKxE=:y,d>Oz'p@IRCU:h|ozZ~,R% jTb>s>Y_r;SQUUTV\x! w#ky5:3Ķ3'&Z'dȜȐJ,7i؀^Ϗf^X,_̈f<55cM H֞y#n|1:Z7]ȍi{SDw,c:G^߉pHp ^"!xvTg`˳Kmr6uOTbc+Mz9խM}̈'@(9dkiygӢp%`G@3lyjMhWCYY=ΡٿYJss7gpK2VOTrdRtO] aɣ)/8xcYG΁53郂L8j,5*knmտLNR+0ow~ ßVqƇ ]eTd:g]CbF2ʀAaVywyfbm`n)u^ebN e'5t:2.qZ_ɣSfl4+ts>oHO76L kԵݓ\˽L{Fao3\b\o:Bܡ+͊V91*cB/C 9vtCVs_vRUAjT@yx?Sl ^vyqa`߾yi0ЗdYYk\u{{bH5m?| DP櫦*D`FTiNC5::{H{ɉRmQުfy7:􎿍Uck=f}b|7)5ͣ=W9i*/ {ggO, Ba}{0%9FYVFRˬ酡~[CHZWm[t.)^Hef|37(ME/&Jc@෵{jե#|K1\ԥBlt8)gQ n] ]˷ёDlDԫCw؅XJ;Xx38.U7W焾HK HP6Pc~iwR:y&`<{KzԦIjk6Ӂcg8(eDd˾c"mqdX䙫(`u@_3RCƬyı {]Kzhx"qiT7(q4θY] #ޞzvFFcnhPP?{$BRp;lL JJJLYJ`0hxygAHPtXy9oHRRRs6* (RTVs/" Hٜq)vMdplAяxT gR$Km"}g/Ёg~Z4v `N{4dtA)k>bu~ػ#6 ώD[M^tGNmќeCӿ[L&M}rHaasnR- {T¸+]Z-hxL~\vĝ ww7ؽ- ^H0>5 +Ow.vߜ֩EF(zcfnra};8"BPRZ =qMze~2!cWMĶQbwmնX"l1OЭ5vNHGÝܹc5twwfAHd"J@ah03Pct\KixHOHXym mS:6s7}\M|<=yhe1F'U]EUB&"W~1Ͼcf2E&pi~,ޒ9a1)۹3qH&5lvgOyeK_Y@uupٸ5k`X!.3>#MԖզHbF43?QV?7'ѱ#gћ6=vqMԥW>>/$ 4k66ml6 =aNuiHf#2.1Q]`|-t76Zy?K- \{Y 48@6C…d+WPkfVM>p`^ğgs#Ҍ̬\y|!$7f}-KrH3$k9:vx䜡eeenܠn[JS #؜v1K. 
1{ J;؛lzLa>N`s%wVLb6ܡUVD`BԔKh.O080+ a =4W-mc1OFiڲhoulf/z;NҡI1=C!$MˀrQJfahm8\)H,?*S vX-+w$mA@aՖc0Gx'2JM|d.2ƀ\av5̬udմB|8غxdE;9ubp;;B}&_ wKQ#fԶiz@ 9,J߼Ϸ|ԣUYҔwPQl9 1OW3X>l{3;TWW@ښ !9JU=.Eע+g`21{A 1c#u&E*6 xSӺ>oQ~ϘXV &pg2*g_Ó)dy^L-л>w2g|IЭod:y>6 P{}FV6I-,H=>'՞a ǃ\0.QWaym *еP[+HݱӉb{}] a&Cz*B~<>_WR@58=-S ;fns m%{R4@x;z=Iɧ|)yYTyC.5(6i:vp* Tb\Je)StUU8&.Qsa]䚯J`icv/-ͥ4a{VVM boݸ>%$sx*T  ~ 6q)Xn37tnҸZXfKc-ɹ5 'ԟ\Kxjx{[| 9ҺpHl(z ɹ¥'U*R!P$ ;`4@z?ѯ_O>-\KxsOjTl w?aE=/@z|@ ŧD} /;&q;,:z`O~VٳgP]e7gڊ#ʁ:v ig ξZY0#ؕo0y)WR1ٵG֑Ԫq`,$$#ٳn`|0Xeilp5cҋ_9v5=1UjÓ?`(=ab 6DU8>p-gnRB5nx'nan]E޴7$s<`k(.ޝZ=GT})**^!P#܊EWddDL-$5u6`pKo{7pB̭p8rhԔx 9-Ĩ6oC3<[ n[\blC?/qM^H)$b]/݆aۣUJsO_M!`(j3>dfv>z,@&<_[ii-KD^;\de_V Wѷ8,]1# G}oyﵧX]c,{do<{ֳ c\Ki*esAW#7߆7rq|`|öb?XB@!``쳵_|iىB"9Oro?x=niXd1~tRL 1kjxK (o*Q Z: pfoqW?ߘv0rl+{_u. TNH.VѽFhO@:1%1S{I)yy^d?=p7rWd4'ز|qWB%@)RT ̱XW+٫+juy %{ߑ \TædLĤZ(V׳pUʪkTkPW\C0bk0|G[Wk!&q]պ> !AWy+S]qhC1YNf#a!R`q8V=~okMu톒lT€^?E W@`;F$?zUULL XpNSusXE&1tѰaӳ 577VvHMܮ-|@X GEۮ"W( @Lb4*iUFMFL~p9.Ù l,\.: G$p-1alc9[Ew2o9(&?7ЮC~Ԁ! {uiHD@h<8THnD^0.'4X1NUePT e^`!d<بݪQoHJ6:Ti.ްFԍ: /ó{_/.nD_(?<A!b6vĞ{y e8LH[2tnLB:!:Ϟ &l:B6s!J[۩s;3jdˋcJ\0LxUV2\H8.9q@BerR\{k? 6`k0#wEؽ nd=~@z,䠣o>vB$ C{]>s!_c%r ų- ov*+ (B_ۦy%q$>Qı4f?[pz1g908Q#! ^E8KWh͎Ľ71nZap ONi@6:{g?KoÝ> 9kAMɟcYr--v|PeQ$2RUe/`X3~tMtpZ$OL 5үϾcv J6. UokwKS^ET,_2]-kU2#l&|铟bS[o^).}!b@\2_tUD`BԔ8 oG2X`z3$Ć\od+sˋq%f5@oh/JMˢXxWpC(U_Adžs*Th[EC[5&ᅿ%ٖLgXEf_p-?VGA~)BL*PS)u(_Ce~C'7χ:nL9V'v-&cSi 8NcT `308czf{yZp{yYxCиAhm|~؊-wd9];_hXߎtu-i?Rda$𮃧h Qҍ{J Dh횊AzQ.~2W};AsjLn1OtNs`-A'.t`Mv5lԑldo`7W QX]$,пD QQve-G);9@*ۉ @Ѻb)tT H8lS̿hnaQmv{sYD=4sn!̥};fs%厬ܗ]4v\eqY~B`l T;AWVo?H#u* S?_LBa)s.&/_N').005ҍ=lN9Esb5cA?EZ4f݄7u~ݢBs֐*ZxzoE#F@:AE [$dCG?7^ ~_C}b'vk `݊.IiCbhoI2pi?O$O x鉶2YrCe.}Pd ' @h;qO1`6տuΒM3˒+>  _VlؘTP!L+ƛԫ]-{عn \=p`MU̳N#|woa*8`-[x#\5W s]I6{o^IwPeC^|81/<`.eLwԃqol--&$.3YҝCٰNGegMK='++rF&}N_yq4Tx˘bJ! <pk񇱂]،}Dp`yK'ذ^k:eذ(Ơ(%u(mz9>;#]6T:>0և 5k ղ'mݸ/OxpLcpI8";ˁCRYˑAۈ@-vql&lB"^Gh[S Ǖ){oI\\Ua- $<(%ҹ4^-ѴhXD)?ldO La "ag7~_y6G;Jד/k9"Co3cш1n$8A$\1/[׶+?җtLZ*)Ϫx^UaKbx"as{;W ap% w5\U ޽08hd4 Ƈ]K! ̊+ԣuZ.qkArmKaB|lѲ5=Vx4jJ$A=|mVW\aO~'-}` +->\b|84^Eܾ7 e-Khefr"Z84{ĒT3 o'i\ !q,Ro"Hf"wQn=ߝvIm~T{ę~V.@ LP1X `v3/ߩm6Ϗ?]KE'O{۝DԨ;ع$|pvF'*M6֌|'Bgm79"2 l;*h `lF}>?}\U0X H\㣸Ʃbo⨬3E5[Ҏ9N5UWrf G[4[ۥ pk$II5dgB@!D&D7yZKb6`4j9Œx@|loŁ{ŁH׺|iq%!q͍ܫ~8g#]8>\QLd&rxc Lq(Osq>7Xrs?r ( kKY*^g)J>9.%֥[__?rWjQp9y%%Ljp{ӺO._nB \0qMnoq"a|9NOOkWÃb{Xߙ$N̕) @w7mU@/7)0gۢH!( D1"C@*-2C⼓Vx8AksEu>™0GA,v5>fD_٪<6n9Hb!1ql^y:`6A.`0%*%-ϼm1ՑḖm光 ^~`U_E ]Wկ+/P=@)ZT~Je6vrP?iګ5$Opbֳ**B%.B!LCXv \ө̆kpP0` ,6}&~ߐ^ɚyt:Ʊ}?-eԧ|$!%EpP9Q9`wcӿyⓏ}hfNNN) NslB @ZHwn\Gj` p 8B,pt\ tl 䮆3Saf+u4=ij+ f^|CXz^?__/ jφ]Ybы:(@̅duӘP&,]1g#tz"͒{i+[EDK m&pkpQ;p|Hfv%h_-~sҚ =~&| ~ \S# ։`ËJ_ydWu1d]?cf1 ?l- o/@L *. *E`=m6L@/rD. FX|#Ѱy/0 ;mw&R F$0#I)WĢR9q"vf@Ix{8[+z5AlĹQXP(^3ĴZd;,ʜz<``Nƌd66>1CW_E+ _^y*?*l|2m㢦An5ܽb¸3-eĿlݩk{xrwSG`HɸrCqvo?ʕ cw)(Xj|058> R:LK~֝hڼ_?`=,bwmG,_c!3L _;QTqC2[, 8 ݕgXPC"R}S30ƹA0!;c??!Yp6!|yXTzð1o02yTs2m"?WP̬KT+Jcq'">Of#00Ѿ]LªSu 1>ƪFZVO3k5ӓ$_/ě\~M˟y`0meOK+i߆AO$f2k:%Aja֓ߙ9v<Xڟ Β1|bdDRxGw<>/@M \" 9w Zr녓 @*R(l#őLO;htJ0sdM?;jeFr3J,Ϙ,o{vF9~`~-KeIf#:G_6Z2ZJ;b `6 l3?VTE3.BgG#|/mx( ޔ}-YZ9w`AygyOq&=j:70W%sE _rdz_%*gE  fL~m ' o^8{KVasбE$>K 9؜y>Z6 nG]Ys:ԨVbGIȁj19k"2좶}pZ-N/k4"%XJӔgc&`ia<2ĄK0<[[N@?|"U1:rXz3 fA1-t>-F=>,Xf4€)r!d>ҸP@Z(o{|%c"Vb̙_Y |yDJ*p rq9_Gro; *4ܬ)pzE @se֍Gz^%ِy?)NI};zKƂـc@6B~Y}y1݈%!Mm޿opO"zD8lkCT.Yi`6¨Pv~ggz=Ԣ uNEx̂TLH&a C,օ ʄ20a&j޷"U g+{"[,,fo~B=+gu ǂi6x}y]Dy凿A`k,ְ@g ZyeH帐gLrA*k|%S`Z9MF>rr .R2ƏOyp! 
\yX&6De;D5%}@PB`0hޠ￷Iw0zMiNP=1l' ,=˕5 u(+A21e͍fu&kj  jH,q8:rEsˁ*`k ł&D0S^"l}J`~0;2թ״*Ee}픺PEFŁg`=&cEG|*s|`=kY?{tZx.ܗ0gH&Cޓ8ymy16pԅB@!P0be#?;",$G87rEQs>>ʕxnZ" '?`60 ztR2\;(X| AtZ9$aժ4+}R&hGA(rax'At3TZ-4+\PU*mkثJ.c/o!O,,o}aePcpeQ(WE5e ΈL>p@2Yƛ'|5]+PTy' k>eF}tc0iCdRfy? <+׍hDxEH.\ǨHuF@k>ƇK?6DG>G~ ZT3l` J7dg\zozm41#jJhDǴc{޺4=:lPkB@!P(B@ӡzlM`8pp>+~y9,ag. e+FF-hٿ{_ZG100&06Bl8*SB@!P( %ـ*MA"ܙd##jW8GY*V р{[x{8^3\ٶz6D}s%!%| crT( B@!`ڨF'e`TӞMڵXîEX @/=l:\gN99:"#`Kq,K\`2u d6l0( B@!P(*T®66!ծ,敿|ﴪy l?o;E39YWRO;oI&XP8xHfx*]t&%٨0*B@!P(Jm,Bd0|'yqx#|kQR  $G0`, 7FK!vy;rPF9@S( B@!pq6۩\#d =~-_੒)#E.TB␪rG)030$HTRũ B@!P$XQ`. /_ 9-vd0\TQ y!Q[!%p*WB@!P(X5Xbjl!f6IfgT̄l@J~FwyU9+ B@!puz lBw*62nqQ̆58F&iJd4&D0S^R6+FES* B@!`)w5է$6]я `GZ`ɡ<נRPL"lP( B"0!j@}Fl\TRX3OqƳ;OpH&eJP( B@!L,i yɉ|}n&(̤i6UEׯ0K[,'-_{*W@1  B@!P(CQ S߾^_D"xY-KT -ɩLGs6;+äsx RWQv B@!P(j  A˻am55_o/S)rm#E{-=VXǦOnU('p5B@!P(0^F?>~o6;'a3cH=7š:y,Tﴮbԫ_3n4/G9Pr^Q( B@!P\z?ш֦nucF7OU_{'=˷;c̤ B*z]!P( Bx4jJ:+fq㇆4 GֆV`Ք]o=pLO4U)Qĕd B@!P(W̗8lPlY^1#*FZ5AH O=ǀyeSCjB@!P( +␿ gh BߐG?;d]L`"f"f6G4H߳i&n4wӹa] B@!P(!^@͇_:opp"el;p?V>ndeP369fvBO\;F [1Q'[<Ȩ8EPP( B5e 4#frCհ{`6==#|tPdWN.\,\Vl9@W3 ҃&ie1u ukۤH&gVx<˷\Mxrrs)^"뼵hסS\ eM^G]4\; #Wn =-=ۃᙪ0iz/P( B@!PZT~%BJ6GHP-?3(5;i=4nԱe$\ɠl^H1Mױ]Z7/@A!ln{6=BH}ЩsIy+Xfؠ.򥌹PT7+B{"9;&Ojp1tMo8H˞GeMb'Ln?6B@!P(QJ~8u#v<&z󱻄Tt\?75 yL/H'$r şH[XEӃ6:L EZ\TO8O!AԶi dH'w 7i̤vƜMJq.v/ G֡+fAzޛ\7qfE eO4[^OHX곟7`YӰ~0My#khC4?g'ǃҜCjgItMǸ)r-[Iӧp|Y%V~#.8]+v* B@!P*T /h&xѝKm^}HJHdvS7eo,QePRfhz:gB "l,c UzQ-)n-آ!xQVhm}Ix]^ױeCٯI,o3^((Ow#3EfK^Y94zxjQ>N2JMˤwо4NT?P֍i[SHi m1Q4_kxt? 9>^ đ0'Fj-bvi'ruV8MVےn廌S8 B@!`Q);RYe [tǛR 1rPãiBaEZF=}wR؜v`. o.RYP[n}a?kM,s`W h$0#"ߋ;qCCUB3$}"ÂEqY6LHR z7vP챳&$9"o/.#K^;Qs4Q2rL8_`6,dž{M)hTLoߌӏm>?̚~5mB@!P(C@J1 Lt\ 9a; D`Aݠ$Pe;h)Y6$Kf $9>12ŭFa%2s& wҟkw1ipE^I54Իahmh'!PْP<-\]Ljak6 [#''?{3_e+̩Qٛr[)fjB@!P(ʈrA)e̢A~b~ߑxJy}R* gǁ' Ò&ϲ$BJcz6Yi`؍J#|PoA tP9t՞9$yz\P#azdRڧ#}r:vȋjtqQP¬Q ~Tb%N 33c&;K1+?XU"B@!P(j !8l %,6ZRKp#5 'TdI+c"  /OwZRem0CNNV"wHO!H8vĞa}; cjxc/VHpnf".N0Dw {\$ ,oZy V$i;`nBu${ԶEGΣle-04>ވ/G9翭4c4%?6B@!P(5 QS3b&/N-qy#W*Cz'.O>%% ~#ҵs*`c:&KB@!P(.EעppUծ±8+]=^1*R( B#鮺\ӑWB%"Q8|Uϲ!w4>%zwLx]Hc8f }p۽mGe8g\ B@!P(Bi?ulM ~R'TF֍ g4 Aqfح { :Dn&q E_0t`J*Aۊp*B@!P(GiSа>~P0K6$MJ9p3IT;nwu߭C ,iQЉ#h[A6:̒^Ԧi}5`u)/;hHw &]?.Do>vx_hp.F][7oRQW3m-E ~Ask"FC60#`~^N8 Eq|8(y+G",lKz÷3riv~^vr@!qfwBؽ- VF7.iQƨ B@!P(g/<`&x2=ȡPS^TC rPjZ&aő,XK<Ȍܪ/5 Kp4U|{q>;'Rph6΄ocuft{.bFAvnAF P!nzڦ`*ăթsWо#GvE[A xypmPIڸ3ZuE@?+'&FpKSFԍ7 ~f,ш~i_[%v]R *Yt r~]93:jkULP( @3̜ˤȒZBBa]St#c?Pjըh;IYETt!9m'R0ujQ9Գ?Ew4Ԣ:fTdulgq':4QT@GЎ4O;w-h!"pjӤ=Č9vn/̬\y| \c=)b*%D ѦGiBîXç%48pԯ[#s4;IH1dt:w5CM3qH i\55%F,usljB@!P8̬K)ief6)=3fUYjh0nZ.򮽤H%TL@]ȚRa#?_VJtw| $\\ɠiE[8e?`RnsW էmް^{kB3suyI+8(@H-~IfV"#NxgDq%]dg牨 rr΅b6kѨ,i=?yh46cQʥ승FRT(8bevrF% $<`vuBw훲={j, TJKmO&t`x鉶v_'fFw>TJ"؞\dcTIzz5v:% x&w޳Vr3Zـ΁]_+o7=;#r9V9La6鳧Q~2e$t[ ?R( QSƙuI~Ќ7qOfOL8.*S2ihފ,bHW,O݃UMs{aW7w&Olx ώ|/===<,[Qoe-_uxI5(n4yxErYS^ @:Ob5Hp٣kn螊003'Ҡ ]&w(͞}$FRdXxJAjɆ{/8$_Nlt]\\t;3Bnv!"(44Z6u; }kI0ȶBucKlBSHgM摡,E0&Oi ͒ԑR58. 
[eʠPD[-y bNy)̔|њL4G<|S#b3yk#D0:Y{{bR>YhBf=gޙ /n M/A:xjT?OLz!穮=n\ڶf94X0dskwUߟ=%=@l`׆^U+ PԜxkܿ(`iқBIB %=d=''1Ng>|g̨̳q,JZ+8op8#hRA;76Z#([|0&¶y潭 ?q^~@L++I3&}Kit!D򓸄kpkR0p''EX|oߋ8 m]H[HxBzk,?שHfiĜ7m+4[LP^9_o/',Vi<;)OdB Y\N3YqXM}gۣ_jG'q#4)1 e_nLh.zq}8YJN:?ܦw@X+V<++5fH"nWF{hx_Ri64KъG:KwJ{vS&hS)B1ş!/Mi8S{ޥ.IH3Rz '.& qN%i'(lmVϡ tG POUQZbT-ٱ eѵSu߆<_%"t%$A ҈RSd] L=б8]kbi"`L@3ҀalTʈ:e2Evd|hB+2U5$(h˓p`4*:2s4CN^U5ap?']0#.<—ƗYu:/ߏ$lhN1^9g͆W0֏J,p0zGW f9诗2Y-W$Jm{'H@Ț3poU5 ?~H*[!KBqiIR m~ Gʖ5<Q+ M %/8֋y?(WJ9P^Jj$h6$i奀QxG1o+: ԡ 0&@% 肆.lfz嗦[o *T9[!@㛚._avv)taz7j%&dR?WZKQ˲f?d#n)܎V͒<N(숄7m Vyٵty졞2MPFo}ZGa/,ӫu中kE`"f^`h!ަǽ \m#{$'ןs!-\ދ -&=ڣW^Yc:!| 6Bs& 3P&d\}ݪa>Pw侊qHAƖ}IRn׭ =.;AV$x#IvW` &K l1*?'zf9gn]"Xb4&3Ԩ"8Lû<& YJA??xqf>"6¯#?Hc. t*K¡Fd~JuvU>V oEL7/pfHy*XlS[pmi:%O,IkFϫLZF%sbVVgdL 0B&44$b]_dvuTGLHqd4Ɯƞz(+h8mj\WP $Va ~(T<_?a" Rcm`/3ĉ 4!^gp҈T%lM pΫ4IQ8M榞 Ύ;{} (jG}F-B6>,O;%Gxk`UJVk x +ͱ6W5 yX>goI:2'Li7Ә`L5A4وt[w+6,i"1ԍb?ΡqqPD.h HR5 44Mp"ePZ/""@FQ9$}Sܜ 0&@"OF髶.@i%IAZ +"^ 92( $H8Z 6ȌN*hw:؉' 8n[2%̸Y{ Ggko(*uL`L 0F&F}8M> j54_4>4v$XfC$|Z ]PAkPbT uS+6&bY\Vj]#v6Y_3&`ABRQua&F͆Q 9Ѕ:fCj5$ti)^AJIa;>`aRCY~&$k`L 0&P_`t[q_6.d'$pa4of]h2(Oł†?UzM&sTSTuL26~s_`L 4hA_e F'SPyh]AG0xI[FkYrK*ދ'ܔh" C[H{ӝhmi3 E9+`L 0&аQ&F.訟a>6g7HPY(iQTP\l&8r`L 0&@EhE†&p}zt*xJR{,VtU&i,|bRP#c^Qu^o5|k8 && /Iz51L 0&@}!P|H{E(ă&hqrpAA\IUrM)S֯{@LEդ>/lP/(6vcه!|4x9zI&: ;L 0&+jԈ;:u.m~:IWui#x]S=G݇%ɔp8T.i/8!a/Bd(ZV}0q1Rq`L 0&̛ޢ.c.0NsA4NxclڶE7oE8tzFgtN'Yq\ꌰAiHyT+ب*H6iHoI4=iDG{&ǻ\_uYCb}eL 0&i)ˈq1V>u@JBS|`e_(αMlxuJ0yt̔뜊Nu 5o6 li#DG!: 0&W$ c/:,Io!hH SzJ=#C-˽,BW//T=~kL 0&@Dի_޷uDXlfZ$4>TZk\r>Y+pw:؉kW =.`L 0Gn}{$ 4@V4y=p]߁re!&KsJU)oӘ`L 0&|([ZJs?| Q;{ UhiIذ"һYT k'!\`L 0& El=Lh0GNzI7V/mQ4.QO_$`L 0&Ѐ H*VDt %Hh6ڰѨ@>(r;&MrO֮ bSy;|Rqn܈е&pX5Ґ`L 0:L2C 0Y,CUl& )Vl'sl]ڵOuhHCyO~P=;j{&yqaTozBpȲhfZ;Ǘ}bǡh_h&xBv:žS.m˔ Ac(viлPHWH]=`L 0&@"@B4alh8b]ۮ}ybޒM_"HY+^ۣ ԆWol< veh$<}A,g/7$tYiײ𭢢~jwu.ڻo<ݖurL 0&`G}D2D<(d=G=-55=cK^|t  G2;v? peޅx[Ś>L53&`L \vz%E]Q;edBHPv4 9%I$h<wۏcaVrL 0&)4``j͢T/l$XjH s%=;bzCS蜾7f᨞cth"ڷjf#4_JI.H@xp4Ʋi3`ЃquC! 2#ݚ5YŰ.C  Otլ#^} =`L 0&@'@I}Bk2_IŒ !PdiQ߯̔Z 9b-JKa K:̠BS@S/#)Z0,b1ϜĒs8$ rχa Y|+N %?IDATC/GϹ{dDZ Zj ԗX7=+X7i02a*仲d.m|}rMwVK~TୋR3:{QR({\1S$TThlcL 0&>Lv>A!}npWf7a$o0Cu^M=5V\'=&h\Ua*%ģwޠi6R.m՜{-V| !✶9ZlDv@[jw CxVkCVm+ZI[L$(t2T-t_7jsC߶C+!ĠzhQ?h'̭]ΐph.x5Q6 Vn/~}?õ6O7K{NM~`w3;6 yWG>N0j|𧻓jWѱpA&`L 0_!0;2'.fyqs "G3ݾ=^ړpY{!d宱0>ЖEa OiK麏Fi^@a)P: 0yX*+Sy T(o.&cIKBW˲S+KaK,;a=p^4. ߭$P<}@=[/Jym+X͓?_$v?S*hY-g#dcH[J\ culTcL 0&PTV4]po%l"_yhE(2kڴhMqNy+|`*@BAy M FXFRtN}}8HKCKCE4 ,lԅQ62&`L*C*MN^ӂ ZM4|K~^ ,͇![}jIX(A'L 0&`9۞r6֟}MņJJL 0&`L \ٸ$!L 0&`LPhm6FLΩ8ǭ+15 #!0u**eb,upBӉz?`L 0_!e(.6_c,q;˪Ozi rnټT+GjpWփ.U^x[~^c6"c& TYaEHEt۾&TNYں?~ #T:92:*̍S 0&j*Whffq\ m&ر:kqVmmMÂU>.ߪԵtbSإǷ˶hy͒вDc8{+Դj媧 0B2nү`fOx5.ڎg4sާʽBRQ`L ,0r7.˺jsIN٫s_)*zu.][sW=GR|nUAB.[S*hPkڇӗ9a#!vP/uޱg~V/yL 0&(CvnݸG7PЮ_>]wkLoƫ:kxf~{n֮i;;7_Z'4vlGPzv (oo6q7$qq3vo%wвs⋅ymwIt.;vr=l}N~?NʾR,$w)ӇEv<ѾE3]Ճ`y}+iP=)yn^7׉a4&`L :ЪDxeV̧#w&_.\/BfNʥFx^F !ն=ڛ6|ۮ]ڵ'XxnwnŒ m?V1;E0բBYd6B cp>I>=̵F{ AyqGLf'>-lD[#>p}x^ԩq8ևp`L 032s]D~erkĵD+U*xou٫XggL 0&9Z SzSw8Hp{^T<3Llh*hmNC[ dv'0H@GS>4-e'L DMK+yF&Wjq]+׋7WF> \V"M!@Ž"6҄mhjZ'.9^̆TSz$i'5 *s*QRKŒJ%?zwSk_T] QQҰK2ׄS]xJ^34kDL 0&El=$lhZ9&lľcT\jLH66ًpMh+(RY!Bi*^xXT;'EnhD69OxL(}f.nrx7~:~ JhѬH0ާhu>,*[^HHQd˦ ^*JG,"@\2^TssҲ bջvsS?&N>/hYrئUץi/kw2=ھ} s0cu^QP(Lh9ck.|)Wd)lX3}Fi?Ȕ5;Q?L60  2")* {]>dw߻=3=+G,ݸG\i+L 0&@'O&P^Hzx~EE4 _E A5N['MڪS@qBͻYKY^į~HL|mȔi@c,L:≻o9V&#׉ӊTQeX!s$YD\>%? 
wkwVcE5HFR#P w5{.Zv$=维Ne=P Xf˒|P`W;P ^`U0&4!so 5EuR1#*SOrH3 nt) ;S*UR`!b܈{4|U5 #rM~"g#h w=\Y%`l`afQXV$ )PrEm#!C47O>!7v{h|S΍kgCFл@;( z7>qmL 0&h~_b)E7$HP}p&''g4i\͗<}.Cݥ3>:N4Zݲ/I:v`׫H^V0AB ^jNe#&`L 0O :,I&7i C'M$rP Ɯo+I|컯@HРHSOFlЮq˪IX({\knU6s`L 0R 44!M[t۰;)BMRSm>E'hy]Ǖ[̬WnϿ$dQATdFU͛g'{޿nR_>e;ې *1Y*Z6dv iy3R'^l,L 0& FE3VPco;tӝwv:AX}MKV+ *|@áҎa- :~`^M0 .1uסBҜ)ok]6rY" UQo=TCxzaYlcSH~$č; %!mK*[GvgH˕֡ `L *tTootC: #ݓrK]u?(lG:$ ;傂 ذq[$T_LZ :'ҀEx;FX hi>Hq' #IoA#DʽB-iĸI({BV'LAxQ2I'k>M&nV3&7|GF9oF/6V*p]k*4ߧ@DEB^|1H 4$TS$`FCӠ{ՠ.adtЌ܀@Fܨ W_|#.(#̒yI kviOXh")̘9!ѶieQxmk9 0&@k>S:۟hrjtk]'G"AE  @!i#7HG>hu. IȩSl3mX,l`}+.9ձ32M(%bHO#aos!gE=YPi$OͤB1q"H$8^:H[܇0ک磼X`S^ig%KRMbXt_$E i8US3>2e-xWЪ|D+FLFQӑ&&URύy>oZv{H/8;WRD&WTfhJqd8 E~ۡ Nb]z9UC`dVh|v7S /g6wdNPUL16n#JAc}>wd5/de܀{Ct}G)7 cEwAI&*H^f6>cQot:662xrݶ90&'3y(:RLH+ K&4  ƎI`4u=R:3(Y'3[[5v, (5$r`%_e% M!!Pdiz:ů~"T5~TLc1MK&[[t߭Yѹ&A#| *POt-~G+"Ob[H~N3QcK􋖂gj1wy4ɓ<G٧\Te "d$Lߥ2tI-u~yFB+g/"G,NqI24YA"oIRMhS3U2F H1'}SHR૚ uZTc!U:Gz}pa2(/$L'1l ķsc?.N s^PپO?/߅8/пn1h{ϊckko~@2Q/72&}I( 5q' Oi=:e58a!C+'i%N΍& }/b(!wz!%Ba {.R@TO$4bΛZ-v&e/͜//?d+dJI8y(xSuYqXM}gۣA`1$H-e0b`/Or a`]x?b32(/[ G"jE?`L +([l}"/jKr;~ӗp' k糢#@HƐ"i.h,NGa}| '7$R|R؀8+$ۢ_υS>W{ͪLoP1H{̘_ݗk{t@Q.yI4at̔렝q`_=Hg4 -uO-\˽Yq1IcՆ>ȣ h)]РrV@@M]׃&%99d (?' e#U7U賦p8?kC zV>{8O=ji# 2BDit[dFA2U4(S*R(]s`L 0C>@ELk5[jR$&pAGq]7"pе>t^J.>#HIAQc>0VQ=}*uÔ}E2QiPT|KHB09HRTZ@"&lonI'ІUyN,j:琝_PLI7ТQ5%LўBe Osi I$8HSBi#{IAT>(\ d?~0^:/9U@C\`u@b줍-Om/diiRJڌ tIz:q4kSF OĿ퐒'ߙwOyGS2/t)Nj* C}(ƐUDi k$ULHK8(v</p(ڗw= ^۷~?ONDr. 3bn_ѪY1shT^:cO!yq=Ԙ0wif߈HاQ-MV5;}0T%+/F'05㲿վbHG$ߏ)Z8CL 0:@B0`5 y{IӦqv8C7vL&Ѫrja\?^9|!&IViXqr _ck9\y!mXdrd̔ژ;H 8(e+98&h:ԉX2\Ӥũi"߇v/v -8aBrV5[ R݁%xGL.E<]@NS9)bJ,>"6!?Hc. t*K¡{UTl`2bͺƥFCxgLtNkp[af&Rs y,]^uC0M\mBwQUӡWugP?vI=x$n$k Ac/hcG'Ch*9_(gN9y %%G>#XXȺϑsN¨i&-˰D܏Z6P"m(LNP^r6zq`%jߖǪjоݦ0kyCp>3iqMU5 }0K|#r:`L 0&|&.)LACsŪv91 zX-J2FOf4*/ߘ_Fy+N&3-/-KEy}FӽR̈́M *+UY=2*=>/2>{:_3&`L%VYIO`:r9`ZTnڻ*5M ӳ{`L 4T5z\[` Cx^[ϬR0}?L 0&`>C' LϪB|f.EC$Af9gl0y\›Ϫuaӿ&I`L 4,iF7 N\6kX&nphN&{V 0&`L TL'5fXՋL9rw+C ϙ7xby*;W~`L 0&<Ia֞6csO*RجgCK%9;`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&P=>{WF,/^ܮǐ̒i؉K&`L 0&pI{MW1Vv\5;>_!PP:3&`L 0&pi !e8Ŀxw!Blvĉ ~C&On&)P!O$Nxi_'~`L 0&(%Pg jI@#e> 7s Z{gwy[-eG0DOӏ*5>YzZ]3B0}{/\L 0&`Lr\Vv3(>pKώޫn\h# XŴb=zw=w 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&`L 0&~GIENDB`././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/install/figures/network2-services.svg0000664000175000017500000014221000000000000024062 0ustar00zuulzuul00000000000000 Produced by OmniGraffle 6.5.2 2016-04-26 14:55:33 +0000Canvas 1Layer 1 Controller NodeSQL DatabaseServiceBlock Storage Nodes Object Storage NodesNetworking Option 2: Self-Service NetworksService LayoutCore componentOptional componentMessage QueueIdentityImage ServiceComputeManagementNetworkingManagementBlock StorageManagementNetwork Time ServiceOrchestrationDatabaseManagementObject StorageProxy ServiceNetworkingL3 AgentNetworkingDHCP Agent Compute NodesKVM HypervisorComputeNetworkingLinux Bridge AgentTelemetryAgentTelemetryAgent(s)NetworkingML2 Plug-inObject StorageAccount ServiceObject StorageContainer ServiceObject StorageObject ServiceBlock StorageVolume ServiceShared File SystemServiceiSCSI TargetServiceNetworkingMetadata AgentNetworkingLinux Bridge AgentLinux NetworkUtilitiesLinux NetworkUtilitiesShared File SystemManagementTelemetryAgentNoSQL DatabaseServiceTelemetryManagement ././@PaxHeader0000000000000000000000000000002600000000000011453 
xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/install/get-started-compute.rst0000664000175000017500000000605000000000000022731 0ustar00zuulzuul00000000000000======================== Compute service overview ======================== .. todo:: Update a lot of the links in here. Use OpenStack Compute to host and manage cloud computing systems. OpenStack Compute is a major part of an Infrastructure-as-a-Service (IaaS) system. The main modules are implemented in Python. OpenStack Compute interacts with OpenStack Identity for authentication, OpenStack Placement for resource inventory tracking and selection, OpenStack Image service for disk and server images, and OpenStack Dashboard for the user and administrative interface. Image access is limited by projects, and by users; quotas are limited per project (the number of instances, for example). OpenStack Compute can scale horizontally on standard hardware, and download images to launch instances. OpenStack Compute consists of the following areas and their components: ``nova-api`` service Accepts and responds to end user compute API calls. The service supports the OpenStack Compute API. It enforces some policies and initiates most orchestration activities, such as running an instance. ``nova-api-metadata`` service Accepts metadata requests from instances. For more information, refer to :doc:`/admin/metadata-service`. ``nova-compute`` service A worker daemon that creates and terminates virtual machine instances through hypervisor APIs. For example: - XenAPI for XenServer/XCP - libvirt for KVM or QEMU - VMwareAPI for VMware Processing is fairly complex. Basically, the daemon accepts actions from the queue and performs a series of system commands such as launching a KVM instance and updating its state in the database. ``nova-scheduler`` service Takes a virtual machine instance request from the queue and determines on which compute server host it runs. ``nova-conductor`` module Mediates interactions between the ``nova-compute`` service and the database. It eliminates direct accesses to the cloud database made by the ``nova-compute`` service. The ``nova-conductor`` module scales horizontally. However, do not deploy it on nodes where the ``nova-compute`` service runs. For more information, see the ``conductor`` section in the :doc:`/configuration/config`. ``nova-novncproxy`` daemon Provides a proxy for accessing running instances through a VNC connection. Supports browser-based novnc clients. ``nova-spicehtml5proxy`` daemon Provides a proxy for accessing running instances through a SPICE connection. Supports browser-based HTML5 client. The queue A central hub for passing messages between daemons. Usually implemented with `RabbitMQ `__ but :oslo.messaging-doc:`other options are available `. SQL database Stores most build-time and run-time states for a cloud infrastructure, including: - Available instance types - Instances in use - Available networks - Projects Theoretically, OpenStack Compute can support any database that SQLAlchemy supports. Common databases are SQLite3 for test and development work, MySQL, MariaDB, and PostgreSQL. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/install/index.rst0000664000175000017500000000022500000000000020141 0ustar00zuulzuul00000000000000=============== Compute service =============== .. 
toctree:: overview get-started-compute controller-install compute-install verify ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/install/overview.rst0000664000175000017500000001531700000000000020710 0ustar00zuulzuul00000000000000======== Overview ======== The OpenStack project is an open source cloud computing platform that supports all types of cloud environments. The project aims for simple implementation, massive scalability, and a rich set of features. Cloud computing experts from around the world contribute to the project. OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a variety of complementary services. Each service offers an Application Programming Interface (API) that facilitates this integration. This guide covers step-by-step deployment of the major OpenStack services using a functional example architecture suitable for new users of OpenStack with sufficient Linux experience. This guide is not intended to be used for production system installations, but to create a minimum proof-of-concept for the purpose of learning about OpenStack. After becoming familiar with basic installation, configuration, operation, and troubleshooting of these OpenStack services, you should consider the following steps toward deployment using a production architecture: * Determine and implement the necessary core and optional services to meet performance and redundancy requirements. * Increase security using methods such as firewalls, encryption, and service policies. * Implement a deployment tool such as Ansible, Chef, Puppet, or Salt to automate deployment and management of the production environment. .. _overview-example-architectures: Example architecture ~~~~~~~~~~~~~~~~~~~~ The example architecture requires at least two nodes (hosts) to launch a basic virtual machine (VM) or instance. Optional services such as Block Storage and Object Storage require additional nodes. .. important:: The example architecture used in this guide is a minimum configuration, and is not intended for production system installations. It is designed to provide a minimum proof-of-concept for the purpose of learning about OpenStack. For information on creating architectures for specific use cases, or how to determine which architecture is required, see the `Architecture Design Guide `_. This example architecture differs from a minimal production architecture as follows: * Networking agents reside on the controller node instead of one or more dedicated network nodes. * Overlay (tunnel) traffic for self-service networks traverses the management network instead of a dedicated network. For more information on production architectures, see the `Architecture Design Guide `_, `OpenStack Operations Guide `_, and `OpenStack Networking Guide `_. .. _figure-hwreqs: .. figure:: figures/hwreqs.png :alt: Hardware requirements **Hardware requirements** Controller ---------- The controller node runs the Identity service, Image service, management portions of Compute, management portion of Networking, various Networking agents, and the Dashboard. It also includes supporting services such as an SQL database, message queue, and Network Time Protocol (NTP). Optionally, the controller node runs portions of the Block Storage, Object Storage, Orchestration, and Telemetry services. The controller node requires a minimum of two network interfaces. 
Compute ------- The compute node runs the hypervisor portion of Compute that operates instances. By default, Compute uses the kernel-based VM (KVM) hypervisor. The compute node also runs a Networking service agent that connects instances to virtual networks and provides firewalling services to instances via security groups. You can deploy more than one compute node. Each node requires a minimum of two network interfaces. Block Storage ------------- The optional Block Storage node contains the disks that the Block Storage and Shared File System services provision for instances. For simplicity, service traffic between compute nodes and this node uses the management network. Production environments should implement a separate storage network to increase performance and security. You can deploy more than one block storage node. Each node requires a minimum of one network interface. Object Storage -------------- The optional Object Storage node contain the disks that the Object Storage service uses for storing accounts, containers, and objects. For simplicity, service traffic between compute nodes and this node uses the management network. Production environments should implement a separate storage network to increase performance and security. This service requires two nodes. Each node requires a minimum of one network interface. You can deploy more than two object storage nodes. Networking ~~~~~~~~~~ Choose one of the following virtual networking options. .. _network1: Networking Option 1: Provider networks -------------------------------------- The provider networks option deploys the OpenStack Networking service in the simplest way possible with primarily layer-2 (bridging/switching) services and VLAN segmentation of networks. Essentially, it bridges virtual networks to physical networks and relies on physical network infrastructure for layer-3 (routing) services. Additionally, a DHCP` with ``boot_index=0`` and ``destination_type=volume``. The root volume can already exist when the server is created or be created by the compute service as part of the server creation. Note that a server can have volumes attached and not be boot-from-volume. A boot from volume server has an empty ("") ``image`` parameter in ``GET /servers/{server_id}`` responses. Cross-Cell Resize A resize (or cold migrate) operation where the source and destination compute hosts are mapped to different cells. By default, resize and cold migrate operations occur within the same cell. For more information, refer to :doc:`/admin/configuration/cross-cell-resize`. Host Aggregate Host aggregates can be regarded as a mechanism to further partition an :term:`Availability Zone`; while availability zones are visible to users, host aggregates are only visible to administrators. Host aggregates provide a mechanism to allow administrators to assign key-value pairs to groups of machines. Each node can have multiple aggregates, each aggregate can have multiple key-value pairs, and the same key-value pair can be assigned to multiple aggregates. For more information, refer to :doc:`/admin/aggregates`. Same-Cell Resize A resize (or cold migrate) operation where the source and destination compute hosts are mapped to the same cell. Also commonly referred to as "standard resize" or simply "resize". By default, resize and cold migrate operations occur within the same cell. For more information, refer to :doc:`/contributor/resize-and-cold-migrate`. 
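To make the boot-from-volume parameters mentioned above concrete, the following is a minimal, hypothetical sketch of a ``POST /servers`` request body expressed as a Python dict; the flavor, network and image identifiers are placeholders, and it is the ``boot_index=0`` / ``destination_type=volume`` entry in ``block_device_mapping_v2`` that makes the server boot-from-volume.

.. code-block:: python

    # Hypothetical request body for POST /servers; all UUIDs are placeholders.
    server_request = {
        "server": {
            "name": "bfv-example",
            "flavorRef": "FLAVOR_UUID",
            "networks": [{"uuid": "NETWORK_UUID"}],
            "block_device_mapping_v2": [
                {
                    "boot_index": 0,
                    "source_type": "image",        # create the volume from an image
                    "destination_type": "volume",  # ...and boot the server from it
                    "uuid": "IMAGE_UUID",
                    "volume_size": 10,
                    "delete_on_termination": False,
                }
            ],
        }
    }

Because such a server boots from the volume rather than from a local image copy, its ``image`` field is reported as the empty string in ``GET /servers/{server_id}``, as described above.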
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/reference/gmr.rst0000664000175000017500000000560100000000000020112 0ustar00zuulzuul00000000000000.. Copyright (c) 2014 OpenStack Foundation Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Guru Meditation Reports ======================= Nova contains a mechanism whereby developers and system administrators can generate a report about the state of a running Nova executable. This report is called a *Guru Meditation Report* (*GMR* for short). Generating a GMR ---------------- A *GMR* can be generated by sending the *USR2* signal to any Nova process with support (see below). The *GMR* will then be outputted to standard error for that particular process. For example, suppose that ``nova-api`` has process id ``8675``, and was run with ``2>/var/log/nova/nova-api-err.log``. Then, ``kill -USR2 8675`` will trigger the Guru Meditation report to be printed to ``/var/log/nova/nova-api-err.log``. Structure of a GMR ------------------ The *GMR* is designed to be extensible; any particular executable may add its own sections. However, the base *GMR* consists of several sections: Package Shows information about the package to which this process belongs, including version information Threads Shows stack traces and thread ids for each of the threads within this process Green Threads Shows stack traces for each of the green threads within this process (green threads don't have thread ids) Configuration Lists all the configuration options currently accessible via the CONF object for the current process Adding Support for GMRs to New Executables ------------------------------------------ Adding support for a *GMR* to a given executable is fairly easy. First import the module, as well as the Nova version module: .. code-block:: python from oslo_reports import guru_meditation_report as gmr from nova import version Then, register any additional sections (optional): .. code-block:: python TextGuruMeditation.register_section('Some Special Section', some_section_generator) Finally (under main), before running the "main loop" of the executable (usually ``service.server(server)`` or something similar), register the *GMR* hook: .. code-block:: python TextGuruMeditation.setup_autorun(version) Extending the GMR ----------------- As mentioned above, additional sections can be added to the GMR for a particular executable. For more information, see the inline documentation under :mod:`oslo.reports` ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/reference/i18n.rst0000664000175000017500000000316400000000000020106 0ustar00zuulzuul00000000000000Internationalization ==================== Nova uses the :oslo.i18n-doc:`oslo.i18n library <>` to support internationalization. 
The oslo.i18n library is built on top of `gettext `_ and provides functions that are used to enable user-facing strings such as log messages to appear in the appropriate language in different locales. Nova exposes the oslo.i18n library support via the ``nova/i18n.py`` integration module. This module provides the functions needed to wrap translatable strings. It provides the ``_()`` wrapper for general user-facing messages (such as ones that end up in command line responses, or responses over the network). Once upon a time there was an effort to translate log messages in OpenStack projects. But starting with the Ocata release these are no longer being supported. Log messages **should not** be translated. Any use of ``_LI()``, ``_LW()``, ``_LE()``, ``_LC()`` is vestigial and will be removed over time. No new uses of these should be added. You should use the basic wrapper ``_()`` for strings which are not log messages that are expected to get to an end user:: raise nova.SomeException(_('Invalid service catalogue')) Do not use ``locals()`` for formatting messages because: 1. It is not as clear as using explicit dicts. 2. It could produce hidden errors during refactoring. 3. Changing the name of a variable causes a change in the message. 4. It creates a lot of otherwise unused variables. If you do not follow the project conventions, your code may cause hacking checks to fail. The ``_()`` function can be imported with :: from nova.i18n import _ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/reference/index.rst0000664000175000017500000000721200000000000020434 0ustar00zuulzuul00000000000000================================ Technical Reference Deep Dives ================================ The nova project is large, and there are lots of complicated parts in it where it helps to have an overview to understand how the internals of a particular part work. .. _reference-internals: Internals ========= The following is a dive into some of the internals in nova. * :doc:`/reference/rpc`: How nova uses AMQP as an RPC transport * :doc:`/reference/scheduling`: The workflow through the scheduling process * :doc:`/reference/scheduler-hints-vs-flavor-extra-specs`: The similarities and differences between flavor extra specs and scheduler hints. * :doc:`/reference/live-migration`: The live migration flow * :doc:`/reference/services`: Module descriptions for some of the key modules used in starting / running services * :doc:`/reference/vm-states`: Cheat sheet for understanding the life cycle of compute instances * :doc:`/reference/threading`: The concurrency model used in nova, which is based on eventlet, and may not be familiar to everyone. * :doc:`/reference/notifications`: How the notifications subsystem works in nova, and considerations when adding notifications. * :doc:`/reference/update-provider-tree`: A detailed explanation of the ``ComputeDriver.update_provider_tree`` method. * :doc:`/reference/upgrade-checks`: A guide to writing automated upgrade checks. * :doc:`/reference/conductor` .. todo:: Need something about versioned objects and how they fit in with conductor as an object backporter during upgrades. * :doc:`/reference/isolate-aggregates`: Describes how the placement filter works in nova to isolate groups of hosts. .. # NOTE(amotoki): toctree needs to be placed at the end of the section to # keep the document structure in the PDF doc. ..
toctree:: :hidden: rpc scheduling scheduler-hints-vs-flavor-extra-specs live-migration services vm-states threading notifications update-provider-tree upgrade-checks conductor isolate-aggregates api-microversion-history Debugging ========= * :doc:`/reference/gmr`: Inspired by Amiga, a way to trigger a very comprehensive dump of a running service for deep debugging. .. # NOTE(amotoki): toctree needs to be placed at the end of the secion to # keep the document structure in the PDF doc. .. toctree:: :hidden: gmr Forward Looking Plans ===================== The following section includes documents that describe the overall plan behind groups of nova-specs. Most of these cover items relating to the evolution of various parts of nova's architecture. Once the work is complete, these documents will move into the "Internals" section. If you want to get involved in shaping the future of nova's architecture, these are a great place to start reading up on the current plans. * :doc:`/user/cells`: How cells v2 is evolving * :doc:`/reference/policy-enforcement`: How we want policy checks on API actions to work in the future * :doc:`/reference/stable-api`: What stable api means to nova * :doc:`/reference/scheduler-evolution`: Motivation behind the scheduler / placement evolution .. # NOTE(amotoki): toctree needs to be placed at the end of the secion to # keep the document structure in the PDF doc. .. toctree:: :hidden: /user/cells policy-enforcement stable-api scheduler-evolution Additional Information ====================== * :doc:`/reference/glossary`: A quick reference guide to some of the terms you might encounter working on or using nova. .. # NOTE(amotoki): toctree needs to be placed at the end of the secion to # keep the document structure in the PDF doc. .. toctree:: :hidden: glossary ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/reference/isolate-aggregates.rst0000664000175000017500000001013200000000000023067 0ustar00zuulzuul00000000000000.. Copyright 2019 NTT DATA Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Filtering hosts by isolating aggregates ======================================= Background ----------- I want to set up an aggregate ``ABC`` with hosts that allow you to run only certain licensed images. I could tag the aggregate with metadata such as ````. Then if I boot an instance with an image containing the property ````, it will land on one of the hosts in aggregate ``ABC``. But if the user creates a new image which does not include ```` metadata, an instance booted with that image could still land on a host in aggregate ``ABC`` as reported in launchpad bug `1677217`_. The :ref:`AggregateImagePropertiesIsolation` scheduler filter passes even though the aggregate metadata ```` is not present in the image properties. .. _1677217: https://bugs.launchpad.net/nova/+bug/1677217 Solution -------- The above problem is addressed by blueprint `placement-req-filter-forbidden-aggregates`_ which was implemented in the 20.0.0 Train release. 
The following example assumes you have configured aggregate ``ABC`` and added hosts ``HOST1`` and ``HOST2`` to it in Nova, and that you want to isolate those hosts to run only instances requiring Windows licensing. #. Set the :oslo.config:option:`scheduler.enable_isolated_aggregate_filtering` config option to ``true`` in nova.conf and restart the nova-scheduler service. #. Add trait ``CUSTOM_LICENSED_WINDOWS`` to the resource providers for ``HOST1`` and ``HOST2`` in the Placement service. First create the ``CUSTOM_LICENSED_WINDOWS`` trait .. code-block:: console # openstack --os-placement-api-version 1.6 trait create CUSTOM_LICENSED_WINDOWS Assume ```` is the UUID of ``HOST1``, which is the same as its resource provider UUID. Start to build the command line by first collecting existing traits for ``HOST1`` .. code-block:: console # traits=$(openstack --os-placement-api-version 1.6 resource provider trait list -f value | sed 's/^/--trait /') Replace ``HOST1``\ 's traits, adding ``CUSTOM_LICENSED_WINDOWS`` .. code-block:: console # openstack --os-placement-api-version 1.6 resource provider trait set $traits --trait CUSTOM_LICENSED_WINDOWS Repeat the above steps for ``HOST2``. #. Add the ``trait:CUSTOM_LICENSED_WINDOWS=required`` metadata property to aggregate ``ABC``. .. code-block:: console # openstack --os-compute-api-version 2.53 aggregate set --property trait:CUSTOM_LICENSED_WINDOWS=required ABC As before, any instance spawned with a flavor or image containing ``trait:CUSTOM_LICENSED_WINDOWS=required`` will land on ``HOST1`` or ``HOST2`` because those hosts expose that trait. However, now that the ``isolate_aggregates`` request filter is configured, any instance whose flavor or image **does not** contain ``trait:CUSTOM_LICENSED_WINDOWS=required`` will **not** land on ``HOST1`` or ``HOST2`` because aggregate ``ABC`` requires that trait. The above example uses a ``CUSTOM_LICENSED_WINDOWS`` trait, but you can use any custom or `standard trait`_ in a similar fashion. The filter supports the use of multiple traits across multiple aggregates. The combination of flavor and image metadata must require **all** of the traits configured on the aggregate in order to pass. .. _placement-req-filter-forbidden-aggregates: https://specs.openstack.org/openstack/nova-specs/specs/train/approved/placement-req-filter-forbidden-aggregates.html .. _standard trait: https://docs.openstack.org/os-traits/latest/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/reference/live-migration.rst0000664000175000017500000000400600000000000022251 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ================ Live Migration ================ .. 
seqdiag:: seqdiag { Conductor; Source; Destination; edge_length = 300; span_height = 15; activation = none; default_note_color = white; Conductor -> Destination [label = "call", note = "check_can_live_migrate_destination"]; Source <- Destination [label = "call", leftnote = "check_can_live_migrate_source"]; Source --> Destination; Conductor <-- Destination; Conductor ->> Source [label = "cast", note = "live_migrate"]; Source -> Destination [label = "call", note = "pre_live_migration (set up dest)"]; Source <-- Destination; === driver.live_migration (success) === Source -> Source [leftnote = "post_live_migration (clean up source)"]; Source -> Destination [label = "call", note = "post_live_migration_at_destination (finish dest)"]; Source <-- Destination; === driver.live_migration (failure) === Source -> Source [leftnote = "_rollback_live_migration"]; Source -> Destination [label = "call", note = "remove_volume_connections"]; Source <-- Destination; Source ->> Destination [label = "cast", note = "rollback_live_migration_at_destination"]; } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/reference/notifications.rst0000664000175000017500000004052400000000000022201 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Notifications in Nova ===================== Similarly to other OpenStack services Nova emits notifications to the message bus with the Notifier class provided by :oslo.messaging-doc:`oslo.messaging `. From the notification consumer point of view a notification consists of two parts: an envelope with a fixed structure defined by oslo.messaging and a payload defined by the service emitting the notification. The envelope format is the following:: { "priority": , "event_type": , "timestamp": , "publisher_id": , "message_id": , "payload": } Notifications can be completely disabled by setting the following in your nova configuration file: .. code-block:: ini [oslo_messaging_notifications] driver = noop There are two types of notifications in Nova: legacy notifications which have an unversioned payload and newer notifications which have a versioned payload. Unversioned notifications ------------------------- Nova code uses the nova.rpc.get_notifier call to get a configured oslo.messaging Notifier object and it uses the oslo provided functions on the Notifier object to emit notifications. The configuration of the returned Notifier object depends on the parameters of the get_notifier call and the value of the oslo.messaging configuration options ``driver`` and ``topics``. There are notification configuration options in Nova which are specific for certain notification types like :oslo.config:option:`notifications.notify_on_state_change`, :oslo.config:option:`notifications.default_level`, etc. The structure of the payload of the unversioned notifications is defined in the code that emits the notification and no documentation or enforced backward compatibility contract exists for that format. 
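As a concrete (but hedged) illustration of the above, the following sketch shows how code typically emits a legacy notification: it obtains a Notifier through ``nova.rpc.get_notifier`` and calls one of the oslo.messaging level helpers with a free-form payload. The event type and payload keys below are purely illustrative; as noted above, they carry no backward compatibility contract::

    import nova.rpc

    def notify_instance_created(context, instance):
        # Obtain a Notifier configured for the 'compute' service on this host.
        notifier = nova.rpc.get_notifier(service='compute')

        # The payload is an ordinary dict whose structure is defined ad hoc
        # by the emitting code (these keys are examples, not a stable contract).
        payload = {'instance_id': instance.uuid, 'state': instance.vm_state}

        # The helper name ('info') becomes the envelope "priority" and the
        # second argument becomes its "event_type".
        notifier.info(context, 'compute.instance.create.end', payload)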
Versioned notifications ----------------------- The versioned notification concept is created to fix the shortcomings of the unversioned notifications. The envelope structure of the emitted notification is the same as in the unversioned notification case as it is provided by oslo.messaging. However the payload is not a free form dictionary but a serialized :oslo.versionedobjects-doc:`oslo versionedobjects object <>`. .. _service.update: For example the wire format of the ``service.update`` notification looks like the following:: { "priority":"INFO", "payload":{ "nova_object.namespace":"nova", "nova_object.name":"ServiceStatusPayload", "nova_object.version":"1.0", "nova_object.data":{ "host":"host1", "disabled":false, "last_seen_up":null, "binary":"nova-compute", "topic":"compute", "disabled_reason":null, "report_count":1, "forced_down":false, "version":2 } }, "event_type":"service.update", "publisher_id":"nova-compute:host1" } The serialized oslo versionedobject as a payload provides a version number to the consumer so the consumer can detect if the structure of the payload is changed. Nova provides the following contract regarding the versioned notification payload: * the payload version defined by the ``nova_object.version`` field of the payload will be increased if and only if the syntax or the semantics of the ``nova_object.data`` field of the payload is changed. * a minor version bump indicates a backward compatible change which means that only new fields are added to the payload so a well written consumer can still consume the new payload without any change. * a major version bump indicates a backward incompatible change of the payload which can mean removed fields, type change, etc in the payload. * there is an additional field 'nova_object.name' for every payload besides 'nova_object.data' and 'nova_object.version'. This field contains the name of the nova internal representation of the payload type. Client code should not depend on this name. There is a Nova configuration parameter :oslo.config:option:`notifications.notification_format` that can be used to specify which notifications are emitted by Nova. The versioned notifications are emitted to a different topic than the legacy notifications. By default they are emitted to 'versioned_notifications' but it is configurable in the nova.conf with the :oslo.config:option:`notifications.versioned_notifications_topics` config option. A `presentation from the Train summit`_ goes over the background and usage of versioned notifications, and provides a demo. .. _presentation from the Train summit: https://www.openstack.org/videos/summits/denver-2019/nova-versioned-notifications-the-result-of-a-3-year-journey How to add a new versioned notification ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To support the above contract from the Nova code every versioned notification is modeled with oslo versionedobjects. Every versioned notification class shall inherit from the ``nova.notifications.objects.base.NotificationBase`` which already defines three mandatory fields of the notification ``event_type``, ``publisher`` and ``priority``. The new notification class shall add a new field ``payload`` with an appropriate payload type. The payload object of the notifications shall inherit from the ``nova.notifications.objects.base.NotificationPayloadBase`` class and shall define the fields of the payload as versionedobject fields. The base classes are described in the following section. 
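Before moving on to the emitter-side base classes, a consumer-side sketch may help illustrate how the versioning contract above can be relied upon. This is not nova code; it is a hypothetical consumer of the ``service.update`` notification shown earlier, written against the wire format only::

    def handle_service_update(message):
        # 'message' is the decoded notification envelope (a dict).
        payload = message['payload']
        name = payload['nova_object.name']          # e.g. 'ServiceStatusPayload'
        major, minor = payload['nova_object.version'].split('.')
        data = payload['nova_object.data']

        if name != 'ServiceStatusPayload' or major != '1':
            # A major version bump is backward incompatible, so do not guess.
            raise ValueError('unsupported payload %s version %s.%s'
                             % (name, major, minor))

        # A minor version bump only adds fields, so any keys in 'data' that
        # this consumer does not know about can simply be ignored.
        return data['host'], data['forced_down']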
The nova.notifications.objects.base module .......................................... .. automodule:: nova.notifications.objects.base :noindex: :members: :show-inheritance: Please note that the notification objects shall not be registered to the NovaObjectRegistry to avoid mixing nova internal objects with the notification objects. Instead of that use the register_notification decorator on every concrete notification object. The following code example defines the necessary model classes for a new notification ``myobject.update``:: @notification.notification_sample('myobject-update.json') @object_base.NovaObjectRegistry.register.register_notification class MyObjectNotification(notification.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('MyObjectUpdatePayload') } @object_base.NovaObjectRegistry.register.register_notification class MyObjectUpdatePayload(notification.NotificationPayloadBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'some_data': fields.StringField(), 'another_data': fields.StringField(), } After that the notification can be populated and emitted with the following code:: payload = MyObjectUpdatePayload(some_data="foo", another_data="bar") MyObjectNotification( publisher=notification.NotificationPublisher.from_service_obj( ), event_type=notification.EventType( object='myobject', action=fields.NotificationAction.UPDATE), priority=fields.NotificationPriority.INFO, payload=payload).emit(context) The above code will generate the following notification on the wire:: { "priority":"INFO", "payload":{ "nova_object.namespace":"nova", "nova_object.name":"MyObjectUpdatePayload", "nova_object.version":"1.0", "nova_object.data":{ "some_data":"foo", "another_data":"bar", } }, "event_type":"myobject.update", "publisher_id":":" } There is a possibility to reuse an existing versionedobject as notification payload by adding a ``SCHEMA`` field for the payload class that defines a mapping between the fields of existing objects and the fields of the new payload object. 
For example the service.status notification reuses the existing ``nova.objects.service.Service`` object when defines the notification's payload:: @notification.notification_sample('service-update.json') @object_base.NovaObjectRegistry.register.register_notification class ServiceStatusNotification(notification.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('ServiceStatusPayload') } @object_base.NovaObjectRegistry.register.register_notification class ServiceStatusPayload(notification.NotificationPayloadBase): SCHEMA = { 'host': ('service', 'host'), 'binary': ('service', 'binary'), 'topic': ('service', 'topic'), 'report_count': ('service', 'report_count'), 'disabled': ('service', 'disabled'), 'disabled_reason': ('service', 'disabled_reason'), 'availability_zone': ('service', 'availability_zone'), 'last_seen_up': ('service', 'last_seen_up'), 'forced_down': ('service', 'forced_down'), 'version': ('service', 'version') } # Version 1.0: Initial version VERSION = '1.0' fields = { 'host': fields.StringField(nullable=True), 'binary': fields.StringField(nullable=True), 'topic': fields.StringField(nullable=True), 'report_count': fields.IntegerField(), 'disabled': fields.BooleanField(), 'disabled_reason': fields.StringField(nullable=True), 'availability_zone': fields.StringField(nullable=True), 'last_seen_up': fields.DateTimeField(nullable=True), 'forced_down': fields.BooleanField(), 'version': fields.IntegerField(), } def populate_schema(self, service): super(ServiceStatusPayload, self).populate_schema(service=service) If the ``SCHEMA`` field is defined then the payload object needs to be populated with the ``populate_schema`` call before it can be emitted:: payload = ServiceStatusPayload() payload.populate_schema(service=) ServiceStatusNotification( publisher=notification.NotificationPublisher.from_service_obj( ), event_type=notification.EventType( object='service', action=fields.NotificationAction.UPDATE), priority=fields.NotificationPriority.INFO, payload=payload).emit(context) The above code will emit the :ref:`already shown notification` on the wire. Every item in the ``SCHEMA`` has the syntax of:: : (, ) The mapping defined in the ``SCHEMA`` field has the following semantics. When the ``populate_schema`` function is called the content of the ``SCHEMA`` field is enumerated and the value of the field of the pointed parameter object is copied to the requested payload field. So in the above example the ``host`` field of the payload object is populated from the value of the ``host`` field of the ``service`` object that is passed as a parameter to the ``populate_schema`` call. A notification payload object can reuse fields from multiple existing objects. Also a notification can have both new and reused fields in its payload. Note that the notification's publisher instance can be created two different ways. It can be created by instantiating the ``NotificationPublisher`` object with a ``host`` and a ``source`` string parameter or it can be generated from a ``Service`` object by calling ``NotificationPublisher.from_service_obj`` function. Versioned notifications shall have a sample file stored under ``doc/sample_notifications`` directory and the notification object shall be decorated with the ``notification_sample`` decorator. For example the ``service.update`` notification has a sample file stored in ``doc/sample_notifications/service-update.json`` and the ServiceUpdateNotification class is decorated accordingly. 
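To make the two publisher styles noted above concrete, here is a brief sketch; the host name, source string and ``service`` variable are placeholders for whatever the calling code has at hand::

    # Style 1: build the publisher explicitly from a host name and a
    # source (service binary) string.
    publisher = notification.NotificationPublisher(
        host='compute-host-1', source='nova-compute')

    # Style 2: derive the same information from an existing Service object.
    publisher = notification.NotificationPublisher.from_service_obj(service)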
Notification payload classes can use inheritance to avoid duplicating common payload fragments in nova code. However the leaf classes used directly in a notification should be created with care to avoid future needs of adding extra level of inheritance that changes the name of the leaf class as that name is present in the payload class. If this cannot be avoided and the only change is the renaming then the version of the new payload shall be the same as the old payload was before the rename. See [1]_ as an example. If the renaming involves any other changes on the payload (e.g. adding new fields) then the version of the new payload shall be higher than the old payload was. See [2]_ as an example. What should be in the notification payload ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This is just a guideline. You should always consider the actual use case that requires the notification. * Always include the identifier (e.g. uuid) of the entity that can be used to query the whole entity over the REST API so that the consumer can get more information about the entity. * You should consider including those fields that are related to the event you are sending the notification about. For example if a change of a field of the entity triggers an update notification then you should include the field to the payload. * An update notification should contain information about what part of the entity is changed. Either by filling the nova_object.changes part of the payload (note that it is not supported by the notification framework currently) or sending both the old state and the new state of the entity in the payload. * You should never include a nova internal object in the payload. Create a new object and use the SCHEMA field to map the internal object to the notification payload. This way the evolution of the internal object model can be decoupled from the evolution of the notification payload. .. important:: This does not mean that every field from internal objects should be mirrored in the notification payload objects. Think about what is actually needed by a consumer before adding it to a payload. When in doubt, if no one is requesting specific information in notifications, then leave it out until someone asks for it. * The delete notification should contain the same information as the create or update notifications. This makes it possible for the consumer to listen only to the delete notifications but still filter on some fields of the entity (e.g. project_id). What should **NOT** be in the notification payload ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * Generally anything that contains sensitive information about the internals of the nova deployment, for example fields that contain access credentials to a cell database or message queue (see `bug 1823104`_). .. _bug 1823104: https://bugs.launchpad.net/nova/+bug/1823104 Existing versioned notifications ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. note:: Versioned notifications are added in each release, so the samples represented below may not necessarily be in an older version of nova. Ensure you are looking at the correct version of the documentation for the release you are using. .. This is a reference anchor used in the main index page. .. _versioned_notification_samples: .. versioned_notifications:: .. [1] https://review.opendev.org/#/c/463001/ .. 
[2] https://review.opendev.org/#/c/453077/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/reference/policy-enforcement.rst0000664000175000017500000002131100000000000023123 0ustar00zuulzuul00000000000000.. Copyright 2014 Intel All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. REST API Policy Enforcement =========================== The following describes some of the shortcomings in how policy is used and enforced in nova, along with some benefits of fixing those issues. Each issue has a section dedicated to describing the underlying cause and historical context in greater detail. Problems with current system ---------------------------- The following is a list of issues with the existing policy enforcement system: * `Testing default policies`_ * `Mismatched authorization`_ * `Inconsistent naming`_ * `Incorporating default roles`_ * `Compartmentalized policy enforcement`_ * `Refactoring hard-coded permission checks`_ * `Granular policy checks`_ Addressing the list above helps operators by: 1. Providing them with flexible and useful defaults 2. Reducing the likelihood of writing and maintaining custom policies 3. Improving interoperability between deployments 4. Increasing RBAC confidence through first-class testing and verification 5. Reducing complexity by using consistent policy naming conventions 6. Exposing more functionality to end-users, safely, making the entire nova API more self-serviceable resulting in less operational overhead for operators to do things on behalf of users Additionally, the following is a list of benefits to contributors: 1. Reduce developer maintenance and cost by isolating policy enforcement into a single layer 2. Reduce complexity by using consistent policy naming conventions 3. Increased confidence in RBAC refactoring through exhaustive testing that prevents regressions before they merge Testing default policies ------------------------ Testing default policies is important in protecting against authoritative regression. Authoritative regression is when a change accidentally allows someone to do something or see something they shouldn't. It can also be when a change accidentally restricts a user from doing something they used to have the authorization to perform. This testing is especially useful prior to refactoring large parts of the policy system. For example, this level of testing would be invaluable prior to pulling policy enforcement logic from the database layer up to the API layer. `Testing documentation`_ exists that describes the process for developing these types of tests. .. _Testing documentation: https://docs.openstack.org/keystone/latest/contributor/services.html#ruthless-testing Mismatched authorization ------------------------ The compute API is rich in functionality and has grown to manage both physical and virtual hardware. Some APIs were meant to assist operators while others were specific to end users. 
Historically, nova used project-scoped tokens to protect almost every API, regardless of the intended user. Using project-scoped tokens to authorize requests for system-level APIs makes for undesirable user-experience and is prone to overloading roles. For example, to prevent every user from accessing hardware level APIs that would otherwise violate tenancy requires operators to create a ``system-admin`` or ``super-admin`` role, then rewrite those system-level policies to incorporate that role. This means users with that special role on a project could access system-level resources that aren't even tracked against projects (hypervisor information is an example of system-specific information.) As of the Queens release, keystone supports a scope type dedicated to easing this problem, called system scope. Consuming system scope across the compute API results in fewer overloaded roles, less specialized authorization logic in code, and simpler policies that expose more functionality to users without violating tenancy. Please refer to keystone's `authorization scopes documentation`_ to learn more about scopes and how to use them effectively. .. _authorization scopes documentation: https://docs.openstack.org/keystone/latest/contributor/services.html#authorization-scopes Inconsistent naming ------------------- Inconsistent conventions for policy names are scattered across most OpenStack services, nova included. Recently, there was an effort that introduced a convention that factored in service names, resources, and use cases. This new convention is applicable to nova policy names. The convention is formally `documented`_ in oslo.policy and we can use policy `deprecation tooling`_ to gracefully rename policies. .. _documented: https://docs.openstack.org/oslo.policy/latest/user/usage.html#naming-policies .. _deprecation tooling: https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#oslo_policy.policy.DeprecatedRule Incorporating default roles --------------------------- Up until the Rocky release, keystone only ensured a single role called ``admin`` was available to the deployment upon installation. In Rocky, this support was expanded to include ``member`` and ``reader`` roles as first-class citizens during keystone's installation. This allows service developers to rely on these roles and include them in their default policy definitions. Standardizing on a set of role names for default policies increases interoperability between deployments and decreases operator overhead. You can find more information on default roles in the keystone `specification`_ or `developer documentation`_. .. _specification: http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html .. _developer documentation: https://docs.openstack.org/keystone/latest/contributor/services.html#reusable-default-roles Compartmentalized policy enforcement ------------------------------------ Policy logic and processing is inherently sensitive and often complicated. It is sensitive in that coding mistakes can lead to security vulnerabilities. It is complicated in the resources and APIs it needs to protect and the vast number of use cases it needs to support. These reasons make a case for isolating policy enforcement and processing into a compartmentalized space, as opposed to policy logic bleeding through to different layers of nova. Not having all policy logic in a single place makes evolving the policy enforcement system arduous and makes the policy system itself fragile. 
Currently, the database and API components of nova contain policy logic. At some point, we should refactor these systems into a single component that is easier to maintain. Before we do this, we should consider approaches for bolstering testing coverage, which ensures we are aware of or prevent policy regressions. There are examples and documentation in API protection `testing guides`_. .. _testing guides: https://docs.openstack.org/keystone/latest/contributor/services.html#ruthless-testing Refactoring hard-coded permission checks ---------------------------------------- The policy system in nova is designed to be configurable. Despite this design, there are some APIs that have hard-coded checks for specific roles. This makes configuration impossible, misleading, and frustrating for operators. Instead, we can remove hard-coded policies and ensure a configuration-driven approach, which reduces technical debt, increases consistency, and provides better user-experience for operators. Additionally, moving hard-coded checks into first-class policy rules let us use existing policy tooling to deprecate, document, and evolve policies. Granular policy checks ---------------------- Policies should be as granular as possible to ensure consistency and reasonable defaults. Using a single policy to protect CRUD for an entire API is restrictive because it prevents us from using default roles to make delegation to that API flexible. For example, a policy for ``compute:foobar`` could be broken into ``compute:foobar:create``, ``compute:foobar:update``, ``compute:foobar:list``, ``compute:foobar:get``, and ``compute:foobar:delete``. Breaking policies down this way allows us to set read-only policies for readable operations or use another default role for creation and management of `foobar` resources. The oslo.policy library has `examples`_ that show how to do this using deprecated policy rules. .. _examples: https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#oslo_policy.policy.DeprecatedRule ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/reference/rpc.rst0000664000175000017500000003234300000000000020114 0ustar00zuulzuul00000000000000.. Copyright (c) 2010 Citrix Systems, Inc. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. AMQP and Nova ============= AMQP is the messaging technology chosen by the OpenStack cloud. The AMQP broker, default to Rabbitmq, sits between any two Nova components and allows them to communicate in a loosely coupled fashion. More precisely, Nova components (the compute fabric of OpenStack) use Remote Procedure Calls (RPC hereinafter) to communicate to one another; however such a paradigm is built atop the publish/subscribe paradigm so that the following benefits can be achieved: * Decoupling between client and servant (such as the client does not need to know where the servant's reference is). 
* Full asynchronism between client and servant (such as the client does not need the servant to be running at the same time as the remote call). * Random balancing of remote calls (such as if more servants are up and running, one-way calls are transparently dispatched to the first available servant). Nova uses direct, fanout, and topic-based exchanges. The architecture looks like the one depicted in the figure below: .. image:: /_static/images/rpc-arch.png :width: 60% Nova implements RPC (both request+response, and one-way, respectively nicknamed ``rpc.call`` and ``rpc.cast``) over AMQP by providing an adapter class which takes care of marshaling and unmarshaling messages into function calls. Each Nova service (for example Compute, Scheduler, etc.) creates two queues at initialization time, one which accepts messages with routing keys ``NODE-TYPE.NODE-ID`` (for example ``compute.hostname``) and another which accepts messages with the generic routing key ``NODE-TYPE`` (for example ``compute``). The former is used specifically when Nova-API needs to redirect commands to a specific node, as with ``openstack server delete $instance``. In this case, only the compute node whose host's hypervisor is running the virtual machine can kill the instance. The API acts as a consumer when RPC calls are request/response, otherwise it acts as a publisher only. Nova RPC Mappings ----------------- The figure below shows the internals of a message broker node (referred to as a RabbitMQ node in the diagrams) when a single instance is deployed and shared in an OpenStack cloud. Every Nova component connects to the message broker and, depending on its personality (for example a compute node or a network node), may use the queue either as an Invoker (such as API or Scheduler) or a Worker (such as Compute or Network). Invokers and Workers do not actually exist in the Nova object model, but we are going to use them as an abstraction for the sake of clarity. An Invoker is a component that sends messages to the queuing system via two operations: i) ``rpc.call`` and ii) ``rpc.cast``; a Worker is a component that receives messages from the queuing system and replies to ``rpc.call`` operations. Figure 2 shows the following internal elements: Topic Publisher A Topic Publisher comes to life when an ``rpc.call`` or an ``rpc.cast`` operation is executed; this object is instantiated and used to push a message to the queuing system. Every publisher always connects to the same topic-based exchange; its life-cycle is limited to the message delivery. Direct Consumer A Direct Consumer comes to life if (and only if) an ``rpc.call`` operation is executed; this object is instantiated and used to receive a response message from the queuing system. Every consumer connects to a unique direct-based exchange via a unique exclusive queue; its life-cycle is limited to the message delivery; the exchange and queue identifiers are determined by a UUID generator, and are marshaled in the message sent by the Topic Publisher (only ``rpc.call`` operations). Topic Consumer A Topic Consumer comes to life as soon as a Worker is instantiated and exists throughout its life-cycle; this object is used to receive messages from the queue and it invokes the appropriate action as defined by the Worker role. A Topic Consumer connects to the same topic-based exchange either via a shared queue or via a unique exclusive queue.
Every Worker has two topic consumers, one that is addressed only during ``rpc.cast`` operations (and it connects to a shared queue whose exchange key is ``topic``) and the other that is addressed only during ``rpc.call`` operations (and it connects to a unique queue whose exchange key is ``topic.host``). Direct Publisher A Direct Publisher comes to life only during ``rpc.call`` operations and it is instantiated to return the message required by the request/response operation. The object connects to a direct-based exchange whose identity is dictated by the incoming message. Topic Exchange The Exchange is a routing table that exists in the context of a virtual host (the multi-tenancy mechanism provided by RabbitMQ etc); its type (such as topic vs. direct) determines the routing policy; a message broker node will have only one topic-based exchange for every topic in Nova. Direct Exchange This is a routing table that is created during ``rpc.call`` operations; there are many instances of this kind of exchange throughout the life-cycle of a message broker node, one for each ``rpc.call`` invoked. Queue Element A Queue is a message bucket. Messages are kept in the queue until a Consumer (either Topic or Direct Consumer) connects to the queue and fetch it. Queues can be shared or can be exclusive. Queues whose routing key is ``topic`` are shared amongst Workers of the same personality. .. image:: /_static/images/rpc-rabt.png :width: 60% RPC Calls --------- The diagram below shows the message flow during an ``rpc.call`` operation: 1. A Topic Publisher is instantiated to send the message request to the queuing system; immediately before the publishing operation, a Direct Consumer is instantiated to wait for the response message. 2. Once the message is dispatched by the exchange, it is fetched by the Topic Consumer dictated by the routing key (such as 'topic.host') and passed to the Worker in charge of the task. 3. Once the task is completed, a Direct Publisher is allocated to send the response message to the queuing system. 4. Once the message is dispatched by the exchange, it is fetched by the Direct Consumer dictated by the routing key (such as ``msg_id``) and passed to the Invoker. .. image:: /_static/images/rpc-flow-1.png :width: 60% RPC Casts --------- The diagram below shows the message flow during an ``rpc.cast`` operation: 1. A Topic Publisher is instantiated to send the message request to the queuing system. 2. Once the message is dispatched by the exchange, it is fetched by the Topic Consumer dictated by the routing key (such as 'topic') and passed to the Worker in charge of the task. .. image:: /_static/images/rpc-flow-2.png :width: 60% AMQP Broker Load ---------------- At any given time the load of a message broker node running RabbitMQ etc is function of the following parameters: Throughput of API calls The number of API calls (more precisely ``rpc.call`` ops) being served by the OpenStack cloud dictates the number of direct-based exchanges, related queues and direct consumers connected to them. Number of Workers There is one queue shared amongst workers with the same personality; however there are as many exclusive queues as the number of workers; the number of workers dictates also the number of routing keys within the topic-based exchange, which is shared amongst all workers. The figure below shows the status of a RabbitMQ node after Nova components' bootstrap in a test environment. Exchanges and queues being created by Nova components are: * Exchanges 1. 
nova (topic exchange) * Queues 1. ``compute.phantom`` (``phantom`` is hostname) 2. ``compute`` 3. ``network.phantom`` (``phantom`` is hostname) 4. ``network`` 5. ``scheduler.phantom`` (``phantom`` is hostname) 6. ``scheduler`` .. image:: /_static/images/rpc-state.png :width: 60% RabbitMQ Gotchas ---------------- Nova uses Kombu to connect to the RabbitMQ environment. Kombu is a Python library that in turn uses AMQPLib, a library that implements the standard AMQP 0.8 at the time of writing. When using Kombu, Invokers and Workers need the following parameters in order to instantiate a Connection object that connects to the RabbitMQ server (please note that most of the following material can be also found in the Kombu documentation; it has been summarized and revised here for sake of clarity): ``hostname`` The hostname to the AMQP server. ``userid`` A valid username used to authenticate to the server. ``password`` The password used to authenticate to the server. ``virtual_host`` The name of the virtual host to work with. This virtual host must exist on the server, and the user must have access to it. Default is "/". ``port`` The port of the AMQP server. Default is ``5672`` (amqp). The following parameters are default: ``insist`` Insist on connecting to a server. In a configuration with multiple load-sharing servers, the Insist option tells the server that the client is insisting on a connection to the specified server. Default is False. ``connect_timeout`` The timeout in seconds before the client gives up connecting to the server. The default is no timeout. ``ssl`` Use SSL to connect to the server. The default is False. More precisely Consumers need the following parameters: ``connection`` The above mentioned Connection object. ``queue`` Name of the queue. ``exchange`` Name of the exchange the queue binds to. ``routing_key`` The interpretation of the routing key depends on the value of the ``exchange_type`` attribute. Direct exchange If the routing key property of the message and the ``routing_key`` attribute of the queue are identical, then the message is forwarded to the queue. Fanout exchange Messages are forwarded to the queues bound the exchange, even if the binding does not have a key. Topic exchange If the routing key property of the message matches the routing key of the key according to a primitive pattern matching scheme, then the message is forwarded to the queue. The message routing key then consists of words separated by dots (``.``, like domain names), and two special characters are available; star (``*``) and hash (``#``). The star matches any word, and the hash matches zero or more words. For example ``.stock.#`` matches the routing keys ``usd.stock`` and ``eur.stock.db`` but not ``stock.nasdaq``. ``durable`` This flag determines the durability of both exchanges and queues; durable exchanges and queues remain active when a RabbitMQ server restarts. Non-durable exchanges/queues (transient exchanges/queues) are purged when a server restarts. It is worth noting that AMQP specifies that durable queues cannot bind to transient exchanges. Default is True. ``auto_delete`` If set, the exchange is deleted when all queues have finished using it. Default is False. ``exclusive`` Exclusive queues (such as non-shared) may only be consumed from by the current connection. When exclusive is on, this also implies ``auto_delete``. Default is False. ``exchange_type`` AMQP defines several default exchange types (routing algorithms) that covers most of the common messaging use cases. 
``auto_ack`` Acknowledgment is handled automatically once messages are received. By default ``auto_ack`` is set to False, and the receiver is required to manually handle acknowledgment. ``no_ack`` It disable acknowledgment on the server-side. This is different from ``auto_ack`` in that acknowledgment is turned off altogether. This functionality increases performance but at the cost of reliability. Messages can get lost if a client dies before it can deliver them to the application. ``auto_declare`` If this is True and the exchange name is set, the exchange will be automatically declared at instantiation. Auto declare is on by default. Publishers specify most the parameters of Consumers (such as they do not specify a queue name), but they can also specify the following: ``delivery_mode`` The default delivery mode used for messages. The value is an integer. The following delivery modes are supported by RabbitMQ: ``1`` (transient) The message is transient. Which means it is stored in memory only, and is lost if the server dies or restarts. ``2`` (persistent) The message is persistent. Which means the message is stored both in-memory, and on disk, and therefore preserved if the server dies or restarts. The default value is ``2`` (persistent). During a send operation, Publishers can override the delivery mode of messages so that, for example, transient messages can be sent over a durable queue. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/reference/scheduler-evolution.rst0000664000175000017500000001327300000000000023331 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. =================== Scheduler Evolution =================== Evolving the scheduler has been a priority item over several releases: http://specs.openstack.org/openstack/nova-specs/#priorities The scheduler has become tightly coupled with the rest of nova, limiting its capabilities, accuracy, flexibility and maintainability. The goal of scheduler evolution is to bring about a better separation of concerns between scheduling functionality and the rest of nova. Once this effort has completed, its conceivable that the nova-scheduler could become a separate git repo, outside of nova but within the compute project. This is not the current focus. Problem Use Cases ================== Many users are wanting to do more advanced things with the scheduler, but the current architecture is not ready to support those use cases in a maintainable way. A few examples will help to illustrate where the scheduler falls short: Cross Project Affinity ----------------------- It can be desirable, when booting from a volume, to use a compute node that is close to the shared storage where that volume is. Similarly, for the sake of performance, it can be desirable to use a compute node that is in a particular location in relation to a pre-created port. 
Filter Scheduler Alternatives ------------------------------ For certain use cases, radically different schedulers may perform much better than the filter scheduler. We should not block this innovation. It is unreasonable to assume a single scheduler will work for all use cases. However, to enable this kind of innovation in a maintainable way, a single strong scheduler interface is required. Project Scale issues --------------------- There are many interesting ideas for new schedulers, like the solver scheduler, and frequent requests to add new filters and weights to the scheduling system. The current nova team does not have the bandwidth to deal with all these requests. A dedicated scheduler team could work on these items independently of the rest of nova. The tight coupling that currently exists makes it impossible to work on the scheduler in isolation. A stable interface is required before the code can be split out. Key areas we are evolving ========================== Here we discuss, at a high level, areas that are being addressed as part of the scheduler evolution work. Versioning Scheduler Placement Interfaces ------------------------------------------ At the start of Kilo, the scheduler is passed a set of dictionaries across a versioned RPC interface. The dictionaries can create problems with the backwards compatibility needed for live upgrades. Luckily we already have the oslo.versionedobjects infrastructure we can use to model this data in a way that can be versioned across releases. This effort is mostly focused on the request_spec. See, for example, `this spec`_. Sending host and node stats to the scheduler --------------------------------------------- Periodically nova-compute updates the scheduler state stored in the database. We need a good way to model the data that is being sent from the compute nodes into the scheduler, so that over time the scheduler can move to having its own database. This is linked to the work on the resource tracker. Updating the Scheduler about other data ---------------------------------------- Over time, it is possible that we will need to send cinder and neutron data, so the scheduler can use that data to help pick a nova-compute host. Resource Tracker ----------------- The recent work to add support for NUMA and PCI pass-through has shown that we have no good pattern to extend the resource tracker. Ideally we want to keep the innovation inside the nova tree, but we also need it to be easier to extend. This is closely related to the effort to re-think how we model resources, as covered by the discussion about `resource providers`_. Parallelism and Concurrency ---------------------------- The current design of the nova-scheduler is very racy, and can lead to excessive numbers of build retries before the correct host is found. The recent NUMA features are particularly impacted by how the scheduler works. All this has led to many people running only a single nova-scheduler process configured to use a very small greenthread pool. The work on cells v2 will mean that we soon need the scheduler to scale for much larger problems. The current scheduler works best with fewer than 1k nodes but we will need it to work with at least 10k nodes. Various ideas have been discussed to reduce races when running multiple nova-scheduler processes. One idea is to use two-phase commit "style" resource tracker claims. Another idea involves using incremental updates so it is more efficient to keep the scheduler's state up to date, potentially using Kafka.
For more details, see the `backlog spec`_ that describes more of the details around this problem. .. _this spec: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/sched-select-destinations-use-request-spec-object.html .. _resource providers: https://blueprints.launchpad.net/nova/+spec/resource-providers .. _backlog spec: http://specs.openstack.org/openstack/nova-specs/specs/backlog/approved/parallel-scheduler.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/reference/scheduler-hints-vs-flavor-extra-specs.rst0000664000175000017500000001753100000000000026604 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ========================================= Scheduler hints versus flavor extra specs ========================================= People deploying and working on Nova often have questions about flavor extra specs and scheduler hints and what role they play in scheduling decisions, and which is a better choice for exposing capability to an end user of the cloud. There are several things to consider and it can get complicated. This document attempts to explain at a high level some of the major differences and drawbacks with both flavor extra specs and scheduler hints. Extra Specs ----------- In general flavor extra specs are specific to the cloud and how it is organized for capabilities, and should be abstracted from the end user. Extra specs are tied to :doc:`host aggregates ` and a lot of them also define how a guest is created in the hypervisor, for example what the watchdog action is for a VM. Extra specs are also generally interchangeable with `image properties`_ when it comes to VM behavior, like the watchdog example. How that is presented to the user is via the name of the flavor, or documentation specifically for that deployment, e.g. instructions telling a user how to setup a baremetal instance. .. _image properties: https://docs.openstack.org/glance/latest/admin/useful-image-properties.html Scheduler Hints --------------- Scheduler hints, also known simply as "hints", can be specified during server creation to influence the placement of the server by the scheduler depending on which scheduler filters are enabled. Hints are mapped to specific filters. For example, the ``ServerGroupAntiAffinityFilter`` scheduler filter is used with the ``group`` scheduler hint to indicate that the server being created should be a member of the specified anti-affinity group and the filter should place that server on a compute host which is different from all other current members of the group. Hints are not more "dynamic" than flavor extra specs. The end user specifies a flavor and optionally a hint when creating a server, but ultimately what they can specify is static and defined by the deployment. Similarities ------------ * Both scheduler hints and flavor extra specs can be used by :doc:`scheduler filters `. 
* Both are totally customizable, meaning there is no whitelist within Nova of acceptable hints or extra specs, unlike image properties [1]_. * An end user cannot achieve a new behavior without deployer consent, i.e. even if the end user specifies the ``group`` hint, if the deployer did not configure the ``ServerGroupAntiAffinityFilter`` the end user cannot have the ``anti-affinity`` behavior. Differences ----------- * A server's host location and/or behavior can change when resized with a flavor that has different extra specs from those used to create the server. Scheduler hints can only be specified during server creation, not during resize or any other "move" operation, but the original hints are still applied during the move operation. * The flavor extra specs used to create (or resize) a server can be retrieved from the compute API using the `2.47 microversion`_. As of the 19.0.0 Stein release, there is currently no way from the compute API to retrieve the scheduler hints used to create a server. .. note:: Exposing the hints used to create a server has been proposed [2]_. Without this, it is possible to workaround the limitation by doing things such as including the scheduler hint in the server metadata so it can be retrieved via server metadata later. * In the case of hints the end user can decide not to include a hint. On the other hand the end user cannot create a new flavor (by default policy) to avoid passing a flavor with an extra spec - the deployer controls the flavors. .. _2.47 microversion: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id42 Discoverability --------------- When it comes to discoverability, by the default ``os_compute_api:os-flavor-extra-specs:index`` policy rule, flavor extra specs are more "discoverable" by the end user since they can list them for a flavor. However, one should not expect an average end user to understand what different extra specs mean as they are just a key/value pair. There is some documentation for some "standard" extra specs though [3]_. However, that is not an exhaustive list and it does not include anything that different deployments would define for things like linking a flavor to a set of :doc:`host aggregates `, for example, when creating flavors for baremetal instances, or what the chosen :doc:`hypervisor driver ` might support for flavor extra specs. Scheduler hints are less discoverable from an end user perspective than extra specs. There are some standard hints defined in the API request schema [4]_. However: 1. Those hints are tied to scheduler filters and the scheduler filters are configurable per deployment, so for example the ``JsonFilter`` might not be enabled (it is not enabled by default), so the ``query`` hint would not do anything. 2. Scheduler hints are not restricted to just what is in that schema in the upstream nova code because of the ``additionalProperties: True`` entry in the schema. This allows deployments to define their own hints outside of that API request schema for their own :ref:`custom scheduler filters ` which are not part of the upstream nova code. Interoperability ---------------- The only way an end user can really use scheduler hints is based on documentation (or GUIs/SDKs) that a specific cloud deployment provides for their setup. So if **CloudA** defines a custom scheduler filter X and a hint for that filter in their documentation, an end user application can only run with that hint on that cloud and expect it to work as documented. 
If the user moves their application to **CloudB** which does not have that scheduler filter or hint, they will get different behavior. So obviously both flavor extra specs and scheduler hints are not interoperable. Which to use? ------------- When it comes to defining a custom scheduler filter, you could use a hint or an extra spec. If you need a flavor extra spec anyway for some behavior in the hypervisor when creating the guest, or to be able to retrieve the original flavor extra specs used to create a guest later, then you might as well just use the extra spec. If you do not need that, then a scheduler hint may be an obvious choice, from an end user perspective, for exposing a certain scheduling behavior but it must be well documented and the end user should realize that hint might not be available in other clouds, and they do not have a good way of finding that out either. Long-term, flavor extra specs are likely to be more standardized than hints so ultimately extra specs are the recommended choice. Footnotes --------- .. [1] https://opendev.org/openstack/nova/src/commit/fbe6f77bc1cb41f5d6cfc24ece54d3413f997aab/nova/objects/image_meta.py#L225 .. [2] https://review.opendev.org/#/c/440580/ .. [3] https://docs.openstack.org/nova/latest/user/flavors.html#extra-specs .. [4] https://opendev.org/openstack/nova/src/commit/fbe6f77bc1cb41f5d6cfc24ece54d3413f997aab/nova/api/openstack/compute/schemas/scheduler_hints.py ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/reference/scheduling.rst0000664000175000017500000001471300000000000021456 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ============ Scheduling ============ This is an overview of how scheduling works in nova from Pike onwards. For information on the scheduler itself, refer to :doc:`/user/filter-scheduler`. For an overview of why we've changed how the scheduler works, refer to :doc:`/reference/scheduler-evolution`. Overview -------- The scheduling process is described below. .. note:: This is current as of the 16.0.0 Pike release. Any mention of alternative hosts passed between the scheduler and conductor(s) is future work. .. 
actdiag:: actdiag { build-spec -> send-spec -> send-reqs -> query -> return-rps -> create -> filter -> claim -> return-hosts -> send-hosts; lane conductor { label = "Conductor"; build-spec [label = "Build request spec object", height = 38]; send-spec [label = "Submit request spec to scheduler", height = 38]; send-hosts [label = "Submit list of suitable hosts to target cell", height = 51]; } lane scheduler { label = "Scheduler"; send-reqs [label = "Submit resource requirements to placement", height = 64]; create [label = "Create a HostState object for each RP returned from Placement", height = 64]; filter [label = "Filter and weigh results", height = 38]; return-hosts [label = "Return a list of selected host & alternates, along with their allocations, to the conductor", height = 89]; } lane placement { label = "Placement"; query [label = "Query to determine the RPs representing compute nodes to satisfy requirements", height = 64]; return-rps [label = "Return list of resource providers and their corresponding allocations to scheduler", height = 89]; claim [label = "Create allocations against selected compute node", height = 64]; } } As the above diagram illustrates, scheduling works like so: #. Scheduler gets a request spec from the "super conductor", containing resource requirements. The "super conductor" operates at the top level of a deployment, as contrasted with the "cell conductor", which operates within a particular cell. #. Scheduler sends those requirements to placement. #. Placement runs a query to determine the resource providers (in this case, compute nodes) that can satisfy those requirements. #. Placement then constructs a data structure for each compute node as documented in the `spec`__. The data structure contains summaries of the matching resource provider information for each compute node, along with the AllocationRequest that will be used to claim the requested resources if that compute node is selected. #. Placement returns this data structure to the Scheduler. #. The Scheduler creates HostState objects for each compute node contained in the provider summaries. These HostState objects contain the information about the host that will be used for subsequent filtering and weighing. #. Since the request spec can specify one or more instances to be scheduled. The Scheduler repeats the next several steps for each requested instance. #. Scheduler runs these HostState objects through the filters and weighers to further refine and rank the hosts to match the request. #. Scheduler then selects the HostState at the top of the ranked list, and determines its matching AllocationRequest from the data returned by Placement. It uses that AllocationRequest as the body of the request sent to Placement to claim the resources. #. If the claim is not successful, that indicates that another process has consumed those resources, and the host is no longer able to satisfy the request. In that event, the Scheduler moves on to the next host in the list, repeating the process until it is able to successfully claim the resources. #. Once the Scheduler has found a host for which a successful claim has been made, it needs to select a number of "alternate" hosts. These are hosts from the ranked list that are in the same cell as the selected host, which can be used by the cell conductor in the event that the build on the selected host fails for some reason. The number of alternates is determined by the configuration option `scheduler.max_attempts`. #. 
Scheduler creates two list structures for each requested instance: one for the hosts (selected + alternates), and the other for their matching AllocationRequests. #. To create the alternates, Scheduler determines the cell of the selected host. It then iterates through the ranked list of HostState objects to find a number of additional hosts in that same cell. It adds those hosts to the host list, and their AllocationRequest to the allocation list. #. Once those lists are created, the Scheduler has completed what it needs to do for a requested instance. #. Scheduler repeats this process for any additional requested instances. When all instances have been scheduled, it creates a 2-tuple to return to the super conductor, with the first element of the tuple being a list of lists of hosts, and the second being a list of lists of the AllocationRequests. #. Scheduler returns that 2-tuple to the super conductor. #. For each requested instance, the super conductor determines the cell of the selected host. It then sends a 2-tuple of ([hosts], [AllocationRequests]) for that instance to the target cell conductor. #. Target cell conductor tries to build the instance on the selected host. If it fails, it uses the AllocationRequest data for that host to unclaim the resources for the selected host. It then iterates through the list of alternates by first attempting to claim the resources, and if successful, building the instance on that host. Only when all alternates fail does the build request fail. __ https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/placement-allocation-requests.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/reference/services.rst0000664000175000017500000000446000000000000021152 0ustar00zuulzuul00000000000000.. Copyright 2010-2011 United States Government as represented by the Administrator of the National Aeronautics and Space Administration. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .. _service_manager_driver: Services, Managers and Drivers ============================== The responsibilities of Services, Managers, and Drivers, can be a bit confusing to people that are new to nova. This document attempts to outline the division of responsibilities to make understanding the system a little bit easier. Currently, Managers and Drivers are specified by flags and loaded using utils.load_object(). This method allows for them to be implemented as singletons, classes, modules or objects. As long as the path specified by the flag leads to an object (or a callable that returns an object) that responds to getattr, it should work as a manager or driver. The :mod:`nova.service` Module ------------------------------ .. automodule:: nova.service :noindex: :members: :undoc-members: :show-inheritance: The :mod:`nova.manager` Module ------------------------------ .. 
automodule:: nova.manager :noindex: :members: :undoc-members: :show-inheritance: Implementation-Specific Drivers ------------------------------- A manager will generally load a driver for some of its tasks. The driver is responsible for specific implementation details. Anything running shell commands on a host, or dealing with other non-python code should probably be happening in a driver. Drivers should not touch the database as the database management is done inside `nova-conductor`. It usually makes sense to define an Abstract Base Class for the specific driver (i.e. VolumeDriver), to define the methods that a different driver would need to implement. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/reference/stable-api.rst0000664000175000017500000001326300000000000021351 0ustar00zuulzuul00000000000000.. Copyright 2015 Intel All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Nova Stable REST API ==================== This document describes both the current state of the Nova REST API -- as of the Pike release -- and also attempts to describe how the Nova team evolved the REST API's implementation over time and removed some of the cruft that has crept in over the years. Background ---------- Nova used to include two distinct frameworks for exposing REST API functionality. Older code is called the "v2 API" and existed in the /nova/api/openstack/compute/legacy_v2/ directory. This code tree was totally removed during Netwon release time frame (14.0.0 and later). Newer code is called the "v2.1 API" and exists in the /nova/api/openstack/compute directory. The v2 API is the old Nova REST API. It is mostly replaced by v2.1 API. The v2.1 API is the new Nova REST API with a set of improvements which includes `Microversion `_ and standardized validation of inputs using JSON-Schema. Also the v2.1 API is totally backwards compatible with the v2 API (That is the reason we call it as v2.1 API). Current Stable API ------------------ * Nova v2.1 API + Microversion (v2.1 APIs are backward-compatible with v2 API, but more strict validation) * /v2 & /v2.1 endpoint supported * v2 compatible mode for old v2 users Evolution of Nova REST API -------------------------- .. image:: /_static/images/evolution-of-api.png Nova v2 API + Extensions ************************ Nova used to have v2 API. In v2 API, there was a concept called 'extension'. An operator can use it to enable/disable part of Nova REST API based on requirements. An end user may query the '/extensions' API to discover what *API functionality* is supported by the Nova deployment. Unfortunately, because v2 API extensions could be enabled or disabled from one deployment to another -- as well as custom API extensions added to one deployment and not another -- it was impossible for an end user to know what the OpenStack Compute API actually included. No two OpenStack deployments were consistent, which made cloud interoperability impossible. 
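For illustration only, such a discovery query against a particular deployment can be issued like this (one possible client invocation; the '/extensions' API itself is deprecated, as noted later in this document):

.. code-block:: console

   $ openstack extension list --compute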
In the Newton release, stevedore loading of API extension plugins was deprecated and marked for removal. In the Newton release, v2 API code base has been removed and /v2 endpoints were directed to v2.1 code base. v2 API compatibility mode based on v2.1 API ******************************************* v2.1 API is exactly same as v2 API except strong input validation with no additional request parameter allowed and Microversion feature. Since Newton, '/v2' endpoint also started using v2.1 API implementation. But to keep the backward compatibility of v2 API, '/v2' endpoint should not return error on additional request parameter or any new headers for Microversion. v2 API must be same as it has been since starting. To achieve that behavior legacy v2 compatibility mode has been introduced. v2 compatibility mode is based on v2.1 implementation with below difference: * Skip additionalProperties checks in request body * Ignore Microversion headers in request * No Microversion headers in response Nova v2.1 API + Microversion **************************** In the Kilo release, nova v2.1 API has been released. v2.1 API is supposed to be backward compatible with v2 API with strong input validation using JSON Schema. v2.1 API comes up with microversion concept which is a way to version the API changes. Each new feature or modification in API has to done via microversion bump. API extensions concept was deprecated from the v2.1 API, are no longer needed to evolve the REST API, and no new API functionality should use the API extension classes to implement new functionality. Instead, new API functionality should be added via microversion concept and use the microversioning decorators to add or change the REST API. v2.1 API had plugin framework which was using stevedore to load Nova REST API extensions instead of old V2 handcrafted extension load mechanism. There was an argument that the plugin framework supported extensibility in the Nova API to allow deployers to publish custom API resources. In the Newton release, config options of blacklist and whitelist extensions and stevedore things were deprecated and marked for removal. In Pike, stevedore based plugin framework has been removed and url mapping is done with plain router list. There is no more dynamic magic of detecting API implementation for url. See :doc:`Extending the API ` for more information. The '/extensions' API exposed the list of enabled API functions to users by GET method. However as the above, new API extensions should not be added to the list of this API. The '/extensions' API is frozen in Nova V2.1 API and is `deprecated `_. Things which are History now **************************** As of the Pike release, many deprecated things have been removed and became history in Nova API world: * v2 legacy framework * API extensions concept * stevedore magic to load the extension/plugin dynamically * Configurable way to enable/disable APIs extensions ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/reference/threading.rst0000664000175000017500000000521500000000000021273 0ustar00zuulzuul00000000000000Threading model =============== All OpenStack services use *green thread* model of threading, implemented through using the Python `eventlet `_ and `greenlet `_ libraries. Green threads use a cooperative model of threading: thread context switches can only occur when specific eventlet or greenlet library calls are made (e.g., sleep, certain I/O calls). 
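As a minimal, self-contained illustration of this cooperative model (this is not nova code; the worker function and values are made up), two green threads only interleave where they explicitly yield via ``eventlet.sleep()``::

    import eventlet

    def worker(name):
        for i in range(3):
            print(name, i)
            # Explicit yield point: without this, each green thread would
            # run to completion before the other was ever scheduled.
            eventlet.sleep(0)

    threads = [eventlet.spawn(worker, name) for name in ('a', 'b')]
    for thread in threads:
        thread.wait()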
From the operating system's point of view, each OpenStack service runs in a single thread. The use of green threads reduces the likelihood of race conditions, but does not completely eliminate them. In some cases, you may need to use the ``@lockutils.synchronized(...)`` decorator to avoid races. In addition, since there is only one operating system thread, a call that blocks that main thread will block the entire process. Yielding the thread in long-running tasks ----------------------------------------- If a code path takes a long time to execute and does not contain any methods that trigger an eventlet context switch, the long-running thread will block any pending threads. This scenario can be avoided by adding calls to the eventlet sleep method in the long-running code path. The sleep call will trigger a context switch if there are pending threads, and using an argument of 0 will avoid introducing delays in the case that there is only a single green thread:: from eventlet import greenthread ... greenthread.sleep(0) In current code, time.sleep(0) does the same thing as greenthread.sleep(0) if time module is patched through eventlet.monkey_patch(). To be explicit, we recommend contributors use ``greenthread.sleep()`` instead of ``time.sleep()``. MySQL access and eventlet ------------------------- There are some MySQL DB API drivers for oslo.db, like `PyMySQL`_, MySQL-python etc. PyMySQL is the default MySQL DB API driver for oslo.db, and it works well with eventlet. MySQL-python uses an external C library for accessing the MySQL database. Since eventlet cannot use monkey-patching to intercept blocking calls in a C library, so queries to the MySQL database will block the main thread of a service. The Diablo release contained a thread-pooling implementation that did not block, but this implementation resulted in a `bug`_ and was removed. See this `mailing list thread`_ for a discussion of this issue, including a discussion of the `impact on performance`_. .. _bug: https://bugs.launchpad.net/nova/+bug/838581 .. _mailing list thread: https://lists.launchpad.net/openstack/msg08118.html .. _impact on performance: https://lists.launchpad.net/openstack/msg08217.html .. _PyMySQL: https://wiki.openstack.org/wiki/PyMySQL_evaluation ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/reference/update-provider-tree.rst0000664000175000017500000002377000000000000023403 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ==================================== ComputeDriver.update_provider_tree ==================================== This provides details on the ``ComputeDriver`` abstract method ``update_provider_tree`` for developers implementing this method in their own virt drivers. 
Background ---------- In the movement towards using placement for scheduling and resource management, the virt driver method ``get_available_resource`` was initially superseded by ``get_inventory`` (now gone), whereby the driver could specify its inventory in terms understood by placement. In Queens, a ``get_traits`` driver method was added. But ``get_inventory`` was limited to expressing only inventory (not traits or aggregates). And both of these methods were limited to the resource provider corresponding to the compute node. Developments such as Nested Resource Providers necessitate the ability for the virt driver to have deeper control over what the resource tracker configures in placement on behalf of the compute node. This need is filled by the virt driver method ``update_provider_tree`` and its consumption by the resource tracker, allowing full control over the placement representation of the compute node and its associated providers. The Method ---------- ``update_provider_tree`` accepts the following parameters: * A ``nova.compute.provider_tree.ProviderTree`` object representing all the providers in the tree associated with the compute node, and any sharing providers (those with the ``MISC_SHARES_VIA_AGGREGATE`` trait) associated via aggregate with any of those providers (but not *their* tree- or aggregate-associated providers), as currently known by placement. This object is fully owned by the ``update_provider_tree`` method, and can therefore be modified without locking/concurrency considerations. In other words, the parameter is passed *by reference* with the expectation that the virt driver will modify the object. Note, however, that it may contain providers not directly owned/controlled by the compute host. Care must be taken not to remove or modify such providers inadvertently. In addition, providers may be associated with traits and/or aggregates maintained by outside agents. The ``update_provider_tree`` method must therefore also be careful only to add/remove traits/aggregates it explicitly controls. * String name of the compute node (i.e. ``ComputeNode.hypervisor_hostname``) for which the caller is requesting updated provider information. Drivers may use this to help identify the compute node provider in the ProviderTree. Drivers managing more than one node (e.g. ironic) may also use it as a cue to indicate which node is being processed by the caller. * Dictionary of ``allocations`` data of the form: .. code:: { $CONSUMER_UUID: { # The shape of each "allocations" dict below is identical # to the return from GET /allocations/{consumer_uuid} "allocations": { $RP_UUID: { "generation": $RP_GEN, "resources": { $RESOURCE_CLASS: $AMOUNT, ... }, }, ... }, "project_id": $PROJ_ID, "user_id": $USER_ID, "consumer_generation": $CONSUMER_GEN, }, ... } If ``None``, and the method determines that any inventory needs to be moved (from one provider to another and/or to a different resource class), the ``ReshapeNeeded`` exception must be raised. Otherwise, this dict must be edited in place to indicate the desired final state of allocations. Drivers should *only* edit allocation records for providers whose inventories are being affected by the reshape operation. For more information about the reshape operation, refer to the `spec `_. The virt driver is expected to update the ProviderTree object with current resource provider and inventory information. 
When the method returns, the ProviderTree should represent the correct hierarchy of nested resource providers associated with this compute node, as well as the inventory, aggregates, and traits associated with those resource providers. .. note:: Despite the name, a ProviderTree instance may in fact contain more than one tree. For purposes of this specification, the ProviderTree passed to ``update_provider_tree`` will contain: * the entire tree associated with the compute node; and * any sharing providers (those with the ``MISC_SHARES_VIA_AGGREGATE`` trait) which are associated via aggregate with any of the providers in the compute node's tree. The sharing providers will be presented as lone roots in the ProviderTree, even if they happen to be part of a tree themselves. Consider the example below. ``SSP`` is a shared storage provider and ``BW1`` and ``BW2`` are shared bandwidth providers; all three have the ``MISC_SHARES_VIA_AGGREGATE`` trait:: CN1 SHR_ROOT CN2 / \ agg1 / /\ agg1 / \ NUMA1 NUMA2--------SSP--/--\-----------NUMA1 NUMA2 / \ / \ / \ / \ / \ PF1 PF2 PF3 PF4--------BW1 BW2------PF1 PF2 PF3 PF4 agg2 agg3 When ``update_provider_tree`` is invoked for ``CN1``, it is passed a ProviderTree containing:: CN1 (root) / \ agg1 NUMA1 NUMA2-------SSP (root) / \ / \ PF1 PF2 PF3 PF4------BW1 (root) agg2 Driver implementations of ``update_provider_tree`` are expected to use public ``ProviderTree`` methods to effect changes to the provider tree passed in. Some of the methods which may be useful are as follows: * ``new_root``: Add a new root provider to the tree. * ``new_child``: Add a new child under an existing provider. * ``data``: Access information (name, UUID, parent, inventory, traits, aggregates) about a provider in the tree. * ``remove``: Remove a provider **and its descendants** from the tree. Use caution in multiple-ownership scenarios. * ``update_inventory``: Set the inventory for a provider. * ``add_traits``, ``remove_traits``: Set/unset virt-owned traits for a provider. * ``add_aggregates``, ``remove_aggregates``: Set/unset virt-owned aggregate associations for a provider. .. note:: There is no supported mechanism for ``update_provider_tree`` to effect changes to allocations. This is intentional: in Nova, allocations are managed exclusively outside of virt. (Usually by the scheduler; sometimes - e.g. for migrations - by the conductor.) Porting from get_inventory ~~~~~~~~~~~~~~~~~~~~~~~~~~ Virt driver developers wishing to move from ``get_inventory`` to ``update_provider_tree`` should use the ``ProviderTree.update_inventory`` method, specifying the compute node as the provider and the same inventory as returned by ``get_inventory``. For example: .. code:: def get_inventory(self, nodename): inv_data = { 'VCPU': { ... }, 'MEMORY_MB': { ... }, 'DISK_GB': { ... }, } return inv_data would become: .. code:: def update_provider_tree(self, provider_tree, nodename, allocations=None): inv_data = { 'VCPU': { ... }, 'MEMORY_MB': { ... }, 'DISK_GB': { ... }, } provider_tree.update_inventory(nodename, inv_data) When reporting inventory for the standard resource classes ``VCPU``, ``MEMORY_MB`` and ``DISK_GB``, implementors of ``update_provider_tree`` may need to set the ``allocation_ratio`` and ``reserved`` values in the ``inv_data`` dict based on configuration to reflect changes on the compute for allocation ratios and reserved resource amounts back to the placement service. 
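For example, a driver might report its ``VCPU`` inventory along the following lines (a sketch with made-up numbers; which configuration options are consulted and which values are reported are driver specific):

.. code::

    def update_provider_tree(self, provider_tree, nodename, allocations=None):
        inv_data = {
            'VCPU': {
                'total': 16,
                'min_unit': 1,
                'max_unit': 16,
                'step_size': 1,
                # Reflect operator-configured overcommit and reserved
                # amounts back to the placement service.
                'allocation_ratio': 16.0,
                'reserved': 0,
            },
            # 'MEMORY_MB' and 'DISK_GB' entries would follow the same shape.
        }
        provider_tree.update_inventory(nodename, inv_data)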
Porting from get_traits ~~~~~~~~~~~~~~~~~~~~~~~ To replace ``get_traits``, developers should use the ``ProviderTree.add_traits`` method, specifying the compute node as the provider and the same traits as returned by ``get_traits``. For example: .. code:: def get_traits(self, nodename): traits = ['HW_CPU_X86_AVX', 'HW_CPU_X86_AVX2', 'CUSTOM_GOLD'] return traits would become: .. code:: def update_provider_tree(self, provider_tree, nodename, allocations=None): provider_tree.add_traits( nodename, 'HW_CPU_X86_AVX', 'HW_CPU_X86_AVX2', 'CUSTOM_GOLD') .. _taxonomy_of_traits_and_capabilities: Taxonomy of traits and capabilities ----------------------------------- There are various types of traits: - Some are standard (registered in `os-traits `_); others are custom. - Some are owned by the compute service; others can be managed by operators. - Some come from driver-supported capabilities, via a mechanism which was `introduced `_ to convert them to standard traits on the compute node resource provider. This mechanism is :ref:`documented in the configuration guide `. This diagram may shed further light on how these traits relate to each other and how they are managed. .. figure:: /_static/images/traits-taxonomy.svg :width: 800 :alt: Venn diagram showing taxonomy of traits and capabilities ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/reference/upgrade-checks.rst0000664000175000017500000002771400000000000022223 0ustar00zuulzuul00000000000000============== Upgrade checks ============== Nova provides automated :ref:`upgrade check tooling ` to assist deployment tools in verifying critical parts of the deployment, especially when it comes to major changes during upgrades that require operator intervention. This guide covers the background on nova's upgrade check tooling, how it is used, and what to look for in writing new checks. Background ========== Nova has historically supported offline database schema migrations (``nova-manage db sync``) and :ref:`online data migrations ` during upgrades. The ``nova-status upgrade check`` command was introduced in the 15.0.0 Ocata release to aid in the verification of two major required changes in that release, namely Placement and Cells v2. Integration with the Placement service and deploying Cells v2 was optional starting in the 14.0.0 Newton release and made required in the Ocata release. The nova team working on these changes knew that there were required deployment changes to successfully upgrade to Ocata. In addition, the required deployment changes were not things that could simply be verified in a database migration script, e.g. a migration script should not make REST API calls to Placement. So ``nova-status upgrade check`` was written to provide an automated "pre-flight" check to verify that required deployment steps were performed prior to upgrading to Ocata. Reference the `Ocata changes`_ for implementation details. .. _Ocata changes: https://review.opendev.org/#/q/topic:bp/resource-providers-scheduler-db-filters+status:merged+file:%255Enova/cmd/status.py Guidelines ========== * The checks should be able to run within a virtual environment or container. All that is required is a full configuration file, similar to running other ``nova-manage`` type administration commands. In the case of nova, this means having :oslo.config:group:`api_database`, :oslo.config:group:`placement`, etc sections configured. 
* Candidates for automated upgrade checks are things in a project's upgrade release notes which can be verified via the database. For example, when upgrading to Cells v2 in Ocata, one required step was creating "cell mappings" for ``cell0`` and ``cell1``. This can easily be verified by checking the contents of the ``cell_mappings`` table in the ``nova_api`` database. * Checks will query the database(s) and potentially REST APIs (depending on the check) but should not expect to run RPC calls. For example, a check should not require that the ``nova-compute`` service is running on a particular host. * Checks are typically meant to be run before re-starting and upgrading to new service code, which is how `grenade uses them`_, but they can also be run as a :ref:`post-install verify step ` which is how `openstack-ansible`_ also uses them. The high-level set of upgrade steps for upgrading nova in grenade is: * Install new code * Sync the database schema for new models (``nova-manage api_db sync``; ``nova-manage db sync``) * Run the online data migrations (``nova-manage db online_data_migrations``) * Run the upgrade check (``nova-status upgrade check``) * Restart services with new code * Checks must be idempotent so they can be run repeatedly and the results are always based on the latest data. This allows an operator to run the checks, fix any issues reported, and then iterate until the status check no longer reports any issues. * Checks which cannot easily, or should not, be run within offline database migrations are a good candidate for these CLI-driven checks. For example, ``instances`` records are in the cell database and for each instance there should be a corresponding ``request_specs`` table entry in the ``nova_api`` database. A ``nova-manage db online_data_migrations`` routine was added in the Newton release to back-fill request specs for existing instances, and `in Rocky`_ an upgrade check was added to make sure all non-deleted instances have a request spec so compatibility code can be removed in Stein. In older releases of nova we would have added a `blocker migration`_ as part of the database schema migrations to make sure the online data migrations had been completed before the upgrade could proceed. .. note:: Usage of ``nova-status upgrade check`` does not preclude the need for blocker migrations within a given database, but in the case of request specs the check spans multiple databases and was a better fit for the nova-status tooling. * All checks should have an accompanying upgrade release note. .. _grenade uses them: https://github.com/openstack-dev/grenade/blob/dc7f4a4ba/projects/60_nova/upgrade.sh#L96 .. _openstack-ansible: https://review.opendev.org/#/c/575125/ .. _in Rocky: https://review.opendev.org/#/c/581813/ .. _blocker migration: https://review.opendev.org/#/c/289450/ Structure ========= There is no graph logic for checks, meaning each check is meant to be run independently of other checks in the same set. For example, a project could have five checks which run serially but that does not mean the second check in the set depends on the results of the first check in the set, or the third check depends on the second, and so on. The base framework is fairly simple as can be seen from the `initial change`_. Each check is registered in the ``_upgrade_checks`` variable and the ``check`` method executes each check and records the result. The most severe result is recorded for the final return code. 
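A self-contained sketch of that registration pattern is shown below. The class and attribute names mirror the description above but are illustrative rather than the exact nova implementation:

.. code-block:: python

   import enum


   class UpgradeCheckCode(enum.IntEnum):
       """Return codes, ordered from least to most severe."""
       SUCCESS = 0
       WARNING = 1
       FAILURE = 2


   class UpgradeCheckResult(object):
       def __init__(self, code, details=None):
           self.code = code
           self.details = details


   class UpgradeCommands(object):

       def _check_example(self):
           # A real check would query the database(s) and/or REST APIs here
           # and pick the appropriate code and details.
           return UpgradeCheckResult(UpgradeCheckCode.SUCCESS)

       # Each entry is a (friendly name, check method) pair.
       _upgrade_checks = (
           ('Example', _check_example),
       )

       def check(self):
           # Run every registered check and keep the most severe code as
           # the final return code of the command.
           return int(max(func(self).code for _, func in self._upgrade_checks))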
There are one of three possible results per check: * ``Success``: All upgrade readiness checks passed successfully and there is nothing to do. * ``Warning``: At least one check encountered an issue and requires further investigation. This is considered a warning but the upgrade may be OK. * ``Failure``: There was an upgrade status check failure that needs to be investigated. This should be considered something that stops an upgrade. The ``UpgradeCheckResult`` object provides for adding details when there is a warning or failure result which generally should refer to how to resolve the failure, e.g. maybe ``nova-manage db online_data_migrations`` is incomplete and needs to be run again. Using the `cells v2 check`_ as an example, there are really two checks involved: 1. Do the cell0 and cell1 mappings exist? 2. Do host mappings exist in the API database if there are compute node records in the cell database? Failing either check results in a ``Failure`` status for that check and return code of ``2`` for the overall run. The initial `placement check`_ provides an example of a warning response. In that check, if there are fewer resource providers in Placement than there are compute nodes in the cell database(s), the deployment may be underutilized because the ``nova-scheduler`` is using the Placement service to determine candidate hosts for scheduling. Warning results are good for cases where scenarios are known to run through a rolling upgrade process, e.g. ``nova-compute`` being configured to report resource provider information into the Placement service. These are things that should be investigated and completed at some point, but might not cause any immediate failures. The results feed into a standard output for the checks: .. code-block:: console $ nova-status upgrade check +----------------------------------------------------+ | Upgrade Check Results | +----------------------------------------------------+ | Check: Cells v2 | | Result: Success | | Details: None | +----------------------------------------------------+ | Check: Placement API | | Result: Failure | | Details: There is no placement-api endpoint in the | | service catalog. | +----------------------------------------------------+ .. note:: Long-term the framework for upgrade checks will come from the `oslo.upgradecheck library`_. .. _initial change: https://review.opendev.org/#/c/411517/ .. _cells v2 check: https://review.opendev.org/#/c/411525/ .. _placement check: https://review.opendev.org/#/c/413250/ .. _oslo.upgradecheck library: http://opendev.org/openstack/oslo.upgradecheck/ Other ===== Documentation ------------- Each check should be documented in the :ref:`history section ` of the CLI guide and have a release note. This is important since the checks can be run in an isolated environment apart from the actual deployed version of the code and since the checks should be idempotent, the history / change log is good for knowing what is being validated. Backports --------- Sometimes upgrade checks can be backported to aid in pre-empting bugs on stable branches. For example, a check was added for `bug 1759316`_ in Rocky which was also backported to stable/queens in case anyone upgrading from Pike to Queens would hit the same issue. Backportable checks are generally only made for latent bugs since someone who has already passed checks and upgraded to a given stable branch should not start failing after a patch release on that same branch. For this reason, any check being backported should have a release note with it. .. 
_bug 1759316: https://bugs.launchpad.net/nova/+bug/1759316 Other projects -------------- A community-wide `goal for the Stein release`_ is adding the same type of ``$PROJECT-status upgrade check`` tooling to other projects to ease in upgrading OpenStack across the board. So while the guidelines in this document are primarily specific to nova, they should apply generically to other projects wishing to incorporate the same tooling. .. _goal for the Stein release: https://governance.openstack.org/tc/goals/stein/upgrade-checkers.html FAQs ---- #. How is the nova-status upgrade script packaged and deployed? There is a ``console_scripts`` entry for ``nova-status`` in the ``setup.cfg`` file. #. Why are there multiple parts to the command structure, i.e. "upgrade" and "check"? This is an artifact of how the ``nova-manage`` command is structured which has categories of sub-commands, like ``nova-manage db`` is a sub-category made up of other sub-commands like ``nova-manage db sync``. The ``nova-status upgrade check`` command was written in the same way for consistency and extensibility if other sub-commands need to be added later. #. Where should the documentation live for projects other than nova? As part of the standard OpenStack project `documentation guidelines`_ the command should be documented under ``doc/source/cli`` in each project repo. #. Why is the upgrade check command not part of the standard python-\*client CLIs? The ``nova-status`` command was modeled after the ``nova-manage`` command which is meant to be admin-only and has direct access to the database, unlike other CLI packages like python-novaclient which requires a token and communicates with nova over the REST API. Because of this, it is also possible to write commands in ``nova-manage`` and ``nova-status`` that can work while the API service is down for maintenance. #. Can upgrade checks only be for N-1 to N version upgrades? No, not necessarily. The upgrade checks are also an essential part of `fast-forward upgrades`_ to make sure that as you roll through each release performing schema (data model) updates and data migrations that you are also completing all of the necessary changes. For example, if you are fast forward upgrading from Ocata to Rocky, something could have been added, deprecated or removed in Pike or Queens and a pre-upgrade check is a way to make sure the necessary steps were taking while upgrading through those releases before restarting the Rocky code at the end. .. _documentation guidelines: https://docs.openstack.org/doc-contrib-guide/project-guides.html .. _fast-forward upgrades: https://wiki.openstack.org/wiki/Fast_forward_upgrades ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/reference/vm-states.rst0000664000175000017500000001213600000000000021251 0ustar00zuulzuul00000000000000Virtual Machine States and Transitions ======================================= The following diagrams and tables show the required virtual machine (VM) states and task states for various commands issued by the user. Allowed State Transitions -------------------------- .. 
graphviz:: digraph states { graph [pad=".35", ranksep="0.65", nodesep="0.55", concentrate=true]; node [fontsize=10 fontname="Monospace"]; edge [arrowhead="normal", arrowsize="0.8"]; label="All states are allowed to transition to DELETED and ERROR."; forcelabels=true; labelloc=bottom; labeljust=left; /* states */ building [label="BUILDING"] active [label="ACTIVE"] paused [label="PAUSED"] suspended [label="SUSPENDED"] stopped [label="STOPPED"] rescued [label="RESCUED"] resized [label="RESIZED"] soft_deleted [label="SOFT_DELETED"] shelved [label="SHELVED"] shelved_offloaded [label="SHELVED_OFFLOADED"] deleted [label="DELETED", color="red"] error [label="ERROR", color="red"] /* transitions [action] */ building -> active active -> active [headport=nw, tailport=ne] // manual layout active -> soft_deleted [tailport=e] // prevent arrowhead overlap active -> suspended active -> paused [tailport=w] // prevent arrowhead overlap active -> stopped active -> shelved active -> shelved_offloaded active -> rescued active -> resized soft_deleted -> active [headport=e] // prevent arrowhead overlap suspended -> active suspended -> shelved suspended -> shelved_offloaded paused -> active paused -> shelved paused -> shelved_offloaded stopped -> active stopped -> stopped [headport=nw, tailport=ne] // manual layout stopped -> resized stopped -> rescued stopped -> shelved stopped -> shelved_offloaded resized -> active rescued -> active shelved -> shelved_offloaded shelved -> active shelved_offloaded -> active } Requirements for Commands ------------------------- ================== ================== ==================== ================ Command Req'd VM States Req'd Task States Target State ================== ================== ==================== ================ pause Active, Shutoff, Resize Verify, unset Paused Rescued unpause Paused N/A Active suspend Active, Shutoff N/A Suspended resume Suspended N/A Active rescue Active, Shutoff Resize Verify, unset Rescued unrescue Rescued N/A Active set admin password Active N/A Active rebuild Active, Shutoff Resize Verify, unset Active, Shutoff force delete Soft Deleted N/A Deleted restore Soft Deleted N/A Active soft delete Active, Shutoff, N/A Soft Deleted Error delete Active, Shutoff, N/A Deleted Building, Rescued, Error backup Active, Shutoff N/A Active, Shutoff snapshot Active, Shutoff N/A Active, Shutoff start Shutoff, Stopped N/A Active stop Active, Shutoff, Resize Verify, unset Stopped Rescued reboot Active, Shutoff, Resize Verify, unset Active Rescued resize Active, Shutoff Resize Verify, unset Resized revert resize Active, Shutoff Resize Verify, unset Active confirm resize Active, Shutoff Resize Verify, unset Active ================== ================== ==================== ================ VM states and Possible Commands ------------------------------- ============ ================================================================= VM State Commands ============ ================================================================= Paused unpause Suspended resume Active set admin password, suspend, pause, rescue, rebuild, soft delete, delete, backup, snapshot, stop, reboot, resize, revert resize, confirm resize Shutoff suspend, pause, rescue, rebuild, soft delete, delete, backup, start, snapshot, stop, reboot, resize, revert resize, confirm resize Rescued unrescue, pause Stopped rescue, delete, start Soft Deleted force delete, restore Error soft delete, delete Building delete Rescued delete, stop, reboot ============ 
================================================================= Create Instance States ---------------------- The following diagram shows the sequence of VM states, task states, and power states when a new VM instance is created. .. image:: /_static/images/create-vm-states.svg :alt: Sequence of VM states, task states, and power states when a new VM instance is created. ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2584705 nova-21.2.4/doc/source/user/0000775000175000017500000000000000000000000015611 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/user/architecture.rst0000664000175000017500000000573400000000000021036 0ustar00zuulzuul00000000000000.. Copyright 2010-2011 United States Government as represented by the Administrator of the National Aeronautics and Space Administration. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Nova System Architecture ======================== Nova comprises multiple server processes, each performing different functions. The user-facing interface is a REST API, while internally Nova components communicate via an RPC message passing mechanism. The API servers process REST requests, which typically involve database reads/writes, optionally sending RPC messages to other Nova services, and generating responses to the REST calls. RPC messaging is done via the **oslo.messaging** library, an abstraction on top of message queues. Most of the major nova components can be run on multiple servers, and have a manager that is listening for RPC messages. The one major exception is ``nova-compute``, where a single process runs on the hypervisor it is managing (except when using the VMware or Ironic drivers). The manager also, optionally, has periodic tasks. For more details on our RPC system, please see: :doc:`/reference/rpc` Nova also uses a central database that is (logically) shared between all components. However, to aid upgrade, the DB is accessed through an object layer that ensures an upgraded control plane can still communicate with a ``nova-compute`` running the previous release. To make this possible nova-compute proxies DB requests over RPC to a central manager called ``nova-conductor``. To horizontally expand Nova deployments, we have a deployment sharding concept called cells. For more information please see: :doc:`cells` Components ---------- Below you will find a helpful explanation of the key components of a typical Nova deployment. .. image:: /_static/images/architecture.svg :width: 100% * DB: sql database for data storage. * API: component that receives HTTP requests, converts commands and communicates with other components via the **oslo.messaging** queue or HTTP. * Scheduler: decides which host gets each instance. * Compute: manages communication with hypervisor and virtual machines. 
* Conductor: handles requests that need coordination (build/resize), acts as a database proxy, or handles object conversions. * :placement-doc:`Placement <>`: tracks resource provider inventories and usages. While all services are designed to be horizontally scalable, you should have significantly more computes than anything else. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/user/availability-zones.rst0000664000175000017500000000241500000000000022153 0ustar00zuulzuul00000000000000================== Availability zones ================== Availability Zones are an end-user visible logical abstraction for partitioning a cloud without knowing the physical infrastructure. Availability zones can be used to partition a cloud on arbitrary factors, such as location (country, datacenter, rack), network layout and/or power source. Because of the flexibility, the names and purposes of availability zones can vary massively between clouds. In addition, other services, such as the :neutron-doc:`networking service <>` and the :cinder-doc:`block storage service <>`, also provide an availability zone feature. However, the implementation of these features differs vastly between these different services. Consult the documentation for these other services for more information on their implementation of this feature. Usage ----- Availability zones can only be created and configured by an admin but they can be used by an end-user when creating an instance. For example: .. code-block:: console $ openstack server create --availability-zone ZONE ... SERVER It is also possible to specify a destination host and/or node using this command; however, this is an admin-only operation by default. For more information, see :ref:`using-availability-zones-to-select-hosts`. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/user/block-device-mapping.rst0000664000175000017500000002735700000000000022341 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Block Device Mapping in Nova ============================ Nova has a concept of block devices that can be exposed to cloud instances. There are several types of block devices an instance can have (we will go into more details about this later in this document), and which ones are available depends on a particular deployment and the usage limitations set for tenants and users. Block device mapping is a way to organize and keep data about all of the block devices an instance has. When we talk about block device mapping, we usually refer to one of two things 1. API/CLI structure and syntax for specifying block devices for an instance boot request 2. The data structure internal to Nova that is used for recording and keeping, which is ultimately persisted in the block_device_mapping table. However, Nova internally has several "slightly" different formats for representing the same data. 
All of them are documented in the code and or presented by a distinct set of classes, but not knowing that they exist might trip up people reading the code. So in addition to BlockDeviceMapping [1]_ objects that mirror the database schema, we have: 2.1 The API format - this is the set of raw key-value pairs received from the API client, and is almost immediately transformed into the object; however, some validations are done using this format. We will refer to this format as the 'API BDMs' from now on. 2.2 The virt driver format - this is the format defined by the classes in :mod: `nova.virt.block_device`. This format is used and expected by the code in the various virt drivers. These classes, in addition to exposing a different format (mimicking the Python dict interface), also provide a place to bundle some functionality common to certain types of block devices (for example attaching volumes which has to interact with both Cinder and the virt driver code). We will refer to this format as 'Driver BDMs' from now on. .. note:: The maximum limit on the number of disk devices allowed to attach to a single server is configurable with the option :oslo.config:option:`compute.max_disk_devices_to_attach`. Data format and its history ---------------------------- In the early days of Nova, block device mapping general structure closely mirrored that of the EC2 API. During the Havana release of Nova, block device handling code, and in turn the block device mapping structure, had work done on improving the generality and usefulness. These improvements included exposing additional details and features in the API. In order to facilitate this, a new extension was added to the v2 API called `BlockDeviceMappingV2Boot` [2]_, that added an additional `block_device_mapping_v2` field to the instance boot API request. Block device mapping v1 (aka legacy) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ This was the original format that supported only cinder volumes (similar to how EC2 block devices support only EBS volumes). Every entry was keyed by device name (we will discuss why this was problematic in its own section later on this page), and would accept only: * UUID of the Cinder volume or snapshot * Type field - used only to distinguish between volumes and Cinder volume snapshots * Optional size field * Optional `delete_on_termination` flag While all of Nova internal code only uses and stores the new data structure, we still need to handle API requests that use the legacy format. This is handled by the Nova API service on every request. As we will see later, since block device mapping information can also be stored in the image metadata in Glance, this is another place where we need to handle the v1 format. The code to handle legacy conversions is part of the :mod: `nova.block_device` module. Intermezzo - problem with device names ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Using device names as the primary per-instance identifier, and exposing them in the API, is problematic for Nova mostly because several hypervisors Nova supports with its drivers can't guarantee that the device names the guest OS assigns are the ones the user requested from Nova. Exposing such a detail in the public API of Nova is obviously not ideal, but it needed to stay for backwards compatibility. It is also required for some (slightly obscure) features around overloading a block device in a Glance image when booting an instance [3]_. 
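To make this concrete, a legacy (v1) entry of the kind described above, keyed by the device name the user wants exposed to the guest, might look like the following (shown as a Python dict purely for illustration; the volume UUID and size are made up):

.. code-block:: python

   legacy_bdm = {
       'device_name': '/dev/vdb',
       'volume_id': 'a26887c6-c47b-4654-a0b8-58c1b95c7f60',
       'volume_size': 10,
       'delete_on_termination': True,
   }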
The plan for fixing this was to allow users to not specify the device name of a block device, and Nova will determine it (with the help of the virt driver), so that it can still be discovered through the API and used when necessary, like for the features mentioned above (and preferably only then). Another use for specifying the device name was to allow the "boot from volume" functionality, by specifying a device name that matches the root device name for the instance (usually `/dev/vda`). Currently (mid Liberty) users are discouraged from specifying device names for all calls requiring or allowing block device mapping, except when trying to override the image block device mapping on instance boot, and it will likely remain like that in the future. Libvirt device driver will outright override any device names passed with it's own values. Block device mapping v2 ^^^^^^^^^^^^^^^^^^^^^^^ New format was introduced in an attempt to solve issues with the original block device mapping format discussed above, and also to allow for more flexibility and addition of features that were not possible with the simple format we had. New block device mapping is a list of dictionaries containing the following fields (in addition to the ones that were already there): * source_type - this can have one of the following values: * `image` * `volume` * `snapshot` * `blank` * destination_type - this can have one of the following values: * `local` * `volume` * guest_format - Tells Nova how/if to format the device prior to attaching, should be only used with blank local images. Denotes a swap disk if the value is `swap`. * device_name - See the previous section for a more in depth explanation of this - currently best left empty (not specified that is), unless the user wants to override the existing device specified in the image metadata. In case of Libvirt, even when passed in with the purpose of overriding the existing image metadata, final set of device names for the instance may still get changed by the driver. * disk_bus and device_type - low level details that some hypervisors (currently only libvirt) may support. Some example disk_bus values can be: `ide`, `usb`, `virtio`, `scsi`, while device_type may be `disk`, `cdrom`, `floppy`, `lun`. This is not an exhaustive list as it depends on the virtualization driver, and may change as more support is added. Leaving these empty is the most common thing to do. * boot_index - Defines the order in which a hypervisor will try devices when attempting to boot the guest from storage. Each device which is capable of being used as boot device should be given a unique boot index, starting from 0 in ascending order. Some hypervisors may not support booting from multiple devices, so will only consider the device with boot index of 0. Some hypervisors will support booting from multiple devices, but only if they are of different types - eg a disk and CD-ROM. Setting a negative value or None indicates that the device should not be used for booting. The simplest usage is to set it to 0 for the boot device and leave it as None for any other devices. * volume_type - Added in microversion 2.67 to the servers create API to support specifying volume type when booting instances. When we snapshot a volume-backed server, the block_device_mapping_v2 image metadata will include the volume_type from the BDM record so if the user then creates another server from that snapshot, the volume that nova creates from that snapshot will use the same volume_type. 
If a user wishes to change that volume type in the image metadata, they can do so via the image API. Valid source / destination combinations ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Combination of the ``source_type`` and ``destination_type`` will define the kind of block device the entry is referring to. The following combinations are supported: * `image` -> `local` - this is only currently reserved for the entry referring to the Glance image that the instance is being booted with (it should also be marked as a boot device). It is also worth noting that an API request that specifies this, also has to provide the same Glance uuid as the `image_ref` parameter to the boot request (this is done for backwards compatibility and may be changed in the future). This functionality might be extended to specify additional Glance images to be attached to an instance after boot (similar to kernel/ramdisk images) but this functionality is not supported by any of the current drivers. * `volume` -> `volume` - this is just a Cinder volume to be attached to the instance. It can be marked as a boot device. * `snapshot` -> `volume` - this works exactly as passing `type=snap` does. It would create a volume from a Cinder volume snapshot and attach that volume to the instance. Can be marked bootable. * `image` -> `volume` - As one would imagine, this would download a Glance image to a cinder volume and attach it to an instance. Can also be marked as bootable. This is really only a shortcut for creating a volume out of an image before booting an instance with the newly created volume. * `blank` -> `volume` - Creates a blank Cinder volume and attaches it. This will also require the volume size to be set. * `blank` -> `local` - Depending on the guest_format field (see below), this will either mean an ephemeral blank disk on hypervisor local storage, or a swap disk (instances can have only one of those). Nova will not allow mixing of BDMv1 and BDMv2 in a single request, and will do basic validation to make sure that the requested block device mapping is valid before accepting a boot request. .. [1] In addition to the BlockDeviceMapping Nova object, we also have the BlockDeviceDict class in :mod: `nova.block_device` module. This class handles transforming and validating the API BDM format. .. [2] This work predates API microversions and thus the only way to add it was by means of an API extension. .. [3] This is a feature that the EC2 API offers as well and has been in Nova for a long time, although it has been broken in several releases. More info can be found on `this bug ` FAQs ---- 1. Is it possible to configure nova to automatically use cinder to back all root disks with volumes? No, there is nothing automatic within nova that converts a non-:term:`boot-from-volume ` request to convert the image to a root volume. Several ideas have been discussed over time which are captured in the spec for `volume-backed flavors`_. However, if you wish to force users to always create volume-backed servers, you can configure the API service by setting :oslo.config:option:`max_local_block_devices` to 0. This will result in any non-boot-from-volume server create request to fail with a 400 response. .. _volume-backed flavors: https://review.opendev.org/511965/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/user/cells.rst0000664000175000017500000010234100000000000017446 0ustar00zuulzuul00000000000000.. 
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .. _cells: ======= Cells ======= Before reading further, there is a nice overview presentation_ that Andrew Laski gave at the Austin (Newton) summit which is worth watching. .. _presentation: https://www.openstack.org/videos/summits/austin-2016/nova-cells-v2-whats-going-on .. _cells-v2: Cells V2 ======== * `Newton Summit Video - Nova Cells V2: What's Going On? `_ * `Pike Summit Video - Scaling Nova: How CellsV2 Affects Your Deployment `_ * `Queens Summit Video - Add Cellsv2 to your existing Nova deployment `_ * `Rocky Summit Video - Moving from CellsV1 to CellsV2 at CERN `_ * `Stein Summit Video - Scaling Nova with CellsV2: The Nova Developer and the CERN Operator perspective `_ Manifesto ~~~~~~~~~ Proposal -------- Right now, when a request hits the Nova API for a particular instance, the instance information is fetched from the database, which contains the hostname of the compute node on which the instance currently lives. If the request needs to take action on the instance (which is most of them), the hostname is used to calculate the name of a queue, and a message is written there which finds its way to the proper compute node. The meat of this proposal is changing the above hostname lookup into two parts that yield three pieces of information instead of one. Basically, instead of merely looking up the *name* of the compute node on which an instance lives, we will also obtain database and queue connection information. Thus, when asked to take action on instance $foo, we will: 1. Lookup the three-tuple of (database, queue, hostname) for that instance 2. Connect to that database and fetch the instance record 3. Connect to the queue and send the message to the proper hostname queue The above differs from the current organization in two ways. First, we need to do two database lookups before we know where the instance lives. Second, we need to demand-connect to the appropriate database and queue. Both of these have performance implications, but we believe we can mitigate the impacts through the use of things like a memcache of instance mapping information and pooling of connections to database and queue systems. The number of cells will always be much smaller than the number of instances. There are availability implications with this change since something like a 'nova list' which might query multiple cells could end up with a partial result if there is a database failure in a cell. See :doc:`/admin/cells` for knowing more about the recommended practices under such situations. A database failure within a cell would cause larger issues than a partial list result so the expectation is that it would be addressed quickly and cellsv2 will handle it by indicating in the response that the data may not be complete. Since this is very similar to what we have with current cells, in terms of organization of resources, we have decided to call this "cellsv2" for disambiguation. After this work is complete there will no longer be a "no cells" deployment. 
The default installation of Nova will be a single cell setup. Benefits -------- The benefits of this new organization are: * Native sharding of the database and queue as a first-class feature in nova. All of the code paths will go through the lookup procedure and thus we won't have the same feature parity issues as we do with current cells. * No high-level replication of all the cell databases at the top. The API will need a database of its own for things like the instance index, but it will not need to replicate all the data at the top level. * It draws a clear line between global and local data elements. Things like flavors and keypairs are clearly global concepts that need only live at the top level. Providing this separation allows compute nodes to become even more stateless and insulated from things like deleted/changed global data. * Existing non-cells users will suddenly gain the ability to spawn a new "cell" from their existing deployment without changing their architecture. Simply adding information about the new database and queue systems to the new index will allow them to consume those resources. * Existing cells users will need to fill out the cells mapping index, shut down their existing cells synchronization service, and ultimately clean up their top level database. However, since the high-level organization is not substantially different, they will not have to re-architect their systems to move to cellsv2. * Adding new sets of hosts as a new "cell" allows them to be plugged into a deployment and tested before allowing builds to be scheduled to them. Database split ~~~~~~~~~~~~~~ As mentioned above, there is a split between global data and data that is local to a cell. The following is a breakdown of what data can uncontroversially be considered global versus local to a cell. Missing data will be filled in as consensus is reached on the data that is more difficult to cleanly place. The missing data is mostly concerned with scheduling and networking. Global (API-level) Tables ------------------------- instance_types instance_type_projects instance_type_extra_specs quotas project_user_quotas quota_classes quota_usages security_groups security_group_rules security_group_default_rules provider_fw_rules key_pairs migrations networks tags Cell-level Tables ----------------- instances instance_info_caches instance_extra instance_metadata instance_system_metadata instance_faults instance_actions instance_actions_events instance_id_mappings pci_devices block_device_mapping virtual_interfaces Setup of Cells V2 ================= Overview ~~~~~~~~ As more of the CellsV2 implementation is finished, all operators are required to make changes to their deployment. For all deployments (even those that only intend to have one cell), these changes are configuration-related, both in the main nova configuration file as well as some extra records in the databases. All nova deployments must now have the following databases available and configured: 1. The "API" database 2. One special "cell" database called "cell0" 3. One (or eventually more) "cell" databases Thus, a small nova deployment will have an API database, a cell0, and what we will call here a "cell1" database. High-level tracking information is kept in the API database. Instances that are never scheduled are relegated to the cell0 database, which is effectively a graveyard of instances that failed to start. All successful/running instances are stored in "cell1". ..
note:: Since Nova services make use of both configuration file and some databases records, starting or restarting those services with an incomplete configuration could lead to an incorrect deployment. Please only restart the services once you are done with the described steps below. First Time Setup ~~~~~~~~~~~~~~~~ Since there is only one API database, the connection information for it is stored in the nova.conf file. :: [api_database] connection = mysql+pymysql://root:secretmysql@dbserver/nova_api?charset=utf8 Since there may be multiple "cell" databases (and in fact everyone will have cell0 and cell1 at a minimum), connection info for these is stored in the API database. Thus, you must have connection information in your config file for the API database before continuing to the steps below, so that `nova-manage` can find your other databases. The following examples show the full expanded command line usage of the setup commands. This is to make it easier to visualize which of the various URLs are used by each of the commands. However, you should be able to put all of that in the config file and `nova-manage` will use those values. If need be, you can create separate config files and pass them as `nova-manage --config-file foo.conf` to control the behavior without specifying things on the command lines. The commands below use the API database so remember to run `nova-manage api_db sync` first. First we will create the necessary records for the cell0 database. To do that we use `nova-manage` like this:: nova-manage cell_v2 map_cell0 --database_connection \ mysql+pymysql://root:secretmysql@dbserver/nova_cell0?charset=utf8 .. note:: If you don't specify `--database_connection` then `nova-manage` will use the `[database]/connection` value from your config file, and mangle the database name to have a `_cell0` suffix. .. warning:: If your databases are on separate hosts then you should specify `--database_connection` or make certain that the nova.conf being used has the `[database]/connection` value pointing to the same user/password/host that will work for the cell0 database. If the cell0 mapping was created incorrectly, it can be deleted using the `nova-manage cell_v2 delete_cell` command and then run `map_cell0` again with the proper database connection value. Since no hosts are ever in cell0, nothing further is required for its setup. Note that all deployments only ever have one cell0, as it is special, so once you have done this step you never need to do it again, even if you add more regular cells. Now, we must create another cell which will be our first "regular" cell, which has actual compute hosts in it, and to which instances can actually be scheduled. First, we create the cell record like this:: nova-manage cell_v2 create_cell --verbose --name cell1 \ --database_connection mysql+pymysql://root:secretmysql@127.0.0.1/nova?charset=utf8 --transport-url rabbit://stackrabbit:secretrabbit@mqserver:5672/ .. note:: If you don't specify the database and transport urls then `nova-manage` will use the `[database]/connection` and `[DEFAULT]/transport_url` values from the config file. .. note:: At this point, the API database can now find the cell database, and further commands will attempt to look inside. If this is a completely fresh database (such as if you're adding a cell, or if this is a new deployment), then you will need to run `nova-manage db sync` on it to initialize the schema. 
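Once both mappings exist, it is worth listing them back out of the API database to confirm they look the way you expect. This is only an illustrative check; it assumes the ``nova.conf`` that ``nova-manage`` reads contains the ``[api_database]/connection`` value configured above::

    nova-manage cell_v2 list_cells --verbose

Passing ``--verbose`` also shows the database connection and transport URL recorded for each cell mapping, which makes it easy to spot a record that points at the wrong host.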
The `nova-manage cell_v2 create_cell` command will print the UUID of the newly-created cell if `--verbose` is passed, which is useful if you need to run commands like `discover_hosts` targeted at a specific cell. Now we have a cell, but no hosts are in it which means the scheduler will never actually place instances there. The next step is to scan the database for compute node records and add them into the cell we just created. For this step, you must have had a compute node started such that it registers itself as a running service. Once that has happened, you can scan and add it to the cell:: nova-manage cell_v2 discover_hosts This command will connect to any databases for which you have created cells (as above), look for hosts that have registered themselves there, and map those hosts in the API database so that they are visible to the scheduler as available targets for instances. Any time you add more compute hosts to a cell, you need to re-run this command to map them from the top-level so they can be utilized. Template URLs in Cell Mappings ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Starting in the Rocky release, the URLs provided in the cell mappings for ``--database_connection`` and ``--transport-url`` can contain variables which are evaluated each time they are loaded from the database, and the values of which are taken from the corresponding base options in the host's configuration file. The base URL is parsed and the following elements may be substituted into the cell mapping URL (using ``rabbit://bob:s3kret@myhost:123/nova?sync=true#extra``): .. list-table:: Cell Mapping URL Variables :header-rows: 1 :widths: 15, 50, 15 * - Variable - Meaning - Part of example URL * - ``scheme`` - The part before the `://` - ``rabbit`` * - ``username`` - The username part of the credentials - ``bob`` * - ``password`` - The password part of the credentials - ``s3kret`` * - ``hostname`` - The hostname or address - ``myhost`` * - ``port`` - The port number (must be specified) - ``123`` * - ``path`` - The "path" part of the URL (without leading slash) - ``nova`` * - ``query`` - The full query string arguments (without leading question mark) - ``sync=true`` * - ``fragment`` - Everything after the first hash mark - ``extra`` Variables are provided in curly brackets, like ``{username}``. A simple template of ``rabbit://{username}:{password}@otherhost/{path}`` will generate a full URL of ``rabbit://bob:s3kret@otherhost/nova`` when used with the above example. .. note:: The ``[database]/connection`` and ``[DEFAULT]/transport_url`` values are not reloaded from the configuration file during a SIGHUP, which means that a full service restart will be required to notice changes in a cell mapping record if variables are changed. .. note:: The ``[DEFAULT]/transport_url`` option can contain an extended syntax for the "netloc" part of the url (i.e. `userA:passwordA@hostA:portA,userB:passwordB@hostB:portB`). In this case, substitutions of the form ``username1``, ``username2``, etc. will be honored and can be used in the template URL. The templating of these URLs may be helpful in order to provide each service host with its own credentials for, say, the database. Without templating, all hosts will use the same URL (and thus credentials) for accessing services like the database and message queue. By using a URL with a template that results in the credentials being taken from the host-local configuration file, each host will use different values for those connections.
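If you later want to convert an existing cell mapping to the template form, one way to do so is with the ``nova-manage cell_v2 update_cell`` command. The cell UUID and host names below are placeholders used purely for illustration, and remember that the API-level services cache cell mappings, so they must be restarted to pick up the change::

    nova-manage cell_v2 update_cell --cell_uuid <cell_uuid> \
        --database_connection 'mysql+pymysql://{username}:{password}@mycell1dbhost/nova' \
        --transport-url 'rabbit://{username}:{password}@mycell1mqhost:5672/'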
Assuming you have two service hosts that are normally configured with the cell0 database as their primary connection, their (abbreviated) configurations would look like this:: [database] connection = mysql+pymysql://service1:foo@myapidbhost/nova_cell0 and:: [database] connection = mysql+pymysql://service2:bar@myapidbhost/nova_cell0 Without cell mapping template URLs, they would still use the same credentials (as stored in the mapping) to connect to the cell databases. However, consider template URLs like the following:: mysql+pymysql://{username}:{password}@mycell1dbhost/nova and:: mysql+pymysql://{username}:{password}@mycell2dbhost/nova Using the first service and cell1 mapping, the calculated URL that will actually be used for connecting to that database will be:: mysql+pymysql://service1:foo@mycell1dbhost/nova References ~~~~~~~~~~ * :doc:`nova-manage man page ` Step-By-Step for Common Use Cases ================================= The following are step-by-step examples for common use cases setting up Cells V2. This is intended as a quick reference that puts together everything explained in `Setup of Cells V2`_. It is assumed that you have followed all other install steps for Nova and are setting up Cells V2 specifically at this point. Fresh Install ~~~~~~~~~~~~~ You are installing Nova for the first time and have no compute hosts in the database yet. This will set up a single cell Nova deployment. 1. Reminder: You should have already created and synced the Nova API database by creating a database, configuring its connection in the ``[api_database]/connection`` setting in the Nova configuration file, and running ``nova-manage api_db sync``. 2. Create a database for cell0. If you are going to pass the database connection url on the command line in step 3, you can name the cell0 database whatever you want. If you are not going to pass the database url on the command line in step 3, you need to name the cell0 database based on the name of your existing Nova database: _cell0. For example, if your Nova database is named ``nova``, then your cell0 database should be named ``nova_cell0``. 3. Run the ``map_cell0`` command to create and map cell0:: nova-manage cell_v2 map_cell0 \ --database_connection The database connection url is generated based on the ``[database]/connection`` setting in the Nova configuration file if not specified on the command line. 4. Run ``nova-manage db sync`` to populate the cell0 database with a schema. The ``db sync`` command reads the database connection for cell0 that was created in step 3. 5. Run the ``create_cell`` command to create the single cell which will contain your compute hosts:: nova-manage cell_v2 create_cell --name \ --transport-url \ --database_connection The transport url is taken from the ``[DEFAULT]/transport_url`` setting in the Nova configuration file if not specified on the command line. The database url is taken from the ``[database]/connection`` setting in the Nova configuration file if not specified on the command line. 6. Configure your compute host(s), making sure ``[DEFAULT]/transport_url`` matches the transport URL for the cell created in step 5, and start the nova-compute service. Before step 7, make sure you have compute hosts in the database by running:: nova service-list --binary nova-compute 7. Run the ``discover_hosts`` command to map compute hosts to the single cell by running:: nova-manage cell_v2 discover_hosts The command will search for compute hosts in the database of the cell you created in step 5 and map them to the cell. 
You can also configure a periodic task to have Nova discover new hosts automatically by setting the ``[scheduler]/discover_hosts_in_cells_interval`` to a time interval in seconds. The periodic task is run by the nova-scheduler service, so you must be sure to configure it on all of your nova-scheduler hosts. .. note:: Remember: In the future, whenever you add new compute hosts, you will need to run the ``discover_hosts`` command after starting them to map them to the cell if you did not configure the automatic host discovery in step 7. Upgrade (minimal) ~~~~~~~~~~~~~~~~~ You are upgrading an existing Nova install and have compute hosts in the database. This will set up a single cell Nova deployment. 1. If you haven't already created a cell0 database in a prior release, create a database for cell0 with a name based on the name of your Nova database: _cell0. If your Nova database is named ``nova``, then your cell0 database should be named ``nova_cell0``. .. warning:: In Newton, the ``simple_cell_setup`` command expects the name of the cell0 database to be based on the name of the Nova API database: _cell0 and the database connection url is taken from the ``[api_database]/connection`` setting in the Nova configuration file. 2. Run the ``simple_cell_setup`` command to create and map cell0, create and map the single cell, and map existing compute hosts and instances to the single cell:: nova-manage cell_v2 simple_cell_setup \ --transport-url The transport url is taken from the ``[DEFAULT]/transport_url`` setting in the Nova configuration file if not specified on the command line. The database connection url will be generated based on the ``[database]/connection`` setting in the Nova configuration file. .. note:: Remember: In the future, whenever you add new compute hosts, you will need to run the ``discover_hosts`` command after starting them to map them to the cell. You can also configure a periodic task to have Nova discover new hosts automatically by setting the ``[scheduler]/discover_hosts_in_cells_interval`` to a time interval in seconds. The periodic task is run by the nova-scheduler service, so you must be sure to configure it on all of your nova-scheduler hosts. Upgrade with Cells V1 ~~~~~~~~~~~~~~~~~~~~~ .. todo:: This needs to be removed but `Adding a new cell to an existing deployment`_ is still using it. You are upgrading an existing Nova install that has Cells V1 enabled and have compute hosts in your databases. This will set up a multiple cell Nova deployment. At this time, it is recommended to keep Cells V1 enabled during and after the upgrade as multiple Cells V2 cell support is not fully finished and may not work properly in all scenarios. These upgrade steps will help ensure a simple cutover from Cells V1 to Cells V2 in the future. .. note:: There is a Rocky summit video from CERN about how they did their upgrade from cells v1 to v2 here: https://www.openstack.org/videos/vancouver-2018/moving-from-cellsv1-to-cellsv2-at-cern 1. If you haven't already created a cell0 database in a prior release, create a database for cell0. If you are going to pass the database connection url on the command line in step 2, you can name the cell0 database whatever you want. If you are not going to pass the database url on the command line in step 2, you need to name the cell0 database based on the name of your existing Nova database: _cell0. For example, if your Nova database is named ``nova``, then your cell0 database should be named ``nova_cell0``. 2. 
Run the ``map_cell0`` command to create and map cell0:: nova-manage cell_v2 map_cell0 \ --database_connection The database connection url is generated based on the ``[database]/connection`` setting in the Nova configuration file if not specified on the command line. 3. Run ``nova-manage db sync`` to populate the cell0 database with a schema. The ``db sync`` command reads the database connection for cell0 that was created in step 2. 4. Run the ``create_cell`` command to create cells which will contain your compute hosts:: nova-manage cell_v2 create_cell --name \ --transport-url \ --database_connection You will need to repeat this step for each cell in your deployment. Your existing cell database will be re-used -- this simply informs the top-level API database about your existing cell databases. It is a good idea to specify a name for the new cell you create so you can easily look up cell uuids with the ``list_cells`` command later if needed. The transport url is taken from the ``[DEFAULT]/transport_url`` setting in the Nova configuration file if not specified on the command line. The database url is taken from the ``[database]/connection`` setting in the Nova configuration file if not specified on the command line. If you are not going to specify ``--database_connection`` and ``--transport-url`` on the command line, be sure to specify your existing cell Nova configuration file:: nova-manage --config-file cell_v2 create_cell \ --name 5. Run the ``discover_hosts`` command to map compute hosts to cells:: nova-manage cell_v2 discover_hosts --cell_uuid You will need to repeat this step for each cell in your deployment unless you omit the ``--cell_uuid`` option. If the cell uuid is not specified on the command line, ``discover_hosts`` will search for compute hosts in each cell database and map them to the corresponding cell. You can use the ``list_cells`` command to look up cell uuids if you are going to specify ``--cell_uuid``. You can also configure a periodic task to have Nova discover new hosts automatically by setting the ``[scheduler]/discover_hosts_in_cells_interval`` to a time interval in seconds. The periodic task is run by the nova-scheduler service, so you must be sure to configure it on all of your nova-scheduler hosts. 6. Run the ``map_instances`` command to map instances to cells:: nova-manage cell_v2 map_instances --cell_uuid \ --max-count You will need to repeat this step for each cell in your deployment. You can use the ``list_cells`` command to look up cell uuids. The ``--max-count`` option can be specified if you would like to limit the number of instances to map in a single run. If ``--max-count`` is not specified, all instances will be mapped. Repeated runs of the command will start from where the last run finished so it is not necessary to increase ``--max-count`` to finish. An exit code of 0 indicates that all instances have been mapped. An exit code of 1 indicates that there are remaining instances that need to be mapped. .. note:: Remember: In the future, whenever you add new compute hosts, you will need to run the ``discover_hosts`` command after starting them to map them to a cell if you did not configure the automatic host discovery in step 5. Adding a new cell to an existing deployment ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To expand your deployment with a new cell, first follow the usual steps for standing up a new Cells V1 cell. After that is finished, follow step 4 in `Upgrade with Cells V1`_ to create a new Cells V2 cell for it. 
If you have added new compute hosts for the new cell, you will also need to run the ``discover_hosts`` command after starting them to map them to the new cell if you did not configure the automatic host discovery as described in step 5 in `Upgrade with Cells V1`_. References ~~~~~~~~~~ * :doc:`nova-manage man page ` FAQs ==== #. How do I find out which hosts are bound to which cell? There are a couple of ways to do this. 1. Run ``nova-manage --config-file host list``. This will only list the hosts in the cell whose nova.conf was provided. .. deprecated:: 16.0.0 The ``nova-manage host list`` command is deprecated as of the 16.0.0 Pike release. 2. Run ``nova-manage cell_v2 discover_hosts --verbose``. This does not produce a report but if you are trying to determine if a host is in a cell you can run this and it will report any hosts that are not yet mapped to a cell and map them. This command is idempotent. 3. Run ``nova-manage cell_v2 list_hosts``. This will list hosts in all cells. If you want to list hosts in a specific cell, you can run ``nova-manage cell_v2 list_hosts --cell_uuid ``. #. I updated the database_connection and/or transport_url in a cell using the ``nova-manage cell_v2 update_cell`` command but the API is still trying to use the old settings. The cell mappings are cached in the nova-api service worker so you will need to restart the worker process to rebuild the cache. Note that there is another global cache tied to request contexts, which is used in the nova-conductor and nova-scheduler services, so you might need to do the same if you are having the same issue in those services. As of the 16.0.0 Pike release there is no timer on the cache or hook to refresh the cache using a SIGHUP to the service. #. I have upgraded from Newton to Ocata and I can list instances but I get a 404 NotFound error when I try to get details on a specific instance. Instances need to be mapped to cells so the API knows which cell an instance lives in. When upgrading, the ``nova-manage cell_v2 simple_cell_setup`` command will automatically map the instances to the single cell which is backed by the existing nova database. If you have already upgraded and did not use the ``simple_cell_setup`` command, you can run the ``nova-manage cell_v2 map_instances --cell_uuid `` command to map all instances in the given cell. See the :ref:`man-page-cells-v2` man page for details on command usage. #. Should I change any of the ``[cells]`` configuration options for Cells v2? **NO**. Those options are for Cells v1 usage only and are not used at all for Cells v2. That includes the ``nova-cells`` service - it has nothing to do with Cells v2. #. Can I create a cell but have it disabled from scheduling? Yes. It is possible to create a pre-disabled cell such that it does not become a candidate for scheduling new VMs. This can be done by running the ``nova-manage cell_v2 create_cell`` command with the ``--disabled`` option. #. How can I disable a cell so that new server create requests do not go to it while I perform maintenance? Existing cells can be disabled by running ``nova-manage cell_v2 update_cell --cell_uuid --disable`` and can be re-enabled once the maintenance period is over by running ``nova-manage cell_v2 update_cell --cell_uuid --enable``. #. I disabled (or enabled) a cell using the ``nova-manage cell_v2 update_cell`` command or I created a new (pre-disabled) cell (mapping) using the ``nova-manage cell_v2 create_cell`` command but the scheduler is still using the old settings.
The cell mappings are cached in the scheduler worker so you will either need to restart the scheduler process to refresh the cache, or send a SIGHUP signal to the scheduler by which it will automatically refresh the cells cache and the changes will take effect. #. Why was the cells REST API not implemented for CellsV2? Why are there no CRUD operations for cells in the API? One of the deployment challenges that CellsV1 had was the requirement for the API and control services to be up before a new cell could be deployed. This was not a problem for large-scale public clouds that never shut down, but is not a reasonable requirement for smaller clouds that do offline upgrades and/or clouds which could be taken completely offline by something like a power outage. Initial devstack and gate testing for CellsV1 was delayed by the need to engineer a solution for bringing the services partially online in order to deploy the rest, and this continues to be a gap for other deployment tools. Consider also the FFU case where the control plane needs to be down for a multi-release upgrade window where changes to cell records have to be made. This would be quite a bit harder if the way those changes are made is via the API, which must remain down during the process. Further, there is a long-term goal to move cell configuration (i.e. cell_mappings and the associated URLs and credentials) into config and get away from the need to store and provision those things in the database. Obviously a CRUD interface in the API would prevent us from making that move. #. Why are cells not exposed as a grouping mechanism in the API for listing services, instances, and other resources? Early in the design of CellsV2 we set a goal to not let the cell concept leak out of the API, even for operators. Aggregates are the way nova supports grouping of hosts for a variety of reasons, and aggregates can cut across cells, and/or be aligned with them if desired. If we were to support cells as another grouping mechanism, we would likely end up having to implement many of the same features for them as aggregates, such as scheduler features, metadata, and other searching/filtering operations. Since aggregates are how Nova supports grouping, we expect operators to use aggregates any time they need to refer to a cell as a group of hosts from the API, and leave actual cells as a purely architectural detail. The need to filter instances by cell in the API can and should be solved by adding a generic by-aggregate filter, which would allow listing instances on hosts contained within any aggregate, including one that matches the cell boundaries if so desired. #. Why are the API responses for ``GET /servers``, ``GET /servers/detail``, ``GET /servers/{server_id}`` and ``GET /os-services`` missing some information for certain cells at certain times? Why do I see the status as "UNKNOWN" for the servers in those cells at those times when I run ``openstack server list`` or ``openstack server show``? Starting from microversion 2.69 the API responses of ``GET /servers``, ``GET /servers/detail``, ``GET /servers/{server_id}`` and ``GET /os-services`` may contain missing keys during down cell situations. See the `Handling Down Cells `__ section of the Compute API guide for more information on the partial constructs. For administrative considerations, see :ref:`Handling cell failures `. 
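To actually receive the partial results described above rather than an error while a cell is down, clients must request microversion 2.69 or later. For example, with the OpenStack client (the server ID is a placeholder)::

    openstack --os-compute-api-version 2.69 server list
    openstack --os-compute-api-version 2.69 server show <server_id>

Servers living in an unreachable cell are then reported with an ``UNKNOWN`` status instead of the request failing outright.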
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/user/cellsv2-layout.rst0000664000175000017500000004061200000000000021233 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. =================== Cells Layout (v2) =================== This document describes the layout of a deployment with Cells version 2, including deployment considerations for security and scale. It is focused on code present in Pike and later, and while it is geared towards people who want to have multiple cells for whatever reason, the nature of the cellsv2 support in Nova means that it applies in some way to all deployments. Concepts ======== A basic Nova system consists of the following components: * The nova-api service which provides the external REST API to users. * The nova-scheduler and placement services which are responsible for tracking resources and deciding which compute node instances should be on. * An "API database" that is used primarily by nova-api and nova-scheduler (called *API-level services* below) to track location information about instances, as well as a temporary location for instances being built but not yet scheduled. * The nova-conductor service which offloads long-running tasks for the API-level service, as well as insulates compute nodes from direct database access * The nova-compute service which manages the virt driver and hypervisor host. * A "cell database" which is used by API, conductor and compute services, and which houses the majority of the information about instances. * A "cell0 database" which is just like the cell database, but contains only instances that failed to be scheduled. * A message queue which allows the services to communicate with each other via RPC. All deployments have at least the above components. Small deployments likely have a single message queue that all services share, and a single database server which hosts the API database, a single cell database, as well as the required cell0 database. This is considered a "single-cell deployment" because it only has one "real" cell. The cell0 database mimics a regular cell, but has no compute nodes and is used only as a place to put instances that fail to land on a real compute node (and thus a real cell). The purpose of the cells functionality in nova is specifically to allow larger deployments to shard their many compute nodes into cells, each of which has a database and message queue. The API database is always and only global, but there can be many cell databases (where the bulk of the instance information lives), each with a portion of the instances for the entire deployment within. All of the nova services use a configuration file, all of which will at a minimum specify a message queue endpoint (i.e. ``[DEFAULT]/transport_url``). Most of the services also require configuration of database connection information (i.e. ``[database]/connection``). 
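As a purely illustrative sketch (the host names and credentials here are invented), the cell-local portion of such a configuration could look like::

    [DEFAULT]
    transport_url = rabbit://nova:cellpass@cell1-mq.example.com:5672/

    [database]
    connection = mysql+pymysql://nova:cellpass@cell1-db.example.com/nova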
API-level services that need access to the global routing and placement information will also be configured to reach the API database (i.e. ``[api_database]/connection``). .. note:: The pair of ``transport_url`` and ``[database]/connection`` configured for a service defines what cell a service lives in. API-level services need to be able to contact other services in all of the cells. Since they only have one configured ``transport_url`` and ``[database]/connection`` they look up the information for the other cells in the API database, with records called *cell mappings*. .. note:: The API database must have cell mapping records that match the ``transport_url`` and ``[database]/connection`` configuration elements of the lower-level services. See the ``nova-manage`` :ref:`man-page-cells-v2` commands for more information about how to create and examine these records. Service Layout ============== The services generally have a well-defined communication pattern that dictates their layout in a deployment. In a small/simple scenario, the rules do not have much of an impact as all the services can communicate with each other on a single message bus and in a single cell database. However, as the deployment grows, scaling and security concerns may drive separation and isolation of the services. Simple ------ This is a diagram of the basic services that a simple (single-cell) deployment would have, as well as the relationships (i.e. communication paths) between them: .. graphviz:: digraph services { graph [pad="0.35", ranksep="0.65", nodesep="0.55", concentrate=true]; node [fontsize=10 fontname="Monospace"]; edge [arrowhead="normal", arrowsize="0.8"]; labelloc=bottom; labeljust=left; { rank=same api [label="nova-api"] apidb [label="API Database" shape="box"] scheduler [label="nova-scheduler"] } { rank=same mq [label="MQ" shape="diamond"] conductor [label="nova-conductor"] } { rank=same cell0db [label="Cell0 Database" shape="box"] celldb [label="Cell Database" shape="box"] compute [label="nova-compute"] } api -> mq -> compute conductor -> mq -> scheduler api -> apidb api -> cell0db api -> celldb conductor -> apidb conductor -> cell0db conductor -> celldb } All of the services are configured to talk to each other over the same message bus, and there is only one cell database where live instance data resides. The cell0 database is present (and required) but as no compute nodes are connected to it, this is still a "single cell" deployment. Multiple Cells -------------- In order to shard the services into multiple cells, a number of things must happen. First, the message bus must be split into pieces along the same lines as the cell database. Second, a dedicated conductor must be run for the API-level services, with access to the API database and a dedicated message queue. We call this *super conductor* to distinguish its place and purpose from the per-cell conductor nodes. .. 
graphviz:: digraph services2 { graph [pad="0.35", ranksep="0.65", nodesep="0.55", concentrate=true]; node [fontsize=10 fontname="Monospace"]; edge [arrowhead="normal", arrowsize="0.8"]; labelloc=bottom; labeljust=left; subgraph api { api [label="nova-api"] scheduler [label="nova-scheduler"] conductor [label="super conductor"] { rank=same apimq [label="API MQ" shape="diamond"] apidb [label="API Database" shape="box"] } api -> apimq -> conductor api -> apidb conductor -> apimq -> scheduler conductor -> apidb } subgraph clustercell0 { label="Cell 0" color=green cell0db [label="Cell Database" shape="box"] } subgraph clustercell1 { label="Cell 1" color=blue mq1 [label="Cell MQ" shape="diamond"] cell1db [label="Cell Database" shape="box"] conductor1 [label="nova-conductor"] compute1 [label="nova-compute"] conductor1 -> mq1 -> compute1 conductor1 -> cell1db } subgraph clustercell2 { label="Cell 2" color=red mq2 [label="Cell MQ" shape="diamond"] cell2db [label="Cell Database" shape="box"] conductor2 [label="nova-conductor"] compute2 [label="nova-compute"] conductor2 -> mq2 -> compute2 conductor2 -> cell2db } api -> mq1 -> conductor1 api -> mq2 -> conductor2 api -> cell0db api -> cell1db api -> cell2db conductor -> cell0db conductor -> cell1db conductor -> mq1 conductor -> cell2db conductor -> mq2 } It is important to note that services in the lower cell boxes only have the ability to call back to the placement API but cannot access any other API-layer services via RPC, nor do they have access to the API database for global visibility of resources across the cloud. This is intentional and provides security and failure domain isolation benefits, but also has impacts on some things that would otherwise require this any-to-any communication style. Check the release notes for the version of Nova you are using for the most up-to-date information about any caveats that may be present due to this limitation. Caveats of a Multi-Cell deployment ---------------------------------- .. note:: This information is correct as of the Pike release. Where improvements have been made or issues fixed, they are noted per item. Cross-cell instance migrations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Currently it is not possible to migrate an instance from a host in one cell to a host in another cell. This may be possible in the future, but it is currently unsupported. This impacts cold migration, resizes, live migrations, evacuate, and unshelve operations. Quota-related quirks ~~~~~~~~~~~~~~~~~~~~ Quotas are now calculated live at the point at which an operation would consume more resource, instead of being kept statically in the database. This means that a multi-cell environment may incorrectly calculate the usage of a tenant if one of the cells is unreachable, as those resources cannot be counted. In this case, the tenant may be able to consume more resource from one of the available cells, putting them far over quota when the unreachable cell returns. .. note:: Starting in the Train (20.0.0) release, it is possible to configure counting of quota usage from the placement service and API database to make quota usage calculations resilient to down or poor-performing cells in a multi-cell environment. See the :doc:`quotas documentation` for more details. Performance of listing instances ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. note:: This has been resolved in the Queens release [#]_. With multiple cells, the instance list operation may not sort and paginate results properly when crossing multiple cell boundaries. 
Further, the performance of a sorted list operation will be considerably slower than with a single cell. Notifications ~~~~~~~~~~~~~ With a multi-cell environment with multiple message queues, it is likely that operators will want to configure a separate connection to a unified queue for notifications. This can be done in the configuration file of all nodes. Refer to the :oslo.messaging-doc:`oslo.messaging configuration documentation ` for more details. .. _cells-v2-layout-metadata-api: Nova Metadata API service ~~~~~~~~~~~~~~~~~~~~~~~~~ Starting from the Stein release, the :doc:`nova metadata API service ` can be run either globally or per cell using the :oslo.config:option:`api.local_metadata_per_cell` configuration option. **Global** If you have networks that span cells, you might need to run Nova metadata API globally. When running globally, it should be configured as an API-level service with access to the :oslo.config:option:`api_database.connection` information. The nova metadata API service **must not** be run as a standalone service, using the :program:`nova-api-metadata` service, in this case. **Local per cell** Running Nova metadata API per cell can have better performance and data isolation in a multi-cell deployment. If your networks are segmented along cell boundaries, then you can run Nova metadata API service per cell. If you choose to run it per cell, you should also configure each :neutron-doc:`neutron-metadata-agent ` service to point to the corresponding :program:`nova-api-metadata`. The nova metadata API service **must** be run as a standalone service, using the :program:`nova-api-metadata` service, in this case. Console proxies ~~~~~~~~~~~~~~~ `Starting from the Rocky release`__, console proxies must be run per cell because console token authorizations are stored in cell databases. This means that each console proxy server must have access to the :oslo.config:option:`database.connection` information for the cell database containing the instances for which it is proxying console access. .. __: https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/convert-consoles-to-objects.html .. _upcall: Operations Requiring upcalls ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ If you deploy multiple cells with a superconductor as described above, computes and cell-based conductors will not have the ability to speak to the scheduler as they are not connected to the same MQ. This is by design for isolation, but currently the processes are not in place to implement some features without such connectivity. Thus, anything that requires a so-called "upcall" will not function. This impacts the following: #. Instance reschedules during boot and resize (part 1) .. note:: This has been resolved in the Queens release [#]_. #. Instance affinity reporting from the compute nodes to scheduler #. The late anti-affinity check during server create and evacuate #. Querying host aggregates from the cell .. note:: This has been resolved in the Rocky release [#]_. #. Attaching a volume and ``[cinder]/cross_az_attach=False`` #. Instance reschedules during boot and resize (part 2) .. note:: This has been resolved in the Ussuri release [#]_ [#]_. The first is simple: if you boot an instance, it gets scheduled to a compute node, fails, it would normally be re-scheduled to another node. That requires scheduler intervention and thus it will not work in Pike with a multi-cell layout. If you do not rely on reschedules for covering up transient compute-node failures, then this will not affect you. 
To ensure you do not make futile attempts at rescheduling, you should set ``[scheduler]/max_attempts=1`` in ``nova.conf``. The second two are related. The summary is that some of the facilities that Nova has for ensuring that affinity/anti-affinity is preserved between instances does not function in Pike with a multi-cell layout. If you don't use affinity operations, then this will not affect you. To make sure you don't make futile attempts at the affinity check, you should set ``[workarounds]/disable_group_policy_check_upcall=True`` and ``[filter_scheduler]/track_instance_changes=False`` in ``nova.conf``. The fourth is currently only a problem when performing live migrations using the XenAPI driver and not specifying ``--block-migrate``. The driver will attempt to figure out if block migration should be performed based on source and destination hosts being in the same aggregate. Since aggregates data has migrated to the API database, the cell conductor will not be able to access the aggregate information and will fail. The fifth is a problem because when a volume is attached to an instance in the *nova-compute* service, and ``[cinder]/cross_az_attach=False`` in nova.conf, we attempt to look up the availability zone that the instance is in which includes getting any host aggregates that the ``instance.host`` is in. Since the aggregates are in the API database and the cell conductor cannot access that information, so this will fail. In the future this check could be moved to the *nova-api* service such that the availability zone between the instance and the volume is checked before we reach the cell, except in the case of :term:`boot from volume ` where the *nova-compute* service itself creates the volume and must tell Cinder in which availability zone to create the volume. Long-term, volume creation during boot from volume should be moved to the top-level superconductor which would eliminate this AZ up-call check problem. The sixth is detailed in `bug 1781286`_ and similar to the first issue. The issue is that servers created without a specific availability zone will have their AZ calculated during a reschedule based on the alternate host selected. Determining the AZ for the alternate host requires an "up call" to the API DB. .. _bug 1781286: https://bugs.launchpad.net/nova/+bug/1781286 .. [#] https://blueprints.launchpad.net/nova/+spec/efficient-multi-cell-instance-list-and-sort .. [#] https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/return-alternate-hosts.html .. [#] https://blueprints.launchpad.net/nova/+spec/live-migration-in-xapi-pool .. [#] https://review.opendev.org/686047/ .. [#] https://review.opendev.org/686050/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/user/certificate-validation.rst0000664000175000017500000006611400000000000022765 0ustar00zuulzuul00000000000000Image Signature Certificate Validation ====================================== Nova can determine if the certificate used to generate and verify the signature of a signed image (see `Glance Image Signature Verification documentation`_) is trusted by the user. This feature is called certificate validation and can be applied to the creation or rebuild of an instance. Certificate validation is meant to be performed jointly with image signature verification but each feature has its own Nova configuration option, to be specified in the ``[glance]`` section of the ``nova.conf`` configuration file. 
To enable certificate validation, set :oslo.config:option:`glance.enable_certificate_validation` to True. To enable signature validation, set :oslo.config:option:`glance.verify_glance_signatures` to True. Conversely, to disable either of these features, set their option to False or do not include the option in the Nova configurations at all. Certificate validation operates in concert with signature validation in `Cursive`_. It takes in a list of trusted certificate IDs and verifies that the certificate used to sign the image being booted is cryptographically linked to at least one of the provided trusted certificates. This provides the user with confidence in the identity and integrity of the image being booted. Certificate validation will only be performed if image signature validation is enabled. However, the presence of trusted certificate IDs overrides the ``enable_certificate_validation`` and ``verify_glance_signatures`` settings. In other words, if a list of trusted certificate IDs is provided to the instance create or rebuild commands, signature verification and certificate validation will be performed, regardless of their settings in the Nova configurations. See `Using Signature Verification`_ for details. .. _Cursive: http://opendev.org/x/cursive/ .. _Glance Image Signature Verification documentation: https://docs.openstack.org/glance/latest/user/signature.html .. note:: Certificate validation configuration options must be specified in the Nova configuration file that controls the ``nova-osapi_compute`` and ``nova-compute`` services, as opposed to other Nova services (conductor, scheduler, etc.). Requirements ------------ Key manager that is a backend to the `Castellan Interface`_. Possible key managers are: * `Barbican`_ * `Vault`_ .. _Castellan Interface: https://docs.openstack.org/castellan/latest/ .. _Barbican: https://docs.openstack.org/barbican/latest/contributor/devstack.html .. _Vault: https://www.vaultproject.io/ Limitations ----------- * As of the 18.0.0 Rocky release, only the libvirt compute driver supports trusted image certification validation. The feature is not, however, driver specific so other drivers should be able to support this feature over time. See the `feature support matrix`_ for information on which drivers support the feature at any given release. * As of the 18.0.0 Rocky release, image signature and trusted image certification validation is not supported with the Libvirt compute driver when using the ``rbd`` image backend (``[libvirt]/images_type=rbd``) and ``RAW`` formatted images. This is due to the images being cloned directly in the ``RBD`` backend avoiding calls to download and verify on the compute. * As of the 18.0.0 Rocky release, trusted image certification validation is not supported with volume-backed (:term:`boot from volume `) instances. The block storage service support may be available in a future release: https://blueprints.launchpad.net/cinder/+spec/certificate-validate * Trusted image certification support can be controlled via `policy configuration`_ if it needs to be disabled. See the ``os_compute_api:servers:create:trusted_certs`` and ``os_compute_api:servers:rebuild:trusted_certs`` policy rules. .. _feature support matrix: https://docs.openstack.org/nova/latest/user/support-matrix.html#operation_trusted_certs .. 
_policy configuration: https://docs.openstack.org/nova/latest/configuration/policy.html Configuration ------------- Nova will use the key manager defined by the Castellan key manager interface, which is the Barbican key manager by default. To use a different key manager, update the ``backend`` value in the ``[key_manager]`` group of the nova configuration file. For example:: [key_manager] backend = barbican .. note:: If these lines do not exist, then simply add them to the end of the file. Using Signature Verification ---------------------------- An image will need a few properties for signature verification to be enabled: ``img_signature`` Signature of your image. Signature restrictions are: * 255 character limit ``img_signature_hash_method`` Method used to hash your signature. Possible hash methods are: * SHA-224 * SHA-256 * SHA-384 * SHA-512 ``img_signature_key_type`` Key type used for your image. Possible key types are: * RSA-PSS * DSA * ECC-CURVES * SECT571K1 * SECT409K1 * SECT571R1 * SECT409R1 * SECP521R1 * SECP384R1 ``img_signature_certificate_uuid`` UUID of the certificate that you uploaded to the key manager. Possible certificate types are: * X_509 Using Certificate Validation ---------------------------- Certificate validation is triggered by one of two ways: 1. The Nova configuration options ``verify_glance_signatures`` and ``enable_certificate_validation`` are both set to True:: [glance] verify_glance_signatures = True enable_certificate_validation = True 2. A list of trusted certificate IDs is provided by one of three ways: .. note:: The command line support is pending changes https://review.opendev.org/#/c/500396/ and https://review.opendev.org/#/c/501926/ to python-novaclient and python-openstackclient, respectively. Environment Variable Use the environment variable ``OS_TRUSTED_IMAGE_CERTIFICATE_IDS`` to define a comma-delimited list of trusted certificate IDs. For example: .. code-block:: console $ export OS_TRUSTED_IMAGE_CERTIFICATE_IDS=79a6ad17-3298-4e55-8b3a-1672dd93c40f,b20f5600-3c9d-4af5-8f37-3110df3533a0 Command-Line Flag If booting or rebuilding an instance using the :command:`nova` commands, use the ``--trusted-image-certificate-id`` flag to define a single trusted certificate ID. The flag may be used multiple times to specify multiple trusted certificate IDs. For example: .. code-block:: console $ nova boot myInstanceName \ --flavor 1 \ --image myImageId \ --trusted-image-certificate-id 79a6ad17-3298-4e55-8b3a-1672dd93c40f \ --trusted-image-certificate-id b20f5600-3c9d-4af5-8f37-3110df3533a0 If booting or rebuilding an instance using the :command:`openstack server` commands, use the ``--trusted-image-certificate-id`` flag to define a single trusted certificate ID. The flag may be used multiple times to specify multiple trusted certificate IDs. For example: .. code-block:: console $ openstack --os-compute-api-version=2.63 server create myInstanceName \ --flavor 1 \ --image myImageId \ --nic net-id=fd25c0b2-b36b-45a8-82e4-ab52516289e5 \ --trusted-image-certificate-id 79a6ad17-3298-4e55-8b3a-1672dd93c40f \ --trusted-image-certificate-id b20f5600-3c9d-4af5-8f37-3110df3533a0 Nova Configuration Option Use the Nova configuration option :oslo.config:option:`glance.default_trusted_certificate_ids` to define a comma-delimited list of trusted certificate IDs. This configuration value is only used if ``verify_glance_signatures`` and ``enable_certificate_validation`` options are set to True, and the trusted certificate IDs are not specified anywhere else. 
For example:: [glance] default_trusted_certificate_ids=79a6ad17-3298-4e55-8b3a-1672dd93c40f,b20f5600-3c9d-4af5-8f37-3110df3533a0 Example Usage ------------- For these instructions, we will construct a 4-certificate chain to illustrate that it is possible to have a single trusted root certificate. We will upload all four certificates to Barbican. Then, we will sign an image and upload it to Glance, which will illustrate image signature verification. Finally, we will boot the signed image from Glance to show that certificate validation is enforced. Enable certificate validation ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Enable image signature verification and certificate validation by setting both of their Nova configuration options to True:: [glance] verify_glance_signatures = True enable_certificate_validation = True Create a certificate chain ^^^^^^^^^^^^^^^^^^^^^^^^^^ As mentioned above, we will construct a 4-certificate chain to illustrate that it is possible to have a single trusted root certificate. Before we begin to build our certificate chain, we must first create files for OpenSSL to use for indexing and serial number tracking: .. code-block:: console $ touch index.txt $ echo '01' > serial.txt Create a certificate configuration file """"""""""""""""""""""""""""""""""""""" For these instructions, we will create a single configuration file called ``ca.conf``, which contains various sections that we can specify for use on the command-line during certificate requests and generation. Note that this certificate will be able to sign other certificates because it is a certificate authority. Also note the root CA's unique common name ("root"). The intermediate certificates' common names will be specified on the command-line when generating the corresponding certificate requests. ``ca.conf``:: [ req ] prompt = no distinguished_name = dn-param x509_extensions = ca_cert_extensions [ ca ] default_ca = ca_default [ dn-param ] C = US CN = Root CA [ ca_cert_extensions ] keyUsage = keyCertSign, digitalSignature basicConstraints = CA:TRUE, pathlen:2 [ ca_default ] new_certs_dir = . # Location for new certs after signing database = ./index.txt # Database index file serial = ./serial.txt # The current serial number default_days = 1000 default_md = sha256 policy = signing_policy email_in_dn = no [ intermediate_cert_extensions ] keyUsage = keyCertSign, digitalSignature basicConstraints = CA:TRUE, pathlen:1 [client_cert_extensions] keyUsage = keyCertSign, digitalSignature basicConstraints = CA:FALSE [ signing_policy ] countryName = optional stateOrProvinceName = optional localityName = optional organizationName = optional organizationalUnitName = optional commonName = supplied emailAddress = optional Generate the certificate authority (CA) and corresponding private key """"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""" For these instructions, we will save the certificate as ``cert_ca.pem`` and the private key as ``key_ca.pem``. This certificate will be a self-signed root certificate authority (CA) that can sign other CAs and non-CA certificates. .. code-block:: console $ openssl req \ -x509 \ -nodes \ -newkey rsa:1024 \ -config ca.conf \ -keyout key_ca.pem \ -out cert_ca.pem Generating a 1024 bit RSA private key ............................++++++ ...++++++ writing new private key to 'key_ca.pem' ----- Create the first intermediate certificate """"""""""""""""""""""""""""""""""""""""" Create a certificate request for the first intermediate certificate. 
For these instructions, we will save the certificate request as ``cert_intermeidate_a.csr`` and the private key as ``key_intermediate_a.pem``. .. code-block:: console $ openssl req \ -nodes \ -newkey rsa:2048 \ -subj '/CN=First Intermediate Certificate' \ -keyout key_intermediate_a.pem \ -out cert_intermediate_a.csr Generating a 2048 bit RSA private key .............................................................................................................+++ .....+++ writing new private key to 'key_intermediate_a.pem' ----- Generate the first intermediate certificate by signing its certificate request with the CA. For these instructions we will save the certificate as ``cert_intermediate_a.pem``. .. code-block:: console $ openssl ca \ -config ca.conf \ -extensions intermediate_cert_extensions \ -cert cert_ca.pem \ -keyfile key_ca.pem \ -out cert_intermediate_a.pem \ -infiles cert_intermediate_a.csr Using configuration from ca.conf Check that the request matches the signature Signature ok The Subject's Distinguished Name is as follows commonName :ASN.1 12:'First Intermediate Certificate' Certificate is to be certified until Nov 15 16:24:21 2020 GMT (1000 days) Sign the certificate? [y/n]:y 1 out of 1 certificate requests certified, commit? [y/n]y Write out database with 1 new entries Data Base Updated Create the second intermediate certificate """""""""""""""""""""""""""""""""""""""""" Create a certificate request for the second intermediate certificate. For these instructions, we will save the certificate request as ``cert_intermeidate_b.csr`` and the private key as ``key_intermediate_b.pem``. .. code-block:: console $ openssl req \ -nodes \ -newkey rsa:2048 \ -subj '/CN=Second Intermediate Certificate' \ -keyout key_intermediate_b.pem \ -out cert_intermediate_b.csr Generating a 2048 bit RSA private key ..........+++ ............................................+++ writing new private key to 'key_intermediate_b.pem' ----- Generate the second intermediate certificate by signing its certificate request with the first intermediate certificate. For these instructions we will save the certificate as ``cert_intermediate_b.pem``. .. code-block:: console $ openssl ca \ -config ca.conf \ -extensions intermediate_cert_extensions \ -cert cert_intermediate_a.pem \ -keyfile key_intermediate_a.pem \ -out cert_intermediate_b.pem \ -infiles cert_intermediate_b.csr Using configuration from ca.conf Check that the request matches the signature Signature ok The Subject's Distinguished Name is as follows commonName :ASN.1 12:'Second Intermediate Certificate' Certificate is to be certified until Nov 15 16:25:42 2020 GMT (1000 days) Sign the certificate? [y/n]:y 1 out of 1 certificate requests certified, commit? [y/n]y Write out database with 1 new entries Data Base Updated Create the client certificate """"""""""""""""""""""""""""" Create a certificate request for the client certificate. For these instructions, we will save the certificate request as ``cert_client.csr`` and the private key as ``key_client.pem``. .. 
   $ openssl req \
       -nodes \
       -newkey rsa:2048 \
       -subj '/CN=Client Certificate' \
       -keyout key_client.pem \
       -out cert_client.csr
   Generating a 2048 bit RSA private key
   .............................................................................................................................+++
   ..............................................................................................+++
   writing new private key to 'key_client.pem'
   -----

Generate the client certificate by signing its certificate request with the
second intermediate certificate. For these instructions we will save the
certificate as ``cert_client.pem``.

.. code-block:: console

   $ openssl ca \
       -config ca.conf \
       -extensions client_cert_extensions \
       -cert cert_intermediate_b.pem \
       -keyfile key_intermediate_b.pem \
       -out cert_client.pem \
       -infiles cert_client.csr
   Using configuration from ca.conf
   Check that the request matches the signature
   Signature ok
   The Subject's Distinguished Name is as follows
   commonName            :ASN.1 12:'Client Certificate'
   Certificate is to be certified until Nov 15 16:26:46 2020 GMT (1000 days)
   Sign the certificate? [y/n]:y

   1 out of 1 certificate requests certified, commit? [y/n]y
   Write out database with 1 new entries
   Data Base Updated

Upload the generated certificates to the key manager
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In order to interact with the key manager, the user needs to have a
`creator` role. To list all users with a `creator` role, run the following
command as an admin:

.. code-block:: console

   $ openstack role assignment list --role creator --names
   +---------+-----------------------------+-------+-------------------+--------+-----------+
   | Role    | User                        | Group | Project           | Domain | Inherited |
   +---------+-----------------------------+-------+-------------------+--------+-----------+
   | creator | project_a_creator_2@Default |       | project_a@Default |        | False     |
   | creator | project_b_creator@Default   |       | project_b@Default |        | False     |
   | creator | project_a_creator@Default   |       | project_a@Default |        | False     |
   +---------+-----------------------------+-------+-------------------+--------+-----------+

To give the `demo` user a `creator` role in the `demo` project, run the
following command as an admin:

.. code-block:: console

   $ openstack role add --user demo --project demo creator

.. note:: This command provides no output. If the command fails, the user
   will see a "4xx Client error" indicating that "Secret creation attempt not
   allowed" and to "please review your user/project privileges".

.. note:: The following "openstack secret" commands require that the
   `python-barbicanclient <https://pypi.org/project/python-barbicanclient/>`_
   package is installed.

..
code-block:: console $ openstack secret store \ --name CA \ --algorithm RSA \ --expiration 2018-06-29 \ --secret-type certificate \ --payload-content-type "application/octet-stream" \ --payload-content-encoding base64 \ --payload "$(base64 cert_ca.pem)" $ openstack secret store \ --name IntermediateA \ --algorithm RSA \ --expiration 2018-06-29 \ --secret-type certificate \ --payload-content-type "application/octet-stream" \ --payload-content-encoding base64 \ --payload "$(base64 cert_intermediate_a.pem)" $ openstack secret store \ --name IntermediateB \ --algorithm RSA \ --expiration 2018-06-29 \ --secret-type certificate \ --payload-content-type "application/octet-stream" \ --payload-content-encoding base64 \ --payload "$(base64 cert_intermediate_b.pem)" $ openstack secret store \ --name Client \ --algorithm RSA \ --expiration 2018-06-29 \ --secret-type certificate \ --payload-content-type "application/octet-stream" \ --payload-content-encoding base64 \ --payload "$(base64 cert_client.pem)" The responses should look something like this: .. code-block:: console +---------------+------------------------------------------------------------------------------+ | Field | Value | +---------------+------------------------------------------------------------------------------+ | Secret href | http://127.0.0.1/key-manager/v1/secrets/8fbcce5d-d646-4295-ba8a-269fc9451eeb | | Name | CA | | Created | None | | Status | None | | Content types | {u'default': u'application/octet-stream'} | | Algorithm | RSA | | Bit length | 256 | | Secret type | certificate | | Mode | cbc | | Expiration | 2018-06-29T00:00:00+00:00 | +---------------+------------------------------------------------------------------------------+ Save off the certificate UUIDs (found in the secret href): .. code-block:: console $ cert_ca_uuid=8fbcce5d-d646-4295-ba8a-269fc9451eeb $ cert_intermediate_a_uuid=0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8 $ cert_intermediate_b_uuid=674736e3-f25c-405c-8362-bbf991e0ce0a $ cert_client_uuid=125e6199-2de4-46e3-b091-8e2401ef0d63 Create a signed image ^^^^^^^^^^^^^^^^^^^^^ For these instructions, we will download a small CirrOS image: .. code-block:: console $ wget -nc -O cirros.tar.gz http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-source.tar.gz --2018-02-19 11:37:52-- http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-source.tar.gz Resolving download.cirros-cloud.net (download.cirros-cloud.net)... 64.90.42.85 Connecting to download.cirros-cloud.net (download.cirros-cloud.net)|64.90.42.85|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 434333 (424K) [application/x-tar] Saving to: ‘cirros.tar.gz’ cirros.tar.gz 100%[===================>] 424.15K --.-KB/s in 0.1s 2018-02-19 11:37:54 (3.79 MB/s) - ‘cirros.tar.gz’ saved [434333/434333] Sign the image with the generated client private key: .. code-block:: console $ openssl dgst \ -sha256 \ -sign key_client.pem \ -sigopt rsa_padding_mode:pss \ -out cirros.self_signed.signature \ cirros.tar.gz .. note:: This command provides no output. Save off the base64 encoded signature: .. code-block:: console $ base64_signature=$(base64 -w 0 cirros.self_signed.signature) Upload the signed image to Glance: .. 
code-block:: console $ openstack image create \ --public \ --container-format bare \ --disk-format qcow2 \ --property img_signature="$base64_signature" \ --property img_signature_certificate_uuid="$cert_client_uuid" \ --property img_signature_hash_method='SHA-256' \ --property img_signature_key_type='RSA-PSS' \ --file cirros.tar.gz \ cirros_client_signedImage +------------------+------------------------------------------------------------------------+ | Field | Value | +------------------+------------------------------------------------------------------------+ | checksum | d41d8cd98f00b204e9800998ecf8427e | | container_format | bare | | created_at | 2019-02-06T06:29:56Z | | disk_format | qcow2 | | file | /v2/images/17f48a6c-e592-446e-9c91-00fbc436d47e/file | | id | 17f48a6c-e592-446e-9c91-00fbc436d47e | | min_disk | 0 | | min_ram | 0 | | name | cirros_client_signedImage | | owner | 45e13e63606f40d6b23275c3cd91aec2 | | properties | img_signature='swA/hZi3WaNh35VMGlnfGnBWuXMlUbdO8h306uG7W3nwOyZP6dGRJ3 | | | Xoi/07Bo2dMUB9saFowqVhdlW5EywQAK6vgDsi9O5aItHM4u7zUPw+2e8eeaIoHlGhTks | | | kmW9isLy0mYA9nAfs3coChOIPXW4V8VgVXEfb6VYGHWm0nShiAP1e0do9WwitsE/TVKoS | | | QnWjhggIYij5hmUZ628KAygPnXklxVhqPpY/dFzL+tTzNRD0nWAtsc5wrl6/8HcNzZsaP | | | oexAysXJtcFzDrf6UQu66D3UvFBVucRYL8S3W56It3Xqu0+InLGaXJJpNagVQBb476zB2 | | | ZzZ5RJ/4Zyxw==', | | | img_signature_certificate_uuid='125e6199-2de4-46e3-b091-8e2401ef0d63', | | | img_signature_hash_method='SHA-256', | | | img_signature_key_type='RSA-PSS', | | | os_hash_algo='sha512', | | | os_hash_value='cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a92 | | | 1d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927d | | | a3e', | | | os_hidden='False' | | protected | False | | schema | /v2/schemas/image | | size | 0 | | status | active | | tags | | | updated_at | 2019-02-06T06:29:56Z | | virtual_size | None | | visibility | public | +------------------+------------------------------------------------------------------------+ .. note:: Creating the image can fail if validation does not succeed. This will cause the image to be deleted and the Glance log to report that "Signature verification failed" for the given image ID. Boot the signed image ^^^^^^^^^^^^^^^^^^^^^ Boot the signed image without specifying trusted certificate IDs: .. code-block:: console $ nova boot myInstance \ --flavor m1.tiny \ --image cirros_client_signedImage .. note:: The instance should fail to boot because certificate validation fails when the feature is enabled but no trusted image certificates are provided. The Nova log output should indicate that "Image signature certificate validation failed" because "Certificate chain building failed". Boot the signed image with trusted certificate IDs: .. code-block:: console $ nova boot myInstance \ --flavor m1.tiny \ --image cirros_client_signedImage \ --trusted-image-certificate-id $cert_ca_uuid,$cert_intermediate_a_uuid \ --trusted-image-certificate-id $cert_intermediate_b_uuid .. note:: The instance should successfully boot and certificate validation should succeed. The Nova log output should indicate that "Image signature certificate validation succeeded". 
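If certificate validation fails unexpectedly with errors such as
"Certificate chain building failed", it can help to first confirm, outside of
OpenStack, that the four certificates generated above really do chain
together. The following is a minimal local sanity check, assuming the file
names used earlier in this walkthrough and the third-party ``cryptography``
Python package; it only compares issuer and subject names and is not a
substitute for the validation performed by Nova:

.. code-block:: python

   # Local sanity check (not part of Nova): confirm the generated files
   # form the expected chain by comparing issuer/subject names.
   from cryptography import x509


   def load_cert(path):
       # Files written by ``openssl ca`` may include a textual preamble,
       # so keep only the PEM-encoded certificate itself.
       with open(path, 'rb') as f:
           data = f.read()
       return x509.load_pem_x509_certificate(
           data[data.index(b'-----BEGIN CERTIFICATE-----'):])


   root = load_cert('cert_ca.pem')
   intermediate_a = load_cert('cert_intermediate_a.pem')
   intermediate_b = load_cert('cert_intermediate_b.pem')
   client = load_cert('cert_client.pem')

   assert root.issuer == root.subject            # self-signed root CA
   assert intermediate_a.issuer == root.subject  # signed by the root CA
   assert intermediate_b.issuer == intermediate_a.subject
   assert client.issuer == intermediate_b.subject
   print('issuer/subject chain looks consistent')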
Other Links ----------- * https://etherpad.openstack.org/p/mitaka-glance-image-signing-instructions * https://etherpad.openstack.org/p/queens-nova-certificate-validation * https://wiki.openstack.org/wiki/OpsGuide/User-Facing_Operations * http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/nova-validate-certificates.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/user/feature-classification.rst0000664000175000017500000001422600000000000022774 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ====================== Feature Classification ====================== This document presents a matrix that describes which features are ready to be used and which features are works in progress. It includes links to relevant documentation and functional tests. .. warning:: Please note: this is a work in progress! Aims ==== Users want reliable, long-term solutions for their use cases. The feature classification matrix identifies which features are complete and ready to use, and which should be used with caution. The matrix also benefits developers by providing a list of features that require further work to be considered complete. Below is a matrix for a selection of important verticals: * :ref:`matrix-gp` * :ref:`matrix-nfv` * :ref:`matrix-hpc` For more details on the concepts in each matrix, please see :ref:`notes-on-concepts`. .. _matrix-gp: General Purpose Cloud Features =============================== This is a summary of the key features dev/test clouds, and other similar general purpose clouds need, and it describes their current state. Below there are sections on NFV and HPC specific features. These look at specific features and scenarios that are important to those more specific sets of use cases. .. feature_matrix:: feature-matrix-gp.ini .. _matrix-nfv: NFV Cloud Features ================== Network Function Virtualization (NFV) is about virtualizing network node functions into building blocks that may connect, or chain together to create a particular service. It is common for this workloads needing bare metal like performance, i.e. low latency and close to line speed performance. .. include:: /common/numa-live-migration-warning.txt .. feature_matrix:: feature-matrix-nfv.ini .. _matrix-hpc: HPC Cloud Features ================== High Performance Compute (HPC) cloud have some specific needs that are covered in this set of features. .. feature_matrix:: feature-matrix-hpc.ini .. _notes-on-concepts: Notes on Concepts ================= This document uses the following terminology. Users ----- These are the users we talk about in this document: application deployer creates and deletes servers, directly or indirectly using an API application developer creates images and apps that run on the cloud cloud operator administers the cloud self service administrator runs and uses the cloud .. note:: This is not an exhaustive list of personas, but rather an indicative set of users. 
Feature Group ------------- To reduce the size of the matrix, we organize the features into groups. Each group maps to a set of user stories that can be validated by a set of scenarios and tests. Typically, this means a set of tempest tests. This list focuses on API concepts like attach and detach volumes, rather than deployment specific concepts like attach an iSCSI volume to a KVM based VM. Deployment ---------- A deployment maps to a specific test environment. We provide a full description of the environment, so it is possible to reproduce the reported test results for each of the Feature Groups. This description includes all aspects of the deployment, for example the hypervisor, number of nova-compute services, storage, network driver, and types of images being tested. Feature Group Maturity ----------------------- The Feature Group Maturity rating is specific to the API concepts, rather than specific to a particular deployment. That detail is covered in the deployment rating for each feature group. .. note:: Although having some similarities, this list is not directly related to the DefCore effort. **Feature Group ratings:** Incomplete Incomplete features are those that do not have enough functionality to satisfy real world use cases. Experimental Experimental features should be used with extreme caution. They are likely to have little or no upstream testing, and are therefore likely to contain bugs. Complete For a feature to be considered complete, it must have: * complete API docs (concept and REST call definition) * complete Administrator docs * tempest tests that define if the feature works correctly * sufficient functionality and reliability to be useful in real world scenarios * a reasonable expectation that the feature will be supported long-term Complete and Required There are various reasons why a complete feature may be required, but generally it is when all drivers support that feature. New drivers need to prove they support all required features before they are allowed in upstream Nova. Required features are those that any new technology must support before being allowed into tree. The larger the list, the more features are available on all Nova based clouds. Deprecated Deprecated features are those that are scheduled to be removed in a future major release of Nova. If a feature is marked as complete, it should never be deprecated. If a feature is incomplete or experimental for several releases, it runs the risk of being deprecated and later removed from the code base. Deployment Rating for a Feature Group -------------------------------------- The deployment rating refers to the state of the tests for each Feature Group on a particular deployment. **Deployment ratings:** Unknown No data is available. Not Implemented No tests exist. Implemented Self declared that the tempest tests pass. Regularly Tested Tested by third party CI. Checked Tested as part of the check or gate queue. The eventual goal is to automate this list from a third party CI reporting system, but currently we document manual inspections in an ini file. Ideally, we will review the list at every milestone. 
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/user/feature-matrix-gp.ini0000664000175000017500000003222000000000000021652 0ustar00zuulzuul00000000000000# # Lists all the CI jobs as targets # [target.libvirt-kvm] title=libvirt+kvm (x86 & ppc64) link=http://docs.openstack.org/infra/manual/developers.html#project-gating [target.libvirt-kvm-s390] title=libvirt+kvm (s390x) link=http://docs.openstack.org/infra/manual/developers.html#project-gating [target.libvirt-virtuozzo-ct] title=libvirt+virtuozzo CT link=https://wiki.openstack.org/wiki/ThirdPartySystems/Virtuozzo_CI [target.libvirt-virtuozzo-vm] title=libvirt+virtuozzo VM link=https://wiki.openstack.org/wiki/ThirdPartySystems/Virtuozzo_Storage_CI [target.libvirt-xen] title=libvirt+xen link=https://wiki.openstack.org/wiki/ThirdPartySystems/XenProject_CI [target.xenserver] title=XenServer CI link=https://wiki.openstack.org/wiki/XenServer/XenServer_CI [target.vmware] title=VMware CI link=https://wiki.openstack.org/wiki/NovaVMware/Minesweeper [target.hyperv] title=Hyper-V CI link=https://wiki.openstack.org/wiki/ThirdPartySystems/Hyper-V_CI [target.zvm] title=IBM zVM CI link=https://wiki.openstack.org/wiki/ThirdPartySystems/IBM_z/VM_CI [target.ironic] title=Ironic CI link= [target.powervm] title=IBM PowerVM CI link=https://wiki.openstack.org/wiki/ThirdPartySystems/IBM_PowerVM_CI # # Lists all features # # Includes information on the feature, its maturity, status, # links to admin docs, api docs and tempest test uuids. # # It then lists the current state for each of the about CI jobs. # It is hoped this mapping will eventually be automated. # # This doesn't include things like Server metadata, Server tagging, # or Lock Server, or Keypair CRUD as they can all be tested independently # of the nova virt driver used. # [operation.create-delete-server] title=Create Server and Delete Server notes=This includes creating a server, and deleting a server. Specifically this is about booting a server from a glance image using the default disk and network configuration. maturity=complete api_doc_link=https://docs.openstack.org/api-ref/compute/#servers-servers admin_doc_link=https://docs.openstack.org/nova/latest/user/launch-instances.html tempest_test_uuids=9a438d88-10c6-4bcd-8b5b-5b6e25e1346f;585e934c-448e-43c4-acbf-d06a9b899997 libvirt-kvm=complete libvirt-kvm-s390=unknown libvirt-virtuozzo-ct=partial driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is implemented. libvirt-virtuozzo-vm=partial driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented. libvirt-xen=complete xenserver=complete vmware=complete hyperv=complete ironic=unknown powervm=complete zvm=complete [operation.snapshot-server] title=Snapshot Server notes=This is creating a glance image from the currently running server. maturity=complete api_doc_link=https://docs.openstack.org/api-ref/compute/?expanded=#servers-run-an-action-servers-action admin_doc_link=https://docs.openstack.org/glance/latest/admin/troubleshooting.html tempest_test_uuids=aaacd1d0-55a2-4ce8-818a-b5439df8adc9 cli= libvirt-kvm=complete libvirt-kvm-s390=unknown libvirt-virtuozzo-ct=partial driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is implemented. libvirt-virtuozzo-vm=partial driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented. 
libvirt-xen=complete xenserver=complete vmware=unknown hyperv=unknown ironic=unknown powervm=complete zvm=complete [operation.power-ops] title=Server power ops notes=This includes reboot, shutdown and start. maturity=complete api_doc_link=https://docs.openstack.org/api-ref/compute/?expanded=#servers-run-an-action-servers-action tempest_test_uuids=2cb1baf6-ac8d-4429-bf0d-ba8a0ba53e32;af8eafd4-38a7-4a4b-bdbc-75145a580560 cli= libvirt-kvm=complete libvirt-kvm-s390=unknown libvirt-virtuozzo-ct=partial driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is implemented. libvirt-virtuozzo-vm=partial driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented. libvirt-xen=complete xenserver=complete vmware=complete hyperv=complete ironic=unknown powervm=complete zvm=complete [operation.rebuild-server] title=Rebuild Server notes=You can rebuild a server, optionally specifying the glance image to use. maturity=complete api_doc_link=https://docs.openstack.org/api-ref/compute/?expanded=#servers-run-an-action-servers-action tempest_test_uuids=aaa6cdf3-55a7-461a-add9-1c8596b9a07c cli= libvirt-kvm=complete libvirt-kvm-s390=unknown libvirt-virtuozzo-ct=partial driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is implemented. libvirt-virtuozzo-vm=partial driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented. libvirt-xen=complete xenserver=complete vmware=complete hyperv=complete ironic=unknown powervm=missing zvm=missing [operation.resize-server] title=Resize Server notes=You resize a server to a new flavor, then confirm or revert that operation. maturity=complete api_doc_link=https://docs.openstack.org/api-ref/compute/?expanded=#servers-run-an-action-servers-action tempest_test_uuids=1499262a-9328-4eda-9068-db1ac57498d2 cli= libvirt-kvm=complete libvirt-kvm-s390=unknown libvirt-virtuozzo-ct=complete libvirt-virtuozzo-vm=partial driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented. libvirt-xen=complete xenserver=complete vmware=complete hyperv=complete ironic=unknown powervm=missing zvm=missing [operation.server-volume-ops] title=Volume Operations notes=This is about attaching volumes, detaching volumes. maturity=complete api_doc_link=https://docs.openstack.org/api-ref/compute/#servers-with-volume-attachments-servers-os-volume-attachments admin_doc_link=https://docs.openstack.org/cinder/latest/admin/blockstorage-manage-volumes.html tempest_test_uuids=fff42874-7db5-4487-a8e1-ddda5fb5288d cli= libvirt-kvm=complete libvirt-kvm-s390=unknown libvirt-virtuozzo-ct=complete libvirt-virtuozzo-vm=complete libvirt-xen=complete xenserver=complete vmware=complete hyperv=complete ironic=missing powervm=complete driver-notes-powervm=This is not tested for every CI run. Add a "powervm:volume-check" comment to trigger a CI job running volume tests. zvm=missing [operation.server-bdm] title=Custom disk configurations on boot notes=This is about supporting all the features of BDMv2. This includes booting from a volume, in various ways, and specifying a custom set of ephemeral disks. Note some drivers only supports part of what the API allows. 
maturity=complete api_doc_link=https://docs.openstack.org/api-ref/compute/?expanded=create-image-createimage-action-detail#create-server admin_doc_link=https://docs.openstack.org/nova/latest/user/block-device-mapping.html tempest_test_uuids=557cd2c2-4eb8-4dce-98be-f86765ff311b, 36c34c67-7b54-4b59-b188-02a2f458a63b cli= libvirt-kvm=complete libvirt-kvm-s390=unknown libvirt-virtuozzo-ct=missing libvirt-virtuozzo-vm=complete libvirt-xen=complete xenserver=partial driver-notes-xenserver=This is not tested in a CI system, and only partially implemented. vmware=partial driver-notes-vmware=This is not tested in a CI system, but it is implemented. hyperv=complete:n ironic=missing powervm=missing zvm=missing [operation.server-neutron] title=Custom neutron configurations on boot notes=This is about supporting booting from one or more neutron ports, or all the related short cuts such as booting a specified network. This does not include SR-IOV or similar, just simple neutron ports. maturity=complete api_doc_link=https://docs.openstack.org/api-ref/compute/?&expanded=create-server-detail admin_doc_link= tempest_test_uuids=2f3a0127-95c7-4977-92d2-bc5aec602fb4 cli= libvirt-kvm=complete libvirt-kvm-s390=unknown libvirt-virtuozzo-ct=unknown libvirt-virtuozzo-vm=unknown libvirt-xen=partial driver-notes-libvirt-xen=This is not tested in a CI system, but it is implemented. xenserver=partial driver-notes-xenserver=This is not tested in a CI system, but it is implemented. vmware=partial driver-notes-vmware=This is not tested in a CI system, but it is implemented. hyperv=partial driver-notes-hyperv=This is not tested in a CI system, but it is implemented. ironic=missing powervm=complete zvm=partial driver-notes-zvm=This is not tested in a CI system, but it is implemented. [operation.server-pause] title=Pause a Server notes=This is pause and unpause a server, where the state is held in memory. maturity=complete api_doc_link=https://docs.openstack.org/api-ref/compute/?#pause-server-pause-action admin_doc_link= tempest_test_uuids=bd61a9fd-062f-4670-972b-2d6c3e3b9e73 cli= libvirt-kvm=complete libvirt-kvm-s390=unknown libvirt-virtuozzo-ct=missing libvirt-virtuozzo-vm=partial driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented. libvirt-xen=complete xenserver=complete vmware=partial driver-notes-vmware=This is not tested in a CI system, but it is implemented. hyperv=complete ironic=missing powervm=missing zvm=complete [operation.server-suspend] title=Suspend a Server notes=This suspend and resume a server, where the state is held on disk. maturity=complete api_doc_link=https://docs.openstack.org/api-ref/compute/?expanded=suspend-server-suspend-action-detail admin_doc_link= tempest_test_uuids=0d8ee21e-b749-462d-83da-b85b41c86c7f cli= libvirt-kvm=complete libvirt-kvm-s390=unknown libvirt-virtuozzo-ct=partial driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is implemented. libvirt-virtuozzo-vm=partial driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented. libvirt-xen=complete xenserver=complete vmware=complete hyperv=complete ironic=missing powervm=missing zvm=missing [operation.server-consoleoutput] title=Server console output notes=This gets the current server console output. 
maturity=complete api_doc_link=https://docs.openstack.org/api-ref/compute/#show-console-output-os-getconsoleoutput-action admin_doc_link= tempest_test_uuids=4b8867e6-fffa-4d54-b1d1-6fdda57be2f3 cli= libvirt-kvm=complete libvirt-kvm-s390=unknown libvirt-virtuozzo-ct=unknown libvirt-virtuozzo-vm=unknown libvirt-xen=complete xenserver=complete vmware=partial driver-notes-vmware=This is not tested in a CI system, but it is implemented. hyperv=partial driver-notes-hyperv=This is not tested in a CI system, but it is implemented. ironic=missing powervm=complete zvm=complete [operation.server-rescue] title=Server Rescue notes=This boots a server with a new root disk from the specified glance image to allow a user to fix a boot partition configuration, or similar. maturity=complete api_doc_link=https://docs.openstack.org/api-ref/compute/#rescue-server-rescue-action admin_doc_link= tempest_test_uuids=fd032140-714c-42e4-a8fd-adcd8df06be6;70cdb8a1-89f8-437d-9448-8844fd82bf46 cli= libvirt-kvm=complete libvirt-kvm-s390=unknown libvirt-virtuozzo-ct=partial driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is implemented. libvirt-virtuozzo-vm=complete libvirt-xen=complete xenserver=complete vmware=complete hyperv=partial driver-notes-hyperv=This is not tested in a CI system, but it is implemented. ironic=missing powervm=missing zvm=missing [operation.server-configdrive] title=Server Config Drive notes=This ensures the user data provided by the user when booting a server is available in one of the expected config drive locations. maturity=complete api_doc_link=https://docs.openstack.org/api-ref/compute/#create-server admin_doc_link=https://docs.openstack.org/nova/latest/admin/config-drive.html tempest_test_uuids=7fff3fb3-91d8-4fd0-bd7d-0204f1f180ba cli= libvirt-kvm=complete libvirt-kvm-s390=unknown libvirt-virtuozzo-ct=missing libvirt-virtuozzo-vm=partial driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented. libvirt-xen=complete xenserver=complete vmware=complete hyperv=complete ironic=partial driver-notes-ironic=This is not tested in a CI system, but it is implemented. powervm=complete zvm=complete [operation.server-changepassword] title=Server Change Password notes=The ability to reset the password of a user within the server. maturity=experimental api_doc_link=https://docs.openstack.org/api-ref/compute/#change-administrative-password-changepassword-action admin_doc_link= tempest_test_uuids=6158df09-4b82-4ab3-af6d-29cf36af858d cli= libvirt-kvm=partial driver-notes-libvirt-kvm=This is not tested in a CI system, but it is implemented. libvirt-kvm-s390=unknown libvirt-virtuozzo-ct=missing libvirt-virtuozzo-vm=missing libvirt-xen=missing xenserver=partial driver-notes-xenserver=This is not tested in a CI system, but it is implemented. vmware=missing hyperv=partial driver-notes-hyperv=This is not tested in a CI system, but it is implemented. ironic=missing powervm=missing zvm=missing [operation.server-shelve] title=Server Shelve and Unshelve notes=The ability to keep a server logically alive, but not using any cloud resources. For local disk based instances, this involves taking a snapshot, called offloading. 
maturity=complete api_doc_link=https://docs.openstack.org/api-ref/compute/#shelve-server-shelve-action admin_doc_link= tempest_test_uuids=1164e700-0af0-4a4c-8792-35909a88743c,c1b6318c-b9da-490b-9c67-9339b627271f cli= libvirt-kvm=complete libvirt-kvm-s390=unknown libvirt-virtuozzo-ct=missing libvirt-virtuozzo-vm=complete libvirt-xen=complete xenserver=complete vmware=missing hyperv=complete ironic=missing powervm=complete zvm=missing ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/user/feature-matrix-hpc.ini0000664000175000017500000000513300000000000022021 0ustar00zuulzuul00000000000000[target.libvirt-kvm] title=libvirt+kvm (x86 & ppc64) link=http://docs.openstack.org/infra/manual/developers.html#project-gating [target.libvirt-kvm-s390] title=libvirt+kvm (s390x) link=http://docs.openstack.org/infra/manual/developers.html#project-gating [target.libvirt-virtuozzo-ct] title=libvirt+virtuozzo CT link=https://wiki.openstack.org/wiki/ThirdPartySystems/Virtuozzo_CI [target.libvirt-virtuozzo-vm] title=libvirt+virtuozzo VM link=https://wiki.openstack.org/wiki/ThirdPartySystems/Virtuozzo_Storage_CI [target.libvirt-xen] title=libvirt+xen link=https://wiki.openstack.org/wiki/ThirdPartySystems/XenProject_CI [target.xenserver] title=XenServer CI link=https://wiki.openstack.org/wiki/XenServer/XenServer_CI [target.vmware] title=VMware CI link=https://wiki.openstack.org/wiki/NovaVMware/Minesweeper [target.hyperv] title=Hyper-V CI link=https://wiki.openstack.org/wiki/ThirdPartySystems/Hyper-V_CI [target.ironic] title=Ironic link=http://docs.openstack.org/infra/manual/developers.html#project-gating [target.powervm] title=PowerVM CI link=https://wiki.openstack.org/wiki/ThirdPartySystems/IBM_PowerVM_CI [operation.gpu-passthrough] title=GPU Passthrough notes=The PCI passthrough feature in OpenStack allows full access and direct control of a physical PCI device in guests. This mechanism is generic for any devices that can be attached to a PCI bus. Correct driver installation is the only requirement for the guest to properly use the devices. maturity=experimental api_doc_link=https://docs.openstack.org/api-ref/compute/#create-server admin_doc_link=https://docs.openstack.org/nova/latest/admin/pci-passthrough.html tempest_test_uuids=9a438d88-10c6-4bcd-8b5b-5b6e25e1346f;585e934c-448e-43c4-acbf-d06a9b899997 libvirt-kvm=complete:l libvirt-kvm-s390=unknown libvirt-virtuozzo-ct=partial driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is implemented. libvirt-virtuozzo-vm=partial driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented. 
libvirt-xen=missing xenserver=partial:k vmware=missing hyperv=missing ironic=unknown powervm=missing [operation.virtual-gpu] title=Virtual GPUs notes=Attach a virtual GPU to an instance at server creation time maturity=experimental api_doc_link=https://docs.openstack.org/api-ref/compute/#create-server admin_doc_link=https://docs.openstack.org/nova/latest/admin/virtual-gpu.html libvirt-kvm=partial:queens libvirt-kvm-s390=unknown libvirt-virtuozzo-ct=unknown libvirt-virtuozzo-vm=unknown libvirt-xen=unknown xenserver=partial:queens vmware=missing hyperv=missing ironic=missing powervm=missing ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/user/feature-matrix-nfv.ini0000664000175000017500000000370600000000000022044 0ustar00zuulzuul00000000000000# # Lists all the CI jobs as targets # [target.libvirt-kvm] title=libvirt+kvm (x86 & ppc64) link=http://docs.openstack.org/infra/manual/developers.html#project-gating [target.libvirt-kvm-s390] title=libvirt+kvm (s390x) link=http://docs.openstack.org/infra/manual/developers.html#project-gating [target.libvirt-xen] title=libvirt+xen link=https://wiki.openstack.org/wiki/ThirdPartySystems/XenProject_CI # # Lists all features # # Includes information on the feature, its maturity, status, # links to admin docs, api docs and tempest test uuids. # # It then lists the current state for each of the about CI jobs. # It is hoped this mapping will eventually be automated. # [operation.numa-placement] title=NUMA Placement notes=Configure placement of instance vCPUs and memory across host NUMA nodes maturity=experimental api_doc_link=https://docs.openstack.org/api-ref/compute/#create-server admin_doc_link=https://docs.openstack.org/nova/latest/admin/cpu-topologies.html#customizing-instance-cpu-pinning-policies tempest_test_uuids=9a438d88-10c6-4bcd-8b5b-5b6e25e1346f;585e934c-448e-43c4-acbf-d06a9b899997 libvirt-kvm=partial libvirt-kvm-s390=unknown libvirt-xen=missing [operation.cpu-pinning-policy] title=CPU Pinning Policy notes=Enable/disable binding of instance vCPUs to host CPUs maturity=experimental api_doc_link=https://docs.openstack.org/api-ref/compute/#create-server admin_doc_link=https://docs.openstack.org/nova/latest/admin/cpu-topologies.html#customizing-instance-cpu-pinning-policies libvirt-kvm=partial libvirt-kvm-s390=unknown libvirt-xen=missing [operation.cpu-pinning-thread-policy] title=CPU Pinning Thread Policy notes=Configure usage of host hardware threads when pinning is used maturity=experimental api_doc_link=https://docs.openstack.org/api-ref/compute/#create-server admin_doc_link=https://docs.openstack.org/nova/latest/admin/cpu-topologies.html#customizing-instance-cpu-pinning-policies libvirt-kvm=partial libvirt-kvm-s390=unknown libvirt-xen=missing ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/user/filter-scheduler.rst0000664000175000017500000011033300000000000021605 0ustar00zuulzuul00000000000000Filter Scheduler ================ The **Filter Scheduler** supports `filtering` and `weighting` to make informed decisions on where a new instance should be created. This Scheduler supports working with Compute Nodes only. Filtering --------- .. image:: /_static/images/filtering-workflow-1.png During its work Filter Scheduler iterates over all found compute nodes, evaluating each against a set of filters. The list of resulting hosts is ordered by weighers. 
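To make the workflow concrete before looking at the individual filters and
weighers, the following is a deliberately simplified sketch of the
filter-then-weigh loop. It is illustrative only and is not the actual
``nova.scheduler`` code: the ``host_passes`` call matches the filter
interface described below, while the ``weigh`` method assumed on the weigher
objects is a stand-in for the normalization and multiplier handling performed
by the real weight handler (see the Weights section later in this document):

.. code-block:: python

   # Illustrative sketch only -- not the actual Nova scheduler code.
   def filter_and_weigh(hosts, spec_obj, enabled_filters, weighers):
       # Filters discard hosts that cannot satisfy the request.
       candidates = [
           host for host in hosts
           if all(f.host_passes(host, spec_obj) for f in enabled_filters)
       ]
       if not candidates:
           raise RuntimeError('NoValidHost: no host passed all filters')

       def total_weight(host):
           # Stand-in for the normalized, multiplier-adjusted sum that
           # the real weight handler computes.
           return sum(w.weigh(host, spec_obj) for w in weighers)

       # Hosts are ordered best-first; the scheduler picks from the top.
       return sorted(candidates, key=total_weight, reverse=True)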
The Scheduler then chooses hosts for the requested number of instances, choosing the most weighted hosts. For a specific filter to succeed for a specific host, the filter matches the user request against the state of the host plus some extra magic as defined by each filter (described in more detail below). If the Scheduler cannot find candidates for the next instance, it means that there are no appropriate hosts where that instance can be scheduled. The Filter Scheduler has to be quite flexible to support the required variety of `filtering` and `weighting` strategies. If this flexibility is insufficient you can implement `your own filtering algorithm`. There are many standard filter classes which may be used (:mod:`nova.scheduler.filters`): * |AllHostsFilter| - does no filtering. It passes all the available hosts. * |ImagePropertiesFilter| - filters hosts based on properties defined on the instance's image. It passes hosts that can support the properties specified on the image used by the instance. * |AvailabilityZoneFilter| - filters hosts by availability zone. It passes hosts matching the availability zone specified in the instance properties. Use a comma to specify multiple zones. The filter will then ensure it matches any zone specified. * |ComputeCapabilitiesFilter| - checks that the capabilities provided by the host compute service satisfy any extra specifications associated with the instance type. It passes hosts that can create the specified instance type. If an extra specs key contains a colon (:), anything before the colon is treated as a namespace and anything after the colon is treated as the key to be matched. If a namespace is present and is not ``capabilities``, the filter ignores the namespace. For example ``capabilities:cpu_info:features`` is a valid scope format. For backward compatibility, when a key doesn't contain a colon (:), the key's contents are important. If this key is an attribute of HostState object, like ``free_disk_mb``, the filter also treats the extra specs key as the key to be matched. If not, the filter will ignore the key. The extra specifications can have an operator at the beginning of the value string of a key/value pair. If there is no operator specified, then a default operator of ``s==`` is used. 
Valid operators are: ::

   * = (equal to or greater than as a number; same as vcpus case)
   * == (equal to as a number)
   * != (not equal to as a number)
   * >= (greater than or equal to as a number)
   * <= (less than or equal to as a number)
   * s== (equal to as a string)
   * s!= (not equal to as a string)
   * s>= (greater than or equal to as a string)
   * s> (greater than as a string)
   * s<= (less than or equal to as a string)
   * s< (less than as a string)
   * <in> (substring)
   * <all-in> (all elements contained in collection)
   * <or> (find one of these)

  Examples are: ">= 5", "s== 2.1.0", "<in> gcc", "<all-in> aes mmx", and "<or> fpu gpu".

  Some of the attributes that can be used as useful keys, and example values for them, are: ::

   * free_ram_mb (compared with a number, values like ">= 4096")
   * free_disk_mb (compared with a number, values like ">= 10240")
   * host (compared with a string, values like: "<in> compute", "s== compute_01")
   * hypervisor_type (compared with a string, values like: "s== QEMU", "s== powervm")
   * hypervisor_version (compared with a number, values like: ">= 1005003", "== 2000000")
   * num_instances (compared with a number, values like: "<= 10")
   * num_io_ops (compared with a number, values like: "<= 5")
   * vcpus_total (compared with a number, values like: "= 48", ">=24")
   * vcpus_used (compared with a number, values like: "= 0", "<= 10")

* |AggregateInstanceExtraSpecsFilter| - checks that the aggregate metadata satisfies any extra specifications associated with the instance type (that have no scope or are scoped with ``aggregate_instance_extra_specs``). It passes hosts that can create the specified instance type. The extra specifications can have the same operators as |ComputeCapabilitiesFilter|. To specify multiple values for the same key use a comma. E.g., "value1,value2". All hosts are passed if no extra_specs are specified.

* |ComputeFilter| - passes all hosts that are operational and enabled.

* |AggregateCoreFilter| - DEPRECATED; filters hosts by CPU core number with per-aggregate :oslo.config:option:`cpu_allocation_ratio` setting. If no per-aggregate value is found, it will fall back to the global default :oslo.config:option:`cpu_allocation_ratio`. If more than one value is found for a host (meaning the host is in two different aggregates with different ratio settings), the minimum value will be used.

* |IsolatedHostsFilter| - filter based on :oslo.config:option:`filter_scheduler.isolated_images`, :oslo.config:option:`filter_scheduler.isolated_hosts` and :oslo.config:option:`filter_scheduler.restrict_isolated_hosts_to_isolated_images` flags.

* |JsonFilter| - allows simple JSON-based grammar for selecting hosts.

* |AggregateRamFilter| - DEPRECATED; filters hosts by RAM with per-aggregate :oslo.config:option:`ram_allocation_ratio` setting. If no per-aggregate value is found, it will fall back to the global default :oslo.config:option:`ram_allocation_ratio`. If more than one value is found for a host (meaning the host is in two different aggregates with different ratio settings), the minimum value will be used.

* |AggregateDiskFilter| - DEPRECATED; filters hosts by disk allocation with per-aggregate :oslo.config:option:`disk_allocation_ratio` setting. If no per-aggregate value is found, it will fall back to the global default :oslo.config:option:`disk_allocation_ratio`. If more than one value is found for a host (meaning the host is in two or more different aggregates with different ratio settings), the minimum value will be used.

* |NumInstancesFilter| - filters compute nodes by number of instances. Nodes with too many instances will be filtered.
The host will be ignored by the scheduler if more than :oslo.config:option:`filter_scheduler.max_instances_per_host` already exist on the host.

* |AggregateNumInstancesFilter| - filters hosts by number of instances with per-aggregate :oslo.config:option:`filter_scheduler.max_instances_per_host` setting. If no per-aggregate value is found, it will fall back to the global default :oslo.config:option:`filter_scheduler.max_instances_per_host`. If more than one value is found for a host (meaning the host is in two or more different aggregates with different max instances per host settings), the minimum value will be used.

* |IoOpsFilter| - filters hosts by the number of concurrent I/O operations on them. Hosts with too many concurrent I/O operations will be filtered out. The :oslo.config:option:`filter_scheduler.max_io_ops_per_host` setting specifies the maximum number of I/O intensive instances allowed to run on a host; the host will be ignored by the scheduler if more than :oslo.config:option:`filter_scheduler.max_io_ops_per_host` such instances (for example instances being built, resized or snapshotted) are running on it.

* |AggregateIoOpsFilter| - filters hosts by I/O operations with per-aggregate :oslo.config:option:`filter_scheduler.max_io_ops_per_host` setting. If no per-aggregate value is found, it will fall back to the global default :oslo.config:option:`filter_scheduler.max_io_ops_per_host`. If more than one value is found for a host (meaning the host is in two or more different aggregates with different max io operations settings), the minimum value will be used.

* |PciPassthroughFilter| - Filter that schedules instances on a host if the host has devices to meet the device requests in the 'extra_specs' for the flavor.

* |SimpleCIDRAffinityFilter| - allows a new instance on a host within the same IP block.

* |DifferentHostFilter| - allows the instance on a different host from a set of instances.

* |SameHostFilter| - puts the instance on the same host as another instance in a set of instances.

* |RetryFilter| - DEPRECATED; filters hosts that have been attempted for scheduling. Only passes hosts that have not been previously attempted.

* |AggregateTypeAffinityFilter| - limits instance_type by aggregate. This filter passes hosts if no instance_type key is set or the instance_type aggregate metadata value contains the name of the instance_type requested. The value of the instance_type metadata entry is a string that may contain either a single instance_type name or a comma separated list of instance_type names, e.g. 'm1.nano' or "m1.nano,m1.small".

* |ServerGroupAntiAffinityFilter| - This filter implements anti-affinity for a server group. First you must create a server group with a policy of 'anti-affinity' via the server groups API. Then, when you boot a new server, provide a scheduler hint of 'group=<uuid>' where <uuid> is the UUID of the server group you created. This will result in the server getting added to the group. When the server gets scheduled, anti-affinity will be enforced among all servers in that group.

* |ServerGroupAffinityFilter| - This filter works the same way as ServerGroupAntiAffinityFilter. The difference is that when you create the server group, you should specify a policy of 'affinity'.

* |AggregateMultiTenancyIsolation| - isolate tenants in specific aggregates. To specify multiple tenants use a comma, e.g. "tenant1,tenant2".

* |AggregateImagePropertiesIsolation| - isolates hosts based on image properties and aggregate metadata. Use a comma to specify multiple values for the same property. The filter will then ensure at least one value matches.
* |MetricsFilter| - filters hosts based on metrics weight_setting. Only hosts with the available metrics are passed. * |NUMATopologyFilter| - filters hosts based on the NUMA topology requested by the instance, if any. Now we can focus on these standard filter classes in some detail. Some filters such as |AllHostsFilter| and |NumInstancesFilter| are relatively simple and can be understood from the code. For example, |NumInstancesFilter| has the following implementation: .. code-block:: python class NumInstancesFilter(filters.BaseHostFilter): """Filter out hosts with too many instances.""" def _get_max_instances_per_host(self, host_state, spec_obj): return CONF.filter_scheduler.max_instances_per_host def host_passes(self, host_state, spec_obj): num_instances = host_state.num_instances max_instances = self._get_max_instances_per_host(host_state, spec_obj) passes = num_instances < max_instances return passes Here :oslo.config:option:`filter_scheduler.max_instances_per_host` means the maximum number of instances that can be on a host. The |AvailabilityZoneFilter| looks at the availability zone of compute node and availability zone from the properties of the request. Each compute service has its own availability zone. So deployment engineers have an option to run scheduler with availability zones support and can configure availability zones on each compute host. This class's method ``host_passes`` returns ``True`` if availability zone mentioned in request is the same on the current compute host. The |ImagePropertiesFilter| filters hosts based on the architecture, hypervisor type and virtual machine mode specified in the instance. For example, an instance might require a host that supports the ARM architecture on a qemu compute host. The |ImagePropertiesFilter| will only pass hosts that can satisfy this request. These instance properties are populated from properties defined on the instance's image. E.g. an image can be decorated with these properties using ``glance image-update img-uuid --property architecture=arm --property hypervisor_type=qemu`` Only hosts that satisfy these requirements will pass the |ImagePropertiesFilter|. |ComputeCapabilitiesFilter| checks if the host satisfies any ``extra_specs`` specified on the instance type. The ``extra_specs`` can contain key/value pairs. The key for the filter is either non-scope format (i.e. no ``:`` contained), or scope format in capabilities scope (i.e. ``capabilities:xxx:yyy``). One example of capabilities scope is ``capabilities:cpu_info:features``, which will match host's cpu features capabilities. The |ComputeCapabilitiesFilter| will only pass hosts whose capabilities satisfy the requested specifications. All hosts are passed if no ``extra_specs`` are specified. |ComputeFilter| is quite simple and passes any host whose compute service is enabled and operational. Now we are going to |IsolatedHostsFilter|. There can be some special hosts reserved for specific images. These hosts are called **isolated**. So the images to run on the isolated hosts are also called isolated. The filter checks if :oslo.config:option:`filter_scheduler.isolated_images` flag named in instance specifications is the same as the host specified in :oslo.config:option:`filter_scheduler.isolated_hosts`. Isolated hosts can run non-isolated images if the flag :oslo.config:option:`filter_scheduler.restrict_isolated_hosts_to_isolated_images` is set to false. 
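To summarize the isolation rules just described, here is a deliberately
simplified sketch of the decision the |IsolatedHostsFilter| makes. It is
illustrative only and not the actual implementation; the real filter derives
these values from the request spec and the configuration options listed
above:

.. code-block:: python

   # Illustrative sketch of the isolation rules described above.
   def host_passes(image_is_isolated, host_is_isolated,
                   restrict_isolated_hosts_to_isolated_images):
       if image_is_isolated:
           # Isolated images may only be scheduled onto isolated hosts.
           return host_is_isolated
       if host_is_isolated and restrict_isolated_hosts_to_isolated_images:
           # Isolated hosts are reserved exclusively for isolated images.
           return False
       return True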
|DifferentHostFilter| - method ``host_passes`` returns ``True`` if the host on which an instance is to be placed is different from all the hosts used by a given set of instances.

|SameHostFilter| does the opposite of what |DifferentHostFilter| does: ``host_passes`` returns ``True`` if the host we want to place an instance on is one of the hosts used by a given set of instances.

|SimpleCIDRAffinityFilter| looks at the subnet mask and checks whether the network address of the current host is in the same subnetwork as the one specified in the request.

|JsonFilter| - this filter lets you write complicated queries for host capabilities filtering, based on a simple JSON-like syntax. The following operations can be used on host state properties: ``=``, ``<``, ``>``, ``in``, ``<=``, ``>=``, and they can be combined with the logical operations ``not``, ``or`` and ``and``. For example, the following query can be found in tests: ::

   ['and',
       ['>=', '$free_ram_mb', 1024],
       ['>=', '$free_disk_mb', 200 * 1024]
   ]

This query will filter all hosts with free RAM greater than or equal to 1024 MB and, at the same time, free disk space greater than or equal to 200 GB.

Many filters use data from ``scheduler_hints``, which the user provides at the moment the new server is created. The only exception to this rule is the |JsonFilter|, which takes its data directly from the scheduler's ``HostState`` data structure. Variable naming, such as the ``$free_ram_mb`` example above, should be based on those attributes.

The |RetryFilter| filters hosts that have already been attempted for scheduling. It only passes hosts that have not been previously attempted. If a compute node raises an exception when spawning an instance, the compute manager will reschedule it by adding the failing host to a retry dictionary, so that the RetryFilter will not accept it as a possible destination. That means that if all of your compute nodes are failing, the RetryFilter will return 0 hosts and the scheduler will raise a NoValidHost exception, even if the problem is related to 1:N compute nodes. If you see that case in the scheduler logs, then your problem is most likely a compute problem and you should check the compute logs.

.. note:: The ``RetryFilter`` is deprecated since the 20.0.0 (Train) release and will be removed in an upcoming release. Since the 17.0.0 (Queens) release, the scheduler has provided alternate hosts for rescheduling so the scheduler does not need to be called during a reschedule which makes the ``RetryFilter`` useless. See the `Return Alternate Hosts`_ spec for details.

.. _Return Alternate Hosts: https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/return-alternate-hosts.html

The |NUMATopologyFilter| considers the NUMA topology that was specified for the instance through the use of flavor extra_specs in combination with the image properties, as described in detail in the related nova-spec document:

* https://opendev.org/openstack/nova-specs/src/branch/master/specs/juno/implemented/virt-driver-numa-placement.rst

and tries to match it with the topology exposed by the host, accounting for the :oslo.config:option:`ram_allocation_ratio` and :oslo.config:option:`cpu_allocation_ratio` for over-subscription. The filtering is done in the following manner:

* The filter will attempt to pack instance cells onto host cells.
* It will consider the standard over-subscription limits for each host NUMA cell, and provide limits to the compute host accordingly (as mentioned above).
* If instance has no topology defined, it will be considered for any host. * If instance has a topology defined, it will be considered only for NUMA capable hosts. Configuring Filters ------------------- To use filters you specify two settings: * :oslo.config:option:`filter_scheduler.available_filters` - Defines filter classes made available to the scheduler. This setting can be used multiple times. * :oslo.config:option:`filter_scheduler.enabled_filters` - Of the available filters, defines those that the scheduler uses by default. The default values for these settings in nova.conf are: :: --filter_scheduler.available_filters=nova.scheduler.filters.all_filters --filter_scheduler.enabled_filters=ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter With this configuration, all filters in ``nova.scheduler.filters`` would be available, and by default the |ComputeFilter|, |AvailabilityZoneFilter|, |ComputeCapabilitiesFilter|, |ImagePropertiesFilter|, |ServerGroupAntiAffinityFilter|, and |ServerGroupAffinityFilter| would be used. Each filter selects hosts in a different way and has different costs. The order of :oslo.config:option:`filter_scheduler.enabled_filters` affects scheduling performance. The general suggestion is to filter out invalid hosts as soon as possible to avoid unnecessary costs. We can sort :oslo.config:option:`filter_scheduler.enabled_filters` items by their costs in reverse order. For example, ``ComputeFilter`` is better before any resource calculating filters like ``NUMATopologyFilter``. In medium/large environments having AvailabilityZoneFilter before any capability or resource calculating filters can be useful. .. _custom-scheduler-filters: Writing Your Own Filter ----------------------- To create **your own filter**, you must inherit from |BaseHostFilter| and implement one method: ``host_passes``. This method should return ``True`` if a host passes the filter and return ``False`` elsewhere. It takes two parameters: * the ``HostState`` object allows to get attributes of the host * the ``RequestSpec`` object describes the user request, including the flavor, the image and the scheduler hints For further details about each of those objects and their corresponding attributes, refer to the codebase (at least by looking at the other filters code) or ask for help in the #openstack-nova IRC channel. In addition, if your custom filter uses non-standard extra specs, you must register validators for these extra specs. Examples of validators can be found in the ``nova.api.validation.extra_specs`` module. These should be registered via the ``nova.api.extra_spec_validator`` `entrypoint`__. The module containing your custom filter(s) must be packaged and available in the same environment(s) that the nova controllers, or specifically the :program:`nova-scheduler` and :program:`nova-api` services, are available in. As an example, consider the following sample package, which is the `minimal structure`__ for a standard, setuptools-based Python package: __ https://packaging.python.org/specifications/entry-points/ __ https://python-packaging.readthedocs.io/en/latest/minimal.html .. code-block:: none acmefilter/ acmefilter/ __init__.py validators.py setup.py Where ``__init__.py`` contains: .. 
code-block:: python from oslo_log import log as logging from nova.scheduler import filters LOG = logging.getLogger(__name__) class AcmeFilter(filters.BaseHostFilter): def host_passes(self, host_state, spec_obj): extra_spec = spec_obj.flavor.extra_specs.get('acme:foo') LOG.info("Extra spec value was '%s'", extra_spec) # do meaningful stuff here... return True ``validators.py`` contains: .. code-block:: python from nova.api.validation.extra_specs import base def register(): validators = [ base.ExtraSpecValidator( name='acme:foo', description='My custom extra spec.' value={ 'type': str, 'enum': [ 'bar', 'baz', ], }, ), ] return validators ``setup.py`` contains: .. code-block:: python from setuptools import setup setup( name='acmefilter', version='0.1', description='My custom filter', packages=[ 'acmefilter' ], entry_points={ 'nova.api.extra_spec_validators': [ 'acme = acmefilter.validators', ], }, ) To enable this, you would set the following in :file:`nova.conf`: .. code-block:: ini [filter_scheduler] available_filters = nova.scheduler.filters.all_filters available_filters = acmefilter.AcmeFilter enabled_filters = ComputeFilter,AcmeFilter .. note:: You **must** add custom filters to the list of available filters using the :oslo.config:option:`filter_scheduler.available_filters` config option in addition to enabling them via the :oslo.config:option:`filter_scheduler.enabled_filters` config option. The default ``nova.scheduler.filters.all_filters`` value for the former only includes the filters shipped with nova. With these settings, nova will use the ``FilterScheduler`` for the scheduler driver. All of the standard nova filters and the custom ``AcmeFilter`` filter are available to the ``FilterScheduler``, but just the ``ComputeFilter`` and ``AcmeFilter`` will be used on each request. Weights ------- Filter Scheduler uses the so-called **weights** during its work. A weigher is a way to select the best suitable host from a group of valid hosts by giving weights to all the hosts in the list. In order to prioritize one weigher against another, all the weighers have to define a multiplier that will be applied before computing the weight for a node. All the weights are normalized beforehand so that the multiplier can be applied easily. Therefore the final weight for the object will be:: weight = w1_multiplier * norm(w1) + w2_multiplier * norm(w2) + ... A weigher should be a subclass of ``weights.BaseHostWeigher`` and they can implement both the ``weight_multiplier`` and ``_weight_object`` methods or just implement the ``weight_objects`` method. ``weight_objects`` method is overridden only if you need access to all objects in order to calculate weights, and it just return a list of weights, and not modify the weight of the object directly, since final weights are normalized and computed by ``weight.BaseWeightHandler``. The Filter Scheduler weighs hosts based on the config option `filter_scheduler.weight_classes`, this defaults to `nova.scheduler.weights.all_weighers`, which selects the following weighers: * |RAMWeigher| Compute weight based on available RAM on the compute node. Sort with the largest weight winning. If the multiplier, :oslo.config:option:`filter_scheduler.ram_weight_multiplier`, is negative, the host with least RAM available will win (useful for stacking hosts, instead of spreading). Starting with the Stein release, if per-aggregate value with the key ``ram_weight_multiplier`` is found, this value would be chosen as the ram weight multiplier. 
Otherwise, it will fall back to the :oslo.config:option:`filter_scheduler.ram_weight_multiplier`. If more than one value is found for a host in aggregate metadata, the minimum value will be used. * |CPUWeigher| Compute weight based on available vCPUs on the compute node. Sort with the largest weight winning. If the multiplier, :oslo.config:option:`filter_scheduler.cpu_weight_multiplier`, is negative, the host with least CPUs available will win (useful for stacking hosts, instead of spreading). Starting with the Stein release, if per-aggregate value with the key ``cpu_weight_multiplier`` is found, this value would be chosen as the cpu weight multiplier. Otherwise, it will fall back to the :oslo.config:option:`filter_scheduler.cpu_weight_multiplier`. If more than one value is found for a host in aggregate metadata, the minimum value will be used. * |DiskWeigher| Hosts are weighted and sorted by free disk space with the largest weight winning. If the multiplier is negative, the host with less disk space available will win (useful for stacking hosts, instead of spreading). Starting with the Stein release, if per-aggregate value with the key ``disk_weight_multiplier`` is found, this value would be chosen as the disk weight multiplier. Otherwise, it will fall back to the :oslo.config:option:`filter_scheduler.disk_weight_multiplier`. If more than one value is found for a host in aggregate metadata, the minimum value will be used. * |MetricsWeigher| This weigher can compute the weight based on the compute node host's various metrics. The to-be weighed metrics and their weighing ratio are specified in the configuration file as the followings:: metrics_weight_setting = name1=1.0, name2=-1.0 Starting with the Stein release, if per-aggregate value with the key `metrics_weight_multiplier` is found, this value would be chosen as the metrics weight multiplier. Otherwise, it will fall back to the :oslo.config:option:`metrics.weight_multiplier`. If more than one value is found for a host in aggregate metadata, the minimum value will be used. * |IoOpsWeigher| The weigher can compute the weight based on the compute node host's workload. The default is to preferably choose light workload compute hosts. If the multiplier is positive, the weigher prefer choosing heavy workload compute hosts, the weighing has the opposite effect of the default. Starting with the Stein release, if per-aggregate value with the key ``io_ops_weight_multiplier`` is found, this value would be chosen as the IO ops weight multiplier. Otherwise, it will fall back to the :oslo.config:option:`filter_scheduler.io_ops_weight_multiplier`. If more than one value is found for a host in aggregate metadata, the minimum value will be used. * |PCIWeigher| Compute a weighting based on the number of PCI devices on the host and the number of PCI devices requested by the instance. For example, given three hosts - one with a single PCI device, one with many PCI devices, and one with no PCI devices - nova should prioritise these differently based on the demands of the instance. If the instance requests a single PCI device, then the first of the hosts should be preferred. Similarly, if the instance requests multiple PCI devices, then the second of these hosts would be preferred. Finally, if the instance does not request a PCI device, then the last of these hosts should be preferred. For this to be of any value, at least one of the |PciPassthroughFilter| or |NUMATopologyFilter| filters must be enabled. 
:Configuration Option: ``[filter_scheduler] pci_weight_multiplier``. Only positive values are allowed for the multiplier as a negative value would force non-PCI instances away from non-PCI hosts, thus, causing future scheduling issues. Starting with the Stein release, if per-aggregate value with the key ``pci_weight_multiplier`` is found, this value would be chosen as the pci weight multiplier. Otherwise, it will fall back to the :oslo.config:option:`filter_scheduler.pci_weight_multiplier`. If more than one value is found for a host in aggregate metadata, the minimum value will be used. * |ServerGroupSoftAffinityWeigher| The weigher can compute the weight based on the number of instances that run on the same server group. The largest weight defines the preferred host for the new instance. For the multiplier only a positive value is allowed for the calculation. Starting with the Stein release, if per-aggregate value with the key ``soft_affinity_weight_multiplier`` is found, this value would be chosen as the soft affinity weight multiplier. Otherwise, it will fall back to the :oslo.config:option:`filter_scheduler.soft_affinity_weight_multiplier`. If more than one value is found for a host in aggregate metadata, the minimum value will be used. * |ServerGroupSoftAntiAffinityWeigher| The weigher can compute the weight based on the number of instances that run on the same server group as a negative value. The largest weight defines the preferred host for the new instance. For the multiplier only a positive value is allowed for the calculation. Starting with the Stein release, if per-aggregate value with the key ``soft_anti_affinity_weight_multiplier`` is found, this value would be chosen as the soft anti-affinity weight multiplier. Otherwise, it will fall back to the :oslo.config:option:`filter_scheduler.soft_anti_affinity_weight_multiplier`. If more than one value is found for a host in aggregate metadata, the minimum value will be used. * |BuildFailureWeigher| Weigh hosts by the number of recent failed boot attempts. It considers the build failure counter and can negatively weigh hosts with recent failures. This avoids taking computes fully out of rotation. Starting with the Stein release, if per-aggregate value with the key ``build_failure_weight_multiplier`` is found, this value would be chosen as the build failure weight multiplier. Otherwise, it will fall back to the :oslo.config:option:`filter_scheduler.build_failure_weight_multiplier`. If more than one value is found for a host in aggregate metadata, the minimum value will be used. .. _cross-cell-weigher: * |CrossCellWeigher| Weighs hosts based on which cell they are in. "Local" cells are preferred when moving an instance. Use configuration option :oslo.config:option:`filter_scheduler.cross_cell_move_weight_multiplier` to control the weight. If per-aggregate value with the key `cross_cell_move_weight_multiplier` is found, this value would be chosen as the cross-cell move weight multiplier. Otherwise, it will fall back to the :oslo.config:option:`filter_scheduler.cross_cell_move_weight_multiplier`. If more than one value is found for a host in aggregate metadata, the minimum value will be used. Filter Scheduler makes a local list of acceptable hosts by repeated filtering and weighing. Each time it chooses a host, it virtually consumes resources on it, so subsequent selections can adjust accordingly. It is useful if the customer asks for a large block of instances, because weight is computed for each instance requested. .. 
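In addition to the in-tree weighers listed above, you can ship your own. The
snippet below is a minimal sketch of such a custom weigher, not a definitive
implementation: the module name ``acmefilter.weighers``, the class name
``AcmeWeigher`` and the chosen metric (the number of instances already running
on the host) are illustrative assumptions, and method signatures can vary
slightly between releases.

.. code-block:: python

    from nova.scheduler import weights


    class AcmeWeigher(weights.BaseHostWeigher):
        """Toy weigher that prefers hosts running fewer instances."""

        def _weight_object(self, host_state, weight_properties):
            # Larger raw values win after normalization, so negate the
            # instance count to prefer the least busy hosts.
            return -host_state.num_instances

As with custom filters, such a weigher only takes effect if its class is
listed in :oslo.config:option:`filter_scheduler.weight_classes` and the
package is installed in the environment where the :program:`nova-scheduler`
service runs.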
image:: /_static/images/filtering-workflow-2.png At the end Filter Scheduler sorts selected hosts by their weight and attempts to provision instances on the chosen hosts. P.S.: you can find more examples of using Filter Scheduler and standard filters in :mod:`nova.tests.scheduler`. .. |AllHostsFilter| replace:: :class:`AllHostsFilter ` .. |ImagePropertiesFilter| replace:: :class:`ImagePropertiesFilter ` .. |AvailabilityZoneFilter| replace:: :class:`AvailabilityZoneFilter ` .. |BaseHostFilter| replace:: :class:`BaseHostFilter ` .. |ComputeCapabilitiesFilter| replace:: :class:`ComputeCapabilitiesFilter ` .. |ComputeFilter| replace:: :class:`ComputeFilter ` .. |AggregateCoreFilter| replace:: :class:`AggregateCoreFilter ` .. |IsolatedHostsFilter| replace:: :class:`IsolatedHostsFilter ` .. |JsonFilter| replace:: :class:`JsonFilter ` .. |AggregateRamFilter| replace:: :class:`AggregateRamFilter ` .. |AggregateDiskFilter| replace:: :class:`AggregateDiskFilter ` .. |NumInstancesFilter| replace:: :class:`NumInstancesFilter ` .. |AggregateNumInstancesFilter| replace:: :class:`AggregateNumInstancesFilter ` .. |IoOpsFilter| replace:: :class:`IoOpsFilter ` .. |AggregateIoOpsFilter| replace:: :class:`AggregateIoOpsFilter ` .. |PciPassthroughFilter| replace:: :class:`PciPassthroughFilter ` .. |SimpleCIDRAffinityFilter| replace:: :class:`SimpleCIDRAffinityFilter ` .. |DifferentHostFilter| replace:: :class:`DifferentHostFilter ` .. |SameHostFilter| replace:: :class:`SameHostFilter ` .. |RetryFilter| replace:: :class:`RetryFilter ` .. |AggregateTypeAffinityFilter| replace:: :class:`AggregateTypeAffinityFilter ` .. |ServerGroupAntiAffinityFilter| replace:: :class:`ServerGroupAntiAffinityFilter ` .. |ServerGroupAffinityFilter| replace:: :class:`ServerGroupAffinityFilter ` .. |AggregateInstanceExtraSpecsFilter| replace:: :class:`AggregateInstanceExtraSpecsFilter ` .. |AggregateMultiTenancyIsolation| replace:: :class:`AggregateMultiTenancyIsolation ` .. |NUMATopologyFilter| replace:: :class:`NUMATopologyFilter ` .. |RAMWeigher| replace:: :class:`RAMWeigher ` .. |CPUWeigher| replace:: :class:`CPUWeigher ` .. |AggregateImagePropertiesIsolation| replace:: :class:`AggregateImagePropertiesIsolation ` .. |MetricsFilter| replace:: :class:`MetricsFilter ` .. |MetricsWeigher| replace:: :class:`MetricsWeigher ` .. |IoOpsWeigher| replace:: :class:`IoOpsWeigher ` .. |PCIWeigher| replace:: :class:`PCIWeigher ` .. |ServerGroupSoftAffinityWeigher| replace:: :class:`ServerGroupSoftAffinityWeigher ` .. |ServerGroupSoftAntiAffinityWeigher| replace:: :class:`ServerGroupSoftAntiAffinityWeigher ` .. |DiskWeigher| replace:: :class:`DiskWeigher ` .. |BuildFailureWeigher| replace:: :class:`BuildFailureWeigher ` .. |CrossCellWeigher| replace:: :class:`CrossCellWeigher ` ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/user/flavors.rst0000664000175000017500000010520400000000000020021 0ustar00zuulzuul00000000000000======= Flavors ======= In OpenStack, flavors define the compute, memory, and storage capacity of nova computing instances. To put it simply, a flavor is an available hardware configuration for a server. It defines the *size* of a virtual server that can be launched. .. note:: Flavors can also determine on which compute host a flavor can be used to launch an instance. For information about customizing flavors, refer to :doc:`/admin/flavors`. 
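Flavors are normally inspected and managed with the :command:`openstack`
client, as shown throughout this document. If you prefer to look them up
programmatically, the following sketch uses the OpenStack SDK; the cloud name
``mycloud`` is an assumption and must match an entry in your ``clouds.yaml``.

.. code-block:: python

    import openstack

    # Credentials and endpoints are read from clouds.yaml; "mycloud" is a
    # placeholder for whatever your cloud entry is actually called.
    conn = openstack.connect(cloud='mycloud')

    for flavor in conn.compute.flavors():
        print(flavor.name, flavor.vcpus, flavor.ram, flavor.disk)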
Overview -------- A flavor consists of the following parameters: Flavor ID Unique ID (integer or UUID) for the new flavor. This property is required. If specifying 'auto', a UUID will be automatically generated. Name Name for the new flavor. This property is required. Historically, names were given a format `XX.SIZE_NAME`. These are typically not required, though some third party tools may rely on it. VCPUs Number of virtual CPUs to use. This property is required. Memory MB Amount of RAM to use (in megabytes). This property is required. Root Disk GB Amount of disk space (in gigabytes) to use for the root (``/``) partition. This property is required. The root disk is an ephemeral disk that the base image is copied into. When booting from a persistent volume it is not used. The ``0`` size is a special case which uses the native base image size as the size of the ephemeral root volume. However, in this case the filter scheduler cannot select the compute host based on the virtual image size. As a result, ``0`` should only be used for volume booted instances or for testing purposes. Volume-backed instances can be enforced for flavors with zero root disk via the ``os_compute_api:servers:create:zero_disk_flavor`` policy rule. Ephemeral Disk GB Amount of disk space (in gigabytes) to use for the ephemeral partition. This property is optional. If unspecified, the value is ``0`` by default. Ephemeral disks offer machine local disk storage linked to the lifecycle of a VM instance. When a VM is terminated, all data on the ephemeral disk is lost. Ephemeral disks are not included in any snapshots. Swap Amount of swap space (in megabytes) to use. This property is optional. If unspecified, the value is ``0`` by default. RXTX Factor (DEPRECATED) This value was only applicable when using the ``xen`` compute driver with the ``nova-network`` network driver. Since ``nova-network`` has been removed, this no longer applies and should not be specified. It will likely be removed in a future release. ``neutron`` users should refer to the :neutron-doc:`neutron QoS documentation ` Is Public Boolean value that defines whether the flavor is available to all users or private to the project it was created in. This property is optional. In unspecified, the value is ``True`` by default. By default, a flavor is public and available to all projects. Private flavors are only accessible to those on the access list for a given project and are invisible to other projects. Extra Specs Key and value pairs that define on which compute nodes a flavor can run. These are optional. Extra specs are generally used as scheduler hints for more advanced instance configuration. The key-value pairs used must correspond to well-known options. For more information on the standardized extra specs available, :ref:`see below ` Description A free form description of the flavor. Limited to 65535 characters in length. Only printable characters are allowed. Available starting in microversion 2.55. .. _flavors-extra-specs: Extra Specs ~~~~~~~~~~~ .. TODO: Consider adding a table of contents here for the various extra specs or make them sub-sections. .. todo:: A lot of these need investigation - for example, I can find no reference to the ``cpu_shares_level`` option outside of documentation and (possibly) useless tests. We should assess which drivers each option actually apply to. .. _extra-specs-CPU-limits: CPU limits You can configure the CPU limits with control parameters. For example, to configure the I/O limit, use: .. 
code-block:: console $ openstack flavor set FLAVOR-NAME \ --property quota:read_bytes_sec=10240000 \ --property quota:write_bytes_sec=10240000 Use these optional parameters to control weight shares, enforcement intervals for runtime quotas, and a quota for maximum allowed bandwidth: - ``cpu_shares``: Specifies the proportional weighted share for the domain. If this element is omitted, the service defaults to the OS provided defaults. There is no unit for the value; it is a relative measure based on the setting of other VMs. For example, a VM configured with value 2048 gets twice as much CPU time as a VM configured with value 1024. - ``cpu_shares_level``: On VMware, specifies the allocation level. Can be ``custom``, ``high``, ``normal``, or ``low``. If you choose ``custom``, set the number of shares using ``cpu_shares_share``. - ``cpu_period``: Specifies the enforcement interval (unit: microseconds) for QEMU and LXC hypervisors. Within a period, each VCPU of the domain is not allowed to consume more than the quota worth of runtime. The value should be in range ``[1000, 1000000]``. A period with value 0 means no value. - ``cpu_limit``: Specifies the upper limit for VMware machine CPU allocation in MHz. This parameter ensures that a machine never uses more than the defined amount of CPU time. It can be used to enforce a limit on the machine's CPU performance. - ``cpu_reservation``: Specifies the guaranteed minimum CPU reservation in MHz for VMware. This means that if needed, the machine will definitely get allocated the reserved amount of CPU cycles. - ``cpu_quota``: Specifies the maximum allowed bandwidth (unit: microseconds). A domain with a negative-value quota indicates that the domain has infinite bandwidth, which means that it is not bandwidth controlled. The value should be in range ``[1000, 18446744073709551]`` or less than 0. A quota with value 0 means no value. You can use this feature to ensure that all vCPUs run at the same speed. For example: .. code-block:: console $ openstack flavor set FLAVOR-NAME \ --property quota:cpu_quota=10000 \ --property quota:cpu_period=20000 In this example, an instance of ``FLAVOR-NAME`` can only consume a maximum of 50% CPU of a physical CPU computing capability. .. _extra-specs-memory-limits: Memory limits For VMware, you can configure the memory limits with control parameters. Use these optional parameters to limit the memory allocation, guarantee minimum memory reservation, and to specify shares used in case of resource contention: - ``memory_limit``: Specifies the upper limit for VMware machine memory allocation in MB. The utilization of a virtual machine will not exceed this limit, even if there are available resources. This is typically used to ensure a consistent performance of virtual machines independent of available resources. - ``memory_reservation``: Specifies the guaranteed minimum memory reservation in MB for VMware. This means the specified amount of memory will definitely be allocated to the machine. - ``memory_shares_level``: On VMware, specifies the allocation level. This can be ``custom``, ``high``, ``normal`` or ``low``. If you choose ``custom``, set the number of shares using ``memory_shares_share``. - ``memory_shares_share``: Specifies the number of shares allocated in the event that ``custom`` is used. There is no unit for this value. It is a relative measure based on the settings for other VMs. For example: .. 
code-block:: console $ openstack flavor set FLAVOR-NAME \ --property quota:memory_shares_level=custom \ --property quota:memory_shares_share=15 .. _extra-specs-disk-io-limits: Disk I/O limits For VMware, you can configure the resource limits for disk with control parameters. Use these optional parameters to limit the disk utilization, guarantee disk allocation, and to specify shares used in case of resource contention. This allows the VMware driver to enable disk allocations for the running instance. - ``disk_io_limit``: Specifies the upper limit for disk utilization in I/O per second. The utilization of a virtual machine will not exceed this limit, even if there are available resources. The default value is -1 which indicates unlimited usage. - ``disk_io_reservation``: Specifies the guaranteed minimum disk allocation in terms of Input/output Operations Per Second (IOPS). - ``disk_io_shares_level``: Specifies the allocation level. This can be ``custom``, ``high``, ``normal`` or ``low``. If you choose custom, set the number of shares using ``disk_io_shares_share``. - ``disk_io_shares_share``: Specifies the number of shares allocated in the event that ``custom`` is used. When there is resource contention, this value is used to determine the resource allocation. The example below sets the ``disk_io_reservation`` to 2000 IOPS. .. code-block:: console $ openstack flavor set FLAVOR-NAME \ --property quota:disk_io_reservation=2000 .. _extra-specs-disk-tuning: Disk tuning Using disk I/O quotas, you can set maximum disk write to 10 MB per second for a VM user. For example: .. code-block:: console $ openstack flavor set FLAVOR-NAME \ --property quota:disk_write_bytes_sec=10485760 The disk I/O options are: - ``disk_read_bytes_sec`` - ``disk_read_iops_sec`` - ``disk_write_bytes_sec`` - ``disk_write_iops_sec`` - ``disk_total_bytes_sec`` - ``disk_total_iops_sec`` .. _extra-specs-bandwidth-io: Bandwidth I/O The vif I/O options are: - ``vif_inbound_average`` - ``vif_inbound_burst`` - ``vif_inbound_peak`` - ``vif_outbound_average`` - ``vif_outbound_burst`` - ``vif_outbound_peak`` Incoming and outgoing traffic can be shaped independently. The bandwidth element can have at most, one inbound and at most, one outbound child element. If you leave any of these child elements out, no quality of service (QoS) is applied on that traffic direction. So, if you want to shape only the network's incoming traffic, use inbound only (and vice versa). Each element has one mandatory attribute average, which specifies the average bit rate on the interface being shaped. There are also two optional attributes (integer): ``peak``, which specifies the maximum rate at which a bridge can send data (kilobytes/second), and ``burst``, the amount of bytes that can be burst at peak speed (kilobytes). The rate is shared equally within domains connected to the network. The example below sets network traffic bandwidth limits for existing flavor as follows: - Outbound traffic: - average: 262 Mbps (32768 kilobytes/second) - peak: 524 Mbps (65536 kilobytes/second) - burst: 65536 kilobytes - Inbound traffic: - average: 262 Mbps (32768 kilobytes/second) - peak: 524 Mbps (65536 kilobytes/second) - burst: 65536 kilobytes .. code-block:: console $ openstack flavor set FLAVOR-NAME \ --property quota:vif_outbound_average=32768 \ --property quota:vif_outbound_peak=65536 \ --property quota:vif_outbound_burst=65536 \ --property quota:vif_inbound_average=32768 \ --property quota:vif_inbound_peak=65536 \ --property quota:vif_inbound_burst=65536 .. 
note:: All the speed limit values in above example are specified in kilobytes/second. And burst values are in kilobytes. Values were converted using `Data rate units on Wikipedia `_. .. _extra-specs-hardware-video-ram: Hardware video RAM Specify ``hw_video:ram_max_mb`` to control the maximum RAM for the video image. Used in conjunction with the ``hw_video_ram`` image property. ``hw_video_ram`` must be less than or equal to ``hw_video:ram_max_mb``. This is currently supported by the libvirt and the vmware drivers. See https://libvirt.org/formatdomain.html#elementsVideo for more information on how this is used to set the ``vram`` attribute with the libvirt driver. See https://pubs.vmware.com/vi-sdk/visdk250/ReferenceGuide/vim.vm.device.VirtualVideoCard.html for more information on how this is used to set the ``videoRamSizeInKB`` attribute with the vmware driver. .. _extra-specs-watchdog-behavior: Watchdog behavior For the libvirt driver, you can enable and set the behavior of a virtual hardware watchdog device for each flavor. Watchdog devices keep an eye on the guest server, and carry out the configured action, if the server hangs. The watchdog uses the i6300esb device (emulating a PCI Intel 6300ESB). If ``hw:watchdog_action`` is not specified, the watchdog is disabled. To set the behavior, use: .. code-block:: console $ openstack flavor set FLAVOR-NAME --property hw:watchdog_action=ACTION Valid ACTION values are: - ``disabled``: (default) The device is not attached. - ``reset``: Forcefully reset the guest. - ``poweroff``: Forcefully power off the guest. - ``pause``: Pause the guest. - ``none``: Only enable the watchdog; do nothing if the server hangs. .. note:: Watchdog behavior set using a specific image's properties will override behavior set using flavors. .. _extra-specs-random-number-generator: Random-number generator If a random-number generator device has been added to the instance through its image properties, the device can be enabled and configured using: .. code-block:: console $ openstack flavor set FLAVOR-NAME \ --property hw_rng:allowed=True \ --property hw_rng:rate_bytes=RATE-BYTES \ --property hw_rng:rate_period=RATE-PERIOD Where: - RATE-BYTES: (integer) Allowed amount of bytes that the guest can read from the host's entropy per period. - RATE-PERIOD: (integer) Duration of the read period in milliseconds. .. _extra-specs-performance-monitoring-unit: Performance Monitoring Unit (vPMU) If nova is deployed with the libvirt virt driver and :oslo.config:option:`libvirt.virt_type` is set to ``qemu`` or ``kvm``, a vPMU can be enabled or disabled for an instance using the ``hw:pmu`` extra_spec or the ``hw_pmu`` image property. The supported values are ``True`` or ``False``. If the vPMU is not explicitly enabled or disabled via the flavor or image, its presence is left to QEMU to decide. .. code-block:: console $ openstack flavor set FLAVOR-NAME --property hw:pmu=True|False The vPMU is used by tools like ``perf`` in the guest to provide more accurate information for profiling application and monitoring guest performance. For realtime workloads, the emulation of a vPMU can introduce additional latency which may be undesirable. If the telemetry it provides is not required, such workloads should set ``hw:pmu=False``. For most workloads the default of unset or enabling the vPMU ``hw:pmu=True`` will be correct. .. _extra-specs-cpu-topology: CPU topology For the libvirt driver, you can define the topology of the processors in the virtual machine using properties. 
The properties with ``max`` limit the number that can be selected by the user with image properties. .. code-block:: console $ openstack flavor set FLAVOR-NAME \ --property hw:cpu_sockets=FLAVOR-SOCKETS \ --property hw:cpu_cores=FLAVOR-CORES \ --property hw:cpu_threads=FLAVOR-THREADS \ --property hw:cpu_max_sockets=FLAVOR-SOCKETS \ --property hw:cpu_max_cores=FLAVOR-CORES \ --property hw:cpu_max_threads=FLAVOR-THREADS Where: - FLAVOR-SOCKETS: (integer) The number of sockets for the guest VM. By default, this is set to the number of vCPUs requested. - FLAVOR-CORES: (integer) The number of cores per socket for the guest VM. By default, this is set to ``1``. - FLAVOR-THREADS: (integer) The number of threads per core for the guest VM. By default, this is set to ``1``. .. _extra-specs-cpu-policy: CPU pinning policy For the libvirt driver, you can pin the virtual CPUs (vCPUs) of instances to the host's physical CPU cores (pCPUs) using properties. You can further refine this by stating how hardware CPU threads in a simultaneous multithreading-based (SMT) architecture be used. These configurations will result in improved per-instance determinism and performance. .. note:: SMT-based architectures include Intel processors with Hyper-Threading technology. In these architectures, processor cores share a number of components with one or more other cores. Cores in such architectures are commonly referred to as hardware threads, while the cores that a given core share components with are known as thread siblings. .. note:: Host aggregates should be used to separate these pinned instances from unpinned instances as the latter will not respect the resourcing requirements of the former. .. code:: console $ openstack flavor set FLAVOR-NAME \ --property hw:cpu_policy=CPU-POLICY \ --property hw:cpu_thread_policy=CPU-THREAD-POLICY Valid CPU-POLICY values are: - ``shared``: (default) The guest vCPUs will be allowed to freely float across host pCPUs, albeit potentially constrained by NUMA policy. - ``dedicated``: The guest vCPUs will be strictly pinned to a set of host pCPUs. In the absence of an explicit vCPU topology request, the drivers typically expose all vCPUs as sockets with one core and one thread. When strict CPU pinning is in effect the guest CPU topology will be setup to match the topology of the CPUs to which it is pinned. This option implies an overcommit ratio of 1.0. For example, if a two vCPU guest is pinned to a single host core with two threads, then the guest will get a topology of one socket, one core, two threads. Valid CPU-THREAD-POLICY values are: - ``prefer``: (default) The host may or may not have an SMT architecture. Where an SMT architecture is present, thread siblings are preferred. - ``isolate``: The host must not have an SMT architecture or must emulate a non-SMT architecture. If the host does not have an SMT architecture, each vCPU is placed on a different core as expected. If the host does have an SMT architecture - that is, one or more cores have thread siblings - then each vCPU is placed on a different physical core. No vCPUs from other guests are placed on the same core. All but one thread sibling on each utilized core is therefore guaranteed to be unusable. - ``require``: The host must have an SMT architecture. Each vCPU is allocated on thread siblings. If the host does not have an SMT architecture, then it is not used. If the host has an SMT architecture, but not enough cores with free thread siblings are available, then scheduling fails. .. 
note:: The ``hw:cpu_thread_policy`` option is only valid if ``hw:cpu_policy`` is set to ``dedicated``. .. _pci_numa_affinity_policy: PCI NUMA Affinity Policy For the libvirt driver, you can specify the NUMA affinity policy for PCI passthrough devices and neutron SR-IOV interfaces via the ``hw:pci_numa_affinity_policy`` flavor extra spec or ``hw_pci_numa_affinity_policy`` image property. The allowed values are ``required``,``preferred`` or ``legacy`` (default). **required** This value will mean that nova will boot instances with PCI devices **only** if at least one of the NUMA nodes of the instance is associated with these PCI devices. It means that if NUMA node info for some PCI devices could not be determined, those PCI devices wouldn't be consumable by the instance. This provides maximum performance. **preferred** This value will mean that ``nova-scheduler`` will choose a compute host with minimal consideration for the NUMA affinity of PCI devices. ``nova-compute`` will attempt a best effort selection of PCI devices based on NUMA affinity, however, if this is not possible then ``nova-compute`` will fall back to scheduling on a NUMA node that is not associated with the PCI device. **legacy** This is the default value and it describes the current nova behavior. Usually we have information about association of PCI devices with NUMA nodes. However, some PCI devices do not provide such information. The ``legacy`` value will mean that nova will boot instances with PCI device if either: * The PCI device is associated with at least one NUMA nodes on which the instance will be booted * There is no information about PCI-NUMA affinity available .. _extra-specs-numa-topology: NUMA topology For the libvirt driver, you can define the host NUMA placement for the instance vCPU threads as well as the allocation of instance vCPUs and memory from the host NUMA nodes. For flavors whose memory and vCPU allocations are larger than the size of NUMA nodes in the compute hosts, the definition of a NUMA topology allows hosts to better utilize NUMA and improve performance of the instance OS. .. code-block:: console $ openstack flavor set FLAVOR-NAME \ --property hw:numa_nodes=FLAVOR-NODES \ --property hw:numa_cpus.N=FLAVOR-CORES \ --property hw:numa_mem.N=FLAVOR-MEMORY Where: - FLAVOR-NODES: (integer) The number of host NUMA nodes to restrict execution of instance vCPU threads to. If not specified, the vCPU threads can run on any number of the host NUMA nodes available. - N: (integer) The instance NUMA node to apply a given CPU or memory configuration to, where N is in the range ``0`` to ``FLAVOR-NODES - 1``. - FLAVOR-CORES: (comma-separated list of integers) A list of instance vCPUs to map to instance NUMA node N. If not specified, vCPUs are evenly divided among available NUMA nodes. - FLAVOR-MEMORY: (integer) The number of MB of instance memory to map to instance NUMA node N. If not specified, memory is evenly divided among available NUMA nodes. .. note:: ``hw:numa_cpus.N`` and ``hw:numa_mem.N`` are only valid if ``hw:numa_nodes`` is set. Additionally, they are only required if the instance's NUMA nodes have an asymmetrical allocation of CPUs and RAM (important for some NFV workloads). .. note:: The ``N`` parameter is an index of *guest* NUMA nodes and may not correspond to *host* NUMA nodes. For example, on a platform with two NUMA nodes, the scheduler may opt to place guest NUMA node 0, as referenced in ``hw:numa_mem.0`` on host NUMA node 1 and vice versa. 
Similarly, the integers used for ``FLAVOR-CORES`` are indexes of *guest* vCPUs and may not correspond to *host* CPUs. As such, this feature cannot be used to constrain instances to specific host CPUs or NUMA nodes. .. warning:: If the combined values of ``hw:numa_cpus.N`` or ``hw:numa_mem.N`` are greater than the available number of CPUs or memory respectively, an exception is raised. .. _extra-specs-memory-encryption: Hardware encryption of guest memory If there are compute hosts which support encryption of guest memory at the hardware level, this functionality can be requested via the ``hw:mem_encryption`` extra spec parameter: .. code-block:: console $ openstack flavor set FLAVOR-NAME \ --property hw:mem_encryption=True .. _extra-specs-realtime-policy: CPU real-time policy For the libvirt driver, you can state that one or more of your instance virtual CPUs (vCPUs), though not all of them, run with a real-time policy. When used on a correctly configured host, this provides stronger guarantees for worst case scheduler latency for vCPUs and is a requirement for certain applications. .. todo:: Document the required steps to configure hosts and guests. There are a lot of things necessary, from isolating hosts and configuring the ``[compute] cpu_dedicated_set`` nova configuration option on the host, to choosing a correctly configured guest image. .. important:: While most of your instance vCPUs can run with a real-time policy, you must mark at least one vCPU as non-real-time, to be used for both non-real-time guest processes and emulator overhead (housekeeping) processes. .. important:: To use this extra spec, you must enable pinned CPUs. Refer to :ref:`CPU policy ` for more information. .. code:: console $ openstack flavor set FLAVOR-NAME \ --property hw:cpu_realtime=CPU-REALTIME-POLICY \ --property hw:cpu_realtime_mask=CPU-REALTIME-MASK Where: CPU-REALTIME-POLICY (enum): One of: - ``no``: (default) The guest vCPUs will not have a real-time policy - ``yes``: The guest vCPUs will have a real-time policy CPU-REALTIME-MASK (coremask): A coremask indicating which vCPUs **will not** have a real-time policy. This should start with a ``^``. For example, a value of ``^0-1`` indicates that all vCPUs *except* vCPUs ``0`` and ``1`` will have a real-time policy. .. note:: The ``hw:cpu_realtime_mask`` option is only valid if ``hw:cpu_realtime`` is set to ``yes``. .. _extra-specs-emulator-threads-policy: Emulator threads policy For the libvirt driver, you can assign a separate pCPU to an instance that will be used for emulator threads, which are emulator processes not directly related to the guest OS. This pCPU will used in addition to the pCPUs used for the guest. This is generally required for use with a :ref:`real-time workload `. .. important:: To use this extra spec, you must enable pinned CPUs. Refer to :ref:`CPU policy ` for more information. .. code:: console $ openstack flavor set FLAVOR-NAME \ --property hw:emulator_threads_policy=THREAD-POLICY The expected behavior of emulator threads depends on the value of the ``hw:emulator_threads_policy`` flavor extra spec and the value of :oslo.config:option:`compute.cpu_shared_set`. It is presented in the following table: .. 
list-table:: :header-rows: 1 :stub-columns: 1 * - - :oslo.config:option:`compute.cpu_shared_set` set - :oslo.config:option:`compute.cpu_shared_set` unset * - ``hw:emulator_threads_policy`` unset (default) - Pinned to all of the instance's pCPUs - Pinned to all of the instance's pCPUs * - ``hw:emulator_threads_policy`` = ``share`` - Pinned to :oslo.config:option:`compute.cpu_shared_set` - Pinned to all of the instance's pCPUs * - ``hw:emulator_threads_policy`` = ``isolate`` - Pinned to a single pCPU distinct from the instance's pCPUs - Pinned to a single pCPU distinct from the instance's pCPUs .. _extra-specs-large-pages-allocation: Large pages allocation You can configure the size of large pages used to back the VMs. .. code:: console $ openstack flavor set FLAVOR-NAME \ --property hw:mem_page_size=PAGE_SIZE Valid ``PAGE_SIZE`` values are: - ``small``: (default) The smallest page size is used. Example: 4 KB on x86. - ``large``: Only use larger page sizes for guest RAM. Example: either 2 MB or 1 GB on x86. - ``any``: It is left up to the compute driver to decide. In this case, the libvirt driver might try to find large pages, but fall back to small pages. Other drivers may choose alternate policies for ``any``. - pagesize: (string) An explicit page size can be set if the workload has specific requirements. This value can be an integer value for the page size in KB, or can use any standard suffix. Example: ``4KB``, ``2MB``, ``2048``, ``1GB``. .. note:: Large pages can be enabled for guest RAM without any regard to whether the guest OS will use them or not. If the guest OS chooses not to use huge pages, it will merely see small pages as before. Conversely, if a guest OS does intend to use huge pages, it is very important that the guest RAM be backed by huge pages. Otherwise, the guest OS will not be getting the performance benefit it is expecting. .. _extra-spec-pci-passthrough: PCI passthrough You can assign PCI devices to a guest by specifying them in the flavor. .. code:: console $ openstack flavor set FLAVOR-NAME \ --property pci_passthrough:alias=ALIAS:COUNT Where: - ALIAS: (string) The alias which corresponds to a particular PCI device class as configured in the nova configuration file (see :oslo.config:option:`pci.alias`). - COUNT: (integer) The number of PCI devices of type ALIAS to be assigned to a guest. .. _extra-specs-hiding-hypervisor-signature: Hiding hypervisor signature Some hypervisors add a signature to their guests. While the presence of the signature can enable some paravirtualization features on the guest, it can also have the effect of preventing some drivers from loading. Hiding the signature by setting this property to true may allow such drivers to load and work. .. note:: As of the 18.0.0 Rocky release, this is only supported by the libvirt driver. Prior to the 21.0.0 Ussuri release, this was called ``hide_hypervisor_id``. An alias is provided for backwards compatibility. .. code:: console $ openstack flavor set FLAVOR-NAME \ --property hw:hide_hypervisor_id=VALUE Where: - VALUE: (string) 'true' or 'false'. 'false' is equivalent to the property not existing. .. _extra-specs-secure-boot: Secure Boot When your Compute services use the Hyper-V hypervisor, you can enable secure boot for Windows and Linux instances. .. code:: console $ openstack flavor set FLAVOR-NAME \ --property os:secure_boot=SECURE_BOOT_OPTION Valid ``SECURE_BOOT_OPTION`` values are: - ``required``: Enable Secure Boot for instances running with this flavor.
- ``disabled`` or ``optional``: (default) Disable Secure Boot for instances running with this flavor. .. _extra-specs-required-resources: Custom resource classes and standard resource classes to override Added in the 16.0.0 Pike release. Specify custom resource classes to require or override quantity values of standard resource classes. The syntax of the extra spec is ``resources:=VALUE`` (``VALUE`` is integer). The name of custom resource classes must start with ``CUSTOM_``. Standard resource classes to override are ``VCPU``, ``MEMORY_MB`` or ``DISK_GB``. In this case, you can disable scheduling based on standard resource classes by setting the value to ``0``. For example: - resources:CUSTOM_BAREMETAL_SMALL=1 - resources:VCPU=0 See :ironic-doc:`Create flavors for use with the Bare Metal service ` for more examples. .. _extra-specs-required-traits: Required traits Added in the 17.0.0 Queens release. Required traits allow specifying a server to build on a compute node with the set of traits specified in the flavor. The traits are associated with the resource provider that represents the compute node in the Placement API. See the resource provider traits API reference for more details: https://docs.openstack.org/api-ref/placement/#resource-provider-traits The syntax of the extra spec is ``trait:=required``, for example: - trait:HW_CPU_X86_AVX2=required - trait:STORAGE_DISK_SSD=required The scheduler will pass required traits to the ``GET /allocation_candidates`` endpoint in the Placement API to include only resource providers that can satisfy the required traits. In 17.0.0 the only valid value is ``required``. In 18.0.0 ``forbidden`` is added (see below). Any other value will be considered invalid. The FilterScheduler is currently the only scheduler driver that supports this feature. Traits can be managed using the `osc-placement plugin`_. .. _extra-specs-forbidden-traits: Forbidden traits Added in the 18.0.0 Rocky release. Forbidden traits are similar to required traits, described above, but instead of specifying the set of traits that must be satisfied by a compute node, forbidden traits must **not** be present. The syntax of the extra spec is ``trait:=forbidden``, for example: - trait:HW_CPU_X86_AVX2=forbidden - trait:STORAGE_DISK_SSD=forbidden The FilterScheduler is currently the only scheduler driver that supports this feature. Traits can be managed using the `osc-placement plugin`_. .. _osc-placement plugin: https://docs.openstack.org/osc-placement/latest/index.html .. _extra-specs-numbered-resource-groupings: Numbered groupings of resource classes and traits Added in the 18.0.0 Rocky release. Specify numbered groupings of resource classes and traits. The syntax is as follows (``N`` and ``VALUE`` are integers): .. parsed-literal:: resources\ *N*:**\ =\ *VALUE* trait\ *N*:**\ =required A given numbered ``resources`` or ``trait`` key may be repeated to specify multiple resources/traits in the same grouping, just as with the un-numbered syntax. Specify inter-group affinity policy via the ``group_policy`` key, which may have the following values: * ``isolate``: Different numbered request groups will be satisfied by *different* providers. * ``none``: Different numbered request groups may be satisfied by different providers *or* common providers. .. note:: If more than one group is specified then the ``group_policy`` is mandatory in the request. However such groups might come from other sources than flavor extra_spec (e.g. from Neutron ports with QoS minimum bandwidth policy). 
If the flavor does not specify any groups and ``group_policy`` but more than one group is coming from other sources then nova will default the ``group_policy`` to ``none`` to avoid scheduler failure. For example, to create a server with the following VFs: * One SR-IOV virtual function (VF) on NET1 with bandwidth 10000 bytes/sec * One SR-IOV virtual function (VF) on NET2 with bandwidth 20000 bytes/sec on a *different* NIC with SSL acceleration It is specified in the extra specs as follows:: resources1:SRIOV_NET_VF=1 resources1:NET_EGRESS_BYTES_SEC=10000 trait1:CUSTOM_PHYSNET_NET1=required resources2:SRIOV_NET_VF=1 resources2:NET_EGRESS_BYTES_SEC:20000 trait2:CUSTOM_PHYSNET_NET2=required trait2:HW_NIC_ACCEL_SSL=required group_policy=isolate See `Granular Resource Request Syntax`_ for more details. .. _Granular Resource Request Syntax: https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/granular-resource-requests.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/user/index.rst0000664000175000017500000000701100000000000017451 0ustar00zuulzuul00000000000000================== User Documentation ================== End user guide -------------- .. toctree:: :maxdepth: 1 availability-zones launch-instances metadata manage-ip-addresses certificate-validation resize reboot rescue availability-zones block-device-mapping /reference/api-microversion-history .. todo:: The rest of this document should probably move to the admin guide. Architecture Overview --------------------- * :doc:`Nova architecture `: An overview of how all the parts in nova fit together. * :doc:`Block Device Mapping `: One of the more complicated parts to understand is the Block Device Mapping parameters used to connect specific block devices to computes. This deserves its own deep dive. See the :ref:`reference guide ` for details about more internal subsystems. Deployment Considerations ------------------------- There is information you might want to consider before doing your deployment, especially if it is going to be a larger deployment. For smaller deployments the defaults from the :doc:`install guide ` will be sufficient. * **Compute Driver Features Supported**: While the majority of nova deployments use libvirt/kvm, you can use nova with other compute drivers. Nova attempts to provide a unified feature set across these, however, not all features are implemented on all backends, and not all features are equally well tested. * :doc:`Feature Support by Use Case `: A view of what features each driver supports based on what's important to some large use cases (General Purpose Cloud, NFV Cloud, HPC Cloud). * :doc:`Feature Support full list `: A detailed dive through features in each compute driver backend. * :doc:`Cells v2 Planning `: For large deployments, Cells v2 allows sharding of your compute environment. Upfront planning is key to a successful Cells v2 layout. * :placement-doc:`Placement service <>`: Overview of the placement service, including how it fits in with the rest of nova. * :doc:`Running nova-api on wsgi `: Considerations for using a real WSGI container instead of the baked-in eventlet web server. Maintenance ----------- Once you are running nova, the following information is extremely useful. * :doc:`Admin Guide `: A collection of guides for administrating nova. * :doc:`Upgrades `: How nova is designed to be upgraded for minimal service impact, and the order you should do them in. 
* :doc:`Quotas `: Managing project quotas in nova. * :doc:`Availablity Zones `: Availability Zones are an end-user visible logical abstraction for partitioning a cloud without knowing the physical infrastructure. They can be used to partition a cloud on arbitrary factors, such as location (country, datacenter, rack), network layout and/or power source. * :doc:`Filter Scheduler `: How the filter scheduler is configured, and how that will impact where compute instances land in your environment. If you are seeing unexpected distribution of compute instances in your hosts, you'll want to dive into this configuration. * :doc:`Exposing custom metadata to compute instances `: How and when you might want to extend the basic metadata exposed to compute instances (either via metadata server or config drive) for your specific purposes. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/user/launch-instance-from-image.rst0000664000175000017500000001562500000000000023451 0ustar00zuulzuul00000000000000================================ Launch an instance from an image ================================ Follow the steps below to launch an instance from an image. #. After you gather required parameters, run the following command to launch an instance. Specify the server name, flavor ID, and image ID. .. code-block:: console $ openstack server create --flavor FLAVOR_ID --image IMAGE_ID --key-name KEY_NAME \ --user-data USER_DATA_FILE --security-group SEC_GROUP_NAME --property KEY=VALUE \ INSTANCE_NAME Optionally, you can provide a key name for access control and a security group for security. You can also include metadata key and value pairs. For example, you can add a description for your server by providing the ``--property description="My Server"`` parameter. You can pass :ref:`user data ` in a local file at instance launch by using the ``--user-data USER-DATA-FILE`` parameter. .. important:: If you boot an instance with an INSTANCE_NAME greater than 63 characters, Compute truncates it automatically when turning it into a host name to ensure the correct work of dnsmasq. The corresponding warning is written into the ``neutron-dnsmasq.log`` file. The following command launches the ``MyCirrosServer`` instance with the ``m1.small`` flavor (ID of ``1``), ``cirros-0.3.2-x86_64-uec`` image (ID of ``397e713c-b95b-4186-ad46-6126863ea0a9``), ``default`` security group, ``KeyPair01`` key, and a user data file called ``cloudinit.file``: .. code-block:: console $ openstack server create --flavor 1 --image 397e713c-b95b-4186-ad46-6126863ea0a9 \ --security-group default --key-name KeyPair01 --user-data cloudinit.file \ myCirrosServer Depending on the parameters that you provide, the command returns a list of server properties. .. 
code-block:: console +--------------------------------------+-----------------------------------------------+ | Field | Value | +--------------------------------------+-----------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | | | OS-EXT-SRV-ATTR:host | None | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | | OS-EXT-SRV-ATTR:instance_name | | | OS-EXT-STS:power_state | NOSTATE | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | None | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | | | adminPass | E4Ksozt4Efi8 | | config_drive | | | created | 2016-11-30T14:48:05Z | | flavor | m1.tiny | | hostId | | | id | 89015cc9-bdf1-458a-8518-fdca2b4a5785 | | image | cirros (397e713c-b95b-4186-ad46-6126863ea0a9) | | key_name | KeyPair01 | | name | myCirrosServer | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | project_id | 5669caad86a04256994cdf755df4d3c1 | | properties | | | security_groups | [{u'name': u'default'}] | | status | BUILD | | updated | 2016-11-30T14:48:05Z | | user_id | c36cec73b0e44876a4478b1e6cd749bb | | metadata | {u'KEY': u'VALUE'} | +--------------------------------------+-----------------------------------------------+ A status of ``BUILD`` indicates that the instance has started, but is not yet online. A status of ``ACTIVE`` indicates that the instance is active. #. Copy the server ID value from the ``id`` field in the output. Use the ID to get server details or to delete your server. #. Copy the administrative password value from the ``adminPass`` field. Use the password to log in to your server. #. Check if the instance is online. .. code-block:: console $ openstack server list The list shows the ID, name, status, and private (and if assigned, public) IP addresses for all instances in the project to which you belong: .. code-block:: console +-------------+----------------------+--------+------------+-------------+------------------+------------+ | ID | Name | Status | Task State | Power State | Networks | Image Name | +-------------+----------------------+--------+------------+-------------+------------------+------------+ | 84c6e57d... | myCirrosServer | ACTIVE | None | Running | private=10.0.0.3 | cirros | | 8a99547e... | myInstanceFromVolume | ACTIVE | None | Running | private=10.0.0.4 | centos | +-------------+----------------------+--------+------------+-------------+------------------+------------+ If the status for the instance is ACTIVE, the instance is online. #. To view the available options for the :command:`openstack server list` command, run the following command: .. code-block:: console $ openstack help server list .. note:: If you did not provide a key pair, security groups, or rules, you can access the instance only from inside the cloud through VNC. Even pinging the instance is not possible. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/user/launch-instance-from-volume.rst0000664000175000017500000005005500000000000023672 0ustar00zuulzuul00000000000000================================ Launch an instance from a volume ================================ You can boot instances from a volume instead of an image. To complete these tasks, use these parameters on the :command:`nova boot` command: .. tabularcolumns:: |p{0.3\textwidth}|p{0.25\textwidth}|p{0.4\textwidth}| .. 
list-table:: :header-rows: 1 :widths: 30 15 30 * - Task - nova boot parameter - Information * - Boot an instance from an image and attach a non-bootable volume. - ``--block-device`` - :ref:`Boot_instance_from_image_and_attach_non-bootable_volume` * - Create a volume from an image and boot an instance from that volume. - ``--block-device`` - :ref:`Create_volume_from_image_and_boot_instance` * - Boot from an existing source image, volume, or snapshot. - ``--block-device`` - :ref:`Create_volume_from_image_and_boot_instance` * - Attach a swap disk to an instance. - ``--swap`` - :ref:`Attach_swap_or_ephemeral_disk_to_an_instance` * - Attach an ephemeral disk to an instance. - ``--ephemeral`` - :ref:`Attach_swap_or_ephemeral_disk_to_an_instance` .. note:: To attach a volume to a running instance, refer to the :cinder-doc:`Cinder documentation `. .. note:: The maximum limit on the number of disk devices allowed to attach to a single server is configurable with the option :oslo.config:option:`compute.max_disk_devices_to_attach`. .. _Boot_instance_from_image_and_attach_non-bootable_volume: Boot instance from image and attach non-bootable volume ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Create a non-bootable volume and attach that volume to an instance that you boot from an image. To create a non-bootable volume, do not create it from an image. The volume must be entirely empty with no partition table and no file system. #. Create a non-bootable volume. .. code-block:: console $ openstack volume create --size 8 my-volume +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | attachments | [] | | availability_zone | nova | | bootable | false | | consistencygroup_id | None | | created_at | 2016-11-25T10:37:08.850997 | | description | None | | encrypted | False | | id | b8f7bbec-6274-4cd7-90e7-60916a5e75d4 | | migration_status | None | | multiattach | False | | name | my-volume | | properties | | | replication_status | disabled | | size | 8 | | snapshot_id | None | | source_volid | None | | status | creating | | type | None | | updated_at | None | | user_id | 0678735e449149b0a42076e12dd54e28 | +---------------------+--------------------------------------+ #. List volumes. .. code-block:: console $ openstack volume list +--------------------------------------+--------------+-----------+------+-------------+ | ID | Name | Status | Size | Attached to | +--------------------------------------+--------------+-----------+------+-------------+ | b8f7bbec-6274-4cd7-90e7-60916a5e75d4 | my-volume | available | 8 | | +--------------------------------------+--------------+-----------+------+-------------+ #. Boot an instance from an image and attach the empty volume to the instance. .. 
code-block:: console $ nova boot --flavor 2 --image 98901246-af91-43d8-b5e6-a4506aa8f369 \ --block-device source=volume,id=d620d971-b160-4c4e-8652-2513d74e2080,dest=volume,shutdown=preserve \ myInstanceWithVolume +--------------------------------------+--------------------------------------------+ | Property | Value | +--------------------------------------+--------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-SRV-ATTR:host | - | | OS-EXT-SRV-ATTR:hypervisor_hostname | - | | OS-EXT-SRV-ATTR:instance_name | instance-00000004 | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | adminPass | ZaiYeC8iucgU | | config_drive | | | created | 2014-05-09T16:34:50Z | | flavor | m1.small (2) | | hostId | | | id | 1e1797f3-1662-49ff-ae8c-a77e82ee1571 | | image | cirros-0.3.5-x86_64-uec (98901246-af91-... | | key_name | - | | metadata | {} | | name | myInstanceWithVolume | | os-extended-volumes:volumes_attached | [{"id": "d620d971-b160-4c4e-8652-2513d7... | | progress | 0 | | security_groups | default | | status | BUILD | | tenant_id | ccef9e62b1e645df98728fb2b3076f27 | | updated | 2014-05-09T16:34:51Z | | user_id | fef060ae7bfd4024b3edb97dff59017a | +--------------------------------------+--------------------------------------------+ .. _Create_volume_from_image_and_boot_instance: Create volume from image and boot instance ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ You can create a volume from an existing image, volume, or snapshot. This procedure shows you how to create a volume from an image, and use the volume to boot an instance. #. List the available images. .. code-block:: console $ openstack image list +-----------------+---------------------------------+--------+ | ID | Name | Status | +-----------------+---------------------------------+--------+ | 484e05af-a14... | Fedora-x86_64-20-20131211.1-sda | active | | 98901246-af9... | cirros-0.3.5-x86_64-uec | active | | b6e95589-7eb... | cirros-0.3.5-x86_64-uec-kernel | active | | c90893ea-e73... | cirros-0.3.5-x86_64-uec-ramdisk | active | +-----------------+---------------------------------+--------+ Note the ID of the image that you want to use to create a volume. If you want to create a volume to a specific storage backend, you need to use an image which has *cinder_img_volume_type* property. In this case, a new volume will be created as *storage_backend1* volume type. .. code-block:: console $ openstack image show 98901246-af9d-4b61-bea8-09cc6dc41829 +------------------+------------------------------------------------------+ | Field | Value | +------------------+------------------------------------------------------+ | checksum | ee1eca47dc88f4879d8a229cc70a07c6 | | container_format | bare | | created_at | 2016-10-08T14:59:05Z | | disk_format | qcow2 | | file | /v2/images/9fef3b2d-c35d-4b61-bea8-09cc6dc41829/file | | id | 98901246-af9d-4b61-bea8-09cc6dc41829 | | min_disk | 0 | | min_ram | 0 | | name | cirros-0.3.5-x86_64-uec | | owner | 8d8ef3cdf2b54c25831cbb409ad9ae86 | | protected | False | | schema | /v2/schemas/image | | size | 13287936 | | status | active | | tags | | | updated_at | 2016-10-19T09:12:52Z | | virtual_size | None | | visibility | public | +------------------+------------------------------------------------------+ #. List the available flavors. .. 
code-block:: console $ openstack flavor list +-----+-----------+-------+------+-----------+-------+-----------+ | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is_Public | +-----+-----------+-------+------+-----------+-------+-----------+ | 1 | m1.tiny | 512 | 1 | 0 | 1 | True | | 2 | m1.small | 2048 | 20 | 0 | 1 | True | | 3 | m1.medium | 4096 | 40 | 0 | 2 | True | | 4 | m1.large | 8192 | 80 | 0 | 4 | True | | 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True | +-----+-----------+-------+------+-----------+-------+-----------+ Note the ID of the flavor that you want to use to create a volume. #. To create a bootable volume from an image and launch an instance from this volume, use the ``--block-device`` parameter with the ``nova boot`` command. For example: .. code-block:: console $ nova boot --flavor FLAVOR --block-device \ source=SOURCE,id=ID,dest=DEST,size=SIZE,shutdown=PRESERVE,bootindex=INDEX \ NAME The parameters are: - ``--flavor`` The flavor ID or name. - ``--block-device`` source=SOURCE,id=ID,dest=DEST,size=SIZE,shutdown=PRESERVE,bootindex=INDEX **source=SOURCE** The type of object used to create the block device. Valid values are ``volume``, ``snapshot``, ``image``, and ``blank``. **id=ID** The ID of the source object. **dest=DEST** The type of the target virtual device. Valid values are ``volume`` and ``local``. **size=SIZE** The size of the volume that is created. **shutdown={preserve\|remove}** What to do with the volume when the instance is deleted. ``preserve`` does not delete the volume. ``remove`` deletes the volume. **bootindex=INDEX** Orders the boot disks. Use ``0`` to boot from this volume. - ``NAME``. The name for the server. See the `nova boot`_ command documentation and :doc:`block-device-mapping` for more details on these parameters. .. note:: As of the Stein release, the ``openstack server create`` command does not support creating a volume-backed server from a source image like the ``nova boot`` command. The next steps will show how to create a bootable volume from an image and then create a server from that boot volume using the ``openstack server create`` command. #. Create a bootable volume from an image. Cinder makes a volume bootable when ``--image`` parameter is passed. .. code-block:: console $ openstack volume create --image IMAGE_ID --size SIZE_IN_GB bootable_volume .. note:: A bootable encrypted volume can also be created by adding the `--type ENCRYPTED_VOLUME_TYPE` parameter to the volume create command: .. code-block:: console $ openstack volume create --type ENCRYPTED_VOLUME_TYPE --image IMAGE_ID --size SIZE_IN_GB bootable_volume +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | attachments | [] | | availability_zone | nova | | bootable | false | | consistencygroup_id | None | | created_at | 2017-06-13T18:59:57.626872 | | description | None | | encrypted | True | | id | ded57a86-5b51-43ab-b70e-9bc0f91ef4ab | | multiattach | False | | name | bootable_volume | | properties | | | replication_status | None | | size | 1 | | snapshot_id | None | | source_volid | None | | status | creating | | type | LUKS | | updated_at | None | | user_id | 459ae34ffcd94edab0c128ed616bb19f | +---------------------+--------------------------------------+ This requires an encrypted volume type, which must be created ahead of time by an admin. Refer to :horizon-doc:`admin/manage-volumes.html#create-an-encrypted-volume-type`. in the OpenStack Horizon Administration Guide. #. 
Create a VM from previously created bootable volume. The volume is not deleted when the instance is terminated. .. note:: The example here uses the ``--volume`` option for simplicity. The ``--block-device-mapping`` option could also be used for more granular control over the parameters. See the `openstack server create`_ documentation for details. .. code-block:: console $ openstack server create --flavor 2 --volume VOLUME_ID myInstanceFromVolume +--------------------------------------+--------------------------------+ | Field | Value | +--------------------------------------+--------------------------------+ | OS-EXT-STS:task_state | scheduling | | image | Attempt to boot from volume | | | - no image supplied | | OS-EXT-STS:vm_state | building | | OS-EXT-SRV-ATTR:instance_name | instance-00000003 | | OS-SRV-USG:launched_at | None | | flavor | m1.small | | id | 2e65c854-dba9-4f68-8f08-fe3... | | security_groups | [{u'name': u'default'}] | | user_id | 352b37f5c89144d4ad053413926... | | OS-DCF:diskConfig | MANUAL | | accessIPv4 | | | accessIPv6 | | | progress | 0 | | OS-EXT-STS:power_state | 0 | | OS-EXT-AZ:availability_zone | nova | | config_drive | | | status | BUILD | | updated | 2014-02-02T13:29:54Z | | hostId | | | OS-EXT-SRV-ATTR:host | None | | OS-SRV-USG:terminated_at | None | | key_name | None | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | | name | myInstanceFromVolume | | adminPass | TzjqyGsRcJo9 | | tenant_id | f7ac731cc11f40efbc03a9f9e1d... | | created | 2014-02-02T13:29:53Z | | os-extended-volumes:volumes_attached | [{"id": "2fff50ab..."}] | | metadata | {} | +--------------------------------------+--------------------------------+ #. List volumes to see the bootable volume and its attached ``myInstanceFromVolume`` instance. .. code-block:: console $ openstack volume list +---------------------+-----------------+--------+------+---------------------------------+ | ID | Name | Status | Size | Attached to | +---------------------+-----------------+--------+------+---------------------------------+ | c612f739-8592-44c4- | bootable_volume | in-use | 10 | Attached to myInstanceFromVolume| | b7d4-0fee2fe1da0c | | | | on /dev/vda | +---------------------+-----------------+--------+------+---------------------------------+ .. _nova boot: https://docs.openstack.org/python-novaclient/latest/cli/nova.html#nova-boot .. _openstack server create: https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/server.html#server-create .. _Attach_swap_or_ephemeral_disk_to_an_instance: Attach swap or ephemeral disk to an instance ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Use the ``nova boot`` ``--swap`` parameter to attach a swap disk on boot or the ``nova boot`` ``--ephemeral`` parameter to attach an ephemeral disk on boot. When you terminate the instance, both disks are deleted. Boot an instance with a 512 MB swap disk and 2 GB ephemeral disk. .. code-block:: console $ nova boot --flavor FLAVOR --image IMAGE_ID --swap 512 \ --ephemeral size=2 NAME .. note:: The flavor defines the maximum swap and ephemeral disk size. You cannot exceed these maximum values. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/user/launch-instance-using-ISO-image.rst0000664000175000017500000001601100000000000024251 0ustar00zuulzuul00000000000000================================== Launch an instance using ISO image ================================== .. 
_Boot_instance_from_ISO_image: Boot an instance from an ISO image ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ OpenStack supports booting instances using ISO images. But before you make such instances functional, use the :command:`openstack server create` command with the following parameters to boot an instance: .. code-block:: console $ openstack server create --image ubuntu-14.04.2-server-amd64.iso \ --nic net-id = NETWORK_UUID \ --flavor 2 INSTANCE_NAME +--------------------------------------+--------------------------------------------+ | Field | Value | +--------------------------------------+--------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-SRV-ATTR:host | - | | OS-EXT-SRV-ATTR:hypervisor_hostname | - | | OS-EXT-SRV-ATTR:instance_name | instance-00000004 | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | adminPass | ZaiYeC8iucgU | | config_drive | | | created | 2015-06-01T16:34:50Z | | flavor | m1.small (2) | | hostId | | | id | 1e1797f3-1662-49ff-ae8c-a77e82ee1571 | | image | ubuntu-14.04.2-server-amd64.iso | | key_name | - | | metadata | {} | | name | INSTANCE_NAME | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | security_groups | default | | status | BUILD | | tenant_id | ccef9e62b1e645df98728fb2b3076f27 | | updated | 2014-05-09T16:34:51Z | | user_id | fef060ae7bfd4024b3edb97dff59017a | +--------------------------------------+--------------------------------------------+ In this command, ``ubuntu-14.04.2-server-amd64.iso`` is the ISO image, and ``INSTANCE_NAME`` is the name of the new instance. ``NETWORK_UUID`` is a valid network id in your system. Create a bootable volume for the instance to reside on after shutdown. #. Create the volume: .. code-block:: console $ openstack volume create \ --size \ --bootable VOLUME_NAME #. Attach the instance to the volume: .. code-block:: console $ openstack server add volume INSTANCE_NAME \ VOLUME_NAME \ --device /dev/vda .. note:: You need the Block Storage service to preserve the instance after shutdown. The ``--block-device`` argument, used with the legacy :command:`nova boot`, will not work with the OpenStack :command:`openstack server create` command. Instead, the :command:`openstack volume create` and :command:`openstack server add volume` commands create persistent storage. After the instance is successfully launched, connect to the instance using a remote console and follow the instructions to install the system as using ISO images on regular computers. When the installation is finished and system is rebooted, the instance asks you again to install the operating system, which means your instance is not usable. If you have problems with image creation, please check the `Virtual Machine Image Guide `_ for reference. .. _Make_instance_booted_from_ISO_image_functional: Make the instances booted from ISO image functional ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Now complete the following steps to make your instances created using ISO image actually functional. #. Delete the instance using the following command. .. code-block:: console $ openstack server delete INSTANCE_NAME #. After you delete the instance, the system you have just installed using your ISO image remains, because the parameter ``shutdown=preserve`` was set, so run the following command. .. 
code-block:: console $ openstack volume list +--------------------------+-------------------------+-----------+------+-------------+ | ID | Name | Status | Size | Attached to | +--------------------------+-------------------------+-----------+------+-------------+ | 8edd7c97-1276-47a5-9563- |dc01d873-d0f1-40b6-bfcc- | available | 10 | | | 1025f4264e4f | 26a8d955a1d9-blank-vol | | | | +--------------------------+-------------------------+-----------+------+-------------+ You get a list with all the volumes in your system. In this list, you can find the volume that is attached to your ISO created instance, with the false bootable property. #. Upload the volume to glance. .. code-block:: console $ openstack image create --volume SOURCE_VOLUME IMAGE_NAME $ openstack image list +-------------------+------------+--------+ | ID | Name | Status | +-------------------+------------+--------+ | 74303284-f802-... | IMAGE_NAME | active | +-------------------+------------+--------+ The ``SOURCE_VOLUME`` is the UUID or a name of the volume that is attached to your ISO created instance, and the ``IMAGE_NAME`` is the name that you give to your new image. #. After the image is successfully uploaded, you can use the new image to boot instances. The instances launched using this image contain the system that you have just installed using the ISO image. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/user/launch-instances.rst0000664000175000017500000001471400000000000021611 0ustar00zuulzuul00000000000000================ Launch instances ================ Instances are virtual machines that run inside the cloud. Before you can launch an instance, gather the following parameters: - The **instance source** can be an image, snapshot, or block storage volume that contains an image or snapshot. - A **name** for your instance. - The **flavor** for your instance, which defines the compute, memory, and storage capacity of nova computing instances. A flavor is an available hardware configuration for a server. It defines the size of a virtual server that can be launched. - Any **user data** files. A :ref:`user data ` file is a special key in the metadata service that holds a file that cloud-aware applications in the guest instance can access. For example, one application that uses user data is the `cloud-init `__ system, which is an open-source package from Ubuntu that is available on various Linux distributions and that handles early initialization of a cloud instance. - Access and security credentials, which include one or both of the following credentials: - A **key pair** for your instance, which are SSH credentials that are injected into images when they are launched. For the key pair to be successfully injected, the image must contain the ``cloud-init`` package. Create at least one key pair for each project. If you already have generated a key pair with an external tool, you can import it into OpenStack. You can use the key pair for multiple instances that belong to that project. - A **security group** that defines which incoming network traffic is forwarded to instances. Security groups hold a set of firewall policies, known as *security group rules*. - If needed, you can assign a **floating (public) IP address** to a running instance to make it accessible from outside the cloud. See :doc:`manage-ip-addresses`. - You can also attach a block storage device, or **volume**, for persistent storage. .. 
note:: Instances that use the default security group cannot, by default, be accessed from any IP address outside of the cloud. If you want those IP addresses to access the instances, you must modify the rules for the default security group. After you gather the parameters that you need to launch an instance, you can launch it from an :doc:`image` or a :doc:`volume`. You can launch an instance directly from one of the available OpenStack images or from an image that you have copied to a persistent volume. The OpenStack Image service provides a pool of images that are accessible to members of different projects. Gather parameters to launch an instance ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Before you begin, source the OpenStack RC file. #. Create a flavor. Creating a flavor is typically only available to administrators of a cloud because this has implications for scheduling efficiently in the cloud. .. code-block:: console $ openstack flavor create --ram 512 --disk 1 --vcpus 1 m1.tiny #. List the available flavors. .. code-block:: console $ openstack flavor list Note the ID of the flavor that you want to use for your instance:: +-----+-----------+-------+------+-----------+-------+-----------+ | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is_Public | +-----+-----------+-------+------+-----------+-------+-----------+ | 1 | m1.tiny | 512 | 1 | 0 | 1 | True | | 2 | m1.small | 2048 | 20 | 0 | 1 | True | | 3 | m1.medium | 4096 | 40 | 0 | 2 | True | | 4 | m1.large | 8192 | 80 | 0 | 4 | True | | 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True | +-----+-----------+-------+------+-----------+-------+-----------+ #. List the available images. .. code-block:: console $ openstack image list Note the ID of the image from which you want to boot your instance:: +--------------------------------------+---------------------------------+--------+ | ID | Name | Status | +--------------------------------------+---------------------------------+--------+ | 397e713c-b95b-4186-ad46-6126863ea0a9 | cirros-0.3.5-x86_64-uec | active | | df430cc2-3406-4061-b635-a51c16e488ac | cirros-0.3.5-x86_64-uec-kernel | active | | 3cf852bd-2332-48f4-9ae4-7d926d50945e | cirros-0.3.5-x86_64-uec-ramdisk | active | +--------------------------------------+---------------------------------+--------+ You can also filter the image list by using :command:`grep` to find a specific image, as follows: .. code-block:: console $ openstack image list | grep 'kernel' | df430cc2-3406-4061-b635-a51c16e488ac | cirros-0.3.5-x86_64-uec-kernel | active | #. List the available security groups. .. code-block:: console $ openstack security group list .. note:: If you are an admin user, this command will list groups for all tenants. Note the ID of the security group that you want to use for your instance:: +--------------------------------------+---------+------------------------+----------------------------------+ | ID | Name | Description | Project | +--------------------------------------+---------+------------------------+----------------------------------+ | b0d78827-0981-45ef-8561-93aee39bbd9f | default | Default security group | 5669caad86a04256994cdf755df4d3c1 | | ec02e79e-83e1-48a5-86ad-14ab9a8c375f | default | Default security group | 1eaaf6ede7a24e78859591444abf314a | +--------------------------------------+---------+------------------------+----------------------------------+ If you have not created any security groups, you can assign the instance to only the default security group. You can view rules for a specified security group: .. 
code-block:: console $ openstack security group rule list default #. List the available key pairs, and note the key pair name that you use for SSH access. .. code-block:: console $ openstack keypair list Launch an instance ~~~~~~~~~~~~~~~~~~ You can launch an instance from various sources. .. toctree:: :maxdepth: 2 launch-instance-from-image launch-instance-from-volume launch-instance-using-ISO-image ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/user/manage-ip-addresses.rst0000664000175000017500000002004000000000000022150 0ustar00zuulzuul00000000000000=================== Manage IP addresses =================== Each instance has a private, fixed IP address and can also have a public, or floating IP address. Private IP addresses are used for communication between instances, and public addresses are used for communication with networks outside the cloud, including the Internet. When you launch an instance, it is automatically assigned a private IP address that stays the same until you explicitly terminate the instance. Rebooting an instance has no effect on the private IP address. A pool of floating IP addresses, configured by the cloud administrator, is available in OpenStack Compute. The project quota defines the maximum number of floating IP addresses that you can allocate to the project. After you allocate a floating IP address to a project, you can: - Associate the floating IP address with an instance of the project. - Disassociate a floating IP address from an instance in the project. - Delete a floating IP from the project which automatically deletes that IP's associations. Use the :command:`openstack` commands to manage floating IP addresses. List floating IP address information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To list all pools that provide floating IP addresses, run: .. code-block:: console $ openstack floating ip pool list +--------+ | name | +--------+ | public | | test | +--------+ .. note:: If this list is empty, the cloud administrator must configure a pool of floating IP addresses. To list all floating IP addresses that are allocated to the current project, run: .. code-block:: console $ openstack floating ip list +--------------------------------------+---------------------+------------------+------+ | ID | Floating IP Address | Fixed IP Address | Port | +--------------------------------------+---------------------+------------------+------+ | 760963b2-779c-4a49-a50d-f073c1ca5b9e | 172.24.4.228 | None | None | | 89532684-13e1-4af3-bd79-f434c9920cc3 | 172.24.4.235 | None | None | | ea3ebc6d-a146-47cd-aaa8-35f06e1e8c3d | 172.24.4.229 | None | None | +--------------------------------------+---------------------+------------------+------+ For each floating IP address that is allocated to the current project, the command outputs the floating IP address, the ID for the instance to which the floating IP address is assigned, the associated fixed IP address, and the pool from which the floating IP address was allocated. Associate floating IP addresses ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ You can assign a floating IP address to a project and to an instance. #. Run the following command to allocate a floating IP address to the current project. By default, the floating IP address is allocated from the public pool. The command outputs the allocated IP address: .. 
code-block:: console $ openstack floating ip create public +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | created_at | 2016-11-30T15:02:05Z | | description | | | fixed_ip_address | None | | floating_ip_address | 172.24.4.236 | | floating_network_id | 0bf90de6-fc0f-4dba-b80d-96670dfb331a | | headers | | | id | c70ad74b-2f64-4e60-965e-f24fc12b3194 | | port_id | None | | project_id | 5669caad86a04256994cdf755df4d3c1 | | project_id | 5669caad86a04256994cdf755df4d3c1 | | revision_number | 1 | | router_id | None | | status | DOWN | | updated_at | 2016-11-30T15:02:05Z | +---------------------+--------------------------------------+ #. List all project instances with which a floating IP address could be associated. .. code-block:: console $ openstack server list +---------------------+------+---------+------------+-------------+------------------+------------+ | ID | Name | Status | Task State | Power State | Networks | Image Name | +---------------------+------+---------+------------+-------------+------------------+------------+ | d5c854f9-d3e5-4f... | VM1 | ACTIVE | - | Running | private=10.0.0.3 | cirros | | 42290b01-0968-43... | VM2 | SHUTOFF | - | Shutdown | private=10.0.0.4 | centos | +---------------------+------+---------+------------+-------------+------------------+------------+ Note the server ID to use. #. List ports associated with the selected server. .. code-block:: console $ openstack port list --device-id SERVER_ID +--------------------------------------+------+-------------------+--------------------------------------------------------------+--------+ | ID | Name | MAC Address | Fixed IP Addresses | Status | +--------------------------------------+------+-------------------+--------------------------------------------------------------+--------+ | 40e9dea9-f457-458f-bc46-6f4ebea3c268 | | fa:16:3e:00:57:3e | ip_address='10.0.0.4', subnet_id='23ee9de7-362e- | ACTIVE | | | | | 49e2-a3b0-0de1c14930cb' | | | | | | ip_address='fd22:4c4c:81c2:0:f816:3eff:fe00:573e', subnet_id | | | | | | ='a2b3acbe-fbeb-40d3-b21f-121268c21b55' | | +--------------------------------------+------+-------------------+--------------------------------------------------------------+--------+ Note the port ID to use. #. Associate an IP address with an instance in the project, as follows: .. code-block:: console $ openstack floating ip set --port PORT_ID FLOATING_IP_ADDRESS For example: .. code-block:: console $ openstack floating ip set --port 40e9dea9-f457-458f-bc46-6f4ebea3c268 172.24.4.225 The instance is now associated with two IP addresses: .. code-block:: console $ openstack server list +------------------+------+--------+------------+-------------+-------------------------------+------------+ | ID | Name | Status | Task State | Power State | Networks | Image Name | +------------------+------+--------+------------+-------------+-------------------------------+------------+ | d5c854f9-d3e5... | VM1 | ACTIVE | - | Running | private=10.0.0.3, 172.24.4.225| cirros | | 42290b01-0968... | VM2 | SHUTOFF| - | Shutdown | private=10.0.0.4 | centos | +------------------+------+--------+------------+-------------+-------------------------------+------------+ After you associate the IP address and configure security group rules for the instance, the instance is publicly available at the floating IP address. 
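For example, the following commands (an illustrative sketch, assuming the instance belongs to the ``default`` security group) add rules that permit SSH and ICMP traffic so that the instance is actually reachable at its floating IP address:

.. code-block:: console

   $ openstack security group rule create --protocol tcp --dst-port 22 default
   $ openstack security group rule create --protocol icmp default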
Disassociate floating IP addresses ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To disassociate a floating IP address from an instance: .. code-block:: console $ openstack floating ip unset --port FLOATING_IP_ADDRESS To remove the floating IP address from a project: .. code-block:: console $ openstack floating ip delete FLOATING_IP_ADDRESS The IP address is returned to the pool of IP addresses that is available for all projects. If the IP address is still associated with a running instance, it is automatically disassociated from that instance. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/user/metadata.rst0000664000175000017500000004044000000000000020125 0ustar00zuulzuul00000000000000======== Metadata ======== Nova presents configuration information to instances it starts via a mechanism called metadata. These mechanisms are widely used via helpers such as `cloud-init`_ to specify things like the root password the instance should use. This metadata is made available via either a *config drive* or the *metadata service* and can be somewhat customised by the user using the *user data* feature. This guide provides an overview of these features along with a summary of the types of metadata available. .. _cloud-init: https://cloudinit.readthedocs.io/en/latest/ Types of metadata ----------------- There are three separate groups of users who need to be able to specify metadata for an instance. User provided data ~~~~~~~~~~~~~~~~~~ The user who booted the instance can pass metadata to the instance in several ways. For authentication keypairs, the keypairs functionality of the nova API can be used to upload a key and then specify that key during the nova boot API request. For less structured data, a small opaque blob of data may be passed via the :ref:`user data ` feature of the nova API. Examples of such unstructured data would be the puppet role that the instance should use, or the HTTP address of a server from which to fetch post-boot configuration information. Nova provided data ~~~~~~~~~~~~~~~~~~ Nova itself needs to pass information to the instance via its internal implementation of the metadata system. Such information includes the requested hostname for the instance and the availability zone the instance is in. This happens by default and requires no configuration by the user or deployer. Nova provides both an :ref:`OpenStack metadata API ` and an :ref:`EC2-compatible API `. Both the OpenStack metadata and EC2-compatible APIs are versioned by date. These are described later. Deployer provided data ~~~~~~~~~~~~~~~~~~~~~~ A deployer of OpenStack may need to pass data to an instance. It is also possible that this data is not known to the user starting the instance. An example might be a cryptographic token to be used to register the instance with Active Directory post boot -- the user starting the instance should not have access to Active Directory to create this token, but the nova deployment might have permissions to generate the token on the user's behalf. This is possible using the :ref:`vendordata ` feature, which must be configured by your cloud operator. .. _metadata-service: The metadata service -------------------- .. note:: This section provides end user information about the metadata service. For deployment information about the metadata service, refer to the :doc:`admin guide `. The *metadata service* provides a way for instances to retrieve instance-specific data via a REST API. 
Instances access this service at ``169.254.169.254`` and all types of metadata, be it user-, nova- or vendor-provided, can be accessed via this service. Using the metadata service ~~~~~~~~~~~~~~~~~~~~~~~~~~ To retrieve a list of supported versions for the :ref:`OpenStack metadata API `, make a GET request to ``http://169.254.169.254/openstack``, which will return a list of directories: .. code-block:: console $ curl http://169.254.169.254/openstack 2012-08-10 2013-04-04 2013-10-17 2015-10-15 2016-06-30 2016-10-06 2017-02-22 2018-08-27 latest Refer to :ref:`OpenStack format metadata ` for information on the contents and structure of these directories. To list supported versions for the :ref:`EC2-compatible metadata API `, make a GET request to ``http://169.254.169.254``, which will, once again, return a list of directories: .. code-block:: console $ curl http://169.254.169.254 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 2009-04-04 latest Refer to :ref:`EC2-compatible metadata ` for information on the contents and structure of these directories. .. _metadata-config-drive: Config drives ------------- .. note:: This section provides end user information about config drives. For deployment information about the config drive feature, refer to the :doc:`admin guide `. *Config drives* are special drives that are attached to an instance when it boots. The instance can mount this drive and read files from it to get information that is normally available through the metadata service. One use case for using the config drive is to pass a networking configuration when you do not use DHCP to assign IP addresses to instances. For example, you might pass the IP address configuration for the instance through the config drive, which the instance can mount and access before you configure the network settings for the instance. Using the config drive ~~~~~~~~~~~~~~~~~~~~~~ To enable the config drive for an instance, pass the ``--config-drive true`` parameter to the :command:`openstack server create` command. The following example enables the config drive and passes a user data file and two key/value metadata pairs, all of which are accessible from the config drive: .. code-block:: console $ openstack server create --config-drive true --image my-image-name \ --flavor 1 --key-name mykey --user-data ./my-user-data.txt \ --property role=webservers --property essential=false MYINSTANCE .. note:: The Compute service can be configured to always create a config drive. For more information, refer to :doc:`the admin guide `. If your guest operating system supports accessing disk by label, you can mount the config drive as the ``/dev/disk/by-label/configurationDriveVolumeLabel`` device. In the following example, the config drive has the ``config-2`` volume label: .. code-block:: console # mkdir -p /mnt/config # mount /dev/disk/by-label/config-2 /mnt/config If your guest operating system does not use ``udev``, the ``/dev/disk/by-label`` directory is not present. You can use the :command:`blkid` command to identify the block device that corresponds to the config drive. For example: .. code-block:: console # blkid -t LABEL="config-2" -odevice /dev/vdb Once identified, you can mount the device: .. code-block:: console # mkdir -p /mnt/config # mount /dev/vdb /mnt/config Once mounted, you can examine the contents of the config drive: .. code-block:: console $ cd /mnt/config $ find . -maxdepth 2 . 
./ec2 ./ec2/2009-04-04 ./ec2/latest ./openstack ./openstack/2012-08-10 ./openstack/2013-04-04 ./openstack/2013-10-17 ./openstack/2015-10-15 ./openstack/2016-06-30 ./openstack/2016-10-06 ./openstack/2017-02-22 ./openstack/latest The files that appear on the config drive depend on the arguments that you pass to the :command:`openstack server create` command. The format of this directory is the same as that provided by the :ref:`metadata service `, with the exception that the EC2-compatible metadata is now located in the ``ec2`` directory instead of the root (``/``) directory. Refer to the :ref:`metadata-openstack-format` and :ref:`metadata-ec2-format` sections for information about the format of the files and subdirectories within these directories. Nova metadata ------------- As noted previously, nova provides its metadata in two formats: OpenStack format and EC2-compatible format. .. _metadata-openstack-format: OpenStack format metadata ~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionchanged:: 12.0.0 Support for network metadata was added in the Liberty release. Metadata from the OpenStack API is distributed in JSON format. There are two files provided for each version: ``meta_data.json`` and ``network_data.json``. The ``meta_data.json`` file contains nova-specific information, while the ``network_data.json`` file contains information retrieved from neutron. For example: .. code-block:: console $ curl http://169.254.169.254/openstack/2018-08-27/meta_data.json .. code-block:: json { "random_seed": "yu5ZnkqF2CqnDZVAfZgarGLoFubhcK5wHG4fcNfVZEtie/bTV8k2dDXK\ C7krP2cjp9A7g9LIWe5+WSaZ3zpvQ03hp/4mMNy9V1U/mnRMZyQ3W4Fn\ Nex7UP/0Smjb9rVzfUb2HrVUCN61Yo4jHySTd7UeEasF0nxBrx6NFY6e\ KRoELGPPr1S6+ZDcDT1Sp7pRoHqwVbzyJZc80ICndqxGkZOuvwDgVKZD\ B6O3kFSLuqOfNRaL8y79gJizw/MHI7YjOxtPMr6g0upIBHFl8Vt1VKjR\ s3zB+c3WkC6JsopjcToHeR4tPK0RtdIp6G2Bbls5cblQUAc/zG0a8BAm\ p6Pream9XRpaQBDk4iXtjIn8Bf56SCANOFfeI5BgBeTwfdDGoM0Ptml6\ BJQiyFtc3APfXVVswrCq2SuJop+spgrpiKXOzXvve+gEWVhyfbigI52e\ l1VyMoyZ7/pbdnX0LCGHOdAU8KRnBoo99ZOErv+p7sROEIN4Yywq/U/C\ xXtQ5BNCtae389+3yT5ZCV7fYzLYChgDMJSZ9ds9fDFIWKmsRu3N+wUg\ eL4klxAjRgzQ7MMlap5kppnIYRxXVy0a5j1qOaBAzJB5LLJ7r3/Om38x\ Z4+XGWjqd6KbSwhUVs1aqzxpep1Sp3nTurQCuYjgMchjslt0O5oJjh5Z\ hbCZT3YUc8M=\n", "uuid": "d8e02d56-2648-49a3-bf97-6be8f1204f38", "availability_zone": "nova", "keys": [ { "data": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKV\ VRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTH\ bsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCe\ uMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova\n", "type": "ssh", "name": "mykey" } ], "hostname": "test.novalocal", "launch_index": 0, "meta": { "priority": "low", "role": "webserver" }, "devices": [ { "type": "nic", "bus": "pci", "address": "0000:00:02.0", "mac": "00:11:22:33:44:55", "tags": ["trusted"] }, { "type": "disk", "bus": "ide", "address": "0:0", "serial": "disk-vol-2352423", "path": "/dev/sda", "tags": ["baz"] } ], "project_id": "f7ac731cc11f40efbc03a9f9e1d1d21f", "public_keys": { "mykey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKV\ VRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTH\ bsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCe\ uMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova\n" }, "name": "test" } .. code-block:: console $ curl http://169.254.169.254/openstack/2018-08-27/network_data.json .. 
code-block:: json { "links": [ { "ethernet_mac_address": "fa:16:3e:9c:bf:3d", "id": "tapcd9f6d46-4a", "mtu": null, "type": "bridge", "vif_id": "cd9f6d46-4a3a-43ab-a466-994af9db96fc" } ], "networks": [ { "id": "network0", "link": "tapcd9f6d46-4a", "network_id": "99e88329-f20d-4741-9593-25bf07847b16", "type": "ipv4_dhcp" } ], "services": [ { "address": "8.8.8.8", "type": "dns" } ] } ::download:`Download` network_data.json JSON schema. .. _metadata-ec2-format: EC2-compatible metadata ~~~~~~~~~~~~~~~~~~~~~~~ The EC2-compatible API is compatible with version 2009-04-04 of the `Amazon EC2 metadata service`__ This means that virtual machine images designed for EC2 will work properly with OpenStack. The EC2 API exposes a separate URL for each metadata element. Retrieve a listing of these elements by making a GET query to ``http://169.254.169.254/2009-04-04/meta-data/``. For example: .. code-block:: console $ curl http://169.254.169.254/2009-04-04/meta-data/ ami-id ami-launch-index ami-manifest-path block-device-mapping/ hostname instance-action instance-id instance-type kernel-id local-hostname local-ipv4 placement/ public-hostname public-ipv4 public-keys/ ramdisk-id reservation-id security-groups .. code-block:: console $ curl http://169.254.169.254/2009-04-04/meta-data/block-device-mapping/ ami .. code-block:: console $ curl http://169.254.169.254/2009-04-04/meta-data/placement/ availability-zone .. code-block:: console $ curl http://169.254.169.254/2009-04-04/meta-data/public-keys/ 0=mykey Instances can retrieve the public SSH key (identified by keypair name when a user requests a new instance) by making a GET request to ``http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key``: .. code-block:: console $ curl http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+US\ LGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3B\ ISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated\ by Nova __ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html .. _metadata-userdata: User data --------- *User data* is a blob of data that the user can specify when they launch an instance. The instance can access this data through the metadata service or config drive. Commonly used to pass a shell script that the instance runs on boot. For example, one application that uses user data is the `cloud-init `__ system, which is an open-source package from Ubuntu that is available on various Linux distributions and which handles early initialization of a cloud instance. You can place user data in a local file and pass it through the ``--user-data `` parameter at instance creation. .. code-block:: console $ openstack server create --image ubuntu-cloudimage --flavor 1 \ --user-data mydata.file VM_INSTANCE .. note:: The provided user data should not be base64-encoded, as it will be automatically encoded in order to pass valid input to the REST API, which has a limit of 65535 bytes after encoding. Once booted, you can access this data from the instance using either the metadata service or the config drive. To access it via the metadata service, make a GET request to either ``http://169.254.169.254/openstack/{version}/user_data`` (OpenStack API) or ``http://169.254.169.254/{version}/user-data`` (EC2-compatible API). For example: .. code-block:: console $ curl http://169.254.169.254/openstack/2018-08-27/user_data .. 
code-block:: shell #!/bin/bash echo 'Extra user data here' .. _metadata-vendordata: Vendordata ---------- .. note:: This section provides end user information about the vendordata feature. For deployment information about this feature, refer to the :doc:`admin guide `. .. versionchanged:: 14.0.0 Support for dynamic vendor data was added in the Newton release. **Where configured**, instances can retrieve vendor-specific data from the metadata service or config drive. To access it via the metadata service, make a GET request to either ``http://169.254.169.254/openstack/{version}/vendor_data.json`` or ``http://169.254.169.254/openstack/{version}/vendor_data2.json``, depending on the deployment. For example: .. code-block:: console $ curl http://169.254.169.254/openstack/2018-08-27/vendor_data2.json .. code-block:: json { "testing": { "value1": 1, "value2": 2, "value3": "three" } } .. note:: The presence and contents of this file will vary from deployment to deployment. General guidelines ------------------ - Do not rely on the presence of the EC2 metadata in the metadata API or config drive, because this content might be removed in a future release. For example, do not rely on files in the ``ec2`` directory. - When you create images that access metadata service or config drive data and multiple directories are under the ``openstack`` directory, always select the highest API version by date that your consumer supports. For example, if your guest image supports the ``2012-03-05``, ``2012-08-05``, and ``2013-04-13`` versions, try ``2013-04-13`` first and fall back to a previous version if ``2013-04-13`` is not present. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/user/quotas.rst0000664000175000017500000002015600000000000017663 0ustar00zuulzuul00000000000000====== Quotas ====== Nova uses a quota system for setting limits on resources such as number of instances or amount of CPU that a specific project or user can use. Quota limits and usage can be retrieved using the command-line interface. Types of quota -------------- .. list-table:: :header-rows: 1 :widths: 10 40 * - Quota name - Description * - cores - Number of instance cores (VCPUs) allowed per project. * - instances - Number of instances allowed per project. * - key_pairs - Number of key pairs allowed per user. * - metadata_items - Number of metadata items allowed per instance. * - ram - Megabytes of instance ram allowed per project. * - server_groups - Number of server groups per project. * - server_group_members - Number of servers per server group. The following quotas were previously available but were removed in microversion 2.36 as they proxied information available from the networking service. .. list-table:: :header-rows: 1 :widths: 10 40 * - Quota name - Description * - fixed_ips - Number of fixed IP addresses allowed per project. This number must be equal to or greater than the number of allowed instances. * - floating_ips - Number of floating IP addresses allowed per project. * - networks - Number of networks allowed per project (no longer used). * - security_groups - Number of security groups per project. * - security_group_rules - Number of security group rules per project. Similarly, the following quotas were previously available but were removed in microversion 2.57 as the personality files feature was deprecated. .. 
list-table:: :header-rows: 1 :widths: 10 40 * - Quota name - Description * - injected_files - Number of injected files allowed per project. * - injected_file_content_bytes - Number of content bytes allowed per injected file. * - injected_file_path_bytes - Length of injected file path. Usage ----- Project quotas ~~~~~~~~~~~~~~ To list all default quotas for projects, run: .. code-block:: console $ openstack quota show --default .. note:: This lists default quotas for all services and not just nova. For example: .. code-block:: console $ openstack quota show --default +----------------------+----------+ | Field | Value | +----------------------+----------+ | backup-gigabytes | 1000 | | backups | 10 | | cores | 20 | | fixed-ips | -1 | | floating-ips | 50 | | gigabytes | 1000 | | health_monitors | None | | injected-file-size | 10240 | | injected-files | 5 | | injected-path-size | 255 | | instances | 10 | | key-pairs | 100 | | l7_policies | None | | listeners | None | | load_balancers | None | | location | None | | name | None | | networks | 10 | | per-volume-gigabytes | -1 | | pools | None | | ports | 50 | | project | None | | project_name | project | | properties | 128 | | ram | 51200 | | rbac_policies | 10 | | routers | 10 | | secgroup-rules | 100 | | secgroups | 10 | | server-group-members | 10 | | server-groups | 10 | | snapshots | 10 | | subnet_pools | -1 | | subnets | 10 | | volumes | 10 | +----------------------+----------+ To list the currently set quota values for your project, run: .. code-block:: console $ openstack quota show PROJECT where ``PROJECT`` is the ID or name of your project. For example: .. code-block:: console $ openstack quota show $OS_PROJECT_ID +----------------------+----------------------------------+ | Field | Value | +----------------------+----------------------------------+ | backup-gigabytes | 1000 | | backups | 10 | | cores | 32 | | fixed-ips | -1 | | floating-ips | 10 | | gigabytes | 1000 | | health_monitors | None | | injected-file-size | 10240 | | injected-files | 5 | | injected-path-size | 255 | | instances | 10 | | key-pairs | 100 | | l7_policies | None | | listeners | None | | load_balancers | None | | location | None | | name | None | | networks | 20 | | per-volume-gigabytes | -1 | | pools | None | | ports | 60 | | project | c8156b55ec3b486193e73d2974196993 | | project_name | project | | properties | 128 | | ram | 65536 | | rbac_policies | 10 | | routers | 10 | | secgroup-rules | 50 | | secgroups | 50 | | server-group-members | 10 | | server-groups | 10 | | snapshots | 10 | | subnet_pools | -1 | | subnets | 20 | | volumes | 10 | +----------------------+----------------------------------+ To view a list of options for the :command:`openstack quota show` command, run: .. code-block:: console $ openstack quota show --help User quotas ~~~~~~~~~~~ .. note:: User-specific quotas are legacy and will be removed when migration to :keystone-doc:`unified limits ` is complete. User-specific quotas were added as a way to provide two-level hierarchical quotas and this feature is already being offered in unified limits. For this reason, the below commands have not and will not be ported to openstackclient. To list the quotas for your user, run: .. code-block:: console $ nova quota-show --user USER --tenant PROJECT where ``USER`` is the ID or name of your user and ``PROJECT`` is the ID or name of your project. For example: .. 
code-block:: console $ nova quota-show --user $OS_USERNAME --tenant $OS_PROJECT_ID +-----------------------------+-------+ | Quota | Limit | +-----------------------------+-------+ | instances | 10 | | cores | 32 | | ram | 65536 | | metadata_items | 128 | | injected_files | 5 | | injected_file_content_bytes | 10240 | | injected_file_path_bytes | 255 | | key_pairs | 100 | | server_groups | 10 | | server_group_members | 10 | +-----------------------------+-------+ To view a list of options for the :command:`nova quota-show` command, run: .. code-block:: console $ nova help quota-show ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/user/reboot.rst0000664000175000017500000000140300000000000017633 0ustar00zuulzuul00000000000000================== Reboot an instance ================== You can soft or hard reboot a running instance. A soft reboot attempts a graceful shut down and restart of the instance. A hard reboot power cycles the instance. To reboot a server, use the :command:`openstack server reboot` command: .. code-block:: console $ openstack server reboot SERVER By default, when you reboot an instance it is a soft reboot. To perform a hard reboot, pass the ``--hard`` parameter as follows: .. code-block:: console $ openstack server reboot --hard SERVER It is also possible to reboot a running instance into rescue mode. For example, this operation may be required if a filesystem of an instance becomes corrupted with prolonged use. See :doc:`rescue` for more details. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/user/rescue.rst0000664000175000017500000000631500000000000017636 0ustar00zuulzuul00000000000000================== Rescue an instance ================== Instance rescue provides a mechanism for access, even if an image renders the instance inaccessible. Two rescue modes are currently provided. Instance rescue --------------- By default the instance is booted from the provided rescue image or a fresh copy of the original instance image if a rescue image is not provided. The root disk and optional regenerated config drive are also attached to the instance for data recovery. .. note:: Rescuing a volume-backed instance is not supported with this mode. Stable device instance rescue ----------------------------- As of 21.0.0 (Ussuri) an additional stable device rescue mode is available. This mode now supports the rescue of volume-backed instances. This mode keeps all devices both local and remote attached in their original order to the instance during the rescue while booting from the provided rescue image. This mode is enabled and controlled by the presence of ``hw_rescue_device`` or ``hw_rescue_bus`` image properties on the provided rescue image. As their names suggest these properties control the rescue device type (``cdrom``, ``disk`` or ``floppy``) and bus type (``scsi``, ``virtio``, ``ide``, or ``usb``) used when attaching the rescue image to the instance. Support for each combination of the ``hw_rescue_device`` and ``hw_rescue_bus`` image properties is dependent on the underlying hypervisor and platform being used. For example the ``IDE`` bus is not available on POWER KVM based compute hosts. .. note:: This mode is only supported when using the Libvirt virt driver. This mode is not supported when using LXC or Xen hypervisors as enabled by the :oslo.config:option:`libvirt.virt_type` configurable on the computes. 
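As an illustration, a cloud operator could mark a rescue image for stable device rescue with a command along these lines (``my-rescue-image`` is a placeholder name, not a real image):

.. code-block:: console

   $ openstack image set --property hw_rescue_device=disk \
     --property hw_rescue_bus=virtio my-rescue-image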
Usage ----- .. note:: Pause, suspend, and stop operations are not allowed when an instance is running in rescue mode, as triggering these actions causes the loss of the original instance state and makes it impossible to unrescue the instance. To perform an instance rescue, use the :command:`openstack server rescue` command: .. code-block:: console $ openstack server rescue SERVER .. note:: On running the :command:`openstack server rescue` command, an instance performs a soft shutdown first. This means that the guest operating system has a chance to perform a controlled shutdown before the instance is powered off. The shutdown behavior is configured by the :oslo.config:option:`shutdown_timeout` parameter that can be set in the ``nova.conf`` file. Its value stands for the overall period (in seconds) a guest operating system is allowed to complete the shutdown. The timeout value can be overridden on a per image basis by means of ``os_shutdown_timeout`` that is an image metadata setting allowing different types of operating systems to specify how much time they need to shut down cleanly. If you want to rescue an instance with a specific image, rather than the default one, use the ``--image`` parameter: .. code-block:: console $ openstack server rescue --image IMAGE_ID SERVER To restart the instance from the normal boot disk, run the following command: .. code-block:: console $ openstack server unrescue SERVER ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/user/resize.rst0000664000175000017500000000432600000000000017651 0ustar00zuulzuul00000000000000================== Resize an instance ================== You can change the size of an instance by changing its flavor. This rebuilds the instance and therefore results in a restart. To list the VMs you want to resize, run: .. code-block:: console $ openstack server list Once you have the name or UUID of the server you wish to resize, resize it using the :command:`openstack server resize` command: .. code-block:: console $ openstack server resize --flavor FLAVOR SERVER .. note:: By default, the :command:`openstack server resize` command gives the guest operating system a chance to perform a controlled shutdown before the instance is powered off and the instance is resized. This behavior can be configured by the administrator but it can also be overridden on a per image basis using the ``os_shutdown_timeout`` image metadata setting. This allows different types of operating systems to specify how much time they need to shut down cleanly. See :glance-doc:`Useful image properties ` for details. Resizing can take some time. During this time, the instance status will be ``RESIZE``: .. code-block:: console $ openstack server list +----------------------+----------------+--------+-----------------------------------------+ | ID | Name | Status | Networks | +----------------------+----------------+--------+-----------------------------------------+ | 67bc9a9a-5928-47c... | myCirrosServer | RESIZE | admin_internal_net=192.168.111.139 | +----------------------+----------------+--------+-----------------------------------------+ When the resize completes, the instance status will be ``VERIFY_RESIZE``. You can now confirm the resize to change the status to ``ACTIVE``: .. code-block:: console $ openstack server resize confirm SERVER .. note:: The resized server may be automatically confirmed based on the administrator's configuration of the deployment. 
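If you are unsure whether the resize still needs manual confirmation, you can query just the status field, for example (``SERVER`` is the server name or UUID; the output shown is illustrative):

.. code-block:: console

   $ openstack server show SERVER -f value -c status
   VERIFY_RESIZE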
If the resize does not work as expected, you can revert the resize. This will revert the instance to the old flavor and change the status to ``ACTIVE``: .. code-block:: console $ openstack server resize revert SERVER ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/user/support-matrix.ini0000664000175000017500000016065600000000000021346 0ustar00zuulzuul00000000000000# Copyright (C) 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # # # ========================================= # Nova Hypervisor Feature Capability Matrix # ========================================= # # This obsoletes the information previously at # # https://wiki.openstack.org/wiki/HypervisorSupportMatrix # # This file contains a specification of what feature capabilities each # hypervisor driver in Nova is able to support. Feature capabilities include # what API operations are supported, what storage / networking features can be # used and what aspects of the guest machine can be configured. The capabilities # can be considered to be structured into nested groups, but in this file they # have been flattened for ease of representation. The section names represent # the group structure. At the top level there are the following groups defined # # - operation - public API operations # - storage - host storage configuration options # - networking - host networking configuration options # - guest - guest hardware configuration options # # When considering which capabilities should be marked as mandatory, # consider the general guiding principles listed in the support-matrix.rst # file # # The 'status' field takes possible values # # - mandatory - unconditionally required to be implemented # - optional - optional to support, nice to have # - choice(group) - at least one of the options within the named group # must be implemented # - conditional(cond) - required, if the referenced condition is met. # # The value against each 'driver.XXXX' entry refers to the level of the # implementation of the feature in that driver # # - complete - fully implemented, expected to work at all times # - partial - implemented, but with caveats about when it will work # eg some configurations or hardware or guest OS may not # support it # - missing - not implemented at all # # In the case of the driver being marked as 'partial', then # 'driver-notes.XXX' entry should be used to explain the caveats # around the implementation. # # The 'cli' field takes a list of nova client commands, separated by semicolon. # These CLi commands are related to that feature. # Example: # cli=nova list;nova show # List of driver impls we are going to record info for later # This list only covers drivers that are in the Nova source # tree. Out of tree drivers should maintain their own equivalent # document, and merge it with this when their code merges into # Nova core. 
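# As a purely illustrative sketch of the entry format described above, a
# hypothetical capability would be recorded as follows
# ("operation.example" is not a real capability in this matrix):
#
# [operation.example]
# title=Example operation
# status=optional
# notes=Short description of what the operation provides and why it is
#   or is not mandatory to support.
# cli=nova example-command
# driver.libvirt-kvm-x86=complete
# driver.libvirt-qemu-x86=partial
# driver-notes.libvirt-qemu-x86=Describe the caveats of the partial support.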
[driver.xenserver] title=XenServer [driver.libvirt-kvm-x86] title=Libvirt KVM (x86) [driver.libvirt-kvm-aarch64] title=Libvirt KVM (aarch64) [driver.libvirt-kvm-ppc64] title=Libvirt KVM (ppc64) [driver.libvirt-kvm-s390x] title=Libvirt KVM (s390x) [driver.libvirt-qemu-x86] title=Libvirt QEMU (x86) [driver.libvirt-lxc] title=Libvirt LXC [driver.libvirt-vz-vm] title=Libvirt Virtuozzo VM [driver.libvirt-vz-ct] title=Libvirt Virtuozzo CT [driver.libvirt-xen] title=Libvirt Xen [driver.vmware] title=VMware vCenter [driver.hyperv] title=Hyper-V [driver.ironic] title=Ironic [driver.powervm] title=PowerVM [driver.zvm] title=zVM [operation.attach-volume] title=Attach block volume to instance status=optional notes=The attach volume operation provides a means to hotplug additional block storage to a running instance. This allows storage capabilities to be expanded without interruption of service. In a cloud model it would be more typical to just spin up a new instance with large storage, so the ability to hotplug extra storage is for those cases where the instance is considered to be more of a pet than cattle. Therefore this operation is not considered to be mandatory to support. cli=nova volume-attach driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=complete driver.vmware=complete driver.hyperv=complete driver.ironic=missing driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=missing driver.powervm=complete driver-notes.powervm=This is not tested for every CI run. Add a "powervm:volume-check" comment to trigger a CI job running volume tests. driver.zvm=missing [operation.attach-tagged-volume] title=Attach tagged block device to instance status=optional notes=Attach a block device with a tag to an existing server instance. See "Device tags" for more information. cli=nova volume-attach [--tag ] driver.xenserver=missing driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=complete driver.vmware=missing driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [operation.detach-volume] title=Detach block volume from instance status=optional notes=See notes for attach volume operation. cli=nova volume-detach driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=complete driver.vmware=complete driver.hyperv=complete driver.ironic=missing driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=missing driver.powervm=complete driver-notes.powervm=This is not tested for every CI run. Add a "powervm:volume-check" comment to trigger a CI job running volume tests. driver.zvm=missing [operation.extend-volume] title=Extend block volume attached to instance status=optional notes=The extend volume operation provides a means to extend the size of an attached volume. This allows volume size to be expanded without interruption of service. 
In a cloud model it would be more typical to just spin up a new instance with large storage, so the ability to extend the size of an attached volume is for those cases where the instance is considered to be more of a pet than cattle. Therefore this operation is not considered to be mandatory to support. cli=cinder extend driver.xenserver=missing driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=unknown driver.libvirt-kvm-s390x=unknown driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=unknown driver.vmware=missing driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=unknown driver.libvirt-vz-ct=missing driver.powervm=complete driver-notes.powervm=This is not tested for every CI run. Add a "powervm:volume-check" comment to trigger a CI job running volume tests. driver.zvm=missing [operation.attach-interface] title=Attach virtual network interface to instance status=optional notes=The attach interface operation provides a means to hotplug additional interfaces to a running instance. Hotplug support varies between guest OSes and some guests require a reboot for new interfaces to be detected. This operation allows interface capabilities to be expanded without interruption of service. In a cloud model it would be more typical to just spin up a new instance with more interfaces. cli=nova interface-attach driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=complete driver.vmware=complete driver.hyperv=partial driver-notes.hyperv=Works without issue if instance is off. When hotplugging, only works if using Windows/Hyper-V Server 2016 and the instance is a Generation 2 VM. driver.ironic=complete driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=complete driver.powervm=complete driver.zvm=missing [operation.attach-tagged-interface] title=Attach tagged virtual network interface to instance status=optional notes=Attach a virtual network interface with a tag to an existing server instance. See "Device tags" for more information. cli=nova interface-attach [--tag ] driver.xenserver=missing driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=complete driver.vmware=missing driver.hyperv=complete driver.ironic=missing driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [operation.detach-interface] title=Detach virtual network interface from instance status=optional notes=See notes for attach-interface operation. cli=nova interface-detach driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=complete driver.vmware=complete driver.hyperv=complete driver-notes.hyperv=Works without issue if instance is off. When hotplugging, only works if using Windows/Hyper-V Server 2016 and the instance is a Generation 2 VM. 
driver.ironic=complete driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=complete driver.powervm=complete driver.zvm=missing [operation.maintenance-mode] title=Set the host in a maintenance mode status=optional notes=This operation allows a host to be placed into maintenance mode, automatically triggering migration of any running instances to an alternative host and preventing new instances from being launched. This is not considered to be a mandatory operation to support. The driver methods to implement are "host_maintenance_mode" and "set_host_enabled". cli=nova host-update driver.xenserver=complete driver.libvirt-kvm-x86=missing driver.libvirt-kvm-aarch64=missing driver.libvirt-kvm-ppc64=missing driver.libvirt-kvm-s390x=missing driver.libvirt-qemu-x86=missing driver.libvirt-lxc=missing driver.libvirt-xen=missing driver.vmware=missing driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=missing driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [operation.evacuate] title=Evacuate instances from a host status=optional notes=A possible failure scenario in a cloud environment is the outage of one of the compute nodes. In such a case the instances of the down host can be evacuated to another host. It is assumed that the old host is unlikely ever to be powered back on, otherwise the evacuation attempt will be rejected. When the instances get moved to the new host, their volumes get re-attached and the locally stored data is dropped. That happens in the same way as a rebuild. This is not considered to be a mandatory operation to support. cli=nova evacuate ;nova host-evacuate driver.xenserver=unknown driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=unknown driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=unknown driver.libvirt-lxc=unknown driver.libvirt-xen=unknown driver.vmware=unknown driver.hyperv=unknown driver.ironic=unknown driver.libvirt-vz-vm=missing driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=unknown [operation.rebuild] title=Rebuild instance status=optional notes=A possible use case is additional attributes need to be set to the instance, nova will purge all existing data from the system and remakes the VM with given information such as 'metadata' and 'personalities'. Though this is not considered to be a mandatory operation to support. cli=nova rebuild driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=complete driver.libvirt-xen=complete driver.vmware=complete driver.hyperv=complete driver.ironic=complete driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=complete driver.powervm=missing driver.zvm=unknown [operation.get-guest-info] title=Guest instance status status=mandatory notes=Provides realtime information about the power state of the guest instance. Since the power state is used by the compute manager for tracking changes in guests, this operation is considered mandatory to support. 
cli= driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=complete driver.libvirt-xen=complete driver.vmware=complete driver.hyperv=complete driver.ironic=complete driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=complete driver.powervm=complete driver.zvm=complete [operation.get-host-uptime] title=Guest host uptime status=optional notes=Returns the result of host uptime since power on, it's used to report hypervisor status. cli= driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=complete driver.libvirt-xen=complete driver.vmware=missing driver.hyperv=complete driver.ironic=missing driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=complete driver.powervm=complete driver.zvm=complete [operation.get-host-ip] title=Guest host ip status=optional notes=Returns the ip of this host, it's used when doing resize and migration. cli= driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=complete driver.libvirt-xen=complete driver.vmware=complete driver.hyperv=complete driver.ironic=missing driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=complete driver.powervm=complete driver.zvm=complete [operation.live-migrate] title=Live migrate instance across hosts status=optional notes=Live migration provides a way to move an instance off one compute host, to another compute host. Administrators may use this to evacuate instances from a host that needs to undergo maintenance tasks, though of course this may not help if the host is already suffering a failure. In general instances are considered cattle rather than pets, so it is expected that an instance is liable to be killed if host maintenance is required. It is technically challenging for some hypervisors to provide support for the live migration operation, particularly those built on the container based virtualization. Therefore this operation is not considered mandatory to support. cli=nova live-migration ;nova host-evacuate-live driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=missing driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=complete driver.vmware=complete driver.hyperv=complete driver.ironic=missing driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=complete driver.powervm=missing driver.zvm=missing [operation.force-live-migration-to-complete] title=Force live migration to complete status=optional notes=Live migration provides a way to move a running instance to another compute host. But it can sometimes fail to complete if an instance has a high rate of memory or disk page access. This operation provides the user with an option to assist the progress of the live migration. The mechanism used to complete the live migration depends on the underlying virtualization subsystem capabilities. If libvirt/qemu is used and the post-copy feature is available and enabled then the force complete operation will cause a switch to post-copy mode. Otherwise the instance will be suspended until the migration is completed or aborted. 
cli=nova live-migration-force-complete driver.xenserver=missing driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=missing driver-notes.libvirt-kvm-x86=Requires libvirt>=1.3.3, qemu>=2.5.0 driver.libvirt-kvm-ppc64=complete driver-notes.libvirt-kvm-ppc64=Requires libvirt>=1.3.3, qemu>=2.5.0 driver.libvirt-kvm-s390x=complete driver-notes.libvirt-kvm-s390x=Requires libvirt>=1.3.3, qemu>=2.5.0 driver.libvirt-qemu-x86=complete driver-notes.libvirt-qemu-x86=Requires libvirt>=1.3.3, qemu>=2.5.0 driver.libvirt-lxc=missing driver.libvirt-xen=missing driver.vmware=missing driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=missing driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [operation.abort-in-progress-live-migration] title=Abort an in-progress or queued live migration status=optional notes=Live migration provides a way to move a running instance to another compute host. But it can sometimes need a large amount of time to complete if an instance has a high rate of memory or disk page access or is stuck in queued status if there are too many in-progress live migration jobs in the queue. This operation provides the user with an option to abort in-progress live migrations. When the live migration job is still in "queued" or "preparing" status, it can be aborted regardless of the type of underneath hypervisor, but once the job status changes to "running", only some of the hypervisors support this feature. cli=nova live-migration-abort driver.xenserver=missing driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=missing driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=complete driver.vmware=missing driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=unknown driver.libvirt-vz-ct=unknown driver.powervm=missing driver.zvm=missing [operation.launch] title=Launch instance status=mandatory notes=Importing pre-existing running virtual machines on a host is considered out of scope of the cloud paradigm. Therefore this operation is mandatory to support in drivers. cli= driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=complete driver.libvirt-xen=complete driver.vmware=complete driver.hyperv=complete driver.ironic=complete driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=complete driver.powervm=complete driver.zvm=complete [operation.pause] title=Stop instance CPUs (pause) status=optional notes=Stopping an instances CPUs can be thought of as roughly equivalent to suspend-to-RAM. The instance is still present in memory, but execution has stopped. The problem, however, is that there is no mechanism to inform the guest OS that this takes place, so upon unpausing, its clocks will no longer report correct time. For this reason hypervisor vendors generally discourage use of this feature and some do not even implement it. Therefore this operation is considered optional to support in drivers. 
cli=nova pause driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=complete driver.libvirt-xen=complete driver.vmware=missing driver.hyperv=complete driver.ironic=missing driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=complete [operation.reboot] title=Reboot instance status=optional notes=It is reasonable for a guest OS administrator to trigger a graceful reboot from inside the instance. A host initiated graceful reboot requires guest co-operation and a non-graceful reboot can be achieved by a combination of stop+start. Therefore this operation is considered optional. cli=nova reboot driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=complete driver.libvirt-xen=complete driver.vmware=complete driver.hyperv=complete driver.ironic=complete driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=complete driver.powervm=complete driver.zvm=complete [operation.rescue] title=Rescue instance status=optional notes=The rescue operation starts an instance in a special configuration whereby it is booted from an special root disk image. The goal is to allow an administrator to recover the state of a broken virtual machine. In general the cloud model considers instances to be cattle, so if an instance breaks the general expectation is that it be thrown away and a new instance created. Therefore this operation is considered optional to support in drivers. cli=nova rescue driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=complete driver.vmware=complete driver.hyperv=complete driver.ironic=complete driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=complete driver.powervm=missing driver.zvm=missing [operation.resize] title=Resize instance status=optional notes=The resize operation allows the user to change a running instance to match the size of a different flavor from the one it was initially launched with. There are many different flavor attributes that potentially need to be updated. In general it is technically challenging for a hypervisor to support the alteration of all relevant config settings for a running instance. Therefore this operation is considered optional to support in drivers. 
cli=nova resize driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=complete driver.vmware=complete driver.hyperv=complete driver.ironic=missing driver.libvirt-vz-vm=complete driver-notes.vz-vm=Resizing Virtuozzo instances implies guest filesystem resize also driver.libvirt-vz-ct=complete driver-notes.vz-ct=Resizing Virtuozzo instances implies guest filesystem resize also driver.powervm=missing driver.zvm=missing [operation.resume] title=Restore instance status=optional notes=See notes for the suspend operation cli=nova resume driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=complete driver.vmware=complete driver.hyperv=complete driver.ironic=missing driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=complete driver.powervm=missing driver.zvm=missing [operation.set-admin-password] title=Set instance admin password status=optional notes=Provides a mechanism to (re)set the password of the administrator account inside the instance operating system. This requires that the hypervisor has a way to communicate with the running guest operating system. Given the wide range of operating systems in existence it is unreasonable to expect this to be practical in the general case. The configdrive and metadata service both provide a mechanism for setting the administrator password at initial boot time. In the case where this operation were not available, the administrator would simply have to login to the guest and change the password in the normal manner, so this is just a convenient optimization. Therefore this operation is not considered mandatory for drivers to support. cli=nova set-password driver.xenserver=complete driver-notes.xenserver=Requires XenAPI agent on the guest. driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver-notes.libvirt-kvm-x86=Requires libvirt>=1.2.16 and hw_qemu_guest_agent. driver.libvirt-kvm-ppc64=missing driver.libvirt-kvm-s390x=missing driver.libvirt-qemu-x86=complete driver-notes.libvirt-qemu-x86=Requires libvirt>=1.2.16 and hw_qemu_guest_agent. driver.libvirt-lxc=missing driver.libvirt-xen=missing driver.vmware=missing driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=complete driver-notes.libvirt-vz-vm=Requires libvirt>=2.0.0 driver.libvirt-vz-ct=complete driver-notes.libvirt-vz-ct=Requires libvirt>=2.0.0 driver.powervm=missing driver.zvm=missing [operation.snapshot] title=Save snapshot of instance disk status=optional notes=The snapshot operation allows the current state of the instance root disk to be saved and uploaded back into the glance image repository. The instance can later be booted again using this saved image. This is in effect making the ephemeral instance root disk into a semi-persistent storage, in so much as it is preserved even though the guest is no longer running. In general though, the expectation is that the root disks are ephemeral so the ability to take a snapshot cannot be assumed. Therefore this operation is not considered mandatory to support. 
cli=nova image-create driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=partial driver-notes.libvirt-xen=Only cold snapshots (pause + snapshot) supported driver.vmware=complete driver.hyperv=complete driver.ironic=missing driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=complete driver.powervm=complete driver-notes.powervm=When using the localdisk disk driver, snapshot is only supported if I/O is being hosted by the management partition. If hosting I/O on traditional VIOS, we are limited by the fact that a VSCSI device can't be mapped to two partitions (the VIOS and the management) at once. driver.zvm=complete [operation.suspend] title=Suspend instance status=optional notes=Suspending an instance can be thought of as roughly equivalent to suspend-to-disk. The instance no longer consumes any RAM or CPUs, with its live running state having been preserved in a file on disk. It can later be restored, at which point it should continue execution where it left off. As with stopping instance CPUs, it suffers from the fact that the guest OS will typically be left with a clock that is no longer telling correct time. For container based virtualization solutions, this operation is particularly technically challenging to implement and is an area of active research. This operation tends to make more sense when thinking of instances as pets, rather than cattle, since with cattle it would be simpler to just terminate the instance instead of suspending. Therefore this operation is considered optional to support. cli=nova suspend driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=complete driver.vmware=complete driver.hyperv=complete driver.ironic=missing driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=complete driver.powervm=missing driver.zvm=missing [operation.swap-volume] title=Swap block volumes status=optional notes=The swap volume operation is a mechanism for changing a running instance so that its attached volume(s) are backed by different storage in the host. An alternative to this would be to simply terminate the existing instance and spawn a new instance with the new storage. In other words this operation is primarily targeted towards the pet use case rather than cattle, however, it is required for volume migration to work in the volume service. This is considered optional to support. cli=nova volume-update driver.xenserver=missing driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=complete driver.vmware=missing driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [operation.terminate] title=Shutdown instance status=mandatory notes=The ability to terminate a virtual machine is required in order for a cloud user to stop utilizing resources and thus avoid indefinitely ongoing billing. Therefore this operation is mandatory to support in drivers. 
cli=nova delete driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=complete driver-notes.libvirt-lxc=Fails in latest Ubuntu Trusty kernel from security repository (3.13.0-76-generic), but works in upstream 3.13.x kernels as well as default Ubuntu Trusty latest kernel (3.13.0-58-generic). driver.libvirt-xen=complete driver.vmware=complete driver.hyperv=complete driver.ironic=complete driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=complete driver.powervm=complete driver.zvm=complete [operation.trigger-crash-dump] title=Trigger crash dump status=optional notes=The trigger crash dump operation is a mechanism for triggering a crash dump in an instance. The feature is typically implemented by injecting an NMI (Non-maskable Interrupt) into the instance. It provides a means to dump the production memory image as a dump file which is useful for users. Therefore this operation is considered optional to support. cli=nova trigger-crash-dump driver.xenserver=missing driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=missing driver.vmware=missing driver.hyperv=missing driver.ironic=complete driver.libvirt-vz-vm=missing driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [operation.unpause] title=Resume instance CPUs (unpause) status=optional notes=See notes for the "Stop instance CPUs" operation cli=nova unpause driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=complete driver.libvirt-xen=complete driver.vmware=missing driver.hyperv=complete driver.ironic=missing driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=complete driver.powervm=missing driver.zvm=complete [guest.disk.autoconfig] title=Auto configure disk status=optional notes=Partition and resize FS to match the size specified by flavors.root_gb, As this is hypervisor specific feature. Therefore this operation is considered optional to support. cli= driver.xenserver=complete driver.libvirt-kvm-x86=missing driver.libvirt-kvm-aarch64=missing driver.libvirt-kvm-ppc64=missing driver.libvirt-kvm-s390x=missing driver.libvirt-qemu-x86=missing driver.libvirt-lxc=missing driver.libvirt-xen=missing driver.vmware=missing driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=missing driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=complete [guest.disk.rate-limit] title=Instance disk I/O limits status=optional notes=The ability to set rate limits on virtual disks allows for greater performance isolation between instances running on the same host storage. It is valid to delegate scheduling of I/O operations to the hypervisor with its default settings, instead of doing fine grained tuning. Therefore this is not considered to be an mandatory configuration to support. 
cli=nova limits driver.xenserver=missing driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=missing driver.vmware=missing driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=missing driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [guest.setup.configdrive] title=Config drive support status=choice(guest.setup) notes=The config drive provides an information channel into the guest operating system, to enable configuration of the administrator password, file injection, registration of SSH keys, etc. Since cloud images typically ship with all login methods locked, a mechanism to set the administrator password or keys is required to get login access. Alternatives include the metadata service and disk injection. At least one of the guest setup mechanisms is required to be supported by drivers, in order to enable login access. cli= driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver-notes.libvirt-kvm-aarch64=Requires kernel with proper config (oldest known: Ubuntu 4.13 HWE) driver.libvirt-kvm-ppc64=missing driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=complete driver.libvirt-xen=complete driver.vmware=complete driver.hyperv=complete driver.ironic=complete driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=missing driver.powervm=complete driver.zvm=complete [guest.setup.inject.file] title=Inject files into disk image status=optional notes=This allows for the end user to provide data for multiple files to be injected into the root filesystem before an instance is booted. This requires that the compute node understand the format of the filesystem and any partitioning scheme it might use on the block device. This is a non-trivial problem considering the vast number of filesystems in existence. The problem of injecting files to a guest OS is better solved by obtaining via the metadata service or config drive. Therefore this operation is considered optional to support. cli= driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=missing driver.libvirt-kvm-s390x=missing driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=missing driver.vmware=missing driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=missing driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [guest.setup.inject.networking] title=Inject guest networking config status=optional notes=This allows for static networking configuration (IP address, netmask, gateway and routes) to be injected directly into the root filesystem before an instance is booted. This requires that the compute node understand how networking is configured in the guest OS which is a non-trivial problem considering the vast number of operating system types. The problem of configuring networking is better solved by DHCP or by obtaining static config via config drive. Therefore this operation is considered optional to support. 
cli= driver.xenserver=partial driver-notes.xenserver=Only for Debian derived guests driver.libvirt-kvm-x86=partial driver-notes.libvirt-kvm-x86=Only for Debian derived guests driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=missing driver.libvirt-kvm-s390x=missing driver.libvirt-qemu-x86=partial driver-notes.libvirt-qemu-x86=Only for Debian derived guests driver.libvirt-lxc=missing driver.libvirt-xen=missing driver.vmware=partial driver-notes.vmware=requires vmware tools installed driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=missing driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [console.rdp] title=Remote desktop over RDP status=choice(console) notes=This allows the administrator to interact with the graphical console of the guest OS via RDP. This provides a way to see boot up messages and login to the instance when networking configuration has failed, thus preventing a network based login. Some operating systems may prefer to emit messages via the serial console for easier consumption. Therefore support for this operation is not mandatory, however, a driver is required to support at least one of the listed console access operations. cli=nova get-rdp-console driver.xenserver=missing driver.libvirt-kvm-x86=missing driver.libvirt-kvm-aarch64=missing driver.libvirt-kvm-ppc64=missing driver.libvirt-kvm-s390x=missing driver.libvirt-qemu-x86=missing driver.libvirt-lxc=missing driver.libvirt-xen=missing driver.vmware=missing driver.hyperv=complete driver.ironic=missing driver.libvirt-vz-vm=missing driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [console.serial.log] title=View serial console logs status=choice(console) notes=This allows the administrator to query the logs of data emitted by the guest OS on its virtualized serial port. For UNIX guests this typically includes all boot up messages and so is useful for diagnosing problems when an instance fails to successfully boot. Not all guest operating systems will be able to emit boot information on a serial console, others may only support graphical consoles. Therefore support for this operation is not mandatory, however, a driver is required to support at least one of the listed console access operations. cli=nova console-log driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=missing driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=complete driver.vmware=complete driver.hyperv=complete driver.ironic=missing driver.libvirt-vz-vm=missing driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=complete [console.serial.interactive] title=Remote interactive serial console status=choice(console) notes=This allows the administrator to interact with the serial console of the guest OS. This provides a way to see boot up messages and login to the instance when networking configuration has failed, thus preventing a network based login. Not all guest operating systems will be able to emit boot information on a serial console, others may only support graphical consoles. Therefore support for this operation is not mandatory, however, a driver is required to support at least one of the listed console access operations. 
This feature was introduced in the Juno release with blueprint https://blueprints.launchpad.net/nova/+spec/serial-ports cli=nova get-serial-console driver.xenserver=missing driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=unknown driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=unknown driver.libvirt-lxc=unknown driver.libvirt-xen=unknown driver.vmware=missing driver.hyperv=complete driver.ironic=complete driver.libvirt-vz-vm=missing driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [console.spice] title=Remote desktop over SPICE status=choice(console) notes=This allows the administrator to interact with the graphical console of the guest OS via SPICE. This provides a way to see boot up messages and login to the instance when networking configuration has failed, thus preventing a network based login. Some operating systems may prefer to emit messages via the serial console for easier consumption. Therefore support for this operation is not mandatory, however, a driver is required to support at least one of the listed console access operations. cli=nova get-spice-console driver.xenserver=missing driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=missing driver.libvirt-kvm-s390x=missing driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=missing driver.vmware=missing driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=missing driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [console.vnc] title=Remote desktop over VNC status=choice(console) notes=This allows the administrator to interact with the graphical console of the guest OS via VNC. This provides a way to see boot up messages and login to the instance when networking configuration has failed, thus preventing a network based login. Some operating systems may prefer to emit messages via the serial console for easier consumption. Therefore support for this operation is not mandatory, however, a driver is required to support at least one of the listed console access operations. cli=nova get-vnc-console driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=missing driver.libvirt-kvm-s390x=missing driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=complete driver.vmware=complete driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=complete driver.powervm=complete driver.zvm=missing [storage.block] title=Block storage support status=optional notes=Block storage provides instances with direct attached virtual disks that can be used for persistent storage of data. As an alternative to direct attached disks, an instance may choose to use network based persistent storage. OpenStack provides object storage via the Swift service, or a traditional filesystem such as NFS may be used. Some types of instances may not require persistent storage at all, being simple transaction processing systems reading requests & sending results to and from the network. Therefore support for this configuration is not considered mandatory for drivers to support. 
cli= driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=complete driver.libvirt-xen=complete driver.vmware=complete driver.hyperv=complete driver.ironic=complete driver.libvirt-vz-vm=partial driver.libvirt-vz-ct=missing driver.powervm=complete driver-notes.powervm=This is not tested for every CI run. Add a "powervm:volume-check" comment to trigger a CI job running volume tests. driver.zvm=missing [storage.block.backend.fibrechannel] title=Block storage over fibre channel status=optional notes=To maximise performance of the block storage, it may be desirable to directly access fibre channel LUNs from the underlying storage technology on the compute hosts. Since this is just a performance optimization of the I/O path it is not considered mandatory to support. cli= driver.xenserver=missing driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=missing driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=complete driver.libvirt-xen=complete driver.vmware=missing driver.hyperv=complete driver.ironic=missing driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=missing driver.powervm=complete driver-notes.powervm=This is not tested for every CI run. Add a "powervm:volume-check" comment to trigger a CI job running volume tests. driver.zvm=missing [storage.block.backend.iscsi] title=Block storage over iSCSI status=condition(storage.block==complete) notes=If the driver wishes to support block storage, it is common to provide an iSCSI based backend to access the storage from cinder. This isolates the compute layer for knowledge of the specific storage technology used by Cinder, albeit at a potential performance cost due to the longer I/O path involved. If the driver chooses to support block storage, then this is considered mandatory to support, otherwise it is considered optional. cli= driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=complete driver.libvirt-xen=complete driver.vmware=complete driver.hyperv=complete driver.ironic=complete driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [storage.block.backend.iscsi.auth.chap] title=CHAP authentication for iSCSI status=optional notes=If accessing the cinder iSCSI service over an untrusted LAN it is desirable to be able to enable authentication for the iSCSI protocol. CHAP is the commonly used authentication protocol for iSCSI. This is not considered mandatory to support. (?) cli= driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=complete driver.libvirt-xen=complete driver.vmware=missing driver.hyperv=complete driver.ironic=complete driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [storage.image] title=Image storage support status=mandatory notes=This refers to the ability to boot an instance from an image stored in the glance image repository. 
Without this feature it would not be possible to bootstrap from a clean environment, since there would be no way to get block volumes populated and reliance on external PXE servers is out of scope. Therefore this is considered a mandatory storage feature to support. cli=nova boot --image driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=complete driver.libvirt-xen=complete driver.vmware=complete driver.hyperv=complete driver.ironic=complete driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=complete driver.powervm=complete driver.zvm=complete [operation.uefi-boot] title=uefi boot status=optional notes=This allows users to boot a guest with uefi firmware. cli= driver.xenserver=missing driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=missing driver.libvirt-kvm-s390x=missing driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=missing driver.vmware=complete driver.hyperv=complete driver-notes.hyperv=In order to use uefi, a second generation Hyper-V vm must be requested. driver.ironic=partial driver-notes.ironic=depends on hardware support driver.libvirt-vz-vm=missing driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [operation.device-tags] title=Device tags status=optional notes=This allows users to set tags on virtual devices when creating a server instance. Device tags are used to identify virtual device metadata, as exposed in the metadata API and on the config drive. For example, a network interface tagged with "nic1" will appear in the metadata along with its bus (ex: PCI), bus address (ex: 0000:00:02.0), MAC address, and tag (nic1). If multiple networks are defined, the order in which they appear in the guest operating system will not necessarily reflect the order in which they are given in the server boot request. Guests should therefore not depend on device order to deduce any information about their network devices. Instead, device role tags should be used. Device tags can be applied to virtual network interfaces and block devices. cli=nova boot driver.xenserver=complete driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=unknown driver.libvirt-xen=complete driver.vmware=missing driver.hyperv=complete driver.ironic=missing driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=unknown driver.powervm=missing driver.zvm=missing [operation.quiesce] title=quiesce status=optional notes=Quiesce the specified instance to prepare for snapshots. For libvirt, guest filesystems will be frozen through qemu agent. 
cli= driver.xenserver=missing driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=missing driver.vmware=missing driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=missing driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [operation.unquiesce] title=unquiesce status=optional notes=See notes for the quiesce operation cli= driver.xenserver=missing driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=missing driver.vmware=missing driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=missing driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [operation.multiattach-volume] title=Attach block volume to multiple instances status=optional notes=The multiattach volume operation is an extension to the attach volume operation. It allows to attach a single volume to multiple instances. This operation is not considered to be mandatory to support. Note that for the libvirt driver, this is only supported if qemu<2.10 or libvirt>=3.10. cli=nova volume-attach driver.xenserver=missing driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=complete driver.vmware=missing driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [operation.encrypted-volume] title=Attach encrypted block volume to server status=optional notes=This is the same as the attach volume operation except with an encrypted block device. Encrypted volumes are controlled via admin-configured volume types in the block storage service. Since attach volume is optional this feature is also optional for compute drivers to support. cli=nova volume-attach driver.xenserver=missing driver.libvirt-kvm-x86=complete driver-notes.libvirt-kvm-x86=For native QEMU decryption of the encrypted volume (and rbd support), QEMU>=2.6.0 and libvirt>=2.2.0 are required and only the "luks" type provider is supported. Otherwise both "luks" and "cryptsetup" types are supported but not natively, i.e. not all volume types are supported. driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=unknown driver.libvirt-kvm-s390x=unknown driver.libvirt-qemu-x86=complete driver-notes.libvirt-qemu-x86=The same restrictions apply as KVM x86. driver.libvirt-lxc=missing driver.libvirt-xen=unknown driver.vmware=missing driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=unknown driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [operation.trusted-certs] title=Validate image with trusted certificates status=optional notes=Since trusted image certification validation is configurable by the cloud deployer it is considered optional. However, it is a virt-agnostic feature so there is no good reason that all virt drivers cannot support the feature since it is mostly just plumbing user requests through the virt driver when downloading images. cli=nova boot --trusted-image-certificate-id ... 
driver.xenserver=missing driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=complete driver.libvirt-xen=complete driver.vmware=missing driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=complete driver.powervm=missing driver.zvm=missing [operation.file-backed-memory] title=File backed memory status=optional notes=The file backed memory feature in Openstack allows a Nova node to serve guest memory from a file backing store. This mechanism uses the libvirt file memory source, causing guest instance memory to be allocated as files within the libvirt memory backing directory. This is only supported if qemu>2.6 and libvirt>4.0.0 cli= driver.xenserver=missing driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=unknown driver.libvirt-kvm-s390x=unknown driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=missing driver.vmware=missing driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=missing driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [operation.report-cpu-traits] title=Report CPU traits status=optional notes=The report CPU traits feature in OpenStack allows a Nova node to report its CPU traits according to CPU mode configuration. This gives users the ability to boot instances based on desired CPU traits. cli= driver.xenserver=missing driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=unknown driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=missing driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=missing driver.vmware=missing driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=missing driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [operation.port-with-resource-request] title=SR-IOV ports with resource request status=optional notes=To support neutron SR-IOV ports (vnic_type=direct or vnic_type=macvtap) with resource request the virt driver needs to include the 'parent_ifname' key in each subdict which represents a VF under the 'pci_passthrough_devices' key in the dict returned from the ComputeDriver.get_available_resource() call. cli=nova boot --nic port-id ... driver.xenserver=missing driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=missing driver.libvirt-kvm-ppc64=missing driver.libvirt-kvm-s390x=missing driver.libvirt-qemu-x86=complete driver.libvirt-lxc=missing driver.libvirt-xen=missing driver.vmware=missing driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=missing driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [operation.boot-encrypted-vm] title=Boot instance with secure encrypted memory status=optional notes=The feature allows VMs to be booted with their memory hardware-encrypted with a key specific to the VM, to help protect the data residing in the VM against access from anyone other than the user of the VM. The Configuration and Security Guides specify usage of this feature. cli=openstack server create driver.xenserver=missing driver.libvirt-kvm-x86=partial driver-notes.libvirt-kvm-x86=This feature is currently only available with hosts which support the SEV (Secure Encrypted Virtualization) technology from AMD. 
driver.libvirt-kvm-aarch64=missing driver.libvirt-kvm-ppc64=missing driver.libvirt-kvm-s390x=missing driver.libvirt-qemu-x86=missing driver.libvirt-lxc=missing driver.libvirt-xen=missing driver.vmware=missing driver.hyperv=missing driver.ironic=missing driver.libvirt-vz-vm=missing driver.libvirt-vz-ct=missing driver.powervm=missing driver.zvm=missing [operation.cache-images] title=Cache base images for faster instance boot status=optional notes=Drivers supporting this feature cache base images on the compute host so that subsequent boots need not incur the expense of downloading them. Partial support entails caching an image after the first boot that uses it. Complete support allows priming the cache so that the first boot also benefits. Image caching support is tunable via config options in the [image_cache] group. cli=openstack server create driver.xenserver=missing driver.libvirt-kvm-x86=complete driver.libvirt-kvm-aarch64=complete driver.libvirt-kvm-ppc64=complete driver.libvirt-kvm-s390x=complete driver.libvirt-qemu-x86=complete driver.libvirt-lxc=unknown driver.libvirt-xen=complete driver.vmware=partial driver.hyperv=partial driver.ironic=missing driver.libvirt-vz-vm=complete driver.libvirt-vz-ct=complete driver.powervm=partial driver-notes.powervm=The PowerVM driver does image caching natively when using the SSP disk driver. It does not use the config options in the [image_cache] group. driver.zvm=missing ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/user/support-matrix.rst0000664000175000017500000000345200000000000021365 0ustar00zuulzuul00000000000000 Feature Support Matrix ====================== .. TODO: Add UML (User-Mode Linux) hypervisor and its support status for the listed features to the support matrix. When considering which capabilities should be marked as mandatory the following general guiding principles were applied * **Inclusivity** - people have shown ability to make effective use of a wide range of virtualization technologies with broadly varying feature sets. Aiming to keep the requirements as inclusive as possible, avoids second-guessing what a user may wish to use the cloud compute service for. * **Bootstrapping** - a practical use case test is to consider that starting point for the compute deploy is an empty data center with new machines and network connectivity. The look at what are the minimum features required of a compute service, in order to get user instances running and processing work over the network. * **Competition** - an early leader in the cloud compute service space was Amazon EC2. A sanity check for whether a feature should be mandatory is to consider whether it was available in the first public release of EC2. This had quite a narrow feature set, but none the less found very high usage in many use cases. So it serves to illustrate that many features need not be considered mandatory in order to get useful work done. * **Reality** - there are many virt drivers currently shipped with Nova, each with their own supported feature set. Any feature which is missing in at least one virt driver that is already in-tree, must by inference be considered optional until all in-tree drivers support it. This does not rule out the possibility of a currently optional feature becoming mandatory at a later date, based on other principles above. .. 
support_matrix:: support-matrix.ini ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/source/user/upgrade.rst0000664000175000017500000004002100000000000017767 0ustar00zuulzuul00000000000000.. Copyright 2014 Rackspace All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Upgrades ======== Nova aims to provide upgrades with minimal downtime. Firstly, the data plane. There should be no VM downtime when you upgrade Nova. Nova has had this since the early days. Secondly, we want no downtime during upgrades of the Nova control plane. This document is trying to describe how we can achieve that. Once we have introduced the key concepts relating to upgrade, we will introduce the process needed for a no downtime upgrade of nova. .. _minimal_downtime_upgrade: Minimal Downtime Upgrade Process -------------------------------- Plan your upgrade ''''''''''''''''' * Read and ensure you understand the release notes for the next release. * You should ensure all required steps from the previous upgrade have been completed, such as data migrations. * Make a backup of your database. Nova does not support downgrading of the database. Hence, in case of upgrade failure, restoring database from backup is the only choice. * During upgrade be aware that there will be additional load on nova-conductor. You may find you need to add extra nova-conductor workers to deal with the additional upgrade related load. Rolling upgrade process ''''''''''''''''''''''' To reduce downtime, the compute services can be upgraded in a rolling fashion. It means upgrading a few services at a time. This results in a condition where both old (N) and new (N+1) nova-compute services co-exist for a certain time period. Note that, there is no upgrade of the hypervisor here, this is just upgrading the nova services. If reduced downtime is not a concern (or lower complexity is desired), all services may be taken down and restarted at the same time. #. Before maintenance window: * Start the process with the controller node. Install the code for the next version of Nova, either in a venv or a separate control plane node, including all the python dependencies. * Using the newly installed nova code, run the DB sync. First run ``nova-manage api_db sync``, then ``nova-manage db sync``. ``nova-manage db sync`` should be run for all cell databases, including ``cell0``. If necessary, the ``--config-file`` argument can be used to point to the correct ``nova.conf`` file for the given cell. These schema change operations should have minimal or no effect on performance, and should not cause any operations to fail. * At this point, new columns and tables may exist in the database. These DB schema changes are done in a way that both the N and N+1 release can perform operations against the same schema. #. During maintenance window: * Several nova services rely on the external placement service being at the latest level. Therefore, you must upgrade placement before any nova services. 
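For illustration, a minimal console sketch of the schema sync step described above might look like the following; note that the ``--config-file`` paths are examples only and depend on how your deployment lays out its per-cell configuration files:

.. code-block:: console

   $ nova-manage api_db sync
   $ nova-manage --config-file /etc/nova/nova.conf db sync
   $ nova-manage --config-file /etc/nova/nova-cell0.conf db sync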
See the :placement-doc:`placement upgrade notes ` for more details on upgrading the placement service.
* For maximum safety (no failed API operations), gracefully shut down all the services (i.e. SIG_TERM) except nova-compute.
* Before restarting services with new code, perform the release-specific readiness check with ``nova-status upgrade check``. See the :ref:`nova-status upgrade check ` for more details on the status check.
* Start all services on the new code, with ``[upgrade_levels]compute=auto`` in nova.conf. It is safest to start nova-conductor first and nova-api last. Note that you may use a static alias name instead of ``auto``, such as ``[upgrade_levels]compute=``. Also note that this step is only required if compute services are not upgraded in lock-step with the control services.
* If desired, gracefully shut down nova-compute (i.e. SIG_TERM) services in small batches, then start the new version of the code with: ``[upgrade_levels]compute=auto``. If this batch-based approach is used, only a few compute nodes will have delayed API actions at any given time, and this helps ensure there is enough capacity online to service any boot requests that happen during this time.
#. After maintenance window:
* Once all services are running the new code, double-check in the DB that there are no old orphaned service records using ``nova service-list``.
* Now that all services are upgraded, we need to send the SIG_HUP signal, so all the services clear any cached service version data. When a new service starts, it automatically detects which version of the compute RPC protocol to use, and it can decide if it is safe to do any online data migrations. Note that if you used a static value for the upgrade_level, such as ``[upgrade_levels]compute=``, you must update nova.conf to remove that configuration value and do a full service restart.
* Now that all the services are upgraded and signaled, the system is able to use the latest version of the RPC protocol and can access all of the features in the new release.
* Once all the services are running the latest version of the code, and all the services are aware they all have been upgraded, it is safe to transform the data in the database into its new format. While some of this work happens on demand when the system reads a database row that needs updating, we must get all the data transformed into the current version before the next upgrade. Additionally, some data may not be transformed automatically, so performing the data migration is necessary to avoid performance degradation due to compatibility routines.
* This process can put significant extra write load on the database. Complete all online data migrations using: ``nova-manage db online_data_migrations --max-count ``. Note that you can use the ``--max-count`` argument to reduce the load this operation will place on the database, which allows you to run a small chunk of the migrations until all of the work is done. The chunk size you should use depends on your infrastructure and how much additional load you can impose on the database. To reduce load, perform smaller batches with delays between chunks. To reduce time to completion, run larger batches. Each time it is run, the command will show a summary of completed and remaining records. If using the ``--max-count`` option, the command should be rerun while it returns exit status 1 (which indicates that some migrations took effect, and more work may remain to be done), even if some migrations produce errors.
Current Database Upgrade Types ------------------------------ Currently Nova has 2 types of database upgrades that are in use. #. Schema Migrations #. Data Migrations Schema Migrations '''''''''''''''''' Schema migrations are defined in ``nova/db/sqlalchemy/migrate_repo/versions`` and in ``nova/db/sqlalchemy/api_migrations/migrate_repo/versions``. They are the routines that transform our database structure, which should be additive and able to be applied to a running system before service code has been upgraded. .. note:: The API database migrations should be assumed to run before the migrations for the main/cell databases. This is because the former contains information about how to find and connect to the latter. Some management commands that operate on multiple cells will attempt to list and iterate over cell mapping records, which require a functioning API database schema. .. _data-migrations: Data Migrations ''''''''''''''''' Online data migrations occur in two places: #. Inline migrations that occur as part of normal run-time activity as data is read in the old format and written in the new format #. Background online migrations that are performed using ``nova-manage`` to complete transformations that will not occur incidentally due to normal runtime activity. An example of an online data migration is the flavor migration done as part of Nova object version 1.18. This included a transient migration of flavor storage from one database location to another. .. note:: Database downgrades are not supported. Migration policy: ''''''''''''''''' The following guidelines for schema and data migrations are followed in order to ease upgrades: * Additive schema migrations - In general, almost all schema migrations should be additive. Put simply, they should only create elements like columns, indices, and tables. * Subtractive schema migrations - To remove an element like a column or table during the N release cycle: #. The element must be deprecated and retained for backward compatibility. (This allows for graceful upgrade from N to N+1.) #. Data migration, by the objects layer, must completely migrate data from the old version of the schema to the new version. * `Data migration example `_ * `Data migration enforcement example `_ (for sqlalchemy migrate/deprecated scripts): #. The column can then be removed with a migration at the start of N+2. * All schema migrations should be idempotent. (For example, a migration should check if an element exists in the schema before attempting to add it.) This logic comes for free in the autogenerated workflow of the online migrations. * Constraints - When adding a foreign or unique key constraint, the schema migration code needs to handle possible problems with data before applying the constraint. (Example: A unique constraint must clean up duplicate records before applying said constraint.)
* Data migrations - As mentioned above, data migrations will be done in an online fashion by custom code in the object layer that handles moving data between the old and new portions of the schema. In addition, for each type of data migration performed, there should exist a nova-manage option for an operator to manually request that rows be migrated. * See `flavor migration spec `_ for an example of data migrations in the object layer. *Future* work - #. Adding plumbing to enforce that relevant data migrations are completed before running `contract` in the expand/migrate/contract schema migration workflow. A potential solution would be for `contract` to run a gating test for each specific subtract operation to determine if the operation can be completed. Concepts -------- Here are the key concepts you need to know before reading the section on the upgrade process: RPC version pinning Through careful RPC versioning, newer nodes are able to talk to older nova-compute nodes. When upgrading control plane nodes, we can pin them at an older version of the compute RPC API, until all the compute nodes are able to be upgraded. https://wiki.openstack.org/wiki/RpcMajorVersionUpdates .. note:: The procedure for rolling upgrades with multiple cells (cells v2) is not yet determined. Online Configuration Reload During the upgrade, we pin new services at the older RPC version. When all services are updated to use newer code, we need to unpin them so we are able to use any new functionality. To avoid having to restart the service, using the current SIGHUP signal handling, or otherwise, ideally we need a way to update the currently running process to use the latest configuration. Graceful service shutdown Many nova services are python processes listening for messages on an AMQP queue, including nova-compute. When the process is sent SIGTERM, it stops getting new work from its queue, completes any outstanding work, then terminates. During this process, messages can be left on the queue for when the python process starts back up. This gives us a way to shut down a service using older code, and start up a service using newer code with minimal impact. If it's a service that can have multiple workers, like nova-conductor, you can usually add the new workers before the graceful shutdown of the old workers. In the case of singleton services, like nova-compute, some actions could be delayed during the restart, but ideally no actions should fail due to the restart. .. note:: While this is true for the RabbitMQ RPC backend, we need to confirm what happens for other RPC backends. API load balancer draining When upgrading API nodes, you can make your load balancer only send new connections to the newer API nodes, allowing for a seamless update of your API nodes. Expand/Contract DB Migrations Modern databases are able to make many schema changes while you are still writing to the database. Taking this a step further, we can make all DB changes by first adding the new structures, expanding. Then you can slowly move all the data into a new location and format. Once that is complete, you can drop the bits of the schema that are no longer needed, i.e. contract. This happens multiple cycles after we have stopped using a particular piece of schema, and can happen in a schema migration without affecting runtime code. Online Data Migrations using objects In Kilo we are moving all data migration into the DB objects code.
When trying to migrate data in the database from the old format to the new format, this is done in the object code when reading or saving things that are in the old format. For records that are not updated, you need to run a background process to convert those records into the newer format. This process must be completed before you contract the database schema. DB prune deleted rows Currently resources are soft deleted in the main database, so users are able to track instances in the DB that are created and destroyed in production. However, most people have a data retention policy, of say 30 days or 90 days after which they will want to delete those entries. Not deleting those entries affects DB performance as indices grow very large and data migrations take longer as there is more data to migrate. nova-conductor object backports RPC pinning ensures new services can talk to the older service's method signatures. But many of the parameters are objects that may well be too new for the old service to understand, so you are able to send the object back to the nova-conductor to be downgraded to a version the older service can understand. Testing ------- Once we have all the pieces in place, we hope to move the Grenade testing to follow this new pattern. The current tests only cover the existing upgrade process where: * old computes can run with new control plane * but control plane is turned off for DB migrations ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/user/wsgi.rst0000664000175000017500000000337300000000000017322 0ustar00zuulzuul00000000000000Using WSGI with Nova ==================== Though the compute and metadata APIs can be run using independent scripts that provide eventlet-based HTTP servers, it is generally considered more performant and flexible to run them using a generic HTTP server that supports WSGI_ (such as Apache_ or nginx_). The nova project provides two automatically generated entry points that support this: ``nova-api-wsgi`` and ``nova-metadata-wsgi``. These read ``nova.conf`` and ``api-paste.ini`` and generate the required module-level ``application`` that most WSGI servers require. If nova is installed using pip, these two scripts will be installed into whatever the expected ``bin`` directory is for the environment. The new scripts replace older experimental scripts that could be found in the ``nova/wsgi`` directory of the code repository. The new scripts are *not* experimental. When running the compute and metadata services with WSGI, sharing the compute and metadata service in the same process is not supported (as it is in the eventlet-based scripts). In devstack as of May 2017, the compute and metadata APIs are hosted by a Apache communicating with uwsgi_ via mod_proxy_uwsgi_. Inspecting the configuration created there can provide some guidance on one option for managing the WSGI scripts. It is important to remember, however, that one of the major features of using WSGI is that there are many different ways to host a WSGI application. Different servers make different choices about performance and configurability. .. _WSGI: https://www.python.org/dev/peps/pep-3333/ .. _apache: http://httpd.apache.org/ .. _nginx: http://nginx.org/en/ .. _uwsgi: https://uwsgi-docs.readthedocs.io/ .. 
_mod_proxy_uwsgi: http://uwsgi-docs.readthedocs.io/en/latest/Apache.html#mod-proxy-uwsgi ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2584705 nova-21.2.4/doc/test/0000775000175000017500000000000000000000000014312 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/doc/test/redirect-tests.txt0000664000175000017500000001344700000000000020025 0ustar00zuulzuul00000000000000/nova/latest/addmethod.openstackapi.html 301 /nova/latest/contributor/api-2.html /nova/latest/admin/flavors2.html 301 /nova/latest/admin/flavors.html /nova/latest/admin/quotas2.html 301 /nova/latest/admin/quotas.html /nova/latest/admin/numa.html 301 /nova/latest/admin/cpu-topologies.html /nova/latest/aggregates.html 301 /nova/latest/user/aggregates.html /nova/latest/api_microversion_dev.html 301 /nova/latest/contributor/microversions.html /nova/latest/api_microversion_history.html 301 /nova/latest/reference/api-microversion-history.html /nova/latest/api_plugins.html 301 /nova/latest/contributor/api.html /nova/latest/architecture.html 301 /nova/latest/user/architecture.html /nova/latest/block_device_mapping.html 301 /nova/latest/user/block-device-mapping.html /nova/latest/blueprints.html 301 /nova/latest/contributor/blueprints.html /nova/latest/cells.html 301 /nova/latest/user/cells.html /nova/latest/code-review.html 301 /nova/latest/contributor/code-review.html /nova/latest/conductor.html 301 /nova/latest/user/conductor.html /nova/latest/development.environment.html 301 /nova/latest/contributor/development-environment.html /nova/latest/devref/api.html 301 /nova/latest/contributor/api.html /nova/latest/devref/cells.html 301 /nova/latest/user/cells.html /nova/latest/devref/filter_scheduler.html 301 /nova/latest/user/filter-scheduler.html # catch all, if we hit something in devref assume it moved to # reference unless we have already triggered a hit above. 
/nova/latest/devref/any-page.html 301 /nova/latest/reference/any-page.html /nova/latest/feature_classification.html 301 /nova/latest/user/feature-classification.html /nova/latest/filter_scheduler.html 301 /nova/latest/user/filter-scheduler.html /nova/latest/gmr.html 301 /nova/latest/reference/gmr.html /nova/latest/how_to_get_involved.html 301 /nova/latest/contributor/how-to-get-involved.html /nova/latest/i18n.html 301 /nova/latest/reference/i18n.html /nova/latest/man/index.html 301 /nova/latest/cli/index.html /nova/latest/man/nova-api-metadata.html 301 /nova/latest/cli/nova-api-metadata.html /nova/latest/man/nova-api-os-compute.html 301 /nova/latest/cli/nova-api-os-compute.html /nova/latest/man/nova-api.html 301 /nova/latest/cli/nova-api.html # this is gone and never coming back, indicate that to the end users /nova/latest/man/nova-cells.html 301 /nova/latest/cli/nova-cells.html /nova/latest/man/nova-compute.html 301 /nova/latest/cli/nova-compute.html /nova/latest/man/nova-conductor.html 301 /nova/latest/cli/nova-conductor.html /nova/latest/man/nova-dhcpbridge.html 301 /nova/latest/cli/nova-dhcpbridge.html /nova/latest/man/nova-manage.html 301 /nova/latest/cli/nova-manage.html /nova/latest/man/nova-network.html 301 /nova/latest/cli/nova-network.html /nova/latest/man/nova-novncproxy.html 301 /nova/latest/cli/nova-novncproxy.html /nova/latest/man/nova-rootwrap.html 301 /nova/latest/cli/nova-rootwrap.html /nova/latest/man/nova-scheduler.html 301 /nova/latest/cli/nova-scheduler.html /nova/latest/man/nova-serialproxy.html 301 /nova/latest/cli/nova-serialproxy.html /nova/latest/man/nova-spicehtml5proxy.html 301 /nova/latest/cli/nova-spicehtml5proxy.html /nova/latest/man/nova-status.html 301 /nova/latest/cli/nova-status.html /nova/latest/notifications.html 301 /nova/latest/reference/notifications.html /nova/latest/placement.html 301 /nova/latest/user/placement.html /nova/latest/placement_dev.html 301 /nova/latest/contributor/placement.html /nova/latest/policies.html 301 /nova/latest/contributor/policies.html /nova/latest/policy_enforcement.html 301 /nova/latest/reference/policy-enforcement.html /nova/latest/process.html 301 /nova/latest/contributor/process.html /nova/latest/project_scope.html 301 /nova/latest/contributor/project-scope.html /nova/latest/quotas.html 301 /nova/latest/user/quotas.html /nova/latest/releasenotes.html 301 /nova/latest/contributor/releasenotes.html /nova/latest/rpc.html 301 /nova/latest/reference/rpc.html /nova/latest/sample_config.html 301 /nova/latest/configuration/sample-config.html /nova/latest/sample_policy.html 301 /nova/latest/configuration/sample-policy.html /nova/latest/scheduler_evolution.html 301 /nova/latest/reference/scheduler-evolution.html /nova/latest/services.html 301 /nova/latest/reference/services.html /nova/latest/stable_api.html 301 /nova/latest/reference/stable-api.html /nova/latest/support-matrix.html 301 /nova/latest/user/support-matrix.html /nova/latest/test_strategy.html 301 /nova/latest/contributor/testing.html /nova/latest/testing/libvirt-numa.html 301 /nova/latest/contributor/testing/libvirt-numa.html /nova/latest/testing/serial-console.html 301 /nova/latest/contributor/testing/serial-console.html /nova/latest/testing/zero-downtime-upgrade.html 301 /nova/latest/contributor/testing/zero-downtime-upgrade.html /nova/latest/threading.html 301 /nova/latest/reference/threading.html /nova/latest/upgrade.html 301 /nova/latest/user/upgrade.html /nova/latest/user/aggregates.html 301 /nova/latest/admin/aggregates.html 
/nova/latest/user/cellsv2_layout.html 301 /nova/latest/user/cellsv2-layout.html /nova/latest/user/config-drive.html 301 /nova/latest/user/metadata.html /nova/latest/user/metadata-service.html 301 /nova/latest/user/metadata.html /nova/latest/user/placement.html 301 /placement/latest/ /nova/latest/user/user-data.html 301 /nova/latest/user/metadata.html /nova/latest/user/vendordata.html 301 /nova/latest/user/metadata.html /nova/latest/vendordata.html 301 /nova/latest/user/metadata.html /nova/latest/vmstates.html 301 /nova/latest/reference/vm-states.html /nova/latest/wsgi.html 301 /nova/latest/user/wsgi.html /nova/latest/admin/adv-config.html 301 /nova/latest/admin/index.html /nova/latest/admin/system-admin.html 301 /nova/latest/admin/index.html /nova/latest/admin/port_with_resource_request.html 301 /nova/latest/admin/ports-with-resource-requests.html /nova/latest/admin/manage-users.html 301 /nova/latest/admin/arch.html ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0624723 nova-21.2.4/etc/0000775000175000017500000000000000000000000013341 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2584705 nova-21.2.4/etc/nova/0000775000175000017500000000000000000000000014304 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/etc/nova/README-nova.conf.txt0000664000175000017500000000041000000000000017662 0ustar00zuulzuul00000000000000To generate the sample nova.conf file, run the following command from the top level of the nova directory: tox -egenconfig For a pre-generated example of the latest nova.conf, see: https://docs.openstack.org/nova/latest/configuration/sample-config.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/etc/nova/README-policy.yaml.txt0000664000175000017500000000044100000000000020237 0ustar00zuulzuul00000000000000Nova ==== To generate the sample nova policy.yaml file, run the following command from the top level of the nova directory: tox -egenpolicy For a pre-generated example of the latest nova policy.yaml, see: https://docs.openstack.org/nova/latest/configuration/sample-policy.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/etc/nova/api-paste.ini0000664000175000017500000000733300000000000016676 0ustar00zuulzuul00000000000000############ # Metadata # ############ [composite:metadata] use = egg:Paste#urlmap /: meta [pipeline:meta] pipeline = cors metaapp [app:metaapp] paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory ############# # OpenStack # ############# [composite:osapi_compute] use = call:nova.api.openstack.urlmap:urlmap_factory /: oscomputeversions /v2: oscomputeversion_legacy_v2 /v2.1: oscomputeversion_v2 # v21 is an exactly feature match for v2, except it has more stringent # input validation on the wsgi surface (prevents fuzzing early on the # API). It also provides new features via API microversions which are # opt into for clients. 
Unaware clients will receive the same frozen # v2 API feature set, but with some relaxed validation /v2/+: openstack_compute_api_v21_legacy_v2_compatible /v2.1/+: openstack_compute_api_v21 [composite:openstack_compute_api_v21] use = call:nova.api.auth:pipeline_factory_v21 keystone = cors http_proxy_to_wsgi compute_req_id faultwrap request_log sizelimit osprofiler authtoken keystonecontext osapi_compute_app_v21 # DEPRECATED: The [api]auth_strategy conf option is deprecated and will be # removed in a subsequent release, whereupon this pipeline will be unreachable. noauth2 = cors http_proxy_to_wsgi compute_req_id faultwrap request_log sizelimit osprofiler noauth2 osapi_compute_app_v21 [composite:openstack_compute_api_v21_legacy_v2_compatible] use = call:nova.api.auth:pipeline_factory_v21 keystone = cors http_proxy_to_wsgi compute_req_id faultwrap request_log sizelimit osprofiler authtoken keystonecontext legacy_v2_compatible osapi_compute_app_v21 # DEPRECATED: The [api]auth_strategy conf option is deprecated and will be # removed in a subsequent release, whereupon this pipeline will be unreachable. noauth2 = cors http_proxy_to_wsgi compute_req_id faultwrap request_log sizelimit osprofiler noauth2 legacy_v2_compatible osapi_compute_app_v21 [filter:request_log] paste.filter_factory = nova.api.openstack.requestlog:RequestLog.factory [filter:compute_req_id] paste.filter_factory = nova.api.compute_req_id:ComputeReqIdMiddleware.factory [filter:faultwrap] paste.filter_factory = nova.api.openstack:FaultWrapper.factory # DEPRECATED: NoAuthMiddleware will be removed in a subsequent release, # whereupon this filter will cease to function. [filter:noauth2] paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory [filter:osprofiler] paste.filter_factory = nova.profiler:WsgiMiddleware.factory [filter:sizelimit] paste.filter_factory = oslo_middleware:RequestBodySizeLimiter.factory [filter:http_proxy_to_wsgi] paste.filter_factory = oslo_middleware.http_proxy_to_wsgi:HTTPProxyToWSGI.factory [filter:legacy_v2_compatible] paste.filter_factory = nova.api.openstack:LegacyV2CompatibleWrapper.factory [app:osapi_compute_app_v21] paste.app_factory = nova.api.openstack.compute:APIRouterV21.factory [pipeline:oscomputeversions] pipeline = cors faultwrap request_log http_proxy_to_wsgi oscomputeversionapp [pipeline:oscomputeversion_v2] pipeline = cors compute_req_id faultwrap request_log http_proxy_to_wsgi oscomputeversionapp_v2 [pipeline:oscomputeversion_legacy_v2] pipeline = cors compute_req_id faultwrap request_log http_proxy_to_wsgi legacy_v2_compatible oscomputeversionapp_v2 [app:oscomputeversionapp] paste.app_factory = nova.api.openstack.compute.versions:Versions.factory [app:oscomputeversionapp_v2] paste.app_factory = nova.api.openstack.compute.versions:VersionsV2.factory ########## # Shared # ########## [filter:cors] paste.filter_factory = oslo_middleware.cors:filter_factory oslo_config_project = nova [filter:keystonecontext] paste.filter_factory = nova.api.auth:NovaKeystoneContext.factory [filter:authtoken] paste.filter_factory = keystonemiddleware.auth_token:filter_factory ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/etc/nova/logging_sample.conf0000664000175000017500000000305100000000000020141 0ustar00zuulzuul00000000000000[loggers] keys = root, nova [handlers] keys = stderr, stdout, watchedfile, syslog, null [formatters] keys = context, default [logger_root] level = WARNING handlers = null [logger_nova] level = INFO 
handlers = stderr qualname = nova [logger_amqp] level = WARNING handlers = stderr qualname = amqp [logger_amqplib] level = WARNING handlers = stderr qualname = amqplib [logger_sqlalchemy] level = WARNING handlers = stderr qualname = sqlalchemy # "level = INFO" logs SQL queries. # "level = DEBUG" logs SQL queries and results. # "level = WARNING" logs neither. (Recommended for production systems.) [logger_boto] level = WARNING handlers = stderr qualname = boto # NOTE(mikal): suds is used by the vmware driver, removing this will # cause many extraneous log lines for their tempest runs. Refer to # https://review.opendev.org/#/c/219225/ for details. [logger_suds] level = INFO handlers = stderr qualname = suds [logger_eventletwsgi] level = WARNING handlers = stderr qualname = eventlet.wsgi.server [handler_stderr] class = StreamHandler args = (sys.stderr,) formatter = context [handler_stdout] class = StreamHandler args = (sys.stdout,) formatter = context [handler_watchedfile] class = handlers.WatchedFileHandler args = ('nova.log',) formatter = context [handler_syslog] class = handlers.SysLogHandler args = ('/dev/log', handlers.SysLogHandler.LOG_USER) formatter = context [handler_null] class = logging.NullHandler formatter = default args = () [formatter_context] class = oslo_log.formatters.ContextFormatter [formatter_default] format = %(message)s ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/etc/nova/nova-config-generator.conf0000664000175000017500000000070000000000000021342 0ustar00zuulzuul00000000000000[DEFAULT] output_file = etc/nova/nova.conf.sample wrap_width = 80 summarize = true namespace = nova.conf namespace = oslo.log namespace = oslo.messaging namespace = oslo.policy namespace = oslo.privsep namespace = oslo.service.periodic_task namespace = oslo.service.service namespace = oslo.db namespace = oslo.db.concurrency namespace = oslo.middleware namespace = oslo.concurrency namespace = keystonemiddleware.auth_token namespace = osprofiler ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/etc/nova/nova-policy-generator.conf0000664000175000017500000000010500000000000021373 0ustar00zuulzuul00000000000000[DEFAULT] output_file = etc/nova/policy.yaml.sample namespace = nova ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/etc/nova/release.sample0000664000175000017500000000011100000000000017120 0ustar00zuulzuul00000000000000[Nova] vendor = Fedora Project product = OpenStack Nova package = 1.fc18 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/etc/nova/rootwrap.conf0000664000175000017500000000170600000000000017034 0ustar00zuulzuul00000000000000# Configuration for nova-rootwrap # This file should be owned by (and only-writeable by) the root user [DEFAULT] # List of directories to load filter definitions from (separated by ','). # These directories MUST all be only writeable by root ! filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap # List of directories to search executables in, in case filters do not # explicitly specify a full path (separated by ',') # If not specified, defaults to system PATH environment variable. # These directories MUST all be only writeable by root ! 
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/sbin,/usr/local/bin # Enable logging to syslog # Default value is False use_syslog=False # Which syslog facility to use. # Valid values include auth, authpriv, syslog, local0, local1... # Default value is 'syslog' syslog_log_facility=syslog # Which messages to log. # INFO means log all usage # ERROR means only log unsuccessful attempts syslog_log_level=ERROR ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2584705 nova-21.2.4/etc/nova/rootwrap.d/0000775000175000017500000000000000000000000016403 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/etc/nova/rootwrap.d/compute.filters0000664000175000017500000000114400000000000021451 0ustar00zuulzuul00000000000000# nova-rootwrap command filters for compute nodes # This file should be owned by (and only-writeable by) the root user [Filters] # os_brick.privileged.default oslo.privsep context privsep-rootwrap-os_brick: RegExpFilter, privsep-helper, root, privsep-helper, --config-file, /etc/(?!\.\.).*, --privsep_context, os_brick.privileged.default, --privsep_sock_path, /tmp/.* # nova.privsep.sys_admin_pctxt oslo.privsep context privsep-rootwrap-sys_admin: RegExpFilter, privsep-helper, root, privsep-helper, --config-file, /etc/(?!\.\.).*, --privsep_context, nova.privsep.sys_admin_pctxt, --privsep_sock_path, /tmp/.* ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2624707 nova-21.2.4/gate/0000775000175000017500000000000000000000000013506 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/gate/README0000664000175000017500000000043600000000000014371 0ustar00zuulzuul00000000000000These are hooks to be used by the OpenStack infra test system. These scripts may be called by certain jobs at important times to do extra testing, setup, etc. They are really only relevant within the scope of the OpenStack infra system and are not expected to be useful to anyone else. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/gate/post_test_hook.sh0000775000175000017500000002330000000000000017107 0ustar00zuulzuul00000000000000#!/bin/bash -x MANAGE="/usr/local/bin/nova-manage" function archive_deleted_rows { # NOTE(danms): Run this a few times to make sure that we end # up with nothing more to archive if ! $MANAGE db archive_deleted_rows --verbose --before "$(date -d yesterday)" 2>&1 | grep 'Nothing was archived'; then echo "Archiving yesterday data should have done nothing" return 1 fi for i in `seq 30`; do if [[ $i -eq 1 ]]; then # This is just a test wrinkle to make sure we're covering the # non-all-cells (cell0) case, as we're not passing in the cell1 # config. $MANAGE db archive_deleted_rows --verbose --max_rows 50 --before "$(date -d tomorrow)" else $MANAGE db archive_deleted_rows --verbose --max_rows 1000 --before "$(date -d tomorrow)" --all-cells fi RET=$? if [[ $RET -gt 1 ]]; then echo Archiving failed with result $RET return $RET # When i = 1, we only archive cell0 (without --all-cells), so run at # least twice to ensure --all-cells are archived before considering # archiving complete. elif [[ $RET -eq 0 && $i -gt 1 ]]; then echo Archiving Complete break; fi done } function purge_db { $MANAGE db purge --all --verbose --all-cells RET=$? 
if [[ $RET -eq 0 ]]; then echo Purge successful else echo Purge failed with result $RET return $RET fi } BASE=${BASE:-/opt/stack} source ${BASE}/devstack/functions-common source ${BASE}/devstack/lib/nova # This needs to go before 'set -e' because otherwise the intermediate runs of # 'nova-manage db archive_deleted_rows' returning 1 (normal and expected) would # cause this script to exit and fail. archive_deleted_rows set -e # This needs to go after 'set -e' because otherwise a failure to purge the # database would not cause this script to exit and fail. purge_db # We need to get the admin credentials to run the OSC CLIs for Placement. set +x source $BASE/devstack/openrc admin set -x # Verify whether instances were archived from all cells. Admin credentials are # needed to list deleted instances across all projects. echo "Verifying that instances were archived from all cells" deleted_servers=$(openstack server list --deleted --all-projects -c ID -f value) # Fail if any deleted servers were found. if [[ -n "$deleted_servers" ]]; then echo "There were unarchived instances found after archiving; failing." exit 1 fi # TODO(mriedem): Consider checking for instances in ERROR state because # if there are any, we would expect them to retain allocations in Placement # and therefore we don't really need to check for leaked allocations. # Check for orphaned instance allocations in Placement which could mean # something failed during a test run and isn't getting cleaned up properly. echo "Looking for leaked resource provider allocations in Placement" LEAKED_ALLOCATIONS=0 for provider in $(openstack resource provider list -c uuid -f value); do echo "Looking for allocations for provider $provider" allocations=$(openstack resource provider show --allocations $provider \ -c allocations -f value) if [[ "$allocations" != "{}" ]]; then echo "Resource provider has allocations:" openstack resource provider show --allocations $provider LEAKED_ALLOCATIONS=1 fi done # Fail if there were any leaked allocations. if [[ $LEAKED_ALLOCATIONS -eq 1 ]]; then echo "There were leaked allocations; failing." exit 1 fi echo "Resource provider allocations were cleaned up properly." # Test "nova-manage placement heal_allocations" by creating a server, deleting # its allocations in placement, and then running heal_allocations and assert # the allocations were healed as expected. function get_binding_profile_value { # Returns the value of the key in the binding profile if exsits or return # empty. local port=${1} local key=${2} local print_value='import sys, json; print(json.load(sys.stdin).get("binding_profile", {}).get("'${key}'", ""))' openstack port show ${port} -f json -c binding_profile \ | /usr/bin/env python3 -c "${print_value}" } echo "Creating port with bandwidth request for heal_allocations testing" openstack network create net0 \ --provider-network-type vlan \ --provider-physical-network public \ --provider-segment 100 openstack subnet create subnet0 \ --network net0 \ --subnet-range 10.0.4.0/24 \ openstack network qos policy create qp0 openstack network qos rule create qp0 \ --type minimum-bandwidth \ --min-kbps 1000 \ --egress openstack network qos rule create qp0 \ --type minimum-bandwidth \ --min-kbps 1000 \ --ingress openstack port create port-normal-qos \ --network net0 \ --vnic-type normal \ --qos-policy qp0 # Let's make the binding:profile for this port contain some # (non-allocation-y) stuff and then later assert that this stuff is still # there after the heal. # Cf. 
https://review.opendev.org/#/c/637955/35/nova/cmd/manage.py@1896 openstack port set port-normal-qos --binding-profile my_key=my_value image_id=$(openstack image list -f value -c ID | awk 'NR==1{print $1}') flavor_id=$(openstack flavor list -f value -c ID | awk 'NR==1{print $1}') network_id=$(openstack network list --no-share -f value -c ID | awk 'NR==1{print $1}') echo "Creating server for heal_allocations testing" # microversion 2.72 introduced the support for bandwidth aware ports openstack --os-compute-api-version 2.72 \ server create --image ${image_id} --flavor ${flavor_id} \ --nic net-id=${network_id} --nic port-id=port-normal-qos \ --wait heal-allocations-test server_id=$(openstack server show heal-allocations-test -f value -c id) # Make sure there are allocations for the consumer. allocations=$(openstack resource provider allocation show ${server_id} \ -c resources -f value) if [[ "$allocations" == "" ]]; then echo "No allocations found for the server." exit 2 fi # Make sure that the binding:profile.allocation key is updated rp_uuid=$(get_binding_profile_value port-normal-qos "allocation") if [[ "$rp_uuid" == "" ]]; then echo "No allocation found for the bandwidth aware port." exit 2 fi # Make sure our extra key in the binding:profile is still there my_key=$(get_binding_profile_value port-normal-qos "my_key") if [[ "$my_key" == "" ]]; then echo "During port binding the binding:profile was overwritten." exit 2 fi echo "Deleting allocations in placement for the server" openstack resource provider allocation delete ${server_id} echo "Deleting allocation key from the binding:profile of the bandwidth aware port" openstack port unset --binding-profile allocation port-normal-qos # Make sure the allocations are gone. allocations=$(openstack resource provider allocation show ${server_id} \ -c resources -f value) if [[ "$allocations" != "" ]]; then echo "Server allocations were not deleted." exit 2 fi # Make sure that the binding:profile.allocation key is gone null_rp_uuid=$(get_binding_profile_value port-normal-qos "allocation") if [[ "$null_rp_uuid" != "" ]]; then echo "Binding profile not updated for the bandwidth aware port." exit 2 fi # Make sure our extra key in the binding:profile is still there my_key=$(get_binding_profile_value port-normal-qos "my_key") if [[ "$my_key" == "" ]]; then echo "During deletion of allocation key our extra key was also deleted from the binding:profile." exit 2 fi echo "Healing allocations" # First test with the --dry-run over all instances in all cells. set +e nova-manage placement heal_allocations --verbose --dry-run rc=$? set -e # Since we did not create allocations because of --dry-run the rc should be 4. if [[ ${rc} -ne 4 ]]; then echo "Expected return code 4 from heal_allocations with --dry-run" exit 2 fi # Now test with just the single instance and actually perform the heal. nova-manage placement heal_allocations --verbose --instance ${server_id} # Make sure there are allocations for the consumer. allocations=$(openstack resource provider allocation show ${server_id} \ -c resources -f value) if [[ "$allocations" == "" ]]; then echo "Failed to heal allocations." exit 2 fi # Make sure that the allocations contains bandwidth as well bandwidth_allocations=$(echo "$allocations" | grep NET_BW_EGR_KILOBIT_PER_SEC) if [[ "$bandwidth_allocations" == "" ]]; then echo "Failed to heal port allocations." 
exit 2 fi # Make sure that the binding:profile.allocation key healed back healed_rp_uuid=$(get_binding_profile_value port-normal-qos "allocation") if [[ "$rp_uuid" != "$healed_rp_uuid" ]]; then echo "The value of the allocation key of the bandwidth aware port does not match." echo "expected: $rp_uuid; actual: $healed_rp_uuid." exit 2 fi # Make sure our extra key in the binding:profile is still there my_key=$(get_binding_profile_value port-normal-qos "allocation") if [[ "$my_key" == "" ]]; then echo "During heal port allocation our extra key in the binding:profile was deleted." exit 2 fi echo "Verifying online_data_migrations idempotence" # We will re-use the server created earlier for this test. (A server needs to # be present during the run of online_data_migrations and archiving). # Run the online data migrations before archiving. $MANAGE db online_data_migrations # We need to archive the deleted marker instance used by the # fill_virtual_interface_list online data migration in order to trigger # creation of a new deleted marker instance. set +e archive_deleted_rows set -e # Verify whether online data migrations run after archiving will succeed. # See for more details: https://bugs.launchpad.net/nova/+bug/1824435 $MANAGE db online_data_migrations ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/gate/test_evacuate.sh0000775000175000017500000001226400000000000016706 0ustar00zuulzuul00000000000000#!/bin/bash -x BASE=${BASE:-/opt/stack} # Source stackrc to determine the configured VIRT_DRIVER source ${BASE}/new/devstack/stackrc # Source tempest to determine the build timeout configuration. source ${BASE}/new/devstack/lib/tempest set -e # We need to get the admin credentials to run CLIs. set +x source ${BASE}/new/devstack/openrc admin set -x if [[ ${VIRT_DRIVER} != libvirt ]]; then echo "Only the libvirt driver is supported by this script" exit 1 fi echo "Ensure we have at least two compute nodes" nodenames=$(openstack hypervisor list -f value -c 'Hypervisor Hostname') node_count=$(echo ${nodenames} | wc -w) if [[ ${node_count} -lt 2 ]]; then echo "Evacuate requires at least two nodes" exit 2 fi echo "Finding the subnode" subnode='' local_hostname=$(hostname -s) for nodename in ${nodenames}; do if [[ ${local_hostname} != ${nodename} ]]; then subnode=${nodename} break fi done # Sanity check that we found the subnode. 
if [[ -z ${subnode} ]]; then echo "Failed to find subnode from nodes: ${nodenames}" exit 3 fi image_id=$(openstack image list -f value -c ID | awk 'NR==1{print $1}') flavor_id=$(openstack flavor list -f value -c ID | awk 'NR==1{print $1}') network_id=$(openstack network list --no-share -f value -c ID | awk 'NR==1{print $1}') echo "Creating ephemeral test server on subnode" openstack server create --image ${image_id} --flavor ${flavor_id} \ --nic net-id=${network_id} --availability-zone nova:${subnode} --wait evacuate-test echo "Creating BFV test server on subnode" nova boot --flavor ${flavor_id} --poll \ --block-device id=${image_id},source=image,dest=volume,size=1,bootindex=0,shutdown=remove \ --nic net-id=${network_id} --availability-zone nova:${subnode} evacuate-bfv-test # Fence the subnode echo "Stopping n-cpu, q-agt and guest domains on subnode" $ANSIBLE subnodes --become -f 5 -i "$WORKSPACE/inventory" -m shell -a "systemctl stop devstack@n-cpu devstack@q-agt" $ANSIBLE subnodes --become -f 5 -i "$WORKSPACE/inventory" -m shell -a "for domain in \$(virsh list --all --name); do virsh destroy \$domain; done" echo "Forcing down the subnode so we can evacuate from it" openstack --os-compute-api-version 2.11 compute service set --down ${subnode} nova-compute echo "Stopping libvirt on the localhost before evacuating to trigger failure" sudo systemctl stop libvirtd # Now force the evacuation to *this* host; we have to force to bypass the # scheduler since we killed libvirtd which will trigger the libvirt compute # driver to auto-disable the nova-compute service and then the ComputeFilter # would filter out this host and we'd get NoValidHost. Normally forcing a host # during evacuate and bypassing the scheduler is a very bad idea, but we're # doing a negative test here. function evacuate_and_wait_for_error() { local server="$1" echo "Forcing evacuate of ${server} to local host" # TODO(mriedem): Use OSC when it supports evacuate. nova --os-compute-api-version "2.67" evacuate --force ${server} ${local_hostname} # Wait for the instance to go into ERROR state from the failed evacuate. count=0 status=$(openstack server show ${server} -f value -c status) while [ "${status}" != "ERROR" ] do sleep 1 count=$((count+1)) if [ ${count} -eq ${BUILD_TIMEOUT} ]; then echo "Timed out waiting for server ${server} to go to ERROR status" exit 4 fi status=$(openstack server show ${server} -f value -c status) done } evacuate_and_wait_for_error evacuate-test evacuate_and_wait_for_error evacuate-bfv-test echo "Now restart libvirt and perform a successful evacuation" sudo systemctl start libvirtd sleep 10 # Wait for the compute service to be enabled. count=0 status=$(openstack compute service list --host ${local_hostname} --service nova-compute -f value -c Status) while [ "${status}" != "enabled" ] do sleep 1 count=$((count+1)) if [ ${count} -eq 30 ]; then echo "Timed out waiting for local compute service to be enabled" exit 5 fi status=$(openstack compute service list --host ${local_hostname} --service nova-compute -f value -c Status) done function evacuate_and_wait_for_active() { local server="$1" nova evacuate ${server} # Wait for the instance to go into ACTIVE state from the evacuate. 
count=0 status=$(openstack server show ${server} -f value -c status) while [ "${status}" != "ACTIVE" ] do sleep 1 count=$((count+1)) if [ ${count} -eq ${BUILD_TIMEOUT} ]; then echo "Timed out waiting for server ${server} to go to ACTIVE status" exit 6 fi status=$(openstack server show ${server} -f value -c status) done } evacuate_and_wait_for_active evacuate-test evacuate_and_wait_for_active evacuate-bfv-test # Make sure the servers moved. for server in evacuate-test evacuate-bfv-test; do host=$(openstack server show ${server} -f value -c OS-EXT-SRV-ATTR:host) if [[ ${host} != ${local_hostname} ]]; then echo "Unexpected host ${host} for server ${server} after evacuate." exit 7 fi done # Cleanup test servers openstack server delete --wait evacuate-test openstack server delete --wait evacuate-bfv-test ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/lower-constraints.txt0000664000175000017500000000572000000000000017030 0ustar00zuulzuul00000000000000alembic==0.9.8 amqp==2.5.0 appdirs==1.4.3 asn1crypto==0.24.0 attrs==17.4.0 automaton==1.14.0 Babel==2.3.4 bandit==1.1.0 cachetools==2.0.1 castellan==0.16.0 cffi==1.11.5 cliff==2.11.0 cmd2==0.8.1 colorama==0.3.9 coverage==4.0 cryptography==2.7 cursive==0.2.1 dataclasses==0.7 ddt==1.0.1 debtcollector==1.19.0 decorator==3.4.0 deprecation==2.0 dogpile.cache==0.6.5 enum34==1.1.8 enum-compat==0.0.2 eventlet==0.20.0 extras==1.0.0 fasteners==0.14.1 fixtures==3.0.0 flake8==3.6.0 future==0.16.0 futurist==1.8.0 gabbi==1.35.0 gitdb2==2.0.3 GitPython==2.1.8 greenlet==0.4.10 hacking==3.0.1 idna==2.6 iso8601==0.1.11 Jinja2==2.10 jmespath==0.9.3 jsonpatch==1.21 jsonpath-rw==1.4.0 jsonpath-rw-ext==1.1.3 jsonpointer==2.0 jsonschema==2.6.0 keystoneauth1==3.16.0 keystonemiddleware==4.20.0 kombu==4.6.1 linecache2==1.0.0 lxml==3.4.1 Mako==1.0.7 MarkupSafe==1.0 mccabe==0.6.0 microversion-parse==0.2.1 mock==3.0.0 monotonic==1.4 msgpack==0.5.6 msgpack-python==0.5.6 munch==2.2.0 netaddr==0.7.18 netifaces==0.10.4 networkx==1.11 numpy==1.14.2 openstacksdk==0.35.0 os-brick==3.0.1 os-client-config==1.29.0 os-resource-classes==0.4.0 os-service-types==1.7.0 os-traits==2.2.0 os-vif==1.14.0 os-win==3.0.0 os-xenapi==0.3.3 osc-lib==1.10.0 oslo.cache==1.26.0 oslo.concurrency==3.29.0 oslo.config==6.1.0 oslo.context==2.22.0 oslo.db==4.44.0 oslo.i18n==3.15.3 oslo.log==3.36.0 oslo.messaging==10.3.0 oslo.middleware==3.31.0 oslo.policy==3.1.0 oslo.privsep==1.33.2 oslo.reports==1.18.0 oslo.rootwrap==5.8.0 oslo.serialization==2.21.1 oslo.service==1.40.1 oslo.upgradecheck==0.1.1 oslo.utils==4.1.0 oslo.versionedobjects==1.35.0 oslo.vmware==2.17.0 oslotest==3.8.0 osprofiler==1.4.0 ovs==2.10.0 ovsdbapp==0.15.0 packaging==17.1 paramiko==2.0.0 Paste==2.0.2 PasteDeploy==1.5.0 pbr==2.0.0 pluggy==0.6.0 ply==3.11 prettytable==0.7.1 psutil==3.2.2 psycopg2==2.7 py==1.5.2 pyasn1==0.4.2 pyasn1-modules==0.2.1 pycadf==2.7.0 pycparser==2.18 pyflakes==2.0.0 pycodestyle==2.4.0 pyinotify==0.9.6 pyroute2==0.5.4 PyJWT==1.7.0 PyMySQL==0.7.6 pyOpenSSL==17.5.0 pyparsing==2.2.0 pyperclip==1.6.0 pypowervm==1.1.15 pytest==3.4.2 python-barbicanclient==4.5.2 python-cinderclient==3.3.0 python-dateutil==2.5.3 python-editor==1.0.3 python-glanceclient==2.8.0 python-ironicclient==2.7.0 python-keystoneclient==3.15.0 python-mimeparse==1.6.0 python-neutronclient==6.7.0 python-subunit==1.4.0 pytz==2018.3 PyYAML==3.12 repoze.lru==0.7 requests==2.14.2 requests-mock==1.2.0 requestsexceptions==1.4.0 retrying==1.3.3 rfc3986==1.1.0 Routes==2.3.1 simplejson==3.13.2 
six==1.10.0 smmap2==2.0.3 sortedcontainers==2.1.0 SQLAlchemy==1.2.19 sqlalchemy-migrate==0.13.0 sqlparse==0.2.4 statsd==3.2.2 stestr==2.0.0 stevedore==1.20.0 suds-jurko==0.6 taskflow==2.16.0 Tempita==0.5.2 tenacity==4.9.0 testrepository==0.0.20 testresources==2.0.0 testscenarios==0.4 testtools==2.2.0 tooz==1.58.0 traceback2==1.4.0 unittest2==1.1.0 urllib3==1.22 vine==1.1.4 voluptuous==0.11.1 warlock==1.2.0 WebOb==1.8.2 websockify==0.9.0 wrapt==1.10.11 wsgi-intercept==1.7.0 zVMCloudConnector==1.3.0 ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2664707 nova-21.2.4/nova/0000775000175000017500000000000000000000000013531 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/__init__.py0000664000175000017500000000161600000000000015646 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ :mod:`nova` -- Cloud IaaS Platform =================================== .. automodule:: nova :platform: Unix :synopsis: Infrastructure-as-a-Service Cloud platform. """ ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2664707 nova-21.2.4/nova/accelerator/0000775000175000017500000000000000000000000016015 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/accelerator/__init__.py0000664000175000017500000000000000000000000020114 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/accelerator/cyborg.py0000664000175000017500000002454500000000000017666 0ustar00zuulzuul00000000000000# Copyright 2019 Intel # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from oslo_log import log as logging from keystoneauth1 import exceptions as ks_exc from nova import exception from nova.i18n import _ from nova import objects from nova.scheduler import utils as schedutils from nova import service_auth from nova import utils """ Note on object relationships: 1 device profile (DP) has D >= 1 request groups (just as a flavor has many request groups). Each DP request group corresponds to exactly 1 numbered request group (RG) in the request spec. 
Each numbered RG corresponds to exactly one resource provider (RP). A DP request group may request A >= 1 accelerators, and so result in the creation of A ARQs. Each ARQ corresponds to exactly 1 DP request group. A device profile is a dictionary: { "name": "mydpname", "uuid": , "groups": [ ] } A device profile group is a dictionary too: { "resources:CUSTOM_ACCELERATOR_FPGA": "2", "resources:CUSTOM_LOCAL_MEMORY": "1", "trait:CUSTOM_INTEL_PAC_ARRIA10": "required", "trait:CUSTOM_FUNCTION_NAME_FALCON_GZIP_1_1": "required", # 0 or more Cyborg properties "accel:bitstream_id": "FB021995_BF21_4463_936A_02D49D4DB5E5" } See cyborg/cyborg/objects/device_profile.py for more details. """ LOG = logging.getLogger(__name__) def get_client(context): return _CyborgClient(context) def get_device_profile_group_requester_id(dp_group_id): """Return the value to use in objects.RequestGroup.requester_id. The requester_id is used to match device profile groups from Cyborg to the request groups in request spec. :param dp_group_id: The index of the request group in the device profile. """ req_id = "device_profile_" + str(dp_group_id) return req_id def get_device_profile_request_groups(context, dp_name): cyclient = get_client(context) return cyclient.get_device_profile_groups(dp_name) class _CyborgClient(object): DEVICE_PROFILE_URL = "/device_profiles" ARQ_URL = "/accelerator_requests" def __init__(self, context): auth = service_auth.get_auth_plugin(context) self._client = utils.get_ksa_adapter('accelerator', ksa_auth=auth) def _call_cyborg(self, func, *args, **kwargs): resp = err_msg = None try: resp = func(*args, **kwargs) if not resp: msg = _('Invalid response from Cyborg: ') err_msg = msg + str(resp) except ks_exc.ClientException as exc: err_msg = _('Could not communicate with Cyborg.') LOG.exception('%s: %s', err_msg, six.text_type(exc)) return resp, err_msg def _get_device_profile_list(self, dp_name): query = {"name": dp_name} err_msg = None resp, err_msg = self._call_cyborg(self._client.get, self.DEVICE_PROFILE_URL, params=query) if err_msg: raise exception.DeviceProfileError(name=dp_name, msg=err_msg) return resp.json().get('device_profiles') def get_device_profile_groups(self, dp_name): """Get list of profile group objects from the device profile. Cyborg API returns: {"device_profiles": []} See module notes above for further details. :param dp_name: string: device profile name Expected to be valid, not None or ''. 
:returns: [objects.RequestGroup] :raises: DeviceProfileError """ dp_list = self._get_device_profile_list(dp_name) if not dp_list: msg = _('Expected 1 device profile but got nothing.') raise exception.DeviceProfileError(name=dp_name, msg=msg) if len(dp_list) != 1: err = _('Expected 1 device profile but got %s.') % len(dp_list) raise exception.DeviceProfileError(name=dp_name, msg=err) dp_groups = dp_list[0]['groups'] request_groups = [] for dp_group_id, dp_group in enumerate(dp_groups): req_id = get_device_profile_group_requester_id(dp_group_id) rg = objects.RequestGroup(requester_id=req_id) for key, val in dp_group.items(): match = schedutils.ResourceRequest.XS_KEYPAT.match(key) if not match: continue # could be 'accel:foo=bar', skip it prefix, _ignore, name = match.groups() if prefix == schedutils.ResourceRequest.XS_RES_PREFIX: rg.add_resource(rclass=name, amount=val) elif prefix == schedutils.ResourceRequest.XS_TRAIT_PREFIX: rg.add_trait(trait_name=name, trait_type=val) request_groups.append(rg) return request_groups def _create_arqs(self, dp_name): data = {"device_profile_name": dp_name} resp, err_msg = self._call_cyborg(self._client.post, self.ARQ_URL, json=data) if err_msg: raise exception.AcceleratorRequestOpFailed( op=_('create'), msg=err_msg) return resp.json().get('arqs') def create_arqs_and_match_resource_providers(self, dp_name, rg_rp_map): """Create ARQs, match them with request groups and thereby determine their corresponding RPs. :param dp_name: Device profile name :param rg_rp_map: Request group - Resource Provider map {requester_id: [resource_provider_uuid]} :returns: [arq], with each ARQ associated with an RP :raises: DeviceProfileError, AcceleratorRequestOpFailed """ LOG.info('Creating ARQs for device profile %s', dp_name) arqs = self._create_arqs(dp_name) if not arqs or len(arqs) == 0: msg = _('device profile name %s') % dp_name raise exception.AcceleratorRequestOpFailed(op=_('create'), msg=msg) for arq in arqs: dp_group_id = arq['device_profile_group_id'] arq['device_rp_uuid'] = None requester_id = ( get_device_profile_group_requester_id(dp_group_id)) arq['device_rp_uuid'] = rg_rp_map[requester_id][0] return arqs def bind_arqs(self, bindings): """Initiate Cyborg bindings. Handles RFC 6902-compliant JSON patching, sparing calling Nova code from those details. :param bindings: { "$arq_uuid": { "hostname": STRING "device_rp_uuid": UUID "instance_uuid": UUID }, ... } :returns: nothing :raises: AcceleratorRequestOpFailed """ LOG.info('Binding ARQs.') # Create a JSON patch in RFC 6902 format patch_list = {} for arq_uuid, binding in bindings.items(): patch = [{"path": "/" + field, "op": "add", "value": value } for field, value in binding.items()] patch_list[arq_uuid] = patch resp, err_msg = self._call_cyborg(self._client.patch, self.ARQ_URL, json=patch_list) if err_msg: msg = _(' Binding failed for ARQ UUIDs: ') err_msg = err_msg + msg + ','.join(bindings.keys()) raise exception.AcceleratorRequestOpFailed( op=_('bind'), msg=err_msg) def get_arqs_for_instance(self, instance_uuid, only_resolved=False): """Get ARQs for the instance. 
:param instance_uuid: Instance UUID :param only_resolved: flag to return only resolved ARQs :returns: List of ARQs for the instance: if only_resolved: only those ARQs which have completed binding else: all ARQs The format of the returned data structure is as below: [ {'uuid': $arq_uuid, 'device_profile_name': $dp_name, 'device_profile_group_id': $dp_request_group_index, 'state': 'Bound', 'device_rp_uuid': $resource_provider_uuid, 'hostname': $host_nodename, 'instance_uuid': $instance_uuid, 'attach_handle_info': { # PCI bdf 'bus': '0c', 'device': '0', 'domain': '0000', 'function': '0'}, 'attach_handle_type': 'PCI' # or 'TEST_PCI' for Cyborg fake driver } ] :raises: AcceleratorRequestOpFailed """ query = {"instance": instance_uuid} resp, err_msg = self._call_cyborg(self._client.get, self.ARQ_URL, params=query) if err_msg: err_msg = err_msg + _(' Instance %s') % instance_uuid raise exception.AcceleratorRequestOpFailed( op=_('get'), msg=err_msg) arqs = resp.json().get('arqs') if not arqs: err_msg = _('Cyborg returned no accelerator requests for ' 'instance %s') % instance_uuid raise exception.AcceleratorRequestOpFailed( op=_('get'), msg=err_msg) if only_resolved: arqs = [arq for arq in arqs if arq['state'] in ['Bound', 'BindFailed', 'Deleting']] return arqs def delete_arqs_for_instance(self, instance_uuid): """Delete ARQs for instance, after unbinding if needed. :param instance_uuid: Instance UUID :raises: AcceleratorRequestOpFailed """ # Unbind and delete the ARQs params = {"instance": instance_uuid} resp, err_msg = self._call_cyborg(self._client.delete, self.ARQ_URL, params=params) if err_msg: msg = err_msg + _(' Instance %s') % instance_uuid raise exception.AcceleratorRequestOpFailed( op=_('delete'), msg=msg) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2664707 nova-21.2.4/nova/api/0000775000175000017500000000000000000000000014302 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/__init__.py0000664000175000017500000000000000000000000016401 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/auth.py0000664000175000017500000001016300000000000015616 0ustar00zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Common Auth Middleware. 
""" from oslo_log import log as logging from oslo_log import versionutils from oslo_serialization import jsonutils import webob.dec import webob.exc from nova.api import wsgi import nova.conf from nova import context from nova.i18n import _ CONF = nova.conf.CONF LOG = logging.getLogger(__name__) def _load_pipeline(loader, pipeline): filters = [loader.get_filter(n) for n in pipeline[:-1]] app = loader.get_app(pipeline[-1]) filters.reverse() for filter in filters: app = filter(app) return app def pipeline_factory(loader, global_conf, **local_conf): """A paste pipeline replica that keys off of auth_strategy.""" versionutils.report_deprecated_feature( LOG, "The legacy V2 API code tree has been removed in Newton. " "Please remove legacy v2 API entry from api-paste.ini, and use " "V2.1 API or V2.1 API compat mode instead" ) def pipeline_factory_v21(loader, global_conf, **local_conf): """A paste pipeline replica that keys off of auth_strategy.""" auth_strategy = CONF.api.auth_strategy if auth_strategy == 'noauth2': versionutils.report_deprecated_feature( LOG, "'[api]auth_strategy=noauth2' is deprecated as of the 21.0.0 " "Ussuri release and will be removed in a future release. Please " "remove any 'noauth2' entries from api-paste.ini; only the " "'keystone' pipeline is supported." ) return _load_pipeline(loader, local_conf[auth_strategy].split()) class InjectContext(wsgi.Middleware): """Add a 'nova.context' to WSGI environ.""" def __init__(self, context, *args, **kwargs): self.context = context super(InjectContext, self).__init__(*args, **kwargs) @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, req): req.environ['nova.context'] = self.context return self.application class NovaKeystoneContext(wsgi.Middleware): """Make a request context from keystone headers.""" @staticmethod def _create_context(env, **kwargs): """Create a context from a request environ. This exists to make test stubbing easier. """ return context.RequestContext.from_environ(env, **kwargs) @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, req): # Build a context, including the auth_token... remote_address = req.remote_addr if CONF.api.use_forwarded_for: remote_address = req.headers.get('X-Forwarded-For', remote_address) service_catalog = None if req.headers.get('X_SERVICE_CATALOG') is not None: try: catalog_header = req.headers.get('X_SERVICE_CATALOG') service_catalog = jsonutils.loads(catalog_header) except ValueError: raise webob.exc.HTTPInternalServerError( _('Invalid service catalog json.')) # NOTE(jamielennox): This is a full auth plugin set by auth_token # middleware in newer versions. user_auth_plugin = req.environ.get('keystone.token_auth') ctx = self._create_context( req.environ, user_auth_plugin=user_auth_plugin, remote_address=remote_address, service_catalog=service_catalog) if ctx.user_id is None: LOG.debug("Neither X_USER_ID nor X_USER found in request") return webob.exc.HTTPUnauthorized() req.environ['nova.context'] = ctx return self.application ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/compute_req_id.py0000664000175000017500000000237600000000000017663 0ustar00zuulzuul00000000000000# Copyright (c) 2014 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Middleware that ensures x-compute-request-id Nova's notion of request-id tracking predates any common idea, so the original version of this header in OpenStack was x-compute-request-id. Eventually we got oslo, and all other projects implemented this with x-openstack-request-id. However, x-compute-request-id was always part of our contract. The following migrates us to use x-openstack-request-id as well, by using the common middleware. """ from oslo_middleware import request_id HTTP_RESP_HEADER_REQUEST_ID = 'x-compute-request-id' class ComputeReqIdMiddleware(request_id.RequestId): compat_headers = [HTTP_RESP_HEADER_REQUEST_ID] ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2704706 nova-21.2.4/nova/api/metadata/0000775000175000017500000000000000000000000016062 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/metadata/__init__.py0000664000175000017500000000145400000000000020177 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ :mod:`nova.api.metadata` -- Nova Metadata Server ================================================ .. automodule:: nova.api.metadata :platform: Unix :synopsis: Metadata Server for Nova """ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/metadata/base.py0000664000175000017500000006625000000000000017357 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Instance Metadata information.""" import os import posixpath from oslo_log import log as logging from oslo_serialization import base64 from oslo_serialization import jsonutils from oslo_utils import timeutils import six from nova.api.metadata import password from nova.api.metadata import vendordata_dynamic from nova.api.metadata import vendordata_json from nova import block_device import nova.conf from nova import context from nova import exception from nova.network import neutron from nova.network import security_group_api from nova import objects from nova.objects import virt_device_metadata as metadata_obj from nova import utils from nova.virt import netutils CONF = nova.conf.CONF VERSIONS = [ '1.0', '2007-01-19', '2007-03-01', '2007-08-29', '2007-10-10', '2007-12-15', '2008-02-01', '2008-09-01', '2009-04-04', ] # NOTE(mikal): think of these strings as version numbers. They traditionally # correlate with OpenStack release dates, with all the changes for a given # release bundled into a single version. Note that versions in the future are # hidden from the listing, but can still be requested explicitly, which is # required for testing purposes. We know this isn't great, but its inherited # from EC2, which this needs to be compatible with. # NOTE(jichen): please update doc/source/user/metadata.rst on the metadata # output when new version is created in order to make doc up-to-date. FOLSOM = '2012-08-10' GRIZZLY = '2013-04-04' HAVANA = '2013-10-17' LIBERTY = '2015-10-15' NEWTON_ONE = '2016-06-30' NEWTON_TWO = '2016-10-06' OCATA = '2017-02-22' ROCKY = '2018-08-27' OPENSTACK_VERSIONS = [ FOLSOM, GRIZZLY, HAVANA, LIBERTY, NEWTON_ONE, NEWTON_TWO, OCATA, ROCKY, ] VERSION = "version" CONTENT = "content" CONTENT_DIR = "content" MD_JSON_NAME = "meta_data.json" VD_JSON_NAME = "vendor_data.json" VD2_JSON_NAME = "vendor_data2.json" NW_JSON_NAME = "network_data.json" UD_NAME = "user_data" PASS_NAME = "password" MIME_TYPE_TEXT_PLAIN = "text/plain" MIME_TYPE_APPLICATION_JSON = "application/json" LOG = logging.getLogger(__name__) class InvalidMetadataVersion(Exception): pass class InvalidMetadataPath(Exception): pass class InstanceMetadata(object): """Instance metadata.""" def __init__(self, instance, address=None, content=None, extra_md=None, network_info=None, network_metadata=None, request_context=None): """Creation of this object should basically cover all time consuming collection. Methods after that should not cause time delays due to network operations or lengthy cpu operations. The user should then get a single instance and make multiple method calls on it. """ if not content: content = [] # NOTE(gibi): this is not a cell targeted context even if we are called # in a situation when the instance is in a different cell than the # metadata service itself. ctxt = context.get_admin_context() self.mappings = _format_instance_mapping(instance) # NOTE(danms): Sanitize the instance to limit the amount of stuff # inside that may not pickle well (i.e. context). We also touch # some of the things we'll lazy load later to make sure we keep their # values in what we cache. 
instance.ec2_ids instance.keypairs instance.device_metadata instance = objects.Instance.obj_from_primitive( instance.obj_to_primitive()) # The default value of mimeType is set to MIME_TYPE_TEXT_PLAIN self.set_mimetype(MIME_TYPE_TEXT_PLAIN) self.instance = instance self.extra_md = extra_md self.availability_zone = instance.get('availability_zone') self.security_groups = security_group_api.get_instance_security_groups( ctxt, instance) if instance.user_data is not None: self.userdata_raw = base64.decode_as_bytes(instance.user_data) else: self.userdata_raw = None self.address = address # expose instance metadata. self.launch_metadata = utils.instance_meta(instance) self.password = password.extract_password(instance) self.uuid = instance.uuid self.content = {} self.files = [] # get network info, and the rendered network template if network_info is None: network_info = instance.info_cache.network_info # expose network metadata if network_metadata is None: self.network_metadata = netutils.get_network_metadata(network_info) else: self.network_metadata = network_metadata self.ip_info = netutils.get_ec2_ip_info(network_info) self.network_config = None cfg = netutils.get_injected_network_template(network_info) if cfg: key = "%04i" % len(self.content) self.content[key] = cfg self.network_config = {"name": "network_config", 'content_path': "/%s/%s" % (CONTENT_DIR, key)} # 'content' is passed in from the configdrive code in # nova/virt/libvirt/driver.py. That's how we get the injected files # (personalities) in. AFAIK they're not stored in the db at all, # so are not available later (web service metadata time). for (path, contents) in content: key = "%04i" % len(self.content) self.files.append({'path': path, 'content_path': "/%s/%s" % (CONTENT_DIR, key)}) self.content[key] = contents self.route_configuration = None # NOTE(mikal): the decision to not pass extra_md here like we # do to the StaticJSON driver is deliberate. extra_md will # contain the admin password for the instance, and we shouldn't # pass that to external services. 
self.vendordata_providers = { 'StaticJSON': vendordata_json.JsonFileVendorData( instance=instance, address=address, extra_md=extra_md, network_info=network_info), 'DynamicJSON': vendordata_dynamic.DynamicVendorData( instance=instance, address=address, network_info=network_info, context=request_context) } def _route_configuration(self): if self.route_configuration: return self.route_configuration path_handlers = {UD_NAME: self._user_data, PASS_NAME: self._password, VD_JSON_NAME: self._vendor_data, VD2_JSON_NAME: self._vendor_data2, MD_JSON_NAME: self._metadata_as_json, NW_JSON_NAME: self._network_data, VERSION: self._handle_version, CONTENT: self._handle_content} self.route_configuration = RouteConfiguration(path_handlers) return self.route_configuration def set_mimetype(self, mime_type): self.md_mimetype = mime_type def get_mimetype(self): return self.md_mimetype def get_ec2_metadata(self, version): if version == "latest": version = VERSIONS[-1] if version not in VERSIONS: raise InvalidMetadataVersion(version) hostname = self._get_hostname() floating_ips = self.ip_info['floating_ips'] floating_ip = floating_ips and floating_ips[0] or '' fixed_ips = self.ip_info['fixed_ips'] fixed_ip = fixed_ips and fixed_ips[0] or '' fmt_sgroups = [x['name'] for x in self.security_groups] meta_data = { 'ami-id': self.instance.ec2_ids.ami_id, 'ami-launch-index': self.instance.launch_index, 'ami-manifest-path': 'FIXME', 'instance-id': self.instance.ec2_ids.instance_id, 'hostname': hostname, 'local-ipv4': fixed_ip or self.address, 'reservation-id': self.instance.reservation_id, 'security-groups': fmt_sgroups} # public keys are strangely rendered in ec2 metadata service # meta-data/public-keys/ returns '0=keyname' (with no trailing /) # and only if there is a public key given. 
# '0=keyname' means there is a normally rendered dict at # meta-data/public-keys/0 # # meta-data/public-keys/ : '0=%s' % keyname # meta-data/public-keys/0/ : 'openssh-key' # meta-data/public-keys/0/openssh-key : '%s' % publickey if self.instance.key_name: meta_data['public-keys'] = { '0': {'_name': "0=" + self.instance.key_name, 'openssh-key': self.instance.key_data}} if self._check_version('2007-01-19', version): meta_data['local-hostname'] = hostname meta_data['public-hostname'] = hostname meta_data['public-ipv4'] = floating_ip if self._check_version('2007-08-29', version): instance_type = self.instance.get_flavor() meta_data['instance-type'] = instance_type['name'] if self._check_version('2007-12-15', version): meta_data['block-device-mapping'] = self.mappings if self.instance.ec2_ids.kernel_id: meta_data['kernel-id'] = self.instance.ec2_ids.kernel_id if self.instance.ec2_ids.ramdisk_id: meta_data['ramdisk-id'] = self.instance.ec2_ids.ramdisk_id if self._check_version('2008-02-01', version): meta_data['placement'] = {'availability-zone': self.availability_zone} if self._check_version('2008-09-01', version): meta_data['instance-action'] = 'none' data = {'meta-data': meta_data} if self.userdata_raw is not None: data['user-data'] = self.userdata_raw return data def get_ec2_item(self, path_tokens): # get_ec2_metadata returns dict without top level version data = self.get_ec2_metadata(path_tokens[0]) return find_path_in_tree(data, path_tokens[1:]) def get_openstack_item(self, path_tokens): if path_tokens[0] == CONTENT_DIR: return self._handle_content(path_tokens) return self._route_configuration().handle_path(path_tokens) def _metadata_as_json(self, version, path): metadata = {'uuid': self.uuid} if self.launch_metadata: metadata['meta'] = self.launch_metadata if self.files: metadata['files'] = self.files if self.extra_md: metadata.update(self.extra_md) if self.network_config: metadata['network_config'] = self.network_config if self.instance.key_name: keypairs = self.instance.keypairs # NOTE(mriedem): It's possible for the keypair to be deleted # before it was migrated to the instance_extra table, in which # case lazy-loading instance.keypairs will handle the 404 and # just set an empty KeyPairList object on the instance. keypair = keypairs[0] if keypairs else None if keypair: metadata['public_keys'] = { keypair.name: keypair.public_key, } metadata['keys'] = [ {'name': keypair.name, 'type': keypair.type, 'data': keypair.public_key} ] else: LOG.debug("Unable to find keypair for instance with " "key name '%s'.", self.instance.key_name, instance=self.instance) metadata['hostname'] = self._get_hostname() metadata['name'] = self.instance.display_name metadata['launch_index'] = self.instance.launch_index metadata['availability_zone'] = self.availability_zone if self._check_os_version(GRIZZLY, version): metadata['random_seed'] = base64.encode_as_text(os.urandom(512)) if self._check_os_version(LIBERTY, version): metadata['project_id'] = self.instance.project_id if self._check_os_version(NEWTON_ONE, version): metadata['devices'] = self._get_device_metadata(version) self.set_mimetype(MIME_TYPE_APPLICATION_JSON) return jsonutils.dump_as_bytes(metadata) def _get_device_metadata(self, version): """Build a device metadata dict based on the metadata objects. This is done here in the metadata API as opposed to in the objects themselves because the metadata dict is part of the guest API and thus must be controlled. 
""" device_metadata_list = [] vif_vlans_supported = self._check_os_version(OCATA, version) vif_vfs_trusted_supported = self._check_os_version(ROCKY, version) if self.instance.device_metadata is not None: for device in self.instance.device_metadata.devices: device_metadata = {} bus = 'none' address = 'none' if 'bus' in device: # TODO(artom/mriedem) It would be nice if we had something # more generic, like a type identifier or something, built # into these types of objects, like a get_meta_type() # abstract method on the base DeviceBus class. if isinstance(device.bus, metadata_obj.PCIDeviceBus): bus = 'pci' elif isinstance(device.bus, metadata_obj.USBDeviceBus): bus = 'usb' elif isinstance(device.bus, metadata_obj.SCSIDeviceBus): bus = 'scsi' elif isinstance(device.bus, metadata_obj.IDEDeviceBus): bus = 'ide' elif isinstance(device.bus, metadata_obj.XenDeviceBus): bus = 'xen' else: LOG.debug('Metadata for device with unknown bus %s ' 'has not been included in the ' 'output', device.bus.__class__.__name__) continue if 'address' in device.bus: address = device.bus.address if isinstance(device, metadata_obj.NetworkInterfaceMetadata): vlan = device.vlan if 'vlan' in device else None if vif_vlans_supported and vlan is not None: device_metadata['vlan'] = vlan if vif_vfs_trusted_supported: vf_trusted = (device.vf_trusted if 'vf_trusted' in device else False) device_metadata['vf_trusted'] = vf_trusted device_metadata['type'] = 'nic' device_metadata['mac'] = device.mac # NOTE(artom) If a device has neither tags, vlan or # vf_trusted, don't expose it if not ('tags' in device or 'vlan' in device_metadata or 'vf_trusted' in device_metadata): continue elif isinstance(device, metadata_obj.DiskMetadata): device_metadata['type'] = 'disk' # serial and path are optional parameters if 'serial' in device: device_metadata['serial'] = device.serial if 'path' in device: device_metadata['path'] = device.path else: LOG.debug('Metadata for device of unknown type %s has not ' 'been included in the ' 'output', device.__class__.__name__) continue device_metadata['bus'] = bus device_metadata['address'] = address if 'tags' in device: device_metadata['tags'] = device.tags device_metadata_list.append(device_metadata) return device_metadata_list def _handle_content(self, path_tokens): if len(path_tokens) == 1: raise KeyError("no listing for %s" % "/".join(path_tokens)) if len(path_tokens) != 2: raise KeyError("Too many tokens for /%s" % CONTENT_DIR) return self.content[path_tokens[1]] def _handle_version(self, version, path): # request for /version, give a list of what is available ret = [MD_JSON_NAME] if self.userdata_raw is not None: ret.append(UD_NAME) if self._check_os_version(GRIZZLY, version): ret.append(PASS_NAME) if self._check_os_version(HAVANA, version): ret.append(VD_JSON_NAME) if self._check_os_version(LIBERTY, version): ret.append(NW_JSON_NAME) if self._check_os_version(NEWTON_TWO, version): ret.append(VD2_JSON_NAME) return ret def _user_data(self, version, path): if self.userdata_raw is None: raise KeyError(path) return self.userdata_raw def _network_data(self, version, path): if self.network_metadata is None: return jsonutils.dump_as_bytes({}) return jsonutils.dump_as_bytes(self.network_metadata) def _password(self, version, path): if self._check_os_version(GRIZZLY, version): return password.handle_password raise KeyError(path) def _vendor_data(self, version, path): if self._check_os_version(HAVANA, version): self.set_mimetype(MIME_TYPE_APPLICATION_JSON) if (CONF.api.vendordata_providers and 'StaticJSON' in 
CONF.api.vendordata_providers): return jsonutils.dump_as_bytes( self.vendordata_providers['StaticJSON'].get()) raise KeyError(path) def _vendor_data2(self, version, path): if self._check_os_version(NEWTON_TWO, version): self.set_mimetype(MIME_TYPE_APPLICATION_JSON) j = {} for provider in CONF.api.vendordata_providers: if provider == 'StaticJSON': j['static'] = self.vendordata_providers['StaticJSON'].get() else: values = self.vendordata_providers[provider].get() for key in list(values): if key in j: LOG.warning('Removing duplicate metadata key: %s', key, instance=self.instance) del values[key] j.update(values) return jsonutils.dump_as_bytes(j) raise KeyError(path) def _check_version(self, required, requested, versions=VERSIONS): return versions.index(requested) >= versions.index(required) def _check_os_version(self, required, requested): return self._check_version(required, requested, OPENSTACK_VERSIONS) def _get_hostname(self): # TODO(stephenfin): At some point in the future, we may wish to # retrieve this information from neutron. if CONF.api.dhcp_domain: return '.'.join([self.instance.hostname, CONF.api.dhcp_domain]) return self.instance.hostname def lookup(self, path): if path == "" or path[0] != "/": path = posixpath.normpath("/" + path) else: path = posixpath.normpath(path) # Set default mimeType. It will be modified only if there is a change self.set_mimetype(MIME_TYPE_TEXT_PLAIN) # fix up requests, prepending /ec2 to anything that does not match path_tokens = path.split('/')[1:] if path_tokens[0] not in ("ec2", "openstack"): if path_tokens[0] == "": # request for / path_tokens = ["ec2"] else: path_tokens = ["ec2"] + path_tokens path = "/" + "/".join(path_tokens) # all values of 'path' input starts with '/' and have no trailing / # specifically handle the top level request if len(path_tokens) == 1: if path_tokens[0] == "openstack": # NOTE(vish): don't show versions that are in the future today = timeutils.utcnow().strftime("%Y-%m-%d") versions = [v for v in OPENSTACK_VERSIONS if v <= today] if OPENSTACK_VERSIONS != versions: LOG.debug("future versions %s hidden in version list", [v for v in OPENSTACK_VERSIONS if v not in versions], instance=self.instance) versions += ["latest"] else: versions = VERSIONS + ["latest"] return versions try: if path_tokens[0] == "openstack": data = self.get_openstack_item(path_tokens[1:]) else: data = self.get_ec2_item(path_tokens[1:]) except (InvalidMetadataVersion, KeyError): raise InvalidMetadataPath(path) return data def metadata_for_config_drive(self): """Yields (path, value) tuples for metadata elements.""" # EC2 style metadata for version in VERSIONS + ["latest"]: if version in CONF.api.config_drive_skip_versions.split(' '): continue data = self.get_ec2_metadata(version) if 'user-data' in data: filepath = os.path.join('ec2', version, 'user-data') yield (filepath, data['user-data']) del data['user-data'] try: del data['public-keys']['0']['_name'] except KeyError: pass filepath = os.path.join('ec2', version, 'meta-data.json') yield (filepath, jsonutils.dump_as_bytes(data['meta-data'])) ALL_OPENSTACK_VERSIONS = OPENSTACK_VERSIONS + ["latest"] for version in ALL_OPENSTACK_VERSIONS: path = 'openstack/%s/%s' % (version, MD_JSON_NAME) yield (path, self.lookup(path)) path = 'openstack/%s/%s' % (version, UD_NAME) if self.userdata_raw is not None: yield (path, self.lookup(path)) if self._check_version(HAVANA, version, ALL_OPENSTACK_VERSIONS): path = 'openstack/%s/%s' % (version, VD_JSON_NAME) yield (path, self.lookup(path)) if self._check_version(LIBERTY, 
version, ALL_OPENSTACK_VERSIONS): path = 'openstack/%s/%s' % (version, NW_JSON_NAME) yield (path, self.lookup(path)) if self._check_version(NEWTON_TWO, version, ALL_OPENSTACK_VERSIONS): path = 'openstack/%s/%s' % (version, VD2_JSON_NAME) yield (path, self.lookup(path)) for (cid, content) in self.content.items(): yield ('%s/%s/%s' % ("openstack", CONTENT_DIR, cid), content) class RouteConfiguration(object): """Routes metadata paths to request handlers.""" def __init__(self, path_handler): self.path_handlers = path_handler def _version(self, version): if version == "latest": version = OPENSTACK_VERSIONS[-1] if version not in OPENSTACK_VERSIONS: raise InvalidMetadataVersion(version) return version def handle_path(self, path_tokens): version = self._version(path_tokens[0]) if len(path_tokens) == 1: path = VERSION else: path = '/'.join(path_tokens[1:]) path_handler = self.path_handlers[path] if path_handler is None: raise KeyError(path) return path_handler(version, path) def get_metadata_by_address(address): ctxt = context.get_admin_context() fixed_ip = neutron.API().get_fixed_ip_by_address(ctxt, address) LOG.info('Fixed IP %(ip)s translates to instance UUID %(uuid)s', {'ip': address, 'uuid': fixed_ip['instance_uuid']}) return get_metadata_by_instance_id(fixed_ip['instance_uuid'], address, ctxt) def get_metadata_by_instance_id(instance_id, address, ctxt=None): ctxt = ctxt or context.get_admin_context() attrs = ['ec2_ids', 'flavor', 'info_cache', 'metadata', 'system_metadata', 'security_groups', 'keypairs', 'device_metadata'] if CONF.api.local_metadata_per_cell: instance = objects.Instance.get_by_uuid(ctxt, instance_id, expected_attrs=attrs) return InstanceMetadata(instance, address) try: im = objects.InstanceMapping.get_by_instance_uuid(ctxt, instance_id) except exception.InstanceMappingNotFound: LOG.warning('Instance mapping for %(uuid)s not found; ' 'cell setup is incomplete', {'uuid': instance_id}) instance = objects.Instance.get_by_uuid(ctxt, instance_id, expected_attrs=attrs) return InstanceMetadata(instance, address) with context.target_cell(ctxt, im.cell_mapping) as cctxt: instance = objects.Instance.get_by_uuid(cctxt, instance_id, expected_attrs=attrs) return InstanceMetadata(instance, address) def _format_instance_mapping(instance): bdms = instance.get_bdms() return block_device.instance_block_mapping(instance, bdms) def ec2_md_print(data): if isinstance(data, dict): output = '' for key in sorted(data.keys()): if key == '_name': continue if isinstance(data[key], dict): if '_name' in data[key]: output += str(data[key]['_name']) else: output += key + '/' else: output += key output += '\n' return output[:-1] elif isinstance(data, list): return '\n'.join(data) elif isinstance(data, (bytes, six.text_type)): return data else: return str(data) def find_path_in_tree(data, path_tokens): # given a dict/list tree, and a path in that tree, return data found there. 
for i in range(0, len(path_tokens)): if isinstance(data, dict) or isinstance(data, list): if path_tokens[i] in data: data = data[path_tokens[i]] else: raise KeyError("/".join(path_tokens[0:i])) else: if i != len(path_tokens) - 1: raise KeyError("/".join(path_tokens[0:i])) data = data[path_tokens[i]] return data ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/metadata/handler.py0000664000175000017500000003412100000000000020052 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Metadata request handler.""" import hashlib import hmac import os from oslo_log import log as logging from oslo_utils import encodeutils from oslo_utils import secretutils as secutils from oslo_utils import strutils import six import webob.dec import webob.exc from nova.api.metadata import base from nova.api import wsgi from nova import cache_utils import nova.conf from nova import context as nova_context from nova import exception from nova.i18n import _ from nova.network import neutron as neutronapi CONF = nova.conf.CONF LOG = logging.getLogger(__name__) # 160 networks is large enough to satisfy most cases. # Yet while reaching 182 networks Neutron server will break as URL length # exceeds the maximum. Left this at 160 to allow additional parameters when # they're needed. 
MAX_QUERY_NETWORKS = 160 class MetadataRequestHandler(wsgi.Application): """Serve metadata.""" def __init__(self): self._cache = cache_utils.get_client( expiration_time=CONF.api.metadata_cache_expiration) if (CONF.neutron.service_metadata_proxy and not CONF.neutron.metadata_proxy_shared_secret): LOG.warning("metadata_proxy_shared_secret is not configured, " "the metadata information returned by the proxy " "cannot be trusted") def get_metadata_by_remote_address(self, address): if not address: raise exception.FixedIpNotFoundForAddress(address=address) cache_key = 'metadata-%s' % address data = self._cache.get(cache_key) if data: LOG.debug("Using cached metadata for %s", address) return data try: data = base.get_metadata_by_address(address) except exception.NotFound: return None if CONF.api.metadata_cache_expiration > 0: self._cache.set(cache_key, data) return data def get_metadata_by_instance_id(self, instance_id, address): cache_key = 'metadata-%s' % instance_id data = self._cache.get(cache_key) if data: LOG.debug("Using cached metadata for instance %s", instance_id) return data try: data = base.get_metadata_by_instance_id(instance_id, address) except exception.NotFound: return None if CONF.api.metadata_cache_expiration > 0: self._cache.set(cache_key, data) return data @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, req): if os.path.normpath(req.path_info) == "/": resp = base.ec2_md_print(base.VERSIONS + ["latest"]) req.response.body = encodeutils.to_utf8(resp) req.response.content_type = base.MIME_TYPE_TEXT_PLAIN return req.response # Convert webob.headers.EnvironHeaders to a dict and mask any sensitive # details from the logs. if CONF.debug: headers = {k: req.headers[k] for k in req.headers} LOG.debug('Metadata request headers: %s', strutils.mask_dict_password(headers)) if CONF.neutron.service_metadata_proxy: if req.headers.get('X-Metadata-Provider'): meta_data = self._handle_instance_id_request_from_lb(req) else: meta_data = self._handle_instance_id_request(req) else: if req.headers.get('X-Instance-ID'): LOG.warning( "X-Instance-ID present in request headers. The " "'service_metadata_proxy' option must be " "enabled to process this header.") meta_data = self._handle_remote_ip_request(req) if meta_data is None: raise webob.exc.HTTPNotFound() try: data = meta_data.lookup(req.path_info) except base.InvalidMetadataPath: raise webob.exc.HTTPNotFound() if callable(data): return data(req, meta_data) resp = base.ec2_md_print(data) req.response.body = encodeutils.to_utf8(resp) req.response.content_type = meta_data.get_mimetype() return req.response def _handle_remote_ip_request(self, req): remote_address = req.remote_addr if CONF.api.use_forwarded_for: remote_address = req.headers.get('X-Forwarded-For', remote_address) try: meta_data = self.get_metadata_by_remote_address(remote_address) except Exception: LOG.exception('Failed to get metadata for IP %s', remote_address) msg = _('An unknown error has occurred. 
' 'Please try your request again.') raise webob.exc.HTTPInternalServerError( explanation=six.text_type(msg)) if meta_data is None: LOG.error('Failed to get metadata for IP %s: no metadata', remote_address) return meta_data def _handle_instance_id_request(self, req): instance_id = req.headers.get('X-Instance-ID') tenant_id = req.headers.get('X-Tenant-ID') signature = req.headers.get('X-Instance-ID-Signature') remote_address = req.headers.get('X-Forwarded-For') # Ensure that only one header was passed if instance_id is None: msg = _('X-Instance-ID header is missing from request.') elif signature is None: msg = _('X-Instance-ID-Signature header is missing from request.') elif tenant_id is None: msg = _('X-Tenant-ID header is missing from request.') elif not isinstance(instance_id, six.string_types): msg = _('Multiple X-Instance-ID headers found within request.') elif not isinstance(tenant_id, six.string_types): msg = _('Multiple X-Tenant-ID headers found within request.') else: msg = None if msg: raise webob.exc.HTTPBadRequest(explanation=msg) self._validate_shared_secret(instance_id, signature, remote_address) return self._get_meta_by_instance_id(instance_id, tenant_id, remote_address) def _get_instance_id_from_lb(self, provider_id, instance_address): # We use admin context, admin=True to lookup the # inter-Edge network port context = nova_context.get_admin_context() neutron = neutronapi.get_client(context, admin=True) # Tenant, instance ids are found in the following method: # X-Metadata-Provider contains id of the metadata provider, and since # overlapping networks cannot be connected to the same metadata # provider, the combo of tenant's instance IP and the metadata # provider has to be unique. # # The networks which are connected to the metadata provider are # retrieved in the 1st call to neutron.list_subnets() # In the 2nd call we read the ports which belong to any of the # networks retrieved above, and have the X-Forwarded-For IP address. # This combination has to be unique as explained above, and we can # read the instance_id, tenant_id from that port entry. # Retrieve networks which are connected to metadata provider md_subnets = neutron.list_subnets( context, advanced_service_providers=[provider_id], fields=['network_id']) if not md_subnets or not md_subnets.get('subnets'): msg = _('Could not find any subnets for provider %s') % provider_id LOG.error(msg) raise webob.exc.HTTPBadRequest(explanation=msg) md_networks = [subnet['network_id'] for subnet in md_subnets['subnets']] try: # Retrieve the instance data from the instance's port ports = [] while md_networks: ports.extend(neutron.list_ports( context, fixed_ips='ip_address=' + instance_address, network_id=md_networks[:MAX_QUERY_NETWORKS], fields=['device_id', 'tenant_id'])['ports']) md_networks = md_networks[MAX_QUERY_NETWORKS:] except Exception as e: LOG.error('Failed to get instance id for metadata ' 'request, provider %(provider)s ' 'networks %(networks)s ' 'requester %(requester)s. Error: %(error)s', {'provider': provider_id, 'networks': md_networks, 'requester': instance_address, 'error': e}) msg = _('An unknown error has occurred. ' 'Please try your request again.') raise webob.exc.HTTPBadRequest(explanation=msg) if len(ports) != 1: msg = _('Expected a single port matching provider %(pr)s ' 'and IP %(ip)s. 
Found %(count)d.') % { 'pr': provider_id, 'ip': instance_address, 'count': len(ports)} LOG.error(msg) raise webob.exc.HTTPBadRequest(explanation=msg) instance_data = ports[0] instance_id = instance_data['device_id'] tenant_id = instance_data['tenant_id'] # instance_data is unicode-encoded, while cache_utils doesn't like # that. Therefore we convert to str if isinstance(instance_id, six.text_type): instance_id = instance_id.encode('utf-8') return instance_id, tenant_id def _handle_instance_id_request_from_lb(self, req): remote_address = req.headers.get('X-Forwarded-For') if remote_address is None: msg = _('X-Forwarded-For is missing from request.') raise webob.exc.HTTPBadRequest(explanation=msg) provider_id = req.headers.get('X-Metadata-Provider') if provider_id is None: msg = _('X-Metadata-Provider is missing from request.') raise webob.exc.HTTPBadRequest(explanation=msg) instance_address = remote_address.split(',')[0] # If authentication token is set, authenticate if CONF.neutron.metadata_proxy_shared_secret: signature = req.headers.get('X-Metadata-Provider-Signature') self._validate_shared_secret(provider_id, signature, instance_address) cache_key = 'provider-%s-%s' % (provider_id, instance_address) data = self._cache.get(cache_key) if data: LOG.debug("Using cached metadata for %s for %s", provider_id, instance_address) instance_id, tenant_id = data else: instance_id, tenant_id = self._get_instance_id_from_lb( provider_id, instance_address) if CONF.api.metadata_cache_expiration > 0: self._cache.set(cache_key, (instance_id, tenant_id)) LOG.debug('Instance %s with address %s matches provider %s', instance_id, remote_address, provider_id) return self._get_meta_by_instance_id(instance_id, tenant_id, instance_address) def _validate_shared_secret(self, requestor_id, signature, requestor_address): expected_signature = hmac.new( encodeutils.to_utf8(CONF.neutron.metadata_proxy_shared_secret), encodeutils.to_utf8(requestor_id), hashlib.sha256).hexdigest() if (not signature or not secutils.constant_time_compare(expected_signature, signature)): if requestor_id: LOG.warning('X-Instance-ID-Signature: %(signature)s does ' 'not match the expected value: ' '%(expected_signature)s for id: ' '%(requestor_id)s. Request From: ' '%(requestor_address)s', {'signature': signature, 'expected_signature': expected_signature, 'requestor_id': requestor_id, 'requestor_address': requestor_address}) msg = _('Invalid proxy request signature.') raise webob.exc.HTTPForbidden(explanation=msg) def _get_meta_by_instance_id(self, instance_id, tenant_id, remote_address): try: meta_data = self.get_metadata_by_instance_id(instance_id, remote_address) except Exception: LOG.exception('Failed to get metadata for instance id: %s', instance_id) msg = _('An unknown error has occurred. ' 'Please try your request again.') raise webob.exc.HTTPInternalServerError( explanation=six.text_type(msg)) if meta_data is None: LOG.error('Failed to get metadata for instance id: %s', instance_id) elif meta_data.instance.project_id != tenant_id: LOG.warning("Tenant_id %(tenant_id)s does not match tenant_id " "of instance %(instance_id)s.", {'tenant_id': tenant_id, 'instance_id': instance_id}) # causes a 404 to be raised meta_data = None return meta_data ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/metadata/password.py0000664000175000017500000000565600000000000020312 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from six.moves import range from webob import exc import nova.conf from nova import context from nova import exception from nova.i18n import _ from nova import objects from nova import utils CONF = nova.conf.CONF CHUNKS = 4 CHUNK_LENGTH = 255 MAX_SIZE = CHUNKS * CHUNK_LENGTH def extract_password(instance): result = '' sys_meta = utils.instance_sys_meta(instance) for key in sorted(sys_meta.keys()): if key.startswith('password_'): result += sys_meta[key] return result or None def convert_password(context, password): """Stores password as system_metadata items. Password is stored with the keys 'password_0' -> 'password_3'. """ password = password or '' if six.PY3 and isinstance(password, bytes): password = password.decode('utf-8') meta = {} for i in range(CHUNKS): meta['password_%d' % i] = password[:CHUNK_LENGTH] password = password[CHUNK_LENGTH:] return meta def handle_password(req, meta_data): ctxt = context.get_admin_context() if req.method == 'GET': return meta_data.password elif req.method == 'POST': # NOTE(vish): The conflict will only happen once the metadata cache # updates, but it isn't a huge issue if it can be set for # a short window. if meta_data.password: raise exc.HTTPConflict() if (req.content_length > MAX_SIZE or len(req.body) > MAX_SIZE): msg = _("Request is too large.") raise exc.HTTPBadRequest(explanation=msg) if CONF.api.local_metadata_per_cell: instance = objects.Instance.get_by_uuid(ctxt, meta_data.uuid) else: im = objects.InstanceMapping.get_by_instance_uuid( ctxt, meta_data.uuid) with context.target_cell(ctxt, im.cell_mapping) as cctxt: try: instance = objects.Instance.get_by_uuid( cctxt, meta_data.uuid) except exception.InstanceNotFound as e: raise exc.HTTPBadRequest(explanation=e.format_message()) instance.system_metadata.update(convert_password(ctxt, req.body)) instance.save() else: msg = _("GET and POST only are supported.") raise exc.HTTPBadRequest(explanation=msg) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/metadata/vendordata.py0000664000175000017500000000215000000000000020561 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
class VendorDataDriver(object): """The base VendorData Drivers should inherit from.""" def __init__(self, *args, **kwargs): """Init method should do all expensive operations.""" self._data = {} def get(self): """Return a dictionary of primitives to be rendered in metadata :return: A dictionary of primitives. """ return self._data ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/metadata/vendordata_dynamic.py0000664000175000017500000001252700000000000022276 0ustar00zuulzuul00000000000000# Copyright 2016 Rackspace Australia # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Render vendordata as stored fetched from REST microservices.""" import sys from keystoneauth1 import exceptions as ks_exceptions from keystoneauth1 import loading as ks_loading from oslo_log import log as logging from oslo_serialization import jsonutils import six from nova.api.metadata import vendordata import nova.conf CONF = nova.conf.CONF LOG = logging.getLogger(__name__) _SESSION = None _ADMIN_AUTH = None def _load_ks_session(conf): """Load session. This is either an authenticated session or a requests session, depending on what's configured. """ global _ADMIN_AUTH global _SESSION if not _ADMIN_AUTH: _ADMIN_AUTH = ks_loading.load_auth_from_conf_options( conf, nova.conf.vendordata.vendordata_group.name) if not _ADMIN_AUTH: LOG.warning('Passing insecure dynamic vendordata requests ' 'because of missing or incorrect service account ' 'configuration.') if not _SESSION: _SESSION = ks_loading.load_session_from_conf_options( conf, nova.conf.vendordata.vendordata_group.name, auth=_ADMIN_AUTH) return _SESSION class DynamicVendorData(vendordata.VendorDataDriver): def __init__(self, context=None, instance=None, address=None, network_info=None): # NOTE(mikal): address and network_info are unused, but can't be # removed / renamed as this interface is shared with the static # JSON plugin. self.context = context self.instance = instance # We only create the session if we make a request. 
self.session = None def _do_request(self, service_name, url): if self.session is None: self.session = _load_ks_session(CONF) try: body = {'project-id': self.instance.project_id, 'instance-id': self.instance.uuid, 'image-id': self.instance.image_ref, 'user-data': self.instance.user_data, 'hostname': self.instance.hostname, 'metadata': self.instance.metadata, 'boot-roles': self.instance.system_metadata.get( 'boot_roles', '')} headers = {'Content-Type': 'application/json', 'Accept': 'application/json', 'User-Agent': 'openstack-nova-vendordata'} # SSL verification verify = url.startswith('https://') if verify and CONF.api.vendordata_dynamic_ssl_certfile: verify = CONF.api.vendordata_dynamic_ssl_certfile timeout = (CONF.api.vendordata_dynamic_connect_timeout, CONF.api.vendordata_dynamic_read_timeout) res = self.session.request(url, 'POST', data=jsonutils.dumps(body), verify=verify, headers=headers, timeout=timeout) if res and res.text: # TODO(mikal): Use the Cache-Control response header to do some # sensible form of caching here. return jsonutils.loads(res.text) return {} except (TypeError, ValueError, ks_exceptions.connection.ConnectionError, ks_exceptions.http.HttpError) as e: LOG.warning('Error from dynamic vendordata service ' '%(service_name)s at %(url)s: %(error)s', {'service_name': service_name, 'url': url, 'error': e}, instance=self.instance) if CONF.api.vendordata_dynamic_failure_fatal: six.reraise(type(e), e, sys.exc_info()[2]) return {} def get(self): j = {} for target in CONF.api.vendordata_dynamic_targets: # NOTE(mikal): a target is composed of the following: # name@url # where name is the name to use in the metadata handed to # instances, and url is the URL to fetch it from if target.find('@') == -1: LOG.warning('Vendordata target %(target)s lacks a name. ' 'Skipping', {'target': target}, instance=self.instance) continue tokens = target.split('@') name = tokens[0] url = '@'.join(tokens[1:]) if name in j: LOG.warning('Vendordata already contains an entry named ' '%(target)s. Skipping', {'target': target}, instance=self.instance) continue j[name] = self._do_request(name, url) return j ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/metadata/vendordata_json.py0000664000175000017500000000363100000000000021617 0ustar00zuulzuul00000000000000# Copyright 2013 Canonical Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Render Vendordata as stored in configured file.""" import errno from oslo_log import log as logging from oslo_serialization import jsonutils from nova.api.metadata import vendordata import nova.conf CONF = nova.conf.CONF LOG = logging.getLogger(__name__) class JsonFileVendorData(vendordata.VendorDataDriver): def __init__(self, *args, **kwargs): super(JsonFileVendorData, self).__init__(*args, **kwargs) data = {} fpath = CONF.api.vendordata_jsonfile_path logprefix = "vendordata_jsonfile_path[%s]:" % fpath if fpath: try: with open(fpath, "rb") as fp: data = jsonutils.load(fp) except IOError as e: if e.errno == errno.ENOENT: LOG.warning("%(logprefix)s file does not exist", {'logprefix': logprefix}) else: LOG.warning("%(logprefix)s unexpected IOError when " "reading", {'logprefix': logprefix}) raise except ValueError: LOG.warning("%(logprefix)s failed to load json", {'logprefix': logprefix}) raise self._data = data def get(self): return self._data ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/metadata/wsgi.py0000664000175000017500000000141100000000000017402 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """WSGI application entry-point for Nova Metadata API, installed by pbr.""" from nova.api.openstack import wsgi_app NAME = "metadata" def init_application(): return wsgi_app.init_application(NAME) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2704706 nova-21.2.4/nova/api/openstack/0000775000175000017500000000000000000000000016271 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/__init__.py0000664000175000017500000002126400000000000020407 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ WSGI middleware for OpenStack API controllers. 
""" import nova.monkey_patch # noqa from oslo_log import log as logging import routes import webob.dec import webob.exc from nova.api.openstack import wsgi from nova.api import wsgi as base_wsgi import nova.conf from nova.i18n import translate LOG = logging.getLogger(__name__) CONF = nova.conf.CONF def walk_class_hierarchy(clazz, encountered=None): """Walk class hierarchy, yielding most derived classes first.""" if not encountered: encountered = [] for subclass in clazz.__subclasses__(): if subclass not in encountered: encountered.append(subclass) # drill down to leaves first for subsubclass in walk_class_hierarchy(subclass, encountered): yield subsubclass yield subclass class FaultWrapper(base_wsgi.Middleware): """Calls down the middleware stack, making exceptions into faults.""" _status_to_type = {} @staticmethod def status_to_type(status): if not FaultWrapper._status_to_type: for clazz in walk_class_hierarchy(webob.exc.HTTPError): FaultWrapper._status_to_type[clazz.code] = clazz return FaultWrapper._status_to_type.get( status, webob.exc.HTTPInternalServerError)() def _error(self, inner, req): LOG.exception("Caught error: %s", inner) safe = getattr(inner, 'safe', False) headers = getattr(inner, 'headers', None) status = getattr(inner, 'code', 500) if status is None: status = 500 msg_dict = dict(url=req.url, status=status) LOG.info("%(url)s returned with HTTP %(status)d", msg_dict) outer = self.status_to_type(status) if headers: outer.headers = headers # NOTE(johannes): We leave the explanation empty here on # purpose. It could possibly have sensitive information # that should not be returned back to the user. See # bugs 868360 and 874472 # NOTE(eglynn): However, it would be over-conservative and # inconsistent with the EC2 API to hide every exception, # including those that are safe to expose, see bug 1021373 if safe: user_locale = req.best_match_language() inner_msg = translate(inner.message, user_locale) outer.explanation = '%s: %s' % (inner.__class__.__name__, inner_msg) return wsgi.Fault(outer) @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, req): try: return req.get_response(self.application) except Exception as ex: return self._error(ex, req) class LegacyV2CompatibleWrapper(base_wsgi.Middleware): def _filter_request_headers(self, req): """For keeping same behavior with v2 API, ignores microversions HTTP headers X-OpenStack-Nova-API-Version and OpenStack-API-Version in the request. """ if wsgi.API_VERSION_REQUEST_HEADER in req.headers: del req.headers[wsgi.API_VERSION_REQUEST_HEADER] if wsgi.LEGACY_API_VERSION_REQUEST_HEADER in req.headers: del req.headers[wsgi.LEGACY_API_VERSION_REQUEST_HEADER] return req def _filter_response_headers(self, response): """For keeping same behavior with v2 API, filter out microversions HTTP header and microversions field in header 'Vary'. 
""" if wsgi.API_VERSION_REQUEST_HEADER in response.headers: del response.headers[wsgi.API_VERSION_REQUEST_HEADER] if wsgi.LEGACY_API_VERSION_REQUEST_HEADER in response.headers: del response.headers[wsgi.LEGACY_API_VERSION_REQUEST_HEADER] if 'Vary' in response.headers: vary_headers = response.headers['Vary'].split(',') filtered_vary = [] for vary in vary_headers: vary = vary.strip() if (vary == wsgi.API_VERSION_REQUEST_HEADER or vary == wsgi.LEGACY_API_VERSION_REQUEST_HEADER): continue filtered_vary.append(vary) if filtered_vary: response.headers['Vary'] = ','.join(filtered_vary) else: del response.headers['Vary'] return response @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, req): req.set_legacy_v2() req = self._filter_request_headers(req) response = req.get_response(self.application) return self._filter_response_headers(response) class APIMapper(routes.Mapper): def routematch(self, url=None, environ=None): if url == "": result = self._match("", environ) return result[0], result[1] return routes.Mapper.routematch(self, url, environ) def connect(self, *args, **kargs): # NOTE(vish): Default the format part of a route to only accept json # and xml so it doesn't eat all characters after a '.' # in the url. kargs.setdefault('requirements', {}) if not kargs['requirements'].get('format'): kargs['requirements']['format'] = 'json|xml' return routes.Mapper.connect(self, *args, **kargs) class ProjectMapper(APIMapper): def _get_project_id_token(self): # NOTE(sdague): project_id parameter is only valid if its hex # or hex + dashes (note, integers are a subset of this). This # is required to hand our overlaping routes issues. return '{project_id:[0-9a-f-]+}' def resource(self, member_name, collection_name, **kwargs): project_id_token = self._get_project_id_token() if 'parent_resource' not in kwargs: kwargs['path_prefix'] = '%s/' % project_id_token else: parent_resource = kwargs['parent_resource'] p_collection = parent_resource['collection_name'] p_member = parent_resource['member_name'] kwargs['path_prefix'] = '%s/%s/:%s_id' % ( project_id_token, p_collection, p_member) routes.Mapper.resource( self, member_name, collection_name, **kwargs) # while we are in transition mode, create additional routes # for the resource that do not include project_id. 
if 'parent_resource' not in kwargs: del kwargs['path_prefix'] else: parent_resource = kwargs['parent_resource'] p_collection = parent_resource['collection_name'] p_member = parent_resource['member_name'] kwargs['path_prefix'] = '%s/:%s_id' % (p_collection, p_member) routes.Mapper.resource(self, member_name, collection_name, **kwargs) def create_route(self, path, method, controller, action): project_id_token = self._get_project_id_token() # while we transition away from project IDs in the API URIs, create # additional routes that include the project_id self.connect('/%s%s' % (project_id_token, path), conditions=dict(method=[method]), controller=controller, action=action) self.connect(path, conditions=dict(method=[method]), controller=controller, action=action) class PlainMapper(APIMapper): def resource(self, member_name, collection_name, **kwargs): if 'parent_resource' in kwargs: parent_resource = kwargs['parent_resource'] p_collection = parent_resource['collection_name'] p_member = parent_resource['member_name'] kwargs['path_prefix'] = '%s/:%s_id' % (p_collection, p_member) routes.Mapper.resource(self, member_name, collection_name, **kwargs) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/api_version_request.py0000664000175000017500000004554600000000000022747 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import re from nova import exception from nova.i18n import _ # Define the minimum and maximum version of the API across all of the # REST API. The format of the version is: # X.Y where: # # - X will only be changed if a significant backwards incompatible API # change is made which affects the API as whole. That is, something # that is only very very rarely incremented. # # - Y when you make any change to the API. Note that this includes # semantic changes which may not affect the input or output formats or # even originate in the API code layer. We are not distinguishing # between backwards compatible and backwards incompatible changes in # the versioning system. It must be made clear in the documentation as # to what is a backwards compatible change and what is a backwards # incompatible one. # # You must update the API version history string below with a one or # two line description as well as update rest_api_version_history.rst REST_API_VERSION_HISTORY = """REST API Version History: * 2.1 - Initial version. Equivalent to v2.0 code * 2.2 - Adds (keypair) type parameter for os-keypairs plugin Fixes success status code for create/delete a keypair method * 2.3 - Exposes additional os-extended-server-attributes Exposes delete_on_termination for os-extended-volumes * 2.4 - Exposes reserved field in os-fixed-ips. * 2.5 - Allow server search option ip6 for non-admin * 2.6 - Consolidate the APIs for getting remote consoles * 2.7 - Check flavor type before add tenant access. 
* 2.8 - Add new protocol for VM console (mks) * 2.9 - Exposes lock information in server details. * 2.10 - Allow admins to query, create and delete keypairs owned by any user. * 2.11 - Exposes forced_down attribute for os-services * 2.12 - Exposes VIF net_id in os-virtual-interfaces * 2.13 - Add project id and user id information for os-server-groups API * 2.14 - Remove onSharedStorage from evacuate request body and remove adminPass from the response body * 2.15 - Add soft-affinity and soft-anti-affinity policies * 2.16 - Exposes host_status for servers/detail and servers/{server_id} * 2.17 - Add trigger_crash_dump to server actions * 2.18 - Makes project_id optional in v2.1 * 2.19 - Allow user to set and get the server description * 2.20 - Add attach and detach volume operations for instances in shelved and shelved_offloaded state * 2.21 - Make os-instance-actions read deleted instances * 2.22 - Add API to force live migration to complete * 2.23 - Add index/show API for server migrations. Also add migration_type for /os-migrations and add ref link for it when the migration is an in progress live migration. * 2.24 - Add API to cancel a running live migration * 2.25 - Make block_migration support 'auto' and remove disk_over_commit for os-migrateLive. * 2.26 - Adds support of server tags * 2.27 - Adds support for new-style microversion headers while keeping support for the original style. * 2.28 - Changes compute_node.cpu_info from string to object * 2.29 - Add a force flag in evacuate request body and change the behaviour for the host flag by calling the scheduler. * 2.30 - Add a force flag in live-migrate request body and change the behaviour for the host flag by calling the scheduler. * 2.31 - Fix os-console-auth-tokens to work for all console types. * 2.32 - Add tag to networks and block_device_mapping_v2 in server boot request body. * 2.33 - Add pagination support for hypervisors. * 2.34 - Checks before live-migration are made in asynchronous way. os-Migratelive Action does not throw badRequest in case of pre-checks failure. Verification result is available over instance-actions. * 2.35 - Adds keypairs pagination support. * 2.36 - Deprecates all the API which proxy to another service and fping API. * 2.37 - Adds support for auto-allocating networking, otherwise known as "Get me a Network". Also enforces server.networks.uuid to be in UUID format. * 2.38 - Add a condition to return HTTPBadRequest if invalid status is provided for listing servers. * 2.39 - Deprecates image-metadata proxy API * 2.40 - Adds simple tenant usage pagination support. * 2.41 - Return uuid attribute for aggregates. * 2.42 - In the context of device tagging at instance boot time, re-introduce the tag attribute that, due to bugs, was lost starting with version 2.33 for block devices and starting with version 2.37 for network interfaces. * 2.43 - Deprecate os-hosts API * 2.44 - The servers action addFixedIp, removeFixedIp, addFloatingIp, removeFloatingIp and os-virtual-interfaces APIs are deprecated. * 2.45 - The createImage and createBackup APIs no longer return a Location header in the response for the snapshot image, they now return a json dict in the response body with an image_id key and uuid value. * 2.46 - Return ``X-OpenStack-Request-ID`` header on requests. * 2.47 - When displaying server details, display the flavor as a dict rather than a link. If the user is prevented from retrieving the flavor extra-specs by policy, simply omit the field from the output. * 2.48 - Standardize VM diagnostics info. 
* 2.49 - Support tagged attachment of network interfaces and block devices. * 2.50 - Exposes ``server_groups`` and ``server_group_members`` keys in GET & PUT ``os-quota-class-sets`` APIs response. Also filter out Network related quotas from ``os-quota-class-sets`` API * 2.51 - Adds new event name to external-events (volume-extended). Also, non-admins can see instance action event details except for the traceback field. * 2.52 - Adds support for applying tags when creating a server. * 2.53 - Service and compute node (hypervisor) database ids are hidden. The os-services and os-hypervisors APIs now return a uuid in the id field, and takes a uuid in requests. PUT and GET requests and responses are also changed. * 2.54 - Enable reset key pair while rebuilding instance. * 2.55 - Added flavor.description to GET/POST/PUT flavors APIs. * 2.56 - Add a host parameter in migrate request body in order to enable users to specify a target host in cold migration. The target host is checked by the scheduler. * 2.57 - Deprecated personality files from POST /servers and the rebuild server action APIs. Added the ability to pass new user_data to the rebuild server action API. Personality / file injection related limits and quota resources are also removed. * 2.58 - Add pagination support and changes-since filter for os-instance-actions API. * 2.59 - Add pagination support and changes-since filter for os-migrations API. And the os-migrations API now returns both the id and the uuid in response. * 2.60 - Add support for attaching a single volume to multiple instances. * 2.61 - Exposes flavor extra_specs in the flavor representation. Flavor extra_specs will be included in Response body of GET, POST, PUT /flavors APIs. * 2.62 - Add ``host`` and ``hostId`` fields to instance action detail API responses. * 2.63 - Add support for applying trusted certificates when creating or rebuilding a server. * 2.64 - Add support for the "max_server_per_host" policy rule for ``anti-affinity`` server group policy, the ``policies`` and ``metadata`` fields are removed and the ``policy`` (required) and ``rules`` (optional) fields are added in response body of GET, POST /os-server-groups APIs and GET /os-server-groups/{group_id} API. * 2.65 - Add support for abort live migrations in ``queued`` and ``preparing`` status. * 2.66 - Add ``changes-before`` to support users to specify the ``updated_at`` time to filter nova resources, the resources include the servers API, os-instance-action API and os-migrations API. * 2.67 - Adds the optional ``volume_type`` field to the ``block_device_mapping_v2`` parameter when creating a server. * 2.68 - Remove support for forced live migration and evacuate server actions. * 2.69 - Add support for returning minimal constructs for ``GET /servers``, ``GET /servers/detail``, ``GET /servers/{server_id}`` and ``GET /os-services`` when there is a transient unavailability condition in the deployment like an infrastructure failure. * 2.70 - Exposes virtual device tags in the response of the ``os-volume_attachments`` and ``os-interface`` APIs. * 2.71 - Adds the ``server_groups`` field to ``GET /servers/{id}``, ``PUT /servers/{server_id}`` and ``POST /servers/{server_id}/action`` (rebuild) responses. * 2.72 - Add support for neutron ports with resource request during server create. Server move operations are not yet supported for servers with such ports. 
* 2.73 - Adds support for specifying a reason when locking the server and exposes this via the response from ``GET /servers/detail``, ``GET /servers/{server_id}``, ``PUT servers/{server_id}`` and ``POST /servers/{server_id}/action`` where the action is rebuild. It also supports ``locked`` as a filter/sort parameter for ``GET /servers/detail`` and ``GET /servers``. * 2.74 - Add support for specifying ``host`` and/or ``hypervisor_hostname`` in request body to ``POST /servers``. Allow users to specify which host/node they want their servers to land on and still be validated by the scheduler. * 2.75 - Multiple API cleanup listed below: - 400 for unknown param for query param and for request body. - Making server representation always consistent among GET, PUT and Rebuild serevr APIs response. - Change the default return value of swap field from the empty string to 0 (integer) in flavor APIs. - Return ``servers`` field always in the response of GET hypervisors API even there are no servers on hypervisor. * 2.76 - Adds ``power-update`` event to ``os-server-external-events`` API. The changes to the power state of an instance caused by this event can be viewed through ``GET /servers/{server_id}/os-instance-actions`` and ``GET /servers/{server_id}/os-instance-actions/{request_id}``. * 2.77 - Add support for specifying ``availability_zone`` to unshelve of a shelved offload server. * 2.78 - Adds new API ``GET /servers/{server_id}/topology`` which shows NUMA topology of a given server. * 2.79 - Adds support for specifying ``delete_on_termination`` field in the request body to ``POST /servers/{server_id}/os-volume_attachments`` and exposes this via the response from ``POST /servers/{server_id}/os-volume_attachments``, ``GET /servers/{server_id}/os-volume_attachments`` and ``GET /servers/{server_id}/os-volume_attachments/{volume_id}``. * 2.80 - Adds support for optional query parameters ``user_id`` and ``project_id`` to the ``GET /os-migrations`` API and exposes ``user_id`` and ``project_id`` via the response from ``GET /os-migrations``, ``GET /servers/{server_id}/migrations``, and ``GET /servers/{server_id}/migrations/{migration_id}``. * 2.81 - Adds support for image cache management by aggregate by adding ``POST /os-aggregates/{aggregate_id}/images``. * 2.82 - Adds ``accelerator-request-bound`` event to ``os-server-external-events`` API. This event is sent by Cyborg to indicate completion of ARQ binding. The ARQs can be obtained from Cyborg with ``GET /v2/accelerator_requests?instance={uuid}`` * 2.83 - Allow more filter parameters for ``GET /servers/detail`` and ``GET /servers`` for non-admin. * 2.84 - Adds ``details`` field to instance action events. * 2.85 - Add support for ``PUT /servers/{server_id}/os-volume_attachments/{volume_id}`` which supports specifying the ``delete_on_termination`` field in the request body to change the attached volume's flag. * 2.86 - Add support for validation of known extra specs to the ``POST /flavors/{flavor_id}/os-extra_specs`` and ``PUT /flavors/{flavor_id}/os-extra_specs/{id}`` APIs. * 2.87 - Adds support for rescuing boot from volume instances when the compute host reports the COMPUTE_BFV_RESCUE capability trait. """ # The minimum and maximum versions of the API supported # The default api version request is defined to be the # minimum version of the API supported. # Note(cyeoh): This only applies for the v2.1 API once microversions # support is fully merged. It does not affect the V2 API. 
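# --- Editor's illustrative sketch, not part of the original module ---
# A hedged example of how the version objects defined further down in this
# module are compared; it assumes the module is importable under its usual
# path. The _sketch_ name is hypothetical.
def _sketch_version_checks():
    from nova.api.openstack import api_version_request as avr

    requested = avr.APIVersionRequest('2.60')
    # Ordering is plain (major, minor) tuple comparison.
    assert avr.min_api_version() <= requested <= avr.max_api_version()
    # matches() treats a null APIVersionRequest() as "no limit on that side".
    assert requested.matches(avr.APIVersionRequest('2.36'),
                             avr.APIVersionRequest())
# The module-level minimum/maximum version constants described above follow.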
_MIN_API_VERSION = '2.1' _MAX_API_VERSION = '2.87' DEFAULT_API_VERSION = _MIN_API_VERSION # Almost all proxy APIs which are related to network, images and baremetal # were deprecated from 2.36. MAX_PROXY_API_SUPPORT_VERSION = '2.35' MIN_WITHOUT_PROXY_API_SUPPORT_VERSION = '2.36' # Starting from microversion 2.39 also image-metadata proxy API is deprecated. MAX_IMAGE_META_PROXY_API_VERSION = '2.38' MIN_WITHOUT_IMAGE_META_PROXY_API_VERSION = '2.39' # NOTE(cyeoh): min and max versions declared as functions so we can # mock them for unittests. Do not use the constants directly anywhere # else. def min_api_version(): return APIVersionRequest(_MIN_API_VERSION) def max_api_version(): return APIVersionRequest(_MAX_API_VERSION) def is_supported(req, min_version=_MIN_API_VERSION, max_version=_MAX_API_VERSION): """Check if API request version satisfies version restrictions. :param req: request object :param min_version: minimal version of API needed for correct request processing :param max_version: maximum version of API needed for correct request processing :returns: True if request satisfies minimal and maximum API version requirements. False in other case. """ return (APIVersionRequest(max_version) >= req.api_version_request >= APIVersionRequest(min_version)) class APIVersionRequest(object): """This class represents an API Version Request with convenience methods for manipulation and comparison of version numbers that we need to do to implement microversions. """ def __init__(self, version_string=None): """Create an API version request object. :param version_string: String representation of APIVersionRequest. Correct format is 'X.Y', where 'X' and 'Y' are int values. None value should be used to create Null APIVersionRequest, which is equal to 0.0 """ self.ver_major = 0 self.ver_minor = 0 if version_string is not None: match = re.match(r"^([1-9]\d*)\.([1-9]\d*|0)$", version_string) if match: self.ver_major = int(match.group(1)) self.ver_minor = int(match.group(2)) else: raise exception.InvalidAPIVersionString(version=version_string) def __str__(self): """Debug/Logging representation of object.""" return ("API Version Request Major: %s, Minor: %s" % (self.ver_major, self.ver_minor)) def is_null(self): return self.ver_major == 0 and self.ver_minor == 0 def _format_type_error(self, other): return TypeError(_("'%(other)s' should be an instance of '%(cls)s'") % {"other": other, "cls": self.__class__}) def __lt__(self, other): if not isinstance(other, APIVersionRequest): raise self._format_type_error(other) return ((self.ver_major, self.ver_minor) < (other.ver_major, other.ver_minor)) def __eq__(self, other): if not isinstance(other, APIVersionRequest): raise self._format_type_error(other) return ((self.ver_major, self.ver_minor) == (other.ver_major, other.ver_minor)) def __gt__(self, other): if not isinstance(other, APIVersionRequest): raise self._format_type_error(other) return ((self.ver_major, self.ver_minor) > (other.ver_major, other.ver_minor)) def __le__(self, other): return self < other or self == other def __ne__(self, other): return not self.__eq__(other) def __ge__(self, other): return self > other or self == other def matches(self, min_version, max_version): """Returns whether the version object represents a version greater than or equal to the minimum version and less than or equal to the maximum version. @param min_version: Minimum acceptable version. @param max_version: Maximum acceptable version. @returns: boolean If min_version is null then there is no minimum limit. 
If max_version is null then there is no maximum limit. If self is null then raise ValueError """ if self.is_null(): raise ValueError if max_version.is_null() and min_version.is_null(): return True elif max_version.is_null(): return min_version <= self elif min_version.is_null(): return self <= max_version else: return min_version <= self <= max_version def get_string(self): """Converts object to string representation which if used to create an APIVersionRequest object results in the same version request. """ if self.is_null(): raise ValueError return "%s.%s" % (self.ver_major, self.ver_minor) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/auth.py0000664000175000017500000000622000000000000017604 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_middleware import request_id import webob.dec import webob.exc from nova.api.openstack import wsgi from nova.api import wsgi as base_wsgi import nova.conf from nova import context CONF = nova.conf.CONF class NoAuthMiddlewareBase(base_wsgi.Middleware): """Return a fake token if one isn't specified.""" def base_call(self, req, project_id_in_path, always_admin=True): if 'X-Auth-Token' not in req.headers: user_id = req.headers.get('X-Auth-User', 'admin') project_id = req.headers.get('X-Auth-Project-Id', 'admin') if project_id_in_path: os_url = '/'.join([req.url.rstrip('/'), project_id]) else: os_url = req.url.rstrip('/') res = webob.Response() # NOTE(vish): This is expecting and returning Auth(1.1), whereas # keystone uses 2.0 auth. We should probably allow # 2.0 auth here as well. res.headers['X-Auth-Token'] = '%s:%s' % (user_id, project_id) res.headers['X-Server-Management-Url'] = os_url res.content_type = 'text/plain' res.status = '204' return res token = req.headers['X-Auth-Token'] user_id, _sep, project_id = token.partition(':') project_id = project_id or user_id remote_address = getattr(req, 'remote_address', '127.0.0.1') if CONF.api.use_forwarded_for: remote_address = req.headers.get('X-Forwarded-For', remote_address) is_admin = always_admin or (user_id == 'admin') ctx = context.RequestContext( user_id, project_id, is_admin=is_admin, remote_address=remote_address, request_id=req.environ.get(request_id.ENV_REQUEST_ID)) req.environ['nova.context'] = ctx return self.application class NoAuthMiddleware(NoAuthMiddlewareBase): """Return a fake token if one isn't specified. noauth2 provides admin privs if 'admin' is provided as the user id. """ @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, req): return self.base_call(req, True, always_admin=False) class NoAuthMiddlewareV2_18(NoAuthMiddlewareBase): """Return a fake token if one isn't specified. This provides a version of the middleware which does not add project_id into server management urls. 
""" @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, req): return self.base_call(req, False, always_admin=False) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/common.py0000664000175000017500000005341000000000000020136 0ustar00zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import itertools import re from oslo_log import log as logging from oslo_utils import strutils import six import six.moves.urllib.parse as urlparse import webob from webob import exc from nova.api.openstack import api_version_request from nova.compute import task_states from nova.compute import vm_states import nova.conf from nova import context as nova_context from nova import exception from nova.i18n import _ from nova.network import constants from nova import objects from nova.objects import service from nova import quota from nova import utils CONF = nova.conf.CONF LOG = logging.getLogger(__name__) QUOTAS = quota.QUOTAS POWER_ON = 'POWER_ON' POWER_OFF = 'POWER_OFF' _STATE_MAP = { vm_states.ACTIVE: { 'default': 'ACTIVE', task_states.REBOOTING: 'REBOOT', task_states.REBOOT_PENDING: 'REBOOT', task_states.REBOOT_STARTED: 'REBOOT', task_states.REBOOTING_HARD: 'HARD_REBOOT', task_states.REBOOT_PENDING_HARD: 'HARD_REBOOT', task_states.REBOOT_STARTED_HARD: 'HARD_REBOOT', task_states.UPDATING_PASSWORD: 'PASSWORD', task_states.REBUILDING: 'REBUILD', task_states.REBUILD_BLOCK_DEVICE_MAPPING: 'REBUILD', task_states.REBUILD_SPAWNING: 'REBUILD', task_states.MIGRATING: 'MIGRATING', task_states.RESIZE_PREP: 'RESIZE', task_states.RESIZE_MIGRATING: 'RESIZE', task_states.RESIZE_MIGRATED: 'RESIZE', task_states.RESIZE_FINISH: 'RESIZE', }, vm_states.BUILDING: { 'default': 'BUILD', }, vm_states.STOPPED: { 'default': 'SHUTOFF', task_states.RESIZE_PREP: 'RESIZE', task_states.RESIZE_MIGRATING: 'RESIZE', task_states.RESIZE_MIGRATED: 'RESIZE', task_states.RESIZE_FINISH: 'RESIZE', task_states.REBUILDING: 'REBUILD', task_states.REBUILD_BLOCK_DEVICE_MAPPING: 'REBUILD', task_states.REBUILD_SPAWNING: 'REBUILD', }, vm_states.RESIZED: { 'default': 'VERIFY_RESIZE', # Note(maoy): the OS API spec 1.1 doesn't have CONFIRMING_RESIZE # state so we comment that out for future reference only. 
# task_states.RESIZE_CONFIRMING: 'CONFIRMING_RESIZE', task_states.RESIZE_REVERTING: 'REVERT_RESIZE', }, vm_states.PAUSED: { 'default': 'PAUSED', task_states.MIGRATING: 'MIGRATING', }, vm_states.SUSPENDED: { 'default': 'SUSPENDED', }, vm_states.RESCUED: { 'default': 'RESCUE', }, vm_states.ERROR: { 'default': 'ERROR', task_states.REBUILDING: 'REBUILD', task_states.REBUILD_BLOCK_DEVICE_MAPPING: 'REBUILD', task_states.REBUILD_SPAWNING: 'REBUILD', }, vm_states.DELETED: { 'default': 'DELETED', }, vm_states.SOFT_DELETED: { 'default': 'SOFT_DELETED', }, vm_states.SHELVED: { 'default': 'SHELVED', }, vm_states.SHELVED_OFFLOADED: { 'default': 'SHELVED_OFFLOADED', }, } def status_from_state(vm_state, task_state='default'): """Given vm_state and task_state, return a status string.""" task_map = _STATE_MAP.get(vm_state, dict(default='UNKNOWN')) status = task_map.get(task_state, task_map['default']) if status == "UNKNOWN": LOG.error("status is UNKNOWN from vm_state=%(vm_state)s " "task_state=%(task_state)s. Bad upgrade or db " "corrupted?", {'vm_state': vm_state, 'task_state': task_state}) return status def task_and_vm_state_from_status(statuses): """Map the server's multiple status strings to list of vm states and list of task states. """ vm_states = set() task_states = set() lower_statuses = [status.lower() for status in statuses] for state, task_map in _STATE_MAP.items(): for task_state, mapped_state in task_map.items(): status_string = mapped_state if status_string.lower() in lower_statuses: vm_states.add(state) task_states.add(task_state) # Add sort to avoid different order on set in Python 3 return sorted(vm_states), sorted(task_states) def get_sort_params(input_params, default_key='created_at', default_dir='desc'): """Retrieves sort keys/directions parameters. Processes the parameters to create a list of sort keys and sort directions that correspond to the 'sort_key' and 'sort_dir' parameter values. These sorting parameters can be specified multiple times in order to generate the list of sort keys and directions. The input parameters are not modified. :param input_params: webob.multidict of request parameters (from nova.wsgi.Request.params) :param default_key: default sort key value, added to the list if no 'sort_key' parameters are supplied :param default_dir: default sort dir value, added to the list if no 'sort_dir' parameters are supplied :returns: list of sort keys, list of sort dirs """ params = input_params.copy() sort_keys = [] sort_dirs = [] while 'sort_key' in params: sort_keys.append(params.pop('sort_key').strip()) while 'sort_dir' in params: sort_dirs.append(params.pop('sort_dir').strip()) if len(sort_keys) == 0 and default_key: sort_keys.append(default_key) if len(sort_dirs) == 0 and default_dir: sort_dirs.append(default_dir) return sort_keys, sort_dirs def get_pagination_params(request): """Return marker, limit tuple from request. :param request: `wsgi.Request` possibly containing 'marker' and 'limit' GET variables. 'marker' is the id of the last element the client has seen, and 'limit' is the maximum number of items to return. If 'limit' is not specified, 0, or > max_limit, we default to max_limit. Negative values for either marker or limit will cause exc.HTTPBadRequest() exceptions to be raised. 
""" params = {} if 'limit' in request.GET: params['limit'] = _get_int_param(request, 'limit') if 'page_size' in request.GET: params['page_size'] = _get_int_param(request, 'page_size') if 'marker' in request.GET: params['marker'] = _get_marker_param(request) if 'offset' in request.GET: params['offset'] = _get_int_param(request, 'offset') return params def _get_int_param(request, param): """Extract integer param from request or fail.""" try: int_param = utils.validate_integer(request.GET[param], param, min_value=0) except exception.InvalidInput as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) return int_param def _get_marker_param(request): """Extract marker id from request or fail.""" return request.GET['marker'] def limited(items, request): """Return a slice of items according to requested offset and limit. :param items: A sliceable entity :param request: ``wsgi.Request`` possibly containing 'offset' and 'limit' GET variables. 'offset' is where to start in the list, and 'limit' is the maximum number of items to return. If 'limit' is not specified, 0, or > max_limit, we default to max_limit. Negative values for either offset or limit will cause exc.HTTPBadRequest() exceptions to be raised. """ params = get_pagination_params(request) offset = params.get('offset', 0) limit = CONF.api.max_limit limit = min(limit, params.get('limit') or limit) return items[offset:(offset + limit)] def get_limit_and_marker(request): """Get limited parameter from request.""" params = get_pagination_params(request) limit = CONF.api.max_limit limit = min(limit, params.get('limit', limit)) marker = params.get('marker', None) return limit, marker def get_id_from_href(href): """Return the id or uuid portion of a url. Given: 'http://www.foo.com/bar/123?q=4' Returns: '123' Given: 'http://www.foo.com/bar/abc123?q=4' Returns: 'abc123' """ return urlparse.urlsplit("%s" % href).path.split('/')[-1] def remove_trailing_version_from_href(href): """Removes the api version from the href. 
Given: 'http://www.nova.com/compute/v1.1' Returns: 'http://www.nova.com/compute' Given: 'http://www.nova.com/v1.1' Returns: 'http://www.nova.com' """ parsed_url = urlparse.urlsplit(href) url_parts = parsed_url.path.rsplit('/', 1) # NOTE: this should match vX.X or vX expression = re.compile(r'^v([0-9]+|[0-9]+\.[0-9]+)(/.*|$)') if not expression.match(url_parts.pop()): LOG.debug('href %s does not contain version', href) raise ValueError(_('href %s does not contain version') % href) new_path = url_join(*url_parts) parsed_url = list(parsed_url) parsed_url[2] = new_path return urlparse.urlunsplit(parsed_url) def check_img_metadata_properties_quota(context, metadata): if not metadata: return try: QUOTAS.limit_check(context, metadata_items=len(metadata)) except exception.OverQuota: expl = _("Image metadata limit exceeded") raise webob.exc.HTTPForbidden(explanation=expl) def get_networks_for_instance_from_nw_info(nw_info): networks = collections.OrderedDict() for vif in nw_info: ips = vif.fixed_ips() floaters = vif.floating_ips() label = vif['network']['label'] if label not in networks: networks[label] = {'ips': [], 'floating_ips': []} for ip in itertools.chain(ips, floaters): ip['mac_address'] = vif['address'] networks[label]['ips'].extend(ips) networks[label]['floating_ips'].extend(floaters) return networks def get_networks_for_instance(context, instance): """Returns a prepared nw_info list for passing into the view builders We end up with a data structure like:: {'public': {'ips': [{'address': '10.0.0.1', 'version': 4, 'mac_address': 'aa:aa:aa:aa:aa:aa'}, {'address': '2001::1', 'version': 6, 'mac_address': 'aa:aa:aa:aa:aa:aa'}], 'floating_ips': [{'address': '172.16.0.1', 'version': 4, 'mac_address': 'aa:aa:aa:aa:aa:aa'}, {'address': '172.16.2.1', 'version': 4, 'mac_address': 'aa:aa:aa:aa:aa:aa'}]}, ...} """ nw_info = instance.get_network_info() return get_networks_for_instance_from_nw_info(nw_info) def raise_http_conflict_for_instance_invalid_state(exc, action, server_id): """Raises a webob.exc.HTTPConflict instance containing a message appropriate to return via the API based on the original InstanceInvalidState exception. """ attr = exc.kwargs.get('attr') state = exc.kwargs.get('state') if attr is not None and state is not None: msg = _("Cannot '%(action)s' instance %(server_id)s while it is in " "%(attr)s %(state)s") % {'action': action, 'attr': attr, 'state': state, 'server_id': server_id} else: # At least give some meaningful message msg = _("Instance %(server_id)s is in an invalid state for " "'%(action)s'") % {'action': action, 'server_id': server_id} raise webob.exc.HTTPConflict(explanation=msg) def url_join(*parts): """Convenience method for joining parts of a URL Any leading and trailing '/' characters are removed, and the parts joined together with '/' as a separator. If last element of 'parts' is an empty string, the returned URL will have a trailing slash. 
""" parts = parts or [""] clean_parts = [part.strip("/") for part in parts if part] if not parts[-1]: # Empty last element should add a trailing slash clean_parts.append("") return "/".join(clean_parts) class ViewBuilder(object): """Model API responses as dictionaries.""" def _get_project_id(self, request): """Get project id from request url if present or empty string otherwise """ project_id = request.environ["nova.context"].project_id if project_id and project_id in request.url: return project_id return '' def _get_links(self, request, identifier, collection_name): return [{ "rel": "self", "href": self._get_href_link(request, identifier, collection_name), }, { "rel": "bookmark", "href": self._get_bookmark_link(request, identifier, collection_name), }] def _get_next_link(self, request, identifier, collection_name): """Return href string with proper limit and marker params.""" params = collections.OrderedDict(sorted(request.params.items())) params["marker"] = identifier prefix = self._update_compute_link_prefix(request.application_url) url = url_join(prefix, self._get_project_id(request), collection_name) return "%s?%s" % (url, urlparse.urlencode(params)) def _get_href_link(self, request, identifier, collection_name): """Return an href string pointing to this object.""" prefix = self._update_compute_link_prefix(request.application_url) return url_join(prefix, self._get_project_id(request), collection_name, str(identifier)) def _get_bookmark_link(self, request, identifier, collection_name): """Create a URL that refers to a specific resource.""" base_url = remove_trailing_version_from_href(request.application_url) base_url = self._update_compute_link_prefix(base_url) return url_join(base_url, self._get_project_id(request), collection_name, str(identifier)) def _get_collection_links(self, request, items, collection_name, id_key="uuid"): """Retrieve 'next' link, if applicable. This is included if: 1) 'limit' param is specified and equals the number of items. 2) 'limit' param is specified but it exceeds CONF.api.max_limit, in this case the number of items is CONF.api.max_limit. 3) 'limit' param is NOT specified but the number of items is CONF.api.max_limit. 
""" links = [] max_items = min( int(request.params.get("limit", CONF.api.max_limit)), CONF.api.max_limit) if max_items and max_items == len(items): last_item = items[-1] if id_key in last_item: last_item_id = last_item[id_key] elif 'id' in last_item: last_item_id = last_item["id"] else: last_item_id = last_item["flavorid"] links.append({ "rel": "next", "href": self._get_next_link(request, last_item_id, collection_name), }) return links def _update_link_prefix(self, orig_url, prefix): if not prefix: return orig_url url_parts = list(urlparse.urlsplit(orig_url)) prefix_parts = list(urlparse.urlsplit(prefix)) url_parts[0:2] = prefix_parts[0:2] url_parts[2] = prefix_parts[2] + url_parts[2] return urlparse.urlunsplit(url_parts).rstrip('/') def _update_glance_link_prefix(self, orig_url): return self._update_link_prefix(orig_url, CONF.api.glance_link_prefix) def _update_compute_link_prefix(self, orig_url): return self._update_link_prefix(orig_url, CONF.api.compute_link_prefix) def get_instance(compute_api, context, instance_id, expected_attrs=None, cell_down_support=False): """Fetch an instance from the compute API, handling error checking.""" try: return compute_api.get(context, instance_id, expected_attrs=expected_attrs, cell_down_support=cell_down_support) except exception.InstanceNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) def normalize_name(name): # NOTE(alex_xu): This method is used by v2.1 legacy v2 compat mode. # In the legacy v2 API, some of APIs strip the spaces and some of APIs not. # The v2.1 disallow leading/trailing, for compatible v2 API and consistent, # we enable leading/trailing spaces and strip spaces in legacy v2 compat # mode. Althrough in legacy v2 API there are some APIs didn't strip spaces, # but actually leading/trailing spaces(that means user depend on leading/ # trailing spaces distinguish different instance) is pointless usecase. return name.strip() def raise_feature_not_supported(msg=None): if msg is None: msg = _("The requested functionality is not supported.") raise webob.exc.HTTPNotImplemented(explanation=msg) def get_flavor(context, flavor_id): try: return objects.Flavor.get_by_flavor_id(context, flavor_id) except exception.FlavorNotFound as error: raise exc.HTTPNotFound(explanation=error.format_message()) def is_all_tenants(search_opts): """Checks to see if the all_tenants flag is in search_opts :param dict search_opts: The search options for a request :returns: boolean indicating if all_tenants are being requested or not """ all_tenants = search_opts.get('all_tenants') if all_tenants: try: all_tenants = strutils.bool_from_string(all_tenants, True) except ValueError as err: raise exception.InvalidInput(six.text_type(err)) else: # The empty string is considered enabling all_tenants all_tenants = 'all_tenants' in search_opts return all_tenants def is_locked(search_opts): """Converts the value of the locked parameter to a boolean. Note that this function will be called only if locked exists in search_opts. :param dict search_opts: The search options for a request :returns: boolean indicating if locked is being requested or not """ locked = search_opts.get('locked') try: locked = strutils.bool_from_string(locked, strict=True) except ValueError as err: raise exception.InvalidInput(six.text_type(err)) return locked def supports_multiattach_volume(req): """Check to see if the requested API version is high enough for multiattach Microversion 2.60 adds support for booting from a multiattach volume. 
The actual validation for a multiattach volume is done in the compute API code, this is just checking the version so we can tell the API code if the request version is high enough to even support it. :param req: The incoming API request :returns: True if the requested API microversion is high enough for volume multiattach support, False otherwise. """ return api_version_request.is_supported(req, '2.60') def supports_port_resource_request(req): """Check to see if the requested API version is high enough for resource request :param req: The incoming API request :returns: True if the requested API microversion is high enough for port resource request support, False otherwise. """ return api_version_request.is_supported(req, '2.72') def supports_port_resource_request_during_move(): """Check to see if the global compute service version is high enough to support port resource request during move operation. :returns: True if the compute service version is high enough for port resource request move support, False otherwise. """ return service.get_minimum_version_all_cells( nova_context.get_admin_context(), ['nova-compute']) >= 49 def instance_has_port_with_resource_request(instance_uuid, network_api): # TODO(gibi): Use instance.info_cache to see if there is VIFs with # allocation key in the profile. If there is no such VIF for an instance # and the instance is not shelve offloaded then we can be sure that the # instance has no port with resource request. If the instance is shelve # offloaded then we still have to hit neutron. search_opts = {'device_id': instance_uuid, 'fields': [constants.RESOURCE_REQUEST]} # NOTE(gibi): We need to use an admin context to query neutron ports as # neutron does not fill the resource_request field in the port response if # we query with a non admin context. admin_context = nova_context.get_admin_context() ports = network_api.list_ports( admin_context, **search_opts).get('ports', []) for port in ports: if port.get(constants.RESOURCE_REQUEST): return True return False ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2824705 nova-21.2.4/nova/api/openstack/compute/0000775000175000017500000000000000000000000017745 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/__init__.py0000664000175000017500000000206700000000000022063 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # The APIRouterV21 moves down to the 'nova.api.openstack.compute.routes' for # circle reference problem. Import the APIRouterV21 is for the api-paste.ini # works correctly without modification. We still looking for a chance to move # the APIRouterV21 back to here after cleanups. 
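# --- Editor's illustrative note, not part of the original module ---
# Hedged sketch of the api-paste.ini wiring that the comment above refers
# to: paste.deploy resolves the v2.1 application through this package's
# dotted path, so the re-export below has to stay importable. The section
# name is taken from the stock api-paste.ini and may differ per deployment:
#
#   [app:osapi_compute_app_v21]
#   paste.app_factory = nova.api.openstack.compute:APIRouterV21.factory
#
# The re-export that keeps that dotted path valid follows.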
from nova.api.openstack.compute.routes import APIRouterV21 # noqa ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/admin_actions.py0000664000175000017500000000646500000000000023142 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from nova.api.openstack import common from nova.api.openstack.compute.schemas import reset_server_state from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova.compute import vm_states from nova import exception from nova.policies import admin_actions as aa_policies # States usable in resetState action # NOTE: It is necessary to update the schema of nova/api/openstack/compute/ # schemas/reset_server_state.py, when updating this state_map. state_map = dict(active=vm_states.ACTIVE, error=vm_states.ERROR) class AdminActionsController(wsgi.Controller): def __init__(self): super(AdminActionsController, self).__init__() self.compute_api = compute.API() @wsgi.response(202) @wsgi.expected_errors((404, 409)) @wsgi.action('resetNetwork') def _reset_network(self, req, id, body): """Permit admins to reset networking on a server.""" context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, id) context.can(aa_policies.POLICY_ROOT % 'reset_network', target={'project_id': instance.project_id}) try: self.compute_api.reset_network(context, instance) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) @wsgi.response(202) @wsgi.expected_errors((404, 409)) @wsgi.action('injectNetworkInfo') def _inject_network_info(self, req, id, body): """Permit admins to inject network info into a server.""" context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, id) context.can(aa_policies.POLICY_ROOT % 'inject_network_info', target={'project_id': instance.project_id}) try: self.compute_api.inject_network_info(context, instance) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) @wsgi.response(202) @wsgi.expected_errors(404) @wsgi.action('os-resetState') @validation.schema(reset_server_state.reset_state) def _reset_state(self, req, id, body): """Permit admins to reset the state of a server.""" context = req.environ["nova.context"] instance = common.get_instance(self.compute_api, context, id) context.can(aa_policies.POLICY_ROOT % 'reset_state', target={'project_id': instance.project_id}) # Identify the desired state from the body state = state_map[body["os-resetState"]["state"]] instance.vm_state = state instance.task_state = None instance.save(admin_state_reset=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/admin_password.py0000664000175000017500000000463700000000000023343 0ustar00zuulzuul00000000000000# 
Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from nova.api.openstack import common from nova.api.openstack.compute.schemas import admin_password from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova import exception from nova.i18n import _ from nova.policies import admin_password as ap_policies class AdminPasswordController(wsgi.Controller): def __init__(self): super(AdminPasswordController, self).__init__() self.compute_api = compute.API() # TODO(eliqiao): Here should be 204(No content) instead of 202 by v2.1+ # microversions because the password has been changed when returning # a response. @wsgi.action('changePassword') @wsgi.response(202) @wsgi.expected_errors((404, 409, 501)) @validation.schema(admin_password.change_password) def change_password(self, req, id, body): context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, id) context.can(ap_policies.BASE_POLICY_NAME, target={'user_id': instance.user_id, 'project_id': instance.project_id}) password = body['changePassword']['adminPass'] try: self.compute_api.set_admin_password(context, instance, password) except (exception.InstancePasswordSetFailed, exception.SetAdminPasswdNotSupported, exception.InstanceAgentNotEnabled) as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as e: raise common.raise_http_conflict_for_instance_invalid_state( e, 'changePassword', id) except NotImplementedError: msg = _("Unable to set password on instance") common.raise_feature_not_supported(msg=msg) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/agents.py0000664000175000017500000001521500000000000021604 0ustar00zuulzuul00000000000000# Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import webob.exc from nova.api.openstack.compute.schemas import agents as schema from nova.api.openstack import wsgi from nova.api import validation from nova import exception from nova import objects from nova.policies import agents as agents_policies from nova import utils class AgentController(wsgi.Controller): """The agent is talking about guest agent.The host can use this for things like accessing files on the disk, configuring networking, or running other applications/scripts in the guest while it is running. 
Typically this uses some hypervisor-specific transport to avoid being dependent on a working network configuration. Xen, VMware, and VirtualBox have guest agents,although the Xen driver is the only one with an implementation for managing them in openstack. KVM doesn't really have a concept of a guest agent (although one could be written). You can find the design of agent update in this link: http://wiki.openstack.org/AgentUpdate and find the code in nova.virt.xenapi.vmops.VMOps._boot_new_instance. In this design We need update agent in guest from host, so we need some interfaces to update the agent info in host. You can find more information about the design of the GuestAgent in the following link: http://wiki.openstack.org/GuestAgent http://wiki.openstack.org/GuestAgentXenStoreCommunication """ @validation.query_schema(schema.index_query_275, '2.75') @validation.query_schema(schema.index_query, '2.0', '2.74') @wsgi.expected_errors(()) def index(self, req): """Return a list of all agent builds. Filter by hypervisor.""" context = req.environ['nova.context'] context.can(agents_policies.BASE_POLICY_NAME % 'list', target={}) hypervisor = None agents = [] if 'hypervisor' in req.GET: hypervisor = req.GET['hypervisor'] builds = objects.AgentList.get_all(context, hypervisor=hypervisor) for agent_build in builds: agents.append({'hypervisor': agent_build.hypervisor, 'os': agent_build.os, 'architecture': agent_build.architecture, 'version': agent_build.version, 'md5hash': agent_build.md5hash, 'agent_id': agent_build.id, 'url': agent_build.url}) return {'agents': agents} @wsgi.expected_errors((400, 404)) @validation.schema(schema.update) def update(self, req, id, body): """Update an existing agent build.""" context = req.environ['nova.context'] context.can(agents_policies.BASE_POLICY_NAME % 'update', target={}) # TODO(oomichi): This parameter name "para" is different from the ones # of the other APIs. Most other names are resource names like "server" # etc. This name should be changed to "agent" for consistent naming # with v2.1+microversions. para = body['para'] url = para['url'] md5hash = para['md5hash'] version = para['version'] try: utils.validate_integer(id, 'id') except exception.InvalidInput as exc: raise webob.exc.HTTPBadRequest(explanation=exc.format_message()) agent = objects.Agent(context=context, id=id) agent.obj_reset_changes() agent.version = version agent.url = url agent.md5hash = md5hash try: agent.save() except exception.AgentBuildNotFound as ex: raise webob.exc.HTTPNotFound(explanation=ex.format_message()) # TODO(alex_xu): The agent_id should be integer that consistent with # create/index actions. But parameter 'id' is string type that parsed # from url. This is a bug, but because back-compatibility, it can't be # fixed for v2 API. This will be fixed in v2.1 API by Microversions in # the future. lp bug #1333494 return {"agent": {'agent_id': id, 'version': version, 'url': url, 'md5hash': md5hash}} # TODO(oomichi): Here should be 204(No Content) instead of 200 by v2.1 # +microversions because the resource agent has been deleted completely # when returning a response. 
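# --- Editor's illustrative sketch, not part of the original module ---
# Hedged example of the body accepted by update() above (PUT /os-agents/{id}).
# The top-level key really is "para" (see the TODO about its naming); the
# URL and hash values below are purely hypothetical.
_SKETCH_AGENT_UPDATE_BODY = {
    "para": {
        "url": "http://example.com/agent-builds/agent-2.zip",
        "md5hash": "add6bb58e139be103324d04d82d8f545",
        "version": "2",
    },
}
# The delete() handler that the TODO above refers to follows.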
@wsgi.expected_errors((400, 404)) @wsgi.response(200) def delete(self, req, id): """Deletes an existing agent build.""" context = req.environ['nova.context'] context.can(agents_policies.BASE_POLICY_NAME % 'delete', target={}) try: utils.validate_integer(id, 'id') except exception.InvalidInput as exc: raise webob.exc.HTTPBadRequest(explanation=exc.format_message()) try: agent = objects.Agent(context=context, id=id) agent.destroy() except exception.AgentBuildNotFound as ex: raise webob.exc.HTTPNotFound(explanation=ex.format_message()) # TODO(oomichi): Here should be 201(Created) instead of 200 by v2.1 # +microversions because the creation of a resource agent finishes # when returning a response. @wsgi.expected_errors(409) @wsgi.response(200) @validation.schema(schema.create) def create(self, req, body): """Creates a new agent build.""" context = req.environ['nova.context'] context.can(agents_policies.BASE_POLICY_NAME % 'create', target={}) agent = body['agent'] hypervisor = agent['hypervisor'] os = agent['os'] architecture = agent['architecture'] version = agent['version'] url = agent['url'] md5hash = agent['md5hash'] agent_obj = objects.Agent(context=context) agent_obj.hypervisor = hypervisor agent_obj.os = os agent_obj.architecture = architecture agent_obj.version = version agent_obj.url = url agent_obj.md5hash = md5hash try: agent_obj.create() agent['agent_id'] = agent_obj.id except exception.AgentBuildExists as ex: raise webob.exc.HTTPConflict(explanation=ex.format_message()) return {'agent': agent} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/aggregates.py0000664000175000017500000003075600000000000022443 0ustar00zuulzuul00000000000000# Copyright (c) 2012 Citrix Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""The Aggregate admin API extension.""" import datetime from oslo_log import log as logging import six from webob import exc from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import aggregate_images from nova.api.openstack.compute.schemas import aggregates from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova.conductor import api as conductor from nova import exception from nova.i18n import _ from nova.policies import aggregates as aggr_policies from nova import utils LOG = logging.getLogger(__name__) def _get_context(req): return req.environ['nova.context'] class AggregateController(wsgi.Controller): """The Host Aggregates API controller for the OpenStack API.""" def __init__(self): super(AggregateController, self).__init__() self.api = compute.AggregateAPI() self.conductor_tasks = conductor.ComputeTaskAPI() @wsgi.expected_errors(()) def index(self, req): """Returns a list a host aggregate's id, name, availability_zone.""" context = _get_context(req) context.can(aggr_policies.POLICY_ROOT % 'index', target={}) aggregates = self.api.get_aggregate_list(context) return {'aggregates': [self._marshall_aggregate(req, a)['aggregate'] for a in aggregates]} # NOTE(gmann): Returns 200 for backwards compatibility but should be 201 # as this operation complete the creation of aggregates resource. @wsgi.expected_errors((400, 409)) @validation.schema(aggregates.create_v20, '2.0', '2.0') @validation.schema(aggregates.create, '2.1') def create(self, req, body): """Creates an aggregate, given its name and optional availability zone. """ context = _get_context(req) context.can(aggr_policies.POLICY_ROOT % 'create', target={}) host_aggregate = body["aggregate"] name = common.normalize_name(host_aggregate["name"]) avail_zone = host_aggregate.get("availability_zone") if avail_zone: avail_zone = common.normalize_name(avail_zone) try: aggregate = self.api.create_aggregate(context, name, avail_zone) except exception.AggregateNameExists as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.ObjectActionError: raise exc.HTTPConflict(explanation=_( 'Not all aggregates have been migrated to the API database')) except exception.InvalidAggregateAction as e: raise exc.HTTPBadRequest(explanation=e.format_message()) agg = self._marshall_aggregate(req, aggregate) # To maintain the same API result as before the changes for returning # nova objects were made. 
del agg['aggregate']['hosts'] del agg['aggregate']['metadata'] return agg @wsgi.expected_errors((400, 404)) def show(self, req, id): """Shows the details of an aggregate, hosts and metadata included.""" context = _get_context(req) context.can(aggr_policies.POLICY_ROOT % 'show', target={}) try: utils.validate_integer(id, 'id') except exception.InvalidInput as e: raise exc.HTTPBadRequest(explanation=e.format_message()) try: aggregate = self.api.get_aggregate(context, id) except exception.AggregateNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) return self._marshall_aggregate(req, aggregate) @wsgi.expected_errors((400, 404, 409)) @validation.schema(aggregates.update_v20, '2.0', '2.0') @validation.schema(aggregates.update, '2.1') def update(self, req, id, body): """Updates the name and/or availability_zone of given aggregate.""" context = _get_context(req) context.can(aggr_policies.POLICY_ROOT % 'update', target={}) updates = body["aggregate"] if 'name' in updates: updates['name'] = common.normalize_name(updates['name']) try: utils.validate_integer(id, 'id') except exception.InvalidInput as e: raise exc.HTTPBadRequest(explanation=e.format_message()) try: aggregate = self.api.update_aggregate(context, id, updates) except exception.AggregateNameExists as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.AggregateNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InvalidAggregateAction as e: raise exc.HTTPBadRequest(explanation=e.format_message()) return self._marshall_aggregate(req, aggregate) # NOTE(gmann): Returns 200 for backwards compatibility but should be 204 # as this operation complete the deletion of aggregate resource and return # no response body. @wsgi.expected_errors((400, 404)) def delete(self, req, id): """Removes an aggregate by id.""" context = _get_context(req) context.can(aggr_policies.POLICY_ROOT % 'delete', target={}) try: utils.validate_integer(id, 'id') except exception.InvalidInput as e: raise exc.HTTPBadRequest(explanation=e.format_message()) try: self.api.delete_aggregate(context, id) except exception.AggregateNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InvalidAggregateAction as e: raise exc.HTTPBadRequest(explanation=e.format_message()) # NOTE(gmann): Returns 200 for backwards compatibility but should be 202 # for representing async API as this API just accepts the request and # request hypervisor driver to complete the same in async mode. 
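# --- Editor's illustrative sketch, not part of the original module ---
# Hedged example of the action body handled by _add_host() below; the
# aggregate id travels in the URL (POST /os-aggregates/{id}/action) and
# only the host name goes in the request body. "compute-01" is hypothetical.
_SKETCH_ADD_HOST_BODY = {"add_host": {"host": "compute-01"}}
# The _add_host() handler that the NOTE above refers to follows.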
@wsgi.expected_errors((400, 404, 409)) @wsgi.action('add_host') @validation.schema(aggregates.add_host) def _add_host(self, req, id, body): """Adds a host to the specified aggregate.""" host = body['add_host']['host'] context = _get_context(req) context.can(aggr_policies.POLICY_ROOT % 'add_host', target={}) try: utils.validate_integer(id, 'id') except exception.InvalidInput as e: raise exc.HTTPBadRequest(explanation=e.format_message()) try: aggregate = self.api.add_host_to_aggregate(context, id, host) except (exception.AggregateNotFound, exception.HostMappingNotFound, exception.ComputeHostNotFound) as e: raise exc.HTTPNotFound(explanation=e.format_message()) except (exception.AggregateHostExists, exception.InvalidAggregateAction) as e: raise exc.HTTPConflict(explanation=e.format_message()) return self._marshall_aggregate(req, aggregate) # NOTE(gmann): Returns 200 for backwards compatibility but should be 202 # for representing async API as this API just accepts the request and # request hypervisor driver to complete the same in async mode. @wsgi.expected_errors((400, 404, 409)) @wsgi.action('remove_host') @validation.schema(aggregates.remove_host) def _remove_host(self, req, id, body): """Removes a host from the specified aggregate.""" host = body['remove_host']['host'] context = _get_context(req) context.can(aggr_policies.POLICY_ROOT % 'remove_host', target={}) try: utils.validate_integer(id, 'id') except exception.InvalidInput as e: raise exc.HTTPBadRequest(explanation=e.format_message()) try: aggregate = self.api.remove_host_from_aggregate(context, id, host) except (exception.AggregateNotFound, exception.AggregateHostNotFound, exception.ComputeHostNotFound) as e: LOG.error('Failed to remove host %s from aggregate %s. Error: %s', host, id, six.text_type(e)) msg = _('Cannot remove host %(host)s in aggregate %(id)s') % { 'host': host, 'id': id} raise exc.HTTPNotFound(explanation=msg) except (exception.InvalidAggregateAction, exception.ResourceProviderUpdateConflict) as e: LOG.error('Failed to remove host %s from aggregate %s. 
Error: %s', host, id, six.text_type(e)) msg = _('Cannot remove host %(host)s in aggregate %(id)s') % { 'host': host, 'id': id} raise exc.HTTPConflict(explanation=msg) return self._marshall_aggregate(req, aggregate) @wsgi.expected_errors((400, 404)) @wsgi.action('set_metadata') @validation.schema(aggregates.set_metadata) def _set_metadata(self, req, id, body): """Replaces the aggregate's existing metadata with new metadata.""" context = _get_context(req) context.can(aggr_policies.POLICY_ROOT % 'set_metadata', target={}) try: utils.validate_integer(id, 'id') except exception.InvalidInput as e: raise exc.HTTPBadRequest(explanation=e.format_message()) metadata = body["set_metadata"]["metadata"] try: aggregate = self.api.update_aggregate_metadata(context, id, metadata) except exception.AggregateNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InvalidAggregateAction as e: raise exc.HTTPBadRequest(explanation=e.format_message()) return self._marshall_aggregate(req, aggregate) def _marshall_aggregate(self, req, aggregate): _aggregate = {} for key, value in self._build_aggregate_items(req, aggregate): # NOTE(danms): The original API specified non-TZ-aware timestamps if isinstance(value, datetime.datetime): value = value.replace(tzinfo=None) _aggregate[key] = value return {"aggregate": _aggregate} def _build_aggregate_items(self, req, aggregate): show_uuid = api_version_request.is_supported(req, min_version="2.41") keys = aggregate.obj_fields # NOTE(rlrossit): Within the compute API, metadata will always be # set on the aggregate object (at a minimum to {}). Because of this, # we can freely use getattr() on keys in obj_extra_fields (in this # case it is only ['availability_zone']) without worrying about # lazy-loading an unset variable for key in keys: if ((aggregate.obj_attr_is_set(key) or key in aggregate.obj_extra_fields) and (show_uuid or key != 'uuid')): yield key, getattr(aggregate, key) @wsgi.Controller.api_version('2.81') @wsgi.response(202) @wsgi.expected_errors((400, 404)) @validation.schema(aggregate_images.aggregate_images_v2_81) def images(self, req, id, body): """Allows image cache management requests.""" context = _get_context(req) context.can(aggr_policies.NEW_POLICY_ROOT % 'images', target={}) try: utils.validate_integer(id, 'id') except exception.InvalidInput as e: raise exc.HTTPBadRequest(explanation=e.format_message()) image_ids = [] for image_req in body.get('cache'): image_ids.append(image_req['id']) if image_ids != list(set(image_ids)): raise exc.HTTPBadRequest( explanation=_('Duplicate images in request')) try: aggregate = self.api.get_aggregate(context, id) except exception.AggregateNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) try: self.conductor_tasks.cache_images(context, aggregate, image_ids) except exception.NovaException as e: raise exc.HTTPBadRequest(explanation=e.format_message()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/assisted_volume_snapshots.py0000664000175000017500000001017000000000000025626 0ustar00zuulzuul00000000000000# Copyright 2013 Red Hat, Inc. # Copyright 2014 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The Assisted volume snapshots extension.""" from oslo_serialization import jsonutils import six from webob import exc from nova.api.openstack.compute.schemas import assisted_volume_snapshots from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova import exception from nova.policies import assisted_volume_snapshots as avs_policies class AssistedVolumeSnapshotsController(wsgi.Controller): """The Assisted volume snapshots API controller for the OpenStack API.""" def __init__(self): super(AssistedVolumeSnapshotsController, self).__init__() self.compute_api = compute.API() @wsgi.expected_errors(400) @validation.schema(assisted_volume_snapshots.snapshots_create) def create(self, req, body): """Creates a new snapshot.""" context = req.environ['nova.context'] context.can(avs_policies.POLICY_ROOT % 'create', target={}) snapshot = body['snapshot'] create_info = snapshot['create_info'] volume_id = snapshot['volume_id'] try: return self.compute_api.volume_snapshot_create(context, volume_id, create_info) except (exception.VolumeBDMNotFound, exception.VolumeBDMIsMultiAttach, exception.InvalidVolume) as error: raise exc.HTTPBadRequest(explanation=error.format_message()) except (exception.InstanceInvalidState, exception.InstanceNotReady) as e: # TODO(mriedem) InstanceInvalidState and InstanceNotReady would # normally result in a 409 but that would require bumping the # microversion, which we should just do in a single microversion # across all APIs when we fix status code wrinkles. raise exc.HTTPBadRequest(explanation=e.format_message()) @wsgi.response(204) @validation.query_schema(assisted_volume_snapshots.delete_query_275, '2.75') @validation.query_schema(assisted_volume_snapshots.delete_query, '2.0', '2.74') @wsgi.expected_errors((400, 404)) def delete(self, req, id): """Delete a snapshot.""" context = req.environ['nova.context'] context.can(avs_policies.POLICY_ROOT % 'delete', target={}) delete_metadata = {} delete_metadata.update(req.GET) try: delete_info = jsonutils.loads(delete_metadata['delete_info']) volume_id = delete_info['volume_id'] except (KeyError, ValueError) as e: raise exc.HTTPBadRequest(explanation=six.text_type(e)) try: self.compute_api.volume_snapshot_delete(context, volume_id, id, delete_info) except (exception.VolumeBDMNotFound, exception.InvalidVolume) as error: raise exc.HTTPBadRequest(explanation=error.format_message()) except exception.NotFound as e: return exc.HTTPNotFound(explanation=e.format_message()) except (exception.InstanceInvalidState, exception.InstanceNotReady) as e: # TODO(mriedem) InstanceInvalidState and InstanceNotReady would # normally result in a 409 but that would require bumping the # microversion, which we should just do in a single microversion # across all APIs when we fix status code wrinkles. 
raise exc.HTTPBadRequest(explanation=e.format_message()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/attach_interfaces.py0000664000175000017500000002240500000000000023771 0ustar00zuulzuul00000000000000# Copyright 2012 SINA Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The instance interfaces extension.""" import webob from webob import exc from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import attach_interfaces from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova import exception from nova.i18n import _ from nova.network import neutron from nova import objects from nova.policies import attach_interfaces as ai_policies def _translate_interface_attachment_view(context, port_info, show_tag=False): """Maps keys for interface attachment details view. :param port_info: dict of port details from the networking service :param show_tag: If True, includes the "tag" key in the returned dict, else the "tag" entry is omitted (default: False) :returns: dict of a subset of details about the port and optionally the tag associated with the VirtualInterface record in the nova database """ info = { 'net_id': port_info['network_id'], 'port_id': port_info['id'], 'mac_addr': port_info['mac_address'], 'port_state': port_info['status'], 'fixed_ips': port_info.get('fixed_ips', None), } if show_tag: # Get the VIF for this port (if one exists - VirtualInterface records # did not exist for neutron ports until the Newton release). vif = objects.VirtualInterface.get_by_uuid(context, port_info['id']) info['tag'] = vif.tag if vif else None return info class InterfaceAttachmentController(wsgi.Controller): """The interface attachment API controller for the OpenStack API.""" def __init__(self): super(InterfaceAttachmentController, self).__init__() self.compute_api = compute.API() self.network_api = neutron.API() @wsgi.expected_errors((404, 501)) def index(self, req, server_id): """Returns the list of interface attachments for a given instance.""" context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, server_id) context.can(ai_policies.POLICY_ROOT % 'list', target={'project_id': instance.project_id}) search_opts = {'device_id': instance.uuid} try: data = self.network_api.list_ports(context, **search_opts) except exception.NotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) except NotImplementedError: common.raise_feature_not_supported() # If showing tags, get the VirtualInterfaceList for the server and # map VIFs by port ID. Note that VirtualInterface records did not # exist for neutron ports until the Newton release so it's OK if we # are missing records for old servers. 
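        # NOTE: for illustration only (field values are placeholders), each
        # entry of the returned 'interfaceAttachments' list is produced by
        # _translate_interface_attachment_view() above and looks roughly like
        #
        #     {"net_id": "<network uuid>", "port_id": "<port uuid>",
        #      "mac_addr": "fa:16:3e:xx:xx:xx", "port_state": "ACTIVE",
        #      "fixed_ips": [...], "tag": <tag or None>}
        #
        # with "tag" only included at microversion 2.70 and later.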
show_tag = api_version_request.is_supported(req, '2.70') tag_per_port_id = {} if show_tag: vifs = objects.VirtualInterfaceList.get_by_instance_uuid( context, server_id) tag_per_port_id = {vif.uuid: vif.tag for vif in vifs} results = [] ports = data.get('ports', []) for port in ports: # Note that we do not pass show_tag=show_tag to # _translate_interface_attachment_view because we are handling it # ourselves here since we have the list of VIFs which is better # for performance than doing a DB query per port. info = _translate_interface_attachment_view(context, port) if show_tag: info['tag'] = tag_per_port_id.get(port['id']) results.append(info) return {'interfaceAttachments': results} @wsgi.expected_errors((403, 404)) def show(self, req, server_id, id): """Return data about the given interface attachment.""" context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, server_id) context.can(ai_policies.POLICY_ROOT % 'show', target={'project_id': instance.project_id}) port_id = id try: port_info = self.network_api.show_port(context, port_id) except exception.PortNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.Forbidden as e: raise exc.HTTPForbidden(explanation=e.format_message()) if port_info['port']['device_id'] != server_id: msg = _("Instance %(instance)s does not have a port with id " "%(port)s") % {'instance': server_id, 'port': port_id} raise exc.HTTPNotFound(explanation=msg) return {'interfaceAttachment': _translate_interface_attachment_view( context, port_info['port'], show_tag=api_version_request.is_supported(req, '2.70'))} @wsgi.expected_errors((400, 403, 404, 409, 500, 501)) @validation.schema(attach_interfaces.create, '2.0', '2.48') @validation.schema(attach_interfaces.create_v249, '2.49') def create(self, req, server_id, body): """Attach an interface to an instance.""" context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, server_id) context.can(ai_policies.POLICY_ROOT % 'create', target={'project_id': instance.project_id}) network_id = None port_id = None req_ip = None tag = None if body: attachment = body['interfaceAttachment'] network_id = attachment.get('net_id', None) port_id = attachment.get('port_id', None) tag = attachment.get('tag', None) try: req_ip = attachment['fixed_ips'][0]['ip_address'] except Exception: pass if network_id and port_id: msg = _("Must not input both network_id and port_id") raise exc.HTTPBadRequest(explanation=msg) if req_ip and not network_id: msg = _("Must input network_id when request IP address") raise exc.HTTPBadRequest(explanation=msg) try: vif = self.compute_api.attach_interface(context, instance, network_id, port_id, req_ip, tag=tag) except (exception.InterfaceAttachFailedNoNetwork, exception.NetworkAmbiguous, exception.NoMoreFixedIps, exception.PortNotUsable, exception.AttachInterfaceNotSupported, exception.SecurityGroupCannotBeApplied, exception.NetworkInterfaceTaggedAttachNotSupported, exception.NetworksWithQoSPolicyNotSupported) as e: raise exc.HTTPBadRequest(explanation=e.format_message()) except (exception.InstanceIsLocked, exception.FixedIpAlreadyInUse, exception.PortInUse) as e: raise exc.HTTPConflict(explanation=e.format_message()) except (exception.PortNotFound, exception.NetworkNotFound) as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.PortLimitExceeded as e: raise exc.HTTPForbidden(explanation=e.format_message()) except exception.InterfaceAttachFailed as e: raise 
webob.exc.HTTPInternalServerError( explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'attach_interface', server_id) return self.show(req, server_id, vif['id']) @wsgi.response(202) @wsgi.expected_errors((404, 409, 501)) def delete(self, req, server_id, id): """Detach an interface from an instance.""" context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, server_id, expected_attrs=['device_metadata']) context.can(ai_policies.POLICY_ROOT % 'delete', target={'project_id': instance.project_id}) port_id = id try: self.compute_api.detach_interface(context, instance, port_id=port_id) except exception.PortNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except NotImplementedError: common.raise_feature_not_supported() except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'detach_interface', server_id) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/availability_zone.py0000664000175000017500000001142000000000000024022 0ustar00zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
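# NOTE: illustrative sketch of the payloads built below; the zone name is a
# placeholder. The index view returns
#
#     {"availabilityZoneInfo": [
#         {"zoneName": "nova",
#          "zoneState": {"available": True},
#          "hosts": None}]}
#
# while the detail view replaces "hosts" with a per-host mapping of service
# binary to {"available": ..., "active": ..., "updated_at": ...}.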
from nova.api.openstack import wsgi from nova import availability_zones from nova.compute import api as compute import nova.conf from nova.policies import availability_zone as az_policies from nova import servicegroup CONF = nova.conf.CONF ATTRIBUTE_NAME = "availability_zone" class AvailabilityZoneController(wsgi.Controller): """The Availability Zone API controller for the OpenStack API.""" def __init__(self): super(AvailabilityZoneController, self).__init__() self.servicegroup_api = servicegroup.API() self.host_api = compute.HostAPI() def _get_filtered_availability_zones(self, zones, is_available): result = [] for zone in zones: # Hide internal_service_availability_zone if zone == CONF.internal_service_availability_zone: continue result.append({'zoneName': zone, 'zoneState': {'available': is_available}, "hosts": None}) return result def _describe_availability_zones(self, context, **kwargs): ctxt = context.elevated() available_zones, not_available_zones = ( availability_zones.get_availability_zones( ctxt, self.host_api)) filtered_available_zones = \ self._get_filtered_availability_zones(available_zones, True) filtered_not_available_zones = \ self._get_filtered_availability_zones(not_available_zones, False) return {'availabilityZoneInfo': filtered_available_zones + filtered_not_available_zones} def _describe_availability_zones_verbose(self, context, **kwargs): ctxt = context.elevated() services = self.host_api.service_get_all( context, set_zones=True, all_cells=True) available_zones, not_available_zones = ( availability_zones.get_availability_zones( ctxt, self.host_api, services=services)) zone_hosts = {} host_services = {} api_services = ('nova-osapi_compute', 'nova-metadata') for service in filter(lambda x: not x.disabled, services): if service.binary in api_services: # Skip API services in the listing since they are not # maintained in the same way as other services continue zone_hosts.setdefault(service['availability_zone'], set()) zone_hosts[service['availability_zone']].add(service['host']) host_services.setdefault(service['availability_zone'] + service['host'], []) host_services[service['availability_zone'] + service['host']].\ append(service) result = [] for zone in available_zones: hosts = {} for host in zone_hosts.get(zone, []): hosts[host] = {} for service in host_services[zone + host]: alive = self.servicegroup_api.service_is_up(service) hosts[host][service['binary']] = { 'available': alive, 'active': service['disabled'] is not True, 'updated_at': service['updated_at'] } result.append({'zoneName': zone, 'zoneState': {'available': True}, "hosts": hosts}) for zone in not_available_zones: result.append({'zoneName': zone, 'zoneState': {'available': False}, "hosts": None}) return {'availabilityZoneInfo': result} @wsgi.expected_errors(()) def index(self, req): """Returns a summary list of availability zone.""" context = req.environ['nova.context'] context.can(az_policies.POLICY_ROOT % 'list', target={}) return self._describe_availability_zones(context) @wsgi.expected_errors(()) def detail(self, req): """Returns a detailed list of availability zone.""" context = req.environ['nova.context'] context.can(az_policies.POLICY_ROOT % 'detail', target={}) return self._describe_availability_zones_verbose(context) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/baremetal_nodes.py0000664000175000017500000001277400000000000023456 0ustar00zuulzuul00000000000000# Copyright (c) 2013 NTT DOCOMO, INC. 
# Copyright 2014 IBM Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The bare-metal admin extension.""" from oslo_utils import importutils import webob from nova.api.openstack.api_version_request \ import MAX_PROXY_API_SUPPORT_VERSION from nova.api.openstack import common from nova.api.openstack import wsgi import nova.conf from nova.i18n import _ from nova.policies import baremetal_nodes as bn_policies ironic_client = importutils.try_import('ironicclient.client') ironic_exc = importutils.try_import('ironicclient.exc') CONF = nova.conf.CONF def _check_ironic_client_enabled(): """Check whether Ironic is installed or not.""" if ironic_client is None: common.raise_feature_not_supported() def _get_ironic_client(): """return an Ironic client.""" # TODO(NobodyCam): Fix insecure setting # NOTE(efried): This should all be replaced by ksa adapter options; but the # nova-to-baremetal API is deprecated, so not changing it. # https://docs.openstack.org/api-ref/compute/#bare-metal-nodes-os-baremetal-nodes-deprecated # noqa kwargs = {'os_username': CONF.ironic.admin_username, 'os_password': CONF.ironic.admin_password, 'os_auth_url': CONF.ironic.admin_url, 'os_tenant_name': CONF.ironic.admin_tenant_name, 'os_service_type': 'baremetal', 'os_endpoint_type': 'public', 'insecure': 'true', 'endpoint': CONF.ironic.endpoint_override} # NOTE(mriedem): The 1 api_version arg here is the only valid value for # the client, but it's not even used so it doesn't really matter. The # ironic client wrapper in the virt driver actually uses a hard-coded # microversion via the os_ironic_api_version kwarg. icli = ironic_client.get_client(1, **kwargs) return icli def _no_ironic_proxy(cmd): raise webob.exc.HTTPBadRequest( explanation=_("Command Not supported. 
Please use Ironic " "command %(cmd)s to perform this " "action.") % {'cmd': cmd}) class BareMetalNodeController(wsgi.Controller): """The Bare-Metal Node API controller for the OpenStack API.""" @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((404, 501)) def index(self, req): context = req.environ['nova.context'] context.can(bn_policies.BASE_POLICY_NAME) nodes = [] # proxy command to Ironic _check_ironic_client_enabled() icli = _get_ironic_client() ironic_nodes = icli.node.list(detail=True) for inode in ironic_nodes: node = {'id': inode.uuid, 'interfaces': [], 'host': 'IRONIC MANAGED', 'task_state': inode.provision_state, 'cpus': inode.properties.get('cpus', 0), 'memory_mb': inode.properties.get('memory_mb', 0), 'disk_gb': inode.properties.get('local_gb', 0)} nodes.append(node) return {'nodes': nodes} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((404, 501)) def show(self, req, id): context = req.environ['nova.context'] context.can(bn_policies.BASE_POLICY_NAME) # proxy command to Ironic _check_ironic_client_enabled() icli = _get_ironic_client() try: inode = icli.node.get(id) except ironic_exc.NotFound: msg = _("Node %s could not be found.") % id raise webob.exc.HTTPNotFound(explanation=msg) iports = icli.node.list_ports(id) node = {'id': inode.uuid, 'interfaces': [], 'host': 'IRONIC MANAGED', 'task_state': inode.provision_state, 'cpus': inode.properties.get('cpus', 0), 'memory_mb': inode.properties.get('memory_mb', 0), 'disk_gb': inode.properties.get('local_gb', 0), 'instance_uuid': inode.instance_uuid} for port in iports: node['interfaces'].append({'address': port.address}) return {'node': node} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(400) def create(self, req, body): _no_ironic_proxy("node-create") @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(400) def delete(self, req, id): _no_ironic_proxy("node-delete") @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.action('add_interface') @wsgi.expected_errors(400) def _add_interface(self, req, id, body): _no_ironic_proxy("port-create") @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.action('remove_interface') @wsgi.expected_errors(400) def _remove_interface(self, req, id, body): _no_ironic_proxy("port-delete") ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/cells.py0000664000175000017500000000331100000000000021417 0ustar00zuulzuul00000000000000# Copyright 2011-2012 OpenStack Foundation # All Rights Reserved. # Copyright 2013 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from nova.api.openstack import wsgi class CellsController(wsgi.Controller): """(Removed) Controller for Cell resources. This was removed during the Train release in favour of cells v2. 
""" @wsgi.expected_errors(410) def index(self, req): raise exc.HTTPGone() @wsgi.expected_errors(410) def detail(self, req): raise exc.HTTPGone() @wsgi.expected_errors(410) def info(self, req): raise exc.HTTPGone() @wsgi.expected_errors(410) def capacities(self, req, id=None): raise exc.HTTPGone() @wsgi.expected_errors(410) def show(self, req, id): raise exc.HTTPGone() @wsgi.expected_errors(410) def delete(self, req, id): raise exc.HTTPGone() @wsgi.expected_errors(410) def create(self, req, body): raise exc.HTTPGone() @wsgi.expected_errors(410) def update(self, req, id, body): raise exc.HTTPGone() @wsgi.expected_errors(410) def sync_instances(self, req, body): raise exc.HTTPGone() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/certificates.py0000664000175000017500000000206300000000000022765 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import webob.exc from nova.api.openstack import wsgi class CertificatesController(wsgi.Controller): """The x509 Certificates API controller for the OpenStack API.""" @wsgi.expected_errors(410) def show(self, req, id): """Return certificate information.""" raise webob.exc.HTTPGone() @wsgi.expected_errors((410)) def create(self, req, body=None): """Create a certificate.""" raise webob.exc.HTTPGone() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/cloudpipe.py0000664000175000017500000000245600000000000022312 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Connect your vlan to the world.""" from webob import exc from nova.api.openstack import wsgi class CloudpipeController(wsgi.Controller): """Handle creating and listing cloudpipe instances.""" @wsgi.expected_errors((410)) def create(self, req, body): """Create a new cloudpipe instance, if none exists. 
Parameters: {cloudpipe: {'project_id': ''}} """ raise exc.HTTPGone() @wsgi.expected_errors((410)) def index(self, req): """List running cloudpipe instances.""" raise exc.HTTPGone() @wsgi.expected_errors(410) def update(self, req, id, body): """Configure cloudpipe parameters for the project.""" raise exc.HTTPGone() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/console_auth_tokens.py0000664000175000017500000000567100000000000024376 0ustar00zuulzuul00000000000000# Copyright 2013 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import webob from nova.api.openstack import wsgi import nova.conf from nova import context as nova_context from nova.i18n import _ from nova import objects from nova.policies import console_auth_tokens as cat_policies CONF = nova.conf.CONF class ConsoleAuthTokensController(wsgi.Controller): def _show(self, req, id, rdp_only): """Checks a console auth token and returns the related connect info.""" context = req.environ['nova.context'] context.can(cat_policies.BASE_POLICY_NAME, target={}) token = id if not token: msg = _("token not provided") raise webob.exc.HTTPBadRequest(explanation=msg) connect_info = None results = nova_context.scatter_gather_skip_cell0( context, objects.ConsoleAuthToken.validate, token) # NOTE(melwitt): Console token auths are stored in cell databases, # but with only the token as a request param, we can't know which # cell database contains the token's corresponding connection info. # So, we must query all cells for the info and we can break the # loop as soon as we find a result because the token is associated # with one instance, which can only be in one cell. 
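        # NOTE: for illustration, `results` maps each queried cell to either
        # the ConsoleAuthToken validated in that cell or a failure sentinel
        # (for example when a cell database did not respond), which is why
        # every value is screened with is_cell_failure_sentinel() before it
        # is accepted as connect_info.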
for result in results.values(): if not nova_context.is_cell_failure_sentinel(result): connect_info = result break if not connect_info: raise webob.exc.HTTPNotFound(explanation=_("Token not found")) console_type = connect_info.console_type if rdp_only and console_type != "rdp-html5": raise webob.exc.HTTPUnauthorized( explanation=_("The requested console type details are not " "accessible")) return {'console': { 'instance_uuid': connect_info.instance_uuid, 'host': connect_info.host, 'port': connect_info.port, 'internal_access_path': connect_info.internal_access_path, }} @wsgi.Controller.api_version("2.1", "2.30") @wsgi.expected_errors((400, 401, 404)) def show(self, req, id): return self._show(req, id, True) @wsgi.Controller.api_version("2.31") # noqa @wsgi.expected_errors((400, 404)) def show(self, req, id): return self._show(req, id, False) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/console_output.py0000664000175000017500000000565700000000000023416 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2011 Grid Dynamics # Copyright 2011 Eldar Nugaev, Kirill Shileev, Ilya Alekseyev # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import re import webob from nova.api.openstack import common from nova.api.openstack.compute.schemas import console_output from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova import exception from nova.policies import console_output as co_policies class ConsoleOutputController(wsgi.Controller): def __init__(self): super(ConsoleOutputController, self).__init__() self.compute_api = compute.API() @wsgi.expected_errors((404, 409, 501)) @wsgi.action('os-getConsoleOutput') @validation.schema(console_output.get_console_output) def get_console_output(self, req, id, body): """Get text console output.""" context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, id) context.can(co_policies.BASE_POLICY_NAME, target={'project_id': instance.project_id}) length = body['os-getConsoleOutput'].get('length') # TODO(cyeoh): In a future API update accept a length of -1 # as meaning unlimited length (convert to None) try: output = self.compute_api.get_console_output(context, instance, length) # NOTE(cyeoh): This covers race conditions where the instance is # deleted between common.get_instance and get_console_output # being called except (exception.InstanceNotFound, exception.ConsoleNotAvailable) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceNotReady as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) except NotImplementedError: common.raise_feature_not_supported() # XML output is not correctly escaped, so remove invalid characters # NOTE(cyeoh): We don't support XML output with V2.1, but for # backwards compatibility reasons we continue to filter the output # We should remove this in the future remove_re = 
re.compile('[\x00-\x08\x0B-\x1F]') output = remove_re.sub('', output) return {'output': output} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/consoles.py0000664000175000017500000000243400000000000022147 0ustar00zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from nova.api.openstack import wsgi class ConsolesController(wsgi.Controller): """(Removed) The Consoles controller for the OpenStack API. This was removed during the Ussuri release along with the nova-console service. """ @wsgi.expected_errors(410) def index(self, req, server_id): raise exc.HTTPGone() @wsgi.expected_errors(410) def create(self, req, server_id, body): raise exc.HTTPGone() @wsgi.expected_errors(410) def show(self, req, server_id, id): raise exc.HTTPGone() @wsgi.expected_errors(410) def delete(self, req, server_id, id): raise exc.HTTPGone() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/create_backup.py0000664000175000017500000000717700000000000023123 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import webob from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import create_backup from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova import exception from nova.policies import create_backup as cb_policies class CreateBackupController(wsgi.Controller): def __init__(self): super(CreateBackupController, self).__init__() self.compute_api = compute.API() @wsgi.response(202) @wsgi.expected_errors((400, 403, 404, 409)) @wsgi.action('createBackup') @validation.schema(create_backup.create_backup_v20, '2.0', '2.0') @validation.schema(create_backup.create_backup, '2.1') def _create_backup(self, req, id, body): """Backup a server instance. Images now have an `image_type` associated with them, which can be 'snapshot' or the backup type, like 'daily' or 'weekly'. If the image_type is backup-like, then the rotation factor can be included and that will cause the oldest backups that exceed the rotation factor to be deleted. 
""" context = req.environ["nova.context"] instance = common.get_instance(self.compute_api, context, id) context.can(cb_policies.BASE_POLICY_NAME, target={'project_id': instance.project_id}) entity = body["createBackup"] image_name = common.normalize_name(entity["name"]) backup_type = entity["backup_type"] rotation = int(entity["rotation"]) props = {} metadata = entity.get('metadata', {}) # Starting from microversion 2.39 we don't check quotas on createBackup if api_version_request.is_supported( req, max_version= api_version_request.MAX_IMAGE_META_PROXY_API_VERSION): common.check_img_metadata_properties_quota(context, metadata) props.update(metadata) try: image = self.compute_api.backup(context, instance, image_name, backup_type, rotation, extra_properties=props) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'createBackup', id) except exception.InvalidRequest as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) # Starting with microversion 2.45 we return a response body containing # the snapshot image id without the Location header. if api_version_request.is_supported(req, '2.45'): return {'image_id': image['id']} resp = webob.Response(status_int=202) # build location of newly-created image entity if rotation is not zero if rotation > 0: image_id = str(image['id']) image_ref = common.url_join(req.application_url, 'images', image_id) resp.headers['Location'] = image_ref return resp ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/deferred_delete.py0000664000175000017500000000517300000000000023427 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""The deferred instance delete extension.""" import webob from nova.api.openstack import common from nova.api.openstack import wsgi from nova.compute import api as compute from nova import exception from nova.policies import deferred_delete as dd_policies class DeferredDeleteController(wsgi.Controller): def __init__(self): super(DeferredDeleteController, self).__init__() self.compute_api = compute.API() @wsgi.response(202) @wsgi.expected_errors((403, 404, 409)) @wsgi.action('restore') def _restore(self, req, id, body): """Restore a previously deleted instance.""" context = req.environ["nova.context"] instance = common.get_instance(self.compute_api, context, id) context.can(dd_policies.BASE_POLICY_NAME % 'restore', target={'project_id': instance.project_id}) try: self.compute_api.restore(context, instance) except exception.QuotaError as error: raise webob.exc.HTTPForbidden(explanation=error.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'restore', id) @wsgi.response(202) @wsgi.expected_errors((404, 409)) @wsgi.action('forceDelete') def _force_delete(self, req, id, body): """Force delete of instance before deferred cleanup.""" context = req.environ["nova.context"] instance = common.get_instance(self.compute_api, context, id) context.can(dd_policies.BASE_POLICY_NAME % 'force', target={'user_id': instance.user_id, 'project_id': instance.project_id}) try: self.compute_api.force_delete(context, instance) except exception.InstanceNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/evacuate.py0000664000175000017500000001450500000000000022121 0ustar00zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_log import log as logging from oslo_utils import strutils from webob import exc from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import evacuate from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute import nova.conf from nova import exception from nova.i18n import _ from nova.network import neutron from nova.policies import evacuate as evac_policies from nova import utils CONF = nova.conf.CONF LOG = logging.getLogger(__name__) class EvacuateController(wsgi.Controller): def __init__(self): super(EvacuateController, self).__init__() self.compute_api = compute.API() self.host_api = compute.HostAPI() self.network_api = neutron.API() def _get_on_shared_storage(self, req, evacuate_body): if api_version_request.is_supported(req, min_version='2.14'): return None else: return strutils.bool_from_string(evacuate_body["onSharedStorage"]) def _get_password(self, req, evacuate_body, on_shared_storage): password = None if 'adminPass' in evacuate_body: # check that if requested to evacuate server on shared storage # password not specified if on_shared_storage: msg = _("admin password can't be changed on existing disk") raise exc.HTTPBadRequest(explanation=msg) password = evacuate_body['adminPass'] elif not on_shared_storage: password = utils.generate_password() return password def _get_password_v214(self, req, evacuate_body): if 'adminPass' in evacuate_body: password = evacuate_body['adminPass'] else: password = utils.generate_password() return password # TODO(eliqiao): Should be responding here with 202 Accept # because evacuate is an async call, but keep to 200 for # backwards compatibility reasons. @wsgi.expected_errors((400, 403, 404, 409)) @wsgi.action('evacuate') @validation.schema(evacuate.evacuate, "2.0", "2.13") @validation.schema(evacuate.evacuate_v214, "2.14", "2.28") @validation.schema(evacuate.evacuate_v2_29, "2.29", "2.67") @validation.schema(evacuate.evacuate_v2_68, "2.68") def _evacuate(self, req, id, body): """Permit admins to evacuate a server from a failed host to a new one. """ context = req.environ["nova.context"] instance = common.get_instance(self.compute_api, context, id) context.can(evac_policies.BASE_POLICY_NAME, target={'user_id': instance.user_id, 'project_id': instance.project_id}) evacuate_body = body["evacuate"] host = evacuate_body.get("host") force = None on_shared_storage = self._get_on_shared_storage(req, evacuate_body) if api_version_request.is_supported(req, min_version='2.29'): force = body["evacuate"].get("force", False) force = strutils.bool_from_string(force, strict=True) if force is True and not host: message = _("Can't force to a non-provided destination") raise exc.HTTPBadRequest(explanation=message) if api_version_request.is_supported(req, min_version='2.14'): password = self._get_password_v214(req, evacuate_body) else: password = self._get_password(req, evacuate_body, on_shared_storage) if host is not None: try: self.host_api.service_get_by_compute_host(context, host) except (exception.ComputeHostNotFound, exception.HostMappingNotFound): msg = _("Compute host %s not found.") % host raise exc.HTTPNotFound(explanation=msg) if instance.host == host: msg = _("The target host can't be the same one.") raise exc.HTTPBadRequest(explanation=msg) # We could potentially move this check to conductor and avoid the # extra API call to neutron when we support move operations with ports # having resource requests. 
if (common.instance_has_port_with_resource_request( instance.uuid, self.network_api) and not common.supports_port_resource_request_during_move()): LOG.warning("The evacuate action on a server with ports " "having resource requests, like a port with a QoS " "minimum bandwidth policy, is not supported until " "every nova-compute is upgraded to Ussuri") msg = _("The evacuate action on a server with ports having " "resource requests, like a port with a QoS minimum " "bandwidth policy, is not supported by this cluster right " "now") raise exc.HTTPBadRequest(explanation=msg) try: self.compute_api.evacuate(context, instance, host, on_shared_storage, password, force) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'evacuate', id) except exception.ComputeServiceInUse as e: raise exc.HTTPBadRequest(explanation=e.format_message()) except exception.ForbiddenWithAccelerators as e: raise exc.HTTPForbidden(explanation=e.format_message()) if (not api_version_request.is_supported(req, min_version='2.14') and CONF.api.enable_instance_password): return {'adminPass': password} else: return None ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/extension_info.py0000664000175000017500000007517600000000000023366 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
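# NOTE: EXTENSION_LIST below is a static, hard-coded listing; every entry
# follows the same shape as its first element, e.g.
#
#     {"alias": "NMN", "description": "Multiple network support.",
#      "links": [], "name": "Multinic",
#      "namespace": "http://docs.openstack.org/compute/ext/fake_xml",
#      "updated": "2014-12-03T00:00:00Z"}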
import webob.exc from nova.api.openstack import wsgi from nova.policies import extensions as ext_policies EXTENSION_LIST = [ { "alias": "NMN", "description": "Multiple network support.", "links": [], "name": "Multinic", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-DCF", "description": "Disk Management Extension.", "links": [], "name": "DiskConfig", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-AZ", "description": "Extended Availability Zone support.", "links": [], "name": "ExtendedAvailabilityZone", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IMG-SIZE", "description": "Adds image size to image listings.", "links": [], "name": "ImageSize", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IPS", "description": "Adds type parameter to the ip list.", "links": [], "name": "ExtendedIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IPS-MAC", "description": "Adds mac address parameter to the ip list.", "links": [], "name": "ExtendedIpsMac", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-SRV-ATTR", "description": "Extended Server Attributes support.", "links": [], "name": "ExtendedServerAttributes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-STS", "description": "Extended Status support.", "links": [], "name": "ExtendedStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-FLV-DISABLED", "description": "Support to show the disabled status of a flavor.", "links": [], "name": "FlavorDisabled", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-FLV-EXT-DATA", "description": "Provide additional data for flavors.", "links": [], "name": "FlavorExtraData", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-SCH-HNT", "description": "Pass arbitrary key/value pairs to the scheduler.", "links": [], "name": "SchedulerHints", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-SRV-USG", "description": "Adds launched_at and terminated_at on Servers.", "links": [], "name": "ServerUsage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-access-ips", "description": "Access IPs support.", "links": [], "name": "AccessIPs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-admin-actions", "description": "Enable admin-only server actions\n\n " "Actions include: resetNetwork, injectNetworkInfo, " "os-resetState\n ", "links": [], "name": "AdminActions", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-admin-password", "description": "Admin password management support.", "links": [], "name": "AdminPassword", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-agents", "description": "Agents support.", "links": 
[], "name": "Agents", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-aggregates", "description": "Admin-only aggregate administration.", "links": [], "name": "Aggregates", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-assisted-volume-snapshots", "description": "Assisted volume snapshots.", "links": [], "name": "AssistedVolumeSnapshots", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-attach-interfaces", "description": "Attach interface support.", "links": [], "name": "AttachInterfaces", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-availability-zone", "description": "1. Add availability_zone to the Create Server " "API.\n 2. Add availability zones " "describing.\n ", "links": [], "name": "AvailabilityZone", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-baremetal-ext-status", "description": "Add extended status in Baremetal Nodes v2 API.", "links": [], "name": "BareMetalExtStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-baremetal-nodes", "description": "Admin-only bare-metal node administration.", "links": [], "name": "BareMetalNodes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-block-device-mapping", "description": "Block device mapping boot support.", "links": [], "name": "BlockDeviceMapping", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-block-device-mapping-v2-boot", "description": "Allow boot with the new BDM data format.", "links": [], "name": "BlockDeviceMappingV2Boot", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cell-capacities", "description": "Adding functionality to get cell capacities.", "links": [], "name": "CellCapacities", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cells", "description": "Enables cells-related functionality such as adding " "neighbor cells,\n listing neighbor cells, " "and getting the capabilities of the local cell.\n ", "links": [], "name": "Cells", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-certificates", "description": "Certificates support.", "links": [], "name": "Certificates", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cloudpipe", "description": "Adds actions to create cloudpipe instances.\n\n " "When running with the Vlan network mode, you need a " "mechanism to route\n from the public Internet to " "your vlans. This mechanism is known as a\n " "cloudpipe.\n\n At the time of creating this class, " "only OpenVPN is supported. 
Support for\n a SSH " "Bastion host is forthcoming.\n ", "links": [], "name": "Cloudpipe", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cloudpipe-update", "description": "Adds the ability to set the vpn ip/port for cloudpipe " "instances.", "links": [], "name": "CloudpipeUpdate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-config-drive", "description": "Config Drive Extension.", "links": [], "name": "ConfigDrive", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-console-auth-tokens", "description": "Console token authentication support.", "links": [], "name": "ConsoleAuthTokens", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-console-output", "description": "Console log output support, with tailing ability.", "links": [], "name": "ConsoleOutput", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-consoles", "description": "Interactive Console support.", "links": [], "name": "Consoles", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-create-backup", "description": "Create a backup of a server.", "links": [], "name": "CreateBackup", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-create-server-ext", "description": "Extended support to the Create Server v1.1 API.", "links": [], "name": "Createserverext", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-deferred-delete", "description": "Instance deferred delete.", "links": [], "name": "DeferredDelete", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-evacuate", "description": "Enables server evacuation.", "links": [], "name": "Evacuate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-evacuate-find-host", "description": "Enables server evacuation without target host. 
" "Scheduler will select one to target.", "links": [], "name": "ExtendedEvacuateFindHost", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-floating-ips", "description": "Adds optional fixed_address to the add floating IP " "command.", "links": [], "name": "ExtendedFloatingIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-hypervisors", "description": "Extended hypervisors support.", "links": [], "name": "ExtendedHypervisors", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-networks", "description": "Adds additional fields to networks.", "links": [], "name": "ExtendedNetworks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-quotas", "description": "Adds ability for admins to delete quota and " "optionally force the update Quota command.", "links": [], "name": "ExtendedQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-rescue-with-image", "description": "Allow the user to specify the image to use for " "rescue.", "links": [], "name": "ExtendedRescueWithImage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-services", "description": "Extended services support.", "links": [], "name": "ExtendedServices", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-services-delete", "description": "Extended services deletion support.", "links": [], "name": "ExtendedServicesDelete", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-status", "description": "Extended Status support.", "links": [], "name": "ExtendedStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-volumes", "description": "Extended Volumes support.", "links": [], "name": "ExtendedVolumes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-fixed-ips", "description": "Fixed IPs support.", "links": [], "name": "FixedIPs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-access", "description": "Flavor access support.", "links": [], "name": "FlavorAccess", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-extra-specs", "description": "Flavors extra specs support.", "links": [], "name": "FlavorExtraSpecs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-manage", "description": "Flavor create/delete API support.", "links": [], "name": "FlavorManage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-rxtx", "description": "Support to show the rxtx status of a flavor.", "links": [], "name": "FlavorRxtx", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-swap", "description": "Support to show the swap status of a flavor.", "links": [], "name": 
"FlavorSwap", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ip-dns", "description": "Floating IP DNS support.", "links": [], "name": "FloatingIpDns", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ip-pools", "description": "Floating IPs support.", "links": [], "name": "FloatingIpPools", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ips", "description": "Floating IPs support.", "links": [], "name": "FloatingIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ips-bulk", "description": "Bulk handling of Floating IPs.", "links": [], "name": "FloatingIpsBulk", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-fping", "description": "Fping Management Extension.", "links": [], "name": "Fping", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hide-server-addresses", "description": "Support hiding server addresses in certain states.", "links": [], "name": "HideServerAddresses", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hosts", "description": "Admin-only host administration.", "links": [], "name": "Hosts", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hypervisor-status", "description": "Show hypervisor status.", "links": [], "name": "HypervisorStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hypervisors", "description": "Admin-only hypervisor administration.", "links": [], "name": "Hypervisors", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-instance-actions", "description": "View a log of actions and events taken on an " "instance.", "links": [], "name": "InstanceActions", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-instance_usage_audit_log", "description": "Admin-only Task Log Monitoring.", "links": [], "name": "OSInstanceUsageAuditLog", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-keypairs", "description": "Keypair Support.", "links": [], "name": "Keypairs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-lock-server", "description": "Enable lock/unlock server actions.", "links": [], "name": "LockServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-migrate-server", "description": "Enable migrate and live-migrate server actions.", "links": [], "name": "MigrateServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-migrations", "description": "Provide data on migrations.", "links": [], "name": "Migrations", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-multiple-create", "description": "Allow multiple create in the Create Server v2.1 API.", "links": [], "name": "MultipleCreate", 
"namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-networks", "description": "Admin-only Network Management Extension.", "links": [], "name": "Networks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-networks-associate", "description": "Network association support.", "links": [], "name": "NetworkAssociationSupport", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-pause-server", "description": "Enable pause/unpause server actions.", "links": [], "name": "PauseServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-personality", "description": "Personality support.", "links": [], "name": "Personality", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-preserve-ephemeral-rebuild", "description": "Allow preservation of the ephemeral partition on " "rebuild.", "links": [], "name": "PreserveEphemeralOnRebuild", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-quota-class-sets", "description": "Quota classes management support.", "links": [], "name": "QuotaClasses", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-quota-sets", "description": "Quotas management support.", "links": [], "name": "Quotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-rescue", "description": "Instance rescue mode.", "links": [], "name": "Rescue", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-security-group-default-rules", "description": "Default rules for security group support.", "links": [], "name": "SecurityGroupDefaultRules", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-security-groups", "description": "Security group support.", "links": [], "name": "SecurityGroups", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-diagnostics", "description": "Allow Admins to view server diagnostics through " "server action.", "links": [], "name": "ServerDiagnostics", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-external-events", "description": "Server External Event Triggers.", "links": [], "name": "ServerExternalEvents", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-group-quotas", "description": "Adds quota support to server groups.", "links": [], "name": "ServerGroupQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-groups", "description": "Server group support.", "links": [], "name": "ServerGroups", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-list-multi-status", "description": "Allow to filter the servers by a set of status " "values.", "links": [], "name": "ServerListMultiStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { 
"alias": "os-server-password", "description": "Server password support.", "links": [], "name": "ServerPassword", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-sort-keys", "description": "Add sorting support in get Server v2 API.", "links": [], "name": "ServerSortKeys", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-start-stop", "description": "Start/Stop instance compute API support.", "links": [], "name": "ServerStartStop", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-services", "description": "Services support.", "links": [], "name": "Services", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-shelve", "description": "Instance shelve mode.", "links": [], "name": "Shelve", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-simple-tenant-usage", "description": "Simple tenant usage extension.", "links": [], "name": "SimpleTenantUsage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-suspend-server", "description": "Enable suspend/resume server actions.", "links": [], "name": "SuspendServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-tenant-networks", "description": "Tenant-based Network Management Extension.", "links": [], "name": "OSTenantNetworks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-used-limits", "description": "Provide data on limited resources that are being " "used.", "links": [], "name": "UsedLimits", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-used-limits-for-admin", "description": "Provide data to admin on limited resources used by " "other tenants.", "links": [], "name": "UsedLimitsForAdmin", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-user-data", "description": "Add user_data to the Create Server API.", "links": [], "name": "UserData", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-user-quotas", "description": "Project user quota support.", "links": [], "name": "UserQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-virtual-interfaces", "description": "Virtual interface support.", "links": [], "name": "VirtualInterfaces", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-volume-attachment-update", "description": "Support for updating a volume attachment.", "links": [], "name": "VolumeAttachmentUpdate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-volumes", "description": "Volumes support.", "links": [], "name": "Volumes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" } ] EXTENSION_LIST_LEGACY_V2_COMPATIBLE = EXTENSION_LIST[:] EXTENSION_LIST_LEGACY_V2_COMPATIBLE.append({ 'alias': 'OS-EXT-VIF-NET', 'description': 'Adds network id parameter to the virtual 
interface list.', 'links': [], 'name': 'ExtendedVIFNet', 'namespace': 'http://docs.openstack.org/compute/ext/fake_xml', "updated": "2014-12-03T00:00:00Z" }) EXTENSION_LIST_LEGACY_V2_COMPATIBLE = sorted( EXTENSION_LIST_LEGACY_V2_COMPATIBLE, key=lambda x: x['alias']) class ExtensionInfoController(wsgi.Controller): @wsgi.expected_errors(()) def index(self, req): context = req.environ['nova.context'] context.can(ext_policies.BASE_POLICY_NAME) # NOTE(gmann): This is for v2.1 compatible mode where # extension list should show all extensions as shown by v2. if req.is_legacy_v2(): return dict(extensions=EXTENSION_LIST_LEGACY_V2_COMPATIBLE) return dict(extensions=EXTENSION_LIST) @wsgi.expected_errors(404) def show(self, req, id): context = req.environ['nova.context'] context.can(ext_policies.BASE_POLICY_NAME) all_exts = EXTENSION_LIST # NOTE(gmann): This is for v2.1 compatible mode where # extension list should show all extensions as shown by v2. if req.is_legacy_v2(): all_exts = EXTENSION_LIST_LEGACY_V2_COMPATIBLE # NOTE(dprince): the extensions alias is used as the 'id' for show for ext in all_exts: if ext['alias'] == id: return dict(extension=ext) raise webob.exc.HTTPNotFound() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/fixed_ips.py0000664000175000017500000000204300000000000022270 0ustar00zuulzuul00000000000000# Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from nova.api.openstack import wsgi class FixedIPController(wsgi.Controller): @wsgi.expected_errors((410)) def show(self, req, id): raise exc.HTTPGone() @wsgi.expected_errors((410)) @wsgi.action('reserve') def reserve(self, req, id, body): raise exc.HTTPGone() @wsgi.expected_errors((410)) @wsgi.action('unreserve') def unreserve(self, req, id, body): raise exc.HTTPGone() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/flavor_access.py0000664000175000017500000000775700000000000023151 0ustar00zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
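# Illustrative sketch (not nova code, hypothetical data): the
# FlavorAccessController below returns 404 for public flavors and
# otherwise marshals one {'flavor_id', 'tenant_id'} entry per project via
# _marshall_flavor_access().  A standalone mirror of that marshalling:
def _example_marshall_flavor_access(flavorid, projects):
    """Shape the access list the way _marshall_flavor_access() does.

    >>> _example_marshall_flavor_access('m1.tiny', ['proj-a'])
    {'flavor_access': [{'flavor_id': 'm1.tiny', 'tenant_id': 'proj-a'}]}
    """
    return {'flavor_access': [
        {'flavor_id': flavorid, 'tenant_id': project_id}
        for project_id in projects]}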
"""The flavor access extension.""" import webob from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import flavor_access from nova.api.openstack import identity from nova.api.openstack import wsgi from nova.api import validation from nova import exception from nova.i18n import _ from nova.policies import flavor_access as fa_policies def _marshall_flavor_access(flavor): rval = [] for project_id in flavor.projects: rval.append({'flavor_id': flavor.flavorid, 'tenant_id': project_id}) return {'flavor_access': rval} class FlavorAccessController(wsgi.Controller): """The flavor access API controller for the OpenStack API.""" @wsgi.expected_errors(404) def index(self, req, flavor_id): context = req.environ['nova.context'] context.can(fa_policies.BASE_POLICY_NAME) flavor = common.get_flavor(context, flavor_id) # public flavor to all projects if flavor.is_public: explanation = _("Access list not available for public flavors.") raise webob.exc.HTTPNotFound(explanation=explanation) # private flavor to listed projects only return _marshall_flavor_access(flavor) class FlavorActionController(wsgi.Controller): """The flavor access API controller for the OpenStack API.""" @wsgi.expected_errors((400, 403, 404, 409)) @wsgi.action("addTenantAccess") @validation.schema(flavor_access.add_tenant_access) def _add_tenant_access(self, req, id, body): context = req.environ['nova.context'] context.can(fa_policies.POLICY_ROOT % "add_tenant_access", target={}) vals = body['addTenantAccess'] tenant = vals['tenant'] identity.verify_project_id(context, tenant) flavor = common.get_flavor(context, id) try: if api_version_request.is_supported(req, min_version='2.7'): if flavor.is_public: exp = _("Can not add access to a public flavor.") raise webob.exc.HTTPConflict(explanation=exp) flavor.add_access(tenant) except exception.FlavorNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except exception.FlavorAccessExists as err: raise webob.exc.HTTPConflict(explanation=err.format_message()) return _marshall_flavor_access(flavor) @wsgi.expected_errors((400, 403, 404)) @wsgi.action("removeTenantAccess") @validation.schema(flavor_access.remove_tenant_access) def _remove_tenant_access(self, req, id, body): context = req.environ['nova.context'] context.can( fa_policies.POLICY_ROOT % "remove_tenant_access", target={}) vals = body['removeTenantAccess'] tenant = vals['tenant'] identity.verify_project_id(context, tenant) # NOTE(gibi): We have to load a flavor from the db here as # flavor.remove_access() will try to emit a notification and that needs # a fully loaded flavor. flavor = common.get_flavor(context, id) try: flavor.remove_access(tenant) except (exception.FlavorAccessNotFound, exception.FlavorNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) return _marshall_flavor_access(flavor) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/flavor_manage.py0000664000175000017500000001323300000000000023122 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import webob from nova.api.openstack import api_version_request from nova.api.openstack.compute.schemas import flavor_manage from nova.api.openstack.compute.views import flavors as flavors_view from nova.api.openstack import wsgi from nova.api import validation from nova.compute import flavors from nova import exception from nova import objects from nova.policies import flavor_extra_specs as fes_policies from nova.policies import flavor_manage as fm_policies class FlavorManageController(wsgi.Controller): """The Flavor Lifecycle API controller for the OpenStack API.""" _view_builder_class = flavors_view.ViewBuilder # NOTE(oomichi): Return 202 for backwards compatibility but should be # 204 as this operation complete the deletion of aggregate resource and # return no response body. @wsgi.response(202) @wsgi.expected_errors(404) @wsgi.action("delete") def _delete(self, req, id): context = req.environ['nova.context'] context.can(fm_policies.POLICY_ROOT % 'delete', target={}) flavor = objects.Flavor(context=context, flavorid=id) try: flavor.destroy() except exception.FlavorNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) # NOTE(oomichi): Return 200 for backwards compatibility but should be 201 # as this operation complete the creation of flavor resource. @wsgi.action("create") @wsgi.expected_errors((400, 409)) @validation.schema(flavor_manage.create_v20, '2.0', '2.0') @validation.schema(flavor_manage.create, '2.1', '2.54') @validation.schema(flavor_manage.create_v2_55, flavors_view.FLAVOR_DESCRIPTION_MICROVERSION) def _create(self, req, body): context = req.environ['nova.context'] context.can(fm_policies.POLICY_ROOT % 'create', target={}) vals = body['flavor'] name = vals['name'] flavorid = vals.get('id') memory = vals['ram'] vcpus = vals['vcpus'] root_gb = vals['disk'] ephemeral_gb = vals.get('OS-FLV-EXT-DATA:ephemeral', 0) swap = vals.get('swap', 0) rxtx_factor = vals.get('rxtx_factor', 1.0) is_public = vals.get('os-flavor-access:is_public', True) # The user can specify a description starting with microversion 2.55. include_description = api_version_request.is_supported( req, flavors_view.FLAVOR_DESCRIPTION_MICROVERSION) description = vals.get('description') if include_description else None try: flavor = flavors.create(name, memory, vcpus, root_gb, ephemeral_gb=ephemeral_gb, flavorid=flavorid, swap=swap, rxtx_factor=rxtx_factor, is_public=is_public, description=description) # NOTE(gmann): For backward compatibility, non public flavor # access is not being added for created tenant. Ref -bug/1209101 except (exception.FlavorExists, exception.FlavorIdExists) as err: raise webob.exc.HTTPConflict(explanation=err.format_message()) include_extra_specs = False if api_version_request.is_supported( req, flavors_view.FLAVOR_EXTRA_SPECS_MICROVERSION): include_extra_specs = context.can( fes_policies.POLICY_ROOT % 'index', fatal=False) # NOTE(yikun): This empty extra_specs only for keeping consistent # with PUT and GET flavor APIs. 
extra_specs in flavor is added # after creating the flavor so to avoid the error in _view_builder # flavor.extra_specs is populated with the empty string. flavor.extra_specs = {} return self._view_builder.show(req, flavor, include_description, include_extra_specs=include_extra_specs) @wsgi.Controller.api_version(flavors_view.FLAVOR_DESCRIPTION_MICROVERSION) @wsgi.action('update') @wsgi.expected_errors((400, 404)) @validation.schema(flavor_manage.update_v2_55, flavors_view.FLAVOR_DESCRIPTION_MICROVERSION) def _update(self, req, id, body): # Validate the policy. context = req.environ['nova.context'] context.can(fm_policies.POLICY_ROOT % 'update', target={}) # Get the flavor and update the description. try: flavor = objects.Flavor.get_by_flavor_id(context, id) flavor.description = body['flavor']['description'] flavor.save() except exception.FlavorNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) include_extra_specs = False if api_version_request.is_supported( req, flavors_view.FLAVOR_EXTRA_SPECS_MICROVERSION): include_extra_specs = context.can( fes_policies.POLICY_ROOT % 'index', fatal=False) return self._view_builder.show(req, flavor, include_description=True, include_extra_specs=include_extra_specs) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/flavors.py0000664000175000017500000001252000000000000021773 0ustar00zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
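# Illustrative sketch (not nova code, hypothetical request body):
# FlavorManageController._create() above maps the public API field names
# onto the keyword arguments of nova.compute.flavors.create().  A
# standalone mirror of that mapping:
def _example_flavor_create_kwargs(vals):
    """Return the kwargs _create() derives from a 'flavor' request dict.

    >>> kwargs = _example_flavor_create_kwargs(
    ...     {'name': 'm1.tiny', 'ram': 512, 'vcpus': 1, 'disk': 1})
    >>> kwargs['memory'], kwargs['root_gb'], kwargs['is_public']
    (512, 1, True)
    """
    return {
        'name': vals['name'],
        'flavorid': vals.get('id'),
        'memory': vals['ram'],
        'vcpus': vals['vcpus'],
        'root_gb': vals['disk'],
        'ephemeral_gb': vals.get('OS-FLV-EXT-DATA:ephemeral', 0),
        'swap': vals.get('swap', 0),
        'rxtx_factor': vals.get('rxtx_factor', 1.0),
        'is_public': vals.get('os-flavor-access:is_public', True),
    }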
from oslo_utils import strutils import webob from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import flavors as schema from nova.api.openstack.compute.views import flavors as flavors_view from nova.api.openstack import wsgi from nova.api import validation from nova.compute import flavors from nova import exception from nova.i18n import _ from nova import objects from nova.policies import flavor_extra_specs as fes_policies from nova import utils class FlavorsController(wsgi.Controller): """Flavor controller for the OpenStack API.""" _view_builder_class = flavors_view.ViewBuilder @validation.query_schema(schema.index_query_275, '2.75') @validation.query_schema(schema.index_query, '2.0', '2.74') @wsgi.expected_errors(400) def index(self, req): """Return all flavors in brief.""" limited_flavors = self._get_flavors(req) return self._view_builder.index(req, limited_flavors) @validation.query_schema(schema.index_query_275, '2.75') @validation.query_schema(schema.index_query, '2.0', '2.74') @wsgi.expected_errors(400) def detail(self, req): """Return all flavors in detail.""" context = req.environ['nova.context'] limited_flavors = self._get_flavors(req) include_extra_specs = False if api_version_request.is_supported( req, flavors_view.FLAVOR_EXTRA_SPECS_MICROVERSION): include_extra_specs = context.can( fes_policies.POLICY_ROOT % 'index', fatal=False) return self._view_builder.detail( req, limited_flavors, include_extra_specs=include_extra_specs) @wsgi.expected_errors(404) def show(self, req, id): """Return data about the given flavor id.""" context = req.environ['nova.context'] try: flavor = flavors.get_flavor_by_flavor_id(id, ctxt=context) except exception.FlavorNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) include_extra_specs = False if api_version_request.is_supported( req, flavors_view.FLAVOR_EXTRA_SPECS_MICROVERSION): include_extra_specs = context.can( fes_policies.POLICY_ROOT % 'index', fatal=False) include_description = api_version_request.is_supported( req, flavors_view.FLAVOR_DESCRIPTION_MICROVERSION) return self._view_builder.show( req, flavor, include_description=include_description, include_extra_specs=include_extra_specs) def _parse_is_public(self, is_public): """Parse is_public into something usable.""" if is_public is None: # preserve default value of showing only public flavors return True elif utils.is_none_string(is_public): return None else: try: return strutils.bool_from_string(is_public, strict=True) except ValueError: msg = _('Invalid is_public filter [%s]') % is_public raise webob.exc.HTTPBadRequest(explanation=msg) def _get_flavors(self, req): """Helper function that returns a list of flavor dicts.""" filters = {} sort_key = req.params.get('sort_key') or 'flavorid' sort_dir = req.params.get('sort_dir') or 'asc' limit, marker = common.get_limit_and_marker(req) context = req.environ['nova.context'] if context.is_admin: # Only admin has query access to all flavor types filters['is_public'] = self._parse_is_public( req.params.get('is_public', None)) else: filters['is_public'] = True filters['disabled'] = False if 'minRam' in req.params: try: filters['min_memory_mb'] = int(req.params['minRam']) except ValueError: msg = _('Invalid minRam filter [%s]') % req.params['minRam'] raise webob.exc.HTTPBadRequest(explanation=msg) if 'minDisk' in req.params: try: filters['min_root_gb'] = int(req.params['minDisk']) except ValueError: msg = (_('Invalid minDisk filter [%s]') % 
req.params['minDisk']) raise webob.exc.HTTPBadRequest(explanation=msg) try: limited_flavors = objects.FlavorList.get_all(context, filters=filters, sort_key=sort_key, sort_dir=sort_dir, limit=limit, marker=marker) except exception.MarkerNotFound: msg = _('marker [%s] not found') % marker raise webob.exc.HTTPBadRequest(explanation=msg) return limited_flavors ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/flavors_extraspecs.py0000664000175000017500000001374300000000000024244 0ustar00zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six import webob from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import flavors_extraspecs from nova.api.openstack import wsgi from nova.api import validation from nova.api.validation.extra_specs import validators from nova import exception from nova.i18n import _ from nova.policies import flavor_extra_specs as fes_policies from nova import utils class FlavorExtraSpecsController(wsgi.Controller): """The flavor extra specs API controller for the OpenStack API.""" def _get_extra_specs(self, context, flavor_id): flavor = common.get_flavor(context, flavor_id) return dict(extra_specs=flavor.extra_specs) def _check_extra_specs_value(self, req, specs): validation_supported = api_version_request.is_supported( req, min_version='2.86', ) for name, value in specs.items(): # NOTE(gmann): Max length for numeric value is being checked # explicitly as json schema cannot have max length check for # numeric value if isinstance(value, (six.integer_types, float)): value = six.text_type(value) try: utils.check_string_length(value, 'extra_specs value', max_length=255) except exception.InvalidInput as error: raise webob.exc.HTTPBadRequest( explanation=error.format_message()) if validation_supported: validators.validate(name, value) @wsgi.expected_errors(404) def index(self, req, flavor_id): """Returns the list of extra specs for a given flavor.""" context = req.environ['nova.context'] context.can(fes_policies.POLICY_ROOT % 'index', target={'project_id': context.project_id}) return self._get_extra_specs(context, flavor_id) # NOTE(gmann): Here should be 201 instead of 200 by v2.1 # +microversions because the flavor extra specs has been created # completely when returning a response. 
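# Illustrative note (hypothetical request body): _check_extra_specs_value()
# above stringifies numeric values before applying the 255-character
# limit, so a body such as
#
#     {"extra_specs": {"hw:cpu_policy": "dedicated", "hw:numa_nodes": 2}}
#
# checks the integer 2 as the string "2"; from microversion 2.86 each
# name/value pair is additionally run through the extra-spec validators.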
@wsgi.expected_errors((400, 404, 409)) @validation.schema(flavors_extraspecs.create) def create(self, req, flavor_id, body): context = req.environ['nova.context'] context.can(fes_policies.POLICY_ROOT % 'create', target={}) specs = body['extra_specs'] self._check_extra_specs_value(req, specs) flavor = common.get_flavor(context, flavor_id) try: flavor.extra_specs = dict(flavor.extra_specs, **specs) flavor.save() except exception.FlavorExtraSpecUpdateCreateFailed as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) except exception.FlavorNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) return body @wsgi.expected_errors((400, 404, 409)) @validation.schema(flavors_extraspecs.update) def update(self, req, flavor_id, id, body): context = req.environ['nova.context'] context.can(fes_policies.POLICY_ROOT % 'update', target={}) self._check_extra_specs_value(req, body) if id not in body: expl = _('Request body and URI mismatch') raise webob.exc.HTTPBadRequest(explanation=expl) flavor = common.get_flavor(context, flavor_id) try: flavor.extra_specs = dict(flavor.extra_specs, **body) flavor.save() except exception.FlavorExtraSpecUpdateCreateFailed as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) except exception.FlavorNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) return body @wsgi.expected_errors(404) def show(self, req, flavor_id, id): """Return a single extra spec item.""" context = req.environ['nova.context'] context.can(fes_policies.POLICY_ROOT % 'show', target={'project_id': context.project_id}) flavor = common.get_flavor(context, flavor_id) try: return {id: flavor.extra_specs[id]} except KeyError: msg = _("Flavor %(flavor_id)s has no extra specs with " "key %(key)s.") % dict(flavor_id=flavor_id, key=id) raise webob.exc.HTTPNotFound(explanation=msg) # NOTE(gmann): Here should be 204(No Content) instead of 200 by v2.1 # +microversions because the flavor extra specs has been deleted # completely when returning a response. @wsgi.expected_errors(404) def delete(self, req, flavor_id, id): """Deletes an existing extra spec.""" context = req.environ['nova.context'] context.can(fes_policies.POLICY_ROOT % 'delete', target={}) flavor = common.get_flavor(context, flavor_id) try: del flavor.extra_specs[id] flavor.save() except (exception.FlavorExtraSpecsNotFound, exception.FlavorNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except KeyError: msg = _("Flavor %(flavor_id)s has no extra specs with " "key %(key)s.") % dict(flavor_id=flavor_id, key=id) raise webob.exc.HTTPNotFound(explanation=msg) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/floating_ip_dns.py0000664000175000017500000000273000000000000023460 0ustar00zuulzuul00000000000000# Copyright 2011 Andrew Bogott for the Wikimedia Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
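# Illustrative note (hypothetical request): FlavorExtraSpecsController
# update() above requires the key in the request body to match the ``id``
# in the URI, e.g.
#
#     PUT /flavors/1/os-extra_specs/hw:cpu_policy
#     {"hw:cpu_policy": "dedicated"}
#
# succeeds, while a body keyed by any other name is rejected with
# "Request body and URI mismatch" (HTTP 400).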
from webob import exc from nova.api.openstack import wsgi class FloatingIPDNSDomainController(wsgi.Controller): """DNS domain controller for OpenStack API.""" @wsgi.expected_errors(410) def index(self, req): raise exc.HTTPGone() @wsgi.expected_errors(410) def update(self, req, id, body): raise exc.HTTPGone() @wsgi.expected_errors(410) def delete(self, req, id): raise exc.HTTPGone() class FloatingIPDNSEntryController(wsgi.Controller): """DNS Entry controller for OpenStack API.""" @wsgi.expected_errors(410) def show(self, req, domain_id, id): raise exc.HTTPGone() @wsgi.expected_errors(410) def update(self, req, domain_id, id, body): raise exc.HTTPGone() @wsgi.expected_errors(410) def delete(self, req, domain_id, id): raise exc.HTTPGone() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/floating_ip_pools.py0000664000175000017500000000334500000000000024033 0ustar00zuulzuul00000000000000# Copyright (c) 2011 X.commerce, a business unit of eBay Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.openstack.api_version_request \ import MAX_PROXY_API_SUPPORT_VERSION from nova.api.openstack import wsgi from nova.network import neutron from nova.policies import floating_ip_pools as fip_policies def _translate_floating_ip_view(pool): return { 'name': pool['name'] or pool['id'], } def _translate_floating_ip_pools_view(pools): return { 'floating_ip_pools': [_translate_floating_ip_view(pool) for pool in pools] } class FloatingIPPoolsController(wsgi.Controller): """The Floating IP Pool API controller for the OpenStack API.""" def __init__(self): super(FloatingIPPoolsController, self).__init__() self.network_api = neutron.API() @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(()) def index(self, req): """Return a list of pools.""" context = req.environ['nova.context'] context.can(fip_policies.BASE_POLICY_NAME) pools = self.network_api.get_floating_ip_pools(context) return _translate_floating_ip_pools_view(pools) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/floating_ips.py0000664000175000017500000002760600000000000023010 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright (c) 2011 X.commerce, a business unit of eBay Inc. # Copyright 2011 Grid Dynamics # Copyright 2011 Eldar Nugaev, Kirill Shileev, Ilya Alekseyev # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
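# Illustrative sketch (not nova code, hypothetical pool data): the
# floating_ip_pools controller above falls back to the pool id when a
# network has no name.  A standalone mirror of that view translation:
def _example_pools_view(pools):
    """Shape the list the way _translate_floating_ip_pools_view() does.

    >>> _example_pools_view([{'name': 'public', 'id': 'net-1'},
    ...                      {'name': None, 'id': 'net-2'}])
    {'floating_ip_pools': [{'name': 'public'}, {'name': 'net-2'}]}
    """
    return {'floating_ip_pools': [
        {'name': pool['name'] or pool['id']} for pool in pools]}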
from oslo_log import log as logging from oslo_utils import netutils import webob from nova.api.openstack.api_version_request \ import MAX_PROXY_API_SUPPORT_VERSION from nova.api.openstack import common from nova.api.openstack.compute.schemas import floating_ips from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova import exception from nova.i18n import _ from nova.network import neutron from nova.policies import floating_ips as fi_policies LOG = logging.getLogger(__name__) def _translate_floating_ip_view(floating_ip): instance_id = None if floating_ip['port_details']: instance_id = floating_ip['port_details']['device_id'] return { 'floating_ip': { 'id': floating_ip['id'], 'ip': floating_ip['floating_ip_address'], 'pool': floating_ip['network_details']['name'] or ( floating_ip['network_details']['id']), 'fixed_ip': floating_ip['fixed_ip_address'], 'instance_id': instance_id, } } def get_instance_by_floating_ip_addr(self, context, address): try: instance_id =\ self.network_api.get_instance_id_by_floating_address( context, address) except exception.FloatingIpNotFoundForAddress as ex: raise webob.exc.HTTPNotFound(explanation=ex.format_message()) except exception.FloatingIpMultipleFoundForAddress as ex: raise webob.exc.HTTPConflict(explanation=ex.format_message()) if instance_id: return common.get_instance(self.compute_api, context, instance_id, expected_attrs=['flavor']) def disassociate_floating_ip(self, context, instance, address): try: self.network_api.disassociate_floating_ip(context, instance, address) except exception.Forbidden: raise webob.exc.HTTPForbidden() class FloatingIPController(wsgi.Controller): """The Floating IPs API controller for the OpenStack API.""" def __init__(self): super(FloatingIPController, self).__init__() self.compute_api = compute.API() self.network_api = neutron.API() @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 404)) def show(self, req, id): """Return data about the given floating IP.""" context = req.environ['nova.context'] context.can(fi_policies.BASE_POLICY_NAME) try: floating_ip = self.network_api.get_floating_ip(context, id) except (exception.NotFound, exception.FloatingIpNotFound): msg = _("Floating IP not found for ID %s") % id raise webob.exc.HTTPNotFound(explanation=msg) except exception.InvalidID as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) return _translate_floating_ip_view(floating_ip) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(()) def index(self, req): """Return a list of floating IPs allocated to a project.""" context = req.environ['nova.context'] context.can(fi_policies.BASE_POLICY_NAME) floating_ips = self.network_api.get_floating_ips_by_project(context) return {'floating_ips': [_translate_floating_ip_view(ip)['floating_ip'] for ip in floating_ips]} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 403, 404)) def create(self, req, body=None): context = req.environ['nova.context'] context.can(fi_policies.BASE_POLICY_NAME) pool = None if body and 'pool' in body: pool = body['pool'] try: address = self.network_api.allocate_floating_ip(context, pool) ip = self.network_api.get_floating_ip_by_address(context, address) except exception.NoMoreFloatingIps: if pool: msg = _("No more floating IPs in pool %s.") % pool else: msg = _("No more floating IPs available.") raise webob.exc.HTTPNotFound(explanation=msg) except 
exception.FloatingIpLimitExceeded: if pool: msg = _("IP allocation over quota in pool %s.") % pool else: msg = _("IP allocation over quota.") raise webob.exc.HTTPForbidden(explanation=msg) except exception.FloatingIpPoolNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except exception.FloatingIpBadRequest as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) return _translate_floating_ip_view(ip) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.response(202) @wsgi.expected_errors((400, 403, 404, 409)) def delete(self, req, id): context = req.environ['nova.context'] context.can(fi_policies.BASE_POLICY_NAME) # get the floating ip object try: floating_ip = self.network_api.get_floating_ip(context, id) except (exception.NotFound, exception.FloatingIpNotFound): msg = _("Floating IP not found for ID %s") % id raise webob.exc.HTTPNotFound(explanation=msg) except exception.InvalidID as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) address = floating_ip['floating_ip_address'] # get the associated instance object (if any) instance = get_instance_by_floating_ip_addr(self, context, address) try: self.network_api.disassociate_and_release_floating_ip( context, instance, floating_ip) except exception.Forbidden: raise webob.exc.HTTPForbidden() except exception.FloatingIpNotFoundForAddress as exc: raise webob.exc.HTTPNotFound(explanation=exc.format_message()) class FloatingIPActionController(wsgi.Controller): """This API is deprecated from the Microversion '2.44'.""" def __init__(self): super(FloatingIPActionController, self).__init__() self.compute_api = compute.API() self.network_api = neutron.API() @wsgi.Controller.api_version("2.1", "2.43") @wsgi.expected_errors((400, 403, 404)) @wsgi.action('addFloatingIp') @validation.schema(floating_ips.add_floating_ip) def _add_floating_ip(self, req, id, body): """Associate floating_ip to an instance.""" context = req.environ['nova.context'] context.can(fi_policies.BASE_POLICY_NAME) address = body['addFloatingIp']['address'] instance = common.get_instance(self.compute_api, context, id, expected_attrs=['flavor']) cached_nwinfo = instance.get_network_info() if not cached_nwinfo: LOG.warning( 'Info cache is %r during associate with no nw_info cache', instance.info_cache, instance=instance) msg = _('Instance network is not ready yet') raise webob.exc.HTTPBadRequest(explanation=msg) fixed_ips = cached_nwinfo.fixed_ips() if not fixed_ips: msg = _('No fixed IPs associated to instance') raise webob.exc.HTTPBadRequest(explanation=msg) fixed_address = None if 'fixed_address' in body['addFloatingIp']: fixed_address = body['addFloatingIp']['fixed_address'] for fixed in fixed_ips: if fixed['address'] == fixed_address: break else: msg = _('Specified fixed address not assigned to instance') raise webob.exc.HTTPBadRequest(explanation=msg) if not fixed_address: try: fixed_address = next(ip['address'] for ip in fixed_ips if netutils.is_valid_ipv4(ip['address'])) except StopIteration: msg = _('Unable to associate floating IP %(address)s ' 'to any fixed IPs for instance %(id)s. 
' 'Instance has no fixed IPv4 addresses to ' 'associate.') % ( {'address': address, 'id': id}) raise webob.exc.HTTPBadRequest(explanation=msg) if len(fixed_ips) > 1: LOG.warning('multiple fixed_ips exist, using the first ' 'IPv4 fixed_ip: %s', fixed_address) try: self.network_api.associate_floating_ip(context, instance, floating_address=address, fixed_address=fixed_address) except exception.FloatingIpAssociated: msg = _('floating IP is already associated') raise webob.exc.HTTPBadRequest(explanation=msg) except exception.FloatingIpAssociateFailed as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) except exception.NoFloatingIpInterface: msg = _('l3driver call to add floating IP failed') raise webob.exc.HTTPBadRequest(explanation=msg) except exception.FloatingIpNotFoundForAddress: msg = _('floating IP not found') raise webob.exc.HTTPNotFound(explanation=msg) except exception.Forbidden as e: raise webob.exc.HTTPForbidden(explanation=e.format_message()) except Exception as e: msg = _('Unable to associate floating IP %(address)s to ' 'fixed IP %(fixed_address)s for instance %(id)s. ' 'Error: %(error)s') % ( {'address': address, 'fixed_address': fixed_address, 'id': id, 'error': e}) LOG.exception(msg) raise webob.exc.HTTPBadRequest(explanation=msg) return webob.Response(status_int=202) @wsgi.Controller.api_version("2.1", "2.43") @wsgi.expected_errors((400, 403, 404, 409)) @wsgi.action('removeFloatingIp') @validation.schema(floating_ips.remove_floating_ip) def _remove_floating_ip(self, req, id, body): """Dissociate floating_ip from an instance.""" context = req.environ['nova.context'] context.can(fi_policies.BASE_POLICY_NAME) address = body['removeFloatingIp']['address'] # get the floating ip object try: floating_ip = self.network_api.get_floating_ip_by_address(context, address) except exception.FloatingIpNotFoundForAddress: msg = _("floating IP not found") raise webob.exc.HTTPNotFound(explanation=msg) # get the associated instance object (if any) instance = get_instance_by_floating_ip_addr(self, context, address) # disassociate if associated if instance and floating_ip['port_id'] and instance.uuid == id: disassociate_floating_ip(self, context, instance, address) return webob.Response(status_int=202) else: msg = _("Floating IP %(address)s is not associated with instance " "%(id)s.") % {'address': address, 'id': id} raise webob.exc.HTTPConflict(explanation=msg) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/floating_ips_bulk.py0000664000175000017500000000207100000000000024012 0ustar00zuulzuul00000000000000# Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
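# Illustrative sketch (not nova code, hypothetical addresses): when no
# fixed_address is supplied, the addFloatingIp action above picks the
# instance's first IPv4 fixed IP (and only warns if there are several).
# A rough standalone equivalent of that selection, using the stdlib
# ipaddress module in place of oslo's netutils.is_valid_ipv4:
import ipaddress


def _example_pick_fixed_ipv4(fixed_ips):
    """Return the first IPv4 address from a list of fixed IP dicts.

    Raises StopIteration (surfaced as HTTP 400 in the controller above)
    if the instance has no IPv4 fixed address.

    >>> _example_pick_fixed_ipv4([{'address': 'fd00::5'},
    ...                           {'address': '192.0.2.10'}])
    '192.0.2.10'
    """
    return next(ip['address'] for ip in fixed_ips
                if ipaddress.ip_address(ip['address']).version == 4)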
from webob import exc from nova.api.openstack import wsgi class FloatingIPBulkController(wsgi.Controller): @wsgi.expected_errors(410) def index(self, req): raise exc.HTTPGone() @wsgi.expected_errors(410) def show(self, req, id): raise exc.HTTPGone() @wsgi.expected_errors(410) def create(self, req, body): raise exc.HTTPGone() @wsgi.expected_errors(410) def update(self, req, id, body): raise exc.HTTPGone() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/fping.py0000664000175000017500000000166100000000000021426 0ustar00zuulzuul00000000000000# Copyright 2011 Grid Dynamics # Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from nova.api.openstack import wsgi class FpingController(wsgi.Controller): @wsgi.expected_errors(410) def index(self, req): raise exc.HTTPGone() @wsgi.expected_errors(410) def show(self, req, id): raise exc.HTTPGone() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/helpers.py0000664000175000017500000001054300000000000021764 0ustar00zuulzuul00000000000000# Copyright 2016 HPE, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import strutils from webob import exc from nova.i18n import _ API_DISK_CONFIG = "OS-DCF:diskConfig" API_ACCESS_V4 = "accessIPv4" API_ACCESS_V6 = "accessIPv6" # possible ops CREATE = 'create' UPDATE = 'update' REBUILD = 'rebuild' RESIZE = 'resize' def disk_config_from_api(value): if value == 'AUTO': return True elif value == 'MANUAL': return False else: msg = _("%s must be either 'MANUAL' or 'AUTO'.") % API_DISK_CONFIG raise exc.HTTPBadRequest(explanation=msg) def get_injected_files(personality): """Create a list of injected files from the personality attribute. At this time, injected_files must be formatted as a list of (file_path, file_content) pairs for compatibility with the underlying compute service. """ injected_files = [] for item in personality: injected_files.append((item['path'], item['contents'])) return injected_files def translate_attributes(op, server_dict, operation_kwargs): """Translate REST attributes on create to server object kwargs. 
Our REST API is relatively fixed, but internal representations change over time, this is a translator for inbound REST request attributes that modifies the server dict that we get and adds appropriate attributes to ``operation_kwargs`` that will be passed down to instance objects later. It's done in a common function as this is used for create / resize / rebuild / update The ``op`` is the operation that we are transforming, because there are times when we translate differently for different operations. (Yes, it's a little nuts, but legacy... ) The ``server_dict`` is a representation of the server in question. During ``create`` and ``update`` operations this will actually be the ``server`` element of the request body. During actions, such as ``rebuild`` and ``resize`` this will be the attributes passed to the action object during the operation. This is equivalent to the ``server`` object. Not all operations support all attributes listed here. Which is why it's important to only set operation_kwargs if there is something to set. Input validation will ensure that we are only operating on appropriate attributes for each operation. """ # Disk config auto_disk_config_raw = server_dict.pop(API_DISK_CONFIG, None) if auto_disk_config_raw is not None: auto_disk_config = disk_config_from_api(auto_disk_config_raw) operation_kwargs['auto_disk_config'] = auto_disk_config if API_ACCESS_V4 in server_dict: operation_kwargs['access_ip_v4'] = server_dict.pop(API_ACCESS_V4) if API_ACCESS_V6 in server_dict: operation_kwargs['access_ip_v6'] = server_dict.pop(API_ACCESS_V6) # This is only ever expected during rebuild operations, and only # does anything with Ironic driver. It also demonstrates the lack # of understanding of the word ephemeral. if 'preserve_ephemeral' in server_dict and op == REBUILD: preserve = strutils.bool_from_string( server_dict.pop('preserve_ephemeral'), strict=True) operation_kwargs['preserve_ephemeral'] = preserve # yes, we use different kwargs, this goes all the way back to # commit cebc98176926f57016a508d5c59b11f55dfcf2b3. if 'personality' in server_dict: if op == REBUILD: operation_kwargs['files_to_inject'] = get_injected_files( server_dict.pop('personality')) # NOTE(sdague): the deprecated hooks infrastructure doesn't # function if injected files is not defined as a list. Once hooks # are removed, this should go back inside the personality # conditional above. if op == CREATE: operation_kwargs['injected_files'] = get_injected_files( server_dict.pop('personality', [])) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/hosts.py0000664000175000017500000002734500000000000021472 0ustar00zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
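# Illustrative note (hypothetical attribute values): translate_attributes()
# above rewrites REST attribute names into instance kwargs.  For a
# ``create`` request whose server dict contains
#
#     {"OS-DCF:diskConfig": "AUTO", "accessIPv4": "192.0.2.1",
#      "personality": [{"path": "/etc/motd", "contents": "aGk="}]}
#
# the resulting operation_kwargs would be
#
#     {"auto_disk_config": True, "access_ip_v4": "192.0.2.1",
#      "injected_files": [("/etc/motd", "aGk=")]}
#
# whereas the same personality attribute on a ``rebuild`` request is
# passed through as ``files_to_inject`` instead.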
"""The hosts admin extension.""" from oslo_log import log as logging import six import webob.exc from nova.api.openstack import common from nova.api.openstack.compute.schemas import hosts from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova import context as nova_context from nova import exception from nova import objects from nova.policies import hosts as hosts_policies LOG = logging.getLogger(__name__) class HostController(wsgi.Controller): """The Hosts API controller for the OpenStack API.""" def __init__(self): super(HostController, self).__init__() self.api = compute.HostAPI() @wsgi.Controller.api_version("2.1", "2.42") @validation.query_schema(hosts.index_query) @wsgi.expected_errors(()) def index(self, req): """Returns a dict in the format | {'hosts': [{'host_name': 'some.host.name', | 'service': 'cells', | 'zone': 'internal'}, | {'host_name': 'some.other.host.name', | 'service': 'cells', | 'zone': 'internal'}, | {'host_name': 'some.celly.host.name', | 'service': 'cells', | 'zone': 'internal'}, | {'host_name': 'compute1.host.com', | 'service': 'compute', | 'zone': 'nova'}, | {'host_name': 'compute2.host.com', | 'service': 'compute', | 'zone': 'nova'}, | {'host_name': 'sched1.host.com', | 'service': 'scheduler', | 'zone': 'internal'}, | {'host_name': 'sched2.host.com', | 'service': 'scheduler', | 'zone': 'internal'}, | {'host_name': 'vol1.host.com', | 'service': 'volume', | 'zone': 'internal'}]} """ context = req.environ['nova.context'] context.can(hosts_policies.BASE_POLICY_NAME) filters = {'disabled': False} zone = req.GET.get('zone', None) if zone: filters['availability_zone'] = zone services = self.api.service_get_all(context, filters=filters, set_zones=True, all_cells=True) hosts = [] api_services = ('nova-osapi_compute', 'nova-metadata') for service in services: if service.binary not in api_services: hosts.append({'host_name': service['host'], 'service': service['topic'], 'zone': service['availability_zone']}) return {'hosts': hosts} @wsgi.Controller.api_version("2.1", "2.42") @wsgi.expected_errors((400, 404, 501)) @validation.schema(hosts.update) def update(self, req, id, body): """Return booleanized version of body dict. :param Request req: The request object (containing 'nova-context' env var). :param str id: The host name. :param dict body: example format {'host': {'status': 'enable', 'maintenance_mode': 'enable'}} :return: Same dict as body but 'enable' strings for 'status' and 'maintenance_mode' are converted into True, else False. :rtype: dict """ def read_enabled(orig_val): # Convert enable/disable str to a bool. val = orig_val.strip().lower() return val == "enable" context = req.environ['nova.context'] context.can(hosts_policies.BASE_POLICY_NAME) # See what the user wants to 'update' status = body.get('status') maint_mode = body.get('maintenance_mode') if status is not None: status = read_enabled(status) if maint_mode is not None: maint_mode = read_enabled(maint_mode) # Make the calls and merge the results result = {'host': id} if status is not None: result['status'] = self._set_enabled_status(context, id, status) if maint_mode is not None: result['maintenance_mode'] = self._set_host_maintenance(context, id, maint_mode) return result def _set_host_maintenance(self, context, host_name, mode=True): """Start/Stop host maintenance window. On start, it triggers guest VMs evacuation. 
""" LOG.info("Putting host %(host_name)s in maintenance mode %(mode)s.", {'host_name': host_name, 'mode': mode}) try: result = self.api.set_host_maintenance(context, host_name, mode) except NotImplementedError: common.raise_feature_not_supported() except (exception.HostNotFound, exception.HostMappingNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except exception.ComputeServiceUnavailable as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) if result not in ("on_maintenance", "off_maintenance"): raise webob.exc.HTTPBadRequest(explanation=result) return result def _set_enabled_status(self, context, host_name, enabled): """Sets the specified host's ability to accept new instances. :param enabled: a boolean - if False no new VMs will be able to start on the host. """ if enabled: LOG.info("Enabling host %s.", host_name) else: LOG.info("Disabling host %s.", host_name) try: result = self.api.set_host_enabled(context, host_name, enabled) except NotImplementedError: common.raise_feature_not_supported() except (exception.HostNotFound, exception.HostMappingNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except exception.ComputeServiceUnavailable as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) if result not in ("enabled", "disabled"): raise webob.exc.HTTPBadRequest(explanation=result) return result def _host_power_action(self, req, host_name, action): """Reboots, shuts down or powers up the host.""" context = req.environ['nova.context'] context.can(hosts_policies.BASE_POLICY_NAME) try: result = self.api.host_power_action(context, host_name, action) except NotImplementedError: common.raise_feature_not_supported() except (exception.HostNotFound, exception.HostMappingNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except exception.ComputeServiceUnavailable as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) return {"host": host_name, "power_action": result} @wsgi.Controller.api_version("2.1", "2.42") @wsgi.expected_errors((400, 404, 501)) def startup(self, req, id): return self._host_power_action(req, host_name=id, action="startup") @wsgi.Controller.api_version("2.1", "2.42") @wsgi.expected_errors((400, 404, 501)) def shutdown(self, req, id): return self._host_power_action(req, host_name=id, action="shutdown") @wsgi.Controller.api_version("2.1", "2.42") @wsgi.expected_errors((400, 404, 501)) def reboot(self, req, id): return self._host_power_action(req, host_name=id, action="reboot") @staticmethod def _get_total_resources(host_name, compute_node): return {'resource': {'host': host_name, 'project': '(total)', 'cpu': compute_node.vcpus, 'memory_mb': compute_node.memory_mb, 'disk_gb': compute_node.local_gb}} @staticmethod def _get_used_now_resources(host_name, compute_node): return {'resource': {'host': host_name, 'project': '(used_now)', 'cpu': compute_node.vcpus_used, 'memory_mb': compute_node.memory_mb_used, 'disk_gb': compute_node.local_gb_used}} @staticmethod def _get_resource_totals_from_instances(host_name, instances): cpu_sum = 0 mem_sum = 0 hdd_sum = 0 for instance in instances: cpu_sum += instance['vcpus'] mem_sum += instance['memory_mb'] hdd_sum += instance['root_gb'] + instance['ephemeral_gb'] return {'resource': {'host': host_name, 'project': '(used_max)', 'cpu': cpu_sum, 'memory_mb': mem_sum, 'disk_gb': hdd_sum}} @staticmethod def _get_resources_by_project(host_name, instances): # Getting usage resource per project project_map = {} for instance in 
instances: resource = project_map.setdefault(instance['project_id'], {'host': host_name, 'project': instance['project_id'], 'cpu': 0, 'memory_mb': 0, 'disk_gb': 0}) resource['cpu'] += instance['vcpus'] resource['memory_mb'] += instance['memory_mb'] resource['disk_gb'] += (instance['root_gb'] + instance['ephemeral_gb']) return project_map @wsgi.Controller.api_version("2.1", "2.42") @wsgi.expected_errors(404) def show(self, req, id): """Shows the physical/usage resource given by hosts. :param id: hostname :returns: expected to use HostShowTemplate. ex.:: {'host': {'resource':D},..} D: {'host': 'hostname','project': 'admin', 'cpu': 1, 'memory_mb': 2048, 'disk_gb': 30} """ context = req.environ['nova.context'] context.can(hosts_policies.BASE_POLICY_NAME) host_name = id try: mapping = objects.HostMapping.get_by_host(context, host_name) nova_context.set_target_cell(context, mapping.cell_mapping) compute_node = ( objects.ComputeNode.get_first_node_by_host_for_old_compat( context, host_name)) instances = self.api.instance_get_all_by_host(context, host_name) except (exception.ComputeHostNotFound, exception.HostMappingNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) resources = [self._get_total_resources(host_name, compute_node)] resources.append(self._get_used_now_resources(host_name, compute_node)) resources.append(self._get_resource_totals_from_instances(host_name, instances)) by_proj_resources = self._get_resources_by_project(host_name, instances) for resource in six.itervalues(by_proj_resources): resources.append({'resource': resource}) return {'host': resources} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/hypervisors.py0000664000175000017500000004415100000000000022721 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
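Illustrative sketch, not part of the Nova source tree: the HostController.show() method above returns one '(total)', one '(used_now)' and one '(used_max)' row plus one row per project. A minimal client-side consumer of that response might look like the following; the endpoint URL, token and host name are placeholders, and the python-requests library is assumed.

import requests

NOVA = "http://controller:8774/v2.1"          # assumed Nova API endpoint
HEADERS = {
    "X-Auth-Token": "<keystone-token>",       # assumed pre-fetched token
    "X-OpenStack-Nova-API-Version": "2.42",   # os-hosts is capped at 2.42 above
}

def host_resource_report(host_name):
    """Return {project: usage dict} built from GET /os-hosts/{host}."""
    resp = requests.get(f"{NOVA}/os-hosts/{host_name}", headers=HEADERS)
    resp.raise_for_status()
    report = {}
    for entry in resp.json()["host"]:
        resource = entry["resource"]
        # '(total)', '(used_now)', '(used_max)' rows plus one row per project
        report[resource["project"]] = {
            "cpu": resource["cpu"],
            "memory_mb": resource["memory_mb"],
            "disk_gb": resource["disk_gb"],
        }
    return report

if __name__ == "__main__":
    print(host_resource_report("compute1.host.com"))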
"""The hypervisors admin extension.""" from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import strutils from oslo_utils import uuidutils import webob.exc from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import hypervisors as hyper_schema from nova.api.openstack.compute.views import hypervisors as hyper_view from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova import exception from nova.i18n import _ from nova.policies import hypervisors as hv_policies from nova import servicegroup from nova import utils LOG = logging.getLogger(__name__) UUID_FOR_ID_MIN_VERSION = '2.53' class HypervisorsController(wsgi.Controller): """The Hypervisors API controller for the OpenStack API.""" _view_builder_class = hyper_view.ViewBuilder def __init__(self): super(HypervisorsController, self).__init__() self.host_api = compute.HostAPI() self.servicegroup_api = servicegroup.API() def _view_hypervisor(self, hypervisor, service, detail, req, servers=None, with_servers=False, **kwargs): alive = self.servicegroup_api.service_is_up(service) # The 2.53 microversion returns the compute node uuid rather than id. uuid_for_id = api_version_request.is_supported( req, min_version=UUID_FOR_ID_MIN_VERSION) hyp_dict = { 'id': hypervisor.uuid if uuid_for_id else hypervisor.id, 'hypervisor_hostname': hypervisor.hypervisor_hostname, 'state': 'up' if alive else 'down', 'status': ('disabled' if service.disabled else 'enabled'), } if detail: for field in ('vcpus', 'memory_mb', 'local_gb', 'vcpus_used', 'memory_mb_used', 'local_gb_used', 'hypervisor_type', 'hypervisor_version', 'free_ram_mb', 'free_disk_gb', 'current_workload', 'running_vms', 'disk_available_least', 'host_ip'): hyp_dict[field] = getattr(hypervisor, field) service_id = service.uuid if uuid_for_id else service.id hyp_dict['service'] = { 'id': service_id, 'host': hypervisor.host, 'disabled_reason': service.disabled_reason, } if api_version_request.is_supported(req, min_version='2.28'): if hypervisor.cpu_info: hyp_dict['cpu_info'] = jsonutils.loads(hypervisor.cpu_info) else: hyp_dict['cpu_info'] = {} else: hyp_dict['cpu_info'] = hypervisor.cpu_info if servers: hyp_dict['servers'] = [dict(name=serv['name'], uuid=serv['uuid']) for serv in servers] # The 2.75 microversion adds 'servers' field always in response. # Empty list if there are no servers on hypervisors and it is # requested in request. elif with_servers and api_version_request.is_supported( req, min_version='2.75'): hyp_dict['servers'] = [] # Add any additional info if kwargs: hyp_dict.update(kwargs) return hyp_dict def _get_compute_nodes_by_name_pattern(self, context, hostname_match): compute_nodes = self.host_api.compute_node_search_by_hypervisor( context, hostname_match) if not compute_nodes: msg = (_("No hypervisor matching '%s' could be found.") % hostname_match) raise webob.exc.HTTPNotFound(explanation=msg) return compute_nodes def _get_hypervisors(self, req, detail=False, limit=None, marker=None, links=False): """Get hypervisors for the given request. :param req: nova.api.openstack.wsgi.Request for the GET request :param detail: If True, return a detailed response. :param limit: An optional user-supplied page limit. :param marker: An optional user-supplied marker for paging. :param links: If True, return links in the response for paging. 
""" context = req.environ['nova.context'] # The 2.53 microversion moves the search and servers routes into # GET /os-hypervisors and GET /os-hypervisors/detail with query # parameters. if api_version_request.is_supported( req, min_version=UUID_FOR_ID_MIN_VERSION): hypervisor_match = req.GET.get('hypervisor_hostname_pattern') with_servers = strutils.bool_from_string( req.GET.get('with_servers', False), strict=True) else: hypervisor_match = None with_servers = False if hypervisor_match is not None: # We have to check for 'limit' in the request itself because # the limit passed in is CONF.api.max_limit by default. if 'limit' in req.GET or marker: # Paging with hostname pattern isn't supported. raise webob.exc.HTTPBadRequest( _('Paging over hypervisors with the ' 'hypervisor_hostname_pattern query parameter is not ' 'supported.')) # Explicitly do not try to generate links when querying with the # hostname pattern since the request in the link would fail the # check above. links = False # Get all compute nodes with a hypervisor_hostname that matches # the given pattern. If none are found then it's a 404 error. compute_nodes = self._get_compute_nodes_by_name_pattern( context, hypervisor_match) else: # Get all compute nodes. try: compute_nodes = self.host_api.compute_node_get_all( context, limit=limit, marker=marker) except exception.MarkerNotFound: msg = _('marker [%s] not found') % marker raise webob.exc.HTTPBadRequest(explanation=msg) hypervisors_list = [] for hyp in compute_nodes: try: instances = None if with_servers: instances = self.host_api.instance_get_all_by_host( context, hyp.host) service = self.host_api.service_get_by_compute_host( context, hyp.host) hypervisors_list.append( self._view_hypervisor( hyp, service, detail, req, servers=instances, with_servers=with_servers)) except (exception.ComputeHostNotFound, exception.HostMappingNotFound): # The compute service could be deleted which doesn't delete # the compute node record, that has to be manually removed # from the database so we just ignore it when listing nodes. LOG.debug('Unable to find service for compute node %s. The ' 'service may be deleted and compute nodes need to ' 'be manually cleaned up.', hyp.host) hypervisors_dict = dict(hypervisors=hypervisors_list) if links: hypervisors_links = self._view_builder.get_links( req, hypervisors_list, detail) if hypervisors_links: hypervisors_dict['hypervisors_links'] = hypervisors_links return hypervisors_dict @wsgi.Controller.api_version(UUID_FOR_ID_MIN_VERSION) @validation.query_schema(hyper_schema.list_query_schema_v253, UUID_FOR_ID_MIN_VERSION) @wsgi.expected_errors((400, 404)) def index(self, req): """Starting with the 2.53 microversion, the id field in the response is the compute_nodes.uuid value. Also, the search and servers routes are superseded and replaced with query parameters for listing hypervisors by a hostname pattern and whether or not to include hosted servers in the response. 
""" limit, marker = common.get_limit_and_marker(req) return self._index(req, limit=limit, marker=marker, links=True) @wsgi.Controller.api_version("2.33", "2.52") # noqa @validation.query_schema(hyper_schema.list_query_schema_v233) @wsgi.expected_errors(400) def index(self, req): limit, marker = common.get_limit_and_marker(req) return self._index(req, limit=limit, marker=marker, links=True) @wsgi.Controller.api_version("2.1", "2.32") # noqa @wsgi.expected_errors(()) def index(self, req): return self._index(req) def _index(self, req, limit=None, marker=None, links=False): context = req.environ['nova.context'] context.can(hv_policies.BASE_POLICY_NAME % 'list', target={}) return self._get_hypervisors(req, detail=False, limit=limit, marker=marker, links=links) @wsgi.Controller.api_version(UUID_FOR_ID_MIN_VERSION) @validation.query_schema(hyper_schema.list_query_schema_v253, UUID_FOR_ID_MIN_VERSION) @wsgi.expected_errors((400, 404)) def detail(self, req): """Starting with the 2.53 microversion, the id field in the response is the compute_nodes.uuid value. Also, the search and servers routes are superseded and replaced with query parameters for listing hypervisors by a hostname pattern and whether or not to include hosted servers in the response. """ limit, marker = common.get_limit_and_marker(req) return self._detail(req, limit=limit, marker=marker, links=True) @wsgi.Controller.api_version("2.33", "2.52") # noqa @validation.query_schema(hyper_schema.list_query_schema_v233) @wsgi.expected_errors((400)) def detail(self, req): limit, marker = common.get_limit_and_marker(req) return self._detail(req, limit=limit, marker=marker, links=True) @wsgi.Controller.api_version("2.1", "2.32") # noqa @wsgi.expected_errors(()) def detail(self, req): return self._detail(req) def _detail(self, req, limit=None, marker=None, links=False): context = req.environ['nova.context'] context.can(hv_policies.BASE_POLICY_NAME % 'list-detail', target={}) return self._get_hypervisors(req, detail=True, limit=limit, marker=marker, links=links) @staticmethod def _validate_id(req, hypervisor_id): """Validates that the id is a uuid for microversions that require it. :param req: The HTTP request object which contains the requested microversion information. :param hypervisor_id: The provided hypervisor id. :raises: webob.exc.HTTPBadRequest if the requested microversion is greater than or equal to 2.53 and the id is not a uuid. :raises: webob.exc.HTTPNotFound if the requested microversion is less than 2.53 and the id is not an integer. """ expect_uuid = api_version_request.is_supported( req, min_version=UUID_FOR_ID_MIN_VERSION) if expect_uuid: if not uuidutils.is_uuid_like(hypervisor_id): msg = _('Invalid uuid %s') % hypervisor_id raise webob.exc.HTTPBadRequest(explanation=msg) else: try: utils.validate_integer(hypervisor_id, 'id') except exception.InvalidInput: msg = (_("Hypervisor with ID '%s' could not be found.") % hypervisor_id) raise webob.exc.HTTPNotFound(explanation=msg) @wsgi.Controller.api_version(UUID_FOR_ID_MIN_VERSION) @validation.query_schema(hyper_schema.show_query_schema_v253, UUID_FOR_ID_MIN_VERSION) @wsgi.expected_errors((400, 404)) def show(self, req, id): """The 2.53 microversion requires that the id is a uuid and as a result it can also return a 400 response if an invalid uuid is passed. The 2.53 microversion also supports the with_servers query parameter to include a list of servers on the given hypervisor if requested. 
""" with_servers = strutils.bool_from_string( req.GET.get('with_servers', False), strict=True) return self._show(req, id, with_servers) @wsgi.Controller.api_version("2.1", "2.52") # noqa F811 @wsgi.expected_errors(404) def show(self, req, id): return self._show(req, id) def _show(self, req, id, with_servers=False): context = req.environ['nova.context'] context.can(hv_policies.BASE_POLICY_NAME % 'show', target={}) self._validate_id(req, id) try: hyp = self.host_api.compute_node_get(context, id) instances = None if with_servers: instances = self.host_api.instance_get_all_by_host( context, hyp.host) service = self.host_api.service_get_by_compute_host( context, hyp.host) except (ValueError, exception.ComputeHostNotFound, exception.HostMappingNotFound): msg = _("Hypervisor with ID '%s' could not be found.") % id raise webob.exc.HTTPNotFound(explanation=msg) return dict(hypervisor=self._view_hypervisor( hyp, service, True, req, instances, with_servers)) @wsgi.expected_errors((400, 404, 501)) def uptime(self, req, id): context = req.environ['nova.context'] context.can(hv_policies.BASE_POLICY_NAME % 'uptime', target={}) self._validate_id(req, id) try: hyp = self.host_api.compute_node_get(context, id) except (ValueError, exception.ComputeHostNotFound): msg = _("Hypervisor with ID '%s' could not be found.") % id raise webob.exc.HTTPNotFound(explanation=msg) # Get the uptime try: host = hyp.host uptime = self.host_api.get_host_uptime(context, host) service = self.host_api.service_get_by_compute_host(context, host) except NotImplementedError: common.raise_feature_not_supported() except exception.ComputeServiceUnavailable as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) except exception.HostMappingNotFound: # NOTE(danms): This mirrors the compute_node_get() behavior # where the node is missing, resulting in NotFound instead of # BadRequest if we fail on the map lookup. msg = _("Hypervisor with ID '%s' could not be found.") % id raise webob.exc.HTTPNotFound(explanation=msg) return dict(hypervisor=self._view_hypervisor(hyp, service, False, req, uptime=uptime)) @wsgi.Controller.api_version('2.1', '2.52') @wsgi.expected_errors(404) def search(self, req, id): """Prior to microversion 2.53 you could search for hypervisors by a hostname pattern on a dedicated route. Starting with 2.53, searching by a hostname pattern is a query parameter in the GET /os-hypervisors index and detail methods. """ context = req.environ['nova.context'] context.can(hv_policies.BASE_POLICY_NAME % 'search', target={}) hypervisors = self._get_compute_nodes_by_name_pattern(context, id) try: return dict(hypervisors=[ self._view_hypervisor( hyp, self.host_api.service_get_by_compute_host(context, hyp.host), False, req) for hyp in hypervisors]) except exception.HostMappingNotFound: msg = _("No hypervisor matching '%s' could be found.") % id raise webob.exc.HTTPNotFound(explanation=msg) @wsgi.Controller.api_version('2.1', '2.52') @wsgi.expected_errors(404) def servers(self, req, id): """Prior to microversion 2.53 you could search for hypervisors by a hostname pattern and include servers on those hosts in the response on a dedicated route. Starting with 2.53, searching by a hostname pattern and including hosted servers is a query parameter in the GET /os-hypervisors index and detail methods. 
""" context = req.environ['nova.context'] context.can(hv_policies.BASE_POLICY_NAME % 'servers', target={}) compute_nodes = self._get_compute_nodes_by_name_pattern(context, id) hypervisors = [] for compute_node in compute_nodes: try: instances = self.host_api.instance_get_all_by_host(context, compute_node.host) service = self.host_api.service_get_by_compute_host( context, compute_node.host) except exception.HostMappingNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) hyp = self._view_hypervisor(compute_node, service, False, req, instances) hypervisors.append(hyp) return dict(hypervisors=hypervisors) @wsgi.expected_errors(()) def statistics(self, req): context = req.environ['nova.context'] context.can(hv_policies.BASE_POLICY_NAME % 'statistics', target={}) stats = self.host_api.compute_node_statistics(context) return dict(hypervisor_statistics=stats) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/image_metadata.py0000664000175000017500000001274100000000000023246 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from nova.api.openstack.api_version_request import \ MAX_IMAGE_META_PROXY_API_VERSION from nova.api.openstack import common from nova.api.openstack.compute.schemas import image_metadata from nova.api.openstack import wsgi from nova.api import validation from nova import exception from nova.i18n import _ from nova.image import glance class ImageMetadataController(wsgi.Controller): """The image metadata API controller for the OpenStack API.""" def __init__(self): super(ImageMetadataController, self).__init__() self.image_api = glance.API() def _get_image(self, context, image_id): try: return self.image_api.get(context, image_id) except exception.ImageNotAuthorized as e: raise exc.HTTPForbidden(explanation=e.format_message()) except exception.ImageNotFound: msg = _("Image not found.") raise exc.HTTPNotFound(explanation=msg) @wsgi.Controller.api_version("2.1", MAX_IMAGE_META_PROXY_API_VERSION) @wsgi.expected_errors((403, 404)) def index(self, req, image_id): """Returns the list of metadata for a given instance.""" context = req.environ['nova.context'] metadata = self._get_image(context, image_id)['properties'] return dict(metadata=metadata) @wsgi.Controller.api_version("2.1", MAX_IMAGE_META_PROXY_API_VERSION) @wsgi.expected_errors((403, 404)) def show(self, req, image_id, id): context = req.environ['nova.context'] metadata = self._get_image(context, image_id)['properties'] if id in metadata: return {'meta': {id: metadata[id]}} else: raise exc.HTTPNotFound() @wsgi.Controller.api_version("2.1", MAX_IMAGE_META_PROXY_API_VERSION) @wsgi.expected_errors((400, 403, 404)) @validation.schema(image_metadata.create) def create(self, req, image_id, body): context = req.environ['nova.context'] image = self._get_image(context, image_id) for key, value in body['metadata'].items(): 
image['properties'][key] = value common.check_img_metadata_properties_quota(context, image['properties']) try: image = self.image_api.update(context, image_id, image, data=None, purge_props=True) except exception.ImageNotAuthorized as e: raise exc.HTTPForbidden(explanation=e.format_message()) return dict(metadata=image['properties']) @wsgi.Controller.api_version("2.1", MAX_IMAGE_META_PROXY_API_VERSION) @wsgi.expected_errors((400, 403, 404)) @validation.schema(image_metadata.update) def update(self, req, image_id, id, body): context = req.environ['nova.context'] meta = body['meta'] if id not in meta: expl = _('Request body and URI mismatch') raise exc.HTTPBadRequest(explanation=expl) image = self._get_image(context, image_id) image['properties'][id] = meta[id] common.check_img_metadata_properties_quota(context, image['properties']) try: self.image_api.update(context, image_id, image, data=None, purge_props=True) except exception.ImageNotAuthorized as e: raise exc.HTTPForbidden(explanation=e.format_message()) return dict(meta=meta) @wsgi.Controller.api_version("2.1", MAX_IMAGE_META_PROXY_API_VERSION) @wsgi.expected_errors((400, 403, 404)) @validation.schema(image_metadata.update_all) def update_all(self, req, image_id, body): context = req.environ['nova.context'] image = self._get_image(context, image_id) metadata = body['metadata'] common.check_img_metadata_properties_quota(context, metadata) image['properties'] = metadata try: self.image_api.update(context, image_id, image, data=None, purge_props=True) except exception.ImageNotAuthorized as e: raise exc.HTTPForbidden(explanation=e.format_message()) return dict(metadata=metadata) @wsgi.Controller.api_version("2.1", MAX_IMAGE_META_PROXY_API_VERSION) @wsgi.expected_errors((403, 404)) @wsgi.response(204) def delete(self, req, image_id, id): context = req.environ['nova.context'] image = self._get_image(context, image_id) if id not in image['properties']: msg = _("Invalid metadata key") raise exc.HTTPNotFound(explanation=msg) image['properties'].pop(id) try: self.image_api.update(context, image_id, image, data=None, purge_props=True) except exception.ImageNotAuthorized as e: raise exc.HTTPForbidden(explanation=e.format_message()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/images.py0000664000175000017500000001250100000000000021563 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
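Illustrative sketch, not part of the Nova source tree: updating a single image property through the deprecated image-metadata proxy API implemented by ImageMetadataController.update() above. The URL, token and image UUID are placeholders; note the controller rejects requests where the key in the URI does not appear in the body.

import requests

NOVA = "http://controller:8774/v2.1"          # assumed Nova API endpoint
HEADERS = {"X-Auth-Token": "<keystone-token>", "Content-Type": "application/json"}

def set_image_meta(image_id, key, value):
    """PUT /images/{image_id}/metadata/{key}; the body key must match the URI key."""
    body = {"meta": {key: value}}
    resp = requests.put(f"{NOVA}/images/{image_id}/metadata/{key}",
                        json=body, headers=HEADERS)
    resp.raise_for_status()   # 400 on URI/body mismatch, 403 e.g. on quota/ownership
    return resp.json()["meta"]

print(set_image_meta("<image-uuid>", "os_distro", "ubuntu"))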
import webob.exc from nova.api.openstack.api_version_request \ import MAX_PROXY_API_SUPPORT_VERSION from nova.api.openstack import common from nova.api.openstack.compute.views import images as views_images from nova.api.openstack import wsgi from nova import exception from nova.i18n import _ from nova.image import glance SUPPORTED_FILTERS = { 'name': 'name', 'status': 'status', 'changes-since': 'changes-since', 'server': 'property-instance_uuid', 'type': 'property-image_type', 'minRam': 'min_ram', 'minDisk': 'min_disk', } class ImagesController(wsgi.Controller): """Base controller for retrieving/displaying images.""" _view_builder_class = views_images.ViewBuilder def __init__(self): super(ImagesController, self).__init__() self._image_api = glance.API() def _get_filters(self, req): """Return a dictionary of query param filters from the request. :param req: the Request object coming from the wsgi layer :retval a dict of key/value filters """ filters = {} for param in req.params: if param in SUPPORTED_FILTERS or param.startswith('property-'): # map filter name or carry through if property-* filter_name = SUPPORTED_FILTERS.get(param, param) filters[filter_name] = req.params.get(param) # ensure server filter is the instance uuid filter_name = 'property-instance_uuid' try: filters[filter_name] = filters[filter_name].rsplit('/', 1)[1] except (AttributeError, IndexError, KeyError): pass filter_name = 'status' if filter_name in filters: # The Image API expects us to use lowercase strings for status filters[filter_name] = filters[filter_name].lower() return filters @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(404) def show(self, req, id): """Return detailed information about a specific image. :param req: `wsgi.Request` object :param id: Image identifier """ context = req.environ['nova.context'] try: image = self._image_api.get(context, id) except (exception.ImageNotFound, exception.InvalidImageRef): explanation = _("Image not found.") raise webob.exc.HTTPNotFound(explanation=explanation) return self._view_builder.show(req, image) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((403, 404)) @wsgi.response(204) def delete(self, req, id): """Delete an image, if allowed. :param req: `wsgi.Request` object :param id: Image identifier (integer) """ context = req.environ['nova.context'] try: self._image_api.delete(context, id) except exception.ImageNotFound: explanation = _("Image not found.") raise webob.exc.HTTPNotFound(explanation=explanation) except exception.ImageNotAuthorized: # The image service raises this exception on delete if glanceclient # raises HTTPForbidden. explanation = _("You are not allowed to delete the image.") raise webob.exc.HTTPForbidden(explanation=explanation) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(400) def index(self, req): """Return an index listing of images available to the request. :param req: `wsgi.Request` object """ context = req.environ['nova.context'] filters = self._get_filters(req) page_params = common.get_pagination_params(req) try: images = self._image_api.get_all(context, filters=filters, **page_params) except exception.Invalid as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) return self._view_builder.index(req, images) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(400) def detail(self, req): """Return a detailed index listing of images available to the request. 
:param req: `wsgi.Request` object. """ context = req.environ['nova.context'] filters = self._get_filters(req) page_params = common.get_pagination_params(req) try: images = self._image_api.get_all(context, filters=filters, **page_params) except exception.Invalid as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) return self._view_builder.detail(req, images) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/instance_actions.py0000664000175000017500000002250000000000000023642 0ustar00zuulzuul00000000000000# Copyright 2013 Rackspace Hosting # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from oslo_utils import timeutils from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas \ import instance_actions as schema_instance_actions from nova.api.openstack.compute.views \ import instance_actions as instance_actions_view from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova import exception from nova.i18n import _ from nova.policies import instance_actions as ia_policies from nova import utils ACTION_KEYS = ['action', 'instance_uuid', 'request_id', 'user_id', 'project_id', 'start_time', 'message'] ACTION_KEYS_V258 = ['action', 'instance_uuid', 'request_id', 'user_id', 'project_id', 'start_time', 'message', 'updated_at'] EVENT_KEYS = ['event', 'start_time', 'finish_time', 'result', 'traceback'] class InstanceActionsController(wsgi.Controller): _view_builder_class = instance_actions_view.ViewBuilder def __init__(self): super(InstanceActionsController, self).__init__() self.compute_api = compute.API() self.action_api = compute.InstanceActionAPI() def _format_action(self, action_raw, action_keys): action = {} for key in action_keys: action[key] = action_raw.get(key) return action @staticmethod def _format_event(event_raw, project_id, show_traceback=False, show_host=False, show_hostid=False, show_details=False): event = {} for key in EVENT_KEYS: # By default, non-admins are not allowed to see traceback details. if key == 'traceback' and not show_traceback: continue event[key] = event_raw.get(key) # By default, non-admins are not allowed to see host. 
if show_host: event['host'] = event_raw['host'] if show_hostid: event['hostId'] = utils.generate_hostid(event_raw['host'], project_id) if show_details: event['details'] = event_raw['details'] return event @wsgi.Controller.api_version("2.1", "2.20") def _get_instance(self, req, context, server_id): return common.get_instance(self.compute_api, context, server_id) @wsgi.Controller.api_version("2.21") # noqa def _get_instance(self, req, context, server_id): with utils.temporary_mutation(context, read_deleted='yes'): return common.get_instance(self.compute_api, context, server_id) @wsgi.Controller.api_version("2.1", "2.57") @wsgi.expected_errors(404) def index(self, req, server_id): """Returns the list of actions recorded for a given instance.""" context = req.environ["nova.context"] instance = self._get_instance(req, context, server_id) context.can(ia_policies.BASE_POLICY_NAME % 'list', target={'project_id': instance.project_id}) actions_raw = self.action_api.actions_get(context, instance) actions = [self._format_action(action, ACTION_KEYS) for action in actions_raw] return {'instanceActions': actions} @wsgi.Controller.api_version("2.58") # noqa @wsgi.expected_errors((400, 404)) @validation.query_schema(schema_instance_actions.list_query_params_v266, "2.66") @validation.query_schema(schema_instance_actions.list_query_params_v258, "2.58", "2.65") def index(self, req, server_id): """Returns the list of actions recorded for a given instance.""" context = req.environ["nova.context"] instance = self._get_instance(req, context, server_id) context.can(ia_policies.BASE_POLICY_NAME % 'list', target={'project_id': instance.project_id}) search_opts = {} search_opts.update(req.GET) if 'changes-since' in search_opts: search_opts['changes-since'] = timeutils.parse_isotime( search_opts['changes-since']) if 'changes-before' in search_opts: search_opts['changes-before'] = timeutils.parse_isotime( search_opts['changes-before']) changes_since = search_opts.get('changes-since') if (changes_since and search_opts['changes-before'] < search_opts['changes-since']): msg = _('The value of changes-since must be less than ' 'or equal to changes-before.') raise exc.HTTPBadRequest(explanation=msg) limit, marker = common.get_limit_and_marker(req) try: actions_raw = self.action_api.actions_get(context, instance, limit=limit, marker=marker, filters=search_opts) except exception.MarkerNotFound as e: raise exc.HTTPBadRequest(explanation=e.format_message()) actions = [self._format_action(action, ACTION_KEYS_V258) for action in actions_raw] actions_dict = {'instanceActions': actions} actions_links = self._view_builder.get_links(req, server_id, actions) if actions_links: actions_dict['links'] = actions_links return actions_dict @wsgi.expected_errors(404) def show(self, req, server_id, id): """Return data about the given instance action.""" context = req.environ['nova.context'] instance = self._get_instance(req, context, server_id) context.can(ia_policies.BASE_POLICY_NAME % 'show', target={'project_id': instance.project_id}) action = self.action_api.action_get_by_request_id(context, instance, id) if action is None: msg = _("Action %s not found") % id raise exc.HTTPNotFound(explanation=msg) action_id = action['id'] if api_version_request.is_supported(req, min_version="2.58"): action = self._format_action(action, ACTION_KEYS_V258) else: action = self._format_action(action, ACTION_KEYS) # Prior to microversion 2.51, events would only be returned in the # response for admins by default policy rules. 
Starting in # microversion 2.51, events are returned for admin_or_owner (of the # instance) but the "traceback" field is only shown for admin users # by default. show_events = False show_traceback = False show_host = False if context.can(ia_policies.BASE_POLICY_NAME % 'events', target={'project_id': instance.project_id}, fatal=False): # For all microversions, the user can see all event details # including the traceback. show_events = show_traceback = True show_host = api_version_request.is_supported(req, '2.62') elif api_version_request.is_supported(req, '2.51'): # The user is not able to see all event details, but they can at # least see the non-traceback event details. show_events = True # An obfuscated hashed host id is returned since microversion 2.62 # for all users. show_hostid = api_version_request.is_supported(req, '2.62') if show_events: # NOTE(brinzhang): Event details are shown since microversion # 2.84. show_details = False support_v284 = api_version_request.is_supported(req, '2.84') if support_v284: show_details = context.can( ia_policies.BASE_POLICY_NAME % 'events:details', target={'project_id': instance.project_id}, fatal=False) events_raw = self.action_api.action_events_get(context, instance, action_id) # NOTE(takashin): The project IDs of instance action events # become null (None) when instance action events are created # by periodic tasks. If the project ID is null (None), # it causes an error when 'hostId' is generated. # If the project ID is null (None), pass the project ID of # the server instead of that of instance action events. action['events'] = [self._format_event( evt, action['project_id'] or instance.project_id, show_traceback=show_traceback, show_host=show_host, show_hostid=show_hostid, show_details=show_details ) for evt in events_raw] return {'instanceAction': action} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/instance_usage_audit_log.py0000664000175000017500000001106500000000000025341 0ustar00zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import webob.exc from nova.api.openstack import wsgi from nova.compute import api as compute from nova.compute import rpcapi as compute_rpcapi from nova.i18n import _ from nova.policies import instance_usage_audit_log as iual_policies from nova import utils class InstanceUsageAuditLogController(wsgi.Controller): def __init__(self): super(InstanceUsageAuditLogController, self).__init__() self.host_api = compute.HostAPI() @wsgi.expected_errors(()) def index(self, req): context = req.environ['nova.context'] context.can(iual_policies.BASE_POLICY_NAME % 'list', target={}) task_log = self._get_audit_task_logs(context) return {'instance_usage_audit_logs': task_log} @wsgi.expected_errors(400) def show(self, req, id): context = req.environ['nova.context'] context.can(iual_policies.BASE_POLICY_NAME % 'show', target={}) try: if '.' 
in id: before_date = datetime.datetime.strptime(str(id), "%Y-%m-%d %H:%M:%S.%f") else: before_date = datetime.datetime.strptime(str(id), "%Y-%m-%d %H:%M:%S") except ValueError: msg = _("Invalid timestamp for date %s") % id raise webob.exc.HTTPBadRequest(explanation=msg) task_log = self._get_audit_task_logs(context, before=before_date) return {'instance_usage_audit_log': task_log} def _get_audit_task_logs(self, context, before=None): """Returns a full log for all instance usage audit tasks on all computes. :param context: Nova request context. :param before: By default we look for the audit period most recently completed before this datetime. Has no effect if both begin and end are specified. """ begin, end = utils.last_completed_audit_period(before=before) task_logs = self.host_api.task_log_get_all(context, "instance_usage_audit", begin, end) # We do this in this way to include disabled compute services, # which can have instances on them. (mdragon) filters = {'topic': compute_rpcapi.RPC_TOPIC} services = self.host_api.service_get_all(context, filters=filters) hosts = set(serv['host'] for serv in services) seen_hosts = set() done_hosts = set() running_hosts = set() total_errors = 0 total_items = 0 for tlog in task_logs: seen_hosts.add(tlog['host']) if tlog['state'] == "DONE": done_hosts.add(tlog['host']) if tlog['state'] == "RUNNING": running_hosts.add(tlog['host']) total_errors += tlog['errors'] total_items += tlog['task_items'] log = {tl['host']: dict(state=tl['state'], instances=tl['task_items'], errors=tl['errors'], message=tl['message']) for tl in task_logs} missing_hosts = hosts - seen_hosts overall_status = "%s hosts done. %s errors." % ( 'ALL' if len(done_hosts) == len(hosts) else "%s of %s" % (len(done_hosts), len(hosts)), total_errors) return dict(period_beginning=str(begin), period_ending=str(end), num_hosts=len(hosts), num_hosts_done=len(done_hosts), num_hosts_running=len(running_hosts), num_hosts_not_run=len(missing_hosts), hosts_not_run=list(missing_hosts), total_instances=total_items, total_errors=total_errors, overall_status=overall_status, log=log) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/ips.py0000664000175000017500000000467300000000000021124 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from nova.api.openstack import common from nova.api.openstack.compute.views import addresses as views_addresses from nova.api.openstack import wsgi from nova.compute import api as compute from nova.i18n import _ from nova.policies import ips as ips_policies class IPsController(wsgi.Controller): """The servers addresses API controller for the OpenStack API.""" # Note(gmann): here using V2 view builder instead of V3 to have V2.1 # server ips response same as V2 which does not include "OS-EXT-IPS:type" # & "OS-EXT-IPS-MAC:mac_addr". 
If needed those can be added with # microversion by using V2.1 view builder. _view_builder_class = views_addresses.ViewBuilder def __init__(self): super(IPsController, self).__init__() self._compute_api = compute.API() @wsgi.expected_errors(404) def index(self, req, server_id): context = req.environ["nova.context"] instance = common.get_instance(self._compute_api, context, server_id) context.can(ips_policies.POLICY_ROOT % 'index', target={'project_id': instance.project_id}) networks = common.get_networks_for_instance(context, instance) return self._view_builder.index(networks) @wsgi.expected_errors(404) def show(self, req, server_id, id): context = req.environ["nova.context"] instance = common.get_instance(self._compute_api, context, server_id) context.can(ips_policies.POLICY_ROOT % 'show', target={'project_id': instance.project_id}) networks = common.get_networks_for_instance(context, instance) if id not in networks: msg = _("Instance is not a member of specified network") raise exc.HTTPNotFound(explanation=msg) return self._view_builder.show(networks[id], id) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/keypairs.py0000664000175000017500000002324600000000000022155 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Keypair management extension.""" import webob import webob.exc from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import keypairs from nova.api.openstack.compute.views import keypairs as keypairs_view from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute_api from nova import exception from nova.i18n import _ from nova.objects import keypair as keypair_obj from nova.policies import keypairs as kp_policies class KeypairController(wsgi.Controller): """Keypair API controller for the OpenStack API.""" _view_builder_class = keypairs_view.ViewBuilder def __init__(self): super(KeypairController, self).__init__() self.api = compute_api.KeypairAPI() @wsgi.Controller.api_version("2.10") @wsgi.response(201) @wsgi.expected_errors((400, 403, 409)) @validation.schema(keypairs.create_v210) def create(self, req, body): """Create or import keypair. A policy check restricts users from creating keys for other users params: keypair object with: name (required) - string public_key (optional) - string type (optional) - string user_id (optional) - string """ # handle optional user-id for admin only user_id = body['keypair'].get('user_id') return self._create(req, body, key_type=True, user_id=user_id) @wsgi.Controller.api_version("2.2", "2.9") # noqa @wsgi.response(201) @wsgi.expected_errors((400, 403, 409)) @validation.schema(keypairs.create_v22) def create(self, req, body): """Create or import keypair. Sending name will generate a key and return private_key and fingerprint. 
Keypair will have the type ssh or x509, specified by type. You can send a public_key to add an existing ssh/x509 key. params: keypair object with: name (required) - string public_key (optional) - string type (optional) - string """ return self._create(req, body, key_type=True) @wsgi.Controller.api_version("2.1", "2.1") # noqa @wsgi.expected_errors((400, 403, 409)) @validation.schema(keypairs.create_v20, "2.0", "2.0") @validation.schema(keypairs.create, "2.1", "2.1") def create(self, req, body): """Create or import keypair. Sending name will generate a key and return private_key and fingerprint. You can send a public_key to add an existing ssh key. params: keypair object with: name (required) - string public_key (optional) - string """ return self._create(req, body) def _create(self, req, body, user_id=None, key_type=False): context = req.environ['nova.context'] params = body['keypair'] name = common.normalize_name(params['name']) key_type_value = params.get('type', keypair_obj.KEYPAIR_TYPE_SSH) user_id = user_id or context.user_id context.can(kp_policies.POLICY_ROOT % 'create', target={'user_id': user_id}) return_priv_key = False try: if 'public_key' in params: keypair = self.api.import_key_pair( context, user_id, name, params['public_key'], key_type_value) else: keypair, private_key = self.api.create_key_pair( context, user_id, name, key_type_value) keypair['private_key'] = private_key return_priv_key = True except exception.KeypairLimitExceeded: msg = _("Quota exceeded, too many key pairs.") raise webob.exc.HTTPForbidden(explanation=msg) except exception.InvalidKeypair as exc: raise webob.exc.HTTPBadRequest(explanation=exc.format_message()) except exception.KeyPairExists as exc: raise webob.exc.HTTPConflict(explanation=exc.format_message()) return self._view_builder.create(keypair, private_key=return_priv_key, key_type=key_type) @wsgi.Controller.api_version("2.1", "2.1") @validation.query_schema(keypairs.delete_query_schema_v20) @wsgi.response(202) @wsgi.expected_errors(404) def delete(self, req, id): self._delete(req, id) @wsgi.Controller.api_version("2.2", "2.9") # noqa @validation.query_schema(keypairs.delete_query_schema_v20) @wsgi.response(204) @wsgi.expected_errors(404) def delete(self, req, id): self._delete(req, id) @wsgi.Controller.api_version("2.10") # noqa @validation.query_schema(keypairs.delete_query_schema_v275, '2.75') @validation.query_schema(keypairs.delete_query_schema_v210, '2.10', '2.74') @wsgi.response(204) @wsgi.expected_errors(404) def delete(self, req, id): # handle optional user-id for admin only user_id = self._get_user_id(req) self._delete(req, id, user_id=user_id) def _delete(self, req, id, user_id=None): """Delete a keypair with a given name.""" context = req.environ['nova.context'] # handle optional user-id for admin only user_id = user_id or context.user_id context.can(kp_policies.POLICY_ROOT % 'delete', target={'user_id': user_id}) try: self.api.delete_key_pair(context, user_id, id) except exception.KeypairNotFound as exc: raise webob.exc.HTTPNotFound(explanation=exc.format_message()) def _get_user_id(self, req): if 'user_id' in req.GET.keys(): user_id = req.GET.getall('user_id')[0] return user_id @wsgi.Controller.api_version("2.10") @validation.query_schema(keypairs.show_query_schema_v275, '2.75') @validation.query_schema(keypairs.show_query_schema_v210, '2.10', '2.74') @wsgi.expected_errors(404) def show(self, req, id): # handle optional user-id for admin only user_id = self._get_user_id(req) return self._show(req, id, key_type=True, user_id=user_id) 
@wsgi.Controller.api_version("2.2", "2.9") # noqa @validation.query_schema(keypairs.show_query_schema_v20) @wsgi.expected_errors(404) def show(self, req, id): return self._show(req, id, key_type=True) @wsgi.Controller.api_version("2.1", "2.1") # noqa @validation.query_schema(keypairs.show_query_schema_v20) @wsgi.expected_errors(404) def show(self, req, id): return self._show(req, id) def _show(self, req, id, key_type=False, user_id=None): """Return data for the given key name.""" context = req.environ['nova.context'] user_id = user_id or context.user_id context.can(kp_policies.POLICY_ROOT % 'show', target={'user_id': user_id}) try: keypair = self.api.get_key_pair(context, user_id, id) except exception.KeypairNotFound as exc: raise webob.exc.HTTPNotFound(explanation=exc.format_message()) return self._view_builder.show(keypair, key_type=key_type) @wsgi.Controller.api_version("2.35") @validation.query_schema(keypairs.index_query_schema_v275, '2.75') @validation.query_schema(keypairs.index_query_schema_v235, '2.35', '2.74') @wsgi.expected_errors(400) def index(self, req): user_id = self._get_user_id(req) return self._index(req, key_type=True, user_id=user_id, links=True) @wsgi.Controller.api_version("2.10", "2.34") # noqa @validation.query_schema(keypairs.index_query_schema_v210) @wsgi.expected_errors(()) def index(self, req): # handle optional user-id for admin only user_id = self._get_user_id(req) return self._index(req, key_type=True, user_id=user_id) @wsgi.Controller.api_version("2.2", "2.9") # noqa @validation.query_schema(keypairs.index_query_schema_v20) @wsgi.expected_errors(()) def index(self, req): return self._index(req, key_type=True) @wsgi.Controller.api_version("2.1", "2.1") # noqa @validation.query_schema(keypairs.index_query_schema_v20) @wsgi.expected_errors(()) def index(self, req): return self._index(req) def _index(self, req, key_type=False, user_id=None, links=False): """List of keypairs for a user.""" context = req.environ['nova.context'] user_id = user_id or context.user_id context.can(kp_policies.POLICY_ROOT % 'index', target={'user_id': user_id}) if api_version_request.is_supported(req, min_version='2.35'): limit, marker = common.get_limit_and_marker(req) else: limit = marker = None try: key_pairs = self.api.get_key_pairs( context, user_id, limit=limit, marker=marker) except exception.MarkerNotFound as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) return self._view_builder.index(req, key_pairs, key_type=key_type, links=links) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/limits.py0000664000175000017500000000731600000000000021627 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova.api.openstack.api_version_request \ import MAX_IMAGE_META_PROXY_API_VERSION from nova.api.openstack.api_version_request \ import MAX_PROXY_API_SUPPORT_VERSION from nova.api.openstack.api_version_request \ import MIN_WITHOUT_IMAGE_META_PROXY_API_VERSION from nova.api.openstack.api_version_request \ import MIN_WITHOUT_PROXY_API_SUPPORT_VERSION from nova.api.openstack.compute.schemas import limits from nova.api.openstack.compute.views import limits as limits_views from nova.api.openstack import wsgi from nova.api import validation from nova.policies import limits as limits_policies from nova import quota QUOTAS = quota.QUOTAS # This is a list of limits which needs to filter out from the API response. # This is due to the deprecation of network related proxy APIs, the related # limit should be removed from the API also. FILTERED_LIMITS_2_36 = ['floating_ips', 'security_groups', 'security_group_rules'] FILTERED_LIMITS_2_57 = list(FILTERED_LIMITS_2_36) FILTERED_LIMITS_2_57.extend(['injected_files', 'injected_file_content_bytes']) class LimitsController(wsgi.Controller): """Controller for accessing limits in the OpenStack API.""" @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(()) @validation.query_schema(limits.limits_query_schema) def index(self, req): return self._index(req) @wsgi.Controller.api_version(MIN_WITHOUT_PROXY_API_SUPPORT_VERSION, # noqa MAX_IMAGE_META_PROXY_API_VERSION) # noqa @wsgi.expected_errors(()) @validation.query_schema(limits.limits_query_schema) def index(self, req): return self._index(req, FILTERED_LIMITS_2_36) @wsgi.Controller.api_version( # noqa MIN_WITHOUT_IMAGE_META_PROXY_API_VERSION, '2.56') # noqa @wsgi.expected_errors(()) @validation.query_schema(limits.limits_query_schema) def index(self, req): return self._index(req, FILTERED_LIMITS_2_36, max_image_meta=False) @wsgi.Controller.api_version('2.57') # noqa @wsgi.expected_errors(()) @validation.query_schema(limits.limits_query_schema_275, '2.75') @validation.query_schema(limits.limits_query_schema, '2.57', '2.74') def index(self, req): return self._index(req, FILTERED_LIMITS_2_57, max_image_meta=False) def _index(self, req, filtered_limits=None, max_image_meta=True): """Return all global limit information.""" context = req.environ['nova.context'] context.can(limits_policies.BASE_POLICY_NAME, target={}) project_id = context.project_id if 'tenant_id' in req.GET: project_id = req.GET.get('tenant_id') context.can(limits_policies.OTHER_PROJECT_LIMIT_POLICY_NAME, target={'project_id': project_id}) quotas = QUOTAS.get_project_quotas(context, project_id, usages=True) builder = limits_views.ViewBuilder() return builder.build(req, quotas, filtered_limits=filtered_limits, max_image_meta=max_image_meta) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/lock_server.py0000664000175000017500000000473600000000000022647 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import lock_server from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova.policies import lock_server as ls_policies class LockServerController(wsgi.Controller): def __init__(self): super(LockServerController, self).__init__() self.compute_api = compute.API() @wsgi.response(202) @wsgi.expected_errors(404) @wsgi.action('lock') @validation.schema(lock_server.lock_v2_73, "2.73") def _lock(self, req, id, body): """Lock a server instance.""" context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, id) context.can(ls_policies.POLICY_ROOT % 'lock', target={'user_id': instance.user_id, 'project_id': instance.project_id}) reason = None if (api_version_request.is_supported(req, min_version='2.73') and body['lock'] is not None): reason = body['lock'].get('locked_reason') self.compute_api.lock(context, instance, reason=reason) @wsgi.response(202) @wsgi.expected_errors(404) @wsgi.action('unlock') def _unlock(self, req, id, body): """Unlock a server instance.""" context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, id) context.can(ls_policies.POLICY_ROOT % 'unlock', target={'project_id': instance.project_id}) if not self.compute_api.is_expected_locked_by(context, instance): context.can(ls_policies.POLICY_ROOT % 'unlock:unlock_override', target={'project_id': instance.project_id}) self.compute_api.unlock(context, instance) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/migrate_server.py0000664000175000017500000002165600000000000023347 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
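Illustrative sketch, not part of the Nova source tree: locking and unlocking a server through the actions handled by LockServerController above. The locked_reason field requires microversion 2.73 or later; the endpoint URL, token and server UUID are placeholders.

import requests

NOVA = "http://controller:8774/v2.1"          # assumed Nova API endpoint
HEADERS = {
    "X-Auth-Token": "<keystone-token>",
    "X-OpenStack-Nova-API-Version": "2.73",
    "Content-Type": "application/json",
}

def lock_server(server_id, reason=None):
    # {"lock": null} is also accepted; locked_reason is optional at >= 2.73
    body = {"lock": {"locked_reason": reason} if reason else None}
    resp = requests.post(f"{NOVA}/servers/{server_id}/action",
                         json=body, headers=HEADERS)
    resp.raise_for_status()   # 202 on success, 404 if the server does not exist

def unlock_server(server_id):
    resp = requests.post(f"{NOVA}/servers/{server_id}/action",
                         json={"unlock": None}, headers=HEADERS)
    resp.raise_for_status()

lock_server("<server-uuid>", reason="pending forensic snapshot")
unlock_server("<server-uuid>")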
from oslo_log import log as logging from oslo_utils import excutils from oslo_utils import strutils from webob import exc from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import migrate_server from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova import exception from nova.i18n import _ from nova.network import neutron from nova import objects from nova.policies import migrate_server as ms_policies LOG = logging.getLogger(__name__) MIN_COMPUTE_MOVE_BANDWIDTH = 39 class MigrateServerController(wsgi.Controller): def __init__(self): super(MigrateServerController, self).__init__() self.compute_api = compute.API() self.network_api = neutron.API() @wsgi.response(202) @wsgi.expected_errors((400, 403, 404, 409)) @wsgi.action('migrate') @validation.schema(migrate_server.migrate_v2_56, "2.56") def _migrate(self, req, id, body): """Permit admins to migrate a server to a new host.""" context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, id, expected_attrs=['flavor', 'services']) context.can(ms_policies.POLICY_ROOT % 'migrate', target={'project_id': instance.project_id}) host_name = None if (api_version_request.is_supported(req, min_version='2.56') and body['migrate'] is not None): host_name = body['migrate'].get('host') if common.instance_has_port_with_resource_request( instance.uuid, self.network_api): # TODO(gibi): Remove when nova only supports compute newer than # Train source_service = objects.Service.get_by_host_and_binary( context, instance.host, 'nova-compute') if source_service.version < MIN_COMPUTE_MOVE_BANDWIDTH: msg = _("The migrate action on a server with ports having " "resource requests, like a port with a QoS " "minimum bandwidth policy, is not yet supported " "on the source compute") raise exc.HTTPConflict(explanation=msg) try: self.compute_api.resize(req.environ['nova.context'], instance, host_name=host_name) except (exception.TooManyInstances, exception.QuotaError, exception.ForbiddenWithAccelerators) as e: raise exc.HTTPForbidden(explanation=e.format_message()) except (exception.InstanceIsLocked, exception.InstanceNotReady, exception.ServiceUnavailable) as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'migrate', id) except exception.InstanceNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) except (exception.ComputeHostNotFound, exception.CannotMigrateToSameHost) as e: raise exc.HTTPBadRequest(explanation=e.format_message()) @wsgi.response(202) @wsgi.expected_errors((400, 403, 404, 409)) @wsgi.action('os-migrateLive') @validation.schema(migrate_server.migrate_live, "2.0", "2.24") @validation.schema(migrate_server.migrate_live_v2_25, "2.25", "2.29") @validation.schema(migrate_server.migrate_live_v2_30, "2.30", "2.67") @validation.schema(migrate_server.migrate_live_v2_68, "2.68") def _migrate_live(self, req, id, body): """Permit admins to (live) migrate a server to a new host.""" context = req.environ["nova.context"] # NOTE(stephenfin): we need 'numa_topology' because of the # 'LiveMigrationTask._check_instance_has_no_numa' check in the # conductor instance = common.get_instance(self.compute_api, context, id, expected_attrs=['numa_topology']) context.can(ms_policies.POLICY_ROOT % 'migrate_live', target={'project_id': instance.project_id}) host = 
body["os-migrateLive"]["host"] block_migration = body["os-migrateLive"]["block_migration"] force = None async_ = api_version_request.is_supported(req, min_version='2.34') if api_version_request.is_supported(req, min_version='2.30'): force = self._get_force_param_for_live_migration(body, host) if api_version_request.is_supported(req, min_version='2.25'): if block_migration == 'auto': block_migration = None else: block_migration = strutils.bool_from_string(block_migration, strict=True) disk_over_commit = None else: disk_over_commit = body["os-migrateLive"]["disk_over_commit"] block_migration = strutils.bool_from_string(block_migration, strict=True) disk_over_commit = strutils.bool_from_string(disk_over_commit, strict=True) # We could potentially move this check to conductor and avoid the # extra API call to neutron when we support move operations with ports # having resource requests. if (common.instance_has_port_with_resource_request( instance.uuid, self.network_api) and not common.supports_port_resource_request_during_move()): LOG.warning("The os-migrateLive action on a server with ports " "having resource requests, like a port with a QoS " "minimum bandwidth policy, is not supported until " "every nova-compute is upgraded to Ussuri") msg = _("The os-migrateLive action on a server with ports having " "resource requests, like a port with a QoS minimum " "bandwidth policy, is not supported by this cluster right " "now") raise exc.HTTPBadRequest(explanation=msg) try: self.compute_api.live_migrate(context, instance, block_migration, disk_over_commit, host, force, async_) except (exception.NoValidHost, exception.ComputeServiceUnavailable, exception.InvalidHypervisorType, exception.InvalidCPUInfo, exception.UnableToMigrateToSelf, exception.DestinationHypervisorTooOld, exception.InvalidLocalStorage, exception.InvalidSharedStorage, exception.HypervisorUnavailable, exception.MigrationPreCheckError) as ex: if async_: with excutils.save_and_reraise_exception(): LOG.error("Unexpected exception received from " "conductor during pre-live-migration checks " "'%(ex)s'", {'ex': ex}) else: raise exc.HTTPBadRequest(explanation=ex.format_message()) except exception.OperationNotSupportedForSEV as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.ComputeHostNotFound as e: raise exc.HTTPBadRequest(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'os-migrateLive', id) except exception.ForbiddenWithAccelerators as e: raise exc.HTTPForbidden(explanation=e.format_message()) def _get_force_param_for_live_migration(self, body, host): force = body["os-migrateLive"].get("force", False) force = strutils.bool_from_string(force, strict=True) if force is True and not host: message = _("Can't force to a non-provided destination") raise exc.HTTPBadRequest(explanation=message) return force ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/migrations.py0000664000175000017500000002063300000000000022477 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import timeutils from webob import exc from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import migrations as schema_migrations from nova.api.openstack.compute.views import migrations as migrations_view from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova import exception from nova.i18n import _ from nova.objects import base as obj_base from nova.policies import migrations as migrations_policies class MigrationsController(wsgi.Controller): """Controller for accessing migrations in OpenStack API.""" _view_builder_class = migrations_view.ViewBuilder _collection_name = "servers/%s/migrations" def __init__(self): super(MigrationsController, self).__init__() self.compute_api = compute.API() def _output(self, req, migrations_obj, add_link=False, add_uuid=False, add_user_project=False): """Returns the desired output of the API from an object. From a MigrationsList's object this method returns a list of primitive objects with the only necessary fields. """ detail_keys = ['memory_total', 'memory_processed', 'memory_remaining', 'disk_total', 'disk_processed', 'disk_remaining'] # TODO(Shaohe Feng) we should share the in-progress list. live_migration_in_progress = ['queued', 'preparing', 'running', 'post-migrating'] # Note(Shaohe Feng): We need to leverage the oslo.versionedobjects. # Then we can pass the target version to it's obj_to_primitive. objects = obj_base.obj_to_primitive(migrations_obj) objects = [x for x in objects if not x['hidden']] for obj in objects: del obj['deleted'] del obj['deleted_at'] del obj['hidden'] del obj['cross_cell_move'] if not add_uuid: del obj['uuid'] if 'memory_total' in obj: for key in detail_keys: del obj[key] if not add_user_project: if 'user_id' in obj: del obj['user_id'] if 'project_id' in obj: del obj['project_id'] # NOTE(Shaohe Feng) above version 2.23, add migration_type for all # kinds of migration, but we only add links just for in-progress # live-migration. if add_link and obj['migration_type'] == "live-migration" and ( obj["status"] in live_migration_in_progress): obj["links"] = self._view_builder._get_links( req, obj["id"], self._collection_name % obj['instance_uuid']) elif add_link is False: del obj['migration_type'] return objects def _index(self, req, add_link=False, next_link=False, add_uuid=False, sort_dirs=None, sort_keys=None, limit=None, marker=None, allow_changes_since=False, allow_changes_before=False): context = req.environ['nova.context'] context.can(migrations_policies.POLICY_ROOT % 'index', target={}) search_opts = {} search_opts.update(req.GET) if 'changes-since' in search_opts: if allow_changes_since: search_opts['changes-since'] = timeutils.parse_isotime( search_opts['changes-since']) else: # Before microversion 2.59, the changes-since filter was not # supported in the DB API. However, the schema allowed # additionalProperties=True, so a user could pass it before # 2.59 and filter by the updated_at field if we don't remove # it from search_opts. 
del search_opts['changes-since'] if 'changes-before' in search_opts: if allow_changes_before: search_opts['changes-before'] = timeutils.parse_isotime( search_opts['changes-before']) changes_since = search_opts.get('changes-since') if (changes_since and search_opts['changes-before'] < search_opts['changes-since']): msg = _('The value of changes-since must be less than ' 'or equal to changes-before.') raise exc.HTTPBadRequest(explanation=msg) else: # Before microversion 2.59 the schema allowed # additionalProperties=True, so a user could pass # changes-before before 2.59 and filter by the updated_at # field if we don't remove it from search_opts. del search_opts['changes-before'] if sort_keys: try: migrations = self.compute_api.get_migrations_sorted( context, search_opts, sort_dirs=sort_dirs, sort_keys=sort_keys, limit=limit, marker=marker) except exception.MarkerNotFound as e: raise exc.HTTPBadRequest(explanation=e.format_message()) else: migrations = self.compute_api.get_migrations( context, search_opts) add_user_project = api_version_request.is_supported(req, '2.80') migrations = self._output(req, migrations, add_link, add_uuid, add_user_project) migrations_dict = {'migrations': migrations} if next_link: migrations_links = self._view_builder.get_links(req, migrations) if migrations_links: migrations_dict['migrations_links'] = migrations_links return migrations_dict @wsgi.Controller.api_version("2.1", "2.22") # noqa @wsgi.expected_errors(()) @validation.query_schema(schema_migrations.list_query_schema_v20, "2.0", "2.22") def index(self, req): """Return all migrations using the query parameters as filters.""" return self._index(req) @wsgi.Controller.api_version("2.23", "2.58") # noqa @wsgi.expected_errors(()) @validation.query_schema(schema_migrations.list_query_schema_v20, "2.23", "2.58") def index(self, req): """Return all migrations using the query parameters as filters.""" return self._index(req, add_link=True) @wsgi.Controller.api_version("2.59", "2.65") # noqa @wsgi.expected_errors(400) @validation.query_schema(schema_migrations.list_query_params_v259, "2.59", "2.65") def index(self, req): """Return all migrations using the query parameters as filters.""" limit, marker = common.get_limit_and_marker(req) return self._index(req, add_link=True, next_link=True, add_uuid=True, sort_keys=['created_at', 'id'], sort_dirs=['desc', 'desc'], limit=limit, marker=marker, allow_changes_since=True) @wsgi.Controller.api_version("2.66") # noqa @wsgi.expected_errors(400) @validation.query_schema(schema_migrations.list_query_params_v266, "2.66", "2.79") @validation.query_schema(schema_migrations.list_query_params_v280, "2.80") def index(self, req): """Return all migrations using the query parameters as filters.""" limit, marker = common.get_limit_and_marker(req) return self._index(req, add_link=True, next_link=True, add_uuid=True, sort_keys=['created_at', 'id'], sort_dirs=['desc', 'desc'], limit=limit, marker=marker, allow_changes_since=True, allow_changes_before=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/multinic.py0000664000175000017500000000512000000000000022141 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The multinic extension.""" from webob import exc from nova.api.openstack import common from nova.api.openstack.compute.schemas import multinic from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova import exception from nova.policies import multinic as multinic_policies class MultinicController(wsgi.Controller): """This API is deprecated from Microversion '2.44'.""" def __init__(self): super(MultinicController, self).__init__() self.compute_api = compute.API() @wsgi.Controller.api_version("2.1", "2.43") @wsgi.response(202) @wsgi.action('addFixedIp') @wsgi.expected_errors((400, 404)) @validation.schema(multinic.add_fixed_ip) def _add_fixed_ip(self, req, id, body): """Adds an IP on a given network to an instance.""" context = req.environ['nova.context'] context.can(multinic_policies.BASE_POLICY_NAME) instance = common.get_instance(self.compute_api, context, id) network_id = body['addFixedIp']['networkId'] try: self.compute_api.add_fixed_ip(context, instance, network_id) except exception.NoMoreFixedIps as e: raise exc.HTTPBadRequest(explanation=e.format_message()) @wsgi.Controller.api_version("2.1", "2.43") @wsgi.response(202) @wsgi.action('removeFixedIp') @wsgi.expected_errors((400, 404)) @validation.schema(multinic.remove_fixed_ip) def _remove_fixed_ip(self, req, id, body): """Removes an IP from an instance.""" context = req.environ['nova.context'] context.can(multinic_policies.BASE_POLICY_NAME) instance = common.get_instance(self.compute_api, context, id) address = body['removeFixedIp']['address'] try: self.compute_api.remove_fixed_ip(context, instance, address) except exception.FixedIpNotFoundForInstance as e: raise exc.HTTPBadRequest(explanation=e.format_message()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/networks.py0000664000175000017500000000731000000000000022174 0ustar00zuulzuul00000000000000# Copyright 2011 Grid Dynamics # Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
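# ---------------------------------------------------------------------------
# Illustrative sketch only; not part of the upstream networks module. The
# deprecated multinic actions shown above (multinic.py, capped at
# microversion 2.43) expect bodies of the following shape; the network UUID
# and the address are made-up placeholders.
add_fixed_ip_body = {
    "addFixedIp": {"networkId": "0bd018b2-0000-0000-0000-000000000000"}
}
remove_fixed_ip_body = {
    "removeFixedIp": {"address": "192.168.1.10"}
}
# Both bodies are POSTed to /servers/{server_id}/action.
# ---------------------------------------------------------------------------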
from webob import exc from nova.api.openstack.api_version_request \ import MAX_PROXY_API_SUPPORT_VERSION from nova.api.openstack import wsgi from nova import exception from nova.i18n import _ from nova.network import neutron from nova.policies import networks as net_policies def network_dict(context, network): if not network: return {} fields = ('id', 'cidr', 'netmask', 'gateway', 'broadcast', 'dns1', 'dns2', 'cidr_v6', 'gateway_v6', 'label', 'netmask_v6') admin_fields = ('created_at', 'updated_at', 'deleted_at', 'deleted', 'injected', 'bridge', 'vlan', 'vpn_public_address', 'vpn_public_port', 'vpn_private_address', 'dhcp_start', 'project_id', 'host', 'bridge_interface', 'multi_host', 'priority', 'rxtx_base', 'mtu', 'dhcp_server', 'enable_dhcp', 'share_address') # NOTE(mnaser): We display a limited set of fields so users can know what # networks are available, extra system-only fields are only visible if they # are an admin. if context.is_admin: fields += admin_fields result = {} for field in fields: # we only provide a limited number of fields now that nova-network is # gone (yes, two fields of thirty) if field == 'id': result[field] = network['id'] elif field == 'label': result[field] = network['name'] else: result[field] = None return result class NetworkController(wsgi.Controller): def __init__(self, network_api=None): super(NetworkController, self).__init__() # TODO(stephenfin): 'network_api' is only being passed for use by tests self.network_api = network_api or neutron.API() @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(()) def index(self, req): context = req.environ['nova.context'] context.can(net_policies.POLICY_ROOT % 'view') networks = self.network_api.get_all(context) result = [network_dict(context, net_ref) for net_ref in networks] return {'networks': result} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(404) def show(self, req, id): context = req.environ['nova.context'] context.can(net_policies.POLICY_ROOT % 'view') try: network = self.network_api.get(context, id) except exception.NetworkNotFound: msg = _("Network not found") raise exc.HTTPNotFound(explanation=msg) return {'network': network_dict(context, network)} @wsgi.expected_errors(410) @wsgi.action("disassociate") def _disassociate_host_and_project(self, req, id, body): raise exc.HTTPGone() @wsgi.expected_errors(410) def delete(self, req, id): raise exc.HTTPGone() @wsgi.expected_errors(410) def create(self, req, body): raise exc.HTTPGone() @wsgi.expected_errors(410) def add(self, req, body): raise exc.HTTPGone() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/networks_associate.py0000664000175000017500000000224700000000000024233 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
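# ---------------------------------------------------------------------------
# Illustrative sketch only; not part of the upstream networks_associate
# module. network_dict() in networks.py above now fills in only 'id' and
# 'label' (taken from the Neutron network's id and name); every other listed
# field comes back as None. A single entry from GET /os-networks therefore
# looks roughly like this (values are placeholders):
example_network = {
    "id": "b84cf2ba-0000-0000-0000-000000000000",
    "label": "private",
    "cidr": None,
    "gateway": None,
    # ...the remaining documented fields are likewise None
}
# ---------------------------------------------------------------------------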
from webob import exc from nova.api.openstack import wsgi class NetworkAssociateActionController(wsgi.Controller): """Network Association API Controller.""" @wsgi.action("disassociate_host") @wsgi.expected_errors(410) def _disassociate_host_only(self, req, id, body): raise exc.HTTPGone() @wsgi.action("disassociate_project") @wsgi.expected_errors(410) def _disassociate_project_only(self, req, id, body): raise exc.HTTPGone() @wsgi.action("associate_host") @wsgi.expected_errors(410) def _associate_host(self, req, id, body): raise exc.HTTPGone() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/pause_server.py0000664000175000017500000000564300000000000023032 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from nova.api.openstack import common from nova.api.openstack import wsgi from nova.compute import api as compute from nova import exception from nova.policies import pause_server as ps_policies class PauseServerController(wsgi.Controller): def __init__(self): super(PauseServerController, self).__init__() self.compute_api = compute.API() @wsgi.response(202) @wsgi.expected_errors((404, 409, 501)) @wsgi.action('pause') def _pause(self, req, id, body): """Permit Admins to pause the server.""" ctxt = req.environ['nova.context'] server = common.get_instance(self.compute_api, ctxt, id) ctxt.can(ps_policies.POLICY_ROOT % 'pause', target={'user_id': server.user_id, 'project_id': server.project_id}) try: self.compute_api.pause(ctxt, server) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'pause', id) except exception.InstanceNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) except NotImplementedError: common.raise_feature_not_supported() @wsgi.response(202) @wsgi.expected_errors((404, 409, 501)) @wsgi.action('unpause') def _unpause(self, req, id, body): """Permit Admins to unpause the server.""" ctxt = req.environ['nova.context'] server = common.get_instance(self.compute_api, ctxt, id) ctxt.can(ps_policies.POLICY_ROOT % 'unpause', target={'project_id': server.project_id}) try: self.compute_api.unpause(ctxt, server) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'unpause', id) except exception.InstanceNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) except NotImplementedError: common.raise_feature_not_supported() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/quota_classes.py0000664000175000017500000001253400000000000023172 
0ustar00zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import webob from nova.api.openstack.compute.schemas import quota_classes from nova.api.openstack import wsgi from nova.api import validation from nova import exception from nova import objects from nova.policies import quota_class_sets as qcs_policies from nova import quota from nova import utils QUOTAS = quota.QUOTAS # NOTE(gmann): Quotas which were returned in v2 but in v2.1 those # were not returned. Fixed in microversion 2.50. Bug#1693168. EXTENDED_QUOTAS = ['server_groups', 'server_group_members'] # NOTE(gmann): Network related quotas are filter out in # microversion 2.50. Bug#1701211. FILTERED_QUOTAS_2_50 = ["fixed_ips", "floating_ips", "security_group_rules", "security_groups"] # Microversion 2.57 removes personality (injected) files from the API. FILTERED_QUOTAS_2_57 = list(FILTERED_QUOTAS_2_50) FILTERED_QUOTAS_2_57.extend(['injected_files', 'injected_file_content_bytes', 'injected_file_path_bytes']) class QuotaClassSetsController(wsgi.Controller): supported_quotas = [] def __init__(self): super(QuotaClassSetsController, self).__init__() self.supported_quotas = QUOTAS.resources def _format_quota_set(self, quota_class, quota_set, filtered_quotas=None, exclude_server_groups=False): """Convert the quota object to a result dict.""" if quota_class: result = dict(id=str(quota_class)) else: result = {} original_quotas = copy.deepcopy(self.supported_quotas) if filtered_quotas: original_quotas = [resource for resource in original_quotas if resource not in filtered_quotas] # NOTE(gmann): Before microversion v2.50, v2.1 API does not return the # 'server_groups' & 'server_group_members' key in quota class API # response. 
if exclude_server_groups: for resource in EXTENDED_QUOTAS: original_quotas.remove(resource) for resource in original_quotas: if resource in quota_set: result[resource] = quota_set[resource] return dict(quota_class_set=result) @wsgi.Controller.api_version('2.1', '2.49') @wsgi.expected_errors(()) def show(self, req, id): return self._show(req, id, exclude_server_groups=True) @wsgi.Controller.api_version('2.50', '2.56') # noqa @wsgi.expected_errors(()) def show(self, req, id): return self._show(req, id, FILTERED_QUOTAS_2_50) @wsgi.Controller.api_version('2.57') # noqa @wsgi.expected_errors(()) def show(self, req, id): return self._show(req, id, FILTERED_QUOTAS_2_57) def _show(self, req, id, filtered_quotas=None, exclude_server_groups=False): context = req.environ['nova.context'] context.can(qcs_policies.POLICY_ROOT % 'show', target={}) values = QUOTAS.get_class_quotas(context, id) return self._format_quota_set(id, values, filtered_quotas, exclude_server_groups) @wsgi.Controller.api_version("2.1", "2.49") # noqa @wsgi.expected_errors(400) @validation.schema(quota_classes.update) def update(self, req, id, body): return self._update(req, id, body, exclude_server_groups=True) @wsgi.Controller.api_version("2.50", "2.56") # noqa @wsgi.expected_errors(400) @validation.schema(quota_classes.update_v250) def update(self, req, id, body): return self._update(req, id, body, FILTERED_QUOTAS_2_50) @wsgi.Controller.api_version("2.57") # noqa @wsgi.expected_errors(400) @validation.schema(quota_classes.update_v257) def update(self, req, id, body): return self._update(req, id, body, FILTERED_QUOTAS_2_57) def _update(self, req, id, body, filtered_quotas=None, exclude_server_groups=False): context = req.environ['nova.context'] context.can(qcs_policies.POLICY_ROOT % 'update', target={}) try: utils.check_string_length(id, 'quota_class_name', min_length=1, max_length=255) except exception.InvalidInput as e: raise webob.exc.HTTPBadRequest( explanation=e.format_message()) quota_class = id for key, value in body['quota_class_set'].items(): try: objects.Quotas.update_class(context, quota_class, key, value) except exception.QuotaClassNotFound: objects.Quotas.create_class(context, quota_class, key, value) values = QUOTAS.get_class_quotas(context, quota_class) return self._format_quota_set(None, values, filtered_quotas, exclude_server_groups) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/quota_sets.py0000664000175000017500000002702700000000000022516 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
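# ---------------------------------------------------------------------------
# Illustrative sketch only; not part of the upstream quota_sets module. The
# quota class controller shown above (quota_classes.py) takes an update body
# keyed by resource name; which quota keys appear in the response (and are
# accepted by the update schema) depends on the microversion: network-related
# quotas are filtered from 2.50 and injected-file quotas from 2.57. The
# values below are made-up placeholders for a
# PUT /os-quota-class-sets/{id} request.
quota_class_update_body = {
    "quota_class_set": {
        "instances": 20,
        "cores": 40,
        "ram": 51200,
    }
}
# ---------------------------------------------------------------------------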
from oslo_utils import strutils import six.moves.urllib.parse as urlparse import webob from nova.api.openstack.api_version_request \ import MAX_PROXY_API_SUPPORT_VERSION from nova.api.openstack.api_version_request \ import MIN_WITHOUT_PROXY_API_SUPPORT_VERSION from nova.api.openstack.compute.schemas import quota_sets from nova.api.openstack import identity from nova.api.openstack import wsgi from nova.api import validation import nova.conf from nova import exception from nova.i18n import _ from nova import objects from nova.policies import quota_sets as qs_policies from nova import quota CONF = nova.conf.CONF QUOTAS = quota.QUOTAS FILTERED_QUOTAS_2_36 = ["fixed_ips", "floating_ips", "security_group_rules", "security_groups"] FILTERED_QUOTAS_2_57 = list(FILTERED_QUOTAS_2_36) FILTERED_QUOTAS_2_57.extend(['injected_files', 'injected_file_content_bytes', 'injected_file_path_bytes']) class QuotaSetsController(wsgi.Controller): def _format_quota_set(self, project_id, quota_set, filtered_quotas): """Convert the quota object to a result dict.""" if project_id: result = dict(id=str(project_id)) else: result = {} for resource in QUOTAS.resources: if (resource not in filtered_quotas and resource in quota_set): result[resource] = quota_set[resource] return dict(quota_set=result) def _validate_quota_limit(self, resource, limit, minimum, maximum): def conv_inf(value): return float("inf") if value == -1 else value if conv_inf(limit) < conv_inf(minimum): msg = (_("Quota limit %(limit)s for %(resource)s must " "be greater than or equal to already used and " "reserved %(minimum)s.") % {'limit': limit, 'resource': resource, 'minimum': minimum}) raise webob.exc.HTTPBadRequest(explanation=msg) if conv_inf(limit) > conv_inf(maximum): msg = (_("Quota limit %(limit)s for %(resource)s must be " "less than or equal to %(maximum)s.") % {'limit': limit, 'resource': resource, 'maximum': maximum}) raise webob.exc.HTTPBadRequest(explanation=msg) def _get_quotas(self, context, id, user_id=None, usages=False): if user_id: values = QUOTAS.get_user_quotas(context, id, user_id, usages=usages) else: values = QUOTAS.get_project_quotas(context, id, usages=usages) if usages: # NOTE(melwitt): For the detailed quota view with usages, the API # returns a response in the format: # { # "quota_set": { # "cores": { # "in_use": 0, # "limit": 20, # "reserved": 0 # }, # ... # We've re-architected quotas to eliminate reservations, so we no # longer have a 'reserved' key returned from get_*_quotas, so set # it here to satisfy the REST API response contract. 
reserved = QUOTAS.get_reserved() for v in values.values(): v['reserved'] = reserved return values else: return {k: v['limit'] for k, v in values.items()} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(400) def show(self, req, id): return self._show(req, id, []) @wsgi.Controller.api_version( # noqa MIN_WITHOUT_PROXY_API_SUPPORT_VERSION, '2.56') @wsgi.expected_errors(400) def show(self, req, id): return self._show(req, id, FILTERED_QUOTAS_2_36) @wsgi.Controller.api_version('2.57') # noqa @wsgi.expected_errors(400) def show(self, req, id): return self._show(req, id, FILTERED_QUOTAS_2_57) @validation.query_schema(quota_sets.query_schema_275, '2.75') @validation.query_schema(quota_sets.query_schema, '2.0', '2.74') def _show(self, req, id, filtered_quotas): context = req.environ['nova.context'] context.can(qs_policies.POLICY_ROOT % 'show', {'project_id': id}) identity.verify_project_id(context, id) params = urlparse.parse_qs(req.environ.get('QUERY_STRING', '')) user_id = params.get('user_id', [None])[0] return self._format_quota_set(id, self._get_quotas(context, id, user_id=user_id), filtered_quotas=filtered_quotas) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(400) def detail(self, req, id): return self._detail(req, id, []) @wsgi.Controller.api_version( # noqa MIN_WITHOUT_PROXY_API_SUPPORT_VERSION, '2.56') @wsgi.expected_errors(400) def detail(self, req, id): return self._detail(req, id, FILTERED_QUOTAS_2_36) @wsgi.Controller.api_version('2.57') # noqa @wsgi.expected_errors(400) def detail(self, req, id): return self._detail(req, id, FILTERED_QUOTAS_2_57) @validation.query_schema(quota_sets.query_schema_275, '2.75') @validation.query_schema(quota_sets.query_schema, '2.0', '2.74') def _detail(self, req, id, filtered_quotas): context = req.environ['nova.context'] context.can(qs_policies.POLICY_ROOT % 'detail', {'project_id': id}) identity.verify_project_id(context, id) user_id = req.GET.get('user_id', None) return self._format_quota_set( id, self._get_quotas(context, id, user_id=user_id, usages=True), filtered_quotas=filtered_quotas) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(400) @validation.schema(quota_sets.update) def update(self, req, id, body): return self._update(req, id, body, []) @wsgi.Controller.api_version( # noqa MIN_WITHOUT_PROXY_API_SUPPORT_VERSION, '2.56') @wsgi.expected_errors(400) @validation.schema(quota_sets.update_v236) def update(self, req, id, body): return self._update(req, id, body, FILTERED_QUOTAS_2_36) @wsgi.Controller.api_version('2.57') # noqa @wsgi.expected_errors(400) @validation.schema(quota_sets.update_v257) def update(self, req, id, body): return self._update(req, id, body, FILTERED_QUOTAS_2_57) @validation.query_schema(quota_sets.query_schema_275, '2.75') @validation.query_schema(quota_sets.query_schema, '2.0', '2.74') def _update(self, req, id, body, filtered_quotas): context = req.environ['nova.context'] context.can(qs_policies.POLICY_ROOT % 'update', {'project_id': id}) identity.verify_project_id(context, id) project_id = id params = urlparse.parse_qs(req.environ.get('QUERY_STRING', '')) user_id = params.get('user_id', [None])[0] quota_set = body['quota_set'] # NOTE(stephenfin): network quotas were only used by nova-network and # therefore should be explicitly rejected if 'networks' in quota_set: raise webob.exc.HTTPBadRequest( explanation=_('The networks quota has been removed')) force_update = 
strutils.bool_from_string(quota_set.get('force', 'False')) settable_quotas = QUOTAS.get_settable_quotas(context, project_id, user_id=user_id) # NOTE(dims): Pass #1 - In this loop for quota_set.items(), we validate # min/max values and bail out if any of the items in the set is bad. valid_quotas = {} for key, value in body['quota_set'].items(): if key == 'force' or (not value and value != 0): continue # validate whether already used and reserved exceeds the new # quota, this check will be ignored if admin want to force # update value = int(value) if not force_update: minimum = settable_quotas[key]['minimum'] maximum = settable_quotas[key]['maximum'] self._validate_quota_limit(key, value, minimum, maximum) valid_quotas[key] = value # NOTE(dims): Pass #2 - At this point we know that all the # values are correct and we can iterate and update them all in one # shot without having to worry about rolling back etc as we have done # the validation up front in the loop above. for key, value in valid_quotas.items(): try: objects.Quotas.create_limit(context, project_id, key, value, user_id=user_id) except exception.QuotaExists: objects.Quotas.update_limit(context, project_id, key, value, user_id=user_id) # Note(gmann): Removed 'id' from update's response to make it same # as V2. If needed it can be added with microversion. return self._format_quota_set( None, self._get_quotas(context, id, user_id=user_id), filtered_quotas=filtered_quotas) @wsgi.Controller.api_version("2.0", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(400) def defaults(self, req, id): return self._defaults(req, id, []) @wsgi.Controller.api_version( # noqa MIN_WITHOUT_PROXY_API_SUPPORT_VERSION, '2.56') @wsgi.expected_errors(400) def defaults(self, req, id): return self._defaults(req, id, FILTERED_QUOTAS_2_36) @wsgi.Controller.api_version('2.57') # noqa @wsgi.expected_errors(400) def defaults(self, req, id): return self._defaults(req, id, FILTERED_QUOTAS_2_57) def _defaults(self, req, id, filtered_quotas): context = req.environ['nova.context'] context.can(qs_policies.POLICY_ROOT % 'defaults', {'project_id': id}) identity.verify_project_id(context, id) values = QUOTAS.get_defaults(context) return self._format_quota_set(id, values, filtered_quotas=filtered_quotas) # TODO(oomichi): Here should be 204(No Content) instead of 202 by v2.1 # +microversions because the resource quota-set has been deleted completely # when returning a response. @wsgi.expected_errors(()) @validation.query_schema(quota_sets.query_schema_275, '2.75') @validation.query_schema(quota_sets.query_schema, '2.0', '2.74') @wsgi.response(202) def delete(self, req, id): context = req.environ['nova.context'] context.can(qs_policies.POLICY_ROOT % 'delete', {'project_id': id}) params = urlparse.parse_qs(req.environ.get('QUERY_STRING', '')) user_id = params.get('user_id', [None])[0] if user_id: objects.Quotas.destroy_all_by_project_and_user( context, id, user_id) else: objects.Quotas.destroy_all_by_project(context, id) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/remote_consoles.py0000664000175000017500000002072300000000000023523 0ustar00zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import webob from nova.api.openstack import common from nova.api.openstack.compute.schemas import remote_consoles from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova import exception from nova.policies import remote_consoles as rc_policies class RemoteConsolesController(wsgi.Controller): def __init__(self): super(RemoteConsolesController, self).__init__() self.compute_api = compute.API() self.handlers = {'vnc': self.compute_api.get_vnc_console, 'spice': self.compute_api.get_spice_console, 'rdp': self.compute_api.get_rdp_console, 'serial': self.compute_api.get_serial_console, 'mks': self.compute_api.get_mks_console} @wsgi.Controller.api_version("2.1", "2.5") @wsgi.expected_errors((400, 404, 409, 501)) @wsgi.action('os-getVNCConsole') @validation.schema(remote_consoles.get_vnc_console) def get_vnc_console(self, req, id, body): """Get text console output.""" context = req.environ['nova.context'] context.can(rc_policies.BASE_POLICY_NAME) # If type is not supplied or unknown, get_vnc_console below will cope console_type = body['os-getVNCConsole'].get('type') instance = common.get_instance(self.compute_api, context, id) try: output = self.compute_api.get_vnc_console(context, instance, console_type) except exception.ConsoleTypeUnavailable as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) except exception.InstanceNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceNotReady as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) except NotImplementedError: common.raise_feature_not_supported() return {'console': {'type': console_type, 'url': output['url']}} @wsgi.Controller.api_version("2.1", "2.5") @wsgi.expected_errors((400, 404, 409, 501)) @wsgi.action('os-getSPICEConsole') @validation.schema(remote_consoles.get_spice_console) def get_spice_console(self, req, id, body): """Get text console output.""" context = req.environ['nova.context'] context.can(rc_policies.BASE_POLICY_NAME) # If type is not supplied or unknown, get_spice_console below will cope console_type = body['os-getSPICEConsole'].get('type') instance = common.get_instance(self.compute_api, context, id) try: output = self.compute_api.get_spice_console(context, instance, console_type) except exception.ConsoleTypeUnavailable as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) except exception.InstanceNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceNotReady as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) except NotImplementedError: common.raise_feature_not_supported() return {'console': {'type': console_type, 'url': output['url']}} @wsgi.Controller.api_version("2.1", "2.5") @wsgi.expected_errors((400, 404, 409, 501)) @wsgi.action('os-getRDPConsole') @validation.schema(remote_consoles.get_rdp_console) def get_rdp_console(self, req, id, body): """Get text console output.""" context = req.environ['nova.context'] context.can(rc_policies.BASE_POLICY_NAME) # If type is not supplied or unknown, get_rdp_console below 
will cope console_type = body['os-getRDPConsole'].get('type') instance = common.get_instance(self.compute_api, context, id) try: # NOTE(mikal): get_rdp_console() can raise InstanceNotFound, so # we still need to catch it here. output = self.compute_api.get_rdp_console(context, instance, console_type) except exception.ConsoleTypeUnavailable as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) except exception.InstanceNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceNotReady as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) except NotImplementedError: common.raise_feature_not_supported() return {'console': {'type': console_type, 'url': output['url']}} @wsgi.Controller.api_version("2.1", "2.5") @wsgi.expected_errors((400, 404, 409, 501)) @wsgi.action('os-getSerialConsole') @validation.schema(remote_consoles.get_serial_console) def get_serial_console(self, req, id, body): """Get connection to a serial console.""" context = req.environ['nova.context'] context.can(rc_policies.BASE_POLICY_NAME) # If type is not supplied or unknown get_serial_console below will cope console_type = body['os-getSerialConsole'].get('type') instance = common.get_instance(self.compute_api, context, id) try: output = self.compute_api.get_serial_console(context, instance, console_type) except exception.InstanceNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceNotReady as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) except (exception.ConsoleTypeUnavailable, exception.ImageSerialPortNumberInvalid, exception.ImageSerialPortNumberExceedFlavorValue, exception.SocketPortRangeExhaustedException) as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) except NotImplementedError: common.raise_feature_not_supported() return {'console': {'type': console_type, 'url': output['url']}} @wsgi.Controller.api_version("2.6") @wsgi.expected_errors((400, 404, 409, 501)) @validation.schema(remote_consoles.create_v26, "2.6", "2.7") @validation.schema(remote_consoles.create_v28, "2.8") def create(self, req, server_id, body): context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, server_id) context.can(rc_policies.BASE_POLICY_NAME, target={'project_id': instance.project_id}) protocol = body['remote_console']['protocol'] console_type = body['remote_console']['type'] try: handler = self.handlers.get(protocol) output = handler(context, instance, console_type) return {'remote_console': {'protocol': protocol, 'type': console_type, 'url': output['url']}} except exception.InstanceNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceNotReady as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) except (exception.ConsoleTypeInvalid, exception.ConsoleTypeUnavailable, exception.ImageSerialPortNumberInvalid, exception.ImageSerialPortNumberExceedFlavorValue, exception.SocketPortRangeExhaustedException) as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) except NotImplementedError: common.raise_feature_not_supported() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/rescue.py0000664000175000017500000001016500000000000021610 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this 
file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The rescue mode extension.""" from webob import exc from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import rescue from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute import nova.conf from nova import exception from nova.policies import rescue as rescue_policies from nova import utils CONF = nova.conf.CONF class RescueController(wsgi.Controller): def __init__(self): super(RescueController, self).__init__() self.compute_api = compute.API() # TODO(cyeoh): Should be responding here with 202 Accept # because rescue is an async call, but keep to 200 # for backwards compatibility reasons. @wsgi.expected_errors((400, 404, 409, 501)) @wsgi.action('rescue') @validation.schema(rescue.rescue) def _rescue(self, req, id, body): """Rescue an instance.""" context = req.environ["nova.context"] if body['rescue'] and 'adminPass' in body['rescue']: password = body['rescue']['adminPass'] else: password = utils.generate_password() instance = common.get_instance(self.compute_api, context, id) context.can(rescue_policies.BASE_POLICY_NAME, target={'user_id': instance.user_id, 'project_id': instance.project_id}) rescue_image_ref = None if body['rescue']: rescue_image_ref = body['rescue'].get('rescue_image_ref') allow_bfv_rescue = api_version_request.is_supported(req, '2.87') try: self.compute_api.rescue(context, instance, rescue_password=password, rescue_image_ref=rescue_image_ref, allow_bfv_rescue=allow_bfv_rescue) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'rescue', id) except exception.InvalidVolume as volume_error: raise exc.HTTPConflict(explanation=volume_error.format_message()) except ( exception.InstanceNotRescuable, exception.UnsupportedRescueImage, ) as non_rescuable: raise exc.HTTPBadRequest( explanation=non_rescuable.format_message()) if CONF.api.enable_instance_password: return {'adminPass': password} else: return {} @wsgi.response(202) @wsgi.expected_errors((404, 409, 501)) @wsgi.action('unrescue') def _unrescue(self, req, id, body): """Unrescue an instance.""" context = req.environ["nova.context"] instance = common.get_instance(self.compute_api, context, id) context.can(rescue_policies.UNRESCUE_POLICY_NAME, target={'project_id': instance.project_id}) try: self.compute_api.unrescue(context, instance) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'unrescue', id) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/rest_api_version_history.rst0000664000175000017500000011211400000000000025633 0ustar00zuulzuul00000000000000REST API Version History ======================== This documents the changes 
made to the REST API with every microversion change. The description for each version should be a verbose one which has enough information to be suitable for use in user documentation. 2.1 --- This is the initial version of the v2.1 API which supports microversions. The V2.1 API is from the REST API users' point of view exactly the same as v2.0 except with strong input validation. A user can specify a header in the API request:: X-OpenStack-Nova-API-Version: where ```` is any valid api version for this API. If no version is specified then the API will behave as if a version request of v2.1 was requested. 2.2 --- Added Keypair type. A user can request the creation of a certain 'type' of keypair (``ssh`` or ``x509``) in the ``os-keypairs`` plugin If no keypair type is specified, then the default ``ssh`` type of keypair is created. Fixes status code for ``os-keypairs`` create method from 200 to 201 Fixes status code for ``os-keypairs`` delete method from 202 to 204 2.3 (Maximum in Kilo) --------------------- Exposed additional attributes in ``os-extended-server-attributes``: ``reservation_id``, ``launch_index``, ``ramdisk_id``, ``kernel_id``, ``hostname``, ``root_device_name``, ``userdata``. Exposed ``delete_on_termination`` for ``volumes_attached`` in ``os-extended-volumes``. This change is required for the extraction of EC2 API into a standalone service. It exposes necessary properties absent in public nova APIs yet. Add info for Standalone EC2 API to cut access to Nova DB. 2.4 --- Show the ``reserved`` status on a ``FixedIP`` object in the ``os-fixed-ips`` API extension. The extension allows one to ``reserve`` and ``unreserve`` a fixed IP but the show method does not report the current status. 2.5 --- Before version 2.5, the command ``nova list --ip6 xxx`` returns all servers for non-admins, as the filter option is silently discarded. There is no reason to treat ip6 different from ip, though, so we just add this option to the allowed list. 2.6 --- A new API for getting remote console is added:: POST /servers//remote-consoles { "remote_console": { "protocol": ["vnc"|"rdp"|"serial"|"spice"], "type": ["novnc"|"xpvnc"|"rdp-html5"|"spice-html5"|"serial"] } } Example response:: { "remote_console": { "protocol": "vnc", "type": "novnc", "url": "http://example.com:6080/vnc_auto.html?path=%3Ftoken%3DXYZ" } } The old APIs ``os-getVNCConsole``, ``os-getSPICEConsole``, ``os-getSerialConsole`` and ``os-getRDPConsole`` are removed. 2.7 --- Check the ``is_public`` attribute of a flavor before adding tenant access to it. Reject the request with ``HTTPConflict`` error. 2.8 --- Add ``mks`` protocol and ``webmks`` type for remote consoles. 2.9 --- Add a new ``locked`` attribute to the detailed view, update, and rebuild action. ``locked`` will be ``true`` if anyone is currently holding a lock on the server, ``false`` otherwise. 2.10 ---- Added ``user_id`` parameter to ``os-keypairs`` plugin, as well as a new property in the request body, for the create operation. Administrators will be able to list, get details and delete keypairs owned by users other than themselves and to create new keypairs on behalf of their users. 2.11 ---- Exposed attribute ``forced_down`` for ``os-services``. Added ability to change the ``forced_down`` attribute by calling an update. 2.12 (Maximum in Liberty) ------------------------- Exposes VIF ``net_id`` attribute in ``os-virtual-interfaces``. 
User will be able to get Virtual Interfaces ``net_id`` in Virtual Interfaces list and can determine in which network a Virtual Interface is plugged into. 2.13 ---- Add information ``project_id`` and ``user_id`` to ``os-server-groups`` API response data. 2.14 ---- Remove ``onSharedStorage`` parameter from server's evacuate action. Nova will automatically detect if the instance is on shared storage. ``adminPass`` is removed from the response body. The user can get the password with the server's ``os-server-password`` action. 2.15 ---- From this version of the API users can choose 'soft-affinity' and 'soft-anti-affinity' rules too for server-groups. 2.16 ---- Exposes new ``host_status`` attribute for servers/detail and servers/{server_id}. Ability to get nova-compute status when querying servers. By default, this is only exposed to cloud administrators. 2.17 ---- Add a new API for triggering crash dump in an instance. Different operation systems in instance may need different configurations to trigger crash dump. 2.18 ---- Establishes a set of routes that makes project_id an optional construct in v2.1. 2.19 ---- Allow the user to set and get the server description. The user will be able to set the description when creating, rebuilding, or updating a server, and get the description as part of the server details. 2.20 ---- From this version of the API user can call detach and attach volumes for instances which are in ``shelved`` and ``shelved_offloaded`` state. 2.21 ---- The ``os-instance-actions`` API now returns information from deleted instances. 2.22 ---- A new resource, ``servers:migrations``, is added. A new API to force live migration to complete added:: POST /servers//migrations//action { "force_complete": null } 2.23 ---- From this version of the API users can get the migration summary list by index API or the information of a specific migration by get API. Add ``migration_type`` for old ``/os-migrations`` API, also add ``ref`` link to the ``/servers/{uuid}/migrations/{id}`` for it when the migration is an in-progress live-migration. 2.24 ---- A new API call to cancel a running live migration:: DELETE /servers//migrations/ 2.25 (Maximum in Mitaka) ------------------------ Modify input parameter for ``os-migrateLive``. The ``block_migration`` field now supports an ``auto`` value and the ``disk_over_commit`` flag is removed. 2.26 ---- Added support of server tags. A user can create, update, delete or check existence of simple string tags for servers by the ``os-server-tags`` plugin. Tags have the following schema restrictions: * Tag is a Unicode bytestring no longer than 60 characters. * Tag is a non-empty string. * '/' is not allowed to be in a tag name * Comma is not allowed to be in a tag name in order to simplify requests that specify lists of tags * All other characters are allowed to be in a tag name * Each server can have up to 50 tags. The resource point for these operations is ``/servers//tags``. A user can add a single tag to the server by making a ``PUT`` request to ``/servers//tags/``. where ```` is any valid tag name. A user can replace **all** current server tags to the new set of tags by making a ``PUT`` request to the ``/servers//tags``. The new set of tags must be specified in request body. This set must be in list ``tags``. A user can remove specified tag from the server by making a ``DELETE`` request to ``/servers//tags/``. where ```` is tag name which user wants to remove. 
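As a quick illustration of the tag calls described so far, replacing the
whole tag set and then removing a single tag could look like the following
(the tag values are placeholders, not values taken from this document)::

    PUT /servers/{server_id}/tags
    {
        "tags": ["db", "web", "production"]
    }

    DELETE /servers/{server_id}/tags/production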
A user can remove **all** tags from the server by making a ``DELETE`` request to the ``/servers//tags``. A user can get a set of server tags with information about server by making a ``GET`` request to ``/servers/``. Request returns dictionary with information about specified server, including list ``tags``:: { 'id': {server_id}, ... 'tags': ['foo', 'bar', 'baz'] } A user can get **only** a set of server tags by making a ``GET`` request to ``/servers//tags``. Response :: { 'tags': ['foo', 'bar', 'baz'] } A user can check if a tag exists or not on a server by making a ``GET`` request to ``/servers/{server_id}/tags/{tag}``. Request returns ``204 No Content`` if tag exist on a server or ``404 Not Found`` if tag doesn't exist on a server. A user can filter servers in ``GET /servers`` request by new filters: * ``tags`` * ``tags-any`` * ``not-tags`` * ``not-tags-any`` These filters can be combined. Also user can use more than one string tags for each filter. In this case string tags for each filter must be separated by comma. For example:: GET /servers?tags=red&tags-any=green,orange 2.27 ---- Added support for the new form of microversion headers described in the `Microversion Specification `_. Both the original form of header and the new form is supported. 2.28 ---- Nova API ``hypervisor.cpu_info`` change from string to JSON object. From this version of the API the hypervisor's ``cpu_info`` field will be returned as JSON object (not string) by sending GET request to the ``/v2.1/os-hypervisors/{hypervisor_id}``. 2.29 ---- Updates the POST request body for the ``evacuate`` action to include the optional ``force`` boolean field defaulted to False. Also changes the evacuate action behaviour when providing a ``host`` string field by calling the nova scheduler to verify the provided host unless the ``force`` attribute is set. 2.30 ---- Updates the POST request body for the ``live-migrate`` action to include the optional ``force`` boolean field defaulted to False. Also changes the live-migrate action behaviour when providing a ``host`` string field by calling the nova scheduler to verify the provided host unless the ``force`` attribute is set. 2.31 ---- Fix ``os-console-auth-tokens`` to return connection info for all types of tokens, not just RDP. 2.32 ---- Adds an optional, arbitrary 'tag' item to the 'networks' item in the server boot request body. In addition, every item in the block_device_mapping_v2 array can also have an optional, arbitrary 'tag' item. These tags are used to identify virtual device metadata, as exposed in the metadata API and on the config drive. For example, a network interface on the virtual PCI bus tagged with 'nic1' will appear in the metadata along with its bus (PCI), bus address (ex: 0000:00:02.0), MAC address, and tag ('nic1'). .. note:: A bug has caused the tag attribute to no longer be accepted for networks starting with version 2.37 and for block_device_mapping_v2 starting with version 2.33. In other words, networks could only be tagged between versions 2.32 and 2.36 inclusively and block devices only in version 2.32. As of version 2.42 the tag attribute has been restored and both networks and block devices can be tagged again. 2.33 ---- Support pagination for hypervisor by accepting limit and marker from the GET API request:: GET /v2.1/{tenant_id}/os-hypervisors?marker={hypervisor_id}&limit={limit} In the context of device tagging at server create time, 2.33 also removes the tag attribute from block_device_mapping_v2. 
This is a bug that is fixed in 2.42, in which the tag attribute is reintroduced. 2.34 ---- Checks in ``os-migrateLive`` before live-migration actually starts are now made in background. ``os-migrateLive`` is not throwing `400 Bad Request` if pre-live-migration checks fail. 2.35 ---- Added pagination support for keypairs. Optional parameters 'limit' and 'marker' were added to GET /os-keypairs request, the default sort_key was changed to 'name' field as ASC order, the generic request format is:: GET /os-keypairs?limit={limit}&marker={kp_name} .. _2.36 microversion: 2.36 ---- All the APIs which proxy to another service were deprecated in this version, also the fping API. Those APIs will return 404 with Microversion 2.36. The network related quotas and limits are removed from API also. The deprecated API endpoints as below:: '/images' '/os-networks' '/os-tenant-networks' '/os-fixed-ips' '/os-floating-ips' '/os-floating-ips-bulk' '/os-floating-ip-pools' '/os-floating-ip-dns' '/os-security-groups' '/os-security-group-rules' '/os-security-group-default-rules' '/os-volumes' '/os-snapshots' '/os-baremetal-nodes' '/os-fping' .. note:: A `regression`__ was introduced in this microversion which broke the ``force`` parameter in the ``PUT /os-quota-sets`` API. The fix will have to be applied to restore this functionality. __ https://bugs.launchpad.net/nova/+bug/1733886 .. versionchanged:: 18.0.0 The ``os-fping`` API was completely removed in the 18.0.0 (Rocky) release. On deployments newer than this, the API will return HTTP 410 (Gone) regardless of the requested microversion. .. versionchanged:: 21.0.0 The ``os-security-group-default-rules`` API was completely removed in the 21.0.0 (Ussuri) release. On deployments newer than this, the APIs will return HTTP 410 (Gone) regardless of the requested microversion. .. versionchanged:: 21.0.0 The ``os-networks`` API was partially removed in the 21.0.0 (Ussuri) release. On deployments newer than this, some endpoints of the API will return HTTP 410 (Gone) regardless of the requested microversion. .. versionchanged:: 21.0.0 The ``os-tenant-networks`` API was partially removed in the 21.0.0 (Ussuri) release. On deployments newer than this, some endpoints of the API will return HTTP 410 (Gone) regardless of the requested microversion. 2.37 ---- Added support for automatic allocation of networking, also known as "Get Me a Network". With this microversion, when requesting the creation of a new server (or servers) the ``networks`` entry in the ``server`` portion of the request body is required. The ``networks`` object in the request can either be a list or an enum with values: #. *none* which means no networking will be allocated for the created server(s). #. *auto* which means either a network that is already available to the project will be used, or if one does not exist, will be automatically created for the project. Automatic network allocation for a project only happens once for a project. Subsequent requests using *auto* for the same project will reuse the network that was previously allocated. Also, the ``uuid`` field in the ``networks`` object in the server create request is now strictly enforced to be in UUID format. In the context of device tagging at server create time, 2.37 also removes the tag attribute from networks. This is a bug that is fixed in 2.42, in which the tag attribute is reintroduced. 
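As a short illustration of the 2.37 behaviour described above, a server
create request that asks Nova to allocate networking automatically could use
a body like the following (the name and the image and flavor references are
placeholders, not values taken from this document)::

    POST /servers
    {
        "server": {
            "name": "example-server",
            "imageRef": "<image-uuid>",
            "flavorRef": "<flavor-id>",
            "networks": "auto"
        }
    }

Specifying ``"networks": "none"`` instead boots the server with no networking
allocated.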
2.38 (Maximum in Newton) ------------------------ Before version 2.38, the command ``nova list --status invalid_status`` was returning empty list for non admin user and 500 InternalServerError for admin user. As there are sufficient statuses defined already, any invalid status should not be accepted. From this version of the API admin as well as non admin user will get 400 HTTPBadRequest if invalid status is passed to nova list command. 2.39 ---- Deprecates image-metadata proxy API that is just a proxy for Glance API to operate the image metadata. Also removes the extra quota enforcement with Nova `metadata` quota (quota checks for 'createImage' and 'createBackup' actions in Nova were removed). After this version Glance configuration option `image_property_quota` should be used to control the quota of image metadatas. Also, removes the `maxImageMeta` field from `os-limits` API response. 2.40 ---- Optional query parameters ``limit`` and ``marker`` were added to the ``os-simple-tenant-usage`` endpoints for pagination. If a limit isn't provided, the configurable ``max_limit`` will be used which currently defaults to 1000. :: GET /os-simple-tenant-usage?limit={limit}&marker={instance_uuid} GET /os-simple-tenant-usage/{tenant_id}?limit={limit}&marker={instance_uuid} A tenant's usage statistics may span multiple pages when the number of instances exceeds limit, and API consumers will need to stitch together the aggregate results if they still want totals for all instances in a specific time window, grouped by tenant. Older versions of the ``os-simple-tenant-usage`` endpoints will not accept these new paging query parameters, but they will start to silently limit by ``max_limit`` to encourage the adoption of this new microversion, and circumvent the existing possibility of DoS-like usage requests when there are thousands of instances. 2.41 ---- The 'uuid' attribute of an aggregate is now returned from calls to the `/os-aggregates` endpoint. This attribute is auto-generated upon creation of an aggregate. The `os-aggregates` API resource endpoint remains an administrator-only API. 2.42 (Maximum in Ocata) ----------------------- In the context of device tagging at server create time, a bug has caused the tag attribute to no longer be accepted for networks starting with version 2.37 and for block_device_mapping_v2 starting with version 2.33. Microversion 2.42 restores the tag parameter to both networks and block_device_mapping_v2, allowing networks and block devices to be tagged again. 2.43 ---- The ``os-hosts`` API is deprecated as of the 2.43 microversion. Requests made with microversion >= 2.43 will result in a 404 error. To list and show host details, use the ``os-hypervisors`` API. To enable or disable a service, use the ``os-services`` API. There is no replacement for the `shutdown`, `startup`, `reboot`, or `maintenance_mode` actions as those are system-level operations which should be outside of the control of the compute service. 2.44 ---- The following APIs which are considered as proxies of Neutron networking API, are deprecated and will result in a 404 error response in new Microversion:: POST /servers/{server_uuid}/action { "addFixedIp": {...} } POST /servers/{server_uuid}/action { "removeFixedIp": {...} } POST /servers/{server_uuid}/action { "addFloatingIp": {...} } POST /servers/{server_uuid}/action { "removeFloatingIp": {...} } Those server actions can be replaced by calling the Neutron API directly. 
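
For example, instead of the ``addFloatingIp`` server action, a floating IP
can be associated with one of the server's ports directly through the Neutron
API; a rough sketch (the port UUID is a placeholder) is::

    PUT /v2.0/floatingips/{floatingip_id}

    {
        "floatingip": {
            "port_id": "ce531f90-199f-48c0-816c-13e38010b442"
        }
    }

Setting ``port_id`` to ``null`` in the same request disassociates the
floating IP, which covers the ``removeFloatingIp`` case.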
The nova-network specific API to query the server's interfaces is deprecated:: GET /servers/{server_uuid}/os-virtual-interfaces To query attached neutron interfaces for a specific server, the API `GET /servers/{server_uuid}/os-interface` can be used. 2.45 ---- The ``createImage`` and ``createBackup`` server action APIs no longer return a ``Location`` header in the response for the snapshot image, they now return a json dict in the response body with an ``image_id`` key and uuid value. 2.46 ---- The request_id created for every inbound request is now returned in ``X-OpenStack-Request-ID`` in addition to ``X-Compute-Request-ID`` to be consistent with the rest of OpenStack. This is a signaling only microversion, as these header settings happen well before microversion processing. 2.47 ---- Replace the ``flavor`` name/ref with the actual flavor details from the embedded flavor object when displaying server details. Requests made with microversion >= 2.47 will no longer return the flavor ID/link but instead will return a subset of the flavor details. If the user is prevented by policy from indexing extra-specs, then the ``extra_specs`` field will not be included in the flavor information. 2.48 ---- Before version 2.48, VM diagnostics response was just a 'blob' of data returned by each hypervisor. From this version VM diagnostics response is standardized. It has a set of fields which each hypervisor will try to fill. If a hypervisor driver is unable to provide a specific field then this field will be reported as 'None'. 2.49 ---- Continuing from device role tagging at server create time introduced in version 2.32 and later fixed in 2.42, microversion 2.49 allows the attachment of network interfaces and volumes with an optional ``tag`` parameter. This tag is used to identify the virtual devices in the guest and is exposed in the metadata API. Because the config drive cannot be updated while the guest is running, it will only contain metadata of devices that were tagged at boot time. Any changes made to devices while the instance is running - be it detaching a tagged device or performing a tagged device attachment - will not be reflected in the config drive. Tagged volume attachment is not supported for shelved-offloaded instances. 2.50 ---- The ``server_groups`` and ``server_group_members`` keys are exposed in GET & PUT ``os-quota-class-sets`` APIs Response body. Networks related quotas have been filtered out from os-quota-class. Below quotas are filtered out and not available in ``os-quota-class-sets`` APIs from this microversion onwards. - "fixed_ips" - "floating_ips" - "networks", - "security_group_rules" - "security_groups" 2.51 ---- There are two changes for the 2.51 microversion: * Add ``volume-extended`` event name to the ``os-server-external-events`` API. This will be used by the Block Storage service when extending the size of an attached volume. This signals the Compute service to perform any necessary actions on the compute host or hypervisor to adjust for the new volume block device size. * Expose the ``events`` field in the response body for the ``GET /servers/{server_id}/os-instance-actions/{request_id}`` API. This is useful for API users to monitor when a volume extend operation completes for the given server instance. By default only users with the administrator role will be able to see event ``traceback`` details. 2.52 ---- Adds support for applying tags when creating a server. The tag schema is the same as in the `2.26`_ microversion. .. 
_2.53-microversion: 2.53 (Maximum in Pike) ---------------------- **os-services** Services are now identified by uuid instead of database id to ensure uniqueness across cells. This microversion brings the following changes: * ``GET /os-services`` returns a uuid in the ``id`` field of the response * ``DELETE /os-services/{service_uuid}`` requires a service uuid in the path * The following APIs have been superseded by ``PUT /os-services/{service_uuid}/``: * ``PUT /os-services/disable`` * ``PUT /os-services/disable-log-reason`` * ``PUT /os-services/enable`` * ``PUT /os-services/force-down`` ``PUT /os-services/{service_uuid}`` takes the following fields in the body: * ``status`` - can be either "enabled" or "disabled" to enable or disable the given service * ``disabled_reason`` - specify with status="disabled" to log a reason for why the service is disabled * ``forced_down`` - boolean indicating if the service was forced down by an external service * ``PUT /os-services/{service_uuid}`` will now return a full service resource representation like in a ``GET`` response **os-hypervisors** Hypervisors are now identified by uuid instead of database id to ensure uniqueness across cells. This microversion brings the following changes: * ``GET /os-hypervisors/{hypervisor_hostname_pattern}/search`` is deprecated and replaced with the ``hypervisor_hostname_pattern`` query parameter on the ``GET /os-hypervisors`` and ``GET /os-hypervisors/detail`` APIs. Paging with ``hypervisor_hostname_pattern`` is not supported. * ``GET /os-hypervisors/{hypervisor_hostname_pattern}/servers`` is deprecated and replaced with the ``with_servers`` query parameter on the ``GET /os-hypervisors`` and ``GET /os-hypervisors/detail`` APIs. * ``GET /os-hypervisors/{hypervisor_id}`` supports the ``with_servers`` query parameter to include hosted server details in the response. * ``GET /os-hypervisors/{hypervisor_id}`` and ``GET /os-hypervisors/{hypervisor_id}/uptime`` APIs now take a uuid value for the ``{hypervisor_id}`` path parameter. * The ``GET /os-hypervisors`` and ``GET /os-hypervisors/detail`` APIs will now use a uuid marker for paging across cells. * The following APIs will now return a uuid value for the hypervisor id and optionally service id fields in the response: * ``GET /os-hypervisors`` * ``GET /os-hypervisors/detail`` * ``GET /os-hypervisors/{hypervisor_id}`` * ``GET /os-hypervisors/{hypervisor_id}/uptime`` 2.54 ---- Allow the user to set the server key pair while rebuilding. 2.55 ---- Adds a ``description`` field to the flavor resource in the following APIs: * ``GET /flavors`` * ``GET /flavors/detail`` * ``GET /flavors/{flavor_id}`` * ``POST /flavors`` * ``PUT /flavors/{flavor_id}`` The embedded flavor description will not be included in server representations. 2.56 ---- Updates the POST request body for the ``migrate`` action to include the the optional ``host`` string field defaulted to ``null``. If ``host`` is set the migrate action verifies the provided host with the nova scheduler and uses it as the destination for the migration. 2.57 ---- The 2.57 microversion makes the following changes: * The ``personality`` parameter is removed from the server create and rebuild APIs. * The ``user_data`` parameter is added to the server rebuild API. * The ``maxPersonality`` and ``maxPersonalitySize`` limits are excluded from the ``GET /limits`` API response. 
* The ``injected_files``, ``injected_file_content_bytes`` and ``injected_file_path_bytes`` quotas are removed from the ``os-quota-sets`` and ``os-quota-class-sets`` APIs. 2.58 ---- Add pagination support and ``changes-since`` filter for os-instance-actions API. Users can now use ``limit`` and ``marker`` to perform paginated query when listing instance actions. Users can also use ``changes-since`` filter to filter the results based on the last time the instance action was updated. 2.59 ---- Added pagination support for migrations, there are four changes: * Add pagination support and ``changes-since`` filter for os-migrations API. Users can now use ``limit`` and ``marker`` to perform paginate query when listing migrations. * Users can also use ``changes-since`` filter to filter the results based on the last time the migration record was updated. * ``GET /os-migrations``, ``GET /servers/{server_id}/migrations/{migration_id}`` and ``GET /servers/{server_id}/migrations`` will now return a uuid value in addition to the migrations id in the response. * The query parameter schema of the ``GET /os-migrations`` API no longer allows additional properties. .. _api-microversion-queens: 2.60 (Maximum in Queens) ------------------------ From this version of the API users can attach a ``multiattach`` capable volume to multiple instances. The API request for creating the additional attachments is the same. The chosen virt driver and the volume back end has to support the functionality as well. 2.61 ---- Exposes flavor extra_specs in the flavor representation. Now users can see the flavor extra-specs in flavor APIs response and do not need to call ``GET /flavors/{flavor_id}/os-extra_specs`` API. If the user is prevented by policy from indexing extra-specs, then the ``extra_specs`` field will not be included in the flavor information. Flavor extra_specs will be included in Response body of the following APIs: * ``GET /flavors/detail`` * ``GET /flavors/{flavor_id}`` * ``POST /flavors`` * ``PUT /flavors/{flavor_id}`` 2.62 ---- Adds ``host`` (hostname) and ``hostId`` (an obfuscated hashed host id string) fields to the instance action ``GET /servers/{server_id}/os-instance-actions/{req_id}`` API. The display of the newly added ``host`` field will be controlled via policy rule ``os_compute_api:os-instance-actions:events``, which is the same policy used for the ``events.traceback`` field. If the user is prevented by policy, only ``hostId`` will be displayed. 2.63 ---- Adds support for the ``trusted_image_certificates`` parameter, which is used to define a list of trusted certificate IDs that can be used during image signature verification and certificate validation. The list is restricted to a maximum of 50 IDs. Note that ``trusted_image_certificates`` is not supported with volume-backed servers. The ``trusted_image_certificates`` request parameter can be passed to the server create and rebuild APIs: * ``POST /servers`` * ``POST /servers/{server_id}/action (rebuild)`` The ``trusted_image_certificates`` parameter will be in the response body of the following APIs: * ``GET /servers/detail`` * ``GET /servers/{server_id}`` * ``PUT /servers/{server_id}`` * ``POST /servers/{server_id}/action (rebuild)`` 2.64 ---- Enable users to define the policy rules on server group policy to meet more advanced policy requirement. This microversion brings the following changes in server group APIs: * Add ``policy`` and ``rules`` fields in the request of POST ``/os-server-groups``. The ``policy`` represents the name of policy. 
The ``rules`` field, which is a dict, can be applied to the policy, which currently only support ``max_server_per_host`` for ``anti-affinity`` policy. * The ``policy`` and ``rules`` fields will be returned in response body of POST, GET ``/os-server-groups`` API and GET ``/os-server-groups/{server_group_id}`` API. * The ``policies`` and ``metadata`` fields have been removed from the response body of POST, GET ``/os-server-groups`` API and GET ``/os-server-groups/{server_group_id}`` API. 2.65 (Maximum in Rocky) ----------------------- Add support for abort live migrations in ``queued`` and ``preparing`` status for API ``DELETE /servers/{server_id}/migrations/{migration_id}``. 2.66 ---- The ``changes-before`` filter can be included as a request parameter of the following APIs to filter by changes before or equal to the resource ``updated_at`` time: * ``GET /servers`` * ``GET /servers/detail`` * ``GET /servers/{server_id}/os-instance-actions`` * ``GET /os-migrations`` 2.67 ---- Adds the ``volume_type`` parameter to ``block_device_mapping_v2``, which can be used to specify cinder ``volume_type`` when creating a server. 2.68 ---- Remove support for forced live migration and evacuate server actions. 2.69 ---- Add support for returning minimal constructs for ``GET /servers``, ``GET /servers/detail``, ``GET /servers/{server_id}`` and ``GET /os-services`` when there is a transient unavailability condition in the deployment like an infrastructure failure. Starting from this microversion, the responses from the down part of the infrastructure for the above four requests will have missing key values to make it more resilient. The response body will only have a minimal set of information obtained from the available information in the API database for the down cells. See `handling down cells `__ for more information. 2.70 ---- Exposes virtual device tags for volume attachments and virtual interfaces (ports). A ``tag`` parameter is added to the response body for the following APIs: **Volumes** * GET /servers/{server_id}/os-volume_attachments (list) * GET /servers/{server_id}/os-volume_attachments/{volume_id} (show) * POST /servers/{server_id}/os-volume_attachments (attach) **Ports** * GET /servers/{server_id}/os-interface (list) * GET /servers/{server_id}/os-interface/{port_id} (show) * POST /servers/{server_id}/os-interface (attach) 2.71 ---- The ``server_groups`` parameter will be in the response body of the following APIs to list the server groups to which the server belongs: * ``GET /servers/{server_id}`` * ``PUT /servers/{server_id}`` * ``POST /servers/{server_id}/action (rebuild)`` 2.72 (Maximum in Stein) ----------------------- API microversion 2.72 adds support for creating servers with neutron ports that has resource request, e.g. neutron ports with `QoS minimum bandwidth rule`_. Deleting servers with such ports have already been handled properly as well as detaching these type of ports. API limitations: * Creating servers with Neutron networks having QoS minimum bandwidth rule is not supported. * Attaching Neutron ports and networks having QoS minimum bandwidth rule is not supported. * Moving (resizing, migrating, live-migrating, evacuating, unshelving after shelve offload) servers with ports having resource request is not yet supported. .. 
_QoS minimum bandwidth rule: https://docs.openstack.org/neutron/latest/admin/config-qos-min-bw.html 2.73 ---- API microversion 2.73 adds support for specifying a reason when locking the server and exposes this information via ``GET /servers/detail``, ``GET /servers/{server_id}``, ``PUT servers/{server_id}`` and ``POST /servers/{server_id}/action`` where the action is rebuild. It also supports ``locked`` as a filter/sort parameter for ``GET /servers/detail`` and ``GET /servers``. 2.74 ---- API microversion 2.74 adds support for specifying optional ``host`` and/or ``hypervisor_hostname`` parameters in the request body of ``POST /servers``. These request a specific destination host/node to boot the requested server. These parameters are mutually exclusive with the special ``availability_zone`` format of ``zone:host:node``. Unlike ``zone:host:node``, the ``host`` and/or ``hypervisor_hostname`` parameters still allow scheduler filters to be run. If the requested host/node is unavailable or otherwise unsuitable, earlier failure will be raised. There will be also a new policy named ``compute:servers:create:requested_destination``. By default, it can be specified by administrators only. 2.75 ---- Multiple API cleanups are done in API microversion 2.75: * 400 error response for an unknown parameter in the querystring or request body. * Make the server representation consistent among GET, PUT and rebuild server API responses. ``PUT /servers/{server_id}`` and ``POST /servers/{server_id}/action {rebuild}`` API responses are modified to add all the missing fields which are returned by ``GET /servers/{server_id}``. * Change the default return value of the ``swap`` field from the empty string to 0 (integer) in flavor APIs. * Always return the ``servers`` field in the response of the ``GET /os-hypervisors``, ``GET /os-hypervisors/detail`` and ``GET /os-hypervisors/{hypervisor_id}`` APIs even when there are no servers on a hypervisor. 2.76 ---- Adds ``power-update`` event name to ``os-server-external-events`` API. The changes to the power state of an instance caused by this event can be viewed through ``GET /servers/{server_id}/os-instance-actions`` and ``GET /servers/{server_id}/os-instance-actions/{request_id}``. 2.77 ---- API microversion 2.77 adds support for specifying availability zone when unshelving a shelved offloaded server. 2.78 ---- Add server sub-resource ``topology`` to show server NUMA information. * ``GET /servers/{server_id}/topology`` The default behavior is configurable using two new policies: * ``compute:server:topology:index`` * ``compute:server:topology:host:index`` .. Keep a reference for python-novaclient releasenotes .. _id71: 2.79 (Maximum in Train) ----------------------- API microversion 2.79 adds support for specifying the ``delete_on_termination`` field in the request body when attaching a volume to a server, to support configuring whether to delete the data volume when the server is destroyed. Also, ``delete_on_termination`` is added to the GET responses when showing attached volumes, and the ``delete_on_termination`` field is contained in the POST API response body when attaching a volume. The affected APIs are as follows: * ``POST /servers/{server_id}/os-volume_attachments`` * ``GET /servers/{server_id}/os-volume_attachments`` * ``GET /servers/{server_id}/os-volume_attachments/{volume_id}`` 2.80 ---- Microversion 2.80 changes the list migrations APIs and the os-migrations API. 
Expose the ``user_id`` and ``project_id`` fields in the following APIs: * ``GET /os-migrations`` * ``GET /servers/{server_id}/migrations`` * ``GET /servers/{server_id}/migrations/{migration_id}`` The ``GET /os-migrations`` API will also have optional ``user_id`` and ``project_id`` query parameters for filtering migrations by user and/or project, for example: * ``GET /os-migrations?user_id=ef9d34b4-45d0-4530-871b-3fb535988394`` * ``GET /os-migrations?project_id=011ee9f4-8f16-4c38-8633-a254d420fd54`` * ``GET /os-migrations?user_id=ef9d34b4-45d0-4530-871b-3fb535988394&project_id=011ee9f4-8f16-4c38-8633-a254d420fd54`` 2.81 ---- Adds support for image cache management by aggregate by adding ``POST /os-aggregates/{aggregate_id}/images``. 2.82 ---- Adds ``accelerator-request-bound`` event to ``os-server-external-events`` API. This event is sent by Cyborg to indicate completion of the binding event for one accelerator request (ARQ) associated with an instance. 2.83 ---- Allow the following filter parameters for ``GET /servers/detail`` and ``GET /servers`` for non-admin : * ``availability_zone`` * ``config_drive`` * ``key_name`` * ``created_at`` * ``launched_at`` * ``terminated_at`` * ``power_state`` * ``task_state`` * ``vm_state`` * ``progress`` * ``user_id`` 2.84 ---- The ``GET /servers/{server_id}/os-instance-actions/{request_id}`` API returns a ``details`` parameter for each failed event with a fault message, similar to the server ``fault.message`` parameter in ``GET /servers/{server_id}`` for a server with status ``ERROR``. 2.85 ---- Adds the ability to specify ``delete_on_termination`` in the ``PUT /servers/{server_id}/os-volume_attachments/{volume_id}`` API, which allows changing the behavior of volume deletion on instance deletion. 2.86 ---- Add support for validation of known extra specs. This is enabled by default for the following APIs: * ``POST /flavors/{flavor_id}/os-extra_specs`` * ``PUT /flavors/{flavor_id}/os-extra_specs/{id}`` Validation is only used for recognized extra spec namespaces, currently: ``accel``, ``aggregate_instance_extra_specs``, ``capabilities``, ``hw``, ``hw_rng``, ``hw_video``, ``os``, ``pci_passthrough``, ``powervm``, ``quota``, ``resources``, ``trait``, and ``vmware``. 2.87 (Maximum in Ussuri) ------------------------ Adds support for rescuing boot from volume instances when the compute host reports the ``COMPUTE_BFV_RESCUE`` capability trait. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/routes.py0000664000175000017500000007406600000000000021655 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import functools import six import nova.api.openstack from nova.api.openstack.compute import admin_actions from nova.api.openstack.compute import admin_password from nova.api.openstack.compute import agents from nova.api.openstack.compute import aggregates from nova.api.openstack.compute import assisted_volume_snapshots from nova.api.openstack.compute import attach_interfaces from nova.api.openstack.compute import availability_zone from nova.api.openstack.compute import baremetal_nodes from nova.api.openstack.compute import cells from nova.api.openstack.compute import certificates from nova.api.openstack.compute import cloudpipe from nova.api.openstack.compute import console_auth_tokens from nova.api.openstack.compute import console_output from nova.api.openstack.compute import consoles from nova.api.openstack.compute import create_backup from nova.api.openstack.compute import deferred_delete from nova.api.openstack.compute import evacuate from nova.api.openstack.compute import extension_info from nova.api.openstack.compute import fixed_ips from nova.api.openstack.compute import flavor_access from nova.api.openstack.compute import flavor_manage from nova.api.openstack.compute import flavors from nova.api.openstack.compute import flavors_extraspecs from nova.api.openstack.compute import floating_ip_dns from nova.api.openstack.compute import floating_ip_pools from nova.api.openstack.compute import floating_ips from nova.api.openstack.compute import floating_ips_bulk from nova.api.openstack.compute import fping from nova.api.openstack.compute import hosts from nova.api.openstack.compute import hypervisors from nova.api.openstack.compute import image_metadata from nova.api.openstack.compute import images from nova.api.openstack.compute import instance_actions from nova.api.openstack.compute import instance_usage_audit_log from nova.api.openstack.compute import ips from nova.api.openstack.compute import keypairs from nova.api.openstack.compute import limits from nova.api.openstack.compute import lock_server from nova.api.openstack.compute import migrate_server from nova.api.openstack.compute import migrations from nova.api.openstack.compute import multinic from nova.api.openstack.compute import networks from nova.api.openstack.compute import networks_associate from nova.api.openstack.compute import pause_server from nova.api.openstack.compute import quota_classes from nova.api.openstack.compute import quota_sets from nova.api.openstack.compute import remote_consoles from nova.api.openstack.compute import rescue from nova.api.openstack.compute import security_group_default_rules from nova.api.openstack.compute import security_groups from nova.api.openstack.compute import server_diagnostics from nova.api.openstack.compute import server_external_events from nova.api.openstack.compute import server_groups from nova.api.openstack.compute import server_metadata from nova.api.openstack.compute import server_migrations from nova.api.openstack.compute import server_password from nova.api.openstack.compute import server_tags from nova.api.openstack.compute import server_topology from nova.api.openstack.compute import servers from nova.api.openstack.compute import services from nova.api.openstack.compute import shelve from nova.api.openstack.compute import simple_tenant_usage from nova.api.openstack.compute import suspend_server from nova.api.openstack.compute import tenant_networks from nova.api.openstack.compute import versionsV21 from nova.api.openstack.compute import virtual_interfaces from 
nova.api.openstack.compute import volumes from nova.api.openstack import wsgi from nova.api import wsgi as base_wsgi def _create_controller(main_controller, action_controller_list): """This is a helper method to create controller with a list of action controller. """ controller = wsgi.Resource(main_controller()) for ctl in action_controller_list: controller.register_actions(ctl()) return controller agents_controller = functools.partial( _create_controller, agents.AgentController, []) aggregates_controller = functools.partial( _create_controller, aggregates.AggregateController, []) assisted_volume_snapshots_controller = functools.partial( _create_controller, assisted_volume_snapshots.AssistedVolumeSnapshotsController, []) availability_zone_controller = functools.partial( _create_controller, availability_zone.AvailabilityZoneController, []) baremetal_nodes_controller = functools.partial( _create_controller, baremetal_nodes.BareMetalNodeController, []) cells_controller = functools.partial( _create_controller, cells.CellsController, []) certificates_controller = functools.partial( _create_controller, certificates.CertificatesController, []) cloudpipe_controller = functools.partial( _create_controller, cloudpipe.CloudpipeController, []) extensions_controller = functools.partial( _create_controller, extension_info.ExtensionInfoController, []) fixed_ips_controller = functools.partial(_create_controller, fixed_ips.FixedIPController, []) flavor_controller = functools.partial(_create_controller, flavors.FlavorsController, [ flavor_manage.FlavorManageController, flavor_access.FlavorActionController ] ) flavor_access_controller = functools.partial(_create_controller, flavor_access.FlavorAccessController, []) flavor_extraspec_controller = functools.partial(_create_controller, flavors_extraspecs.FlavorExtraSpecsController, []) floating_ip_dns_controller = functools.partial(_create_controller, floating_ip_dns.FloatingIPDNSDomainController, []) floating_ip_dnsentry_controller = functools.partial(_create_controller, floating_ip_dns.FloatingIPDNSEntryController, []) floating_ip_pools_controller = functools.partial(_create_controller, floating_ip_pools.FloatingIPPoolsController, []) floating_ips_controller = functools.partial(_create_controller, floating_ips.FloatingIPController, []) floating_ips_bulk_controller = functools.partial(_create_controller, floating_ips_bulk.FloatingIPBulkController, []) fping_controller = functools.partial(_create_controller, fping.FpingController, []) hosts_controller = functools.partial( _create_controller, hosts.HostController, []) hypervisors_controller = functools.partial( _create_controller, hypervisors.HypervisorsController, []) images_controller = functools.partial( _create_controller, images.ImagesController, []) image_metadata_controller = functools.partial( _create_controller, image_metadata.ImageMetadataController, []) instance_actions_controller = functools.partial(_create_controller, instance_actions.InstanceActionsController, []) instance_usage_audit_log_controller = functools.partial(_create_controller, instance_usage_audit_log.InstanceUsageAuditLogController, []) ips_controller = functools.partial(_create_controller, ips.IPsController, []) keypairs_controller = functools.partial( _create_controller, keypairs.KeypairController, []) limits_controller = functools.partial( _create_controller, limits.LimitsController, []) migrations_controller = functools.partial(_create_controller, migrations.MigrationsController, []) networks_controller = 
functools.partial(_create_controller, networks.NetworkController, [networks_associate.NetworkAssociateActionController]) quota_classes_controller = functools.partial(_create_controller, quota_classes.QuotaClassSetsController, []) quota_set_controller = functools.partial(_create_controller, quota_sets.QuotaSetsController, []) security_group_controller = functools.partial(_create_controller, security_groups.SecurityGroupController, []) security_group_default_rules_controller = functools.partial(_create_controller, security_group_default_rules.SecurityGroupDefaultRulesController, []) security_group_rules_controller = functools.partial(_create_controller, security_groups.SecurityGroupRulesController, []) server_controller = functools.partial(_create_controller, servers.ServersController, [ admin_actions.AdminActionsController, admin_password.AdminPasswordController, console_output.ConsoleOutputController, create_backup.CreateBackupController, deferred_delete.DeferredDeleteController, evacuate.EvacuateController, floating_ips.FloatingIPActionController, lock_server.LockServerController, migrate_server.MigrateServerController, multinic.MultinicController, pause_server.PauseServerController, remote_consoles.RemoteConsolesController, rescue.RescueController, security_groups.SecurityGroupActionController, shelve.ShelveController, suspend_server.SuspendServerController ] ) console_auth_tokens_controller = functools.partial(_create_controller, console_auth_tokens.ConsoleAuthTokensController, []) consoles_controller = functools.partial(_create_controller, consoles.ConsolesController, []) server_diagnostics_controller = functools.partial(_create_controller, server_diagnostics.ServerDiagnosticsController, []) server_external_events_controller = functools.partial(_create_controller, server_external_events.ServerExternalEventsController, []) server_groups_controller = functools.partial(_create_controller, server_groups.ServerGroupController, []) server_metadata_controller = functools.partial(_create_controller, server_metadata.ServerMetadataController, []) server_migrations_controller = functools.partial(_create_controller, server_migrations.ServerMigrationsController, []) server_os_interface_controller = functools.partial(_create_controller, attach_interfaces.InterfaceAttachmentController, []) server_password_controller = functools.partial(_create_controller, server_password.ServerPasswordController, []) server_remote_consoles_controller = functools.partial(_create_controller, remote_consoles.RemoteConsolesController, []) server_security_groups_controller = functools.partial(_create_controller, security_groups.ServerSecurityGroupController, []) server_tags_controller = functools.partial(_create_controller, server_tags.ServerTagsController, []) server_topology_controller = functools.partial(_create_controller, server_topology.ServerTopologyController, []) server_volume_attachments_controller = functools.partial(_create_controller, volumes.VolumeAttachmentController, []) services_controller = functools.partial(_create_controller, services.ServiceController, []) simple_tenant_usage_controller = functools.partial(_create_controller, simple_tenant_usage.SimpleTenantUsageController, []) snapshots_controller = functools.partial(_create_controller, volumes.SnapshotController, []) tenant_networks_controller = functools.partial(_create_controller, tenant_networks.TenantNetworkController, []) version_controller = functools.partial(_create_controller, versionsV21.VersionsController, []) virtual_interfaces_controller 
= functools.partial(_create_controller, virtual_interfaces.ServerVirtualInterfaceController, []) volumes_controller = functools.partial(_create_controller, volumes.VolumeController, []) # NOTE(alex_xu): This is structure of this route list as below: # ( # ('Route path', { # 'HTTP method: [ # 'Controller', # 'The method of controller is used to handle this route' # ], # ... # }), # ... # ) # # Also note that this is ordered tuple. For example, the '/servers/detail' # should be in the front of '/servers/{id}', otherwise the request to # '/servers/detail' always matches to '/servers/{id}' as the id is 'detail'. ROUTE_LIST = ( # NOTE: This is a redirection from '' to '/'. The request to the '/v2.1' # or '/2.0' without the ending '/' will get a response with status code # '302' returned. ('', '/'), ('/', { 'GET': [version_controller, 'show'] }), ('/versions/{id}', { 'GET': [version_controller, 'show'] }), ('/extensions', { 'GET': [extensions_controller, 'index'], }), ('/extensions/{id}', { 'GET': [extensions_controller, 'show'], }), ('/flavors', { 'GET': [flavor_controller, 'index'], 'POST': [flavor_controller, 'create'] }), ('/flavors/detail', { 'GET': [flavor_controller, 'detail'] }), ('/flavors/{id}', { 'GET': [flavor_controller, 'show'], 'PUT': [flavor_controller, 'update'], 'DELETE': [flavor_controller, 'delete'] }), ('/flavors/{id}/action', { 'POST': [flavor_controller, 'action'] }), ('/flavors/{flavor_id}/os-extra_specs', { 'GET': [flavor_extraspec_controller, 'index'], 'POST': [flavor_extraspec_controller, 'create'] }), ('/flavors/{flavor_id}/os-extra_specs/{id}', { 'GET': [flavor_extraspec_controller, 'show'], 'PUT': [flavor_extraspec_controller, 'update'], 'DELETE': [flavor_extraspec_controller, 'delete'] }), ('/flavors/{flavor_id}/os-flavor-access', { 'GET': [flavor_access_controller, 'index'] }), ('/images', { 'GET': [images_controller, 'index'] }), ('/images/detail', { 'GET': [images_controller, 'detail'], }), ('/images/{id}', { 'GET': [images_controller, 'show'], 'DELETE': [images_controller, 'delete'] }), ('/images/{image_id}/metadata', { 'GET': [image_metadata_controller, 'index'], 'POST': [image_metadata_controller, 'create'], 'PUT': [image_metadata_controller, 'update_all'] }), ('/images/{image_id}/metadata/{id}', { 'GET': [image_metadata_controller, 'show'], 'PUT': [image_metadata_controller, 'update'], 'DELETE': [image_metadata_controller, 'delete'] }), ('/limits', { 'GET': [limits_controller, 'index'] }), ('/os-agents', { 'GET': [agents_controller, 'index'], 'POST': [agents_controller, 'create'] }), ('/os-agents/{id}', { 'PUT': [agents_controller, 'update'], 'DELETE': [agents_controller, 'delete'] }), ('/os-aggregates', { 'GET': [aggregates_controller, 'index'], 'POST': [aggregates_controller, 'create'] }), ('/os-aggregates/{id}', { 'GET': [aggregates_controller, 'show'], 'PUT': [aggregates_controller, 'update'], 'DELETE': [aggregates_controller, 'delete'] }), ('/os-aggregates/{id}/action', { 'POST': [aggregates_controller, 'action'], }), ('/os-aggregates/{id}/images', { 'POST': [aggregates_controller, 'images'], }), ('/os-assisted-volume-snapshots', { 'POST': [assisted_volume_snapshots_controller, 'create'] }), ('/os-assisted-volume-snapshots/{id}', { 'DELETE': [assisted_volume_snapshots_controller, 'delete'] }), ('/os-availability-zone', { 'GET': [availability_zone_controller, 'index'] }), ('/os-availability-zone/detail', { 'GET': [availability_zone_controller, 'detail'], }), ('/os-baremetal-nodes', { 'GET': [baremetal_nodes_controller, 'index'], 'POST': 
[baremetal_nodes_controller, 'create'] }), ('/os-baremetal-nodes/{id}', { 'GET': [baremetal_nodes_controller, 'show'], 'DELETE': [baremetal_nodes_controller, 'delete'] }), ('/os-baremetal-nodes/{id}/action', { 'POST': [baremetal_nodes_controller, 'action'] }), ('/os-cells', { 'POST': [cells_controller, 'create'], 'GET': [cells_controller, 'index'], }), ('/os-cells/capacities', { 'GET': [cells_controller, 'capacities'] }), ('/os-cells/detail', { 'GET': [cells_controller, 'detail'] }), ('/os-cells/info', { 'GET': [cells_controller, 'info'] }), ('/os-cells/sync_instances', { 'POST': [cells_controller, 'sync_instances'] }), ('/os-cells/{id}', { 'GET': [cells_controller, 'show'], 'PUT': [cells_controller, 'update'], 'DELETE': [cells_controller, 'delete'] }), ('/os-cells/{id}/capacities', { 'GET': [cells_controller, 'capacities'] }), ('/os-certificates', { 'POST': [certificates_controller, 'create'] }), ('/os-certificates/{id}', { 'GET': [certificates_controller, 'show'] }), ('/os-cloudpipe', { 'GET': [cloudpipe_controller, 'index'], 'POST': [cloudpipe_controller, 'create'] }), ('/os-cloudpipe/{id}', { 'PUT': [cloudpipe_controller, 'update'] }), ('/os-console-auth-tokens/{id}', { 'GET': [console_auth_tokens_controller, 'show'] }), ('/os-fixed-ips/{id}', { 'GET': [fixed_ips_controller, 'show'] }), ('/os-fixed-ips/{id}/action', { 'POST': [fixed_ips_controller, 'action'], }), ('/os-floating-ip-dns', { 'GET': [floating_ip_dns_controller, 'index'] }), ('/os-floating-ip-dns/{id}', { 'PUT': [floating_ip_dns_controller, 'update'], 'DELETE': [floating_ip_dns_controller, 'delete'] }), ('/os-floating-ip-dns/{domain_id}/entries/{id}', { 'GET': [floating_ip_dnsentry_controller, 'show'], 'PUT': [floating_ip_dnsentry_controller, 'update'], 'DELETE': [floating_ip_dnsentry_controller, 'delete'] }), ('/os-floating-ip-pools', { 'GET': [floating_ip_pools_controller, 'index'], }), ('/os-floating-ips', { 'GET': [floating_ips_controller, 'index'], 'POST': [floating_ips_controller, 'create'] }), ('/os-floating-ips/{id}', { 'GET': [floating_ips_controller, 'show'], 'DELETE': [floating_ips_controller, 'delete'] }), ('/os-floating-ips-bulk', { 'GET': [floating_ips_bulk_controller, 'index'], 'POST': [floating_ips_bulk_controller, 'create'] }), ('/os-floating-ips-bulk/{id}', { 'GET': [floating_ips_bulk_controller, 'show'], 'PUT': [floating_ips_bulk_controller, 'update'] }), ('/os-fping', { 'GET': [fping_controller, 'index'] }), ('/os-fping/{id}', { 'GET': [fping_controller, 'show'] }), ('/os-hosts', { 'GET': [hosts_controller, 'index'] }), ('/os-hosts/{id}', { 'GET': [hosts_controller, 'show'], 'PUT': [hosts_controller, 'update'] }), ('/os-hosts/{id}/reboot', { 'GET': [hosts_controller, 'reboot'] }), ('/os-hosts/{id}/shutdown', { 'GET': [hosts_controller, 'shutdown'] }), ('/os-hosts/{id}/startup', { 'GET': [hosts_controller, 'startup'] }), ('/os-hypervisors', { 'GET': [hypervisors_controller, 'index'] }), ('/os-hypervisors/detail', { 'GET': [hypervisors_controller, 'detail'] }), ('/os-hypervisors/statistics', { 'GET': [hypervisors_controller, 'statistics'] }), ('/os-hypervisors/{id}', { 'GET': [hypervisors_controller, 'show'] }), ('/os-hypervisors/{id}/search', { 'GET': [hypervisors_controller, 'search'] }), ('/os-hypervisors/{id}/servers', { 'GET': [hypervisors_controller, 'servers'] }), ('/os-hypervisors/{id}/uptime', { 'GET': [hypervisors_controller, 'uptime'] }), ('/os-instance_usage_audit_log', { 'GET': [instance_usage_audit_log_controller, 'index'] }), ('/os-instance_usage_audit_log/{id}', { 'GET': 
[instance_usage_audit_log_controller, 'show'] }), ('/os-keypairs', { 'GET': [keypairs_controller, 'index'], 'POST': [keypairs_controller, 'create'] }), ('/os-keypairs/{id}', { 'GET': [keypairs_controller, 'show'], 'DELETE': [keypairs_controller, 'delete'] }), ('/os-migrations', { 'GET': [migrations_controller, 'index'] }), ('/os-networks', { 'GET': [networks_controller, 'index'], 'POST': [networks_controller, 'create'] }), ('/os-networks/add', { 'POST': [networks_controller, 'add'] }), ('/os-networks/{id}', { 'GET': [networks_controller, 'show'], 'DELETE': [networks_controller, 'delete'] }), ('/os-networks/{id}/action', { 'POST': [networks_controller, 'action'], }), ('/os-quota-class-sets/{id}', { 'GET': [quota_classes_controller, 'show'], 'PUT': [quota_classes_controller, 'update'] }), ('/os-quota-sets/{id}', { 'GET': [quota_set_controller, 'show'], 'PUT': [quota_set_controller, 'update'], 'DELETE': [quota_set_controller, 'delete'] }), ('/os-quota-sets/{id}/detail', { 'GET': [quota_set_controller, 'detail'] }), ('/os-quota-sets/{id}/defaults', { 'GET': [quota_set_controller, 'defaults'] }), ('/os-security-group-default-rules', { 'GET': [security_group_default_rules_controller, 'index'], 'POST': [security_group_default_rules_controller, 'create'] }), ('/os-security-group-default-rules/{id}', { 'GET': [security_group_default_rules_controller, 'show'], 'DELETE': [security_group_default_rules_controller, 'delete'] }), ('/os-security-group-rules', { 'POST': [security_group_rules_controller, 'create'] }), ('/os-security-group-rules/{id}', { 'DELETE': [security_group_rules_controller, 'delete'] }), ('/os-security-groups', { 'GET': [security_group_controller, 'index'], 'POST': [security_group_controller, 'create'] }), ('/os-security-groups/{id}', { 'GET': [security_group_controller, 'show'], 'PUT': [security_group_controller, 'update'], 'DELETE': [security_group_controller, 'delete'] }), ('/os-server-external-events', { 'POST': [server_external_events_controller, 'create'] }), ('/os-server-groups', { 'GET': [server_groups_controller, 'index'], 'POST': [server_groups_controller, 'create'] }), ('/os-server-groups/{id}', { 'GET': [server_groups_controller, 'show'], 'DELETE': [server_groups_controller, 'delete'] }), ('/os-services', { 'GET': [services_controller, 'index'] }), ('/os-services/{id}', { 'PUT': [services_controller, 'update'], 'DELETE': [services_controller, 'delete'] }), ('/os-simple-tenant-usage', { 'GET': [simple_tenant_usage_controller, 'index'] }), ('/os-simple-tenant-usage/{id}', { 'GET': [simple_tenant_usage_controller, 'show'] }), ('/os-snapshots', { 'GET': [snapshots_controller, 'index'], 'POST': [snapshots_controller, 'create'] }), ('/os-snapshots/detail', { 'GET': [snapshots_controller, 'detail'] }), ('/os-snapshots/{id}', { 'GET': [snapshots_controller, 'show'], 'DELETE': [snapshots_controller, 'delete'] }), ('/os-tenant-networks', { 'GET': [tenant_networks_controller, 'index'], 'POST': [tenant_networks_controller, 'create'] }), ('/os-tenant-networks/{id}', { 'GET': [tenant_networks_controller, 'show'], 'DELETE': [tenant_networks_controller, 'delete'] }), ('/os-volumes', { 'GET': [volumes_controller, 'index'], 'POST': [volumes_controller, 'create'], }), ('/os-volumes/detail', { 'GET': [volumes_controller, 'detail'], }), ('/os-volumes/{id}', { 'GET': [volumes_controller, 'show'], 'DELETE': [volumes_controller, 'delete'] }), # NOTE: '/os-volumes_boot' is a clone of '/servers'. We may want to # deprecate it in the future. 
('/os-volumes_boot', { 'GET': [server_controller, 'index'], 'POST': [server_controller, 'create'] }), ('/os-volumes_boot/detail', { 'GET': [server_controller, 'detail'] }), ('/os-volumes_boot/{id}', { 'GET': [server_controller, 'show'], 'PUT': [server_controller, 'update'], 'DELETE': [server_controller, 'delete'] }), ('/os-volumes_boot/{id}/action', { 'POST': [server_controller, 'action'] }), ('/servers', { 'GET': [server_controller, 'index'], 'POST': [server_controller, 'create'] }), ('/servers/detail', { 'GET': [server_controller, 'detail'] }), ('/servers/{id}', { 'GET': [server_controller, 'show'], 'PUT': [server_controller, 'update'], 'DELETE': [server_controller, 'delete'] }), ('/servers/{id}/action', { 'POST': [server_controller, 'action'] }), ('/servers/{server_id}/consoles', { 'GET': [consoles_controller, 'index'], 'POST': [consoles_controller, 'create'] }), ('/servers/{server_id}/consoles/{id}', { 'GET': [consoles_controller, 'show'], 'DELETE': [consoles_controller, 'delete'] }), ('/servers/{server_id}/diagnostics', { 'GET': [server_diagnostics_controller, 'index'] }), ('/servers/{server_id}/ips', { 'GET': [ips_controller, 'index'] }), ('/servers/{server_id}/ips/{id}', { 'GET': [ips_controller, 'show'] }), ('/servers/{server_id}/metadata', { 'GET': [server_metadata_controller, 'index'], 'POST': [server_metadata_controller, 'create'], 'PUT': [server_metadata_controller, 'update_all'], }), ('/servers/{server_id}/metadata/{id}', { 'GET': [server_metadata_controller, 'show'], 'PUT': [server_metadata_controller, 'update'], 'DELETE': [server_metadata_controller, 'delete'], }), ('/servers/{server_id}/migrations', { 'GET': [server_migrations_controller, 'index'] }), ('/servers/{server_id}/migrations/{id}', { 'GET': [server_migrations_controller, 'show'], 'DELETE': [server_migrations_controller, 'delete'] }), ('/servers/{server_id}/migrations/{id}/action', { 'POST': [server_migrations_controller, 'action'] }), ('/servers/{server_id}/os-instance-actions', { 'GET': [instance_actions_controller, 'index'] }), ('/servers/{server_id}/os-instance-actions/{id}', { 'GET': [instance_actions_controller, 'show'] }), ('/servers/{server_id}/os-interface', { 'GET': [server_os_interface_controller, 'index'], 'POST': [server_os_interface_controller, 'create'] }), ('/servers/{server_id}/os-interface/{id}', { 'GET': [server_os_interface_controller, 'show'], 'DELETE': [server_os_interface_controller, 'delete'] }), ('/servers/{server_id}/os-server-password', { 'GET': [server_password_controller, 'index'], 'DELETE': [server_password_controller, 'clear'] }), ('/servers/{server_id}/os-virtual-interfaces', { 'GET': [virtual_interfaces_controller, 'index'] }), ('/servers/{server_id}/os-volume_attachments', { 'GET': [server_volume_attachments_controller, 'index'], 'POST': [server_volume_attachments_controller, 'create'], }), ('/servers/{server_id}/os-volume_attachments/{id}', { 'GET': [server_volume_attachments_controller, 'show'], 'PUT': [server_volume_attachments_controller, 'update'], 'DELETE': [server_volume_attachments_controller, 'delete'] }), ('/servers/{server_id}/remote-consoles', { 'POST': [server_remote_consoles_controller, 'create'] }), ('/servers/{server_id}/os-security-groups', { 'GET': [server_security_groups_controller, 'index'] }), ('/servers/{server_id}/tags', { 'GET': [server_tags_controller, 'index'], 'PUT': [server_tags_controller, 'update_all'], 'DELETE': [server_tags_controller, 'delete_all'], }), ('/servers/{server_id}/tags/{id}', { 'GET': [server_tags_controller, 'show'], 'PUT': 
[server_tags_controller, 'update'], 'DELETE': [server_tags_controller, 'delete'] }), ('/servers/{server_id}/topology', { 'GET': [server_topology_controller, 'index'] }), ) class APIRouterV21(base_wsgi.Router): """Routes requests on the OpenStack API to the appropriate controller and method. The URL mapping based on the plain list `ROUTE_LIST` is built at here. """ def __init__(self, custom_routes=None): """:param custom_routes: the additional routes can be added by this parameter. This parameter is used to test on some fake routes primarily. """ super(APIRouterV21, self).__init__(nova.api.openstack.ProjectMapper()) if custom_routes is None: custom_routes = tuple() for path, methods in ROUTE_LIST + custom_routes: # NOTE(alex_xu): The variable 'methods' is a dict in normal, since # the dict includes all the methods supported in the path. But # if the variable 'method' is a string, it means a redirection. # For example, the request to the '' will be redirect to the '/' in # the Nova API. To indicate that, using the target path instead of # a dict. The route entry just writes as "('', '/)". if isinstance(methods, six.string_types): self.map.redirect(path, methods) continue for method, controller_info in methods.items(): # TODO(alex_xu): In the end, I want to create single controller # instance instead of create controller instance for each # route. controller = controller_info[0]() action = controller_info[1] self.map.create_route(path, method, controller, action) @classmethod def factory(cls, global_config, **local_config): """Simple paste factory, :class:`nova.wsgi.Router` doesn't have one.""" return cls() ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2864704 nova-21.2.4/nova/api/openstack/compute/schemas/0000775000175000017500000000000000000000000021370 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/__init__.py0000664000175000017500000000000000000000000023467 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/admin_password.py0000664000175000017500000000206600000000000024760 0ustar00zuulzuul00000000000000# Copyright 2013 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
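# NOTE: The ``change_password`` schema below validates the request body of
# the ``changePassword`` server action handled by the admin_password API
# controller.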
from nova.api.validation import parameter_types change_password = { 'type': 'object', 'properties': { 'changePassword': { 'type': 'object', 'properties': { 'adminPass': parameter_types.admin_password, }, 'required': ['adminPass'], 'additionalProperties': False, }, }, 'required': ['changePassword'], 'additionalProperties': False, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/schemas/agents.py0000664000175000017500000000651200000000000023227 0ustar00zuulzuul00000000000000# Copyright 2013 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types create = { 'type': 'object', 'properties': { 'agent': { 'type': 'object', 'properties': { 'hypervisor': { 'type': 'string', 'minLength': 0, 'maxLength': 255, 'pattern': '^[a-zA-Z0-9-._ ]*$' }, 'os': { 'type': 'string', 'minLength': 0, 'maxLength': 255, 'pattern': '^[a-zA-Z0-9-._ ]*$' }, 'architecture': { 'type': 'string', 'minLength': 0, 'maxLength': 255, 'pattern': '^[a-zA-Z0-9-._ ]*$' }, 'version': { 'type': 'string', 'minLength': 0, 'maxLength': 255, 'pattern': '^[a-zA-Z0-9-._ ]*$' }, 'url': { 'type': 'string', 'minLength': 0, 'maxLength': 255, 'format': 'uri' }, 'md5hash': { 'type': 'string', 'minLength': 0, 'maxLength': 255, 'pattern': '^[a-fA-F0-9]*$' }, }, 'required': ['hypervisor', 'os', 'architecture', 'version', 'url', 'md5hash'], 'additionalProperties': False, }, }, 'required': ['agent'], 'additionalProperties': False, } update = { 'type': 'object', 'properties': { 'para': { 'type': 'object', 'properties': { 'version': { 'type': 'string', 'minLength': 0, 'maxLength': 255, 'pattern': '^[a-zA-Z0-9-._ ]*$' }, 'url': { 'type': 'string', 'minLength': 0, 'maxLength': 255, 'format': 'uri' }, 'md5hash': { 'type': 'string', 'minLength': 0, 'maxLength': 255, 'pattern': '^[a-fA-F0-9]*$' }, }, 'required': ['version', 'url', 'md5hash'], 'additionalProperties': False, }, }, 'required': ['para'], 'additionalProperties': False, } index_query = { 'type': 'object', 'properties': { 'hypervisor': parameter_types.common_query_param }, # NOTE(gmann): This is kept True to keep backward compatibility. # As of now Schema validation stripped out the additional parameters and # does not raise 400. In microversion 2.75, we have blocked the additional # parameters. 'additionalProperties': True } index_query_275 = copy.deepcopy(index_query) index_query_275['additionalProperties'] = False ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/aggregate_images.py0000664000175000017500000000212500000000000025215 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.validation import parameter_types aggregate_images_v2_81 = { 'type': 'object', 'properties': { 'cache': { 'type': ['array'], 'minItems': 1, 'items': { 'type': 'object', 'properties': { 'id': parameter_types.image_id, }, 'additionalProperties': False, 'required': ['id'], }, }, }, 'required': ['cache'], 'additionalProperties': False, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/schemas/aggregates.py0000664000175000017500000000717100000000000024061 0ustar00zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types availability_zone = {'oneOf': [parameter_types.az_name, {'type': 'null'}]} availability_zone_with_leading_trailing_spaces = { 'oneOf': [parameter_types.az_name_with_leading_trailing_spaces, {'type': 'null'}] } create = { 'type': 'object', 'properties': { 'type': 'object', 'aggregate': { 'type': 'object', 'properties': { 'name': parameter_types.name, 'availability_zone': availability_zone, }, 'required': ['name'], 'additionalProperties': False, }, }, 'required': ['aggregate'], 'additionalProperties': False, } create_v20 = copy.deepcopy(create) create_v20['properties']['aggregate']['properties']['name'] = (parameter_types. name_with_leading_trailing_spaces) create_v20['properties']['aggregate']['properties']['availability_zone'] = ( availability_zone_with_leading_trailing_spaces) update = { 'type': 'object', 'properties': { 'type': 'object', 'aggregate': { 'type': 'object', 'properties': { 'name': parameter_types.name_with_leading_trailing_spaces, 'availability_zone': availability_zone }, 'additionalProperties': False, 'anyOf': [ {'required': ['name']}, {'required': ['availability_zone']} ] }, }, 'required': ['aggregate'], 'additionalProperties': False, } update_v20 = copy.deepcopy(update) update_v20['properties']['aggregate']['properties']['name'] = (parameter_types. 
name_with_leading_trailing_spaces) update_v20['properties']['aggregate']['properties']['availability_zone'] = ( availability_zone_with_leading_trailing_spaces) add_host = { 'type': 'object', 'properties': { 'type': 'object', 'add_host': { 'type': 'object', 'properties': { 'host': parameter_types.hostname, }, 'required': ['host'], 'additionalProperties': False, }, }, 'required': ['add_host'], 'additionalProperties': False, } remove_host = { 'type': 'object', 'properties': { 'type': 'object', 'remove_host': { 'type': 'object', 'properties': { 'host': parameter_types.hostname, }, 'required': ['host'], 'additionalProperties': False, }, }, 'required': ['remove_host'], 'additionalProperties': False, } set_metadata = { 'type': 'object', 'properties': { 'type': 'object', 'set_metadata': { 'type': 'object', 'properties': { 'metadata': parameter_types.metadata_with_null }, 'required': ['metadata'], 'additionalProperties': False, }, }, 'required': ['set_metadata'], 'additionalProperties': False, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/assisted_volume_snapshots.py0000664000175000017500000000457000000000000027260 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types snapshots_create = { 'type': 'object', 'properties': { 'snapshot': { 'type': 'object', 'properties': { 'volume_id': { 'type': 'string', 'minLength': 1, }, 'create_info': { 'type': 'object', 'properties': { 'snapshot_id': { 'type': 'string', 'minLength': 1, }, 'type': { 'type': 'string', 'enum': ['qcow2'], }, 'new_file': { 'type': 'string', 'minLength': 1, }, 'id': { 'type': 'string', 'minLength': 1, }, }, 'required': ['snapshot_id', 'type', 'new_file'], 'additionalProperties': False, }, }, 'required': ['volume_id', 'create_info'], 'additionalProperties': False, } }, 'required': ['snapshot'], 'additionalProperties': False, } delete_query = { 'type': 'object', 'properties': { 'delete_info': parameter_types.multi_params({'type': 'string'}) }, # NOTE(gmann): This is kept True to keep backward compatibility. # As of now Schema validation stripped out the additional parameters and # does not raise 400. In microversion 2.75, we have blocked the additional # parameters. 'additionalProperties': True } delete_query_275 = copy.deepcopy(delete_query) delete_query_275['additionalProperties'] = False ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/attach_interfaces.py0000664000175000017500000000357100000000000025417 0ustar00zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types create = { 'type': 'object', 'properties': { 'interfaceAttachment': { 'type': 'object', 'properties': { # NOTE: This parameter is passed to the search_opts of # Neutron list_network API: search_opts = {'id': net_id} 'net_id': parameter_types.network_id, # NOTE: This parameter is passed to Neutron show_port API # as a port id. 'port_id': parameter_types.network_port_id, 'fixed_ips': { 'type': 'array', 'minItems': 1, 'maxItems': 1, 'items': { 'type': 'object', 'properties': { 'ip_address': parameter_types.ip_address }, 'required': ['ip_address'], 'additionalProperties': False, }, }, }, 'additionalProperties': False, }, }, 'additionalProperties': False, } create_v249 = copy.deepcopy(create) create_v249['properties']['interfaceAttachment'][ 'properties']['tag'] = parameter_types.tag ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/console_output.py0000664000175000017500000000256100000000000025030 0ustar00zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. get_console_output = { 'type': 'object', 'properties': { 'os-getConsoleOutput': { 'type': 'object', 'properties': { 'length': { 'type': ['integer', 'string', 'null'], 'pattern': '^-?[0-9]+$', # NOTE: -1 means an unlimited length. # TODO(cyeoh): None also means unlimited length # and is supported for v2 backwards compatibility # Should remove in the future with a microversion 'minimum': -1, }, }, 'additionalProperties': False, }, }, 'required': ['os-getConsoleOutput'], 'additionalProperties': False, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/create_backup.py0000664000175000017500000000272300000000000024536 0ustar00zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
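# NOTE: The request-body schemas in this package, such as the console output
# action schema above, are normally enforced through Nova's
# @validation.schema decorator.  As a minimal standalone sketch (assuming the
# jsonschema library is called directly on a trimmed copy of that schema,
# which is not how Nova wires it up):
import jsonschema

_console_output_sketch = {
    'type': 'object',
    'properties': {
        'os-getConsoleOutput': {
            'type': 'object',
            'properties': {
                'length': {
                    'type': ['integer', 'string', 'null'],
                    'pattern': '^-?[0-9]+$',
                    'minimum': -1,
                },
            },
            'additionalProperties': False,
        },
    },
    'required': ['os-getConsoleOutput'],
    'additionalProperties': False,
}
# Both an explicit length and an empty action body are accepted.
jsonschema.validate({'os-getConsoleOutput': {'length': 50}}, _console_output_sketch)
jsonschema.validate({'os-getConsoleOutput': {}}, _console_output_sketch)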
import copy from nova.api.validation import parameter_types create_backup = { 'type': 'object', 'properties': { 'createBackup': { 'type': 'object', 'properties': { 'name': parameter_types.name, 'backup_type': { 'type': 'string', }, 'rotation': parameter_types.non_negative_integer, 'metadata': parameter_types.metadata, }, 'required': ['name', 'backup_type', 'rotation'], 'additionalProperties': False, }, }, 'required': ['createBackup'], 'additionalProperties': False, } create_backup_v20 = copy.deepcopy(create_backup) create_backup_v20['properties'][ 'createBackup']['properties']['name'] = (parameter_types. name_with_leading_trailing_spaces) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/schemas/evacuate.py0000664000175000017500000000315000000000000023536 0ustar00zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types evacuate = { 'type': 'object', 'properties': { 'evacuate': { 'type': 'object', 'properties': { 'host': parameter_types.hostname, 'onSharedStorage': parameter_types.boolean, 'adminPass': parameter_types.admin_password, }, 'required': ['onSharedStorage'], 'additionalProperties': False, }, }, 'required': ['evacuate'], 'additionalProperties': False, } evacuate_v214 = copy.deepcopy(evacuate) del evacuate_v214['properties']['evacuate']['properties']['onSharedStorage'] del evacuate_v214['properties']['evacuate']['required'] evacuate_v2_29 = copy.deepcopy(evacuate_v214) evacuate_v2_29['properties']['evacuate']['properties'][ 'force'] = parameter_types.boolean # v2.68 removes the 'force' parameter added in v2.29, meaning it is identical # to v2.14 evacuate_v2_68 = copy.deepcopy(evacuate_v214) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/flavor_access.py0000664000175000017500000000325600000000000024562 0ustar00zuulzuul00000000000000# Copyright 2013 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
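# NOTE: The evacuate schemas above derive each microversion from the previous
# one with copy.deepcopy before mutating the copy, so earlier variants stay
# intact.  A self-contained sketch of that pattern (using a simplified
# stand-in schema, not the real evacuate definitions):
import copy

_base_action_sketch = {
    'type': 'object',
    'properties': {'host': {'type': 'string'}},
    'required': ['host'],
    'additionalProperties': False,
}
_relaxed_action_sketch = copy.deepcopy(_base_action_sketch)
_relaxed_action_sketch['required'] = []        # a later microversion drops the requirement
assert _base_action_sketch['required'] == ['host']   # the original schema is untouched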
add_tenant_access = { 'type': 'object', 'properties': { 'addTenantAccess': { 'type': 'object', 'properties': { 'tenant': { # defined from project_id in instance_type_projects table 'type': 'string', 'minLength': 1, 'maxLength': 255, }, }, 'required': ['tenant'], 'additionalProperties': False, }, }, 'required': ['addTenantAccess'], 'additionalProperties': False, } remove_tenant_access = { 'type': 'object', 'properties': { 'removeTenantAccess': { 'type': 'object', 'properties': { 'tenant': { # defined from project_id in instance_type_projects table 'type': 'string', 'minLength': 1, 'maxLength': 255, }, }, 'required': ['tenant'], 'additionalProperties': False, }, }, 'required': ['removeTenantAccess'], 'additionalProperties': False, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/flavor_manage.py0000664000175000017500000000761500000000000024554 0ustar00zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types create = { 'type': 'object', 'properties': { 'flavor': { 'type': 'object', 'properties': { # in nova/flavors.py, name with all white spaces is forbidden. 'name': parameter_types.name, # forbid leading/trailing whitespaces 'id': { 'type': ['string', 'number', 'null'], 'minLength': 1, 'maxLength': 255, 'pattern': '^(?! )[a-zA-Z0-9. _-]+(? 0) float 'rxtx_factor': { 'type': ['number', 'string'], 'pattern': r'^[0-9]+(\.[0-9]+)?$', 'minimum': 0, 'exclusiveMinimum': True, # maximum's value is limited to db constant's # SQL_SP_FLOAT_MAX (in nova/db/constants.py) 'maximum': 3.40282e+38 }, 'os-flavor-access:is_public': parameter_types.boolean, }, # TODO(oomichi): 'id' should be required with v2.1+microversions. # On v2.0 API, nova-api generates a flavor-id automatically if # specifying null as 'id' or not specifying 'id'. Ideally a client # should specify null as 'id' for requesting auto-generated id # exactly. However, this strict limitation causes a backwards # incompatible issue on v2.1. So now here relaxes the requirement # of 'id'. 'required': ['name', 'ram', 'vcpus', 'disk'], 'additionalProperties': False, }, }, 'required': ['flavor'], 'additionalProperties': False, } create_v20 = copy.deepcopy(create) create_v20['properties']['flavor']['properties']['name'] = (parameter_types. name_with_leading_trailing_spaces) # 2.55 adds an optional description field with a max length of 65535 since the # backing database column is a TEXT column which is 64KiB. 
flavor_description = { 'type': ['string', 'null'], 'minLength': 0, 'maxLength': 65535, 'pattern': parameter_types.valid_description_regex, } create_v2_55 = copy.deepcopy(create) create_v2_55['properties']['flavor']['properties']['description'] = ( flavor_description) update_v2_55 = { 'type': 'object', 'properties': { 'flavor': { 'type': 'object', 'properties': { 'description': flavor_description }, # Since the only property that can be specified on update is the # description field, it is required. If we allow updating other # flavor attributes in a later microversion, we should reconsider # what is required. 'required': ['description'], 'additionalProperties': False, }, }, 'required': ['flavor'], 'additionalProperties': False, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/flavors.py0000664000175000017500000000421200000000000023415 0ustar00zuulzuul00000000000000# Copyright 2017 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types # NOTE(takashin): The following sort keys are defined for backward # compatibility. If they are changed, the API microversion should be bumped. VALID_SORT_KEYS = [ 'created_at', 'description', 'disabled', 'ephemeral_gb', 'flavorid', 'id', 'is_public', 'memory_mb', 'name', 'root_gb', 'rxtx_factor', 'swap', 'updated_at', 'vcpu_weight', 'vcpus' ] VALID_SORT_DIR = ['asc', 'desc'] index_query = { 'type': 'object', 'properties': { 'limit': parameter_types.multi_params( parameter_types.non_negative_integer), 'marker': parameter_types.multi_params({'type': 'string'}), 'is_public': parameter_types.multi_params({'type': 'string'}), 'minRam': parameter_types.multi_params({'type': 'string'}), 'minDisk': parameter_types.multi_params({'type': 'string'}), 'sort_key': parameter_types.multi_params({'type': 'string', 'enum': VALID_SORT_KEYS}), 'sort_dir': parameter_types.multi_params({'type': 'string', 'enum': VALID_SORT_DIR}) }, # NOTE(gmann): This is kept True to keep backward compatibility. # As of now Schema validation stripped out the additional parameters and # does not raise 400. In microversion 2.75, we have blocked the additional # parameters. 'additionalProperties': True } index_query_275 = copy.deepcopy(index_query) index_query_275['additionalProperties'] = False ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/flavors_extraspecs.py0000664000175000017500000000225300000000000025661 0ustar00zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types # NOTE(oomichi): The metadata of flavor_extraspecs should accept numbers # as its values. metadata = copy.deepcopy(parameter_types.metadata) metadata['patternProperties']['^[a-zA-Z0-9-_:. ]{1,255}$']['type'] = \ ['string', 'number'] create = { 'type': 'object', 'properties': { 'extra_specs': metadata }, 'required': ['extra_specs'], 'additionalProperties': False, } update = copy.deepcopy(metadata) update.update({ 'minProperties': 1, 'maxProperties': 1 }) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/floating_ips.py0000664000175000017500000000275600000000000024432 0ustar00zuulzuul00000000000000# Copyright 2015 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.validation import parameter_types add_floating_ip = { 'type': 'object', 'properties': { 'addFloatingIp': { 'type': 'object', 'properties': { 'address': parameter_types.ip_address, 'fixed_address': parameter_types.ip_address }, 'required': ['address'], 'additionalProperties': False } }, 'required': ['addFloatingIp'], 'additionalProperties': False } remove_floating_ip = { 'type': 'object', 'properties': { 'removeFloatingIp': { 'type': 'object', 'properties': { 'address': parameter_types.ip_address }, 'required': ['address'], 'additionalProperties': False } }, 'required': ['removeFloatingIp'], 'additionalProperties': False } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/hosts.py0000664000175000017500000000330300000000000023101 0ustar00zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
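# NOTE: The flavor extra-specs schema above relies on JSON Schema
# patternProperties to constrain both the key format and the value type.  A
# standalone sketch using the jsonschema library directly with a copy of the
# relevant pattern (outside Nova's validation plumbing):
import jsonschema

_extra_specs_sketch = {
    'type': 'object',
    'patternProperties': {
        '^[a-zA-Z0-9-_:. ]{1,255}$': {'type': ['string', 'number']},
    },
    'additionalProperties': False,
}
# A well-formed key with a string or numeric value validates...
jsonschema.validate({'hw:cpu_policy': 'dedicated'}, _extra_specs_sketch)
jsonschema.validate({'quota:cpu_shares': 1024}, _extra_specs_sketch)
# ...while a key that does not match the pattern is rejected.
try:
    jsonschema.validate({'bad/key': 'x'}, _extra_specs_sketch)
except jsonschema.ValidationError:
    pass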
from nova.api.validation import parameter_types update = { 'type': 'object', 'properties': { 'status': { 'type': 'string', 'enum': ['enable', 'disable', 'Enable', 'Disable', 'ENABLE', 'DISABLE'], }, 'maintenance_mode': { 'type': 'string', 'enum': ['enable', 'disable', 'Enable', 'Disable', 'ENABLE', 'DISABLE'], }, 'anyOf': [ {'required': ['status']}, {'required': ['maintenance_mode']} ], }, 'additionalProperties': False } index_query = { 'type': 'object', 'properties': { 'zone': parameter_types.multi_params({'type': 'string'}) }, # NOTE(gmann): This is kept True to keep backward compatibility. # As of now Schema validation stripped out the additional parameters and # does not raise 400. This API is deprecated in microversion 2.43 so we # do not to update the additionalProperties to False. 'additionalProperties': True } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/schemas/hypervisors.py0000664000175000017500000000370000000000000024337 0ustar00zuulzuul00000000000000# Copyright 2017 Huawei Technologies Co.,LTD. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.validation import parameter_types list_query_schema_v233 = { 'type': 'object', 'properties': parameter_types.pagination_parameters, # NOTE(gmann): This is kept True to keep backward compatibility. # As of now Schema validation stripped out the additional parameters and # does not raise 400. In microversion 2.53, the additional parameters # are not allowed. 'additionalProperties': True } list_query_schema_v253 = { 'type': 'object', 'properties': { # The 2.33 microversion added support for paging by limit and marker. 'limit': parameter_types.single_param( parameter_types.non_negative_integer), 'marker': parameter_types.single_param({'type': 'string'}), # The 2.53 microversion adds support for filtering by hostname pattern # and requesting hosted servers in the GET /os-hypervisors and # GET /os-hypervisors/detail response. 'hypervisor_hostname_pattern': parameter_types.single_param( parameter_types.hostname), 'with_servers': parameter_types.single_param( parameter_types.boolean) }, 'additionalProperties': False } show_query_schema_v253 = { 'type': 'object', 'properties': { 'with_servers': parameter_types.single_param( parameter_types.boolean) }, 'additionalProperties': False } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/image_metadata.py0000664000175000017500000000223100000000000024662 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types create = { 'type': 'object', 'properties': { 'metadata': parameter_types.metadata }, 'required': ['metadata'], 'additionalProperties': False, } single_metadata = copy.deepcopy(parameter_types.metadata) single_metadata.update({ 'minProperties': 1, 'maxProperties': 1 }) update = { 'type': 'object', 'properties': { 'meta': single_metadata }, 'required': ['meta'], 'additionalProperties': False, } update_all = create ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/instance_actions.py0000664000175000017500000000255100000000000025271 0ustar00zuulzuul00000000000000# Copyright 2017 Huawei Technologies Co.,LTD. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types list_query_params_v258 = { 'type': 'object', 'properties': { # The 2.58 microversion added support for paging by limit and marker # and filtering by changes-since. 'limit': parameter_types.single_param( parameter_types.non_negative_integer), 'marker': parameter_types.single_param({'type': 'string'}), 'changes-since': parameter_types.single_param( {'type': 'string', 'format': 'date-time'}), }, 'additionalProperties': False } list_query_params_v266 = copy.deepcopy(list_query_params_v258) list_query_params_v266['properties'].update({ 'changes-before': parameter_types.single_param( {'type': 'string', 'format': 'date-time'}), }) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/keypairs.py0000664000175000017500000000646100000000000023600 0ustar00zuulzuul00000000000000# Copyright 2013 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
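# NOTE: Query schemas such as the instance-actions ones above mark
# changes-since and changes-before with 'format': 'date-time'.  With the
# plain jsonschema library, 'format' keywords are only checked when a
# FormatChecker is supplied; a standalone sketch of that behaviour,
# independent of Nova's own validation wiring:
import jsonschema

_timestamp_sketch = {'type': 'string', 'format': 'date-time'}
jsonschema.validate(
    '2020-01-01T00:00:00Z', _timestamp_sketch,
    format_checker=jsonschema.FormatChecker())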
import copy from nova.api.validation import parameter_types create = { 'type': 'object', 'properties': { 'keypair': { 'type': 'object', 'properties': { 'name': parameter_types.name, 'public_key': {'type': 'string'}, }, 'required': ['name'], 'additionalProperties': False, }, }, 'required': ['keypair'], 'additionalProperties': False, } create_v20 = copy.deepcopy(create) create_v20['properties']['keypair']['properties']['name'] = (parameter_types. name_with_leading_trailing_spaces) create_v22 = { 'type': 'object', 'properties': { 'keypair': { 'type': 'object', 'properties': { 'name': parameter_types.name, 'type': { 'type': 'string', 'enum': ['ssh', 'x509'] }, 'public_key': {'type': 'string'}, }, 'required': ['name'], 'additionalProperties': False, }, }, 'required': ['keypair'], 'additionalProperties': False, } create_v210 = { 'type': 'object', 'properties': { 'keypair': { 'type': 'object', 'properties': { 'name': parameter_types.name, 'type': { 'type': 'string', 'enum': ['ssh', 'x509'] }, 'public_key': {'type': 'string'}, 'user_id': {'type': 'string'}, }, 'required': ['name'], 'additionalProperties': False, }, }, 'required': ['keypair'], 'additionalProperties': False, } index_query_schema_v20 = { 'type': 'object', 'properties': {}, 'additionalProperties': True } index_query_schema_v210 = { 'type': 'object', 'properties': { 'user_id': parameter_types.multi_params({'type': 'string'}) }, 'additionalProperties': True } index_query_schema_v235 = copy.deepcopy(index_query_schema_v210) index_query_schema_v235['properties'].update( parameter_types.pagination_parameters) show_query_schema_v20 = index_query_schema_v20 show_query_schema_v210 = index_query_schema_v210 delete_query_schema_v20 = index_query_schema_v20 delete_query_schema_v210 = index_query_schema_v210 index_query_schema_v275 = copy.deepcopy(index_query_schema_v235) index_query_schema_v275['additionalProperties'] = False show_query_schema_v275 = copy.deepcopy(show_query_schema_v210) show_query_schema_v275['additionalProperties'] = False delete_query_schema_v275 = copy.deepcopy(delete_query_schema_v210) delete_query_schema_v275['additionalProperties'] = False ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/limits.py0000664000175000017500000000201400000000000023240 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types limits_query_schema = { 'type': 'object', 'properties': { 'tenant_id': parameter_types.common_query_param, }, # For backward compatible changes # In microversion 2.75, we have blocked the additional # parameters. 
'additionalProperties': True } limits_query_schema_275 = copy.deepcopy(limits_query_schema) limits_query_schema_275['additionalProperties'] = False ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/lock_server.py0000664000175000017500000000172500000000000024265 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. lock_v2_73 = { 'type': 'object', 'properties': { 'lock': { 'type': ['object', 'null'], 'properties': { 'locked_reason': { 'type': 'string', 'minLength': 1, 'maxLength': 255, }, }, 'additionalProperties': False, }, }, 'required': ['lock'], 'additionalProperties': False, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/schemas/migrate_server.py0000664000175000017500000000450200000000000024761 0ustar00zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
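# NOTE: lock_v2_73 above accepts either a null "lock" member or an object
# carrying a locked_reason.  A standalone sketch with the jsonschema library,
# using an inline copy of that schema rather than Nova's validation layer:
import jsonschema

_lock_sketch = {
    'type': 'object',
    'properties': {
        'lock': {
            'type': ['object', 'null'],
            'properties': {
                'locked_reason': {
                    'type': 'string', 'minLength': 1, 'maxLength': 255,
                },
            },
            'additionalProperties': False,
        },
    },
    'required': ['lock'],
    'additionalProperties': False,
}
# Both request bodies are valid under this schema.
jsonschema.validate({'lock': None}, _lock_sketch)
jsonschema.validate({'lock': {'locked_reason': 'maintenance window'}}, _lock_sketch)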
import copy from nova.api.validation import parameter_types host = copy.deepcopy(parameter_types.hostname) host['type'] = ['string', 'null'] migrate_v2_56 = { 'type': 'object', 'properties': { 'migrate': { 'type': ['object', 'null'], 'properties': { 'host': host, }, 'additionalProperties': False, }, }, 'required': ['migrate'], 'additionalProperties': False, } migrate_live = { 'type': 'object', 'properties': { 'os-migrateLive': { 'type': 'object', 'properties': { 'block_migration': parameter_types.boolean, 'disk_over_commit': parameter_types.boolean, 'host': host }, 'required': ['block_migration', 'disk_over_commit', 'host'], 'additionalProperties': False, }, }, 'required': ['os-migrateLive'], 'additionalProperties': False, } block_migration = copy.deepcopy(parameter_types.boolean) block_migration['enum'].append('auto') migrate_live_v2_25 = copy.deepcopy(migrate_live) del migrate_live_v2_25['properties']['os-migrateLive']['properties'][ 'disk_over_commit'] migrate_live_v2_25['properties']['os-migrateLive']['properties'][ 'block_migration'] = block_migration migrate_live_v2_25['properties']['os-migrateLive']['required'] = ( ['block_migration', 'host']) migrate_live_v2_30 = copy.deepcopy(migrate_live_v2_25) migrate_live_v2_30['properties']['os-migrateLive']['properties'][ 'force'] = parameter_types.boolean # v2.68 removes the 'force' parameter added in v2.30, meaning it is identical # to v2.25 migrate_live_v2_68 = copy.deepcopy(migrate_live_v2_25) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/migrations.py0000664000175000017500000000431100000000000024115 0ustar00zuulzuul00000000000000# Copyright 2017 Huawei Technologies Co.,LTD. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types list_query_schema_v20 = { 'type': 'object', 'properties': { 'hidden': parameter_types.common_query_param, 'host': parameter_types.common_query_param, 'instance_uuid': parameter_types.common_query_param, 'source_compute': parameter_types.common_query_param, 'status': parameter_types.common_query_param, 'migration_type': parameter_types.common_query_param, }, # For backward compatible changes 'additionalProperties': True } list_query_params_v259 = copy.deepcopy(list_query_schema_v20) list_query_params_v259['properties'].update({ # The 2.59 microversion added support for paging by limit and marker # and filtering by changes-since. 
'limit': parameter_types.single_param( parameter_types.non_negative_integer), 'marker': parameter_types.single_param({'type': 'string'}), 'changes-since': parameter_types.single_param( {'type': 'string', 'format': 'date-time'}), }) list_query_params_v259['additionalProperties'] = False list_query_params_v266 = copy.deepcopy(list_query_params_v259) list_query_params_v266['properties'].update({ 'changes-before': parameter_types.single_param( {'type': 'string', 'format': 'date-time'}), }) list_query_params_v280 = copy.deepcopy(list_query_params_v266) list_query_params_v280['properties'].update({ # The 2.80 microversion added support for filtering migrations # by user_id and/or project_id 'user_id': parameter_types.single_param({'type': 'string'}), 'project_id': parameter_types.single_param({'type': 'string'}), }) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/multinic.py0000664000175000017500000000304300000000000023566 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.validation import parameter_types add_fixed_ip = { 'type': 'object', 'properties': { 'addFixedIp': { 'type': 'object', 'properties': { # The maxLength is from the column 'uuid' of the # table 'networks' 'networkId': { 'type': ['string', 'number'], 'minLength': 1, 'maxLength': 36, }, }, 'required': ['networkId'], 'additionalProperties': False, }, }, 'required': ['addFixedIp'], 'additionalProperties': False, } remove_fixed_ip = { 'type': 'object', 'properties': { 'removeFixedIp': { 'type': 'object', 'properties': { 'address': parameter_types.ip_address }, 'required': ['address'], 'additionalProperties': False, }, }, 'required': ['removeFixedIp'], 'additionalProperties': False, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/quota_classes.py0000664000175000017500000000344100000000000024612 0ustar00zuulzuul00000000000000# Copyright 2015 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
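# NOTE: migrate_live_v2_25 above lets 'block_migration' also accept 'auto'
# by appending to the enum of a deep copy of the boolean parameter.  A
# self-contained sketch of that move, with a simplified stand-in for
# parameter_types.boolean:
import copy

_boolean_sketch = {'type': ['boolean', 'string'],
                   'enum': [True, False, 'true', 'false']}
_block_migration_sketch = copy.deepcopy(_boolean_sketch)
_block_migration_sketch['enum'].append('auto')
# The copy gains the new value while the shared base parameter is unchanged.
assert 'auto' in _block_migration_sketch['enum']
assert 'auto' not in _boolean_sketch['enum']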
import copy from nova.api.openstack.compute.schemas import quota_sets update = { 'type': 'object', 'properties': { 'type': 'object', 'quota_class_set': { 'properties': quota_sets.quota_resources, 'additionalProperties': False, }, }, 'required': ['quota_class_set'], 'additionalProperties': False, } update_v250 = copy.deepcopy(update) del update_v250['properties']['quota_class_set']['properties']['fixed_ips'] del update_v250['properties']['quota_class_set']['properties']['floating_ips'] del update_v250['properties']['quota_class_set']['properties'][ 'security_groups'] del update_v250['properties']['quota_class_set']['properties'][ 'security_group_rules'] del update_v250['properties']['quota_class_set']['properties']['networks'] # 2.57 builds on 2.50 and removes injected_file* quotas. update_v257 = copy.deepcopy(update_v250) del update_v257['properties']['quota_class_set']['properties'][ 'injected_files'] del update_v257['properties']['quota_class_set']['properties'][ 'injected_file_content_bytes'] del update_v257['properties']['quota_class_set']['properties'][ 'injected_file_path_bytes'] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/quota_sets.py0000664000175000017500000000637200000000000024141 0ustar00zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
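# NOTE: The common_quota definition just below accepts either an integer or
# an integer-looking string.  Keep in mind that JSON Schema's 'minimum' and
# 'maximum' keywords only constrain numeric instances, so string values are
# bounded by the pattern alone.  A standalone sketch with the jsonschema
# library (inline copy of the schema, outside Nova's validation layer):
import jsonschema

_quota_sketch = {
    'type': ['integer', 'string'],
    'pattern': '^-?[0-9]+$',
    'minimum': -1,
    'maximum': 0x7FFFFFFF,
}
jsonschema.validate(-1, _quota_sketch)     # integer flag value for "unlimited"
jsonschema.validate('-1', _quota_sketch)   # the string form matches the pattern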
import copy from nova.api.validation import parameter_types common_quota = { 'type': ['integer', 'string'], 'pattern': '^-?[0-9]+$', # -1 is a flag value for unlimited 'minimum': -1, # maximum's value is limited to db constant's MAX_INT # (in nova/db/constants.py) 'maximum': 0x7FFFFFFF } quota_resources = { 'instances': common_quota, 'cores': common_quota, 'ram': common_quota, 'floating_ips': common_quota, 'fixed_ips': common_quota, 'metadata_items': common_quota, 'key_pairs': common_quota, 'security_groups': common_quota, 'security_group_rules': common_quota, 'injected_files': common_quota, 'injected_file_content_bytes': common_quota, 'injected_file_path_bytes': common_quota, 'server_groups': common_quota, 'server_group_members': common_quota, # NOTE(stephenfin): This will always be rejected since it was nova-network # only, but we need to allow users to submit it at a minimum 'networks': common_quota } update_quota_set = copy.deepcopy(quota_resources) update_quota_set.update({'force': parameter_types.boolean}) update_quota_set_v236 = copy.deepcopy(update_quota_set) del update_quota_set_v236['fixed_ips'] del update_quota_set_v236['floating_ips'] del update_quota_set_v236['security_groups'] del update_quota_set_v236['security_group_rules'] del update_quota_set_v236['networks'] update = { 'type': 'object', 'properties': { 'type': 'object', 'quota_set': { 'properties': update_quota_set, 'additionalProperties': False, }, }, 'required': ['quota_set'], 'additionalProperties': False, } update_v236 = copy.deepcopy(update) update_v236['properties']['quota_set']['properties'] = update_quota_set_v236 # 2.57 builds on 2.36 and removes injected_file* quotas. update_quota_set_v257 = copy.deepcopy(update_quota_set_v236) del update_quota_set_v257['injected_files'] del update_quota_set_v257['injected_file_content_bytes'] del update_quota_set_v257['injected_file_path_bytes'] update_v257 = copy.deepcopy(update_v236) update_v257['properties']['quota_set']['properties'] = update_quota_set_v257 query_schema = { 'type': 'object', 'properties': { 'user_id': parameter_types.multi_params({'type': 'string'}) }, # NOTE(gmann): This is kept True to keep backward compatibility. # As of now Schema validation stripped out the additional parameters and # does not raise 400. In microversion 2.75, we have blocked the additional # parameters. 'additionalProperties': True } query_schema_275 = copy.deepcopy(query_schema) query_schema_275['additionalProperties'] = False ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/remote_consoles.py0000664000175000017500000000746300000000000025154 0ustar00zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
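# NOTE: The *_275 query schemas above are plain deep copies with
# additionalProperties flipped to False, so from microversion 2.75 unknown
# query parameters are rejected instead of being silently stripped.  A
# self-contained sketch of the effect, using a simplified schema validated
# directly with jsonschema:
import copy

import jsonschema

_query_sketch = {
    'type': 'object',
    'properties': {'user_id': {'type': 'string'}},
    'additionalProperties': True,
}
_query_sketch_275 = copy.deepcopy(_query_sketch)
_query_sketch_275['additionalProperties'] = False

# Tolerated before microversion 2.75...
jsonschema.validate({'user_id': 'abc', 'bogus': 'x'}, _query_sketch)
# ...rejected afterwards.
try:
    jsonschema.validate({'user_id': 'abc', 'bogus': 'x'}, _query_sketch_275)
except jsonschema.ValidationError:
    pass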
get_vnc_console = { 'type': 'object', 'properties': { 'os-getVNCConsole': { 'type': 'object', 'properties': { 'type': { 'type': 'string', 'enum': ['novnc', 'xvpvnc'], }, }, 'required': ['type'], 'additionalProperties': False, }, }, 'required': ['os-getVNCConsole'], 'additionalProperties': False, } get_spice_console = { 'type': 'object', 'properties': { 'os-getSPICEConsole': { 'type': 'object', 'properties': { 'type': { 'type': 'string', 'enum': ['spice-html5'], }, }, 'required': ['type'], 'additionalProperties': False, }, }, 'required': ['os-getSPICEConsole'], 'additionalProperties': False, } get_rdp_console = { 'type': 'object', 'properties': { 'os-getRDPConsole': { 'type': 'object', 'properties': { 'type': { 'type': 'string', 'enum': ['rdp-html5'], }, }, 'required': ['type'], 'additionalProperties': False, }, }, 'required': ['os-getRDPConsole'], 'additionalProperties': False, } get_serial_console = { 'type': 'object', 'properties': { 'os-getSerialConsole': { 'type': 'object', 'properties': { 'type': { 'type': 'string', 'enum': ['serial'], }, }, 'required': ['type'], 'additionalProperties': False, }, }, 'required': ['os-getSerialConsole'], 'additionalProperties': False, } create_v26 = { 'type': 'object', 'properties': { 'remote_console': { 'type': 'object', 'properties': { 'protocol': { 'type': 'string', 'enum': ['vnc', 'spice', 'rdp', 'serial'], }, 'type': { 'type': 'string', 'enum': ['novnc', 'xvpvnc', 'rdp-html5', 'spice-html5', 'serial'], }, }, 'required': ['protocol', 'type'], 'additionalProperties': False, }, }, 'required': ['remote_console'], 'additionalProperties': False, } create_v28 = { 'type': 'object', 'properties': { 'remote_console': { 'type': 'object', 'properties': { 'protocol': { 'type': 'string', 'enum': ['vnc', 'spice', 'rdp', 'serial', 'mks'], }, 'type': { 'type': 'string', 'enum': ['novnc', 'xvpvnc', 'rdp-html5', 'spice-html5', 'serial', 'webmks'], }, }, 'required': ['protocol', 'type'], 'additionalProperties': False, }, }, 'required': ['remote_console'], 'additionalProperties': False, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/rescue.py0000664000175000017500000000207600000000000023235 0ustar00zuulzuul00000000000000# Copyright 2014 NEC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.validation import parameter_types rescue = { 'type': 'object', 'properties': { 'rescue': { 'type': ['object', 'null'], 'properties': { 'adminPass': parameter_types.admin_password, 'rescue_image_ref': parameter_types.image_id, }, 'additionalProperties': False, }, }, 'required': ['rescue'], 'additionalProperties': False, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/reset_server_state.py0000664000175000017500000000210200000000000025645 0ustar00zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. reset_state = { 'type': 'object', 'properties': { 'os-resetState': { 'type': 'object', 'properties': { 'state': { 'type': 'string', 'enum': ['active', 'error'], }, }, 'required': ['state'], 'additionalProperties': False, }, }, 'required': ['os-resetState'], 'additionalProperties': False, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/security_groups.py0000664000175000017500000000246300000000000025215 0ustar00zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.validation import parameter_types index_query = { 'type': 'object', 'properties': { 'limit': parameter_types.multi_params( parameter_types.non_negative_integer), 'offset': parameter_types.multi_params( parameter_types.non_negative_integer), 'all_tenants': parameter_types.multi_params({'type': 'string'}) }, # NOTE(gmann): This is kept True to keep backward compatibility. # As of now Schema validation stripped out the additional parameters and # does not raise 400. This API is deprecated in microversion 2.36 so we # do not to update the additionalProperties to False. 'additionalProperties': True } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/server_external_events.py0000664000175000017500000000445200000000000026543 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
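# NOTE: reset_state above limits the requested state to a small enum.  A
# standalone sketch with the jsonschema library, using an inline copy of the
# schema rather than Nova's @validation.schema plumbing:
import jsonschema

_reset_state_sketch = {
    'type': 'object',
    'properties': {
        'os-resetState': {
            'type': 'object',
            'properties': {
                'state': {'type': 'string', 'enum': ['active', 'error']},
            },
            'required': ['state'],
            'additionalProperties': False,
        },
    },
    'required': ['os-resetState'],
    'additionalProperties': False,
}
jsonschema.validate({'os-resetState': {'state': 'error'}}, _reset_state_sketch)
try:
    jsonschema.validate({'os-resetState': {'state': 'building'}}, _reset_state_sketch)
except jsonschema.ValidationError:
    pass   # only 'active' and 'error' are accepted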
import copy from nova.objects import external_event as external_event_obj create = { 'type': 'object', 'properties': { 'events': { 'type': 'array', 'minItems': 1, 'items': { 'type': 'object', 'properties': { 'server_uuid': { 'type': 'string', 'format': 'uuid' }, 'name': { 'type': 'string', 'enum': [ 'network-changed', 'network-vif-plugged', 'network-vif-unplugged', 'network-vif-deleted' ], }, 'status': { 'type': 'string', 'enum': external_event_obj.EVENT_STATUSES, }, 'tag': { 'type': 'string', 'maxLength': 255, }, }, 'required': ['server_uuid', 'name'], 'additionalProperties': False, }, }, }, 'required': ['events'], 'additionalProperties': False, } create_v251 = copy.deepcopy(create) name = create_v251['properties']['events']['items']['properties']['name'] name['enum'].append('volume-extended') create_v276 = copy.deepcopy(create_v251) name = create_v276['properties']['events']['items']['properties']['name'] name['enum'].append('power-update') create_v282 = copy.deepcopy(create_v276) name = create_v282['properties']['events']['items']['properties']['name'] name['enum'].append('accelerator-request-bound') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/server_groups.py0000664000175000017500000000650400000000000024654 0ustar00zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types # NOTE(russellb) There is one other policy, 'legacy', but we don't allow that # being set via the API. It's only used when a group gets automatically # created to support the legacy behavior of the 'group' scheduler hint. create = { 'type': 'object', 'properties': { 'server_group': { 'type': 'object', 'properties': { 'name': parameter_types.name, 'policies': { # This allows only a single item and it must be one of the # enumerated values. It's changed to a single string value # in 2.64. 
'type': 'array', 'items': [ { 'type': 'string', 'enum': ['anti-affinity', 'affinity'], }, ], 'uniqueItems': True, 'minItems': 1, 'maxItems': 1, 'additionalItems': False, } }, 'required': ['name', 'policies'], 'additionalProperties': False, } }, 'required': ['server_group'], 'additionalProperties': False, } create_v215 = copy.deepcopy(create) policies = create_v215['properties']['server_group']['properties']['policies'] policies['items'][0]['enum'].extend(['soft-anti-affinity', 'soft-affinity']) create_v264 = copy.deepcopy(create_v215) del create_v264['properties']['server_group']['properties']['policies'] sg_properties = create_v264['properties']['server_group'] sg_properties['required'].remove('policies') sg_properties['required'].append('policy') sg_properties['properties']['policy'] = { 'type': 'string', 'enum': ['anti-affinity', 'affinity', 'soft-anti-affinity', 'soft-affinity'], } sg_properties['properties']['rules'] = { 'type': 'object', 'properties': { 'max_server_per_host': parameter_types.positive_integer, }, 'additionalProperties': False, } server_groups_query_param = { 'type': 'object', 'properties': { 'all_projects': parameter_types.multi_params({'type': 'string'}), 'limit': parameter_types.multi_params( parameter_types.non_negative_integer), 'offset': parameter_types.multi_params( parameter_types.non_negative_integer), }, # For backward compatible changes. In microversion 2.75, we have # blocked the additional parameters. 'additionalProperties': True } server_groups_query_param_275 = copy.deepcopy(server_groups_query_param) server_groups_query_param_275['additionalProperties'] = False ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/server_metadata.py0000664000175000017500000000246500000000000025117 0ustar00zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types create = { 'type': 'object', 'properties': { 'metadata': parameter_types.metadata }, 'required': ['metadata'], 'additionalProperties': False, } metadata_update = copy.deepcopy(parameter_types.metadata) metadata_update.update({ 'minProperties': 1, 'maxProperties': 1 }) update = { 'type': 'object', 'properties': { 'meta': metadata_update }, 'required': ['meta'], 'additionalProperties': False, } update_all = { 'type': 'object', 'properties': { 'metadata': parameter_types.metadata }, 'required': ['metadata'], 'additionalProperties': False, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/server_migrations.py0000664000175000017500000000151300000000000025504 0ustar00zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. force_complete = { 'type': 'object', 'properties': { 'force_complete': { 'type': 'null' } }, 'required': ['force_complete'], 'additionalProperties': False, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/server_tags.py0000664000175000017500000000205700000000000024272 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.validation import parameter_types from nova.objects import instance update_all = { "title": "Server tags", "type": "object", "properties": { "tags": { "type": "array", "items": parameter_types.tag, "maxItems": instance.MAX_TAG_COUNT } }, 'required': ['tags'], 'additionalProperties': False } update = { "title": "Server tag", "type": "null", 'required': [], 'additionalProperties': False } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/schemas/servers.py0000664000175000017500000006425100000000000023443 0ustar00zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
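# NOTE: create_v264 in server_groups.py above replaces the one-element
# 'policies' array of earlier microversions with a single 'policy' string by
# editing a deep copy of the 2.15 schema.  A self-contained sketch of that
# transformation, with simplified stand-in schemas:
import copy

_sg_v215_sketch = {
    'type': 'object',
    'properties': {
        'name': {'type': 'string'},
        'policies': {'type': 'array', 'maxItems': 1},
    },
    'required': ['name', 'policies'],
}
_sg_v264_sketch = copy.deepcopy(_sg_v215_sketch)
del _sg_v264_sketch['properties']['policies']
_sg_v264_sketch['properties']['policy'] = {
    'type': 'string',
    'enum': ['anti-affinity', 'affinity', 'soft-anti-affinity', 'soft-affinity'],
}
_sg_v264_sketch['required'].remove('policies')
_sg_v264_sketch['required'].append('policy')
# The earlier-microversion schema is left untouched.
assert _sg_v215_sketch['required'] == ['name', 'policies']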
import copy from nova.api.validation import parameter_types from nova.api.validation.parameter_types import multi_params from nova.objects import instance legacy_block_device_mapping = { 'type': 'object', 'properties': { 'virtual_name': { 'type': 'string', 'maxLength': 255, }, 'volume_id': parameter_types.volume_id, 'snapshot_id': parameter_types.image_id, 'volume_size': parameter_types.volume_size, # Do not allow empty device names and number values and # containing spaces(defined in nova/block_device.py:from_api()) 'device_name': { 'type': 'string', 'minLength': 1, 'maxLength': 255, 'pattern': '^[a-zA-Z0-9._-r/]*$', }, # Defined as boolean in nova/block_device.py:from_api() 'delete_on_termination': parameter_types.boolean, 'no_device': {}, # Defined as mediumtext in column "connection_info" in table # "block_device_mapping" 'connection_info': { 'type': 'string', 'maxLength': 16777215 }, }, 'additionalProperties': False } block_device_mapping_v2_new_item = { # defined in nova/block_device.py:from_api() # NOTE: Client can specify the Id with the combination of # source_type and uuid, or a single attribute like volume_id/ # image_id/snapshot_id. 'source_type': { 'type': 'string', 'enum': ['volume', 'image', 'snapshot', 'blank'], }, 'uuid': { 'type': 'string', 'minLength': 1, 'maxLength': 255, 'pattern': '^[a-zA-Z0-9._-]*$', }, 'image_id': parameter_types.image_id, 'destination_type': { 'type': 'string', 'enum': ['local', 'volume'], }, # Defined as varchar(255) in column "guest_format" in table # "block_device_mapping" 'guest_format': { 'type': 'string', 'maxLength': 255, }, # Defined as varchar(255) in column "device_type" in table # "block_device_mapping" 'device_type': { 'type': 'string', 'maxLength': 255, }, # Defined as varchar(255) in column "disk_bus" in table # "block_device_mapping" 'disk_bus': { 'type': 'string', 'maxLength': 255, }, # Defined as integer in nova/block_device.py:from_api() # NOTE(mriedem): boot_index=None is also accepted for backward # compatibility with the legacy v2 API. 'boot_index': { 'type': ['integer', 'string', 'null'], 'pattern': '^-?[0-9]+$', }, } block_device_mapping_v2 = copy.deepcopy(legacy_block_device_mapping) block_device_mapping_v2['properties'].update(block_device_mapping_v2_new_item) _hints = { 'type': 'object', 'properties': { 'group': { 'type': 'string', 'format': 'uuid' }, 'different_host': { # NOTE: The value of 'different_host' is the set of server # uuids where a new server is scheduled on a different host. # A user can specify one server as string parameter and should # specify multiple servers as array parameter instead. 'oneOf': [ { 'type': 'string', 'format': 'uuid' }, { 'type': 'array', 'items': parameter_types.server_id } ] }, 'same_host': { # NOTE: The value of 'same_host' is the set of server # uuids where a new server is scheduled on the same host. 'type': ['string', 'array'], 'items': parameter_types.server_id }, 'query': { # NOTE: The value of 'query' is converted to dict data with # jsonutils.loads() and used for filtering hosts. 'type': ['string', 'object'], }, # NOTE: The value of 'target_cell' is the cell name what cell # a new server is scheduled on. 
'target_cell': parameter_types.name, 'different_cell': { 'type': ['string', 'array'], 'items': { 'type': 'string' } }, 'build_near_host_ip': parameter_types.ip_address, 'cidr': { 'type': 'string', 'pattern': '^/[0-9a-f.:]+$' }, }, # NOTE: As this Mail: # http://lists.openstack.org/pipermail/openstack-dev/2015-June/067996.html # pointed out the limit the scheduler-hints in the API is problematic. So # relax it. 'additionalProperties': True } base_create = { 'type': 'object', 'properties': { 'server': { 'type': 'object', 'properties': { 'name': parameter_types.name, # NOTE(gmann): In case of boot from volume, imageRef was # allowed as the empty string also So keeping the same # behavior and allow empty string in case of boot from # volume only. Python code make sure empty string is # not allowed for other cases. 'imageRef': parameter_types.image_id_or_empty_string, 'flavorRef': parameter_types.flavor_ref, 'adminPass': parameter_types.admin_password, 'metadata': parameter_types.metadata, 'networks': { 'type': 'array', 'items': { 'type': 'object', 'properties': { 'fixed_ip': parameter_types.ip_address, 'port': { 'oneOf': [{'type': 'string', 'format': 'uuid'}, {'type': 'null'}] }, 'uuid': {'type': 'string'}, }, 'additionalProperties': False, } }, 'OS-DCF:diskConfig': parameter_types.disk_config, 'accessIPv4': parameter_types.accessIPv4, 'accessIPv6': parameter_types.accessIPv6, 'personality': parameter_types.personality, 'availability_zone': parameter_types.name, 'block_device_mapping': { 'type': 'array', 'items': legacy_block_device_mapping }, 'block_device_mapping_v2': { 'type': 'array', 'items': block_device_mapping_v2 }, 'config_drive': parameter_types.boolean, 'key_name': parameter_types.name, 'min_count': parameter_types.positive_integer, 'max_count': parameter_types.positive_integer, 'return_reservation_id': parameter_types.boolean, 'security_groups': { 'type': 'array', 'items': { 'type': 'object', 'properties': { # NOTE(oomichi): allocate_for_instance() of # network/neutron.py gets security_group names # or UUIDs from this parameter. # parameter_types.name allows both format. 
'name': parameter_types.name, }, 'additionalProperties': False, } }, 'user_data': { 'type': 'string', 'format': 'base64', 'maxLength': 65535 } }, 'required': ['name', 'flavorRef'], 'additionalProperties': False, }, 'os:scheduler_hints': _hints, 'OS-SCH-HNT:scheduler_hints': _hints, }, 'required': ['server'], 'additionalProperties': False, } base_create_v20 = copy.deepcopy(base_create) base_create_v20['properties']['server'][ 'properties']['name'] = parameter_types.name_with_leading_trailing_spaces base_create_v20['properties']['server']['properties'][ 'availability_zone'] = parameter_types.name_with_leading_trailing_spaces base_create_v20['properties']['server']['properties'][ 'key_name'] = parameter_types.name_with_leading_trailing_spaces base_create_v20['properties']['server']['properties'][ 'security_groups']['items']['properties']['name'] = ( parameter_types.name_with_leading_trailing_spaces) base_create_v20['properties']['server']['properties'][ 'user_data'] = { 'oneOf': [{'type': 'string', 'format': 'base64', 'maxLength': 65535}, {'type': 'null'}, ], } base_create_v219 = copy.deepcopy(base_create) base_create_v219['properties']['server'][ 'properties']['description'] = parameter_types.description base_create_v232 = copy.deepcopy(base_create_v219) base_create_v232['properties']['server'][ 'properties']['networks']['items'][ 'properties']['tag'] = parameter_types.tag base_create_v232['properties']['server'][ 'properties']['block_device_mapping_v2']['items'][ 'properties']['tag'] = parameter_types.tag # NOTE(artom) the following conditional was merged as # "if version == '2.32'" The intent all along was to check whether # version was greater than or equal to 2.32. In other words, we wanted # to support tags in versions 2.32 and up, but ended up supporting them # in version 2.32 only. Since we need a new microversion to add request # body attributes, tags have been re-added in version 2.42. # NOTE(gmann) Below schema 'base_create_v233' is added (builds on 2.19 schema) # to keep the above mentioned behavior while merging the extension schema code # into server schema file. Below is the ref code where BDM tag was originally # got added for 2.32 microversion *only*. # Ref- https://opendev.org/openstack/nova/src/commit/ # 9882a60e69a5ab8da314a199a56defc05098b743/nova/api/ # openstack/compute/block_device_mapping.py#L71 base_create_v233 = copy.deepcopy(base_create_v219) base_create_v233['properties']['server'][ 'properties']['networks']['items'][ 'properties']['tag'] = parameter_types.tag # 2.37 builds on 2.32 and makes the following changes: # 1. server.networks is required # 2. server.networks is now either an enum or a list # 3. server.networks.uuid is now required to be a uuid base_create_v237 = copy.deepcopy(base_create_v233) base_create_v237['properties']['server']['required'].append('networks') base_create_v237['properties']['server']['properties']['networks'] = { 'oneOf': [ {'type': 'array', 'items': { 'type': 'object', 'properties': { 'fixed_ip': parameter_types.ip_address, 'port': { 'oneOf': [{'type': 'string', 'format': 'uuid'}, {'type': 'null'}] }, 'uuid': {'type': 'string', 'format': 'uuid'}, }, 'additionalProperties': False, }, }, {'type': 'string', 'enum': ['none', 'auto']}, ]} # 2.42 builds on 2.37 and re-introduces the tag field to the list of network # objects. 
base_create_v242 = copy.deepcopy(base_create_v237) base_create_v242['properties']['server']['properties']['networks'] = { 'oneOf': [ {'type': 'array', 'items': { 'type': 'object', 'properties': { 'fixed_ip': parameter_types.ip_address, 'port': { 'oneOf': [{'type': 'string', 'format': 'uuid'}, {'type': 'null'}] }, 'uuid': {'type': 'string', 'format': 'uuid'}, 'tag': parameter_types.tag, }, 'additionalProperties': False, }, }, {'type': 'string', 'enum': ['none', 'auto']}, ]} base_create_v242['properties']['server'][ 'properties']['block_device_mapping_v2']['items'][ 'properties']['tag'] = parameter_types.tag # 2.52 builds on 2.42 and makes the following changes: # Allowing adding tags to instances when booting base_create_v252 = copy.deepcopy(base_create_v242) base_create_v252['properties']['server']['properties']['tags'] = { "type": "array", "items": parameter_types.tag, "maxItems": instance.MAX_TAG_COUNT } # 2.57 builds on 2.52 and removes the personality parameter. base_create_v257 = copy.deepcopy(base_create_v252) base_create_v257['properties']['server']['properties'].pop('personality') # 2.63 builds on 2.57 and makes the following changes: # Allowing adding trusted certificates to instances when booting base_create_v263 = copy.deepcopy(base_create_v257) base_create_v263['properties']['server']['properties'][ 'trusted_image_certificates'] = parameter_types.trusted_certs # Add volume type in block_device_mapping_v2. base_create_v267 = copy.deepcopy(base_create_v263) base_create_v267['properties']['server']['properties'][ 'block_device_mapping_v2']['items'][ 'properties']['volume_type'] = parameter_types.volume_type # Add host and hypervisor_hostname in server base_create_v274 = copy.deepcopy(base_create_v267) base_create_v274['properties']['server'][ 'properties']['host'] = parameter_types.hostname base_create_v274['properties']['server'][ 'properties']['hypervisor_hostname'] = parameter_types.hostname base_update = { 'type': 'object', 'properties': { 'server': { 'type': 'object', 'properties': { 'name': parameter_types.name, 'OS-DCF:diskConfig': parameter_types.disk_config, 'accessIPv4': parameter_types.accessIPv4, 'accessIPv6': parameter_types.accessIPv6, }, 'additionalProperties': False, }, }, 'required': ['server'], 'additionalProperties': False, } base_update_v20 = copy.deepcopy(base_update) base_update_v20['properties']['server'][ 'properties']['name'] = parameter_types.name_with_leading_trailing_spaces base_update_v219 = copy.deepcopy(base_update) base_update_v219['properties']['server'][ 'properties']['description'] = parameter_types.description base_rebuild = { 'type': 'object', 'properties': { 'rebuild': { 'type': 'object', 'properties': { 'name': parameter_types.name, 'imageRef': parameter_types.image_id, 'adminPass': parameter_types.admin_password, 'metadata': parameter_types.metadata, 'preserve_ephemeral': parameter_types.boolean, 'OS-DCF:diskConfig': parameter_types.disk_config, 'accessIPv4': parameter_types.accessIPv4, 'accessIPv6': parameter_types.accessIPv6, 'personality': parameter_types.personality, }, 'required': ['imageRef'], 'additionalProperties': False, }, }, 'required': ['rebuild'], 'additionalProperties': False, } base_rebuild_v20 = copy.deepcopy(base_rebuild) base_rebuild_v20['properties']['rebuild'][ 'properties']['name'] = parameter_types.name_with_leading_trailing_spaces base_rebuild_v219 = copy.deepcopy(base_rebuild) base_rebuild_v219['properties']['rebuild'][ 'properties']['description'] = parameter_types.description base_rebuild_v254 = 
copy.deepcopy(base_rebuild_v219) base_rebuild_v254['properties']['rebuild'][ 'properties']['key_name'] = parameter_types.name_or_none # 2.57 builds on 2.54 and makes the following changes: # 1. Remove the personality parameter. # 2. Add the user_data parameter which is nullable so user_data can be reset. base_rebuild_v257 = copy.deepcopy(base_rebuild_v254) base_rebuild_v257['properties']['rebuild']['properties'].pop('personality') base_rebuild_v257['properties']['rebuild']['properties']['user_data'] = ({ 'oneOf': [ {'type': 'string', 'format': 'base64', 'maxLength': 65535}, {'type': 'null'} ] }) # 2.63 builds on 2.57 and makes the following changes: # Allowing adding trusted certificates to instances when rebuilding base_rebuild_v263 = copy.deepcopy(base_rebuild_v257) base_rebuild_v263['properties']['rebuild']['properties'][ 'trusted_image_certificates'] = parameter_types.trusted_certs resize = { 'type': 'object', 'properties': { 'resize': { 'type': 'object', 'properties': { 'flavorRef': parameter_types.flavor_ref, 'OS-DCF:diskConfig': parameter_types.disk_config, }, 'required': ['flavorRef'], 'additionalProperties': False, }, }, 'required': ['resize'], 'additionalProperties': False, } create_image = { 'type': 'object', 'properties': { 'createImage': { 'type': 'object', 'properties': { 'name': parameter_types.name, 'metadata': parameter_types.metadata }, 'required': ['name'], 'additionalProperties': False } }, 'required': ['createImage'], 'additionalProperties': False } create_image_v20 = copy.deepcopy(create_image) create_image_v20['properties']['createImage'][ 'properties']['name'] = parameter_types.name_with_leading_trailing_spaces reboot = { 'type': 'object', 'properties': { 'reboot': { 'type': 'object', 'properties': { 'type': { 'type': 'string', 'enum': ['HARD', 'Hard', 'hard', 'SOFT', 'Soft', 'soft'] } }, 'required': ['type'], 'additionalProperties': False } }, 'required': ['reboot'], 'additionalProperties': False } trigger_crash_dump = { 'type': 'object', 'properties': { 'trigger_crash_dump': { 'type': 'null' } }, 'required': ['trigger_crash_dump'], 'additionalProperties': False } JOINED_TABLE_QUERY_PARAMS_SERVERS = { 'block_device_mapping': parameter_types.common_query_param, 'services': parameter_types.common_query_param, 'metadata': parameter_types.common_query_param, 'system_metadata': parameter_types.common_query_param, 'info_cache': parameter_types.common_query_param, 'security_groups': parameter_types.common_query_param, 'pci_devices': parameter_types.common_query_param } # These fields are valid values for sort_keys before we start # using schema validation, but are considered to be bad values # and disabled to use. In order to avoid backward incompatibility, # they are ignored instead of return HTTP 400. SERVER_LIST_IGNORE_SORT_KEY = [ 'architecture', 'cell_name', 'cleaned', 'default_ephemeral_device', 'default_swap_device', 'deleted', 'deleted_at', 'disable_terminate', 'ephemeral_gb', 'ephemeral_key_uuid', 'id', 'key_data', 'launched_on', 'locked', 'memory_mb', 'os_type', 'reservation_id', 'root_gb', 'shutdown_terminate', 'user_data', 'vcpus', 'vm_mode' ] # From microversion 2.73 we start offering locked as a valid sort key. 
SERVER_LIST_IGNORE_SORT_KEY_V273 = list(SERVER_LIST_IGNORE_SORT_KEY) SERVER_LIST_IGNORE_SORT_KEY_V273.remove('locked') VALID_SORT_KEYS = { "type": "string", "enum": ['access_ip_v4', 'access_ip_v6', 'auto_disk_config', 'availability_zone', 'config_drive', 'created_at', 'display_description', 'display_name', 'host', 'hostname', 'image_ref', 'instance_type_id', 'kernel_id', 'key_name', 'launch_index', 'launched_at', 'locked_by', 'node', 'power_state', 'progress', 'project_id', 'ramdisk_id', 'root_device_name', 'task_state', 'terminated_at', 'updated_at', 'user_id', 'uuid', 'vm_state'] + SERVER_LIST_IGNORE_SORT_KEY } # We reuse the existing list and add locked to the list of valid sort keys. VALID_SORT_KEYS_V273 = { "type": "string", "enum": ['locked'] + list( set(VALID_SORT_KEYS["enum"]) - set(SERVER_LIST_IGNORE_SORT_KEY)) + SERVER_LIST_IGNORE_SORT_KEY_V273 } query_params_v21 = { 'type': 'object', 'properties': { 'user_id': parameter_types.common_query_param, 'project_id': parameter_types.common_query_param, # The alias of project_id. It should be removed in the # future with microversion bump. 'tenant_id': parameter_types.common_query_param, 'launch_index': parameter_types.common_query_param, # The alias of image. It should be removed in the # future with microversion bump. 'image_ref': parameter_types.common_query_param, 'image': parameter_types.common_query_param, 'kernel_id': parameter_types.common_query_regex_param, 'ramdisk_id': parameter_types.common_query_regex_param, 'hostname': parameter_types.common_query_regex_param, 'key_name': parameter_types.common_query_regex_param, 'power_state': parameter_types.common_query_regex_param, 'vm_state': parameter_types.common_query_param, 'task_state': parameter_types.common_query_param, 'host': parameter_types.common_query_param, 'node': parameter_types.common_query_regex_param, 'flavor': parameter_types.common_query_regex_param, 'reservation_id': parameter_types.common_query_regex_param, 'launched_at': parameter_types.common_query_regex_param, 'terminated_at': parameter_types.common_query_regex_param, 'availability_zone': parameter_types.common_query_regex_param, # NOTE(alex_xu): This is pattern matching, it didn't get any benefit # from DB index. 'name': parameter_types.common_query_regex_param, # The alias of name. It should be removed in the future # with microversion bump. 'display_name': parameter_types.common_query_regex_param, 'description': parameter_types.common_query_regex_param, # The alias of description. It should be removed in the # future with microversion bump. 'display_description': parameter_types.common_query_regex_param, 'locked_by': parameter_types.common_query_regex_param, 'uuid': parameter_types.common_query_param, 'root_device_name': parameter_types.common_query_regex_param, 'config_drive': parameter_types.common_query_regex_param, 'access_ip_v4': parameter_types.common_query_regex_param, 'access_ip_v6': parameter_types.common_query_regex_param, 'auto_disk_config': parameter_types.common_query_regex_param, 'progress': parameter_types.common_query_regex_param, 'sort_key': multi_params(VALID_SORT_KEYS), 'sort_dir': parameter_types.common_query_param, 'all_tenants': parameter_types.common_query_param, 'soft_deleted': parameter_types.common_query_param, 'deleted': parameter_types.common_query_param, 'status': parameter_types.common_query_param, 'changes-since': multi_params({'type': 'string', 'format': 'date-time'}), # NOTE(alex_xu): The ip and ip6 are implemented in the python. 
'ip': parameter_types.common_query_regex_param, 'ip6': parameter_types.common_query_regex_param, 'created_at': parameter_types.common_query_regex_param, }, # For backward-compatible additionalProperties is set to be True here. # And we will either strip the extra params out or raise HTTP 400 # according to the params' value in the later process. # This has been changed to False in microversion 2.75. From # microversion 2.75, no additional unknown parameter will be allowed. 'additionalProperties': True, # Prevent internal-attributes that are started with underscore from # being striped out in schema validation, and raise HTTP 400 in API. 'patternProperties': {"^_": parameter_types.common_query_param} } # Update the joined-table fields to the list so it will not be # stripped in later process, thus can be handled later in api # to raise HTTP 400. query_params_v21['properties'].update( JOINED_TABLE_QUERY_PARAMS_SERVERS) query_params_v21['properties'].update( parameter_types.pagination_parameters) query_params_v226 = copy.deepcopy(query_params_v21) query_params_v226['properties'].update({ 'tags': parameter_types.common_query_regex_param, 'tags-any': parameter_types.common_query_regex_param, 'not-tags': parameter_types.common_query_regex_param, 'not-tags-any': parameter_types.common_query_regex_param, }) query_params_v266 = copy.deepcopy(query_params_v226) query_params_v266['properties'].update({ 'changes-before': multi_params({'type': 'string', 'format': 'date-time'}), }) query_params_v273 = copy.deepcopy(query_params_v266) query_params_v273['properties'].update({ 'sort_key': multi_params(VALID_SORT_KEYS_V273), 'locked': parameter_types.common_query_param, }) # Microversion 2.75 makes query schema to disallow any invalid or unknown # query parameters (filter or sort keys). # *****Schema updates for microversion 2.75 start here******* query_params_v275 = copy.deepcopy(query_params_v273) # 1. Update sort_keys to allow only valid sort keys: # NOTE(gmann): Remove the ignored sort keys now because 'additionalProperties' # is Flase for query schema. Starting from miceoversion 2.75, API will # raise 400 for any not-allowed sort keys instead of ignoring them. VALID_SORT_KEYS_V275 = copy.deepcopy(VALID_SORT_KEYS_V273) VALID_SORT_KEYS_V275['enum'] = list( set(VALID_SORT_KEYS_V273["enum"]) - set( SERVER_LIST_IGNORE_SORT_KEY_V273)) query_params_v275['properties'].update({ 'sort_key': multi_params(VALID_SORT_KEYS_V275), }) # 2. Make 'additionalProperties' False. query_params_v275['additionalProperties'] = False # *****Schema updates for microversion 2.75 end here******* ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/schemas/services.py0000664000175000017500000000522300000000000023567 0ustar00zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
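# ---------------------------------------------------------------------------
# Illustrative sketch, not part of the Nova source tree: the practical effect
# of flipping ``additionalProperties`` to False in the microversion 2.75
# query schemas above.  The toy schema only mimics the shape of the real
# ones, which are built from parameter_types helpers and accept multi-valued
# query parameters.
import copy

import jsonschema

_toy_query_schema = {
    'type': 'object',
    'properties': {
        'host': {'type': 'string'},
        'binary': {'type': 'string'},
    },
    # Pre-2.75 behaviour: unknown parameters are tolerated (stripped or
    # ignored later in the request pipeline).
    'additionalProperties': True,
}

_toy_query_schema_275 = copy.deepcopy(_toy_query_schema)
# From microversion 2.75 on, unknown parameters fail schema validation and
# the API returns HTTP 400 instead of silently ignoring them.
_toy_query_schema_275['additionalProperties'] = False

jsonschema.validate({'host': 'compute-1', 'bogus': 'x'}, _toy_query_schema)

try:
    jsonschema.validate({'host': 'compute-1', 'bogus': 'x'},
                        _toy_query_schema_275)
except jsonschema.ValidationError:
    pass  # surfaced to the caller as a 400 response
# ---------------------------------------------------------------------------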
import copy from nova.api.validation import parameter_types service_update = { 'type': 'object', 'properties': { 'host': parameter_types.hostname, 'binary': { 'type': 'string', 'minLength': 1, 'maxLength': 255, }, 'disabled_reason': { 'type': 'string', 'minLength': 1, 'maxLength': 255, } }, 'required': ['host', 'binary'], 'additionalProperties': False } service_update_v211 = { 'type': 'object', 'properties': { 'host': parameter_types.hostname, 'binary': { 'type': 'string', 'minLength': 1, 'maxLength': 255, }, 'disabled_reason': { 'type': 'string', 'minLength': 1, 'maxLength': 255, }, 'forced_down': parameter_types.boolean }, 'required': ['host', 'binary'], 'additionalProperties': False } # The 2.53 body is for updating a service's status and/or forced_down fields. # There are no required attributes since the service is identified using a # unique service_id on the request path, and status and/or forced_down can # be specified in the body. If status=='disabled', then 'disabled_reason' is # also checked in the body but is not required. Requesting status='enabled' and # including a 'disabled_reason' results in a 400, but this is checked in code. service_update_v2_53 = { 'type': 'object', 'properties': { 'status': { 'type': 'string', 'enum': ['enabled', 'disabled'], }, 'disabled_reason': { 'type': 'string', 'minLength': 1, 'maxLength': 255, }, 'forced_down': parameter_types.boolean }, 'additionalProperties': False } index_query_schema = { 'type': 'object', 'properties': { 'host': parameter_types.common_query_param, 'binary': parameter_types.common_query_param, }, # For backward compatible changes 'additionalProperties': True } index_query_schema_275 = copy.deepcopy(index_query_schema) index_query_schema_275['additionalProperties'] = False ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/shelve.py0000664000175000017500000000270600000000000023235 0ustar00zuulzuul00000000000000# Copyright 2019 INSPUR Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.validation import parameter_types # NOTE(brinzhang): For older microversion there will be no change as # schema is applied only for >2.77 with unshelve a server API. # Anything working in old version keep working as it is. unshelve_v277 = { 'type': 'object', 'properties': { 'unshelve': { 'type': ['object', 'null'], 'properties': { 'availability_zone': parameter_types.name }, # NOTE: The allowed request body is {'unshelve': null} or # {'unshelve': {'availability_zone': }}, not allowed # {'unshelve': {}} as the request body for unshelve. 
'required': ['availability_zone'], 'additionalProperties': False, }, }, 'required': ['unshelve'], 'additionalProperties': False, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/simple_tenant_usage.py0000664000175000017500000000410000000000000025763 0ustar00zuulzuul00000000000000# Copyright 2017 NEC Corporation. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types index_query = { 'type': 'object', 'properties': { 'start': parameter_types.multi_params({'type': 'string'}), 'end': parameter_types.multi_params({'type': 'string'}), 'detailed': parameter_types.multi_params({'type': 'string'}) }, # NOTE(gmann): This is kept True to keep backward compatibility. # As of now Schema validation stripped out the additional parameters and # does not raise 400. In microversion 2.75, we have blocked the additional # parameters. 'additionalProperties': True } show_query = { 'type': 'object', 'properties': { 'start': parameter_types.multi_params({'type': 'string'}), 'end': parameter_types.multi_params({'type': 'string'}) }, # NOTE(gmann): This is kept True to keep backward compatibility. # As of now Schema validation stripped out the additional parameters and # does not raise 400. In microversion 2.75, we have blocked the additional # parameters. 'additionalProperties': True } index_query_v240 = copy.deepcopy(index_query) index_query_v240['properties'].update( parameter_types.pagination_parameters) show_query_v240 = copy.deepcopy(show_query) show_query_v240['properties'].update( parameter_types.pagination_parameters) index_query_275 = copy.deepcopy(index_query_v240) index_query_275['additionalProperties'] = False show_query_275 = copy.deepcopy(show_query_v240) show_query_275['additionalProperties'] = False ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/schemas/volumes.py0000664000175000017500000001200400000000000023431 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
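# ---------------------------------------------------------------------------
# Illustrative sketch, not part of the Nova source tree: the three request
# bodies discussed in the unshelve_v277 NOTE above.  ``parameter_types.name``
# is replaced by a plain string type purely to keep the example
# self-contained; only the third-party ``jsonschema`` package is assumed.
import jsonschema

_unshelve_v277_sketch = {
    'type': 'object',
    'properties': {
        'unshelve': {
            'type': ['object', 'null'],
            'properties': {
                'availability_zone': {'type': 'string'},
            },
            'required': ['availability_zone'],
            'additionalProperties': False,
        },
    },
    'required': ['unshelve'],
    'additionalProperties': False,
}

# Allowed: {"unshelve": null}
jsonschema.validate({'unshelve': None}, _unshelve_v277_sketch)
# Allowed: {"unshelve": {"availability_zone": "az-1"}}
jsonschema.validate({'unshelve': {'availability_zone': 'az-1'}},
                    _unshelve_v277_sketch)
# Not allowed: {"unshelve": {}} -- once an object is supplied,
# 'availability_zone' is required, so this fails (HTTP 400 at the API layer).
try:
    jsonschema.validate({'unshelve': {}}, _unshelve_v277_sketch)
except jsonschema.ValidationError:
    pass
# ---------------------------------------------------------------------------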
import copy from nova.api.validation import parameter_types create = { 'type': 'object', 'properties': { 'volume': { 'type': 'object', 'properties': { 'volume_type': {'type': 'string'}, 'metadata': {'type': 'object'}, 'snapshot_id': {'type': 'string'}, 'size': { 'type': ['integer', 'string'], 'pattern': '^[0-9]+$', 'minimum': 1 }, 'availability_zone': {'type': 'string'}, 'display_name': {'type': 'string'}, 'display_description': {'type': 'string'}, }, 'required': ['size'], 'additionalProperties': False, }, }, 'required': ['volume'], 'additionalProperties': False, } snapshot_create = { 'type': 'object', 'properties': { 'snapshot': { 'type': 'object', 'properties': { 'volume_id': {'type': 'string'}, 'force': parameter_types.boolean, 'display_name': {'type': 'string'}, 'display_description': {'type': 'string'}, }, 'required': ['volume_id'], 'additionalProperties': False, }, }, 'required': ['snapshot'], 'additionalProperties': False, } create_volume_attachment = { 'type': 'object', 'properties': { 'volumeAttachment': { 'type': 'object', 'properties': { 'volumeId': parameter_types.volume_id, 'device': { 'type': ['string', 'null'], # NOTE: The validation pattern from match_device() in # nova/block_device.py. 'pattern': '(^/dev/x{0,1}[a-z]{0,1}d{0,1})([a-z]+)[0-9]*$' }, }, 'required': ['volumeId'], 'additionalProperties': False, }, }, 'required': ['volumeAttachment'], 'additionalProperties': False, } create_volume_attachment_v249 = copy.deepcopy(create_volume_attachment) create_volume_attachment_v249['properties']['volumeAttachment'][ 'properties']['tag'] = parameter_types.tag create_volume_attachment_v279 = copy.deepcopy(create_volume_attachment_v249) create_volume_attachment_v279['properties']['volumeAttachment'][ 'properties']['delete_on_termination'] = parameter_types.boolean update_volume_attachment = copy.deepcopy(create_volume_attachment) del update_volume_attachment['properties']['volumeAttachment'][ 'properties']['device'] # NOTE(brinzhang): Allow attachment_id, serverId, device, tag, and # delete_on_termination (i.e., follow the content of the GET response) # to be specified for RESTfulness, even though we will not allow updating # all of them. update_volume_attachment_v285 = { 'type': 'object', 'properties': { 'volumeAttachment': { 'type': 'object', 'properties': { 'volumeId': parameter_types.volume_id, 'device': { 'type': ['string', 'null'], # NOTE: The validation pattern from match_device() in # nova/block_device.py. 'pattern': '(^/dev/x{0,1}[a-z]{0,1}d{0,1})([a-z]+)[0-9]*$' }, 'tag': parameter_types.tag, 'delete_on_termination': parameter_types.boolean, 'serverId': parameter_types.server_id, 'id': parameter_types.attachment_id }, 'required': ['volumeId'], 'additionalProperties': False, }, }, 'required': ['volumeAttachment'], 'additionalProperties': False, } index_query = { 'type': 'object', 'properties': { 'limit': parameter_types.multi_params( parameter_types.non_negative_integer), 'offset': parameter_types.multi_params( parameter_types.non_negative_integer) }, # NOTE(gmann): This is kept True to keep backward compatibility. # As of now Schema validation stripped out the additional parameters and # does not raise 400. In microversion 2.75, we have blocked the additional # parameters. 
'additionalProperties': True } detail_query = index_query index_query_275 = copy.deepcopy(index_query) index_query_275['additionalProperties'] = False ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/security_group_default_rules.py0000664000175000017500000000220700000000000026321 0ustar00zuulzuul00000000000000# Copyright 2013 Metacloud Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from nova.api.openstack import wsgi class SecurityGroupDefaultRulesController(wsgi.Controller): """(Removed) Controller for default project security groups.""" @wsgi.expected_errors(410) def create(self, req, body): raise exc.HTTPGone() @wsgi.expected_errors(410) def show(self, req, id): raise exc.HTTPGone() @wsgi.expected_errors(410) def delete(self, req, id): raise exc.HTTPGone() @wsgi.expected_errors(410) def index(self, req): raise exc.HTTPGone() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/security_groups.py0000664000175000017500000004664200000000000023601 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2012 Justin Santa Barbara # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The security groups extension.""" from oslo_log import log as logging from webob import exc from nova.api.openstack.api_version_request \ import MAX_PROXY_API_SUPPORT_VERSION from nova.api.openstack import common from nova.api.openstack.compute.schemas import security_groups as \ schema_security_groups from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova import exception from nova.i18n import _ from nova.network import security_group_api from nova.policies import security_groups as sg_policies from nova.virt import netutils LOG = logging.getLogger(__name__) SG_NOT_FOUND = object() def _authorize_context(req): context = req.environ['nova.context'] context.can(sg_policies.BASE_POLICY_NAME) return context class SecurityGroupControllerBase(object): """Base class for Security Group controllers.""" def __init__(self): super(SecurityGroupControllerBase, self).__init__() self.compute_api = compute.API() def _format_security_group_rule(self, context, rule, group_rule_data=None): """Return a security group rule in desired API response format. If group_rule_data is passed in that is used rather than querying for it. 
""" sg_rule = {} sg_rule['id'] = rule['id'] sg_rule['parent_group_id'] = rule['parent_group_id'] sg_rule['ip_protocol'] = rule['protocol'] sg_rule['from_port'] = rule['from_port'] sg_rule['to_port'] = rule['to_port'] sg_rule['group'] = {} sg_rule['ip_range'] = {} if group_rule_data: sg_rule['group'] = group_rule_data elif rule['group_id']: try: source_group = security_group_api.get( context, id=rule['group_id']) except exception.SecurityGroupNotFound: # NOTE(arosen): There is a possible race condition that can # occur here if two api calls occur concurrently: one that # lists the security groups and another one that deletes a # security group rule that has a group_id before the # group_id is fetched. To handle this if # SecurityGroupNotFound is raised we return None instead # of the rule and the caller should ignore the rule. LOG.debug("Security Group ID %s does not exist", rule['group_id']) return sg_rule['group'] = {'name': source_group.get('name'), 'tenant_id': source_group.get('project_id')} else: sg_rule['ip_range'] = {'cidr': rule['cidr']} return sg_rule def _format_security_group(self, context, group, group_rule_data_by_rule_group_id=None): security_group = {} security_group['id'] = group['id'] security_group['description'] = group['description'] security_group['name'] = group['name'] security_group['tenant_id'] = group['project_id'] security_group['rules'] = [] for rule in group['rules']: group_rule_data = None if rule['group_id'] and group_rule_data_by_rule_group_id: group_rule_data = ( group_rule_data_by_rule_group_id.get(rule['group_id'])) if group_rule_data == SG_NOT_FOUND: # The security group for the rule was not found so skip it. continue formatted_rule = self._format_security_group_rule( context, rule, group_rule_data) if formatted_rule: security_group['rules'] += [formatted_rule] return security_group def _get_group_rule_data_by_rule_group_id(self, context, groups): group_rule_data_by_rule_group_id = {} # Pre-populate with the group information itself in case any of the # rule group IDs are the in-scope groups. for group in groups: group_rule_data_by_rule_group_id[group['id']] = { 'name': group.get('name'), 'tenant_id': group.get('project_id')} for group in groups: for rule in group['rules']: rule_group_id = rule['group_id'] if (rule_group_id and rule_group_id not in group_rule_data_by_rule_group_id): try: source_group = security_group_api.get( context, id=rule['group_id']) group_rule_data_by_rule_group_id[rule_group_id] = { 'name': source_group.get('name'), 'tenant_id': source_group.get('project_id')} except exception.SecurityGroupNotFound: LOG.debug("Security Group %s does not exist", rule_group_id) # Use a sentinel so we don't process this group again. 
group_rule_data_by_rule_group_id[rule_group_id] = ( SG_NOT_FOUND) return group_rule_data_by_rule_group_id def _from_body(self, body, key): if not body: raise exc.HTTPBadRequest( explanation=_("The request body can't be empty")) value = body.get(key, None) if value is None: raise exc.HTTPBadRequest( explanation=_("Missing parameter %s") % key) return value class SecurityGroupController(SecurityGroupControllerBase, wsgi.Controller): """The Security group API controller for the OpenStack API.""" @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 404)) def show(self, req, id): """Return data about the given security group.""" context = _authorize_context(req) try: id = security_group_api.validate_id(id) security_group = security_group_api.get(context, id) except exception.SecurityGroupNotFound as exp: raise exc.HTTPNotFound(explanation=exp.format_message()) except exception.Invalid as exp: raise exc.HTTPBadRequest(explanation=exp.format_message()) return {'security_group': self._format_security_group(context, security_group)} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 404)) @wsgi.response(202) def delete(self, req, id): """Delete a security group.""" context = _authorize_context(req) try: id = security_group_api.validate_id(id) security_group = security_group_api.get(context, id) security_group_api.destroy(context, security_group) except exception.SecurityGroupNotFound as exp: raise exc.HTTPNotFound(explanation=exp.format_message()) except exception.Invalid as exp: raise exc.HTTPBadRequest(explanation=exp.format_message()) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @validation.query_schema(schema_security_groups.index_query) @wsgi.expected_errors(404) def index(self, req): """Returns a list of security groups.""" context = _authorize_context(req) search_opts = {} search_opts.update(req.GET) project_id = context.project_id raw_groups = security_group_api.list( context, project=project_id, search_opts=search_opts) limited_list = common.limited(raw_groups, req) result = [self._format_security_group(context, group) for group in limited_list] return {'security_groups': list(sorted(result, key=lambda k: (k['tenant_id'], k['name'])))} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 403)) def create(self, req, body): """Creates a new security group.""" context = _authorize_context(req) security_group = self._from_body(body, 'security_group') group_name = security_group.get('name', None) group_description = security_group.get('description', None) try: security_group_api.validate_property(group_name, 'name', None) security_group_api.validate_property(group_description, 'description', None) group_ref = security_group_api.create_security_group( context, group_name, group_description) except exception.Invalid as exp: raise exc.HTTPBadRequest(explanation=exp.format_message()) except exception.SecurityGroupLimitExceeded as exp: raise exc.HTTPForbidden(explanation=exp.format_message()) return {'security_group': self._format_security_group(context, group_ref)} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 404)) def update(self, req, id, body): """Update a security group.""" context = _authorize_context(req) try: id = security_group_api.validate_id(id) security_group = security_group_api.get(context, id) except exception.SecurityGroupNotFound as exp: raise 
exc.HTTPNotFound(explanation=exp.format_message()) except exception.Invalid as exp: raise exc.HTTPBadRequest(explanation=exp.format_message()) security_group_data = self._from_body(body, 'security_group') group_name = security_group_data.get('name', None) group_description = security_group_data.get('description', None) try: security_group_api.validate_property(group_name, 'name', None) security_group_api.validate_property( group_description, 'description', None) group_ref = security_group_api.update_security_group( context, security_group, group_name, group_description) except exception.SecurityGroupNotFound as exp: raise exc.HTTPNotFound(explanation=exp.format_message()) except exception.Invalid as exp: raise exc.HTTPBadRequest(explanation=exp.format_message()) return {'security_group': self._format_security_group(context, group_ref)} class SecurityGroupRulesController(SecurityGroupControllerBase, wsgi.Controller): @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 403, 404)) def create(self, req, body): context = _authorize_context(req) sg_rule = self._from_body(body, 'security_group_rule') group_id = sg_rule.get('group_id') source_group = {} try: parent_group_id = security_group_api.validate_id( sg_rule.get('parent_group_id')) security_group = security_group_api.get( context, parent_group_id) if group_id is not None: group_id = security_group_api.validate_id(group_id) source_group = security_group_api.get( context, id=group_id) new_rule = self._rule_args_to_dict(context, to_port=sg_rule.get('to_port'), from_port=sg_rule.get('from_port'), ip_protocol=sg_rule.get('ip_protocol'), cidr=sg_rule.get('cidr'), group_id=group_id) except (exception.Invalid, exception.InvalidCidr) as exp: raise exc.HTTPBadRequest(explanation=exp.format_message()) except exception.SecurityGroupNotFound as exp: raise exc.HTTPNotFound(explanation=exp.format_message()) if new_rule is None: msg = _("Not enough parameters to build a valid rule.") raise exc.HTTPBadRequest(explanation=msg) new_rule['parent_group_id'] = security_group['id'] if 'cidr' in new_rule: net, prefixlen = netutils.get_net_and_prefixlen(new_rule['cidr']) if net not in ('0.0.0.0', '::') and prefixlen == '0': msg = _("Bad prefix for network in cidr %s") % new_rule['cidr'] raise exc.HTTPBadRequest(explanation=msg) group_rule_data = None try: if group_id: group_rule_data = {'name': source_group.get('name'), 'tenant_id': source_group.get('project_id')} security_group_rule = ( security_group_api.create_security_group_rule( context, security_group, new_rule)) except exception.Invalid as exp: raise exc.HTTPBadRequest(explanation=exp.format_message()) except exception.SecurityGroupNotFound as exp: raise exc.HTTPNotFound(explanation=exp.format_message()) except exception.SecurityGroupLimitExceeded as exp: raise exc.HTTPForbidden(explanation=exp.format_message()) formatted_rule = self._format_security_group_rule(context, security_group_rule, group_rule_data) return {"security_group_rule": formatted_rule} def _rule_args_to_dict(self, context, to_port=None, from_port=None, ip_protocol=None, cidr=None, group_id=None): if group_id is not None: return security_group_api.new_group_ingress_rule( group_id, ip_protocol, from_port, to_port) else: cidr = security_group_api.parse_cidr(cidr) return security_group_api.new_cidr_ingress_rule( cidr, ip_protocol, from_port, to_port) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 404, 409)) @wsgi.response(202) def delete(self, req, id): 
context = _authorize_context(req) try: id = security_group_api.validate_id(id) rule = security_group_api.get_rule(context, id) group_id = rule['parent_group_id'] security_group = security_group_api.get(context, group_id) security_group_api.remove_rules( context, security_group, [rule['id']]) except exception.SecurityGroupNotFound as exp: raise exc.HTTPNotFound(explanation=exp.format_message()) except exception.NoUniqueMatch as exp: raise exc.HTTPConflict(explanation=exp.format_message()) except exception.Invalid as exp: raise exc.HTTPBadRequest(explanation=exp.format_message()) class ServerSecurityGroupController(SecurityGroupControllerBase): @wsgi.expected_errors(404) def index(self, req, server_id): """Returns a list of security groups for the given instance.""" context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, server_id) context.can(sg_policies.POLICY_NAME % 'list', target={'project_id': instance.project_id}) try: groups = security_group_api.get_instance_security_groups( context, instance, True) except (exception.SecurityGroupNotFound, exception.InstanceNotFound) as exp: msg = exp.format_message() raise exc.HTTPNotFound(explanation=msg) # Optimize performance here by loading up the group_rule_data per # rule['group_id'] ahead of time so we're not doing redundant # security group lookups for each rule. group_rule_data_by_rule_group_id = ( self._get_group_rule_data_by_rule_group_id(context, groups)) result = [self._format_security_group(context, group, group_rule_data_by_rule_group_id) for group in groups] return {'security_groups': list(sorted(result, key=lambda k: (k['tenant_id'], k['name'])))} class SecurityGroupActionController(wsgi.Controller): def __init__(self): super(SecurityGroupActionController, self).__init__() self.compute_api = compute.API() def _parse(self, body, action): try: body = body[action] group_name = body['name'] except TypeError: msg = _("Missing parameter dict") raise exc.HTTPBadRequest(explanation=msg) except KeyError: msg = _("Security group not specified") raise exc.HTTPBadRequest(explanation=msg) if not group_name or group_name.strip() == '': msg = _("Security group name cannot be empty") raise exc.HTTPBadRequest(explanation=msg) return group_name @wsgi.expected_errors((400, 404, 409)) @wsgi.response(202) @wsgi.action('addSecurityGroup') def _addSecurityGroup(self, req, id, body): context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, id) context.can(sg_policies.POLICY_NAME % 'add', target={'project_id': instance.project_id}) group_name = self._parse(body, 'addSecurityGroup') try: return security_group_api.add_to_instance(context, instance, group_name) except (exception.SecurityGroupNotFound, exception.InstanceNotFound) as exp: raise exc.HTTPNotFound(explanation=exp.format_message()) except exception.NoUniqueMatch as exp: raise exc.HTTPConflict(explanation=exp.format_message()) except exception.SecurityGroupCannotBeApplied as exp: raise exc.HTTPBadRequest(explanation=exp.format_message()) @wsgi.expected_errors((400, 404, 409)) @wsgi.response(202) @wsgi.action('removeSecurityGroup') def _removeSecurityGroup(self, req, id, body): context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, id) context.can(sg_policies.POLICY_NAME % 'remove', target={'project_id': instance.project_id}) group_name = self._parse(body, 'removeSecurityGroup') try: return security_group_api.remove_from_instance(context, instance, group_name) except 
(exception.SecurityGroupNotFound, exception.InstanceNotFound) as exp: raise exc.HTTPNotFound(explanation=exp.format_message()) except exception.NoUniqueMatch as exp: raise exc.HTTPConflict(explanation=exp.format_message()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/server_diagnostics.py0000664000175000017500000000433700000000000024223 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import webob from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.views import server_diagnostics from nova.api.openstack import wsgi from nova.compute import api as compute from nova import exception from nova.policies import server_diagnostics as sd_policies class ServerDiagnosticsController(wsgi.Controller): _view_builder_class = server_diagnostics.ViewBuilder def __init__(self): super(ServerDiagnosticsController, self).__init__() self.compute_api = compute.API() @wsgi.expected_errors((400, 404, 409, 501)) def index(self, req, server_id): context = req.environ["nova.context"] instance = common.get_instance(self.compute_api, context, server_id) context.can(sd_policies.BASE_POLICY_NAME, target={'project_id': instance.project_id}) try: if api_version_request.is_supported(req, min_version='2.48'): diagnostics = self.compute_api.get_instance_diagnostics( context, instance) return self._view_builder.instance_diagnostics(diagnostics) return self.compute_api.get_diagnostics(context, instance) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'get_diagnostics', server_id) except exception.InstanceNotReady as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) except NotImplementedError: common.raise_feature_not_supported() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/server_external_events.py0000664000175000017500000001444200000000000025120 0ustar00zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
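# ---------------------------------------------------------------------------
# Illustrative sketch, not part of the Nova source tree: a condensed
# re-statement of SecurityGroupActionController._parse() above, showing which
# addSecurityGroup/removeSecurityGroup bodies are accepted.  The real
# controller raises webob HTTPBadRequest; a plain ValueError is used here to
# keep the example dependency-free.
def _parse_action_body(body, action):
    try:
        group_name = body[action]['name']
    except TypeError:
        raise ValueError('Missing parameter dict')
    except KeyError:
        raise ValueError('Security group not specified')
    if not group_name or group_name.strip() == '':
        raise ValueError('Security group name cannot be empty')
    return group_name


# {"addSecurityGroup": {"name": "default"}} -> "default"
assert _parse_action_body(
    {'addSecurityGroup': {'name': 'default'}}, 'addSecurityGroup') == 'default'

# A missing name or an empty/whitespace-only name is rejected (HTTP 400).
for _bad in ({'addSecurityGroup': {}},
             {'addSecurityGroup': {'name': '  '}}):
    try:
        _parse_action_body(_bad, 'addSecurityGroup')
    except ValueError:
        pass
# ---------------------------------------------------------------------------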
from oslo_log import log as logging from nova.api.openstack.compute.schemas import server_external_events from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova import context as nova_context from nova import objects from nova.policies import server_external_events as see_policies LOG = logging.getLogger(__name__) TAG_REQUIRED = ('volume-extended', 'power-update', 'accelerator-request-bound') class ServerExternalEventsController(wsgi.Controller): def __init__(self): super(ServerExternalEventsController, self).__init__() self.compute_api = compute.API() @staticmethod def _is_event_tag_present_when_required(event): if event.name in TAG_REQUIRED and event.tag is None: return False return True def _get_instances_all_cells(self, context, instance_uuids, instance_mappings): cells = {} instance_uuids_by_cell = {} for im in instance_mappings: if im.cell_mapping.uuid not in cells: cells[im.cell_mapping.uuid] = im.cell_mapping instance_uuids_by_cell.setdefault(im.cell_mapping.uuid, list()) instance_uuids_by_cell[im.cell_mapping.uuid].append( im.instance_uuid) instances = {} for cell_uuid, cell in cells.items(): with nova_context.target_cell(context, cell) as cctxt: instances.update( {inst.uuid: inst for inst in objects.InstanceList.get_by_filters( cctxt, {'uuid': instance_uuids_by_cell[cell_uuid]}, expected_attrs=['migration_context', 'info_cache'])}) return instances @wsgi.expected_errors(403) @wsgi.response(200) @validation.schema(server_external_events.create, '2.0', '2.50') @validation.schema(server_external_events.create_v251, '2.51', '2.75') @validation.schema(server_external_events.create_v276, '2.76', '2.81') @validation.schema(server_external_events.create_v282, '2.82') def create(self, req, body): """Creates a new instance event.""" context = req.environ['nova.context'] context.can(see_policies.POLICY_ROOT % 'create', target={}) response_events = [] accepted_events = [] accepted_instances = set() result = 200 body_events = body['events'] # Fetch instance objects for all relevant instances instance_uuids = set([event['server_uuid'] for event in body_events]) instance_mappings = objects.InstanceMappingList.get_by_instance_uuids( context, list(instance_uuids)) instances = self._get_instances_all_cells(context, instance_uuids, instance_mappings) for _event in body_events: client_event = dict(_event) event = objects.InstanceExternalEvent(context) event.instance_uuid = client_event.pop('server_uuid') event.name = client_event.pop('name') event.status = client_event.pop('status', 'completed') event.tag = client_event.pop('tag', None) response_events.append(_event) instance = instances.get(event.instance_uuid) if not instance: LOG.debug('Dropping event %(name)s:%(tag)s for unknown ' 'instance %(instance_uuid)s', {'name': event.name, 'tag': event.tag, 'instance_uuid': event.instance_uuid}) _event['status'] = 'failed' _event['code'] = 404 result = 207 continue # NOTE: before accepting the event, make sure the instance # for which the event is sent is assigned to a host; otherwise # it will not be possible to dispatch the event if not self._is_event_tag_present_when_required(event): LOG.debug("Event tag is missing for instance " "%(instance)s. 
Dropping event %(event)s", {'instance': event.instance_uuid, 'event': event.name}) _event['status'] = 'failed' _event['code'] = 400 result = 207 elif instance.host: accepted_events.append(event) accepted_instances.add(instance) LOG.info('Creating event %(name)s:%(tag)s for ' 'instance %(instance_uuid)s on %(host)s', {'name': event.name, 'tag': event.tag, 'instance_uuid': event.instance_uuid, 'host': instance.host}) # NOTE: as the event is processed asynchronously verify # whether 202 is a more suitable response code than 200 _event['status'] = 'completed' _event['code'] = 200 else: LOG.debug("Unable to find a host for instance " "%(instance)s. Dropping event %(event)s", {'instance': event.instance_uuid, 'event': event.name}) _event['status'] = 'failed' _event['code'] = 422 result = 207 if accepted_events: self.compute_api.external_instance_event( context, accepted_instances, accepted_events) # FIXME(cyeoh): This needs some infrastructure support so that # we have a general way to do this robj = wsgi.ResponseObject({'events': response_events}) robj._code = result return robj ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/server_groups.py0000664000175000017500000002413500000000000023231 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Cisco Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The Server Group API Extension.""" import collections from oslo_log import log as logging import webob from webob import exc from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import server_groups as schema from nova.api.openstack import wsgi from nova.api import validation import nova.conf from nova import context as nova_context import nova.exception from nova.i18n import _ from nova import objects from nova.objects import service from nova.policies import server_groups as sg_policies LOG = logging.getLogger(__name__) CONF = nova.conf.CONF GROUP_POLICY_OBJ_MICROVERSION = "2.64" def _get_not_deleted(context, uuids): mappings = objects.InstanceMappingList.get_by_instance_uuids( context, uuids) inst_by_cell = collections.defaultdict(list) cell_mappings = {} found_inst_uuids = [] # Get a master list of cell mappings, and a list of instance # uuids organized by cell for im in mappings: if not im.cell_mapping: # Not scheduled yet, so just throw it in the final list # and move on found_inst_uuids.append(im.instance_uuid) continue if im.cell_mapping.uuid not in cell_mappings: cell_mappings[im.cell_mapping.uuid] = im.cell_mapping inst_by_cell[im.cell_mapping.uuid].append(im.instance_uuid) # Query each cell for the instances that are inside, building # a list of non-deleted instance uuids. 
for cell_uuid, cell_mapping in cell_mappings.items(): inst_uuids = inst_by_cell[cell_uuid] LOG.debug('Querying cell %(cell)s for %(num)i instances', {'cell': cell_mapping.identity, 'num': len(uuids)}) filters = {'uuid': inst_uuids, 'deleted': False} with nova_context.target_cell(context, cell_mapping) as ctx: found_inst_uuids.extend([ inst.uuid for inst in objects.InstanceList.get_by_filters( ctx, filters=filters)]) return found_inst_uuids def _should_enable_custom_max_server_rules(context, rules): if rules and int(rules.get('max_server_per_host', 1)) > 1: minver = service.get_minimum_version_all_cells( context, ['nova-compute']) if minver < 33: return False return True class ServerGroupController(wsgi.Controller): """The Server group API controller for the OpenStack API.""" def _format_server_group(self, context, group, req): # the id field has its value as the uuid of the server group # There is no 'uuid' key in server_group seen by clients. # In addition, clients see policies as a ["policy-name"] list; # and they see members as a ["server-id"] list. server_group = {} server_group['id'] = group.uuid server_group['name'] = group.name if api_version_request.is_supported( req, min_version=GROUP_POLICY_OBJ_MICROVERSION): server_group['policy'] = group.policy server_group['rules'] = group.rules else: server_group['policies'] = group.policies or [] # NOTE(yikun): Before v2.64, an empty metadata dict was exposed to the # user; it has been removed since v2.64. server_group['metadata'] = {} members = [] if group.members: # Display the instances that are not deleted. members = _get_not_deleted(context, group.members) server_group['members'] = members # Add project id information to the response data for # API version v2.13 if api_version_request.is_supported(req, min_version="2.13"): server_group['project_id'] = group.project_id server_group['user_id'] = group.user_id return server_group @wsgi.expected_errors(404) def show(self, req, id): """Return data about the given server group.""" context = req.environ['nova.context'] try: sg = objects.InstanceGroup.get_by_uuid(context, id) except nova.exception.InstanceGroupNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) context.can(sg_policies.POLICY_ROOT % 'show', target={'project_id': sg.project_id}) return {'server_group': self._format_server_group(context, sg, req)} @wsgi.response(204) @wsgi.expected_errors(404) def delete(self, req, id): """Delete a server group.""" context = req.environ['nova.context'] try: sg = objects.InstanceGroup.get_by_uuid(context, id) except nova.exception.InstanceGroupNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) context.can(sg_policies.POLICY_ROOT % 'delete', target={'project_id': sg.project_id}) try: sg.destroy() except nova.exception.InstanceGroupNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) @wsgi.expected_errors(()) @validation.query_schema(schema.server_groups_query_param_275, '2.75') @validation.query_schema(schema.server_groups_query_param, '2.0', '2.74') def index(self, req): """Returns a list of server groups.""" context = req.environ['nova.context'] project_id = context.project_id # NOTE(gmann): Using context's project_id as target here so # that when we remove the default target from the policy class, # it does not fail when a user requests an operation on their # own server group. 
context.can(sg_policies.POLICY_ROOT % 'index', target={'project_id': project_id}) if 'all_projects' in req.GET and context.is_admin: # TODO(gmann): Remove the is_admin check in the above condition # so that the below policy can raise error if not allowed. # In existing behavior, if non-admin users requesting # all projects server groups they do not get error instead # get their own server groups. Once we switch to policy # new defaults completly then we can remove the above check. # Until then, let's keep the old behaviour. context.can(sg_policies.POLICY_ROOT % 'index:all_projects', target={}) sgs = objects.InstanceGroupList.get_all(context) else: sgs = objects.InstanceGroupList.get_by_project_id( context, project_id) limited_list = common.limited(sgs.objects, req) result = [self._format_server_group(context, group, req) for group in limited_list] return {'server_groups': result} @wsgi.Controller.api_version("2.1") @wsgi.expected_errors((400, 403, 409)) @validation.schema(schema.create, "2.0", "2.14") @validation.schema(schema.create_v215, "2.15", "2.63") @validation.schema(schema.create_v264, GROUP_POLICY_OBJ_MICROVERSION) def create(self, req, body): """Creates a new server group.""" context = req.environ['nova.context'] project_id = context.project_id context.can(sg_policies.POLICY_ROOT % 'create', target={'project_id': project_id}) try: objects.Quotas.check_deltas(context, {'server_groups': 1}, project_id, context.user_id) except nova.exception.OverQuota: msg = _("Quota exceeded, too many server groups.") raise exc.HTTPForbidden(explanation=msg) vals = body['server_group'] if api_version_request.is_supported( req, GROUP_POLICY_OBJ_MICROVERSION): policy = vals['policy'] rules = vals.get('rules', {}) if policy != 'anti-affinity' and rules: msg = _("Only anti-affinity policy supports rules.") raise exc.HTTPBadRequest(explanation=msg) # NOTE(yikun): This should be removed in Stein version. if not _should_enable_custom_max_server_rules(context, rules): msg = _("Creating an anti-affinity group with rule " "max_server_per_host > 1 is not yet supported.") raise exc.HTTPConflict(explanation=msg) sg = objects.InstanceGroup(context, policy=policy, rules=rules) else: policies = vals.get('policies') sg = objects.InstanceGroup(context, policy=policies[0]) try: sg.name = vals.get('name') sg.project_id = project_id sg.user_id = context.user_id sg.create() except ValueError as e: raise exc.HTTPBadRequest(explanation=e) # NOTE(melwitt): We recheck the quota after creating the object to # prevent users from allocating more resources than their allowed quota # in the event of a race. This is configurable because it can be # expensive if strict quota limits are not required in a deployment. if CONF.quota.recheck_quota: try: objects.Quotas.check_deltas(context, {'server_groups': 0}, project_id, context.user_id) except nova.exception.OverQuota: sg.destroy() msg = _("Quota exceeded, too many server groups.") raise exc.HTTPForbidden(explanation=msg) return {'server_group': self._format_server_group(context, sg, req)} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/server_metadata.py0000664000175000017500000001554300000000000023475 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from nova.api.openstack import common from nova.api.openstack.compute.schemas import server_metadata from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova import exception from nova.i18n import _ from nova.policies import server_metadata as sm_policies class ServerMetadataController(wsgi.Controller): """The server metadata API controller for the OpenStack API.""" def __init__(self): super(ServerMetadataController, self).__init__() self.compute_api = compute.API() def _get_metadata(self, context, server): try: # NOTE(mikal): get_instance_metadata sometimes returns # InstanceNotFound in unit tests, even though the instance is # fetched on the line above. I blame mocking. meta = self.compute_api.get_instance_metadata(context, server) except exception.InstanceNotFound: msg = _('Server does not exist') raise exc.HTTPNotFound(explanation=msg) meta_dict = {} for key, value in meta.items(): meta_dict[key] = value return meta_dict @wsgi.expected_errors(404) def index(self, req, server_id): """Returns the list of metadata for a given instance.""" context = req.environ['nova.context'] server = common.get_instance(self.compute_api, context, server_id) context.can(sm_policies.POLICY_ROOT % 'index', target={'project_id': server.project_id}) return {'metadata': self._get_metadata(context, server)} @wsgi.expected_errors((403, 404, 409)) # NOTE(gmann): Returns 200 for backwards compatibility but should be 201 # as this operation completes the creation of metadata. 
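# --- Editor's illustrative note (not part of the upstream module) ---
# A hedged sketch of the exchange handled by create() below, assuming the
# usual POST /servers/{server_id}/metadata route; the concrete keys and
# values are made-up examples:
#
#   request body:  {"metadata": {"department": "finance", "owner": "alice"}}
#   response body: {"metadata": { ...the server's full, updated metadata... }}
#
# create() merges the supplied items into the server's existing metadata
# (delete=False) and, per the NOTE above, returns 200 rather than 201.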
@validation.schema(server_metadata.create) def create(self, req, server_id, body): metadata = body['metadata'] context = req.environ['nova.context'] server = common.get_instance(self.compute_api, context, server_id) context.can(sm_policies.POLICY_ROOT % 'create', target={'project_id': server.project_id}) new_metadata = self._update_instance_metadata(context, server, metadata, delete=False) return {'metadata': new_metadata} @wsgi.expected_errors((400, 403, 404, 409)) @validation.schema(server_metadata.update) def update(self, req, server_id, id, body): context = req.environ['nova.context'] server = common.get_instance(self.compute_api, context, server_id) context.can(sm_policies.POLICY_ROOT % 'update', target={'project_id': server.project_id}) meta_item = body['meta'] if id not in meta_item: expl = _('Request body and URI mismatch') raise exc.HTTPBadRequest(explanation=expl) self._update_instance_metadata(context, server, meta_item, delete=False) return {'meta': meta_item} @wsgi.expected_errors((403, 404, 409)) @validation.schema(server_metadata.update_all) def update_all(self, req, server_id, body): context = req.environ['nova.context'] server = common.get_instance(self.compute_api, context, server_id) context.can(sm_policies.POLICY_ROOT % 'update_all', target={'project_id': server.project_id}) metadata = body['metadata'] new_metadata = self._update_instance_metadata(context, server, metadata, delete=True) return {'metadata': new_metadata} def _update_instance_metadata(self, context, server, metadata, delete=False): try: return self.compute_api.update_instance_metadata(context, server, metadata, delete) except exception.QuotaError as error: raise exc.HTTPForbidden(explanation=error.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'update metadata', server.uuid) @wsgi.expected_errors(404) def show(self, req, server_id, id): """Return a single metadata item.""" context = req.environ['nova.context'] server = common.get_instance(self.compute_api, context, server_id) context.can(sm_policies.POLICY_ROOT % 'show', target={'project_id': server.project_id}) data = self._get_metadata(context, server) try: return {'meta': {id: data[id]}} except KeyError: msg = _("Metadata item was not found") raise exc.HTTPNotFound(explanation=msg) @wsgi.expected_errors((404, 409)) @wsgi.response(204) def delete(self, req, server_id, id): """Deletes an existing metadata.""" context = req.environ['nova.context'] server = common.get_instance(self.compute_api, context, server_id) context.can(sm_policies.POLICY_ROOT % 'delete', target={'project_id': server.project_id}) metadata = self._get_metadata(context, server) if id not in metadata: msg = _("Metadata item was not found") raise exc.HTTPNotFound(explanation=msg) try: self.compute_api.delete_instance_metadata(context, server, id) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'delete metadata', server_id) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/server_migrations.py0000664000175000017500000001716400000000000024072 0ustar00zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import server_migrations from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova import exception from nova.i18n import _ from nova.policies import servers_migrations as sm_policies def output(migration, include_uuid=False, include_user_project=False): """Returns the desired output of the API from an object. From a Migrations's object this method returns the primitive object with the only necessary and expected fields. """ result = { "created_at": migration.created_at, "dest_compute": migration.dest_compute, "dest_host": migration.dest_host, "dest_node": migration.dest_node, "disk_processed_bytes": migration.disk_processed, "disk_remaining_bytes": migration.disk_remaining, "disk_total_bytes": migration.disk_total, "id": migration.id, "memory_processed_bytes": migration.memory_processed, "memory_remaining_bytes": migration.memory_remaining, "memory_total_bytes": migration.memory_total, "server_uuid": migration.instance_uuid, "source_compute": migration.source_compute, "source_node": migration.source_node, "status": migration.status, "updated_at": migration.updated_at } if include_uuid: result['uuid'] = migration.uuid if include_user_project: result['user_id'] = migration.user_id result['project_id'] = migration.project_id return result class ServerMigrationsController(wsgi.Controller): """The server migrations API controller for the OpenStack API.""" def __init__(self): super(ServerMigrationsController, self).__init__() self.compute_api = compute.API() @wsgi.Controller.api_version("2.22") @wsgi.response(202) @wsgi.expected_errors((400, 403, 404, 409)) @wsgi.action('force_complete') @validation.schema(server_migrations.force_complete) def _force_complete(self, req, id, server_id, body): context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, server_id) context.can(sm_policies.POLICY_ROOT % 'force_complete', target={'project_id': instance.project_id}) try: self.compute_api.live_migrate_force_complete(context, instance, id) except exception.InstanceNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) except (exception.MigrationNotFoundByStatus, exception.InvalidMigrationState, exception.MigrationNotFoundForInstance) as e: raise exc.HTTPBadRequest(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state( state_error, 'force_complete', server_id) @wsgi.Controller.api_version("2.23") @wsgi.expected_errors(404) def index(self, req, server_id): """Return all migrations of an instance in progress.""" context = req.environ['nova.context'] # NOTE(Shaohe Feng) just check the instance is available. 
To keep # consistency with other API, check it before get migrations. instance = common.get_instance(self.compute_api, context, server_id) context.can(sm_policies.POLICY_ROOT % 'index', target={'project_id': instance.project_id}) migrations = self.compute_api.get_migrations_in_progress_by_instance( context, server_id, 'live-migration') include_uuid = api_version_request.is_supported(req, '2.59') include_user_project = api_version_request.is_supported(req, '2.80') return {'migrations': [ output(migration, include_uuid, include_user_project) for migration in migrations]} @wsgi.Controller.api_version("2.23") @wsgi.expected_errors(404) def show(self, req, server_id, id): """Return the migration of an instance in progress by id.""" context = req.environ['nova.context'] # NOTE(Shaohe Feng) just check the instance is available. To keep # consistency with other API, check it before get migrations. instance = common.get_instance(self.compute_api, context, server_id) context.can(sm_policies.POLICY_ROOT % 'show', target={'project_id': instance.project_id}) try: migration = self.compute_api.get_migration_by_id_and_instance( context, id, server_id) except exception.MigrationNotFoundForInstance: msg = _("In-progress live migration %(id)s is not found for" " server %(uuid)s.") % {"id": id, "uuid": server_id} raise exc.HTTPNotFound(explanation=msg) if migration.get("migration_type") != "live-migration": msg = _("Migration %(id)s for server %(uuid)s is not" " live-migration.") % {"id": id, "uuid": server_id} raise exc.HTTPNotFound(explanation=msg) # TODO(Shaohe Feng) we should share the in-progress list. in_progress = ['queued', 'preparing', 'running', 'post-migrating'] if migration.get("status") not in in_progress: msg = _("Live migration %(id)s for server %(uuid)s is not in" " progress.") % {"id": id, "uuid": server_id} raise exc.HTTPNotFound(explanation=msg) include_uuid = api_version_request.is_supported(req, '2.59') include_user_project = api_version_request.is_supported(req, '2.80') return {'migration': output(migration, include_uuid, include_user_project)} @wsgi.Controller.api_version("2.24") @wsgi.response(202) @wsgi.expected_errors((400, 404, 409)) def delete(self, req, server_id, id): """Abort an in progress migration of an instance.""" context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, server_id) context.can(sm_policies.POLICY_ROOT % 'delete', target={'project_id': instance.project_id}) support_abort_in_queue = api_version_request.is_supported(req, '2.65') try: self.compute_api.live_migrate_abort( context, instance, id, support_abort_in_queue=support_abort_in_queue) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state( state_error, "abort live migration", server_id) except exception.MigrationNotFoundForInstance as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InvalidMigrationState as e: raise exc.HTTPBadRequest(explanation=e.format_message()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/server_password.py0000664000175000017500000000417200000000000023553 0ustar00zuulzuul00000000000000# Copyright (c) 2012 Nebula, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The server password extension.""" from nova.api.metadata import password from nova.api.openstack import common from nova.api.openstack import wsgi from nova.compute import api as compute from nova.policies import server_password as sp_policies class ServerPasswordController(wsgi.Controller): """The Server Password API controller for the OpenStack API.""" def __init__(self): super(ServerPasswordController, self).__init__() self.compute_api = compute.API() @wsgi.expected_errors(404) def index(self, req, server_id): context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, server_id) context.can(sp_policies.BASE_POLICY_NAME % 'show', target={'project_id': instance.project_id}) passw = password.extract_password(instance) return {'password': passw or ''} @wsgi.expected_errors(404) @wsgi.response(204) def clear(self, req, server_id): """Removes the encrypted server password from the metadata server Note that this does not actually change the instance server password. """ context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, server_id) context.can(sp_policies.BASE_POLICY_NAME % 'clear', target={'project_id': instance.project_id}) meta = password.convert_password(context, None) instance.system_metadata.update(meta) instance.save() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/server_tags.py0000664000175000017500000002267100000000000022653 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
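# --- Editor's illustrative sketch (not part of the upstream module) ---
# The ServerTagsController defined below implements the server tags API
# added in compute microversion 2.26. The helper here is a hedged,
# client-side illustration of how that API is typically exercised; the
# ``requests`` dependency, the endpoint URL and the token are assumptions
# for the example only, and nothing in this module uses it.
def _example_add_server_tag(nova_endpoint, token, server_id, tag):
    """Illustration only: tag a server via PUT /servers/{id}/tags/{tag}."""
    import requests  # assumed to be available in the caller's environment

    headers = {
        'X-Auth-Token': token,
        # Server tags require at least microversion 2.26.
        'X-OpenStack-Nova-API-Version': '2.26',
    }
    # update() below answers 201 (with a Location header) when the tag is
    # created, or 204 when the server already carries that tag.
    return requests.put(
        '%s/servers/%s/tags/%s' % (nova_endpoint, server_id, tag),
        headers=headers)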
import jsonschema import webob from nova.api.openstack import common from nova.api.openstack.compute.schemas import server_tags as schema from nova.api.openstack.compute.views import server_tags from nova.api.openstack import wsgi from nova.api import validation from nova.api.validation import parameter_types from nova.compute import api as compute from nova.compute import vm_states from nova import context as nova_context from nova import exception from nova.i18n import _ from nova.notifications import base as notifications_base from nova import objects from nova.policies import server_tags as st_policies def _get_tags_names(tags): return [t.tag for t in tags] def _get_instance_mapping(context, server_id): try: return objects.InstanceMapping.get_by_instance_uuid(context, server_id) except exception.InstanceMappingNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) class ServerTagsController(wsgi.Controller): _view_builder_class = server_tags.ViewBuilder def __init__(self): super(ServerTagsController, self).__init__() self.compute_api = compute.API() def _check_instance_in_valid_state(self, context, server_id, action): instance = common.get_instance(self.compute_api, context, server_id) if instance.vm_state not in (vm_states.ACTIVE, vm_states.PAUSED, vm_states.SUSPENDED, vm_states.STOPPED): exc = exception.InstanceInvalidState(attr='vm_state', instance_uuid=instance.uuid, state=instance.vm_state, method=action) common.raise_http_conflict_for_instance_invalid_state(exc, action, server_id) return instance @wsgi.Controller.api_version("2.26") @wsgi.response(204) @wsgi.expected_errors(404) def show(self, req, server_id, id): context = req.environ["nova.context"] im = _get_instance_mapping(context, server_id) context.can(st_policies.POLICY_ROOT % 'show', target={'project_id': im.project_id}) try: with nova_context.target_cell(context, im.cell_mapping) as cctxt: exists = objects.Tag.exists(cctxt, server_id, id) except (exception.InstanceNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) if not exists: msg = (_("Server %(server_id)s has no tag '%(tag)s'") % {'server_id': server_id, 'tag': id}) raise webob.exc.HTTPNotFound(explanation=msg) @wsgi.Controller.api_version("2.26") @wsgi.expected_errors(404) def index(self, req, server_id): context = req.environ["nova.context"] im = _get_instance_mapping(context, server_id) context.can(st_policies.POLICY_ROOT % 'index', target={'project_id': im.project_id}) try: with nova_context.target_cell(context, im.cell_mapping) as cctxt: tags = objects.TagList.get_by_resource_id(cctxt, server_id) except (exception.InstanceNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) return {'tags': _get_tags_names(tags)} @wsgi.Controller.api_version("2.26") @wsgi.expected_errors((400, 404, 409)) @validation.schema(schema.update) def update(self, req, server_id, id, body): context = req.environ["nova.context"] im = _get_instance_mapping(context, server_id) context.can(st_policies.POLICY_ROOT % 'update', target={'project_id': im.project_id}) with nova_context.target_cell(context, im.cell_mapping) as cctxt: instance = self._check_instance_in_valid_state( cctxt, server_id, 'update tag') try: jsonschema.validate(id, parameter_types.tag) except jsonschema.ValidationError as e: msg = (_("Tag '%(tag)s' is invalid. It must be a non empty string " "without characters '/' and ','. 
Validation error " "message: %(err)s") % {'tag': id, 'err': e.message}) raise webob.exc.HTTPBadRequest(explanation=msg) try: with nova_context.target_cell(context, im.cell_mapping) as cctxt: tags = objects.TagList.get_by_resource_id(cctxt, server_id) except exception.InstanceNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) if len(tags) >= objects.instance.MAX_TAG_COUNT: msg = (_("The number of tags exceeded the per-server limit %d") % objects.instance.MAX_TAG_COUNT) raise webob.exc.HTTPBadRequest(explanation=msg) if id in _get_tags_names(tags): # NOTE(snikitin): server already has specified tag return webob.Response(status_int=204) try: with nova_context.target_cell(context, im.cell_mapping) as cctxt: tag = objects.Tag(context=cctxt, resource_id=server_id, tag=id) tag.create() instance.tags = objects.TagList.get_by_resource_id(cctxt, server_id) except exception.InstanceNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) notifications_base.send_instance_update_notification( context, instance, service="nova-api") response = webob.Response(status_int=201) response.headers['Location'] = self._view_builder.get_location( req, server_id, id) return response @wsgi.Controller.api_version("2.26") @wsgi.expected_errors((404, 409)) @validation.schema(schema.update_all) def update_all(self, req, server_id, body): context = req.environ["nova.context"] im = _get_instance_mapping(context, server_id) context.can(st_policies.POLICY_ROOT % 'update_all', target={'project_id': im.project_id}) with nova_context.target_cell(context, im.cell_mapping) as cctxt: instance = self._check_instance_in_valid_state( cctxt, server_id, 'update tags') try: with nova_context.target_cell(context, im.cell_mapping) as cctxt: tags = objects.TagList.create(cctxt, server_id, body['tags']) instance.tags = tags except exception.InstanceNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) notifications_base.send_instance_update_notification( context, instance, service="nova-api") return {'tags': _get_tags_names(tags)} @wsgi.Controller.api_version("2.26") @wsgi.response(204) @wsgi.expected_errors((404, 409)) def delete(self, req, server_id, id): context = req.environ["nova.context"] im = _get_instance_mapping(context, server_id) context.can(st_policies.POLICY_ROOT % 'delete', target={'project_id': im.project_id}) with nova_context.target_cell(context, im.cell_mapping) as cctxt: instance = self._check_instance_in_valid_state( cctxt, server_id, 'delete tag') try: with nova_context.target_cell(context, im.cell_mapping) as cctxt: objects.Tag.destroy(cctxt, server_id, id) instance.tags = objects.TagList.get_by_resource_id(cctxt, server_id) except (exception.InstanceTagNotFound, exception.InstanceNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) notifications_base.send_instance_update_notification( context, instance, service="nova-api") @wsgi.Controller.api_version("2.26") @wsgi.response(204) @wsgi.expected_errors((404, 409)) def delete_all(self, req, server_id): context = req.environ["nova.context"] im = _get_instance_mapping(context, server_id) context.can(st_policies.POLICY_ROOT % 'delete_all', target={'project_id': im.project_id}) with nova_context.target_cell(context, im.cell_mapping) as cctxt: instance = self._check_instance_in_valid_state( cctxt, server_id, 'delete tags') try: with nova_context.target_cell(context, im.cell_mapping) as cctxt: objects.TagList.destroy(cctxt, server_id) instance.tags = objects.TagList() except 
exception.InstanceNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) notifications_base.send_instance_update_notification( context, instance, service="nova-api") ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/server_topology.py0000664000175000017500000000503000000000000023557 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.openstack import common from nova.api.openstack import wsgi from nova.compute import api as compute from nova.policies import server_topology as st_policies class ServerTopologyController(wsgi.Controller): def __init__(self, *args, **kwargs): super(ServerTopologyController, self).__init__(*args, **kwargs) self.compute_api = compute.API() @wsgi.Controller.api_version("2.78") @wsgi.expected_errors(404) def index(self, req, server_id): context = req.environ["nova.context"] instance = common.get_instance(self.compute_api, context, server_id, expected_attrs=['numa_topology', 'vcpu_model']) context.can(st_policies.BASE_POLICY_NAME % 'index', target={'project_id': instance.project_id}) host_policy = (st_policies.BASE_POLICY_NAME % 'host:index') show_host_info = context.can(host_policy, fatal=False) return self._get_numa_topology(context, instance, show_host_info) def _get_numa_topology(self, context, instance, show_host_info): if instance.numa_topology is None: return { 'nodes': [], 'pagesize_kb': None } topo = {} cells = [] pagesize_kb = None for cell_ in instance.numa_topology.cells: cell = {} cell['vcpu_set'] = cell_.cpuset cell['siblings'] = cell_.siblings cell['memory_mb'] = cell_.memory if show_host_info: cell['host_node'] = cell_.id if cell_.cpu_pinning is None: cell['cpu_pinning'] = {} else: cell['cpu_pinning'] = cell_.cpu_pinning if cell_.pagesize: pagesize_kb = cell_.pagesize cells.append(cell) topo['nodes'] = cells topo['pagesize_kb'] = pagesize_kb return topo ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/servers.py0000664000175000017500000017731400000000000022025 0ustar00zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # Copyright 2011 Piston Cloud Computing, Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
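# --- Editor's illustrative sketch (not part of the upstream module) ---
# The ServerTopologyController above (server_topology.py) builds its
# response with _get_numa_topology(). The literal below is a hedged example
# of that payload shape for GET /servers/{server_id}/topology (microversion
# 2.78); every value is made up for illustration and the name
# _EXAMPLE_SERVER_TOPOLOGY is not referenced anywhere in nova.
_EXAMPLE_SERVER_TOPOLOGY = {
    'nodes': [
        {
            'vcpu_set': [0, 1, 2, 3],      # guest CPUs in this NUMA cell
            'siblings': [[0, 1], [2, 3]],  # thread sibling sets
            'memory_mb': 2048,
            'cpu_pinning': {0: 4, 1: 5},   # empty dict when not pinned
            'host_node': 0,                # only with the host:index policy
        },
    ],
    'pagesize_kb': 2048,                   # None unless hugepages are used
}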
import copy from oslo_log import log as logging import oslo_messaging as messaging from oslo_utils import strutils from oslo_utils import timeutils from oslo_utils import uuidutils import six import webob from webob import exc from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute import helpers from nova.api.openstack.compute.schemas import servers as schema_servers from nova.api.openstack.compute.views import servers as views_servers from nova.api.openstack import wsgi from nova.api import validation from nova import block_device from nova.compute import api as compute from nova.compute import flavors from nova.compute import utils as compute_utils import nova.conf from nova import context as nova_context from nova import exception from nova.i18n import _ from nova.image import glance from nova.network import neutron from nova import objects from nova.policies import servers as server_policies from nova import utils TAG_SEARCH_FILTERS = ('tags', 'tags-any', 'not-tags', 'not-tags-any') PARTIAL_CONSTRUCT_FOR_CELL_DOWN_MIN_VERSION = '2.69' PAGING_SORTING_PARAMS = ('sort_key', 'sort_dir', 'limit', 'marker') CONF = nova.conf.CONF LOG = logging.getLogger(__name__) INVALID_FLAVOR_IMAGE_EXCEPTIONS = ( exception.BadRequirementEmulatorThreadsPolicy, exception.CPUThreadPolicyConfigurationInvalid, exception.FlavorImageConflict, exception.ImageCPUPinningForbidden, exception.ImageCPUThreadPolicyForbidden, exception.ImageNUMATopologyAsymmetric, exception.ImageNUMATopologyCPUDuplicates, exception.ImageNUMATopologyCPUOutOfRange, exception.ImageNUMATopologyCPUsUnassigned, exception.ImageNUMATopologyForbidden, exception.ImageNUMATopologyIncomplete, exception.ImageNUMATopologyMemoryOutOfRange, exception.ImageNUMATopologyRebuildConflict, exception.ImagePMUConflict, exception.ImageSerialPortNumberExceedFlavorValue, exception.ImageSerialPortNumberInvalid, exception.ImageVCPULimitsRangeExceeded, exception.ImageVCPUTopologyRangeExceeded, exception.InvalidCPUAllocationPolicy, exception.InvalidCPUThreadAllocationPolicy, exception.InvalidEmulatorThreadsPolicy, exception.InvalidMachineType, exception.InvalidNUMANodesNumber, exception.InvalidRequest, exception.MemoryPageSizeForbidden, exception.MemoryPageSizeInvalid, exception.PciInvalidAlias, exception.PciRequestAliasNotDefined, exception.RealtimeConfigurationInvalid, exception.RealtimeMaskNotFoundOrInvalid, ) MIN_COMPUTE_MOVE_BANDWIDTH = 39 class ServersController(wsgi.Controller): """The Server API base controller class for the OpenStack API.""" _view_builder_class = views_servers.ViewBuilder @staticmethod def _add_location(robj): # Just in case... 
if 'server' not in robj.obj: return robj link = [l for l in robj.obj['server']['links'] if l['rel'] == 'self'] if link: robj['Location'] = link[0]['href'] # Convenience return return robj def __init__(self): super(ServersController, self).__init__() self.compute_api = compute.API() self.network_api = neutron.API() @wsgi.expected_errors((400, 403)) @validation.query_schema(schema_servers.query_params_v275, '2.75') @validation.query_schema(schema_servers.query_params_v273, '2.73', '2.74') @validation.query_schema(schema_servers.query_params_v266, '2.66', '2.72') @validation.query_schema(schema_servers.query_params_v226, '2.26', '2.65') @validation.query_schema(schema_servers.query_params_v21, '2.1', '2.25') def index(self, req): """Returns a list of server names and ids for a given user.""" context = req.environ['nova.context'] context.can(server_policies.SERVERS % 'index') try: servers = self._get_servers(req, is_detail=False) except exception.Invalid as err: raise exc.HTTPBadRequest(explanation=err.format_message()) return servers @wsgi.expected_errors((400, 403)) @validation.query_schema(schema_servers.query_params_v275, '2.75') @validation.query_schema(schema_servers.query_params_v273, '2.73', '2.74') @validation.query_schema(schema_servers.query_params_v266, '2.66', '2.72') @validation.query_schema(schema_servers.query_params_v226, '2.26', '2.65') @validation.query_schema(schema_servers.query_params_v21, '2.1', '2.25') def detail(self, req): """Returns a list of server details for a given user.""" context = req.environ['nova.context'] context.can(server_policies.SERVERS % 'detail') try: servers = self._get_servers(req, is_detail=True) except exception.Invalid as err: raise exc.HTTPBadRequest(explanation=err.format_message()) return servers @staticmethod def _is_cell_down_supported(req, search_opts): cell_down_support = api_version_request.is_supported( req, min_version=PARTIAL_CONSTRUCT_FOR_CELL_DOWN_MIN_VERSION) if cell_down_support: # NOTE(tssurya): Minimal constructs would be returned from the down # cells if cell_down_support is True, however if filtering, sorting # or paging is requested by the user, then cell_down_support should # be made False and the down cells should be skipped (depending on # CONF.api.list_records_by_skipping_down_cells) as there is no # way to return correct results for the down cells in those # situations due to missing keys/information. # NOTE(tssurya): Since there is a chance that # remove_invalid_options function could have removed the paging and # sorting parameters, we add the additional check for that from the # request. pag_sort = any( ps in req.GET.keys() for ps in PAGING_SORTING_PARAMS) # NOTE(tssurya): ``nova list --all_tenants`` is the only # allowed filter exception when handling down cells. filters = list(search_opts.keys()) not in ([u'all_tenants'], []) if pag_sort or filters: cell_down_support = False return cell_down_support def _get_servers(self, req, is_detail): """Returns a list of servers, based on any search options specified.""" search_opts = {} search_opts.update(req.GET) context = req.environ['nova.context'] remove_invalid_options(context, search_opts, self._get_server_search_options(req)) cell_down_support = self._is_cell_down_supported(req, search_opts) for search_opt in search_opts: if (search_opt in schema_servers.JOINED_TABLE_QUERY_PARAMS_SERVERS.keys() or search_opt.startswith('_')): msg = _("Invalid filter field: %s.") % search_opt raise exc.HTTPBadRequest(explanation=msg) # Verify search by 'status' contains a valid status. 
# Convert it to filter by vm_state or task_state for compute_api. # For non-admin user, vm_state and task_state are filtered through # remove_invalid_options function, based on value of status field. # Set value to vm_state and task_state to make search simple. search_opts.pop('status', None) if 'status' in req.GET.keys(): statuses = req.GET.getall('status') states = common.task_and_vm_state_from_status(statuses) vm_state, task_state = states if not vm_state and not task_state: if api_version_request.is_supported(req, min_version='2.38'): msg = _('Invalid status value') raise exc.HTTPBadRequest(explanation=msg) return {'servers': []} search_opts['vm_state'] = vm_state # When we search by vm state, task state will return 'default'. # So we don't need task_state search_opt. if 'default' not in task_state: search_opts['task_state'] = task_state if 'changes-since' in search_opts: try: search_opts['changes-since'] = timeutils.parse_isotime( search_opts['changes-since']) except ValueError: # NOTE: This error handling is for V2.0 API to pass the # experimental jobs at the gate. V2.1 API covers this case # with JSON-Schema and it is a hard burden to apply it to # v2.0 API at this time. msg = _("Invalid filter field: changes-since.") raise exc.HTTPBadRequest(explanation=msg) if 'changes-before' in search_opts: try: search_opts['changes-before'] = timeutils.parse_isotime( search_opts['changes-before']) changes_since = search_opts.get('changes-since') if changes_since and search_opts['changes-before'] < \ search_opts['changes-since']: msg = _('The value of changes-since must be' ' less than or equal to changes-before.') raise exc.HTTPBadRequest(explanation=msg) except ValueError: msg = _("Invalid filter field: changes-before.") raise exc.HTTPBadRequest(explanation=msg) # By default, compute's get_all() will return deleted instances. # If an admin hasn't specified a 'deleted' search option, we need # to filter out deleted instances by setting the filter ourselves. # ... Unless 'changes-since' or 'changes-before' is specified, # because those will return recently deleted instances according to # the API spec. if 'deleted' not in search_opts: if 'changes-since' not in search_opts and \ 'changes-before' not in search_opts: # No 'changes-since' or 'changes-before', so we only # want non-deleted servers search_opts['deleted'] = False else: # Convert deleted filter value to a valid boolean. # Return non-deleted servers if an invalid value # is passed with deleted filter. search_opts['deleted'] = strutils.bool_from_string( search_opts['deleted'], default=False) if search_opts.get("vm_state") == ['deleted']: if context.is_admin: search_opts['deleted'] = True else: msg = _("Only administrators may list deleted instances") raise exc.HTTPForbidden(explanation=msg) if api_version_request.is_supported(req, min_version='2.26'): for tag_filter in TAG_SEARCH_FILTERS: if tag_filter in search_opts: search_opts[tag_filter] = search_opts[ tag_filter].split(',') all_tenants = common.is_all_tenants(search_opts) # use the boolean from here on out so remove the entry from search_opts # if it's present. # NOTE(tssurya): In case we support handling down cells # we need to know further down the stack whether the 'all_tenants' # filter was passed with the true value or not, so we pass the flag # further down the stack. 
search_opts.pop('all_tenants', None) if 'locked' in search_opts: search_opts['locked'] = common.is_locked(search_opts) elevated = None if all_tenants: if is_detail: context.can(server_policies.SERVERS % 'detail:get_all_tenants') else: context.can(server_policies.SERVERS % 'index:get_all_tenants') elevated = context.elevated() else: # As explained in lp:#1185290, if `all_tenants` is not passed # we must ignore the `tenant_id` search option. search_opts.pop('tenant_id', None) if context.project_id: search_opts['project_id'] = context.project_id else: search_opts['user_id'] = context.user_id limit, marker = common.get_limit_and_marker(req) sort_keys, sort_dirs = common.get_sort_params(req.params) blacklist = schema_servers.SERVER_LIST_IGNORE_SORT_KEY if api_version_request.is_supported(req, min_version='2.73'): blacklist = schema_servers.SERVER_LIST_IGNORE_SORT_KEY_V273 sort_keys, sort_dirs = remove_invalid_sort_keys( context, sort_keys, sort_dirs, blacklist, ('host', 'node')) expected_attrs = [] if is_detail: if api_version_request.is_supported(req, '2.16'): expected_attrs.append('services') if api_version_request.is_supported(req, '2.26'): expected_attrs.append("tags") if api_version_request.is_supported(req, '2.63'): expected_attrs.append("trusted_certs") if api_version_request.is_supported(req, '2.73'): expected_attrs.append("system_metadata") # merge our expected attrs with what the view builder needs for # showing details expected_attrs = self._view_builder.get_show_expected_attrs( expected_attrs) try: instance_list = self.compute_api.get_all(elevated or context, search_opts=search_opts, limit=limit, marker=marker, expected_attrs=expected_attrs, sort_keys=sort_keys, sort_dirs=sort_dirs, cell_down_support=cell_down_support, all_tenants=all_tenants) except exception.MarkerNotFound: msg = _('marker [%s] not found') % marker raise exc.HTTPBadRequest(explanation=msg) except exception.FlavorNotFound: LOG.debug("Flavor '%s' could not be found ", search_opts['flavor']) instance_list = objects.InstanceList() if is_detail: instance_list._context = context instance_list.fill_faults() response = self._view_builder.detail( req, instance_list, cell_down_support=cell_down_support) else: response = self._view_builder.index( req, instance_list, cell_down_support=cell_down_support) return response def _get_server(self, context, req, instance_uuid, is_detail=False, cell_down_support=False, columns_to_join=None): """Utility function for looking up an instance by uuid. :param context: request context for auth :param req: HTTP request. :param instance_uuid: UUID of the server instance to get :param is_detail: True if you plan on showing the details of the instance in the response, False otherwise. :param cell_down_support: True if the API (and caller) support returning a minimal instance construct if the relevant cell is down. 
:param columns_to_join: optional list of extra fields to join on the Instance object """ expected_attrs = ['flavor', 'numa_topology'] if is_detail: if api_version_request.is_supported(req, '2.26'): expected_attrs.append("tags") if api_version_request.is_supported(req, '2.63'): expected_attrs.append("trusted_certs") expected_attrs = self._view_builder.get_show_expected_attrs( expected_attrs) if columns_to_join: expected_attrs.extend(columns_to_join) instance = common.get_instance(self.compute_api, context, instance_uuid, expected_attrs=expected_attrs, cell_down_support=cell_down_support) return instance @staticmethod def _validate_network_id(net_id, network_uuids): """Validates that a requested network id. This method checks that the network id is in the proper UUID format. :param net_id: The network id to validate. :param network_uuids: A running list of requested network IDs that have passed validation already. :raises: webob.exc.HTTPBadRequest if validation fails """ if not uuidutils.is_uuid_like(net_id): msg = _("Bad networks format: network uuid is " "not in proper format (%s)") % net_id raise exc.HTTPBadRequest(explanation=msg) def _get_requested_networks(self, requested_networks): """Create a list of requested networks from the networks attribute.""" # Starting in the 2.37 microversion, requested_networks is either a # list or a string enum with value 'auto' or 'none'. The auto/none # values are verified via jsonschema so we don't check them again here. if isinstance(requested_networks, six.string_types): return objects.NetworkRequestList( objects=[objects.NetworkRequest( network_id=requested_networks)]) networks = [] network_uuids = [] for network in requested_networks: request = objects.NetworkRequest() try: # fixed IP address is optional # if the fixed IP address is not provided then # it will use one of the available IP address from the network request.address = network.get('fixed_ip', None) request.port_id = network.get('port', None) request.tag = network.get('tag', None) if request.port_id: request.network_id = None if request.address is not None: msg = _("Specified Fixed IP '%(addr)s' cannot be used " "with port '%(port)s': the two cannot be " "specified together.") % { "addr": request.address, "port": request.port_id} raise exc.HTTPBadRequest(explanation=msg) else: request.network_id = network['uuid'] self._validate_network_id( request.network_id, network_uuids) network_uuids.append(request.network_id) networks.append(request) except KeyError as key: expl = _('Bad network format: missing %s') % key raise exc.HTTPBadRequest(explanation=expl) except TypeError: expl = _('Bad networks format') raise exc.HTTPBadRequest(explanation=expl) return objects.NetworkRequestList(objects=networks) @wsgi.expected_errors(404) def show(self, req, id): """Returns server details by server id.""" context = req.environ['nova.context'] cell_down_support = api_version_request.is_supported( req, min_version=PARTIAL_CONSTRUCT_FOR_CELL_DOWN_MIN_VERSION) show_server_groups = api_version_request.is_supported( req, min_version='2.71') instance = self._get_server( context, req, id, is_detail=True, cell_down_support=cell_down_support) context.can(server_policies.SERVERS % 'show', target={'project_id': instance.project_id}) return self._view_builder.show( req, instance, cell_down_support=cell_down_support, show_server_groups=show_server_groups) @staticmethod def _process_bdms_for_create( context, target, server_dict, create_kwargs): """Processes block_device_mapping(_v2) req parameters for server create 
:param context: The nova auth request context :param target: The target dict for ``context.can`` policy checks :param server_dict: The POST /servers request body "server" entry :param create_kwargs: dict that gets populated by this method and passed to nova.comptue.api.API.create() :raises: webob.exc.HTTPBadRequest if the request parameters are invalid :raises: nova.exception.Forbidden if a policy check fails """ block_device_mapping_legacy = server_dict.get('block_device_mapping', []) block_device_mapping_v2 = server_dict.get('block_device_mapping_v2', []) if block_device_mapping_legacy and block_device_mapping_v2: expl = _('Using different block_device_mapping syntaxes ' 'is not allowed in the same request.') raise exc.HTTPBadRequest(explanation=expl) if block_device_mapping_legacy: for bdm in block_device_mapping_legacy: if 'delete_on_termination' in bdm: bdm['delete_on_termination'] = strutils.bool_from_string( bdm['delete_on_termination']) create_kwargs[ 'block_device_mapping'] = block_device_mapping_legacy # Sets the legacy_bdm flag if we got a legacy block device mapping. create_kwargs['legacy_bdm'] = True elif block_device_mapping_v2: # Have to check whether --image is given, see bug 1433609 image_href = server_dict.get('imageRef') image_uuid_specified = image_href is not None try: block_device_mapping = [ block_device.BlockDeviceDict.from_api(bdm_dict, image_uuid_specified) for bdm_dict in block_device_mapping_v2] except exception.InvalidBDMFormat as e: raise exc.HTTPBadRequest(explanation=e.format_message()) create_kwargs['block_device_mapping'] = block_device_mapping # Unset the legacy_bdm flag if we got a block device mapping. create_kwargs['legacy_bdm'] = False block_device_mapping = create_kwargs.get("block_device_mapping") if block_device_mapping: context.can(server_policies.SERVERS % 'create:attach_volume', target) def _process_networks_for_create( self, context, target, server_dict, create_kwargs): """Processes networks request parameter for server create :param context: The nova auth request context :param target: The target dict for ``context.can`` policy checks :param server_dict: The POST /servers request body "server" entry :param create_kwargs: dict that gets populated by this method and passed to nova.comptue.api.API.create() :raises: webob.exc.HTTPBadRequest if the request parameters are invalid :raises: nova.exception.Forbidden if a policy check fails """ requested_networks = server_dict.get('networks', None) if requested_networks is not None: requested_networks = self._get_requested_networks( requested_networks) # Skip policy check for 'create:attach_network' if there is no # network allocation request. 
if requested_networks and len(requested_networks) and \ not requested_networks.no_allocate: context.can(server_policies.SERVERS % 'create:attach_network', target) create_kwargs['requested_networks'] = requested_networks @staticmethod def _process_hosts_for_create( context, target, server_dict, create_kwargs, host, node): """Processes hosts request parameter for server create :param context: The nova auth request context :param target: The target dict for ``context.can`` policy checks :param server_dict: The POST /servers request body "server" entry :param create_kwargs: dict that gets populated by this method and passed to nova.comptue.api.API.create() :param host: Forced host of availability_zone :param node: Forced node of availability_zone :raise: webob.exc.HTTPBadRequest if the request parameters are invalid :raise: nova.exception.Forbidden if a policy check fails """ requested_host = server_dict.get('host') requested_hypervisor_hostname = server_dict.get('hypervisor_hostname') if requested_host or requested_hypervisor_hostname: # If the policy check fails, this will raise Forbidden exception. context.can(server_policies.REQUESTED_DESTINATION, target=target) if host or node: msg = _("One mechanism with host and/or " "hypervisor_hostname and another mechanism " "with zone:host:node are mutually exclusive.") raise exc.HTTPBadRequest(explanation=msg) create_kwargs['requested_host'] = requested_host create_kwargs['requested_hypervisor_hostname'] = ( requested_hypervisor_hostname) @wsgi.response(202) @wsgi.expected_errors((400, 403, 409)) @validation.schema(schema_servers.base_create_v20, '2.0', '2.0') @validation.schema(schema_servers.base_create, '2.1', '2.18') @validation.schema(schema_servers.base_create_v219, '2.19', '2.31') @validation.schema(schema_servers.base_create_v232, '2.32', '2.32') @validation.schema(schema_servers.base_create_v233, '2.33', '2.36') @validation.schema(schema_servers.base_create_v237, '2.37', '2.41') @validation.schema(schema_servers.base_create_v242, '2.42', '2.51') @validation.schema(schema_servers.base_create_v252, '2.52', '2.56') @validation.schema(schema_servers.base_create_v257, '2.57', '2.62') @validation.schema(schema_servers.base_create_v263, '2.63', '2.66') @validation.schema(schema_servers.base_create_v267, '2.67', '2.73') @validation.schema(schema_servers.base_create_v274, '2.74') def create(self, req, body): """Creates a new server for a given user.""" context = req.environ['nova.context'] server_dict = body['server'] password = self._get_server_admin_password(server_dict) name = common.normalize_name(server_dict['name']) description = name if api_version_request.is_supported(req, min_version='2.19'): description = server_dict.get('description') # Arguments to be passed to instance create function create_kwargs = {} create_kwargs['user_data'] = server_dict.get('user_data') # NOTE(alex_xu): The v2.1 API compat mode, we strip the spaces for # keypair create. But we didn't strip spaces at here for # backward-compatible some users already created keypair and name with # leading/trailing spaces by legacy v2 API. 
create_kwargs['key_name'] = server_dict.get('key_name') create_kwargs['config_drive'] = server_dict.get('config_drive') security_groups = server_dict.get('security_groups') if security_groups is not None: create_kwargs['security_groups'] = [ sg['name'] for sg in security_groups if sg.get('name')] create_kwargs['security_groups'] = list( set(create_kwargs['security_groups'])) scheduler_hints = {} if 'os:scheduler_hints' in body: scheduler_hints = body['os:scheduler_hints'] elif 'OS-SCH-HNT:scheduler_hints' in body: scheduler_hints = body['OS-SCH-HNT:scheduler_hints'] create_kwargs['scheduler_hints'] = scheduler_hints # min_count and max_count are optional. If they exist, they may come # in as strings. Verify that they are valid integers and > 0. # Also, we want to default 'min_count' to 1, and default # 'max_count' to be 'min_count'. min_count = int(server_dict.get('min_count', 1)) max_count = int(server_dict.get('max_count', min_count)) if min_count > max_count: msg = _('min_count must be <= max_count') raise exc.HTTPBadRequest(explanation=msg) create_kwargs['min_count'] = min_count create_kwargs['max_count'] = max_count availability_zone = server_dict.pop("availability_zone", None) if api_version_request.is_supported(req, min_version='2.52'): create_kwargs['tags'] = server_dict.get('tags') helpers.translate_attributes(helpers.CREATE, server_dict, create_kwargs) target = { 'project_id': context.project_id, 'user_id': context.user_id, 'availability_zone': availability_zone} context.can(server_policies.SERVERS % 'create', target) # Skip policy check for 'create:trusted_certs' if no trusted # certificate IDs were provided. trusted_certs = server_dict.get('trusted_image_certificates', None) if trusted_certs: create_kwargs['trusted_certs'] = trusted_certs context.can(server_policies.SERVERS % 'create:trusted_certs', target=target) parse_az = self.compute_api.parse_availability_zone try: availability_zone, host, node = parse_az(context, availability_zone) except exception.InvalidInput as err: raise exc.HTTPBadRequest(explanation=six.text_type(err)) if host or node: context.can(server_policies.SERVERS % 'create:forced_host', target=target) if api_version_request.is_supported(req, min_version='2.74'): self._process_hosts_for_create(context, target, server_dict, create_kwargs, host, node) self._process_bdms_for_create( context, target, server_dict, create_kwargs) image_uuid = self._image_from_req_data(server_dict, create_kwargs) self._process_networks_for_create( context, target, server_dict, create_kwargs) flavor_id = self._flavor_id_from_req_data(body) try: inst_type = flavors.get_flavor_by_flavor_id( flavor_id, ctxt=context, read_deleted="no") supports_multiattach = common.supports_multiattach_volume(req) supports_port_resource_request = \ common.supports_port_resource_request(req) (instances, resv_id) = self.compute_api.create(context, inst_type, image_uuid, display_name=name, display_description=description, availability_zone=availability_zone, forced_host=host, forced_node=node, metadata=server_dict.get('metadata', {}), admin_password=password, check_server_group_quota=True, supports_multiattach=supports_multiattach, supports_port_resource_request=supports_port_resource_request, **create_kwargs) except (exception.QuotaError, exception.PortLimitExceeded) as error: raise exc.HTTPForbidden( explanation=error.format_message()) except exception.ImageNotFound: msg = _("Can not find requested image") raise exc.HTTPBadRequest(explanation=msg) except exception.KeypairNotFound: msg = _("Invalid 
key_name provided.") raise exc.HTTPBadRequest(explanation=msg) except exception.ConfigDriveInvalidValue: msg = _("Invalid config_drive provided.") raise exc.HTTPBadRequest(explanation=msg) except (exception.BootFromVolumeRequiredForZeroDiskFlavor, exception.ExternalNetworkAttachForbidden) as error: raise exc.HTTPForbidden(explanation=error.format_message()) except messaging.RemoteError as err: msg = "%(err_type)s: %(err_msg)s" % {'err_type': err.exc_type, 'err_msg': err.value} raise exc.HTTPBadRequest(explanation=msg) except UnicodeDecodeError as error: msg = "UnicodeError: %s" % error raise exc.HTTPBadRequest(explanation=msg) except (exception.ImageNotActive, exception.ImageBadRequest, exception.ImageNotAuthorized, exception.ImageUnacceptable, exception.FixedIpNotFoundForAddress, exception.FlavorNotFound, exception.FlavorDiskTooSmall, exception.FlavorMemoryTooSmall, exception.InvalidMetadata, exception.InvalidVolume, exception.MismatchVolumeAZException, exception.MultiplePortsNotApplicable, exception.InvalidFixedIpAndMaxCountRequest, exception.InstanceUserDataMalformed, exception.PortNotFound, exception.FixedIpAlreadyInUse, exception.SecurityGroupNotFound, exception.PortRequiresFixedIP, exception.NetworkRequiresSubnet, exception.NetworkNotFound, exception.InvalidBDM, exception.InvalidBDMSnapshot, exception.InvalidBDMVolume, exception.InvalidBDMImage, exception.InvalidBDMBootSequence, exception.InvalidBDMLocalsLimit, exception.InvalidBDMVolumeNotBootable, exception.InvalidBDMEphemeralSize, exception.InvalidBDMFormat, exception.InvalidBDMSwapSize, exception.InvalidBDMDiskBus, exception.VolumeTypeNotFound, exception.AutoDiskConfigDisabledByImage, exception.InstanceGroupNotFound, exception.SnapshotNotFound, exception.UnableToAutoAllocateNetwork, exception.MultiattachNotSupportedOldMicroversion, exception.CertificateValidationFailed, exception.CreateWithPortResourceRequestOldVersion, exception.DeviceProfileError, exception.ComputeHostNotFound) as error: raise exc.HTTPBadRequest(explanation=error.format_message()) except INVALID_FLAVOR_IMAGE_EXCEPTIONS as error: raise exc.HTTPBadRequest(explanation=error.format_message()) except (exception.PortInUse, exception.InstanceExists, exception.NetworkAmbiguous, exception.NoUniqueMatch) as error: raise exc.HTTPConflict(explanation=error.format_message()) # If the caller wanted a reservation_id, return it if server_dict.get('return_reservation_id', False): return wsgi.ResponseObject({'reservation_id': resv_id}) server = self._view_builder.create(req, instances[0]) if CONF.api.enable_instance_password: server['server']['adminPass'] = password robj = wsgi.ResponseObject(server) return self._add_location(robj) def _delete(self, context, req, instance_uuid): instance = self._get_server(context, req, instance_uuid) context.can(server_policies.SERVERS % 'delete', target={'user_id': instance.user_id, 'project_id': instance.project_id}) if CONF.reclaim_instance_interval: try: self.compute_api.soft_delete(context, instance) except exception.InstanceInvalidState: # Note(yufang521247): instance which has never been active # is not allowed to be soft_deleted. Thus we have to call # delete() to clean up the instance. 
self.compute_api.delete(context, instance) else: self.compute_api.delete(context, instance) @wsgi.expected_errors(404) @validation.schema(schema_servers.base_update_v20, '2.0', '2.0') @validation.schema(schema_servers.base_update, '2.1', '2.18') @validation.schema(schema_servers.base_update_v219, '2.19') def update(self, req, id, body): """Update server then pass on to version-specific controller.""" ctxt = req.environ['nova.context'] update_dict = {} instance = self._get_server(ctxt, req, id, is_detail=True) ctxt.can(server_policies.SERVERS % 'update', target={'user_id': instance.user_id, 'project_id': instance.project_id}) show_server_groups = api_version_request.is_supported( req, min_version='2.71') server = body['server'] if 'name' in server: update_dict['display_name'] = common.normalize_name( server['name']) if 'description' in server: # This is allowed to be None (remove description) update_dict['display_description'] = server['description'] helpers.translate_attributes(helpers.UPDATE, server, update_dict) try: instance = self.compute_api.update_instance(ctxt, instance, update_dict) # NOTE(gmann): Starting from microversion 2.75, PUT and Rebuild # API response will show all attributes like GET /servers API. show_all_attributes = api_version_request.is_supported( req, min_version='2.75') extend_address = show_all_attributes show_AZ = show_all_attributes show_config_drive = show_all_attributes show_keypair = show_all_attributes show_srv_usg = show_all_attributes show_sec_grp = show_all_attributes show_extended_status = show_all_attributes show_extended_volumes = show_all_attributes # NOTE(gmann): Below attributes need to be added in response # if respective policy allows.So setting these as None # to perform the policy check in view builder. show_extended_attr = None if show_all_attributes else False show_host_status = None if show_all_attributes else False return self._view_builder.show( req, instance, extend_address=extend_address, show_AZ=show_AZ, show_config_drive=show_config_drive, show_extended_attr=show_extended_attr, show_host_status=show_host_status, show_keypair=show_keypair, show_srv_usg=show_srv_usg, show_sec_grp=show_sec_grp, show_extended_status=show_extended_status, show_extended_volumes=show_extended_volumes, show_server_groups=show_server_groups) except exception.InstanceNotFound: msg = _("Instance could not be found") raise exc.HTTPNotFound(explanation=msg) # NOTE(gmann): Returns 204 for backwards compatibility but should be 202 # for representing async API as this API just accepts the request and # request hypervisor driver to complete the same in async mode. 
@wsgi.response(204) @wsgi.expected_errors((400, 404, 409)) @wsgi.action('confirmResize') def _action_confirm_resize(self, req, id, body): context = req.environ['nova.context'] instance = self._get_server(context, req, id) context.can(server_policies.SERVERS % 'confirm_resize', target={'project_id': instance.project_id}) try: self.compute_api.confirm_resize(context, instance) except exception.MigrationNotFound: msg = _("Instance has not been resized.") raise exc.HTTPBadRequest(explanation=msg) except ( exception.InstanceIsLocked, exception.ServiceUnavailable, ) as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'confirmResize', id) @wsgi.response(202) @wsgi.expected_errors((400, 404, 409)) @wsgi.action('revertResize') def _action_revert_resize(self, req, id, body): context = req.environ['nova.context'] instance = self._get_server(context, req, id) context.can(server_policies.SERVERS % 'revert_resize', target={'project_id': instance.project_id}) try: self.compute_api.revert_resize(context, instance) except exception.MigrationNotFound: msg = _("Instance has not been resized.") raise exc.HTTPBadRequest(explanation=msg) except exception.FlavorNotFound: msg = _("Flavor used by the instance could not be found.") raise exc.HTTPBadRequest(explanation=msg) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'revertResize', id) @wsgi.response(202) @wsgi.expected_errors((404, 409)) @wsgi.action('reboot') @validation.schema(schema_servers.reboot) def _action_reboot(self, req, id, body): reboot_type = body['reboot']['type'].upper() context = req.environ['nova.context'] instance = self._get_server(context, req, id) context.can(server_policies.SERVERS % 'reboot', target={'project_id': instance.project_id}) try: self.compute_api.reboot(context, instance, reboot_type) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'reboot', id) def _resize(self, req, instance_id, flavor_id, auto_disk_config=None): """Begin the resize process with given instance/flavor.""" context = req.environ["nova.context"] instance = self._get_server(context, req, instance_id, columns_to_join=['services']) context.can(server_policies.SERVERS % 'resize', target={'user_id': instance.user_id, 'project_id': instance.project_id}) if common.instance_has_port_with_resource_request( instance_id, self.network_api): # TODO(gibi): Remove when nova only supports compute newer than # Train source_service = objects.Service.get_by_host_and_binary( context, instance.host, 'nova-compute') if source_service.version < MIN_COMPUTE_MOVE_BANDWIDTH: msg = _("The resize action on a server with ports having " "resource requests, like a port with a QoS " "minimum bandwidth policy, is not yet supported.") raise exc.HTTPConflict(explanation=msg) try: self.compute_api.resize(context, instance, flavor_id, auto_disk_config=auto_disk_config) except (exception.QuotaError, exception.ForbiddenWithAccelerators) as error: raise exc.HTTPForbidden( explanation=error.format_message()) except (exception.InstanceIsLocked, exception.InstanceNotReady, exception.ServiceUnavailable) as e: raise 
exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'resize', instance_id) except exception.ImageNotAuthorized: msg = _("You are not authorized to access the image " "the instance was started with.") raise exc.HTTPUnauthorized(explanation=msg) except exception.ImageNotFound: msg = _("Image that the instance was started " "with could not be found.") raise exc.HTTPBadRequest(explanation=msg) except (exception.AutoDiskConfigDisabledByImage, exception.CannotResizeDisk, exception.CannotResizeToSameFlavor, exception.FlavorNotFound) as e: raise exc.HTTPBadRequest(explanation=e.format_message()) except INVALID_FLAVOR_IMAGE_EXCEPTIONS as e: raise exc.HTTPBadRequest(explanation=e.format_message()) except exception.Invalid: msg = _("Invalid instance image.") raise exc.HTTPBadRequest(explanation=msg) @wsgi.response(204) @wsgi.expected_errors((404, 409)) def delete(self, req, id): """Destroys a server.""" try: self._delete(req.environ['nova.context'], req, id) except exception.InstanceNotFound: msg = _("Instance could not be found") raise exc.HTTPNotFound(explanation=msg) except (exception.InstanceIsLocked, exception.AllocationDeleteFailed) as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'delete', id) def _image_from_req_data(self, server_dict, create_kwargs): """Get image data from the request or raise appropriate exceptions. The field imageRef is mandatory when no block devices have been defined and must be a proper uuid when present. """ image_href = server_dict.get('imageRef') if not image_href and create_kwargs.get('block_device_mapping'): return '' elif image_href: return image_href else: msg = _("Missing imageRef attribute") raise exc.HTTPBadRequest(explanation=msg) def _flavor_id_from_req_data(self, data): flavor_ref = data['server']['flavorRef'] return common.get_id_from_href(flavor_ref) @wsgi.response(202) @wsgi.expected_errors((400, 401, 403, 404, 409)) @wsgi.action('resize') @validation.schema(schema_servers.resize) def _action_resize(self, req, id, body): """Resizes a given instance to the flavor size requested.""" resize_dict = body['resize'] flavor_ref = str(resize_dict["flavorRef"]) kwargs = {} helpers.translate_attributes(helpers.RESIZE, resize_dict, kwargs) self._resize(req, id, flavor_ref, **kwargs) @wsgi.response(202) @wsgi.expected_errors((400, 403, 404, 409)) @wsgi.action('rebuild') @validation.schema(schema_servers.base_rebuild_v20, '2.0', '2.0') @validation.schema(schema_servers.base_rebuild, '2.1', '2.18') @validation.schema(schema_servers.base_rebuild_v219, '2.19', '2.53') @validation.schema(schema_servers.base_rebuild_v254, '2.54', '2.56') @validation.schema(schema_servers.base_rebuild_v257, '2.57', '2.62') @validation.schema(schema_servers.base_rebuild_v263, '2.63') def _action_rebuild(self, req, id, body): """Rebuild an instance with the given attributes.""" rebuild_dict = body['rebuild'] image_href = rebuild_dict["imageRef"] password = self._get_server_admin_password(rebuild_dict) context = req.environ['nova.context'] instance = self._get_server(context, req, id) target = {'user_id': instance.user_id, 'project_id': instance.project_id} context.can(server_policies.SERVERS % 'rebuild', target=target) attr_map = { 'name': 'display_name', 'description': 'display_description', 'metadata': 'metadata', } kwargs = {} 
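        # Editor's note (illustrative only, not executed): the attr_map above
        # is applied by the loop further below; each request attribute that is
        # present in the rebuild body is copied into kwargs under the matching
        # instance attribute name, with 'name' passed through
        # common.normalize_name().  For a hypothetical body
        # {'rebuild': {'name': 'new-name', 'imageRef': '<image>'}} the loop
        # would set kwargs['display_name'] from the normalized name and leave
        # display_description/metadata untouched because those keys are
        # absent from the request.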
helpers.translate_attributes(helpers.REBUILD, rebuild_dict, kwargs) if (api_version_request.is_supported(req, min_version='2.54') and 'key_name' in rebuild_dict): kwargs['key_name'] = rebuild_dict.get('key_name') # If user_data is not specified, we don't include it in kwargs because # we don't want to overwrite the existing user_data. include_user_data = api_version_request.is_supported( req, min_version='2.57') if include_user_data and 'user_data' in rebuild_dict: kwargs['user_data'] = rebuild_dict['user_data'] # Skip policy check for 'rebuild:trusted_certs' if no trusted # certificate IDs were provided. if ((api_version_request.is_supported(req, min_version='2.63')) and # Note that this is different from server create since with # rebuild a user can unset/reset the trusted certs by # specifying trusted_image_certificates=None, similar to # key_name. ('trusted_image_certificates' in rebuild_dict)): kwargs['trusted_certs'] = rebuild_dict.get( 'trusted_image_certificates') context.can(server_policies.SERVERS % 'rebuild:trusted_certs', target=target) for request_attribute, instance_attribute in attr_map.items(): try: if request_attribute == 'name': kwargs[instance_attribute] = common.normalize_name( rebuild_dict[request_attribute]) else: kwargs[instance_attribute] = rebuild_dict[ request_attribute] except (KeyError, TypeError): pass try: self.compute_api.rebuild(context, instance, image_href, password, **kwargs) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'rebuild', id) except exception.InstanceNotFound: msg = _("Instance could not be found") raise exc.HTTPNotFound(explanation=msg) except exception.ImageNotFound: msg = _("Cannot find image for rebuild") raise exc.HTTPBadRequest(explanation=msg) except exception.KeypairNotFound: msg = _("Invalid key_name provided.") raise exc.HTTPBadRequest(explanation=msg) except (exception.QuotaError, exception.ForbiddenWithAccelerators) as error: raise exc.HTTPForbidden(explanation=error.format_message()) except (exception.AutoDiskConfigDisabledByImage, exception.CertificateValidationFailed, exception.FlavorDiskTooSmall, exception.FlavorMemoryTooSmall, exception.ImageNotActive, exception.ImageUnacceptable, exception.InvalidMetadata, exception.InvalidArchitectureName, exception.InvalidVolume, ) as error: raise exc.HTTPBadRequest(explanation=error.format_message()) except INVALID_FLAVOR_IMAGE_EXCEPTIONS as error: raise exc.HTTPBadRequest(explanation=error.format_message()) instance = self._get_server(context, req, id, is_detail=True) # NOTE(liuyulong): set the new key_name for the API response. # from microversion 2.54 onwards. show_keypair = api_version_request.is_supported( req, min_version='2.54') show_server_groups = api_version_request.is_supported( req, min_version='2.71') # NOTE(gmann): Starting from microversion 2.75, PUT and Rebuild # API response will show all attributes like GET /servers API. 
show_all_attributes = api_version_request.is_supported( req, min_version='2.75') extend_address = show_all_attributes show_AZ = show_all_attributes show_config_drive = show_all_attributes show_srv_usg = show_all_attributes show_sec_grp = show_all_attributes show_extended_status = show_all_attributes show_extended_volumes = show_all_attributes # NOTE(gmann): Below attributes need to be added in response # if respective policy allows.So setting these as None # to perform the policy check in view builder. show_extended_attr = None if show_all_attributes else False show_host_status = None if show_all_attributes else False view = self._view_builder.show( req, instance, extend_address=extend_address, show_AZ=show_AZ, show_config_drive=show_config_drive, show_extended_attr=show_extended_attr, show_host_status=show_host_status, show_keypair=show_keypair, show_srv_usg=show_srv_usg, show_sec_grp=show_sec_grp, show_extended_status=show_extended_status, show_extended_volumes=show_extended_volumes, show_server_groups=show_server_groups, # NOTE(gmann): user_data has been added in response (by code at # the end of this API method) since microversion 2.57 so tell # view builder not to include it. show_user_data=False) # Add on the admin_password attribute since the view doesn't do it # unless instance passwords are disabled if CONF.api.enable_instance_password: view['server']['adminPass'] = password if include_user_data: view['server']['user_data'] = instance.user_data robj = wsgi.ResponseObject(view) return self._add_location(robj) @wsgi.response(202) @wsgi.expected_errors((400, 403, 404, 409)) @wsgi.action('createImage') @validation.schema(schema_servers.create_image, '2.0', '2.0') @validation.schema(schema_servers.create_image, '2.1') def _action_create_image(self, req, id, body): """Snapshot a server instance.""" context = req.environ['nova.context'] instance = self._get_server(context, req, id) target = {'project_id': instance.project_id} context.can(server_policies.SERVERS % 'create_image', target=target) entity = body["createImage"] image_name = common.normalize_name(entity["name"]) metadata = entity.get('metadata', {}) # Starting from microversion 2.39 we don't check quotas on createImage if api_version_request.is_supported( req, max_version= api_version_request.MAX_IMAGE_META_PROXY_API_VERSION): common.check_img_metadata_properties_quota(context, metadata) bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) try: if compute_utils.is_volume_backed_instance(context, instance, bdms): context.can(server_policies.SERVERS % 'create_image:allow_volume_backed', target=target) image = self.compute_api.snapshot_volume_backed( context, instance, image_name, extra_properties= metadata) else: image = self.compute_api.snapshot(context, instance, image_name, extra_properties=metadata) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'createImage', id) except exception.Invalid as err: raise exc.HTTPBadRequest(explanation=err.format_message()) except exception.OverQuota as e: raise exc.HTTPForbidden(explanation=e.format_message()) # Starting with microversion 2.45 we return a response body containing # the snapshot image id without the Location header. 
if api_version_request.is_supported(req, '2.45'): return {'image_id': image['id']} # build location of newly-created image entity image_id = str(image['id']) image_ref = glance.API().generate_image_url(image_id, context) resp = webob.Response(status_int=202) resp.headers['Location'] = image_ref return resp def _get_server_admin_password(self, server): """Determine the admin password for a server on creation.""" if 'adminPass' in server: password = server['adminPass'] else: password = utils.generate_password() return password def _get_server_search_options(self, req): """Return server search options allowed by non-admin.""" # NOTE(mriedem): all_tenants is admin-only by default but because of # tight-coupling between this method, the remove_invalid_options method # and how _get_servers uses them, we include all_tenants here but it # will be removed later for non-admins. Fixing this would be nice but # probably not trivial. opt_list = ('reservation_id', 'name', 'status', 'image', 'flavor', 'ip', 'changes-since', 'all_tenants') if api_version_request.is_supported(req, min_version='2.5'): opt_list += ('ip6',) if api_version_request.is_supported(req, min_version='2.26'): opt_list += TAG_SEARCH_FILTERS if api_version_request.is_supported(req, min_version='2.66'): opt_list += ('changes-before',) if api_version_request.is_supported(req, min_version='2.73'): opt_list += ('locked',) if api_version_request.is_supported(req, min_version='2.83'): opt_list += ('availability_zone', 'config_drive', 'key_name', 'created_at', 'launched_at', 'terminated_at', 'power_state', 'task_state', 'vm_state', 'progress', 'user_id',) return opt_list def _get_instance(self, context, instance_uuid): try: attrs = ['system_metadata', 'metadata'] mapping = objects.InstanceMapping.get_by_instance_uuid( context, instance_uuid) nova_context.set_target_cell(context, mapping.cell_mapping) return objects.Instance.get_by_uuid( context, instance_uuid, expected_attrs=attrs) except (exception.InstanceNotFound, exception.InstanceMappingNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) @wsgi.response(202) @wsgi.expected_errors((404, 409)) @wsgi.action('os-start') def _start_server(self, req, id, body): """Start an instance.""" context = req.environ['nova.context'] instance = self._get_instance(context, id) context.can(server_policies.SERVERS % 'start', target={'user_id': instance.user_id, 'project_id': instance.project_id}) try: self.compute_api.start(context, instance) except (exception.InstanceNotReady, exception.InstanceIsLocked) as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'start', id) @wsgi.response(202) @wsgi.expected_errors((404, 409)) @wsgi.action('os-stop') def _stop_server(self, req, id, body): """Stop an instance.""" context = req.environ['nova.context'] instance = self._get_instance(context, id) context.can(server_policies.SERVERS % 'stop', target={'user_id': instance.user_id, 'project_id': instance.project_id}) try: self.compute_api.stop(context, instance) except (exception.InstanceNotReady, exception.InstanceIsLocked) as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'stop', id) @wsgi.Controller.api_version("2.17") @wsgi.response(202) @wsgi.expected_errors((400, 404, 409)) @wsgi.action('trigger_crash_dump') 
@validation.schema(schema_servers.trigger_crash_dump) def _action_trigger_crash_dump(self, req, id, body): """Trigger crash dump in an instance""" context = req.environ['nova.context'] instance = self._get_instance(context, id) context.can(server_policies.SERVERS % 'trigger_crash_dump', target={'user_id': instance.user_id, 'project_id': instance.project_id}) try: self.compute_api.trigger_crash_dump(context, instance) except (exception.InstanceNotReady, exception.InstanceIsLocked) as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'trigger_crash_dump', id) def remove_invalid_options(context, search_options, allowed_search_options): """Remove search options that are not permitted unless policy allows.""" if context.can(server_policies.SERVERS % 'allow_all_filters', fatal=False): # Only remove parameters for sorting and pagination for key in PAGING_SORTING_PARAMS: search_options.pop(key, None) return # Otherwise, strip out all unknown options unknown_options = [opt for opt in search_options if opt not in allowed_search_options] if unknown_options: LOG.debug("Removing options '%s' from query", ", ".join(unknown_options)) for opt in unknown_options: search_options.pop(opt, None) def remove_invalid_sort_keys(context, sort_keys, sort_dirs, blacklist, admin_only_fields): key_list = copy.deepcopy(sort_keys) for key in key_list: # NOTE(Kevin Zheng): We are intend to remove the sort_key # in the blacklist and its' corresponding sort_dir, since # the sort_key and sort_dir are not strict to be provide # in pairs in the current implement, sort_dirs could be # less than sort_keys, in order to avoid IndexError, we # only pop sort_dir when number of sort_dirs is no less # than the sort_key index. if key in blacklist: if len(sort_dirs) > sort_keys.index(key): sort_dirs.pop(sort_keys.index(key)) sort_keys.pop(sort_keys.index(key)) elif key in admin_only_fields and not context.is_admin: msg = _("Only administrators can sort servers " "by %s") % key raise exc.HTTPForbidden(explanation=msg) return sort_keys, sort_dirs ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/services.py0000664000175000017500000005021100000000000022141 0ustar00zuulzuul00000000000000# Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
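# Editor's note: a standalone sketch (not part of this module) of the
# sort-key/sort-dir pairing behaviour implemented by remove_invalid_sort_keys()
# in servers.py above.  The sample keys, directions and blacklist below are
# hypothetical; the point is that a blacklisted sort_key only pops a sort_dir
# when one was actually supplied at that position, because callers may pass
# fewer sort_dirs than sort_keys.
import copy


def _drop_blacklisted_sort_keys(sort_keys, sort_dirs, blacklist):
    for key in copy.deepcopy(sort_keys):
        if key in blacklist:
            index = sort_keys.index(key)
            # Only pop a direction if one was provided for this position.
            if len(sort_dirs) > index:
                sort_dirs.pop(index)
            sort_keys.pop(index)
    return sort_keys, sort_dirs


# Example: 'host' is blacklisted; only two directions were supplied.
keys, dirs = _drop_blacklisted_sort_keys(
    ['display_name', 'host', 'created_at'], ['asc', 'desc'], {'host'})
assert keys == ['display_name', 'created_at']
assert dirs == ['asc']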
from keystoneauth1 import exceptions as ks_exc from oslo_log import log as logging from oslo_utils import strutils from oslo_utils import uuidutils import six import webob.exc from nova.api.openstack import api_version_request from nova.api.openstack.compute.schemas import services from nova.api.openstack import wsgi from nova.api import validation from nova import availability_zones from nova.compute import api as compute from nova import exception from nova.i18n import _ from nova import objects from nova.policies import services as services_policies from nova.scheduler.client import report from nova import servicegroup from nova import utils UUID_FOR_ID_MIN_VERSION = '2.53' PARTIAL_CONSTRUCT_FOR_CELL_DOWN_MIN_VERSION = '2.69' LOG = logging.getLogger(__name__) class ServiceController(wsgi.Controller): def __init__(self): super(ServiceController, self).__init__() self.host_api = compute.HostAPI() self.aggregate_api = compute.AggregateAPI() self.servicegroup_api = servicegroup.API() self.actions = {"enable": self._enable, "disable": self._disable, "disable-log-reason": self._disable_log_reason} self._placementclient = None # Lazy-load on first access. @property def placementclient(self): if self._placementclient is None: self._placementclient = report.SchedulerReportClient() return self._placementclient def _get_services(self, req): # The API services are filtered out since they are not RPC services # and therefore their state is not reported through the service group # API, so they would always be reported as 'down' (see bug 1543625). api_services = ('nova-osapi_compute', 'nova-metadata') context = req.environ['nova.context'] cell_down_support = api_version_request.is_supported( req, min_version=PARTIAL_CONSTRUCT_FOR_CELL_DOWN_MIN_VERSION) _services = [ s for s in self.host_api.service_get_all(context, set_zones=True, all_cells=True, cell_down_support=cell_down_support) if s['binary'] not in api_services ] host = '' if 'host' in req.GET: host = req.GET['host'] binary = '' if 'binary' in req.GET: binary = req.GET['binary'] if host: _services = [s for s in _services if s['host'] == host] if binary: _services = [s for s in _services if s['binary'] == binary] return _services def _get_service_detail(self, svc, additional_fields, req, cell_down_support=False): # NOTE(tssurya): The below logic returns a minimal service construct # consisting of only the host, binary and status fields for the compute # services in the down cell. if (cell_down_support and 'uuid' not in svc): return {'binary': svc.binary, 'host': svc.host, 'status': "UNKNOWN"} alive = self.servicegroup_api.service_is_up(svc) state = (alive and "up") or "down" active = 'enabled' if svc['disabled']: active = 'disabled' updated_time = self.servicegroup_api.get_updated_time(svc) uuid_for_id = api_version_request.is_supported( req, min_version=UUID_FOR_ID_MIN_VERSION) if 'availability_zone' not in svc: # The service wasn't loaded with the AZ so we need to do it here. # Yes this looks weird, but set_availability_zones makes a copy of # the list passed in and mutates the objects within it, so we have # to pull it back out from the resulting copied list. 
svc.availability_zone = ( availability_zones.set_availability_zones( req.environ['nova.context'], [svc])[0]['availability_zone']) service_detail = {'binary': svc['binary'], 'host': svc['host'], 'id': svc['uuid' if uuid_for_id else 'id'], 'zone': svc['availability_zone'], 'status': active, 'state': state, 'updated_at': updated_time, 'disabled_reason': svc['disabled_reason']} for field in additional_fields: service_detail[field] = svc[field] return service_detail def _get_services_list(self, req, additional_fields=()): _services = self._get_services(req) cell_down_support = api_version_request.is_supported(req, min_version=PARTIAL_CONSTRUCT_FOR_CELL_DOWN_MIN_VERSION) return [self._get_service_detail(svc, additional_fields, req, cell_down_support=cell_down_support) for svc in _services] def _enable(self, body, context): """Enable scheduling for a service.""" return self._enable_disable(body, context, "enabled", {'disabled': False, 'disabled_reason': None}) def _disable(self, body, context, reason=None): """Disable scheduling for a service with optional log.""" return self._enable_disable(body, context, "disabled", {'disabled': True, 'disabled_reason': reason}) def _disable_log_reason(self, body, context): """Disable scheduling for a service with a log.""" try: reason = body['disabled_reason'] except KeyError: msg = _('Missing disabled reason field') raise webob.exc.HTTPBadRequest(explanation=msg) return self._disable(body, context, reason) def _enable_disable(self, body, context, status, params_to_update): """Enable/Disable scheduling for a service.""" reason = params_to_update.get('disabled_reason') ret_value = { 'service': { 'host': body['host'], 'binary': body['binary'], 'status': status }, } if reason: ret_value['service']['disabled_reason'] = reason self._update(context, body['host'], body['binary'], params_to_update) return ret_value def _forced_down(self, body, context): """Set or unset forced_down flag for the service""" try: forced_down = strutils.bool_from_string(body["forced_down"]) except KeyError: msg = _('Missing forced_down field') raise webob.exc.HTTPBadRequest(explanation=msg) host = body['host'] binary = body['binary'] ret_value = {'service': {'host': host, 'binary': binary, 'forced_down': forced_down}} self._update(context, host, binary, {"forced_down": forced_down}) return ret_value def _update(self, context, host, binary, payload): """Do the actual PUT/update""" # If the user tried to perform an action # (disable/enable/force down) on a non-nova-compute # service, provide a more useful error message. if binary != 'nova-compute': msg = (_( 'Updating a %(binary)s service is not supported. 
Only ' 'nova-compute services can be updated.') % {'binary': binary}) raise webob.exc.HTTPBadRequest(explanation=msg) try: self.host_api.service_update_by_host_and_binary( context, host, binary, payload) except (exception.HostBinaryNotFound, exception.HostMappingNotFound) as exc: raise webob.exc.HTTPNotFound(explanation=exc.format_message()) def _perform_action(self, req, id, body, actions): """Calculate action dictionary dependent on provided fields""" context = req.environ['nova.context'] try: action = actions[id] except KeyError: msg = _("Unknown action") raise webob.exc.HTTPNotFound(explanation=msg) return action(body, context) @wsgi.response(204) @wsgi.expected_errors((400, 404, 409)) def delete(self, req, id): """Deletes the specified service.""" context = req.environ['nova.context'] context.can(services_policies.BASE_POLICY_NAME % 'delete', target={}) if api_version_request.is_supported( req, min_version=UUID_FOR_ID_MIN_VERSION): if not uuidutils.is_uuid_like(id): msg = _('Invalid uuid %s') % id raise webob.exc.HTTPBadRequest(explanation=msg) else: try: utils.validate_integer(id, 'id') except exception.InvalidInput as exc: raise webob.exc.HTTPBadRequest( explanation=exc.format_message()) try: service = self.host_api.service_get_by_id(context, id) # remove the service from all the aggregates in which it's included if service.binary == 'nova-compute': # Check to see if there are any instances on this compute host # because if there are, we need to block the service (and # related compute_nodes record) delete since it will impact # resource accounting in Placement and orphan the compute node # resource provider. num_instances = objects.InstanceList.get_count_by_hosts( context, [service['host']]) if num_instances: raise webob.exc.HTTPConflict( explanation=_('Unable to delete compute service that ' 'is hosting instances. Migrate or ' 'delete the instances first.')) # Similarly, check to see if the are any in-progress migrations # involving this host because if there are we need to block the # service delete since we could orphan resource providers and # break the ability to do things like confirm/revert instances # in VERIFY_RESIZE status. compute_nodes = objects.ComputeNodeList.get_all_by_host( context, service.host) self._assert_no_in_progress_migrations( context, id, compute_nodes) aggrs = self.aggregate_api.get_aggregates_by_host(context, service.host) for ag in aggrs: self.aggregate_api.remove_host_from_aggregate(context, ag.id, service.host) # remove the corresponding resource provider record from # placement for the compute nodes managed by this service; # remember that an ironic compute service can manage multiple # nodes for compute_node in compute_nodes: try: self.placementclient.delete_resource_provider( context, compute_node, cascade=True) except ks_exc.ClientException as e: LOG.error( "Failed to delete compute node resource provider " "for compute node %s: %s", compute_node.uuid, six.text_type(e)) # remove the host_mapping of this host. try: hm = objects.HostMapping.get_by_host(context, service.host) hm.destroy() except exception.HostMappingNotFound: # It's possible to startup a nova-compute service and then # delete it (maybe it was accidental?) before mapping it to # a cell using discover_hosts, so we just ignore this. 
pass service.destroy() except exception.ServiceNotFound: explanation = _("Service %s not found.") % id raise webob.exc.HTTPNotFound(explanation=explanation) except exception.ServiceNotUnique: explanation = _("Service id %s refers to multiple services.") % id raise webob.exc.HTTPBadRequest(explanation=explanation) @staticmethod def _assert_no_in_progress_migrations(context, service_id, compute_nodes): """Ensures there are no in-progress migrations on the given nodes. :param context: nova auth RequestContext :param service_id: id of the Service being deleted :param compute_nodes: ComputeNodeList of nodes on a compute service :raises: HTTPConflict if there are any in-progress migrations on the nodes """ for cn in compute_nodes: migrations = ( objects.MigrationList.get_in_progress_by_host_and_node( context, cn.host, cn.hypervisor_hostname)) if migrations: # Log the migrations for the operator and then raise # a 409 error. LOG.info('Unable to delete compute service with id %s ' 'for host %s. There are %i in-progress ' 'migrations involving the host. Migrations ' '(uuid:status): %s', service_id, cn.host, len(migrations), ','.join(['%s:%s' % (mig.uuid, mig.status) for mig in migrations])) raise webob.exc.HTTPConflict( explanation=_( 'Unable to delete compute service that has ' 'in-progress migrations. Complete the ' 'migrations or delete the instances first.')) @validation.query_schema(services.index_query_schema_275, '2.75') @validation.query_schema(services.index_query_schema, '2.0', '2.74') @wsgi.expected_errors(()) def index(self, req): """Return a list of all running services. Filter by host & service name """ context = req.environ['nova.context'] context.can(services_policies.BASE_POLICY_NAME % 'list', target={}) if api_version_request.is_supported(req, min_version='2.11'): _services = self._get_services_list(req, ['forced_down']) else: _services = self._get_services_list(req) return {'services': _services} @wsgi.Controller.api_version('2.1', '2.52') @wsgi.expected_errors((400, 404)) @validation.schema(services.service_update, '2.0', '2.10') @validation.schema(services.service_update_v211, '2.11', '2.52') def update(self, req, id, body): """Perform service update Before microversion 2.53, the body contains a host and binary value to identify the service on which to perform the action. There is no service ID passed on the path, just the action, for example PUT /os-services/disable. """ context = req.environ['nova.context'] context.can(services_policies.BASE_POLICY_NAME % 'update', target={}) if api_version_request.is_supported(req, min_version='2.11'): actions = self.actions.copy() actions["force-down"] = self._forced_down else: actions = self.actions return self._perform_action(req, id, body, actions) @wsgi.Controller.api_version(UUID_FOR_ID_MIN_VERSION) # noqa F811 @wsgi.expected_errors((400, 404)) @validation.schema(services.service_update_v2_53, UUID_FOR_ID_MIN_VERSION) def update(self, req, id, body): """Perform service update Starting with microversion 2.53, the service uuid is passed in on the path of the request to uniquely identify the service record on which to perform a given update, which is defined in the body of the request. """ service_id = id # Validate that the service ID is a UUID. if not uuidutils.is_uuid_like(service_id): msg = _('Invalid uuid %s') % service_id raise webob.exc.HTTPBadRequest(explanation=msg) # Validate the request context against the policy. 
context = req.environ['nova.context'] context.can(services_policies.BASE_POLICY_NAME % 'update', target={}) # Get the service by uuid. try: service = self.host_api.service_get_by_id(context, service_id) # At this point the context is targeted to the cell that the # service was found in so we don't need to do any explicit cell # targeting below. except exception.ServiceNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) # Return 400 if service.binary is not nova-compute. # Before the earlier PUT handlers were made cells-aware, you could # technically disable a nova-scheduler service, although that doesn't # really do anything within Nova and is just confusing. Now trying to # do that will fail as a nova-scheduler service won't have a host # mapping so you'll get a 400. In this new microversion, we close that # old gap and make sure you can only enable/disable and set forced_down # on nova-compute services since those are the only ones that make # sense to update for those operations. if service.binary != 'nova-compute': msg = (_('Updating a %(binary)s service is not supported. Only ' 'nova-compute services can be updated.') % {'binary': service.binary}) raise webob.exc.HTTPBadRequest(explanation=msg) # Now determine the update to perform based on the body. We are # intentionally not using _perform_action or the other old-style # action functions. if 'status' in body: # This is a status update for either enabled or disabled. if body['status'] == 'enabled': # Fail if 'disabled_reason' was requested when enabling the # service since those two combined don't make sense. if body.get('disabled_reason'): msg = _("Specifying 'disabled_reason' with status " "'enabled' is invalid.") raise webob.exc.HTTPBadRequest(explanation=msg) service.disabled = False service.disabled_reason = None elif body['status'] == 'disabled': service.disabled = True # The disabled reason is optional. service.disabled_reason = body.get('disabled_reason') # This is intentionally not an elif, i.e. it's in addition to the # status update. if 'forced_down' in body: service.forced_down = strutils.bool_from_string( body['forced_down'], strict=True) # Check to see if anything was actually updated since the schema does # not define any required fields. if not service.obj_what_changed(): msg = _("No updates were requested. Fields 'status' or " "'forced_down' should be specified.") raise webob.exc.HTTPBadRequest(explanation=msg) # Now save our updates to the service record in the database. self.host_api.service_update(context, service) # Return the full service record details. additional_fields = ['forced_down'] return {'service': self._get_service_detail( service, additional_fields, req)} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/shelve.py0000664000175000017500000001334700000000000021615 0ustar00zuulzuul00000000000000# Copyright 2013 Rackspace Hosting # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
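# Editor's note: a standalone sketch (not part of this module) of the
# microversion 2.53 PUT /os-services/{service_id} body handling shown in
# services.py above.  The Service object is replaced by a plain dict and the
# helper name is hypothetical; the rules mirrored are that 'disabled_reason'
# is only valid together with status 'disabled', and 'forced_down' is parsed
# as a strict boolean.
from oslo_utils import strutils


def _apply_service_update(service, body):
    if 'status' in body:
        if body['status'] == 'enabled':
            if body.get('disabled_reason'):
                raise ValueError(
                    "Specifying 'disabled_reason' with status 'enabled' "
                    "is invalid.")
            service['disabled'] = False
            service['disabled_reason'] = None
        elif body['status'] == 'disabled':
            service['disabled'] = True
            service['disabled_reason'] = body.get('disabled_reason')
    if 'forced_down' in body:
        service['forced_down'] = strutils.bool_from_string(
            body['forced_down'], strict=True)
    return service


# Example request body: disable a compute service and force it down.
print(_apply_service_update(
    {'disabled': False, 'disabled_reason': None, 'forced_down': False},
    {'status': 'disabled', 'disabled_reason': 'maintenance',
     'forced_down': 'true'}))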
"""The shelved mode extension.""" from oslo_log import log as logging from webob import exc from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import shelve as shelve_schemas from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova.compute import vm_states from nova import exception from nova.i18n import _ from nova.network import neutron from nova.policies import shelve as shelve_policies LOG = logging.getLogger(__name__) class ShelveController(wsgi.Controller): def __init__(self): super(ShelveController, self).__init__() self.compute_api = compute.API() self.network_api = neutron.API() @wsgi.response(202) @wsgi.expected_errors((404, 409)) @wsgi.action('shelve') def _shelve(self, req, id, body): """Move an instance into shelved mode.""" context = req.environ["nova.context"] instance = common.get_instance(self.compute_api, context, id) context.can(shelve_policies.POLICY_ROOT % 'shelve', target={'user_id': instance.user_id, 'project_id': instance.project_id}) try: self.compute_api.shelve(context, instance) except (exception.InstanceIsLocked, exception.UnexpectedTaskStateError) as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'shelve', id) @wsgi.response(202) @wsgi.expected_errors((404, 409)) @wsgi.action('shelveOffload') def _shelve_offload(self, req, id, body): """Force removal of a shelved instance from the compute node.""" context = req.environ["nova.context"] context.can(shelve_policies.POLICY_ROOT % 'shelve_offload') instance = common.get_instance(self.compute_api, context, id) try: self.compute_api.shelve_offload(context, instance) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'shelveOffload', id) @wsgi.response(202) @wsgi.expected_errors((400, 404, 409)) @wsgi.action('unshelve') # In microversion 2.77 we support specifying 'availability_zone' to # unshelve a server. But before 2.77 there is no request body # schema validation (because of body=null). @validation.schema(shelve_schemas.unshelve_v277, min_version='2.77') def _unshelve(self, req, id, body): """Restore an instance from shelved mode.""" context = req.environ["nova.context"] instance = common.get_instance(self.compute_api, context, id) context.can(shelve_policies.POLICY_ROOT % 'unshelve', target={'project_id': instance.project_id}) new_az = None unshelve_dict = body['unshelve'] support_az = api_version_request.is_supported(req, '2.77') if support_az and unshelve_dict: new_az = unshelve_dict['availability_zone'] # We could potentially move this check to conductor and avoid the # extra API call to neutron when we support move operations with ports # having resource requests. 
if (instance.vm_state == vm_states.SHELVED_OFFLOADED and common.instance_has_port_with_resource_request( instance.uuid, self.network_api) and not common.supports_port_resource_request_during_move()): LOG.warning("The unshelve action on a server with ports having " "resource requests, like a port with a QoS minimum " "bandwidth policy, is not supported until every " "nova-compute is upgraded to Ussuri") msg = _("The unshelve action on a server with ports having " "resource requests, like a port with a QoS minimum " "bandwidth policy, is not supported by this cluster right " "now") raise exc.HTTPBadRequest(explanation=msg) try: self.compute_api.unshelve(context, instance, new_az=new_az) except (exception.InstanceIsLocked, exception.UnshelveInstanceInvalidState, exception.MismatchVolumeAZException) as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'unshelve', id) except exception.InvalidRequest as e: raise exc.HTTPBadRequest(explanation=e.format_message()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/simple_tenant_usage.py0000664000175000017500000003414700000000000024356 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import datetime import iso8601 from oslo_utils import timeutils import six import six.moves.urllib.parse as urlparse from webob import exc from nova.api.openstack import common from nova.api.openstack.compute.schemas import simple_tenant_usage as schema from nova.api.openstack.compute.views import usages as usages_view from nova.api.openstack import wsgi from nova.api import validation import nova.conf from nova import context as nova_context from nova import exception from nova.i18n import _ from nova import objects from nova.policies import simple_tenant_usage as stu_policies CONF = nova.conf.CONF def parse_strtime(dstr, fmt): try: return timeutils.parse_strtime(dstr, fmt) except (TypeError, ValueError) as e: raise exception.InvalidStrTime(reason=six.text_type(e)) class SimpleTenantUsageController(wsgi.Controller): _view_builder_class = usages_view.ViewBuilder def _hours_for(self, instance, period_start, period_stop): launched_at = instance.launched_at terminated_at = instance.terminated_at if terminated_at is not None: if not isinstance(terminated_at, datetime.datetime): # NOTE(mriedem): Instance object DateTime fields are # timezone-aware so convert using isotime. 
terminated_at = timeutils.parse_isotime(terminated_at) if launched_at is not None: if not isinstance(launched_at, datetime.datetime): launched_at = timeutils.parse_isotime(launched_at) if terminated_at and terminated_at < period_start: return 0 # nothing if it started after the usage report ended if launched_at and launched_at > period_stop: return 0 if launched_at: # if instance launched after period_started, don't charge for first start = max(launched_at, period_start) if terminated_at: # if instance stopped before period_stop, don't charge after stop = min(period_stop, terminated_at) else: # instance is still running, so charge them up to current time stop = period_stop dt = stop - start return dt.total_seconds() / 3600.0 else: # instance hasn't launched, so no charge return 0 def _get_flavor(self, context, instance, flavors_cache): """Get flavor information from the instance object, allowing a fallback to lookup by-id for deleted instances only. """ try: return instance.get_flavor() except exception.NotFound: if not instance.deleted: # Only support the fallback mechanism for deleted instances # that would have been skipped by migration #153 raise flavor_type = instance.instance_type_id if flavor_type in flavors_cache: return flavors_cache[flavor_type] try: flavor_ref = objects.Flavor.get_by_id(context, flavor_type) flavors_cache[flavor_type] = flavor_ref except exception.FlavorNotFound: # can't bill if there is no flavor flavor_ref = None return flavor_ref def _get_instances_all_cells(self, context, period_start, period_stop, tenant_id, limit, marker): all_instances = [] cells = objects.CellMappingList.get_all(context) for cell in cells: with nova_context.target_cell(context, cell) as cctxt: try: instances = ( objects.InstanceList.get_active_by_window_joined( cctxt, period_start, period_stop, tenant_id, expected_attrs=['flavor'], limit=limit, marker=marker)) except exception.MarkerNotFound: # NOTE(danms): We need to keep looking through the later # cells to find the marker continue all_instances.extend(instances) # NOTE(danms): We must have found a marker if we had one, # so make sure we don't require a marker in the next cell marker = None if limit: limit -= len(instances) if limit <= 0: break if marker is not None and len(all_instances) == 0: # NOTE(danms): If we did not find the marker in any cell, # mimic the db_api behavior here raise exception.MarkerNotFound(marker=marker) return all_instances def _tenant_usages_for_period(self, context, period_start, period_stop, tenant_id=None, detailed=True, limit=None, marker=None): instances = self._get_instances_all_cells(context, period_start, period_stop, tenant_id, limit, marker) rval = collections.OrderedDict() flavors = {} all_server_usages = [] for instance in instances: info = {} info['hours'] = self._hours_for(instance, period_start, period_stop) flavor = self._get_flavor(context, instance, flavors) if not flavor: info['flavor'] = '' else: info['flavor'] = flavor.name info['instance_id'] = instance.uuid info['name'] = instance.display_name info['tenant_id'] = instance.project_id try: info['memory_mb'] = instance.flavor.memory_mb info['local_gb'] = (instance.flavor.root_gb + instance.flavor.ephemeral_gb) info['vcpus'] = instance.flavor.vcpus except exception.InstanceNotFound: # This is rare case, instance disappear during analysis # As it's just info collection, we can try next one continue # NOTE(mriedem): We need to normalize the start/end times back # to timezone-naive so the response doesn't change after the # conversion to 
objects. info['started_at'] = timeutils.normalize_time(instance.launched_at) info['ended_at'] = ( timeutils.normalize_time(instance.terminated_at) if instance.terminated_at else None) if info['ended_at']: info['state'] = 'terminated' else: info['state'] = instance.vm_state now = timeutils.utcnow() if info['state'] == 'terminated': delta = info['ended_at'] - info['started_at'] else: delta = now - info['started_at'] info['uptime'] = int(delta.total_seconds()) if info['tenant_id'] not in rval: summary = {} summary['tenant_id'] = info['tenant_id'] if detailed: summary['server_usages'] = [] summary['total_local_gb_usage'] = 0 summary['total_vcpus_usage'] = 0 summary['total_memory_mb_usage'] = 0 summary['total_hours'] = 0 summary['start'] = timeutils.normalize_time(period_start) summary['stop'] = timeutils.normalize_time(period_stop) rval[info['tenant_id']] = summary summary = rval[info['tenant_id']] summary['total_local_gb_usage'] += info['local_gb'] * info['hours'] summary['total_vcpus_usage'] += info['vcpus'] * info['hours'] summary['total_memory_mb_usage'] += (info['memory_mb'] * info['hours']) summary['total_hours'] += info['hours'] all_server_usages.append(info) if detailed: summary['server_usages'].append(info) return list(rval.values()), all_server_usages def _parse_datetime(self, dtstr): if not dtstr: value = timeutils.utcnow() elif isinstance(dtstr, datetime.datetime): value = dtstr else: for fmt in ["%Y-%m-%dT%H:%M:%S", "%Y-%m-%dT%H:%M:%S.%f", "%Y-%m-%d %H:%M:%S.%f"]: try: value = parse_strtime(dtstr, fmt) break except exception.InvalidStrTime: pass else: msg = _("Datetime is in invalid format") raise exception.InvalidStrTime(reason=msg) # NOTE(mriedem): Instance object DateTime fields are timezone-aware # so we have to force UTC timezone for comparing this datetime against # instance object fields and still maintain backwards compatibility # in the API. if value.utcoffset() is None: value = value.replace(tzinfo=iso8601.UTC) return value def _get_datetime_range(self, req): qs = req.environ.get('QUERY_STRING', '') env = urlparse.parse_qs(qs) # NOTE(lzyeval): env.get() always returns a list period_start = self._parse_datetime(env.get('start', [None])[0]) period_stop = self._parse_datetime(env.get('end', [None])[0]) if not period_start < period_stop: msg = _("Invalid start time. 
The start time cannot occur after " "the end time.") raise exc.HTTPBadRequest(explanation=msg) detailed = env.get('detailed', ['0'])[0] == '1' return (period_start, period_stop, detailed) @wsgi.Controller.api_version("2.40") @validation.query_schema(schema.index_query_275, '2.75') @validation.query_schema(schema.index_query_v240, '2.40', '2.74') @wsgi.expected_errors(400) def index(self, req): """Retrieve tenant_usage for all tenants.""" return self._index(req, links=True) @wsgi.Controller.api_version("2.1", "2.39") # noqa @validation.query_schema(schema.index_query) @wsgi.expected_errors(400) def index(self, req): """Retrieve tenant_usage for all tenants.""" return self._index(req) @wsgi.Controller.api_version("2.40") @validation.query_schema(schema.show_query_275, '2.75') @validation.query_schema(schema.show_query_v240, '2.40', '2.74') @wsgi.expected_errors(400) def show(self, req, id): """Retrieve tenant_usage for a specified tenant.""" return self._show(req, id, links=True) @wsgi.Controller.api_version("2.1", "2.39") # noqa @validation.query_schema(schema.show_query) @wsgi.expected_errors(400) def show(self, req, id): """Retrieve tenant_usage for a specified tenant.""" return self._show(req, id) def _index(self, req, links=False): context = req.environ['nova.context'] context.can(stu_policies.POLICY_ROOT % 'list') try: (period_start, period_stop, detailed) = self._get_datetime_range( req) except exception.InvalidStrTime as e: raise exc.HTTPBadRequest(explanation=e.format_message()) now = timeutils.parse_isotime(timeutils.utcnow().isoformat()) if period_stop > now: period_stop = now marker = None limit = CONF.api.max_limit if links: limit, marker = common.get_limit_and_marker(req) try: usages, server_usages = self._tenant_usages_for_period( context, period_start, period_stop, detailed=detailed, limit=limit, marker=marker) except exception.MarkerNotFound as e: raise exc.HTTPBadRequest(explanation=e.format_message()) tenant_usages = {'tenant_usages': usages} if links: usages_links = self._view_builder.get_links(req, server_usages) if usages_links: tenant_usages['tenant_usages_links'] = usages_links return tenant_usages def _show(self, req, id, links=False): tenant_id = id context = req.environ['nova.context'] context.can(stu_policies.POLICY_ROOT % 'show', {'project_id': tenant_id}) try: (period_start, period_stop, ignore) = self._get_datetime_range( req) except exception.InvalidStrTime as e: raise exc.HTTPBadRequest(explanation=e.format_message()) now = timeutils.parse_isotime(timeutils.utcnow().isoformat()) if period_stop > now: period_stop = now marker = None limit = CONF.api.max_limit if links: limit, marker = common.get_limit_and_marker(req) try: usage, server_usages = self._tenant_usages_for_period( context, period_start, period_stop, tenant_id=tenant_id, detailed=True, limit=limit, marker=marker) except exception.MarkerNotFound as e: raise exc.HTTPBadRequest(explanation=e.format_message()) if len(usage): usage = list(usage)[0] else: usage = {} tenant_usage = {'tenant_usage': usage} if links: usages_links = self._view_builder.get_links( req, server_usages, tenant_id=tenant_id) if usages_links: tenant_usage['tenant_usage_links'] = usages_links return tenant_usage ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/suspend_server.py0000664000175000017500000000534400000000000023374 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from nova.api.openstack import common from nova.api.openstack import wsgi from nova.compute import api as compute from nova import exception from nova.policies import suspend_server as ss_policies class SuspendServerController(wsgi.Controller): def __init__(self): super(SuspendServerController, self).__init__() self.compute_api = compute.API() @wsgi.response(202) @wsgi.expected_errors((403, 404, 409)) @wsgi.action('suspend') def _suspend(self, req, id, body): """Permit admins to suspend the server.""" context = req.environ['nova.context'] server = common.get_instance(self.compute_api, context, id) try: context.can(ss_policies.POLICY_ROOT % 'suspend', target={'user_id': server.user_id, 'project_id': server.project_id}) self.compute_api.suspend(context, server) except (exception.OperationNotSupportedForSEV, exception.InstanceIsLocked) as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'suspend', id) except exception.ForbiddenWithAccelerators as e: raise exc.HTTPForbidden(explanation=e.format_message()) @wsgi.response(202) @wsgi.expected_errors((404, 409)) @wsgi.action('resume') def _resume(self, req, id, body): """Permit admins to resume the server from suspend.""" context = req.environ['nova.context'] server = common.get_instance(self.compute_api, context, id) context.can(ss_policies.POLICY_ROOT % 'resume', target={'project_id': server.project_id}) try: self.compute_api.resume(context, server) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'resume', id) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/tenant_networks.py0000664000175000017500000000661500000000000023554 0ustar00zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
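# ---------------------------------------------------------------------------
# Editor's note (not part of the original tree): a hedged sketch of what a
# client sends to reach the SuspendServerController actions above.  The URL
# is illustrative; the JSON action keys and the error translation are taken
# from the controller.
#
#   POST /servers/{server_id}/action
import json

suspend_request_body = json.dumps({"suspend": None})   # 202 Accepted on success
resume_request_body = json.dumps({"resume": None})     # 202 Accepted on success

# Error translation performed above:
#   exception.InstanceIsLocked, OperationNotSupportedForSEV -> 409 Conflict
#   exception.InstanceInvalidState                          -> 409 Conflict
#   exception.ForbiddenWithAccelerators (suspend only)      -> 403 Forbidden
#   unknown server id (common.get_instance)                 -> 404 Not Found
# ---------------------------------------------------------------------------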
from oslo_log import log as logging from webob import exc from nova.api.openstack.api_version_request \ import MAX_PROXY_API_SUPPORT_VERSION from nova.api.openstack import wsgi import nova.conf from nova import context as nova_context from nova import exception from nova.i18n import _ from nova.network import neutron from nova.policies import tenant_networks as tn_policies from nova import quota CONF = nova.conf.CONF QUOTAS = quota.QUOTAS LOG = logging.getLogger(__name__) def network_dict(network): # convert from a neutron response to something resembling what we used to # produce with nova-network return { 'id': network.get('id'), # yes, this is bananas, but this is what the API returned historically # when using neutron instead of nova-network, so we keep on returning # that 'cidr': str(None), 'label': network.get('name'), } class TenantNetworkController(wsgi.Controller): def __init__(self): super(TenantNetworkController, self).__init__() self.network_api = neutron.API() self._default_networks = [] def _refresh_default_networks(self): self._default_networks = [] if CONF.api.use_neutron_default_nets: try: self._default_networks = self._get_default_networks() except Exception: LOG.exception("Failed to get default networks") def _get_default_networks(self): project_id = CONF.api.neutron_default_tenant_id ctx = nova_context.RequestContext(user_id=None, project_id=project_id) return self.network_api.get_all(ctx) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(()) def index(self, req): context = req.environ['nova.context'] context.can(tn_policies.BASE_POLICY_NAME) networks = list(self.network_api.get_all(context)) if not self._default_networks: self._refresh_default_networks() networks.extend(self._default_networks) return {'networks': [network_dict(n) for n in networks]} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(404) def show(self, req, id): context = req.environ['nova.context'] context.can(tn_policies.BASE_POLICY_NAME) try: network = self.network_api.get(context, id) except exception.NetworkNotFound: msg = _("Network not found") raise exc.HTTPNotFound(explanation=msg) return {'network': network_dict(network)} @wsgi.expected_errors(410) def delete(self, req, id): raise exc.HTTPGone() @wsgi.expected_errors(410) def create(self, req, body): raise exc.HTTPGone() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/versions.py0000664000175000017500000000653500000000000022200 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
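# ---------------------------------------------------------------------------
# Editor's note (not part of the original tree): what network_dict() above
# returns for a neutron network.  The sample payload is invented; note that
# 'cidr' is deliberately the string "None", matching the historical
# nova-network response described in network_dict()'s own comment.
sample_neutron_network = {
    "id": "11111111-2222-3333-4444-555555555555",   # hypothetical UUID
    "name": "private",
    "subnets": ["66666666-7777-8888-9999-000000000000"],  # ignored by the view
}
# network_dict(sample_neutron_network) ==
#     {"id": "11111111-2222-3333-4444-555555555555",
#      "cidr": "None",
#      "label": "private"}
# ---------------------------------------------------------------------------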
from nova.api.openstack import api_version_request from nova.api.openstack.compute.views import versions as views_versions from nova.api.openstack import wsgi LINKS = { 'v2.0': { 'html': 'http://docs.openstack.org/' }, 'v2.1': { 'html': 'http://docs.openstack.org/' }, } VERSIONS = { "v2.0": { "id": "v2.0", "status": "SUPPORTED", "version": "", "min_version": "", "updated": "2011-01-21T11:33:21Z", "links": [ { "rel": "describedby", "type": "text/html", "href": LINKS['v2.0']['html'], }, ], "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.compute+json;version=2", } ], }, "v2.1": { "id": "v2.1", "status": "CURRENT", "version": api_version_request._MAX_API_VERSION, "min_version": api_version_request._MIN_API_VERSION, "updated": "2013-07-23T11:33:21Z", "links": [ { "rel": "describedby", "type": "text/html", "href": LINKS['v2.1']['html'], }, ], "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.compute+json;version=2.1", } ], } } class Versions(wsgi.Resource): # The root version API isn't under the microversion control. support_api_request_version = False def __init__(self): super(Versions, self).__init__(None) def index(self, req, body=None): """Return all versions.""" builder = views_versions.get_view_builder(req) return builder.build_versions(VERSIONS) @wsgi.response(300) def multi(self, req, body=None): """Return multiple choices.""" builder = views_versions.get_view_builder(req) return builder.build_choices(VERSIONS, req) def get_action_args(self, request_environment): """Parse dictionary created by routes library.""" args = {} if request_environment['PATH_INFO'] == '/': args['action'] = 'index' else: args['action'] = 'multi' return args class VersionsV2(wsgi.Resource): def __init__(self): super(VersionsV2, self).__init__(None) def index(self, req, body=None): builder = views_versions.get_view_builder(req) ver = 'v2.0' if req.is_legacy_v2() else 'v2.1' return builder.build_version(VERSIONS[ver]) def get_action_args(self, request_environment): """Parse dictionary created by routes library.""" return {'action': 'index'} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/versionsV21.py0000664000175000017500000000221500000000000022460 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
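# ---------------------------------------------------------------------------
# Editor's note (not part of the original tree): the unversioned Versions
# resource above routes purely on PATH_INFO.  A small sketch of that routing,
# assuming nova is importable; the request paths are illustrative.
from nova.api.openstack.compute import versions as compute_versions

root = compute_versions.Versions()
assert root.get_action_args({'PATH_INFO': '/'}) == {'action': 'index'}
assert root.get_action_args({'PATH_INFO': '/v2.1/servers'}) == {'action': 'multi'}
# 'index' renders build_versions(VERSIONS) (the v2.0/v2.1 entries with their
# status, version range and links); 'multi' renders a 300 Multiple Choices
# document via build_choices().
# ---------------------------------------------------------------------------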
import webob.exc from nova.api.openstack.compute import versions from nova.api.openstack.compute.views import versions as views_versions from nova.api.openstack import wsgi class VersionsController(wsgi.Controller): @wsgi.expected_errors(404) def show(self, req, id='v2.1'): builder = views_versions.get_view_builder(req) if req.is_legacy_v2(): id = 'v2.0' if id not in versions.VERSIONS: raise webob.exc.HTTPNotFound() return builder.build_version(versions.VERSIONS[id]) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2904704 nova-21.2.4/nova/api/openstack/compute/views/0000775000175000017500000000000000000000000021102 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/views/__init__.py0000664000175000017500000000000000000000000023201 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/views/addresses.py0000664000175000017500000000355200000000000023436 0ustar00zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import itertools from nova.api.openstack import common class ViewBuilder(common.ViewBuilder): """Models server addresses as a dictionary.""" _collection_name = "addresses" def basic(self, ip, extend_address=False): """Return a dictionary describing an IP address.""" address = { "version": ip["version"], "addr": ip["address"], } if extend_address: address.update({ "OS-EXT-IPS:type": ip["type"], "OS-EXT-IPS-MAC:mac_addr": ip['mac_address'], }) return address def show(self, network, label, extend_address=False): """Returns a dictionary describing a network.""" all_ips = itertools.chain(network["ips"], network["floating_ips"]) return {label: [self.basic(ip, extend_address) for ip in all_ips]} def index(self, networks, extend_address=False): """Return a dictionary describing a list of networks.""" addresses = collections.OrderedDict() for label, network in networks.items(): network_dict = self.show(network, label, extend_address) addresses[label] = network_dict[label] return dict(addresses=addresses) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/views/flavors.py0000664000175000017500000001210300000000000023125 0ustar00zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
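# ---------------------------------------------------------------------------
# Editor's note (not part of the original tree): driving the addresses
# ViewBuilder above with an invented per-network structure (the
# "ips"/"floating_ips" shape is what the builder receives from
# common.get_networks_for_instance()).
from nova.api.openstack.compute.views import addresses as views_addresses

networks = {
    "private": {
        "ips": [{"version": 4, "address": "10.0.0.3",
                 "type": "fixed", "mac_address": "fa:16:3e:00:00:01"}],
        "floating_ips": [{"version": 4, "address": "172.24.4.10",
                          "type": "floating",
                          "mac_address": "fa:16:3e:00:00:01"}],
    },
}
view = views_addresses.ViewBuilder().index(networks, extend_address=True)
# view["addresses"]["private"] ==
#     [{"version": 4, "addr": "10.0.0.3",
#       "OS-EXT-IPS:type": "fixed",
#       "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:00:00:01"},
#      {"version": 4, "addr": "172.24.4.10",
#       "OS-EXT-IPS:type": "floating",
#       "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:00:00:01"}]
# ---------------------------------------------------------------------------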
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.openstack import api_version_request from nova.api.openstack import common FLAVOR_DESCRIPTION_MICROVERSION = '2.55' FLAVOR_EXTRA_SPECS_MICROVERSION = '2.61' class ViewBuilder(common.ViewBuilder): _collection_name = "flavors" def basic(self, request, flavor, include_description=False, include_extra_specs=False): # include_extra_specs is placeholder param which is not used in # this method as basic() method is used by index() (GET /flavors) # which does not return those keys in response. flavor_dict = { "flavor": { "id": flavor["flavorid"], "name": flavor["name"], "links": self._get_links(request, flavor["flavorid"], self._collection_name), }, } if include_description: flavor_dict['flavor']['description'] = flavor.description return flavor_dict def show(self, request, flavor, include_description=False, include_extra_specs=False): flavor_dict = { "flavor": { "id": flavor["flavorid"], "name": flavor["name"], "ram": flavor["memory_mb"], "disk": flavor["root_gb"], "swap": flavor["swap"] or "", "OS-FLV-EXT-DATA:ephemeral": flavor["ephemeral_gb"], "OS-FLV-DISABLED:disabled": flavor["disabled"], "vcpus": flavor["vcpus"], "os-flavor-access:is_public": flavor['is_public'], "rxtx_factor": flavor['rxtx_factor'] or "", "links": self._get_links(request, flavor["flavorid"], self._collection_name), }, } if include_description: flavor_dict['flavor']['description'] = flavor.description if include_extra_specs: flavor_dict['flavor']['extra_specs'] = flavor.extra_specs if api_version_request.is_supported(request, '2.75'): flavor_dict['flavor']['swap'] = flavor["swap"] or 0 return flavor_dict def index(self, request, flavors): """Return the 'index' view of flavors.""" coll_name = self._collection_name include_description = api_version_request.is_supported( request, FLAVOR_DESCRIPTION_MICROVERSION) return self._list_view(self.basic, request, flavors, coll_name, include_description=include_description) def detail(self, request, flavors, include_extra_specs=False): """Return the 'detail' view of flavors.""" coll_name = self._collection_name + '/detail' include_description = api_version_request.is_supported( request, FLAVOR_DESCRIPTION_MICROVERSION) return self._list_view(self.show, request, flavors, coll_name, include_description=include_description, include_extra_specs=include_extra_specs) def _list_view(self, func, request, flavors, coll_name, include_description=False, include_extra_specs=False): """Provide a view for a list of flavors. :param func: Function used to format the flavor data :param request: API request :param flavors: List of flavors in dictionary format :param coll_name: Name of collection, used to generate the next link for a pagination query :param include_description: If the flavor.description should be included in the response dict. :param include_extra_specs: If the flavor.extra_specs should be included in the response dict. 
:returns: Flavor reply data in dictionary format """ flavor_list = [func(request, flavor, include_description, include_extra_specs)["flavor"] for flavor in flavors] flavors_links = self._get_collection_links(request, flavors, coll_name, "flavorid") flavors_dict = dict(flavors=flavor_list) if flavors_links: flavors_dict["flavors_links"] = flavors_links return flavors_dict ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/views/hypervisors.py0000664000175000017500000000202500000000000024050 0ustar00zuulzuul00000000000000# Copyright 2016 Kylin Cloud # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.openstack import common class ViewBuilder(common.ViewBuilder): _collection_name = "os-hypervisors" def get_links(self, request, hypervisors, detail=False): coll_name = (self._collection_name + '/detail' if detail else self._collection_name) return self._get_collection_links(request, hypervisors, coll_name, 'id') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/views/images.py0000664000175000017500000001377000000000000022731 0ustar00zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation # Copyright 2013 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
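# ---------------------------------------------------------------------------
# Editor's note (not part of the original tree): the single-flavor shape
# produced by the flavors ViewBuilder.show() above.  Values are invented; the
# keys and the microversion gates (description from 2.55, extra_specs from
# 2.61 when requested and allowed, integer swap from 2.75) come from the code
# above.
example_flavor_view = {
    "flavor": {
        "id": "1",
        "name": "m1.tiny",
        "ram": 512,
        "disk": 1,
        "swap": 0,                        # "" before 2.75 when no swap is set
        "OS-FLV-EXT-DATA:ephemeral": 0,
        "OS-FLV-DISABLED:disabled": False,
        "vcpus": 1,
        "os-flavor-access:is_public": True,
        "rxtx_factor": 1.0,
        "description": None,              # only at microversion >= 2.55
        "extra_specs": {},                # only at >= 2.61 when requested
        "links": [],                      # self/bookmark links in practice
    },
}
# GET /flavors and /flavors/detail paginate on "flavorid"; when a next page
# exists, the list view adds a "flavors_links" entry.
# ---------------------------------------------------------------------------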
from oslo_utils import strutils from nova.api.openstack import common from nova.image import glance from nova import utils class ViewBuilder(common.ViewBuilder): _collection_name = "images" def basic(self, request, image): """Return a dictionary with basic image attributes.""" return { "image": { "id": image.get("id"), "name": image.get("name"), "links": self._get_links(request, image["id"], self._collection_name), }, } def show(self, request, image): """Return a dictionary with image details.""" image_dict = { "id": image.get("id"), "name": image.get("name"), "minRam": int(image.get("min_ram") or 0), "minDisk": int(image.get("min_disk") or 0), "metadata": image.get("properties", {}), "created": self._format_date(image.get("created_at")), "updated": self._format_date(image.get("updated_at")), "status": self._get_status(image), "progress": self._get_progress(image), "OS-EXT-IMG-SIZE:size": image.get("size"), "links": self._get_links(request, image["id"], self._collection_name), } instance_uuid = image.get("properties", {}).get("instance_uuid") if instance_uuid is not None: server_ref = self._get_href_link(request, instance_uuid, 'servers') image_dict["server"] = { "id": instance_uuid, "links": [{ "rel": "self", "href": server_ref, }, { "rel": "bookmark", "href": self._get_bookmark_link(request, instance_uuid, 'servers'), }], } auto_disk_config = image_dict['metadata'].get("auto_disk_config", None) if auto_disk_config is not None: value = strutils.bool_from_string(auto_disk_config) image_dict["OS-DCF:diskConfig"] = ( 'AUTO' if value else 'MANUAL') return dict(image=image_dict) def detail(self, request, images): """Show a list of images with details.""" list_func = self.show coll_name = self._collection_name + '/detail' return self._list_view(list_func, request, images, coll_name) def index(self, request, images): """Show a list of images with basic attributes.""" list_func = self.basic coll_name = self._collection_name return self._list_view(list_func, request, images, coll_name) def _list_view(self, list_func, request, images, coll_name): """Provide a view for a list of images. 
:param list_func: Function used to format the image data :param request: API request :param images: List of images in dictionary format :param coll_name: Name of collection, used to generate the next link for a pagination query :returns: Image reply data in dictionary format """ image_list = [list_func(request, image)["image"] for image in images] images_links = self._get_collection_links(request, images, coll_name) images_dict = dict(images=image_list) if images_links: images_dict["images_links"] = images_links return images_dict def _get_links(self, request, identifier, collection_name): """Return a list of links for this image.""" return [{ "rel": "self", "href": self._get_href_link(request, identifier, collection_name), }, { "rel": "bookmark", "href": self._get_bookmark_link(request, identifier, collection_name), }, { "rel": "alternate", "type": "application/vnd.openstack.image", "href": self._get_alternate_link(request, identifier), }] def _get_alternate_link(self, request, identifier): """Create an alternate link for a specific image id.""" glance_url = glance.generate_glance_url( request.environ['nova.context']) glance_url = self._update_glance_link_prefix(glance_url) return '/'.join([glance_url, self._collection_name, str(identifier)]) @staticmethod def _format_date(dt): """Return standard format for a given datetime object.""" if dt is not None: return utils.isotime(dt) @staticmethod def _get_status(image): """Update the status field to standardize format.""" return { 'active': 'ACTIVE', 'queued': 'SAVING', 'saving': 'SAVING', 'deleted': 'DELETED', 'pending_delete': 'DELETED', 'killed': 'ERROR', }.get(image.get("status"), 'UNKNOWN') @staticmethod def _get_progress(image): return { "queued": 25, "saving": 50, "active": 100, }.get(image.get("status"), 0) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/views/instance_actions.py0000664000175000017500000000166600000000000025011 0ustar00zuulzuul00000000000000# Copyright 2017 Huawei Technologies Co.,LTD. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.openstack import common class ViewBuilder(common.ViewBuilder): def get_links(self, request, server_id, instance_actions): collection_name = 'servers/%s/os-instance-actions' % server_id return self._get_collection_links(request, instance_actions, collection_name, 'request_id') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/views/keypairs.py0000664000175000017500000000553700000000000023315 0ustar00zuulzuul00000000000000# Copyright 2016 Mirantis Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
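# ---------------------------------------------------------------------------
# Editor's note (not part of the original tree): a standalone sketch of the
# glance -> compute API status/progress mapping implemented by _get_status()
# and _get_progress() above.  The image record is invented.
GLANCE_TO_NOVA_STATUS = {
    'active': 'ACTIVE',
    'queued': 'SAVING',
    'saving': 'SAVING',
    'deleted': 'DELETED',
    'pending_delete': 'DELETED',
    'killed': 'ERROR',
}
GLANCE_TO_NOVA_PROGRESS = {'queued': 25, 'saving': 50, 'active': 100}

image = {"id": "hypothetical-image-id", "status": "saving"}
status = GLANCE_TO_NOVA_STATUS.get(image["status"], 'UNKNOWN')    # 'SAVING'
progress = GLANCE_TO_NOVA_PROGRESS.get(image["status"], 0)        # 50
# ---------------------------------------------------------------------------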
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.openstack import common class ViewBuilder(common.ViewBuilder): _collection_name = 'os-keypairs' # TODO(takashin): After v2 and v2.1 is no longer supported, # 'type' can always be included in the response. _index_params = ('name', 'public_key', 'fingerprint') _create_params = _index_params + ('user_id',) _show_params = _create_params + ('created_at', 'deleted', 'deleted_at', 'id', 'updated_at') _index_params_v2_2 = _index_params + ('type',) _show_params_v2_2 = _show_params + ('type',) def get_links(self, request, keypairs): return self._get_collection_links(request, keypairs, self._collection_name, 'name') # TODO(oomichi): It is necessary to filter a response of keypair with # _build_keypair() when v2.1+microversions for implementing consistent # behaviors in this keypair resource. @staticmethod def _build_keypair(keypair, attrs): body = {} for attr in attrs: body[attr] = keypair[attr] return body def create(self, keypair, private_key=False, key_type=False): params = [] if private_key: params.append('private_key') # TODO(takashin): After v2 and v2.1 is no longer supported, # 'type' can always be included in the response. if key_type: params.append('type') params.extend(self._create_params) return {'keypair': self._build_keypair(keypair, params)} def index(self, req, key_pairs, key_type=False, links=False): keypairs_list = [ {'keypair': self._build_keypair( key_pair, self._index_params_v2_2 if key_type else self._index_params)} for key_pair in key_pairs] keypairs_dict = {'keypairs': keypairs_list} if links: keypairs_links = self.get_links(req, key_pairs) if keypairs_links: keypairs_dict['keypairs_links'] = keypairs_links return keypairs_dict def show(self, keypair, key_type=False): return {'keypair': self._build_keypair( keypair, self._show_params_v2_2 if key_type else self._show_params)} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/views/limits.py0000664000175000017500000000667700000000000022775 0ustar00zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
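# ---------------------------------------------------------------------------
# Editor's note (not part of the original tree): the keypair fields the view
# builder above emits per call, taken from its *_params tuples.  From
# microversion 2.2 on (key_type=True) 'type' is appended to each set, and
# create() additionally returns 'private_key' when nova generated the key.
keypair_index_fields = ('name', 'public_key', 'fingerprint')
keypair_create_fields = keypair_index_fields + ('user_id',)
keypair_show_fields = keypair_create_fields + (
    'created_at', 'deleted', 'deleted_at', 'id', 'updated_at')
# ---------------------------------------------------------------------------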
class ViewBuilder(object): """OpenStack API base limits view builder.""" limit_names = {} def __init__(self): self.limit_names = { "ram": ["maxTotalRAMSize"], "instances": ["maxTotalInstances"], "cores": ["maxTotalCores"], "key_pairs": ["maxTotalKeypairs"], "floating_ips": ["maxTotalFloatingIps"], "metadata_items": ["maxServerMeta", "maxImageMeta"], "injected_files": ["maxPersonality"], "injected_file_content_bytes": ["maxPersonalitySize"], "security_groups": ["maxSecurityGroups"], "security_group_rules": ["maxSecurityGroupRules"], "server_groups": ["maxServerGroups"], "server_group_members": ["maxServerGroupMembers"] } def build(self, request, quotas, filtered_limits=None, max_image_meta=True): filtered_limits = filtered_limits or [] absolute_limits = self._build_absolute_limits( quotas, filtered_limits, max_image_meta=max_image_meta) used_limits = self._build_used_limits( request, quotas, filtered_limits) absolute_limits.update(used_limits) output = { "limits": { "rate": [], "absolute": absolute_limits, }, } return output def _build_absolute_limits(self, quotas, filtered_limits=None, max_image_meta=True): """Builder for absolute limits absolute_limits should be given as a dict of limits. For example: {"ram": 512, "gigabytes": 1024}. filtered_limits is an optional list of limits to exclude from the result set. """ absolute_limits = {k: v['limit'] for k, v in quotas.items()} limits = {} for name, value in absolute_limits.items(): if (name in self.limit_names and value is not None and name not in filtered_limits): for limit_name in self.limit_names[name]: if not max_image_meta and limit_name == "maxImageMeta": continue limits[limit_name] = value return limits def _build_used_limits(self, request, quotas, filtered_limits): quota_map = { 'totalRAMUsed': 'ram', 'totalCoresUsed': 'cores', 'totalInstancesUsed': 'instances', 'totalFloatingIpsUsed': 'floating_ips', 'totalSecurityGroupsUsed': 'security_groups', 'totalServerGroupsUsed': 'server_groups', } used_limits = {} for display_name, key in quota_map.items(): if (key in quotas and key not in filtered_limits): used_limits[display_name] = quotas[key]['in_use'] return used_limits ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/views/migrations.py0000664000175000017500000000160000000000000023625 0ustar00zuulzuul00000000000000# Copyright 2017 Huawei Technologies Co.,LTD. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
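# ---------------------------------------------------------------------------
# Editor's note (not part of the original tree): exercising the limits
# ViewBuilder above with an invented quotas dict.  Each entry carries 'limit'
# and 'in_use', which is the shape the builder reads; the request argument is
# not used by build(), so None is enough for this sketch.
from nova.api.openstack.compute.views import limits as views_limits

quotas = {
    "ram": {"limit": 51200, "in_use": 1024},
    "instances": {"limit": 10, "in_use": 1},
    "cores": {"limit": 20, "in_use": 2},
}
body = views_limits.ViewBuilder().build(None, quotas)
# body == {"limits": {"rate": [],
#                     "absolute": {"maxTotalRAMSize": 51200,
#                                  "maxTotalInstances": 10,
#                                  "maxTotalCores": 20,
#                                  "totalRAMUsed": 1024,
#                                  "totalInstancesUsed": 1,
#                                  "totalCoresUsed": 2}}}
# ---------------------------------------------------------------------------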
from nova.api.openstack import common class ViewBuilder(common.ViewBuilder): _collection_name = "os-migrations" def get_links(self, request, migrations): return self._get_collection_links(request, migrations, self._collection_name, 'uuid') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/views/server_diagnostics.py0000664000175000017500000000500000000000000025344 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.openstack import common INSTANCE_DIAGNOSTICS_PRIMITIVE_FIELDS = ( 'state', 'driver', 'hypervisor', 'hypervisor_os', 'uptime', 'config_drive', 'num_cpus', 'num_nics', 'num_disks' ) INSTANCE_DIAGNOSTICS_LIST_FIELDS = { 'disk_details': ('read_bytes', 'read_requests', 'write_bytes', 'write_requests', 'errors_count'), 'cpu_details': ('id', 'time', 'utilisation'), 'nic_details': ('mac_address', 'rx_octets', 'rx_errors', 'rx_drop', 'rx_packets', 'rx_rate', 'tx_octets', 'tx_errors', 'tx_drop', 'tx_packets', 'tx_rate') } INSTANCE_DIAGNOSTICS_OBJECT_FIELDS = {'memory_details': ('maximum', 'used')} class ViewBuilder(common.ViewBuilder): @staticmethod def _get_obj_field(obj, field): if obj and obj.obj_attr_is_set(field): return getattr(obj, field) return None def instance_diagnostics(self, diagnostics): """Return a dictionary with instance diagnostics.""" diagnostics_dict = {} for field in INSTANCE_DIAGNOSTICS_PRIMITIVE_FIELDS: diagnostics_dict[field] = self._get_obj_field(diagnostics, field) for list_field in INSTANCE_DIAGNOSTICS_LIST_FIELDS: diagnostics_dict[list_field] = [] list_obj = getattr(diagnostics, list_field) for obj in list_obj: obj_dict = {} for field in INSTANCE_DIAGNOSTICS_LIST_FIELDS[list_field]: obj_dict[field] = self._get_obj_field(obj, field) diagnostics_dict[list_field].append(obj_dict) for obj_field in INSTANCE_DIAGNOSTICS_OBJECT_FIELDS: diagnostics_dict[obj_field] = {} obj = self._get_obj_field(diagnostics, obj_field) for field in INSTANCE_DIAGNOSTICS_OBJECT_FIELDS[obj_field]: diagnostics_dict[obj_field][field] = self._get_obj_field( obj, field) return diagnostics_dict ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/views/server_tags.py0000664000175000017500000000220400000000000023776 0ustar00zuulzuul00000000000000# Copyright 2016 Mirantis Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
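# ---------------------------------------------------------------------------
# Editor's note (not part of the original tree): the flattened dictionary
# instance_diagnostics() above builds from a Diagnostics versioned object.
# All values are invented; the keys follow the INSTANCE_DIAGNOSTICS_*_FIELDS
# constants, and unset object fields come through as None.
example_diagnostics_view = {
    "state": "running", "driver": "libvirt", "hypervisor": "kvm",
    "hypervisor_os": "linux", "uptime": 3600, "config_drive": True,
    "num_cpus": 1, "num_nics": 1, "num_disks": 1,
    "disk_details": [{"read_bytes": 0, "read_requests": 0, "write_bytes": 0,
                      "write_requests": 0, "errors_count": 0}],
    "cpu_details": [{"id": 0, "time": 17300000000, "utilisation": 15}],
    "nic_details": [{"mac_address": "fa:16:3e:00:00:01", "rx_octets": 0,
                     "rx_errors": 0, "rx_drop": 0, "rx_packets": 0,
                     "rx_rate": 0, "tx_octets": 0, "tx_errors": 0,
                     "tx_drop": 0, "tx_packets": 0, "tx_rate": 0}],
    "memory_details": {"maximum": 512, "used": 256},
}
# ---------------------------------------------------------------------------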
from nova.api.openstack import common from nova.api.openstack.compute.views import servers class ViewBuilder(common.ViewBuilder): _collection_name = "tags" def __init__(self): super(ViewBuilder, self).__init__() self._server_builder = servers.ViewBuilder() def get_location(self, request, server_id, tag_name): server_location = self._server_builder._get_href_link( request, server_id, "servers") return "%s/%s/%s" % (server_location, self._collection_name, tag_name) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/compute/views/servers.py0000664000175000017500000010473300000000000023155 0ustar00zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation # Copyright 2011 Piston Cloud Computing, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from oslo_serialization import jsonutils from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.views import addresses as views_addresses from nova.api.openstack.compute.views import flavors as views_flavors from nova.api.openstack.compute.views import images as views_images from nova import availability_zones as avail_zone from nova.compute import api as compute from nova.compute import vm_states from nova import context as nova_context from nova import exception from nova.network import security_group_api from nova import objects from nova.objects import fields from nova.objects import virtual_interface from nova.policies import extended_server_attributes as esa_policies from nova.policies import flavor_extra_specs as fes_policies from nova.policies import servers as servers_policies from nova import utils LOG = logging.getLogger(__name__) class ViewBuilder(common.ViewBuilder): """Model a server API response as a python dictionary.""" _collection_name = "servers" _progress_statuses = ( "ACTIVE", "BUILD", "REBUILD", "RESIZE", "VERIFY_RESIZE", "MIGRATING", ) _fault_statuses = ( "ERROR", "DELETED" ) # These are the lazy-loadable instance attributes required for showing # details about an instance. Add to this list as new things need to be # shown. _show_expected_attrs = ['flavor', 'info_cache', 'metadata'] def __init__(self): """Initialize view builder.""" super(ViewBuilder, self).__init__() self._address_builder = views_addresses.ViewBuilder() self._image_builder = views_images.ViewBuilder() self._flavor_builder = views_flavors.ViewBuilder() self.compute_api = compute.API() def create(self, request, instance): """View that should be returned when an instance is created.""" server = { "server": { "id": instance["uuid"], "links": self._get_links(request, instance["uuid"], self._collection_name), # NOTE(sdague): historically this was the # os-disk-config extension, but now that extensions # are gone, we merge these attributes here. 
"OS-DCF:diskConfig": ( 'AUTO' if instance.get('auto_disk_config') else 'MANUAL'), }, } self._add_security_grps(request, [server["server"]], [instance], create_request=True) return server def basic(self, request, instance, show_extra_specs=False, show_extended_attr=None, show_host_status=None, show_sec_grp=None, bdms=None, cell_down_support=False, show_user_data=False): """Generic, non-detailed view of an instance.""" if cell_down_support and 'display_name' not in instance: # NOTE(tssurya): If the microversion is >= 2.69, this boolean will # be true in which case we check if there are instances from down # cells (by checking if their objects have missing keys like # `display_name`) and return partial constructs based on the # information available from the nova_api database. return { "server": { "id": instance.uuid, "status": "UNKNOWN", "links": self._get_links(request, instance.uuid, self._collection_name), }, } return { "server": { "id": instance["uuid"], "name": instance["display_name"], "links": self._get_links(request, instance["uuid"], self._collection_name), }, } def get_show_expected_attrs(self, expected_attrs=None): """Returns a list of lazy-loadable expected attributes used by show This should be used when getting the instances from the database so that the necessary attributes are pre-loaded before needing to build the show response where lazy-loading can fail if an instance was deleted. :param list expected_attrs: The list of expected attributes that will be requested in addition to what this view builder requires. This method will merge the two lists and return what should be ultimately used when getting an instance from the database. :returns: merged and sorted list of expected attributes """ if expected_attrs is None: expected_attrs = [] # NOTE(mriedem): We sort the list so we can have predictable test # results. return sorted(list(set(self._show_expected_attrs + expected_attrs))) def _show_from_down_cell(self, request, instance, show_extra_specs, show_server_groups): """Function that constructs the partial response for the instance.""" ret = { "server": { "id": instance.uuid, "status": "UNKNOWN", "tenant_id": instance.project_id, "created": utils.isotime(instance.created_at), "links": self._get_links( request, instance.uuid, self._collection_name), }, } if 'flavor' in instance: # If the key 'flavor' is present for an instance from a down cell # it means that the request is ``GET /servers/{server_id}`` and # thus we include the information from the request_spec of the # instance like its flavor, image, avz, and user_id in addition to # the basic information from its instance_mapping. # If 'flavor' key is not present for an instance from a down cell # down cell it means the request is ``GET /servers/detail`` and we # do not expose the flavor in the response when listing servers # with details for performance reasons of fetching it from the # request specs table for the whole list of instances. ret["server"]["image"] = self._get_image(request, instance) ret["server"]["flavor"] = self._get_flavor(request, instance, show_extra_specs) # in case availability zone was not requested by the user during # boot time, return UNKNOWN. avz = instance.availability_zone or "UNKNOWN" ret["server"]["OS-EXT-AZ:availability_zone"] = avz ret["server"]["OS-EXT-STS:power_state"] = instance.power_state # in case its an old request spec which doesn't have the user_id # data migrated, return UNKNOWN. 
ret["server"]["user_id"] = instance.user_id or "UNKNOWN" if show_server_groups: context = request.environ['nova.context'] ret['server']['server_groups'] = self._get_server_groups( context, instance) return ret @staticmethod def _get_host_status_unknown_only(context, instance=None): """We will use the unknown_only variable to tell us what host status we can show, if any: * unknown_only = False means we can show any host status. * unknown_only = True means that we can only show host status: UNKNOWN. If the host status is anything other than UNKNOWN, we will not include the host_status field in the response. * unknown_only = None means we cannot show host status at all and we will not include the host_status field in the response. """ unknown_only = None # Check show:host_status policy first because if it passes, we know we # can show any host status and need not check the more restrictive # show:host_status:unknown-only policy. # Keeping target as None (which means policy will default these target # to context.project_id) for now which is case of 'detail' API which # policy is default to system and project reader. target = None if instance is not None: target = {'project_id': instance.project_id} if context.can( servers_policies.SERVERS % 'show:host_status', fatal=False, target=target): unknown_only = False # If we are not allowed to show any/all host status, check if we can at # least show only the host status: UNKNOWN. elif context.can( servers_policies.SERVERS % 'show:host_status:unknown-only', fatal=False, target=target): unknown_only = True return unknown_only def show(self, request, instance, extend_address=True, show_extra_specs=None, show_AZ=True, show_config_drive=True, show_extended_attr=None, show_host_status=None, show_keypair=True, show_srv_usg=True, show_sec_grp=True, show_extended_status=True, show_extended_volumes=True, bdms=None, cell_down_support=False, show_server_groups=False, show_user_data=True): """Detailed view of a single instance.""" if show_extra_specs is None: # detail will pre-calculate this for us. If we're doing show, # then figure it out here. show_extra_specs = False if api_version_request.is_supported(request, min_version='2.47'): context = request.environ['nova.context'] show_extra_specs = context.can( fes_policies.POLICY_ROOT % 'index', fatal=False) if cell_down_support and 'display_name' not in instance: # NOTE(tssurya): If the microversion is >= 2.69, this boolean will # be true in which case we check if there are instances from down # cells (by checking if their objects have missing keys like # `display_name`) and return partial constructs based on the # information available from the nova_api database. 
return self._show_from_down_cell( request, instance, show_extra_specs, show_server_groups) ip_v4 = instance.get('access_ip_v4') ip_v6 = instance.get('access_ip_v6') server = { "server": { "id": instance["uuid"], "name": instance["display_name"], "status": self._get_vm_status(instance), "tenant_id": instance.get("project_id") or "", "user_id": instance.get("user_id") or "", "metadata": self._get_metadata(instance), "hostId": self._get_host_id(instance), "image": self._get_image(request, instance), "flavor": self._get_flavor(request, instance, show_extra_specs), "created": utils.isotime(instance["created_at"]), "updated": utils.isotime(instance["updated_at"]), "addresses": self._get_addresses(request, instance, extend_address), "accessIPv4": str(ip_v4) if ip_v4 is not None else '', "accessIPv6": str(ip_v6) if ip_v6 is not None else '', "links": self._get_links(request, instance["uuid"], self._collection_name), # NOTE(sdague): historically this was the # os-disk-config extension, but now that extensions # are gone, we merge these attributes here. "OS-DCF:diskConfig": ( 'AUTO' if instance.get('auto_disk_config') else 'MANUAL'), }, } if server["server"]["status"] in self._fault_statuses: _inst_fault = self._get_fault(request, instance) if _inst_fault: server['server']['fault'] = _inst_fault if server["server"]["status"] in self._progress_statuses: server["server"]["progress"] = instance.get("progress", 0) context = request.environ['nova.context'] if show_AZ: az = avail_zone.get_instance_availability_zone(context, instance) # NOTE(mriedem): The OS-EXT-AZ prefix should not be used for new # attributes after v2.1. They are only in v2.1 for backward compat # with v2.0. server["server"]["OS-EXT-AZ:availability_zone"] = az or '' if show_config_drive: server["server"]["config_drive"] = instance["config_drive"] if show_keypair: server["server"]["key_name"] = instance["key_name"] if show_srv_usg: for k in ['launched_at', 'terminated_at']: key = "OS-SRV-USG:" + k # NOTE(danms): Historically, this timestamp has been generated # merely by grabbing str(datetime) of a TZ-naive object. The # only way we can keep that with instance objects is to strip # the tzinfo from the stamp and str() it. server["server"][key] = (instance[k].replace(tzinfo=None) if instance[k] else None) if show_sec_grp: self._add_security_grps(request, [server["server"]], [instance]) if show_extended_attr is None: show_extended_attr = context.can( esa_policies.BASE_POLICY_NAME, fatal=False, target={'project_id': instance.project_id}) if show_extended_attr: properties = ['host', 'name', 'node'] if api_version_request.is_supported(request, min_version='2.3'): # NOTE(mriedem): These will use the OS-EXT-SRV-ATTR prefix # below and that's OK for microversion 2.3 which is being # compatible with v2.0 for the ec2 API split out from Nova. # After this, however, new microversions should not be using # the OS-EXT-SRV-ATTR prefix. properties += ['reservation_id', 'launch_index', 'hostname', 'kernel_id', 'ramdisk_id', 'root_device_name'] # NOTE(gmann): Since microversion 2.75, PUT and Rebuild # response include all the server attributes including these # extended attributes also. But microversion 2.57 already # adding the 'user_data' in Rebuild response in API method. # so we will skip adding the user data attribute for rebuild # case. 'show_user_data' is false only in case of rebuild. 
if show_user_data: properties += ['user_data'] for attr in properties: if attr == 'name': key = "OS-EXT-SRV-ATTR:instance_%s" % attr elif attr == 'node': key = "OS-EXT-SRV-ATTR:hypervisor_hostname" else: # NOTE(mriedem): Nothing after microversion 2.3 should use # the OS-EXT-SRV-ATTR prefix for the attribute key name. key = "OS-EXT-SRV-ATTR:%s" % attr server["server"][key] = getattr(instance, attr) if show_extended_status: # NOTE(gmann): Removed 'locked_by' from extended status # to make it same as V2. If needed it can be added with # microversion. for state in ['task_state', 'vm_state', 'power_state']: # NOTE(mriedem): The OS-EXT-STS prefix should not be used for # new attributes after v2.1. They are only in v2.1 for backward # compat with v2.0. key = "%s:%s" % ('OS-EXT-STS', state) server["server"][key] = instance[state] if show_extended_volumes: # NOTE(mriedem): The os-extended-volumes prefix should not be used # for new attributes after v2.1. They are only in v2.1 for backward # compat with v2.0. add_delete_on_termination = api_version_request.is_supported( request, min_version='2.3') if bdms is None: bdms = objects.BlockDeviceMappingList.bdms_by_instance_uuid( context, [instance["uuid"]]) self._add_volumes_attachments(server["server"], bdms, add_delete_on_termination) if (api_version_request.is_supported(request, min_version='2.16')): if show_host_status is None: unknown_only = self._get_host_status_unknown_only( context, instance) # If we're not allowed by policy to show host status at all, # don't bother requesting instance host status from the compute # API. if unknown_only is not None: host_status = self.compute_api.get_instance_host_status( instance) # If we are allowed to show host status of some kind, set # the host status field only if: # * unknown_only = False, meaning we can show any status # OR # * if unknown_only = True and host_status == UNKNOWN if (not unknown_only or host_status == fields.HostStatus.UNKNOWN): server["server"]['host_status'] = host_status if api_version_request.is_supported(request, min_version="2.9"): server["server"]["locked"] = (True if instance["locked_by"] else False) if api_version_request.is_supported(request, min_version="2.73"): server["server"]["locked_reason"] = (instance.system_metadata.get( "locked_reason")) if api_version_request.is_supported(request, min_version="2.19"): server["server"]["description"] = instance.get( "display_description") if api_version_request.is_supported(request, min_version="2.26"): server["server"]["tags"] = [t.tag for t in instance.tags] if api_version_request.is_supported(request, min_version="2.63"): trusted_certs = None if instance.trusted_certs: trusted_certs = instance.trusted_certs.ids server["server"]["trusted_image_certificates"] = trusted_certs if show_server_groups: server['server']['server_groups'] = self._get_server_groups( context, instance) return server def index(self, request, instances, cell_down_support=False): """Show a list of servers without many details.""" coll_name = self._collection_name return self._list_view(self.basic, request, instances, coll_name, False, cell_down_support=cell_down_support) def detail(self, request, instances, cell_down_support=False): """Detailed view of a list of instance.""" coll_name = self._collection_name + '/detail' context = request.environ['nova.context'] if api_version_request.is_supported(request, min_version='2.47'): # Determine if we should show extra_specs in the inlined flavor # once before we iterate the list of instances show_extra_specs = 
context.can(fes_policies.POLICY_ROOT % 'index', fatal=False) else: show_extra_specs = False show_extended_attr = context.can( esa_policies.BASE_POLICY_NAME, fatal=False) instance_uuids = [inst['uuid'] for inst in instances] bdms = self._get_instance_bdms_in_multiple_cells(context, instance_uuids) # NOTE(gmann): pass show_sec_grp=False in _list_view() because # security groups for detail method will be added by separate # call to self._add_security_grps by passing the all servers # together. That help to avoid multiple neutron call for each server. servers_dict = self._list_view(self.show, request, instances, coll_name, show_extra_specs, show_extended_attr=show_extended_attr, # We process host_status in aggregate. show_host_status=False, show_sec_grp=False, bdms=bdms, cell_down_support=cell_down_support) if api_version_request.is_supported(request, min_version='2.16'): unknown_only = self._get_host_status_unknown_only(context) # If we're not allowed by policy to show host status at all, don't # bother requesting instance host status from the compute API. if unknown_only is not None: self._add_host_status(list(servers_dict["servers"]), instances, unknown_only=unknown_only) self._add_security_grps(request, list(servers_dict["servers"]), instances) return servers_dict def _list_view(self, func, request, servers, coll_name, show_extra_specs, show_extended_attr=None, show_host_status=None, show_sec_grp=False, bdms=None, cell_down_support=False): """Provide a view for a list of servers. :param func: Function used to format the server data :param request: API request :param servers: List of servers in dictionary format :param coll_name: Name of collection, used to generate the next link for a pagination query :param show_extended_attr: If the server extended attributes should be included in the response dict. :param show_host_status: If the host status should be included in the response dict. :param show_sec_grp: If the security group should be included in the response dict. :param bdms: Instances bdms info from multiple cells. :param cell_down_support: True if the API (and caller) support returning a minimal instance construct if the relevant cell is down. :returns: Server data in dictionary format """ server_list = [func(request, server, show_extra_specs=show_extra_specs, show_extended_attr=show_extended_attr, show_host_status=show_host_status, show_sec_grp=show_sec_grp, bdms=bdms, cell_down_support=cell_down_support)["server"] for server in servers # Filter out the fake marker instance created by the # fill_virtual_interface_list online data migration. if server.uuid != virtual_interface.FAKE_UUID] servers_links = self._get_collection_links(request, servers, coll_name) servers_dict = dict(servers=server_list) if servers_links: servers_dict["servers_links"] = servers_links return servers_dict @staticmethod def _get_metadata(instance): return instance.metadata or {} @staticmethod def _get_vm_status(instance): # If the instance is deleted the vm and task states don't really matter if instance.get("deleted"): return "DELETED" return common.status_from_state(instance.get("vm_state"), instance.get("task_state")) @staticmethod def _get_host_id(instance): host = instance.get("host") project = str(instance.get("project_id")) return utils.generate_hostid(host, project) def _get_addresses(self, request, instance, extend_address=False): # Hide server addresses while the server is building. 
if instance.vm_state == vm_states.BUILDING: return {} context = request.environ["nova.context"] networks = common.get_networks_for_instance(context, instance) return self._address_builder.index(networks, extend_address)["addresses"] def _get_image(self, request, instance): image_ref = instance["image_ref"] if image_ref: image_id = str(common.get_id_from_href(image_ref)) bookmark = self._image_builder._get_bookmark_link(request, image_id, "images") return { "id": image_id, "links": [{ "rel": "bookmark", "href": bookmark, }], } else: return "" def _get_flavor_dict(self, request, instance_type, show_extra_specs): flavordict = { "vcpus": instance_type.vcpus, "ram": instance_type.memory_mb, "disk": instance_type.root_gb, "ephemeral": instance_type.ephemeral_gb, "swap": instance_type.swap, "original_name": instance_type.name } if show_extra_specs: flavordict['extra_specs'] = instance_type.extra_specs return flavordict def _get_flavor(self, request, instance, show_extra_specs): instance_type = instance.get_flavor() if not instance_type: LOG.warning("Instance has had its instance_type removed " "from the DB", instance=instance) return {} if api_version_request.is_supported(request, min_version="2.47"): return self._get_flavor_dict(request, instance_type, show_extra_specs) flavor_id = instance_type["flavorid"] flavor_bookmark = self._flavor_builder._get_bookmark_link(request, flavor_id, "flavors") return { "id": str(flavor_id), "links": [{ "rel": "bookmark", "href": flavor_bookmark, }], } def _load_fault(self, request, instance): try: mapping = objects.InstanceMapping.get_by_instance_uuid( request.environ['nova.context'], instance.uuid) if mapping.cell_mapping is not None: with nova_context.target_cell(instance._context, mapping.cell_mapping): return instance.fault except exception.InstanceMappingNotFound: pass # NOTE(danms): No instance mapping at all, or a mapping with no cell, # which means a legacy environment or instance. return instance.fault def _get_fault(self, request, instance): if 'fault' in instance: fault = instance.fault else: fault = self._load_fault(request, instance) if not fault: return None fault_dict = { "code": fault["code"], "created": utils.isotime(fault["created_at"]), "message": fault["message"], } if fault.get('details', None): is_admin = False context = request.environ["nova.context"] if context: is_admin = getattr(context, 'is_admin', False) if is_admin or fault['code'] != 500: fault_dict['details'] = fault["details"] return fault_dict def _add_host_status(self, servers, instances, unknown_only=False): """Adds the ``host_status`` field to the list of servers This method takes care to filter instances from down cells since they do not have a host set and as such we cannot determine the host status. :param servers: list of detailed server dicts for the API response body; this list is modified by reference by updating the server dicts within the list :param instances: list of Instance objects :param unknown_only: whether to show only UNKNOWN host status """ # Filter out instances from down cells which do not have a host field. instances = [instance for instance in instances if 'host' in instance] # Get the dict, keyed by instance.uuid, of host status values. host_statuses = self.compute_api.get_instances_host_statuses(instances) for server in servers: # Filter out anything that is not in the resulting dict because # we had to filter the list of instances above for down cells. 
if server['id'] in host_statuses: host_status = host_statuses[server['id']] if unknown_only and host_status != fields.HostStatus.UNKNOWN: # Filter servers that are not allowed by policy to see # host_status values other than UNKNOWN. continue server['host_status'] = host_status def _add_security_grps(self, req, servers, instances, create_request=False): if not len(servers): return # If request is a POST create server we get the security groups # intended for an instance from the request. This is necessary because # the requested security groups for the instance have not yet been sent # to neutron. # Starting from microversion 2.75, security groups is returned in # PUT and POST Rebuild response also. if not create_request: context = req.environ['nova.context'] sg_instance_bindings = ( security_group_api.get_instances_security_groups_bindings( context, servers)) for server in servers: groups = sg_instance_bindings.get(server['id']) if groups: server['security_groups'] = groups # This section is for POST create server request. There can be # only one security group for POST create server request. else: # try converting to json req_obj = jsonutils.loads(req.body) # Add security group to server, if no security group was in # request add default since that is the group it is part of servers[0]['security_groups'] = req_obj['server'].get( 'security_groups', [{'name': 'default'}]) @staticmethod def _get_instance_bdms_in_multiple_cells(ctxt, instance_uuids): inst_maps = objects.InstanceMappingList.get_by_instance_uuids( ctxt, instance_uuids) cell_mappings = {} for inst_map in inst_maps: if (inst_map.cell_mapping is not None and inst_map.cell_mapping.uuid not in cell_mappings): cell_mappings.update( {inst_map.cell_mapping.uuid: inst_map.cell_mapping}) bdms = {} results = nova_context.scatter_gather_cells( ctxt, cell_mappings.values(), nova_context.CELL_TIMEOUT, objects.BlockDeviceMappingList.bdms_by_instance_uuid, instance_uuids) for cell_uuid, result in results.items(): if isinstance(result, Exception): LOG.warning('Failed to get block device mappings for cell %s', cell_uuid) elif result is nova_context.did_not_respond_sentinel: LOG.warning('Timeout getting block device mappings for cell ' '%s', cell_uuid) else: bdms.update(result) return bdms def _add_volumes_attachments(self, server, bdms, add_delete_on_termination): # server['id'] is guaranteed to be in the cache due to # the core API adding it in the 'detail' or 'show' method. # If that instance has since been deleted, it won't be in the # 'bdms' dictionary though, so use 'get' to avoid KeyErrors. instance_bdms = bdms.get(server['id'], []) volumes_attached = [] for bdm in instance_bdms: if bdm.get('volume_id'): volume_attached = {'id': bdm['volume_id']} if add_delete_on_termination: volume_attached['delete_on_termination'] = ( bdm['delete_on_termination']) volumes_attached.append(volume_attached) # NOTE(mriedem): The os-extended-volumes prefix should not be used for # new attributes after v2.1. They are only in v2.1 for backward compat # with v2.0. 
key = "os-extended-volumes:volumes_attached" server[key] = volumes_attached @staticmethod def _get_server_groups(context, instance): try: sg = objects.InstanceGroup.get_by_instance_uuid(context, instance.uuid) return [sg.uuid] except exception.InstanceGroupNotFound: return [] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/views/usages.py0000664000175000017500000000205700000000000022747 0ustar00zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.openstack import common class ViewBuilder(common.ViewBuilder): _collection_name = "os-simple-tenant-usage" def get_links(self, request, server_usages, tenant_id=None): coll_name = self._collection_name if tenant_id: coll_name = self._collection_name + '/{}'.format(tenant_id) return self._get_collection_links( request, server_usages, coll_name, 'instance_id') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/views/versions.py0000664000175000017500000000573600000000000023337 0ustar00zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import copy

from nova.api.openstack import common


def get_view_builder(req):
    base_url = req.application_url
    return ViewBuilder(base_url)


class ViewBuilder(common.ViewBuilder):

    def __init__(self, base_url):
        """:param base_url: url of the root wsgi application."""
        self.prefix = self._update_compute_link_prefix(base_url)
        self.base_url = base_url

    def build_choices(self, VERSIONS, req):
        version_objs = []
        for version in sorted(VERSIONS):
            version = VERSIONS[version]
            version_objs.append({
                "id": version['id'],
                "status": version['status'],
                "links": [
                    {
                        "rel": "self",
                        "href": self.generate_href(version['id'], req.path),
                    },
                ],
                "media-types": version['media-types'],
            })
        return dict(choices=version_objs)

    def build_versions(self, versions):
        version_objs = []
        for version in sorted(versions.keys()):
            version = versions[version]
            version_objs.append({
                "id": version['id'],
                "status": version['status'],
                "version": version['version'],
                "min_version": version['min_version'],
                "updated": version['updated'],
                "links": self._build_links(version),
            })
        return dict(versions=version_objs)

    def build_version(self, version):
        reval = copy.deepcopy(version)
        reval['links'].insert(0, {
            "rel": "self",
            "href": self.prefix.rstrip('/') + '/',
        })
        return dict(version=reval)

    def _build_links(self, version_data):
        """Generate a container of links that refer to the provided
        version."""
        href = self.generate_href(version_data['id'])
        links = [
            {
                "rel": "self",
                "href": href,
            },
        ]
        return links

    def generate_href(self, version, path=None):
        """Create an url that refers to a specific version_number."""
        if version.find('v2.1') == 0:
            version_number = 'v2.1'
        else:
            version_number = 'v2'
        path = path or ''
        return common.url_join(self.prefix, version_number, path)

nova-21.2.4/nova/api/openstack/compute/virtual_interfaces.py

# Copyright (C) 2011 Midokura KK
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""The virtual interfaces extension."""

from webob import exc

from nova.api.openstack import wsgi


class ServerVirtualInterfaceController(wsgi.Controller):

    @wsgi.expected_errors((410))
    def index(self, req, server_id):
        raise exc.HTTPGone()

nova-21.2.4/nova/api/openstack/compute/volumes.py

# Copyright 2011 Justin Santa Barbara
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the # License for the specific language governing permissions and limitations # under the License. """The volumes extension.""" from oslo_utils import strutils from webob import exc from nova.api.openstack import api_version_request from nova.api.openstack.api_version_request \ import MAX_PROXY_API_SUPPORT_VERSION from nova.api.openstack import common from nova.api.openstack.compute.schemas import volumes as volumes_schema from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute from nova.compute import vm_states from nova import exception from nova.i18n import _ from nova import objects from nova.policies import volumes as vol_policies from nova.policies import volumes_attachments as va_policies from nova.volume import cinder def _translate_volume_detail_view(context, vol): """Maps keys for volumes details view.""" d = _translate_volume_summary_view(context, vol) # No additional data / lookups at the moment return d def _translate_volume_summary_view(context, vol): """Maps keys for volumes summary view.""" d = {} d['id'] = vol['id'] d['status'] = vol['status'] d['size'] = vol['size'] d['availabilityZone'] = vol['availability_zone'] d['createdAt'] = vol['created_at'] if vol['attach_status'] == 'attached': # NOTE(ildikov): The attachments field in the volume info that # Cinder sends is converted to an OrderedDict with the # instance_uuid as key to make it easier for the multiattach # feature to check the required information. Multiattach will # be enable in the Nova API in Newton. # The format looks like the following: # attachments = {'instance_uuid': { # 'attachment_id': 'attachment_uuid', # 'mountpoint': '/dev/sda/ # } # } attachment = list(vol['attachments'].items())[0] d['attachments'] = [_translate_attachment_summary_view(vol['id'], attachment[0], attachment[1].get('mountpoint'))] else: d['attachments'] = [{}] d['displayName'] = vol['display_name'] d['displayDescription'] = vol['display_description'] if vol['volume_type_id'] and vol.get('volume_type'): d['volumeType'] = vol['volume_type']['name'] else: d['volumeType'] = vol['volume_type_id'] d['snapshotId'] = vol['snapshot_id'] if vol.get('volume_metadata'): d['metadata'] = vol.get('volume_metadata') else: d['metadata'] = {} return d class VolumeController(wsgi.Controller): """The Volumes API controller for the OpenStack API.""" def __init__(self): super(VolumeController, self).__init__() self.volume_api = cinder.API() @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(404) def show(self, req, id): """Return data about the given volume.""" context = req.environ['nova.context'] context.can(vol_policies.BASE_POLICY_NAME) try: vol = self.volume_api.get(context, id) except exception.VolumeNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) return {'volume': _translate_volume_detail_view(context, vol)} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.response(202) @wsgi.expected_errors((400, 404)) def delete(self, req, id): """Delete a volume.""" context = req.environ['nova.context'] context.can(vol_policies.BASE_POLICY_NAME) try: self.volume_api.delete(context, id) except exception.InvalidInput as e: raise exc.HTTPBadRequest(explanation=e.format_message()) except exception.VolumeNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @validation.query_schema(volumes_schema.index_query) @wsgi.expected_errors(()) def index(self, 
req): """Returns a summary list of volumes.""" return self._items(req, entity_maker=_translate_volume_summary_view) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @validation.query_schema(volumes_schema.detail_query) @wsgi.expected_errors(()) def detail(self, req): """Returns a detailed list of volumes.""" return self._items(req, entity_maker=_translate_volume_detail_view) def _items(self, req, entity_maker): """Returns a list of volumes, transformed through entity_maker.""" context = req.environ['nova.context'] context.can(vol_policies.BASE_POLICY_NAME) volumes = self.volume_api.get_all(context) limited_list = common.limited(volumes, req) res = [entity_maker(context, vol) for vol in limited_list] return {'volumes': res} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 403, 404)) @validation.schema(volumes_schema.create) def create(self, req, body): """Creates a new volume.""" context = req.environ['nova.context'] context.can(vol_policies.BASE_POLICY_NAME) vol = body['volume'] vol_type = vol.get('volume_type') metadata = vol.get('metadata') snapshot_id = vol.get('snapshot_id', None) if snapshot_id is not None: try: snapshot = self.volume_api.get_snapshot(context, snapshot_id) except exception.SnapshotNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) else: snapshot = None size = vol.get('size', None) if size is None and snapshot is not None: size = snapshot['volume_size'] availability_zone = vol.get('availability_zone') try: new_volume = self.volume_api.create( context, size, vol.get('display_name'), vol.get('display_description'), snapshot=snapshot, volume_type=vol_type, metadata=metadata, availability_zone=availability_zone ) except exception.InvalidInput as err: raise exc.HTTPBadRequest(explanation=err.format_message()) except exception.OverQuota as err: raise exc.HTTPForbidden(explanation=err.format_message()) # TODO(vish): Instance should be None at db layer instead of # trying to lazy load, but for now we turn it into # a dict to avoid an error. retval = _translate_volume_detail_view(context, dict(new_volume)) result = {'volume': retval} location = '%s/%s' % (req.url, new_volume['id']) return wsgi.ResponseObject(result, headers=dict(location=location)) def _translate_attachment_detail_view(bdm, show_tag=False, show_delete_on_termination=False): """Maps keys for attachment details view. 
:param bdm: BlockDeviceMapping object for an attached volume :param show_tag: True if the "tag" field should be in the response, False to exclude the "tag" field from the response :param show_delete_on_termination: True if the "delete_on_termination" field should be in the response, False to exclude the "delete_on_termination" field from the response """ d = _translate_attachment_summary_view( bdm.volume_id, bdm.instance_uuid, bdm.device_name) if show_tag: d['tag'] = bdm.tag if show_delete_on_termination: d['delete_on_termination'] = bdm.delete_on_termination return d def _translate_attachment_summary_view(volume_id, instance_uuid, mountpoint): """Maps keys for attachment summary view.""" d = {} # NOTE(justinsb): We use the volume id as the id of the attachment object d['id'] = volume_id d['volumeId'] = volume_id d['serverId'] = instance_uuid if mountpoint: d['device'] = mountpoint return d def _check_request_version(req, min_version, method, server_id, server_state): if not api_version_request.is_supported(req, min_version=min_version): exc_inv = exception.InstanceInvalidState( attr='vm_state', instance_uuid=server_id, state=server_state, method=method) common.raise_http_conflict_for_instance_invalid_state( exc_inv, method, server_id) class VolumeAttachmentController(wsgi.Controller): """The volume attachment API controller for the OpenStack API. A child resource of the server. Note that we use the volume id as the ID of the attachment (though this is not guaranteed externally) """ def __init__(self): self.compute_api = compute.API() self.volume_api = cinder.API() super(VolumeAttachmentController, self).__init__() @wsgi.expected_errors(404) @validation.query_schema(volumes_schema.index_query_275, '2.75') @validation.query_schema(volumes_schema.index_query, '2.0', '2.74') def index(self, req, server_id): """Returns the list of volume attachments for a given instance.""" context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, server_id) context.can(va_policies.POLICY_ROOT % 'index', target={'project_id': instance.project_id}) bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) limited_list = common.limited(bdms, req) results = [] show_tag = api_version_request.is_supported(req, '2.70') show_delete_on_termination = api_version_request.is_supported( req, '2.79') for bdm in limited_list: if bdm.volume_id: va = _translate_attachment_detail_view( bdm, show_tag=show_tag, show_delete_on_termination=show_delete_on_termination) results.append(va) return {'volumeAttachments': results} @wsgi.expected_errors(404) def show(self, req, server_id, id): """Return data about the given volume attachment.""" context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, server_id) context.can(va_policies.POLICY_ROOT % 'show', target={'project_id': instance.project_id}) volume_id = id try: bdm = objects.BlockDeviceMapping.get_by_volume_and_instance( context, volume_id, instance.uuid) except exception.VolumeBDMNotFound: msg = (_("Instance %(instance)s is not attached " "to volume %(volume)s") % {'instance': server_id, 'volume': volume_id}) raise exc.HTTPNotFound(explanation=msg) show_tag = api_version_request.is_supported(req, '2.70') show_delete_on_termination = api_version_request.is_supported( req, '2.79') return {'volumeAttachment': _translate_attachment_detail_view( bdm, show_tag=show_tag, show_delete_on_termination=show_delete_on_termination)} # TODO(mriedem): This API should return a 202 instead of a 
200 response. @wsgi.expected_errors((400, 403, 404, 409)) @validation.schema(volumes_schema.create_volume_attachment, '2.0', '2.48') @validation.schema(volumes_schema.create_volume_attachment_v249, '2.49', '2.78') @validation.schema(volumes_schema.create_volume_attachment_v279, '2.79') def create(self, req, server_id, body): """Attach a volume to an instance.""" context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, server_id) context.can(va_policies.POLICY_ROOT % 'create', target={'project_id': instance.project_id}) volume_id = body['volumeAttachment']['volumeId'] device = body['volumeAttachment'].get('device') tag = body['volumeAttachment'].get('tag') delete_on_termination = body['volumeAttachment'].get( 'delete_on_termination', False) if instance.vm_state in (vm_states.SHELVED, vm_states.SHELVED_OFFLOADED): _check_request_version(req, '2.20', 'attach_volume', server_id, instance.vm_state) try: supports_multiattach = common.supports_multiattach_volume(req) device = self.compute_api.attach_volume( context, instance, volume_id, device, tag=tag, supports_multiattach=supports_multiattach, delete_on_termination=delete_on_termination) except exception.VolumeNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) except (exception.InstanceIsLocked, exception.DevicePathInUse, exception.MultiattachNotSupportedByVirtDriver) as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'attach_volume', server_id) except (exception.InvalidVolume, exception.InvalidDevicePath, exception.InvalidInput, exception.VolumeTaggedAttachNotSupported, exception.MultiattachNotSupportedOldMicroversion, exception.MultiattachToShelvedNotSupported) as e: raise exc.HTTPBadRequest(explanation=e.format_message()) except exception.TooManyDiskDevices as e: raise exc.HTTPForbidden(explanation=e.format_message()) # The attach is async # NOTE(mriedem): It would be nice to use # _translate_attachment_summary_view here but that does not include # the 'device' key if device is None or the empty string which would # be a backward incompatible change. attachment = {} attachment['id'] = volume_id attachment['serverId'] = server_id attachment['volumeId'] = volume_id attachment['device'] = device if api_version_request.is_supported(req, '2.70'): attachment['tag'] = tag if api_version_request.is_supported(req, '2.79'): attachment['delete_on_termination'] = delete_on_termination return {'volumeAttachment': attachment} def _update_volume_swap(self, req, instance, id, body): context = req.environ['nova.context'] old_volume_id = id try: old_volume = self.volume_api.get(context, old_volume_id) except exception.VolumeNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) new_volume_id = body['volumeAttachment']['volumeId'] try: new_volume = self.volume_api.get(context, new_volume_id) except exception.VolumeNotFound as e: # NOTE: This BadRequest is different from the above NotFound even # though the same VolumeNotFound exception. This is intentional # because new_volume_id is specified in a request body and if a # nonexistent resource in the body (not URI) the code should be # 400 Bad Request as API-WG guideline. On the other hand, # old_volume_id is specified with URI. So it is valid to return # NotFound response if that is not existent. 
raise exc.HTTPBadRequest(explanation=e.format_message()) try: self.compute_api.swap_volume(context, instance, old_volume, new_volume) except exception.VolumeBDMNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) except (exception.InvalidVolume, exception.MultiattachSwapVolumeNotSupported) as e: raise exc.HTTPBadRequest(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'swap_volume', instance.uuid) def _update_volume_regular(self, req, instance, id, body): context = req.environ['nova.context'] att = body['volumeAttachment'] # NOTE(danms): We may be doing an update of regular parameters in # the midst of a swap operation, so to find the original BDM, we need # to use the old volume ID, which is the one in the path. volume_id = id try: bdm = objects.BlockDeviceMapping.get_by_volume_and_instance( context, volume_id, instance.uuid) # NOTE(danms): The attachment id is just the (current) volume id if 'id' in att and att['id'] != volume_id: raise exc.HTTPBadRequest(explanation='The id property is ' 'not mutable') if 'serverId' in att and att['serverId'] != instance.uuid: raise exc.HTTPBadRequest(explanation='The serverId property ' 'is not mutable') if 'device' in att and att['device'] != bdm.device_name: raise exc.HTTPBadRequest(explanation='The device property is ' 'not mutable') if 'tag' in att and att['tag'] != bdm.tag: raise exc.HTTPBadRequest(explanation='The tag property is ' 'not mutable') if 'delete_on_termination' in att: bdm.delete_on_termination = strutils.bool_from_string( att['delete_on_termination'], strict=True) bdm.save() except exception.VolumeBDMNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) @wsgi.response(202) @wsgi.expected_errors((400, 404, 409)) @validation.schema(volumes_schema.update_volume_attachment, '2.0', '2.84') @validation.schema(volumes_schema.update_volume_attachment_v285, min_version='2.85') def update(self, req, server_id, id, body): context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, server_id) attachment = body['volumeAttachment'] volume_id = attachment['volumeId'] only_swap = not api_version_request.is_supported(req, '2.85') # NOTE(brinzhang): If the 'volumeId' requested by the user is # different from the 'id' in the url path, or only swap is allowed by # the microversion, we should check the swap volume policy. # otherwise, check the volume update policy. 
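        # Hedged illustration of the two request shapes this distinction is
        # about (the volume ids and the exact path are examples, not taken
        # from this file):
        #     swap   - PUT .../servers/{server_id}/os-volume_attachments/{old}
        #              {"volumeAttachment": {"volumeId": "<different uuid>"}}
        #     update - same path with microversion >= 2.85 and a 'volumeId'
        #              equal to the id in the URL, e.g.
        #              {"volumeAttachment": {"volumeId": "<same uuid>",
        #                                    "delete_on_termination": true}}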
if only_swap or id != volume_id: context.can(va_policies.POLICY_ROOT % 'swap', target={}) else: context.can(va_policies.POLICY_ROOT % 'update', target={'project_id': instance.project_id}) if only_swap: # NOTE(danms): Original behavior is always call swap on PUT self._update_volume_swap(req, instance, id, body) else: # NOTE(danms): New behavior is update any supported attachment # properties first, and then call swap if volumeId differs self._update_volume_regular(req, instance, id, body) if id != volume_id: self._update_volume_swap(req, instance, id, body) @wsgi.response(202) @wsgi.expected_errors((400, 403, 404, 409)) def delete(self, req, server_id, id): """Detach a volume from an instance.""" context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, server_id, expected_attrs=['device_metadata']) context.can(va_policies.POLICY_ROOT % 'delete', target={'project_id': instance.project_id}) volume_id = id if instance.vm_state in (vm_states.SHELVED, vm_states.SHELVED_OFFLOADED): _check_request_version(req, '2.20', 'detach_volume', server_id, instance.vm_state) try: volume = self.volume_api.get(context, volume_id) except exception.VolumeNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) try: bdm = objects.BlockDeviceMapping.get_by_volume_and_instance( context, volume_id, instance.uuid) except exception.VolumeBDMNotFound: msg = (_("Instance %(instance)s is not attached " "to volume %(volume)s") % {'instance': server_id, 'volume': volume_id}) raise exc.HTTPNotFound(explanation=msg) if bdm.is_root: msg = _("Cannot detach a root device volume") raise exc.HTTPBadRequest(explanation=msg) try: self.compute_api.detach_volume(context, instance, volume) except exception.InvalidVolume as e: raise exc.HTTPBadRequest(explanation=e.format_message()) except exception.InvalidInput as e: raise exc.HTTPBadRequest(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'detach_volume', server_id) def _translate_snapshot_detail_view(context, vol): """Maps keys for snapshots details view.""" d = _translate_snapshot_summary_view(context, vol) # NOTE(gagupta): No additional data / lookups at the moment return d def _translate_snapshot_summary_view(context, vol): """Maps keys for snapshots summary view.""" d = {} d['id'] = vol['id'] d['volumeId'] = vol['volume_id'] d['status'] = vol['status'] # NOTE(gagupta): We map volume_size as the snapshot size d['size'] = vol['volume_size'] d['createdAt'] = vol['created_at'] d['displayName'] = vol['display_name'] d['displayDescription'] = vol['display_description'] return d class SnapshotController(wsgi.Controller): """The Snapshots API controller for the OpenStack API.""" def __init__(self): self.volume_api = cinder.API() super(SnapshotController, self).__init__() @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(404) def show(self, req, id): """Return data about the given snapshot.""" context = req.environ['nova.context'] context.can(vol_policies.BASE_POLICY_NAME) try: vol = self.volume_api.get_snapshot(context, id) except exception.SnapshotNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) return {'snapshot': _translate_snapshot_detail_view(context, vol)} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.response(202) @wsgi.expected_errors(404) def 
delete(self, req, id): """Delete a snapshot.""" context = req.environ['nova.context'] context.can(vol_policies.BASE_POLICY_NAME) try: self.volume_api.delete_snapshot(context, id) except exception.SnapshotNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @validation.query_schema(volumes_schema.index_query) @wsgi.expected_errors(()) def index(self, req): """Returns a summary list of snapshots.""" return self._items(req, entity_maker=_translate_snapshot_summary_view) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @validation.query_schema(volumes_schema.detail_query) @wsgi.expected_errors(()) def detail(self, req): """Returns a detailed list of snapshots.""" return self._items(req, entity_maker=_translate_snapshot_detail_view) def _items(self, req, entity_maker): """Returns a list of snapshots, transformed through entity_maker.""" context = req.environ['nova.context'] context.can(vol_policies.BASE_POLICY_NAME) snapshots = self.volume_api.get_all_snapshots(context) limited_list = common.limited(snapshots, req) res = [entity_maker(context, snapshot) for snapshot in limited_list] return {'snapshots': res} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 403)) @validation.schema(volumes_schema.snapshot_create) def create(self, req, body): """Creates a new snapshot.""" context = req.environ['nova.context'] context.can(vol_policies.BASE_POLICY_NAME) snapshot = body['snapshot'] volume_id = snapshot['volume_id'] force = snapshot.get('force', False) force = strutils.bool_from_string(force, strict=True) if force: create_func = self.volume_api.create_snapshot_force else: create_func = self.volume_api.create_snapshot try: new_snapshot = create_func(context, volume_id, snapshot.get('display_name'), snapshot.get('display_description')) except exception.OverQuota as e: raise exc.HTTPForbidden(explanation=e.format_message()) retval = _translate_snapshot_detail_view(context, new_snapshot) return {'snapshot': retval} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/compute/wsgi.py0000664000175000017500000000141500000000000021271 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """WSGI application entry-point for Nova Compute API, installed by pbr.""" from nova.api.openstack import wsgi_app NAME = "osapi_compute" def init_application(): return wsgi_app.init_application(NAME) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/identity.py0000664000175000017500000000534600000000000020504 0ustar00zuulzuul00000000000000# Copyright 2017 IBM # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from keystoneauth1 import exceptions as kse from oslo_log import log as logging import webob from nova.i18n import _ from nova import utils LOG = logging.getLogger(__name__) def verify_project_id(context, project_id): """verify that a project_id exists. This attempts to verify that a project id exists. If it does not, an HTTPBadRequest is emitted. """ adap = utils.get_ksa_adapter( 'identity', ksa_auth=context.get_auth_plugin(), min_version=(3, 0), max_version=(3, 'latest')) failure = webob.exc.HTTPBadRequest( explanation=_("Project ID %s is not a valid project.") % project_id) try: resp = adap.get('/projects/%s' % project_id) except kse.EndpointNotFound: LOG.error( "Keystone identity service version 3.0 was not found. This might " "be because your endpoint points to the v2.0 versioned endpoint " "which is not supported. Please fix this.") raise failure except kse.ClientException: # something is wrong, like there isn't a keystone v3 endpoint, # or nova isn't configured for the interface to talk to it; # we'll take the pass and default to everything being ok. LOG.info("Unable to contact keystone to verify project_id") return True if resp: # All is good with this 20x status return True elif resp.status_code == 404: # we got access, and we know this project is not there raise failure elif resp.status_code == 403: # we don't have enough permission to verify this, so default # to "it's ok". LOG.info( "Insufficient permissions for user %(user)s to verify " "existence of project_id %(pid)s", {"user": context.user_id, "pid": project_id}) return True else: LOG.warning( "Unexpected response from keystone trying to " "verify project_id %(pid)s - resp: %(code)s %(content)s", {"pid": project_id, "code": resp.status_code, "content": resp.content}) # realize we did something wrong, but move on with a warning return True ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/requestlog.py0000664000175000017500000000633400000000000021043 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Simple middleware for request logging.""" import time from oslo_log import log as logging from oslo_utils import excutils import webob.dec import webob.exc from nova.api.openstack import wsgi from nova.api import wsgi as base_wsgi # TODO(sdague) maybe we can use a better name here for the logger LOG = logging.getLogger(__name__) class RequestLog(base_wsgi.Middleware): """WSGI Middleware to write a simple request log to. 
Borrowed from Paste Translogger """ # This matches the placement fil _log_format = ('%(REMOTE_ADDR)s "%(REQUEST_METHOD)s %(REQUEST_URI)s" ' 'status: %(status)s len: %(len)s ' 'microversion: %(microversion)s time: %(time).6f') @staticmethod def _get_uri(environ): req_uri = (environ.get('SCRIPT_NAME', '') + environ.get('PATH_INFO', '')) if environ.get('QUERY_STRING'): req_uri += '?' + environ['QUERY_STRING'] return req_uri @staticmethod def _should_emit(req): """Conditions under which we should skip logging. If we detect we are running under eventlet wsgi processing, we already are logging things, let it go. This is broken out as a separate method so that it can be easily adjusted for testing. """ if req.environ.get('eventlet.input', None) is not None: return False return True def _log_req(self, req, res, start): if not self._should_emit(req): return # in the event that things exploded really badly deeper in the # wsgi stack, res is going to be an empty dict for the # fallback logging. So never count on it having attributes. status = getattr(res, "status", "500 Error").split(None, 1)[0] data = { 'REMOTE_ADDR': req.environ.get('REMOTE_ADDR', '-'), 'REQUEST_METHOD': req.environ['REQUEST_METHOD'], 'REQUEST_URI': self._get_uri(req.environ), 'status': status, 'len': getattr(res, "content_length", 0), 'time': time.time() - start, 'microversion': '-' } # set microversion if it exists if not req.api_version_request.is_null(): data["microversion"] = req.api_version_request.get_string() LOG.info(self._log_format, data) @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, req): res = {} start = time.time() try: res = req.get_response(self.application) self._log_req(req, res, start) return res except Exception: with excutils.save_and_reraise_exception(): self._log_req(req, res, start) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/urlmap.py0000664000175000017500000002660300000000000020152 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import re from oslo_log import log as logging import paste.urlmap import six from nova.api.openstack import wsgi if six.PY2: import urllib2 else: from urllib import request as urllib2 LOG = logging.getLogger(__name__) _quoted_string_re = r'"[^"\\]*(?:\\.[^"\\]*)*"' _option_header_piece_re = re.compile(r';\s*([^\s;=]+|%s)\s*' r'(?:=\s*([^;]+|%s))?\s*' % (_quoted_string_re, _quoted_string_re)) def unquote_header_value(value): """Unquotes a header value. This does not use the real unquoting but what browsers are actually using for quoting. :param value: the header value to unquote. """ if value and value[0] == value[-1] == '"': # this is not the real unquoting, but fixing this so that the # RFC is met will result in bugs with internet explorer and # probably some other browsers as well. 
IE for example is # uploading files with "C:\foo\bar.txt" as filename value = value[1:-1] return value def parse_list_header(value): """Parse lists as described by RFC 2068 Section 2. In particular, parse comma-separated lists where the elements of the list may include quoted-strings. A quoted-string could contain a comma. A non-quoted string could have quotes in the middle. Quotes are removed automatically after parsing. The return value is a standard :class:`list`: >>> parse_list_header('token, "quoted value"') ['token', 'quoted value'] :param value: a string with a list header. :return: :class:`list` """ result = [] for item in urllib2.parse_http_list(value): if item[:1] == item[-1:] == '"': item = unquote_header_value(item[1:-1]) result.append(item) return result def parse_options_header(value): """Parse a ``Content-Type`` like header into a tuple with the content type and the options: >>> parse_options_header('Content-Type: text/html; mimetype=text/html') ('Content-Type:', {'mimetype': 'text/html'}) :param value: the header to parse. :return: (str, options) """ def _tokenize(string): for match in _option_header_piece_re.finditer(string): key, value = match.groups() key = unquote_header_value(key) if value is not None: value = unquote_header_value(value) yield key, value if not value: return '', {} parts = _tokenize(';' + value) name = next(parts)[0] extra = dict(parts) return name, extra class Accept(object): def __init__(self, value): self._content_types = [parse_options_header(v) for v in parse_list_header(value)] def best_match(self, supported_content_types): # FIXME: Should we have a more sophisticated matching algorithm that # takes into account the version as well? best_quality = -1 best_content_type = None best_params = {} best_match = '*/*' for content_type in supported_content_types: for content_mask, params in self._content_types: try: quality = float(params.get('q', 1)) except ValueError: continue if quality < best_quality: continue elif best_quality == quality: if best_match.count('*') <= content_mask.count('*'): continue if self._match_mask(content_mask, content_type): best_quality = quality best_content_type = content_type best_params = params best_match = content_mask return best_content_type, best_params def _match_mask(self, mask, content_type): if '*' not in mask: return content_type == mask if mask == '*/*': return True mask_major = mask[:-2] content_type_major = content_type.split('/', 1)[0] return content_type_major == mask_major def urlmap_factory(loader, global_conf, **local_conf): if 'not_found_app' in local_conf: not_found_app = local_conf.pop('not_found_app') else: not_found_app = global_conf.get('not_found_app') if not_found_app: not_found_app = loader.get_app(not_found_app, global_conf=global_conf) urlmap = URLMap(not_found_app=not_found_app) for path, app_name in local_conf.items(): path = paste.urlmap.parse_path_expression(path) app = loader.get_app(app_name, global_conf=global_conf) urlmap[path] = app return urlmap class URLMap(paste.urlmap.URLMap): def _match(self, host, port, path_info): """Find longest match for a given URL path.""" for (domain, app_url), app in self.applications: if domain and domain != host and domain != host + ':' + port: continue # Rudimentary "wildcard" support: # By declaring a urlmap path ending in '/+', you're saying the # incoming path must start with everything up to and including the # '/' *and* have something after that as well. 
For example, path # /foo/bar/+ will match /foo/bar/baz, but not /foo/bar/ or /foo/bar # NOTE(efried): This assumes we'll never need a path URI component # that legitimately starts with '+'. (We could use a # more obscure character/sequence here in that case.) if app_url.endswith('/+'): # Must be requesting at least the path element (including /) if not path_info.startswith(app_url[:-1]): continue # ...but also must be requesting something after that / if len(path_info) < len(app_url): continue # Trim the /+ off the app_url to make it look "normal" for e.g. # proper splitting of SCRIPT_NAME and PATH_INFO. return app, app_url[:-2] # Normal (non-wildcarded) prefix match if (path_info == app_url or path_info.startswith(app_url + '/')): return app, app_url return None, None def _set_script_name(self, app, app_url): def wrap(environ, start_response): environ['SCRIPT_NAME'] += app_url return app(environ, start_response) return wrap def _munge_path(self, app, path_info, app_url): def wrap(environ, start_response): environ['SCRIPT_NAME'] += app_url environ['PATH_INFO'] = path_info[len(app_url):] return app(environ, start_response) return wrap def _path_strategy(self, host, port, path_info): """Check path suffix for MIME type and path prefix for API version.""" mime_type = app = app_url = None parts = path_info.rsplit('.', 1) if len(parts) > 1: possible_type = 'application/' + parts[1] if possible_type in wsgi.get_supported_content_types(): mime_type = possible_type parts = path_info.split('/') if len(parts) > 1: possible_app, possible_app_url = self._match(host, port, path_info) # Don't use prefix if it ends up matching default if possible_app and possible_app_url: app_url = possible_app_url app = self._munge_path(possible_app, path_info, app_url) return mime_type, app, app_url def _content_type_strategy(self, host, port, environ): """Check Content-Type header for API version.""" app = None params = parse_options_header(environ.get('CONTENT_TYPE', ''))[1] if 'version' in params: app, app_url = self._match(host, port, '/v' + params['version']) if app: app = self._set_script_name(app, app_url) return app def _accept_strategy(self, host, port, environ, supported_content_types): """Check Accept header for best matching MIME type and API version.""" accept = Accept(environ.get('HTTP_ACCEPT', '')) app = None # Find the best match in the Accept header mime_type, params = accept.best_match(supported_content_types) if 'version' in params: app, app_url = self._match(host, port, '/v' + params['version']) if app: app = self._set_script_name(app, app_url) return mime_type, app def __call__(self, environ, start_response): host = environ.get('HTTP_HOST', environ.get('SERVER_NAME')).lower() if ':' in host: host, port = host.split(':', 1) else: if environ['wsgi.url_scheme'] == 'http': port = '80' else: port = '443' path_info = environ['PATH_INFO'] path_info = self.normalize_url(path_info, False)[1] # The MIME type for the response is determined in one of two ways: # 1) URL path suffix (eg /servers/detail.json) # 2) Accept header (eg application/json;q=0.8, application/xml;q=0.2) # The API version is determined in one of three ways: # 1) URL path prefix (eg /v1.1/tenant/servers/detail) # 2) Content-Type header (eg application/json;version=1.1) # 3) Accept header (eg application/json;q=0.8;version=1.1) supported_content_types = list(wsgi.get_supported_content_types()) mime_type, app, app_url = self._path_strategy(host, port, path_info) # Accept application/atom+xml for the index query of each API # version mount 
point as well as the root index if (app_url and app_url + '/' == path_info) or path_info == '/': supported_content_types.append('application/atom+xml') if not app: app = self._content_type_strategy(host, port, environ) if not mime_type or not app: possible_mime_type, possible_app = self._accept_strategy( host, port, environ, supported_content_types) if possible_mime_type and not mime_type: mime_type = possible_mime_type if possible_app and not app: app = possible_app if not mime_type: mime_type = 'application/json' if not app: # Didn't match a particular version, probably matches default app, app_url = self._match(host, port, path_info) if app: app = self._munge_path(app, path_info, app_url) if app: environ['nova.best_content_type'] = mime_type return app(environ, start_response) LOG.debug('Could not find application for %s', environ['PATH_INFO']) environ['paste.urlmap_object'] = self return self.not_found_application(environ, start_response) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/openstack/versioned_method.py0000664000175000017500000000234500000000000022205 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. class VersionedMethod(object): def __init__(self, name, start_version, end_version, func): """Versioning information for a single method @name: Name of the method @start_version: Minimum acceptable version @end_version: Maximum acceptable_version @func: Method to call Minimum and maximums are inclusive """ self.name = name self.start_version = start_version self.end_version = end_version self.func = func def __str__(self): return ("Version Method %s: min: %s, max: %s" % (self.name, self.start_version, self.end_version)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/wsgi.py0000664000175000017500000010432300000000000017617 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import functools import microversion_parse from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import strutils import six import webob from nova.api.openstack import api_version_request as api_version from nova.api.openstack import versioned_method from nova.api import wsgi from nova import exception from nova import i18n from nova.i18n import _ LOG = logging.getLogger(__name__) _SUPPORTED_CONTENT_TYPES = ( 'application/json', 'application/vnd.openstack.compute+json', ) # These are typically automatically created by routes as either defaults # collection or member methods. _ROUTES_METHODS = [ 'create', 'delete', 'show', 'update', ] _METHODS_WITH_BODY = [ 'POST', 'PUT', ] # The default api version request if none is requested in the headers # Note(cyeoh): This only applies for the v2.1 API once microversions # support is fully merged. It does not affect the V2 API. DEFAULT_API_VERSION = "2.1" # name of attribute to keep version method information VER_METHOD_ATTR = 'versioned_methods' # Names of headers used by clients to request a specific version # of the REST API API_VERSION_REQUEST_HEADER = 'OpenStack-API-Version' LEGACY_API_VERSION_REQUEST_HEADER = 'X-OpenStack-Nova-API-Version' ENV_LEGACY_V2 = 'openstack.legacy_v2' def get_supported_content_types(): return _SUPPORTED_CONTENT_TYPES class Request(wsgi.Request): """Add some OpenStack API-specific logic to the base webob.Request.""" def __init__(self, *args, **kwargs): super(Request, self).__init__(*args, **kwargs) if not hasattr(self, 'api_version_request'): self.api_version_request = api_version.APIVersionRequest() def best_match_content_type(self): """Determine the requested response content-type.""" if 'nova.best_content_type' not in self.environ: # Calculate the best MIME type content_type = None # Check URL path suffix parts = self.path.rsplit('.', 1) if len(parts) > 1: possible_type = 'application/' + parts[1] if possible_type in get_supported_content_types(): content_type = possible_type if not content_type: best_matches = self.accept.acceptable_offers( get_supported_content_types()) if best_matches: content_type = best_matches[0][0] self.environ['nova.best_content_type'] = (content_type or 'application/json') return self.environ['nova.best_content_type'] def get_content_type(self): """Determine content type of the request body. Does not do any body introspection, only checks header """ if "Content-Type" not in self.headers: return None content_type = self.content_type # NOTE(markmc): text/plain is the default for eventlet and # other webservers which use mimetools.Message.gettype() # whereas twisted defaults to ''. if not content_type or content_type == 'text/plain': return None if content_type not in get_supported_content_types(): raise exception.InvalidContentType(content_type=content_type) return content_type def best_match_language(self): """Determine the best available language for the request. :returns: the best language match or None if the 'Accept-Language' header was not available in the request. """ if not self.accept_language: return None # NOTE(takashin): To decide the default behavior, 'default' is # preferred over 'default_tag' because that is return as it is when # no match. This is also little tricky that 'default' value cannot be # None. At least one of default_tag or default must be supplied as # an argument to the method, to define the defaulting behavior. # So passing a sentinal value to return None from this function. 
best_match = self.accept_language.lookup( i18n.get_available_languages(), default='fake_LANG') if best_match == 'fake_LANG': best_match = None return best_match def set_api_version_request(self): """Set API version request based on the request header information.""" hdr_string = microversion_parse.get_version( self.headers, service_type='compute', legacy_headers=[LEGACY_API_VERSION_REQUEST_HEADER]) if hdr_string is None: self.api_version_request = api_version.APIVersionRequest( api_version.DEFAULT_API_VERSION) elif hdr_string == 'latest': # 'latest' is a special keyword which is equivalent to # requesting the maximum version of the API supported self.api_version_request = api_version.max_api_version() else: self.api_version_request = api_version.APIVersionRequest( hdr_string) # Check that the version requested is within the global # minimum/maximum of supported API versions if not self.api_version_request.matches( api_version.min_api_version(), api_version.max_api_version()): raise exception.InvalidGlobalAPIVersion( req_ver=self.api_version_request.get_string(), min_ver=api_version.min_api_version().get_string(), max_ver=api_version.max_api_version().get_string()) def set_legacy_v2(self): self.environ[ENV_LEGACY_V2] = True def is_legacy_v2(self): return self.environ.get(ENV_LEGACY_V2, False) class ActionDispatcher(object): """Maps method name to local methods through action name.""" def dispatch(self, *args, **kwargs): """Find and call local method.""" action = kwargs.pop('action', 'default') action_method = getattr(self, str(action), self.default) return action_method(*args, **kwargs) def default(self, data): raise NotImplementedError() class JSONDeserializer(ActionDispatcher): def _from_json(self, datastring): try: return jsonutils.loads(datastring) except ValueError: msg = _("cannot understand JSON") raise exception.MalformedRequestBody(reason=msg) def deserialize(self, datastring, action='default'): return self.dispatch(datastring, action=action) def default(self, datastring): return {'body': self._from_json(datastring)} class JSONDictSerializer(ActionDispatcher): """Default JSON request body serialization.""" def serialize(self, data, action='default'): return self.dispatch(data, action=action) def default(self, data): return six.text_type(jsonutils.dumps(data)) def response(code): """Attaches response code to a method. This decorator associates a response code with a method. Note that the function attributes are directly manipulated; the method is not wrapped. """ def decorator(func): func.wsgi_code = code return func return decorator class ResponseObject(object): """Bundles a response object Object that app methods may return in order to allow its response to be modified by extensions in the code. Its use is optional (and should only be used if you really know what you are doing). """ def __init__(self, obj, code=None, headers=None): """Builds a response object.""" self.obj = obj self._default_code = 200 self._code = code self._headers = headers or {} self.serializer = JSONDictSerializer() def __getitem__(self, key): """Retrieves a header with the given name.""" return self._headers[key.lower()] def __setitem__(self, key, value): """Sets a header with the given name to the given value.""" self._headers[key.lower()] = value def __delitem__(self, key): """Deletes the header with the given name.""" del self._headers[key.lower()] def serialize(self, request, content_type): """Serializes the wrapped object. Utility method for serializing the wrapped object. 
Returns a webob.Response object. Header values are set to the appropriate Python type and encoding demanded by PEP 3333: whatever the native str type is. """ serializer = self.serializer body = None if self.obj is not None: body = serializer.serialize(self.obj) response = webob.Response(body=body) response.status_int = self.code for hdr, val in self._headers.items(): if six.PY2: # In Py2.X Headers must be a UTF-8 encode str. response.headers[hdr] = encodeutils.safe_encode(val) else: # In Py3.X Headers must be a str that was first safely # encoded to UTF-8 (to catch any bad encodings) and then # decoded back to a native str. response.headers[hdr] = encodeutils.safe_decode( encodeutils.safe_encode(val)) # Deal with content_type if not isinstance(content_type, six.text_type): content_type = six.text_type(content_type) if six.PY2: # In Py2.X Headers must be a UTF-8 encode str. response.headers['Content-Type'] = encodeutils.safe_encode( content_type) else: # In Py3.X Headers must be a str. response.headers['Content-Type'] = encodeutils.safe_decode( encodeutils.safe_encode(content_type)) return response @property def code(self): """Retrieve the response status.""" return self._code or self._default_code @property def headers(self): """Retrieve the headers.""" return self._headers.copy() def action_peek(body): """Determine action to invoke. This looks inside the json body and fetches out the action method name. """ try: decoded = jsonutils.loads(body) except ValueError: msg = _("cannot understand JSON") raise exception.MalformedRequestBody(reason=msg) # Make sure there's exactly one key... if len(decoded) != 1: msg = _("too many body keys") raise exception.MalformedRequestBody(reason=msg) # Return the action name return list(decoded.keys())[0] class ResourceExceptionHandler(object): """Context manager to handle Resource exceptions. Used when processing exceptions generated by API implementation methods. Converts most exceptions to Fault exceptions, with the appropriate logging. """ def __enter__(self): return None def __exit__(self, ex_type, ex_value, ex_traceback): if not ex_value: return True if isinstance(ex_value, exception.Forbidden): raise Fault(webob.exc.HTTPForbidden( explanation=ex_value.format_message())) elif isinstance(ex_value, exception.VersionNotFoundForAPIMethod): raise elif isinstance(ex_value, exception.Invalid): raise Fault(exception.ConvertedException( code=ex_value.code, explanation=ex_value.format_message())) elif isinstance(ex_value, TypeError): exc_info = (ex_type, ex_value, ex_traceback) LOG.error('Exception handling resource: %s', ex_value, exc_info=exc_info) raise Fault(webob.exc.HTTPBadRequest()) elif isinstance(ex_value, Fault): LOG.info("Fault thrown: %s", ex_value) raise ex_value elif isinstance(ex_value, webob.exc.HTTPException): LOG.info("HTTP exception thrown: %s", ex_value) raise Fault(ex_value) # We didn't handle the exception return False class Resource(wsgi.Application): """WSGI app that handles (de)serialization and controller dispatch. WSGI app that reads routing information supplied by RoutesMiddleware and calls the requested action method upon its controller. All controller action methods must accept a 'req' argument, which is the incoming wsgi.Request. If the operation is a PUT or POST, the controller method must also accept a 'body' argument (the deserialized request body). They may raise a webob.exc exception or return a dict, which will be serialized by requested content type. 
Exceptions derived from webob.exc.HTTPException will be automatically wrapped in Fault() to provide API friendly error responses. """ support_api_request_version = True def __init__(self, controller): """:param controller: object that implement methods created by routes lib """ self.controller = controller self.default_serializers = dict(json=JSONDictSerializer) # Copy over the actions dictionary self.wsgi_actions = {} if controller: self.register_actions(controller) def register_actions(self, controller): """Registers controller actions with this resource.""" actions = getattr(controller, 'wsgi_actions', {}) for key, method_name in actions.items(): self.wsgi_actions[key] = getattr(controller, method_name) def get_action_args(self, request_environment): """Parse dictionary created by routes library.""" # NOTE(Vek): Check for get_action_args() override in the # controller if hasattr(self.controller, 'get_action_args'): return self.controller.get_action_args(request_environment) try: args = request_environment['wsgiorg.routing_args'][1].copy() except (KeyError, IndexError, AttributeError): return {} try: del args['controller'] except KeyError: pass try: del args['format'] except KeyError: pass return args def get_body(self, request): content_type = request.get_content_type() return content_type, request.body def deserialize(self, body): return JSONDeserializer().deserialize(body) def _should_have_body(self, request): return request.method in _METHODS_WITH_BODY @webob.dec.wsgify(RequestClass=Request) def __call__(self, request): """WSGI method that controls (de)serialization and method dispatch.""" if self.support_api_request_version: # Set the version of the API requested based on the header try: request.set_api_version_request() except exception.InvalidAPIVersionString as e: return Fault(webob.exc.HTTPBadRequest( explanation=e.format_message())) except exception.InvalidGlobalAPIVersion as e: return Fault(webob.exc.HTTPNotAcceptable( explanation=e.format_message())) # Identify the action, its arguments, and the requested # content type action_args = self.get_action_args(request.environ) action = action_args.pop('action', None) # NOTE(sdague): we filter out InvalidContentTypes early so we # know everything is good from here on out. try: content_type, body = self.get_body(request) accept = request.best_match_content_type() except exception.InvalidContentType: msg = _("Unsupported Content-Type") return Fault(webob.exc.HTTPUnsupportedMediaType(explanation=msg)) # NOTE(Vek): Splitting the function up this way allows for # auditing by external tools that wrap the existing # function. If we try to audit __call__(), we can # run into troubles due to the @webob.dec.wsgify() # decorator. 
return self._process_stack(request, action, action_args, content_type, body, accept) def _process_stack(self, request, action, action_args, content_type, body, accept): """Implement the processing stack.""" # Get the implementing method try: meth = self.get_method(request, action, content_type, body) except (AttributeError, TypeError): return Fault(webob.exc.HTTPNotFound()) except KeyError as ex: msg = _("There is no such action: %s") % ex.args[0] return Fault(webob.exc.HTTPBadRequest(explanation=msg)) except exception.MalformedRequestBody: msg = _("Malformed request body") return Fault(webob.exc.HTTPBadRequest(explanation=msg)) if body: msg = _("Action: '%(action)s', calling method: %(meth)s, body: " "%(body)s") % {'action': action, 'body': six.text_type(body, 'utf-8'), 'meth': str(meth)} LOG.debug(strutils.mask_password(msg)) else: LOG.debug("Calling method '%(meth)s'", {'meth': str(meth)}) # Now, deserialize the request body... try: contents = self._get_request_content(body, request) except exception.MalformedRequestBody: msg = _("Malformed request body") return Fault(webob.exc.HTTPBadRequest(explanation=msg)) # Update the action args action_args.update(contents) project_id = action_args.pop("project_id", None) context = request.environ.get('nova.context') if (context and project_id and (project_id != context.project_id)): msg = _("Malformed request URL: URL's project_id '%(project_id)s'" " doesn't match Context's project_id" " '%(context_project_id)s'") % \ {'project_id': project_id, 'context_project_id': context.project_id} return Fault(webob.exc.HTTPBadRequest(explanation=msg)) response = None try: with ResourceExceptionHandler(): action_result = self.dispatch(meth, request, action_args) except Fault as ex: response = ex if not response: # No exceptions; convert action_result into a # ResponseObject resp_obj = None if type(action_result) is dict or action_result is None: resp_obj = ResponseObject(action_result) elif isinstance(action_result, ResponseObject): resp_obj = action_result else: response = action_result # Run post-processing extensions if resp_obj: # Do a preserialize to set up the response object if hasattr(meth, 'wsgi_code'): resp_obj._default_code = meth.wsgi_code if resp_obj and not response: response = resp_obj.serialize(request, accept) if hasattr(response, 'headers'): for hdr, val in list(response.headers.items()): if not isinstance(val, six.text_type): val = six.text_type(val) if six.PY2: # In Py2.X Headers must be UTF-8 encoded string response.headers[hdr] = encodeutils.safe_encode(val) else: # In Py3.X Headers must be a string response.headers[hdr] = encodeutils.safe_decode( encodeutils.safe_encode(val)) if not request.api_version_request.is_null(): response.headers[API_VERSION_REQUEST_HEADER] = \ 'compute ' + request.api_version_request.get_string() response.headers[LEGACY_API_VERSION_REQUEST_HEADER] = \ request.api_version_request.get_string() response.headers.add('Vary', API_VERSION_REQUEST_HEADER) response.headers.add('Vary', LEGACY_API_VERSION_REQUEST_HEADER) return response def _get_request_content(self, body, request): contents = {} if self._should_have_body(request): # allow empty body with PUT and POST if request.content_length == 0 or request.content_length is None: contents = {'body': None} else: contents = self.deserialize(body) return contents def get_method(self, request, action, content_type, body): meth = self._get_method(request, action, content_type, body) return meth def _get_method(self, request, action, content_type, body): """Look up the 
action-specific method.""" # Look up the method try: if not self.controller: meth = getattr(self, action) else: meth = getattr(self.controller, action) return meth except AttributeError: if (not self.wsgi_actions or action not in _ROUTES_METHODS + ['action']): # Propagate the error raise if action == 'action': action_name = action_peek(body) else: action_name = action # Look up the action method return (self.wsgi_actions[action_name]) def dispatch(self, method, request, action_args): """Dispatch a call to the action-specific method.""" try: return method(req=request, **action_args) except exception.VersionNotFoundForAPIMethod: # We deliberately don't return any message information # about the exception to the user so it looks as if # the method is simply not implemented. return Fault(webob.exc.HTTPNotFound()) def action(name): """Mark a function as an action. The given name will be taken as the action key in the body. This is also overloaded to allow extensions to provide non-extending definitions of create and delete operations. """ def decorator(func): func.wsgi_action = name return func return decorator def expected_errors(errors): """Decorator for v2.1 API methods which specifies expected exceptions. Specify which exceptions may occur when an API method is called. If an unexpected exception occurs then return a 500 instead and ask the user of the API to file a bug report. """ def decorator(f): @functools.wraps(f) def wrapped(*args, **kwargs): try: return f(*args, **kwargs) except Exception as exc: if isinstance(exc, webob.exc.WSGIHTTPException): if isinstance(errors, int): t_errors = (errors,) else: t_errors = errors if exc.code in t_errors: raise elif isinstance(exc, exception.Forbidden): # Note(cyeoh): Special case to handle # Forbidden exceptions so every # extension method does not need to wrap authorize # calls. ResourceExceptionHandler silently # converts NotAuthorized to HTTPForbidden raise elif isinstance(exc, exception.ValidationError): # Note(oomichi): Handle a validation error, which # happens due to invalid API parameters, as an # expected error. raise elif isinstance(exc, exception.Unauthorized): # Handle an authorized exception, will be # automatically converted to a HTTP 401, clients # like python-novaclient handle this error to # generate new token and do another attempt. raise LOG.exception("Unexpected exception in API method") msg = _('Unexpected API Error. Please report this at ' 'http://bugs.launchpad.net/nova/ and attach the Nova ' 'API log if possible.\n%s') % type(exc) raise webob.exc.HTTPInternalServerError(explanation=msg) return wrapped return decorator class ControllerMetaclass(type): """Controller metaclass. This metaclass automates the task of assembling a dictionary mapping action keys to method names. """ def __new__(mcs, name, bases, cls_dict): """Adds the wsgi_actions dictionary to the class.""" # Find all actions actions = {} versioned_methods = None # start with wsgi actions from base classes for base in bases: actions.update(getattr(base, 'wsgi_actions', {})) if base.__name__ == "Controller": # NOTE(cyeoh): This resets the VER_METHOD_ATTR attribute # between API controller class creations. This allows us # to use a class decorator on the API methods that doesn't # require naming explicitly what method is being versioned as # it can be implicit based on the method decorated. It is a bit # ugly. 
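# NOTE(editor): illustrative sketch, not part of the original module. It shows
# how the action() and expected_errors() decorators defined above are combined
# on a controller method handling a body-keyed "action" request. The
# controller, the 'demo_pause' action key and the error codes are hypothetical.

import webob.exc

from nova.api.openstack import wsgi


class DemoActionController(wsgi.Controller):

    @wsgi.expected_errors((400, 404))
    @wsgi.action('demo_pause')
    def _demo_pause(self, req, id, body):
        # A missing or malformed payload is an expected 400 error.
        payload = body['demo_pause'] or {}
        if 'duration' not in payload:
            raise webob.exc.HTTPBadRequest(explanation='duration is required')
        return {'demo_pause': payload['duration']}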
if VER_METHOD_ATTR in base.__dict__: versioned_methods = getattr(base, VER_METHOD_ATTR) delattr(base, VER_METHOD_ATTR) for key, value in cls_dict.items(): if not callable(value): continue if getattr(value, 'wsgi_action', None): actions[value.wsgi_action] = key # Add the actions to the class dict cls_dict['wsgi_actions'] = actions if versioned_methods: cls_dict[VER_METHOD_ATTR] = versioned_methods return super(ControllerMetaclass, mcs).__new__(mcs, name, bases, cls_dict) @six.add_metaclass(ControllerMetaclass) class Controller(object): """Default controller.""" _view_builder_class = None def __init__(self): """Initialize controller with a view builder instance.""" if self._view_builder_class: self._view_builder = self._view_builder_class() else: self._view_builder = None def __getattribute__(self, key): def version_select(*args, **kwargs): """Look for the method which matches the name supplied and version constraints and calls it with the supplied arguments. @return: Returns the result of the method called @raises: VersionNotFoundForAPIMethod if there is no method which matches the name and version constraints """ # The first arg to all versioned methods is always the request # object. The version for the request is attached to the # request object if len(args) == 0: ver = kwargs['req'].api_version_request else: ver = args[0].api_version_request func_list = self.versioned_methods[key] for func in func_list: if ver.matches(func.start_version, func.end_version): # Update the version_select wrapper function so # other decorator attributes like wsgi.response # are still respected. functools.update_wrapper(version_select, func.func) return func.func(self, *args, **kwargs) # No version match raise exception.VersionNotFoundForAPIMethod(version=ver) try: version_meth_dict = object.__getattribute__(self, VER_METHOD_ATTR) except AttributeError: # No versioning on this class return object.__getattribute__(self, key) if version_meth_dict and \ key in object.__getattribute__(self, VER_METHOD_ATTR): return version_select return object.__getattribute__(self, key) # NOTE(cyeoh): This decorator MUST appear first (the outermost # decorator) on an API method for it to work correctly @classmethod def api_version(cls, min_ver, max_ver=None): """Decorator for versioning api methods. Add the decorator to any method which takes a request object as the first parameter and belongs to a class which inherits from wsgi.Controller. @min_ver: string representing minimum version @max_ver: optional string representing maximum version """ def decorator(f): obj_min_ver = api_version.APIVersionRequest(min_ver) if max_ver: obj_max_ver = api_version.APIVersionRequest(max_ver) else: obj_max_ver = api_version.APIVersionRequest() # Add to list of versioned methods registered func_name = f.__name__ new_func = versioned_method.VersionedMethod( func_name, obj_min_ver, obj_max_ver, f) func_dict = getattr(cls, VER_METHOD_ATTR, {}) if not func_dict: setattr(cls, VER_METHOD_ATTR, func_dict) func_list = func_dict.get(func_name, []) if not func_list: func_dict[func_name] = func_list func_list.append(new_func) # Ensure the list is sorted by minimum version (reversed) # so later when we work through the list in order we find # the method which has the latest version which supports # the version requested. 
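# NOTE(editor): illustrative sketch, not part of the original module. It shows
# the usual pattern for the api_version() decorator defined here: two
# implementations of the same action registered for different microversion
# ranges, with the request's microversion selecting between them at call time.
# The controller and version numbers are examples only.

from nova.api.openstack import wsgi


class DemoController(wsgi.Controller):

    @wsgi.Controller.api_version('2.1', '2.52')
    def show(self, req, id):
        return {'demo': {'id': id}}

    @wsgi.Controller.api_version('2.53')  # noqa: F811
    def show(self, req, id):
        # Later microversions return additional fields.
        return {'demo': {'id': id, 'description': None}}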
is_intersect = Controller.check_for_versions_intersection( func_list) if is_intersect: raise exception.ApiVersionsIntersect( name=new_func.name, min_ver=new_func.start_version, max_ver=new_func.end_version, ) func_list.sort(key=lambda f: f.start_version, reverse=True) return f return decorator @staticmethod def is_valid_body(body, entity_name): if not (body and entity_name in body): return False def is_dict(d): try: d.get(None) return True except AttributeError: return False return is_dict(body[entity_name]) @staticmethod def check_for_versions_intersection(func_list): """Determines whether function list contains version intervals intersections or not. General algorithm: https://en.wikipedia.org/wiki/Intersection_algorithm :param func_list: list of VersionedMethod objects :return: boolean """ pairs = [] counter = 0 for f in func_list: pairs.append((f.start_version, 1, f)) pairs.append((f.end_version, -1, f)) def compare(x): return x[0] pairs.sort(key=compare) for p in pairs: counter += p[1] if counter > 1: return True return False class Fault(webob.exc.HTTPException): """Wrap webob.exc.HTTPException to provide API friendly response.""" _fault_names = { 400: "badRequest", 401: "unauthorized", 403: "forbidden", 404: "itemNotFound", 405: "badMethod", 409: "conflictingRequest", 413: "overLimit", 415: "badMediaType", 429: "overLimit", 501: "notImplemented", 503: "serviceUnavailable"} def __init__(self, exception): """Create a Fault for the given webob.exc.exception.""" self.wrapped_exc = exception for key, value in list(self.wrapped_exc.headers.items()): self.wrapped_exc.headers[key] = str(value) self.status_int = exception.status_int @webob.dec.wsgify(RequestClass=Request) def __call__(self, req): """Generate a WSGI response based on the exception passed to ctor.""" user_locale = req.best_match_language() # Replace the body with fault details. code = self.wrapped_exc.status_int fault_name = self._fault_names.get(code, "computeFault") explanation = self.wrapped_exc.explanation LOG.debug("Returning %(code)s to user: %(explanation)s", {'code': code, 'explanation': explanation}) explanation = i18n.translate(explanation, user_locale) fault_data = { fault_name: { 'code': code, 'message': explanation}} if code == 413 or code == 429: retry = self.wrapped_exc.headers.get('Retry-After', None) if retry: fault_data[fault_name]['retryAfter'] = retry if not req.api_version_request.is_null(): self.wrapped_exc.headers[API_VERSION_REQUEST_HEADER] = \ 'compute ' + req.api_version_request.get_string() self.wrapped_exc.headers[LEGACY_API_VERSION_REQUEST_HEADER] = \ req.api_version_request.get_string() self.wrapped_exc.headers.add('Vary', API_VERSION_REQUEST_HEADER) self.wrapped_exc.headers.add('Vary', LEGACY_API_VERSION_REQUEST_HEADER) self.wrapped_exc.content_type = 'application/json' self.wrapped_exc.charset = 'UTF-8' self.wrapped_exc.text = JSONDictSerializer().serialize(fault_data) return self.wrapped_exc def __str__(self): return self.wrapped_exc.__str__() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/openstack/wsgi_app.py0000664000175000017500000001041300000000000020453 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """WSGI application initialization for Nova APIs.""" import os from oslo_config import cfg from oslo_log import log as logging from oslo_service import _options as service_opts from paste import deploy import six from nova import config from nova import context from nova import exception from nova import objects from nova import service from nova import utils CONF = cfg.CONF CONFIG_FILES = ['api-paste.ini', 'nova.conf'] LOG = logging.getLogger(__name__) objects.register_all() def _get_config_files(env=None): if env is None: env = os.environ dirname = env.get('OS_NOVA_CONFIG_DIR', '/etc/nova').strip() return [os.path.join(dirname, config_file) for config_file in CONFIG_FILES] def _setup_service(host, name): try: utils.raise_if_old_compute() except exception.TooOldComputeService as e: logging.getLogger(__name__).warning(six.text_type(e)) binary = name if name.startswith('nova-') else "nova-%s" % name ctxt = context.get_admin_context() service_ref = objects.Service.get_by_host_and_binary( ctxt, host, binary) if service_ref: service._update_service_ref(service_ref) else: try: service_obj = objects.Service(ctxt) service_obj.host = host service_obj.binary = binary service_obj.topic = None service_obj.report_count = 0 service_obj.create() except (exception.ServiceTopicExists, exception.ServiceBinaryExists): # If we race to create a record with a sibling, don't # fail here. pass def error_application(exc, name): # TODO(cdent): make this something other than a stub def application(environ, start_response): start_response('500 Internal Server Error', [ ('Content-Type', 'text/plain; charset=UTF-8')]) return ['Out of date %s service %s\n' % (name, exc)] return application @utils.run_once('Global data already initialized, not re-initializing.', LOG.info) def init_global_data(conf_files): # NOTE(melwitt): parse_args initializes logging and calls global rpc.init() # and db_api.configure(). The db_api.configure() call does not initiate any # connection to the database. config.parse_args([], default_config_files=conf_files) logging.setup(CONF, "nova") # dump conf at debug (log_options option comes from oslo.service) # FIXME(mriedem): This is gross but we don't have a public hook into # oslo.service to register these options, so we are doing it manually for # now; remove this when we have a hook method into oslo.service. CONF.register_opts(service_opts.service_opts) if CONF.log_options: CONF.log_opt_values( logging.getLogger(__name__), logging.DEBUG) def init_application(name): conf_files = _get_config_files() # NOTE(melwitt): The init_application method can be called multiple times # within a single python interpreter instance if any exception is raised # during it (example: DBConnectionError while setting up the service) and # apache/mod_wsgi reloads the init_application script. So, we initialize # global data separately and decorate the method to run only once in a # python interpreter instance. 
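# NOTE(editor): illustrative sketch, not part of the original module. This is
# roughly how a WSGI server entry point consumes the init_application()
# function defined in this module; the 'osapi_compute' service name follows
# Nova's usual convention, but the snippet itself is hypothetical.

from nova.api.openstack import wsgi_app

# mod_wsgi/uwsgi look for a module-level 'application' callable.
application = wsgi_app.init_application('osapi_compute')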
init_global_data(conf_files) try: _setup_service(CONF.host, name) except exception.ServiceTooOld as exc: return error_application(exc, name) # This global init is safe because if we got here, we already successfully # set up the service and setting up the profile cannot fail. service.setup_profiler(name, CONF.host) conf = conf_files[0] return deploy.loadapp('config:%s' % conf, name=name) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2904704 nova-21.2.4/nova/api/validation/0000775000175000017500000000000000000000000016434 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/validation/__init__.py0000664000175000017500000002005600000000000020550 0ustar00zuulzuul00000000000000# Copyright 2013 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Request Body validating middleware. """ import functools import re from nova.api.openstack import api_version_request as api_version from nova.api.validation import validators from nova import exception from nova.i18n import _ def _schema_validation_helper(schema, target, min_version, max_version, args, kwargs, is_body=True): """A helper method to execute JSON-Schema Validation. This method checks the request version whether matches the specified max version and min_version. It also takes a care of legacy v2 request. If the version range matches the request, we validate the schema against the target and a failure will result in a ValidationError being raised. :param schema: A dict, the JSON-Schema is used to validate the target. :param target: A dict, the target is validated by the JSON-Schema. :param min_version: A string of two numerals. X.Y indicating the minimum version of the JSON-Schema to validate against. :param max_version: A string of two numerals. X.Y indicating the maximum version of the JSON-Schema to validate against. :param args: Positional arguments which passed into original method. :param kwargs: Keyword arguments which passed into original method. :param is_body: A boolean. Indicating whether the target is HTTP request body or not. :returns: A boolean. `True` if and only if the version range matches the request AND the schema is successfully validated. `False` if the version range does not match the request and no validation is performed. :raises: ValidationError, when the validation fails. """ min_ver = api_version.APIVersionRequest(min_version) max_ver = api_version.APIVersionRequest(max_version) # The request object is always the second argument. # However numerous unittests pass in the request object # via kwargs instead so we handle that as well. 
# TODO(cyeoh): cleanup unittests so we don't have to # to do this if 'req' in kwargs: ver = kwargs['req'].api_version_request legacy_v2 = kwargs['req'].is_legacy_v2() else: ver = args[1].api_version_request legacy_v2 = args[1].is_legacy_v2() if legacy_v2: # NOTE: For v2.0 compatible API, here should work like # client | schema min_version | schema # -----------+--------------------+-------- # legacy_v2 | None | work # legacy_v2 | 2.0 | work # legacy_v2 | 2.1+ | don't if min_version is None or min_version == '2.0': schema_validator = validators._SchemaValidator( schema, legacy_v2, is_body) schema_validator.validate(target) return True elif ver.matches(min_ver, max_ver): # Only validate against the schema if it lies within # the version range specified. Note that if both min # and max are not specified the validator will always # be run. schema_validator = validators._SchemaValidator( schema, legacy_v2, is_body) schema_validator.validate(target) return True return False def schema(request_body_schema, min_version=None, max_version=None): """Register a schema to validate request body. Registered schema will be used for validating request body just before API method executing. :argument dict request_body_schema: a schema to validate request body """ def add_validator(func): @functools.wraps(func) def wrapper(*args, **kwargs): _schema_validation_helper(request_body_schema, kwargs['body'], min_version, max_version, args, kwargs) return func(*args, **kwargs) return wrapper return add_validator def _strip_additional_query_parameters(schema, req): """Strip the additional properties from the req.GET. This helper method assumes the JSON-Schema is only described as a dict without nesting. This method should be called after query parameters pass the JSON-Schema validation. It also means this method only can be called after _schema_validation_helper return `True`. """ additional_properties = schema.get('addtionalProperties', True) pattern_regexes = [] patterns = schema.get('patternProperties', None) if patterns: for regex in patterns: pattern_regexes.append(re.compile(regex)) if additional_properties: # `req.GET.keys` will return duplicated keys for multiple values # parameters. To get rid of duplicated keys, convert it to set. for param in set(req.GET.keys()): if param not in schema['properties'].keys(): # keys that can match the patternProperties will be # retained and handle latter. if not (list(regex for regex in pattern_regexes if regex.match(param))): del req.GET[param] def query_schema(query_params_schema, min_version=None, max_version=None): """Register a schema to validate request query parameters. Registered schema will be used for validating request query params just before API method executing. :param query_params_schema: A dict, the JSON-Schema for validating the query parameters. :param min_version: A string of two numerals. X.Y indicating the minimum version of the JSON-Schema to validate against. :param max_version: A string of two numerals. X.Y indicating the maximum version of the JSON-Schema against to. """ def add_validator(func): @functools.wraps(func) def wrapper(*args, **kwargs): # The request object is always the second argument. # However numerous unittests pass in the request object # via kwargs instead so we handle that as well. # TODO(cyeoh): cleanup unittests so we don't have to # to do this if 'req' in kwargs: req = kwargs['req'] else: req = args[1] # NOTE(Kevin_Zheng): The webob package throws UnicodeError when # param cannot be decoded. Catch this and raise HTTP 400. 
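# NOTE(editor): illustrative sketch, not part of the original module. It shows
# how the schema() and query_schema() decorators defined in this module are
# attached to controller methods; the controller and both schemas are made up
# for the example.

from nova.api import validation
from nova.api.openstack import wsgi

create_demo = {
    'type': 'object',
    'properties': {
        'demo': {
            'type': 'object',
            'properties': {
                'name': {'type': 'string', 'maxLength': 255},
            },
            'required': ['name'],
            'additionalProperties': False,
        },
    },
    'required': ['demo'],
    'additionalProperties': False,
}

list_demo_query = {
    'type': 'object',
    'properties': {
        # query parameters arrive as lists of strings (req.GET.dict_of_lists)
        'limit': {'type': 'array', 'items': {'type': 'string'}},
    },
    'additionalProperties': True,
}


class DemoController(wsgi.Controller):

    @validation.schema(create_demo, min_version='2.1')
    def create(self, req, body):
        return {'demo': body['demo']}

    @validation.query_schema(list_demo_query, min_version='2.1')
    def index(self, req):
        return {'demos': []}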
try: query_dict = req.GET.dict_of_lists() except UnicodeDecodeError: msg = _('Query string is not UTF-8 encoded') raise exception.ValidationError(msg) if _schema_validation_helper(query_params_schema, query_dict, min_version, max_version, args, kwargs, is_body=False): # NOTE(alex_xu): The additional query parameters were stripped # out when `additionalProperties=True`. This is for backward # compatible with v2.1 API and legacy v2 API. But it makes the # system more safe for no more unexpected parameters pass down # to the system. In microversion 2.75, we have blocked all of # those additional parameters. _strip_additional_query_parameters(query_params_schema, req) return func(*args, **kwargs) return wrapper return add_validator ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2904704 nova-21.2.4/nova/api/validation/extra_specs/0000775000175000017500000000000000000000000020754 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/validation/extra_specs/__init__.py0000664000175000017500000000000000000000000023053 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/validation/extra_specs/accel.py0000664000175000017500000000217500000000000022402 0ustar00zuulzuul00000000000000# Copyright 2020 Red Hat, Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Validators for ``accel`` namespaced extra specs.""" from nova.api.validation.extra_specs import base EXTRA_SPEC_VALIDATORS = [ base.ExtraSpecValidator( name='accel:device_profile', description=( 'The name of a device profile to configure for the instance. ' 'A device profile may be viewed as a "flavor for devices".' ), value={ 'type': str, 'description': 'A name of a device profile.', }, ), ] def register(): return EXTRA_SPEC_VALIDATORS ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/validation/extra_specs/aggregate_instance_extra_specs.py0000664000175000017500000000464000000000000027544 0ustar00zuulzuul00000000000000# Copyright 2020 Red Hat, Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Validators for (preferrably) ``aggregate_instance_extra_specs`` namespaced extra specs. These are used by the ``AggregateInstanceExtraSpecsFilter`` scheduler filter. 
Note that we explicitly do not support the unnamespaced variant of extra specs since these have been deprecated since Havana (commit fbedf60a432). Users that insist on using these can disable extra spec validation. """ from nova.api.validation.extra_specs import base DESCRIPTION = """\ Specify metadata that must be present on the aggregate of a host. If this metadata is not present, the host will be rejected. Requires the ``AggregateInstanceExtraSpecsFilter`` scheduler filter. The value can be one of the following: * ``=`` (equal to or greater than as a number; same as vcpus case) * ``==`` (equal to as a number) * ``!=`` (not equal to as a number) * ``>=`` (greater than or equal to as a number) * ``<=`` (less than or equal to as a number) * ``s==`` (equal to as a string) * ``s!=`` (not equal to as a string) * ``s>=`` (greater than or equal to as a string) * ``s>`` (greater than as a string) * ``s<=`` (less than or equal to as a string) * ``s<`` (less than as a string) * ``<in>`` (substring) * ``<all-in>`` (all elements contained in collection) * ``<or>`` (find one of these) * A specific value, e.g. ``true``, ``123``, ``testing`` """ EXTRA_SPEC_VALIDATORS = [ base.ExtraSpecValidator( name='aggregate_instance_extra_specs:{key}', description=DESCRIPTION, parameters=[ { 'name': 'key', 'description': 'The metadata key to match on', 'pattern': r'.+', }, ], value={ # this is totally arbitrary, since we need to support specific # values 'type': str, }, ), ] def register(): return EXTRA_SPEC_VALIDATORS nova-21.2.4/nova/api/validation/extra_specs/base.py # Copyright 2020 Red Hat, Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import dataclasses import re import typing as ty from oslo_utils import strutils from nova import exception @dataclasses.dataclass class ExtraSpecValidator: name: str description: str value: ty.Dict[str, ty.Any] deprecated: bool = False parameters: ty.List[ty.Dict[str, ty.Any]] = dataclasses.field( default_factory=list ) name_regex: str = None value_regex: str = None def __post_init__(self): # generate a regex for the name name_regex = self.name # replace the human-readable patterns with named regex groups; this # will transform e.g.
'hw:numa_cpus.{id}' to 'hw:numa_cpus.(?P\d+)' for param in self.parameters: pattern = f'(?P<{param["name"]}>{param["pattern"]})' name_regex = name_regex.replace(f'{{{param["name"]}}}', pattern) self.name_regex = name_regex # ...and do the same for the value, but only if we're using strings if self.value['type'] not in (int, str, bool): raise ValueError( f"Unsupported parameter type '{self.value['type']}'" ) value_regex = None if self.value['type'] == str and self.value.get('pattern'): value_regex = self.value['pattern'] self.value_regex = value_regex def _validate_str(self, value): if 'pattern' in self.value: value_match = re.fullmatch(self.value_regex, value) if not value_match: raise exception.ValidationError( f"Validation failed; '{value}' is not of the format " f"'{self.value_regex}'." ) elif 'enum' in self.value: if value not in self.value['enum']: values = ', '.join(str(x) for x in self.value['enum']) raise exception.ValidationError( f"Validation failed; '{value}' is not one of: {values}." ) def _validate_int(self, value): try: value = int(value) except ValueError: raise exception.ValidationError( f"Validation failed; '{value}' is not a valid integer value." ) if 'max' in self.value and self.value['max'] < value: raise exception.ValidationError( f"Validation failed; '{value}' is greater than the max value " f"of '{self.value['max']}'." ) if 'min' in self.value and self.value['min'] > value: raise exception.ValidationError( f"Validation failed; '{value}' is less than the min value " f"of '{self.value['min']}'." ) def _validate_bool(self, value): try: strutils.bool_from_string(value, strict=True) except ValueError: raise exception.ValidationError( f"Validation failed; '{value}' is not a valid boolean-like " f"value." ) def validate(self, name, value): name_match = re.fullmatch(self.name_regex, name) if not name_match: # NOTE(stephenfin): This is mainly here for testing purposes raise exception.ValidationError( f"Validation failed; expected a name of format '{self.name}' " f"but got '{name}'." ) if self.value['type'] == int: self._validate_int(value) elif self.value['type'] == bool: self._validate_bool(value) else: # str self._validate_str(value) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/validation/extra_specs/capabilities.py0000664000175000017500000000772300000000000023770 0ustar00zuulzuul00000000000000# Copyright 2020 Red Hat, Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Validators for (preferrably) ``capabilities`` namespaced extra specs. These are used by the ``ComputeCapabilitiesFilter`` scheduler filter. Note that we explicitly do not allow the unnamespaced variant of extra specs since this has been deprecated since Grizzly (commit 8ce8e4b6c0d). Users that insist on using these can disable extra spec validation. 
For all extra specs, the value can be one of the following: * ``=`` (equal to or greater than as a number; same as vcpus case) * ``==`` (equal to as a number) * ``!=`` (not equal to as a number) * ``>=`` (greater than or equal to as a number) * ``<=`` (less than or equal to as a number) * ``s==`` (equal to as a string) * ``s!=`` (not equal to as a string) * ``s>=`` (greater than or equal to as a string) * ``s>`` (greater than as a string) * ``s<=`` (less than or equal to as a string) * ``s<`` (less than as a string) * ``<in>`` (substring) * ``<all-in>`` (all elements contained in collection) * ``<or>`` (find one of these) * A specific value, e.g. ``true``, ``123``, ``testing`` Examples are: ``>= 5``, ``s== 2.1.0``, ``<in> gcc``, ``<all-in> aes mmx``, and ``<or> fpu gpu`` """ from nova.api.validation.extra_specs import base DESCRIPTION = """\ Specify that the '{capability}' capability provided by the host compute service satisfies the provided filter value. Requires the ``ComputeCapabilitiesFilter`` scheduler filter. """ EXTRA_SPEC_VALIDATORS = [] # non-nested capabilities (from 'nova.objects.compute_node.ComputeNode' and # 'nova.scheduler.host_manager.HostState') for capability in ( 'id', 'uuid', 'service_id', 'host', 'vcpus', 'memory_mb', 'local_gb', 'vcpus_used', 'memory_mb_used', 'local_gb_used', 'hypervisor_type', 'hypervisor_version', 'hypervisor_hostname', 'free_ram_mb', 'free_disk_gb', 'current_workload', 'running_vms', 'disk_available_least', 'host_ip', 'mapped', 'cpu_allocation_ratio', 'ram_allocation_ratio', 'disk_allocation_ratio', ) + ( 'total_usable_ram_mb', 'total_usable_disk_gb', 'disk_mb_used', 'free_disk_mb', 'vcpus_total', 'vcpus_used', 'num_instances', 'num_io_ops', 'failed_builds', 'aggregates', 'cell_uuid', 'updated', ): EXTRA_SPEC_VALIDATORS.append( base.ExtraSpecValidator( name=f'capabilities:{capability}', description=DESCRIPTION.format(capability=capability), value={ # this is totally arbitrary, since we need to support specific # values 'type': str, }, ), ) # nested capabilities (from 'nova.objects.compute_node.ComputeNode' and # 'nova.scheduler.host_manager.HostState') for capability in ( 'cpu_info', 'metrics', 'stats', 'numa_topology', 'supported_hv_specs', 'pci_device_pools', ) + ( 'nodename', 'pci_stats', 'supported_instances', 'limits', 'instances', ): EXTRA_SPEC_VALIDATORS.extend([ base.ExtraSpecValidator( name=f'capabilities:{capability}{{filter}}', description=DESCRIPTION.format(capability=capability), parameters=[ { 'name': 'filter', # this is optional, but if it's present it must be preceded # by ':' 'pattern': r'(:\w+)*', } ], value={ 'type': str, }, ), ]) def register(): return EXTRA_SPEC_VALIDATORS nova-21.2.4/nova/api/validation/extra_specs/hw.py # Copyright 2020 Red Hat, Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
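# NOTE(editor): illustrative sketch, not part of the original module. It
# exercises the ExtraSpecValidator helper from
# nova.api.validation.extra_specs.base (defined earlier in this package)
# directly; the 'demo:level' extra spec is made up for the example.

from nova.api.validation.extra_specs import base

demo_validator = base.ExtraSpecValidator(
    name='demo:level',
    description='A made-up extra spec used only for illustration.',
    value={
        'type': int,
        'min': 0,
        'max': 10,
    },
)

demo_validator.validate('demo:level', '5')    # passes
# demo_validator.validate('demo:level', '42') # would raise ValidationError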
"""Validators for ``hw`` namespaced extra specs.""" from nova.api.validation.extra_specs import base realtime_validators = [ base.ExtraSpecValidator( name='hw:cpu_realtime', description=( 'Determine whether realtime mode should be enabled for the ' 'instance or not. Only supported by the libvirt driver.' ), value={ 'type': bool, 'description': 'Whether to enable realtime priority.', }, ), base.ExtraSpecValidator( name='hw:cpu_realtime_mask', description=( 'A exclusion mask of CPUs that should not be enabled for realtime.' ), value={ 'type': str, # NOTE(stephenfin): Yes, these things *have* to start with '^' 'pattern': r'\^\d+((-\d+)?(,\^?\d+(-\d+)?)?)*', }, ), ] hide_hypervisor_id_validator = [ base.ExtraSpecValidator( name='hw:hide_hypervisor_id', description=( 'Determine whether the hypervisor ID should be hidden from the ' 'guest. Only supported by the libvirt driver.' ), value={ 'type': bool, 'description': 'Whether to hide the hypervisor ID.', }, ) ] cpu_policy_validators = [ base.ExtraSpecValidator( name='hw:cpu_policy', description=( 'The policy to apply when determining what host CPUs the guest ' 'CPUs can run on. If ``shared`` (default), guest CPUs can be ' 'overallocated but cannot float across host cores. If ' '``dedicated``, guest CPUs cannot be overallocated but are ' 'individually pinned to their own host core.' ), value={ 'type': str, 'description': 'The CPU policy.', 'enum': [ 'dedicated', 'shared' ], }, ), base.ExtraSpecValidator( name='hw:cpu_thread_policy', description=( 'The policy to apply when determining whether the destination ' 'host can have hardware threads enabled or not. If ``prefer`` ' '(default), hosts with hardware threads will be preferred. If ' '``require``, hosts with hardware threads will be required. If ' '``isolate``, hosts with hardware threads will be forbidden.' ), value={ 'type': str, 'description': 'The CPU thread policy.', 'enum': [ 'prefer', 'isolate', 'require', ], }, ), base.ExtraSpecValidator( name='hw:emulator_threads_policy', description=( 'The policy to apply when determining whether emulator threads ' 'should be offloaded to a separate isolated core or to a pool ' 'of shared cores. If ``share``, emulator overhead threads will ' 'be offloaded to a pool of shared cores. If ``isolate``, ' 'emulator overhead threads will be offloaded to their own core.' ), value={ 'type': str, 'description': 'The emulator thread policy.', 'enum': [ 'isolate', 'share', ], }, ), ] hugepage_validators = [ base.ExtraSpecValidator( name='hw:mem_page_size', description=( 'The size of memory pages to allocate to the guest with. Can be ' 'one of the three alias - ``large``, ``small`` or ``any``, - or ' 'an actual size. Only supported by the libvirt virt driver.' ), value={ 'type': str, 'description': 'The size of memory page to allocate', 'pattern': r'(large|small|any|\d+([kKMGT]i?)?(b|bit|B)?)', }, ), ] numa_validators = [ base.ExtraSpecValidator( name='hw:numa_nodes', description=( 'The number of virtual NUMA nodes to allocate to configure the ' 'guest with. Each virtual NUMA node will be mapped to a unique ' 'host NUMA node. Only supported by the libvirt virt driver.' ), value={ 'type': int, 'description': 'The number of virtual NUMA nodes to allocate', 'min': 1, }, ), base.ExtraSpecValidator( name='hw:numa_cpus.{id}', description=( 'A mapping of **guest** CPUs to the **guest** NUMA node ' 'identified by ``{id}``. 
This can be used to provide asymmetric ' 'CPU-NUMA allocation and is necessary where the number of guest ' 'NUMA nodes is not a factor of the number of guest CPUs.' ), parameters=[ { 'name': 'id', 'pattern': r'\d+', # positive integers 'description': 'The ID of the **guest** NUMA node.', }, ], value={ 'type': str, 'description': ( 'The guest CPUs, in the form of a CPU map, to allocate to the ' 'guest NUMA node identified by ``{id}``.' ), 'pattern': r'\^?\d+((-\d+)?(,\^?\d+(-\d+)?)?)*', }, ), base.ExtraSpecValidator( name='hw:numa_mem.{id}', description=( 'A mapping of **guest** memory to the **guest** NUMA node ' 'identified by ``{id}``. This can be used to provide asymmetric ' 'memory-NUMA allocation and is necessary where the number of ' 'guest NUMA nodes is not a factor of the total guest memory.' ), parameters=[ { 'name': 'id', 'pattern': r'\d+', # positive integers 'description': 'The ID of the **guest** NUMA node.', }, ], value={ 'type': int, 'description': ( 'The guest memory, in MB, to allocate to the guest NUMA node ' 'identified by ``{id}``.' ), 'min': 1, }, ), base.ExtraSpecValidator( name='hw:pci_numa_affinity_policy', description=( 'The NUMA affinity policy of any PCI passthrough devices or ' 'SR-IOV network interfaces attached to the instance.' ), value={ 'type': str, 'description': 'The PCI NUMA affinity policy', 'enum': [ 'required', 'preferred', 'legacy', ], }, ), ] cpu_topology_validators = [ base.ExtraSpecValidator( name='hw:cpu_sockets', description=( 'The number of virtual CPU sockets to emulate in the guest ' 'CPU topology.' ), value={ 'type': int, 'description': 'A number of virtual CPU sockets', 'min': 1, }, ), base.ExtraSpecValidator( name='hw:cpu_cores', description=( 'The number of virtual CPU cores to emulate per socket in the ' 'guest CPU topology.' ), value={ 'type': int, 'description': 'A number of virtual CPU cores', 'min': 1, }, ), base.ExtraSpecValidator( name='hw:cpu_threads', description=( 'The number of virtual CPU threads to emulate per core in the ' 'guest CPU topology.' ), value={ 'type': int, 'description': 'A number of virtual CPU threads', 'min': 1, }, ), base.ExtraSpecValidator( name='hw:max_cpu_sockets', description=( 'The max number of virtual CPU sockets to emulate in the ' 'guest CPU topology. This is used to limit the topologies that ' 'can be requested by an image and will be used to validate the ' '``hw_cpu_sockets`` image metadata property.' ), value={ 'type': int, 'description': 'A number of virtual CPU sockets', 'min': 1, }, ), base.ExtraSpecValidator( name='hw:max_cpu_cores', description=( 'The max number of virtual CPU cores to emulate per socket in the ' 'guest CPU topology. This is used to limit the topologies that ' 'can be requested by an image and will be used to validate the ' '``hw_cpu_cores`` image metadata property.' ), value={ 'type': int, 'description': 'A number of virtual CPU cores', 'min': 1, }, ), base.ExtraSpecValidator( name='hw:max_cpu_threads', description=( 'The max number of virtual CPU threads to emulate per core in the ' 'guest CPU topology. This is used to limit the topologies that ' 'can be requested by an image and will be used to validate the ' '``hw_cpu_threads`` image metadata property.' ), value={ 'type': int, 'description': 'A number of virtual CPU threads', 'min': 1, }, ), ] feature_flag_validators = [ # TODO(stephenfin): Consider deprecating and moving this to the 'os:' # namespace base.ExtraSpecValidator( name='hw:boot_menu', description=( 'Whether to show a boot menu when booting the guest.'
), value={ 'type': bool, 'description': 'Whether to enable the boot menu', }, ), base.ExtraSpecValidator( name='hw:mem_encryption', description=( 'Whether to enable memory encryption for the guest. Only ' 'supported by the libvirt driver on hosts with AMD SEV support.' ), value={ 'type': bool, 'description': 'Whether to enable memory encryption', }, ), base.ExtraSpecValidator( name='hw:pmem', description=( 'A comma-separated list of ``$LABEL``\\ s defined in config for ' 'vPMEM devices.' ), value={ 'type': str, 'description': ( 'A comma-separated list of valid resource class names.' ), 'pattern': '([a-zA-Z0-9_]+(,)?)+', }, ), base.ExtraSpecValidator( name='hw:pmu', description=( 'Whether to enable the Performance Monitory Unit (PMU) for the ' 'guest. Only supported by the libvirt driver.' ), value={ 'type': bool, 'description': 'Whether to enable the PMU', }, ), base.ExtraSpecValidator( name='hw:serial_port_count', description=( 'The number of serial ports to allocate to the guest. Only ' 'supported by the libvirt virt driver.' ), value={ 'type': int, 'min': 0, 'description': 'The number of serial ports to allocate', }, ), base.ExtraSpecValidator( name='hw:watchdog_action', description=( 'The action to take when the watchdog timer is kicked. Only ' 'supported by the libvirt virt driver.' ), value={ 'type': str, 'description': 'The action to take', 'enum': [ 'none', 'pause', 'poweroff', 'reset', 'disabled', ], }, ), ] def register(): return ( realtime_validators + hide_hypervisor_id_validator + cpu_policy_validators + hugepage_validators + numa_validators + cpu_topology_validators + feature_flag_validators ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/validation/extra_specs/hw_rng.py0000664000175000017500000000336200000000000022616 0ustar00zuulzuul00000000000000# Copyright 2020 Red Hat, Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Validators for ``hw_rng`` namespaced extra specs.""" from nova.api.validation.extra_specs import base # TODO(stephenfin): Move these to the 'hw:' namespace EXTRA_SPEC_VALIDATORS = [ base.ExtraSpecValidator( name='hw_rng:allowed', description=( 'Whether to disable configuration of a random number generator ' 'in their image. Before 21.0.0 (Ussuri), random number generators ' 'were not enabled by default so this was used to determine ' 'whether to **enable** configuration.' ), value={ 'type': bool, }, ), base.ExtraSpecValidator( name='hw_rng:rate_bytes', description=( 'The allowed amount of bytes for the guest to read from the ' 'host\'s entropy per period.' 
), value={ 'type': int, 'min': 0, }, ), base.ExtraSpecValidator( name='hw_rng:rate_period', description='The duration of a read period in seconds.', value={ 'type': int, 'min': 0, }, ), ] def register(): return EXTRA_SPEC_VALIDATORS ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/validation/extra_specs/hw_video.py0000664000175000017500000000237300000000000023137 0ustar00zuulzuul00000000000000# Copyright 2020 Red Hat, Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Validators for ``hw_video`` namespaced extra specs.""" from nova.api.validation.extra_specs import base # TODO(stephenfin): Move these to the 'hw:' namespace EXTRA_SPEC_VALIDATORS = [ base.ExtraSpecValidator( name='hw_video:ram_max_mb', description=( 'The maximum amount of memory the user can request using the ' '``hw_video_ram`` image metadata property, which represents the ' 'video memory that the guest OS will see. This has no effect for ' 'vGPUs.' ), value={ 'type': int, 'min': 0, }, ), ] def register(): return EXTRA_SPEC_VALIDATORS ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/validation/extra_specs/null.py0000664000175000017500000000335200000000000022303 0ustar00zuulzuul00000000000000# Copyright 2020 Red Hat, Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Validators for non-namespaced extra specs.""" from nova.api.validation.extra_specs import base EXTRA_SPEC_VALIDATORS = [ base.ExtraSpecValidator( name='hide_hypervisor_id', description=( 'Determine whether the hypervisor ID should be hidden from the ' 'guest. Only supported by the libvirt driver. This extra spec is ' 'not compatible with the AggregateInstanceExtraSpecsFilter ' 'scheduler filter. The ``hw:hide_hypervisor_id`` extra spec ' 'should be used instead.' ), value={ 'type': bool, 'description': 'Whether to hide the hypervisor ID.', }, deprecated=True, ), # TODO(stephenfin): This should be moved to a namespace base.ExtraSpecValidator( name='group_policy', description=( 'The group policy to apply when using the granular resource ' 'request syntax.' 
), value={ 'type': str, 'enum': [ 'isolate', 'none', ], }, ), ] def register(): return EXTRA_SPEC_VALIDATORS ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/validation/extra_specs/os.py0000664000175000017500000000555400000000000021760 0ustar00zuulzuul00000000000000# Copyright 2020 Red Hat, Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Validators for ``os`` namespaced extra specs.""" from nova.api.validation.extra_specs import base # TODO(stephenfin): Most of these belong in the 'hw:' or 'hyperv:' namespace # and should be moved. EXTRA_SPEC_VALIDATORS = [ base.ExtraSpecValidator( name='os:secure_boot', description=( 'Determine whether secure boot is enabled or not. Currently only ' 'supported by the HyperV driver.' ), value={ 'type': str, 'description': 'Whether secure boot is required or not', 'enum': [ 'disabled', 'required', ], }, ), base.ExtraSpecValidator( name='os:resolution', description=( 'Guest VM screen resolution size. Only supported by the HyperV ' 'driver.' ), value={ 'type': str, 'description': 'The chosen resolution', 'enum': [ '1024x768', '1280x1024', '1600x1200', '1920x1200', '2560x1600', '3840x2160', ], }, ), base.ExtraSpecValidator( name='os:monitors', description=( 'Guest VM number of monitors. Only supported by the HyperV driver.' ), value={ 'type': int, 'description': 'The number of monitors enabled', 'min': 1, 'max': 8, }, ), # TODO(stephenfin): Consider merging this with the 'hw_video_ram' image # metadata property or adding a 'hw:video_ram' extra spec that works for # both Hyper-V and libvirt. base.ExtraSpecValidator( name='os:vram', description=( 'Guest VM VRAM amount. Only supported by the HyperV driver.' ), # NOTE(stephenfin): This is really an int, but because there's a # limited range of options we treat it as a string value={ 'type': str, 'description': 'Amount of VRAM to allocate to instance', 'enum': [ '64', '128', '256', '512', '1024', ], }, ), ] def register(): return EXTRA_SPEC_VALIDATORS ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/validation/extra_specs/pci_passthrough.py0000664000175000017500000000236700000000000024540 0ustar00zuulzuul00000000000000# Copyright 2020 Red Hat, Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
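# NOTE(editor): illustrative sketch, not part of the original module. It shows
# a flavor extra spec in the comma-separated '$alias:$number' format accepted
# by the 'pci_passthrough:alias' validator defined below; the alias names are
# made up.

demo_extra_specs = {
    'pci_passthrough:alias': 'fast-nic:2,crypto-accel:1',
}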
"""Validators for ``pci_passthrough`` namespaced extra specs.""" from nova.api.validation.extra_specs import base EXTRA_SPEC_VALIDATORS = [ base.ExtraSpecValidator( name='pci_passthrough:alias', description=( 'Specify the number of ``$alias`` PCI device(s) to attach to the ' 'instance. Must be of format ``$alias:$number``. Use commas to ' 'specify multiple values.' ), value={ 'type': str, # one or more comma-separated '$alias:$num' values 'pattern': r'[^:]+:\d+(?:\s*,\s*[^:]+:\d+)*', }, ), ] def register(): return EXTRA_SPEC_VALIDATORS ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/validation/extra_specs/powervm.py0000664000175000017500000002422400000000000023031 0ustar00zuulzuul00000000000000# Copyright 2020 Red Hat, Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Validators for ``powervm`` namespaced extra specs. These were all taken from the IBM documentation. https://www.ibm.com/support/knowledgecenter/SSXK2N_1.4.4/com.ibm.powervc.standard.help.doc/powervc_pg_flavorsextraspecs_hmc.html """ from nova.api.validation.extra_specs import base # TODO(stephenfin): A lot of these seem to overlap with existing 'hw:' extra # specs and could be deprecated in favour of those. EXTRA_SPEC_VALIDATORS = [ base.ExtraSpecValidator( name='powervm:min_mem', description=( 'Minimum memory (MB). If you do not specify the value, the value ' 'is defaulted to the value for ``memory_mb``.' ), value={ 'type': int, 'min': 256, 'description': 'Integer >=256 divisible by LMB size of the target', }, ), base.ExtraSpecValidator( name='powervm:max_mem', description=( 'Maximum memory (MB). If you do not specify the value, the value ' 'is defaulted to the value for ``memory_mb``.' ), value={ 'type': int, 'min': 256, 'description': 'Integer >=256 divisible by LMB size of the target', }, ), base.ExtraSpecValidator( name='powervm:min_vcpu', description=( 'Minimum virtual processors. Minimum resource that is required ' 'for LPAR to boot is 1. The maximum value can be equal to the ' 'value, which is set to vCPUs. If you specify the value of the ' 'attribute, you must also specify value of powervm:max_vcpu. ' 'Defaults to value set for vCPUs.' ), value={ 'type': int, 'min': 1, }, ), base.ExtraSpecValidator( name='powervm:max_vcpu', description=( 'Minimum virtual processors. Minimum resource that is required ' 'for LPAR to boot is 1. The maximum value can be equal to the ' 'value, which is set to vCPUs. If you specify the value of the ' 'attribute, you must also specify value of powervm:max_vcpu. ' 'Defaults to value set for vCPUs.' ), value={ 'type': int, 'min': 1, }, ), base.ExtraSpecValidator( name='powervm:proc_units', description=( 'The wanted ``proc_units``. The value for the attribute cannot be ' 'less than 1/10 of the value that is specified for Virtual ' 'CPUs (vCPUs) for hosts with firmware level 7.5 or earlier and ' '1/20 of the value that is specified for vCPUs for hosts with ' 'firmware level 7.6 or later. 
If the value is not specified ' 'during deployment, it is defaulted to vCPUs * 0.5.' ), value={ 'type': str, 'pattern': r'\d+\.\d+', 'description': ( 'Float (divisible by 0.1 for hosts with firmware level 7.5 or ' 'earlier and 0.05 for hosts with firmware level 7.6 or later)' ), }, ), base.ExtraSpecValidator( name='powervm:min_proc_units', description=( 'Minimum ``proc_units``. The minimum value for the attribute is ' '0.1 for hosts with firmware level 7.5 or earlier and 0.05 for ' 'hosts with firmware level 7.6 or later. The maximum value must ' 'be equal to the maximum value of ``powervm:proc_units``. If you ' 'specify the attribute, you must also specify ' '``powervm:proc_units``, ``powervm:max_proc_units``, ' '``powervm:min_vcpu``, `powervm:max_vcpu``, and ' '``powervm:dedicated_proc``. Set the ``powervm:dedicated_proc`` ' 'to false.' '\n' 'The value for the attribute cannot be less than 1/10 of the ' 'value that is specified for powervm:min_vcpu for hosts with ' 'firmware level 7.5 or earlier and 1/20 of the value that is ' 'specified for ``powervm:min_vcpu`` for hosts with firmware ' 'level 7.6 or later. If you do not specify the value of the ' 'attribute during deployment, it is defaulted to equal the value ' 'of ``powervm:proc_units``.' ), value={ 'type': str, 'pattern': r'\d+\.\d+', 'description': ( 'Float (divisible by 0.1 for hosts with firmware level 7.5 or ' 'earlier and 0.05 for hosts with firmware level 7.6 or later)' ), }, ), base.ExtraSpecValidator( name='powervm:max_proc_units', description=( 'Maximum ``proc_units``. The minimum value can be equal to `` ' '``powervm:proc_units``. The maximum value for the attribute ' 'cannot be more than the value of the host for maximum allowed ' 'processors per partition. If you specify this attribute, you ' 'must also specify ``powervm:proc_units``, ' '``powervm:min_proc_units``, ``powervm:min_vcpu``, ' '``powervm:max_vcpu``, and ``powervm:dedicated_proc``. Set the ' '``powervm:dedicated_proc`` to false.' '\n' 'The value for the attribute cannot be less than 1/10 of the ' 'value that is specified for powervm:max_vcpu for hosts with ' 'firmware level 7.5 or earlier and 1/20 of the value that is ' 'specified for ``powervm:max_vcpu`` for hosts with firmware ' 'level 7.6 or later. If you do not specify the value of the ' 'attribute during deployment, the value is defaulted to equal the ' 'value of ``powervm:proc_units``.' ), value={ 'type': str, 'pattern': r'\d+\.\d+', 'description': ( 'Float (divisible by 0.1 for hosts with firmware level 7.5 or ' 'earlier and 0.05 for hosts with firmware level 7.6 or later)' ), }, ), base.ExtraSpecValidator( name='powervm:dedicated_proc', description=( 'Use dedicated processors. The attribute defaults to false.' ), value={ 'type': bool, }, ), base.ExtraSpecValidator( name='powervm:shared_weight', description=( 'Shared processor weight. When ``powervm:dedicated_proc`` is set ' 'to true and ``powervm:uncapped`` is also set to true, the value ' 'of the attribute defaults to 128.' ), value={ 'type': int, 'min': 0, 'max': 255, }, ), base.ExtraSpecValidator( name='powervm:availability_priority', description=( 'Availability priority. The attribute priority of the server if ' 'there is a processor failure and there are not enough resources ' 'for all servers. VIOS and i5 need to remain high priority ' 'default of 191. The value of the attribute defaults to 128.' 
), value={ 'type': int, 'min': 0, 'max': 255, }, ), base.ExtraSpecValidator( name='powervm:uncapped', description=( 'LPAR can use unused processor cycles that are beyond or exceed ' 'the wanted setting of the attribute. This attribute is ' 'supported only when ``powervm:dedicated_proc`` is set to false. ' 'When ``powervm:dedicated_proc`` is set to false, ' '``powervm:uncapped`` defaults to true.' ), value={ 'type': bool, }, ), base.ExtraSpecValidator( name='powervm:dedicated_sharing_mode', description=( 'Sharing mode for dedicated processors. The attribute is ' 'supported only when ``powervm:dedicated_proc`` is set to true.' ), value={ 'type': str, 'enum': ( 'share_idle_procs', 'keep_idle_procs', 'share_idle_procs_active', 'share_idle_procs_always', ) }, ), base.ExtraSpecValidator( name='powervm:processor_compatibility', description=( 'A processor compatibility mode is a value that is assigned to a ' 'logical partition by the hypervisor that specifies the processor ' 'environment in which the logical partition can successfully ' 'operate.' ), value={ 'type': str, 'enum': ( 'default', 'POWER6', 'POWER6+', 'POWER6_Enhanced', 'POWER6+_Enhanced', 'POWER7', 'POWER8' ), }, ), base.ExtraSpecValidator( name='powervm:shared_proc_pool_name', description=( 'Specifies the shared processor pool to be targeted during ' 'deployment of a virtual machine.' ), value={ 'type': str, 'description': 'String with upper limit of 14 characters', }, ), base.ExtraSpecValidator( name='powervm:srr_capability', description=( 'If the value of simplified remote restart capability is set to ' 'true for the LPAR, you can remote restart the LPAR to supported ' 'CEC or host when the source CEC or host is down. The attribute ' 'defaults to false.' ), value={ 'type': bool, }, ), ] def register(): return EXTRA_SPEC_VALIDATORS ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/validation/extra_specs/quota.py0000664000175000017500000000615600000000000022467 0ustar00zuulzuul00000000000000# Copyright 2020 Red Hat, Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Validators for ``quota`` namespaced extra specs.""" from nova.api.validation.extra_specs import base EXTRA_SPEC_VALIDATORS = [] # CPU, memory, disk IO and VIF quotas (VMWare) for resource in ('cpu', 'memory', 'disk_io', 'vif'): for key, fmt in ( ('limit', int), ('reservation', int), ('shares_level', str), ('shares_share', int) ): EXTRA_SPEC_VALIDATORS.append( base.ExtraSpecValidator( name=f'quota:{resource}_{key}', description=( 'The {} for {}. Only supported by the VMWare virt ' 'driver.'.format(' '.join(key.split('_')), resource) ), value={ 'type': fmt, }, ) ) # CPU quotas (libvirt) for key in ('shares', 'period', 'quota'): EXTRA_SPEC_VALIDATORS.append( base.ExtraSpecValidator( name=f'quota:cpu_{key}', description=( f'The quota {key} for CPU. Only supported by the libvirt ' f'virt driver.' 
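            # i.e. this loop generates validators named quota:cpu_shares,
            # quota:cpu_period and quota:cpu_quota.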
), value={ 'type': int, 'min': 0, }, ) ) # Disk quotas (libvirt, HyperV) for stat in ('read', 'write', 'total'): for metric in ('bytes', 'iops'): EXTRA_SPEC_VALIDATORS.append( base.ExtraSpecValidator( name=f'quota:disk_{stat}_{metric}_sec', # NOTE(stephenfin): HyperV supports disk_total_{metric}_sec # too; update description=( f'The quota {stat} {metric} for disk. Only supported ' f'by the libvirt virt driver.' ), value={ 'type': int, 'min': 0, }, ) ) # VIF quotas (libvirt) # TODO(stephenfin): Determine whether this should be deprecated now that # nova-network is dead for stat in ('inbound', 'outbound'): for metric in ('average', 'peak', 'burst'): EXTRA_SPEC_VALIDATORS.append( base.ExtraSpecValidator( name=f'quota:vif_{stat}_{metric}', description=( f'The quota {stat} {metric} for VIF. Only supported ' f'by the libvirt virt driver.' ), value={ 'type': int, 'min': 0, }, ) ) def register(): return EXTRA_SPEC_VALIDATORS ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/validation/extra_specs/resources.py0000664000175000017500000000347300000000000023347 0ustar00zuulzuul00000000000000# Copyright 2020 Red Hat, Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Validators for ``resources`` namespaced extra specs.""" import os_resource_classes from nova.api.validation.extra_specs import base EXTRA_SPEC_VALIDATORS = [] for resource_class in os_resource_classes.STANDARDS: EXTRA_SPEC_VALIDATORS.append( base.ExtraSpecValidator( name=f'resources{{group}}:{resource_class}', description=f'The amount of resource {resource_class} requested.', value={ 'type': int, }, parameters=[ { 'name': 'group', 'pattern': r'([a-zA-Z0-9_-]{1,64})?', }, ], ) ) EXTRA_SPEC_VALIDATORS.append( base.ExtraSpecValidator( name='resources{group}:CUSTOM_{resource}', description=( 'The amount of resource CUSTOM_{resource} requested.' ), value={ 'type': int, }, parameters=[ { 'name': 'group', 'pattern': r'([a-zA-Z0-9_-]{1,64})?', }, { 'name': 'resource', 'pattern': r'[A-Z0-9_]+', }, ], ) ) def register(): return EXTRA_SPEC_VALIDATORS ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/validation/extra_specs/traits.py0000664000175000017500000000366500000000000022646 0ustar00zuulzuul00000000000000# Copyright 2020 Red Hat, Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
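# A hedged usage sketch (trait names are examples from os-traits; the exact
# set available depends on the installed library version):
#
#     from nova.api.validation.extra_specs import validators
#     validators.validate('trait:HW_CPU_X86_AVX2', 'required')       # ok
#     validators.validate('trait1:CUSTOM_MY_TRAIT', 'forbidden')     # ok
#     validators.validate('trait:HW_CPU_X86_AVX2', 'maybe')          # expected to fail
#
# Only 'required' and 'forbidden' are accepted values.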
"""Validators for ``traits`` namespaced extra specs.""" import os_traits from nova.api.validation.extra_specs import base EXTRA_SPEC_VALIDATORS = [] for trait in os_traits.get_traits(): EXTRA_SPEC_VALIDATORS.append( base.ExtraSpecValidator( name=f'trait{{group}}:{trait}', description=f'Require or forbid trait {trait}.', value={ 'type': str, 'enum': [ 'required', 'forbidden', ], }, parameters=[ { 'name': 'group', 'pattern': r'([a-zA-Z0-9_-]{1,64})?', }, ], ) ) EXTRA_SPEC_VALIDATORS.append( base.ExtraSpecValidator( name='trait{group}:CUSTOM_{trait}', description=( 'Require or forbid trait CUSTOM_{trait}.' ), value={ 'type': str, 'enum': [ 'required', 'forbidden', ], }, parameters=[ { 'name': 'group', 'pattern': r'([a-zA-Z0-9_-]{1,64})?', }, { 'name': 'trait', 'pattern': r'[A-Z0-9_]+', }, ], ) ) def register(): return EXTRA_SPEC_VALIDATORS ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/validation/extra_specs/validators.py0000664000175000017500000000516700000000000023507 0ustar00zuulzuul00000000000000# Copyright 2020 Red Hat, Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Validators for all extra specs known by nova.""" import re import typing as ty from oslo_log import log as logging from stevedore import extension from nova.api.validation.extra_specs import base from nova import exception LOG = logging.getLogger(__name__) VALIDATORS: ty.Dict[str, base.ExtraSpecValidator] = {} NAMESPACES: ty.Set[str] = set() def validate(name: str, value: str): """Validate a given extra spec. :param name: Extra spec name. :param value: Extra spec value. :raises: exception.ValidationError if validation fails. """ # attempt a basic lookup for extra specs without embedded parameters if name in VALIDATORS: VALIDATORS[name].validate(name, value) return # if that failed, fallback to a linear search through the registry for validator in VALIDATORS.values(): if re.fullmatch(validator.name_regex, name): validator.validate(name, value) return # check if there's a namespace; if not, we've done all we can do if ':' not in name: # no namespace return # if there is, check if it's one we recognize for namespace in NAMESPACES: if re.fullmatch(namespace, name.split(':', 1)[0]): break else: return raise exception.ValidationError( f"Validation failed; extra spec '{name}' does not appear to be a " f"valid extra spec." ) def load_validators(): global VALIDATORS def _report_load_failure(mgr, ep, err): LOG.warning(u'Failed to load %s: %s', ep.module_name, err) mgr = extension.ExtensionManager( 'nova.api.extra_spec_validators', on_load_failure_callback=_report_load_failure, invoke_on_load=False, ) for ext in mgr: # TODO(stephenfin): Make 'register' return a dict rather than a list? 
for validator in ext.plugin.register(): VALIDATORS[validator.name] = validator if ':' in validator.name_regex: NAMESPACES.add(validator.name_regex.split(':', 1)[0]) load_validators() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/api/validation/extra_specs/vmware.py0000664000175000017500000000274500000000000022637 0ustar00zuulzuul00000000000000# Copyright 2020 Red Hat, Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Validators for ``vmware`` namespaced extra specs.""" from nova.api.validation.extra_specs import base EXTRA_SPEC_VALIDATORS = [ base.ExtraSpecValidator( name='vmware:hw_version', description=( 'Specify the hardware version used to create images. In an ' 'environment with different host versions, you can use this ' 'parameter to place instances on the correct hosts.' ), value={ 'type': str, }, ), base.ExtraSpecValidator( name='vmware:storage_policy', description=( 'Specify the storage policy used for new instances.' '\n' 'If Storage Policy-Based Management (SPBM) is not enabled, this ' 'parameter is ignored.' ), value={ 'type': str, }, ), ] def register(): return EXTRA_SPEC_VALIDATORS ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/validation/parameter_types.py0000664000175000017500000003010400000000000022210 0ustar00zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Common parameter types for validating request Body. """ import copy import functools import re import unicodedata import six from nova.i18n import _ from nova.objects import tag _REGEX_RANGE_CACHE = {} def memorize(func): @functools.wraps(func) def memorizer(*args, **kwargs): global _REGEX_RANGE_CACHE key = "%s:%s:%s" % (func.__name__, hash(str(args)), hash(str(kwargs))) value = _REGEX_RANGE_CACHE.get(key) if value is None: value = func(*args, **kwargs) _REGEX_RANGE_CACHE[key] = value return value return memorizer def _reset_cache(): global _REGEX_RANGE_CACHE _REGEX_RANGE_CACHE = {} def single_param(schema): """Macro function for use in JSONSchema to support query parameters that should have only one value. """ ret = multi_params(schema) ret['maxItems'] = 1 return ret def multi_params(schema): """Macro function for use in JSONSchema to support query parameters that may have multiple values. 
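    For example (illustrative only), multi_params({'type': 'string'}) returns
    {'type': 'array', 'items': {'type': 'string'}}.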
""" return {'type': 'array', 'items': schema} # NOTE: We don't check actual values of queries on params # which are defined as the following common_param. # Please note those are for backward compatible existing # query parameters because previously multiple parameters # might be input and accepted. common_query_param = multi_params({'type': 'string'}) common_query_regex_param = multi_params({'type': 'string', 'format': 'regex'}) class ValidationRegex(object): def __init__(self, regex, reason): self.regex = regex self.reason = reason def _is_printable(char): """determine if a unicode code point is printable. This checks if the character is either "other" (mostly control codes), or a non-horizontal space. All characters that don't match those criteria are considered printable; that is: letters; combining marks; numbers; punctuation; symbols; (horizontal) space separators. """ category = unicodedata.category(char) return (not category.startswith("C") and (not category.startswith("Z") or category == "Zs")) def _get_all_chars(): for i in range(0xFFFF): yield six.unichr(i) # build a regex that matches all printable characters. This allows # spaces in the middle of the name. Also note that the regexp below # deliberately allows the empty string. This is so only the constraint # which enforces a minimum length for the name is triggered when an # empty string is tested. Otherwise it is not deterministic which # constraint fails and this causes issues for some unittests when # PYTHONHASHSEED is set randomly. @memorize def _build_regex_range(ws=True, invert=False, exclude=None): """Build a range regex for a set of characters in utf8. This builds a valid range regex for characters in utf8 by iterating the entire space and building up a set of x-y ranges for all the characters we find which are valid. :param ws: should we include whitespace in this range. :param exclude: any characters we want to exclude :param invert: invert the logic The inversion is useful when we want to generate a set of ranges which is everything that's not a certain class. For instance, produce all the non printable characters as a set of ranges. """ if exclude is None: exclude = [] regex = "" # are we currently in a range in_range = False # last character we found, for closing ranges last = None # last character we added to the regex, this lets us know that we # already have B in the range, which means we don't need to close # it out with B-B. While the later seems to work, it's kind of bad form. last_added = None def valid_char(char): if char in exclude: result = False elif ws: result = _is_printable(char) else: # Zs is the unicode class for space characters, of which # there are about 10 in this range. result = (_is_printable(char) and unicodedata.category(char) != "Zs") if invert is True: return not result return result # iterate through the entire character range. in_ for c in _get_all_chars(): if valid_char(c): if not in_range: regex += re.escape(c) last_added = c in_range = True else: if in_range and last != last_added: regex += "-" + re.escape(last) in_range = False last = c else: if in_range: regex += "-" + re.escape(c) return regex valid_name_regex_base = '^(?![%s])[%s]*(? 0: if self.is_body: # NOTE: For whole OpenStack message consistency, this error # message has been written as the similar format of # WSME. detail = _("Invalid input for field/attribute %(path)s. " "Value: %(value)s. 
%(message)s") % { 'path': ex.path.pop(), 'value': ex.instance, 'message': ex.message} else: # NOTE: Use 'ex.path.popleft()' instead of 'ex.path.pop()', # due to the structure of query parameters is a dict # with key as name and value is list. So the first # item in the 'ex.path' is the key, and second item # is the index of list in the value. We need the # key as the parameter name in the error message. # So pop the first value out of 'ex.path'. detail = _("Invalid input for query parameters %(path)s. " "Value: %(value)s. %(message)s") % { 'path': ex.path.popleft(), 'value': ex.instance, 'message': ex.message} else: detail = ex.message raise exception.ValidationError(detail=detail) except TypeError as ex: # NOTE: If passing non string value to patternProperties parameter, # TypeError happens. Here is for catching the TypeError. detail = six.text_type(ex) raise exception.ValidationError(detail=detail) def _number_from_str(self, instance): try: value = int(instance) except (ValueError, TypeError): try: value = float(instance) except (ValueError, TypeError): return None return value def _validate_minimum(self, validator, minimum, instance, schema): instance = self._number_from_str(instance) if instance is None: return return self.validator_org.VALIDATORS['minimum'](validator, minimum, instance, schema) def _validate_maximum(self, validator, maximum, instance, schema): instance = self._number_from_str(instance) if instance is None: return return self.validator_org.VALIDATORS['maximum'](validator, maximum, instance, schema) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/api/wsgi.py0000664000175000017500000002101300000000000015622 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """WSGI primitives used throughout the nova WSGI apps.""" import os from oslo_log import log as logging from paste import deploy import routes.middleware import webob import nova.conf from nova import exception from nova.i18n import _, _LE CONF = nova.conf.CONF LOG = logging.getLogger(__name__) class Request(webob.Request): def __init__(self, environ, *args, **kwargs): if CONF.wsgi.secure_proxy_ssl_header: scheme = environ.get(CONF.wsgi.secure_proxy_ssl_header) if scheme: environ['wsgi.url_scheme'] = scheme super(Request, self).__init__(environ, *args, **kwargs) class Application(object): """Base WSGI application wrapper. Subclasses need to implement __call__.""" @classmethod def factory(cls, global_config, **local_config): """Used for paste app factories in paste.deploy config files. Any local configuration (that is, values under the [app:APPNAME] section of the paste config) will be passed into the `__init__` method as kwargs. 
A hypothetical configuration would look like: [app:wadl] latest_version = 1.3 paste.app_factory = nova.api.fancy_api:Wadl.factory which would result in a call to the `Wadl` class as import nova.api.fancy_api fancy_api.Wadl(latest_version='1.3') You could of course re-implement the `factory` method in subclasses, but using the kwarg passing it shouldn't be necessary. """ return cls(**local_config) def __call__(self, environ, start_response): r"""Subclasses will probably want to implement __call__ like this: @webob.dec.wsgify(RequestClass=Request) def __call__(self, req): # Any of the following objects work as responses: # Option 1: simple string res = 'message\n' # Option 2: a nicely formatted HTTP exception page res = exc.HTTPForbidden(explanation='Nice try') # Option 3: a webob Response object (in case you need to play with # headers, or you want to be treated like an iterable, or ...) res = Response() res.app_iter = open('somefile') # Option 4: any wsgi app to be run next res = self.application # Option 5: you can get a Response object for a wsgi app, too, to # play with headers etc res = req.get_response(self.application) # You can then just return your response... return res # ... or set req.response and return None. req.response = res See the end of http://pythonpaste.org/webob/modules/dec.html for more info. """ raise NotImplementedError(_('You must implement __call__')) class Middleware(Application): """Base WSGI middleware. These classes require an application to be initialized that will be called next. By default the middleware will simply call its wrapped app, or you can override __call__ to customize its behavior. """ @classmethod def factory(cls, global_config, **local_config): """Used for paste app factories in paste.deploy config files. Any local configuration (that is, values under the [filter:APPNAME] section of the paste config) will be passed into the `__init__` method as kwargs. A hypothetical configuration would look like: [filter:analytics] redis_host = 127.0.0.1 paste.filter_factory = nova.api.analytics:Analytics.factory which would result in a call to the `Analytics` class as import nova.api.analytics analytics.Analytics(app_from_paste, redis_host='127.0.0.1') You could of course re-implement the `factory` method in subclasses, but using the kwarg passing it shouldn't be necessary. """ def _factory(app): return cls(app, **local_config) return _factory def __init__(self, application): self.application = application def process_request(self, req): """Called on each request. If this returns None, the next application down the stack will be executed. If it returns a response then that response will be returned and execution will stop here. """ return None def process_response(self, response): """Do whatever you'd like to the response.""" return response @webob.dec.wsgify(RequestClass=Request) def __call__(self, req): response = self.process_request(req) if response: return response response = req.get_response(self.application) return self.process_response(response) class Router(object): """WSGI middleware that maps incoming requests to WSGI apps.""" def __init__(self, mapper): """Create a router for the given routes.Mapper. Each route in `mapper` must specify a 'controller', which is a WSGI app to call. You'll probably want to specify an 'action' as well and have your controller be an object that can route the request to the action-specific method. 
Examples: mapper = routes.Mapper() sc = ServerController() # Explicit mapping of one route to a controller+action mapper.connect(None, '/svrlist', controller=sc, action='list') # Actions are all implicitly defined mapper.resource('server', 'servers', controller=sc) # Pointing to an arbitrary WSGI app. You can specify the # {path_info:.*} parameter so the target app can be handed just that # section of the URL. mapper.connect(None, '/v1.0/{path_info:.*}', controller=BlogApp()) """ self.map = mapper self._router = routes.middleware.RoutesMiddleware(self._dispatch, self.map) @webob.dec.wsgify(RequestClass=Request) def __call__(self, req): """Route the incoming request to a controller based on self.map. If no match, return a 404. """ return self._router @staticmethod @webob.dec.wsgify(RequestClass=Request) def _dispatch(req): """Dispatch the request to the appropriate controller. Called by self._router after matching the incoming request to a route and putting the information into req.environ. Either returns 404 or the routed WSGI app's response. """ match = req.environ['wsgiorg.routing_args'][1] if not match: return webob.exc.HTTPNotFound() app = match['controller'] return app class Loader(object): """Used to load WSGI applications from paste configurations.""" def __init__(self, config_path=None): """Initialize the loader, and attempt to find the config. :param config_path: Full or relative path to the paste config. :returns: None """ self.config_path = None config_path = config_path or CONF.wsgi.api_paste_config if not os.path.isabs(config_path): self.config_path = CONF.find_file(config_path) elif os.path.exists(config_path): self.config_path = config_path if not self.config_path: raise exception.ConfigNotFound(path=config_path) def load_app(self, name): """Return the paste URLMap wrapped WSGI application. :param name: Name of the application to load. :returns: Paste URLMap object wrapping the requested application. :raises: `nova.exception.PasteAppNotFound` """ try: LOG.debug("Loading app %(name)s from %(path)s", {'name': name, 'path': self.config_path}) return deploy.loadapp("config:%s" % self.config_path, name=name) except LookupError: LOG.exception(_LE("Couldn't lookup app: %s"), name) raise exception.PasteAppNotFound(name=name, path=self.config_path) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/availability_zones.py0000664000175000017500000001663600000000000020007 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Availability zone helper functions.""" import collections import six from nova import cache_utils import nova.conf from nova import objects # NOTE(vish): azs don't change that often, so cache them for an hour to # avoid hitting the db multiple times on every request. 
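# (60 * 60 == 3600 seconds, i.e. the one hour mentioned in the note above.)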
AZ_CACHE_SECONDS = 60 * 60 MC = None CONF = nova.conf.CONF def _get_cache(): global MC if MC is None: MC = cache_utils.get_client(expiration_time=AZ_CACHE_SECONDS) return MC def reset_cache(): """Reset the cache, mainly for testing purposes and update availability_zone for host aggregate """ global MC MC = None def _make_cache_key(host): if six.PY2: host = host.encode('utf-8') return "azcache-%s" % host def _build_metadata_by_host(aggregates, hosts=None): if hosts and not isinstance(hosts, set): hosts = set(hosts) metadata = collections.defaultdict(set) for aggregate in aggregates: for host in aggregate.hosts: if hosts and host not in hosts: continue metadata[host].add(list(aggregate.metadata.values())[0]) return metadata def set_availability_zones(context, services): # Makes sure services isn't a sqlalchemy object services = [dict(service) for service in services] hosts = set([service['host'] for service in services]) aggregates = objects.AggregateList.get_by_metadata_key(context, 'availability_zone', hosts=hosts) metadata = _build_metadata_by_host(aggregates, hosts=hosts) # gather all of the availability zones associated with a service host for service in services: az = CONF.internal_service_availability_zone if service['topic'] == "compute": if metadata.get(service['host']): az = u','.join(list(metadata[service['host']])) else: az = CONF.default_availability_zone # update the cache update_host_availability_zone_cache(context, service['host'], az) service['availability_zone'] = az return services def get_host_availability_zone(context, host): aggregates = objects.AggregateList.get_by_host(context, host, key='availability_zone') if aggregates: az = aggregates[0].metadata['availability_zone'] else: az = CONF.default_availability_zone return az def update_host_availability_zone_cache(context, host, availability_zone=None): if not availability_zone: availability_zone = get_host_availability_zone(context, host) cache = _get_cache() cache_key = _make_cache_key(host) cache.delete(cache_key) cache.set(cache_key, availability_zone) def get_availability_zones(context, hostapi, get_only_available=False, with_hosts=False, services=None): """Return available and unavailable zones on demand. 
:param context: nova auth RequestContext :param hostapi: nova.compute.api.HostAPI instance :param get_only_available: flag to determine whether to return available zones only, default False indicates return both available zones and not available zones, True indicates return available zones only :param with_hosts: whether to return hosts part of the AZs :type with_hosts: bool :param services: list of services to use; if None, enabled services will be retrieved from all cells with zones set """ if services is None: services = hostapi.service_get_all( context, set_zones=True, all_cells=True) enabled_services = [] disabled_services = [] for service in services: if not service.disabled: enabled_services.append(service) else: disabled_services.append(service) if with_hosts: return _get_availability_zones_with_hosts( enabled_services, disabled_services, get_only_available) else: return _get_availability_zones( enabled_services, disabled_services, get_only_available) def _get_availability_zones( enabled_services, disabled_services, get_only_available=False): available_zones = { service['availability_zone'] for service in enabled_services } if get_only_available: return sorted(available_zones) not_available_zones = { service['availability_zone'] for service in disabled_services if service['availability_zone'] not in available_zones } return sorted(available_zones), sorted(not_available_zones) def _get_availability_zones_with_hosts( enabled_services, disabled_services, get_only_available=False): available_zones = collections.defaultdict(set) for service in enabled_services: available_zones[service['availability_zone']].add(service['host']) if get_only_available: return sorted(available_zones.items()) not_available_zones = collections.defaultdict(set) for service in disabled_services: if service['availability_zone'] in available_zones: continue not_available_zones[service['availability_zone']].add(service['host']) return sorted(available_zones.items()), sorted(not_available_zones.items()) def get_instance_availability_zone(context, instance): """Return availability zone of specified instance.""" host = instance.host if 'host' in instance else None if not host: # Likely hasn't reached a viable compute node yet so give back the # desired availability_zone in the instance record if the boot request # specified one. az = instance.get('availability_zone') return az cache_key = _make_cache_key(host) cache = _get_cache() az = cache.get(cache_key) az_inst = instance.get('availability_zone') if az_inst is not None and az != az_inst: # NOTE(sbauza): Cache is wrong, we need to invalidate it by fetching # again the right AZ related to the aggregate the host belongs to. # As the API is also calling this method for setting the instance # AZ field, we don't need to update the instance.az field. # This case can happen because the cache is populated before the # instance has been assigned to the host so that it would keep the # former reference which was incorrect. Instead of just taking the # instance AZ information for refilling the cache, we prefer to # invalidate the cache and fetch it again because there could be some # corner cases where this method could be called before the instance # has been assigned to the host also. 
az = None if not az: elevated = context.elevated() az = get_host_availability_zone(elevated, host) cache.set(cache_key, az) return az ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/baserpc.py0000664000175000017500000000450700000000000015530 0ustar00zuulzuul00000000000000# # Copyright 2013 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # """ Base RPC client and server common to all services. """ import oslo_messaging as messaging from oslo_serialization import jsonutils import nova.conf from nova import rpc CONF = nova.conf.CONF _NAMESPACE = 'baseapi' class BaseAPI(object): """Client side of the base rpc API. API version history: 1.0 - Initial version. 1.1 - Add get_backdoor_port """ VERSION_ALIASES = { # baseapi was added in havana } def __init__(self, topic): super(BaseAPI, self).__init__() target = messaging.Target(topic=topic, namespace=_NAMESPACE, version='1.0') version_cap = self.VERSION_ALIASES.get(CONF.upgrade_levels.baseapi, CONF.upgrade_levels.baseapi) self.client = rpc.get_client(target, version_cap=version_cap) def ping(self, context, arg, timeout=None): arg_p = jsonutils.to_primitive(arg) cctxt = self.client.prepare(timeout=timeout) return cctxt.call(context, 'ping', arg=arg_p) def get_backdoor_port(self, context, host): cctxt = self.client.prepare(server=host, version='1.1') return cctxt.call(context, 'get_backdoor_port') class BaseRPCAPI(object): """Server side of the base RPC API.""" target = messaging.Target(namespace=_NAMESPACE, version='1.1') def __init__(self, service_name, backdoor_port): self.service_name = service_name self.backdoor_port = backdoor_port def ping(self, context, arg): resp = {'service': self.service_name, 'arg': arg} return jsonutils.to_primitive(resp) def get_backdoor_port(self, context): return self.backdoor_port ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/block_device.py0000664000175000017500000005337100000000000016525 0ustar00zuulzuul00000000000000# Copyright 2011 Isaku Yamahata # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
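# A hedged illustration of the two mapping formats handled in this module
# (field values are invented for clarity, not taken from upstream docs):
#
#     legacy style: {'device_name': '/dev/vdb', 'snapshot_id': '<uuid>',
#                    'delete_on_termination': False}
#     new style:    {'source_type': 'snapshot', 'destination_type': 'volume',
#                    'snapshot_id': '<uuid>', 'device_name': '/dev/vdb',
#                    'delete_on_termination': False}
#
# BlockDeviceDict.from_legacy() and BlockDeviceDict.legacy() below convert
# between the two representations.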
import re from oslo_log import log as logging from oslo_utils import strutils import nova.conf from nova import exception from nova.i18n import _ from nova import utils from nova.virt import driver CONF = nova.conf.CONF LOG = logging.getLogger(__name__) DEFAULT_ROOT_DEV_NAME = '/dev/sda1' _DEFAULT_MAPPINGS = {'ami': 'sda1', 'ephemeral0': 'sda2', 'root': DEFAULT_ROOT_DEV_NAME, 'swap': 'sda3'} bdm_legacy_fields = set(['device_name', 'delete_on_termination', 'virtual_name', 'snapshot_id', 'volume_id', 'volume_size', 'no_device', 'connection_info']) bdm_new_fields = set(['source_type', 'destination_type', 'guest_format', 'device_type', 'disk_bus', 'boot_index', 'device_name', 'delete_on_termination', 'snapshot_id', 'volume_id', 'volume_size', 'image_id', 'no_device', 'connection_info', 'tag', 'volume_type']) bdm_db_only_fields = set(['id', 'instance_uuid', 'attachment_id', 'uuid']) bdm_db_inherited_fields = set(['created_at', 'updated_at', 'deleted_at', 'deleted']) class BlockDeviceDict(dict): """Represents a Block Device Mapping in Nova.""" _fields = bdm_new_fields _db_only_fields = (bdm_db_only_fields | bdm_db_inherited_fields) _required_fields = set(['source_type']) def __init__(self, bdm_dict=None, do_not_default=None, **kwargs): super(BlockDeviceDict, self).__init__() bdm_dict = bdm_dict or {} bdm_dict.update(kwargs) do_not_default = do_not_default or set() self._validate(bdm_dict) if bdm_dict.get('device_name'): bdm_dict['device_name'] = prepend_dev(bdm_dict['device_name']) bdm_dict['delete_on_termination'] = bool( bdm_dict.get('delete_on_termination')) # NOTE (ndipanov): Never default db fields self.update({field: None for field in self._fields - do_not_default}) self.update(bdm_dict.items()) def _validate(self, bdm_dict): """Basic data format validations.""" dict_fields = set(key for key, _ in bdm_dict.items()) valid_fields = self._fields | self._db_only_fields # Check that there are no bogus fields if not (dict_fields <= valid_fields): raise exception.InvalidBDMFormat( details=("Following fields are invalid: %s" % " ".join(dict_fields - valid_fields))) if bdm_dict.get('no_device'): return # Check that all required fields are there if (self._required_fields and not ((dict_fields & self._required_fields) == self._required_fields)): raise exception.InvalidBDMFormat( details=_("Some required fields are missing")) if 'delete_on_termination' in bdm_dict: bdm_dict['delete_on_termination'] = strutils.bool_from_string( bdm_dict['delete_on_termination']) if bdm_dict.get('device_name') is not None: validate_device_name(bdm_dict['device_name']) validate_and_default_volume_size(bdm_dict) if bdm_dict.get('boot_index'): try: bdm_dict['boot_index'] = int(bdm_dict['boot_index']) except ValueError: raise exception.InvalidBDMFormat( details=_("Boot index is invalid.")) @classmethod def from_legacy(cls, legacy_bdm): copy_over_fields = bdm_legacy_fields & bdm_new_fields copy_over_fields |= (bdm_db_only_fields | bdm_db_inherited_fields) # NOTE (ndipanov): These fields cannot be computed # from legacy bdm, so do not default them # to avoid overwriting meaningful values in the db non_computable_fields = set(['boot_index', 'disk_bus', 'guest_format', 'device_type']) new_bdm = {fld: val for fld, val in legacy_bdm.items() if fld in copy_over_fields} virt_name = legacy_bdm.get('virtual_name') if is_swap_or_ephemeral(virt_name): new_bdm['source_type'] = 'blank' new_bdm['delete_on_termination'] = True new_bdm['destination_type'] = 'local' if virt_name == 'swap': new_bdm['guest_format'] = 'swap' else: 
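                # Non-swap blank devices are ephemeral disks; they use the
                # operator-configured default ephemeral filesystem format.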
new_bdm['guest_format'] = CONF.default_ephemeral_format elif legacy_bdm.get('snapshot_id'): new_bdm['source_type'] = 'snapshot' new_bdm['destination_type'] = 'volume' elif legacy_bdm.get('volume_id'): new_bdm['source_type'] = 'volume' new_bdm['destination_type'] = 'volume' elif legacy_bdm.get('no_device'): # NOTE (ndipanov): Just keep the BDM for now, pass else: raise exception.InvalidBDMFormat( details=_("Unrecognized legacy format.")) return cls(new_bdm, non_computable_fields) @classmethod def from_api(cls, api_dict, image_uuid_specified): """Transform the API format of data to the internally used one. Only validate if the source_type field makes sense. """ if not api_dict.get('no_device'): source_type = api_dict.get('source_type') device_uuid = api_dict.get('uuid') destination_type = api_dict.get('destination_type') volume_type = api_dict.get('volume_type') if source_type == 'blank' and device_uuid: raise exception.InvalidBDMFormat( details=_("Invalid device UUID.")) elif source_type != 'blank': if not device_uuid: raise exception.InvalidBDMFormat( details=_("Missing device UUID.")) api_dict[source_type + '_id'] = device_uuid if source_type == 'image' and destination_type == 'local': # NOTE(mriedem): boot_index can be None so we need to # account for that to avoid a TypeError. boot_index = api_dict.get('boot_index', -1) if boot_index is None: # boot_index=None is equivalent to -1. boot_index = -1 boot_index = int(boot_index) # if this bdm is generated from --image, then # source_type = image and destination_type = local is allowed if not (image_uuid_specified and boot_index == 0): raise exception.InvalidBDMFormat( details=_("Mapping image to local is not supported.")) if destination_type == 'local' and volume_type: raise exception.InvalidBDMFormat( details=_("Specifying a volume_type with destination_type=" "local is not supported.")) # Specifying a volume_type with a pre-existing source volume is # not supported. if source_type == 'volume' and volume_type: raise exception.InvalidBDMFormat( details=_("Specifying volume type to existing volume is " "not supported.")) api_dict.pop('uuid', None) return cls(api_dict) def legacy(self): copy_over_fields = bdm_legacy_fields - set(['virtual_name']) copy_over_fields |= (bdm_db_only_fields | bdm_db_inherited_fields) legacy_block_device = {field: self.get(field) for field in copy_over_fields if field in self} source_type = self.get('source_type') destination_type = self.get('destination_type') no_device = self.get('no_device') if source_type == 'blank': if self['guest_format'] == 'swap': legacy_block_device['virtual_name'] = 'swap' else: # NOTE (ndipanov): Always label as 0, it is up to # the calling routine to re-enumerate them legacy_block_device['virtual_name'] = 'ephemeral0' elif source_type in ('volume', 'snapshot') or no_device: legacy_block_device['virtual_name'] = None elif source_type == 'image': if destination_type != 'volume': # NOTE(ndipanov): Image bdms with local destination # have no meaning in the legacy format - raise raise exception.InvalidBDMForLegacy() legacy_block_device['virtual_name'] = None return legacy_block_device def get_image_mapping(self): drop_fields = (set(['connection_info']) | self._db_only_fields) mapping_dict = dict(self) for fld in drop_fields: mapping_dict.pop(fld, None) return mapping_dict def is_safe_for_update(block_device_dict): """Determine if passed dict is a safe subset for update. 
Safe subset in this case means a safe subset of both legacy and new versions of data, that can be passed to an UPDATE query without any transformation. """ fields = set(block_device_dict.keys()) return fields <= (bdm_new_fields | bdm_db_inherited_fields | bdm_db_only_fields) def create_image_bdm(image_ref, boot_index=0): """Create a block device dict based on the image_ref. This is useful in the API layer to keep the compatibility with having an image_ref as a field in the instance requests """ return BlockDeviceDict( {'source_type': 'image', 'image_id': image_ref, 'delete_on_termination': True, 'boot_index': boot_index, 'device_type': 'disk', 'destination_type': 'local'}) def create_blank_bdm(size, guest_format=None): return BlockDeviceDict( {'source_type': 'blank', 'delete_on_termination': True, 'device_type': 'disk', 'boot_index': -1, 'destination_type': 'local', 'guest_format': guest_format, 'volume_size': size}) def snapshot_from_bdm(snapshot_id, template): """Create a basic volume snapshot BDM from a given template bdm.""" copy_from_template = ('disk_bus', 'device_type', 'boot_index', 'delete_on_termination', 'volume_size', 'device_name') snapshot_dict = {'source_type': 'snapshot', 'destination_type': 'volume', 'snapshot_id': snapshot_id} for key in copy_from_template: snapshot_dict[key] = template.get(key) return BlockDeviceDict(snapshot_dict) def legacy_mapping(block_device_mapping): """Transform a list of block devices of an instance back to the legacy data format. """ legacy_block_device_mapping = [] for bdm in block_device_mapping: try: legacy_block_device = BlockDeviceDict(bdm).legacy() except exception.InvalidBDMForLegacy: continue legacy_block_device_mapping.append(legacy_block_device) # Re-enumerate the ephemeral devices for i, dev in enumerate(dev for dev in legacy_block_device_mapping if dev['virtual_name'] and is_ephemeral(dev['virtual_name'])): dev['virtual_name'] = dev['virtual_name'][:-1] + str(i) return legacy_block_device_mapping def from_legacy_mapping(legacy_block_device_mapping, image_uuid='', root_device_name=None, no_root=False): """Transform a legacy list of block devices to the new data format.""" new_bdms = [BlockDeviceDict.from_legacy(legacy_bdm) for legacy_bdm in legacy_block_device_mapping] # NOTE (ndipanov): We will not decide which device is root here - we assume # that it will be supplied later. This is useful for having the root device # as part of the image defined mappings that are already in the v2 format. if no_root: for bdm in new_bdms: bdm['boot_index'] = -1 return new_bdms image_bdm = None volume_backed = False # Try to assign boot_device if not root_device_name and not image_uuid: # NOTE (ndipanov): If there is no root_device, pick the first non # blank one. non_blank = [bdm for bdm in new_bdms if bdm['source_type'] != 'blank'] if non_blank: non_blank[0]['boot_index'] = 0 else: for bdm in new_bdms: if (bdm['source_type'] in ('volume', 'snapshot', 'image') and root_device_name is not None and (strip_dev(bdm.get('device_name')) == strip_dev(root_device_name))): bdm['boot_index'] = 0 volume_backed = True elif not bdm['no_device']: bdm['boot_index'] = -1 else: bdm['boot_index'] = None if not volume_backed and image_uuid: image_bdm = create_image_bdm(image_uuid, boot_index=0) return ([image_bdm] if image_bdm else []) + new_bdms def properties_root_device_name(properties): """Get root device name from image meta data. If it isn't specified, return None. 
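    Illustrative example (values invented for clarity):
    {'mappings': [{'virtual': 'root', 'device': 'sda1'}]} yields 'sda1',
    while an explicit 'root_device_name' key overrides any mappings entry.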
""" root_device_name = None # NOTE(yamahata): see image_service.s3.s3create() for bdm in properties.get('mappings', []): if bdm['virtual'] == 'root': root_device_name = bdm['device'] # NOTE(yamahata): register_image's command line can override # .manifest.xml if 'root_device_name' in properties: root_device_name = properties['root_device_name'] return root_device_name def validate_device_name(value): try: # NOTE (ndipanov): Do not allow empty device names # until assigning default values # are supported by nova.compute utils.check_string_length(value, 'Device name', min_length=1, max_length=255) except exception.InvalidInput: raise exception.InvalidBDMFormat( details=_("Device name empty or too long.")) if ' ' in value: raise exception.InvalidBDMFormat( details=_("Device name contains spaces.")) def validate_and_default_volume_size(bdm): if bdm.get('volume_size'): try: bdm['volume_size'] = utils.validate_integer( bdm['volume_size'], 'volume_size', min_value=0) except exception.InvalidInput: # NOTE: We can remove this validation code after removing # Nova v2.0 API code, because v2.1 API validates this case # already at its REST API layer. raise exception.InvalidBDMFormat( details=_("Invalid volume_size.")) _ephemeral = re.compile(r'^ephemeral(\d|[1-9]\d+)$') def is_ephemeral(device_name): return _ephemeral.match(device_name) is not None def ephemeral_num(ephemeral_name): assert is_ephemeral(ephemeral_name) return int(_ephemeral.sub('\\1', ephemeral_name)) def is_swap_or_ephemeral(device_name): return (device_name and (device_name == 'swap' or is_ephemeral(device_name))) def new_format_is_swap(bdm): if (bdm.get('source_type') == 'blank' and bdm.get('destination_type') == 'local' and bdm.get('guest_format') == 'swap'): return True return False def new_format_is_ephemeral(bdm): if (bdm.get('source_type') == 'blank' and bdm.get('destination_type') == 'local' and bdm.get('guest_format') != 'swap'): return True return False def get_root_bdm(bdms): try: return next(bdm for bdm in bdms if bdm.get('boot_index', -1) == 0) except StopIteration: return None def get_bdms_to_connect(bdms, exclude_root_mapping=False): """Will return non-root mappings, when exclude_root_mapping is true. Otherwise all mappings will be returned. """ return (bdm for bdm in bdms if bdm.get('boot_index', -1) != 0 or not exclude_root_mapping) def mappings_prepend_dev(mappings): """Prepend '/dev/' to 'device' entry of swap/ephemeral virtual type.""" for m in mappings: virtual = m['virtual'] if (is_swap_or_ephemeral(virtual) and (not m['device'].startswith('/'))): m['device'] = '/dev/' + m['device'] return mappings _dev = re.compile('^/dev/') def strip_dev(device_name): """remove leading '/dev/'.""" return _dev.sub('', device_name) if device_name else device_name def prepend_dev(device_name): """Make sure there is a leading '/dev/'.""" return device_name and '/dev/' + strip_dev(device_name) _pref = re.compile('^((x?v|s|h)d)') def strip_prefix(device_name): """remove both leading /dev/ and xvd or sd or vd or hd.""" device_name = strip_dev(device_name) return _pref.sub('', device_name) if device_name else device_name _nums = re.compile(r'\d+') def get_device_letter(device_name): letter = strip_prefix(device_name) # NOTE(vish): delete numbers in case we have something like # /dev/sda1 return _nums.sub('', letter) if device_name else device_name def generate_device_letter(index): """Returns device letter by index (starts by zero) i.e. 
index = 0, 1,..., 18277 results = a, b,..., zzz """ base = ord('z') - ord('a') + 1 unit_dev_name = "" while index >= 0: letter = chr(ord('a') + (index % base)) unit_dev_name = letter + unit_dev_name index = int(index / base) - 1 return unit_dev_name def generate_device_name(prefix, index): """Returns device unit name by index (starts by zero) i.e. prefix = vd index = 0, 1,..., 18277 results = vda, vdb,..., vdzzz """ return prefix + generate_device_letter(index) def instance_block_mapping(instance, bdms): root_device_name = instance['root_device_name'] # NOTE(clayg): remove this when xenapi is setting default_root_device if root_device_name is None: if driver.is_xenapi(): root_device_name = '/dev/xvda' else: return _DEFAULT_MAPPINGS mappings = {} mappings['ami'] = strip_dev(root_device_name) mappings['root'] = root_device_name default_ephemeral_device = instance.get('default_ephemeral_device') if default_ephemeral_device: mappings['ephemeral0'] = default_ephemeral_device default_swap_device = instance.get('default_swap_device') if default_swap_device: mappings['swap'] = default_swap_device ebs_devices = [] blanks = [] # 'ephemeralN', 'swap' and ebs for bdm in bdms: # ebs volume case if bdm.destination_type == 'volume': ebs_devices.append(bdm.device_name) continue if bdm.source_type == 'blank': blanks.append(bdm) # NOTE(yamahata): I'm not sure how ebs device should be numbered. # Right now sort by device name for deterministic # result. if ebs_devices: # NOTE(claudiub): python2.7 sort places None values first. # this sort will maintain the same behaviour for both py27 and py34. ebs_devices = sorted(ebs_devices, key=lambda x: (x is not None, x)) for nebs, ebs in enumerate(ebs_devices): mappings['ebs%d' % nebs] = ebs swap = [bdm for bdm in blanks if bdm.guest_format == 'swap'] if swap: mappings['swap'] = swap.pop().device_name ephemerals = [bdm for bdm in blanks if bdm.guest_format != 'swap'] if ephemerals: for num, eph in enumerate(ephemerals): mappings['ephemeral%d' % num] = eph.device_name return mappings def match_device(device): """Matches device name and returns prefix, suffix.""" match = re.match("(^/dev/x{0,1}[a-z]{0,1}d{0,1})([a-z]+)[0-9]*$", device) if not match: return None return match.groups() def volume_in_mapping(mount_device, block_device_info): block_device_list = [strip_dev(vol['mount_device']) for vol in driver.block_device_info_get_mapping( block_device_info)] swap = driver.block_device_info_get_swap(block_device_info) if driver.swap_is_usable(swap): block_device_list.append(strip_dev(swap['device_name'])) block_device_list += [strip_dev(ephemeral['device_name']) for ephemeral in driver.block_device_info_get_ephemerals( block_device_info)] LOG.debug("block_device_list %s", sorted(filter(None, block_device_list))) return strip_dev(mount_device) in block_device_list def get_bdm_ephemeral_disk_size(block_device_mappings): return sum(bdm.get('volume_size', 0) for bdm in block_device_mappings if new_format_is_ephemeral(bdm)) def get_bdm_swap_list(block_device_mappings): return [bdm for bdm in block_device_mappings if new_format_is_swap(bdm)] def get_bdm_local_disk_num(block_device_mappings): return len([bdm for bdm in block_device_mappings if bdm.get('destination_type') == 'local']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/cache_utils.py0000664000175000017500000000745100000000000016375 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of 
the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Simple wrapper for oslo_cache.""" from oslo_cache import core as cache from oslo_log import log as logging import nova.conf from nova.i18n import _ CONF = nova.conf.CONF LOG = logging.getLogger(__name__) WEEK = 604800 def _warn_if_null_backend(): if CONF.cache.backend == 'dogpile.cache.null': LOG.warning("Cache enabled with backend dogpile.cache.null.") def get_memcached_client(expiration_time=0): """Used ONLY when memcached is explicitly needed.""" # If the operator has [cache]/enabled flag on then we let oslo_cache # configure the region from the configuration settings if CONF.cache.enabled and CONF.cache.memcache_servers: _warn_if_null_backend() return CacheClient( _get_default_cache_region(expiration_time=expiration_time)) def get_client(expiration_time=0): """Used to get a caching client.""" # If the operator has [cache]/enabled flag on then we let oslo_cache # configure the region from configuration settings. if CONF.cache.enabled: _warn_if_null_backend() return CacheClient( _get_default_cache_region(expiration_time=expiration_time)) # If [cache]/enabled flag is off, we use the dictionary backend return CacheClient( _get_custom_cache_region(expiration_time=expiration_time, backend='oslo_cache.dict')) def _get_default_cache_region(expiration_time): region = cache.create_region() if expiration_time != 0: CONF.cache.expiration_time = expiration_time cache.configure_cache_region(CONF, region) return region def _get_custom_cache_region(expiration_time=WEEK, backend=None, url=None): """Create instance of oslo_cache client. For backends you can pass specific parameters by kwargs. For 'dogpile.cache.memcached' backend 'url' parameter must be specified. 
:param backend: backend name :param expiration_time: interval in seconds to indicate maximum time-to-live value for each key :param url: memcached url(s) """ region = cache.create_region() region_params = {} if expiration_time != 0: region_params['expiration_time'] = expiration_time if backend == 'oslo_cache.dict': region_params['arguments'] = {'expiration_time': expiration_time} elif backend == 'dogpile.cache.memcached': region_params['arguments'] = {'url': url} else: raise RuntimeError(_('old style configuration can use ' 'only dictionary or memcached backends')) region.configure(backend, **region_params) return region class CacheClient(object): """Replicates a tiny subset of memcached client interface.""" def __init__(self, region): self.region = region def get(self, key): value = self.region.get(key) if value == cache.NO_VALUE: return None return value def set(self, key, value): return self.region.set(key, value) def delete(self, key): return self.region.delete(key) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2944703 nova-21.2.4/nova/cmd/0000775000175000017500000000000000000000000014274 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/cmd/__init__.py0000664000175000017500000000117700000000000016413 0ustar00zuulzuul00000000000000# Copyright (c) 2019 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import nova.monkey_patch # noqa ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/cmd/api.py0000664000175000017500000000430600000000000015422 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Starter script for Nova API. Starts both the EC2 and OpenStack APIs in separate greenthreads. 
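# Annotation (not part of the original file): main() below starts one WSGI
# server per entry in CONF.enabled_apis, e.g. with
#
#     [DEFAULT]
#     enabled_apis = osapi_compute,metadata
#     enabled_ssl_apis =
#
# two servers are launched and neither uses SSL; an API also listed in
# enabled_ssl_apis would be created with use_ssl=True.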
""" import sys from oslo_log import log as logging from oslo_reports import guru_meditation_report as gmr from oslo_reports import opts as gmr_opts import nova.conf from nova import config from nova import exception from nova import objects from nova import service from nova import version CONF = nova.conf.CONF def main(): config.parse_args(sys.argv) logging.setup(CONF, "nova") objects.register_all() gmr_opts.set_defaults(CONF) if 'osapi_compute' in CONF.enabled_apis: # NOTE(mriedem): This is needed for caching the nova-compute service # version. objects.Service.enable_min_version_cache() log = logging.getLogger(__name__) gmr.TextGuruMeditation.setup_autorun(version, conf=CONF) launcher = service.process_launcher() started = 0 for api in CONF.enabled_apis: should_use_ssl = api in CONF.enabled_ssl_apis try: server = service.WSGIService(api, use_ssl=should_use_ssl) launcher.launch_service(server, workers=server.workers or 1) started += 1 except exception.PasteAppNotFound as ex: log.warning("%s. ``enabled_apis`` includes bad values. " "Fix to remove this warning.", ex) if started == 0: log.error('No APIs were started. ' 'Check the enabled_apis config option.') sys.exit(1) launcher.wait() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/cmd/api_metadata.py0000664000175000017500000000314400000000000017261 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Starter script for Nova Metadata API.""" import sys from oslo_log import log as logging from oslo_reports import guru_meditation_report as gmr from oslo_reports import opts as gmr_opts from nova.conductor import rpcapi as conductor_rpcapi import nova.conf from nova import config from nova import objects from nova.objects import base as objects_base from nova import service from nova import version CONF = nova.conf.CONF def main(): config.parse_args(sys.argv) logging.setup(CONF, "nova") objects.register_all() gmr_opts.set_defaults(CONF) gmr.TextGuruMeditation.setup_autorun(version, conf=CONF) objects_base.NovaObject.indirection_api = conductor_rpcapi.ConductorAPI() should_use_ssl = 'metadata' in CONF.enabled_ssl_apis server = service.WSGIService('metadata', use_ssl=should_use_ssl) service.serve(server, workers=server.workers) service.wait() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/cmd/api_os_compute.py0000664000175000017500000000307400000000000017660 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Starter script for Nova OS API.""" import sys from oslo_log import log as logging from oslo_reports import guru_meditation_report as gmr from oslo_reports import opts as gmr_opts import nova.conf from nova import config from nova import objects from nova import service from nova import version CONF = nova.conf.CONF def main(): config.parse_args(sys.argv) logging.setup(CONF, "nova") objects.register_all() gmr_opts.set_defaults(CONF) # NOTE(mriedem): This is needed for caching the nova-compute service # version. objects.Service.enable_min_version_cache() gmr.TextGuruMeditation.setup_autorun(version, conf=CONF) should_use_ssl = 'osapi_compute' in CONF.enabled_ssl_apis server = service.WSGIService('osapi_compute', use_ssl=should_use_ssl) service.serve(server, workers=server.workers) service.wait() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/cmd/baseproxy.py0000664000175000017500000000507200000000000016666 0ustar00zuulzuul00000000000000# # Copyright (C) 2014 Red Hat, Inc # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # """Base proxy module used to create compatible consoles for OpenStack Nova.""" import os import sys from oslo_log import log as logging from oslo_reports import guru_meditation_report as gmr from oslo_reports import opts as gmr_opts import nova.conf from nova.conf import novnc from nova.console import websocketproxy from nova import objects from nova import version CONF = nova.conf.CONF novnc.register_cli_opts(CONF) gmr_opts.set_defaults(CONF) objects.register_all() def exit_with_error(msg, errno=-1): sys.stderr.write(msg + '\n') sys.exit(errno) def proxy(host, port, security_proxy=None): """:param host: local address to listen on :param port: local port to listen on :param security_proxy: instance of nova.console.securityproxy.base.SecurityProxy Setup a proxy listening on @host:@port. If the @security_proxy parameter is not None, this instance is used to negotiate security layer with the proxy target """ if CONF.ssl_only and not os.path.exists(CONF.cert): exit_with_error("SSL only and %s not found" % CONF.cert) # Check to see if tty html/js/css files are present if CONF.web and not os.path.exists(CONF.web): exit_with_error("Can not find html/js files at %s." 
% CONF.web) logging.setup(CONF, "nova") gmr.TextGuruMeditation.setup_autorun(version, conf=CONF) # Create and start the NovaWebSockets proxy websocketproxy.NovaWebSocketProxy( listen_host=host, listen_port=port, source_is_ipv6=CONF.source_is_ipv6, cert=CONF.cert, key=CONF.key, ssl_only=CONF.ssl_only, ssl_ciphers=CONF.console.ssl_ciphers, ssl_minimum_version=CONF.console.ssl_minimum_version, daemon=CONF.daemon, record=CONF.record, traffic=not CONF.daemon, web=CONF.web, file_only=True, RequestHandlerClass=websocketproxy.NovaProxyRequestHandler, security_proxy=security_proxy, ).start_server() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/cmd/common.py0000664000175000017500000001551000000000000016140 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Common functions used by different CLI interfaces. """ from __future__ import print_function import argparse import traceback from oslo_log import log as logging import six import nova.conf import nova.db.api from nova import exception from nova.i18n import _ from nova import utils CONF = nova.conf.CONF LOG = logging.getLogger(__name__) def block_db_access(service_name): """Blocks Nova DB access.""" class NoDB(object): def __getattr__(self, attr): return self def __call__(self, *args, **kwargs): stacktrace = "".join(traceback.format_stack()) LOG.error('No db access allowed in %(service_name)s: ' '%(stacktrace)s', dict(service_name=service_name, stacktrace=stacktrace)) raise exception.DBNotAllowed(binary=service_name) nova.db.api.IMPL = NoDB() def validate_args(fn, *args, **kwargs): """Check that the supplied args are sufficient for calling a function. >>> validate_args(lambda a: None) Traceback (most recent call last): ... MissingArgs: Missing argument(s): a >>> validate_args(lambda a, b, c, d: None, 0, c=1) Traceback (most recent call last): ... MissingArgs: Missing argument(s): b, d :param fn: the function to check :param arg: the positional arguments supplied :param kwargs: the keyword arguments supplied """ argspec = utils.getargspec(fn) num_defaults = len(argspec.defaults or []) required_args = argspec.args[:len(argspec.args) - num_defaults] if six.get_method_self(fn) is not None: required_args.pop(0) missing = [arg for arg in required_args if arg not in kwargs] missing = missing[len(args):] return missing # Decorators for actions def args(*args, **kwargs): """Decorator which adds the given args and kwargs to the args list of the desired func's __dict__. 
""" def _decorator(func): func.__dict__.setdefault('args', []).insert(0, (args, kwargs)) return func return _decorator def methods_of(obj): """Get all callable methods of an object that don't start with underscore returns a list of tuples of the form (method_name, method) """ result = [] for i in dir(obj): if callable(getattr(obj, i)) and not i.startswith('_'): result.append((i, getattr(obj, i))) return result def add_command_parsers(subparsers, categories): """Adds command parsers to the given subparsers. Adds version and bash-completion parsers. Adds a parser with subparsers for each category in the categories dict given. """ parser = subparsers.add_parser('version') parser = subparsers.add_parser('bash-completion') parser.add_argument('query_category', nargs='?') for category in categories: command_object = categories[category]() desc = getattr(command_object, 'description', None) parser = subparsers.add_parser(category, description=desc) parser.set_defaults(command_object=command_object) category_subparsers = parser.add_subparsers(dest='action') category_subparsers.required = True for (action, action_fn) in methods_of(command_object): parser = category_subparsers.add_parser( action, description=getattr(action_fn, 'description', desc)) action_kwargs = [] for args, kwargs in getattr(action_fn, 'args', []): # we must handle positional parameters (ARG) separately from # positional parameters (--opt). Detect this by checking for # the presence of leading '--' if args[0] != args[0].lstrip('-'): kwargs.setdefault('dest', args[0].lstrip('-')) if kwargs['dest'].startswith('action_kwarg_'): action_kwargs.append( kwargs['dest'][len('action_kwarg_'):]) else: action_kwargs.append(kwargs['dest']) kwargs['dest'] = 'action_kwarg_' + kwargs['dest'] else: action_kwargs.append(args[0]) args = ['action_kwarg_' + arg for arg in args] parser.add_argument(*args, **kwargs) parser.set_defaults(action_fn=action_fn) parser.set_defaults(action_kwargs=action_kwargs) parser.add_argument('action_args', nargs='*', help=argparse.SUPPRESS) def print_bash_completion(categories): if not CONF.category.query_category: print(" ".join(categories.keys())) elif CONF.category.query_category in categories: fn = categories[CONF.category.query_category] command_object = fn() actions = methods_of(command_object) print(" ".join([k for (k, v) in actions])) def get_action_fn(): fn = CONF.category.action_fn fn_args = [] for arg in CONF.category.action_args: if isinstance(arg, six.binary_type): arg = arg.decode('utf-8') fn_args.append(arg) fn_kwargs = {} for k in CONF.category.action_kwargs: v = getattr(CONF.category, 'action_kwarg_' + k) if v is None: continue if isinstance(v, six.binary_type): v = v.decode('utf-8') fn_kwargs[k] = v # call the action with the remaining arguments # check arguments missing = validate_args(fn, *fn_args, **fn_kwargs) if missing: # NOTE(mikal): this isn't the most helpful error message ever. It is # long, and tells you a lot of things you probably don't want to know # if you just got a single arg wrong. print(fn.__doc__) CONF.print_help() raise exception.Invalid( _("Missing arguments: %s") % ", ".join(missing)) return fn, fn_args, fn_kwargs def action_description(text): """Decorator for adding a description to command action. To display help text on action call instead of common category help text action function can be decorated. command -h will show description and arguments. 
""" def _decorator(func): func.description = text return func return _decorator ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/cmd/compute.py0000664000175000017500000000376700000000000016337 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Starter script for Nova Compute.""" import shlex import sys import os_vif from oslo_log import log as logging from oslo_privsep import priv_context from oslo_reports import guru_meditation_report as gmr from oslo_reports import opts as gmr_opts from nova.cmd import common as cmd_common from nova.compute import rpcapi as compute_rpcapi from nova.conductor import rpcapi as conductor_rpcapi import nova.conf from nova import config from nova import objects from nova.objects import base as objects_base from nova import service from nova import utils from nova import version CONF = nova.conf.CONF def main(): config.parse_args(sys.argv) logging.setup(CONF, 'nova') priv_context.init(root_helper=shlex.split(utils.get_root_helper())) objects.register_all() gmr_opts.set_defaults(CONF) # Ensure os-vif objects are registered and plugins loaded os_vif.initialize() gmr.TextGuruMeditation.setup_autorun(version, conf=CONF) cmd_common.block_db_access('nova-compute') objects_base.NovaObject.indirection_api = conductor_rpcapi.ConductorAPI() objects.Service.enable_min_version_cache() server = service.Service.create(binary='nova-compute', topic=compute_rpcapi.RPC_TOPIC) service.serve(server) service.wait() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/cmd/conductor.py0000664000175000017500000000274600000000000016657 0ustar00zuulzuul00000000000000# Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Starter script for Nova Conductor.""" import sys from oslo_concurrency import processutils from oslo_log import log as logging from oslo_reports import guru_meditation_report as gmr from oslo_reports import opts as gmr_opts from nova.conductor import rpcapi import nova.conf from nova import config from nova import objects from nova import service from nova import version CONF = nova.conf.CONF def main(): config.parse_args(sys.argv) logging.setup(CONF, "nova") objects.register_all() gmr_opts.set_defaults(CONF) objects.Service.enable_min_version_cache() gmr.TextGuruMeditation.setup_autorun(version, conf=CONF) server = service.Service.create(binary='nova-conductor', topic=rpcapi.RPC_TOPIC) workers = CONF.conductor.workers or processutils.get_worker_count() service.serve(server, workers=workers) service.wait() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/cmd/manage.py0000664000175000017500000037577200000000000016123 0ustar00zuulzuul00000000000000# Copyright (c) 2011 X.commerce, a business unit of eBay Inc. # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # Copyright 2013 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ CLI interface for nova management. 
""" from __future__ import print_function import collections import functools import re import sys import traceback from dateutil import parser as dateutil_parser from keystoneauth1 import exceptions as ks_exc from neutronclient.common import exceptions as neutron_client_exc import os_resource_classes as orc from oslo_config import cfg from oslo_db import exception as db_exc from oslo_log import log as logging import oslo_messaging as messaging from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import uuidutils import prettytable import six import six.moves.urllib.parse as urlparse from sqlalchemy.engine import url as sqla_url from nova.cmd import common as cmd_common from nova.compute import api as compute_api import nova.conf from nova import config from nova import context from nova.db import api as db from nova.db import migration from nova.db.sqlalchemy import api as sa_db from nova import exception from nova.i18n import _ from nova.network import constants from nova.network import neutron as neutron_api from nova import objects from nova.objects import block_device as block_device_obj from nova.objects import compute_node as compute_node_obj from nova.objects import host_mapping as host_mapping_obj from nova.objects import instance as instance_obj from nova.objects import instance_mapping as instance_mapping_obj from nova.objects import quotas as quotas_obj from nova.objects import virtual_interface as virtual_interface_obj from nova import rpc from nova.scheduler.client import report from nova.scheduler import utils as scheduler_utils from nova import version from nova.virt import ironic CONF = nova.conf.CONF LOG = logging.getLogger(__name__) # Keep this list sorted and one entry per line for readability. _EXTRA_DEFAULT_LOG_LEVELS = ['oslo_concurrency=INFO', 'oslo_db=INFO', 'oslo_policy=INFO'] # Consts indicating whether allocations need to be healed by creating them or # by updating existing allocations. _CREATE = 'create' _UPDATE = 'update' # Decorators for actions args = cmd_common.args action_description = cmd_common.action_description def mask_passwd_in_url(url): parsed = urlparse.urlparse(url) safe_netloc = re.sub(':.*@', ':****@', parsed.netloc) new_parsed = urlparse.ParseResult( parsed.scheme, safe_netloc, parsed.path, parsed.params, parsed.query, parsed.fragment) return urlparse.urlunparse(new_parsed) class DbCommands(object): """Class for managing the main database.""" # NOTE(danms): These functions are called with a DB context and a # count, which is the maximum batch size requested by the # user. They must be idempotent. At most $count records should be # migrated. The function must return a tuple of (found, done). The # found value indicates how many unmigrated/candidate records existed in # the database prior to the migration (either total, or up to the # $count limit provided), and a nonzero found value may tell the user # that there is still work to do. The done value indicates whether # or not any records were actually migrated by the function. Thus # if both (found, done) are nonzero, work was done and some work # remains. If found is nonzero and done is zero, some records are # not migratable (or don't need migrating), but all migrations that can # complete have finished. 
# NOTE(stephenfin): These names must be unique online_migrations = ( # Added in Pike quotas_obj.migrate_quota_limits_to_api_db, # Added in Pike quotas_obj.migrate_quota_classes_to_api_db, # Added in Queens sa_db.migration_migrate_to_uuid, # Added in Queens block_device_obj.BlockDeviceMapping.populate_uuids, # Added in Rocky # NOTE(tssurya): This online migration is going to be backported to # Queens and Pike since instance.avz of instances before Pike # need to be populated if it was not specified during boot time. instance_obj.populate_missing_availability_zones, # Added in Rocky instance_mapping_obj.populate_queued_for_delete, # Added in Stein compute_node_obj.migrate_empty_ratio, # Added in Stein virtual_interface_obj.fill_virtual_interface_list, # Added in Stein instance_mapping_obj.populate_user_id, ) def __init__(self): pass @staticmethod def _print_dict(dct, dict_property="Property", dict_value='Value', sort_key=None): """Print a `dict` as a table of two columns. :param dct: `dict` to print :param dict_property: name of the first column :param wrap: wrapping for the second column :param dict_value: header label for the value (second) column :param sort_key: key used for sorting the dict """ pt = prettytable.PrettyTable([dict_property, dict_value]) pt.align = 'l' for k, v in sorted(dct.items(), key=sort_key): # convert dict to str to check length if isinstance(v, dict): v = six.text_type(v) # if value has a newline, add in multiple rows # e.g. fault with stacktrace if v and isinstance(v, six.string_types) and r'\n' in v: lines = v.strip().split(r'\n') col1 = k for line in lines: pt.add_row([col1, line]) col1 = '' else: pt.add_row([k, v]) if six.PY2: print(encodeutils.safe_encode(pt.get_string())) else: print(encodeutils.safe_encode(pt.get_string()).decode()) @args('--local_cell', action='store_true', help='Only sync db in the local cell: do not attempt to fan-out ' 'to all cells') @args('version', metavar='VERSION', nargs='?', help='Database version') def sync(self, version=None, local_cell=False): """Sync the database up to the most recent version.""" if not local_cell: ctxt = context.RequestContext() # NOTE(mdoff): Multiple cells not yet implemented. Currently # fanout only looks for cell0. try: cell_mapping = objects.CellMapping.get_by_uuid(ctxt, objects.CellMapping.CELL0_UUID) with context.target_cell(ctxt, cell_mapping) as cctxt: migration.db_sync(version, context=cctxt) except exception.CellMappingNotFound: print(_('WARNING: cell0 mapping not found - not' ' syncing cell0.')) except Exception as e: print(_("""ERROR: Could not access cell0. Has the nova_api database been created? Has the nova_cell0 database been created? Has "nova-manage api_db sync" been run? Has "nova-manage cell_v2 map_cell0" been run? Is [api_database]/connection set in nova.conf? Is the cell0 database connection URL correct? Error: %s""") % six.text_type(e)) return 1 return migration.db_sync(version) def version(self): """Print the current database version.""" print(migration.db_version()) @args('--max_rows', type=int, metavar='', dest='max_rows', help='Maximum number of deleted rows to archive. Defaults to 1000. ' 'Note that this number does not include the corresponding ' 'rows, if any, that are removed from the API database for ' 'deleted instances.') @args('--before', metavar='', help=('Archive rows that have been deleted before this date. 
' 'Accepts date strings in the default format output by the ' '``date`` command, as well as ``YYYY-MM-DD [HH:mm:ss]``.')) @args('--verbose', action='store_true', dest='verbose', default=False, help='Print how many rows were archived per table.') @args('--until-complete', action='store_true', dest='until_complete', default=False, help=('Run continuously until all deleted rows are archived. Use ' 'max_rows as a batch size for each iteration.')) @args('--purge', action='store_true', dest='purge', default=False, help='Purge all data from shadow tables after archive completes') @args('--all-cells', action='store_true', dest='all_cells', default=False, help='Run command across all cells.') def archive_deleted_rows(self, max_rows=1000, verbose=False, until_complete=False, purge=False, before=None, all_cells=False): """Move deleted rows from production tables to shadow tables. Returns 0 if nothing was archived, 1 if some number of rows were archived, 2 if max_rows is invalid, 3 if no connection could be established to the API DB, 4 if before date is invalid. If automating, this should be run continuously while the result is 1, stopping at 0. """ max_rows = int(max_rows) if max_rows < 0: print(_("Must supply a positive value for max_rows")) return 2 if max_rows > db.MAX_INT: print(_('max rows must be <= %(max_value)d') % {'max_value': db.MAX_INT}) return 2 ctxt = context.get_admin_context() try: # NOTE(tssurya): This check has been added to validate if the API # DB is reachable or not as this is essential for purging the # related API database records of the deleted instances. cell_mappings = objects.CellMappingList.get_all(ctxt) except db_exc.CantStartEngineError: print(_('Failed to connect to API DB so aborting this archival ' 'attempt. Please check your config file to make sure that ' '[api_database]/connection is set and run this ' 'command again.')) return 3 if before: try: before_date = dateutil_parser.parse(before, fuzzy=True) except ValueError as e: print(_('Invalid value for --before: %s') % e) return 4 else: before_date = None table_to_rows_archived = {} if until_complete and verbose: sys.stdout.write(_('Archiving') + '..') # noqa interrupt = False if all_cells: # Sort first by cell name, then by table: # +--------------------------------+-------------------------+ # | Table | Number of Rows Archived | # +--------------------------------+-------------------------+ # | cell0.block_device_mapping | 1 | # | cell1.block_device_mapping | 1 | # | cell1.instance_actions | 2 | # | cell1.instance_actions_events | 2 | # | cell2.block_device_mapping | 1 | # | cell2.instance_actions | 2 | # | cell2.instance_actions_events | 2 | # ... def sort_func(item): cell_name, table = item[0].split('.') return cell_name, table print_sort_func = sort_func else: cell_mappings = [None] print_sort_func = None total_rows_archived = 0 for cell_mapping in cell_mappings: # NOTE(Kevin_Zheng): No need to calculate limit for each # cell if until_complete=True. # We need not adjust max rows to avoid exceeding a specified total # limit because with until_complete=True, we have no total limit. if until_complete: max_rows_to_archive = max_rows elif max_rows > total_rows_archived: # We reduce the max rows to archive based on what we've # archived so far to avoid potentially exceeding the specified # total limit. 
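# Worked example (annotation, not in the original): with --max_rows 100 and
# --until-complete not set, if the first cell archives 30 rows then
# total_rows_archived == 30 and the next cell is limited to 100 - 30 = 70;
# once total_rows_archived reaches 100 the loop breaks without visiting the
# remaining cells.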
max_rows_to_archive = max_rows - total_rows_archived else: break # If all_cells=False, cell_mapping is None with context.target_cell(ctxt, cell_mapping) as cctxt: cell_name = cell_mapping.name if cell_mapping else None try: rows_archived = self._do_archive( table_to_rows_archived, cctxt, max_rows_to_archive, until_complete, verbose, before_date, cell_name) except KeyboardInterrupt: interrupt = True break # TODO(melwitt): Handle skip/warn for unreachable cells. Note # that cell_mappings = [None] if not --all-cells total_rows_archived += rows_archived if until_complete and verbose: if interrupt: print('.' + _('stopped')) # noqa else: print('.' + _('complete')) # noqa if verbose: if table_to_rows_archived: self._print_dict(table_to_rows_archived, _('Table'), dict_value=_('Number of Rows Archived'), sort_key=print_sort_func) else: print(_('Nothing was archived.')) if table_to_rows_archived and purge: if verbose: print(_('Rows were archived, running purge...')) self.purge(purge_all=True, verbose=verbose, all_cells=all_cells) # NOTE(danms): Return nonzero if we archived something return int(bool(table_to_rows_archived)) def _do_archive(self, table_to_rows_archived, cctxt, max_rows, until_complete, verbose, before_date, cell_name): """Helper function for archiving deleted rows for a cell. This will archive deleted rows for a cell database and remove the associated API database records for deleted instances. :param table_to_rows_archived: Dict tracking the number of rows archived by .. Example: {'cell0.instances': 2, 'cell1.instances': 5} :param cctxt: Cell-targeted nova.context.RequestContext if archiving across all cells :param max_rows: Maximum number of deleted rows to archive :param until_complete: Whether to run continuously until all deleted rows are archived :param verbose: Whether to print how many rows were archived per table :param before_date: Archive rows that were deleted before this date :param cell_name: Name of the cell or None if not archiving across all cells """ ctxt = context.get_admin_context() while True: run, deleted_instance_uuids, total_rows_archived = \ db.archive_deleted_rows(cctxt, max_rows, before=before_date) for table_name, rows_archived in run.items(): if cell_name: table_name = cell_name + '.' + table_name table_to_rows_archived.setdefault(table_name, 0) table_to_rows_archived[table_name] += rows_archived if deleted_instance_uuids: table_to_rows_archived.setdefault( 'API_DB.instance_mappings', 0) table_to_rows_archived.setdefault( 'API_DB.request_specs', 0) table_to_rows_archived.setdefault( 'API_DB.instance_group_member', 0) deleted_mappings = objects.InstanceMappingList.destroy_bulk( ctxt, deleted_instance_uuids) table_to_rows_archived[ 'API_DB.instance_mappings'] += deleted_mappings deleted_specs = objects.RequestSpec.destroy_bulk( ctxt, deleted_instance_uuids) table_to_rows_archived[ 'API_DB.request_specs'] += deleted_specs deleted_group_members = ( objects.InstanceGroup.destroy_members_bulk( ctxt, deleted_instance_uuids)) table_to_rows_archived[ 'API_DB.instance_group_member'] += deleted_group_members # If we're not archiving until there is nothing more to archive, we # have reached max_rows in this cell DB or there was nothing to # archive. if not until_complete or not run: break if verbose: sys.stdout.write('.') return total_rows_archived @args('--before', metavar='', dest='before', help='If specified, purge rows from shadow tables that are older ' 'than this. 
Accepts date strings in the default format output ' 'by the ``date`` command, as well as ``YYYY-MM-DD ' '[HH:mm:ss]``.') @args('--all', dest='purge_all', action='store_true', help='Purge all rows in the shadow tables') @args('--verbose', dest='verbose', action='store_true', default=False, help='Print information about purged records') @args('--all-cells', dest='all_cells', action='store_true', default=False, help='Run against all cell databases') def purge(self, before=None, purge_all=False, verbose=False, all_cells=False): if before is None and purge_all is False: print(_('Either --before or --all is required')) return 1 if before: try: before_date = dateutil_parser.parse(before, fuzzy=True) except ValueError as e: print(_('Invalid value for --before: %s') % e) return 2 else: before_date = None def status(msg): if verbose: print('%s: %s' % (identity, msg)) deleted = 0 admin_ctxt = context.get_admin_context() if all_cells: try: cells = objects.CellMappingList.get_all(admin_ctxt) except db_exc.DBError: print(_('Unable to get cell list from API DB. ' 'Is it configured?')) return 4 for cell in cells: identity = _('Cell %s') % cell.identity with context.target_cell(admin_ctxt, cell) as cctxt: deleted += sa_db.purge_shadow_tables(cctxt, before_date, status_fn=status) else: identity = _('DB') deleted = sa_db.purge_shadow_tables(admin_ctxt, before_date, status_fn=status) if deleted: return 0 else: return 3 @args('--delete', action='store_true', dest='delete', help='If specified, automatically delete any records found where ' 'instance_uuid is NULL.') def null_instance_uuid_scan(self, delete=False): """Lists and optionally deletes database records where instance_uuid is NULL. """ hits = migration.db_null_instance_uuid_scan(delete) records_found = False for table_name, records in hits.items(): # Don't print anything for 0 hits if records: records_found = True if delete: print(_("Deleted %(records)d records " "from table '%(table_name)s'.") % {'records': records, 'table_name': table_name}) else: print(_("There are %(records)d records in the " "'%(table_name)s' table where the uuid or " "instance_uuid column is NULL. Run this " "command again with the --delete option after you " "have backed up any necessary data.") % {'records': records, 'table_name': table_name}) # check to see if we didn't find anything if not records_found: print(_('There were no records found where ' 'instance_uuid was NULL.')) def _run_migration(self, ctxt, max_count): ran = 0 exceptions = False migrations = {} for migration_meth in self.online_migrations: count = max_count - ran try: found, done = migration_meth(ctxt, count) except Exception: msg = (_("Error attempting to run %(method)s") % dict( method=migration_meth)) print(msg) LOG.exception(msg) exceptions = True found = done = 0 name = migration_meth.__name__ if found: print(_('%(total)i rows matched query %(meth)s, %(done)i ' 'migrated') % {'total': found, 'meth': name, 'done': done}) # This is the per-migration method result for this batch, and # _run_migration will either continue on to the next migration, # or stop if up to this point we've processed max_count of # records across all migration methods. 
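# Worked example (annotation, not in the original): with max_count=50, if the
# first migration method reports (found=70, done=30) and the second, called
# with count=20, reports (found=20, done=20), ran reaches 50 and the loop
# stops; remaining methods are handled on a subsequent batch.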
migrations[name] = found, done if max_count is not None: ran += done if ran >= max_count: break return migrations, exceptions @args('--max-count', metavar='', dest='max_count', help='Maximum number of objects to consider') def online_data_migrations(self, max_count=None): ctxt = context.get_admin_context() if max_count is not None: try: max_count = int(max_count) except ValueError: max_count = -1 unlimited = False if max_count < 1: print(_('Must supply a positive value for max_number')) return 127 else: unlimited = True max_count = 50 print(_('Running batches of %i until complete') % max_count) ran = None migration_info = {} exceptions = False while ran is None or ran != 0: migrations, exceptions = self._run_migration(ctxt, max_count) ran = 0 # For each batch of migration method results, build the cumulative # set of results. for name in migrations: migration_info.setdefault(name, (0, 0)) migration_info[name] = ( migration_info[name][0] + migrations[name][0], migration_info[name][1] + migrations[name][1], ) ran += migrations[name][1] if not unlimited: break t = prettytable.PrettyTable([_('Migration'), _('Total Needed'), # Really: Total Found _('Completed')]) for name in sorted(migration_info.keys()): info = migration_info[name] t.add_row([name, info[0], info[1]]) print(t) # NOTE(imacdonn): In the "unlimited" case, the loop above will only # terminate when all possible migrations have been effected. If we're # still getting exceptions, there's a problem that requires # intervention. In the max-count case, exceptions are only considered # fatal if no work was done by any other migrations ("not ran"), # because otherwise work may still remain to be done, and that work # may resolve dependencies for the failing migrations. if exceptions and (unlimited or not ran): print(_("Some migrations failed unexpectedly. Check log for " "details.")) return 2 # TODO(mriedem): Potentially add another return code for # "there are more migrations, but not completable right now" return ran and 1 or 0 @args('--resource_class', metavar='', required=True, help='Ironic node class to set on instances') @args('--host', metavar='', required=False, help='Compute service name to migrate nodes on') @args('--node', metavar='', required=False, help='Ironic node UUID to migrate (all on the host if omitted)') @args('--all', action='store_true', default=False, dest='all_hosts', help='Run migrations for all ironic hosts and nodes') @args('--verbose', action='store_true', default=False, help='Print information about migrations being performed') def ironic_flavor_migration(self, resource_class, host=None, node=None, all_hosts=False, verbose=False): """Migrate flavor information for ironic instances. This will manually push the instance flavor migration required for ironic-hosted instances in Pike. The best way to accomplish this migration is to run your ironic computes normally in Pike. However, if you need to push the migration manually, then use this. This is idempotent, but not trivial to start/stop/resume. It is recommended that you do this with care and not from a script assuming it is trivial. Running with --all may generate a large amount of DB traffic all at once. Running at least one host at a time is recommended for batching. 
Return values: 0: All work is completed (or none is needed) 1: Specified host and/or node is not found, or no ironic nodes present 2: Internal accounting error shows more than one instance per node 3: Invalid combination of required arguments """ if not resource_class: # Note that if --resource_class is not specified on the command # line it will actually result in a return code of 2, but we # leave 3 here for testing purposes. print(_('A resource_class is required for all modes of operation')) return 3 ctx = context.get_admin_context() if all_hosts: if host or node: print(_('--all with --host and/or --node does not make sense')) return 3 cns = objects.ComputeNodeList.get_by_hypervisor_type(ctx, 'ironic') elif host and node: try: cn = objects.ComputeNode.get_by_host_and_nodename(ctx, host, node) cns = [cn] except exception.ComputeHostNotFound: cns = [] elif host: try: cns = objects.ComputeNodeList.get_all_by_host(ctx, host) except exception.ComputeHostNotFound: cns = [] else: print(_('Either --all, --host, or --host and --node are required')) return 3 if len(cns) == 0: print(_('No ironic compute nodes found that match criteria')) return 1 # Check that we at least got one ironic compute and we can pretty # safely assume the rest are if cns[0].hypervisor_type != 'ironic': print(_('Compute node(s) specified is not of type ironic')) return 1 for cn in cns: # NOTE(danms): The instance.node is the # ComputeNode.hypervisor_hostname, which in the case of ironic is # the node uuid. Since only one instance can be on a node in # ironic, do another sanity check here to make sure we look legit. inst = objects.InstanceList.get_by_filters( ctx, {'node': cn.hypervisor_hostname, 'deleted': False}) if len(inst) > 1: print(_('Ironic node %s has multiple instances? ' 'Something is wrong.') % cn.hypervisor_hostname) return 2 elif len(inst) == 1: result = ironic.IronicDriver._pike_flavor_migration_for_node( ctx, resource_class, inst[0].uuid) if result and verbose: print(_('Migrated instance %(uuid)s on node %(node)s') % { 'uuid': inst[0].uuid, 'node': cn.hypervisor_hostname}) return 0 class ApiDbCommands(object): """Class for managing the api database.""" def __init__(self): pass @args('version', metavar='VERSION', nargs='?', help='Database version') def sync(self, version=None): """Sync the database up to the most recent version.""" return migration.db_sync(version, database='api') def version(self): """Print the current database version.""" print(migration.db_version(database='api')) class CellV2Commands(object): """Commands for managing cells v2.""" def _validate_transport_url(self, transport_url, warn_about_none=True): if not transport_url: if not CONF.transport_url: if warn_about_none: print(_( 'Must specify --transport-url if ' '[DEFAULT]/transport_url is not set in the ' 'configuration file.')) return None print(_('--transport-url not provided in the command line, ' 'using the value [DEFAULT]/transport_url from the ' 'configuration file')) transport_url = CONF.transport_url try: messaging.TransportURL.parse(conf=CONF, url=objects.CellMapping.format_mq_url( transport_url)) except (messaging.InvalidTransportURL, ValueError) as e: print(_('Invalid transport URL: %s') % six.text_type(e)) return None return transport_url def _validate_database_connection( self, database_connection, warn_about_none=True): if not database_connection: if not CONF.database.connection: if warn_about_none: print(_( 'Must specify --database_connection if ' '[database]/connection is not set in the ' 'configuration file.')) return 
None print(_('--database_connection not provided in the command line, ' 'using the value [database]/connection from the ' 'configuration file')) return CONF.database.connection return database_connection def _non_unique_transport_url_database_connection_checker(self, ctxt, cell_mapping, transport_url, database_connection): for cell in objects.CellMappingList.get_all(ctxt): if cell_mapping and cell.uuid == cell_mapping.uuid: # If we're looking for a specific cell, then don't check # that one for same-ness to allow idempotent updates continue if (cell.database_connection == database_connection or cell.transport_url == transport_url): print(_('The specified transport_url and/or ' 'database_connection combination already exists ' 'for another cell with uuid %s.') % cell.uuid) return True return False @args('--transport-url', metavar='', dest='transport_url', help='The transport url for the cell message queue') def simple_cell_setup(self, transport_url=None): """Simple cellsv2 setup. This simplified command is for use by existing non-cells users to configure the default environment. Returns 0 if setup is completed (or has already been done) and 1 if no hosts are reporting (and this cannot be mapped). """ transport_url = self._validate_transport_url(transport_url) if not transport_url: return 1 ctxt = context.RequestContext() try: cell0_mapping = self._map_cell0() except db_exc.DBDuplicateEntry: print(_('Cell0 is already setup')) cell0_mapping = objects.CellMapping.get_by_uuid( ctxt, objects.CellMapping.CELL0_UUID) # Run migrations so cell0 is usable with context.target_cell(ctxt, cell0_mapping) as cctxt: try: migration.db_sync(None, context=cctxt) except db_exc.DBError as ex: print(_('Unable to sync cell0 schema: %s') % ex) cell_uuid = self._map_cell_and_hosts(transport_url) if cell_uuid is None: # There are no compute hosts which means no cell_mapping was # created. This should also mean that there are no instances. return 1 self.map_instances(cell_uuid) return 0 @args('--database_connection', metavar='', help='The database connection url for cell0. ' 'This is optional. If not provided, a standard database ' 'connection will be used based on the main database connection ' 'from the Nova configuration.' ) def map_cell0(self, database_connection=None): """Create a cell mapping for cell0. cell0 is used for instances that have not been scheduled to any cell. This generally applies to instances that have encountered an error before they have been scheduled. This command creates a cell mapping for this special cell which requires a database to store the instance data. Returns 0 if cell0 created successfully or already setup. """ try: self._map_cell0(database_connection=database_connection) except db_exc.DBDuplicateEntry: print(_('Cell0 is already setup')) return 0 def _map_cell0(self, database_connection=None): """Faciliate creation of a cell mapping for cell0. See map_cell0 for more. """ def cell0_default_connection(): # If no database connection is provided one is generated # based on the database connection url. # The cell0 database will use the same database scheme and # netloc as the main database, with a related path. # NOTE(sbauza): The URL has to be RFC1738 compliant in order to # be usable by sqlalchemy. 
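# Worked example (annotation, not in the original): with
#     [database]/connection = mysql+pymysql://nova:secret@dbhost/nova
# the default cell0 connection generated below becomes
#     mysql+pymysql://nova:secret@dbhost/nova_cell0
# i.e. the same scheme and netloc with '_cell0' appended to the database name.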
connection = CONF.database.connection # sqlalchemy has a nice utility for parsing database connection # URLs so we use that here to get the db name so we don't have to # worry about parsing and splitting a URL which could have special # characters in the password, which makes parsing a nightmare. url = sqla_url.make_url(connection) url.database = url.database + '_cell0' return urlparse.unquote(str(url)) dbc = database_connection or cell0_default_connection() ctxt = context.RequestContext() # A transport url of 'none://' is provided for cell0. RPC should not # be used to access cell0 objects. Cells transport switching will # ignore any 'none' transport type. cell_mapping = objects.CellMapping( ctxt, uuid=objects.CellMapping.CELL0_UUID, name="cell0", transport_url="none:///", database_connection=dbc) cell_mapping.create() return cell_mapping def _get_and_map_instances(self, ctxt, cell_mapping, limit, marker): filters = {} with context.target_cell(ctxt, cell_mapping) as cctxt: instances = objects.InstanceList.get_by_filters( cctxt.elevated(read_deleted='yes'), filters, sort_key='created_at', sort_dir='asc', limit=limit, marker=marker) for instance in instances: try: mapping = objects.InstanceMapping(ctxt) mapping.instance_uuid = instance.uuid mapping.cell_mapping = cell_mapping mapping.project_id = instance.project_id mapping.user_id = instance.user_id mapping.create() except db_exc.DBDuplicateEntry: continue if len(instances) == 0 or len(instances) < limit: # We've hit the end of the instances table marker = None else: marker = instances[-1].uuid return marker @args('--cell_uuid', metavar='', dest='cell_uuid', required=True, help='Unmigrated instances will be mapped to the cell with the ' 'uuid provided.') @args('--max-count', metavar='', dest='max_count', help='Maximum number of instances to map. If not set, all instances ' 'in the cell will be mapped in batches of 50. If you have a ' 'large number of instances, consider specifying a custom value ' 'and run the command until it exits with 0.') @args('--reset', action='store_true', dest='reset_marker', help='The command will start from the beginning as opposed to the ' 'default behavior of starting from where the last run ' 'finished') def map_instances(self, cell_uuid, max_count=None, reset_marker=None): """Map instances into the provided cell. Instances in the nova database of the provided cell (nova database info is obtained from the nova-api database) will be queried from oldest to newest and if unmapped, will be mapped to the provided cell. A max-count can be set on the number of instance to map in a single run. Repeated runs of the command will start from where the last run finished so it is not necessary to increase max-count to finish. A reset option can be passed which will reset the marker, thus making the command start from the beginning as opposed to the default behavior of starting from where the last run finished. An exit code of 0 indicates that all instances have been mapped. """ # NOTE(stephenfin): The support for batching in this command relies on # a bit of a hack. We initially process N instance-cell mappings, where # N is the value of '--max-count' if provided else 50. To ensure we # can continue from N on the next iteration, we store a instance-cell # mapping object with a special name and the UUID of the last # instance-cell mapping processed (N - 1) in munged form. On the next # iteration, we search for the special name and unmunge the UUID to # pick up where we left off. 
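# Worked example (annotation, not in the original): if the last mapping
# processed in a batch has UUID 11111111-2222-3333-4444-555555555555, the
# marker row stores it munged as '11111111 2222 3333 4444 555555555555'
# (dashes turned into spaces, so it cannot collide with the real
# InstanceMapping), and the next run un-munges it back before continuing.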
This is done until all mappings are # processed. The munging is necessary as there's a unique constraint on # the UUID field and we need something reversable. For more # information, see commit 9038738d0. if max_count is not None: try: max_count = int(max_count) except ValueError: max_count = -1 map_all = False if max_count < 1: print(_('Must supply a positive value for max-count')) return 127 else: map_all = True max_count = 50 ctxt = context.RequestContext() marker_project_id = 'INSTANCE_MIGRATION_MARKER' # Validate the cell exists, this will raise if not cell_mapping = objects.CellMapping.get_by_uuid(ctxt, cell_uuid) # Check for a marker from a previous run marker_mapping = objects.InstanceMappingList.get_by_project_id(ctxt, marker_project_id) if len(marker_mapping) == 0: marker = None else: # There should be only one here marker = marker_mapping[0].instance_uuid.replace(' ', '-') if reset_marker: marker = None marker_mapping[0].destroy() next_marker = True while next_marker is not None: next_marker = self._get_and_map_instances(ctxt, cell_mapping, max_count, marker) marker = next_marker if not map_all: break if next_marker: # Don't judge me. There's already an InstanceMapping with this UUID # so the marker needs to be non destructively modified. next_marker = next_marker.replace('-', ' ') # This is just the marker record, so set user_id to the special # marker name as well. objects.InstanceMapping(ctxt, instance_uuid=next_marker, project_id=marker_project_id, user_id=marker_project_id).create() return 1 return 0 def _map_cell_and_hosts(self, transport_url, name=None, verbose=False): ctxt = context.RequestContext() cell_mapping_uuid = cell_mapping = None # First, try to detect if a CellMapping has already been created compute_nodes = objects.ComputeNodeList.get_all(ctxt) if not compute_nodes: print(_('No hosts found to map to cell, exiting.')) return None missing_nodes = set() for compute_node in compute_nodes: try: host_mapping = objects.HostMapping.get_by_host( ctxt, compute_node.host) except exception.HostMappingNotFound: missing_nodes.add(compute_node.host) else: if verbose: print(_( 'Host %(host)s is already mapped to cell %(uuid)s' ) % {'host': host_mapping.host, 'uuid': host_mapping.cell_mapping.uuid}) # Re-using the existing UUID in case there is already a mapping # NOTE(sbauza): There could be possibly multiple CellMappings # if the operator provides another configuration file and moves # the hosts to another cell v2, but that's not really something # we should support. 
cell_mapping_uuid = host_mapping.cell_mapping.uuid if not missing_nodes: print(_('All hosts are already mapped to cell(s).')) return cell_mapping_uuid # Create the cell mapping in the API database if cell_mapping_uuid is not None: cell_mapping = objects.CellMapping.get_by_uuid( ctxt, cell_mapping_uuid) if cell_mapping is None: cell_mapping_uuid = uuidutils.generate_uuid() cell_mapping = objects.CellMapping( ctxt, uuid=cell_mapping_uuid, name=name, transport_url=transport_url, database_connection=CONF.database.connection) cell_mapping.create() # Pull the hosts from the cell database and create the host mappings for compute_host in missing_nodes: host_mapping = objects.HostMapping( ctxt, host=compute_host, cell_mapping=cell_mapping) host_mapping.create() if verbose: print(cell_mapping_uuid) return cell_mapping_uuid @args('--transport-url', metavar='', dest='transport_url', help='The transport url for the cell message queue') @args('--name', metavar='', help='The name of the cell') @args('--verbose', action='store_true', help='Output the cell mapping uuid for any newly mapped hosts.') def map_cell_and_hosts(self, transport_url=None, name=None, verbose=False): """EXPERIMENTAL. Create a cell mapping and host mappings for a cell. Users not dividing their cloud into multiple cells will be a single cell v2 deployment and should specify: nova-manage cell_v2 map_cell_and_hosts --config-file Users running multiple cells can add a cell v2 by specifying: nova-manage cell_v2 map_cell_and_hosts --config-file """ transport_url = self._validate_transport_url(transport_url) if not transport_url: return 1 self._map_cell_and_hosts(transport_url, name, verbose) # online_data_migrations established a pattern of 0 meaning everything # is done, 1 means run again to do more work. This command doesn't do # partial work so 0 is appropriate. return 0 @args('--uuid', metavar='', dest='uuid', required=True, help=_('The instance UUID to verify')) @args('--quiet', action='store_true', dest='quiet', help=_('Do not print anything')) def verify_instance(self, uuid, quiet=False): """Verify instance mapping to a cell. This command is useful to determine if the cellsv2 environment is properly setup, specifically in terms of the cell, host, and instance mapping records required. This prints one of three strings (and exits with a code) indicating whether the instance is successfully mapped to a cell (0), is unmapped due to an incomplete upgrade (1), unmapped due to normally transient state (2), it is a deleted instance which has instance mapping (3), or it is an archived instance which still has an instance mapping (4). """ def say(string): if not quiet: print(string) ctxt = context.get_admin_context() try: mapping = objects.InstanceMapping.get_by_instance_uuid( ctxt, uuid) except exception.InstanceMappingNotFound: say('Instance %s is not mapped to a cell ' '(upgrade is incomplete) or instance ' 'does not exist' % uuid) return 1 if mapping.cell_mapping is None: say('Instance %s is not mapped to a cell' % uuid) return 2 else: with context.target_cell(ctxt, mapping.cell_mapping) as cctxt: try: instance = objects.Instance.get_by_uuid(cctxt, uuid) except exception.InstanceNotFound: try: el_ctx = cctxt.elevated(read_deleted='yes') instance = objects.Instance.get_by_uuid(el_ctx, uuid) # instance is deleted if instance: say('The instance with uuid %s has been deleted.' 
% uuid) say('Execute ' '`nova-manage db archive_deleted_rows` ' 'command to archive this deleted ' 'instance and remove its instance_mapping.') return 3 except exception.InstanceNotFound: # instance is archived say('The instance with uuid %s has been archived.' % uuid) say('However its instance_mapping remains.') return 4 # instance is alive and mapped to a cell say('Instance %s is in cell: %s (%s)' % ( uuid, mapping.cell_mapping.name, mapping.cell_mapping.uuid)) return 0 @args('--cell_uuid', metavar='', dest='cell_uuid', help='If provided only this cell will be searched for new hosts to ' 'map.') @args('--verbose', action='store_true', help=_('Provide detailed output when discovering hosts.')) @args('--strict', action='store_true', help=_('Considered successful (exit code 0) only when an unmapped ' 'host is discovered. Any other outcome will be considered a ' 'failure (non-zero exit code).')) @args('--by-service', action='store_true', default=False, dest='by_service', help=_('Discover hosts by service instead of compute node')) def discover_hosts(self, cell_uuid=None, verbose=False, strict=False, by_service=False): """Searches cells, or a single cell, and maps found hosts. When a new host is added to a deployment it will add a service entry to the db it's configured to use. This command will check the db for each cell, or a single one if passed in, and map any hosts which are not currently mapped. If a host is already mapped nothing will be done. This command should be run once after all compute hosts have been deployed and should not be run in parallel. When run in parallel, the commands will collide with each other trying to map the same hosts in the database at the same time. """ def status_fn(msg): if verbose: print(msg) ctxt = context.RequestContext() try: hosts = host_mapping_obj.discover_hosts(ctxt, cell_uuid, status_fn, by_service) except exception.HostMappingExists as exp: print(_('ERROR: Duplicate host mapping was encountered. This ' 'command should be run once after all compute hosts have ' 'been deployed and should not be run in parallel. When ' 'run in parallel, the commands will collide with each ' 'other trying to map the same hosts in the database at ' 'the same time. Error: %s') % exp) return 2 # discover_hosts will return an empty list if no hosts are discovered if strict: return int(not hosts) @action_description( _("Add a new cell to nova API database. " "DB and MQ urls can be provided directly " "or can be taken from config. 
The result is cell uuid.")) @args('--name', metavar='', help=_('The name of the cell')) @args('--database_connection', metavar='', dest='database_connection', help=_('The database url for the cell database')) @args('--transport-url', metavar='', dest='transport_url', help=_('The transport url for the cell message queue')) @args('--verbose', action='store_true', help=_('Output the uuid of the created cell')) @args('--disabled', action='store_true', help=_('To create a pre-disabled cell.')) def create_cell(self, name=None, database_connection=None, transport_url=None, verbose=False, disabled=False): ctxt = context.get_context() transport_url = self._validate_transport_url(transport_url) if not transport_url: return 1 database_connection = self._validate_database_connection( database_connection) if not database_connection: return 1 if (self._non_unique_transport_url_database_connection_checker(ctxt, None, transport_url, database_connection)): return 2 cell_mapping_uuid = uuidutils.generate_uuid() cell_mapping = objects.CellMapping( ctxt, uuid=cell_mapping_uuid, name=name, transport_url=transport_url, database_connection=database_connection, disabled=disabled) cell_mapping.create() if verbose: print(cell_mapping_uuid) return 0 @args('--verbose', action='store_true', help=_('Show sensitive details, such as passwords')) def list_cells(self, verbose=False): """Lists the v2 cells in the deployment. By default the cell name, uuid, disabled state, masked transport URL and database connection details are shown. Use the --verbose option to see transport URL and database connection with their sensitive details. """ cell_mappings = objects.CellMappingList.get_all( context.get_admin_context()) field_names = [_('Name'), _('UUID'), _('Transport URL'), _('Database Connection'), _('Disabled')] t = prettytable.PrettyTable(field_names) for cell in sorted(cell_mappings, # CellMapping.name is optional key=lambda _cell: _cell.name or ''): fields = [cell.name or '', cell.uuid] if verbose: fields.extend([cell.transport_url, cell.database_connection]) else: fields.extend([ mask_passwd_in_url(cell.transport_url), mask_passwd_in_url(cell.database_connection)]) fields.extend([cell.disabled]) t.add_row(fields) print(t) return 0 @args('--force', action='store_true', default=False, help=_('Delete hosts and instance_mappings that belong ' 'to the cell as well.')) @args('--cell_uuid', metavar='', dest='cell_uuid', required=True, help=_('The uuid of the cell to delete.')) def delete_cell(self, cell_uuid, force=False): """Delete an empty cell by the given uuid. This command will return a non-zero exit code in the following cases. * The cell is not found by uuid. * It has hosts and force is False. * It has instance mappings and force is False. If force is True and the cell has hosts and/or instance_mappings, they are deleted as well (as long as there are no living instances). Returns 0 in the following cases. * The empty cell is found and deleted successfully. * The cell has hosts and force is True then the cell, hosts and instance_mappings are deleted successfully; if there are no living instances. """ ctxt = context.get_admin_context() # Find the CellMapping given the uuid. try: cell_mapping = objects.CellMapping.get_by_uuid(ctxt, cell_uuid) except exception.CellMappingNotFound: print(_('Cell with uuid %s was not found.') % cell_uuid) return 1 # Check to see if there are any HostMappings for this cell. 
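        # Illustrative operator flow only (not part of the upstream module;
        # the cell uuid below is a placeholder):
        #   nova-manage cell_v2 delete_cell --cell_uuid <cell-uuid>
        #       -> exits 2, 3 or 4 if host or instance mappings still exist
        #   nova-manage cell_v2 delete_cell --cell_uuid <cell-uuid> --force
        #       -> also removes host and instance mappings, provided no live
        #          instances remain in the cell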
host_mappings = objects.HostMappingList.get_by_cell_id( ctxt, cell_mapping.id) nodes = [] if host_mappings: if not force: print(_('There are existing hosts mapped to cell with uuid ' '%s.') % cell_uuid) return 2 # We query for the compute nodes in the cell, # so that they can be unmapped. with context.target_cell(ctxt, cell_mapping) as cctxt: nodes = objects.ComputeNodeList.get_all(cctxt) # Check to see if there are any InstanceMappings for this cell. instance_mappings = objects.InstanceMappingList.get_by_cell_id( ctxt, cell_mapping.id) if instance_mappings: with context.target_cell(ctxt, cell_mapping) as cctxt: instances = objects.InstanceList.get_all(cctxt) if instances: # There are instances in the cell. print(_('There are existing instances mapped to cell with ' 'uuid %s.') % cell_uuid) return 3 else: if not force: # There are no instances in the cell but the records remain # in the 'instance_mappings' table. print(_("There are instance mappings to cell with uuid " "%s, but all instances have been deleted " "in the cell.") % cell_uuid) print(_("So execute 'nova-manage db archive_deleted_rows' " "to delete the instance mappings.")) return 4 # Delete instance_mappings of the deleted instances for instance_mapping in instance_mappings: instance_mapping.destroy() # Unmap the compute nodes so that they can be discovered # again in future, if needed. for node in nodes: node.mapped = 0 node.save() # Delete hosts mapped to the cell. for host_mapping in host_mappings: host_mapping.destroy() # There are no hosts or instances mapped to the cell so delete it. cell_mapping.destroy() return 0 @args('--cell_uuid', metavar='', dest='cell_uuid', required=True, help=_('The uuid of the cell to update.')) @args('--name', metavar='', dest='name', help=_('Set the cell name.')) @args('--transport-url', metavar='', dest='transport_url', help=_('Set the cell transport_url. NOTE that running nodes ' 'will not see the change until restart!')) @args('--database_connection', metavar='', dest='db_connection', help=_('Set the cell database_connection. NOTE that running nodes ' 'will not see the change until restart!')) @args('--disable', action='store_true', dest='disable', help=_('Disables the cell. Note that the scheduling will be blocked ' 'to this cell until its enabled and followed by a SIGHUP of ' 'nova-scheduler service.')) @args('--enable', action='store_true', dest='enable', help=_('Enables the cell. Note that this makes a disabled cell ' 'available for scheduling after a SIGHUP of the ' 'nova-scheduler service')) def update_cell(self, cell_uuid, name=None, transport_url=None, db_connection=None, disable=False, enable=False): """Updates the properties of a cell by the given uuid. If the cell is not found by uuid, this command will return an exit code of 1. If the provided transport_url or/and database_connection is/are same as another cell, this command will return an exit code of 3. If the properties cannot be set, this will return 2. If an attempt is made to disable and enable a cell at the same time, this command will exit with a return code of 4. If an attempt is made to disable or enable cell0 this command will exit with a return code of 5. Otherwise, the exit code will be 0. NOTE: Updating the transport_url or database_connection fields on a running system will NOT result in all nodes immediately using the new values. Use caution when changing these values. NOTE (tssurya): The scheduler will not notice that a cell has been enabled/disabled until it is restarted or sent the SIGHUP signal. 
""" ctxt = context.get_admin_context() try: cell_mapping = objects.CellMapping.get_by_uuid(ctxt, cell_uuid) except exception.CellMappingNotFound: print(_('Cell with uuid %s was not found.') % cell_uuid) return 1 if name: cell_mapping.name = name # Having empty transport_url and db_connection means leaving the # existing values transport_url = self._validate_transport_url( transport_url, warn_about_none=False) db_connection = self._validate_database_connection( db_connection, warn_about_none=False) if (self._non_unique_transport_url_database_connection_checker(ctxt, cell_mapping, transport_url, db_connection)): # We use the return code 3 before 2 to avoid changing the # semantic meanings of return codes. return 3 if transport_url: cell_mapping.transport_url = transport_url if db_connection: cell_mapping.database_connection = db_connection if disable and enable: print(_('Cell cannot be disabled and enabled at the same time.')) return 4 if disable or enable: if cell_mapping.is_cell0(): print(_('Cell0 cannot be disabled.')) return 5 elif disable and not cell_mapping.disabled: cell_mapping.disabled = True elif enable and cell_mapping.disabled: cell_mapping.disabled = False elif disable and cell_mapping.disabled: print(_('Cell %s is already disabled') % cell_uuid) elif enable and not cell_mapping.disabled: print(_('Cell %s is already enabled') % cell_uuid) try: cell_mapping.save() except Exception as e: print(_('Unable to update CellMapping: %s') % e) return 2 return 0 @args('--cell_uuid', metavar='', dest='cell_uuid', help=_('The uuid of the cell.')) def list_hosts(self, cell_uuid=None): """Lists the hosts in one or all v2 cells.""" ctxt = context.get_admin_context() if cell_uuid: # Find the CellMapping given the uuid. try: cell_mapping = objects.CellMapping.get_by_uuid(ctxt, cell_uuid) except exception.CellMappingNotFound: print(_('Cell with uuid %s was not found.') % cell_uuid) return 1 host_mappings = objects.HostMappingList.get_by_cell_id( ctxt, cell_mapping.id) else: host_mappings = objects.HostMappingList.get_all(ctxt) field_names = [_('Cell Name'), _('Cell UUID'), _('Hostname')] t = prettytable.PrettyTable(field_names) for host in sorted(host_mappings, key=lambda _host: _host.host): fields = [host.cell_mapping.name, host.cell_mapping.uuid, host.host] t.add_row(fields) print(t) return 0 @args('--cell_uuid', metavar='', dest='cell_uuid', required=True, help=_('The uuid of the cell.')) @args('--host', metavar='', dest='host', required=True, help=_('The host to delete.')) def delete_host(self, cell_uuid, host): """Delete a host in a cell (host mappings) by the given host name This command will return a non-zero exit code in the following cases. * The cell is not found by uuid. * The host is not found by host name. * The host is not in the cell. * The host has instances. Returns 0 if the host is deleted successfully. NOTE: The scheduler caches host-to-cell mapping information so when deleting a host the scheduler may need to be restarted or sent the SIGHUP signal. """ ctxt = context.get_admin_context() # Find the CellMapping given the uuid. 
try: cell_mapping = objects.CellMapping.get_by_uuid(ctxt, cell_uuid) except exception.CellMappingNotFound: print(_('Cell with uuid %s was not found.') % cell_uuid) return 1 try: host_mapping = objects.HostMapping.get_by_host(ctxt, host) except exception.HostMappingNotFound: print(_('The host %s was not found.') % host) return 2 if host_mapping.cell_mapping.uuid != cell_mapping.uuid: print(_('The host %(host)s was not found ' 'in the cell %(cell_uuid)s.') % {'host': host, 'cell_uuid': cell_uuid}) return 3 with context.target_cell(ctxt, cell_mapping) as cctxt: instances = objects.InstanceList.get_by_host(cctxt, host) try: nodes = objects.ComputeNodeList.get_all_by_host(cctxt, host) except exception.ComputeHostNotFound: nodes = [] if instances: print(_('There are instances on the host %s.') % host) return 4 for node in nodes: node.mapped = 0 node.save() host_mapping.destroy() return 0 class PlacementCommands(object): """Commands for managing placement resources.""" @staticmethod def _get_compute_node_uuid(ctxt, instance, node_cache): """Find the ComputeNode.uuid for the given Instance :param ctxt: cell-targeted nova.context.RequestContext :param instance: the instance to lookup a compute node :param node_cache: dict of Instance.node keys to ComputeNode.uuid values; this cache is updated if a new node is processed. :returns: ComputeNode.uuid for the given instance :raises: nova.exception.ComputeHostNotFound """ if instance.node in node_cache: return node_cache[instance.node] compute_node = objects.ComputeNode.get_by_host_and_nodename( ctxt, instance.host, instance.node) node_uuid = compute_node.uuid node_cache[instance.node] = node_uuid return node_uuid @staticmethod def _get_ports(ctxt, instance, neutron): """Return the ports that are bound to the instance :param ctxt: nova.context.RequestContext :param instance: the instance to return the ports for :param neutron: nova.network.neutron.ClientWrapper to communicate with Neutron :return: a list of neutron port dict objects :raise UnableToQueryPorts: If the neutron list ports query fails. """ try: return neutron.list_ports( ctxt, device_id=instance.uuid, fields=['id', constants.RESOURCE_REQUEST, constants.BINDING_PROFILE] )['ports'] except neutron_client_exc.NeutronClientException as e: raise exception.UnableToQueryPorts( instance_uuid=instance.uuid, error=six.text_type(e)) @staticmethod def _has_request_but_no_allocation(port): request = port.get(constants.RESOURCE_REQUEST) binding_profile = neutron_api.get_binding_profile(port) allocation = binding_profile.get(constants.ALLOCATION) # We are defensive here about 'resources' and 'required' in the # 'resource_request' as neutron API is not clear about those fields # being optional. return (request and request.get('resources') and request.get('required') and not allocation) @staticmethod def _get_rps_in_tree_with_required_traits( ctxt, rp_uuid, required_traits, placement): """Find the RPs that have all the required traits in the given rp tree. :param ctxt: nova.context.RequestContext :param rp_uuid: the RP uuid that will be used to query the tree. :param required_traits: the traits that need to be supported by the returned resource providers. :param placement: nova.scheduler.client.report.SchedulerReportClient to communicate with the Placement service API. :raise PlacementAPIConnectFailure: if placement API cannot be reached :raise ResourceProviderRetrievalFailed: if the resource provider does not exist. 
:raise ResourceProviderTraitRetrievalFailed: if resource provider trait information cannot be read from placement. :return: A list of RP UUIDs that supports every required traits and in the tree for the provider rp_uuid. """ try: rps = placement.get_providers_in_tree(ctxt, rp_uuid) matching_rps = [ rp['uuid'] for rp in rps if set(required_traits).issubset( placement.get_provider_traits(ctxt, rp['uuid']).traits) ] except ks_exc.ClientException: raise exception.PlacementAPIConnectFailure() return matching_rps @staticmethod def _merge_allocations(alloc1, alloc2): """Return a new allocation dict that contains the sum of alloc1 and alloc2. :param alloc1: a dict in the form of { : {'resources': {: amount, : amount}, : {'resources': {: amount}, } :param alloc2: a dict in the same form as alloc1 :return: the merged allocation of alloc1 and alloc2 in the same format """ allocations = collections.defaultdict( lambda: {'resources': collections.defaultdict(int)}) for alloc in [alloc1, alloc2]: for rp_uuid in alloc: for rc, amount in alloc[rp_uuid]['resources'].items(): allocations[rp_uuid]['resources'][rc] += amount return allocations def _get_port_allocation( self, ctxt, node_uuid, port, instance_uuid, placement): """Return the extra allocation the instance needs due to the given port. :param ctxt: nova.context.RequestContext :param node_uuid: the ComputeNode uuid the instance is running on. :param port: the port dict returned from neutron :param instance_uuid: The uuid of the instance the port is bound to :param placement: nova.scheduler.client.report.SchedulerReportClient to communicate with the Placement service API. :raise PlacementAPIConnectFailure: if placement API cannot be reached :raise ResourceProviderRetrievalFailed: compute node resource provider does not exist. :raise ResourceProviderTraitRetrievalFailed: if resource provider trait information cannot be read from placement. :raise MoreThanOneResourceProviderToHealFrom: if it cannot be decided unambiguously which resource provider to heal from. :raise NoResourceProviderToHealFrom: if there is no resource provider found to heal from. :return: A dict of resources keyed by RP uuid to be included in the instance allocation dict. """ matching_rp_uuids = self._get_rps_in_tree_with_required_traits( ctxt, node_uuid, port[constants.RESOURCE_REQUEST]['required'], placement) if len(matching_rp_uuids) > 1: # If there is more than one such RP then it is an ambiguous # situation that we cannot handle here efficiently because that # would require the reimplementation of most of the allocation # candidate query functionality of placement. Also if more # than one such RP exists then selecting the right one might # need extra information from the compute node. For example # which PCI PF the VF is allocated from and which RP represents # that PCI PF in placement. When migration is supported with such # servers then we can ask the admin to migrate these servers # instead to heal their allocation. raise exception.MoreThanOneResourceProviderToHealFrom( rp_uuids=','.join(matching_rp_uuids), port_id=port['id'], instance_uuid=instance_uuid) if len(matching_rp_uuids) == 0: raise exception.NoResourceProviderToHealFrom( port_id=port['id'], instance_uuid=instance_uuid, traits=port[constants.RESOURCE_REQUEST]['required'], node_uuid=node_uuid) # We found one RP that matches the traits. Assume that we can allocate # the resources from it. If there is not enough inventory left on the # RP then the PUT /allocations placement call will detect that. 
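        # Illustrative data flow (hypothetical values, not taken from a real
        # deployment): for a port whose resource_request looks like
        #   {'resources': {'NET_BW_EGR_KILOBIT_PER_SEC': 1000},
        #    'required': ['CUSTOM_PHYSNET_PHYSNET0', 'CUSTOM_VNIC_TYPE_NORMAL']}
        # and exactly one RP in the tree carrying both traits, the dict
        # returned below has the shape
        #   {<matching rp uuid>: {'resources':
        #        {'NET_BW_EGR_KILOBIT_PER_SEC': 1000}}}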
rp_uuid = matching_rp_uuids[0] port_allocation = { rp_uuid: { 'resources': port[constants.RESOURCE_REQUEST]['resources'] } } return port_allocation def _get_port_allocations_to_heal( self, ctxt, instance, node_cache, placement, neutron, output): """Return the needed extra allocation for the ports of the instance. :param ctxt: nova.context.RequestContext :param instance: instance to get the port allocations for :param node_cache: dict of Instance.node keys to ComputeNode.uuid values; this cache is updated if a new node is processed. :param placement: nova.scheduler.client.report.SchedulerReportClient to communicate with the Placement service API. :param neutron: nova.network.neutron.ClientWrapper to communicate with Neutron :param output: function that takes a single message for verbose output :raise UnableToQueryPorts: If the neutron list ports query fails. :raise nova.exception.ComputeHostNotFound: if compute node of the instance not found in the db. :raise PlacementAPIConnectFailure: if placement API cannot be reached :raise ResourceProviderRetrievalFailed: if the resource provider representing the compute node the instance is running on does not exist. :raise ResourceProviderTraitRetrievalFailed: if resource provider trait information cannot be read from placement. :raise MoreThanOneResourceProviderToHealFrom: if it cannot be decided unambiguously which resource provider to heal from. :raise NoResourceProviderToHealFrom: if there is no resource provider found to heal from. :return: A two tuple where the first item is a dict of resources keyed by RP uuid to be included in the instance allocation dict. The second item is a list of port dicts to be updated in Neutron. """ # We need to heal port allocations for ports that have resource_request # but do not have an RP uuid in the binding:profile.allocation field. # We cannot use the instance info_cache to check the binding profile # as this code needs to be able to handle ports that were attached # before nova in stein started updating the allocation key in the # binding:profile. # In theory a port can be assigned to an instance without it being # bound to any host (e.g. in case of shelve offload) but # _heal_allocations_for_instance() already filters out instances that # are not on any host. ports_to_heal = [ port for port in self._get_ports(ctxt, instance, neutron) if self._has_request_but_no_allocation(port)] if not ports_to_heal: # nothing to do, return early return {}, [] node_uuid = self._get_compute_node_uuid( ctxt, instance, node_cache) allocations = {} for port in ports_to_heal: port_allocation = self._get_port_allocation( ctxt, node_uuid, port, instance.uuid, placement) rp_uuid = list(port_allocation)[0] allocations = self._merge_allocations( allocations, port_allocation) # We also need to record the RP we are allocated from in the # port. 
This will be sent back to Neutron before the allocation # is updated in placement binding_profile = neutron_api.get_binding_profile(port) binding_profile[constants.ALLOCATION] = rp_uuid port[constants.BINDING_PROFILE] = binding_profile output(_("Found resource provider %(rp_uuid)s having matching " "traits for port %(port_uuid)s with resource request " "%(request)s attached to instance %(instance_uuid)s") % {"rp_uuid": rp_uuid, "port_uuid": port["id"], "request": port.get(constants.RESOURCE_REQUEST), "instance_uuid": instance.uuid}) return allocations, ports_to_heal def _update_ports(self, neutron, ports_to_update, output): succeeded = [] try: for port in ports_to_update: profile = neutron_api.get_binding_profile(port) body = { 'port': { constants.BINDING_PROFILE: profile } } output( _('Updating port %(port_uuid)s with attributes ' '%(attributes)s') % {'port_uuid': port['id'], 'attributes': body['port']}) neutron.update_port(port['id'], body=body) succeeded.append(port) except neutron_client_exc.NeutronClientException as e: output( _('Updating port %(port_uuid)s failed: %(error)s') % {'port_uuid': port['id'], 'error': six.text_type(e)}) # one of the port updates failed. We need to roll back the updates # that succeeded before self._rollback_port_updates(neutron, succeeded, output) # we failed to heal so we need to stop but we successfully rolled # back the partial updates so the admin can retry the healing. raise exception.UnableToUpdatePorts(error=six.text_type(e)) @staticmethod def _rollback_port_updates(neutron, ports_to_rollback, output): # _update_ports() added the allocation key to these ports, so we need # to remove them during the rollback. manual_rollback_needed = [] last_exc = None for port in ports_to_rollback: profile = neutron_api.get_binding_profile(port) profile.pop(constants.ALLOCATION) body = { 'port': { constants.BINDING_PROFILE: profile } } try: output(_('Rolling back port update for %(port_uuid)s') % {'port_uuid': port['id']}) neutron.update_port(port['id'], body=body) except neutron_client_exc.NeutronClientException as e: output( _('Rolling back update for port %(port_uuid)s failed: ' '%(error)s') % {'port_uuid': port['id'], 'error': six.text_type(e)}) # TODO(gibi): We could implement a retry mechanism with # back off. manual_rollback_needed.append(port['id']) last_exc = e if manual_rollback_needed: # At least one of the port operation failed so we failed to roll # back. There are partial updates in neutron. Human intervention # needed. raise exception.UnableToRollbackPortUpdates( error=six.text_type(last_exc), port_uuids=manual_rollback_needed) def _heal_missing_alloc(self, ctxt, instance, node_cache): node_uuid = self._get_compute_node_uuid( ctxt, instance, node_cache) # Now get the resource allocations for the instance based # on its embedded flavor. 
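        # Sketch of the payload assembled below (resource amounts are
        # hypothetical and come from the instance's embedded flavor):
        #   {'allocations': {<node_uuid>: {'resources':
        #        {'VCPU': 2, 'MEMORY_MB': 4096, 'DISK_GB': 40}}},
        #    'project_id': <instance.project_id>,
        #    'user_id': <instance.user_id>,
        #    'consumer_generation': None}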
resources = scheduler_utils.resources_from_flavor( instance, instance.flavor) payload = { 'allocations': { node_uuid: {'resources': resources}, }, 'project_id': instance.project_id, 'user_id': instance.user_id, 'consumer_generation': None } return payload def _heal_missing_project_and_user_id(self, allocations, instance): allocations['project_id'] = instance.project_id allocations['user_id'] = instance.user_id return allocations def _heal_allocations_for_instance(self, ctxt, instance, node_cache, output, placement, dry_run, heal_port_allocations, neutron): """Checks the given instance to see if it needs allocation healing :param ctxt: cell-targeted nova.context.RequestContext :param instance: the instance to check for allocation healing :param node_cache: dict of Instance.node keys to ComputeNode.uuid values; this cache is updated if a new node is processed. :param output: function that takes a single message for verbose output :param placement: nova.scheduler.client.report.SchedulerReportClient to communicate with the Placement service API. :param dry_run: Process instances and print output but do not commit any changes. :param heal_port_allocations: True if healing port allocation is requested, False otherwise. :param neutron: nova.network.neutron.ClientWrapper to communicate with Neutron :return: True if allocations were created or updated for the instance, None if nothing needed to be done :raises: nova.exception.ComputeHostNotFound if a compute node for a given instance cannot be found :raises: AllocationCreateFailed if unable to create allocations for a given instance against a given compute node resource provider :raises: AllocationUpdateFailed if unable to update allocations for a given instance with consumer project/user information :raise UnableToQueryPorts: If the neutron list ports query fails. :raise PlacementAPIConnectFailure: if placement API cannot be reached :raise ResourceProviderRetrievalFailed: if the resource provider representing the compute node the instance is running on does not exist. :raise ResourceProviderTraitRetrievalFailed: if resource provider trait information cannot be read from placement. :raise MoreThanOneResourceProviderToHealFrom: if it cannot be decided unambiguously which resource provider to heal from. :raise NoResourceProviderToHealFrom: if there is no resource provider found to heal from. :raise UnableToUpdatePorts: if a port update failed in neutron but any partial update was rolled back successfully. :raise UnableToRollbackPortUpdates: if a port update failed in neutron and the rollback of the partial updates also failed. 
""" if instance.task_state is not None: output(_('Instance %(instance)s is undergoing a task ' 'state transition: %(task_state)s') % {'instance': instance.uuid, 'task_state': instance.task_state}) return if instance.node is None: output(_('Instance %s is not on a host.') % instance.uuid) return try: allocations = placement.get_allocs_for_consumer( ctxt, instance.uuid) except (ks_exc.ClientException, exception.ConsumerAllocationRetrievalFailed) as e: raise exception.AllocationUpdateFailed( consumer_uuid=instance.uuid, error=_("Allocation retrieval failed: %s") % e) need_healing = False # Placement response can have an empty {'allocations': {}} in it if # there are no allocations for the instance if not allocations.get('allocations'): # This instance doesn't have allocations need_healing = _CREATE allocations = self._heal_missing_alloc(ctxt, instance, node_cache) if (allocations.get('project_id') != instance.project_id or allocations.get('user_id') != instance.user_id): # We have an instance with allocations but not the correct # project_id/user_id, so we want to update the allocations # and re-put them. We don't use put_allocations here # because we don't want to mess up shared or nested # provider allocations. need_healing = _UPDATE allocations = self._heal_missing_project_and_user_id( allocations, instance) if heal_port_allocations: to_heal = self._get_port_allocations_to_heal( ctxt, instance, node_cache, placement, neutron, output) port_allocations, ports_to_update = to_heal else: port_allocations, ports_to_update = {}, [] if port_allocations: need_healing = need_healing or _UPDATE # Merge in any missing port allocations allocations['allocations'] = self._merge_allocations( allocations['allocations'], port_allocations) if need_healing: if dry_run: # json dump the allocation dict as it contains nested default # dicts that is pretty hard to read in the verbose output alloc = jsonutils.dumps(allocations) if need_healing == _CREATE: output(_('[dry-run] Create allocations for instance ' '%(instance)s: %(allocations)s') % {'instance': instance.uuid, 'allocations': alloc}) elif need_healing == _UPDATE: output(_('[dry-run] Update allocations for instance ' '%(instance)s: %(allocations)s') % {'instance': instance.uuid, 'allocations': alloc}) else: # First update ports in neutron. If any of those operations # fail, then roll back the successful part of it and fail the # healing. We do this first because rolling back the port # updates is more straight-forward than rolling back allocation # changes. self._update_ports(neutron, ports_to_update, output) # Now that neutron update succeeded we can try to update # placement. If it fails we need to rollback every neutron port # update done before. resp = placement.put_allocations(ctxt, instance.uuid, allocations) if resp: if need_healing == _CREATE: output(_('Successfully created allocations for ' 'instance %(instance)s.') % {'instance': instance.uuid}) elif need_healing == _UPDATE: output(_('Successfully updated allocations for ' 'instance %(instance)s.') % {'instance': instance.uuid}) return True else: # Rollback every neutron update. If we succeed to # roll back then it is safe to stop here and let the admin # retry. 
If the rollback fails then # _rollback_port_updates() will raise another exception # that instructs the operator how to clean up manually # before the healing can be retried self._rollback_port_updates( neutron, ports_to_update, output) raise exception.AllocationUpdateFailed( consumer_uuid=instance.uuid, error='') else: output(_('The allocation of instance %s is up-to-date. ' 'Nothing to be healed.') % instance.uuid) return def _heal_instances_in_cell(self, ctxt, max_count, unlimited, output, placement, dry_run, instance_uuid, heal_port_allocations, neutron): """Checks for instances to heal in a given cell. :param ctxt: cell-targeted nova.context.RequestContext :param max_count: batch size (limit per instance query) :param unlimited: True if all instances in the cell should be processed, else False to just process $max_count instances :param output: function that takes a single message for verbose output :param placement: nova.scheduler.client.report.SchedulerReportClient to communicate with the Placement service API. :param dry_run: Process instances and print output but do not commit any changes. :param instance_uuid: UUID of a specific instance to process. :param heal_port_allocations: True if healing port allocation is requested, False otherwise. :param neutron: nova.network.neutron.ClientWrapper to communicate with Neutron :return: Number of instances that had allocations created. :raises: nova.exception.ComputeHostNotFound if a compute node for a given instance cannot be found :raises: AllocationCreateFailed if unable to create allocations for a given instance against a given compute node resource provider :raises: AllocationUpdateFailed if unable to update allocations for a given instance with consumer project/user information :raise UnableToQueryPorts: If the neutron list ports query fails. :raise PlacementAPIConnectFailure: if placement API cannot be reached :raise ResourceProviderRetrievalFailed: if the resource provider representing the compute node the instance is running on does not exist. :raise ResourceProviderTraitRetrievalFailed: if resource provider trait information cannot be read from placement. :raise MoreThanOneResourceProviderToHealFrom: if it cannot be decided unambiguously which resource provider to heal from. :raise NoResourceProviderToHealFrom: if there is no resource provider found to heal from. :raise UnableToUpdatePorts: if a port update failed in neutron but any partial update was rolled back successfully. :raise UnableToRollbackPortUpdates: if a port update failed in neutron and the rollback of the partial updates also failed. """ # Keep a cache of instance.node to compute node resource provider UUID. # This will save some queries for non-ironic instances to the # compute_nodes table. node_cache = {} # Track the total number of instances that have allocations created # for them in this cell. We return when num_processed equals max_count # and unlimited=False, or we exhaust the number of instances to process # in this cell. num_processed = 0 # Get all instances from this cell which have a host and are not # undergoing a task state transition. Go from oldest to newest. # NOTE(mriedem): Unfortunately we don't have a marker to use # between runs where the user is specifying --max-count. # TODO(mriedem): Store a marker in system_metadata so we can # automatically pick up where we left off without the user having # to pass it in (if unlimited is False).
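        # The paging below is keyset-style: each batch is fetched ordered by
        # created_at with limit=max_count, and the uuid of the last instance
        # in a batch becomes the marker for the next query, until an empty
        # page is returned (sketch only; this mirrors the loop that follows).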
filters = {'deleted': False} if instance_uuid: filters['uuid'] = instance_uuid instances = objects.InstanceList.get_by_filters( ctxt, filters=filters, sort_key='created_at', sort_dir='asc', limit=max_count, expected_attrs=['flavor']) while instances: output(_('Found %s candidate instances.') % len(instances)) # For each instance in this list, we need to see if it has # allocations in placement and if so, assume it's correct and # continue. for instance in instances: if self._heal_allocations_for_instance( ctxt, instance, node_cache, output, placement, dry_run, heal_port_allocations, neutron): num_processed += 1 # Make sure we don't go over the max count. Note that we # don't include instances that already have allocations in the # max_count number, only the number of instances that have # successfully created allocations. # If a specific instance was requested we return here as well. if (not unlimited and num_processed == max_count) or instance_uuid: return num_processed # Use a marker to get the next page of instances in this cell. # Note that InstanceList doesn't support slice notation. marker = instances[len(instances) - 1].uuid instances = objects.InstanceList.get_by_filters( ctxt, filters=filters, sort_key='created_at', sort_dir='asc', limit=max_count, marker=marker, expected_attrs=['flavor']) return num_processed @action_description( _("Iterates over non-cell0 cells looking for instances which do " "not have allocations in the Placement service, or have incomplete " "consumer project_id/user_id values in existing allocations or " "missing allocations for ports having resource request, and " "which are not undergoing a task state transition. For each " "instance found, allocations are created (or updated) against the " "compute node resource provider for that instance based on the " "flavor associated with the instance. This command requires that " "the [api_database]/connection and [placement] configuration " "options are set.")) @args('--max-count', metavar='', dest='max_count', help='Maximum number of instances to process. If not specified, all ' 'instances in each cell will be mapped in batches of 50. ' 'If you have a large number of instances, consider specifying ' 'a custom value and run the command until it exits with ' '0 or 4.') @args('--verbose', action='store_true', dest='verbose', default=False, help='Provide verbose output during execution.') @args('--dry-run', action='store_true', dest='dry_run', default=False, help='Runs the command and prints output but does not commit any ' 'changes. The return code should be 4.') @args('--instance', metavar='', dest='instance_uuid', help='UUID of a specific instance to process. If specified ' '--max-count has no effect. ' 'The --cell and --instance options are mutually exclusive.') @args('--skip-port-allocations', action='store_true', dest='skip_port_allocations', default=False, help='Skip the healing of the resource allocations of bound ports. ' 'E.g. healing bandwidth resource allocation for ports having ' 'minimum QoS policy rules attached. If your deployment does ' 'not use such a feature then the performance impact of ' 'querying neutron ports for each instance can be avoided with ' 'this flag.') @args('--cell', metavar='', dest='cell_uuid', help='Heal allocations within a specific cell. 
' 'The --cell and --instance options are mutually exclusive.') def heal_allocations(self, max_count=None, verbose=False, dry_run=False, instance_uuid=None, skip_port_allocations=False, cell_uuid=None): """Heals instance allocations in the Placement service Return codes: * 0: Command completed successfully and allocations were created. * 1: --max-count was reached and there are more instances to process. * 2: Unable to find a compute node record for a given instance. * 3: Unable to create (or update) allocations for an instance against its compute node resource provider. * 4: Command completed successfully but no allocations were created. * 5: Unable to query ports from neutron * 6: Unable to update ports in neutron * 7: Cannot roll back neutron port updates. Manual steps needed. * 127: Invalid input. """ # NOTE(mriedem): Thoughts on ways to expand this: # - allow filtering on enabled/disabled cells # - add a force option to force allocations for instances which have # task_state is not None (would get complicated during a migration); # for example, this could cleanup ironic instances that have # allocations on VCPU/MEMORY_MB/DISK_GB but are now using a custom # resource class # - add an option to overwrite allocations for instances which already # have allocations (but the operator thinks might be wrong?); this # would probably only be safe with a specific instance. # - deal with nested resource providers? heal_port_allocations = not skip_port_allocations output = lambda msg: None if verbose: output = lambda msg: print(msg) # If user has provided both cell and instance # Throw an error if instance_uuid and cell_uuid: print(_('The --cell and --instance options ' 'are mutually exclusive.')) return 127 # TODO(mriedem): Rather than --max-count being both a total and batch # count, should we have separate options to be specific, i.e. --total # and --batch-size? Then --batch-size defaults to 50 and --total # defaults to None to mean unlimited. if instance_uuid: max_count = 1 unlimited = False elif max_count is not None: try: max_count = int(max_count) except ValueError: max_count = -1 unlimited = False if max_count < 1: print(_('Must supply a positive integer for --max-count.')) return 127 else: max_count = 50 unlimited = True output(_('Running batches of %i until complete') % max_count) ctxt = context.get_admin_context() # If we are going to process a specific instance, just get the cell # it is in up front. if instance_uuid: try: im = objects.InstanceMapping.get_by_instance_uuid( ctxt, instance_uuid) cells = objects.CellMappingList(objects=[im.cell_mapping]) except exception.InstanceMappingNotFound: print('Unable to find cell for instance %s, is it mapped? Try ' 'running "nova-manage cell_v2 verify_instance" or ' '"nova-manage cell_v2 map_instances".' % instance_uuid) return 127 elif cell_uuid: try: # validate cell_uuid cell = objects.CellMapping.get_by_uuid(ctxt, cell_uuid) # create CellMappingList cells = objects.CellMappingList(objects=[cell]) except exception.CellMappingNotFound: print(_('Cell with uuid %s was not found.') % cell_uuid) return 127 else: cells = objects.CellMappingList.get_all(ctxt) if not cells: output(_('No cells to process.')) return 4 placement = report.SchedulerReportClient() neutron = None if heal_port_allocations: neutron = neutron_api.get_client(ctxt, admin=True) num_processed = 0 # TODO(mriedem): Use context.scatter_gather_skip_cell0. 
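        # Illustrative operator usage (not part of the module itself; run the
        # command repeatedly until it stops returning 1):
        #   nova-manage placement heal_allocations --max-count 50 --verbose
        #   nova-manage placement heal_allocations --instance <instance-uuid> --dry-run
        # An exit code of 0 or 4 means there is nothing left to heal.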
for cell in cells: # Skip cell0 since that is where instances go that do not get # scheduled and hence would not have allocations against a host. if cell.uuid == objects.CellMapping.CELL0_UUID: continue output(_('Looking for instances in cell: %s') % cell.identity) limit_per_cell = max_count if not unlimited: # Adjust the limit for the next cell. For example, if the user # only wants to process a total of 100 instances and we did # 75 in cell1, then we only need 25 more from cell2 and so on. limit_per_cell = max_count - num_processed with context.target_cell(ctxt, cell) as cctxt: try: num_processed += self._heal_instances_in_cell( cctxt, limit_per_cell, unlimited, output, placement, dry_run, instance_uuid, heal_port_allocations, neutron) except exception.ComputeHostNotFound as e: print(e.format_message()) return 2 except (exception.AllocationCreateFailed, exception.AllocationUpdateFailed, exception.NoResourceProviderToHealFrom, exception.MoreThanOneResourceProviderToHealFrom, exception.PlacementAPIConnectFailure, exception.ResourceProviderRetrievalFailed, exception.ResourceProviderTraitRetrievalFailed) as e: print(e.format_message()) return 3 except exception.UnableToQueryPorts as e: print(e.format_message()) return 5 except exception.UnableToUpdatePorts as e: print(e.format_message()) return 6 except exception.UnableToRollbackPortUpdates as e: print(e.format_message()) return 7 # Make sure we don't go over the max count. Note that we # don't include instances that already have allocations in the # max_count number, only the number of instances that have # successfully created allocations. # If a specific instance was provided then we'll just exit # the loop and process it below (either return 4 or 0). if num_processed == max_count and not instance_uuid: output(_('Max count reached. Processed %s instances.') % num_processed) return 1 output(_('Processed %s instances.') % num_processed) if not num_processed: return 4 return 0 @staticmethod def _get_rp_uuid_for_host(ctxt, host): """Finds the resource provider (compute node) UUID for the given host. :param ctxt: cell-targeted nova RequestContext :param host: name of the compute host :returns: The UUID of the resource provider (compute node) for the host :raises: nova.exception.HostMappingNotFound if no host_mappings record is found for the host; indicates "nova-manage cell_v2 discover_hosts" needs to be run on the cell. :raises: nova.exception.ComputeHostNotFound if no compute_nodes record is found in the cell database for the host; indicates the nova-compute service on that host might need to be restarted. :raises: nova.exception.TooManyComputesForHost if there are more than one compute_nodes records in the cell database for the host which is only possible (under normal circumstances) for ironic hosts but ironic hosts are not currently supported with host aggregates so if more than one compute node is found for the host, it is considered an error which the operator will need to resolve manually. """ # Get the host mapping to determine which cell it's in. hm = objects.HostMapping.get_by_host(ctxt, host) # Now get the compute node record for the host from the cell. with context.target_cell(ctxt, hm.cell_mapping) as cctxt: # There should really only be one, since only ironic # hosts can have multiple nodes, and you can't have # ironic hosts in aggregates for that reason. If we # find more than one, it's an error. 
nodes = objects.ComputeNodeList.get_all_by_host( cctxt, host) if len(nodes) > 1: # This shouldn't happen, so we need to bail since we # won't know which node to use. raise exception.TooManyComputesForHost( num_computes=len(nodes), host=host) return nodes[0].uuid @action_description( _("Mirrors compute host aggregates to resource provider aggregates " "in the Placement service. Requires the [api_database] and " "[placement] sections of the nova configuration file to be " "populated.")) @args('--verbose', action='store_true', dest='verbose', default=False, help='Provide verbose output during execution.') # TODO(mriedem): Add an option for the 'remove aggregate' behavior. # We know that we want to mirror hosts aggregate membership to # placement, but regarding removal, what if the operator or some external # tool added the resource provider to an aggregate but there is no matching # host aggregate, e.g. ironic nodes or shared storage provider # relationships? # TODO(mriedem): Probably want an option to pass a specific host instead of # doing all of them. def sync_aggregates(self, verbose=False): """Synchronizes nova host aggregates with resource provider aggregates Adds nodes to missing provider aggregates in Placement. NOTE: Depending on the size of your deployment and the number of compute hosts in aggregates, this command could cause a non-negligible amount of traffic to the placement service and therefore is recommended to be run during maintenance windows. Return codes: * 0: Successful run * 1: A host was found with more than one matching compute node record * 2: An unexpected error occurred while working with the placement API * 3: Failed updating provider aggregates in placement * 4: Host mappings not found for one or more host aggregate members * 5: Compute node records not found for one or more hosts * 6: Resource provider not found by uuid for a given host """ # Start by getting all host aggregates. ctxt = context.get_admin_context() aggregate_api = compute_api.AggregateAPI() placement = aggregate_api.placement_client aggregates = aggregate_api.get_aggregate_list(ctxt) # Now we're going to loop over the existing compute hosts in aggregates # and check to see if their corresponding resource provider, found via # the host's compute node uuid, are in the same aggregate. If not, we # add the resource provider to the aggregate in Placement. output = lambda msg: None if verbose: output = lambda msg: print(msg) output(_('Filling in missing placement aggregates')) # Since hosts can be in more than one aggregate, keep track of the host # to its corresponding resource provider uuid to avoid redundant # lookups. host_to_rp_uuid = {} unmapped_hosts = set() # keep track of any missing host mappings computes_not_found = set() # keep track of missing nodes providers_not_found = {} # map of hostname to missing provider uuid for aggregate in aggregates: output(_('Processing aggregate: %s') % aggregate.name) for host in aggregate.hosts: output(_('Processing host: %s') % host) rp_uuid = host_to_rp_uuid.get(host) if not rp_uuid: try: rp_uuid = self._get_rp_uuid_for_host(ctxt, host) host_to_rp_uuid[host] = rp_uuid except exception.HostMappingNotFound: # Don't fail on this now, we can dump it at the end. unmapped_hosts.add(host) continue except exception.ComputeHostNotFound: # Don't fail on this now, we can dump it at the end. 
computes_not_found.add(host) continue except exception.TooManyComputesForHost as e: # TODO(mriedem): Should we treat this like the other # errors and not fail immediately but dump at the end? print(e.format_message()) return 1 # We've got our compute node record, so now we can ensure that # the matching resource provider, found via compute node uuid, # is in the same aggregate in placement, found via aggregate # uuid. try: placement.aggregate_add_host(ctxt, aggregate.uuid, rp_uuid=rp_uuid) output(_('Successfully added host (%(host)s) and ' 'provider (%(provider)s) to aggregate ' '(%(aggregate)s).') % {'host': host, 'provider': rp_uuid, 'aggregate': aggregate.uuid}) except exception.ResourceProviderNotFound: # The resource provider wasn't found. Store this for later. providers_not_found[host] = rp_uuid except exception.ResourceProviderAggregateRetrievalFailed as e: print(e.message) return 2 except exception.NovaException as e: # The exception message is too generic in this case print(_('Failed updating provider aggregates for ' 'host (%(host)s), provider (%(provider)s) ' 'and aggregate (%(aggregate)s). Error: ' '%(error)s') % {'host': host, 'provider': rp_uuid, 'aggregate': aggregate.uuid, 'error': e.message}) return 3 # Now do our error handling. Note that there is no real priority on # the error code we return. We want to dump all of the issues we hit # so the operator can fix them before re-running the command, but # whether we return 4 or 5 or 6 doesn't matter. return_code = 0 if unmapped_hosts: print(_('The following hosts were found in nova host aggregates ' 'but no host mappings were found in the nova API DB. Run ' '"nova-manage cell_v2 discover_hosts" and then retry. ' 'Missing: %s') % ','.join(unmapped_hosts)) return_code = 4 if computes_not_found: print(_('Unable to find matching compute_nodes record entries in ' 'the cell database for the following hosts; does the ' 'nova-compute service on each host need to be restarted? ' 'Missing: %s') % ','.join(computes_not_found)) return_code = 5 if providers_not_found: print(_('Unable to find matching resource provider record in ' 'placement with uuid for the following hosts: %s. Try ' 'restarting the nova-compute service on each host and ' 'then retry.') % ','.join('(%s=%s)' % (host, providers_not_found[host]) for host in sorted(providers_not_found.keys()))) return_code = 6 return return_code def _get_instances_and_current_migrations(self, ctxt, cn_uuid): if self.cn_uuid_mapping.get(cn_uuid): cell_uuid, cn_host, cn_node = self.cn_uuid_mapping[cn_uuid] else: # We need to find the compute node record from all cells. results = context.scatter_gather_skip_cell0( ctxt, objects.ComputeNode.get_by_uuid, cn_uuid) for result_cell_uuid, result in results.items(): if not context.is_cell_failure_sentinel(result): cn = result cell_uuid = result_cell_uuid break else: return False cn_host, cn_node = (cn.host, cn.hypervisor_hostname) self.cn_uuid_mapping[cn_uuid] = (cell_uuid, cn_host, cn_node) cell_mapping = objects.CellMapping.get_by_uuid(ctxt, cell_uuid) # Get all the active instances from this compute node if self.instances_mapping.get(cn_uuid): inst_uuids = self.instances_mapping[cn_uuid] else: # Get the instance list record from the cell. 
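            # Shapes of the two per-run caches used by this helper (values
            # hypothetical):
            #   self.cn_uuid_mapping[cn_uuid]   -> (cell_uuid, cn_host, cn_node)
            #   self.instances_mapping[cn_uuid] -> [<instance uuid>, ...]
            # Migrations are deliberately not cached because they are
            # transient (see the NOTE below).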
with context.target_cell(ctxt, cell_mapping) as cctxt: instances = objects.InstanceList.get_by_host_and_node( cctxt, cn_host, cn_node, expected_attrs=[]) inst_uuids = [instance.uuid for instance in instances] self.instances_mapping[cn_uuid] = inst_uuids # Get all *active* migrations for this compute node # NOTE(sbauza): Since migrations are transient, it's better to not # cache the results as they could be stale with context.target_cell(ctxt, cell_mapping) as cctxt: migs = objects.MigrationList.get_in_progress_by_host_and_node( cctxt, cn_host, cn_node) mig_uuids = [migration.uuid for migration in migs] return (inst_uuids, mig_uuids) def _delete_allocations_from_consumer(self, ctxt, placement, provider, consumer_uuid, consumer_type): """Deletes allocations from a resource provider with consumer UUID. :param ctxt: nova.context.RequestContext :param placement: nova.scheduler.client.report.SchedulerReportClient to communicate with the Placement service API. :param provider: Resource Provider to look at. :param consumer_uuid: the consumer UUID having allocations. :param consumer_type: the type of consumer, either 'instance' or 'migration' :returns: bool whether the allocations were deleted. """ # We need to be careful and only remove the allocations # against this specific RP or we would delete the # whole instance usage and then it would require some # healing. # TODO(sbauza): Remove this extra check once placement # supports querying allocation delete on both # consumer and resource provider parameters. allocations = placement.get_allocs_for_consumer( ctxt, consumer_uuid) if len(allocations['allocations']) > 1: # This consumer has resources spread among multiple RPs (think # nested or shared for example) # We then need to just update the usage to remove # the orphaned resources on the specific RP del allocations['allocations'][provider['uuid']] try: placement.put_allocations( ctxt, consumer_uuid, allocations) except exception.AllocationUpdateFailed: return False else: try: placement.delete_allocation_for_instance( ctxt, consumer_uuid, consumer_type) except exception.AllocationDeleteFailed: return False return True def _check_orphaned_allocations_for_provider(self, ctxt, placement, output, provider, delete): """Finds orphaned allocations for a specific resource provider. :param ctxt: nova.context.RequestContext :param placement: nova.scheduler.client.report.SchedulerReportClient to communicate with the Placement service API. :param output: function that takes a single message for verbose output :param provider: Resource Provider to look at. :param delete: deletes the found orphaned allocations. :return: a tuple (, ) """ num_processed = 0 faults = 0 # TODO(sbauza): Are we sure we have all Nova RCs ? # FIXME(sbauza): Possibly use consumer types once Placement API # supports them. # NOTE(sbauza): We check allocations having *any* below RC, not having # *all* of them. NOVA_RCS = [orc.VCPU, orc.MEMORY_MB, orc.DISK_GB, orc.VGPU, orc.NET_BW_EGR_KILOBIT_PER_SEC, orc.NET_BW_IGR_KILOBIT_PER_SEC, orc.PCPU, orc.MEM_ENCRYPTION_CONTEXT] # Since the RP can be a child RP, we need to get the root RP as it's # the compute node UUID # NOTE(sbauza): In case Placement doesn't support 1.14 microversion, # that means we don't have nested RPs. # Since we ask for microversion 1.14, all RPs have a root RP UUID. 
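        # Rough shape of a provider record returned by placement at
        # microversion 1.14 (fields abbreviated, values hypothetical):
        #   {'uuid': <rp uuid>, 'name': <rp name>,
        #    'root_provider_uuid': <root/compute node rp uuid>,
        #    'parent_provider_uuid': None or <parent rp uuid>, ...}
        # For a nested RP the root_provider_uuid points at the compute node
        # RP, which is what the code below relies on.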
cn_uuid = provider.get("root_provider_uuid") # Now get all the existing instances and active migrations for this # compute node result = self._get_instances_and_current_migrations(ctxt, cn_uuid) if result is False: # We don't want to hard stop here because the compute service could # have disappear while we could still have orphaned allocations. output(_('The compute node for UUID %s can not be ' 'found') % cn_uuid) inst_uuids, mig_uuids = result or ([], []) try: pallocs = placement.get_allocations_for_resource_provider( ctxt, provider['uuid']) except exception.ResourceProviderAllocationRetrievalFailed: print(_('Not able to find allocations for resource ' 'provider %s.') % provider['uuid']) raise # Verify every allocations for each consumer UUID for consumer_uuid, consumer_resources in six.iteritems( pallocs.allocations): consumer_allocs = consumer_resources['resources'] if any(rc in NOVA_RCS for rc in consumer_allocs): # We reset the consumer type for each allocation consumer_type = None # This is an allocation for Nova resources # We need to guess whether the instance was deleted # or if the instance is currently migrating if not (consumer_uuid in inst_uuids or consumer_uuid in mig_uuids): # By default we suspect the orphaned allocation was for a # migration... consumer_type = 'migration' if not(consumer_uuid in inst_uuids): # ... but if we can't find it either for an instance, # that means it was for this. consumer_type = 'instance' if consumer_type is not None: output(_('Allocations were set against consumer UUID ' '%(consumer_uuid)s but no existing instances or ' 'active migrations are related. ') % {'consumer_uuid': consumer_uuid}) if delete: deleted = self._delete_allocations_from_consumer( ctxt, placement, provider, consumer_uuid, consumer_type) if not deleted: print(_('Not able to delete allocations ' 'for consumer UUID %s') % consumer_uuid) faults += 1 continue output(_('Deleted allocations for consumer UUID ' '%(consumer_uuid)s on Resource Provider ' '%(rp)s: %(allocations)s') % {'consumer_uuid': consumer_uuid, 'rp': provider['uuid'], 'allocations': consumer_allocs}) else: output(_('Allocations for consumer UUID ' '%(consumer_uuid)s on Resource Provider ' '%(rp)s can be deleted: ' '%(allocations)s') % {'consumer_uuid': consumer_uuid, 'rp': provider['uuid'], 'allocations': consumer_allocs}) num_processed += 1 return (num_processed, faults) # TODO(sbauza): Move this to the scheduler report client ? def _get_resource_provider(self, context, placement, uuid): """Returns a single Resource Provider by its UUID. :param context: The nova.context.RequestContext auth context :param placement: nova.scheduler.client.report.SchedulerReportClient to communicate with the Placement service API. :param uuid: A specific Resource Provider UUID :return: the existing resource provider. :raises: keystoneauth1.exceptions.base.ClientException on failure to communicate with the placement API """ resource_providers = self._get_resource_providers(context, placement, uuid=uuid) if not resource_providers: # The endpoint never returns a 404, it rather returns an empty list raise exception.ResourceProviderNotFound(name_or_uuid=uuid) return resource_providers[0] def _get_resource_providers(self, context, placement, **kwargs): """Returns all resource providers regardless of their relationships. :param context: The nova.context.RequestContext auth context :param placement: nova.scheduler.client.report.SchedulerReportClient to communicate with the Placement service API. 
:param kwargs: extra attributes for the query string :return: list of resource providers. :raises: keystoneauth1.exceptions.base.ClientException on failure to communicate with the placement API """ url = '/resource_providers' if 'uuid' in kwargs: url += '?uuid=%s' % kwargs['uuid'] resp = placement.get(url, global_request_id=context.global_id, version='1.14') if resp is None: raise exception.PlacementAPIConnectFailure() data = resp.json() resource_providers = data.get('resource_providers') return resource_providers @action_description( _("Audits orphaned allocations that are no longer corresponding to " "existing instance resources. This command requires that " "the [api_database]/connection and [placement] configuration " "options are set.")) @args('--verbose', action='store_true', dest='verbose', default=False, help='Provide verbose output during execution.') @args('--resource_provider', metavar='', dest='provider_uuid', help='UUID of a specific resource provider to verify.') @args('--delete', action='store_true', dest='delete', default=False, help='Deletes orphaned allocations that were found.') def audit(self, verbose=False, provider_uuid=None, delete=False): """Provides information about orphaned allocations that can be removed Return codes: * 0: Command completed successfully and no orphaned allocations exist. * 1: An unexpected error happened during run. * 3: Orphaned allocations were detected. * 4: Orphaned allocations were detected and deleted. * 127: Invalid input. """ ctxt = context.get_admin_context() output = lambda msg: None if verbose: output = lambda msg: print(msg) placement = report.SchedulerReportClient() # Resets two in-memory dicts for knowing instances per compute node self.cn_uuid_mapping = collections.defaultdict(tuple) self.instances_mapping = collections.defaultdict(list) num_processed = 0 faults = 0 if provider_uuid: try: resource_provider = self._get_resource_provider( ctxt, placement, provider_uuid) except exception.ResourceProviderNotFound: print(_('Resource provider with UUID %s does not exist.') % provider_uuid) return 127 resource_providers = [resource_provider] else: resource_providers = self._get_resource_providers(ctxt, placement) for provider in resource_providers: nb_p, faults = self._check_orphaned_allocations_for_provider( ctxt, placement, output, provider, delete) num_processed += nb_p if faults > 0: print(_('The Resource Provider %s had problems when ' 'deleting allocations. Stopping now. Please fix the ' 'problem by hand and run again.') % provider['uuid']) return 1 if num_processed > 0: suffix = 's.' if num_processed > 1 else '.' 
output(_('Processed %(num)s allocation%(suffix)s') % {'num': num_processed, 'suffix': suffix}) return 4 if delete else 3 return 0 CATEGORIES = { 'api_db': ApiDbCommands, 'cell_v2': CellV2Commands, 'db': DbCommands, 'placement': PlacementCommands } add_command_parsers = functools.partial(cmd_common.add_command_parsers, categories=CATEGORIES) category_opt = cfg.SubCommandOpt('category', title='Command categories', help='Available categories', handler=add_command_parsers) post_mortem_opt = cfg.BoolOpt('post-mortem', default=False, help='Allow post-mortem debugging') def main(): """Parse options and call the appropriate class/method.""" CONF.register_cli_opts([category_opt, post_mortem_opt]) config.parse_args(sys.argv) logging.set_defaults( default_log_levels=logging.get_default_log_levels() + _EXTRA_DEFAULT_LOG_LEVELS) logging.setup(CONF, "nova") objects.register_all() if CONF.category.name == "version": print(version.version_string_with_package()) return 0 if CONF.category.name == "bash-completion": cmd_common.print_bash_completion(CATEGORIES) return 0 try: fn, fn_args, fn_kwargs = cmd_common.get_action_fn() ret = fn(*fn_args, **fn_kwargs) rpc.cleanup() return ret except Exception: if CONF.post_mortem: import pdb pdb.post_mortem() else: print(_("An error has occurred:\n%s") % traceback.format_exc()) return 255 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/cmd/novncproxy.py0000664000175000017500000000270600000000000017100 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Websocket proxy that is compatible with OpenStack Nova noVNC consoles. Leverages websockify.py by Joel Martin """ import sys from nova.cmd import baseproxy import nova.conf from nova.conf import vnc from nova import config from nova.console.securityproxy import rfb CONF = nova.conf.CONF vnc.register_cli_opts(CONF) def main(): # set default web flag option CONF.set_default('web', '/usr/share/novnc') config.parse_args(sys.argv) # TODO(stephenfin): Always enable the security proxy once we support RFB # version 3.3, as used in XenServer. security_proxy = None if CONF.compute_driver != 'xenapi.XenAPIDriver': security_proxy = rfb.RFBSecurityProxy() baseproxy.proxy( host=CONF.vnc.novncproxy_host, port=CONF.vnc.novncproxy_port, security_proxy=security_proxy) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/cmd/policy.py0000664000175000017500000001347200000000000016154 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ CLI interface for nova policy rule commands. """ import functools import os import sys from oslo_config import cfg from nova.cmd import common as cmd_common import nova.conf from nova import config from nova import context as nova_context from nova.db import api as db from nova import exception from nova.i18n import _ from nova import policies from nova import version CONF = nova.conf.CONF cli_opts = [ cfg.ListOpt( 'os-roles', metavar='', default=os.environ.get('OS_ROLES'), help=_('Defaults to env[OS_ROLES].')), cfg.StrOpt( 'os-user-id', metavar='', default=os.environ.get('OS_USER_ID'), help=_('Defaults to env[OS_USER_ID].')), cfg.StrOpt( 'os-tenant-id', metavar='', default=os.environ.get('OS_TENANT_ID'), help=_('Defaults to env[OS_TENANT_ID].')), ] class PolicyCommands(object): """Commands for policy rules.""" _ACCEPTABLE_TARGETS = [ 'project_id', 'user_id', 'quota_class', 'availability_zone', 'instance_id'] @cmd_common.args('--api-name', dest='api_name', metavar='', help='Will return only passing policy rules containing ' 'the given API name.') @cmd_common.args('--target', nargs='+', dest='target', metavar='', help='Will return only passing policy rules for the ' 'given target. The available targets are %s. When ' '"instance_id" is used, the other targets will be ' 'overwritten.' % ','.join(_ACCEPTABLE_TARGETS)) def check(self, api_name=None, target=None): """Prints all passing policy rules for the given user. :param api_name: If None, all passing policy rules will be printed, otherwise, only passing policies that contain the given api_name in their names. :param target: The target against which the policy rule authorization will be tested. If None, the given user will be considered as the target. """ context = self._get_context() api_name = api_name or '' target = self._get_target(context, target) allowed_operations = self._filter_rules(context, api_name, target) if allowed_operations: print('\n'.join(allowed_operations)) return 0 else: print('No rules matched or allowed') return 1 def _get_context(self): return nova_context.RequestContext( roles=CONF.os_roles, user_id=CONF.os_user_id, project_id=CONF.os_tenant_id) def _get_target(self, context, target): """Processes and validates the CLI given target and adapts it for policy authorization. :returns: None if the given target is None, otherwise returns a proper authorization target. :raises nova.exception.InvalidAttribute: if a key in the given target is not an acceptable. :raises nova.exception.InstanceNotFound: if 'instance_id' is given, and there is no instance match the id. """ if not target: return None new_target = {} for t in target: key, value = t.split('=') if key not in self._ACCEPTABLE_TARGETS: raise exception.InvalidAttribute(attr=key) new_target[key] = value # if the target is an instance_id, return an instance instead. 
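# NOTE(sketch): Illustrative example, not part of Nova. It mirrors the
# ``--target key=value`` parsing done by _get_target() above for the policy
# check command; ValueError stands in for nova.exception.InvalidAttribute and
# the sample values are made up.
_ACCEPTABLE = ('project_id', 'user_id', 'quota_class', 'availability_zone',
               'instance_id')


def parse_target(tokens):
    target = {}
    for token in tokens or []:
        key, _, value = token.partition('=')
        if key not in _ACCEPTABLE or not value:
            raise ValueError('unacceptable target attribute: %s' % key)
        target[key] = value
    return target


assert parse_target(['project_id=abc', 'user_id=def']) == {
    'project_id': 'abc', 'user_id': 'def'}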
instance_id = new_target.get('instance_id') if instance_id: admin_ctxt = nova_context.get_admin_context() instance = db.instance_get_by_uuid(admin_ctxt, instance_id) new_target = {'user_id': instance['user_id'], 'project_id': instance['project_id']} return new_target def _filter_rules(self, context, api_name, target): all_rules = policies.list_rules() return [rule.name for rule in all_rules if api_name in rule.name and context.can(rule.name, target, fatal=False)] CATEGORIES = { 'policy': PolicyCommands, } add_command_parsers = functools.partial(cmd_common.add_command_parsers, categories=CATEGORIES) category_opt = cfg.SubCommandOpt('category', title='Command categories', help='Available categories', handler=add_command_parsers) def main(): """Parse options and call the appropriate class/method.""" CONF.register_cli_opts(cli_opts) CONF.register_cli_opt(category_opt) config.parse_args(sys.argv) if CONF.category.name == "version": print(version.version_string_with_package()) return 0 if CONF.category.name == "bash-completion": cmd_common.print_bash_completion(CATEGORIES) return 0 try: fn, fn_args, fn_kwargs = cmd_common.get_action_fn() ret = fn(*fn_args, **fn_kwargs) return ret except Exception as ex: print(_("error: %s") % ex) return 1 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/cmd/scheduler.py0000664000175000017500000000357500000000000016636 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Starter script for Nova Scheduler.""" import sys from oslo_concurrency import processutils from oslo_log import log as logging from oslo_reports import guru_meditation_report as gmr from oslo_reports import opts as gmr_opts import nova.conf from nova import config from nova import objects from nova.scheduler import rpcapi as scheduler_rpcapi from nova import service from nova import version CONF = nova.conf.CONF def main(): config.parse_args(sys.argv) logging.setup(CONF, "nova") objects.register_all() gmr_opts.set_defaults(CONF) objects.Service.enable_min_version_cache() gmr.TextGuruMeditation.setup_autorun(version, conf=CONF) server = service.Service.create(binary='nova-scheduler', topic=scheduler_rpcapi.RPC_TOPIC) # Determine the number of workers; if not specified in config, default # to ncpu for the FilterScheduler and 1 for everything else. workers = CONF.scheduler.workers if not workers: workers = (processutils.get_worker_count() if CONF.scheduler.driver == 'filter_scheduler' else 1) service.serve(server, workers=workers) service.wait() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/cmd/serialproxy.py0000664000175000017500000000222000000000000017223 0ustar00zuulzuul00000000000000# All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Websocket proxy that is compatible with OpenStack Nova Serial consoles. Leverages websockify.py by Joel Martin. Based on nova-novncproxy. """ import sys from nova.cmd import baseproxy import nova.conf from nova.conf import serial_console as serial from nova import config CONF = nova.conf.CONF serial.register_cli_opts(CONF) def main(): # set default web flag option CONF.set_default('web', None) config.parse_args(sys.argv) baseproxy.proxy( host=CONF.serial_console.serialproxy_host, port=CONF.serial_console.serialproxy_port) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/cmd/spicehtml5proxy.py0000664000175000017500000000207100000000000020025 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Websocket proxy that is compatible with OpenStack Nova SPICE HTML5 consoles. Leverages websockify.py by Joel Martin """ import sys from nova.cmd import baseproxy import nova.conf from nova.conf import spice from nova import config CONF = nova.conf.CONF spice.register_cli_opts(CONF) def main(): config.parse_args(sys.argv) baseproxy.proxy( host=CONF.spice.html5proxy_host, port=CONF.spice.html5proxy_port) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/cmd/status.py0000664000175000017500000005503400000000000016200 0ustar00zuulzuul00000000000000# Copyright 2016 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ CLI interface for nova status commands. 
""" from __future__ import print_function import collections import functools import sys import traceback from keystoneauth1 import exceptions as ks_exc from oslo_config import cfg from oslo_serialization import jsonutils from oslo_upgradecheck import upgradecheck import pkg_resources import six from sqlalchemy import func as sqlfunc from sqlalchemy import MetaData, Table, and_, select from nova.cmd import common as cmd_common import nova.conf from nova import config from nova import context as nova_context from nova.db.sqlalchemy import api as db_session from nova import exception from nova.i18n import _ from nova.objects import cell_mapping as cell_mapping_obj from nova import policy from nova import utils from nova import version from nova.volume import cinder CONF = nova.conf.CONF # NOTE(efried): 1.35 is required by nova-scheduler to support the root_required # queryparam to make GET /allocation_candidates require that a trait be present # on the root provider, irrespective of how the request groups are specified. # NOTE: If you bump this version, remember to update the history # section in the nova-status man page (doc/source/cli/nova-status). MIN_PLACEMENT_MICROVERSION = "1.35" # NOTE(mriedem): 3.44 is needed to work with volume attachment records which # are required for supporting multi-attach capable volumes. MIN_CINDER_MICROVERSION = '3.44' class UpgradeCommands(upgradecheck.UpgradeCommands): """Commands related to upgrades. The subcommands here must not rely on the nova object model since they should be able to run on n-1 data. Any queries to the database should be done through the sqlalchemy query language directly like the database schema migrations. """ def _count_compute_nodes(self, context=None): """Returns the number of compute nodes in the cell database.""" # NOTE(mriedem): This does not filter based on the service status # because a disabled nova-compute service could still be reporting # inventory info to the placement service. There could be an outside # chance that there are compute node records in the database for # disabled nova-compute services that aren't yet upgraded to Ocata or # the nova-compute service was deleted and the service isn't actually # running on the compute host but the operator hasn't cleaned up the # compute_nodes entry in the database yet. We consider those edge cases # here and the worst case scenario is we give a warning that there are # more compute nodes than resource providers. We can tighten this up # later if needed, for example by not including compute nodes that # don't have a corresponding nova-compute service in the services # table, or by only counting compute nodes with a service version of at # least 15 which was the highest service version when Newton was # released. meta = MetaData(bind=db_session.get_engine(context=context)) compute_nodes = Table('compute_nodes', meta, autoload=True) return select([sqlfunc.count()]).select_from(compute_nodes).where( compute_nodes.c.deleted == 0).scalar() def _check_cellsv2(self): """Checks to see if cells v2 has been setup. These are the same checks performed in the 030_require_cell_setup API DB migration except we expect this to be run AFTER the nova-manage cell_v2 simple_cell_setup command, which would create the cell and host mappings and sync the cell0 database schema, so we don't check for flavors at all because you could create those after doing this on an initial install. This also has to be careful about checking for compute nodes if there are no host mappings on a fresh install. 
""" meta = MetaData() meta.bind = db_session.get_api_engine() cell_mappings = self._get_cell_mappings() count = len(cell_mappings) # Two mappings are required at a minimum, cell0 and your first cell if count < 2: msg = _('There needs to be at least two cell mappings, one for ' 'cell0 and one for your first cell. Run command ' '\'nova-manage cell_v2 simple_cell_setup\' and then ' 'retry.') return upgradecheck.Result(upgradecheck.Code.FAILURE, msg) cell0 = any(mapping.is_cell0() for mapping in cell_mappings) if not cell0: msg = _('No cell0 mapping found. Run command ' '\'nova-manage cell_v2 simple_cell_setup\' and then ' 'retry.') return upgradecheck.Result(upgradecheck.Code.FAILURE, msg) host_mappings = Table('host_mappings', meta, autoload=True) count = select([sqlfunc.count()]).select_from(host_mappings).scalar() if count == 0: # This may be a fresh install in which case there may not be any # compute_nodes in the cell database if the nova-compute service # hasn't started yet to create those records. So let's query the # cell database for compute_nodes records and if we find at least # one it's a failure. num_computes = self._count_compute_nodes() if num_computes > 0: msg = _('No host mappings found but there are compute nodes. ' 'Run command \'nova-manage cell_v2 ' 'simple_cell_setup\' and then retry.') return upgradecheck.Result(upgradecheck.Code.FAILURE, msg) msg = _('No host mappings or compute nodes were found. Remember ' 'to run command \'nova-manage cell_v2 discover_hosts\' ' 'when new compute hosts are deployed.') return upgradecheck.Result(upgradecheck.Code.SUCCESS, msg) return upgradecheck.Result(upgradecheck.Code.SUCCESS) @staticmethod def _placement_get(path): """Do an HTTP get call against placement engine. This is in a dedicated method to make it easier for unit testing purposes. """ client = utils.get_ksa_adapter('placement') return client.get(path, raise_exc=True).json() def _check_placement(self): """Checks to see if the placement API is ready for scheduling. Checks to see that the placement API service is registered in the service catalog and that we can make requests against it. """ try: # TODO(efried): Use ksa's version filtering in _placement_get versions = self._placement_get("/") max_version = pkg_resources.parse_version( versions["versions"][0]["max_version"]) needs_version = pkg_resources.parse_version( MIN_PLACEMENT_MICROVERSION) if max_version < needs_version: msg = (_('Placement API version %(needed)s needed, ' 'you have %(current)s.') % {'needed': needs_version, 'current': max_version}) return upgradecheck.Result(upgradecheck.Code.FAILURE, msg) except ks_exc.MissingAuthPlugin: msg = _('No credentials specified for placement API in nova.conf.') return upgradecheck.Result(upgradecheck.Code.FAILURE, msg) except ks_exc.Unauthorized: msg = _('Placement service credentials do not work.') return upgradecheck.Result(upgradecheck.Code.FAILURE, msg) except ks_exc.EndpointNotFound: msg = _('Placement API endpoint not found.') return upgradecheck.Result(upgradecheck.Code.FAILURE, msg) except ks_exc.DiscoveryFailure: msg = _('Discovery for placement API URI failed.') return upgradecheck.Result(upgradecheck.Code.FAILURE, msg) except ks_exc.NotFound: msg = _('Placement API does not seem to be running.') return upgradecheck.Result(upgradecheck.Code.FAILURE, msg) return upgradecheck.Result(upgradecheck.Code.SUCCESS) @staticmethod def _get_non_cell0_mappings(): """Queries the API database for non-cell0 cell mappings. 
:returns: list of nova.objects.CellMapping objects """ return UpgradeCommands._get_cell_mappings(include_cell0=False) @staticmethod def _get_cell_mappings(include_cell0=True): """Queries the API database for cell mappings. .. note:: This method is unique in that it queries the database using CellMappingList.get_all() rather than a direct query using the sqlalchemy models. This is because CellMapping.database_connection can be a template and the object takes care of formatting the URL. We cannot use RowProxy objects from sqlalchemy because we cannot set the formatted 'database_connection' value back on those objects (they are read-only). :param include_cell0: True if cell0 should be returned, False if cell0 should be excluded from the results. :returns: list of nova.objects.CellMapping objects """ ctxt = nova_context.get_admin_context() cell_mappings = cell_mapping_obj.CellMappingList.get_all(ctxt) if not include_cell0: # Since CellMappingList does not implement remove() we have to # create a new list and exclude cell0. mappings = [mapping for mapping in cell_mappings if not mapping.is_cell0()] cell_mappings = cell_mapping_obj.CellMappingList(objects=mappings) return cell_mappings @staticmethod def _is_ironic_instance_migrated(extras, inst): extra = (extras.select().where(extras.c.instance_uuid == inst['uuid'] ).execute().first()) # Pull the flavor and deserialize it. Note that the flavor info for an # instance is a dict keyed by "cur", "old", "new" and we want the # current flavor. flavor = jsonutils.loads(extra['flavor'])['cur']['nova_object.data'] # Do we have a custom resource flavor extra spec? specs = flavor['extra_specs'] if 'extra_specs' in flavor else {} for spec_key in specs: if spec_key.startswith('resources:CUSTOM_'): # We found a match so this instance is good. return True return False def _check_ironic_flavor_migration(self): """In Pike, ironic instances and flavors need to be migrated to use custom resource classes. In ironic, the node.resource_class should be set to some custom resource class value which should match a "resources:" flavor extra spec on baremetal flavors. Existing ironic instances will have their embedded instance.flavor.extra_specs migrated to use the matching ironic node.resource_class value in the nova-compute service, or they can be forcefully migrated using "nova-manage db ironic_flavor_migration". In this check, we look for all ironic compute nodes in all non-cell0 cells, and from those ironic compute nodes, we look for an instance that has a "resources:CUSTOM_*" key in it's embedded flavor extra specs. """ cell_mappings = self._get_non_cell0_mappings() ctxt = nova_context.get_admin_context() # dict of cell identifier (name or uuid) to number of unmigrated # instances unmigrated_instance_count_by_cell = collections.defaultdict(int) for cell_mapping in cell_mappings: with nova_context.target_cell(ctxt, cell_mapping) as cctxt: # Get the (non-deleted) ironic compute nodes in this cell. meta = MetaData(bind=db_session.get_engine(context=cctxt)) compute_nodes = Table('compute_nodes', meta, autoload=True) ironic_nodes = ( compute_nodes.select().where(and_( compute_nodes.c.hypervisor_type == 'ironic', compute_nodes.c.deleted == 0 )).execute().fetchall()) if ironic_nodes: # We have ironic nodes in this cell, let's iterate over # them looking for instances. 
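# NOTE(sketch): Illustrative example, not part of Nova. It isolates the
# extra-spec test applied by _is_ironic_instance_migrated() above: an ironic
# instance counts as migrated once its embedded flavor carries a
# "resources:CUSTOM_*" key. The flavor dicts are hand-made examples of the
# "cur" flavor structure described above.
def flavor_uses_custom_resource_class(flavor):
    specs = flavor.get('extra_specs') or {}
    return any(key.startswith('resources:CUSTOM_') for key in specs)


assert flavor_uses_custom_resource_class(
    {'extra_specs': {'resources:CUSTOM_BAREMETAL_GOLD': '1'}})
assert not flavor_uses_custom_resource_class(
    {'extra_specs': {'hw:cpu_policy': 'dedicated'}})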
instances = Table('instances', meta, autoload=True) extras = Table('instance_extra', meta, autoload=True) for node in ironic_nodes: nodename = node['hypervisor_hostname'] # Get any (non-deleted) instances for this node. ironic_instances = ( instances.select().where(and_( instances.c.node == nodename, instances.c.deleted == 0 )).execute().fetchall()) # Get the instance_extras for each instance so we can # find the flavors. for inst in ironic_instances: if not self._is_ironic_instance_migrated( extras, inst): # We didn't find the extra spec key for this # instance so increment the number of # unmigrated instances in this cell. unmigrated_instance_count_by_cell[ cell_mapping.uuid] += 1 if not cell_mappings: # There are no non-cell0 mappings so we can't determine this, just # return a warning. The cellsv2 check would have already failed # on this. msg = (_('Unable to determine ironic flavor migration without ' 'cell mappings.')) return upgradecheck.Result(upgradecheck.Code.WARNING, msg) if unmigrated_instance_count_by_cell: # There are unmigrated ironic instances, so we need to fail. msg = (_('There are (cell=x) number of unmigrated instances in ' 'each cell: %s. Run \'nova-manage db ' 'ironic_flavor_migration\' on each cell.') % ' '.join('(%s=%s)' % ( cell_id, unmigrated_instance_count_by_cell[cell_id]) for cell_id in sorted(unmigrated_instance_count_by_cell.keys()))) return upgradecheck.Result(upgradecheck.Code.FAILURE, msg) # Either there were no ironic compute nodes or all instances for # those nodes are already migrated, so there is nothing to do. return upgradecheck.Result(upgradecheck.Code.SUCCESS) def _check_cinder(self): """Checks to see that the cinder API is available at a given minimum microversion. """ # Check to see if nova is even configured for Cinder yet (fresh install # or maybe not using Cinder at all). if CONF.cinder.auth_type is None: return upgradecheck.Result(upgradecheck.Code.SUCCESS) try: # TODO(mriedem): Eventually use get_ksa_adapter here when it # supports cinder. cinder.is_microversion_supported( nova_context.get_admin_context(), MIN_CINDER_MICROVERSION) except exception.CinderAPIVersionNotAvailable: return upgradecheck.Result( upgradecheck.Code.FAILURE, _('Cinder API %s or greater is required. Deploy at least ' 'Cinder 12.0.0 (Queens).') % MIN_CINDER_MICROVERSION) except Exception as ex: # Anything else trying to connect, like bad config, is out of our # hands so just return a warning. return upgradecheck.Result( upgradecheck.Code.WARNING, _('Unable to determine Cinder API version due to error: %s') % six.text_type(ex)) return upgradecheck.Result(upgradecheck.Code.SUCCESS) def _check_policy(self): """Checks to see if policy file is overwritten with the new defaults. """ msg = _("Your policy file contains rules which examine token scope, " "which may be due to generation with the new defaults. " "If that is done intentionally to migrate to the new rule " "format, then you are required to enable the flag " "'oslo_policy.enforce_scope=True' and educate end users on " "how to request scoped tokens from Keystone. Another easy " "and recommended way for you to achieve the same is via two " "flags, 'oslo_policy.enforce_scope=True' and " "'oslo_policy.enforce_new_defaults=True' and avoid " "overwriting the file. Please refer to this document to " "know the complete migration steps: " "https://docs.openstack.org/nova/latest/configuration" "/policy-concepts.html. 
If you did not intend to migrate " "to new defaults in this upgrade, then with your current " "policy file the scope checking rule will fail. A possible " "reason for such a policy file is that you generated it with " "'oslopolicy-sample-generator' in json format. " "Three ways to fix this until you are ready to migrate to " "scoped policies: 1. Generate the policy file with " "'oslopolicy-sample-generator' in yaml format, keep " "the generated content commented out, and update " "the generated policy.yaml location in " "``oslo_policy.policy_file``. " "2. Use a pre-existing sample config file from the Train " "release. 3. Use an empty or non-existent file to take all " "the defaults.") rule = "system_admin_api" rule_new_default = "role:admin and system_scope:all" status = upgradecheck.Result(upgradecheck.Code.SUCCESS) # NOTE(gmann): Initialise the policy if it not initialized. # We need policy enforcer with all the rules loaded to check # their value with defaults. try: if policy._ENFORCER is None: policy.init(suppress_deprecation_warnings=True) # For safer side, recheck that the enforcer is available before # upgrade checks. If something is wrong on oslo side and enforcer # is still not available the return warning to avoid any false # result. if policy._ENFORCER is not None: current_rule = str(policy._ENFORCER.rules[rule]).strip("()") if (current_rule == rule_new_default and not CONF.oslo_policy.enforce_scope): status = upgradecheck.Result(upgradecheck.Code.WARNING, msg) else: status = upgradecheck.Result( upgradecheck.Code.WARNING, _('Policy is not initialized to check the policy rules')) except Exception as ex: status = upgradecheck.Result( upgradecheck.Code.WARNING, _('Unable to perform policy checks due to error: %s') % six.text_type(ex)) # reset the policy state so that it can be initialized from fresh if # operator changes policy file after running this upgrade checks. policy.reset() return status def _check_old_computes(self): # warn if there are computes in the system older than the previous # major release try: utils.raise_if_old_compute() except exception.TooOldComputeService as e: return upgradecheck.Result( upgradecheck.Code.WARNING, six.text_type(e)) return upgradecheck.Result(upgradecheck.Code.SUCCESS) # The format of the check functions is to return an upgradecheck.Result # object with the appropriate upgradecheck.Code and details set. If the # check hits warnings or failures then those should be stored in the # returned upgradecheck.Result's "details" attribute. The summary will # be rolled up at the end of the check() function. These functions are # intended to be run in order and build on top of each other so order # matters. 
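# NOTE(sketch): Illustrative example, not part of Nova. It shows the minimal
# shape of an oslo.upgradecheck-based command, mirroring the Result/Code
# pattern and the ordered ``_upgrade_checks`` tuple used by the checks above.
# The check itself is a made-up placeholder.
from oslo_upgradecheck import upgradecheck


class ExampleUpgradeCommands(upgradecheck.UpgradeCommands):

    def _check_example(self):
        # A real check would inspect configuration or database state here and
        # return Code.FAILURE (blocks the upgrade) or Code.WARNING with
        # details set; SUCCESS means nothing needs operator attention.
        return upgradecheck.Result(upgradecheck.Code.SUCCESS)

    # check() walks this tuple in order and rolls the results up into a
    # summary, which is why the ordering of the entries matters.
    _upgrade_checks = (
        ('Example', _check_example),
    )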
_upgrade_checks = ( # Added in Ocata (_('Cells v2'), _check_cellsv2), # Added in Ocata (_('Placement API'), _check_placement), # Added in Rocky (but also useful going back to Pike) (_('Ironic Flavor Migration'), _check_ironic_flavor_migration), # Added in Train (_('Cinder API'), _check_cinder), # Added in Ussuri (_('Policy Scope-based Defaults'), _check_policy), # Backported from Wallaby (_('Older than N-1 computes'), _check_old_computes) ) CATEGORIES = { 'upgrade': UpgradeCommands, } add_command_parsers = functools.partial(cmd_common.add_command_parsers, categories=CATEGORIES) category_opt = cfg.SubCommandOpt('category', title='Command categories', help='Available categories', handler=add_command_parsers) def main(): """Parse options and call the appropriate class/method.""" CONF.register_cli_opt(category_opt) config.parse_args(sys.argv) if CONF.category.name == "version": print(version.version_string_with_package()) return 0 if CONF.category.name == "bash-completion": cmd_common.print_bash_completion(CATEGORIES) return 0 try: fn, fn_args, fn_kwargs = cmd_common.get_action_fn() ret = fn(*fn_args, **fn_kwargs) return ret except Exception: print(_('Error:\n%s') % traceback.format_exc()) # This is 255 so it's not confused with the upgrade check exit codes. return 255 ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2984703 nova-21.2.4/nova/compute/0000775000175000017500000000000000000000000015205 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/compute/__init__.py0000664000175000017500000000000000000000000017304 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/compute/api.py0000664000175000017500000112404500000000000016337 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2011 Piston Cloud Computing, Inc. # Copyright 2012-2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Handles all requests relating to compute resources (e.g. 
guest VMs, networking and storage of VMs, and compute hosts on which they run).""" import collections import functools import re import string from castellan import key_manager import os_traits from oslo_log import log as logging from oslo_messaging import exceptions as oslo_exceptions from oslo_serialization import base64 as base64utils from oslo_utils import excutils from oslo_utils import strutils from oslo_utils import timeutils from oslo_utils import units from oslo_utils import uuidutils import six from six.moves import range from nova.accelerator import cyborg from nova import availability_zones from nova import block_device from nova.compute import flavors from nova.compute import instance_actions from nova.compute import instance_list from nova.compute import migration_list from nova.compute import power_state from nova.compute import rpcapi as compute_rpcapi from nova.compute import task_states from nova.compute import utils as compute_utils from nova.compute.utils import wrap_instance_event from nova.compute import vm_states from nova import conductor import nova.conf from nova import context as nova_context from nova import crypto from nova.db import base from nova.db.sqlalchemy import api as db_api from nova import exception from nova import exception_wrapper from nova import hooks from nova.i18n import _ from nova.image import glance from nova.network import constants from nova.network import model as network_model from nova.network import neutron from nova.network import security_group_api from nova import objects from nova.objects import block_device as block_device_obj from nova.objects import external_event as external_event_obj from nova.objects import fields as fields_obj from nova.objects import image_meta as image_meta_obj from nova.objects import keypair as keypair_obj from nova.objects import quotas as quotas_obj from nova.pci import request as pci_request from nova.policies import servers as servers_policies import nova.policy from nova import profiler from nova import rpc from nova.scheduler.client import query from nova.scheduler.client import report from nova.scheduler import utils as scheduler_utils from nova import servicegroup from nova import utils from nova.virt import hardware from nova.volume import cinder LOG = logging.getLogger(__name__) get_notifier = functools.partial(rpc.get_notifier, service='compute') # NOTE(gibi): legacy notification used compute as a service but these # calls still run on the client side of the compute service which is # nova-api. By setting the binary to nova-api below, we can make sure # that the new versioned notifications has the right publisher_id but the # legacy notifications does not change. wrap_exception = functools.partial(exception_wrapper.wrap_exception, get_notifier=get_notifier, binary='nova-api') CONF = nova.conf.CONF AGGREGATE_ACTION_UPDATE = 'Update' AGGREGATE_ACTION_UPDATE_META = 'UpdateMeta' AGGREGATE_ACTION_DELETE = 'Delete' AGGREGATE_ACTION_ADD = 'Add' MIN_COMPUTE_SYNC_COMPUTE_STATUS_DISABLED = 38 MIN_COMPUTE_CROSS_CELL_RESIZE = 47 MIN_COMPUTE_SAME_HOST_COLD_MIGRATE = 48 # FIXME(danms): Keep a global cache of the cells we find the # first time we look. This needs to be refreshed on a timer or # trigger. CELLS = [] def check_instance_state(vm_state=None, task_state=(None,), must_have_launched=True): """Decorator to check VM and/or task state before entry to API functions. If the instance is in the wrong state, or has not been successfully started at least once the wrapper will raise an exception. 
""" if vm_state is not None and not isinstance(vm_state, set): vm_state = set(vm_state) if task_state is not None and not isinstance(task_state, set): task_state = set(task_state) def outer(f): @six.wraps(f) def inner(self, context, instance, *args, **kw): if vm_state is not None and instance.vm_state not in vm_state: raise exception.InstanceInvalidState( attr='vm_state', instance_uuid=instance.uuid, state=instance.vm_state, method=f.__name__) if (task_state is not None and instance.task_state not in task_state): raise exception.InstanceInvalidState( attr='task_state', instance_uuid=instance.uuid, state=instance.task_state, method=f.__name__) if must_have_launched and not instance.launched_at: raise exception.InstanceInvalidState( attr='launched_at', instance_uuid=instance.uuid, state=instance.launched_at, method=f.__name__) return f(self, context, instance, *args, **kw) return inner return outer def _set_or_none(q): return q if q is None or isinstance(q, set) else set(q) def reject_instance_state(vm_state=None, task_state=None): """Decorator. Raise InstanceInvalidState if instance is in any of the given states. """ vm_state = _set_or_none(vm_state) task_state = _set_or_none(task_state) def outer(f): @six.wraps(f) def inner(self, context, instance, *args, **kw): _InstanceInvalidState = functools.partial( exception.InstanceInvalidState, instance_uuid=instance.uuid, method=f.__name__) if vm_state is not None and instance.vm_state in vm_state: raise _InstanceInvalidState( attr='vm_state', state=instance.vm_state) if task_state is not None and instance.task_state in task_state: raise _InstanceInvalidState( attr='task_state', state=instance.task_state) return f(self, context, instance, *args, **kw) return inner return outer def check_instance_host(check_is_up=False): """Validate the instance.host before performing the operation. At a minimum this method will check that the instance.host is set. :param check_is_up: If True, check that the instance.host status is UP or MAINTENANCE (disabled but not down). :raises: InstanceNotReady if the instance.host is not set :raises: ServiceUnavailable if check_is_up=True and the instance.host compute service status is not UP or MAINTENANCE """ def outer(function): @six.wraps(function) def wrapped(self, context, instance, *args, **kwargs): if not instance.host: raise exception.InstanceNotReady(instance_id=instance.uuid) if check_is_up: # Make sure the source compute service is not down otherwise we # cannot proceed. host_status = self.get_instance_host_status(instance) if host_status not in (fields_obj.HostStatus.UP, fields_obj.HostStatus.MAINTENANCE): # ComputeServiceUnavailable would make more sense here but # we do not want to leak hostnames to end users. raise exception.ServiceUnavailable() return function(self, context, instance, *args, **kwargs) return wrapped return outer def check_instance_lock(function): @six.wraps(function) def inner(self, context, instance, *args, **kwargs): if instance.locked and not context.is_admin: raise exception.InstanceIsLocked(instance_uuid=instance.uuid) return function(self, context, instance, *args, **kwargs) return inner def reject_sev_instances(operation): """Decorator. Raise OperationNotSupportedForSEV if instance has SEV enabled. 
""" def outer(f): @six.wraps(f) def inner(self, context, instance, *args, **kw): if hardware.get_mem_encryption_constraint(instance.flavor, instance.image_meta): raise exception.OperationNotSupportedForSEV( instance_uuid=instance.uuid, operation=operation) return f(self, context, instance, *args, **kw) return inner return outer def _diff_dict(orig, new): """Return a dict describing how to change orig to new. The keys correspond to values that have changed; the value will be a list of one or two elements. The first element of the list will be either '+' or '-', indicating whether the key was updated or deleted; if the key was updated, the list will contain a second element, giving the updated value. """ # Figure out what keys went away result = {k: ['-'] for k in set(orig.keys()) - set(new.keys())} # Compute the updates for key, value in new.items(): if key not in orig or value != orig[key]: result[key] = ['+', value] return result def load_cells(): global CELLS if not CELLS: CELLS = objects.CellMappingList.get_all( nova_context.get_admin_context()) LOG.debug('Found %(count)i cells: %(cells)s', dict(count=len(CELLS), cells=','.join([c.identity for c in CELLS]))) if not CELLS: LOG.error('No cells are configured, unable to continue') def _get_image_meta_obj(image_meta_dict): try: image_meta = objects.ImageMeta.from_dict(image_meta_dict) except ValueError as e: # there must be invalid values in the image meta properties so # consider this an invalid request msg = _('Invalid image metadata. Error: %s') % six.text_type(e) raise exception.InvalidRequest(msg) return image_meta def block_accelerators(func): @functools.wraps(func) def wrapper(self, context, instance, *args, **kwargs): dp_name = instance.flavor.extra_specs.get('accel:device_profile') if dp_name: raise exception.ForbiddenWithAccelerators() return func(self, context, instance, *args, **kwargs) return wrapper @profiler.trace_cls("compute_api") class API(base.Base): """API for interacting with the compute manager.""" def __init__(self, image_api=None, network_api=None, volume_api=None, **kwargs): self.image_api = image_api or glance.API() self.network_api = network_api or neutron.API() self.volume_api = volume_api or cinder.API() self._placementclient = None # Lazy-load on first access. self.compute_rpcapi = compute_rpcapi.ComputeAPI() self.compute_task_api = conductor.ComputeTaskAPI() self.servicegroup_api = servicegroup.API() self.host_api = HostAPI(self.compute_rpcapi, self.servicegroup_api) self.notifier = rpc.get_notifier('compute', CONF.host) if CONF.ephemeral_storage_encryption.enabled: self.key_manager = key_manager.API() # Help us to record host in EventReporter self.host = CONF.host super(API, self).__init__(**kwargs) def _record_action_start(self, context, instance, action): objects.InstanceAction.action_start(context, instance.uuid, action, want_result=False) def _check_injected_file_quota(self, context, injected_files): """Enforce quota limits on injected files. Raises a QuotaError if any limit is exceeded. """ if not injected_files: return # Check number of files first try: objects.Quotas.limit_check(context, injected_files=len(injected_files)) except exception.OverQuota: raise exception.OnsetFileLimitExceeded() # OK, now count path and content lengths; we're looking for # the max... 
max_path = 0 max_content = 0 for path, content in injected_files: max_path = max(max_path, len(path)) max_content = max(max_content, len(content)) try: objects.Quotas.limit_check(context, injected_file_path_bytes=max_path, injected_file_content_bytes=max_content) except exception.OverQuota as exc: # Favor path limit over content limit for reporting # purposes if 'injected_file_path_bytes' in exc.kwargs['overs']: raise exception.OnsetFilePathLimitExceeded( allowed=exc.kwargs['quotas']['injected_file_path_bytes']) else: raise exception.OnsetFileContentLimitExceeded( allowed=exc.kwargs['quotas']['injected_file_content_bytes']) def _check_metadata_properties_quota(self, context, metadata=None): """Enforce quota limits on metadata properties.""" if not metadata: return if not isinstance(metadata, dict): msg = (_("Metadata type should be dict.")) raise exception.InvalidMetadata(reason=msg) num_metadata = len(metadata) try: objects.Quotas.limit_check(context, metadata_items=num_metadata) except exception.OverQuota as exc: quota_metadata = exc.kwargs['quotas']['metadata_items'] raise exception.MetadataLimitExceeded(allowed=quota_metadata) # Because metadata is stored in the DB, we hard-code the size limits # In future, we may support more variable length strings, so we act # as if this is quota-controlled for forwards compatibility. # Those are only used in V2 API, from V2.1 API, those checks are # validated at API layer schema validation. for k, v in metadata.items(): try: utils.check_string_length(v) utils.check_string_length(k, min_length=1) except exception.InvalidInput as e: raise exception.InvalidMetadata(reason=e.format_message()) if len(k) > 255: msg = _("Metadata property key greater than 255 characters") raise exception.InvalidMetadataSize(reason=msg) if len(v) > 255: msg = _("Metadata property value greater than 255 characters") raise exception.InvalidMetadataSize(reason=msg) def _check_requested_secgroups(self, context, secgroups): """Check if the security group requested exists and belongs to the project. :param context: The nova request context. :type context: nova.context.RequestContext :param secgroups: list of requested security group names :type secgroups: list :returns: list of requested security group UUIDs; note that 'default' is a special case and will be unmodified if it's requested. """ security_groups = [] for secgroup in secgroups: # NOTE(sdague): default is handled special if secgroup == "default": security_groups.append(secgroup) continue secgroup_uuid = security_group_api.validate_name(context, secgroup) security_groups.append(secgroup_uuid) return security_groups def _check_requested_networks(self, context, requested_networks, max_count): """Check if the networks requested belongs to the project and the fixed IP address for each network provided is within same the network block """ if requested_networks is not None: if requested_networks.no_allocate: # If the network request was specifically 'none' meaning don't # allocate any networks, we just return the number of requested # instances since quotas don't change at all. return max_count # NOTE(danms): Temporary transition requested_networks = requested_networks.as_tuples() return self.network_api.validate_networks(context, requested_networks, max_count) def _handle_kernel_and_ramdisk(self, context, kernel_id, ramdisk_id, image): """Choose kernel and ramdisk appropriate for the instance. The kernel and ramdisk can be chosen in one of two ways: 1. Passed in with create-instance request. 2. Inherited from image metadata. 
If inherited from image metadata, and if that image metadata value is set to 'nokernel', both kernel and ramdisk will default to None. """ # Inherit from image if not specified image_properties = image.get('properties', {}) if kernel_id is None: kernel_id = image_properties.get('kernel_id') if ramdisk_id is None: ramdisk_id = image_properties.get('ramdisk_id') # Force to None if kernel_id indicates that a kernel is not to be used if kernel_id == 'nokernel': kernel_id = None ramdisk_id = None # Verify kernel and ramdisk exist (fail-fast) if kernel_id is not None: kernel_image = self.image_api.get(context, kernel_id) # kernel_id could have been a URI, not a UUID, so to keep behaviour # from before, which leaked that implementation detail out to the # caller, we return the image UUID of the kernel image and ramdisk # image (below) and not any image URIs that might have been # supplied. # TODO(jaypipes): Get rid of this silliness once we move to a real # Image object and hide all of that stuff within nova.image.glance kernel_id = kernel_image['id'] if ramdisk_id is not None: ramdisk_image = self.image_api.get(context, ramdisk_id) ramdisk_id = ramdisk_image['id'] return kernel_id, ramdisk_id @staticmethod def parse_availability_zone(context, availability_zone): # NOTE(vish): We have a legacy hack to allow admins to specify hosts # via az using az:host:node. It might be nice to expose an # api to specify specific hosts to force onto, but for # now it just supports this legacy hack. # NOTE(deva): It is also possible to specify az::node, in which case # the host manager will determine the correct host. forced_host = None forced_node = None if availability_zone and ':' in availability_zone: c = availability_zone.count(':') if c == 1: availability_zone, forced_host = availability_zone.split(':') elif c == 2: if '::' in availability_zone: availability_zone, forced_node = \ availability_zone.split('::') else: availability_zone, forced_host, forced_node = \ availability_zone.split(':') else: raise exception.InvalidInput( reason="Unable to parse availability_zone") if not availability_zone: availability_zone = CONF.default_schedule_zone return availability_zone, forced_host, forced_node def _ensure_auto_disk_config_is_valid(self, auto_disk_config_img, auto_disk_config, image): auto_disk_config_disabled = \ utils.is_auto_disk_config_disabled(auto_disk_config_img) if auto_disk_config_disabled and auto_disk_config: raise exception.AutoDiskConfigDisabledByImage(image=image) def _inherit_properties_from_image(self, image, auto_disk_config): image_properties = image.get('properties', {}) auto_disk_config_img = \ utils.get_auto_disk_config_from_image_props(image_properties) self._ensure_auto_disk_config_is_valid(auto_disk_config_img, auto_disk_config, image.get("id")) if auto_disk_config is None: auto_disk_config = strutils.bool_from_string(auto_disk_config_img) return { 'os_type': image_properties.get('os_type'), 'architecture': image_properties.get('architecture'), 'vm_mode': image_properties.get('vm_mode'), 'auto_disk_config': auto_disk_config } def _check_config_drive(self, config_drive): if config_drive: try: bool_val = strutils.bool_from_string(config_drive, strict=True) except ValueError: raise exception.ConfigDriveInvalidValue(option=config_drive) else: bool_val = False # FIXME(comstud): Bug ID 1193438 filed for this. This looks silly, # but this is because the config drive column is a String. False # is represented by using an empty string. 
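# NOTE(sketch): Illustrative example, not part of Nova. It reproduces the
# legacy availability-zone forms accepted by parse_availability_zone() above
# ("az", "az:host", "az::node" and "az:host:node"), returning
# (availability_zone, forced_host, forced_node). The default zone argument is
# a stand-in for CONF.default_schedule_zone and the sample names are made up.
def parse_az(value, default_zone=None):
    host = node = None
    if value and ':' in value:
        c = value.count(':')
        if c == 1:
            value, host = value.split(':')
        elif c == 2 and '::' in value:
            value, node = value.split('::')
        elif c == 2:
            value, host, node = value.split(':')
        else:
            raise ValueError('unable to parse availability_zone')
    return (value or default_zone, host, node)


assert parse_az('nova:compute-1') == ('nova', 'compute-1', None)
assert parse_az('nova::node-7') == ('nova', None, 'node-7')
assert parse_az('nova:compute-1:node-7') == ('nova', 'compute-1', 'node-7')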
And for whatever # reason, we rely on the DB to cast True to a String. return True if bool_val else '' def _validate_flavor_image(self, context, image_id, image, instance_type, root_bdm, validate_numa=True): """Validate the flavor and image. This is called from the API service to ensure that the flavor extra-specs and image properties are self-consistent and compatible with each other. :param context: A context.RequestContext :param image_id: UUID of the image :param image: a dict representation of the image including properties, enforces the image status is active. :param instance_type: Flavor object :param root_bdm: BlockDeviceMapping for root disk. Will be None for the resize case. :param validate_numa: Flag to indicate whether or not to validate the NUMA-related metadata. :raises: Many different possible exceptions. See api.openstack.compute.servers.INVALID_FLAVOR_IMAGE_EXCEPTIONS for the full list. """ if image and image['status'] != 'active': raise exception.ImageNotActive(image_id=image_id) self._validate_flavor_image_nostatus(context, image, instance_type, root_bdm, validate_numa) @staticmethod def _detect_nonbootable_image_from_properties(image_id, image): """Check image for a property indicating it's nonbootable. This is called from the API service to ensure that there are no known image properties indicating that this image is of a type that we do not support booting from. Currently the only such property is 'cinder_encryption_key_id'. :param image_id: UUID of the image :param image: a dict representation of the image including properties :raises: ImageUnacceptable if the image properties indicate that booting this image is not supported """ if not image: return image_properties = image.get('properties', {}) # NOTE(lyarwood) Skip this check when image_id is None indicating that # the instance is booting from a volume that was itself initially # created from an image. As such we don't care if # cinder_encryption_key_id was against the original image as we are now # booting from an encrypted volume. if image_properties.get('cinder_encryption_key_id') and image_id: reason = _('Direct booting of an image uploaded from an ' 'encrypted volume is unsupported.') raise exception.ImageUnacceptable(image_id=image_id, reason=reason) @staticmethod def _validate_flavor_image_nostatus(context, image, instance_type, root_bdm, validate_numa=True, validate_pci=False): """Validate the flavor and image. This is called from the API service to ensure that the flavor extra-specs and image properties are self-consistent and compatible with each other. :param context: A context.RequestContext :param image: a dict representation of the image including properties :param instance_type: Flavor object :param root_bdm: BlockDeviceMapping for root disk. Will be None for the resize case. :param validate_numa: Flag to indicate whether or not to validate the NUMA-related metadata. :param validate_pci: Flag to indicate whether or not to validate the PCI-related metadata. :raises: Many different possible exceptions. See api.openstack.compute.servers.INVALID_FLAVOR_IMAGE_EXCEPTIONS for the full list. 
""" if not image: return image_properties = image.get('properties', {}) config_drive_option = image_properties.get( 'img_config_drive', 'optional') if config_drive_option not in ['optional', 'mandatory']: raise exception.InvalidImageConfigDrive( config_drive=config_drive_option) if instance_type['memory_mb'] < int(image.get('min_ram') or 0): raise exception.FlavorMemoryTooSmall() # Image min_disk is in gb, size is in bytes. For sanity, have them both # in bytes. image_min_disk = int(image.get('min_disk') or 0) * units.Gi image_size = int(image.get('size') or 0) # Target disk is a volume. Don't check flavor disk size because it # doesn't make sense, and check min_disk against the volume size. if root_bdm is not None and root_bdm.is_volume: # There are 2 possibilities here: # # 1. The target volume already exists but bdm.volume_size is not # yet set because this method is called before # _bdm_validate_set_size_and_instance during server create. # 2. The target volume doesn't exist, in which case the bdm will # contain the intended volume size # # Note that rebuild also calls this method with potentially a new # image but you can't rebuild a volume-backed server with a new # image (yet). # # Cinder does its own check against min_disk, so if the target # volume already exists this has already been done and we don't # need to check it again here. In this case, volume_size may not be # set on the bdm. # # If we're going to create the volume, the bdm will contain # volume_size. Therefore we should check it if it exists. This will # still be checked again by cinder when the volume is created, but # that will not happen until the request reaches a host. By # checking it here, the user gets an immediate and useful failure # indication. # # The third possibility is that we have failed to consider # something, and there are actually more than 2 possibilities. In # this case cinder will still do the check at volume creation time. # The behaviour will still be correct, but the user will not get an # immediate failure from the api, and will instead have to # determine why the instance is in an error state with a task of # block_device_mapping. # # We could reasonably refactor this check into _validate_bdm at # some future date, as the various size logic is already split out # in there. dest_size = root_bdm.volume_size if dest_size is not None: dest_size *= units.Gi if image_min_disk > dest_size: raise exception.VolumeSmallerThanMinDisk( volume_size=dest_size, image_min_disk=image_min_disk) # Target disk is a local disk whose size is taken from the flavor else: dest_size = instance_type['root_gb'] * units.Gi # NOTE(johannes): root_gb is allowed to be 0 for legacy reasons # since libvirt interpreted the value differently than other # drivers. A value of 0 means don't check size. if dest_size != 0: if image_size > dest_size: raise exception.FlavorDiskSmallerThanImage( flavor_size=dest_size, image_size=image_size) if image_min_disk > dest_size: raise exception.FlavorDiskSmallerThanMinDisk( flavor_size=dest_size, image_min_disk=image_min_disk) else: # The user is attempting to create a server with a 0-disk # image-backed flavor, which can lead to issues with a large # image consuming an unexpectedly large amount of local disk # on the compute host. Check to see if the deployment will # allow that. 
if not context.can( servers_policies.ZERO_DISK_FLAVOR, fatal=False): raise exception.BootFromVolumeRequiredForZeroDiskFlavor() API._validate_flavor_image_numa_pci( image, instance_type, validate_numa=validate_numa, validate_pci=validate_pci) @staticmethod def _validate_flavor_image_numa_pci(image, instance_type, validate_numa=True, validate_pci=False): """Validate the flavor and image NUMA/PCI values. This is called from the API service to ensure that the flavor extra-specs and image properties are self-consistent and compatible with each other. :param image: a dict representation of the image including properties :param instance_type: Flavor object :param validate_numa: Flag to indicate whether or not to validate the NUMA-related metadata. :param validate_pci: Flag to indicate whether or not to validate the PCI-related metadata. :raises: Many different possible exceptions. See api.openstack.compute.servers.INVALID_FLAVOR_IMAGE_EXCEPTIONS for the full list. """ image_meta = _get_image_meta_obj(image) API._validate_flavor_image_mem_encryption(instance_type, image_meta) # validate PMU extra spec and image metadata flavor_pmu = instance_type.extra_specs.get('hw:pmu') image_pmu = image_meta.properties.get('hw_pmu') if (flavor_pmu is not None and image_pmu is not None and image_pmu != strutils.bool_from_string(flavor_pmu)): raise exception.ImagePMUConflict() # Only validate values of flavor/image so the return results of # following 'get' functions are not used. hardware.get_number_of_serial_ports(instance_type, image_meta) if hardware.is_realtime_enabled(instance_type): hardware.vcpus_realtime_topology(instance_type, image_meta) hardware.get_cpu_topology_constraints(instance_type, image_meta) if validate_numa: hardware.numa_get_constraints(instance_type, image_meta) if validate_pci: pci_request.get_pci_requests_from_flavor(instance_type) @staticmethod def _validate_flavor_image_mem_encryption(instance_type, image): """Validate that the flavor and image don't make contradictory requests regarding memory encryption. :param instance_type: Flavor object :param image: an ImageMeta object :raises: nova.exception.FlavorImageConflict """ # This library function will raise the exception for us if # necessary; if not, we can ignore the result returned. hardware.get_mem_encryption_constraint(instance_type, image) def _get_image_defined_bdms(self, instance_type, image_meta, root_device_name): image_properties = image_meta.get('properties', {}) # Get the block device mappings defined by the image. 
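        # Images can carry BDMs in either the legacy format (a
        # 'block_device_mapping' property without 'bdm_v2') or the v2
        # format, plus EC2-style 'mappings' entries. Legacy entries are
        # converted below and the blank devices derived from 'mappings'
        # are merged in, with the explicit image BDMs winning on
        # device-name collisions.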
image_defined_bdms = image_properties.get('block_device_mapping', []) legacy_image_defined = not image_properties.get('bdm_v2', False) image_mapping = image_properties.get('mappings', []) if legacy_image_defined: image_defined_bdms = block_device.from_legacy_mapping( image_defined_bdms, None, root_device_name) else: image_defined_bdms = list(map(block_device.BlockDeviceDict, image_defined_bdms)) if image_mapping: image_mapping = self._prepare_image_mapping(instance_type, image_mapping) image_defined_bdms = self._merge_bdms_lists( image_mapping, image_defined_bdms) return image_defined_bdms def _get_flavor_defined_bdms(self, instance_type, block_device_mapping): flavor_defined_bdms = [] have_ephemeral_bdms = any(filter( block_device.new_format_is_ephemeral, block_device_mapping)) have_swap_bdms = any(filter( block_device.new_format_is_swap, block_device_mapping)) if instance_type.get('ephemeral_gb') and not have_ephemeral_bdms: flavor_defined_bdms.append( block_device.create_blank_bdm(instance_type['ephemeral_gb'])) if instance_type.get('swap') and not have_swap_bdms: flavor_defined_bdms.append( block_device.create_blank_bdm(instance_type['swap'], 'swap')) return flavor_defined_bdms def _merge_bdms_lists(self, overridable_mappings, overrider_mappings): """Override any block devices from the first list by device name :param overridable_mappings: list which items are overridden :param overrider_mappings: list which items override :returns: A merged list of bdms """ device_names = set(bdm['device_name'] for bdm in overrider_mappings if bdm['device_name']) return (overrider_mappings + [bdm for bdm in overridable_mappings if bdm['device_name'] not in device_names]) def _check_and_transform_bdm(self, context, base_options, instance_type, image_meta, min_count, max_count, block_device_mapping, legacy_bdm): # NOTE (ndipanov): Assume root dev name is 'vda' if not supplied. # It's needed for legacy conversion to work. root_device_name = (base_options.get('root_device_name') or 'vda') image_ref = base_options.get('image_ref', '') # If the instance is booted by image and has a volume attached, # the volume cannot have the same device name as root_device_name if image_ref: for bdm in block_device_mapping: if (bdm.get('destination_type') == 'volume' and block_device.strip_dev(bdm.get( 'device_name')) == root_device_name): msg = _('The volume cannot be assigned the same device' ' name as the root device %s') % root_device_name raise exception.InvalidRequest(msg) image_defined_bdms = self._get_image_defined_bdms( instance_type, image_meta, root_device_name) root_in_image_bdms = ( block_device.get_root_bdm(image_defined_bdms) is not None) if legacy_bdm: block_device_mapping = block_device.from_legacy_mapping( block_device_mapping, image_ref, root_device_name, no_root=root_in_image_bdms) elif root_in_image_bdms: # NOTE (ndipanov): client will insert an image mapping into the v2 # block_device_mapping, but if there is a bootable device in image # mappings - we need to get rid of the inserted image # NOTE (gibi): another case is when a server is booted with an # image to bdm mapping where the image only contains a bdm to a # snapshot. In this case the other image to bdm mapping # contains an unnecessary device with boot_index == 0. # Also in this case the image_ref is None as we are booting from # an image to volume bdm. 
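            # The predicate below drops any BDM that is both image-backed
            # and the boot device (boot_index == 0), e.g. the mapping a
            # client inserts for the image itself, because the
            # image-defined BDMs merged in afterwards already describe
            # the root device.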
def not_image_and_root_bdm(bdm): return not (bdm.get('boot_index') == 0 and bdm.get('source_type') == 'image') block_device_mapping = list( filter(not_image_and_root_bdm, block_device_mapping)) block_device_mapping = self._merge_bdms_lists( image_defined_bdms, block_device_mapping) if min_count > 1 or max_count > 1: if any(map(lambda bdm: bdm['source_type'] == 'volume', block_device_mapping)): msg = _('Cannot attach one or more volumes to multiple' ' instances') raise exception.InvalidRequest(msg) block_device_mapping += self._get_flavor_defined_bdms( instance_type, block_device_mapping) return block_device_obj.block_device_make_list_from_dicts( context, block_device_mapping) def _get_image(self, context, image_href): if not image_href: return None, {} image = self.image_api.get(context, image_href) return image['id'], image def _checks_for_create_and_rebuild(self, context, image_id, image, instance_type, metadata, files_to_inject, root_bdm, validate_numa=True): self._check_metadata_properties_quota(context, metadata) self._check_injected_file_quota(context, files_to_inject) self._detect_nonbootable_image_from_properties(image_id, image) self._validate_flavor_image(context, image_id, image, instance_type, root_bdm, validate_numa=validate_numa) def _validate_and_build_base_options(self, context, instance_type, boot_meta, image_href, image_id, kernel_id, ramdisk_id, display_name, display_description, key_name, key_data, security_groups, availability_zone, user_data, metadata, access_ip_v4, access_ip_v6, requested_networks, config_drive, auto_disk_config, reservation_id, max_count, supports_port_resource_request): """Verify all the input parameters regardless of the provisioning strategy being performed. """ if instance_type['disabled']: raise exception.FlavorNotFound(flavor_id=instance_type['id']) if user_data: try: base64utils.decode_as_bytes(user_data) except TypeError: raise exception.InstanceUserDataMalformed() # When using Neutron, _check_requested_secgroups will translate and # return any requested security group names to uuids. security_groups = self._check_requested_secgroups( context, security_groups) # Note: max_count is the number of instances requested by the user, # max_network_count is the maximum number of instances taking into # account any network quotas max_network_count = self._check_requested_networks( context, requested_networks, max_count) kernel_id, ramdisk_id = self._handle_kernel_and_ramdisk( context, kernel_id, ramdisk_id, boot_meta) config_drive = self._check_config_drive(config_drive) if key_data is None and key_name is not None: key_pair = objects.KeyPair.get_by_name(context, context.user_id, key_name) key_data = key_pair.public_key else: key_pair = None root_device_name = block_device.prepend_dev( block_device.properties_root_device_name( boot_meta.get('properties', {}))) image_meta = _get_image_meta_obj(boot_meta) numa_topology = hardware.numa_get_constraints( instance_type, image_meta) system_metadata = {} pci_numa_affinity_policy = hardware.get_pci_numa_policy_constraint( instance_type, image_meta) # PCI requests come from two sources: instance flavor and # requested_networks. The first call in below returns an # InstancePCIRequests object which is a list of InstancePCIRequest # objects. 
The second call in below creates an InstancePCIRequest # object for each SR-IOV port, and append it to the list in the # InstancePCIRequests object pci_request_info = pci_request.get_pci_requests_from_flavor( instance_type, affinity_policy=pci_numa_affinity_policy) result = self.network_api.create_resource_requests( context, requested_networks, pci_request_info, affinity_policy=pci_numa_affinity_policy) network_metadata, port_resource_requests = result # Creating servers with ports that have resource requests, like QoS # minimum bandwidth rules, is only supported in a requested minimum # microversion. if port_resource_requests and not supports_port_resource_request: raise exception.CreateWithPortResourceRequestOldVersion() base_options = { 'reservation_id': reservation_id, 'image_ref': image_href, 'kernel_id': kernel_id or '', 'ramdisk_id': ramdisk_id or '', 'power_state': power_state.NOSTATE, 'vm_state': vm_states.BUILDING, 'config_drive': config_drive, 'user_id': context.user_id, 'project_id': context.project_id, 'instance_type_id': instance_type['id'], 'memory_mb': instance_type['memory_mb'], 'vcpus': instance_type['vcpus'], 'root_gb': instance_type['root_gb'], 'ephemeral_gb': instance_type['ephemeral_gb'], 'display_name': display_name, 'display_description': display_description, 'user_data': user_data, 'key_name': key_name, 'key_data': key_data, 'locked': False, 'metadata': metadata or {}, 'access_ip_v4': access_ip_v4, 'access_ip_v6': access_ip_v6, 'availability_zone': availability_zone, 'root_device_name': root_device_name, 'progress': 0, 'pci_requests': pci_request_info, 'numa_topology': numa_topology, 'system_metadata': system_metadata, 'port_resource_requests': port_resource_requests} options_from_image = self._inherit_properties_from_image( boot_meta, auto_disk_config) base_options.update(options_from_image) # return the validated options and maximum number of instances allowed # by the network quotas return (base_options, max_network_count, key_pair, security_groups, network_metadata) @staticmethod @db_api.api_context_manager.writer def _create_reqspec_buildreq_instmapping(context, rs, br, im): """Create the request spec, build request, and instance mapping in a single database transaction. The RequestContext must be passed in to this method so that the database transaction context manager decorator will nest properly and include each create() into the same transaction context. """ rs.create() br.create() im.create() def _validate_host_or_node(self, context, host, hypervisor_hostname): """Check whether compute nodes exist by validating the host and/or the hypervisor_hostname. There are three cases: 1. If only host is supplied, we can lookup the HostMapping in the API DB. 2. If only node is supplied, we can query a resource provider with that name in placement. 3. If both host and node are supplied, we can get the cell from HostMapping and from that lookup the ComputeNode with the given cell. :param context: The API request context. :param host: Target host. :param hypervisor_hostname: Target node. :raises: ComputeHostNotFound if we find no compute nodes with host and/or hypervisor_hostname. """ if host: # When host is specified. try: host_mapping = objects.HostMapping.get_by_host(context, host) except exception.HostMappingNotFound: LOG.warning('No host-to-cell mapping found for host ' '%(host)s.', {'host': host}) raise exception.ComputeHostNotFound(host=host) # When both host and node are specified. 
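            # When both host and node are given, the cell from the
            # HostMapping above scopes the ComputeNode lookup, so a node
            # name that only exists in a different cell still fails
            # validation here.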
if hypervisor_hostname: cell = host_mapping.cell_mapping with nova_context.target_cell(context, cell) as cctxt: # Here we only do an existence check, so we don't # need to store the return value into a variable. objects.ComputeNode.get_by_host_and_nodename( cctxt, host, hypervisor_hostname) elif hypervisor_hostname: # When only node is specified. try: self.placementclient.get_provider_by_name( context, hypervisor_hostname) except exception.ResourceProviderNotFound: raise exception.ComputeHostNotFound(host=hypervisor_hostname) def _get_volumes_for_bdms(self, context, bdms): """Get the pre-existing volumes from cinder for the list of BDMs. :param context: nova auth RequestContext :param bdms: BlockDeviceMappingList which has zero or more BDMs with a pre-existing volume_id specified. :return: dict, keyed by volume id, of volume dicts :raises: VolumeNotFound - if a given volume does not exist :raises: CinderConnectionFailed - if there are problems communicating with the cinder API :raises: Forbidden - if the user token does not have authority to see a volume """ volumes = {} for bdm in bdms: if bdm.volume_id: volumes[bdm.volume_id] = self.volume_api.get( context, bdm.volume_id) return volumes @staticmethod def _validate_vol_az_for_create(instance_az, volumes): """Performs cross_az_attach validation for the instance and volumes. If [cinder]/cross_az_attach=True (default) this method is a no-op. If [cinder]/cross_az_attach=False, this method will validate that: 1. All volumes are in the same availability zone. 2. The volume AZ matches the instance AZ. If the instance is being created without a specific AZ (either via the user request or the [DEFAULT]/default_schedule_zone option), and the volume AZ matches [DEFAULT]/default_availability_zone for compute services, then the method returns the volume AZ so it can be set in the RequestSpec as if the user requested the zone explicitly. :param instance_az: Availability zone for the instance. In this case the host is not yet selected so the instance AZ value should come from one of the following cases: * The user requested availability zone. * [DEFAULT]/default_schedule_zone (defaults to None) if the request does not specify an AZ (see parse_availability_zone). :param volumes: iterable of dicts of cinder volumes to be attached to the server being created :returns: None or volume AZ to set in the RequestSpec for the instance :raises: MismatchVolumeAZException if the instance and volume AZ do not match """ if CONF.cinder.cross_az_attach: return if not volumes: return # First make sure that all of the volumes are in the same zone. vol_zones = [vol['availability_zone'] for vol in volumes] if len(set(vol_zones)) > 1: msg = (_("Volumes are in different availability zones: %s") % ','.join(vol_zones)) raise exception.MismatchVolumeAZException(reason=msg) volume_az = vol_zones[0] # In this case the instance.host should not be set so the instance AZ # value should come from instance.availability_zone which will be one # of the following cases: # * The user requested availability zone. # * [DEFAULT]/default_schedule_zone (defaults to None) if the request # does not specify an AZ (see parse_availability_zone). # If the instance is not being created with a specific AZ (the AZ is # input via the API create request *or* [DEFAULT]/default_schedule_zone # is not None), then check to see if we should use the default AZ # (which by default matches the default AZ in Cinder, i.e. 'nova'). 
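        # Concretely: with cross_az_attach disabled and no AZ in the
        # request, a volume in e.g. 'az-1' pins the server to 'az-1',
        # while a volume sitting in the default 'nova' AZ leaves the
        # request untargeted; if the user did request an AZ it simply
        # has to match the volume AZ.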
if instance_az is None: # Check if the volume AZ is the same as our default AZ for compute # hosts (nova) and if so, assume we are OK because the user did not # request an AZ and will get the same default. If the volume AZ is # not the same as our default, return the volume AZ so the caller # can put it into the request spec so the instance is scheduled # to the same zone as the volume. Note that we are paranoid about # the default here since both nova and cinder's default backend AZ # is "nova" and we do not want to pin the server to that AZ since # it's special, i.e. just like we tell users in the docs to not # specify availability_zone='nova' when creating a server since we # might not be able to migrate it later. if volume_az != CONF.default_availability_zone: return volume_az # indication to set in request spec # The volume AZ is the same as the default nova AZ so we will be OK return if instance_az != volume_az: msg = _("Server and volumes are not in the same availability " "zone. Server is in: %(instance_az)s. Volumes are in: " "%(volume_az)s") % { 'instance_az': instance_az, 'volume_az': volume_az} raise exception.MismatchVolumeAZException(reason=msg) def _provision_instances(self, context, instance_type, min_count, max_count, base_options, boot_meta, security_groups, block_device_mapping, shutdown_terminate, instance_group, check_server_group_quota, filter_properties, key_pair, tags, trusted_certs, supports_multiattach, network_metadata=None, requested_host=None, requested_hypervisor_hostname=None): # NOTE(boxiang): Check whether compute nodes exist by validating # the host and/or the hypervisor_hostname. Pass the destination # to the scheduler with host and/or hypervisor_hostname(node). destination = None if requested_host or requested_hypervisor_hostname: self._validate_host_or_node(context, requested_host, requested_hypervisor_hostname) destination = objects.Destination() if requested_host: destination.host = requested_host destination.node = requested_hypervisor_hostname # Check quotas num_instances = compute_utils.check_num_instances_quota( context, instance_type, min_count, max_count) security_groups = security_group_api.populate_security_groups( security_groups) port_resource_requests = base_options.pop('port_resource_requests') instances_to_build = [] # We could be iterating over several instances with several BDMs per # instance and those BDMs could be using a lot of the same images so # we want to cache the image API GET results for performance. image_cache = {} # dict of image dicts keyed by image id # Before processing the list of instances get all of the requested # pre-existing volumes so we can do some validation here rather than # down in the bowels of _validate_bdm. volumes = self._get_volumes_for_bdms(context, block_device_mapping) volume_az = self._validate_vol_az_for_create( base_options['availability_zone'], volumes.values()) if volume_az: # This means the instance is not being created in a specific zone # but needs to match the zone that the volumes are in so update # base_options to match the volume zone. base_options['availability_zone'] = volume_az LOG.debug("Going to run %s instances...", num_instances) extra_specs = instance_type.extra_specs dp_name = extra_specs.get('accel:device_profile') dp_request_groups = [] if dp_name: dp_request_groups = cyborg.get_device_profile_request_groups( context, dp_name) try: for i in range(num_instances): # Create a uuid for the instance so we can store the # RequestSpec before the instance is created. 
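                # Each loop iteration assembles everything the scheduler
                # needs before the instance exists in any cell: a
                # RequestSpec, an unpersisted Instance, a BuildRequest
                # wrapping it, and an InstanceMapping with no cell yet,
                # all committed in a single API-DB transaction further
                # down.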
instance_uuid = uuidutils.generate_uuid() # Store the RequestSpec that will be used for scheduling. req_spec = objects.RequestSpec.from_components(context, instance_uuid, boot_meta, instance_type, base_options['numa_topology'], base_options['pci_requests'], filter_properties, instance_group, base_options['availability_zone'], security_groups=security_groups, port_resource_requests=port_resource_requests) if block_device_mapping: # Record whether or not we are a BFV instance root = block_device_mapping.root_bdm() req_spec.is_bfv = bool(root and root.is_volume) else: # If we have no BDMs, we're clearly not BFV req_spec.is_bfv = False # NOTE(danms): We need to record num_instances on the request # spec as this is how the conductor knows how many were in this # batch. req_spec.num_instances = num_instances # NOTE(stephenfin): The network_metadata field is not persisted # inside RequestSpec object. if network_metadata: req_spec.network_metadata = network_metadata if destination: req_spec.requested_destination = destination if dp_request_groups: req_spec.requested_resources.extend(dp_request_groups) # Create an instance object, but do not store in db yet. instance = objects.Instance(context=context) instance.uuid = instance_uuid instance.update(base_options) instance.keypairs = objects.KeyPairList(objects=[]) if key_pair: instance.keypairs.objects.append(key_pair) instance.trusted_certs = self._retrieve_trusted_certs_object( context, trusted_certs) instance = self.create_db_entry_for_new_instance(context, instance_type, boot_meta, instance, security_groups, block_device_mapping, num_instances, i, shutdown_terminate, create_instance=False) block_device_mapping = ( self._bdm_validate_set_size_and_instance(context, instance, instance_type, block_device_mapping, image_cache, volumes, supports_multiattach)) instance_tags = self._transform_tags(tags, instance.uuid) build_request = objects.BuildRequest(context, instance=instance, instance_uuid=instance.uuid, project_id=instance.project_id, block_device_mappings=block_device_mapping, tags=instance_tags) # Create an instance_mapping. The null cell_mapping indicates # that the instance doesn't yet exist in a cell, and lookups # for it need to instead look for the RequestSpec. # cell_mapping will be populated after scheduling, with a # scheduling failure using the cell_mapping for the special # cell0. inst_mapping = objects.InstanceMapping(context=context) inst_mapping.instance_uuid = instance_uuid inst_mapping.project_id = context.project_id inst_mapping.user_id = context.user_id inst_mapping.cell_mapping = None # Create the request spec, build request, and instance mapping # records in a single transaction so that if a DBError is # raised from any of them, all INSERTs will be rolled back and # no orphaned records will be left behind. self._create_reqspec_buildreq_instmapping(context, req_spec, build_request, inst_mapping) instances_to_build.append( (req_spec, build_request, inst_mapping)) if instance_group: if check_server_group_quota: try: objects.Quotas.check_deltas( context, {'server_group_members': 1}, instance_group, context.user_id) except exception.OverQuota: msg = _("Quota exceeded, too many servers in " "group") raise exception.QuotaError(msg) members = objects.InstanceGroup.add_members( context, instance_group.uuid, [instance.uuid]) # NOTE(melwitt): We recheck the quota after creating the # object to prevent users from allocating more resources # than their allowed quota in the event of a race. 
This is # configurable because it can be expensive if strict quota # limits are not required in a deployment. if CONF.quota.recheck_quota and check_server_group_quota: try: objects.Quotas.check_deltas( context, {'server_group_members': 0}, instance_group, context.user_id) except exception.OverQuota: objects.InstanceGroup._remove_members_in_db( context, instance_group.id, [instance.uuid]) msg = _("Quota exceeded, too many servers in " "group") raise exception.QuotaError(msg) # list of members added to servers group in this iteration # is needed to check quota of server group during add next # instance instance_group.members.extend(members) # In the case of any exceptions, attempt DB cleanup except Exception: with excutils.save_and_reraise_exception(): self._cleanup_build_artifacts(None, instances_to_build) return instances_to_build @staticmethod def _retrieve_trusted_certs_object(context, trusted_certs, rebuild=False): """Convert user-requested trusted cert IDs to TrustedCerts object Also validates that the deployment is new enough to support trusted image certification validation. :param context: The user request auth context :param trusted_certs: list of user-specified trusted cert string IDs, may be None :param rebuild: True if rebuilding the server, False if creating a new server :returns: nova.objects.TrustedCerts object or None if no user-specified trusted cert IDs were given and nova is not configured with default trusted cert IDs """ # Retrieve trusted_certs parameter, or use CONF value if certificate # validation is enabled if trusted_certs: certs_to_return = objects.TrustedCerts(ids=trusted_certs) elif (CONF.glance.verify_glance_signatures and CONF.glance.enable_certificate_validation and CONF.glance.default_trusted_certificate_ids): certs_to_return = objects.TrustedCerts( ids=CONF.glance.default_trusted_certificate_ids) else: return None return certs_to_return @staticmethod def _get_requested_instance_group(context, filter_properties): if (not filter_properties or not filter_properties.get('scheduler_hints')): return group_hint = filter_properties.get('scheduler_hints').get('group') if not group_hint: return return objects.InstanceGroup.get_by_uuid(context, group_hint) def _create_instance(self, context, instance_type, image_href, kernel_id, ramdisk_id, min_count, max_count, display_name, display_description, key_name, key_data, security_groups, availability_zone, user_data, metadata, injected_files, admin_password, access_ip_v4, access_ip_v6, requested_networks, config_drive, block_device_mapping, auto_disk_config, filter_properties, reservation_id=None, legacy_bdm=True, shutdown_terminate=False, check_server_group_quota=False, tags=None, supports_multiattach=False, trusted_certs=None, supports_port_resource_request=False, requested_host=None, requested_hypervisor_hostname=None): """Verify all the input parameters regardless of the provisioning strategy being performed and schedule the instance(s) for creation. """ # Normalize and setup some parameters if reservation_id is None: reservation_id = utils.generate_uid('r') security_groups = security_groups or ['default'] min_count = min_count or 1 max_count = max_count or min_count block_device_mapping = block_device_mapping or [] tags = tags or [] if image_href: image_id, boot_meta = self._get_image(context, image_href) else: # This is similar to the logic in _retrieve_trusted_certs_object. 
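            # Booting from a volume means there is no Glance image whose
            # signature or certificate could be verified, so explicit
            # trusted_certs (or deployment-wide enforcement via the
            # [glance] options checked below) makes the request invalid.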
if (trusted_certs or (CONF.glance.verify_glance_signatures and CONF.glance.enable_certificate_validation and CONF.glance.default_trusted_certificate_ids)): msg = _("Image certificate validation is not supported " "when booting from volume") raise exception.CertificateValidationFailed(message=msg) image_id = None boot_meta = utils.get_bdm_image_metadata( context, self.image_api, self.volume_api, block_device_mapping, legacy_bdm) self._check_auto_disk_config(image=boot_meta, auto_disk_config=auto_disk_config) base_options, max_net_count, key_pair, security_groups, \ network_metadata = self._validate_and_build_base_options( context, instance_type, boot_meta, image_href, image_id, kernel_id, ramdisk_id, display_name, display_description, key_name, key_data, security_groups, availability_zone, user_data, metadata, access_ip_v4, access_ip_v6, requested_networks, config_drive, auto_disk_config, reservation_id, max_count, supports_port_resource_request) # max_net_count is the maximum number of instances requested by the # user adjusted for any network quota constraints, including # consideration of connections to each requested network if max_net_count < min_count: raise exception.PortLimitExceeded() elif max_net_count < max_count: LOG.info("max count reduced from %(max_count)d to " "%(max_net_count)d due to network port quota", {'max_count': max_count, 'max_net_count': max_net_count}) max_count = max_net_count block_device_mapping = self._check_and_transform_bdm(context, base_options, instance_type, boot_meta, min_count, max_count, block_device_mapping, legacy_bdm) # We can't do this check earlier because we need bdms from all sources # to have been merged in order to get the root bdm. # Set validate_numa=False since numa validation is already done by # _validate_and_build_base_options(). self._checks_for_create_and_rebuild(context, image_id, boot_meta, instance_type, metadata, injected_files, block_device_mapping.root_bdm(), validate_numa=False) instance_group = self._get_requested_instance_group(context, filter_properties) tags = self._create_tag_list_obj(context, tags) instances_to_build = self._provision_instances( context, instance_type, min_count, max_count, base_options, boot_meta, security_groups, block_device_mapping, shutdown_terminate, instance_group, check_server_group_quota, filter_properties, key_pair, tags, trusted_certs, supports_multiattach, network_metadata, requested_host, requested_hypervisor_hostname) instances = [] request_specs = [] build_requests = [] for rs, build_request, im in instances_to_build: build_requests.append(build_request) instance = build_request.get_new_instance(context) instances.append(instance) request_specs.append(rs) self.compute_task_api.schedule_and_build_instances( context, build_requests=build_requests, request_spec=request_specs, image=boot_meta, admin_password=admin_password, injected_files=injected_files, requested_networks=requested_networks, block_device_mapping=block_device_mapping, tags=tags) return instances, reservation_id @staticmethod def _cleanup_build_artifacts(instances, instances_to_build): # instances_to_build is a list of tuples: # (RequestSpec, BuildRequest, InstanceMapping) # Be paranoid about artifacts being deleted underneath us. 
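        # Each destroy() below gets its own try/except so that a record
        # already removed by a concurrent delete does not stop cleanup
        # of the remaining request specs, build requests and mappings.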
for instance in instances or []: try: instance.destroy() except exception.InstanceNotFound: pass for rs, build_request, im in instances_to_build or []: try: rs.destroy() except exception.RequestSpecNotFound: pass try: build_request.destroy() except exception.BuildRequestNotFound: pass try: im.destroy() except exception.InstanceMappingNotFound: pass @staticmethod def _volume_size(instance_type, bdm): size = bdm.get('volume_size') # NOTE (ndipanov): inherit flavor size only for swap and ephemeral if (size is None and bdm.get('source_type') == 'blank' and bdm.get('destination_type') == 'local'): if bdm.get('guest_format') == 'swap': size = instance_type.get('swap', 0) else: size = instance_type.get('ephemeral_gb', 0) return size def _prepare_image_mapping(self, instance_type, mappings): """Extract and format blank devices from image mappings.""" prepared_mappings = [] for bdm in block_device.mappings_prepend_dev(mappings): LOG.debug("Image bdm %s", bdm) virtual_name = bdm['virtual'] if virtual_name == 'ami' or virtual_name == 'root': continue if not block_device.is_swap_or_ephemeral(virtual_name): continue guest_format = bdm.get('guest_format') if virtual_name == 'swap': guest_format = 'swap' if not guest_format: guest_format = CONF.default_ephemeral_format values = block_device.BlockDeviceDict({ 'device_name': bdm['device'], 'source_type': 'blank', 'destination_type': 'local', 'device_type': 'disk', 'guest_format': guest_format, 'delete_on_termination': True, 'boot_index': -1}) values['volume_size'] = self._volume_size( instance_type, values) if values['volume_size'] == 0: continue prepared_mappings.append(values) return prepared_mappings def _bdm_validate_set_size_and_instance(self, context, instance, instance_type, block_device_mapping, image_cache, volumes, supports_multiattach=False): """Ensure the bdms are valid, then set size and associate with instance Because this method can be called multiple times when more than one instance is booted in a single request it makes a copy of the bdm list. :param context: nova auth RequestContext :param instance: Instance object :param instance_type: Flavor object - used for swap and ephemeral BDMs :param block_device_mapping: BlockDeviceMappingList object :param image_cache: dict of image dicts keyed by id which is used as a cache in case there are multiple BDMs in the same request using the same image to avoid redundant GET calls to the image service :param volumes: dict, keyed by volume id, of volume dicts from cinder :param supports_multiattach: True if the request supports multiattach volumes, False otherwise """ LOG.debug("block_device_mapping %s", list(block_device_mapping), instance_uuid=instance.uuid) self._validate_bdm( context, instance, instance_type, block_device_mapping, image_cache, volumes, supports_multiattach) instance_block_device_mapping = block_device_mapping.obj_clone() for bdm in instance_block_device_mapping: bdm.volume_size = self._volume_size(instance_type, bdm) bdm.instance_uuid = instance.uuid return instance_block_device_mapping @staticmethod def _check_requested_volume_type(bdm, volume_type_id_or_name, volume_types): """If we are specifying a volume type, we need to get the volume type details from Cinder and make sure the ``volume_type`` is available. """ # NOTE(brinzhang): Verify that the specified volume type exists. # And save the volume type name internally for consistency in the # BlockDeviceMapping object. 
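        # The requested value may be either the volume type's UUID or
        # its name; whichever one matches, the canonical name is what is
        # stored on the BDM.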
for vol_type in volume_types: if (volume_type_id_or_name == vol_type['id'] or volume_type_id_or_name == vol_type['name']): bdm.volume_type = vol_type['name'] break else: raise exception.VolumeTypeNotFound( id_or_name=volume_type_id_or_name) def _validate_bdm(self, context, instance, instance_type, block_device_mappings, image_cache, volumes, supports_multiattach=False): """Validate requested block device mappings. :param context: nova auth RequestContext :param instance: Instance object :param instance_type: Flavor object - used for swap and ephemeral BDMs :param block_device_mappings: BlockDeviceMappingList object :param image_cache: dict of image dicts keyed by id which is used as a cache in case there are multiple BDMs in the same request using the same image to avoid redundant GET calls to the image service :param volumes: dict, keyed by volume id, of volume dicts from cinder :param supports_multiattach: True if the request supports multiattach volumes, False otherwise """ # Make sure that the boot indexes make sense. # Setting a negative value or None indicates that the device should not # be used for booting. boot_indexes = sorted([bdm.boot_index for bdm in block_device_mappings if bdm.boot_index is not None and bdm.boot_index >= 0]) # Each device which is capable of being used as boot device should # be given a unique boot index, starting from 0 in ascending order, and # there needs to be at least one boot device. if not boot_indexes or any(i != v for i, v in enumerate(boot_indexes)): # Convert the BlockDeviceMappingList to a list for repr details. LOG.debug('Invalid block device mapping boot sequence for ' 'instance: %s', list(block_device_mappings), instance=instance) raise exception.InvalidBDMBootSequence() volume_types = None for bdm in block_device_mappings: volume_type = bdm.volume_type if volume_type: if not volume_types: # In order to reduce the number of hit cinder APIs, # initialize our cache of volume types. volume_types = self.volume_api.get_all_volume_types( context) # NOTE(brinzhang): Ensure the validity of volume_type. self._check_requested_volume_type(bdm, volume_type, volume_types) # NOTE(vish): For now, just make sure the volumes are accessible. # Additionally, check that the volume can be attached to this # instance. snapshot_id = bdm.snapshot_id volume_id = bdm.volume_id image_id = bdm.image_id if image_id is not None: if (image_id != instance.get('image_ref') and image_id not in image_cache): try: # Cache the results of the image GET so we do not make # the same request for the same image if processing # multiple BDMs or multiple servers with the same image image_cache[image_id] = self._get_image( context, image_id) except Exception: raise exception.InvalidBDMImage(id=image_id) if (bdm.source_type == 'image' and bdm.destination_type == 'volume' and not bdm.volume_size): raise exception.InvalidBDM(message=_("Images with " "destination_type 'volume' need to have a non-zero " "size specified")) elif volume_id is not None: try: volume = volumes[volume_id] # We do not validate the instance and volume AZ here # because that is done earlier by _provision_instances. 
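                    # For a pre-existing volume the size recorded on the
                    # BDM comes from the Cinder volume itself, not from
                    # anything the user put in the request.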
self._check_attach_and_reserve_volume( context, volume, instance, bdm, supports_multiattach, validate_az=False) bdm.volume_size = volume.get('size') except (exception.CinderConnectionFailed, exception.InvalidVolume, exception.MultiattachNotSupportedOldMicroversion): raise except exception.InvalidInput as exc: raise exception.InvalidVolume(reason=exc.format_message()) except Exception as e: LOG.info('Failed validating volume %s. Error: %s', volume_id, e) raise exception.InvalidBDMVolume(id=volume_id) elif snapshot_id is not None: try: snap = self.volume_api.get_snapshot(context, snapshot_id) bdm.volume_size = bdm.volume_size or snap.get('size') except exception.CinderConnectionFailed: raise except Exception: raise exception.InvalidBDMSnapshot(id=snapshot_id) elif (bdm.source_type == 'blank' and bdm.destination_type == 'volume' and not bdm.volume_size): raise exception.InvalidBDM(message=_("Blank volumes " "(source: 'blank', dest: 'volume') need to have non-zero " "size")) # NOTE(lyarwood): Ensure the disk_bus is at least known to Nova. # The virt driver may reject this later but for now just ensure # it's listed as an acceptable value of the DiskBus field class. disk_bus = bdm.disk_bus if 'disk_bus' in bdm else None if disk_bus and disk_bus not in fields_obj.DiskBus.ALL: raise exception.InvalidBDMDiskBus(disk_bus=disk_bus) ephemeral_size = sum(bdm.volume_size or instance_type['ephemeral_gb'] for bdm in block_device_mappings if block_device.new_format_is_ephemeral(bdm)) if ephemeral_size > instance_type['ephemeral_gb']: raise exception.InvalidBDMEphemeralSize() # There should be only one swap swap_list = block_device.get_bdm_swap_list(block_device_mappings) if len(swap_list) > 1: msg = _("More than one swap drive requested.") raise exception.InvalidBDMFormat(details=msg) if swap_list: swap_size = swap_list[0].volume_size or 0 if swap_size > instance_type['swap']: raise exception.InvalidBDMSwapSize() max_local = CONF.max_local_block_devices if max_local >= 0: num_local = len([bdm for bdm in block_device_mappings if bdm.destination_type == 'local']) if num_local > max_local: raise exception.InvalidBDMLocalsLimit() def _populate_instance_names(self, instance, num_instances, index): """Populate instance display_name and hostname. :param instance: The instance to set the display_name, hostname for :type instance: nova.objects.Instance :param num_instances: Total number of instances being created in this request :param index: The 0-based index of this particular instance """ # NOTE(mriedem): This is only here for test simplicity since a server # name is required in the REST API. if 'display_name' not in instance or instance.display_name is None: instance.display_name = 'Server %s' % instance.uuid # if we're booting multiple instances, we need to add an indexing # suffix to both instance.hostname and instance.display_name. This is # not necessary for a single instance. 
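        # e.g. a two-instance request named 'web' ends up with display
        # names 'web-1' and 'web-2', and the hostnames are sanitized
        # from those suffixed names (falling back to 'Server-<uuid>' if
        # nothing sanitizable is left).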
if num_instances == 1: default_hostname = 'Server-%s' % instance.uuid instance.hostname = utils.sanitize_hostname( instance.display_name, default_hostname) elif num_instances > 1: old_display_name = instance.display_name new_display_name = '%s-%d' % (old_display_name, index + 1) if utils.sanitize_hostname(old_display_name) == "": instance.hostname = 'Server-%s' % instance.uuid else: instance.hostname = utils.sanitize_hostname( new_display_name) instance.display_name = new_display_name def _populate_instance_for_create(self, context, instance, image, index, security_groups, instance_type, num_instances, shutdown_terminate): """Build the beginning of a new instance.""" instance.launch_index = index instance.vm_state = vm_states.BUILDING instance.task_state = task_states.SCHEDULING info_cache = objects.InstanceInfoCache() info_cache.instance_uuid = instance.uuid info_cache.network_info = network_model.NetworkInfo() instance.info_cache = info_cache instance.flavor = instance_type instance.old_flavor = None instance.new_flavor = None if CONF.ephemeral_storage_encryption.enabled: # NOTE(kfarr): dm-crypt expects the cipher in a # hyphenated format: cipher-chainmode-ivmode # (ex: aes-xts-plain64). The algorithm needs # to be parsed out to pass to the key manager (ex: aes). cipher = CONF.ephemeral_storage_encryption.cipher algorithm = cipher.split('-')[0] if cipher else None instance.ephemeral_key_uuid = self.key_manager.create_key( context, algorithm=algorithm, length=CONF.ephemeral_storage_encryption.key_size) else: instance.ephemeral_key_uuid = None # Store image properties so we can use them later # (for notifications, etc). Only store what we can. if not instance.obj_attr_is_set('system_metadata'): instance.system_metadata = {} # Make sure we have the dict form that we need for instance_update. instance.system_metadata = utils.instance_sys_meta(instance) system_meta = utils.get_system_metadata_from_image( image, instance_type) # In case we couldn't find any suitable base_image system_meta.setdefault('image_base_image_ref', instance.image_ref) system_meta['owner_user_name'] = context.user_name system_meta['owner_project_name'] = context.project_name instance.system_metadata.update(system_meta) # Since the removal of nova-network, we don't actually store anything # in the database. Instead, we proxy the security groups on the # instance from the ports attached to the instance. instance.security_groups = objects.SecurityGroupList() self._populate_instance_names(instance, num_instances, index) instance.shutdown_terminate = shutdown_terminate return instance def _create_tag_list_obj(self, context, tags): """Create TagList objects from simple string tags. :param context: security context. :param tags: simple string tags from API request. :returns: TagList object. """ tag_list = [objects.Tag(context=context, tag=t) for t in tags] tag_list_obj = objects.TagList(objects=tag_list) return tag_list_obj def _transform_tags(self, tags, resource_id): """Change the resource_id of the tags according to the input param. Because this method can be called multiple times when more than one instance is booted in a single request it makes a copy of the tags list. :param tags: TagList object. :param resource_id: string. :returns: TagList object. 
""" instance_tags = tags.obj_clone() for tag in instance_tags: tag.resource_id = resource_id return instance_tags # This method remains because cellsv1 uses it in the scheduler def create_db_entry_for_new_instance(self, context, instance_type, image, instance, security_group, block_device_mapping, num_instances, index, shutdown_terminate=False, create_instance=True): """Create an entry in the DB for this new instance, including any related table updates (such as security group, etc). This is called by the scheduler after a location for the instance has been determined. :param create_instance: Determines if the instance is created here or just populated for later creation. This is done so that this code can be shared with cellsv1 which needs the instance creation to happen here. It should be removed and this method cleaned up when cellsv1 is a distant memory. """ self._populate_instance_for_create(context, instance, image, index, security_group, instance_type, num_instances, shutdown_terminate) if create_instance: instance.create() return instance def _check_multiple_instances_with_neutron_ports(self, requested_networks): """Check whether multiple instances are created from port id(s).""" for requested_net in requested_networks: if requested_net.port_id: msg = _("Unable to launch multiple instances with" " a single configured port ID. Please launch your" " instance one by one with different ports.") raise exception.MultiplePortsNotApplicable(reason=msg) def _check_multiple_instances_with_specified_ip(self, requested_networks): """Check whether multiple instances are created with specified ip.""" for requested_net in requested_networks: if requested_net.network_id and requested_net.address: msg = _("max_count cannot be greater than 1 if an fixed_ip " "is specified.") raise exception.InvalidFixedIpAndMaxCountRequest(reason=msg) @hooks.add_hook("create_instance") def create(self, context, instance_type, image_href, kernel_id=None, ramdisk_id=None, min_count=None, max_count=None, display_name=None, display_description=None, key_name=None, key_data=None, security_groups=None, availability_zone=None, forced_host=None, forced_node=None, user_data=None, metadata=None, injected_files=None, admin_password=None, block_device_mapping=None, access_ip_v4=None, access_ip_v6=None, requested_networks=None, config_drive=None, auto_disk_config=None, scheduler_hints=None, legacy_bdm=True, shutdown_terminate=False, check_server_group_quota=False, tags=None, supports_multiattach=False, trusted_certs=None, supports_port_resource_request=False, requested_host=None, requested_hypervisor_hostname=None): """Provision instances, sending instance information to the scheduler. The scheduler will determine where the instance(s) go and will handle creating the DB entries. 
Returns a tuple of (instances, reservation_id) """ if requested_networks and max_count is not None and max_count > 1: self._check_multiple_instances_with_specified_ip( requested_networks) self._check_multiple_instances_with_neutron_ports( requested_networks) if availability_zone: available_zones = availability_zones.\ get_availability_zones(context.elevated(), self.host_api, get_only_available=True) if forced_host is None and availability_zone not in \ available_zones: msg = _('The requested availability zone is not available') raise exception.InvalidRequest(msg) filter_properties = scheduler_utils.build_filter_properties( scheduler_hints, forced_host, forced_node, instance_type) return self._create_instance( context, instance_type, image_href, kernel_id, ramdisk_id, min_count, max_count, display_name, display_description, key_name, key_data, security_groups, availability_zone, user_data, metadata, injected_files, admin_password, access_ip_v4, access_ip_v6, requested_networks, config_drive, block_device_mapping, auto_disk_config, filter_properties=filter_properties, legacy_bdm=legacy_bdm, shutdown_terminate=shutdown_terminate, check_server_group_quota=check_server_group_quota, tags=tags, supports_multiattach=supports_multiattach, trusted_certs=trusted_certs, supports_port_resource_request=supports_port_resource_request, requested_host=requested_host, requested_hypervisor_hostname=requested_hypervisor_hostname) def _check_auto_disk_config(self, instance=None, image=None, auto_disk_config=None): if auto_disk_config is None: return if not image and not instance: return if image: image_props = image.get("properties", {}) auto_disk_config_img = \ utils.get_auto_disk_config_from_image_props(image_props) image_ref = image.get("id") else: sys_meta = utils.instance_sys_meta(instance) image_ref = sys_meta.get('image_base_image_ref') auto_disk_config_img = \ utils.get_auto_disk_config_from_instance(sys_meta=sys_meta) self._ensure_auto_disk_config_is_valid(auto_disk_config_img, auto_disk_config, image_ref) def _lookup_instance(self, context, uuid): '''Helper method for pulling an instance object from a database. During the transition to cellsv2 there is some complexity around retrieving an instance from the database which this method hides. If there is an instance mapping then query the cell for the instance, if no mapping exists then query the configured nova database. Once we are past the point that all deployments can be assumed to be migrated to cellsv2 this method can go away. ''' inst_map = None try: inst_map = objects.InstanceMapping.get_by_instance_uuid( context, uuid) except exception.InstanceMappingNotFound: # TODO(alaski): This exception block can be removed once we're # guaranteed everyone is using cellsv2. pass if inst_map is None or inst_map.cell_mapping is None: # If inst_map is None then the deployment has not migrated to # cellsv2 yet. # If inst_map.cell_mapping is None then the instance is not in a # cell yet. Until instance creation moves to the conductor the # instance can be found in the configured database, so attempt # to look it up. cell = None try: instance = objects.Instance.get_by_uuid(context, uuid) except exception.InstanceNotFound: # If we get here then the conductor is in charge of writing the # instance to the database and hasn't done that yet. It's up to # the caller of this method to determine what to do with that # information. 
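                # Callers treat a (None, None) result as "not written to
                # any database yet, or already gone" and decide for
                # themselves whether to bail out or keep going.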
return None, None else: cell = inst_map.cell_mapping with nova_context.target_cell(context, cell) as cctxt: try: instance = objects.Instance.get_by_uuid(cctxt, uuid) except exception.InstanceNotFound: # Since the cell_mapping exists we know the instance is in # the cell, however InstanceNotFound means it's already # deleted. return None, None return cell, instance def _delete_while_booting(self, context, instance): """Handle deletion if the instance has not reached a cell yet Deletion before an instance reaches a cell needs to be handled differently. What we're attempting to do is delete the BuildRequest before the api level conductor does. If we succeed here then the boot request stops before reaching a cell. If not then the instance will need to be looked up in a cell db and the normal delete path taken. """ deleted = self._attempt_delete_of_buildrequest(context, instance) if deleted: # If we've reached this block the successful deletion of the # buildrequest indicates that the build process should be halted by # the conductor. # NOTE(alaski): Though the conductor halts the build process it # does not currently delete the instance record. This is # because in the near future the instance record will not be # created if the buildrequest has been deleted here. For now we # ensure the instance has been set to deleted at this point. # Yes this directly contradicts the comment earlier in this # method, but this is a temporary measure. # Look up the instance because the current instance object was # stashed on the buildrequest and therefore not complete enough # to run .destroy(). try: instance_uuid = instance.uuid cell, instance = self._lookup_instance(context, instance_uuid) if instance is not None: # If instance is None it has already been deleted. if cell: with nova_context.target_cell(context, cell) as cctxt: # FIXME: When the instance context is targeted, # we can remove this with compute_utils.notify_about_instance_delete( self.notifier, cctxt, instance): instance.destroy() else: instance.destroy() except exception.InstanceNotFound: pass return True return False def _local_delete_cleanup(self, context, instance_uuid): # NOTE(aarents) Ensure instance allocation is cleared and instance # mapping queued as deleted before _delete() return try: self.placementclient.delete_allocation_for_instance( context, instance_uuid) except exception.AllocationDeleteFailed: LOG.info("Allocation delete failed during local delete cleanup.", instance_uuid=instance_uuid) try: self._update_queued_for_deletion(context, instance_uuid, True) except exception.InstanceMappingNotFound: LOG.info("Instance Mapping does not exist while attempting " "local delete cleanup.", instance_uuid=instance_uuid) def _attempt_delete_of_buildrequest(self, context, instance): # If there is a BuildRequest then the instance may not have been # written to a cell db yet. Delete the BuildRequest here, which # will indicate that the Instance build should not proceed. try: build_req = objects.BuildRequest.get_by_instance_uuid( context, instance.uuid) build_req.destroy() except exception.BuildRequestNotFound: # This means that conductor has deleted the BuildRequest so the # instance is now in a cell and the delete needs to proceed # normally. return False # We need to detach from any volumes so they aren't orphaned. 
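        # The BDMs stashed on the BuildRequest are used here because the
        # instance never reached a cell database, so there are no
        # cell-level BDM records to look up.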
self._local_cleanup_bdm_volumes( build_req.block_device_mappings, instance, context) return True def _delete(self, context, instance, delete_type, cb, **instance_attrs): if instance.disable_terminate: LOG.info('instance termination disabled', instance=instance) return cell = None # If there is an instance.host (or the instance is shelved-offloaded or # in error state), the instance has been scheduled and sent to a # cell/compute which means it was pulled from the cell db. # Normal delete should be attempted. may_have_ports_or_volumes = compute_utils.may_have_ports_or_volumes( instance) if not instance.host and not may_have_ports_or_volumes: try: if self._delete_while_booting(context, instance): self._local_delete_cleanup(context, instance.uuid) return # If instance.host was not set it's possible that the Instance # object here was pulled from a BuildRequest object and is not # fully populated. Notably it will be missing an 'id' field # which will prevent instance.destroy from functioning # properly. A lookup is attempted which will either return a # full Instance or None if not found. If not found then it's # acceptable to skip the rest of the delete processing. # Save a copy of the instance UUID early, in case # _lookup_instance returns instance = None, to pass to # _local_delete_cleanup if needed. instance_uuid = instance.uuid cell, instance = self._lookup_instance(context, instance.uuid) if cell and instance: try: # Now destroy the instance from the cell it lives in. with compute_utils.notify_about_instance_delete( self.notifier, context, instance): instance.destroy() except exception.InstanceNotFound: pass # The instance was deleted or is already gone. self._local_delete_cleanup(context, instance.uuid) return if not instance: # Instance is already deleted. self._local_delete_cleanup(context, instance_uuid) return except exception.ObjectActionError: # NOTE(melwitt): This means the instance.host changed # under us indicating the instance became scheduled # during the destroy(). Refresh the instance from the DB and # continue on with the delete logic for a scheduled instance. # NOTE(danms): If instance.host is set, we should be able to # do the following lookup. If not, there's not much we can # do to recover. cell, instance = self._lookup_instance(context, instance.uuid) if not instance: # Instance is already deleted self._local_delete_cleanup(context, instance_uuid) return bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) # At these states an instance has a snapshot associate. 
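        # Shelving uploads a snapshot to Glance and stores its id in
        # system_metadata['shelved_image_id']; that snapshot is deleted
        # here so it is not leaked once the server itself is gone.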
if instance.vm_state in (vm_states.SHELVED, vm_states.SHELVED_OFFLOADED): snapshot_id = instance.system_metadata.get('shelved_image_id') LOG.info("Working on deleting snapshot %s " "from shelved instance...", snapshot_id, instance=instance) try: self.image_api.delete(context, snapshot_id) except (exception.ImageNotFound, exception.ImageNotAuthorized) as exc: LOG.warning("Failed to delete snapshot " "from shelved instance (%s).", exc.format_message(), instance=instance) except Exception: LOG.exception("Something wrong happened when trying to " "delete snapshot from shelved instance.", instance=instance) original_task_state = instance.task_state try: # NOTE(maoy): no expected_task_state needs to be set instance.update(instance_attrs) instance.progress = 0 instance.save() if not instance.host and not may_have_ports_or_volumes: try: with compute_utils.notify_about_instance_delete( self.notifier, context, instance, delete_type if delete_type != 'soft_delete' else 'delete'): instance.destroy() LOG.info('Instance deleted and does not have host ' 'field, its vm_state is %(state)s.', {'state': instance.vm_state}, instance=instance) self._local_delete_cleanup(context, instance.uuid) return except exception.ObjectActionError as ex: # The instance's host likely changed under us as # this instance could be building and has since been # scheduled. Continue with attempts to delete it. LOG.debug('Refreshing instance because: %s', ex, instance=instance) instance.refresh() if instance.vm_state == vm_states.RESIZED: self._confirm_resize_on_deleting(context, instance) # NOTE(neha_alhat): After confirm resize vm_state will become # 'active' and task_state will be set to 'None'. But for soft # deleting a vm, the _do_soft_delete callback requires # task_state in 'SOFT_DELETING' status. So, we need to set # task_state as 'SOFT_DELETING' again for soft_delete case. # After confirm resize and before saving the task_state to # "SOFT_DELETING", during the short window, user can submit # soft delete vm request again and system will accept and # process it without any errors. if delete_type == 'soft_delete': instance.task_state = instance_attrs['task_state'] instance.save() is_local_delete = True try: # instance.host must be set in order to look up the service. if instance.host is not None: service = objects.Service.get_by_compute_host( context.elevated(), instance.host) is_local_delete = not self.servicegroup_api.service_is_up( service) if not is_local_delete: if original_task_state in (task_states.DELETING, task_states.SOFT_DELETING): LOG.info('Instance is already in deleting state, ' 'ignoring this request', instance=instance) return self._record_action_start(context, instance, instance_actions.DELETE) cb(context, instance, bdms) except exception.ComputeHostNotFound: LOG.debug('Compute host %s not found during service up check, ' 'going to local delete instance', instance.host, instance=instance) if is_local_delete: # If instance is in shelved_offloaded state or compute node # isn't up, delete instance from db and clean bdms info and # network info if cell is None: # NOTE(danms): If we didn't get our cell from one of the # paths above, look it up now. 
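                # The InstanceMapping in the API database is the only
                # pointer to the cell holding the instance, so if it is
                # missing the local delete below simply cannot proceed.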
try: im = objects.InstanceMapping.get_by_instance_uuid( context, instance.uuid) cell = im.cell_mapping except exception.InstanceMappingNotFound: LOG.warning('During local delete, failed to find ' 'instance mapping', instance=instance) return LOG.debug('Doing local delete in cell %s', cell.identity, instance=instance) with nova_context.target_cell(context, cell) as cctxt: self._local_delete(cctxt, instance, bdms, delete_type, cb) except exception.InstanceNotFound: # NOTE(comstud): Race condition. Instance already gone. pass def _confirm_resize_on_deleting(self, context, instance): # If in the middle of a resize, use confirm_resize to # ensure the original instance is cleaned up too along # with its allocations (and migration-based allocations) # in placement. migration = None for status in ('finished', 'confirming'): try: migration = objects.Migration.get_by_instance_and_status( context.elevated(), instance.uuid, status) LOG.info('Found an unconfirmed migration during delete, ' 'id: %(id)s, status: %(status)s', {'id': migration.id, 'status': migration.status}, instance=instance) break except exception.MigrationNotFoundByStatus: pass if not migration: LOG.info('Instance may have been confirmed during delete', instance=instance) return self._record_action_start(context, instance, instance_actions.CONFIRM_RESIZE) # If migration.cross_cell_move, we need to also cleanup the instance # data from the source cell database. if migration.cross_cell_move: self.compute_task_api.confirm_snapshot_based_resize( context, instance, migration, do_cast=False) else: self.compute_rpcapi.confirm_resize(context, instance, migration, migration.source_compute, cast=False) def _local_cleanup_bdm_volumes(self, bdms, instance, context): """Delete the BDM records and, if a BDM is a volume, terminate the connection and detach the volume via the Volume API. """ elevated = context.elevated() for bdm in bdms: if bdm.is_volume: try: if bdm.attachment_id: self.volume_api.attachment_delete(context, bdm.attachment_id) else: connector = compute_utils.get_stashed_volume_connector( bdm, instance) if connector: self.volume_api.terminate_connection(context, bdm.volume_id, connector) else: LOG.debug('Unable to find connector for volume %s,' ' not attempting terminate_connection.', bdm.volume_id, instance=instance) # Attempt to detach the volume. If there was no # connection made in the first place this is just # cleaning up the volume state in the Cinder DB. self.volume_api.detach(elevated, bdm.volume_id, instance.uuid) if bdm.delete_on_termination: self.volume_api.delete(context, bdm.volume_id) except Exception as exc: LOG.warning("Ignoring volume cleanup failure due to %s", exc, instance=instance) # If we're cleaning up volumes from an instance that wasn't yet # created in a cell, i.e. the user deleted the server while # the BuildRequest still existed, then the BDM doesn't actually # exist in the DB to destroy it.
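# (Such not-yet-persisted BDMs have no 'id' field set, which is why the # destroy() call below is guarded by an 'id' check.)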
if 'id' in bdm: bdm.destroy() @property def placementclient(self): if self._placementclient is None: self._placementclient = report.SchedulerReportClient() return self._placementclient def _local_delete(self, context, instance, bdms, delete_type, cb): if instance.vm_state == vm_states.SHELVED_OFFLOADED: LOG.info("instance is in SHELVED_OFFLOADED state, cleanup" " the instance's info from database.", instance=instance) else: LOG.warning("instance's host %s is down, deleting from " "database", instance.host, instance=instance) with compute_utils.notify_about_instance_delete( self.notifier, context, instance, delete_type if delete_type != 'soft_delete' else 'delete'): elevated = context.elevated() self.network_api.deallocate_for_instance(elevated, instance) # cleanup volumes self._local_cleanup_bdm_volumes(bdms, instance, context) # cleanup accelerator requests (ARQs) compute_utils.delete_arqs_if_needed(context, instance) # Cleanup allocations in Placement since we can't do it from the # compute service. self.placementclient.delete_allocation_for_instance( context, instance.uuid) cb(context, instance, bdms, local=True) instance.destroy() @staticmethod def _update_queued_for_deletion(context, instance_uuid, qfd): # NOTE(tssurya): We query the instance_mapping record of this instance # and update the queued_for_delete flag to True (or False according to # the state of the instance). This just means that the instance is # queued for deletion (or is no longer queued for deletion). It does # not guarantee its successful deletion (or restoration). Hence the # value could be stale which is fine, considering its use is only # during down cell (desperate) situation. im = objects.InstanceMapping.get_by_instance_uuid(context, instance_uuid) im.queued_for_delete = qfd im.save() def _do_delete(self, context, instance, bdms, local=False): if local: instance.vm_state = vm_states.DELETED instance.task_state = None instance.terminated_at = timeutils.utcnow() instance.save() else: self.compute_rpcapi.terminate_instance(context, instance, bdms) self._update_queued_for_deletion(context, instance.uuid, True) def _do_soft_delete(self, context, instance, bdms, local=False): if local: instance.vm_state = vm_states.SOFT_DELETED instance.task_state = None instance.terminated_at = timeutils.utcnow() instance.save() else: self.compute_rpcapi.soft_delete_instance(context, instance) self._update_queued_for_deletion(context, instance.uuid, True) # NOTE(maoy): we allow delete to be called no matter what vm_state says. 
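# (This is why the delete entry points below pass vm_state=None and # task_state=None to @check_instance_state, effectively disabling the # state check for delete.)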
@check_instance_lock @check_instance_state(vm_state=None, task_state=None, must_have_launched=True) def soft_delete(self, context, instance): """Terminate an instance.""" LOG.debug('Going to try to soft delete instance', instance=instance) self._delete(context, instance, 'soft_delete', self._do_soft_delete, task_state=task_states.SOFT_DELETING, deleted_at=timeutils.utcnow()) def _delete_instance(self, context, instance): self._delete(context, instance, 'delete', self._do_delete, task_state=task_states.DELETING) @check_instance_lock @check_instance_state(vm_state=None, task_state=None, must_have_launched=False) def delete(self, context, instance): """Terminate an instance.""" LOG.debug("Going to try to terminate instance", instance=instance) self._delete_instance(context, instance) @check_instance_lock @check_instance_state(vm_state=[vm_states.SOFT_DELETED]) def restore(self, context, instance): """Restore a previously deleted (but not reclaimed) instance.""" # Check quotas flavor = instance.get_flavor() project_id, user_id = quotas_obj.ids_from_instance(context, instance) compute_utils.check_num_instances_quota(context, flavor, 1, 1, project_id=project_id, user_id=user_id) self._record_action_start(context, instance, instance_actions.RESTORE) if instance.host: instance.task_state = task_states.RESTORING instance.deleted_at = None instance.save(expected_task_state=[None]) # TODO(melwitt): We're not rechecking for strict quota here to # guard against going over quota during a race at this time because # the resource consumption for this operation is written to the # database by compute. self.compute_rpcapi.restore_instance(context, instance) else: instance.vm_state = vm_states.ACTIVE instance.task_state = None instance.deleted_at = None instance.save(expected_task_state=[None]) self._update_queued_for_deletion(context, instance.uuid, False) @check_instance_lock @check_instance_state(task_state=None, must_have_launched=False) def force_delete(self, context, instance): """Force delete an instance in any vm_state/task_state.""" self._delete(context, instance, 'force_delete', self._do_delete, task_state=task_states.DELETING) def force_stop(self, context, instance, do_cast=True, clean_shutdown=True): LOG.debug("Going to try to stop instance", instance=instance) instance.task_state = task_states.POWERING_OFF instance.progress = 0 instance.save(expected_task_state=[None]) self._record_action_start(context, instance, instance_actions.STOP) self.compute_rpcapi.stop_instance(context, instance, do_cast=do_cast, clean_shutdown=clean_shutdown) @check_instance_lock @check_instance_host() @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.ERROR]) def stop(self, context, instance, do_cast=True, clean_shutdown=True): """Stop an instance.""" self.force_stop(context, instance, do_cast, clean_shutdown) @check_instance_lock @check_instance_host() @check_instance_state(vm_state=[vm_states.STOPPED]) def start(self, context, instance): """Start an instance.""" LOG.debug("Going to try to start instance", instance=instance) instance.task_state = task_states.POWERING_ON instance.save(expected_task_state=[None]) self._record_action_start(context, instance, instance_actions.START) self.compute_rpcapi.start_instance(context, instance) @check_instance_lock @check_instance_host() @check_instance_state(vm_state=vm_states.ALLOW_TRIGGER_CRASH_DUMP) def trigger_crash_dump(self, context, instance): """Trigger crash dump in an instance.""" LOG.debug("Try to trigger crash dump", instance=instance) 
self._record_action_start(context, instance, instance_actions.TRIGGER_CRASH_DUMP) self.compute_rpcapi.trigger_crash_dump(context, instance) def _generate_minimal_construct_for_down_cells(self, context, down_cell_uuids, project, limit): """Generate a list of minimal instance constructs for a given list of cells that did not respond to a list operation. This will list every instance mapping in the affected cells and return a minimal objects.Instance for each (non-queued-for-delete) mapping. :param context: RequestContext :param down_cell_uuids: A list of cell UUIDs that did not respond :param project: A project ID to filter mappings, or None :param limit: A numeric limit on the number of results, or None :returns: An InstanceList() of partial Instance() objects """ unavailable_servers = objects.InstanceList() for cell_uuid in down_cell_uuids: LOG.warning("Cell %s is not responding and hence only " "partial results are available from this " "cell if any.", cell_uuid) instance_mappings = (objects.InstanceMappingList. get_not_deleted_by_cell_and_project(context, cell_uuid, project, limit=limit)) for im in instance_mappings: unavailable_servers.objects.append( objects.Instance( context=context, uuid=im.instance_uuid, project_id=im.project_id, created_at=im.created_at ) ) if limit is not None: limit -= len(instance_mappings) if limit <= 0: break return unavailable_servers def _get_instance_map_or_none(self, context, instance_uuid): try: inst_map = objects.InstanceMapping.get_by_instance_uuid( context, instance_uuid) except exception.InstanceMappingNotFound: # InstanceMapping should always be found generally. This exception # may be raised if a deployment has partially migrated the nova-api # services. inst_map = None return inst_map @staticmethod def _save_user_id_in_instance_mapping(mapping, instance): # TODO(melwitt): We take the opportunity to migrate user_id on the # instance mapping if it's not yet been migrated. This can be removed # in a future release, when all migrations are complete. # If the instance came from a RequestSpec because of a down cell, its # user_id could be None and the InstanceMapping.user_id field is # non-nullable. Avoid trying to set/save the user_id in that case. if 'user_id' not in mapping and instance.user_id is not None: mapping.user_id = instance.user_id mapping.save() def _get_instance_from_cell(self, context, im, expected_attrs, cell_down_support): # NOTE(danms): Even though we're going to scatter/gather to the # right cell, other code depends on this being force targeted when # the get call returns. nova_context.set_target_cell(context, im.cell_mapping) uuid = im.instance_uuid result = nova_context.scatter_gather_single_cell(context, im.cell_mapping, objects.Instance.get_by_uuid, uuid, expected_attrs=expected_attrs) cell_uuid = im.cell_mapping.uuid if not nova_context.is_cell_failure_sentinel(result[cell_uuid]): inst = result[cell_uuid] self._save_user_id_in_instance_mapping(im, inst) return inst elif isinstance(result[cell_uuid], exception.InstanceNotFound): raise exception.InstanceNotFound(instance_id=uuid) elif cell_down_support: if im.queued_for_delete: # should be treated like deleted instance. raise exception.InstanceNotFound(instance_id=uuid) # instance in down cell, return a minimal construct LOG.warning("Cell %s is not responding and hence only " "partial results are available from this " "cell.", cell_uuid) try: rs = objects.RequestSpec.get_by_instance_uuid(context, uuid) # For BFV case, we could have rs.image but rs.image.id might # still not be set. 
So we check the existence of both image # and its id. image_ref = (rs.image.id if rs.image and 'id' in rs.image else None) inst = objects.Instance(context=context, power_state=0, uuid=uuid, project_id=im.project_id, created_at=im.created_at, user_id=rs.user_id, flavor=rs.flavor, image_ref=image_ref, availability_zone=rs.availability_zone) self._save_user_id_in_instance_mapping(im, inst) return inst except exception.RequestSpecNotFound: # could be that a deleted instance whose request # spec has been archived is being queried. raise exception.InstanceNotFound(instance_id=uuid) else: raise exception.NovaException( _("Cell %s is not responding and hence instance " "info is not available.") % cell_uuid) def _get_instance(self, context, instance_uuid, expected_attrs, cell_down_support=False): inst_map = self._get_instance_map_or_none(context, instance_uuid) if inst_map and (inst_map.cell_mapping is not None): instance = self._get_instance_from_cell(context, inst_map, expected_attrs, cell_down_support) elif inst_map and (inst_map.cell_mapping is None): # This means the instance has not been scheduled and put in # a cell yet. For now it also may mean that the deployer # has not created their cell(s) yet. try: build_req = objects.BuildRequest.get_by_instance_uuid( context, instance_uuid) instance = build_req.instance except exception.BuildRequestNotFound: # Instance was mapped and the BuildRequest was deleted # while fetching. Try again. inst_map = self._get_instance_map_or_none(context, instance_uuid) if inst_map and (inst_map.cell_mapping is not None): instance = self._get_instance_from_cell(context, inst_map, expected_attrs, cell_down_support) else: raise exception.InstanceNotFound(instance_id=instance_uuid) else: # If we got here, we don't have an instance mapping, but we aren't # sure why. The instance mapping might be missing because the # upgrade is incomplete (map_instances wasn't run). Or because the # instance was deleted and the DB was archived at which point the # mapping is deleted. The former case is bad, but because of the # latter case we can't really log any kind of warning/error here # since it might be normal. raise exception.InstanceNotFound(instance_id=instance_uuid) return instance def get(self, context, instance_id, expected_attrs=None, cell_down_support=False): """Get a single instance with the given instance_id. :param cell_down_support: True if the API (and caller) support returning a minimal instance construct if the relevant cell is down. If False, an error is raised since the instance cannot be retrieved due to the cell being down. """ if not expected_attrs: expected_attrs = [] expected_attrs.extend(['metadata', 'system_metadata', 'security_groups', 'info_cache']) # NOTE(ameade): we still need to support integer ids for ec2 try: if uuidutils.is_uuid_like(instance_id): LOG.debug("Fetching instance by UUID", instance_uuid=instance_id) instance = self._get_instance(context, instance_id, expected_attrs, cell_down_support=cell_down_support) else: LOG.debug("Failed to fetch instance by id %s", instance_id) raise exception.InstanceNotFound(instance_id=instance_id) except exception.InvalidID: LOG.debug("Invalid instance id %s", instance_id) raise exception.InstanceNotFound(instance_id=instance_id) return instance def get_all(self, context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): """Get all instances filtered by one of the given parameters. 
If there is no filter and the context is an admin, it will retrieve all instances in the system. Deleted instances will be returned by default, unless there is a search option that says otherwise. The results will be sorted based on the list of sort keys in the 'sort_keys' parameter (first value is primary sort key, second value is secondary sort key, etc.). For each sort key, the associated sort direction is based on the list of sort directions in the 'sort_dirs' parameter. :param cell_down_support: True if the API (and caller) support returning a minimal instance construct if the relevant cell is down. If False, instances from unreachable cells will be omitted. :param all_tenants: True if the "all_tenants" filter was passed. """ if search_opts is None: search_opts = {} LOG.debug("Searching by: %s", str(search_opts)) # Fixups for the DB call filters = {} def _remap_flavor_filter(flavor_id): flavor = objects.Flavor.get_by_flavor_id(context, flavor_id) filters['instance_type_id'] = flavor.id def _remap_fixed_ip_filter(fixed_ip): # Turn fixed_ip into a regexp match. Since '.' matches # any character, we need to use regexp escaping for it. filters['ip'] = '^%s$' % fixed_ip.replace('.', '\\.') # search_option to filter_name mapping. filter_mapping = { 'image': 'image_ref', 'name': 'display_name', 'tenant_id': 'project_id', 'flavor': _remap_flavor_filter, 'fixed_ip': _remap_fixed_ip_filter} # copy from search_opts, doing various remappings as necessary for opt, value in search_opts.items(): # Do remappings. # Values not in the filter_mapping table are copied as-is. # If remapping is None, option is not copied # If the remapping is a string, it is the filter_name to use try: remap_object = filter_mapping[opt] except KeyError: filters[opt] = value else: # Remaps are strings to translate to, or functions to call # to do the translating as defined by the table above. if isinstance(remap_object, six.string_types): filters[remap_object] = value else: try: remap_object(value) # We already know we can't match the filter, so # return an empty list except ValueError: return objects.InstanceList() # IP address filtering cannot be applied at the DB layer, remove any DB # limit so that it can be applied after the IP filter. filter_ip = 'ip6' in filters or 'ip' in filters skip_build_request = False orig_limit = limit if filter_ip: # We cannot skip build requests if there is a marker since the # marker could be a build request. skip_build_request = marker is None if self.network_api.has_substr_port_filtering_extension(context): # We're going to filter by IP using Neutron so set filter_ip # to False so we don't attempt post-DB query filtering in # memory below. filter_ip = False instance_uuids = self._ip_filter_using_neutron(context, filters) if instance_uuids: # Note that 'uuid' is not in the 2.1 GET /servers query # parameter schema, however, we allow additionalProperties # so someone could filter instances by uuid, which doesn't # make a lot of sense but we have to account for it. if 'uuid' in filters and filters['uuid']: filter_uuids = filters['uuid'] if isinstance(filter_uuids, list): instance_uuids.extend(filter_uuids) else: # Assume a string. If it's a dict or tuple or # something, well...that's too bad. This is why # we have query parameter schema definitions. if filter_uuids not in instance_uuids: instance_uuids.append(filter_uuids) filters['uuid'] = instance_uuids else: # No matches on the ip filter(s), return an empty list.
return objects.InstanceList() elif limit: LOG.debug('Removing limit for DB query due to IP filter') limit = None # Skip getting BuildRequests if filtering by IP address, as building # instances will not have IP addresses. if skip_build_request: build_requests = objects.BuildRequestList() else: # The ordering of instances will be # [sorted instances with no host] + [sorted instances with host]. # This means BuildRequest and cell0 instances first, then cell # instances try: build_requests = objects.BuildRequestList.get_by_filters( context, filters, limit=limit, marker=marker, sort_keys=sort_keys, sort_dirs=sort_dirs) # If we found the marker in the build requests we need to set it # to None so we don't expect to find it in the cells below. marker = None except exception.MarkerNotFound: # If we didn't find the marker in the build requests then keep # looking for it in the cells. build_requests = objects.BuildRequestList() build_req_instances = objects.InstanceList( objects=[build_req.instance for build_req in build_requests]) # Only subtract from limit if it is not None limit = (limit - len(build_req_instances)) if limit else limit # We could arguably avoid joining on security_groups if we're using # neutron (which is the default) but if you're using neutron then the # security_group_instance_association table should be empty anyway # and the DB should optimize out that join, making it insignificant. fields = ['metadata', 'info_cache', 'security_groups'] if expected_attrs: fields.extend(expected_attrs) insts, down_cell_uuids = instance_list.get_instance_objects_sorted( context, filters, limit, marker, fields, sort_keys, sort_dirs, cell_down_support=cell_down_support) def _get_unique_filter_method(): seen_uuids = set() def _filter(instance): # During a cross-cell move operation we could have the instance # in more than one cell database so we not only have to filter # duplicates but we want to make sure we only return the # "current" one which should also be the one that the instance # mapping points to, but we don't want to do that expensive # lookup here. The DB API will filter out hidden instances by # default but there is a small window where two copies of an # instance could be hidden=False in separate cell DBs. # NOTE(mriedem): We could make this better in the case that we # have duplicate instances that are both hidden=False by # showing the one with the newer updated_at value, but that # could be tricky if the user is filtering on # changes-since/before or updated_at, or sorting on updated_at, # but technically that was already potentially broken with this # _filter method if we return an older BuildRequest.instance, # and given the window should be very small where we have # duplicates, it's probably not worth the complexity. if instance.uuid in seen_uuids: return False seen_uuids.add(instance.uuid) return True return _filter filter_method = _get_unique_filter_method() # Only subtract from limit if it is not None limit = (limit - len(insts)) if limit else limit # TODO(alaski): Clean up the objects concatenation when List objects # support it natively. instances = objects.InstanceList( objects=list(filter(filter_method, build_req_instances.objects + insts.objects))) if filter_ip: instances = self._ip_filter(instances, filters, orig_limit) if cell_down_support: # API and client want minimal construct instances for any cells # that didn't return, so generate and prefix those to the actual # results.
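# (Each minimal construct is a partial Instance carrying only the uuid, # project_id and created_at from its instance mapping; see # _generate_minimal_construct_for_down_cells above.)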
project = search_opts.get('project_id', context.project_id) if all_tenants: # NOTE(tssurya): The only scenario where project has to be None # is when using "all_tenants" in which case we do not want # the query to be restricted based on the project_id. project = None limit = (orig_limit - len(instances)) if limit else limit return (self._generate_minimal_construct_for_down_cells(context, down_cell_uuids, project, limit) + instances) return instances @staticmethod def _ip_filter(inst_models, filters, limit): ipv4_f = re.compile(str(filters.get('ip'))) ipv6_f = re.compile(str(filters.get('ip6'))) def _match_instance(instance): nw_info = instance.get_network_info() for vif in nw_info: for fixed_ip in vif.fixed_ips(): address = fixed_ip.get('address') if not address: continue version = fixed_ip.get('version') if ((version == 4 and ipv4_f.match(address)) or (version == 6 and ipv6_f.match(address))): return True return False result_objs = [] for instance in inst_models: if _match_instance(instance): result_objs.append(instance) if limit and len(result_objs) == limit: break return objects.InstanceList(objects=result_objs) def _ip_filter_using_neutron(self, context, filters): ip4_address = filters.get('ip') ip6_address = filters.get('ip6') addresses = [ip4_address, ip6_address] uuids = [] for address in addresses: if address: try: ports = self.network_api.list_ports( context, fixed_ips='ip_address_substr=' + address, fields=['device_id'])['ports'] for port in ports: uuids.append(port['device_id']) except Exception as e: LOG.error('An error occurred while listing ports ' 'with an ip_address filter value of "%s". ' 'Error: %s', address, six.text_type(e)) return uuids def update_instance(self, context, instance, updates): """Updates a single Instance object with some updates dict. Returns the updated instance. """ # NOTE(sbauza): Given we only persist the Instance object after we # create the BuildRequest, we are sure that if the Instance object # has an ID field set, then it was persisted in the right Cell DB. if instance.obj_attr_is_set('id'): instance.update(updates) instance.save() else: # Instance is not yet mapped to a cell, so we need to update # BuildRequest instead # TODO(sbauza): Fix the possible race conditions where BuildRequest # could be deleted because of either a concurrent instance delete # or because the scheduler just returned a destination right # after we called the instance in the API. try: build_req = objects.BuildRequest.get_by_instance_uuid( context, instance.uuid) instance = build_req.instance instance.update(updates) # FIXME(sbauza): Here we are updating the current # thread-related BuildRequest object. Given that another worker # could be looking up that BuildRequest in the API, it # means that it could pass it down to the conductor without # making sure that it's not updated, so we could have some race # condition where it would be missing the updated fields, but # that's something we could discuss once the instance record # is persisted by the conductor. build_req.save() except exception.BuildRequestNotFound: # Instance was mapped and the BuildRequest was deleted # while fetching (and possibly the instance could have been # deleted as well). We need to look up the Instance object # again in order to update it correctly. # TODO(sbauza): Figure out a good way to know the expected # attributes by checking which fields are set or not.
expected_attrs = ['flavor', 'pci_devices', 'numa_topology', 'tags', 'metadata', 'system_metadata', 'security_groups', 'info_cache'] inst_map = self._get_instance_map_or_none(context, instance.uuid) if inst_map and (inst_map.cell_mapping is not None): with nova_context.target_cell( context, inst_map.cell_mapping) as cctxt: instance = objects.Instance.get_by_uuid( cctxt, instance.uuid, expected_attrs=expected_attrs) instance.update(updates) instance.save() else: # Conductor doesn't delete the BuildRequest until after the # InstanceMapping record is created, so if we didn't get # that and the BuildRequest doesn't exist, then the # instance is already gone and we need to just error out. raise exception.InstanceNotFound(instance_id=instance.uuid) return instance # NOTE(melwitt): We don't check instance lock for backup because lock is # intended to prevent accidental change/delete of instances @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED, vm_states.PAUSED, vm_states.SUSPENDED]) def backup(self, context, instance, name, backup_type, rotation, extra_properties=None): """Backup the given instance :param instance: nova.objects.instance.Instance object :param name: name of the backup :param backup_type: 'daily' or 'weekly' :param rotation: int representing how many backups to keep around; None if rotation shouldn't be used (as in the case of snapshots) :param extra_properties: dict of extra image properties to include when creating the image. :returns: A dict containing image metadata """ props_copy = dict(extra_properties, backup_type=backup_type) if compute_utils.is_volume_backed_instance(context, instance): LOG.info("It's not supported to backup volume backed " "instance.", instance=instance) raise exception.InvalidRequest( _('Backup is not supported for volume-backed instances.')) else: image_meta = compute_utils.create_image( context, instance, name, 'backup', self.image_api, extra_properties=props_copy) instance.task_state = task_states.IMAGE_BACKUP instance.save(expected_task_state=[None]) self._record_action_start(context, instance, instance_actions.BACKUP) self.compute_rpcapi.backup_instance(context, instance, image_meta['id'], backup_type, rotation) return image_meta # NOTE(melwitt): We don't check instance lock for snapshot because lock is # intended to prevent accidental change/delete of instances @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED, vm_states.PAUSED, vm_states.SUSPENDED]) def snapshot(self, context, instance, name, extra_properties=None): """Snapshot the given instance. :param instance: nova.objects.instance.Instance object :param name: name of the snapshot :param extra_properties: dict of extra image properties to include when creating the image. 
:returns: A dict containing image metadata """ image_meta = compute_utils.create_image( context, instance, name, 'snapshot', self.image_api, extra_properties=extra_properties) instance.task_state = task_states.IMAGE_SNAPSHOT_PENDING try: instance.save(expected_task_state=[None]) except (exception.InstanceNotFound, exception.UnexpectedDeletingTaskStateError) as ex: # Changing the instance task state to use in raising the # InstanceInvalidException below LOG.debug('Instance disappeared during snapshot.', instance=instance) try: image_id = image_meta['id'] self.image_api.delete(context, image_id) LOG.info('Image %s deleted because instance ' 'deleted before snapshot started.', image_id, instance=instance) except exception.ImageNotFound: pass except Exception as exc: LOG.warning("Error while trying to clean up image %(img_id)s: " "%(error_msg)s", {"img_id": image_meta['id'], "error_msg": six.text_type(exc)}) attr = 'task_state' state = task_states.DELETING if type(ex) == exception.InstanceNotFound: attr = 'vm_state' state = vm_states.DELETED raise exception.InstanceInvalidState(attr=attr, instance_uuid=instance.uuid, state=state, method='snapshot') self._record_action_start(context, instance, instance_actions.CREATE_IMAGE) self.compute_rpcapi.snapshot_instance(context, instance, image_meta['id']) return image_meta # NOTE(melwitt): We don't check instance lock for snapshot because lock is # intended to prevent accidental change/delete of instances @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED, vm_states.PAUSED, vm_states.SUSPENDED]) def snapshot_volume_backed(self, context, instance, name, extra_properties=None): """Snapshot the given volume-backed instance. :param instance: nova.objects.instance.Instance object :param name: name of the backup or snapshot :param extra_properties: dict of extra image properties to include :returns: the new image metadata """ image_meta = compute_utils.initialize_instance_snapshot_metadata( context, instance, name, extra_properties) # the new image is simply a bucket of properties (particularly the # block device mapping, kernel and ramdisk IDs) with no image data, # hence the zero size image_meta['size'] = 0 for attr in ('container_format', 'disk_format'): image_meta.pop(attr, None) properties = image_meta['properties'] # clean properties before filling for key in ('block_device_mapping', 'bdm_v2', 'root_device_name'): properties.pop(key, None) if instance.root_device_name: properties['root_device_name'] = instance.root_device_name bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) mapping = [] # list of BDM dicts that can go into the image properties # Do some up-front filtering of the list of BDMs from # which we are going to create snapshots. volume_bdms = [] for bdm in bdms: if bdm.no_device: continue if bdm.is_volume: # These will be handled below. volume_bdms.append(bdm) else: mapping.append(bdm.get_image_mapping()) # Check limits in Cinder before creating snapshots to avoid going over # quota in the middle of a list of volumes. This is a best-effort check # but concurrently running snapshot requests from the same project # could still fail to create volume snapshots if they go over limit. 
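# (Illustrative numbers: if Cinder reports totalSnapshotsUsed=8 and # maxTotalSnapshots=10, a request needing 3 new volume snapshots fails the # check below since 3 + 8 > 10 and OverQuota is raised before any snapshot # is created; maxTotalSnapshots of -1 means the quota is unlimited.)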
if volume_bdms: limits = self.volume_api.get_absolute_limits(context) total_snapshots_used = limits['totalSnapshotsUsed'] max_snapshots = limits['maxTotalSnapshots'] # -1 means there is unlimited quota for snapshots if (max_snapshots > -1 and len(volume_bdms) + total_snapshots_used > max_snapshots): LOG.debug('Unable to create volume snapshots for instance. ' 'Currently has %s snapshots, requesting %s new ' 'snapshots, with a limit of %s.', total_snapshots_used, len(volume_bdms), max_snapshots, instance=instance) raise exception.OverQuota(overs='snapshots') quiesced = False if instance.vm_state == vm_states.ACTIVE: try: LOG.info("Attempting to quiesce instance before volume " "snapshot.", instance=instance) self.compute_rpcapi.quiesce_instance(context, instance) quiesced = True except (exception.InstanceQuiesceNotSupported, exception.QemuGuestAgentNotEnabled, exception.NovaException, NotImplementedError) as err: if strutils.bool_from_string(instance.system_metadata.get( 'image_os_require_quiesce')): raise if isinstance(err, exception.NovaException): LOG.info('Skipping quiescing instance: %(reason)s.', {'reason': err.format_message()}, instance=instance) else: LOG.info('Skipping quiescing instance because the ' 'operation is not supported by the underlying ' 'compute driver.', instance=instance) # NOTE(tasker): discovered that an uncaught exception could occur # after the instance has been frozen. catch and thaw. except Exception as ex: with excutils.save_and_reraise_exception(): LOG.error("An error occurred during quiesce of instance. " "Unquiescing to ensure instance is thawed. " "Error: %s", six.text_type(ex), instance=instance) self.compute_rpcapi.unquiesce_instance(context, instance, mapping=None) @wrap_instance_event(prefix='api') def snapshot_instance(self, context, instance, bdms): try: for bdm in volume_bdms: # create snapshot based on volume_id volume = self.volume_api.get(context, bdm.volume_id) # NOTE(yamahata): Should we wait for snapshot creation? # Linux LVM snapshot creation completes in # short time, it doesn't matter for now. name = _('snapshot for %s') % image_meta['name'] LOG.debug('Creating snapshot from volume %s.', volume['id'], instance=instance) snapshot = self.volume_api.create_snapshot_force( context, volume['id'], name, volume['display_description']) mapping_dict = block_device.snapshot_from_bdm( snapshot['id'], bdm) mapping_dict = mapping_dict.get_image_mapping() mapping.append(mapping_dict) return mapping # NOTE(tasker): No error handling is done in the above for loop. # This means that if the snapshot fails and throws an exception # the traceback will skip right over the unquiesce needed below. # Here, catch any exception, unquiesce the instance, and raise the # error so that the calling function can do what it needs to in # order to properly treat a failed snap. 
except Exception: with excutils.save_and_reraise_exception(): if quiesced: LOG.info("Unquiescing instance after volume snapshot " "failure.", instance=instance) self.compute_rpcapi.unquiesce_instance( context, instance, mapping) self._record_action_start(context, instance, instance_actions.CREATE_IMAGE) mapping = snapshot_instance(self, context, instance, bdms) if quiesced: self.compute_rpcapi.unquiesce_instance(context, instance, mapping) if mapping: properties['block_device_mapping'] = mapping properties['bdm_v2'] = True return self.image_api.create(context, image_meta) @check_instance_lock def reboot(self, context, instance, reboot_type): """Reboot the given instance.""" if reboot_type == 'SOFT': self._soft_reboot(context, instance) else: self._hard_reboot(context, instance) @check_instance_state(vm_state=set(vm_states.ALLOW_SOFT_REBOOT), task_state=[None]) def _soft_reboot(self, context, instance): expected_task_state = [None] instance.task_state = task_states.REBOOTING instance.save(expected_task_state=expected_task_state) self._record_action_start(context, instance, instance_actions.REBOOT) self.compute_rpcapi.reboot_instance(context, instance=instance, block_device_info=None, reboot_type='SOFT') @check_instance_state(vm_state=set(vm_states.ALLOW_HARD_REBOOT), task_state=task_states.ALLOW_REBOOT) def _hard_reboot(self, context, instance): instance.task_state = task_states.REBOOTING_HARD instance.save(expected_task_state=task_states.ALLOW_REBOOT) self._record_action_start(context, instance, instance_actions.REBOOT) self.compute_rpcapi.reboot_instance(context, instance=instance, block_device_info=None, reboot_type='HARD') def _check_image_arch(self, image=None): if image: img_arch = image.get("properties", {}).get('hw_architecture') if img_arch: fields_obj.Architecture.canonicalize(img_arch) @block_accelerators # TODO(stephenfin): We should expand kwargs out to named args @check_instance_lock @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED, vm_states.ERROR]) def rebuild(self, context, instance, image_href, admin_password, files_to_inject=None, **kwargs): """Rebuild the given instance with the provided attributes.""" files_to_inject = files_to_inject or [] metadata = kwargs.get('metadata', {}) preserve_ephemeral = kwargs.get('preserve_ephemeral', False) auto_disk_config = kwargs.get('auto_disk_config') if 'key_name' in kwargs: key_name = kwargs.pop('key_name') if key_name: # NOTE(liuyulong): we are intentionally using the user_id from # the request context rather than the instance.user_id because # users own keys but instances are owned by projects, and # another user in the same project can rebuild an instance # even if they didn't create it. key_pair = objects.KeyPair.get_by_name(context, context.user_id, key_name) instance.key_name = key_pair.name instance.key_data = key_pair.public_key instance.keypairs = objects.KeyPairList(objects=[key_pair]) else: instance.key_name = None instance.key_data = None instance.keypairs = objects.KeyPairList(objects=[]) # Use trusted_certs value from kwargs to create TrustedCerts object trusted_certs = None if 'trusted_certs' in kwargs: # Note that the user can set, change, or unset / reset trusted # certs. If they are explicitly specifying # trusted_image_certificates=None, that means we'll either unset # them on the instance *or* reset to use the defaults (if defaults # are configured). 
trusted_certs = kwargs.pop('trusted_certs') instance.trusted_certs = self._retrieve_trusted_certs_object( context, trusted_certs, rebuild=True) image_id, image = self._get_image(context, image_href) self._check_auto_disk_config(image=image, auto_disk_config=auto_disk_config) self._check_image_arch(image=image) flavor = instance.get_flavor() bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) root_bdm = compute_utils.get_root_bdm(context, instance, bdms) # Check to see if the image is changing and we have a volume-backed # server. The compute doesn't support changing the image in the # root disk of a volume-backed server, so we need to just fail fast. is_volume_backed = compute_utils.is_volume_backed_instance( context, instance, bdms) if is_volume_backed: if trusted_certs: # The only way we can get here is if the user tried to set # trusted certs or specified trusted_image_certificates=None # and default_trusted_certificate_ids is configured. msg = _("Image certificate validation is not supported " "for volume-backed servers.") raise exception.CertificateValidationFailed(message=msg) # For boot from volume, instance.image_ref is empty, so we need to # query the image from the volume. if root_bdm is None: # This shouldn't happen and is an error, we need to fail. This # is not the users fault, it's an internal error. Without a # root BDM we have no way of knowing the backing volume (or # image in that volume) for this instance. raise exception.NovaException( _('Unable to find root block device mapping for ' 'volume-backed instance.')) volume = self.volume_api.get(context, root_bdm.volume_id) volume_image_metadata = volume.get('volume_image_metadata', {}) orig_image_ref = volume_image_metadata.get('image_id') if orig_image_ref != image_href: # Leave a breadcrumb. LOG.debug('Requested to rebuild instance with a new image %s ' 'for a volume-backed server with image %s in its ' 'root volume which is not supported.', image_href, orig_image_ref, instance=instance) msg = _('Unable to rebuild with a different image for a ' 'volume-backed server.') raise exception.ImageUnacceptable( image_id=image_href, reason=msg) else: orig_image_ref = instance.image_ref request_spec = objects.RequestSpec.get_by_instance_uuid( context, instance.uuid) self._checks_for_create_and_rebuild(context, image_id, image, flavor, metadata, files_to_inject, root_bdm) # Check the state of the volume. If it is not in-use, an exception # will occur when creating attachment during reconstruction, # resulting in the failure of reconstruction and the instance # turning into an error state. self._check_volume_status(context, bdms) # NOTE(sean-k-mooney): When we rebuild with a new image we need to # validate that the NUMA topology does not change as we do a NOOP claim # in resource tracker. As such we cannot allow the resource usage or # assignment to change as a result of a new image altering the # numa constraints. if orig_image_ref != image_href: self._validate_numa_rebuild(instance, image, flavor) kernel_id, ramdisk_id = self._handle_kernel_and_ramdisk( context, None, None, image) def _reset_image_metadata(): """Remove old image properties that we're storing as instance system metadata. These properties start with 'image_'. Then add the properties for the new image. """ # FIXME(comstud): There's a race condition here in that if # the system_metadata for this instance is updated after # we do the previous save() and before we update.. those # other updates will be lost. 
Since this problem exists in # a lot of other places, I think it should be addressed in # a DB layer overhaul. orig_sys_metadata = dict(instance.system_metadata) # Remove the old keys for key in list(instance.system_metadata.keys()): if key.startswith(utils.SM_IMAGE_PROP_PREFIX): del instance.system_metadata[key] # Add the new ones new_sys_metadata = utils.get_system_metadata_from_image( image, flavor) new_sys_metadata.update({'image_base_image_ref': image_id}) instance.system_metadata.update(new_sys_metadata) instance.save() return orig_sys_metadata # Since image might have changed, we may have new values for # os_type, vm_mode, etc options_from_image = self._inherit_properties_from_image( image, auto_disk_config) instance.update(options_from_image) instance.task_state = task_states.REBUILDING # An empty instance.image_ref is currently used as an indication # of BFV. Preserve that over a rebuild to not break users. if not is_volume_backed: instance.image_ref = image_href instance.kernel_id = kernel_id or "" instance.ramdisk_id = ramdisk_id or "" instance.progress = 0 instance.update(kwargs) instance.save(expected_task_state=[None]) # On a rebuild, since we're potentially changing images, we need to # wipe out the old image properties that we're storing as instance # system metadata... and copy in the properties for the new image. orig_sys_metadata = _reset_image_metadata() self._record_action_start(context, instance, instance_actions.REBUILD) # NOTE(sbauza): The migration script we provided in Newton should make # sure that all our instances are currently migrated to have an # attached RequestSpec object but let's consider that the operator only # half migrated all their instances in the meantime. host = instance.host # If a new image is provided on rebuild, we will need to run # through the scheduler again, but we want the instance to be # rebuilt on the same host it's already on. if orig_image_ref != image_href: # We have to modify the request spec that goes to the scheduler # to contain the new image. We persist this since we've already # changed the instance.image_ref above so we're being # consistent. request_spec.image = objects.ImageMeta.from_dict(image) request_spec.save() if 'scheduler_hints' not in request_spec: request_spec.scheduler_hints = {} # Nuke the id on this so we can't accidentally save # this hint hack later del request_spec.id # NOTE(danms): Passing host=None tells conductor to # call the scheduler. The _nova_check_type hint # requires that the scheduler returns only the same # host that we are currently on and only checks # rebuild-related filters. request_spec.scheduler_hints['_nova_check_type'] = ['rebuild'] request_spec.force_hosts = [instance.host] request_spec.force_nodes = [instance.node] host = None self.compute_task_api.rebuild_instance(context, instance=instance, new_pass=admin_password, injected_files=files_to_inject, image_ref=image_href, orig_image_ref=orig_image_ref, orig_sys_metadata=orig_sys_metadata, bdms=bdms, preserve_ephemeral=preserve_ephemeral, host=host, request_spec=request_spec) def _check_volume_status(self, context, bdms): """Check whether the status of the volume is "in-use". :param context: A context.RequestContext :param bdms: BlockDeviceMappingList of BDMs for the instance """ for bdm in bdms: if bdm.volume_id: vol = self.volume_api.get(context, bdm.volume_id) self.volume_api.check_attached(context, vol) @staticmethod def _validate_numa_rebuild(instance, image, flavor): """validates that the NUMA constraints do not change on rebuild. 
:param instance: nova.objects.instance.Instance object :param image: the new image the instance will be rebuilt with. :param flavor: the flavor of the current instance. :raises: nova.exception.ImageNUMATopologyRebuildConflict """ # NOTE(sean-k-mooney): currently it is not possible to express # a PCI NUMA affinity policy via flavor or image but that will # change in the future. we pull out the image metadata into # separate variable to make future testing of this easier. old_image_meta = instance.image_meta new_image_meta = objects.ImageMeta.from_dict(image) old_constraints = hardware.numa_get_constraints(flavor, old_image_meta) new_constraints = hardware.numa_get_constraints(flavor, new_image_meta) # early out for non NUMA instances if old_constraints is None and new_constraints is None: return # if only one of the constraints are non-None (or 'set') then the # constraints changed so raise an exception. if old_constraints is None or new_constraints is None: action = "removing" if old_constraints else "introducing" LOG.debug("NUMA rebuild validation failed. The requested image " "would alter the NUMA constraints by %s a NUMA " "topology.", action, instance=instance) raise exception.ImageNUMATopologyRebuildConflict() # otherwise since both the old a new constraints are non none compare # them as dictionaries. old = old_constraints.obj_to_primitive() new = new_constraints.obj_to_primitive() if old != new: LOG.debug("NUMA rebuild validation failed. The requested image " "conflicts with the existing NUMA constraints.", instance=instance) raise exception.ImageNUMATopologyRebuildConflict() # TODO(sean-k-mooney): add PCI NUMA affinity policy check. @staticmethod def _check_quota_for_upsize(context, instance, current_flavor, new_flavor): project_id, user_id = quotas_obj.ids_from_instance(context, instance) # Deltas will be empty if the resize is not an upsize. deltas = compute_utils.upsize_quota_delta(new_flavor, current_flavor) if deltas: try: res_deltas = {'cores': deltas.get('cores', 0), 'ram': deltas.get('ram', 0)} objects.Quotas.check_deltas(context, res_deltas, project_id, user_id=user_id, check_project_id=project_id, check_user_id=user_id) except exception.OverQuota as exc: quotas = exc.kwargs['quotas'] overs = exc.kwargs['overs'] usages = exc.kwargs['usages'] headroom = compute_utils.get_headroom(quotas, usages, deltas) (overs, reqs, total_alloweds, useds) = compute_utils.get_over_quota_detail(headroom, overs, quotas, deltas) LOG.info("%(overs)s quota exceeded for %(pid)s," " tried to resize instance.", {'overs': overs, 'pid': context.project_id}) raise exception.TooManyInstances(overs=overs, req=reqs, used=useds, allowed=total_alloweds) @check_instance_lock @check_instance_state(vm_state=[vm_states.RESIZED]) def revert_resize(self, context, instance): """Reverts a resize or cold migration, deleting the 'new' instance in the process. """ elevated = context.elevated() migration = objects.Migration.get_by_instance_and_status( elevated, instance.uuid, 'finished') # If this is a resize down, a revert might go over quota. self._check_quota_for_upsize(context, instance, instance.flavor, instance.old_flavor) # The AZ for the server may have changed when it was migrated so while # we are in the API and have access to the API DB, update the # instance.availability_zone before casting off to the compute service. # Note that we do this in the API to avoid an "up-call" from the # compute service to the API DB. 
This is not great in case something # fails during revert before the instance.host is updated to the # original source host, but it is good enough for now. Long-term we # could consider passing the AZ down to compute so it can set it when # the instance.host value is set in finish_revert_resize. instance.availability_zone = ( availability_zones.get_host_availability_zone( context, migration.source_compute)) # If this was a resize, the conductor may have updated the # RequestSpec.flavor field (to point at the new flavor) and the # RequestSpec.numa_topology field (to reflect the new flavor's extra # specs) during the initial resize operation, so we need to update the # RequestSpec to point back at the original flavor and reflect the NUMA # settings of this flavor, otherwise subsequent move operations through # the scheduler will be using the wrong values. There's no need to do # this if the flavor hasn't changed though and we're migrating rather # than resizing. reqspec = objects.RequestSpec.get_by_instance_uuid( context, instance.uuid) if reqspec.flavor['id'] != instance.old_flavor['id']: reqspec.flavor = instance.old_flavor reqspec.numa_topology = hardware.numa_get_constraints( instance.old_flavor, instance.image_meta) reqspec.save() # NOTE(gibi): This is a performance optimization. If the network info # cache does not have ports with allocations in the binding profile # then we can skip reading port resource request from neutron below. # If a port has a resource request then the finish_resize call would # have already put an allocation in the binding profile during the # resize. if instance.get_network_info().has_port_with_allocation(): # TODO(gibi): do not directly overwrite the # RequestSpec.requested_resources as others like cyborg might have # added to things there already # NOTE(gibi): We need to collect the requested resource again as it # is intentionally not persisted in nova. Note that this needs to # be done here as the nova API code directly calls revert on the # dest compute service skipping the conductor. port_res_req = ( self.network_api.get_requested_resource_for_instance( context, instance.uuid)) reqspec.requested_resources = port_res_req instance.task_state = task_states.RESIZE_REVERTING instance.save(expected_task_state=[None]) migration.status = 'reverting' migration.save() self._record_action_start(context, instance, instance_actions.REVERT_RESIZE) if migration.cross_cell_move: # RPC cast to conductor to orchestrate the revert of the cross-cell # resize. self.compute_task_api.revert_snapshot_based_resize( context, instance, migration) else: # TODO(melwitt): We're not rechecking for strict quota here to # guard against going over quota during a race at this time because # the resource consumption for this operation is written to the # database by compute. self.compute_rpcapi.revert_resize(context, instance, migration, migration.dest_compute, reqspec) @staticmethod def _get_source_compute_service(context, migration): """Find the source compute Service object given the Migration. :param context: nova auth RequestContext targeted at the destination compute cell :param migration: Migration object for the move operation :return: Service object representing the source host nova-compute """ if migration.cross_cell_move: # The source compute could be in another cell so look up the # HostMapping to determine the source cell.
hm = objects.HostMapping.get_by_host( context, migration.source_compute) with nova_context.target_cell(context, hm.cell_mapping) as cctxt: return objects.Service.get_by_compute_host( cctxt, migration.source_compute) # Same-cell migration so just use the context we have. return objects.Service.get_by_compute_host( context, migration.source_compute) @check_instance_lock @check_instance_state(vm_state=[vm_states.RESIZED]) def confirm_resize(self, context, instance, migration=None): """Confirms a migration/resize and deletes the 'old' instance. :param context: nova auth RequestContext :param instance: Instance object to confirm the resize :param migration: Migration object; provided if called from the _poll_unconfirmed_resizes periodic task on the dest compute. :raises: MigrationNotFound if migration is not provided and a migration cannot be found for the instance with status "finished". :raises: ServiceUnavailable if the source compute service is down. """ elevated = context.elevated() # NOTE(melwitt): We're not checking quota here because there isn't a # change in resource usage when confirming a resize. Resource # consumption for resizes are written to the database by compute, so # a confirm resize is just a clean up of the migration objects and a # state change in compute. if migration is None: migration = objects.Migration.get_by_instance_and_status( elevated, instance.uuid, 'finished') # Check if the source compute service is up before modifying the # migration record because once we do we cannot come back through this # method since it will be looking for a "finished" status migration. source_svc = self._get_source_compute_service(context, migration) if not self.servicegroup_api.service_is_up(source_svc): raise exception.ServiceUnavailable() migration.status = 'confirming' migration.save() self._record_action_start(context, instance, instance_actions.CONFIRM_RESIZE) # Check to see if this was a cross-cell resize, in which case the # resized instance is in the target cell (the migration and instance # came from the target cell DB in this case), and we need to cleanup # the source host and source cell database records. if migration.cross_cell_move: self.compute_task_api.confirm_snapshot_based_resize( context, instance, migration) else: # It's a traditional resize within a single cell, so RPC cast to # the source compute host to cleanup the host since the instance # is already on the target host. self.compute_rpcapi.confirm_resize(context, instance, migration, migration.source_compute) def _allow_cross_cell_resize(self, context, instance): """Determine if the request can perform a cross-cell resize on this instance. :param context: nova auth request context for the resize operation :param instance: Instance object being resized :returns: True if cross-cell resize is allowed, False otherwise """ # First check to see if the requesting project/user is allowed by # policy to perform cross-cell resize. allowed = context.can( servers_policies.CROSS_CELL_RESIZE, target={'user_id': instance.user_id, 'project_id': instance.project_id}, fatal=False) # If the user is allowed by policy, check to make sure the deployment # is upgraded to the point of supporting cross-cell resize on all # compute services. if allowed: # TODO(mriedem): We can remove this minimum compute version check # in the 22.0.0 "V" release. 
min_compute_version = ( objects.service.get_minimum_version_all_cells( context, ['nova-compute'])) if min_compute_version < MIN_COMPUTE_CROSS_CELL_RESIZE: LOG.debug('Request is allowed by policy to perform cross-cell ' 'resize but the minimum nova-compute service ' 'version in the deployment %s is less than %s so ' 'cross-cell resize is not allowed at this time.', min_compute_version, MIN_COMPUTE_CROSS_CELL_RESIZE) return False if self.network_api.get_requested_resource_for_instance( context, instance.uuid): LOG.info( 'Request is allowed by policy to perform cross-cell ' 'resize but the instance has ports with resource request ' 'and cross-cell resize is not supported with such ports.', instance=instance) return False return allowed @staticmethod def _validate_host_for_cold_migrate( context, instance, host_name, allow_cross_cell_resize): """Validates a host specified for cold migration. :param context: nova auth request context for the cold migration :param instance: Instance object being cold migrated :param host_name: User-specified compute service hostname for the desired destination of the instance during the cold migration :param allow_cross_cell_resize: If True, cross-cell resize is allowed for this operation and the host could be in a different cell from the one that the instance is currently in. If False, the specified host must be in the same cell as the instance. :returns: ComputeNode object of the requested host :raises: CannotMigrateToSameHost if the host is the same as the current instance.host :raises: ComputeHostNotFound if the specified host cannot be found """ # Cannot migrate to the host where the instance exists # because it is useless. if host_name == instance.host: raise exception.CannotMigrateToSameHost() # Check whether host exists or not. If a cross-cell resize is # allowed, the host could be in another cell from the one the # instance is currently in, so we need to look up the HostMapping # to get the cell and look up the ComputeNode in that cell. if allow_cross_cell_resize: try: hm = objects.HostMapping.get_by_host(context, host_name) except exception.HostMappingNotFound: LOG.info('HostMapping not found for host: %s', host_name) raise exception.ComputeHostNotFound(host=host_name) with nova_context.target_cell(context, hm.cell_mapping) as cctxt: node = objects.ComputeNode.\ get_first_node_by_host_for_old_compat( cctxt, host_name, use_slave=True) else: node = objects.ComputeNode.get_first_node_by_host_for_old_compat( context, host_name, use_slave=True) return node @block_accelerators @check_instance_lock @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED]) @check_instance_host(check_is_up=True) def resize(self, context, instance, flavor_id=None, clean_shutdown=True, host_name=None, auto_disk_config=None): """Resize (i.e., migrate) a running instance. If flavor_id is None, the process is considered a migration, keeping the original flavor_id. If flavor_id is not None, the instance should be migrated to a new host and resized to the new flavor_id. host_name is always None in the resize case. host_name can be set in the cold migration case only. """ allow_cross_cell_resize = self._allow_cross_cell_resize( context, instance) if host_name is not None: node = self._validate_host_for_cold_migrate( context, instance, host_name, allow_cross_cell_resize) self._check_auto_disk_config( instance, auto_disk_config=auto_disk_config) current_instance_type = instance.get_flavor() # If flavor_id is not provided, only migrate the instance.
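# (A plain cold migration therefore reuses the current flavor, while a # resize looks up the requested flavor and is rejected below if that # flavor requires accelerators or has a zero-sized root disk on a server # that is not volume-backed.)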
volume_backed = None if not flavor_id: LOG.debug("flavor_id is None. Assuming migration.", instance=instance) new_instance_type = current_instance_type else: new_instance_type = flavors.get_flavor_by_flavor_id( flavor_id, read_deleted="no") # NOTE(wenping): We use this instead of the 'block_accelerator' # decorator since the operation can differ depending on args, # and for resize we have two flavors to worry about, we should # reject resize with new flavor with accelerator. if new_instance_type.extra_specs.get('accel:device_profile'): raise exception.ForbiddenWithAccelerators() # Check to see if we're resizing to a zero-disk flavor which is # only supported with volume-backed servers. if (new_instance_type.get('root_gb') == 0 and current_instance_type.get('root_gb') != 0): volume_backed = compute_utils.is_volume_backed_instance( context, instance) if not volume_backed: reason = _('Resize to zero disk flavor is not allowed.') raise exception.CannotResizeDisk(reason=reason) current_instance_type_name = current_instance_type['name'] new_instance_type_name = new_instance_type['name'] LOG.debug("Old instance type %(current_instance_type_name)s, " "new instance type %(new_instance_type_name)s", {'current_instance_type_name': current_instance_type_name, 'new_instance_type_name': new_instance_type_name}, instance=instance) same_instance_type = (current_instance_type['id'] == new_instance_type['id']) # NOTE(sirp): We don't want to force a customer to change their flavor # when Ops is migrating off of a failed host. if not same_instance_type and new_instance_type.get('disabled'): raise exception.FlavorNotFound(flavor_id=flavor_id) if same_instance_type and flavor_id: raise exception.CannotResizeToSameFlavor() # ensure there is sufficient headroom for upsizes if flavor_id: self._check_quota_for_upsize(context, instance, current_instance_type, new_instance_type) if not same_instance_type: image = utils.get_image_from_system_metadata( instance.system_metadata) # Figure out if the instance is volume-backed but only if we didn't # already figure that out above (avoid the extra db hit). if volume_backed is None: volume_backed = compute_utils.is_volume_backed_instance( context, instance) # If the server is volume-backed, we still want to validate numa # and pci information in the new flavor, but we don't call # _validate_flavor_image_nostatus because how it handles checking # disk size validation was not intended for a volume-backed # resize case. 
if volume_backed: self._validate_flavor_image_numa_pci( image, new_instance_type, validate_pci=True) else: self._validate_flavor_image_nostatus( context, image, new_instance_type, root_bdm=None, validate_pci=True) filter_properties = {'ignore_hosts': []} if not self._allow_resize_to_same_host(same_instance_type, instance): filter_properties['ignore_hosts'].append(instance.host) request_spec = objects.RequestSpec.get_by_instance_uuid( context, instance.uuid) request_spec.ignore_hosts = filter_properties['ignore_hosts'] # don't recalculate the NUMA topology unless the flavor has changed if not same_instance_type: request_spec.numa_topology = hardware.numa_get_constraints( new_instance_type, instance.image_meta) instance.task_state = task_states.RESIZE_PREP instance.progress = 0 instance.auto_disk_config = auto_disk_config or False instance.save(expected_task_state=[None]) if not flavor_id: self._record_action_start(context, instance, instance_actions.MIGRATE) else: self._record_action_start(context, instance, instance_actions.RESIZE) # TODO(melwitt): We're not rechecking for strict quota here to guard # against going over quota during a race at this time because the # resource consumption for this operation is written to the database # by compute. scheduler_hint = {'filter_properties': filter_properties} if host_name is None: # If 'host_name' is not specified, # clear the 'requested_destination' field of the RequestSpec # except set the allow_cross_cell_move flag since conductor uses # it prior to scheduling. request_spec.requested_destination = objects.Destination( allow_cross_cell_move=allow_cross_cell_resize) else: # Set the host and the node so that the scheduler will # validate them. request_spec.requested_destination = objects.Destination( host=node.host, node=node.hypervisor_hostname, allow_cross_cell_move=allow_cross_cell_resize) # Asynchronously RPC cast to conductor so the response is not blocked # during scheduling. If something fails the user can find out via # instance actions. self.compute_task_api.resize_instance(context, instance, scheduler_hint=scheduler_hint, flavor=new_instance_type, clean_shutdown=clean_shutdown, request_spec=request_spec, do_cast=True) def _allow_resize_to_same_host(self, cold_migrate, instance): """Contains logic for excluding the instance.host on resize/migrate. If performing a cold migration and the compute node resource provider reports the COMPUTE_SAME_HOST_COLD_MIGRATE trait then same-host cold migration is allowed otherwise it is not and the current instance.host should be excluded as a scheduling candidate. :param cold_migrate: true if performing a cold migration, false for resize :param instance: Instance object being resized or cold migrated :returns: True if same-host resize/cold migrate is allowed, False otherwise """ if cold_migrate: # Check to see if the compute node resource provider on which the # instance is running has the COMPUTE_SAME_HOST_COLD_MIGRATE # trait. # Note that we check this here in the API since we cannot # pre-filter allocation candidates in the scheduler using this # trait as it would not work. For example, libvirt nodes will not # report the trait but using it as a forbidden trait filter when # getting allocation candidates would still return libvirt nodes # which means we could attempt to cold migrate to the same libvirt # node, which would fail. 
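            # Rough decision table for the checks that follow (illustrative
            # only; the code below is authoritative):
            #
            #   COMPUTE_SAME_HOST_COLD_MIGRATE trait reported -> allow
            #   trait missing, compute service new enough     -> disallow
            #   trait missing, compute service too old        -> fall back to
            #                           CONF.allow_resize_to_same_host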
ctxt = instance._context cn = objects.ComputeNode.get_by_host_and_nodename( ctxt, instance.host, instance.node) traits = self.placementclient.get_provider_traits( ctxt, cn.uuid).traits # If the provider has the trait it is (1) new enough to report that # trait and (2) supports cold migration on the same host. if os_traits.COMPUTE_SAME_HOST_COLD_MIGRATE in traits: allow_same_host = True else: # TODO(mriedem): Remove this compatibility code after one # release. If the compute is old we will not know if it # supports same-host cold migration so we fallback to config. service = objects.Service.get_by_compute_host(ctxt, cn.host) if service.version >= MIN_COMPUTE_SAME_HOST_COLD_MIGRATE: # The compute is new enough to report the trait but does # not so same-host cold migration is not allowed. allow_same_host = False else: # The compute is not new enough to report the trait so we # fallback to config. allow_same_host = CONF.allow_resize_to_same_host else: allow_same_host = CONF.allow_resize_to_same_host return allow_same_host @block_accelerators @check_instance_lock @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED, vm_states.PAUSED, vm_states.SUSPENDED]) def shelve(self, context, instance, clean_shutdown=True): """Shelve an instance. Shuts down an instance and frees it up to be removed from the hypervisor. """ instance.task_state = task_states.SHELVING # NOTE(aarents): Ensure image_base_image_ref is present as it will be # needed during unshelve and instance rebuild done before Bug/1893618 # Fix dropped it. instance.system_metadata.update( {'image_base_image_ref': instance.image_ref} ) instance.save(expected_task_state=[None]) self._record_action_start(context, instance, instance_actions.SHELVE) if not compute_utils.is_volume_backed_instance(context, instance): name = '%s-shelved' % instance.display_name image_meta = compute_utils.create_image( context, instance, name, 'snapshot', self.image_api) image_id = image_meta['id'] self.compute_rpcapi.shelve_instance(context, instance=instance, image_id=image_id, clean_shutdown=clean_shutdown) else: self.compute_rpcapi.shelve_offload_instance(context, instance=instance, clean_shutdown=clean_shutdown) @check_instance_lock @check_instance_state(vm_state=[vm_states.SHELVED]) def shelve_offload(self, context, instance, clean_shutdown=True): """Remove a shelved instance from the hypervisor.""" instance.task_state = task_states.SHELVING_OFFLOADING instance.save(expected_task_state=[None]) self._record_action_start(context, instance, instance_actions.SHELVE_OFFLOAD) self.compute_rpcapi.shelve_offload_instance(context, instance=instance, clean_shutdown=clean_shutdown) def _validate_unshelve_az(self, context, instance, availability_zone): """Verify the specified availability_zone during unshelve. Verifies that the server is shelved offloaded, the AZ exists and if [cinder]/cross_az_attach=False, that any attached volumes are in the same AZ. :param context: nova auth RequestContext for the unshelve action :param instance: Instance object for the server being unshelved :param availability_zone: The user-requested availability zone in which to unshelve the server. 
        :raises: UnshelveInstanceInvalidState if the server is not shelved
            offloaded
        :raises: InvalidRequest if the requested AZ does not exist
        :raises: MismatchVolumeAZException if [cinder]/cross_az_attach=False
            and any attached volumes are not in the requested AZ
        """
        if instance.vm_state != vm_states.SHELVED_OFFLOADED:
            # NOTE(brinzhang): If the server status is 'SHELVED', it still
            # belongs to a host, the availability_zone has not changed.
            # Unshelving a shelved offloaded server will go through the
            # scheduler to find a new host.
            raise exception.UnshelveInstanceInvalidState(
                state=instance.vm_state, instance_uuid=instance.uuid)

        available_zones = availability_zones.get_availability_zones(
            context, self.host_api, get_only_available=True)
        if availability_zone not in available_zones:
            msg = _('The requested availability zone is not available')
            raise exception.InvalidRequest(msg)

        # NOTE(brinzhang): When specifying an availability zone to unshelve
        # a shelved offloaded server, and conf cross_az_attach=False, we need
        # to determine if the attached volume AZ matches the user-specified
        # AZ.
        if not CONF.cinder.cross_az_attach:
            bdms = objects.BlockDeviceMappingList.get_by_instance_uuid(
                context, instance.uuid)
            for bdm in bdms:
                if bdm.is_volume and bdm.volume_id:
                    volume = self.volume_api.get(context, bdm.volume_id)
                    if availability_zone != volume['availability_zone']:
                        msg = _("The specified availability zone does not "
                                "match the volume %(vol_id)s attached to the "
                                "server. Specified availability zone is "
                                "%(az)s. Volume is in %(vol_zone)s.") % {
                            "vol_id": volume['id'],
                            "az": availability_zone,
                            "vol_zone": volume['availability_zone']}
                        raise exception.MismatchVolumeAZException(reason=msg)

    @check_instance_lock
    @check_instance_state(vm_state=[vm_states.SHELVED,
                                    vm_states.SHELVED_OFFLOADED])
    def unshelve(self, context, instance, new_az=None):
        """Restore a shelved instance."""
        request_spec = objects.RequestSpec.get_by_instance_uuid(
            context, instance.uuid)

        if new_az:
            self._validate_unshelve_az(context, instance, new_az)
            LOG.debug("Replace the old AZ %(old_az)s in RequestSpec "
                      "with a new AZ %(new_az)s of the instance.",
                      {"old_az": request_spec.availability_zone,
                       "new_az": new_az}, instance=instance)
            # Unshelving a shelved offloaded server will go through the
            # scheduler to pick a new host, so we update the
            # RequestSpec.availability_zone here. Note that if scheduling
            # fails the RequestSpec will remain updated, which is not great,
            # but if we want to change that we need to defer updating the
            # RequestSpec until conductor which probably means RPC changes to
            # pass the new_az variable to conductor. This is likely low
            # priority since the RequestSpec.availability_zone on a shelved
            # offloaded server does not mean much anyway and clearly the user
            # is trying to put the server in the target AZ.
request_spec.availability_zone = new_az request_spec.save() instance.task_state = task_states.UNSHELVING instance.save(expected_task_state=[None]) self._record_action_start(context, instance, instance_actions.UNSHELVE) self.compute_task_api.unshelve_instance(context, instance, request_spec) @check_instance_lock def add_fixed_ip(self, context, instance, network_id): """Add fixed_ip from specified network to given instance.""" self.compute_rpcapi.add_fixed_ip_to_instance(context, instance=instance, network_id=network_id) @check_instance_lock def remove_fixed_ip(self, context, instance, address): """Remove fixed_ip from specified network to given instance.""" self.compute_rpcapi.remove_fixed_ip_from_instance(context, instance=instance, address=address) @check_instance_lock @check_instance_state(vm_state=[vm_states.ACTIVE]) def pause(self, context, instance): """Pause the given instance.""" instance.task_state = task_states.PAUSING instance.save(expected_task_state=[None]) self._record_action_start(context, instance, instance_actions.PAUSE) self.compute_rpcapi.pause_instance(context, instance) @check_instance_lock @check_instance_state(vm_state=[vm_states.PAUSED]) def unpause(self, context, instance): """Unpause the given instance.""" instance.task_state = task_states.UNPAUSING instance.save(expected_task_state=[None]) self._record_action_start(context, instance, instance_actions.UNPAUSE) self.compute_rpcapi.unpause_instance(context, instance) @check_instance_host() def get_diagnostics(self, context, instance): """Retrieve diagnostics for the given instance.""" return self.compute_rpcapi.get_diagnostics(context, instance=instance) @check_instance_host() def get_instance_diagnostics(self, context, instance): """Retrieve diagnostics for the given instance.""" return self.compute_rpcapi.get_instance_diagnostics(context, instance=instance) @block_accelerators @reject_sev_instances(instance_actions.SUSPEND) @check_instance_lock @check_instance_state(vm_state=[vm_states.ACTIVE]) def suspend(self, context, instance): """Suspend the given instance.""" instance.task_state = task_states.SUSPENDING instance.save(expected_task_state=[None]) self._record_action_start(context, instance, instance_actions.SUSPEND) self.compute_rpcapi.suspend_instance(context, instance) @check_instance_lock @check_instance_state(vm_state=[vm_states.SUSPENDED]) def resume(self, context, instance): """Resume the given instance.""" instance.task_state = task_states.RESUMING instance.save(expected_task_state=[None]) self._record_action_start(context, instance, instance_actions.RESUME) self.compute_rpcapi.resume_instance(context, instance) @check_instance_lock @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED, vm_states.ERROR]) def rescue(self, context, instance, rescue_password=None, rescue_image_ref=None, clean_shutdown=True, allow_bfv_rescue=False): """Rescue the given instance.""" if rescue_image_ref: try: image_meta = image_meta_obj.ImageMeta.from_image_ref( context, self.image_api, rescue_image_ref) except (exception.ImageNotFound, exception.ImageBadRequest): LOG.warning("Failed to fetch rescue image metadata using " "image_ref %(image_ref)s", {'image_ref': rescue_image_ref}) raise exception.UnsupportedRescueImage( image=rescue_image_ref) # FIXME(lyarwood): There is currently no support for rescuing # instances using a volume snapshot so fail here before we cast to # the compute. 
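            # For illustration only (hypothetical property values): an image
            # created by snapshotting a volume-backed server typically carries
            # something like
            #
            #   {'img_block_device_mapping': [{'source_type': 'snapshot',
            #                                  'destination_type': 'volume',
            #                                  'boot_index': 0}]}
            #
            # and that is exactly what the check below rejects.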
if image_meta.properties.get('img_block_device_mapping'): LOG.warning("Unable to rescue an instance using a volume " "snapshot image with img_block_device_mapping " "image properties set") raise exception.UnsupportedRescueImage( image=rescue_image_ref) bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) self._check_volume_status(context, bdms) volume_backed = compute_utils.is_volume_backed_instance( context, instance, bdms) if volume_backed and allow_bfv_rescue: cn = objects.ComputeNode.get_by_host_and_nodename( context, instance.host, instance.node) traits = self.placementclient.get_provider_traits( context, cn.uuid).traits if os_traits.COMPUTE_RESCUE_BFV not in traits: reason = _("Host unable to rescue a volume-backed instance") raise exception.InstanceNotRescuable(instance_id=instance.uuid, reason=reason) elif volume_backed: reason = _("Cannot rescue a volume-backed instance") raise exception.InstanceNotRescuable(instance_id=instance.uuid, reason=reason) instance.task_state = task_states.RESCUING instance.save(expected_task_state=[None]) self._record_action_start(context, instance, instance_actions.RESCUE) self.compute_rpcapi.rescue_instance(context, instance=instance, rescue_password=rescue_password, rescue_image_ref=rescue_image_ref, clean_shutdown=clean_shutdown) @check_instance_lock @check_instance_state(vm_state=[vm_states.RESCUED]) def unrescue(self, context, instance): """Unrescue the given instance.""" instance.task_state = task_states.UNRESCUING instance.save(expected_task_state=[None]) self._record_action_start(context, instance, instance_actions.UNRESCUE) self.compute_rpcapi.unrescue_instance(context, instance=instance) @check_instance_lock @check_instance_state(vm_state=[vm_states.ACTIVE]) def set_admin_password(self, context, instance, password): """Set the root/admin password for the given instance. @param context: Nova auth context. @param instance: Nova instance object. @param password: The admin password for the instance. 
""" instance.task_state = task_states.UPDATING_PASSWORD instance.save(expected_task_state=[None]) self._record_action_start(context, instance, instance_actions.CHANGE_PASSWORD) self.compute_rpcapi.set_admin_password(context, instance=instance, new_pass=password) @check_instance_host() @reject_instance_state( task_state=[task_states.DELETING, task_states.MIGRATING]) def get_vnc_console(self, context, instance, console_type): """Get a url to an instance Console.""" connect_info = self.compute_rpcapi.get_vnc_console(context, instance=instance, console_type=console_type) return {'url': connect_info['access_url']} @check_instance_host() @reject_instance_state( task_state=[task_states.DELETING, task_states.MIGRATING]) def get_spice_console(self, context, instance, console_type): """Get a url to an instance Console.""" connect_info = self.compute_rpcapi.get_spice_console(context, instance=instance, console_type=console_type) return {'url': connect_info['access_url']} @check_instance_host() @reject_instance_state( task_state=[task_states.DELETING, task_states.MIGRATING]) def get_rdp_console(self, context, instance, console_type): """Get a url to an instance Console.""" connect_info = self.compute_rpcapi.get_rdp_console(context, instance=instance, console_type=console_type) return {'url': connect_info['access_url']} @check_instance_host() @reject_instance_state( task_state=[task_states.DELETING, task_states.MIGRATING]) def get_serial_console(self, context, instance, console_type): """Get a url to a serial console.""" connect_info = self.compute_rpcapi.get_serial_console(context, instance=instance, console_type=console_type) return {'url': connect_info['access_url']} @check_instance_host() @reject_instance_state( task_state=[task_states.DELETING, task_states.MIGRATING]) def get_mks_console(self, context, instance, console_type): """Get a url to a MKS console.""" connect_info = self.compute_rpcapi.get_mks_console(context, instance=instance, console_type=console_type) return {'url': connect_info['access_url']} @check_instance_host() def get_console_output(self, context, instance, tail_length=None): """Get console output for an instance.""" return self.compute_rpcapi.get_console_output(context, instance=instance, tail_length=tail_length) def lock(self, context, instance, reason=None): """Lock the given instance.""" # Only update the lock if we are an admin (non-owner) is_owner = instance.project_id == context.project_id if instance.locked and is_owner: return context = context.elevated() self._record_action_start(context, instance, instance_actions.LOCK) @wrap_instance_event(prefix='api') def lock(self, context, instance, reason=None): LOG.debug('Locking', instance=instance) instance.locked = True instance.locked_by = 'owner' if is_owner else 'admin' if reason: instance.system_metadata['locked_reason'] = reason instance.save() lock(self, context, instance, reason=reason) compute_utils.notify_about_instance_action( context, instance, CONF.host, action=fields_obj.NotificationAction.LOCK, source=fields_obj.NotificationSource.API) def is_expected_locked_by(self, context, instance): is_owner = instance.project_id == context.project_id expect_locked_by = 'owner' if is_owner else 'admin' locked_by = instance.locked_by if locked_by and locked_by != expect_locked_by: return False return True def unlock(self, context, instance): """Unlock the given instance.""" context = context.elevated() self._record_action_start(context, instance, instance_actions.UNLOCK) @wrap_instance_event(prefix='api') def unlock(self, 
context, instance): LOG.debug('Unlocking', instance=instance) instance.locked = False instance.locked_by = None instance.system_metadata.pop('locked_reason', None) instance.save() unlock(self, context, instance) compute_utils.notify_about_instance_action( context, instance, CONF.host, action=fields_obj.NotificationAction.UNLOCK, source=fields_obj.NotificationSource.API) @check_instance_lock def reset_network(self, context, instance): """Reset networking on the instance.""" self.compute_rpcapi.reset_network(context, instance=instance) @check_instance_lock def inject_network_info(self, context, instance): """Inject network info for the instance.""" self.compute_rpcapi.inject_network_info(context, instance=instance) def _create_volume_bdm(self, context, instance, device, volume, disk_bus, device_type, is_local_creation=False, tag=None, delete_on_termination=False): volume_id = volume['id'] if is_local_creation: # when the creation is done locally we can't specify the device # name as we do not have a way to check that the name specified is # a valid one. # We leave the setting of that value when the actual attach # happens on the compute manager # NOTE(artom) Local attach (to a shelved-offload instance) cannot # support device tagging because we have no way to call the compute # manager to check that it supports device tagging. In fact, we # don't even know which computer manager the instance will # eventually end up on when it's unshelved. volume_bdm = objects.BlockDeviceMapping( context=context, source_type='volume', destination_type='volume', instance_uuid=instance.uuid, boot_index=None, volume_id=volume_id, device_name=None, guest_format=None, disk_bus=disk_bus, device_type=device_type, delete_on_termination=delete_on_termination) volume_bdm.create() else: # NOTE(vish): This is done on the compute host because we want # to avoid a race where two devices are requested at # the same time. When db access is removed from # compute, the bdm will be created here and we will # have to make sure that they are assigned atomically. volume_bdm = self.compute_rpcapi.reserve_block_device_name( context, instance, device, volume_id, disk_bus=disk_bus, device_type=device_type, tag=tag, multiattach=volume['multiattach']) volume_bdm.delete_on_termination = delete_on_termination volume_bdm.save() return volume_bdm def _check_volume_already_attached_to_instance(self, context, instance, volume_id): """Avoid attaching the same volume to the same instance twice. As the new Cinder flow (microversion 3.44) is handling the checks differently and allows to attach the same volume to the same instance twice to enable live_migrate we are checking whether the BDM already exists for this combination for the new flow and fail if it does. """ try: objects.BlockDeviceMapping.get_by_volume_and_instance( context, volume_id, instance.uuid) msg = _("volume %s already attached") % volume_id raise exception.InvalidVolume(reason=msg) except exception.VolumeBDMNotFound: pass def _check_attach_and_reserve_volume(self, context, volume, instance, bdm, supports_multiattach=False, validate_az=True): """Perform checks against the instance and volume before attaching. If validation succeeds, the bdm is updated with an attachment_id which effectively reserves it during the attach process in cinder. :param context: nova auth RequestContext :param volume: volume dict from cinder :param instance: Instance object :param bdm: BlockDeviceMapping object :param supports_multiattach: True if the request supports multiattach volumes, i.e. 
microversion >= 2.60, False otherwise :param validate_az: True if the instance and volume availability zones should be validated for cross_az_attach, False to not validate AZ """ volume_id = volume['id'] if validate_az: self.volume_api.check_availability_zone(context, volume, instance=instance) # If volume.multiattach=True and the microversion to # support multiattach is not used, fail the request. if volume['multiattach'] and not supports_multiattach: raise exception.MultiattachNotSupportedOldMicroversion() attachment_id = self.volume_api.attachment_create( context, volume_id, instance.uuid)['id'] bdm.attachment_id = attachment_id # NOTE(ildikov): In case of boot from volume the BDM at this # point is not yet created in a cell database, so we can't # call save(). When attaching a volume to an existing # instance, the instance is already in a cell and the BDM has # been created in that same cell so updating here in that case # is "ok". if bdm.obj_attr_is_set('id'): bdm.save() # TODO(stephenfin): Fold this back in now that cells v1 no longer needs to # override it. def _attach_volume(self, context, instance, volume, device, disk_bus, device_type, tag=None, supports_multiattach=False, delete_on_termination=False): """Attach an existing volume to an existing instance. This method is separated to make it possible for cells version to override it. """ volume_bdm = self._create_volume_bdm( context, instance, device, volume, disk_bus=disk_bus, device_type=device_type, tag=tag, delete_on_termination=delete_on_termination) try: self._check_attach_and_reserve_volume(context, volume, instance, volume_bdm, supports_multiattach) self._record_action_start( context, instance, instance_actions.ATTACH_VOLUME) self.compute_rpcapi.attach_volume(context, instance, volume_bdm) except Exception: with excutils.save_and_reraise_exception(): volume_bdm.destroy() return volume_bdm.device_name def _attach_volume_shelved_offloaded(self, context, instance, volume, device, disk_bus, device_type, delete_on_termination): """Attach an existing volume to an instance in shelved offloaded state. Attaching a volume for an instance in shelved offloaded state requires to perform the regular check to see if we can attach and reserve the volume then we need to call the attach method on the volume API to mark the volume as 'in-use'. The instance at this stage is not managed by a compute manager therefore the actual attachment will be performed once the instance will be unshelved. """ volume_id = volume['id'] @wrap_instance_event(prefix='api') def attach_volume(self, context, v_id, instance, dev, attachment_id): if attachment_id: # Normally we wouldn't complete an attachment without a host # connector, but we do this to make the volume status change # to "in-use" to maintain the API semantics with the old flow. # When unshelving the instance, the compute service will deal # with this disconnected attachment. 
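                # Sketch of the two flows handled here (illustrative; the
                # calls below are authoritative):
                #
                #   new-style (attachment_id was set by
                #   _check_attach_and_reserve_volume):
                #       volume_api.attachment_complete(ctxt, attachment_id)
                #   old-style (no attachment_id):
                #       volume_api.attach(ctxt, volume_id, instance.uuid, dev)
                #
                # Either way the volume ends up 'in-use' even though there is
                # no host connector yet.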
self.volume_api.attachment_complete(context, attachment_id) else: self.volume_api.attach(context, v_id, instance.uuid, dev) volume_bdm = self._create_volume_bdm( context, instance, device, volume, disk_bus=disk_bus, device_type=device_type, is_local_creation=True, delete_on_termination=delete_on_termination) try: self._check_attach_and_reserve_volume(context, volume, instance, volume_bdm) self._record_action_start( context, instance, instance_actions.ATTACH_VOLUME) attach_volume(self, context, volume_id, instance, device, volume_bdm.attachment_id) except Exception: with excutils.save_and_reraise_exception(): volume_bdm.destroy() return volume_bdm.device_name @check_instance_lock @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.PAUSED, vm_states.STOPPED, vm_states.RESIZED, vm_states.SOFT_DELETED, vm_states.SHELVED, vm_states.SHELVED_OFFLOADED]) def attach_volume(self, context, instance, volume_id, device=None, disk_bus=None, device_type=None, tag=None, supports_multiattach=False, delete_on_termination=False): """Attach an existing volume to an existing instance.""" # NOTE(vish): Fail fast if the device is not going to pass. This # will need to be removed along with the test if we # change the logic in the manager for what constitutes # a valid device. if device and not block_device.match_device(device): raise exception.InvalidDevicePath(path=device) # Make sure the volume isn't already attached to this instance # because we'll use the v3.44 attachment flow in # _check_attach_and_reserve_volume and Cinder will allow multiple # attachments between the same volume and instance but the old flow # API semantics don't allow that so we enforce it here. self._check_volume_already_attached_to_instance(context, instance, volume_id) volume = self.volume_api.get(context, volume_id) is_shelved_offloaded = instance.vm_state == vm_states.SHELVED_OFFLOADED if is_shelved_offloaded: if tag: # NOTE(artom) Local attach (to a shelved-offload instance) # cannot support device tagging because we have no way to call # the compute manager to check that it supports device tagging. # In fact, we don't even know which computer manager the # instance will eventually end up on when it's unshelved. raise exception.VolumeTaggedAttachToShelvedNotSupported() if volume['multiattach']: # NOTE(mriedem): Similar to tagged attach, we don't support # attaching a multiattach volume to shelved offloaded instances # because we can't tell if the compute host (since there isn't # one) supports it. This could possibly be supported in the # future if the scheduler was made aware of which computes # support multiattach volumes. raise exception.MultiattachToShelvedNotSupported() return self._attach_volume_shelved_offloaded(context, instance, volume, device, disk_bus, device_type, delete_on_termination) return self._attach_volume(context, instance, volume, device, disk_bus, device_type, tag=tag, supports_multiattach=supports_multiattach, delete_on_termination=delete_on_termination) # TODO(stephenfin): Fold this back in now that cells v1 no longer needs to # override it. def _detach_volume(self, context, instance, volume): """Detach volume from instance. This method is separated to make it easier for cells version to override. 
""" try: self.volume_api.begin_detaching(context, volume['id']) except exception.InvalidInput as exc: raise exception.InvalidVolume(reason=exc.format_message()) attachments = volume.get('attachments', {}) attachment_id = None if attachments and instance.uuid in attachments: attachment_id = attachments[instance.uuid]['attachment_id'] self._record_action_start( context, instance, instance_actions.DETACH_VOLUME) self.compute_rpcapi.detach_volume(context, instance=instance, volume_id=volume['id'], attachment_id=attachment_id) def _detach_volume_shelved_offloaded(self, context, instance, volume): """Detach a volume from an instance in shelved offloaded state. If the instance is shelved offloaded we just need to cleanup volume calling the volume api detach, the volume api terminate_connection and delete the bdm record. If the volume has delete_on_termination option set then we call the volume api delete as well. """ @wrap_instance_event(prefix='api') def detach_volume(self, context, instance, bdms): self._local_cleanup_bdm_volumes(bdms, instance, context) bdms = [objects.BlockDeviceMapping.get_by_volume_id( context, volume['id'], instance.uuid)] # The begin_detaching() call only works with in-use volumes, # which will not be the case for volumes attached to a shelved # offloaded server via the attachments API since those volumes # will have `reserved` status. if not bdms[0].attachment_id: try: self.volume_api.begin_detaching(context, volume['id']) except exception.InvalidInput as exc: raise exception.InvalidVolume(reason=exc.format_message()) self._record_action_start( context, instance, instance_actions.DETACH_VOLUME) detach_volume(self, context, instance, bdms) @check_instance_lock @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.PAUSED, vm_states.STOPPED, vm_states.RESIZED, vm_states.SOFT_DELETED, vm_states.SHELVED, vm_states.SHELVED_OFFLOADED]) def detach_volume(self, context, instance, volume): """Detach a volume from an instance.""" if instance.vm_state == vm_states.SHELVED_OFFLOADED: self._detach_volume_shelved_offloaded(context, instance, volume) else: self._detach_volume(context, instance, volume) def _count_attachments_for_swap(self, ctxt, volume): """Counts the number of attachments for a swap-related volume. Attempts to only count read/write attachments if the volume attachment records exist, otherwise simply just counts the number of attachments regardless of attach mode. :param ctxt: nova.context.RequestContext - user request context :param volume: nova-translated volume dict from nova.volume.cinder. :returns: count of attachments for the volume """ # This is a dict, keyed by server ID, to a dict of attachment_id and # mountpoint. attachments = volume.get('attachments', {}) # Multiattach volumes can have more than one attachment, so if there # is more than one attachment, attempt to count the read/write # attachments. if len(attachments) > 1: count = 0 for attachment in attachments.values(): attachment_id = attachment['attachment_id'] # Get the attachment record for this attachment so we can # get the attach_mode. # TODO(mriedem): This could be optimized if we had # GET /attachments/detail?volume_id=volume['id'] in Cinder. try: attachment_record = self.volume_api.attachment_get( ctxt, attachment_id) # Note that the attachment record from Cinder has # attach_mode in the top-level of the resource but the # nova.volume.cinder code translates it and puts the # attach_mode in the connection_info for some legacy # reason... 
if attachment_record['attach_mode'] == 'rw': count += 1 except exception.VolumeAttachmentNotFound: # attachments are read/write by default so count it count += 1 else: count = len(attachments) return count @check_instance_lock @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.PAUSED, vm_states.RESIZED]) def swap_volume(self, context, instance, old_volume, new_volume): """Swap volume attached to an instance.""" # The caller likely got the instance from volume['attachments'] # in the first place, but let's sanity check. if not old_volume.get('attachments', {}).get(instance.uuid): msg = _("Old volume is attached to a different instance.") raise exception.InvalidVolume(reason=msg) if new_volume['attach_status'] == 'attached': msg = _("New volume must be detached in order to swap.") raise exception.InvalidVolume(reason=msg) if int(new_volume['size']) < int(old_volume['size']): msg = _("New volume must be the same size or larger.") raise exception.InvalidVolume(reason=msg) self.volume_api.check_availability_zone(context, new_volume, instance=instance) try: self.volume_api.begin_detaching(context, old_volume['id']) except exception.InvalidInput as exc: raise exception.InvalidVolume(reason=exc.format_message()) # Disallow swapping from multiattach volumes that have more than one # read/write attachment. We know the old_volume has at least one # attachment since it's attached to this server. The new_volume # can't have any attachments because of the attach_status check above. # We do this count after calling "begin_detaching" to lock against # concurrent attachments being made while we're counting. try: if self._count_attachments_for_swap(context, old_volume) > 1: raise exception.MultiattachSwapVolumeNotSupported() except Exception: # This is generic to handle failures while counting # We need to reset the detaching status before raising. with excutils.save_and_reraise_exception(): self.volume_api.roll_detaching(context, old_volume['id']) # Get the BDM for the attached (old) volume so we can tell if it was # attached with the new-style Cinder 3.44 API. bdm = objects.BlockDeviceMapping.get_by_volume_and_instance( context, old_volume['id'], instance.uuid) new_attachment_id = None if bdm.attachment_id is None: # This is an old-style attachment so reserve the new volume before # we cast to the compute host. self.volume_api.reserve_volume(context, new_volume['id']) else: try: self._check_volume_already_attached_to_instance( context, instance, new_volume['id']) except exception.InvalidVolume: with excutils.save_and_reraise_exception(): self.volume_api.roll_detaching(context, old_volume['id']) # This is a new-style attachment so for the volume that we are # going to swap to, create a new volume attachment. 
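            # Summary of the two attachment styles handled in this method
            # (illustrative; the surrounding code is authoritative):
            #
            #   old-style (bdm.attachment_id is None):
            #       volume_api.reserve_volume(ctxt, new_volume['id'])
            #   new-style:
            #       new_attachment_id = volume_api.attachment_create(
            #           ctxt, new_volume['id'], instance.uuid)['id']
            #
            # On failure the matching cleanup is unreserve_volume() for the
            # former and attachment_delete() for the latter, as done in the
            # except block below.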
new_attachment_id = self.volume_api.attachment_create( context, new_volume['id'], instance.uuid)['id'] self._record_action_start( context, instance, instance_actions.SWAP_VOLUME) try: self.compute_rpcapi.swap_volume( context, instance=instance, old_volume_id=old_volume['id'], new_volume_id=new_volume['id'], new_attachment_id=new_attachment_id) except Exception: with excutils.save_and_reraise_exception(): self.volume_api.roll_detaching(context, old_volume['id']) if new_attachment_id is None: self.volume_api.unreserve_volume(context, new_volume['id']) else: self.volume_api.attachment_delete( context, new_attachment_id) @check_instance_lock @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.PAUSED, vm_states.STOPPED], task_state=[None]) def attach_interface(self, context, instance, network_id, port_id, requested_ip, tag=None): """Use hotplug to add an network adapter to an instance.""" self._record_action_start( context, instance, instance_actions.ATTACH_INTERFACE) # NOTE(gibi): Checking if the requested port has resource request as # such ports are currently not supported as they would at least # need resource allocation manipulation in placement but might also # need a new scheduling if resource on this host is not available. if port_id: port = self.network_api.show_port(context, port_id) if port['port'].get(constants.RESOURCE_REQUEST): raise exception.AttachInterfaceWithQoSPolicyNotSupported( instance_uuid=instance.uuid) return self.compute_rpcapi.attach_interface(context, instance=instance, network_id=network_id, port_id=port_id, requested_ip=requested_ip, tag=tag) @check_instance_lock @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.PAUSED, vm_states.STOPPED], task_state=[None]) def detach_interface(self, context, instance, port_id): """Detach an network adapter from an instance.""" self._record_action_start( context, instance, instance_actions.DETACH_INTERFACE) self.compute_rpcapi.detach_interface(context, instance=instance, port_id=port_id) def get_instance_metadata(self, context, instance): """Get all metadata associated with an instance.""" return self.db.instance_metadata_get(context, instance.uuid) @check_instance_lock @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.PAUSED, vm_states.SUSPENDED, vm_states.STOPPED], task_state=None) def delete_instance_metadata(self, context, instance, key): """Delete the given metadata item from an instance.""" instance.delete_metadata_key(key) self.compute_rpcapi.change_instance_metadata(context, instance=instance, diff={key: ['-']}) @check_instance_lock @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.PAUSED, vm_states.SUSPENDED, vm_states.STOPPED], task_state=None) def update_instance_metadata(self, context, instance, metadata, delete=False): """Updates or creates instance metadata. If delete is True, metadata items that are not specified in the `metadata` argument will be deleted. 
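
        For example (illustrative values), with existing metadata
        `{'a': '1', 'b': '2'}` and `metadata={'b': '3'}`:

        * `delete=False` results in `{'a': '1', 'b': '3'}`
        * `delete=True` results in `{'b': '3'}`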
""" orig = dict(instance.metadata) if delete: _metadata = metadata else: _metadata = dict(instance.metadata) _metadata.update(metadata) self._check_metadata_properties_quota(context, _metadata) instance.metadata = _metadata instance.save() diff = _diff_dict(orig, instance.metadata) self.compute_rpcapi.change_instance_metadata(context, instance=instance, diff=diff) return _metadata @block_accelerators @reject_sev_instances(instance_actions.LIVE_MIGRATION) @check_instance_lock @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.PAUSED]) def live_migrate(self, context, instance, block_migration, disk_over_commit, host_name, force=None, async_=False): """Migrate a server lively to a new host.""" LOG.debug("Going to try to live migrate instance to %s", host_name or "another host", instance=instance) if host_name: # Validate the specified host before changing the instance task # state. nodes = objects.ComputeNodeList.get_all_by_host(context, host_name) request_spec = objects.RequestSpec.get_by_instance_uuid( context, instance.uuid) instance.task_state = task_states.MIGRATING instance.save(expected_task_state=[None]) self._record_action_start(context, instance, instance_actions.LIVE_MIGRATION) # NOTE(sbauza): Force is a boolean by the new related API version if force is False and host_name: # Unset the host to make sure we call the scheduler # from the conductor LiveMigrationTask. Yes this is tightly-coupled # to behavior in conductor and not great. host_name = None # FIXME(sbauza): Since only Ironic driver uses more than one # compute per service but doesn't support live migrations, # let's provide the first one. target = nodes[0] destination = objects.Destination( host=target.host, node=target.hypervisor_hostname ) # This is essentially a hint to the scheduler to only consider # the specified host but still run it through the filters. request_spec.requested_destination = destination try: self.compute_task_api.live_migrate_instance(context, instance, host_name, block_migration=block_migration, disk_over_commit=disk_over_commit, request_spec=request_spec, async_=async_) except oslo_exceptions.MessagingTimeout as messaging_timeout: with excutils.save_and_reraise_exception(): # NOTE(pkoniszewski): It is possible that MessagingTimeout # occurs, but LM will still be in progress, so write # instance fault to database compute_utils.add_instance_fault_from_exc(context, instance, messaging_timeout) @check_instance_lock @check_instance_state(vm_state=[vm_states.ACTIVE], task_state=[task_states.MIGRATING]) def live_migrate_force_complete(self, context, instance, migration_id): """Force live migration to complete. :param context: Security context :param instance: The instance that is being migrated :param migration_id: ID of ongoing migration """ LOG.debug("Going to try to force live migration to complete", instance=instance) # NOTE(pkoniszewski): Get migration object to check if there is ongoing # live migration for particular instance. Also pass migration id to # compute to double check and avoid possible race condition. 
migration = objects.Migration.get_by_id_and_instance( context, migration_id, instance.uuid) if migration.status != 'running': raise exception.InvalidMigrationState(migration_id=migration_id, instance_uuid=instance.uuid, state=migration.status, method='force complete') self._record_action_start( context, instance, instance_actions.LIVE_MIGRATION_FORCE_COMPLETE) self.compute_rpcapi.live_migration_force_complete( context, instance, migration) @check_instance_lock @check_instance_state(task_state=[task_states.MIGRATING]) def live_migrate_abort(self, context, instance, migration_id, support_abort_in_queue=False): """Abort an in-progress live migration. :param context: Security context :param instance: The instance that is being migrated :param migration_id: ID of in-progress live migration :param support_abort_in_queue: Flag indicating whether we can support abort migrations in "queued" or "preparing" status. """ migration = objects.Migration.get_by_id_and_instance(context, migration_id, instance.uuid) LOG.debug("Going to cancel live migration %s", migration.id, instance=instance) # If the microversion does not support abort migration in queue, # we are only be able to abort migrations with `running` status; # if it is supported, we are able to also abort migrations in # `queued` and `preparing` status. allowed_states = ['running'] queued_states = ['queued', 'preparing'] if support_abort_in_queue: # The user requested a microversion that supports aborting a queued # or preparing live migration. But we need to check that the # compute service hosting the instance is new enough to support # aborting a queued/preparing live migration, so we check the # service version here. allowed_states.extend(queued_states) if migration.status not in allowed_states: raise exception.InvalidMigrationState(migration_id=migration_id, instance_uuid=instance.uuid, state=migration.status, method='abort live migration') self._record_action_start(context, instance, instance_actions.LIVE_MIGRATION_CANCEL) self.compute_rpcapi.live_migration_abort(context, instance, migration.id) @block_accelerators @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED, vm_states.ERROR]) def evacuate(self, context, instance, host, on_shared_storage, admin_password=None, force=None): """Running evacuate to target host. Checking vm compute host state, if the host not in expected_state, raising an exception. :param instance: The instance to evacuate :param host: Target host. if not set, the scheduler will pick up one :param on_shared_storage: True if instance files on shared storage :param admin_password: password to set on rebuilt instance :param force: Force the evacuation to the specific host target """ LOG.debug('vm evacuation scheduled', instance=instance) inst_host = instance.host service = objects.Service.get_by_compute_host(context, inst_host) if self.servicegroup_api.service_is_up(service): LOG.error('Instance compute service state on %s ' 'expected to be down, but it was up.', inst_host) raise exception.ComputeServiceInUse(host=inst_host) request_spec = objects.RequestSpec.get_by_instance_uuid( context, instance.uuid) instance.task_state = task_states.REBUILDING instance.save(expected_task_state=[None]) self._record_action_start(context, instance, instance_actions.EVACUATE) # NOTE(danms): Create this as a tombstone for the source compute # to find and cleanup. No need to pass it anywhere else. 
migration = objects.Migration(context, source_compute=instance.host, source_node=instance.node, instance_uuid=instance.uuid, status='accepted', migration_type='evacuation') if host: migration.dest_compute = host migration.create() compute_utils.notify_about_instance_usage( self.notifier, context, instance, "evacuate") compute_utils.notify_about_instance_action( context, instance, CONF.host, action=fields_obj.NotificationAction.EVACUATE, source=fields_obj.NotificationSource.API) # NOTE(sbauza): Force is a boolean by the new related API version # TODO(stephenfin): Any reason we can't use 'not force' here to handle # the pre-v2.29 API microversion, which wouldn't set force if force is False and host: nodes = objects.ComputeNodeList.get_all_by_host(context, host) # NOTE(sbauza): Unset the host to make sure we call the scheduler host = None # FIXME(sbauza): Since only Ironic driver uses more than one # compute per service but doesn't support evacuations, # let's provide the first one. target = nodes[0] destination = objects.Destination( host=target.host, node=target.hypervisor_hostname ) request_spec.requested_destination = destination return self.compute_task_api.rebuild_instance(context, instance=instance, new_pass=admin_password, injected_files=None, image_ref=None, orig_image_ref=None, orig_sys_metadata=None, bdms=None, recreate=True, on_shared_storage=on_shared_storage, host=host, request_spec=request_spec, ) def get_migrations(self, context, filters): """Get all migrations for the given filters.""" load_cells() migrations = [] for cell in CELLS: if cell.uuid == objects.CellMapping.CELL0_UUID: continue with nova_context.target_cell(context, cell) as cctxt: migrations.extend(objects.MigrationList.get_by_filters( cctxt, filters).objects) return objects.MigrationList(objects=migrations) def get_migrations_sorted(self, context, filters, sort_dirs=None, sort_keys=None, limit=None, marker=None): """Get all migrations for the given parameters.""" mig_objs = migration_list.get_migration_objects_sorted( context, filters, limit, marker, sort_keys, sort_dirs) # Due to cross-cell resize, we could have duplicate migration records # while the instance is in VERIFY_RESIZE state in the destination cell # but the original migration record still exists in the source cell. # Filter out duplicate migration records here based on which record # is newer (last updated). def _get_newer_obj(obj1, obj2): # created_at will always be set. created_at1 = obj1.created_at created_at2 = obj2.created_at # updated_at might be None updated_at1 = obj1.updated_at updated_at2 = obj2.updated_at # If both have updated_at, compare using that field. if updated_at1 and updated_at2: if updated_at1 > updated_at2: return obj1 return obj2 # Compare created_at versus updated_at. if updated_at1: if updated_at1 > created_at2: return obj1 return obj2 if updated_at2: if updated_at2 > created_at1: return obj2 return obj1 # Compare created_at only. if created_at1 > created_at2: return obj1 return obj2 # TODO(mriedem): This could be easier if we leveraged the "hidden" # field on the Migration record and then just did like # _get_unique_filter_method in the get_all() method for instances. migrations_by_uuid = collections.OrderedDict() # maintain sort order for migration in mig_objs: if migration.uuid not in migrations_by_uuid: migrations_by_uuid[migration.uuid] = migration else: # We have a collision, keep the newer record. 
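                # Worked example (made-up timestamps) of the comparison done
                # by the _get_newer_obj() helper defined above:
                #
                #   record A: created_at=10:00, updated_at=None
                #   record B: created_at=10:05, updated_at=10:07
                #
                # Record B wins because its updated_at (10:07) is later than
                # record A's created_at (10:00).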
# Note that using updated_at could be wrong if changes-since or # changes-before filters are being used but we have the same # issue in _get_unique_filter_method for instances. doppelganger = migrations_by_uuid[migration.uuid] newer = _get_newer_obj(doppelganger, migration) migrations_by_uuid[migration.uuid] = newer return objects.MigrationList(objects=list(migrations_by_uuid.values())) def get_migrations_in_progress_by_instance(self, context, instance_uuid, migration_type=None): """Get all migrations of an instance in progress.""" return objects.MigrationList.get_in_progress_by_instance( context, instance_uuid, migration_type) def get_migration_by_id_and_instance(self, context, migration_id, instance_uuid): """Get the migration of an instance by id.""" return objects.Migration.get_by_id_and_instance( context, migration_id, instance_uuid) def _get_bdm_by_volume_id(self, context, volume_id, expected_attrs=None): """Retrieve a BDM without knowing its cell. .. note:: The context will be targeted to the cell in which the BDM is found, if any. :param context: The API request context. :param volume_id: The ID of the volume. :param expected_attrs: list of any additional attributes that should be joined when the BDM is loaded from the database. :raises: nova.exception.VolumeBDMNotFound if not found in any cell """ load_cells() for cell in CELLS: nova_context.set_target_cell(context, cell) try: return objects.BlockDeviceMapping.get_by_volume( context, volume_id, expected_attrs=expected_attrs) except exception.NotFound: continue raise exception.VolumeBDMNotFound(volume_id=volume_id) def volume_snapshot_create(self, context, volume_id, create_info): bdm = self._get_bdm_by_volume_id( context, volume_id, expected_attrs=['instance']) # We allow creating the snapshot in any vm_state as long as there is # no task being performed on the instance and it has a host. @check_instance_host() @check_instance_state(vm_state=None) def do_volume_snapshot_create(self, context, instance): self.compute_rpcapi.volume_snapshot_create(context, instance, volume_id, create_info) snapshot = { 'snapshot': { 'id': create_info.get('id'), 'volumeId': volume_id } } return snapshot return do_volume_snapshot_create(self, context, bdm.instance) def volume_snapshot_delete(self, context, volume_id, snapshot_id, delete_info): bdm = self._get_bdm_by_volume_id( context, volume_id, expected_attrs=['instance']) # We allow deleting the snapshot in any vm_state as long as there is # no task being performed on the instance and it has a host. @check_instance_host() @check_instance_state(vm_state=None) def do_volume_snapshot_delete(self, context, instance): self.compute_rpcapi.volume_snapshot_delete(context, instance, volume_id, snapshot_id, delete_info) do_volume_snapshot_delete(self, context, bdm.instance) def external_instance_event(self, api_context, instances, events): # NOTE(danms): The external API consumer just provides events, # but doesn't know where they go. We need to collate lists # by the host the affected instance is on and dispatch them # according to host instances_by_host = collections.defaultdict(list) events_by_host = collections.defaultdict(list) hosts_by_instance = collections.defaultdict(list) cell_contexts_by_host = {} for instance in instances: # instance._context is used here since it's already targeted to # the cell that the instance lives in, and we need to use that # cell context to lookup any migrations associated to the instance. 
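            # Illustration (host names made up): for an instance that is live
            # migrating from 'compute1' to 'compute2', _get_relevant_hosts()
            # returns the host set {'compute1', 'compute2'} plus the
            # cross-cell flag, so the same event is queued for both hosts
            # below; an instance that is not migrating only yields its
            # current host.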
hosts, cross_cell_move = self._get_relevant_hosts( instance._context, instance) for host in hosts: # NOTE(danms): All instances on a host must have the same # mapping, so just use that if host not in cell_contexts_by_host: # NOTE(mriedem): If the instance is being migrated across # cells then we have to get the host mapping to determine # which cell a given host is in. if cross_cell_move: hm = objects.HostMapping.get_by_host(api_context, host) ctxt = nova_context.get_admin_context() nova_context.set_target_cell(ctxt, hm.cell_mapping) cell_contexts_by_host[host] = ctxt else: # The instance is not migrating across cells so just # use the cell-targeted context already in the # instance since the host has to be in that same cell. cell_contexts_by_host[host] = instance._context instances_by_host[host].append(instance) hosts_by_instance[instance.uuid].append(host) for event in events: if event.name == 'volume-extended': # Volume extend is a user-initiated operation starting in the # Block Storage service API. We record an instance action so # the user can monitor the operation to completion. host = hosts_by_instance[event.instance_uuid][0] cell_context = cell_contexts_by_host[host] objects.InstanceAction.action_start( cell_context, event.instance_uuid, instance_actions.EXTEND_VOLUME, want_result=False) elif event.name == 'power-update': host = hosts_by_instance[event.instance_uuid][0] cell_context = cell_contexts_by_host[host] if event.tag == external_event_obj.POWER_ON: inst_action = instance_actions.START elif event.tag == external_event_obj.POWER_OFF: inst_action = instance_actions.STOP else: LOG.warning("Invalid power state %s. Cannot process " "the event %s. Skipping it.", event.tag, event) continue objects.InstanceAction.action_start( cell_context, event.instance_uuid, inst_action, want_result=False) for host in hosts_by_instance[event.instance_uuid]: events_by_host[host].append(event) for host in instances_by_host: cell_context = cell_contexts_by_host[host] # TODO(salv-orlando): Handle exceptions raised by the rpc api layer # in order to ensure that a failure in processing events on a host # will not prevent processing events on other hosts self.compute_rpcapi.external_instance_event( cell_context, instances_by_host[host], events_by_host[host], host=host) def _get_relevant_hosts(self, context, instance): """Get the relevant hosts for an external server event on an instance. 
:param context: nova auth request context targeted at the same cell that the instance lives in :param instance: Instance object which is the target of an external server event :returns: 2-item tuple of: - set of at least one host (the host where the instance lives); if the instance is being migrated the source and dest compute hostnames are in the returned set - boolean indicating if the instance is being migrated across cells """ hosts = set() hosts.add(instance.host) cross_cell_move = False if instance.migration_context is not None: migration_id = instance.migration_context.migration_id migration = objects.Migration.get_by_id(context, migration_id) cross_cell_move = migration.cross_cell_move hosts.add(migration.dest_compute) hosts.add(migration.source_compute) cells_msg = ( 'across cells' if cross_cell_move else 'within the same cell') LOG.debug('Instance %(instance)s is migrating %(cells_msg)s, ' 'copying events to all relevant hosts: ' '%(hosts)s', {'cells_msg': cells_msg, 'instance': instance.uuid, 'hosts': hosts}) return hosts, cross_cell_move def get_instance_host_status(self, instance): if instance.host: try: service = [service for service in instance.services if service.binary == 'nova-compute'][0] if service.forced_down: host_status = fields_obj.HostStatus.DOWN elif service.disabled: host_status = fields_obj.HostStatus.MAINTENANCE else: alive = self.servicegroup_api.service_is_up(service) host_status = ((alive and fields_obj.HostStatus.UP) or fields_obj.HostStatus.UNKNOWN) except IndexError: host_status = fields_obj.HostStatus.NONE else: host_status = fields_obj.HostStatus.NONE return host_status def get_instances_host_statuses(self, instance_list): host_status_dict = dict() host_statuses = dict() for instance in instance_list: if instance.host: if instance.host not in host_status_dict: host_status = self.get_instance_host_status(instance) host_status_dict[instance.host] = host_status else: host_status = host_status_dict[instance.host] else: host_status = fields_obj.HostStatus.NONE host_statuses[instance.uuid] = host_status return host_statuses def target_host_cell(fn): """Target a host-based function to a cell. Expects to wrap a function of signature: func(self, context, host, ...) """ @functools.wraps(fn) def targeted(self, context, host, *args, **kwargs): mapping = objects.HostMapping.get_by_host(context, host) nova_context.set_target_cell(context, mapping.cell_mapping) return fn(self, context, host, *args, **kwargs) return targeted def _get_service_in_cell_by_host(context, host_name): # validates the host; ComputeHostNotFound is raised if invalid try: mapping = objects.HostMapping.get_by_host(context, host_name) nova_context.set_target_cell(context, mapping.cell_mapping) service = objects.Service.get_by_compute_host(context, host_name) except exception.HostMappingNotFound: try: # NOTE(danms): This targets our cell service = _find_service_in_cell(context, service_host=host_name) except exception.NotFound: raise exception.ComputeHostNotFound(host=host_name) return service def _find_service_in_cell(context, service_id=None, service_host=None): """Find a service by id or hostname by searching all cells. If one matching service is found, return it. If none or multiple are found, raise an exception. 
:param context: A context.RequestContext :param service_id: If not none, the DB ID of the service to find :param service_host: If not None, the hostname of the service to find :returns: An objects.Service :raises: ServiceNotUnique if multiple matching IDs are found :raises: NotFound if no matches are found :raises: NovaException if called with neither search option """ load_cells() service = None found_in_cell = None is_uuid = False if service_id is not None: is_uuid = uuidutils.is_uuid_like(service_id) if is_uuid: lookup_fn = lambda c: objects.Service.get_by_uuid(c, service_id) else: lookup_fn = lambda c: objects.Service.get_by_id(c, service_id) elif service_host is not None: lookup_fn = lambda c: ( objects.Service.get_by_compute_host(c, service_host)) else: LOG.exception('_find_service_in_cell called with no search parameters') # This is intentionally cryptic so we don't leak implementation details # out of the API. raise exception.NovaException() for cell in CELLS: # NOTE(danms): Services can be in cell0, so don't skip it here try: with nova_context.target_cell(context, cell) as cctxt: cell_service = lookup_fn(cctxt) except exception.NotFound: # NOTE(danms): Keep looking in other cells continue if service and cell_service: raise exception.ServiceNotUnique() service = cell_service found_in_cell = cell if service and is_uuid: break if service: # NOTE(danms): Set the cell on the context so it remains # when we return to our caller nova_context.set_target_cell(context, found_in_cell) return service else: raise exception.NotFound() class HostAPI(base.Base): """Sub-set of the Compute Manager API for managing host operations.""" def __init__(self, rpcapi=None, servicegroup_api=None): self.rpcapi = rpcapi or compute_rpcapi.ComputeAPI() self.servicegroup_api = servicegroup_api or servicegroup.API() super(HostAPI, self).__init__() def _assert_host_exists(self, context, host_name, must_be_up=False): """Raise HostNotFound if compute host doesn't exist.""" service = objects.Service.get_by_compute_host(context, host_name) if not service: raise exception.HostNotFound(host=host_name) if must_be_up and not self.servicegroup_api.service_is_up(service): raise exception.ComputeServiceUnavailable(host=host_name) return service['host'] @wrap_exception() @target_host_cell def set_host_enabled(self, context, host_name, enabled): """Sets the specified host's ability to accept new instances.""" host_name = self._assert_host_exists(context, host_name) payload = {'host_name': host_name, 'enabled': enabled} compute_utils.notify_about_host_update(context, 'set_enabled.start', payload) result = self.rpcapi.set_host_enabled(context, enabled=enabled, host=host_name) compute_utils.notify_about_host_update(context, 'set_enabled.end', payload) return result @target_host_cell def get_host_uptime(self, context, host_name): """Returns the result of calling "uptime" on the target host.""" host_name = self._assert_host_exists(context, host_name, must_be_up=True) return self.rpcapi.get_host_uptime(context, host=host_name) @wrap_exception() @target_host_cell def host_power_action(self, context, host_name, action): """Reboots, shuts down or powers up the host.""" host_name = self._assert_host_exists(context, host_name) payload = {'host_name': host_name, 'action': action} compute_utils.notify_about_host_update(context, 'power_action.start', payload) result = self.rpcapi.host_power_action(context, action=action, host=host_name) compute_utils.notify_about_host_update(context, 'power_action.end', payload) return result 
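# Editor's note: _find_service_in_cell() above illustrates a recurring
# cross-cell pattern: build a per-cell lookup callable, try it in every cell,
# and fail loudly if more than one cell claims the record. A minimal
# standalone sketch of that pattern follows; NotFoundError, NotUniqueError and
# lookup_fn are hypothetical stand-ins, not Nova APIs.


class NotFoundError(Exception):
    pass


class NotUniqueError(Exception):
    pass


def find_in_cells(cells, lookup_fn):
    """Return the single record matched across all cells, or raise."""
    match = None
    for cell in cells:
        try:
            candidate = lookup_fn(cell)
        except NotFoundError:
            # Keep looking in the remaining cells.
            continue
        if match is not None:
            raise NotUniqueError('record exists in more than one cell')
        match = candidate
    if match is None:
        raise NotFoundError('record not found in any cell')
    return match

# Usage, e.g.: find_in_cells(cells, lambda c: c.service_by_host('compute1'))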
@wrap_exception() @target_host_cell def set_host_maintenance(self, context, host_name, mode): """Start/Stop host maintenance window. On start, it triggers guest VMs evacuation. """ host_name = self._assert_host_exists(context, host_name) payload = {'host_name': host_name, 'mode': mode} compute_utils.notify_about_host_update(context, 'set_maintenance.start', payload) result = self.rpcapi.host_maintenance_mode(context, host_param=host_name, mode=mode, host=host_name) compute_utils.notify_about_host_update(context, 'set_maintenance.end', payload) return result def service_get_all(self, context, filters=None, set_zones=False, all_cells=False, cell_down_support=False): """Returns a list of services, optionally filtering the results. If specified, 'filters' should be a dictionary containing services attributes and matching values. Ie, to get a list of services for the 'compute' topic, use filters={'topic': 'compute'}. If all_cells=True, then scan all cells and merge the results. If cell_down_support=True then return minimal service records for cells that do not respond based on what we have in the host mappings. These will have only 'binary' and 'host' set. """ if filters is None: filters = {} disabled = filters.pop('disabled', None) if 'availability_zone' in filters: set_zones = True # NOTE(danms): Eventually this all_cells nonsense should go away # and we should always iterate over the cells. However, certain # callers need the legacy behavior for now. if all_cells: services = [] service_dict = nova_context.scatter_gather_all_cells(context, objects.ServiceList.get_all, disabled, set_zones=set_zones) for cell_uuid, service in service_dict.items(): if not nova_context.is_cell_failure_sentinel(service): services.extend(service) elif cell_down_support: unavailable_services = objects.ServiceList() cid = [cm.id for cm in nova_context.CELLS if cm.uuid == cell_uuid] # We know cid[0] is in the list because we are using the # same list that scatter_gather_all_cells used hms = objects.HostMappingList.get_by_cell_id(context, cid[0]) for hm in hms: unavailable_services.objects.append(objects.Service( binary='nova-compute', host=hm.host)) LOG.warning("Cell %s is not responding and hence only " "partial results are available from this " "cell.", cell_uuid) services.extend(unavailable_services) else: LOG.warning("Cell %s is not responding and hence skipped " "from the results.", cell_uuid) else: services = objects.ServiceList.get_all(context, disabled, set_zones=set_zones) ret_services = [] for service in services: for key, val in filters.items(): if service[key] != val: break else: # All filters matched. ret_services.append(service) return ret_services def service_get_by_id(self, context, service_id): """Get service entry for the given service id or uuid.""" try: return _find_service_in_cell(context, service_id=service_id) except exception.NotFound: raise exception.ServiceNotFound(service_id=service_id) @target_host_cell def service_get_by_compute_host(self, context, host_name): """Get service entry for the given compute hostname.""" return objects.Service.get_by_compute_host(context, host_name) def _update_compute_provider_status(self, context, service): """Calls the compute service to sync the COMPUTE_STATUS_DISABLED trait. There are two cases where the API will not call the compute service: * The compute service is down. In this case the trait is synchronized when the compute service is restarted. * The compute service is old. 
In this case the trait is synchronized when the compute service is upgraded and restarted. :param context: nova auth RequestContext :param service: nova.objects.Service object which has been enabled or disabled (see ``service_update``). """ # Make sure the service is up so we can make the RPC call. if not self.servicegroup_api.service_is_up(service): LOG.info('Compute service on host %s is down. The ' 'COMPUTE_STATUS_DISABLED trait will be synchronized ' 'when the service is restarted.', service.host) return # Make sure the compute service is new enough for the trait sync # behavior. # TODO(mriedem): Remove this compat check in the U release. if service.version < MIN_COMPUTE_SYNC_COMPUTE_STATUS_DISABLED: LOG.info('Compute service on host %s is too old to sync the ' 'COMPUTE_STATUS_DISABLED trait in Placement. The ' 'trait will be synchronized when the service is ' 'upgraded and restarted.', service.host) return enabled = not service.disabled # Avoid leaking errors out of the API. try: LOG.debug('Calling the compute service on host %s to sync the ' 'COMPUTE_STATUS_DISABLED trait.', service.host) self.rpcapi.set_host_enabled(context, service.host, enabled) except Exception: LOG.exception('An error occurred while updating the ' 'COMPUTE_STATUS_DISABLED trait on compute node ' 'resource providers managed by host %s. The trait ' 'will be synchronized automatically by the compute ' 'service when the update_available_resource ' 'periodic task runs.', service.host) def service_update(self, context, service): """Performs the actual service update operation. If the "disabled" field is changed, potentially calls the compute service to sync the COMPUTE_STATUS_DISABLED trait on the compute node resource providers managed by this compute service. :param context: nova auth RequestContext :param service: nova.objects.Service object with changes already set on the object """ # Before persisting changes and resetting the changed fields on the # Service object, determine if the disabled field changed. update_placement = 'disabled' in service.obj_what_changed() # Persist the Service object changes to the database. service.save() # If the disabled field changed, potentially call the compute service # to sync the COMPUTE_STATUS_DISABLED trait. if update_placement: self._update_compute_provider_status(context, service) return service @target_host_cell def service_update_by_host_and_binary(self, context, host_name, binary, params_to_update): """Enable / Disable a service. Determines the cell that the service is in using the HostMapping. For compute services, this stops new builds and migrations going to the host. See also ``service_update``. :param context: nova auth RequestContext :param host_name: hostname of the service :param binary: service binary (really only supports "nova-compute") :param params_to_update: dict of changes to make to the Service object :raises: HostMappingNotFound if the host is not mapped to a cell :raises: HostBinaryNotFound if a services table record is not found with the given host_name and binary """ # TODO(mriedem): Service.get_by_args is deprecated; we should use # get_by_compute_host here (remember to update the "raises" docstring). 
service = objects.Service.get_by_args(context, host_name, binary) service.update(params_to_update) return self.service_update(context, service) @target_host_cell def instance_get_all_by_host(self, context, host_name): """Return all instances on the given host.""" return objects.InstanceList.get_by_host(context, host_name) def task_log_get_all(self, context, task_name, period_beginning, period_ending, host=None, state=None): """Return the task logs within a given range, optionally filtering by host and/or state. """ return self.db.task_log_get_all(context, task_name, period_beginning, period_ending, host=host, state=state) def compute_node_get(self, context, compute_id): """Return compute node entry for particular integer ID or UUID.""" load_cells() # NOTE(danms): Unfortunately this API exposes database identifiers # which means we really can't do something efficient here is_uuid = uuidutils.is_uuid_like(compute_id) for cell in CELLS: if cell.uuid == objects.CellMapping.CELL0_UUID: continue with nova_context.target_cell(context, cell) as cctxt: try: if is_uuid: return objects.ComputeNode.get_by_uuid(cctxt, compute_id) return objects.ComputeNode.get_by_id(cctxt, int(compute_id)) except exception.ComputeHostNotFound: # NOTE(danms): Keep looking in other cells continue raise exception.ComputeHostNotFound(host=compute_id) def compute_node_get_all(self, context, limit=None, marker=None): load_cells() computes = [] uuid_marker = marker and uuidutils.is_uuid_like(marker) for cell in CELLS: if cell.uuid == objects.CellMapping.CELL0_UUID: continue with nova_context.target_cell(context, cell) as cctxt: # If we have a marker and it's a uuid, see if the compute node # is in this cell. if marker and uuid_marker: try: compute_marker = objects.ComputeNode.get_by_uuid( cctxt, marker) # we found the marker compute node, so use it's id # for the actual marker for paging in this cell's db marker = compute_marker.id except exception.ComputeHostNotFound: # The marker node isn't in this cell so keep looking. continue try: cell_computes = objects.ComputeNodeList.get_by_pagination( cctxt, limit=limit, marker=marker) except exception.MarkerNotFound: # NOTE(danms): Keep looking through cells continue computes.extend(cell_computes) # NOTE(danms): We must have found the marker, so continue on # without one marker = None if limit: limit -= len(cell_computes) if limit <= 0: break if marker is not None and len(computes) == 0: # NOTE(danms): If we did not find the marker in any cell, # mimic the db_api behavior here. 
raise exception.MarkerNotFound(marker=marker) return objects.ComputeNodeList(objects=computes) def compute_node_search_by_hypervisor(self, context, hypervisor_match): load_cells() computes = [] for cell in CELLS: if cell.uuid == objects.CellMapping.CELL0_UUID: continue with nova_context.target_cell(context, cell) as cctxt: cell_computes = objects.ComputeNodeList.get_by_hypervisor( cctxt, hypervisor_match) computes.extend(cell_computes) return objects.ComputeNodeList(objects=computes) def compute_node_statistics(self, context): load_cells() cell_stats = [] for cell in CELLS: if cell.uuid == objects.CellMapping.CELL0_UUID: continue with nova_context.target_cell(context, cell) as cctxt: cell_stats.append(self.db.compute_node_statistics(cctxt)) if cell_stats: keys = cell_stats[0].keys() return {k: sum(stats[k] for stats in cell_stats) for k in keys} else: return {} class InstanceActionAPI(base.Base): """Sub-set of the Compute Manager API for managing instance actions.""" def actions_get(self, context, instance, limit=None, marker=None, filters=None): return objects.InstanceActionList.get_by_instance_uuid( context, instance.uuid, limit, marker, filters) def action_get_by_request_id(self, context, instance, request_id): return objects.InstanceAction.get_by_request_id( context, instance.uuid, request_id) def action_events_get(self, context, instance, action_id): return objects.InstanceActionEventList.get_by_action( context, action_id) class AggregateAPI(base.Base): """Sub-set of the Compute Manager API for managing host aggregates.""" def __init__(self, **kwargs): self.compute_rpcapi = compute_rpcapi.ComputeAPI() self.query_client = query.SchedulerQueryClient() self._placement_client = None # Lazy-load on first access. super(AggregateAPI, self).__init__(**kwargs) @property def placement_client(self): if self._placement_client is None: self._placement_client = report.SchedulerReportClient() return self._placement_client @wrap_exception() def create_aggregate(self, context, aggregate_name, availability_zone): """Creates the model for the aggregate.""" aggregate = objects.Aggregate(context=context) aggregate.name = aggregate_name if availability_zone: aggregate.metadata = {'availability_zone': availability_zone} aggregate.create() self.query_client.update_aggregates(context, [aggregate]) return aggregate def get_aggregate(self, context, aggregate_id): """Get an aggregate by id.""" return objects.Aggregate.get_by_id(context, aggregate_id) def get_aggregate_list(self, context): """Get all the aggregates.""" return objects.AggregateList.get_all(context) def get_aggregates_by_host(self, context, compute_host): """Get all the aggregates where the given host is presented.""" return objects.AggregateList.get_by_host(context, compute_host) @wrap_exception() def update_aggregate(self, context, aggregate_id, values): """Update the properties of an aggregate.""" aggregate = objects.Aggregate.get_by_id(context, aggregate_id) if 'name' in values: aggregate.name = values.pop('name') aggregate.save() self.is_safe_to_update_az(context, values, aggregate=aggregate, action_name=AGGREGATE_ACTION_UPDATE, check_no_instances_in_az=True) if values: aggregate.update_metadata(values) aggregate.updated_at = timeutils.utcnow() self.query_client.update_aggregates(context, [aggregate]) # If updated values include availability_zones, then the cache # which stored availability_zones and host need to be reset if values.get('availability_zone'): availability_zones.reset_cache() return aggregate @wrap_exception() def 
update_aggregate_metadata(self, context, aggregate_id, metadata): """Updates the aggregate metadata.""" aggregate = objects.Aggregate.get_by_id(context, aggregate_id) self.is_safe_to_update_az(context, metadata, aggregate=aggregate, action_name=AGGREGATE_ACTION_UPDATE_META, check_no_instances_in_az=True) aggregate.update_metadata(metadata) self.query_client.update_aggregates(context, [aggregate]) # If updated metadata include availability_zones, then the cache # which stored availability_zones and host need to be reset if metadata and metadata.get('availability_zone'): availability_zones.reset_cache() aggregate.updated_at = timeutils.utcnow() return aggregate @wrap_exception() def delete_aggregate(self, context, aggregate_id): """Deletes the aggregate.""" aggregate_payload = {'aggregate_id': aggregate_id} compute_utils.notify_about_aggregate_update(context, "delete.start", aggregate_payload) aggregate = objects.Aggregate.get_by_id(context, aggregate_id) compute_utils.notify_about_aggregate_action( context=context, aggregate=aggregate, action=fields_obj.NotificationAction.DELETE, phase=fields_obj.NotificationPhase.START) if len(aggregate.hosts) > 0: msg = _("Host aggregate is not empty") raise exception.InvalidAggregateActionDelete( aggregate_id=aggregate_id, reason=msg) aggregate.destroy() self.query_client.delete_aggregate(context, aggregate) compute_utils.notify_about_aggregate_update(context, "delete.end", aggregate_payload) compute_utils.notify_about_aggregate_action( context=context, aggregate=aggregate, action=fields_obj.NotificationAction.DELETE, phase=fields_obj.NotificationPhase.END) def is_safe_to_update_az(self, context, metadata, aggregate, hosts=None, action_name=AGGREGATE_ACTION_ADD, check_no_instances_in_az=False): """Determine if updates alter an aggregate's availability zone. :param context: local context :param metadata: Target metadata for updating aggregate :param aggregate: Aggregate to update :param hosts: Hosts to check. 
If None, aggregate.hosts is used :type hosts: list :param action_name: Calling method for logging purposes :param check_no_instances_in_az: if True, it checks there is no instances on any hosts of the aggregate """ if 'availability_zone' in metadata: if not metadata['availability_zone']: msg = _("Aggregate %s does not support empty named " "availability zone") % aggregate.name self._raise_invalid_aggregate_exc(action_name, aggregate.id, msg) _hosts = hosts or aggregate.hosts host_aggregates = objects.AggregateList.get_by_metadata_key( context, 'availability_zone', hosts=_hosts) conflicting_azs = [ agg.availability_zone for agg in host_aggregates if agg.availability_zone != metadata['availability_zone'] and agg.id != aggregate.id] if conflicting_azs: msg = _("One or more hosts already in availability zone(s) " "%s") % conflicting_azs self._raise_invalid_aggregate_exc(action_name, aggregate.id, msg) same_az_name = (aggregate.availability_zone == metadata['availability_zone']) if check_no_instances_in_az and not same_az_name: instance_count_by_cell = ( nova_context.scatter_gather_skip_cell0( context, objects.InstanceList.get_count_by_hosts, _hosts)) if any(cnt for cnt in instance_count_by_cell.values()): msg = _("One or more hosts contain instances in this zone") self._raise_invalid_aggregate_exc( action_name, aggregate.id, msg) def _raise_invalid_aggregate_exc(self, action_name, aggregate_id, reason): if action_name == AGGREGATE_ACTION_ADD: raise exception.InvalidAggregateActionAdd( aggregate_id=aggregate_id, reason=reason) elif action_name == AGGREGATE_ACTION_UPDATE: raise exception.InvalidAggregateActionUpdate( aggregate_id=aggregate_id, reason=reason) elif action_name == AGGREGATE_ACTION_UPDATE_META: raise exception.InvalidAggregateActionUpdateMeta( aggregate_id=aggregate_id, reason=reason) elif action_name == AGGREGATE_ACTION_DELETE: raise exception.InvalidAggregateActionDelete( aggregate_id=aggregate_id, reason=reason) raise exception.NovaException( _("Unexpected aggregate action %s") % action_name) def _update_az_cache_for_host(self, context, host_name, aggregate_meta): # Update the availability_zone cache to avoid getting wrong # availability_zone in cache retention time when add/remove # host to/from aggregate. if aggregate_meta and aggregate_meta.get('availability_zone'): availability_zones.update_host_availability_zone_cache(context, host_name) @wrap_exception() def add_host_to_aggregate(self, context, aggregate_id, host_name): """Adds the host to an aggregate.""" aggregate_payload = {'aggregate_id': aggregate_id, 'host_name': host_name} compute_utils.notify_about_aggregate_update(context, "addhost.start", aggregate_payload) service = _get_service_in_cell_by_host(context, host_name) if service.host != host_name: # NOTE(danms): If we found a service but it is not an # exact match, we may have a case-insensitive backend # database (like mysql) which will end up with us # adding the host-aggregate mapping with a # non-matching hostname. 
raise exception.ComputeHostNotFound(host=host_name) aggregate = objects.Aggregate.get_by_id(context, aggregate_id) compute_utils.notify_about_aggregate_action( context=context, aggregate=aggregate, action=fields_obj.NotificationAction.ADD_HOST, phase=fields_obj.NotificationPhase.START) self.is_safe_to_update_az(context, aggregate.metadata, hosts=[host_name], aggregate=aggregate) aggregate.add_host(host_name) self.query_client.update_aggregates(context, [aggregate]) try: self.placement_client.aggregate_add_host( context, aggregate.uuid, host_name=host_name) except (exception.ResourceProviderNotFound, exception.ResourceProviderAggregateRetrievalFailed, exception.ResourceProviderUpdateFailed, exception.ResourceProviderUpdateConflict) as err: # NOTE(jaypipes): We don't want a failure perform the mirroring # action in the placement service to be returned to the user (they # probably don't know anything about the placement service and # would just be confused). So, we just log a warning here, noting # that on the next run of nova-manage placement sync_aggregates # things will go back to normal LOG.warning("Failed to associate %s with a placement " "aggregate: %s. This may be corrected after running " "nova-manage placement sync_aggregates.", host_name, err) self._update_az_cache_for_host(context, host_name, aggregate.metadata) # NOTE(jogo): Send message to host to support resource pools self.compute_rpcapi.add_aggregate_host(context, aggregate=aggregate, host_param=host_name, host=host_name) aggregate_payload.update({'name': aggregate.name}) compute_utils.notify_about_aggregate_update(context, "addhost.end", aggregate_payload) compute_utils.notify_about_aggregate_action( context=context, aggregate=aggregate, action=fields_obj.NotificationAction.ADD_HOST, phase=fields_obj.NotificationPhase.END) return aggregate @wrap_exception() def remove_host_from_aggregate(self, context, aggregate_id, host_name): """Removes host from the aggregate.""" aggregate_payload = {'aggregate_id': aggregate_id, 'host_name': host_name} compute_utils.notify_about_aggregate_update(context, "removehost.start", aggregate_payload) _get_service_in_cell_by_host(context, host_name) aggregate = objects.Aggregate.get_by_id(context, aggregate_id) compute_utils.notify_about_aggregate_action( context=context, aggregate=aggregate, action=fields_obj.NotificationAction.REMOVE_HOST, phase=fields_obj.NotificationPhase.START) # Remove the resource provider from the provider aggregate first before # we change anything on the nova side because if we did the nova stuff # first we can't re-attempt this from the compute API if cleaning up # placement fails. try: # Anything else this raises is handled in the route handler as # either a 409 (ResourceProviderUpdateConflict) or 500. self.placement_client.aggregate_remove_host( context, aggregate.uuid, host_name) except exception.ResourceProviderNotFound as err: # If the resource provider is not found then it's likely not part # of the aggregate anymore anyway since provider aggregates are # not resources themselves with metadata like nova aggregates, they # are just a grouping concept around resource providers. Log and # continue. 
LOG.warning("Failed to remove association of %s with a placement " "aggregate: %s.", host_name, err) aggregate.delete_host(host_name) self.query_client.update_aggregates(context, [aggregate]) self._update_az_cache_for_host(context, host_name, aggregate.metadata) self.compute_rpcapi.remove_aggregate_host(context, aggregate=aggregate, host_param=host_name, host=host_name) compute_utils.notify_about_aggregate_update(context, "removehost.end", aggregate_payload) compute_utils.notify_about_aggregate_action( context=context, aggregate=aggregate, action=fields_obj.NotificationAction.REMOVE_HOST, phase=fields_obj.NotificationPhase.END) return aggregate class KeypairAPI(base.Base): """Subset of the Compute Manager API for managing key pairs.""" get_notifier = functools.partial(rpc.get_notifier, service='api') wrap_exception = functools.partial(exception_wrapper.wrap_exception, get_notifier=get_notifier, binary='nova-api') def _notify(self, context, event_suffix, keypair_name): payload = { 'tenant_id': context.project_id, 'user_id': context.user_id, 'key_name': keypair_name, } notify = self.get_notifier() notify.info(context, 'keypair.%s' % event_suffix, payload) def _validate_new_key_pair(self, context, user_id, key_name, key_type): safe_chars = "_- " + string.digits + string.ascii_letters clean_value = "".join(x for x in key_name if x in safe_chars) if clean_value != key_name: raise exception.InvalidKeypair( reason=_("Keypair name contains unsafe characters")) try: utils.check_string_length(key_name, min_length=1, max_length=255) except exception.InvalidInput: raise exception.InvalidKeypair( reason=_('Keypair name must be string and between ' '1 and 255 characters long')) try: objects.Quotas.check_deltas(context, {'key_pairs': 1}, user_id) except exception.OverQuota: raise exception.KeypairLimitExceeded() @wrap_exception() def import_key_pair(self, context, user_id, key_name, public_key, key_type=keypair_obj.KEYPAIR_TYPE_SSH): """Import a key pair using an existing public key.""" self._validate_new_key_pair(context, user_id, key_name, key_type) self._notify(context, 'import.start', key_name) keypair = objects.KeyPair(context) keypair.user_id = user_id keypair.name = key_name keypair.type = key_type keypair.fingerprint = None keypair.public_key = public_key compute_utils.notify_about_keypair_action( context=context, keypair=keypair, action=fields_obj.NotificationAction.IMPORT, phase=fields_obj.NotificationPhase.START) fingerprint = self._generate_fingerprint(public_key, key_type) keypair.fingerprint = fingerprint keypair.create() compute_utils.notify_about_keypair_action( context=context, keypair=keypair, action=fields_obj.NotificationAction.IMPORT, phase=fields_obj.NotificationPhase.END) self._notify(context, 'import.end', key_name) return keypair @wrap_exception() def create_key_pair(self, context, user_id, key_name, key_type=keypair_obj.KEYPAIR_TYPE_SSH): """Create a new key pair.""" self._validate_new_key_pair(context, user_id, key_name, key_type) keypair = objects.KeyPair(context) keypair.user_id = user_id keypair.name = key_name keypair.type = key_type keypair.fingerprint = None keypair.public_key = None self._notify(context, 'create.start', key_name) compute_utils.notify_about_keypair_action( context=context, keypair=keypair, action=fields_obj.NotificationAction.CREATE, phase=fields_obj.NotificationPhase.START) private_key, public_key, fingerprint = self._generate_key_pair( user_id, key_type) keypair.fingerprint = fingerprint keypair.public_key = public_key keypair.create() # NOTE(melwitt): 
We recheck the quota after creating the object to # prevent users from allocating more resources than their allowed quota # in the event of a race. This is configurable because it can be # expensive if strict quota limits are not required in a deployment. if CONF.quota.recheck_quota: try: objects.Quotas.check_deltas(context, {'key_pairs': 0}, user_id) except exception.OverQuota: keypair.destroy() raise exception.KeypairLimitExceeded() compute_utils.notify_about_keypair_action( context=context, keypair=keypair, action=fields_obj.NotificationAction.CREATE, phase=fields_obj.NotificationPhase.END) self._notify(context, 'create.end', key_name) return keypair, private_key def _generate_fingerprint(self, public_key, key_type): if key_type == keypair_obj.KEYPAIR_TYPE_SSH: return crypto.generate_fingerprint(public_key) elif key_type == keypair_obj.KEYPAIR_TYPE_X509: return crypto.generate_x509_fingerprint(public_key) def _generate_key_pair(self, user_id, key_type): if key_type == keypair_obj.KEYPAIR_TYPE_SSH: return crypto.generate_key_pair() elif key_type == keypair_obj.KEYPAIR_TYPE_X509: return crypto.generate_winrm_x509_cert(user_id) @wrap_exception() def delete_key_pair(self, context, user_id, key_name): """Delete a keypair by name.""" self._notify(context, 'delete.start', key_name) keypair = self.get_key_pair(context, user_id, key_name) compute_utils.notify_about_keypair_action( context=context, keypair=keypair, action=fields_obj.NotificationAction.DELETE, phase=fields_obj.NotificationPhase.START) objects.KeyPair.destroy_by_name(context, user_id, key_name) compute_utils.notify_about_keypair_action( context=context, keypair=keypair, action=fields_obj.NotificationAction.DELETE, phase=fields_obj.NotificationPhase.END) self._notify(context, 'delete.end', key_name) def get_key_pairs(self, context, user_id, limit=None, marker=None): """List key pairs.""" return objects.KeyPairList.get_by_user( context, user_id, limit=limit, marker=marker) def get_key_pair(self, context, user_id, key_name): """Get a keypair by name.""" return objects.KeyPair.get_by_name(context, user_id, key_name) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/compute/build_results.py0000664000175000017500000000201700000000000020437 0ustar00zuulzuul00000000000000# Copyright 2014 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Possible results from instance build Results represent the ultimate result of an attempt to build an instance. Results describe whether an instance was actually built, failed to build, or was rescheduled. 
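# Editor's note: the quota "recheck" in create_key_pair() above guards against
# races where concurrent requests each pass the initial quota check. A
# minimal, self-contained sketch of the create/recheck/rollback pattern with a
# hypothetical in-memory store (not Nova's Quotas object):


class OverQuota(Exception):
    pass


def create_with_recheck(store, limit, record):
    # Optimistically create, then re-count; if a concurrent request pushed the
    # total past the limit, undo our own creation and report the failure.
    store.append(record)
    if len(store) > limit:
        store.remove(record)
        raise OverQuota('quota of %d exceeded' % limit)
    return record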
""" ACTIVE = 'active' # Instance is running FAILED = 'failed' # Instance failed to build and was not rescheduled RESCHEDULED = 'rescheduled' # Instance failed to build, but was rescheduled ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/compute/claims.py0000664000175000017500000002423700000000000017037 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Claim objects for use with resource tracking. """ from oslo_log import log as logging from nova import exception from nova.i18n import _ from nova import objects from nova.virt import hardware LOG = logging.getLogger(__name__) class NopClaim(object): """For use with compute drivers that do not support resource tracking.""" def __init__(self, *args, **kwargs): self.migration = kwargs.pop('migration', None) self.claimed_numa_topology = None def __enter__(self): return self def __exit__(self, exc_type, exc_val, exc_tb): if exc_type is not None: self.abort() def abort(self): pass class Claim(NopClaim): """A declaration that a compute host operation will require free resources. Claims serve as marker objects that resources are being held until the update_available_resource audit process runs to do a full reconciliation of resource usage. This information will be used to help keep the local compute hosts's ComputeNode model in sync to aid the scheduler in making efficient / more correct decisions with respect to host selection. """ def __init__(self, context, instance, nodename, tracker, compute_node, pci_requests, migration=None, limits=None): super(Claim, self).__init__(migration=migration) # Stash a copy of the instance at the current point of time self.instance = instance.obj_clone() self.nodename = nodename self.tracker = tracker self._pci_requests = pci_requests self.context = context # Check claim at constructor to avoid mess code # Raise exception ComputeResourcesUnavailable if claim failed self._claim_test(compute_node, limits) @property def numa_topology(self): return self.instance.numa_topology def abort(self): """Compute operation requiring claimed resources has failed or been aborted. """ LOG.debug("Aborting claim: %s", self, instance=self.instance) self.tracker.abort_instance_claim(self.context, self.instance, self.nodename) def _claim_test(self, compute_node, limits=None): """Test if this claim can be satisfied given available resources and optional oversubscription limits This should be called before the compute node actually consumes the resources required to execute the claim. 
:param compute_node: available local ComputeNode object :param limits: Optional limits to test, either dict or objects.SchedulerLimits :raises: exception.ComputeResourcesUnavailable if any resource claim fails """ if not limits: limits = {} if isinstance(limits, objects.SchedulerLimits): limits = limits.to_dict() # If an individual limit is None, the resource will be considered # unlimited: numa_topology_limit = limits.get('numa_topology') reasons = [self._test_numa_topology(compute_node, numa_topology_limit), self._test_pci()] reasons = [r for r in reasons if r is not None] if len(reasons) > 0: raise exception.ComputeResourcesUnavailable(reason= "; ".join(reasons)) LOG.info('Claim successful on node %s', self.nodename, instance=self.instance) def _test_pci(self): pci_requests = self._pci_requests if pci_requests.requests: stats = self.tracker.pci_tracker.stats if not stats.support_requests(pci_requests.requests): return _('Claim pci failed') def _test_numa_topology(self, compute_node, limit): host_topology = (compute_node.numa_topology if 'numa_topology' in compute_node else None) requested_topology = self.numa_topology if host_topology: host_topology = objects.NUMATopology.obj_from_db_obj( host_topology) pci_requests = self._pci_requests pci_stats = None if pci_requests.requests: pci_stats = self.tracker.pci_tracker.stats instance_topology = ( hardware.numa_fit_instance_to_host( host_topology, requested_topology, limits=limit, pci_requests=pci_requests.requests, pci_stats=pci_stats)) if requested_topology and not instance_topology: if pci_requests.requests: return (_("Requested instance NUMA topology together with " "requested PCI devices cannot fit the given " "host NUMA topology")) else: return (_("Requested instance NUMA topology cannot fit " "the given host NUMA topology")) elif instance_topology: self.claimed_numa_topology = instance_topology class MoveClaim(Claim): """Claim used for holding resources for an incoming move operation. Move can be either a migrate/resize, live-migrate or an evacuate operation. """ def __init__(self, context, instance, nodename, instance_type, image_meta, tracker, compute_node, pci_requests, migration, limits=None): self.context = context self.instance_type = instance_type if isinstance(image_meta, dict): image_meta = objects.ImageMeta.from_dict(image_meta) self.image_meta = image_meta super(MoveClaim, self).__init__(context, instance, nodename, tracker, compute_node, pci_requests, migration=migration, limits=limits) @property def numa_topology(self): return hardware.numa_get_constraints(self.instance_type, self.image_meta) def abort(self): """Compute operation requiring claimed resources has failed or been aborted. """ LOG.debug("Aborting claim: %s", self, instance=self.instance) self.tracker.drop_move_claim( self.context, self.instance, self.nodename, instance_type=self.instance_type) self.instance.drop_migration_context() def _test_pci(self): """Test whether this host can accept this claim's PCI requests. For live migration, only Neutron SRIOV PCI requests are supported. Any other type of PCI device would need to be removed and re-added for live migration to work, and there is currently no support for that. For cold migration, all types of PCI requests are supported, so we just call up to normal Claim's _test_pci(). 
""" if self.migration.migration_type != 'live-migration': return super(MoveClaim, self)._test_pci() elif self._pci_requests.requests: for pci_request in self._pci_requests.requests: if (pci_request.source != objects.InstancePCIRequest.NEUTRON_PORT): return (_('Non-VIF related PCI requests are not ' 'supported for live migration.')) # TODO(artom) At this point, once we've made sure we only have # NEUTRON_PORT (aka SRIOV) PCI requests, we should check whether # the host can support them, like Claim._test_pci() does. However, # SRIOV live migration is currently being handled separately - see # for example _claim_pci_for_instance_vifs() in the compute # manager. So we do nothing here to avoid stepping on that code's # toes, but ideally MoveClaim would be used for all live migration # resource claims. def _test_live_migration_page_size(self): """Tests that the current page size and the requested page size are the same. Must be called after _test_numa_topology() to make sure self.claimed_numa_topology is set. This only applies for live migrations when the hw:mem_page_size extra spec has been set to a non-numeric value (like 'large'). That would in theory allow an instance to live migrate from a host with a 1M page size to a host with a 2M page size, for example. This is not something we want to support, so fail the claim if the page sizes are different. """ if (self.migration.migration_type == 'live-migration' and self.instance.numa_topology and # NOTE(artom) We only support a single page size across all # cells, checking cell 0 is sufficient. self.claimed_numa_topology.cells[0].pagesize != self.instance.numa_topology.cells[0].pagesize): return (_('Requested page size is different from current ' 'page size.')) def _test_numa_topology(self, resources, limit): """Test whether this host can accept the instance's NUMA topology. The _test methods return None on success, and a string-like Message _() object explaining the reason on failure. So we call up to the normal Claim's _test_numa_topology(), and if we get nothing back we test the page size. """ numa_test_failure = super(MoveClaim, self)._test_numa_topology(resources, limit) if numa_test_failure: return numa_test_failure return self._test_live_migration_page_size() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/compute/flavors.py0000664000175000017500000001555700000000000017250 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # Copyright (c) 2010 Citrix Systems, Inc. # Copyright 2011 Ken Pepple # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Built-in instance properties.""" import re from oslo_utils import strutils from oslo_utils import uuidutils import six import nova.conf from nova import context from nova.db import api as db from nova import exception from nova.i18n import _ from nova import objects from nova import utils CONF = nova.conf.CONF # Validate extra specs key names. VALID_EXTRASPEC_NAME_REGEX = re.compile(r"[\w\.\- :]+$", re.UNICODE) def _int_or_none(val): if val is not None: return int(val) system_metadata_flavor_props = { 'id': int, 'name': str, 'memory_mb': int, 'vcpus': int, 'root_gb': int, 'ephemeral_gb': int, 'flavorid': str, 'swap': int, 'rxtx_factor': float, 'vcpu_weight': _int_or_none, } system_metadata_flavor_extra_props = [ 'hw:numa_cpus.', 'hw:numa_mem.', ] def create(name, memory, vcpus, root_gb, ephemeral_gb=0, flavorid=None, swap=0, rxtx_factor=1.0, is_public=True, description=None): """Creates flavors.""" if not flavorid: flavorid = uuidutils.generate_uuid() kwargs = { 'memory_mb': memory, 'vcpus': vcpus, 'root_gb': root_gb, 'ephemeral_gb': ephemeral_gb, 'swap': swap, 'rxtx_factor': rxtx_factor, 'description': description } if isinstance(name, six.string_types): name = name.strip() # NOTE(vish): Internally, flavorid is stored as a string but it comes # in through json as an integer, so we convert it here. flavorid = six.text_type(flavorid) # NOTE(wangbo): validate attributes of the creating flavor. # ram and vcpus should be positive ( > 0) integers. # disk, ephemeral and swap should be non-negative ( >= 0) integers. flavor_attributes = { 'memory_mb': ('ram', 1), 'vcpus': ('vcpus', 1), 'root_gb': ('disk', 0), 'ephemeral_gb': ('ephemeral', 0), 'swap': ('swap', 0) } for key, value in flavor_attributes.items(): kwargs[key] = utils.validate_integer(kwargs[key], value[0], value[1], db.MAX_INT) # rxtx_factor should be a positive float try: kwargs['rxtx_factor'] = float(kwargs['rxtx_factor']) if (kwargs['rxtx_factor'] <= 0 or kwargs['rxtx_factor'] > db.SQL_SP_FLOAT_MAX): raise ValueError() except ValueError: msg = (_("'rxtx_factor' argument must be a float between 0 and %g") % db.SQL_SP_FLOAT_MAX) raise exception.InvalidInput(reason=msg) kwargs['name'] = name kwargs['flavorid'] = flavorid # ensure is_public attribute is boolean try: kwargs['is_public'] = strutils.bool_from_string( is_public, strict=True) except ValueError: raise exception.InvalidInput(reason=_("is_public must be a boolean")) flavor = objects.Flavor(context=context.get_admin_context(), **kwargs) flavor.create() return flavor # TODO(termie): flavor-specific code should probably be in the API that uses # flavors. def get_flavor_by_flavor_id(flavorid, ctxt=None, read_deleted="yes"): """Retrieve flavor by flavorid. :raises: FlavorNotFound """ if ctxt is None: ctxt = context.get_admin_context(read_deleted=read_deleted) return objects.Flavor.get_by_flavor_id(ctxt, flavorid, read_deleted) # NOTE(danms): This method is deprecated, do not use it! # Use instance.{old_,new_,}flavor instead, as instances no longer # have flavor information in system_metadata. def extract_flavor(instance, prefix=''): """Create a Flavor object from instance's system_metadata information. """ flavor = objects.Flavor() sys_meta = utils.instance_sys_meta(instance) if not sys_meta: return None for key in system_metadata_flavor_props.keys(): type_key = '%sinstance_type_%s' % (prefix, key) setattr(flavor, key, sys_meta[type_key]) # NOTE(danms): We do NOT save all of extra_specs, but only the # NUMA-related ones that we need to avoid an uglier alternative. 
This # should be replaced by a general split-out of flavor information from # system_metadata very soon. extra_specs = [(k, v) for k, v in sys_meta.items() if k.startswith('%sinstance_type_extra_' % prefix)] if extra_specs: flavor.extra_specs = {} for key, value in extra_specs: extra_key = key[len('%sinstance_type_extra_' % prefix):] flavor.extra_specs[extra_key] = value return flavor # NOTE(danms): This method is deprecated, do not use it! # Use instance.{old_,new_,}flavor instead, as instances no longer # have flavor information in system_metadata. def save_flavor_info(metadata, instance_type, prefix=''): """Save properties from instance_type into instance's system_metadata, in the format of: [prefix]instance_type_[key] This can be used to update system_metadata in place from a type, as well as stash information about another instance_type for later use (such as during resize). """ for key in system_metadata_flavor_props.keys(): to_key = '%sinstance_type_%s' % (prefix, key) metadata[to_key] = instance_type[key] # NOTE(danms): We do NOT save all of extra_specs here, but only the # NUMA-related ones that we need to avoid an uglier alternative. This # should be replaced by a general split-out of flavor information from # system_metadata very soon. extra_specs = instance_type.get('extra_specs', {}) for extra_prefix in system_metadata_flavor_extra_props: for key in extra_specs: if key.startswith(extra_prefix): to_key = '%sinstance_type_extra_%s' % (prefix, key) metadata[to_key] = extra_specs[key] return metadata def validate_extra_spec_keys(key_names_list): for key_name in key_names_list: if not VALID_EXTRASPEC_NAME_REGEX.match(key_name): expl = _('Key Names can only contain alphanumeric characters, ' 'periods, dashes, underscores, colons and spaces.') raise exception.InvalidInput(message=expl) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/compute/instance_actions.py0000664000175000017500000000511000000000000021100 0ustar00zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Possible actions on an instance. Actions should probably match a user intention at the API level. Because they can be user visible that should help to avoid confusion. For that reason they tend to maintain the casing sent to the API. Maintaining a list of actions here should protect against inconsistencies when they are used. The naming style of instance actions should be snake_case, as it will consistent with the API names. Do not modify the old ones because they have been exposed to users. 
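# Editor's note: save_flavor_info()/extract_flavor() above round-trip a fixed
# set of flavor fields through system_metadata using a
# "[prefix]instance_type_[key]" naming scheme (e.g. an "old_" prefix during a
# resize). A minimal sketch of that round-trip with plain dicts and a
# hypothetical field list, not the Nova code:


_FIELDS = ('memory_mb', 'vcpus', 'root_gb')


def stash_flavor(metadata, flavor, prefix=''):
    for key in _FIELDS:
        metadata['%sinstance_type_%s' % (prefix, key)] = flavor[key]
    return metadata


def unstash_flavor(metadata, prefix=''):
    return {key: metadata['%sinstance_type_%s' % (prefix, key)]
            for key in _FIELDS}


# Round-trip under an "old_" prefix:
_meta = stash_flavor({}, {'memory_mb': 2048, 'vcpus': 2, 'root_gb': 20},
                     prefix='old_')
assert unstash_flavor(_meta, prefix='old_') == {
    'memory_mb': 2048, 'vcpus': 2, 'root_gb': 20}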
""" CREATE = 'create' DELETE = 'delete' EVACUATE = 'evacuate' RESTORE = 'restore' STOP = 'stop' START = 'start' REBOOT = 'reboot' REBUILD = 'rebuild' REVERT_RESIZE = 'revertResize' CONFIRM_RESIZE = 'confirmResize' RESIZE = 'resize' MIGRATE = 'migrate' PAUSE = 'pause' UNPAUSE = 'unpause' SUSPEND = 'suspend' RESUME = 'resume' RESCUE = 'rescue' UNRESCUE = 'unrescue' CHANGE_PASSWORD = 'changePassword' SHELVE = 'shelve' SHELVE_OFFLOAD = 'shelveOffload' UNSHELVE = 'unshelve' LIVE_MIGRATION = 'live-migration' LIVE_MIGRATION_CANCEL = 'live_migration_cancel' LIVE_MIGRATION_FORCE_COMPLETE = 'live_migration_force_complete' TRIGGER_CRASH_DUMP = 'trigger_crash_dump' # The extend_volume action is not like the traditional instance actions which # are driven directly through the compute API. The extend_volume action is # initiated by a Cinder volume extend (resize) action. Cinder will call the # server external events API after the volume extend is performed so that Nova # can perform any updates on the compute side. The instance actions framework # is used for tracking this asynchronous operation so the user/admin can know # when it is done in case they need/want to reboot the guest operating system. EXTEND_VOLUME = 'extend_volume' ATTACH_INTERFACE = 'attach_interface' DETACH_INTERFACE = 'detach_interface' ATTACH_VOLUME = 'attach_volume' DETACH_VOLUME = 'detach_volume' SWAP_VOLUME = 'swap_volume' LOCK = 'lock' UNLOCK = 'unlock' BACKUP = 'createBackup' CREATE_IMAGE = 'createImage' ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/compute/instance_list.py0000664000175000017500000001713700000000000020427 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.compute import multi_cell_list import nova.conf from nova import context from nova.db import api as db from nova import exception from nova import objects from nova.objects import instance as instance_obj CONF = nova.conf.CONF class InstanceSortContext(multi_cell_list.RecordSortContext): def __init__(self, sort_keys, sort_dirs): if not sort_keys: sort_keys = ['created_at', 'id'] sort_dirs = ['desc', 'desc'] if 'uuid' not in sort_keys: # Historically the default sort includes 'id' (see above), which # should give us a stable ordering. Since we're striping across # cell databases here, many sort_keys arrangements will yield # nothing unique across all the databases to give us a stable # ordering, which can mess up expected client pagination behavior. # So, throw uuid into the sort_keys at the end if it's not already # there to keep us repeatable. 
sort_keys = copy.copy(sort_keys) + ['uuid'] sort_dirs = copy.copy(sort_dirs) + ['asc'] super(InstanceSortContext, self).__init__(sort_keys, sort_dirs) class InstanceLister(multi_cell_list.CrossCellLister): def __init__(self, sort_keys, sort_dirs, cells=None, batch_size=None): super(InstanceLister, self).__init__( InstanceSortContext(sort_keys, sort_dirs), cells=cells, batch_size=batch_size) @property def marker_identifier(self): return 'uuid' def get_marker_record(self, ctx, marker): try: im = objects.InstanceMapping.get_by_instance_uuid(ctx, marker) except exception.InstanceMappingNotFound: raise exception.MarkerNotFound(marker=marker) elevated = ctx.elevated(read_deleted='yes') with context.target_cell(elevated, im.cell_mapping) as cctx: try: # NOTE(danms): We query this with no columns_to_join() # as we're just getting values for the sort keys from # it and none of the valid sort keys are on joined # columns. db_inst = db.instance_get_by_uuid(cctx, marker, columns_to_join=[]) except exception.InstanceNotFound: raise exception.MarkerNotFound(marker=marker) return im.cell_mapping.uuid, db_inst def get_marker_by_values(self, ctx, values): return db.instance_get_by_sort_filters(ctx, self.sort_ctx.sort_keys, self.sort_ctx.sort_dirs, values) def get_by_filters(self, ctx, filters, limit, marker, **kwargs): return db.instance_get_all_by_filters_sort( ctx, filters, limit=limit, marker=marker, sort_keys=self.sort_ctx.sort_keys, sort_dirs=self.sort_ctx.sort_dirs, **kwargs) # NOTE(danms): These methods are here for legacy glue reasons. We should not # replicate these for every data type we implement. def get_instances_sorted(ctx, filters, limit, marker, columns_to_join, sort_keys, sort_dirs, cell_mappings=None, batch_size=None, cell_down_support=False): instance_lister = InstanceLister(sort_keys, sort_dirs, cells=cell_mappings, batch_size=batch_size) instance_generator = instance_lister.get_records_sorted( ctx, filters, limit, marker, columns_to_join=columns_to_join, cell_down_support=cell_down_support) return instance_lister, instance_generator def get_instance_list_cells_batch_size(limit, cells): """Calculate the proper batch size for a list request. This will consider config, request limit, and cells being queried and return an appropriate batch size to use for querying said cells. :param limit: The overall limit specified in the request :param cells: The list of CellMapping objects being queried :returns: An integer batch size """ strategy = CONF.api.instance_list_cells_batch_strategy limit = limit or CONF.api.max_limit if len(cells) <= 1: # If we're limited to one (or no) cell for whatever reason, do # not do any batching and just pull the desired limit from the # single cell in one shot. return limit if strategy == 'fixed': # Fixed strategy, always a static batch size batch_size = CONF.api.instance_list_cells_batch_fixed_size elif strategy == 'distributed': # Distributed strategy, 10% more than even partitioning batch_size = int((limit / len(cells)) * 1.10) # We never query a larger batch than the total requested, and never # smaller than the lower limit of 100. return max(min(batch_size, limit), 100) def get_instance_objects_sorted(ctx, filters, limit, marker, expected_attrs, sort_keys, sort_dirs, cell_down_support=False): """Return a list of instances and information about down cells. This returns a tuple of (objects.InstanceList, list(of down cell uuids) for the requested operation. The instances returned are those that were collected from the cells that responded. 
The uuids of any cells that did not respond (or raised an error) are included in the list as the second element of the tuple. That list is empty if all cells responded. """ query_cell_subset = CONF.api.instance_list_per_project_cells # NOTE(danms): Replicated in part from instance_get_all_by_sort_filters(), # where if we're not admin we're restricted to our context's project if query_cell_subset and not ctx.is_admin: # We are not admin, and configured to only query the subset of cells # we could possibly have instances in. cell_mappings = objects.CellMappingList.get_by_project_id( ctx, ctx.project_id) else: # Either we are admin, or configured to always hit all cells, # so don't limit the list to a subset. context.load_cells() cell_mappings = context.CELLS batch_size = get_instance_list_cells_batch_size(limit, cell_mappings) columns_to_join = instance_obj._expected_cols(expected_attrs) instance_lister, instance_generator = get_instances_sorted(ctx, filters, limit, marker, columns_to_join, sort_keys, sort_dirs, cell_mappings=cell_mappings, batch_size=batch_size, cell_down_support=cell_down_support) if 'fault' in expected_attrs: # We join fault above, so we need to make sure we don't ask # make_instance_list to do it again for us expected_attrs = copy.copy(expected_attrs) expected_attrs.remove('fault') instance_list = instance_obj._make_instance_list(ctx, objects.InstanceList(), instance_generator, expected_attrs) down_cell_uuids = (instance_lister.cells_failed + instance_lister.cells_timed_out) return instance_list, down_cell_uuids ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/compute/manager.py0000664000175000017500000177015000000000000017204 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2011 Justin Santa Barbara # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Handles all processes relating to instances (guest vms). The :py:class:`ComputeManager` class is a :py:class:`nova.manager.Manager` that handles RPC calls relating to creating instances. It is responsible for building a disk image, launching it via the underlying virtualization driver, responding to calls to check its state, attaching persistent storage, and terminating it. 
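# Editor's note: get_instance_list_cells_batch_size() in instance_list.py
# above picks a per-cell query size: the full limit when only one cell is
# queried, a fixed size, or roughly 10% over an even split, clamped between
# 100 and the request limit. A standalone sketch of that calculation with
# hypothetical defaults (not Nova's config options):


def batch_size(limit, num_cells, strategy='distributed', fixed_size=100):
    if num_cells <= 1:
        # A single cell: just pull the whole limit in one shot.
        return limit
    if strategy == 'fixed':
        size = fixed_size
    else:
        # 'distributed': 10% more than an even partition across cells.
        size = int((limit / num_cells) * 1.10)
    # Never query more than the request limit, never less than 100.
    return max(min(size, limit), 100)


# e.g. a request for 1000 instances spread over 5 cells batches 220 per cell.
assert batch_size(1000, 5) == 220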
""" import base64 import binascii import contextlib import copy import functools import inspect import sys import time import traceback from cinderclient import exceptions as cinder_exception from cursive import exception as cursive_exception import eventlet.event from eventlet import greenthread import eventlet.semaphore import eventlet.timeout import futurist from keystoneauth1 import exceptions as keystone_exception import os_traits from oslo_log import log as logging import oslo_messaging as messaging from oslo_serialization import jsonutils from oslo_service import loopingcall from oslo_service import periodic_task from oslo_utils import excutils from oslo_utils import strutils from oslo_utils import timeutils from oslo_utils import units import six from six.moves import range from nova.accelerator import cyborg from nova import block_device from nova.compute import api as compute from nova.compute import build_results from nova.compute import claims from nova.compute import power_state from nova.compute import resource_tracker from nova.compute import rpcapi as compute_rpcapi from nova.compute import task_states from nova.compute import utils as compute_utils from nova.compute.utils import wrap_instance_event from nova.compute import vm_states from nova import conductor import nova.conf import nova.context from nova import exception from nova import exception_wrapper from nova import hooks from nova.i18n import _ from nova.image import glance from nova import manager from nova.network import model as network_model from nova.network import neutron from nova import objects from nova.objects import base as obj_base from nova.objects import external_event as external_event_obj from nova.objects import fields from nova.objects import instance as obj_instance from nova.objects import migrate_data as migrate_data_obj from nova.pci import request as pci_req_module from nova.pci import whitelist from nova import rpc from nova import safe_utils from nova.scheduler.client import query from nova.scheduler.client import report from nova.scheduler import utils as scheduler_utils from nova import utils from nova.virt import block_device as driver_block_device from nova.virt import configdrive from nova.virt import driver from nova.virt import event as virtevent from nova.virt import hardware from nova.virt import storage_users from nova.virt import virtapi from nova.volume import cinder CONF = nova.conf.CONF LOG = logging.getLogger(__name__) get_notifier = functools.partial(rpc.get_notifier, service='compute') wrap_exception = functools.partial(exception_wrapper.wrap_exception, get_notifier=get_notifier, binary='nova-compute') @contextlib.contextmanager def errors_out_migration_ctxt(migration): """Context manager to error out migration on failure.""" try: yield except Exception: with excutils.save_and_reraise_exception(): if migration: # We may have been passed None for our migration if we're # receiving from an older client. The migration will be # errored via the legacy path. 
migration.status = 'error' try: migration.save() except Exception: LOG.debug( 'Error setting migration status for instance %s.', migration.instance_uuid, exc_info=True) @utils.expects_func_args('migration') def errors_out_migration(function): """Decorator to error out migration on failure.""" @functools.wraps(function) def decorated_function(self, context, *args, **kwargs): wrapped_func = safe_utils.get_wrapped_function(function) keyed_args = inspect.getcallargs(wrapped_func, self, context, *args, **kwargs) migration = keyed_args['migration'] with errors_out_migration_ctxt(migration): return function(self, context, *args, **kwargs) return decorated_function @utils.expects_func_args('instance') def reverts_task_state(function): """Decorator to revert task_state on failure.""" @functools.wraps(function) def decorated_function(self, context, *args, **kwargs): try: return function(self, context, *args, **kwargs) except exception.UnexpectedTaskStateError as e: # Note(maoy): unexpected task state means the current # task is preempted. Do not clear task state in this # case. with excutils.save_and_reraise_exception(): LOG.info("Task possibly preempted: %s", e.format_message()) except Exception: with excutils.save_and_reraise_exception(): wrapped_func = safe_utils.get_wrapped_function(function) keyed_args = inspect.getcallargs(wrapped_func, self, context, *args, **kwargs) # NOTE(mriedem): 'instance' must be in keyed_args because we # have utils.expects_func_args('instance') decorating this # method. instance = keyed_args['instance'] original_task_state = instance.task_state try: self._instance_update(context, instance, task_state=None) LOG.info("Successfully reverted task state from %s on " "failure for instance.", original_task_state, instance=instance) except exception.InstanceNotFound: # We might delete an instance that failed to build shortly # after it errored out this is an expected case and we # should not trace on it. pass except Exception as e: LOG.warning("Failed to revert task state for instance. " "Error: %s", e, instance=instance) return decorated_function @utils.expects_func_args('instance') def wrap_instance_fault(function): """Wraps a method to catch exceptions related to instances. This decorator wraps a method to catch any exceptions having to do with an instance that may get thrown. It then logs an instance fault in the db. """ @functools.wraps(function) def decorated_function(self, context, *args, **kwargs): try: return function(self, context, *args, **kwargs) except exception.InstanceNotFound: raise except Exception as e: # NOTE(gtt): If argument 'instance' is in args rather than kwargs, # we will get a KeyError exception which will cover up the real # exception. So, we update kwargs with the values from args first. # then, we can get 'instance' from kwargs easily. kwargs.update(dict(zip(function.__code__.co_varnames[2:], args))) with excutils.save_and_reraise_exception(): compute_utils.add_instance_fault_from_exc(context, kwargs['instance'], e, sys.exc_info()) return decorated_function @utils.expects_func_args('image_id', 'instance') def delete_image_on_error(function): """Used for snapshot related method to ensure the image created in compute.api is deleted when an error occurs. 
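    A minimal usage sketch; the decorated method shown here is illustrative
    and only demonstrates the expected
    ``(self, context, image_id, instance, ...)`` signature::

        @delete_image_on_error
        def backup_instance(self, context, image_id, instance,
                            backup_type, rotation):
            # Any exception raised while creating the backup causes the
            # partially-created image to be deleted from the image service.
            ...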
""" @functools.wraps(function) def decorated_function(self, context, image_id, instance, *args, **kwargs): try: return function(self, context, image_id, instance, *args, **kwargs) except Exception: with excutils.save_and_reraise_exception(): compute_utils.delete_image( context, instance, self.image_api, image_id, log_exc_info=True) return decorated_function # TODO(danms): Remove me after Icehouse # TODO(alaski): Actually remove this after Newton, assuming a major RPC bump # NOTE(mikal): if the method being decorated has more than one decorator, then # put this one first. Otherwise the various exception handling decorators do # not function correctly. def object_compat(function): """Wraps a method that expects a new-world instance This provides compatibility for callers passing old-style dict instances. """ @functools.wraps(function) def decorated_function(self, context, *args, **kwargs): def _load_instance(instance_or_dict): if isinstance(instance_or_dict, dict): # try to get metadata and system_metadata for most cases but # only attempt to load those if the db instance already has # those fields joined metas = [meta for meta in ('metadata', 'system_metadata') if meta in instance_or_dict] instance = objects.Instance._from_db_object( context, objects.Instance(), instance_or_dict, expected_attrs=metas) instance._context = context return instance return instance_or_dict try: kwargs['instance'] = _load_instance(kwargs['instance']) except KeyError: args = (_load_instance(args[0]),) + args[1:] migration = kwargs.get('migration') if isinstance(migration, dict): migration = objects.Migration._from_db_object( context.elevated(), objects.Migration(), migration) kwargs['migration'] = migration return function(self, context, *args, **kwargs) return decorated_function class InstanceEvents(object): def __init__(self): self._events = {} @staticmethod def _lock_name(instance): return '%s-%s' % (instance.uuid, 'events') def prepare_for_instance_event(self, instance, name, tag): """Prepare to receive an event for an instance. This will register an event for the given instance that we will wait on later. This should be called before initiating whatever action will trigger the event. The resulting eventlet.event.Event object should be wait()'d on to ensure completion. :param instance: the instance for which the event will be generated :param name: the name of the event we're expecting :param tag: the tag associated with the event we're expecting :returns: an event object that should be wait()'d on """ if self._events is None: # NOTE(danms): We really should have a more specific error # here, but this is what we use for our default error case raise exception.NovaException('In shutdown, no new events ' 'can be scheduled') @utils.synchronized(self._lock_name(instance)) def _create_or_get_event(): instance_events = self._events.setdefault(instance.uuid, {}) return instance_events.setdefault((name, tag), eventlet.event.Event()) LOG.debug('Preparing to wait for external event %(name)s-%(tag)s', {'name': name, 'tag': tag}, instance=instance) return _create_or_get_event() def pop_instance_event(self, instance, event): """Remove a pending event from the wait list. This will remove a pending event from the wait list so that it can be used to signal the waiters to wake up. 
:param instance: the instance for which the event was generated :param event: the nova.objects.external_event.InstanceExternalEvent that describes the event :returns: the eventlet.event.Event object on which the waiters are blocked """ no_events_sentinel = object() no_matching_event_sentinel = object() @utils.synchronized(self._lock_name(instance)) def _pop_event(): if self._events is None: LOG.debug('Unexpected attempt to pop events during shutdown', instance=instance) return no_events_sentinel events = self._events.get(instance.uuid) if not events: return no_events_sentinel _event = events.pop((event.name, event.tag), None) if not events: del self._events[instance.uuid] if _event is None: return no_matching_event_sentinel return _event result = _pop_event() if result is no_events_sentinel: LOG.debug('No waiting events found dispatching %(event)s', {'event': event.key}, instance=instance) return None elif result is no_matching_event_sentinel: LOG.debug('No event matching %(event)s in %(events)s', {'event': event.key, 'events': self._events.get(instance.uuid, {}).keys()}, instance=instance) return None else: return result def clear_events_for_instance(self, instance): """Remove all pending events for an instance. This will remove all events currently pending for an instance and return them (indexed by event name). :param instance: the instance for which events should be purged :returns: a dictionary of {event_name: eventlet.event.Event} """ @utils.synchronized(self._lock_name(instance)) def _clear_events(): if self._events is None: LOG.debug('Unexpected attempt to clear events during shutdown', instance=instance) return dict() # NOTE(danms): We have historically returned the raw internal # format here, which is {event.key: [events, ...])} so just # trivially convert it here. return {'%s-%s' % k: e for k, e in self._events.pop(instance.uuid, {}).items()} return _clear_events() def cancel_all_events(self): if self._events is None: LOG.debug('Unexpected attempt to cancel events during shutdown.') return our_events = self._events # NOTE(danms): Block new events self._events = None for instance_uuid, events in our_events.items(): for (name, tag), eventlet_event in events.items(): LOG.debug('Canceling in-flight event %(name)s-%(tag)s for ' 'instance %(instance_uuid)s', {'name': name, 'tag': tag, 'instance_uuid': instance_uuid}) event = objects.InstanceExternalEvent( instance_uuid=instance_uuid, name=name, status='failed', tag=tag, data={}) eventlet_event.send(event) class ComputeVirtAPI(virtapi.VirtAPI): def __init__(self, compute): super(ComputeVirtAPI, self).__init__() self._compute = compute self.reportclient = compute.reportclient class ExitEarly(Exception): def __init__(self, events): super(Exception, self).__init__() self.events = events self._exit_early_exc = ExitEarly def exit_wait_early(self, events): """Exit a wait_for_instance_event() immediately and avoid waiting for some events. :param: events: A list of (name, tag) tuples for events that we should skip waiting for during a wait_for_instance_event(). """ raise self._exit_early_exc(events=events) def _default_error_callback(self, event_name, instance): raise exception.NovaException(_('Instance event failed')) @contextlib.contextmanager def wait_for_instance_event(self, instance, event_names, deadline=300, error_callback=None): """Plan to wait for some events, run some code, then wait. This context manager will first create plans to wait for the provided event_names, yield, and then wait for all the scheduled events to complete. 
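        A minimal usage sketch, with hedged, illustrative names
        (``expected_vif_ids``, ``failure_callback`` and
        ``start_the_operation`` are placeholders, not part of this module)::

            events = [('network-vif-plugged', vif_id)
                      for vif_id in expected_vif_ids]
            with virtapi.wait_for_instance_event(
                    instance, events, deadline=300,
                    error_callback=failure_callback):
                # Kick off whatever triggers the external events, e.g.
                # asking the virt driver to plug the VIFs.
                start_the_operation()
            # If the with-block exits normally, every event either arrived
            # with a 'completed' status or was skipped via exit_wait_early()
            # before the deadline expired.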
Note that this uses an eventlet.timeout.Timeout to bound the operation, so callers should be prepared to catch that failure and handle that situation appropriately. If the event is not received by the specified timeout deadline, eventlet.timeout.Timeout is raised. If the event is received but did not have a 'completed' status, a NovaException is raised. If an error_callback is provided, instead of raising an exception as detailed above for the failure case, the callback will be called with the event_name and instance, and can return True to continue waiting for the rest of the events, False to stop processing, or raise an exception which will bubble up to the waiter. If the inner code wishes to abort waiting for one or more events because it knows some state to be finished or condition to be satisfied, it can use VirtAPI.exit_wait_early() with a list of event (name,tag) items to avoid waiting for those events upon context exit. Note that exit_wait_early() exits the context immediately and should be used to signal that all work has been completed and provide the unified list of events that need not be waited for. Waiting for the remaining events will begin immediately upon early exit as if the context was exited normally. :param instance: The instance for which an event is expected :param event_names: A list of event names. Each element is a tuple of strings to indicate (name, tag), where name is required, but tag may be None. :param deadline: Maximum number of seconds we should wait for all of the specified events to arrive. :param error_callback: A function to be called if an event arrives """ if error_callback is None: error_callback = self._default_error_callback events = {} for event_name in event_names: name, tag = event_name event_name = objects.InstanceExternalEvent.make_key(name, tag) try: events[event_name] = ( self._compute.instance_events.prepare_for_instance_event( instance, name, tag)) except exception.NovaException: error_callback(event_name, instance) # NOTE(danms): Don't wait for any of the events. They # should all be canceled and fired immediately below, # but don't stick around if not. deadline = 0 try: yield except self._exit_early_exc as e: early_events = set([objects.InstanceExternalEvent.make_key(n, t) for n, t in e.events]) else: early_events = [] with eventlet.timeout.Timeout(deadline): for event_name, event in events.items(): if event_name in early_events: continue else: actual_event = event.wait() if actual_event.status == 'completed': continue # If we get here, we have an event that was not completed, # nor skipped via exit_wait_early(). Decide whether to # keep waiting by calling the error_callback() hook. decision = error_callback(event_name, instance) if decision is False: break def update_compute_provider_status(self, context, rp_uuid, enabled): """Used to add/remove the COMPUTE_STATUS_DISABLED trait on the provider :param context: nova auth RequestContext :param rp_uuid: UUID of a compute node resource provider in Placement :param enabled: True if the node is enabled in which case the trait would be removed, False if the node is disabled in which case the trait would be added. :raises: ResourceProviderTraitRetrievalFailed :raises: ResourceProviderUpdateConflict :raises: ResourceProviderUpdateFailed :raises: TraitRetrievalFailed :raises: keystoneauth1.exceptions.ClientException """ trait_name = os_traits.COMPUTE_STATUS_DISABLED # Get the current traits (and generation) for the provider. 
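        # Illustrative example (the extra trait value is hypothetical): if
        # the provider currently reports {'HW_CPU_X86_AVX2',
        # 'COMPUTE_STATUS_DISABLED'} and enabled=True, the disabled trait is
        # removed and {'HW_CPU_X86_AVX2'} is written back; conversely, with
        # enabled=False and no disabled trait present, the trait is added
        # via the set union below.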
# TODO(mriedem): Leverage the ProviderTree cache in get_provider_traits trait_info = self.reportclient.get_provider_traits(context, rp_uuid) # If the host is enabled, remove the trait (if set), else add # the trait if it doesn't already exist. original_traits = trait_info.traits new_traits = None if enabled and trait_name in original_traits: new_traits = original_traits - {trait_name} LOG.debug('Removing trait %s from compute node resource ' 'provider %s in placement.', trait_name, rp_uuid) elif not enabled and trait_name not in original_traits: new_traits = original_traits | {trait_name} LOG.debug('Adding trait %s to compute node resource ' 'provider %s in placement.', trait_name, rp_uuid) if new_traits is not None: self.reportclient.set_traits_for_provider( context, rp_uuid, new_traits) class ComputeManager(manager.Manager): """Manages the running instances from creation to destruction.""" target = messaging.Target(version='5.11') def __init__(self, compute_driver=None, *args, **kwargs): """Load configuration options and connect to the hypervisor.""" # We want the ComputeManager, ResourceTracker and ComputeVirtAPI all # using the same instance of SchedulerReportClient which has the # ProviderTree cache for this compute service. self.reportclient = report.SchedulerReportClient() self.virtapi = ComputeVirtAPI(self) self.network_api = neutron.API() self.volume_api = cinder.API() self.image_api = glance.API() self._last_bw_usage_poll = 0 self._bw_usage_supported = True self.compute_api = compute.API() self.compute_rpcapi = compute_rpcapi.ComputeAPI() self.compute_task_api = conductor.ComputeTaskAPI() self.query_client = query.SchedulerQueryClient() self.instance_events = InstanceEvents() self._sync_power_pool = eventlet.GreenPool( size=CONF.sync_power_state_pool_size) self._syncs_in_progress = {} self.send_instance_updates = ( CONF.filter_scheduler.track_instance_changes) if CONF.max_concurrent_builds != 0: self._build_semaphore = eventlet.semaphore.Semaphore( CONF.max_concurrent_builds) else: self._build_semaphore = compute_utils.UnlimitedSemaphore() if CONF.max_concurrent_live_migrations > 0: self._live_migration_executor = futurist.GreenThreadPoolExecutor( max_workers=CONF.max_concurrent_live_migrations) else: # CONF.max_concurrent_live_migrations is 0 (unlimited) self._live_migration_executor = futurist.GreenThreadPoolExecutor() # This is a dict, keyed by instance uuid, to a two-item tuple of # migration object and Future for the queued live migration. self._waiting_live_migrations = {} super(ComputeManager, self).__init__(service_name="compute", *args, **kwargs) # NOTE(russellb) Load the driver last. It may call back into the # compute manager via the virtapi, so we want it to be fully # initialized before that happens. 
self.driver = driver.load_compute_driver(self.virtapi, compute_driver) self.use_legacy_block_device_info = \ self.driver.need_legacy_block_device_info self.rt = resource_tracker.ResourceTracker( self.host, self.driver, reportclient=self.reportclient) def reset(self): LOG.info('Reloading compute RPC API') compute_rpcapi.reset_globals() self.compute_rpcapi = compute_rpcapi.ComputeAPI() self.reportclient.clear_provider_cache() def _update_resource_tracker(self, context, instance): """Let the resource tracker know that an instance has changed state.""" if instance.host == self.host: self.rt.update_usage(context, instance, instance.node) def _instance_update(self, context, instance, **kwargs): """Update an instance in the database using kwargs as value.""" for k, v in kwargs.items(): setattr(instance, k, v) instance.save() self._update_resource_tracker(context, instance) def _nil_out_instance_obj_host_and_node(self, instance): # NOTE(jwcroppe): We don't do instance.save() here for performance # reasons; a call to this is expected to be immediately followed by # another call that does instance.save(), thus avoiding two writes # to the database layer. instance.host = None instance.node = None # ResourceTracker._set_instance_host_and_node also sets launched_on # to the same value as host and is really only ever used by legacy # nova-network code, but we should also null it out to avoid confusion # if there is an instance in the database with no host set but # launched_on is set. Note that we do not care about using launched_on # as some kind of debug helper if diagnosing a build failure, that is # what instance action events are for. instance.launched_on = None # If the instance is not on a host, it's not in an aggregate and # therefore is not in an availability zone. instance.availability_zone = None def _set_instance_obj_error_state(self, context, instance, clean_task_state=False): try: instance.vm_state = vm_states.ERROR if clean_task_state: instance.task_state = None instance.save() except exception.InstanceNotFound: LOG.debug('Instance has been destroyed from under us while ' 'trying to set it to ERROR', instance=instance) def _get_instances_on_driver(self, context, filters=None): """Return a list of instance records for the instances found on the hypervisor which satisfy the specified filters. If filters=None return a list of instance records for all the instances found on the hypervisor. """ if not filters: filters = {} try: driver_uuids = self.driver.list_instance_uuids() if len(driver_uuids) == 0: # Short circuit, don't waste a DB call return objects.InstanceList() filters['uuid'] = driver_uuids local_instances = objects.InstanceList.get_by_filters( context, filters, use_slave=True) return local_instances except NotImplementedError: pass # The driver doesn't support uuids listing, so we'll have # to brute force. driver_instances = self.driver.list_instances() # NOTE(mjozefcz): In this case we need to apply host filter. # Without this all instance data would be fetched from db. filters['host'] = self.host instances = objects.InstanceList.get_by_filters(context, filters, use_slave=True) name_map = {instance.name: instance for instance in instances} local_instances = [] for driver_instance in driver_instances: instance = name_map.get(driver_instance) if not instance: continue local_instances.append(instance) return local_instances def _destroy_evacuated_instances(self, context, node_cache): """Destroys evacuated instances. 
While nova-compute was down, the instances running on it could be evacuated to another host. This method looks for evacuation migration records where this is the source host and which were either started (accepted), in-progress (pre-migrating) or migrated (done). From those migration records, local instances reported by the hypervisor are compared to the instances for the migration records and those local guests are destroyed, along with instance allocation records in Placement for this node. Then allocations are removed from Placement for every instance that is evacuated from this host regardless if the instance is reported by the hypervisor or not. :param context: The request context :param node_cache: A dict of ComputeNode objects keyed by the UUID of the compute node :return: A dict keyed by instance uuid mapped to Migration objects for instances that were migrated away from this host """ filters = { 'source_compute': self.host, # NOTE(mriedem): Migration records that have been accepted are # included in case the source node comes back up while instances # are being evacuated to another host. We don't want the same # instance being reported from multiple hosts. # NOTE(lyarwood): pre-migrating is also included here as the # source compute can come back online shortly after the RT # claims on the destination that in-turn moves the migration to # pre-migrating. If the evacuate fails on the destination host, # the user can rebuild the instance (in ERROR state) on the source # host. 'status': ['accepted', 'pre-migrating', 'done'], 'migration_type': 'evacuation', } with utils.temporary_mutation(context, read_deleted='yes'): evacuations = objects.MigrationList.get_by_filters(context, filters) if not evacuations: return {} evacuations = {mig.instance_uuid: mig for mig in evacuations} # TODO(mriedem): We could optimize by pre-loading the joined fields # we know we'll use, like info_cache and flavor. 
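        # Illustrative example (UUIDs are hypothetical): if evacuations ends
        # up as {uuid_a: Migration(status='done'),
        #        uuid_b: Migration(status='accepted')}
        # and the hypervisor still reports a local guest for uuid_a, that
        # guest is destroyed below; allocations against this node are then
        # removed for both uuid_a and uuid_b unless the instance has been
        # deleted in the meantime.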
local_instances = self._get_instances_on_driver(context) evacuated_local_instances = {inst.uuid: inst for inst in local_instances if inst.uuid in evacuations} for instance in evacuated_local_instances.values(): LOG.info('Destroying instance as it has been evacuated from ' 'this host but still exists in the hypervisor', instance=instance) try: network_info = self.network_api.get_instance_nw_info( context, instance) bdi = self._get_instance_block_device_info(context, instance) destroy_disks = not (self._is_instance_storage_shared( context, instance)) except exception.InstanceNotFound: network_info = network_model.NetworkInfo() bdi = {} LOG.info('Instance has been marked deleted already, ' 'removing it from the hypervisor.', instance=instance) # always destroy disks if the instance was deleted destroy_disks = True self.driver.destroy(context, instance, network_info, bdi, destroy_disks) hostname_to_cn_uuid = { cn.hypervisor_hostname: cn.uuid for cn in node_cache.values()} for instance_uuid, migration in evacuations.items(): try: if instance_uuid in evacuated_local_instances: # Avoid the db call if we already have the instance loaded # above instance = evacuated_local_instances[instance_uuid] else: instance = objects.Instance.get_by_uuid( context, instance_uuid) except exception.InstanceNotFound: # The instance already deleted so we expect that every # allocation of that instance has already been cleaned up continue LOG.info('Cleaning up allocations of the instance as it has been ' 'evacuated from this host', instance=instance) if migration.source_node not in hostname_to_cn_uuid: LOG.error("Failed to clean allocation of evacuated " "instance as the source node %s is not found", migration.source_node, instance=instance) continue cn_uuid = hostname_to_cn_uuid[migration.source_node] # If the instance was deleted in the interim, assume its # allocations were properly cleaned up (either by its hosting # compute service or the API). if (not instance.deleted and not self.reportclient. remove_provider_tree_from_instance_allocation( context, instance.uuid, cn_uuid)): LOG.error("Failed to clean allocation of evacuated instance " "on the source node %s", cn_uuid, instance=instance) migration.status = 'completed' migration.save() return evacuations def _is_instance_storage_shared(self, context, instance, host=None): shared_storage = True data = None try: data = self.driver.check_instance_shared_storage_local(context, instance) if data: shared_storage = (self.compute_rpcapi. 
check_instance_shared_storage(context, instance, data, host=host)) except NotImplementedError: LOG.debug('Hypervisor driver does not support ' 'instance shared storage check, ' 'assuming it\'s not on shared storage', instance=instance) shared_storage = False except Exception: LOG.exception('Failed to check if instance shared', instance=instance) finally: if data: self.driver.check_instance_shared_storage_cleanup(context, data) return shared_storage def _complete_partial_deletion(self, context, instance): """Complete deletion for instances in DELETED status but not marked as deleted in the DB """ instance.destroy() bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) self._complete_deletion(context, instance) self._notify_about_instance_usage(context, instance, "delete.end") compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.DELETE, phase=fields.NotificationPhase.END, bdms=bdms) def _complete_deletion(self, context, instance): self._update_resource_tracker(context, instance) self.reportclient.delete_allocation_for_instance(context, instance.uuid) self._clean_instance_console_tokens(context, instance) self._delete_scheduler_instance_info(context, instance.uuid) def _validate_pinning_configuration(self, instances): if not self.driver.capabilities.get('supports_pcpus', False): return for instance in instances: # ignore deleted instances if instance.deleted: continue # if this is an unpinned instance and the host only has # 'cpu_dedicated_set' configured, we need to tell the operator to # correct their configuration if not (instance.numa_topology and instance.numa_topology.cpu_pinning_requested): # we don't need to check 'vcpu_pin_set' since it can't coexist # alongside 'cpu_dedicated_set' if (CONF.compute.cpu_dedicated_set and not CONF.compute.cpu_shared_set): msg = _("This host has unpinned instances but has no CPUs " "set aside for this purpose; configure '[compute] " "cpu_shared_set' instead of, or in addition to, " "'[compute] cpu_dedicated_set'") raise exception.InvalidConfiguration(msg) continue # ditto for pinned instances if only 'cpu_shared_set' is configured if (CONF.compute.cpu_shared_set and not CONF.compute.cpu_dedicated_set and not CONF.vcpu_pin_set): msg = _("This host has pinned instances but has no CPUs " "set aside for this purpose; configure '[compute] " "cpu_dedicated_set' instead of, or in addition to, " "'[compute] cpu_shared_set'") raise exception.InvalidConfiguration(msg) # also check to make sure the operator hasn't accidentally # dropped some cores that instances are currently using available_dedicated_cpus = (hardware.get_vcpu_pin_set() or hardware.get_cpu_dedicated_set()) pinned_cpus = instance.numa_topology.cpu_pinning if available_dedicated_cpus and ( pinned_cpus - available_dedicated_cpus): # we can't raise an exception because of bug #1289064, # which meant we didn't recalculate CPU pinning information # when we live migrated a pinned instance LOG.warning( "Instance is pinned to host CPUs %(cpus)s " "but one or more of these CPUs are not included in " "either '[compute] cpu_dedicated_set' or " "'vcpu_pin_set'; you should update these " "configuration options to include the missing CPUs " "or rebuild or cold migrate this instance.", {'cpus': list(pinned_cpus)}, instance=instance) def _reset_live_migration(self, context, instance): migration = None try: migration = objects.Migration.get_by_instance_and_status( context, instance.uuid, 'running') if migration: 
self.live_migration_abort(context, instance, migration.id) except Exception: LOG.exception('Failed to abort live-migration', instance=instance) finally: if migration: self._set_migration_status(migration, 'error') LOG.info('Instance found in migrating state during ' 'startup. Resetting task_state', instance=instance) instance.task_state = None instance.save(expected_task_state=[task_states.MIGRATING]) def _init_instance(self, context, instance): """Initialize this instance during service init.""" # NOTE(danms): If the instance appears to not be owned by this # host, it may have been evacuated away, but skipped by the # evacuation cleanup code due to configuration. Thus, if that # is a possibility, don't touch the instance in any way, but # log the concern. This will help avoid potential issues on # startup due to misconfiguration. if instance.host != self.host: LOG.warning('Instance %(uuid)s appears to not be owned ' 'by this host, but by %(host)s. Startup ' 'processing is being skipped.', {'uuid': instance.uuid, 'host': instance.host}) return # Instances that are shut down, or in an error state can not be # initialized and are not attempted to be recovered. The exception # to this are instances that are in RESIZE_MIGRATING or DELETING, # which are dealt with further down. if (instance.vm_state == vm_states.SOFT_DELETED or (instance.vm_state == vm_states.ERROR and instance.task_state not in (task_states.RESIZE_MIGRATING, task_states.DELETING))): LOG.debug("Instance is in %s state.", instance.vm_state, instance=instance) return if instance.vm_state == vm_states.DELETED: try: self._complete_partial_deletion(context, instance) except Exception: # we don't want that an exception blocks the init_host LOG.exception('Failed to complete a deletion', instance=instance) return if (instance.vm_state == vm_states.BUILDING or instance.task_state in [task_states.SCHEDULING, task_states.BLOCK_DEVICE_MAPPING, task_states.NETWORKING, task_states.SPAWNING]): # NOTE(dave-mcnally) compute stopped before instance was fully # spawned so set to ERROR state. This is safe to do as the state # may be set by the api but the host is not so if we get here the # instance has already been scheduled to this particular host. LOG.debug("Instance failed to spawn correctly, " "setting to ERROR state", instance=instance) self._set_instance_obj_error_state( context, instance, clean_task_state=True) return if (instance.vm_state in [vm_states.ACTIVE, vm_states.STOPPED] and instance.task_state in [task_states.REBUILDING, task_states.REBUILD_BLOCK_DEVICE_MAPPING, task_states.REBUILD_SPAWNING]): # NOTE(jichenjc) compute stopped before instance was fully # spawned so set to ERROR state. 
This is consistent to BUILD LOG.debug("Instance failed to rebuild correctly, " "setting to ERROR state", instance=instance) self._set_instance_obj_error_state( context, instance, clean_task_state=True) return if (instance.vm_state != vm_states.ERROR and instance.task_state in [task_states.IMAGE_SNAPSHOT_PENDING, task_states.IMAGE_PENDING_UPLOAD, task_states.IMAGE_UPLOADING, task_states.IMAGE_SNAPSHOT]): LOG.debug("Instance in transitional state %s at start-up " "clearing task state", instance.task_state, instance=instance) try: self._post_interrupted_snapshot_cleanup(context, instance) except Exception: # we don't want that an exception blocks the init_host LOG.exception('Failed to cleanup snapshot.', instance=instance) instance.task_state = None instance.save() if (instance.vm_state != vm_states.ERROR and instance.task_state in [task_states.RESIZE_PREP]): LOG.debug("Instance in transitional state %s at start-up " "clearing task state", instance['task_state'], instance=instance) instance.task_state = None instance.save() if instance.task_state == task_states.DELETING: try: LOG.info('Service started deleting the instance during ' 'the previous run, but did not finish. Restarting' ' the deletion now.', instance=instance) instance.obj_load_attr('metadata') instance.obj_load_attr('system_metadata') bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) self._delete_instance(context, instance, bdms) except Exception: # we don't want that an exception blocks the init_host LOG.exception('Failed to complete a deletion', instance=instance) self._set_instance_obj_error_state(context, instance) return current_power_state = self._get_power_state(context, instance) try_reboot, reboot_type = self._retry_reboot(context, instance, current_power_state) if try_reboot: LOG.debug("Instance in transitional state (%(task_state)s) at " "start-up and power state is (%(power_state)s), " "triggering reboot", {'task_state': instance.task_state, 'power_state': current_power_state}, instance=instance) # NOTE(mikal): if the instance was doing a soft reboot that got as # far as shutting down the instance but not as far as starting it # again, then we've just become a hard reboot. That means the # task state for the instance needs to change so that we're in one # of the expected task states for a hard reboot. 
if (instance.task_state in task_states.soft_reboot_states and reboot_type == 'HARD'): instance.task_state = task_states.REBOOT_PENDING_HARD instance.save() self.reboot_instance(context, instance, block_device_info=None, reboot_type=reboot_type) return elif (current_power_state == power_state.RUNNING and instance.task_state in [task_states.REBOOT_STARTED, task_states.REBOOT_STARTED_HARD, task_states.PAUSING, task_states.UNPAUSING]): LOG.warning("Instance in transitional state " "(%(task_state)s) at start-up and power state " "is (%(power_state)s), clearing task state", {'task_state': instance.task_state, 'power_state': current_power_state}, instance=instance) instance.task_state = None instance.vm_state = vm_states.ACTIVE instance.save() elif (current_power_state == power_state.PAUSED and instance.task_state == task_states.UNPAUSING): LOG.warning("Instance in transitional state " "(%(task_state)s) at start-up and power state " "is (%(power_state)s), clearing task state " "and unpausing the instance", {'task_state': instance.task_state, 'power_state': current_power_state}, instance=instance) try: self.unpause_instance(context, instance) except NotImplementedError: # Some virt driver didn't support pause and unpause pass except Exception: LOG.exception('Failed to unpause instance', instance=instance) return if instance.task_state == task_states.POWERING_OFF: try: LOG.debug("Instance in transitional state %s at start-up " "retrying stop request", instance.task_state, instance=instance) self.stop_instance(context, instance, True) except Exception: # we don't want that an exception blocks the init_host LOG.exception('Failed to stop instance', instance=instance) return if instance.task_state == task_states.POWERING_ON: try: LOG.debug("Instance in transitional state %s at start-up " "retrying start request", instance.task_state, instance=instance) self.start_instance(context, instance) except Exception: # we don't want that an exception blocks the init_host LOG.exception('Failed to start instance', instance=instance) return net_info = instance.get_network_info() try: self.driver.plug_vifs(instance, net_info) except NotImplementedError as e: LOG.debug(e, instance=instance) except exception.VirtualInterfacePlugException: # NOTE(mriedem): If we get here, it could be because the vif_type # in the cache is "binding_failed" or "unbound". # The periodic task _heal_instance_info_cache checks for this # condition. It should fix this by binding the ports again when # it gets to this instance. LOG.exception('Virtual interface plugging failed for instance. ' 'The port binding:host_id may need to be manually ' 'updated.', instance=instance) self._set_instance_obj_error_state(context, instance) return if instance.task_state == task_states.RESIZE_MIGRATING: # We crashed during resize/migration, so roll back for safety try: # NOTE(mriedem): check old_vm_state for STOPPED here, if it's # not in system_metadata we default to True for backwards # compatibility power_on = (instance.system_metadata.get('old_vm_state') != vm_states.STOPPED) block_dev_info = self._get_instance_block_device_info(context, instance) migration = objects.Migration.get_by_id_and_instance( context, instance.migration_context.migration_id, instance.uuid) self.driver.finish_revert_migration(context, instance, net_info, migration, block_dev_info, power_on) except Exception: LOG.exception('Failed to revert crashed migration', instance=instance) finally: LOG.info('Instance found in migrating state during ' 'startup. 
Resetting task_state', instance=instance) instance.task_state = None instance.save() if instance.task_state == task_states.MIGRATING: # Live migration did not complete, but instance is on this # host. Abort ongoing migration if still running and reset state. self._reset_live_migration(context, instance) db_state = instance.power_state drv_state = self._get_power_state(context, instance) expect_running = (db_state == power_state.RUNNING and drv_state != db_state) LOG.debug('Current state is %(drv_state)s, state in DB is ' '%(db_state)s.', {'drv_state': drv_state, 'db_state': db_state}, instance=instance) if expect_running and CONF.resume_guests_state_on_host_boot: self._resume_guests_state(context, instance, net_info) def _resume_guests_state(self, context, instance, net_info): LOG.info('Rebooting instance after nova-compute restart.', instance=instance) block_device_info = \ self._get_instance_block_device_info(context, instance) try: self.driver.resume_state_on_host_boot( context, instance, net_info, block_device_info) except NotImplementedError: LOG.warning('Hypervisor driver does not support ' 'resume guests', instance=instance) except Exception: # NOTE(vish): The instance failed to resume, so we set the # instance to error and attempt to continue. LOG.warning('Failed to resume instance', instance=instance) self._set_instance_obj_error_state(context, instance) def _retry_reboot(self, context, instance, current_power_state): current_task_state = instance.task_state retry_reboot = False reboot_type = compute_utils.get_reboot_type(current_task_state, current_power_state) pending_soft = ( current_task_state == task_states.REBOOT_PENDING and instance.vm_state in vm_states.ALLOW_SOFT_REBOOT) pending_hard = ( current_task_state == task_states.REBOOT_PENDING_HARD and instance.vm_state in vm_states.ALLOW_HARD_REBOOT) started_not_running = (current_task_state in [task_states.REBOOT_STARTED, task_states.REBOOT_STARTED_HARD] and current_power_state != power_state.RUNNING) if pending_soft or pending_hard or started_not_running: retry_reboot = True return retry_reboot, reboot_type def handle_lifecycle_event(self, event): LOG.info("VM %(state)s (Lifecycle Event)", {'state': event.get_name()}, instance_uuid=event.get_instance_uuid()) context = nova.context.get_admin_context(read_deleted='yes') vm_power_state = None event_transition = event.get_transition() if event_transition == virtevent.EVENT_LIFECYCLE_STOPPED: vm_power_state = power_state.SHUTDOWN elif event_transition == virtevent.EVENT_LIFECYCLE_STARTED: vm_power_state = power_state.RUNNING elif event_transition in ( virtevent.EVENT_LIFECYCLE_PAUSED, virtevent.EVENT_LIFECYCLE_POSTCOPY_STARTED, virtevent.EVENT_LIFECYCLE_MIGRATION_COMPLETED): vm_power_state = power_state.PAUSED elif event_transition == virtevent.EVENT_LIFECYCLE_RESUMED: vm_power_state = power_state.RUNNING elif event_transition == virtevent.EVENT_LIFECYCLE_SUSPENDED: vm_power_state = power_state.SUSPENDED else: LOG.warning("Unexpected lifecycle event: %d", event_transition) migrate_finish_statuses = { # This happens on the source node and indicates live migration # entered post-copy mode. virtevent.EVENT_LIFECYCLE_POSTCOPY_STARTED: 'running (post-copy)', # Suspended for offline migration. virtevent.EVENT_LIFECYCLE_MIGRATION_COMPLETED: 'running' } expected_attrs = [] if event_transition in migrate_finish_statuses: # Join on info_cache since that's needed in migrate_instance_start. 
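        # Illustrative walk-through of the mapping above: an
        # EVENT_LIFECYCLE_POSTCOPY_STARTED emitted on the source host is
        # treated as power_state.PAUSED and, because it also appears in
        # migrate_finish_statuses, the in-progress migration with status
        # 'running (post-copy)' is looked up further down so the destination
        # port bindings can be activated early.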
expected_attrs.append('info_cache') instance = objects.Instance.get_by_uuid(context, event.get_instance_uuid(), expected_attrs=expected_attrs) # Note(lpetrut): The event may be delayed, thus not reflecting # the current instance power state. In that case, ignore the event. current_power_state = self._get_power_state(context, instance) if current_power_state == vm_power_state: LOG.debug('Synchronizing instance power state after lifecycle ' 'event "%(event)s"; current vm_state: %(vm_state)s, ' 'current task_state: %(task_state)s, current DB ' 'power_state: %(db_power_state)s, VM power_state: ' '%(vm_power_state)s', {'event': event.get_name(), 'vm_state': instance.vm_state, 'task_state': instance.task_state, 'db_power_state': instance.power_state, 'vm_power_state': vm_power_state}, instance_uuid=instance.uuid) self._sync_instance_power_state(context, instance, vm_power_state) # The following checks are for live migration. We want to activate # the port binding for the destination host before the live migration # is resumed on the destination host in order to reduce network # downtime. Otherwise the ports are bound to the destination host # in post_live_migration_at_destination. # TODO(danms): Explore options for using a different live migration # specific callback for this instead of piggy-backing on the # handle_lifecycle_event callback. if (instance.task_state == task_states.MIGRATING and event_transition in migrate_finish_statuses): status = migrate_finish_statuses[event_transition] try: migration = objects.Migration.get_by_instance_and_status( context, instance.uuid, status) LOG.debug('Binding ports to destination host: %s', migration.dest_compute, instance=instance) # For neutron, migrate_instance_start will activate the # destination host port bindings, if there are any created by # conductor before live migration started. self.network_api.migrate_instance_start( context, instance, migration) except exception.MigrationNotFoundByStatus: LOG.warning("Unable to find migration record with status " "'%s' for instance. Port binding will happen in " "post live migration.", status, instance=instance) def handle_events(self, event): if isinstance(event, virtevent.LifecycleEvent): try: self.handle_lifecycle_event(event) except exception.InstanceNotFound: LOG.debug("Event %s arrived for non-existent instance. The " "instance was probably deleted.", event) else: LOG.debug("Ignoring event %s", event) def init_virt_events(self): if CONF.workarounds.handle_virt_lifecycle_events: self.driver.register_event_listener(self.handle_events) else: # NOTE(mriedem): If the _sync_power_states periodic task is # disabled we should emit a warning in the logs. if CONF.sync_power_state_interval < 0: LOG.warning('Instance lifecycle events from the compute ' 'driver have been disabled. Note that lifecycle ' 'changes to an instance outside of the compute ' 'service will not be synchronized ' 'automatically since the _sync_power_states ' 'periodic task is also disabled.') else: LOG.info('Instance lifecycle events from the compute ' 'driver have been disabled. Note that lifecycle ' 'changes to an instance outside of the compute ' 'service will only be synchronized by the ' '_sync_power_states periodic task.') def _get_nodes(self, context): """Queried the ComputeNode objects from the DB that are reported by the hypervisor. :param context: the request context :return: a dict of ComputeNode objects keyed by the UUID of the given node. 
""" nodes_by_uuid = {} try: node_names = self.driver.get_available_nodes() except exception.VirtDriverNotReady: LOG.warning( "Virt driver is not ready. If this is the first time this " "service is starting on this host, then you can ignore this " "warning.") return {} for node_name in node_names: try: node = objects.ComputeNode.get_by_host_and_nodename( context, self.host, node_name) nodes_by_uuid[node.uuid] = node except exception.ComputeHostNotFound: LOG.warning( "Compute node %s not found in the database. If this is " "the first time this service is starting on this host, " "then you can ignore this warning.", node_name) return nodes_by_uuid def init_host(self): """Initialization for a standalone compute service.""" if CONF.pci.passthrough_whitelist: # Simply loading the PCI passthrough whitelist will do a bunch of # validation that would otherwise wait until the PciDevTracker is # constructed when updating available resources for the compute # node(s) in the resource tracker, effectively killing that task. # So load up the whitelist when starting the compute service to # flush any invalid configuration early so we can kill the service # if the configuration is wrong. whitelist.Whitelist(CONF.pci.passthrough_whitelist) nova.conf.neutron.register_dynamic_opts(CONF) # Even if only libvirt uses them, make it available for all drivers nova.conf.devices.register_dynamic_opts(CONF) # Override the number of concurrent disk operations allowed if the # user has specified a limit. if CONF.compute.max_concurrent_disk_ops != 0: compute_utils.disk_ops_semaphore = \ eventlet.semaphore.BoundedSemaphore( CONF.compute.max_concurrent_disk_ops) if CONF.compute.max_disk_devices_to_attach == 0: msg = _('[compute]max_disk_devices_to_attach has been set to 0, ' 'which will prevent instances from being able to boot. ' 'Set -1 for unlimited or set >= 1 to limit the maximum ' 'number of disk devices.') raise exception.InvalidConfiguration(msg) self.driver.init_host(host=self.host) context = nova.context.get_admin_context() instances = objects.InstanceList.get_by_host( context, self.host, expected_attrs=['info_cache', 'metadata', 'numa_topology']) self.init_virt_events() self._validate_pinning_configuration(instances) # NOTE(gibi): At this point the compute_nodes of the resource tracker # has not been populated yet so we cannot rely on the resource tracker # here. # NOTE(gibi): If ironic and vcenter virt driver slow start time # becomes problematic here then we should consider adding a config # option or a driver flag to tell us if we should thread # _destroy_evacuated_instances and # _error_out_instances_whose_build_was_interrupted out in the # background on startup nodes_by_uuid = self._get_nodes(context) try: # checking that instance was not already evacuated to other host evacuated_instances = self._destroy_evacuated_instances( context, nodes_by_uuid) # Initialise instances on the host that are not evacuating for instance in instances: if instance.uuid not in evacuated_instances: self._init_instance(context, instance) # NOTE(gibi): collect all the instance uuids that is in some way # was already handled above. Either by init_instance or by # _destroy_evacuated_instances. This way we can limit the scope of # the _error_out_instances_whose_build_was_interrupted call to look # only for instances that have allocations on this node and not # handled by the above calls. 
already_handled = {instance.uuid for instance in instances}.union( evacuated_instances) self._error_out_instances_whose_build_was_interrupted( context, already_handled, nodes_by_uuid.keys()) finally: if instances: # We only send the instance info to the scheduler on startup # if there is anything to send, otherwise this host might # not be mapped yet in a cell and the scheduler may have # issues dealing with the information. Later changes to # instances on this host will update the scheduler, or the # _sync_scheduler_instance_info periodic task will. self._update_scheduler_instance_info(context, instances) def _error_out_instances_whose_build_was_interrupted( self, context, already_handled_instances, node_uuids): """If there are instances in BUILDING state that are not assigned to this host but have allocations in placement towards this compute that means the nova-compute service was restarted while those instances waited for the resource claim to finish and the _set_instance_host_and_node() to update the instance.host field. We need to push them to ERROR state here to prevent keeping them in BUILDING state forever. :param context: The request context :param already_handled_instances: The set of instance UUIDs that the host initialization process already handled in some way. :param node_uuids: The list of compute node uuids handled by this service """ # Strategy: # 1) Get the allocations from placement for our compute node(s) # 2) Remove the already handled instances from the consumer list; # they are either already initialized or need to be skipped. # 3) Check which remaining consumer is an instance in BUILDING state # and push it to ERROR state. LOG.info( "Looking for unclaimed instances stuck in BUILDING status for " "nodes managed by this host") for cn_uuid in node_uuids: try: f = self.reportclient.get_allocations_for_resource_provider allocations = f(context, cn_uuid).allocations except (exception.ResourceProviderAllocationRetrievalFailed, keystone_exception.ClientException) as e: LOG.error( "Could not retrieve compute node resource provider %s and " "therefore unable to error out any instances stuck in " "BUILDING state. Error: %s", cn_uuid, six.text_type(e)) continue not_handled_consumers = (set(allocations) - already_handled_instances) if not not_handled_consumers: continue filters = { 'vm_state': vm_states.BUILDING, 'uuid': not_handled_consumers } instances = objects.InstanceList.get_by_filters( context, filters, expected_attrs=[]) for instance in instances: LOG.debug( "Instance spawn was interrupted before instance_claim, " "setting instance to ERROR state", instance=instance) self._set_instance_obj_error_state( context, instance, clean_task_state=True) def cleanup_host(self): self.driver.register_event_listener(None) self.instance_events.cancel_all_events() self.driver.cleanup_host(host=self.host) self._cleanup_live_migrations_in_pool() def _cleanup_live_migrations_in_pool(self): # Shutdown the pool so we don't get new requests. self._live_migration_executor.shutdown(wait=False) # For any queued migrations, cancel the migration and update # its status. for migration, future in self._waiting_live_migrations.values(): # If we got here before the Future was submitted then we need # to move on since there isn't anything we can do. 
if future is None: continue if future.cancel(): self._set_migration_status(migration, 'cancelled') LOG.info('Successfully cancelled queued live migration.', instance_uuid=migration.instance_uuid) else: LOG.warning('Unable to cancel live migration.', instance_uuid=migration.instance_uuid) self._waiting_live_migrations.clear() def pre_start_hook(self): """After the service is initialized, but before we fully bring the service up by listening on RPC queues, make sure to update our available resources (and indirectly our available nodes). """ self.update_available_resource(nova.context.get_admin_context(), startup=True) def _get_power_state(self, context, instance): """Retrieve the power state for the given instance.""" LOG.debug('Checking state', instance=instance) try: return self.driver.get_info(instance, use_cache=False).state except exception.InstanceNotFound: return power_state.NOSTATE # TODO(stephenfin): Remove this once we bump the compute API to v6.0 def get_console_topic(self, context): """Retrieves the console host for a project on this host. Currently this is just set in the flags for each compute host. """ # TODO(mdragon): perhaps make this variable by console_type? return 'console.%s' % CONF.console_host @wrap_exception() def get_console_pool_info(self, context, console_type): return self.driver.get_console_pool_info(console_type) # TODO(stephenfin): Remove this as it's nova-network only @wrap_exception() def refresh_instance_security_rules(self, context, instance): """Tell the virtualization driver to refresh security rules for an instance. Passes straight through to the virtualization driver. Synchronize the call because we may still be in the middle of creating the instance. """ pass def _await_block_device_map_created(self, context, vol_id): # TODO(yamahata): creating volume simultaneously # reduces creation time? # TODO(yamahata): eliminate dumb polling start = time.time() retries = CONF.block_device_allocate_retries # (1) if the configured value is 0, one attempt should be made # (2) if the configured value is > 0, then the total number attempts # is (retries + 1) attempts = 1 if retries >= 1: attempts = retries + 1 for attempt in range(1, attempts + 1): volume = self.volume_api.get(context, vol_id) volume_status = volume['status'] if volume_status not in ['creating', 'downloading']: if volume_status == 'available': return attempt LOG.warning("Volume id: %(vol_id)s finished being " "created but its status is %(vol_status)s.", {'vol_id': vol_id, 'vol_status': volume_status}) break greenthread.sleep(CONF.block_device_allocate_retries_interval) raise exception.VolumeNotCreated(volume_id=vol_id, seconds=int(time.time() - start), attempts=attempt, volume_status=volume_status) def _decode_files(self, injected_files): """Base64 decode the list of files to inject.""" if not injected_files: return [] def _decode(f): path, contents = f # Py3 raises binascii.Error instead of TypeError as in Py27 try: decoded = base64.b64decode(contents) return path, decoded except (TypeError, binascii.Error): raise exception.Base64Exception(path=path) return [_decode(f) for f in injected_files] def _validate_instance_group_policy(self, context, instance, scheduler_hints=None): if CONF.workarounds.disable_group_policy_check_upcall: return # NOTE(russellb) Instance group policy is enforced by the scheduler. # However, there is a race condition with the enforcement of # the policy. 
Since more than one instance may be scheduled at the # same time, it's possible that more than one instance with an # anti-affinity policy may end up here. It's also possible that # multiple instances with an affinity policy could end up on different # hosts. This is a validation step to make sure that starting the # instance here doesn't violate the policy. if scheduler_hints is not None: # only go through here if scheduler_hints is provided, even if it # is empty. group_hint = scheduler_hints.get('group') if not group_hint: return else: # The RequestSpec stores scheduler_hints as key=list pairs so # we need to check the type on the value and pull the single # entry out. The API request schema validates that # the 'group' hint is a single value. if isinstance(group_hint, list): group_hint = group_hint[0] group = objects.InstanceGroup.get_by_hint(context, group_hint) else: # TODO(ganso): a call to DB can be saved by adding request_spec # to rpcapi payload of live_migration, pre_live_migration and # check_can_live_migrate_destination try: group = objects.InstanceGroup.get_by_instance_uuid( context, instance.uuid) except exception.InstanceGroupNotFound: return @utils.synchronized(group['uuid']) def _do_validation(context, instance, group): if group.policy and 'anti-affinity' == group.policy: # instances on host instances_uuids = objects.InstanceList.get_uuids_by_host( context, self.host) ins_on_host = set(instances_uuids) # instance param is just for logging, the nodename obtained is # not actually related to the instance at all nodename = self._get_nodename(instance) # instances being migrated to host migrations = ( objects.MigrationList.get_in_progress_by_host_and_node( context, self.host, nodename)) migration_vm_uuids = set([mig['instance_uuid'] for mig in migrations]) total_instances = migration_vm_uuids | ins_on_host # refresh group to get updated members within locked block group = objects.InstanceGroup.get_by_uuid(context, group['uuid']) members = set(group.members) # Determine the set of instance group members on this host # which are not the instance in question. This is used to # determine how many other members from the same anti-affinity # group can be on this host. members_on_host = (total_instances & members - set([instance.uuid])) rules = group.rules if rules and 'max_server_per_host' in rules: max_server = rules['max_server_per_host'] else: max_server = 1 if len(members_on_host) >= max_server: msg = _("Anti-affinity instance group policy " "was violated.") raise exception.RescheduledException( instance_uuid=instance.uuid, reason=msg) # NOTE(ganso): The check for affinity below does not work and it # can easily be violated because the lock happens in different # compute hosts. # The only fix seems to be a DB lock to perform the check whenever # setting the host field to an instance. 
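            # Worked example for the anti-affinity branch above (numbers are
            # hypothetical): with rules={'max_server_per_host': 2} and two
            # other group members already running on, or migrating to, this
            # host, len(members_on_host) == 2 >= max_server == 2 and the
            # build is rescheduled. Without an explicit rule, max_server
            # defaults to 1, i.e. classic anti-affinity.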
elif group.policy and 'affinity' == group.policy: group_hosts = group.get_hosts(exclude=[instance.uuid]) if group_hosts and self.host not in group_hosts: msg = _("Affinity instance group policy was violated.") raise exception.RescheduledException( instance_uuid=instance.uuid, reason=msg) _do_validation(context, instance, group) def _log_original_error(self, exc_info, instance_uuid): LOG.error('Error: %s', exc_info[1], instance_uuid=instance_uuid, exc_info=exc_info) @periodic_task.periodic_task def _check_instance_build_time(self, context): """Ensure that instances are not stuck in build.""" timeout = CONF.instance_build_timeout if timeout == 0: return filters = {'vm_state': vm_states.BUILDING, 'host': self.host} building_insts = objects.InstanceList.get_by_filters(context, filters, expected_attrs=[], use_slave=True) for instance in building_insts: if timeutils.is_older_than(instance.created_at, timeout): self._set_instance_obj_error_state(context, instance) LOG.warning("Instance build timed out. Set to error " "state.", instance=instance) def _check_instance_exists(self, context, instance): """Ensure an instance with the same name is not already present.""" if self.driver.instance_exists(instance): raise exception.InstanceExists(name=instance.name) def _allocate_network_async(self, context, instance, requested_networks, security_groups, is_vpn, resource_provider_mapping): """Method used to allocate networks in the background. Broken out for testing. """ # First check to see if we're specifically not supposed to allocate # networks because if so, we can exit early. if requested_networks and requested_networks.no_allocate: LOG.debug("Not allocating networking since 'none' was specified.", instance=instance) return network_model.NetworkInfo([]) LOG.debug("Allocating IP information in the background.", instance=instance) retries = CONF.network_allocate_retries attempts = retries + 1 retry_time = 1 bind_host_id = self.driver.network_binding_host_id(context, instance) for attempt in range(1, attempts + 1): try: nwinfo = self.network_api.allocate_for_instance( context, instance, vpn=is_vpn, requested_networks=requested_networks, security_groups=security_groups, bind_host_id=bind_host_id, resource_provider_mapping=resource_provider_mapping) LOG.debug('Instance network_info: |%s|', nwinfo, instance=instance) instance.system_metadata['network_allocated'] = 'True' # NOTE(JoshNang) do not save the instance here, as it can cause # races. The caller shares a reference to instance and waits # for this async greenthread to finish before calling # instance.save(). return nwinfo except Exception: exc_info = sys.exc_info() log_info = {'attempt': attempt, 'attempts': attempts} if attempt == attempts: LOG.exception('Instance failed network setup ' 'after %(attempts)d attempt(s)', log_info) six.reraise(*exc_info) LOG.warning('Instance failed network setup ' '(attempt %(attempt)d of %(attempts)d)', log_info, instance=instance) time.sleep(retry_time) retry_time *= 2 if retry_time > 30: retry_time = 30 # Not reached. def _build_networks_for_instance(self, context, instance, requested_networks, security_groups, resource_provider_mapping): # If we're here from a reschedule the network may already be allocated. if strutils.bool_from_string( instance.system_metadata.get('network_allocated', 'False')): # NOTE(alex_xu): The network_allocated is True means the network # resource already allocated at previous scheduling, and the # network setup is cleanup at previous. 
After rescheduling, the # network resource need setup on the new host. self.network_api.setup_instance_network_on_host( context, instance, instance.host) return self.network_api.get_instance_nw_info(context, instance) network_info = self._allocate_network(context, instance, requested_networks, security_groups, resource_provider_mapping) return network_info def _allocate_network(self, context, instance, requested_networks, security_groups, resource_provider_mapping): """Start network allocation asynchronously. Return an instance of NetworkInfoAsyncWrapper that can be used to retrieve the allocated networks when the operation has finished. """ # NOTE(comstud): Since we're allocating networks asynchronously, # this task state has little meaning, as we won't be in this # state for very long. instance.vm_state = vm_states.BUILDING instance.task_state = task_states.NETWORKING instance.save(expected_task_state=[None]) is_vpn = False return network_model.NetworkInfoAsyncWrapper( self._allocate_network_async, context, instance, requested_networks, security_groups, is_vpn, resource_provider_mapping) def _default_root_device_name(self, instance, image_meta, root_bdm): """Gets a default root device name from the driver. :param nova.objects.Instance instance: The instance for which to get the root device name. :param nova.objects.ImageMeta image_meta: The metadata of the image of the instance. :param nova.objects.BlockDeviceMapping root_bdm: The description of the root device. :returns: str -- The default root device name. :raises: InternalError, TooManyDiskDevices """ try: return self.driver.default_root_device_name(instance, image_meta, root_bdm) except NotImplementedError: return compute_utils.get_next_device_name(instance, []) def _default_device_names_for_instance(self, instance, root_device_name, *block_device_lists): """Default the missing device names in the BDM from the driver. :param nova.objects.Instance instance: The instance for which to get default device names. :param str root_device_name: The root device name. :param list block_device_lists: List of block device mappings. :returns: None :raises: InternalError, TooManyDiskDevices """ try: self.driver.default_device_names_for_instance(instance, root_device_name, *block_device_lists) except NotImplementedError: compute_utils.default_device_names_for_instance( instance, root_device_name, *block_device_lists) def _get_device_name_for_instance(self, instance, bdms, block_device_obj): """Get the next device name from the driver, based on the BDM. :param nova.objects.Instance instance: The instance whose volume is requesting a device name. :param nova.objects.BlockDeviceMappingList bdms: The block device mappings for the instance. :param nova.objects.BlockDeviceMapping block_device_obj: A block device mapping containing info about the requested block device. :returns: The next device name. :raises: InternalError, TooManyDiskDevices """ # NOTE(ndipanov): Copy obj to avoid changing the original block_device_obj = block_device_obj.obj_clone() try: return self.driver.get_device_name_for_instance( instance, bdms, block_device_obj) except NotImplementedError: return compute_utils.get_device_name_for_instance( instance, bdms, block_device_obj.get("device_name")) def _default_block_device_names(self, instance, image_meta, block_devices): """Verify that all the devices have the device_name set. If not, provide a default name. It also ensures that there is a root_device_name and is set to the first block device in the boot sequence (boot_index=0). 
""" root_bdm = block_device.get_root_bdm(block_devices) if not root_bdm: return # Get the root_device_name from the root BDM or the instance root_device_name = None update_root_bdm = False if root_bdm.device_name: root_device_name = root_bdm.device_name instance.root_device_name = root_device_name elif instance.root_device_name: root_device_name = instance.root_device_name root_bdm.device_name = root_device_name update_root_bdm = True else: root_device_name = self._default_root_device_name(instance, image_meta, root_bdm) instance.root_device_name = root_device_name root_bdm.device_name = root_device_name update_root_bdm = True if update_root_bdm: root_bdm.save() ephemerals = list(filter(block_device.new_format_is_ephemeral, block_devices)) swap = list(filter(block_device.new_format_is_swap, block_devices)) block_device_mapping = list(filter( driver_block_device.is_block_device_mapping, block_devices)) self._default_device_names_for_instance(instance, root_device_name, ephemerals, swap, block_device_mapping) def _block_device_info_to_legacy(self, block_device_info): """Convert BDI to the old format for drivers that need it.""" if self.use_legacy_block_device_info: ephemerals = driver_block_device.legacy_block_devices( driver.block_device_info_get_ephemerals(block_device_info)) mapping = driver_block_device.legacy_block_devices( driver.block_device_info_get_mapping(block_device_info)) swap = block_device_info['swap'] if swap: swap = swap.legacy() block_device_info.update({ 'ephemerals': ephemerals, 'swap': swap, 'block_device_mapping': mapping}) def _add_missing_dev_names(self, bdms, instance): for bdm in bdms: if bdm.device_name is not None: continue device_name = self._get_device_name_for_instance(instance, bdms, bdm) values = {'device_name': device_name} bdm.update(values) bdm.save() def _prep_block_device(self, context, instance, bdms): """Set up the block device for an instance with error logging.""" try: self._add_missing_dev_names(bdms, instance) block_device_info = driver.get_block_device_info(instance, bdms) mapping = driver.block_device_info_get_mapping(block_device_info) driver_block_device.attach_block_devices( mapping, context, instance, self.volume_api, self.driver, wait_func=self._await_block_device_map_created) self._block_device_info_to_legacy(block_device_info) return block_device_info except exception.OverQuota as e: LOG.warning('Failed to create block device for instance due' ' to exceeding volume related resource quota.' ' Error: %s', e.message, instance=instance) raise except Exception as ex: LOG.exception('Instance failed block device setup', instance=instance) # InvalidBDM will eventually result in a BuildAbortException when # booting from volume, and will be recorded as an instance fault. # Maintain the original exception message which most likely has # useful details which the standard InvalidBDM error message lacks. raise exception.InvalidBDM(six.text_type(ex)) def _update_instance_after_spawn(self, context, instance, vm_state=vm_states.ACTIVE): instance.power_state = self._get_power_state(context, instance) instance.vm_state = vm_state instance.task_state = None # NOTE(sean-k-mooney): configdrive.update_instance checks # instance.launched_at to determine if it is the first or # subsequent spawn of an instance. We need to call update_instance # first before setting instance.launched_at or instance.config_drive # will never be set to true based on the value of force_config_drive. 
# As a result the config drive will be lost on a hard reboot of the # instance even when force_config_drive=true. see bug #1835822. configdrive.update_instance(instance) instance.launched_at = timeutils.utcnow() def _update_scheduler_instance_info(self, context, instance): """Sends an InstanceList with created or updated Instance objects to the Scheduler client. In the case of init_host, the value passed will already be an InstanceList. Other calls will send individual Instance objects that have been created or resized. In this case, we create an InstanceList object containing that Instance. """ if not self.send_instance_updates: return if isinstance(instance, obj_instance.Instance): instance = objects.InstanceList(objects=[instance]) context = context.elevated() self.query_client.update_instance_info(context, self.host, instance) def _delete_scheduler_instance_info(self, context, instance_uuid): """Sends the uuid of the deleted Instance to the Scheduler client.""" if not self.send_instance_updates: return context = context.elevated() self.query_client.delete_instance_info(context, self.host, instance_uuid) @periodic_task.periodic_task(spacing=CONF.scheduler_instance_sync_interval) def _sync_scheduler_instance_info(self, context): if not self.send_instance_updates: return context = context.elevated() instances = objects.InstanceList.get_by_host(context, self.host, expected_attrs=[], use_slave=True) uuids = [instance.uuid for instance in instances] self.query_client.sync_instance_info(context, self.host, uuids) def _notify_about_instance_usage(self, context, instance, event_suffix, network_info=None, extra_usage_info=None, fault=None): compute_utils.notify_about_instance_usage( self.notifier, context, instance, event_suffix, network_info=network_info, extra_usage_info=extra_usage_info, fault=fault) def _deallocate_network(self, context, instance, requested_networks=None): # If we were told not to allocate networks let's save ourselves # the trouble of calling the network API. 
if requested_networks and requested_networks.no_allocate: LOG.debug("Skipping network deallocation for instance since " "networking was not requested.", instance=instance) return LOG.debug('Deallocating network for instance', instance=instance) with timeutils.StopWatch() as timer: self.network_api.deallocate_for_instance( context, instance, requested_networks=requested_networks) # nova-network does an rpc call so we're OK tracking time spent here LOG.info('Took %0.2f seconds to deallocate network for instance.', timer.elapsed(), instance=instance) def _get_instance_block_device_info(self, context, instance, refresh_conn_info=False, bdms=None): """Transform block devices to the driver block_device format.""" if bdms is None: bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) block_device_info = driver.get_block_device_info(instance, bdms) if not refresh_conn_info: # if the block_device_mapping has no value in connection_info # (returned as None), don't include in the mapping block_device_info['block_device_mapping'] = [ bdm for bdm in driver.block_device_info_get_mapping( block_device_info) if bdm.get('connection_info')] else: driver_block_device.refresh_conn_infos( driver.block_device_info_get_mapping(block_device_info), context, instance, self.volume_api, self.driver) self._block_device_info_to_legacy(block_device_info) return block_device_info def _build_failed(self, node): if CONF.compute.consecutive_build_service_disable_threshold: # NOTE(danms): Update our counter, but wait for the next # update_available_resource() periodic to flush it to the DB self.rt.build_failed(node) def _build_succeeded(self, node): self.rt.build_succeeded(node) @wrap_exception() @reverts_task_state @wrap_instance_fault def build_and_run_instance(self, context, instance, image, request_spec, filter_properties, admin_password=None, injected_files=None, requested_networks=None, security_groups=None, block_device_mapping=None, node=None, limits=None, host_list=None, accel_uuids=None): @utils.synchronized(instance.uuid) def _locked_do_build_and_run_instance(*args, **kwargs): # NOTE(danms): We grab the semaphore with the instance uuid # locked because we could wait in line to build this instance # for a while and we want to make sure that nothing else tries # to do anything with this instance while we wait. with self._build_semaphore: try: result = self._do_build_and_run_instance(*args, **kwargs) except Exception: # NOTE(mriedem): This should really only happen if # _decode_files in _do_build_and_run_instance fails, and # that's before a guest is spawned so it's OK to remove # allocations for the instance for this node from Placement # below as there is no guest consuming resources anyway. # The _decode_files case could be handled more specifically # but that's left for another day. result = build_results.FAILED raise finally: if result == build_results.FAILED: # Remove the allocation records from Placement for the # instance if the build failed. The instance.host is # likely set to None in _do_build_and_run_instance # which means if the user deletes the instance, it # will be deleted in the API, not the compute service. # Setting the instance.host to None in # _do_build_and_run_instance means that the # ResourceTracker will no longer consider this instance # to be claiming resources against it, so we want to # reflect that same thing in Placement. No need to # call this for a reschedule, as the allocations will # have already been removed in # self._do_build_and_run_instance(). 
self.reportclient.delete_allocation_for_instance( context, instance.uuid) if result in (build_results.FAILED, build_results.RESCHEDULED): self._build_failed(node) else: self._build_succeeded(node) # NOTE(danms): We spawn here to return the RPC worker thread back to # the pool. Since what follows could take a really long time, we don't # want to tie up RPC workers. utils.spawn_n(_locked_do_build_and_run_instance, context, instance, image, request_spec, filter_properties, admin_password, injected_files, requested_networks, security_groups, block_device_mapping, node, limits, host_list, accel_uuids) def _check_device_tagging(self, requested_networks, block_device_mapping): tagging_requested = False if requested_networks: for net in requested_networks: if 'tag' in net and net.tag is not None: tagging_requested = True break if block_device_mapping and not tagging_requested: for bdm in block_device_mapping: if 'tag' in bdm and bdm.tag is not None: tagging_requested = True break if (tagging_requested and not self.driver.capabilities.get('supports_device_tagging', False)): raise exception.BuildAbortException('Attempt to boot guest with ' 'tagged devices on host that ' 'does not support tagging.') def _check_trusted_certs(self, instance): if (instance.trusted_certs and not self.driver.capabilities.get('supports_trusted_certs', False)): raise exception.BuildAbortException( 'Trusted image certificates provided on host that does not ' 'support certificate validation.') @hooks.add_hook('build_instance') @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault def _do_build_and_run_instance(self, context, instance, image, request_spec, filter_properties, admin_password, injected_files, requested_networks, security_groups, block_device_mapping, node=None, limits=None, host_list=None, accel_uuids=None): try: LOG.debug('Starting instance...', instance=instance) instance.vm_state = vm_states.BUILDING instance.task_state = None instance.save(expected_task_state= (task_states.SCHEDULING, None)) except exception.InstanceNotFound: msg = 'Instance disappeared before build.' LOG.debug(msg, instance=instance) return build_results.FAILED except exception.UnexpectedTaskStateError as e: LOG.debug(e.format_message(), instance=instance) return build_results.FAILED # b64 decode the files to inject: decoded_files = self._decode_files(injected_files) if limits is None: limits = {} if node is None: node = self._get_nodename(instance, refresh=True) try: with timeutils.StopWatch() as timer: self._build_and_run_instance(context, instance, image, decoded_files, admin_password, requested_networks, security_groups, block_device_mapping, node, limits, filter_properties, request_spec, accel_uuids) LOG.info('Took %0.2f seconds to build instance.', timer.elapsed(), instance=instance) return build_results.ACTIVE except exception.RescheduledException as e: retry = filter_properties.get('retry') if not retry: # no retry information, do not reschedule. 
LOG.debug("Retry info not present, will not reschedule", instance=instance) self._cleanup_allocated_networks(context, instance, requested_networks) self._cleanup_volumes(context, instance, block_device_mapping, raise_exc=False) compute_utils.add_instance_fault_from_exc(context, instance, e, sys.exc_info(), fault_message=e.kwargs['reason']) self._nil_out_instance_obj_host_and_node(instance) self._set_instance_obj_error_state(context, instance, clean_task_state=True) return build_results.FAILED LOG.debug(e.format_message(), instance=instance) # This will be used for logging the exception retry['exc'] = traceback.format_exception(*sys.exc_info()) # This will be used for setting the instance fault message retry['exc_reason'] = e.kwargs['reason'] self._cleanup_allocated_networks(context, instance, requested_networks) self._nil_out_instance_obj_host_and_node(instance) instance.task_state = task_states.SCHEDULING instance.save() # The instance will have already claimed resources from this host # before this build was attempted. Now that it has failed, we need # to unclaim those resources before casting to the conductor, so # that if there are alternate hosts available for a retry, it can # claim resources on that new host for the instance. self.reportclient.delete_allocation_for_instance(context, instance.uuid) self.compute_task_api.build_instances(context, [instance], image, filter_properties, admin_password, injected_files, requested_networks, security_groups, block_device_mapping, request_spec=request_spec, host_lists=[host_list]) return build_results.RESCHEDULED except (exception.InstanceNotFound, exception.UnexpectedDeletingTaskStateError): msg = 'Instance disappeared during build.' LOG.debug(msg, instance=instance) self._cleanup_allocated_networks(context, instance, requested_networks) return build_results.FAILED except Exception as e: if isinstance(e, exception.BuildAbortException): LOG.error(e.format_message(), instance=instance) else: # Should not reach here. LOG.exception('Unexpected build failure, not rescheduling ' 'build.', instance=instance) self._cleanup_allocated_networks(context, instance, requested_networks) self._cleanup_volumes(context, instance, block_device_mapping, raise_exc=False) compute_utils.add_instance_fault_from_exc(context, instance, e, sys.exc_info()) self._nil_out_instance_obj_host_and_node(instance) self._set_instance_obj_error_state(context, instance, clean_task_state=True) return build_results.FAILED @staticmethod def _get_scheduler_hints(filter_properties, request_spec=None): """Helper method to get scheduler hints. This method prefers to get the hints out of the request spec, but that might not be provided. Conductor will pass request_spec down to the first compute chosen for a build but older computes will not pass the request_spec to conductor's build_instances method for a a reschedule, so if we're on a host via a retry, request_spec may not be provided so we need to fallback to use the filter_properties to get scheduler hints. """ hints = {} if request_spec is not None and 'scheduler_hints' in request_spec: hints = request_spec.scheduler_hints if not hints: hints = filter_properties.get('scheduler_hints') or {} return hints @staticmethod def _get_request_group_mapping(request_spec): """Return request group resource - provider mapping. This is currently used for Neutron ports that have resource request due to the port having QoS minimum bandwidth policy rule attached. 
:param request_spec: A RequestSpec object or None :returns: A dict keyed by RequestGroup requester_id, currently Neutron port_id, to resource provider UUID that provides resource for that RequestGroup. Or None if the request_spec was None. """ if request_spec: return request_spec.get_request_group_mapping() else: return None def _build_and_run_instance(self, context, instance, image, injected_files, admin_password, requested_networks, security_groups, block_device_mapping, node, limits, filter_properties, request_spec=None, accel_uuids=None): image_name = image.get('name') self._notify_about_instance_usage(context, instance, 'create.start', extra_usage_info={'image_name': image_name}) compute_utils.notify_about_instance_create( context, instance, self.host, phase=fields.NotificationPhase.START, bdms=block_device_mapping) # NOTE(mikal): cache the keystone roles associated with the instance # at boot time for later reference instance.system_metadata.update( {'boot_roles': ','.join(context.roles)}) self._check_device_tagging(requested_networks, block_device_mapping) self._check_trusted_certs(instance) provider_mapping = self._get_request_group_mapping(request_spec) if provider_mapping: try: compute_utils\ .update_pci_request_spec_with_allocated_interface_name( context, self.reportclient, instance, provider_mapping) except (exception.AmbiguousResourceProviderForPCIRequest, exception.UnexpectedResourceProviderNameForPCIRequest ) as e: raise exception.BuildAbortException( reason=six.text_type(e), instance_uuid=instance.uuid) # TODO(Luyao) cut over to get_allocs_for_consumer allocs = self.reportclient.get_allocations_for_consumer( context, instance.uuid) try: scheduler_hints = self._get_scheduler_hints(filter_properties, request_spec) with self.rt.instance_claim(context, instance, node, allocs, limits): # NOTE(russellb) It's important that this validation be done # *after* the resource tracker instance claim, as that is where # the host is set on the instance. self._validate_instance_group_policy(context, instance, scheduler_hints) image_meta = objects.ImageMeta.from_dict(image) with self._build_resources(context, instance, requested_networks, security_groups, image_meta, block_device_mapping, provider_mapping, accel_uuids) as resources: instance.vm_state = vm_states.BUILDING instance.task_state = task_states.SPAWNING # NOTE(JoshNang) This also saves the changes to the # instance from _allocate_network_async, as they aren't # saved in that function to prevent races. 
instance.save(expected_task_state= task_states.BLOCK_DEVICE_MAPPING) block_device_info = resources['block_device_info'] network_info = resources['network_info'] accel_info = resources['accel_info'] LOG.debug('Start spawning the instance on the hypervisor.', instance=instance) with timeutils.StopWatch() as timer: self.driver.spawn(context, instance, image_meta, injected_files, admin_password, allocs, network_info=network_info, block_device_info=block_device_info, accel_info=accel_info) LOG.info('Took %0.2f seconds to spawn the instance on ' 'the hypervisor.', timer.elapsed(), instance=instance) except (exception.InstanceNotFound, exception.UnexpectedDeletingTaskStateError) as e: with excutils.save_and_reraise_exception(): self._notify_about_instance_usage(context, instance, 'create.error', fault=e) tb = traceback.format_exc() compute_utils.notify_about_instance_create( context, instance, self.host, phase=fields.NotificationPhase.ERROR, exception=e, bdms=block_device_mapping, tb=tb) except exception.ComputeResourcesUnavailable as e: LOG.debug(e.format_message(), instance=instance) self._notify_about_instance_usage(context, instance, 'create.error', fault=e) tb = traceback.format_exc() compute_utils.notify_about_instance_create( context, instance, self.host, phase=fields.NotificationPhase.ERROR, exception=e, bdms=block_device_mapping, tb=tb) raise exception.RescheduledException( instance_uuid=instance.uuid, reason=e.format_message()) except exception.BuildAbortException as e: with excutils.save_and_reraise_exception(): LOG.debug(e.format_message(), instance=instance) self._notify_about_instance_usage(context, instance, 'create.error', fault=e) tb = traceback.format_exc() compute_utils.notify_about_instance_create( context, instance, self.host, phase=fields.NotificationPhase.ERROR, exception=e, bdms=block_device_mapping, tb=tb) except exception.NoMoreFixedIps as e: LOG.warning('No more fixed IP to be allocated', instance=instance) self._notify_about_instance_usage(context, instance, 'create.error', fault=e) tb = traceback.format_exc() compute_utils.notify_about_instance_create( context, instance, self.host, phase=fields.NotificationPhase.ERROR, exception=e, bdms=block_device_mapping, tb=tb) msg = _('Failed to allocate the network(s) with error %s, ' 'not rescheduling.') % e.format_message() raise exception.BuildAbortException(instance_uuid=instance.uuid, reason=msg) except (exception.ExternalNetworkAttachForbidden, exception.VirtualInterfaceCreateException, exception.VirtualInterfaceMacAddressException, exception.FixedIpInvalidOnHost, exception.UnableToAutoAllocateNetwork, exception.NetworksWithQoSPolicyNotSupported) as e: LOG.exception('Failed to allocate network(s)', instance=instance) self._notify_about_instance_usage(context, instance, 'create.error', fault=e) tb = traceback.format_exc() compute_utils.notify_about_instance_create( context, instance, self.host, phase=fields.NotificationPhase.ERROR, exception=e, bdms=block_device_mapping, tb=tb) msg = _('Failed to allocate the network(s), not rescheduling.') raise exception.BuildAbortException(instance_uuid=instance.uuid, reason=msg) except (exception.FlavorDiskTooSmall, exception.FlavorMemoryTooSmall, exception.ImageNotActive, exception.ImageUnacceptable, exception.InvalidDiskInfo, exception.InvalidDiskFormat, cursive_exception.SignatureVerificationError, exception.CertificateValidationFailed, exception.VolumeEncryptionNotSupported, exception.InvalidInput, # TODO(mriedem): We should be validating RequestedVRamTooHigh # in the API during 
server create and rebuild. exception.RequestedVRamTooHigh) as e: self._notify_about_instance_usage(context, instance, 'create.error', fault=e) tb = traceback.format_exc() compute_utils.notify_about_instance_create( context, instance, self.host, phase=fields.NotificationPhase.ERROR, exception=e, bdms=block_device_mapping, tb=tb) raise exception.BuildAbortException(instance_uuid=instance.uuid, reason=e.format_message()) except Exception as e: LOG.exception('Failed to build and run instance', instance=instance) self._notify_about_instance_usage(context, instance, 'create.error', fault=e) tb = traceback.format_exc() compute_utils.notify_about_instance_create( context, instance, self.host, phase=fields.NotificationPhase.ERROR, exception=e, bdms=block_device_mapping, tb=tb) raise exception.RescheduledException( instance_uuid=instance.uuid, reason=six.text_type(e)) # NOTE(alaski): This is only useful during reschedules, remove it now. instance.system_metadata.pop('network_allocated', None) # If CONF.default_access_ip_network_name is set, grab the # corresponding network and set the access ip values accordingly. network_name = CONF.default_access_ip_network_name if (network_name and not instance.access_ip_v4 and not instance.access_ip_v6): # Note that when there are multiple ips to choose from, an # arbitrary one will be chosen. for vif in network_info: if vif['network']['label'] == network_name: for ip in vif.fixed_ips(): if not instance.access_ip_v4 and ip['version'] == 4: instance.access_ip_v4 = ip['address'] if not instance.access_ip_v6 and ip['version'] == 6: instance.access_ip_v6 = ip['address'] break self._update_instance_after_spawn(context, instance) try: instance.save(expected_task_state=task_states.SPAWNING) except (exception.InstanceNotFound, exception.UnexpectedDeletingTaskStateError) as e: with excutils.save_and_reraise_exception(): self._notify_about_instance_usage(context, instance, 'create.error', fault=e) tb = traceback.format_exc() compute_utils.notify_about_instance_create( context, instance, self.host, phase=fields.NotificationPhase.ERROR, exception=e, bdms=block_device_mapping, tb=tb) self._update_scheduler_instance_info(context, instance) self._notify_about_instance_usage(context, instance, 'create.end', extra_usage_info={'message': _('Success')}, network_info=network_info) compute_utils.notify_about_instance_create(context, instance, self.host, phase=fields.NotificationPhase.END, bdms=block_device_mapping) def _build_resources_cleanup(self, instance, network_info): # Make sure the async call finishes if network_info is not None: network_info.wait(do_raise=False) self.driver.clean_networks_preparation(instance, network_info) self.driver.failed_spawn_cleanup(instance) @contextlib.contextmanager def _build_resources(self, context, instance, requested_networks, security_groups, image_meta, block_device_mapping, resource_provider_mapping, accel_uuids): resources = {} network_info = None try: LOG.debug('Start building networks asynchronously for instance.', instance=instance) network_info = self._build_networks_for_instance(context, instance, requested_networks, security_groups, resource_provider_mapping) resources['network_info'] = network_info except (exception.InstanceNotFound, exception.UnexpectedDeletingTaskStateError): raise except exception.UnexpectedTaskStateError as e: raise exception.BuildAbortException(instance_uuid=instance.uuid, reason=e.format_message()) except Exception: # Because this allocation is async any failures are likely to occur # when the driver accesses 
network_info during spawn(). LOG.exception('Failed to allocate network(s)', instance=instance) msg = _('Failed to allocate the network(s), not rescheduling.') raise exception.BuildAbortException(instance_uuid=instance.uuid, reason=msg) try: # Perform any driver preparation work for the driver. self.driver.prepare_for_spawn(instance) # Depending on a virt driver, some network configuration is # necessary before preparing block devices. self.driver.prepare_networks_before_block_device_mapping( instance, network_info) # Verify that all the BDMs have a device_name set and assign a # default to the ones missing it with the help of the driver. self._default_block_device_names(instance, image_meta, block_device_mapping) LOG.debug('Start building block device mappings for instance.', instance=instance) instance.vm_state = vm_states.BUILDING instance.task_state = task_states.BLOCK_DEVICE_MAPPING instance.save() block_device_info = self._prep_block_device(context, instance, block_device_mapping) resources['block_device_info'] = block_device_info except (exception.InstanceNotFound, exception.UnexpectedDeletingTaskStateError): with excutils.save_and_reraise_exception(): self._build_resources_cleanup(instance, network_info) except (exception.UnexpectedTaskStateError, exception.OverQuota, exception.InvalidBDM) as e: self._build_resources_cleanup(instance, network_info) raise exception.BuildAbortException(instance_uuid=instance.uuid, reason=e.format_message()) except Exception: LOG.exception('Failure prepping block device', instance=instance) self._build_resources_cleanup(instance, network_info) msg = _('Failure prepping block device.') raise exception.BuildAbortException(instance_uuid=instance.uuid, reason=msg) arqs = [] dp_name = instance.flavor.extra_specs.get('accel:device_profile') try: if dp_name: arqs = self._get_bound_arq_resources( context, dp_name, instance, accel_uuids) except (Exception, eventlet.timeout.Timeout) as exc: LOG.exception(exc) self._build_resources_cleanup(instance, network_info) compute_utils.delete_arqs_if_needed(context, instance) msg = _('Failure getting accelerator requests.') raise exception.BuildAbortException(instance_uuid=instance.uuid, reason=msg) resources['accel_info'] = arqs try: yield resources except Exception as exc: with excutils.save_and_reraise_exception() as ctxt: if not isinstance(exc, ( exception.InstanceNotFound, exception.UnexpectedDeletingTaskStateError)): LOG.exception('Instance failed to spawn', instance=instance) # Make sure the async call finishes if network_info is not None: network_info.wait(do_raise=False) # if network_info is empty we're likely here because of # network allocation failure. Since nothing can be reused on # rescheduling it's better to deallocate network to eliminate # the chance of orphaned ports in neutron deallocate_networks = False if network_info else True try: self._shutdown_instance(context, instance, block_device_mapping, requested_networks, try_deallocate_networks=deallocate_networks) except Exception as exc2: ctxt.reraise = False LOG.warning('Could not clean up failed build,' ' not rescheduling. Error: %s', six.text_type(exc2)) raise exception.BuildAbortException( instance_uuid=instance.uuid, reason=six.text_type(exc)) finally: # Call Cyborg to delete accelerator requests compute_utils.delete_arqs_if_needed(context, instance) def _get_bound_arq_resources(self, context, dp_name, instance, arq_uuids): """Get bound accelerator requests. The ARQ binding was kicked off in the conductor as an async operation. 
Here we wait for the notification from Cyborg. If the notification arrived before this point, which can happen in many/most cases (see [1]), it will be lost. To handle that, we use exit_wait_early. [1] https://review.opendev.org/#/c/631244/46/nova/compute/ manager.py@2627 :param dp_name: Device profile name. Caller ensures this is valid. :param instance: instance object :param arq_uuids: List of accelerator request (ARQ) UUIDs. :returns: List of ARQs for which bindings have completed, successfully or otherwise """ cyclient = cyborg.get_client(context) if arq_uuids is None: arqs = cyclient.get_arqs_for_instance(instance.uuid) arq_uuids = [arq['uuid'] for arq in arqs] events = [('accelerator-request-bound', arq_uuid) for arq_uuid in arq_uuids] timeout = CONF.arq_binding_timeout with self.virtapi.wait_for_instance_event( instance, events, deadline=timeout): resolved_arqs = cyclient.get_arqs_for_instance( instance.uuid, only_resolved=True) # Events for these resolved ARQs may have already arrived. # Such 'early' events need to be ignored. early_events = [('accelerator-request-bound', arq['uuid']) for arq in resolved_arqs] if early_events: self.virtapi.exit_wait_early(early_events) # Since a timeout in wait_for_instance_event will raise, we get # here only if all binding events have been received. resolved_uuids = [arq['uuid'] for arq in resolved_arqs] if sorted(resolved_uuids) != sorted(arq_uuids): # Query Cyborg to get all. arqs = cyclient.get_arqs_for_instance(instance.uuid) else: arqs = resolved_arqs return arqs def _cleanup_allocated_networks(self, context, instance, requested_networks): """Cleanup networks allocated for instance. :param context: nova request context :param instance: nova.objects.instance.Instance object :param requested_networks: nova.objects.NetworkRequestList """ LOG.debug('Unplugging VIFs for instance', instance=instance) network_info = instance.get_network_info() # NOTE(stephenfin) to avoid nova destroying the instance without # unplugging the interface, refresh network_info if it is empty. if not network_info: try: network_info = self.network_api.get_instance_nw_info( context, instance, ) except Exception as exc: LOG.warning( 'Failed to update network info cache when cleaning up ' 'allocated networks. Stale VIFs may be left on this host.' 'Error: %s', six.text_type(exc) ) return try: self.driver.unplug_vifs(instance, network_info) except NotImplementedError: # This is an optional method so ignore things if it doesn't exist LOG.debug( 'Virt driver does not provide unplug_vifs method, so it ' 'is not possible determine if VIFs should be unplugged.' ) except exception.NovaException as exc: # It's possible that the instance never got as far as plugging # VIFs, in which case we would see an exception which can be # mostly ignored LOG.warning( 'Cleaning up VIFs failed for instance. Error: %s', six.text_type(exc), instance=instance, ) else: LOG.debug('Unplugged VIFs for instance', instance=instance) try: self._deallocate_network(context, instance, requested_networks) except Exception: LOG.exception('Failed to deallocate networks', instance=instance) return instance.system_metadata['network_allocated'] = 'False' try: instance.save() except exception.InstanceNotFound: # NOTE(alaski): It's possible that we're cleaning up the networks # because the instance was deleted. 
If that's the case then this # exception will be raised by instance.save() pass def _try_deallocate_network(self, context, instance, requested_networks=None): # During auto-scale cleanup, we could be deleting a large number # of servers at the same time and overloading parts of the system, # so we retry a few times in case of connection failures to the # networking service. @loopingcall.RetryDecorator( max_retry_count=3, inc_sleep_time=2, max_sleep_time=12, exceptions=(keystone_exception.connection.ConnectFailure,)) def _deallocate_network_with_retries(): try: self._deallocate_network( context, instance, requested_networks) except keystone_exception.connection.ConnectFailure as e: # Provide a warning that something is amiss. with excutils.save_and_reraise_exception(): LOG.warning('Failed to deallocate network for instance; ' 'retrying. Error: %s', six.text_type(e), instance=instance) try: # tear down allocated network structure _deallocate_network_with_retries() except Exception as ex: with excutils.save_and_reraise_exception(): LOG.error('Failed to deallocate network for instance. ' 'Error: %s', ex, instance=instance) self._set_instance_obj_error_state(context, instance) def _get_power_off_values(self, context, instance, clean_shutdown): """Get the timing configuration for powering down this instance.""" if clean_shutdown: timeout = compute_utils.get_value_from_system_metadata(instance, key='image_os_shutdown_timeout', type=int, default=CONF.shutdown_timeout) retry_interval = CONF.compute.shutdown_retry_interval else: timeout = 0 retry_interval = 0 return timeout, retry_interval def _power_off_instance(self, context, instance, clean_shutdown=True): """Power off an instance on this host.""" timeout, retry_interval = self._get_power_off_values(context, instance, clean_shutdown) self.driver.power_off(instance, timeout, retry_interval) def _shutdown_instance(self, context, instance, bdms, requested_networks=None, notify=True, try_deallocate_networks=True): """Shutdown an instance on this host. :param:context: security context :param:instance: a nova.objects.Instance object :param:bdms: the block devices for the instance to be torn down :param:requested_networks: the networks on which the instance has ports :param:notify: true if a final usage notification should be emitted :param:try_deallocate_networks: false if we should avoid trying to teardown networking """ context = context.elevated() LOG.info('Terminating instance', instance=instance) if notify: self._notify_about_instance_usage(context, instance, "shutdown.start") compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.SHUTDOWN, phase=fields.NotificationPhase.START, bdms=bdms) network_info = instance.get_network_info() # NOTE(arnaudmorin) to avoid nova destroying the instance without # unplugging the interface, refresh network_info if it is empty. 
if not network_info: network_info = self.network_api.get_instance_nw_info( context, instance) # NOTE(vish) get bdms before destroying the instance vol_bdms = [bdm for bdm in bdms if bdm.is_volume] block_device_info = self._get_instance_block_device_info( context, instance, bdms=bdms) # NOTE(melwitt): attempt driver destroy before releasing ip, may # want to keep ip allocated for certain failures try: LOG.debug('Start destroying the instance on the hypervisor.', instance=instance) with timeutils.StopWatch() as timer: self.driver.destroy(context, instance, network_info, block_device_info) LOG.info('Took %0.2f seconds to destroy the instance on the ' 'hypervisor.', timer.elapsed(), instance=instance) except exception.InstancePowerOffFailure: # if the instance can't power off, don't release the ip with excutils.save_and_reraise_exception(): pass except Exception: with excutils.save_and_reraise_exception(): # deallocate ip and fail without proceeding to # volume api calls, preserving current behavior if try_deallocate_networks: self._try_deallocate_network(context, instance, requested_networks) if try_deallocate_networks: self._try_deallocate_network(context, instance, requested_networks) timer.restart() for bdm in vol_bdms: try: if bdm.attachment_id: self.volume_api.attachment_delete(context, bdm.attachment_id) else: # NOTE(vish): actual driver detach done in driver.destroy, # so just tell cinder that we are done with it. connector = self.driver.get_volume_connector(instance) self.volume_api.terminate_connection(context, bdm.volume_id, connector) self.volume_api.detach(context, bdm.volume_id, instance.uuid) except exception.VolumeAttachmentNotFound as exc: LOG.debug('Ignoring VolumeAttachmentNotFound: %s', exc, instance=instance) except exception.DiskNotFound as exc: LOG.debug('Ignoring DiskNotFound: %s', exc, instance=instance) except exception.VolumeNotFound as exc: LOG.debug('Ignoring VolumeNotFound: %s', exc, instance=instance) except (cinder_exception.EndpointNotFound, keystone_exception.EndpointNotFound) as exc: LOG.warning('Ignoring EndpointNotFound for ' 'volume %(volume_id)s: %(exc)s', {'exc': exc, 'volume_id': bdm.volume_id}, instance=instance) except cinder_exception.ClientException as exc: LOG.warning('Ignoring unknown cinder exception for ' 'volume %(volume_id)s: %(exc)s', {'exc': exc, 'volume_id': bdm.volume_id}, instance=instance) except Exception as exc: LOG.warning('Ignoring unknown exception for ' 'volume %(volume_id)s: %(exc)s', {'exc': exc, 'volume_id': bdm.volume_id}, instance=instance) if vol_bdms: LOG.info('Took %(time).2f seconds to detach %(num)s volumes ' 'for instance.', {'time': timer.elapsed(), 'num': len(vol_bdms)}, instance=instance) if notify: self._notify_about_instance_usage(context, instance, "shutdown.end") compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.SHUTDOWN, phase=fields.NotificationPhase.END, bdms=bdms) def _cleanup_volumes(self, context, instance, bdms, raise_exc=True, detach=True): exc_info = None for bdm in bdms: if detach and bdm.volume_id: try: LOG.debug("Detaching volume: %s", bdm.volume_id, instance_uuid=instance.uuid) destroy = bdm.delete_on_termination self._detach_volume(context, bdm, instance, destroy_bdm=destroy) except Exception as exc: exc_info = sys.exc_info() LOG.warning('Failed to detach volume: %(volume_id)s ' 'due to %(exc)s', {'volume_id': bdm.volume_id, 'exc': exc}) if bdm.volume_id and bdm.delete_on_termination: try: LOG.debug("Deleting volume: %s", bdm.volume_id, 
instance_uuid=instance.uuid) self.volume_api.delete(context, bdm.volume_id) except Exception as exc: exc_info = sys.exc_info() LOG.warning('Failed to delete volume: %(volume_id)s ' 'due to %(exc)s', {'volume_id': bdm.volume_id, 'exc': exc}) if exc_info is not None and raise_exc: six.reraise(exc_info[0], exc_info[1], exc_info[2]) @hooks.add_hook("delete_instance") def _delete_instance(self, context, instance, bdms): """Delete an instance on this host. :param context: nova request context :param instance: nova.objects.instance.Instance object :param bdms: nova.objects.block_device.BlockDeviceMappingList object """ events = self.instance_events.clear_events_for_instance(instance) if events: LOG.debug('Events pending at deletion: %(events)s', {'events': ','.join(events.keys())}, instance=instance) self._notify_about_instance_usage(context, instance, "delete.start") compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.DELETE, phase=fields.NotificationPhase.START, bdms=bdms) self._shutdown_instance(context, instance, bdms) # NOTE(vish): We have already deleted the instance, so we have # to ignore problems cleaning up the volumes. It # would be nice to let the user know somehow that # the volume deletion failed, but it is not # acceptable to have an instance that can not be # deleted. Perhaps this could be reworked in the # future to set an instance fault the first time # and to only ignore the failure if the instance # is already in ERROR. # NOTE(ameeda): The volumes already detached during the above # _shutdown_instance() call and this is why # detach is not requested from _cleanup_volumes() # in this case self._cleanup_volumes(context, instance, bdms, raise_exc=False, detach=False) # Delete Cyborg ARQs if the instance has a device profile. compute_utils.delete_arqs_if_needed(context, instance) # if a delete task succeeded, always update vm state and task # state without expecting task state to be DELETING instance.vm_state = vm_states.DELETED instance.task_state = None instance.power_state = power_state.NOSTATE instance.terminated_at = timeutils.utcnow() instance.save() self._complete_deletion(context, instance) # only destroy the instance in the db if the _complete_deletion # doesn't raise and therefore allocation is successfully # deleted in placement instance.destroy() self._notify_about_instance_usage(context, instance, "delete.end") compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.DELETE, phase=fields.NotificationPhase.END, bdms=bdms) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault def terminate_instance(self, context, instance, bdms): """Terminate an instance on this host.""" @utils.synchronized(instance.uuid) def do_terminate_instance(instance, bdms): # NOTE(mriedem): If we are deleting the instance while it was # booting from volume, we could be racing with a database update of # the BDM volume_id. Since the compute API passes the BDMs over RPC # to compute here, the BDMs may be stale at this point. So check # for any volume BDMs that don't have volume_id set and if we # detect that, we need to refresh the BDM list before proceeding. # TODO(mriedem): Move this into _delete_instance and make the bdms # parameter optional. 
for bdm in list(bdms): if bdm.is_volume and not bdm.volume_id: LOG.debug('There are potentially stale BDMs during ' 'delete, refreshing the BlockDeviceMappingList.', instance=instance) bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) break try: self._delete_instance(context, instance, bdms) except exception.InstanceNotFound: LOG.info("Instance disappeared during terminate", instance=instance) except Exception: # As we're trying to delete always go to Error if something # goes wrong that _delete_instance can't handle. with excutils.save_and_reraise_exception(): LOG.exception('Setting instance vm_state to ERROR', instance=instance) self._set_instance_obj_error_state(context, instance) do_terminate_instance(instance, bdms) # NOTE(johannes): This is probably better named power_off_instance # so it matches the driver method, but because of other issues, we # can't use that name in grizzly. @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault def stop_instance(self, context, instance, clean_shutdown): """Stopping an instance on this host.""" @utils.synchronized(instance.uuid) def do_stop_instance(): current_power_state = self._get_power_state(context, instance) LOG.debug('Stopping instance; current vm_state: %(vm_state)s, ' 'current task_state: %(task_state)s, current DB ' 'power_state: %(db_power_state)s, current VM ' 'power_state: %(current_power_state)s', {'vm_state': instance.vm_state, 'task_state': instance.task_state, 'db_power_state': instance.power_state, 'current_power_state': current_power_state}, instance_uuid=instance.uuid) # NOTE(mriedem): If the instance is already powered off, we are # possibly tearing down and racing with other operations, so we can # expect the task_state to be None if something else updates the # instance and we're not locking it. expected_task_state = [task_states.POWERING_OFF] # The list of power states is from _sync_instance_power_state. 
            if current_power_state in (power_state.NOSTATE,
                                       power_state.SHUTDOWN,
                                       power_state.CRASHED):
                LOG.info('Instance is already powered off in the '
                         'hypervisor when stop is called.',
                         instance=instance)
                expected_task_state.append(None)

            self._notify_about_instance_usage(context, instance,
                                              "power_off.start")

            compute_utils.notify_about_instance_action(context, instance,
                    self.host, action=fields.NotificationAction.POWER_OFF,
                    phase=fields.NotificationPhase.START)

            self._power_off_instance(context, instance, clean_shutdown)
            instance.power_state = self._get_power_state(context, instance)
            instance.vm_state = vm_states.STOPPED
            instance.task_state = None
            instance.save(expected_task_state=expected_task_state)

            self._notify_about_instance_usage(context, instance,
                                              "power_off.end")

            compute_utils.notify_about_instance_action(context, instance,
                    self.host, action=fields.NotificationAction.POWER_OFF,
                    phase=fields.NotificationPhase.END)

        do_stop_instance()

    def _power_on(self, context, instance):
        network_info = self.network_api.get_instance_nw_info(
            context, instance)
        block_device_info = self._get_instance_block_device_info(context,
                                                                 instance)
        accel_info = self._get_accel_info(context, instance)
        self.driver.power_on(context, instance,
                             network_info,
                             block_device_info, accel_info)

    def _delete_snapshot_of_shelved_instance(self, context, instance,
                                             snapshot_id):
        """Delete snapshot of shelved instance."""
        try:
            self.image_api.delete(context, snapshot_id)
        except (exception.ImageNotFound,
                exception.ImageNotAuthorized) as exc:
            LOG.warning("Failed to delete snapshot "
                        "from shelved instance (%s).",
                        exc.format_message(), instance=instance)
        except Exception:
            LOG.exception("Something went wrong when trying to "
                          "delete the snapshot from the shelved instance.",
                          instance=instance)

    # NOTE(johannes): This is probably better named power_on_instance
    # so it matches the driver method, but because of other issues, we
    # can't use that name in grizzly.
    @wrap_exception()
    @reverts_task_state
    @wrap_instance_event(prefix='compute')
    @wrap_instance_fault
    def start_instance(self, context, instance):
        """Starting an instance on this host."""
        self._notify_about_instance_usage(context, instance, "power_on.start")
        compute_utils.notify_about_instance_action(context, instance,
            self.host, action=fields.NotificationAction.POWER_ON,
            phase=fields.NotificationPhase.START)
        self._power_on(context, instance)
        instance.power_state = self._get_power_state(context, instance)
        instance.vm_state = vm_states.ACTIVE
        instance.task_state = None

        # Delete an image(VM snapshot) for a shelved instance
        snapshot_id = instance.system_metadata.get('shelved_image_id')
        if snapshot_id:
            self._delete_snapshot_of_shelved_instance(context, instance,
                                                      snapshot_id)

        # Delete system_metadata for a shelved instance
        compute_utils.remove_shelved_keys_from_system_metadata(instance)

        instance.save(expected_task_state=task_states.POWERING_ON)
        self._notify_about_instance_usage(context, instance, "power_on.end")
        compute_utils.notify_about_instance_action(context, instance,
            self.host, action=fields.NotificationAction.POWER_ON,
            phase=fields.NotificationPhase.END)

    @messaging.expected_exceptions(NotImplementedError,
                                   exception.TriggerCrashDumpNotSupported,
                                   exception.InstanceNotRunning)
    @wrap_exception()
    @wrap_instance_event(prefix='compute')
    @wrap_instance_fault
    def trigger_crash_dump(self, context, instance):
        """Trigger crash dump in an instance."""
        self._notify_about_instance_usage(context, instance,
                                          "trigger_crash_dump.start")
        compute_utils.notify_about_instance_action(context, instance,
            self.host, action=fields.NotificationAction.TRIGGER_CRASH_DUMP,
            phase=fields.NotificationPhase.START)

        # This method does not change task_state and power_state because the
        # effect of a trigger depends on user's configuration.
self.driver.trigger_crash_dump(instance) self._notify_about_instance_usage(context, instance, "trigger_crash_dump.end") compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.TRIGGER_CRASH_DUMP, phase=fields.NotificationPhase.END) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault def soft_delete_instance(self, context, instance): """Soft delete an instance on this host.""" with compute_utils.notify_about_instance_delete( self.notifier, context, instance, 'soft_delete', source=fields.NotificationSource.COMPUTE): try: self.driver.soft_delete(instance) except NotImplementedError: # Fallback to just powering off the instance if the # hypervisor doesn't implement the soft_delete method self.driver.power_off(instance) instance.power_state = self._get_power_state(context, instance) instance.vm_state = vm_states.SOFT_DELETED instance.task_state = None instance.save(expected_task_state=[task_states.SOFT_DELETING]) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault def restore_instance(self, context, instance): """Restore a soft-deleted instance on this host.""" self._notify_about_instance_usage(context, instance, "restore.start") compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.RESTORE, phase=fields.NotificationPhase.START) try: self.driver.restore(instance) except NotImplementedError: # Fallback to just powering on the instance if the hypervisor # doesn't implement the restore method self._power_on(context, instance) instance.power_state = self._get_power_state(context, instance) instance.vm_state = vm_states.ACTIVE instance.task_state = None instance.save(expected_task_state=task_states.RESTORING) self._notify_about_instance_usage(context, instance, "restore.end") compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.RESTORE, phase=fields.NotificationPhase.END) @staticmethod def _set_migration_status(migration, status): """Set the status, and guard against a None being passed in. This is useful as some of the compute RPC calls will not pass a migration object in older versions. The check can be removed when we move past 4.x major version of the RPC API. """ if migration: migration.status = status migration.save() def _rebuild_default_impl(self, context, instance, image_meta, injected_files, admin_password, allocations, bdms, detach_block_devices, attach_block_devices, network_info=None, evacuate=False, block_device_info=None, preserve_ephemeral=False): if preserve_ephemeral: # The default code path does not support preserving ephemeral # partitions. 
raise exception.PreserveEphemeralNotSupported() if evacuate: detach_block_devices(context, bdms) else: self._power_off_instance(context, instance, clean_shutdown=True) detach_block_devices(context, bdms) self.driver.destroy(context, instance, network_info=network_info, block_device_info=block_device_info) instance.task_state = task_states.REBUILD_BLOCK_DEVICE_MAPPING instance.save(expected_task_state=[task_states.REBUILDING]) new_block_device_info = attach_block_devices(context, instance, bdms) instance.task_state = task_states.REBUILD_SPAWNING instance.save( expected_task_state=[task_states.REBUILD_BLOCK_DEVICE_MAPPING]) with instance.mutated_migration_context(): self.driver.spawn(context, instance, image_meta, injected_files, admin_password, allocations, network_info=network_info, block_device_info=new_block_device_info) def _notify_instance_rebuild_error(self, context, instance, error, bdms): tb = traceback.format_exc() self._notify_about_instance_usage(context, instance, 'rebuild.error', fault=error) compute_utils.notify_about_instance_rebuild( context, instance, self.host, phase=fields.NotificationPhase.ERROR, exception=error, bdms=bdms, tb=tb) @messaging.expected_exceptions(exception.PreserveEphemeralNotSupported) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault def rebuild_instance(self, context, instance, orig_image_ref, image_ref, injected_files, new_pass, orig_sys_metadata, bdms, recreate, on_shared_storage, preserve_ephemeral, migration, scheduled_node, limits, request_spec): """Destroy and re-make this instance. A 'rebuild' effectively purges all existing data from the system and remakes the VM with given 'metadata' and 'personalities'. :param context: `nova.RequestContext` object :param instance: Instance object :param orig_image_ref: Original image_ref before rebuild :param image_ref: New image_ref for rebuild :param injected_files: Files to inject :param new_pass: password to set on rebuilt instance :param orig_sys_metadata: instance system metadata from pre-rebuild :param bdms: block-device-mappings to use for rebuild :param recreate: True if the instance is being evacuated (e.g. the hypervisor it was on failed) - cleanup of old state will be skipped. :param on_shared_storage: True if instance files on shared storage. If not provided then information from the driver will be used to decide if the instance files are available or not on the target host :param preserve_ephemeral: True if the default ephemeral storage partition must be preserved on rebuild :param migration: a Migration object if one was created for this rebuild operation (if it's a part of evacuate) :param scheduled_node: A node of the host chosen by the scheduler. If a host was specified by the user, this will be None :param limits: Overcommit limits set by the scheduler. If a host was specified by the user, this will be None :param request_spec: a RequestSpec object used to schedule the instance """ # recreate=True means the instance is being evacuated from a failed # host to a new destination host (this host). The 'recreate' variable # name is confusing, so rename it to evacuate here at the top, which # is simpler than renaming a parameter in an RPC versioned method. evacuate = recreate context = context.elevated() if evacuate: LOG.info("Evacuating instance", instance=instance) else: LOG.info("Rebuilding instance", instance=instance) if evacuate: # This is an evacuation to a new host, so we need to perform a # resource claim. 
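            # The claim is evaluated against this (destination) host's
            # resource tracker; if it cannot be satisfied it raises
            # ComputeResourcesUnavailable, which is handled further down in
            # this method.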
rebuild_claim = self.rt.rebuild_claim else: # This is a rebuild to the same host, so we don't need to make # a claim since the instance is already on this host. rebuild_claim = claims.NopClaim if image_ref: image_meta = objects.ImageMeta.from_image_ref( context, self.image_api, image_ref) elif evacuate: # For evacuate the API does not send down the image_ref since the # image does not change so just get it from what was stashed in # the instance system_metadata when the instance was created (or # last rebuilt). This also works for volume-backed instances. image_meta = instance.image_meta else: image_meta = objects.ImageMeta() # NOTE(mriedem): On an evacuate, we need to update # the instance's host and node properties to reflect it's # destination node for the evacuate. if not scheduled_node: if evacuate: try: compute_node = self._get_compute_info(context, self.host) scheduled_node = compute_node.hypervisor_hostname except exception.ComputeHostNotFound: LOG.exception('Failed to get compute_info for %s', self.host) else: scheduled_node = instance.node allocs = self.reportclient.get_allocations_for_consumer( context, instance.uuid) # If the resource claim or group policy validation fails before we # do anything to the guest or its networking/volumes we want to keep # the current status rather than put the instance into ERROR status. instance_state = instance.vm_state with self._error_out_instance_on_exception( context, instance, instance_state=instance_state): try: self._do_rebuild_instance_with_claim( context, instance, orig_image_ref, image_meta, injected_files, new_pass, orig_sys_metadata, bdms, evacuate, on_shared_storage, preserve_ephemeral, migration, request_spec, allocs, rebuild_claim, scheduled_node, limits) except (exception.ComputeResourcesUnavailable, exception.RescheduledException) as e: if isinstance(e, exception.ComputeResourcesUnavailable): LOG.debug("Could not rebuild instance on this host, not " "enough resources available.", instance=instance) else: # RescheduledException is raised by the late server group # policy check during evacuation if a parallel scheduling # violated the policy. # We catch the RescheduledException here but we don't have # the plumbing to do an actual reschedule so we abort the # operation. LOG.debug("Could not rebuild instance on this host, " "late server group check failed.", instance=instance) # NOTE(ndipanov): We just abort the build for now and leave a # migration record for potential cleanup later self._set_migration_status(migration, 'failed') # Since the claim failed, we need to remove the allocation # created against the destination node. Note that we can only # get here when evacuating to a destination node. Rebuilding # on the same host (not evacuate) uses the NopClaim which will # not raise ComputeResourcesUnavailable. self.rt.delete_allocation_for_evacuated_instance( context, instance, scheduled_node, node_type='destination') self._notify_instance_rebuild_error(context, instance, e, bdms) # Wrap this in InstanceFaultRollback so that the # _error_out_instance_on_exception context manager keeps the # vm_state unchanged. 
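                # That context manager resets vm_state to the value captured
                # above and re-raises the inner BuildAbortException, which is
                # then recorded as the instance fault.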
raise exception.InstanceFaultRollback( inner_exception=exception.BuildAbortException( instance_uuid=instance.uuid, reason=e.format_message())) except (exception.InstanceNotFound, exception.UnexpectedDeletingTaskStateError) as e: LOG.debug('Instance was deleted while rebuilding', instance=instance) self._set_migration_status(migration, 'failed') self._notify_instance_rebuild_error(context, instance, e, bdms) except Exception as e: self._set_migration_status(migration, 'failed') if evacuate or scheduled_node is not None: self.rt.delete_allocation_for_evacuated_instance( context, instance, scheduled_node, node_type='destination') self._notify_instance_rebuild_error(context, instance, e, bdms) raise else: # NOTE(gibi): Let the resource tracker set the instance # host and drop the migration context as we need to hold the # COMPUTE_RESOURCE_SEMAPHORE to avoid the race with # _update_available_resources. See bug 1896463. self.rt.finish_evacuation(instance, scheduled_node, migration) def _do_rebuild_instance_with_claim( self, context, instance, orig_image_ref, image_meta, injected_files, new_pass, orig_sys_metadata, bdms, evacuate, on_shared_storage, preserve_ephemeral, migration, request_spec, allocations, rebuild_claim, scheduled_node, limits): """Helper to avoid deep nesting in the top-level method.""" provider_mapping = None if evacuate: provider_mapping = self._get_request_group_mapping(request_spec) if provider_mapping: compute_utils.\ update_pci_request_spec_with_allocated_interface_name( context, self.reportclient, instance, provider_mapping) claim_context = rebuild_claim( context, instance, scheduled_node, allocations, limits=limits, image_meta=image_meta, migration=migration) with claim_context: self._do_rebuild_instance( context, instance, orig_image_ref, image_meta, injected_files, new_pass, orig_sys_metadata, bdms, evacuate, on_shared_storage, preserve_ephemeral, migration, request_spec, allocations, provider_mapping) @staticmethod def _get_image_name(image_meta): if image_meta.obj_attr_is_set("name"): return image_meta.name else: return '' def _do_rebuild_instance(self, context, instance, orig_image_ref, image_meta, injected_files, new_pass, orig_sys_metadata, bdms, evacuate, on_shared_storage, preserve_ephemeral, migration, request_spec, allocations, request_group_resource_providers_mapping): orig_vm_state = instance.vm_state if evacuate: if request_spec: # NOTE(gibi): Do a late check of server group policy as # parallel scheduling could violate such policy. This will # cause the evacuate to fail as rebuild does not implement # reschedule. 
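                # _get_scheduler_hints extracts the server group hint from
                # the RequestSpec so the affinity/anti-affinity policy can be
                # re-checked against the group members' current hosts.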
hints = self._get_scheduler_hints({}, request_spec) self._validate_instance_group_policy(context, instance, hints) if not self.driver.capabilities.get("supports_evacuate", False): raise exception.InstanceEvacuateNotSupported self._check_instance_exists(context, instance) if on_shared_storage is None: LOG.debug('on_shared_storage is not provided, using driver ' 'information to decide if the instance needs to ' 'be evacuated') on_shared_storage = self.driver.instance_on_disk(instance) elif (on_shared_storage != self.driver.instance_on_disk(instance)): # To cover case when admin expects that instance files are # on shared storage, but not accessible and vice versa raise exception.InvalidSharedStorage( _("Invalid state of instance files on shared" " storage")) if on_shared_storage: LOG.info('disk on shared storage, evacuating using' ' existing disk') elif instance.image_ref: orig_image_ref = instance.image_ref LOG.info("disk not on shared storage, evacuating from " "image: '%s'", str(orig_image_ref)) else: LOG.info('disk on volume, evacuating using existing ' 'volume') # We check trusted certs capabilities for both evacuate (rebuild on # another host) and rebuild (rebuild on the same host) because for # evacuate we need to make sure an instance with trusted certs can # have the image verified with those certs during rebuild, and for # rebuild we could be rebuilding a server that started out with no # trusted certs on this host, and then was rebuilt with trusted certs # for a new image, in which case we need to validate that new image # with the trusted certs during the rebuild. self._check_trusted_certs(instance) # This instance.exists message should contain the original # image_ref, not the new one. Since the DB has been updated # to point to the new one... we have to override it. orig_image_ref_url = self.image_api.generate_image_url(orig_image_ref, context) extra_usage_info = {'image_ref_url': orig_image_ref_url} compute_utils.notify_usage_exists( self.notifier, context, instance, self.host, current_period=True, system_metadata=orig_sys_metadata, extra_usage_info=extra_usage_info) # This message should contain the new image_ref extra_usage_info = {'image_name': self._get_image_name(image_meta)} self._notify_about_instance_usage(context, instance, "rebuild.start", extra_usage_info=extra_usage_info) # NOTE: image_name is not included in the versioned notification # because we already provide the image_uuid in the notification # payload and the image details can be looked up via the uuid. compute_utils.notify_about_instance_rebuild( context, instance, self.host, phase=fields.NotificationPhase.START, bdms=bdms) instance.power_state = self._get_power_state(context, instance) instance.task_state = task_states.REBUILDING instance.save(expected_task_state=[task_states.REBUILDING]) if evacuate: self.network_api.setup_networks_on_host( context, instance, self.host) # For nova-network this is needed to move floating IPs # For neutron this updates the host in the port binding # TODO(cfriesen): this network_api call and the one above # are so similar, we should really try to unify them. self.network_api.setup_instance_network_on_host( context, instance, self.host, migration, provider_mappings=request_group_resource_providers_mapping) # TODO(mriedem): Consider decorating setup_instance_network_on_host # with @api.refresh_cache and then we wouldn't need this explicit # call to get_instance_nw_info. 
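            # Until then, refresh the network info explicitly so the guest is
            # rebuilt with up-to-date VIF details for this host.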
network_info = self.network_api.get_instance_nw_info(context, instance) else: network_info = instance.get_network_info() if bdms is None: bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) block_device_info = \ self._get_instance_block_device_info( context, instance, bdms=bdms) def detach_block_devices(context, bdms): for bdm in bdms: if bdm.is_volume: # NOTE (ildikov): Having the attachment_id set in the BDM # means that it's the new Cinder attach/detach flow # (available from v3.44). In that case we explicitly # attach and detach the volumes through attachment level # operations. In this scenario _detach_volume will delete # the existing attachment which would make the volume # status change to 'available' if we don't pre-create # another empty attachment before deleting the old one. attachment_id = None if bdm.attachment_id: attachment_id = self.volume_api.attachment_create( context, bdm['volume_id'], instance.uuid)['id'] self._detach_volume(context, bdm, instance, destroy_bdm=False) if attachment_id: bdm.attachment_id = attachment_id bdm.save() files = self._decode_files(injected_files) kwargs = dict( context=context, instance=instance, image_meta=image_meta, injected_files=files, admin_password=new_pass, allocations=allocations, bdms=bdms, detach_block_devices=detach_block_devices, attach_block_devices=self._prep_block_device, block_device_info=block_device_info, network_info=network_info, preserve_ephemeral=preserve_ephemeral, evacuate=evacuate) try: with instance.mutated_migration_context(): self.driver.rebuild(**kwargs) except NotImplementedError: # NOTE(rpodolyaka): driver doesn't provide specialized version # of rebuild, fall back to the default implementation self._rebuild_default_impl(**kwargs) self._update_instance_after_spawn(context, instance) instance.save(expected_task_state=[task_states.REBUILD_SPAWNING]) if orig_vm_state == vm_states.STOPPED: LOG.info("bringing vm to original state: '%s'", orig_vm_state, instance=instance) instance.vm_state = vm_states.ACTIVE instance.task_state = task_states.POWERING_OFF instance.progress = 0 instance.save() self.stop_instance(context, instance, False) # TODO(melwitt): We should clean up instance console tokens here in the # case of evacuate. The instance is on a new host and will need to # establish a new console connection. self._update_scheduler_instance_info(context, instance) self._notify_about_instance_usage( context, instance, "rebuild.end", network_info=network_info, extra_usage_info=extra_usage_info) compute_utils.notify_about_instance_rebuild( context, instance, self.host, phase=fields.NotificationPhase.END, bdms=bdms) def _handle_bad_volumes_detached(self, context, instance, bad_devices, block_device_info): """Handle cases where the virt-layer had to detach non-working volumes in order to complete an operation. """ for bdm in block_device_info['block_device_mapping']: if bdm.get('mount_device') in bad_devices: try: volume_id = bdm['connection_info']['data']['volume_id'] except KeyError: continue # NOTE(sirp): ideally we'd just call # `compute_api.detach_volume` here but since that hits the # DB directly, that's off limits from within the # compute-manager. 
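                # begin_detaching flips the volume to 'detaching' in Cinder;
                # the compute-side disconnect is then done by detach_volume
                # below.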
# # API-detach LOG.info("Detaching from volume api: %s", volume_id) self.volume_api.begin_detaching(context, volume_id) # Manager-detach self.detach_volume(context, volume_id, instance) def _get_accel_info(self, context, instance): dp_name = instance.flavor.extra_specs.get('accel:device_profile') if dp_name: cyclient = cyborg.get_client(context) accel_info = cyclient.get_arqs_for_instance(instance.uuid) else: accel_info = [] return accel_info @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault def reboot_instance(self, context, instance, block_device_info, reboot_type): @utils.synchronized(instance.uuid) def do_reboot_instance(context, instance, block_device_info, reboot_type): self._reboot_instance(context, instance, block_device_info, reboot_type) do_reboot_instance(context, instance, block_device_info, reboot_type) def _reboot_instance(self, context, instance, block_device_info, reboot_type): """Reboot an instance on this host.""" # acknowledge the request made it to the manager if reboot_type == "SOFT": instance.task_state = task_states.REBOOT_PENDING expected_states = task_states.soft_reboot_states else: instance.task_state = task_states.REBOOT_PENDING_HARD expected_states = task_states.hard_reboot_states context = context.elevated() LOG.info("Rebooting instance", instance=instance) bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) block_device_info = self._get_instance_block_device_info( context, instance, bdms=bdms) network_info = self.network_api.get_instance_nw_info(context, instance) accel_info = self._get_accel_info(context, instance) self._notify_about_instance_usage(context, instance, "reboot.start") compute_utils.notify_about_instance_action( context, instance, self.host, action=fields.NotificationAction.REBOOT, phase=fields.NotificationPhase.START, bdms=bdms ) instance.power_state = self._get_power_state(context, instance) instance.save(expected_task_state=expected_states) if instance.power_state != power_state.RUNNING: state = instance.power_state running = power_state.RUNNING LOG.warning('trying to reboot a non-running instance:' ' (state: %(state)s expected: %(running)s)', {'state': state, 'running': running}, instance=instance) def bad_volumes_callback(bad_devices): self._handle_bad_volumes_detached( context, instance, bad_devices, block_device_info) try: # Don't change it out of rescue mode if instance.vm_state == vm_states.RESCUED: new_vm_state = vm_states.RESCUED else: new_vm_state = vm_states.ACTIVE new_power_state = None if reboot_type == "SOFT": instance.task_state = task_states.REBOOT_STARTED expected_state = task_states.REBOOT_PENDING else: instance.task_state = task_states.REBOOT_STARTED_HARD expected_state = task_states.REBOOT_PENDING_HARD instance.save(expected_task_state=expected_state) self.driver.reboot(context, instance, network_info, reboot_type, block_device_info=block_device_info, accel_info=accel_info, bad_volumes_callback=bad_volumes_callback) except Exception as error: with excutils.save_and_reraise_exception() as ctxt: exc_info = sys.exc_info() # if the reboot failed but the VM is running don't # put it into an error state new_power_state = self._get_power_state(context, instance) if new_power_state == power_state.RUNNING: LOG.warning('Reboot failed but instance is running', instance=instance) compute_utils.add_instance_fault_from_exc(context, instance, error, exc_info) self._notify_about_instance_usage(context, instance, 'reboot.error', fault=error) tb = 
traceback.format_exc() compute_utils.notify_about_instance_action( context, instance, self.host, action=fields.NotificationAction.REBOOT, phase=fields.NotificationPhase.ERROR, exception=error, bdms=bdms, tb=tb ) ctxt.reraise = False else: LOG.error('Cannot reboot instance: %s', error, instance=instance) self._set_instance_obj_error_state(context, instance) if not new_power_state: new_power_state = self._get_power_state(context, instance) try: instance.power_state = new_power_state instance.vm_state = new_vm_state instance.task_state = None instance.save() except exception.InstanceNotFound: LOG.warning("Instance disappeared during reboot", instance=instance) self._notify_about_instance_usage(context, instance, "reboot.end") compute_utils.notify_about_instance_action( context, instance, self.host, action=fields.NotificationAction.REBOOT, phase=fields.NotificationPhase.END, bdms=bdms ) @delete_image_on_error def _do_snapshot_instance(self, context, image_id, instance): self._snapshot_instance(context, image_id, instance, task_states.IMAGE_BACKUP) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault def backup_instance(self, context, image_id, instance, backup_type, rotation): """Backup an instance on this host. :param backup_type: daily | weekly :param rotation: int representing how many backups to keep around """ self._do_snapshot_instance(context, image_id, instance) self._rotate_backups(context, instance, backup_type, rotation) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault @delete_image_on_error def snapshot_instance(self, context, image_id, instance): """Snapshot an instance on this host. :param context: security context :param image_id: glance.db.sqlalchemy.models.Image.Id :param instance: a nova.objects.instance.Instance object """ # NOTE(dave-mcnally) the task state will already be set by the api # but if the compute manager has crashed/been restarted prior to the # request getting here the task state may have been cleared so we set # it again and things continue normally try: instance.task_state = task_states.IMAGE_SNAPSHOT instance.save( expected_task_state=task_states.IMAGE_SNAPSHOT_PENDING) except exception.InstanceNotFound: # possibility instance no longer exists, no point in continuing LOG.debug("Instance not found, could not set state %s " "for instance.", task_states.IMAGE_SNAPSHOT, instance=instance) return except exception.UnexpectedDeletingTaskStateError: LOG.debug("Instance being deleted, snapshot cannot continue", instance=instance) return self._snapshot_instance(context, image_id, instance, task_states.IMAGE_SNAPSHOT) def _snapshot_instance(self, context, image_id, instance, expected_task_state): context = context.elevated() instance.power_state = self._get_power_state(context, instance) try: instance.save() LOG.info('instance snapshotting', instance=instance) if instance.power_state != power_state.RUNNING: state = instance.power_state running = power_state.RUNNING LOG.warning('trying to snapshot a non-running instance: ' '(state: %(state)s expected: %(running)s)', {'state': state, 'running': running}, instance=instance) self._notify_about_instance_usage( context, instance, "snapshot.start") compute_utils.notify_about_instance_snapshot(context, instance, self.host, phase=fields.NotificationPhase.START, snapshot_image_id=image_id) def update_task_state(task_state, expected_state=expected_task_state): instance.task_state = task_state 
instance.save(expected_task_state=expected_state) with timeutils.StopWatch() as timer: self.driver.snapshot(context, instance, image_id, update_task_state) LOG.info('Took %0.2f seconds to snapshot the instance on ' 'the hypervisor.', timer.elapsed(), instance=instance) instance.task_state = None instance.save(expected_task_state=task_states.IMAGE_UPLOADING) self._notify_about_instance_usage(context, instance, "snapshot.end") compute_utils.notify_about_instance_snapshot(context, instance, self.host, phase=fields.NotificationPhase.END, snapshot_image_id=image_id) except (exception.InstanceNotFound, exception.InstanceNotRunning, exception.UnexpectedDeletingTaskStateError): # the instance got deleted during the snapshot # Quickly bail out of here msg = 'Instance disappeared during snapshot' LOG.debug(msg, instance=instance) try: image = self.image_api.get(context, image_id) if image['status'] != 'active': self.image_api.delete(context, image_id) except exception.ImageNotFound: LOG.debug('Image not found during clean up %s', image_id) except Exception: LOG.warning("Error while trying to clean up image %s", image_id, instance=instance) except exception.ImageNotFound: instance.task_state = None instance.save() LOG.warning("Image not found during snapshot", instance=instance) def _post_interrupted_snapshot_cleanup(self, context, instance): self.driver.post_interrupted_snapshot_cleanup(context, instance) @messaging.expected_exceptions(NotImplementedError) @wrap_exception() def volume_snapshot_create(self, context, instance, volume_id, create_info): try: self.driver.volume_snapshot_create(context, instance, volume_id, create_info) except exception.InstanceNotRunning: # Libvirt driver can raise this exception LOG.debug('Instance disappeared during volume snapshot create', instance=instance) @messaging.expected_exceptions(NotImplementedError) @wrap_exception() def volume_snapshot_delete(self, context, instance, volume_id, snapshot_id, delete_info): try: self.driver.volume_snapshot_delete(context, instance, volume_id, snapshot_id, delete_info) except exception.InstanceNotRunning: # Libvirt driver can raise this exception LOG.debug('Instance disappeared during volume snapshot delete', instance=instance) @wrap_instance_fault def _rotate_backups(self, context, instance, backup_type, rotation): """Delete excess backups associated to an instance. Instances are allowed a fixed number of backups (the rotation number); this method deletes the oldest backups that exceed the rotation threshold. :param context: security context :param instance: Instance dict :param backup_type: a user-defined type, like "daily" or "weekly" etc. 
:param rotation: int representing how many backups to keep around; None if rotation shouldn't be used (as in the case of snapshots) """ filters = {'property-image_type': 'backup', 'property-backup_type': backup_type, 'property-instance_uuid': instance.uuid} images = self.image_api.get_all(context, filters=filters, sort_key='created_at', sort_dir='desc') num_images = len(images) LOG.debug("Found %(num_images)d images (rotation: %(rotation)d)", {'num_images': num_images, 'rotation': rotation}, instance=instance) if num_images > rotation: # NOTE(sirp): this deletes all backups that exceed the rotation # limit excess = len(images) - rotation LOG.debug("Rotating out %d backups", excess, instance=instance) for i in range(excess): image = images.pop() image_id = image['id'] LOG.debug("Deleting image %s", image_id, instance=instance) try: self.image_api.delete(context, image_id) except exception.ImageNotFound: LOG.info("Failed to find image %(image_id)s to " "delete", {'image_id': image_id}, instance=instance) except (exception.ImageDeleteConflict, Exception) as exc: LOG.info("Failed to delete image %(image_id)s during " "deleting excess backups. " "Continuing for next image.. %(exc)s", {'image_id': image_id, 'exc': exc}, instance=instance) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault def set_admin_password(self, context, instance, new_pass): """Set the root/admin password for an instance on this host. This is generally only called by API password resets after an image has been built. @param context: Nova auth context. @param instance: Nova instance object. @param new_pass: The admin password for the instance. """ context = context.elevated() current_power_state = self._get_power_state(context, instance) expected_state = power_state.RUNNING if current_power_state != expected_state: instance.task_state = None instance.save(expected_task_state=task_states.UPDATING_PASSWORD) _msg = _('instance %s is not running') % instance.uuid raise exception.InstancePasswordSetFailed( instance=instance.uuid, reason=_msg) try: self.driver.set_admin_password(instance, new_pass) LOG.info("Admin password set", instance=instance) instance.task_state = None instance.save( expected_task_state=task_states.UPDATING_PASSWORD) except exception.InstanceAgentNotEnabled: with excutils.save_and_reraise_exception(): LOG.debug('Guest agent is not enabled for the instance.', instance=instance) instance.task_state = None instance.save( expected_task_state=task_states.UPDATING_PASSWORD) except exception.SetAdminPasswdNotSupported: with excutils.save_and_reraise_exception(): LOG.info('set_admin_password is not supported ' 'by this driver or guest instance.', instance=instance) instance.task_state = None instance.save( expected_task_state=task_states.UPDATING_PASSWORD) except NotImplementedError: LOG.warning('set_admin_password is not implemented ' 'by this driver or guest instance.', instance=instance) instance.task_state = None instance.save( expected_task_state=task_states.UPDATING_PASSWORD) raise NotImplementedError(_('set_admin_password is not ' 'implemented by this driver or guest ' 'instance.')) except exception.UnexpectedTaskStateError: # interrupted by another (most likely delete) task # do not retry raise except Exception: # Catch all here because this could be anything. LOG.exception('set_admin_password failed', instance=instance) # We create a new exception here so that we won't # potentially reveal password information to the # API caller. 
The real exception is logged above _msg = _('error setting admin password') raise exception.InstancePasswordSetFailed( instance=instance.uuid, reason=_msg) @wrap_exception() @reverts_task_state @wrap_instance_fault def inject_file(self, context, path, file_contents, instance): """Write a file to the specified path in an instance on this host.""" # NOTE(russellb) Remove this method, as well as the underlying virt # driver methods, when the compute rpc interface is bumped to 4.x # as it is no longer used. context = context.elevated() current_power_state = self._get_power_state(context, instance) expected_state = power_state.RUNNING if current_power_state != expected_state: LOG.warning('trying to inject a file into a non-running ' '(state: %(current_state)s expected: ' '%(expected_state)s)', {'current_state': current_power_state, 'expected_state': expected_state}, instance=instance) LOG.info('injecting file to %s', path, instance=instance) self.driver.inject_file(instance, path, file_contents) def _get_rescue_image(self, context, instance, rescue_image_ref=None): """Determine what image should be used to boot the rescue VM.""" # 1. If rescue_image_ref is passed in, use that for rescue. # 2. Else, use the base image associated with instance's current image. # The idea here is to provide the customer with a rescue # environment which they are familiar with. # So, if they built their instance off of a Debian image, # their rescue VM will also be Debian. # 3. As a last resort, use instance's current image. if not rescue_image_ref: system_meta = utils.instance_sys_meta(instance) rescue_image_ref = system_meta.get('image_base_image_ref') if not rescue_image_ref: LOG.warning('Unable to find a different image to use for ' 'rescue VM, using instance\'s current image', instance=instance) rescue_image_ref = instance.image_ref return objects.ImageMeta.from_image_ref( context, self.image_api, rescue_image_ref) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault def rescue_instance(self, context, instance, rescue_password, rescue_image_ref, clean_shutdown): context = context.elevated() LOG.info('Rescuing', instance=instance) admin_password = (rescue_password if rescue_password else utils.generate_password()) network_info = self.network_api.get_instance_nw_info(context, instance) rescue_image_meta = self._get_rescue_image(context, instance, rescue_image_ref) bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) block_device_info = self._get_instance_block_device_info( context, instance, bdms=bdms) extra_usage_info = {'rescue_image_name': self._get_image_name(rescue_image_meta)} self._notify_about_instance_usage(context, instance, "rescue.start", extra_usage_info=extra_usage_info, network_info=network_info) compute_utils.notify_about_instance_rescue_action( context, instance, self.host, rescue_image_ref, phase=fields.NotificationPhase.START) try: self._power_off_instance(context, instance, clean_shutdown) self.driver.rescue(context, instance, network_info, rescue_image_meta, admin_password, block_device_info) except Exception as e: LOG.exception("Error trying to Rescue Instance", instance=instance) self._set_instance_obj_error_state(context, instance) raise exception.InstanceNotRescuable( instance_id=instance.uuid, reason=_("Driver Error: %s") % e) compute_utils.notify_usage_exists(self.notifier, context, instance, self.host, current_period=True) instance.vm_state = vm_states.RESCUED instance.task_state = None instance.power_state = 
self._get_power_state(context, instance) instance.launched_at = timeutils.utcnow() instance.save(expected_task_state=task_states.RESCUING) self._notify_about_instance_usage(context, instance, "rescue.end", extra_usage_info=extra_usage_info, network_info=network_info) compute_utils.notify_about_instance_rescue_action( context, instance, self.host, rescue_image_ref, phase=fields.NotificationPhase.END) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault def unrescue_instance(self, context, instance): context = context.elevated() LOG.info('Unrescuing', instance=instance) network_info = self.network_api.get_instance_nw_info(context, instance) self._notify_about_instance_usage(context, instance, "unrescue.start", network_info=network_info) compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.UNRESCUE, phase=fields.NotificationPhase.START) with self._error_out_instance_on_exception(context, instance): self.driver.unrescue(instance, network_info) instance.vm_state = vm_states.ACTIVE instance.task_state = None instance.power_state = self._get_power_state(context, instance) instance.save(expected_task_state=task_states.UNRESCUING) self._notify_about_instance_usage(context, instance, "unrescue.end", network_info=network_info) compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.UNRESCUE, phase=fields.NotificationPhase.END) @wrap_exception() @wrap_instance_fault def change_instance_metadata(self, context, diff, instance): """Update the metadata published to the instance.""" LOG.debug("Changing instance metadata according to %r", diff, instance=instance) self.driver.change_instance_metadata(context, instance, diff) @wrap_exception() @wrap_instance_event(prefix='compute') @errors_out_migration @wrap_instance_fault def confirm_resize(self, context, instance, migration): """Confirms a migration/resize and deletes the 'old' instance. This is called from the API and runs on the source host. Nothing needs to happen on the destination host at this point since the instance is already running there. This routine just cleans up the source host. """ @utils.synchronized(instance.uuid) def do_confirm_resize(context, instance, migration_id): # NOTE(wangpan): Get the migration status from db, if it has been # confirmed, we do nothing and return here LOG.debug("Going to confirm migration %s", migration_id, instance=instance) try: # TODO(russellb) Why are we sending the migration object just # to turn around and look it up from the db again? 
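                # Re-reading the migration from the DB means a confirm that
                # raced with, or repeated, another request simply becomes a
                # no-op in the status checks below.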
migration = objects.Migration.get_by_id( context.elevated(), migration_id) except exception.MigrationNotFound: LOG.error("Migration %s is not found during confirmation", migration_id, instance=instance) return if migration.status == 'confirmed': LOG.info("Migration %s is already confirmed", migration_id, instance=instance) return elif migration.status not in ('finished', 'confirming'): LOG.warning("Unexpected confirmation status '%(status)s' " "of migration %(id)s, exit confirmation process", {"status": migration.status, "id": migration_id}, instance=instance) return # NOTE(wangpan): Get the instance from db, if it has been # deleted, we do nothing and return here expected_attrs = ['metadata', 'system_metadata', 'flavor'] try: instance = objects.Instance.get_by_uuid( context, instance.uuid, expected_attrs=expected_attrs) except exception.InstanceNotFound: LOG.info("Instance is not found during confirmation", instance=instance) return with self._error_out_instance_on_exception(context, instance): try: self._confirm_resize( context, instance, migration=migration) except Exception: # Something failed when cleaning up the source host so # log a traceback and leave a hint about hard rebooting # the server to correct its state in the DB. with excutils.save_and_reraise_exception(logger=LOG): LOG.exception( 'Confirm resize failed on source host %s. ' 'Resource allocations in the placement service ' 'will be removed regardless because the instance ' 'is now on the destination host %s. You can try ' 'hard rebooting the instance to correct its ' 'state.', self.host, migration.dest_compute, instance=instance) finally: # Whether an error occurred or not, at this point the # instance is on the dest host. Avoid leaking allocations # in placement by deleting them here... self._delete_allocation_after_move( context, instance, migration) # ...inform the scheduler about the move... self._delete_scheduler_instance_info( context, instance.uuid) # ...and unset the cached flavor information (this is done # last since the resource tracker relies on it for its # periodic tasks) self._delete_stashed_flavor_info(instance) do_confirm_resize(context, instance, migration.id) def _get_updated_nw_info_with_pci_mapping(self, nw_info, pci_mapping): # NOTE(adrianc): This method returns a copy of nw_info if modifications # are made else it returns the original nw_info. updated_nw_info = nw_info if nw_info and pci_mapping: updated_nw_info = copy.deepcopy(nw_info) for vif in updated_nw_info: if vif['vnic_type'] in network_model.VNIC_TYPES_SRIOV: try: vif_pci_addr = vif['profile']['pci_slot'] new_addr = pci_mapping[vif_pci_addr].address vif['profile']['pci_slot'] = new_addr LOG.debug("Updating VIF's PCI address for VIF %(id)s. " "Original value %(orig_val)s, " "new value %(new_val)s", {'id': vif['id'], 'orig_val': vif_pci_addr, 'new_val': new_addr}) except (KeyError, AttributeError): with excutils.save_and_reraise_exception(): # NOTE(adrianc): This should never happen. If we # get here it means there is some inconsistency # with either 'nw_info' or 'pci_mapping'. 
LOG.error("Unexpected error when updating network " "information with PCI mapping.") return updated_nw_info def _confirm_resize(self, context, instance, migration=None): """Destroys the source instance.""" self._notify_about_instance_usage(context, instance, "resize.confirm.start") compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.RESIZE_CONFIRM, phase=fields.NotificationPhase.START) # NOTE(tr3buchet): tear down networks on source host self.network_api.setup_networks_on_host(context, instance, migration.source_compute, teardown=True) # TODO(stephenfin): These next three calls should be bundled network_info = self.network_api.get_instance_nw_info(context, instance) # NOTE(adrianc): Populate old PCI device in VIF profile # to allow virt driver to properly unplug it from Hypervisor. pci_mapping = (instance.migration_context. get_pci_mapping_for_migration(True)) network_info = self._get_updated_nw_info_with_pci_mapping( network_info, pci_mapping) self.driver.confirm_migration(context, migration, instance, network_info) # Free up the old_flavor usage from the resource tracker for this host. self.rt.drop_move_claim_at_source(context, instance, migration) # NOTE(mriedem): The old_vm_state could be STOPPED but the user # might have manually powered up the instance to confirm the # resize/migrate, so we need to check the current power state # on the instance and set the vm_state appropriately. We default # to ACTIVE because if the power state is not SHUTDOWN, we # assume _sync_instance_power_state will clean it up. p_state = instance.power_state vm_state = None if p_state == power_state.SHUTDOWN: vm_state = vm_states.STOPPED LOG.debug("Resized/migrated instance is powered off. " "Setting vm_state to '%s'.", vm_state, instance=instance) else: vm_state = vm_states.ACTIVE instance.vm_state = vm_state instance.task_state = None instance.save(expected_task_state=[None, task_states.DELETING, task_states.SOFT_DELETING]) self._notify_about_instance_usage( context, instance, "resize.confirm.end", network_info=network_info) compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.RESIZE_CONFIRM, phase=fields.NotificationPhase.END) def _delete_allocation_after_move(self, context, instance, migration): """Deletes resource allocations held by the migration record against the source compute node resource provider after a confirmed cold / successful live migration. """ try: # NOTE(danms): We're finishing on the source node, so try # to delete the allocation based on the migration uuid self.reportclient.delete_allocation_for_instance( context, migration.uuid, consumer_type='migration') except exception.AllocationDeleteFailed: LOG.error('Deleting allocation in placement for migration ' '%(migration_uuid)s failed. The instance ' '%(instance_uuid)s will be put to ERROR state ' 'but the allocation held by the migration is ' 'leaked.', {'instance_uuid': instance.uuid, 'migration_uuid': migration.uuid}) raise def _delete_stashed_flavor_info(self, instance): """Remove information about the flavor change after a resize.""" instance.old_flavor = None instance.new_flavor = None instance.system_metadata.pop('old_vm_state', None) instance.save() @wrap_exception() @wrap_instance_event(prefix='compute') @errors_out_migration @wrap_instance_fault def confirm_snapshot_based_resize_at_source( self, ctxt, instance, migration): """Confirms a snapshot-based resize on the source host. 
Cleans the guest from the source hypervisor including disks and drops the MoveClaim which will free up "old_flavor" usage from the ResourceTracker. Deletes the allocations held by the migration consumer against the source compute node resource provider. :param ctxt: nova auth request context targeted at the source cell :param instance: Instance object being resized which should have the "old_flavor" attribute set :param migration: Migration object for the resize operation """ @utils.synchronized(instance.uuid) def do_confirm(): LOG.info('Confirming resize on source host.', instance=instance) with self._error_out_instance_on_exception(ctxt, instance): # TODO(mriedem): Could probably make this try/except/finally # a context manager to share with confirm_resize(). try: self._confirm_snapshot_based_resize_at_source( ctxt, instance, migration) except Exception: # Something failed when cleaning up the source host so # log a traceback and leave a hint about hard rebooting # the server to correct its state in the DB. with excutils.save_and_reraise_exception(logger=LOG): LOG.exception( 'Confirm resize failed on source host %s. ' 'Resource allocations in the placement service ' 'will be removed regardless because the instance ' 'is now on the destination host %s. You can try ' 'hard rebooting the instance to correct its ' 'state.', self.host, migration.dest_compute, instance=instance) finally: # Whether an error occurred or not, at this point the # instance is on the dest host so to avoid leaking # allocations in placement, delete them here. # TODO(mriedem): Should we catch and just log # AllocationDeleteFailed? What is the user's recourse if # we got this far but this fails? At this point the # instance is on the target host and the allocations # could just be manually cleaned up by the operator. self._delete_allocation_after_move(ctxt, instance, migration) do_confirm() def _confirm_snapshot_based_resize_at_source( self, ctxt, instance, migration): """Private version of confirm_snapshot_based_resize_at_source This allows the main method to be decorated with error handlers. :param ctxt: nova auth request context targeted at the source cell :param instance: Instance object being resized which should have the "old_flavor" attribute set :param migration: Migration object for the resize operation """ # Cleanup the guest from the hypervisor including local disks. network_info = self.network_api.get_instance_nw_info(ctxt, instance) LOG.debug('Cleaning up guest from source hypervisor including disks.', instance=instance) # FIXME(mriedem): Per bug 1809095, _confirm_resize calls # _get_updated_nw_info_with_pci_mapping here prior to unplugging # VIFs on the source, but in our case we have already unplugged # VIFs during prep_snapshot_based_resize_at_source, so what do we # need to do about those kinds of ports? Do we need to wait to unplug # VIFs until confirm like normal resize? # Note that prep_snapshot_based_resize_at_source already destroyed the # guest which disconnected volumes and unplugged VIFs but did not # destroy disks in case something failed during the resize and the # instance needed to be rebooted or rebuilt on the source host. Now # that we are confirming the resize we want to cleanup the disks left # on the source host. We call cleanup() instead of destroy() to avoid # any InstanceNotFound confusion from the driver since the guest was # already destroyed on this host. block_device_info=None and # destroy_vifs=False means cleanup() will not try to disconnect volumes # or unplug VIFs. 
self.driver.cleanup( ctxt, instance, network_info, block_device_info=None, destroy_disks=True, destroy_vifs=False) # Delete port bindings for the source host. self._confirm_snapshot_based_resize_delete_port_bindings( ctxt, instance, migration) # Delete volume attachments for the source host. self._delete_volume_attachments(ctxt, instance.get_bdms()) # Free up the old_flavor usage from the resource tracker for this host. self.rt.drop_move_claim_at_source(ctxt, instance, migration) def _confirm_snapshot_based_resize_delete_port_bindings( self, ctxt, instance, migration): """Delete port bindings for the source host when confirming snapshot-based resize on the source host." :param ctxt: nova auth RequestContext :param instance: Instance object that was resized/cold migrated :param migration: Migration object for the resize/cold migrate """ LOG.debug('Deleting port bindings for source host.', instance=instance) try: self.network_api.cleanup_instance_network_on_host( ctxt, instance, self.host) except exception.PortBindingDeletionFailed as e: # Do not let this stop us from cleaning up since the guest # is already gone. LOG.error('Failed to delete port bindings from source host. ' 'Error: %s', six.text_type(e), instance=instance) def _delete_volume_attachments(self, ctxt, bdms): """Deletes volume attachment records for the given bdms. This method will log but not re-raise any exceptions if the volume attachment delete fails. :param ctxt: nova auth request context used to make DELETE /attachments/{attachment_id} requests to cinder. :param bdms: objects.BlockDeviceMappingList representing volume attachments to delete based on BlockDeviceMapping.attachment_id. """ for bdm in bdms: if bdm.attachment_id: try: self.volume_api.attachment_delete(ctxt, bdm.attachment_id) except Exception as e: LOG.error('Failed to delete volume attachment with ID %s. ' 'Error: %s', bdm.attachment_id, six.text_type(e), instance_uuid=bdm.instance_uuid) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @errors_out_migration @wrap_instance_fault def revert_snapshot_based_resize_at_dest(self, ctxt, instance, migration): """Reverts a snapshot-based resize at the destination host. Cleans the guest from the destination compute service host hypervisor and related resources (ports, volumes) and frees resource usage from the compute service on that host. :param ctxt: nova auth request context targeted at the target cell :param instance: Instance object whose vm_state is "resized" and task_state is "resize_reverting". :param migration: Migration object whose status is "reverting". """ # A resize revert is essentially a resize back to the old size, so we # need to send a usage event here. compute_utils.notify_usage_exists( self.notifier, ctxt, instance, self.host, current_period=True) @utils.synchronized(instance.uuid) def do_revert(): LOG.info('Reverting resize on destination host.', instance=instance) with self._error_out_instance_on_exception(ctxt, instance): self._revert_snapshot_based_resize_at_dest( ctxt, instance, migration) do_revert() # Broadcast to all schedulers that the instance is no longer on # this host and clear any waiting callback events. This is best effort # so if anything fails just log it. try: self._delete_scheduler_instance_info(ctxt, instance.uuid) self.instance_events.clear_events_for_instance(instance) except Exception as e: LOG.warning('revert_snapshot_based_resize_at_dest failed during ' 'post-processing. 
Error: %s', e, instance=instance) def _revert_snapshot_based_resize_at_dest( self, ctxt, instance, migration): """Private version of revert_snapshot_based_resize_at_dest. This allows the main method to be decorated with error handlers. :param ctxt: nova auth request context targeted at the target cell :param instance: Instance object whose vm_state is "resized" and task_state is "resize_reverting". :param migration: Migration object whose status is "reverting". """ # Cleanup the guest from the hypervisor including local disks. network_info = self.network_api.get_instance_nw_info(ctxt, instance) bdms = instance.get_bdms() block_device_info = self._get_instance_block_device_info( ctxt, instance, bdms=bdms) LOG.debug('Destroying guest from destination hypervisor including ' 'disks.', instance=instance) self.driver.destroy( ctxt, instance, network_info, block_device_info=block_device_info) # Activate source host port bindings. We need to do this before # deleting the (active) dest host port bindings in # setup_networks_on_host otherwise the ports will be unbound and # finish on the source will fail. # migrate_instance_start uses migration.dest_compute for the port # binding host and since we want to activate the source host port # bindings, we need to temporarily mutate the migration object. with utils.temporary_mutation( migration, dest_compute=migration.source_compute): LOG.debug('Activating port bindings for source host %s.', migration.source_compute, instance=instance) # TODO(mriedem): https://review.opendev.org/#/c/594139/ would allow # us to remove this and make setup_networks_on_host do it. # TODO(mriedem): Should we try/except/log any errors but continue? self.network_api.migrate_instance_start( ctxt, instance, migration) # Delete port bindings for the target host. LOG.debug('Deleting port bindings for target host %s.', self.host, instance=instance) try: # Note that deleting the destination host port bindings does # not automatically activate the source host port bindings. self.network_api.cleanup_instance_network_on_host( ctxt, instance, self.host) except exception.PortBindingDeletionFailed as e: # Do not let this stop us from cleaning up since the guest # is already gone. LOG.error('Failed to delete port bindings from target host. ' 'Error: %s', six.text_type(e), instance=instance) # Delete any volume attachments remaining for this target host. LOG.debug('Deleting volume attachments for target host.', instance=instance) self._delete_volume_attachments(ctxt, bdms) # Free up the new_flavor usage from the resource tracker for this host. self.rt.drop_move_claim_at_dest(ctxt, instance, migration) def _revert_instance_flavor_host_node(self, instance, migration): """Revert host, node and flavor fields after a resize-revert.""" self._set_instance_info(instance, instance.old_flavor) instance.host = migration.source_compute instance.node = migration.source_node instance.save(expected_task_state=[task_states.RESIZE_REVERTING]) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @errors_out_migration @wrap_instance_fault def finish_revert_snapshot_based_resize_at_source( self, ctxt, instance, migration): """Reverts a snapshot-based resize at the source host. Spawn the guest and re-connect volumes/VIFs on the source host and revert the instance to use the old_flavor for resource usage reporting. 
Updates allocations in the placement service to move the source node allocations, held by the migration record, to the instance and drop the allocations held by the instance on the destination node. :param ctxt: nova auth request context targeted at the target cell :param instance: Instance object whose vm_state is "resized" and task_state is "resize_reverting". :param migration: Migration object whose status is "reverting". """ @utils.synchronized(instance.uuid) def do_revert(): LOG.info('Reverting resize on source host.', instance=instance) with self._error_out_instance_on_exception(ctxt, instance): self._finish_revert_snapshot_based_resize_at_source( ctxt, instance, migration) try: do_revert() finally: self._delete_stashed_flavor_info(instance) # Broadcast to all schedulers that the instance is on this host. # This is best effort so if anything fails just log it. try: self._update_scheduler_instance_info(ctxt, instance) except Exception as e: LOG.warning('finish_revert_snapshot_based_resize_at_source failed ' 'during post-processing. Error: %s', e, instance=instance) def _finish_revert_snapshot_based_resize_at_source( self, ctxt, instance, migration): """Private version of finish_revert_snapshot_based_resize_at_source. This allows the main method to be decorated with error handlers. :param ctxt: nova auth request context targeted at the source cell :param instance: Instance object whose vm_state is "resized" and task_state is "resize_reverting". :param migration: Migration object whose status is "reverting". """ # Get stashed old_vm_state information to determine if guest should # be powered on after spawn; we default to ACTIVE for backwards # compatibility if old_vm_state is not set old_vm_state = instance.system_metadata.get( 'old_vm_state', vm_states.ACTIVE) # Revert the flavor and host/node fields to their previous values self._revert_instance_flavor_host_node(instance, migration) # Move the allocations against the source compute node resource # provider, held by the migration, to the instance which will drop # the destination compute node resource provider allocations held by # the instance. This puts the allocations against the source node # back to the old_flavor and owned by the instance. try: self._revert_allocation(ctxt, instance, migration) except exception.AllocationMoveFailed: # Log the error but do not re-raise because we want to continue to # process ports and volumes below. LOG.error('Reverting allocation in placement for migration ' '%(migration_uuid)s failed. You may need to manually ' 'remove the allocations for the migration consumer ' 'against the source node resource provider ' '%(source_provider)s and the allocations for the ' 'instance consumer against the destination node ' 'resource provider %(dest_provider)s and then run the ' '"nova-manage placement heal_allocations" command.', {'instance_uuid': instance.uuid, 'migration_uuid': migration.uuid, 'source_provider': migration.source_node, 'dest_provider': migration.dest_node}, instance=instance) bdms = instance.get_bdms() # prep_snapshot_based_resize_at_source created empty volume attachments # that we need to update here to get the connection_info before calling # driver.finish_revert_migration which will connect the volumes to this # host. 
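        # "Updating" here means asking Cinder for fresh connection_info for
        # this host (attachment_update with this host's connector) for every
        # BDM that has an attachment_id.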
LOG.debug('Updating volume attachments for target host %s.', self.host, instance=instance) # TODO(mriedem): We should probably make _update_volume_attachments # (optionally) graceful to errors so we (1) try to process all # attachments and (2) continue to process networking below. self._update_volume_attachments(ctxt, instance, bdms) LOG.debug('Updating port bindings for source host %s.', self.host, instance=instance) # TODO(mriedem): Calculate provider mappings when we support # cross-cell resize/migrate with ports having resource requests. self._finish_revert_resize_network_migrate_finish( ctxt, instance, migration, provider_mappings=None) network_info = self.network_api.get_instance_nw_info(ctxt, instance) # Remember that prep_snapshot_based_resize_at_source destroyed the # guest but left the disks intact so we cannot call spawn() here but # finish_revert_migration should do the job. block_device_info = self._get_instance_block_device_info( ctxt, instance, bdms=bdms) power_on = old_vm_state == vm_states.ACTIVE driver_error = None try: self.driver.finish_revert_migration( ctxt, instance, network_info, migration, block_device_info=block_device_info, power_on=power_on) except Exception as e: driver_error = e # Leave a hint about hard rebooting the guest and reraise so the # instance is put into ERROR state. with excutils.save_and_reraise_exception(logger=LOG): LOG.error('An error occurred during finish_revert_migration. ' 'The instance may need to be hard rebooted. Error: ' '%s', driver_error, instance=instance) else: # Perform final cleanup of the instance in the database. instance.drop_migration_context() # If the original vm_state was STOPPED, set it back to STOPPED. vm_state = vm_states.ACTIVE if power_on else vm_states.STOPPED self._update_instance_after_spawn( ctxt, instance, vm_state=vm_state) instance.save(expected_task_state=[task_states.RESIZE_REVERTING]) finally: # Complete any volume attachments so the volumes are in-use. We # do this regardless of finish_revert_migration failing because # the instance is back on this host now and we do not want to leave # the volumes in a pending state in case the instance is hard # rebooted. LOG.debug('Completing volume attachments for instance on source ' 'host.', instance=instance) with excutils.save_and_reraise_exception( reraise=driver_error is not None, logger=LOG): self._complete_volume_attachments(ctxt, bdms) migration.status = 'reverted' migration.save() @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @errors_out_migration @wrap_instance_fault def revert_resize(self, context, instance, migration, request_spec=None): """Destroys the new instance on the destination machine. Reverts the model changes, and powers on the old instance on the source machine. """ # NOTE(comstud): A revert_resize is essentially a resize back to # the old size, so we need to send a usage event here. 
compute_utils.notify_usage_exists(self.notifier, context, instance, self.host, current_period=True) with self._error_out_instance_on_exception(context, instance): # NOTE(tr3buchet): tear down networks on destination host self.network_api.setup_networks_on_host(context, instance, teardown=True) self.network_api.migrate_instance_start(context, instance, migration) network_info = self.network_api.get_instance_nw_info(context, instance) bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) block_device_info = self._get_instance_block_device_info( context, instance, bdms=bdms) destroy_disks = not self._is_instance_storage_shared( context, instance, host=migration.source_compute) self.driver.destroy(context, instance, network_info, block_device_info, destroy_disks) self._terminate_volume_connections(context, instance, bdms) # Free up the new_flavor usage from the resource tracker for this # host. self.rt.drop_move_claim_at_dest(context, instance, migration) # RPC cast back to the source host to finish the revert there. self.compute_rpcapi.finish_revert_resize(context, instance, migration, migration.source_compute, request_spec) def _finish_revert_resize_network_migrate_finish( self, context, instance, migration, provider_mappings): """Causes port binding to be updated. In some Neutron or port configurations - see NetworkModel.get_bind_time_events() - we expect the vif-plugged event from Neutron immediately and wait for it. The rest of the time, the event is expected further along in the virt driver, so we don't wait here. :param context: The request context. :param instance: The instance undergoing the revert resize. :param migration: The Migration object of the resize being reverted. :param provider_mappings: a dict of list of resource provider uuids keyed by port uuid :raises: eventlet.timeout.Timeout or exception.VirtualInterfacePlugException. """ network_info = instance.get_network_info() events = [] deadline = CONF.vif_plugging_timeout if deadline and network_info: events = network_info.get_bind_time_events(migration) if events: LOG.debug('Will wait for bind-time events: %s', events) error_cb = self._neutron_failed_migration_callback try: with self.virtapi.wait_for_instance_event(instance, events, deadline=deadline, error_callback=error_cb): # NOTE(hanrong): we need to change migration.dest_compute to # source host temporarily. # "network_api.migrate_instance_finish" will setup the network # for the instance on the destination host. For revert resize, # the instance will back to the source host, the setup of the # network for instance should be on the source host. So set # the migration.dest_compute to source host at here. with utils.temporary_mutation( migration, dest_compute=migration.source_compute): self.network_api.migrate_instance_finish( context, instance, migration, provider_mappings) except eventlet.timeout.Timeout: with excutils.save_and_reraise_exception(): LOG.error('Timeout waiting for Neutron events: %s', events, instance=instance) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @errors_out_migration @wrap_instance_fault def finish_revert_resize( self, context, instance, migration, request_spec=None): """Finishes the second half of reverting a resize on the source host. Bring the original source instance state back (active/shutoff) and revert the resized attributes in the database. 
""" try: self._finish_revert_resize( context, instance, migration, request_spec) finally: self._delete_stashed_flavor_info(instance) def _finish_revert_resize( self, context, instance, migration, request_spec=None, ): """Inner version of finish_revert_resize.""" with self._error_out_instance_on_exception(context, instance): bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) self._notify_about_instance_usage( context, instance, "resize.revert.start") compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.RESIZE_REVERT, phase=fields.NotificationPhase.START, bdms=bdms) # Get stashed old_vm_state information to determine if guest should # be powered on after spawn; we default to ACTIVE for backwards # compatibility if old_vm_state is not set old_vm_state = instance.system_metadata.get( 'old_vm_state', vm_states.ACTIVE) # Revert the flavor and host/node fields to their previous values self._revert_instance_flavor_host_node(instance, migration) try: source_allocations = self._revert_allocation( context, instance, migration) except exception.AllocationMoveFailed: LOG.error('Reverting allocation in placement for migration ' '%(migration_uuid)s failed. The instance ' '%(instance_uuid)s will be put into ERROR state but ' 'the allocation held by the migration is leaked.', {'instance_uuid': instance.uuid, 'migration_uuid': migration.uuid}) raise provider_mappings = self._fill_provider_mapping_based_on_allocs( context, source_allocations, request_spec) self.network_api.setup_networks_on_host(context, instance, migration.source_compute) self._finish_revert_resize_network_migrate_finish( context, instance, migration, provider_mappings) network_info = self.network_api.get_instance_nw_info(context, instance) # revert_resize deleted any volume attachments for the instance # and created new ones to be used on this host, but we # have to update those attachments with the host connector so the # BDM.connection_info will get set in the call to # _get_instance_block_device_info below with refresh_conn_info=True # and then the volumes can be re-connected via the driver on this # host. self._update_volume_attachments(context, instance, bdms) block_device_info = self._get_instance_block_device_info( context, instance, refresh_conn_info=True, bdms=bdms) power_on = old_vm_state != vm_states.STOPPED self.driver.finish_revert_migration( context, instance, network_info, migration, block_device_info, power_on) instance.drop_migration_context() instance.launched_at = timeutils.utcnow() instance.save(expected_task_state=task_states.RESIZE_REVERTING) # Complete any volume attachments so the volumes are in-use. 
self._complete_volume_attachments(context, bdms) # if the original vm state was STOPPED, set it back to STOPPED LOG.info("Updating instance to original state: '%s'", old_vm_state, instance=instance) if power_on: instance.vm_state = vm_states.ACTIVE instance.task_state = None instance.save() else: instance.task_state = task_states.POWERING_OFF instance.save() self.stop_instance(context, instance=instance, clean_shutdown=True) self._notify_about_instance_usage( context, instance, "resize.revert.end") compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.RESIZE_REVERT, phase=fields.NotificationPhase.END, bdms=bdms) def _fill_provider_mapping_based_on_allocs( self, context, allocations, request_spec): """Fills and returns the request group - resource provider mapping based on the allocation passed in. :param context: The security context :param allocation: allocation dict keyed by RP UUID. :param request_spec: The RequestSpec object associated with the operation :returns: None if the request_spec is None. Otherwise a mapping between RequestGroup requester_id, currently Neutron port_id, and a list of resource provider UUIDs providing resource for that RequestGroup. """ if request_spec: # NOTE(gibi): We need to re-calculate the resource provider - # port mapping as we have to have the neutron ports allocate # from the source compute after revert. scheduler_utils.fill_provider_mapping_based_on_allocation( context, self.reportclient, request_spec, allocations) provider_mappings = self._get_request_group_mapping( request_spec) else: # NOTE(gibi): The compute RPC is pinned to be older than 5.2 # and therefore request_spec is not sent. We cannot calculate # the provider mappings. If the instance has ports with # resource request then the port update will fail in # _update_port_binding_for_instance() called via # _finish_revert_resize_network_migrate_finish() in # finish_revert_resize. provider_mappings = None return provider_mappings def _revert_allocation(self, context, instance, migration): """Revert an allocation that is held by migration to our instance.""" # Fetch the original allocation that the instance had on the source # node, which are now held by the migration orig_alloc = self.reportclient.get_allocations_for_consumer( context, migration.uuid) if not orig_alloc: LOG.error('Did not find resource allocations for migration ' '%s on source node %s. Unable to revert source node ' 'allocations back to the instance.', migration.uuid, migration.source_node, instance=instance) return False LOG.info('Swapping old allocation on %(rp_uuids)s held by migration ' '%(mig)s for instance', {'rp_uuids': orig_alloc.keys(), 'mig': migration.uuid}, instance=instance) # FIXME(gibi): This method is flawed in that it does not handle # allocations against sharing providers in any special way. This leads # to duplicate allocations against the sharing provider during # migration. # TODO(cdent): Should we be doing anything with return values here? 
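        # move_allocations() repoints the allocations held by the migration
        # consumer onto the instance consumer in placement. An illustrative
        # shape of what get_allocations_for_consumer() hands back (resource
        # provider UUIDs and resource classes here are examples only):
        #
        #   {
        #       '<source_rp_uuid>': {
        #           'resources': {'VCPU': 2, 'MEMORY_MB': 4096, 'DISK_GB': 20}
        #       }
        #   }
        #
        # After the swap the instance owns the source-node allocations again
        # and the destination-node allocations it previously held are dropped.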
self.reportclient.move_allocations(context, migration.uuid, instance.uuid) return orig_alloc def _prep_resize(self, context, image, instance, instance_type, filter_properties, node, migration, request_spec, clean_shutdown=True): if not filter_properties: filter_properties = {} if not instance.host: self._set_instance_obj_error_state(context, instance) msg = _('Instance has no source host') raise exception.MigrationError(reason=msg) same_host = instance.host == self.host # if the flavor IDs match, it's migrate; otherwise resize if same_host and instance_type.id == instance['instance_type_id']: # check driver whether support migrate to same host if not self.driver.capabilities.get( 'supports_migrate_to_same_host', False): # Raise InstanceFaultRollback so that the # _error_out_instance_on_exception context manager in # prep_resize will set the instance.vm_state properly. raise exception.InstanceFaultRollback( inner_exception=exception.UnableToMigrateToSelf( instance_id=instance.uuid, host=self.host)) # NOTE(danms): Stash the new instance_type to avoid having to # look it up in the database later instance.new_flavor = instance_type # NOTE(mriedem): Stash the old vm_state so we can set the # resized/reverted instance back to the same state later. vm_state = instance.vm_state LOG.debug('Stashing vm_state: %s', vm_state, instance=instance) instance.system_metadata['old_vm_state'] = vm_state instance.save() if not isinstance(request_spec, objects.RequestSpec): # Prior to compute RPC API 5.1 conductor would pass a legacy dict # version of the request spec to compute and since Stein compute # could be sending that back to conductor on reschedule, so if we # got a dict convert it to an object. # TODO(mriedem): We can drop this compat code when we only support # compute RPC API >=6.0. request_spec = objects.RequestSpec.from_primitives( context, request_spec, filter_properties) # We don't have to set the new flavor on the request spec because # if we got here it was due to a reschedule from the compute and # the request spec would already have the new flavor in it from the # else block below. provider_mapping = self._get_request_group_mapping(request_spec) if provider_mapping: try: compute_utils.\ update_pci_request_spec_with_allocated_interface_name( context, self.reportclient, instance, provider_mapping) except (exception.AmbiguousResourceProviderForPCIRequest, exception.UnexpectedResourceProviderNameForPCIRequest ) as e: raise exception.BuildAbortException( reason=six.text_type(e), instance_uuid=instance.uuid) limits = filter_properties.get('limits', {}) allocs = self.reportclient.get_allocations_for_consumer( context, instance.uuid) with self.rt.resize_claim(context, instance, instance_type, node, migration, allocs, image_meta=image, limits=limits) as claim: LOG.info('Migrating', instance=instance) # RPC cast to the source host to start the actual resize/migration. self.compute_rpcapi.resize_instance( context, instance, claim.migration, image, instance_type, request_spec, clean_shutdown) def _send_prep_resize_notifications( self, context, instance, phase, flavor): """Send "resize.prep.*" notifications. :param context: nova auth request context :param instance: The instance being resized :param phase: The phase of the action (NotificationPhase enum) :param flavor: The (new) flavor for the resize (same as existing instance.flavor for a cold migration) """ # Only send notify_usage_exists if it's the "start" phase. 
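        # The notification phases used in this module come from the versioned
        # notification fields, i.e. fields.NotificationPhase.START / END /
        # ERROR. A typical action therefore emits (sketch only):
        #
        #   <action>.start  - before the work begins (plus, for resize.prep,
        #                     the usage-exists audit record sent below)
        #   <action>.end    - after the work succeeded
        #   <action>.error  - from the error handlers when the work failed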
if phase == fields.NotificationPhase.START: compute_utils.notify_usage_exists( self.notifier, context, instance, self.host, current_period=True) # Send extra usage info about the flavor if it's the "end" phase for # the legacy unversioned notification. extra_usage_info = None if phase == fields.NotificationPhase.END: extra_usage_info = dict( new_instance_type=flavor.name, new_instance_type_id=flavor.id) self._notify_about_instance_usage( context, instance, "resize.prep.%s" % phase, extra_usage_info=extra_usage_info) # Send the versioned notification. compute_utils.notify_about_resize_prep_instance( context, instance, self.host, phase, flavor) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault def prep_resize(self, context, image, instance, instance_type, request_spec, filter_properties, node, clean_shutdown, migration, host_list): """Initiates the process of moving a running instance to another host. Possibly changes the VCPU, RAM and disk size in the process. This is initiated from conductor and runs on the destination host. The main purpose of this method is performing some checks on the destination host and making a claim for resources. If the claim fails then a reschedule to another host may be attempted which involves calling back to conductor to start the process over again. """ if node is None: node = self._get_nodename(instance, refresh=True) # Pass instance_state=instance.vm_state because we can resize # a STOPPED server and we don't want to set it back to ACTIVE # in case _prep_resize fails. instance_state = instance.vm_state with self._error_out_instance_on_exception( context, instance, instance_state=instance_state),\ errors_out_migration_ctxt(migration): self._send_prep_resize_notifications( context, instance, fields.NotificationPhase.START, instance_type) try: scheduler_hints = self._get_scheduler_hints(filter_properties, request_spec) # Error out if this host cannot accept the new instance due # to anti-affinity. At this point the migration is already # in-progress, so this is the definitive moment to abort due to # the policy violation. Also, exploding here is covered by the # cleanup methods in except block. try: self._validate_instance_group_policy(context, instance, scheduler_hints) except exception.RescheduledException as e: raise exception.InstanceFaultRollback(inner_exception=e) self._prep_resize(context, image, instance, instance_type, filter_properties, node, migration, request_spec, clean_shutdown) except exception.BuildAbortException: # NOTE(gibi): We failed # update_pci_request_spec_with_allocated_interface_name so # there is no reason to re-schedule. Just revert the allocation # and fail the migration. with excutils.save_and_reraise_exception(): self._revert_allocation(context, instance, migration) except Exception: # Since we hit a failure, we're either rescheduling or dead # and either way we need to cleanup any allocations created # by the scheduler for the destination node. 
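        # The reschedule below leans on the retry bookkeeping that the
        # scheduler/conductor stores in filter_properties. Only the keys
        # actually read or written in _reschedule_resize_or_reraise are shown
        # here and the values are illustrative:
        #
        #   filter_properties['retry'] = {
        #       'num_attempts': 1,
        #       'exc': ['<stringified exception from the failed attempt>'],
        #   }
        #
        # If there is no 'retry' entry at all (for example when retries are
        # disabled) the original error is re-raised instead of rescheduling.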
self._revert_allocation(context, instance, migration) # try to re-schedule the resize elsewhere: exc_info = sys.exc_info() self._reschedule_resize_or_reraise(context, instance, exc_info, instance_type, request_spec, filter_properties, host_list) finally: self._send_prep_resize_notifications( context, instance, fields.NotificationPhase.END, instance_type) def _reschedule_resize_or_reraise(self, context, instance, exc_info, instance_type, request_spec, filter_properties, host_list): """Try to re-schedule the resize or re-raise the original error to error out the instance. """ if not filter_properties: filter_properties = {} rescheduled = False instance_uuid = instance.uuid try: retry = filter_properties.get('retry') if retry: LOG.debug('Rescheduling, attempt %d', retry['num_attempts'], instance_uuid=instance_uuid) # reset the task state task_state = task_states.RESIZE_PREP self._instance_update(context, instance, task_state=task_state) if exc_info: # stringify to avoid circular ref problem in json # serialization retry['exc'] = traceback.format_exception_only( exc_info[0], exc_info[1]) scheduler_hint = {'filter_properties': filter_properties} self.compute_task_api.resize_instance( context, instance, scheduler_hint, instance_type, request_spec=request_spec, host_list=host_list) rescheduled = True else: # no retry information, do not reschedule. LOG.debug('Retry info not present, will not reschedule', instance_uuid=instance_uuid) rescheduled = False except Exception as error: rescheduled = False LOG.exception("Error trying to reschedule", instance_uuid=instance_uuid) compute_utils.add_instance_fault_from_exc(context, instance, error, exc_info=sys.exc_info()) self._notify_about_instance_usage(context, instance, 'resize.error', fault=error) compute_utils.notify_about_instance_action( context, instance, self.host, action=fields.NotificationAction.RESIZE, phase=fields.NotificationPhase.ERROR, exception=error, tb=','.join(traceback.format_exception(*exc_info))) if rescheduled: self._log_original_error(exc_info, instance_uuid) compute_utils.add_instance_fault_from_exc(context, instance, exc_info[1], exc_info=exc_info) self._notify_about_instance_usage(context, instance, 'resize.error', fault=exc_info[1]) compute_utils.notify_about_instance_action( context, instance, self.host, action=fields.NotificationAction.RESIZE, phase=fields.NotificationPhase.ERROR, exception=exc_info[1], tb=','.join(traceback.format_exception(*exc_info))) else: # not re-scheduling six.reraise(*exc_info) @messaging.expected_exceptions(exception.MigrationPreCheckError) @wrap_exception() @wrap_instance_event(prefix='compute') @wrap_instance_fault def prep_snapshot_based_resize_at_dest( self, ctxt, instance, flavor, nodename, migration, limits, request_spec): """Performs pre-cross-cell resize resource claim on the dest host. This runs on the destination host in a cross-cell resize operation before the resize is actually started. Performs a resize_claim for resources that are not claimed in placement like PCI devices and NUMA topology. Note that this is different from same-cell prep_resize in that this: * Does not RPC cast to the source compute, that is orchestrated from conductor. * This does not reschedule on failure, conductor handles that since conductor is synchronously RPC calling this method. As such, the reverts_task_state decorator is not used on this method. 
:param ctxt: user auth request context :param instance: the instance being resized :param flavor: the flavor being resized to (unchanged for cold migrate) :param nodename: Name of the target compute node :param migration: nova.objects.Migration object for the operation :param limits: nova.objects.SchedulerLimits object of resource limits :param request_spec: nova.objects.RequestSpec object for the operation :returns: nova.objects.MigrationContext; the migration context created on the destination host during the resize_claim. :raises: nova.exception.MigrationPreCheckError if the pre-check validation fails for the given host selection """ LOG.debug('Checking if we can cross-cell migrate instance to this ' 'host (%s).', self.host, instance=instance) self._send_prep_resize_notifications( ctxt, instance, fields.NotificationPhase.START, flavor) # TODO(mriedem): update_pci_request_spec_with_allocated_interface_name # should be called here if the request spec has request group mappings, # e.g. for things like QoS ports with resource requests. Do it outside # the try/except so if it raises BuildAbortException we do not attempt # to reschedule. try: # Get the allocations within the try/except block in case we get # an error so MigrationPreCheckError is raised up. allocations = self.reportclient.get_allocs_for_consumer( ctxt, instance.uuid)['allocations'] # Claim resources on this target host using the new flavor which # will create the MigrationContext object. Note that in the future # if we want to do other validation here we should do it within # the MoveClaim context so we can drop the claim if anything fails. self.rt.resize_claim( ctxt, instance, flavor, nodename, migration, allocations, image_meta=instance.image_meta, limits=limits) except Exception as ex: err = six.text_type(ex) LOG.warning( 'Cross-cell resize pre-checks failed for this host (%s). ' 'Cleaning up. Failure: %s', self.host, err, instance=instance, exc_info=True) raise exception.MigrationPreCheckError( reason=(_("Pre-checks failed on host '%(host)s'. " "Error: %(error)s") % {'host': self.host, 'error': err})) finally: self._send_prep_resize_notifications( ctxt, instance, fields.NotificationPhase.END, flavor) # ResourceTracker.resize_claim() sets instance.migration_context. return instance.migration_context @messaging.expected_exceptions(exception.InstancePowerOffFailure) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @errors_out_migration @wrap_instance_fault def prep_snapshot_based_resize_at_source( self, ctxt, instance, migration, snapshot_id=None): """Prepares the instance at the source host for cross-cell resize Performs actions like powering off the guest, upload snapshot data if the instance is not volume-backed, disconnecting volumes, unplugging VIFs and activating the destination host port bindings. :param ctxt: user auth request context targeted at source cell :param instance: nova.objects.Instance; the instance being resized. The expected instance.task_state is "resize_migrating" when calling this method, and the expected task_state upon successful completion is "resize_migrated". :param migration: nova.objects.Migration object for the operation. The expected migration.status is "pre-migrating" when calling this method and the expected status upon successful completion is "post-migrating". 
:param snapshot_id: ID of the image snapshot to upload if not a volume-backed instance :raises: nova.exception.InstancePowerOffFailure if stopping the instance fails """ LOG.info('Preparing for snapshot based resize on source host %s.', self.host, instance=instance) # Note that if anything fails here, the migration-based allocations # created in conductor should be reverted by conductor as well, # see MigrationTask.rollback. self._prep_snapshot_based_resize_at_source( ctxt, instance, migration, snapshot_id=snapshot_id) @delete_image_on_error def _snapshot_for_resize(self, ctxt, image_id, instance): """Uploads snapshot for the instance during a snapshot-based resize If the snapshot operation fails the image will be deleted. :param ctxt: the nova auth request context for the resize operation :param image_id: the snapshot image ID :param instance: the instance to snapshot/resize """ LOG.debug('Uploading snapshot data for image %s', image_id, instance=instance) # Note that we do not track the snapshot phase task states # during resize since we do not want to reflect those into the # actual instance.task_state. update_task_state = lambda *args, **kwargs: None with timeutils.StopWatch() as timer: self.driver.snapshot(ctxt, instance, image_id, update_task_state) LOG.debug('Took %0.2f seconds to snapshot the instance on ' 'the hypervisor.', timer.elapsed(), instance=instance) def _prep_snapshot_based_resize_at_source( self, ctxt, instance, migration, snapshot_id=None): """Private method for prep_snapshot_based_resize_at_source so calling code can handle errors and perform rollbacks as necessary. """ # Fetch and update the instance.info_cache. network_info = self.network_api.get_instance_nw_info(ctxt, instance) # Get the BDMs attached to this instance on this source host. bdms = instance.get_bdms() # Send the resize.start notification. self._send_resize_instance_notifications( ctxt, instance, bdms, network_info, fields.NotificationPhase.START) # Update the migration status from "pre-migrating" to "migrating". migration.status = 'migrating' migration.save() # Since the instance is going to be left on the source host during the # resize, we need to power it off so we do not have the instance # potentially running in two places. LOG.debug('Stopping instance', instance=instance) try: self._power_off_instance(ctxt, instance) except Exception as e: LOG.exception('Failed to power off instance.', instance=instance) raise exception.InstancePowerOffFailure(reason=six.text_type(e)) instance.power_state = self._get_power_state(ctxt, instance) # If a snapshot image ID was provided, we need to snapshot the guest # disk image and upload it to the image service. if snapshot_id: self._snapshot_for_resize(ctxt, snapshot_id, instance) block_device_info = self._get_instance_block_device_info( ctxt, instance, bdms=bdms) # If something fails at this point the instance must go to ERROR # status for operator intervention or to reboot/rebuild the instance. with self._error_out_instance_on_exception( ctxt, instance, instance_state=vm_states.ERROR): # Destroy the guest on the source host which will disconnect # volumes and unplug VIFs. Note that we DO NOT destroy disks since # we want to leave those on the source host in case of a later # failure and disks are needed to recover the guest or in case the # resize is reverted. 
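        # driver.destroy() takes a destroy_disks flag (see the call below and
        # the one in revert_resize above): with destroy_disks=False the guest
        # definition and its volume/VIF plumbing are torn down but the local
        # disk files stay on the host, which is what lets a later revert
        # redefine the guest from the data left behind. A minimal sketch of
        # the two variants used in this file:
        #
        #   # cross-cell prep at the source: keep the disks for a revert
        #   self.driver.destroy(ctxt, instance, network_info,
        #                       block_device_info=block_device_info,
        #                       destroy_disks=False)
        #   # revert on the dest: only keep disks if storage is shared
        #   destroy_disks = not self._is_instance_storage_shared(
        #       context, instance, host=migration.source_compute)
        #   self.driver.destroy(context, instance, network_info,
        #                       block_device_info, destroy_disks)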
LOG.debug('Destroying guest on source host but retaining disks.', instance=instance) self.driver.destroy( ctxt, instance, network_info, block_device_info=block_device_info, destroy_disks=False) # At this point the volumes are disconnected from this source host. # Delete the old volume attachment records and create new empty # ones which will be used later if the resize is reverted. LOG.debug('Deleting volume attachments for the source host.', instance=instance) self._terminate_volume_connections(ctxt, instance, bdms) # At this point the VIFs are unplugged from this source host. # Activate the dest host port bindings created by conductor. self.network_api.migrate_instance_start(ctxt, instance, migration) # Update the migration status from "migrating" to "post-migrating". migration.status = 'post-migrating' migration.save() # At this point, the traditional resize_instance would update the # instance host/node values to point at the dest host/node because # that is where the disk is transferred during resize_instance, but # with cross-cell resize the instance is not yet at the dest host # so we do not make that update here. instance.task_state = task_states.RESIZE_MIGRATED instance.save(expected_task_state=task_states.RESIZE_MIGRATING) self._send_resize_instance_notifications( ctxt, instance, bdms, network_info, fields.NotificationPhase.END) self.instance_events.clear_events_for_instance(instance) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault def resize_instance(self, context, instance, image, migration, instance_type, clean_shutdown, request_spec=None): """Starts the migration of a running instance to another host. This is initiated from the destination host's ``prep_resize`` routine and runs on the source host. """ try: self._resize_instance(context, instance, image, migration, instance_type, clean_shutdown, request_spec) except Exception: with excutils.save_and_reraise_exception(): self._revert_allocation(context, instance, migration) def _resize_instance(self, context, instance, image, migration, instance_type, clean_shutdown, request_spec): # Pass instance_state=instance.vm_state because we can resize # a STOPPED server and we don't want to set it back to ACTIVE # in case migrate_disk_and_power_off raises InstanceFaultRollback. 
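        # For orientation, the same-cell resize/migrate hops between hosts
        # roughly like this (all names are methods in this manager; the revert
        # path was sketched earlier):
        #
        #   conductor -> dest:   prep_resize()       claim resources, then
        #   dest cast -> source: resize_instance()   (this method) move the
        #                                            disk, power off, then
        #   source cast -> dest: finish_resize()     plug networking/volumes
        #                                            and start the guest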
instance_state = instance.vm_state with self._error_out_instance_on_exception( context, instance, instance_state=instance_state), \ errors_out_migration_ctxt(migration): network_info = self.network_api.get_instance_nw_info(context, instance) migration.status = 'migrating' migration.save() instance.task_state = task_states.RESIZE_MIGRATING instance.save(expected_task_state=task_states.RESIZE_PREP) bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) self._send_resize_instance_notifications( context, instance, bdms, network_info, fields.NotificationPhase.START) block_device_info = self._get_instance_block_device_info( context, instance, bdms=bdms) timeout, retry_interval = self._get_power_off_values(context, instance, clean_shutdown) disk_info = self.driver.migrate_disk_and_power_off( context, instance, migration.dest_host, instance_type, network_info, block_device_info, timeout, retry_interval) self._terminate_volume_connections(context, instance, bdms) self.network_api.migrate_instance_start(context, instance, migration) migration.status = 'post-migrating' migration.save() instance.host = migration.dest_compute instance.node = migration.dest_node instance.task_state = task_states.RESIZE_MIGRATED instance.save(expected_task_state=task_states.RESIZE_MIGRATING) # RPC cast to the destination host to finish the resize/migration. self.compute_rpcapi.finish_resize(context, instance, migration, image, disk_info, migration.dest_compute, request_spec) self._send_resize_instance_notifications( context, instance, bdms, network_info, fields.NotificationPhase.END) self.instance_events.clear_events_for_instance(instance) def _send_resize_instance_notifications( self, context, instance, bdms, network_info, phase): """Send "resize.(start|end)" notifications. :param context: nova auth request context :param instance: The instance being resized :param bdms: BlockDeviceMappingList for the BDMs associated with the instance :param network_info: NetworkInfo for the instance info cache of ports :param phase: The phase of the action (NotificationPhase enum, either ``start`` or ``end``) """ action = fields.NotificationAction.RESIZE # Send the legacy unversioned notification. self._notify_about_instance_usage( context, instance, "%s.%s" % (action, phase), network_info=network_info) # Send the versioned notification. compute_utils.notify_about_instance_action( context, instance, self.host, action=action, phase=phase, bdms=bdms) def _terminate_volume_connections(self, context, instance, bdms): connector = None for bdm in bdms: if bdm.is_volume: if bdm.attachment_id: # NOTE(jdg): So here's the thing, the idea behind the new # attach API's was to have a new code fork/path that we # followed, we're not going to do that so we have to do # some extra work in here to make it *behave* just like the # old code. 
Cinder doesn't allow disconnect/reconnect (you # just delete the attachment and get a new one) # attachments in the new attach code so we have to do # a delete and create without a connector (reserve), # in other words, beware attachment_id = self.volume_api.attachment_create( context, bdm.volume_id, instance.uuid)['id'] self.volume_api.attachment_delete(context, bdm.attachment_id) bdm.attachment_id = attachment_id bdm.save() else: if connector is None: connector = self.driver.get_volume_connector(instance) self.volume_api.terminate_connection(context, bdm.volume_id, connector) @staticmethod def _set_instance_info(instance, instance_type): instance.instance_type_id = instance_type.id instance.memory_mb = instance_type.memory_mb instance.vcpus = instance_type.vcpus instance.root_gb = instance_type.root_gb instance.ephemeral_gb = instance_type.ephemeral_gb instance.flavor = instance_type def _update_volume_attachments(self, context, instance, bdms): """Updates volume attachments using the virt driver host connector. :param context: nova.context.RequestContext - user request context :param instance: nova.objects.Instance :param bdms: nova.objects.BlockDeviceMappingList - the list of block device mappings for the given instance """ if bdms: connector = None for bdm in bdms: if bdm.is_volume and bdm.attachment_id: if connector is None: connector = self.driver.get_volume_connector(instance) self.volume_api.attachment_update( context, bdm.attachment_id, connector, bdm.device_name) def _complete_volume_attachments(self, context, bdms): """Completes volume attachments for the instance :param context: nova.context.RequestContext - user request context :param bdms: nova.objects.BlockDeviceMappingList - the list of block device mappings for the given instance """ if bdms: for bdm in bdms: if bdm.is_volume and bdm.attachment_id: self.volume_api.attachment_complete( context, bdm.attachment_id) def _finish_resize(self, context, instance, migration, disk_info, image_meta, bdms, request_spec): resize_instance = False # indicates disks have been resized old_instance_type_id = migration['old_instance_type_id'] new_instance_type_id = migration['new_instance_type_id'] old_flavor = instance.flavor # the current flavor is now old # NOTE(mriedem): Get the old_vm_state so we know if we should # power on the instance. If old_vm_state is not set we need to default # to ACTIVE for backwards compatibility old_vm_state = instance.system_metadata.get('old_vm_state', vm_states.ACTIVE) instance.old_flavor = old_flavor if old_instance_type_id != new_instance_type_id: new_flavor = instance.new_flavor # this is set in _prep_resize # Set the flavor-related fields on the instance object including # making instance.flavor = new_flavor. self._set_instance_info(instance, new_flavor) for key in ('root_gb', 'swap', 'ephemeral_gb'): if old_flavor[key] != new_flavor[key]: resize_instance = True break instance.apply_migration_context() # NOTE(tr3buchet): setup networks on destination host self.network_api.setup_networks_on_host(context, instance, migration.dest_compute) provider_mappings = self._get_request_group_mapping(request_spec) # For neutron, migrate_instance_finish updates port bindings for this # host including any PCI devices claimed for SR-IOV ports. 
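        # Neutron can hold multiple port bindings per port (one per host, at
        # most one active), which is what makes this handoff safe: a binding
        # for the destination host is created ahead of time and
        # migrate_instance_start / migrate_instance_finish merely activate
        # the binding for whichever host should own the port next. That
        # multiple-bindings behaviour is an assumption about the Neutron
        # extension in use (the "binding-extended" API); without it the
        # port's binding host is simply updated in place.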
self.network_api.migrate_instance_finish( context, instance, migration, provider_mappings) network_info = self.network_api.get_instance_nw_info(context, instance) instance.task_state = task_states.RESIZE_FINISH instance.save(expected_task_state=task_states.RESIZE_MIGRATED) self._send_finish_resize_notifications( context, instance, bdms, network_info, fields.NotificationPhase.START) # We need to update any volume attachments using the destination # host connector so that we can update the BDM.connection_info # before calling driver.finish_migration otherwise the driver # won't know how to connect the volumes to this host. # Note that _get_instance_block_device_info with # refresh_conn_info=True will update the BDM.connection_info value # in the database so we must do this before calling that method. self._update_volume_attachments(context, instance, bdms) block_device_info = self._get_instance_block_device_info( context, instance, refresh_conn_info=True, bdms=bdms) # NOTE(mriedem): If the original vm_state was STOPPED, we don't # automatically power on the instance after it's migrated power_on = old_vm_state != vm_states.STOPPED # NOTE(sbauza): During a migration, the original allocation is against # the migration UUID while the target allocation (for the destination # node) is related to the instance UUID, so here we need to pass the # new ones. allocations = self.reportclient.get_allocs_for_consumer( context, instance.uuid)['allocations'] try: self.driver.finish_migration(context, migration, instance, disk_info, network_info, image_meta, resize_instance, allocations, block_device_info, power_on) except Exception: # Note that we do not rollback port bindings to the source host # because resize_instance (on the source host) updated the # instance.host to point to *this* host (the destination host) # so the port bindings pointing at this host are correct even # though we failed to create the guest. with excutils.save_and_reraise_exception(): # If we failed to create the guest on this host, reset the # instance flavor-related fields to the old flavor. An # error handler like reverts_task_state will save the changes. if old_instance_type_id != new_instance_type_id: self._set_instance_info(instance, old_flavor) # Now complete any volume attachments that were previously updated. self._complete_volume_attachments(context, bdms) migration.status = 'finished' migration.save() instance.vm_state = vm_states.RESIZED instance.task_state = None instance.launched_at = timeutils.utcnow() instance.save(expected_task_state=task_states.RESIZE_FINISH) return network_info @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @errors_out_migration @wrap_instance_fault def finish_resize(self, context, disk_info, image, instance, migration, request_spec=None): """Completes the migration process. Sets up the newly transferred disk and turns on the instance at its new host machine. """ try: self._finish_resize_helper(context, disk_info, image, instance, migration, request_spec) except Exception: with excutils.save_and_reraise_exception(): # At this point, resize_instance (which runs on the source) has # already updated the instance host/node values to point to # this (the dest) compute, so we need to leave the allocations # against the dest node resource provider intact and drop the # allocations against the source node resource provider. If the # user tries to recover the server by hard rebooting it, it # will happen on this host so that's where the allocations # should go. 
Note that this is the same method called from # confirm_resize to cleanup the source node allocations held # by the migration record. LOG.info('Deleting allocations for old flavor on source node ' '%s after finish_resize failure. You may be able to ' 'recover the instance by hard rebooting it.', migration.source_compute, instance=instance) self._delete_allocation_after_move( context, instance, migration) def _finish_resize_helper(self, context, disk_info, image, instance, migration, request_spec): """Completes the migration process. The caller must revert the instance's allocations if the migration process failed. """ bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) with self._error_out_instance_on_exception(context, instance): image_meta = objects.ImageMeta.from_dict(image) network_info = self._finish_resize(context, instance, migration, disk_info, image_meta, bdms, request_spec) # TODO(melwitt): We should clean up instance console tokens here. The # instance is on a new host and will need to establish a new console # connection. self._update_scheduler_instance_info(context, instance) self._send_finish_resize_notifications( context, instance, bdms, network_info, fields.NotificationPhase.END) def _send_finish_resize_notifications( self, context, instance, bdms, network_info, phase): """Send notifications for the finish_resize flow. :param context: nova auth request context :param instance: The instance being resized :param bdms: BlockDeviceMappingList for the BDMs associated with the instance :param network_info: NetworkInfo for the instance info cache of ports :param phase: The phase of the action (NotificationPhase enum, either ``start`` or ``end``) """ # Send the legacy unversioned notification. self._notify_about_instance_usage( context, instance, "finish_resize.%s" % phase, network_info=network_info) # Send the versioned notification. compute_utils.notify_about_instance_action( context, instance, self.host, action=fields.NotificationAction.RESIZE_FINISH, phase=phase, bdms=bdms) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @errors_out_migration @wrap_instance_fault def finish_snapshot_based_resize_at_dest( self, ctxt, instance, migration, snapshot_id, request_spec): """Finishes the snapshot-based resize at the destination compute. Sets up block devices and networking on the destination compute and spawns the guest. :param ctxt: nova auth request context targeted at the target cell DB :param instance: The Instance object being resized with the ``migration_context`` field set. Upon successful completion of this method the vm_state should be "resized", the task_state should be None, and migration context, host/node and flavor-related fields should be set on the instance. :param migration: The Migration object for this resize operation. Upon successful completion of this method the migration status should be "finished". :param snapshot_id: ID of the image snapshot created for a non-volume-backed instance, else None. :param request_spec: nova.objects.RequestSpec object for the operation """ LOG.info('Finishing snapshot based resize on destination host %s.', self.host, instance=instance) with self._error_out_instance_on_exception(ctxt, instance): # Note that if anything fails here, the migration-based allocations # created in conductor should be reverted by conductor as well, # see MigrationTask.rollback. 
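        # Unlike the same-cell flow above, the cross-cell (snapshot based)
        # resize is orchestrated from conductor: conductor synchronously RPC
        # calls prep_snapshot_based_resize_at_dest and
        # prep_snapshot_based_resize_at_source before invoking this method,
        # so there is no compute-driven reschedule at this point. If any of
        # those earlier steps blow up, conductor's MigrationTask.rollback is
        # what reverts the migration-based allocations, which is why the
        # comment above defers that cleanup to conductor.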
self._finish_snapshot_based_resize_at_dest( ctxt, instance, migration, snapshot_id) def _finish_snapshot_based_resize_at_dest( self, ctxt, instance, migration, snapshot_id): """Private variant of finish_snapshot_based_resize_at_dest so the caller can handle reverting resource allocations on failure and perform other generic error handling. """ # Figure out the image metadata to use when spawning the guest. if snapshot_id: image_meta = objects.ImageMeta.from_image_ref( ctxt, self.image_api, snapshot_id) else: # Just use what is already on the volume-backed instance. image_meta = instance.image_meta resize = migration.migration_type == 'resize' instance.old_flavor = instance.flavor if resize: flavor = instance.new_flavor # If we are resizing to a new flavor we need to set the # flavor-related fields on the instance. # NOTE(mriedem): This is likely where storing old/new_flavor on # the MigrationContext would make this cleaner. self._set_instance_info(instance, flavor) instance.apply_migration_context() instance.task_state = task_states.RESIZE_FINISH instance.save(expected_task_state=task_states.RESIZE_MIGRATED) # This seems a bit late to be sending the start notification but # it is what traditional resize has always done as well and it does # contain the changes to the instance with the new_flavor and # task_state. bdms = instance.get_bdms() network_info = instance.get_network_info() self._send_finish_resize_notifications( ctxt, instance, bdms, network_info, fields.NotificationPhase.START) # Setup volumes and networking and spawn the guest in the hypervisor. self._finish_snapshot_based_resize_at_dest_spawn( ctxt, instance, migration, image_meta, bdms) # If we spawned from a temporary snapshot image we can delete that now, # similar to how unshelve works. if snapshot_id: compute_utils.delete_image( ctxt, instance, self.image_api, snapshot_id) migration.status = 'finished' migration.save() self._update_instance_after_spawn( ctxt, instance, vm_state=vm_states.RESIZED) # Setting the host/node values will make the ResourceTracker continue # to track usage for this instance on this host. instance.host = migration.dest_compute instance.node = migration.dest_node instance.save(expected_task_state=task_states.RESIZE_FINISH) # Broadcast to all schedulers that the instance is on this host. self._update_scheduler_instance_info(ctxt, instance) self._send_finish_resize_notifications( ctxt, instance, bdms, network_info, fields.NotificationPhase.END) def _finish_snapshot_based_resize_at_dest_spawn( self, ctxt, instance, migration, image_meta, bdms): """Sets up volumes and networking and spawns the guest on the dest host If the instance was stopped when the resize was initiated the guest will be created but remain in a shutdown power state. If the spawn fails, port bindings are rolled back to the source host and volume connections are terminated for this dest host. :param ctxt: nova auth request context :param instance: Instance object being migrated :param migration: Migration object for the operation :param image_meta: ImageMeta object used during driver.spawn :param bdms: BlockDeviceMappingList of BDMs for the instance """ # Update the volume attachments using this host's connector. # That will update the BlockDeviceMapping.connection_info which # will be used to connect the volumes on this host during spawn(). 
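        # The "connector" mentioned above comes from
        # self.driver.get_volume_connector(instance) and describes how *this*
        # host can reach the volume backend. The exact keys depend on the
        # virt driver and transport; something like the following is typical
        # (illustrative values, not taken from this code):
        #
        #   {'ip': '192.0.2.10', 'host': 'compute-1',
        #    'initiator': 'iqn.1994-05.com.redhat:...', 'multipath': False}
        #
        # Cinder stores it on the attachment (attachment_update) and hands
        # back the connection_info that ends up in the BDM record.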
block_device_info = self._prep_block_device(ctxt, instance, bdms) allocations = self.reportclient.get_allocations_for_consumer( ctxt, instance.uuid) # We do not call self.network_api.setup_networks_on_host here because # for neutron that sets up the port migration profile which is only # used during live migration with DVR. Yes it is gross knowing what # that method does internally. We could change this when bug 1814837 # is fixed if setup_networks_on_host is made smarter by passing the # migration record and the method checks the migration_type. # Activate the port bindings for this host. # FIXME(mriedem): We're going to have the same issue as bug 1813789 # here because this will update the port bindings and send the # network-vif-plugged event and that means when driver.spawn waits for # it we might have already gotten the event and neutron won't send # another one so we could timeout. # TODO(mriedem): Calculate provider mappings when we support cross-cell # resize/migrate with ports having resource requests. self.network_api.migrate_instance_finish( ctxt, instance, migration, provider_mappings=None) network_info = self.network_api.get_instance_nw_info(ctxt, instance) # If the original vm_state was STOPPED, we do not automatically # power on the instance after it is migrated. power_on = instance.system_metadata['old_vm_state'] == vm_states.ACTIVE try: # NOTE(mriedem): If this instance uses a config drive, it will get # rebuilt here which means any personality files will be lost, # similar to unshelve. If the instance is not using a config drive # and getting metadata from the metadata API service, personality # files would be lost regardless of the move operation. self.driver.spawn( ctxt, instance, image_meta, injected_files=[], admin_password=None, allocations=allocations, network_info=network_info, block_device_info=block_device_info, power_on=power_on) except Exception: with excutils.save_and_reraise_exception(logger=LOG): # Rollback port bindings to the source host. try: # This is gross but migrate_instance_start looks at the # migration.dest_compute to determine where to activate the # port bindings and we want the source compute port # bindings to be re-activated. Remember at this point the # instance.host is still pointing at the source compute. # TODO(mriedem): Maybe we should be calling # setup_instance_network_on_host here to deal with pci # devices? with utils.temporary_mutation( migration, dest_compute=migration.source_compute): self.network_api.migrate_instance_start( ctxt, instance, migration) except Exception: LOG.exception( 'Failed to activate port bindings on the source ' 'host: %s', migration.source_compute, instance=instance) # Rollback volume connections on this host. for bdm in bdms: if bdm.is_volume: try: self._remove_volume_connection( ctxt, bdm, instance, delete_attachment=True) except Exception: LOG.exception('Failed to remove volume connection ' 'on this host %s for volume %s.', self.host, bdm.volume_id, instance=instance) @wrap_exception() @wrap_instance_fault def add_fixed_ip_to_instance(self, context, network_id, instance): """Calls network_api to add new fixed_ip to instance then injects the new network info and resets instance networking. """ self._notify_about_instance_usage( context, instance, "create_ip.start") network_info = self.network_api.add_fixed_ip_to_instance(context, instance, network_id) self._inject_network_info(context, instance, network_info) self.reset_network(context, instance) # NOTE(russellb) We just want to bump updated_at. 
See bug 1143466. instance.updated_at = timeutils.utcnow() instance.save() self._notify_about_instance_usage( context, instance, "create_ip.end", network_info=network_info) @wrap_exception() @wrap_instance_fault def remove_fixed_ip_from_instance(self, context, address, instance): """Calls network_api to remove existing fixed_ip from instance by injecting the altered network info and resetting instance networking. """ self._notify_about_instance_usage( context, instance, "delete_ip.start") network_info = self.network_api.remove_fixed_ip_from_instance(context, instance, address) self._inject_network_info(context, instance, network_info) self.reset_network(context, instance) # NOTE(russellb) We just want to bump updated_at. See bug 1143466. instance.updated_at = timeutils.utcnow() instance.save() self._notify_about_instance_usage( context, instance, "delete_ip.end", network_info=network_info) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault def pause_instance(self, context, instance): """Pause an instance on this host.""" context = context.elevated() LOG.info('Pausing', instance=instance) self._notify_about_instance_usage(context, instance, 'pause.start') compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.PAUSE, phase=fields.NotificationPhase.START) self.driver.pause(instance) instance.power_state = self._get_power_state(context, instance) instance.vm_state = vm_states.PAUSED instance.task_state = None instance.save(expected_task_state=task_states.PAUSING) self._notify_about_instance_usage(context, instance, 'pause.end') compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.PAUSE, phase=fields.NotificationPhase.END) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault def unpause_instance(self, context, instance): """Unpause a paused instance on this host.""" context = context.elevated() LOG.info('Unpausing', instance=instance) self._notify_about_instance_usage(context, instance, 'unpause.start') compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.UNPAUSE, phase=fields.NotificationPhase.START) self.driver.unpause(instance) instance.power_state = self._get_power_state(context, instance) instance.vm_state = vm_states.ACTIVE instance.task_state = None instance.save(expected_task_state=task_states.UNPAUSING) self._notify_about_instance_usage(context, instance, 'unpause.end') compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.UNPAUSE, phase=fields.NotificationPhase.END) @wrap_exception() def host_power_action(self, context, action): """Reboots, shuts down or powers up the host.""" return self.driver.host_power_action(action) @wrap_exception() def host_maintenance_mode(self, context, host, mode): """Start/Stop host maintenance window. On start, it triggers guest VMs evacuation. """ return self.driver.host_maintenance_mode(host, mode) def _update_compute_provider_status(self, context, enabled): """Adds or removes the COMPUTE_STATUS_DISABLED trait for this host. For each ComputeNode managed by this service, adds or removes the COMPUTE_STATUS_DISABLED traits to/from the associated resource provider in Placement. 
        :param context: nova auth RequestContext
        :param enabled: True if the node is enabled in which case the trait
            would be removed, False if the node is disabled in which case the
            trait would be added.
        :raises: ComputeHostNotFound if there are no compute nodes found in
            the ResourceTracker for this service.
        """
        # Get the compute node(s) on this host. Remember that ironic can be
        # managing more than one compute node.
        nodes = self.rt.compute_nodes.values()
        if not nodes:
            raise exception.ComputeHostNotFound(host=self.host)
        # For each node, we want to add (or remove) the COMPUTE_STATUS_DISABLED
        # trait on the related resource provider in placement so the scheduler
        # (pre-)filters the provider based on its status.
        for node in nodes:
            try:
                self.virtapi.update_compute_provider_status(
                    context, node.uuid, enabled)
            except (exception.ResourceProviderTraitRetrievalFailed,
                    exception.ResourceProviderUpdateConflict,
                    exception.ResourceProviderUpdateFailed,
                    exception.TraitRetrievalFailed) as e:
                # This is best effort so just log a warning and continue.
                LOG.warning('An error occurred while updating '
                            'COMPUTE_STATUS_DISABLED trait on compute node '
                            'resource provider %s. The trait will be '
                            'synchronized when the update_available_resource '
                            'periodic task runs. Error: %s',
                            node.uuid, e.format_message())
            except Exception:
                LOG.exception('An error occurred while updating '
                              'COMPUTE_STATUS_DISABLED trait on compute node '
                              'resource provider %s. The trait will be '
                              'synchronized when the '
                              'update_available_resource periodic task runs.',
                              node.uuid)

    @wrap_exception()
    def set_host_enabled(self, context, enabled):
        """Sets the specified host's ability to accept new instances.

        This method will add or remove the COMPUTE_STATUS_DISABLED trait
        to/from the associated compute node resource provider(s) for this
        compute service.
        """
        try:
            self._update_compute_provider_status(context, enabled)
        except exception.ComputeHostNotFound:
            LOG.warning('Unable to add/remove trait COMPUTE_STATUS_DISABLED. '
                        'No ComputeNode(s) found for host: %s', self.host)

        try:
            return self.driver.set_host_enabled(enabled)
        except NotImplementedError:
            # Only the xenapi driver implements set_host_enabled but we don't
            # want NotImplementedError to get raised back to the API. We still
            # need to honor the compute RPC API contract and return 'enabled'
            # or 'disabled' though.
return 'enabled' if enabled else 'disabled' @wrap_exception() def get_host_uptime(self, context): """Returns the result of calling "uptime" on the target host.""" return self.driver.get_host_uptime() @wrap_exception() @wrap_instance_fault def get_diagnostics(self, context, instance): """Retrieve diagnostics for an instance on this host.""" current_power_state = self._get_power_state(context, instance) if current_power_state == power_state.RUNNING: LOG.info("Retrieving diagnostics", instance=instance) return self.driver.get_diagnostics(instance) else: raise exception.InstanceInvalidState( attr='power state', instance_uuid=instance.uuid, state=power_state.STATE_MAP[instance.power_state], method='get_diagnostics') @wrap_exception() @wrap_instance_fault def get_instance_diagnostics(self, context, instance): """Retrieve diagnostics for an instance on this host.""" current_power_state = self._get_power_state(context, instance) if current_power_state == power_state.RUNNING: LOG.info("Retrieving diagnostics", instance=instance) return self.driver.get_instance_diagnostics(instance) else: raise exception.InstanceInvalidState( attr='power state', instance_uuid=instance.uuid, state=power_state.STATE_MAP[instance.power_state], method='get_diagnostics') @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault def suspend_instance(self, context, instance): """Suspend the given instance.""" context = context.elevated() # Store the old state instance.system_metadata['old_vm_state'] = instance.vm_state self._notify_about_instance_usage(context, instance, 'suspend.start') compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.SUSPEND, phase=fields.NotificationPhase.START) with self._error_out_instance_on_exception(context, instance, instance_state=instance.vm_state): self.driver.suspend(context, instance) instance.power_state = self._get_power_state(context, instance) instance.vm_state = vm_states.SUSPENDED instance.task_state = None instance.save(expected_task_state=task_states.SUSPENDING) self._notify_about_instance_usage(context, instance, 'suspend.end') compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.SUSPEND, phase=fields.NotificationPhase.END) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault def resume_instance(self, context, instance): """Resume the given suspended instance.""" context = context.elevated() LOG.info('Resuming', instance=instance) self._notify_about_instance_usage(context, instance, 'resume.start') bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) block_device_info = self._get_instance_block_device_info( context, instance, bdms=bdms) compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.RESUME, phase=fields.NotificationPhase.START, bdms=bdms) network_info = self.network_api.get_instance_nw_info(context, instance) with self._error_out_instance_on_exception(context, instance, instance_state=instance.vm_state): self.driver.resume(context, instance, network_info, block_device_info) instance.power_state = self._get_power_state(context, instance) # We default to the ACTIVE state for backwards compatibility instance.vm_state = instance.system_metadata.pop('old_vm_state', vm_states.ACTIVE) instance.task_state = None instance.save(expected_task_state=task_states.RESUMING) self._notify_about_instance_usage(context, 
instance, 'resume.end') compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.RESUME, phase=fields.NotificationPhase.END, bdms=bdms) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault def shelve_instance(self, context, instance, image_id, clean_shutdown): """Shelve an instance. This should be used when you want to take a snapshot of the instance. It also adds system_metadata that can be used by a periodic task to offload the shelved instance after a period of time. :param context: request context :param instance: an Instance object :param image_id: an image id to snapshot to. :param clean_shutdown: give the GuestOS a chance to stop """ @utils.synchronized(instance.uuid) def do_shelve_instance(): self._shelve_instance(context, instance, image_id, clean_shutdown) do_shelve_instance() def _shelve_instance(self, context, instance, image_id, clean_shutdown): LOG.info('Shelving', instance=instance) offload = CONF.shelved_offload_time == 0 if offload: # Get the BDMs early so we can pass them into versioned # notifications since _shelve_offload_instance needs the # BDMs anyway. bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) else: bdms = None compute_utils.notify_usage_exists(self.notifier, context, instance, self.host, current_period=True) self._notify_about_instance_usage(context, instance, 'shelve.start') compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.SHELVE, phase=fields.NotificationPhase.START, bdms=bdms) def update_task_state(task_state, expected_state=task_states.SHELVING): shelving_state_map = { task_states.IMAGE_PENDING_UPLOAD: task_states.SHELVING_IMAGE_PENDING_UPLOAD, task_states.IMAGE_UPLOADING: task_states.SHELVING_IMAGE_UPLOADING, task_states.SHELVING: task_states.SHELVING} task_state = shelving_state_map[task_state] expected_state = shelving_state_map[expected_state] instance.task_state = task_state instance.save(expected_task_state=expected_state) # Do not attempt a clean shutdown of a paused guest since some # hypervisors will fail the clean shutdown if the guest is not # running. if instance.power_state == power_state.PAUSED: clean_shutdown = False self._power_off_instance(context, instance, clean_shutdown) self.driver.snapshot(context, instance, image_id, update_task_state) instance.system_metadata['shelved_at'] = timeutils.utcnow().isoformat() instance.system_metadata['shelved_image_id'] = image_id instance.system_metadata['shelved_host'] = self.host instance.vm_state = vm_states.SHELVED instance.task_state = None if CONF.shelved_offload_time == 0: instance.task_state = task_states.SHELVING_OFFLOADING instance.power_state = self._get_power_state(context, instance) instance.save(expected_task_state=[ task_states.SHELVING, task_states.SHELVING_IMAGE_UPLOADING]) self._notify_about_instance_usage(context, instance, 'shelve.end') compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.SHELVE, phase=fields.NotificationPhase.END, bdms=bdms) if offload: self._shelve_offload_instance(context, instance, clean_shutdown=False, bdms=bdms) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault def shelve_offload_instance(self, context, instance, clean_shutdown): """Remove a shelved instance from the hypervisor. 
This frees up those resources for use by other instances, but may lead to slower unshelve times for this instance. This method is used by volume backed instances since restoring them doesn't involve the potentially large download of an image. :param context: request context :param instance: nova.objects.instance.Instance :param clean_shutdown: give the GuestOS a chance to stop """ @utils.synchronized(instance.uuid) def do_shelve_offload_instance(): self._shelve_offload_instance(context, instance, clean_shutdown) do_shelve_offload_instance() def _shelve_offload_instance(self, context, instance, clean_shutdown, bdms=None): LOG.info('Shelve offloading', instance=instance) if bdms is None: bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) self._notify_about_instance_usage(context, instance, 'shelve_offload.start') compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.SHELVE_OFFLOAD, phase=fields.NotificationPhase.START, bdms=bdms) self._power_off_instance(context, instance, clean_shutdown) current_power_state = self._get_power_state(context, instance) network_info = self.network_api.get_instance_nw_info(context, instance) block_device_info = self._get_instance_block_device_info(context, instance, bdms=bdms) self.driver.destroy(context, instance, network_info, block_device_info) # the instance is going to be removed from the host so we want to # terminate all the connections with the volume server and the host self._terminate_volume_connections(context, instance, bdms) # Free up the resource allocations in the placement service. # This should happen *before* the vm_state is changed to # SHELVED_OFFLOADED in case client-side code is polling the API to # schedule more instances (or unshelve) once this server is offloaded. self.rt.delete_allocation_for_shelve_offloaded_instance(context, instance) instance.power_state = current_power_state # NOTE(mriedem): The vm_state has to be set before updating the # resource tracker, see vm_states.ALLOW_RESOURCE_REMOVAL. The host/node # values cannot be nulled out until after updating the resource tracker # though. instance.vm_state = vm_states.SHELVED_OFFLOADED instance.task_state = None instance.save(expected_task_state=[task_states.SHELVING, task_states.SHELVING_OFFLOADING]) # NOTE(ndipanov): Free resources from the resource tracker self._update_resource_tracker(context, instance) # NOTE(sfinucan): RPC calls should no longer be attempted against this # instance, so ensure any calls result in errors self._nil_out_instance_obj_host_and_node(instance) instance.save(expected_task_state=None) # TODO(melwitt): We should clean up instance console tokens here. The # instance has no host at this point and will need to establish a new # console connection in the future after it is unshelved. self._delete_scheduler_instance_info(context, instance.uuid) self._notify_about_instance_usage(context, instance, 'shelve_offload.end') compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.SHELVE_OFFLOAD, phase=fields.NotificationPhase.END, bdms=bdms) @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault def unshelve_instance(self, context, instance, image, filter_properties, node, request_spec=None): """Unshelve the instance. :param context: request context :param instance: a nova.objects.instance.Instance object :param image: an image to build from. If None we assume a volume backed instance. 
:param filter_properties: dict containing limits, retry info etc. :param node: target compute node :param request_spec: the RequestSpec object used to schedule the instance """ if filter_properties is None: filter_properties = {} @utils.synchronized(instance.uuid) def do_unshelve_instance(): self._unshelve_instance( context, instance, image, filter_properties, node, request_spec) do_unshelve_instance() def _unshelve_instance_key_scrub(self, instance): """Remove data from the instance that may cause side effects.""" cleaned_keys = dict( key_data=instance.key_data, auto_disk_config=instance.auto_disk_config) instance.key_data = None instance.auto_disk_config = False return cleaned_keys def _unshelve_instance_key_restore(self, instance, keys): """Restore previously scrubbed keys before saving the instance.""" instance.update(keys) def _unshelve_instance(self, context, instance, image, filter_properties, node, request_spec): LOG.info('Unshelving', instance=instance) bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) self._notify_about_instance_usage(context, instance, 'unshelve.start') compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.UNSHELVE, phase=fields.NotificationPhase.START, bdms=bdms) instance.task_state = task_states.SPAWNING instance.save() block_device_info = self._prep_block_device(context, instance, bdms) scrubbed_keys = self._unshelve_instance_key_scrub(instance) if node is None: node = self._get_nodename(instance) limits = filter_properties.get('limits', {}) allocations = self.reportclient.get_allocations_for_consumer( context, instance.uuid) shelved_image_ref = instance.image_ref if image: instance.image_ref = image['id'] image_meta = objects.ImageMeta.from_dict(image) else: image_meta = objects.ImageMeta.from_dict( utils.get_image_from_system_metadata( instance.system_metadata)) provider_mappings = self._get_request_group_mapping(request_spec) try: if provider_mappings: update = ( compute_utils. update_pci_request_spec_with_allocated_interface_name) update(context, self.reportclient, instance, provider_mappings) self.network_api.setup_instance_network_on_host( context, instance, self.host, provider_mappings=provider_mappings) network_info = self.network_api.get_instance_nw_info( context, instance) with self.rt.instance_claim(context, instance, node, allocations, limits): self.driver.spawn(context, instance, image_meta, injected_files=[], admin_password=None, allocations=allocations, network_info=network_info, block_device_info=block_device_info) except Exception: with excutils.save_and_reraise_exception(logger=LOG): LOG.exception('Instance failed to spawn', instance=instance) # Cleanup allocations created by the scheduler on this host # since we failed to spawn the instance. We do this both if # the instance claim failed with ComputeResourcesUnavailable # or if we did claim but the spawn failed, because aborting the # instance claim will not remove the allocations. self.reportclient.delete_allocation_for_instance(context, instance.uuid) # FIXME: Umm, shouldn't we be rolling back port bindings too? self._terminate_volume_connections(context, instance, bdms) # The reverts_task_state decorator on unshelve_instance will # eventually save these updates. 
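# NOTE(editor): readability note, added by the editor. On a failed spawn
# the cleanup above has already removed the scheduler-created allocations
# and the volume connections; the host/node fields are nilled out below
# and the reverts_task_state decorator eventually persists those updates,
# while port bindings are not rolled back here (a known gap, see the FIXME
# above).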
self._nil_out_instance_obj_host_and_node(instance) if image: instance.image_ref = shelved_image_ref self._delete_snapshot_of_shelved_instance(context, instance, image['id']) self._unshelve_instance_key_restore(instance, scrubbed_keys) self._update_instance_after_spawn(context, instance) # Delete system_metadata for a shelved instance compute_utils.remove_shelved_keys_from_system_metadata(instance) instance.save(expected_task_state=task_states.SPAWNING) self._update_scheduler_instance_info(context, instance) self._notify_about_instance_usage(context, instance, 'unshelve.end') compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.UNSHELVE, phase=fields.NotificationPhase.END, bdms=bdms) @messaging.expected_exceptions(NotImplementedError) @wrap_instance_fault def reset_network(self, context, instance): """Reset networking on the given instance.""" LOG.debug('Reset network', instance=instance) self.driver.reset_network(instance) def _inject_network_info(self, context, instance, network_info): """Inject network info for the given instance.""" LOG.debug('Inject network info', instance=instance) LOG.debug('network_info to inject: |%s|', network_info, instance=instance) self.driver.inject_network_info(instance, network_info) @wrap_instance_fault def inject_network_info(self, context, instance): """Inject network info, but don't return the info.""" network_info = self.network_api.get_instance_nw_info(context, instance) self._inject_network_info(context, instance, network_info) @messaging.expected_exceptions(NotImplementedError, exception.ConsoleNotAvailable, exception.InstanceNotFound) @wrap_exception() @wrap_instance_fault def get_console_output(self, context, instance, tail_length): """Send the console output for the given instance.""" context = context.elevated() LOG.info("Get console output", instance=instance) output = self.driver.get_console_output(context, instance) if type(output) is six.text_type: output = six.b(output) if tail_length is not None: output = self._tail_log(output, tail_length) return output.decode('ascii', 'replace') def _tail_log(self, log, length): try: length = int(length) except ValueError: length = 0 if length == 0: return b'' else: return b'\n'.join(log.split(b'\n')[-int(length):]) @messaging.expected_exceptions(exception.ConsoleTypeInvalid, exception.InstanceNotReady, exception.InstanceNotFound, exception.ConsoleTypeUnavailable, NotImplementedError) @wrap_exception() @wrap_instance_fault def get_vnc_console(self, context, console_type, instance): """Return connection information for a vnc console.""" context = context.elevated() LOG.debug("Getting vnc console", instance=instance) if not CONF.vnc.enabled: raise exception.ConsoleTypeUnavailable(console_type=console_type) if console_type == 'novnc': # For essex, novncproxy_base_url must include the full path # including the html file (like http://myhost/vnc_auto.html) access_url_base = CONF.vnc.novncproxy_base_url else: raise exception.ConsoleTypeInvalid(console_type=console_type) try: # Retrieve connect info from driver, and then decorate with our # access info token console = self.driver.get_vnc_console(context, instance) console_auth = objects.ConsoleAuthToken( context=context, console_type=console_type, host=console.host, port=console.port, internal_access_path=console.internal_access_path, instance_uuid=instance.uuid, access_url_base=access_url_base, ) console_auth.authorize(CONF.consoleauth.token_ttl) connect_info = console.get_connection_info( console_auth.token, 
console_auth.access_url) except exception.InstanceNotFound: if instance.vm_state != vm_states.BUILDING: raise raise exception.InstanceNotReady(instance_id=instance.uuid) return connect_info @messaging.expected_exceptions(exception.ConsoleTypeInvalid, exception.InstanceNotReady, exception.InstanceNotFound, exception.ConsoleTypeUnavailable, NotImplementedError) @wrap_exception() @wrap_instance_fault def get_spice_console(self, context, console_type, instance): """Return connection information for a spice console.""" context = context.elevated() LOG.debug("Getting spice console", instance=instance) if not CONF.spice.enabled: raise exception.ConsoleTypeUnavailable(console_type=console_type) if console_type != 'spice-html5': raise exception.ConsoleTypeInvalid(console_type=console_type) try: # Retrieve connect info from driver, and then decorate with our # access info token console = self.driver.get_spice_console(context, instance) console_auth = objects.ConsoleAuthToken( context=context, console_type=console_type, host=console.host, port=console.port, internal_access_path=console.internal_access_path, instance_uuid=instance.uuid, access_url_base=CONF.spice.html5proxy_base_url, ) console_auth.authorize(CONF.consoleauth.token_ttl) connect_info = console.get_connection_info( console_auth.token, console_auth.access_url) except exception.InstanceNotFound: if instance.vm_state != vm_states.BUILDING: raise raise exception.InstanceNotReady(instance_id=instance.uuid) return connect_info @messaging.expected_exceptions(exception.ConsoleTypeInvalid, exception.InstanceNotReady, exception.InstanceNotFound, exception.ConsoleTypeUnavailable, NotImplementedError) @wrap_exception() @wrap_instance_fault def get_rdp_console(self, context, console_type, instance): """Return connection information for a RDP console.""" context = context.elevated() LOG.debug("Getting RDP console", instance=instance) if not CONF.rdp.enabled: raise exception.ConsoleTypeUnavailable(console_type=console_type) if console_type != 'rdp-html5': raise exception.ConsoleTypeInvalid(console_type=console_type) try: # Retrieve connect info from driver, and then decorate with our # access info token console = self.driver.get_rdp_console(context, instance) console_auth = objects.ConsoleAuthToken( context=context, console_type=console_type, host=console.host, port=console.port, internal_access_path=console.internal_access_path, instance_uuid=instance.uuid, access_url_base=CONF.rdp.html5_proxy_base_url, ) console_auth.authorize(CONF.consoleauth.token_ttl) connect_info = console.get_connection_info( console_auth.token, console_auth.access_url) except exception.InstanceNotFound: if instance.vm_state != vm_states.BUILDING: raise raise exception.InstanceNotReady(instance_id=instance.uuid) return connect_info @messaging.expected_exceptions(exception.ConsoleTypeInvalid, exception.InstanceNotReady, exception.InstanceNotFound, exception.ConsoleTypeUnavailable, NotImplementedError) @wrap_exception() @wrap_instance_fault def get_mks_console(self, context, console_type, instance): """Return connection information for a MKS console.""" context = context.elevated() LOG.debug("Getting MKS console", instance=instance) if not CONF.mks.enabled: raise exception.ConsoleTypeUnavailable(console_type=console_type) if console_type != 'webmks': raise exception.ConsoleTypeInvalid(console_type=console_type) try: # Retrieve connect info from driver, and then decorate with our # access info token console = self.driver.get_mks_console(context, instance) console_auth = 
objects.ConsoleAuthToken( context=context, console_type=console_type, host=console.host, port=console.port, internal_access_path=console.internal_access_path, instance_uuid=instance.uuid, access_url_base=CONF.mks.mksproxy_base_url, ) console_auth.authorize(CONF.consoleauth.token_ttl) connect_info = console.get_connection_info( console_auth.token, console_auth.access_url) except exception.InstanceNotFound: if instance.vm_state != vm_states.BUILDING: raise raise exception.InstanceNotReady(instance_id=instance.uuid) return connect_info @messaging.expected_exceptions( exception.ConsoleTypeInvalid, exception.InstanceNotReady, exception.InstanceNotFound, exception.ConsoleTypeUnavailable, exception.SocketPortRangeExhaustedException, exception.ImageSerialPortNumberInvalid, exception.ImageSerialPortNumberExceedFlavorValue, NotImplementedError) @wrap_exception() @wrap_instance_fault def get_serial_console(self, context, console_type, instance): """Returns connection information for a serial console.""" LOG.debug("Getting serial console", instance=instance) if not CONF.serial_console.enabled: raise exception.ConsoleTypeUnavailable(console_type=console_type) context = context.elevated() try: # Retrieve connect info from driver, and then decorate with our # access info token console = self.driver.get_serial_console(context, instance) console_auth = objects.ConsoleAuthToken( context=context, console_type=console_type, host=console.host, port=console.port, internal_access_path=console.internal_access_path, instance_uuid=instance.uuid, access_url_base=CONF.serial_console.base_url, ) console_auth.authorize(CONF.consoleauth.token_ttl) connect_info = console.get_connection_info( console_auth.token, console_auth.access_url) except exception.InstanceNotFound: if instance.vm_state != vm_states.BUILDING: raise raise exception.InstanceNotReady(instance_id=instance.uuid) return connect_info @messaging.expected_exceptions(exception.ConsoleTypeInvalid, exception.InstanceNotReady, exception.InstanceNotFound) @wrap_exception() @wrap_instance_fault def validate_console_port(self, ctxt, instance, port, console_type): if console_type == "spice-html5": console_info = self.driver.get_spice_console(ctxt, instance) elif console_type == "rdp-html5": console_info = self.driver.get_rdp_console(ctxt, instance) elif console_type == "serial": console_info = self.driver.get_serial_console(ctxt, instance) elif console_type == "webmks": console_info = self.driver.get_mks_console(ctxt, instance) else: console_info = self.driver.get_vnc_console(ctxt, instance) # Some drivers may return an int on console_info.port but the port # variable in this method is a string, so cast to be sure we are # comparing the correct types. 
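# NOTE(editor): a minimal sketch of the normalization described above; the
# helper name is hypothetical and the function is not called anywhere.
# Without the str() cast an int port reported by the driver would never
# compare equal to the string received over RPC (5900 == '5900' is False).
def _port_matches(driver_port, requested_port):
    # hypothetical helper: compare after normalizing both sides to str
    return str(driver_port) == str(requested_port)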
return str(console_info.port) == port @wrap_exception() @reverts_task_state @wrap_instance_fault def reserve_block_device_name(self, context, instance, device, volume_id, disk_bus, device_type, tag, multiattach): if (tag and not self.driver.capabilities.get('supports_tagged_attach_volume', False)): raise exception.VolumeTaggedAttachNotSupported() if (multiattach and not self.driver.capabilities.get('supports_multiattach', False)): raise exception.MultiattachNotSupportedByVirtDriver( volume_id=volume_id) @utils.synchronized(instance.uuid) def do_reserve(): bdms = ( objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid)) # NOTE(ndipanov): We need to explicitly set all the fields on the # object so that obj_load_attr does not fail new_bdm = objects.BlockDeviceMapping( context=context, source_type='volume', destination_type='volume', instance_uuid=instance.uuid, boot_index=None, volume_id=volume_id, device_name=device, guest_format=None, disk_bus=disk_bus, device_type=device_type, tag=tag) new_bdm.device_name = self._get_device_name_for_instance( instance, bdms, new_bdm) # NOTE(vish): create bdm here to avoid race condition new_bdm.create() return new_bdm return do_reserve() @wrap_exception() @wrap_instance_event(prefix='compute') @wrap_instance_fault def attach_volume(self, context, instance, bdm): """Attach a volume to an instance.""" driver_bdm = driver_block_device.convert_volume(bdm) @utils.synchronized(instance.uuid) def do_attach_volume(context, instance, driver_bdm): try: return self._attach_volume(context, instance, driver_bdm) except Exception: with excutils.save_and_reraise_exception(): bdm.destroy() do_attach_volume(context, instance, driver_bdm) def _attach_volume(self, context, instance, bdm): context = context.elevated() LOG.info('Attaching volume %(volume_id)s to %(mountpoint)s', {'volume_id': bdm.volume_id, 'mountpoint': bdm['mount_device']}, instance=instance) compute_utils.notify_about_volume_attach_detach( context, instance, self.host, action=fields.NotificationAction.VOLUME_ATTACH, phase=fields.NotificationPhase.START, volume_id=bdm.volume_id) try: bdm.attach(context, instance, self.volume_api, self.driver, do_driver_attach=True) except Exception as e: with excutils.save_and_reraise_exception(): LOG.exception("Failed to attach %(volume_id)s " "at %(mountpoint)s", {'volume_id': bdm.volume_id, 'mountpoint': bdm['mount_device']}, instance=instance) if bdm['attachment_id']: # Try to delete the attachment to make the volume # available again. Note that DriverVolumeBlockDevice # may have already deleted the attachment so ignore # VolumeAttachmentNotFound. 
try: self.volume_api.attachment_delete( context, bdm['attachment_id']) except exception.VolumeAttachmentNotFound as exc: LOG.debug('Ignoring VolumeAttachmentNotFound: %s', exc, instance=instance) else: self.volume_api.unreserve_volume(context, bdm.volume_id) tb = traceback.format_exc() compute_utils.notify_about_volume_attach_detach( context, instance, self.host, action=fields.NotificationAction.VOLUME_ATTACH, phase=fields.NotificationPhase.ERROR, exception=e, volume_id=bdm.volume_id, tb=tb) info = {'volume_id': bdm.volume_id} self._notify_about_instance_usage( context, instance, "volume.attach", extra_usage_info=info) compute_utils.notify_about_volume_attach_detach( context, instance, self.host, action=fields.NotificationAction.VOLUME_ATTACH, phase=fields.NotificationPhase.END, volume_id=bdm.volume_id) def _notify_volume_usage_detach(self, context, instance, bdm): if CONF.volume_usage_poll_interval <= 0: return mp = bdm.device_name # Handle bootable volumes which will not contain /dev/ if '/dev/' in mp: mp = mp[5:] try: vol_stats = self.driver.block_stats(instance, mp) if vol_stats is None: return except NotImplementedError: return LOG.debug("Updating volume usage cache with totals", instance=instance) rd_req, rd_bytes, wr_req, wr_bytes, flush_ops = vol_stats vol_usage = objects.VolumeUsage(context) vol_usage.volume_id = bdm.volume_id vol_usage.instance_uuid = instance.uuid vol_usage.project_id = instance.project_id vol_usage.user_id = instance.user_id vol_usage.availability_zone = instance.availability_zone vol_usage.curr_reads = rd_req vol_usage.curr_read_bytes = rd_bytes vol_usage.curr_writes = wr_req vol_usage.curr_write_bytes = wr_bytes vol_usage.save(update_totals=True) self.notifier.info(context, 'volume.usage', vol_usage.to_dict()) compute_utils.notify_about_volume_usage(context, vol_usage, self.host) def _detach_volume(self, context, bdm, instance, destroy_bdm=True, attachment_id=None): """Detach a volume from an instance. :param context: security context :param bdm: nova.objects.BlockDeviceMapping volume bdm to detach :param instance: the Instance object to detach the volume from :param destroy_bdm: if True, the corresponding BDM entry will be marked as deleted. Disabling this is useful for operations like rebuild, when we don't want to destroy BDM :param attachment_id: The volume attachment_id for the given instance and volume. 
""" volume_id = bdm.volume_id compute_utils.notify_about_volume_attach_detach( context, instance, self.host, action=fields.NotificationAction.VOLUME_DETACH, phase=fields.NotificationPhase.START, volume_id=volume_id) self._notify_volume_usage_detach(context, instance, bdm) LOG.info('Detaching volume %(volume_id)s', {'volume_id': volume_id}, instance=instance) driver_bdm = driver_block_device.convert_volume(bdm) driver_bdm.detach(context, instance, self.volume_api, self.driver, attachment_id=attachment_id, destroy_bdm=destroy_bdm) info = dict(volume_id=volume_id) self._notify_about_instance_usage( context, instance, "volume.detach", extra_usage_info=info) compute_utils.notify_about_volume_attach_detach( context, instance, self.host, action=fields.NotificationAction.VOLUME_DETACH, phase=fields.NotificationPhase.END, volume_id=volume_id) if 'tag' in bdm and bdm.tag: self._delete_disk_metadata(instance, bdm) if destroy_bdm: bdm.destroy() def _delete_disk_metadata(self, instance, bdm): for device in instance.device_metadata.devices: if isinstance(device, objects.DiskMetadata): if 'serial' in device: if device.serial == bdm.volume_id: instance.device_metadata.devices.remove(device) instance.save() break else: # NOTE(artom) We log the entire device object because all # fields are nullable and may not be set LOG.warning('Unable to determine whether to clean up ' 'device metadata for disk %s', device, instance=instance) @wrap_exception() @wrap_instance_event(prefix='compute') @wrap_instance_fault def detach_volume(self, context, volume_id, instance, attachment_id): """Detach a volume from an instance. :param context: security context :param volume_id: the volume id :param instance: the Instance object to detach the volume from :param attachment_id: The volume attachment_id for the given instance and volume. """ @utils.synchronized(instance.uuid) def do_detach_volume(context, volume_id, instance, attachment_id): bdm = objects.BlockDeviceMapping.get_by_volume_and_instance( context, volume_id, instance.uuid) self._detach_volume(context, bdm, instance, attachment_id=attachment_id) do_detach_volume(context, volume_id, instance, attachment_id) def _init_volume_connection(self, context, new_volume, old_volume_id, connector, bdm, new_attachment_id, mountpoint): new_volume_id = new_volume['id'] if new_attachment_id is None: # We're dealing with an old-style attachment so initialize the # connection so we can get the connection_info. new_cinfo = self.volume_api.initialize_connection(context, new_volume_id, connector) else: # Check for multiattach on the new volume and if True, check to # see if the virt driver supports multiattach. # TODO(mriedem): This is copied from DriverVolumeBlockDevice # and should be consolidated into some common code at some point. vol_multiattach = new_volume.get('multiattach', False) virt_multiattach = self.driver.capabilities.get( 'supports_multiattach', False) if vol_multiattach and not virt_multiattach: raise exception.MultiattachNotSupportedByVirtDriver( volume_id=new_volume_id) # This is a new style attachment and the API created the new # volume attachment and passed the id to the compute over RPC. # At this point we need to update the new volume attachment with # the host connector, which will give us back the new attachment # connection_info. new_cinfo = self.volume_api.attachment_update( context, new_attachment_id, connector, mountpoint)['connection_info'] if vol_multiattach: # This will be used by the volume driver to determine the # proper disk configuration. 
new_cinfo['multiattach'] = True old_cinfo = jsonutils.loads(bdm['connection_info']) if old_cinfo and 'serial' not in old_cinfo: old_cinfo['serial'] = old_volume_id # NOTE(lyarwood): serial is not always present in the returned # connection_info so set it if it is missing as we do in # DriverVolumeBlockDevice.attach(). if 'serial' not in new_cinfo: new_cinfo['serial'] = new_volume_id return (old_cinfo, new_cinfo) def _swap_volume(self, context, instance, bdm, connector, old_volume_id, new_volume, resize_to, new_attachment_id, is_cinder_migration): new_volume_id = new_volume['id'] mountpoint = bdm['device_name'] failed = False new_cinfo = None try: old_cinfo, new_cinfo = self._init_volume_connection( context, new_volume, old_volume_id, connector, bdm, new_attachment_id, mountpoint) # NOTE(lyarwood): The Libvirt driver, the only virt driver # currently implementing swap_volume, will modify the contents of # new_cinfo when connect_volume is called. This is then saved to # the BDM in swap_volume for future use outside of this flow. msg = ("swap_volume: Calling driver volume swap with " "connection infos: new: %(new_cinfo)s; " "old: %(old_cinfo)s" % {'new_cinfo': new_cinfo, 'old_cinfo': old_cinfo}) # Both new and old info might contain password LOG.debug(strutils.mask_password(msg), instance=instance) self.driver.swap_volume(context, old_cinfo, new_cinfo, instance, mountpoint, resize_to) if new_attachment_id: self.volume_api.attachment_complete(context, new_attachment_id) msg = ("swap_volume: Driver volume swap returned, new " "connection_info is now : %(new_cinfo)s" % {'new_cinfo': new_cinfo}) LOG.debug(strutils.mask_password(msg)) except Exception as ex: failed = True with excutils.save_and_reraise_exception(): tb = traceback.format_exc() compute_utils.notify_about_volume_swap( context, instance, self.host, fields.NotificationPhase.ERROR, old_volume_id, new_volume_id, ex, tb) if new_cinfo: msg = ("Failed to swap volume %(old_volume_id)s " "for %(new_volume_id)s") LOG.exception(msg, {'old_volume_id': old_volume_id, 'new_volume_id': new_volume_id}, instance=instance) else: msg = ("Failed to connect to volume %(volume_id)s " "with volume at %(mountpoint)s") LOG.exception(msg, {'volume_id': new_volume_id, 'mountpoint': bdm['device_name']}, instance=instance) # The API marked the volume as 'detaching' for the old volume # so we need to roll that back so the volume goes back to # 'in-use' state. self.volume_api.roll_detaching(context, old_volume_id) if new_attachment_id is None: # The API reserved the new volume so it would be in # 'attaching' status, so we need to unreserve it so it # goes back to 'available' status. self.volume_api.unreserve_volume(context, new_volume_id) else: # This is a new style attachment for the new volume, which # was created in the API. We just need to delete it here # to put the new volume back into 'available' status. self.volume_api.attachment_delete( context, new_attachment_id) finally: # TODO(mriedem): This finally block is terribly confusing and is # trying to do too much. We should consider removing the finally # block and move whatever needs to happen on success and failure # into the blocks above for clarity, even if it means a bit of # redundant code. 
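# NOTE(editor): a hedged recap of the cleanup decisions below, added for
# readability only:
#   * conn_volume is the connection to tear down: the new volume on
#     failure, the old volume on success;
#   * old-style attachments (bdm.attachment_id is None) are cleaned up
#     with terminate_connection, while new-style attachments are finished
#     by deleting the original attachment once the swap succeeded (the
#     failed case already deleted the new attachment in the exception
#     handler above);
#   * cinder is only called back via migrate_volume_completion for
#     cinder-initiated migrations or old-style attachments.
def _volume_to_disconnect(failed, old_volume_id, new_volume_id):
    # hypothetical helper mirroring the conn_volume selection below
    return new_volume_id if failed else old_volume_id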
conn_volume = new_volume_id if failed else old_volume_id if new_cinfo: LOG.debug("swap_volume: removing Cinder connection " "for volume %(volume)s", {'volume': conn_volume}, instance=instance) if bdm.attachment_id is None: # This is the pre-3.44 flow for new-style volume # attachments so just terminate the connection. self.volume_api.terminate_connection(context, conn_volume, connector) else: # This is a new style volume attachment. If we failed, then # the new attachment was already deleted above in the # exception block and we have nothing more to do here. If # swap_volume was successful in the driver, then we need to # "detach" the original attachment by deleting it. if not failed: self.volume_api.attachment_delete( context, bdm.attachment_id) # Need to make some decisions based on whether this was # a Cinder initiated migration or not. The callback to # migration completion isn't needed in the case of a # nova initiated simple swap of two volume # "volume-update" call so skip that. The new attachment # scenarios will give us a new attachment record and # that's what we want. if bdm.attachment_id and not is_cinder_migration: # we don't callback to cinder comp_ret = {'save_volume_id': new_volume_id} else: # NOTE(lyarwood): The following call to # os-migrate-volume-completion returns a dict containing # save_volume_id, this volume id has two possible values : # 1. old_volume_id if we are migrating (retyping) volumes # 2. new_volume_id if we are swapping between two existing # volumes # This volume id is later used to update the volume_id and # connection_info['serial'] of the BDM. comp_ret = self.volume_api.migrate_volume_completion( context, old_volume_id, new_volume_id, error=failed) LOG.debug("swap_volume: Cinder migrate_volume_completion " "returned: %(comp_ret)s", {'comp_ret': comp_ret}, instance=instance) return (comp_ret, new_cinfo) @wrap_exception() @wrap_instance_event(prefix='compute') @wrap_instance_fault def swap_volume(self, context, old_volume_id, new_volume_id, instance, new_attachment_id): """Replace the old volume with the new volume within the active server :param context: User request context :param old_volume_id: Original volume id :param new_volume_id: New volume id being swapped to :param instance: Instance with original_volume_id attached :param new_attachment_id: ID of the new attachment for new_volume_id """ @utils.synchronized(instance.uuid) def _do_locked_swap_volume(context, old_volume_id, new_volume_id, instance, new_attachment_id): self._do_swap_volume(context, old_volume_id, new_volume_id, instance, new_attachment_id) _do_locked_swap_volume(context, old_volume_id, new_volume_id, instance, new_attachment_id) def _do_swap_volume(self, context, old_volume_id, new_volume_id, instance, new_attachment_id): """Replace the old volume with the new volume within the active server :param context: User request context :param old_volume_id: Original volume id :param new_volume_id: New volume id being swapped to :param instance: Instance with original_volume_id attached :param new_attachment_id: ID of the new attachment for new_volume_id """ context = context.elevated() compute_utils.notify_about_volume_swap( context, instance, self.host, fields.NotificationPhase.START, old_volume_id, new_volume_id) bdm = objects.BlockDeviceMapping.get_by_volume_and_instance( context, old_volume_id, instance.uuid) connector = self.driver.get_volume_connector(instance) resize_to = 0 old_volume = self.volume_api.get(context, old_volume_id) # Yes this is a tightly-coupled state check of 
what's going on inside # cinder, but we need this while we still support old (v1/v2) and # new style attachments (v3.44). Once we drop support for old style # attachments we could think about cleaning up the cinder-initiated # swap volume API flows. is_cinder_migration = False if 'migration_status' in old_volume: is_cinder_migration = old_volume['migration_status'] == 'migrating' old_vol_size = old_volume['size'] new_volume = self.volume_api.get(context, new_volume_id) new_vol_size = new_volume['size'] if new_vol_size > old_vol_size: resize_to = new_vol_size LOG.info('Swapping volume %(old_volume)s for %(new_volume)s', {'old_volume': old_volume_id, 'new_volume': new_volume_id}, instance=instance) comp_ret, new_cinfo = self._swap_volume(context, instance, bdm, connector, old_volume_id, new_volume, resize_to, new_attachment_id, is_cinder_migration) # NOTE(lyarwood): Update the BDM with the modified new_cinfo and # correct volume_id returned by Cinder. save_volume_id = comp_ret['save_volume_id'] new_cinfo['serial'] = save_volume_id values = { 'connection_info': jsonutils.dumps(new_cinfo), 'source_type': 'volume', 'destination_type': 'volume', 'snapshot_id': None, 'volume_id': save_volume_id, 'no_device': None} if resize_to: values['volume_size'] = resize_to if new_attachment_id is not None: # This was a volume swap for a new-style attachment so we # need to update the BDM attachment_id for the new attachment. values['attachment_id'] = new_attachment_id LOG.debug("swap_volume: Updating volume %(volume_id)s BDM record with " "%(updates)s", {'volume_id': bdm.volume_id, 'updates': values}, instance=instance) bdm.update(values) bdm.save() compute_utils.notify_about_volume_swap( context, instance, self.host, fields.NotificationPhase.END, old_volume_id, new_volume_id) @wrap_exception() def remove_volume_connection(self, context, volume_id, instance): """Remove the volume connection on this host Detach the volume from this instance on this host, and if this is the cinder v2 flow, call cinder to terminate the connection. """ try: # NOTE(mriedem): If the BDM was just passed directly we would not # need to do this DB query, but this is an RPC interface so # changing that requires some care. bdm = objects.BlockDeviceMapping.get_by_volume_and_instance( context, volume_id, instance.uuid) # NOTE(mriedem): Normally we would pass delete_attachment=True to # _remove_volume_connection to delete a v3 style volume attachment, # but this method is RPC called from _rollback_live_migration which # already deletes the attachment, so because of that tight coupling # we cannot simply delete a v3 style attachment here without # needing to do some behavior modification of that # _rollback_live_migration flow which gets messy. self._remove_volume_connection(context, bdm, instance) except exception.NotFound: pass def _remove_volume_connection(self, context, bdm, instance, delete_attachment=False): """Remove the volume connection on this host Detach the volume from this instance on this host. :param context: nova auth request context :param bdm: BlockDeviceMapping object for a volume attached to the instance :param instance: Instance object with a volume attached represented by ``bdm`` :param delete_attachment: If ``bdm.attachment_id`` is not None the attachment was made as a cinder v3 style attachment and if True, then deletes the volume attachment, otherwise just terminates the connection for a cinder legacy style connection. 
""" driver_bdm = driver_block_device.convert_volume(bdm) driver_bdm.driver_detach(context, instance, self.volume_api, self.driver) if bdm.attachment_id is None: # cinder v2 api flow connector = self.driver.get_volume_connector(instance) self.volume_api.terminate_connection(context, bdm.volume_id, connector) elif delete_attachment: # cinder v3 api flow self.volume_api.attachment_delete(context, bdm.attachment_id) def _deallocate_port_for_instance(self, context, instance, port_id, raise_on_failure=False): try: result = self.network_api.deallocate_port_for_instance( context, instance, port_id) __, port_allocation = result except Exception as ex: with excutils.save_and_reraise_exception( reraise=raise_on_failure): LOG.warning('Failed to deallocate port %(port_id)s ' 'for instance. Error: %(error)s', {'port_id': port_id, 'error': ex}, instance=instance) else: if port_allocation: # Deallocate the resources in placement that were used by the # detached port. try: client = self.reportclient client.remove_resources_from_instance_allocation( context, instance.uuid, port_allocation) except Exception as ex: # We always raise here as it is not a race condition where # somebody has already deleted the port we want to cleanup. # Here we see that the port exists, the allocation exists, # but we cannot clean it up so we will actually leak # allocations. with excutils.save_and_reraise_exception(): LOG.warning('Failed to remove resource allocation ' 'of port %(port_id)s for instance. Error: ' '%(error)s', {'port_id': port_id, 'error': ex}, instance=instance) # TODO(mriedem): There are likely race failures which can result in # NotFound and QuotaError exceptions getting traced as well. @messaging.expected_exceptions( # Do not log a traceback for user errors. We use Invalid generically # since this method can raise lots of different exceptions: # AttachInterfaceNotSupported # NetworkInterfaceTaggedAttachNotSupported # NetworkAmbiguous # PortNotUsable # PortInUse # PortNotUsableDNS # AttachSRIOVPortNotSupported # NetworksWithQoSPolicyNotSupported exception.Invalid) @wrap_exception() @wrap_instance_event(prefix='compute') @wrap_instance_fault def attach_interface(self, context, instance, network_id, port_id, requested_ip, tag): """Use hotplug to add an network adapter to an instance.""" lockname = 'interface-%s-%s' % (instance.uuid, port_id) @utils.synchronized(lockname) def do_attach_interface(context, instance, network_id, port_id, requested_ip, tag): return self._attach_interface(context, instance, network_id, port_id, requested_ip, tag) return do_attach_interface(context, instance, network_id, port_id, requested_ip, tag) def _attach_interface(self, context, instance, network_id, port_id, requested_ip, tag): if not self.driver.capabilities.get('supports_attach_interface', False): raise exception.AttachInterfaceNotSupported( instance_uuid=instance.uuid) if (tag and not self.driver.capabilities.get('supports_tagged_attach_interface', False)): raise exception.NetworkInterfaceTaggedAttachNotSupported() compute_utils.notify_about_instance_action( context, instance, self.host, action=fields.NotificationAction.INTERFACE_ATTACH, phase=fields.NotificationPhase.START) bind_host_id = self.driver.network_binding_host_id(context, instance) network_info = self.network_api.allocate_port_for_instance( context, instance, port_id, network_id, requested_ip, bind_host_id=bind_host_id, tag=tag) if len(network_info) != 1: LOG.error('allocate_port_for_instance returned %(ports)s ' 'ports', {'ports': len(network_info)}) # 
TODO(elod.illes): an instance.interface_attach.error notification # should be sent here raise exception.InterfaceAttachFailed( instance_uuid=instance.uuid) image_meta = objects.ImageMeta.from_instance(instance) try: self.driver.attach_interface(context, instance, image_meta, network_info[0]) except exception.NovaException as ex: port_id = network_info[0].get('id') LOG.warning("attach interface failed , try to deallocate " "port %(port_id)s, reason: %(msg)s", {'port_id': port_id, 'msg': ex}, instance=instance) self._deallocate_port_for_instance(context, instance, port_id) tb = traceback.format_exc() compute_utils.notify_about_instance_action( context, instance, self.host, action=fields.NotificationAction.INTERFACE_ATTACH, phase=fields.NotificationPhase.ERROR, exception=ex, tb=tb) raise exception.InterfaceAttachFailed( instance_uuid=instance.uuid) compute_utils.notify_about_instance_action( context, instance, self.host, action=fields.NotificationAction.INTERFACE_ATTACH, phase=fields.NotificationPhase.END) return network_info[0] @wrap_exception() @wrap_instance_event(prefix='compute') @wrap_instance_fault def detach_interface(self, context, instance, port_id): """Detach a network adapter from an instance.""" lockname = 'interface-%s-%s' % (instance.uuid, port_id) @utils.synchronized(lockname) def do_detach_interface(context, instance, port_id): self._detach_interface(context, instance, port_id) do_detach_interface(context, instance, port_id) def _detach_interface(self, context, instance, port_id): # NOTE(aarents): we need to refresh info cache from DB here, # as previous detach/attach lock holder just updated it. compute_utils.refresh_info_cache_for_instance(context, instance) network_info = instance.info_cache.network_info condemned = None for vif in network_info: if vif['id'] == port_id: condemned = vif break if condemned is None: raise exception.PortNotFound(_("Port %s is not " "attached") % port_id) compute_utils.notify_about_instance_action( context, instance, self.host, action=fields.NotificationAction.INTERFACE_DETACH, phase=fields.NotificationPhase.START) try: self.driver.detach_interface(context, instance, condemned) except exception.NovaException as ex: # If the instance was deleted before the interface was detached, # just log it at debug. log_level = (logging.DEBUG if isinstance(ex, exception.InstanceNotFound) else logging.WARNING) LOG.log(log_level, "Detach interface failed, port_id=%(port_id)s, reason: " "%(msg)s", {'port_id': port_id, 'msg': ex}, instance=instance) raise exception.InterfaceDetachFailed(instance_uuid=instance.uuid) else: self._deallocate_port_for_instance( context, instance, port_id, raise_on_failure=True) compute_utils.notify_about_instance_action( context, instance, self.host, action=fields.NotificationAction.INTERFACE_DETACH, phase=fields.NotificationPhase.END) def _get_compute_info(self, context, host): return objects.ComputeNode.get_first_node_by_host_for_old_compat( context, host) @wrap_exception() def check_instance_shared_storage(self, ctxt, instance, data): """Check if the instance files are shared :param ctxt: security context :param instance: dict of instance data :param data: result of driver.check_instance_shared_storage_local Returns True if instance disks located on shared storage and False otherwise. 
""" return self.driver.check_instance_shared_storage_remote(ctxt, data) def _dest_can_numa_live_migrate(self, dest_check_data, migration): # TODO(artom) If we have a libvirt driver we expect it to set # dst_supports_numa_live_migration, but we have to remove it if we # did not get a migration from the conductor, indicating that it # cannot send RPC 5.3. This check can be removed in RPC 6.0. if ('dst_supports_numa_live_migration' in dest_check_data and dest_check_data.dst_supports_numa_live_migration and not migration): delattr(dest_check_data, 'dst_supports_numa_live_migration') return dest_check_data @wrap_exception() @wrap_instance_event(prefix='compute') @wrap_instance_fault def check_can_live_migrate_destination(self, ctxt, instance, block_migration, disk_over_commit, migration=None, limits=None): """Check if it is possible to execute live migration. This runs checks on the destination host, and then calls back to the source host to check the results. :param context: security context :param instance: dict of instance data :param block_migration: if true, prepare for block migration if None, calculate it in driver :param disk_over_commit: if true, allow disk over commit if None, ignore disk usage checking :param migration: objects.Migration object for this live migration. :param limits: objects.SchedulerLimits object for this live migration. :returns: a LiveMigrateData object (hypervisor-dependent) """ # Error out if this host cannot accept the new instance due # to anti-affinity. This check at this moment is not very accurate, as # multiple requests may be happening concurrently and miss the lock, # but when it works it provides a better user experience by failing # earlier. Also, it should be safe to explode here, error becomes # NoValidHost and instance status remains ACTIVE. 
try: self._validate_instance_group_policy(ctxt, instance) except exception.RescheduledException as e: msg = ("Failed to validate instance group policy " "due to: {}".format(e)) raise exception.MigrationPreCheckError(reason=msg) src_compute_info = obj_base.obj_to_primitive( self._get_compute_info(ctxt, instance.host)) dst_compute_info = obj_base.obj_to_primitive( self._get_compute_info(ctxt, self.host)) dest_check_data = self.driver.check_can_live_migrate_destination(ctxt, instance, src_compute_info, dst_compute_info, block_migration, disk_over_commit) dest_check_data = self._dest_can_numa_live_migrate(dest_check_data, migration) LOG.debug('destination check data is %s', dest_check_data) try: allocs = self.reportclient.get_allocations_for_consumer( ctxt, instance.uuid) migrate_data = self.compute_rpcapi.check_can_live_migrate_source( ctxt, instance, dest_check_data) if ('src_supports_numa_live_migration' in migrate_data and migrate_data.src_supports_numa_live_migration): migrate_data = self._live_migration_claim( ctxt, instance, migrate_data, migration, limits, allocs) elif 'dst_supports_numa_live_migration' in dest_check_data: LOG.info('Destination was ready for NUMA live migration, ' 'but source is either too old, or is set to an ' 'older upgrade level.', instance=instance) if self.network_api.supports_port_binding_extension(ctxt): # Create migrate_data vifs migrate_data.vifs = \ migrate_data_obj.\ VIFMigrateData.create_skeleton_migrate_vifs( instance.get_network_info()) # Claim PCI devices for VIFs on destination (if needed) port_id_to_pci = self._claim_pci_for_instance_vifs( ctxt, instance) # Update migrate VIFs with the newly claimed PCI devices self._update_migrate_vifs_profile_with_pci( migrate_data.vifs, port_id_to_pci) finally: self.driver.cleanup_live_migration_destination_check(ctxt, dest_check_data) return migrate_data def _live_migration_claim(self, ctxt, instance, migrate_data, migration, limits, allocs): """Runs on the destination and does a resources claim, if necessary. Currently, only NUMA live migrations require it. :param ctxt: Request context :param instance: The Instance being live migrated :param migrate_data: The MigrateData object for this live migration :param migration: The Migration object for this live migration :param limits: The SchedulerLimits object for this live migration :returns: migrate_data with dst_numa_info set if necessary """ try: # NOTE(artom) We might have gotten here from _find_destination() in # the conductor live migrate task. At that point, # migration.dest_node is not set yet (nor should it be, we're still # looking for a destination, after all). Therefore, we cannot use # migration.dest_node here and must use self._get_nodename(). claim = self.rt.live_migration_claim( ctxt, instance, self._get_nodename(instance), migration, limits, allocs) LOG.debug('Created live migration claim.', instance=instance) except exception.ComputeResourcesUnavailable as e: raise exception.MigrationPreCheckError( reason=e.format_message()) return self.driver.post_claim_migrate_data(ctxt, instance, migrate_data, claim) def _source_can_numa_live_migrate(self, ctxt, dest_check_data, source_check_data): # TODO(artom) Our virt driver may have told us that it supports NUMA # live migration. However, the following other conditions must be met # for a NUMA live migration to happen: # 1. 
We got a True dst_supports_numa_live_migration in # dest_check_data, indicating that the dest virt driver supports # NUMA live migration and that the conductor can send RPC 5.3 and # that the destination compute manager can receive it. # 2. Ourselves, the source, can send RPC 5.3. There's no # sentinel/parameter for this, so we just ask our rpcapi directly. # If any of these are not met, we need to remove the # src_supports_numa_live_migration flag from source_check_data to avoid # incorrectly initiating a NUMA live migration. # All of this can be removed in RPC 6.0/objects 2.0. can_numa_live_migrate = ( 'dst_supports_numa_live_migration' in dest_check_data and dest_check_data.dst_supports_numa_live_migration and self.compute_rpcapi.supports_numa_live_migration(ctxt)) if ('src_supports_numa_live_migration' in source_check_data and source_check_data.src_supports_numa_live_migration and not can_numa_live_migrate): delattr(source_check_data, 'src_supports_numa_live_migration') return source_check_data @wrap_exception() @wrap_instance_event(prefix='compute') @wrap_instance_fault def check_can_live_migrate_source(self, ctxt, instance, dest_check_data): """Check if it is possible to execute live migration. This checks if the live migration can succeed, based on the results from check_can_live_migrate_destination. :param ctxt: security context :param instance: dict of instance data :param dest_check_data: result of check_can_live_migrate_destination :returns: a LiveMigrateData object """ bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( ctxt, instance.uuid) is_volume_backed = compute_utils.is_volume_backed_instance( ctxt, instance, bdms) dest_check_data.is_volume_backed = is_volume_backed block_device_info = self._get_instance_block_device_info( ctxt, instance, refresh_conn_info=False, bdms=bdms) result = self.driver.check_can_live_migrate_source(ctxt, instance, dest_check_data, block_device_info) result = self._source_can_numa_live_migrate(ctxt, dest_check_data, result) LOG.debug('source check data is %s', result) return result # TODO(mriedem): Remove the block_migration argument in v6.0 of the compute # RPC API. @wrap_exception() @wrap_instance_event(prefix='compute') @wrap_instance_fault def pre_live_migration(self, context, instance, block_migration, disk, migrate_data): """Preparations for live migration at dest host. :param context: security context :param instance: dict of instance data :param block_migration: if true, prepare for block migration :param disk: disk info of instance :param migrate_data: A dict or LiveMigrateData object holding data required for live migration without shared storage. :returns: migrate_data containing additional migration info """ LOG.debug('pre_live_migration data is %s', migrate_data) # Error out if this host cannot accept the new instance due # to anti-affinity. At this point the migration is already in-progress, # so this is the definitive moment to abort due to the policy # violation. Also, it should be safe to explode here. The instance # status remains ACTIVE, migration status failed. 
self._validate_instance_group_policy(context, instance) migrate_data.old_vol_attachment_ids = {} bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) network_info = self.network_api.get_instance_nw_info(context, instance) self._notify_about_instance_usage( context, instance, "live_migration.pre.start", network_info=network_info) compute_utils.notify_about_instance_action( context, instance, self.host, action=fields.NotificationAction.LIVE_MIGRATION_PRE, phase=fields.NotificationPhase.START, bdms=bdms) connector = self.driver.get_volume_connector(instance) try: for bdm in bdms: if bdm.is_volume and bdm.attachment_id is not None: # This bdm uses the new cinder v3.44 API. # We will create a new attachment for this # volume on this migration destination host. The old # attachment will be deleted on the source host # when the migration succeeds. The old attachment_id # is stored in dict with the key being the bdm.volume_id # so it can be restored on rollback. # # Also note that attachment_update is not needed as we # are providing the connector in the create call. attach_ref = self.volume_api.attachment_create( context, bdm.volume_id, bdm.instance_uuid, connector=connector, mountpoint=bdm.device_name) # save current attachment so we can detach it on success, # or restore it on a rollback. # NOTE(mdbooth): This data is no longer used by the source # host since change Ibe9215c0. We can't remove it until we # are sure the source host has been upgraded. migrate_data.old_vol_attachment_ids[bdm.volume_id] = \ bdm.attachment_id # update the bdm with the new attachment_id. bdm.attachment_id = attach_ref['id'] bdm.save() block_device_info = self._get_instance_block_device_info( context, instance, refresh_conn_info=True, bdms=bdms) # The driver pre_live_migration will plug vifs on the host migrate_data = self.driver.pre_live_migration(context, instance, block_device_info, network_info, disk, migrate_data) LOG.debug('driver pre_live_migration data is %s', migrate_data) # driver.pre_live_migration is what plugs vifs on the destination # host so now we can set the wait_for_vif_plugged flag in the # migrate_data object which the source compute will use to # determine if it should wait for a 'network-vif-plugged' event # from neutron before starting the actual guest transfer in the # hypervisor using_multiple_port_bindings = ( 'vifs' in migrate_data and migrate_data.vifs) migrate_data.wait_for_vif_plugged = ( CONF.compute.live_migration_wait_for_vif_plug and using_multiple_port_bindings ) # NOTE(tr3buchet): setup networks on destination host self.network_api.setup_networks_on_host(context, instance, self.host) except Exception: # If we raise, migrate_data with the updated attachment ids # will not be returned to the source host for rollback. # So we need to rollback new attachments here. with excutils.save_and_reraise_exception(): old_attachments = migrate_data.old_vol_attachment_ids for bdm in bdms: if (bdm.is_volume and bdm.attachment_id is not None and bdm.volume_id in old_attachments): self.volume_api.attachment_delete(context, bdm.attachment_id) bdm.attachment_id = old_attachments[bdm.volume_id] bdm.save() # Volume connections are complete, tell cinder that all the # attachments have completed. 
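# NOTE(editor): a hedged recap of the attachment bookkeeping above, for
# readability only. For each volume BDM using new-style (cinder v3.44+)
# attachments the destination host: creates a new attachment with its own
# connector (remembering the old id in migrate_data.old_vol_attachment_ids
# for rollback), deletes the new attachment and restores the old id on any
# failure, and otherwise completes each new attachment in the loop that
# follows, leaving the old attachment to be deleted on the source once the
# migration succeeds. The helper below is hypothetical and simply mirrors
# the rollback branch in the exception handler above.
def _rollback_new_attachments(volume_api, ctxt, bdms, old_attachments):
    # hypothetical helper: restore the pre-migration attachment ids
    for bdm in bdms:
        if (bdm.is_volume and bdm.attachment_id is not None
                and bdm.volume_id in old_attachments):
            volume_api.attachment_delete(ctxt, bdm.attachment_id)
            bdm.attachment_id = old_attachments[bdm.volume_id]
            bdm.save()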
for bdm in bdms: if bdm.is_volume and bdm.attachment_id is not None: self.volume_api.attachment_complete(context, bdm.attachment_id) self._notify_about_instance_usage( context, instance, "live_migration.pre.end", network_info=network_info) compute_utils.notify_about_instance_action( context, instance, self.host, action=fields.NotificationAction.LIVE_MIGRATION_PRE, phase=fields.NotificationPhase.END, bdms=bdms) LOG.debug('pre_live_migration result data is %s', migrate_data) return migrate_data @staticmethod def _neutron_failed_migration_callback(event_name, instance): msg = ('Neutron reported failure during migration ' 'with %(event)s for instance %(uuid)s') msg_args = {'event': event_name, 'uuid': instance.uuid} if CONF.vif_plugging_is_fatal: raise exception.VirtualInterfacePlugException(msg % msg_args) LOG.error(msg, msg_args) @staticmethod def _get_neutron_events_for_live_migration(instance): # We don't generate events if CONF.vif_plugging_timeout=0 # meaning that the operator disabled using them. if CONF.vif_plugging_timeout: return (instance.get_network_info() .get_live_migration_plug_time_events()) else: return [] def _cleanup_pre_live_migration(self, context, dest, instance, migration, migrate_data, source_bdms): """Helper method for when pre_live_migration fails Sets the migration status to "error" and rolls back the live migration setup on the destination host. :param context: The user request context. :type context: nova.context.RequestContext :param dest: The live migration destination hostname. :type dest: str :param instance: The instance being live migrated. :type instance: nova.objects.Instance :param migration: The migration record tracking this live migration. :type migration: nova.objects.Migration :param migrate_data: Data about the live migration, populated from the destination host. :type migrate_data: Subclass of nova.objects.LiveMigrateData :param source_bdms: BDMs prior to modification by the destination compute host. Set by _do_live_migration and not part of the callback interface, so this is never None """ self._set_migration_status(migration, 'error') # Make sure we set this for _rollback_live_migration() # so it can find it, as expected if it was called later migrate_data.migration = migration self._rollback_live_migration(context, instance, dest, migrate_data=migrate_data, source_bdms=source_bdms) def _do_pre_live_migration_from_source(self, context, dest, instance, block_migration, migration, migrate_data, source_bdms): """Prepares for pre-live-migration on the source host and calls dest Will setup a callback networking event handler (if configured) and then call the dest host's pre_live_migration method to prepare the dest host for live migration (plugs vifs, connect volumes, etc). _rollback_live_migration (on the source) will be called if pre_live_migration (on the dest) fails. :param context: nova auth request context for this operation :param dest: name of the destination compute service host :param instance: Instance object being live migrated :param block_migration: If true, prepare for block migration. :param migration: Migration object tracking this operation :param migrate_data: MigrateData object for this operation populated by the destination host compute driver as part of the check_can_live_migrate_destination call. :param source_bdms: BlockDeviceMappingList of BDMs currently attached to the instance from the source host. 
:returns: MigrateData object which is a modified version of the ``migrate_data`` argument from the compute driver on the dest host during the ``pre_live_migration`` call. :raises: MigrationError if waiting for the network-vif-plugged event timed out and is fatal. """ class _BreakWaitForInstanceEvent(Exception): """Used as a signal to stop waiting for the network-vif-plugged event when we discover that [compute]/live_migration_wait_for_vif_plug is not set on the destination. """ pass events = self._get_neutron_events_for_live_migration(instance) try: if ('block_migration' in migrate_data and migrate_data.block_migration): block_device_info = self._get_instance_block_device_info( context, instance, bdms=source_bdms) disk = self.driver.get_instance_disk_info( instance, block_device_info=block_device_info) else: disk = None deadline = CONF.vif_plugging_timeout error_cb = self._neutron_failed_migration_callback # In order to avoid a race with the vif plugging that the virt # driver does on the destination host, we register our events # to wait for before calling pre_live_migration. Then if the # dest host reports back that we shouldn't wait, we can break # out of the context manager using _BreakWaitForInstanceEvent. with self.virtapi.wait_for_instance_event( instance, events, deadline=deadline, error_callback=error_cb): with timeutils.StopWatch() as timer: # TODO(mriedem): The "block_migration" parameter passed # here is not actually used in pre_live_migration but it # is not optional in the RPC interface either. migrate_data = self.compute_rpcapi.pre_live_migration( context, instance, block_migration, disk, dest, migrate_data) LOG.info('Took %0.2f seconds for pre_live_migration on ' 'destination host %s.', timer.elapsed(), dest, instance=instance) wait_for_vif_plugged = ( 'wait_for_vif_plugged' in migrate_data and migrate_data.wait_for_vif_plugged) if events and not wait_for_vif_plugged: raise _BreakWaitForInstanceEvent except _BreakWaitForInstanceEvent: if events: LOG.debug('Not waiting for events after pre_live_migration: ' '%s. ', events, instance=instance) # This is a bit weird, but we need to clear sys.exc_info() so that # oslo.log formatting does not inadvertently use it later if an # error message is logged without an explicit exc_info. This is # only a problem with python 2. if six.PY2: sys.exc_clear() except exception.VirtualInterfacePlugException: with excutils.save_and_reraise_exception(): LOG.exception('Failed waiting for network virtual interfaces ' 'to be plugged on the destination host %s.', dest, instance=instance) self._cleanup_pre_live_migration( context, dest, instance, migration, migrate_data, source_bdms) except eventlet.timeout.Timeout: # We only get here if wait_for_vif_plugged is True which means # live_migration_wait_for_vif_plug=True on the destination host. msg = ( 'Timed out waiting for events: %(events)s. If these timeouts ' 'are a persistent issue it could mean the networking backend ' 'on host %(dest)s does not support sending these events ' 'unless there are port binding host changes which does not ' 'happen at this point in the live migration process. 
You may ' 'need to disable the live_migration_wait_for_vif_plug option ' 'on host %(dest)s.') subs = {'events': events, 'dest': dest} LOG.warning(msg, subs, instance=instance) if CONF.vif_plugging_is_fatal: self._cleanup_pre_live_migration( context, dest, instance, migration, migrate_data, source_bdms) raise exception.MigrationError(reason=msg % subs) except Exception: with excutils.save_and_reraise_exception(): LOG.exception('Pre live migration failed at %s', dest, instance=instance) self._cleanup_pre_live_migration( context, dest, instance, migration, migrate_data, source_bdms) return migrate_data def _do_live_migration(self, context, dest, instance, block_migration, migration, migrate_data): # NOTE(danms): We should enhance the RT to account for migrations # and use the status field to denote when the accounting has been # done on source/destination. For now, this is just here for status # reporting self._set_migration_status(migration, 'preparing') source_bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) migrate_data = self._do_pre_live_migration_from_source( context, dest, instance, block_migration, migration, migrate_data, source_bdms) # Set migrate_data.migration because that is how _post_live_migration # and _rollback_live_migration get the migration object for cleanup. # Yes this is gross but changing the _post_live_migration and # _rollback_live_migration interfaces would also mean changing how the # virt drivers call them from the driver.live_migration method, i.e. # we would have to pass the migration object through the driver (or # consider using a partial but some do not like that pattern). migrate_data.migration = migration # NOTE(Kevin_Zheng): Pop the migration from the waiting queue # if it exist in the queue, then we are good to moving on, if # not, some other process must have aborted it, then we should # rollback. try: self._waiting_live_migrations.pop(instance.uuid) except KeyError: LOG.debug('Migration %s aborted by another process, rollback.', migration.uuid, instance=instance) self._rollback_live_migration(context, instance, dest, migrate_data, 'cancelled', source_bdms=source_bdms) self._notify_live_migrate_abort_end(context, instance) return self._set_migration_status(migration, 'running') # NOTE(mdbooth): pre_live_migration will update connection_info and # attachment_id on all volume BDMS to reflect the new destination # host attachment. We fetch BDMs before that to retain connection_info # and attachment_id relating to the source host for post migration # cleanup. 
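        # source_bdms is not part of the callback interface the virt driver
        # uses when it invokes these callbacks (see the docstrings of
        # _post_live_migration and _rollback_live_migration), so it is bound
        # here via functools.partial before the callbacks are handed to
        # driver.live_migration() below.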
        post_live_migration = functools.partial(
            self._post_live_migration, source_bdms=source_bdms)
        rollback_live_migration = functools.partial(
            self._rollback_live_migration, source_bdms=source_bdms)

        LOG.debug('live_migration data is %s', migrate_data)
        try:
            self.driver.live_migration(context, instance, dest,
                                       post_live_migration,
                                       rollback_live_migration,
                                       block_migration, migrate_data)
        except Exception:
            LOG.exception('Live migration failed.', instance=instance)
            with excutils.save_and_reraise_exception():
                # Put instance and migration into error state,
                # as it's almost certainly too late to roll back.
                self._set_migration_status(migration, 'error')
                # First refresh the instance as it may have been updated by
                # post_live_migration_at_destination.
                instance.refresh()
                self._set_instance_obj_error_state(context, instance,
                                                   clean_task_state=True)

    @wrap_exception()
    @wrap_instance_event(prefix='compute')
    @errors_out_migration
    @wrap_instance_fault
    def live_migration(self, context, dest, instance, block_migration,
                       migration, migrate_data):
        """Execute live migration.

        :param context: security context
        :param dest: destination host
        :param instance: a nova.objects.instance.Instance object
        :param block_migration: if true, prepare for block migration
        :param migration: a nova.objects.Migration object
        :param migrate_data: implementation specific params
        """
        self._set_migration_status(migration, 'queued')
        # NOTE(Kevin_Zheng): Submit the live_migration job to the pool and
        # store the returned Future object in a dict keyed by instance.uuid
        # in order to be able to track and abort it in the future.
        self._waiting_live_migrations[instance.uuid] = (None, None)
        try:
            future = self._live_migration_executor.submit(
                self._do_live_migration, context, dest, instance,
                block_migration, migration, migrate_data)
            self._waiting_live_migrations[instance.uuid] = (migration, future)
        except RuntimeError:
            # GreenThreadPoolExecutor.submit will raise RuntimeError if the
            # pool is shut down, which happens in
            # _cleanup_live_migrations_in_pool.
            LOG.info('Migration %s failed to submit as the compute service '
                     'is shutting down.', migration.uuid, instance=instance)
            raise exception.LiveMigrationNotSubmitted(
                migration_uuid=migration.uuid, instance_uuid=instance.uuid)

    @wrap_exception()
    @wrap_instance_event(prefix='compute')
    @wrap_instance_fault
    def live_migration_force_complete(self, context, instance):
        """Force live migration to complete.
:param context: Security context :param instance: The instance that is being migrated """ self._notify_about_instance_usage( context, instance, 'live.migration.force.complete.start') compute_utils.notify_about_instance_action( context, instance, self.host, action=fields.NotificationAction.LIVE_MIGRATION_FORCE_COMPLETE, phase=fields.NotificationPhase.START) self.driver.live_migration_force_complete(instance) self._notify_about_instance_usage( context, instance, 'live.migration.force.complete.end') compute_utils.notify_about_instance_action( context, instance, self.host, action=fields.NotificationAction.LIVE_MIGRATION_FORCE_COMPLETE, phase=fields.NotificationPhase.END) def _notify_live_migrate_abort_end(self, context, instance): self._notify_about_instance_usage( context, instance, 'live.migration.abort.end') compute_utils.notify_about_instance_action( context, instance, self.host, action=fields.NotificationAction.LIVE_MIGRATION_ABORT, phase=fields.NotificationPhase.END) @wrap_exception() @wrap_instance_event(prefix='compute') @wrap_instance_fault def live_migration_abort(self, context, instance, migration_id): """Abort an in-progress live migration. :param context: Security context :param instance: The instance that is being migrated :param migration_id: ID of in-progress live migration """ self._notify_about_instance_usage( context, instance, 'live.migration.abort.start') compute_utils.notify_about_instance_action( context, instance, self.host, action=fields.NotificationAction.LIVE_MIGRATION_ABORT, phase=fields.NotificationPhase.START) # NOTE(Kevin_Zheng): Pop the migration out from the queue, this might # lead to 3 scenarios: # 1. The selected migration is still in queue, and the future.cancel() # succeed, then the abort action is succeed, mark the migration # status to 'cancelled'. # 2. The selected migration is still in queue, but the future.cancel() # failed, then the _do_live_migration() has started executing, and # the migration status is 'preparing', then we just pop it from the # queue, and the migration process will handle it later. And the # migration status couldn't be 'running' in this scenario because # if _do_live_migration has started executing and we've already # popped it from the queue and set the migration status to # 'running' at this point, popping it here will raise KeyError at # which point we check if it's running and if so, we abort the old # way. # 3. The selected migration is not in the queue, then the migration # status is 'running', let the driver handle it. try: migration, future = ( self._waiting_live_migrations.pop(instance.uuid)) if future and future.cancel(): # If we got here, we've successfully aborted the queued # migration and _do_live_migration won't run so we need # to set the migration status to cancelled and send the # notification. If Future.cancel() fails, it means # _do_live_migration is running and the migration status # is preparing, and _do_live_migration() itself will attempt # to pop the queued migration, hit a KeyError, and rollback, # set the migration to cancelled and send the # live.migration.abort.end notification. 
                self._set_migration_status(migration, 'cancelled')
        except KeyError:
            migration = objects.Migration.get_by_id(context, migration_id)
            if migration.status != 'running':
                raise exception.InvalidMigrationState(
                    migration_id=migration_id, instance_uuid=instance.uuid,
                    state=migration.status, method='abort live migration')
            self.driver.live_migration_abort(instance)
        self._notify_live_migrate_abort_end(context, instance)

    def _live_migration_cleanup_flags(self, migrate_data, migr_ctxt=None):
        """Determine whether disks, instance path or other resources need to
        be cleaned up after live migration (at source on success, at
        destination on rollback).

        Block migration needs an empty image at the destination host before
        the migration starts, so if any failure occurs, any empty images have
        to be deleted.

        Also, volume-backed live migration w/o shared storage needs to delete
        the newly created instance-xxx dir on the destination as part of its
        rollback process.

        There may be other resources which need cleanup; currently this is
        limited to vPMEM devices with the libvirt driver.

        :param migrate_data: implementation specific data
        :param migr_ctxt: specific resources stored in migration_context
        :returns: (bool, bool) -- do_cleanup, destroy_disks
        """
        # NOTE(pkoniszewski): block migration specific params are set inside
        # migrate_data objects for drivers that expose block live migration
        # information (i.e. Libvirt, Xenapi and HyperV). For other drivers
        # cleanup is not needed.
        do_cleanup = False
        destroy_disks = False
        if isinstance(migrate_data, migrate_data_obj.LibvirtLiveMigrateData):
            has_vpmem = False
            if migr_ctxt and migr_ctxt.old_resources:
                for resource in migr_ctxt.old_resources:
                    if ('metadata' in resource and
                            isinstance(resource.metadata,
                                       objects.LibvirtVPMEMDevice)):
                        has_vpmem = True
                        break
            # No instance booting at the source host, but the instance dir
            # must be deleted to prepare for the next block migration or
            # live migration w/o shared storage; vpmem must be cleaned up.
            do_cleanup = not migrate_data.is_shared_instance_path or has_vpmem
            destroy_disks = not migrate_data.is_shared_block_storage
        elif isinstance(migrate_data, migrate_data_obj.XenapiLiveMigrateData):
            do_cleanup = migrate_data.block_migration
            destroy_disks = migrate_data.block_migration
        elif isinstance(migrate_data, migrate_data_obj.HyperVLiveMigrateData):
            # NOTE(claudiub): We need to cleanup any zombie Planned VM.
            do_cleanup = True
            destroy_disks = not migrate_data.is_shared_instance_path

        return (do_cleanup, destroy_disks)

    def _post_live_migration_remove_source_vol_connections(
            self, context, instance, source_bdms):
        """Disconnect volume connections from the source host during
        _post_live_migration.

        :param context: nova auth RequestContext
        :param instance: Instance object being live migrated
        :param source_bdms: BlockDeviceMappingList representing the attached
            volumes with connection_info set for the source host
        """
        # Detaching volumes.
        connector = self.driver.get_volume_connector(instance)
        for bdm in source_bdms:
            if bdm.is_volume:
                # Detaching volumes is a call to an external API that can
                # fail. If it does, we need to handle it gracefully so that
                # the call to post_live_migration_at_destination - where we
                # set instance host and task state - still happens. We need
                # to rethink the current approach of setting instance host
                # and task state AFTER a whole bunch of things that could
                # fail in unhandled ways, but that is left as a TODO(artom).
try: if bdm.attachment_id is None: # Prior to cinder v3.44: # We don't want to actually mark the volume detached, # or delete the bdm, just remove the connection from # this host. # # remove the volume connection without detaching from # hypervisor because the instance is not running # anymore on the current host self.volume_api.terminate_connection(context, bdm.volume_id, connector) else: # cinder v3.44 api flow - delete the old attachment # for the source host self.volume_api.attachment_delete(context, bdm.attachment_id) except Exception as e: if bdm.attachment_id is None: LOG.error('Connection for volume %s not terminated on ' 'source host %s during post_live_migration: ' '%s', bdm.volume_id, self.host, six.text_type(e), instance=instance) else: LOG.error('Volume attachment %s not deleted on source ' 'host %s during post_live_migration: %s', bdm.attachment_id, self.host, six.text_type(e), instance=instance) @wrap_exception() @wrap_instance_fault def _post_live_migration(self, ctxt, instance, dest, block_migration=False, migrate_data=None, source_bdms=None): """Post operations for live migration. This method is called from live_migration and mainly updating database record. :param ctxt: security context :param instance: instance dict :param dest: destination host :param block_migration: if true, prepare for block migration :param migrate_data: if not None, it is a dict which has data :param source_bdms: BDMs prior to modification by the destination compute host. Set by _do_live_migration and not part of the callback interface, so this is never None required for live migration without shared storage """ LOG.info('_post_live_migration() is started..', instance=instance) # Cleanup source host post live-migration block_device_info = self._get_instance_block_device_info( ctxt, instance, bdms=source_bdms) self.driver.post_live_migration(ctxt, instance, block_device_info, migrate_data) # Disconnect volumes from this (the source) host. self._post_live_migration_remove_source_vol_connections( ctxt, instance, source_bdms) # Releasing vlan. # (not necessary in current implementation?) # NOTE(artom) At this point in time we have not bound the ports to the # destination host yet (this happens in migrate_instance_start() # below). Therefore, the "old" source network info that's still in the # instance info cache is safe to use here, since it'll be used below # during driver.post_live_migration_at_source() to unplug the VIFs on # the source. network_info = instance.get_network_info() self._notify_about_instance_usage(ctxt, instance, "live_migration._post.start", network_info=network_info) compute_utils.notify_about_instance_action( ctxt, instance, self.host, action=fields.NotificationAction.LIVE_MIGRATION_POST, phase=fields.NotificationPhase.START) migration = {'source_compute': self.host, 'dest_compute': dest, } # For neutron, migrate_instance_start will activate the destination # host port bindings, if there are any created by conductor before live # migration started. self.network_api.migrate_instance_start(ctxt, instance, migration) destroy_vifs = False try: # It's possible that the vif type changed on the destination # host and is already bound and active, so we need to use the # stashed source vifs in migrate_data.vifs (if present) to unplug # on the source host. 
unplug_nw_info = network_info if migrate_data and 'vifs' in migrate_data: nw_info = [] for migrate_vif in migrate_data.vifs: nw_info.append(migrate_vif.source_vif) unplug_nw_info = network_model.NetworkInfo.hydrate(nw_info) LOG.debug('Calling driver.post_live_migration_at_source ' 'with original source VIFs from migrate_data: %s', unplug_nw_info, instance=instance) self.driver.post_live_migration_at_source(ctxt, instance, unplug_nw_info) except NotImplementedError as ex: LOG.debug(ex, instance=instance) # For all hypervisors other than libvirt, there is a possibility # they are unplugging networks from source node in the cleanup # method destroy_vifs = True # Free instance allocations on source before claims are allocated on # destination node self.rt.free_pci_device_allocations_for_instance(ctxt, instance) # NOTE(danms): Save source node before calling post method on # destination, which will update it source_node = instance.node do_cleanup, destroy_disks = self._live_migration_cleanup_flags( migrate_data, migr_ctxt=instance.migration_context) if do_cleanup: LOG.debug('Calling driver.cleanup from _post_live_migration', instance=instance) self.driver.cleanup(ctxt, instance, unplug_nw_info, destroy_disks=destroy_disks, migrate_data=migrate_data, destroy_vifs=destroy_vifs) # Define domain at destination host, without doing it, # pause/suspend/terminate do not work. post_at_dest_success = True try: self.compute_rpcapi.post_live_migration_at_destination(ctxt, instance, block_migration, dest) except Exception as error: post_at_dest_success = False # We don't want to break _post_live_migration() if # post_live_migration_at_destination() fails as it should never # affect cleaning up source node. LOG.exception("Post live migration at destination %s failed", dest, instance=instance, error=error) self.instance_events.clear_events_for_instance(instance) # NOTE(timello): make sure we update available resources on source # host even before next periodic task. self.update_available_resource(ctxt) self._update_scheduler_instance_info(ctxt, instance) self._notify_about_instance_usage(ctxt, instance, "live_migration._post.end", network_info=network_info) compute_utils.notify_about_instance_action( ctxt, instance, self.host, action=fields.NotificationAction.LIVE_MIGRATION_POST, phase=fields.NotificationPhase.END) if post_at_dest_success: LOG.info('Migrating instance to %s finished successfully.', dest, instance=instance) self._clean_instance_console_tokens(ctxt, instance) if migrate_data and migrate_data.obj_attr_is_set('migration'): migrate_data.migration.status = 'completed' migrate_data.migration.save() self._delete_allocation_after_move(ctxt, instance, migrate_data.migration) else: # We didn't have data on a migration, which means we can't # look up to see if we had new-style migration-based # allocations. This should really only happen in cases of # a buggy virt driver. Log a warning so we know it happened. LOG.warning('Live migration ended with no migrate_data ' 'record. Unable to clean up migration-based ' 'allocations for node %s which is almost certainly ' 'not an expected situation.', source_node, instance=instance) def _consoles_enabled(self): """Returns whether a console is enable.""" return (CONF.vnc.enabled or CONF.spice.enabled or CONF.rdp.enabled or CONF.serial_console.enabled or CONF.mks.enabled) def _clean_instance_console_tokens(self, ctxt, instance): """Clean console tokens stored for an instance.""" # If the database backend isn't in use, don't bother trying to clean # tokens. 
if self._consoles_enabled(): objects.ConsoleAuthToken.\ clean_console_auths_for_instance(ctxt, instance.uuid) @wrap_exception() @wrap_instance_event(prefix='compute') @wrap_instance_fault def post_live_migration_at_destination(self, context, instance, block_migration): """Post operations for live migration . :param context: security context :param instance: Instance dict :param block_migration: if true, prepare for block migration """ LOG.info('Post operation of migration started', instance=instance) # NOTE(tr3buchet): setup networks on destination host # this is called a second time because # multi_host does not create the bridge in # plug_vifs # NOTE(mriedem): This is a no-op for neutron. self.network_api.setup_networks_on_host(context, instance, self.host) migration = objects.Migration(source_compute=instance.host, dest_compute=self.host, migration_type='live-migration') self.network_api.migrate_instance_finish( context, instance, migration, provider_mappings=None) network_info = self.network_api.get_instance_nw_info(context, instance) self._notify_about_instance_usage( context, instance, "live_migration.post.dest.start", network_info=network_info) compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.LIVE_MIGRATION_POST_DEST, phase=fields.NotificationPhase.START) block_device_info = self._get_instance_block_device_info(context, instance) # Allocate the claimed PCI resources at destination. self.rt.allocate_pci_devices_for_instance(context, instance) try: self.driver.post_live_migration_at_destination( context, instance, network_info, block_migration, block_device_info) except Exception: with excutils.save_and_reraise_exception(): instance.vm_state = vm_states.ERROR LOG.error('Unexpected error during post live migration at ' 'destination host.', instance=instance) finally: # Restore instance state and update host current_power_state = self._get_power_state(context, instance) node_name = None prev_host = instance.host try: compute_node = self._get_compute_info(context, self.host) node_name = compute_node.hypervisor_hostname except exception.ComputeHostNotFound: LOG.exception('Failed to get compute_info for %s', self.host) finally: # NOTE(artom) We need to apply the migration context here # regardless of whether the driver's # post_live_migration_at_destination succeeded or not: the # instance is on the destination, potentially with a new NUMA # topology and resource usage. We need to persist that. # NOTE(artom) Apply followed by drop looks weird, but apply # just saves the new fields while drop actually removes the # migration context from the instance. instance.apply_migration_context() instance.drop_migration_context() instance.host = self.host instance.power_state = current_power_state instance.task_state = None instance.node = node_name instance.progress = 0 instance.save(expected_task_state=task_states.MIGRATING) # NOTE(tr3buchet): tear down networks on source host (nova-net) # NOTE(mriedem): For neutron, this will delete any inactive source # host port bindings. try: self.network_api.setup_networks_on_host(context, instance, prev_host, teardown=True) except exception.PortBindingDeletionFailed as e: # Removing the inactive port bindings from the source host is not # critical so just log an error but don't fail. LOG.error('Network cleanup failed for source host %s during post ' 'live migration. You may need to manually clean up ' 'resources in the network service. 
Error: %s', prev_host, six.text_type(e)) # NOTE(vish): this is necessary to update dhcp for nova-network # NOTE(mriedem): This is a no-op for neutron. self.network_api.setup_networks_on_host(context, instance, self.host) self._notify_about_instance_usage( context, instance, "live_migration.post.dest.end", network_info=network_info) compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.LIVE_MIGRATION_POST_DEST, phase=fields.NotificationPhase.END) def _remove_remote_volume_connections(self, context, dest, bdms, instance): """Rollback remote volume connections on the dest""" for bdm in bdms: try: # remove the connection on the destination host # NOTE(lyarwood): This actually calls the cinderv2 # os-terminate_connection API if required. self.compute_rpcapi.remove_volume_connection( context, instance, bdm.volume_id, dest) except Exception: LOG.warning("Ignoring exception while attempting " "to rollback volume connections for " "volume %s on host %s.", bdm.volume_id, dest, instance=instance) def _rollback_volume_bdms(self, context, bdms, original_bdms, instance): """Rollback the connection_info and attachment_id for each bdm""" original_bdms_by_volid = {bdm.volume_id: bdm for bdm in original_bdms if bdm.is_volume} for bdm in bdms: try: original_bdm = original_bdms_by_volid[bdm.volume_id] # NOTE(lyarwood): Only delete the referenced attachment if it # is different to the original in order to avoid accidentally # removing the source host volume attachment after it has # already been rolled back by a failure in pre_live_migration. if (bdm.attachment_id and original_bdm.attachment_id and bdm.attachment_id != original_bdm.attachment_id): # NOTE(lyarwood): 3.44 cinder api flow. Delete the # attachment used by the bdm and reset it to that of # the original bdm. self.volume_api.attachment_delete(context, bdm.attachment_id) bdm.attachment_id = original_bdm.attachment_id # NOTE(lyarwood): Reset the connection_info to the original bdm.connection_info = original_bdm.connection_info bdm.save() except cinder_exception.ClientException: LOG.warning("Ignoring cinderclient exception when " "attempting to delete attachment %s for volume " "%s while rolling back volume bdms.", bdm.attachment_id, bdm.volume_id, instance=instance) except Exception: with excutils.save_and_reraise_exception(): LOG.exception("Exception while attempting to rollback " "BDM for volume %s.", bdm.volume_id, instance=instance) @wrap_exception() @wrap_instance_fault def _rollback_live_migration(self, context, instance, dest, migrate_data=None, migration_status='error', source_bdms=None): """Recovers Instance/volume state from migrating -> running. :param context: security context :param instance: nova.objects.instance.Instance object :param dest: This method is called from live migration src host. This param specifies destination host. :param migrate_data: if not none, contains implementation specific data. :param migration_status: Contains the status we want to set for the migration object :param source_bdms: BDMs prior to modification by the destination compute host. Set by _do_live_migration and not part of the callback interface, so this is never None """ # NOTE(gibi): We need to refresh pci_requests of the instance as it # might be changed by the conductor during scheduling based on the # selected destination host. 
If the instance has SRIOV ports with # resource request then the LiveMigrationTask._find_destination call # updated the instance.pci_requests.requests[].spec with the SRIOV PF # device name to be used on the destination host. As the migration is # rolling back to the source host now we don't want to persist the # destination host related changes in the DB. instance.pci_requests = \ objects.InstancePCIRequests.get_by_instance_uuid( context, instance.uuid) if (isinstance(migrate_data, migrate_data_obj.LiveMigrateData) and migrate_data.obj_attr_is_set('migration')): migration = migrate_data.migration else: migration = None if migration: # Remove allocations created in Placement for the dest node. # If migration is None, the virt driver didn't pass it which is # a bug. self._revert_allocation(context, instance, migration) else: LOG.error('Unable to revert allocations during live migration ' 'rollback; compute driver did not provide migrate_data', instance=instance) # NOTE(tr3buchet): setup networks on source host (really it's re-setup # for nova-network) # NOTE(mriedem): This is a no-op for neutron. self.network_api.setup_networks_on_host(context, instance, self.host) self.driver.rollback_live_migration_at_source(context, instance, migrate_data) # NOTE(lyarwood): Fetch the current list of BDMs, disconnect any # connected volumes from the dest and delete any volume attachments # used by the destination host before rolling back to the original # still valid source host volume attachments. bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) # TODO(lyarwood): Turn the following into a lookup method within # BlockDeviceMappingList. vol_bdms = [bdm for bdm in bdms if bdm.is_volume] self._remove_remote_volume_connections(context, dest, vol_bdms, instance) self._rollback_volume_bdms(context, vol_bdms, source_bdms, instance) self._notify_about_instance_usage(context, instance, "live_migration._rollback.start") compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.LIVE_MIGRATION_ROLLBACK, phase=fields.NotificationPhase.START, bdms=bdms) do_cleanup, destroy_disks = self._live_migration_cleanup_flags( migrate_data, migr_ctxt=instance.migration_context) if do_cleanup: self.compute_rpcapi.rollback_live_migration_at_destination( context, instance, dest, destroy_disks=destroy_disks, migrate_data=migrate_data) else: # The port binding profiles need to be cleaned up. with errors_out_migration_ctxt(migration): try: # This call will delete any inactive destination host # port bindings. self.network_api.setup_networks_on_host( context, instance, host=dest, teardown=True) except exception.PortBindingDeletionFailed as e: # Removing the inactive port bindings from the destination # host is not critical so just log an error but don't fail. LOG.error( 'Network cleanup failed for destination host %s ' 'during live migration rollback. You may need to ' 'manually clean up resources in the network service. ' 'Error: %s', dest, six.text_type(e)) except Exception: with excutils.save_and_reraise_exception(): LOG.exception( 'An error occurred while cleaning up networking ' 'during live migration rollback.', instance=instance) # NOTE(luyao): We drop move_claim and migration_context after cleanup # is complete, to ensure the specific resources claimed on destination # are released safely. 
# TODO(artom) drop_move_claim_at_destination() is new in RPC 5.3, only # call it if we performed a NUMA-aware live migration (which implies us # being able to send RPC 5.3). To check this, we can use the # src_supports_numa_live_migration flag, as it will be set if and only # if: # - dst_supports_numa_live_migration made its way to the source # (meaning both dest and source are new and conductor can speak # RPC 5.3) # - src_supports_numa_live_migration was set by the source driver and # passed the send-RPC-5.3 check. # This check can be removed in RPC 6.0. if ('src_supports_numa_live_migration' in migrate_data and migrate_data.src_supports_numa_live_migration): LOG.debug('Calling destination to drop move claim.', instance=instance) self.compute_rpcapi.drop_move_claim_at_destination(context, instance, dest) # NOTE(luyao): We only update instance info after rollback operations # are complete instance.task_state = None instance.progress = 0 instance.drop_migration_context() instance.save(expected_task_state=[task_states.MIGRATING]) self._notify_about_instance_usage(context, instance, "live_migration._rollback.end") compute_utils.notify_about_instance_action(context, instance, self.host, action=fields.NotificationAction.LIVE_MIGRATION_ROLLBACK, phase=fields.NotificationPhase.END, bdms=bdms) # TODO(luyao): set migration status to 'failed' but not 'error' # which means rollback_live_migration is done, we have successfully # cleaned up and returned instance back to normal status. self._set_migration_status(migration, migration_status) @wrap_exception() @wrap_instance_fault def drop_move_claim_at_destination(self, context, instance): """Called by the source of a live migration during rollback to ask the destination to drop the MoveClaim object that was created for the live migration on the destination. """ nodename = self._get_nodename(instance) LOG.debug('Dropping live migration resource claim on destination ' 'node %s', nodename, instance=instance) self.rt.drop_move_claim( context, instance, nodename, instance_type=instance.flavor) @wrap_exception() @wrap_instance_event(prefix='compute') @wrap_instance_fault def rollback_live_migration_at_destination(self, context, instance, destroy_disks, migrate_data): """Cleaning up image directory that is created pre_live_migration. :param context: security context :param instance: a nova.objects.instance.Instance object sent over rpc :param destroy_disks: whether to destroy volumes or not :param migrate_data: contains migration info """ network_info = self.network_api.get_instance_nw_info(context, instance) self._notify_about_instance_usage( context, instance, "live_migration.rollback.dest.start", network_info=network_info) compute_utils.notify_about_instance_action( context, instance, self.host, action=fields.NotificationAction.LIVE_MIGRATION_ROLLBACK_DEST, phase=fields.NotificationPhase.START) try: # NOTE(tr3buchet): tear down networks on dest host (nova-net) # NOTE(mriedem): For neutron, this call will delete any # destination host port bindings. # TODO(mriedem): We should eventually remove this call from # this method (rollback_live_migration_at_destination) since this # method is only called conditionally based on whether or not the # instance is running on shared storage. _rollback_live_migration # already calls this method for neutron if we are running on # shared storage. 
self.network_api.setup_networks_on_host(context, instance, self.host, teardown=True) except exception.PortBindingDeletionFailed as e: # Removing the inactive port bindings from the destination # host is not critical so just log an error but don't fail. LOG.error( 'Network cleanup failed for destination host %s ' 'during live migration rollback. You may need to ' 'manually clean up resources in the network service. ' 'Error: %s', self.host, six.text_type(e)) except Exception: with excutils.save_and_reraise_exception(): # NOTE(tdurakov): even if teardown networks fails driver # should try to rollback live migration on destination. LOG.exception('An error occurred while deallocating network.', instance=instance) finally: # always run this even if setup_networks_on_host fails # NOTE(vish): The mapping is passed in so the driver can disconnect # from remote volumes if necessary block_device_info = self._get_instance_block_device_info(context, instance) # free any instance PCI claims done on destination during # check_can_live_migrate_destination() self.rt.free_pci_device_claims_for_instance(context, instance) # NOTE(luyao): Apply migration_context temporarily since it's # on destination host, we rely on instance object to cleanup # specific resources like vpmem with instance.mutated_migration_context(): self.driver.rollback_live_migration_at_destination( context, instance, network_info, block_device_info, destroy_disks=destroy_disks, migrate_data=migrate_data) self._notify_about_instance_usage( context, instance, "live_migration.rollback.dest.end", network_info=network_info) compute_utils.notify_about_instance_action( context, instance, self.host, action=fields.NotificationAction.LIVE_MIGRATION_ROLLBACK_DEST, phase=fields.NotificationPhase.END) def _require_nw_info_update(self, context, instance): """Detect whether there is a mismatch in binding:host_id, or binding_failed or unbound binding:vif_type for any of the instances ports. """ # Only update port bindings if compute manager does manage port # bindings instead of the compute driver. For example IronicDriver # manages the port binding for baremetal instance ports, hence, # external intervention with the binding is not desired. if self.driver.manages_network_binding_host_id(): return False search_opts = {'device_id': instance.uuid, 'fields': ['binding:host_id', 'binding:vif_type']} ports = self.network_api.list_ports(context, **search_opts) for p in ports['ports']: if p.get('binding:host_id') != self.host: return True vif_type = p.get('binding:vif_type') if (vif_type == network_model.VIF_TYPE_UNBOUND or vif_type == network_model.VIF_TYPE_BINDING_FAILED): return True return False @periodic_task.periodic_task( spacing=CONF.heal_instance_info_cache_interval) def _heal_instance_info_cache(self, context): """Called periodically. On every call, try to update the info_cache's network information for another instance by calling to the network manager. This is implemented by keeping a cache of uuids of instances that live on this host. On each call, we pop one off of a list, pull the DB record, and try the call to the network API. If anything errors don't fail, as it's possible the instance has been deleted, etc. 
""" heal_interval = CONF.heal_instance_info_cache_interval if not heal_interval: return instance_uuids = getattr(self, '_instance_uuids_to_heal', []) instance = None LOG.debug('Starting heal instance info cache') if not instance_uuids: # The list of instances to heal is empty so rebuild it LOG.debug('Rebuilding the list of instances to heal') db_instances = objects.InstanceList.get_by_host( context, self.host, expected_attrs=[], use_slave=True) for inst in db_instances: # We don't want to refresh the cache for instances # which are building or deleting so don't put them # in the list. If they are building they will get # added to the list next time we build it. if (inst.vm_state == vm_states.BUILDING): LOG.debug('Skipping network cache update for instance ' 'because it is Building.', instance=inst) continue if (inst.task_state == task_states.DELETING): LOG.debug('Skipping network cache update for instance ' 'because it is being deleted.', instance=inst) continue if not instance: # Save the first one we find so we don't # have to get it again instance = inst else: instance_uuids.append(inst['uuid']) self._instance_uuids_to_heal = instance_uuids else: # Find the next valid instance on the list while instance_uuids: try: inst = objects.Instance.get_by_uuid( context, instance_uuids.pop(0), expected_attrs=['system_metadata', 'info_cache', 'flavor'], use_slave=True) except exception.InstanceNotFound: # Instance is gone. Try to grab another. continue # Check the instance hasn't been migrated if inst.host != self.host: LOG.debug('Skipping network cache update for instance ' 'because it has been migrated to another ' 'host.', instance=inst) # Check the instance isn't being deleting elif inst.task_state == task_states.DELETING: LOG.debug('Skipping network cache update for instance ' 'because it is being deleted.', instance=inst) else: instance = inst break if instance: # We have an instance now to refresh try: # Fix potential mismatch in port binding if evacuation failed # after reassigning the port binding to the dest host but # before the instance host is changed. # Do this only when instance has no pending task. if instance.task_state is None and \ self._require_nw_info_update(context, instance): LOG.info("Updating ports in neutron", instance=instance) self.network_api.setup_instance_network_on_host( context, instance, self.host) # Call to network API to get instance info.. this will # force an update to the instance's info_cache self.network_api.get_instance_nw_info( context, instance, force_refresh=True) LOG.debug('Updated the network info_cache for instance', instance=instance) except exception.InstanceNotFound: # Instance is gone. LOG.debug('Instance no longer exists. Unable to refresh', instance=instance) return except exception.InstanceInfoCacheNotFound: # InstanceInfoCache is gone. LOG.debug('InstanceInfoCache no longer exists. 
' 'Unable to refresh', instance=instance) except Exception: LOG.error('An error occurred while refreshing the network ' 'cache.', instance=instance, exc_info=True) else: LOG.debug("Didn't find any instances for network info cache " "update.") @periodic_task.periodic_task def _poll_rebooting_instances(self, context): if CONF.reboot_timeout > 0: filters = {'task_state': [task_states.REBOOTING, task_states.REBOOT_STARTED, task_states.REBOOT_PENDING], 'host': self.host} rebooting = objects.InstanceList.get_by_filters( context, filters, expected_attrs=[], use_slave=True) to_poll = [] for instance in rebooting: if timeutils.is_older_than(instance.updated_at, CONF.reboot_timeout): to_poll.append(instance) self.driver.poll_rebooting_instances(CONF.reboot_timeout, to_poll) @periodic_task.periodic_task def _poll_rescued_instances(self, context): if CONF.rescue_timeout > 0: filters = {'vm_state': vm_states.RESCUED, 'host': self.host} rescued_instances = objects.InstanceList.get_by_filters( context, filters, expected_attrs=["system_metadata"], use_slave=True) to_unrescue = [] for instance in rescued_instances: if timeutils.is_older_than(instance.launched_at, CONF.rescue_timeout): to_unrescue.append(instance) for instance in to_unrescue: self.compute_api.unrescue(context, instance) @periodic_task.periodic_task def _poll_unconfirmed_resizes(self, context): if CONF.resize_confirm_window == 0: return migrations = objects.MigrationList.get_unconfirmed_by_dest_compute( context, CONF.resize_confirm_window, self.host, use_slave=True) migrations_info = dict(migration_count=len(migrations), confirm_window=CONF.resize_confirm_window) if migrations_info["migration_count"] > 0: LOG.info("Found %(migration_count)d unconfirmed migrations " "older than %(confirm_window)d seconds", migrations_info) def _set_migration_to_error(migration, reason, **kwargs): LOG.warning("Setting migration %(migration_id)s to error: " "%(reason)s", {'migration_id': migration['id'], 'reason': reason}, **kwargs) migration.status = 'error' migration.save() for migration in migrations: instance_uuid = migration.instance_uuid LOG.info("Automatically confirming migration " "%(migration_id)s for instance %(instance_uuid)s", {'migration_id': migration.id, 'instance_uuid': instance_uuid}) expected_attrs = ['metadata', 'system_metadata'] try: instance = objects.Instance.get_by_uuid(context, instance_uuid, expected_attrs=expected_attrs, use_slave=True) except exception.InstanceNotFound: reason = (_("Instance %s not found") % instance_uuid) _set_migration_to_error(migration, reason) continue if instance.vm_state == vm_states.ERROR: reason = _("In ERROR state") _set_migration_to_error(migration, reason, instance=instance) continue # race condition: The instance in DELETING state should not be # set the migration state to error, otherwise the instance in # to be deleted which is in RESIZED state # will not be able to confirm resize if instance.task_state in [task_states.DELETING, task_states.SOFT_DELETING]: msg = ("Instance being deleted or soft deleted during resize " "confirmation. Skipping.") LOG.debug(msg, instance=instance) continue # race condition: This condition is hit when this method is # called between the save of the migration record with a status of # finished and the save of the instance object with a state of # RESIZED. The migration record should not be set to error. if instance.task_state == task_states.RESIZE_FINISH: msg = ("Instance still resizing during resize " "confirmation. 
Skipping.") LOG.debug(msg, instance=instance) continue vm_state = instance.vm_state task_state = instance.task_state if vm_state != vm_states.RESIZED or task_state is not None: reason = (_("In states %(vm_state)s/%(task_state)s, not " "RESIZED/None") % {'vm_state': vm_state, 'task_state': task_state}) _set_migration_to_error(migration, reason, instance=instance) continue try: self.compute_api.confirm_resize(context, instance, migration=migration) except Exception as e: LOG.info("Error auto-confirming resize: %s. " "Will retry later.", e, instance=instance) @periodic_task.periodic_task(spacing=CONF.shelved_poll_interval) def _poll_shelved_instances(self, context): if CONF.shelved_offload_time <= 0: return filters = {'vm_state': vm_states.SHELVED, 'task_state': None, 'host': self.host} shelved_instances = objects.InstanceList.get_by_filters( context, filters=filters, expected_attrs=['system_metadata'], use_slave=True) to_gc = [] for instance in shelved_instances: sys_meta = instance.system_metadata shelved_at = timeutils.parse_strtime(sys_meta['shelved_at']) if timeutils.is_older_than(shelved_at, CONF.shelved_offload_time): to_gc.append(instance) for instance in to_gc: try: instance.task_state = task_states.SHELVING_OFFLOADING instance.save(expected_task_state=(None,)) self.shelve_offload_instance(context, instance, clean_shutdown=False) except Exception: LOG.exception('Periodic task failed to offload instance.', instance=instance) @periodic_task.periodic_task def _instance_usage_audit(self, context): if not CONF.instance_usage_audit: return begin, end = utils.last_completed_audit_period() if objects.TaskLog.get(context, 'instance_usage_audit', begin, end, self.host): return instances = objects.InstanceList.get_active_by_window_joined( context, begin, end, host=self.host, expected_attrs=['system_metadata', 'info_cache', 'metadata', 'flavor'], use_slave=True) num_instances = len(instances) errors = 0 successes = 0 LOG.info("Running instance usage audit for host %(host)s " "from %(begin_time)s to %(end_time)s. " "%(number_instances)s instances.", {'host': self.host, 'begin_time': begin, 'end_time': end, 'number_instances': num_instances}) start_time = time.time() task_log = objects.TaskLog(context) task_log.task_name = 'instance_usage_audit' task_log.period_beginning = begin task_log.period_ending = end task_log.host = self.host task_log.task_items = num_instances task_log.message = 'Instance usage audit started...' task_log.begin_task() for instance in instances: try: compute_utils.notify_usage_exists( self.notifier, context, instance, self.host, ignore_missing_network_data=False) successes += 1 except Exception: LOG.exception('Failed to generate usage ' 'audit for instance ' 'on host %s', self.host, instance=instance) errors += 1 task_log.errors = errors task_log.message = ( 'Instance usage audit ran for host %s, %s instances in %s seconds.' 
% (self.host, num_instances, time.time() - start_time)) task_log.end_task() @periodic_task.periodic_task(spacing=CONF.bandwidth_poll_interval) def _poll_bandwidth_usage(self, context): if not self._bw_usage_supported: return prev_time, start_time = utils.last_completed_audit_period() curr_time = time.time() if (curr_time - self._last_bw_usage_poll > CONF.bandwidth_poll_interval): self._last_bw_usage_poll = curr_time LOG.info("Updating bandwidth usage cache") instances = objects.InstanceList.get_by_host(context, self.host, use_slave=True) try: bw_counters = self.driver.get_all_bw_counters(instances) except NotImplementedError: # NOTE(mdragon): Not all hypervisors have bandwidth polling # implemented yet. If they don't it doesn't break anything, # they just don't get the info in the usage events. # NOTE(PhilDay): Record that its not supported so we can # skip fast on future calls rather than waste effort getting # the list of instances. LOG.info("Bandwidth usage not supported by %(driver)s.", {'driver': CONF.compute_driver}) self._bw_usage_supported = False return refreshed = timeutils.utcnow() for bw_ctr in bw_counters: # Allow switching of greenthreads between queries. greenthread.sleep(0) bw_in = 0 bw_out = 0 last_ctr_in = None last_ctr_out = None usage = objects.BandwidthUsage.get_by_instance_uuid_and_mac( context, bw_ctr['uuid'], bw_ctr['mac_address'], start_period=start_time, use_slave=True) if usage: bw_in = usage.bw_in bw_out = usage.bw_out last_ctr_in = usage.last_ctr_in last_ctr_out = usage.last_ctr_out else: usage = (objects.BandwidthUsage. get_by_instance_uuid_and_mac( context, bw_ctr['uuid'], bw_ctr['mac_address'], start_period=prev_time, use_slave=True)) if usage: last_ctr_in = usage.last_ctr_in last_ctr_out = usage.last_ctr_out if last_ctr_in is not None: if bw_ctr['bw_in'] < last_ctr_in: # counter rollover bw_in += bw_ctr['bw_in'] else: bw_in += (bw_ctr['bw_in'] - last_ctr_in) if last_ctr_out is not None: if bw_ctr['bw_out'] < last_ctr_out: # counter rollover bw_out += bw_ctr['bw_out'] else: bw_out += (bw_ctr['bw_out'] - last_ctr_out) objects.BandwidthUsage(context=context).create( bw_ctr['uuid'], bw_ctr['mac_address'], bw_in, bw_out, bw_ctr['bw_in'], bw_ctr['bw_out'], start_period=start_time, last_refreshed=refreshed) def _get_host_volume_bdms(self, context, use_slave=False): """Return all block device mappings on a compute host.""" compute_host_bdms = [] instances = objects.InstanceList.get_by_host(context, self.host, use_slave=use_slave) for instance in instances: bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid, use_slave=use_slave) instance_bdms = [bdm for bdm in bdms if bdm.is_volume] compute_host_bdms.append(dict(instance=instance, instance_bdms=instance_bdms)) return compute_host_bdms def _update_volume_usage_cache(self, context, vol_usages): """Updates the volume usage cache table with a list of stats.""" for usage in vol_usages: # Allow switching of greenthreads between queries. 
greenthread.sleep(0) vol_usage = objects.VolumeUsage(context) vol_usage.volume_id = usage['volume'] vol_usage.instance_uuid = usage['instance'].uuid vol_usage.project_id = usage['instance'].project_id vol_usage.user_id = usage['instance'].user_id vol_usage.availability_zone = usage['instance'].availability_zone vol_usage.curr_reads = usage['rd_req'] vol_usage.curr_read_bytes = usage['rd_bytes'] vol_usage.curr_writes = usage['wr_req'] vol_usage.curr_write_bytes = usage['wr_bytes'] vol_usage.save() self.notifier.info(context, 'volume.usage', vol_usage.to_dict()) compute_utils.notify_about_volume_usage(context, vol_usage, self.host) @periodic_task.periodic_task(spacing=CONF.volume_usage_poll_interval) def _poll_volume_usage(self, context): if CONF.volume_usage_poll_interval == 0: return compute_host_bdms = self._get_host_volume_bdms(context, use_slave=True) if not compute_host_bdms: return LOG.debug("Updating volume usage cache") try: vol_usages = self.driver.get_all_volume_usage(context, compute_host_bdms) except NotImplementedError: return self._update_volume_usage_cache(context, vol_usages) @periodic_task.periodic_task(spacing=CONF.sync_power_state_interval, run_immediately=True) def _sync_power_states(self, context): """Align power states between the database and the hypervisor. To sync power state data we make a DB call to get the number of virtual machines known by the hypervisor and if the number matches the number of virtual machines known by the database, we proceed in a lazy loop, one database record at a time, checking if the hypervisor has the same power state as is in the database. """ db_instances = objects.InstanceList.get_by_host(context, self.host, expected_attrs=[], use_slave=True) try: num_vm_instances = self.driver.get_num_instances() except exception.VirtDriverNotReady as e: # If the virt driver is not ready, like ironic-api not being up # yet in the case of ironic, just log it and exit. LOG.info('Skipping _sync_power_states periodic task due to: %s', e) return num_db_instances = len(db_instances) if num_vm_instances != num_db_instances: LOG.warning("While synchronizing instance power states, found " "%(num_db_instances)s instances in the database " "and %(num_vm_instances)s instances on the " "hypervisor.", {'num_db_instances': num_db_instances, 'num_vm_instances': num_vm_instances}) def _sync(db_instance): # NOTE(melwitt): This must be synchronized as we query state from # two separate sources, the driver and the database. # They are set (in stop_instance) and read, in sync. @utils.synchronized(db_instance.uuid) def query_driver_power_state_and_sync(): self._query_driver_power_state_and_sync(context, db_instance) try: query_driver_power_state_and_sync() except Exception: LOG.exception("Periodic sync_power_state task had an " "error while processing an instance.", instance=db_instance) self._syncs_in_progress.pop(db_instance.uuid) for db_instance in db_instances: # process syncs asynchronously - don't want instance locking to # block entire periodic task thread uuid = db_instance.uuid if uuid in self._syncs_in_progress: LOG.debug('Sync already in progress for %s', uuid) else: LOG.debug('Triggering sync for uuid %s', uuid) self._syncs_in_progress[uuid] = True self._sync_power_pool.spawn_n(_sync, db_instance) def _query_driver_power_state_and_sync(self, context, db_instance): if db_instance.task_state is not None: LOG.info("During sync_power_state the instance has a " "pending task (%(task)s). 
Skip.", {'task': db_instance.task_state}, instance=db_instance) return # No pending tasks. Now try to figure out the real vm_power_state. try: vm_instance = self.driver.get_info(db_instance) vm_power_state = vm_instance.state except exception.InstanceNotFound: vm_power_state = power_state.NOSTATE # Note(maoy): the above get_info call might take a long time, # for example, because of a broken libvirt driver. try: self._sync_instance_power_state(context, db_instance, vm_power_state, use_slave=True) except exception.InstanceNotFound: # NOTE(hanlind): If the instance gets deleted during sync, # silently ignore. pass def _stop_unexpected_shutdown_instance(self, context, vm_state, db_instance, orig_db_power_state): # this is an exceptional case; make sure our data is up # to date before slamming through a power off vm_instance = self.driver.get_info(db_instance, use_cache=False) vm_power_state = vm_instance.state # if it still looks off, go ahead and call stop() if vm_power_state in (power_state.SHUTDOWN, power_state.CRASHED): LOG.warning("Instance shutdown by itself. Calling the " "stop API. Current vm_state: %(vm_state)s, " "current task_state: %(task_state)s, " "original DB power_state: %(db_power_state)s, " "current VM power_state: %(vm_power_state)s", {'vm_state': vm_state, 'task_state': db_instance.task_state, 'db_power_state': orig_db_power_state, 'vm_power_state': vm_power_state}, instance=db_instance) try: # Note(maoy): here we call the API instead of # brutally updating the vm_state in the database # to allow all the hooks and checks to be performed. if db_instance.shutdown_terminate: self.compute_api.delete(context, db_instance) else: self.compute_api.stop(context, db_instance) except Exception: # Note(maoy): there is no need to propagate the error # because the same power_state will be retrieved next # time and retried. # For example, there might be another task scheduled. LOG.exception("error during stop() in sync_power_state.", instance=db_instance) def _sync_instance_power_state(self, context, db_instance, vm_power_state, use_slave=False): """Align instance power state between the database and hypervisor. If the instance is not found on the hypervisor, but is in the database, then a stop() API will be called on the instance. """ # We re-query the DB to get the latest instance info to minimize # (not eliminate) race condition. db_instance.refresh(use_slave=use_slave) db_power_state = db_instance.power_state vm_state = db_instance.vm_state if self.host != db_instance.host: # on the sending end of nova-compute _sync_power_state # may have yielded to the greenthread performing a live # migration; this in turn has changed the resident-host # for the VM; However, the instance is still active, it # is just in the process of migrating to another host. # This implies that the compute source must relinquish # control to the compute destination. LOG.info("During the sync_power process the " "instance has moved from " "host %(src)s to host %(dst)s", {'src': db_instance.host, 'dst': self.host}, instance=db_instance) return elif db_instance.task_state is not None: # on the receiving end of nova-compute, it could happen # that the DB instance already report the new resident # but the actual VM has not showed up on the hypervisor # yet. In this case, let's allow the loop to continue # and run the state sync in a later round LOG.info("During sync_power_state the instance has a " "pending task (%(task)s). 
Skip.", {'task': db_instance.task_state}, instance=db_instance) return orig_db_power_state = db_power_state if vm_power_state != db_power_state: LOG.info('During _sync_instance_power_state the DB ' 'power_state (%(db_power_state)s) does not match ' 'the vm_power_state from the hypervisor ' '(%(vm_power_state)s). Updating power_state in the ' 'DB to match the hypervisor.', {'db_power_state': db_power_state, 'vm_power_state': vm_power_state}, instance=db_instance) # power_state is always updated from hypervisor to db db_instance.power_state = vm_power_state db_instance.save() db_power_state = vm_power_state # Note(maoy): Now resolve the discrepancy between vm_state and # vm_power_state. We go through all possible vm_states. if vm_state in (vm_states.BUILDING, vm_states.RESCUED, vm_states.RESIZED, vm_states.SUSPENDED, vm_states.ERROR): # TODO(maoy): we ignore these vm_state for now. pass elif vm_state == vm_states.ACTIVE: # The only rational power state should be RUNNING if vm_power_state in (power_state.SHUTDOWN, power_state.CRASHED): self._stop_unexpected_shutdown_instance( context, vm_state, db_instance, orig_db_power_state) elif vm_power_state == power_state.SUSPENDED: LOG.warning("Instance is suspended unexpectedly. Calling " "the stop API.", instance=db_instance) try: self.compute_api.stop(context, db_instance) except Exception: LOG.exception("error during stop() in sync_power_state.", instance=db_instance) elif vm_power_state == power_state.PAUSED: # Note(maoy): a VM may get into the paused state not only # because the user request via API calls, but also # due to (temporary) external instrumentations. # Before the virt layer can reliably report the reason, # we simply ignore the state discrepancy. In many cases, # the VM state will go back to running after the external # instrumentation is done. See bug 1097806 for details. LOG.warning("Instance is paused unexpectedly. Ignore.", instance=db_instance) elif vm_power_state == power_state.NOSTATE: # Occasionally, depending on the status of the hypervisor, # which could be restarting for example, an instance may # not be found. Therefore just log the condition. LOG.warning("Instance is unexpectedly not found. Ignore.", instance=db_instance) elif vm_state == vm_states.STOPPED: if vm_power_state not in (power_state.NOSTATE, power_state.SHUTDOWN, power_state.CRASHED): LOG.warning("Instance is not stopped. Calling " "the stop API. Current vm_state: %(vm_state)s," " current task_state: %(task_state)s, " "original DB power_state: %(db_power_state)s, " "current VM power_state: %(vm_power_state)s", {'vm_state': vm_state, 'task_state': db_instance.task_state, 'db_power_state': orig_db_power_state, 'vm_power_state': vm_power_state}, instance=db_instance) try: # NOTE(russellb) Force the stop, because normally the # compute API would not allow an attempt to stop a stopped # instance. self.compute_api.force_stop(context, db_instance) except Exception: LOG.exception("error during stop() in sync_power_state.", instance=db_instance) elif vm_state == vm_states.PAUSED: if vm_power_state in (power_state.SHUTDOWN, power_state.CRASHED): LOG.warning("Paused instance shutdown by itself. 
Calling " "the stop API.", instance=db_instance) try: self.compute_api.force_stop(context, db_instance) except Exception: LOG.exception("error during stop() in sync_power_state.", instance=db_instance) elif vm_state in (vm_states.SOFT_DELETED, vm_states.DELETED): if vm_power_state not in (power_state.NOSTATE, power_state.SHUTDOWN): # Note(maoy): this should be taken care of periodically in # _cleanup_running_deleted_instances(). LOG.warning("Instance is not (soft-)deleted.", instance=db_instance) @periodic_task.periodic_task def _reclaim_queued_deletes(self, context): """Reclaim instances that are queued for deletion.""" interval = CONF.reclaim_instance_interval if interval <= 0: LOG.debug("CONF.reclaim_instance_interval <= 0, skipping...") return filters = {'vm_state': vm_states.SOFT_DELETED, 'task_state': None, 'host': self.host} instances = objects.InstanceList.get_by_filters( context, filters, expected_attrs=objects.instance.INSTANCE_DEFAULT_FIELDS, use_slave=True) for instance in instances: if self._deleted_old_enough(instance, interval): bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) LOG.info('Reclaiming deleted instance', instance=instance) try: self._delete_instance(context, instance, bdms) except Exception as e: LOG.warning("Periodic reclaim failed to delete " "instance: %s", e, instance=instance) def _get_nodename(self, instance, refresh=False): """Helper method to get the name of the first available node on this host. This method should not be used with any operations on ironic instances since it does not handle multiple nodes. """ node = self.driver.get_available_nodes(refresh=refresh)[0] LOG.debug("No node specified, defaulting to %s", node, instance=instance) return node def _update_available_resource_for_node(self, context, nodename, startup=False): try: self.rt.update_available_resource(context, nodename, startup=startup) except exception.ComputeHostNotFound: LOG.warning("Compute node '%s' not found in " "update_available_resource.", nodename) except exception.ReshapeFailed: # We're only supposed to get here on startup, if a reshape was # needed, was attempted, and failed. We want to kill the service. with excutils.save_and_reraise_exception(): LOG.critical("Resource provider data migration failed " "fatally during startup for node %s.", nodename) except exception.ReshapeNeeded: # This exception should only find its way here if the virt driver's # update_provider_tree raised it incorrectly: either # a) After the resource tracker already caught it once and # reinvoked update_provider_tree with allocations. At this point # the driver is just supposed to *do* the reshape, so if it raises # ReshapeNeeded, it's a bug, and we want to kill the compute # service. # b) On periodic rather than startup (we only allow reshapes to # happen on startup). In this case we'll just make the logs red and # go again at the next periodic interval, where the same thing may # or may not happen again. Depending on the previous and intended # shape of the providers/inventories, this may not actually cause # any immediately visible symptoms (in terms of scheduling, etc.) # If this becomes a problem, we may wish to make it pop immediately # (e.g. disable the service). 
with excutils.save_and_reraise_exception(): LOG.exception("ReshapeNeeded exception is unexpected here!") except Exception: LOG.exception("Error updating resources for node %(node)s.", {'node': nodename}) @periodic_task.periodic_task(spacing=CONF.update_resources_interval) def update_available_resource(self, context, startup=False): """See driver.get_available_resource() Periodic process that keeps that the compute host's understanding of resource availability and usage in sync with the underlying hypervisor. :param context: security context :param startup: True if this is being called when the nova-compute service is starting, False otherwise. """ try: nodenames = set(self.driver.get_available_nodes()) except exception.VirtDriverNotReady: LOG.warning("Virt driver is not ready.") return compute_nodes_in_db = self._get_compute_nodes_in_db(context, nodenames, use_slave=True, startup=startup) # Delete orphan compute node not reported by driver but still in db for cn in compute_nodes_in_db: if cn.hypervisor_hostname not in nodenames: LOG.info("Deleting orphan compute node %(id)s " "hypervisor host is %(hh)s, " "nodes are %(nodes)s", {'id': cn.id, 'hh': cn.hypervisor_hostname, 'nodes': nodenames}) cn.destroy() self.rt.remove_node(cn.hypervisor_hostname) # Delete the corresponding resource provider in placement, # along with any associated allocations. try: self.reportclient.delete_resource_provider(context, cn, cascade=True) except keystone_exception.ClientException as e: LOG.error( "Failed to delete compute node resource provider " "for compute node %s: %s", cn.uuid, six.text_type(e)) for nodename in nodenames: self._update_available_resource_for_node(context, nodename, startup=startup) def _get_compute_nodes_in_db(self, context, nodenames, use_slave=False, startup=False): try: return objects.ComputeNodeList.get_all_by_host(context, self.host, use_slave=use_slave) except exception.NotFound: # If the driver is not reporting any nodenames we should not # expect there to be compute nodes so we just return in that case. # For example, this could be an ironic compute and it is not # managing any nodes yet. if nodenames: if startup: LOG.warning( "No compute node record found for host %s. If this is " "the first time this service is starting on this " "host, then you can ignore this warning.", self.host) else: LOG.error("No compute node record for host %s", self.host) return [] @periodic_task.periodic_task( spacing=CONF.running_deleted_instance_poll_interval, run_immediately=True) def _cleanup_running_deleted_instances(self, context): """Cleanup any instances which are erroneously still running after having been deleted. Valid actions to take are: 1. noop - do nothing 2. log - log which instances are erroneously running 3. reap - shutdown and cleanup any erroneously running instances 4. shutdown - power off *and disable* any erroneously running instances The use-case for this cleanup task is: for various reasons, it may be possible for the database to show an instance as deleted but for that instance to still be running on a host machine (see bug https://bugs.launchpad.net/nova/+bug/911366). This cleanup task is a cross-hypervisor utility for finding these zombied instances and either logging the discrepancy (likely what you should do in production), or automatically reaping the instances (more appropriate for dev environments). 
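        The behaviour is selected via the CONF.running_deleted_instance_action
        option; 'noop' makes the task return immediately, and an unrecognized
        value causes it to raise an error.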
""" action = CONF.running_deleted_instance_action if action == "noop": return # NOTE(sirp): admin contexts don't ordinarily return deleted records with utils.temporary_mutation(context, read_deleted="yes"): try: instances = self._running_deleted_instances(context) except exception.VirtDriverNotReady: # Since this task runs immediately on startup, if the # hypervisor is not yet ready handle it gracefully. LOG.debug('Unable to check for running deleted instances ' 'at this time since the hypervisor is not ready.') return for instance in instances: if action == "log": LOG.warning("Detected instance with name label " "'%s' which is marked as " "DELETED but still present on host.", instance.name, instance=instance) elif action == 'shutdown': LOG.info("Powering off instance with name label " "'%s' which is marked as " "DELETED but still present on host.", instance.name, instance=instance) try: try: # disable starting the instance self.driver.set_bootable(instance, False) except NotImplementedError: LOG.debug("set_bootable is not implemented " "for the current driver") # and power it off self.driver.power_off(instance) except Exception: LOG.warning("Failed to power off instance", instance=instance, exc_info=True) elif action == 'reap': LOG.info("Destroying instance with name label " "'%s' which is marked as " "DELETED but still present on host.", instance.name, instance=instance) bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid, use_slave=True) self.instance_events.clear_events_for_instance(instance) try: self._shutdown_instance(context, instance, bdms, notify=False) self._cleanup_volumes(context, instance, bdms, detach=False) except Exception as e: LOG.warning("Periodic cleanup failed to delete " "instance: %s", e, instance=instance) else: raise Exception(_("Unrecognized value '%s'" " for CONF.running_deleted_" "instance_action") % action) def _running_deleted_instances(self, context): """Returns a list of instances nova thinks is deleted, but the hypervisor thinks is still running. """ timeout = CONF.running_deleted_instance_timeout filters = {'deleted': True, 'soft_deleted': False} instances = self._get_instances_on_driver(context, filters) return [i for i in instances if self._deleted_old_enough(i, timeout)] def _deleted_old_enough(self, instance, timeout): deleted_at = instance.deleted_at if deleted_at: deleted_at = deleted_at.replace(tzinfo=None) return (not deleted_at or timeutils.is_older_than(deleted_at, timeout)) @contextlib.contextmanager def _error_out_instance_on_exception(self, context, instance, instance_state=vm_states.ACTIVE): """Context manager to set instance.vm_state after some operation raises Used to handle NotImplementedError and InstanceFaultRollback errors and reset the instance vm_state and task_state. The vm_state is set to the $instance_state parameter and task_state is set to None. For all other types of exceptions, the vm_state is set to ERROR and the task_state is left unchanged (although most callers will have the @reverts_task_state decorator which will set the task_state to None). Re-raises the original exception *except* in the case of InstanceFaultRollback in which case the wrapped `inner_exception` is re-raised. :param context: The nova auth request context for the operation. :param instance: The instance to update. The vm_state will be set by this context manager when an exception is raised. 
:param instance_state: For NotImplementedError and InstanceFaultRollback this is the vm_state to set the instance to when handling one of those types of exceptions. By default the instance will be set to ACTIVE, but the caller should control this in case there have been no changes to the running state of the instance. For example, resizing a stopped server where prep_resize fails early and does not change the power state of the guest should not set the instance status to ACTIVE but remain STOPPED. This parameter is ignored for all other types of exceptions and the instance vm_state is set to ERROR. """ # NOTE(mriedem): Why doesn't this method just save off the # original instance.vm_state here rather than use a parameter? Or use # instance_state=None as an override but default to the current # vm_state when rolling back. instance_uuid = instance.uuid try: yield except (NotImplementedError, exception.InstanceFaultRollback) as error: # Use reraise=False to determine if we want to raise the original # exception or something else. with excutils.save_and_reraise_exception(reraise=False) as ctxt: LOG.info("Setting instance back to %(state)s after: %(error)s", {'state': instance_state, 'error': error}, instance_uuid=instance_uuid) self._instance_update(context, instance, vm_state=instance_state, task_state=None) if isinstance(error, exception.InstanceFaultRollback): # Raise the wrapped exception. raise error.inner_exception # Else re-raise the NotImplementedError. ctxt.reraise = True except Exception: LOG.exception('Setting instance vm_state to ERROR', instance_uuid=instance_uuid) with excutils.save_and_reraise_exception(): # NOTE(mriedem): Why don't we pass clean_task_state=True here? self._set_instance_obj_error_state(context, instance) @wrap_exception() def add_aggregate_host(self, context, aggregate, host, slave_info): """Notify hypervisor of change (for hypervisor pools).""" try: self.driver.add_to_aggregate(context, aggregate, host, slave_info=slave_info) except NotImplementedError: LOG.debug('Hypervisor driver does not support ' 'add_aggregate_host') except exception.AggregateError: with excutils.save_and_reraise_exception(): self.driver.undo_aggregate_operation( context, aggregate.delete_host, aggregate, host) @wrap_exception() def remove_aggregate_host(self, context, host, slave_info, aggregate): """Removes a host from a physical hypervisor pool.""" try: self.driver.remove_from_aggregate(context, aggregate, host, slave_info=slave_info) except NotImplementedError: LOG.debug('Hypervisor driver does not support ' 'remove_aggregate_host') except (exception.AggregateError, exception.InvalidAggregateAction) as e: with excutils.save_and_reraise_exception(): self.driver.undo_aggregate_operation( context, aggregate.add_host, aggregate, host, isinstance(e, exception.AggregateError)) def _process_instance_event(self, instance, event): _event = self.instance_events.pop_instance_event(instance, event) if _event: LOG.debug('Processing event %(event)s', {'event': event.key}, instance=instance) _event.send(event) else: # If it's a network-vif-unplugged event and the instance is being # deleted or live migrated then we don't need to make this a # warning as it's expected. There are other expected things which # could trigger this event like detaching an interface, but we # don't have a task state for that. # TODO(mriedem): We have other move operations and things like # hard reboot (probably rebuild as well) which trigger this event # but nothing listens for network-vif-unplugged. 
We should either # handle those other known cases or consider just not logging a # warning if we get this event and the instance is undergoing some # task state transition. if (event.name == 'network-vif-unplugged' and instance.task_state in ( task_states.DELETING, task_states.MIGRATING)): LOG.debug('Received event %s for instance with task_state %s.', event.key, instance.task_state, instance=instance) else: LOG.warning('Received unexpected event %(event)s for ' 'instance with vm_state %(vm_state)s and ' 'task_state %(task_state)s.', {'event': event.key, 'vm_state': instance.vm_state, 'task_state': instance.task_state}, instance=instance) def _process_instance_vif_deleted_event(self, context, instance, deleted_vif_id): # If an attached port is deleted by neutron, it needs to # be detached from the instance. # And info cache needs to be updated. network_info = instance.info_cache.network_info for index, vif in enumerate(network_info): if vif['id'] == deleted_vif_id: LOG.info('Neutron deleted interface %(intf)s; ' 'detaching it from the instance and ' 'deleting it from the info cache', {'intf': vif['id']}, instance=instance) profile = vif.get('profile', {}) or {} # profile can be None if profile.get('allocation'): LOG.error( 'The bound port %(port_id)s is deleted in Neutron but ' 'the resource allocation on the resource provider ' '%(rp_uuid)s is leaked until the server ' '%(server_uuid)s is deleted.', {'port_id': vif['id'], 'rp_uuid': vif['profile']['allocation'], 'server_uuid': instance.uuid}) del network_info[index] neutron.update_instance_cache_with_nw_info( self.network_api, context, instance, nw_info=network_info) try: self.driver.detach_interface(context, instance, vif) except NotImplementedError: # Not all virt drivers support attach/detach of interfaces # yet (like Ironic), so just ignore this. pass except exception.NovaException as ex: # If the instance was deleted before the interface was # detached, just log it at debug. log_level = (logging.DEBUG if isinstance(ex, exception.InstanceNotFound) else logging.WARNING) LOG.log(log_level, "Detach interface failed, " "port_id=%(port_id)s, reason: %(msg)s", {'port_id': deleted_vif_id, 'msg': ex}, instance=instance) break @wrap_instance_event(prefix='compute') @wrap_instance_fault def extend_volume(self, context, instance, extended_volume_id): # If an attached volume is extended by cinder, it needs to # be extended by virt driver so host can detect its new size. # And bdm needs to be updated. 
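        # The flow below: look up the BlockDeviceMapping for this volume and
        # instance, re-read the volume from Cinder to learn its new size (in
        # GiB), persist that size on the BDM, and finally ask the virt driver
        # to grow the attached disk, converting GiB to bytes via units.Gi
        # (for example, a 2 GiB volume is passed as 2 * units.Gi = 2147483648
        # bytes).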
LOG.debug('Handling volume-extended event for volume %(vol)s', {'vol': extended_volume_id}, instance=instance) try: bdm = objects.BlockDeviceMapping.get_by_volume_and_instance( context, extended_volume_id, instance.uuid) except exception.NotFound: LOG.warning('Extend volume failed, ' 'volume %(vol)s is not attached to instance.', {'vol': extended_volume_id}, instance=instance) return LOG.info('Cinder extended volume %(vol)s; ' 'extending it to detect new size', {'vol': extended_volume_id}, instance=instance) volume = self.volume_api.get(context, bdm.volume_id) if bdm.connection_info is None: LOG.warning('Extend volume failed, ' 'attached volume %(vol)s has no connection_info', {'vol': extended_volume_id}, instance=instance) return connection_info = jsonutils.loads(bdm.connection_info) bdm.volume_size = volume['size'] bdm.save() if not self.driver.capabilities.get('supports_extend_volume', False): raise exception.ExtendVolumeNotSupported() try: self.driver.extend_volume(context, connection_info, instance, bdm.volume_size * units.Gi) except Exception as ex: LOG.warning('Extend volume failed, ' 'volume_id=%(volume_id)s, reason: %(msg)s', {'volume_id': extended_volume_id, 'msg': ex}, instance=instance) raise @staticmethod def _is_state_valid_for_power_update_event(instance, target_power_state): """Check if the current state of the instance allows it to be a candidate for the power-update event. :param instance: The nova instance object. :param target_power_state: The desired target power state; this should either be "POWER_ON" or "POWER_OFF". :returns Boolean: True if the instance can be subjected to the power-update event. """ if ((target_power_state == external_event_obj.POWER_ON and instance.task_state is None and instance.vm_state == vm_states.STOPPED and instance.power_state == power_state.SHUTDOWN) or (target_power_state == external_event_obj.POWER_OFF and instance.task_state is None and instance.vm_state == vm_states.ACTIVE and instance.power_state == power_state.RUNNING)): return True return False @wrap_exception() @reverts_task_state @wrap_instance_event(prefix='compute') @wrap_instance_fault def power_update(self, context, instance, target_power_state): """Power update of an instance prompted by an external event. :param context: The API request context. :param instance: The nova instance object. :param target_power_state: The desired target power state; this should either be "POWER_ON" or "POWER_OFF". """ @utils.synchronized(instance.uuid) def do_power_update(): LOG.debug('Handling power-update event with target_power_state %s ' 'for instance', target_power_state, instance=instance) if not self._is_state_valid_for_power_update_event( instance, target_power_state): pow_state = fields.InstancePowerState.from_index( instance.power_state) LOG.info('The power-update %(tag)s event for instance ' '%(uuid)s is a no-op since the instance is in ' 'vm_state %(vm_state)s, task_state ' '%(task_state)s and power_state ' '%(power_state)s.', {'tag': target_power_state, 'uuid': instance.uuid, 'vm_state': instance.vm_state, 'task_state': instance.task_state, 'power_state': pow_state}) return LOG.debug("Trying to %s instance", target_power_state, instance=instance) if target_power_state == external_event_obj.POWER_ON: action = fields.NotificationAction.POWER_ON notification_name = "power_on." instance.task_state = task_states.POWERING_ON else: # It's POWER_OFF action = fields.NotificationAction.POWER_OFF notification_name = "power_off." 
instance.task_state = task_states.POWERING_OFF instance.progress = 0 try: # Note that the task_state is set here rather than the API # because this is a best effort operation and deferring # updating the task_state until we get to the compute service # avoids error handling in the API and needing to account for # older compute services during rolling upgrades from Stein. # If we lose a race, UnexpectedTaskStateError is handled # below. instance.save(expected_task_state=[None]) self._notify_about_instance_usage(context, instance, notification_name + "start") compute_utils.notify_about_instance_action(context, instance, self.host, action=action, phase=fields.NotificationPhase.START) # UnexpectedTaskStateError raised from the driver will be # handled below and not result in a fault, error notification # or failure of the instance action. Other driver errors like # NotImplementedError will be record a fault, send an error # notification and mark the instance action as failed. self.driver.power_update_event(instance, target_power_state) self._notify_about_instance_usage(context, instance, notification_name + "end") compute_utils.notify_about_instance_action(context, instance, self.host, action=action, phase=fields.NotificationPhase.END) except exception.UnexpectedTaskStateError as e: # Handling the power-update event is best effort and if we lost # a race with some other action happening to the instance we # just log it and return rather than fail the action. LOG.info("The power-update event was possibly preempted: %s ", e.format_message(), instance=instance) return do_power_update() @wrap_exception() def external_instance_event(self, context, instances, events): # NOTE(danms): Some event types are handled by the manager, such # as when we're asked to update the instance's info_cache. If it's # not one of those, look for some thread(s) waiting for the event and # unblock them if so. for event in events: instance = [inst for inst in instances if inst.uuid == event.instance_uuid][0] LOG.debug('Received event %(event)s', {'event': event.key}, instance=instance) if event.name == 'network-changed': try: LOG.debug('Refreshing instance network info cache due to ' 'event %s.', event.key, instance=instance) self.network_api.get_instance_nw_info( context, instance, refresh_vif_id=event.tag) except exception.NotFound as e: LOG.info('Failed to process external instance event ' '%(event)s due to: %(error)s', {'event': event.key, 'error': six.text_type(e)}, instance=instance) elif event.name == 'network-vif-deleted': try: self._process_instance_vif_deleted_event(context, instance, event.tag) except exception.NotFound as e: LOG.info('Failed to process external instance event ' '%(event)s due to: %(error)s', {'event': event.key, 'error': six.text_type(e)}, instance=instance) elif event.name == 'volume-extended': self.extend_volume(context, instance, event.tag) elif event.name == 'power-update': self.power_update(context, instance, event.tag) else: self._process_instance_event(instance, event) @periodic_task.periodic_task(spacing=CONF.image_cache.manager_interval, external_process_ok=True) def _run_image_cache_manager_pass(self, context): """Run a single pass of the image cache manager.""" if not self.driver.capabilities.get("has_imagecache", False): return # Determine what other nodes use this storage storage_users.register_storage_use(CONF.instances_path, CONF.host) nodes = storage_users.get_storage_users(CONF.instances_path) # Filter all_instances to only include those nodes which share this # storage path. 
# TODO(mikal): this should be further refactored so that the cache # cleanup code doesn't know what those instances are, just a remote # count, and then this logic should be pushed up the stack. filters = {'deleted': False, 'soft_deleted': True, 'host': nodes} filtered_instances = objects.InstanceList.get_by_filters(context, filters, expected_attrs=[], use_slave=True) self.driver.manage_image_cache(context, filtered_instances) def cache_images(self, context, image_ids): """Ask the virt driver to pre-cache a set of base images. :param context: The RequestContext :param image_ids: The image IDs to be cached :return: A dict, keyed by image-id where the values are one of: 'cached' if the image was downloaded, 'existing' if the image was already in the cache, 'unsupported' if the virt driver does not support caching, 'error' if the virt driver raised an exception. """ results = {} LOG.info('Caching %i image(s) by request', len(image_ids)) for image_id in image_ids: try: cached = self.driver.cache_image(context, image_id) if cached: results[image_id] = 'cached' else: results[image_id] = 'existing' except NotImplementedError: LOG.warning('Virt driver does not support image pre-caching;' ' ignoring request') # NOTE(danms): Yes, technically we could short-circuit here to # avoid trying the rest of the images, but it's very cheap to # just keep hitting the NotImplementedError to keep the logic # clean. results[image_id] = 'unsupported' except Exception as e: results[image_id] = 'error' LOG.error('Failed to cache image %(image_id)s: %(err)s', {'image_id': image_id, 'err': e}) return results @periodic_task.periodic_task(spacing=CONF.instance_delete_interval) def _run_pending_deletes(self, context): """Retry any pending instance file deletes.""" LOG.debug('Cleaning up deleted instances') filters = {'deleted': True, 'soft_deleted': False, 'host': CONF.host, 'cleaned': False} attrs = ['system_metadata'] with utils.temporary_mutation(context, read_deleted='yes'): instances = objects.InstanceList.get_by_filters( context, filters, expected_attrs=attrs, use_slave=True) LOG.debug('There are %d instances to clean', len(instances)) for instance in instances: attempts = int(instance.system_metadata.get('clean_attempts', '0')) LOG.debug('Instance has had %(attempts)s of %(max)s ' 'cleanup attempts', {'attempts': attempts, 'max': CONF.maximum_instance_delete_attempts}, instance=instance) if attempts < CONF.maximum_instance_delete_attempts: success = self.driver.delete_instance_files(instance) instance.system_metadata['clean_attempts'] = str(attempts + 1) if success: instance.cleaned = True with utils.temporary_mutation(context, read_deleted='yes'): instance.save() @periodic_task.periodic_task(spacing=CONF.instance_delete_interval) def _cleanup_incomplete_migrations(self, context): """Delete instance files on failed resize/revert-resize operation During resize/revert-resize operation, if that instance gets deleted in-between then instance files might remain either on source or destination compute node because of race condition. 
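        Such leftovers are found by looking for migration records on this
        host in 'error' status whose instances have already been deleted;
        once the stale files are removed the migration record is marked
        'failed'.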
""" LOG.debug('Cleaning up deleted instances with incomplete migration ') migration_filters = {'host': CONF.host, 'status': 'error'} migrations = objects.MigrationList.get_by_filters(context, migration_filters) if not migrations: return inst_uuid_from_migrations = set([migration.instance_uuid for migration in migrations]) inst_filters = {'deleted': True, 'soft_deleted': False, 'uuid': inst_uuid_from_migrations} attrs = ['info_cache', 'security_groups', 'system_metadata'] with utils.temporary_mutation(context, read_deleted='yes'): instances = objects.InstanceList.get_by_filters( context, inst_filters, expected_attrs=attrs, use_slave=True) for instance in instances: if instance.host != CONF.host: for migration in migrations: if instance.uuid == migration.instance_uuid: # Delete instance files if not cleanup properly either # from the source or destination compute nodes when # the instance is deleted during resizing. self.driver.delete_instance_files(instance) try: migration.status = 'failed' migration.save() except exception.MigrationNotFound: LOG.warning("Migration %s is not found.", migration.id, instance=instance) break @messaging.expected_exceptions(exception.InstanceQuiesceNotSupported, exception.QemuGuestAgentNotEnabled, exception.NovaException, NotImplementedError) @wrap_exception() def quiesce_instance(self, context, instance): """Quiesce an instance on this host.""" context = context.elevated() image_meta = objects.ImageMeta.from_instance(instance) self.driver.quiesce(context, instance, image_meta) def _wait_for_snapshots_completion(self, context, mapping): for mapping_dict in mapping: if mapping_dict.get('source_type') == 'snapshot': def _wait_snapshot(): snapshot = self.volume_api.get_snapshot( context, mapping_dict['snapshot_id']) if snapshot.get('status') != 'creating': raise loopingcall.LoopingCallDone() timer = loopingcall.FixedIntervalLoopingCall(_wait_snapshot) timer.start(interval=0.5).wait() @messaging.expected_exceptions(exception.InstanceQuiesceNotSupported, exception.QemuGuestAgentNotEnabled, exception.NovaException, NotImplementedError) @wrap_exception() def unquiesce_instance(self, context, instance, mapping=None): """Unquiesce an instance on this host. If snapshots' image mapping is provided, it waits until snapshots are completed before unqueiscing. """ context = context.elevated() if mapping: try: self._wait_for_snapshots_completion(context, mapping) except Exception as error: LOG.exception("Exception while waiting completion of " "volume snapshots: %s", error, instance=instance) image_meta = objects.ImageMeta.from_instance(instance) self.driver.unquiesce(context, instance, image_meta) @periodic_task.periodic_task(spacing=CONF.instance_delete_interval) def _cleanup_expired_console_auth_tokens(self, context): """Remove all expired console auth tokens. Console authorization tokens and their connection data are stored in the database when a user asks for a console connection to an instance. After a time they expire. We periodically remove any expired tokens from the database. 
""" objects.ConsoleAuthToken.clean_expired_console_auths(context) def _claim_pci_for_instance_vifs(self, ctxt, instance): """Claim PCI devices for the instance's VIFs on the compute node :param ctxt: Context :param instance: Instance object :return: mapping for the VIFs that yielded a PCI claim on the compute node """ pci_req_id_to_port_id = {} pci_reqs = [] port_id_to_pci_dev = {} for vif in instance.get_network_info(): pci_req = pci_req_module.get_instance_pci_request_from_vif( ctxt, instance, vif) if pci_req: pci_req_id_to_port_id[pci_req.request_id] = vif['id'] pci_reqs.append(pci_req) if pci_reqs: # Create PCI requests and claim against PCI resource tracker # NOTE(adrianc): We claim against the same requests as on the # source node. vif_pci_requests = objects.InstancePCIRequests( requests=pci_reqs, instance_uuid=instance.uuid) claimed_pci_devices_objs = self.rt.claim_pci_devices( ctxt, vif_pci_requests) # Update VIFMigrateData profile with the newly claimed PCI # device for pci_dev in claimed_pci_devices_objs: LOG.debug("PCI device: %s Claimed on destination node", pci_dev.address) port_id = pci_req_id_to_port_id[pci_dev.request_id] port_id_to_pci_dev[port_id] = pci_dev return port_id_to_pci_dev def _update_migrate_vifs_profile_with_pci(self, migrate_vifs, port_id_to_pci_dev): """Update migrate vifs profile with the claimed PCI devices :param migrate_vifs: list of VIFMigrateData objects :param port_id_to_pci_dev: a mapping :return: None. """ for mig_vif in migrate_vifs: port_id = mig_vif.port_id if port_id not in port_id_to_pci_dev: continue pci_dev = port_id_to_pci_dev[port_id] profile = copy.deepcopy(mig_vif.source_vif['profile']) profile['pci_slot'] = pci_dev.address profile['pci_vendor_info'] = ':'.join([pci_dev.vendor_id, pci_dev.product_id]) mig_vif.profile = profile LOG.debug("Updating migrate VIF profile for port %(port_id)s:" "%(profile)s", {'port_id': port_id, 'profile': profile}) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/compute/migration_list.py0000664000175000017500000000666600000000000020621 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.compute import multi_cell_list from nova import context from nova.db import api as db from nova import exception from nova import objects from nova.objects import base class MigrationSortContext(multi_cell_list.RecordSortContext): def __init__(self, sort_keys, sort_dirs): if not sort_keys: sort_keys = ['created_at', 'id'] sort_dirs = ['desc', 'desc'] if 'uuid' not in sort_keys: # Add uuid into the list of sort_keys. Since we're striping # across cell databases here, many sort_keys arrangements will # yield nothing unique across all the databases to give us a stable # ordering, which can mess up expected client pagination behavior. # So, throw uuid into the sort_keys at the end if it's not already # there to keep us repeatable. 
sort_keys = copy.copy(sort_keys) + ['uuid'] sort_dirs = copy.copy(sort_dirs) + ['asc'] super(MigrationSortContext, self).__init__(sort_keys, sort_dirs) class MigrationLister(multi_cell_list.CrossCellLister): def __init__(self, sort_keys, sort_dirs): super(MigrationLister, self).__init__( MigrationSortContext(sort_keys, sort_dirs)) @property def marker_identifier(self): return 'uuid' def get_marker_record(self, ctx, marker): """Get the marker migration from its cell. This returns the marker migration from the cell in which it lives """ results = context.scatter_gather_skip_cell0( ctx, db.migration_get_by_uuid, marker) db_migration = None for result_cell_uuid, result in results.items(): if not context.is_cell_failure_sentinel(result): db_migration = result cell_uuid = result_cell_uuid break if not db_migration: raise exception.MarkerNotFound(marker=marker) return cell_uuid, db_migration def get_marker_by_values(self, ctx, values): return db.migration_get_by_sort_filters(ctx, self.sort_ctx.sort_keys, self.sort_ctx.sort_dirs, values) def get_by_filters(self, ctx, filters, limit, marker, **kwargs): return db.migration_get_all_by_filters( ctx, filters, limit=limit, marker=marker, sort_keys=self.sort_ctx.sort_keys, sort_dirs=self.sort_ctx.sort_dirs) def get_migration_objects_sorted(ctx, filters, limit, marker, sort_keys, sort_dirs): mig_generator = MigrationLister(sort_keys, sort_dirs).get_records_sorted( ctx, filters, limit, marker) return base.obj_make_list(ctx, objects.MigrationList(), objects.Migration, mig_generator) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2984703 nova-21.2.4/nova/compute/monitors/0000775000175000017500000000000000000000000017057 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/compute/monitors/__init__.py0000664000175000017500000000766200000000000021203 0ustar00zuulzuul00000000000000# Copyright 2013 Intel Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Resource monitor API specification. """ from oslo_log import log as logging from stevedore import enabled import nova.conf CONF = nova.conf.CONF LOG = logging.getLogger(__name__) class MonitorHandler(object): NAMESPACES = [ 'nova.compute.monitors.cpu', ] def __init__(self, resource_tracker): # Dictionary keyed by the monitor type namespace. Value is the # first loaded monitor of that namespace or False. 
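        # e.g. initially {'nova.compute.monitors.cpu': False}; once a CPU
        # monitor passes check_enabled_monitor() the value is replaced with
        # that extension's name.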
self.type_monitor_loaded = {ns: False for ns in self.NAMESPACES} self.monitors = [] for ns in self.NAMESPACES: plugin_mgr = enabled.EnabledExtensionManager( namespace=ns, invoke_on_load=True, check_func=self.check_enabled_monitor, invoke_args=(resource_tracker,) ) self.monitors += [ext.obj for ext in plugin_mgr] def check_enabled_monitor(self, ext): """Ensures that only one monitor is specified of any type.""" # The extension does not have a namespace attribute, unfortunately, # but we can get the namespace by examining the first part of the # entry_point_target attribute, which looks like this: # 'nova.compute.monitors.cpu.virt_driver:Monitor' ept = ext.entry_point_target ept_parts = ept.split(':') namespace_parts = ept_parts[0].split('.') namespace = '.'.join(namespace_parts[0:-1]) if self.type_monitor_loaded[namespace] is not False: LOG.warning("Excluding %(namespace)s monitor " "%(monitor_name)s. Already loaded " "%(loaded_monitor)s.", {'namespace': namespace, 'monitor_name': ext.name, 'loaded_monitor': self.type_monitor_loaded[namespace] }) return False # NOTE(jaypipes): We used to only have CPU monitors, so # CONF.compute_monitors could contain "virt_driver" without any monitor # type namespace. So, to maintain backwards-compatibility with that # older way of specifying monitors, we first loop through any values in # CONF.compute_monitors and put any non-namespace'd values into the # 'cpu' namespace. cfg_monitors = ['cpu.' + cfg if '.' not in cfg else cfg for cfg in CONF.compute_monitors] # NOTE(jaypipes): Append 'nova.compute.monitors.' to any monitor value # that doesn't have it to allow CONF.compute_monitors to use shortened # namespaces (like 'cpu.' instead of 'nova.compute.monitors.cpu.') cfg_monitors = ['nova.compute.monitors.' + cfg if 'nova.compute.monitors.' not in cfg else cfg for cfg in cfg_monitors] if namespace + '.' + ext.name in cfg_monitors: self.type_monitor_loaded[namespace] = ext.name return True # Only log something if compute_monitors is not empty. if CONF.compute_monitors: LOG.warning("Excluding %(namespace)s monitor %(monitor_name)s. " "Not in the list of enabled monitors " "(CONF.compute_monitors).", {'namespace': namespace, 'monitor_name': ext.name}) return False ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/compute/monitors/base.py0000664000175000017500000000561700000000000020354 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import six from nova.objects import fields @six.add_metaclass(abc.ABCMeta) class MonitorBase(object): """Base class for all resource monitor plugins. A monitor is responsible for adding a set of related metrics to a `nova.objects.MonitorMetricList` object after the monitor has performed some sampling or monitoring action. """ def __init__(self, compute_manager): self.compute_manager = compute_manager self.source = None @abc.abstractmethod def get_metric_names(self): """Get available metric names. 
Get available metric names, which are represented by a set of keys that can be used to check conflicts and duplications :returns: set containing one or more values from :py:attr: nova.objects.fields.MonitorMetricType.ALL """ raise NotImplementedError('get_metric_names') @abc.abstractmethod def populate_metrics(self, metric_list): """Monitors are responsible for populating this metric_list object with nova.objects.MonitorMetric objects with values collected via the respective compute drivers. Note that if the monitor class is responsible for tracking a *related* set of metrics -- e.g. a set of percentages of CPU time allocated to user, kernel, and idle -- it is the responsibility of the monitor implementation to do a single sampling call to the underlying monitor to ensure that related metric values make logical sense. :param metric_list: A mutable reference of the metric list object """ raise NotImplementedError('populate_metrics') class CPUMonitorBase(MonitorBase): """Base class for all monitors that return CPU-related metrics.""" def get_metric_names(self): return set([ fields.MonitorMetricType.CPU_FREQUENCY, fields.MonitorMetricType.CPU_USER_TIME, fields.MonitorMetricType.CPU_KERNEL_TIME, fields.MonitorMetricType.CPU_IDLE_TIME, fields.MonitorMetricType.CPU_IOWAIT_TIME, fields.MonitorMetricType.CPU_USER_PERCENT, fields.MonitorMetricType.CPU_KERNEL_PERCENT, fields.MonitorMetricType.CPU_IDLE_PERCENT, fields.MonitorMetricType.CPU_IOWAIT_PERCENT, fields.MonitorMetricType.CPU_PERCENT, ]) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.2984703 nova-21.2.4/nova/compute/monitors/cpu/0000775000175000017500000000000000000000000017646 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/compute/monitors/cpu/__init__.py0000664000175000017500000000000000000000000021745 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/compute/monitors/cpu/virt_driver.py0000664000175000017500000001005100000000000022554 0ustar00zuulzuul00000000000000# Copyright 2013 Intel Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" CPU monitor based on virt driver to retrieve CPU information """ from oslo_log import log as logging from oslo_utils import timeutils from nova.compute.monitors import base import nova.conf from nova import exception from nova import objects CONF = nova.conf.CONF LOG = logging.getLogger(__name__) class Monitor(base.CPUMonitorBase): """CPU monitor that uses the virt driver's get_host_cpu_stats() call.""" def __init__(self, resource_tracker): super(Monitor, self).__init__(resource_tracker) self.source = CONF.compute_driver self.driver = resource_tracker.driver self._data = {} self._cpu_stats = {} def populate_metrics(self, metric_list): self._update_data() for name in self.get_metric_names(): metric_object = objects.MonitorMetric() metric_object.name = name metric_object.value = self._data[name] metric_object.timestamp = self._data["timestamp"] metric_object.source = self.source metric_list.objects.append(metric_object) def _update_data(self): self._data = {} self._data["timestamp"] = timeutils.utcnow() # Extract node's CPU statistics. try: stats = self.driver.get_host_cpu_stats() self._data["cpu.user.time"] = stats["user"] self._data["cpu.kernel.time"] = stats["kernel"] self._data["cpu.idle.time"] = stats["idle"] self._data["cpu.iowait.time"] = stats["iowait"] self._data["cpu.frequency"] = stats["frequency"] except (TypeError, KeyError): LOG.exception("Not all properties needed are implemented " "in the compute driver") raise exception.ResourceMonitorError( monitor=self.__class__.__name__) # The compute driver API returns the absolute values for CPU times. # We compute the utilization percentages for each specific CPU time # after calculating the delta between the current reading and the # previous reading. stats["total"] = (stats["user"] + stats["kernel"] + stats["idle"] + stats["iowait"]) cputime = float(stats["total"] - self._cpu_stats.get("total", 0)) # NOTE(jwcroppe): Convert all the `perc` values to their integer forms # since pre-conversion their values are within the range [0, 1] and the # objects.MonitorMetric.value field requires an integer. perc = (stats["user"] - self._cpu_stats.get("user", 0)) / cputime self._data["cpu.user.percent"] = int(perc * 100) perc = (stats["kernel"] - self._cpu_stats.get("kernel", 0)) / cputime self._data["cpu.kernel.percent"] = int(perc * 100) perc = (stats["idle"] - self._cpu_stats.get("idle", 0)) / cputime self._data["cpu.idle.percent"] = int(perc * 100) perc = (stats["iowait"] - self._cpu_stats.get("iowait", 0)) / cputime self._data["cpu.iowait.percent"] = int(perc * 100) # Compute the current system-wide CPU utilization as a percentage. used = stats["user"] + stats["kernel"] + stats["iowait"] prev_used = (self._cpu_stats.get("user", 0) + self._cpu_stats.get("kernel", 0) + self._cpu_stats.get("iowait", 0)) perc = (used - prev_used) / cputime self._data["cpu.percent"] = int(perc * 100) self._cpu_stats = stats.copy() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/compute/multi_cell_list.py0000664000175000017500000004714500000000000020756 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import copy import heapq import eventlet import six from oslo_log import log as logging import nova.conf from nova import context from nova import exception from nova.i18n import _ LOG = logging.getLogger(__name__) CONF = nova.conf.CONF class RecordSortContext(object): def __init__(self, sort_keys, sort_dirs): self.sort_keys = sort_keys self.sort_dirs = sort_dirs def compare_records(self, rec1, rec2): """Implements cmp(rec1, rec2) for the first key that is different. Adjusts for the requested sort direction by inverting the result as needed. """ for skey, sdir in zip(self.sort_keys, self.sort_dirs): resultflag = 1 if sdir == 'desc' else -1 if rec1[skey] < rec2[skey]: return resultflag elif rec1[skey] > rec2[skey]: return resultflag * -1 return 0 class RecordWrapper(object): """Wrap a DB object from the database so it is sortable. We use heapq.merge() below to do the merge sort of things from the cell databases. That routine assumes it can use regular python operators (> and <) on the contents. Since that won't work with instances from the database (and depends on the sort keys/dirs), we need this wrapper class to provide that. Implementing __lt__ is enough for heapq.merge() to do its work. """ def __init__(self, ctx, sort_ctx, db_record): self.cell_uuid = ctx.cell_uuid self._sort_ctx = sort_ctx self._db_record = db_record def __lt__(self, other): # NOTE(danms): This makes us always sort failure sentinels # higher than actual results. We do this so that they bubble # up in the get_records_sorted() feeder loop ahead of anything # else, and so that the implementation of RecordSortContext # never sees or has to handle the sentinels. If we did not # sort these to the top then we could potentially return # $limit results from good cells before we noticed the failed # cells, and would not properly report them as failed for # fix-up in the higher layers. if context.is_cell_failure_sentinel(self._db_record): return True elif context.is_cell_failure_sentinel(other._db_record): return False r = self._sort_ctx.compare_records(self._db_record, other._db_record) # cmp(x, y) returns -1 if x < y return r == -1 def query_wrapper(ctx, fn, *args, **kwargs): """This is a helper to run a query with predictable fail semantics. This is a generator which will mimic the scatter_gather_cells() behavior by honoring a timeout and catching exceptions, yielding the usual sentinel objects instead of raising. It wraps these in RecordWrapper objects, which will prioritize them to the merge sort, causing them to be handled by the main get_objects_sorted() feeder loop quickly and gracefully. """ with eventlet.timeout.Timeout(context.CELL_TIMEOUT, exception.CellTimeout): try: for record in fn(ctx, *args, **kwargs): yield record except exception.CellTimeout: # Here, we yield a RecordWrapper (no sort_ctx needed since # we won't call into the implementation's comparison routines) # wrapping the sentinel indicating timeout. 
yield RecordWrapper(ctx, None, context.did_not_respond_sentinel) return except Exception as e: # Here, we yield a RecordWrapper (no sort_ctx needed since # we won't call into the implementation's comparison routines) # wrapping the exception object indicating failure. yield RecordWrapper(ctx, None, e.__class__(e.args)) return @six.add_metaclass(abc.ABCMeta) class CrossCellLister(object): """An implementation of a cross-cell efficient lister. This primarily provides a listing implementation for fetching records from across multiple cells, paginated and sorted appropriately. The external interface is the get_records_sorted() method. You should implement this if you need to efficiently list your data type from cell databases. """ def __init__(self, sort_ctx, cells=None, batch_size=None): self.sort_ctx = sort_ctx self.cells = cells self.batch_size = batch_size self._cells_responded = set() self._cells_failed = set() self._cells_timed_out = set() @property def cells_responded(self): """A list of uuids representing those cells that returned a successful result. """ return list(self._cells_responded) @property def cells_failed(self): """A list of uuids representing those cells that failed to return a successful result. """ return list(self._cells_failed) @property def cells_timed_out(self): """A list of uuids representing those cells that timed out while being contacted. """ return list(self._cells_timed_out) @property @abc.abstractmethod def marker_identifier(self): """Return the name of the property used as the marker identifier. For instances (and many other types) this is 'uuid', but could also be things like 'id' or anything else used as the marker identifier when fetching a page of results. """ pass @abc.abstractmethod def get_marker_record(self, ctx, marker_id): """Get the cell UUID and instance of the marker record by id. This needs to look up the marker record in whatever cell it is in and return it. It should be populated with values corresponding to what is in self.sort_ctx.sort_keys. :param ctx: A RequestContext :param marker_id: The identifier of the marker to find :returns: A tuple of cell_uuid where the marker was found and an instance of the marker from the database :raises: MarkerNotFound if the marker does not exist """ pass @abc.abstractmethod def get_marker_by_values(self, ctx, values): """Get the identifier of the marker record by value. When we need to paginate across cells, the marker record exists in only one of those cells. The rest of the cells must decide on a record to be their equivalent marker with which to return the next page of results. This must be done by value, based on the values of the sort_keys properties on the actual marker, as if the results were sorted appropriately and the actual marker existed in each cell. :param ctx: A RequestContext :param values: The values of the sort_keys properties of fhe actual marker instance :returns: The identifier of the equivalent marker in the local database """ pass @abc.abstractmethod def get_by_filters(self, ctx, filters, limit, marker, **kwargs): """List records by filters, sorted and paginated. This is the standard filtered/sorted list method for the data type we are trying to list out of the database. Additional kwargs are passsed through. 
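        Concrete listers typically delegate straight to the relevant DB API
        listing call for their record type (for example, the migration lister
        uses db.migration_get_all_by_filters).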
:param ctx: A RequestContext :param filters: A dict of column=filter items :param limit: A numeric limit on the number of results, or None :param marker: The marker identifier, or None :returns: A list of records """ pass def get_records_sorted(self, ctx, filters, limit, marker, **kwargs): """Get a cross-cell list of records matching filters. This iterates cells in parallel generating a unified and sorted list of records as efficiently as possible. It takes care to iterate the list as infrequently as possible. We wrap the results in RecordWrapper objects so that they are sortable by heapq.merge(), which requires that the '<' operator just works. Our sorting requirements are encapsulated into the RecordSortContext provided to the constructor for this object. This function is a generator of records from the database like what you would get from instance_get_all_by_filters_sort() in the DB API. NOTE: Since we do these in parallel, a nonzero limit will be passed to each database query, although the limit will be enforced in the output of this function. Meaning, we will still query $limit from each database, but only return $limit total results. :param cell_down_support: True if the API (and caller) support returning a minimal instance construct if the relevant cell is down. If its True, then the value of CONF.api.list_records_by_skipping_down_cells is ignored and if its False, results are either skipped or erred based on the value of CONF.api.list_records_by_skipping_down_cells. """ cell_down_support = kwargs.pop('cell_down_support', False) if marker: # A marker identifier was provided from the API. Call this # the 'global' marker as it determines where we start the # process across all cells. Look up the record in # whatever cell it is in and record the values for the # sort keys so we can find the marker instance in each # cell (called the 'local' marker). global_marker_cell, global_marker_record = self.get_marker_record( ctx, marker) global_marker_values = [global_marker_record[key] for key in self.sort_ctx.sort_keys] def do_query(cctx): """Generate RecordWrapper(record) objects from a cell. We do this inside the thread (created by scatter_gather_all_cells()) so that we return wrappers and avoid having to iterate the combined result list in the caller again. This is run against each cell by the scatter_gather routine. """ # The local marker is an identifier of a record in a cell # that is found by the special method # get_marker_by_values(). It should be the next record # in order according to the sort provided, but after the # marker instance which may have been in another cell. local_marker = None # Since the regular DB query routines take a marker and assume that # the marked record was the last entry of the previous page, we # may need to prefix it to our result query if we're not the cell # that had the actual marker record. local_marker_prefix = [] marker_id = self.marker_identifier if marker: if cctx.cell_uuid == global_marker_cell: local_marker = marker else: local_marker = self.get_marker_by_values( cctx, global_marker_values) if local_marker: if local_marker != marker: # We did find a marker in our cell, but it wasn't # the global marker. Thus, we will use it as our # marker in the main query below, but we also need # to prefix that result with this marker instance # since the result below will not return it and it # has not been returned to the user yet. 
Note that # we do _not_ prefix the marker instance if our # marker was the global one since that has already # been sent to the user. local_marker_filters = copy.copy(filters) if marker_id not in local_marker_filters: # If an $id filter was provided, it will # have included our marker already if this # instance is desired in the output # set. If it wasn't, we specifically query # for it. If the other filters would have # excluded it, then we'll get an empty set # here and not include it in the output as # expected. local_marker_filters[marker_id] = [local_marker] local_marker_prefix = self.get_by_filters( cctx, local_marker_filters, limit=1, marker=None, **kwargs) else: # There was a global marker but everything in our # cell is _before_ that marker, so we return # nothing. If we didn't have this clause, we'd # pass marker=None to the query below and return a # full unpaginated set for our cell. return if local_marker_prefix: # Per above, if we had a matching marker object, that is # the first result we should generate. yield RecordWrapper(cctx, self.sort_ctx, local_marker_prefix[0]) # If a batch size was provided, use that as the limit per # batch. If not, then ask for the entire $limit in a single # batch. batch_size = self.batch_size or limit # Keep track of how many we have returned in all batches return_count = 0 # If limit was unlimited then keep querying batches until # we run out of results. Otherwise, query until the total count # we have returned exceeds the limit. while limit is None or return_count < limit: batch_count = 0 # Do not query a full batch if it would cause our total # to exceed the limit if limit: query_size = min(batch_size, limit - return_count) else: query_size = batch_size # Get one batch query_result = self.get_by_filters( cctx, filters, limit=query_size or None, marker=local_marker, **kwargs) # Yield wrapped results from the batch, counting as we go # (to avoid traversing the list to count). Also, update our # local_marker each time so that local_marker is the end of # this batch in order to find the next batch. for item in query_result: local_marker = item[self.marker_identifier] yield RecordWrapper(cctx, self.sort_ctx, item) batch_count += 1 # No results means we are done for this cell if not batch_count: break return_count += batch_count LOG.debug(('Listed batch of %(batch)i results from cell ' 'out of %(limit)s limit. Returned %(total)i ' 'total so far.'), {'batch': batch_count, 'total': return_count, 'limit': limit or 'no'}) # NOTE(danms): The calls to do_query() will return immediately # with a generator. There is no point in us checking the # results for failure or timeout since we have not actually # run any code in do_query() until the first iteration # below. The query_wrapper() utility handles inline # translation of failures and timeouts to sentinels which will # be generated and consumed just like any normal result below. if self.cells: results = context.scatter_gather_cells(ctx, self.cells, context.CELL_TIMEOUT, query_wrapper, do_query) else: results = context.scatter_gather_all_cells(ctx, query_wrapper, do_query) # If a limit was provided, it was passed to the per-cell query # routines. That means we have NUM_CELLS * limit items across # results. So, we need to consume from that limit below and # stop returning results. Call that total_limit since we will # modify it in the loop below, but do_query() above also looks # at the original provided limit. 
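# An illustrative sketch (not part of the original module) of what the
# merge below does, assuming two already-sorted per-cell result streams:
#
#   >>> import heapq
#   >>> list(heapq.merge([1, 4, 9], [2, 3, 10]))
#   [1, 2, 3, 4, 9, 10]
#
# Here each stream yields RecordWrapper objects instead of integers, and
# RecordWrapper.__lt__ (defined above) supplies the ordering, with failure
# sentinels sorting ahead of real results.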
total_limit = limit or 0 # Generate results from heapq so we can return the inner # instance instead of the wrapper. This is basically free # as it works as our caller iterates the results. feeder = heapq.merge(*results.values()) while True: try: item = next(feeder) except StopIteration: return if context.is_cell_failure_sentinel(item._db_record): if (not CONF.api.list_records_by_skipping_down_cells and not cell_down_support): # Value the config # ``CONF.api.list_records_by_skipping_down_cells`` only if # cell_down_support is False and generate the exception # if CONF.api.list_records_by_skipping_down_cells is False. # In all other cases the results from the down cell should # be skipped now to either construct minimal constructs # later if cell_down_support is True or to simply return # the skipped results if cell_down_support is False. raise exception.NovaException( _('Cell %s is not responding but configuration ' 'indicates that we should fail.') % item.cell_uuid) LOG.warning('Cell %s is not responding and hence is ' 'being omitted from the results', item.cell_uuid) if item._db_record == context.did_not_respond_sentinel: self._cells_timed_out.add(item.cell_uuid) elif isinstance(item._db_record, Exception): self._cells_failed.add(item.cell_uuid) # We might have received one batch but timed out or failed # on a later one, so be sure we fix the accounting. if item.cell_uuid in self._cells_responded: self._cells_responded.remove(item.cell_uuid) continue yield item._db_record self._cells_responded.add(item.cell_uuid) total_limit -= 1 if total_limit == 0: # We'll only hit this if limit was nonzero and we just # generated our last one return ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/compute/power_state.py0000664000175000017500000000432600000000000020120 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2011 Justin Santa Barbara # All Rights Reserved. # Copyright (c) 2010 Citrix Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Power state is the state we get by calling virt driver on a particular domain. The hypervisor is always considered the authority on the status of a particular VM, and the power_state in the DB should be viewed as a snapshot of the VMs's state in the (recent) past. It can be periodically updated, and should also be updated at the end of a task if the task is supposed to affect power_state. """ from nova.objects import fields # NOTE(maoy): These are *not* virDomainState values from libvirt. # The hex value happens to match virDomainState for backward-compatibility # reasons. 
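# An illustrative sketch (not part of the original module), assuming the
# ordering of fields.InstancePowerState.ALL is unchanged:
#
#   >>> RUNNING
#   1
#   >>> STATE_MAP[RUNNING]
#   'running'
#
# i.e. the integer constants below double as keys into STATE_MAP, which is
# defined at the end of this module.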
NOSTATE = fields.InstancePowerState.index(fields.InstancePowerState.NOSTATE) RUNNING = fields.InstancePowerState.index(fields.InstancePowerState.RUNNING) PAUSED = fields.InstancePowerState.index(fields.InstancePowerState.PAUSED) # the VM is powered off SHUTDOWN = fields.InstancePowerState.index(fields.InstancePowerState.SHUTDOWN) CRASHED = fields.InstancePowerState.index(fields.InstancePowerState.CRASHED) SUSPENDED = fields.InstancePowerState.index( fields.InstancePowerState.SUSPENDED) # TODO(justinsb): Power state really needs to be a proper class, # so that we're not locked into the libvirt status codes and can put mapping # logic here rather than spread throughout the code STATE_MAP = {fields.InstancePowerState.index(state): state for state in fields.InstancePowerState.ALL if state != fields.InstancePowerState._UNUSED} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/compute/provider_tree.py0000664000175000017500000007540200000000000020440 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """An object describing a tree of resource providers and their inventories. This object is not stored in the Nova API or cell databases; rather, this object is constructed and used by the scheduler report client to track state changes for resources on the hypervisor or baremetal node. As such, there are no remoteable methods nor is there any interaction with the nova.db modules. """ import collections import copy import os_traits from oslo_concurrency import lockutils from oslo_log import log as logging from oslo_utils import uuidutils import six from nova.i18n import _ LOG = logging.getLogger(__name__) _LOCK_NAME = 'provider-tree-lock' # Point-in-time representation of a resource provider in the tree. # Note that, whereas namedtuple enforces read-only-ness of instances as a # whole, nothing prevents modification of the internals of attributes of # complex types (children/inventory/traits/aggregates). However, any such # modifications still have no effect on the ProviderTree the instance came # from. Like, you can Sharpie a moustache on a Polaroid of my face, but that # doesn't make a moustache appear on my actual face. ProviderData = collections.namedtuple( 'ProviderData', ['uuid', 'name', 'generation', 'parent_uuid', 'inventory', 'traits', 'aggregates', 'resources']) class _Provider(object): """Represents a resource provider in the tree. All operations against the tree should be done using the ProviderTree interface, since it controls thread-safety. 
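A minimal usage sketch of that interface (illustrative only; the UUIDs
and the trait name are placeholders, not values from this module)::

    tree = ProviderTree()
    root_uuid = tree.new_root('compute1', uuid1)
    child_uuid = tree.new_child('numa0', root_uuid, uuid=uuid2)
    tree.update_traits(child_uuid, ['CUSTOM_FOO'])
    assert tree.data(child_uuid).parent_uuid == root_uuid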
""" def __init__(self, name, uuid=None, generation=None, parent_uuid=None): if uuid is None: uuid = uuidutils.generate_uuid() self.uuid = uuid self.name = name self.generation = generation self.parent_uuid = parent_uuid # Contains a dict, keyed by uuid of child resource providers having # this provider as a parent self.children = {} # dict of inventory records, keyed by resource class self.inventory = {} # Set of trait names self.traits = set() # Set of aggregate UUIDs self.aggregates = set() # dict of resource records, keyed by resource class # the value is the set of objects.Resource self.resources = {} @classmethod def from_dict(cls, pdict): """Factory method producing a _Provider based on a dict with appropriate keys. :param pdict: Dictionary representing a provider, with keys 'name', 'uuid', 'generation', 'parent_provider_uuid'. Of these, only 'name' is mandatory. """ return cls(pdict['name'], uuid=pdict.get('uuid'), generation=pdict.get('generation'), parent_uuid=pdict.get('parent_provider_uuid')) def data(self): inventory = copy.deepcopy(self.inventory) traits = copy.copy(self.traits) aggregates = copy.copy(self.aggregates) resources = copy.deepcopy(self.resources) return ProviderData( self.uuid, self.name, self.generation, self.parent_uuid, inventory, traits, aggregates, resources) def get_provider_uuids(self): """Returns a list, in top-down traversal order, of UUIDs of this provider and all its descendants. """ ret = [self.uuid] for child in self.children.values(): ret.extend(child.get_provider_uuids()) return ret def find(self, search): if self.name == search or self.uuid == search: return self if search in self.children: return self.children[search] if self.children: for child in self.children.values(): # We already searched for the child by UUID above, so here we # just check for a child name match if child.name == search: return child subchild = child.find(search) if subchild: return subchild return None def add_child(self, provider): self.children[provider.uuid] = provider def remove_child(self, provider): if provider.uuid in self.children: del self.children[provider.uuid] def has_inventory(self): """Returns whether the provider has any inventory records at all. """ return self.inventory != {} def has_inventory_changed(self, new): """Returns whether the inventory has changed for the provider.""" cur = self.inventory if set(cur) != set(new): return True for key, cur_rec in cur.items(): new_rec = new[key] # If the new record contains new fields (e.g. we're adding on # `reserved` or `allocation_ratio`) we want to make sure to pick # them up if set(new_rec) - set(cur_rec): return True for rec_key, cur_val in cur_rec.items(): if rec_key not in new_rec: # Deliberately don't want to compare missing keys in the # *new* inventory record. For instance, we will be passing # in fields like allocation_ratio in the current dict but # the resource tracker may only pass in the total field. We # want to return that inventory didn't change when the # total field values are the same even if the # allocation_ratio field is missing from the new record. 
continue if new_rec[rec_key] != cur_val: return True return False def _update_generation(self, generation, operation): if generation is not None and generation != self.generation: msg_args = { 'rp_uuid': self.uuid, 'old': self.generation, 'new': generation, 'op': operation } LOG.debug("Updating resource provider %(rp_uuid)s generation " "from %(old)s to %(new)s during operation: %(op)s", msg_args) self.generation = generation def update_inventory(self, inventory, generation): """Update the stored inventory for the provider along with a resource provider generation to set the provider to. The method returns whether the inventory has changed. """ self._update_generation(generation, 'update_inventory') if self.has_inventory_changed(inventory): LOG.debug('Updating inventory in ProviderTree for provider %s ' 'with inventory: %s', self.uuid, inventory) self.inventory = copy.deepcopy(inventory) return True LOG.debug('Inventory has not changed in ProviderTree for provider: %s', self.uuid) return False def have_traits_changed(self, new): """Returns whether the provider's traits have changed.""" return set(new) != self.traits def update_traits(self, new, generation=None): """Update the stored traits for the provider along with a resource provider generation to set the provider to. The method returns whether the traits have changed. """ self._update_generation(generation, 'update_traits') if self.have_traits_changed(new): self.traits = set(new) # create a copy of the new traits return True return False def has_traits(self, traits): """Query whether the provider has certain traits. :param traits: Iterable of string trait names to look for. :return: True if this provider has *all* of the specified traits; False if any of the specified traits are absent. Returns True if the traits parameter is empty. """ return not bool(set(traits) - self.traits) def have_aggregates_changed(self, new): """Returns whether the provider's aggregates have changed.""" return set(new) != self.aggregates def update_aggregates(self, new, generation=None): """Update the stored aggregates for the provider along with a resource provider generation to set the provider to. The method returns whether the aggregates have changed. """ self._update_generation(generation, 'update_aggregates') if self.have_aggregates_changed(new): self.aggregates = set(new) # create a copy of the new aggregates return True return False def in_aggregates(self, aggregates): """Query whether the provider is a member of certain aggregates. :param aggregates: Iterable of string aggregate UUIDs to look for. :return: True if this provider is a member of *all* of the specified aggregates; False if any of the specified aggregates are absent. Returns True if the aggregates parameter is empty. """ return not bool(set(aggregates) - self.aggregates) def have_resources_changed(self, new): """Returns whether the resources have changed for the provider.""" return self.resources != new def update_resources(self, resources): """Update the stored resources for the provider. The method returns whether the resources have changed. 
""" if self.have_resources_changed(resources): self.resources = copy.deepcopy(resources) return True return False class ProviderTree(object): def __init__(self): """Create an empty provider tree.""" self.lock = lockutils.internal_lock(_LOCK_NAME) self.roots_by_uuid = {} self.roots_by_name = {} @property def roots(self): return six.itervalues(self.roots_by_uuid) def get_provider_uuids(self, name_or_uuid=None): """Return a list, in top-down traversable order, of the UUIDs of all providers (in a (sub)tree). :param name_or_uuid: Provider name or UUID representing the root of a (sub)tree for which to return UUIDs. If not specified, the method returns all UUIDs in the ProviderTree. """ if name_or_uuid is not None: with self.lock: return self._find_with_lock(name_or_uuid).get_provider_uuids() # If no name_or_uuid, get UUIDs for all providers recursively. ret = [] with self.lock: for root in self.roots: ret.extend(root.get_provider_uuids()) return ret def get_provider_uuids_in_tree(self, name_or_uuid): """Returns a list, in top-down traversable order, of the UUIDs of all providers in the whole tree of which the provider identified by ``name_or_uuid`` is a member. :param name_or_uuid: Provider name or UUID representing any member of whole tree for which to return UUIDs. """ with self.lock: return self._find_with_lock( name_or_uuid, return_root=True).get_provider_uuids() def populate_from_iterable(self, provider_dicts): """Populates this ProviderTree from an iterable of provider dicts. This method will ADD providers to the tree if provider_dicts contains providers that do not exist in the tree already and will REPLACE providers in the tree if provider_dicts contains providers that are already in the tree. This method will NOT remove providers from the tree that are not in provider_dicts. But if a parent provider is in provider_dicts and the descendents are not, this method will remove the descendents from the tree. :param provider_dicts: An iterable of dicts of resource provider information. If a provider is present in provider_dicts, all its descendants must also be present. :raises: ValueError if any provider in provider_dicts has a parent that is not in this ProviderTree or elsewhere in provider_dicts. """ if not provider_dicts: return # Map of provider UUID to provider dict for the providers we're # *adding* via this method. to_add_by_uuid = {pd['uuid']: pd for pd in provider_dicts} with self.lock: # Sanity check for orphans. Every parent UUID must either be None # (the provider is a root), or be in the tree already, or exist as # a key in to_add_by_uuid (we're adding it). all_parents = set([None]) | set(to_add_by_uuid) # NOTE(efried): Can't use get_provider_uuids directly because we're # already under lock. for root in self.roots: all_parents |= set(root.get_provider_uuids()) missing_parents = set() for pd in to_add_by_uuid.values(): parent_uuid = pd.get('parent_provider_uuid') if parent_uuid not in all_parents: missing_parents.add(parent_uuid) if missing_parents: raise ValueError( _("The following parents were not found: %s") % ', '.join(missing_parents)) # Ready to do the work. # Use to_add_by_uuid to keep track of which providers are left to # be added. while to_add_by_uuid: # Find a provider that's suitable to inject. for uuid, pd in to_add_by_uuid.items(): # Roots are always okay to inject (None won't be a key in # to_add_by_uuid). Otherwise, we have to make sure we # already added the parent (and, by recursion, all # ancestors) if present in the input. 
parent_uuid = pd.get('parent_provider_uuid') if parent_uuid not in to_add_by_uuid: break else: # This should never happen - we already ensured all parents # exist in the tree, which means we can't have any branches # that don't wind up at the root, which means we can't have # cycles. But to quell the paranoia... raise ValueError( _("Unexpectedly failed to find parents already in the " "tree for any of the following: %s") % ','.join(set(to_add_by_uuid))) # Add or replace the provider, either as a root or under its # parent try: self._remove_with_lock(uuid) except ValueError: # Wasn't there in the first place - fine. pass provider = _Provider.from_dict(pd) if parent_uuid is None: self.roots_by_uuid[provider.uuid] = provider self.roots_by_name[provider.name] = provider else: parent = self._find_with_lock(parent_uuid) parent.add_child(provider) # Remove this entry to signify we're done with it. to_add_by_uuid.pop(uuid) def _remove_with_lock(self, name_or_uuid): found = self._find_with_lock(name_or_uuid) if found.parent_uuid: parent = self._find_with_lock(found.parent_uuid) parent.remove_child(found) else: del self.roots_by_uuid[found.uuid] del self.roots_by_name[found.name] def remove(self, name_or_uuid): """Safely removes the provider identified by the supplied name_or_uuid parameter and all of its children from the tree. :raises ValueError if name_or_uuid points to a non-existing provider. :param name_or_uuid: Either name or UUID of the resource provider to remove from the tree. """ with self.lock: self._remove_with_lock(name_or_uuid) def new_root(self, name, uuid, generation=None): """Adds a new root provider to the tree, returning its UUID. :param name: The name of the new root provider :param uuid: The UUID of the new root provider :param generation: Generation to set for the new root provider :returns: the UUID of the new provider :raises: ValueError if a provider with the specified uuid already exists in the tree. """ with self.lock: exists = True try: self._find_with_lock(uuid) except ValueError: exists = False if exists: err = _("Provider %s already exists.") raise ValueError(err % uuid) p = _Provider(name, uuid=uuid, generation=generation) self.roots_by_uuid[uuid] = p self.roots_by_name[name] = p return p.uuid def _find_with_lock(self, name_or_uuid, return_root=False): # Optimization for large number of roots (e.g. ironic): if name_or_uuid # represents a root, this is O(1). found = self.roots_by_uuid.get(name_or_uuid) if found: return found found = self.roots_by_name.get(name_or_uuid) if found: return found # Okay, it's a child; do it the hard way. for root in self.roots: found = root.find(name_or_uuid) if found: return root if return_root else found raise ValueError(_("No such provider %s") % name_or_uuid) def data(self, name_or_uuid): """Return a point-in-time copy of the specified provider's data. :param name_or_uuid: Either name or UUID of the resource provider whose data is to be returned. :return: ProviderData object representing the specified provider. :raises: ValueError if a provider with name_or_uuid was not found in the tree. """ with self.lock: return self._find_with_lock(name_or_uuid).data() def exists(self, name_or_uuid): """Given either a name or a UUID, return True if the tree contains the provider, False otherwise. """ with self.lock: try: self._find_with_lock(name_or_uuid) return True except ValueError: return False def new_child(self, name, parent, uuid=None, generation=None): """Creates a new child provider with the given name and uuid under the given parent. 
:param name: The name of the new child provider :param parent: Either name or UUID of the parent provider :param uuid: The UUID of the new child provider :param generation: Generation to set for the new child provider :returns: the UUID of the new provider :raises ValueError if a provider with the specified uuid or name already exists; or if parent_uuid points to a nonexistent provider. """ with self.lock: try: self._find_with_lock(uuid or name) except ValueError: pass else: err = _("Provider %s already exists.") raise ValueError(err % (uuid or name)) parent_node = self._find_with_lock(parent) p = _Provider(name, uuid, generation, parent_node.uuid) parent_node.add_child(p) return p.uuid def has_inventory(self, name_or_uuid): """Returns True if the provider identified by name_or_uuid has any inventory records at all. :raises: ValueError if a provider with uuid was not found in the tree. :param name_or_uuid: Either name or UUID of the resource provider """ with self.lock: p = self._find_with_lock(name_or_uuid) return p.has_inventory() def has_inventory_changed(self, name_or_uuid, inventory): """Returns True if the supplied inventory is different for the provider with the supplied name or UUID. :raises: ValueError if a provider with name_or_uuid was not found in the tree. :param name_or_uuid: Either name or UUID of the resource provider to query inventory for. :param inventory: dict, keyed by resource class, of inventory information. """ with self.lock: provider = self._find_with_lock(name_or_uuid) return provider.has_inventory_changed(inventory) def update_inventory(self, name_or_uuid, inventory, generation=None): """Given a name or UUID of a provider and a dict of inventory resource records, update the provider's inventory and set the provider's generation. :returns: True if the inventory has changed. :note: The provider's generation is always set to the supplied generation, even if there were no changes to the inventory. :raises: ValueError if a provider with name_or_uuid was not found in the tree. :param name_or_uuid: Either name or UUID of the resource provider to update inventory for. :param inventory: dict, keyed by resource class, of inventory information. :param generation: The resource provider generation to set. If not specified, the provider's generation is not changed. """ with self.lock: provider = self._find_with_lock(name_or_uuid) return provider.update_inventory(inventory, generation) def has_sharing_provider(self, resource_class): """Returns whether the specified provider_tree contains any sharing providers of inventory of the specified resource_class. """ for rp_uuid in self.get_provider_uuids(): pdata = self.data(rp_uuid) has_rc = resource_class in pdata.inventory is_sharing = os_traits.MISC_SHARES_VIA_AGGREGATE in pdata.traits if has_rc and is_sharing: return True return False def has_traits(self, name_or_uuid, traits): """Given a name or UUID of a provider, query whether that provider has *all* of the specified traits. :raises: ValueError if a provider with name_or_uuid was not found in the tree. :param name_or_uuid: Either name or UUID of the resource provider to query for traits. :param traits: Iterable of string trait names to search for. :return: True if this provider has *all* of the specified traits; False if any of the specified traits are absent. Returns True if the traits parameter is empty, even if the provider has no traits. 
""" with self.lock: provider = self._find_with_lock(name_or_uuid) return provider.has_traits(traits) def have_traits_changed(self, name_or_uuid, traits): """Returns True if the specified traits list is different for the provider with the specified name or UUID. :raises: ValueError if a provider with name_or_uuid was not found in the tree. :param name_or_uuid: Either name or UUID of the resource provider to query traits for. :param traits: Iterable of string trait names to compare against the provider's traits. """ with self.lock: provider = self._find_with_lock(name_or_uuid) return provider.have_traits_changed(traits) def update_traits(self, name_or_uuid, traits, generation=None): """Given a name or UUID of a provider and an iterable of string trait names, update the provider's traits and set the provider's generation. :returns: True if the traits list has changed. :note: The provider's generation is always set to the supplied generation, even if there were no changes to the traits. :raises: ValueError if a provider with name_or_uuid was not found in the tree. :param name_or_uuid: Either name or UUID of the resource provider to update traits for. :param traits: Iterable of string trait names to set. :param generation: The resource provider generation to set. If None, the provider's generation is not changed. """ with self.lock: provider = self._find_with_lock(name_or_uuid) return provider.update_traits(traits, generation=generation) def add_traits(self, name_or_uuid, *traits): """Set traits on a provider, without affecting existing traits. :param name_or_uuid: The name or UUID of the provider whose traits are to be affected. :param traits: String names of traits to be added. """ if not traits: return with self.lock: provider = self._find_with_lock(name_or_uuid) final_traits = provider.traits | set(traits) provider.update_traits(final_traits) def remove_traits(self, name_or_uuid, *traits): """Unset traits on a provider, without affecting other existing traits. :param name_or_uuid: The name or UUID of the provider whose traits are to be affected. :param traits: String names of traits to be removed. """ if not traits: return with self.lock: provider = self._find_with_lock(name_or_uuid) final_traits = provider.traits - set(traits) provider.update_traits(final_traits) def in_aggregates(self, name_or_uuid, aggregates): """Given a name or UUID of a provider, query whether that provider is a member of *all* the specified aggregates. :raises: ValueError if a provider with name_or_uuid was not found in the tree. :param name_or_uuid: Either name or UUID of the resource provider to query for aggregates. :param aggregates: Iterable of string aggregate UUIDs to search for. :return: True if this provider is associated with *all* of the specified aggregates; False if any of the specified aggregates are absent. Returns True if the aggregates parameter is empty, even if the provider has no aggregate associations. """ with self.lock: provider = self._find_with_lock(name_or_uuid) return provider.in_aggregates(aggregates) def have_aggregates_changed(self, name_or_uuid, aggregates): """Returns True if the specified aggregates list is different for the provider with the specified name or UUID. :raises: ValueError if a provider with name_or_uuid was not found in the tree. :param name_or_uuid: Either name or UUID of the resource provider to query aggregates for. :param aggregates: Iterable of string aggregate UUIDs to compare against the provider's aggregates. 
""" with self.lock: provider = self._find_with_lock(name_or_uuid) return provider.have_aggregates_changed(aggregates) def update_aggregates(self, name_or_uuid, aggregates, generation=None): """Given a name or UUID of a provider and an iterable of string aggregate UUIDs, update the provider's aggregates and set the provider's generation. :returns: True if the aggregates list has changed. :note: The provider's generation is always set to the supplied generation, even if there were no changes to the aggregates. :raises: ValueError if a provider with name_or_uuid was not found in the tree. :param name_or_uuid: Either name or UUID of the resource provider to update aggregates for. :param aggregates: Iterable of string aggregate UUIDs to set. :param generation: The resource provider generation to set. If None, the provider's generation is not changed. """ with self.lock: provider = self._find_with_lock(name_or_uuid) return provider.update_aggregates(aggregates, generation=generation) def add_aggregates(self, name_or_uuid, *aggregates): """Set aggregates on a provider, without affecting existing aggregates. :param name_or_uuid: The name or UUID of the provider whose aggregates are to be affected. :param aggregates: String UUIDs of aggregates to be added. """ with self.lock: provider = self._find_with_lock(name_or_uuid) final_aggs = provider.aggregates | set(aggregates) provider.update_aggregates(final_aggs) def remove_aggregates(self, name_or_uuid, *aggregates): """Unset aggregates on a provider, without affecting other existing aggregates. :param name_or_uuid: The name or UUID of the provider whose aggregates are to be affected. :param aggregates: String UUIDs of aggregates to be removed. """ with self.lock: provider = self._find_with_lock(name_or_uuid) final_aggs = provider.aggregates - set(aggregates) provider.update_aggregates(final_aggs) def update_resources(self, name_or_uuid, resources): """Given a name or UUID of a provider and a dict of resources, update the provider's resources. :param name_or_uuid: The name or UUID of the provider whose resources are to be affected. :param resources: A dict keyed by resource class, and the value is a set of objects.Resource instance. :returns: True if the resources are updated else False """ with self.lock: provider = self._find_with_lock(name_or_uuid) return provider.update_resources(resources) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/compute/resource_tracker.py0000664000175000017500000024521400000000000021131 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Track resources like memory and disk for a compute host. Provides the scheduler with useful information about availability through the ComputeNode model. 
""" import collections import copy from keystoneauth1 import exceptions as ks_exc import os_traits from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import excutils import retrying from nova.compute import claims from nova.compute import monitors from nova.compute import stats as compute_stats from nova.compute import task_states from nova.compute import utils as compute_utils from nova.compute import vm_states import nova.conf from nova import exception from nova.i18n import _ from nova import objects from nova.objects import base as obj_base from nova.objects import migration as migration_obj from nova.pci import manager as pci_manager from nova.pci import request as pci_request from nova import rpc from nova.scheduler.client import report from nova import utils from nova.virt import hardware CONF = nova.conf.CONF LOG = logging.getLogger(__name__) COMPUTE_RESOURCE_SEMAPHORE = "compute_resources" def _instance_in_resize_state(instance): """Returns True if the instance is in one of the resizing states. :param instance: `nova.objects.Instance` object """ vm = instance.vm_state task = instance.task_state if vm == vm_states.RESIZED: return True if vm in [vm_states.ACTIVE, vm_states.STOPPED] and task in ( task_states.resizing_states + task_states.rebuild_states): return True return False def _instance_is_live_migrating(instance): vm = instance.vm_state task = instance.task_state if task == task_states.MIGRATING and vm in [vm_states.ACTIVE, vm_states.PAUSED]: return True return False class ResourceTracker(object): """Compute helper class for keeping track of resource usage as instances are built and destroyed. """ def __init__(self, host, driver, reportclient=None): self.host = host self.driver = driver self.pci_tracker = None # Dict of objects.ComputeNode objects, keyed by nodename self.compute_nodes = {} # Dict of Stats objects, keyed by nodename self.stats = collections.defaultdict(compute_stats.Stats) # Set of UUIDs of instances tracked on this host. self.tracked_instances = set() self.tracked_migrations = {} self.is_bfv = {} # dict, keyed by instance uuid, to is_bfv boolean monitor_handler = monitors.MonitorHandler(self) self.monitors = monitor_handler.monitors self.old_resources = collections.defaultdict(objects.ComputeNode) self.reportclient = reportclient or report.SchedulerReportClient() self.ram_allocation_ratio = CONF.ram_allocation_ratio self.cpu_allocation_ratio = CONF.cpu_allocation_ratio self.disk_allocation_ratio = CONF.disk_allocation_ratio self.provider_tree = None # Dict of assigned_resources, keyed by resource provider uuid # the value is a dict again, keyed by resource class # and value of this sub-dict is a set of Resource obj self.assigned_resources = collections.defaultdict( lambda: collections.defaultdict(set)) @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE, fair=True) def instance_claim(self, context, instance, nodename, allocations, limits=None): """Indicate that some resources are needed for an upcoming compute instance build operation. This should be called before the compute node is about to perform an instance build operation that will consume additional resources. :param context: security context :param instance: instance to reserve resources for. :type instance: nova.objects.instance.Instance object :param nodename: The Ironic nodename selected by the scheduler :param allocations: The placement allocation records for the instance. :param limits: Dict of oversubscription limits for memory, disk, and CPUs. 
:returns: A Claim ticket representing the reserved resources. It can be used to revert the resource usage if an error occurs during the instance build. """ if self.disabled(nodename): # instance_claim() was called before update_available_resource() # (which ensures that a compute node exists for nodename). We # shouldn't get here but in case we do, just set the instance's # host and nodename attribute (probably incorrect) and return a # NoopClaim. # TODO(jaypipes): Remove all the disabled junk from the resource # tracker. Servicegroup API-level active-checking belongs in the # nova-compute manager. self._set_instance_host_and_node(instance, nodename) return claims.NopClaim() # sanity checks: if instance.host: LOG.warning("Host field should not be set on the instance " "until resources have been claimed.", instance=instance) if instance.node: LOG.warning("Node field should not be set on the instance " "until resources have been claimed.", instance=instance) cn = self.compute_nodes[nodename] pci_requests = instance.pci_requests claim = claims.Claim(context, instance, nodename, self, cn, pci_requests, limits=limits) # self._set_instance_host_and_node() will save instance to the DB # so set instance.numa_topology first. We need to make sure # that numa_topology is saved while under COMPUTE_RESOURCE_SEMAPHORE # so that the resource audit knows about any cpus we've pinned. instance_numa_topology = claim.claimed_numa_topology instance.numa_topology = instance_numa_topology self._set_instance_host_and_node(instance, nodename) if self.pci_tracker: # NOTE(jaypipes): ComputeNode.pci_device_pools is set below # in _update_usage_from_instance(). self.pci_tracker.claim_instance(context, pci_requests, instance_numa_topology) claimed_resources = self._claim_resources(allocations) instance.resources = claimed_resources # Mark resources in-use and update stats self._update_usage_from_instance(context, instance, nodename) elevated = context.elevated() # persist changes to the compute node: self._update(elevated, cn) return claim @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE, fair=True) def rebuild_claim(self, context, instance, nodename, allocations, limits=None, image_meta=None, migration=None): """Create a claim for a rebuild operation.""" instance_type = instance.flavor return self._move_claim(context, instance, instance_type, nodename, migration, allocations, move_type='evacuation', limits=limits, image_meta=image_meta) @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE, fair=True) def resize_claim(self, context, instance, instance_type, nodename, migration, allocations, image_meta=None, limits=None): """Create a claim for a resize or cold-migration move. Note that this code assumes ``instance.new_flavor`` is set when resizing with a new flavor. """ return self._move_claim(context, instance, instance_type, nodename, migration, allocations, image_meta=image_meta, limits=limits) @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE, fair=True) def live_migration_claim(self, context, instance, nodename, migration, limits, allocs): """Builds a MoveClaim for a live migration. :param context: The request context. :param instance: The instance being live migrated. :param nodename: The nodename of the destination host. :param migration: The Migration object associated with this live migration. :param limits: A SchedulerLimits object from when the scheduler selected the destination host. :param allocs: The placement allocation records for the instance. :returns: A MoveClaim for this live migration. 
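An illustrative shape for ``allocs`` (placeholder resource classes and
amounts; the real records come from placement)::

    {$RP_UUID: {'resources': {'VCPU': 2, 'MEMORY_MB': 2048}}}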
""" # Flavor and image cannot change during a live migration. instance_type = instance.flavor image_meta = instance.image_meta return self._move_claim(context, instance, instance_type, nodename, migration, allocs, move_type='live-migration', image_meta=image_meta, limits=limits) def _move_claim(self, context, instance, new_instance_type, nodename, migration, allocations, move_type=None, image_meta=None, limits=None): """Indicate that resources are needed for a move to this host. Move can be either a migrate/resize, live-migrate or an evacuate/rebuild operation. :param context: security context :param instance: instance object to reserve resources for :param new_instance_type: new instance_type being resized to :param nodename: The Ironic nodename selected by the scheduler :param migration: A migration object if one was already created elsewhere for this operation (otherwise None) :param allocations: the placement allocation records. :param move_type: move type - can be one of 'migration', 'resize', 'live-migration', 'evacuate' :param image_meta: instance image metadata :param limits: Dict of oversubscription limits for memory, disk, and CPUs :returns: A Claim ticket representing the reserved resources. This should be turned into finalize a resource claim or free resources after the compute operation is finished. """ image_meta = image_meta or {} if migration: self._claim_existing_migration(migration, nodename) else: migration = self._create_migration(context, instance, new_instance_type, nodename, move_type) if self.disabled(nodename): # compute_driver doesn't support resource tracking, just # generate the migration record and continue the resize: return claims.NopClaim(migration=migration) cn = self.compute_nodes[nodename] # TODO(moshele): we are recreating the pci requests even if # there was no change on resize. This will cause allocating # the old/new pci device in the resize phase. In the future # we would like to optimise this. new_pci_requests = pci_request.get_pci_requests_from_flavor( new_instance_type) new_pci_requests.instance_uuid = instance.uuid # On resize merge the SR-IOV ports pci_requests # with the new instance flavor pci_requests. if instance.pci_requests: for request in instance.pci_requests.requests: if request.source == objects.InstancePCIRequest.NEUTRON_PORT: new_pci_requests.requests.append(request) claim = claims.MoveClaim(context, instance, nodename, new_instance_type, image_meta, self, cn, new_pci_requests, migration, limits=limits) claimed_pci_devices_objs = [] # TODO(artom) The second part of this condition should not be # necessary, but since SRIOV live migration is currently handled # elsewhere - see for example _claim_pci_for_instance_vifs() in the # compute manager - we don't do any PCI claims if this is a live # migration to avoid stepping on that code's toes. Ideally, # MoveClaim/this method would be used for all live migration resource # claims. if self.pci_tracker and migration.migration_type != 'live-migration': # NOTE(jaypipes): ComputeNode.pci_device_pools is set below # in _update_usage_from_instance(). 
claimed_pci_devices_objs = self.pci_tracker.claim_instance( context, new_pci_requests, claim.claimed_numa_topology) claimed_pci_devices = objects.PciDeviceList( objects=claimed_pci_devices_objs) claimed_resources = self._claim_resources(allocations) old_resources = instance.resources # TODO(jaypipes): Move claimed_numa_topology out of the Claim's # constructor flow so the Claim constructor only tests whether # resources can be claimed, not consume the resources directly. mig_context = objects.MigrationContext( context=context, instance_uuid=instance.uuid, migration_id=migration.id, old_numa_topology=instance.numa_topology, new_numa_topology=claim.claimed_numa_topology, old_pci_devices=instance.pci_devices, new_pci_devices=claimed_pci_devices, old_pci_requests=instance.pci_requests, new_pci_requests=new_pci_requests, old_resources=old_resources, new_resources=claimed_resources) instance.migration_context = mig_context instance.save() # Mark the resources in-use for the resize landing on this # compute host: self._update_usage_from_migration(context, instance, migration, nodename) elevated = context.elevated() self._update(elevated, cn) return claim def _create_migration(self, context, instance, new_instance_type, nodename, move_type=None): """Create a migration record for the upcoming resize. This should be done while the COMPUTE_RESOURCES_SEMAPHORE is held so the resource claim will not be lost if the audit process starts. """ migration = objects.Migration(context=context.elevated()) migration.dest_compute = self.host migration.dest_node = nodename migration.dest_host = self.driver.get_host_ip_addr() migration.old_instance_type_id = instance.flavor.id migration.new_instance_type_id = new_instance_type.id migration.status = 'pre-migrating' migration.instance_uuid = instance.uuid migration.source_compute = instance.host migration.source_node = instance.node if move_type: migration.migration_type = move_type else: migration.migration_type = migration_obj.determine_migration_type( migration) migration.create() return migration def _claim_existing_migration(self, migration, nodename): """Make an existing migration record count for resource tracking. If a migration record was created already before the request made it to this compute host, only set up the migration so it's included in resource tracking. This should be done while the COMPUTE_RESOURCES_SEMAPHORE is held. """ migration.dest_compute = self.host migration.dest_node = nodename migration.dest_host = self.driver.get_host_ip_addr() # NOTE(artom) Migration objects for live migrations are created with # status 'accepted' by the conductor in live_migrate_instance() and do # not have a 'pre-migrating' status. if migration.migration_type != 'live-migration': migration.status = 'pre-migrating' migration.save() def _claim_resources(self, allocations): """Claim resources according to assigned resources from allocations and available resources in provider tree """ if not allocations: return None claimed_resources = [] for rp_uuid, alloc_dict in allocations.items(): try: provider_data = self.provider_tree.data(rp_uuid) except ValueError: # If an instance is in evacuating, it will hold new and old # allocations, but the provider UUIDs in old allocations won't # exist in the current provider tree, so skip it. 
LOG.debug("Skip claiming resources of provider %(rp_uuid)s, " "since the provider UUIDs are not in provider tree.", {'rp_uuid': rp_uuid}) continue for rc, amount in alloc_dict['resources'].items(): if rc not in provider_data.resources: # This means we don't use provider_data.resources to # assign this kind of resource class, such as 'VCPU' for # now, otherwise the provider_data.resources will be # populated with this resource class when updating # provider tree. continue assigned = self.assigned_resources[rp_uuid][rc] free = provider_data.resources[rc] - assigned if amount > len(free): reason = (_("Needed %(amount)d units of resource class " "%(rc)s, but %(avail)d are available.") % {'amount': amount, 'rc': rc, 'avail': len(free)}) raise exception.ComputeResourcesUnavailable(reason=reason) for i in range(amount): claimed_resources.append(free.pop()) if claimed_resources: self._add_assigned_resources(claimed_resources) return objects.ResourceList(objects=claimed_resources) def _populate_assigned_resources(self, context, instance_by_uuid): """Populate self.assigned_resources organized by resource class and reource provider uuid, which is as following format: { $RP_UUID: { $RESOURCE_CLASS: [objects.Resource, ...], $RESOURCE_CLASS: [...]}, ...} """ resources = [] # Get resources assigned to migrations for mig in self.tracked_migrations.values(): mig_ctx = mig.instance.migration_context # We might have a migration whose instance hasn't arrived here yet. # Ignore it. if not mig_ctx: continue if mig.source_compute == self.host and 'old_resources' in mig_ctx: resources.extend(mig_ctx.old_resources or []) if mig.dest_compute == self.host and 'new_resources' in mig_ctx: resources.extend(mig_ctx.new_resources or []) # Get resources assigned to instances for uuid in self.tracked_instances: resources.extend(instance_by_uuid[uuid].resources or []) self.assigned_resources.clear() self._add_assigned_resources(resources) def _check_resources(self, context): """Check if there are assigned resources not found in provider tree""" notfound = set() for rp_uuid in self.assigned_resources: provider_data = self.provider_tree.data(rp_uuid) for rc, assigned in self.assigned_resources[rp_uuid].items(): notfound |= (assigned - provider_data.resources[rc]) if not notfound: return # This only happens when assigned resources are removed # from the configuration and the compute service is SIGHUP'd # or restarted. 
resources = [(res.identifier, res.resource_class) for res in notfound] reason = _("The following resources are assigned to instances, " "but were not listed in the configuration: %s " "Please check if this will influence your instances, " "and restore your configuration if necessary") % resources raise exception.AssignedResourceNotFound(reason=reason) def _release_assigned_resources(self, resources): """Remove resources from self.assigned_resources.""" if not resources: return for resource in resources: rp_uuid = resource.provider_uuid rc = resource.resource_class try: self.assigned_resources[rp_uuid][rc].remove(resource) except KeyError: LOG.warning("Release resource %(rc)s: %(id)s of provider " "%(rp_uuid)s, not tracked in " "ResourceTracker.assigned_resources.", {'rc': rc, 'id': resource.identifier, 'rp_uuid': rp_uuid}) def _add_assigned_resources(self, resources): """Add resources to self.assigned_resources""" if not resources: return for resource in resources: rp_uuid = resource.provider_uuid rc = resource.resource_class self.assigned_resources[rp_uuid][rc].add(resource) def _set_instance_host_and_node(self, instance, nodename): """Tag the instance as belonging to this host. This should be done while the COMPUTE_RESOURCES_SEMAPHORE is held so the resource claim will not be lost if the audit process starts. """ # NOTE(mriedem): ComputeManager._nil_out_instance_obj_host_and_node is # somewhat tightly coupled to the fields set in this method so if this # method changes that method might need to be updated. instance.host = self.host instance.launched_on = self.host instance.node = nodename instance.save() def _unset_instance_host_and_node(self, instance): """Untag the instance so it no longer belongs to the host. This should be done while the COMPUTE_RESOURCES_SEMAPHORE is held so the resource claim will not be lost if the audit process starts. """ instance.host = None instance.node = None instance.save() @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE, fair=True) def abort_instance_claim(self, context, instance, nodename): """Remove usage from the given instance.""" self._update_usage_from_instance(context, instance, nodename, is_removed=True) instance.clear_numa_topology() self._unset_instance_host_and_node(instance) self._update(context.elevated(), self.compute_nodes[nodename]) def _drop_pci_devices(self, instance, nodename, prefix): if self.pci_tracker: # free old/new allocated pci devices pci_devices = self._get_migration_context_resource( 'pci_devices', instance, prefix=prefix) if pci_devices: for pci_device in pci_devices: self.pci_tracker.free_device(pci_device, instance) dev_pools_obj = self.pci_tracker.stats.to_device_pools_obj() self.compute_nodes[nodename].pci_device_pools = dev_pools_obj @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE, fair=True) def drop_move_claim_at_source(self, context, instance, migration): """Drop a move claim after confirming a resize or cold migration.""" migration.status = 'confirmed' migration.save() self._drop_move_claim( context, instance, migration.source_node, instance.old_flavor, prefix='old_') # NOTE(stephenfin): Unsetting this is unnecessary for cross-cell # resize, since the source and dest instance objects are different and # the source instance will be deleted soon. It's easier to just do it # though. 
instance.drop_migration_context() @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE, fair=True) def drop_move_claim_at_dest(self, context, instance, migration): """Drop a move claim after reverting a resize or cold migration.""" # NOTE(stephenfin): This runs on the destination, before we return to # the source and resume the instance there. As such, the migration # isn't really really reverted yet, but this status is what we use to # indicate that we no longer needs to account for usage on this host migration.status = 'reverted' migration.save() self._drop_move_claim( context, instance, migration.dest_node, instance.new_flavor, prefix='new_') instance.revert_migration_context() instance.save(expected_task_state=[task_states.RESIZE_REVERTING]) @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE, fair=True) def drop_move_claim(self, context, instance, nodename, instance_type=None, prefix='new_'): self._drop_move_claim( context, instance, nodename, instance_type, prefix='new_') def _drop_move_claim( self, context, instance, nodename, instance_type=None, prefix='new_', ): """Remove usage for an incoming/outgoing migration. :param context: Security context. :param instance: The instance whose usage is to be removed. :param nodename: Host on which to remove usage. If the migration completed successfully, this is normally the source. If it did not complete successfully (failed or reverted), this is normally the destination. :param instance_type: The flavor that determines the usage to remove. If the migration completed successfully, this is the old flavor to be removed from the source. If the migration did not complete successfully, this is the new flavor to be removed from the destination. :param prefix: Prefix to use when accessing migration context attributes. 'old_' or 'new_', with 'new_' being the default. """ # Remove usage for an instance that is tracked in migrations, such as # on the dest node during revert resize. if instance['uuid'] in self.tracked_migrations: migration = self.tracked_migrations.pop(instance['uuid']) if not instance_type: instance_type = self._get_instance_type(instance, prefix, migration) # Remove usage for an instance that is not tracked in migrations (such # as on the source node after a migration). # NOTE(lbeliveau): On resize on the same node, the instance is # included in both tracked_migrations and tracked_instances. 
elif instance['uuid'] in self.tracked_instances: self.tracked_instances.remove(instance['uuid']) if instance_type is not None: numa_topology = self._get_migration_context_resource( 'numa_topology', instance, prefix=prefix) usage = self._get_usage_dict( instance_type, instance, numa_topology=numa_topology) self._drop_pci_devices(instance, nodename, prefix) resources = self._get_migration_context_resource( 'resources', instance, prefix=prefix) self._release_assigned_resources(resources) self._update_usage(usage, nodename, sign=-1) ctxt = context.elevated() self._update(ctxt, self.compute_nodes[nodename]) @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE, fair=True) def update_usage(self, context, instance, nodename): """Update the resource usage and stats after a change in an instance """ if self.disabled(nodename): return uuid = instance['uuid'] # don't update usage for this instance unless it submitted a resource # claim first: if uuid in self.tracked_instances: self._update_usage_from_instance(context, instance, nodename) self._update(context.elevated(), self.compute_nodes[nodename]) def disabled(self, nodename): return (nodename not in self.compute_nodes or not self.driver.node_is_available(nodename)) def _check_for_nodes_rebalance(self, context, resources, nodename): """Check if nodes rebalance has happened. The ironic driver maintains a hash ring mapping bare metal nodes to compute nodes. If a compute dies, the hash ring is rebuilt, and some of its bare metal nodes (more precisely, those not in ACTIVE state) are assigned to other computes. This method checks for this condition and adjusts the database accordingly. :param context: security context :param resources: initial values :param nodename: node name :returns: True if a suitable compute node record was found, else False """ if not self.driver.rebalances_nodes: return False # Its possible ironic just did a node re-balance, so let's # check if there is a compute node that already has the correct # hypervisor_hostname. We can re-use that rather than create a # new one and have to move existing placement allocations cn_candidates = objects.ComputeNodeList.get_by_hypervisor( context, nodename) if len(cn_candidates) == 1: cn = cn_candidates[0] LOG.info("ComputeNode %(name)s moving from %(old)s to %(new)s", {"name": nodename, "old": cn.host, "new": self.host}) cn.host = self.host self.compute_nodes[nodename] = cn self._copy_resources(cn, resources) self._setup_pci_tracker(context, cn, resources) self._update(context, cn) return True elif len(cn_candidates) > 1: LOG.error( "Found more than one ComputeNode for nodename %s. " "Please clean up the orphaned ComputeNode records in your DB.", nodename) return False def _init_compute_node(self, context, resources): """Initialize the compute node if it does not already exist. The resource tracker will be inoperable if compute_node is not defined. The compute_node will remain undefined if we fail to create it or if there is no associated service registered. If this method has to create a compute node it needs initial values - these come from resources. 
:param context: security context :param resources: initial values :returns: True if a new compute_nodes table record was created, False otherwise """ nodename = resources['hypervisor_hostname'] # if there is already a compute node just use resources # to initialize if nodename in self.compute_nodes: cn = self.compute_nodes[nodename] self._copy_resources(cn, resources) self._setup_pci_tracker(context, cn, resources) return False # now try to get the compute node record from the # database. If we get one we use resources to initialize cn = self._get_compute_node(context, nodename) if cn: self.compute_nodes[nodename] = cn self._copy_resources(cn, resources) self._setup_pci_tracker(context, cn, resources) return False if self._check_for_nodes_rebalance(context, resources, nodename): return False # there was no local copy and none in the database # so we need to create a new compute node. This needs # to be initialized with resource values. cn = objects.ComputeNode(context) cn.host = self.host self._copy_resources(cn, resources, initial=True) cn.create() # Only map the ComputeNode into compute_nodes if create() was OK # because if create() fails, on the next run through here nodename # would be in compute_nodes and we won't try to create again (because # of the logic above). self.compute_nodes[nodename] = cn LOG.info('Compute node record created for ' '%(host)s:%(node)s with uuid: %(uuid)s', {'host': self.host, 'node': nodename, 'uuid': cn.uuid}) self._setup_pci_tracker(context, cn, resources) return True def _setup_pci_tracker(self, context, compute_node, resources): if not self.pci_tracker: n_id = compute_node.id self.pci_tracker = pci_manager.PciDevTracker(context, node_id=n_id) if 'pci_passthrough_devices' in resources: dev_json = resources.pop('pci_passthrough_devices') self.pci_tracker.update_devices_from_hypervisor_resources( dev_json) dev_pools_obj = self.pci_tracker.stats.to_device_pools_obj() compute_node.pci_device_pools = dev_pools_obj def _copy_resources(self, compute_node, resources, initial=False): """Copy resource values to supplied compute_node.""" nodename = resources['hypervisor_hostname'] stats = self.stats[nodename] # purge old stats and init with anything passed in by the driver # NOTE(danms): Preserve 'failed_builds' across the stats clearing, # as that is not part of resources # TODO(danms): Stop doing this when we get a column to store this # directly prev_failed_builds = stats.get('failed_builds', 0) stats.clear() stats['failed_builds'] = prev_failed_builds stats.digest_stats(resources.get('stats')) compute_node.stats = stats # Update the allocation ratios for the related ComputeNode object # but only if the configured values are not the default; the # ComputeNode._from_db_object method takes care of providing default # allocation ratios when the config is left at the default, so # we'll really end up with something like a # ComputeNode.cpu_allocation_ratio of 16.0. We want to avoid # resetting the ComputeNode fields to None because that will make # the _resource_change method think something changed when really it # didn't. # NOTE(yikun): The CONF.initial_(cpu|ram|disk)_allocation_ratio would # be used when we initialize the compute node object, that means the # ComputeNode.(cpu|ram|disk)_allocation_ratio will be set to # CONF.initial_(cpu|ram|disk)_allocation_ratio when initial flag is # True. 
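The NOTE above (and the loop that follows) only copies an allocation ratio onto the compute node when the operator actually set one, falling back to the initial_* options when the record is first created. A small standalone sketch of that selection logic, using a plain dict in place of nova's CONF object; the helper name is made up:

def pick_allocation_ratios(conf, initial):
    # Return only the ratios that should be written to the compute node;
    # None (and the legacy 0.0) are skipped so the record keeps the
    # defaults instead of being reset.
    ratios = {}
    for res in ('cpu', 'ram', 'disk'):
        attr = '%s_allocation_ratio' % res
        key = 'initial_%s' % attr if initial else attr
        value = conf.get(key)
        if value not in (0.0, None):
            ratios[attr] = value
    return ratios


conf = {'initial_cpu_allocation_ratio': 4.0, 'cpu_allocation_ratio': None,
        'initial_ram_allocation_ratio': 1.0, 'ram_allocation_ratio': 0.0,
        'initial_disk_allocation_ratio': 1.0, 'disk_allocation_ratio': 2.0}

# First creation of the record: the initial_* values are used.
assert pick_allocation_ratios(conf, initial=True) == {
    'cpu_allocation_ratio': 4.0,
    'ram_allocation_ratio': 1.0,
    'disk_allocation_ratio': 1.0}
# Periodic updates: unset (None) and legacy 0.0 values are left alone.
assert pick_allocation_ratios(conf, initial=False) == {
    'disk_allocation_ratio': 2.0}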
for res in ('cpu', 'disk', 'ram'): attr = '%s_allocation_ratio' % res if initial: conf_alloc_ratio = getattr(CONF, 'initial_%s' % attr) else: conf_alloc_ratio = getattr(self, attr) # NOTE(yikun): In Stein version, we change the default value of # (cpu|ram|disk)_allocation_ratio from 0.0 to None, but we still # should allow 0.0 to keep compatibility, and this 0.0 condition # will be removed in the next version (T version). if conf_alloc_ratio not in (0.0, None): setattr(compute_node, attr, conf_alloc_ratio) # now copy rest to compute_node compute_node.update_from_virt_driver(resources) def remove_node(self, nodename): """Handle node removal/rebalance. Clean up any stored data about a compute node no longer managed by this host. """ self.stats.pop(nodename, None) self.compute_nodes.pop(nodename, None) self.old_resources.pop(nodename, None) def _get_host_metrics(self, context, nodename): """Get the metrics from monitors and notify information to message bus. """ metrics = objects.MonitorMetricList() metrics_info = {} for monitor in self.monitors: try: monitor.populate_metrics(metrics) except NotImplementedError: LOG.debug("The compute driver doesn't support host " "metrics for %(mon)s", {'mon': monitor}) except Exception as exc: LOG.warning("Cannot get the metrics from %(mon)s; " "error: %(exc)s", {'mon': monitor, 'exc': exc}) # TODO(jaypipes): Remove this when compute_node.metrics doesn't need # to be populated as a JSONified string. metric_list = metrics.to_list() if len(metric_list): metrics_info['nodename'] = nodename metrics_info['metrics'] = metric_list metrics_info['host'] = self.host metrics_info['host_ip'] = CONF.my_ip notifier = rpc.get_notifier(service='compute', host=nodename) notifier.info(context, 'compute.metrics.update', metrics_info) compute_utils.notify_about_metrics_update( context, self.host, CONF.my_ip, nodename, metrics) return metric_list def update_available_resource(self, context, nodename, startup=False): """Override in-memory calculations of compute node resource usage based on data audited from the hypervisor layer. Add in resource claims in progress to account for operations that have declared a need for resources, but not necessarily retrieved them from the hypervisor layer yet. :param nodename: Temporary parameter representing the Ironic resource node. This parameter will be removed once Ironic baremetal resource nodes are handled like any other resource in the system. :param startup: Boolean indicating whether we're running this on on startup (True) or periodic (False). """ LOG.debug("Auditing locally available compute resources for " "%(host)s (node: %(node)s)", {'node': nodename, 'host': self.host}) resources = self.driver.get_available_resource(nodename) # NOTE(jaypipes): The resources['hypervisor_hostname'] field now # contains a non-None value, even for non-Ironic nova-compute hosts. It # is this value that will be populated in the compute_nodes table. 
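The audit described above starts from the driver's raw resource dict; _report_hypervisor_resource_view() further below derives the free memory, disk and vCPU figures from it. A worked, standalone example of that arithmetic with made-up numbers:

def hypervisor_free_view(resources):
    # Free resources as reported by the virt driver, before any claims.
    free_ram_mb = resources['memory_mb'] - resources['memory_mb_used']
    free_disk_gb = resources['local_gb'] - resources['local_gb_used']
    vcpus = resources['vcpus']
    free_vcpus = vcpus - resources['vcpus_used'] if vcpus else 'unknown'
    return {'free_ram_mb': free_ram_mb,
            'free_disk_gb': free_disk_gb,
            'free_vcpus': free_vcpus}


sample = {'memory_mb': 65536, 'memory_mb_used': 16384,
          'local_gb': 500, 'local_gb_used': 120,
          'vcpus': 32, 'vcpus_used': 12}
assert hypervisor_free_view(sample) == {
    'free_ram_mb': 49152, 'free_disk_gb': 380, 'free_vcpus': 20}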
resources['host_ip'] = CONF.my_ip # We want the 'cpu_info' to be None from the POV of the # virt driver, but the DB requires it to be non-null so # just force it to empty string if "cpu_info" not in resources or resources["cpu_info"] is None: resources["cpu_info"] = '' self._verify_resources(resources) self._report_hypervisor_resource_view(resources) self._update_available_resource(context, resources, startup=startup) def _pair_instances_to_migrations(self, migrations, instance_by_uuid): for migration in migrations: try: migration.instance = instance_by_uuid[migration.instance_uuid] except KeyError: # NOTE(danms): If this happens, we don't set it here, and # let the code either fail or lazy-load the instance later # which is what happened before we added this optimization. # NOTE(tdurakov) this situation is possible for resize/cold # migration when migration is finished but haven't yet # confirmed/reverted in that case instance already changed host # to destination and no matching happens LOG.debug('Migration for instance %(uuid)s refers to ' 'another host\'s instance!', {'uuid': migration.instance_uuid}) @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE, fair=True) def _update_available_resource(self, context, resources, startup=False): # initialize the compute node object, creating it # if it does not already exist. is_new_compute_node = self._init_compute_node(context, resources) nodename = resources['hypervisor_hostname'] # if we could not init the compute node the tracker will be # disabled and we should quit now if self.disabled(nodename): return # Grab all instances assigned to this node: instances = objects.InstanceList.get_by_host_and_node( context, self.host, nodename, expected_attrs=['system_metadata', 'numa_topology', 'flavor', 'migration_context', 'resources']) # Now calculate usage based on instance utilization: instance_by_uuid = self._update_usage_from_instances( context, instances, nodename) # Grab all in-progress migrations: migrations = objects.MigrationList.get_in_progress_by_host_and_node( context, self.host, nodename) self._pair_instances_to_migrations(migrations, instance_by_uuid) self._update_usage_from_migrations(context, migrations, nodename) # A new compute node means there won't be a resource provider yet since # that would be created via the _update() call below, and if there is # no resource provider then there are no allocations against it. if not is_new_compute_node: self._remove_deleted_instances_allocations( context, self.compute_nodes[nodename], migrations, instance_by_uuid) # Detect and account for orphaned instances that may exist on the # hypervisor, but are not in the DB: orphans = self._find_orphaned_instances() self._update_usage_from_orphans(orphans, nodename) cn = self.compute_nodes[nodename] # NOTE(yjiang5): Because pci device tracker status is not cleared in # this periodic task, and also because the resource tracker is not # notified when instances are deleted, we need remove all usages # from deleted instances. self.pci_tracker.clean_usage(instances, migrations, orphans) dev_pools_obj = self.pci_tracker.stats.to_device_pools_obj() cn.pci_device_pools = dev_pools_obj self._report_final_resource_view(nodename) metrics = self._get_host_metrics(context, nodename) # TODO(pmurray): metrics should not be a json string in ComputeNode, # but it is. 
This should be changed in ComputeNode cn.metrics = jsonutils.dumps(metrics) # Update assigned resources to self.assigned_resources self._populate_assigned_resources(context, instance_by_uuid) # update the compute_node self._update(context, cn, startup=startup) LOG.debug('Compute_service record updated for %(host)s:%(node)s', {'host': self.host, 'node': nodename}) # Check if there is any resource assigned but not found # in provider tree if startup: self._check_resources(context) def _get_compute_node(self, context, nodename): """Returns compute node for the host and nodename.""" try: return objects.ComputeNode.get_by_host_and_nodename( context, self.host, nodename) except exception.NotFound: LOG.warning("No compute node record for %(host)s:%(node)s", {'host': self.host, 'node': nodename}) def _report_hypervisor_resource_view(self, resources): """Log the hypervisor's view of free resources. This is just a snapshot of resource usage recorded by the virt driver. The following resources are logged: - free memory - free disk - free CPUs - assignable PCI devices """ nodename = resources['hypervisor_hostname'] free_ram_mb = resources['memory_mb'] - resources['memory_mb_used'] free_disk_gb = resources['local_gb'] - resources['local_gb_used'] vcpus = resources['vcpus'] if vcpus: free_vcpus = vcpus - resources['vcpus_used'] else: free_vcpus = 'unknown' pci_devices = resources.get('pci_passthrough_devices') LOG.debug("Hypervisor/Node resource view: " "name=%(node)s " "free_ram=%(free_ram)sMB " "free_disk=%(free_disk)sGB " "free_vcpus=%(free_vcpus)s " "pci_devices=%(pci_devices)s", {'node': nodename, 'free_ram': free_ram_mb, 'free_disk': free_disk_gb, 'free_vcpus': free_vcpus, 'pci_devices': pci_devices}) def _report_final_resource_view(self, nodename): """Report final calculate of physical memory, used virtual memory, disk, usable vCPUs, used virtual CPUs and PCI devices, including instance calculations and in-progress resource claims. These values will be exposed via the compute node table to the scheduler. """ cn = self.compute_nodes[nodename] vcpus = cn.vcpus if vcpus: tcpu = vcpus ucpu = cn.vcpus_used LOG.debug("Total usable vcpus: %(tcpu)s, " "total allocated vcpus: %(ucpu)s", {'tcpu': vcpus, 'ucpu': ucpu}) else: tcpu = 0 ucpu = 0 pci_stats = (list(cn.pci_device_pools) if cn.pci_device_pools else []) LOG.debug("Final resource view: " "name=%(node)s " "phys_ram=%(phys_ram)sMB " "used_ram=%(used_ram)sMB " "phys_disk=%(phys_disk)sGB " "used_disk=%(used_disk)sGB " "total_vcpus=%(total_vcpus)s " "used_vcpus=%(used_vcpus)s " "pci_stats=%(pci_stats)s", {'node': nodename, 'phys_ram': cn.memory_mb, 'used_ram': cn.memory_mb_used, 'phys_disk': cn.local_gb, 'used_disk': cn.local_gb_used, 'total_vcpus': tcpu, 'used_vcpus': ucpu, 'pci_stats': pci_stats}) def _resource_change(self, compute_node): """Check to see if any resources have changed.""" nodename = compute_node.hypervisor_hostname old_compute = self.old_resources[nodename] if not obj_base.obj_equal_prims( compute_node, old_compute, ['updated_at']): self.old_resources[nodename] = copy.deepcopy(compute_node) return True return False def _sync_compute_service_disabled_trait(self, context, traits): """Synchronize the COMPUTE_STATUS_DISABLED trait on the node provider. Determines if the COMPUTE_STATUS_DISABLED trait should be added to or removed from the provider's set of traits based on the related nova-compute service disabled status. 
:param context: RequestContext for cell database access :param traits: set of traits for the compute node resource provider; this is modified by reference """ trait = os_traits.COMPUTE_STATUS_DISABLED try: service = objects.Service.get_by_compute_host(context, self.host) if service.disabled: # The service is disabled so make sure the trait is reported. traits.add(trait) else: # The service is not disabled so do not report the trait. traits.discard(trait) except exception.NotFound: # This should not happen but handle it gracefully. The scheduler # should ignore this node if the compute service record is gone. LOG.error('Unable to find services table record for nova-compute ' 'host %s', self.host) def _get_traits(self, context, nodename, provider_tree): """Synchronizes internal and external traits for the node provider. This works in conjunction with the ComputeDriver.update_provider_tree flow and is used to synchronize traits reported by the compute driver, traits based on information in the ComputeNode record, and traits set externally using the placement REST API. :param context: RequestContext for cell database access :param nodename: ComputeNode.hypervisor_hostname for the compute node resource provider whose traits are being synchronized; the node must be in the ProviderTree. :param provider_tree: ProviderTree being updated """ # Get the traits from the ProviderTree which will be the set # of virt-owned traits plus any externally defined traits set # on the provider that aren't owned by the virt driver. traits = provider_tree.data(nodename).traits # Now get the driver's capabilities and add any supported # traits that are missing, and remove any existing set traits # that are not currently supported. for trait, supported in self.driver.capabilities_as_traits().items(): if supported: traits.add(trait) elif trait in traits: traits.remove(trait) # Always mark the compute node. This lets other processes (possibly # unrelated to nova or even OpenStack) find and distinguish these # providers easily. traits.add(os_traits.COMPUTE_NODE) self._sync_compute_service_disabled_trait(context, traits) return list(traits) @retrying.retry(stop_max_attempt_number=4, retry_on_exception=lambda e: isinstance( e, exception.ResourceProviderUpdateConflict)) def _update_to_placement(self, context, compute_node, startup): """Send resource and inventory changes to placement.""" # NOTE(jianghuaw): Some resources (e.g. VGPU) are not saved in the # object of compute_node; instead the inventory data for these # resources is reported by driver's update_provider_tree(). So even if # there is no resource change for compute_node, we need to proceed # to get inventory and use report client interfaces to update # inventory to placement. It's report client's responsibility to # ensure the update request to placement only happens when inventory # is changed. nodename = compute_node.hypervisor_hostname # Persist the stats to the Scheduler # Retrieve the provider tree associated with this compute node. If # it doesn't exist yet, this will create it with a (single, root) # provider corresponding to the compute node. prov_tree = self.reportclient.get_provider_tree_and_ensure_root( context, compute_node.uuid, name=compute_node.hypervisor_hostname) # Let the virt driver rearrange the provider tree and set/update # the inventory, traits, and aggregates throughout.
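The retrying decorator above re-runs the whole placement sync when a ResourceProviderUpdateConflict signals that the provider generation moved underneath us. A minimal standalone sketch of that retry-on-conflict idea, hand-rolled rather than using the retrying library; the exception class and sync_fn callable are illustrative stand-ins:

class ProviderUpdateConflict(Exception):
    """Stand-in for a placement generation conflict."""


def sync_with_retries(sync_fn, attempts=4):
    # Re-run the whole sync when a conflict indicates our cached provider
    # data (generation) is stale; give up after a few attempts.
    for attempt in range(1, attempts + 1):
        try:
            return sync_fn()
        except ProviderUpdateConflict:
            if attempt == attempts:
                raise
            # Looping again re-fetches the provider state inside sync_fn,
            # which picks up the new generation before retrying.


calls = {'n': 0}


def flaky_sync():
    calls['n'] += 1
    if calls['n'] < 3:
        raise ProviderUpdateConflict()
    return 'synced'


assert sync_with_retries(flaky_sync) == 'synced' and calls['n'] == 3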
allocs = None try: self.driver.update_provider_tree(prov_tree, nodename) except exception.ReshapeNeeded: if not startup: # This isn't supposed to happen during periodic, so raise # it up; the compute manager will treat it specially. raise LOG.info("Performing resource provider inventory and " "allocation data migration during compute service " "startup or fast-forward upgrade.") allocs = self.reportclient.get_allocations_for_provider_tree( context, nodename) self.driver.update_provider_tree(prov_tree, nodename, allocations=allocs) # Inject driver capabilities traits into the provider # tree. We need to determine the traits that the virt # driver owns - so those that come from the tree itself # (via the virt driver) plus the compute capabilities # traits, and then merge those with the traits set # externally that the driver does not own - and remove any # set on the provider externally that the virt owns but # aren't in the current list of supported traits. For # example, let's say we reported multiattach support as a # trait at t1 and then at t2 it's not, so we need to # remove it. But at both t1 and t2 there is a # CUSTOM_VENDOR_TRAIT_X which we can't touch because it # was set externally on the provider. # We also want to sync the COMPUTE_STATUS_DISABLED trait based # on the related nova-compute service's disabled status. traits = self._get_traits( context, nodename, provider_tree=prov_tree) prov_tree.update_traits(nodename, traits) self.provider_tree = prov_tree # Flush any changes. If we processed ReshapeNeeded above, allocs is not # None, and this will hit placement's POST /reshaper route. self.reportclient.update_from_provider_tree(context, prov_tree, allocations=allocs) def _update(self, context, compute_node, startup=False): """Update partial stats locally and populate them to Scheduler.""" # _resource_change will update self.old_resources if it detects changes # but we want to restore those if compute_node.save() fails. nodename = compute_node.hypervisor_hostname old_compute = self.old_resources[nodename] if self._resource_change(compute_node): # If the compute_node's resource changed, update to DB. Note that # _update_to_placement below does not supersede the need to do this # because there are stats-related fields in the ComputeNode object # which could have changed and still need to be reported to the # scheduler filters/weighers (which could be out of tree as well). try: compute_node.save() except Exception: # Restore the previous state in self.old_resources so that on # the next trip through here _resource_change does not have # stale data to compare. with excutils.save_and_reraise_exception(logger=LOG): self.old_resources[nodename] = old_compute self._update_to_placement(context, compute_node, startup) if self.pci_tracker: self.pci_tracker.save(context) def _update_usage(self, usage, nodename, sign=1): # TODO(stephenfin): We don't use the CPU, RAM and disk fields for much # except 'Aggregate(Core|Ram|Disk)Filter', the 'os-hypervisors' API, # and perhaps some out-of-tree filters. 
Once the in-tree stuff is # removed or updated to use information from placement, we can think # about dropping the fields from the 'ComputeNode' object entirely mem_usage = usage['memory_mb'] disk_usage = usage.get('root_gb', 0) vcpus_usage = usage.get('vcpus', 0) cn = self.compute_nodes[nodename] cn.memory_mb_used += sign * mem_usage cn.local_gb_used += sign * disk_usage cn.local_gb_used += sign * usage.get('ephemeral_gb', 0) cn.local_gb_used += sign * usage.get('swap', 0) / 1024 cn.vcpus_used += sign * vcpus_usage # free ram and disk may be negative, depending on policy: cn.free_ram_mb = cn.memory_mb - cn.memory_mb_used cn.free_disk_gb = cn.local_gb - cn.local_gb_used stats = self.stats[nodename] cn.running_vms = stats.num_instances # calculate the NUMA usage, assuming the instance is actually using # NUMA, of course if cn.numa_topology and usage.get('numa_topology'): instance_numa_topology = usage.get('numa_topology') # the ComputeNode.numa_topology field is a StringField, so # deserialize host_numa_topology = objects.NUMATopology.obj_from_db_obj( cn.numa_topology) free = sign == -1 # ...and reserialize once we save it back cn.numa_topology = hardware.numa_usage_from_instance_numa( host_numa_topology, instance_numa_topology, free)._to_json() def _get_migration_context_resource(self, resource, instance, prefix='new_'): migration_context = instance.migration_context resource = prefix + resource if migration_context and resource in migration_context: return getattr(migration_context, resource) return None def _update_usage_from_migration(self, context, instance, migration, nodename): """Update usage for a single migration. The record may represent an incoming or outbound migration. """ uuid = migration.instance_uuid LOG.info("Updating resource usage from migration %s", migration.uuid, instance_uuid=uuid) incoming = (migration.dest_compute == self.host and migration.dest_node == nodename) outbound = (migration.source_compute == self.host and migration.source_node == nodename) same_node = (incoming and outbound) tracked = uuid in self.tracked_instances itype = None numa_topology = None sign = 0 if same_node: # Same node resize. Record usage for the 'new_' resources. This # is executed on resize_claim(). if (instance['instance_type_id'] == migration.old_instance_type_id): itype = self._get_instance_type(instance, 'new_', migration) numa_topology = self._get_migration_context_resource( 'numa_topology', instance) # Allocate pci device(s) for the instance. sign = 1 else: # The instance is already set to the new flavor (this is done # by the compute manager on finish_resize()), hold space for a # possible revert to the 'old_' resources. # NOTE(lbeliveau): When the periodic audit timer gets # triggered, the compute usage gets reset. The usage for an # instance that is migrated to the new flavor but not yet # confirmed/reverted will first get accounted for by # _update_usage_from_instances(). This method will then be # called, and we need to account for the '_old' resources # (just in case). itype = self._get_instance_type(instance, 'old_', migration) numa_topology = self._get_migration_context_resource( 'numa_topology', instance, prefix='old_') elif incoming and not tracked: # instance has not yet migrated here: itype = self._get_instance_type(instance, 'new_', migration) numa_topology = self._get_migration_context_resource( 'numa_topology', instance) # Allocate pci device(s) for the instance. 
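_update_usage() above adds and removes an instance's footprint with the same arithmetic by flipping the sign argument. A worked, standalone example of that bookkeeping, using plain dicts instead of the ComputeNode object:

def apply_usage(cn, usage, sign=1):
    # Add (sign=1) or remove (sign=-1) an instance's usage from a node.
    cn['memory_mb_used'] += sign * usage.get('memory_mb', 0)
    cn['local_gb_used'] += sign * (usage.get('root_gb', 0) +
                                   usage.get('ephemeral_gb', 0) +
                                   usage.get('swap', 0) / 1024)
    cn['vcpus_used'] += sign * usage.get('vcpus', 0)
    # Free values may go negative depending on the overcommit policy.
    cn['free_ram_mb'] = cn['memory_mb'] - cn['memory_mb_used']
    cn['free_disk_gb'] = cn['local_gb'] - cn['local_gb_used']
    return cn


node = {'memory_mb': 32768, 'memory_mb_used': 2048,
        'local_gb': 200, 'local_gb_used': 10,
        'vcpus': 16, 'vcpus_used': 1,
        'free_ram_mb': 0, 'free_disk_gb': 0}
flavor_usage = {'memory_mb': 4096, 'root_gb': 20, 'ephemeral_gb': 0,
                'swap': 1024, 'vcpus': 2}

# Claim: 6144 MB RAM, 31 GB disk and 3 vCPUs are now in use.
apply_usage(node, flavor_usage, sign=1)
# Release: back to 2048 MB RAM, 10 GB disk and 1 vCPU in use.
apply_usage(node, flavor_usage, sign=-1)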
sign = 1 LOG.debug('Starting to track incoming migration %s with flavor %s', migration.uuid, itype.flavorid, instance=instance) elif outbound and not tracked: # instance migrated, but record usage for a possible revert: itype = self._get_instance_type(instance, 'old_', migration) numa_topology = self._get_migration_context_resource( 'numa_topology', instance, prefix='old_') # We could be racing with confirm_resize setting the # instance.old_flavor field to None before the migration status # is "confirmed" so if we did not find the flavor in the outgoing # resized instance we won't track it. if itype: LOG.debug('Starting to track outgoing migration %s with ' 'flavor %s', migration.uuid, itype.flavorid, instance=instance) if itype: cn = self.compute_nodes[nodename] usage = self._get_usage_dict( itype, instance, numa_topology=numa_topology) if self.pci_tracker and sign: self.pci_tracker.update_pci_for_instance( context, instance, sign=sign) self._update_usage(usage, nodename) if self.pci_tracker: obj = self.pci_tracker.stats.to_device_pools_obj() cn.pci_device_pools = obj else: obj = objects.PciDevicePoolList() cn.pci_device_pools = obj self.tracked_migrations[uuid] = migration def _update_usage_from_migrations(self, context, migrations, nodename): filtered = {} instances = {} self.tracked_migrations.clear() # do some defensive filtering against bad migration records in the # database: for migration in migrations: uuid = migration.instance_uuid try: if uuid not in instances: instances[uuid] = migration.instance except exception.InstanceNotFound as e: # migration referencing deleted instance LOG.debug('Migration instance not found: %s', e) continue # Skip migration if instance is neither in a resize state nor is # live-migrating. if (not _instance_in_resize_state(instances[uuid]) and not _instance_is_live_migrating(instances[uuid])): LOG.debug('Skipping migration as instance is neither ' 'resizing nor live-migrating.', instance_uuid=uuid) continue # filter to most recently updated migration for each instance: other_migration = filtered.get(uuid, None) # NOTE(claudiub): In Python 3, you cannot compare NoneTypes. if other_migration: om = other_migration other_time = om.updated_at or om.created_at migration_time = migration.updated_at or migration.created_at if migration_time > other_time: filtered[uuid] = migration else: filtered[uuid] = migration for migration in filtered.values(): instance = instances[migration.instance_uuid] # Skip migration (and mark it as error) if it doesn't match the # instance migration id. # This can happen if we have a stale migration record. # We want to proceed if instance.migration_context is None if (instance.migration_context is not None and instance.migration_context.migration_id != migration.id): LOG.info("Current instance migration %(im)s doesn't match " "migration %(m)s, marking migration as error.
" "This can occur if a previous migration for this " "instance did not complete.", {'im': instance.migration_context.migration_id, 'm': migration.id}) migration.status = "error" migration.save() continue try: self._update_usage_from_migration(context, instance, migration, nodename) except exception.FlavorNotFound: LOG.warning("Flavor could not be found, skipping migration.", instance_uuid=instance.uuid) continue def _update_usage_from_instance(self, context, instance, nodename, is_removed=False): """Update usage for a single instance.""" uuid = instance['uuid'] is_new_instance = uuid not in self.tracked_instances # NOTE(sfinucan): Both brand new instances as well as instances that # are being unshelved will have is_new_instance == True is_removed_instance = not is_new_instance and (is_removed or instance['vm_state'] in vm_states.ALLOW_RESOURCE_REMOVAL) if is_new_instance: self.tracked_instances.add(uuid) sign = 1 if is_removed_instance: self.tracked_instances.remove(uuid) self._release_assigned_resources(instance.resources) sign = -1 cn = self.compute_nodes[nodename] stats = self.stats[nodename] stats.update_stats_for_instance(instance, is_removed_instance) cn.stats = stats # if it's a new or deleted instance: if is_new_instance or is_removed_instance: if self.pci_tracker: self.pci_tracker.update_pci_for_instance(context, instance, sign=sign) # new instance, update compute node resource usage: self._update_usage(self._get_usage_dict(instance, instance), nodename, sign=sign) # Stop tracking removed instances in the is_bfv cache. This needs to # happen *after* calling _get_usage_dict() since that relies on the # is_bfv cache. if is_removed_instance and uuid in self.is_bfv: del self.is_bfv[uuid] cn.current_workload = stats.calculate_workload() if self.pci_tracker: obj = self.pci_tracker.stats.to_device_pools_obj() cn.pci_device_pools = obj else: cn.pci_device_pools = objects.PciDevicePoolList() def _update_usage_from_instances(self, context, instances, nodename): """Calculate resource usage based on instance utilization. This is different than the hypervisor's view as it will account for all instances assigned to the local compute host, even if they are not currently powered on. """ self.tracked_instances.clear() cn = self.compute_nodes[nodename] # set some initial values, reserve room for host/hypervisor: cn.local_gb_used = CONF.reserved_host_disk_mb / 1024 cn.memory_mb_used = CONF.reserved_host_memory_mb cn.vcpus_used = CONF.reserved_host_cpus cn.free_ram_mb = (cn.memory_mb - cn.memory_mb_used) cn.free_disk_gb = (cn.local_gb - cn.local_gb_used) cn.current_workload = 0 cn.running_vms = 0 instance_by_uuid = {} for instance in instances: if instance.vm_state not in vm_states.ALLOW_RESOURCE_REMOVAL: self._update_usage_from_instance(context, instance, nodename) instance_by_uuid[instance.uuid] = instance return instance_by_uuid def _remove_deleted_instances_allocations(self, context, cn, migrations, instance_by_uuid): migration_uuids = [migration.uuid for migration in migrations if 'uuid' in migration] # NOTE(jaypipes): All of this code sucks. 
It's basically dealing with # all the corner cases in move, local delete, unshelve and rebuild # operations for when allocations should be deleted when things didn't # happen according to the normal flow of events where the scheduler # always creates allocations for an instance try: # pai: report.ProviderAllocInfo namedtuple pai = self.reportclient.get_allocations_for_resource_provider( context, cn.uuid) except (exception.ResourceProviderAllocationRetrievalFailed, ks_exc.ClientException) as e: LOG.error("Skipping removal of allocations for deleted instances: " "%s", e) return allocations = pai.allocations if not allocations: # The main loop below would short-circuit anyway, but this saves us # the (potentially expensive) context.elevated construction below. return read_deleted_context = context.elevated(read_deleted='yes') for consumer_uuid, alloc in allocations.items(): if consumer_uuid in self.tracked_instances: LOG.debug("Instance %s actively managed on this compute host " "and has allocations in placement: %s.", consumer_uuid, alloc) continue if consumer_uuid in migration_uuids: LOG.debug("Migration %s is active on this compute host " "and has allocations in placement: %s.", consumer_uuid, alloc) continue # We know these are instances now, so proceed instance_uuid = consumer_uuid instance = instance_by_uuid.get(instance_uuid) if not instance: try: instance = objects.Instance.get_by_uuid( read_deleted_context, consumer_uuid, expected_attrs=[]) except exception.InstanceNotFound: # The instance isn't even in the database. Either the # scheduler _just_ created an allocation for it and we're # racing with the creation in the cell database, or the # instance was deleted and fully archived before we got a # chance to run this. The former is far more likely than # the latter. Avoid deleting allocations for a building # instance here. LOG.info("Instance %(uuid)s has allocations against this " "compute host but is not found in the database.", {'uuid': instance_uuid}, exc_info=False) continue # NOTE(mriedem): A cross-cell migration will work with instance # records across two cells once the migration is confirmed/reverted # one of them will be deleted but the instance still exists in the # other cell. Before the instance is destroyed from the old cell # though it is marked hidden=True so if we find a deleted hidden # instance with allocations against this compute node we just # ignore it since the migration operation will handle cleaning up # those allocations. if instance.deleted and not instance.hidden: # The instance is gone, so we definitely want to remove # allocations associated with it. LOG.debug("Instance %s has been deleted (perhaps locally). " "Deleting allocations that remained for this " "instance against this compute host: %s.", instance_uuid, alloc) self.reportclient.delete_allocation_for_instance(context, instance_uuid) continue if not instance.host: # Allocations related to instances being scheduled should not # be deleted if we already wrote the allocation previously. LOG.debug("Instance %s has been scheduled to this compute " "host, the scheduler has made an allocation " "against this compute node but the instance has " "yet to start. Skipping heal of allocation: %s.", instance_uuid, alloc) continue if (instance.host == cn.host and instance.node == cn.hypervisor_hostname): # The instance is supposed to be on this compute host but is # not in the list of actively managed instances. 
This could be # because we are racing with an instance_claim call during # initial build or unshelve where the instance host/node is set # before the instance is added to tracked_instances. If the # task_state is set, then consider things in motion and log at # debug level instead of warning. if instance.task_state: LOG.debug('Instance with task_state "%s" is not being ' 'actively managed by this compute host but has ' 'allocations referencing this compute node ' '(%s): %s. Skipping heal of allocations during ' 'the task state transition.', instance.task_state, cn.uuid, alloc, instance=instance) else: LOG.warning("Instance %s is not being actively managed by " "this compute host but has allocations " "referencing this compute host: %s. Skipping " "heal of allocation because we do not know " "what to do.", instance_uuid, alloc) continue if instance.host != cn.host: # The instance has been moved to another host either via a # migration, evacuation or unshelve in between the time when we # ran InstanceList.get_by_host_and_node(), added those # instances to RT.tracked_instances and the above # Instance.get_by_uuid() call. We SHOULD attempt to remove any # allocations that reference this compute host if the VM is in # a stable terminal state (i.e. it isn't in a state of waiting # for resize to confirm/revert), however if the destination # host is an Ocata compute host, it will delete the allocation # that contains this source compute host information anyway and # recreate an allocation that only refers to itself. So we # don't need to do anything in that case. Just log the # situation here for information but don't attempt to delete or # change the allocation. LOG.warning("Instance %s has been moved to another host " "%s(%s). There are allocations remaining against " "the source host that might need to be removed: " "%s.", instance_uuid, instance.host, instance.node, alloc) def delete_allocation_for_evacuated_instance(self, context, instance, node, node_type='source'): # Clean up the instance allocation from this node in placement cn_uuid = self.compute_nodes[node].uuid if not self.reportclient.remove_provider_tree_from_instance_allocation( context, instance.uuid, cn_uuid): LOG.error("Failed to clean allocation of evacuated " "instance on the %s node %s", node_type, cn_uuid, instance=instance) def _find_orphaned_instances(self): """Given the set of instances and migrations already accounted for by the resource tracker, sanity check the hypervisor to determine if there are any "orphaned" instances left hanging around. Orphans could be consuming memory and should be accounted for in usage calculations to guard against potential out of memory errors.
""" uuids1 = frozenset(self.tracked_instances) uuids2 = frozenset(self.tracked_migrations.keys()) uuids = uuids1 | uuids2 usage = self.driver.get_per_instance_usage() vuuids = frozenset(usage.keys()) orphan_uuids = vuuids - uuids orphans = [usage[uuid] for uuid in orphan_uuids] return orphans def _update_usage_from_orphans(self, orphans, nodename): """Include orphaned instances in usage.""" for orphan in orphans: memory_mb = orphan['memory_mb'] LOG.warning("Detected running orphan instance: %(uuid)s " "(consuming %(memory_mb)s MB memory)", {'uuid': orphan['uuid'], 'memory_mb': memory_mb}) # just record memory usage for the orphan usage = {'memory_mb': memory_mb} self._update_usage(usage, nodename) def delete_allocation_for_shelve_offloaded_instance(self, context, instance): self.reportclient.delete_allocation_for_instance(context, instance.uuid) def _verify_resources(self, resources): resource_keys = ["vcpus", "memory_mb", "local_gb", "cpu_info", "vcpus_used", "memory_mb_used", "local_gb_used", "numa_topology"] missing_keys = [k for k in resource_keys if k not in resources] if missing_keys: reason = _("Missing keys: %s") % missing_keys raise exception.InvalidInput(reason=reason) def _get_instance_type(self, instance, prefix, migration): """Get the instance type from instance.""" stashed_flavors = migration.migration_type in ('resize',) if stashed_flavors: return getattr(instance, '%sflavor' % prefix) else: # NOTE(ndipanov): Certain migration types (all but resize) # do not change flavors so there is no need to stash # them. In that case - just get the instance flavor. return instance.flavor def _get_usage_dict(self, object_or_dict, instance, **updates): """Make a usage dict _update methods expect. Accepts a dict or an Instance or Flavor object, and a set of updates. Converts the object to a dict and applies the updates. :param object_or_dict: instance or flavor as an object or just a dict :param instance: nova.objects.Instance for the related operation; this is needed to determine if the instance is volume-backed :param updates: key-value pairs to update the passed object. Currently only considers 'numa_topology', all other keys are ignored. :returns: a dict with all the information from object_or_dict updated with updates """ def _is_bfv(): # Check to see if we have the is_bfv value cached. 
if instance.uuid in self.is_bfv: is_bfv = self.is_bfv[instance.uuid] else: is_bfv = compute_utils.is_volume_backed_instance( instance._context, instance) self.is_bfv[instance.uuid] = is_bfv return is_bfv usage = {} if isinstance(object_or_dict, objects.Instance): is_bfv = _is_bfv() usage = {'memory_mb': object_or_dict.flavor.memory_mb, 'swap': object_or_dict.flavor.swap, 'vcpus': object_or_dict.flavor.vcpus, 'root_gb': (0 if is_bfv else object_or_dict.flavor.root_gb), 'ephemeral_gb': object_or_dict.flavor.ephemeral_gb, 'numa_topology': object_or_dict.numa_topology} elif isinstance(object_or_dict, objects.Flavor): usage = obj_base.obj_to_primitive(object_or_dict) if _is_bfv(): usage['root_gb'] = 0 else: usage.update(object_or_dict) for key in ('numa_topology',): if key in updates: usage[key] = updates[key] return usage def build_failed(self, nodename): """Increments the failed_builds stats for the given node.""" self.stats[nodename].build_failed() def build_succeeded(self, nodename): """Resets the failed_builds stats for the given node.""" self.stats[nodename].build_succeeded() @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE, fair=True) def claim_pci_devices(self, context, pci_requests): """Claim instance PCI resources :param context: security context :param pci_requests: a list of nova.objects.InstancePCIRequests :returns: a list of nova.objects.PciDevice objects """ result = self.pci_tracker.claim_instance( context, pci_requests, None) self.pci_tracker.save(context) return result @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE, fair=True) def allocate_pci_devices_for_instance(self, context, instance): """Allocate instance claimed PCI resources :param context: security context :param instance: instance object """ self.pci_tracker.allocate_instance(instance) self.pci_tracker.save(context) @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE, fair=True) def free_pci_device_allocations_for_instance(self, context, instance): """Free instance allocated PCI resources :param context: security context :param instance: instance object """ self.pci_tracker.free_instance_allocations(context, instance) self.pci_tracker.save(context) @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE, fair=True) def free_pci_device_claims_for_instance(self, context, instance): """Free instance claimed PCI resources :param context: security context :param instance: instance object """ self.pci_tracker.free_instance_claims(context, instance) self.pci_tracker.save(context) @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE, fair=True) def finish_evacuation(self, instance, node, migration): instance.apply_migration_context() # NOTE (ndipanov): This save will now update the host and node # attributes making sure that next RT pass is consistent since # it will be based on the instance and not the migration DB # entry. instance.host = self.host instance.node = node instance.save() instance.drop_migration_context() # NOTE (ndipanov): Mark the migration as done only after we # mark the instance as belonging to this host. if migration: migration.status = 'done' migration.save() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/compute/rpcapi.py0000664000175000017500000021024500000000000017041 0ustar00zuulzuul00000000000000# Copyright 2013 Red Hat, Inc. # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Client side of the compute RPC API. """ from oslo_concurrency import lockutils from oslo_log import log as logging import oslo_messaging as messaging from oslo_utils import excutils import nova.conf from nova import context from nova import exception from nova.i18n import _ from nova import objects from nova.objects import base as objects_base from nova.objects import service as service_obj from nova import profiler from nova import rpc CONF = nova.conf.CONF RPC_TOPIC = "compute" LOG = logging.getLogger(__name__) LAST_VERSION = None NO_COMPUTES_WARNING = False # Global for ComputeAPI.router. _ROUTER = None def reset_globals(): global NO_COMPUTES_WARNING global LAST_VERSION global _ROUTER NO_COMPUTES_WARNING = False LAST_VERSION = None _ROUTER = None def _compute_host(host, instance): '''Get the destination host for a message. :param host: explicit host to send the message to. :param instance: If an explicit host was not specified, use instance['host'] :returns: A host ''' if host: return host if not instance: raise exception.NovaException(_('No compute host specified')) if not instance.host: raise exception.NovaException(_('Unable to find host for ' 'Instance %s') % instance.uuid) return instance.host @profiler.trace_cls("rpc") class ComputeAPI(object): '''Client side of the compute rpc API. API version history: * 1.0 - Initial version. * 1.1 - Adds get_host_uptime() * 1.2 - Adds check_can_live_migrate_[destination|source] * 1.3 - Adds change_instance_metadata() * 1.4 - Remove instance_uuid, add instance argument to reboot_instance() * 1.5 - Remove instance_uuid, add instance argument to pause_instance(), unpause_instance() * 1.6 - Remove instance_uuid, add instance argument to suspend_instance() * 1.7 - Remove instance_uuid, add instance argument to get_console_output() * 1.8 - Remove instance_uuid, add instance argument to add_fixed_ip_to_instance() * 1.9 - Remove instance_uuid, add instance argument to attach_volume() * 1.10 - Remove instance_id, add instance argument to check_can_live_migrate_destination() * 1.11 - Remove instance_id, add instance argument to check_can_live_migrate_source() * 1.12 - Remove instance_uuid, add instance argument to confirm_resize() * 1.13 - Remove instance_uuid, add instance argument to detach_volume() * 1.14 - Remove instance_uuid, add instance argument to finish_resize() * 1.15 - Remove instance_uuid, add instance argument to finish_revert_resize() * 1.16 - Remove instance_uuid, add instance argument to get_diagnostics() * 1.17 - Remove instance_uuid, add instance argument to get_vnc_console() * 1.18 - Remove instance_uuid, add instance argument to inject_file() * 1.19 - Remove instance_uuid, add instance argument to inject_network_info() * 1.20 - Remove instance_id, add instance argument to post_live_migration_at_destination() * 1.21 - Remove instance_uuid, add instance argument to power_off_instance() and stop_instance() * 1.22 - Remove instance_uuid, add instance argument to power_on_instance() and start_instance() * 1.23 - Remove instance_id, add instance argument to pre_live_migration() * 1.24 - Remove instance_uuid, add instance argument to 
rebuild_instance() * 1.25 - Remove instance_uuid, add instance argument to remove_fixed_ip_from_instance() * 1.26 - Remove instance_id, add instance argument to remove_volume_connection() * 1.27 - Remove instance_uuid, add instance argument to rescue_instance() * 1.28 - Remove instance_uuid, add instance argument to reset_network() * 1.29 - Remove instance_uuid, add instance argument to resize_instance() * 1.30 - Remove instance_uuid, add instance argument to resume_instance() * 1.31 - Remove instance_uuid, add instance argument to revert_resize() * 1.32 - Remove instance_id, add instance argument to rollback_live_migration_at_destination() * 1.33 - Remove instance_uuid, add instance argument to set_admin_password() * 1.34 - Remove instance_uuid, add instance argument to snapshot_instance() * 1.35 - Remove instance_uuid, add instance argument to unrescue_instance() * 1.36 - Remove instance_uuid, add instance argument to change_instance_metadata() * 1.37 - Remove instance_uuid, add instance argument to terminate_instance() * 1.38 - Changes to prep_resize(): * remove instance_uuid, add instance * remove instance_type_id, add instance_type * remove topic, it was unused * 1.39 - Remove instance_uuid, add instance argument to run_instance() * 1.40 - Remove instance_id, add instance argument to live_migration() * 1.41 - Adds refresh_instance_security_rules() * 1.42 - Add reservations arg to prep_resize(), resize_instance(), finish_resize(), confirm_resize(), revert_resize() and finish_revert_resize() * 1.43 - Add migrate_data to live_migration() * 1.44 - Adds reserve_block_device_name() * 2.0 - Remove 1.x backwards compat * 2.1 - Adds orig_sys_metadata to rebuild_instance() * 2.2 - Adds slave_info parameter to add_aggregate_host() and remove_aggregate_host() * 2.3 - Adds volume_id to reserve_block_device_name() * 2.4 - Add bdms to terminate_instance * 2.5 - Add block device and network info to reboot_instance * 2.6 - Remove migration_id, add migration to resize_instance * 2.7 - Remove migration_id, add migration to confirm_resize * 2.8 - Remove migration_id, add migration to finish_resize * 2.9 - Add publish_service_capabilities() * 2.10 - Adds filter_properties and request_spec to prep_resize() * 2.11 - Adds soft_delete_instance() and restore_instance() * 2.12 - Remove migration_id, add migration to revert_resize * 2.13 - Remove migration_id, add migration to finish_revert_resize * 2.14 - Remove aggregate_id, add aggregate to add_aggregate_host * 2.15 - Remove aggregate_id, add aggregate to remove_aggregate_host * 2.16 - Add instance_type to resize_instance * 2.17 - Add get_backdoor_port() * 2.18 - Add bdms to rebuild_instance * 2.19 - Add node to run_instance * 2.20 - Add node to prep_resize * 2.21 - Add migrate_data dict param to pre_live_migration() * 2.22 - Add recreate, on_shared_storage and host arguments to rebuild_instance() * 2.23 - Remove network_info from reboot_instance * 2.24 - Added get_spice_console method * 2.25 - Add attach_interface() and detach_interface() * 2.26 - Add validate_console_port to ensure the service connects to vnc on the correct port * 2.27 - Adds 'reservations' to terminate_instance() and soft_delete_instance() ... Grizzly supports message version 2.27. So, any changes to existing methods in 2.x after that point should be done such that they can handle the version_cap being set to 2.27. 
* 2.28 - Adds check_instance_shared_storage() * 2.29 - Made start_instance() and stop_instance() take new-world instance objects * 2.30 - Adds live_snapshot_instance() * 2.31 - Adds shelve_instance(), shelve_offload_instance, and unshelve_instance() * 2.32 - Make reboot_instance take a new world instance object * 2.33 - Made suspend_instance() and resume_instance() take new-world instance objects * 2.34 - Added swap_volume() * 2.35 - Made terminate_instance() and soft_delete_instance() take new-world instance objects * 2.36 - Made pause_instance() and unpause_instance() take new-world instance objects * 2.37 - Added the legacy_bdm_in_spec parameter to run_instance * 2.38 - Made check_can_live_migrate_[destination|source] take new-world instance objects * 2.39 - Made revert_resize() and confirm_resize() take new-world instance objects * 2.40 - Made reset_network() take new-world instance object * 2.41 - Make inject_network_info take new-world instance object * 2.42 - Splits snapshot_instance() into snapshot_instance() and backup_instance() and makes them take new-world instance objects. * 2.43 - Made prep_resize() take new-world instance object * 2.44 - Add volume_snapshot_create(), volume_snapshot_delete() * 2.45 - Made resize_instance() take new-world objects * 2.46 - Made finish_resize() take new-world objects * 2.47 - Made finish_revert_resize() take new-world objects ... Havana supports message version 2.47. So, any changes to existing methods in 2.x after that point should be done such that they can handle the version_cap being set to 2.47. * 2.48 - Make add_aggregate_host() and remove_aggregate_host() take new-world objects * ... - Remove live_snapshot() that was never actually used * 3.0 - Remove 2.x compatibility * 3.1 - Update get_spice_console() to take an instance object * 3.2 - Update get_vnc_console() to take an instance object * 3.3 - Update validate_console_port() to take an instance object * 3.4 - Update rebuild_instance() to take an instance object * 3.5 - Pass preserve_ephemeral flag to rebuild_instance() * 3.6 - Make volume_snapshot_{create,delete} use new-world objects * 3.7 - Update change_instance_metadata() to take an instance object * 3.8 - Update set_admin_password() to take an instance object * 3.9 - Update rescue_instance() to take an instance object * 3.10 - Added get_rdp_console method * 3.11 - Update unrescue_instance() to take an object * 3.12 - Update add_fixed_ip_to_instance() to take an object * 3.13 - Update remove_fixed_ip_from_instance() to take an object * 3.14 - Update post_live_migration_at_destination() to take an object * 3.15 - Adds filter_properties and node to unshelve_instance() * 3.16 - Make reserve_block_device_name and attach_volume use new-world objects, and add disk_bus and device_type params to reserve_block_device_name, and bdm param to attach_volume * 3.17 - Update attach_interface and detach_interface to take an object * 3.18 - Update get_diagnostics() to take an instance object * Removed inject_file(), as it was unused. * 3.19 - Update pre_live_migration to take instance object * 3.20 - Make restore_instance take an instance object * 3.21 - Made rebuild take new-world BDM objects * 3.22 - Made terminate_instance take new-world BDM objects * 3.23 - Added external_instance_event() * build_and_run_instance was added in Havana and not used or documented. ... Icehouse supports message version 3.23. So, any changes to existing methods in 3.x after that point should be done such that they can handle the version_cap being set to 3.23. 
* 3.24 - Update rescue_instance() to take optional rescue_image_ref * 3.25 - Make detach_volume take an object * 3.26 - Make live_migration() and rollback_live_migration_at_destination() take an object * ... Removed run_instance() * 3.27 - Make run_instance() accept a new-world object * 3.28 - Update get_console_output() to accept a new-world object * 3.29 - Make check_instance_shared_storage accept a new-world object * 3.30 - Make remove_volume_connection() accept a new-world object * 3.31 - Add get_instance_diagnostics * 3.32 - Add destroy_disks and migrate_data optional parameters to rollback_live_migration_at_destination() * 3.33 - Make build_and_run_instance() take a NetworkRequestList object * 3.34 - Add get_serial_console method * 3.35 - Make reserve_block_device_name return a BDM object ... Juno supports message version 3.35. So, any changes to existing methods in 3.x after that point should be done such that they can handle the version_cap being set to 3.35. * 3.36 - Make build_and_run_instance() send a Flavor object * 3.37 - Add clean_shutdown to stop, resize, rescue, shelve, and shelve_offload * 3.38 - Add clean_shutdown to prep_resize * 3.39 - Add quiesce_instance and unquiesce_instance methods * 3.40 - Make build_and_run_instance() take a new-world topology limits object ... Kilo supports messaging version 3.40. So, any changes to existing methods in 3.x after that point should be done so that they can handle the version_cap being set to 3.40 ... Version 4.0 is equivalent to 3.40. Kilo sends version 4.0 by default, can accept 3.x calls from Juno nodes, and can be pinned to 3.x for Juno compatibility. All new changes should go against 4.x. * 4.0 - Remove 3.x compatibility * 4.1 - Make prep_resize() and resize_instance() send Flavor object * 4.2 - Add migration argument to live_migration() * 4.3 - Added get_mks_console method * 4.4 - Make refresh_instance_security_rules send an instance object * 4.5 - Add migration, scheduler_node and limits arguments to rebuild_instance() ... Liberty supports messaging version 4.5. So, any changes to existing methods in 4.x after that point should be done so that they can handle the version_cap being set to 4.5 * ... - Remove refresh_security_group_members() * ... - Remove refresh_security_group_rules() * 4.6 - Add trigger_crash_dump() * 4.7 - Add attachment_id argument to detach_volume() * 4.8 - Send migrate_data in object format for live_migration, rollback_live_migration_at_destination, and pre_live_migration. * ... - Remove refresh_provider_fw_rules() * 4.9 - Add live_migration_force_complete() * 4.10 - Add live_migration_abort() * 4.11 - Allow block_migration and disk_over_commit be None ... Mitaka supports messaging version 4.11. So, any changes to existing methods in 4.x after that point should be done so that they can handle the version_cap being set to 4.11 * 4.12 - Remove migration_id from live_migration_force_complete * 4.13 - Make get_instance_diagnostics send an instance object ... Newton and Ocata support messaging version 4.13. So, any changes to existing methods in 4.x after that point should be done so that they can handle the version_cap being set to 4.13 * 4.14 - Make get_instance_diagnostics return a diagnostics object instead of dictionary. Strictly speaking we don't need to bump the version because this method was unused before. 
The version was bumped to signal the availability of the corrected RPC API * 4.15 - Add tag argument to reserve_block_device_name() * 4.16 - Add tag argument to attach_interface() * 4.17 - Add new_attachment_id to swap_volume. ... Pike supports messaging version 4.17. So any changes to existing methods in 4.x after that point should be done so that they can handle the version_cap being set to 4.17. * 4.18 - Add migration to prep_resize() * 4.19 - build_and_run_instance() now gets a 'host_list' parameter representing potential alternate hosts for retries within a cell. * 4.20 - Add multiattach argument to reserve_block_device_name(). * 4.21 - prep_resize() now gets a 'host_list' parameter representing potential alternate hosts for retries within a cell. * 4.22 - Add request_spec to rebuild_instance() ... Version 5.0 is functionally equivalent to 4.22, aside from removing deprecated parameters. Queens sends 5.0 by default, can accept 4.x calls from Pike nodes, and can be pinned to 4.x for Pike compatibility. All new changes should go against 5.x. * 5.0 - Remove 4.x compatibility * 5.1 - Make prep_resize() take a RequestSpec object rather than a legacy dict. * 5.2 - Add request_spec parameter for the following: resize_instance, finish_resize, revert_resize, finish_revert_resize, unshelve_instance * 5.3 - Add migration and limits parameters to check_can_live_migrate_destination(), and a new drop_move_claim_at_destination() method * 5.4 - Add cache_images() support * 5.5 - Add prep_snapshot_based_resize_at_dest() * 5.6 - Add prep_snapshot_based_resize_at_source() * 5.7 - Add finish_snapshot_based_resize_at_dest() * 5.8 - Add confirm_snapshot_based_resize_at_source() * 5.9 - Add revert_snapshot_based_resize_at_dest() * 5.10 - Add finish_revert_snapshot_based_resize_at_source() * 5.11 - Add accel_uuids (accelerator requests) parameter to build_and_run_instance() ''' VERSION_ALIASES = { 'icehouse': '3.23', 'juno': '3.35', 'kilo': '4.0', 'liberty': '4.5', 'mitaka': '4.11', 'newton': '4.13', 'ocata': '4.13', 'pike': '4.17', 'queens': '5.0', 'rocky': '5.0', 'stein': '5.1', 'train': '5.3', 'ussuri': '5.11', } @property def router(self): """Provides singleton access to nova.rpc.ClientRouter for this API The ClientRouter is constructed and accessed as a singleton to avoid querying all cells for a minimum nova-compute service version when [upgrade_levels]/compute=auto and we have access to the API DB. """ global _ROUTER if _ROUTER is None: with lockutils.lock('compute-rpcapi-router'): if _ROUTER is None: target = messaging.Target(topic=RPC_TOPIC, version='5.0') upgrade_level = CONF.upgrade_levels.compute if upgrade_level == 'auto': version_cap = self._determine_version_cap(target) else: version_cap = self.VERSION_ALIASES.get(upgrade_level, upgrade_level) serializer = objects_base.NovaObjectSerializer() # NOTE(danms): We need to poke this path to register CONF # options that we use in self.get_client() rpc.get_client(target, version_cap, serializer) default_client = self.get_client(target, version_cap, serializer) _ROUTER = rpc.ClientRouter(default_client) return _ROUTER @staticmethod def _determine_version_cap(target): global LAST_VERSION global NO_COMPUTES_WARNING if LAST_VERSION: return LAST_VERSION # NOTE(danms): If we have a connection to the api database, # we should iterate all cells. If not, we must only look locally. 
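# The router property above turns the [upgrade_levels]/compute setting into a
# version cap: a release alias such as 'train' is looked up in VERSION_ALIASES,
# an explicit version string is used verbatim, and 'auto' defers to
# _determine_version_cap(). The following is a minimal, standalone sketch of
# that lookup; the helper name resolve_compute_version_cap is hypothetical and
# used here for illustration only, it is not part of Nova.
def resolve_compute_version_cap(upgrade_level, aliases, determine_auto_cap):
    """Return the version cap string for the compute RPC client.

    :param upgrade_level: value of [upgrade_levels]/compute ('auto', an alias
        such as 'train', an explicit version such as '5.1', or None)
    :param aliases: mapping of release names to RPC versions
    :param determine_auto_cap: callable used when upgrade_level is 'auto'
    """
    if upgrade_level == 'auto':
        # e.g. derive the cap from the minimum nova-compute service version
        return determine_auto_cap()
    # An alias resolves through the table; anything else is taken verbatim.
    return aliases.get(upgrade_level, upgrade_level)


# Example usage with a subset of the aliases defined above:
_aliases = {'train': '5.3', 'ussuri': '5.11'}
assert resolve_compute_version_cap('train', _aliases, lambda: '5.11') == '5.3'
assert resolve_compute_version_cap('5.1', _aliases, lambda: '5.11') == '5.1'
assert resolve_compute_version_cap('auto', _aliases, lambda: '5.11') == '5.11'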
if CONF.api_database.connection: try: service_version = service_obj.get_minimum_version_all_cells( context.get_admin_context(), ['nova-compute']) except exception.DBNotAllowed: # This most likely means we are in a nova-compute service # configured with [upgrade_levels]/compute=auto and a # connection to the API database. We should not be attempting # to "get out" of our cell to look at the minimum versions of # nova-compute services in other cells, so DBNotAllowed was # raised. Log a user-friendly message and re-raise the error. with excutils.save_and_reraise_exception(): LOG.error('This service is configured for access to the ' 'API database but is not allowed to directly ' 'access the database. You should run this ' 'service without the [api_database]/connection ' 'config option.') else: service_version = objects.Service.get_minimum_version( context.get_admin_context(), 'nova-compute') history = service_obj.SERVICE_VERSION_HISTORY # NOTE(johngarbutt) when there are no nova-compute services running we # get service_version == 0. In that case we do not want to cache # this result, because we will get a better answer next time. # As a sane default, return the current version. if service_version == 0: if not NO_COMPUTES_WARNING: # NOTE(danms): Only show this warning once LOG.debug("Not caching compute RPC version_cap, because min " "service_version is 0. Please ensure a nova-compute " "service has been started. Defaulting to current " "version.") NO_COMPUTES_WARNING = True return history[service_obj.SERVICE_VERSION]['compute_rpc'] try: version_cap = history[service_version]['compute_rpc'] except IndexError: LOG.error('Failed to extract compute RPC version from ' 'service history because I am too ' 'old (minimum version is now %(version)i)', {'version': service_version}) raise exception.ServiceTooOld(thisver=service_obj.SERVICE_VERSION, minver=service_version) except KeyError: LOG.error('Failed to extract compute RPC version from ' 'service history for version %(version)i', {'version': service_version}) return target.version LAST_VERSION = version_cap LOG.info('Automatically selected compute RPC version %(rpc)s ' 'from minimum service version %(service)i', {'rpc': version_cap, 'service': service_version}) return version_cap def get_client(self, target, version_cap, serializer): if CONF.rpc_response_timeout > rpc.HEARTBEAT_THRESHOLD: # NOTE(danms): If the operator has overridden RPC timeout # to be longer than rpc.HEARTBEAT_THRESHOLD then configure # the call monitor timeout to be the threshold to keep the # failure timing characteristics that our code likely # expects (from history) while allowing healthy calls # to run longer. cmt = rpc.HEARTBEAT_THRESHOLD else: cmt = None return rpc.get_client(target, version_cap=version_cap, serializer=serializer, call_monitor_timeout=cmt) def add_aggregate_host(self, ctxt, host, aggregate, host_param, slave_info=None): '''Add aggregate host. :param ctxt: request context :param aggregate: :param host_param: This value is placed in the message to be the 'host' parameter for the remote method. :param host: This is the host to send the message to. 
''' version = '5.0' cctxt = self.router.client(ctxt).prepare( server=host, version=version) cctxt.cast(ctxt, 'add_aggregate_host', aggregate=aggregate, host=host_param, slave_info=slave_info) def add_fixed_ip_to_instance(self, ctxt, instance, network_id): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'add_fixed_ip_to_instance', instance=instance, network_id=network_id) def attach_interface(self, ctxt, instance, network_id, port_id, requested_ip, tag=None): kw = {'instance': instance, 'network_id': network_id, 'port_id': port_id, 'requested_ip': requested_ip, 'tag': tag} version = '5.0' client = self.router.client(ctxt) cctxt = client.prepare(server=_compute_host(None, instance), version=version) return cctxt.call(ctxt, 'attach_interface', **kw) def attach_volume(self, ctxt, instance, bdm): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'attach_volume', instance=instance, bdm=bdm) def change_instance_metadata(self, ctxt, instance, diff): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'change_instance_metadata', instance=instance, diff=diff) def check_can_live_migrate_destination(self, ctxt, instance, destination, block_migration, disk_over_commit, migration, limits): client = self.router.client(ctxt) version = '5.3' kwargs = { 'instance': instance, 'block_migration': block_migration, 'disk_over_commit': disk_over_commit, 'migration': migration, 'limits': limits } if not client.can_send_version(version): kwargs.pop('migration') kwargs.pop('limits') version = '5.0' cctxt = client.prepare(server=destination, version=version, call_monitor_timeout=CONF.rpc_response_timeout, timeout=CONF.long_rpc_timeout) return cctxt.call(ctxt, 'check_can_live_migrate_destination', **kwargs) def check_can_live_migrate_source(self, ctxt, instance, dest_check_data): version = '5.0' client = self.router.client(ctxt) source = _compute_host(None, instance) cctxt = client.prepare(server=source, version=version) return cctxt.call(ctxt, 'check_can_live_migrate_source', instance=instance, dest_check_data=dest_check_data) def check_instance_shared_storage(self, ctxt, instance, data, host=None): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(host, instance), version=version) return cctxt.call(ctxt, 'check_instance_shared_storage', instance=instance, data=data) def confirm_resize(self, ctxt, instance, migration, host, cast=True): client = self.router.client(ctxt) version = '5.0' cctxt = client.prepare( server=_compute_host(host, instance), version=version) rpc_method = cctxt.cast if cast else cctxt.call return rpc_method(ctxt, 'confirm_resize', instance=instance, migration=migration) def confirm_snapshot_based_resize_at_source( self, ctxt, instance, migration): """Confirms a snapshot-based resize on the source host. Cleans the guest from the source hypervisor including disks and drops the MoveClaim which will free up "old_flavor" usage from the ResourceTracker. Deletes the allocations held by the migration consumer against the source compute node resource provider. This is a synchronous RPC call using the ``long_rpc_timeout`` configuration option. 
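# check_can_live_migrate_destination() above illustrates the backward-compat
# pattern used throughout this client: build the kwargs for the newest
# signature, then drop the newer arguments and fall back to version '5.0' if
# the pinned client cannot send the newer version. Below is a standalone
# sketch of the same idea; the stand-in client class and its simplified
# version comparison are hypothetical, for illustration only.
class _PinnedClient(object):
    def __init__(self, version_cap):
        self.version_cap = tuple(int(p) for p in version_cap.split('.'))

    def can_send_version(self, version):
        return tuple(int(p) for p in version.split('.')) <= self.version_cap


def build_call(client, instance, migration, limits):
    version = '5.3'
    kwargs = {'instance': instance, 'migration': migration, 'limits': limits}
    if not client.can_send_version(version):
        # Older computes do not know about these arguments; drop them rather
        # than failing the whole operation.
        kwargs.pop('migration')
        kwargs.pop('limits')
        version = '5.0'
    return version, kwargs


# A client pinned to 5.0 (e.g. during a rolling upgrade) degrades gracefully:
assert build_call(_PinnedClient('5.0'), 'inst', 'mig', 'lim')[0] == '5.0'
assert build_call(_PinnedClient('5.3'), 'inst', 'mig', 'lim')[0] == '5.3'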
:param ctxt: nova auth request context targeted at the source cell :param instance: Instance object being resized which should have the "old_flavor" attribute set :param migration: Migration object for the resize operation :raises: nova.exception.MigrationError if the source compute is too old to perform the operation :raises: oslo_messaging.exceptions.MessagingTimeout if the RPC call times out """ version = '5.8' client = self.router.client(ctxt) if not client.can_send_version(version): raise exception.MigrationError(reason=_('Compute too old')) cctxt = client.prepare(server=migration.source_compute, version=version, call_monitor_timeout=CONF.rpc_response_timeout, timeout=CONF.long_rpc_timeout) return cctxt.call( ctxt, 'confirm_snapshot_based_resize_at_source', instance=instance, migration=migration) def detach_interface(self, ctxt, instance, port_id): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'detach_interface', instance=instance, port_id=port_id) def detach_volume(self, ctxt, instance, volume_id, attachment_id=None): version = '5.0' client = self.router.client(ctxt) cctxt = client.prepare(server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'detach_volume', instance=instance, volume_id=volume_id, attachment_id=attachment_id) def finish_resize(self, ctxt, instance, migration, image, disk_info, host, request_spec): msg_args = { 'instance': instance, 'migration': migration, 'image': image, 'disk_info': disk_info, 'request_spec': request_spec, } client = self.router.client(ctxt) version = '5.2' if not client.can_send_version(version): msg_args.pop('request_spec') version = '5.0' cctxt = client.prepare( server=host, version=version) cctxt.cast(ctxt, 'finish_resize', **msg_args) def finish_revert_resize(self, ctxt, instance, migration, host, request_spec): msg_args = { 'instance': instance, 'migration': migration, 'request_spec': request_spec, } client = self.router.client(ctxt) version = '5.2' if not client.can_send_version(version): msg_args.pop('request_spec') version = '5.0' cctxt = client.prepare( server=host, version=version) cctxt.cast(ctxt, 'finish_revert_resize', **msg_args) def finish_snapshot_based_resize_at_dest( self, ctxt, instance, migration, snapshot_id, request_spec): """Finishes the snapshot-based resize at the destination compute. Sets up block devices and networking on the destination compute and spawns the guest. This is a synchronous RPC call using the ``long_rpc_timeout`` configuration option. :param ctxt: nova auth request context targeted at the target cell DB :param instance: The Instance object being resized with the ``migration_context`` field set. Upon successful completion of this method the vm_state should be "resized", the task_state should be None, and migration context, host/node and flavor-related fields should be set on the instance. :param migration: The Migration object for this resize operation. Upon successful completion of this method the migration status should be "finished". :param snapshot_id: ID of the image snapshot created for a non-volume-backed instance, else None. 
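# Unlike the "drop newer arguments" fallback, the snapshot-based (cross-cell)
# resize methods above refuse to run at all when the pinned RPC version is too
# old, because there is no older form of the call to fall back to. A
# standalone sketch of that guard follows; the names TooOldError and
# guarded_call are hypothetical, for illustration only.
class TooOldError(Exception):
    pass


def guarded_call(can_send_version, required_version, do_call):
    if not can_send_version(required_version):
        # Mirrors raising MigrationError(reason=_('Compute too old')) above.
        raise TooOldError('compute too old for RPC %s' % required_version)
    return do_call(required_version)


# Example: a cap below the required version refuses the call outright.
try:
    guarded_call(lambda v: False, '5.8', lambda v: 'called')
except TooOldError:
    pass
assert guarded_call(lambda v: True, '5.8', lambda v: 'called') == 'called'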
:param request_spec: nova.objects.RequestSpec object for the operation :raises: nova.exception.MigrationError if the destination compute service is too old for this method :raises: oslo_messaging.exceptions.MessagingTimeout if the pre-check RPC call times out """ client = self.router.client(ctxt) version = '5.7' if not client.can_send_version(version): raise exception.MigrationError(reason=_('Compute too old')) cctxt = client.prepare( server=migration.dest_compute, version=version, call_monitor_timeout=CONF.rpc_response_timeout, timeout=CONF.long_rpc_timeout) return cctxt.call( ctxt, 'finish_snapshot_based_resize_at_dest', instance=instance, migration=migration, snapshot_id=snapshot_id, request_spec=request_spec) def finish_revert_snapshot_based_resize_at_source( self, ctxt, instance, migration): """Reverts a snapshot-based resize at the source host. Spawn the guest and re-connect volumes/VIFs on the source host and revert the instance to use the old_flavor for resource usage reporting. Updates allocations in the placement service to move the source node allocations, held by the migration record, to the instance and drop the allocations held by the instance on the destination node. This is a synchronous RPC call using the ``long_rpc_timeout`` configuration option. :param ctxt: nova auth request context targeted at the source cell :param instance: Instance object whose vm_state is "resized" and task_state is "resize_reverting". :param migration: Migration object whose status is "reverting". :raises: nova.exception.MigrationError if the source compute is too old to perform the operation :raises: oslo_messaging.exceptions.MessagingTimeout if the RPC call times out """ version = '5.10' client = self.router.client(ctxt) if not client.can_send_version(version): raise exception.MigrationError(reason=_('Compute too old')) cctxt = client.prepare(server=migration.source_compute, version=version, call_monitor_timeout=CONF.rpc_response_timeout, timeout=CONF.long_rpc_timeout) return cctxt.call( ctxt, 'finish_revert_snapshot_based_resize_at_source', instance=instance, migration=migration) def get_console_output(self, ctxt, instance, tail_length): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) return cctxt.call(ctxt, 'get_console_output', instance=instance, tail_length=tail_length) def get_console_pool_info(self, ctxt, host, console_type): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=host, version=version) return cctxt.call(ctxt, 'get_console_pool_info', console_type=console_type) # TODO(stephenfin): This is no longer used and can be removed in v6.0 def get_console_topic(self, ctxt, host): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=host, version=version) return cctxt.call(ctxt, 'get_console_topic') def get_diagnostics(self, ctxt, instance): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) return cctxt.call(ctxt, 'get_diagnostics', instance=instance) def get_instance_diagnostics(self, ctxt, instance): version = '5.0' client = self.router.client(ctxt) cctxt = client.prepare(server=_compute_host(None, instance), version=version) return cctxt.call(ctxt, 'get_instance_diagnostics', instance=instance) def get_vnc_console(self, ctxt, instance, console_type): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) return cctxt.call(ctxt, 'get_vnc_console', instance=instance, 
console_type=console_type) def get_spice_console(self, ctxt, instance, console_type): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) return cctxt.call(ctxt, 'get_spice_console', instance=instance, console_type=console_type) def get_rdp_console(self, ctxt, instance, console_type): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) return cctxt.call(ctxt, 'get_rdp_console', instance=instance, console_type=console_type) def get_mks_console(self, ctxt, instance, console_type): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) return cctxt.call(ctxt, 'get_mks_console', instance=instance, console_type=console_type) def get_serial_console(self, ctxt, instance, console_type): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) return cctxt.call(ctxt, 'get_serial_console', instance=instance, console_type=console_type) def validate_console_port(self, ctxt, instance, port, console_type): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) return cctxt.call(ctxt, 'validate_console_port', instance=instance, port=port, console_type=console_type) def host_maintenance_mode(self, ctxt, host, host_param, mode): '''Set host maintenance mode :param ctxt: request context :param host_param: This value is placed in the message to be the 'host' parameter for the remote method. :param mode: :param host: This is the host to send the message to. ''' version = '5.0' cctxt = self.router.client(ctxt).prepare( server=host, version=version) return cctxt.call(ctxt, 'host_maintenance_mode', host=host_param, mode=mode) def host_power_action(self, ctxt, host, action): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=host, version=version) return cctxt.call(ctxt, 'host_power_action', action=action) def inject_network_info(self, ctxt, instance): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'inject_network_info', instance=instance) def live_migration(self, ctxt, instance, dest, block_migration, host, migration, migrate_data=None): version = '5.0' client = self.router.client(ctxt) cctxt = client.prepare(server=host, version=version) cctxt.cast(ctxt, 'live_migration', instance=instance, dest=dest, block_migration=block_migration, migrate_data=migrate_data, migration=migration) def live_migration_force_complete(self, ctxt, instance, migration): version = '5.0' client = self.router.client(ctxt) cctxt = client.prepare( server=_compute_host(migration.source_compute, instance), version=version) cctxt.cast(ctxt, 'live_migration_force_complete', instance=instance) def live_migration_abort(self, ctxt, instance, migration_id): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'live_migration_abort', instance=instance, migration_id=migration_id) def pause_instance(self, ctxt, instance): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'pause_instance', instance=instance) def post_live_migration_at_destination(self, ctxt, instance, block_migration, host): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=host, version=version, 
call_monitor_timeout=CONF.rpc_response_timeout, timeout=CONF.long_rpc_timeout) return cctxt.call(ctxt, 'post_live_migration_at_destination', instance=instance, block_migration=block_migration) # TODO(mriedem): Remove the unused block_migration argument in v6.0 of # the compute RPC API. def pre_live_migration(self, ctxt, instance, block_migration, disk, host, migrate_data): version = '5.0' client = self.router.client(ctxt) cctxt = client.prepare(server=host, version=version, timeout=CONF.long_rpc_timeout, call_monitor_timeout=CONF.rpc_response_timeout) return cctxt.call(ctxt, 'pre_live_migration', instance=instance, block_migration=block_migration, disk=disk, migrate_data=migrate_data) def supports_resize_with_qos_port(self, ctxt): """Returns whether we can send 5.2, needed for migrating and resizing servers with ports having resource request. """ client = self.router.client(ctxt) return client.can_send_version('5.2') # TODO(mriedem): Drop compat for request_spec being a legacy dict in v6.0. def prep_resize(self, ctxt, instance, image, instance_type, host, migration, request_spec, filter_properties, node, clean_shutdown, host_list): # TODO(mriedem): We should pass the ImageMeta object through to the # compute but that also requires plumbing changes through the resize # flow for other methods like resize_instance and finish_resize. image_p = objects_base.obj_to_primitive(image) msg_args = {'instance': instance, 'instance_type': instance_type, 'image': image_p, 'request_spec': request_spec, 'filter_properties': filter_properties, 'node': node, 'migration': migration, 'clean_shutdown': clean_shutdown, 'host_list': host_list} client = self.router.client(ctxt) version = '5.1' if not client.can_send_version(version): msg_args['request_spec'] = ( request_spec.to_legacy_request_spec_dict()) version = '5.0' cctxt = client.prepare(server=host, version=version) cctxt.cast(ctxt, 'prep_resize', **msg_args) def prep_snapshot_based_resize_at_dest( self, ctxt, instance, flavor, nodename, migration, limits, request_spec, destination): """Performs pre-cross-cell resize resource claim on the dest host. This runs on the destination host in a cross-cell resize operation before the resize is actually started. Performs a resize_claim for resources that are not claimed in placement like PCI devices and NUMA topology. Note that this is different from same-cell prep_resize in that this: * Does not RPC cast to the source compute, that is orchestrated from conductor. * This does not reschedule on failure, conductor handles that since conductor is synchronously RPC calling this method. :param ctxt: user auth request context :param instance: the instance being resized :param flavor: the flavor being resized to (unchanged for cold migrate) :param nodename: Name of the target compute node :param migration: nova.objects.Migration object for the operation :param limits: nova.objects.SchedulerLimits object of resource limits :param request_spec: nova.objects.RequestSpec object for the operation :param destination: possible target host for the cross-cell resize :returns: nova.objects.MigrationContext; the migration context created on the destination host during the resize_claim. 
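# The synchronous methods above (pre_live_migration, the snapshot-based resize
# calls, reserve_block_device_name, ...) pass two timeouts: ``timeout`` is the
# overall deadline (CONF.long_rpc_timeout), while ``call_monitor_timeout`` is
# the shorter per-heartbeat deadline (CONF.rpc_response_timeout) that detects
# a dead peer early without cutting legitimately slow work short. A standalone
# sketch of how the pair might be assembled; the helper name and the default
# values shown are illustrative, not authoritative configuration defaults.
def long_call_timeouts(rpc_response_timeout=60, long_rpc_timeout=1800):
    """Return kwargs for a long-running, monitored synchronous RPC call."""
    return {
        # Fail fast if the remote side stops heartbeating...
        'call_monitor_timeout': rpc_response_timeout,
        # ...but allow healthy calls to run for much longer overall.
        'timeout': long_rpc_timeout,
    }


# Example usage with typical values:
assert long_call_timeouts() == {'call_monitor_timeout': 60, 'timeout': 1800}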
:raises: nova.exception.MigrationPreCheckError if the pre-check validation fails for the given host selection or the destination compute service is too old for this method :raises: oslo_messaging.exceptions.MessagingTimeout if the pre-check RPC call times out """ version = '5.5' client = self.router.client(ctxt) if not client.can_send_version(version): raise exception.MigrationPreCheckError(reason=_('Compute too old')) cctxt = client.prepare(server=destination, version=version, call_monitor_timeout=CONF.rpc_response_timeout, timeout=CONF.long_rpc_timeout) return cctxt.call(ctxt, 'prep_snapshot_based_resize_at_dest', instance=instance, flavor=flavor, nodename=nodename, migration=migration, limits=limits, request_spec=request_spec) def prep_snapshot_based_resize_at_source( self, ctxt, instance, migration, snapshot_id=None): """Prepares the instance at the source host for cross-cell resize Performs actions like powering off the guest, upload snapshot data if the instance is not volume-backed, disconnecting volumes, unplugging VIFs and activating the destination host port bindings. :param ctxt: user auth request context targeted at source cell :param instance: nova.objects.Instance; the instance being resized. The expected instance.task_state is "resize_migrating" when calling this method, and the expected task_state upon successful completion is "resize_migrated". :param migration: nova.objects.Migration object for the operation. The expected migration.status is "pre-migrating" when calling this method and the expected status upon successful completion is "post-migrating". :param snapshot_id: ID of the image snapshot to upload if not a volume-backed instance :raises: nova.exception.InstancePowerOffFailure if stopping the instance fails :raises: nova.exception.MigrationError if the source compute is too old to perform the operation :raises: oslo_messaging.exceptions.MessagingTimeout if the RPC call times out """ version = '5.6' client = self.router.client(ctxt) if not client.can_send_version(version): raise exception.MigrationError(reason=_('Compute too old')) cctxt = client.prepare(server=_compute_host(None, instance), version=version, call_monitor_timeout=CONF.rpc_response_timeout, timeout=CONF.long_rpc_timeout) return cctxt.call( ctxt, 'prep_snapshot_based_resize_at_source', instance=instance, migration=migration, snapshot_id=snapshot_id) def reboot_instance(self, ctxt, instance, block_device_info, reboot_type): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'reboot_instance', instance=instance, block_device_info=block_device_info, reboot_type=reboot_type) def rebuild_instance(self, ctxt, instance, new_pass, injected_files, image_ref, orig_image_ref, orig_sys_metadata, bdms, recreate, on_shared_storage, host, node, preserve_ephemeral, migration, limits, request_spec): # NOTE(edleafe): compute nodes can only use the dict form of limits. 
if isinstance(limits, objects.SchedulerLimits): limits = limits.to_dict() msg_args = {'preserve_ephemeral': preserve_ephemeral, 'migration': migration, 'scheduled_node': node, 'limits': limits, 'request_spec': request_spec} version = '5.0' client = self.router.client(ctxt) cctxt = client.prepare(server=_compute_host(host, instance), version=version) cctxt.cast(ctxt, 'rebuild_instance', instance=instance, new_pass=new_pass, injected_files=injected_files, image_ref=image_ref, orig_image_ref=orig_image_ref, orig_sys_metadata=orig_sys_metadata, bdms=bdms, recreate=recreate, on_shared_storage=on_shared_storage, **msg_args) def remove_aggregate_host(self, ctxt, host, aggregate, host_param, slave_info=None): '''Remove aggregate host. :param ctxt: request context :param aggregate: :param host_param: This value is placed in the message to be the 'host' parameter for the remote method. :param host: This is the host to send the message to. ''' version = '5.0' cctxt = self.router.client(ctxt).prepare( server=host, version=version) cctxt.cast(ctxt, 'remove_aggregate_host', aggregate=aggregate, host=host_param, slave_info=slave_info) def remove_fixed_ip_from_instance(self, ctxt, instance, address): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'remove_fixed_ip_from_instance', instance=instance, address=address) def remove_volume_connection(self, ctxt, instance, volume_id, host): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=host, version=version) return cctxt.call(ctxt, 'remove_volume_connection', instance=instance, volume_id=volume_id) def rescue_instance(self, ctxt, instance, rescue_password, rescue_image_ref=None, clean_shutdown=True): version = '5.0' msg_args = {'rescue_password': rescue_password, 'clean_shutdown': clean_shutdown, 'rescue_image_ref': rescue_image_ref, 'instance': instance, } cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'rescue_instance', **msg_args) # Remove as it only supports nova network def reset_network(self, ctxt, instance): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'reset_network', instance=instance) def resize_instance(self, ctxt, instance, migration, image, instance_type, request_spec, clean_shutdown=True): msg_args = {'instance': instance, 'migration': migration, 'image': image, 'instance_type': instance_type, 'clean_shutdown': clean_shutdown, 'request_spec': request_spec, } version = '5.2' client = self.router.client(ctxt) if not client.can_send_version(version): msg_args.pop('request_spec') version = '5.0' cctxt = client.prepare(server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'resize_instance', **msg_args) def resume_instance(self, ctxt, instance): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'resume_instance', instance=instance) def revert_resize(self, ctxt, instance, migration, host, request_spec): msg_args = { 'instance': instance, 'migration': migration, 'request_spec': request_spec, } client = self.router.client(ctxt) version = '5.2' if not client.can_send_version(version): msg_args.pop('request_spec') version = '5.0' cctxt = client.prepare( server=_compute_host(host, instance), version=version) cctxt.cast(ctxt, 'revert_resize', **msg_args) def revert_snapshot_based_resize_at_dest(self, ctxt, instance, 
migration): """Reverts a snapshot-based resize at the destination host. Cleans the guest from the destination compute service host hypervisor and related resources (ports, volumes) and frees resource usage from the compute service on that host. This is a synchronous RPC call using the ``long_rpc_timeout`` configuration option. :param ctxt: nova auth request context targeted at the target cell :param instance: Instance object whose vm_state is "resized" and task_state is "resize_reverting". :param migration: Migration object whose status is "reverting". :raises: nova.exception.MigrationError if the destination compute service is too old to perform the operation :raises: oslo_messaging.exceptions.MessagingTimeout if the RPC call times out """ version = '5.9' client = self.router.client(ctxt) if not client.can_send_version(version): raise exception.MigrationError(reason=_('Compute too old')) cctxt = client.prepare(server=migration.dest_compute, version=version, call_monitor_timeout=CONF.rpc_response_timeout, timeout=CONF.long_rpc_timeout) return cctxt.call( ctxt, 'revert_snapshot_based_resize_at_dest', instance=instance, migration=migration) def rollback_live_migration_at_destination(self, ctxt, instance, host, destroy_disks, migrate_data): version = '5.0' client = self.router.client(ctxt) cctxt = client.prepare(server=host, version=version) cctxt.cast(ctxt, 'rollback_live_migration_at_destination', instance=instance, destroy_disks=destroy_disks, migrate_data=migrate_data) def supports_numa_live_migration(self, ctxt): """Returns whether we can send 5.3, needed for NUMA live migration. """ client = self.router.client(ctxt) return client.can_send_version('5.3') def drop_move_claim_at_destination(self, ctxt, instance, host): """Called by the source of a live migration that's being rolled back. This is a call not because we care about the return value, but because dropping the move claim depends on instance.migration_context being set, and we drop the migration context on the source. Thus, to avoid races, we call the destination synchronously to make sure it's done dropping the move claim before we drop the migration context from the instance. 
""" version = '5.3' client = self.router.client(ctxt) cctxt = client.prepare(server=host, version=version) cctxt.call(ctxt, 'drop_move_claim_at_destination', instance=instance) def set_admin_password(self, ctxt, instance, new_pass): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) return cctxt.call(ctxt, 'set_admin_password', instance=instance, new_pass=new_pass) def set_host_enabled(self, ctxt, host, enabled): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=host, version=version, call_monitor_timeout=CONF.rpc_response_timeout, timeout=CONF.long_rpc_timeout) return cctxt.call(ctxt, 'set_host_enabled', enabled=enabled) def swap_volume(self, ctxt, instance, old_volume_id, new_volume_id, new_attachment_id): version = '5.0' client = self.router.client(ctxt) kwargs = dict(instance=instance, old_volume_id=old_volume_id, new_volume_id=new_volume_id, new_attachment_id=new_attachment_id) cctxt = client.prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'swap_volume', **kwargs) def get_host_uptime(self, ctxt, host): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=host, version=version) return cctxt.call(ctxt, 'get_host_uptime') def reserve_block_device_name(self, ctxt, instance, device, volume_id, disk_bus, device_type, tag, multiattach): kw = {'instance': instance, 'device': device, 'volume_id': volume_id, 'disk_bus': disk_bus, 'device_type': device_type, 'tag': tag, 'multiattach': multiattach} version = '5.0' client = self.router.client(ctxt) cctxt = client.prepare(server=_compute_host(None, instance), version=version, call_monitor_timeout=CONF.rpc_response_timeout, timeout=CONF.long_rpc_timeout) return cctxt.call(ctxt, 'reserve_block_device_name', **kw) def backup_instance(self, ctxt, instance, image_id, backup_type, rotation): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'backup_instance', instance=instance, image_id=image_id, backup_type=backup_type, rotation=rotation) def snapshot_instance(self, ctxt, instance, image_id): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'snapshot_instance', instance=instance, image_id=image_id) def start_instance(self, ctxt, instance): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'start_instance', instance=instance) def stop_instance(self, ctxt, instance, do_cast=True, clean_shutdown=True): msg_args = {'instance': instance, 'clean_shutdown': clean_shutdown} version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) rpc_method = cctxt.cast if do_cast else cctxt.call return rpc_method(ctxt, 'stop_instance', **msg_args) def suspend_instance(self, ctxt, instance): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'suspend_instance', instance=instance) def terminate_instance(self, ctxt, instance, bdms): client = self.router.client(ctxt) version = '5.0' cctxt = client.prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'terminate_instance', instance=instance, bdms=bdms) def unpause_instance(self, ctxt, instance): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) 
cctxt.cast(ctxt, 'unpause_instance', instance=instance) def unrescue_instance(self, ctxt, instance): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'unrescue_instance', instance=instance) def soft_delete_instance(self, ctxt, instance): client = self.router.client(ctxt) version = '5.0' cctxt = client.prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'soft_delete_instance', instance=instance) def restore_instance(self, ctxt, instance): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'restore_instance', instance=instance) def shelve_instance(self, ctxt, instance, image_id=None, clean_shutdown=True): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'shelve_instance', instance=instance, image_id=image_id, clean_shutdown=clean_shutdown) def shelve_offload_instance(self, ctxt, instance, clean_shutdown=True): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'shelve_offload_instance', instance=instance, clean_shutdown=clean_shutdown) def unshelve_instance(self, ctxt, instance, host, request_spec, image=None, filter_properties=None, node=None): version = '5.2' msg_kwargs = { 'instance': instance, 'image': image, 'filter_properties': filter_properties, 'node': node, 'request_spec': request_spec, } client = self.router.client(ctxt) if not client.can_send_version(version): msg_kwargs.pop('request_spec') version = '5.0' cctxt = client.prepare( server=host, version=version) cctxt.cast(ctxt, 'unshelve_instance', **msg_kwargs) def volume_snapshot_create(self, ctxt, instance, volume_id, create_info): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'volume_snapshot_create', instance=instance, volume_id=volume_id, create_info=create_info) def volume_snapshot_delete(self, ctxt, instance, volume_id, snapshot_id, delete_info): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'volume_snapshot_delete', instance=instance, volume_id=volume_id, snapshot_id=snapshot_id, delete_info=delete_info) def external_instance_event(self, ctxt, instances, events, host=None): instance = instances[0] cctxt = self.router.client(ctxt).prepare( server=_compute_host(host, instance), version='5.0') cctxt.cast(ctxt, 'external_instance_event', instances=instances, events=events) def build_and_run_instance(self, ctxt, instance, host, image, request_spec, filter_properties, admin_password=None, injected_files=None, requested_networks=None, security_groups=None, block_device_mapping=None, node=None, limits=None, host_list=None, accel_uuids=None): # NOTE(edleafe): compute nodes can only use the dict form of limits. 
if isinstance(limits, objects.SchedulerLimits): limits = limits.to_dict() kwargs = {"instance": instance, "image": image, "request_spec": request_spec, "filter_properties": filter_properties, "admin_password": admin_password, "injected_files": injected_files, "requested_networks": requested_networks, "security_groups": security_groups, "block_device_mapping": block_device_mapping, "node": node, "limits": limits, "host_list": host_list, "accel_uuids": accel_uuids, } client = self.router.client(ctxt) version = '5.11' if not client.can_send_version(version): kwargs.pop('accel_uuids') version = '5.0' cctxt = client.prepare(server=host, version=version) cctxt.cast(ctxt, 'build_and_run_instance', **kwargs) def quiesce_instance(self, ctxt, instance): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) return cctxt.call(ctxt, 'quiesce_instance', instance=instance) def unquiesce_instance(self, ctxt, instance, mapping=None): version = '5.0' cctxt = self.router.client(ctxt).prepare( server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'unquiesce_instance', instance=instance, mapping=mapping) # TODO(stephenfin): Remove this as it's nova-network only def refresh_instance_security_rules(self, ctxt, instance, host): version = '5.0' client = self.router.client(ctxt) cctxt = client.prepare(server=_compute_host(None, instance), version=version) cctxt.cast(ctxt, 'refresh_instance_security_rules', instance=instance) def trigger_crash_dump(self, ctxt, instance): version = '5.0' client = self.router.client(ctxt) cctxt = client.prepare(server=_compute_host(None, instance), version=version) return cctxt.cast(ctxt, "trigger_crash_dump", instance=instance) def cache_images(self, ctxt, host, image_ids): version = '5.4' client = self.router.client(ctxt) if not client.can_send_version(version): raise exception.NovaException('Compute RPC version pin does not ' 'allow cache_images() to be called') # This is a potentially very long-running call, so we provide the # two timeout values which enables the call monitor in oslo.messaging # so that this can run for extended periods. cctxt = client.prepare(server=host, version=version, call_monitor_timeout=CONF.rpc_response_timeout, timeout=CONF.long_rpc_timeout) return cctxt.call(ctxt, 'cache_images', image_ids=image_ids) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/compute/stats.py0000664000175000017500000001212700000000000016720 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova.compute import task_states from nova.compute import vm_states from nova.i18n import _ class Stats(dict): """Handler for updates to compute node workload stats.""" def __init__(self): super(Stats, self).__init__() # Track instance states for compute node workload calculations: self.states = {} def clear(self): super(Stats, self).clear() self.states.clear() def digest_stats(self, stats): """Apply stats provided as a dict or a json encoded string.""" if stats is None: return if isinstance(stats, dict): self.update(stats) return raise ValueError(_('Unexpected type adding stats')) @property def io_workload(self): """Calculate an I/O based load by counting I/O heavy operations.""" def _get(state, state_type): key = "num_%s_%s" % (state_type, state) return self.get(key, 0) num_builds = _get(vm_states.BUILDING, "vm") num_migrations = _get(task_states.RESIZE_MIGRATING, "task") num_rebuilds = _get(task_states.REBUILDING, "task") num_resizes = _get(task_states.RESIZE_PREP, "task") num_snapshots = _get(task_states.IMAGE_SNAPSHOT, "task") num_backups = _get(task_states.IMAGE_BACKUP, "task") num_rescues = _get(task_states.RESCUING, "task") num_unshelves = _get(task_states.UNSHELVING, "task") return (num_builds + num_rebuilds + num_resizes + num_migrations + num_snapshots + num_backups + num_rescues + num_unshelves) def calculate_workload(self): """Calculate current load of the compute host based on task states. """ current_workload = 0 for k in self: if k.startswith("num_task") and not k.endswith("None"): current_workload += self[k] return current_workload @property def num_instances(self): return self.get("num_instances", 0) def num_instances_for_project(self, project_id): key = "num_proj_%s" % project_id return self.get(key, 0) def num_os_type(self, os_type): key = "num_os_type_%s" % os_type return self.get(key, 0) def update_stats_for_instance(self, instance, is_removed=False): """Update stats after an instance is changed.""" uuid = instance['uuid'] # First, remove stats from the previous instance # state: if uuid in self.states: old_state = self.states[uuid] self._decrement("num_vm_%s" % old_state['vm_state']) self._decrement("num_task_%s" % old_state['task_state']) self._decrement("num_os_type_%s" % old_state['os_type']) self._decrement("num_proj_%s" % old_state['project_id']) else: # new instance self._increment("num_instances") # Now update stats from the new instance state: (vm_state, task_state, os_type, project_id) = \ self._extract_state_from_instance(instance) if is_removed or vm_state in vm_states.ALLOW_RESOURCE_REMOVAL: self._decrement("num_instances") self.states.pop(uuid) else: self._increment("num_vm_%s" % vm_state) self._increment("num_task_%s" % task_state) self._increment("num_os_type_%s" % os_type) self._increment("num_proj_%s" % project_id) # save updated I/O workload in stats: self["io_workload"] = self.io_workload def _decrement(self, key): x = self.get(key, 0) self[key] = x - 1 def _increment(self, key): x = self.get(key, 0) self[key] = x + 1 def _extract_state_from_instance(self, instance): """Save the useful bits of instance state for tracking purposes.""" uuid = instance['uuid'] vm_state = instance['vm_state'] task_state = instance['task_state'] os_type = instance['os_type'] project_id = instance['project_id'] self.states[uuid] = dict(vm_state=vm_state, task_state=task_state, os_type=os_type, project_id=project_id) return (vm_state, task_state, os_type, project_id) def build_failed(self): self['failed_builds'] = self.get('failed_builds', 0) + 1 def 
build_succeeded(self): # FIXME(danms): Make this more graceful, either by time-based aging or # a fixed decline upon success self['failed_builds'] = 0 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/compute/task_states.py0000664000175000017500000001165100000000000020110 0ustar00zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Possible task states for instances. Compute instance task states represent what is happening to the instance at the current moment. These tasks can be generic, such as 'spawning', or specific, such as 'block_device_mapping'. These task states allow for a better view into what an instance is doing and should be displayed to users/administrators as necessary. """ from nova.objects import fields # possible task states during create() SCHEDULING = fields.InstanceTaskState.SCHEDULING BLOCK_DEVICE_MAPPING = fields.InstanceTaskState.BLOCK_DEVICE_MAPPING NETWORKING = fields.InstanceTaskState.NETWORKING SPAWNING = fields.InstanceTaskState.SPAWNING # possible task states during snapshot() IMAGE_SNAPSHOT = fields.InstanceTaskState.IMAGE_SNAPSHOT IMAGE_SNAPSHOT_PENDING = fields.InstanceTaskState.IMAGE_SNAPSHOT_PENDING IMAGE_PENDING_UPLOAD = fields.InstanceTaskState.IMAGE_PENDING_UPLOAD IMAGE_UPLOADING = fields.InstanceTaskState.IMAGE_UPLOADING # possible task states during backup() IMAGE_BACKUP = fields.InstanceTaskState.IMAGE_BACKUP # possible task states during set_admin_password() UPDATING_PASSWORD = fields.InstanceTaskState.UPDATING_PASSWORD # possible task states during resize() RESIZE_PREP = fields.InstanceTaskState.RESIZE_PREP RESIZE_MIGRATING = fields.InstanceTaskState.RESIZE_MIGRATING RESIZE_MIGRATED = fields.InstanceTaskState.RESIZE_MIGRATED RESIZE_FINISH = fields.InstanceTaskState.RESIZE_FINISH # possible task states during revert_resize() RESIZE_REVERTING = fields.InstanceTaskState.RESIZE_REVERTING # possible task states during confirm_resize() RESIZE_CONFIRMING = fields.InstanceTaskState.RESIZE_CONFIRMING # possible task states during reboot() REBOOTING = fields.InstanceTaskState.REBOOTING REBOOT_PENDING = fields.InstanceTaskState.REBOOT_PENDING REBOOT_STARTED = fields.InstanceTaskState.REBOOT_STARTED REBOOTING_HARD = fields.InstanceTaskState.REBOOTING_HARD REBOOT_PENDING_HARD = fields.InstanceTaskState.REBOOT_PENDING_HARD REBOOT_STARTED_HARD = fields.InstanceTaskState.REBOOT_STARTED_HARD # possible task states during pause() PAUSING = fields.InstanceTaskState.PAUSING # possible task states during unpause() UNPAUSING = fields.InstanceTaskState.UNPAUSING # possible task states during suspend() SUSPENDING = fields.InstanceTaskState.SUSPENDING # possible task states during resume() RESUMING = fields.InstanceTaskState.RESUMING # possible task states during power_off() POWERING_OFF = fields.InstanceTaskState.POWERING_OFF # possible task states during power_on() POWERING_ON = fields.InstanceTaskState.POWERING_ON # possible task 
states during rescue() RESCUING = fields.InstanceTaskState.RESCUING # possible task states during unrescue() UNRESCUING = fields.InstanceTaskState.UNRESCUING # possible task states during rebuild() REBUILDING = fields.InstanceTaskState.REBUILDING REBUILD_BLOCK_DEVICE_MAPPING = \ fields.InstanceTaskState.REBUILD_BLOCK_DEVICE_MAPPING REBUILD_SPAWNING = fields.InstanceTaskState.REBUILD_SPAWNING # possible task states during live_migrate() MIGRATING = fields.InstanceTaskState.MIGRATING # possible task states during delete() DELETING = fields.InstanceTaskState.DELETING # possible task states during soft_delete() SOFT_DELETING = fields.InstanceTaskState.SOFT_DELETING # possible task states during restore() RESTORING = fields.InstanceTaskState.RESTORING # possible task states during shelve() SHELVING = fields.InstanceTaskState.SHELVING SHELVING_IMAGE_PENDING_UPLOAD = \ fields.InstanceTaskState.SHELVING_IMAGE_PENDING_UPLOAD SHELVING_IMAGE_UPLOADING = fields.InstanceTaskState.SHELVING_IMAGE_UPLOADING # possible task states during shelve_offload() SHELVING_OFFLOADING = fields.InstanceTaskState.SHELVING_OFFLOADING # possible task states during unshelve() UNSHELVING = fields.InstanceTaskState.UNSHELVING ALLOW_REBOOT = [None, REBOOTING, REBOOT_PENDING, REBOOT_STARTED, RESUMING, REBOOTING_HARD, UNPAUSING, PAUSING, SUSPENDING] # These states indicate a reboot soft_reboot_states = (REBOOTING, REBOOT_PENDING, REBOOT_STARTED) hard_reboot_states = (REBOOTING_HARD, REBOOT_PENDING_HARD, REBOOT_STARTED_HARD) # These states indicate a resize in progress resizing_states = (RESIZE_PREP, RESIZE_MIGRATING, RESIZE_MIGRATED, RESIZE_FINISH) # These states indicate a rebuild rebuild_states = (REBUILDING, REBUILD_BLOCK_DEVICE_MAPPING, REBUILD_SPAWNING) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/compute/utils.py0000664000175000017500000017550500000000000016734 0ustar00zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
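# The grouping tuples above (soft_reboot_states, resizing_states,
# rebuild_states) exist so callers can classify an instance by its current
# task_state with a simple membership test. A standalone sketch using plain
# strings in place of the field constants (illustration only; the real code
# uses the fields.InstanceTaskState values):
RESIZING_SKETCH = ('resize_prep', 'resize_migrating', 'resize_migrated',
                   'resize_finish')


def is_resizing(task_state):
    """Return True if the task state indicates a resize in progress."""
    return task_state in RESIZING_SKETCH


assert is_resizing('resize_migrating')
assert not is_resizing(None)
assert not is_resizing('rebuilding')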
"""Compute-related Utilities and helpers.""" import contextlib import functools import inspect import itertools import math import traceback import netifaces from oslo_log import log from oslo_serialization import jsonutils from oslo_utils import excutils import six from nova.accelerator import cyborg from nova import block_device from nova.compute import power_state from nova.compute import task_states from nova.compute import vm_states import nova.conf from nova import exception from nova import notifications from nova.notifications.objects import aggregate as aggregate_notification from nova.notifications.objects import base as notification_base from nova.notifications.objects import compute_task as task_notification from nova.notifications.objects import exception as notification_exception from nova.notifications.objects import flavor as flavor_notification from nova.notifications.objects import instance as instance_notification from nova.notifications.objects import keypair as keypair_notification from nova.notifications.objects import libvirt as libvirt_notification from nova.notifications.objects import metrics as metrics_notification from nova.notifications.objects import request_spec as reqspec_notification from nova.notifications.objects import scheduler as scheduler_notification from nova.notifications.objects import server_group as sg_notification from nova.notifications.objects import volume as volume_notification from nova import objects from nova.objects import fields from nova import rpc from nova import safe_utils from nova import utils from nova.virt import driver CONF = nova.conf.CONF LOG = log.getLogger(__name__) # These properties are specific to a particular image by design. It # does not make sense for them to be inherited by server snapshots. # This list is distinct from the configuration option of the same # (lowercase) name. NON_INHERITABLE_IMAGE_PROPERTIES = frozenset([ 'cinder_encryption_key_id', 'cinder_encryption_key_deletion_policy', 'img_signature', 'img_signature_hash_method', 'img_signature_key_type', 'img_signature_certificate_uuid']) def exception_to_dict(fault, message=None): """Converts exceptions to a dict for use in notifications. :param fault: Exception that occurred :param message: Optional fault message, otherwise the message is derived from the fault itself. :returns: dict with the following items: - exception: the fault itself - message: one of (in priority order): - the provided message to this method - a formatted NovaException message - the fault class name - code: integer code for the fault (defaults to 500) """ # TODO(johngarbutt) move to nova/exception.py to share with wrap_exception code = 500 if hasattr(fault, "kwargs"): code = fault.kwargs.get('code', 500) # get the message from the exception that was thrown # if that does not exist, use the name of the exception class itself try: if not message: message = fault.format_message() # These exception handlers are broad so we don't fail to log the fault # just because there is an unexpected error retrieving the message except Exception: # In this case either we have a NovaException which failed to format # the message or we have a non-nova exception which could contain # sensitive details. Since we're not sure, be safe and set the message # to the exception class name. Note that we don't guard on # context.is_admin here because the message is always shown in the API, # even to non-admin users (e.g. NoValidHost) but only the traceback # details are shown to users with the admin role. 
Checking for admin # context here is also not helpful because admins can perform # operations on a tenant user's server (migrations, reboot, etc) and # service startup and periodic tasks could take actions on a server # and those use an admin context. message = fault.__class__.__name__ # NOTE(dripton) The message field in the database is limited to 255 chars. # MySQL silently truncates overly long messages, but PostgreSQL throws an # error if we don't truncate it. u_message = utils.safe_truncate(message, 255) fault_dict = dict(exception=fault) fault_dict["message"] = u_message fault_dict["code"] = code return fault_dict def _get_fault_details(exc_info, error_code): details = '' # TODO(mriedem): Why do we only include the details if the code is 500? # Though for non-nova exceptions the code will probably be 500. if exc_info and error_code == 500: # We get the full exception details including the value since # the fault message may not contain that information for non-nova # exceptions (see exception_to_dict). details = ''.join(traceback.format_exception( exc_info[0], exc_info[1], exc_info[2])) return six.text_type(details) def add_instance_fault_from_exc(context, instance, fault, exc_info=None, fault_message=None): """Adds the specified fault to the database.""" fault_obj = objects.InstanceFault(context=context) fault_obj.host = CONF.host fault_obj.instance_uuid = instance.uuid fault_obj.update(exception_to_dict(fault, message=fault_message)) code = fault_obj.code fault_obj.details = _get_fault_details(exc_info, code) fault_obj.create() def get_device_name_for_instance(instance, bdms, device): """Validates (or generates) a device name for instance. This method is a wrapper for get_next_device_name that gets the list of used devices and the root device from a block device mapping. :raises TooManyDiskDevices: if the maxmimum allowed devices to attach to a single instance is exceeded. """ mappings = block_device.instance_block_mapping(instance, bdms) return get_next_device_name(instance, mappings.values(), mappings['root'], device) def default_device_names_for_instance(instance, root_device_name, *block_device_lists): """Generate missing device names for an instance. :raises TooManyDiskDevices: if the maxmimum allowed devices to attach to a single instance is exceeded. """ dev_list = [bdm.device_name for bdm in itertools.chain(*block_device_lists) if bdm.device_name] if root_device_name not in dev_list: dev_list.append(root_device_name) for bdm in itertools.chain(*block_device_lists): dev = bdm.device_name if not dev: dev = get_next_device_name(instance, dev_list, root_device_name) bdm.device_name = dev bdm.save() dev_list.append(dev) def check_max_disk_devices_to_attach(num_devices): maximum = CONF.compute.max_disk_devices_to_attach if maximum < 0: return if num_devices > maximum: raise exception.TooManyDiskDevices(maximum=maximum) def get_next_device_name(instance, device_name_list, root_device_name=None, device=None): """Validates (or generates) a device name for instance. If device is not set, it will generate a unique device appropriate for the instance. It uses the root_device_name (if provided) and the list of used devices to find valid device names. If the device name is valid but applicable to a different backend (for example /dev/vdc is specified but the backend uses /dev/xvdc), the device name will be converted to the appropriate format. :raises TooManyDiskDevices: if the maxmimum allowed devices to attach to a single instance is exceeded. 
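# get_next_device_name() below picks the first device letter that is not
# already used by the instance's block devices. A standalone sketch of the
# letter allocation, generating 'a'..'z', 'aa', 'ab', ... like spreadsheet
# columns (hypothetical helpers, illustration only; the real code delegates to
# nova.block_device):
import string


def _letter(index):
    letters = ''
    while True:
        letters = string.ascii_lowercase[index % 26] + letters
        index = index // 26 - 1
        if index < 0:
            return letters


def next_unused_letter(used):
    index = 0
    while True:
        letter = _letter(index)
        if letter not in used:
            return letter
        index += 1


assert next_unused_letter(set()) == 'a'
assert next_unused_letter({'a', 'b'}) == 'c'
assert next_unused_letter(set(string.ascii_lowercase)) == 'aa'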
""" req_prefix = None req_letter = None if device: try: req_prefix, req_letter = block_device.match_device(device) except (TypeError, AttributeError, ValueError): raise exception.InvalidDevicePath(path=device) if not root_device_name: root_device_name = block_device.DEFAULT_ROOT_DEV_NAME try: prefix = block_device.match_device( block_device.prepend_dev(root_device_name))[0] except (TypeError, AttributeError, ValueError): raise exception.InvalidDevicePath(path=root_device_name) # NOTE(vish): remove this when xenapi is setting default_root_device if driver.is_xenapi(): prefix = '/dev/xvd' if req_prefix != prefix: LOG.debug("Using %(prefix)s instead of %(req_prefix)s", {'prefix': prefix, 'req_prefix': req_prefix}) used_letters = set() for device_path in device_name_list: letter = block_device.get_device_letter(device_path) used_letters.add(letter) # NOTE(vish): remove this when xenapi is properly setting # default_ephemeral_device and default_swap_device if driver.is_xenapi(): flavor = instance.get_flavor() if flavor.ephemeral_gb: used_letters.add('b') if flavor.swap: used_letters.add('c') check_max_disk_devices_to_attach(len(used_letters) + 1) if not req_letter: req_letter = _get_unused_letter(used_letters) if req_letter in used_letters: raise exception.DevicePathInUse(path=device) return prefix + req_letter def get_root_bdm(context, instance, bdms=None): if bdms is None: if isinstance(instance, objects.Instance): uuid = instance.uuid else: uuid = instance['uuid'] bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, uuid) return bdms.root_bdm() def is_volume_backed_instance(context, instance, bdms=None): root_bdm = get_root_bdm(context, instance, bdms) if root_bdm is not None: return root_bdm.is_volume # in case we hit a very old instance without root bdm, we _assume_ that # instance is backed by a volume, if and only if image_ref is not set if isinstance(instance, objects.Instance): return not instance.image_ref return not instance['image_ref'] def heal_reqspec_is_bfv(ctxt, request_spec, instance): """Calculates the is_bfv flag for a RequestSpec created before Rocky. Starting in Rocky, new instances have their RequestSpec created with the "is_bfv" flag to indicate if they are volume-backed which is used by the scheduler when determining root disk resource allocations. RequestSpecs created before Rocky will not have the is_bfv flag set so we need to calculate it here and update the RequestSpec. :param ctxt: nova.context.RequestContext auth context :param request_spec: nova.objects.RequestSpec used for scheduling :param instance: nova.objects.Instance being scheduled """ if 'is_bfv' in request_spec: return # Determine if this is a volume-backed instance and set the field # in the request spec accordingly. request_spec.is_bfv = is_volume_backed_instance(ctxt, instance) request_spec.save() def convert_mb_to_ceil_gb(mb_value): gb_int = 0 if mb_value: gb_float = mb_value / 1024.0 # ensure we reserve/allocate enough space by rounding up to nearest GB gb_int = int(math.ceil(gb_float)) return gb_int def _get_unused_letter(used_letters): # Return the first unused device letter index = 0 while True: letter = block_device.generate_device_letter(index) if letter not in used_letters: return letter index += 1 def get_value_from_system_metadata(instance, key, type, default): """Get a value of a specified type from image metadata. 
@param instance: The instance object @param key: The name of the property to get @param type: The python type the value is be returned as @param default: The value to return if key is not set or not the right type """ value = instance.system_metadata.get(key, default) try: return type(value) except ValueError: LOG.warning("Metadata value %(value)s for %(key)s is not of " "type %(type)s. Using default value %(default)s.", {'value': value, 'key': key, 'type': type, 'default': default}, instance=instance) return default def notify_usage_exists(notifier, context, instance_ref, host, current_period=False, ignore_missing_network_data=True, system_metadata=None, extra_usage_info=None): """Generates 'exists' unversioned legacy and transformed notification for an instance for usage auditing purposes. :param notifier: a messaging.Notifier :param context: request context for the current operation :param instance_ref: nova.objects.Instance object from which to report usage :param host: the host emitting the notification :param current_period: if True, this will generate a usage for the current usage period; if False, this will generate a usage for the previous audit period. :param ignore_missing_network_data: if True, log any exceptions generated while getting network info; if False, raise the exception. :param system_metadata: system_metadata override for the instance. If None, the instance_ref.system_metadata will be used. :param extra_usage_info: Dictionary containing extra values to add or override in the notification if not None. """ audit_start, audit_end = notifications.audit_period_bounds(current_period) bw = notifications.bandwidth_usage(context, instance_ref, audit_start, ignore_missing_network_data) if system_metadata is None: system_metadata = utils.instance_sys_meta(instance_ref) # add image metadata to the notification: image_meta = notifications.image_meta(system_metadata) extra_info = dict(audit_period_beginning=str(audit_start), audit_period_ending=str(audit_end), bandwidth=bw, image_meta=image_meta) if extra_usage_info: extra_info.update(extra_usage_info) notify_about_instance_usage(notifier, context, instance_ref, 'exists', extra_usage_info=extra_info) audit_period = instance_notification.AuditPeriodPayload( audit_period_beginning=audit_start, audit_period_ending=audit_end) bandwidth = [instance_notification.BandwidthPayload( network_name=label, in_bytes=b['bw_in'], out_bytes=b['bw_out']) for label, b in bw.items()] payload = instance_notification.InstanceExistsPayload( context=context, instance=instance_ref, audit_period=audit_period, bandwidth=bandwidth) notification = instance_notification.InstanceExistsNotification( context=context, priority=fields.NotificationPriority.INFO, publisher=notification_base.NotificationPublisher( host=host, source=fields.NotificationSource.COMPUTE), event_type=notification_base.EventType( object='instance', action=fields.NotificationAction.EXISTS), payload=payload) notification.emit(context) def notify_about_instance_usage(notifier, context, instance, event_suffix, network_info=None, extra_usage_info=None, fault=None): """Send an unversioned legacy notification about an instance. All new notifications should use notify_about_instance_action which sends a versioned notification. :param notifier: a messaging.Notifier :param event_suffix: Event type like "delete.start" or "exists" :param network_info: Networking information, if provided. :param extra_usage_info: Dictionary containing extra values to add or override in the notification. 
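For example (illustrative): notify_about_instance_usage(notifier, context, instance, 'delete.start') emits the legacy 'compute.instance.delete.start' event at INFO level.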
""" if not extra_usage_info: extra_usage_info = {} usage_info = notifications.info_from_instance(context, instance, network_info, populate_image_ref_url=True, **extra_usage_info) if fault: # NOTE(johngarbutt) mirrors the format in wrap_exception fault_payload = exception_to_dict(fault) LOG.debug(fault_payload["message"], instance=instance) usage_info.update(fault_payload) if event_suffix.endswith("error"): method = notifier.error else: method = notifier.info method(context, 'compute.instance.%s' % event_suffix, usage_info) def _get_fault_and_priority_from_exc_and_tb(exception, tb): fault = None priority = fields.NotificationPriority.INFO if exception: priority = fields.NotificationPriority.ERROR fault = notification_exception.ExceptionPayload.from_exc_and_traceback( exception, tb) return fault, priority @rpc.if_notifications_enabled def notify_about_instance_action(context, instance, host, action, phase=None, source=fields.NotificationSource.COMPUTE, exception=None, bdms=None, tb=None): """Send versioned notification about the action made on the instance :param instance: the instance which the action performed on :param host: the host emitting the notification :param action: the name of the action :param phase: the phase of the action :param source: the source of the notification :param exception: the thrown exception (used in error notifications) :param bdms: BlockDeviceMappingList object for the instance. If it is not provided then we will load it from the db if so configured :param tb: the traceback (used in error notifications) """ fault, priority = _get_fault_and_priority_from_exc_and_tb(exception, tb) payload = instance_notification.InstanceActionPayload( context=context, instance=instance, fault=fault, bdms=bdms) notification = instance_notification.InstanceActionNotification( context=context, priority=priority, publisher=notification_base.NotificationPublisher( host=host, source=source), event_type=notification_base.EventType( object='instance', action=action, phase=phase), payload=payload) notification.emit(context) @rpc.if_notifications_enabled def notify_about_instance_create(context, instance, host, phase=None, exception=None, bdms=None, tb=None): """Send versioned notification about instance creation :param context: the request context :param instance: the instance being created :param host: the host emitting the notification :param phase: the phase of the creation :param exception: the thrown exception (used in error notifications) :param bdms: BlockDeviceMappingList object for the instance. 
If it is not provided then we will load it from the db if so configured :param tb: the traceback (used in error notifications) """ fault, priority = _get_fault_and_priority_from_exc_and_tb(exception, tb) payload = instance_notification.InstanceCreatePayload( context=context, instance=instance, fault=fault, bdms=bdms) notification = instance_notification.InstanceCreateNotification( context=context, priority=priority, publisher=notification_base.NotificationPublisher( host=host, source=fields.NotificationSource.COMPUTE), event_type=notification_base.EventType( object='instance', action=fields.NotificationAction.CREATE, phase=phase), payload=payload) notification.emit(context) @rpc.if_notifications_enabled def notify_about_scheduler_action(context, request_spec, action, phase=None, source=fields.NotificationSource.SCHEDULER): """Send versioned notification about the action made by the scheduler :param context: the RequestContext object :param request_spec: the RequestSpec object :param action: the name of the action :param phase: the phase of the action :param source: the source of the notification """ payload = reqspec_notification.RequestSpecPayload( request_spec=request_spec) notification = scheduler_notification.SelectDestinationsNotification( context=context, priority=fields.NotificationPriority.INFO, publisher=notification_base.NotificationPublisher( host=CONF.host, source=source), event_type=notification_base.EventType( object='scheduler', action=action, phase=phase), payload=payload) notification.emit(context) @rpc.if_notifications_enabled def notify_about_volume_attach_detach(context, instance, host, action, phase, volume_id=None, exception=None, tb=None): """Send versioned notification about the action made on the instance :param instance: the instance which the action performed on :param host: the host emitting the notification :param action: the name of the action :param phase: the phase of the action :param volume_id: id of the volume will be attached :param exception: the thrown exception (used in error notifications) :param tb: the traceback (used in error notifications) """ fault, priority = _get_fault_and_priority_from_exc_and_tb(exception, tb) payload = instance_notification.InstanceActionVolumePayload( context=context, instance=instance, fault=fault, volume_id=volume_id) notification = instance_notification.InstanceActionVolumeNotification( context=context, priority=priority, publisher=notification_base.NotificationPublisher( host=host, source=fields.NotificationSource.COMPUTE), event_type=notification_base.EventType( object='instance', action=action, phase=phase), payload=payload) notification.emit(context) @rpc.if_notifications_enabled def notify_about_instance_rescue_action(context, instance, host, rescue_image_ref, phase=None, exception=None, tb=None): """Send versioned notification about the action made on the instance :param instance: the instance which the action performed on :param host: the host emitting the notification :param rescue_image_ref: the rescue image ref :param phase: the phase of the action :param exception: the thrown exception (used in error notifications) :param tb: the traceback (used in error notifications) """ fault, priority = _get_fault_and_priority_from_exc_and_tb(exception, tb) payload = instance_notification.InstanceActionRescuePayload( context=context, instance=instance, fault=fault, rescue_image_ref=rescue_image_ref) notification = instance_notification.InstanceActionRescueNotification( context=context, priority=priority, 
publisher=notification_base.NotificationPublisher( host=host, source=fields.NotificationSource.COMPUTE), event_type=notification_base.EventType( object='instance', action=fields.NotificationAction.RESCUE, phase=phase), payload=payload) notification.emit(context) @rpc.if_notifications_enabled def notify_about_keypair_action(context, keypair, action, phase): """Send versioned notification about the keypair action on the instance :param context: the request context :param keypair: the keypair which the action performed on :param action: the name of the action :param phase: the phase of the action """ payload = keypair_notification.KeypairPayload(keypair=keypair) notification = keypair_notification.KeypairNotification( priority=fields.NotificationPriority.INFO, publisher=notification_base.NotificationPublisher( host=CONF.host, source=fields.NotificationSource.API), event_type=notification_base.EventType( object='keypair', action=action, phase=phase), payload=payload) notification.emit(context) @rpc.if_notifications_enabled def notify_about_volume_swap(context, instance, host, phase, old_volume_id, new_volume_id, exception=None, tb=None): """Send versioned notification about the volume swap action on the instance :param context: the request context :param instance: the instance which the action performed on :param host: the host emitting the notification :param phase: the phase of the action :param old_volume_id: the ID of the volume that is copied from and detached :param new_volume_id: the ID of the volume that is copied to and attached :param exception: an exception :param tb: the traceback (used in error notifications) """ fault, priority = _get_fault_and_priority_from_exc_and_tb(exception, tb) payload = instance_notification.InstanceActionVolumeSwapPayload( context=context, instance=instance, fault=fault, old_volume_id=old_volume_id, new_volume_id=new_volume_id) instance_notification.InstanceActionVolumeSwapNotification( context=context, priority=priority, publisher=notification_base.NotificationPublisher( host=host, source=fields.NotificationSource.COMPUTE), event_type=notification_base.EventType( object='instance', action=fields.NotificationAction.VOLUME_SWAP, phase=phase), payload=payload).emit(context) @rpc.if_notifications_enabled def notify_about_instance_snapshot(context, instance, host, phase, snapshot_image_id): """Send versioned notification about the snapshot action executed on the instance :param context: the request context :param instance: the instance from which a snapshot image is being created :param host: the host emitting the notification :param phase: the phase of the action :param snapshot_image_id: the ID of the snapshot """ payload = instance_notification.InstanceActionSnapshotPayload( context=context, instance=instance, fault=None, snapshot_image_id=snapshot_image_id) instance_notification.InstanceActionSnapshotNotification( context=context, priority=fields.NotificationPriority.INFO, publisher=notification_base.NotificationPublisher( host=host, source=fields.NotificationSource.COMPUTE), event_type=notification_base.EventType( object='instance', action=fields.NotificationAction.SNAPSHOT, phase=phase), payload=payload).emit(context) @rpc.if_notifications_enabled def notify_about_resize_prep_instance(context, instance, host, phase, new_flavor): """Send versioned notification about the instance resize action on the instance :param context: the request context :param instance: the instance which the resize action performed on :param host: the host emitting the 
notification :param phase: the phase of the action :param new_flavor: new flavor """ payload = instance_notification.InstanceActionResizePrepPayload( context=context, instance=instance, fault=None, new_flavor=flavor_notification.FlavorPayload(flavor=new_flavor)) instance_notification.InstanceActionResizePrepNotification( context=context, priority=fields.NotificationPriority.INFO, publisher=notification_base.NotificationPublisher( host=host, source=fields.NotificationSource.COMPUTE), event_type=notification_base.EventType( object='instance', action=fields.NotificationAction.RESIZE_PREP, phase=phase), payload=payload).emit(context) def notify_about_server_group_update(context, event_suffix, sg_payload): """Send a notification about server group update. :param event_suffix: Event type like "create.start" or "create.end" :param sg_payload: payload for server group update """ notifier = rpc.get_notifier(service='servergroup') notifier.info(context, 'servergroup.%s' % event_suffix, sg_payload) def notify_about_aggregate_update(context, event_suffix, aggregate_payload): """Send a notification about aggregate update. :param event_suffix: Event type like "create.start" or "create.end" :param aggregate_payload: payload for aggregate update """ aggregate_identifier = aggregate_payload.get('aggregate_id', None) if not aggregate_identifier: aggregate_identifier = aggregate_payload.get('name', None) if not aggregate_identifier: LOG.debug("No aggregate id or name specified for this " "notification and it will be ignored") return notifier = rpc.get_notifier(service='aggregate', host=aggregate_identifier) notifier.info(context, 'aggregate.%s' % event_suffix, aggregate_payload) @rpc.if_notifications_enabled def notify_about_aggregate_action(context, aggregate, action, phase): payload = aggregate_notification.AggregatePayload(aggregate) notification = aggregate_notification.AggregateNotification( priority=fields.NotificationPriority.INFO, publisher=notification_base.NotificationPublisher( host=CONF.host, source=fields.NotificationSource.API), event_type=notification_base.EventType( object='aggregate', action=action, phase=phase), payload=payload) notification.emit(context) @rpc.if_notifications_enabled def notify_about_aggregate_cache(context, aggregate, host, image_status, index, total): """Send a notification about aggregate cache_images progress. 
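For example (illustrative, with placeholder IDs): image_status={'image-1': 'cached', 'image-2': 'error'} with index=2, total=4 reports 'image-1' as cached and 'image-2' as failed for this host.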
:param context: The RequestContext :param aggregate: The target aggregate :param host: The host within the aggregate for which to report status :param image_status: The result from the compute host, which is a dict of {image_id: status} :param index: An integer indicating progress toward completion, between 1 and $total :param total: The total number of hosts being processed in this operation, to bound $index """ success_statuses = ('cached', 'existing') payload = aggregate_notification.AggregateCachePayload(aggregate, host, index, total) payload.images_cached = [] payload.images_failed = [] for img, status in image_status.items(): if status in success_statuses: payload.images_cached.append(img) else: payload.images_failed.append(img) notification = aggregate_notification.AggregateCacheNotification( priority=fields.NotificationPriority.INFO, publisher=notification_base.NotificationPublisher( host=CONF.host, source=fields.NotificationSource.CONDUCTOR), event_type=notification_base.EventType( object='aggregate', action=fields.NotificationAction.IMAGE_CACHE, phase=fields.NotificationPhase.PROGRESS), payload=payload) notification.emit(context) def notify_about_host_update(context, event_suffix, host_payload): """Send a notification about host update. :param event_suffix: Event type like "create.start" or "create.end" :param host_payload: payload for host update. It is a dict and there should be at least the 'host_name' key in this dict. """ host_identifier = host_payload.get('host_name') if not host_identifier: LOG.warning("No host name specified for the notification of " "HostAPI.%s and it will be ignored", event_suffix) return notifier = rpc.get_notifier(service='api', host=host_identifier) notifier.info(context, 'HostAPI.%s' % event_suffix, host_payload) @rpc.if_notifications_enabled def notify_about_server_group_action(context, group, action): payload = sg_notification.ServerGroupPayload(group) notification = sg_notification.ServerGroupNotification( priority=fields.NotificationPriority.INFO, publisher=notification_base.NotificationPublisher( host=CONF.host, source=fields.NotificationSource.API), event_type=notification_base.EventType( object='server_group', action=action), payload=payload) notification.emit(context) @rpc.if_notifications_enabled def notify_about_server_group_add_member(context, group_id): group = objects.InstanceGroup.get_by_uuid(context, group_id) payload = sg_notification.ServerGroupPayload(group) notification = sg_notification.ServerGroupNotification( priority=fields.NotificationPriority.INFO, publisher=notification_base.NotificationPublisher( host=CONF.host, source=fields.NotificationSource.API), event_type=notification_base.EventType( object='server_group', action=fields.NotificationAction.ADD_MEMBER), payload=payload) notification.emit(context) @rpc.if_notifications_enabled def notify_about_instance_rebuild(context, instance, host, action=fields.NotificationAction.REBUILD, phase=None, source=fields.NotificationSource.COMPUTE, exception=None, bdms=None, tb=None): """Send versioned notification about instance rebuild :param instance: the instance which the action performed on :param host: the host emitting the notification :param action: the name of the action :param phase: the phase of the action :param source: the source of the notification :param exception: the thrown exception (used in error notifications) :param bdms: BlockDeviceMappingList object for the instance. 
If it is not provided then we will load it from the db if so configured :param tb: the traceback (used in error notifications) """ fault, priority = _get_fault_and_priority_from_exc_and_tb(exception, tb) payload = instance_notification.InstanceActionRebuildPayload( context=context, instance=instance, fault=fault, bdms=bdms) notification = instance_notification.InstanceActionRebuildNotification( context=context, priority=priority, publisher=notification_base.NotificationPublisher( host=host, source=source), event_type=notification_base.EventType( object='instance', action=action, phase=phase), payload=payload) notification.emit(context) @rpc.if_notifications_enabled def notify_about_metrics_update(context, host, host_ip, nodename, monitor_metric_list): """Send versioned notification about updating metrics :param context: the request context :param host: the host emitting the notification :param host_ip: the IP address of the host :param nodename: the node name :param monitor_metric_list: the MonitorMetricList object """ payload = metrics_notification.MetricsPayload( host=host, host_ip=host_ip, nodename=nodename, monitor_metric_list=monitor_metric_list) notification = metrics_notification.MetricsNotification( context=context, priority=fields.NotificationPriority.INFO, publisher=notification_base.NotificationPublisher( host=host, source=fields.NotificationSource.COMPUTE), event_type=notification_base.EventType( object='metrics', action=fields.NotificationAction.UPDATE), payload=payload) notification.emit(context) @rpc.if_notifications_enabled def notify_about_libvirt_connect_error(context, ip, exception, tb): """Send a versioned notification about libvirt connect error. :param context: the request context :param ip: the IP address of the host :param exception: the thrown exception :param tb: the traceback """ fault, _ = _get_fault_and_priority_from_exc_and_tb(exception, tb) payload = libvirt_notification.LibvirtErrorPayload(ip=ip, reason=fault) notification = libvirt_notification.LibvirtErrorNotification( priority=fields.NotificationPriority.ERROR, publisher=notification_base.NotificationPublisher( host=CONF.host, source=fields.NotificationSource.COMPUTE), event_type=notification_base.EventType( object='libvirt', action=fields.NotificationAction.CONNECT, phase=fields.NotificationPhase.ERROR), payload=payload) notification.emit(context) @rpc.if_notifications_enabled def notify_about_volume_usage(context, vol_usage, host): """Send versioned notification about the volume usage :param context: the request context :param vol_usage: the volume usage object :param host: the host emitting the notification """ payload = volume_notification.VolumeUsagePayload( vol_usage=vol_usage) notification = volume_notification.VolumeUsageNotification( context=context, priority=fields.NotificationPriority.INFO, publisher=notification_base.NotificationPublisher( host=host, source=fields.NotificationSource.COMPUTE), event_type=notification_base.EventType( object='volume', action=fields.NotificationAction.USAGE), payload=payload) notification.emit(context) @rpc.if_notifications_enabled def notify_about_compute_task_error(context, action, instance_uuid, request_spec, state, exception, tb): """Send a versioned notification about compute task error. 
:param context: the request context :param action: the name of the action :param instance_uuid: the UUID of the instance :param request_spec: the request spec object or the dict includes request spec information :param state: the vm state of the instance :param exception: the thrown exception :param tb: the traceback """ if (request_spec is not None and not isinstance(request_spec, objects.RequestSpec)): request_spec = objects.RequestSpec.from_primitives( context, request_spec, {}) fault, _ = _get_fault_and_priority_from_exc_and_tb(exception, tb) payload = task_notification.ComputeTaskPayload( instance_uuid=instance_uuid, request_spec=request_spec, state=state, reason=fault) notification = task_notification.ComputeTaskNotification( priority=fields.NotificationPriority.ERROR, publisher=notification_base.NotificationPublisher( host=CONF.host, source=fields.NotificationSource.CONDUCTOR), event_type=notification_base.EventType( object='compute_task', action=action, phase=fields.NotificationPhase.ERROR), payload=payload) notification.emit(context) def refresh_info_cache_for_instance(context, instance): """Refresh the info cache for an instance. :param instance: The instance object. """ if instance.info_cache is not None and not instance.deleted: # Catch the exception in case the instance got deleted after the check # instance.deleted was executed try: instance.info_cache.refresh() except exception.InstanceInfoCacheNotFound: LOG.debug("Can not refresh info_cache because instance " "was not found", instance=instance) def get_reboot_type(task_state, current_power_state): """Checks if the current instance state requires a HARD reboot.""" if current_power_state != power_state.RUNNING: return 'HARD' if task_state in task_states.soft_reboot_states: return 'SOFT' return 'HARD' def get_machine_ips(): """Get the machine's ip addresses :returns: list of Strings of ip addresses """ addresses = [] for interface in netifaces.interfaces(): try: iface_data = netifaces.ifaddresses(interface) for family in iface_data: if family not in (netifaces.AF_INET, netifaces.AF_INET6): continue for address in iface_data[family]: addr = address['addr'] # If we have an ipv6 address remove the # %ether_interface at the end if family == netifaces.AF_INET6: addr = addr.split('%')[0] addresses.append(addr) except ValueError: pass return addresses def upsize_quota_delta(new_flavor, old_flavor): """Calculate deltas required to adjust quota for an instance upsize. 
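For example (illustrative): resizing from a 2 vCPU / 2048 MB flavor to a 4 vCPU / 4096 MB flavor returns {'cores': 2, 'ram': 2048}; resources that shrink or stay the same are omitted.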
:param new_flavor: the target instance type :param old_flavor: the original instance type """ def _quota_delta(resource): return (new_flavor[resource] - old_flavor[resource]) deltas = {} if _quota_delta('vcpus') > 0: deltas['cores'] = _quota_delta('vcpus') if _quota_delta('memory_mb') > 0: deltas['ram'] = _quota_delta('memory_mb') return deltas def get_headroom(quotas, usages, deltas): headroom = {res: quotas[res] - usages[res] for res in quotas.keys()} # If quota_cores is unlimited [-1]: # - set cores headroom based on instances headroom: if quotas.get('cores') == -1: if deltas.get('cores'): hc = headroom.get('instances', 1) * deltas['cores'] headroom['cores'] = hc / deltas.get('instances', 1) else: headroom['cores'] = headroom.get('instances', 1) # If quota_ram is unlimited [-1]: # - set ram headroom based on instances headroom: if quotas.get('ram') == -1: if deltas.get('ram'): hr = headroom.get('instances', 1) * deltas['ram'] headroom['ram'] = hr / deltas.get('instances', 1) else: headroom['ram'] = headroom.get('instances', 1) return headroom def check_num_instances_quota(context, instance_type, min_count, max_count, project_id=None, user_id=None, orig_num_req=None): """Enforce quota limits on number of instances created.""" # project_id is also used for the TooManyInstances error message if project_id is None: project_id = context.project_id if user_id is None: user_id = context.user_id # Check whether we need to count resources per-user and check a per-user # quota limit. If we have no per-user quota limit defined for a # project/user, we can avoid wasteful resource counting. user_quotas = objects.Quotas.get_all_by_project_and_user( context, project_id, user_id) if not any(r in user_quotas for r in ['instances', 'cores', 'ram']): user_id = None # Determine requested cores and ram req_cores = max_count * instance_type.vcpus req_ram = max_count * instance_type.memory_mb deltas = {'instances': max_count, 'cores': req_cores, 'ram': req_ram} try: objects.Quotas.check_deltas(context, deltas, project_id, user_id=user_id, check_project_id=project_id, check_user_id=user_id) except exception.OverQuota as exc: quotas = exc.kwargs['quotas'] overs = exc.kwargs['overs'] usages = exc.kwargs['usages'] # This is for the recheck quota case where we used a delta of zero. if min_count == max_count == 0: # orig_num_req is the original number of instances requested in the # case of a recheck quota, for use in the over quota exception. req_cores = orig_num_req * instance_type.vcpus req_ram = orig_num_req * instance_type.memory_mb requested = {'instances': orig_num_req, 'cores': req_cores, 'ram': req_ram} (overs, reqs, total_alloweds, useds) = get_over_quota_detail( deltas, overs, quotas, requested) msg = "Cannot run any more instances of this type." params = {'overs': overs, 'pid': project_id, 'msg': msg} LOG.debug("%(overs)s quota exceeded for %(pid)s. %(msg)s", params) raise exception.TooManyInstances(overs=overs, req=reqs, used=useds, allowed=total_alloweds) # OK, we exceeded quota; let's figure out why... headroom = get_headroom(quotas, usages, deltas) allowed = headroom.get('instances', 1) # Reduce 'allowed' instances in line with the cores & ram headroom if instance_type.vcpus: allowed = min(allowed, headroom['cores'] // instance_type.vcpus) if instance_type.memory_mb: allowed = min(allowed, headroom['ram'] // instance_type.memory_mb) # Convert to the appropriate exception message if allowed <= 0: msg = "Cannot run any more instances of this type." 
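# For example (illustrative): a request for 1-10 instances of a 4-vCPU flavor
# with only 16 vCPUs of headroom yields allowed=4, so the branch below retries
# the quota check with max_count lowered to 4 instead of failing outright.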
elif min_count <= allowed <= max_count: # We're actually OK, but still need to check against allowed return check_num_instances_quota(context, instance_type, min_count, allowed, project_id=project_id, user_id=user_id) else: msg = "Can only run %s more instances of this type." % allowed num_instances = (str(min_count) if min_count == max_count else "%s-%s" % (min_count, max_count)) requested = dict(instances=num_instances, cores=req_cores, ram=req_ram) (overs, reqs, total_alloweds, useds) = get_over_quota_detail( headroom, overs, quotas, requested) params = {'overs': overs, 'pid': project_id, 'min_count': min_count, 'max_count': max_count, 'msg': msg} if min_count == max_count: LOG.debug("%(overs)s quota exceeded for %(pid)s," " tried to run %(min_count)d instances. " "%(msg)s", params) else: LOG.debug("%(overs)s quota exceeded for %(pid)s," " tried to run between %(min_count)d and" " %(max_count)d instances. %(msg)s", params) raise exception.TooManyInstances(overs=overs, req=reqs, used=useds, allowed=total_alloweds) return max_count def get_over_quota_detail(headroom, overs, quotas, requested): reqs = [] useds = [] total_alloweds = [] for resource in overs: reqs.append(str(requested[resource])) useds.append(str(quotas[resource] - headroom[resource])) total_alloweds.append(str(quotas[resource])) (overs, reqs, useds, total_alloweds) = map(', '.join, ( overs, reqs, useds, total_alloweds)) return overs, reqs, total_alloweds, useds def remove_shelved_keys_from_system_metadata(instance): # Delete system_metadata for a shelved instance for key in ['shelved_at', 'shelved_image_id', 'shelved_host']: if key in instance.system_metadata: del (instance.system_metadata[key]) def create_image(context, instance, name, image_type, image_api, extra_properties=None): """Create new image entry in the image service. This new image will be reserved for the compute manager to upload a snapshot or backup. :param context: security context :param instance: nova.objects.instance.Instance object :param name: string for name of the snapshot :param image_type: snapshot | backup :param image_api: instance of nova.image.glance.API :param extra_properties: dict of extra image properties to include """ properties = { 'instance_uuid': instance.uuid, 'user_id': str(context.user_id), 'image_type': image_type, } properties.update(extra_properties or {}) image_meta = initialize_instance_snapshot_metadata( context, instance, name, properties) # if we're making a snapshot, omit the disk and container formats, # since the image may have been converted to another format, and the # original values won't be accurate. The driver will populate these # with the correct values later, on image upload. if image_type == 'snapshot': image_meta.pop('disk_format', None) image_meta.pop('container_format', None) return image_api.create(context, image_meta) def initialize_instance_snapshot_metadata(context, instance, name, extra_properties=None): """Initialize new metadata for a snapshot of the given instance. :param context: authenticated RequestContext; note that this may not be the owner of the instance itself, e.g. 
an admin creates a snapshot image of some user instance :param instance: nova.objects.instance.Instance object :param name: string for name of the snapshot :param extra_properties: dict of extra metadata properties to include :returns: the new instance snapshot metadata """ image_meta = utils.get_image_from_system_metadata( instance.system_metadata) image_meta['name'] = name # If the user creating the snapshot is not in the same project as # the owner of the instance, then the image visibility should be # "shared" so the owner of the instance has access to the image, like # in the case of an admin creating a snapshot of another user's # server, either directly via the createImage API or via shelve. extra_properties = extra_properties or {} if context.project_id != instance.project_id: # The glance API client-side code will use this to add the # instance project as a member of the image for access. image_meta['visibility'] = 'shared' extra_properties['instance_owner'] = instance.project_id # TODO(mriedem): Should owner_project_name and owner_user_name # be removed from image_meta['properties'] here, or added to # [DEFAULT]/non_inheritable_image_properties? It is confusing # otherwise to see the owner project not match those values. else: # The request comes from the owner of the instance so make the # image private. image_meta['visibility'] = 'private' # Delete properties that are non-inheritable properties = image_meta['properties'] keys_to_pop = set(CONF.non_inheritable_image_properties).union( NON_INHERITABLE_IMAGE_PROPERTIES) for key in keys_to_pop: properties.pop(key, None) # The properties in extra_properties have precedence properties.update(extra_properties) return image_meta def delete_image(context, instance, image_api, image_id, log_exc_info=False): """Deletes the image if it still exists. Ignores ImageNotFound if the image is already gone. :param context: the nova auth request context where the context.project_id matches the owner of the image :param instance: the instance for which the snapshot image was created :param image_api: the image API used to delete the image :param image_id: the ID of the image to delete :param log_exc_info: True if this is being called from an exception handler block and traceback should be logged at DEBUG level, False otherwise. """ LOG.debug("Cleaning up image %s", image_id, instance=instance, log_exc_info=log_exc_info) try: image_api.delete(context, image_id) except exception.ImageNotFound: # Since we're trying to cleanup an image, we don't care # if it's already gone. pass except Exception: LOG.exception("Error while trying to clean up image %s", image_id, instance=instance) def may_have_ports_or_volumes(instance): """Checks to see if an instance may have ports or volumes based on vm_state This is primarily only useful when instance.host is None. :param instance: The nova.objects.Instance in question. :returns: True if the instance may have ports or volumes, False otherwise """ # NOTE(melwitt): When an instance build fails in the compute manager, # the instance host and node are set to None and the vm_state is set # to ERROR. In this case, the instance with host = None has actually # been scheduled and may have ports and/or volumes allocated on the # compute node.
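# For example (illustrative): a server whose build failed late in spawn is left
# with host=None and vm_state=ERROR but may still own a neutron port and a
# cinder attachment, so the check below returns True for ERROR (and for
# SHELVED_OFFLOADED, where resources are intentionally preserved).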
if instance.vm_state in (vm_states.SHELVED_OFFLOADED, vm_states.ERROR): return True return False def get_stashed_volume_connector(bdm, instance): """Lookup a connector dict from the bdm.connection_info if set Gets the stashed connector dict out of the bdm.connection_info if set and the connector host matches the instance host. :param bdm: nova.objects.block_device.BlockDeviceMapping :param instance: nova.objects.instance.Instance :returns: volume connector dict or None """ if 'connection_info' in bdm and bdm.connection_info is not None: # NOTE(mriedem): We didn't start stashing the connector in the # bdm.connection_info until Mitaka so it might not be there on old # attachments. Also, if the volume was attached when the instance # was in shelved_offloaded state and it hasn't been unshelved yet # we don't have the attachment/connection information either. connector = jsonutils.loads(bdm.connection_info).get('connector') if connector: if connector.get('host') == instance.host: return connector LOG.debug('Found stashed volume connector for instance but ' 'connector host %(connector_host)s does not match ' 'the instance host %(instance_host)s.', {'connector_host': connector.get('host'), 'instance_host': instance.host}, instance=instance) if (instance.host is None and may_have_ports_or_volumes(instance)): LOG.debug('Allowing use of stashed volume connector with ' 'instance host None because instance with ' 'vm_state %(vm_state)s has been scheduled in ' 'the past.', {'vm_state': instance.vm_state}, instance=instance) return connector class EventReporter(object): """Context manager to report instance action events. If constructed with ``graceful_exit=True`` the __exit__ function will handle and not re-raise on InstanceActionNotFound. """ def __init__(self, context, event_name, host, *instance_uuids, graceful_exit=False): self.context = context self.event_name = event_name self.instance_uuids = instance_uuids self.host = host self.graceful_exit = graceful_exit def __enter__(self): for uuid in self.instance_uuids: objects.InstanceActionEvent.event_start( self.context, uuid, self.event_name, want_result=False, host=self.host) return self def __exit__(self, exc_type, exc_val, exc_tb): for uuid in self.instance_uuids: try: objects.InstanceActionEvent.event_finish_with_failure( self.context, uuid, self.event_name, exc_val=exc_val, exc_tb=exc_tb, want_result=False) except exception.InstanceActionNotFound: # If the instance action was not found then determine if we # should re-raise based on the graceful_exit attribute. with excutils.save_and_reraise_exception( reraise=not self.graceful_exit): if self.graceful_exit: return True return False def wrap_instance_event(prefix, graceful_exit=False): """Wraps a method to log the event taken on the instance, and result. This decorator wraps a method to log the start and result of an event, as part of an action taken on an instance. :param prefix: prefix for the event name, usually a service binary like "compute" or "conductor" to indicate the origin of the event. :param graceful_exit: True if the decorator should gracefully handle InstanceActionNotFound errors, False otherwise. This should rarely be True. 
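Example (illustrative, assuming a manager method that takes an ``instance`` argument and an object exposing ``self.host``)::

    @wrap_instance_event(prefix='compute')
    def pause_instance(self, context, instance):
        ...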
""" @utils.expects_func_args('instance') def helper(function): @functools.wraps(function) def decorated_function(self, context, *args, **kwargs): wrapped_func = safe_utils.get_wrapped_function(function) keyed_args = inspect.getcallargs(wrapped_func, self, context, *args, **kwargs) instance_uuid = keyed_args['instance']['uuid'] event_name = '{0}_{1}'.format(prefix, function.__name__) host = self.host if hasattr(self, 'host') else None with EventReporter(context, event_name, host, instance_uuid, graceful_exit=graceful_exit): return function(self, context, *args, **kwargs) return decorated_function return helper class UnlimitedSemaphore(object): def __enter__(self): pass def __exit__(self, exc_type, exc_val, exc_tb): pass @property def balance(self): return 0 # This semaphore is used to enforce a limit on disk-IO-intensive operations # (image downloads, image conversions) at any given time. # It is initialized at ComputeManager.init_host() disk_ops_semaphore = UnlimitedSemaphore() @contextlib.contextmanager def notify_about_instance_delete(notifier, context, instance, delete_type='delete', source=fields.NotificationSource.API): try: notify_about_instance_usage(notifier, context, instance, "%s.start" % delete_type) # Note(gibi): force_delete types will be handled in a # subsequent patch if delete_type in ['delete', 'soft_delete']: notify_about_instance_action( context, instance, host=CONF.host, source=source, action=delete_type, phase=fields.NotificationPhase.START) yield finally: notify_about_instance_usage(notifier, context, instance, "%s.end" % delete_type) if delete_type in ['delete', 'soft_delete']: notify_about_instance_action( context, instance, host=CONF.host, source=source, action=delete_type, phase=fields.NotificationPhase.END) def update_pci_request_spec_with_allocated_interface_name( context, report_client, instance, provider_mapping): """Update the instance's PCI request based on the request group - resource provider mapping and the device RP name from placement. :param context: the request context :param report_client: a SchedulerReportClient instance :param instance: an Instance object to be updated :param provider_mapping: the request group - resource provider mapping in the form returned by the RequestSpec.get_request_group_mapping() call. :raises AmbigousResourceProviderForPCIRequest: if more than one resource provider provides resource for the given PCI request. :raises UnexpectResourceProviderNameForPCIRequest: if the resource provider, which provides resource for the pci request, does not have a well formatted name so we cannot parse the parent interface name out of it. 
""" if not instance.pci_requests: return def needs_update(pci_request, mapping): return (pci_request.requester_id and pci_request.requester_id in mapping) for pci_request in instance.pci_requests.requests: if needs_update(pci_request, provider_mapping): provider_uuids = provider_mapping[pci_request.requester_id] if len(provider_uuids) != 1: raise exception.AmbiguousResourceProviderForPCIRequest( providers=provider_uuids, requester=pci_request.requester_id) dev_rp_name = report_client.get_resource_provider_name( context, provider_uuids[0]) # NOTE(gibi): the device RP name reported by neutron is # structured like :: rp_name_pieces = dev_rp_name.split(':') if len(rp_name_pieces) != 3: ex = exception.UnexpectedResourceProviderNameForPCIRequest raise ex( provider=provider_uuids[0], requester=pci_request.requester_id, provider_name=dev_rp_name) for spec in pci_request.spec: spec['parent_ifname'] = rp_name_pieces[2] def delete_arqs_if_needed(context, instance): """Delete Cyborg ARQs for the instance.""" dp_name = instance.flavor.extra_specs.get('accel:device_profile') if dp_name is None: return cyclient = cyborg.get_client(context) LOG.debug('Calling Cyborg to delete ARQs for instance %(instance)s', {'instance': instance.uuid}) try: cyclient.delete_arqs_for_instance(instance.uuid) except exception.AcceleratorRequestOpFailed as e: LOG.exception('Failed to delete accelerator requests for ' 'instance %s. Exception: %s', instance.uuid, e) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/compute/vm_states.py0000664000175000017500000000476700000000000017602 0ustar00zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Possible vm states for instances. Compute instance vm states represent the state of an instance as it pertains to a user or administrator. vm_state describes a VM's current stable (not transition) state. That is, if there is no ongoing compute API calls (running tasks), vm_state should reflect what the customer expect the VM to be. When combined with task states (task_states.py), a better picture can be formed regarding the instance's health and progress. See http://wiki.openstack.org/VMState """ from nova.objects import fields # VM is running ACTIVE = fields.InstanceState.ACTIVE # VM only exists in DB BUILDING = fields.InstanceState.BUILDING PAUSED = fields.InstanceState.PAUSED # VM is suspended to disk. SUSPENDED = fields.InstanceState.SUSPENDED # VM is powered off, the disk image is still there. STOPPED = fields.InstanceState.STOPPED # A rescue image is running with the original VM image attached RESCUED = fields.InstanceState.RESCUED # a VM with the new size is active. The user is expected to manually confirm # or revert. RESIZED = fields.InstanceState.RESIZED # VM is marked as deleted but the disk images are still available to restore. SOFT_DELETED = fields.InstanceState.SOFT_DELETED # VM is permanently deleted. 
DELETED = fields.InstanceState.DELETED ERROR = fields.InstanceState.ERROR # VM is powered off, resources still on hypervisor SHELVED = fields.InstanceState.SHELVED # VM and associated resources are not on hypervisor SHELVED_OFFLOADED = fields.InstanceState.SHELVED_OFFLOADED # states we can soft reboot from ALLOW_SOFT_REBOOT = [ACTIVE] # states we allow hard reboot from ALLOW_HARD_REBOOT = ALLOW_SOFT_REBOOT + [STOPPED, PAUSED, SUSPENDED, ERROR] # states we allow to trigger crash dump ALLOW_TRIGGER_CRASH_DUMP = [ACTIVE, PAUSED, RESCUED, RESIZED, ERROR] # states we allow resources to be freed in ALLOW_RESOURCE_REMOVAL = [DELETED, SHELVED_OFFLOADED] ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3024702 nova-21.2.4/nova/conductor/0000775000175000017500000000000000000000000015531 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conductor/__init__.py0000664000175000017500000000132600000000000017644 0ustar00zuulzuul00000000000000# Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.conductor import api as conductor_api API = conductor_api.API ComputeTaskAPI = conductor_api.ComputeTaskAPI ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conductor/api.py0000664000175000017500000002061400000000000016657 0ustar00zuulzuul00000000000000# Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Handles all requests to the conductor service.""" from oslo_log import log as logging import oslo_messaging as messaging from nova import baserpc from nova.conductor import rpcapi import nova.conf from nova.image import glance CONF = nova.conf.CONF LOG = logging.getLogger(__name__) class API(object): """Conductor API that does updates via RPC to the ConductorManager.""" def __init__(self): self.conductor_rpcapi = rpcapi.ConductorAPI() self.base_rpcapi = baserpc.BaseAPI(topic=rpcapi.RPC_TOPIC) def object_backport_versions(self, context, objinst, object_versions): return self.conductor_rpcapi.object_backport_versions(context, objinst, object_versions) def wait_until_ready(self, context, early_timeout=10, early_attempts=10): '''Wait until a conductor service is up and running. This method calls the remote ping() method on the conductor topic until it gets a response. 
It starts with a shorter timeout in the loop (early_timeout) up to early_attempts number of tries. It then drops back to the globally configured timeout for rpc calls for each retry. ''' attempt = 0 timeout = early_timeout # if we show the timeout message, make sure we show a similar # message saying that everything is now working to avoid # confusion has_timedout = False while True: # NOTE(danms): Try ten times with a short timeout, and then punt # to the configured RPC timeout after that if attempt == early_attempts: timeout = None attempt += 1 # NOTE(russellb): This is running during service startup. If we # allow an exception to be raised, the service will shut down. # This may fail the first time around if nova-conductor wasn't # running when this service started. try: self.base_rpcapi.ping(context, '1.21 GigaWatts', timeout=timeout) if has_timedout: LOG.info('nova-conductor connection ' 'established successfully') break except messaging.MessagingTimeout: has_timedout = True LOG.warning('Timed out waiting for nova-conductor. ' 'Is it running? Or did this service start ' 'before nova-conductor? ' 'Reattempting establishment of ' 'nova-conductor connection...') class ComputeTaskAPI(object): """ComputeTask API that queues up compute tasks for nova-conductor.""" def __init__(self): self.conductor_compute_rpcapi = rpcapi.ComputeTaskAPI() self.image_api = glance.API() # TODO(stephenfin): Remove the 'reservations' parameter since we don't use # reservations anymore def resize_instance(self, context, instance, scheduler_hint, flavor, reservations=None, clean_shutdown=True, request_spec=None, host_list=None, do_cast=False): self.conductor_compute_rpcapi.migrate_server( context, instance, scheduler_hint, live=False, rebuild=False, flavor=flavor, block_migration=None, disk_over_commit=None, reservations=reservations, clean_shutdown=clean_shutdown, request_spec=request_spec, host_list=host_list, do_cast=do_cast) def live_migrate_instance(self, context, instance, host_name, block_migration, disk_over_commit, request_spec=None, async_=False): scheduler_hint = {'host': host_name} if async_: self.conductor_compute_rpcapi.live_migrate_instance( context, instance, scheduler_hint, block_migration, disk_over_commit, request_spec) else: self.conductor_compute_rpcapi.migrate_server( context, instance, scheduler_hint, True, False, None, block_migration, disk_over_commit, None, request_spec=request_spec) def build_instances(self, context, instances, image, filter_properties, admin_password, injected_files, requested_networks, security_groups, block_device_mapping, legacy_bdm=True, request_spec=None, host_lists=None): self.conductor_compute_rpcapi.build_instances(context, instances=instances, image=image, filter_properties=filter_properties, admin_password=admin_password, injected_files=injected_files, requested_networks=requested_networks, security_groups=security_groups, block_device_mapping=block_device_mapping, legacy_bdm=legacy_bdm, request_spec=request_spec, host_lists=host_lists) def schedule_and_build_instances(self, context, build_requests, request_spec, image, admin_password, injected_files, requested_networks, block_device_mapping, tags=None): self.conductor_compute_rpcapi.schedule_and_build_instances( context, build_requests, request_spec, image, admin_password, injected_files, requested_networks, block_device_mapping, tags) def unshelve_instance(self, context, instance, request_spec=None): self.conductor_compute_rpcapi.unshelve_instance(context, instance=instance, request_spec=request_spec) def 
rebuild_instance(self, context, instance, orig_image_ref, image_ref, injected_files, new_pass, orig_sys_metadata, bdms, recreate=False, on_shared_storage=False, preserve_ephemeral=False, host=None, request_spec=None): self.conductor_compute_rpcapi.rebuild_instance(context, instance=instance, new_pass=new_pass, injected_files=injected_files, image_ref=image_ref, orig_image_ref=orig_image_ref, orig_sys_metadata=orig_sys_metadata, bdms=bdms, recreate=recreate, on_shared_storage=on_shared_storage, preserve_ephemeral=preserve_ephemeral, host=host, request_spec=request_spec) def cache_images(self, context, aggregate, image_ids): """Request images be pre-cached on hosts within an aggregate. :param context: The RequestContext :param aggregate: The objects.Aggregate representing the hosts to contact :param image_ids: A list of image ID strings to send to the hosts """ for image_id in image_ids: # Validate that we can get the image by id before we go # ask a bunch of hosts to do the same. We let this bubble # up to the API, which catches NovaException for the 4xx and # otherwise 500s if this fails in some unexpected way. self.image_api.get(context, image_id) self.conductor_compute_rpcapi.cache_images(context, aggregate, image_ids) def confirm_snapshot_based_resize( self, ctxt, instance, migration, do_cast=True): self.conductor_compute_rpcapi.confirm_snapshot_based_resize( ctxt, instance, migration, do_cast=do_cast) def revert_snapshot_based_resize( self, ctxt, instance, migration): self.conductor_compute_rpcapi.revert_snapshot_based_resize( ctxt, instance, migration) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/conductor/manager.py0000664000175000017500000030146500000000000017526 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Handles database requests from other nova services.""" import collections import contextlib import copy import eventlet import functools import sys from oslo_config import cfg from oslo_db import exception as db_exc from oslo_log import log as logging import oslo_messaging as messaging from oslo_serialization import jsonutils from oslo_utils import excutils from oslo_utils import timeutils from oslo_utils import versionutils import six from nova.accelerator import cyborg from nova import availability_zones from nova.compute import instance_actions from nova.compute import rpcapi as compute_rpcapi from nova.compute import task_states from nova.compute import utils as compute_utils from nova.compute.utils import wrap_instance_event from nova.compute import vm_states from nova.conductor.tasks import cross_cell_migrate from nova.conductor.tasks import live_migrate from nova.conductor.tasks import migrate from nova import context as nova_context from nova.db import base from nova import exception from nova.i18n import _ from nova.image import glance from nova import manager from nova.network import neutron from nova import notifications from nova import objects from nova.objects import base as nova_object from nova.objects import fields from nova import profiler from nova import rpc from nova.scheduler.client import query from nova.scheduler.client import report from nova.scheduler import utils as scheduler_utils from nova import servicegroup from nova import utils from nova.volume import cinder LOG = logging.getLogger(__name__) CONF = cfg.CONF def targets_cell(fn): """Wrap a method and automatically target the instance's cell. This decorates a method with signature func(self, context, instance, ...) and automatically targets the context with the instance's cell mapping. It does this by looking up the InstanceMapping. """ @functools.wraps(fn) def wrapper(self, context, *args, **kwargs): instance = kwargs.get('instance') or args[0] try: im = objects.InstanceMapping.get_by_instance_uuid( context, instance.uuid) except exception.InstanceMappingNotFound: LOG.error('InstanceMapping not found, unable to target cell', instance=instance) except db_exc.CantStartEngineError: # Check to see if we can ignore API DB connection failures # because we might already be in the cell conductor. with excutils.save_and_reraise_exception() as err_ctxt: if CONF.api_database.connection is None: err_ctxt.reraise = False else: LOG.debug('Targeting cell %(cell)s for conductor method %(meth)s', {'cell': im.cell_mapping.identity, 'meth': fn.__name__}) # NOTE(danms): Target our context to the cell for the rest of # this request, so that none of the subsequent code needs to # care about it. nova_context.set_target_cell(context, im.cell_mapping) return fn(self, context, *args, **kwargs) return wrapper class ConductorManager(manager.Manager): """Mission: Conduct things. The methods in the base API for nova-conductor are various proxy operations performed on behalf of the nova-compute service running on compute nodes. Compute nodes are not allowed to directly access the database, so this set of methods allows them to get specific work done without locally accessing the database. The nova-conductor service also exposes an API in the 'compute_task' namespace. See the ComputeTaskManager class for details. 
""" target = messaging.Target(version='3.0') def __init__(self, *args, **kwargs): super(ConductorManager, self).__init__(service_name='conductor', *args, **kwargs) self.compute_task_mgr = ComputeTaskManager() self.additional_endpoints.append(self.compute_task_mgr) # NOTE(hanlind): This can be removed in version 4.0 of the RPC API def provider_fw_rule_get_all(self, context): # NOTE(hanlind): Simulate an empty db result for compat reasons. return [] def _object_dispatch(self, target, method, args, kwargs): """Dispatch a call to an object method. This ensures that object methods get called and any exception that is raised gets wrapped in an ExpectedException for forwarding back to the caller (without spamming the conductor logs). """ try: # NOTE(danms): Keep the getattr inside the try block since # a missing method is really a client problem return getattr(target, method)(*args, **kwargs) except Exception: raise messaging.ExpectedException() def object_class_action_versions(self, context, objname, objmethod, object_versions, args, kwargs): objclass = nova_object.NovaObject.obj_class_from_name( objname, object_versions[objname]) args = tuple([context] + list(args)) result = self._object_dispatch(objclass, objmethod, args, kwargs) # NOTE(danms): The RPC layer will convert to primitives for us, # but in this case, we need to honor the version the client is # asking for, so we do it before returning here. # NOTE(hanlind): Do not convert older than requested objects, # see bug #1596119. if isinstance(result, nova_object.NovaObject): target_version = object_versions[objname] requested_version = versionutils.convert_version_to_tuple( target_version) actual_version = versionutils.convert_version_to_tuple( result.VERSION) do_backport = requested_version < actual_version other_major_version = requested_version[0] != actual_version[0] if do_backport or other_major_version: result = result.obj_to_primitive( target_version=target_version, version_manifest=object_versions) return result def object_action(self, context, objinst, objmethod, args, kwargs): """Perform an action on an object.""" oldobj = objinst.obj_clone() result = self._object_dispatch(objinst, objmethod, args, kwargs) updates = dict() # NOTE(danms): Diff the object with the one passed to us and # generate a list of changes to forward back for name, field in objinst.fields.items(): if not objinst.obj_attr_is_set(name): # Avoid demand-loading anything continue if (not oldobj.obj_attr_is_set(name) or getattr(oldobj, name) != getattr(objinst, name)): updates[name] = field.to_primitive(objinst, name, getattr(objinst, name)) # This is safe since a field named this would conflict with the # method anyway updates['obj_what_changed'] = objinst.obj_what_changed() return updates, result def object_backport_versions(self, context, objinst, object_versions): target = object_versions[objinst.obj_name()] LOG.debug('Backporting %(obj)s to %(ver)s with versions %(manifest)s', {'obj': objinst.obj_name(), 'ver': target, 'manifest': ','.join( ['%s=%s' % (name, ver) for name, ver in object_versions.items()])}) return objinst.obj_to_primitive(target_version=target, version_manifest=object_versions) def reset(self): objects.Service.clear_min_version_cache() @contextlib.contextmanager def try_target_cell(context, cell): """If cell is not None call func with context.target_cell. This is a method to help during the transition period. Currently various mappings may not exist if a deployment has not migrated to cellsv2. 
If there is no mapping call the func as normal, otherwise call it in a target_cell context. """ if cell: with nova_context.target_cell(context, cell) as cell_context: yield cell_context else: yield context @contextlib.contextmanager def obj_target_cell(obj, cell): """Run with object's context set to a specific cell""" with try_target_cell(obj._context, cell) as target: with obj.obj_alternate_context(target): yield target @profiler.trace_cls("rpc") class ComputeTaskManager(base.Base): """Namespace for compute methods. This class presents an rpc API for nova-conductor under the 'compute_task' namespace. The methods here are compute operations that are invoked by the API service. These methods see the operation to completion, which may involve coordinating activities on multiple compute nodes. """ target = messaging.Target(namespace='compute_task', version='1.23') def __init__(self): super(ComputeTaskManager, self).__init__() self.compute_rpcapi = compute_rpcapi.ComputeAPI() self.volume_api = cinder.API() self.image_api = glance.API() self.network_api = neutron.API() self.servicegroup_api = servicegroup.API() self.query_client = query.SchedulerQueryClient() self.report_client = report.SchedulerReportClient() self.notifier = rpc.get_notifier('compute', CONF.host) # Help us to record host in EventReporter self.host = CONF.host def reset(self): LOG.info('Reloading compute RPC API') compute_rpcapi.LAST_VERSION = None self.compute_rpcapi = compute_rpcapi.ComputeAPI() # TODO(tdurakov): remove `live` parameter here on compute task api RPC # version bump to 2.x # TODO(danms): remove the `reservations` parameter here on compute task api # RPC version bump to 2.x @messaging.expected_exceptions( exception.NoValidHost, exception.ComputeServiceUnavailable, exception.ComputeHostNotFound, exception.InvalidHypervisorType, exception.InvalidCPUInfo, exception.UnableToMigrateToSelf, exception.DestinationHypervisorTooOld, exception.InvalidLocalStorage, exception.InvalidSharedStorage, exception.HypervisorUnavailable, exception.InstanceInvalidState, exception.MigrationPreCheckError, exception.UnsupportedPolicyException) @targets_cell @wrap_instance_event(prefix='conductor') def migrate_server(self, context, instance, scheduler_hint, live, rebuild, flavor, block_migration, disk_over_commit, reservations=None, clean_shutdown=True, request_spec=None, host_list=None): if instance and not isinstance(instance, nova_object.NovaObject): # NOTE(danms): Until v2 of the RPC API, we need to tolerate # old-world instance objects here attrs = ['metadata', 'system_metadata', 'info_cache', 'security_groups'] instance = objects.Instance._from_db_object( context, objects.Instance(), instance, expected_attrs=attrs) # NOTE: Remove this when we drop support for v1 of the RPC API if flavor and not isinstance(flavor, objects.Flavor): # Code downstream may expect extra_specs to be populated since it # is receiving an object, so lookup the flavor to ensure this. 
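# migrate_server() routes a single RPC entry point to either a live migration
# or a cold migration based on the live/rebuild/flavor arguments, and rejects
# combinations it does not implement. A simplified sketch of that dispatch,
# with plain callables standing in for the real task-building methods:

def dispatch_migrate(live, rebuild, flavor, do_live_migrate, do_cold_migrate):
    if live and not rebuild and not flavor:
        return do_live_migrate()
    if not live and not rebuild and flavor:
        return do_cold_migrate()
    # Anything else (for example a live resize, or rebuild through this
    # path) is not supported by this entry point.
    raise NotImplementedError()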
flavor = objects.Flavor.get_by_id(context, flavor['id']) if live and not rebuild and not flavor: self._live_migrate(context, instance, scheduler_hint, block_migration, disk_over_commit, request_spec) elif not live and not rebuild and flavor: instance_uuid = instance.uuid with compute_utils.EventReporter(context, 'cold_migrate', self.host, instance_uuid): self._cold_migrate(context, instance, flavor, scheduler_hint['filter_properties'], clean_shutdown, request_spec, host_list) else: raise NotImplementedError() @staticmethod def _get_request_spec_for_cold_migrate(context, instance, flavor, filter_properties, request_spec): # NOTE(sbauza): If a reschedule occurs when prep_resize(), then # it only provides filter_properties legacy dict back to the # conductor with no RequestSpec part of the payload for =6.0. request_spec = objects.RequestSpec.from_primitives( context, request_spec, filter_properties) # We don't have to set the new flavor on the request spec because # if we got here it was due to a reschedule from the compute and # the request spec would already have the new flavor in it from the # else block below. else: # NOTE(sbauza): Resizes means new flavor, so we need to update the # original RequestSpec object for make sure the scheduler verifies # the right one and not the original flavor request_spec.flavor = flavor return request_spec def _cold_migrate(self, context, instance, flavor, filter_properties, clean_shutdown, request_spec, host_list): request_spec = self._get_request_spec_for_cold_migrate( context, instance, flavor, filter_properties, request_spec) task = self._build_cold_migrate_task(context, instance, flavor, request_spec, clean_shutdown, host_list) try: task.execute() except exception.NoValidHost as ex: vm_state = instance.vm_state if not vm_state: vm_state = vm_states.ACTIVE updates = {'vm_state': vm_state, 'task_state': None} self._set_vm_state_and_notify(context, instance.uuid, 'migrate_server', updates, ex, request_spec) # if the flavor IDs match, it's migrate; otherwise resize if flavor.id == instance.instance_type_id: msg = _("No valid host found for cold migrate") else: msg = _("No valid host found for resize") raise exception.NoValidHost(reason=msg) except exception.UnsupportedPolicyException as ex: with excutils.save_and_reraise_exception(): vm_state = instance.vm_state if not vm_state: vm_state = vm_states.ACTIVE updates = {'vm_state': vm_state, 'task_state': None} self._set_vm_state_and_notify(context, instance.uuid, 'migrate_server', updates, ex, request_spec) except Exception as ex: with excutils.save_and_reraise_exception(): # Refresh the instance so we don't overwrite vm_state changes # set after we executed the task. try: instance.refresh() # Passing vm_state is kind of silly but it's expected in # set_vm_state_and_notify. updates = {'vm_state': instance.vm_state, 'task_state': None} self._set_vm_state_and_notify(context, instance.uuid, 'migrate_server', updates, ex, request_spec) except exception.InstanceNotFound: # We can't send the notification because the instance is # gone so just log it. 
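# In the NoValidHost handling above, the same code path serves both cold
# migrate and resize; the operation is reported as a cold migrate when the
# requested flavor matches the instance's current flavor and as a resize when
# it differs. A small sketch of that classification, with plain integer ids
# standing in for the flavor objects:

def describe_cold_move_failure(requested_flavor_id, current_flavor_id):
    if requested_flavor_id == current_flavor_id:
        return "No valid host found for cold migrate"
    return "No valid host found for resize"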
LOG.info('During %s the instance was deleted.', 'resize' if instance.instance_type_id != flavor.id else 'cold migrate', instance=instance) # NOTE(sbauza): Make sure we persist the new flavor in case we had # a successful scheduler call if and only if nothing bad happened if request_spec.obj_what_changed(): request_spec.save() def _set_vm_state_and_notify(self, context, instance_uuid, method, updates, ex, request_spec): scheduler_utils.set_vm_state_and_notify( context, instance_uuid, 'compute_task', method, updates, ex, request_spec) def _cleanup_allocated_networks( self, context, instance, requested_networks): try: # If we were told not to allocate networks let's save ourselves # the trouble of calling the network API. if not (requested_networks and requested_networks.no_allocate): self.network_api.deallocate_for_instance( context, instance, requested_networks=requested_networks) except Exception: LOG.exception('Failed to deallocate networks', instance=instance) return instance.system_metadata['network_allocated'] = 'False' try: instance.save() except exception.InstanceNotFound: # NOTE: It's possible that we're cleaning up the networks # because the instance was deleted. If that's the case then this # exception will be raised by instance.save() pass @targets_cell @wrap_instance_event(prefix='conductor') def live_migrate_instance(self, context, instance, scheduler_hint, block_migration, disk_over_commit, request_spec): self._live_migrate(context, instance, scheduler_hint, block_migration, disk_over_commit, request_spec) def _live_migrate(self, context, instance, scheduler_hint, block_migration, disk_over_commit, request_spec): destination = scheduler_hint.get("host") def _set_vm_state(context, instance, ex, vm_state=None, task_state=None): request_spec = {'instance_properties': { 'uuid': instance.uuid, }, } scheduler_utils.set_vm_state_and_notify(context, instance.uuid, 'compute_task', 'migrate_server', dict(vm_state=vm_state, task_state=task_state, expected_task_state=task_states.MIGRATING,), ex, request_spec) migration = objects.Migration(context=context.elevated()) migration.dest_compute = destination migration.status = 'accepted' migration.instance_uuid = instance.uuid migration.source_compute = instance.host migration.migration_type = 'live-migration' if instance.obj_attr_is_set('flavor'): migration.old_instance_type_id = instance.flavor.id migration.new_instance_type_id = instance.flavor.id else: migration.old_instance_type_id = instance.instance_type_id migration.new_instance_type_id = instance.instance_type_id migration.create() task = self._build_live_migrate_task(context, instance, destination, block_migration, disk_over_commit, migration, request_spec) try: task.execute() except (exception.NoValidHost, exception.ComputeHostNotFound, exception.ComputeServiceUnavailable, exception.InvalidHypervisorType, exception.InvalidCPUInfo, exception.UnableToMigrateToSelf, exception.DestinationHypervisorTooOld, exception.InvalidLocalStorage, exception.InvalidSharedStorage, exception.HypervisorUnavailable, exception.InstanceInvalidState, exception.MigrationPreCheckError, exception.MigrationSchedulerRPCError) as ex: with excutils.save_and_reraise_exception(): _set_vm_state(context, instance, ex, instance.vm_state) migration.status = 'error' migration.save() except Exception as ex: LOG.error('Migration of instance %(instance_id)s to host' ' %(dest)s unexpectedly failed.', {'instance_id': instance.uuid, 'dest': destination}, exc_info=True) # Reset the task state to None to indicate completion of # 
the operation as it is done in case of known exceptions. _set_vm_state(context, instance, ex, vm_states.ERROR, task_state=None) migration.status = 'error' migration.save() raise exception.MigrationError(reason=six.text_type(ex)) def _build_live_migrate_task(self, context, instance, destination, block_migration, disk_over_commit, migration, request_spec=None): return live_migrate.LiveMigrationTask(context, instance, destination, block_migration, disk_over_commit, migration, self.compute_rpcapi, self.servicegroup_api, self.query_client, self.report_client, request_spec) def _build_cold_migrate_task(self, context, instance, flavor, request_spec, clean_shutdown, host_list): return migrate.MigrationTask(context, instance, flavor, request_spec, clean_shutdown, self.compute_rpcapi, self.query_client, self.report_client, host_list, self.network_api) def _destroy_build_request(self, context, instance): # The BuildRequest needs to be stored until the instance is mapped to # an instance table. At that point it will never be used again and # should be deleted. build_request = objects.BuildRequest.get_by_instance_uuid( context, instance.uuid) # TODO(alaski): Sync API updates of the build_request to the # instance before it is destroyed. Right now only locked_by can # be updated before this is destroyed. build_request.destroy() def _populate_instance_mapping(self, context, instance, host): try: inst_mapping = objects.InstanceMapping.get_by_instance_uuid( context, instance.uuid) except exception.InstanceMappingNotFound: # NOTE(alaski): If nova-api is up to date this exception should # never be hit. But during an upgrade it's possible that an old # nova-api didn't create an instance_mapping during this boot # request. LOG.debug('Instance was not mapped to a cell, likely due ' 'to an older nova-api service running.', instance=instance) return None else: try: host_mapping = objects.HostMapping.get_by_host(context, host.service_host) except exception.HostMappingNotFound: # NOTE(alaski): For now this exception means that a # deployment has not migrated to cellsv2 and we should # remove the instance_mapping that has been created. # Eventually this will indicate a failure to properly map a # host to a cell and we may want to reschedule. inst_mapping.destroy() return None else: inst_mapping.cell_mapping = host_mapping.cell_mapping inst_mapping.save() return inst_mapping def _validate_existing_attachment_ids(self, context, instance, bdms): """Ensure any attachment ids referenced by the bdms exist. New attachments will only be created if the attachment ids referenced by the bdms no longer exist. This can happen when an instance is rescheduled after a failure to spawn as cleanup code on the previous host will delete attachments before rescheduling. """ for bdm in bdms: if bdm.is_volume and bdm.attachment_id: try: self.volume_api.attachment_get(context, bdm.attachment_id) except exception.VolumeAttachmentNotFound: attachment = self.volume_api.attachment_create( context, bdm.volume_id, instance.uuid) bdm.attachment_id = attachment['id'] bdm.save() def _cleanup_when_reschedule_fails( self, context, instance, exception, legacy_request_spec, requested_networks): """Set the instance state and clean up. 
It is only used in case build_instance fails while rescheduling the instance """ updates = {'vm_state': vm_states.ERROR, 'task_state': None} self._set_vm_state_and_notify( context, instance.uuid, 'build_instances', updates, exception, legacy_request_spec) self._cleanup_allocated_networks( context, instance, requested_networks) compute_utils.delete_arqs_if_needed(context, instance) # NOTE(danms): This is never cell-targeted because it is only used for # n-cpu reschedules which go to the cell conductor and thus are always # cell-specific. def build_instances(self, context, instances, image, filter_properties, admin_password, injected_files, requested_networks, security_groups, block_device_mapping=None, legacy_bdm=True, request_spec=None, host_lists=None): # TODO(ndipanov): Remove block_device_mapping and legacy_bdm in version # 2.0 of the RPC API. # TODO(danms): Remove this in version 2.0 of the RPC API if (requested_networks and not isinstance(requested_networks, objects.NetworkRequestList)): requested_networks = objects.NetworkRequestList.from_tuples( requested_networks) # TODO(melwitt): Remove this in version 2.0 of the RPC API flavor = filter_properties.get('instance_type') if flavor and not isinstance(flavor, objects.Flavor): # Code downstream may expect extra_specs to be populated since it # is receiving an object, so lookup the flavor to ensure this. flavor = objects.Flavor.get_by_id(context, flavor['id']) filter_properties = dict(filter_properties, instance_type=flavor) # Older computes will not send a request_spec during reschedules so we # need to check and build our own if one is not provided. if request_spec is None: legacy_request_spec = scheduler_utils.build_request_spec( image, instances) else: # TODO(mriedem): This is annoying but to populate the local # request spec below using the filter_properties, we have to pass # in a primitive version of the request spec. Yes it's inefficient # and we can remove it once the populate_retry and # populate_filter_properties utility methods are converted to # work on a RequestSpec object rather than filter_properties. # NOTE(gibi): we have to keep a reference to the original # RequestSpec object passed to this function as we lose information # during the below legacy conversion legacy_request_spec = request_spec.to_legacy_request_spec_dict() # 'host_lists' will be None during a reschedule from a pre-Queens # compute. In all other cases, it will be a list of lists, though the # lists may be empty if there are no more hosts left in a rescheduling # situation. is_reschedule = host_lists is not None try: # check retry policy. Rather ugly use of instances[0]... # but if we've exceeded max retries... then we really only # have a single instance. # TODO(sbauza): Provide directly the RequestSpec object # when populate_retry() accepts it scheduler_utils.populate_retry( filter_properties, instances[0].uuid) instance_uuids = [instance.uuid for instance in instances] spec_obj = objects.RequestSpec.from_primitives( context, legacy_request_spec, filter_properties) LOG.debug("Rescheduling: %s", is_reschedule) if is_reschedule: # Make sure that we have a host, as we may have exhausted all # our alternates if not host_lists[0]: # We have an empty list of hosts, so this instance has # failed to build. msg = ("Exhausted all hosts available for retrying build " "failures for instance %(instance_uuid)s." 
% {"instance_uuid": instances[0].uuid}) raise exception.MaxRetriesExceeded(reason=msg) else: # This is not a reschedule, so we need to call the scheduler to # get appropriate hosts for the request. # NOTE(gibi): We only call the scheduler if we are rescheduling # from a really old compute. In that case we do not support # externally-defined resource requests, like port QoS. So no # requested_resources are set on the RequestSpec here. host_lists = self._schedule_instances(context, spec_obj, instance_uuids, return_alternates=True) except Exception as exc: # NOTE(mriedem): If we're rescheduling from a failed build on a # compute, "retry" will be set and num_attempts will be >1 because # populate_retry above will increment it. If the server build was # forced onto a host/node or [scheduler]/max_attempts=1, "retry" # won't be in filter_properties and we won't get here because # nova-compute will just abort the build since reschedules are # disabled in those cases. num_attempts = filter_properties.get( 'retry', {}).get('num_attempts', 1) for instance in instances: # If num_attempts > 1, we're in a reschedule and probably # either hit NoValidHost or MaxRetriesExceeded. Either way, # the build request should already be gone and we probably # can't reach the API DB from the cell conductor. if num_attempts <= 1: try: # If the BuildRequest stays around then instance # show/lists will pull from it rather than the errored # instance. self._destroy_build_request(context, instance) except exception.BuildRequestNotFound: pass self._cleanup_when_reschedule_fails( context, instance, exc, legacy_request_spec, requested_networks) return elevated = context.elevated() for (instance, host_list) in six.moves.zip(instances, host_lists): host = host_list.pop(0) if is_reschedule: # If this runs in the superconductor, the first instance will # already have its resources claimed in placement. If this is a # retry, though, this is running in the cell conductor, and we # need to claim first to ensure that the alternate host still # has its resources available. Note that there are schedulers # that don't support Placement, so must assume that the host is # still available. host_available = False while host and not host_available: if host.allocation_request: alloc_req = jsonutils.loads(host.allocation_request) else: alloc_req = None if alloc_req: try: host_available = scheduler_utils.claim_resources( elevated, self.report_client, spec_obj, instance.uuid, alloc_req, host.allocation_request_version) if request_spec and host_available: # NOTE(gibi): redo the request group - resource # provider mapping as the above claim call # moves the allocation of the instance to # another host scheduler_utils.fill_provider_mapping( request_spec, host) except Exception as exc: self._cleanup_when_reschedule_fails( context, instance, exc, legacy_request_spec, requested_networks) return else: # Some deployments use different schedulers that do not # use Placement, so they will not have an # allocation_request to claim with. For those cases, # there is no concept of claiming, so just assume that # the host is valid. host_available = True if not host_available: # Insufficient resources remain on that host, so # discard it and try the next. host = host_list.pop(0) if host_list else None if not host_available: # No more available hosts for retrying the build. msg = ("Exhausted all hosts available for retrying build " "failures for instance %(instance_uuid)s." 
% {"instance_uuid": instance.uuid}) exc = exception.MaxRetriesExceeded(reason=msg) self._cleanup_when_reschedule_fails( context, instance, exc, legacy_request_spec, requested_networks) return # The availability_zone field was added in v1.1 of the Selection # object so make sure to handle the case where it is missing. if 'availability_zone' in host: instance.availability_zone = host.availability_zone else: try: instance.availability_zone = ( availability_zones.get_host_availability_zone(context, host.service_host)) except Exception as exc: # Put the instance into ERROR state, set task_state to # None, inject a fault, etc. self._cleanup_when_reschedule_fails( context, instance, exc, legacy_request_spec, requested_networks) continue try: # NOTE(danms): This saves the az change above, refreshes our # instance, and tells us if it has been deleted underneath us instance.save() except (exception.InstanceNotFound, exception.InstanceInfoCacheNotFound): LOG.debug('Instance deleted during build', instance=instance) continue local_filter_props = copy.deepcopy(filter_properties) scheduler_utils.populate_filter_properties(local_filter_props, host) # Populate the request_spec with the local_filter_props information # like retries and limits. Note that at this point the request_spec # could have come from a compute via reschedule and it would # already have some things set, like scheduler_hints. local_reqspec = objects.RequestSpec.from_primitives( context, legacy_request_spec, local_filter_props) # NOTE(gibi): at this point the request spec already got converted # to a legacy dict and then back to an object so we lost the non # legacy part of the spec. Re-populate the requested_resources # field based on the original request spec object passed to this # function. if request_spec: local_reqspec.requested_resources = ( request_spec.requested_resources) # The block_device_mapping passed from the api doesn't contain # instance specific information bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) # This is populated in scheduler_utils.populate_retry num_attempts = local_filter_props.get('retry', {}).get('num_attempts', 1) if num_attempts <= 1: # If this is a reschedule the instance is already mapped to # this cell and the BuildRequest is already deleted so ignore # the logic below. inst_mapping = self._populate_instance_mapping(context, instance, host) try: self._destroy_build_request(context, instance) except exception.BuildRequestNotFound: # This indicates an instance delete has been requested in # the API. Stop the build, cleanup the instance_mapping and # potentially the block_device_mappings # TODO(alaski): Handle block_device_mapping cleanup if inst_mapping: inst_mapping.destroy() return else: # NOTE(lyarwood): If this is a reschedule then recreate any # attachments that were previously removed when cleaning up # after failures to spawn etc. self._validate_existing_attachment_ids(context, instance, bdms) alts = [(alt.service_host, alt.nodename) for alt in host_list] LOG.debug("Selected host: %s; Selected node: %s; Alternates: %s", host.service_host, host.nodename, alts, instance=instance) try: resource_provider_mapping = ( local_reqspec.get_request_group_mapping()) accel_uuids = self._create_and_bind_arqs( context, instance.uuid, instance.flavor.extra_specs, host.nodename, resource_provider_mapping) except Exception as exc: LOG.exception('Failed to reschedule. 
Reason: %s', exc) self._cleanup_when_reschedule_fails(context, instance, exc, legacy_request_spec, requested_networks) continue self.compute_rpcapi.build_and_run_instance(context, instance=instance, host=host.service_host, image=image, request_spec=local_reqspec, filter_properties=local_filter_props, admin_password=admin_password, injected_files=injected_files, requested_networks=requested_networks, security_groups=security_groups, block_device_mapping=bdms, node=host.nodename, limits=host.limits, host_list=host_list, accel_uuids=accel_uuids) def _schedule_instances(self, context, request_spec, instance_uuids=None, return_alternates=False): scheduler_utils.setup_instance_group(context, request_spec) with timeutils.StopWatch() as timer: host_lists = self.query_client.select_destinations( context, request_spec, instance_uuids, return_objects=True, return_alternates=return_alternates) LOG.debug('Took %0.2f seconds to select destinations for %s ' 'instance(s).', timer.elapsed(), len(instance_uuids)) return host_lists @staticmethod def _restrict_request_spec_to_cell(context, instance, request_spec): """Sets RequestSpec.requested_destination.cell for the move operation Move operations, e.g. evacuate and unshelve, must be restricted to the cell in which the instance already exists, so this method is used to target the RequestSpec, which is sent to the scheduler via the _schedule_instances method, to the instance's current cell. :param context: nova auth RequestContext """ instance_mapping = \ objects.InstanceMapping.get_by_instance_uuid( context, instance.uuid) LOG.debug('Requesting cell %(cell)s during scheduling', {'cell': instance_mapping.cell_mapping.identity}, instance=instance) if ('requested_destination' in request_spec and request_spec.requested_destination): request_spec.requested_destination.cell = ( instance_mapping.cell_mapping) else: request_spec.requested_destination = ( objects.Destination( cell=instance_mapping.cell_mapping)) # TODO(mriedem): Make request_spec required in ComputeTaskAPI RPC v2.0. 
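# Move operations such as unshelve and evacuate must stay in the cell where
# the instance already lives, so the conductor pins the RequestSpec's
# requested_destination to the instance's current cell mapping before calling
# the scheduler. A simplified sketch of that pinning, using
# types.SimpleNamespace objects instead of Nova's RequestSpec/Destination:

import types


def pin_spec_to_cell(request_spec, cell_mapping):
    dest = getattr(request_spec, 'requested_destination', None)
    if dest is not None:
        # Keep any host/node the caller already requested, only add the cell.
        dest.cell = cell_mapping
    else:
        request_spec.requested_destination = types.SimpleNamespace(
            cell=cell_mapping)
    return request_spec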
@targets_cell def unshelve_instance(self, context, instance, request_spec=None): sys_meta = instance.system_metadata def safe_image_show(ctx, image_id): if image_id: return self.image_api.get(ctx, image_id, show_deleted=False) else: raise exception.ImageNotFound(image_id='') if instance.vm_state == vm_states.SHELVED: instance.task_state = task_states.POWERING_ON instance.save(expected_task_state=task_states.UNSHELVING) self.compute_rpcapi.start_instance(context, instance) elif instance.vm_state == vm_states.SHELVED_OFFLOADED: image = None image_id = sys_meta.get('shelved_image_id') # No need to check for image if image_id is None as # "shelved_image_id" key is not set for volume backed # instance during the shelve process if image_id: with compute_utils.EventReporter( context, 'get_image_info', self.host, instance.uuid): try: image = safe_image_show(context, image_id) except exception.ImageNotFound as error: instance.vm_state = vm_states.ERROR instance.save() reason = _('Unshelve attempted but the image %s ' 'cannot be found.') % image_id LOG.error(reason, instance=instance) compute_utils.add_instance_fault_from_exc( context, instance, error, sys.exc_info(), fault_message=reason) raise exception.UnshelveException( instance_id=instance.uuid, reason=reason) try: with compute_utils.EventReporter(context, 'schedule_instances', self.host, instance.uuid): # NOTE(sbauza): Force_hosts/nodes needs to be reset # if we want to make sure that the next destination # is not forced to be the original host request_spec.reset_forced_destinations() # TODO(sbauza): Provide directly the RequestSpec object # when populate_filter_properties accepts it filter_properties = request_spec.\ to_legacy_filter_properties_dict() port_res_req = ( self.network_api.get_requested_resource_for_instance( context, instance.uuid)) # NOTE(gibi): When cyborg or other module wants to handle # similar non-nova resources then here we have to collect # all the external resource requests in a single list and # add them to the RequestSpec. request_spec.requested_resources = port_res_req # NOTE(cfriesen): Ensure that we restrict the scheduler to # the cell specified by the instance mapping. 
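# unshelve_instance() branches on the instance's vm_state: a SHELVED instance
# still has a host and is simply powered back on, while a SHELVED_OFFLOADED
# instance has no host and must be rescheduled, optionally using the snapshot
# image recorded in system_metadata (volume-backed instances have no
# 'shelved_image_id' key at all). A compressed sketch of that branching, with
# plain strings standing in for the vm_states constants:

def plan_unshelve(vm_state, system_metadata):
    if vm_state == 'shelved':
        return ('power_on', None)
    if vm_state == 'shelved_offloaded':
        # May be None for volume-backed instances; scheduling still proceeds.
        return ('schedule_and_spawn', system_metadata.get('shelved_image_id'))
    raise ValueError('unshelve called in unexpected vm_state %s' % vm_state)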
self._restrict_request_spec_to_cell( context, instance, request_spec) request_spec.ensure_project_and_user_id(instance) request_spec.ensure_network_metadata(instance) compute_utils.heal_reqspec_is_bfv( context, request_spec, instance) host_lists = self._schedule_instances(context, request_spec, [instance.uuid], return_alternates=False) host_list = host_lists[0] selection = host_list[0] scheduler_utils.populate_filter_properties( filter_properties, selection) (host, node) = (selection.service_host, selection.nodename) instance.availability_zone = ( availability_zones.get_host_availability_zone( context, host)) scheduler_utils.fill_provider_mapping( request_spec, selection) self.compute_rpcapi.unshelve_instance( context, instance, host, request_spec, image=image, filter_properties=filter_properties, node=node) except (exception.NoValidHost, exception.UnsupportedPolicyException): instance.task_state = None instance.save() LOG.warning("No valid host found for unshelve instance", instance=instance) return except Exception: with excutils.save_and_reraise_exception(): instance.task_state = None instance.save() LOG.error("Unshelve attempted but an error " "has occurred", instance=instance) else: LOG.error('Unshelve attempted but vm_state not SHELVED or ' 'SHELVED_OFFLOADED', instance=instance) instance.vm_state = vm_states.ERROR instance.save() return def _allocate_for_evacuate_dest_host(self, context, instance, host, request_spec=None): # The user is forcing the destination host and bypassing the # scheduler. We need to copy the source compute node # allocations in Placement to the destination compute node. # Normally select_destinations() in the scheduler would do this # for us, but when forcing the target host we don't call the # scheduler. source_node = None # This is used for error handling below. try: source_node = objects.ComputeNode.get_by_host_and_nodename( context, instance.host, instance.node) dest_node = ( objects.ComputeNode.get_first_node_by_host_for_old_compat( context, host, use_slave=True)) except exception.ComputeHostNotFound as ex: with excutils.save_and_reraise_exception(): self._set_vm_state_and_notify( context, instance.uuid, 'rebuild_server', {'vm_state': instance.vm_state, 'task_state': None}, ex, request_spec) if source_node: LOG.warning('Specified host %s for evacuate was not ' 'found.', host, instance=instance) else: LOG.warning('Source host %s and node %s for evacuate was ' 'not found.', instance.host, instance.node, instance=instance) try: scheduler_utils.claim_resources_on_destination( context, self.report_client, instance, source_node, dest_node) except exception.NoValidHost as ex: with excutils.save_and_reraise_exception(): self._set_vm_state_and_notify( context, instance.uuid, 'rebuild_server', {'vm_state': instance.vm_state, 'task_state': None}, ex, request_spec) LOG.warning('Specified host %s for evacuate is ' 'invalid.', host, instance=instance) # TODO(mriedem): Make request_spec required in ComputeTaskAPI RPC v2.0. @targets_cell def rebuild_instance(self, context, instance, orig_image_ref, image_ref, injected_files, new_pass, orig_sys_metadata, bdms, recreate, on_shared_storage, preserve_ephemeral=False, host=None, request_spec=None): # recreate=True means the instance is being evacuated from a failed # host to a new destination host. The 'recreate' variable name is # confusing, so rename it to evacuate here at the top, which is simpler # than renaming a parameter in an RPC versioned method. 
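# When an evacuate request forces a destination host, the scheduler that
# would normally claim resources on the target is bypassed, so the conductor
# asks Placement to claim the same resources the instance holds on the source
# compute node against the destination node. A very reduced sketch of that
# idea using plain dicts keyed by provider uuid and resource class; real Nova
# does this through the Placement API, not dicts:

def destination_claim(source_allocations, source_node_uuid, dest_node_uuid):
    """Build the destination side of the claim from the source usage."""
    usage = dict(source_allocations.get(source_node_uuid, {}))
    # The destination is asked for exactly what the instance consumes on the
    # source node.
    return {dest_node_uuid: usage}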
evacuate = recreate # NOTE(efried): It would be nice if this were two separate events, one # for 'rebuild' and one for 'evacuate', but this is part of the API # now, so it would be nontrivial to change. with compute_utils.EventReporter(context, 'rebuild_server', self.host, instance.uuid): node = limits = None try: migration = objects.Migration.get_by_instance_and_status( context, instance.uuid, 'accepted') except exception.MigrationNotFoundByStatus: LOG.debug("No migration record for the rebuild/evacuate " "request.", instance=instance) migration = None # The host variable is passed in two cases: # 1. rebuild - the instance.host is passed to rebuild on the # same host and bypass the scheduler *unless* a new image # was specified # 2. evacuate with specified host and force=True - the specified # host is passed and is meant to bypass the scheduler. # NOTE(mriedem): This could be a lot more straight-forward if we # had separate methods for rebuild and evacuate... if host: # We only create a new allocation on the specified host if # we're doing an evacuate since that is a move operation. if host != instance.host: # If a destination host is forced for evacuate, create # allocations against it in Placement. try: self._allocate_for_evacuate_dest_host( context, instance, host, request_spec) except exception.AllocationUpdateFailed as ex: with excutils.save_and_reraise_exception(): if migration: migration.status = 'error' migration.save() # NOTE(efried): It would be nice if this were two # separate events, one for 'rebuild' and one for # 'evacuate', but this is part of the API now, so # it would be nontrivial to change. self._set_vm_state_and_notify( context, instance.uuid, 'rebuild_server', {'vm_state': vm_states.ERROR, 'task_state': None}, ex, request_spec) LOG.warning('Rebuild failed: %s', six.text_type(ex), instance=instance) except exception.NoValidHost: with excutils.save_and_reraise_exception(): if migration: migration.status = 'error' migration.save() else: # At this point, the user is either: # # 1. Doing a rebuild on the same host (not evacuate) and # specified a new image. # 2. Evacuating and specified a host but are not forcing it. # # In either case, the API passes host=None but sets up the # RequestSpec.requested_destination field for the specified # host. if evacuate: # NOTE(sbauza): Augment the RequestSpec object by excluding # the source host for avoiding the scheduler to pick it request_spec.ignore_hosts = [instance.host] # NOTE(sbauza): Force_hosts/nodes needs to be reset # if we want to make sure that the next destination # is not forced to be the original host request_spec.reset_forced_destinations() port_res_req = ( self.network_api.get_requested_resource_for_instance( context, instance.uuid)) # NOTE(gibi): When cyborg or other module wants to handle # similar non-nova resources then here we have to collect # all the external resource requests in a single list and # add them to the RequestSpec. request_spec.requested_resources = port_res_req try: # if this is a rebuild of instance on the same host with # new image. 
if not evacuate and orig_image_ref != image_ref: self._validate_image_traits_for_rebuild(context, instance, image_ref) self._restrict_request_spec_to_cell( context, instance, request_spec) request_spec.ensure_project_and_user_id(instance) request_spec.ensure_network_metadata(instance) compute_utils.heal_reqspec_is_bfv( context, request_spec, instance) host_lists = self._schedule_instances(context, request_spec, [instance.uuid], return_alternates=False) host_list = host_lists[0] selection = host_list[0] host, node, limits = (selection.service_host, selection.nodename, selection.limits) if recreate: scheduler_utils.fill_provider_mapping( request_spec, selection) except (exception.NoValidHost, exception.UnsupportedPolicyException, exception.AllocationUpdateFailed, # the next two can come from fill_provider_mapping and # signals a software error. NotImplementedError, ValueError) as ex: if migration: migration.status = 'error' migration.save() # Rollback the image_ref if a new one was provided (this # only happens in the rebuild case, not evacuate). if orig_image_ref and orig_image_ref != image_ref: instance.image_ref = orig_image_ref instance.save() with excutils.save_and_reraise_exception(): # NOTE(efried): It would be nice if this were two # separate events, one for 'rebuild' and one for # 'evacuate', but this is part of the API now, so it # would be nontrivial to change. self._set_vm_state_and_notify(context, instance.uuid, 'rebuild_server', {'vm_state': vm_states.ERROR, 'task_state': None}, ex, request_spec) LOG.warning('Rebuild failed: %s', six.text_type(ex), instance=instance) compute_utils.notify_about_instance_usage( self.notifier, context, instance, "rebuild.scheduled") compute_utils.notify_about_instance_rebuild( context, instance, host, action=fields.NotificationAction.REBUILD_SCHEDULED, source=fields.NotificationSource.CONDUCTOR) instance.availability_zone = ( availability_zones.get_host_availability_zone( context, host)) self.compute_rpcapi.rebuild_instance(context, instance=instance, new_pass=new_pass, injected_files=injected_files, image_ref=image_ref, orig_image_ref=orig_image_ref, orig_sys_metadata=orig_sys_metadata, bdms=bdms, recreate=evacuate, on_shared_storage=on_shared_storage, preserve_ephemeral=preserve_ephemeral, migration=migration, host=host, node=node, limits=limits, request_spec=request_spec) def _validate_image_traits_for_rebuild(self, context, instance, image_ref): """Validates that the traits specified in the image can be satisfied by the providers of the current allocations for the instance during rebuild of the instance. If the traits cannot be satisfied, fails the action by raising a NoValidHost exception. :raises: NoValidHost exception in case the traits on the providers of the allocated resources for the instance do not match the required traits on the image. """ image_meta = objects.ImageMeta.from_image_ref( context, self.image_api, image_ref) if ('properties' not in image_meta or 'traits_required' not in image_meta.properties or not image_meta.properties.traits_required): return image_traits = set(image_meta.properties.traits_required) # check any of the image traits are forbidden in flavor traits. # if so raise an exception extra_specs = instance.flavor.extra_specs forbidden_flavor_traits = set() for key, val in extra_specs.items(): if key.startswith('trait'): # get the actual key. 
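# _validate_image_traits_for_rebuild() compares the image's required traits
# against any traits the flavor explicitly forbids via extra specs of the
# form "trait:<NAME>": "forbidden", and rejects the rebuild if they overlap.
# A small self-contained sketch of that intersection check; the trait name is
# only an example value:

def forbidden_image_traits(image_traits, flavor_extra_specs):
    forbidden = set()
    for key, val in flavor_extra_specs.items():
        if key.startswith('trait:') and val == 'forbidden':
            forbidden.add(key.split(':', 1)[1])
    return set(image_traits) & forbidden


# The image requires AVX2 but the flavor forbids it, so the rebuild would be
# rejected with NoValidHost.
assert forbidden_image_traits(
    {'HW_CPU_X86_AVX2'},
    {'trait:HW_CPU_X86_AVX2': 'forbidden', 'hw:cpu_policy': 'dedicated'},
) == {'HW_CPU_X86_AVX2'}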
prefix, parsed_key = key.split(':', 1) if val == 'forbidden': forbidden_flavor_traits.add(parsed_key) forbidden_traits = image_traits & forbidden_flavor_traits if forbidden_traits: raise exception.NoValidHost( reason=_("Image traits are part of forbidden " "traits in flavor associated with the server. " "Either specify a different image during rebuild " "or create a new server with the specified image " "and a compatible flavor.")) # If image traits are present, then validate against allocations. allocations = self.report_client.get_allocations_for_consumer( context, instance.uuid) instance_rp_uuids = list(allocations) # Get provider tree for the instance. We use the uuid of the host # on which the instance is rebuilding to get the provider tree. compute_node = objects.ComputeNode.get_by_host_and_nodename( context, instance.host, instance.node) # TODO(karimull): Call with a read-only version, when available. instance_rp_tree = ( self.report_client.get_provider_tree_and_ensure_root( context, compute_node.uuid)) traits_in_instance_rps = set() for rp_uuid in instance_rp_uuids: traits_in_instance_rps.update( instance_rp_tree.data(rp_uuid).traits) missing_traits = image_traits - traits_in_instance_rps if missing_traits: raise exception.NoValidHost( reason=_("Image traits cannot be " "satisfied by the current resource providers. " "Either specify a different image during rebuild " "or create a new server with the specified image.")) # TODO(avolkov): move method to bdm @staticmethod def _volume_size(instance_type, bdm): size = bdm.get('volume_size') # NOTE (ndipanov): inherit flavor size only for swap and ephemeral if (size is None and bdm.get('source_type') == 'blank' and bdm.get('destination_type') == 'local'): if bdm.get('guest_format') == 'swap': size = instance_type.get('swap', 0) else: size = instance_type.get('ephemeral_gb', 0) return size def _create_block_device_mapping(self, cell, instance_type, instance_uuid, block_device_mapping): """Create the BlockDeviceMapping objects in the db. This method makes a copy of the list in order to avoid using the same id field in case this is called for multiple instances. """ LOG.debug("block_device_mapping %s", list(block_device_mapping), instance_uuid=instance_uuid) instance_block_device_mapping = copy.deepcopy(block_device_mapping) for bdm in instance_block_device_mapping: bdm.volume_size = self._volume_size(instance_type, bdm) bdm.instance_uuid = instance_uuid with obj_target_cell(bdm, cell): bdm.update_or_create() return instance_block_device_mapping def _create_tags(self, context, instance_uuid, tags): """Create the Tags objects in the db.""" if tags: tag_list = [tag.tag for tag in tags] instance_tags = objects.TagList.create( context, instance_uuid, tag_list) return instance_tags else: return tags def _create_instance_action_for_cell0(self, context, instance, exc): """Create a failed "create" instance action for the instance in cell0. :param context: nova auth RequestContext targeted at cell0 :param instance: Instance object being buried in cell0 :param exc: Exception that occurred which resulted in burial """ # First create the action record. objects.InstanceAction.action_start( context, instance.uuid, instance_actions.CREATE, want_result=False) # Now create an event for that action record. event_name = 'conductor_schedule_and_build_instances' objects.InstanceActionEvent.event_start( context, instance.uuid, event_name, want_result=False, host=self.host) # And finish the event with the exception. 
Note that we expect this # method to be called from _bury_in_cell0 which is called from within # an exception handler so sys.exc_info should return values but if not # it's not the end of the world - this is best effort. objects.InstanceActionEvent.event_finish_with_failure( context, instance.uuid, event_name, exc_val=exc, exc_tb=sys.exc_info()[2], want_result=False) def _bury_in_cell0(self, context, request_spec, exc, build_requests=None, instances=None, block_device_mapping=None, tags=None): """Ensure all provided build_requests and instances end up in cell0. Cell0 is the fake cell we schedule dead instances to when we can't schedule them somewhere real. Requests that don't yet have instances will get a new instance, created in cell0. Instances that have not yet been created will be created in cell0. All build requests are destroyed after we're done. Failure to delete a build request will trigger the instance deletion, just like the happy path in schedule_and_build_instances() below. """ try: cell0 = objects.CellMapping.get_by_uuid( context, objects.CellMapping.CELL0_UUID) except exception.CellMappingNotFound: # Not yet setup for cellsv2. Instances will need to be written # to the configured database. This will become a deployment # error in Ocata. LOG.error('No cell mapping found for cell0 while ' 'trying to record scheduling failure. ' 'Setup is incomplete.') return build_requests = build_requests or [] instances = instances or [] instances_by_uuid = {inst.uuid: inst for inst in instances} for build_request in build_requests: if build_request.instance_uuid not in instances_by_uuid: # This is an instance object with no matching db entry. instance = build_request.get_new_instance(context) instances_by_uuid[instance.uuid] = instance updates = {'vm_state': vm_states.ERROR, 'task_state': None} for instance in instances_by_uuid.values(): inst_mapping = None try: # We don't need the cell0-targeted context here because the # instance mapping is in the API DB. inst_mapping = \ objects.InstanceMapping.get_by_instance_uuid( context, instance.uuid) except exception.InstanceMappingNotFound: # The API created the instance mapping record so it should # definitely be here. Log an error but continue to create the # instance in the cell0 database. LOG.error('While burying instance in cell0, no instance ' 'mapping was found.', instance=instance) # Perform a final sanity check that the instance is not mapped # to some other cell already because of maybe some crazy # clustered message queue weirdness. if inst_mapping and inst_mapping.cell_mapping is not None: LOG.error('When attempting to bury instance in cell0, the ' 'instance is already mapped to cell %s. Ignoring ' 'bury in cell0 attempt.', inst_mapping.cell_mapping.identity, instance=instance) continue with obj_target_cell(instance, cell0) as cctxt: instance.create() if inst_mapping: inst_mapping.cell_mapping = cell0 inst_mapping.save() # Record an instance action with a failed event. self._create_instance_action_for_cell0( cctxt, instance, exc) # NOTE(mnaser): In order to properly clean-up volumes after # being buried in cell0, we need to store BDMs. if block_device_mapping: self._create_block_device_mapping( cell0, instance.flavor, instance.uuid, block_device_mapping) self._create_tags(cctxt, instance.uuid, tags) # Use the context targeted to cell0 here since the instance is # now in cell0. 
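# _bury_in_cell0() has to handle two shapes of scheduling failure: build
# requests that never got an instance record, and instances that exist but
# could not be placed. It merges both into one uuid-keyed map so every failed
# request ends up recorded as an ERROR instance in cell0. A reduced sketch of
# that merge, with dicts standing in for BuildRequest/Instance objects:

def collect_instances_to_bury(build_requests, instances):
    instances_by_uuid = {inst['uuid']: inst for inst in instances}
    for req in build_requests:
        # A build request with no matching instance record yet gets a fresh
        # stand-in instance so the failure can still be recorded.
        instances_by_uuid.setdefault(
            req['instance_uuid'], {'uuid': req['instance_uuid']})
    return instances_by_uuid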
self._set_vm_state_and_notify( cctxt, instance.uuid, 'build_instances', updates, exc, request_spec) for build_request in build_requests: try: build_request.destroy() except exception.BuildRequestNotFound: # Instance was deleted before we finished scheduling inst = instances_by_uuid[build_request.instance_uuid] with obj_target_cell(inst, cell0): inst.destroy() def schedule_and_build_instances(self, context, build_requests, request_specs, image, admin_password, injected_files, requested_networks, block_device_mapping, tags=None): # Add all the UUIDs for the instances instance_uuids = [spec.instance_uuid for spec in request_specs] try: host_lists = self._schedule_instances(context, request_specs[0], instance_uuids, return_alternates=True) except Exception as exc: LOG.exception('Failed to schedule instances') self._bury_in_cell0(context, request_specs[0], exc, build_requests=build_requests, block_device_mapping=block_device_mapping, tags=tags) return host_mapping_cache = {} cell_mapping_cache = {} instances = [] host_az = {} # host=az cache to optimize multi-create for (build_request, request_spec, host_list) in six.moves.zip( build_requests, request_specs, host_lists): instance = build_request.get_new_instance(context) # host_list is a list of one or more Selection objects, the first # of which has been selected and its resources claimed. host = host_list[0] # Convert host from the scheduler into a cell record if host.service_host not in host_mapping_cache: try: host_mapping = objects.HostMapping.get_by_host( context, host.service_host) host_mapping_cache[host.service_host] = host_mapping except exception.HostMappingNotFound as exc: LOG.error('No host-to-cell mapping found for selected ' 'host %(host)s. Setup is incomplete.', {'host': host.service_host}) self._bury_in_cell0( context, request_spec, exc, build_requests=[build_request], instances=[instance], block_device_mapping=block_device_mapping, tags=tags) # This is a placeholder in case the quota recheck fails. instances.append(None) continue else: host_mapping = host_mapping_cache[host.service_host] cell = host_mapping.cell_mapping # Before we create the instance, let's make one final check that # the build request is still around and wasn't deleted by the user # already. try: objects.BuildRequest.get_by_instance_uuid( context, instance.uuid) except exception.BuildRequestNotFound: # the build request is gone so we're done for this instance LOG.debug('While scheduling instance, the build request ' 'was already deleted.', instance=instance) # This is a placeholder in case the quota recheck fails. instances.append(None) # If the build request was deleted and the instance is not # going to be created, there is on point in leaving an orphan # instance mapping so delete it. try: im = objects.InstanceMapping.get_by_instance_uuid( context, instance.uuid) im.destroy() except exception.InstanceMappingNotFound: pass self.report_client.delete_allocation_for_instance( context, instance.uuid) continue else: if host.service_host not in host_az: host_az[host.service_host] = ( availability_zones.get_host_availability_zone( context, host.service_host)) instance.availability_zone = host_az[host.service_host] with obj_target_cell(instance, cell): instance.create() instances.append(instance) cell_mapping_cache[instance.uuid] = cell # NOTE(melwitt): We recheck the quota after creating the # objects to prevent users from allocating more resources # than their allowed quota in the event of a race. 
This is # configurable because it can be expensive if strict quota # limits are not required in a deployment. if CONF.quota.recheck_quota: try: compute_utils.check_num_instances_quota( context, instance.flavor, 0, 0, orig_num_req=len(build_requests)) except exception.TooManyInstances as exc: with excutils.save_and_reraise_exception(): self._cleanup_build_artifacts(context, exc, instances, build_requests, request_specs, block_device_mapping, tags, cell_mapping_cache) zipped = six.moves.zip(build_requests, request_specs, host_lists, instances) for (build_request, request_spec, host_list, instance) in zipped: if instance is None: # Skip placeholders that were buried in cell0 or had their # build requests deleted by the user before instance create. continue cell = cell_mapping_cache[instance.uuid] # host_list is a list of one or more Selection objects, the first # of which has been selected and its resources claimed. host = host_list.pop(0) alts = [(alt.service_host, alt.nodename) for alt in host_list] LOG.debug("Selected host: %s; Selected node: %s; Alternates: %s", host.service_host, host.nodename, alts, instance=instance) filter_props = request_spec.to_legacy_filter_properties_dict() scheduler_utils.populate_retry(filter_props, instance.uuid) scheduler_utils.populate_filter_properties(filter_props, host) # Now that we have a selected host (which has claimed resource # allocations in the scheduler) for this instance, we may need to # map allocations to resource providers in the request spec. try: scheduler_utils.fill_provider_mapping(request_spec, host) except Exception as exc: # If anything failed here we need to cleanup and bail out. with excutils.save_and_reraise_exception(): self._cleanup_build_artifacts( context, exc, instances, build_requests, request_specs, block_device_mapping, tags, cell_mapping_cache) # TODO(melwitt): Maybe we should set_target_cell on the contexts # once we map to a cell, and remove these separate with statements. with obj_target_cell(instance, cell) as cctxt: # send a state update notification for the initial create to # show it going from non-existent to BUILDING # This can lazy-load attributes on instance. notifications.send_update_with_states(cctxt, instance, None, vm_states.BUILDING, None, None, service="conductor") objects.InstanceAction.action_start( cctxt, instance.uuid, instance_actions.CREATE, want_result=False) instance_bdms = self._create_block_device_mapping( cell, instance.flavor, instance.uuid, block_device_mapping) instance_tags = self._create_tags(cctxt, instance.uuid, tags) # TODO(Kevin Zheng): clean this up once instance.create() handles # tags; we do this so the instance.create notification in # build_and_run_instance in nova-compute doesn't lazy-load tags instance.tags = instance_tags if instance_tags \ else objects.TagList() # Update mapping for instance. self._map_instance_to_cell(context, instance, cell) if not self._delete_build_request( context, build_request, instance, cell, instance_bdms, instance_tags): # The build request was deleted before/during scheduling so # the instance is gone and we don't have anything to build for # this one. continue accel_uuids = [] try: resource_provider_mapping = ( request_spec.get_request_group_mapping()) # Using nodename instead of hostname. 
See: # http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011044.html # noqa accel_uuids = self._create_and_bind_arqs( context, instance.uuid, instance.flavor.extra_specs, host.nodename, resource_provider_mapping) except Exception as exc: # If anything failed here we need to cleanup and bail out. with excutils.save_and_reraise_exception(): self._cleanup_build_artifacts( context, exc, instances, build_requests, request_specs, block_device_mapping, tags, cell_mapping_cache) # NOTE(danms): Compute RPC expects security group names or ids # not objects, so convert this to a list of names until we can # pass the objects. legacy_secgroups = [s.identifier for s in request_spec.security_groups] with obj_target_cell(instance, cell) as cctxt: self.compute_rpcapi.build_and_run_instance( cctxt, instance=instance, image=image, request_spec=request_spec, filter_properties=filter_props, admin_password=admin_password, injected_files=injected_files, requested_networks=requested_networks, security_groups=legacy_secgroups, block_device_mapping=instance_bdms, host=host.service_host, node=host.nodename, limits=host.limits, host_list=host_list, accel_uuids=accel_uuids) def _create_and_bind_arqs(self, context, instance_uuid, extra_specs, hostname, resource_provider_mapping): """Create ARQs, determine their RPs and initiate ARQ binding. The binding is asynchronous; Cyborg will notify on completion. The notification will be handled in the compute manager. """ dp_name = extra_specs.get('accel:device_profile') if not dp_name: return [] LOG.debug('Calling Cyborg to get ARQs. dp_name=%s instance=%s', dp_name, instance_uuid) cyclient = cyborg.get_client(context) arqs = cyclient.create_arqs_and_match_resource_providers( dp_name, resource_provider_mapping) LOG.debug('Got ARQs with resource provider mapping %s', arqs) bindings = {arq['uuid']: {"hostname": hostname, "device_rp_uuid": arq['device_rp_uuid'], "instance_uuid": instance_uuid } for arq in arqs} # Initiate Cyborg binding asynchronously cyclient.bind_arqs(bindings=bindings) return [arq['uuid'] for arq in arqs] @staticmethod def _map_instance_to_cell(context, instance, cell): """Update the instance mapping to point at the given cell. During initial scheduling once a host and cell is selected in which to build the instance this method is used to update the instance mapping to point at that cell. :param context: nova auth RequestContext :param instance: Instance object being built :param cell: CellMapping representing the cell in which the instance was created and is being built. :returns: InstanceMapping object that was updated. """ inst_mapping = objects.InstanceMapping.get_by_instance_uuid( context, instance.uuid) # Perform a final sanity check that the instance is not mapped # to some other cell already because of maybe some crazy # clustered message queue weirdness. if inst_mapping.cell_mapping is not None: LOG.error('During scheduling instance is already mapped to ' 'another cell: %s. This should not happen and is an ' 'indication of bigger problems. If you see this you ' 'should report it to the nova team. 
Overwriting ' 'the mapping to point at cell %s.', inst_mapping.cell_mapping.identity, cell.identity, instance=instance) inst_mapping.cell_mapping = cell inst_mapping.save() return inst_mapping def _cleanup_build_artifacts(self, context, exc, instances, build_requests, request_specs, block_device_mappings, tags, cell_mapping_cache): for (instance, build_request, request_spec) in six.moves.zip( instances, build_requests, request_specs): # Skip placeholders that were buried in cell0 or had their # build requests deleted by the user before instance create. if instance is None: continue updates = {'vm_state': vm_states.ERROR, 'task_state': None} cell = cell_mapping_cache[instance.uuid] with try_target_cell(context, cell) as cctxt: self._set_vm_state_and_notify(cctxt, instance.uuid, 'build_instances', updates, exc, request_spec) # In order to properly clean-up volumes when deleting a server in # ERROR status with no host, we need to store BDMs in the same # cell. if block_device_mappings: self._create_block_device_mapping( cell, instance.flavor, instance.uuid, block_device_mappings) # Like BDMs, the server tags provided by the user when creating the # server should be persisted in the same cell so they can be shown # from the API. if tags: with nova_context.target_cell(context, cell) as cctxt: self._create_tags(cctxt, instance.uuid, tags) # NOTE(mdbooth): To avoid an incomplete instance record being # returned by the API, the instance mapping must be # created after the instance record is complete in # the cell, and before the build request is # destroyed. # TODO(mnaser): The cell mapping should already be populated by # this point to avoid setting it below here. inst_mapping = objects.InstanceMapping.get_by_instance_uuid( context, instance.uuid) inst_mapping.cell_mapping = cell inst_mapping.save() # Be paranoid about artifacts being deleted underneath us. try: build_request.destroy() except exception.BuildRequestNotFound: pass try: request_spec.destroy() except exception.RequestSpecNotFound: pass def _delete_build_request(self, context, build_request, instance, cell, instance_bdms, instance_tags): """Delete a build request after creating the instance in the cell. This method handles cleaning up the instance in case the build request is already deleted by the time we try to delete it. :param context: the context of the request being handled :type context: nova.context.RequestContext :param build_request: the build request to delete :type build_request: nova.objects.BuildRequest :param instance: the instance created from the build_request :type instance: nova.objects.Instance :param cell: the cell in which the instance was created :type cell: nova.objects.CellMapping :param instance_bdms: list of block device mappings for the instance :type instance_bdms: nova.objects.BlockDeviceMappingList :param instance_tags: list of tags for the instance :type instance_tags: nova.objects.TagList :returns: True if the build request was successfully deleted, False if the build request was already deleted and the instance is now gone. """ try: build_request.destroy() except exception.BuildRequestNotFound: # This indicates an instance deletion request has been # processed, and the build should halt here. Clean up the # bdm, tags and instance record. 
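# An illustrative aside, not part of the original module: a minimal sketch
# of the cell-targeting pattern used by the cleanup paths in this file.
# Conductor temporarily points a RequestContext at a specific cell database
# before touching instance records that live there. The helpers referenced
# (nova_context.target_cell, objects.Instance, vm_states) are the same names
# used elsewhere in this module; example_mark_error() itself is hypothetical
# and exists only for illustration.
def example_mark_error(context, cell_mapping, instance_uuid):
    """Set an instance to ERROR in the cell database that owns its record."""
    # target_cell() yields a copy of the context whose database connection
    # is switched to the given cell for the duration of the block.
    with nova_context.target_cell(context, cell_mapping) as cctxt:
        instance = objects.Instance.get_by_uuid(cctxt, instance_uuid)
        instance.vm_state = vm_states.ERROR
        instance.task_state = None
        instance.save()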
with obj_target_cell(instance, cell) as cctxt: with compute_utils.notify_about_instance_delete( self.notifier, cctxt, instance, source=fields.NotificationSource.CONDUCTOR): try: instance.destroy() except exception.InstanceNotFound: pass except exception.ObjectActionError: # NOTE(melwitt): Instance became scheduled during # the destroy, "host changed". Refresh and re-destroy. try: instance.refresh() instance.destroy() except exception.InstanceNotFound: pass for bdm in instance_bdms: with obj_target_cell(bdm, cell): try: bdm.destroy() except exception.ObjectActionError: pass if instance_tags: with try_target_cell(context, cell) as target_ctxt: try: objects.TagList.destroy(target_ctxt, instance.uuid) except exception.InstanceNotFound: pass return False return True def cache_images(self, context, aggregate, image_ids): """Cache a set of images on the set of hosts in an aggregate. :param context: The RequestContext :param aggregate: The Aggregate object from the request to constrain the host list :param image_id: The IDs of the image to cache """ # TODO(mriedem): Consider including the list of images in the # notification payload. compute_utils.notify_about_aggregate_action( context, aggregate, fields.NotificationAction.IMAGE_CACHE, fields.NotificationPhase.START) clock = timeutils.StopWatch() threads = CONF.image_cache.precache_concurrency fetch_pool = eventlet.GreenPool(size=threads) hosts_by_cell = {} cells_by_uuid = {} # TODO(danms): Make this a much more efficient bulk query for hostname in aggregate.hosts: hmap = objects.HostMapping.get_by_host(context, hostname) cells_by_uuid.setdefault(hmap.cell_mapping.uuid, hmap.cell_mapping) hosts_by_cell.setdefault(hmap.cell_mapping.uuid, []) hosts_by_cell[hmap.cell_mapping.uuid].append(hostname) LOG.info('Preparing to request pre-caching of image(s) %(image_ids)s ' 'on %(hosts)i hosts across %(cells)i cells.', {'image_ids': ','.join(image_ids), 'hosts': len(aggregate.hosts), 'cells': len(hosts_by_cell)}) clock.start() stats = collections.defaultdict(lambda: (0, 0, 0, 0)) failed_images = collections.defaultdict(int) down_hosts = set() host_stats = { 'completed': 0, 'total': len(aggregate.hosts), } def host_completed(context, host, result): for image_id, status in result.items(): cached, existing, error, unsupported = stats[image_id] if status == 'error': failed_images[image_id] += 1 error += 1 elif status == 'cached': cached += 1 elif status == 'existing': existing += 1 elif status == 'unsupported': unsupported += 1 stats[image_id] = (cached, existing, error, unsupported) host_stats['completed'] += 1 compute_utils.notify_about_aggregate_cache(context, aggregate, host, result, host_stats['completed'], host_stats['total']) def wrap_cache_images(ctxt, host, image_ids): result = self.compute_rpcapi.cache_images( ctxt, host=host, image_ids=image_ids) host_completed(context, host, result) def skipped_host(context, host, image_ids): result = {image: 'skipped' for image in image_ids} host_completed(context, host, result) for cell_uuid, hosts in hosts_by_cell.items(): cell = cells_by_uuid[cell_uuid] with nova_context.target_cell(context, cell) as target_ctxt: for host in hosts: service = objects.Service.get_by_compute_host(target_ctxt, host) if not self.servicegroup_api.service_is_up(service): down_hosts.add(host) LOG.info( 'Skipping image pre-cache request to compute ' '%(host)r because it is not up', {'host': host}) skipped_host(target_ctxt, host, image_ids) continue fetch_pool.spawn_n(wrap_cache_images, target_ctxt, host, image_ids) # Wait until all those 
things finish fetch_pool.waitall() overall_stats = {'cached': 0, 'existing': 0, 'error': 0, 'unsupported': 0} for cached, existing, error, unsupported in stats.values(): overall_stats['cached'] += cached overall_stats['existing'] += existing overall_stats['error'] += error overall_stats['unsupported'] += unsupported clock.stop() LOG.info('Image pre-cache operation for image(s) %(image_ids)s ' 'completed in %(time).2f seconds; ' '%(cached)i cached, %(existing)i existing, %(error)i errors, ' '%(unsupported)i unsupported, %(skipped)i skipped (down) ' 'hosts', {'image_ids': ','.join(image_ids), 'time': clock.elapsed(), 'cached': overall_stats['cached'], 'existing': overall_stats['existing'], 'error': overall_stats['error'], 'unsupported': overall_stats['unsupported'], 'skipped': len(down_hosts), }) # Log error'd images specifically at warning level for image_id, fails in failed_images.items(): LOG.warning('Image pre-cache operation for image %(image)s ' 'failed %(fails)i times', {'image': image_id, 'fails': fails}) compute_utils.notify_about_aggregate_action( context, aggregate, fields.NotificationAction.IMAGE_CACHE, fields.NotificationPhase.END) @targets_cell @wrap_instance_event(prefix='conductor') def confirm_snapshot_based_resize(self, context, instance, migration): """Executes the ConfirmResizeTask :param context: nova auth request context targeted at the target cell :param instance: Instance object in "resized" status from the target cell :param migration: Migration object from the target cell for the resize operation expected to have status "confirming" """ task = cross_cell_migrate.ConfirmResizeTask( context, instance, migration, self.notifier, self.compute_rpcapi) task.execute() @targets_cell # NOTE(mriedem): Upon successful completion of RevertResizeTask the # instance is hard-deleted, along with its instance action record(s), from # the target cell database so EventReporter hits InstanceActionNotFound on # __exit__. Pass graceful_exit=True to avoid an ugly traceback. @wrap_instance_event(prefix='conductor', graceful_exit=True) def revert_snapshot_based_resize(self, context, instance, migration): """Executes the RevertResizeTask :param context: nova auth request context targeted at the target cell :param instance: Instance object in "resized" status from the target cell :param migration: Migration object from the target cell for the resize operation expected to have status "reverting" """ task = cross_cell_migrate.RevertResizeTask( context, instance, migration, self.notifier, self.compute_rpcapi) task.execute() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conductor/rpcapi.py0000664000175000017500000005052300000000000017366 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # Copyright 2013 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
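# An illustrative aside, not part of the original module: a small,
# self-contained sketch of the bookkeeping cache_images() performs above.
# Each compute host returns a dict mapping image id to a status string
# ('cached', 'existing', 'error' or 'unsupported'), and conductor folds the
# per-host results into per-image and overall counters. The host results
# below are made up purely for illustration.
import collections

example_host_results = [
    {'img-1': 'cached', 'img-2': 'existing'},   # host A
    {'img-1': 'error', 'img-2': 'cached'},      # host B
]
per_image = collections.defaultdict(
    lambda: {'cached': 0, 'existing': 0, 'error': 0, 'unsupported': 0})
for result in example_host_results:
    for image_id, status in result.items():
        if status in per_image[image_id]:
            per_image[image_id][status] += 1
overall = collections.Counter()
for counters in per_image.values():
    overall.update(counters)
# per_image['img-1']['cached'] == 1 and per_image['img-1']['error'] == 1;
# overall['cached'] == 2, overall['existing'] == 1, overall['error'] == 1.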
"""Client side of the conductor RPC API.""" import oslo_messaging as messaging from oslo_serialization import jsonutils from oslo_versionedobjects import base as ovo_base import nova.conf from nova import exception from nova.i18n import _ from nova.objects import base as objects_base from nova import profiler from nova import rpc CONF = nova.conf.CONF RPC_TOPIC = 'conductor' @profiler.trace_cls("rpc") class ConductorAPI(object): """Client side of the conductor RPC API API version history: * 1.0 - Initial version. * 1.1 - Added migration_update * 1.2 - Added instance_get_by_uuid and instance_get_all_by_host * 1.3 - Added aggregate_host_add and aggregate_host_delete * 1.4 - Added migration_get * 1.5 - Added bw_usage_update * 1.6 - Added get_backdoor_port() * 1.7 - Added aggregate_get_by_host, aggregate_metadata_add, and aggregate_metadata_delete * 1.8 - Added security_group_get_by_instance and security_group_rule_get_by_security_group * 1.9 - Added provider_fw_rule_get_all * 1.10 - Added agent_build_get_by_triple * 1.11 - Added aggregate_get * 1.12 - Added block_device_mapping_update_or_create * 1.13 - Added block_device_mapping_get_all_by_instance * 1.14 - Added block_device_mapping_destroy * 1.15 - Added instance_get_all_by_filters and instance_get_all_hung_in_rebooting and instance_get_active_by_window Deprecated instance_get_all_by_host * 1.16 - Added instance_destroy * 1.17 - Added instance_info_cache_delete * 1.18 - Added instance_type_get * 1.19 - Added vol_get_usage_by_time and vol_usage_update * 1.20 - Added migration_get_unconfirmed_by_dest_compute * 1.21 - Added service_get_all_by * 1.22 - Added ping * 1.23 - Added instance_get_all Un-Deprecate instance_get_all_by_host * 1.24 - Added instance_get * 1.25 - Added action_event_start and action_event_finish * 1.26 - Added instance_info_cache_update * 1.27 - Added service_create * 1.28 - Added binary arg to service_get_all_by * 1.29 - Added service_destroy * 1.30 - Added migration_create * 1.31 - Added migration_get_in_progress_by_host_and_node * 1.32 - Added optional node to instance_get_all_by_host * 1.33 - Added compute_node_create and compute_node_update * 1.34 - Added service_update * 1.35 - Added instance_get_active_by_window_joined * 1.36 - Added instance_fault_create * 1.37 - Added task_log_get, task_log_begin_task, task_log_end_task * 1.38 - Added service name to instance_update * 1.39 - Added notify_usage_exists * 1.40 - Added security_groups_trigger_handler and security_groups_trigger_members_refresh Remove instance_get_active_by_window * 1.41 - Added fixed_ip_get_by_instance, network_get, instance_floating_address_get_all, quota_commit, quota_rollback * 1.42 - Added get_ec2_ids, aggregate_metadata_get_by_host * 1.43 - Added compute_stop * 1.44 - Added compute_node_delete * 1.45 - Added project_id to quota_commit and quota_rollback * 1.46 - Added compute_confirm_resize * 1.47 - Added columns_to_join to instance_get_all_by_host and instance_get_all_by_filters * 1.48 - Added compute_unrescue ... Grizzly supports message version 1.48. So, any changes to existing methods in 2.x after that point should be done such that they can handle the version_cap being set to 1.48. 
* 1.49 - Added columns_to_join to instance_get_by_uuid * 1.50 - Added object_action() and object_class_action() * 1.51 - Added the 'legacy' argument to block_device_mapping_get_all_by_instance * 1.52 - Pass instance objects for compute_confirm_resize * 1.53 - Added compute_reboot * 1.54 - Added 'update_cells' argument to bw_usage_update * 1.55 - Pass instance objects for compute_stop * 1.56 - Remove compute_confirm_resize and migration_get_unconfirmed_by_dest_compute * 1.57 - Remove migration_create() * 1.58 - Remove migration_get() ... Havana supports message version 1.58. So, any changes to existing methods in 1.x after that point should be done such that they can handle the version_cap being set to 1.58. * 1.59 - Remove instance_info_cache_update() * 1.60 - Remove aggregate_metadata_add() and aggregate_metadata_delete() * ... - Remove security_group_get_by_instance() and security_group_rule_get_by_security_group() * 1.61 - Return deleted instance from instance_destroy() * 1.62 - Added object_backport() * 1.63 - Changed the format of values['stats'] from a dict to a JSON string in compute_node_update() * 1.64 - Added use_slave to instance_get_all_filters() - Remove instance_type_get() - Remove aggregate_get() - Remove aggregate_get_by_host() - Remove instance_get() - Remove migration_update() - Remove block_device_mapping_destroy() * 2.0 - Drop backwards compatibility - Remove quota_rollback() and quota_commit() - Remove aggregate_host_add() and aggregate_host_delete() - Remove network_migrate_instance_start() and network_migrate_instance_finish() - Remove vol_get_usage_by_time ... Icehouse supports message version 2.0. So, any changes to existing methods in 2.x after that point should be done such that they can handle the version_cap being set to 2.0. * Remove instance_destroy() * Remove compute_unrescue() * Remove instance_get_all_by_filters() * Remove instance_get_active_by_window_joined() * Remove instance_fault_create() * Remove action_event_start() and action_event_finish() * Remove instance_get_by_uuid() * Remove agent_build_get_by_triple() ... Juno supports message version 2.0. So, any changes to existing methods in 2.x after that point should be done such that they can handle the version_cap being set to 2.0. * 2.1 - Make notify_usage_exists() take an instance object * Remove bw_usage_update() * Remove notify_usage_exists() ... Kilo supports message version 2.1. So, any changes to existing methods in 2.x after that point should be done such that they can handle the version_cap being set to 2.1. * Remove get_ec2_ids() * Remove service_get_all_by() * Remove service_create() * Remove service_destroy() * Remove service_update() * Remove migration_get_in_progress_by_host_and_node() * Remove aggregate_metadata_get_by_host() * Remove block_device_mapping_update_or_create() * Remove block_device_mapping_get_all_by_instance() * Remove instance_get_all_by_host() * Remove compute_node_update() * Remove compute_node_delete() * Remove security_groups_trigger_handler() * Remove task_log_get() * Remove task_log_begin_task() * Remove task_log_end_task() * Remove security_groups_trigger_members_refresh() * Remove vol_usage_update() * Remove instance_update() * 2.2 - Add object_backport_versions() * 2.3 - Add object_class_action_versions() * Remove compute_node_create() * Remove object_backport() * 3.0 - Drop backwards compatibility ... Liberty, Mitaka, Newton, and Ocata support message version 3.0. 
So, any changes to existing methods in 3.x after that point should be done such that they can handle the version_cap being set to 3.0. * Remove provider_fw_rule_get_all() """ VERSION_ALIASES = { 'grizzly': '1.48', 'havana': '1.58', 'icehouse': '2.0', 'juno': '2.0', 'kilo': '2.1', 'liberty': '3.0', 'mitaka': '3.0', 'newton': '3.0', 'ocata': '3.0', } def __init__(self): super(ConductorAPI, self).__init__() target = messaging.Target(topic=RPC_TOPIC, version='3.0') version_cap = self.VERSION_ALIASES.get(CONF.upgrade_levels.conductor, CONF.upgrade_levels.conductor) serializer = objects_base.NovaObjectSerializer() self.client = rpc.get_client(target, version_cap=version_cap, serializer=serializer) # TODO(hanlind): This method can be removed once oslo.versionedobjects # has been converted to use version_manifests in remotable_classmethod # operations, which will use the new class action handler. def object_class_action(self, context, objname, objmethod, objver, args, kwargs): versions = ovo_base.obj_tree_get_versions(objname) return self.object_class_action_versions(context, objname, objmethod, versions, args, kwargs) def object_class_action_versions(self, context, objname, objmethod, object_versions, args, kwargs): cctxt = self.client.prepare() return cctxt.call(context, 'object_class_action_versions', objname=objname, objmethod=objmethod, object_versions=object_versions, args=args, kwargs=kwargs) def object_action(self, context, objinst, objmethod, args, kwargs): cctxt = self.client.prepare() return cctxt.call(context, 'object_action', objinst=objinst, objmethod=objmethod, args=args, kwargs=kwargs) def object_backport_versions(self, context, objinst, object_versions): cctxt = self.client.prepare() return cctxt.call(context, 'object_backport_versions', objinst=objinst, object_versions=object_versions) @profiler.trace_cls("rpc") class ComputeTaskAPI(object): """Client side of the conductor 'compute' namespaced RPC API API version history: 1.0 - Initial version (empty). 1.1 - Added unified migrate_server call. 1.2 - Added build_instances 1.3 - Added unshelve_instance 1.4 - Added reservations to migrate_server. 1.5 - Added the legacy_bdm parameter to build_instances 1.6 - Made migrate_server use instance objects 1.7 - Do not send block_device_mapping and legacy_bdm to build_instances 1.8 - Add rebuild_instance 1.9 - Converted requested_networks to NetworkRequestList object 1.10 - Made migrate_server() and build_instances() send flavor objects 1.11 - Added clean_shutdown to migrate_server() 1.12 - Added request_spec to rebuild_instance() 1.13 - Added request_spec to migrate_server() 1.14 - Added request_spec to unshelve_instance() 1.15 - Added live_migrate_instance 1.16 - Added schedule_and_build_instances 1.17 - Added tags to schedule_and_build_instances() 1.18 - Added request_spec to build_instances(). 1.19 - build_instances() now gets a 'host_lists' parameter that represents potential alternate hosts for retries within a cell for each instance. 1.20 - migrate_server() now gets a 'host_list' parameter that represents potential alternate hosts for retries within a cell. 
1.21 - Added cache_images() 1.22 - Added confirm_snapshot_based_resize() 1.23 - Added revert_snapshot_based_resize() """ def __init__(self): super(ComputeTaskAPI, self).__init__() target = messaging.Target(topic=RPC_TOPIC, namespace='compute_task', version='1.0') serializer = objects_base.NovaObjectSerializer() self.client = rpc.get_client(target, serializer=serializer) def live_migrate_instance(self, context, instance, scheduler_hint, block_migration, disk_over_commit, request_spec): kw = {'instance': instance, 'scheduler_hint': scheduler_hint, 'block_migration': block_migration, 'disk_over_commit': disk_over_commit, 'request_spec': request_spec, } version = '1.15' cctxt = self.client.prepare(version=version) cctxt.cast(context, 'live_migrate_instance', **kw) # TODO(melwitt): Remove the reservations parameter in version 2.0 of the # RPC API. # TODO(mriedem): Make request_spec required *and* a RequestSpec object # rather than a legacy dict in version 2.0 of the RPC API. def migrate_server(self, context, instance, scheduler_hint, live, rebuild, flavor, block_migration, disk_over_commit, reservations=None, clean_shutdown=True, request_spec=None, host_list=None, do_cast=False): kw = {'instance': instance, 'scheduler_hint': scheduler_hint, 'live': live, 'rebuild': rebuild, 'flavor': flavor, 'block_migration': block_migration, 'disk_over_commit': disk_over_commit, 'reservations': reservations, 'clean_shutdown': clean_shutdown, 'request_spec': request_spec, 'host_list': host_list, } version = '1.20' if not self.client.can_send_version(version): del kw['host_list'] version = '1.13' if not self.client.can_send_version(version): del kw['request_spec'] version = '1.11' if not self.client.can_send_version(version): del kw['clean_shutdown'] version = '1.10' if not self.client.can_send_version(version): kw['flavor'] = objects_base.obj_to_primitive(flavor) version = '1.6' if not self.client.can_send_version(version): kw['instance'] = jsonutils.to_primitive( objects_base.obj_to_primitive(instance)) version = '1.4' cctxt = self.client.prepare( version=version, call_monitor_timeout=CONF.rpc_response_timeout, timeout=CONF.long_rpc_timeout) if do_cast: return cctxt.cast(context, 'migrate_server', **kw) return cctxt.call(context, 'migrate_server', **kw) def build_instances(self, context, instances, image, filter_properties, admin_password, injected_files, requested_networks, security_groups, block_device_mapping, legacy_bdm=True, request_spec=None, host_lists=None): image_p = jsonutils.to_primitive(image) kwargs = {"instances": instances, "image": image_p, "filter_properties": filter_properties, "admin_password": admin_password, "injected_files": injected_files, "requested_networks": requested_networks, "security_groups": security_groups, "request_spec": request_spec, "host_lists": host_lists} version = '1.19' if not self.client.can_send_version(version): version = '1.18' kwargs.pop("host_lists") if not self.client.can_send_version(version): version = '1.10' kwargs.pop("request_spec") if not self.client.can_send_version(version): version = '1.9' if 'instance_type' in filter_properties: flavor = filter_properties['instance_type'] flavor_p = objects_base.obj_to_primitive(flavor) kwargs["filter_properties"] = dict(filter_properties, instance_type=flavor_p) if not self.client.can_send_version(version): version = '1.8' nets = kwargs['requested_networks'].as_tuples() kwargs['requested_networks'] = nets if not self.client.can_send_version('1.7'): version = '1.5' bdm_p = 
objects_base.obj_to_primitive(block_device_mapping) kwargs.update({'block_device_mapping': bdm_p, 'legacy_bdm': legacy_bdm}) cctxt = self.client.prepare(version=version) cctxt.cast(context, 'build_instances', **kwargs) def schedule_and_build_instances(self, context, build_requests, request_specs, image, admin_password, injected_files, requested_networks, block_device_mapping, tags=None): version = '1.17' kw = {'build_requests': build_requests, 'request_specs': request_specs, 'image': jsonutils.to_primitive(image), 'admin_password': admin_password, 'injected_files': injected_files, 'requested_networks': requested_networks, 'block_device_mapping': block_device_mapping, 'tags': tags} if not self.client.can_send_version(version): version = '1.16' del kw['tags'] cctxt = self.client.prepare(version=version) cctxt.cast(context, 'schedule_and_build_instances', **kw) def unshelve_instance(self, context, instance, request_spec=None): version = '1.14' kw = {'instance': instance, 'request_spec': request_spec } if not self.client.can_send_version(version): version = '1.3' del kw['request_spec'] cctxt = self.client.prepare(version=version) cctxt.cast(context, 'unshelve_instance', **kw) def rebuild_instance(self, ctxt, instance, new_pass, injected_files, image_ref, orig_image_ref, orig_sys_metadata, bdms, recreate=False, on_shared_storage=False, host=None, preserve_ephemeral=False, request_spec=None): version = '1.12' kw = {'instance': instance, 'new_pass': new_pass, 'injected_files': injected_files, 'image_ref': image_ref, 'orig_image_ref': orig_image_ref, 'orig_sys_metadata': orig_sys_metadata, 'bdms': bdms, 'recreate': recreate, 'on_shared_storage': on_shared_storage, 'preserve_ephemeral': preserve_ephemeral, 'host': host, 'request_spec': request_spec, } if not self.client.can_send_version(version): version = '1.8' del kw['request_spec'] cctxt = self.client.prepare(version=version) cctxt.cast(ctxt, 'rebuild_instance', **kw) def cache_images(self, ctxt, aggregate, image_ids): version = '1.21' if not self.client.can_send_version(version): raise exception.NovaException('Conductor RPC version pin does not ' 'allow cache_images() to be called') cctxt = self.client.prepare(version=version) cctxt.cast(ctxt, 'cache_images', aggregate=aggregate, image_ids=image_ids) def confirm_snapshot_based_resize( self, ctxt, instance, migration, do_cast=True): version = '1.22' if not self.client.can_send_version(version): raise exception.ServiceTooOld(_('nova-conductor too old')) kw = {'instance': instance, 'migration': migration} cctxt = self.client.prepare(version=version) if do_cast: return cctxt.cast(ctxt, 'confirm_snapshot_based_resize', **kw) return cctxt.call(ctxt, 'confirm_snapshot_based_resize', **kw) def revert_snapshot_based_resize(self, ctxt, instance, migration): version = '1.23' if not self.client.can_send_version(version): raise exception.ServiceTooOld(_('nova-conductor too old')) kw = {'instance': instance, 'migration': migration} cctxt = self.client.prepare(version=version) return cctxt.cast(ctxt, 'revert_snapshot_based_resize', **kw) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3024702 nova-21.2.4/nova/conductor/tasks/0000775000175000017500000000000000000000000016656 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conductor/tasks/__init__.py0000664000175000017500000000000000000000000020755 
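# An illustrative aside, not part of the original module: the ComputeTaskAPI
# methods above all follow the same compatibility dance; start from the
# newest message version and, if the client is pinned below it (for example
# via [upgrade_levels] conductor), drop the newer keyword arguments and fall
# back to an older version before sending. A minimal sketch of that pattern,
# assuming a client object exposing can_send_version() and prepare() like
# the oslo.messaging RPC client used above; example_cast() and the
# 'do_something' method name are hypothetical.
def example_cast(client, context, thing, new_option=None):
    kwargs = {'thing': thing, 'new_option': new_option}
    version = '1.2'                       # newest version added new_option
    if not client.can_send_version(version):
        kwargs.pop('new_option')          # older peers do not know about it
        version = '1.1'
    cctxt = client.prepare(version=version)
    cctxt.cast(context, 'do_something', **kwargs)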
0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/conductor/tasks/base.py0000664000175000017500000000316300000000000020145 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import functools from oslo_utils import excutils import six def rollback_wrapper(original): @functools.wraps(original) def wrap(self): try: return original(self) except Exception as ex: with excutils.save_and_reraise_exception(): self.rollback(ex) return wrap @six.add_metaclass(abc.ABCMeta) class TaskBase(object): def __init__(self, context, instance): self.context = context self.instance = instance @rollback_wrapper def execute(self): """Run task's logic, written in _execute() method """ return self._execute() @abc.abstractmethod def _execute(self): """Descendants should place task's logic here, while resource initialization should be performed over __init__ """ pass def rollback(self, ex): """Rollback failed task Descendants should implement this method to allow task user to rollback status to state before execute method was call """ pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/conductor/tasks/cross_cell_migrate.py0000664000175000017500000021756300000000000023106 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import copy from oslo_log import log as logging import oslo_messaging as messaging from oslo_utils import excutils from nova import availability_zones from nova.compute import instance_actions from nova.compute import power_state from nova.compute import task_states from nova.compute import utils as compute_utils from nova.compute import vm_states from nova.conductor.tasks import base from nova import conf from nova import context as nova_context from nova import exception from nova.i18n import _ from nova.image import glance from nova.network import constants as neutron_constants from nova.network import neutron from nova import objects from nova.objects import fields from nova.scheduler import utils as scheduler_utils from nova.volume import cinder LOG = logging.getLogger(__name__) CONF = conf.CONF def clone_creatable_object(ctxt, obj, delete_fields=None): """Targets the object at the given context and removes its id attribute Dirties all of the set fields on a new copy of the object. This is necessary before the object is created in a new cell. 
:param ctxt: cell-targeted nova auth request context to set on the clone :param obj: the object to re-target :param delete_fields: list of fields to delete from the new object; note that the ``id`` field is always deleted :returns: Cloned version of ``obj`` with all set fields marked as "changed" so they will be persisted on a subsequent ``obj.create()`` call. """ if delete_fields is None: delete_fields = [] if 'id' not in delete_fields: delete_fields.append('id') new_obj = obj.obj_clone() new_obj._context = ctxt for field in obj.obj_fields: if field in obj: if field in delete_fields: delattr(new_obj, field) else: # Dirty the field since obj_clone does not modify # _changed_fields. setattr(new_obj, field, getattr(obj, field)) return new_obj class TargetDBSetupTask(base.TaskBase): """Sub-task to create the instance data in the target cell DB. This is needed before any work can be done with the instance in the target cell, like validating the selected target compute host. """ def __init__(self, context, instance, source_migration, target_cell_context): """Initialize this task. :param context: source-cell targeted auth RequestContext :param instance: source-cell Instance object :param source_migration: source-cell Migration object for this operation :param target_cell_context: target-cell targeted auth RequestContext """ super(TargetDBSetupTask, self).__init__(context, instance) self.target_ctx = target_cell_context self.source_migration = source_migration self._target_cell_instance = None def _copy_migrations(self, migrations): """Copy migration records from the source cell to the target cell. :param migrations: MigrationList object of source cell DB records. :returns: Migration record in the target cell database that matches the active migration in the source cell. """ target_cell_migration = None for migration in migrations: migration = clone_creatable_object(self.target_ctx, migration) migration.create() if self.source_migration.uuid == migration.uuid: # Save this off so subsequent tasks don't need to look it up. target_cell_migration = migration return target_cell_migration def _execute(self): """Creates the instance and its related records in the target cell Instance.pci_devices are not copied over since those records are tightly coupled to the compute_nodes records and are meant to track inventory and allocations of PCI devices on a specific compute node. The instance.pci_requests are what "move" with the instance to the target cell and will result in new PCIDevice allocations on the target compute node in the target cell during the resize_claim. The instance.services field is not copied over since that represents the nova-compute service mapped to the instance.host, which will not make sense in the target cell. :returns: A two-item tuple of the Instance and Migration object created in the target cell """ LOG.debug( 'Creating (hidden) instance and its related records in the target ' 'cell: %s', self.target_ctx.cell_uuid, instance=self.instance) # We also have to create the BDMs and tags separately, just like in # ComputeTaskManager.schedule_and_build_instances, so get those out # of the source cell DB first before we start creating anything. # NOTE(mriedem): Console auth tokens are not copied over to the target # cell DB since they will be regenerated in the target cell as needed. # Similarly, expired console auth tokens will be automatically cleaned # from the source cell. 
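# An illustrative aside, not part of the original module: the task classes
# in this file follow the TaskBase contract from
# nova/conductor/tasks/base.py shown earlier, where _execute() holds the
# forward work, rollback() undoes it, and execute() wraps _execute() so any
# exception triggers rollback() before re-raising. A minimal sketch of a
# task written against that contract; ExampleNoopTask is hypothetical and
# only meant to show the shape, not a real Nova task.
class ExampleNoopTask(base.TaskBase):
    def __init__(self, context, instance):
        super(ExampleNoopTask, self).__init__(context, instance)
        self._did_work = False

    def _execute(self):
        # Do the forward work and remember enough state to undo it later.
        self._did_work = True
        return 'done'

    def rollback(self, ex):
        # Invoked automatically with the exception if _execute() raises.
        self._did_work = False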
bdms = self.instance.get_bdms() vifs = objects.VirtualInterfaceList.get_by_instance_uuid( self.context, self.instance.uuid) tags = self.instance.tags # We copy instance actions to preserve the history of the instance # in case the resize is confirmed. actions = objects.InstanceActionList.get_by_instance_uuid( self.context, self.instance.uuid) migrations = objects.MigrationList.get_by_filters( self.context, filters={'instance_uuid': self.instance.uuid}) # db.instance_create cannot handle some fields which might be loaded on # the instance object, so we omit those from the cloned object and # explicitly create the ones we care about (like tags) below. Things # like pci_devices and services will not make sense in the target DB # so we omit those as well. # TODO(mriedem): Determine if we care about copying faults over to the # target cell in case people use those for auditing (remember that # faults are only shown in the API for ERROR/DELETED instances and only # the most recent fault is shown). inst = clone_creatable_object( self.target_ctx, self.instance, delete_fields=['fault', 'pci_devices', 'services', 'tags']) # This part is important - we want to create the instance in the target # cell as "hidden" so while we have two copies of the instance in # different cells, listing servers out of the API will filter out the # hidden one. inst.hidden = True inst.create() self._target_cell_instance = inst # keep track of this for rollbacks # TODO(mriedem): Consider doing all of the inserts in a single # transaction context. If any of the following creates fail, the # rollback should perform a cascading hard-delete anyway. # Do the same dance for the other instance-related records. for bdm in bdms: bdm = clone_creatable_object(self.target_ctx, bdm) bdm.create() for vif in vifs: vif = clone_creatable_object(self.target_ctx, vif) vif.create() if tags: primitive_tags = [tag.tag for tag in tags] objects.TagList.create(self.target_ctx, inst.uuid, primitive_tags) for action in actions: new_action = clone_creatable_object(self.target_ctx, action) new_action.create() # For each pre-existing action, we need to also re-create its # events in the target cell. events = objects.InstanceActionEventList.get_by_action( self.context, action.id) for event in events: new_event = clone_creatable_object(self.target_ctx, event) new_event.create(action.instance_uuid, action.request_id) target_cell_migration = self._copy_migrations(migrations) return inst, target_cell_migration def rollback(self, ex): """Deletes the instance data from the target cell in case of failure""" if self._target_cell_instance: # Deleting the instance in the target cell DB should perform a # cascading delete of all related records, e.g. BDMs, VIFs, etc. LOG.debug('Destroying instance from target cell: %s', self.target_ctx.cell_uuid, instance=self._target_cell_instance) # This needs to be a hard delete because if resize fails later for # some reason, we want to be able to retry the resize to this cell # again without hitting a duplicate entry unique constraint error. self._target_cell_instance.destroy(hard_delete=True) class PrepResizeAtDestTask(base.TaskBase): """Task used to verify a given target host in a target cell. Upon successful completion, port bindings and volume attachments should be created for the target host in the target cell and resources should be claimed on the target host for the resize. Also, the instance task_state should be ``resize_prep``. 
""" def __init__(self, context, instance, flavor, target_migration, request_spec, compute_rpcapi, host_selection, network_api, volume_api): """Construct the PrepResizeAtDestTask instance :param context: The user request auth context. This should be targeted at the target cell. :param instance: The instance being migrated (this is the target cell copy of the instance record). :param flavor: The new flavor if performing resize and not just a cold migration :param target_migration: The Migration object from the target cell DB. :param request_spec: nova.objects.RequestSpec object for the operation :param compute_rpcapi: instance of nova.compute.rpcapi.ComputeAPI :param host_selection: nova.objects.Selection which is a possible target host for the cross-cell resize :param network_api: The neutron (client side) networking API. :param volume_api: The cinder (client side) block-storage API. """ super(PrepResizeAtDestTask, self).__init__(context, instance) self.flavor = flavor self.target_migration = target_migration self.request_spec = request_spec self.compute_rpcapi = compute_rpcapi self.host_selection = host_selection self.network_api = network_api self.volume_api = volume_api # Keep track of anything we created so we can rollback. self._bindings_by_port_id = {} self._created_volume_attachment_ids = [] def _create_port_bindings(self): """Creates inactive port bindings against the selected target host for the ports attached to the instance. The ``self._bindings_by_port_id`` variable will be set upon successful completion. :raises: MigrationPreCheckError if port binding failed """ LOG.debug('Creating port bindings for destination host %s', self.host_selection.service_host) try: self._bindings_by_port_id = self.network_api.bind_ports_to_host( self.context, self.instance, self.host_selection.service_host) except exception.PortBindingFailed: raise exception.MigrationPreCheckError(reason=_( 'Failed to create port bindings for host %s') % self.host_selection.service_host) def _create_volume_attachments(self): """Create empty volume attachments for volume BDMs attached to the instance in the target cell. The BlockDeviceMapping.attachment_id field is updated for each volume BDM processed. Remember that these BDM records are from the target cell database so the changes will only go there. :return: BlockDeviceMappingList of volume BDMs with an updated attachment_id field for the newly created empty attachment for that BDM """ LOG.debug('Creating volume attachments for destination host %s', self.host_selection.service_host) volume_bdms = objects.BlockDeviceMappingList(objects=[ bdm for bdm in self.instance.get_bdms() if bdm.is_volume]) for bdm in volume_bdms: # Create the empty (no host connector) attachment. attach_ref = self.volume_api.attachment_create( self.context, bdm.volume_id, bdm.instance_uuid) # Keep track of what we create for rollbacks. self._created_volume_attachment_ids.append(attach_ref['id']) # Update the BDM in the target cell database. bdm.attachment_id = attach_ref['id'] # Note that ultimately the BDMs in the target cell are either # pointing at attachments that we can use, or this sub-task has # failed in which case we will fail the main task and should # rollback and delete the instance and its BDMs in the target cell # database, so that is why we do not track the original attachment # IDs in order to roll them back on the BDM records. 
bdm.save() return volume_bdms def _execute(self): """Performs pre-cross-cell resize checks/claims on the targeted host This ensures things like networking (ports) will continue to work on the target host in the other cell before we initiate the migration of the server. Resources are also claimed on the target host which in turn creates the MigrationContext for the instance in the target cell database. :returns: MigrationContext created in the target cell database during the resize_claim in the destination compute service. :raises: nova.exception.MigrationPreCheckError if the pre-check validation fails for the given host selection; this indicates an alternative host *may* work but this one does not. """ destination = self.host_selection.service_host LOG.debug('Verifying selected host %s for cross-cell resize.', destination, instance=self.instance) # Validate networking by creating port bindings for this host. self._create_port_bindings() # Create new empty volume attachments for the volume BDMs attached # to the instance. Technically this is not host specific and we could # do this outside of the PrepResizeAtDestTask sub-task but volume # attachments are meant to be cheap and plentiful so it is nice to # keep them self-contained within each execution of this task and # rollback anything we created if we fail. self._create_volume_attachments() try: LOG.debug('Calling destination host %s to prepare for cross-cell ' 'resize and claim resources.', destination) return self.compute_rpcapi.prep_snapshot_based_resize_at_dest( self.context, self.instance, self.flavor, self.host_selection.nodename, self.target_migration, self.host_selection.limits, self.request_spec, destination) except messaging.MessagingTimeout: msg = _('RPC timeout while checking if we can cross-cell migrate ' 'to host: %s') % destination raise exception.MigrationPreCheckError(reason=msg) def rollback(self, ex): # Rollback anything we created. host = self.host_selection.service_host # Cleanup any destination host port bindings. LOG.debug('Cleaning up port bindings for destination host %s', host) for port_id in self._bindings_by_port_id: try: self.network_api.delete_port_binding( self.context, port_id, host) except Exception: # Don't raise if we fail to cleanup, just log it. LOG.exception('An error occurred while cleaning up binding ' 'for port %s on host %s.', port_id, host, instance=self.instance) # Cleanup any destination host volume attachments. LOG.debug( 'Cleaning up volume attachments for destination host %s', host) for attachment_id in self._created_volume_attachment_ids: try: self.volume_api.attachment_delete(self.context, attachment_id) except Exception: # Don't raise if we fail to cleanup, just log it. LOG.exception('An error occurred while cleaning up volume ' 'attachment %s.', attachment_id, instance=self.instance) class PrepResizeAtSourceTask(base.TaskBase): """Task to prepare the instance at the source host for the resize. Will power off the instance at the source host, create and upload a snapshot image for a non-volume-backed server, and disconnect volumes and networking from the source host. The vm_state is recorded with the "old_vm_state" key in the instance.system_metadata field prior to powering off the instance so the revert flow can determine if the guest should be running or stopped. Returns the snapshot image ID, if one was created, from the ``execute`` method. Upon successful completion, the instance.task_state will be ``resize_migrated`` and the migration.status will be ``post-migrating``. 
""" def __init__(self, context, instance, migration, request_spec, compute_rpcapi, image_api): """Initializes this PrepResizeAtSourceTask instance. :param context: nova auth context targeted at the source cell :param instance: Instance object from the source cell :param migration: Migration object from the source cell :param request_spec: RequestSpec object for the resize operation :param compute_rpcapi: instance of nova.compute.rpcapi.ComputeAPI :param image_api: instance of nova.image.glance.API """ super(PrepResizeAtSourceTask, self).__init__(context, instance) self.migration = migration self.request_spec = request_spec self.compute_rpcapi = compute_rpcapi self.image_api = image_api self._image_id = None def _execute(self): # Save off the vm_state so we can use that later on the source host # if the resize is reverted - it is used to determine if the reverted # guest should be powered on. self.instance.system_metadata['old_vm_state'] = self.instance.vm_state self.instance.task_state = task_states.RESIZE_MIGRATING # If the instance is not volume-backed, create a snapshot of the root # disk. if not self.request_spec.is_bfv: # Create an empty image. name = '%s-resize-temp' % self.instance.display_name image_meta = compute_utils.create_image( self.context, self.instance, name, 'snapshot', self.image_api) self._image_id = image_meta['id'] LOG.debug('Created snapshot image %s for cross-cell resize.', self._image_id, instance=self.instance) self.instance.save(expected_task_state=task_states.RESIZE_PREP) # RPC call the source host to prepare for resize. self.compute_rpcapi.prep_snapshot_based_resize_at_source( self.context, self.instance, self.migration, snapshot_id=self._image_id) return self._image_id def rollback(self, ex): # If we created a snapshot image, attempt to delete it. if self._image_id: compute_utils.delete_image( self.context, self.instance, self.image_api, self._image_id) # If the compute service successfully powered off the guest but failed # to snapshot (or timed out during the snapshot), then the # _sync_power_states periodic task should mark the instance as stopped # and the user can start/reboot it. # If the compute service powered off the instance, snapshot it and # destroyed the guest and then a failure occurred, the instance should # have been set to ERROR status (by the compute service) so the user # has to hard reboot or rebuild it. LOG.error('Preparing for cross-cell resize at the source host %s ' 'failed. The instance may need to be hard rebooted.', self.instance.host, instance=self.instance) class FinishResizeAtDestTask(base.TaskBase): """Task to finish the resize at the destination host. Calls the finish_snapshot_based_resize_at_dest method on the destination compute service which sets up networking and block storage and spawns the guest on the destination host. Upon successful completion of this task, the migration status should be 'finished', the instance task_state should be None and the vm_state should be 'resized'. The instance host/node information should also reflect the destination compute. If the compute call is successful, the task will change the instance mapping to point at the target cell and hide the source cell instance thus making the confirm/revert operations act on the target cell instance. """ def __init__(self, context, instance, migration, source_cell_instance, compute_rpcapi, target_cell_mapping, snapshot_id, request_spec): """Initialize this task. 
:param context: nova auth request context targeted at the target cell :param instance: Instance object in the target cell database :param migration: Migration object in the target cell database :param source_cell_instance: Instance object in the source cell DB :param compute_rpcapi: instance of nova.compute.rpcapi.ComputeAPI :param target_cell_mapping: CellMapping object for the target cell :param snapshot_id: ID of the image snapshot to use for a non-volume-backed instance. :param request_spec: nova.objects.RequestSpec object for the operation """ super(FinishResizeAtDestTask, self).__init__(context, instance) self.migration = migration self.source_cell_instance = source_cell_instance self.compute_rpcapi = compute_rpcapi self.target_cell_mapping = target_cell_mapping self.snapshot_id = snapshot_id self.request_spec = request_spec def _finish_snapshot_based_resize_at_dest(self): """Synchronously RPC calls finish_snapshot_based_resize_at_dest If the finish_snapshot_based_resize_at_dest method fails in the compute service, this method will update the source cell instance data to reflect the error (vm_state='error', copy the fault and instance action events for that compute method). """ LOG.debug('Finishing cross-cell resize at the destination host %s', self.migration.dest_compute, instance=self.instance) # prep_snapshot_based_resize_at_source in the source cell would have # changed the source cell instance.task_state to resize_migrated and # we need to reflect that in the target cell instance before calling # the destination compute. self.instance.task_state = task_states.RESIZE_MIGRATED self.instance.save() event_name = 'compute_finish_snapshot_based_resize_at_dest' source_cell_context = self.source_cell_instance._context try: with compute_utils.EventReporter( source_cell_context, event_name, self.migration.dest_compute, self.instance.uuid): self.compute_rpcapi.finish_snapshot_based_resize_at_dest( self.context, self.instance, self.migration, self.snapshot_id, self.request_spec) # finish_snapshot_based_resize_at_dest updates the target cell # instance so we need to refresh it here to have the latest copy. self.instance.refresh() except Exception: # We need to mimic the error handlers on # finish_snapshot_based_resize_at_dest in the destination compute # service so those changes are reflected in the source cell # instance. with excutils.save_and_reraise_exception(logger=LOG): # reverts_task_state and _error_out_instance_on_exception: self.source_cell_instance.task_state = None self.source_cell_instance.vm_state = vm_states.ERROR self.source_cell_instance.save() # wrap_instance_fault (this is best effort) self._copy_latest_fault(source_cell_context) def _copy_latest_fault(self, source_cell_context): """Copies the latest instance fault from the target cell to the source :param source_cell_context: nova auth request context targeted at the source cell """ try: # Get the latest fault from the target cell database. fault = objects.InstanceFault.get_latest_for_instance( self.context, self.instance.uuid) if fault: fault_clone = clone_creatable_object(source_cell_context, fault) fault_clone.create() except Exception: LOG.exception( 'Failed to copy instance fault from target cell DB', instance=self.instance) def _update_instance_mapping(self): """Swaps the hidden field value on the source and target cell instance and updates the instance mapping to point at the target cell. 
""" LOG.debug('Marking instance in source cell as hidden and updating ' 'instance mapping to point at target cell %s.', self.target_cell_mapping.identity, instance=self.instance) # Get the instance mapping first to make the window of time where both # instances are hidden=False as small as possible. instance_mapping = objects.InstanceMapping.get_by_instance_uuid( self.context, self.instance.uuid) # Mark the target cell instance record as hidden=False so it will show # up when listing servers. Note that because of how the API filters # duplicate instance records, even if the user is listing servers at # this exact moment only one copy of the instance will be returned. self.instance.hidden = False self.instance.save() # Update the instance mapping to point at the target cell. This is so # that the confirm/revert actions will be performed on the resized # instance in the target cell rather than the destroyed guest in the # source cell. Note that we could do this before finishing the resize # on the dest host, but it makes sense to defer this until the # instance is successfully resized in the dest host because if that # fails, we want to be able to rebuild in the source cell to recover # the instance. instance_mapping.cell_mapping = self.target_cell_mapping # If this fails the cascading task failures should delete the instance # in the target cell database so we do not need to hide it again. instance_mapping.save() # Mark the source cell instance record as hidden=True to hide it from # the user when listing servers. self.source_cell_instance.hidden = True self.source_cell_instance.save() def _execute(self): # Finish the resize on the destination host in the target cell. self._finish_snapshot_based_resize_at_dest() # Do the instance.hidden/instance_mapping.cell_mapping swap. self._update_instance_mapping() def rollback(self, ex): # The method executed in this task are self-contained for rollbacks. pass class CrossCellMigrationTask(base.TaskBase): """Orchestrates a cross-cell cold migration (resize).""" def __init__(self, context, instance, flavor, request_spec, source_migration, compute_rpcapi, host_selection, alternate_hosts): """Construct the CrossCellMigrationTask instance :param context: The user request auth context. This should be targeted to the source cell in which the instance is currently running. :param instance: The instance being migrated (from the source cell) :param flavor: The new flavor if performing resize and not just a cold migration :param request_spec: nova.objects.RequestSpec with scheduling details :param source_migration: nova.objects.Migration record for this operation (from the source cell) :param compute_rpcapi: instance of nova.compute.rpcapi.ComputeAPI :param host_selection: nova.objects.Selection of the initial selected target host from the scheduler where the selected host is in another cell which is different from the cell in which the instance is currently running :param alternate_hosts: list of 0 or more nova.objects.Selection objects representing alternate hosts within the same target cell as ``host_selection``. 
""" super(CrossCellMigrationTask, self).__init__(context, instance) self.request_spec = request_spec self.flavor = flavor self.source_migration = source_migration self.compute_rpcapi = compute_rpcapi self.host_selection = host_selection self.alternate_hosts = alternate_hosts self._target_cell_instance = None self._target_cell_context = None self.network_api = neutron.API() self.volume_api = cinder.API() self.image_api = glance.API() # Keep an ordered dict of the sub-tasks completed so we can call their # rollback routines if something fails. self._completed_tasks = collections.OrderedDict() def _get_target_cell_mapping(self): """Get the target host CellMapping for the selected host :returns: nova.objects.CellMapping for the cell of the selected target host :raises: nova.exception.CellMappingNotFound if the cell mapping for the selected target host cannot be found (this should not happen if the scheduler just selected it) """ return objects.CellMapping.get_by_uuid( self.context, self.host_selection.cell_uuid) def _setup_target_cell_db(self): """Creates the instance and its related records in the target cell Upon successful completion the self._target_cell_context and self._target_cell_instance variables are set. :returns: A 2-item tuple of: - The active Migration object from the target cell DB - The CellMapping for the target cell """ LOG.debug('Setting up the target cell database for the instance and ' 'its related records.', instance=self.instance) target_cell_mapping = self._get_target_cell_mapping() # Clone the context targeted at the source cell and then target the # clone at the target cell. self._target_cell_context = copy.copy(self.context) nova_context.set_target_cell( self._target_cell_context, target_cell_mapping) task = TargetDBSetupTask( self.context, self.instance, self.source_migration, self._target_cell_context) self._target_cell_instance, target_cell_migration = task.execute() self._completed_tasks['TargetDBSetupTask'] = task return target_cell_migration, target_cell_mapping def _perform_external_api_checks(self): """Performs checks on external service APIs for support. * Checks that the neutron port binding-extended API is available :raises: MigrationPreCheckError if any checks fail """ LOG.debug('Making sure neutron is new enough for cross-cell resize.') # Check that the port binding-extended API extension is available in # neutron because if it's not we can just fail fast. if not self.network_api.supports_port_binding_extension(self.context): raise exception.MigrationPreCheckError( reason=_("Required networking service API extension '%s' " "not found.") % neutron_constants.PORT_BINDING_EXTENDED) def _prep_resize_at_dest(self, target_cell_migration): """Executes PrepResizeAtDestTask and updates the source migration. :param target_cell_migration: Migration record from the target cell DB :returns: Refreshed Migration record from the target cell DB after the resize_claim on the destination host has updated the record. """ # TODO(mriedem): Check alternates if the primary selected host fails; # note that alternates are always in the same cell as the selected host # so if the primary fails pre-checks, the alternates may also fail. We # could reschedule but the scheduler does not yet have an ignore_cells # capability like ignore_hosts. # We set the target cell instance new_flavor attribute now since the # ResourceTracker.resize_claim on the destination host uses it. 
self._target_cell_instance.new_flavor = self.flavor verify_task = PrepResizeAtDestTask( self._target_cell_context, self._target_cell_instance, self.flavor, target_cell_migration, self.request_spec, self.compute_rpcapi, self.host_selection, self.network_api, self.volume_api) target_cell_migration_context = verify_task.execute() self._completed_tasks['PrepResizeAtDestTask'] = verify_task # Stash the old vm_state so we can set the resized/reverted instance # back to the same state later, i.e. if STOPPED do not power on the # guest. self._target_cell_instance.system_metadata['old_vm_state'] = ( self._target_cell_instance.vm_state) # Update the target cell instance availability zone now that we have # prepared the resize on the destination host. We do this in conductor # to avoid the "up-call" from the compute service to the API database. self._target_cell_instance.availability_zone = ( availability_zones.get_host_availability_zone( self.context, self.host_selection.service_host)) self._target_cell_instance.save() # We need to mirror the MigrationContext, created in the target cell # database, into the source cell database. Keep in mind that the # MigrationContext has pci_devices and a migration_id in it which # are specific to the target cell database. The only one we care about # correcting for the source cell database is migration_id since that # is used to route neutron external events to the source and target # hosts. self.instance.migration_context = ( target_cell_migration_context.obj_clone()) self.instance.migration_context.migration_id = self.source_migration.id self.instance.save() return self._update_migration_from_dest_after_claim( target_cell_migration) def _update_migration_from_dest_after_claim(self, target_cell_migration): """Update the source cell migration record with target cell info. The PrepResizeAtDestTask runs a resize_claim on the target compute host service in the target cell which sets fields about the destination in the migration record in the target cell. We need to reflect those changes back into the migration record in the source cell. :param target_cell_migration: Migration record from the target cell DB :returns: Refreshed Migration record from the target cell DB after the resize_claim on the destination host has updated the record. """ # Copy information about the dest compute that was set on the dest # migration record during the resize claim on the dest host. # We have to get a fresh copy of the target cell migration record to # pick up the changes made in the dest compute service. target_cell_migration = objects.Migration.get_by_uuid( self._target_cell_context, target_cell_migration.uuid) self.source_migration.dest_compute = target_cell_migration.dest_compute self.source_migration.dest_node = target_cell_migration.dest_node self.source_migration.dest_host = target_cell_migration.dest_host self.source_migration.save() return target_cell_migration def _prep_resize_at_source(self): """Executes PrepResizeAtSourceTask :return: The image snapshot ID if the instance is not volume-backed, else None. 
""" LOG.debug('Preparing source host %s for cross-cell resize.', self.source_migration.source_compute, instance=self.instance) prep_source_task = PrepResizeAtSourceTask( self.context, self.instance, self.source_migration, self.request_spec, self.compute_rpcapi, self.image_api) snapshot_id = prep_source_task.execute() self._completed_tasks['PrepResizeAtSourceTask'] = prep_source_task return snapshot_id def _finish_resize_at_dest( self, target_cell_migration, target_cell_mapping, snapshot_id): """Executes FinishResizeAtDestTask :param target_cell_migration: Migration object from the target cell DB :param target_cell_mapping: CellMapping object for the target cell :param snapshot_id: ID of the image snapshot to use for a non-volume-backed instance. """ task = FinishResizeAtDestTask( self._target_cell_context, self._target_cell_instance, target_cell_migration, self.instance, self.compute_rpcapi, target_cell_mapping, snapshot_id, self.request_spec) task.execute() self._completed_tasks['FinishResizeAtDestTask'] = task def _execute(self): """Execute high-level orchestration of the cross-cell resize""" # We are committed to a cross-cell move at this point so update the # migration record to reflect that. If we fail after this we are not # going to go back and try to run the MigrationTask to do a same-cell # migration, so we set the cross_cell_move flag early for audit/debug # in case something fails later and the operator wants to know if this # was a cross-cell or same-cell move operation. self.source_migration.cross_cell_move = True self.source_migration.save() # Make sure neutron APIs we need are available. self._perform_external_api_checks() # Before preparing the target host create the instance record data # in the target cell database since we cannot do anything in the # target cell without having an instance record there. Remember that # we lose the cell-targeting on the request context over RPC so we # cannot simply pass the source cell context and instance over RPC # to the target compute host and assume changes get mirrored back to # the source cell database. target_cell_migration, target_cell_mapping = ( self._setup_target_cell_db()) # Claim resources and validate the selected host in the target cell. target_cell_migration = self._prep_resize_at_dest( target_cell_migration) # Prepare the instance at the source host (stop it, optionally snapshot # it, disconnect volumes and VIFs, etc). snapshot_id = self._prep_resize_at_source() # Finish the resize at the destination host, swap the hidden fields # on the instances and update the instance mapping. self._finish_resize_at_dest( target_cell_migration, target_cell_mapping, snapshot_id) def rollback(self, ex): """Rollback based on how sub-tasks completed Sub-tasks should rollback appropriately for whatever they do but here we need to handle cleaning anything up from successful tasks, e.g. if tasks A and B were successful but task C fails, then we might need to cleanup changes from A and B here. """ # Rollback the completed tasks in reverse order. for task_name in reversed(self._completed_tasks): try: self._completed_tasks[task_name].rollback(ex) except Exception: LOG.exception('Rollback for task %s failed.', task_name) def get_inst_and_cell_map_from_source( target_cell_context, source_compute, instance_uuid): """Queries the instance from the source cell database. 
:param target_cell_context: nova auth request context targeted at the target cell database :param source_compute: name of the source compute service host :param instance_uuid: UUID of the instance :returns: 2-item tuple of: - Instance object from the source cell database. - CellMapping object of the source cell mapping """ # We can get the source cell via the host mapping based on the # source_compute in the migration object. source_host_mapping = objects.HostMapping.get_by_host( target_cell_context, source_compute) source_cell_mapping = source_host_mapping.cell_mapping # Clone the context targeted at the target cell and then target the # clone at the source cell. source_cell_context = copy.copy(target_cell_context) nova_context.set_target_cell(source_cell_context, source_cell_mapping) # Now get the instance from the source cell DB using the source # cell context which will make the source cell instance permanently # targeted to the source cell database. instance = objects.Instance.get_by_uuid( source_cell_context, instance_uuid, expected_attrs=['flavor', 'info_cache', 'system_metadata']) return instance, source_cell_mapping class ConfirmResizeTask(base.TaskBase): """Task which orchestrates a cross-cell resize confirm operation When confirming a cross-cell resize, the instance is in both the source and target cell databases and on the source and target compute hosts. The API operation is performed on the target cell instance and it is the job of this task to cleanup the source cell host and database and update the status of the instance in the target cell. This can be called either asynchronously from the API service during a normal confirmResize server action or synchronously when deleting a server in VERIFY_RESIZE status. """ def __init__(self, context, instance, migration, legacy_notifier, compute_rpcapi): """Initialize this ConfirmResizeTask instance :param context: nova auth request context targeted at the target cell :param instance: Instance object in "resized" status from the target cell :param migration: Migration object from the target cell for the resize operation expected to have status "confirming" :param legacy_notifier: LegacyValidatingNotifier for sending legacy unversioned notifications :param compute_rpcapi: instance of nova.compute.rpcapi.ComputeAPI """ super(ConfirmResizeTask, self).__init__(context, instance) self.migration = migration self.legacy_notifier = legacy_notifier self.compute_rpcapi = compute_rpcapi def _send_resize_confirm_notification(self, instance, phase): """Sends an unversioned and versioned resize.confirm.(phase) notification. :param instance: The instance whose resize is being confirmed. :param phase: The phase for the resize.confirm operation (either "start" or "end"). """ ctxt = instance._context # Send the legacy unversioned notification. compute_utils.notify_about_instance_usage( self.legacy_notifier, ctxt, instance, 'resize.confirm.%s' % phase) # Send the versioned notification. compute_utils.notify_about_instance_action( ctxt, instance, CONF.host, action=fields.NotificationAction.RESIZE_CONFIRM, phase=phase) def _cleanup_source_host(self, source_instance): """Cleans up the instance from the source host. Creates a confirmResize instance action in the source cell DB. Destroys the guest from the source hypervisor, cleans up networking and storage and frees up resource usage on the source host. 
:param source_instance: Instance object from the source cell DB """ ctxt = source_instance._context # The confirmResize instance action has to be created in the source # cell database before calling the compute service to properly # track action events. Note that the API created the same action # record but on the target cell instance. objects.InstanceAction.action_start( ctxt, source_instance.uuid, instance_actions.CONFIRM_RESIZE, want_result=False) # Get the Migration record from the source cell database. source_migration = objects.Migration.get_by_uuid( ctxt, self.migration.uuid) LOG.debug('Cleaning up source host %s for cross-cell resize confirm.', source_migration.source_compute, instance=source_instance) # The instance.old_flavor field needs to be set before the source # host drops the MoveClaim in the ResourceTracker. source_instance.old_flavor = source_instance.flavor # Use the EventReport context manager to create the same event that # the source compute will create but in the target cell DB so we do not # have to explicitly copy it over from source to target DB. event_name = 'compute_confirm_snapshot_based_resize_at_source' with compute_utils.EventReporter( self.context, event_name, source_migration.source_compute, source_instance.uuid): self.compute_rpcapi.confirm_snapshot_based_resize_at_source( ctxt, source_instance, source_migration) def _finish_confirm_in_target_cell(self): """Sets "terminal" states on the migration and instance in target cell. This is similar to how ``confirm_resize`` works in the compute service for same-cell resize. """ LOG.debug('Updating migration and instance status in target cell DB.', instance=self.instance) # Update the target cell migration. self.migration.status = 'confirmed' self.migration.save() # Update the target cell instance. # Delete stashed information for the resize. self.instance.old_flavor = None self.instance.new_flavor = None self.instance.system_metadata.pop('old_vm_state', None) self._set_vm_and_task_state() self.instance.drop_migration_context() # There are multiple possible task_states set on the instance because # if we are called from the confirmResize instance action the # task_state should be None, but if we are called from # _confirm_resize_on_deleting then the instance is being deleted. self.instance.save(expected_task_state=[ None, task_states.DELETING, task_states.SOFT_DELETING]) def _set_vm_and_task_state(self): """Sets the target cell instance vm_state based on the power_state. The task_state is set to None. """ # The old_vm_state could be STOPPED but the user might have manually # powered up the instance to confirm the resize/migrate, so we need to # check the current power state on the instance and set the vm_state # appropriately. We default to ACTIVE because if the power state is # not SHUTDOWN, we assume the _sync_power_states periodic task in the # compute service will clean it up. p_state = self.instance.power_state if p_state == power_state.SHUTDOWN: vm_state = vm_states.STOPPED LOG.debug("Resized/migrated instance is powered off. " "Setting vm_state to '%s'.", vm_state, instance=self.instance) else: vm_state = vm_states.ACTIVE self.instance.vm_state = vm_state self.instance.task_state = None def _execute(self): # First get the instance from the source cell so we can cleanup. source_cell_instance = get_inst_and_cell_map_from_source( self.context, self.migration.source_compute, self.instance.uuid)[0] # Send the resize.confirm.start notification(s) using the source # cell instance since we start there. 
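# Illustrative sketch only (comment, not executed): each call to
# _send_resize_confirm_notification below expands, roughly, to the two
# notification styles defined in the helper above; the variable names here
# are simplified stand-ins for the attributes actually used:
#     compute_utils.notify_about_instance_usage(
#         legacy_notifier, ctxt, instance, 'resize.confirm.start')
#     compute_utils.notify_about_instance_action(
#         ctxt, instance, CONF.host,
#         action=fields.NotificationAction.RESIZE_CONFIRM,
#         phase=fields.NotificationPhase.START)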
self._send_resize_confirm_notification( source_cell_instance, fields.NotificationPhase.START) # RPC call the source compute to cleanup. self._cleanup_source_host(source_cell_instance) # Now we can delete the instance in the source cell database. LOG.info('Deleting instance record from source cell %s', source_cell_instance._context.cell_uuid, instance=source_cell_instance) # This needs to be a hard delete because we want to be able to resize # back to this cell without hitting a duplicate entry unique constraint # error. source_cell_instance.destroy(hard_delete=True) # Update the information in the target cell database. self._finish_confirm_in_target_cell() # Send the resize.confirm.end notification using the target cell # instance since we end there. self._send_resize_confirm_notification( self.instance, fields.NotificationPhase.END) def rollback(self, ex): with excutils.save_and_reraise_exception(): LOG.exception( 'An error occurred while confirming the resize for instance ' 'in target cell %s. Depending on the error, a copy of the ' 'instance may still exist in the source cell database which ' 'contains the source host %s. At this point the instance is ' 'on the target host %s and anything left in the source cell ' 'can be cleaned up.', self.context.cell_uuid, self.migration.source_compute, self.migration.dest_compute, instance=self.instance) # If anything failed set the migration status to 'error'. self.migration.status = 'error' self.migration.save() # Put the instance in the target DB into ERROR status, record # a fault and send an error notification. updates = {'vm_state': vm_states.ERROR, 'task_state': None} request_spec = objects.RequestSpec.get_by_instance_uuid( self.context, self.instance.uuid) scheduler_utils.set_vm_state_and_notify( self.context, self.instance.uuid, 'compute_task', 'migrate_server', updates, ex, request_spec) class RevertResizeTask(base.TaskBase): """Task to orchestrate a cross-cell resize revert operation. This task is responsible for coordinating the cleanup of the resources in the target cell and restoring the server and its related resources (e.g. networking and volumes) in the source cell. Upon successful completion the instance mapping should point back at the source cell, the source cell instance should no longer be hidden and the instance in the target cell should be destroyed. """ def __init__(self, context, instance, migration, legacy_notifier, compute_rpcapi): """Initialize this RevertResizeTask instance :param context: nova auth request context targeted at the target cell :param instance: Instance object in "resized" status from the target cell with task_state "resize_reverting" :param migration: Migration object from the target cell for the resize operation expected to have status "reverting" :param legacy_notifier: LegacyValidatingNotifier for sending legacy unversioned notifications :param compute_rpcapi: instance of nova.compute.rpcapi.ComputeAPI """ super(RevertResizeTask, self).__init__(context, instance) self.migration = migration self.legacy_notifier = legacy_notifier self.compute_rpcapi = compute_rpcapi # These are used for rollback handling. self._source_cell_migration = None self._source_cell_instance = None self.volume_api = cinder.API() def _send_resize_revert_notification(self, instance, phase): """Sends an unversioned and versioned resize.revert.(phase) notification. :param instance: The instance whose resize is being reverted. :param phase: The phase for the resize.revert operation (either "start" or "end"). 
""" ctxt = instance._context # Send the legacy unversioned notification. compute_utils.notify_about_instance_usage( self.legacy_notifier, ctxt, instance, 'resize.revert.%s' % phase) # Send the versioned notification. compute_utils.notify_about_instance_action( ctxt, instance, CONF.host, action=fields.NotificationAction.RESIZE_REVERT, phase=phase) @staticmethod def _update_source_obj_from_target_cell(source_obj, target_obj): """Updates the object from the source cell using the target cell object WARNING: This method does not support objects with nested objects, i.e. objects that have fields which are other objects. An error will be raised in that case. All fields on the source object are updated from the target object except for the ``id`` and ``created_at`` fields since those value must not change during an update. The ``updated_at`` field is also skipped because saving changes to ``source_obj`` will automatically update the ``updated_at`` field. It is expected that the two objects represent the same thing but from different cell databases, so for example, a uuid field (if one exists) should not change. Note that the changes to ``source_obj`` are not persisted in this method. :param source_obj: Versioned object from the source cell database :param target_obj: Versioned object from the target cell database :raises: ObjectActionError if nested object fields are encountered """ ignore_fields = ['created_at', 'id', 'updated_at'] for field in source_obj.obj_fields: if field in target_obj and field not in ignore_fields: if isinstance(source_obj.fields[field], fields.ObjectField): raise exception.ObjectActionError( action='_update_source_obj_from_target_cell', reason='nested objects are not supported') setattr(source_obj, field, getattr(target_obj, field)) def _update_bdms_in_source_cell(self, source_cell_context): """Update BlockDeviceMapppings in the source cell database. It is possible to attach/detach volumes to/from a resized instance, which would create/delete BDM records in the target cell, so we have to recreate newly attached BDMs in the source cell database and delete any old BDMs that were detached while resized in the target cell. :param source_cell_context: nova auth request context targeted at the source cell database """ bdms_from_source_cell = ( objects.BlockDeviceMappingList.get_by_instance_uuid( source_cell_context, self.instance.uuid)) source_cell_bdms_by_uuid = { bdm.uuid: bdm for bdm in bdms_from_source_cell} bdms_from_target_cell = ( objects.BlockDeviceMappingList.get_by_instance_uuid( self.context, self.instance.uuid)) # Copy new/updated BDMs from the target cell DB to the source cell DB. for bdm in bdms_from_target_cell: if bdm.uuid in source_cell_bdms_by_uuid: # Remove this BDM from the list since we want to preserve it # along with its attachment_id. source_cell_bdms_by_uuid.pop(bdm.uuid) else: # Newly attached BDM while in the target cell, so create it # in the source cell. source_bdm = clone_creatable_object(source_cell_context, bdm) # revert_snapshot_based_resize_at_dest is going to delete the # attachment for this BDM so we need to create a new empty # attachment to reserve this volume so that # finish_revert_snapshot_based_resize_at_source can use it. 
attach_ref = self.volume_api.attachment_create( source_cell_context, bdm.volume_id, self.instance.uuid) source_bdm.attachment_id = attach_ref['id'] LOG.debug('Creating BlockDeviceMapping with volume ID %s ' 'and attachment %s in the source cell database ' 'since the volume was attached while the server was ' 'resized.', bdm.volume_id, attach_ref['id'], instance=self.instance) source_bdm.create() # If there are any source bdms left that were not processed from the # target cell bdms, it means those source bdms were detached while # resized in the target cell, and we need to delete them from the # source cell so they don't re-appear once the revert is complete. self._delete_orphan_source_cell_bdms(source_cell_bdms_by_uuid.values()) def _delete_orphan_source_cell_bdms(self, source_cell_bdms): """Deletes orphaned BDMs and volume attachments from the source cell. If any volumes were detached while the server was resized into the target cell they are destroyed here so they do not show up again once the instance is mapped back to the source cell. :param source_cell_bdms: Iterator of BlockDeviceMapping objects. """ for bdm in source_cell_bdms: LOG.debug('Destroying BlockDeviceMapping with volume ID %s and ' 'attachment ID %s from source cell database during ' 'cross-cell resize revert since the volume was detached ' 'while the server was resized.', bdm.volume_id, bdm.attachment_id, instance=self.instance) # First delete the (empty) attachment, created by # prep_snapshot_based_resize_at_source, so it is not leaked. try: self.volume_api.attachment_delete( bdm._context, bdm.attachment_id) except Exception as e: LOG.error('Failed to delete attachment %s for volume %s. The ' 'attachment may be leaked and needs to be manually ' 'cleaned up. Error: %s', bdm.attachment_id, bdm.volume_id, e, instance=self.instance) bdm.destroy() def _update_instance_actions_in_source_cell(self, source_cell_context): """Update instance action records in the source cell database We need to copy the REVERT_RESIZE instance action and related events from the target cell to the source cell. Otherwise the revert operation in the source compute service will not be able to lookup the correct instance action to track events. :param source_cell_context: nova auth request context targeted at the source cell database """ # FIXME(mriedem): This is a hack to just get revert working on # the source; we need to re-create any actions created in the target # cell DB after the instance was moved while it was in # VERIFY_RESIZE status, like if volumes were attached/detached. # Can we use a changes-since filter for that, i.e. find the last # instance action for the instance in the source cell database and then # get all instance actions from the target cell database that were # created after that time. action = objects.InstanceAction.get_by_request_id( self.context, self.instance.uuid, self.context.request_id) new_action = clone_creatable_object(source_cell_context, action) new_action.create() # Also create the events under this action. events = objects.InstanceActionEventList.get_by_action( self.context, action.id) for event in events: new_event = clone_creatable_object(source_cell_context, event) new_event.create(action.instance_uuid, action.request_id) def _update_migration_in_source_cell(self, source_cell_context): """Update the migration record in the source cell database. Updates the migration record in the source cell database based on the current information about the migration in the target cell database. 
:param source_cell_context: nova auth request context targeted at the source cell database :return: Migration object of the updated source cell database migration record """ source_cell_migration = objects.Migration.get_by_uuid( source_cell_context, self.migration.uuid) # The only change we really expect here is the status changing to # "reverting". self._update_source_obj_from_target_cell( source_cell_migration, self.migration) source_cell_migration.save() return source_cell_migration def _update_instance_in_source_cell(self, instance): """Updates the instance and related records in the source cell DB. Before reverting in the source cell we need to copy the latest state information from the target cell database where the instance lived before the revert. This is because data about the instance could have changed while it was in VERIFY_RESIZE status, like attached volumes. :param instance: Instance object from the source cell database :return: Migration object of the updated source cell database migration record """ LOG.debug('Updating instance-related records in the source cell ' 'database based on target cell database information.', instance=instance) # Copy information from the target cell instance that we need in the # source cell instance for doing the revert on the source compute host. instance.system_metadata['old_vm_state'] = ( self.instance.system_metadata.get('old_vm_state')) instance.old_flavor = instance.flavor instance.task_state = task_states.RESIZE_REVERTING instance.save() source_cell_context = instance._context self._update_bdms_in_source_cell(source_cell_context) self._update_instance_actions_in_source_cell(source_cell_context) source_cell_migration = self._update_migration_in_source_cell( source_cell_context) # NOTE(mriedem): We do not have to worry about ports changing while # resized since the API does not allow attach/detach interface while # resized. Same for tags. return source_cell_migration def _update_instance_mapping( self, source_cell_instance, source_cell_mapping): """Swaps the hidden field value on the source and target cell instance and updates the instance mapping to point at the source cell. :param source_cell_instance: Instance object from the source cell DB :param source_cell_mapping: CellMapping object for the source cell """ LOG.debug('Marking instance in target cell as hidden and updating ' 'instance mapping to point at source cell %s.', source_cell_mapping.identity, instance=source_cell_instance) # Get the instance mapping first to make the window of time where both # instances are hidden=False as small as possible. instance_mapping = objects.InstanceMapping.get_by_instance_uuid( self.context, self.instance.uuid) # Mark the source cell instance record as hidden=False so it will show # up when listing servers. Note that because of how the API filters # duplicate instance records, even if the user is listing servers at # this exact moment only one copy of the instance will be returned. source_cell_instance.hidden = False source_cell_instance.save() # Update the instance mapping to point at the source cell. We do this # before cleaning up the target host/cell because that is really best # effort and if something fails on the target we want the user to # now interact with the instance in the source cell with the original # flavor because they are ultimately trying to revert and get back # there, so if they hard reboot/rebuild after an error (for example) # that should happen in the source cell. 
instance_mapping.cell_mapping = source_cell_mapping instance_mapping.save() # Mark the target cell instance record as hidden=True to hide it from # the user when listing servers. self.instance.hidden = True self.instance.save() def _execute(self): # Send the resize.revert.start notification(s) using the target # cell instance since we start there. self._send_resize_revert_notification( self.instance, fields.NotificationPhase.START) source_cell_instance, source_cell_mapping = ( get_inst_and_cell_map_from_source( self.context, self.migration.source_compute, self.instance.uuid)) self._source_cell_instance = source_cell_instance # Update the source cell database information based on the target cell # database, i.e. the instance/migration/BDMs/action records. Do all of # this before updating the instance mapping in case it fails. source_cell_migration = self._update_instance_in_source_cell( source_cell_instance) # Swap the instance.hidden values and update the instance mapping to # point at the source cell. From here on out the user will see and # operate on the instance in the source cell. self._update_instance_mapping( source_cell_instance, source_cell_mapping) # Save off the source cell migration record for rollbacks. self._source_cell_migration = source_cell_migration # Clean the instance from the target host. LOG.debug('Calling destination host %s to revert cross-cell resize.', self.migration.dest_compute, instance=self.instance) # Use the EventReport context manager to create the same event that # the dest compute will create but in the source cell DB so we do not # have to explicitly copy it over from target to source DB. event_name = 'compute_revert_snapshot_based_resize_at_dest' with compute_utils.EventReporter( source_cell_instance._context, event_name, self.migration.dest_compute, self.instance.uuid): self.compute_rpcapi.revert_snapshot_based_resize_at_dest( self.context, self.instance, self.migration) # NOTE(mriedem): revert_snapshot_based_resize_at_dest updates the # target cell instance so if we need to do something with it here # in the future before destroying it, it should be refreshed. # Destroy the instance and its related records from the target cell DB. LOG.info('Deleting instance record from target cell %s', self.context.cell_uuid, instance=source_cell_instance) # This needs to be a hard delete because if we retry the resize to the # target cell we could hit a duplicate entry unique constraint error. self.instance.destroy(hard_delete=True) # Launch the guest at the source host with the old flavor. LOG.debug('Calling source host %s to finish reverting cross-cell ' 'resize.', self.migration.source_compute, instance=self.instance) self.compute_rpcapi.finish_revert_snapshot_based_resize_at_source( source_cell_instance._context, source_cell_instance, source_cell_migration) # finish_revert_snapshot_based_resize_at_source updates the source cell # instance so refresh it here so we have the latest copy. source_cell_instance.refresh() # Finish the conductor_revert_snapshot_based_resize event in the source # cell DB. ComputeTaskManager.revert_snapshot_based_resize uses the # wrap_instance_event decorator to create this action/event in the # target cell DB but now that the target cell instance is gone the # event needs to show up in the source cell DB. 
objects.InstanceActionEvent.event_finish( source_cell_instance._context, source_cell_instance.uuid, 'conductor_revert_snapshot_based_resize', want_result=False) # Send the resize.revert.end notification using the instance from # the source cell since we end there. self._send_resize_revert_notification( source_cell_instance, fields.NotificationPhase.END) def rollback(self, ex): with excutils.save_and_reraise_exception(): # If we have updated the instance mapping to point at the source # cell we update the records in the source cell, otherwise we # update the records in the target cell. instance_at_source = self._source_cell_migration is not None migration = self._source_cell_migration or self.migration instance = self._source_cell_instance or self.instance # NOTE(mriedem): This exception log is fairly generic. We could # probably make this more targeted based on what we know of the # state of the system if we want to make it more detailed, e.g. # the execute method could "record" checkpoints to be used here # or we could check to see if the instance was deleted from the # target cell by trying to refresh it and handle InstanceNotFound. LOG.exception( 'An error occurred while reverting the resize for instance. ' 'The instance is mapped to the %s cell %s. If the instance ' 'was deleted from the target cell %s then the target host %s ' 'was already cleaned up. If the instance is back in the ' 'source cell then you can try hard-rebooting it to recover.', ('source' if instance_at_source else 'target'), migration._context.cell_uuid, self.context.cell_uuid, migration.dest_compute, instance=instance) # If anything failed set the migration status to 'error'. migration.status = 'error' migration.save() # Put the instance into ERROR status, record a fault and send an # error notification. updates = {'vm_state': vm_states.ERROR, 'task_state': None} request_spec = objects.RequestSpec.get_by_instance_uuid( self.context, instance.uuid) scheduler_utils.set_vm_state_and_notify( instance._context, instance.uuid, 'compute_task', 'migrate_server', updates, ex, request_spec)
nova-21.2.4/nova/conductor/tasks/live_migrate.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
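# A minimal, illustrative usage sketch (not part of this module) of how the
# conductor typically drives the task defined below; the client objects
# passed in (compute_rpcapi, servicegroup_api, query_client, report_client)
# are assumed to already exist on the caller:
#
#     task = LiveMigrationTask(
#         context, instance, destination, block_migration, disk_over_commit,
#         migration, compute_rpcapi, servicegroup_api, query_client,
#         report_client, request_spec=request_spec)
#     task.execute()  # TaskBase runs _execute() and calls rollback(ex) on failure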
from oslo_log import log as logging import oslo_messaging as messaging from oslo_utils import excutils import six from nova import availability_zones from nova.compute import power_state from nova.compute import utils as compute_utils from nova.conductor.tasks import base from nova.conductor.tasks import migrate import nova.conf from nova import exception from nova.i18n import _ from nova.network import neutron from nova import objects from nova.objects import fields as obj_fields from nova.objects import migrate_data as migrate_data_obj from nova.scheduler import utils as scheduler_utils LOG = logging.getLogger(__name__) CONF = nova.conf.CONF def supports_vif_related_pci_allocations(context, host): """Checks if the compute host service is new enough to support VIF related PCI allocation during live migration :param context: The user request context. :param host: The nova-compute host to check. :returns: True if the compute host is new enough to support vif related PCI allocations """ svc = objects.Service.get_by_host_and_binary(context, host, 'nova-compute') return svc.version >= 36 def supports_vpmem_live_migration(context): """Checks if the commpute host service is new enough to support instance live migration with virtual persistent memory. :param context: The user request context. :returns: True if the compute hosts are new enough to support live migration with vpmem """ return objects.Service.get_minimum_version(context, 'nova-compute') >= 51 class LiveMigrationTask(base.TaskBase): def __init__(self, context, instance, destination, block_migration, disk_over_commit, migration, compute_rpcapi, servicegroup_api, query_client, report_client, request_spec=None): super(LiveMigrationTask, self).__init__(context, instance) self.destination = destination self.block_migration = block_migration self.disk_over_commit = disk_over_commit self.migration = migration self.source = instance.host self.migrate_data = None self.limits = None self.compute_rpcapi = compute_rpcapi self.servicegroup_api = servicegroup_api self.query_client = query_client self.report_client = report_client self.request_spec = request_spec self._source_cn = None self._held_allocations = None self.network_api = neutron.API() def _execute(self): self._check_instance_is_active() self._check_instance_has_no_numa() self._check_host_is_up(self.source) self._source_cn, self._held_allocations = ( # NOTE(danms): This may raise various exceptions, which will # propagate to the API and cause a 500. This is what we # want, as it would indicate internal data structure corruption # (such as missing migrations, compute nodes, etc). migrate.replace_allocation_with_migration(self.context, self.instance, self.migration)) if not self.destination: # Either no host was specified in the API request and the user # wants the scheduler to pick a destination host, or a host was # specified but is not forcing it, so they want the scheduler # filters to run on the specified host, like a scheduler hint. self.destination, dest_node, self.limits = self._find_destination() else: # This is the case that the user specified the 'force' flag when # live migrating with a specific destination host so the scheduler # is bypassed. There are still some minimal checks performed here # though. 
self._check_destination_is_not_source() self._check_host_is_up(self.destination) self._check_destination_has_enough_memory() source_node, dest_node = ( self._check_compatible_with_source_hypervisor( self.destination)) # TODO(mriedem): Call select_destinations() with a # skip_filters=True flag so the scheduler does the work of claiming # resources on the destination in Placement but still bypass the # scheduler filters, which honors the 'force' flag in the API. # This raises NoValidHost which will be handled in # ComputeTaskManager. # NOTE(gibi): consumer_generation = None as we expect that the # source host allocation is held by the migration therefore the # instance is a new, empty consumer for the dest allocation. If # this assumption fails then placement will return consumer # generation conflict and this call raise a AllocationUpdateFailed # exception. We let that propagate here to abort the migration. # NOTE(luyao): When forcing the target host we don't call the # scheduler, that means we need to get allocations from placement # first, then claim resources in resource tracker on the # destination host based on these allocations. scheduler_utils.claim_resources_on_destination( self.context, self.report_client, self.instance, source_node, dest_node, source_allocations=self._held_allocations, consumer_generation=None) try: self._check_requested_destination() except Exception: with excutils.save_and_reraise_exception(): self._remove_host_allocations(dest_node.uuid) # dest_node is a ComputeNode object, so we need to get the actual # node name off it to set in the Migration object below. dest_node = dest_node.hypervisor_hostname self.instance.availability_zone = ( availability_zones.get_host_availability_zone( self.context, self.destination)) self.migration.source_node = self.instance.node self.migration.dest_node = dest_node self.migration.dest_compute = self.destination self.migration.save() # TODO(johngarbutt) need to move complexity out of compute manager # TODO(johngarbutt) disk_over_commit? return self.compute_rpcapi.live_migration(self.context, host=self.source, instance=self.instance, dest=self.destination, block_migration=self.block_migration, migration=self.migration, migrate_data=self.migrate_data) def rollback(self, ex): # TODO(johngarbutt) need to implement the clean up operation # but this will make sense only once we pull in the compute # calls, since this class currently makes no state changes, # except to call the compute method, that has no matching # rollback call right now. if self._held_allocations: migrate.revert_allocation_for_migration(self.context, self._source_cn, self.instance, self.migration) def _check_instance_is_active(self): if self.instance.power_state not in (power_state.RUNNING, power_state.PAUSED): raise exception.InstanceInvalidState( instance_uuid=self.instance.uuid, attr='power_state', state=power_state.STATE_MAP[self.instance.power_state], method='live migrate') def _check_instance_has_no_numa(self): """Prevent live migrations of instances with NUMA topologies. TODO(artom) Remove this check in compute RPC 6.0. """ if not self.instance.numa_topology: return # Only KVM (libvirt) supports NUMA topologies with CPU pinning; # HyperV's vNUMA feature doesn't allow specific pinning hypervisor_type = objects.ComputeNode.get_by_host_and_nodename( self.context, self.source, self.instance.node).hypervisor_type # KVM is not a hypervisor, so when using a virt_type of "kvm" the # hypervisor_type will still be "QEMU". 
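# Illustrative only: the enable_numa_live_migration workaround consulted
# further below is an operator setting in nova.conf, e.g.:
#     [workarounds]
#     enable_numa_live_migration = True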
if hypervisor_type.lower() != obj_fields.HVType.QEMU: return # We're fully upgraded to a version that supports NUMA live # migration, carry on. if objects.Service.get_minimum_version( self.context, 'nova-compute') >= 40: return if CONF.workarounds.enable_numa_live_migration: LOG.warning( 'Instance has an associated NUMA topology, cell contains ' 'compute nodes older than train, but the ' 'enable_numa_live_migration workaround is enabled. Live ' 'migration will not be NUMA-aware. The instance NUMA ' 'topology, including related attributes such as CPU pinning, ' 'huge page and emulator thread pinning information, will not ' 'be recalculated. See bug #1289064 for more information.', instance=self.instance) else: raise exception.MigrationPreCheckError( reason='Instance has an associated NUMA topology, cell ' 'contains compute nodes older than train, and the ' 'enable_numa_live_migration workaround is disabled. ' 'Refusing to perform the live migration, as the ' 'instance NUMA topology, including related attributes ' 'such as CPU pinning, huge page and emulator thread ' 'pinning information, cannot be recalculated. See ' 'bug #1289064 for more information.') def _check_can_migrate_pci(self, src_host, dest_host): """Checks that an instance can migrate with PCI requests. At the moment support only if: 1. Instance contains VIF related PCI requests. 2. Neutron supports multiple port binding extension. 3. Src and Dest host support VIF related PCI allocations. """ if self.instance.pci_requests is None or not len( self.instance.pci_requests.requests): return for pci_request in self.instance.pci_requests.requests: if pci_request.source != objects.InstancePCIRequest.NEUTRON_PORT: # allow only VIF related PCI requests in live migration. raise exception.MigrationPreCheckError( reason= "non-VIF related PCI requests for instance " "are not allowed for live migration.") # All PCI requests are VIF related, now check neutron, # source and destination compute nodes. if not self.network_api.supports_port_binding_extension( self.context): raise exception.MigrationPreCheckError( reason="Cannot live migrate VIF with related PCI, Neutron " "does not support required port binding extension.") if not (supports_vif_related_pci_allocations(self.context, src_host) and supports_vif_related_pci_allocations(self.context, dest_host)): raise exception.MigrationPreCheckError( reason="Cannot live migrate VIF with related PCI, " "source and destination nodes do not support " "the operation.") def _check_can_migrate_specific_resources(self): """Checks that an instance can migrate with specific resources. For virtual persistent memory resource: 1. check if Instance contains vpmem resources 2. 
check if live migration with vpmem is supported """ if not self.instance.resources: return has_vpmem = False for resource in self.instance.resources: if resource.resource_class.startswith("CUSTOM_PMEM_NAMESPACE_"): has_vpmem = True break if has_vpmem and not supports_vpmem_live_migration(self.context): raise exception.MigrationPreCheckError( reason="Cannot live migrate with virtual persistent memory, " "the operation is not supported.") def _check_host_is_up(self, host): service = objects.Service.get_by_compute_host(self.context, host) if not self.servicegroup_api.service_is_up(service): raise exception.ComputeServiceUnavailable(host=host) def _check_requested_destination(self): """Performs basic pre-live migration checks for the forced host.""" # NOTE(gibi): This code path is used when the live migration is forced # to a target host and skipping the scheduler. Such operation is # rejected for servers with nested resource allocations since # I7cbd5d9fb875ebf72995362e0b6693492ce32051. So here we can safely # assume that the provider mapping is empty. self._call_livem_checks_on_host(self.destination, {}) # Make sure the forced destination host is in the same cell that the # instance currently lives in. # NOTE(mriedem): This can go away if/when the forced destination host # case calls select_destinations. source_cell_mapping = self._get_source_cell_mapping() dest_cell_mapping = self._get_destination_cell_mapping() if source_cell_mapping.uuid != dest_cell_mapping.uuid: raise exception.MigrationPreCheckError( reason=(_('Unable to force live migrate instance %s ' 'across cells.') % self.instance.uuid)) def _check_destination_is_not_source(self): if self.destination == self.source: raise exception.UnableToMigrateToSelf( instance_id=self.instance.uuid, host=self.destination) def _check_destination_has_enough_memory(self): compute = self._get_compute_info(self.destination) free_ram_mb = compute.free_ram_mb total_ram_mb = compute.memory_mb mem_inst = self.instance.memory_mb # NOTE(sbauza): Now the ComputeNode object reports an allocation ratio # that can be provided by the compute_node if new or by the controller ram_ratio = compute.ram_allocation_ratio # NOTE(sbauza): Mimic the RAMFilter logic in order to have the same # ram validation avail = total_ram_mb * ram_ratio - (total_ram_mb - free_ram_mb) if not mem_inst or avail <= mem_inst: instance_uuid = self.instance.uuid dest = self.destination reason = _("Unable to migrate %(instance_uuid)s to %(dest)s: " "Lack of memory(host:%(avail)s <= " "instance:%(mem_inst)s)") raise exception.MigrationPreCheckError(reason=reason % dict( instance_uuid=instance_uuid, dest=dest, avail=avail, mem_inst=mem_inst)) def _get_compute_info(self, host): return objects.ComputeNode.get_first_node_by_host_for_old_compat( self.context, host) def _check_compatible_with_source_hypervisor(self, destination): source_info = self._get_compute_info(self.source) destination_info = self._get_compute_info(destination) source_type = source_info.hypervisor_type destination_type = destination_info.hypervisor_type if source_type != destination_type: raise exception.InvalidHypervisorType() source_version = source_info.hypervisor_version destination_version = destination_info.hypervisor_version if source_version > destination_version: raise exception.DestinationHypervisorTooOld() return source_info, destination_info def _call_livem_checks_on_host(self, destination, provider_mapping): self._check_can_migrate_specific_resources() self._check_can_migrate_pci(self.source, destination) try: 
self.migrate_data = self.compute_rpcapi.\ check_can_live_migrate_destination(self.context, self.instance, destination, self.block_migration, self.disk_over_commit, self.migration, self.limits) except messaging.MessagingTimeout: msg = _("Timeout while checking if we can live migrate to host: " "%s") % destination raise exception.MigrationPreCheckError(msg) # Check to see that neutron supports the binding-extended API. if self.network_api.supports_port_binding_extension(self.context): if 'vifs' not in self.migrate_data: # migrate data vifs were not constructed in dest compute # during check_can_live_migrate_destination, construct a # skeleton to be updated after port binding. # TODO(adrianc): This can be removed once we move to U release self.migrate_data.vifs = migrate_data_obj.VIFMigrateData.\ create_skeleton_migrate_vifs( self.instance.get_network_info()) bindings = self._bind_ports_on_destination( destination, provider_mapping) self._update_migrate_vifs_from_bindings(self.migrate_data.vifs, bindings) @staticmethod def _get_port_profile_from_provider_mapping(port_id, provider_mappings): if port_id in provider_mappings: # NOTE(gibi): In the resource provider mapping there can be # more than one RP fulfilling a request group. But resource # requests of a Neutron port is always mapped to a # numbered request group that is always fulfilled by one # resource provider. So we only pass that single RP UUID # here. return {'allocation': provider_mappings[port_id][0]} else: return {} def _bind_ports_on_destination(self, destination, provider_mappings): LOG.debug('Start binding ports on destination host: %s', destination, instance=self.instance) # Bind ports on the destination host; returns a dict, keyed by # port ID, of a new destination host port binding dict per port # that was bound. This information is then stuffed into the # migrate_data. try: # NOTE(adrianc): migrate_data.vifs was partially filled # by destination compute if compute is new enough. # if that is the case, it may have updated the required port # profile for the destination node (e.g new PCI address if SR-IOV) # perform port binding against the requested profile ports_profile = {} for mig_vif in self.migrate_data.vifs: profile = mig_vif.profile if 'profile_json' in mig_vif else {} # NOTE(gibi): provider_mappings also contribute to the # binding profile of the ports if the port has resource # request. So we need to merge the profile information from # both sources. profile.update( self._get_port_profile_from_provider_mapping( mig_vif.port_id, provider_mappings)) if profile: ports_profile[mig_vif.port_id] = profile bindings = self.network_api.bind_ports_to_host( context=self.context, instance=self.instance, host=destination, vnic_types=None, port_profiles=ports_profile) except exception.PortBindingFailed as e: # Port binding failed for that host, try another one. raise exception.MigrationPreCheckError( reason=e.format_message()) return bindings def _update_migrate_vifs_from_bindings(self, migrate_vifs, bindings): for migrate_vif in migrate_vifs: for attr_name, attr_val in bindings[migrate_vif.port_id].items(): setattr(migrate_vif, attr_name, attr_val) def _get_source_cell_mapping(self): """Returns the CellMapping for the cell in which the instance lives :returns: nova.objects.CellMapping record for the cell where the instance currently lives. 
:raises: MigrationPreCheckError - in case a mapping is not found """ try: return objects.InstanceMapping.get_by_instance_uuid( self.context, self.instance.uuid).cell_mapping except exception.InstanceMappingNotFound: raise exception.MigrationPreCheckError( reason=(_('Unable to determine in which cell ' 'instance %s lives.') % self.instance.uuid)) def _get_destination_cell_mapping(self): """Returns the CellMapping for the destination host :returns: nova.objects.CellMapping record for the cell where the destination host is mapped. :raises: MigrationPreCheckError - in case a mapping is not found """ try: return objects.HostMapping.get_by_host( self.context, self.destination).cell_mapping except exception.HostMappingNotFound: raise exception.MigrationPreCheckError( reason=(_('Unable to determine in which cell ' 'destination host %s lives.') % self.destination)) def _get_request_spec_for_select_destinations(self, attempted_hosts=None): """Builds a RequestSpec that can be passed to select_destinations Used when calling the scheduler to pick a destination host for live migrating the instance. :param attempted_hosts: List of host names to ignore in the scheduler. This is generally at least seeded with the source host. :returns: nova.objects.RequestSpec object """ request_spec = self.request_spec # NOTE(sbauza): Force_hosts/nodes needs to be reset # if we want to make sure that the next destination # is not forced to be the original host request_spec.reset_forced_destinations() port_res_req = ( self.network_api.get_requested_resource_for_instance( self.context, self.instance.uuid)) # NOTE(gibi): When cyborg or other module wants to handle # similar non-nova resources then here we have to collect # all the external resource requests in a single list and # add them to the RequestSpec. request_spec.requested_resources = port_res_req scheduler_utils.setup_instance_group(self.context, request_spec) # We currently only support live migrating to hosts in the same # cell that the instance lives in, so we need to tell the scheduler # to limit the applicable hosts based on cell. cell_mapping = self._get_source_cell_mapping() LOG.debug('Requesting cell %(cell)s while live migrating', {'cell': cell_mapping.identity}, instance=self.instance) if ('requested_destination' in request_spec and request_spec.requested_destination): request_spec.requested_destination.cell = cell_mapping else: request_spec.requested_destination = objects.Destination( cell=cell_mapping) request_spec.ensure_project_and_user_id(self.instance) request_spec.ensure_network_metadata(self.instance) compute_utils.heal_reqspec_is_bfv( self.context, request_spec, self.instance) return request_spec def _find_destination(self): # TODO(johngarbutt) this retry loop should be shared attempted_hosts = [self.source] request_spec = self._get_request_spec_for_select_destinations( attempted_hosts) host = None while host is None: self._check_not_over_max_retries(attempted_hosts) request_spec.ignore_hosts = attempted_hosts try: selection_lists = self.query_client.select_destinations( self.context, request_spec, [self.instance.uuid], return_objects=True, return_alternates=False) # We only need the first item in the first list, as there is # only one instance, and we don't care about any alternates. selection = selection_lists[0][0] host = selection.service_host except messaging.RemoteError as ex: # TODO(ShaoHe Feng) There maybe multi-scheduler, and the # scheduling algorithm is R-R, we can let other scheduler try. 
# Note(ShaoHe Feng) There are types of RemoteError, such as # NoSuchMethod, UnsupportedVersion, we can distinguish it by # ex.exc_type. raise exception.MigrationSchedulerRPCError( reason=six.text_type(ex)) scheduler_utils.fill_provider_mapping(request_spec, selection) provider_mapping = request_spec.get_request_group_mapping() if provider_mapping: # NOTE(gibi): this call might update the pci_requests of the # instance based on the destination host if so then such change # will be persisted when post_live_migration_at_destination # runs. compute_utils.\ update_pci_request_spec_with_allocated_interface_name( self.context, self.report_client, self.instance, provider_mapping) try: self._check_compatible_with_source_hypervisor(host) self._call_livem_checks_on_host(host, provider_mapping) except (exception.Invalid, exception.MigrationPreCheckError) as e: LOG.debug("Skipping host: %(host)s because: %(e)s", {"host": host, "e": e}) attempted_hosts.append(host) # The scheduler would have created allocations against the # selected destination host in Placement, so we need to remove # those before moving on. self._remove_host_allocations(selection.compute_node_uuid) host = None # TODO(artom) We should probably just return the whole selection object # at this point. return (selection.service_host, selection.nodename, selection.limits) def _remove_host_allocations(self, compute_node_uuid): """Removes instance allocations against the given node from Placement :param compute_node_uuid: UUID of ComputeNode resource provider """ # Now remove the allocations for our instance against that node. # Note that this does not remove allocations against any other node # or shared resource provider, it's just undoing what the scheduler # allocated for the given (destination) node. self.report_client.remove_provider_tree_from_instance_allocation( self.context, self.instance.uuid, compute_node_uuid) def _check_not_over_max_retries(self, attempted_hosts): if CONF.migrate_max_retries == -1: return retries = len(attempted_hosts) - 1 if retries > CONF.migrate_max_retries: if self.migration: self.migration.status = 'failed' self.migration.save() msg = (_('Exceeded max scheduling retries %(max_retries)d for ' 'instance %(instance_uuid)s during live migration') % {'max_retries': retries, 'instance_uuid': self.instance.uuid}) raise exception.MaxRetriesExceeded(reason=msg)
nova-21.2.4/nova/conductor/tasks/migrate.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
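# A minimal, illustrative sketch (not part of this module) of how the two
# allocation helpers below are meant to pair up around a move operation;
# context, instance and migration are assumed to already exist:
#
#     source_cn, old_alloc = replace_allocation_with_migration(
#         context, instance, migration)
#     try:
#         ...  # schedule and claim resources on the destination
#     except Exception:
#         revert_allocation_for_migration(
#             context, source_cn, instance, migration)
#         raise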
from oslo_log import log as logging from oslo_serialization import jsonutils from nova import availability_zones from nova.compute import utils as compute_utils from nova.conductor.tasks import base from nova.conductor.tasks import cross_cell_migrate from nova import exception from nova.i18n import _ from nova import objects from nova.scheduler.client import report from nova.scheduler import utils as scheduler_utils LOG = logging.getLogger(__name__) def replace_allocation_with_migration(context, instance, migration): """Replace instance's allocation with one for a migration. :raises: keystoneauth1.exceptions.base.ClientException on failure to communicate with the placement API :raises: ConsumerAllocationRetrievalFailed if reading the current allocation from placement fails :raises: ComputeHostNotFound if the host of the instance is not found in the databse :raises: AllocationMoveFailed if moving the allocation from the instance.uuid to the migration.uuid fails due to parallel placement operation on the instance consumer :raises: NoValidHost if placement rejectes the update for other reasons (e.g. not enough resources) :returns: (source_compute_node, migration_allocation) """ try: source_cn = objects.ComputeNode.get_by_host_and_nodename( context, instance.host, instance.node) except exception.ComputeHostNotFound: LOG.error('Unable to find record for source ' 'node %(node)s on %(host)s', {'host': instance.host, 'node': instance.node}, instance=instance) # A generic error like this will just error out the migration # and do any rollback required raise reportclient = report.SchedulerReportClient() orig_alloc = reportclient.get_allocs_for_consumer( context, instance.uuid)['allocations'] root_alloc = orig_alloc.get(source_cn.uuid, {}).get('resources', {}) if not root_alloc: LOG.debug('Unable to find existing allocations for instance on ' 'source compute node: %s. This is normal if you are not ' 'using the FilterScheduler.', source_cn.uuid, instance=instance) return None, None # FIXME(gibi): This method is flawed in that it does not handle allocations # against sharing providers in any special way. This leads to duplicate # allocations against the sharing provider during migration. success = reportclient.move_allocations(context, instance.uuid, migration.uuid) if not success: LOG.error('Unable to replace resource claim on source ' 'host %(host)s node %(node)s for instance', {'host': instance.host, 'node': instance.node}, instance=instance) # Mimic the "no space" error that could have come from the # scheduler. Once we have an atomic replace operation, this # would be a severe error. raise exception.NoValidHost( reason=_('Unable to replace instance claim on source')) else: LOG.debug('Created allocations for migration %(mig)s on %(rp)s', {'mig': migration.uuid, 'rp': source_cn.uuid}) return source_cn, orig_alloc def revert_allocation_for_migration(context, source_cn, instance, migration): """Revert an allocation made for a migration back to the instance.""" reportclient = report.SchedulerReportClient() # FIXME(gibi): This method is flawed in that it does not handle allocations # against sharing providers in any special way. This leads to duplicate # allocations against the sharing provider during migration. 
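# Illustrative only: move_allocations is directional. The claim helper above
# moves the allocation from the instance consumer to the migration consumer,
# and this revert helper moves it back again, i.e. roughly:
#     replace_allocation_with_migration -> move_allocations(ctxt, instance.uuid, migration.uuid)
#     revert_allocation_for_migration   -> move_allocations(ctxt, migration.uuid, instance.uuid)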
success = reportclient.move_allocations(context, migration.uuid, instance.uuid) if not success: LOG.error('Unable to replace resource claim on source ' 'host %(host)s node %(node)s for instance', {'host': instance.host, 'node': instance.node}, instance=instance) else: LOG.debug('Created allocations for instance %(inst)s on %(rp)s', {'inst': instance.uuid, 'rp': source_cn.uuid}) class MigrationTask(base.TaskBase): def __init__(self, context, instance, flavor, request_spec, clean_shutdown, compute_rpcapi, query_client, report_client, host_list, network_api): super(MigrationTask, self).__init__(context, instance) self.clean_shutdown = clean_shutdown self.request_spec = request_spec self.flavor = flavor self.compute_rpcapi = compute_rpcapi self.query_client = query_client self.reportclient = report_client self.host_list = host_list self.network_api = network_api # Persist things from the happy path so we don't have to look # them up if we need to roll back self._migration = None self._held_allocations = None self._source_cn = None def _preallocate_migration(self): # If this is a rescheduled migration, don't create a new record. migration_type = ("resize" if self.instance.flavor.id != self.flavor.id else "migration") filters = {"instance_uuid": self.instance.uuid, "migration_type": migration_type, "status": "pre-migrating"} migrations = objects.MigrationList.get_by_filters(self.context, filters).objects if migrations: migration = migrations[0] else: migration = objects.Migration(context=self.context.elevated()) migration.old_instance_type_id = self.instance.flavor.id migration.new_instance_type_id = self.flavor.id migration.status = 'pre-migrating' migration.instance_uuid = self.instance.uuid migration.source_compute = self.instance.host migration.source_node = self.instance.node migration.migration_type = migration_type migration.create() self._migration = migration self._source_cn, self._held_allocations = ( replace_allocation_with_migration(self.context, self.instance, self._migration)) return migration def _set_requested_destination_cell(self, legacy_props): instance_mapping = objects.InstanceMapping.get_by_instance_uuid( self.context, self.instance.uuid) if not ('requested_destination' in self.request_spec and self.request_spec.requested_destination): self.request_spec.requested_destination = objects.Destination() targeted = 'host' in self.request_spec.requested_destination # NOTE(mriedem): If the user is allowed to perform a cross-cell resize # then add the current cell to the request spec as "preferred" so the # scheduler will (by default) weigh hosts within the current cell over # hosts in another cell, all other things being equal. If the user is # not allowed to perform cross-cell resize, then we limit the request # spec and tell the scheduler to only look at hosts in the current # cell. cross_cell_allowed = ( self.request_spec.requested_destination.allow_cross_cell_move) if targeted and cross_cell_allowed: # If a target host is specified it might be in another cell so # we cannot restrict the cell in this case. We would not prefer # the source cell in that case either since we know where the # user wants it to go. We just let the scheduler figure it out. self.request_spec.requested_destination.cell = None else: self.request_spec.requested_destination.cell = ( instance_mapping.cell_mapping) # NOTE(takashin): In the case that the target host is specified, # if the migration is failed, it is not necessary to retry # the cold migration to the same host. 
So make sure that # reschedule will not occur. if targeted: legacy_props.pop('retry', None) self.request_spec.retry = None # Log our plan before calling the scheduler. if cross_cell_allowed and targeted: LOG.debug('Not restricting cell for targeted cold migration.', instance=self.instance) elif cross_cell_allowed: LOG.debug('Allowing migration from cell %(cell)s', {'cell': instance_mapping.cell_mapping.identity}, instance=self.instance) else: LOG.debug('Restricting to cell %(cell)s while migrating', {'cell': instance_mapping.cell_mapping.identity}, instance=self.instance) def _is_selected_host_in_source_cell(self, selection): """Checks if the given Selection is in the same cell as the instance :param selection: Selection object returned from the scheduler ``select_destinations`` method. :returns: True if the host Selection is in the same cell as the instance, False otherwise. """ # Note that the context is already targeted to the current cell in # which the instance exists. same_cell = selection.cell_uuid == self.context.cell_uuid if not same_cell: LOG.debug('Selected target host %s is in cell %s and instance is ' 'in cell: %s', selection.service_host, selection.cell_uuid, self.context.cell_uuid, instance=self.instance) return same_cell def _support_resource_request(self, selection): """Returns true if the host is new enough to support resource request during migration and that the RPC API version is not pinned during rolling upgrade. """ svc = objects.Service.get_by_host_and_binary( self.context, selection.service_host, 'nova-compute') return (svc.version >= 39 and self.compute_rpcapi.supports_resize_with_qos_port( self.context)) # TODO(gibi): Remove this compat code when nova doesn't need to support # Train computes any more. def _get_host_supporting_request(self, selection_list): """Return the first compute selection from the selection_list where the service is new enough to support resource request during migration and the resources claimed successfully. :param selection_list: a list of Selection objects returned by the scheduler :return: A two tuple. The first item is a Selection object representing the host that supports the request. The second item is a list of Selection objects representing the remaining alternate hosts. :raises MaxRetriesExceeded: if none of the hosts in the selection_list is new enough to support the request or we cannot claim resource on any of the hosts that are new enough. """ if not self.request_spec.requested_resources: return selection_list[0], selection_list[1:] # Scheduler allocated resources on the first host. So check if the # first host is new enough if self._support_resource_request(selection_list[0]): return selection_list[0], selection_list[1:] # First host is old, so we need to use an alternate. Therefore we have # to remove the allocation from the first host. self.reportclient.delete_allocation_for_instance( self.context, self.instance.uuid) LOG.debug( 'Scheduler returned host %(host)s as a possible migration target ' 'but that host is not new enough to support the migration with ' 'resource request %(request)s or the compute RPC is pinned to ' 'less than 5.2. 
Trying alternate hosts.', {'host': selection_list[0].service_host, 'request': self.request_spec.requested_resources}, instance=self.instance) alternates = selection_list[1:] for i, selection in enumerate(alternates): if self._support_resource_request(selection): # this host is new enough so we need to try to claim resources # on it if selection.allocation_request: alloc_req = jsonutils.loads( selection.allocation_request) resource_claimed = scheduler_utils.claim_resources( self.context, self.reportclient, self.request_spec, self.instance.uuid, alloc_req, selection.allocation_request_version) if not resource_claimed: LOG.debug( 'Scheduler returned alternate host %(host)s as a ' 'possible migration target but resource claim ' 'failed on that host. Trying another alternate.', {'host': selection.service_host}, instance=self.instance) else: return selection, alternates[i + 1:] else: # Some deployments use different schedulers that do not # use Placement, so they will not have an # allocation_request to claim with. For those cases, # there is no concept of claiming, so just assume that # the resources are available. return selection, alternates[i + 1:] else: LOG.debug( 'Scheduler returned alternate host %(host)s as a possible ' 'migration target but that host is not new enough to ' 'support the migration with resource request %(request)s ' 'or the compute RPC is pinned to less than 5.2. ' 'Trying another alternate.', {'host': selection.service_host, 'request': self.request_spec.requested_resources}, instance=self.instance) # if we reach this point then none of the hosts was new enough for the # request or we failed to claim resources on every alternate reason = ("Exhausted all hosts available during compute service level " "check for instance %(instance_uuid)s." % {"instance_uuid": self.instance.uuid}) raise exception.MaxRetriesExceeded(reason=reason) def _execute(self): # NOTE(sbauza): Force_hosts/nodes needs to be reset if we want to make # sure that the next destination is not forced to be the original host. # This needs to be done before the populate_retry call otherwise # retries will be disabled if the server was created with a forced # host/node. self.request_spec.reset_forced_destinations() # TODO(sbauza): Remove once all the scheduler.utils methods accept a # RequestSpec object in the signature. legacy_props = self.request_spec.to_legacy_filter_properties_dict() scheduler_utils.setup_instance_group(self.context, self.request_spec) # If a target host is set in a requested destination, # 'populate_retry' need not be executed. if not ('requested_destination' in self.request_spec and self.request_spec.requested_destination and 'host' in self.request_spec.requested_destination): scheduler_utils.populate_retry(legacy_props, self.instance.uuid) port_res_req = self.network_api.get_requested_resource_for_instance( self.context, self.instance.uuid) # NOTE(gibi): When cyborg or other module wants to handle similar # non-nova resources then here we have to collect all the external # resource requests in a single list and add them to the RequestSpec. self.request_spec.requested_resources = port_res_req self._set_requested_destination_cell(legacy_props) # Once _preallocate_migration() is done, the source node allocation is # moved from the instance consumer to the migration record consumer, # and the instance consumer doesn't have any allocations. 
If this is # the first time through here (not a reschedule), select_destinations # below will allocate resources on the selected destination node for # the instance consumer. If we're rescheduling, host_list is not None # and we'll call claim_resources for the instance and the selected # alternate. If we exhaust our alternates and raise MaxRetriesExceeded, # the rollback() method should revert the allocation swaparoo and move # the source node allocation from the migration record back to the # instance record. migration = self._preallocate_migration() self.request_spec.ensure_project_and_user_id(self.instance) self.request_spec.ensure_network_metadata(self.instance) compute_utils.heal_reqspec_is_bfv( self.context, self.request_spec, self.instance) # On an initial call to migrate, 'self.host_list' will be None, so we # have to call the scheduler to get a list of acceptable hosts to # migrate to. That list will consist of a selected host, along with # zero or more alternates. On a reschedule, though, the alternates will # be passed to this object and stored in 'self.host_list', so we can # pop the first alternate from the list to use for the destination, and # pass the remaining alternates to the compute. if self.host_list is None: selection = self._schedule() if not self._is_selected_host_in_source_cell(selection): # If the selected host is in another cell, we need to execute # another task to do the cross-cell migration. LOG.info('Executing cross-cell resize task starting with ' 'target host: %s', selection.service_host, instance=self.instance) task = cross_cell_migrate.CrossCellMigrationTask( self.context, self.instance, self.flavor, self.request_spec, self._migration, self.compute_rpcapi, selection, self.host_list) task.execute() return else: # This is a reschedule that will use the supplied alternate hosts # in the host_list as destinations. selection = self._reschedule() scheduler_utils.populate_filter_properties(legacy_props, selection) (host, node) = (selection.service_host, selection.nodename) # The availability_zone field was added in v1.1 of the Selection # object so make sure to handle the case where it is missing. if 'availability_zone' in selection: self.instance.availability_zone = selection.availability_zone else: self.instance.availability_zone = ( availability_zones.get_host_availability_zone( self.context, host)) LOG.debug("Calling prep_resize with selected host: %s; " "Selected node: %s; Alternates: %s", host, node, self.host_list, instance=self.instance) # RPC cast to the destination host to start the migration process. self.compute_rpcapi.prep_resize( # NOTE(mriedem): Using request_spec.image here is potentially # dangerous if it is not kept up to date (i.e. rebuild/unshelve); # seems like the sane thing to do would be to pass the current # instance.image_meta since that is what MoveClaim will use for # any NUMA topology claims on the destination host... self.context, self.instance, self.request_spec.image, self.flavor, host, migration, request_spec=self.request_spec, filter_properties=legacy_props, node=node, clean_shutdown=self.clean_shutdown, host_list=self.host_list) def _schedule(self): selection_lists = self.query_client.select_destinations( self.context, self.request_spec, [self.instance.uuid], return_objects=True, return_alternates=True) # Since there is only ever one instance to migrate per call, we # just need the first returned element. 
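        # Illustrative shape of what select_destinations() returns with
        # return_objects=True and return_alternates=True (host names are
        # hypothetical):
        #
        #   selection_lists = [
        #       [Selection(service_host='dest1'),    # selected host
        #        Selection(service_host='dest2'),    # alternate
        #        Selection(service_host='dest3')],   # alternate
        #   ]
        #
        # There is one inner list per requested instance, so with a single
        # instance the first (and only) inner list is all we need.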
selection_list = selection_lists[0] selection, self.host_list = self._get_host_supporting_request( selection_list) scheduler_utils.fill_provider_mapping(self.request_spec, selection) return selection def _reschedule(self): # Since the resources on these alternates may have been consumed and # might not be able to support the migrated instance, we need to first # claim the resources to verify the host still has sufficient # available resources. elevated = self.context.elevated() host_available = False selection = None while self.host_list and not host_available: selection = self.host_list.pop(0) if (self.request_spec.requested_resources and not self._support_resource_request(selection)): LOG.debug( 'Scheduler returned alternate host %(host)s as a possible ' 'migration target for re-schedule but that host is not ' 'new enough to support the migration with resource ' 'request %(request)s. Trying another alternate.', {'host': selection.service_host, 'request': self.request_spec.requested_resources}, instance=self.instance) continue if selection.allocation_request: alloc_req = jsonutils.loads(selection.allocation_request) else: alloc_req = None if alloc_req: # If this call succeeds, the resources on the destination # host will be claimed by the instance. host_available = scheduler_utils.claim_resources( elevated, self.reportclient, self.request_spec, self.instance.uuid, alloc_req, selection.allocation_request_version) if host_available: scheduler_utils.fill_provider_mapping( self.request_spec, selection) else: # Some deployments use different schedulers that do not # use Placement, so they will not have an # allocation_request to claim with. For those cases, # there is no concept of claiming, so just assume that # the host is valid. host_available = True # There are no more available hosts. Raise a MaxRetriesExceeded # exception in that case. if not host_available: reason = ("Exhausted all hosts available for retrying build " "failures for instance %(instance_uuid)s." % {"instance_uuid": self.instance.uuid}) raise exception.MaxRetriesExceeded(reason=reason) return selection def rollback(self, ex): if self._migration: self._migration.status = 'error' self._migration.save() if not self._held_allocations: return # NOTE(danms): We created new-style migration-based # allocations for the instance, but failed before we kicked # off the migration in the compute. Normally the latter would # do that cleanup but we never got that far, so do it here and # now. revert_allocation_for_migration(self.context, self._source_cn, self.instance, self._migration) ././@PaxHeader0000000000000000000000000000003200000000000011450 xustar000000000000000026 mtime=1636736378.31047 nova-21.2.4/nova/conf/0000775000175000017500000000000000000000000014456 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/conf/__init__.py0000664000175000017500000000734700000000000016602 0ustar00zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. # This package got introduced during the Mitaka cycle in 2015 to # have a central place where the config options of Nova can be maintained. # For more background see the blueprint "centralize-config-options" from oslo_config import cfg from nova.conf import api from nova.conf import availability_zone from nova.conf import base from nova.conf import cache from nova.conf import cinder from nova.conf import compute from nova.conf import conductor from nova.conf import configdrive from nova.conf import console from nova.conf import consoleauth from nova.conf import cyborg from nova.conf import database from nova.conf import devices from nova.conf import ephemeral_storage from nova.conf import glance from nova.conf import guestfs from nova.conf import hyperv from nova.conf import imagecache from nova.conf import ironic from nova.conf import key_manager from nova.conf import keystone from nova.conf import libvirt from nova.conf import mks from nova.conf import netconf from nova.conf import neutron from nova.conf import notifications from nova.conf import novnc from nova.conf import paths from nova.conf import pci from nova.conf import placement from nova.conf import powervm from nova.conf import quota from nova.conf import rdp from nova.conf import remote_debug from nova.conf import rpc from nova.conf import scheduler from nova.conf import serial_console from nova.conf import service from nova.conf import service_token from nova.conf import servicegroup from nova.conf import spice from nova.conf import upgrade_levels from nova.conf import vendordata from nova.conf import vmware from nova.conf import vnc from nova.conf import workarounds from nova.conf import wsgi from nova.conf import xenserver from nova.conf import zvm CONF = cfg.CONF api.register_opts(CONF) availability_zone.register_opts(CONF) base.register_opts(CONF) cache.register_opts(CONF) cinder.register_opts(CONF) compute.register_opts(CONF) conductor.register_opts(CONF) configdrive.register_opts(CONF) console.register_opts(CONF) consoleauth.register_opts(CONF) cyborg.register_opts(CONF) database.register_opts(CONF) devices.register_opts(CONF) ephemeral_storage.register_opts(CONF) glance.register_opts(CONF) guestfs.register_opts(CONF) hyperv.register_opts(CONF) mks.register_opts(CONF) imagecache.register_opts(CONF) ironic.register_opts(CONF) key_manager.register_opts(CONF) keystone.register_opts(CONF) libvirt.register_opts(CONF) netconf.register_opts(CONF) neutron.register_opts(CONF) notifications.register_opts(CONF) novnc.register_opts(CONF) paths.register_opts(CONF) pci.register_opts(CONF) placement.register_opts(CONF) powervm.register_opts(CONF) quota.register_opts(CONF) rdp.register_opts(CONF) rpc.register_opts(CONF) scheduler.register_opts(CONF) serial_console.register_opts(CONF) service.register_opts(CONF) service_token.register_opts(CONF) servicegroup.register_opts(CONF) spice.register_opts(CONF) upgrade_levels.register_opts(CONF) vendordata.register_opts(CONF) vmware.register_opts(CONF) vnc.register_opts(CONF) workarounds.register_opts(CONF) wsgi.register_opts(CONF) xenserver.register_opts(CONF) zvm.register_opts(CONF) remote_debug.register_cli_opts(CONF) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/api.py0000664000175000017500000003475200000000000015614 0ustar00zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg api_group = cfg.OptGroup('api', title='API options', help=""" Options under this group are used to define Nova API. """) auth_opts = [ cfg.StrOpt("auth_strategy", default="keystone", choices=[ ("keystone", "Use keystone for authentication."), ("noauth2", "Designed for testing only, as it does no actual " "credential checking. 'noauth2' provides administrative " "credentials only if 'admin' is specified as the username."), ], deprecated_for_removal=True, deprecated_since='21.0.0', deprecated_reason=""" The only non-default choice, ``noauth2``, is for internal development and testing purposes only and should not be used in deployments. This option and its middleware, NoAuthMiddleware[V2_18], will be removed in a future release. """, help=""" Determine the strategy to use for authentication. """), cfg.BoolOpt("use_forwarded_for", default=False, deprecated_group="DEFAULT", help=""" When True, the 'X-Forwarded-For' header is treated as the canonical remote address. When False (the default), the 'remote_address' header is used. You should only enable this if you have an HTML sanitizing proxy. """), ] metadata_opts = [ cfg.StrOpt("config_drive_skip_versions", default=("1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 " "2007-12-15 2008-02-01 2008-09-01"), deprecated_group="DEFAULT", help=""" When gathering the existing metadata for a config drive, the EC2-style metadata is returned for all versions that don't appear in this option. As of the Liberty release, the available versions are: * 1.0 * 2007-01-19 * 2007-03-01 * 2007-08-29 * 2007-10-10 * 2007-12-15 * 2008-02-01 * 2008-09-01 * 2009-04-04 The option is in the format of a single string, with each version separated by a space. Possible values: * Any string that represents zero or more versions, separated by spaces. """), cfg.ListOpt('vendordata_providers', item_type=cfg.types.String(choices=[ ('StaticJSON', 'Load a JSON file from the path configured by ' '``vendordata_jsonfile_path`` and use this as the source for ' '``vendor_data.json`` and ``vendor_data2.json``.'), ('DynamicJSON', 'Build a JSON file using values defined in ' '``vendordata_dynamic_targets`` and use this as the source ' 'for ``vendor_data2.json``.'), ]), default=['StaticJSON'], deprecated_group="DEFAULT", help=""" A list of vendordata providers. vendordata providers are how deployers can provide metadata via configdrive and metadata that is specific to their deployment. For more information on the requirements for implementing a vendordata dynamic endpoint, please see the vendordata.rst file in the nova developer reference. Related options: * ``vendordata_dynamic_targets`` * ``vendordata_dynamic_ssl_certfile`` * ``vendordata_dynamic_connect_timeout`` * ``vendordata_dynamic_read_timeout`` * ``vendordata_dynamic_failure_fatal`` """), cfg.ListOpt('vendordata_dynamic_targets', default=[], deprecated_group="DEFAULT", help=""" A list of targets for the dynamic vendordata provider. These targets are of the form ``@``. 
The dynamic vendordata provider collects metadata by contacting external REST services and querying them for information about the instance. This behaviour is documented in the vendordata.rst file in the nova developer reference. """), cfg.StrOpt('vendordata_dynamic_ssl_certfile', default='', deprecated_group="DEFAULT", help=""" Path to an optional certificate file or CA bundle to verify dynamic vendordata REST services ssl certificates against. Possible values: * An empty string, or a path to a valid certificate file Related options: * vendordata_providers * vendordata_dynamic_targets * vendordata_dynamic_connect_timeout * vendordata_dynamic_read_timeout * vendordata_dynamic_failure_fatal """), cfg.IntOpt('vendordata_dynamic_connect_timeout', default=5, min=3, deprecated_group="DEFAULT", help=""" Maximum wait time for an external REST service to connect. Possible values: * Any integer with a value greater than three (the TCP packet retransmission timeout). Note that instance start may be blocked during this wait time, so this value should be kept small. Related options: * vendordata_providers * vendordata_dynamic_targets * vendordata_dynamic_ssl_certfile * vendordata_dynamic_read_timeout * vendordata_dynamic_failure_fatal """), cfg.IntOpt('vendordata_dynamic_read_timeout', default=5, min=0, deprecated_group="DEFAULT", help=""" Maximum wait time for an external REST service to return data once connected. Possible values: * Any integer. Note that instance start is blocked during this wait time, so this value should be kept small. Related options: * vendordata_providers * vendordata_dynamic_targets * vendordata_dynamic_ssl_certfile * vendordata_dynamic_connect_timeout * vendordata_dynamic_failure_fatal """), cfg.BoolOpt('vendordata_dynamic_failure_fatal', default=False, help=""" Should failures to fetch dynamic vendordata be fatal to instance boot? Related options: * vendordata_providers * vendordata_dynamic_targets * vendordata_dynamic_ssl_certfile * vendordata_dynamic_connect_timeout * vendordata_dynamic_read_timeout """), cfg.IntOpt("metadata_cache_expiration", default=15, min=0, deprecated_group="DEFAULT", help=""" This option is the time (in seconds) to cache metadata. When set to 0, metadata caching is disabled entirely; this is generally not recommended for performance reasons. Increasing this setting should improve response times of the metadata API when under heavy load. Higher values may increase memory usage, and result in longer times for host metadata changes to take effect. """), cfg.BoolOpt("local_metadata_per_cell", default=False, help=""" Indicates that the nova-metadata API service has been deployed per-cell, so that we can have better performance and data isolation in a multi-cell deployment. Users should consider the use of this configuration depending on how neutron is setup. If you have networks that span cells, you might need to run nova-metadata API service globally. If your networks are segmented along cell boundaries, then you can run nova-metadata API service per cell. When running nova-metadata API service per cell, you should also configure each Neutron metadata-agent to point to the corresponding nova-metadata API service. """), cfg.StrOpt("dhcp_domain", deprecated_group="DEFAULT", default="novalocal", help=""" Domain name used to configure FQDN for instances. Configure a fully-qualified domain name for instance hostnames. If unset, only the hostname without a domain will be configured. Possible values: * Any string that is a valid domain name. 
"""), ] file_opts = [ cfg.StrOpt("vendordata_jsonfile_path", deprecated_group="DEFAULT", help=""" Cloud providers may store custom data in vendor data file that will then be available to the instances via the metadata service, and to the rendering of config-drive. The default class for this, JsonFileVendorData, loads this information from a JSON file, whose path is configured by this option. If there is no path set by this option, the class returns an empty dictionary. Note that when using this to provide static vendor data to a configuration drive, the nova-compute service must be configured with this option and the file must be accessible from the nova-compute host. Possible values: * Any string representing the path to the data file, or an empty string (default). """) ] osapi_opts = [ cfg.IntOpt("max_limit", default=1000, min=0, deprecated_group="DEFAULT", deprecated_name="osapi_max_limit", help=""" As a query can potentially return many thousands of items, you can limit the maximum number of items in a single response by setting this option. """), cfg.StrOpt("compute_link_prefix", deprecated_group="DEFAULT", deprecated_name="osapi_compute_link_prefix", help=""" This string is prepended to the normal URL that is returned in links to the OpenStack Compute API. If it is empty (the default), the URLs are returned unchanged. Possible values: * Any string, including an empty string (the default). """), cfg.StrOpt("glance_link_prefix", deprecated_group="DEFAULT", deprecated_name="osapi_glance_link_prefix", help=""" This string is prepended to the normal URL that is returned in links to Glance resources. If it is empty (the default), the URLs are returned unchanged. Possible values: * Any string, including an empty string (the default). """), cfg.BoolOpt("instance_list_per_project_cells", default=False, help=""" When enabled, this will cause the API to only query cell databases in which the tenant has mapped instances. This requires an additional (fast) query in the API database before each list, but also (potentially) limits the number of cell databases that must be queried to provide the result. If you have a small number of cells, or tenants are likely to have instances in all cells, then this should be False. If you have many cells, especially if you confine tenants to a small subset of those cells, this should be True. """), cfg.StrOpt("instance_list_cells_batch_strategy", default="distributed", choices=[ ("distributed", "Divide the " "limit requested by the user by the number of cells in the " "system. This requires counting the cells in the system " "initially, which will not be refreshed until service restart " "or SIGHUP. The actual batch size will be increased by 10% " "over the result of ($limit / $num_cells)."), ("fixed", "Request fixed-size batches from each cell, as defined " "by ``instance_list_cells_batch_fixed_size``. " "If the limit is smaller than the batch size, the limit " "will be used instead. If you do not wish batching to be used " "at all, setting the fixed size equal to the ``max_limit`` " "value will cause only one request per cell database to be " "issued."), ], help=""" This controls the method by which the API queries cell databases in smaller batches during large instance list operations. If batching is performed, a large instance list operation will request some fraction of the overall API limit from each cell database initially, and will re-request that same batch size as records are consumed (returned) from each cell as necessary. 
Larger batches mean less chattiness between the API and the database, but potentially more wasted effort processing the results from the database which will not be returned to the user. Any strategy will yield a batch size of at least 100 records, to avoid a user causing many tiny database queries in their request. Related options: * instance_list_cells_batch_fixed_size * max_limit """), cfg.IntOpt("instance_list_cells_batch_fixed_size", min=100, default=100, help=""" This controls the batch size of instances requested from each cell database if ``instance_list_cells_batch_strategy``` is set to ``fixed``. This integral value will define the limit issued to each cell every time a batch of instances is requested, regardless of the number of cells in the system or any other factors. Per the general logic called out in the documentation for ``instance_list_cells_batch_strategy``, the minimum value for this is 100 records per batch. Related options: * instance_list_cells_batch_strategy * max_limit """), cfg.BoolOpt("list_records_by_skipping_down_cells", default=True, help=""" When set to False, this will cause the API to return a 500 error if there is an infrastructure failure like non-responsive cells. If you want the API to skip the down cells and return the results from the up cells set this option to True. Note that from API microversion 2.69 there could be transient conditions in the deployment where certain records are not available and the results could be partial for certain requests containing those records. In those cases this option will be ignored. See "Handling Down Cells" section of the Compute API guide (https://docs.openstack.org/api-guide/compute/down_cells.html) for more information. """), ] os_network_opts = [ cfg.BoolOpt("use_neutron_default_nets", default=False, deprecated_group="DEFAULT", help=""" When True, the TenantNetworkController will query the Neutron API to get the default networks to use. Related options: * neutron_default_tenant_id """), cfg.StrOpt("neutron_default_tenant_id", default="default", deprecated_group="DEFAULT", help=""" Tenant ID for getting the default network from Neutron API (also referred in some places as the 'project ID') to use. Related options: * use_neutron_default_nets """), ] enable_inst_pw_opts = [ cfg.BoolOpt("enable_instance_password", default=True, deprecated_group="DEFAULT", help=""" Enables returning of the instance password by the relevant server API calls such as create, rebuild, evacuate, or rescue. If the hypervisor does not support password injection, then the password returned will not be correct, so if your hypervisor does not support password injection, set this to False. """) ] API_OPTS = (auth_opts + metadata_opts + file_opts + osapi_opts + os_network_opts + enable_inst_pw_opts) def register_opts(conf): conf.register_group(api_group) conf.register_opts(API_OPTS, group=api_group) def list_opts(): return {api_group: API_OPTS} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/availability_zone.py0000664000175000017500000000431000000000000020533 0ustar00zuulzuul00000000000000# Copyright (c) 2013 Intel, Inc. # Copyright (c) 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg availability_zone_opts = [ cfg.StrOpt('internal_service_availability_zone', default='internal', help=""" Availability zone for internal services. This option determines the availability zone for the various internal nova services, such as 'nova-scheduler', 'nova-conductor', etc. Possible values: * Any string representing an existing availability zone name. """), cfg.StrOpt('default_availability_zone', default='nova', help=""" Default availability zone for compute services. This option determines the default availability zone for 'nova-compute' services, which will be used if the service(s) do not belong to aggregates with availability zone metadata. Possible values: * Any string representing an existing availability zone name. """), cfg.StrOpt('default_schedule_zone', help=""" Default availability zone for instances. This option determines the default availability zone for instances, which will be used when a user does not specify one when creating an instance. The instance(s) will be bound to this availability zone for their lifetime. Possible values: * Any string representing an existing availability zone name. * None, which means that the instance can move from one availability zone to another during its lifetime if it is moved from one compute node to another. Related options: * ``[cinder]/cross_az_attach`` """), ] def register_opts(conf): conf.register_opts(availability_zone_opts) def list_opts(): return {'DEFAULT': availability_zone_opts} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/base.py0000664000175000017500000000443400000000000015747 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2011 Justin Santa Barbara # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg base_options = [ cfg.IntOpt( 'password_length', default=12, min=0, help='Length of generated instance admin passwords.'), cfg.StrOpt( 'instance_usage_audit_period', default='month', regex='^(hour|month|day|year)(@([0-9]+))?$', help=''' Time period to generate instance usages for. It is possible to define optional offset to given period by appending @ character followed by a number defining offset. Possible values: * period, example: ``hour``, ``day``, ``month` or ``year`` * period with offset, example: ``month@15`` will result in monthly audits starting on 15th day of month. 
'''), cfg.BoolOpt( 'use_rootwrap_daemon', default=False, help=''' Start and use a daemon that can run the commands that need to be run with root privileges. This option is usually enabled on nodes that run nova compute processes. '''), cfg.StrOpt( 'rootwrap_config', default="/etc/nova/rootwrap.conf", help=''' Path to the rootwrap configuration file. Goal of the root wrapper is to allow a service-specific unprivileged user to run a number of actions as the root user in the safest manner possible. The configuration file used here must match the one defined in the sudoers entry. '''), cfg.StrOpt( 'tempdir', help='Explicitly specify the temporary working directory.'), ] def register_opts(conf): conf.register_opts(base_options) def list_opts(): return {'DEFAULT': base_options} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/cache.py0000664000175000017500000000170300000000000016074 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_cache import core def register_opts(conf): core.configure(conf) def list_opts(): # The oslo_cache library returns a list of tuples return dict(core._opts.list_opts()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/cinder.py0000664000175000017500000001000700000000000016272 0ustar00zuulzuul00000000000000# Copyright (c) 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from keystoneauth1 import loading as ks_loading from oslo_config import cfg cinder_group = cfg.OptGroup( 'cinder', title='Cinder Options', help="Configuration options for the block storage") cinder_opts = [ cfg.StrOpt('catalog_info', default='volumev3::publicURL', regex=r'^\w+:\w*:.*$', help=""" Info to match when looking for cinder in the service catalog. The ```` is optional and omitted by default since it should not be necessary in most deployments. Possible values: * Format is separated values of the form: :: Note: Nova does not support the Cinder v2 API since the Nova 17.0.0 Queens release. 
Related options: * endpoint_template - Setting this option will override catalog_info """), cfg.StrOpt('endpoint_template', help=""" If this option is set then it will override service catalog lookup with this template for cinder endpoint Possible values: * URL for cinder endpoint API e.g. http://localhost:8776/v3/%(project_id)s Note: Nova does not support the Cinder v2 API since the Nova 17.0.0 Queens release. Related options: * catalog_info - If endpoint_template is not set, catalog_info will be used. """), cfg.StrOpt('os_region_name', help=""" Region name of this node. This is used when picking the URL in the service catalog. Possible values: * Any string representing region name """), cfg.IntOpt('http_retries', default=3, min=0, help=""" Number of times cinderclient should retry on any failed http call. 0 means connection is attempted only once. Setting it to any positive integer means that on failure connection is retried that many times e.g. setting it to 3 means total attempts to connect will be 4. Possible values: * Any integer value. 0 means connection is attempted only once """), cfg.BoolOpt('cross_az_attach', default=True, help=""" Allow attach between instance and volume in different availability zones. If False, volumes attached to an instance must be in the same availability zone in Cinder as the instance availability zone in Nova. This also means care should be taken when booting an instance from a volume where source is not "volume" because Nova will attempt to create a volume using the same availability zone as what is assigned to the instance. If that AZ is not in Cinder (or ``allow_availability_zone_fallback=False`` in cinder.conf), the volume create request will fail and the instance will fail the build request. By default there is no availability zone restriction on volume attach. Related options: * ``[DEFAULT]/default_schedule_zone`` """), ] def register_opts(conf): conf.register_group(cinder_group) conf.register_opts(cinder_opts, group=cinder_group) ks_loading.register_session_conf_options(conf, cinder_group.name) ks_loading.register_auth_conf_options(conf, cinder_group.name) def list_opts(): return { cinder_group.name: ( cinder_opts + ks_loading.get_session_conf_options() + ks_loading.get_auth_common_conf_options() + ks_loading.get_auth_plugin_conf_options('password') + ks_loading.get_auth_plugin_conf_options('v2password') + ks_loading.get_auth_plugin_conf_options('v3password')) } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/conf/compute.py0000664000175000017500000014327300000000000016516 0ustar00zuulzuul00000000000000# needs:check_deprecation_status # Copyright 2015 Huawei Technology corp. # Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
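# A short, self-contained illustration of how a ``[cinder]/catalog_info``
# value (documented above) decomposes. The three-field layout is inferred
# from the option's default ('volumev3::publicURL') and its validation regex
# (r'^\w+:\w*:.*$'); the helper name is illustrative and not part of Nova.


def _split_catalog_info(value):
    """Split a 'service_type:service_name:endpoint_type' triple.

    The middle field (the service name) may be empty, in which case any
    service name in the keystone catalog is accepted.
    """
    service_type, service_name, endpoint_type = value.split(':', 2)
    return service_type, service_name or None, endpoint_type


# The default selects the 'volumev3' service type, any service name, and the
# public endpoint interface.
assert _split_catalog_info('volumev3::publicURL') == (
    'volumev3', None, 'publicURL')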
import socket from oslo_config import cfg from oslo_config import types from nova.conf import paths compute_group = cfg.OptGroup( 'compute', title='Compute Manager Options', help=""" A collection of options specific to the nova-compute service. """) compute_opts = [ cfg.StrOpt('compute_driver', help=""" Defines which driver to use for controlling virtualization. Possible values: * ``libvirt.LibvirtDriver`` * ``xenapi.XenAPIDriver`` * ``fake.FakeDriver`` * ``ironic.IronicDriver`` * ``vmwareapi.VMwareVCDriver`` * ``hyperv.HyperVDriver`` * ``powervm.PowerVMDriver`` * ``zvm.ZVMDriver`` """), cfg.BoolOpt('allow_resize_to_same_host', default=False, help=""" Allow destination machine to match source for resize. Useful when testing in single-host environments. By default it is not allowed to resize to the same host. Setting this option to true will add the same host to the destination options. Also set to true if you allow the ServerGroupAffinityFilter and need to resize. """), cfg.ListOpt('non_inheritable_image_properties', default=['cache_in_nova', 'bittorrent'], help=""" Image properties that should not be inherited from the instance when taking a snapshot. This option gives an opportunity to select which image-properties should not be inherited by newly created snapshots. .. note:: The following image properties are *never* inherited regardless of whether they are listed in this configuration option or not: * cinder_encryption_key_id * cinder_encryption_key_deletion_policy * img_signature * img_signature_hash_method * img_signature_key_type * img_signature_certificate_uuid Possible values: * A comma-separated list whose item is an image property. Usually only the image properties that are only needed by base images can be included here, since the snapshots that are created from the base images don't need them. * Default list: cache_in_nova, bittorrent """), cfg.IntOpt('max_local_block_devices', default=3, help=""" Maximum number of devices that will result in a local image being created on the hypervisor node. A negative number means unlimited. Setting ``max_local_block_devices`` to 0 means that any request that attempts to create a local disk will fail. This option is meant to limit the number of local discs (so root local disc that is the result of ``imageRef`` being used when creating a server, and any other ephemeral and swap disks). 0 does not mean that images will be automatically converted to volumes and boot instances from volumes - it just means that all requests that attempt to create a local disk will fail. Possible values: * 0: Creating a local disk is not allowed. * Negative number: Allows unlimited number of local discs. * Positive number: Allows only these many number of local discs. """), cfg.ListOpt('compute_monitors', default=[], help=""" A comma-separated list of monitors that can be used for getting compute metrics. You can use the alias/name from the setuptools entry points for nova.compute.monitors.* namespaces. If no namespace is supplied, the "cpu." namespace is assumed for backwards-compatibility. NOTE: Only one monitor per namespace (For example: cpu) can be loaded at a time. Possible values: * An empty list will disable the feature (Default). * An example value that would enable the CPU bandwidth monitor that uses the virt driver variant:: compute_monitors = cpu.virt_driver """), cfg.StrOpt('default_ephemeral_format', help=""" The default format an ephemeral_volume will be formatted with on creation. 
Possible values: * ``ext2`` * ``ext3`` * ``ext4`` * ``xfs`` * ``ntfs`` (only for Windows guests) """), cfg.BoolOpt('vif_plugging_is_fatal', default=True, help=""" Determine if instance should boot or fail on VIF plugging timeout. Nova sends a port update to Neutron after an instance has been scheduled, providing Neutron with the necessary information to finish setup of the port. Once completed, Neutron notifies Nova that it has finished setting up the port, at which point Nova resumes the boot of the instance since network connectivity is now supposed to be present. A timeout will occur if the reply is not received after a given interval. This option determines what Nova does when the VIF plugging timeout event happens. When enabled, the instance will error out. When disabled, the instance will continue to boot on the assumption that the port is ready. Possible values: * True: Instances should fail after VIF plugging timeout * False: Instances should continue booting after VIF plugging timeout """), cfg.IntOpt('vif_plugging_timeout', default=300, min=0, help=""" Timeout for Neutron VIF plugging event message arrival. Number of seconds to wait for Neutron vif plugging events to arrive before continuing or failing (see 'vif_plugging_is_fatal'). If you are hitting timeout failures at scale, consider running rootwrap in "daemon mode" in the neutron agent via the ``[agent]/root_helper_daemon`` neutron configuration option. Related options: * vif_plugging_is_fatal - If ``vif_plugging_timeout`` is set to zero and ``vif_plugging_is_fatal`` is False, events should not be expected to arrive at all. """), cfg.IntOpt('arq_binding_timeout', default=300, min=1, help=""" Timeout for Accelerator Request (ARQ) bind event message arrival. Number of seconds to wait for ARQ bind resolution event to arrive. The event indicates that every ARQ for an instance has either bound successfully or failed to bind. If it does not arrive, instance bringup is aborted with an exception. """), cfg.StrOpt('injected_network_template', default=paths.basedir_def('nova/virt/interfaces.template'), help="""Path to '/etc/network/interfaces' template. The path to a template file for the '/etc/network/interfaces'-style file, which will be populated by nova and subsequently used by cloudinit. This provides a method to configure network connectivity in environments without a DHCP server. The template will be rendered using Jinja2 template engine, and receive a top-level key called ``interfaces``. This key will contain a list of dictionaries, one for each interface. Refer to the cloudinit documentaion for more information: https://cloudinit.readthedocs.io/en/latest/topics/datasources.html Possible values: * A path to a Jinja2-formatted template for a Debian '/etc/network/interfaces' file. This applies even if using a non Debian-derived guest. Related options: * ``flat_inject``: This must be set to ``True`` to ensure nova embeds network configuration information in the metadata provided through the config drive. """), cfg.StrOpt('preallocate_images', default='none', choices=[ ('none', 'No storage provisioning is done up front'), ('space', 'Storage is fully allocated at instance start') ], help=""" The image preallocation mode to use. Image preallocation allows storage for instance images to be allocated up front when the instance is initially provisioned. This ensures immediate feedback is given if enough space isn't available. 
In addition, it should significantly improve performance on writes to new blocks and may even improve I/O performance to prewritten blocks due to reduced fragmentation. """), cfg.BoolOpt('use_cow_images', default=True, help=""" Enable use of copy-on-write (cow) images. QEMU/KVM allow the use of qcow2 as backing files. By disabling this, backing files will not be used. """), cfg.BoolOpt('force_raw_images', default=True, help=""" Force conversion of backing images to raw format. Possible values: * True: Backing image files will be converted to raw image format * False: Backing image files will not be converted Related options: * ``compute_driver``: Only the libvirt driver uses this option. * ``[libvirt]/images_type``: If images_type is rbd, setting this option to False is not allowed. See the bug https://bugs.launchpad.net/nova/+bug/1816686 for more details. """), # NOTE(yamahata): ListOpt won't work because the command may include a comma. # For example: # # mkfs.ext4 -O dir_index,extent -E stride=8,stripe-width=16 # --label %(fs_label)s %(target)s # # list arguments are comma separated and there is no way to escape such # commas. cfg.MultiStrOpt('virt_mkfs', default=[], help=""" Name of the mkfs commands for ephemeral device. The format is = """), cfg.BoolOpt('resize_fs_using_block_device', default=False, help=""" Enable resizing of filesystems via a block device. If enabled, attempt to resize the filesystem by accessing the image over a block device. This is done by the host and may not be necessary if the image contains a recent version of cloud-init. Possible mechanisms require the nbd driver (for qcow and raw), or loop (for raw). """), cfg.IntOpt('timeout_nbd', default=10, min=0, help='Amount of time, in seconds, to wait for NBD device start up.'), cfg.StrOpt('pointer_model', default='usbtablet', choices=[ ('ps2mouse', 'Uses relative movement. Mouse connected by PS2'), ('usbtablet', 'Uses absolute movement. Tablet connect by USB'), (None, 'Uses default behavior provided by drivers (mouse on PS2 ' 'for libvirt x86)'), ], help=""" Generic property to specify the pointer type. Input devices allow interaction with a graphical framebuffer. For example to provide a graphic tablet for absolute cursor movement. If set, the 'hw_pointer_model' image property takes precedence over this configuration option. Related options: * usbtablet must be configured with VNC enabled or SPICE enabled and SPICE agent disabled. When used with libvirt the instance mode should be configured as HVM. """), ] resource_tracker_opts = [ cfg.StrOpt('vcpu_pin_set', deprecated_for_removal=True, deprecated_since='20.0.0', deprecated_reason=""" This option has been superseded by the ``[compute] cpu_dedicated_set`` and ``[compute] cpu_shared_set`` options, which allow things like the co-existence of pinned and unpinned instances on the same host (for the libvirt driver). """, help=""" Mask of host CPUs that can be used for ``VCPU`` resources. The behavior of this option depends on the definition of the ``[compute] cpu_dedicated_set`` option and affects the behavior of the ``[compute] cpu_shared_set`` option. * If ``[compute] cpu_dedicated_set`` is defined, defining this option will result in an error. * If ``[compute] cpu_dedicated_set`` is not defined, this option will be used to determine inventory for ``VCPU`` resources and to limit the host CPUs that both pinned and unpinned instances can be scheduled to, overriding the ``[compute] cpu_shared_set`` option. 
Possible values: * A comma-separated list of physical CPU numbers that virtual CPUs can be allocated from. Each element should be either a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a previous range. For example:: vcpu_pin_set = "4-12,^8,15" Related options: * ``[compute] cpu_dedicated_set`` * ``[compute] cpu_shared_set`` """), cfg.MultiOpt('reserved_huge_pages', item_type=types.Dict(), help=""" Number of huge/large memory pages to reserved per NUMA host cell. Possible values: * A list of valid key=value which reflect NUMA node ID, page size (Default unit is KiB) and number of pages to be reserved. For example:: reserved_huge_pages = node:0,size:2048,count:64 reserved_huge_pages = node:1,size:1GB,count:1 In this example we are reserving on NUMA node 0 64 pages of 2MiB and on NUMA node 1 1 page of 1GiB. """), cfg.IntOpt('reserved_host_disk_mb', min=0, default=0, help=""" Amount of disk resources in MB to make them always available to host. The disk usage gets reported back to the scheduler from nova-compute running on the compute nodes. To prevent the disk resources from being considered as available, this option can be used to reserve disk space for that host. Possible values: * Any positive integer representing amount of disk in MB to reserve for the host. """), cfg.IntOpt('reserved_host_memory_mb', default=512, min=0, help=""" Amount of memory in MB to reserve for the host so that it is always available to host processes. The host resources usage is reported back to the scheduler continuously from nova-compute running on the compute node. To prevent the host memory from being considered as available, this option is used to reserve memory for the host. Possible values: * Any positive integer representing amount of memory in MB to reserve for the host. """), cfg.IntOpt('reserved_host_cpus', default=0, min=0, help=""" Number of host CPUs to reserve for host processes. The host resources usage is reported back to the scheduler continuously from nova-compute running on the compute node. This value is used to determine the ``reserved`` value reported to placement. This option cannot be set if the ``[compute] cpu_shared_set`` or ``[compute] cpu_dedicated_set`` config options have been defined. When these options are defined, any host CPUs not included in these values are considered reserved for the host. Possible values: * Any positive integer representing number of physical CPUs to reserve for the host. Related options: * ``[compute] cpu_shared_set`` * ``[compute] cpu_dedicated_set`` """), ] allocation_ratio_opts = [ cfg.FloatOpt('cpu_allocation_ratio', default=None, min=0.0, help=""" Virtual CPU to physical CPU allocation ratio. This option is used to influence the hosts selected by the Placement API by configuring the allocation ratio for ``VCPU`` inventory. In addition, the ``AggregateCoreFilter`` (deprecated) will fall back to this configuration value if no per-aggregate setting is found. .. note:: This option does not affect ``PCPU`` inventory, which cannot be overcommitted. .. note:: If this option is set to something *other than* ``None`` or ``0.0``, the allocation ratio will be overwritten by the value of this option, otherwise, the allocation ratio will not change. Once set to a non-default value, it is not possible to "unset" the config to get back to the default behavior. If you want to reset back to the initial value, explicitly specify it to the value of ``initial_cpu_allocation_ratio``. 
Possible values: * Any valid positive integer or float value Related options: * ``initial_cpu_allocation_ratio`` """), cfg.FloatOpt('ram_allocation_ratio', default=None, min=0.0, help=""" Virtual RAM to physical RAM allocation ratio. This option is used to influence the hosts selected by the Placement API by configuring the allocation ratio for ``MEMORY_MB`` inventory. In addition, the ``AggregateRamFilter`` (deprecated) will fall back to this configuration value if no per-aggregate setting is found. .. note:: If this option is set to something *other than* ``None`` or ``0.0``, the allocation ratio will be overwritten by the value of this option, otherwise, the allocation ratio will not change. Once set to a non-default value, it is not possible to "unset" the config to get back to the default behavior. If you want to reset back to the initial value, explicitly specify it to the value of ``initial_ram_allocation_ratio``. Possible values: * Any valid positive integer or float value Related options: * ``initial_ram_allocation_ratio`` """), cfg.FloatOpt('disk_allocation_ratio', default=None, min=0.0, help=""" Virtual disk to physical disk allocation ratio. This option is used to influence the hosts selected by the Placement API by configuring the allocation ratio for ``DISK_GB`` inventory. In addition, the ``AggregateDiskFilter`` (deprecated) will fall back to this configuration value if no per-aggregate setting is found. When configured, a ratio greater than 1.0 will result in over-subscription of the available physical disk, which can be useful for more efficiently packing instances created with images that do not use the entire virtual disk, such as sparse or compressed images. It can be set to a value between 0.0 and 1.0 in order to preserve a percentage of the disk for uses other than instances. .. note:: If the value is set to ``>1``, we recommend keeping track of the free disk space, as the value approaching ``0`` may result in the incorrect functioning of instances using it at the moment. .. note:: If this option is set to something *other than* ``None`` or ``0.0``, the allocation ratio will be overwritten by the value of this option, otherwise, the allocation ratio will not change. Once set to a non-default value, it is not possible to "unset" the config to get back to the default behavior. If you want to reset back to the initial value, explicitly specify it to the value of ``initial_disk_allocation_ratio``. Possible values: * Any valid positive integer or float value Related options: * ``initial_disk_allocation_ratio`` """), cfg.FloatOpt('initial_cpu_allocation_ratio', default=16.0, min=0.0, help=""" Initial virtual CPU to physical CPU allocation ratio. This is only used when initially creating the ``computes_nodes`` table record for a given nova-compute service. See https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html for more details and usage scenarios. Related options: * ``cpu_allocation_ratio`` """), cfg.FloatOpt('initial_ram_allocation_ratio', default=1.5, min=0.0, help=""" Initial virtual RAM to physical RAM allocation ratio. This is only used when initially creating the ``computes_nodes`` table record for a given nova-compute service. See https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html for more details and usage scenarios. Related options: * ``ram_allocation_ratio`` """), cfg.FloatOpt('initial_disk_allocation_ratio', default=1.0, min=0.0, help=""" Initial virtual disk to physical disk allocation ratio. 
This is only used when initially creating the ``computes_nodes`` table record for a given nova-compute service. See https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html for more details and usage scenarios. Related options: * ``disk_allocation_ratio`` """) ] compute_manager_opts = [ cfg.StrOpt('console_host', default=socket.gethostname(), sample_default="", help=""" Console proxy host to be used to connect to instances on this host. It is the publicly visible name for the console host. Possible values: * Current hostname (default) or any string representing hostname. """), cfg.StrOpt('default_access_ip_network_name', help=""" Name of the network to be used to set access IPs for instances. If there are multiple IPs to choose from, an arbitrary one will be chosen. Possible values: * None (default) * Any string representing network name. """), cfg.StrOpt('instances_path', default=paths.state_path_def('instances'), sample_default="$state_path/instances", help=""" Specifies where instances are stored on the hypervisor's disk. It can point to locally attached storage or a directory on NFS. Possible values: * $state_path/instances where state_path is a config option that specifies the top-level directory for maintaining nova's state. (default) or Any string representing directory path. Related options: * ``[workarounds]/ensure_libvirt_rbd_instance_dir_cleanup`` """), cfg.BoolOpt('instance_usage_audit', default=False, help=""" This option enables periodic compute.instance.exists notifications. Each compute node must be configured to generate system usage data. These notifications are consumed by OpenStack Telemetry service. """), cfg.IntOpt('live_migration_retry_count', default=30, min=0, help=""" Maximum number of 1 second retries in live_migration. It specifies number of retries to iptables when it complains. It happens when an user continuously sends live-migration request to same host leading to concurrent request to iptables. Possible values: * Any positive integer representing retry count. """), cfg.BoolOpt('resume_guests_state_on_host_boot', default=False, help=""" This option specifies whether to start guests that were running before the host rebooted. It ensures that all of the instances on a Nova compute node resume their state each time the compute node boots or restarts. """), cfg.IntOpt('network_allocate_retries', default=0, min=0, help=""" Number of times to retry network allocation. It is required to attempt network allocation retries if the virtual interface plug fails. Possible values: * Any positive integer representing retry count. """), cfg.IntOpt('max_concurrent_builds', default=10, min=0, help=""" Limits the maximum number of instance builds to run concurrently by nova-compute. Compute service can attempt to build an infinite number of instances, if asked to do so. This limit is enforced to avoid building unlimited instance concurrently on a compute node. This value can be set per compute node. Possible Values: * 0 : treated as unlimited. * Any positive integer representing maximum concurrent builds. """), cfg.IntOpt('max_concurrent_live_migrations', default=1, min=0, help=""" Maximum number of live migrations to run concurrently. This limit is enforced to avoid outbound live migrations overwhelming the host/network and causing failures. It is not recommended that you change this unless you are very sure that doing so is safe and stable in your environment. Possible values: * 0 : treated as unlimited. 
* Any positive integer representing maximum number of live migrations to run concurrently. """), cfg.IntOpt('block_device_allocate_retries', default=60, min=0, help=""" The number of times to check for a volume to be "available" before attaching it during server create. When creating a server with block device mappings where ``source_type`` is one of ``blank``, ``image`` or ``snapshot`` and the ``destination_type`` is ``volume``, the ``nova-compute`` service will create a volume and then attach it to the server. Before the volume can be attached, it must be in status "available". This option controls how many times to check for the created volume to be "available" before it is attached. If the operation times out, the volume will be deleted if the block device mapping ``delete_on_termination`` value is True. It is recommended to configure the image cache in the block storage service to speed up this operation. See https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html for details. Possible values: * 60 (default) * If value is 0, then one attempt is made. * For any value > 0, total attempts are (value + 1) Related options: * ``block_device_allocate_retries_interval`` - controls the interval between checks """), cfg.IntOpt('sync_power_state_pool_size', default=1000, help=""" Number of greenthreads available for use to sync power states. This option can be used to reduce the number of concurrent requests made to the hypervisor or system with real instance power states for performance reasons, for example, with Ironic. Possible values: * Any positive integer representing greenthreads count. """) ] compute_group_opts = [ cfg.IntOpt('consecutive_build_service_disable_threshold', default=10, help=""" Enables reporting of build failures to the scheduler. Any nonzero value will enable sending build failure statistics to the scheduler for use by the BuildFailureWeigher. Possible values: * Any positive integer enables reporting build failures. * Zero to disable reporting build failures. Related options: * [filter_scheduler]/build_failure_weight_multiplier """), cfg.IntOpt("shutdown_retry_interval", default=10, min=1, help=""" Time to wait in seconds before resending an ACPI shutdown signal to instances. The overall time to wait is set by ``shutdown_timeout``. Possible values: * Any integer greater than 0 in seconds Related options: * ``shutdown_timeout`` """), cfg.IntOpt('resource_provider_association_refresh', default=300, min=0, mutable=True, # TODO(efried): Provide more/better explanation of what this option is # all about. Reference bug(s). Unless we're just going to remove it. help=""" Interval for updating nova-compute-side cache of the compute node resource provider's inventories, aggregates, and traits. This option specifies the number of seconds between attempts to update a provider's inventories, aggregates and traits in the local cache of the compute node. A value of zero disables cache refresh completely. The cache can be cleared manually at any time by sending SIGHUP to the compute process, causing it to be repopulated the next time the data is accessed. Possible values: * Any positive integer in seconds, or zero to disable refresh. """), cfg.StrOpt('cpu_shared_set', help=""" Mask of host CPUs that can be used for ``VCPU`` resources and offloaded emulator threads. The behavior of this option depends on the definition of the deprecated ``vcpu_pin_set`` option. 
* If ``vcpu_pin_set`` is not defined, ``[compute] cpu_shared_set`` will be be used to provide ``VCPU`` inventory and to determine the host CPUs that unpinned instances can be scheduled to. It will also be used to determine the host CPUS that instance emulator threads should be offloaded to for instances configured with the ``share`` emulator thread policy (``hw:emulator_threads_policy=share``). * If ``vcpu_pin_set`` is defined, ``[compute] cpu_shared_set`` will only be used to determine the host CPUs that instance emulator threads should be offloaded to for instances configured with the ``share`` emulator thread policy (``hw:emulator_threads_policy=share``). ``vcpu_pin_set`` will be used to provide ``VCPU`` inventory and to determine the host CPUs that both pinned and unpinned instances can be scheduled to. This behavior will be simplified in a future release when ``vcpu_pin_set`` is removed. Possible values: * A comma-separated list of physical CPU numbers that instance VCPUs can be allocated from. Each element should be either a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a previous range. For example:: cpu_shared_set = "4-12,^8,15" Related options: * ``[compute] cpu_dedicated_set``: This is the counterpart option for defining where ``PCPU`` resources should be allocated from. * ``vcpu_pin_set``: A legacy option whose definition may change the behavior of this option. """), cfg.StrOpt('cpu_dedicated_set', help=""" Mask of host CPUs that can be used for ``PCPU`` resources. The behavior of this option affects the behavior of the deprecated ``vcpu_pin_set`` option. * If this option is defined, defining ``vcpu_pin_set`` will result in an error. * If this option is not defined, ``vcpu_pin_set`` will be used to determine inventory for ``VCPU`` resources and to limit the host CPUs that both pinned and unpinned instances can be scheduled to. This behavior will be simplified in a future release when ``vcpu_pin_set`` is removed. Possible values: * A comma-separated list of physical CPU numbers that instance VCPUs can be allocated from. Each element should be either a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a previous range. For example:: cpu_dedicated_set = "4-12,^8,15" Related options: * ``[compute] cpu_shared_set``: This is the counterpart option for defining where ``VCPU`` resources should be allocated from. * ``vcpu_pin_set``: A legacy option that this option partially replaces. """), cfg.BoolOpt('live_migration_wait_for_vif_plug', default=True, help=""" Determine if the source compute host should wait for a ``network-vif-plugged`` event from the (neutron) networking service before starting the actual transfer of the guest to the destination compute host. Note that this option is read on the destination host of a live migration. If you set this option the same on all of your compute hosts, which you should do if you use the same networking backend universally, you do not have to worry about this. Before starting the transfer of the guest, some setup occurs on the destination compute host, including plugging virtual interfaces. Depending on the networking backend **on the destination host**, a ``network-vif-plugged`` event may be triggered and then received on the source compute host and the source compute can wait for that event to ensure networking is set up on the destination host before starting the guest transfer in the hypervisor. .. 
note:: The compute service cannot reliably determine which types of virtual interfaces (``port.binding:vif_type``) will send ``network-vif-plugged`` events without an accompanying port ``binding:host_id`` change. Open vSwitch and linuxbridge should be OK, but OpenDaylight is at least one known backend that will not currently work in this case, see bug https://launchpad.net/bugs/1755890 for more details. Possible values: * True: wait for ``network-vif-plugged`` events before starting guest transfer * False: do not wait for ``network-vif-plugged`` events before starting guest transfer (this is the legacy behavior) Related options: * [DEFAULT]/vif_plugging_is_fatal: if ``live_migration_wait_for_vif_plug`` is True and ``vif_plugging_timeout`` is greater than 0, and a timeout is reached, the live migration process will fail with an error but the guest transfer will not have started to the destination host * [DEFAULT]/vif_plugging_timeout: if ``live_migration_wait_for_vif_plug`` is True, this controls the amount of time to wait before timing out and either failing if ``vif_plugging_is_fatal`` is True, or simply continuing with the live migration """), cfg.IntOpt('max_concurrent_disk_ops', default=0, min=0, help=""" Number of concurrent disk-IO-intensive operations (glance image downloads, image format conversions, etc.) that we will do in parallel. If this is set too high then response time suffers. The default value of 0 means no limit. """), cfg.IntOpt('max_disk_devices_to_attach', default=-1, min=-1, help=""" Maximum number of disk devices allowed to attach to a single server. Note that the number of disks supported by an server depends on the bus used. For example, the ``ide`` disk bus is limited to 4 attached devices. The configured maximum is enforced during server create, rebuild, evacuate, unshelve, live migrate, and attach volume. Usually, disk bus is determined automatically from the device type or disk device, and the virtualization type. However, disk bus can also be specified via a block device mapping or an image property. See the ``disk_bus`` field in :doc:`/user/block-device-mapping` for more information about specifying disk bus in a block device mapping, and see https://docs.openstack.org/glance/latest/admin/useful-image-properties.html for more information about the ``hw_disk_bus`` image property. Operators changing the ``[compute]/max_disk_devices_to_attach`` on a compute service that is hosting servers should be aware that it could cause rebuilds to fail, if the maximum is decreased lower than the number of devices already attached to servers. For example, if server A has 26 devices attached and an operators changes ``[compute]/max_disk_devices_to_attach`` to 20, a request to rebuild server A will fail and go into ERROR state because 26 devices are already attached and exceed the new configured maximum of 20. Operators setting ``[compute]/max_disk_devices_to_attach`` should also be aware that during a cold migration, the configured maximum is only enforced in-place and the destination is not checked before the move. This means if an operator has set a maximum of 26 on compute host A and a maximum of 20 on compute host B, a cold migration of a server with 26 attached devices from compute host A to compute host B will succeed. Then, once the server is on compute host B, a subsequent request to rebuild the server will fail and go into ERROR state because 26 devices are already attached and exceed the configured maximum of 20 on compute host B. 
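A minimal configuration sketch for the scenario above (the limit shown is arbitrary)::

    [compute]
    max_disk_devices_to_attach = 20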
The configured maximum is not enforced on shelved offloaded servers, as they have no compute host. .. warning:: If this option is set to 0, the ``nova-compute`` service will fail to start, as 0 disk devices is an invalid configuration that would prevent instances from being able to boot. Possible values: * -1 means unlimited * Any integer >= 1 represents the maximum allowed. A value of 0 will cause the ``nova-compute`` service to fail to start, as 0 disk devices is an invalid configuration that would prevent instances from being able to boot. """), ] interval_opts = [ cfg.IntOpt('bandwidth_poll_interval', default=600, help=""" Interval to pull network bandwidth usage info. Not supported on all hypervisors. If a hypervisor doesn't support bandwidth usage, it will not get the info in the usage events. Possible values: * 0: Will run at the default periodic interval. * Any value < 0: Disables the option. * Any positive integer in seconds. """), cfg.IntOpt('sync_power_state_interval', default=600, help=""" Interval to sync power states between the database and the hypervisor. The interval that Nova checks the actual virtual machine power state and the power state that Nova has in its database. If a user powers down their VM, Nova updates the API to report the VM has been powered down. Should something turn on the VM unexpectedly, Nova will turn the VM back off to keep the system in the expected state. Possible values: * 0: Will run at the default periodic interval. * Any value < 0: Disables the option. * Any positive integer in seconds. Related options: * If ``handle_virt_lifecycle_events`` in the ``workarounds`` group is false and this option is negative, then instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually. """), cfg.IntOpt('heal_instance_info_cache_interval', default=60, help=""" Interval between instance network information cache updates. Number of seconds after which each compute node runs the task of querying Neutron for all of its instances networking information, then updates the Nova db with that information. Nova will never update it's cache if this option is set to 0. If we don't update the cache, the metadata service and nova-api endpoints will be proxying incorrect network data about the instance. So, it is not recommended to set this option to 0. Possible values: * Any positive integer in seconds. * Any value <=0 will disable the sync. This is not recommended. """), cfg.IntOpt('reclaim_instance_interval', default=0, help=""" Interval for reclaiming deleted instances. A value greater than 0 will enable SOFT_DELETE of instances. This option decides whether the server to be deleted will be put into the SOFT_DELETED state. If this value is greater than 0, the deleted server will not be deleted immediately, instead it will be put into a queue until it's too old (deleted time greater than the value of reclaim_instance_interval). The server can be recovered from the delete queue by using the restore action. If the deleted server remains longer than the value of reclaim_instance_interval, it will be deleted by a periodic task in the compute service automatically. Note that this option is read from both the API and compute nodes, and must be set globally otherwise servers could be put into a soft deleted state in the API and never actually reclaimed (deleted) on the compute node. .. note:: When using this option, you should also configure the ``[cinder]`` auth options, e.g. ``auth_type``, ``auth_url``, ``username``, etc. 
Since the reclaim happens in a periodic task, there is no user token to cleanup volumes attached to any SOFT_DELETED servers so nova must be configured with administrator role access to cleanup those resources in cinder. Possible values: * Any positive integer(in seconds) greater than 0 will enable this option. * Any value <=0 will disable the option. Related options: * [cinder] auth options for cleaning up volumes attached to servers during the reclaim process """), cfg.IntOpt('volume_usage_poll_interval', default=0, help=""" Interval for gathering volume usages. This option updates the volume usage cache for every volume_usage_poll_interval number of seconds. Possible values: * Any positive integer(in seconds) greater than 0 will enable this option. * Any value <=0 will disable the option. """), cfg.IntOpt('shelved_poll_interval', default=3600, help=""" Interval for polling shelved instances to offload. The periodic task runs for every shelved_poll_interval number of seconds and checks if there are any shelved instances. If it finds a shelved instance, based on the 'shelved_offload_time' config value it offloads the shelved instances. Check 'shelved_offload_time' config option description for details. Possible values: * Any value <= 0: Disables the option. * Any positive integer in seconds. Related options: * ``shelved_offload_time`` """), cfg.IntOpt('shelved_offload_time', default=0, help=""" Time before a shelved instance is eligible for removal from a host. By default this option is set to 0 and the shelved instance will be removed from the hypervisor immediately after shelve operation. Otherwise, the instance will be kept for the value of shelved_offload_time(in seconds) so that during the time period the unshelve action will be faster, then the periodic task will remove the instance from hypervisor after shelved_offload_time passes. Possible values: * 0: Instance will be immediately offloaded after being shelved. * Any value < 0: An instance will never offload. * Any positive integer in seconds: The instance will exist for the specified number of seconds before being offloaded. """), # NOTE(melwitt): We're also using this option as the interval for cleaning # up expired console authorizations from the database. It's related to the # delete_instance_interval in that it's another task for cleaning up # resources related to an instance. cfg.IntOpt('instance_delete_interval', default=300, help=""" Interval for retrying failed instance file deletes. This option depends on 'maximum_instance_delete_attempts'. This option specifies how often to retry deletes whereas 'maximum_instance_delete_attempts' specifies the maximum number of retry attempts that can be made. Possible values: * 0: Will run at the default periodic interval. * Any value < 0: Disables the option. * Any positive integer in seconds. Related options: * ``maximum_instance_delete_attempts`` from instance_cleaning_opts group. """), cfg.IntOpt('block_device_allocate_retries_interval', default=3, min=0, help=""" Interval (in seconds) between block device allocation retries on failures. This option allows the user to specify the time interval between consecutive retries. The ``block_device_allocate_retries`` option specifies the maximum number of retries. Possible values: * 0: Disables the option. * Any positive integer in seconds enables the option. 
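As a worked example using the defaults (values shown for illustration), the following configuration makes ``nova-compute`` poll the volume status up to 61 times, waiting 3 seconds between checks, i.e. roughly three minutes in total, before the allocation is treated as failed::

    [DEFAULT]
    block_device_allocate_retries = 60
    block_device_allocate_retries_interval = 3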
Related options: * ``block_device_allocate_retries`` - controls the number of retries """), cfg.IntOpt('scheduler_instance_sync_interval', default=120, help=""" Interval between sending the scheduler a list of current instance UUIDs to verify that its view of instances is in sync with nova. If the CONF option 'scheduler_tracks_instance_changes' is False, the sync calls will not be made. So, changing this option will have no effect. If the out of sync situations are not very common, this interval can be increased to lower the number of RPC messages being sent. Likewise, if sync issues turn out to be a problem, the interval can be lowered to check more frequently. Possible values: * 0: Will run at the default periodic interval. * Any value < 0: Disables the option. * Any positive integer in seconds. Related options: * This option has no impact if ``scheduler_tracks_instance_changes`` is set to False. """), cfg.IntOpt('update_resources_interval', default=0, help=""" Interval for updating compute resources. This option specifies how often the update_available_resource periodic task should run. A number less than 0 means to disable the task completely. Leaving this at the default of 0 will cause this to run at the default periodic interval. Setting it to any positive value will cause it to run at approximately that number of seconds. Possible values: * 0: Will run at the default periodic interval. * Any value < 0: Disables the option. * Any positive integer in seconds. """) ] timeout_opts = [ cfg.IntOpt("reboot_timeout", default=0, min=0, help=""" Time interval after which an instance is hard rebooted automatically. When doing a soft reboot, it is possible that a guest kernel is completely hung in a way that causes the soft reboot task to not ever finish. Setting this option to a time period in seconds will automatically hard reboot an instance if it has been stuck in a rebooting state longer than N seconds. Possible values: * 0: Disables the option (default). * Any positive integer in seconds: Enables the option. """), cfg.IntOpt("instance_build_timeout", default=0, min=0, help=""" Maximum time in seconds that an instance can take to build. If this timer expires, instance status will be changed to ERROR. Enabling this option will make sure an instance will not be stuck in BUILD state for a longer period. Possible values: * 0: Disables the option (default) * Any positive integer in seconds: Enables the option. """), cfg.IntOpt("rescue_timeout", default=0, min=0, help=""" Interval to wait before un-rescuing an instance stuck in RESCUE. Possible values: * 0: Disables the option (default) * Any positive integer in seconds: Enables the option. """), cfg.IntOpt("resize_confirm_window", default=0, min=0, help=""" Automatically confirm resizes after N seconds. Resize functionality will save the existing server before resizing. After the resize completes, user is requested to confirm the resize. The user has the opportunity to either confirm or revert all changes. Confirm resize removes the original server and changes server status from resized to active. Setting this option to a time period (in seconds) will automatically confirm the resize if the server is in resized state longer than that time. Possible values: * 0: Disables the option (default) * Any positive integer in seconds: Enables the option. """), cfg.IntOpt("shutdown_timeout", default=60, min=0, help=""" Total time to wait in seconds for an instance to perform a clean shutdown. 
It determines the overall period (in seconds) a VM is allowed to perform a clean shutdown. While performing stop, rescue and shelve, rebuild operations, configuring this option gives the VM a chance to perform a controlled shutdown before the instance is powered off. The default timeout is 60 seconds. A value of 0 (zero) means the guest will be powered off immediately with no opportunity for guest OS clean-up. The timeout value can be overridden on a per image basis by means of os_shutdown_timeout that is an image metadata setting allowing different types of operating systems to specify how much time they need to shut down cleanly. Possible values: * A positive integer or 0 (default value is 60). """) ] running_deleted_opts = [ cfg.StrOpt("running_deleted_instance_action", default="reap", choices=[ ('reap', 'Powers down the instances and deletes them'), ('log', 'Logs warning message about deletion of the resource'), ('shutdown', 'Powers down instances and marks them as ' 'non-bootable which can be later used for debugging/analysis'), ('noop', 'Takes no action'), ], help=""" The compute service periodically checks for instances that have been deleted in the database but remain running on the compute node. The above option enables action to be taken when such instances are identified. Related options: * ``running_deleted_instance_poll_interval`` * ``running_deleted_instance_timeout`` """), cfg.IntOpt("running_deleted_instance_poll_interval", default=1800, help=""" Time interval in seconds to wait between runs for the clean up action. If set to 0, above check will be disabled. If "running_deleted_instance _action" is set to "log" or "reap", a value greater than 0 must be set. Possible values: * Any positive integer in seconds enables the option. * 0: Disables the option. * 1800: Default value. Related options: * running_deleted_instance_action """), cfg.IntOpt("running_deleted_instance_timeout", default=0, help=""" Time interval in seconds to wait for the instances that have been marked as deleted in database to be eligible for cleanup. Possible values: * Any positive integer in seconds(default is 0). Related options: * "running_deleted_instance_action" """), ] instance_cleaning_opts = [ cfg.IntOpt('maximum_instance_delete_attempts', default=5, min=1, help=""" The number of times to attempt to reap an instance's files. This option specifies the maximum number of retry attempts that can be made. Possible values: * Any positive integer defines how many attempts are made. Related options: * ``[DEFAULT] instance_delete_interval`` can be used to disable this option. """) ] db_opts = [ cfg.StrOpt('osapi_compute_unique_server_name_scope', default='', choices=[ ('', 'An empty value means that no uniqueness check is done and ' 'duplicate names are possible'), ('project', 'The instance name check is done only for instances ' 'within the same project'), ('global', 'The instance name check is done for all instances ' 'regardless of the project'), ], help=""" Sets the scope of the check for unique instance names. The default doesn't check for unique names. If a scope for the name check is set, a launch of a new instance or an update of an existing instance with a duplicate name will result in an ''InstanceExists'' error. The uniqueness is case-insensitive. Setting this option can increase the usability for end users as they don't have to distinguish among instances with the same name by their IDs. 
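For example, to reject duplicate display names within a single project (illustrative)::

    [DEFAULT]
    osapi_compute_unique_server_name_scope = project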
"""), cfg.BoolOpt('enable_new_services', default=True, help=""" Enable new nova-compute services on this host automatically. When a new nova-compute service starts up, it gets registered in the database as an enabled service. Sometimes it can be useful to register new compute services in disabled state and then enabled them at a later point in time. This option only sets this behavior for nova-compute services, it does not auto-disable other services like nova-conductor, nova-scheduler, or nova-osapi_compute. Possible values: * ``True``: Each new compute service is enabled as soon as it registers itself. * ``False``: Compute services must be enabled via an os-services REST API call or with the CLI with ``nova service-enable ``, otherwise they are not ready to use. """), cfg.StrOpt('instance_name_template', default='instance-%08x', help=""" Template string to be used to generate instance names. This template controls the creation of the database name of an instance. This is *not* the display name you enter when creating an instance (via Horizon or CLI). For a new deployment it is advisable to change the default value (which uses the database autoincrement) to another value which makes use of the attributes of an instance, like ``instance-%(uuid)s``. If you already have instances in your deployment when you change this, your deployment will break. Possible values: * A string which either uses the instance database ID (like the default) * A string with a list of named database columns, for example ``%(id)d`` or ``%(uuid)s`` or ``%(hostname)s``. """), ] ALL_OPTS = (compute_opts + resource_tracker_opts + allocation_ratio_opts + compute_manager_opts + interval_opts + timeout_opts + running_deleted_opts + instance_cleaning_opts + db_opts) def register_opts(conf): conf.register_opts(ALL_OPTS) conf.register_group(compute_group) conf.register_opts(compute_group_opts, group=compute_group) def list_opts(): return {'DEFAULT': ALL_OPTS, 'compute': compute_group_opts} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/conductor.py0000664000175000017500000000324600000000000017035 0ustar00zuulzuul00000000000000# Copyright (c) 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg conductor_group = cfg.OptGroup( 'conductor', title='Conductor Options', help=""" Options under this group are used to define Conductor's communication, which manager should be act as a proxy between computes and database, and finally, how many worker processes will be used. """, ) ALL_OPTS = [ cfg.IntOpt( 'workers', help=""" Number of workers for OpenStack Conductor service. The default will be the number of CPUs available. """), ] migrate_opts = [ cfg.IntOpt( 'migrate_max_retries', default=-1, min=-1, help=""" Number of times to retry live-migration before failing. 
Possible values: * If == -1, try until out of hosts (default) * If == 0, only try once, no retries * Integer greater than 0 """), ] def register_opts(conf): conf.register_group(conductor_group) conf.register_opts(ALL_OPTS, group=conductor_group) conf.register_opts(migrate_opts) def list_opts(): return {"DEFAULT": migrate_opts, conductor_group: ALL_OPTS} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/configdrive.py0000664000175000017500000001022200000000000017324 0ustar00zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg config_drive_opts = [ cfg.StrOpt('config_drive_format', default='iso9660', deprecated_for_removal=True, deprecated_since='19.0.0', deprecated_reason=""" This option was originally added as a workaround for bug in libvirt, #1246201, that was resolved in libvirt v1.2.17. As a result, this option is no longer necessary or useful. """, choices=[ ('iso9660', 'A file system image standard that is widely ' 'supported across operating systems.'), ('vfat', 'Provided for legacy reasons and to enable live ' 'migration with the libvirt driver and non-shared storage')], help=""" Config drive format. Config drive format that will contain metadata attached to the instance when it boots. Related options: * This option is meaningful when one of the following alternatives occur: 1. ``force_config_drive`` option set to ``true`` 2. the REST API call to create the instance contains an enable flag for config drive option 3. the image used to create the instance requires a config drive, this is defined by ``img_config_drive`` property for that image. * A compute node running Hyper-V hypervisor can be configured to attach config drive as a CD drive. To attach the config drive as a CD drive, set the ``[hyperv] config_drive_cdrom`` option to true. """), cfg.BoolOpt('force_config_drive', default=False, help=""" Force injection to take place on a config drive When this option is set to true config drive functionality will be forced enabled by default, otherwise users can still enable config drives via the REST API or image metadata properties. Launched instances are not affected by this option. Possible values: * True: Force to use of config drive regardless the user's input in the REST API call. * False: Do not force use of config drive. Config drives can still be enabled via the REST API or image metadata properties. Related options: * Use the 'mkisofs_cmd' flag to set the path where you install the genisoimage program. If genisoimage is in same path as the nova-compute service, you do not need to set this flag. * To use a config drive with Hyper-V, you must set the 'mkisofs_cmd' value to the full path to an mkisofs.exe installation. Additionally, you must set the qemu_img_cmd value in the hyperv configuration section to the full path to an qemu-img command installation. 
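A minimal example that forces creation of a config drive for newly launched instances (illustrative)::

    [DEFAULT]
    force_config_drive = True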
"""), cfg.StrOpt('mkisofs_cmd', default='genisoimage', help=""" Name or path of the tool used for ISO image creation. Use the ``mkisofs_cmd`` flag to set the path where you install the ``genisoimage`` program. If ``genisoimage`` is on the system path, you do not need to change the default value. To use a config drive with Hyper-V, you must set the ``mkisofs_cmd`` value to the full path to an ``mkisofs.exe`` installation. Additionally, you must set the ``qemu_img_cmd`` value in the hyperv configuration section to the full path to an ``qemu-img`` command installation. Possible values: * Name of the ISO image creator program, in case it is in the same directory as the nova-compute service * Path to ISO image creator program Related options: * This option is meaningful when config drives are enabled. * To use config drive with Hyper-V, you must set the ``qemu_img_cmd`` value in the hyperv configuration section to the full path to an ``qemu-img`` command installation. """), ] def register_opts(conf): conf.register_opts(config_drive_opts) def list_opts(): return {"DEFAULT": config_drive_opts} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/conf/console.py0000664000175000017500000000563500000000000016503 0ustar00zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg console_group = cfg.OptGroup('console', title='Console Options', help=""" Options under this group allow to tune the configuration of the console proxy service. Note: in configuration of every compute is a ``console_host`` option, which allows to select the console proxy service to connect to. """) console_opts = [ cfg.ListOpt('allowed_origins', default=[], deprecated_group='DEFAULT', deprecated_name='console_allowed_origins', help=""" Adds list of allowed origins to the console websocket proxy to allow connections from other origin hostnames. Websocket proxy matches the host header with the origin header to prevent cross-site requests. This list specifies if any there are values other than host are allowed in the origin header. Possible values: * A list where each element is an allowed origin hostnames, else an empty list """), cfg.StrOpt('ssl_ciphers', help=""" OpenSSL cipher preference string that specifies what ciphers to allow for TLS connections from clients. 
For example:: ssl_ciphers = "kEECDH+aECDSA+AES:kEECDH+AES+aRSA:kEDH+aRSA+AES" See the man page for the OpenSSL `ciphers` command for details of the cipher preference string format and allowed values:: https://www.openssl.org/docs/man1.1.0/man1/ciphers.html Related options: * [DEFAULT] cert * [DEFAULT] key """), cfg.StrOpt('ssl_minimum_version', default='default', choices=[ # These values must align with SSL_OPTIONS in # websockify/websocketproxy.py ('default', 'Use the underlying system OpenSSL defaults'), ('tlsv1_1', 'Require TLS v1.1 or greater for TLS connections'), ('tlsv1_2', 'Require TLS v1.2 or greater for TLS connections'), ('tlsv1_3', 'Require TLS v1.3 or greater for TLS connections'), ], help=""" Minimum allowed SSL/TLS protocol version. Related options: * [DEFAULT] cert * [DEFAULT] key """), ] def register_opts(conf): conf.register_group(console_group) conf.register_opts(console_opts, group=console_group) def list_opts(): return { console_group: console_opts, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/consoleauth.py0000664000175000017500000000262700000000000017363 0ustar00zuulzuul00000000000000# Copyright (c) 2016 Intel, Inc. # Copyright (c) 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg consoleauth_group = cfg.OptGroup( name='consoleauth', title='Console auth options') consoleauth_opts = [ cfg.IntOpt('token_ttl', default=600, min=0, deprecated_name='console_token_ttl', deprecated_group='DEFAULT', help=""" The lifetime of a console auth token (in seconds). A console auth token is used in authorizing console access for a user. Once the auth token time to live count has elapsed, the token is considered expired. Expired tokens are then deleted. """) ] def register_opts(conf): conf.register_group(consoleauth_group) conf.register_opts(consoleauth_opts, group=consoleauth_group) def list_opts(): return {consoleauth_group: consoleauth_opts} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/cyborg.py0000664000175000017500000000245000000000000016316 0ustar00zuulzuul00000000000000# Copyright 2019 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from keystoneauth1 import loading as ks_loading from oslo_config import cfg from nova.conf import utils as confutils DEFAULT_SERVICE_TYPE = 'accelerator' CYBORG_GROUP = 'cyborg' cyborg_group = cfg.OptGroup( CYBORG_GROUP, title='Cyborg Options', help=""" Configuration options for Cyborg (accelerator as a service). """) def register_opts(conf): conf.register_group(cyborg_group) confutils.register_ksa_opts(conf, cyborg_group, DEFAULT_SERVICE_TYPE, include_auth=False) def list_opts(): return { cyborg_group: ( ks_loading.get_session_conf_options() + confutils.get_ksa_adapter_opts(DEFAULT_SERVICE_TYPE)) } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/conf/database.py0000664000175000017500000001142600000000000016600 0ustar00zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from oslo_db import options as oslo_db_options from nova.conf import paths _DEFAULT_SQL_CONNECTION = 'sqlite:///' + paths.state_path_def('nova.sqlite') _ENRICHED = False # NOTE(markus_z): We cannot simply do: # conf.register_opts(oslo_db_options.database_opts, 'api_database') # If we reuse a db config option for two different groups ("api_database" # and "database") and deprecate or rename a config option in one of these # groups, "oslo.config" cannot correctly determine which one to update. # That's why we copied & pasted these config options for the "api_database" # group here. See commit ba407e3 ("Add support for multiple database engines") # for more details. api_db_group = cfg.OptGroup('api_database', title='API Database Options', help=""" The *Nova API Database* is a separate database which is used for information which is used across *cells*. This database is mandatory since the Mitaka release (13.0.0). This group should **not** be configured for the ``nova-compute`` service. """) api_db_opts = [ # TODO(markus_z): This should probably have a required=True attribute cfg.StrOpt('connection', secret=True, # This help gets appended to the oslo.db help so prefix with a space. help=' Do not set this for the ``nova-compute`` service.'), cfg.StrOpt('connection_parameters', default='', help=''), cfg.BoolOpt('sqlite_synchronous', default=True, help=''), cfg.StrOpt('slave_connection', secret=True, help=''), cfg.StrOpt('mysql_sql_mode', default='TRADITIONAL', help=''), cfg.IntOpt('connection_recycle_time', default=3600, deprecated_name='idle_timeout', help=''), # TODO(markus_z): We should probably default this to 5 to not rely on the # SQLAlchemy default. Otherwise we wouldn't provide a stable default. cfg.IntOpt('max_pool_size', help=''), cfg.IntOpt('max_retries', default=10, help=''), # TODO(markus_z): This should have a minimum attribute of 0 cfg.IntOpt('retry_interval', default=10, help=''), # TODO(markus_z): We should probably default this to 10 to not rely on the # SQLAlchemy default. Otherwise we wouldn't provide a stable default. 
cfg.IntOpt('max_overflow', help=''), # TODO(markus_z): This should probably make use of the "choices" attribute. # "oslo.db" uses only the values [<0, 0, 50, 100] see module # /oslo_db/sqlalchemy/engines.py method "_setup_logging" cfg.IntOpt('connection_debug', default=0, help=''), cfg.BoolOpt('connection_trace', default=False, help=''), # TODO(markus_z): We should probably default this to 30 to not rely on the # SQLAlchemy default. Otherwise we wouldn't provide a stable default. cfg.IntOpt('pool_timeout', help='') ] # noqa def enrich_help_text(alt_db_opts): def get_db_opts(): for group_name, db_opts in oslo_db_options.list_opts(): if group_name == 'database': return db_opts return [] for db_opt in get_db_opts(): for alt_db_opt in alt_db_opts: if alt_db_opt.name == db_opt.name: # NOTE(markus_z): We can append alternative DB specific help # texts here if needed. alt_db_opt.help = db_opt.help + alt_db_opt.help def register_opts(conf): oslo_db_options.set_defaults(conf, connection=_DEFAULT_SQL_CONNECTION) conf.register_opts(api_db_opts, group=api_db_group) def list_opts(): # NOTE(markus_z): 2016-04-04: If we list the oslo_db_options here, they # get emitted twice(!) in the "sample.conf" file. First under the # namespace "nova.conf" and second under the namespace "oslo.db". This # is due to the setting in file "etc/nova/nova-config-generator.conf". # As I think it is useful to have the "oslo.db" namespace information # in the "sample.conf" file, I omit the listing of the "oslo_db_options" # here. global _ENRICHED if not _ENRICHED: enrich_help_text(api_db_opts) _ENRICHED = True return { api_db_group: api_db_opts, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/conf/devices.py0000664000175000017500000000530600000000000016456 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg devices_group = cfg.OptGroup( name='devices', title='physical or virtual device options') vgpu_opts = [ cfg.ListOpt('enabled_vgpu_types', default=[], help=""" The vGPU types enabled in the compute node. Some pGPUs (e.g. NVIDIA GRID K1) support different vGPU types. User can use this option to specify a list of enabled vGPU types that may be assigned to a guest instance. If more than one single vGPU type is provided, then for each *vGPU type* an additional section, ``[vgpu_$(VGPU_TYPE)]``, must be added to the configuration file. Each section then **must** be configured with a single configuration option, ``device_addresses``, which should be a list of PCI addresses corresponding to the physical GPU(s) to assign to this type. If one or more sections are missing (meaning that a specific type is not wanted to use for at least one physical GPU) or if no device addresses are provided, then Nova will only use the first type that was provided by ``[devices]/enabled_vgpu_types``. If the same PCI address is provided for two different types, nova-compute will return an InvalidLibvirtGPUConfig exception at restart. 
An example is as the following:: [devices] enabled_vgpu_types = nvidia-35, nvidia-36 [vgpu_nvidia-35] device_addresses = 0000:84:00.0,0000:85:00.0 [vgpu_nvidia-36] device_addresses = 0000:86:00.0 """) ] def register_opts(conf): conf.register_group(devices_group) conf.register_opts(vgpu_opts, group=devices_group) def register_dynamic_opts(conf): """Register dynamically-generated options and groups. This must be called by the service that wishes to use the options **after** the initial configuration has been loaded. """ opt = cfg.ListOpt('device_addresses', default=[], item_type=cfg.types.String()) # Register the '[vgpu_$(VGPU_TYPE)]/device_addresses' opts, implicitly # registering the '[vgpu_$(VGPU_TYPE)]' groups in the process for vgpu_type in conf.devices.enabled_vgpu_types: conf.register_opt(opt, group='vgpu_%s' % vgpu_type) def list_opts(): return {devices_group: vgpu_opts} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/ephemeral_storage.py0000664000175000017500000000375600000000000020531 0ustar00zuulzuul00000000000000# Copyright 2015 Huawei Technology corp. # Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg ephemeral_storage_encryption_group = cfg.OptGroup( name='ephemeral_storage_encryption', title='Ephemeral storage encryption options') ephemeral_storage_encryption_opts = [ cfg.BoolOpt('enabled', default=False, help=""" Enables/disables LVM ephemeral storage encryption. """), cfg.StrOpt('cipher', default='aes-xts-plain64', help=""" Cipher-mode string to be used. The cipher and mode to be used to encrypt ephemeral storage. The set of cipher-mode combinations available depends on kernel support. According to the dm-crypt documentation, the cipher is expected to be in the format: "--". Possible values: * Any crypto option listed in ``/proc/crypto``. """), cfg.IntOpt('key_size', default=512, min=1, help=""" Encryption key length in bits. The bit length of the encryption key to be used to encrypt ephemeral storage. In XTS mode only half of the bits are used for encryption key. """), ] def register_opts(conf): conf.register_group(ephemeral_storage_encryption_group) conf.register_opts(ephemeral_storage_encryption_opts, group=ephemeral_storage_encryption_group) def list_opts(): return {ephemeral_storage_encryption_group: ephemeral_storage_encryption_opts} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/conf/glance.py0000664000175000017500000001533400000000000016267 0ustar00zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from keystoneauth1 import loading as ks_loading from oslo_config import cfg from nova.conf import utils as confutils DEFAULT_SERVICE_TYPE = 'image' glance_group = cfg.OptGroup( 'glance', title='Glance Options', help='Configuration options for the Image service') glance_opts = [ # NOTE(sdague/efried): there is intentionally no default here. This # requires configuration if ksa adapter config is not used. cfg.ListOpt('api_servers', deprecated_for_removal=True, deprecated_since='21.0.0', deprecated_reason=""" Support for image service configuration via standard keystoneauth1 Adapter options was added in the 17.0.0 Queens release. The api_servers option was retained temporarily to allow consumers time to cut over to a real load balancing solution. """, help=""" List of glance api servers endpoints available to nova. https is used for ssl-based glance api servers. NOTE: The preferred mechanism for endpoint discovery is via keystoneauth1 loading options. Only use api_servers if you need multiple endpoints and are unable to use a load balancer for some reason. Possible values: * A list of any fully qualified url of the form "scheme://hostname:port[/path]" (i.e. "http://10.0.1.0:9292" or "https://my.glance.server/image"). """), cfg.IntOpt('num_retries', default=3, min=0, help=""" Enable glance operation retries. Specifies the number of retries when uploading / downloading an image to / from glance. 0 means no retries. """), cfg.ListOpt('allowed_direct_url_schemes', default=[], deprecated_for_removal=True, deprecated_since='17.0.0', deprecated_reason=""" This was originally added for the 'nova.image.download.file' FileTransfer extension which was removed in the 16.0.0 Pike release. The 'nova.image.download.modules' extension point is not maintained and there is no indication of its use in production clouds. """, help=""" List of url schemes that can be directly accessed. This option specifies a list of url schemes that can be downloaded directly via the direct_url. This direct_URL can be fetched from Image metadata which can be used by nova to get the image more efficiently. nova-compute could benefit from this by invoking a copy when it has access to the same file system as glance. Possible values: * [file], Empty list (default) """), cfg.BoolOpt('verify_glance_signatures', default=False, help=""" Enable image signature verification. nova uses the image signature metadata from glance and verifies the signature of a signed image while downloading that image. If the image signature cannot be verified or if the image signature metadata is either incomplete or unavailable, then nova will not boot the image and instead will place the instance into an error state. This provides end users with stronger assurances of the integrity of the image data they are using to create servers. Related options: * The options in the `key_manager` group, as the key_manager is used for the signature validation. * Both enable_certificate_validation and default_trusted_certificate_ids below depend on this option being enabled. 
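A minimal illustration (the flag alone is not sufficient in practice; the ``key_manager`` options referenced above must also be configured)::

    [glance]
    verify_glance_signatures = True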
"""), cfg.BoolOpt('enable_certificate_validation', default=False, deprecated_for_removal=True, deprecated_since='16.0.0', deprecated_reason=""" This option is intended to ease the transition for deployments leveraging image signature verification. The intended state long-term is for signature verification and certificate validation to always happen together. """, help=""" Enable certificate validation for image signature verification. During image signature verification nova will first verify the validity of the image's signing certificate using the set of trusted certificates associated with the instance. If certificate validation fails, signature verification will not be performed and the instance will be placed into an error state. This provides end users with stronger assurances that the image data is unmodified and trustworthy. If left disabled, image signature verification can still occur but the end user will not have any assurance that the signing certificate used to generate the image signature is still trustworthy. Related options: * This option only takes effect if verify_glance_signatures is enabled. * The value of default_trusted_certificate_ids may be used when this option is enabled. """), cfg.ListOpt('default_trusted_certificate_ids', default=[], help=""" List of certificate IDs for certificates that should be trusted. May be used as a default list of trusted certificate IDs for certificate validation. The value of this option will be ignored if the user provides a list of trusted certificate IDs with an instance API request. The value of this option will be persisted with the instance data if signature verification and certificate validation are enabled and if the user did not provide an alternative list. If left empty when certificate validation is enabled the user must provide a list of trusted certificate IDs otherwise certificate validation will fail. Related options: * The value of this option may be used if both verify_glance_signatures and enable_certificate_validation are enabled. """), cfg.BoolOpt('debug', default=False, help='Enable or disable debug logging with glanceclient.') ] deprecated_ksa_opts = { 'insecure': [cfg.DeprecatedOpt('api_insecure', group=glance_group.name)], 'cafile': [cfg.DeprecatedOpt('ca_file', group="ssl")], 'certfile': [cfg.DeprecatedOpt('cert_file', group="ssl")], 'keyfile': [cfg.DeprecatedOpt('key_file', group="ssl")], } def register_opts(conf): conf.register_group(glance_group) conf.register_opts(glance_opts, group=glance_group) confutils.register_ksa_opts( conf, glance_group, DEFAULT_SERVICE_TYPE, include_auth=False, deprecated_opts=deprecated_ksa_opts) def list_opts(): return {glance_group: ( glance_opts + ks_loading.get_session_conf_options() + confutils.get_ksa_adapter_opts(DEFAULT_SERVICE_TYPE, deprecated_opts=deprecated_ksa_opts)) } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/guestfs.py0000664000175000017500000000370200000000000016512 0ustar00zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg guestfs_group = cfg.OptGroup('guestfs', title='Guestfs Options', help=""" libguestfs is a set of tools for accessing and modifying virtual machine (VM) disk images. You can use this for viewing and editing files inside guests, scripting changes to VMs, monitoring disk used/free statistics, creating guests, P2V, V2V, performing backups, cloning VMs, building VMs, formatting disks and resizing disks. """) enable_guestfs_debug_opts = [ cfg.BoolOpt('debug', default=False, help=""" Enable/disables guestfs logging. This configures guestfs to debug messages and push them to OpenStack logging system. When set to True, it traces libguestfs API calls and enable verbose debug messages. In order to use the above feature, "libguestfs" package must be installed. Related options: Since libguestfs access and modifies VM's managed by libvirt, below options should be set to give access to those VM's. * ``libvirt.inject_key`` * ``libvirt.inject_partition`` * ``libvirt.inject_password`` """) ] def register_opts(conf): conf.register_group(guestfs_group) conf.register_opts(enable_guestfs_debug_opts, group=guestfs_group) def list_opts(): return {guestfs_group: enable_guestfs_debug_opts} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/hyperv.py0000664000175000017500000002437500000000000016360 0ustar00zuulzuul00000000000000# Copyright (c) 2016 TUBITAK BILGEM # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg hyperv_opt_group = cfg.OptGroup("hyperv", title='The Hyper-V feature', help=""" The hyperv feature allows you to configure the Hyper-V hypervisor driver to be used within an OpenStack deployment. """) hyperv_opts = [ cfg.FloatOpt('dynamic_memory_ratio', default=1.0, help=""" Dynamic memory ratio Enables dynamic memory allocation (ballooning) when set to a value greater than 1. The value expresses the ratio between the total RAM assigned to an instance and its startup RAM amount. For example a ratio of 2.0 for an instance with 1024MB of RAM implies 512MB of RAM allocated at startup. Possible values: * 1.0: Disables dynamic memory allocation (Default). * Float values greater than 1.0: Enables allocation of total implied RAM divided by this value for startup. """), cfg.BoolOpt('enable_instance_metrics_collection', default=False, help=""" Enable instance metrics collection Enables metrics collections for an instance by using Hyper-V's metric APIs. Collected data can be retrieved by other apps and services, e.g.: Ceilometer. 
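For illustration, a minimal excerpt enabling metrics collection (shown here
only as an example, not a recommendation) would be::

    [hyperv]
    enable_instance_metrics_collection = True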
"""), cfg.StrOpt('instances_path_share', default="", help=""" Instances path share The name of a Windows share mapped to the "instances_path" dir and used by the resize feature to copy files to the target host. If left blank, an administrative share (hidden network share) will be used, looking for the same "instances_path" used locally. Possible values: * "": An administrative share will be used (Default). * Name of a Windows share. Related options: * "instances_path": The directory which will be used if this option here is left blank. """), cfg.BoolOpt('limit_cpu_features', default=False, help=""" Limit CPU features This flag is needed to support live migration to hosts with different CPU features and checked during instance creation in order to limit the CPU features used by the instance. """), cfg.IntOpt('mounted_disk_query_retry_count', default=10, min=0, help=""" Mounted disk query retry count The number of times to retry checking for a mounted disk. The query runs until the device can be found or the retry count is reached. Possible values: * Positive integer values. Values greater than 1 is recommended (Default: 10). Related options: * Time interval between disk mount retries is declared with "mounted_disk_query_retry_interval" option. """), cfg.IntOpt('mounted_disk_query_retry_interval', default=5, min=0, help=""" Mounted disk query retry interval Interval between checks for a mounted disk, in seconds. Possible values: * Time in seconds (Default: 5). Related options: * This option is meaningful when the mounted_disk_query_retry_count is greater than 1. * The retry loop runs with mounted_disk_query_retry_count and mounted_disk_query_retry_interval configuration options. """), cfg.IntOpt('power_state_check_timeframe', default=60, min=0, help=""" Power state check timeframe The timeframe to be checked for instance power state changes. This option is used to fetch the state of the instance from Hyper-V through the WMI interface, within the specified timeframe. Possible values: * Timeframe in seconds (Default: 60). """), cfg.IntOpt('power_state_event_polling_interval', default=2, min=0, help=""" Power state event polling interval Instance power state change event polling frequency. Sets the listener interval for power state events to the given value. This option enhances the internal lifecycle notifications of instances that reboot themselves. It is unlikely that an operator has to change this value. Possible values: * Time in seconds (Default: 2). """), cfg.StrOpt('qemu_img_cmd', default="qemu-img.exe", help=r""" qemu-img command qemu-img is required for some of the image related operations like converting between different image types. You can get it from here: (http://qemu.weilnetz.de/) or you can install the Cloudbase OpenStack Hyper-V Compute Driver (https://cloudbase.it/openstack-hyperv-driver/) which automatically sets the proper path for this config option. You can either give the full path of qemu-img.exe or set its path in the PATH environment variable and leave this option to the default value. Possible values: * Name of the qemu-img executable, in case it is in the same directory as the nova-compute service or its path is in the PATH environment variable (Default). * Path of qemu-img command (DRIVELETTER:\PATH\TO\QEMU-IMG\COMMAND). Related options: * If the config_drive_cdrom option is False, qemu-img will be used to convert the ISO to a VHD, otherwise the config drive will remain an ISO. 
To use config drive with Hyper-V, you must set the ``mkisofs_cmd`` value to the full path to an ``mkisofs.exe`` installation. """), cfg.StrOpt('vswitch_name', help=""" External virtual switch name The Hyper-V Virtual Switch is a software-based layer-2 Ethernet network switch that is available with the installation of the Hyper-V server role. The switch includes programmatically managed and extensible capabilities to connect virtual machines to both virtual networks and the physical network. In addition, Hyper-V Virtual Switch provides policy enforcement for security, isolation, and service levels. The vSwitch represented by this config option must be an external one (not internal or private). Possible values: * If not provided, the first of a list of available vswitches is used. This list is queried using WQL. * Virtual switch name. """), cfg.IntOpt('wait_soft_reboot_seconds', default=60, min=0, help=""" Wait soft reboot seconds Number of seconds to wait for instance to shut down after soft reboot request is made. We fall back to hard reboot if instance does not shutdown within this window. Possible values: * Time in seconds (Default: 60). """), cfg.BoolOpt('config_drive_cdrom', default=False, help=""" Mount config drive as a CD drive. OpenStack can be configured to write instance metadata to a config drive, which is then attached to the instance before it boots. The config drive can be attached as a disk drive (default) or as a CD drive. Related options: * This option is meaningful with ``force_config_drive`` option set to ``True`` or when the REST API call to create an instance will have ``--config-drive=True`` flag. * ``config_drive_format`` option must be set to ``iso9660`` in order to use CD drive as the config drive image. * To use config drive with Hyper-V, you must set the ``mkisofs_cmd`` value to the full path to an ``mkisofs.exe`` installation. Additionally, you must set the ``qemu_img_cmd`` value to the full path to an ``qemu-img`` command installation. * You can configure the Compute service to always create a configuration drive by setting the ``force_config_drive`` option to ``True``. """), cfg.BoolOpt('config_drive_inject_password', default=False, help=""" Inject password to config drive. When enabled, the admin password will be available from the config drive image. Related options: * This option is meaningful when used with other options that enable config drive usage with Hyper-V, such as ``force_config_drive``. """), cfg.IntOpt('volume_attach_retry_count', default=10, min=0, help=""" Volume attach retry count The number of times to retry attaching a volume. Volume attachment is retried until success or the given retry count is reached. Possible values: * Positive integer values (Default: 10). Related options: * Time interval between attachment attempts is declared with volume_attach_retry_interval option. """), cfg.IntOpt('volume_attach_retry_interval', default=5, min=0, help=""" Volume attach retry interval Interval between volume attachment attempts, in seconds. Possible values: * Time in seconds (Default: 5). Related options: * This options is meaningful when volume_attach_retry_count is greater than 1. * The retry loop runs with volume_attach_retry_count and volume_attach_retry_interval configuration options. """), cfg.BoolOpt('enable_remotefx', default=False, help=""" Enable RemoteFX feature This requires at least one DirectX 11 capable graphics adapter for Windows / Hyper-V Server 2012 R2 or newer and RDS-Virtualization feature has to be enabled. 
Instances with RemoteFX can be requested with the following flavor extra specs: **os:resolution**. Guest VM screen resolution size. Acceptable values:: 1024x768, 1280x1024, 1600x1200, 1920x1200, 2560x1600, 3840x2160 ``3840x2160`` is only available on Windows / Hyper-V Server 2016. **os:monitors**. Guest VM number of monitors. Acceptable values:: [1, 4] - Windows / Hyper-V Server 2012 R2 [1, 8] - Windows / Hyper-V Server 2016 **os:vram**. Guest VM VRAM amount. Only available on Windows / Hyper-V Server 2016. Acceptable values:: 64, 128, 256, 512, 1024 """), cfg.BoolOpt('use_multipath_io', default=False, help=""" Use multipath connections when attaching iSCSI or FC disks. This requires the Multipath IO Windows feature to be enabled. MPIO must be configured to claim such devices. """), cfg.ListOpt('iscsi_initiator_list', default=[], help=""" List of iSCSI initiators that will be used for estabilishing iSCSI sessions. If none are specified, the Microsoft iSCSI initiator service will choose the initiator. """) ] def register_opts(conf): conf.register_group(hyperv_opt_group) conf.register_opts(hyperv_opts, group=hyperv_opt_group) def list_opts(): return {hyperv_opt_group: hyperv_opts} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/conf/imagecache.py0000664000175000017500000000667000000000000017107 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg imagecache_group = cfg.OptGroup( 'image_cache', title='Image Cache Options', help=""" A collection of options specific to image caching. """) imagecache_opts = [ cfg.IntOpt('manager_interval', default=2400, min=-1, deprecated_name='image_cache_manager_interval', deprecated_group='DEFAULT', help=""" Number of seconds to wait between runs of the image cache manager. Note that when using shared storage for the ``[DEFAULT]/instances_path`` configuration option across multiple nova-compute services, this periodic could process a large number of instances. Similarly, using a compute driver that manages a cluster (like vmwareapi.VMwareVCDriver) could result in processing a large number of instances. Therefore you may need to adjust the time interval for the anticipated load, or only run on one nova-compute service within a shared storage aggregate. Possible values: * 0: run at the default interval of 60 seconds (not recommended) * -1: disable * Any other value Related options: * ``[DEFAULT]/compute_driver`` * ``[DEFAULT]/instances_path`` """), cfg.StrOpt('subdirectory_name', default='_base', deprecated_name='image_cache_subdirectory_name', deprecated_group='DEFAULT', help=""" Location of cached images. This is NOT the full path - just a folder name relative to '$instances_path'. 
For per-compute-host cached images, set to '_base_$my_ip' """), cfg.BoolOpt('remove_unused_base_images', default=True, deprecated_group='DEFAULT', help='Should unused base images be removed?'), cfg.IntOpt('remove_unused_original_minimum_age_seconds', default=(24 * 3600), deprecated_group='DEFAULT', help=""" Unused unresized base images younger than this will not be removed. """), cfg.IntOpt('remove_unused_resized_minimum_age_seconds', default=3600, deprecated_group='libvirt', help=""" Unused resized base images younger than this will not be removed. """), cfg.IntOpt('precache_concurrency', default=1, min=1, help=""" Maximum number of compute hosts to trigger image precaching in parallel. When an image precache request is made, compute nodes will be contacted to initiate the download. This number constrains the number of those that will happen in parallel. Higher numbers will cause more computes to work in parallel and may result in reduced time to complete the operation, but may also DDoS the image service. Lower numbers will result in more sequential operation, lower image service load, but likely longer runtime to completion. """), ] ALL_OPTS = (imagecache_opts,) def register_opts(conf): conf.register_group(imagecache_group) conf.register_opts(imagecache_opts, group=imagecache_group) def list_opts(): return {imagecache_group: imagecache_opts} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/ironic.py0000664000175000017500000000654700000000000016327 0ustar00zuulzuul00000000000000# Copyright 2015 Intel Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from keystoneauth1 import loading as ks_loading from oslo_config import cfg from nova.conf import utils as confutils DEFAULT_SERVICE_TYPE = 'baremetal' ironic_group = cfg.OptGroup( 'ironic', title='Ironic Options', help=""" Configuration options for Ironic driver (Bare Metal). If using the Ironic driver following options must be set: * auth_type * auth_url * project_name * username * password * project_domain_id or project_domain_name * user_domain_id or user_domain_name """) ironic_options = [ cfg.IntOpt( 'api_max_retries', # TODO(raj_singh): Change this default to some sensible number default=60, min=0, help=""" The number of times to retry when a request conflicts. If set to 0, only try once, no retries. Related options: * api_retry_interval """), cfg.IntOpt( 'api_retry_interval', default=2, min=0, help=""" The number of seconds to wait before retrying the request. Related options: * api_max_retries """), cfg.IntOpt( 'serial_console_state_timeout', default=10, min=0, help='Timeout (seconds) to wait for node serial console state ' 'changed. Set to 0 to disable timeout.'), cfg.StrOpt( 'partition_key', default=None, mutable=True, max_length=255, regex=r'^[a-zA-Z0-9_.-]*$', help='Case-insensitive key to limit the set of nodes that may be ' 'managed by this service to the set of nodes in Ironic which ' 'have a matching conductor_group property. 
If unset, all ' 'available nodes will be eligible to be managed by this ' 'service. Note that setting this to the empty string (``""``) ' 'will match the default conductor group, and is different than ' 'leaving the option unset.'), cfg.ListOpt( 'peer_list', default=[], mutable=True, help='List of hostnames for all nova-compute services (including ' 'this host) with this partition_key config value. ' 'Nodes matching the partition_key value will be distributed ' 'between all services specified here. ' 'If partition_key is unset, this option is ignored.'), ] def register_opts(conf): conf.register_group(ironic_group) conf.register_opts(ironic_options, group=ironic_group) confutils.register_ksa_opts(conf, ironic_group, DEFAULT_SERVICE_TYPE) def list_opts(): return {ironic_group: ( ironic_options + ks_loading.get_session_conf_options() + ks_loading.get_auth_common_conf_options() + ks_loading.get_auth_plugin_conf_options('v3password') + confutils.get_ksa_adapter_opts(DEFAULT_SERVICE_TYPE)) } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/key_manager.py0000664000175000017500000000445100000000000017316 0ustar00zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from castellan import options as castellan_opts from oslo_config import cfg key_manager_group = cfg.OptGroup( 'key_manager', title='Key manager options') key_manager_opts = [ # TODO(raj_singh): Deprecate or move this option to The Castellan library # NOTE(kfarr): The ability to use fixed_key should be deprecated and # removed and Barbican should be tested in the gate instead cfg.StrOpt( 'fixed_key', deprecated_group='keymgr', secret=True, help=""" Fixed key returned by key manager, specified in hex. Possible values: * Empty string or a key in hex value """), ] def register_opts(conf): castellan_opts.set_defaults(conf) conf.register_group(key_manager_group) conf.register_opts(key_manager_opts, group=key_manager_group) def list_opts(): # Castellan library also has a group name key_manager. So if # we append list returned from Castellan to this list, oslo will remove # one group as duplicate and only one group (either from this file or # Castellan library) will show up. So fix is to merge options of same # group name from this file and Castellan library opts = {key_manager_group.name: key_manager_opts} for group, options in castellan_opts.list_opts(): if group not in opts.keys(): opts[group] = options else: opts[group] = opts[group] + options return opts # TODO(raj_singh): When the last option "fixed_key" is removed/moved from # this file, then comment in code below and delete the code block above. 
# Castellan already returned a list which can be returned # directly from list_opts() # return castellan_opts.list_opts() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/keystone.py0000664000175000017500000000235100000000000016672 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from keystoneauth1 import loading as ks_loading from oslo_config import cfg from nova.conf import utils as confutils DEFAULT_SERVICE_TYPE = 'identity' keystone_group = cfg.OptGroup( 'keystone', title='Keystone Options', help='Configuration options for the identity service') def register_opts(conf): conf.register_group(keystone_group) confutils.register_ksa_opts(conf, keystone_group.name, DEFAULT_SERVICE_TYPE, include_auth=False) def list_opts(): return { keystone_group: ( ks_loading.get_session_conf_options() + confutils.get_ksa_adapter_opts(DEFAULT_SERVICE_TYPE) ) } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/conf/libvirt.py0000664000175000017500000014472700000000000016522 0ustar00zuulzuul00000000000000# needs:fix_opt_description # needs:check_deprecation_status # needs:check_opt_group_and_type # needs:fix_opt_description_indentation # needs:fix_opt_registration_consistency # Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import itertools from oslo_config import cfg from oslo_config import types from nova.conf import paths libvirt_group = cfg.OptGroup("libvirt", title="Libvirt Options", help=""" Libvirt options allows cloud administrator to configure related libvirt hypervisor driver to be used within an OpenStack deployment. Almost all of the libvirt config options are influence by ``virt_type`` config which describes the virtualization type (or so called domain type) libvirt should use for specific features such as live migration, snapshot. """) libvirt_general_opts = [ cfg.StrOpt('rescue_image_id', help=""" The ID of the image to boot from to rescue data from a corrupted instance. If the rescue REST API operation doesn't provide an ID of an image to use, the image which is referenced by this ID is used. If this option is not set, the image from the instance is used. Possible values: * An ID of an image or nothing. If it points to an *Amazon Machine Image* (AMI), consider to set the config options ``rescue_kernel_id`` and ``rescue_ramdisk_id`` too. 
If nothing is set, the image of the instance is used. Related options: * ``rescue_kernel_id``: If the chosen rescue image allows the separate definition of its kernel disk, the value of this option is used, if specified. This is the case when *Amazon*'s AMI/AKI/ARI image format is used for the rescue image. * ``rescue_ramdisk_id``: If the chosen rescue image allows the separate definition of its RAM disk, the value of this option is used if, specified. This is the case when *Amazon*'s AMI/AKI/ARI image format is used for the rescue image. """), cfg.StrOpt('rescue_kernel_id', help=""" The ID of the kernel (AKI) image to use with the rescue image. If the chosen rescue image allows the separate definition of its kernel disk, the value of this option is used, if specified. This is the case when *Amazon*'s AMI/AKI/ARI image format is used for the rescue image. Possible values: * An ID of an kernel image or nothing. If nothing is specified, the kernel disk from the instance is used if it was launched with one. Related options: * ``rescue_image_id``: If that option points to an image in *Amazon*'s AMI/AKI/ARI image format, it's useful to use ``rescue_kernel_id`` too. """), cfg.StrOpt('rescue_ramdisk_id', help=""" The ID of the RAM disk (ARI) image to use with the rescue image. If the chosen rescue image allows the separate definition of its RAM disk, the value of this option is used, if specified. This is the case when *Amazon*'s AMI/AKI/ARI image format is used for the rescue image. Possible values: * An ID of a RAM disk image or nothing. If nothing is specified, the RAM disk from the instance is used if it was launched with one. Related options: * ``rescue_image_id``: If that option points to an image in *Amazon*'s AMI/AKI/ARI image format, it's useful to use ``rescue_ramdisk_id`` too. """), cfg.StrOpt('virt_type', default='kvm', choices=('kvm', 'lxc', 'qemu', 'uml', 'xen', 'parallels'), help=""" Describes the virtualization type (or so called domain type) libvirt should use. The choice of this type must match the underlying virtualization strategy you have chosen for this host. Related options: * ``connection_uri``: depends on this * ``disk_prefix``: depends on this * ``cpu_mode``: depends on this * ``cpu_models``: depends on this """), cfg.StrOpt('connection_uri', default='', help=""" Overrides the default libvirt URI of the chosen virtualization type. If set, Nova will use this URI to connect to libvirt. Possible values: * An URI like ``qemu:///system`` or ``xen+ssh://oirase/`` for example. This is only necessary if the URI differs to the commonly known URIs for the chosen virtualization type. Related options: * ``virt_type``: Influences what is used as default value here. """), cfg.BoolOpt('inject_password', default=False, help=""" Allow the injection of an admin password for instance only at ``create`` and ``rebuild`` process. There is no agent needed within the image to do this. If *libguestfs* is available on the host, it will be used. Otherwise *nbd* is used. The file system of the image will be mounted and the admin password, which is provided in the REST API call will be injected as password for the root user. If no root user is available, the instance won't be launched and an error is thrown. Be aware that the injection is *not* possible when the instance gets launched from a volume. *Linux* distribution guest only. Possible values: * True: Allows the injection. * False: Disallows the injection. Any via the REST API provided admin password will be silently ignored. 
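As an illustrative example only (assuming *libguestfs* is available on the
host), password injection could be enabled together with automatic root
partition discovery like this::

    [libvirt]
    inject_password = True
    inject_partition = -1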
Related options: * ``inject_partition``: That option will decide about the discovery and usage of the file system. It also can disable the injection at all. """), cfg.BoolOpt('inject_key', default=False, help=""" Allow the injection of an SSH key at boot time. There is no agent needed within the image to do this. If *libguestfs* is available on the host, it will be used. Otherwise *nbd* is used. The file system of the image will be mounted and the SSH key, which is provided in the REST API call will be injected as SSH key for the root user and appended to the ``authorized_keys`` of that user. The SELinux context will be set if necessary. Be aware that the injection is *not* possible when the instance gets launched from a volume. This config option will enable directly modifying the instance disk and does not affect what cloud-init may do using data from config_drive option or the metadata service. *Linux* distribution guest only. Related options: * ``inject_partition``: That option will decide about the discovery and usage of the file system. It also can disable the injection at all. """), cfg.IntOpt('inject_partition', default=-2, min=-2, help=""" Determines the way how the file system is chosen to inject data into it. *libguestfs* will be used a first solution to inject data. If that's not available on the host, the image will be locally mounted on the host as a fallback solution. If libguestfs is not able to determine the root partition (because there are more or less than one root partition) or cannot mount the file system it will result in an error and the instance won't be boot. Possible values: * -2 => disable the injection of data. * -1 => find the root partition with the file system to mount with libguestfs * 0 => The image is not partitioned * >0 => The number of the partition to use for the injection *Linux* distribution guest only. Related options: * ``inject_key``: If this option allows the injection of a SSH key it depends on value greater or equal to -1 for ``inject_partition``. * ``inject_password``: If this option allows the injection of an admin password it depends on value greater or equal to -1 for ``inject_partition``. * ``guestfs`` You can enable the debug log level of libguestfs with this config option. A more verbose output will help in debugging issues. * ``virt_type``: If you use ``lxc`` as virt_type it will be treated as a single partition image """), cfg.BoolOpt('use_usb_tablet', default=True, deprecated_for_removal=True, deprecated_reason="This option is being replaced by the " "'pointer_model' option.", deprecated_since='14.0.0', help=""" Enable a mouse cursor within a graphical VNC or SPICE sessions. This will only be taken into account if the VM is fully virtualized and VNC and/or SPICE is enabled. If the node doesn't support a graphical framebuffer, then it is valid to set this to False. Related options: * ``[vnc]enabled``: If VNC is enabled, ``use_usb_tablet`` will have an effect. * ``[spice]enabled`` + ``[spice].agent_enabled``: If SPICE is enabled and the spice agent is disabled, the config value of ``use_usb_tablet`` will have an effect. """), cfg.StrOpt('live_migration_scheme', help=""" URI scheme used for live migration. Override the default libvirt live migration scheme (which is dependent on virt_type). If this option is set to None, nova will automatically choose a sensible default based on the hypervisor. It is not recommended that you change this unless you are very sure that hypervisor supports a particular scheme. 
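For example, an operator who has already configured the libvirtd TLS
transport on all compute hosts might set (illustrative only)::

    [libvirt]
    live_migration_scheme = tls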
Related options: * ``virt_type``: This option is meaningful only when ``virt_type`` is set to `kvm` or `qemu`. * ``live_migration_uri``: If ``live_migration_uri`` value is not None, the scheme used for live migration is taken from ``live_migration_uri`` instead. """), cfg.HostAddressOpt('live_migration_inbound_addr', help=""" Target used for live migration traffic. If this option is set to None, the hostname of the migration target compute node will be used. This option is useful in environments where the live-migration traffic can impact the network plane significantly. A separate network for live-migration traffic can then use this config option and avoids the impact on the management network. Related options: * ``live_migration_tunnelled``: The live_migration_inbound_addr value is ignored if tunneling is enabled. """), cfg.StrOpt('live_migration_uri', deprecated_for_removal=True, deprecated_since="15.0.0", deprecated_reason=""" live_migration_uri is deprecated for removal in favor of two other options that allow to change live migration scheme and target URI: ``live_migration_scheme`` and ``live_migration_inbound_addr`` respectively. """, help=""" Live migration target URI to use. Override the default libvirt live migration target URI (which is dependent on virt_type). Any included "%s" is replaced with the migration target hostname. If this option is set to None (which is the default), Nova will automatically generate the `live_migration_uri` value based on only 4 supported `virt_type` in following list: * 'kvm': 'qemu+tcp://%s/system' * 'qemu': 'qemu+tcp://%s/system' * 'xen': 'xenmigr://%s/system' * 'parallels': 'parallels+tcp://%s/system' Related options: * ``live_migration_inbound_addr``: If ``live_migration_inbound_addr`` value is not None and ``live_migration_tunnelled`` is False, the ip/hostname address of target compute node is used instead of ``live_migration_uri`` as the uri for live migration. * ``live_migration_scheme``: If ``live_migration_uri`` is not set, the scheme used for live migration is taken from ``live_migration_scheme`` instead. """), cfg.BoolOpt('live_migration_tunnelled', default=False, help=""" Enable tunnelled migration. This option enables the tunnelled migration feature, where migration data is transported over the libvirtd connection. If enabled, we use the VIR_MIGRATE_TUNNELLED migration flag, avoiding the need to configure the network to allow direct hypervisor to hypervisor communication. If False, use the native transport. If not set, Nova will choose a sensible default based on, for example the availability of native encryption support in the hypervisor. Enabling this option will definitely impact performance massively. Note that this option is NOT compatible with use of block migration. Related options: * ``live_migration_inbound_addr``: The live_migration_inbound_addr value is ignored if tunneling is enabled. """), cfg.IntOpt('live_migration_bandwidth', default=0, help=""" Maximum bandwidth(in MiB/s) to be used during migration. If set to 0, the hypervisor will choose a suitable default. Some hypervisors do not support this feature and will return an error if bandwidth is not 0. Please refer to the libvirt documentation for further details. """), cfg.IntOpt('live_migration_downtime', default=500, min=100, help=""" Maximum permitted downtime, in milliseconds, for live migration switchover. Will be rounded up to a minimum of 100ms. 
You can increase this value if you want to allow live-migrations to complete faster, or avoid live-migration timeout errors by allowing the guest to be paused for longer during the live-migration switch over. Related options: * live_migration_completion_timeout """), cfg.IntOpt('live_migration_downtime_steps', default=10, min=3, help=""" Number of incremental steps to reach max downtime value. Will be rounded up to a minimum of 3 steps. """), cfg.IntOpt('live_migration_downtime_delay', default=75, min=3, help=""" Time to wait, in seconds, between each step increase of the migration downtime. Minimum delay is 3 seconds. Value is per GiB of guest RAM + disk to be transferred, with lower bound of a minimum of 2 GiB per device. """), cfg.IntOpt('live_migration_completion_timeout', default=800, min=0, mutable=True, help=""" Time to wait, in seconds, for migration to successfully complete transferring data before aborting the operation. Value is per GiB of guest RAM + disk to be transferred, with lower bound of a minimum of 2 GiB. Should usually be larger than downtime delay * downtime steps. Set to 0 to disable timeouts. Related options: * live_migration_downtime * live_migration_downtime_steps * live_migration_downtime_delay """), cfg.StrOpt('live_migration_timeout_action', default='abort', choices=('abort', 'force_complete'), mutable=True, help=""" This option will be used to determine what action will be taken against a VM after ``live_migration_completion_timeout`` expires. By default, the live migrate operation will be aborted after completion timeout. If it is set to ``force_complete``, the compute service will either pause the VM or trigger post-copy depending on if post copy is enabled and available (``live_migration_permit_post_copy`` is set to True). Related options: * live_migration_completion_timeout * live_migration_permit_post_copy """), cfg.BoolOpt('live_migration_permit_post_copy', default=False, help=""" This option allows nova to switch an on-going live migration to post-copy mode, i.e., switch the active VM to the one on the destination node before the migration is complete, therefore ensuring an upper bound on the memory that needs to be transferred. Post-copy requires libvirt>=1.3.3 and QEMU>=2.5.0. When permitted, post-copy mode will be automatically activated if we reach the timeout defined by ``live_migration_completion_timeout`` and ``live_migration_timeout_action`` is set to 'force_complete'. Note if you change to no timeout or choose to use 'abort', i.e. ``live_migration_completion_timeout = 0``, then there will be no automatic switch to post-copy. The live-migration force complete API also uses post-copy when permitted. If post-copy mode is not available, force complete falls back to pausing the VM to ensure the live-migration operation will complete. When using post-copy mode, if the source and destination hosts lose network connectivity, the VM being live-migrated will need to be rebooted. For more details, please see the Administration guide. Related options: * live_migration_permit_auto_converge * live_migration_timeout_action """), cfg.BoolOpt('live_migration_permit_auto_converge', default=False, help=""" This option allows nova to start live migration with auto converge on. Auto converge throttles down CPU if a progress of on-going live migration is slow. Auto converge will only be used if this flag is set to True and post copy is not permitted or post copy is unavailable due to the version of libvirt and QEMU in use. 
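An illustrative excerpt enabling auto converge (example value only)::

    [libvirt]
    live_migration_permit_auto_converge = True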
Related options: * live_migration_permit_post_copy """), cfg.StrOpt('snapshot_image_format', choices=[ ('raw', 'RAW disk format'), ('qcow2', 'KVM default disk format'), ('vmdk', 'VMWare default disk format'), ('vdi', 'VirtualBox default disk format'), ], help=""" Determine the snapshot image format when sending to the image service. If set, this decides what format is used when sending the snapshot to the image service. If not set, defaults to same type as source image. """), cfg.BoolOpt('live_migration_with_native_tls', default=False, help=""" Use QEMU-native TLS encryption when live migrating. This option will allow both migration stream (guest RAM plus device state) *and* disk stream to be transported over native TLS, i.e. TLS support built into QEMU. Prerequisite: TLS environment is configured correctly on all relevant Compute nodes. This means, Certificate Authority (CA), server, client certificates, their corresponding keys, and their file permisssions are in place, and are validated. Notes: * To have encryption for migration stream and disk stream (also called: "block migration"), ``live_migration_with_native_tls`` is the preferred config attribute instead of ``live_migration_tunnelled``. * The ``live_migration_tunnelled`` will be deprecated in the long-term for two main reasons: (a) it incurs a huge performance penalty; and (b) it is not compatible with block migration. Therefore, if your compute nodes have at least libvirt 4.4.0 and QEMU 2.11.0, it is strongly recommended to use ``live_migration_with_native_tls``. * The ``live_migration_tunnelled`` and ``live_migration_with_native_tls`` should not be used at the same time. * Unlike ``live_migration_tunnelled``, the ``live_migration_with_native_tls`` *is* compatible with block migration. That is, with this option, NBD stream, over which disks are migrated to a target host, will be encrypted. Related options: ``live_migration_tunnelled``: This transports migration stream (but not disk stream) over libvirtd. """), cfg.StrOpt('disk_prefix', help=""" Override the default disk prefix for the devices attached to an instance. If set, this is used to identify a free disk device name for a bus. Possible values: * Any prefix which will result in a valid disk device name like 'sda' or 'hda' for example. This is only necessary if the device names differ to the commonly known device name prefixes for a virtualization type such as: sd, xvd, uvd, vd. Related options: * ``virt_type``: Influences which device type is used, which determines the default disk prefix. """), cfg.IntOpt('wait_soft_reboot_seconds', default=120, help='Number of seconds to wait for instance to shut down after' ' soft reboot request is made. We fall back to hard reboot' ' if instance does not shutdown within this window.'), cfg.StrOpt('cpu_mode', choices=[ ('host-model', 'Clone the host CPU feature flags'), ('host-passthrough', 'Use the host CPU model exactly'), ('custom', 'Use the CPU model in ``[libvirt]cpu_models``'), ('none', "Don't set a specific CPU model. For instances with " "``[libvirt] virt_type`` as KVM/QEMU, the default CPU model from " "QEMU will be used, which provides a basic set of CPU features " "that are compatible with most hosts"), ], help=""" Is used to set the CPU mode an instance should have. If ``virt_type="kvm|qemu"``, it will default to ``host-model``, otherwise it will default to ``none``. Related options: * ``cpu_models``: This should be set ONLY when ``cpu_mode`` is set to ``custom``. 
Otherwise, it would result in an error and the instance launch will fail. """), cfg.ListOpt('cpu_models', deprecated_name='cpu_model', default=[], help=""" An ordered list of CPU models the host supports. It is expected that the list is ordered so that the more common and less advanced CPU models are listed earlier. Here is an example: ``SandyBridge,IvyBridge,Haswell,Broadwell``, the latter CPU model's features is richer that the previous CPU model. Possible values: * The named CPU models listed in ``/usr/share/libvirt/cpu_map.xml`` for libvirt prior to version 4.7.0 or ``/usr/share/libvirt/cpu_map/*.xml`` for version 4.7.0 and higher. Related options: * ``cpu_mode``: This should be set to ``custom`` ONLY when you want to configure (via ``cpu_models``) a specific named CPU model. Otherwise, it would result in an error and the instance launch will fail. * ``virt_type``: Only the virtualization types ``kvm`` and ``qemu`` use this. .. note:: Be careful to only specify models which can be fully supported in hardware. """), cfg.ListOpt( 'cpu_model_extra_flags', item_type=types.String( ignore_case=True, ), default=[], help=""" This allows specifying granular CPU feature flags when configuring CPU models. For example, to explicitly specify the ``pcid`` (Process-Context ID, an Intel processor feature -- which is now required to address the guest performance degradation as a result of applying the "Meltdown" CVE fixes to certain Intel CPU models) flag to the "IvyBridge" virtual CPU model:: [libvirt] cpu_mode = custom cpu_models = IvyBridge cpu_model_extra_flags = pcid To specify multiple CPU flags (e.g. the Intel ``VMX`` to expose the virtualization extensions to the guest, or ``pdpe1gb`` to configure 1GB huge pages for CPU models that do not provide it):: [libvirt] cpu_mode = custom cpu_models = Haswell-noTSX-IBRS cpu_model_extra_flags = PCID, VMX, pdpe1gb As it can be noticed from above, the ``cpu_model_extra_flags`` config attribute is case insensitive. And specifying extra flags is valid in combination with all the three possible values for ``cpu_mode``: ``custom`` (this also requires an explicit ``cpu_models`` to be specified), ``host-model``, or ``host-passthrough``. A valid example for allowing extra CPU flags even for ``host-passthrough`` mode is that sometimes QEMU may disable certain CPU features -- e.g. Intel's "invtsc", Invariable Time Stamp Counter, CPU flag. And if you need to expose that CPU flag to the Nova instance, the you need to explicitly ask for it. The possible values for ``cpu_model_extra_flags`` depends on the CPU model in use. Refer to ``/usr/share/libvirt/cpu_map.xml`` for libvirt prior to version 4.7.0 or ``/usr/share/libvirt/cpu_map/*.xml`` thereafter for possible CPU feature flags for a given CPU model. Note that when using this config attribute to set the 'PCID' CPU flag with the ``custom`` CPU mode, not all virtual (i.e. libvirt / QEMU) CPU models need it: * The only virtual CPU models that include the 'PCID' capability are Intel "Haswell", "Broadwell", and "Skylake" variants. * The libvirt / QEMU CPU models "Nehalem", "Westmere", "SandyBridge", and "IvyBridge" will _not_ expose the 'PCID' capability by default, even if the host CPUs by the same name include it. I.e. 'PCID' needs to be explicitly specified when using the said virtual CPU models. 
The libvirt driver's default CPU mode, ``host-model``, will do the right thing with respect to handling 'PCID' CPU flag for the guest -- *assuming* you are running updated processor microcode, host and guest kernel, libvirt, and QEMU. The other mode, ``host-passthrough``, checks if 'PCID' is available in the hardware, and if so directly passes it through to the Nova guests. Thus, in context of 'PCID', with either of these CPU modes (``host-model`` or ``host-passthrough``), there is no need to use the ``cpu_model_extra_flags``. Related options: * cpu_mode * cpu_models """), cfg.StrOpt('snapshots_directory', default='$instances_path/snapshots', help='Location where libvirt driver will store snapshots ' 'before uploading them to image service'), cfg.StrOpt('xen_hvmloader_path', default='/usr/lib/xen/boot/hvmloader', help='Location where the Xen hvmloader is kept'), cfg.ListOpt('disk_cachemodes', default=[], help=""" Specific cache modes to use for different disk types. For example: file=directsync,block=none,network=writeback For local or direct-attached storage, it is recommended that you use writethrough (default) mode, as it ensures data integrity and has acceptable I/O performance for applications running in the guest, especially for read operations. However, caching mode none is recommended for remote NFS storage, because direct I/O operations (O_DIRECT) perform better than synchronous I/O operations (with O_SYNC). Caching mode none effectively turns all guest I/O operations into direct I/O operations on the host, which is the NFS client in this environment. Possible cache modes: * default: "It Depends" -- For Nova-managed disks, ``none``, if the host file system is capable of Linux's 'O_DIRECT' semantics; otherwise ``writeback``. For volume drivers, the default is driver-dependent: ``none`` for everything except for SMBFS and Virtuzzo (which use ``writeback``). * none: With caching mode set to none, the host page cache is disabled, but the disk write cache is enabled for the guest. In this mode, the write performance in the guest is optimal because write operations bypass the host page cache and go directly to the disk write cache. If the disk write cache is battery-backed, or if the applications or storage stack in the guest transfer data properly (either through fsync operations or file system barriers), then data integrity can be ensured. However, because the host page cache is disabled, the read performance in the guest would not be as good as in the modes where the host page cache is enabled, such as writethrough mode. Shareable disk devices, like for a multi-attachable block storage volume, will have their cache mode set to 'none' regardless of configuration. * writethrough: With caching set to writethrough mode, the host page cache is enabled, but the disk write cache is disabled for the guest. Consequently, this caching mode ensures data integrity even if the applications and storage stack in the guest do not transfer data to permanent storage properly (either through fsync operations or file system barriers). Because the host page cache is enabled in this mode, the read performance for applications running in the guest is generally better. However, the write performance might be reduced because the disk write cache is disabled. * writeback: With caching set to writeback mode, both the host page cache and the disk write cache are enabled for the guest. 
Because of this, the I/O performance for applications running in the guest is good, but the data is not protected in a power failure. As a result, this caching mode is recommended only for temporary data where potential data loss is not a concern. NOTE: Certain backend disk mechanisms may provide safe writeback cache semantics. Specifically those that bypass the host page cache, such as QEMU's integrated RBD driver. Ceph documentation recommends setting this to writeback for maximum performance while maintaining data safety. * directsync: Like "writethrough", but it bypasses the host page cache. * unsafe: Caching mode of unsafe ignores cache transfer operations completely. As its name implies, this caching mode should be used only for temporary data where data loss is not a concern. This mode can be useful for speeding up guest installations, but you should switch to another caching mode in production environments. """), cfg.StrOpt('rng_dev_path', default='/dev/urandom', help=""" The path to an RNG (Random Number Generator) device that will be used as the source of entropy on the host. Since libvirt 1.3.4, any path (that returns random numbers when read) is accepted. The recommended source of entropy is ``/dev/urandom`` -- it is non-blocking, therefore relatively fast; and avoids the limitations of ``/dev/random``, which is a legacy interface. For more details (and comparision between different RNG sources), refer to the "Usage" section in the Linux kernel API documentation for ``[u]random``: http://man7.org/linux/man-pages/man4/urandom.4.html and http://man7.org/linux/man-pages/man7/random.7.html. """), cfg.ListOpt('hw_machine_type', help='For qemu or KVM guests, set this option to specify ' 'a default machine type per host architecture. ' 'You can find a list of supported machine types ' 'in your environment by checking the output of the ' ':command:`virsh capabilities` command. The format of ' 'the value for this config option is ' '``host-arch=machine-type``. For example: ' '``x86_64=machinetype1,armv7l=machinetype2``.'), cfg.StrOpt('sysinfo_serial', default='unique', choices=( ('none', 'A serial number entry is not added to the guest ' 'domain xml.'), ('os', 'A UUID serial number is generated from the host ' '``/etc/machine-id`` file.'), ('hardware', 'A UUID for the host hardware as reported by ' 'libvirt. This is typically from the host ' 'SMBIOS data, unless it has been overridden ' 'in ``libvirtd.conf``.'), ('auto', 'Uses the "os" source if possible, else ' '"hardware".'), ('unique', 'Uses instance UUID as the serial number.'), ), help=""" The data source used to the populate the host "serial" UUID exposed to guest in the virtual BIOS. All choices except ``unique`` will change the serial when migrating the instance to another host. Changing the choice of this option will also affect existing instances on this host once they are stopped and started again. It is recommended to use the default choice (``unique``) since that will not change when an instance is migrated. However, if you have a need for per-host serials in addition to per-instance serial numbers, then consider restricting flavors via host aggregates. """ ), cfg.IntOpt('mem_stats_period_seconds', default=10, help='A number of seconds to memory usage statistics period. ' 'Zero or negative value mean to disable memory usage ' 'statistics.'), cfg.ListOpt('uid_maps', default=[], help='List of uid targets and ranges.' 'Syntax is guest-uid:host-uid:count. 
' 'Maximum of 5 allowed.'), cfg.ListOpt('gid_maps', default=[], help='List of guid targets and ranges.' 'Syntax is guest-gid:host-gid:count. ' 'Maximum of 5 allowed.'), cfg.IntOpt('realtime_scheduler_priority', default=1, help='In a realtime host context vCPUs for guest will run in ' 'that scheduling priority. Priority depends on the host ' 'kernel (usually 1-99)'), cfg.ListOpt('enabled_perf_events', default=[], help= """ This will allow you to specify a list of events to monitor low-level performance of guests, and collect related statsitics via the libvirt driver, which in turn uses the Linux kernel's `perf` infrastructure. With this config attribute set, Nova will generate libvirt guest XML to monitor the specified events. For more information, refer to the "Performance monitoring events" section here: https://libvirt.org/formatdomain.html#elementsPerf. And here: https://libvirt.org/html/libvirt-libvirt-domain.html -- look for ``VIR_PERF_PARAM_*`` For example, to monitor the count of CPU cycles (total/elapsed) and the count of cache misses, enable them as follows:: [libvirt] enabled_perf_events = cpu_clock, cache_misses Possible values: A string list. The list of supported events can be found here: https://libvirt.org/formatdomain.html#elementsPerf. Note that support for Intel CMT events (`cmt`, `mbmbt`, `mbml`) is deprecated, and will be removed in the "Stein" release. That's because the upstream Linux kernel (from 4.14 onwards) has deleted support for Intel CMT, because it is broken by design. """), cfg.IntOpt('num_pcie_ports', default=0, min=0, max=28, help= """ The number of PCIe ports an instance will get. Libvirt allows a custom number of PCIe ports (pcie-root-port controllers) a target instance will get. Some will be used by default, rest will be available for hotplug use. By default we have just 1-2 free ports which limits hotplug. More info: https://github.com/qemu/qemu/blob/master/docs/pcie.txt Due to QEMU limitations for aarch64/virt maximum value is set to '28'. Default value '0' moves calculating amount of ports to libvirt. """), cfg.IntOpt('file_backed_memory', default=0, min=0, help=""" Available capacity in MiB for file-backed memory. Set to 0 to disable file-backed memory. When enabled, instances will create memory files in the directory specified in ``/etc/libvirt/qemu.conf``'s ``memory_backing_dir`` option. The default location is ``/var/lib/libvirt/qemu/ram``. When enabled, the value defined for this option is reported as the node memory capacity. Compute node system memory will be used as a cache for file-backed memory, via the kernel's pagecache mechanism. .. note:: This feature is not compatible with hugepages. .. note:: This feature is not compatible with memory overcommit. Related options: * ``virt_type`` must be set to ``kvm`` or ``qemu``. * ``ram_allocation_ratio`` must be set to 1.0. """), cfg.IntOpt('num_memory_encrypted_guests', default=None, min=0, help=""" Maximum number of guests with encrypted memory which can run concurrently on this compute host. For now this is only relevant for AMD machines which support SEV (Secure Encrypted Virtualization). Such machines have a limited number of slots in their memory controller for storing encryption keys. Each running guest with encrypted memory will consume one of these slots. The option may be reused for other equivalent technologies in the future. If the machine does not support memory encryption, the option will be ignored and inventory will be set to 0. 
If the machine does support memory encryption, *for now* a value of ``None`` means an effectively unlimited inventory, i.e. no limit will be imposed by Nova on the number of SEV guests which can be launched, even though the underlying hardware will enforce its own limit. However it is expected that in the future, auto-detection of the inventory from the hardware will become possible, at which point ``None`` will cause auto-detection to automatically impose the correct limit. .. note:: It is recommended to read :ref:`the deployment documentation's section on this option ` before deciding whether to configure this setting or leave it at the default. Related options: * :oslo.config:option:`libvirt.virt_type` must be set to ``kvm``. * It's recommended to consider including ``x86_64=q35`` in :oslo.config:option:`libvirt.hw_machine_type`; see :ref:`deploying-sev-capable-infrastructure` for more on this. """), ] libvirt_imagebackend_opts = [ cfg.StrOpt('images_type', default='default', choices=('raw', 'flat', 'qcow2', 'lvm', 'rbd', 'ploop', 'default'), help=""" VM Images format. If default is specified, then use_cow_images flag is used instead of this one. Related options: * compute.use_cow_images * images_volume_group * [workarounds]/ensure_libvirt_rbd_instance_dir_cleanup * compute.force_raw_images """), cfg.StrOpt('images_volume_group', help=""" LVM Volume Group that is used for VM images, when you specify images_type=lvm Related options: * images_type """), cfg.BoolOpt('sparse_logical_volumes', default=False, deprecated_for_removal=True, deprecated_since='18.0.0', deprecated_reason=""" Sparse logical volumes is a feature that is not tested hence not supported. LVM logical volumes are preallocated by default. If you want thin provisioning, use Cinder thin-provisioned volumes. """, help=""" Create sparse logical volumes (with virtualsize) if this flag is set to True. """), cfg.StrOpt('images_rbd_pool', default='rbd', help='The RADOS pool in which rbd volumes are stored'), cfg.StrOpt('images_rbd_ceph_conf', default='', # default determined by librados help='Path to the ceph configuration file to use'), cfg.StrOpt('hw_disk_discard', choices=('ignore', 'unmap'), help=""" Discard option for nova managed disks. Requires: * Libvirt >= 1.0.6 * Qemu >= 1.5 (raw format) * Qemu >= 1.6 (qcow2 format) """), ] libvirt_lvm_opts = [ cfg.StrOpt('volume_clear', default='zero', choices=[ ('zero', 'Overwrite volumes with zeroes'), ('shred', 'Overwrite volumes repeatedly'), ('none', 'Do not wipe deleted volumes'), ], help=""" Method used to wipe ephemeral disks when they are deleted. Only takes effect if LVM is set as backing storage. Related options: * images_type - must be set to ``lvm`` * volume_clear_size """), cfg.IntOpt('volume_clear_size', default=0, min=0, help=""" Size of area in MiB, counting from the beginning of the allocated volume, that will be cleared using method set in ``volume_clear`` option. Possible values: * 0 - clear whole volume * >0 - clear specified amount of MiB Related options: * images_type - must be set to ``lvm`` * volume_clear - must be set and the value must be different than ``none`` for this option to have any impact """), ] libvirt_utils_opts = [ cfg.BoolOpt('snapshot_compression', default=False, help=""" Enable snapshot compression for ``qcow2`` images. Note: you can set ``snapshot_image_format`` to ``qcow2`` to force all snapshots to be in ``qcow2`` format, independently from their original image type. 
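# Illustrative sketch (not nova's implementation) of the ``volume_clear_size``
# semantics documented above: 0 wipes the whole volume, any positive value
# caps the wiped region (in MiB) from its start. Capping at the volume size
# is an assumption of this sketch.
def mib_to_clear(volume_size_mib, volume_clear_size):
    """Return how many MiB should be wiped for a deleted LVM volume."""
    if volume_clear_size == 0 or volume_clear_size > volume_size_mib:
        return volume_size_mib
    return volume_clear_size

assert mib_to_clear(1024, 0) == 1024    # clear the whole volume
assert mib_to_clear(1024, 100) == 100   # clear only the first 100 MiB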
Related options: * snapshot_image_format """), ] libvirt_vif_opts = [ cfg.BoolOpt('use_virtio_for_bridges', default=True, help='Use virtio for bridge interfaces with KVM/QEMU'), ] libvirt_volume_opts = [ cfg.BoolOpt('volume_use_multipath', default=False, deprecated_name='iscsi_use_multipath', help=""" Use multipath connection of the iSCSI or FC volume Volumes can be connected in the LibVirt as multipath devices. This will provide high availability and fault tolerance. """), cfg.IntOpt('num_volume_scan_tries', deprecated_name='num_iscsi_scan_tries', default=5, help=""" Number of times to scan given storage protocol to find volume. """), ] libvirt_volume_aoe_opts = [ cfg.IntOpt('num_aoe_discover_tries', default=3, help=""" Number of times to rediscover AoE target to find volume. Nova provides support for block storage attaching to hosts via AOE (ATA over Ethernet). This option allows the user to specify the maximum number of retry attempts that can be made to discover the AoE device. """) ] libvirt_volume_iscsi_opts = [ cfg.StrOpt('iscsi_iface', deprecated_name='iscsi_transport', help=""" The iSCSI transport iface to use to connect to target in case offload support is desired. Default format is of the form . where is one of (be2iscsi, bnx2i, cxgb3i, cxgb4i, qla4xxx, ocs) and is the MAC address of the interface and can be generated via the iscsiadm -m iface command. Do not confuse the iscsi_iface parameter to be provided here with the actual transport name. """) # iser is also supported, but use LibvirtISERVolumeDriver # instead ] libvirt_volume_iser_opts = [ cfg.IntOpt('num_iser_scan_tries', default=5, help=""" Number of times to scan iSER target to find volume. iSER is a server network protocol that extends iSCSI protocol to use Remote Direct Memory Access (RDMA). This option allows the user to specify the maximum number of scan attempts that can be made to find iSER volume. """), cfg.BoolOpt('iser_use_multipath', default=False, help=""" Use multipath connection of the iSER volume. iSER volumes can be connected as multipath devices. This will provide high availability and fault tolerance. """) ] libvirt_volume_net_opts = [ cfg.StrOpt('rbd_user', help=""" The RADOS client name for accessing rbd(RADOS Block Devices) volumes. Libvirt will refer to this user when connecting and authenticating with the Ceph RBD server. """), cfg.StrOpt('rbd_secret_uuid', help=""" The libvirt UUID of the secret for the rbd_user volumes. """), cfg.IntOpt('rbd_connect_timeout', default=5, help=""" The RADOS client timeout in seconds when initially connecting to the cluster. """), ] libvirt_volume_nfs_opts = [ cfg.StrOpt('nfs_mount_point_base', default=paths.state_path_def('mnt'), help=""" Directory where the NFS volume is mounted on the compute node. The default is 'mnt' directory of the location where nova's Python module is installed. NFS provides shared storage for the OpenStack Block Storage service. Possible values: * A string representing absolute path of mount point. """), cfg.StrOpt('nfs_mount_options', help=""" Mount options passed to the NFS client. See section of the nfs man page for details. Mount options controls the way the filesystem is mounted and how the NFS client behaves when accessing files on this mount point. Possible values: * Any string representing mount options separated by commas. 
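# Illustrative sketch (not nova internals): retry options such as
# ``num_volume_scan_tries`` and ``num_aoe_discover_tries`` above govern a
# bounded rescan loop of this general shape; the scan callable and delay
# below are hypothetical stand-ins.
import time

def find_device(scan, tries, delay=1.0):
    """Call ``scan`` up to ``tries`` times until it returns a device."""
    for _attempt in range(tries):
        device = scan()
        if device is not None:
            return device
        time.sleep(delay)
    raise RuntimeError('device not found after %d tries' % tries)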
* Example string: vers=3,lookupcache=pos """), ] libvirt_volume_quobyte_opts = [ cfg.StrOpt('quobyte_mount_point_base', default=paths.state_path_def('mnt'), help=""" Directory where the Quobyte volume is mounted on the compute node. Nova supports Quobyte volume driver that enables storing Block Storage service volumes on a Quobyte storage back end. This Option specifies the path of the directory where Quobyte volume is mounted. Possible values: * A string representing absolute path of mount point. """), cfg.StrOpt('quobyte_client_cfg', help='Path to a Quobyte Client configuration file.'), ] libvirt_volume_smbfs_opts = [ cfg.StrOpt('smbfs_mount_point_base', default=paths.state_path_def('mnt'), help=""" Directory where the SMBFS shares are mounted on the compute node. """), cfg.StrOpt('smbfs_mount_options', default='', help=""" Mount options passed to the SMBFS client. Provide SMBFS options as a single string containing all parameters. See mount.cifs man page for details. Note that the libvirt-qemu ``uid`` and ``gid`` must be specified. """), ] libvirt_remotefs_opts = [ cfg.StrOpt('remote_filesystem_transport', default='ssh', choices=('ssh', 'rsync'), help=""" libvirt's transport method for remote file operations. Because libvirt cannot use RPC to copy files over network to/from other compute nodes, other method must be used for: * creating directory on remote host * creating file on remote host * removing file from remote host * copying file to remote host """) ] libvirt_volume_vzstorage_opts = [ cfg.StrOpt('vzstorage_mount_point_base', default=paths.state_path_def('mnt'), help=""" Directory where the Virtuozzo Storage clusters are mounted on the compute node. This option defines non-standard mountpoint for Vzstorage cluster. Related options: * vzstorage_mount_* group of parameters """ ), cfg.StrOpt('vzstorage_mount_user', default='stack', help=""" Mount owner user name. This option defines the owner user of Vzstorage cluster mountpoint. Related options: * vzstorage_mount_* group of parameters """ ), cfg.StrOpt('vzstorage_mount_group', default='qemu', help=""" Mount owner group name. This option defines the owner group of Vzstorage cluster mountpoint. Related options: * vzstorage_mount_* group of parameters """ ), cfg.StrOpt('vzstorage_mount_perms', default='0770', help=""" Mount access mode. This option defines the access bits of Vzstorage cluster mountpoint, in the format similar to one of chmod(1) utility, like this: 0770. It consists of one to four digits ranging from 0 to 7, with missing lead digits assumed to be 0's. Related options: * vzstorage_mount_* group of parameters """ ), cfg.StrOpt('vzstorage_log_path', default='/var/log/vstorage/%(cluster_name)s/nova.log.gz', help=""" Path to vzstorage client log. This option defines the log of cluster operations, it should include "%(cluster_name)s" template to separate logs from multiple shares. Related options: * vzstorage_mount_opts may include more detailed logging options. """ ), cfg.StrOpt('vzstorage_cache_path', default=None, help=""" Path to the SSD cache file. You can attach an SSD drive to a client and configure the drive to store a local cache of frequently accessed data. By having a local cache on a client's SSD drive, you can increase the overall cluster performance by up to 10 and more times. WARNING! There is a lot of SSD models which are not server grade and may loose arbitrary set of data changes on power loss. Such SSDs should not be used in Vstorage and are dangerous as may lead to data corruptions and inconsistencies. 
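# Illustrative sketch (not nova code): the ``%(cluster_name)s`` placeholder in
# ``vzstorage_log_path`` above is ordinary Python mapping-style interpolation,
# so a per-cluster log path can be derived like this.
log_path_template = '/var/log/vstorage/%(cluster_name)s/nova.log.gz'
print(log_path_template % {'cluster_name': 'cluster1'})
# -> /var/log/vstorage/cluster1/nova.log.gz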
Please consult with the manual on which SSD models are known to be safe or verify it using vstorage-hwflush-check(1) utility. This option defines the path which should include "%(cluster_name)s" template to separate caches from multiple shares. Related options: * vzstorage_mount_opts may include more detailed cache options. """ ), cfg.ListOpt('vzstorage_mount_opts', default=[], help=""" Extra mount options for pstorage-mount For full description of them, see https://static.openvz.org/vz-man/man1/pstorage-mount.1.gz.html Format is a python string representation of arguments list, like: "[\'-v\', \'-R\', \'500\']" Shouldn\'t include -c, -l, -C, -u, -g and -m as those have explicit vzstorage_* options. Related options: * All other vzstorage_* options """ ), ] # The queue size requires value to be a power of two from [256, 1024] # range. # https://libvirt.org/formatdomain.html#elementsDriverBackendOptions QueueSizeType = types.Integer(choices=(256, 512, 1024)) libvirt_virtio_queue_sizes = [ cfg.Opt('rx_queue_size', type=QueueSizeType, help=""" Configure virtio rx queue size. This option is only usable for virtio-net device with vhost and vhost-user backend. Available only with QEMU/KVM. Requires libvirt v2.3 QEMU v2.7."""), cfg.Opt('tx_queue_size', type=QueueSizeType, help=""" Configure virtio tx queue size. This option is only usable for virtio-net device with vhost-user backend. Available only with QEMU/KVM. Requires libvirt v3.7 QEMU v2.10."""), cfg.IntOpt('max_queues', default=None, min=1, help=""" The maximum number of virtio queue pairs that can be enabled when creating a multiqueue guest. The number of virtio queues allocated will be the lesser of the CPUs requested by the guest and the max value defined. By default, this value is set to none meaning the legacy limits based on the reported kernel major version will be used. """), ] libvirt_volume_nvmeof_opts = [ cfg.IntOpt('num_nvme_discover_tries', default=5, help=""" Number of times to rediscover NVMe target to find volume Nova provides support for block storage attaching to hosts via NVMe (Non-Volatile Memory Express). This option allows the user to specify the maximum number of retry attempts that can be made to discover the NVMe device. """), ] libvirt_pmem_opts = [ cfg.ListOpt('pmem_namespaces', item_type=cfg.types.String(), default=[], help=""" Configure persistent memory(pmem) namespaces. These namespaces must have been already created on the host. This config option is in the following format:: "$LABEL:$NSNAME[|$NSNAME][,$LABEL:$NSNAME[|$NSNAME]]" * ``$NSNAME`` is the name of the pmem namespace. * ``$LABEL`` represents one resource class, this is used to generate the resource class name as ``CUSTOM_PMEM_NAMESPACE_$LABEL``. 
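# Illustrative sketch (not part of nova): oslo.config type objects such as the
# ``QueueSizeType`` defined above act as callable validators, so the
# documented power-of-two restriction can be exercised directly.
from oslo_config import types

queue_size_type = types.Integer(choices=(256, 512, 1024))
print(queue_size_type('512'))      # -> 512
try:
    queue_size_type('300')         # not one of the allowed queue sizes
except ValueError as exc:
    print('rejected:', exc)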
For example:: [libvirt] pmem_namespaces=128G:ns0|ns1|ns2|ns3,262144MB:ns4|ns5,MEDIUM:ns6|ns7 """), ] ALL_OPTS = list(itertools.chain( libvirt_general_opts, libvirt_imagebackend_opts, libvirt_lvm_opts, libvirt_utils_opts, libvirt_vif_opts, libvirt_volume_opts, libvirt_volume_aoe_opts, libvirt_volume_iscsi_opts, libvirt_volume_iser_opts, libvirt_volume_net_opts, libvirt_volume_nfs_opts, libvirt_volume_quobyte_opts, libvirt_volume_smbfs_opts, libvirt_remotefs_opts, libvirt_volume_vzstorage_opts, libvirt_virtio_queue_sizes, libvirt_volume_nvmeof_opts, libvirt_pmem_opts, )) def register_opts(conf): conf.register_group(libvirt_group) conf.register_opts(ALL_OPTS, group=libvirt_group) def list_opts(): return {libvirt_group: ALL_OPTS} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/mks.py0000664000175000017500000000352100000000000015623 0ustar00zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg mks_group = cfg.OptGroup('mks', title='MKS Options', help=""" Nova compute node uses WebMKS, a desktop sharing protocol to provide instance console access to VM's created by VMware hypervisors. Related options: Following options must be set to provide console access. * mksproxy_base_url * enabled """) mks_opts = [ cfg.URIOpt('mksproxy_base_url', schemes=['http', 'https'], default='http://127.0.0.1:6090/', help=""" Location of MKS web console proxy The URL in the response points to a WebMKS proxy which starts proxying between client and corresponding vCenter server where instance runs. In order to use the web based console access, WebMKS proxy should be installed and configured Possible values: * Must be a valid URL of the form:``http://host:port/`` or ``https://host:port/`` """), cfg.BoolOpt('enabled', default=False, help=""" Enables graphical console access for virtual machines. """), ] def register_opts(conf): conf.register_group(mks_group) conf.register_opts(mks_opts, group=mks_group) def list_opts(): return {mks_group: mks_opts} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/conf/netconf.py0000664000175000017500000000563300000000000016473 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # Copyright 2012 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
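# Illustrative sketch (toy lists, not real options): the ``ALL_OPTS`` pattern
# above simply flattens the per-feature option lists with itertools.chain
# before they are registered under the single [libvirt] group.
import itertools

feature_a_opts = ['opt1', 'opt2']
feature_b_opts = ['opt3']
all_example_opts = list(itertools.chain(feature_a_opts, feature_b_opts))
assert all_example_opts == ['opt1', 'opt2', 'opt3']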
See the # License for the specific language governing permissions and limitations # under the License. import socket from oslo_config import cfg from oslo_utils import netutils netconf_opts = [ cfg.StrOpt("my_ip", default=netutils.get_my_ipv4(), sample_default='', help=""" The IP address which the host is using to connect to the management network. Possible values: * String with valid IP address. Default is IPv4 address of this host. Related options: * my_block_storage_ip """), cfg.StrOpt("my_block_storage_ip", default="$my_ip", help=""" The IP address which is used to connect to the block storage network. Possible values: * String with valid IP address. Default is IP address of this host. Related options: * my_ip - if my_block_storage_ip is not set, then my_ip value is used. """), cfg.StrOpt("host", default=socket.gethostname(), sample_default='', help=""" Hostname, FQDN or IP address of this host. Used as: * the oslo.messaging queue name for nova-compute worker * we use this value for the binding_host sent to neutron. This means if you use a neutron agent, it should have the same value for host. * cinder host attachment information Must be valid within AMQP key. Possible values: * String with hostname, FQDN or IP address. Default is hostname of this host. """), # TODO(sfinucan): This option is tied into the XenAPI, VMWare and Libvirt # drivers. # We should remove this dependency by either adding a new opt for each # driver or simply removing the offending code. Until then we cannot # deprecate this option. cfg.BoolOpt("flat_injected", default=False, help=""" This option determines whether the network setup information is injected into the VM before it is booted. While it was originally designed to be used only by nova-network, it is also used by the vmware and xenapi virt drivers to control whether network information is injected into a VM. The libvirt virt driver also uses it when we use config_drive to configure network to control whether network information is injected into a VM. """), ] def register_opts(conf): conf.register_opts(netconf_opts) def list_opts(): return {'DEFAULT': netconf_opts} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/neutron.py0000664000175000017500000001345100000000000016526 0ustar00zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from keystoneauth1 import loading as ks_loading from oslo_config import cfg from nova.conf import utils as confutils DEFAULT_SERVICE_TYPE = 'network' NEUTRON_GROUP = 'neutron' neutron_group = cfg.OptGroup( NEUTRON_GROUP, title='Neutron Options', help=""" Configuration options for neutron (network connectivity as a service). """) neutron_opts = [ cfg.StrOpt('ovs_bridge', default='br-int', help=""" Default name for the Open vSwitch integration bridge. Specifies the name of an integration bridge interface used by OpenvSwitch. 
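# Illustrative sketch (not nova code): ``my_block_storage_ip`` above defaults
# to "$my_ip", which oslo.config resolves by substituting the value of the
# referenced option when the value is read.
from oslo_config import cfg

example_conf = cfg.ConfigOpts()
example_conf.register_opts([
    cfg.StrOpt('my_ip', default='192.0.2.10'),
    cfg.StrOpt('my_block_storage_ip', default='$my_ip'),
])
example_conf(args=[], default_config_files=[])
print(example_conf.my_block_storage_ip)   # -> 192.0.2.10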
This option is only used if Neutron does not specify the OVS bridge name in port binding responses. """), cfg.StrOpt('default_floating_pool', default='nova', help=""" Default name for the floating IP pool. Specifies the name of floating IP pool used for allocating floating IPs. This option is only used if Neutron does not specify the floating IP pool name in port binding reponses. """), cfg.IntOpt('extension_sync_interval', default=600, min=0, help=""" Integer value representing the number of seconds to wait before querying Neutron for extensions. After this number of seconds the next time Nova needs to create a resource in Neutron it will requery Neutron for the extensions that it has loaded. Setting value to 0 will refresh the extensions with no wait. """), cfg.ListOpt('physnets', default=[], help=""" List of physnets present on this host. For each *physnet* listed, an additional section, ``[neutron_physnet_$PHYSNET]``, will be added to the configuration file. Each section must be configured with a single configuration option, ``numa_nodes``, which should be a list of node IDs for all NUMA nodes this physnet is associated with. For example:: [neutron] physnets = foo, bar [neutron_physnet_foo] numa_nodes = 0 [neutron_physnet_bar] numa_nodes = 0,1 Any *physnet* that is not listed using this option will be treated as having no particular NUMA node affinity. Tunnelled networks (VXLAN, GRE, ...) cannot be accounted for in this way and are instead configured using the ``[neutron_tunnel]`` group. For example:: [neutron_tunnel] numa_nodes = 1 Related options: * ``[neutron_tunnel] numa_nodes`` can be used to configure NUMA affinity for all tunneled networks * ``[neutron_physnet_$PHYSNET] numa_nodes`` must be configured for each value of ``$PHYSNET`` specified by this option """), cfg.IntOpt('http_retries', default=3, min=0, help=""" Number of times neutronclient should retry on any failed http call. 0 means connection is attempted only once. Setting it to any positive integer means that on failure connection is retried that many times e.g. setting it to 3 means total attempts to connect will be 4. Possible values: * Any integer value. 0 means connection is attempted only once """), ] metadata_proxy_opts = [ cfg.BoolOpt("service_metadata_proxy", default=False, help=""" When set to True, this option indicates that Neutron will be used to proxy metadata requests and resolve instance ids. Otherwise, the instance ID must be passed to the metadata request in the 'X-Instance-ID' header. Related options: * metadata_proxy_shared_secret """), cfg.StrOpt("metadata_proxy_shared_secret", default="", secret=True, help=""" This option holds the shared secret string used to validate proxy requests to Neutron metadata requests. In order to be used, the 'X-Metadata-Provider-Signature' header must be supplied in the request. Related options: * service_metadata_proxy """), ] ALL_OPTS = (neutron_opts + metadata_proxy_opts) def register_opts(conf): conf.register_group(neutron_group) conf.register_opts(ALL_OPTS, group=neutron_group) confutils.register_ksa_opts(conf, neutron_group, DEFAULT_SERVICE_TYPE) def register_dynamic_opts(conf): """Register dynamically-generated options and groups. This must be called by the service that wishes to use the options **after** the initial configuration has been loaded. """ opt = cfg.ListOpt('numa_nodes', default=[], item_type=cfg.types.Integer()) # Register the '[neutron_tunnel] numa_nodes' opt, implicitly # registering the '[neutron_tunnel]' group in the process. 
This could # be done statically but is done to avoid this group appearing in # nova.conf documentation while the other group does not. conf.register_opt(opt, group='neutron_tunnel') # Register the '[neutron_physnet_$PHYSNET] numa_nodes' opts, implicitly # registering the '[neutron_physnet_$PHYSNET]' groups in the process for physnet in conf.neutron.physnets: conf.register_opt(opt, group='neutron_physnet_%s' % physnet) def list_opts(): return { neutron_group: ( ALL_OPTS + ks_loading.get_session_conf_options() + ks_loading.get_auth_common_conf_options() + ks_loading.get_auth_plugin_conf_options('password') + ks_loading.get_auth_plugin_conf_options('v2password') + ks_loading.get_auth_plugin_conf_options('v3password') + confutils.get_ksa_adapter_opts(DEFAULT_SERVICE_TYPE)) } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/notifications.py0000664000175000017500000001062300000000000017703 0ustar00zuulzuul00000000000000# Copyright (c) 2016 Intel, Inc. # Copyright (c) 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg notifications_group = cfg.OptGroup( name='notifications', title='Notifications options', help=""" Most of the actions in Nova which manipulate the system state generate notifications which are posted to the messaging component (e.g. RabbitMQ) and can be consumed by any service outside the OpenStack. More technical details at https://docs.openstack.org/nova/latest/reference/notifications.html """) ALL_OPTS = [ cfg.StrOpt( 'notify_on_state_change', choices=[ (None, 'no notifications'), ('vm_state', 'Notifications are sent with VM state transition ' 'information in the ``old_state`` and ``state`` fields. The ' '``old_task_state`` and ``new_task_state`` fields will be set to ' 'the current task_state of the instance'), ('vm_and_task_state', 'Notifications are sent with VM and task ' 'state transition information'), ], deprecated_group='DEFAULT', help=""" If set, send compute.instance.update notifications on instance state changes. Please refer to https://docs.openstack.org/nova/latest/reference/notifications.html for additional information on notifications. """), cfg.StrOpt( 'default_level', default='INFO', choices=('DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL'), deprecated_group='DEFAULT', deprecated_name='default_notification_level', help="Default notification level for outgoing notifications."), cfg.StrOpt( 'notification_format', default='unversioned', choices=[ ('both', 'Both the legacy unversioned and the new versioned ' 'notifications are emitted'), ('versioned', 'Only the new versioned notifications are emitted'), ('unversioned', 'Only the legacy unversioned notifications are ' 'emitted'), ], deprecated_group='DEFAULT', help=""" Specifies which notification format shall be emitted by nova. 
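# Illustrative sketch (mirrors register_dynamic_opts above with local names,
# not nova's own objects): registering an option against a group name that
# does not exist yet implicitly creates the group, which is how the
# per-physnet [neutron_physnet_$PHYSNET] sections come into being.
from oslo_config import cfg

example_conf = cfg.ConfigOpts()
example_conf.register_opt(
    cfg.ListOpt('physnets', default=['foo']), group='neutron')
example_conf(args=[], default_config_files=[])
numa_opt = cfg.ListOpt('numa_nodes', default=[],
                       item_type=cfg.types.Integer())
for physnet in example_conf.neutron.physnets:
    example_conf.register_opt(
        numa_opt, group='neutron_physnet_%s' % physnet)
print(example_conf['neutron_physnet_foo'].numa_nodes)   # -> []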
The versioned notification interface are in feature parity with the legacy interface and the versioned interface is actively developed so new consumers should used the versioned interface. However, the legacy interface is heavily used by ceilometer and other mature OpenStack components so it remains the default. Note that notifications can be completely disabled by setting ``driver=noop`` in the ``[oslo_messaging_notifications]`` group. The list of versioned notifications is visible in https://docs.openstack.org/nova/latest/reference/notifications.html """), cfg.ListOpt( 'versioned_notifications_topics', default=['versioned_notifications'], help=""" Specifies the topics for the versioned notifications issued by nova. The default value is fine for most deployments and rarely needs to be changed. However, if you have a third-party service that consumes versioned notifications, it might be worth getting a topic for that service. Nova will send a message containing a versioned notification payload to each topic queue in this list. The list of versioned notifications is visible in https://docs.openstack.org/nova/latest/reference/notifications.html """), cfg.BoolOpt( 'bdms_in_notifications', default=False, help=""" If enabled, include block device information in the versioned notification payload. Sending block device information is disabled by default as providing that information can incur some overhead on the system since the information may need to be loaded from the database. """) ] def register_opts(conf): conf.register_group(notifications_group) conf.register_opts(ALL_OPTS, group=notifications_group) def list_opts(): return {notifications_group: ALL_OPTS} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/novnc.py0000664000175000017500000000374500000000000016164 0ustar00zuulzuul00000000000000# Copyright (c) 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg novnc_opts = [ cfg.StrOpt('record', help=""" Filename that will be used for storing websocket frames received and sent by a proxy service (like VNC, spice, serial) running on this host. If this is not set, no recording will be done. """), cfg.BoolOpt('daemon', default=False, help="Run as a background process."), cfg.BoolOpt('ssl_only', default=False, help=""" Disallow non-encrypted connections. Related options: * cert * key """), cfg.BoolOpt('source_is_ipv6', default=False, help="Set to True if source host is addressed with IPv6."), cfg.StrOpt('cert', default='self.pem', help=""" Path to SSL certificate file. Related options: * key * ssl_only * [console] ssl_ciphers * [console] ssl_minimum_version """), cfg.StrOpt('key', help=""" SSL key file (if separate from cert). Related options: * cert """), cfg.StrOpt('web', default='/usr/share/spice-html5', help=""" Path to directory with content which will be served by a web server. 
"""), ] def register_opts(conf): conf.register_opts(novnc_opts) def register_cli_opts(conf): conf.register_cli_opts(novnc_opts) def list_opts(): return {'DEFAULT': novnc_opts} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/opts.py0000664000175000017500000000521200000000000016015 0ustar00zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ This is the single point of entry to generate the sample configuration file for Nova. It collects all the necessary info from the other modules in this package. It is assumed that: * every other module in this package has a 'list_opts' function which return a dict where * the keys are strings which are the group names * the value of each key is a list of config options for that group * the nova.conf package doesn't have further packages with config options * this module is only used in the context of sample file generation """ import collections import importlib import os import pkgutil LIST_OPTS_FUNC_NAME = "list_opts" def _tupleize(dct): """Take the dict of options and convert to the 2-tuple format.""" return [(key, val) for key, val in dct.items()] def list_opts(): opts = collections.defaultdict(list) module_names = _list_module_names() imported_modules = _import_modules(module_names) _append_config_options(imported_modules, opts) return _tupleize(opts) def _list_module_names(): module_names = [] package_path = os.path.dirname(os.path.abspath(__file__)) for _, modname, ispkg in pkgutil.iter_modules(path=[package_path]): if modname == "opts" or ispkg: continue else: module_names.append(modname) return module_names def _import_modules(module_names): imported_modules = [] for modname in module_names: mod = importlib.import_module("nova.conf." + modname) if not hasattr(mod, LIST_OPTS_FUNC_NAME): msg = "The module 'nova.conf.%s' should have a '%s' "\ "function which returns the config options." % \ (modname, LIST_OPTS_FUNC_NAME) raise Exception(msg) else: imported_modules.append(mod) return imported_modules def _append_config_options(imported_modules, config_options): for mod in imported_modules: configs = mod.list_opts() for key, val in configs.items(): config_options[key].extend(val) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/conf/paths.py0000664000175000017500000000576100000000000016160 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # Copyright 2012 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import sys from oslo_config import cfg ALL_OPTS = [ cfg.StrOpt('pybasedir', default=os.path.abspath(os.path.join(os.path.dirname(__file__), '../../')), sample_default='', help=""" The directory where the Nova python modules are installed. This directory is used to store template files for networking and remote console access. It is also the default path for other config options which need to persist Nova internal data. It is very unlikely that you need to change this option from its default value. Possible values: * The full path to a directory. Related options: * ``state_path`` """), cfg.StrOpt('bindir', default=os.path.join(sys.prefix, 'local', 'bin'), help=""" The directory where the Nova binaries are installed. This option is only relevant if the networking capabilities from Nova are used (see services below). Nova's networking capabilities are targeted to be fully replaced by Neutron in the future. It is very unlikely that you need to change this option from its default value. Possible values: * The full path to a directory. """), cfg.StrOpt('state_path', default='$pybasedir', help=""" The top-level directory for maintaining Nova's state. This directory is used to store Nova's internal state. It is used by a variety of other config options which derive from this. In some scenarios (for example migrations) it makes sense to use a storage location which is shared between multiple compute hosts (for example via NFS). Unless the option ``instances_path`` gets overwritten, this directory can grow very large. Possible values: * The full path to a directory. Defaults to value provided in ``pybasedir``. """), ] def basedir_def(*args): """Return an uninterpolated path relative to $pybasedir.""" return os.path.join('$pybasedir', *args) def bindir_def(*args): """Return an uninterpolated path relative to $bindir.""" return os.path.join('$bindir', *args) def state_path_def(*args): """Return an uninterpolated path relative to $state_path.""" return os.path.join('$state_path', *args) def register_opts(conf): conf.register_opts(ALL_OPTS) def list_opts(): return {"DEFAULT": ALL_OPTS} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/pci.py0000664000175000017500000001336600000000000015614 0ustar00zuulzuul00000000000000# Copyright (c) 2013 Intel, Inc. # Copyright (c) 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
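# Illustrative sketch (mirrors the helpers above): the *_def helpers in
# nova/conf/paths.py intentionally return uninterpolated '$' paths;
# oslo.config substitutes $state_path (and, through it, $pybasedir) only when
# an option using such a default is actually read.
import os

def example_state_path_def(*args):
    """Return an uninterpolated path relative to $state_path."""
    return os.path.join('$state_path', *args)

print(example_state_path_def('mnt'))   # -> $state_path/mnt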
from oslo_config import cfg pci_group = cfg.OptGroup( name='pci', title='PCI passthrough options') pci_opts = [ cfg.MultiStrOpt('alias', default=[], deprecated_name='pci_alias', deprecated_group='DEFAULT', help=""" An alias for a PCI passthrough device requirement. This allows users to specify the alias in the extra specs for a flavor, without needing to repeat all the PCI property requirements. This should be configured for the ``nova-api`` service and, assuming you wish to use move operations, for each ``nova-compute`` service. Possible Values: * A dictionary of JSON values which describe the aliases. For example:: alias = { "name": "QuickAssist", "product_id": "0443", "vendor_id": "8086", "device_type": "type-PCI", "numa_policy": "required" } This defines an alias for the Intel QuickAssist card. (multi valued). Valid key values are : ``name`` Name of the PCI alias. ``product_id`` Product ID of the device in hexadecimal. ``vendor_id`` Vendor ID of the device in hexadecimal. ``device_type`` Type of PCI device. Valid values are: ``type-PCI``, ``type-PF`` and ``type-VF``. Note that ``"device_type": "type-PF"`` **must** be specified if you wish to passthrough a device that supports SR-IOV in its entirety. ``numa_policy`` Required NUMA affinity of device. Valid values are: ``legacy``, ``preferred`` and ``required``. * Supports multiple aliases by repeating the option (not by specifying a list value):: alias = { "name": "QuickAssist-1", "product_id": "0443", "vendor_id": "8086", "device_type": "type-PCI", "numa_policy": "required" } alias = { "name": "QuickAssist-2", "product_id": "0444", "vendor_id": "8086", "device_type": "type-PCI", "numa_policy": "required" } """), cfg.MultiStrOpt('passthrough_whitelist', default=[], deprecated_name='pci_passthrough_whitelist', deprecated_group='DEFAULT', help=""" White list of PCI devices available to VMs. Possible values: * A JSON dictionary which describe a whitelisted PCI device. It should take the following format:: ["vendor_id": "",] ["product_id": "",] ["address": "[[[[]:]]:][][.[]]" | "devname": "",] {"": "",} Where ``[`` indicates zero or one occurrences, ``{`` indicates zero or multiple occurrences, and ``|`` mutually exclusive options. Note that any missing fields are automatically wildcarded. Valid key values are : ``vendor_id`` Vendor ID of the device in hexadecimal. ``product_id`` Product ID of the device in hexadecimal. ``address`` PCI address of the device. Both traditional glob style and regular expression syntax is supported. Please note that the address fields are restricted to the following maximum values: * domain - 0xFFFF * bus - 0xFF * slot - 0x1F * function - 0x7 ``devname`` Device name of the device (for e.g. interface name). Not all PCI devices have a name. ```` Additional ```` and ```` used for matching PCI devices. 
Supported ```` values are : - ``physical_network`` - ``trusted`` Valid examples are:: passthrough_whitelist = {"devname":"eth0", "physical_network":"physnet"} passthrough_whitelist = {"address":"*:0a:00.*"} passthrough_whitelist = {"address":":0a:00.", "physical_network":"physnet1"} passthrough_whitelist = {"vendor_id":"1137", "product_id":"0071"} passthrough_whitelist = {"vendor_id":"1137", "product_id":"0071", "address": "0000:0a:00.1", "physical_network":"physnet1"} passthrough_whitelist = {"address":{"domain": ".*", "bus": "02", "slot": "01", "function": "[2-7]"}, "physical_network":"physnet1"} passthrough_whitelist = {"address":{"domain": ".*", "bus": "02", "slot": "0[1-2]", "function": ".*"}, "physical_network":"physnet1"} passthrough_whitelist = {"devname": "eth0", "physical_network":"physnet1", "trusted": "true"} The following are invalid, as they specify mutually exclusive options:: passthrough_whitelist = {"devname":"eth0", "physical_network":"physnet", "address":"*:0a:00.*"} * A JSON list of JSON dictionaries corresponding to the above format. For example:: passthrough_whitelist = [{"product_id":"0001", "vendor_id":"8086"}, {"product_id":"0002", "vendor_id":"8086"}] """) ] def register_opts(conf): conf.register_group(pci_group) conf.register_opts(pci_opts, group=pci_group) def list_opts(): return {pci_group: pci_opts} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/placement.py0000664000175000017500000000271700000000000017007 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from keystoneauth1 import loading as ks_loading from oslo_config import cfg from nova.conf import utils as confutils DEFAULT_SERVICE_TYPE = 'placement' placement_group = cfg.OptGroup( 'placement', title='Placement Service Options', help="Configuration options for connecting to the placement API service") def register_opts(conf): conf.register_group(placement_group) confutils.register_ksa_opts(conf, placement_group, DEFAULT_SERVICE_TYPE) def list_opts(): return { placement_group.name: ( ks_loading.get_session_conf_options() + ks_loading.get_auth_common_conf_options() + ks_loading.get_auth_plugin_conf_options('password') + ks_loading.get_auth_plugin_conf_options('v2password') + ks_loading.get_auth_plugin_conf_options('v3password') + confutils.get_ksa_adapter_opts(DEFAULT_SERVICE_TYPE)) } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/powervm.py0000664000175000017500000000402100000000000016524 0ustar00zuulzuul00000000000000# Copyright 2018 IBM Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
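# Illustrative sketch (not nova's parsing code): each value of the
# MultiStrOpt ``alias`` documented above is a JSON object, so a configured
# alias can be decoded with the standard library; the values here are taken
# from the QuickAssist example in the help text.
import json

raw_alias = ('{"name": "QuickAssist", "product_id": "0443",'
             ' "vendor_id": "8086", "device_type": "type-PCI",'
             ' "numa_policy": "required"}')
alias = json.loads(raw_alias)
assert alias['vendor_id'] == '8086'
assert alias['device_type'] == 'type-PCI'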
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg powervm_group = cfg.OptGroup( name="powervm", title="PowerVM Options", help=""" PowerVM options allow cloud administrators to configure how OpenStack will work with the PowerVM hypervisor. """) powervm_opts = [ cfg.FloatOpt( 'proc_units_factor', default=0.1, min=0.05, max=1, help=""" Factor used to calculate the amount of physical processor compute power given to each vCPU. E.g. A value of 1.0 means a whole physical processor, whereas 0.05 means 1/20th of a physical processor. """), cfg.StrOpt('disk_driver', choices=['localdisk', 'ssp'], ignore_case=True, default='localdisk', help=""" The disk driver to use for PowerVM disks. PowerVM provides support for localdisk and PowerVM Shared Storage Pool disk drivers. Related options: * volume_group_name - required when using localdisk """), cfg.StrOpt('volume_group_name', default='', help=""" Volume Group to use for block device operations. If disk_driver is localdisk, then this attribute must be specified. It is strongly recommended NOT to use rootvg since that is used by the management partition and filling it will cause failures. """), ] def register_opts(conf): conf.register_group(powervm_group) conf.register_opts(powervm_opts, group=powervm_group) def list_opts(): return {powervm_group: powervm_opts} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/quota.py0000664000175000017500000002276200000000000016172 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg quota_group = cfg.OptGroup( name='quota', title='Quota Options', help=""" Quota options allow to manage quotas in openstack deployment. """) quota_opts = [ cfg.IntOpt('instances', min=-1, default=10, deprecated_group='DEFAULT', deprecated_name='quota_instances', help=""" The number of instances allowed per project. Possible Values * A positive integer or 0. * -1 to disable the quota. """), cfg.IntOpt('cores', min=-1, default=20, deprecated_group='DEFAULT', deprecated_name='quota_cores', help=""" The number of instance cores or vCPUs allowed per project. Possible values: * A positive integer or 0. * -1 to disable the quota. """), cfg.IntOpt('ram', min=-1, default=50 * 1024, deprecated_group='DEFAULT', deprecated_name='quota_ram', help=""" The number of megabytes of instance RAM allowed per project. Possible values: * A positive integer or 0. * -1 to disable the quota. 
"""), cfg.IntOpt('metadata_items', min=-1, default=128, deprecated_group='DEFAULT', deprecated_name='quota_metadata_items', help=""" The number of metadata items allowed per instance. Users can associate metadata with an instance during instance creation. This metadata takes the form of key-value pairs. Possible values: * A positive integer or 0. * -1 to disable the quota. """), cfg.IntOpt('injected_files', min=-1, default=5, deprecated_group='DEFAULT', deprecated_name='quota_injected_files', help=""" The number of injected files allowed. File injection allows users to customize the personality of an instance by injecting data into it upon boot. Only text file injection is permitted: binary or ZIP files are not accepted. During file injection, any existing files that match specified files are renamed to include ``.bak`` extension appended with a timestamp. Possible values: * A positive integer or 0. * -1 to disable the quota. """), cfg.IntOpt('injected_file_content_bytes', min=-1, default=10 * 1024, deprecated_group='DEFAULT', deprecated_name='quota_injected_file_content_bytes', help=""" The number of bytes allowed per injected file. Possible values: * A positive integer or 0. * -1 to disable the quota. """), cfg.IntOpt('injected_file_path_length', min=-1, default=255, deprecated_group='DEFAULT', deprecated_name='quota_injected_file_path_length', help=""" The maximum allowed injected file path length. Possible values: * A positive integer or 0. * -1 to disable the quota. """), cfg.IntOpt('key_pairs', min=-1, default=100, deprecated_group='DEFAULT', deprecated_name='quota_key_pairs', help=""" The maximum number of key pairs allowed per user. Users can create at least one key pair for each project and use the key pair for multiple instances that belong to that project. Possible values: * A positive integer or 0. * -1 to disable the quota. """), cfg.IntOpt('server_groups', min=-1, default=10, deprecated_group='DEFAULT', deprecated_name='quota_server_groups', help=""" The maxiumum number of server groups per project. Server groups are used to control the affinity and anti-affinity scheduling policy for a group of servers or instances. Reducing the quota will not affect any existing group, but new servers will not be allowed into groups that have become over quota. Possible values: * A positive integer or 0. * -1 to disable the quota. """), cfg.IntOpt('server_group_members', min=-1, default=10, deprecated_group='DEFAULT', deprecated_name='quota_server_group_members', help=""" The maximum number of servers per server group. Possible values: * A positive integer or 0. * -1 to disable the quota. """), cfg.StrOpt('driver', default='nova.quota.DbQuotaDriver', choices=[ ('nova.quota.DbQuotaDriver', 'Stores quota limit information ' 'in the database and relies on the ``quota_*`` configuration ' 'options for default quota limit values. Counts quota usage ' 'on-demand.'), ('nova.quota.NoopQuotaDriver', 'Ignores quota and treats all ' 'resources as unlimited.'), ], help=""" Provides abstraction for quota checks. Users can configure a specific driver to use for quota checks. """), cfg.BoolOpt('recheck_quota', default=True, help=""" Recheck quota after resource creation to prevent allowing quota to be exceeded. This defaults to True (recheck quota after resource creation) but can be set to False to avoid additional load if allowing quota to be exceeded because of racing requests is considered acceptable. 
For example, when set to False, if a user makes highly parallel REST API requests to create servers, it will be possible for them to create more servers than their allowed quota during the race. If their quota is 10 servers, they might be able to create 50 during the burst. After the burst, they will not be able to create any more servers but they will be able to keep their 50 servers until they delete them. The initial quota check is done before resources are created, so if multiple parallel requests arrive at the same time, all could pass the quota check and create resources, potentially exceeding quota. When recheck_quota is True, quota will be checked a second time after resources have been created and if the resource is over quota, it will be deleted and OverQuota will be raised, usually resulting in a 403 response to the REST API user. This makes it impossible for a user to exceed their quota with the caveat that it will, however, be possible for a REST API user to be rejected with a 403 response in the event of a collision close to reaching their quota limit, even if the user has enough quota available when they made the request. """), cfg.BoolOpt( 'count_usage_from_placement', default=False, help=""" Enable the counting of quota usage from the placement service. Starting in Train, it is possible to count quota usage for cores and ram from the placement service and instances from the API database instead of counting from cell databases. This works well if there is only one Nova deployment running per placement deployment. However, if an operator is running more than one Nova deployment sharing a placement deployment, they should not set this option to True because currently the placement service has no way to partition resource providers per Nova deployment. When this option is left as the default or set to False, Nova will use the legacy counting method to count quota usage for instances, cores, and ram from its cell databases. Note that quota usage behavior related to resizes will be affected if this option is set to True. Placement resource allocations are claimed on the destination while holding allocations on the source during a resize, until the resize is confirmed or reverted. During this time, when the server is in VERIFY_RESIZE state, quota usage will reflect resource consumption on both the source and the destination. This can be beneficial as it reserves space for a revert of a downsize, but it also means quota usage will be inflated until a resize is confirmed or reverted. Behavior will also be different for unscheduled servers in ERROR state. A server in ERROR state that has never been scheduled to a compute host will not have placement allocations, so it will not consume quota usage for cores and ram. Behavior will be different for servers in SHELVED_OFFLOADED state. A server in SHELVED_OFFLOADED state will not have placement allocations, so it will not consume quota usage for cores and ram. Note that because of this, it will be possible for a request to unshelve a server to be rejected if the user does not have enough quota available to support the cores and ram needed by the server to be unshelved. The ``populate_queued_for_delete`` and ``populate_user_id`` online data migrations must be completed before usage can be counted from placement. 
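# Illustrative sketch (not nova's implementation) of the ``recheck_quota``
# behaviour described above: usage is checked before creating the resource
# and, when rechecking is enabled, once more afterwards; a resource that
# pushed usage over the limit is deleted again. All callables here are
# hypothetical stand-ins.
class OverQuota(Exception):
    pass

def create_with_recheck(count_usage, create, delete, limit, recheck=True):
    if count_usage() >= limit:
        raise OverQuota()
    resource = create()
    if recheck and count_usage() > limit:
        delete(resource)
        raise OverQuota()
    return resource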
Until the data migration is complete, the system will fall back to legacy quota usage counting from cell databases depending on the result of an EXISTS database query during each quota check, if this configuration option is set to True. Operators who want to avoid the performance hit from the EXISTS queries should wait to set this configuration option to True until after they have completed their online data migrations via ``nova-manage db online_data_migrations``. """), ] def register_opts(conf): conf.register_group(quota_group) conf.register_opts(quota_opts, group=quota_group) def list_opts(): return {quota_group: quota_opts} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/rdp.py0000664000175000017500000000565700000000000015632 0ustar00zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg rdp_group = cfg.OptGroup( 'rdp', title='RDP options', help=""" Options under this group enable and configure Remote Desktop Protocol ( RDP) related features. This group is only relevant to Hyper-V users. """ ) RDP_OPTS = [ cfg.BoolOpt('enabled', default=False, help=""" Enable Remote Desktop Protocol (RDP) related features. Hyper-V, unlike the majority of the hypervisors employed on Nova compute nodes, uses RDP instead of VNC and SPICE as a desktop sharing protocol to provide instance console access. This option enables RDP for graphical console access for virtual machines created by Hyper-V. **Note:** RDP should only be enabled on compute nodes that support the Hyper-V virtualization platform. Related options: * ``compute_driver``: Must be hyperv. """), cfg.URIOpt('html5_proxy_base_url', schemes=['http', 'https'], default='http://127.0.0.1:6083/', help=""" The URL an end user would use to connect to the RDP HTML5 console proxy. The console proxy service is called with this token-embedded URL and establishes the connection to the proper instance. An RDP HTML5 console proxy service will need to be configured to listen on the address configured here. Typically the console proxy service would be run on a controller node. The localhost address used as default would only work in a single node environment i.e. devstack. An RDP HTML5 proxy allows a user to access via the web the text or graphical console of any Windows server or workstation using RDP. RDP HTML5 console proxy services include FreeRDP, wsgate. See https://github.com/FreeRDP/FreeRDP-WebConnect Possible values: * ://:/ The scheme must be identical to the scheme configured for the RDP HTML5 console proxy service. It is ``http`` or ``https``. The IP address must be identical to the address on which the RDP HTML5 console proxy service is listening. The port must be identical to the port on which the RDP HTML5 console proxy service is listening. Related options: * ``rdp.enabled``: Must be set to ``True`` for ``html5_proxy_base_url`` to be effective. 
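# Illustrative sketch (not part of nova): ``html5_proxy_base_url`` above is a
# URIOpt restricted to http/https, so only URLs with one of those schemes are
# accepted; the group and option are re-declared locally for illustration.
from oslo_config import cfg

example_conf = cfg.ConfigOpts()
example_conf.register_opts(
    [cfg.URIOpt('html5_proxy_base_url', schemes=['http', 'https'],
                default='http://127.0.0.1:6083/')],
    group='rdp')
example_conf(args=[], default_config_files=[])
print(example_conf.rdp.html5_proxy_base_url)   # -> http://127.0.0.1:6083/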
"""), ] def register_opts(conf): conf.register_group(rdp_group) conf.register_opts(RDP_OPTS, rdp_group) def list_opts(): return {rdp_group: RDP_OPTS} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/conf/remote_debug.py0000664000175000017500000000452600000000000017500 0ustar00zuulzuul00000000000000# Copyright (c) 2016 Intel, Inc. # Copyright (c) 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg debugger_group = cfg.OptGroup('remote_debug', title='debugger options') CLI_OPTS = [ cfg.HostAddressOpt('host', help=""" Debug host (IP or name) to connect to. This command line parameter is used when you want to connect to a nova service via a debugger running on a different host. Note that using the remote debug option changes how Nova uses the eventlet library to support async IO. This could result in failures that do not occur under normal operation. Use at your own risk. Possible Values: * IP address of a remote host as a command line parameter to a nova service. For Example: /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf --remote_debug-host """), cfg.PortOpt('port', help=""" Debug port to connect to. This command line parameter allows you to specify the port you want to use to connect to a nova service via a debugger running on different host. Note that using the remote debug option changes how Nova uses the eventlet library to support async IO. This could result in failures that do not occur under normal operation. Use at your own risk. Possible Values: * Port number you want to use as a command line parameter to a nova service. For Example: /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf --remote_debug-host --remote_debug-port it's listening on>. """), ] def register_cli_opts(conf): conf.register_group(debugger_group) conf.register_cli_opts(CLI_OPTS, group=debugger_group) def list_opts(): return {debugger_group: CLI_OPTS} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/rpc.py0000664000175000017500000000257500000000000015625 0ustar00zuulzuul00000000000000# Copyright 2018 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg rpc_opts = [ cfg.IntOpt("long_rpc_timeout", default=1800, help=""" This option allows setting an alternate timeout value for RPC calls that have the potential to take a long time. 
If set, RPC calls to other services will use this value for the timeout (in seconds) instead of the global rpc_response_timeout value. Operations with RPC calls that utilize this value: * live migration * scheduling * enabling/disabling a compute service * image pre-caching * snapshot-based / cross-cell resize * resize / cold migration * volume attach Related options: * rpc_response_timeout """), ] ALL_OPTS = rpc_opts def register_opts(conf): conf.register_opts(ALL_OPTS) def list_opts(): return {'DEFAULT': ALL_OPTS} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/conf/scheduler.py0000664000175000017500000007756000000000000017025 0ustar00zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from nova.virt import arch scheduler_group = cfg.OptGroup(name="scheduler", title="Scheduler configuration") scheduler_opts = [ cfg.StrOpt("driver", default="filter_scheduler", deprecated_name="scheduler_driver", deprecated_group="DEFAULT", deprecated_for_removal=True, deprecated_since='21.0.0', deprecated_reason=""" nova no longer provides any in-tree filters except for the 'filter_scheduler' scheduler. This filter is considered flexible and pluggable enough for all use cases and can be extended through the use of custom, out-of-tree filters and weighers along with powerful, in-tree filters like the 'AggregateInstanceExtraSpecsFilter' and 'ComputeCapabilitiesFilter' filters. """, help=""" The class of the driver used by the scheduler. This should be chosen from one of the entrypoints under the namespace 'nova.scheduler.driver' of file 'setup.cfg'. If nothing is specified in this option, the 'filter_scheduler' is used. Possible values: * Any of the drivers included in Nova: * filter_scheduler * You may also set this to the entry point name of a custom scheduler driver, but you will be responsible for creating and maintaining it in your ``setup.cfg`` file. Related options: * workers """), cfg.IntOpt("periodic_task_interval", default=60, help=""" Periodic task interval. This value controls how often (in seconds) to run periodic tasks in the scheduler. The specific tasks that are run for each period are determined by the particular scheduler being used. Currently there are no in-tree scheduler driver that use this option. If this is larger than the nova-service 'service_down_time' setting, the ComputeFilter (if enabled) may think the compute service is down. As each scheduler can work a little differently than the others, be sure to test this with your selected scheduler. Possible values: * An integer, where the integer corresponds to periodic task interval in seconds. 0 uses the default interval (60 seconds). A negative value disables periodic tasks. 
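As a rough illustration only (a simplified sketch, not the actual service
code), the value could be interpreted along these lines::

    DEFAULT_INTERVAL = 60

    def effective_interval(configured):
        # Negative values disable periodic tasks entirely.
        if configured < 0:
            return None
        # Zero falls back to the default 60 second interval.
        if configured == 0:
            return DEFAULT_INTERVAL
        return configured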
Related options: * ``nova-service service_down_time`` """), cfg.IntOpt("max_attempts", default=3, min=1, deprecated_name="scheduler_max_attempts", deprecated_group="DEFAULT", help=""" This is the maximum number of attempts that will be made for a given instance build/move operation. It limits the number of alternate hosts returned by the scheduler. When that list of hosts is exhausted, a MaxRetriesExceeded exception is raised and the instance is set to an error state. Possible values: * A positive integer, where the integer corresponds to the max number of attempts that can be made when building or moving an instance. """), cfg.IntOpt("discover_hosts_in_cells_interval", default=-1, min=-1, help=""" Periodic task interval. This value controls how often (in seconds) the scheduler should attempt to discover new hosts that have been added to cells. If negative (the default), no automatic discovery will occur. Deployments where compute nodes come and go frequently may want this enabled, where others may prefer to manually discover hosts when one is added to avoid any overhead from constantly checking. If enabled, every time this runs, we will select any unmapped hosts out of each cell database on every run. """), cfg.IntOpt("max_placement_results", default=1000, min=1, help=""" This setting determines the maximum limit on results received from the placement service during a scheduling operation. It effectively limits the number of hosts that may be considered for scheduling requests that match a large number of candidates. A value of 1 (the minimum) will effectively defer scheduling to the placement service strictly on "will it fit" grounds. A higher value will put an upper cap on the number of results the scheduler will consider during the filtering and weighing process. Large deployments may need to set this lower than the total number of hosts available to limit memory consumption, network traffic, etc. of the scheduler. This option is only used by the FilterScheduler; if you use a different scheduler, this option has no effect. """), cfg.IntOpt("workers", min=0, help=""" Number of workers for the nova-scheduler service. The default will be the number of CPUs available if using the "filter_scheduler" scheduler driver, otherwise the default will be 1. """), cfg.BoolOpt("limit_tenants_to_placement_aggregate", default=False, help=""" This setting causes the scheduler to look up a host aggregate with the metadata key of `filter_tenant_id` set to the project of an incoming request, and request results from placement be limited to that aggregate. Multiple tenants may be added to a single aggregate by appending a serial number to the key, such as `filter_tenant_id:123`. The matching aggregate UUID must be mirrored in placement for proper operation. If no host aggregate with the tenant id is found, or that aggregate does not match one in placement, the result will be the same as not finding any suitable hosts for the request. See also the placement_aggregate_required_for_tenants option. """), cfg.BoolOpt("placement_aggregate_required_for_tenants", default=False, help=""" This setting, when limit_tenants_to_placement_aggregate=True, will control whether or not a tenant with no aggregate affinity will be allowed to schedule to any available node. If aggregates are used to limit some tenants but not all, then this should be False. If all tenants should be confined via aggregate, then this should be True to prevent them from receiving unrestricted scheduling to any available node. 
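The interplay between this option and
``limit_tenants_to_placement_aggregate`` can be summarised with a small
sketch (hypothetical pseudologic, not the scheduler implementation)::

    def tenant_may_schedule(tenant_has_matching_aggregate,
                            limit_tenants_to_placement_aggregate,
                            placement_aggregate_required_for_tenants):
        # Feature disabled: no aggregate based restriction at all.
        if not limit_tenants_to_placement_aggregate:
            return True
        # A matching aggregate exists: results are limited to it.
        if tenant_has_matching_aggregate:
            return True
        # No mapping found: only allowed when affinity is not required.
        return not placement_aggregate_required_for_tenants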
See also the limit_tenants_to_placement_aggregate option. """), cfg.BoolOpt("query_placement_for_availability_zone", default=False, help=""" This setting causes the scheduler to look up a host aggregate with the metadata key of `availability_zone` set to the value provided by an incoming request, and request results from placement be limited to that aggregate. The matching aggregate UUID must be mirrored in placement for proper operation. If no host aggregate with the `availability_zone` key is found, or that aggregate does not match one in placement, the result will be the same as not finding any suitable hosts. Note that if you enable this flag, you can disable the (less efficient) AvailabilityZoneFilter in the scheduler. """), cfg.BoolOpt("query_placement_for_image_type_support", default=False, help=""" This setting causes the scheduler to ask placement only for compute hosts that support the ``disk_format`` of the image used in the request. """), cfg.BoolOpt("enable_isolated_aggregate_filtering", default=False, help=""" This setting allows the scheduler to restrict hosts in aggregates based on matching required traits in the aggregate metadata and the instance flavor/image. If an aggregate is configured with a property with key ``trait:$TRAIT_NAME`` and value ``required``, the instance flavor extra_specs and/or image metadata must also contain ``trait:$TRAIT_NAME=required`` to be eligible to be scheduled to hosts in that aggregate. More technical details at https://docs.openstack.org/nova/latest/reference/isolate-aggregates.html """), cfg.BoolOpt("image_metadata_prefilter", default=False, help=""" This setting causes the scheduler to transform well known image metadata properties into placement required traits to filter host based on image metadata. This feature requires host support and is currently supported by the following compute drivers: - ``libvirt.LibvirtDriver`` (since Ussuri (21.0.0)) """), ] filter_scheduler_group = cfg.OptGroup(name="filter_scheduler", title="Filter scheduler options") filter_scheduler_opts = [ cfg.IntOpt("host_subset_size", default=1, min=1, deprecated_name="scheduler_host_subset_size", deprecated_group="DEFAULT", help=""" Size of subset of best hosts selected by scheduler. New instances will be scheduled on a host chosen randomly from a subset of the N best hosts, where N is the value set by this option. Setting this to a value greater than 1 will reduce the chance that multiple scheduler processes handling similar requests will select the same host, creating a potential race condition. By selecting a host randomly from the N hosts that best fit the request, the chance of a conflict is reduced. However, the higher you set this value, the less optimal the chosen host may be for a given request. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Possible values: * An integer, where the integer corresponds to the size of a host subset. Any integer is valid, although any value less than 1 will be treated as 1 """), cfg.IntOpt("max_io_ops_per_host", default=8, deprecated_group="DEFAULT", help=""" The number of instances that can be actively performing IO on a host. Instances performing IO includes those in the following states: build, resize, snapshot, migrate, rescue, unshelve. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. 
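In simplified form, the scheduling decision driven by this option amounts to
a comparison like the following (an illustrative sketch only, not the
in-tree filter code, which works on internal host state objects)::

    def host_passes(num_io_ops_on_host, max_io_ops_per_host=8):
        # Reject hosts that are already performing at least the configured
        # number of IO-heavy operations.
        return num_io_ops_on_host < max_io_ops_per_host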
Also note that this setting only affects scheduling if the 'io_ops_filter' filter is enabled. Possible values: * An integer, where the integer corresponds to the max number of instances that can be actively performing IO on any given host. """), cfg.IntOpt("max_instances_per_host", default=50, min=1, deprecated_group="DEFAULT", help=""" Maximum number of instances that can exist on a host. If you need to limit the number of instances on any given host, set this option to the maximum number of instances you want to allow. The NumInstancesFilter and AggregateNumInstancesFilter will reject any host that has at least as many instances as this option's value. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ``NumInstancesFilter`` or ``AggregateNumInstancesFilter`` filter is enabled. Possible values: * An integer, where the integer corresponds to the max instances that can be scheduled on a host. """), cfg.BoolOpt("track_instance_changes", default=True, deprecated_name="scheduler_tracks_instance_changes", deprecated_group="DEFAULT", help=""" Enable querying of individual hosts for instance information. The scheduler may need information about the instances on a host in order to evaluate its filters and weighers. The most common need for this information is for the (anti-)affinity filters, which need to choose a host based on the instances already running on a host. If the configured filters and weighers do not need this information, disabling this option will improve performance. It may also be disabled when the tracking overhead proves too heavy, although this will cause classes requiring host usage data to query the database on each request instead. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. NOTE: In a multi-cell (v2) setup where the cell MQ is separated from the top-level, computes cannot directly communicate with the scheduler. Thus, this option cannot be enabled in that scenario. See also the [workarounds]/disable_group_policy_check_upcall option. """), cfg.MultiStrOpt("available_filters", default=["nova.scheduler.filters.all_filters"], deprecated_name="scheduler_available_filters", deprecated_group="DEFAULT", help=""" Filters that the scheduler can use. An unordered list of the filter classes the nova scheduler may apply. Only the filters specified in the 'enabled_filters' option will be used, but any filter appearing in that option must also be included in this list. By default, this is set to all filters that are included with nova. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Possible values: * A list of zero or more strings, where each string corresponds to the name of a filter that may be used for selecting a host Related options: * enabled_filters """), cfg.ListOpt("enabled_filters", # NOTE(artom) If we change the defaults here, we should also update # Tempest's scheduler_enabled_filters to keep the default values in # sync. default=[ "AvailabilityZoneFilter", "ComputeFilter", "ComputeCapabilitiesFilter", "ImagePropertiesFilter", "ServerGroupAntiAffinityFilter", "ServerGroupAffinityFilter", ], deprecated_name="scheduler_default_filters", deprecated_group="DEFAULT", help=""" Filters that the scheduler will use. 
An ordered list of filter class names that will be used for filtering hosts. These filters will be applied in the order they are listed so place your most restrictive filters first to make the filtering process more efficient. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Possible values: * A list of zero or more strings, where each string corresponds to the name of a filter to be used for selecting a host Related options: * All of the filters in this option *must* be present in the 'available_filters' option, or a SchedulerHostFilterNotFound exception will be raised. """), cfg.ListOpt("weight_classes", default=["nova.scheduler.weights.all_weighers"], deprecated_name="scheduler_weight_classes", deprecated_group="DEFAULT", help=""" Weighers that the scheduler will use. Only hosts which pass the filters are weighed. The weight for any host starts at 0, and the weighers order these hosts by adding to or subtracting from the weight assigned by the previous weigher. Weights may become negative. An instance will be scheduled to one of the N most-weighted hosts, where N is 'scheduler_host_subset_size'. By default, this is set to all weighers that are included with Nova. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Possible values: * A list of zero or more strings, where each string corresponds to the name of a weigher that will be used for selecting a host """), cfg.FloatOpt("ram_weight_multiplier", default=1.0, deprecated_group="DEFAULT", help=""" RAM weight multipler ratio. This option determines how hosts with more or less available RAM are weighed. A positive value will result in the scheduler preferring hosts with more available RAM, and a negative number will result in the scheduler preferring hosts with less available RAM. Another way to look at it is that positive values for this option will tend to spread instances across many hosts, while negative values will tend to fill up (stack) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the RAM weigher is relative to other weighers. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'ram' weigher is enabled. Possible values: * An integer or float value, where the value corresponds to the multipler ratio for this weigher. """), cfg.FloatOpt("cpu_weight_multiplier", default=1.0, help=""" CPU weight multiplier ratio. Multiplier used for weighting free vCPUs. Negative numbers indicate stacking rather than spreading. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'cpu' weigher is enabled. Possible values: * An integer or float value, where the value corresponds to the multipler ratio for this weigher. Related options: * ``filter_scheduler.weight_classes``: This weigher must be added to list of enabled weight classes if the ``weight_classes`` setting is set to a non-default value. """), cfg.FloatOpt("disk_weight_multiplier", default=1.0, deprecated_group="DEFAULT", help=""" Disk weight multipler ratio. Multiplier used for weighing free disk space. Negative numbers mean to stack vs spread. 
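Conceptually, each enabled weigher contributes a normalized score that is
scaled by its multiplier and summed into the host's total weight. A
simplified sketch with hypothetical values (not the in-tree weigher code)::

    # Each entry: (normalized score in [0.0, 1.0], configured multiplier).
    contributions = [
        (0.8, 1.0),   # e.g. ram weigher
        (0.5, 1.0),   # e.g. cpu weigher
        (0.9, 1.0),   # e.g. disk weigher
    ]
    total_weight = sum(score * multiplier
                       for score, multiplier in contributions)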
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'disk' weigher is enabled. Possible values: * An integer or float value, where the value corresponds to the multipler ratio for this weigher. """), cfg.FloatOpt("io_ops_weight_multiplier", default=-1.0, deprecated_group="DEFAULT", help=""" IO operations weight multipler ratio. This option determines how hosts with differing workloads are weighed. Negative values, such as the default, will result in the scheduler preferring hosts with lighter workloads whereas positive values will prefer hosts with heavier workloads. Another way to look at it is that positive values for this option will tend to schedule instances onto hosts that are already busy, while negative values will tend to distribute the workload across more hosts. The absolute value, whether positive or negative, controls how strong the io_ops weigher is relative to other weighers. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'io_ops' weigher is enabled. Possible values: * An integer or float value, where the value corresponds to the multipler ratio for this weigher. """), cfg.FloatOpt("pci_weight_multiplier", default=1.0, min=0.0, help=""" PCI device affinity weight multiplier. The PCI device affinity weighter computes a weighting based on the number of PCI devices on the host and the number of PCI devices requested by the instance. The ``NUMATopologyFilter`` filter must be enabled for this to have any significance. For more information, refer to the filter documentation: https://docs.openstack.org/nova/latest/user/filter-scheduler.html Possible values: * A positive integer or float value, where the value corresponds to the multiplier ratio for this weigher. """), cfg.FloatOpt("soft_affinity_weight_multiplier", default=1.0, min=0.0, help=""" Multiplier used for weighing hosts for group soft-affinity. Possible values: * A non-negative integer or float value, where the value corresponds to weight multiplier for hosts with group soft affinity. """), cfg.FloatOpt( "soft_anti_affinity_weight_multiplier", default=1.0, min=0.0, help=""" Multiplier used for weighing hosts for group soft-anti-affinity. Possible values: * A non-negative integer or float value, where the value corresponds to weight multiplier for hosts with group soft anti-affinity. """), cfg.FloatOpt( "build_failure_weight_multiplier", default=1000000.0, help=""" Multiplier used for weighing hosts that have had recent build failures. This option determines how much weight is placed on a compute node with recent build failures. Build failures may indicate a failing, misconfigured, or otherwise ailing compute node, and avoiding it during scheduling may be beneficial. The weight is inversely proportional to the number of recent build failures the compute node has experienced. This value should be set to some high value to offset weight given by other enabled weighers due to available resources. To disable weighing compute hosts by the number of recent failures, set this to zero. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Possible values: * An integer or float value, where the value corresponds to the multiplier ratio for this weigher. 
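One way to picture the intended effect (a simplified sketch, not the exact
in-tree formula) is that every recent failure lowers the host's weight,
scaled by this multiplier::

    def build_failure_penalty(recent_build_failures,
                              build_failure_weight_multiplier=1000000.0):
        # More recent failures produce a larger negative contribution,
        # pushing the host towards the bottom of the weighed list.
        return -1.0 * build_failure_weight_multiplier * recent_build_failures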
Related options: * [compute]/consecutive_build_service_disable_threshold - Must be nonzero for a compute to report data considered by this weigher. """), cfg.FloatOpt( "cross_cell_move_weight_multiplier", default=1000000.0, help=""" Multiplier used for weighing hosts during a cross-cell move. This option determines how much weight is placed on a host which is within the same source cell when moving a server, for example during cross-cell resize. By default, when moving an instance, the scheduler will prefer hosts within the same cell since cross-cell move operations can be slower and riskier due to the complicated nature of cross-cell migrations. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Similarly, if your cloud is not configured to support cross-cell migrations, then this option has no effect. The value of this configuration option can be overridden per host aggregate by setting the aggregate metadata key with the same name (cross_cell_move_weight_multiplier). Possible values: * An integer or float value, where the value corresponds to the multiplier ratio for this weigher. Positive values mean the weigher will prefer hosts within the same cell in which the instance is currently running. Negative values mean the weigher will prefer hosts in *other* cells from which the instance is currently running. """), cfg.BoolOpt( "shuffle_best_same_weighed_hosts", default=False, help=""" Enable spreading the instances between hosts with the same best weight. Enabling it is beneficial for cases when host_subset_size is 1 (default), but there is a large number of hosts with same maximal weight. This scenario is common in Ironic deployments where there are typically many baremetal nodes with identical weights returned to the scheduler. In such case enabling this option will reduce contention and chances for rescheduling events. At the same time it will make the instance packing (even in unweighed case) less dense. """), cfg.StrOpt( "image_properties_default_architecture", choices=arch.ALL, help=""" The default architecture to be used when using the image properties filter. When using the ImagePropertiesFilter, it is possible that you want to define a default architecture to make the user experience easier and avoid having something like x86_64 images landing on aarch64 compute nodes because the user did not specify the 'hw_architecture' property in Glance. Possible values: * CPU Architectures such as x86_64, aarch64, s390x. """), # TODO(mikal): replace this option with something involving host aggregates cfg.ListOpt("isolated_images", default=[], deprecated_group="DEFAULT", help=""" List of UUIDs for images that can only be run on certain hosts. If there is a need to restrict some images to only run on certain designated hosts, list those image UUIDs here. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'IsolatedHostsFilter' filter is enabled. Possible values: * A list of UUID strings, where each string corresponds to the UUID of an image Related options: * scheduler/isolated_hosts * scheduler/restrict_isolated_hosts_to_isolated_images """), cfg.ListOpt("isolated_hosts", default=[], deprecated_group="DEFAULT", help=""" List of hosts that can only run certain images. If there is a need to restrict some images to only run on certain designated hosts, list those host names here. 
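Taken together with ``isolated_images`` and
``restrict_isolated_hosts_to_isolated_images``, the intended behaviour can
be sketched as follows (illustrative pseudologic only, not the actual
filter implementation)::

    def host_passes(host, image_id, isolated_hosts, isolated_images,
                    restrict_isolated_hosts_to_isolated_images=True):
        host_is_isolated = host in isolated_hosts
        image_is_isolated = image_id in isolated_images
        if image_is_isolated:
            # Isolated images may only land on isolated hosts.
            return host_is_isolated
        if host_is_isolated and restrict_isolated_hosts_to_isolated_images:
            # Isolated hosts are reserved for isolated images.
            return False
        return True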
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'IsolatedHostsFilter' filter is enabled. Possible values: * A list of strings, where each string corresponds to the name of a host Related options: * scheduler/isolated_images * scheduler/restrict_isolated_hosts_to_isolated_images """), cfg.BoolOpt( "restrict_isolated_hosts_to_isolated_images", default=True, deprecated_group="DEFAULT", help=""" Prevent non-isolated images from being built on isolated hosts. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'IsolatedHostsFilter' filter is enabled. Even then, this option doesn't affect the behavior of requests for isolated images, which will *always* be restricted to isolated hosts. Related options: * scheduler/isolated_images * scheduler/isolated_hosts """), cfg.StrOpt( "aggregate_image_properties_isolation_namespace", deprecated_group="DEFAULT", help=""" Image property namespace for use in the host aggregate. Images and hosts can be configured so that certain images can only be scheduled to hosts in a particular aggregate. This is done with metadata values set on the host aggregate that are identified by beginning with the value of this option. If the host is part of an aggregate with such a metadata key, the image in the request spec must have the value of that metadata in its properties in order for the scheduler to consider the host as acceptable. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'aggregate_image_properties_isolation' filter is enabled. Possible values: * A string, where the string corresponds to an image property namespace Related options: * aggregate_image_properties_isolation_separator """), cfg.StrOpt( "aggregate_image_properties_isolation_separator", default=".", deprecated_group="DEFAULT", help=""" Separator character(s) for image property namespace and name. When using the aggregate_image_properties_isolation filter, the relevant metadata keys are prefixed with the namespace defined in the aggregate_image_properties_isolation_namespace configuration option plus a separator. This option defines the separator to be used. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'aggregate_image_properties_isolation' filter is enabled. Possible values: * A string, where the string corresponds to an image property namespace separator character Related options: * aggregate_image_properties_isolation_namespace """)] metrics_group = cfg.OptGroup(name="metrics", title="Metrics parameters", help=""" Configuration options for metrics Options under this group allow to adjust how values assigned to metrics are calculated. 
""") metrics_weight_opts = [ cfg.FloatOpt("weight_multiplier", default=1.0, help=""" When using metrics to weight the suitability of a host, you can use this option to change how the calculated weight influences the weight assigned to a host as follows: * >1.0: increases the effect of the metric on overall weight * 1.0: no change to the calculated weight * >0.0,<1.0: reduces the effect of the metric on overall weight * 0.0: the metric value is ignored, and the value of the 'weight_of_unavailable' option is returned instead * >-1.0,<0.0: the effect is reduced and reversed * -1.0: the effect is reversed * <-1.0: the effect is increased proportionally and reversed This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Possible values: * An integer or float value, where the value corresponds to the multipler ratio for this weigher. Related options: * weight_of_unavailable """), cfg.ListOpt("weight_setting", default=[], help=""" This setting specifies the metrics to be weighed and the relative ratios for each metric. This should be a single string value, consisting of a series of one or more 'name=ratio' pairs, separated by commas, where 'name' is the name of the metric to be weighed, and 'ratio' is the relative weight for that metric. Note that if the ratio is set to 0, the metric value is ignored, and instead the weight will be set to the value of the 'weight_of_unavailable' option. As an example, let's consider the case where this option is set to: ``name1=1.0, name2=-1.3`` The final weight will be: ``(name1.value * 1.0) + (name2.value * -1.3)`` This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Possible values: * A list of zero or more key/value pairs separated by commas, where the key is a string representing the name of a metric and the value is a numeric weight for that metric. If any value is set to 0, the value is ignored and the weight will be set to the value of the 'weight_of_unavailable' option. Related options: * weight_of_unavailable """), cfg.BoolOpt("required", default=True, help=""" This setting determines how any unavailable metrics are treated. If this option is set to True, any hosts for which a metric is unavailable will raise an exception, so it is recommended to also use the MetricFilter to filter out those hosts before weighing. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Possible values: * True or False, where False ensures any metric being unavailable for a host will set the host weight to 'weight_of_unavailable'. Related options: * weight_of_unavailable """), cfg.FloatOpt("weight_of_unavailable", default=float(-10000.0), help=""" When any of the following conditions are met, this value will be used in place of any actual metric value: * One of the metrics named in 'weight_setting' is not available for a host, and the value of 'required' is False * The ratio specified for a metric in 'weight_setting' is 0 * The 'weight_multiplier' option is set to 0 This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Possible values: * An integer or float value, where the value corresponds to the multipler ratio for this weigher. 
Related options: * weight_setting * required * weight_multiplier """), ] def register_opts(conf): conf.register_group(scheduler_group) conf.register_opts(scheduler_opts, group=scheduler_group) conf.register_group(filter_scheduler_group) conf.register_opts(filter_scheduler_opts, group=filter_scheduler_group) conf.register_group(metrics_group) conf.register_opts(metrics_weight_opts, group=metrics_group) def list_opts(): return {scheduler_group: scheduler_opts, filter_scheduler_group: filter_scheduler_opts, metrics_group: metrics_weight_opts} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/serial_console.py0000664000175000017500000001025700000000000020036 0ustar00zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg DEFAULT_PORT_RANGE = '10000:20000' serial_opt_group = cfg.OptGroup("serial_console", title="The serial console feature", help=""" The serial console feature allows you to connect to a guest in case a graphical console like VNC, RDP or SPICE is not available. This is only currently supported for the libvirt, Ironic and hyper-v drivers.""") ALL_OPTS = [ cfg.BoolOpt('enabled', default=False, help=""" Enable the serial console feature. In order to use this feature, the service ``nova-serialproxy`` needs to run. This service is typically executed on the controller node. """), cfg.StrOpt('port_range', default=DEFAULT_PORT_RANGE, regex=r'^\d+:\d+$', help=r""" A range of TCP ports a guest can use for its backend. Each instance which gets created will use one port out of this range. If the range is not big enough to provide another port for an new instance, this instance won't get launched. Possible values: * Each string which passes the regex ``^\d+:\d+$`` For example ``10000:20000``. Be sure that the first port number is lower than the second port number and that both are in range from 0 to 65535. """), # TODO(macsz) check if WS protocol is still being used cfg.URIOpt('base_url', default='ws://127.0.0.1:6083/', help=""" The URL an end user would use to connect to the ``nova-serialproxy`` service. The ``nova-serialproxy`` service is called with this token enriched URL and establishes the connection to the proper instance. Related options: * The IP address must be identical to the address to which the ``nova-serialproxy`` service is listening (see option ``serialproxy_host`` in this section). * The port must be the same as in the option ``serialproxy_port`` of this section. * If you choose to use a secured websocket connection, then start this option with ``wss://`` instead of the unsecured ``ws://``. The options ``cert`` and ``key`` in the ``[DEFAULT]`` section have to be set for that. """), cfg.StrOpt('proxyclient_address', default='127.0.0.1', help=""" The IP address to which proxy clients (like ``nova-serialproxy``) should connect to get the serial console of an instance. 
This is typically the IP address of the host of a ``nova-compute`` service. """), ] CLI_OPTS = [ cfg.StrOpt('serialproxy_host', default='0.0.0.0', help=""" The IP address which is used by the ``nova-serialproxy`` service to listen for incoming requests. The ``nova-serialproxy`` service listens on this IP address for incoming connection requests to instances which expose serial console. Related options: * Ensure that this is the same IP address which is defined in the option ``base_url`` of this section or use ``0.0.0.0`` to listen on all addresses. """), cfg.PortOpt('serialproxy_port', default=6083, help=""" The port number which is used by the ``nova-serialproxy`` service to listen for incoming requests. The ``nova-serialproxy`` service listens on this port number for incoming connection requests to instances which expose serial console. Related options: * Ensure that this is the same port number which is defined in the option ``base_url`` of this section. """) ] ALL_OPTS.extend(CLI_OPTS) def register_opts(conf): conf.register_group(serial_opt_group) conf.register_opts(ALL_OPTS, group=serial_opt_group) def register_cli_opts(conf): conf.register_group(serial_opt_group) conf.register_cli_opts(CLI_OPTS, serial_opt_group) def list_opts(): return {serial_opt_group: ALL_OPTS} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/conf/service.py0000664000175000017500000001256600000000000016502 0ustar00zuulzuul00000000000000# needs:check_deprecation_status # Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg service_opts = [ # TODO(johngarbutt) we need a better default and minimum, in a backwards # compatible way for report_interval cfg.IntOpt('report_interval', default=10, help=""" Number of seconds indicating how frequently the state of services on a given hypervisor is reported. Nova needs to know this to determine the overall health of the deployment. Related Options: * service_down_time report_interval should be less than service_down_time. If service_down_time is less than report_interval, services will routinely be considered down, because they report in too rarely. """), # TODO(johngarbutt) the code enforces the min value here, but we could # do to add some min value here, once we sort out report_interval cfg.IntOpt('service_down_time', default=60, help=""" Maximum time in seconds since last check-in for up service Each compute node periodically updates their database status based on the specified report interval. If the compute node hasn't updated the status for more than service_down_time, then the compute node is considered down. Related Options: * report_interval (service_down_time should not be less than report_interval) * scheduler.periodic_task_interval """), cfg.BoolOpt('periodic_enable', default=True, help=""" Enable periodic tasks. If set to true, this option allows services to periodically run tasks on the manager. 
In case of running multiple schedulers or conductors you may want to run periodic tasks on only one host - in this case disable this option for all hosts but one. """), cfg.IntOpt('periodic_fuzzy_delay', default=60, min=0, help=""" Number of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. When compute workers are restarted in unison across a cluster, they all end up running the periodic tasks at the same time causing problems for the external services. To mitigate this behavior, periodic_fuzzy_delay option allows you to introduce a random initial delay when starting the periodic task scheduler. Possible Values: * Any positive integer (in seconds) * 0 : disable the random delay """), cfg.ListOpt('enabled_apis', item_type=cfg.types.String(choices=['osapi_compute', 'metadata']), default=['osapi_compute', 'metadata'], help="List of APIs to be enabled by default."), cfg.ListOpt('enabled_ssl_apis', default=[], help=""" List of APIs with enabled SSL. Nova provides SSL support for the API servers. enabled_ssl_apis option allows configuring the SSL support. """), cfg.StrOpt('osapi_compute_listen', default="0.0.0.0", help=""" IP address on which the OpenStack API will listen. The OpenStack API service listens on this IP address for incoming requests. """), cfg.PortOpt('osapi_compute_listen_port', default=8774, help=""" Port on which the OpenStack API will listen. The OpenStack API service listens on this port number for incoming requests. """), cfg.IntOpt('osapi_compute_workers', min=1, help=""" Number of workers for OpenStack API service. The default will be the number of CPUs available. OpenStack API services can be configured to run as multi-process (workers). This overcomes the problem of reduction in throughput when API request concurrency increases. OpenStack API service will run in the specified number of processes. Possible Values: * Any positive integer * None (default value) """), cfg.StrOpt('metadata_listen', default="0.0.0.0", help=""" IP address on which the metadata API will listen. The metadata API service listens on this IP address for incoming requests. """), cfg.PortOpt('metadata_listen_port', default=8775, help=""" Port on which the metadata API will listen. The metadata API service listens on this port number for incoming requests. """), cfg.IntOpt('metadata_workers', min=1, help=""" Number of workers for metadata service. If not specified the number of available CPUs will be used. The metadata service can be configured to run as multi-process (workers). This overcomes the problem of reduction in throughput when API request concurrency increases. The metadata service will run in the specified number of processes. Possible Values: * Any positive integer * None (default value) """), ] def register_opts(conf): conf.register_opts(service_opts) def list_opts(): return {'DEFAULT': service_opts} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/service_token.py0000664000175000017500000000461400000000000017675 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from keystoneauth1 import loading as ks_loading from oslo_config import cfg SERVICE_USER_GROUP = 'service_user' service_user = cfg.OptGroup( SERVICE_USER_GROUP, title = 'Service token authentication type options', help = """ Configuration options for service to service authentication using a service token. These options allow sending a service token along with the user's token when contacting external REST APIs. """ ) service_user_opts = [ cfg.BoolOpt('send_service_user_token', default=False, help=""" When True, if sending a user token to a REST API, also send a service token. Nova often reuses the user token provided to the nova-api to talk to other REST APIs, such as Cinder, Glance and Neutron. It is possible that while the user token was valid when the request was made to Nova, the token may expire before it reaches the other service. To avoid any failures, and to make it clear it is Nova calling the service on the user's behalf, we include a service token along with the user token. Should the user's token have expired, a valid service token ensures the REST API request will still be accepted by the keystone middleware. """), ] def register_opts(conf): conf.register_group(service_user) conf.register_opts(service_user_opts, group=service_user) ks_loading.register_session_conf_options(conf, SERVICE_USER_GROUP) ks_loading.register_auth_conf_options(conf, SERVICE_USER_GROUP) def list_opts(): return { service_user: ( service_user_opts + ks_loading.get_session_conf_options() + ks_loading.get_auth_common_conf_options() + ks_loading.get_auth_plugin_conf_options('password') + ks_loading.get_auth_plugin_conf_options('v2password') + ks_loading.get_auth_plugin_conf_options('v3password')) } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/servicegroup.py0000664000175000017500000000323300000000000017546 0ustar00zuulzuul00000000000000# Copyright (c) 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg SERVICEGROUP_OPTS = [ cfg.StrOpt('servicegroup_driver', default='db', choices=[ ('db', 'Database ServiceGroup driver'), ('mc', 'Memcache ServiceGroup driver'), ], help=""" This option specifies the driver to be used for the servicegroup service. ServiceGroup API in nova enables checking status of a compute node. When a compute worker running the nova-compute daemon starts, it calls the join API to join the compute group. Services like nova scheduler can query the ServiceGroup API to check if a node is alive. Internally, the ServiceGroup client driver automatically updates the compute worker status. There are multiple backend implementations for this service: Database ServiceGroup driver and Memcache ServiceGroup driver. 
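With the database driver, the liveness decision essentially reduces to a
timestamp comparison; roughly (a simplified sketch, not the driver code)::

    import time

    def is_up(last_reported_at, service_down_time=60):
        # A service counts as up if it has checked in within the
        # configured service_down_time window.
        return (time.time() - last_reported_at) <= service_down_time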
Related Options: * ``service_down_time`` (maximum time since last check-in for up service) """), ] def register_opts(conf): conf.register_opts(SERVICEGROUP_OPTS) def list_opts(): return {'DEFAULT': SERVICEGROUP_OPTS} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/conf/spice.py0000664000175000017500000001324300000000000016136 0ustar00zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg spice_opt_group = cfg.OptGroup('spice', title="SPICE console features", help=""" SPICE console feature allows you to connect to a guest virtual machine. SPICE is a replacement for fairly limited VNC protocol. Following requirements must be met in order to use SPICE: * Virtualization driver must be libvirt * spice.enabled set to True * vnc.enabled set to False * update html5proxy_base_url * update server_proxyclient_address """) CLI_OPTS = [ cfg.HostAddressOpt('html5proxy_host', default='0.0.0.0', help=""" IP address or a hostname on which the ``nova-spicehtml5proxy`` service listens for incoming requests. Related options: * This option depends on the ``html5proxy_base_url`` option. The ``nova-spicehtml5proxy`` service must be listening on a host that is accessible from the HTML5 client. """), cfg.PortOpt('html5proxy_port', default=6082, help=""" Port on which the ``nova-spicehtml5proxy`` service listens for incoming requests. Related options: * This option depends on the ``html5proxy_base_url`` option. The ``nova-spicehtml5proxy`` service must be listening on a port that is accessible from the HTML5 client. """) ] ALL_OPTS = [ cfg.BoolOpt('enabled', default=False, help=""" Enable SPICE related features. Related options: * VNC must be explicitly disabled to get access to the SPICE console. Set the enabled option to False in the [vnc] section to disable the VNC console. """), cfg.BoolOpt('agent_enabled', default=True, help=""" Enable the SPICE guest agent support on the instances. The Spice agent works with the Spice protocol to offer a better guest console experience. However, the Spice console can still be used without the Spice Agent. With the Spice agent installed the following features are enabled: * Copy & Paste of text and images between the guest and client machine * Automatic adjustment of resolution when the client screen changes - e.g. if you make the Spice console full screen the guest resolution will adjust to match it rather than letterboxing. * Better mouse integration - The mouse can be captured and released without needing to click inside the console or press keys to release it. The performance of mouse movement is also improved. """), cfg.URIOpt('html5proxy_base_url', default='http://127.0.0.1:6082/spice_auto.html', help=""" Location of the SPICE HTML5 console proxy. End user would use this URL to connect to the `nova-spicehtml5proxy`` service. This service will forward request to the console of an instance. 
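Because the host and port embedded in this URL must match where the proxy
actually listens, a quick consistency check could look like the following
(an illustrative sketch using hypothetical values)::

    from urllib.parse import urlsplit

    base_url = 'https://console.example.com:6082/spice_auto.html'
    parts = urlsplit(base_url)
    # The hostname must point at the node running nova-spicehtml5proxy and
    # the port must match [spice]html5proxy_port (6082 by default).
    assert parts.scheme in ('http', 'https')
    assert parts.port == 6082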
In order to use SPICE console, the service ``nova-spicehtml5proxy`` should be running. This service is typically launched on the controller node. Possible values: * Must be a valid URL of the form: ``http://host:port/spice_auto.html`` where host is the node running ``nova-spicehtml5proxy`` and the port is typically 6082. Consider not using default value as it is not well defined for any real deployment. Related options: * This option depends on ``html5proxy_host`` and ``html5proxy_port`` options. The access URL returned by the compute node must have the host and port where the ``nova-spicehtml5proxy`` service is listening. """), cfg.StrOpt('server_listen', default='127.0.0.1', help=""" The address where the SPICE server running on the instances should listen. Typically, the ``nova-spicehtml5proxy`` proxy client runs on the controller node and connects over the private network to this address on the compute node(s). Possible values: * IP address to listen on. """), cfg.StrOpt('server_proxyclient_address', default='127.0.0.1', help=""" The address used by ``nova-spicehtml5proxy`` client to connect to instance console. Typically, the ``nova-spicehtml5proxy`` proxy client runs on the controller node and connects over the private network to this address on the compute node(s). Possible values: * Any valid IP address on the compute node. Related options: * This option depends on the ``server_listen`` option. The proxy client must be able to access the address specified in ``server_listen`` using the value of this option. """), cfg.StrOpt('keymap', deprecated_for_removal=True, deprecated_since='18.0.0', deprecated_reason=""" Configuring this option forces QEMU to do keymap conversions. These conversions are lossy and can result in significant issues for users of non en-US keyboards. Refer to bug #1682020 for more information.""", help=""" A keyboard layout which is supported by the underlying hypervisor on this node. Possible values: * This is usually an 'IETF language tag' (default is 'en-us'). If you use QEMU as hypervisor, you should find the list of supported keyboard layouts at /usr/share/qemu/keymaps. """) ] ALL_OPTS.extend(CLI_OPTS) def register_opts(conf): conf.register_opts(ALL_OPTS, group=spice_opt_group) def register_cli_opts(conf): conf.register_cli_opts(CLI_OPTS, group=spice_opt_group) def list_opts(): return {spice_opt_group: ALL_OPTS} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/upgrade_levels.py0000664000175000017500000000773600000000000020046 0ustar00zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg upgrade_group = cfg.OptGroup('upgrade_levels', title='Upgrade levels Options', help=""" upgrade_levels options are used to set version cap for RPC messages sent between different nova services. By default all services send messages using the latest version they know about. 
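The effect of a version cap can be pictured as clamping the version a client
will emit. A conceptual sketch, assuming plain 'N.N' version strings
(release-name caps are ultimately mapped to such numeric versions)::

    def effective_rpc_version(client_latest, configured_cap=None):
        def as_tuple(version):
            return tuple(int(part) for part in version.split('.'))
        if configured_cap is None:
            # No cap configured: use the newest version the client knows.
            return client_latest
        # Never send anything newer than the configured cap.
        return min(client_latest, configured_cap, key=as_tuple)

    # e.g. effective_rpc_version('5.11', '5.0') -> '5.0'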
The compute upgrade level is an important part of rolling upgrades where old and new nova-compute services run side by side. The other options can largely be ignored, and are only kept to help with a possible future backport issue. """) # TODO(sneti): Add default=auto for compute upgrade_levels_opts = [ cfg.StrOpt('compute', help=""" Compute RPC API version cap. By default, we always send messages using the most recent version the client knows about. Where you have old and new compute services running, you should set this to the lowest deployed version. This is to guarantee that all services never send messages that one of the compute nodes can't understand. Note that we only support upgrading from release N to release N+1. Set this option to "auto" if you want to let the compute RPC module automatically determine what version to use based on the service versions in the deployment. Possible values: * By default send the latest version the client knows about * 'auto': Automatically determines what version to use based on the service versions in the deployment. * A string representing a version number in the format 'N.N'; for example, possible values might be '1.12' or '2.0'. * An OpenStack release name, in lower case, such as 'mitaka' or 'liberty'. """), cfg.StrOpt("cert", deprecated_for_removal=True, deprecated_since='18.0.0', deprecated_reason=""" The nova-cert service was removed in 16.0.0 (Pike) so this option is no longer used. """, help=""" Cert RPC API version cap. Possible values: * By default send the latest version the client knows about * A string representing a version number in the format 'N.N'; for example, possible values might be '1.12' or '2.0'. * An OpenStack release name, in lower case, such as 'mitaka' or 'liberty'. """), cfg.StrOpt("scheduler", help=""" Scheduler RPC API version cap. Possible values: * By default send the latest version the client knows about * A string representing a version number in the format 'N.N'; for example, possible values might be '1.12' or '2.0'. * An OpenStack release name, in lower case, such as 'mitaka' or 'liberty'. """), cfg.StrOpt('conductor', help=""" Conductor RPC API version cap. Possible values: * By default send the latest version the client knows about * A string representing a version number in the format 'N.N'; for example, possible values might be '1.12' or '2.0'. * An OpenStack release name, in lower case, such as 'mitaka' or 'liberty'. """), cfg.StrOpt('baseapi', help=""" Base API RPC API version cap. Possible values: * By default send the latest version the client knows about * A string representing a version number in the format 'N.N'; for example, possible values might be '1.12' or '2.0'. * An OpenStack release name, in lower case, such as 'mitaka' or 'liberty'. """) ] def register_opts(conf): conf.register_group(upgrade_group) conf.register_opts(upgrade_levels_opts, group=upgrade_group) def list_opts(): return {upgrade_group: upgrade_levels_opts} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/utils.py0000664000175000017500000000763000000000000016176 0ustar00zuulzuul00000000000000# Copyright 2017 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Common utilities for conf providers. This module does not provide any actual conf options. """ from keystoneauth1 import loading as ks_loading from oslo_config import cfg _ADAPTER_VERSION_OPTS = ('version', 'min_version', 'max_version') def get_ksa_adapter_opts(default_service_type, deprecated_opts=None): """Get auth, Session, and Adapter conf options from keystonauth1.loading. :param default_service_type: Default for the service_type conf option on the Adapter. :param deprecated_opts: dict of deprecated opts to register with the ksa Adapter opts. Works the same as the deprecated_opts kwarg to: keystoneauth1.loading.session.Session.register_conf_options :return: List of cfg.Opts. """ opts = ks_loading.get_adapter_conf_options(include_deprecated=False, deprecated_opts=deprecated_opts) for opt in opts[:]: # Remove version-related opts. Required/supported versions are # something the code knows about, not the operator. if opt.dest in _ADAPTER_VERSION_OPTS: opts.remove(opt) # Override defaults that make sense for nova cfg.set_defaults(opts, valid_interfaces=['internal', 'public'], service_type=default_service_type) return opts def _dummy_opt(name): # A config option that can't be set by the user, so it behaves as if it's # ignored; but consuming code may expect it to be present in a conf group. return cfg.Opt(name, type=lambda x: None) def register_ksa_opts(conf, group, default_service_type, include_auth=True, deprecated_opts=None): """Register keystoneauth auth, Session, and Adapter opts. :param conf: oslo_config.cfg.CONF in which to register the options :param group: Conf group, or string name thereof, in which to register the options. :param default_service_type: Default for the service_type conf option on the Adapter. :param include_auth: For service types where Nova is acting on behalf of the user, auth should come from the user context. In those cases, set this arg to False to avoid registering ksa auth options. :param deprecated_opts: dict of deprecated opts to register with the ksa Session or Adapter opts. See docstring for the deprecated_opts param of: keystoneauth1.loading.session.Session.register_conf_options """ # ksa register methods need the group name as a string. oslo doesn't care. group = getattr(group, 'name', group) ks_loading.register_session_conf_options( conf, group, deprecated_opts=deprecated_opts) if include_auth: ks_loading.register_auth_conf_options(conf, group) conf.register_opts(get_ksa_adapter_opts( default_service_type, deprecated_opts=deprecated_opts), group=group) # Have to register dummies for the version-related opts we removed for name in _ADAPTER_VERSION_OPTS: conf.register_opt(_dummy_opt(name), group=group) # NOTE(efried): Required for docs build. def list_opts(): return {} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/vendordata.py0000664000175000017500000000306500000000000017163 0ustar00zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. 
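The helpers above are meant to be called from the per-service conf modules. A minimal sketch of such a caller follows; the group name and service type are invented purely for illustration.

from oslo_config import cfg

from nova.conf import utils as confutils

example_group = cfg.OptGroup('example_service', title='Example service options')

def register_opts(conf):
    conf.register_group(example_group)
    # Registers keystoneauth session, auth and adapter options under
    # [example_service], with service_type defaulting to 'example' and
    # valid_interfaces to ['internal', 'public'] as set in
    # get_ksa_adapter_opts().
    confutils.register_ksa_opts(conf, example_group, 'example')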
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from keystoneauth1 import loading as ks_loading from oslo_config import cfg vendordata_group = cfg.OptGroup('vendordata_dynamic_auth', title='Vendordata dynamic fetch auth options', help=""" Options within this group control the authentication of the vendordata subsystem of the metadata API server (and config drive) with external systems. """) def register_opts(conf): conf.register_group(vendordata_group) ks_loading.register_session_conf_options(conf, vendordata_group.name) ks_loading.register_auth_conf_options(conf, vendordata_group.name) def list_opts(): return { vendordata_group: ( ks_loading.get_session_conf_options() + ks_loading.get_auth_common_conf_options() + ks_loading.get_auth_plugin_conf_options('password') + ks_loading.get_auth_plugin_conf_options('v2password') + ks_loading.get_auth_plugin_conf_options('v3password') ) } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/vmware.py0000664000175000017500000002200700000000000016332 0ustar00zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg vmware_group = cfg.OptGroup('vmware', title='VMWare Options', help=""" Related options: Following options must be set in order to launch VMware-based virtual machines. * compute_driver: Must use vmwareapi.VMwareVCDriver. * vmware.host_username * vmware.host_password * vmware.cluster_name """) vmwareapi_vif_opts = [ cfg.StrOpt('integration_bridge', help=""" This option should be configured only when using the NSX-MH Neutron plugin. This is the name of the integration bridge on the ESXi server or host. This should not be set for any other Neutron plugin. Hence the default value is not set. Possible values: * Any valid string representing the name of the integration bridge """), ] vmware_utils_opts = [ cfg.IntOpt('console_delay_seconds', min=0, help=""" Set this value if affected by an increased network latency causing repeated characters when typing in a remote console. """), # NOTE(takashin): 'serial_port_service_uri' can be non URI format. # See https://opendev.org/x/vmware-vspc/src/branch/master/README.rst cfg.StrOpt('serial_port_service_uri', help=""" Identifies the remote system where the serial port traffic will be sent. This option adds a virtual serial port which sends console output to a configurable service URI. 
At the service URI address there will be virtual serial port concentrator that will collect console logs. If this is not set, no serial ports will be added to the created VMs. Possible values: * Any valid URI """), cfg.URIOpt('serial_port_proxy_uri', schemes=['telnet', 'telnets'], help=""" Identifies a proxy service that provides network access to the serial_port_service_uri. Possible values: * Any valid URI (The scheme is 'telnet' or 'telnets'.) Related options: This option is ignored if serial_port_service_uri is not specified. * serial_port_service_uri """), cfg.StrOpt('serial_log_dir', default='/opt/vmware/vspc', help=""" Specifies the directory where the Virtual Serial Port Concentrator is storing console log files. It should match the 'serial_log_dir' config value of VSPC. """), ] vmwareapi_opts = [ cfg.HostAddressOpt('host_ip', help=""" Hostname or IP address for connection to VMware vCenter host."""), cfg.PortOpt('host_port', default=443, help="Port for connection to VMware vCenter host."), cfg.StrOpt('host_username', help="Username for connection to VMware vCenter host."), cfg.StrOpt('host_password', secret=True, help="Password for connection to VMware vCenter host."), cfg.StrOpt('ca_file', help=""" Specifies the CA bundle file to be used in verifying the vCenter server certificate. """), cfg.BoolOpt('insecure', default=False, help=""" If true, the vCenter server certificate is not verified. If false, then the default CA truststore is used for verification. Related options: * ca_file: This option is ignored if "ca_file" is set. """), cfg.StrOpt('cluster_name', help="Name of a VMware Cluster ComputeResource."), cfg.StrOpt('datastore_regex', help=""" Regular expression pattern to match the name of datastore. The datastore_regex setting specifies the datastores to use with Compute. For example, datastore_regex="nas.*" selects all the data stores that have a name starting with "nas". NOTE: If no regex is given, it just picks the datastore with the most freespace. Possible values: * Any matching regular expression to a datastore must be given """), cfg.FloatOpt('task_poll_interval', default=0.5, help=""" Time interval in seconds to poll remote tasks invoked on VMware VC server. """), cfg.IntOpt('api_retry_count', min=0, default=10, help=""" Number of times VMware vCenter server API must be retried on connection failures, e.g. socket error, etc. """), cfg.PortOpt('vnc_port', default=5900, help=""" This option specifies VNC starting port. Every VM created by ESX host has an option of enabling VNC client for remote connection. Above option 'vnc_port' helps you to set default starting port for the VNC client. Possible values: * Any valid port number within 5900 -(5900 + vnc_port_total) Related options: Below options should be set to enable VNC client. * vnc.enabled = True * vnc_port_total """), cfg.IntOpt('vnc_port_total', min=0, default=10000, help=""" Total number of VNC ports. """), cfg.StrOpt('vnc_keymap', default='en-us', help=""" Keymap for VNC. The keyboard mapping (keymap) determines which keyboard layout a VNC session should use by default. Possible values: * A keyboard layout which is supported by the underlying hypervisor on this node. This is usually an 'IETF language tag' (for example 'en-us'). """), cfg.BoolOpt('use_linked_clone', default=True, help=""" This option enables/disables the use of linked clone. The ESX hypervisor requires a copy of the VMDK file in order to boot up a virtual machine. 
The compute driver must download the VMDK via HTTP from the OpenStack Image service to a datastore that is visible to the hypervisor and cache it. Subsequent virtual machines that need the VMDK use the cached version and don't have to copy the file again from the OpenStack Image service. If set to false, even with a cached VMDK, there is still a copy operation from the cache location to the hypervisor file directory in the shared datastore. If set to true, the above copy operation is avoided as it creates copy of the virtual machine that shares virtual disks with its parent VM. """), cfg.IntOpt('connection_pool_size', min=10, default=10, help=""" This option sets the http connection pool size The connection pool size is the maximum number of connections from nova to vSphere. It should only be increased if there are warnings indicating that the connection pool is full, otherwise, the default should suffice. """) ] spbm_opts = [ cfg.BoolOpt('pbm_enabled', default=False, help=""" This option enables or disables storage policy based placement of instances. Related options: * pbm_default_policy """), cfg.StrOpt('pbm_wsdl_location', help=""" This option specifies the PBM service WSDL file location URL. Setting this will disable storage policy based placement of instances. Possible values: * Any valid file path e.g file:///opt/SDK/spbm/wsdl/pbmService.wsdl """), cfg.StrOpt('pbm_default_policy', help=""" This option specifies the default policy to be used. If pbm_enabled is set and there is no defined storage policy for the specific request, then this policy will be used. Possible values: * Any valid storage policy such as VSAN default storage policy Related options: * pbm_enabled """), ] vmops_opts = [ cfg.IntOpt('maximum_objects', min=0, default=100, help=""" This option specifies the limit on the maximum number of objects to return in a single result. A positive value will cause the operation to suspend the retrieval when the count of objects reaches the specified limit. The server may still limit the count to something less than the configured value. Any remaining objects may be retrieved with additional requests. """), cfg.StrOpt('cache_prefix', help=""" This option adds a prefix to the folder where cached images are stored This is not the full path - just a folder prefix. This should only be used when a datastore cache is shared between compute nodes. Note: This should only be used when the compute nodes are running on same host or they have a shared file system. Possible values: * Any string representing the cache prefix to the folder """) ] ALL_VMWARE_OPTS = (vmwareapi_vif_opts + vmware_utils_opts + vmwareapi_opts + spbm_opts + vmops_opts) def register_opts(conf): conf.register_group(vmware_group) conf.register_opts(ALL_VMWARE_OPTS, group=vmware_group) def list_opts(): return {vmware_group: ALL_VMWARE_OPTS} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/conf/vnc.py0000664000175000017500000001557300000000000015631 0ustar00zuulzuul00000000000000# Copyright (c) 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
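The ``datastore_regex`` option in the [vmware] section above is easiest to understand with a small example. This is not the driver's real selection code, only an illustration of how a pattern such as "nas.*" narrows the candidate datastores.

import re

def select_datastores(datastore_names, datastore_regex=None):
    """Return the datastores allowed by the configured regex."""
    if not datastore_regex:
        # With no regex configured, the help text says the datastore with the
        # most free space is picked; here we simply return all candidates.
        return list(datastore_names)
    pattern = re.compile(datastore_regex)
    return [name for name in datastore_names if pattern.match(name)]

# select_datastores(['nas01', 'nas02', 'ssd01'], 'nas.*') -> ['nas01', 'nas02']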
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from oslo_config import types vnc_group = cfg.OptGroup( 'vnc', title='VNC options', help=""" Virtual Network Computer (VNC) can be used to provide remote desktop console access to instances for tenants and/or administrators.""") ALL_OPTS = [ cfg.BoolOpt( 'enabled', default=True, deprecated_group='DEFAULT', deprecated_name='vnc_enabled', help=""" Enable VNC related features. Guests will get created with graphical devices to support this. Clients (for example Horizon) can then establish a VNC connection to the guest. """), cfg.StrOpt( 'keymap', deprecated_group='DEFAULT', deprecated_name='vnc_keymap', deprecated_for_removal=True, deprecated_since='18.0.0', deprecated_reason=""" Configuring this option forces QEMU to do keymap conversions. These conversions are lossy and can result in significant issues for users of non en-US keyboards. You should instead use a VNC client that supports Extended Key Event messages, such as noVNC 1.0.0. Refer to bug #1682020 for more information.""", help=""" Keymap for VNC. The keyboard mapping (keymap) determines which keyboard layout a VNC session should use by default. Possible values: * A keyboard layout which is supported by the underlying hypervisor on this node. This is usually an 'IETF language tag' (for example 'en-us'). If you use QEMU as hypervisor, you should find the list of supported keyboard layouts at ``/usr/share/qemu/keymaps``. """), cfg.HostAddressOpt( 'server_listen', default='127.0.0.1', deprecated_opts=[ cfg.DeprecatedOpt('vncserver_listen', group='DEFAULT'), cfg.DeprecatedOpt('vncserver_listen', group='vnc'), ], help=""" The IP address or hostname on which an instance should listen to for incoming VNC connection requests on this node. """), cfg.HostAddressOpt( 'server_proxyclient_address', default='127.0.0.1', deprecated_opts=[ cfg.DeprecatedOpt('vncserver_proxyclient_address', group='DEFAULT'), cfg.DeprecatedOpt('vncserver_proxyclient_address', group='vnc'), ], help=""" Private, internal IP address or hostname of VNC console proxy. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. This option sets the private address to which proxy clients, such as ``nova-novncproxy``, should connect to. """), cfg.URIOpt( 'novncproxy_base_url', default='http://127.0.0.1:6080/vnc_auto.html', deprecated_group='DEFAULT', help=""" Public address of noVNC VNC console proxy. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client. This option sets the public base URL to which client systems will connect. noVNC clients can use this address to connect to the noVNC instance and, by extension, the VNC sessions. If using noVNC >= 1.0.0, you should use ``vnc_lite.html`` instead of ``vnc_auto.html``. Related options: * novncproxy_host * novncproxy_port """), ] CLI_OPTS = [ cfg.StrOpt( 'novncproxy_host', default='0.0.0.0', deprecated_group='DEFAULT', help=""" IP address that the noVNC console proxy should bind to. 
The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client. This option sets the private address to which the noVNC console proxy service should bind. Related options: * novncproxy_port * novncproxy_base_url """), cfg.PortOpt( 'novncproxy_port', default=6080, deprecated_group='DEFAULT', help=""" Port that the noVNC console proxy should bind to. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client. This option sets the private port to which the noVNC console proxy service should bind. Related options: * novncproxy_host * novncproxy_base_url """), cfg.ListOpt( 'auth_schemes', item_type=types.String(choices=( ('none', 'Allow connection without authentication'), ('vencrypt', 'Use VeNCrypt authentication scheme'), )), default=['none'], help=""" The authentication schemes to use with the compute node. Control what RFB authentication schemes are permitted for connections between the proxy and the compute host. If multiple schemes are enabled, the first matching scheme will be used, thus the strongest schemes should be listed first. Related options: * ``[vnc]vencrypt_client_key``, ``[vnc]vencrypt_client_cert``: must also be set """), cfg.StrOpt( 'vencrypt_client_key', help="""The path to the client key file (for x509) The fully qualified path to a PEM file containing the private key which the VNC proxy server presents to the compute node during VNC authentication. Related options: * ``vnc.auth_schemes``: must include ``vencrypt`` * ``vnc.vencrypt_client_cert``: must also be set """), cfg.StrOpt( 'vencrypt_client_cert', help="""The path to the client certificate PEM file (for x509) The fully qualified path to a PEM file containing the x509 certificate which the VNC proxy server presents to the compute node during VNC authentication. Related options: * ``vnc.auth_schemes``: must include ``vencrypt`` * ``vnc.vencrypt_client_key``: must also be set """), cfg.StrOpt( 'vencrypt_ca_certs', help="""The path to the CA certificate PEM file The fully qualified path to a PEM file containing one or more x509 certificates for the certificate authorities used by the compute node VNC server. Related options: * ``vnc.auth_schemes``: must include ``vencrypt`` """), ] ALL_OPTS.extend(CLI_OPTS) def register_opts(conf): conf.register_group(vnc_group) conf.register_opts(ALL_OPTS, group=vnc_group) def register_cli_opts(conf): conf.register_cli_opts(CLI_OPTS, group=vnc_group) def list_opts(): return {vnc_group: ALL_OPTS} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/conf/workarounds.py0000664000175000017500000003326300000000000017415 0ustar00zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
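Because several of the [vnc] VeNCrypt options above only make sense together, a small sanity-check sketch (hypothetical, not part of nova) shows the documented interdependency: when 'vencrypt' is enabled, the client key and certificate must be configured, and a CA bundle is normally supplied as well.

from oslo_config import cfg

CONF = cfg.CONF

def check_vencrypt_options():
    if 'vencrypt' not in CONF.vnc.auth_schemes:
        return
    for name in ('vencrypt_client_key', 'vencrypt_client_cert'):
        if not getattr(CONF.vnc, name):
            raise cfg.RequiredOptError(name, group='vnc')
    # vencrypt_ca_certs is not strictly required by the help text, but the
    # proxy cannot verify the compute node's server certificate without it.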
"""The 'workarounds' group is for very specific reasons. If you're: - Working around an issue in a system tool (e.g. libvirt or qemu) where the fix is in flight/discussed in that community. - The tool can be/is fixed in some distributions and rather than patch the code those distributions can trivially set a config option to get the "correct" behavior. Then this is a good place for your workaround. .. warning:: Please use with care! Document the BugID that your workaround is paired with. """ from oslo_config import cfg workarounds_group = cfg.OptGroup( 'workarounds', title='Workaround Options', help=""" A collection of workarounds used to mitigate bugs or issues found in system tools (e.g. Libvirt or QEMU) or Nova itself under certain conditions. These should only be enabled in exceptional circumstances. All options are linked against bug IDs, where more information on the issue can be found. """) ALL_OPTS = [ cfg.BoolOpt( 'disable_rootwrap', default=False, help=""" Use sudo instead of rootwrap. Allow fallback to sudo for performance reasons. For more information, refer to the bug report: https://bugs.launchpad.net/nova/+bug/1415106 Possible values: * True: Use sudo instead of rootwrap * False: Use rootwrap as usual Interdependencies to other options: * Any options that affect 'rootwrap' will be ignored. """), cfg.BoolOpt( 'disable_libvirt_livesnapshot', default=False, deprecated_for_removal=True, deprecated_since='19.0.0', deprecated_reason=""" This option was added to work around issues with libvirt 1.2.2. We no longer support this version of libvirt, which means this workaround is no longer necessary. It will be removed in a future release. """, help=""" Disable live snapshots when using the libvirt driver. Live snapshots allow the snapshot of the disk to happen without an interruption to the guest, using coordination with a guest agent to quiesce the filesystem. When using libvirt 1.2.2 live snapshots fail intermittently under load (likely related to concurrent libvirt/qemu operations). This config option provides a mechanism to disable live snapshot, in favor of cold snapshot, while this is resolved. Cold snapshot causes an instance outage while the guest is going through the snapshotting process. For more information, refer to the bug report: https://bugs.launchpad.net/nova/+bug/1334398 Possible values: * True: Live snapshot is disabled when using libvirt * False: Live snapshots are always used when snapshotting (as long as there is a new enough libvirt and the backend storage supports it) """), cfg.BoolOpt( 'handle_virt_lifecycle_events', default=True, help=""" Enable handling of events emitted from compute drivers. Many compute drivers emit lifecycle events, which are events that occur when, for example, an instance is starting or stopping. If the instance is going through task state changes due to an API operation, like resize, the events are ignored. This is an advanced feature which allows the hypervisor to signal to the compute service that an unexpected state change has occurred in an instance and that the instance can be shutdown automatically. Unfortunately, this can race in some conditions, for example in reboot operations or when the compute service or when host is rebooted (planned or due to an outage). If such races are common, then it is advisable to disable this feature. Care should be taken when this feature is disabled and 'sync_power_state_interval' is set to a negative value. 
In this case, any instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually. For more information, refer to the bug report: https://bugs.launchpad.net/bugs/1444630 Interdependencies to other options: * If ``sync_power_state_interval`` is negative and this feature is disabled, then instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually. """), cfg.BoolOpt( 'disable_group_policy_check_upcall', default=False, help=""" Disable the server group policy check upcall in compute. In order to detect races with server group affinity policy, the compute service attempts to validate that the policy was not violated by the scheduler. It does this by making an upcall to the API database to list the instances in the server group for one that it is booting, which violates our api/cell isolation goals. Eventually this will be solved by proper affinity guarantees in the scheduler and placement service, but until then, this late check is needed to ensure proper affinity policy. Operators that desire api/cell isolation over this check should enable this flag, which will avoid making that upcall from compute. Related options: * [filter_scheduler]/track_instance_changes also relies on upcalls from the compute service to the scheduler service. """), cfg.BoolOpt( 'enable_numa_live_migration', default=False, deprecated_for_removal=True, deprecated_since='20.0.0', deprecated_reason="""This option was added to mitigate known issues when live migrating instances with a NUMA topology with the libvirt driver. Those issues are resolved in Train. Clouds using the libvirt driver and fully upgraded to Train support NUMA-aware live migration. This option will be removed in a future release. """, help=""" Enable live migration of instances with NUMA topologies. Live migration of instances with NUMA topologies when using the libvirt driver is only supported in deployments that have been fully upgraded to Train. In previous versions, or in mixed Stein/Train deployments with a rolling upgrade in progress, live migration of instances with NUMA topologies is disabled by default when using the libvirt driver. This includes live migration of instances with CPU pinning or hugepages. CPU pinning and huge page information for such instances is not currently re-calculated, as noted in `bug #1289064`_. This means that if instances were already present on the destination host, the migrated instance could be placed on the same dedicated cores as these instances or use hugepages allocated for another instance. Alternately, if the host platforms were not homogeneous, the instance could be assigned to non-existent cores or be inadvertently split across host NUMA nodes. Despite these known issues, there may be cases where live migration is necessary. By enabling this option, operators that are aware of the issues and are willing to manually work around them can enable live migration support for these instances. Related options: * ``compute_driver``: Only the libvirt driver is affected. .. _bug #1289064: https://bugs.launchpad.net/nova/+bug/1289064 """), cfg.BoolOpt( 'ensure_libvirt_rbd_instance_dir_cleanup', default=False, help=""" Ensure the instance directory is removed during clean up when using rbd. When enabled this workaround will ensure that the instance directory is always removed during cleanup on hosts using ``[libvirt]/images_type=rbd``. 
This avoids the following bugs with evacuation and revert resize clean up that lead to the instance directory remaining on the host: https://bugs.launchpad.net/nova/+bug/1414895 https://bugs.launchpad.net/nova/+bug/1761062 Both of these bugs can then result in ``DestinationDiskExists`` errors being raised if the instances ever attempt to return to the host. .. warning:: Operators will need to ensure that the instance directory itself, specified by ``[DEFAULT]/instances_path``, is not shared between computes before enabling this workaround otherwise the console.log, kernels, ramdisks and any additional files being used by the running instance will be lost. Related options: * ``compute_driver`` (libvirt) * ``[libvirt]/images_type`` (rbd) * ``instances_path`` """), cfg.BoolOpt( 'disable_fallback_pcpu_query', default=False, deprecated_for_removal=True, deprecated_since='20.0.0', help=""" Disable fallback request for VCPU allocations when using pinned instances. Starting in Train, compute nodes using the libvirt virt driver can report ``PCPU`` inventory and will use this for pinned instances. The scheduler will automatically translate requests using the legacy CPU pinning-related flavor extra specs, ``hw:cpu_policy`` and ``hw:cpu_thread_policy``, their image metadata property equivalents, and the emulator threads pinning flavor extra spec, ``hw:emulator_threads_policy``, to new placement requests. However, compute nodes require additional configuration in order to report ``PCPU`` inventory and this configuration may not be present immediately after an upgrade. To ensure pinned instances can be created without this additional configuration, the scheduler will make a second request to placement for old-style ``VCPU``-based allocations and fallback to these allocation candidates if necessary. This has a slight performance impact and is not necessary on new or upgraded deployments where the new configuration has been set on all hosts. By setting this option, the second lookup is disabled and the scheduler will only request ``PCPU``-based allocations. """), cfg.BoolOpt( 'never_download_image_if_on_rbd', default=False, help=""" When booting from an image on a ceph-backed compute node, if the image does not already reside on the ceph cluster (as would be the case if glance is also using the same cluster), nova will download the image from glance and upload it to ceph itself. If using multiple ceph clusters, this may cause nova to unintentionally duplicate the image in a non-COW-able way in the local ceph deployment, wasting space. For more information, refer to the bug report: https://bugs.launchpad.net/nova/+bug/1858877 Enabling this option will cause nova to *refuse* to boot an instance if it would require downloading the image from glance and uploading it to ceph itself. Related options: * ``compute_driver`` (libvirt) * ``[libvirt]/images_type`` (rbd) """), # TODO(lyarwood): Remove this workaround in the W release once all # supported distros have rebased to a version of libgcrypt that does not # have the performance issues listed below. cfg.BoolOpt( 'disable_native_luksv1', default=False, help=""" When attaching encrypted LUKSv1 Cinder volumes to instances the Libvirt driver configures the encrypted disks to be natively decrypted by QEMU. A performance issue has been discovered in the libgcrypt library used by QEMU that serverly limits the I/O performance in this scenario. 
For more information please refer to the following bug report: RFE: hardware accelerated AES-XTS mode https://bugzilla.redhat.com/show_bug.cgi?id=1762765 Enabling this workaround option will cause Nova to use the legacy dm-crypt based os-brick encryptor to decrypt the LUKSv1 volume. Note that enabling this option while using volumes that do not provide a host block device such as Ceph will result in a failure to boot from or attach the volume to an instance. See the ``[workarounds]/rbd_block_device`` option for a way to avoid this for RBD. Related options: * ``compute_driver`` (libvirt) * ``rbd_block_device`` (workarounds) """), # TODO(lyarwood): Remove this workaround in the W release when the # above disable_native_luksv1 configurable is removed. cfg.BoolOpt('rbd_volume_local_attach', default=False, help=""" Attach RBD Cinder volumes to the compute as host block devices. When enabled this option instructs os-brick to connect RBD volumes locally on the compute host as block devices instead of natively through QEMU. This workaround does not currently support extending attached volumes. This can be used with the disable_native_luksv1 workaround configuration option to avoid the recently discovered performance issues found within the libgcrypt library. This workaround is temporary and will be removed during the W release once all impacted distributions have been able to update their versions of the libgcrypt library. Related options: * ``compute_driver`` (libvirt) * ``disable_qemu_native_luksv1`` (workarounds) """), cfg.BoolOpt('reserve_disk_resource_for_image_cache', default=False, help=""" If it is set to True then the libvirt driver will reserve DISK_GB resource for the images stored in the image cache. If the :oslo.config:option:`DEFAULT.instances_path` is on different disk partition than the image cache directory then the driver will not reserve resource for the cache. Such disk reservation is done by a periodic task in the resource tracker that runs every :oslo.config:option:`update_resources_interval` seconds. So the reservation is not updated immediately when an image is cached. Related options: * :oslo.config:option:`DEFAULT.instances_path` * :oslo.config:option:`image_cache.subdirectory_name` * :oslo.config:option:`update_resources_interval` """), ] def register_opts(conf): conf.register_group(workarounds_group) conf.register_opts(ALL_OPTS, group=workarounds_group) def list_opts(): return {workarounds_group: ALL_OPTS} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/wsgi.py0000664000175000017500000001537600000000000016015 0ustar00zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg wsgi_group = cfg.OptGroup( 'wsgi', title='WSGI Options', help=''' Options under this group are used to configure WSGI (Web Server Gateway Interface). WSGI is used to serve API requests. 
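The interplay between ``disable_native_luksv1`` and ``rbd_volume_local_attach`` described in the workarounds section above can be summarised with a short sketch. This is not the libvirt driver's actual code, just the decision the help text implies when attaching an encrypted LUKSv1 volume.

from oslo_config import cfg

CONF = cfg.CONF

def use_qemu_native_luks(volume_is_rbd):
    if not CONF.workarounds.disable_native_luksv1:
        # Default: let QEMU decrypt the LUKSv1 volume natively.
        return True
    if volume_is_rbd and not CONF.workarounds.rbd_volume_local_attach:
        # Without a host block device (plain RBD attach), falling back to the
        # dm-crypt based os-brick encryptor cannot work, as warned above.
        raise RuntimeError('disable_native_luksv1 also requires '
                           'rbd_volume_local_attach for RBD-backed volumes')
    # Use the legacy dm-crypt based os-brick encryptor instead.
    return False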
''', ) ALL_OPTS = [ cfg.StrOpt( 'api_paste_config', default="api-paste.ini", deprecated_group='DEFAULT', help=""" This option represents a file name for the paste.deploy config for nova-api. Possible values: * A string representing file name for the paste.deploy config. """), # TODO(sfinucan): It is not possible to rename this to 'log_format' # yet, as doing so would cause a conflict if '[DEFAULT] log_format' # were used. When 'deprecated_group' is removed after Ocata, this # should be changed. cfg.StrOpt( 'wsgi_log_format', default='%(client_ip)s "%(request_line)s" status: %(status_code)s' ' len: %(body_length)s time: %(wall_seconds).7f', deprecated_group='DEFAULT', deprecated_for_removal=True, deprecated_since='16.0.0', deprecated_reason=""" This option only works when running nova-api under eventlet, and encodes very eventlet specific pieces of information. Starting in Pike the preferred model for running nova-api is under uwsgi or apache mod_wsgi. """, help=""" It represents a python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds. This option is used for building custom request loglines when running nova-api under eventlet. If used under uwsgi or apache, this option has no effect. Possible values: * '%(client_ip)s "%(request_line)s" status: %(status_code)s ' 'len: %(body_length)s time: %(wall_seconds).7f' (default) * Any formatted string formed by specific values. """), cfg.StrOpt( 'secure_proxy_ssl_header', deprecated_group='DEFAULT', help=""" This option specifies the HTTP header used to determine the protocol scheme for the original request, even if it was removed by a SSL terminating proxy. Possible values: * None (default) - the request scheme is not influenced by any HTTP headers * Valid HTTP header, like ``HTTP_X_FORWARDED_PROTO`` WARNING: Do not set this unless you know what you are doing. Make sure ALL of the following are true before setting this (assuming the values from the example above): * Your API is behind a proxy. * Your proxy strips the X-Forwarded-Proto header from all incoming requests. In other words, if end users include that header in their requests, the proxy will discard it. * Your proxy sets the X-Forwarded-Proto header and sends it to API, but only for requests that originally come in via HTTPS. If any of those are not true, you should keep this setting set to None. """), cfg.StrOpt( 'ssl_ca_file', deprecated_group='DEFAULT', help=""" This option allows setting path to the CA certificate file that should be used to verify connecting clients. Possible values: * String representing path to the CA certificate file. Related options: * enabled_ssl_apis """), cfg.StrOpt( 'ssl_cert_file', deprecated_group='DEFAULT', help=""" This option allows setting path to the SSL certificate of API server. Possible values: * String representing path to the SSL certificate. Related options: * enabled_ssl_apis """), cfg.StrOpt( 'ssl_key_file', deprecated_group='DEFAULT', help=""" This option specifies the path to the file where SSL private key of API server is stored when SSL is in effect. Possible values: * String representing path to the SSL private key. Related options: * enabled_ssl_apis """), cfg.IntOpt( 'tcp_keepidle', min=0, default=600, deprecated_group='DEFAULT', help=""" This option sets the value of TCP_KEEPIDLE in seconds for each server socket. It specifies the duration of time to keep connection active. 
TCP generates a KEEPALIVE transmission for an application that requests to keep connection active. Not supported on OS X. Related options: * keep_alive """), cfg.IntOpt( 'default_pool_size', min=0, default=1000, deprecated_group='DEFAULT', deprecated_name='wsgi_default_pool_size', help=""" This option specifies the size of the pool of greenthreads used by wsgi. It is possible to limit the number of concurrent connections using this option. """), cfg.IntOpt( 'max_header_line', min=0, default=16384, deprecated_group='DEFAULT', help=""" This option specifies the maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). Since TCP is a stream based protocol, in order to reuse a connection, the HTTP has to have a way to indicate the end of the previous response and beginning of the next. Hence, in a keep_alive case, all messages must have a self-defined message length. """), cfg.BoolOpt( 'keep_alive', default=True, deprecated_group='DEFAULT', deprecated_name='wsgi_keep_alive', help=""" This option allows using the same TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new one for every single request/response pair. HTTP keep-alive indicates HTTP connection reuse. Possible values: * True : reuse HTTP connection. * False : closes the client socket connection explicitly. Related options: * tcp_keepidle """), cfg.IntOpt( 'client_socket_timeout', min=0, default=900, deprecated_group='DEFAULT', help=""" This option specifies the timeout for client connections' socket operations. If an incoming connection is idle for this number of seconds it will be closed. It indicates timeout on individual read/writes on the socket connection. To wait forever set to 0. """), ] def register_opts(conf): conf.register_group(wsgi_group) conf.register_opts(ALL_OPTS, group=wsgi_group) def list_opts(): return {wsgi_group: ALL_OPTS} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/conf/xenserver.py0000664000175000017500000004462000000000000017057 0ustar00zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import socket from oslo_config import cfg from oslo_utils import units xenserver_group = cfg.OptGroup('xenserver', title='Xenserver Options', help=""" .. warning:: The xenapi driver is deprecated and may be removed in a future release. The driver is not tested by the OpenStack project nor does it have clear maintainer(s) and thus its quality can not be ensured. If you are using the driver in production please let us know in freenode IRC and/or the openstack-discuss mailing list. XenServer options are used when the compute_driver is set to use XenServer (compute_driver=xenapi.XenAPIDriver). Must specify connection_url, connection_password and ovs_integration_bridge to use compute_driver=xenapi.XenAPIDriver. 
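For the [wsgi] options above, the ``api_paste_config`` file is what actually assembles the API pipeline. A hedged sketch of loading it with paste.deploy follows; the application name and path are assumptions, not values taken from this file.

from paste import deploy

def load_api_app(paste_config='/etc/nova/api-paste.ini', name='osapi_compute'):
    # paste.deploy parses the INI file and builds the WSGI application for
    # the named section, which the API server then serves.
    return deploy.loadapp('config:%s' % paste_config, name=name)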
""") xenapi_agent_opts = [ cfg.IntOpt('agent_timeout', default=30, min=0, help=""" Number of seconds to wait for agent's reply to a request. Nova configures/performs certain administrative actions on a server with the help of an agent that's installed on the server. The communication between Nova and the agent is achieved via sharing messages, called records, over xenstore, a shared storage across all the domains on a Xenserver host. Operations performed by the agent on behalf of nova are: 'version',' key_init', 'password','resetnetwork','inject_file', and 'agentupdate'. To perform one of the above operations, the xapi 'agent' plugin writes the command and its associated parameters to a certain location known to the domain and awaits response. On being notified of the message, the agent performs appropriate actions on the server and writes the result back to xenstore. This result is then read by the xapi 'agent' plugin to determine the success/failure of the operation. This config option determines how long the xapi 'agent' plugin shall wait to read the response off of xenstore for a given request/command. If the agent on the instance fails to write the result in this time period, the operation is considered to have timed out. Related options: * ``agent_version_timeout`` * ``agent_resetnetwork_timeout`` """), cfg.IntOpt('agent_version_timeout', default=300, min=0, help=""" Number of seconds to wait for agent't reply to version request. This indicates the amount of time xapi 'agent' plugin waits for the agent to respond to the 'version' request specifically. The generic timeout for agent communication ``agent_timeout`` is ignored in this case. During the build process the 'version' request is used to determine if the agent is available/operational to perform other requests such as 'resetnetwork', 'password', 'key_init' and 'inject_file'. If the 'version' call fails, the other configuration is skipped. So, this configuration option can also be interpreted as time in which agent is expected to be fully operational. """), cfg.IntOpt('agent_resetnetwork_timeout', default=60, min=0, help=""" Number of seconds to wait for agent's reply to resetnetwork request. This indicates the amount of time xapi 'agent' plugin waits for the agent to respond to the 'resetnetwork' request specifically. The generic timeout for agent communication ``agent_timeout`` is ignored in this case. """), cfg.StrOpt('agent_path', default='usr/sbin/xe-update-networking', help=""" Path to locate guest agent on the server. Specifies the path in which the XenAPI guest agent should be located. If the agent is present, network configuration is not injected into the image. Related options: For this option to have an effect: * ``flat_injected`` should be set to ``True`` * ``compute_driver`` should be set to ``xenapi.XenAPIDriver`` """), cfg.BoolOpt('disable_agent', default=False, help=""" Disables the use of XenAPI agent. This configuration option suggests whether the use of agent should be enabled or not regardless of what image properties are present. Image properties have an effect only when this is set to ``True``. Read description of config option ``use_agent_default`` for more information. Related options: * ``use_agent_default`` """), cfg.BoolOpt('use_agent_default', default=False, help=""" Whether or not to use the agent by default when its usage is enabled but not indicated by the image. The use of XenAPI agent can be disabled altogether using the configuration option ``disable_agent``. 
However, if it is not disabled, the use of an agent can still be controlled by the image in use through one of its properties, ``xenapi_use_agent``. If this property is either not present or specified incorrectly on the image, the use of agent is determined by this configuration option. Note that if this configuration is set to ``True`` when the agent is not present, the boot times will increase significantly. Related options: * ``disable_agent`` """), ] xenapi_session_opts = [ cfg.IntOpt('login_timeout', default=10, min=0, help='Timeout in seconds for XenAPI login.'), cfg.IntOpt('connection_concurrent', default=5, min=1, help=""" Maximum number of concurrent XenAPI connections. In nova, multiple XenAPI requests can happen at a time. Configuring this option will parallelize access to the XenAPI session, which allows you to make concurrent XenAPI connections. """), ] xenapi_vm_utils_opts = [ cfg.StrOpt('cache_images', default='all', choices=[ ('all', 'Will cache all images'), ('some', 'Will only cache images that have the image_property ' '``cache_in_nova=True``'), ('none', 'Turns off caching entirely')], help=""" Cache glance images locally. The value for this option must be chosen from the choices listed here. Configuring a value other than these will default to 'all'. Note: There is nothing that deletes these images. """), cfg.IntOpt('image_compression_level', min=1, max=9, help=""" Compression level for images. By setting this option we can configure the gzip compression level. This option sets GZIP environment variable before spawning tar -cz to force the compression level. It defaults to none, which means the GZIP environment variable is not set and the default (usually -6) is used. Possible values: * Range is 1-9, e.g., 9 for gzip -9, 9 being most compressed but most CPU intensive on dom0. * Any values out of this range will default to None. """), cfg.StrOpt('default_os_type', default='linux', help='Default OS type used when uploading an image to glance'), cfg.IntOpt('block_device_creation_timeout', default=10, min=1, help='Time in secs to wait for a block device to be created'), cfg.IntOpt('max_kernel_ramdisk_size', default=16 * units.Mi, help=""" Maximum size in bytes of kernel or ramdisk images. Specifying the maximum size of kernel or ramdisk will avoid copying large files to dom0 and fill up /boot/guest. """), cfg.StrOpt('sr_matching_filter', default='default-sr:true', help=""" Filter for finding the SR to be used to install guest instances on. Possible values: * To use the Local Storage in default XenServer/XCP installations set this flag to other-config:i18n-key=local-storage. * To select an SR with a different matching criteria, you could set it to other-config:my_favorite_sr=true. * To fall back on the Default SR, as displayed by XenCenter, set this flag to: default-sr:true. """), cfg.BoolOpt('sparse_copy', default=True, help=""" Whether to use sparse_copy for copying data on a resize down. (False will use standard dd). This speeds up resizes down considerably since large runs of zeros won't have to be rsynced. """), cfg.IntOpt('num_vbd_unplug_retries', default=10, min=0, help=""" Maximum number of retries to unplug VBD. If set to 0, should try once, no retries. """), cfg.StrOpt('ipxe_network_name', help=""" Name of network to use for booting iPXE ISOs. An iPXE ISO is a specially crafted ISO which supports iPXE booting. This feature gives a means to roll your own image. By default this option is not set. Enable this option to boot an iPXE ISO. 
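The ``image_compression_level`` option above works by exporting the ``GZIP`` environment variable before running ``tar -cz``. A rough sketch of that mechanism (not the actual plugin code) is shown here.

import os
import subprocess

def compress_image_dir(source_dir, output_path, compression_level=None):
    env = os.environ.copy()
    if compression_level:
        # e.g. GZIP='-9' makes gzip use maximum, and most CPU-hungry, compression.
        env['GZIP'] = '-%d' % compression_level
    with open(output_path, 'wb') as out:
        subprocess.check_call(['tar', '-cz', '-C', source_dir, '.'],
                              stdout=out, env=env)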
Related Options: * `ipxe_boot_menu_url` * `ipxe_mkisofs_cmd` """), cfg.StrOpt('ipxe_boot_menu_url', help=""" URL to the iPXE boot menu. An iPXE ISO is a specially crafted ISO which supports iPXE booting. This feature gives a means to roll your own image. By default this option is not set. Enable this option to boot an iPXE ISO. Related Options: * `ipxe_network_name` * `ipxe_mkisofs_cmd` """), cfg.StrOpt('ipxe_mkisofs_cmd', default='mkisofs', help=""" Name and optionally path of the tool used for ISO image creation. An iPXE ISO is a specially crafted ISO which supports iPXE booting. This feature gives a means to roll your own image. Note: By default `mkisofs` is not present in the Dom0, so the package can either be manually added to Dom0 or include the `mkisofs` binary in the image itself. Related Options: * `ipxe_network_name` * `ipxe_boot_menu_url` """), ] xenapi_opts = [ cfg.StrOpt('connection_url', help=""" URL for connection to XenServer/Xen Cloud Platform. A special value of unix://local can be used to connect to the local unix socket. Possible values: * Any string that represents a URL. The connection_url is generally the management network IP address of the XenServer. * This option must be set if you chose the XenServer driver. """), cfg.StrOpt('connection_username', default='root', help='Username for connection to XenServer/Xen Cloud Platform'), cfg.StrOpt('connection_password', secret=True, help='Password for connection to XenServer/Xen Cloud Platform'), cfg.FloatOpt('vhd_coalesce_poll_interval', default=5.0, min=0, help=""" The interval used for polling of coalescing vhds. This is the interval after which the task of coalesce VHD is performed, until it reaches the max attempts that is set by vhd_coalesce_max_attempts. Related options: * `vhd_coalesce_max_attempts` """), cfg.BoolOpt('check_host', default=True, help=""" Ensure compute service is running on host XenAPI connects to. This option must be set to false if the 'independent_compute' option is set to true. Possible values: * Setting this option to true will make sure that compute service is running on the same host that is specified by connection_url. * Setting this option to false, doesn't perform the check. Related options: * `independent_compute` """), cfg.IntOpt('vhd_coalesce_max_attempts', default=20, min=0, help=""" Max number of times to poll for VHD to coalesce. This option determines the maximum number of attempts that can be made for coalescing the VHD before giving up. Related opitons: * `vhd_coalesce_poll_interval` """), cfg.StrOpt('sr_base_path', default='/var/run/sr-mount', help='Base path to the storage repository on the XenServer host.'), cfg.HostAddressOpt('target_host', help=""" The iSCSI Target Host. This option represents the hostname or ip of the iSCSI Target. If the target host is not present in the connection information from the volume provider then the value from this option is taken. Possible values: * Any string that represents hostname/ip of Target. """), cfg.PortOpt('target_port', default=3260, help=""" The iSCSI Target Port. This option represents the port of the iSCSI Target. If the target port is not present in the connection information from the volume provider then the value from this option is taken. """), cfg.BoolOpt('independent_compute', default=False, help=""" Used to prevent attempts to attach VBDs locally, so Nova can be run in a VM on a different host. 
Related options: * ``CONF.flat_injected`` (Must be False) * ``CONF.xenserver.check_host`` (Must be False) * ``CONF.default_ephemeral_format`` (Must be unset or 'ext3') * Joining host aggregates (will error if attempted) * Swap disks for Windows VMs (will error if attempted) * Nova-based auto_configure_disk (will error if attempted) """) ] xenapi_vmops_opts = [ cfg.IntOpt('running_timeout', default=60, min=0, help=""" Wait time for instances to go to running state. Provide an integer value representing time in seconds to set the wait time for an instance to go to running state. When a request to create an instance is received by nova-api and communicated to nova-compute, the creation of the instance occurs through interaction with Xen via XenAPI in the compute node. Once the node on which the instance(s) are to be launched is decided by nova-schedule and the launch is triggered, a certain amount of wait time is involved until the instance(s) can become available and 'running'. This wait time is defined by running_timeout. If the instances do not go to running state within this specified wait time, the launch expires and the instance(s) are set to 'error' state. """), # TODO(dharinic): Make this, a stevedore plugin cfg.StrOpt('image_upload_handler', default='', deprecated_for_removal=True, deprecated_since='18.0.0', deprecated_reason=""" Instead of setting the class path here, we will use short names to represent image handlers. The download and upload handlers must also be matching. So another new option "image_handler" will be used to set the short name for a specific image handler for both image download and upload. """, help=""" Dom0 plugin driver used to handle image uploads. Provide a string value representing a plugin driver required to handle the image uploading to GlanceStore. Images, and snapshots from XenServer need to be uploaded to the data store for use. image_upload_handler takes in a value for the Dom0 plugin driver. This driver is then called to uplaod images to the GlanceStore. """), cfg.StrOpt('image_handler', default='direct_vhd', choices=[ ('direct_vhd', 'This plugin directly processes the VHD files in ' 'XenServer SR(Storage Repository). So this plugin only works ' 'when the host\'s SR type is file system based e.g. ext, nfs.'), ('vdi_local_dev', 'This plugin implements an image handler which ' 'attaches the instance\'s VDI as a local disk to the VM where ' 'the OpenStack Compute service runs. It uploads the raw disk ' 'to glance when creating image; when booting an instance from a ' 'glance image, it downloads the image and streams it into the ' 'disk which is attached to the compute VM.'), ('vdi_remote_stream', 'This plugin implements an image handler ' 'which works as a proxy between glance and XenServer. The VHD ' 'streams to XenServer via a remote import API supplied by XAPI ' 'for image download; and for image upload, the VHD streams from ' 'XenServer via a remote export API supplied by XAPI. This ' 'plugin works for all SR types supported by XenServer.'), ], help=""" The plugin used to handle image uploads and downloads. Provide a short name representing an image driver required to handle the image between compute host and glance. """), ] xenapi_volume_utils_opts = [ cfg.IntOpt('introduce_vdi_retry_wait', default=20, min=0, help=""" Number of seconds to wait for SR to settle if the VDI does not exist when first introduced. Some SRs, particularly iSCSI connections are slow to see the VDIs right after they got introduced. 
Setting this option to a time interval will make the SR to wait for that time period before raising VDI not found exception. """) ] xenapi_ovs_integration_bridge_opts = [ cfg.StrOpt('ovs_integration_bridge', help=""" The name of the integration Bridge that is used with xenapi when connecting with Open vSwitch. Note: The value of this config option is dependent on the environment, therefore this configuration value must be set accordingly if you are using XenAPI. Possible values: * Any string that represents a bridge name. """), ] xenapi_pool_opts = [ # TODO(macsz): This should be deprecated. Until providing solid reason, # leaving it as-it-is. cfg.BoolOpt('use_join_force', default=True, help=""" When adding new host to a pool, this will append a --force flag to the command, forcing hosts to join a pool, even if they have different CPUs. Since XenServer version 5.6 it is possible to create a pool of hosts that have different CPU capabilities. To accommodate CPU differences, XenServer limited features it uses to determine CPU compatibility to only the ones that are exposed by CPU and support for CPU masking was added. Despite this effort to level differences between CPUs, it is still possible that adding new host will fail, thus option to force join was introduced. """), ] xenapi_console_opts = [ cfg.StrOpt('console_public_hostname', default=socket.gethostname(), sample_default='', deprecated_group='DEFAULT', help=""" Publicly visible name for this console host. Possible values: * Current hostname (default) or any string representing hostname. """), ] ALL_XENSERVER_OPTS = (xenapi_agent_opts + xenapi_session_opts + xenapi_vm_utils_opts + xenapi_opts + xenapi_vmops_opts + xenapi_volume_utils_opts + xenapi_ovs_integration_bridge_opts + xenapi_pool_opts + xenapi_console_opts) def register_opts(conf): conf.register_group(xenserver_group) conf.register_opts(ALL_XENSERVER_OPTS, group=xenserver_group) def list_opts(): return {xenserver_group: ALL_XENSERVER_OPTS} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/conf/zvm.py0000664000175000017500000000566700000000000015662 0ustar00zuulzuul00000000000000# Copyright 2017,2018 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from nova.conf import paths zvm_opt_group = cfg.OptGroup('zvm', title='zVM Options', help=""" zvm options allows cloud administrator to configure related z/VM hypervisor driver to be used within an OpenStack deployment. zVM options are used when the compute_driver is set to use zVM (compute_driver=zvm.ZVMDriver) """) zvm_opts = [ cfg.URIOpt('cloud_connector_url', sample_default='http://zvm.example.org:8080/', help=""" URL to be used to communicate with z/VM Cloud Connector. """), cfg.StrOpt('ca_file', default=None, help=""" CA certificate file to be verified in httpd server with TLS enabled A string, it must be a path to a CA bundle to use. 
"""), cfg.StrOpt('image_tmp_path', default=paths.state_path_def('images'), sample_default="$state_path/images", help=""" The path at which images will be stored (snapshot, deploy, etc). Images used for deploy and images captured via snapshot need to be stored on the local disk of the compute host. This configuration identifies the directory location. Possible values: A file system path on the host running the compute service. """), cfg.IntOpt('reachable_timeout', default=300, help=""" Timeout (seconds) to wait for an instance to start. The z/VM driver relies on communication between the instance and cloud connector. After an instance is created, it must have enough time to wait for all the network info to be written into the user directory. The driver will keep rechecking network status to the instance with the timeout value, If setting network failed, it will notify the user that starting the instance failed and put the instance in ERROR state. The underlying z/VM guest will then be deleted. Possible Values: Any positive integer. Recommended to be at least 300 seconds (5 minutes), but it will vary depending on instance and system load. A value of 0 is used for debug. In this case the underlying z/VM guest will not be deleted when the instance is marked in ERROR state. """), ] def register_opts(conf): conf.register_group(zvm_opt_group) conf.register_opts(zvm_opts, group=zvm_opt_group) def list_opts(): return {zvm_opt_group: zvm_opts} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/config.py0000664000175000017500000000505200000000000015352 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # Copyright 2012 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import logging from oslo_log import log from oslo_utils import importutils import nova.conf from nova.db.sqlalchemy import api as sqlalchemy_api from nova import middleware from nova import rpc from nova import version profiler = importutils.try_import('osprofiler.opts') CONF = nova.conf.CONF def rabbit_heartbeat_filter(log_record): message = "Unexpected error during heartbeat thread processing" return message not in log_record.msg def parse_args(argv, default_config_files=None, configure_db=True, init_rpc=True): log.register_options(CONF) # We use the oslo.log default log levels which includes suds=INFO # and add only the extra levels that Nova needs if CONF.glance.debug: extra_default_log_levels = ['glanceclient=DEBUG'] else: extra_default_log_levels = ['glanceclient=WARN'] # NOTE(sean-k-mooney): this filter addresses bug #1825584 # https://bugs.launchpad.net/nova/+bug/1825584 # eventlet monkey-patching breaks AMQP heartbeat on uWSGI rabbit_logger = logging.getLogger('oslo.messaging._drivers.impl_rabbit') rabbit_logger.addFilter(rabbit_heartbeat_filter) # NOTE(danms): DEBUG logging in privsep will result in some large # and potentially sensitive things being logged. extra_default_log_levels.append('oslo.privsep.daemon=INFO') log.set_defaults(default_log_levels=log.get_default_log_levels() + extra_default_log_levels) rpc.set_defaults(control_exchange='nova') if profiler: profiler.set_defaults(CONF) middleware.set_defaults() CONF(argv[1:], project='nova', version=version.version_string(), default_config_files=default_config_files) if init_rpc: rpc.init(CONF) if configure_db: sqlalchemy_api.configure(CONF) ././@PaxHeader0000000000000000000000000000003200000000000011450 xustar000000000000000026 mtime=1636736378.31047 nova-21.2.4/nova/console/0000775000175000017500000000000000000000000015173 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/console/__init__.py0000664000175000017500000000152700000000000017311 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ :mod:`nova.console` -- Wrappers around console proxies ====================================================== .. automodule:: nova.console :platform: Unix :synopsis: Wrapper around console proxies such as noVNC to set up multi-tenant VM console access. 
""" ././@PaxHeader0000000000000000000000000000003200000000000011450 xustar000000000000000026 mtime=1636736378.31047 nova-21.2.4/nova/console/rfb/0000775000175000017500000000000000000000000015744 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/console/rfb/__init__.py0000664000175000017500000000000000000000000020043 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/console/rfb/auth.py0000664000175000017500000000342700000000000017265 0ustar00zuulzuul00000000000000# Copyright (c) 2014-2017 Red Hat, Inc # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import enum import six VERSION_LENGTH = 12 SUBTYPE_LENGTH = 4 AUTH_STATUS_FAIL = b"\x00" AUTH_STATUS_PASS = b"\x01" class AuthType(enum.IntEnum): INVALID = 0 NONE = 1 VNC = 2 RA2 = 5 RA2NE = 6 TIGHT = 16 ULTRA = 17 TLS = 18 # Used by VINO VENCRYPT = 19 # Used by VeNCrypt and QEMU SASL = 20 # SASL type used by VINO and QEMU ARD = 30 # Apple remote desktop (screen sharing) MSLOGON = 0xfffffffa # Used by UltraVNC @six.add_metaclass(abc.ABCMeta) class RFBAuthScheme(object): @abc.abstractmethod def security_type(self): """Return the security type supported by this scheme Returns the nova.console.rfb.auth.AuthType.XX constant representing the scheme implemented. """ pass @abc.abstractmethod def security_handshake(self, compute_sock): """Perform security-type-specific functionality. This method is expected to return the socket-like object used to communicate with the server securely. Should raise exception.RFBAuthHandshakeFailed if an error occurs :param compute_sock: socket connected to the compute node instance """ pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/console/rfb/authnone.py0000664000175000017500000000145000000000000020137 0ustar00zuulzuul00000000000000# Copyright (c) 2014-2016 Red Hat, Inc # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova.console.rfb import auth class RFBAuthSchemeNone(auth.RFBAuthScheme): def security_type(self): return auth.AuthType.NONE def security_handshake(self, compute_sock): return compute_sock ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/console/rfb/auths.py0000664000175000017500000000334000000000000017442 0ustar00zuulzuul00000000000000# Copyright (c) 2014-2017 Red Hat, Inc # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from nova.console.rfb import authnone from nova.console.rfb import authvencrypt from nova import exception CONF = cfg.CONF class RFBAuthSchemeList(object): AUTH_SCHEME_MAP = { "none": authnone.RFBAuthSchemeNone, "vencrypt": authvencrypt.RFBAuthSchemeVeNCrypt, } def __init__(self): self.schemes = {} for name in CONF.vnc.auth_schemes: scheme = self.AUTH_SCHEME_MAP[name]() self.schemes[scheme.security_type()] = scheme def find_scheme(self, desired_types): """Find a suitable authentication scheme to use with compute node. Identify which of the ``desired_types`` we can accept. :param desired_types: A list of ints corresponding to the various authentication types supported. """ for security_type in desired_types: if security_type in self.schemes: return self.schemes[security_type] raise exception.RFBAuthNoAvailableScheme( allowed_types=", ".join([str(s) for s in self.schemes.keys()]), desired_types=", ".join([str(s) for s in desired_types])) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/console/rfb/authvencrypt.py0000664000175000017500000001230600000000000021054 0ustar00zuulzuul00000000000000# Copyright (c) 2014-2016 Red Hat, Inc # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import enum import ssl import struct from oslo_config import cfg from oslo_log import log as logging import six from nova.console.rfb import auth from nova import exception from nova.i18n import _ LOG = logging.getLogger(__name__) CONF = cfg.CONF class AuthVeNCryptSubtype(enum.IntEnum): """Possible VeNCrypt subtypes. From https://github.com/rfbproto/rfbproto/blob/master/rfbproto.rst """ PLAIN = 256 TLSNONE = 257 TLSVNC = 258 TLSPLAIN = 259 X509NONE = 260 X509VNC = 261 X509PLAIN = 262 X509SASL = 263 TLSSASL = 264 class RFBAuthSchemeVeNCrypt(auth.RFBAuthScheme): """A security proxy helper which uses VeNCrypt. This security proxy helper uses the VeNCrypt security type to achieve SSL/TLS-secured VNC. 
It supports both standard SSL/TLS encryption and SSL/TLS encryption with x509 authentication. Refer to https://www.berrange.com/~dan/vencrypt.txt for a brief overview of the protocol. """ def security_type(self): return auth.AuthType.VENCRYPT def security_handshake(self, compute_sock): def recv(num): b = compute_sock.recv(num) if len(b) != num: reason = _("Short read from compute socket, wanted " "%(wanted)d bytes but got %(got)d") % { 'wanted': num, 'got': len(b)} raise exception.RFBAuthHandshakeFailed(reason=reason) return b # get the VeNCrypt version from the server maj_ver = ord(recv(1)) min_ver = ord(recv(1)) LOG.debug("Server sent VeNCrypt version " "%(maj)s.%(min)s", {'maj': maj_ver, 'min': min_ver}) if maj_ver != 0 or min_ver != 2: reason = _("Only VeNCrypt version 0.2 is supported by this " "proxy, but the server wanted to use version " "%(maj)s.%(min)s") % {'maj': maj_ver, 'min': min_ver} raise exception.RFBAuthHandshakeFailed(reason=reason) # use version 0.2 compute_sock.sendall(b"\x00\x02") can_use_version = ord(recv(1)) if can_use_version > 0: reason = _("Server could not use VeNCrypt version 0.2") raise exception.RFBAuthHandshakeFailed(reason=reason) # get the supported sub-auth types sub_types_cnt = ord(recv(1)) sub_types_raw = recv(sub_types_cnt * auth.SUBTYPE_LENGTH) sub_types = struct.unpack('!' + str(sub_types_cnt) + 'I', sub_types_raw) LOG.debug("Server supports VeNCrypt sub-types %s", sub_types) # We use X509None as we're only seeking to encrypt the channel (ruling # out PLAIN) and prevent MITM (ruling out TLS*, which uses trivially # MITM'd Anonymous Diffie Hellmann (DH) cyphers) if AuthVeNCryptSubtype.X509NONE not in sub_types: reason = _("Server does not support the x509None (%s) VeNCrypt" " sub-auth type") % \ AuthVeNCryptSubtype.X509NONE raise exception.RFBAuthHandshakeFailed(reason=reason) LOG.debug("Attempting to use the x509None (%s) auth sub-type", AuthVeNCryptSubtype.X509NONE) compute_sock.sendall(struct.pack( '!I', AuthVeNCryptSubtype.X509NONE)) # NB(sross): the spec is missing a U8 here that's used in # multiple implementations (e.g. QEMU, GTK-VNC). 
1 means # acceptance, 0 means failure (unlike the rest of RFB) auth_accepted = ord(recv(1)) if auth_accepted == 0: reason = _("Server didn't accept the requested auth sub-type") raise exception.RFBAuthHandshakeFailed(reason=reason) LOG.debug("Server accepted the requested sub-auth type") if (CONF.vnc.vencrypt_client_key and CONF.vnc.vencrypt_client_cert): client_key = CONF.vnc.vencrypt_client_key client_cert = CONF.vnc.vencrypt_client_cert else: client_key = None client_cert = None try: wrapped_sock = ssl.wrap_socket( compute_sock, keyfile=client_key, certfile=client_cert, server_side=False, cert_reqs=ssl.CERT_REQUIRED, ca_certs=CONF.vnc.vencrypt_ca_certs) LOG.info("VeNCrypt security handshake accepted") return wrapped_sock except ssl.SSLError as e: reason = _("Error establishing TLS connection to server: %s") % ( six.text_type(e)) raise exception.RFBAuthHandshakeFailed(reason=reason) ././@PaxHeader0000000000000000000000000000003200000000000011450 xustar000000000000000026 mtime=1636736378.31047 nova-21.2.4/nova/console/securityproxy/0000775000175000017500000000000000000000000020144 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/console/securityproxy/__init__.py0000664000175000017500000000000000000000000022243 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/console/securityproxy/base.py0000664000175000017500000000275000000000000021434 0ustar00zuulzuul00000000000000# Copyright (c) 2014-2016 Red Hat, Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import six @six.add_metaclass(abc.ABCMeta) class SecurityProxy(object): """A console security Proxy Helper Console security proxy helpers should subclass this class and implement a generic `connect` for the particular protocol being used. Security drivers can then subclass the protocol-specific helper class. """ @abc.abstractmethod def connect(self, tenant_sock, compute_sock): """Initiate the console connection This method performs the protocol specific negotiation, and returns the socket-like object to use to communicate with the server securely. :param tenant_sock: socket connected to the remote tenant user :param compute_sock: socket connected to the compute node instance :returns: a new compute_sock for the instance """ pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/console/securityproxy/rfb.py0000664000175000017500000001741000000000000021272 0ustar00zuulzuul00000000000000# Copyright (c) 2014-2016 Red Hat, Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import struct from oslo_log import log as logging import six from nova.console.rfb import auth from nova.console.rfb import auths from nova.console.securityproxy import base from nova import exception from nova.i18n import _ LOG = logging.getLogger(__name__) class RFBSecurityProxy(base.SecurityProxy): """RFB Security Proxy Negotiation Helper. This class proxies the initial setup of the RFB connection between the client and the server. Then, when the RFB security negotiation step arrives, it intercepts the communication, posing as a server with the "None" authentication type to the client, and acting as a client (via the methods below) to the server. After security negotiation, normal proxying can be used. Note: this code mandates RFB version 3.8, since this is supported by any client and server impl written in the past 10+ years. See the general RFB specification at: https://tools.ietf.org/html/rfc6143 See an updated, maintained RDB specification at: https://github.com/rfbproto/rfbproto/blob/master/rfbproto.rst """ def __init__(self): self.auth_schemes = auths.RFBAuthSchemeList() def _make_var_str(self, message): message_str = six.text_type(message) message_bytes = message_str.encode('utf-8') message_len = struct.pack("!I", len(message_bytes)) return message_len + message_bytes def _fail(self, tenant_sock, compute_sock, message): # Tell the client there's been a problem result_code = struct.pack("!I", 1) tenant_sock.sendall(result_code + self._make_var_str(message)) if compute_sock is not None: # Tell the server that there's been a problem # by sending the "Invalid" security type compute_sock.sendall(auth.AUTH_STATUS_FAIL) @staticmethod def _parse_version(version_str): r"""Convert a version string to a float. >>> RFBSecurityProxy._parse_version('RFB 003.008\n') 0.2 """ maj_str = version_str[4:7] min_str = version_str[8:11] return float("%d.%d" % (int(maj_str), int(min_str))) def connect(self, tenant_sock, compute_sock): """Initiate the RFB connection process. This method performs the initial ProtocolVersion and Security messaging, and returns the socket-like object to use to communicate with the server securely. If an error occurs SecurityProxyNegotiationFailed will be raised. """ def recv(sock, num): b = sock.recv(num) if len(b) != num: reason = _("Incorrect read from socket, wanted %(wanted)d " "bytes but got %(got)d. 
Socket returned " "%(result)r") % {'wanted': num, 'got': len(b), 'result': b} raise exception.RFBAuthHandshakeFailed(reason=reason) return b # Negotiate version with compute server compute_version = recv(compute_sock, auth.VERSION_LENGTH) LOG.debug("Got version string '%s' from compute node", compute_version[:-1]) if self._parse_version(compute_version) != 3.8: reason = _("Security proxying requires RFB protocol " "version 3.8, but server sent %s"), compute_version[:-1] raise exception.SecurityProxyNegotiationFailed(reason=reason) compute_sock.sendall(compute_version) # Negotiate version with tenant tenant_sock.sendall(compute_version) tenant_version = recv(tenant_sock, auth.VERSION_LENGTH) LOG.debug("Got version string '%s' from tenant", tenant_version[:-1]) if self._parse_version(tenant_version) != 3.8: reason = _("Security proxying requires RFB protocol version " "3.8, but tenant asked for %s"), tenant_version[:-1] raise exception.SecurityProxyNegotiationFailed(reason=reason) # Negotiate security with server permitted_auth_types_cnt = six.byte2int(recv(compute_sock, 1)) if permitted_auth_types_cnt == 0: # Decode the reason why the request failed reason_len_raw = recv(compute_sock, 4) reason_len = struct.unpack('!I', reason_len_raw)[0] reason = recv(compute_sock, reason_len) tenant_sock.sendall(auth.AUTH_STATUS_FAIL + reason_len_raw + reason) raise exception.SecurityProxyNegotiationFailed(reason=reason) f = recv(compute_sock, permitted_auth_types_cnt) permitted_auth_types = [] for auth_type in f: if isinstance(auth_type, six.string_types): auth_type = ord(auth_type) permitted_auth_types.append(auth_type) LOG.debug("The server sent security types %s", permitted_auth_types) # Negotiate security with client before we say "ok" to the server # send 1:[None] tenant_sock.sendall(auth.AUTH_STATUS_PASS + six.int2byte(auth.AuthType.NONE)) client_auth = six.byte2int(recv(tenant_sock, 1)) if client_auth != auth.AuthType.NONE: self._fail(tenant_sock, compute_sock, _("Only the security type None (%d) is supported") % auth.AuthType.NONE) reason = _("Client requested a security type other than None " "(%(none_code)d): %(auth_type)s") % { 'auth_type': client_auth, 'none_code': auth.AuthType.NONE} raise exception.SecurityProxyNegotiationFailed(reason=reason) try: scheme = self.auth_schemes.find_scheme(permitted_auth_types) except exception.RFBAuthNoAvailableScheme as e: # Intentionally don't tell client what really failed # as that's information leakage self._fail(tenant_sock, compute_sock, _("Unable to negotiate security with server")) raise exception.SecurityProxyNegotiationFailed( reason=_("No compute auth available: %s") % six.text_type(e)) compute_sock.sendall(six.int2byte(scheme.security_type())) LOG.debug("Using security type %d with server, None with client", scheme.security_type()) try: compute_sock = scheme.security_handshake(compute_sock) except exception.RFBAuthHandshakeFailed as e: # Intentionally don't tell client what really failed # as that's information leakage self._fail(tenant_sock, None, _("Unable to negotiate security with server")) LOG.debug("Auth failed %s", six.text_type(e)) raise exception.SecurityProxyNegotiationFailed( reason=_("Auth handshake failed")) LOG.info("Finished security handshake, resuming normal proxy " "mode using secured socket") # we can just proxy the security result -- if the server security # negotiation fails, we want the client to think it has failed return compute_sock ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 
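# Illustrative, self-contained sketch -- not part of Nova. It mirrors the two
# framing helpers used by RFBSecurityProxy.connect() above: parsing the
# 12-byte RFB ProtocolVersion string (only protocol 3.8 is accepted) and
# building the length-prefixed reason string sent to the tenant on failure.
# The function names are local stand-ins for the private methods shown above.
import struct


def parse_version(version_str):
    # "RFB 003.008\n" -> 3.8
    return float('%d.%d' % (int(version_str[4:7]), int(version_str[8:11])))


def make_var_str(message):
    # RFB variable-length strings: 4-byte big-endian length, then UTF-8 bytes.
    data = message.encode('utf-8')
    return struct.pack('!I', len(data)) + data


assert parse_version('RFB 003.008\n') == 3.8
assert make_var_str('denied') == b'\x00\x00\x00\x06denied'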
mtime=1636736320.0 nova-21.2.4/nova/console/serial.py0000664000175000017500000000544100000000000017030 0ustar00zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Serial consoles module.""" import socket from oslo_log import log as logging import six.moves import nova.conf from nova import exception from nova import utils LOG = logging.getLogger(__name__) ALLOCATED_PORTS = set() # in-memory set of already allocated ports SERIAL_LOCK = 'serial-lock' CONF = nova.conf.CONF # TODO(sahid): Add a method to initialize ALLOCATED_PORTS with the # already binded TPC port(s). (cf from danpb: list all running guests and # query the XML in libvirt driver to find out the TCP port(s) it uses). @utils.synchronized(SERIAL_LOCK) def acquire_port(host): """Returns a free TCP port on host. Find and returns a free TCP port on 'host' in the range of 'CONF.serial_console.port_range'. """ start, stop = _get_port_range() for port in six.moves.range(start, stop): if (host, port) in ALLOCATED_PORTS: continue try: _verify_port(host, port) ALLOCATED_PORTS.add((host, port)) return port except exception.SocketPortInUseException as e: LOG.warning(e.format_message()) raise exception.SocketPortRangeExhaustedException(host=host) @utils.synchronized(SERIAL_LOCK) def release_port(host, port): """Release TCP port to be used next time.""" ALLOCATED_PORTS.discard((host, port)) def _get_port_range(): config_range = CONF.serial_console.port_range start, stop = map(int, config_range.split(':')) if start >= stop: default_port_range = nova.conf.serial_console.DEFAULT_PORT_RANGE LOG.warning("serial_console.port_range should be in the " "format : and start < stop, " "Given value %(port_range)s is invalid. " "Taking the default port range %(default)s.", {'port_range': config_range, 'default': default_port_range}) start, stop = map(int, default_port_range.split(':')) return start, stop def _verify_port(host, port): s = socket.socket() try: s.bind((host, port)) except socket.error as e: raise exception.SocketPortInUseException( host=host, port=port, error=e) finally: s.close() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/console/type.py0000664000175000017500000000257700000000000016541 0ustar00zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
class Console(object): def __init__(self, host, port, internal_access_path=None): self.host = host self.port = port self.internal_access_path = internal_access_path def get_connection_info(self, token, access_url): """Returns an unreferenced dict with connection information.""" ret = dict(self.__dict__) ret['token'] = token ret['access_url'] = access_url return ret class ConsoleVNC(Console): pass class ConsoleRDP(Console): pass class ConsoleSpice(Console): def __init__(self, host, port, tlsPort, internal_access_path=None): super(ConsoleSpice, self).__init__(host, port, internal_access_path) self.tlsPort = tlsPort class ConsoleSerial(Console): pass class ConsoleMKS(Console): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/console/websocketproxy.py0000664000175000017500000003217000000000000020640 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. ''' Websocket proxy that is compatible with OpenStack Nova. Leverages websockify.py by Joel Martin ''' import copy from http import HTTPStatus import os import socket import sys from oslo_log import log as logging from oslo_utils import encodeutils from oslo_utils import importutils import six from six.moves import http_cookies as Cookie import six.moves.urllib.parse as urlparse import websockify from nova.compute import rpcapi as compute_rpcapi import nova.conf from nova import context from nova import exception from nova.i18n import _ from nova import objects # Location of WebSockifyServer class in websockify v0.9.0 websockifyserver = importutils.try_import('websockify.websockifyserver') LOG = logging.getLogger(__name__) CONF = nova.conf.CONF class TenantSock(object): """A socket wrapper for communicating with the tenant. This class provides a socket-like interface to the internal websockify send/receive queue for the client connection to the tenant user. It is used with the security proxy classes. """ def __init__(self, reqhandler): self.reqhandler = reqhandler self.queue = [] def recv(self, cnt): # NB(sross): it's ok to block here because we know # exactly the sequence of data arriving while len(self.queue) < cnt: # new_frames looks like ['abc', 'def'] new_frames, closed = self.reqhandler.recv_frames() # flatten frames onto queue for frame in new_frames: # The socket returns (byte) strings in Python 2... if six.PY2: self.queue.extend(frame) # ...and integers in Python 3. For the Python 3 case, we need # to convert these to characters using 'chr' and then, as this # returns unicode, convert the result to byte strings. 
else: self.queue.extend( [six.binary_type(chr(c), 'ascii') for c in frame]) if closed: break popped = self.queue[0:cnt] del self.queue[0:cnt] return b''.join(popped) def sendall(self, data): self.reqhandler.send_frames([encodeutils.safe_encode(data)]) def finish_up(self): self.reqhandler.send_frames([b''.join(self.queue)]) def close(self): self.finish_up() self.reqhandler.send_close() class NovaProxyRequestHandler(websockify.ProxyRequestHandler): def __init__(self, *args, **kwargs): self._compute_rpcapi = None websockify.ProxyRequestHandler.__init__(self, *args, **kwargs) @property def compute_rpcapi(self): # Lazy load the rpcapi/ComputeAPI upon first use for this connection. # This way, if we receive a TCP RST, we will not create a ComputeAPI # object we won't use. if not self._compute_rpcapi: self._compute_rpcapi = compute_rpcapi.ComputeAPI() return self._compute_rpcapi def verify_origin_proto(self, connect_info, origin_proto): if 'access_url_base' not in connect_info: detail = _("No access_url_base in connect_info. " "Cannot validate protocol") raise exception.ValidationError(detail=detail) expected_protos = [ urlparse.urlparse(connect_info.access_url_base).scheme] # NOTE: For serial consoles the expected protocol could be ws or # wss which correspond to http and https respectively in terms of # security. if 'ws' in expected_protos: expected_protos.append('http') if 'wss' in expected_protos: expected_protos.append('https') return origin_proto in expected_protos def _check_console_port(self, ctxt, instance_uuid, port, console_type): try: instance = objects.Instance.get_by_uuid(ctxt, instance_uuid) except exception.InstanceNotFound: return # NOTE(melwitt): The port is expected to be a str for validation. return self.compute_rpcapi.validate_console_port(ctxt, instance, str(port), console_type) def _get_connect_info(self, ctxt, token): """Validate the token and get the connect info.""" # NOTE(PaulMurray) ConsoleAuthToken.validate validates the token. # We call the compute manager directly to check the console port # is correct. connect_info = objects.ConsoleAuthToken.validate(ctxt, token) valid_port = self._check_console_port( ctxt, connect_info.instance_uuid, connect_info.port, connect_info.console_type) if not valid_port: raise exception.InvalidToken(token='***') return connect_info def new_websocket_client(self): """Called after a new WebSocket connection has been established.""" # Reopen the eventlet hub to make sure we don't share an epoll # fd with parent and/or siblings, which would be bad from eventlet import hubs hubs.use_hub() # The nova expected behavior is to have token # passed to the method GET of the request parse = urlparse.urlparse(self.path) if parse.scheme not in ('http', 'https'): # From a bug in urlparse in Python < 2.7.4 we cannot support # special schemes (cf: http://bugs.python.org/issue9374) if sys.version_info < (2, 7, 4): raise exception.NovaException( _("We do not support scheme '%s' under Python < 2.7.4, " "please use http or https") % parse.scheme) query = parse.query token = urlparse.parse_qs(query).get("token", [""]).pop() if not token: # NoVNC uses it's own convention that forward token # from the request to a cookie header, we should check # also for this behavior hcookie = self.headers.get('cookie') if hcookie: cookie = Cookie.SimpleCookie() for hcookie_part in hcookie.split(';'): hcookie_part = hcookie_part.lstrip() try: cookie.load(hcookie_part) except Cookie.CookieError: # NOTE(stgleb): Do not print out cookie content # for security reasons. 
LOG.warning('Found malformed cookie') else: if 'token' in cookie: token = cookie['token'].value ctxt = context.get_admin_context() connect_info = self._get_connect_info(ctxt, token) # Verify Origin expected_origin_hostname = self.headers.get('Host') if ':' in expected_origin_hostname: e = expected_origin_hostname if '[' in e and ']' in e: expected_origin_hostname = e.split(']')[0][1:] else: expected_origin_hostname = e.split(':')[0] expected_origin_hostnames = CONF.console.allowed_origins expected_origin_hostnames.append(expected_origin_hostname) origin_url = self.headers.get('Origin') # missing origin header indicates non-browser client which is OK if origin_url is not None: origin = urlparse.urlparse(origin_url) origin_hostname = origin.hostname origin_scheme = origin.scheme # If the console connection was forwarded by a proxy (example: # haproxy), the original protocol could be contained in the # X-Forwarded-Proto header instead of the Origin header. Prefer the # forwarded protocol if it is present. forwarded_proto = self.headers.get('X-Forwarded-Proto') if forwarded_proto is not None: origin_scheme = forwarded_proto if origin_hostname == '' or origin_scheme == '': detail = _("Origin header not valid.") raise exception.ValidationError(detail=detail) if origin_hostname not in expected_origin_hostnames: detail = _("Origin header does not match this host.") raise exception.ValidationError(detail=detail) if not self.verify_origin_proto(connect_info, origin_scheme): detail = _("Origin header protocol does not match this host.") raise exception.ValidationError(detail=detail) sanitized_info = copy.copy(connect_info) sanitized_info.token = '***' self.msg(_('connect info: %s'), sanitized_info) host = connect_info.host port = connect_info.port # Connect to the target self.msg(_("connecting to: %(host)s:%(port)s") % {'host': host, 'port': port}) tsock = self.socket(host, port, connect=True) # Handshake as necessary if 'internal_access_path' in connect_info: path = connect_info.internal_access_path if path: tsock.send(encodeutils.safe_encode( 'CONNECT %s HTTP/1.1\r\n\r\n' % path)) end_token = "\r\n\r\n" while True: data = tsock.recv(4096, socket.MSG_PEEK) token_loc = data.find(end_token) if token_loc != -1: if data.split("\r\n")[0].find("200") == -1: raise exception.InvalidConnectionInfo() # remove the response from recv buffer tsock.recv(token_loc + len(end_token)) break if self.server.security_proxy is not None: tenant_sock = TenantSock(self) try: tsock = self.server.security_proxy.connect(tenant_sock, tsock) except exception.SecurityProxyNegotiationFailed: LOG.exception("Unable to perform security proxying, shutting " "down connection") tenant_sock.close() tsock.shutdown(socket.SHUT_RDWR) tsock.close() raise tenant_sock.finish_up() # Start proxying try: self.do_proxy(tsock) except Exception: if tsock: tsock.shutdown(socket.SHUT_RDWR) tsock.close() self.vmsg(_("%(host)s:%(port)s: " "Websocket client or target closed") % {'host': host, 'port': port}) raise def socket(self, *args, **kwargs): return websockifyserver.WebSockifyServer.socket(*args, **kwargs) def send_head(self): # This code is copied from this example patch: # https://bugs.python.org/issue32084#msg306545 path = self.translate_path(self.path) if os.path.isdir(path): parts = urlparse.urlsplit(self.path) if not parts.path.endswith('/'): # Browsers interpret "Location: //uri" as an absolute URI # like "http://URI" if self.path.startswith('//'): self.send_error(HTTPStatus.BAD_REQUEST, "URI must not start with //") return None return 
super(NovaProxyRequestHandler, self).send_head() class NovaWebSocketProxy(websockify.WebSocketProxy): def __init__(self, *args, **kwargs): """:param security_proxy: instance of nova.console.securityproxy.base.SecurityProxy Create a new web socket proxy, optionally using the @security_proxy instance to negotiate security layer with the compute node. """ self.security_proxy = kwargs.pop('security_proxy', None) # If 'default' was specified as the ssl_minimum_version, we leave # ssl_options unset to default to the underlying system defaults. # We do this to avoid using websockify's behaviour for 'default' # in select_ssl_version(), which hardcodes the versions to be # quite relaxed and prevents us from using sytem crypto policies. ssl_min_version = kwargs.pop('ssl_minimum_version', None) if ssl_min_version and ssl_min_version != 'default': kwargs['ssl_options'] = websockify.websocketproxy. \ select_ssl_version(ssl_min_version) super(NovaWebSocketProxy, self).__init__(*args, **kwargs) @staticmethod def get_logger(): return LOG ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/context.py0000664000175000017500000005257100000000000015601 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """RequestContext: context for requests that persist through all of nova.""" from contextlib import contextmanager import copy import eventlet.queue import eventlet.timeout from keystoneauth1.access import service_catalog as ksa_service_catalog from keystoneauth1 import plugin from oslo_context import context from oslo_db.sqlalchemy import enginefacade from oslo_log import log as logging from oslo_utils import timeutils import six from nova import exception from nova.i18n import _ from nova import objects from nova import policy from nova import utils LOG = logging.getLogger(__name__) CELL_CACHE = {} # NOTE(melwitt): Used for the scatter-gather utility to indicate we timed out # waiting for a result from a cell. did_not_respond_sentinel = object() # FIXME(danms): Keep a global cache of the cells we find the # first time we look. This needs to be refreshed on a timer or # trigger. CELLS = [] # Timeout value for waiting for cells to respond CELL_TIMEOUT = 60 class _ContextAuthPlugin(plugin.BaseAuthPlugin): """A keystoneauth auth plugin that uses the values from the Context. Ideally we would use the plugin provided by auth_token middleware however this plugin isn't serialized yet so we construct one from the serialized auth data. 
""" def __init__(self, auth_token, sc): super(_ContextAuthPlugin, self).__init__() self.auth_token = auth_token self.service_catalog = ksa_service_catalog.ServiceCatalogV2(sc) def get_token(self, *args, **kwargs): return self.auth_token def get_endpoint(self, session, service_type=None, interface=None, region_name=None, service_name=None, **kwargs): return self.service_catalog.url_for(service_type=service_type, service_name=service_name, interface=interface, region_name=region_name) @enginefacade.transaction_context_provider class RequestContext(context.RequestContext): """Security context and request information. Represents the user taking a given action within the system. """ def __init__(self, user_id=None, project_id=None, is_admin=None, read_deleted="no", remote_address=None, timestamp=None, quota_class=None, service_catalog=None, user_auth_plugin=None, **kwargs): """:param read_deleted: 'no' indicates deleted records are hidden, 'yes' indicates deleted records are visible, 'only' indicates that *only* deleted records are visible. :param overwrite: Set to False to ensure that the greenthread local copy of the index is not overwritten. :param user_auth_plugin: The auth plugin for the current request's authentication data. """ if user_id: kwargs['user_id'] = user_id if project_id: kwargs['project_id'] = project_id super(RequestContext, self).__init__(is_admin=is_admin, **kwargs) self.read_deleted = read_deleted self.remote_address = remote_address if not timestamp: timestamp = timeutils.utcnow() if isinstance(timestamp, six.string_types): timestamp = timeutils.parse_strtime(timestamp) self.timestamp = timestamp if service_catalog: # Only include required parts of service_catalog self.service_catalog = [s for s in service_catalog if s.get('type') in ('image', 'block-storage', 'volumev3', 'key-manager', 'placement', 'network', 'accelerator')] else: # if list is empty or none self.service_catalog = [] # NOTE(markmc): this attribute is currently only used by the # rs_limits turnstile pre-processor. # See https://lists.launchpad.net/openstack/msg12200.html self.quota_class = quota_class # NOTE(dheeraj): The following attributes are used by cellsv2 to store # connection information for connecting to the target cell. 
# It is only manipulated using the target_cell contextmanager # provided by this module self.db_connection = None self.mq_connection = None self.cell_uuid = None self.user_auth_plugin = user_auth_plugin if self.is_admin is None: self.is_admin = policy.check_is_admin(self) def get_auth_plugin(self): if self.user_auth_plugin: return self.user_auth_plugin else: return _ContextAuthPlugin(self.auth_token, self.service_catalog) def _get_read_deleted(self): return self._read_deleted def _set_read_deleted(self, read_deleted): if read_deleted not in ('no', 'yes', 'only'): raise ValueError(_("read_deleted can only be one of 'no', " "'yes' or 'only', not %r") % read_deleted) self._read_deleted = read_deleted def _del_read_deleted(self): del self._read_deleted read_deleted = property(_get_read_deleted, _set_read_deleted, _del_read_deleted) def to_dict(self): values = super(RequestContext, self).to_dict() # FIXME(dims): defensive hasattr() checks need to be # removed once we figure out why we are seeing stack # traces values.update({ 'user_id': getattr(self, 'user_id', None), 'project_id': getattr(self, 'project_id', None), 'is_admin': getattr(self, 'is_admin', None), 'read_deleted': getattr(self, 'read_deleted', 'no'), 'remote_address': getattr(self, 'remote_address', None), 'timestamp': utils.strtime(self.timestamp) if hasattr( self, 'timestamp') else None, 'request_id': getattr(self, 'request_id', None), 'quota_class': getattr(self, 'quota_class', None), 'user_name': getattr(self, 'user_name', None), 'service_catalog': getattr(self, 'service_catalog', None), 'project_name': getattr(self, 'project_name', None), }) # NOTE(tonyb): This can be removed once we're certain to have a # RequestContext contains 'is_admin_project', We can only get away with # this because we "know" the default value of 'is_admin_project' which # is very fragile. values.update({ 'is_admin_project': getattr(self, 'is_admin_project', True), }) return values @classmethod def from_dict(cls, values): return super(RequestContext, cls).from_dict( values, user_id=values.get('user_id'), project_id=values.get('project_id'), # TODO(sdague): oslo.context has show_deleted, if # possible, we should migrate to that in the future so we # don't need to be different here. read_deleted=values.get('read_deleted', 'no'), remote_address=values.get('remote_address'), timestamp=values.get('timestamp'), quota_class=values.get('quota_class'), service_catalog=values.get('service_catalog'), ) def elevated(self, read_deleted=None): """Return a version of this context with admin flag set.""" context = copy.copy(self) # context.roles must be deepcopied to leave original roles # without changes context.roles = copy.deepcopy(self.roles) context.is_admin = True if 'admin' not in context.roles: context.roles.append('admin') if read_deleted is not None: context.read_deleted = read_deleted return context def can(self, action, target=None, fatal=True): """Verifies that the given action is valid on the target in this context. :param action: string representing the action to be checked. :param target: dictionary representing the object of the action for object creation this should be a dictionary representing the location of the object e.g. ``{'project_id': instance.project_id}``. :param fatal: if False, will return False when an exception.Forbidden occurs. :raises nova.exception.Forbidden: if verification fails and fatal is True. :return: returns a non-False value (not necessarily "True") if authorized and False if not authorized and fatal is False. 
""" try: return policy.authorize(self, action, target) except exception.Forbidden: if fatal: raise return False def to_policy_values(self): policy = super(RequestContext, self).to_policy_values() policy['is_admin'] = self.is_admin return policy def __str__(self): return "" % self.to_dict() def get_context(): """A helper method to get a blank context. Note that overwrite is False here so this context will not update the greenthread-local stored context that is used when logging. """ return RequestContext(user_id=None, project_id=None, is_admin=False, overwrite=False) def get_admin_context(read_deleted="no"): # NOTE(alaski): This method should only be used when an admin context is # necessary for the entirety of the context lifetime. If that's not the # case please use get_context(), or create the RequestContext manually, and # use context.elevated() where necessary. Some periodic tasks may use # get_admin_context so that their database calls are not filtered on # project_id. return RequestContext(user_id=None, project_id=None, is_admin=True, read_deleted=read_deleted, overwrite=False) def is_user_context(context): """Indicates if the request context is a normal user.""" if not context: return False if context.is_admin: return False if not context.user_id or not context.project_id: return False return True def require_context(ctxt): """Raise exception.Forbidden() if context is not a user or an admin context. """ if not ctxt.is_admin and not is_user_context(ctxt): raise exception.Forbidden() def authorize_project_context(context, project_id): """Ensures a request has permission to access the given project.""" if is_user_context(context): if not context.project_id: raise exception.Forbidden() elif context.project_id != project_id: raise exception.Forbidden() def authorize_user_context(context, user_id): """Ensures a request has permission to access the given user.""" if is_user_context(context): if not context.user_id: raise exception.Forbidden() elif context.user_id != user_id: raise exception.Forbidden() def authorize_quota_class_context(context, class_name): """Ensures a request has permission to access the given quota class.""" if is_user_context(context): if not context.quota_class: raise exception.Forbidden() elif context.quota_class != class_name: raise exception.Forbidden() def set_target_cell(context, cell_mapping): """Adds database connection information to the context for communicating with the given target_cell. This is used for permanently targeting a cell in a context. Use this when you want all subsequent code to target a cell. Passing None for cell_mapping will untarget the context. :param context: The RequestContext to add connection information :param cell_mapping: An objects.CellMapping object or None """ global CELL_CACHE if cell_mapping is not None: # avoid circular import from nova.db import api as db from nova import rpc # Synchronize access to the cache by multiple API workers. 
@utils.synchronized(cell_mapping.uuid) def get_or_set_cached_cell_and_set_connections(): try: cell_tuple = CELL_CACHE[cell_mapping.uuid] except KeyError: db_connection_string = cell_mapping.database_connection context.db_connection = db.create_context_manager( db_connection_string) if not cell_mapping.transport_url.startswith('none'): context.mq_connection = rpc.create_transport( cell_mapping.transport_url) context.cell_uuid = cell_mapping.uuid CELL_CACHE[cell_mapping.uuid] = (context.db_connection, context.mq_connection) else: context.db_connection = cell_tuple[0] context.mq_connection = cell_tuple[1] context.cell_uuid = cell_mapping.uuid get_or_set_cached_cell_and_set_connections() else: context.db_connection = None context.mq_connection = None context.cell_uuid = None @contextmanager def target_cell(context, cell_mapping): """Yields a new context with connection information for a specific cell. This function yields a copy of the provided context, which is targeted to the referenced cell for MQ and DB connections. Passing None for cell_mapping will yield an untargetd copy of the context. :param context: The RequestContext to add connection information :param cell_mapping: An objects.CellMapping object or None """ # Create a sanitized copy of context by serializing and deserializing it # (like we would do over RPC). This help ensure that we have a clean # copy of the context with all the tracked attributes, but without any # of the hidden/private things we cache on a context. We do this to avoid # unintentional sharing of cached thread-local data across threads. # Specifically, this won't include any oslo_db-set transaction context, or # any existing cell targeting. cctxt = RequestContext.from_dict(context.to_dict()) set_target_cell(cctxt, cell_mapping) yield cctxt def scatter_gather_cells(context, cell_mappings, timeout, fn, *args, **kwargs): """Target cells in parallel and return their results. The first parameter in the signature of the function to call for each cell should be of type RequestContext. :param context: The RequestContext for querying cells :param cell_mappings: The CellMappings to target in parallel :param timeout: The total time in seconds to wait for all the results to be gathered :param fn: The function to call for each cell :param args: The args for the function to call for each cell, not including the RequestContext :param kwargs: The kwargs for the function to call for each cell :returns: A dict {cell_uuid: result} containing the joined results. The did_not_respond_sentinel will be returned if a cell did not respond within the timeout. The exception object will be returned if the call to a cell raised an exception. The exception will be logged. """ greenthreads = [] queue = eventlet.queue.LightQueue() results = {} def gather_result(cell_uuid, fn, *args, **kwargs): try: result = fn(*args, **kwargs) except Exception as e: # Only log the exception traceback for non-nova exceptions. if not isinstance(e, exception.NovaException): LOG.exception('Error gathering result from cell %s', cell_uuid) result = e.__class__(e.args) # The queue is already synchronized. 
queue.put((cell_uuid, result)) for cell_mapping in cell_mappings: with target_cell(context, cell_mapping) as cctxt: greenthreads.append((cell_mapping.uuid, utils.spawn(gather_result, cell_mapping.uuid, fn, cctxt, *args, **kwargs))) with eventlet.timeout.Timeout(timeout, exception.CellTimeout): try: while len(results) != len(greenthreads): cell_uuid, result = queue.get() results[cell_uuid] = result except exception.CellTimeout: # NOTE(melwitt): We'll fill in did_not_respond_sentinels at the # same time we kill/wait for the green threads. pass # Kill the green threads still pending and wait on those we know are done. for cell_uuid, greenthread in greenthreads: if cell_uuid not in results: greenthread.kill() results[cell_uuid] = did_not_respond_sentinel LOG.warning('Timed out waiting for response from cell %s', cell_uuid) else: greenthread.wait() return results def load_cells(): global CELLS if not CELLS: CELLS = objects.CellMappingList.get_all(get_admin_context()) LOG.debug('Found %(count)i cells: %(cells)s', dict(count=len(CELLS), cells=','.join([c.identity for c in CELLS]))) if not CELLS: LOG.error('No cells are configured, unable to continue') def is_cell_failure_sentinel(record): return (record is did_not_respond_sentinel or isinstance(record, Exception)) def scatter_gather_skip_cell0(context, fn, *args, **kwargs): """Target all cells except cell0 in parallel and return their results. The first parameter in the signature of the function to call for each cell should be of type RequestContext. There is a timeout for waiting on all results to be gathered. :param context: The RequestContext for querying cells :param fn: The function to call for each cell :param args: The args for the function to call for each cell, not including the RequestContext :param kwargs: The kwargs for the function to call for each cell :returns: A dict {cell_uuid: result} containing the joined results. The did_not_respond_sentinel will be returned if a cell did not respond within the timeout. The exception object will be returned if the call to a cell raised an exception. The exception will be logged. """ load_cells() cell_mappings = [cell for cell in CELLS if not cell.is_cell0()] return scatter_gather_cells(context, cell_mappings, CELL_TIMEOUT, fn, *args, **kwargs) def scatter_gather_single_cell(context, cell_mapping, fn, *args, **kwargs): """Target the provided cell and return its results or sentinels in case of failure. The first parameter in the signature of the function to call for each cell should be of type RequestContext. :param context: The RequestContext for querying cells :param cell_mapping: The CellMapping to target :param fn: The function to call for each cell :param args: The args for the function to call for each cell, not including the RequestContext :param kwargs: The kwargs for the function to call for this cell :returns: A dict {cell_uuid: result} containing the joined results. The did_not_respond_sentinel will be returned if the cell did not respond within the timeout. The exception object will be returned if the call to the cell raised an exception. The exception will be logged. """ return scatter_gather_cells(context, [cell_mapping], CELL_TIMEOUT, fn, *args, **kwargs) def scatter_gather_all_cells(context, fn, *args, **kwargs): """Target all cells in parallel and return their results. The first parameter in the signature of the function to call for each cell should be of type RequestContext. There is a timeout for waiting on all results to be gathered. 
:param context: The RequestContext for querying cells :param fn: The function to call for each cell :param args: The args for the function to call for each cell, not including the RequestContext :param kwargs: The kwargs for the function to call for each cell :returns: A dict {cell_uuid: result} containing the joined results. The did_not_respond_sentinel will be returned if a cell did not respond within the timeout. The exception object will be returned if the call to a cell raised an exception. The exception will be logged. """ load_cells() return scatter_gather_cells(context, CELLS, CELL_TIMEOUT, fn, *args, **kwargs) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/crypto.py0000664000175000017500000001265300000000000015432 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Wrappers around standard crypto data elements. Includes root and intermediate CAs, SSH key_pairs and x509 certificates. """ from __future__ import absolute_import import base64 import binascii import os from cryptography.hazmat import backends from cryptography.hazmat.primitives.asymmetric import padding from cryptography.hazmat.primitives import hashes from cryptography.hazmat.primitives import serialization from cryptography import x509 from oslo_concurrency import processutils from oslo_log import log as logging import paramiko import six import nova.conf from nova import exception from nova.i18n import _ from nova import utils LOG = logging.getLogger(__name__) CONF = nova.conf.CONF def generate_fingerprint(public_key): try: pub_bytes = public_key.encode('utf-8') # Test that the given public_key string is a proper ssh key. The # returned object is unused since pyca/cryptography does not have a # fingerprint method. serialization.load_ssh_public_key( pub_bytes, backends.default_backend()) pub_data = base64.b64decode(public_key.split(' ')[1]) digest = hashes.Hash(hashes.MD5(), backends.default_backend()) digest.update(pub_data) md5hash = digest.finalize() raw_fp = binascii.hexlify(md5hash) if six.PY3: raw_fp = raw_fp.decode('ascii') return ':'.join(a + b for a, b in zip(raw_fp[::2], raw_fp[1::2])) except Exception: raise exception.InvalidKeypair( reason=_('failed to generate fingerprint')) def generate_x509_fingerprint(pem_key): try: if isinstance(pem_key, six.text_type): pem_key = pem_key.encode('utf-8') cert = x509.load_pem_x509_certificate( pem_key, backends.default_backend()) raw_fp = binascii.hexlify(cert.fingerprint(hashes.SHA1())) if six.PY3: raw_fp = raw_fp.decode('ascii') return ':'.join(a + b for a, b in zip(raw_fp[::2], raw_fp[1::2])) except (ValueError, TypeError, binascii.Error) as ex: raise exception.InvalidKeypair( reason=_('failed to generate X509 fingerprint. 
' 'Error message: %s') % ex) def generate_key_pair(bits=2048): key = paramiko.RSAKey.generate(bits) keyout = six.StringIO() key.write_private_key(keyout) private_key = keyout.getvalue() public_key = '%s %s Generated-by-Nova' % (key.get_name(), key.get_base64()) fingerprint = generate_fingerprint(public_key) return (private_key, public_key, fingerprint) def ssh_encrypt_text(ssh_public_key, text): """Encrypt text with an ssh public key. If text is a Unicode string, encode it to UTF-8. """ if isinstance(text, six.text_type): text = text.encode('utf-8') try: pub_bytes = ssh_public_key.encode('utf-8') pub_key = serialization.load_ssh_public_key( pub_bytes, backends.default_backend()) return pub_key.encrypt(text, padding.PKCS1v15()) except Exception as exc: raise exception.EncryptionFailure(reason=six.text_type(exc)) def generate_winrm_x509_cert(user_id, bits=2048): """Generate a cert for passwordless auth for user in project.""" subject = '/CN=%s' % user_id upn = '%s@localhost' % user_id with utils.tempdir() as tmpdir: keyfile = os.path.abspath(os.path.join(tmpdir, 'temp.key')) conffile = os.path.abspath(os.path.join(tmpdir, 'temp.conf')) _create_x509_openssl_config(conffile, upn) (certificate, _err) = processutils.execute( 'openssl', 'req', '-x509', '-nodes', '-days', '3650', '-config', conffile, '-newkey', 'rsa:%s' % bits, '-outform', 'PEM', '-keyout', keyfile, '-subj', subject, '-extensions', 'v3_req_client', binary=True) (out, _err) = processutils.execute('openssl', 'pkcs12', '-export', '-inkey', keyfile, '-password', 'pass:', process_input=certificate, binary=True) private_key = base64.b64encode(out) fingerprint = generate_x509_fingerprint(certificate) if six.PY3: private_key = private_key.decode('ascii') certificate = certificate.decode('utf-8') return (private_key, certificate, fingerprint) def _create_x509_openssl_config(conffile, upn): content = ("distinguished_name = req_distinguished_name\n" "[req_distinguished_name]\n" "[v3_req_client]\n" "extendedKeyUsage = clientAuth\n" "subjectAltName = otherName:""1.3.6.1.4.1.311.20.2.3;UTF8:%s\n") with open(conffile, 'w') as file: file.write(content % upn) ././@PaxHeader0000000000000000000000000000003200000000000011450 xustar000000000000000026 mtime=1636736378.31047 nova-21.2.4/nova/db/0000775000175000017500000000000000000000000014116 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/__init__.py0000664000175000017500000000124700000000000016233 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Use nova.db.api instead. In the past this file imported * from there, which led to unwanted imports.""" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/api.py0000664000175000017500000014052100000000000015244 0ustar00zuulzuul00000000000000# Copyright (c) 2011 X.commerce, a business unit of eBay Inc. 
# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Defines interface for DB access. Functions in this module are imported into the nova.db namespace. Call these functions from nova.db namespace, not the nova.db.api namespace. All functions in this module return objects that implement a dictionary-like interface. Currently, many of these objects are sqlalchemy objects that implement a dictionary interface. However, a future goal is to have all of these objects be simple dictionaries. """ from oslo_db import concurrency from oslo_log import log as logging import nova.conf from nova.db import constants CONF = nova.conf.CONF # NOTE(cdent): These constants are re-defined in this module to preserve # existing references to them. MAX_INT = constants.MAX_INT SQL_SP_FLOAT_MAX = constants.SQL_SP_FLOAT_MAX _BACKEND_MAPPING = {'sqlalchemy': 'nova.db.sqlalchemy.api'} IMPL = concurrency.TpoolDbapiWrapper(CONF, backend_mapping=_BACKEND_MAPPING) LOG = logging.getLogger(__name__) ################### def constraint(**conditions): """Return a constraint object suitable for use with some updates.""" return IMPL.constraint(**conditions) def equal_any(*values): """Return an equality condition object suitable for use in a constraint. Equal_any conditions require that a model object's attribute equal any one of the given values. """ return IMPL.equal_any(*values) def not_equal(*values): """Return an inequality condition object suitable for use in a constraint. Not_equal conditions require that a model object's attribute differs from all of the given values. """ return IMPL.not_equal(*values) def create_context_manager(connection): """Return a context manager for a cell database connection.""" return IMPL.create_context_manager(connection=connection) ################### def select_db_reader_mode(f): """Decorator to select synchronous or asynchronous reader mode. The kwarg argument 'use_slave' defines reader mode. Asynchronous reader will be used if 'use_slave' is True and synchronous reader otherwise. """ return IMPL.select_db_reader_mode(f) ################### def service_destroy(context, service_id): """Destroy the service or raise if it does not exist.""" return IMPL.service_destroy(context, service_id) def service_get(context, service_id): """Get a service or raise if it does not exist.""" return IMPL.service_get(context, service_id) def service_get_by_uuid(context, service_uuid): """Get a service by it's uuid or raise ServiceNotFound if it does not exist. 
""" return IMPL.service_get_by_uuid(context, service_uuid) def service_get_minimum_version(context, binary): """Get the minimum service version in the database.""" return IMPL.service_get_minimum_version(context, binary) def service_get_by_host_and_topic(context, host, topic): """Get a service by hostname and topic it listens to.""" return IMPL.service_get_by_host_and_topic(context, host, topic) def service_get_by_host_and_binary(context, host, binary): """Get a service by hostname and binary.""" return IMPL.service_get_by_host_and_binary(context, host, binary) def service_get_all(context, disabled=None): """Get all services.""" return IMPL.service_get_all(context, disabled) def service_get_all_by_topic(context, topic): """Get all services for a given topic.""" return IMPL.service_get_all_by_topic(context, topic) def service_get_all_by_binary(context, binary, include_disabled=False): """Get services for a given binary. Includes disabled services if 'include_disabled' parameter is True """ return IMPL.service_get_all_by_binary(context, binary, include_disabled=include_disabled) def service_get_all_computes_by_hv_type(context, hv_type, include_disabled=False): """Get all compute services for a given hypervisor type. Includes disabled services if 'include_disabled' parameter is True. """ return IMPL.service_get_all_computes_by_hv_type(context, hv_type, include_disabled=include_disabled) def service_get_all_by_host(context, host): """Get all services for a given host.""" return IMPL.service_get_all_by_host(context, host) def service_get_by_compute_host(context, host): """Get the service entry for a given compute host. Returns the service entry joined with the compute_node entry. """ return IMPL.service_get_by_compute_host(context, host) def service_create(context, values): """Create a service from the values dictionary.""" return IMPL.service_create(context, values) def service_update(context, service_id, values): """Set the given properties on a service and update it. Raises NotFound if service does not exist. """ return IMPL.service_update(context, service_id, values) ################### def compute_node_get(context, compute_id): """Get a compute node by its id. :param context: The security context :param compute_id: ID of the compute node :returns: Dictionary-like object containing properties of the compute node Raises ComputeHostNotFound if compute node with the given ID doesn't exist. """ return IMPL.compute_node_get(context, compute_id) # TODO(edleafe): remove once the compute node resource provider migration is # complete, and this distinction is no longer necessary. def compute_node_get_model(context, compute_id): """Get a compute node sqlalchemy model object by its id. :param context: The security context :param compute_id: ID of the compute node :returns: Sqlalchemy model object containing properties of the compute node Raises ComputeHostNotFound if compute node with the given ID doesn't exist. """ return IMPL.compute_node_get_model(context, compute_id) def compute_nodes_get_by_service_id(context, service_id): """Get a list of compute nodes by their associated service id. :param context: The security context :param service_id: ID of the associated service :returns: List of dictionary-like objects, each containing properties of the compute node, including its corresponding service and statistics Raises ServiceNotFound if service with the given ID doesn't exist. 
""" return IMPL.compute_nodes_get_by_service_id(context, service_id) def compute_node_get_by_host_and_nodename(context, host, nodename): """Get a compute node by its associated host and nodename. :param context: The security context (admin) :param host: Name of the host :param nodename: Name of the node :returns: Dictionary-like object containing properties of the compute node, including its statistics Raises ComputeHostNotFound if host with the given name doesn't exist. """ return IMPL.compute_node_get_by_host_and_nodename(context, host, nodename) def compute_node_get_by_nodename(context, hypervisor_hostname): """Get a compute node by hypervisor_hostname. :param context: The security context (admin) :param hypervisor_hostname: Name of the node :returns: Dictionary-like object containing properties of the compute node, including its statistics Raises ComputeHostNotFound if hypervisor_hostname with the given name doesn't exist. """ return IMPL.compute_node_get_by_nodename(context, hypervisor_hostname) def compute_node_get_all(context): """Get all computeNodes. :param context: The security context :returns: List of dictionaries each containing compute node properties """ return IMPL.compute_node_get_all(context) def compute_node_get_all_mapped_less_than(context, mapped_less_than): """Get all ComputeNode objects with specific mapped values. :param context: The security context :param mapped_less_than: Get compute nodes with mapped less than this value :returns: List of dictionaries each containing compute node properties """ return IMPL.compute_node_get_all_mapped_less_than(context, mapped_less_than) def compute_node_get_all_by_pagination(context, limit=None, marker=None): """Get compute nodes by pagination. :param context: The security context :param limit: Maximum number of items to return :param marker: The last item of the previous page, the next results after this value will be returned :returns: List of dictionaries each containing compute node properties """ return IMPL.compute_node_get_all_by_pagination(context, limit=limit, marker=marker) def compute_node_get_all_by_host(context, host): """Get compute nodes by host name :param context: The security context (admin) :param host: Name of the host :returns: List of dictionaries each containing compute node properties """ return IMPL.compute_node_get_all_by_host(context, host) def compute_node_search_by_hypervisor(context, hypervisor_match): """Get compute nodes by hypervisor hostname. :param context: The security context :param hypervisor_match: The hypervisor hostname :returns: List of dictionary-like objects each containing compute node properties """ return IMPL.compute_node_search_by_hypervisor(context, hypervisor_match) def compute_node_create(context, values): """Create a compute node from the values dictionary. :param context: The security context :param values: Dictionary containing compute node properties :returns: Dictionary-like object containing the properties of the created node, including its corresponding service and statistics """ return IMPL.compute_node_create(context, values) def compute_node_update(context, compute_id, values): """Set the given properties on a compute node and update it. 
:param context: The security context :param compute_id: ID of the compute node :param values: Dictionary containing compute node properties to be updated :returns: Dictionary-like object containing the properties of the updated compute node, including its corresponding service and statistics Raises ComputeHostNotFound if compute node with the given ID doesn't exist. """ return IMPL.compute_node_update(context, compute_id, values) def compute_node_delete(context, compute_id): """Delete a compute node from the database. :param context: The security context :param compute_id: ID of the compute node Raises ComputeHostNotFound if compute node with the given ID doesn't exist. """ return IMPL.compute_node_delete(context, compute_id) def compute_node_statistics(context): """Get aggregate statistics over all compute nodes. :param context: The security context :returns: Dictionary containing compute node characteristics summed up over all the compute nodes, e.g. 'vcpus', 'free_ram_mb' etc. """ return IMPL.compute_node_statistics(context) ################### def certificate_create(context, values): """Create a certificate from the values dictionary.""" return IMPL.certificate_create(context, values) def certificate_get_all_by_project(context, project_id): """Get all certificates for a project.""" return IMPL.certificate_get_all_by_project(context, project_id) def certificate_get_all_by_user(context, user_id): """Get all certificates for a user.""" return IMPL.certificate_get_all_by_user(context, user_id) def certificate_get_all_by_user_and_project(context, user_id, project_id): """Get all certificates for a user and project.""" return IMPL.certificate_get_all_by_user_and_project(context, user_id, project_id) #################### def migration_update(context, id, values): """Update a migration instance.""" return IMPL.migration_update(context, id, values) def migration_create(context, values): """Create a migration record.""" return IMPL.migration_create(context, values) def migration_get(context, migration_id): """Finds a migration by the id.""" return IMPL.migration_get(context, migration_id) def migration_get_by_uuid(context, migration_uuid): """Finds a migration by the migration uuid.""" return IMPL.migration_get_by_uuid(context, migration_uuid) def migration_get_by_id_and_instance(context, migration_id, instance_uuid): """Finds a migration by the migration id and the instance uuid.""" return IMPL.migration_get_by_id_and_instance(context, migration_id, instance_uuid) def migration_get_by_instance_and_status(context, instance_uuid, status): """Finds a migration by the instance uuid its migrating.""" return IMPL.migration_get_by_instance_and_status(context, instance_uuid, status) def migration_get_unconfirmed_by_dest_compute(context, confirm_window, dest_compute): """Finds all unconfirmed migrations within the confirmation window for a specific destination compute host. """ return IMPL.migration_get_unconfirmed_by_dest_compute(context, confirm_window, dest_compute) def migration_get_in_progress_by_host_and_node(context, host, node): """Finds all migrations for the given host + node that are not yet confirmed or reverted. 
""" return IMPL.migration_get_in_progress_by_host_and_node(context, host, node) def migration_get_all_by_filters(context, filters, sort_keys=None, sort_dirs=None, limit=None, marker=None): """Finds all migrations using the provided filters.""" return IMPL.migration_get_all_by_filters(context, filters, sort_keys=sort_keys, sort_dirs=sort_dirs, limit=limit, marker=marker) def migration_get_in_progress_by_instance(context, instance_uuid, migration_type=None): """Finds all migrations of an instance in progress.""" return IMPL.migration_get_in_progress_by_instance(context, instance_uuid, migration_type) def migration_get_by_sort_filters(context, sort_keys, sort_dirs, values): """Get the uuid of the first migration in a sort order. Return the first migration (uuid) of the set where each column value is greater than or equal to the matching one in @values, for each key in @sort_keys. """ return IMPL.migration_get_by_sort_filters(context, sort_keys, sort_dirs, values) #################### def virtual_interface_create(context, values): """Create a virtual interface record in the database.""" return IMPL.virtual_interface_create(context, values) def virtual_interface_update(context, address, values): """Create a virtual interface record in the database.""" return IMPL.virtual_interface_update(context, address, values) def virtual_interface_get(context, vif_id): """Gets a virtual interface from the table.""" return IMPL.virtual_interface_get(context, vif_id) def virtual_interface_get_by_address(context, address): """Gets a virtual interface from the table filtering on address.""" return IMPL.virtual_interface_get_by_address(context, address) def virtual_interface_get_by_uuid(context, vif_uuid): """Gets a virtual interface from the table filtering on vif uuid.""" return IMPL.virtual_interface_get_by_uuid(context, vif_uuid) def virtual_interface_get_by_instance(context, instance_id): """Gets all virtual_interfaces for instance.""" return IMPL.virtual_interface_get_by_instance(context, instance_id) def virtual_interface_get_by_instance_and_network(context, instance_id, network_id): """Gets all virtual interfaces for instance.""" return IMPL.virtual_interface_get_by_instance_and_network(context, instance_id, network_id) def virtual_interface_delete_by_instance(context, instance_id): """Delete virtual interface records associated with instance.""" return IMPL.virtual_interface_delete_by_instance(context, instance_id) def virtual_interface_delete(context, id): """Delete virtual interface by id.""" return IMPL.virtual_interface_delete(context, id) def virtual_interface_get_all(context): """Gets all virtual interfaces from the table.""" return IMPL.virtual_interface_get_all(context) #################### def instance_create(context, values): """Create an instance from the values dictionary.""" return IMPL.instance_create(context, values) def instance_destroy(context, instance_uuid, constraint=None, hard_delete=False): """Destroy the instance or raise if it does not exist. 
:param context: request context object :param instance_uuid: uuid of the instance to delete :param constraint: a constraint object :param hard_delete: when set to True, removes all records related to the instance """ return IMPL.instance_destroy(context, instance_uuid, constraint=constraint, hard_delete=hard_delete) def instance_get_by_uuid(context, uuid, columns_to_join=None): """Get an instance or raise if it does not exist.""" return IMPL.instance_get_by_uuid(context, uuid, columns_to_join) def instance_get(context, instance_id, columns_to_join=None): """Get an instance or raise if it does not exist.""" return IMPL.instance_get(context, instance_id, columns_to_join=columns_to_join) def instance_get_all(context, columns_to_join=None): """Get all instances.""" return IMPL.instance_get_all(context, columns_to_join=columns_to_join) def instance_get_all_uuids_by_hosts(context, hosts): """Get a dict, keyed by hostname, of a list of instance uuids on one or more hosts. """ return IMPL.instance_get_all_uuids_by_hosts(context, hosts) def instance_get_all_by_filters(context, filters, sort_key='created_at', sort_dir='desc', limit=None, marker=None, columns_to_join=None): """Get all instances that match all filters.""" # Note: This function exists for backwards compatibility since calls to # the instance layer coming in over RPC may specify the single sort # key/direction values; in this case, this function is invoked instead # of the 'instance_get_all_by_filters_sort' function. return IMPL.instance_get_all_by_filters(context, filters, sort_key, sort_dir, limit=limit, marker=marker, columns_to_join=columns_to_join) def instance_get_all_by_filters_sort(context, filters, limit=None, marker=None, columns_to_join=None, sort_keys=None, sort_dirs=None): """Get all instances that match all filters sorted by multiple keys. sort_keys and sort_dirs must be a list of strings. """ return IMPL.instance_get_all_by_filters_sort( context, filters, limit=limit, marker=marker, columns_to_join=columns_to_join, sort_keys=sort_keys, sort_dirs=sort_dirs) def instance_get_by_sort_filters(context, sort_keys, sort_dirs, values): """Get the uuid of the first instance in a sort order. Return the first instance (uuid) of the set where each column value is greater than or equal to the matching one in @values, for each key in @sort_keys. """ return IMPL.instance_get_by_sort_filters(context, sort_keys, sort_dirs, values) def instance_get_active_by_window_joined(context, begin, end=None, project_id=None, host=None, columns_to_join=None, limit=None, marker=None): """Get instances and joins active during a certain time window. Specifying a project_id will filter for a certain project. Specifying a host will filter for instances on a given compute host. 
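    An illustrative, hedged call sketch (``window_start`` and ``window_end``
    are hypothetical datetime values that would normally be derived from the
    audit period)::

        instances = instance_get_active_by_window_joined(
            context, begin=window_start, end=window_end,
            project_id='some-project')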
""" return IMPL.instance_get_active_by_window_joined(context, begin, end, project_id, host, columns_to_join=columns_to_join, limit=limit, marker=marker) def instance_get_all_by_host(context, host, columns_to_join=None): """Get all instances belonging to a host.""" return IMPL.instance_get_all_by_host(context, host, columns_to_join) def instance_get_all_by_host_and_node(context, host, node, columns_to_join=None): """Get all instances belonging to a node.""" return IMPL.instance_get_all_by_host_and_node( context, host, node, columns_to_join=columns_to_join) def instance_get_all_by_host_and_not_type(context, host, type_id=None): """Get all instances belonging to a host with a different type_id.""" return IMPL.instance_get_all_by_host_and_not_type(context, host, type_id) # NOTE(hanlind): This method can be removed as conductor RPC API moves to v2.0. def instance_get_all_hung_in_rebooting(context, reboot_window): """Get all instances stuck in a rebooting state.""" return IMPL.instance_get_all_hung_in_rebooting(context, reboot_window) def instance_update(context, instance_uuid, values, expected=None): """Set the given properties on an instance and update it. Raises NotFound if instance does not exist. """ return IMPL.instance_update(context, instance_uuid, values, expected=expected) def instance_update_and_get_original(context, instance_uuid, values, columns_to_join=None, expected=None): """Set the given properties on an instance and update it. Return a shallow copy of the original instance reference, as well as the updated one. :param context: = request context object :param instance_uuid: = instance id or uuid :param values: = dict containing column values :returns: a tuple of the form (old_instance_ref, new_instance_ref) Raises NotFound if instance does not exist. """ rv = IMPL.instance_update_and_get_original(context, instance_uuid, values, columns_to_join=columns_to_join, expected=expected) return rv def instance_add_security_group(context, instance_id, security_group_id): """Associate the given security group with the given instance.""" return IMPL.instance_add_security_group(context, instance_id, security_group_id) def instance_remove_security_group(context, instance_id, security_group_id): """Disassociate the given security group from the given instance.""" return IMPL.instance_remove_security_group(context, instance_id, security_group_id) #################### def instance_info_cache_get(context, instance_uuid): """Gets an instance info cache from the table. :param instance_uuid: = uuid of the info cache's instance """ return IMPL.instance_info_cache_get(context, instance_uuid) def instance_info_cache_update(context, instance_uuid, values): """Update an instance info cache record in the table. 
:param instance_uuid: = uuid of info cache's instance :param values: = dict containing column values to update """ return IMPL.instance_info_cache_update(context, instance_uuid, values) def instance_info_cache_delete(context, instance_uuid): """Deletes an existing instance_info_cache record :param instance_uuid: = uuid of the instance tied to the cache record """ return IMPL.instance_info_cache_delete(context, instance_uuid) ################### def instance_extra_get_by_instance_uuid(context, instance_uuid, columns=None): """Get the instance extra record :param instance_uuid: = uuid of the instance tied to the topology record :param columns: A list of the columns to load, or None for 'all of them' """ return IMPL.instance_extra_get_by_instance_uuid( context, instance_uuid, columns=columns) def instance_extra_update_by_uuid(context, instance_uuid, updates): """Update the instance extra record by instance uuid :param instance_uuid: = uuid of the instance tied to the record :param updates: A dict of updates to apply """ return IMPL.instance_extra_update_by_uuid(context, instance_uuid, updates) ################### def key_pair_create(context, values): """Create a key_pair from the values dictionary.""" return IMPL.key_pair_create(context, values) def key_pair_destroy(context, user_id, name): """Destroy the key_pair or raise if it does not exist.""" return IMPL.key_pair_destroy(context, user_id, name) def key_pair_get(context, user_id, name): """Get a key_pair or raise if it does not exist.""" return IMPL.key_pair_get(context, user_id, name) def key_pair_get_all_by_user(context, user_id, limit=None, marker=None): """Get all key_pairs by user.""" return IMPL.key_pair_get_all_by_user( context, user_id, limit=limit, marker=marker) def key_pair_count_by_user(context, user_id): """Count number of key pairs for the given user ID.""" return IMPL.key_pair_count_by_user(context, user_id) ############### def quota_create(context, project_id, resource, limit, user_id=None): """Create a quota for the given project and resource.""" return IMPL.quota_create(context, project_id, resource, limit, user_id=user_id) def quota_get(context, project_id, resource, user_id=None): """Retrieve a quota or raise if it does not exist.""" return IMPL.quota_get(context, project_id, resource, user_id=user_id) def quota_get_all_by_project_and_user(context, project_id, user_id): """Retrieve all quotas associated with a given project and user.""" return IMPL.quota_get_all_by_project_and_user(context, project_id, user_id) def quota_get_all_by_project(context, project_id): """Retrieve all quotas associated with a given project.""" return IMPL.quota_get_all_by_project(context, project_id) def quota_get_per_project_resources(): """Retrieve the names of resources whose quotas are calculated on a per-project rather than a per-user basis. 
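    An illustrative sketch; the resource names shown in the comment are
    examples of what a backend may report, not a guaranteed list::

        per_project = quota_get_per_project_resources()
        # e.g. ['fixed_ips', 'floating_ips', 'networks']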
""" return IMPL.quota_get_per_project_resources() def quota_get_all(context, project_id): """Retrieve all user quotas associated with a given project.""" return IMPL.quota_get_all(context, project_id) def quota_update(context, project_id, resource, limit, user_id=None): """Update a quota or raise if it does not exist.""" return IMPL.quota_update(context, project_id, resource, limit, user_id=user_id) ################### def quota_class_create(context, class_name, resource, limit): """Create a quota class for the given name and resource.""" return IMPL.quota_class_create(context, class_name, resource, limit) def quota_class_get(context, class_name, resource): """Retrieve a quota class or raise if it does not exist.""" return IMPL.quota_class_get(context, class_name, resource) def quota_class_get_default(context): """Retrieve all default quotas.""" return IMPL.quota_class_get_default(context) def quota_class_get_all_by_name(context, class_name): """Retrieve all quotas associated with a given quota class.""" return IMPL.quota_class_get_all_by_name(context, class_name) def quota_class_update(context, class_name, resource, limit): """Update a quota class or raise if it does not exist.""" return IMPL.quota_class_update(context, class_name, resource, limit) ################### def quota_destroy_all_by_project_and_user(context, project_id, user_id): """Destroy all quotas associated with a given project and user.""" return IMPL.quota_destroy_all_by_project_and_user(context, project_id, user_id) def quota_destroy_all_by_project(context, project_id): """Destroy all quotas associated with a given project.""" return IMPL.quota_destroy_all_by_project(context, project_id) ################### def block_device_mapping_create(context, values, legacy=True): """Create an entry of block device mapping.""" return IMPL.block_device_mapping_create(context, values, legacy) def block_device_mapping_update(context, bdm_id, values, legacy=True): """Update an entry of block device mapping.""" return IMPL.block_device_mapping_update(context, bdm_id, values, legacy) def block_device_mapping_update_or_create(context, values, legacy=True): """Update an entry of block device mapping. 
If not existed, create a new entry """ return IMPL.block_device_mapping_update_or_create(context, values, legacy) def block_device_mapping_get_all_by_instance_uuids(context, instance_uuids): """Get all block device mapping belonging to a list of instances.""" return IMPL.block_device_mapping_get_all_by_instance_uuids(context, instance_uuids) def block_device_mapping_get_all_by_instance(context, instance_uuid): """Get all block device mapping belonging to an instance.""" return IMPL.block_device_mapping_get_all_by_instance(context, instance_uuid) def block_device_mapping_get_all_by_volume_id(context, volume_id, columns_to_join=None): """Get block device mapping for a given volume.""" return IMPL.block_device_mapping_get_all_by_volume_id(context, volume_id, columns_to_join) def block_device_mapping_get_by_instance_and_volume_id(context, volume_id, instance_uuid, columns_to_join=None): """Get block device mapping for a given volume ID and instance UUID.""" return IMPL.block_device_mapping_get_by_instance_and_volume_id( context, volume_id, instance_uuid, columns_to_join) def block_device_mapping_destroy(context, bdm_id): """Destroy the block device mapping.""" return IMPL.block_device_mapping_destroy(context, bdm_id) def block_device_mapping_destroy_by_instance_and_device(context, instance_uuid, device_name): """Destroy the block device mapping.""" return IMPL.block_device_mapping_destroy_by_instance_and_device( context, instance_uuid, device_name) def block_device_mapping_destroy_by_instance_and_volume(context, instance_uuid, volume_id): """Destroy the block device mapping.""" return IMPL.block_device_mapping_destroy_by_instance_and_volume( context, instance_uuid, volume_id) #################### def security_group_get_all(context): """Get all security groups.""" return IMPL.security_group_get_all(context) def security_group_get(context, security_group_id, columns_to_join=None): """Get security group by its id.""" return IMPL.security_group_get(context, security_group_id, columns_to_join) def security_group_get_by_name(context, project_id, group_name, columns_to_join=None): """Returns a security group with the specified name from a project.""" return IMPL.security_group_get_by_name(context, project_id, group_name, columns_to_join=None) def security_group_get_by_project(context, project_id): """Get all security groups belonging to a project.""" return IMPL.security_group_get_by_project(context, project_id) def security_group_get_by_instance(context, instance_uuid): """Get security groups to which the instance is assigned.""" return IMPL.security_group_get_by_instance(context, instance_uuid) def security_group_in_use(context, group_id): """Indicates if a security group is currently in use.""" return IMPL.security_group_in_use(context, group_id) def security_group_create(context, values): """Create a new security group.""" return IMPL.security_group_create(context, values) def security_group_update(context, security_group_id, values, columns_to_join=None): """Update a security group.""" return IMPL.security_group_update(context, security_group_id, values, columns_to_join=columns_to_join) def security_group_ensure_default(context): """Ensure default security group exists for a project_id. Returns a tuple with the first element being a bool indicating if the default security group previously existed. Second element is the dict used to create the default security group. 
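    An illustrative sketch of consuming the returned tuple (the variable
    names are hypothetical)::

        existed, group_values = security_group_ensure_default(context)
        # 'existed' is True if the default group was already present;
        # 'group_values' is the dict used to create it.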
""" return IMPL.security_group_ensure_default(context) def security_group_destroy(context, security_group_id): """Deletes a security group.""" return IMPL.security_group_destroy(context, security_group_id) #################### def security_group_rule_create(context, values): """Create a new security group.""" return IMPL.security_group_rule_create(context, values) def security_group_rule_get_by_security_group(context, security_group_id, columns_to_join=None): """Get all rules for a given security group.""" return IMPL.security_group_rule_get_by_security_group( context, security_group_id, columns_to_join=columns_to_join) def security_group_rule_get_by_instance(context, instance_uuid): """Get all rules for a given instance.""" return IMPL.security_group_rule_get_by_instance(context, instance_uuid) def security_group_rule_destroy(context, security_group_rule_id): """Deletes a security group rule.""" return IMPL.security_group_rule_destroy(context, security_group_rule_id) def security_group_rule_get(context, security_group_rule_id): """Gets a security group rule.""" return IMPL.security_group_rule_get(context, security_group_rule_id) def security_group_rule_count_by_group(context, security_group_id): """Count rules in a given security group.""" return IMPL.security_group_rule_count_by_group(context, security_group_id) ################### def pci_device_get_by_addr(context, node_id, dev_addr): """Get PCI device by address.""" return IMPL.pci_device_get_by_addr(context, node_id, dev_addr) def pci_device_get_by_id(context, id): """Get PCI device by id.""" return IMPL.pci_device_get_by_id(context, id) def pci_device_get_all_by_node(context, node_id): """Get all PCI devices for one host.""" return IMPL.pci_device_get_all_by_node(context, node_id) def pci_device_get_all_by_instance_uuid(context, instance_uuid): """Get PCI devices allocated to instance.""" return IMPL.pci_device_get_all_by_instance_uuid(context, instance_uuid) def pci_device_get_all_by_parent_addr(context, node_id, parent_addr): """Get all PCI devices by parent address.""" return IMPL.pci_device_get_all_by_parent_addr(context, node_id, parent_addr) def pci_device_destroy(context, node_id, address): """Delete a PCI device record.""" return IMPL.pci_device_destroy(context, node_id, address) def pci_device_update(context, node_id, address, value): """Update a pci device.""" return IMPL.pci_device_update(context, node_id, address, value) #################### def instance_metadata_get(context, instance_uuid): """Get all metadata for an instance.""" return IMPL.instance_metadata_get(context, instance_uuid) def instance_metadata_delete(context, instance_uuid, key): """Delete the given metadata item.""" IMPL.instance_metadata_delete(context, instance_uuid, key) def instance_metadata_update(context, instance_uuid, metadata, delete): """Update metadata if it exists, otherwise create it.""" return IMPL.instance_metadata_update(context, instance_uuid, metadata, delete) #################### def instance_system_metadata_get(context, instance_uuid): """Get all system metadata for an instance.""" return IMPL.instance_system_metadata_get(context, instance_uuid) def instance_system_metadata_update(context, instance_uuid, metadata, delete): """Update metadata if it exists, otherwise create it.""" IMPL.instance_system_metadata_update( context, instance_uuid, metadata, delete) #################### def agent_build_create(context, values): """Create a new agent build entry.""" return IMPL.agent_build_create(context, values) def 
agent_build_get_by_triple(context, hypervisor, os, architecture): """Get agent build by hypervisor/OS/architecture triple.""" return IMPL.agent_build_get_by_triple(context, hypervisor, os, architecture) def agent_build_get_all(context, hypervisor=None): """Get all agent builds.""" return IMPL.agent_build_get_all(context, hypervisor) def agent_build_destroy(context, agent_update_id): """Destroy agent build entry.""" IMPL.agent_build_destroy(context, agent_update_id) def agent_build_update(context, agent_build_id, values): """Update agent build entry.""" IMPL.agent_build_update(context, agent_build_id, values) #################### def bw_usage_get(context, uuid, start_period, mac): """Return bw usage for instance and mac in a given audit period.""" return IMPL.bw_usage_get(context, uuid, start_period, mac) def bw_usage_get_by_uuids(context, uuids, start_period): """Return bw usages for instance(s) in a given audit period.""" return IMPL.bw_usage_get_by_uuids(context, uuids, start_period) def bw_usage_update(context, uuid, mac, start_period, bw_in, bw_out, last_ctr_in, last_ctr_out, last_refreshed=None): """Update cached bandwidth usage for an instance's network based on mac address. Creates new record if needed. """ rv = IMPL.bw_usage_update(context, uuid, mac, start_period, bw_in, bw_out, last_ctr_in, last_ctr_out, last_refreshed=last_refreshed) return rv ################### def vol_get_usage_by_time(context, begin): """Return volumes usage that have been updated after a specified time.""" return IMPL.vol_get_usage_by_time(context, begin) def vol_usage_update(context, id, rd_req, rd_bytes, wr_req, wr_bytes, instance_id, project_id, user_id, availability_zone, update_totals=False): """Update cached volume usage for a volume Creates new record if needed. 
""" return IMPL.vol_usage_update(context, id, rd_req, rd_bytes, wr_req, wr_bytes, instance_id, project_id, user_id, availability_zone, update_totals=update_totals) ################### def s3_image_get(context, image_id): """Find local s3 image represented by the provided id.""" return IMPL.s3_image_get(context, image_id) def s3_image_get_by_uuid(context, image_uuid): """Find local s3 image represented by the provided uuid.""" return IMPL.s3_image_get_by_uuid(context, image_uuid) def s3_image_create(context, image_uuid): """Create local s3 image represented by provided uuid.""" return IMPL.s3_image_create(context, image_uuid) #################### def instance_fault_create(context, values): """Create a new Instance Fault.""" return IMPL.instance_fault_create(context, values) def instance_fault_get_by_instance_uuids(context, instance_uuids, latest=False): """Get all instance faults for the provided instance_uuids.""" return IMPL.instance_fault_get_by_instance_uuids(context, instance_uuids, latest=latest) #################### def action_start(context, values): """Start an action for an instance.""" return IMPL.action_start(context, values) def action_finish(context, values): """Finish an action for an instance.""" return IMPL.action_finish(context, values) def actions_get(context, instance_uuid, limit=None, marker=None, filters=None): """Get all instance actions for the provided instance and filters.""" return IMPL.actions_get(context, instance_uuid, limit, marker, filters) def action_get_by_request_id(context, uuid, request_id): """Get the action by request_id and given instance.""" return IMPL.action_get_by_request_id(context, uuid, request_id) def action_event_start(context, values): """Start an event on an instance action.""" return IMPL.action_event_start(context, values) def action_event_finish(context, values): """Finish an event on an instance action.""" return IMPL.action_event_finish(context, values) def action_events_get(context, action_id): """Get the events by action id.""" return IMPL.action_events_get(context, action_id) def action_event_get_by_id(context, action_id, event_id): return IMPL.action_event_get_by_id(context, action_id, event_id) #################### def get_instance_uuid_by_ec2_id(context, ec2_id): """Get uuid through ec2 id from instance_id_mappings table.""" return IMPL.get_instance_uuid_by_ec2_id(context, ec2_id) def ec2_instance_create(context, instance_uuid, id=None): """Create the ec2 id to instance uuid mapping on demand.""" return IMPL.ec2_instance_create(context, instance_uuid, id) def ec2_instance_get_by_uuid(context, instance_uuid): return IMPL.ec2_instance_get_by_uuid(context, instance_uuid) def ec2_instance_get_by_id(context, instance_id): return IMPL.ec2_instance_get_by_id(context, instance_id) #################### def task_log_end_task(context, task_name, period_beginning, period_ending, host, errors, message=None): """Mark a task as complete for a given host/time period.""" return IMPL.task_log_end_task(context, task_name, period_beginning, period_ending, host, errors, message) def task_log_begin_task(context, task_name, period_beginning, period_ending, host, task_items=None, message=None): """Mark a task as started for a given host/time period.""" return IMPL.task_log_begin_task(context, task_name, period_beginning, period_ending, host, task_items, message) def task_log_get_all(context, task_name, period_beginning, period_ending, host=None, state=None): return IMPL.task_log_get_all(context, task_name, period_beginning, period_ending, host, state) 
def task_log_get(context, task_name, period_beginning, period_ending, host, state=None): return IMPL.task_log_get(context, task_name, period_beginning, period_ending, host, state) #################### def archive_deleted_rows(context=None, max_rows=None, before=None): """Move up to max_rows rows from production tables to the corresponding shadow tables. :param context: nova.context.RequestContext for database access :param max_rows: Maximum number of rows to archive (required) :param before: optional datetime which when specified filters the records to only archive those records deleted before the given date :returns: 3-item tuple: - dict that maps table name to number of rows archived from that table, for example:: { 'instances': 5, 'block_device_mapping': 5, 'pci_devices': 2, } - list of UUIDs of instances that were archived - total number of rows that were archived """ return IMPL.archive_deleted_rows(context=context, max_rows=max_rows, before=before) def pcidevice_online_data_migration(context, max_count): return IMPL.pcidevice_online_data_migration(context, max_count) #################### def instance_tag_add(context, instance_uuid, tag): """Add tag to the instance.""" return IMPL.instance_tag_add(context, instance_uuid, tag) def instance_tag_set(context, instance_uuid, tags): """Replace all of the instance tags with specified list of tags.""" return IMPL.instance_tag_set(context, instance_uuid, tags) def instance_tag_get_by_instance_uuid(context, instance_uuid): """Get all tags for a given instance.""" return IMPL.instance_tag_get_by_instance_uuid(context, instance_uuid) def instance_tag_delete(context, instance_uuid, tag): """Delete specified tag from the instance.""" return IMPL.instance_tag_delete(context, instance_uuid, tag) def instance_tag_delete_all(context, instance_uuid): """Delete all tags from the instance.""" return IMPL.instance_tag_delete_all(context, instance_uuid) def instance_tag_exists(context, instance_uuid, tag): """Check if specified tag exist on the instance.""" return IMPL.instance_tag_exists(context, instance_uuid, tag) #################### def console_auth_token_create(context, values): """Create a console authorization.""" return IMPL.console_auth_token_create(context, values) def console_auth_token_get_valid(context, token_hash, instance_uuid=None): """Get a valid console authorization by token_hash and instance_uuid. The console authorizations expire at the time specified by their 'expires' column. An expired console auth token will not be returned to the caller - it is treated as if it does not exist. If instance_uuid is specified, the token is validated against both expiry and instance_uuid. If instance_uuid is not specified, the token is validated against expiry only. """ return IMPL.console_auth_token_get_valid(context, token_hash, instance_uuid=instance_uuid) def console_auth_token_destroy_all_by_instance(context, instance_uuid): """Delete all console authorizations belonging to the instance.""" return IMPL.console_auth_token_destroy_all_by_instance(context, instance_uuid) def console_auth_token_destroy_expired(context): """Delete expired console authorizations. The console authorizations expire at the time specified by their 'expires' column. This function is used to garbage collect expired tokens. """ return IMPL.console_auth_token_destroy_expired(context) def console_auth_token_destroy_expired_by_host(context, host): """Delete expired console authorizations belonging to the host. 
The console authorizations expire at the time specified by their 'expires' column. This function is used to garbage collect expired tokens associated with the given host. """ return IMPL.console_auth_token_destroy_expired_by_host(context, host) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/base.py0000664000175000017500000000171600000000000015407 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Base class for classes that need database access.""" import nova.db.api class Base(object): """DB driver is injected in the init method.""" def __init__(self): super(Base, self).__init__() self.db = nova.db.api ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/db/constants.py0000664000175000017500000000237100000000000016507 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Useful db-related constants. In their own file so they can be imported cleanly.""" # The maximum value a signed INT type may have MAX_INT = 0x7FFFFFFF # NOTE(dosaboy): This is supposed to represent the maximum value that we can # place into a SQL single precision float so that we can check whether values # are oversize. Postgres and MySQL both define this as their max whereas Sqlite # uses dynamic typing so this would not apply. Different dbs react in different # ways to oversize values e.g. postgres will raise an exception while mysql # will round off the value. Nevertheless we may still want to know prior to # insert whether the value is oversize or not. SQL_SP_FLOAT_MAX = 3.40282e+38 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/migration.py0000664000175000017500000000431000000000000016457 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Database setup and migration commands.""" from nova.db.sqlalchemy import migration IMPL = migration def db_sync(version=None, database='main', context=None): """Migrate the database to `version` or the most recent version.""" return IMPL.db_sync(version=version, database=database, context=context) def db_version(database='main', context=None): """Display the current database version.""" return IMPL.db_version(database=database, context=context) def db_initial_version(database='main'): """The starting version for the database.""" return IMPL.db_initial_version(database=database) def db_null_instance_uuid_scan(delete=False): """Utility for scanning the database to look for NULL instance uuid rows. Scans the backing nova database to look for table entries where instances.uuid or instance_uuid columns are NULL (except for the fixed_ips table since that can contain NULL instance_uuid entries by design). Dumps the tables that have NULL instance_uuid entries or optionally deletes them based on usage. This tool is meant to be used in conjunction with the 267 database migration script to detect and optionally cleanup NULL instance_uuid records. :param delete: If true, delete NULL instance_uuid records found, else just query to see if they exist for reporting. :returns: dict of table name to number of hits for NULL instance_uuid rows. """ return IMPL.db_null_instance_uuid_scan(delete=delete) ././@PaxHeader0000000000000000000000000000003200000000000011450 xustar000000000000000026 mtime=1636736378.31047 nova-21.2.4/nova/db/sqlalchemy/0000775000175000017500000000000000000000000016260 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/__init__.py0000664000175000017500000000000000000000000020357 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api.py0000664000175000017500000051475700000000000017426 0ustar00zuulzuul00000000000000# Copyright (c) 2011 X.commerce, a business unit of eBay Inc. # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Implementation of SQLAlchemy backend.""" import collections import copy import datetime import functools import inspect import sys from oslo_db import api as oslo_db_api from oslo_db import exception as db_exc from oslo_db.sqlalchemy import enginefacade from oslo_db.sqlalchemy import update_match from oslo_db.sqlalchemy import utils as sqlalchemyutils from oslo_log import log as logging from oslo_utils import excutils from oslo_utils import importutils from oslo_utils import timeutils from oslo_utils import uuidutils import six from six.moves import range import sqlalchemy as sa from sqlalchemy import and_ from sqlalchemy import Boolean from sqlalchemy.exc import NoSuchTableError from sqlalchemy.ext.compiler import compiles from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import or_ from sqlalchemy.orm import aliased from sqlalchemy.orm import joinedload from sqlalchemy.orm import noload from sqlalchemy.orm import subqueryload from sqlalchemy.orm import undefer from sqlalchemy.schema import Table from sqlalchemy import sql from sqlalchemy.sql.expression import asc from sqlalchemy.sql.expression import cast from sqlalchemy.sql.expression import desc from sqlalchemy.sql.expression import UpdateBase from sqlalchemy.sql import false from sqlalchemy.sql import func from sqlalchemy.sql import null from sqlalchemy.sql import true from nova import block_device from nova.compute import task_states from nova.compute import vm_states import nova.conf import nova.context from nova.db.sqlalchemy import models from nova import exception from nova.i18n import _ from nova import safe_utils profiler_sqlalchemy = importutils.try_import('osprofiler.sqlalchemy') CONF = nova.conf.CONF LOG = logging.getLogger(__name__) main_context_manager = enginefacade.transaction_context() api_context_manager = enginefacade.transaction_context() def _get_db_conf(conf_group, connection=None): kw = dict(conf_group.items()) if connection is not None: kw['connection'] = connection return kw def _context_manager_from_context(context): if context: try: return context.db_connection except AttributeError: pass def _joinedload_all(column): elements = column.split('.') joined = joinedload(elements.pop(0)) for element in elements: joined = joined.joinedload(element) return joined def configure(conf): main_context_manager.configure(**_get_db_conf(conf.database)) api_context_manager.configure(**_get_db_conf(conf.api_database)) if profiler_sqlalchemy and CONF.profiler.enabled \ and CONF.profiler.trace_sqlalchemy: main_context_manager.append_on_engine_create( lambda eng: profiler_sqlalchemy.add_tracing(sa, eng, "db")) api_context_manager.append_on_engine_create( lambda eng: profiler_sqlalchemy.add_tracing(sa, eng, "db")) def create_context_manager(connection=None): """Create a database context manager object. : param connection: The database connection string """ ctxt_mgr = enginefacade.transaction_context() ctxt_mgr.configure(**_get_db_conf(CONF.database, connection=connection)) return ctxt_mgr def get_context_manager(context): """Get a database context manager object. :param context: The request context that can contain a context manager """ return _context_manager_from_context(context) or main_context_manager def get_engine(use_slave=False, context=None): """Get a database engine object. 
:param use_slave: Whether to use the slave connection :param context: The request context that can contain a context manager """ ctxt_mgr = get_context_manager(context) if use_slave: return ctxt_mgr.reader.get_engine() return ctxt_mgr.writer.get_engine() def get_api_engine(): return api_context_manager.writer.get_engine() _SHADOW_TABLE_PREFIX = 'shadow_' _DEFAULT_QUOTA_NAME = 'default' PER_PROJECT_QUOTAS = ['fixed_ips', 'floating_ips', 'networks'] # NOTE(stephenfin): This is required and used by oslo.db def get_backend(): """The backend is this module itself.""" return sys.modules[__name__] def require_context(f): """Decorator to require *any* user or admin context. This does no authorization for user or project access matching, see :py:func:`nova.context.authorize_project_context` and :py:func:`nova.context.authorize_user_context`. The first argument to the wrapped function must be the context. """ @functools.wraps(f) def wrapper(*args, **kwargs): nova.context.require_context(args[0]) return f(*args, **kwargs) return wrapper def select_db_reader_mode(f): """Decorator to select synchronous or asynchronous reader mode. The kwarg argument 'use_slave' defines reader mode. Asynchronous reader will be used if 'use_slave' is True and synchronous reader otherwise. If 'use_slave' is not specified default value 'False' will be used. Wrapped function must have a context in the arguments. """ @functools.wraps(f) def wrapper(*args, **kwargs): wrapped_func = safe_utils.get_wrapped_function(f) keyed_args = inspect.getcallargs(wrapped_func, *args, **kwargs) context = keyed_args['context'] use_slave = keyed_args.get('use_slave', False) if use_slave: reader_mode = get_context_manager(context).async_ else: reader_mode = get_context_manager(context).reader with reader_mode.using(context): return f(*args, **kwargs) return wrapper def pick_context_manager_writer(f): """Decorator to use a writer db context manager. The db context manager will be picked from the RequestContext. Wrapped function must have a RequestContext in the arguments. """ @functools.wraps(f) def wrapped(context, *args, **kwargs): ctxt_mgr = get_context_manager(context) with ctxt_mgr.writer.using(context): return f(context, *args, **kwargs) return wrapped def pick_context_manager_reader(f): """Decorator to use a reader db context manager. The db context manager will be picked from the RequestContext. Wrapped function must have a RequestContext in the arguments. """ @functools.wraps(f) def wrapped(context, *args, **kwargs): ctxt_mgr = get_context_manager(context) with ctxt_mgr.reader.using(context): return f(context, *args, **kwargs) return wrapped def pick_context_manager_reader_allow_async(f): """Decorator to use a reader.allow_async db context manager. The db context manager will be picked from the RequestContext. Wrapped function must have a RequestContext in the arguments. """ @functools.wraps(f) def wrapped(context, *args, **kwargs): ctxt_mgr = get_context_manager(context) with ctxt_mgr.reader.allow_async.using(context): return f(context, *args, **kwargs) return wrapped def model_query(context, model, args=None, read_deleted=None, project_only=False): """Query helper that accounts for context's `read_deleted` field. :param context: NovaContext of the query. :param model: Model to query. Must be a subclass of ModelBase. :param args: Arguments to query. If None - model is used. :param read_deleted: If not None, overrides context's read_deleted field. 
        Permitted values are 'no', which does not return deleted values;
        'only', which only returns deleted values; and 'yes', which does
        not filter deleted values.
    :param project_only: If set and context is user-type, then restrict
        query to match the context's project_id. If set to 'allow_none',
        restriction includes project_id = None.
    """
    if read_deleted is None:
        read_deleted = context.read_deleted

    query_kwargs = {}
    if 'no' == read_deleted:
        query_kwargs['deleted'] = False
    elif 'only' == read_deleted:
        query_kwargs['deleted'] = True
    elif 'yes' == read_deleted:
        pass
    else:
        raise ValueError(_("Unrecognized read_deleted value '%s'")
                         % read_deleted)

    query = sqlalchemyutils.model_query(
        model, context.session, args, **query_kwargs)

    # We can't use oslo.db model_query's project_id here, as it doesn't allow
    # us to return both our projects and unowned projects.
    if nova.context.is_user_context(context) and project_only:
        if project_only == 'allow_none':
            query = query.\
                filter(or_(model.project_id == context.project_id,
                           model.project_id == null()))
        else:
            query = query.filter_by(project_id=context.project_id)

    return query


def convert_objects_related_datetimes(values, *datetime_keys):
    if not datetime_keys:
        datetime_keys = ('created_at', 'deleted_at', 'updated_at')

    for key in datetime_keys:
        if key in values and values[key]:
            if isinstance(values[key], six.string_types):
                try:
                    values[key] = timeutils.parse_strtime(values[key])
                except ValueError:
                    # Try alternate parsing since parse_strtime will fail
                    # with say converting '2015-05-28T19:59:38+00:00'
                    values[key] = timeutils.parse_isotime(values[key])
            # NOTE(danms): Strip UTC timezones from datetimes, since they're
            # stored that way in the database
            values[key] = values[key].replace(tzinfo=None)
    return values


###################


def constraint(**conditions):
    return Constraint(conditions)


def equal_any(*values):
    return EqualityCondition(values)


def not_equal(*values):
    return InequalityCondition(values)


class Constraint(object):
    def __init__(self, conditions):
        self.conditions = conditions

    def apply(self, model, query):
        for key, condition in self.conditions.items():
            for clause in condition.clauses(getattr(model, key)):
                query = query.filter(clause)
        return query


class EqualityCondition(object):
    def __init__(self, values):
        self.values = values

    def clauses(self, field):
        # method signature requires us to return an iterable even if for OR
        # operator this will actually be a single clause
        return [or_(*[field == value for value in self.values])]


class InequalityCondition(object):
    def __init__(self, values):
        self.values = values

    def clauses(self, field):
        return [field != value for value in self.values]


class DeleteFromSelect(UpdateBase):
    def __init__(self, table, select, column):
        self.table = table
        self.select = select
        self.column = column


# NOTE(guochbo): some versions of MySQL don't yet support subqueries with
# 'LIMIT & IN/ALL/ANY/SOME'. We need to work around this with a nested select.
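# An illustrative sketch of what the compiler below emits (table and column
# names are examples only): DeleteFromSelect(instances_tbl, some_select,
# instances_tbl.c.id) renders roughly as
#   DELETE FROM instances WHERE instances.id in
#       (SELECT T1.id FROM (<some_select>) as T1)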
@compiles(DeleteFromSelect) def visit_delete_from_select(element, compiler, **kw): return "DELETE FROM %s WHERE %s in (SELECT T1.%s FROM (%s) as T1)" % ( compiler.process(element.table, asfrom=True), compiler.process(element.column), element.column.name, compiler.process(element.select)) ################### @pick_context_manager_writer def service_destroy(context, service_id): service = service_get(context, service_id) model_query(context, models.Service).\ filter_by(id=service_id).\ soft_delete(synchronize_session=False) if service.binary == 'nova-compute': # TODO(sbauza): Remove the service_id filter in a later release # once we are sure that all compute nodes report the host field model_query(context, models.ComputeNode).\ filter(or_(models.ComputeNode.service_id == service_id, models.ComputeNode.host == service['host'])).\ soft_delete(synchronize_session=False) @pick_context_manager_reader def service_get(context, service_id): query = model_query(context, models.Service).filter_by(id=service_id) result = query.first() if not result: raise exception.ServiceNotFound(service_id=service_id) return result @pick_context_manager_reader def service_get_by_uuid(context, service_uuid): query = model_query(context, models.Service).filter_by(uuid=service_uuid) result = query.first() if not result: raise exception.ServiceNotFound(service_id=service_uuid) return result @pick_context_manager_reader_allow_async def service_get_minimum_version(context, binaries): min_versions = context.session.query( models.Service.binary, func.min(models.Service.version)).\ filter(models.Service.binary.in_(binaries)).\ filter(models.Service.deleted == 0).\ filter(models.Service.forced_down == false()).\ group_by(models.Service.binary) return dict(min_versions) @pick_context_manager_reader def service_get_all(context, disabled=None): query = model_query(context, models.Service) if disabled is not None: query = query.filter_by(disabled=disabled) return query.all() @pick_context_manager_reader def service_get_all_by_topic(context, topic): return model_query(context, models.Service, read_deleted="no").\ filter_by(disabled=False).\ filter_by(topic=topic).\ all() @pick_context_manager_reader def service_get_by_host_and_topic(context, host, topic): return model_query(context, models.Service, read_deleted="no").\ filter_by(disabled=False).\ filter_by(host=host).\ filter_by(topic=topic).\ first() @pick_context_manager_reader def service_get_all_by_binary(context, binary, include_disabled=False): query = model_query(context, models.Service).filter_by(binary=binary) if not include_disabled: query = query.filter_by(disabled=False) return query.all() @pick_context_manager_reader def service_get_all_computes_by_hv_type(context, hv_type, include_disabled=False): query = model_query(context, models.Service, read_deleted="no").\ filter_by(binary='nova-compute') if not include_disabled: query = query.filter_by(disabled=False) query = query.join(models.ComputeNode, models.Service.host == models.ComputeNode.host).\ filter(models.ComputeNode.hypervisor_type == hv_type).\ distinct() return query.all() @pick_context_manager_reader def service_get_by_host_and_binary(context, host, binary): result = model_query(context, models.Service, read_deleted="no").\ filter_by(host=host).\ filter_by(binary=binary).\ first() if not result: raise exception.HostBinaryNotFound(host=host, binary=binary) return result @pick_context_manager_reader def service_get_all_by_host(context, host): return model_query(context, models.Service, read_deleted="no").\ 
filter_by(host=host).\ all() @pick_context_manager_reader_allow_async def service_get_by_compute_host(context, host): result = model_query(context, models.Service, read_deleted="no").\ filter_by(host=host).\ filter_by(binary='nova-compute').\ first() if not result: raise exception.ComputeHostNotFound(host=host) return result @pick_context_manager_writer def service_create(context, values): service_ref = models.Service() service_ref.update(values) # We only auto-disable nova-compute services since those are the only # ones that can be enabled using the os-services REST API and they are # the only ones where being disabled means anything. It does # not make sense to be able to disable non-compute services like # nova-scheduler or nova-osapi_compute since that does nothing. if not CONF.enable_new_services and values.get('binary') == 'nova-compute': msg = _("New compute service disabled due to config option.") service_ref.disabled = True service_ref.disabled_reason = msg try: service_ref.save(context.session) except db_exc.DBDuplicateEntry as e: if 'binary' in e.columns: raise exception.ServiceBinaryExists(host=values.get('host'), binary=values.get('binary')) raise exception.ServiceTopicExists(host=values.get('host'), topic=values.get('topic')) return service_ref @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) @pick_context_manager_writer def service_update(context, service_id, values): service_ref = service_get(context, service_id) # Only servicegroup.drivers.db.DbDriver._report_state() updates # 'report_count', so if that value changes then store the timestamp # as the last time we got a state report. if 'report_count' in values: if values['report_count'] > service_ref.report_count: service_ref.last_seen_up = timeutils.utcnow() service_ref.update(values) return service_ref ################### def _compute_node_select(context, filters=None, limit=None, marker=None): if filters is None: filters = {} cn_tbl = sa.alias(models.ComputeNode.__table__, name='cn') select = sa.select([cn_tbl]) if context.read_deleted == "no": select = select.where(cn_tbl.c.deleted == 0) if "compute_id" in filters: select = select.where(cn_tbl.c.id == filters["compute_id"]) if "service_id" in filters: select = select.where(cn_tbl.c.service_id == filters["service_id"]) if "host" in filters: select = select.where(cn_tbl.c.host == filters["host"]) if "hypervisor_hostname" in filters: hyp_hostname = filters["hypervisor_hostname"] select = select.where(cn_tbl.c.hypervisor_hostname == hyp_hostname) if "mapped" in filters: select = select.where(cn_tbl.c.mapped < filters['mapped']) if marker is not None: try: compute_node_get(context, marker) except exception.ComputeHostNotFound: raise exception.MarkerNotFound(marker=marker) select = select.where(cn_tbl.c.id > marker) if limit is not None: select = select.limit(limit) # Explicitly order by id, so we're not dependent on the native sort # order of the underlying DB. select = select.order_by(asc("id")) return select def _compute_node_fetchall(context, filters=None, limit=None, marker=None): select = _compute_node_select(context, filters, limit=limit, marker=marker) engine = get_engine(context=context) conn = engine.connect() results = conn.execute(select).fetchall() # Callers expect dict-like objects, not SQLAlchemy RowProxy objects... 
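# (each row is handed back to callers as a plain dict, e.g. something like
# {'id': 1, 'host': 'compute1', ...}, so they never see RowProxy instances)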
results = [dict(r) for r in results] conn.close() return results @pick_context_manager_reader def compute_node_get(context, compute_id): results = _compute_node_fetchall(context, {"compute_id": compute_id}) if not results: raise exception.ComputeHostNotFound(host=compute_id) return results[0] @pick_context_manager_reader def compute_node_get_model(context, compute_id): # TODO(edleafe): remove once the compute node resource provider migration # is complete, and this distinction is no longer necessary. result = model_query(context, models.ComputeNode).\ filter_by(id=compute_id).\ first() if not result: raise exception.ComputeHostNotFound(host=compute_id) return result @pick_context_manager_reader def compute_nodes_get_by_service_id(context, service_id): results = _compute_node_fetchall(context, {"service_id": service_id}) if not results: raise exception.ServiceNotFound(service_id=service_id) return results @pick_context_manager_reader def compute_node_get_by_host_and_nodename(context, host, nodename): results = _compute_node_fetchall(context, {"host": host, "hypervisor_hostname": nodename}) if not results: raise exception.ComputeHostNotFound(host=host) return results[0] @pick_context_manager_reader def compute_node_get_by_nodename(context, hypervisor_hostname): results = _compute_node_fetchall(context, {"hypervisor_hostname": hypervisor_hostname}) if not results: raise exception.ComputeHostNotFound(host=hypervisor_hostname) return results[0] @pick_context_manager_reader_allow_async def compute_node_get_all_by_host(context, host): results = _compute_node_fetchall(context, {"host": host}) if not results: raise exception.ComputeHostNotFound(host=host) return results @pick_context_manager_reader def compute_node_get_all(context): return _compute_node_fetchall(context) @pick_context_manager_reader def compute_node_get_all_mapped_less_than(context, mapped_less_than): return _compute_node_fetchall(context, {'mapped': mapped_less_than}) @pick_context_manager_reader def compute_node_get_all_by_pagination(context, limit=None, marker=None): return _compute_node_fetchall(context, limit=limit, marker=marker) @pick_context_manager_reader def compute_node_search_by_hypervisor(context, hypervisor_match): field = models.ComputeNode.hypervisor_hostname return model_query(context, models.ComputeNode).\ filter(field.like('%%%s%%' % hypervisor_match)).\ all() @pick_context_manager_writer def compute_node_create(context, values): """Creates a new ComputeNode and populates the capacity fields with the most recent data. """ convert_objects_related_datetimes(values) compute_node_ref = models.ComputeNode() compute_node_ref.update(values) try: compute_node_ref.save(context.session) except db_exc.DBDuplicateEntry: with excutils.save_and_reraise_exception(logger=LOG) as err_ctx: # Check to see if we have a (soft) deleted ComputeNode with the # same UUID and if so just update it and mark as no longer (soft) # deleted. See bug 1839560 for details. if 'uuid' in values: # Get a fresh context for a new DB session and allow it to # get a deleted record. ctxt = nova.context.get_admin_context(read_deleted='yes') compute_node_ref = _compute_node_get_and_update_deleted( ctxt, values) # If we didn't get anything back we failed to find the node # by uuid and update it so re-raise the DBDuplicateEntry. if compute_node_ref: err_ctx.reraise = False return compute_node_ref @pick_context_manager_writer def _compute_node_get_and_update_deleted(context, values): """Find a ComputeNode by uuid, update and un-delete it. 
This is a special case from the ``compute_node_create`` method which needs to be separate to get a new Session. This method will update the ComputeNode, if found, to have deleted=0 and deleted_at=None values. :param context: request auth context which should be able to read deleted records :param values: values used to update the ComputeNode record - must include uuid :return: updated ComputeNode sqlalchemy model object if successfully found and updated, None otherwise """ cn = model_query( context, models.ComputeNode).filter_by(uuid=values['uuid']).first() if cn: # Update with the provided values but un-soft-delete. update_values = copy.deepcopy(values) update_values['deleted'] = 0 update_values['deleted_at'] = None return compute_node_update(context, cn.id, update_values) @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) @pick_context_manager_writer def compute_node_update(context, compute_id, values): """Updates the ComputeNode record with the most recent data.""" compute_ref = compute_node_get_model(context, compute_id) # Always update this, even if there's going to be no other # changes in data. This ensures that we invalidate the # scheduler cache of compute node data in case of races. values['updated_at'] = timeutils.utcnow() convert_objects_related_datetimes(values) compute_ref.update(values) return compute_ref @pick_context_manager_writer def compute_node_delete(context, compute_id): """Delete a ComputeNode record.""" result = model_query(context, models.ComputeNode).\ filter_by(id=compute_id).\ soft_delete(synchronize_session=False) if not result: raise exception.ComputeHostNotFound(host=compute_id) @pick_context_manager_reader def compute_node_statistics(context): """Compute statistics over all compute nodes.""" engine = get_engine(context=context) services_tbl = models.Service.__table__ inner_sel = sa.alias(_compute_node_select(context), name='inner_sel') # TODO(sbauza): Remove the service_id filter in a later release # once we are sure that all compute nodes report the host field j = sa.join( inner_sel, services_tbl, sql.and_( sql.or_( inner_sel.c.host == services_tbl.c.host, inner_sel.c.service_id == services_tbl.c.id ), services_tbl.c.disabled == false(), services_tbl.c.binary == 'nova-compute', services_tbl.c.deleted == 0 ) ) # NOTE(jaypipes): This COALESCE() stuff is temporary while the data # migration to the new resource providers inventories and allocations # tables is completed. 
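# The aggregate columns below are summed in a single SELECT over the joined
# compute_nodes/services rows; the caller ultimately receives a flat dict of
# ints, e.g. (illustrative values only)
#   {'count': 2, 'vcpus': 64, 'memory_mb': 262144, 'local_gb': 2048, ...}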
agg_cols = [ func.count().label('count'), sql.func.sum( inner_sel.c.vcpus ).label('vcpus'), sql.func.sum( inner_sel.c.memory_mb ).label('memory_mb'), sql.func.sum( inner_sel.c.local_gb ).label('local_gb'), sql.func.sum( inner_sel.c.vcpus_used ).label('vcpus_used'), sql.func.sum( inner_sel.c.memory_mb_used ).label('memory_mb_used'), sql.func.sum( inner_sel.c.local_gb_used ).label('local_gb_used'), sql.func.sum( inner_sel.c.free_ram_mb ).label('free_ram_mb'), sql.func.sum( inner_sel.c.free_disk_gb ).label('free_disk_gb'), sql.func.sum( inner_sel.c.current_workload ).label('current_workload'), sql.func.sum( inner_sel.c.running_vms ).label('running_vms'), sql.func.sum( inner_sel.c.disk_available_least ).label('disk_available_least'), ] select = sql.select(agg_cols).select_from(j) conn = engine.connect() results = conn.execute(select).fetchone() # Build a dict of the info--making no assumptions about result fields = ('count', 'vcpus', 'memory_mb', 'local_gb', 'vcpus_used', 'memory_mb_used', 'local_gb_used', 'free_ram_mb', 'free_disk_gb', 'current_workload', 'running_vms', 'disk_available_least') results = {field: int(results[idx] or 0) for idx, field in enumerate(fields)} conn.close() return results ################### @pick_context_manager_writer def certificate_create(context, values): certificate_ref = models.Certificate() for (key, value) in values.items(): certificate_ref[key] = value certificate_ref.save(context.session) return certificate_ref @pick_context_manager_reader def certificate_get_all_by_project(context, project_id): return model_query(context, models.Certificate, read_deleted="no").\ filter_by(project_id=project_id).\ all() @pick_context_manager_reader def certificate_get_all_by_user(context, user_id): return model_query(context, models.Certificate, read_deleted="no").\ filter_by(user_id=user_id).\ all() @pick_context_manager_reader def certificate_get_all_by_user_and_project(context, user_id, project_id): return model_query(context, models.Certificate, read_deleted="no").\ filter_by(user_id=user_id).\ filter_by(project_id=project_id).\ all() ################### @require_context @pick_context_manager_writer def virtual_interface_create(context, values): """Create a new virtual interface record in the database. :param values: = dict containing column values """ try: vif_ref = models.VirtualInterface() vif_ref.update(values) vif_ref.save(context.session) except db_exc.DBError: LOG.exception("VIF creation failed with a database error.") raise exception.VirtualInterfaceCreateException() return vif_ref def _virtual_interface_query(context): return model_query(context, models.VirtualInterface, read_deleted="no") @require_context @pick_context_manager_writer def virtual_interface_update(context, address, values): vif_ref = virtual_interface_get_by_address(context, address) vif_ref.update(values) vif_ref.save(context.session) return vif_ref @require_context @pick_context_manager_reader def virtual_interface_get(context, vif_id): """Gets a virtual interface from the table. :param vif_id: = id of the virtual interface """ vif_ref = _virtual_interface_query(context).\ filter_by(id=vif_id).\ first() return vif_ref @require_context @pick_context_manager_reader def virtual_interface_get_by_address(context, address): """Gets a virtual interface from the table. 
:param address: = the address of the interface you're looking to get """ try: vif_ref = _virtual_interface_query(context).\ filter_by(address=address).\ first() except db_exc.DBError: msg = _("Invalid virtual interface address %s in request") % address LOG.warning(msg) raise exception.InvalidIpAddressError(msg) return vif_ref @require_context @pick_context_manager_reader def virtual_interface_get_by_uuid(context, vif_uuid): """Gets a virtual interface from the table. :param vif_uuid: the uuid of the interface you're looking to get """ vif_ref = _virtual_interface_query(context).\ filter_by(uuid=vif_uuid).\ first() return vif_ref @require_context @pick_context_manager_reader_allow_async def virtual_interface_get_by_instance(context, instance_uuid): """Gets all virtual interfaces for instance. :param instance_uuid: = uuid of the instance to retrieve vifs for """ vif_refs = _virtual_interface_query(context).\ filter_by(instance_uuid=instance_uuid).\ order_by(asc("created_at"), asc("id")).\ all() return vif_refs @require_context @pick_context_manager_reader def virtual_interface_get_by_instance_and_network(context, instance_uuid, network_id): """Gets virtual interface for instance that's associated with network.""" vif_ref = _virtual_interface_query(context).\ filter_by(instance_uuid=instance_uuid).\ filter_by(network_id=network_id).\ first() return vif_ref @require_context @pick_context_manager_writer def virtual_interface_delete_by_instance(context, instance_uuid): """Delete virtual interface records that are associated with the instance given by instance_id. :param instance_uuid: = uuid of instance """ _virtual_interface_query(context).\ filter_by(instance_uuid=instance_uuid).\ soft_delete() @require_context @pick_context_manager_writer def virtual_interface_delete(context, id): """Delete virtual interface records. :param id: id of the interface """ _virtual_interface_query(context).\ filter_by(id=id).\ soft_delete() @require_context @pick_context_manager_reader def virtual_interface_get_all(context): """Get all vifs.""" vif_refs = _virtual_interface_query(context).all() return vif_refs ################### def _metadata_refs(metadata_dict, meta_class): metadata_refs = [] if metadata_dict: for k, v in metadata_dict.items(): metadata_ref = meta_class() metadata_ref['key'] = k metadata_ref['value'] = v metadata_refs.append(metadata_ref) return metadata_refs def _validate_unique_server_name(context, name): if not CONF.osapi_compute_unique_server_name_scope: return lowername = name.lower() base_query = model_query(context, models.Instance, read_deleted='no').\ filter(func.lower(models.Instance.hostname) == lowername) if CONF.osapi_compute_unique_server_name_scope == 'project': instance_with_same_name = base_query.\ filter_by(project_id=context.project_id).\ count() elif CONF.osapi_compute_unique_server_name_scope == 'global': instance_with_same_name = base_query.count() else: return if instance_with_same_name > 0: raise exception.InstanceExists(name=lowername) def _handle_objects_related_type_conversions(values): """Make sure that certain things in values (which may have come from an objects.instance.Instance object) are in suitable form for the database. 
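
    For example (illustrative), an IP address object such as
    IPAddress('10.0.0.1') under 'access_ip_v4' becomes the string
    '10.0.0.1', and ISO8601 strings for keys like 'launched_at' are
    parsed into naive datetime objects.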
""" # NOTE(danms): Make sure IP addresses are passed as strings to # the database engine for key in ('access_ip_v4', 'access_ip_v6'): if key in values and values[key] is not None: values[key] = str(values[key]) datetime_keys = ('created_at', 'deleted_at', 'updated_at', 'launched_at', 'terminated_at') convert_objects_related_datetimes(values, *datetime_keys) def _check_instance_exists_in_project(context, instance_uuid): if not model_query(context, models.Instance, read_deleted="no", project_only=True).filter_by( uuid=instance_uuid).first(): raise exception.InstanceNotFound(instance_id=instance_uuid) @require_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) @pick_context_manager_writer def instance_create(context, values): """Create a new Instance record in the database. context - request context object values - dict containing column values. """ default_group = security_group_ensure_default(context) values = values.copy() values['metadata'] = _metadata_refs( values.get('metadata'), models.InstanceMetadata) values['system_metadata'] = _metadata_refs( values.get('system_metadata'), models.InstanceSystemMetadata) _handle_objects_related_type_conversions(values) instance_ref = models.Instance() if not values.get('uuid'): values['uuid'] = uuidutils.generate_uuid() instance_ref['info_cache'] = models.InstanceInfoCache() info_cache = values.pop('info_cache', None) if info_cache is not None: instance_ref['info_cache'].update(info_cache) security_groups = values.pop('security_groups', []) instance_ref['extra'] = models.InstanceExtra() instance_ref['extra'].update( {'numa_topology': None, 'pci_requests': None, 'vcpu_model': None, 'trusted_certs': None, 'resources': None, }) instance_ref['extra'].update(values.pop('extra', {})) instance_ref.update(values) # Gather the security groups for the instance sg_models = [] if 'default' in security_groups: sg_models.append(default_group) # Generate a new list, so we don't modify the original security_groups = [x for x in security_groups if x != 'default'] if security_groups: sg_models.extend(_security_group_get_by_names( context, security_groups)) if 'hostname' in values: _validate_unique_server_name(context, values['hostname']) instance_ref.security_groups = sg_models context.session.add(instance_ref) # create the instance uuid to ec2_id mapping entry for instance ec2_instance_create(context, instance_ref['uuid']) # Parity with the return value of instance_get_all_by_filters_sort() # Obviously a newly-created instance record can't already have a fault # record because of the FK constraint, so this is fine. instance_ref.fault = None return instance_ref @require_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) @pick_context_manager_writer def instance_destroy(context, instance_uuid, constraint=None, hard_delete=False): if uuidutils.is_uuid_like(instance_uuid): instance_ref = _instance_get_by_uuid(context, instance_uuid) else: raise exception.InvalidUUID(uuid=instance_uuid) query = model_query(context, models.Instance).\ filter_by(uuid=instance_uuid) if constraint is not None: query = constraint.apply(models.Instance, query) # Either in hard or soft delete, we soft delete the instance first # to make sure that the constraints were met. 
count = query.soft_delete() if count == 0: raise exception.ConstraintNotMet() models_to_delete = [ models.SecurityGroupInstanceAssociation, models.InstanceInfoCache, models.InstanceMetadata, models.InstanceFault, models.InstanceExtra, models.InstanceSystemMetadata, models.BlockDeviceMapping, models.Migration, models.VirtualInterface ] # For most referenced models we filter by the instance_uuid column, but for # these models we filter by the uuid column. filtered_by_uuid = [models.InstanceIdMapping] for model in models_to_delete + filtered_by_uuid: key = 'instance_uuid' if model not in filtered_by_uuid else 'uuid' filter_ = {key: instance_uuid} if hard_delete: # We need to read any soft-deleted related records to make sure # and clean those up as well otherwise we can fail with ForeignKey # constraint errors when hard deleting the instance. model_query(context, model, read_deleted='yes').filter_by( **filter_).delete() else: model_query(context, model).filter_by(**filter_).soft_delete() # NOTE(snikitin): We can't use model_query here, because there is no # column 'deleted' in 'tags' or 'console_auth_tokens' tables. context.session.query(models.Tag).filter_by( resource_id=instance_uuid).delete() context.session.query(models.ConsoleAuthToken).filter_by( instance_uuid=instance_uuid).delete() # NOTE(cfriesen): We intentionally do not soft-delete entries in the # instance_actions or instance_actions_events tables because they # can be used by operators to find out what actions were performed on a # deleted instance. Both of these tables are special-cased in # _archive_deleted_rows_for_table(). if hard_delete: # NOTE(ttsiousts): In case of hard delete, we need to remove the # instance actions too since instance_uuid is a foreign key and # for this we need to delete the corresponding InstanceActionEvents actions = context.session.query(models.InstanceAction).filter_by( instance_uuid=instance_uuid).all() for action in actions: context.session.query(models.InstanceActionEvent).filter_by( action_id=action.id).delete() context.session.query(models.InstanceAction).filter_by( instance_uuid=instance_uuid).delete() # NOTE(ttsiouts): The instance is the last thing to be deleted in # order to respect all constraints context.session.query(models.Instance).filter_by( uuid=instance_uuid).delete() return instance_ref @require_context @pick_context_manager_reader_allow_async def instance_get_by_uuid(context, uuid, columns_to_join=None): return _instance_get_by_uuid(context, uuid, columns_to_join=columns_to_join) def _instance_get_by_uuid(context, uuid, columns_to_join=None): result = _build_instance_get(context, columns_to_join=columns_to_join).\ filter_by(uuid=uuid).\ first() if not result: raise exception.InstanceNotFound(instance_id=uuid) return result @require_context @pick_context_manager_reader def instance_get(context, instance_id, columns_to_join=None): try: result = _build_instance_get(context, columns_to_join=columns_to_join ).filter_by(id=instance_id).first() if not result: raise exception.InstanceNotFound(instance_id=instance_id) return result except db_exc.DBError: # NOTE(sdague): catch all in case the db engine chokes on the # id because it's too long of an int to store. 
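# (for example, an instance_id far larger than what an INT column can hold
# may make the engine raise DBError rather than simply returning no rows)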
LOG.warning("Invalid instance id %s in request", instance_id) raise exception.InvalidID(id=instance_id) def _build_instance_get(context, columns_to_join=None): query = model_query(context, models.Instance, project_only=True).\ options(_joinedload_all('security_groups.rules')).\ options(joinedload('info_cache')) if columns_to_join is None: columns_to_join = ['metadata', 'system_metadata'] for column in columns_to_join: if column in ['info_cache', 'security_groups']: # Already always joined above continue if 'extra.' in column: query = query.options(undefer(column)) elif column in ['metadata', 'system_metadata']: # NOTE(melwitt): We use subqueryload() instead of joinedload() for # metadata and system_metadata because of the one-to-many # relationship of the data. Directly joining these columns can # result in a large number of additional rows being queried if an # instance has a large number of (system_)metadata items, resulting # in a large data transfer. Instead, the subqueryload() will # perform additional queries to obtain metadata and system_metadata # for the instance. query = query.options(subqueryload(column)) else: query = query.options(joinedload(column)) # NOTE(alaski) Stop lazy loading of columns not needed. for col in ['metadata', 'system_metadata']: if col not in columns_to_join: query = query.options(noload(col)) # NOTE(melwitt): We need to use order_by() so that the # additional queries emitted by subqueryload() include the same ordering as # used by the parent query. # https://docs.sqlalchemy.org/en/13/orm/loading_relationships.html#the-importance-of-ordering return query.order_by(models.Instance.id) def _instances_fill_metadata(context, instances, manual_joins=None): """Selectively fill instances with manually-joined metadata. Note that instance will be converted to a dict. :param context: security context :param instances: list of instances to fill :param manual_joins: list of tables to manually join (can be any combination of 'metadata' and 'system_metadata' or None to take the default of both) """ uuids = [inst['uuid'] for inst in instances] if manual_joins is None: manual_joins = ['metadata', 'system_metadata'] meta = collections.defaultdict(list) if 'metadata' in manual_joins: for row in _instance_metadata_get_multi(context, uuids): meta[row['instance_uuid']].append(row) sys_meta = collections.defaultdict(list) if 'system_metadata' in manual_joins: for row in _instance_system_metadata_get_multi(context, uuids): sys_meta[row['instance_uuid']].append(row) pcidevs = collections.defaultdict(list) if 'pci_devices' in manual_joins: for row in _instance_pcidevs_get_multi(context, uuids): pcidevs[row['instance_uuid']].append(row) if 'fault' in manual_joins: faults = instance_fault_get_by_instance_uuids(context, uuids, latest=True) else: faults = {} filled_instances = [] for inst in instances: inst = dict(inst) inst['system_metadata'] = sys_meta[inst['uuid']] inst['metadata'] = meta[inst['uuid']] if 'pci_devices' in manual_joins: inst['pci_devices'] = pcidevs[inst['uuid']] inst_faults = faults.get(inst['uuid']) inst['fault'] = inst_faults and inst_faults[0] or None filled_instances.append(inst) return filled_instances def _manual_join_columns(columns_to_join): """Separate manually joined columns from columns_to_join If columns_to_join contains 'metadata', 'system_metadata', 'fault', or 'pci_devices' those columns are removed from columns_to_join and added to a manual_joins list to be used with the _instances_fill_metadata method. 
The columns_to_join formal parameter is copied and not modified, the return tuple has the modified columns_to_join list to be used with joinedload in a model query. :param:columns_to_join: List of columns to join in a model query. :return: tuple of (manual_joins, columns_to_join) """ manual_joins = [] columns_to_join_new = copy.copy(columns_to_join) for column in ('metadata', 'system_metadata', 'pci_devices', 'fault'): if column in columns_to_join_new: columns_to_join_new.remove(column) manual_joins.append(column) return manual_joins, columns_to_join_new @require_context @pick_context_manager_reader def instance_get_all(context, columns_to_join=None): if columns_to_join is None: columns_to_join_new = ['info_cache', 'security_groups'] manual_joins = ['metadata', 'system_metadata'] else: manual_joins, columns_to_join_new = ( _manual_join_columns(columns_to_join)) query = model_query(context, models.Instance) for column in columns_to_join_new: query = query.options(joinedload(column)) if not context.is_admin: # If we're not admin context, add appropriate filter.. if context.project_id: query = query.filter_by(project_id=context.project_id) else: query = query.filter_by(user_id=context.user_id) instances = query.all() return _instances_fill_metadata(context, instances, manual_joins) @require_context @pick_context_manager_reader_allow_async def instance_get_all_by_filters(context, filters, sort_key, sort_dir, limit=None, marker=None, columns_to_join=None): """Return instances matching all filters sorted by the primary key. See instance_get_all_by_filters_sort for more information. """ # Invoke the API with the multiple sort keys and directions using the # single sort key/direction return instance_get_all_by_filters_sort(context, filters, limit=limit, marker=marker, columns_to_join=columns_to_join, sort_keys=[sort_key], sort_dirs=[sort_dir]) def _get_query_nova_resource_by_changes_time(query, filters, model_object): """Filter resources by changes-since or changes-before. Special keys are used to tweek the query further:: | 'changes-since' - only return resources updated after | 'changes-before' - only return resources updated before Return query results. :param query: query to apply filters to. :param filters: dictionary of filters with regex values. :param model_object: object of the operation target. """ for change_filter in ['changes-since', 'changes-before']: if filters and filters.get(change_filter): changes_filter_time = timeutils.normalize_time( filters.get(change_filter)) updated_at = getattr(model_object, 'updated_at') if change_filter == 'changes-since': query = query.filter(updated_at >= changes_filter_time) else: query = query.filter(updated_at <= changes_filter_time) return query @require_context @pick_context_manager_reader_allow_async def instance_get_all_by_filters_sort(context, filters, limit=None, marker=None, columns_to_join=None, sort_keys=None, sort_dirs=None): """Return instances that match all filters sorted by the given keys. Deleted instances will be returned by default, unless there's a filter that says otherwise. Depending on the name of a filter, matching for that filter is performed using either exact matching or as regular expression matching. Exact matching is applied for the following filters:: | ['project_id', 'user_id', 'image_ref', | 'vm_state', 'instance_type_id', 'uuid', | 'metadata', 'host', 'system_metadata', 'locked', 'hidden'] Hidden instances will *not* be returned by default, unless there's a filter that says otherwise. 
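
    For example (illustrative), ``filters={'host': 'compute1',
    'display_name': 'web'}`` matches 'host' exactly, while 'display_name',
    not being in the exact-match list, is applied as a regex/LIKE match.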
A third type of filter (also using exact matching), filters based on instance metadata tags when supplied under a special key named 'filter':: | filters = { | 'filter': [ | {'name': 'tag-key', 'value': ''}, | {'name': 'tag-value', 'value': ''}, | {'name': 'tag:', 'value': ''} | ] | } Special keys are used to tweek the query further:: | 'changes-since' - only return instances updated after | 'changes-before' - only return instances updated before | 'deleted' - only return (or exclude) deleted instances | 'soft_deleted' - modify behavior of 'deleted' to either | include or exclude instances whose | vm_state is SOFT_DELETED. A fourth type of filter (also using exact matching), filters based on instance tags (not metadata tags). There are two types of these tags: `tags` -- One or more strings that will be used to filter results in an AND expression: T1 AND T2 `tags-any` -- One or more strings that will be used to filter results in an OR expression: T1 OR T2 `not-tags` -- One or more strings that will be used to filter results in an NOT AND expression: NOT (T1 AND T2) `not-tags-any` -- One or more strings that will be used to filter results in an NOT OR expression: NOT (T1 OR T2) Tags should be represented as list:: | filters = { | 'tags': [some-tag, some-another-tag], | 'tags-any: [some-any-tag, some-another-any-tag], | 'not-tags: [some-not-tag, some-another-not-tag], | 'not-tags-any: [some-not-any-tag, some-another-not-any-tag] | } """ # NOTE(mriedem): If the limit is 0 there is no point in even going # to the database since nothing is going to be returned anyway. if limit == 0: return [] sort_keys, sort_dirs = process_sort_params(sort_keys, sort_dirs, default_dir='desc') if columns_to_join is None: columns_to_join_new = ['info_cache', 'security_groups'] manual_joins = ['metadata', 'system_metadata'] else: manual_joins, columns_to_join_new = ( _manual_join_columns(columns_to_join)) query_prefix = context.session.query(models.Instance) for column in columns_to_join_new: if 'extra.' in column: query_prefix = query_prefix.options(undefer(column)) else: query_prefix = query_prefix.options(joinedload(column)) # Note: order_by is done in the sqlalchemy.utils.py paginate_query(), # no need to do it here as well # Make a copy of the filters dictionary to use going forward, as we'll # be modifying it and we shouldn't affect the caller's use of it. filters = copy.deepcopy(filters) model_object = models.Instance query_prefix = _get_query_nova_resource_by_changes_time(query_prefix, filters, model_object) if 'deleted' in filters: # Instances can be soft or hard deleted and the query needs to # include or exclude both deleted = filters.pop('deleted') if deleted: if filters.pop('soft_deleted', True): delete = or_( models.Instance.deleted == models.Instance.id, models.Instance.vm_state == vm_states.SOFT_DELETED ) query_prefix = query_prefix.\ filter(delete) else: query_prefix = query_prefix.\ filter(models.Instance.deleted == models.Instance.id) else: query_prefix = query_prefix.\ filter_by(deleted=0) if not filters.pop('soft_deleted', False): # It would be better to have vm_state not be nullable # but until then we test it explicitly as a workaround. 
not_soft_deleted = or_( models.Instance.vm_state != vm_states.SOFT_DELETED, models.Instance.vm_state == null() ) query_prefix = query_prefix.filter(not_soft_deleted) if 'cleaned' in filters: cleaned = 1 if filters.pop('cleaned') else 0 query_prefix = query_prefix.filter(models.Instance.cleaned == cleaned) if 'tags' in filters: tags = filters.pop('tags') # We build a JOIN ladder expression for each tag, JOIN'ing # the first tag to the instances table, and each subsequent # tag to the last JOIN'd tags table first_tag = tags.pop(0) query_prefix = query_prefix.join(models.Instance.tags) query_prefix = query_prefix.filter(models.Tag.tag == first_tag) for tag in tags: tag_alias = aliased(models.Tag) query_prefix = query_prefix.join(tag_alias, models.Instance.tags) query_prefix = query_prefix.filter(tag_alias.tag == tag) if 'tags-any' in filters: tags = filters.pop('tags-any') tag_alias = aliased(models.Tag) query_prefix = query_prefix.join(tag_alias, models.Instance.tags) query_prefix = query_prefix.filter(tag_alias.tag.in_(tags)) if 'not-tags' in filters: tags = filters.pop('not-tags') first_tag = tags.pop(0) subq = query_prefix.session.query(models.Tag.resource_id) subq = subq.join(models.Instance.tags) subq = subq.filter(models.Tag.tag == first_tag) for tag in tags: tag_alias = aliased(models.Tag) subq = subq.join(tag_alias, models.Instance.tags) subq = subq.filter(tag_alias.tag == tag) query_prefix = query_prefix.filter(~models.Instance.uuid.in_(subq)) if 'not-tags-any' in filters: tags = filters.pop('not-tags-any') query_prefix = query_prefix.filter(~models.Instance.tags.any( models.Tag.tag.in_(tags))) if not context.is_admin: # If we're not admin context, add appropriate filter.. if context.project_id: filters['project_id'] = context.project_id else: filters['user_id'] = context.user_id if filters.pop('hidden', False): query_prefix = query_prefix.filter(models.Instance.hidden == true()) else: # If the query should not include hidden instances, then # filter instances with hidden=False or hidden=NULL because # older records may have no value set. query_prefix = query_prefix.filter(or_( models.Instance.hidden == false(), models.Instance.hidden == null())) # Filters for exact matches that we can do along with the SQL query... # For other filters that don't match this, we will do regexp matching exact_match_filter_names = ['project_id', 'user_id', 'image_ref', 'vm_state', 'instance_type_id', 'uuid', 'metadata', 'host', 'task_state', 'system_metadata', 'locked', 'hidden'] # Filter the query query_prefix = _exact_instance_filter(query_prefix, filters, exact_match_filter_names) if query_prefix is None: return [] query_prefix = _regex_instance_filter(query_prefix, filters) # paginate query if marker is not None: try: marker = _instance_get_by_uuid( context.elevated(read_deleted='yes'), marker) except exception.InstanceNotFound: raise exception.MarkerNotFound(marker=marker) try: query_prefix = sqlalchemyutils.paginate_query(query_prefix, models.Instance, limit, sort_keys, marker=marker, sort_dirs=sort_dirs) except db_exc.InvalidSortKey: raise exception.InvalidSortKey() return _instances_fill_metadata(context, query_prefix.all(), manual_joins) @require_context @pick_context_manager_reader_allow_async def instance_get_by_sort_filters(context, sort_keys, sort_dirs, values): """Attempt to get a single instance based on a combination of sort keys, directions and filter values. This is used to try to find a marker instance when we don't have a marker uuid. 
This returns just a uuid of the instance that matched. """ model = models.Instance return _model_get_uuid_by_sort_filters(context, model, sort_keys, sort_dirs, values) def _model_get_uuid_by_sort_filters(context, model, sort_keys, sort_dirs, values): query = context.session.query(model.uuid) # NOTE(danms): Below is a re-implementation of our # oslo_db.sqlalchemy.utils.paginate_query() utility. We can't use that # directly because it does not return the marker and we need it to. # The below is basically the same algorithm, stripped down to just what # we need, and augmented with the filter criteria required for us to # get back the instance that would correspond to our query. # This is our position in sort_keys,sort_dirs,values for the loop below key_index = 0 # We build a list of criteria to apply to the query, which looks # approximately like this (assuming all ascending): # # OR(row.key1 > val1, # AND(row.key1 == val1, row.key2 > val2), # AND(row.key1 == val1, row.key2 == val2, row.key3 >= val3), # ) # # The final key is compared with the "or equal" variant so that # a complete match instance is still returned. criteria = [] for skey, sdir, val in zip(sort_keys, sort_dirs, values): # Apply ordering to our query for the key, direction we're processing if sdir == 'desc': query = query.order_by(desc(getattr(model, skey))) else: query = query.order_by(asc(getattr(model, skey))) # Build a list of equivalence requirements on keys we've already # processed through the loop. In other words, if we're adding # key2 > val2, make sure that key1 == val1 crit_attrs = [] for equal_attr in range(0, key_index): crit_attrs.append( (getattr(model, sort_keys[equal_attr]) == values[equal_attr])) model_attr = getattr(model, skey) if isinstance(model_attr.type, Boolean): model_attr = cast(model_attr, Integer) val = int(val) if skey == sort_keys[-1]: # If we are the last key, then we should use or-equal to # allow a complete match to be returned if sdir == 'asc': crit = (model_attr >= val) else: crit = (model_attr <= val) else: # If we're not the last key, then strict greater or less than # so we order strictly. if sdir == 'asc': crit = (model_attr > val) else: crit = (model_attr < val) # AND together all the above crit_attrs.append(crit) criteria.append(and_(*crit_attrs)) key_index += 1 # OR together all the ANDs query = query.filter(or_(*criteria)) # We can't raise InstanceNotFound because we don't have a uuid to # be looking for, so just return nothing if no match. result = query.limit(1).first() if result: # We're querying for a single column, which means we get back a # tuple of one thing. Strip that out and just return the uuid # for our caller. return result[0] else: return result def _db_connection_type(db_connection): """Returns a lowercase symbol for the db type. This is useful when we need to change what we are doing per DB (like handling regexes). In a CellsV2 world it probably needs to do something better than use the database configuration string. """ db_string = db_connection.split(':')[0].split('+')[0] return db_string.lower() def _safe_regex_mysql(raw_string): """Make regex safe to mysql. Certain items like '|' are interpreted raw by mysql REGEX. If you search for a single | then you trigger an error because it's expecting content on either side. For consistency sake we escape all '|'. This does mean we wouldn't support something like foo|bar to match completely different things, however, one can argue putting such complicated regex into name search probably means you are doing this wrong. 
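
    For example (illustrative), a name filter of "web|db" has its pipe
    escaped and is passed to MySQL's REGEXP operator as "web\|db".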
""" return raw_string.replace('|', '\\|') def _get_regexp_ops(connection): """Return safety filter and db opts for regex.""" regexp_op_map = { 'postgresql': '~', 'mysql': 'REGEXP', 'sqlite': 'REGEXP' } regex_safe_filters = { 'mysql': _safe_regex_mysql } db_type = _db_connection_type(connection) return (regex_safe_filters.get(db_type, lambda x: x), regexp_op_map.get(db_type, 'LIKE')) def _regex_instance_filter(query, filters): """Applies regular expression filtering to an Instance query. Returns the updated query. :param query: query to apply filters to :param filters: dictionary of filters with regex values """ model = models.Instance safe_regex_filter, db_regexp_op = _get_regexp_ops(CONF.database.connection) for filter_name in filters: try: column_attr = getattr(model, filter_name) except AttributeError: continue if 'property' == type(column_attr).__name__: continue filter_val = filters[filter_name] # Sometimes the REGEX filter value is not a string if not isinstance(filter_val, six.string_types): filter_val = str(filter_val) if db_regexp_op == 'LIKE': query = query.filter(column_attr.op(db_regexp_op)( u'%' + filter_val + u'%')) else: filter_val = safe_regex_filter(filter_val) query = query.filter(column_attr.op(db_regexp_op)( filter_val)) return query def _exact_instance_filter(query, filters, legal_keys): """Applies exact match filtering to an Instance query. Returns the updated query. Modifies filters argument to remove filters consumed. :param query: query to apply filters to :param filters: dictionary of filters; values that are lists, tuples, sets, or frozensets cause an 'IN' test to be performed, while exact matching ('==' operator) is used for other values :param legal_keys: list of keys to apply exact filtering to """ filter_dict = {} model = models.Instance # Walk through all the keys for key in legal_keys: # Skip ones we're not filtering on if key not in filters: continue # OK, filtering on this key; what value do we search for? value = filters.pop(key) if key in ('metadata', 'system_metadata'): column_attr = getattr(model, key) if isinstance(value, list): for item in value: for k, v in item.items(): query = query.filter(column_attr.any(key=k)) query = query.filter(column_attr.any(value=v)) else: for k, v in value.items(): query = query.filter(column_attr.any(key=k)) query = query.filter(column_attr.any(value=v)) elif isinstance(value, (list, tuple, set, frozenset)): if not value: return None # empty IN-predicate; short circuit # Looking for values in a list; apply to query directly column_attr = getattr(model, key) query = query.filter(column_attr.in_(value)) else: # OK, simple exact match; save for later filter_dict[key] = value # Apply simple exact matches if filter_dict: query = query.filter(*[getattr(models.Instance, k) == v for k, v in filter_dict.items()]) return query def process_sort_params(sort_keys, sort_dirs, default_keys=['created_at', 'id'], default_dir='asc'): """Process the sort parameters to include default keys. Creates a list of sort keys and a list of sort directions. Adds the default keys to the end of the list if they are not already included. 
When adding the default keys to the sort keys list, the associated direction is: 1) The first element in the 'sort_dirs' list (if specified), else 2) 'default_dir' value (Note that 'asc' is the default value since this is the default in sqlalchemy.utils.paginate_query) :param sort_keys: List of sort keys to include in the processed list :param sort_dirs: List of sort directions to include in the processed list :param default_keys: List of sort keys that need to be included in the processed list, they are added at the end of the list if not already specified. :param default_dir: Sort direction associated with each of the default keys that are not supplied, used when they are added to the processed list :returns: list of sort keys, list of sort directions :raise exception.InvalidInput: If more sort directions than sort keys are specified or if an invalid sort direction is specified """ # Determine direction to use for when adding default keys if sort_dirs and len(sort_dirs) != 0: default_dir_value = sort_dirs[0] else: default_dir_value = default_dir # Create list of keys (do not modify the input list) if sort_keys: result_keys = list(sort_keys) else: result_keys = [] # If a list of directions is not provided, use the default sort direction # for all provided keys if sort_dirs: result_dirs = [] # Verify sort direction for sort_dir in sort_dirs: if sort_dir not in ('asc', 'desc'): msg = _("Unknown sort direction, must be 'desc' or 'asc'") raise exception.InvalidInput(reason=msg) result_dirs.append(sort_dir) else: result_dirs = [default_dir_value for _sort_key in result_keys] # Ensure that the key and direction length match while len(result_dirs) < len(result_keys): result_dirs.append(default_dir_value) # Unless more direction are specified, which is an error if len(result_dirs) > len(result_keys): msg = _("Sort direction size exceeds sort key size") raise exception.InvalidInput(reason=msg) # Ensure defaults are included for key in default_keys: if key not in result_keys: result_keys.append(key) result_dirs.append(default_dir_value) return result_keys, result_dirs @require_context @pick_context_manager_reader_allow_async def instance_get_active_by_window_joined(context, begin, end=None, project_id=None, host=None, columns_to_join=None, limit=None, marker=None): """Return instances and joins that were active during window.""" query = context.session.query(models.Instance) if columns_to_join is None: columns_to_join_new = ['info_cache', 'security_groups'] manual_joins = ['metadata', 'system_metadata'] else: manual_joins, columns_to_join_new = ( _manual_join_columns(columns_to_join)) for column in columns_to_join_new: if 'extra.' 
in column: query = query.options(undefer(column)) else: query = query.options(joinedload(column)) query = query.filter(or_(models.Instance.terminated_at == null(), models.Instance.terminated_at > begin)) if end: query = query.filter(models.Instance.launched_at < end) if project_id: query = query.filter_by(project_id=project_id) if host: query = query.filter_by(host=host) if marker is not None: try: marker = _instance_get_by_uuid( context.elevated(read_deleted='yes'), marker) except exception.InstanceNotFound: raise exception.MarkerNotFound(marker=marker) query = sqlalchemyutils.paginate_query( query, models.Instance, limit, ['project_id', 'uuid'], marker=marker) return _instances_fill_metadata(context, query.all(), manual_joins) def _instance_get_all_query(context, project_only=False, joins=None): if joins is None: joins = ['info_cache', 'security_groups'] query = model_query(context, models.Instance, project_only=project_only) for column in joins: if 'extra.' in column: query = query.options(undefer(column)) else: query = query.options(joinedload(column)) return query @pick_context_manager_reader_allow_async def instance_get_all_by_host(context, host, columns_to_join=None): query = _instance_get_all_query(context, joins=columns_to_join) return _instances_fill_metadata(context, query.filter_by(host=host).all(), manual_joins=columns_to_join) def _instance_get_all_uuids_by_hosts(context, hosts): itbl = models.Instance.__table__ default_deleted_value = itbl.c.deleted.default.arg sel = sql.select([itbl.c.host, itbl.c.uuid]) sel = sel.where(sql.and_( itbl.c.deleted == default_deleted_value, itbl.c.host.in_(sa.bindparam('hosts', expanding=True)))) # group the instance UUIDs by hostname res = collections.defaultdict(list) for rec in context.session.execute(sel, {'hosts': hosts}).fetchall(): res[rec[0]].append(rec[1]) return res @pick_context_manager_reader def instance_get_all_uuids_by_hosts(context, hosts): """Return a dict, keyed by hostname, of a list of the instance uuids on the host for each supplied hostname, not Instance model objects. The dict is a defaultdict of list, thus inspecting the dict for a host not in the dict will return an empty list not a KeyError. """ return _instance_get_all_uuids_by_hosts(context, hosts) @pick_context_manager_reader def instance_get_all_by_host_and_node(context, host, node, columns_to_join=None): if columns_to_join is None: manual_joins = [] else: candidates = ['system_metadata', 'metadata'] manual_joins = [x for x in columns_to_join if x in candidates] columns_to_join = list(set(columns_to_join) - set(candidates)) return _instances_fill_metadata(context, _instance_get_all_query( context, joins=columns_to_join).filter_by(host=host). filter_by(node=node).all(), manual_joins=manual_joins) @pick_context_manager_reader def instance_get_all_by_host_and_not_type(context, host, type_id=None): return _instances_fill_metadata(context, _instance_get_all_query(context).filter_by(host=host). filter(models.Instance.instance_type_id != type_id).all()) # NOTE(hanlind): This method can be removed as conductor RPC API moves to v2.0. @pick_context_manager_reader def instance_get_all_hung_in_rebooting(context, reboot_window): reboot_window = (timeutils.utcnow() - datetime.timedelta(seconds=reboot_window)) # NOTE(danms): this is only used in the _poll_rebooting_instances() # call in compute/manager, so we can avoid the metadata lookups # explicitly return _instances_fill_metadata(context, model_query(context, models.Instance). 
filter(models.Instance.updated_at <= reboot_window). filter_by(task_state=task_states.REBOOTING).all(), manual_joins=[]) def _retry_instance_update(): """Wrap with oslo_db_api.wrap_db_retry, and also retry on UnknownInstanceUpdateConflict. """ exception_checker = \ lambda exc: isinstance(exc, (exception.UnknownInstanceUpdateConflict,)) return oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True, exception_checker=exception_checker) @require_context @_retry_instance_update() @pick_context_manager_writer def instance_update(context, instance_uuid, values, expected=None): return _instance_update(context, instance_uuid, values, expected) @require_context @_retry_instance_update() @pick_context_manager_writer def instance_update_and_get_original(context, instance_uuid, values, columns_to_join=None, expected=None): """Set the given properties on an instance and update it. Return a shallow copy of the original instance reference, as well as the updated one. :param context: = request context object :param instance_uuid: = instance uuid :param values: = dict containing column values If "expected_task_state" exists in values, the update can only happen when the task state before update matches expected_task_state. Otherwise a UnexpectedTaskStateError is thrown. :returns: a tuple of the form (old_instance_ref, new_instance_ref) Raises NotFound if instance does not exist. """ instance_ref = _instance_get_by_uuid(context, instance_uuid, columns_to_join=columns_to_join) return (copy.copy(instance_ref), _instance_update( context, instance_uuid, values, expected, original=instance_ref)) # NOTE(danms): This updates the instance's metadata list in-place and in # the database to avoid stale data and refresh issues. It assumes the # delete=True behavior of instance_metadata_update(...) def _instance_metadata_update_in_place(context, instance, metadata_type, model, metadata): metadata = dict(metadata) to_delete = [] for keyvalue in instance[metadata_type]: key = keyvalue['key'] if key in metadata: keyvalue['value'] = metadata.pop(key) elif key not in metadata: to_delete.append(keyvalue) # NOTE: we have to hard_delete here otherwise we will get more than one # system_metadata record when we read deleted for an instance; # regular metadata doesn't have the same problem because we don't # allow reading deleted regular metadata anywhere. if metadata_type == 'system_metadata': for condemned in to_delete: context.session.delete(condemned) instance[metadata_type].remove(condemned) else: for condemned in to_delete: condemned.soft_delete(context.session) for key, value in metadata.items(): newitem = model() newitem.update({'key': key, 'value': value, 'instance_uuid': instance['uuid']}) context.session.add(newitem) instance[metadata_type].append(newitem) def _instance_update(context, instance_uuid, values, expected, original=None): if not uuidutils.is_uuid_like(instance_uuid): raise exception.InvalidUUID(uuid=instance_uuid) # NOTE(mdbooth): We pop values from this dict below, so we copy it here to # ensure there are no side effects for the caller or if we retry the # function due to a db conflict. 
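# NOTE: 'expected' acts as a compare-and-swap guard, e.g. (illustrative)
# expected={'task_state': [task_states.SPAWNING]} only lets the UPDATE
# proceed while the row still carries that task state; otherwise
# update_on_match() raises and the error is translated further down.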
updates = copy.copy(values) if expected is None: expected = {} else: # Coerce all single values to singleton lists expected = {k: [None] if v is None else sqlalchemyutils.to_list(v) for (k, v) in expected.items()} # Extract 'expected_' values from values dict, as these aren't actually # updates for field in ('task_state', 'vm_state'): expected_field = 'expected_%s' % field if expected_field in updates: value = updates.pop(expected_field, None) # Coerce all single values to singleton lists if value is None: expected[field] = [None] else: expected[field] = sqlalchemyutils.to_list(value) # Values which need to be updated separately metadata = updates.pop('metadata', None) system_metadata = updates.pop('system_metadata', None) _handle_objects_related_type_conversions(updates) # Hostname is potentially unique, but this is enforced in code rather # than the DB. The query below races, but the number of users of # osapi_compute_unique_server_name_scope is small, and a robust fix # will be complex. This is intentionally left as is for the moment. if 'hostname' in updates: _validate_unique_server_name(context, updates['hostname']) compare = models.Instance(uuid=instance_uuid, **expected) try: instance_ref = model_query(context, models.Instance, project_only=True).\ update_on_match(compare, 'uuid', updates) except update_match.NoRowsMatched: # Update failed. Try to find why and raise a specific error. # We should get here only because our expected values were not current # when update_on_match executed. Having failed, we now have a hint that # the values are out of date and should check them. # This code is made more complex because we are using repeatable reads. # If we have previously read the original instance in the current # transaction, reading it again will return the same data, even though # the above update failed because it has changed: it is not possible to # determine what has changed in this transaction. In this case we raise # UnknownInstanceUpdateConflict, which will cause the operation to be # retried in a new transaction. # Because of the above, if we have previously read the instance in the # current transaction it will have been passed as 'original', and there # is no point refreshing it. If we have not previously read the # instance, we can fetch it here and we will get fresh data. if original is None: original = _instance_get_by_uuid(context, instance_uuid) conflicts_expected = {} conflicts_actual = {} for (field, expected_values) in expected.items(): actual = original[field] if actual not in expected_values: conflicts_expected[field] = expected_values conflicts_actual[field] = actual # Exception properties exc_props = { 'instance_uuid': instance_uuid, 'expected': conflicts_expected, 'actual': conflicts_actual } # There was a conflict, but something (probably the MySQL read view, # but possibly an exceptionally unlikely second race) is preventing us # from seeing what it is. When we go round again we'll get a fresh # transaction and a fresh read view. if len(conflicts_actual) == 0: raise exception.UnknownInstanceUpdateConflict(**exc_props) # Task state gets special handling for convenience. 
We raise the # specific error UnexpectedDeletingTaskStateError or # UnexpectedTaskStateError as appropriate if 'task_state' in conflicts_actual: conflict_task_state = conflicts_actual['task_state'] if conflict_task_state == task_states.DELETING: exc = exception.UnexpectedDeletingTaskStateError else: exc = exception.UnexpectedTaskStateError # Everything else is an InstanceUpdateConflict else: exc = exception.InstanceUpdateConflict raise exc(**exc_props) if metadata is not None: _instance_metadata_update_in_place(context, instance_ref, 'metadata', models.InstanceMetadata, metadata) if system_metadata is not None: _instance_metadata_update_in_place(context, instance_ref, 'system_metadata', models.InstanceSystemMetadata, system_metadata) return instance_ref @pick_context_manager_writer def instance_add_security_group(context, instance_uuid, security_group_id): """Associate the given security group with the given instance.""" sec_group_ref = models.SecurityGroupInstanceAssociation() sec_group_ref.update({'instance_uuid': instance_uuid, 'security_group_id': security_group_id}) sec_group_ref.save(context.session) @require_context @pick_context_manager_writer def instance_remove_security_group(context, instance_uuid, security_group_id): """Disassociate the given security group from the given instance.""" model_query(context, models.SecurityGroupInstanceAssociation).\ filter_by(instance_uuid=instance_uuid).\ filter_by(security_group_id=security_group_id).\ soft_delete() ################### @require_context @pick_context_manager_reader def instance_info_cache_get(context, instance_uuid): """Gets an instance info cache from the table. :param instance_uuid: = uuid of the info cache's instance """ return model_query(context, models.InstanceInfoCache).\ filter_by(instance_uuid=instance_uuid).\ first() @require_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) @pick_context_manager_writer def instance_info_cache_update(context, instance_uuid, values): """Update an instance info cache record in the table. :param instance_uuid: = uuid of info cache's instance :param values: = dict containing column values to update """ convert_objects_related_datetimes(values) info_cache = model_query(context, models.InstanceInfoCache).\ filter_by(instance_uuid=instance_uuid).\ first() needs_create = False if info_cache and info_cache['deleted']: raise exception.InstanceInfoCacheNotFound( instance_uuid=instance_uuid) elif not info_cache: # NOTE(tr3buchet): just in case someone blows away an instance's # cache entry, re-create it. values['instance_uuid'] = instance_uuid info_cache = models.InstanceInfoCache(**values) needs_create = True try: with get_context_manager(context).writer.savepoint.using(context): if needs_create: info_cache.save(context.session) else: info_cache.update(values) except db_exc.DBDuplicateEntry: # NOTE(sirp): Possible race if two greenthreads attempt to # recreate the instance cache entry at the same time. First one # wins. 
pass return info_cache @require_context @pick_context_manager_writer def instance_info_cache_delete(context, instance_uuid): """Deletes an existing instance_info_cache record :param instance_uuid: = uuid of the instance tied to the cache record """ model_query(context, models.InstanceInfoCache).\ filter_by(instance_uuid=instance_uuid).\ soft_delete() ################### def _instance_extra_create(context, values): inst_extra_ref = models.InstanceExtra() inst_extra_ref.update(values) inst_extra_ref.save(context.session) return inst_extra_ref @pick_context_manager_writer def instance_extra_update_by_uuid(context, instance_uuid, values): rows_updated = model_query(context, models.InstanceExtra).\ filter_by(instance_uuid=instance_uuid).\ update(values) if not rows_updated: LOG.debug("Created instance_extra for %s", instance_uuid) create_values = copy.copy(values) create_values["instance_uuid"] = instance_uuid _instance_extra_create(context, create_values) rows_updated = 1 return rows_updated @pick_context_manager_reader def instance_extra_get_by_instance_uuid(context, instance_uuid, columns=None): query = model_query(context, models.InstanceExtra).\ filter_by(instance_uuid=instance_uuid) if columns is None: columns = ['numa_topology', 'pci_requests', 'flavor', 'vcpu_model', 'trusted_certs', 'resources', 'migration_context'] for column in columns: query = query.options(undefer(column)) instance_extra = query.first() return instance_extra ################### @require_context @pick_context_manager_writer def key_pair_create(context, values): try: key_pair_ref = models.KeyPair() key_pair_ref.update(values) key_pair_ref.save(context.session) return key_pair_ref except db_exc.DBDuplicateEntry: raise exception.KeyPairExists(key_name=values['name']) @require_context @pick_context_manager_writer def key_pair_destroy(context, user_id, name): result = model_query(context, models.KeyPair).\ filter_by(user_id=user_id).\ filter_by(name=name).\ soft_delete() if not result: raise exception.KeypairNotFound(user_id=user_id, name=name) @require_context @pick_context_manager_reader def key_pair_get(context, user_id, name): result = model_query(context, models.KeyPair).\ filter_by(user_id=user_id).\ filter_by(name=name).\ first() if not result: raise exception.KeypairNotFound(user_id=user_id, name=name) return result @require_context @pick_context_manager_reader def key_pair_get_all_by_user(context, user_id, limit=None, marker=None): marker_row = None if marker is not None: marker_row = model_query(context, models.KeyPair, read_deleted="no").\ filter_by(name=marker).filter_by(user_id=user_id).first() if not marker_row: raise exception.MarkerNotFound(marker=marker) query = model_query(context, models.KeyPair, read_deleted="no").\ filter_by(user_id=user_id) query = sqlalchemyutils.paginate_query( query, models.KeyPair, limit, ['name'], marker=marker_row) return query.all() @require_context @pick_context_manager_reader def key_pair_count_by_user(context, user_id): return model_query(context, models.KeyPair, read_deleted="no").\ filter_by(user_id=user_id).\ count() ################### @require_context @pick_context_manager_reader def quota_get(context, project_id, resource, user_id=None): model = models.ProjectUserQuota if user_id else models.Quota query = model_query(context, model).\ filter_by(project_id=project_id).\ filter_by(resource=resource) if user_id: query = query.filter_by(user_id=user_id) result = query.first() if not result: if user_id: raise exception.ProjectUserQuotaNotFound(project_id=project_id, 
user_id=user_id) else: raise exception.ProjectQuotaNotFound(project_id=project_id) return result @require_context @pick_context_manager_reader def quota_get_all_by_project_and_user(context, project_id, user_id): user_quotas = model_query(context, models.ProjectUserQuota, (models.ProjectUserQuota.resource, models.ProjectUserQuota.hard_limit)).\ filter_by(project_id=project_id).\ filter_by(user_id=user_id).\ all() result = {'project_id': project_id, 'user_id': user_id} for user_quota in user_quotas: result[user_quota.resource] = user_quota.hard_limit return result @require_context @pick_context_manager_reader def quota_get_all_by_project(context, project_id): rows = model_query(context, models.Quota, read_deleted="no").\ filter_by(project_id=project_id).\ all() result = {'project_id': project_id} for row in rows: result[row.resource] = row.hard_limit return result @require_context @pick_context_manager_reader def quota_get_all(context, project_id): result = model_query(context, models.ProjectUserQuota).\ filter_by(project_id=project_id).\ all() return result def quota_get_per_project_resources(): return PER_PROJECT_QUOTAS @pick_context_manager_writer def quota_create(context, project_id, resource, limit, user_id=None): per_user = user_id and resource not in PER_PROJECT_QUOTAS quota_ref = models.ProjectUserQuota() if per_user else models.Quota() if per_user: quota_ref.user_id = user_id quota_ref.project_id = project_id quota_ref.resource = resource quota_ref.hard_limit = limit try: quota_ref.save(context.session) except db_exc.DBDuplicateEntry: raise exception.QuotaExists(project_id=project_id, resource=resource) return quota_ref @pick_context_manager_writer def quota_update(context, project_id, resource, limit, user_id=None): per_user = user_id and resource not in PER_PROJECT_QUOTAS model = models.ProjectUserQuota if per_user else models.Quota query = model_query(context, model).\ filter_by(project_id=project_id).\ filter_by(resource=resource) if per_user: query = query.filter_by(user_id=user_id) result = query.update({'hard_limit': limit}) if not result: if per_user: raise exception.ProjectUserQuotaNotFound(project_id=project_id, user_id=user_id) else: raise exception.ProjectQuotaNotFound(project_id=project_id) ################### @require_context @pick_context_manager_reader def quota_class_get(context, class_name, resource): result = model_query(context, models.QuotaClass, read_deleted="no").\ filter_by(class_name=class_name).\ filter_by(resource=resource).\ first() if not result: raise exception.QuotaClassNotFound(class_name=class_name) return result @pick_context_manager_reader def quota_class_get_default(context): rows = model_query(context, models.QuotaClass, read_deleted="no").\ filter_by(class_name=_DEFAULT_QUOTA_NAME).\ all() result = {'class_name': _DEFAULT_QUOTA_NAME} for row in rows: result[row.resource] = row.hard_limit return result @require_context @pick_context_manager_reader def quota_class_get_all_by_name(context, class_name): rows = model_query(context, models.QuotaClass, read_deleted="no").\ filter_by(class_name=class_name).\ all() result = {'class_name': class_name} for row in rows: result[row.resource] = row.hard_limit return result @pick_context_manager_writer def quota_class_create(context, class_name, resource, limit): quota_class_ref = models.QuotaClass() quota_class_ref.class_name = class_name quota_class_ref.resource = resource quota_class_ref.hard_limit = limit quota_class_ref.save(context.session) return quota_class_ref @pick_context_manager_writer def 
quota_class_update(context, class_name, resource, limit): result = model_query(context, models.QuotaClass, read_deleted="no").\ filter_by(class_name=class_name).\ filter_by(resource=resource).\ update({'hard_limit': limit}) if not result: raise exception.QuotaClassNotFound(class_name=class_name) ################### @pick_context_manager_writer def quota_destroy_all_by_project_and_user(context, project_id, user_id): model_query(context, models.ProjectUserQuota, read_deleted="no").\ filter_by(project_id=project_id).\ filter_by(user_id=user_id).\ soft_delete(synchronize_session=False) @pick_context_manager_writer def quota_destroy_all_by_project(context, project_id): model_query(context, models.Quota, read_deleted="no").\ filter_by(project_id=project_id).\ soft_delete(synchronize_session=False) model_query(context, models.ProjectUserQuota, read_deleted="no").\ filter_by(project_id=project_id).\ soft_delete(synchronize_session=False) ################### def _block_device_mapping_get_query(context, columns_to_join=None): if columns_to_join is None: columns_to_join = [] query = model_query(context, models.BlockDeviceMapping) for column in columns_to_join: query = query.options(joinedload(column)) return query def _scrub_empty_str_values(dct, keys_to_scrub): """Remove any keys found in sequence keys_to_scrub from the dict if they have the value ''. """ for key in keys_to_scrub: if key in dct and dct[key] == '': del dct[key] def _from_legacy_values(values, legacy, allow_updates=False): if legacy: if allow_updates and block_device.is_safe_for_update(values): return values else: return block_device.BlockDeviceDict.from_legacy(values) else: return values def _set_or_validate_uuid(values): uuid = values.get('uuid') # values doesn't contain uuid, or it's blank if not uuid: values['uuid'] = uuidutils.generate_uuid() # values contains a uuid else: if not uuidutils.is_uuid_like(uuid): raise exception.InvalidUUID(uuid=uuid) @require_context @pick_context_manager_writer def block_device_mapping_create(context, values, legacy=True): _scrub_empty_str_values(values, ['volume_size']) values = _from_legacy_values(values, legacy) convert_objects_related_datetimes(values) _set_or_validate_uuid(values) bdm_ref = models.BlockDeviceMapping() bdm_ref.update(values) bdm_ref.save(context.session) return bdm_ref @require_context @pick_context_manager_writer def block_device_mapping_update(context, bdm_id, values, legacy=True): _scrub_empty_str_values(values, ['volume_size']) values = _from_legacy_values(values, legacy, allow_updates=True) convert_objects_related_datetimes(values) query = _block_device_mapping_get_query(context).filter_by(id=bdm_id) query.update(values) return query.first() @pick_context_manager_writer def block_device_mapping_update_or_create(context, values, legacy=True): # TODO(mdbooth): Remove this method entirely. Callers should know whether # they require update or create, and call the appropriate method. _scrub_empty_str_values(values, ['volume_size']) values = _from_legacy_values(values, legacy, allow_updates=True) convert_objects_related_datetimes(values) result = None # NOTE(xqueralt,danms): Only update a BDM when device_name or # uuid was provided. Prefer the uuid, if available, but fall # back to device_name if no uuid is provided, which can happen # for BDMs created before we had a uuid. We allow empty device # names so they will be set later by the manager. 
if 'uuid' in values: query = _block_device_mapping_get_query(context) result = query.filter_by(instance_uuid=values['instance_uuid'], uuid=values['uuid']).one_or_none() if not result and values['device_name']: query = _block_device_mapping_get_query(context) result = query.filter_by(instance_uuid=values['instance_uuid'], device_name=values['device_name']).first() if result: result.update(values) else: # Either the device_name or uuid doesn't exist in the database yet, or # neither was provided. Both cases mean creating a new BDM. _set_or_validate_uuid(values) result = models.BlockDeviceMapping(**values) result.save(context.session) # NOTE(xqueralt): Prevent from having multiple swap devices for the # same instance. This will delete all the existing ones. if block_device.new_format_is_swap(values): query = _block_device_mapping_get_query(context) query = query.filter_by(instance_uuid=values['instance_uuid'], source_type='blank', guest_format='swap') query = query.filter(models.BlockDeviceMapping.id != result.id) query.soft_delete() return result @require_context @pick_context_manager_reader_allow_async def block_device_mapping_get_all_by_instance_uuids(context, instance_uuids): if not instance_uuids: return [] return _block_device_mapping_get_query(context).filter( models.BlockDeviceMapping.instance_uuid.in_(instance_uuids)).all() @require_context @pick_context_manager_reader_allow_async def block_device_mapping_get_all_by_instance(context, instance_uuid): return _block_device_mapping_get_query(context).\ filter_by(instance_uuid=instance_uuid).\ all() @require_context @pick_context_manager_reader def block_device_mapping_get_all_by_volume_id(context, volume_id, columns_to_join=None): return _block_device_mapping_get_query(context, columns_to_join=columns_to_join).\ filter_by(volume_id=volume_id).\ all() @require_context @pick_context_manager_reader def block_device_mapping_get_by_instance_and_volume_id(context, volume_id, instance_uuid, columns_to_join=None): return _block_device_mapping_get_query(context, columns_to_join=columns_to_join).\ filter_by(volume_id=volume_id).\ filter_by(instance_uuid=instance_uuid).\ first() @require_context @pick_context_manager_writer def block_device_mapping_destroy(context, bdm_id): _block_device_mapping_get_query(context).\ filter_by(id=bdm_id).\ soft_delete() @require_context @pick_context_manager_writer def block_device_mapping_destroy_by_instance_and_volume(context, instance_uuid, volume_id): _block_device_mapping_get_query(context).\ filter_by(instance_uuid=instance_uuid).\ filter_by(volume_id=volume_id).\ soft_delete() @require_context @pick_context_manager_writer def block_device_mapping_destroy_by_instance_and_device(context, instance_uuid, device_name): _block_device_mapping_get_query(context).\ filter_by(instance_uuid=instance_uuid).\ filter_by(device_name=device_name).\ soft_delete() ################### @require_context @pick_context_manager_writer def security_group_create(context, values): security_group_ref = models.SecurityGroup() # FIXME(devcamcar): Unless I do this, rules fails with lazy load exception # once save() is called. This will get cleaned up in next orm pass. 
security_group_ref.rules = [] security_group_ref.update(values) try: with get_context_manager(context).writer.savepoint.using(context): security_group_ref.save(context.session) except db_exc.DBDuplicateEntry: raise exception.SecurityGroupExists( project_id=values['project_id'], security_group_name=values['name']) return security_group_ref def _security_group_get_query(context, read_deleted=None, project_only=False, join_rules=True): query = model_query(context, models.SecurityGroup, read_deleted=read_deleted, project_only=project_only) if join_rules: query = query.options(_joinedload_all('rules.grantee_group')) return query def _security_group_get_by_names(context, group_names): """Get security group models for a project by a list of names. Raise SecurityGroupNotFoundForProject for a name not found. """ query = _security_group_get_query(context, read_deleted="no", join_rules=False).\ filter_by(project_id=context.project_id).\ filter(models.SecurityGroup.name.in_(group_names)) sg_models = query.all() if len(sg_models) == len(group_names): return sg_models # Find the first one missing and raise group_names_from_models = [x.name for x in sg_models] for group_name in group_names: if group_name not in group_names_from_models: raise exception.SecurityGroupNotFoundForProject( project_id=context.project_id, security_group_id=group_name) # Not Reached @require_context @pick_context_manager_reader def security_group_get_all(context): return _security_group_get_query(context).all() @require_context @pick_context_manager_reader def security_group_get(context, security_group_id, columns_to_join=None): join_rules = columns_to_join and 'rules' in columns_to_join if join_rules: columns_to_join.remove('rules') query = _security_group_get_query(context, project_only=True, join_rules=join_rules).\ filter_by(id=security_group_id) if columns_to_join is None: columns_to_join = [] for column in columns_to_join: if column.startswith('instances'): query = query.options(_joinedload_all(column)) result = query.first() if not result: raise exception.SecurityGroupNotFound( security_group_id=security_group_id) return result @require_context @pick_context_manager_reader def security_group_get_by_name(context, project_id, group_name, columns_to_join=None): query = _security_group_get_query(context, read_deleted="no", join_rules=False).\ filter_by(project_id=project_id).\ filter_by(name=group_name) if columns_to_join is None: columns_to_join = ['instances', 'rules.grantee_group'] for column in columns_to_join: query = query.options(_joinedload_all(column)) result = query.first() if not result: raise exception.SecurityGroupNotFoundForProject( project_id=project_id, security_group_id=group_name) return result @require_context @pick_context_manager_reader def security_group_get_by_project(context, project_id): return _security_group_get_query(context, read_deleted="no").\ filter_by(project_id=project_id).\ all() @require_context @pick_context_manager_reader def security_group_get_by_instance(context, instance_uuid): return _security_group_get_query(context, read_deleted="no").\ join(models.SecurityGroup.instances).\ filter_by(uuid=instance_uuid).\ all() @require_context @pick_context_manager_reader def security_group_in_use(context, group_id): # Are there any instances that haven't been deleted # that include this group? 
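# An illustrative sketch of how a caller is expected to use this check
# (hypothetical caller code, not part of this module):
#
#     if not security_group_in_use(ctxt, group_id):
#         security_group_destroy(ctxt, group_id)
#     else:
#         ...  # refuse the delete; error handling is up to the caller
#
# The query below answers the question in the comment above: it walks the
# non-deleted SecurityGroupInstanceAssociation rows for the group and
# returns True as soon as one of them still points at a live instance.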
inst_assoc = model_query(context, models.SecurityGroupInstanceAssociation, read_deleted="no").\ filter_by(security_group_id=group_id).\ all() for ia in inst_assoc: num_instances = model_query(context, models.Instance, read_deleted="no").\ filter_by(uuid=ia.instance_uuid).\ count() if num_instances: return True return False @require_context @pick_context_manager_writer def security_group_update(context, security_group_id, values, columns_to_join=None): query = model_query(context, models.SecurityGroup).filter_by( id=security_group_id) if columns_to_join: for column in columns_to_join: query = query.options(_joinedload_all(column)) security_group_ref = query.first() if not security_group_ref: raise exception.SecurityGroupNotFound( security_group_id=security_group_id) security_group_ref.update(values) name = security_group_ref['name'] project_id = security_group_ref['project_id'] try: security_group_ref.save(context.session) except db_exc.DBDuplicateEntry: raise exception.SecurityGroupExists( project_id=project_id, security_group_name=name) return security_group_ref def security_group_ensure_default(context): """Ensure default security group exists for a project_id.""" try: # NOTE(rpodolyaka): create the default security group, if it doesn't # exist. This must be done in a separate transaction, so that # this one is not aborted in case a concurrent one succeeds first # and the unique constraint for security group names is violated # by a concurrent INSERT with get_context_manager(context).writer.independent.using(context): return _security_group_ensure_default(context) except exception.SecurityGroupExists: # NOTE(rpodolyaka): a concurrent transaction has succeeded first, # suppress the error and proceed return security_group_get_by_name(context, context.project_id, 'default') @pick_context_manager_writer def _security_group_ensure_default(context): try: default_group = _security_group_get_by_names(context, ['default'])[0] except exception.NotFound: values = {'name': 'default', 'description': 'default', 'user_id': context.user_id, 'project_id': context.project_id} default_group = security_group_create(context, values) return default_group @require_context @pick_context_manager_writer def security_group_destroy(context, security_group_id): model_query(context, models.SecurityGroup).\ filter_by(id=security_group_id).\ soft_delete() model_query(context, models.SecurityGroupInstanceAssociation).\ filter_by(security_group_id=security_group_id).\ soft_delete() model_query(context, models.SecurityGroupIngressRule).\ filter_by(group_id=security_group_id).\ soft_delete() model_query(context, models.SecurityGroupIngressRule).\ filter_by(parent_group_id=security_group_id).\ soft_delete() ################### @pick_context_manager_writer def migration_create(context, values): migration = models.Migration() migration.update(values) migration.save(context.session) return migration @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) @pick_context_manager_writer def migration_update(context, id, values): migration = migration_get(context, id) migration.update(values) return migration @pick_context_manager_reader def migration_get(context, id): result = model_query(context, models.Migration, read_deleted="yes").\ filter_by(id=id).\ first() if not result: raise exception.MigrationNotFound(migration_id=id) return result @pick_context_manager_reader def migration_get_by_uuid(context, migration_uuid): result = model_query(context, models.Migration, read_deleted="yes").\ 
filter_by(uuid=migration_uuid).\ first() if not result: raise exception.MigrationNotFound(migration_id=migration_uuid) return result @pick_context_manager_reader def migration_get_by_id_and_instance(context, id, instance_uuid): result = model_query(context, models.Migration).\ filter_by(id=id).\ filter_by(instance_uuid=instance_uuid).\ first() if not result: raise exception.MigrationNotFoundForInstance(migration_id=id, instance_id=instance_uuid) return result @pick_context_manager_reader def migration_get_by_instance_and_status(context, instance_uuid, status): result = model_query(context, models.Migration, read_deleted="yes").\ filter_by(instance_uuid=instance_uuid).\ filter_by(status=status).\ first() if not result: raise exception.MigrationNotFoundByStatus(instance_id=instance_uuid, status=status) return result @pick_context_manager_reader_allow_async def migration_get_unconfirmed_by_dest_compute(context, confirm_window, dest_compute): confirm_window = (timeutils.utcnow() - datetime.timedelta(seconds=confirm_window)) return model_query(context, models.Migration, read_deleted="yes").\ filter(models.Migration.updated_at <= confirm_window).\ filter_by(status="finished").\ filter_by(dest_compute=dest_compute).\ all() @pick_context_manager_reader def migration_get_in_progress_by_host_and_node(context, host, node): # TODO(mriedem): Tracking what various code flows set for # migration status is nutty, since it happens all over the place # and several of the statuses are redundant (done and completed). # We need to define these in an enum somewhere and just update # that one central place that defines what "in progress" means. # NOTE(mriedem): The 'finished' status is not in this list because # 'finished' means a resize is finished on the destination host # and the instance is in VERIFY_RESIZE state, so the end state # for a resize is actually 'confirmed' or 'reverted'. return model_query(context, models.Migration).\ filter(or_(and_(models.Migration.source_compute == host, models.Migration.source_node == node), and_(models.Migration.dest_compute == host, models.Migration.dest_node == node))).\ filter(~models.Migration.status.in_(['confirmed', 'reverted', 'error', 'failed', 'completed', 'cancelled', 'done'])).\ options(_joinedload_all('instance.system_metadata')).\ all() @pick_context_manager_reader def migration_get_in_progress_by_instance(context, instance_uuid, migration_type=None): # TODO(Shaohe Feng) we should share the in-progress list. # TODO(Shaohe Feng) will also summarize all status to a new # MigrationStatus class. query = model_query(context, models.Migration).\ filter_by(instance_uuid=instance_uuid).\ filter(models.Migration.status.in_(['queued', 'preparing', 'running', 'post-migrating'])) if migration_type: query = query.filter(models.Migration.migration_type == migration_type) return query.all() @pick_context_manager_reader def migration_get_all_by_filters(context, filters, sort_keys=None, sort_dirs=None, limit=None, marker=None): if limit == 0: return [] query = model_query(context, models.Migration) if "uuid" in filters: # The uuid filter is here for the MigrationLister and multi-cell # paging support in the compute API. 
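# An illustrative sketch of a paged listing call of the kind the comment
# above refers to (the filter values, limit and marker are assumptions):
#
#     migrations = migration_get_all_by_filters(
#         ctxt,
#         {'status': ['running', 'post-migrating'], 'host': 'compute-1'},
#         sort_keys=['created_at', 'id'], sort_dirs=['desc', 'desc'],
#         limit=50, marker=last_seen_migration_uuid)
#
# A marker that cannot be resolved via migration_get_by_uuid() is reported
# as MarkerNotFound, matching the paging contract used by the API layer.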
uuid = filters["uuid"] uuid = [uuid] if isinstance(uuid, six.string_types) else uuid query = query.filter(models.Migration.uuid.in_(uuid)) model_object = models.Migration query = _get_query_nova_resource_by_changes_time(query, filters, model_object) if "status" in filters: status = filters["status"] status = [status] if isinstance(status, six.string_types) else status query = query.filter(models.Migration.status.in_(status)) if "host" in filters: host = filters["host"] query = query.filter(or_(models.Migration.source_compute == host, models.Migration.dest_compute == host)) elif "source_compute" in filters: host = filters['source_compute'] query = query.filter(models.Migration.source_compute == host) if "migration_type" in filters: migtype = filters["migration_type"] query = query.filter(models.Migration.migration_type == migtype) if "hidden" in filters: hidden = filters["hidden"] query = query.filter(models.Migration.hidden == hidden) if "instance_uuid" in filters: instance_uuid = filters["instance_uuid"] query = query.filter(models.Migration.instance_uuid == instance_uuid) if 'user_id' in filters: user_id = filters['user_id'] query = query.filter(models.Migration.user_id == user_id) if 'project_id' in filters: project_id = filters['project_id'] query = query.filter(models.Migration.project_id == project_id) if marker: try: marker = migration_get_by_uuid(context, marker) except exception.MigrationNotFound: raise exception.MarkerNotFound(marker=marker) if limit or marker or sort_keys or sort_dirs: # Default sort by desc(['created_at', 'id']) sort_keys, sort_dirs = process_sort_params(sort_keys, sort_dirs, default_dir='desc') return sqlalchemyutils.paginate_query(query, models.Migration, limit=limit, sort_keys=sort_keys, marker=marker, sort_dirs=sort_dirs).all() else: return query.all() @require_context @pick_context_manager_reader_allow_async def migration_get_by_sort_filters(context, sort_keys, sort_dirs, values): """Attempt to get a single migration based on a combination of sort keys, directions and filter values. This is used to try to find a marker migration when we don't have a marker uuid. This returns just a uuid of the migration that matched. """ model = models.Migration return _model_get_uuid_by_sort_filters(context, model, sort_keys, sort_dirs, values) @pick_context_manager_writer def migration_migrate_to_uuid(context, count): # Avoid circular import from nova import objects db_migrations = model_query(context, models.Migration).filter_by( uuid=None).limit(count).all() done = 0 for db_migration in db_migrations: mig = objects.Migration(context) mig._from_db_object(context, mig, db_migration) done += 1 # We don't have any situation where we can (detectably) not # migrate a thing, so report anything that matched as "completed". 
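# An illustrative sketch of how an online data migration can drive this
# helper in batches (the batch size and loop structure are assumptions):
#
#     total_done = 0
#     while True:
#         found, done = migration_migrate_to_uuid(ctxt, 50)
#         total_done += done
#         if not found:
#             break
#
# The uuid backfill is performed when each matched row is loaded into
# objects.Migration above, and per the note above every matched row is
# reported as completed, hence the single count returned twice below.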
return done, done ######################## # User-provided metadata def _instance_metadata_get_multi(context, instance_uuids): if not instance_uuids: return [] return model_query(context, models.InstanceMetadata).filter( models.InstanceMetadata.instance_uuid.in_(instance_uuids)) def _instance_metadata_get_query(context, instance_uuid): return model_query(context, models.InstanceMetadata, read_deleted="no").\ filter_by(instance_uuid=instance_uuid) @require_context @pick_context_manager_reader def instance_metadata_get(context, instance_uuid): rows = _instance_metadata_get_query(context, instance_uuid).all() return {row['key']: row['value'] for row in rows} @require_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) @pick_context_manager_writer def instance_metadata_delete(context, instance_uuid, key): _instance_metadata_get_query(context, instance_uuid).\ filter_by(key=key).\ soft_delete() @require_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) @pick_context_manager_writer def instance_metadata_update(context, instance_uuid, metadata, delete): all_keys = metadata.keys() if delete: _instance_metadata_get_query(context, instance_uuid).\ filter(~models.InstanceMetadata.key.in_(all_keys)).\ soft_delete(synchronize_session=False) already_existing_keys = [] meta_refs = _instance_metadata_get_query(context, instance_uuid).\ filter(models.InstanceMetadata.key.in_(all_keys)).\ all() for meta_ref in meta_refs: already_existing_keys.append(meta_ref.key) meta_ref.update({"value": metadata[meta_ref.key]}) new_keys = set(all_keys) - set(already_existing_keys) for key in new_keys: meta_ref = models.InstanceMetadata() meta_ref.update({"key": key, "value": metadata[key], "instance_uuid": instance_uuid}) context.session.add(meta_ref) return metadata ####################### # System-owned metadata def _instance_system_metadata_get_multi(context, instance_uuids): if not instance_uuids: return [] return model_query(context, models.InstanceSystemMetadata, read_deleted='yes').filter( models.InstanceSystemMetadata.instance_uuid.in_(instance_uuids)) def _instance_system_metadata_get_query(context, instance_uuid): return model_query(context, models.InstanceSystemMetadata).\ filter_by(instance_uuid=instance_uuid) @require_context @pick_context_manager_reader def instance_system_metadata_get(context, instance_uuid): rows = _instance_system_metadata_get_query(context, instance_uuid).all() return {row['key']: row['value'] for row in rows} @require_context @pick_context_manager_writer def instance_system_metadata_update(context, instance_uuid, metadata, delete): all_keys = metadata.keys() if delete: _instance_system_metadata_get_query(context, instance_uuid).\ filter(~models.InstanceSystemMetadata.key.in_(all_keys)).\ soft_delete(synchronize_session=False) already_existing_keys = [] meta_refs = _instance_system_metadata_get_query(context, instance_uuid).\ filter(models.InstanceSystemMetadata.key.in_(all_keys)).\ all() for meta_ref in meta_refs: already_existing_keys.append(meta_ref.key) meta_ref.update({"value": metadata[meta_ref.key]}) new_keys = set(all_keys) - set(already_existing_keys) for key in new_keys: meta_ref = models.InstanceSystemMetadata() meta_ref.update({"key": key, "value": metadata[key], "instance_uuid": instance_uuid}) context.session.add(meta_ref) return metadata #################### @pick_context_manager_writer def agent_build_create(context, values): agent_build_ref = models.AgentBuild() agent_build_ref.update(values) try: 
agent_build_ref.save(context.session) except db_exc.DBDuplicateEntry: raise exception.AgentBuildExists(hypervisor=values['hypervisor'], os=values['os'], architecture=values['architecture']) return agent_build_ref @pick_context_manager_reader def agent_build_get_by_triple(context, hypervisor, os, architecture): return model_query(context, models.AgentBuild, read_deleted="no").\ filter_by(hypervisor=hypervisor).\ filter_by(os=os).\ filter_by(architecture=architecture).\ first() @pick_context_manager_reader def agent_build_get_all(context, hypervisor=None): if hypervisor: return model_query(context, models.AgentBuild, read_deleted="no").\ filter_by(hypervisor=hypervisor).\ all() else: return model_query(context, models.AgentBuild, read_deleted="no").\ all() @pick_context_manager_writer def agent_build_destroy(context, agent_build_id): rows_affected = model_query(context, models.AgentBuild).filter_by( id=agent_build_id).soft_delete() if rows_affected == 0: raise exception.AgentBuildNotFound(id=agent_build_id) @pick_context_manager_writer def agent_build_update(context, agent_build_id, values): rows_affected = model_query(context, models.AgentBuild).\ filter_by(id=agent_build_id).\ update(values) if rows_affected == 0: raise exception.AgentBuildNotFound(id=agent_build_id) #################### @require_context @pick_context_manager_reader_allow_async def bw_usage_get(context, uuid, start_period, mac): values = {'start_period': start_period} values = convert_objects_related_datetimes(values, 'start_period') return model_query(context, models.BandwidthUsage, read_deleted="yes").\ filter_by(start_period=values['start_period']).\ filter_by(uuid=uuid).\ filter_by(mac=mac).\ first() @require_context @pick_context_manager_reader_allow_async def bw_usage_get_by_uuids(context, uuids, start_period): values = {'start_period': start_period} values = convert_objects_related_datetimes(values, 'start_period') return ( model_query(context, models.BandwidthUsage, read_deleted="yes"). filter(models.BandwidthUsage.uuid.in_(uuids)). filter_by(start_period=values['start_period']). all() ) @require_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) @pick_context_manager_writer def bw_usage_update(context, uuid, mac, start_period, bw_in, bw_out, last_ctr_in, last_ctr_out, last_refreshed=None): if last_refreshed is None: last_refreshed = timeutils.utcnow() # NOTE(comstud): More often than not, we'll be updating records vs # creating records. Optimize accordingly, trying to update existing # records. Fall back to creation when no rows are updated. ts_values = {'last_refreshed': last_refreshed, 'start_period': start_period} ts_keys = ('start_period', 'last_refreshed') ts_values = convert_objects_related_datetimes(ts_values, *ts_keys) values = {'last_refreshed': ts_values['last_refreshed'], 'last_ctr_in': last_ctr_in, 'last_ctr_out': last_ctr_out, 'bw_in': bw_in, 'bw_out': bw_out} # NOTE(pkholkin): order_by() is needed here to ensure that the # same record is updated every time. It can be removed after adding # unique constraint to this model. 
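# An illustrative sketch of the caller-facing contract (the uuid, mac and
# period values are example assumptions):
#
#     bw_usage_update(ctxt, vif_uuid, mac='fa:16:3e:00:00:01',
#                     start_period=period_start,
#                     bw_in=1024, bw_out=2048,
#                     last_ctr_in=10, last_ctr_out=20)
#
# Because the model currently has no unique constraint on
# (uuid, mac, start_period), duplicate rows are possible; ordering by
# ascending id below keeps the update deterministic by always touching the
# oldest matching row, and a new row is inserted only when nothing matches.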
bw_usage = model_query(context, models.BandwidthUsage, read_deleted='yes').\ filter_by(start_period=ts_values['start_period']).\ filter_by(uuid=uuid).\ filter_by(mac=mac).\ order_by(asc(models.BandwidthUsage.id)).first() if bw_usage: bw_usage.update(values) return bw_usage bwusage = models.BandwidthUsage() bwusage.start_period = ts_values['start_period'] bwusage.uuid = uuid bwusage.mac = mac bwusage.last_refreshed = ts_values['last_refreshed'] bwusage.bw_in = bw_in bwusage.bw_out = bw_out bwusage.last_ctr_in = last_ctr_in bwusage.last_ctr_out = last_ctr_out bwusage.save(context.session) return bwusage #################### @require_context @pick_context_manager_reader def vol_get_usage_by_time(context, begin): """Return volume usage records that have been updated after a specified time.""" return model_query(context, models.VolumeUsage, read_deleted="yes").\ filter(or_(models.VolumeUsage.tot_last_refreshed == null(), models.VolumeUsage.tot_last_refreshed > begin, models.VolumeUsage.curr_last_refreshed == null(), models.VolumeUsage.curr_last_refreshed > begin, )).all() @require_context @pick_context_manager_writer def vol_usage_update(context, id, rd_req, rd_bytes, wr_req, wr_bytes, instance_id, project_id, user_id, availability_zone, update_totals=False): refreshed = timeutils.utcnow() values = {} # NOTE(dricco): We will be mostly updating current usage records vs # updating total or creating records. Optimize accordingly. if not update_totals: values = {'curr_last_refreshed': refreshed, 'curr_reads': rd_req, 'curr_read_bytes': rd_bytes, 'curr_writes': wr_req, 'curr_write_bytes': wr_bytes, 'instance_uuid': instance_id, 'project_id': project_id, 'user_id': user_id, 'availability_zone': availability_zone} else: values = {'tot_last_refreshed': refreshed, 'tot_reads': models.VolumeUsage.tot_reads + rd_req, 'tot_read_bytes': models.VolumeUsage.tot_read_bytes + rd_bytes, 'tot_writes': models.VolumeUsage.tot_writes + wr_req, 'tot_write_bytes': models.VolumeUsage.tot_write_bytes + wr_bytes, 'curr_reads': 0, 'curr_read_bytes': 0, 'curr_writes': 0, 'curr_write_bytes': 0, 'instance_uuid': instance_id, 'project_id': project_id, 'user_id': user_id, 'availability_zone': availability_zone} current_usage = model_query(context, models.VolumeUsage, read_deleted="yes").\ filter_by(volume_id=id).\ first() if current_usage: if (rd_req < current_usage['curr_reads'] or rd_bytes < current_usage['curr_read_bytes'] or wr_req < current_usage['curr_writes'] or wr_bytes < current_usage['curr_write_bytes']): LOG.info("Volume(%s) has lower stats than what is in " "the database. Instance must have been rebooted " "or crashed.
Updating totals.", id) if not update_totals: values['tot_reads'] = (models.VolumeUsage.tot_reads + current_usage['curr_reads']) values['tot_read_bytes'] = ( models.VolumeUsage.tot_read_bytes + current_usage['curr_read_bytes']) values['tot_writes'] = (models.VolumeUsage.tot_writes + current_usage['curr_writes']) values['tot_write_bytes'] = ( models.VolumeUsage.tot_write_bytes + current_usage['curr_write_bytes']) else: values['tot_reads'] = (models.VolumeUsage.tot_reads + current_usage['curr_reads'] + rd_req) values['tot_read_bytes'] = ( models.VolumeUsage.tot_read_bytes + current_usage['curr_read_bytes'] + rd_bytes) values['tot_writes'] = (models.VolumeUsage.tot_writes + current_usage['curr_writes'] + wr_req) values['tot_write_bytes'] = ( models.VolumeUsage.tot_write_bytes + current_usage['curr_write_bytes'] + wr_bytes) current_usage.update(values) current_usage.save(context.session) context.session.refresh(current_usage) return current_usage vol_usage = models.VolumeUsage() vol_usage.volume_id = id vol_usage.instance_uuid = instance_id vol_usage.project_id = project_id vol_usage.user_id = user_id vol_usage.availability_zone = availability_zone if not update_totals: vol_usage.curr_last_refreshed = refreshed vol_usage.curr_reads = rd_req vol_usage.curr_read_bytes = rd_bytes vol_usage.curr_writes = wr_req vol_usage.curr_write_bytes = wr_bytes else: vol_usage.tot_last_refreshed = refreshed vol_usage.tot_reads = rd_req vol_usage.tot_read_bytes = rd_bytes vol_usage.tot_writes = wr_req vol_usage.tot_write_bytes = wr_bytes vol_usage.save(context.session) return vol_usage #################### @pick_context_manager_reader def s3_image_get(context, image_id): """Find local s3 image represented by the provided id.""" result = model_query(context, models.S3Image, read_deleted="yes").\ filter_by(id=image_id).\ first() if not result: raise exception.ImageNotFound(image_id=image_id) return result @pick_context_manager_reader def s3_image_get_by_uuid(context, image_uuid): """Find local s3 image represented by the provided uuid.""" result = model_query(context, models.S3Image, read_deleted="yes").\ filter_by(uuid=image_uuid).\ first() if not result: raise exception.ImageNotFound(image_id=image_uuid) return result @pick_context_manager_writer def s3_image_create(context, image_uuid): """Create local s3 image represented by provided uuid.""" try: s3_image_ref = models.S3Image() s3_image_ref.update({'uuid': image_uuid}) s3_image_ref.save(context.session) except Exception as e: raise db_exc.DBError(e) return s3_image_ref #################### @pick_context_manager_writer def instance_fault_create(context, values): """Create a new InstanceFault.""" fault_ref = models.InstanceFault() fault_ref.update(values) fault_ref.save(context.session) return dict(fault_ref) @pick_context_manager_reader def instance_fault_get_by_instance_uuids(context, instance_uuids, latest=False): """Get all instance faults for the provided instance_uuids. :param instance_uuids: List of UUIDs of instances to grab faults for :param latest: Optional boolean indicating we should only return the latest fault for the instance """ if not instance_uuids: return {} faults_tbl = models.InstanceFault.__table__ # NOTE(rpodolyaka): filtering by instance_uuids is performed in both # code branches below for the sake of a better query plan. On change, # make sure to update the other one as well. 
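# An illustrative sketch of the result shape (the uuids are placeholders):
#
#     faults = instance_fault_get_by_instance_uuids(
#         ctxt, [uuid_a, uuid_b], latest=True)
#     # faults == {uuid_a: [<fault columns as a dict>], uuid_b: []}
#     # every requested uuid gets a key, even when it has no faults
#
# With latest=True only the newest fault per instance is returned, via the
# MAX(id) join sketched in the comment below; otherwise all faults for the
# requested instances are returned newest-first.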
query = model_query(context, models.InstanceFault, [faults_tbl], read_deleted='no') if latest: # NOTE(jaypipes): We join instance_faults to a derived table of the # latest faults per instance UUID. The SQL produced below looks like # this: # # SELECT instance_faults.* # FROM instance_faults # JOIN ( # SELECT instance_uuid, MAX(id) AS max_id # FROM instance_faults # WHERE instance_uuid IN ( ... ) # AND deleted = 0 # GROUP BY instance_uuid # ) AS latest_faults # ON instance_faults.id = latest_faults.max_id; latest_faults = model_query( context, models.InstanceFault, [faults_tbl.c.instance_uuid, sql.func.max(faults_tbl.c.id).label('max_id')], read_deleted='no' ).filter( faults_tbl.c.instance_uuid.in_(instance_uuids) ).group_by( faults_tbl.c.instance_uuid ).subquery(name="latest_faults") query = query.join(latest_faults, faults_tbl.c.id == latest_faults.c.max_id) else: query = query.filter(models.InstanceFault.instance_uuid.in_( instance_uuids)).order_by(desc("id")) output = {} for instance_uuid in instance_uuids: output[instance_uuid] = [] for row in query: output[row.instance_uuid].append(row._asdict()) return output ################## @pick_context_manager_writer def action_start(context, values): convert_objects_related_datetimes(values, 'start_time', 'updated_at') action_ref = models.InstanceAction() action_ref.update(values) action_ref.save(context.session) return action_ref @pick_context_manager_writer def action_finish(context, values): convert_objects_related_datetimes(values, 'start_time', 'finish_time', 'updated_at') query = model_query(context, models.InstanceAction).\ filter_by(instance_uuid=values['instance_uuid']).\ filter_by(request_id=values['request_id']) if query.update(values) != 1: raise exception.InstanceActionNotFound( request_id=values['request_id'], instance_uuid=values['instance_uuid']) return query.one() @pick_context_manager_reader def actions_get(context, instance_uuid, limit=None, marker=None, filters=None): """Get all instance actions for the provided uuid and filters.""" if limit == 0: return [] sort_keys = ['created_at', 'id'] sort_dirs = ['desc', 'desc'] query_prefix = model_query(context, models.InstanceAction).\ filter_by(instance_uuid=instance_uuid) model_object = models.InstanceAction query_prefix = _get_query_nova_resource_by_changes_time(query_prefix, filters, model_object) if marker is not None: marker = action_get_by_request_id(context, instance_uuid, marker) if not marker: raise exception.MarkerNotFound(marker=marker) actions = sqlalchemyutils.paginate_query(query_prefix, models.InstanceAction, limit, sort_keys, marker=marker, sort_dirs=sort_dirs).all() return actions @pick_context_manager_reader def action_get_by_request_id(context, instance_uuid, request_id): """Get the action by request_id and given instance.""" action = _action_get_by_request_id(context, instance_uuid, request_id) return action def _action_get_by_request_id(context, instance_uuid, request_id): result = model_query(context, models.InstanceAction).\ filter_by(instance_uuid=instance_uuid).\ filter_by(request_id=request_id).\ order_by(desc("created_at"), desc("id")).\ first() return result def _action_get_last_created_by_instance_uuid(context, instance_uuid): result = (model_query(context, models.InstanceAction). filter_by(instance_uuid=instance_uuid). order_by(desc("created_at"), desc("id")). 
first()) return result @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) @pick_context_manager_writer def action_event_start(context, values): """Start an event on an instance action.""" convert_objects_related_datetimes(values, 'start_time') action = _action_get_by_request_id(context, values['instance_uuid'], values['request_id']) # When nova-compute restarts, the context is generated again in # init_host workflow, the request_id was different with the request_id # recorded in InstanceAction, so we can't get the original record # according to request_id. Try to get the last created action so that # init_instance can continue to finish the recovery action, like: # powering_off, unpausing, and so on. update_action = True if not action and not context.project_id: action = _action_get_last_created_by_instance_uuid( context, values['instance_uuid']) # If we couldn't find an action by the request_id, we don't want to # update this action since it likely represents an inactive action. update_action = False if not action: raise exception.InstanceActionNotFound( request_id=values['request_id'], instance_uuid=values['instance_uuid']) values['action_id'] = action['id'] event_ref = models.InstanceActionEvent() event_ref.update(values) context.session.add(event_ref) # Update action updated_at. if update_action: action.update({'updated_at': values['start_time']}) action.save(context.session) return event_ref # NOTE: We need the retry_on_deadlock decorator for cases like resize where # a lot of events are happening at once between multiple hosts trying to # update the same action record in a small time window. @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) @pick_context_manager_writer def action_event_finish(context, values): """Finish an event on an instance action.""" convert_objects_related_datetimes(values, 'start_time', 'finish_time') action = _action_get_by_request_id(context, values['instance_uuid'], values['request_id']) # When nova-compute restarts, the context is generated again in # init_host workflow, the request_id was different with the request_id # recorded in InstanceAction, so we can't get the original record # according to request_id. Try to get the last created action so that # init_instance can continue to finish the recovery action, like: # powering_off, unpausing, and so on. update_action = True if not action and not context.project_id: action = _action_get_last_created_by_instance_uuid( context, values['instance_uuid']) # If we couldn't find an action by the request_id, we don't want to # update this action since it likely represents an inactive action. update_action = False if not action: raise exception.InstanceActionNotFound( request_id=values['request_id'], instance_uuid=values['instance_uuid']) event_ref = model_query(context, models.InstanceActionEvent).\ filter_by(action_id=action['id']).\ filter_by(event=values['event']).\ first() if not event_ref: raise exception.InstanceActionEventNotFound(action_id=action['id'], event=values['event']) event_ref.update(values) if values['result'].lower() == 'error': action.update({'message': 'Error'}) # Update action updated_at. 
if update_action: action.update({'updated_at': values['finish_time']}) action.save(context.session) return event_ref @pick_context_manager_reader def action_events_get(context, action_id): events = model_query(context, models.InstanceActionEvent).\ filter_by(action_id=action_id).\ order_by(desc("created_at"), desc("id")).\ all() return events @pick_context_manager_reader def action_event_get_by_id(context, action_id, event_id): event = model_query(context, models.InstanceActionEvent).\ filter_by(action_id=action_id).\ filter_by(id=event_id).\ first() return event ################## @require_context @pick_context_manager_writer def ec2_instance_create(context, instance_uuid, id=None): """Create ec2 compatible instance by provided uuid.""" ec2_instance_ref = models.InstanceIdMapping() ec2_instance_ref.update({'uuid': instance_uuid}) if id is not None: ec2_instance_ref.update({'id': id}) ec2_instance_ref.save(context.session) return ec2_instance_ref @require_context @pick_context_manager_reader def ec2_instance_get_by_uuid(context, instance_uuid): result = _ec2_instance_get_query(context).\ filter_by(uuid=instance_uuid).\ first() if not result: raise exception.InstanceNotFound(instance_id=instance_uuid) return result @require_context @pick_context_manager_reader def ec2_instance_get_by_id(context, instance_id): result = _ec2_instance_get_query(context).\ filter_by(id=instance_id).\ first() if not result: raise exception.InstanceNotFound(instance_id=instance_id) return result @require_context @pick_context_manager_reader def get_instance_uuid_by_ec2_id(context, ec2_id): result = ec2_instance_get_by_id(context, ec2_id) return result['uuid'] def _ec2_instance_get_query(context): return model_query(context, models.InstanceIdMapping, read_deleted='yes') ################## def _task_log_get_query(context, task_name, period_beginning, period_ending, host=None, state=None): values = {'period_beginning': period_beginning, 'period_ending': period_ending} values = convert_objects_related_datetimes(values, *values.keys()) query = model_query(context, models.TaskLog).\ filter_by(task_name=task_name).\ filter_by(period_beginning=values['period_beginning']).\ filter_by(period_ending=values['period_ending']) if host is not None: query = query.filter_by(host=host) if state is not None: query = query.filter_by(state=state) return query @pick_context_manager_reader def task_log_get(context, task_name, period_beginning, period_ending, host, state=None): return _task_log_get_query(context, task_name, period_beginning, period_ending, host, state).first() @pick_context_manager_reader def task_log_get_all(context, task_name, period_beginning, period_ending, host=None, state=None): return _task_log_get_query(context, task_name, period_beginning, period_ending, host, state).all() @pick_context_manager_writer def task_log_begin_task(context, task_name, period_beginning, period_ending, host, task_items=None, message=None): values = {'period_beginning': period_beginning, 'period_ending': period_ending} values = convert_objects_related_datetimes(values, *values.keys()) task = models.TaskLog() task.task_name = task_name task.period_beginning = values['period_beginning'] task.period_ending = values['period_ending'] task.host = host task.state = "RUNNING" if message: task.message = message if task_items: task.task_items = task_items try: task.save(context.session) except db_exc.DBDuplicateEntry: raise exception.TaskAlreadyRunning(task_name=task_name, host=host) @pick_context_manager_writer def task_log_end_task(context, 
task_name, period_beginning, period_ending, host, errors, message=None): values = dict(state="DONE", errors=errors) if message: values["message"] = message rows = _task_log_get_query(context, task_name, period_beginning, period_ending, host).update(values) if rows == 0: # It's not running! raise exception.TaskNotRunning(task_name=task_name, host=host) ################## def _get_tables_with_fk_to_table(table): """Get a list of tables that refer to the given table by foreign key (FK). :param table: Table object (parent) for which to find references by FK :returns: A list of Table objects that refer to the specified table by FK """ tables = [] for t in models.BASE.metadata.tables.values(): for fk in t.foreign_keys: if fk.references(table): tables.append(t) return tables def _get_fk_stmts(metadata, conn, table, column, records): """Find records related to this table by foreign key (FK) and create and return insert/delete statements for them. Logic is: find the tables that reference the table passed to this method and walk the tree of references by FK. As child records are found, prepend them to deques to execute later in a single database transaction (to avoid orphaning related records if any one insert/delete fails or the archive process is otherwise interrupted). :param metadata: Metadata object to use to construct a shadow Table object :param conn: Connection object to use to select records related by FK :param table: Table object (parent) for which to find references by FK :param column: Column object (parent) to use to select records related by FK :param records: A list of records (column values) to use to select records related by FK :returns: tuple of (insert statements, delete statements) for records related by FK to insert into shadow tables and delete from main tables """ inserts = collections.deque() deletes = collections.deque() fk_tables = _get_tables_with_fk_to_table(table) for fk_table in fk_tables: # Create the shadow table for the referencing table. fk_shadow_tablename = _SHADOW_TABLE_PREFIX + fk_table.name try: fk_shadow_table = Table(fk_shadow_tablename, metadata, autoload=True) except NoSuchTableError: # No corresponding shadow table; skip it. continue # TODO(stephenfin): Drop this when we drop the table if fk_table.name == "dns_domains": # We have one table (dns_domains) where the key is called # "domain" rather than "id" fk_column = fk_table.c.domain else: fk_column = fk_table.c.id for fk in fk_table.foreign_keys: # We need to find the records in the referring (child) table that # correspond to the records in our (parent) table so we can archive # them. # First, select the column in the parent referenced by the child # table that corresponds to the parent table records that were # passed in. # Example: table = 'instances' and fk_table = 'instance_extra' # fk.parent = instance_extra.instance_uuid # fk.column = instances.uuid # SELECT instances.uuid FROM instances, instance_extra # WHERE instance_extra.instance_uuid = instances.uuid # AND instance.id IN () # We need the instance uuids for the in order to # look up the matching instance_extra records. select = sql.select([fk.column]).where( sql.and_(fk.parent == fk.column, column.in_(records))) rows = conn.execute(select).fetchall() p_records = [r[0] for r in rows] # Then, select rows in the child table that correspond to the # parent table records that were passed in. 
# Example: table = 'instances' and fk_table = 'instance_extra' # fk.parent = instance_extra.instance_uuid # fk.column = instances.uuid # SELECT instance_extra.id FROM instance_extra, instances # WHERE instance_extra.instance_uuid = instances.uuid # AND instances.uuid IN (...) # We will get the instance_extra ids we need to archive # them. fk_select = sql.select([fk_column]).where( sql.and_(fk.parent == fk.column, fk.column.in_(p_records))) fk_rows = conn.execute(fk_select).fetchall() fk_records = [r[0] for r in fk_rows] if fk_records: # If we found any records in the child table, create shadow # table insert statements for them and prepend them to the # deque. fk_columns = [c.name for c in fk_table.c] fk_insert = fk_shadow_table.insert(inline=True).\ from_select(fk_columns, sql.select([fk_table], fk_column.in_(fk_records))) inserts.appendleft(fk_insert) # Create main table delete statements and prepend them to the # deque. fk_delete = fk_table.delete().where(fk_column.in_(fk_records)) deletes.appendleft(fk_delete) # Repeat for any possible nested child tables. i, d = _get_fk_stmts(metadata, conn, fk_table, fk_column, fk_records) inserts.extendleft(i) deletes.extendleft(d) return inserts, deletes def _archive_deleted_rows_for_table(metadata, tablename, max_rows, before): """Move up to max_rows rows from one table to the corresponding shadow table. Will also follow FK constraints and archive all referring rows. Example: archiving a record from the 'instances' table will also archive the 'instance_extra' record before archiving the 'instances' record. :returns: 3-item tuple: - number of rows archived - list of UUIDs of instances that were archived - dict of {tablename: number of extra rows archived} due to FK constraints """ conn = metadata.bind.connect() # NOTE(tdurakov): table metadata should be received # from models, not db tables. Default value specified by SoftDeleteMixin # is known only by models, not DB layer. # IMPORTANT: please do not change source of metadata information for table. table = models.BASE.metadata.tables[tablename] shadow_tablename = _SHADOW_TABLE_PREFIX + tablename rows_archived = 0 deleted_instance_uuids = [] try: shadow_table = Table(shadow_tablename, metadata, autoload=True) except NoSuchTableError: # No corresponding shadow table; skip it. return rows_archived, deleted_instance_uuids, {} # TODO(stephenfin): Drop this when we drop the table if tablename == "dns_domains": # We have one table (dns_domains) where the key is called # "domain" rather than "id" column = table.c.domain else: column = table.c.id # NOTE(guochbo): Use DeleteFromSelect to avoid # database's limit of maximum parameter in one SQL statement. deleted_column = table.c.deleted columns = [c.name for c in table.c] select = sql.select([column], deleted_column != deleted_column.default.arg) if before: select = select.where(table.c.deleted_at < before) select = select.order_by(column).limit(max_rows) rows = conn.execute(select).fetchall() records = [r[0] for r in rows] # We will archive deleted rows for this table and also generate insert and # delete statements for extra rows we may archive by following FK # relationships. Because we are iterating over the sorted_tables (list of # Table objects sorted in order of foreign key dependency), new inserts and # deletes ("leaves") will be added to the fronts of the deques created in # _get_fk_stmts. This way, we make sure we delete child table records # before we delete their parent table records.
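# --- Illustrative note (not from the original module): a minimal,
# self-contained sketch of the deque mechanics relied on above; the table
# names and statement strings are placeholders, not the real constructs
# built by _get_fk_stmts:
#
#     import collections
#
#     deletes = collections.deque()
#     deletes.appendleft('DELETE FROM child ...')        # found first
#     deletes.appendleft('DELETE FROM grandchild ...')   # found while recursing
#     assert list(deletes) == ['DELETE FROM grandchild ...',
#                              'DELETE FROM child ...']
#
# Because later-discovered (deeper) statements are prepended, executing the
# deque from front to back deletes the most deeply nested child rows first,
# and the parent table's own delete statement only runs after the fk_deletes,
# keeping FK constraints satisfied throughout the archive transaction. ---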
# Keep track of a mapping of extra tablenames to the number of rows that we # archive by following FK relationships. # {tablename: extra_rows_archived} extras = collections.defaultdict(int) if records: insert = shadow_table.insert(inline=True).\ from_select(columns, sql.select([table], column.in_(records))) delete = table.delete().where(column.in_(records)) # Walk FK relationships and add insert/delete statements for rows that # refer to this table via FK constraints. fk_inserts and fk_deletes # will be prepended to by _get_fk_stmts if referring rows are found by # FK constraints. fk_inserts, fk_deletes = _get_fk_stmts( metadata, conn, table, column, records) # NOTE(tssurya): In order to facilitate the deletion of records from # instance_mappings, request_specs and instance_group_member tables in # the nova_api DB, the rows of deleted instances from the instances # table are stored prior to their deletion. Basically the uuids of the # archived instances are queried and returned. if tablename == "instances": query_select = sql.select([table.c.uuid], table.c.id.in_(records)) rows = conn.execute(query_select).fetchall() deleted_instance_uuids = [r[0] for r in rows] try: # Group the insert and delete in a transaction. with conn.begin(): for fk_insert in fk_inserts: conn.execute(fk_insert) for fk_delete in fk_deletes: result_fk_delete = conn.execute(fk_delete) extras[fk_delete.table.name] += result_fk_delete.rowcount conn.execute(insert) result_delete = conn.execute(delete) rows_archived += result_delete.rowcount except db_exc.DBReferenceError as ex: # A foreign key constraint keeps us from deleting some of # these rows until we clean up a dependent table. Just # skip this table for now; we'll come back to it later. LOG.warning("IntegrityError detected when archiving table " "%(tablename)s: %(error)s", {'tablename': tablename, 'error': six.text_type(ex)}) return rows_archived, deleted_instance_uuids, extras def archive_deleted_rows(context=None, max_rows=None, before=None): """Move up to max_rows rows from production tables to the corresponding shadow tables. :param context: nova.context.RequestContext for database access :param max_rows: Maximum number of rows to archive (required) :param before: optional datetime which when specified filters the records to only archive those records deleted before the given date :returns: 3-item tuple: - dict that maps table name to number of rows archived from that table, for example:: { 'instances': 5, 'block_device_mapping': 5, 'pci_devices': 2, } - list of UUIDs of instances that were archived - total number of rows that were archived """ table_to_rows_archived = collections.defaultdict(int) deleted_instance_uuids = [] total_rows_archived = 0 meta = MetaData(get_engine(use_slave=True, context=context)) meta.reflect() # Get the sorted list of tables in order of foreign key dependency. # Process the parent tables and find their dependent records in order to # archive the related records in a single database transaction. The goal # is to avoid a situation where, for example, an 'instances' table record # is missing its corresponding 'instance_extra' record due to running the # archive_deleted_rows command with max_rows.
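# --- Illustrative note (not from the original module): a small worked example
# of the per-table row budget used in the loop below, with assumed numbers.
# If the operator requested max_rows=1000 and 600 rows (including FK-related
# extras) have already been archived from earlier tables, the next table is
# only offered the remaining budget:
#
#     max_rows - total_rows_archived  ==  1000 - 600  ==  400
#
# and the loop breaks as soon as total_rows_archived >= max_rows, so a single
# archive run never exceeds the requested budget. ---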
for table in meta.sorted_tables: tablename = table.name rows_archived = 0 # skip the special sqlalchemy-migrate migrate_version table and any # shadow tables if (tablename == 'migrate_version' or tablename.startswith(_SHADOW_TABLE_PREFIX)): continue rows_archived, _deleted_instance_uuids, extras = ( _archive_deleted_rows_for_table( meta, tablename, max_rows=max_rows - total_rows_archived, before=before)) total_rows_archived += rows_archived if tablename == 'instances': deleted_instance_uuids = _deleted_instance_uuids # Only report results for tables that had updates. if rows_archived: table_to_rows_archived[tablename] = rows_archived for tablename, extra_rows_archived in extras.items(): table_to_rows_archived[tablename] += extra_rows_archived total_rows_archived += extra_rows_archived if total_rows_archived >= max_rows: break return table_to_rows_archived, deleted_instance_uuids, total_rows_archived def _purgeable_tables(metadata): return [t for t in metadata.sorted_tables if (t.name.startswith(_SHADOW_TABLE_PREFIX) and not t.name.endswith('migrate_version'))] def purge_shadow_tables(context, before_date, status_fn=None): engine = get_engine(context=context) conn = engine.connect() metadata = MetaData() metadata.bind = engine metadata.reflect() total_deleted = 0 if status_fn is None: status_fn = lambda m: None # Some things never get formally deleted, and thus deleted_at # is never set. So, prefer specific timestamp columns here # for those special cases. overrides = { 'shadow_instance_actions': 'created_at', 'shadow_instance_actions_events': 'created_at', } for table in _purgeable_tables(metadata): if before_date is None: col = None elif table.name in overrides: col = getattr(table.c, overrides[table.name]) elif hasattr(table.c, 'deleted_at'): col = table.c.deleted_at elif hasattr(table.c, 'updated_at'): col = table.c.updated_at elif hasattr(table.c, 'created_at'): col = table.c.created_at else: status_fn(_('Unable to purge table %(table)s because it ' 'has no timestamp column') % { 'table': table.name}) continue if col is not None: delete = table.delete().where(col < before_date) else: delete = table.delete() deleted = conn.execute(delete) if deleted.rowcount > 0: status_fn(_('Deleted %(rows)i rows from %(table)s based on ' 'timestamp column %(col)s') % { 'rows': deleted.rowcount, 'table': table.name, 'col': col is None and '(n/a)' or col.name}) total_deleted += deleted.rowcount return total_deleted #################### @pick_context_manager_reader def pci_device_get_by_addr(context, node_id, dev_addr): pci_dev_ref = model_query(context, models.PciDevice).\ filter_by(compute_node_id=node_id).\ filter_by(address=dev_addr).\ first() if not pci_dev_ref: raise exception.PciDeviceNotFound(node_id=node_id, address=dev_addr) return pci_dev_ref @pick_context_manager_reader def pci_device_get_by_id(context, id): pci_dev_ref = model_query(context, models.PciDevice).\ filter_by(id=id).\ first() if not pci_dev_ref: raise exception.PciDeviceNotFoundById(id=id) return pci_dev_ref @pick_context_manager_reader def pci_device_get_all_by_node(context, node_id): return model_query(context, models.PciDevice).\ filter_by(compute_node_id=node_id).\ all() @pick_context_manager_reader def pci_device_get_all_by_parent_addr(context, node_id, parent_addr): return model_query(context, models.PciDevice).\ filter_by(compute_node_id=node_id).\ filter_by(parent_addr=parent_addr).\ all() @require_context @pick_context_manager_reader def pci_device_get_all_by_instance_uuid(context, instance_uuid): return 
model_query(context, models.PciDevice).\ filter_by(status='allocated').\ filter_by(instance_uuid=instance_uuid).\ all() @pick_context_manager_reader def _instance_pcidevs_get_multi(context, instance_uuids): if not instance_uuids: return [] return model_query(context, models.PciDevice).\ filter_by(status='allocated').\ filter(models.PciDevice.instance_uuid.in_(instance_uuids)) @pick_context_manager_writer def pci_device_destroy(context, node_id, address): result = model_query(context, models.PciDevice).\ filter_by(compute_node_id=node_id).\ filter_by(address=address).\ soft_delete() if not result: raise exception.PciDeviceNotFound(node_id=node_id, address=address) @pick_context_manager_writer def pci_device_update(context, node_id, address, values): query = model_query(context, models.PciDevice, read_deleted="no").\ filter_by(compute_node_id=node_id).\ filter_by(address=address) if query.update(values) == 0: device = models.PciDevice() device.update(values) context.session.add(device) return query.one() #################### @pick_context_manager_writer def instance_tag_add(context, instance_uuid, tag): tag_ref = models.Tag() tag_ref.resource_id = instance_uuid tag_ref.tag = tag try: _check_instance_exists_in_project(context, instance_uuid) with get_context_manager(context).writer.savepoint.using(context): context.session.add(tag_ref) except db_exc.DBDuplicateEntry: # NOTE(snikitin): We should ignore tags duplicates pass return tag_ref @pick_context_manager_writer def instance_tag_set(context, instance_uuid, tags): _check_instance_exists_in_project(context, instance_uuid) existing = context.session.query(models.Tag.tag).filter_by( resource_id=instance_uuid).all() existing = set(row.tag for row in existing) tags = set(tags) to_delete = existing - tags to_add = tags - existing if to_delete: context.session.query(models.Tag).filter_by( resource_id=instance_uuid).filter( models.Tag.tag.in_(to_delete)).delete( synchronize_session=False) if to_add: data = [ {'resource_id': instance_uuid, 'tag': tag} for tag in to_add] context.session.execute(models.Tag.__table__.insert(None), data) return context.session.query(models.Tag).filter_by( resource_id=instance_uuid).all() @pick_context_manager_reader def instance_tag_get_by_instance_uuid(context, instance_uuid): _check_instance_exists_in_project(context, instance_uuid) return context.session.query(models.Tag).filter_by( resource_id=instance_uuid).all() @pick_context_manager_writer def instance_tag_delete(context, instance_uuid, tag): _check_instance_exists_in_project(context, instance_uuid) result = context.session.query(models.Tag).filter_by( resource_id=instance_uuid, tag=tag).delete() if not result: raise exception.InstanceTagNotFound(instance_id=instance_uuid, tag=tag) @pick_context_manager_writer def instance_tag_delete_all(context, instance_uuid): _check_instance_exists_in_project(context, instance_uuid) context.session.query(models.Tag).filter_by( resource_id=instance_uuid).delete() @pick_context_manager_reader def instance_tag_exists(context, instance_uuid, tag): _check_instance_exists_in_project(context, instance_uuid) q = context.session.query(models.Tag).filter_by( resource_id=instance_uuid, tag=tag) return context.session.query(q.exists()).scalar() #################### @pick_context_manager_writer def console_auth_token_create(context, values): instance_uuid = values.get('instance_uuid') _check_instance_exists_in_project(context, instance_uuid) token_ref = models.ConsoleAuthToken() token_ref.update(values) context.session.add(token_ref) 
return token_ref @pick_context_manager_reader def console_auth_token_get_valid(context, token_hash, instance_uuid=None): if instance_uuid is not None: _check_instance_exists_in_project(context, instance_uuid) query = context.session.query(models.ConsoleAuthToken).\ filter_by(token_hash=token_hash) if instance_uuid is not None: query = query.filter_by(instance_uuid=instance_uuid) return query.filter( models.ConsoleAuthToken.expires > timeutils.utcnow_ts()).first() @pick_context_manager_writer def console_auth_token_destroy_all_by_instance(context, instance_uuid): context.session.query(models.ConsoleAuthToken).\ filter_by(instance_uuid=instance_uuid).delete() @pick_context_manager_writer def console_auth_token_destroy_expired(context): context.session.query(models.ConsoleAuthToken).\ filter(models.ConsoleAuthToken.expires <= timeutils.utcnow_ts()).\ delete() @pick_context_manager_writer def console_auth_token_destroy_expired_by_host(context, host): context.session.query(models.ConsoleAuthToken).\ filter_by(host=host).\ filter(models.ConsoleAuthToken.expires <= timeutils.utcnow_ts()).\ delete() ././@PaxHeader0000000000000000000000000000003200000000000011450 xustar000000000000000026 mtime=1636736378.31047 nova-21.2.4/nova/db/sqlalchemy/api_migrations/0000775000175000017500000000000000000000000021265 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/__init__.py0000664000175000017500000000000000000000000023364 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003200000000000011450 xustar000000000000000026 mtime=1636736378.31447 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/0000775000175000017500000000000000000000000023742 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/README0000664000175000017500000000015300000000000024621 0ustar00zuulzuul00000000000000This is a database migration repository. More information at http://code.google.com/p/sqlalchemy-migrate/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/__init__.py0000664000175000017500000000000000000000000026041 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/migrate.cfg0000664000175000017500000000173000000000000026054 0ustar00zuulzuul00000000000000[db_settings] # Used to identify which repository this database is versioned under. # You can use the name of your project. repository_id=nova_api # The name of the database table used to track the schema version. # This name shouldn't already be used by your project. # If this is changed once a database is under version control, you'll need to # change the table name in each database too. version_table=migrate_version # When committing a change script, Migrate will attempt to generate the # sql for all supported databases; normally, if one of them fails - probably # because you don't have that database installed - it is ignored and the # commit continues, perhaps ending successfully. # Databases in this list MUST compile successfully during a commit, or the # entire commit will fail. 
List the databases your application will actually # be using to ensure your updates to that database work properly. # This must be a list; example: ['postgres','sqlite'] required_dbs=[] ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3224702 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/0000775000175000017500000000000000000000000025612 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/001_cell_mapping.py0000664000175000017500000000277300000000000031207 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from migrate import UniqueConstraint from sqlalchemy import Column from sqlalchemy import DateTime from sqlalchemy import Index from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table from sqlalchemy import Text def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine cell_mappings = Table('cell_mappings', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('uuid', String(length=36), nullable=False), Column('name', String(length=255)), Column('transport_url', Text()), Column('database_connection', Text()), UniqueConstraint('uuid', name='uniq_cell_mappings0uuid'), Index('uuid_idx', 'uuid'), mysql_engine='InnoDB', mysql_charset='utf8' ) cell_mappings.create(checkfirst=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/002_instance_mapping.py0000664000175000017500000000344700000000000032074 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from migrate.changeset.constraint import ForeignKeyConstraint from migrate import UniqueConstraint from sqlalchemy import Column from sqlalchemy import DateTime from sqlalchemy import Index from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine cell_mappings = Table('cell_mappings', meta, autoload=True) instance_mappings = Table('instance_mappings', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('instance_uuid', String(length=36), nullable=False), Column('cell_id', Integer, nullable=False), Column('project_id', String(length=255), nullable=False), UniqueConstraint('instance_uuid', name='uniq_instance_mappings0instance_uuid'), Index('instance_uuid_idx', 'instance_uuid'), Index('project_id_idx', 'project_id'), ForeignKeyConstraint(columns=['cell_id'], refcolumns=[cell_mappings.c.id]), mysql_engine='InnoDB', mysql_charset='utf8' ) instance_mappings.create(checkfirst=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/003_host_mapping.py0000664000175000017500000000317200000000000031241 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from migrate.changeset.constraint import ForeignKeyConstraint from migrate import UniqueConstraint from sqlalchemy import Column from sqlalchemy import DateTime from sqlalchemy import Index from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine cell_mappings = Table('cell_mappings', meta, autoload=True) host_mappings = Table('host_mappings', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('cell_id', Integer, nullable=False), Column('host', String(length=255), nullable=False), UniqueConstraint('host', name='uniq_host_mappings0host'), Index('host_idx', 'host'), ForeignKeyConstraint(columns=['cell_id'], refcolumns=[cell_mappings.c.id]), mysql_engine='InnoDB', mysql_charset='utf8' ) host_mappings.create(checkfirst=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/004_add_request_spec.py0000664000175000017500000000274400000000000032070 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from migrate import UniqueConstraint from sqlalchemy import Column from sqlalchemy import DateTime from sqlalchemy import Index from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table from sqlalchemy import Text def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine request_specs = Table('request_specs', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('instance_uuid', String(36), nullable=False), Column('spec', Text, nullable=False), UniqueConstraint('instance_uuid', name='uniq_request_specs0instance_uuid'), Index('request_spec_instance_uuid_idx', 'instance_uuid'), mysql_engine='InnoDB', mysql_charset='utf8' ) request_specs.create(checkfirst=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/005_flavors.py0000664000175000017500000000637100000000000030233 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from migrate.changeset.constraint import ForeignKeyConstraint from migrate import UniqueConstraint from sqlalchemy import Boolean from sqlalchemy import Column from sqlalchemy import DateTime from sqlalchemy import Float from sqlalchemy import Index from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine flavors = Table('flavors', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('name', String(length=255), nullable=False), Column('id', Integer, primary_key=True, nullable=False), Column('memory_mb', Integer, nullable=False), Column('vcpus', Integer, nullable=False), Column('swap', Integer, nullable=False), Column('vcpu_weight', Integer), Column('flavorid', String(length=255), nullable=False), Column('rxtx_factor', Float), Column('root_gb', Integer), Column('ephemeral_gb', Integer), Column('disabled', Boolean), Column('is_public', Boolean), UniqueConstraint("flavorid", name="uniq_flavors0flavorid"), UniqueConstraint("name", name="uniq_flavors0name"), mysql_engine='InnoDB', mysql_charset='utf8' ) flavors.create(checkfirst=True) flavor_extra_specs = Table('flavor_extra_specs', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('flavor_id', Integer, nullable=False), Column('key', String(length=255), nullable=False), Column('value', String(length=255)), UniqueConstraint('flavor_id', 'key', name='uniq_flavor_extra_specs0flavor_id0key'), Index('flavor_extra_specs_flavor_id_key_idx', 'flavor_id', 'key'), ForeignKeyConstraint(columns=['flavor_id'], refcolumns=[flavors.c.id]), mysql_engine='InnoDB', mysql_charset='utf8' ) flavor_extra_specs.create(checkfirst=True) flavor_projects = Table('flavor_projects', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('flavor_id', Integer, nullable=False), Column('project_id', String(length=255), nullable=False), UniqueConstraint('flavor_id', 'project_id', name='uniq_flavor_projects0flavor_id0project_id'), ForeignKeyConstraint(columns=['flavor_id'], refcolumns=[flavors.c.id]), mysql_engine='InnoDB', mysql_charset='utf8' ) flavor_projects.create(checkfirst=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/006_build_request.py0000664000175000017500000000523400000000000031424 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from migrate.changeset.constraint import ForeignKeyConstraint from migrate import UniqueConstraint from sqlalchemy import Boolean from sqlalchemy import Column from sqlalchemy import DateTime from sqlalchemy import dialects from sqlalchemy import Enum from sqlalchemy import Index from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table from sqlalchemy import Text def InetSmall(): return String(length=39).with_variant(dialects.postgresql.INET(), 'postgresql') def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine request_specs = Table('request_specs', meta, autoload=True) build_requests = Table('build_requests', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('request_spec_id', Integer, nullable=False), Column('project_id', String(length=255), nullable=False), Column('user_id', String(length=255), nullable=False), Column('display_name', String(length=255)), Column('instance_metadata', Text), Column('progress', Integer), Column('vm_state', String(length=255)), Column('task_state', String(length=255)), Column('image_ref', String(length=255)), Column('access_ip_v4', InetSmall()), Column('access_ip_v6', InetSmall()), Column('info_cache', Text), Column('security_groups', Text, nullable=False), Column('config_drive', Boolean, default=False, nullable=False), Column('key_name', String(length=255)), Column('locked_by', Enum('owner', 'admin', name='build_requests0locked_by')), UniqueConstraint('request_spec_id', name='uniq_build_requests0request_spec_id'), Index('build_requests_project_id_idx', 'project_id'), ForeignKeyConstraint(columns=['request_spec_id'], refcolumns=[request_specs.c.id]), mysql_engine='InnoDB', mysql_charset='utf8' ) build_requests.create(checkfirst=True) ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/007_instance_mapping_nullable_cellid.py 22 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/007_instance_mapping_nullable_ce0000664000175000017500000000151200000000000033766 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import MetaData from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine instance_mapping = Table('instance_mappings', meta, autoload=True) instance_mapping.c.cell_id.alter(nullable=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/008_placeholder.py0000664000175000017500000000152200000000000031035 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/009_placeholder.py0000664000175000017500000000152200000000000031036 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/010_placeholder.py0000664000175000017500000000152200000000000031026 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/011_placeholder.py0000664000175000017500000000152200000000000031027 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/012_placeholder.py0000664000175000017500000000152200000000000031030 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/013_build_request_extended_attrs.py 22 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/013_build_request_extended_attrs0000664000175000017500000000365600000000000034076 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from migrate import UniqueConstraint from sqlalchemy import Column from sqlalchemy.engine import reflection from sqlalchemy import Index from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table from sqlalchemy import Text def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine build_requests = Table('build_requests', meta, autoload=True) columns_to_add = [ ('instance_uuid', Column('instance_uuid', String(length=36))), ('instance', Column('instance', Text())), ] for (col_name, column) in columns_to_add: if not hasattr(build_requests.c, col_name): build_requests.create_column(column) for index in build_requests.indexes: if [c.name for c in index.columns] == ['instance_uuid']: break else: index = Index('build_requests_instance_uuid_idx', build_requests.c.instance_uuid) index.create() inspector = reflection.Inspector.from_engine(migrate_engine) constrs = inspector.get_unique_constraints('build_requests') constr_names = [constr['name'] for constr in constrs] if 'uniq_build_requests0instance_uuid' not in constr_names: UniqueConstraint('instance_uuid', table=build_requests, name='uniq_build_requests0instance_uuid').create() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/014_keypairs.py0000664000175000017500000000333600000000000030404 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from migrate import UniqueConstraint from sqlalchemy import Column from sqlalchemy import DateTime from sqlalchemy import Enum from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table from sqlalchemy import Text from nova.objects import keypair def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine enum = Enum('ssh', 'x509', metadata=meta, name='keypair_types') enum.create(checkfirst=True) keypairs = Table('key_pairs', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('name', String(255), nullable=False), Column('user_id', String(255), nullable=False), Column('fingerprint', String(255)), Column('public_key', Text()), Column('type', enum, nullable=False, server_default=keypair.KEYPAIR_TYPE_SSH), UniqueConstraint('user_id', 'name', name="uniq_key_pairs0user_id0name"), mysql_engine='InnoDB', mysql_charset='utf8' ) keypairs.create(checkfirst=True) ././@PaxHeader0000000000000000000000000000021100000000000011447 xustar0000000000000000115 path=nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/015_build_request_nullable_columns.py 22 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/015_build_request_nullable_colum0000664000175000017500000000404700000000000034053 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from migrate.changeset.constraint import ForeignKeyConstraint from migrate import UniqueConstraint from sqlalchemy.engine import reflection from sqlalchemy import MetaData from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine build_requests = Table('build_requests', meta, autoload=True) request_specs = Table('request_specs', meta, autoload=True) for fkey in build_requests.foreign_keys: if fkey.target_fullname == 'request_specs.id': ForeignKeyConstraint(columns=['request_spec_id'], refcolumns=[request_specs.c.id], table=build_requests, name=fkey.name).drop() break # These are being made nullable because they are no longer used after the # addition of the instance column. However they need a deprecation period # before they can be dropped. 
columns_to_nullify = ['request_spec_id', 'user_id', 'security_groups', 'config_drive'] for column in columns_to_nullify: getattr(build_requests.c, column).alter(nullable=True) inspector = reflection.Inspector.from_engine(migrate_engine) constrs = inspector.get_unique_constraints('build_requests') constr_names = [constr['name'] for constr in constrs] if 'uniq_build_requests0request_spec_id' in constr_names: UniqueConstraint('request_spec_id', table=build_requests, name='uniq_build_requests0request_spec_id').drop() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/016_resource_providers.py0000664000175000017500000001061700000000000032503 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Database migrations for resource-providers.""" from migrate import UniqueConstraint from sqlalchemy import Column from sqlalchemy import DateTime from sqlalchemy import Float from sqlalchemy import Index from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table from sqlalchemy import Unicode def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine if migrate_engine.name == 'mysql': nameargs = {'collation': 'utf8_bin'} else: nameargs = {} resource_providers = Table( 'resource_providers', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('uuid', String(36), nullable=False), Column('name', Unicode(200, **nameargs), nullable=True), Column('generation', Integer, default=0), Column('can_host', Integer, default=0), UniqueConstraint('uuid', name='uniq_resource_providers0uuid'), UniqueConstraint('name', name='uniq_resource_providers0name'), Index('resource_providers_name_idx', 'name'), Index('resource_providers_uuid_idx', 'uuid'), mysql_engine='InnoDB', mysql_charset='latin1' ) inventories = Table( 'inventories', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('resource_provider_id', Integer, nullable=False), Column('resource_class_id', Integer, nullable=False), Column('total', Integer, nullable=False), Column('reserved', Integer, nullable=False), Column('min_unit', Integer, nullable=False), Column('max_unit', Integer, nullable=False), Column('step_size', Integer, nullable=False), Column('allocation_ratio', Float, nullable=False), Index('inventories_resource_provider_id_idx', 'resource_provider_id'), Index('inventories_resource_provider_resource_class_idx', 'resource_provider_id', 'resource_class_id'), Index('inventories_resource_class_id_idx', 'resource_class_id'), UniqueConstraint('resource_provider_id', 'resource_class_id', name='uniq_inventories0resource_provider_resource_class'), mysql_engine='InnoDB', mysql_charset='latin1' ) allocations = Table( 'allocations', meta, Column('created_at', DateTime), Column('updated_at', DateTime), 
Column('id', Integer, primary_key=True, nullable=False), Column('resource_provider_id', Integer, nullable=False), Column('consumer_id', String(36), nullable=False), Column('resource_class_id', Integer, nullable=False), Column('used', Integer, nullable=False), Index('allocations_resource_provider_class_used_idx', 'resource_provider_id', 'resource_class_id', 'used'), Index('allocations_resource_class_id_idx', 'resource_class_id'), Index('allocations_consumer_id_idx', 'consumer_id'), mysql_engine='InnoDB', mysql_charset='latin1' ) resource_provider_aggregates = Table( 'resource_provider_aggregates', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('resource_provider_id', Integer, primary_key=True, nullable=False), Column('aggregate_id', Integer, primary_key=True, nullable=False), Index('resource_provider_aggregates_aggregate_id_idx', 'aggregate_id'), mysql_engine='InnoDB', mysql_charset='latin1' ) for table in [resource_providers, inventories, allocations, resource_provider_aggregates]: table.create(checkfirst=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/017_aggregates.py0000664000175000017500000000530500000000000030667 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""API Database migrations for aggregates""" from migrate import UniqueConstraint from sqlalchemy import Column from sqlalchemy import DateTime from sqlalchemy import ForeignKey from sqlalchemy import Index from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine aggregates = Table('aggregates', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('uuid', String(length=36)), Column('name', String(length=255)), Index('aggregate_uuid_idx', 'uuid'), UniqueConstraint('name', name='uniq_aggregate0name'), mysql_engine='InnoDB', mysql_charset='utf8' ) aggregates.create(checkfirst=True) aggregate_hosts = Table('aggregate_hosts', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('host', String(length=255)), Column('aggregate_id', Integer, ForeignKey('aggregates.id'), nullable=False), UniqueConstraint('host', 'aggregate_id', name='uniq_aggregate_hosts0host0aggregate_id'), mysql_engine='InnoDB', mysql_charset='utf8' ) aggregate_hosts.create(checkfirst=True) aggregate_metadata = Table('aggregate_metadata', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('aggregate_id', Integer, ForeignKey('aggregates.id'), nullable=False), Column('key', String(length=255), nullable=False), Column('value', String(length=255), nullable=False), UniqueConstraint('aggregate_id', 'key', name='uniq_aggregate_metadata0aggregate_id0key'), Index('aggregate_metadata_key_idx', 'key'), mysql_engine='InnoDB', mysql_charset='utf8' ) aggregate_metadata.create(checkfirst=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/018_instance_groups.py0000664000175000017500000000511100000000000031755 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""API Database migrations for instance_groups""" from migrate import UniqueConstraint from sqlalchemy import Column from sqlalchemy import DateTime from sqlalchemy import ForeignKey from sqlalchemy import Index from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine groups = Table('instance_groups', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('user_id', String(length=255)), Column('project_id', String(length=255)), Column('uuid', String(length=36), nullable=False), Column('name', String(length=255)), UniqueConstraint('uuid', name='uniq_instance_groups0uuid'), mysql_engine='InnoDB', mysql_charset='utf8', ) groups.create(checkfirst=True) group_policy = Table('instance_group_policy', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('policy', String(length=255)), Column('group_id', Integer, ForeignKey('instance_groups.id'), nullable=False), Index('instance_group_policy_policy_idx', 'policy'), mysql_engine='InnoDB', mysql_charset='utf8', ) group_policy.create(checkfirst=True) group_member = Table('instance_group_member', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('instance_uuid', String(length=255)), Column('group_id', Integer, ForeignKey('instance_groups.id'), nullable=False), Index('instance_group_member_instance_idx', 'instance_uuid'), mysql_engine='InnoDB', mysql_charset='utf8', ) group_member.create(checkfirst=True) ././@PaxHeader0000000000000000000000000000022100000000000011450 xustar0000000000000000123 path=nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/019_build_request_add_block_device_mapping.py 22 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/019_build_request_add_block_devi0000664000175000017500000000173100000000000033770 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import Table from sqlalchemy import Text def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine build_requests = Table('build_requests', meta, autoload=True) if not hasattr(build_requests.c, 'block_device_mappings'): build_requests.create_column(Column('block_device_mappings', Text())) ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/020_block_device_mappings_mediumtext.py 22 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/020_block_device_mappings_medium0000664000175000017500000000161300000000000033766 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import MetaData from sqlalchemy import Table from nova.db.sqlalchemy import api_models def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine build_requests = Table('build_requests', meta, autoload=True) build_requests.c.block_device_mappings.alter(type=api_models.MediumText()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/021_placeholder.py0000664000175000017500000000152200000000000031030 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/022_placeholder.py0000664000175000017500000000152200000000000031031 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/023_placeholder.py0000664000175000017500000000152200000000000031032 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/024_placeholder.py0000664000175000017500000000152200000000000031033 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/025_placeholder.py0000664000175000017500000000152200000000000031034 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/026_add_resource_classes.py0000664000175000017500000000245200000000000032732 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from migrate import UniqueConstraint from sqlalchemy import Column from sqlalchemy import DateTime from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine resource_classes = Table('resource_classes', meta, Column('id', Integer, primary_key=True, nullable=False), Column('name', String(length=255), nullable=False), Column('created_at', DateTime), Column('updated_at', DateTime), UniqueConstraint('name', name='uniq_resource_classes0name'), mysql_engine='InnoDB', mysql_charset='latin1' ) resource_classes.create(checkfirst=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/027_quotas.py0000664000175000017500000001107500000000000030074 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""API Database migrations for quotas""" from migrate import UniqueConstraint from sqlalchemy import Column from sqlalchemy import DateTime from sqlalchemy import ForeignKey from sqlalchemy import Index from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine quota_classes = Table('quota_classes', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('class_name', String(length=255)), Column('resource', String(length=255)), Column('hard_limit', Integer), Index('quota_classes_class_name_idx', 'class_name'), mysql_engine='InnoDB', mysql_charset='utf8' ) quota_classes.create(checkfirst=True) quota_usages = Table('quota_usages', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('project_id', String(length=255)), Column('resource', String(length=255), nullable=False), Column('in_use', Integer, nullable=False), Column('reserved', Integer, nullable=False), Column('until_refresh', Integer), Column('user_id', String(length=255)), Index('quota_usages_project_id_idx', 'project_id'), Index('quota_usages_user_id_idx', 'user_id'), mysql_engine='InnoDB', mysql_charset='utf8' ) quota_usages.create(checkfirst=True) quotas = Table('quotas', meta, Column('id', Integer, primary_key=True, nullable=False), Column('created_at', DateTime), Column('updated_at', DateTime), Column('project_id', String(length=255)), Column('resource', String(length=255), nullable=False), Column('hard_limit', Integer), UniqueConstraint('project_id', 'resource', name='uniq_quotas0project_id0resource'), mysql_engine='InnoDB', mysql_charset='utf8' ) quotas.create(checkfirst=True) uniq_name = "uniq_project_user_quotas0user_id0project_id0resource" project_user_quotas = Table('project_user_quotas', meta, Column('id', Integer, primary_key=True, nullable=False), Column('created_at', DateTime), Column('updated_at', DateTime), Column('user_id', String(length=255), nullable=False), Column('project_id', String(length=255), nullable=False), Column('resource', String(length=255), nullable=False), Column('hard_limit', Integer, nullable=True), UniqueConstraint('user_id', 'project_id', 'resource', name=uniq_name), Index('project_user_quotas_project_id_idx', 'project_id'), Index('project_user_quotas_user_id_idx', 'user_id'), mysql_engine='InnoDB', mysql_charset='utf8', ) project_user_quotas.create(checkfirst=True) reservations = Table('reservations', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('uuid', String(length=36), nullable=False), Column('usage_id', Integer, ForeignKey('quota_usages.id'), nullable=False), Column('project_id', String(length=255)), Column('resource', String(length=255)), Column('delta', Integer, nullable=False), Column('expire', DateTime), Column('user_id', String(length=255)), Index('reservations_project_id_idx', 'project_id'), Index('reservations_uuid_idx', 'uuid'), Index('reservations_expire_idx', 'expire'), Index('reservations_user_id_idx', 'user_id'), mysql_engine='InnoDB', mysql_charset='utf8' ) reservations.create(checkfirst=True) ././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/028_build_requests_instance_mediumtext.py 22 
mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/028_build_requests_instance_medi0000664000175000017500000000157600000000000034053 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import MetaData from sqlalchemy import Table from nova.db.sqlalchemy import api_models def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine build_requests = Table('build_requests', meta, autoload=True) build_requests.c.instance.alter(type=api_models.MediumText()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/029_placement_aggregates.py0000664000175000017500000000255400000000000032725 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """API Database migrations for placement_aggregates""" from migrate import UniqueConstraint from sqlalchemy import Column from sqlalchemy import DateTime from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine placement_aggregates = Table('placement_aggregates', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('uuid', String(length=36), index=True), UniqueConstraint('uuid', name='uniq_placement_aggregates0uuid'), mysql_engine='InnoDB', mysql_charset='latin1' ) placement_aggregates.create(checkfirst=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/030_require_cell_setup.py0000664000175000017500000000473400000000000032451 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_log import log as logging
from sqlalchemy import MetaData, Table, func, select

from nova import exception
from nova.i18n import _
from nova import objects


LOG = logging.getLogger(__name__)


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine
    flavors = Table('flavors', meta, autoload=True)
    count = select([func.count()]).select_from(flavors).scalar()
    if count == 0:
        # NOTE(danms): We need to be careful here if this is a new
        # installation, which can't possibly have any mappings. Check
        # to see if any flavors are defined to determine if we are
        # upgrading an existing system. If not, then don't obsess over
        # the lack of mappings
        return

    cell_mappings = Table('cell_mappings', meta, autoload=True)
    count = select([func.count()]).select_from(cell_mappings).scalar()
    # Two mappings are required at a minimum, cell0 and your first cell
    if count < 2:
        msg = _('Cell mappings are not created, but required for Ocata. '
                'Please run nova-manage cell_v2 simple_cell_setup before '
                'continuing.')
        raise exception.ValidationError(detail=msg)

    count = select([func.count()]).select_from(cell_mappings).where(
        cell_mappings.c.uuid == objects.CellMapping.CELL0_UUID).scalar()
    if count != 1:
        msg = _('A mapping for Cell0 was not found, but is required for '
                'Ocata. Please run nova-manage cell_v2 simple_cell_setup '
                'before continuing.')
        raise exception.ValidationError(detail=msg)

    host_mappings = Table('host_mappings', meta, autoload=True)
    count = select([func.count()]).select_from(host_mappings).scalar()
    if count == 0:
        LOG.warning('No host mappings were found, but are required for Ocata. '
                    'Please run nova-manage cell_v2 simple_cell_setup before '
                    'continuing.')

././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/031_placeholder.py0000664000175000017500000000152200000000000031031 0ustar00zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# This is a placeholder for backports.
# Do not use this number for new work. New work starts after
# all the placeholders.
#
# See this for more information:
# http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html


def upgrade(migrate_engine):
    pass

././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/032_placeholder.py0000664000175000017500000000152200000000000031032 0ustar00zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/033_placeholder.py0000664000175000017500000000152200000000000031033 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/034_placeholder.py0000664000175000017500000000152200000000000031034 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/035_placeholder.py0000664000175000017500000000152200000000000031035 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/036_placeholder.py0000664000175000017500000000152200000000000031036 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/037_placeholder.py0000664000175000017500000000152200000000000031037 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/038_placeholder.py0000664000175000017500000000152200000000000031040 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/039_placeholder.py0000664000175000017500000000152200000000000031041 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/040_placeholder.py0000664000175000017500000000152200000000000031031 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/041_resource_provider_traits.py0000664000175000017500000000460100000000000033700 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Database migrations for traits""" from migrate.changeset.constraint import ForeignKeyConstraint from migrate import UniqueConstraint from sqlalchemy import Column from sqlalchemy import DateTime from sqlalchemy import ForeignKey from sqlalchemy import Index from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import Table from sqlalchemy import Unicode def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine if migrate_engine.name == 'mysql': nameargs = {'collation': 'utf8_bin'} else: nameargs = {} resource_providers = Table('resource_providers', meta, autoload=True) traits = Table( 'traits', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False, autoincrement=True), Column('name', Unicode(255, **nameargs), nullable=False), UniqueConstraint('name', name='uniq_traits0name'), mysql_engine='InnoDB', mysql_charset='latin1' ) resource_provider_traits = Table( 'resource_provider_traits', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('trait_id', Integer, ForeignKey('traits.id'), primary_key=True, nullable=False), Column('resource_provider_id', Integer, primary_key=True, nullable=False), Index('resource_provider_traits_resource_provider_trait_idx', 'resource_provider_id', 'trait_id'), ForeignKeyConstraint(columns=['resource_provider_id'], refcolumns=[resource_providers.c.id]), mysql_engine='InnoDB', mysql_charset='latin1' ) for table in [traits, resource_provider_traits]: table.create(checkfirst=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/042_build_requests_add_tags.py0000664000175000017500000000166700000000000033443 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import Table from sqlalchemy import Text def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine build_requests = Table('build_requests', meta, autoload=True) if not hasattr(build_requests.c, 'tags'): build_requests.create_column(Column('tags', Text())) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/043_placement_consumers.py0000664000175000017500000000324300000000000032622 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """Database migrations for consumers""" from migrate import UniqueConstraint from sqlalchemy import Column from sqlalchemy import DateTime from sqlalchemy import Index from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine consumers = Table('consumers', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False, autoincrement=True), Column('uuid', String(length=36), nullable=False), Column('project_id', String(length=255), nullable=False), Column('user_id', String(length=255), nullable=False), Index('consumers_project_id_uuid_idx', 'project_id', 'uuid'), Index('consumers_project_id_user_id_uuid_idx', 'project_id', 'user_id', 'uuid'), UniqueConstraint('uuid', name='uniq_consumers0uuid'), mysql_engine='InnoDB', mysql_charset='latin1' ) consumers.create(checkfirst=True) ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/044_placement_add_projects_users.py 22 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/044_placement_add_projects_users0000664000175000017500000000513200000000000034037 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Streamlines consumers table and adds projects and users table""" from migrate import UniqueConstraint from sqlalchemy import Column from sqlalchemy import DateTime from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine projects = Table('projects', meta, Column('id', Integer, primary_key=True, nullable=False, autoincrement=True), Column('external_id', String(length=255), nullable=False), Column('created_at', DateTime), Column('updated_at', DateTime), UniqueConstraint('external_id', name='uniq_projects0external_id'), mysql_engine='InnoDB', mysql_charset='latin1' ) projects.create(checkfirst=True) users = Table('users', meta, Column('id', Integer, primary_key=True, nullable=False, autoincrement=True), Column('external_id', String(length=255), nullable=False), Column('created_at', DateTime), Column('updated_at', DateTime), UniqueConstraint('external_id', name='uniq_users0external_id'), mysql_engine='InnoDB', mysql_charset='latin1' ) users.create(checkfirst=True) consumers = Table('consumers', meta, autoload=True) project_id_col = consumers.c.project_id user_id_col = consumers.c.user_id # NOTE(jaypipes): For PostgreSQL, we can't do col.alter(type=Integer) # because NVARCHAR and INTEGER are not compatible, so we need to do this # manual ALTER TABLE ... USING approach. 
if migrate_engine.name == 'postgresql': migrate_engine.execute( "ALTER TABLE consumers ALTER COLUMN project_id " "TYPE INTEGER USING project_id::integer" ) migrate_engine.execute( "ALTER TABLE consumers ALTER COLUMN user_id " "TYPE INTEGER USING user_id::integer" ) else: project_id_col.alter(type=Integer) user_id_col.alter(type=Integer) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/045_placeholder.py0000664000175000017500000000152200000000000031036 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/046_placeholder.py0000664000175000017500000000152200000000000031037 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/047_placeholder.py0000664000175000017500000000152200000000000031040 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/048_placeholder.py0000664000175000017500000000152200000000000031041 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/049_placeholder.py0000664000175000017500000000152200000000000031042 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/050_flavors_add_description.py0000664000175000017500000000165100000000000033442 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
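# NOTE(editor): illustrative sketch only, not part of the original tarball.
# The NOTE(jaypipes) comment in migration 044 above explains why PostgreSQL
# needs an explicit ``ALTER TABLE ... USING`` cast when a VARCHAR column is
# retyped to INTEGER: there is no implicit varchar-to-integer cast, so a
# plain ``col.alter(type=Integer)`` fails on that backend. The generic shape
# of the fallback, using a hypothetical ``example`` table and ``some_id``
# column, would be roughly:

def _retype_varchar_to_int_sketch(migrate_engine):
    from sqlalchemy import Integer
    from sqlalchemy import MetaData
    from sqlalchemy import Table

    if migrate_engine.name == 'postgresql':
        # Explicit cast via USING, as done for consumers in migration 044.
        migrate_engine.execute(
            "ALTER TABLE example ALTER COLUMN some_id "
            "TYPE INTEGER USING some_id::integer")
    else:
        meta = MetaData()
        meta.bind = migrate_engine
        table = Table('example', meta, autoload=True)
        table.c.some_id.alter(type=Integer)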
from sqlalchemy import Column
from sqlalchemy import MetaData
from sqlalchemy import Table
from sqlalchemy import Text


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine

    flavors = Table('flavors', meta, autoload=True)
    if not hasattr(flavors.c, 'description'):
        flavors.create_column(Column('description', Text()))

././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/051_nested_resource_providers.py0000664000175000017500000000361400000000000034043 0ustar00zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from sqlalchemy import Column
from sqlalchemy import ForeignKey
from sqlalchemy import Index
from sqlalchemy import Integer
from sqlalchemy import MetaData
from sqlalchemy import Table


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine

    resource_providers = Table('resource_providers', meta, autoload=True)

    columns_to_add = [
        ('root_provider_id',
         Column('root_provider_id', Integer,
                ForeignKey('resource_providers.id'))),
        ('parent_provider_id',
         Column('parent_provider_id', Integer,
                ForeignKey('resource_providers.id'))),
    ]
    for col_name, column in columns_to_add:
        if not hasattr(resource_providers.c, col_name):
            resource_providers.create_column(column)

    indexed_columns = set()
    for idx in resource_providers.indexes:
        for c in idx.columns:
            indexed_columns.add(c.name)

    if 'root_provider_id' not in indexed_columns:
        index = Index('resource_providers_root_provider_id_idx',
                      resource_providers.c.root_provider_id)
        index.create()

    if 'parent_provider_id' not in indexed_columns:
        index = Index('resource_providers_parent_provider_id_idx',
                      resource_providers.c.parent_provider_id)
        index.create()

././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/052_request_specs_spec_mediumtext.py 22 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/052_request_specs_spec_mediumtex0000664000175000017500000000167000000000000034107 0ustar00zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
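# NOTE(editor): illustrative sketch only, not part of the original tarball.
# The two self-referential foreign keys added by migration 051 above let
# placement model trees of resource providers: a root provider typically
# records its own id as root_provider_id and has no parent, while children
# carry both the tree root and their direct parent. The ids and names below
# are made up purely to show the shape of such rows:

_EXAMPLE_PROVIDER_TREE = [
    # (id, name, root_provider_id, parent_provider_id)
    (1, 'compute-node-1', 1, None),
    (2, 'compute-node-1-pf0', 1, 1),
    (3, 'compute-node-1-pf0-vf-pool', 1, 2),
]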
from sqlalchemy import MetaData from sqlalchemy import Table from nova.db.sqlalchemy import api_models def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine request_specs = Table('request_specs', meta, autoload=True) if request_specs.c.spec.type != api_models.MediumText(): request_specs.c.spec.alter(type=api_models.MediumText()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/053_placeholder.py0000664000175000017500000000152100000000000031034 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/054_placeholder.py0000664000175000017500000000152100000000000031035 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/055_placeholder.py0000664000175000017500000000152100000000000031036 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/056_placeholder.py0000664000175000017500000000152100000000000031037 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/057_placeholder.py0000664000175000017500000000152100000000000031040 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/058_cell_mapping_add_disabled.py0000664000175000017500000000171600000000000033656 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from sqlalchemy import Boolean
from sqlalchemy import Column
from sqlalchemy import MetaData
from sqlalchemy import Table


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine

    cell_mappings = Table('cell_mappings', meta, autoload=True)
    if not hasattr(cell_mappings.c, 'disabled'):
        cell_mappings.create_column(Column('disabled', Boolean,
                                           default=False))

././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/059_add_consumer_generation.py0000664000175000017500000000223200000000000033436 0ustar00zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from sqlalchemy import Column
from sqlalchemy import Integer
from sqlalchemy import MetaData
from sqlalchemy import Table
from sqlalchemy import text


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine
    consumers = Table("consumers", meta, autoload=True)
    if not hasattr(consumers.c, "generation"):
        # This is adding a column to an existing table, so the server_default
        # bit will make existing rows 0 for that column.
        consumers.create_column(Column("generation", Integer, default=0,
                                       server_default=text("0"),
                                       nullable=False))

././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/060_instance_group_policy_add_rules.py 22 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/060_instance_group_policy_add_ru0000664000175000017500000000165400000000000034045 0ustar00zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
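# NOTE(editor): illustrative sketch only, not part of the original tarball.
# In migration 059 above, ``default=0`` is only a client-side value that
# SQLAlchemy supplies on new INSERTs, while ``server_default=text("0")``
# becomes a DEFAULT clause in the emitted DDL, which is what lets the
# NOT NULL column be added while backfilling existing consumer rows with 0
# (as the original comment notes). The emitted statement is roughly
# equivalent to the hand-written form below; exact DDL text varies by
# backend.

def _add_generation_sql_sketch(migrate_engine):
    migrate_engine.execute(
        "ALTER TABLE consumers "
        "ADD COLUMN generation INTEGER NOT NULL DEFAULT 0")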
from sqlalchemy import Column
from sqlalchemy import MetaData
from sqlalchemy import Table
from sqlalchemy import Text


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine

    policies = Table('instance_group_policy', meta, autoload=True)
    if not hasattr(policies.c, 'rules'):
        policies.create_column(Column('rules', Text))

././@PaxHeader0000000000000000000000000000022100000000000011450 xustar0000000000000000123 path=nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/061_instance_mapping_add_queued_for_delete.py 22 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/061_instance_mapping_add_queued_0000664000175000017500000000177400000000000033770 0ustar00zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from sqlalchemy import Boolean
from sqlalchemy import Column
from sqlalchemy import MetaData
from sqlalchemy import Table


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine
    instance_mappings = Table('instance_mappings', meta, autoload=True)
    if not hasattr(instance_mappings.c, 'queued_for_delete'):
        instance_mappings.create_column(Column('queued_for_delete', Boolean,
                                               default=False))

././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/062_instance_mapping_add_user_id.py 22 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/062_instance_mapping_add_user_id0000664000175000017500000000232100000000000033763 0ustar00zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import Column from sqlalchemy import Index from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine instance_mappings = Table('instance_mappings', meta, autoload=True) if not hasattr(instance_mappings.c, 'user_id'): instance_mappings.create_column(Column('user_id', String(length=255), nullable=True)) index = Index('instance_mappings_user_id_project_id_idx', instance_mappings.c.user_id, instance_mappings.c.project_id) index.create() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/063_placeholder.py0000664000175000017500000000152100000000000031035 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/064_placeholder.py0000664000175000017500000000152100000000000031036 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/065_placeholder.py0000664000175000017500000000152100000000000031037 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
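# NOTE(editor): illustrative sketch only, not part of the original tarball.
# The composite index added by migration 062 above appears intended to serve
# queries that filter instance_mappings by user within a project (for
# example, per-user counting), so they do not require a table scan. A query
# of roughly this shape is what the (user_id, project_id) index supports;
# the helper below is a sketch using the same bound-metadata style as the
# migrations themselves.

def _count_user_instances_sketch(engine, project_id, user_id):
    from sqlalchemy import MetaData
    from sqlalchemy import Table
    from sqlalchemy import func
    from sqlalchemy import select

    meta = MetaData(bind=engine)
    mappings = Table('instance_mappings', meta, autoload=True)
    return select([func.count()]).select_from(mappings).where(
        mappings.c.user_id == user_id).where(
        mappings.c.project_id == project_id).scalar()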
# This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/066_placeholder.py0000664000175000017500000000152100000000000031040 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/067_placeholder.py0000664000175000017500000000152100000000000031041 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/068_placeholder.py0000664000175000017500000000152100000000000031042 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/069_placeholder.py0000664000175000017500000000152100000000000031043 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/070_placeholder.py0000664000175000017500000000152100000000000031033 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/071_placeholder.py0000664000175000017500000000152100000000000031034 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/072_placeholder.py0000664000175000017500000000152100000000000031035 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/__init__.py0000664000175000017500000000000000000000000027711 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/api_models.py0000664000175000017500000005736700000000000020770 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_db.sqlalchemy import models from oslo_log import log as logging from sqlalchemy import Boolean from sqlalchemy import Column from sqlalchemy import DateTime from sqlalchemy.dialects.mysql import MEDIUMTEXT from sqlalchemy import Enum from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import Float from sqlalchemy import ForeignKey from sqlalchemy import Index from sqlalchemy import Integer from sqlalchemy import orm from sqlalchemy.orm import backref from sqlalchemy import schema from sqlalchemy import String from sqlalchemy import Text from sqlalchemy import Unicode LOG = logging.getLogger(__name__) def MediumText(): return Text().with_variant(MEDIUMTEXT(), 'mysql') class _NovaAPIBase(models.ModelBase, models.TimestampMixin): pass API_BASE = declarative_base(cls=_NovaAPIBase) class AggregateHost(API_BASE): """Represents a host that is member of an aggregate.""" __tablename__ = 'aggregate_hosts' __table_args__ = (schema.UniqueConstraint( "host", "aggregate_id", name="uniq_aggregate_hosts0host0aggregate_id" ), ) id = Column(Integer, primary_key=True, autoincrement=True) host = Column(String(255)) aggregate_id = Column(Integer, ForeignKey('aggregates.id'), nullable=False) class AggregateMetadata(API_BASE): """Represents a metadata key/value pair for an aggregate.""" __tablename__ = 'aggregate_metadata' __table_args__ = ( schema.UniqueConstraint("aggregate_id", "key", name="uniq_aggregate_metadata0aggregate_id0key" ), Index('aggregate_metadata_key_idx', 'key'), ) id = Column(Integer, primary_key=True) key = Column(String(255), nullable=False) value = Column(String(255), nullable=False) aggregate_id = Column(Integer, ForeignKey('aggregates.id'), nullable=False) class Aggregate(API_BASE): """Represents a cluster of hosts that exists in this zone.""" __tablename__ = 'aggregates' __table_args__ = (Index('aggregate_uuid_idx', 'uuid'), schema.UniqueConstraint( "name", name="uniq_aggregate0name") ) id = Column(Integer, primary_key=True, autoincrement=True) uuid = Column(String(36)) name = Column(String(255)) _hosts = orm.relationship(AggregateHost, primaryjoin='Aggregate.id == AggregateHost.aggregate_id', cascade='delete') _metadata = orm.relationship(AggregateMetadata, primaryjoin='Aggregate.id == AggregateMetadata.aggregate_id', cascade='delete') @property def _extra_keys(self): return ['hosts', 'metadetails', 'availability_zone'] @property def hosts(self): return [h.host for h in self._hosts] @property def metadetails(self): return {m.key: m.value for m in self._metadata} @property def availability_zone(self): if 'availability_zone' not in self.metadetails: return None return self.metadetails['availability_zone'] class CellMapping(API_BASE): """Contains information on communicating with a cell""" __tablename__ = 'cell_mappings' __table_args__ = (Index('uuid_idx', 'uuid'), schema.UniqueConstraint('uuid', name='uniq_cell_mappings0uuid')) id = Column(Integer, primary_key=True) uuid = Column(String(36), nullable=False) name = Column(String(255)) transport_url = Column(Text()) database_connection = Column(Text()) disabled = Column(Boolean, default=False) host_mapping = orm.relationship('HostMapping', backref=backref('cell_mapping', uselist=False), foreign_keys=id, primaryjoin=( 'CellMapping.id == HostMapping.cell_id')) class InstanceMapping(API_BASE): """Contains the mapping of an instance to which cell it is in""" __tablename__ = 'instance_mappings' __table_args__ = (Index('project_id_idx', 'project_id'), Index('instance_uuid_idx', 'instance_uuid'), 
schema.UniqueConstraint('instance_uuid', name='uniq_instance_mappings0instance_uuid'), Index('instance_mappings_user_id_project_id_idx', 'user_id', 'project_id')) id = Column(Integer, primary_key=True) instance_uuid = Column(String(36), nullable=False) cell_id = Column(Integer, ForeignKey('cell_mappings.id'), nullable=True) project_id = Column(String(255), nullable=False) # FIXME(melwitt): This should eventually be non-nullable, but we need a # transition period first. user_id = Column(String(255), nullable=True) queued_for_delete = Column(Boolean) cell_mapping = orm.relationship('CellMapping', backref=backref('instance_mapping', uselist=False), foreign_keys=cell_id, primaryjoin=('InstanceMapping.cell_id == CellMapping.id')) class HostMapping(API_BASE): """Contains mapping of a compute host to which cell it is in""" __tablename__ = "host_mappings" __table_args__ = (Index('host_idx', 'host'), schema.UniqueConstraint('host', name='uniq_host_mappings0host')) id = Column(Integer, primary_key=True) cell_id = Column(Integer, ForeignKey('cell_mappings.id'), nullable=False) host = Column(String(255), nullable=False) class RequestSpec(API_BASE): """Represents the information passed to the scheduler.""" __tablename__ = 'request_specs' __table_args__ = ( Index('request_spec_instance_uuid_idx', 'instance_uuid'), schema.UniqueConstraint('instance_uuid', name='uniq_request_specs0instance_uuid'), ) id = Column(Integer, primary_key=True) instance_uuid = Column(String(36), nullable=False) spec = Column(MediumText(), nullable=False) class Flavors(API_BASE): """Represents possible flavors for instances""" __tablename__ = 'flavors' __table_args__ = ( schema.UniqueConstraint("flavorid", name="uniq_flavors0flavorid"), schema.UniqueConstraint("name", name="uniq_flavors0name")) id = Column(Integer, primary_key=True) name = Column(String(255), nullable=False) memory_mb = Column(Integer, nullable=False) vcpus = Column(Integer, nullable=False) root_gb = Column(Integer) ephemeral_gb = Column(Integer) flavorid = Column(String(255), nullable=False) swap = Column(Integer, nullable=False, default=0) rxtx_factor = Column(Float, default=1) vcpu_weight = Column(Integer) disabled = Column(Boolean, default=False) is_public = Column(Boolean, default=True) description = Column(Text) class FlavorExtraSpecs(API_BASE): """Represents additional specs as key/value pairs for a flavor""" __tablename__ = 'flavor_extra_specs' __table_args__ = ( Index('flavor_extra_specs_flavor_id_key_idx', 'flavor_id', 'key'), schema.UniqueConstraint('flavor_id', 'key', name='uniq_flavor_extra_specs0flavor_id0key'), {'mysql_collate': 'utf8_bin'}, ) id = Column(Integer, primary_key=True) key = Column(String(255), nullable=False) value = Column(String(255)) flavor_id = Column(Integer, ForeignKey('flavors.id'), nullable=False) flavor = orm.relationship(Flavors, backref='extra_specs', foreign_keys=flavor_id, primaryjoin=( 'FlavorExtraSpecs.flavor_id == Flavors.id')) class FlavorProjects(API_BASE): """Represents projects associated with flavors""" __tablename__ = 'flavor_projects' __table_args__ = (schema.UniqueConstraint('flavor_id', 'project_id', name='uniq_flavor_projects0flavor_id0project_id'),) id = Column(Integer, primary_key=True) flavor_id = Column(Integer, ForeignKey('flavors.id'), nullable=False) project_id = Column(String(255), nullable=False) flavor = orm.relationship(Flavors, backref='projects', foreign_keys=flavor_id, primaryjoin=( 'FlavorProjects.flavor_id == Flavors.id')) class BuildRequest(API_BASE): """Represents the information passed to 
the scheduler.""" __tablename__ = 'build_requests' __table_args__ = ( Index('build_requests_instance_uuid_idx', 'instance_uuid'), Index('build_requests_project_id_idx', 'project_id'), schema.UniqueConstraint('instance_uuid', name='uniq_build_requests0instance_uuid'), ) id = Column(Integer, primary_key=True) # TODO(mriedem): instance_uuid should be nullable=False instance_uuid = Column(String(36)) project_id = Column(String(255), nullable=False) instance = Column(MediumText()) block_device_mappings = Column(MediumText()) tags = Column(Text()) # TODO(alaski): Drop these from the db in Ocata # columns_to_drop = ['request_spec_id', 'user_id', 'display_name', # 'instance_metadata', 'progress', 'vm_state', 'task_state', # 'image_ref', 'access_ip_v4', 'access_ip_v6', 'info_cache', # 'security_groups', 'config_drive', 'key_name', 'locked_by', # 'reservation_id', 'launch_index', 'hostname', 'kernel_id', # 'ramdisk_id', 'root_device_name', 'user_data'] class KeyPair(API_BASE): """Represents a public key pair for ssh / WinRM.""" __tablename__ = 'key_pairs' __table_args__ = ( schema.UniqueConstraint("user_id", "name", name="uniq_key_pairs0user_id0name"), ) id = Column(Integer, primary_key=True, nullable=False) name = Column(String(255), nullable=False) user_id = Column(String(255), nullable=False) fingerprint = Column(String(255)) public_key = Column(Text()) type = Column(Enum('ssh', 'x509', name='keypair_types'), nullable=False, server_default='ssh') # TODO(stephenfin): Remove this as it's now unused post-placement split class ResourceClass(API_BASE): """Represents the type of resource for an inventory or allocation.""" __tablename__ = 'resource_classes' __table_args__ = ( schema.UniqueConstraint("name", name="uniq_resource_classes0name"), ) id = Column(Integer, primary_key=True, nullable=False) name = Column(String(255), nullable=False) # TODO(stephenfin): Remove this as it's now unused post-placement split class ResourceProvider(API_BASE): """Represents a mapping to a providers of resources.""" __tablename__ = "resource_providers" __table_args__ = ( Index('resource_providers_uuid_idx', 'uuid'), schema.UniqueConstraint('uuid', name='uniq_resource_providers0uuid'), Index('resource_providers_name_idx', 'name'), Index('resource_providers_root_provider_id_idx', 'root_provider_id'), Index('resource_providers_parent_provider_id_idx', 'parent_provider_id'), schema.UniqueConstraint('name', name='uniq_resource_providers0name') ) id = Column(Integer, primary_key=True, nullable=False) uuid = Column(String(36), nullable=False) name = Column(Unicode(200), nullable=True) generation = Column(Integer, default=0) # Represents the root of the "tree" that the provider belongs to root_provider_id = Column(Integer, ForeignKey('resource_providers.id'), nullable=True) # The immediate parent provider of this provider, or NULL if there is no # parent. 
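The comments above describe ResourceProvider's self-referential tree: every provider records the root of the tree it belongs to, and a NULL parent means the row is its own root. A self-contained sketch of that shape, using a stand-in Provider model rather than nova's own declarative classes so it can run against an in-memory SQLite engine:

# Stand-alone sketch of a self-referential provider tree; 'Provider' here is
# illustrative and is not nova's ResourceProvider model.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()


class Provider(Base):
    __tablename__ = 'providers'
    id = Column(Integer, primary_key=True)
    name = Column(String(200))
    # Root of the tree this provider belongs to.
    root_provider_id = Column(Integer, ForeignKey('providers.id'),
                              nullable=True)
    # NULL parent means this provider is the root of its own tree.
    parent_provider_id = Column(Integer, ForeignKey('providers.id'),
                                nullable=True)


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

root = Provider(name='compute-node-1')
session.add(root)
session.flush()
root.root_provider_id = root.id  # a root provider points at itself
child = Provider(name='numa-0', root_provider_id=root.id,
                 parent_provider_id=root.id)
session.add(child)
session.commit()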
If parent_provider_id == NULL then root_provider_id == id parent_provider_id = Column(Integer, ForeignKey('resource_providers.id'), nullable=True) # TODO(stephenfin): Remove this as it's now unused post-placement split class Inventory(API_BASE): """Represents a quantity of available resource.""" __tablename__ = "inventories" __table_args__ = ( Index('inventories_resource_provider_id_idx', 'resource_provider_id'), Index('inventories_resource_class_id_idx', 'resource_class_id'), Index('inventories_resource_provider_resource_class_idx', 'resource_provider_id', 'resource_class_id'), schema.UniqueConstraint('resource_provider_id', 'resource_class_id', name='uniq_inventories0resource_provider_resource_class') ) id = Column(Integer, primary_key=True, nullable=False) resource_provider_id = Column(Integer, nullable=False) resource_class_id = Column(Integer, nullable=False) total = Column(Integer, nullable=False) reserved = Column(Integer, nullable=False) min_unit = Column(Integer, nullable=False) max_unit = Column(Integer, nullable=False) step_size = Column(Integer, nullable=False) allocation_ratio = Column(Float, nullable=False) resource_provider = orm.relationship( "ResourceProvider", primaryjoin=('Inventory.resource_provider_id == ' 'ResourceProvider.id'), foreign_keys=resource_provider_id) # TODO(stephenfin): Remove this as it's now unused post-placement split class Allocation(API_BASE): """A use of inventory.""" __tablename__ = "allocations" __table_args__ = ( Index('allocations_resource_provider_class_used_idx', 'resource_provider_id', 'resource_class_id', 'used'), Index('allocations_resource_class_id_idx', 'resource_class_id'), Index('allocations_consumer_id_idx', 'consumer_id') ) id = Column(Integer, primary_key=True, nullable=False) resource_provider_id = Column(Integer, nullable=False) consumer_id = Column(String(36), nullable=False) resource_class_id = Column(Integer, nullable=False) used = Column(Integer, nullable=False) resource_provider = orm.relationship( "ResourceProvider", primaryjoin=('Allocation.resource_provider_id == ' 'ResourceProvider.id'), foreign_keys=resource_provider_id) # TODO(stephenfin): Remove this as it's now unused post-placement split class ResourceProviderAggregate(API_BASE): """Associate a resource provider with an aggregate.""" __tablename__ = 'resource_provider_aggregates' __table_args__ = ( Index('resource_provider_aggregates_aggregate_id_idx', 'aggregate_id'), ) resource_provider_id = Column(Integer, primary_key=True, nullable=False) aggregate_id = Column(Integer, primary_key=True, nullable=False) # TODO(stephenfin): Remove this as it's now unused post-placement split class PlacementAggregate(API_BASE): """A grouping of resource providers.""" __tablename__ = 'placement_aggregates' __table_args__ = ( schema.UniqueConstraint("uuid", name="uniq_placement_aggregates0uuid"), ) id = Column(Integer, primary_key=True, autoincrement=True) uuid = Column(String(36), index=True) class InstanceGroupMember(API_BASE): """Represents the members for an instance group.""" __tablename__ = 'instance_group_member' __table_args__ = ( Index('instance_group_member_instance_idx', 'instance_uuid'), ) id = Column(Integer, primary_key=True, nullable=False) instance_uuid = Column(String(255)) group_id = Column(Integer, ForeignKey('instance_groups.id'), nullable=False) class InstanceGroupPolicy(API_BASE): """Represents the policy type for an instance group.""" __tablename__ = 'instance_group_policy' __table_args__ = ( Index('instance_group_policy_policy_idx', 'policy'), ) id = 
Column(Integer, primary_key=True, nullable=False) policy = Column(String(255)) group_id = Column(Integer, ForeignKey('instance_groups.id'), nullable=False) rules = Column(Text) class InstanceGroup(API_BASE): """Represents an instance group. A group will maintain a collection of instances and the relationship between them. """ __tablename__ = 'instance_groups' __table_args__ = ( schema.UniqueConstraint('uuid', name='uniq_instance_groups0uuid'), ) id = Column(Integer, primary_key=True, autoincrement=True) user_id = Column(String(255)) project_id = Column(String(255)) uuid = Column(String(36), nullable=False) name = Column(String(255)) _policies = orm.relationship(InstanceGroupPolicy, primaryjoin='InstanceGroup.id == InstanceGroupPolicy.group_id') _members = orm.relationship(InstanceGroupMember, primaryjoin='InstanceGroup.id == InstanceGroupMember.group_id') @property def policy(self): if len(self._policies) > 1: msg = ("More than one policy (%(policies)s) is associated with " "group %(group_name)s, only the first one in the list " "would be returned.") LOG.warning(msg, {"policies": [p.policy for p in self._policies], "group_name": self.name}) return self._policies[0] if self._policies else None @property def members(self): return [m.instance_uuid for m in self._members] class Quota(API_BASE): """Represents a single quota override for a project. If there is no row for a given project id and resource, then the default for the quota class is used. If there is no row for a given quota class and resource, then the default for the deployment is used. If the row is present but the hard limit is Null, then the resource is unlimited. """ __tablename__ = 'quotas' __table_args__ = ( schema.UniqueConstraint("project_id", "resource", name="uniq_quotas0project_id0resource" ), ) id = Column(Integer, primary_key=True) project_id = Column(String(255)) resource = Column(String(255), nullable=False) hard_limit = Column(Integer) class ProjectUserQuota(API_BASE): """Represents a single quota override for a user with in a project.""" __tablename__ = 'project_user_quotas' uniq_name = "uniq_project_user_quotas0user_id0project_id0resource" __table_args__ = ( schema.UniqueConstraint("user_id", "project_id", "resource", name=uniq_name), Index('project_user_quotas_project_id_idx', 'project_id'), Index('project_user_quotas_user_id_idx', 'user_id',) ) id = Column(Integer, primary_key=True, nullable=False) project_id = Column(String(255), nullable=False) user_id = Column(String(255), nullable=False) resource = Column(String(255), nullable=False) hard_limit = Column(Integer) class QuotaClass(API_BASE): """Represents a single quota override for a quota class. If there is no row for a given quota class and resource, then the default for the deployment is used. If the row is present but the hard limit is Null, then the resource is unlimited. 
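The Quota, ProjectUserQuota and QuotaClass docstrings describe a layered lookup: a per-project (or per-user) override wins if a row exists, otherwise the quota class default applies, otherwise the deployment default, and a row whose hard limit is NULL means the resource is unlimited. A purely illustrative sketch of that resolution order, with plain dictionaries standing in for the database rows (the helper name and values are assumptions, not nova code):

# Illustrative only: dictionaries stand in for quotas / quota_classes rows.
UNLIMITED = -1


def resolve_limit(resource, project_id, project_overrides, class_defaults,
                  deployment_defaults):
    # 1. Per-project override row, if any.
    row = project_overrides.get((project_id, resource))
    if row is not None:
        return UNLIMITED if row['hard_limit'] is None else row['hard_limit']
    # 2. Quota class default for this resource.
    if resource in class_defaults:
        limit = class_defaults[resource]
        return UNLIMITED if limit is None else limit
    # 3. Deployment-wide default.
    return deployment_defaults[resource]


limit = resolve_limit(
    'instances', 'proj-1',
    project_overrides={('proj-1', 'instances'): {'hard_limit': None}},
    class_defaults={'instances': 20},
    deployment_defaults={'instances': 10})
assert limit == UNLIMITED  # present row with NULL hard_limit -> unlimited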
""" __tablename__ = 'quota_classes' __table_args__ = ( Index('quota_classes_class_name_idx', 'class_name'), ) id = Column(Integer, primary_key=True) class_name = Column(String(255)) resource = Column(String(255)) hard_limit = Column(Integer) class QuotaUsage(API_BASE): """Represents the current usage for a given resource.""" __tablename__ = 'quota_usages' __table_args__ = ( Index('quota_usages_project_id_idx', 'project_id'), Index('quota_usages_user_id_idx', 'user_id'), ) id = Column(Integer, primary_key=True) project_id = Column(String(255)) user_id = Column(String(255)) resource = Column(String(255), nullable=False) in_use = Column(Integer, nullable=False) reserved = Column(Integer, nullable=False) @property def total(self): return self.in_use + self.reserved until_refresh = Column(Integer) class Reservation(API_BASE): """Represents a resource reservation for quotas.""" __tablename__ = 'reservations' __table_args__ = ( Index('reservations_project_id_idx', 'project_id'), Index('reservations_uuid_idx', 'uuid'), Index('reservations_expire_idx', 'expire'), Index('reservations_user_id_idx', 'user_id'), ) id = Column(Integer, primary_key=True, nullable=False) uuid = Column(String(36), nullable=False) usage_id = Column(Integer, ForeignKey('quota_usages.id'), nullable=False) project_id = Column(String(255)) user_id = Column(String(255)) resource = Column(String(255)) delta = Column(Integer, nullable=False) expire = Column(DateTime) usage = orm.relationship( "QuotaUsage", foreign_keys=usage_id, primaryjoin='Reservation.usage_id == QuotaUsage.id') class Trait(API_BASE): """Represents a trait.""" __tablename__ = "traits" __table_args__ = ( schema.UniqueConstraint('name', name='uniq_traits0name'), ) id = Column(Integer, primary_key=True, nullable=False, autoincrement=True) name = Column(Unicode(255), nullable=False) # TODO(stephenfin): Remove this as it's now unused post-placement split class ResourceProviderTrait(API_BASE): """Represents the relationship between traits and resource provider""" __tablename__ = "resource_provider_traits" __table_args__ = ( Index('resource_provider_traits_resource_provider_trait_idx', 'resource_provider_id', 'trait_id'), ) trait_id = Column(Integer, ForeignKey('traits.id'), primary_key=True, nullable=False) resource_provider_id = Column(Integer, ForeignKey('resource_providers.id'), primary_key=True, nullable=False) # TODO(stephenfin): Remove this as it's unused class Project(API_BASE): """The project is the Keystone project.""" __tablename__ = 'projects' __table_args__ = ( schema.UniqueConstraint( 'external_id', name='uniq_projects0external_id', ), ) id = Column(Integer, primary_key=True, nullable=False, autoincrement=True) external_id = Column(String(255), nullable=False) # TODO(stephenfin): Remove this as it's unused class User(API_BASE): """The user is the Keystone user.""" __tablename__ = 'users' __table_args__ = ( schema.UniqueConstraint( 'external_id', name='uniq_users0external_id', ), ) id = Column(Integer, primary_key=True, nullable=False, autoincrement=True) external_id = Column(String(255), nullable=False) # TODO(stephenfin): Remove this as it's unused class Consumer(API_BASE): """Represents a resource consumer.""" __tablename__ = 'consumers' __table_args__ = ( Index('consumers_project_id_uuid_idx', 'project_id', 'uuid'), Index('consumers_project_id_user_id_uuid_idx', 'project_id', 'user_id', 'uuid'), schema.UniqueConstraint('uuid', name='uniq_consumers0uuid'), ) id = Column(Integer, primary_key=True, nullable=False, autoincrement=True) uuid = 
Column(String(36), nullable=False) project_id = Column(Integer, nullable=False) user_id = Column(Integer, nullable=False) # FIXME(mriedem): Change this to server_default=text("0") to match the # 059_add_consumer_generation script once bug 1776527 is fixed. generation = Column(Integer, nullable=False, server_default="0", default=0) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3224702 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/0000775000175000017500000000000000000000000020735 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/README0000664000175000017500000000016300000000000021615 0ustar00zuulzuul00000000000000This is a database migration repository. More information at https://sqlalchemy-migrate.readthedocs.io/en/latest/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/__init__.py0000664000175000017500000000000000000000000023034 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/manage.py0000664000175000017500000000135200000000000022540 0ustar00zuulzuul00000000000000#!/usr/bin/env python # Copyright 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from migrate.versioning.shell import main if __name__ == '__main__': main(debug='False', repository='.') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/migrate.cfg0000664000175000017500000000172400000000000023052 0ustar00zuulzuul00000000000000[db_settings] # Used to identify which repository this database is versioned under. # You can use the name of your project. repository_id=nova # The name of the database table used to track the schema version. # This name shouldn't already be used by your project. # If this is changed once a database is under version control, you'll need to # change the table name in each database too. version_table=migrate_version # When committing a change script, Migrate will attempt to generate the # sql for all supported databases; normally, if one of them fails - probably # because you don't have that database installed - it is ignored and the # commit continues, perhaps ending successfully. # Databases in this list MUST compile successfully during a commit, or the # entire commit will fail. List the databases your application will actually # be using to ensure your updates to that database work properly. 
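manage.py and migrate.cfg above mark this directory as a sqlalchemy-migrate repository: repository_id names it, version_table is where the deployed schema version is recorded, and manage.py simply hands control to the migrate shell. In practice nova drives these scripts through its own tooling (for example nova-manage db sync); the following is only a rough sketch of the underlying library API that manage.py wraps, where the database URL, repository path and starting version are illustrative assumptions rather than nova defaults:

# Hedged sketch of the sqlalchemy-migrate API that manage.py exposes.
from migrate.versioning import api as versioning_api

db_url = 'sqlite:////tmp/demo.sqlite'          # example URL, not a nova default
repo = 'nova/db/sqlalchemy/migrate_repo'       # path to this repository

# Stamp a base version (nova's tooling computes the real one), then walk
# the database forward and report the resulting schema version.
versioning_api.version_control(db_url, repo, version=215)
versioning_api.upgrade(db_url, repo)
print(versioning_api.db_version(db_url, repo))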
# This must be a list; example: ['postgres','sqlite'] required_dbs=[] ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3504698 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/0000775000175000017500000000000000000000000022605 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/216_havana.py0000664000175000017500000017136600000000000025023 0ustar00zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from migrate.changeset import UniqueConstraint from migrate import ForeignKeyConstraint from oslo_log import log as logging from sqlalchemy import Boolean, BigInteger, Column, DateTime, Enum, Float from sqlalchemy import dialects from sqlalchemy import ForeignKey, Index, Integer, MetaData, String, Table from sqlalchemy import Text from sqlalchemy.types import NullType LOG = logging.getLogger(__name__) # Note on the autoincrement flag: this is defaulted for primary key columns # of integral type, so is no longer set explicitly in such cases. # NOTE(dprince): This wrapper allows us to easily match the Folsom MySQL # Schema. In Folsom we created tables as latin1 and converted them to utf8 # later. This conversion causes some of the Text columns on MySQL to get # created as mediumtext instead of just text. def MediumText(): return Text().with_variant(dialects.mysql.MEDIUMTEXT(), 'mysql') def Inet(): return String(length=43).with_variant(dialects.postgresql.INET(), 'postgresql') def InetSmall(): return String(length=39).with_variant(dialects.postgresql.INET(), 'postgresql') def _create_shadow_tables(migrate_engine): meta = MetaData(migrate_engine) meta.reflect(migrate_engine) table_names = list(meta.tables.keys()) meta.bind = migrate_engine for table_name in table_names: table = Table(table_name, meta, autoload=True) columns = [] for column in table.columns: column_copy = None # NOTE(boris-42): BigInteger is not supported by sqlite, so # after copy it will have NullType, other # types that are used in Nova are supported by # sqlite. if isinstance(column.type, NullType): column_copy = Column(column.name, BigInteger(), default=0) if table_name == 'instances' and column.name == 'locked_by': enum = Enum('owner', 'admin', name='shadow_instances0locked_by') column_copy = Column(column.name, enum) else: column_copy = column.copy() columns.append(column_copy) shadow_table_name = 'shadow_' + table_name shadow_table = Table(shadow_table_name, meta, *columns, mysql_engine='InnoDB') try: shadow_table.create() except Exception: LOG.info(repr(shadow_table)) LOG.exception('Exception while creating table.') raise # NOTE(dprince): we add these here so our schema contains dump tables # which were added in migration 209 (in Havana). 
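_create_shadow_tables above builds a shadow_<name> copy of every reflected table by copying its columns, special-casing types SQLite cannot represent. A reduced, self-contained sketch of that copy-the-columns technique; the 'archive_' prefix and the sample table are assumptions for illustration, whereas nova uses a 'shadow_' prefix over its real schema:

# Reduced sketch of the column-copy technique used for shadow tables.
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

engine = create_engine('sqlite://')
meta = MetaData(bind=engine)
Table('instances_demo', meta,
      Column('id', Integer, primary_key=True),
      Column('hostname', String(255))).create()

# Reflect what exists, then clone each table's columns under a new name.
meta = MetaData(bind=engine)
meta.reflect(engine)
for name, table in list(meta.tables.items()):
    columns = [column.copy() for column in table.columns]
    Table('archive_' + name, meta, *columns).create()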
We can drop these in # Icehouse: https://bugs.launchpad.net/nova/+bug/1266538 def _create_dump_tables(migrate_engine): meta = MetaData(migrate_engine) meta.reflect(migrate_engine) table_names = ['compute_node_stats', 'compute_nodes', 'instance_actions', 'instance_actions_events', 'instance_faults', 'migrations'] for table_name in table_names: table = Table(table_name, meta, autoload=True) dump_table_name = 'dump_' + table.name columns = [] for column in table.columns: # NOTE(dprince): The dump_ tables were originally created from an # earlier schema version so we don't want to add the pci_stats # column so that schema diffs are exactly the same. if column.name == 'pci_stats': continue else: columns.append(column.copy()) table_dump = Table(dump_table_name, meta, *columns, mysql_engine='InnoDB') table_dump.create() def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine agent_builds = Table('agent_builds', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('hypervisor', String(length=255)), Column('os', String(length=255)), Column('architecture', String(length=255)), Column('version', String(length=255)), Column('url', String(length=255)), Column('md5hash', String(length=255)), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) aggregate_hosts = Table('aggregate_hosts', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('host', String(length=255)), Column('aggregate_id', Integer, ForeignKey('aggregates.id'), nullable=False), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) aggregate_metadata = Table('aggregate_metadata', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('aggregate_id', Integer, ForeignKey('aggregates.id'), nullable=False), Column('key', String(length=255), nullable=False), Column('value', String(length=255), nullable=False), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) aggregates = Table('aggregates', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('name', String(length=255)), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) block_device_mapping = Table('block_device_mapping', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('device_name', String(length=255), nullable=True), Column('delete_on_termination', Boolean), Column('snapshot_id', String(length=36), nullable=True), Column('volume_id', String(length=36), nullable=True), Column('volume_size', Integer), Column('no_device', Boolean), Column('connection_info', MediumText()), Column('instance_uuid', String(length=36)), Column('deleted', Integer), Column('source_type', String(length=255), nullable=True), Column('destination_type', String(length=255), nullable=True), Column('guest_format', String(length=255), nullable=True), Column('device_type', String(length=255), nullable=True), Column('disk_bus', String(length=255), nullable=True), Column('boot_index', Integer), Column('image_id', String(length=36), nullable=True), 
mysql_engine='InnoDB', mysql_charset='utf8' ) bw_usage_cache = Table('bw_usage_cache', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('start_period', DateTime, nullable=False), Column('last_refreshed', DateTime), Column('bw_in', BigInteger), Column('bw_out', BigInteger), Column('mac', String(length=255)), Column('uuid', String(length=36)), Column('last_ctr_in', BigInteger()), Column('last_ctr_out', BigInteger()), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) cells = Table('cells', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('api_url', String(length=255)), Column('weight_offset', Float), Column('weight_scale', Float), Column('name', String(length=255)), Column('is_parent', Boolean), Column('deleted', Integer), Column('transport_url', String(length=255), nullable=False), mysql_engine='InnoDB', mysql_charset='utf8' ) certificates = Table('certificates', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('user_id', String(length=255)), Column('project_id', String(length=255)), Column('file_name', String(length=255)), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) compute_node_stats = Table('compute_node_stats', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('compute_node_id', Integer, nullable=False), Column('key', String(length=255), nullable=False), Column('value', String(length=255)), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) compute_nodes = Table('compute_nodes', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('service_id', Integer, nullable=False), Column('vcpus', Integer, nullable=False), Column('memory_mb', Integer, nullable=False), Column('local_gb', Integer, nullable=False), Column('vcpus_used', Integer, nullable=False), Column('memory_mb_used', Integer, nullable=False), Column('local_gb_used', Integer, nullable=False), Column('hypervisor_type', MediumText(), nullable=False), Column('hypervisor_version', Integer, nullable=False), Column('cpu_info', MediumText(), nullable=False), Column('disk_available_least', Integer), Column('free_ram_mb', Integer), Column('free_disk_gb', Integer), Column('current_workload', Integer), Column('running_vms', Integer), Column('hypervisor_hostname', String(length=255)), Column('deleted', Integer), Column('host_ip', InetSmall()), Column('supported_instances', Text), Column('pci_stats', Text, nullable=True), mysql_engine='InnoDB', mysql_charset='utf8' ) console_pools = Table('console_pools', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('address', InetSmall()), Column('username', String(length=255)), Column('password', String(length=255)), Column('console_type', String(length=255)), Column('public_hostname', String(length=255)), Column('host', String(length=255)), Column('compute_host', String(length=255)), Column('deleted', Integer), mysql_engine='InnoDB', 
mysql_charset='utf8' ) consoles_instance_uuid_column_args = ['instance_uuid', String(length=36)] consoles_instance_uuid_column_args.append( ForeignKey('instances.uuid', name='consoles_instance_uuid_fkey')) consoles = Table('consoles', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('instance_name', String(length=255)), Column('password', String(length=255)), Column('port', Integer), Column('pool_id', Integer, ForeignKey('console_pools.id')), Column(*consoles_instance_uuid_column_args), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) dns_domains = Table('dns_domains', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', Boolean), Column('domain', String(length=255), primary_key=True, nullable=False), Column('scope', String(length=255)), Column('availability_zone', String(length=255)), Column('project_id', String(length=255)), mysql_engine='InnoDB', mysql_charset='utf8' ) fixed_ips = Table('fixed_ips', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('address', InetSmall()), Column('network_id', Integer), Column('allocated', Boolean), Column('leased', Boolean), Column('reserved', Boolean), Column('virtual_interface_id', Integer), Column('host', String(length=255)), Column('instance_uuid', String(length=36)), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) floating_ips = Table('floating_ips', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('address', InetSmall()), Column('fixed_ip_id', Integer), Column('project_id', String(length=255)), Column('host', String(length=255)), Column('auto_assigned', Boolean), Column('pool', String(length=255)), Column('interface', String(length=255)), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) instance_faults = Table('instance_faults', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('instance_uuid', String(length=36)), Column('code', Integer, nullable=False), Column('message', String(length=255)), Column('details', MediumText()), Column('host', String(length=255)), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) instance_id_mappings = Table('instance_id_mappings', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('uuid', String(36), nullable=False), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) instance_info_caches = Table('instance_info_caches', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('network_info', MediumText()), Column('instance_uuid', String(length=36), nullable=False), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) groups = Table('instance_groups', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', Integer), Column('id', Integer, primary_key=True, nullable=False), 
Column('user_id', String(length=255)), Column('project_id', String(length=255)), Column('uuid', String(length=36), nullable=False), Column('name', String(length=255)), UniqueConstraint('uuid', 'deleted', name='uniq_instance_groups0uuid0deleted'), mysql_engine='InnoDB', mysql_charset='utf8', ) group_metadata = Table('instance_group_metadata', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', Integer), Column('id', Integer, primary_key=True, nullable=False), Column('key', String(length=255)), Column('value', String(length=255)), Column('group_id', Integer, ForeignKey('instance_groups.id'), nullable=False), mysql_engine='InnoDB', mysql_charset='utf8', ) group_policy = Table('instance_group_policy', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', Integer), Column('id', Integer, primary_key=True, nullable=False), Column('policy', String(length=255)), Column('group_id', Integer, ForeignKey('instance_groups.id'), nullable=False), mysql_engine='InnoDB', mysql_charset='utf8', ) group_member = Table('instance_group_member', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', Integer), Column('id', Integer, primary_key=True, nullable=False), Column('instance_id', String(length=255)), Column('group_id', Integer, ForeignKey('instance_groups.id'), nullable=False), mysql_engine='InnoDB', mysql_charset='utf8', ) instance_metadata = Table('instance_metadata', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('key', String(length=255)), Column('value', String(length=255)), Column('instance_uuid', String(length=36), nullable=True), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) instance_system_metadata = Table('instance_system_metadata', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('instance_uuid', String(length=36), nullable=False), Column('key', String(length=255), nullable=False), Column('value', String(length=255)), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) instance_type_extra_specs = Table('instance_type_extra_specs', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('instance_type_id', Integer, ForeignKey('instance_types.id'), nullable=False), Column('key', String(length=255)), Column('value', String(length=255)), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) instance_type_projects = Table('instance_type_projects', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('instance_type_id', Integer, nullable=False), Column('project_id', String(length=255)), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) instance_types = Table('instance_types', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('name', String(length=255)), Column('id', Integer, primary_key=True, nullable=False), Column('memory_mb', Integer, nullable=False), Column('vcpus', Integer, nullable=False), 
Column('swap', Integer, nullable=False), Column('vcpu_weight', Integer), Column('flavorid', String(length=255)), Column('rxtx_factor', Float), Column('root_gb', Integer), Column('ephemeral_gb', Integer), Column('disabled', Boolean), Column('is_public', Boolean), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) inst_lock_enum = Enum('owner', 'admin', name='instances0locked_by') instances = Table('instances', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('internal_id', Integer), Column('user_id', String(length=255)), Column('project_id', String(length=255)), Column('image_ref', String(length=255)), Column('kernel_id', String(length=255)), Column('ramdisk_id', String(length=255)), Column('launch_index', Integer), Column('key_name', String(length=255)), Column('key_data', MediumText()), Column('power_state', Integer), Column('vm_state', String(length=255)), Column('memory_mb', Integer), Column('vcpus', Integer), Column('hostname', String(length=255)), Column('host', String(length=255)), Column('user_data', MediumText()), Column('reservation_id', String(length=255)), Column('scheduled_at', DateTime), Column('launched_at', DateTime), Column('terminated_at', DateTime), Column('display_name', String(length=255)), Column('display_description', String(length=255)), Column('availability_zone', String(length=255)), Column('locked', Boolean), Column('os_type', String(length=255)), Column('launched_on', MediumText()), Column('instance_type_id', Integer), Column('vm_mode', String(length=255)), Column('uuid', String(length=36)), Column('architecture', String(length=255)), Column('root_device_name', String(length=255)), Column('access_ip_v4', InetSmall()), Column('access_ip_v6', InetSmall()), Column('config_drive', String(length=255)), Column('task_state', String(length=255)), Column('default_ephemeral_device', String(length=255)), Column('default_swap_device', String(length=255)), Column('progress', Integer), Column('auto_disk_config', Boolean), Column('shutdown_terminate', Boolean), Column('disable_terminate', Boolean), Column('root_gb', Integer), Column('ephemeral_gb', Integer), Column('cell_name', String(length=255)), Column('node', String(length=255)), Column('deleted', Integer), Column('locked_by', inst_lock_enum), Column('cleaned', Integer, default=0), mysql_engine='InnoDB', mysql_charset='utf8' ) instance_actions = Table('instance_actions', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('action', String(length=255)), Column('instance_uuid', String(length=36)), Column('request_id', String(length=255)), Column('user_id', String(length=255)), Column('project_id', String(length=255)), Column('start_time', DateTime), Column('finish_time', DateTime), Column('message', String(length=255)), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8', ) instance_actions_events = Table('instance_actions_events', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('event', String(length=255)), Column('action_id', Integer, ForeignKey('instance_actions.id')), Column('start_time', DateTime), Column('finish_time', DateTime), Column('result', String(length=255)), Column('traceback', Text), Column('deleted', Integer), 
mysql_engine='InnoDB', mysql_charset='utf8', ) iscsi_targets = Table('iscsi_targets', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('target_num', Integer), Column('host', String(length=255)), Column('volume_id', String(length=36), nullable=True), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) key_pairs = Table('key_pairs', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('name', String(length=255)), Column('user_id', String(length=255)), Column('fingerprint', String(length=255)), Column('public_key', MediumText()), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) migrations = Table('migrations', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('source_compute', String(length=255)), Column('dest_compute', String(length=255)), Column('dest_host', String(length=255)), Column('status', String(length=255)), Column('instance_uuid', String(length=36)), Column('old_instance_type_id', Integer), Column('new_instance_type_id', Integer), Column('source_node', String(length=255)), Column('dest_node', String(length=255)), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) networks = Table('networks', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('injected', Boolean), Column('cidr', Inet()), Column('netmask', InetSmall()), Column('bridge', String(length=255)), Column('gateway', InetSmall()), Column('broadcast', InetSmall()), Column('dns1', InetSmall()), Column('vlan', Integer), Column('vpn_public_address', InetSmall()), Column('vpn_public_port', Integer), Column('vpn_private_address', InetSmall()), Column('dhcp_start', InetSmall()), Column('project_id', String(length=255)), Column('host', String(length=255)), Column('cidr_v6', Inet()), Column('gateway_v6', InetSmall()), Column('label', String(length=255)), Column('netmask_v6', InetSmall()), Column('bridge_interface', String(length=255)), Column('multi_host', Boolean), Column('dns2', InetSmall()), Column('uuid', String(length=36)), Column('priority', Integer), Column('rxtx_base', Integer), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) pci_devices_uc_name = 'uniq_pci_devices0compute_node_id0address0deleted' pci_devices = Table('pci_devices', meta, Column('created_at', DateTime(timezone=False)), Column('updated_at', DateTime(timezone=False)), Column('deleted_at', DateTime(timezone=False)), Column('deleted', Integer, default=0, nullable=False), Column('id', Integer, primary_key=True), Column('compute_node_id', Integer, nullable=False), Column('address', String(12), nullable=False), Column('product_id', String(4)), Column('vendor_id', String(4)), Column('dev_type', String(8)), Column('dev_id', String(255)), Column('label', String(255), nullable=False), Column('status', String(36), nullable=False), Column('extra_info', Text, nullable=True), Column('instance_uuid', String(36), nullable=True), Index('ix_pci_devices_compute_node_id_deleted', 'compute_node_id', 'deleted'), Index('ix_pci_devices_instance_uuid_deleted', 'instance_uuid', 'deleted'), UniqueConstraint('compute_node_id', 'address', 
'deleted', name=pci_devices_uc_name), mysql_engine='InnoDB', mysql_charset='utf8') provider_fw_rules = Table('provider_fw_rules', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('protocol', String(length=5)), Column('from_port', Integer), Column('to_port', Integer), Column('cidr', Inet()), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) quota_classes = Table('quota_classes', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('class_name', String(length=255)), Column('resource', String(length=255)), Column('hard_limit', Integer), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) quota_usages = Table('quota_usages', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('project_id', String(length=255)), Column('resource', String(length=255)), Column('in_use', Integer, nullable=False), Column('reserved', Integer, nullable=False), Column('until_refresh', Integer), Column('deleted', Integer), Column('user_id', String(length=255)), mysql_engine='InnoDB', mysql_charset='utf8' ) quotas = Table('quotas', meta, Column('id', Integer, primary_key=True, nullable=False), Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('project_id', String(length=255)), Column('resource', String(length=255), nullable=False), Column('hard_limit', Integer), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) uniq_name = "uniq_project_user_quotas0user_id0project_id0resource0deleted" project_user_quotas = Table('project_user_quotas', meta, Column('id', Integer, primary_key=True, nullable=False), Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', Integer), Column('user_id', String(length=255), nullable=False), Column('project_id', String(length=255), nullable=False), Column('resource', String(length=255), nullable=False), Column('hard_limit', Integer, nullable=True), UniqueConstraint('user_id', 'project_id', 'resource', 'deleted', name=uniq_name), mysql_engine='InnoDB', mysql_charset='utf8', ) reservations = Table('reservations', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('uuid', String(length=36), nullable=False), Column('usage_id', Integer, nullable=False), Column('project_id', String(length=255)), Column('resource', String(length=255)), Column('delta', Integer, nullable=False), Column('expire', DateTime), Column('deleted', Integer), Column('user_id', String(length=255)), mysql_engine='InnoDB', mysql_charset='utf8' ) s3_images = Table('s3_images', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('uuid', String(length=36), nullable=False), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) security_group_instance_association = \ Table('security_group_instance_association', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), 
Column('security_group_id', Integer), Column('instance_uuid', String(length=36)), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) security_group_rules = Table('security_group_rules', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('parent_group_id', Integer, ForeignKey('security_groups.id')), Column('protocol', String(length=255)), Column('from_port', Integer), Column('to_port', Integer), Column('cidr', Inet()), Column('group_id', Integer, ForeignKey('security_groups.id')), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) security_groups = Table('security_groups', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('name', String(length=255)), Column('description', String(length=255)), Column('user_id', String(length=255)), Column('project_id', String(length=255)), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) security_group_default_rules = Table('security_group_default_rules', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', Integer, default=0), Column('id', Integer, primary_key=True, nullable=False), Column('protocol', String(length=5)), Column('from_port', Integer), Column('to_port', Integer), Column('cidr', Inet()), mysql_engine='InnoDB', mysql_charset='utf8', ) services = Table('services', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('host', String(length=255)), Column('binary', String(length=255)), Column('topic', String(length=255)), Column('report_count', Integer, nullable=False), Column('disabled', Boolean), Column('deleted', Integer), Column('disabled_reason', String(length=255)), mysql_engine='InnoDB', mysql_charset='utf8' ) snapshot_id_mappings = Table('snapshot_id_mappings', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('uuid', String(length=36), nullable=False), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) snapshots = Table('snapshots', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', String(length=36), primary_key=True, nullable=False), Column('volume_id', String(length=36), nullable=False), Column('user_id', String(length=255)), Column('project_id', String(length=255)), Column('status', String(length=255)), Column('progress', String(length=255)), Column('volume_size', Integer), Column('scheduled_at', DateTime), Column('display_name', String(length=255)), Column('display_description', String(length=255)), Column('deleted', String(length=36)), mysql_engine='InnoDB', mysql_charset='utf8' ) task_log = Table('task_log', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('task_name', String(length=255), nullable=False), Column('state', String(length=255), nullable=False), Column('host', String(length=255), nullable=False), Column('period_beginning', DateTime, nullable=False), Column('period_ending', DateTime, nullable=False), Column('message', String(length=255), 
nullable=False), Column('task_items', Integer), Column('errors', Integer), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) virtual_interfaces = Table('virtual_interfaces', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('address', String(length=255)), Column('network_id', Integer), Column('uuid', String(length=36)), Column('instance_uuid', String(length=36), nullable=True), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) volume_id_mappings = Table('volume_id_mappings', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('uuid', String(length=36), nullable=False), Column('deleted', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) volumes = Table('volumes', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('id', String(length=36), primary_key=True, nullable=False), Column('ec2_id', String(length=255)), Column('user_id', String(length=255)), Column('project_id', String(length=255)), Column('host', String(length=255)), Column('size', Integer), Column('availability_zone', String(length=255)), Column('mountpoint', String(length=255)), Column('status', String(length=255)), Column('attach_status', String(length=255)), Column('scheduled_at', DateTime), Column('launched_at', DateTime), Column('terminated_at', DateTime), Column('display_name', String(length=255)), Column('display_description', String(length=255)), Column('provider_location', String(length=256)), Column('provider_auth', String(length=256)), Column('snapshot_id', String(length=36)), Column('volume_type_id', Integer), Column('instance_uuid', String(length=36)), Column('attach_time', DateTime), Column('deleted', String(length=36)), mysql_engine='InnoDB', mysql_charset='utf8' ) volume_usage_cache = Table('volume_usage_cache', meta, Column('created_at', DateTime(timezone=False)), Column('updated_at', DateTime(timezone=False)), Column('deleted_at', DateTime(timezone=False)), Column('id', Integer(), primary_key=True, nullable=False), Column('volume_id', String(36), nullable=False), Column('tot_last_refreshed', DateTime(timezone=False)), Column('tot_reads', BigInteger(), default=0), Column('tot_read_bytes', BigInteger(), default=0), Column('tot_writes', BigInteger(), default=0), Column('tot_write_bytes', BigInteger(), default=0), Column('curr_last_refreshed', DateTime(timezone=False)), Column('curr_reads', BigInteger(), default=0), Column('curr_read_bytes', BigInteger(), default=0), Column('curr_writes', BigInteger(), default=0), Column('curr_write_bytes', BigInteger(), default=0), Column('deleted', Integer), Column("instance_uuid", String(length=36)), Column("project_id", String(length=36)), Column("user_id", String(length=36)), Column("availability_zone", String(length=255)), mysql_engine='InnoDB', mysql_charset='utf8' ) instances.create() Index('project_id', instances.c.project_id).create() Index('uuid', instances.c.uuid, unique=True).create() # create all tables tables = [aggregates, console_pools, instance_types, security_groups, snapshots, volumes, # those that are children and others later agent_builds, aggregate_hosts, aggregate_metadata, block_device_mapping, bw_usage_cache, cells, certificates, compute_node_stats, compute_nodes, consoles, dns_domains, fixed_ips, floating_ips, 
instance_faults, instance_id_mappings, instance_info_caches, instance_metadata, instance_system_metadata, instance_type_extra_specs, instance_type_projects, instance_actions, instance_actions_events, groups, group_metadata, group_policy, group_member, iscsi_targets, key_pairs, migrations, networks, pci_devices, provider_fw_rules, quota_classes, quota_usages, quotas, project_user_quotas, reservations, s3_images, security_group_instance_association, security_group_rules, security_group_default_rules, services, snapshot_id_mappings, task_log, virtual_interfaces, volume_id_mappings, volume_usage_cache] for table in tables: try: table.create() except Exception: LOG.info(repr(table)) LOG.exception('Exception while creating table.') raise # task log unique constraint task_log_uc = "uniq_task_log0task_name0host0period_beginning0period_ending" task_log_cols = ('task_name', 'host', 'period_beginning', 'period_ending') uc = UniqueConstraint(*task_log_cols, table=task_log, name=task_log_uc) uc.create() # networks unique constraint UniqueConstraint('vlan', 'deleted', table=networks, name='uniq_networks0vlan0deleted').create() # instance_type_name constraint UniqueConstraint('name', 'deleted', table=instance_types, name='uniq_instance_types0name0deleted').create() # flavorid unique constraint UniqueConstraint('flavorid', 'deleted', table=instance_types, name='uniq_instance_types0flavorid0deleted').create() # keypair constraint UniqueConstraint('user_id', 'name', 'deleted', table=key_pairs, name='uniq_key_pairs0user_id0name0deleted').create() # instance_type_projects constraint inst_type_uc_name = 'uniq_instance_type_projects0instance_type_id0' + \ 'project_id0deleted' UniqueConstraint('instance_type_id', 'project_id', 'deleted', table=instance_type_projects, name=inst_type_uc_name).create() # floating_ips unique constraint UniqueConstraint('address', 'deleted', table=floating_ips, name='uniq_floating_ips0address0deleted').create() # instance_info_caches UniqueConstraint('instance_uuid', table=instance_info_caches, name='uniq_instance_info_caches0instance_uuid').create() UniqueConstraint('address', 'deleted', table=virtual_interfaces, name='uniq_virtual_interfaces0address0deleted').create() # cells UniqueConstraint('name', 'deleted', table=cells, name='uniq_cells0name0deleted').create() # security_groups uc = UniqueConstraint('project_id', 'name', 'deleted', table=security_groups, name='uniq_security_groups0project_id0name0deleted') uc.create() # quotas UniqueConstraint('project_id', 'resource', 'deleted', table=quotas, name='uniq_quotas0project_id0resource0deleted').create() # fixed_ips UniqueConstraint('address', 'deleted', table=fixed_ips, name='uniq_fixed_ips0address0deleted').create() # services UniqueConstraint('host', 'topic', 'deleted', table=services, name='uniq_services0host0topic0deleted').create() UniqueConstraint('host', 'binary', 'deleted', table=services, name='uniq_services0host0binary0deleted').create() # agent_builds uc_name = 'uniq_agent_builds0hypervisor0os0architecture0deleted' UniqueConstraint('hypervisor', 'os', 'architecture', 'deleted', table=agent_builds, name=uc_name).create() uc_name = 'uniq_console_pools0host0console_type0compute_host0deleted' UniqueConstraint('host', 'console_type', 'compute_host', 'deleted', table=console_pools, name=uc_name).create() uc_name = 'uniq_aggregate_hosts0host0aggregate_id0deleted' UniqueConstraint('host', 'aggregate_id', 'deleted', table=aggregate_hosts, name=uc_name).create() uc_name = 'uniq_aggregate_metadata0aggregate_id0key0deleted' 
UniqueConstraint('aggregate_id', 'key', 'deleted', table=aggregate_metadata, name=uc_name).create() uc_name = 'uniq_instance_type_extra_specs0instance_type_id0key0deleted' UniqueConstraint('instance_type_id', 'key', 'deleted', table=instance_type_extra_specs, name=uc_name).create() # created first (to preserve ordering for schema diffs) mysql_pre_indexes = [ Index('instance_type_id', instance_type_projects.c.instance_type_id), Index('project_id', dns_domains.c.project_id), Index('fixed_ip_id', floating_ips.c.fixed_ip_id), Index('network_id', virtual_interfaces.c.network_id), Index('network_id', fixed_ips.c.network_id), Index('fixed_ips_virtual_interface_id_fkey', fixed_ips.c.virtual_interface_id), Index('address', fixed_ips.c.address), Index('fixed_ips_instance_uuid_fkey', fixed_ips.c.instance_uuid), Index('instance_uuid', instance_system_metadata.c.instance_uuid), Index('iscsi_targets_volume_id_fkey', iscsi_targets.c.volume_id), Index('snapshot_id', block_device_mapping.c.snapshot_id), Index('usage_id', reservations.c.usage_id), Index('virtual_interfaces_instance_uuid_fkey', virtual_interfaces.c.instance_uuid), Index('volume_id', block_device_mapping.c.volume_id), Index('security_group_id', security_group_instance_association.c.security_group_id), ] # Common indexes (indexes we apply to all databases) # NOTE: order specific for MySQL diff support common_indexes = [ # aggregate_metadata Index('aggregate_metadata_key_idx', aggregate_metadata.c.key), # agent_builds Index('agent_builds_hypervisor_os_arch_idx', agent_builds.c.hypervisor, agent_builds.c.os, agent_builds.c.architecture), # block_device_mapping Index('block_device_mapping_instance_uuid_idx', block_device_mapping.c.instance_uuid), Index('block_device_mapping_instance_uuid_device_name_idx', block_device_mapping.c.instance_uuid, block_device_mapping.c.device_name), # NOTE(dprince): This is now a duplicate index on MySQL and needs to # be removed there. We leave it here so the Index ordering # matches on schema diffs (for MySQL). # See Havana migration 186_new_bdm_format where we dropped the # virtual_name column. 
# IceHouse fix is here: https://bugs.launchpad.net/nova/+bug/1265839 Index( 'block_device_mapping_instance_uuid_virtual_name_device_name_idx', block_device_mapping.c.instance_uuid, block_device_mapping.c.device_name), Index('block_device_mapping_instance_uuid_volume_id_idx', block_device_mapping.c.instance_uuid, block_device_mapping.c.volume_id), # bw_usage_cache Index('bw_usage_cache_uuid_start_period_idx', bw_usage_cache.c.uuid, bw_usage_cache.c.start_period), Index('certificates_project_id_deleted_idx', certificates.c.project_id, certificates.c.deleted), Index('certificates_user_id_deleted_idx', certificates.c.user_id, certificates.c.deleted), # compute_node_stats Index('ix_compute_node_stats_compute_node_id', compute_node_stats.c.compute_node_id), Index('compute_node_stats_node_id_and_deleted_idx', compute_node_stats.c.compute_node_id, compute_node_stats.c.deleted), # consoles Index('consoles_instance_uuid_idx', consoles.c.instance_uuid), # dns_domains Index('dns_domains_domain_deleted_idx', dns_domains.c.domain, dns_domains.c.deleted), # fixed_ips Index('fixed_ips_host_idx', fixed_ips.c.host), Index('fixed_ips_network_id_host_deleted_idx', fixed_ips.c.network_id, fixed_ips.c.host, fixed_ips.c.deleted), Index('fixed_ips_address_reserved_network_id_deleted_idx', fixed_ips.c.address, fixed_ips.c.reserved, fixed_ips.c.network_id, fixed_ips.c.deleted), Index('fixed_ips_deleted_allocated_idx', fixed_ips.c.address, fixed_ips.c.deleted, fixed_ips.c.allocated), # floating_ips Index('floating_ips_host_idx', floating_ips.c.host), Index('floating_ips_project_id_idx', floating_ips.c.project_id), Index('floating_ips_pool_deleted_fixed_ip_id_project_id_idx', floating_ips.c.pool, floating_ips.c.deleted, floating_ips.c.fixed_ip_id, floating_ips.c.project_id), # group_member Index('instance_group_member_instance_idx', group_member.c.instance_id), # group_metadata Index('instance_group_metadata_key_idx', group_metadata.c.key), # group_policy Index('instance_group_policy_policy_idx', group_policy.c.policy), # instances Index('instances_reservation_id_idx', instances.c.reservation_id), Index('instances_terminated_at_launched_at_idx', instances.c.terminated_at, instances.c.launched_at), Index('instances_task_state_updated_at_idx', instances.c.task_state, instances.c.updated_at), Index('instances_host_deleted_idx', instances.c.host, instances.c.deleted), Index('instances_uuid_deleted_idx', instances.c.uuid, instances.c.deleted), Index('instances_host_node_deleted_idx', instances.c.host, instances.c.node, instances.c.deleted), Index('instances_host_deleted_cleaned_idx', instances.c.host, instances.c.deleted, instances.c.cleaned), # instance_actions Index('instance_uuid_idx', instance_actions.c.instance_uuid), Index('request_id_idx', instance_actions.c.request_id), # instance_faults Index('instance_faults_host_idx', instance_faults.c.host), Index('instance_faults_instance_uuid_deleted_created_at_idx', instance_faults.c.instance_uuid, instance_faults.c.deleted, instance_faults.c.created_at), # instance_id_mappings Index('ix_instance_id_mappings_uuid', instance_id_mappings.c.uuid), # instance_metadata Index('instance_metadata_instance_uuid_idx', instance_metadata.c.instance_uuid), # instance_type_extra_specs Index('instance_type_extra_specs_instance_type_id_key_idx', instance_type_extra_specs.c.instance_type_id, instance_type_extra_specs.c.key), # iscsi_targets Index('iscsi_targets_host_idx', iscsi_targets.c.host), Index('iscsi_targets_host_volume_id_deleted_idx', iscsi_targets.c.host, 
iscsi_targets.c.volume_id, iscsi_targets.c.deleted), # migrations Index('migrations_by_host_nodes_and_status_idx', migrations.c.deleted, migrations.c.source_compute, migrations.c.dest_compute, migrations.c.source_node, migrations.c.dest_node, migrations.c.status), Index('migrations_instance_uuid_and_status_idx', migrations.c.deleted, migrations.c.instance_uuid, migrations.c.status), # networks Index('networks_host_idx', networks.c.host), Index('networks_cidr_v6_idx', networks.c.cidr_v6), Index('networks_bridge_deleted_idx', networks.c.bridge, networks.c.deleted), Index('networks_project_id_deleted_idx', networks.c.project_id, networks.c.deleted), Index('networks_uuid_project_id_deleted_idx', networks.c.uuid, networks.c.project_id, networks.c.deleted), Index('networks_vlan_deleted_idx', networks.c.vlan, networks.c.deleted), # project_user_quotas Index('project_user_quotas_project_id_deleted_idx', project_user_quotas.c.project_id, project_user_quotas.c.deleted), Index('project_user_quotas_user_id_deleted_idx', project_user_quotas.c.user_id, project_user_quotas.c.deleted), # reservations Index('ix_reservations_project_id', reservations.c.project_id), Index('ix_reservations_user_id_deleted', reservations.c.user_id, reservations.c.deleted), Index('reservations_uuid_idx', reservations.c.uuid), # security_group_instance_association Index('security_group_instance_association_instance_uuid_idx', security_group_instance_association.c.instance_uuid), # task_log Index('ix_task_log_period_beginning', task_log.c.period_beginning), Index('ix_task_log_host', task_log.c.host), Index('ix_task_log_period_ending', task_log.c.period_ending), # quota_classes Index('ix_quota_classes_class_name', quota_classes.c.class_name), # quota_usages Index('ix_quota_usages_project_id', quota_usages.c.project_id), Index('ix_quota_usages_user_id_deleted', quota_usages.c.user_id, quota_usages.c.deleted), # volumes Index('volumes_instance_uuid_idx', volumes.c.instance_uuid), ] # MySQL specific indexes if migrate_engine.name == 'mysql': for index in mysql_pre_indexes: index.create(migrate_engine) # mysql-specific index by leftmost 100 chars. (mysql gets angry if the # index key length is too long.) sql = ("create index migrations_by_host_nodes_and_status_idx ON " "migrations (deleted, source_compute(100), dest_compute(100), " "source_node(100), dest_node(100), status)") migrate_engine.execute(sql) # PostgreSQL specific indexes if migrate_engine.name == 'postgresql': Index('address', fixed_ips.c.address).create() # NOTE(dprince): PostgreSQL doesn't allow duplicate indexes # so we skip creation of select indexes (so schemas match exactly). POSTGRES_INDEX_SKIPS = [ # See Havana migration 186_new_bdm_format where we dropped the # virtual_name column. 
# IceHouse fix is here: https://bugs.launchpad.net/nova/+bug/1265839 'block_device_mapping_instance_uuid_virtual_name_device_name_idx' ] MYSQL_INDEX_SKIPS = [ # we create this one manually for MySQL above 'migrations_by_host_nodes_and_status_idx' ] for index in common_indexes: if ((migrate_engine.name == 'postgresql' and index.name in POSTGRES_INDEX_SKIPS) or (migrate_engine.name == 'mysql' and index.name in MYSQL_INDEX_SKIPS)): continue else: index.create(migrate_engine) Index('project_id', dns_domains.c.project_id).drop # Common foreign keys fkeys = [ [[instance_type_projects.c.instance_type_id], [instance_types.c.id], 'instance_type_projects_ibfk_1'], [[iscsi_targets.c.volume_id], [volumes.c.id], 'iscsi_targets_volume_id_fkey'], [[reservations.c.usage_id], [quota_usages.c.id], 'reservations_ibfk_1'], [[security_group_instance_association.c.security_group_id], [security_groups.c.id], 'security_group_instance_association_ibfk_1'], [[compute_node_stats.c.compute_node_id], [compute_nodes.c.id], 'fk_compute_node_stats_compute_node_id'], [[compute_nodes.c.service_id], [services.c.id], 'fk_compute_nodes_service_id'], ] secgroup_instance_association_instance_uuid_fkey = ( 'security_group_instance_association_instance_uuid_fkey') fkeys.extend( [ [[fixed_ips.c.instance_uuid], [instances.c.uuid], 'fixed_ips_instance_uuid_fkey'], [[block_device_mapping.c.instance_uuid], [instances.c.uuid], 'block_device_mapping_instance_uuid_fkey'], [[instance_info_caches.c.instance_uuid], [instances.c.uuid], 'instance_info_caches_instance_uuid_fkey'], [[instance_metadata.c.instance_uuid], [instances.c.uuid], 'instance_metadata_instance_uuid_fkey'], [[instance_system_metadata.c.instance_uuid], [instances.c.uuid], 'instance_system_metadata_ibfk_1'], [[security_group_instance_association.c.instance_uuid], [instances.c.uuid], secgroup_instance_association_instance_uuid_fkey], [[virtual_interfaces.c.instance_uuid], [instances.c.uuid], 'virtual_interfaces_instance_uuid_fkey'], [[instance_actions.c.instance_uuid], [instances.c.uuid], 'fk_instance_actions_instance_uuid'], [[instance_faults.c.instance_uuid], [instances.c.uuid], 'fk_instance_faults_instance_uuid'], [[migrations.c.instance_uuid], [instances.c.uuid], 'fk_migrations_instance_uuid'] ]) for fkey_pair in fkeys: if migrate_engine.name == 'mysql': # For MySQL we name our fkeys explicitly # so they match Havana fkey = ForeignKeyConstraint(columns=fkey_pair[0], refcolumns=fkey_pair[1], name=fkey_pair[2]) fkey.create() elif migrate_engine.name == 'postgresql': # PostgreSQL names things like it wants (correct and compatible!) fkey = ForeignKeyConstraint(columns=fkey_pair[0], refcolumns=fkey_pair[1]) fkey.create() if migrate_engine.name == 'mysql': # In Folsom we explicitly converted migrate_version to UTF8. migrate_engine.execute( 'ALTER TABLE migrate_version CONVERT TO CHARACTER SET utf8') # Set default DB charset to UTF8. migrate_engine.execute( 'ALTER DATABASE %s DEFAULT CHARACTER SET utf8' % migrate_engine.url.database) _create_shadow_tables(migrate_engine) _create_dump_tables(migrate_engine) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/217_placeholder.py0000664000175000017500000000157500000000000026042 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Havana backports. # Do not use this number for new Icehouse work. New Icehouse work starts after # all the placeholders. # # See blueprint backportable-db-migrations-icehouse # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/218_placeholder.py0000664000175000017500000000157500000000000026043 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Havana backports. # Do not use this number for new Icehouse work. New Icehouse work starts after # all the placeholders. # # See blueprint backportable-db-migrations-icehouse # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/219_placeholder.py0000664000175000017500000000157500000000000026044 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Havana backports. # Do not use this number for new Icehouse work. New Icehouse work starts after # all the placeholders. # # See blueprint backportable-db-migrations-icehouse # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/220_placeholder.py0000664000175000017500000000157500000000000026034 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Havana backports. # Do not use this number for new Icehouse work. New Icehouse work starts after # all the placeholders. # # See blueprint backportable-db-migrations-icehouse # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/221_placeholder.py0000664000175000017500000000157500000000000026035 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Havana backports. # Do not use this number for new Icehouse work. New Icehouse work starts after # all the placeholders. # # See blueprint backportable-db-migrations-icehouse # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/222_placeholder.py0000664000175000017500000000157500000000000026036 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Havana backports. # Do not use this number for new Icehouse work. New Icehouse work starts after # all the placeholders. # # See blueprint backportable-db-migrations-icehouse # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/223_placeholder.py0000664000175000017500000000157500000000000026037 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Havana backports. # Do not use this number for new Icehouse work. New Icehouse work starts after # all the placeholders. # # See blueprint backportable-db-migrations-icehouse # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/224_placeholder.py0000664000175000017500000000157500000000000026040 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Havana backports. # Do not use this number for new Icehouse work. New Icehouse work starts after # all the placeholders. # # See blueprint backportable-db-migrations-icehouse # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/225_placeholder.py0000664000175000017500000000157500000000000026041 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Havana backports. # Do not use this number for new Icehouse work. New Icehouse work starts after # all the placeholders. # # See blueprint backportable-db-migrations-icehouse # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/226_placeholder.py0000664000175000017500000000157500000000000026042 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Havana backports. # Do not use this number for new Icehouse work. New Icehouse work starts after # all the placeholders. # # See blueprint backportable-db-migrations-icehouse # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/227_fix_project_user_quotas_resource_length.py0000664000175000017500000000252600000000000033774 0ustar00zuulzuul00000000000000# Copyright 2013 NEC Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import MetaData, String, Table def upgrade(migrate_engine): meta = MetaData(bind=migrate_engine) table = Table('project_user_quotas', meta, autoload=True) col_resource = getattr(table.c, 'resource') if col_resource.type.length == 25: # The resource of project_user_quotas table had been changed to # invalid length(25) since I56ad98d3702f53fe8cfa94093fea89074f7a5e90. # The following code fixes the length for the environments which are # deployed after I56ad98d3702f53fe8cfa94093fea89074f7a5e90. col_resource.alter(type=String(255)) table.update().where(table.c.resource == 'injected_file_content_byt')\ .values(resource='injected_file_content_bytes').execute() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/228_add_metrics_in_compute_nodes.py0000664000175000017500000000232400000000000031443 0ustar00zuulzuul00000000000000# Copyright 2013 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
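# As a rough, backend-dependent sketch (the exact DDL is generated by
# SQLAlchemy/sqlalchemy-migrate), the create_column() calls below amount to
# something like, on MySQL:
#
#   ALTER TABLE compute_nodes ADD COLUMN metrics TEXT NULL;
#   ALTER TABLE shadow_compute_nodes ADD COLUMN metrics TEXT NULL;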
from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import Table from sqlalchemy import Text def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine # Add a new column metrics to save metrics info for compute nodes compute_nodes = Table('compute_nodes', meta, autoload=True) shadow_compute_nodes = Table('shadow_compute_nodes', meta, autoload=True) metrics = Column('metrics', Text, nullable=True) shadow_metrics = Column('metrics', Text, nullable=True) compute_nodes.create_column(metrics) shadow_compute_nodes.create_column(shadow_metrics) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/229_add_extra_resources_in_compute_nodes.py0000664000175000017500000000251600000000000033216 0ustar00zuulzuul00000000000000# Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import Table from sqlalchemy import Text def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine # Add a new column extra_resources to save extra_resources info for # compute nodes compute_nodes = Table('compute_nodes', meta, autoload=True) shadow_compute_nodes = Table('shadow_compute_nodes', meta, autoload=True) extra_resources = Column('extra_resources', Text, nullable=True) shadow_extra_resources = Column('extra_resources', Text, nullable=True) compute_nodes.create_column(extra_resources) shadow_compute_nodes.create_column(shadow_extra_resources) ././@PaxHeader0000000000000000000000000000021100000000000011447 xustar0000000000000000115 path=nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/230_add_details_column_to_instance_actions_events.py 22 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/230_add_details_column_to_instance_actions_even0000664000175000017500000000232300000000000034051 0ustar00zuulzuul00000000000000# Copyright 2013 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
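# As a rough sketch (the exact DDL is generated by sqlalchemy-migrate and
# varies by backend), the columns added below correspond to:
#
#   ALTER TABLE instance_actions_events ADD COLUMN host VARCHAR(255);
#   ALTER TABLE instance_actions_events ADD COLUMN details TEXT;
#
# with the same two columns added to shadow_instance_actions_events.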
from oslo_db.sqlalchemy import utils from sqlalchemy import Column, String, Text from nova.db.sqlalchemy import api def upgrade(migrate_engine): actions_events = utils.get_table(migrate_engine, 'instance_actions_events') host = Column('host', String(255)) details = Column('details', Text) actions_events.create_column(host) actions_events.create_column(details) shadow_actions_events = utils.get_table(migrate_engine, api._SHADOW_TABLE_PREFIX + 'instance_actions_events') shadow_actions_events.create_column(host.copy()) shadow_actions_events.create_column(details.copy()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/231_add_ephemeral_key_uuid.py0000664000175000017500000000261200000000000030215 0ustar00zuulzuul00000000000000# Copyright (c) 2014 The Johns Hopkins University/Applied Physics Laboratory # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import MetaData, Column, Table from sqlalchemy import String def upgrade(migrate_engine): """Function adds ephemeral storage encryption key uuid field.""" meta = MetaData(bind=migrate_engine) instances = Table('instances', meta, autoload=True) shadow_instances = Table('shadow_instances', meta, autoload=True) ephemeral_key_uuid = Column('ephemeral_key_uuid', String(36)) instances.create_column(ephemeral_key_uuid) shadow_instances.create_column(ephemeral_key_uuid.copy()) migrate_engine.execute(instances.update(). values(ephemeral_key_uuid=None)) migrate_engine.execute(shadow_instances.update(). values(ephemeral_key_uuid=None)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/232_drop_dump_tables.py0000664000175000017500000000203000000000000027063 0ustar00zuulzuul00000000000000# Copyright 2014, Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
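# Roughly speaking, each drop below behaves like a guarded
#
#   DROP TABLE dump_<name>;
#
# where checkfirst=True makes SQLAlchemy verify the table exists before
# issuing the DROP, so re-running the migration is harmless.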
from sqlalchemy import MetaData from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData(migrate_engine) meta.reflect(migrate_engine) table_names = ['compute_node_stats', 'compute_nodes', 'instance_actions', 'instance_actions_events', 'instance_faults', 'migrations'] for table_name in table_names: table = Table('dump_' + table_name, meta) table.drop(checkfirst=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/233_add_stats_in_compute_nodes.py0000664000175000017500000000266400000000000031136 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Rackspace Hosting # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import Table from sqlalchemy import Text def upgrade(engine): meta = MetaData() meta.bind = engine # Drop the compute_node_stats table and add a 'stats' column to # compute_nodes directly. The data itself is transient and doesn't # need to be copied over. table_names = ('compute_node_stats', 'shadow_compute_node_stats') for table_name in table_names: table = Table(table_name, meta, autoload=True) table.drop() # Add a new stats column to compute nodes table_names = ('compute_nodes', 'shadow_compute_nodes') for table_name in table_names: table = Table(table_name, meta, autoload=True) stats = Column('stats', Text, default='{}') table.create_column(stats) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/234_add_expire_reservations_index.py0000664000175000017500000000270300000000000031650 0ustar00zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
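# The index created below is roughly equivalent to:
#
#   CREATE INDEX reservations_deleted_expire_idx
#       ON reservations (deleted, expire);
#
# guarded by the _get_deleted_expire_index() check so the creation is skipped
# when an equivalent (deleted, expire) index already exists.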
from oslo_log import log as logging from sqlalchemy import Index, MetaData, Table LOG = logging.getLogger(__name__) def _get_deleted_expire_index(table): members = sorted(['deleted', 'expire']) for idx in table.indexes: if sorted(idx.columns.keys()) == members: return idx def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine reservations = Table('reservations', meta, autoload=True) if _get_deleted_expire_index(reservations): LOG.info('Skipped adding reservations_deleted_expire_idx ' 'because an equivalent index already exists.') return # Based on expire_reservations query # from: nova/db/sqlalchemy/api.py index = Index('reservations_deleted_expire_idx', reservations.c.deleted, reservations.c.expire) index.create(migrate_engine) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/235_placeholder.py0000664000175000017500000000156300000000000026037 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Icehouse backports. # Do not use this number for new Juno work. New Juno work starts after # all the placeholders. # # See blueprint backportable-db-migrations-juno # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/236_placeholder.py0000664000175000017500000000156300000000000026040 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Icehouse backports. # Do not use this number for new Juno work. New Juno work starts after # all the placeholders. # # See blueprint backportable-db-migrations-juno # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/237_placeholder.py0000664000175000017500000000156300000000000026041 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Icehouse backports. # Do not use this number for new Juno work. New Juno work starts after # all the placeholders. # # See blueprint backportable-db-migrations-juno # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/238_placeholder.py0000664000175000017500000000156300000000000026042 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Icehouse backports. # Do not use this number for new Juno work. New Juno work starts after # all the placeholders. # # See blueprint backportable-db-migrations-juno # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/239_placeholder.py0000664000175000017500000000156300000000000026043 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Icehouse backports. # Do not use this number for new Juno work. New Juno work starts after # all the placeholders. # # See blueprint backportable-db-migrations-juno # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/240_placeholder.py0000664000175000017500000000156300000000000026033 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Icehouse backports. # Do not use this number for new Juno work. New Juno work starts after # all the placeholders. # # See blueprint backportable-db-migrations-juno # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/241_placeholder.py0000664000175000017500000000156300000000000026034 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Icehouse backports. # Do not use this number for new Juno work. New Juno work starts after # all the placeholders. # # See blueprint backportable-db-migrations-juno # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/242_placeholder.py0000664000175000017500000000156300000000000026035 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Icehouse backports. # Do not use this number for new Juno work. New Juno work starts after # all the placeholders. # # See blueprint backportable-db-migrations-juno # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/243_placeholder.py0000664000175000017500000000156300000000000026036 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Icehouse backports. # Do not use this number for new Juno work. New Juno work starts after # all the placeholders. # # See blueprint backportable-db-migrations-juno # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/244_increase_user_id_length_volume_usage_cache.py 22 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/244_increase_user_id_length_volume_usage_cache.0000664000175000017500000000172700000000000033750 0ustar00zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import MetaData, String, Table def upgrade(migrate_engine): meta = MetaData(bind=migrate_engine) table = Table('volume_usage_cache', meta, autoload=True) col_resource = getattr(table.c, 'user_id') # Match the maximum length of user_id in Keystone here instead of # assuming explicitly a single UUID length. col_resource.alter(type=String(64)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/245_add_mtu_and_dhcp_server.py0000664000175000017500000000412400000000000030375 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Nebula, Inc. # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from sqlalchemy import MetaData, Column, Table
from sqlalchemy import Boolean, Integer

from nova.db.sqlalchemy import types


def upgrade(migrate_engine):
    """Function adds network mtu, dhcp_server, enable_dhcp and
    share_address fields.
    """
    meta = MetaData(bind=migrate_engine)

    networks = Table('networks', meta, autoload=True)
    shadow_networks = Table('shadow_networks', meta, autoload=True)

    # NOTE(vish): ignore duplicate runs of upgrade so this can
    # be backported
    mtu = Column('mtu', Integer)
    dhcp_server = Column('dhcp_server', types.IPAddress)
    enable_dhcp = Column('enable_dhcp', Boolean, default=True)
    share_address = Column('share_address', Boolean, default=False)

    if not hasattr(networks.c, 'mtu'):
        networks.create_column(mtu)
    if not hasattr(networks.c, 'dhcp_server'):
        networks.create_column(dhcp_server)
    if not hasattr(networks.c, 'enable_dhcp'):
        networks.create_column(enable_dhcp)
    if not hasattr(networks.c, 'share_address'):
        networks.create_column(share_address)

    if not hasattr(shadow_networks.c, 'mtu'):
        shadow_networks.create_column(mtu.copy())
    if not hasattr(shadow_networks.c, 'dhcp_server'):
        shadow_networks.create_column(dhcp_server.copy())
    if not hasattr(shadow_networks.c, 'enable_dhcp'):
        shadow_networks.create_column(enable_dhcp.copy())
    if not hasattr(shadow_networks.c, 'share_address'):
        shadow_networks.create_column(share_address.copy())

nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/246_add_compute_node_id_fk.py

# Copyright 2014 OpenStack Foundation
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from migrate.changeset.constraint import ForeignKeyConstraint
from sqlalchemy import MetaData, Table


def upgrade(migrate_engine):
    """Add missing foreign key constraint on pci_devices.compute_node_id."""
    meta = MetaData(bind=migrate_engine)

    pci_devices = Table('pci_devices', meta, autoload=True)
    compute_nodes = Table('compute_nodes', meta, autoload=True)

    fkey = ForeignKeyConstraint(columns=[pci_devices.c.compute_node_id],
                                refcolumns=[compute_nodes.c.id])
    fkey.create()

nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/246_sqlite_upgrade.sql

CREATE TABLE pci_devices_new (
    created_at DATETIME,
    updated_at DATETIME,
    deleted_at DATETIME,
    deleted INTEGER,
    id INTEGER NOT NULL,
    compute_node_id INTEGER NOT NULL,
    address VARCHAR(12) NOT NULL,
    vendor_id VARCHAR(4) NOT NULL,
    product_id VARCHAR(4) NOT NULL,
    dev_type VARCHAR(8) NOT NULL,
    dev_id VARCHAR(255),
    label VARCHAR(255) NOT NULL,
    status VARCHAR(36) NOT NULL,
    extra_info TEXT,
    instance_uuid VARCHAR(36),
    PRIMARY KEY (id),
    FOREIGN KEY (compute_node_id) REFERENCES compute_nodes(id),
    CONSTRAINT uniq_pci_devices0compute_node_id0address0deleted
        UNIQUE (compute_node_id, address, deleted)
);

INSERT INTO pci_devices_new
    SELECT created_at, updated_at, deleted_at, deleted, id, compute_node_id,
           address, vendor_id, product_id, dev_type, dev_id, label, status,
           extra_info, instance_uuid
    FROM pci_devices;

DROP TABLE pci_devices;

ALTER TABLE pci_devices_new RENAME TO pci_devices;

CREATE INDEX ix_pci_devices_compute_node_id_deleted
    ON pci_devices (compute_node_id, deleted);

CREATE INDEX ix_pci_devices_instance_uuid_deleted
    ON pci_devices (instance_uuid, deleted);

nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/247_nullable_mismatch.py

# Copyright 2014 OpenStack Foundation
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from sqlalchemy import MetaData, Table


def upgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)

    quota_usages = Table('quota_usages', meta, autoload=True)
    quota_usages.c.resource.alter(nullable=False)

    pci_devices = Table('pci_devices', meta, autoload=True)
    pci_devices.c.deleted.alter(nullable=True)
    pci_devices.c.product_id.alter(nullable=False)
    pci_devices.c.vendor_id.alter(nullable=False)
    pci_devices.c.dev_type.alter(nullable=False)

nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/248_add_expire_reservations_index.py

# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_log import log as logging
from sqlalchemy import Index, MetaData, Table

LOG = logging.getLogger(__name__)


def _get_deleted_expire_index(table):
    members = sorted(['deleted', 'expire'])
    for idx in table.indexes:
        if sorted(idx.columns.keys()) == members:
            return idx


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine

    reservations = Table('reservations', meta, autoload=True)
    if _get_deleted_expire_index(reservations):
        LOG.info('Skipped adding reservations_deleted_expire_idx '
                 'because an equivalent index already exists.')
        return

    # Based on expire_reservations query
    # from: nova/db/sqlalchemy/api.py
    index = Index('reservations_deleted_expire_idx',
                  reservations.c.deleted, reservations.c.expire)
    index.create(migrate_engine)

nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/249_remove_duplicate_index.py

# Copyright 2014 OpenStack Foundation
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from sqlalchemy import MetaData, Table

INDEX_NAME = 'block_device_mapping_instance_uuid_virtual_name_device_name_idx'


def upgrade(migrate_engine):
    """Remove duplicate index from block_device_mapping table."""
    meta = MetaData(bind=migrate_engine)
    bdm = Table('block_device_mapping', meta, autoload=True)
    for index in bdm.indexes:
        if index.name == INDEX_NAME:
            index.drop()

nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/250_remove_instance_groups_metadata.py

# Copyright 2014 Red Hat, Inc.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import MetaData, Table


def upgrade(migrate_engine):
    """Remove the instance_group_metadata table."""
    meta = MetaData(bind=migrate_engine)

    if migrate_engine.has_table('instance_group_metadata'):
        group_metadata = Table('instance_group_metadata', meta, autoload=True)
        group_metadata.drop()

    if migrate_engine.has_table('shadow_instance_group_metadata'):
        shadow_group_metadata = Table('shadow_instance_group_metadata', meta,
                                      autoload=True)
        shadow_group_metadata.drop()

nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/251_add_numa_topology_to_comput_nodes.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from sqlalchemy import Column
from sqlalchemy import MetaData
from sqlalchemy import Table
from sqlalchemy import Text


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine

    compute_nodes = Table('compute_nodes', meta, autoload=True)
    shadow_compute_nodes = Table('shadow_compute_nodes', meta, autoload=True)

    numa_topology = Column('numa_topology', Text, nullable=True)
    shadow_numa_topology = Column('numa_topology', Text, nullable=True)
    compute_nodes.create_column(numa_topology)
    shadow_compute_nodes.create_column(shadow_numa_topology)

nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/252_add_instance_extra_table.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from migrate import ForeignKeyConstraint from sqlalchemy import Column from sqlalchemy import DateTime from sqlalchemy import Index from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table from sqlalchemy import Text def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine columns = [ (('created_at', DateTime), {}), (('updated_at', DateTime), {}), (('deleted_at', DateTime), {}), (('deleted', Integer), {}), (('id', Integer), dict(primary_key=True, nullable=False)), (('instance_uuid', String(length=36)), dict(nullable=False)), (('numa_topology', Text), dict(nullable=True)), ] for prefix in ('', 'shadow_'): instances = Table(prefix + 'instances', meta, autoload=True) basename = prefix + 'instance_extra' if migrate_engine.has_table(basename): continue _columns = tuple([Column(*args, **kwargs) for args, kwargs in columns]) table = Table(basename, meta, *_columns, mysql_engine='InnoDB', mysql_charset='utf8') table.create() # Index instance_uuid_index = Index(basename + '_idx', table.c.instance_uuid) instance_uuid_index.create(migrate_engine) # Foreign key if not prefix: fkey_columns = [table.c.instance_uuid] fkey_refcolumns = [instances.c.uuid] instance_fkey = ForeignKeyConstraint( columns=fkey_columns, refcolumns=fkey_refcolumns) instance_fkey.create() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/253_add_pci_requests_to_instance_extra_table.py0000664000175000017500000000213000000000000034022 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import Table from sqlalchemy import Text BASE_TABLE_NAME = 'instance_extra' NEW_COLUMN_NAME = 'pci_requests' def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine for prefix in ('', 'shadow_'): table = Table(prefix + BASE_TABLE_NAME, meta, autoload=True) new_column = Column(NEW_COLUMN_NAME, Text, nullable=True) if not hasattr(table.c, NEW_COLUMN_NAME): table.create_column(new_column) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/254_add_request_id_in_pci_devices.py0000664000175000017500000000237000000000000031552 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Cisco Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table def upgrade(engine): """Function adds request_id field.""" meta = MetaData(bind=engine) pci_devices = Table('pci_devices', meta, autoload=True) shadow_pci_devices = Table('shadow_pci_devices', meta, autoload=True) request_id = Column('request_id', String(36), nullable=True) if not hasattr(pci_devices.c, 'request_id'): pci_devices.create_column(request_id) if not hasattr(shadow_pci_devices.c, 'request_id'): shadow_pci_devices.create_column(request_id.copy()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/255_placeholder.py0000664000175000017500000000155700000000000026044 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Juno backports. # Do not use this number for new Kilo work. New Kilo work starts after # all the placeholders. # # See blueprint backportable-db-migrations-kilo # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/256_placeholder.py0000664000175000017500000000155700000000000026045 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Juno backports. # Do not use this number for new Kilo work. New Kilo work starts after # all the placeholders. # # See blueprint backportable-db-migrations-kilo # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/257_placeholder.py0000664000175000017500000000155700000000000026046 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Juno backports. # Do not use this number for new Kilo work. New Kilo work starts after # all the placeholders. # # See blueprint backportable-db-migrations-kilo # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/258_placeholder.py0000664000175000017500000000155700000000000026047 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Juno backports. # Do not use this number for new Kilo work. New Kilo work starts after # all the placeholders. # # See blueprint backportable-db-migrations-kilo # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/259_placeholder.py0000664000175000017500000000155700000000000026050 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Juno backports. # Do not use this number for new Kilo work. New Kilo work starts after # all the placeholders. # # See blueprint backportable-db-migrations-kilo # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/260_placeholder.py0000664000175000017500000000155700000000000026040 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Juno backports. # Do not use this number for new Kilo work. New Kilo work starts after # all the placeholders. 
# # See blueprint backportable-db-migrations-kilo # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/261_placeholder.py0000664000175000017500000000155700000000000026041 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Juno backports. # Do not use this number for new Kilo work. New Kilo work starts after # all the placeholders. # # See blueprint backportable-db-migrations-kilo # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/262_placeholder.py0000664000175000017500000000155700000000000026042 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Juno backports. # Do not use this number for new Kilo work. New Kilo work starts after # all the placeholders. # # See blueprint backportable-db-migrations-kilo # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/263_placeholder.py0000664000175000017500000000155700000000000026043 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Juno backports. # Do not use this number for new Kilo work. New Kilo work starts after # all the placeholders. 
# # See blueprint backportable-db-migrations-kilo # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/264_placeholder.py0000664000175000017500000000155700000000000026044 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Juno backports. # Do not use this number for new Kilo work. New Kilo work starts after # all the placeholders. # # See blueprint backportable-db-migrations-kilo # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/265_remove_duplicated_index.py0000664000175000017500000000232100000000000030433 0ustar00zuulzuul00000000000000# Copyright 2014 OpenStack Foundation # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import MetaData, Table INDEXES = [ # subset of instances_host_deleted_cleaned_idx ('instances', 'instances_host_deleted_idx'), # subset of iscsi_targets_host_volume_id_deleted_idx ('iscsi_targets', 'iscsi_targets_host_idx'), ] def upgrade(migrate_engine): """Remove index that are subsets of other indexes.""" meta = MetaData(bind=migrate_engine) for table_name, index_name in INDEXES: table = Table(table_name, meta, autoload=True) for index in table.indexes: if index.name == index_name: index.drop() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/266_add_instance_tags.py0000664000175000017500000000212200000000000027203 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import sqlalchemy as sa def upgrade(migrate_engine): meta = sa.MetaData(bind=migrate_engine) tags = sa.Table('tags', meta, sa.Column('resource_id', sa.String(36), primary_key=True, nullable=False), sa.Column('tag', sa.Unicode(80), primary_key=True, nullable=False), sa.Index('tags_tag_idx', 'tag'), mysql_engine='InnoDB', mysql_charset='utf8') tags.create() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/267_instance_uuid_non_nullable.py0000664000175000017500000001142400000000000031141 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from migrate import UniqueConstraint from sqlalchemy import MetaData from sqlalchemy.sql import null from nova import exception from nova.i18n import _ UC_NAME = 'uniq_instances0uuid' def scan_for_null_records(table, col_name, check_fkeys): """Queries the table looking for NULL instances of the given column. :param col_name: The name of the column to look for in the table. :param check_fkeys: If True, check the table for foreign keys back to the instances table and if not found, return. :raises: exception.ValidationError: If any records are found. """ if col_name in table.columns: # NOTE(mriedem): filter out tables that don't have a foreign key back # to the instances table since they could have stale data even if # instances.uuid wasn't NULL. if check_fkeys: fkey_found = False fkeys = table.c[col_name].foreign_keys or [] for fkey in fkeys: if fkey.column.table.name == 'instances': fkey_found = True if not fkey_found: return records = len(list( table.select().where(table.c[col_name] == null()).execute() )) if records: msg = _("There are %(records)d records in the " "'%(table_name)s' table where the uuid or " "instance_uuid column is NULL. These must be " "manually cleaned up before the migration will pass. " "Consider running the " "'nova-manage db null_instance_uuid_scan' command.") % ( {'records': records, 'table_name': table.name}) raise exception.ValidationError(detail=msg) def process_null_records(meta, scan=True): """Scans the database for null instance_uuid records for processing. :param meta: sqlalchemy.MetaData object, assumes tables are reflected. :param scan: If True, does a query and fails the migration if NULL instance uuid entries found. If False, makes instances.uuid non-nullable. """ if scan: for table in reversed(meta.sorted_tables): # NOTE(mriedem): There is a periodic task in the network manager # that calls nova.db.api.fixed_ip_disassociate_all_by_timeout which # will set fixed_ips.instance_uuid to None by design, so we have to # skip the fixed_ips table otherwise we'll wipeout the pool of # fixed IPs. 
if table.name not in ('fixed_ips', 'shadow_fixed_ips'): scan_for_null_records(table, 'instance_uuid', check_fkeys=True) for table_name in ('instances', 'shadow_instances'): table = meta.tables[table_name] if scan: scan_for_null_records(table, 'uuid', check_fkeys=False) else: # The record is gone so make the uuid column non-nullable. table.columns.uuid.alter(nullable=False) def upgrade(migrate_engine): # NOTE(mriedem): We're going to load up all of the tables so we can find # any with an instance_uuid column since those may be foreign keys back # to the instances table and we want to cleanup those records first. We # have to do this explicitly because the foreign keys in nova aren't # defined with cascading deletes. meta = MetaData(migrate_engine) meta.reflect(migrate_engine) # Scan the database first and fail if any NULL records found. process_null_records(meta, scan=True) # Now run the alter statements. process_null_records(meta, scan=False) # Create a unique constraint on instances.uuid for foreign keys. instances = meta.tables['instances'] UniqueConstraint('uuid', table=instances, name=UC_NAME).create() # NOTE(mriedem): We now have a unique index on instances.uuid from the # 216_havana migration and a unique constraint on the same column, which # is redundant but should not be a big performance penalty. We should # clean this up in a later (separate) migration since it involves dropping # any ForeignKeys on the instances.uuid column due to some index rename # issues in older versions of MySQL. That is beyond the scope of this # migration. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/268_add_host_in_compute_node.py0000664000175000017500000000320600000000000030573 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from migrate import UniqueConstraint
from sqlalchemy import MetaData, Table, Column, String


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine

    # Add a new column host
    compute_nodes = Table('compute_nodes', meta, autoload=True)
    shadow_compute_nodes = Table('shadow_compute_nodes', meta, autoload=True)

    # NOTE(sbauza) : Old compute nodes can report stats without this field, we
    # need to set it as nullable
    host = Column('host', String(255), nullable=True)

    if not hasattr(compute_nodes.c, 'host'):
        compute_nodes.create_column(host)
    if not hasattr(shadow_compute_nodes.c, 'host'):
        shadow_compute_nodes.create_column(host.copy())

    # NOTE(sbauza) : Populate the host field with the value from the services
    # table will be done at the ComputeNode object level when save()

    ukey = UniqueConstraint(
        'host', 'hypervisor_hostname', table=compute_nodes,
        name="uniq_compute_nodes0host0hypervisor_hostname")
    ukey.create()

nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/269_add_numa_node_column.py

# Copyright 2014 Intel Corporation
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# See blueprint backportable-db-migrations-icehouse
# http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html

from sqlalchemy import MetaData, Table, Column, Integer


def upgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)

    # Add a new column to store PCI device numa node
    pci_devices = Table('pci_devices', meta, autoload=True)
    shadow_pci_devices = Table('shadow_pci_devices', meta, autoload=True)

    numa_node = Column('numa_node', Integer, default=None)

    if not hasattr(pci_devices.c, 'numa_node'):
        pci_devices.create_column(numa_node)
    if not hasattr(shadow_pci_devices.c, 'numa_node'):
        shadow_pci_devices.create_column(numa_node.copy())

nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/270_flavor_data_in_extra.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import Table from sqlalchemy import Text BASE_TABLE_NAME = 'instance_extra' NEW_COLUMN_NAME = 'flavor' def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine for prefix in ('', 'shadow_'): table = Table(prefix + BASE_TABLE_NAME, meta, autoload=True) new_column = Column(NEW_COLUMN_NAME, Text, nullable=True) if not hasattr(table.c, NEW_COLUMN_NAME): table.create_column(new_column) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/271_sqlite_postgresql_indexes.py0000664000175000017500000000564000000000000031060 0ustar00zuulzuul00000000000000# Copyright 2014 Rackspace Hosting # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_db.sqlalchemy import utils INDEXES = [ ('block_device_mapping', 'snapshot_id', ['snapshot_id']), ('block_device_mapping', 'volume_id', ['volume_id']), ('dns_domains', 'dns_domains_project_id_idx', ['project_id']), ('fixed_ips', 'network_id', ['network_id']), ('fixed_ips', 'fixed_ips_instance_uuid_fkey', ['instance_uuid']), ('fixed_ips', 'fixed_ips_virtual_interface_id_fkey', ['virtual_interface_id']), ('floating_ips', 'fixed_ip_id', ['fixed_ip_id']), ('iscsi_targets', 'iscsi_targets_volume_id_fkey', ['volume_id']), ('virtual_interfaces', 'virtual_interfaces_network_id_idx', ['network_id']), ('virtual_interfaces', 'virtual_interfaces_instance_uuid_fkey', ['instance_uuid']), ] def ensure_index_exists(migrate_engine, table_name, index_name, column_names): if not utils.index_exists(migrate_engine, table_name, index_name): utils.add_index(migrate_engine, table_name, index_name, column_names) def ensure_index_removed(migrate_engine, table_name, index_name): if utils.index_exists(migrate_engine, table_name, index_name): utils.drop_index(migrate_engine, table_name, index_name) def upgrade(migrate_engine): """Add indexes missing on SQLite and PostgreSQL.""" # PostgreSQL and SQLite namespace indexes at the database level, whereas # MySQL namespaces indexes at the table level. Unfortunately, some of # the missing indexes in PostgreSQL and SQLite have conflicting names # that MySQL allowed. 
if migrate_engine.name in ('sqlite', 'postgresql'): for table_name, index_name, column_names in INDEXES: ensure_index_exists(migrate_engine, table_name, index_name, column_names) elif migrate_engine.name == 'mysql': # Rename some indexes with conflicting names ensure_index_removed(migrate_engine, 'dns_domains', 'project_id') ensure_index_exists(migrate_engine, 'dns_domains', 'dns_domains_project_id_idx', ['project_id']) ensure_index_removed(migrate_engine, 'virtual_interfaces', 'network_id') ensure_index_exists(migrate_engine, 'virtual_interfaces', 'virtual_interfaces_network_id_idx', ['network_id']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/272_add_keypair_type.py0000664000175000017500000000236500000000000027074 0ustar00zuulzuul00000000000000# Copyright (c) 2015 Cloudbase Solutions SRL # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # NOTE(mikal): this migration number exists like this because change # I506dd1c8d0f0a877fdfc1a4ed11a8830d9600b98 needs to revert the hyper-v # keypair change, but we promise that we will never remove a schema migration # version. Instead, we replace this migration with a noop. # # It is hypothetically possible that a hyper-v continuous deployer exists who # will have a poor experience because of this code revert, if that deployer # is you, please contact the nova team at openstack-discuss@lists.openstack.org # and we will walk you through the manual fix required for this problem. def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/273_sqlite_foreign_keys.py0000664000175000017500000001112200000000000027614 0ustar00zuulzuul00000000000000# Copyright 2014 Rackspace Hosting # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from migrate import ForeignKeyConstraint, UniqueConstraint
from oslo_db.sqlalchemy import utils
from sqlalchemy import MetaData, schema, Table


FKEYS = [
    ('fixed_ips', 'instance_uuid', 'instances', 'uuid',
     'fixed_ips_instance_uuid_fkey'),
    ('block_device_mapping', 'instance_uuid', 'instances', 'uuid',
     'block_device_mapping_instance_uuid_fkey'),
    ('instance_info_caches', 'instance_uuid', 'instances', 'uuid',
     'instance_info_caches_instance_uuid_fkey'),
    ('instance_metadata', 'instance_uuid', 'instances', 'uuid',
     'instance_metadata_instance_uuid_fkey'),
    ('instance_system_metadata', 'instance_uuid', 'instances', 'uuid',
     'instance_system_metadata_ibfk_1'),
    ('instance_type_projects', 'instance_type_id', 'instance_types', 'id',
     'instance_type_projects_ibfk_1'),
    ('iscsi_targets', 'volume_id', 'volumes', 'id',
     'iscsi_targets_volume_id_fkey'),
    ('reservations', 'usage_id', 'quota_usages', 'id',
     'reservations_ibfk_1'),
    ('security_group_instance_association', 'instance_uuid',
     'instances', 'uuid',
     'security_group_instance_association_instance_uuid_fkey'),
    ('security_group_instance_association', 'security_group_id',
     'security_groups', 'id',
     'security_group_instance_association_ibfk_1'),
    ('virtual_interfaces', 'instance_uuid', 'instances', 'uuid',
     'virtual_interfaces_instance_uuid_fkey'),
    ('compute_nodes', 'service_id', 'services', 'id',
     'fk_compute_nodes_service_id'),
    ('instance_actions', 'instance_uuid', 'instances', 'uuid',
     'fk_instance_actions_instance_uuid'),
    ('instance_faults', 'instance_uuid', 'instances', 'uuid',
     'fk_instance_faults_instance_uuid'),
    ('migrations', 'instance_uuid', 'instances', 'uuid',
     'fk_migrations_instance_uuid'),
]

UNIQUES = [
    ('compute_nodes', 'uniq_compute_nodes0host0hypervisor_hostname',
     ['host', 'hypervisor_hostname']),
    ('fixed_ips', 'uniq_fixed_ips0address0deleted',
     ['address', 'deleted']),
    ('instance_info_caches', 'uniq_instance_info_caches0instance_uuid',
     ['instance_uuid']),
    ('instance_type_projects',
     'uniq_instance_type_projects0instance_type_id0project_id0deleted',
     ['instance_type_id', 'project_id', 'deleted']),
    ('pci_devices', 'uniq_pci_devices0compute_node_id0address0deleted',
     ['compute_node_id', 'address', 'deleted']),
    ('virtual_interfaces', 'uniq_virtual_interfaces0address0deleted',
     ['address', 'deleted']),
]


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine

    if migrate_engine.name == 'sqlite':
        # SQLite is also missing this one index
        if not utils.index_exists(migrate_engine, 'fixed_ips', 'address'):
            utils.add_index(migrate_engine, 'fixed_ips', 'address',
                            ['address'])

        for src_table, src_column, dst_table, dst_column, name in FKEYS:
            src_table = Table(src_table, meta, autoload=True)
            if name in set(fk.name for fk in src_table.foreign_keys):
                continue

            src_column = src_table.c[src_column]

            dst_table = Table(dst_table, meta, autoload=True)
            dst_column = dst_table.c[dst_column]

            fkey = ForeignKeyConstraint(columns=[src_column],
                                        refcolumns=[dst_column],
                                        name=name)
            fkey.create()

        # SQLAlchemy versions < 1.0.0 don't reflect unique constraints
        # for SQLite correctly causing sqlalchemy-migrate to recreate
        # some tables with missing unique constraints. Re-add some
        # potentially missing unique constraints as a workaround.
        for table_name, name, column_names in UNIQUES:
            table = Table(table_name, meta, autoload=True)
            if name in set(c.name for c in table.constraints
                           if isinstance(c, schema.UniqueConstraint)):
                continue

            uc = UniqueConstraint(*column_names, table=table, name=name)
            uc.create()

nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/274_update_instances_project_id_index.py

# Copyright 2014 Rackspace Hosting
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_log import log as logging
from sqlalchemy import MetaData, Table, Index

LOG = logging.getLogger(__name__)


def upgrade(migrate_engine):
    """Change instances (project_id) index to cover (project_id, deleted)."""
    meta = MetaData(bind=migrate_engine)

    # Indexes can't be changed, we need to create the new one and delete
    # the old one
    instances = Table('instances', meta, autoload=True)

    for index in instances.indexes:
        if [c.name for c in index.columns] == ['project_id', 'deleted']:
            LOG.info('Skipped adding instances_project_id_deleted_idx '
                     'because an equivalent index already exists.')
            break
    else:
        index = Index('instances_project_id_deleted_idx',
                      instances.c.project_id, instances.c.deleted)
        index.create()

    for index in instances.indexes:
        if [c.name for c in index.columns] == ['project_id']:
            index.drop()

nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/275_add_keypair_type.py

# Copyright (c) 2015 Cloudbase Solutions SRL
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import MetaData, Column, Table from sqlalchemy import Enum from nova.objects import keypair def upgrade(migrate_engine): """Function adds key_pairs type field.""" meta = MetaData(bind=migrate_engine) key_pairs = Table('key_pairs', meta, autoload=True) shadow_key_pairs = Table('shadow_key_pairs', meta, autoload=True) enum = Enum('ssh', 'x509', metadata=meta, name='keypair_types') enum.create() keypair_type = Column('type', enum, nullable=False, server_default=keypair.KEYPAIR_TYPE_SSH) if hasattr(key_pairs.c, 'type'): key_pairs.c.type.drop() if hasattr(shadow_key_pairs.c, 'type'): shadow_key_pairs.c.type.drop() key_pairs.create_column(keypair_type) shadow_key_pairs.create_column(keypair_type.copy()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/276_vcpu_model.py0000664000175000017500000000212600000000000025713 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import Table from sqlalchemy import Text BASE_TABLE_NAME = 'instance_extra' NEW_COLUMN_NAME = 'vcpu_model' def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine for prefix in ('', 'shadow_'): table = Table(prefix + BASE_TABLE_NAME, meta, autoload=True) new_column = Column(NEW_COLUMN_NAME, Text, nullable=True) if not hasattr(table.c, NEW_COLUMN_NAME): table.create_column(new_column) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/277_add_fixed_ip_updated_index.py0000664000175000017500000000265400000000000031061 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_log import log as logging from sqlalchemy import Index, MetaData, Table LOG = logging.getLogger(__name__) INDEX_COLUMNS = ['deleted', 'allocated', 'updated_at'] INDEX_NAME = 'fixed_ips_%s_idx' % ('_'.join(INDEX_COLUMNS),) def _get_table_index(migrate_engine): meta = MetaData() meta.bind = migrate_engine table = Table('fixed_ips', meta, autoload=True) for idx in table.indexes: if idx.columns.keys() == INDEX_COLUMNS: break else: idx = None return meta, table, idx def upgrade(migrate_engine): meta, table, index = _get_table_index(migrate_engine) if index: LOG.info('Skipped adding %s because an equivalent index' ' already exists.', INDEX_NAME) return columns = [getattr(table.c, col_name) for col_name in INDEX_COLUMNS] index = Index(INDEX_NAME, *columns) index.create(migrate_engine) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/278_remove_service_fk_in_compute_nodes.py0000664000175000017500000000535700000000000032700 0ustar00zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from migrate import ForeignKeyConstraint, UniqueConstraint from sqlalchemy import MetaData, Table from sqlalchemy.engine import reflection def _correct_sqlite_unique_constraints(migrate_engine, table): # NOTE(sbauza): SQLAlchemy<1.0 doesn't provide the unique keys in the # constraints field of the Table object, so it would drop them if we change # either the scheme or the constraints. Adding them back to the Table # object before changing the model makes sure that they are not dropped. 
if migrate_engine.name != 'sqlite': # other engines don't have this problem return inspector = reflection.Inspector.from_engine(migrate_engine) constraints = inspector.get_unique_constraints(table.name) constraint_names = [c.name for c in table.constraints] for constraint in constraints: if constraint['name'] in constraint_names: # the constraint is already in the table continue table.constraints.add( UniqueConstraint(*constraint['column_names'], table=table, name=constraint['name'])) def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine compute_nodes = Table('compute_nodes', meta, autoload=True) shadow_compute_nodes = Table('shadow_compute_nodes', meta, autoload=True) services = Table('services', meta, autoload=True) _correct_sqlite_unique_constraints(migrate_engine, compute_nodes) # Make the service_id column nullable compute_nodes.c.service_id.alter(nullable=True) shadow_compute_nodes.c.service_id.alter(nullable=True) for fk in compute_nodes.foreign_keys: if fk.column == services.c.id: # Delete the FK fkey = ForeignKeyConstraint(columns=[compute_nodes.c.service_id], refcolumns=[services.c.id], name=fk.name) fkey.drop() break for index in compute_nodes.indexes: if 'service_id' in index.columns: # Delete the nested index which was created by the FK index.drop() break ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/279_fix_unique_constraint_for_compute_node.py0000664000175000017500000000237200000000000033613 0ustar00zuulzuul00000000000000# Copyright (c) Intel Corporation. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from migrate import UniqueConstraint from sqlalchemy import MetaData, Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine compute_nodes = Table('compute_nodes', meta, autoload=True) # Drop the old UniqueConstraint ukey = UniqueConstraint('host', 'hypervisor_hostname', table=compute_nodes, name="uniq_compute_nodes0host0hypervisor_hostname") ukey.drop() # Add new UniqueConstraint ukey = UniqueConstraint( 'host', 'hypervisor_hostname', 'deleted', table=compute_nodes, name="uniq_compute_nodes0host0hypervisor_hostname0deleted") ukey.create() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/280_add_nullable_false_to_keypairs_name.py0000664000175000017500000000273700000000000032752 0ustar00zuulzuul00000000000000# Copyright 2015 Intel Corporation # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from migrate.changeset import UniqueConstraint from sqlalchemy import MetaData, Table def upgrade(migrate_engine): """Function enforces non-null value for keypairs name field.""" meta = MetaData(bind=migrate_engine) key_pairs = Table('key_pairs', meta, autoload=True) # Note: Since we are altering name field, this constraint on name needs to # first be dropped before we can alter name. We then re-create the same # constraint. It was first added in 216_havana.py so no need to remove # constraint on downgrade. UniqueConstraint('user_id', 'name', 'deleted', table=key_pairs, name='uniq_key_pairs0user_id0name0deleted').drop() key_pairs.c.name.alter(nullable=False) UniqueConstraint('user_id', 'name', 'deleted', table=key_pairs, name='uniq_key_pairs0user_id0name0deleted').create() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/281_placeholder.py0000664000175000017500000000153600000000000026040 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Kilo backports. # Do not use this number for new Liberty work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/282_placeholder.py0000664000175000017500000000153600000000000026041 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Kilo backports. # Do not use this number for new Liberty work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/283_placeholder.py0000664000175000017500000000153600000000000026042 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Kilo backports. # Do not use this number for new Liberty work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/284_placeholder.py0000664000175000017500000000153600000000000026043 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Kilo backports. # Do not use this number for new Liberty work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/285_placeholder.py0000664000175000017500000000153600000000000026044 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Kilo backports. # Do not use this number for new Liberty work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/286_placeholder.py0000664000175000017500000000153600000000000026045 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Kilo backports. # Do not use this number for new Liberty work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/287_placeholder.py0000664000175000017500000000153600000000000026046 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Kilo backports. # Do not use this number for new Liberty work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/288_placeholder.py0000664000175000017500000000153600000000000026047 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Kilo backports. # Do not use this number for new Liberty work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/289_placeholder.py0000664000175000017500000000153600000000000026050 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Kilo backports. # Do not use this number for new Liberty work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/290_placeholder.py0000664000175000017500000000153600000000000026040 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Kilo backports. # Do not use this number for new Liberty work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/291_enforce_flavors_migrated.py0000664000175000017500000000306200000000000030604 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import MetaData, Table, func, select from sqlalchemy.sql import and_ from nova import exception from nova.i18n import _ def upgrade(migrate_engine): meta = MetaData(migrate_engine) instances = Table('instances', meta, autoload=True) sysmeta = Table('instance_system_metadata', meta, autoload=True) count = select([func.count()]).select_from(sysmeta).where( and_(instances.c.uuid == sysmeta.c.instance_uuid, sysmeta.c.key == 'instance_type_id', sysmeta.c.deleted != sysmeta.c.id, instances.c.deleted != instances.c.id)).execute().scalar() if count > 0: msg = _('There are still %(count)i unmigrated flavor records. ' 'Migration cannot continue until all instance flavor ' 'records have been migrated to the new format. Please run ' '`nova-manage db migrate_flavor_data\' first.') % { 'count': count} raise exception.ValidationError(detail=msg) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/292_drop_nova_volumes_tables.py0000664000175000017500000000415700000000000030655 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from migrate import ForeignKeyConstraint from sqlalchemy.engine import reflection from sqlalchemy import MetaData from sqlalchemy import Table def upgrade(engine): meta = MetaData() meta.bind = engine def _get_columns(source_table, params): columns = set() for column in params: columns.add(source_table.c[column]) return columns def _remove_foreign_key_constraints(engine, meta, table_name): inspector = reflection.Inspector.from_engine(engine) for fk in inspector.get_foreign_keys(table_name): source_table = Table(table_name, meta, autoload=True) target_table = Table(fk['referred_table'], meta, autoload=True) fkey = ForeignKeyConstraint( columns=_get_columns(source_table, fk['constrained_columns']), refcolumns=_get_columns(target_table, fk['referred_columns']), name=fk['name']) fkey.drop() def _drop_table_and_indexes(meta, table_name): table = Table(table_name, meta, autoload=True) for index in table.indexes: index.drop() table.drop() # Drop the `iscsi_targets` and `volumes` tables They were used with # nova-volumes, which is deprecated and removed, but the related # tables are still created. table_names = ('iscsi_targets', 'shadow_iscsi_targets', 'volumes', 'shadow_volumes') for table_name in table_names: _remove_foreign_key_constraints(engine, meta, table_name) _drop_table_and_indexes(meta, table_name) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/293_add_migration_type.py0000664000175000017500000000300500000000000027414 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
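# The migration below adds its new columns inside hasattr() guards so the
# script stays safe to re-run against a partially upgraded schema. A condensed
# sketch of that idempotent add-column pattern follows; the 'example_notes'
# table and its 'kind' column are hypothetical and not part of any real
# migration.
from sqlalchemy import Column, Enum, MetaData, Table


def _add_enum_column_sketch(migrate_engine):
    """Add a nullable enum column only if it is not already present."""
    meta = MetaData(bind=migrate_engine)
    table = Table('example_notes', meta, autoload=True)
    kind_enum = Enum('info', 'warning', metadata=meta, name='note_kind')
    if not hasattr(table.c, 'kind'):
        # Create the backing type first; on backends without a separate enum
        # type this is effectively a no-op.
        kind_enum.create()
        table.create_column(Column('kind', kind_enum, nullable=True))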
from sqlalchemy import MetaData, Column, Table from sqlalchemy import Enum, Boolean def upgrade(migrate_engine): meta = MetaData(bind=migrate_engine) migrations = Table('migrations', meta, autoload=True) shadow_migrations = Table('shadow_migrations', meta, autoload=True) enum = Enum('migration', 'resize', 'live-migration', 'evacuation', metadata=meta, name='migration_type') enum.create() migration_type = Column('migration_type', enum, nullable=True) if not hasattr(migrations.c, 'migration_type'): migrations.create_column(migration_type) if not hasattr(shadow_migrations.c, 'migration_type'): shadow_migrations.create_column(migration_type.copy()) hidden = Column('hidden', Boolean, default=False) if not hasattr(migrations.c, 'hidden'): migrations.create_column(hidden) if not hasattr(shadow_migrations.c, 'hidden'): shadow_migrations.create_column(hidden.copy()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/294_add_service_heartbeat.py0000664000175000017500000000204500000000000030045 0ustar00zuulzuul00000000000000# Copyright (c) 2015 Wind River Systems Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import MetaData, Table, Column, DateTime BASE_TABLE_NAME = 'services' NEW_COLUMN_NAME = 'last_seen_up' def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine for prefix in ('', 'shadow_'): table = Table(prefix + BASE_TABLE_NAME, meta, autoload=True) new_column = Column(NEW_COLUMN_NAME, DateTime, nullable=True) if not hasattr(table.c, NEW_COLUMN_NAME): table.create_column(new_column) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/295_add_virtual_interfaces_uuid_index.py0000664000175000017500000000264500000000000032503 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
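# The migration below walks table.indexes by hand so it only creates the new
# index when no equivalent one exists yet. Migrations 302 and 318 in this same
# series reach the same goal through the oslo_db helpers; a minimal sketch of
# that variant is shown here, against a hypothetical 'example_notes' table.
from oslo_db.sqlalchemy import utils


def _ensure_index_sketch(migrate_engine):
    """Create example_notes_uuid_idx only if it is missing."""
    if not utils.index_exists(migrate_engine, 'example_notes',
                              'example_notes_uuid_idx'):
        utils.add_index(migrate_engine, 'example_notes',
                        'example_notes_uuid_idx', ['uuid'])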
from oslo_log import log as logging from sqlalchemy import MetaData, Table, Index LOG = logging.getLogger(__name__) INDEX_COLUMNS = ['uuid'] INDEX_NAME = 'virtual_interfaces_uuid_idx' def _get_table_index(migrate_engine): meta = MetaData() meta.bind = migrate_engine table = Table('virtual_interfaces', meta, autoload=True) for idx in table.indexes: if idx.columns.keys() == INDEX_COLUMNS: break else: idx = None return meta, table, idx def upgrade(migrate_engine): meta, table, index = _get_table_index(migrate_engine) if index: LOG.info('Skipped adding %s because an equivalent index' ' already exists.', INDEX_NAME) return columns = [getattr(table.c, col_name) for col_name in INDEX_COLUMNS] index = Index(INDEX_NAME, *columns) index.create(migrate_engine) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/296_add_missing_db2_fkeys.py0000664000175000017500000000135500000000000027774 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # NOTE(sdague): this was a db2 specific migration that was removed # when we removed db2 support from tree. def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/297_add_forced_down_for_services.py0000664000175000017500000000201000000000000031423 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import Boolean, Column, MetaData, Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine services = Table('services', meta, autoload=True) shadow_services = Table('shadow_services', meta, autoload=True) services.create_column(Column('forced_down', Boolean, default=False)) shadow_services.create_column(Column('forced_down', Boolean, default=False)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/298_mysql_extra_specs_binary_collation.py0000664000175000017500000000154500000000000032743 0ustar00zuulzuul00000000000000# Copyright 2015 Cisco Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. def upgrade(migrate_engine): if migrate_engine.name == "mysql": migrate_engine.execute("ALTER TABLE instance_type_extra_specs " "CONVERT TO CHARACTER SET utf8 " "COLLATE utf8_bin;") ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/299_service_version_number.py0000664000175000017500000000177000000000000030344 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import Integer, Column, MetaData, Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine services = Table('services', meta, autoload=True) shadow_services = Table('shadow_services', meta, autoload=True) services.create_column(Column('version', Integer, default=0)) shadow_services.create_column(Column('version', Integer, default=0)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/300_migration_context.py0000664000175000017500000000213500000000000027277 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import Table from sqlalchemy import Text BASE_TABLE_NAME = 'instance_extra' NEW_COLUMN_NAME = 'migration_context' def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine for prefix in ('', 'shadow_'): table = Table(prefix + BASE_TABLE_NAME, meta, autoload=True) new_column = Column(NEW_COLUMN_NAME, Text, nullable=True) if not hasattr(table.c, NEW_COLUMN_NAME): table.create_column(new_column) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/301_add_cpu_and_ram_ratios_for_compute_nodes.py0000664000175000017500000000223100000000000033773 0ustar00zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import Float, Column, MetaData, Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine cn = Table('compute_nodes', meta, autoload=True) shadow_cn = Table('shadow_compute_nodes', meta, autoload=True) cn.create_column(Column('ram_allocation_ratio', Float, nullable=True)) cn.create_column(Column('cpu_allocation_ratio', Float, nullable=True)) shadow_cn.create_column(Column('ram_allocation_ratio', Float)) shadow_cn.create_column(Column('cpu_allocation_ratio', Float)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/302_pgsql_add_instance_system_metadata_index.py0000664000175000017500000000245200000000000034023 0ustar00zuulzuul00000000000000# Copyright 2015 Huawei. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_db.sqlalchemy import utils INDEX_COLUMNS = ['instance_uuid'] # using the index name same with mysql INDEX_NAME = 'instance_uuid' SYS_META_TABLE_NAME = 'instance_system_metadata' def upgrade(migrate_engine): """Add instance_system_metadata indexes missing on PostgreSQL and other DB. """ # This index was already added by migration 216 for MySQL if migrate_engine.name != 'mysql': # Adds index for PostgreSQL and other DB if not utils.index_exists(migrate_engine, SYS_META_TABLE_NAME, INDEX_NAME): utils.add_index(migrate_engine, SYS_META_TABLE_NAME, INDEX_NAME, INDEX_COLUMNS) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/303_placeholder.py0000664000175000017500000000154000000000000026026 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Liberty backports. # Do not use this number for new Mitaka work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/304_placeholder.py0000664000175000017500000000154000000000000026027 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Liberty backports. # Do not use this number for new Mitaka work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/305_placeholder.py0000664000175000017500000000154000000000000026030 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Liberty backports. # Do not use this number for new Mitaka work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/306_placeholder.py0000664000175000017500000000154000000000000026031 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Liberty backports. # Do not use this number for new Mitaka work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/307_placeholder.py0000664000175000017500000000154000000000000026032 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Liberty backports. # Do not use this number for new Mitaka work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/308_placeholder.py0000664000175000017500000000154000000000000026033 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Liberty backports. # Do not use this number for new Mitaka work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/309_placeholder.py0000664000175000017500000000154000000000000026034 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Liberty backports. # Do not use this number for new Mitaka work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/310_placeholder.py0000664000175000017500000000154000000000000026024 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Liberty backports. # Do not use this number for new Mitaka work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/311_placeholder.py0000664000175000017500000000154000000000000026025 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Liberty backports. # Do not use this number for new Mitaka work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/312_placeholder.py0000664000175000017500000000154000000000000026026 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Liberty backports. # Do not use this number for new Mitaka work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/313_add_parent_id_column.py0000664000175000017500000000314700000000000027704 0ustar00zuulzuul00000000000000# Copyright 2015 Red Hat Inc # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # See blueprint backportable-db-migrations-icehouse # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html from sqlalchemy import MetaData, Table, Column, String, Index def upgrade(migrate_engine): meta = MetaData(bind=migrate_engine) # Add a new column to store PCI device parent address pci_devices = Table('pci_devices', meta, autoload=True) shadow_pci_devices = Table('shadow_pci_devices', meta, autoload=True) parent_addr = Column('parent_addr', String(12), nullable=True) if not hasattr(pci_devices.c, 'parent_addr'): pci_devices.create_column(parent_addr) if not hasattr(shadow_pci_devices.c, 'parent_addr'): shadow_pci_devices.create_column(parent_addr.copy()) # Create index parent_index = Index('ix_pci_devices_compute_node_id_parent_addr_deleted', pci_devices.c.compute_node_id, pci_devices.c.parent_addr, pci_devices.c.deleted) parent_index.create(migrate_engine) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/314_add_resource_provider_tables.py0000664000175000017500000000620700000000000031456 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
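# Migration 313 above builds its composite index by passing the reflected
# columns straight to Index() and calling create() on the engine. Later index
# migrations in this series (for example 319) derive the columns from a plain
# list of names instead; a small sketch of that variant, using a hypothetical
# 'example_notes' table:
from sqlalchemy import Index, MetaData, Table


def _composite_index_sketch(migrate_engine):
    """Create a composite index from a list of column names."""
    meta = MetaData()
    meta.bind = migrate_engine
    table = Table('example_notes', meta, autoload=True)
    columns = [getattr(table.c, name) for name in ('deleted', 'created_at')]
    index = Index('example_notes_deleted_created_at_idx', *columns)
    index.create(migrate_engine)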
"""Database migrations for resource-providers.""" from migrate import UniqueConstraint from sqlalchemy import Column from sqlalchemy import Float from sqlalchemy import Index from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine resource_providers = Table( 'resource_providers', meta, Column('id', Integer, primary_key=True, nullable=False), Column('uuid', String(36), nullable=False), UniqueConstraint('uuid', name='uniq_resource_providers0uuid'), mysql_engine='InnoDB', mysql_charset='latin1' ) Index('resource_providers_uuid_idx', resource_providers.c.uuid) inventories = Table( 'inventories', meta, Column('id', Integer, primary_key=True, nullable=False), Column('resource_provider_id', Integer, nullable=False), Column('resource_class_id', Integer, nullable=False), Column('total', Integer, nullable=False), Column('reserved', Integer, nullable=False), Column('min_unit', Integer, nullable=False), Column('max_unit', Integer, nullable=False), Column('step_size', Integer, nullable=False), Column('allocation_ratio', Float, nullable=False), mysql_engine='InnoDB', mysql_charset='latin1' ) Index('inventories_resource_provider_id_idx', inventories.c.resource_provider_id) Index('inventories_resource_class_id_idx', inventories.c.resource_class_id) allocations = Table( 'allocations', meta, Column('id', Integer, primary_key=True, nullable=False), Column('resource_provider_id', Integer, nullable=False), Column('consumer_id', String(36), nullable=False), Column('resource_class_id', Integer, nullable=False), Column('used', Integer, nullable=False), mysql_engine='InnoDB', mysql_charset='latin1' ) Index('allocations_resource_provider_class_id_idx', allocations.c.resource_provider_id, allocations.c.resource_class_id) Index('allocations_consumer_id_idx', allocations.c.consumer_id) Index('allocations_resource_class_id_idx', allocations.c.resource_class_id) for table in [resource_providers, inventories, allocations]: table.create(checkfirst=True) for table_name in ('', 'shadow_'): uuid_column = Column('uuid', String(36)) compute_nodes = Table('%scompute_nodes' % table_name, meta) compute_nodes.create_column(uuid_column) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/315_add_migration_progresss_detail.py0000664000175000017500000000231600000000000032003 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from sqlalchemy import BigInteger from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData(bind=migrate_engine) migrations = Table('migrations', meta, autoload=True) shadow_migrations = Table('shadow_migrations', meta, autoload=True) columns = ['memory_total', 'memory_processed', 'memory_remaining', 'disk_total', 'disk_processed', 'disk_remaining'] for column_name in columns: column = Column(column_name, BigInteger, nullable=True) migrations.create_column(column) shadow_migrations.create_column(column.copy()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/316_add_disk_ratio_for_compute_nodes.py0000664000175000017500000000201500000000000032300 0ustar00zuulzuul00000000000000# Copyright (c) 2016 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import Float, Column, MetaData, Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine cn = Table('compute_nodes', meta, autoload=True) shadow_cn = Table('shadow_compute_nodes', meta, autoload=True) cn.create_column(Column('disk_allocation_ratio', Float, nullable=True)) shadow_cn.create_column(Column('disk_allocation_ratio', Float)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/317_add_aggregate_uuid.py0000664000175000017500000000233300000000000027336 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
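# Migration 315 above adds the same columns to both migrations and
# shadow_migrations, handing the shadow table column.copy() rather than the
# original object, because a SQLAlchemy Column can only be attached to a
# single Table. A minimal sketch of that main/shadow pattern, with a
# hypothetical 'example_notes' table and 'annotations' column:
from sqlalchemy import Column, MetaData, Table, Text


def _add_column_with_shadow_sketch(migrate_engine):
    """Add the same nullable column to a table and its shadow twin."""
    meta = MetaData()
    meta.bind = migrate_engine
    new_column = Column('annotations', Text, nullable=True)
    for prefix in ('', 'shadow_'):
        table = Table(prefix + 'example_notes', meta, autoload=True)
        if not hasattr(table.c, 'annotations'):
            # copy() gives each table its own Column object; reusing the same
            # Column would fail once it is bound to the first table.
            table.create_column(new_column.copy())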
"""Database migrations for resource-providers.""" from sqlalchemy import Column from sqlalchemy import Index from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine for table_prefix in ('', 'shadow_'): uuid_column = Column('uuid', String(36)) aggregates = Table('%saggregates' % table_prefix, meta) if not hasattr(aggregates.c, 'uuid'): aggregates.create_column(uuid_column) if not table_prefix: index = Index('aggregate_uuid_idx', aggregates.c.uuid) index.create() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/318_resource_provider_name_aggregates.py0000664000175000017500000000725500000000000032515 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from migrate import UniqueConstraint from oslo_db.sqlalchemy import utils from sqlalchemy import Column from sqlalchemy import DDL from sqlalchemy import Index from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import Table from sqlalchemy import Unicode def upgrade(migrate_engine): meta = MetaData(bind=migrate_engine) resource_providers = Table('resource_providers', meta, autoload=True) name = Column('name', Unicode(200), nullable=True) generation = Column('generation', Integer, default=0) can_host = Column('can_host', Integer, default=0) if not hasattr(resource_providers.c, 'name'): # NOTE(cdent): The resource_providers table is defined as # latin1 to be more efficient. Now we need the name column # to be UTF8. First create the column, then modify it, # otherwise the declarative handling in sqlalchemy gets # confused. 
resource_providers.create_column(name) if migrate_engine.name == 'mysql': name_col_ddl = DDL( "ALTER TABLE resource_providers MODIFY name " "VARCHAR(200) CHARACTER SET utf8") conn = migrate_engine.connect() conn.execute(name_col_ddl) uc = UniqueConstraint('name', table=resource_providers, name='uniq_resource_providers0name') uc.create() utils.add_index(migrate_engine, 'resource_providers', 'resource_providers_name_idx', ['name']) if not hasattr(resource_providers.c, 'generation'): resource_providers.create_column(generation) if not hasattr(resource_providers.c, 'can_host'): resource_providers.create_column(can_host) resource_provider_aggregates = Table( 'resource_provider_aggregates', meta, Column('resource_provider_id', Integer, primary_key=True, nullable=False), Column('aggregate_id', Integer, primary_key=True, nullable=False), mysql_engine='InnoDB', mysql_charset='latin1' ) Index('resource_provider_aggregates_aggregate_id_idx', resource_provider_aggregates.c.aggregate_id) resource_provider_aggregates.create(checkfirst=True) utils.add_index(migrate_engine, 'allocations', 'allocations_resource_provider_class_used_idx', ['resource_provider_id', 'resource_class_id', 'used']) utils.drop_index(migrate_engine, 'allocations', 'allocations_resource_provider_class_id_idx') # Add a unique constraint so that any resource provider can have # only one inventory for any given resource class. inventories = Table('inventories', meta, autoload=True) inventories_uc = UniqueConstraint( 'resource_provider_id', 'resource_class_id', table=inventories, name='uniq_inventories0resource_provider_resource_class') inventories_uc.create() utils.add_index(migrate_engine, 'inventories', 'inventories_resource_provider_resource_class_idx', ['resource_provider_id', 'resource_class_id']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/319_add_instances_deleted_created_at_index.py0000664000175000017500000000271700000000000033411 0ustar00zuulzuul00000000000000# Copyright 2016 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
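# Migrations 298 and 318 above issue raw DDL only when the backend is MySQL,
# since their CONVERT/MODIFY character-set statements are MySQL-specific and
# have no equivalent on SQLite or PostgreSQL. A minimal sketch of that guard,
# against a hypothetical 'example_notes' table:
def _mysql_only_ddl_sketch(migrate_engine):
    """Force a binary utf8 collation, but only on MySQL backends."""
    if migrate_engine.name != 'mysql':
        return
    migrate_engine.execute("ALTER TABLE example_notes "
                           "CONVERT TO CHARACTER SET utf8 "
                           "COLLATE utf8_bin")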
from oslo_log import log as logging from sqlalchemy import MetaData, Table, Index LOG = logging.getLogger(__name__) INDEX_COLUMNS = ['deleted', 'created_at'] INDEX_NAME = 'instances_deleted_created_at_idx' def _get_table_index(migrate_engine): meta = MetaData() meta.bind = migrate_engine table = Table('instances', meta, autoload=True) for idx in table.indexes: if idx.columns.keys() == INDEX_COLUMNS: break else: idx = None return meta, table, idx def upgrade(migrate_engine): meta, table, index = _get_table_index(migrate_engine) if index: LOG.info('Skipped adding %s because an equivalent index' ' already exists.', INDEX_NAME) return columns = [getattr(table.c, col_name) for col_name in INDEX_COLUMNS] index = Index(INDEX_NAME, *columns) index.create(migrate_engine) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/320_placeholder.py0000664000175000017500000000152200000000000026025 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/321_placeholder.py0000664000175000017500000000152200000000000026026 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/322_placeholder.py0000664000175000017500000000152200000000000026027 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/323_placeholder.py0000664000175000017500000000152200000000000026030 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/324_placeholder.py0000664000175000017500000000152200000000000026031 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/325_placeholder.py0000664000175000017500000000152200000000000026032 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/326_placeholder.py0000664000175000017500000000152200000000000026033 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/327_placeholder.py0000664000175000017500000000152200000000000026034 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/328_placeholder.py0000664000175000017500000000152200000000000026035 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/329_placeholder.py0000664000175000017500000000152200000000000026036 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/330_enforce_mitaka_online_migrations.py0000664000175000017500000000420600000000000032315 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import MetaData, Table, and_, func, select from nova import exception from nova.i18n import _ WARNING_MSG = _('There are still %(count)i unmigrated records in ' 'the %(table)s table. Migration cannot continue ' 'until all records have been migrated.') def upgrade(migrate_engine): meta = MetaData(migrate_engine) compute_nodes = Table('compute_nodes', meta, autoload=True) aggregates = Table('aggregates', meta, autoload=True) for table in (compute_nodes, aggregates): count = select([func.count()]).select_from(table).where(and_( table.c.deleted == 0, table.c.uuid == None)).execute().scalar() # NOQA if count > 0: msg = WARNING_MSG % { 'count': count, 'table': table.name, } raise exception.ValidationError(detail=msg) pci_devices = Table('pci_devices', meta, autoload=True) # Ensure that all non-deleted PCI device records have a populated # parent address. Note that we test directly against the 'type-VF' # enum value to prevent issues with this migration going forward # if the definition is altered. 
count = select([func.count()]).select_from(pci_devices).where(and_( pci_devices.c.deleted == 0, pci_devices.c.parent_addr == None, pci_devices.c.dev_type == 'type-VF')).execute().scalar() # NOQA if count > 0: msg = WARNING_MSG % { 'count': count, 'table': pci_devices.name, } raise exception.ValidationError(detail=msg) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/331_add_tag_to_vifs_and_bdm.py0000664000175000017500000000315700000000000030333 0ustar00zuulzuul00000000000000# Copyright (C) 2016, Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_db.sqlalchemy import utils from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import String from nova.db.sqlalchemy import api def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine shadow_prefix = api._SHADOW_TABLE_PREFIX tag = Column('tag', String(255)) vifs = utils.get_table(migrate_engine, 'virtual_interfaces') if not hasattr(vifs.c, 'tag'): vifs.create_column(tag.copy()) shadow_vifs = utils.get_table(migrate_engine, '%svirtual_interfaces' % shadow_prefix) if not hasattr(shadow_vifs.c, 'tag'): shadow_vifs.create_column(tag.copy()) bdm = utils.get_table(migrate_engine, 'block_device_mapping') if not hasattr(bdm.c, 'tag'): bdm.create_column(tag.copy()) shadow_bdm = utils.get_table(migrate_engine, '%sblock_device_mapping' % shadow_prefix) if not hasattr(shadow_bdm.c, 'tag'): shadow_bdm.create_column(tag.copy()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/332_keypair_in_extra.py0000664000175000017500000000201100000000000027075 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
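
Migration 331 above (and several later ones, such as 332, 334 and 358) follows a single pattern: reflect the table, add the new nullable column only when it is not already present, and repeat the step for the shadow_* copy of the table. A condensed sketch of that pattern, with a hypothetical helper name and an example column taken from 331, is:

from oslo_db.sqlalchemy import utils
from sqlalchemy import Column, String


def add_column_if_missing(migrate_engine, table_name, column):
    # hasattr() on the reflected column collection makes the migration a
    # no-op when the column already exists, so re-runs are harmless.
    table = utils.get_table(migrate_engine, table_name)
    if not hasattr(table.c, column.name):
        table.create_column(column)


# Example usage (illustrative only), mirroring migration 331: add a 'tag'
# column to a table and its shadow copy.
#   for name in ('virtual_interfaces', 'shadow_virtual_interfaces'):
#       add_column_if_missing(migrate_engine, name,
#                             Column('tag', String(255)))
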
from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import Table from sqlalchemy import Text def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine for prefix in ('', 'shadow_'): table = Table(prefix + 'instance_extra', meta, autoload=True) new_column = Column('keypairs', Text, nullable=True) if not hasattr(table.c, 'keypairs'): table.create_column(new_column) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/333_add_console_auth_tokens_table.py0000664000175000017500000000376200000000000031604 0ustar00zuulzuul00000000000000# Copyright 2016 Intel Corporation # Copyright 2016 Hewlett Packard Enterprise Development Company LP # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from migrate import UniqueConstraint from sqlalchemy import Column from sqlalchemy import DateTime from sqlalchemy import Index from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine auth_tokens = Table('console_auth_tokens', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('id', Integer, primary_key=True, nullable=False), Column('token_hash', String(255), nullable=False), Column('console_type', String(255), nullable=False), Column('host', String(255), nullable=False), Column('port', Integer, nullable=False), Column('internal_access_path', String(255)), Column('instance_uuid', String(36), nullable=False), Column('expires', Integer, nullable=False), Index('console_auth_tokens_instance_uuid_idx', 'instance_uuid'), Index('console_auth_tokens_host_expires_idx', 'host', 'expires'), Index('console_auth_tokens_token_hash_idx', 'token_hash'), UniqueConstraint('token_hash', name='uniq_console_auth_tokens0token_hash'), mysql_engine='InnoDB', mysql_charset='utf8' ) auth_tokens.create(checkfirst=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/334_device_metadata_in_extra.py0000664000175000017500000000202700000000000030541 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
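
Migration 333 above is idempotent for a different reason: Table.create(checkfirst=True) issues the CREATE TABLE only when the table is absent, so the new console_auth_tokens table can be created without an explicit existence check. A stripped-down sketch of the same idea, using an illustrative table name that is not part of the Nova data model:

from sqlalchemy import Column, DateTime, Integer, MetaData, String, Table


def create_table_if_missing(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine
    # Illustrative schema only.
    example = Table('example_tokens', meta,
                    Column('created_at', DateTime),
                    Column('id', Integer, primary_key=True, nullable=False),
                    Column('token_hash', String(255), nullable=False),
                    mysql_engine='InnoDB',
                    mysql_charset='utf8')
    # checkfirst=True silently does nothing if the table already exists.
    example.create(checkfirst=True)
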
from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import Table from sqlalchemy import Text def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine for prefix in ('', 'shadow_'): table = Table(prefix + 'instance_extra', meta, autoload=True) new_column = Column('device_metadata', Text, nullable=True) if not hasattr(table.c, 'device_metadata'): table.create_column(new_column) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/335_placeholder.py0000664000175000017500000000152200000000000026033 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/336_placeholder.py0000664000175000017500000000152200000000000026034 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/337_placeholder.py0000664000175000017500000000152200000000000026035 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/338_placeholder.py0000664000175000017500000000152200000000000026036 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/339_placeholder.py0000664000175000017500000000152200000000000026037 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/340_placeholder.py0000664000175000017500000000152200000000000026027 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/341_placeholder.py0000664000175000017500000000152200000000000026030 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/342_placeholder.py0000664000175000017500000000152200000000000026031 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/343_placeholder.py0000664000175000017500000000152200000000000026032 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/344_placeholder.py0000664000175000017500000000152200000000000026033 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/345_require_online_migration_completion.py0000664000175000017500000000527400000000000033104 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html from sqlalchemy import MetaData, Table, func, select from nova import exception from nova.i18n import _ def upgrade(migrate_engine): meta = MetaData(migrate_engine) instance_types = Table('instance_types', meta, autoload=True) keypairs = Table('key_pairs', meta, autoload=True) aggregates = Table('aggregates', meta, autoload=True) instance_groups = Table('instance_groups', meta, autoload=True) base_msg = _('Migration cannot continue until all these have ' 'been migrated to the api database. Please run ' '`nova-manage db online_data_migrations\' on Newton ' 'code before continuing.') count = select([func.count()]).select_from(instance_types).where( instance_types.c.deleted == 0).scalar() if count: msg = (base_msg + _(' There are still %(count)i unmigrated flavors. ') % { 'count': count}) raise exception.ValidationError(detail=msg) count = select([func.count()]).select_from(keypairs).where( keypairs.c.deleted == 0).scalar() if count: msg = (base_msg + _(' There are still %(count)i unmigrated keypairs. ') % { 'count': count}) raise exception.ValidationError(detail=msg) count = select([func.count()]).select_from(aggregates).where( aggregates.c.deleted == 0).scalar() if count: msg = (base_msg + _(' There are still %(count)i unmigrated aggregates. 
') % { 'count': count}) raise exception.ValidationError(detail=msg) count = select([func.count()]).select_from(instance_groups).where( instance_groups.c.deleted == 0).scalar() if count: msg = (base_msg + _(' There are still %(count)i unmigrated instance groups. ') % { 'count': count}) raise exception.ValidationError(detail=msg) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/346_remove_scheduled_at_column.py0000664000175000017500000000214000000000000031126 0ustar00zuulzuul00000000000000# Copyright 2016 Intel Corporation # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import MetaData, Table def upgrade(migrate_engine): meta = MetaData(bind=migrate_engine) column_name = 'scheduled_at' # Remove scheduled_at column from instances table instances = Table('instances', meta, autoload=True) shadow_instances = Table('shadow_instances', meta, autoload=True) if hasattr(instances.c, column_name): instances.drop_column(instances.c[column_name]) if hasattr(shadow_instances.c, column_name): shadow_instances.drop_column(shadow_instances.c[column_name]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/347_add_updated_at_index.py0000664000175000017500000000403700000000000027671 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
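
Migrations 330 and 345 are "blocker" migrations: rather than changing the schema, they count rows that still need an online data migration and raise ValidationError so the schema upgrade stops until `nova-manage db online_data_migrations` has been run. The shape of that check, reduced to a hypothetical helper that counts live (deleted == 0) rows in a single table:

from sqlalchemy import MetaData, Table, func, select

from nova import exception


def block_on_unmigrated_rows(migrate_engine, table_name):
    # Any remaining live rows mean the operator has not yet finished the
    # required online data migrations, so refuse to continue.
    meta = MetaData(migrate_engine)
    table = Table(table_name, meta, autoload=True)
    count = select([func.count()]).select_from(table).where(
        table.c.deleted == 0).execute().scalar()
    if count:
        raise exception.ValidationError(
            detail='%i unmigrated rows remain in %s' % (count, table_name))
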
from oslo_log import log as logging from sqlalchemy import MetaData, Table, Index from sqlalchemy.engine import reflection LOG = logging.getLogger(__name__) INDEX_COLUMNS_1 = ['project_id'] INDEX_NAME_1 = 'instances_project_id_idx' INDEX_COLUMNS_2 = ['updated_at', 'project_id'] INDEX_NAME_2 = 'instances_updated_at_project_id_idx' TABLE_NAME = 'instances' def _get_table_index(migrate_engine, table_name, index_columns): inspector = reflection.Inspector.from_engine(migrate_engine) for idx in inspector.get_indexes(table_name): if idx['column_names'] == index_columns: break else: idx = None return idx def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine table = Table(TABLE_NAME, meta, autoload=True) if _get_table_index(migrate_engine, TABLE_NAME, INDEX_COLUMNS_1): LOG.info('Skipped adding %s because an equivalent index' ' already exists.', INDEX_NAME_1) else: columns = [getattr(table.c, col_name) for col_name in INDEX_COLUMNS_1] index = Index(INDEX_NAME_1, *columns) index.create(migrate_engine) if _get_table_index(migrate_engine, TABLE_NAME, INDEX_COLUMNS_2): LOG.info('Skipped adding %s because an equivalent index' ' already exists.', INDEX_NAME_2) else: columns = [getattr(table.c, col_name) for col_name in INDEX_COLUMNS_2] index = Index(INDEX_NAME_2, *columns) index.create(migrate_engine) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/348_placeholder.py0000664000175000017500000000152200000000000026037 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/349_placeholder.py0000664000175000017500000000152200000000000026040 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
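
Migration 347 above guards its index creation with the reflection Inspector instead of table.indexes; the Inspector returns the indexes recorded in the database catalogue as plain dictionaries (name and column_names), so the duplicate check does not depend on reflecting the whole Table first. The same guard as a small, hypothetical helper:

from sqlalchemy import Index, MetaData, Table
from sqlalchemy.engine import reflection


def ensure_index_with_inspector(migrate_engine, table_name, index_name,
                                column_names):
    inspector = reflection.Inspector.from_engine(migrate_engine)
    # Skip creation when an existing index already covers these columns.
    for idx in inspector.get_indexes(table_name):
        if idx['column_names'] == column_names:
            return
    meta = MetaData()
    meta.bind = migrate_engine
    table = Table(table_name, meta, autoload=True)
    columns = [getattr(table.c, name) for name in column_names]
    Index(index_name, *columns).create(migrate_engine)
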
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/350_placeholder.py0000664000175000017500000000152200000000000026030 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/351_placeholder.py0000664000175000017500000000152200000000000026031 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/352_placeholder.py0000664000175000017500000000152200000000000026032 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/353_placeholder.py0000664000175000017500000000152200000000000026033 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/354_placeholder.py0000664000175000017500000000152200000000000026034 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/355_placeholder.py0000664000175000017500000000152200000000000026035 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/356_placeholder.py0000664000175000017500000000152200000000000026036 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/357_placeholder.py0000664000175000017500000000152200000000000026037 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/358_bdm_attachment_id.py0000664000175000017500000000204100000000000027201 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine for prefix in ('', 'shadow_'): table = Table(prefix + 'block_device_mapping', meta, autoload=True) new_column = Column('attachment_id', String(36), nullable=True) if not hasattr(table.c, 'attachment_id'): table.create_column(new_column) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/359_add_service_uuid.py0000664000175000017500000000252600000000000027062 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import Column from sqlalchemy.engine.reflection import Inspector from sqlalchemy import Index from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData(bind=migrate_engine) for prefix in ('', 'shadow_'): services = Table(prefix + 'services', meta, autoload=True) if not hasattr(services.c, 'uuid'): services.create_column(Column('uuid', String(36), nullable=True)) uuid_index_name = 'services_uuid_idx' indexes = Inspector(migrate_engine).get_indexes('services') if uuid_index_name not in (i['name'] for i in indexes): services = Table('services', meta, autoload=True) Index(uuid_index_name, services.c.uuid, unique=True).create() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/360_add_compute_node_mapped.py0000664000175000017500000000177200000000000030375 0ustar00zuulzuul00000000000000# Copyright 2017 Red Hat, Inc. # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import MetaData, Table, Column, Integer def upgrade(migrate_engine): meta = MetaData(bind=migrate_engine) for prefix in ('', 'shadow_'): compute_nodes = Table('%scompute_nodes' % prefix, meta, autoload=True) mapped = Column('mapped', Integer, default=0, nullable=True) if not hasattr(compute_nodes.c, 'mapped'): compute_nodes.create_column(mapped) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/361_add_compute_nodes_uuid_index.py0000664000175000017500000000253000000000000031441 0ustar00zuulzuul00000000000000# Copyright 2017 Huawei Technologies Co.,LTD. 
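
Migration 359 above combines both idempotency techniques: the uuid column is added only when hasattr() says it is missing, and the unique services_uuid_idx index is added only when its name is absent from Inspector.get_indexes(). A compact sketch of that name-based guard; the helper name and any table/index names passed to it are illustrative:

from sqlalchemy import Column, Index, MetaData, String, Table
from sqlalchemy.engine.reflection import Inspector


def add_unique_uuid_index(migrate_engine, table_name, index_name):
    meta = MetaData(bind=migrate_engine)
    table = Table(table_name, meta, autoload=True)
    # Ensure the column exists before indexing it.
    if not hasattr(table.c, 'uuid'):
        table.create_column(Column('uuid', String(36), nullable=True))
    # Compare by index name rather than by column list.
    indexes = Inspector(migrate_engine).get_indexes(table_name)
    if index_name not in (i['name'] for i in indexes):
        Index(index_name, table.c.uuid, unique=True).create()
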
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from sqlalchemy import MetaData, Table, Index LOG = logging.getLogger(__name__) def _get_table_index(migrate_engine): meta = MetaData() meta.bind = migrate_engine table = Table('compute_nodes', meta, autoload=True) for idx in table.indexes: if idx.columns.keys() == ['uuid']: break else: idx = None return table, idx def upgrade(migrate_engine): table, index = _get_table_index(migrate_engine) if index: LOG.info('Skipped adding compute_nodes_uuid_idx because an ' 'equivalent index already exists.') return index = Index('compute_nodes_uuid_idx', table.c.uuid, unique=True) index.create(migrate_engine) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/362_add_pci_devices_uuid.py0000664000175000017500000000243500000000000027670 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_db.sqlalchemy import utils from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import String from nova.db.sqlalchemy import api def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine shadow_prefix = api._SHADOW_TABLE_PREFIX uuid_col = Column('uuid', String(36)) pci_devices = utils.get_table(migrate_engine, 'pci_devices') if not hasattr(pci_devices.c, 'uuid'): pci_devices.create_column(uuid_col.copy()) shadow_pci_devices = utils.get_table(migrate_engine, shadow_prefix + 'pci_devices') if not hasattr(shadow_pci_devices.c, 'uuid'): shadow_pci_devices.create_column(uuid_col.copy()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/363_placeholder.py0000664000175000017500000000152200000000000026034 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/364_placeholder.py0000664000175000017500000000152200000000000026035 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/365_placeholder.py0000664000175000017500000000152200000000000026036 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/366_placeholder.py0000664000175000017500000000152200000000000026037 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/367_placeholder.py0000664000175000017500000000152200000000000026040 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/368_placeholder.py0000664000175000017500000000152200000000000026041 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/369_placeholder.py0000664000175000017500000000152200000000000026042 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/370_placeholder.py0000664000175000017500000000152200000000000026032 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/371_placeholder.py0000664000175000017500000000152200000000000026033 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/372_placeholder.py0000664000175000017500000000152200000000000026034 0ustar00zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/373_migration_uuid.py0000664000175000017500000000214200000000000026571 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import MetaData, Column, Table, Index from sqlalchemy import String def upgrade(migrate_engine): meta = MetaData(bind=migrate_engine) for prefix in ('', 'shadow_'): migrations = Table('%smigrations' % prefix, meta, autoload=True) if not hasattr(migrations.c, 'uuid'): uuid = Column('uuid', String(36)) migrations.create_column(uuid) idx = Index('%smigrations_uuid' % prefix, migrations.c.uuid, unique=True) idx.create(migrate_engine) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/374_bdm_uuid.py0000664000175000017500000000252000000000000025343 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from migrate import UniqueConstraint from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine for prefix in ('', 'shadow_'): table = Table(prefix + 'block_device_mapping', meta, autoload=True) if not hasattr(table.c, 'uuid'): new_column = Column('uuid', String(36), nullable=True) table.create_column(new_column) if prefix == '': # Only add the constraint on the non-shadow table... con = UniqueConstraint('uuid', table=table, name="uniq_block_device_mapping0uuid") con.create(migrate_engine) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/375_add_access_url_to_console_auth_tokens.py0000664000175000017500000000206000000000000033336 0ustar00zuulzuul00000000000000# Copyright 2016 Hewlett Packard Enterprise Development Company LP # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine table = Table('console_auth_tokens', meta, autoload=True) new_column = Column('access_url_base', String(255), nullable=True) if not hasattr(table.c, 'access_url_base'): table.create_column(new_column) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/376_add_console_auth_tokens_index.py0000664000175000017500000000215700000000000031630 0ustar00zuulzuul00000000000000# Copyright 2016 Hewlett Packard Enterprise Development Company LP # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import Index from sqlalchemy import MetaData from sqlalchemy import Table def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine table = Table('console_auth_tokens', meta, autoload=True) for idx in table.indexes: if idx.columns.keys() == ['token_hash', 'instance_uuid']: return idx_name = 'console_auth_tokens_token_hash_instance_uuid_idx' Index(idx_name, table.c.token_hash, table.c.instance_uuid).create() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/377_add_migrations_updated_at_index.py0000664000175000017500000000273500000000000032133 0ustar00zuulzuul00000000000000# Copyright 2017 Huawei Technologies Co.,LTD. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_log import log as logging from sqlalchemy import MetaData, Table, Index LOG = logging.getLogger(__name__) INDEX_COLUMNS = ['updated_at'] INDEX_NAME = 'migrations_updated_at_idx' TABLE_NAME = 'migrations' def _get_table_index(migrate_engine): meta = MetaData() meta.bind = migrate_engine table = Table(TABLE_NAME, meta, autoload=True) for idx in table.indexes: if idx.columns.keys() == INDEX_COLUMNS: break else: idx = None return table, idx def upgrade(migrate_engine): table, index = _get_table_index(migrate_engine) if index: LOG.info('Skipped adding %s because an equivalent index' ' already exists.', INDEX_NAME) return columns = [getattr(table.c, col_name) for col_name in INDEX_COLUMNS] index = Index(INDEX_NAME, *columns) index.create(migrate_engine) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/378_add_instance_actions_updated_at_index.py0000664000175000017500000000301000000000000033267 0ustar00zuulzuul00000000000000# Copyright 2017 Huawei Technologies Co.,LTD. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from sqlalchemy import MetaData, Table, Index LOG = logging.getLogger(__name__) INDEX_COLUMNS = ['instance_uuid', 'updated_at'] INDEX_NAME = 'instance_actions_instance_uuid_updated_at_idx' TABLE_NAME = 'instance_actions' def _get_table_index(migrate_engine): meta = MetaData() meta.bind = migrate_engine table = Table(TABLE_NAME, meta, autoload=True) for idx in table.indexes: if idx.columns.keys() == INDEX_COLUMNS: break else: idx = None return table, idx def upgrade(migrate_engine): table, index = _get_table_index(migrate_engine) if index: LOG.info('Skipped adding %s because an equivalent index' ' already exists.', INDEX_NAME) return columns = [getattr(table.c, col_name) for col_name in INDEX_COLUMNS] index = Index(INDEX_NAME, *columns) index.create(migrate_engine) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/379_placeholder.py0000664000175000017500000000152100000000000026042 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/380_placeholder.py0000664000175000017500000000152100000000000026032 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/381_placeholder.py0000664000175000017500000000152100000000000026033 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/382_placeholder.py0000664000175000017500000000152100000000000026034 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/383_placeholder.py0000664000175000017500000000152100000000000026035 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/384_placeholder.py0000664000175000017500000000152100000000000026036 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/385_placeholder.py0000664000175000017500000000152100000000000026037 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/386_placeholder.py0000664000175000017500000000152100000000000026040 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/387_placeholder.py0000664000175000017500000000152100000000000026041 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/388_placeholder.py0000664000175000017500000000152100000000000026042 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. 
# # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/389_add_aggregate_metadata_index.py0000664000175000017500000000266300000000000031356 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from sqlalchemy import MetaData, Table, Index LOG = logging.getLogger(__name__) INDEX_COLUMNS = ['value'] INDEX_NAME = 'aggregate_metadata_value_idx' TABLE_NAME = 'aggregate_metadata' def _get_table_index(migrate_engine): meta = MetaData() meta.bind = migrate_engine table = Table(TABLE_NAME, meta, autoload=True) for idx in table.indexes: if idx.columns.keys() == INDEX_COLUMNS: break else: idx = None return table, idx def upgrade(migrate_engine): table, index = _get_table_index(migrate_engine) if index: LOG.info('Skipped adding %s because an equivalent index' ' already exists.', INDEX_NAME) return columns = [getattr(table.c, col_name) for col_name in INDEX_COLUMNS] index = Index(INDEX_NAME, *columns) index.create(migrate_engine) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/390_add_trusted_certs.py0000664000175000017500000000213100000000000027251 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import Table from sqlalchemy import Text BASE_TABLE_NAME = 'instance_extra' NEW_COLUMN_NAME = 'trusted_certs' def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine for prefix in ('', 'shadow_'): table = Table(prefix + BASE_TABLE_NAME, meta, autoload=True) new_column = Column(NEW_COLUMN_NAME, Text, nullable=True) if not hasattr(table.c, NEW_COLUMN_NAME): table.create_column(new_column) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/391_add_volume_type_to_bdm.py0000664000175000017500000000214600000000000030262 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import String from sqlalchemy import Table BASE_TABLE_NAME = 'block_device_mapping' NEW_COLUMN_NAME = 'volume_type' def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine for prefix in ('', 'shadow_'): table = Table(prefix + BASE_TABLE_NAME, meta, autoload=True) new_column = Column(NEW_COLUMN_NAME, String(255), nullable=True) if not hasattr(table.c, NEW_COLUMN_NAME): table.create_column(new_column) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/392_placeholder.py0000664000175000017500000000152100000000000026035 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/393_placeholder.py0000664000175000017500000000152100000000000026036 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/394_placeholder.py0000664000175000017500000000152100000000000026037 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/395_placeholder.py0000664000175000017500000000152100000000000026040 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/396_placeholder.py0000664000175000017500000000152100000000000026041 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/397_migrations_cross_cell_move.py0000664000175000017500000000177700000000000031207 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import MetaData, Column, Table from sqlalchemy import Boolean def upgrade(migrate_engine): meta = MetaData(bind=migrate_engine) for prefix in ('', 'shadow_'): migrations = Table('%smigrations' % prefix, meta, autoload=True) if not hasattr(migrations.c, 'cross_cell_move'): cross_cell_move = Column('cross_cell_move', Boolean, default=False) migrations.create_column(cross_cell_move) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/398_add_vpmems.py0000664000175000017500000000212100000000000025675 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import Table from sqlalchemy import Text BASE_TABLE_NAME = 'instance_extra' NEW_COLUMN_NAME = 'vpmems' def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine for prefix in ('', 'shadow_'): table = Table(prefix + BASE_TABLE_NAME, meta, autoload=True) new_column = Column(NEW_COLUMN_NAME, Text, nullable=True) if not hasattr(table.c, NEW_COLUMN_NAME): table.create_column(new_column) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/399_add_instances_hidden.py0000664000175000017500000000235200000000000027677 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import MetaData, Column, Table from sqlalchemy import Boolean def upgrade(migrate_engine): meta = MetaData(bind=migrate_engine) for prefix in ('', 'shadow_'): instances = Table('%sinstances' % prefix, meta, autoload=True) if not hasattr(instances.c, 'hidden'): # NOTE(danms): This column originally included default=False. We # discovered in bug #1862205 that this will attempt to rewrite # the entire instances table with that value, which can time out # for large data sets (and does not even abort). 
hidden = Column('hidden', Boolean) instances.create_column(hidden) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/400_enforce_service_uuid.py0000664000175000017500000000260600000000000027735 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import MetaData, Table, func, null, select from sqlalchemy.sql import and_ from nova import exception from nova.i18n import _ def upgrade(migrate_engine): meta = MetaData(migrate_engine) services = Table('services', meta, autoload=True) # Count non-deleted services where uuid is null. count = select([func.count()]).select_from(services).where(and_( services.c.deleted == 0, services.c.uuid == null())).execute().scalar() if count > 0: msg = _('There are still %(count)i unmigrated records in ' 'the services table. Migration cannot continue ' 'until all records have been migrated. Run the ' '"nova-manage db online_data_migrations" routine.') % { 'count': count} raise exception.ValidationError(detail=msg) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/401_add_user_id_and_project_id_to_migrations.py0000664000175000017500000000212600000000000033770 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import MetaData, Column, Table, String NEW_COLUMNS_NAME = ['user_id', 'project_id'] BASE_TABLE_NAME = 'migrations' def upgrade(migrate_engine): meta = MetaData(bind=migrate_engine) for prefix in ('', 'shadow_'): table = Table(prefix + BASE_TABLE_NAME, meta, autoload=True) for new_column_name in NEW_COLUMNS_NAME: new_column = Column(new_column_name, String(255), nullable=True) if not hasattr(table.c, new_column_name): table.create_column(new_column) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/402_add_resources.py0000664000175000017500000000212400000000000026365 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import Column from sqlalchemy import MetaData from sqlalchemy import Table from sqlalchemy import Text BASE_TABLE_NAME = 'instance_extra' NEW_COLUMN_NAME = 'resources' def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine for prefix in ('', 'shadow_'): table = Table(prefix + BASE_TABLE_NAME, meta, autoload=True) new_column = Column(NEW_COLUMN_NAME, Text, nullable=True) if not hasattr(table.c, NEW_COLUMN_NAME): table.create_column(new_column) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/403_placeholder.py0000664000175000017500000000152100000000000026026 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/404_placeholder.py0000664000175000017500000000152100000000000026027 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/405_placeholder.py0000664000175000017500000000152100000000000026030 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/406_placeholder.py0000664000175000017500000000152100000000000026031 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/407_placeholder.py0000664000175000017500000000152100000000000026032 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migrate_repo/versions/__init__.py0000664000175000017500000000000000000000000024704 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/migration.py0000664000175000017500000002045400000000000020630 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from migrate import exceptions as versioning_exceptions from migrate.versioning import api as versioning_api from migrate.versioning.repository import Repository from oslo_db.sqlalchemy import utils as db_utils from oslo_log import log as logging import sqlalchemy from sqlalchemy.sql import null from nova.db.sqlalchemy import api as db_session from nova import exception from nova.i18n import _ INIT_VERSION = {} INIT_VERSION['main'] = 215 INIT_VERSION['api'] = 0 _REPOSITORY = {} LOG = logging.getLogger(__name__) def get_engine(database='main', context=None): if database == 'main': return db_session.get_engine(context=context) if database == 'api': return db_session.get_api_engine() def db_sync(version=None, database='main', context=None): if version is not None: try: version = int(version) except ValueError: raise exception.NovaException(_("version should be an integer")) current_version = db_version(database, context=context) repository = _find_migrate_repo(database) if version is None or version > current_version: return versioning_api.upgrade(get_engine(database, context=context), repository, version) else: return versioning_api.downgrade(get_engine(database, context=context), repository, version) def db_version(database='main', context=None): repository = _find_migrate_repo(database) # NOTE(mdbooth): This is a crude workaround for races in _db_version. The 2 # races we have seen in practise are: # * versioning_api.db_version() fails because the migrate_version table # doesn't exist, but meta.tables subsequently contains tables because # another thread has already started creating the schema. This results in # the 'Essex' error. # * db_version_control() fails with pymysql.error.InternalError(1050) # (Create table failed) because of a race in sqlalchemy-migrate's # ControlledSchema._create_table_version, which does: # if not table.exists(): table.create() # This means that it doesn't raise the advertised # DatabaseAlreadyControlledError, which we could have handled explicitly. # # I believe the correct fix should be: # * Delete the Essex-handling code as unnecessary complexity which nobody # should still need. # * Fix the races in sqlalchemy-migrate such that version_control() always # raises a well-defined error, and then handle that error here. # # Until we do that, though, we should be able to just try again if we # failed for any reason. In both of the above races, trying again should # succeed the second time round. 
# # For additional context, see: # * https://bugzilla.redhat.com/show_bug.cgi?id=1652287 # * https://bugs.launchpad.net/nova/+bug/1804652 try: return _db_version(repository, database, context) except Exception: return _db_version(repository, database, context) def _db_version(repository, database, context): try: return versioning_api.db_version(get_engine(database, context=context), repository) except versioning_exceptions.DatabaseNotControlledError as exc: meta = sqlalchemy.MetaData() engine = get_engine(database, context=context) meta.reflect(bind=engine) tables = meta.tables if len(tables) == 0: db_version_control(INIT_VERSION[database], database, context=context) return versioning_api.db_version( get_engine(database, context=context), repository) else: LOG.exception(exc) # Some pre-Essex DB's may not be version controlled. # Require them to upgrade using Essex first. raise exception.NovaException( _("Upgrade DB using Essex release first.")) def db_initial_version(database='main'): return INIT_VERSION[database] def _process_null_records(table, col_name, check_fkeys, delete=False): """Queries the database and optionally deletes the NULL records. :param table: sqlalchemy.Table object. :param col_name: The name of the column to check in the table. :param check_fkeys: If True, check the table for foreign keys back to the instances table and if not found, return. :param delete: If true, run a delete operation on the table, else just query for number of records that match the NULL column. :returns: The number of records processed for the table and column. """ records = 0 if col_name in table.columns: # NOTE(mriedem): filter out tables that don't have a foreign key back # to the instances table since they could have stale data even if # instances.uuid wasn't NULL. if check_fkeys: fkey_found = False fkeys = table.c[col_name].foreign_keys or [] for fkey in fkeys: if fkey.column.table.name == 'instances': fkey_found = True if not fkey_found: return 0 if delete: records = table.delete().where( table.c[col_name] == null() ).execute().rowcount else: records = len(list( table.select().where(table.c[col_name] == null()).execute() )) return records def db_null_instance_uuid_scan(delete=False): """Scans the database for NULL instance_uuid records. :param delete: If true, delete NULL instance_uuid records found, else just query to see if they exist for reporting. :returns: dict of table name to number of hits for NULL instance_uuid rows. """ engine = get_engine() meta = sqlalchemy.MetaData(bind=engine) # NOTE(mriedem): We're going to load up all of the tables so we can find # any with an instance_uuid column since those may be foreign keys back # to the instances table and we want to cleanup those records first. We # have to do this explicitly because the foreign keys in nova aren't # defined with cascading deletes. meta.reflect(engine) # Keep track of all of the tables that had hits in the query. processed = {} for table in reversed(meta.sorted_tables): # Ignore the fixed_ips table by design. if table.name not in ('fixed_ips', 'shadow_fixed_ips'): processed[table.name] = _process_null_records( table, 'instance_uuid', check_fkeys=True, delete=delete) # Now process the *instances tables. 
for table_name in ('instances', 'shadow_instances'): table = db_utils.get_table(engine, table_name) processed[table.name] = _process_null_records( table, 'uuid', check_fkeys=False, delete=delete) return processed def db_version_control(version=None, database='main', context=None): repository = _find_migrate_repo(database) versioning_api.version_control(get_engine(database, context=context), repository, version) return version def _find_migrate_repo(database='main'): """Get the path for the migrate repository.""" global _REPOSITORY rel_path = 'migrate_repo' if database == 'api': rel_path = os.path.join('api_migrations', 'migrate_repo') path = os.path.join(os.path.abspath(os.path.dirname(__file__)), rel_path) assert os.path.exists(path) if _REPOSITORY.get(database) is None: _REPOSITORY[database] = Repository(path) return _REPOSITORY[database] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/models.py0000664000175000017500000017555600000000000020140 0ustar00zuulzuul00000000000000# Copyright (c) 2011 X.commerce, a business unit of eBay Inc. # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2011 Piston Cloud Computing, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ SQLAlchemy models for nova data. """ from oslo_config import cfg from oslo_db.sqlalchemy import models from oslo_utils import timeutils from sqlalchemy import (Column, Index, Integer, BigInteger, Enum, String, schema, Unicode) from sqlalchemy.dialects.mysql import MEDIUMTEXT from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import orm from sqlalchemy import ForeignKey, DateTime, Boolean, Text, Float from nova.db.sqlalchemy import types CONF = cfg.CONF BASE = declarative_base() def MediumText(): return Text().with_variant(MEDIUMTEXT(), 'mysql') class NovaBase(models.TimestampMixin, models.ModelBase): metadata = None def __copy__(self): """Implement a safe copy.copy(). SQLAlchemy-mapped objects travel with an object called an InstanceState, which is pegged to that object specifically and tracks everything about that object. It's critical within all attribute operations, including gets and deferred loading. This object definitely cannot be shared among two instances, and must be handled. The copy routine here makes use of session.merge() which already essentially implements a "copy" style of operation, which produces a new instance with a new InstanceState and copies all the data along mapped attributes without using any SQL. The mode we are using here has the caveat that the given object must be "clean", e.g. that it has no database-loaded state that has been updated and not flushed. This is a good thing, as creating a copy of an object including non-flushed, pending database state is probably not a good idea; neither represents what the actual row looks like, and only one should be flushed. 
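        A minimal usage sketch (hypothetical session and instance, assuming
        the object is clean and fully flushed as required above):

            import copy

            inst = session.query(Instance).first()
            inst_copy = copy.copy(inst)  # new object with its own InstanceState
            assert inst_copy is not inst
            assert inst_copy.uuid == inst.uuid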
""" session = orm.Session() copy = session.merge(self, load=False) session.expunge(copy) return copy class Service(BASE, NovaBase, models.SoftDeleteMixin): """Represents a running service on a host.""" __tablename__ = 'services' __table_args__ = ( schema.UniqueConstraint("host", "topic", "deleted", name="uniq_services0host0topic0deleted"), schema.UniqueConstraint("host", "binary", "deleted", name="uniq_services0host0binary0deleted"), Index('services_uuid_idx', 'uuid', unique=True), ) id = Column(Integer, primary_key=True) uuid = Column(String(36), nullable=True) host = Column(String(255)) binary = Column(String(255)) topic = Column(String(255)) report_count = Column(Integer, nullable=False, default=0) disabled = Column(Boolean, default=False) disabled_reason = Column(String(255)) last_seen_up = Column(DateTime, nullable=True) forced_down = Column(Boolean, default=False) version = Column(Integer, default=0) instance = orm.relationship( "Instance", backref='services', primaryjoin='and_(Service.host == Instance.host,' 'Service.binary == "nova-compute",' 'Instance.deleted == 0)', foreign_keys=host, ) class ComputeNode(BASE, NovaBase, models.SoftDeleteMixin): """Represents a running compute service on a host.""" __tablename__ = 'compute_nodes' __table_args__ = ( Index('compute_nodes_uuid_idx', 'uuid', unique=True), schema.UniqueConstraint( 'host', 'hypervisor_hostname', 'deleted', name="uniq_compute_nodes0host0hypervisor_hostname0deleted"), ) id = Column(Integer, primary_key=True) service_id = Column(Integer, nullable=True) # FIXME(sbauza: Host field is nullable because some old Juno compute nodes # can still report stats from an old ResourceTracker without setting this # field. # This field has to be set non-nullable in a later cycle (probably Lxxx) # once we are sure that all compute nodes in production report it. host = Column(String(255), nullable=True) uuid = Column(String(36), nullable=True) vcpus = Column(Integer, nullable=False) memory_mb = Column(Integer, nullable=False) local_gb = Column(Integer, nullable=False) vcpus_used = Column(Integer, nullable=False) memory_mb_used = Column(Integer, nullable=False) local_gb_used = Column(Integer, nullable=False) hypervisor_type = Column(MediumText(), nullable=False) hypervisor_version = Column(Integer, nullable=False) hypervisor_hostname = Column(String(255)) # Free Ram, amount of activity (resize, migration, boot, etc) and # the number of running VM's are a good starting point for what's # important when making scheduling decisions. free_ram_mb = Column(Integer) free_disk_gb = Column(Integer) current_workload = Column(Integer) running_vms = Column(Integer) # Note(masumotok): Expected Strings example: # # '{"arch":"x86_64", # "model":"Nehalem", # "topology":{"sockets":1, "threads":2, "cores":3}, # "features":["tdtscp", "xtpr"]}' # # Points are "json translatable" and it must have all dictionary keys # above, since it is copied from tag of getCapabilities() # (See libvirt.virtConnection). cpu_info = Column(MediumText(), nullable=False) disk_available_least = Column(Integer) host_ip = Column(types.IPAddress()) supported_instances = Column(Text) metrics = Column(Text) # Note(yongli): json string PCI Stats # '[{"vendor_id":"8086", "product_id":"1234", "count":3 }, ...]' pci_stats = Column(Text) # extra_resources is a json string containing arbitrary # data about additional resources. 
extra_resources = Column(Text) # json-encode string containing compute node statistics stats = Column(Text, default='{}') # json-encoded dict that contains NUMA topology as generated by # objects.NUMATopology._to_json() numa_topology = Column(Text) # allocation ratios provided by the RT ram_allocation_ratio = Column(Float, nullable=True) cpu_allocation_ratio = Column(Float, nullable=True) disk_allocation_ratio = Column(Float, nullable=True) mapped = Column(Integer, nullable=True, default=0) class Certificate(BASE, NovaBase, models.SoftDeleteMixin): """Represents a x509 certificate.""" __tablename__ = 'certificates' __table_args__ = ( Index('certificates_project_id_deleted_idx', 'project_id', 'deleted'), Index('certificates_user_id_deleted_idx', 'user_id', 'deleted') ) id = Column(Integer, primary_key=True) user_id = Column(String(255)) project_id = Column(String(255)) file_name = Column(String(255)) class Instance(BASE, NovaBase, models.SoftDeleteMixin): """Represents a guest VM.""" __tablename__ = 'instances' __table_args__ = ( Index('uuid', 'uuid', unique=True), Index('instances_project_id_idx', 'project_id'), Index('instances_project_id_deleted_idx', 'project_id', 'deleted'), Index('instances_reservation_id_idx', 'reservation_id'), Index('instances_terminated_at_launched_at_idx', 'terminated_at', 'launched_at'), Index('instances_uuid_deleted_idx', 'uuid', 'deleted'), Index('instances_task_state_updated_at_idx', 'task_state', 'updated_at'), Index('instances_host_node_deleted_idx', 'host', 'node', 'deleted'), Index('instances_host_deleted_cleaned_idx', 'host', 'deleted', 'cleaned'), Index('instances_deleted_created_at_idx', 'deleted', 'created_at'), Index('instances_updated_at_project_id_idx', 'updated_at', 'project_id'), schema.UniqueConstraint('uuid', name='uniq_instances0uuid'), ) injected_files = [] id = Column(Integer, primary_key=True, autoincrement=True) @property def name(self): try: base_name = CONF.instance_name_template % self.id except TypeError: # Support templates like "uuid-%(uuid)s", etc. info = {} # NOTE(russellb): Don't use self.iteritems() here, as it will # result in infinite recursion on the name property. for column in iter(orm.object_mapper(self).columns): key = column.name # prevent recursion if someone specifies %(name)s # %(name)s will not be valid. if key == 'name': continue info[key] = self[key] try: base_name = CONF.instance_name_template % info except KeyError: base_name = self.uuid return base_name @property def _extra_keys(self): return ['name'] user_id = Column(String(255)) project_id = Column(String(255)) image_ref = Column(String(255)) kernel_id = Column(String(255)) ramdisk_id = Column(String(255)) hostname = Column(String(255)) launch_index = Column(Integer) key_name = Column(String(255)) key_data = Column(MediumText()) power_state = Column(Integer) vm_state = Column(String(255)) task_state = Column(String(255)) memory_mb = Column(Integer) vcpus = Column(Integer) root_gb = Column(Integer) ephemeral_gb = Column(Integer) ephemeral_key_uuid = Column(String(36)) # This is not related to hostname, above. It refers # to the nova node. host = Column(String(255)) # To identify the "ComputeNode" which the instance resides in. # This equals to ComputeNode.hypervisor_hostname. 
node = Column(String(255)) # *not* flavorid, this is the internal primary_key instance_type_id = Column(Integer) user_data = Column(MediumText()) reservation_id = Column(String(255)) launched_at = Column(DateTime) terminated_at = Column(DateTime) # This always refers to the availability_zone kwarg passed in /servers and # provided as an API option, not at all related to the host AZ the instance # belongs to. availability_zone = Column(String(255)) # User editable field for display in user-facing UIs display_name = Column(String(255)) display_description = Column(String(255)) # To remember on which host an instance booted. # An instance may have moved to another host by live migration. launched_on = Column(MediumText()) # locked is superseded by locked_by and locked is not really # necessary but still used in API code so it remains. locked = Column(Boolean) locked_by = Column(Enum('owner', 'admin', name='instances0locked_by')) os_type = Column(String(255)) architecture = Column(String(255)) vm_mode = Column(String(255)) uuid = Column(String(36), nullable=False) root_device_name = Column(String(255)) default_ephemeral_device = Column(String(255)) default_swap_device = Column(String(255)) config_drive = Column(String(255)) # User editable field meant to represent what ip should be used # to connect to the instance access_ip_v4 = Column(types.IPAddress()) access_ip_v6 = Column(types.IPAddress()) auto_disk_config = Column(Boolean()) progress = Column(Integer) # EC2 instance_initiated_shutdown_terminate # True: -> 'terminate' # False: -> 'stop' # Note(maoy): currently Nova will always stop instead of terminate # no matter what the flag says. So we set the default to False. shutdown_terminate = Column(Boolean(), default=False) # EC2 disable_api_termination disable_terminate = Column(Boolean(), default=False) # OpenStack compute cell name. This will only be set at the top of # the cells tree and it'll be a full cell name such as 'api!hop1!hop2' # TODO(stephenfin): Remove this cell_name = Column(String(255)) # NOTE(pumaranikar): internal_id attribute is no longer used (bug 1441242) # Hence, removing from object layer in current release (Ocata) and will # treated as deprecated. The column can be removed from schema with # a migration at the start of next release. 
# internal_id = Column(Integer) # Records whether an instance has been deleted from disk cleaned = Column(Integer, default=0) hidden = Column(Boolean, default=False) class InstanceInfoCache(BASE, NovaBase, models.SoftDeleteMixin): """Represents a cache of information about an instance """ __tablename__ = 'instance_info_caches' __table_args__ = ( schema.UniqueConstraint( "instance_uuid", name="uniq_instance_info_caches0instance_uuid"),) id = Column(Integer, primary_key=True, autoincrement=True) # text column used for storing a json object of network data for api network_info = Column(MediumText()) instance_uuid = Column(String(36), ForeignKey('instances.uuid'), nullable=False) instance = orm.relationship(Instance, backref=orm.backref('info_cache', uselist=False), foreign_keys=instance_uuid, primaryjoin=instance_uuid == Instance.uuid) class InstanceExtra(BASE, NovaBase, models.SoftDeleteMixin): __tablename__ = 'instance_extra' __table_args__ = ( Index('instance_extra_idx', 'instance_uuid'),) id = Column(Integer, primary_key=True, autoincrement=True) instance_uuid = Column(String(36), ForeignKey('instances.uuid'), nullable=False) device_metadata = orm.deferred(Column(Text)) numa_topology = orm.deferred(Column(Text)) pci_requests = orm.deferred(Column(Text)) flavor = orm.deferred(Column(Text)) vcpu_model = orm.deferred(Column(Text)) migration_context = orm.deferred(Column(Text)) keypairs = orm.deferred(Column(Text)) trusted_certs = orm.deferred(Column(Text)) # NOTE(Luyao): 'vpmems' is still in the database # and can be removed in the future release. resources = orm.deferred(Column(Text)) instance = orm.relationship(Instance, backref=orm.backref('extra', uselist=False), foreign_keys=instance_uuid, primaryjoin=instance_uuid == Instance.uuid) # NOTE(alaski): This table exists in the nova_api database and its usage here # is deprecated. class InstanceTypes(BASE, NovaBase, models.SoftDeleteMixin): """Represents possible flavors for instances. Note: instance_type and flavor are synonyms and the term instance_type is deprecated and in the process of being removed. """ __tablename__ = "instance_types" __table_args__ = ( schema.UniqueConstraint("flavorid", "deleted", name="uniq_instance_types0flavorid0deleted"), schema.UniqueConstraint("name", "deleted", name="uniq_instance_types0name0deleted") ) # Internal only primary key/id id = Column(Integer, primary_key=True) name = Column(String(255)) memory_mb = Column(Integer, nullable=False) vcpus = Column(Integer, nullable=False) root_gb = Column(Integer) ephemeral_gb = Column(Integer) # Public facing id will be renamed public_id flavorid = Column(String(255)) swap = Column(Integer, nullable=False, default=0) rxtx_factor = Column(Float, default=1) vcpu_weight = Column(Integer) disabled = Column(Boolean, default=False) is_public = Column(Boolean, default=True) class Quota(BASE, NovaBase, models.SoftDeleteMixin): """Represents a single quota override for a project. If there is no row for a given project id and resource, then the default for the quota class is used. If there is no row for a given quota class and resource, then the default for the deployment is used. If the row is present but the hard limit is Null, then the resource is unlimited. 
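Illustrative example (the values are made up): a row with project_id='p1',
resource='instances' and hard_limit=20 caps project 'p1' at 20 instances,
while the same row with a NULL hard_limit would make 'instances' unlimited
for that project.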
""" __tablename__ = 'quotas' __table_args__ = ( schema.UniqueConstraint("project_id", "resource", "deleted", name="uniq_quotas0project_id0resource0deleted" ), ) id = Column(Integer, primary_key=True) project_id = Column(String(255)) resource = Column(String(255), nullable=False) hard_limit = Column(Integer) class ProjectUserQuota(BASE, NovaBase, models.SoftDeleteMixin): """Represents a single quota override for a user with in a project.""" __tablename__ = 'project_user_quotas' uniq_name = "uniq_project_user_quotas0user_id0project_id0resource0deleted" __table_args__ = ( schema.UniqueConstraint("user_id", "project_id", "resource", "deleted", name=uniq_name), Index('project_user_quotas_project_id_deleted_idx', 'project_id', 'deleted'), Index('project_user_quotas_user_id_deleted_idx', 'user_id', 'deleted') ) id = Column(Integer, primary_key=True, nullable=False) project_id = Column(String(255), nullable=False) user_id = Column(String(255), nullable=False) resource = Column(String(255), nullable=False) hard_limit = Column(Integer) class QuotaClass(BASE, NovaBase, models.SoftDeleteMixin): """Represents a single quota override for a quota class. If there is no row for a given quota class and resource, then the default for the deployment is used. If the row is present but the hard limit is Null, then the resource is unlimited. """ __tablename__ = 'quota_classes' __table_args__ = ( Index('ix_quota_classes_class_name', 'class_name'), ) id = Column(Integer, primary_key=True) class_name = Column(String(255)) resource = Column(String(255)) hard_limit = Column(Integer) class QuotaUsage(BASE, NovaBase, models.SoftDeleteMixin): """Represents the current usage for a given resource.""" __tablename__ = 'quota_usages' __table_args__ = ( Index('ix_quota_usages_project_id', 'project_id'), Index('ix_quota_usages_user_id_deleted', 'user_id', 'deleted'), ) id = Column(Integer, primary_key=True) project_id = Column(String(255)) user_id = Column(String(255)) resource = Column(String(255), nullable=False) in_use = Column(Integer, nullable=False) reserved = Column(Integer, nullable=False) @property def total(self): return self.in_use + self.reserved until_refresh = Column(Integer) class Reservation(BASE, NovaBase, models.SoftDeleteMixin): """Represents a resource reservation for quotas.""" __tablename__ = 'reservations' __table_args__ = ( Index('ix_reservations_project_id', 'project_id'), Index('reservations_uuid_idx', 'uuid'), Index('reservations_deleted_expire_idx', 'deleted', 'expire'), Index('ix_reservations_user_id_deleted', 'user_id', 'deleted'), ) id = Column(Integer, primary_key=True, nullable=False) uuid = Column(String(36), nullable=False) usage_id = Column(Integer, ForeignKey('quota_usages.id'), nullable=False) project_id = Column(String(255)) user_id = Column(String(255)) resource = Column(String(255)) delta = Column(Integer, nullable=False) expire = Column(DateTime) usage = orm.relationship( "QuotaUsage", foreign_keys=usage_id, primaryjoin='and_(Reservation.usage_id == QuotaUsage.id,' 'QuotaUsage.deleted == 0)') # TODO(macsz) This class can be removed. It might need a DB migration to drop # this. 
class Snapshot(BASE, NovaBase, models.SoftDeleteMixin): """Represents a block storage device that can be attached to a VM.""" __tablename__ = 'snapshots' __table_args__ = () id = Column(String(36), primary_key=True, nullable=False) deleted = Column(String(36), default="") @property def volume_name(self): return CONF.volume_name_template % self.volume_id user_id = Column(String(255)) project_id = Column(String(255)) volume_id = Column(String(36), nullable=False) status = Column(String(255)) progress = Column(String(255)) volume_size = Column(Integer) scheduled_at = Column(DateTime) display_name = Column(String(255)) display_description = Column(String(255)) class BlockDeviceMapping(BASE, NovaBase, models.SoftDeleteMixin): """Represents block device mapping that is defined by EC2.""" __tablename__ = "block_device_mapping" __table_args__ = ( Index('snapshot_id', 'snapshot_id'), Index('volume_id', 'volume_id'), Index('block_device_mapping_instance_uuid_device_name_idx', 'instance_uuid', 'device_name'), Index('block_device_mapping_instance_uuid_volume_id_idx', 'instance_uuid', 'volume_id'), Index('block_device_mapping_instance_uuid_idx', 'instance_uuid'), schema.UniqueConstraint('uuid', name='uniq_block_device_mapping0uuid'), ) id = Column(Integer, primary_key=True, autoincrement=True) instance_uuid = Column(String(36), ForeignKey('instances.uuid')) # NOTE(mdbooth): The REST API for BDMs includes a UUID field. That uuid # refers to an image, volume, or snapshot which will be used in the # initialisation of the BDM. It is only relevant during the API call, and # is not persisted directly. This is the UUID of the BDM itself. # FIXME(danms): This should eventually be non-nullable, but we need a # transition period first. uuid = Column(String(36)) instance = orm.relationship(Instance, backref=orm.backref('block_device_mapping'), foreign_keys=instance_uuid, primaryjoin='and_(BlockDeviceMapping.' 'instance_uuid==' 'Instance.uuid,' 'BlockDeviceMapping.deleted==' '0)') source_type = Column(String(255)) destination_type = Column(String(255)) guest_format = Column(String(255)) device_type = Column(String(255)) disk_bus = Column(String(255)) boot_index = Column(Integer) device_name = Column(String(255)) # default=False for compatibility of the existing code. # With EC2 API, # default True for ami specified device. # default False for created with other timing. # TODO(sshturm) add default in db delete_on_termination = Column(Boolean, default=False) snapshot_id = Column(String(36)) volume_id = Column(String(36)) volume_size = Column(Integer) volume_type = Column(String(255)) image_id = Column(String(36)) # for no device to suppress devices. 
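# (Clarifying note, not part of the original source: when no_device is set,
# this mapping suppresses a device that would otherwise be defined for the
# instance, e.g. one inherited from the image's block device mappings.)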
no_device = Column(Boolean) connection_info = Column(MediumText()) tag = Column(String(255)) attachment_id = Column(String(36)) class SecurityGroupInstanceAssociation(BASE, NovaBase, models.SoftDeleteMixin): __tablename__ = 'security_group_instance_association' __table_args__ = ( Index('security_group_instance_association_instance_uuid_idx', 'instance_uuid'), ) id = Column(Integer, primary_key=True, nullable=False) security_group_id = Column(Integer, ForeignKey('security_groups.id')) instance_uuid = Column(String(36), ForeignKey('instances.uuid')) class SecurityGroup(BASE, NovaBase, models.SoftDeleteMixin): """Represents a security group.""" __tablename__ = 'security_groups' __table_args__ = ( schema.UniqueConstraint('project_id', 'name', 'deleted', name='uniq_security_groups0project_id0' 'name0deleted'), ) id = Column(Integer, primary_key=True) name = Column(String(255)) description = Column(String(255)) user_id = Column(String(255)) project_id = Column(String(255)) instances = orm.relationship(Instance, secondary="security_group_instance_association", primaryjoin='and_(' 'SecurityGroup.id == ' 'SecurityGroupInstanceAssociation.security_group_id,' 'SecurityGroupInstanceAssociation.deleted == 0,' 'SecurityGroup.deleted == 0)', secondaryjoin='and_(' 'SecurityGroupInstanceAssociation.instance_uuid == Instance.uuid,' # (anthony) the condition below shouldn't be necessary now that the # association is being marked as deleted. However, removing this # may cause existing deployments to choke, so I'm leaving it 'Instance.deleted == 0)', backref='security_groups') # TODO(stephenfin): Remove this in the V release or later, once we're sure we # won't want it back (it's for nova-network, so we won't) class SecurityGroupIngressRule(BASE, NovaBase, models.SoftDeleteMixin): """Represents a rule in a security group.""" __tablename__ = 'security_group_rules' __table_args__ = () id = Column(Integer, primary_key=True) parent_group_id = Column(Integer, ForeignKey('security_groups.id')) parent_group = orm.relationship("SecurityGroup", backref="rules", foreign_keys=parent_group_id, primaryjoin='and_(' 'SecurityGroupIngressRule.parent_group_id == SecurityGroup.id,' 'SecurityGroupIngressRule.deleted == 0)') protocol = Column(String(255)) from_port = Column(Integer) to_port = Column(Integer) cidr = Column(types.CIDR()) # Note: This is not the parent SecurityGroup. It's SecurityGroup we're # granting access for. 
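# (Clarifying note, not part of the original source: this is the EC2-style
# "source group" form of a rule -- members of grantee_group are allowed to
# reach instances in parent_group, rather than matching traffic on a CIDR.)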
group_id = Column(Integer, ForeignKey('security_groups.id')) grantee_group = orm.relationship("SecurityGroup", foreign_keys=group_id, primaryjoin='and_(' 'SecurityGroupIngressRule.group_id == SecurityGroup.id,' 'SecurityGroupIngressRule.deleted == 0)') # TODO(stephenfin): Remove this in the V release or later, once we're sure we # won't want it back (it's for nova-network, so we won't) class SecurityGroupIngressDefaultRule(BASE, NovaBase, models.SoftDeleteMixin): __tablename__ = 'security_group_default_rules' __table_args__ = () id = Column(Integer, primary_key=True, nullable=False) protocol = Column(String(5)) # "tcp", "udp" or "icmp" from_port = Column(Integer) to_port = Column(Integer) cidr = Column(types.CIDR()) # TODO(stephenfin): Remove this in the V release or later, once we're sure we # won't want it back (it's for nova-network, so we won't) class ProviderFirewallRule(BASE, NovaBase, models.SoftDeleteMixin): """Represents a rule in a security group.""" __tablename__ = 'provider_fw_rules' __table_args__ = () id = Column(Integer, primary_key=True, nullable=False) protocol = Column(String(5)) # "tcp", "udp", or "icmp" from_port = Column(Integer) to_port = Column(Integer) cidr = Column(types.CIDR()) # NOTE(alaski): This table exists in the nova_api database and its usage here # is deprecated. class KeyPair(BASE, NovaBase, models.SoftDeleteMixin): """Represents a public key pair for ssh / WinRM.""" __tablename__ = 'key_pairs' __table_args__ = ( schema.UniqueConstraint("user_id", "name", "deleted", name="uniq_key_pairs0user_id0name0deleted"), ) id = Column(Integer, primary_key=True, nullable=False) name = Column(String(255), nullable=False) user_id = Column(String(255)) fingerprint = Column(String(255)) public_key = Column(MediumText()) type = Column(Enum('ssh', 'x509', name='keypair_types'), nullable=False, server_default='ssh') class Migration(BASE, NovaBase, models.SoftDeleteMixin): """Represents a running host-to-host migration.""" __tablename__ = 'migrations' __table_args__ = ( Index('migrations_instance_uuid_and_status_idx', 'deleted', 'instance_uuid', 'status'), Index('migrations_by_host_nodes_and_status_idx', 'deleted', 'source_compute', 'dest_compute', 'source_node', 'dest_node', 'status'), Index('migrations_uuid', 'uuid', unique=True), Index('migrations_updated_at_idx', 'updated_at'), ) id = Column(Integer, primary_key=True, nullable=False) # NOTE(tr3buchet): the ____compute variables are instance['host'] source_compute = Column(String(255)) dest_compute = Column(String(255)) # nodes are equivalent to a compute node's 'hypervisor_hostname' source_node = Column(String(255)) dest_node = Column(String(255)) # NOTE(tr3buchet): dest_host, btw, is an ip address dest_host = Column(String(255)) old_instance_type_id = Column(Integer()) new_instance_type_id = Column(Integer()) instance_uuid = Column(String(36), ForeignKey('instances.uuid')) uuid = Column(String(36), nullable=True) # TODO(_cerberus_): enum status = Column(String(255)) migration_type = Column(Enum('migration', 'resize', 'live-migration', 'evacuation', name='migration_type'), nullable=True) hidden = Column(Boolean, default=False) memory_total = Column(BigInteger, nullable=True) memory_processed = Column(BigInteger, nullable=True) memory_remaining = Column(BigInteger, nullable=True) disk_total = Column(BigInteger, nullable=True) disk_processed = Column(BigInteger, nullable=True) disk_remaining = Column(BigInteger, nullable=True) cross_cell_move = Column(Boolean, default=False) user_id = Column(String(255), nullable=True) 
project_id = Column(String(255), nullable=True) instance = orm.relationship("Instance", foreign_keys=instance_uuid, primaryjoin='and_(Migration.instance_uuid == ' 'Instance.uuid, Instance.deleted == ' '0)') # TODO(stephenfin): Remove this in the V release or later, once we're sure we # won't want it back (it's for nova-network, so we won't) class Network(BASE, NovaBase, models.SoftDeleteMixin): """Represents a network.""" __tablename__ = 'networks' __table_args__ = ( schema.UniqueConstraint("vlan", "deleted", name="uniq_networks0vlan0deleted"), Index('networks_bridge_deleted_idx', 'bridge', 'deleted'), Index('networks_host_idx', 'host'), Index('networks_project_id_deleted_idx', 'project_id', 'deleted'), Index('networks_uuid_project_id_deleted_idx', 'uuid', 'project_id', 'deleted'), Index('networks_vlan_deleted_idx', 'vlan', 'deleted'), Index('networks_cidr_v6_idx', 'cidr_v6') ) id = Column(Integer, primary_key=True, nullable=False) label = Column(String(255)) injected = Column(Boolean, default=False) cidr = Column(types.CIDR()) cidr_v6 = Column(types.CIDR()) multi_host = Column(Boolean, default=False) gateway_v6 = Column(types.IPAddress()) netmask_v6 = Column(types.IPAddress()) netmask = Column(types.IPAddress()) bridge = Column(String(255)) bridge_interface = Column(String(255)) gateway = Column(types.IPAddress()) broadcast = Column(types.IPAddress()) dns1 = Column(types.IPAddress()) dns2 = Column(types.IPAddress()) vlan = Column(Integer) vpn_public_address = Column(types.IPAddress()) vpn_public_port = Column(Integer) vpn_private_address = Column(types.IPAddress()) dhcp_start = Column(types.IPAddress()) rxtx_base = Column(Integer) project_id = Column(String(255)) priority = Column(Integer) host = Column(String(255)) uuid = Column(String(36)) mtu = Column(Integer) dhcp_server = Column(types.IPAddress()) enable_dhcp = Column(Boolean, default=True) share_address = Column(Boolean, default=False) class VirtualInterface(BASE, NovaBase, models.SoftDeleteMixin): """Represents a virtual interface on an instance.""" __tablename__ = 'virtual_interfaces' __table_args__ = ( schema.UniqueConstraint("address", "deleted", name="uniq_virtual_interfaces0address0deleted"), Index('virtual_interfaces_network_id_idx', 'network_id'), Index('virtual_interfaces_instance_uuid_fkey', 'instance_uuid'), Index('virtual_interfaces_uuid_idx', 'uuid'), ) id = Column(Integer, primary_key=True, nullable=False) address = Column(String(255)) network_id = Column(Integer) instance_uuid = Column(String(36), ForeignKey('instances.uuid')) uuid = Column(String(36)) tag = Column(String(255)) # TODO(stephenfin): Remove this in the V release or later, once we're sure we # won't want it back (it's for nova-network, so we won't) class FixedIp(BASE, NovaBase, models.SoftDeleteMixin): """Represents a fixed IP for an instance.""" __tablename__ = 'fixed_ips' __table_args__ = ( schema.UniqueConstraint( "address", "deleted", name="uniq_fixed_ips0address0deleted"), Index('fixed_ips_virtual_interface_id_fkey', 'virtual_interface_id'), Index('network_id', 'network_id'), Index('address', 'address'), Index('fixed_ips_instance_uuid_fkey', 'instance_uuid'), Index('fixed_ips_host_idx', 'host'), Index('fixed_ips_network_id_host_deleted_idx', 'network_id', 'host', 'deleted'), Index('fixed_ips_address_reserved_network_id_deleted_idx', 'address', 'reserved', 'network_id', 'deleted'), Index('fixed_ips_deleted_allocated_idx', 'address', 'deleted', 'allocated'), Index('fixed_ips_deleted_allocated_updated_at_idx', 'deleted', 'allocated', 'updated_at') ) id 
= Column(Integer, primary_key=True) address = Column(types.IPAddress()) network_id = Column(Integer) virtual_interface_id = Column(Integer) instance_uuid = Column(String(36), ForeignKey('instances.uuid')) # associated means that a fixed_ip has its instance_id column set # allocated means that a fixed_ip has its virtual_interface_id column set # TODO(sshturm) add default in db allocated = Column(Boolean, default=False) # leased means dhcp bridge has leased the ip # TODO(sshturm) add default in db leased = Column(Boolean, default=False) # TODO(sshturm) add default in db reserved = Column(Boolean, default=False) host = Column(String(255)) network = orm.relationship(Network, backref=orm.backref('fixed_ips'), foreign_keys=network_id, primaryjoin='and_(' 'FixedIp.network_id == Network.id,' 'FixedIp.deleted == 0,' 'Network.deleted == 0)') instance = orm.relationship(Instance, foreign_keys=instance_uuid, primaryjoin='and_(' 'FixedIp.instance_uuid == Instance.uuid,' 'FixedIp.deleted == 0,' 'Instance.deleted == 0)') virtual_interface = orm.relationship(VirtualInterface, backref=orm.backref('fixed_ips'), foreign_keys=virtual_interface_id, primaryjoin='and_(' 'FixedIp.virtual_interface_id == ' 'VirtualInterface.id,' 'FixedIp.deleted == 0,' 'VirtualInterface.deleted == 0)') # TODO(stephenfin): Remove this in the V release or later, once we're sure we # won't want it back (it's for nova-network, so we won't) class FloatingIp(BASE, NovaBase, models.SoftDeleteMixin): """Represents a floating IP that dynamically forwards to a fixed IP.""" __tablename__ = 'floating_ips' __table_args__ = ( schema.UniqueConstraint("address", "deleted", name="uniq_floating_ips0address0deleted"), Index('fixed_ip_id', 'fixed_ip_id'), Index('floating_ips_host_idx', 'host'), Index('floating_ips_project_id_idx', 'project_id'), Index('floating_ips_pool_deleted_fixed_ip_id_project_id_idx', 'pool', 'deleted', 'fixed_ip_id', 'project_id') ) id = Column(Integer, primary_key=True) address = Column(types.IPAddress()) fixed_ip_id = Column(Integer) project_id = Column(String(255)) host = Column(String(255)) auto_assigned = Column(Boolean, default=False) # TODO(sshturm) add default in db pool = Column(String(255)) interface = Column(String(255)) fixed_ip = orm.relationship(FixedIp, backref=orm.backref('floating_ips'), foreign_keys=fixed_ip_id, primaryjoin='and_(' 'FloatingIp.fixed_ip_id == FixedIp.id,' 'FloatingIp.deleted == 0,' 'FixedIp.deleted == 0)') # TODO(stephenfin): Remove in V or later class DNSDomain(BASE, NovaBase, models.SoftDeleteMixin): """Represents a DNS domain with availability zone or project info.""" __tablename__ = 'dns_domains' __table_args__ = ( Index('dns_domains_project_id_idx', 'project_id'), Index('dns_domains_domain_deleted_idx', 'domain', 'deleted'), ) deleted = Column(Boolean, default=False) domain = Column(String(255), primary_key=True) scope = Column(String(255)) availability_zone = Column(String(255)) project_id = Column(String(255)) # TODO(stephenfin): Remove in V or later class ConsolePool(BASE, NovaBase, models.SoftDeleteMixin): """Represents pool of consoles on the same physical node.""" __tablename__ = 'console_pools' __table_args__ = ( schema.UniqueConstraint( "host", "console_type", "compute_host", "deleted", name="uniq_console_pools0host0console_type0compute_host0deleted"), ) id = Column(Integer, primary_key=True) address = Column(types.IPAddress()) username = Column(String(255)) password = Column(String(255)) console_type = Column(String(255)) public_hostname = Column(String(255)) host = 
Column(String(255)) compute_host = Column(String(255)) # TODO(stephenfin): Remove in V or later class Console(BASE, NovaBase, models.SoftDeleteMixin): """Represents a console session for an instance.""" __tablename__ = 'consoles' __table_args__ = ( Index('consoles_instance_uuid_idx', 'instance_uuid'), ) id = Column(Integer, primary_key=True) instance_name = Column(String(255)) instance_uuid = Column(String(36), ForeignKey('instances.uuid')) password = Column(String(255)) port = Column(Integer) pool_id = Column(Integer, ForeignKey('console_pools.id')) pool = orm.relationship(ConsolePool, backref=orm.backref('consoles')) class InstanceMetadata(BASE, NovaBase, models.SoftDeleteMixin): """Represents a user-provided metadata key/value pair for an instance.""" __tablename__ = 'instance_metadata' __table_args__ = ( Index('instance_metadata_instance_uuid_idx', 'instance_uuid'), ) id = Column(Integer, primary_key=True) key = Column(String(255)) value = Column(String(255)) instance_uuid = Column(String(36), ForeignKey('instances.uuid')) instance = orm.relationship(Instance, backref="metadata", foreign_keys=instance_uuid, primaryjoin='and_(' 'InstanceMetadata.instance_uuid == ' 'Instance.uuid,' 'InstanceMetadata.deleted == 0)') class InstanceSystemMetadata(BASE, NovaBase, models.SoftDeleteMixin): """Represents a system-owned metadata key/value pair for an instance.""" __tablename__ = 'instance_system_metadata' __table_args__ = ( Index('instance_uuid', 'instance_uuid'), ) id = Column(Integer, primary_key=True) key = Column(String(255), nullable=False) value = Column(String(255)) instance_uuid = Column(String(36), ForeignKey('instances.uuid'), nullable=False) instance = orm.relationship(Instance, backref="system_metadata", foreign_keys=instance_uuid) # NOTE(alaski): This table exists in the nova_api database and its usage here # is deprecated. class InstanceTypeProjects(BASE, NovaBase, models.SoftDeleteMixin): """Represent projects associated instance_types.""" __tablename__ = "instance_type_projects" __table_args__ = (schema.UniqueConstraint( "instance_type_id", "project_id", "deleted", name="uniq_instance_type_projects0instance_type_id0project_id0deleted" ), ) id = Column(Integer, primary_key=True) instance_type_id = Column(Integer, ForeignKey('instance_types.id'), nullable=False) project_id = Column(String(255)) instance_type = orm.relationship(InstanceTypes, backref="projects", foreign_keys=instance_type_id, primaryjoin='and_(' 'InstanceTypeProjects.instance_type_id == InstanceTypes.id,' 'InstanceTypeProjects.deleted == 0)') # NOTE(alaski): This table exists in the nova_api database and its usage here # is deprecated. 
class InstanceTypeExtraSpecs(BASE, NovaBase, models.SoftDeleteMixin): """Represents additional specs as key/value pairs for an instance_type.""" __tablename__ = 'instance_type_extra_specs' __table_args__ = ( Index('instance_type_extra_specs_instance_type_id_key_idx', 'instance_type_id', 'key'), schema.UniqueConstraint( "instance_type_id", "key", "deleted", name=("uniq_instance_type_extra_specs0" "instance_type_id0key0deleted") ), {'mysql_collate': 'utf8_bin'}, ) id = Column(Integer, primary_key=True) key = Column(String(255)) value = Column(String(255)) instance_type_id = Column(Integer, ForeignKey('instance_types.id'), nullable=False) instance_type = orm.relationship(InstanceTypes, backref="extra_specs", foreign_keys=instance_type_id, primaryjoin='and_(' 'InstanceTypeExtraSpecs.instance_type_id == InstanceTypes.id,' 'InstanceTypeExtraSpecs.deleted == 0)') # TODO(stephenfin): Remove this in the U release or later, once we're sure we # won't want it back (it's for cells v1, so we won't) class Cell(BASE, NovaBase, models.SoftDeleteMixin): """Represents parent and child cells of this cell. Cells can have multiple parents and children, so there could be any number of entries with is_parent=True or False """ __tablename__ = 'cells' __table_args__ = (schema.UniqueConstraint( "name", "deleted", name="uniq_cells0name0deleted" ), ) id = Column(Integer, primary_key=True) # Name here is the 'short name' of a cell. For instance: 'child1' name = Column(String(255)) api_url = Column(String(255)) transport_url = Column(String(255), nullable=False) weight_offset = Column(Float(), default=0.0) weight_scale = Column(Float(), default=1.0) is_parent = Column(Boolean()) # NOTE(alaski): This table exists in the nova_api database and its usage here # is deprecated. class AggregateHost(BASE, NovaBase, models.SoftDeleteMixin): """Represents a host that is member of an aggregate.""" __tablename__ = 'aggregate_hosts' __table_args__ = (schema.UniqueConstraint( "host", "aggregate_id", "deleted", name="uniq_aggregate_hosts0host0aggregate_id0deleted" ), ) id = Column(Integer, primary_key=True, autoincrement=True) host = Column(String(255)) aggregate_id = Column(Integer, ForeignKey('aggregates.id'), nullable=False) # NOTE(alaski): This table exists in the nova_api database and its usage here # is deprecated. class AggregateMetadata(BASE, NovaBase, models.SoftDeleteMixin): """Represents a metadata key/value pair for an aggregate.""" __tablename__ = 'aggregate_metadata' __table_args__ = ( schema.UniqueConstraint("aggregate_id", "key", "deleted", name="uniq_aggregate_metadata0aggregate_id0key0deleted" ), Index('aggregate_metadata_key_idx', 'key'), Index('aggregate_metadata_value_idx', 'value'), ) id = Column(Integer, primary_key=True) key = Column(String(255), nullable=False) value = Column(String(255), nullable=False) aggregate_id = Column(Integer, ForeignKey('aggregates.id'), nullable=False) # NOTE(alaski): This table exists in the nova_api database and its usage here # is deprecated. 
class Aggregate(BASE, NovaBase, models.SoftDeleteMixin): """Represents a cluster of hosts that exists in this zone.""" __tablename__ = 'aggregates' __table_args__ = (Index('aggregate_uuid_idx', 'uuid'),) id = Column(Integer, primary_key=True, autoincrement=True) uuid = Column(String(36)) name = Column(String(255)) _hosts = orm.relationship(AggregateHost, primaryjoin='and_(' 'Aggregate.id == AggregateHost.aggregate_id,' 'AggregateHost.deleted == 0,' 'Aggregate.deleted == 0)') _metadata = orm.relationship(AggregateMetadata, primaryjoin='and_(' 'Aggregate.id == AggregateMetadata.aggregate_id,' 'AggregateMetadata.deleted == 0,' 'Aggregate.deleted == 0)') @property def _extra_keys(self): return ['hosts', 'metadetails', 'availability_zone'] @property def hosts(self): return [h.host for h in self._hosts] @property def metadetails(self): return {m.key: m.value for m in self._metadata} @property def availability_zone(self): if 'availability_zone' not in self.metadetails: return None return self.metadetails['availability_zone'] class AgentBuild(BASE, NovaBase, models.SoftDeleteMixin): """Represents an agent build.""" __tablename__ = 'agent_builds' __table_args__ = ( Index('agent_builds_hypervisor_os_arch_idx', 'hypervisor', 'os', 'architecture'), schema.UniqueConstraint("hypervisor", "os", "architecture", "deleted", name="uniq_agent_builds0hypervisor0os0architecture0deleted"), ) id = Column(Integer, primary_key=True) hypervisor = Column(String(255)) os = Column(String(255)) architecture = Column(String(255)) version = Column(String(255)) url = Column(String(255)) md5hash = Column(String(255)) class BandwidthUsage(BASE, NovaBase, models.SoftDeleteMixin): """Cache for instance bandwidth usage data pulled from the hypervisor.""" __tablename__ = 'bw_usage_cache' __table_args__ = ( Index('bw_usage_cache_uuid_start_period_idx', 'uuid', 'start_period'), ) id = Column(Integer, primary_key=True, nullable=False) uuid = Column(String(36)) mac = Column(String(255)) start_period = Column(DateTime, nullable=False) last_refreshed = Column(DateTime) bw_in = Column(BigInteger) bw_out = Column(BigInteger) last_ctr_in = Column(BigInteger) last_ctr_out = Column(BigInteger) class VolumeUsage(BASE, NovaBase, models.SoftDeleteMixin): """Cache for volume usage data pulled from the hypervisor.""" __tablename__ = 'volume_usage_cache' __table_args__ = () id = Column(Integer, primary_key=True, nullable=False) volume_id = Column(String(36), nullable=False) instance_uuid = Column(String(36)) project_id = Column(String(36)) user_id = Column(String(64)) availability_zone = Column(String(255)) tot_last_refreshed = Column(DateTime) tot_reads = Column(BigInteger, default=0) tot_read_bytes = Column(BigInteger, default=0) tot_writes = Column(BigInteger, default=0) tot_write_bytes = Column(BigInteger, default=0) curr_last_refreshed = Column(DateTime) curr_reads = Column(BigInteger, default=0) curr_read_bytes = Column(BigInteger, default=0) curr_writes = Column(BigInteger, default=0) curr_write_bytes = Column(BigInteger, default=0) class S3Image(BASE, NovaBase, models.SoftDeleteMixin): """Compatibility layer for the S3 image service talking to Glance.""" __tablename__ = 's3_images' __table_args__ = () id = Column(Integer, primary_key=True, nullable=False, autoincrement=True) uuid = Column(String(36), nullable=False) class VolumeIdMapping(BASE, NovaBase, models.SoftDeleteMixin): """Compatibility layer for the EC2 volume service.""" __tablename__ = 'volume_id_mappings' __table_args__ = () id = Column(Integer, primary_key=True, 
nullable=False, autoincrement=True) uuid = Column(String(36), nullable=False) class SnapshotIdMapping(BASE, NovaBase, models.SoftDeleteMixin): """Compatibility layer for the EC2 snapshot service.""" __tablename__ = 'snapshot_id_mappings' __table_args__ = () id = Column(Integer, primary_key=True, nullable=False, autoincrement=True) uuid = Column(String(36), nullable=False) class InstanceFault(BASE, NovaBase, models.SoftDeleteMixin): __tablename__ = 'instance_faults' __table_args__ = ( Index('instance_faults_host_idx', 'host'), Index('instance_faults_instance_uuid_deleted_created_at_idx', 'instance_uuid', 'deleted', 'created_at') ) id = Column(Integer, primary_key=True, nullable=False) instance_uuid = Column(String(36), ForeignKey('instances.uuid')) code = Column(Integer(), nullable=False) message = Column(String(255)) details = Column(MediumText()) host = Column(String(255)) class InstanceAction(BASE, NovaBase, models.SoftDeleteMixin): """Track client actions on an instance. The intention is that there will only be one of these per user request. A lookup by (instance_uuid, request_id) should always return a single result. """ __tablename__ = 'instance_actions' __table_args__ = ( Index('instance_uuid_idx', 'instance_uuid'), Index('request_id_idx', 'request_id'), Index('instance_actions_instance_uuid_updated_at_idx', 'instance_uuid', 'updated_at') ) id = Column(Integer, primary_key=True, nullable=False, autoincrement=True) action = Column(String(255)) instance_uuid = Column(String(36), ForeignKey('instances.uuid')) request_id = Column(String(255)) user_id = Column(String(255)) project_id = Column(String(255)) start_time = Column(DateTime, default=timeutils.utcnow) finish_time = Column(DateTime) message = Column(String(255)) class InstanceActionEvent(BASE, NovaBase, models.SoftDeleteMixin): """Track events that occur during an InstanceAction.""" __tablename__ = 'instance_actions_events' __table_args__ = () id = Column(Integer, primary_key=True, nullable=False, autoincrement=True) event = Column(String(255)) action_id = Column(Integer, ForeignKey('instance_actions.id')) start_time = Column(DateTime, default=timeutils.utcnow) finish_time = Column(DateTime) result = Column(String(255)) traceback = Column(Text) host = Column(String(255)) details = Column(Text) class InstanceIdMapping(BASE, NovaBase, models.SoftDeleteMixin): """Compatibility layer for the EC2 instance service.""" __tablename__ = 'instance_id_mappings' __table_args__ = ( Index('ix_instance_id_mappings_uuid', 'uuid'), ) id = Column(Integer, primary_key=True, nullable=False, autoincrement=True) uuid = Column(String(36), nullable=False) class TaskLog(BASE, NovaBase, models.SoftDeleteMixin): """Audit log for background periodic tasks.""" __tablename__ = 'task_log' __table_args__ = ( schema.UniqueConstraint( 'task_name', 'host', 'period_beginning', 'period_ending', name="uniq_task_log0task_name0host0period_beginning0period_ending" ), Index('ix_task_log_period_beginning', 'period_beginning'), Index('ix_task_log_host', 'host'), Index('ix_task_log_period_ending', 'period_ending'), ) id = Column(Integer, primary_key=True, nullable=False, autoincrement=True) task_name = Column(String(255), nullable=False) state = Column(String(255), nullable=False) host = Column(String(255), nullable=False) period_beginning = Column(DateTime, default=timeutils.utcnow, nullable=False) period_ending = Column(DateTime, default=timeutils.utcnow, nullable=False) message = Column(String(255), nullable=False) task_items = Column(Integer(), default=0) errors = 
Column(Integer(), default=0) # NOTE(alaski): This table exists in the nova_api database and its usage here # is deprecated. class InstanceGroupMember(BASE, NovaBase, models.SoftDeleteMixin): """Represents the members for an instance group.""" __tablename__ = 'instance_group_member' __table_args__ = ( Index('instance_group_member_instance_idx', 'instance_id'), ) id = Column(Integer, primary_key=True, nullable=False) instance_id = Column(String(255)) group_id = Column(Integer, ForeignKey('instance_groups.id'), nullable=False) # NOTE(alaski): This table exists in the nova_api database and its usage here # is deprecated. class InstanceGroupPolicy(BASE, NovaBase, models.SoftDeleteMixin): """Represents the policy type for an instance group.""" __tablename__ = 'instance_group_policy' __table_args__ = ( Index('instance_group_policy_policy_idx', 'policy'), ) id = Column(Integer, primary_key=True, nullable=False) policy = Column(String(255)) group_id = Column(Integer, ForeignKey('instance_groups.id'), nullable=False) # NOTE(alaski): This table exists in the nova_api database and its usage here # is deprecated. class InstanceGroup(BASE, NovaBase, models.SoftDeleteMixin): """Represents an instance group. A group will maintain a collection of instances and the relationship between them. """ __tablename__ = 'instance_groups' __table_args__ = ( schema.UniqueConstraint("uuid", "deleted", name="uniq_instance_groups0uuid0deleted"), ) id = Column(Integer, primary_key=True, autoincrement=True) user_id = Column(String(255)) project_id = Column(String(255)) uuid = Column(String(36), nullable=False) name = Column(String(255)) _policies = orm.relationship(InstanceGroupPolicy, primaryjoin='and_(' 'InstanceGroup.id == InstanceGroupPolicy.group_id,' 'InstanceGroupPolicy.deleted == 0,' 'InstanceGroup.deleted == 0)') _members = orm.relationship(InstanceGroupMember, primaryjoin='and_(' 'InstanceGroup.id == InstanceGroupMember.group_id,' 'InstanceGroupMember.deleted == 0,' 'InstanceGroup.deleted == 0)') @property def policies(self): return [p.policy for p in self._policies] @property def members(self): return [m.instance_id for m in self._members] class PciDevice(BASE, NovaBase, models.SoftDeleteMixin): """Represents a PCI host device that can be passed through to instances. """ __tablename__ = 'pci_devices' __table_args__ = ( Index('ix_pci_devices_compute_node_id_deleted', 'compute_node_id', 'deleted'), Index('ix_pci_devices_instance_uuid_deleted', 'instance_uuid', 'deleted'), Index('ix_pci_devices_compute_node_id_parent_addr_deleted', 'compute_node_id', 'parent_addr', 'deleted'), schema.UniqueConstraint( "compute_node_id", "address", "deleted", name="uniq_pci_devices0compute_node_id0address0deleted") ) id = Column(Integer, primary_key=True) uuid = Column(String(36)) compute_node_id = Column(Integer, ForeignKey('compute_nodes.id'), nullable=False) # physical address of device domain:bus:slot.func (0000:09:01.1) address = Column(String(12), nullable=False) vendor_id = Column(String(4), nullable=False) product_id = Column(String(4), nullable=False) dev_type = Column(String(8), nullable=False) dev_id = Column(String(255)) # label is abstract device name, that is used to unify devices with the # same functionality with different addresses or host. 
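# (Illustrative note, not part of the original source: a label is typically
# derived from the device's vendor and product ids, e.g. something like
# 'label_8086_10fb', so identical devices at different addresses or on
# different hosts share the same label.)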
label = Column(String(255), nullable=False) status = Column(String(36), nullable=False) # the request_id is used to identify a device that is allocated for a # particular request request_id = Column(String(36), nullable=True) extra_info = Column(Text) instance_uuid = Column(String(36)) numa_node = Column(Integer, nullable=True) parent_addr = Column(String(12), nullable=True) instance = orm.relationship(Instance, backref="pci_devices", foreign_keys=instance_uuid, primaryjoin='and_(' 'PciDevice.instance_uuid == Instance.uuid,' 'PciDevice.deleted == 0)') class Tag(BASE, models.ModelBase): """Represents the tag for a resource.""" __tablename__ = "tags" __table_args__ = ( Index('tags_tag_idx', 'tag'), ) resource_id = Column(String(36), primary_key=True, nullable=False) tag = Column(Unicode(80), primary_key=True, nullable=False) instance = orm.relationship( "Instance", backref='tags', primaryjoin='and_(Tag.resource_id == Instance.uuid,' 'Instance.deleted == 0)', foreign_keys=resource_id ) # NOTE(alaski): This table exists in the nova_api database and its usage here # is deprecated. class ResourceProvider(BASE, models.ModelBase): """Represents a mapping to a providers of resources.""" __tablename__ = "resource_providers" __table_args__ = ( Index('resource_providers_uuid_idx', 'uuid'), schema.UniqueConstraint('uuid', name='uniq_resource_providers0uuid'), Index('resource_providers_name_idx', 'name'), schema.UniqueConstraint('name', name='uniq_resource_providers0name') ) id = Column(Integer, primary_key=True, nullable=False) uuid = Column(String(36), nullable=False) name = Column(Unicode(200), nullable=True) generation = Column(Integer, default=0) can_host = Column(Integer, default=0) # NOTE(alaski): This table exists in the nova_api database and its usage here # is deprecated. class Inventory(BASE, models.ModelBase): """Represents a quantity of available resource.""" __tablename__ = "inventories" __table_args__ = ( Index('inventories_resource_provider_id_idx', 'resource_provider_id'), Index('inventories_resource_class_id_idx', 'resource_class_id'), Index('inventories_resource_provider_resource_class_idx', 'resource_provider_id', 'resource_class_id'), schema.UniqueConstraint('resource_provider_id', 'resource_class_id', name='uniq_inventories0resource_provider_resource_class') ) id = Column(Integer, primary_key=True, nullable=False) resource_provider_id = Column(Integer, nullable=False) resource_class_id = Column(Integer, nullable=False) total = Column(Integer, nullable=False) reserved = Column(Integer, nullable=False) min_unit = Column(Integer, nullable=False) max_unit = Column(Integer, nullable=False) step_size = Column(Integer, nullable=False) allocation_ratio = Column(Float, nullable=False) resource_provider = orm.relationship( "ResourceProvider", primaryjoin=('and_(Inventory.resource_provider_id == ' 'ResourceProvider.id)'), foreign_keys=resource_provider_id) # NOTE(alaski): This table exists in the nova_api database and its usage here # is deprecated. 
class Allocation(BASE, models.ModelBase): """A use of inventory.""" __tablename__ = "allocations" __table_args__ = ( Index('allocations_resource_provider_class_used_idx', 'resource_provider_id', 'resource_class_id', 'used'), Index('allocations_resource_class_id_idx', 'resource_class_id'), Index('allocations_consumer_id_idx', 'consumer_id') ) id = Column(Integer, primary_key=True, nullable=False) resource_provider_id = Column(Integer, nullable=False) consumer_id = Column(String(36), nullable=False) resource_class_id = Column(Integer, nullable=False) used = Column(Integer, nullable=False) # NOTE(alaski): This table exists in the nova_api database and its usage here # is deprecated. class ResourceProviderAggregate(BASE, models.ModelBase): """Associate a resource provider with an aggregate.""" __tablename__ = 'resource_provider_aggregates' __table_args__ = ( Index('resource_provider_aggregates_aggregate_id_idx', 'aggregate_id'), ) resource_provider_id = Column(Integer, primary_key=True, nullable=False) aggregate_id = Column(Integer, primary_key=True, nullable=False) class ConsoleAuthToken(BASE, NovaBase): """Represents a console auth token""" __tablename__ = 'console_auth_tokens' __table_args__ = ( Index('console_auth_tokens_instance_uuid_idx', 'instance_uuid'), Index('console_auth_tokens_host_expires_idx', 'host', 'expires'), Index('console_auth_tokens_token_hash_idx', 'token_hash'), Index('console_auth_tokens_token_hash_instance_uuid_idx', 'token_hash', 'instance_uuid'), schema.UniqueConstraint("token_hash", name="uniq_console_auth_tokens0token_hash") ) id = Column(Integer, primary_key=True, nullable=False) token_hash = Column(String(255), nullable=False) console_type = Column(String(255), nullable=False) host = Column(String(255), nullable=False) port = Column(Integer, nullable=False) internal_access_path = Column(String(255)) instance_uuid = Column(String(36), nullable=False) expires = Column(Integer, nullable=False) access_url_base = Column(String(255)) instance = orm.relationship( "Instance", backref='console_auth_tokens', primaryjoin='and_(ConsoleAuthToken.instance_uuid == Instance.uuid,' 'Instance.deleted == 0)', foreign_keys=instance_uuid ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/types.py0000664000175000017500000000467300000000000020010 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Custom SQLAlchemy types.""" import netaddr from oslo_utils import netutils from sqlalchemy.dialects import postgresql from sqlalchemy import types from nova import utils class IPAddress(types.TypeDecorator): """An SQLAlchemy type representing an IP-address.""" impl = types.String def load_dialect_impl(self, dialect): if dialect.name == 'postgresql': return dialect.type_descriptor(postgresql.INET()) else: return dialect.type_descriptor(types.String(39)) def process_bind_param(self, value, dialect): """Process/Formats the value before insert it into the db.""" if dialect.name == 'postgresql': return value # NOTE(maurosr): The purpose here is to convert ipv6 to the shortened # form, not validate it. elif netutils.is_valid_ipv6(value): return utils.get_shortened_ipv6(value) return value class CIDR(types.TypeDecorator): """An SQLAlchemy type representing a CIDR definition.""" impl = types.String def load_dialect_impl(self, dialect): if dialect.name == 'postgresql': return dialect.type_descriptor(postgresql.INET()) else: return dialect.type_descriptor(types.String(43)) def process_bind_param(self, value, dialect): """Process/Formats the value before insert it into the db.""" # NOTE(sdague): normalize all the inserts if netutils.is_valid_ipv6_cidr(value): return utils.get_shortened_ipv6_cidr(value) return value def process_result_value(self, value, dialect): try: return str(netaddr.IPNetwork(value, version=4).cidr) except netaddr.AddrFormatError: return str(netaddr.IPNetwork(value, version=6).cidr) except TypeError: return None ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/db/sqlalchemy/utils.py0000664000175000017500000001126200000000000017774 0ustar00zuulzuul00000000000000# Copyright (c) 2013 Boris Pavlovic (boris@pavlovic.me). # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_db import exception as db_exc from oslo_db.sqlalchemy import utils as oslodbutils from oslo_log import log as logging from sqlalchemy.exc import OperationalError from sqlalchemy import MetaData from sqlalchemy import Table from sqlalchemy.types import NullType from nova.db.sqlalchemy import api as db from nova import exception from nova.i18n import _ LOG = logging.getLogger(__name__) def check_shadow_table(migrate_engine, table_name): """This method checks that table with ``table_name`` and corresponding shadow table have same columns. 
""" meta = MetaData() meta.bind = migrate_engine table = Table(table_name, meta, autoload=True) shadow_table = Table(db._SHADOW_TABLE_PREFIX + table_name, meta, autoload=True) columns = {c.name: c for c in table.columns} shadow_columns = {c.name: c for c in shadow_table.columns} for name, column in columns.items(): if name not in shadow_columns: raise exception.NovaException( _("Missing column %(table)s.%(column)s in shadow table") % {'column': name, 'table': shadow_table.name}) shadow_column = shadow_columns[name] if not isinstance(shadow_column.type, type(column.type)): raise exception.NovaException( _("Different types in %(table)s.%(column)s and shadow table: " "%(c_type)s %(shadow_c_type)s") % {'column': name, 'table': table.name, 'c_type': column.type, 'shadow_c_type': shadow_column.type}) for name, column in shadow_columns.items(): if name not in columns: raise exception.NovaException( _("Extra column %(table)s.%(column)s in shadow table") % {'column': name, 'table': shadow_table.name}) return True def create_shadow_table(migrate_engine, table_name=None, table=None, **col_name_col_instance): """This method create shadow table for table with name ``table_name`` or table instance ``table``. :param table_name: Autoload table with this name and create shadow table :param table: Autoloaded table, so just create corresponding shadow table. :param col_name_col_instance: contains pair column_name=column_instance. column_instance is instance of Column. These params are required only for columns that have unsupported types by sqlite. For example BigInteger. :returns: The created shadow_table object. """ meta = MetaData(bind=migrate_engine) if table_name is None and table is None: raise exception.NovaException(_("Specify `table_name` or `table` " "param")) if not (table_name is None or table is None): raise exception.NovaException(_("Specify only one param `table_name` " "`table`")) if table is None: table = Table(table_name, meta, autoload=True) columns = [] for column in table.columns: if isinstance(column.type, NullType): new_column = oslodbutils._get_not_supported_column( col_name_col_instance, column.name) columns.append(new_column) else: columns.append(column.copy()) shadow_table_name = db._SHADOW_TABLE_PREFIX + table.name shadow_table = Table(shadow_table_name, meta, *columns, mysql_engine='InnoDB') try: shadow_table.create() return shadow_table except (db_exc.DBError, OperationalError): # NOTE(ekudryashova): At the moment there is a case in oslo.db code, # which raises unwrapped OperationalError, so we should catch it until # oslo.db would wraps all such exceptions LOG.info(repr(shadow_table)) LOG.exception('Exception while creating table.') raise exception.ShadowTableExists(name=shadow_table_name) except Exception: LOG.info(repr(shadow_table)) LOG.exception('Exception while creating table.') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/debugger.py0000664000175000017500000000406400000000000015673 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2011 Justin Santa Barbara # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # NOTE(markmc): this is imported before monkey patching in nova.cmd # so we avoid extra imports here import sys def enabled(): return ('--remote_debug-host' in sys.argv and '--remote_debug-port' in sys.argv) def init(): import nova.conf CONF = nova.conf.CONF # NOTE(markmc): gracefully handle the CLI options not being registered if 'remote_debug' not in CONF: return if not (CONF.remote_debug.host and CONF.remote_debug.port): return from nova.i18n import _LW from oslo_log import log as logging LOG = logging.getLogger(__name__) LOG.debug('Listening on %(host)s:%(port)s for debug connection', {'host': CONF.remote_debug.host, 'port': CONF.remote_debug.port}) try: from pydev import pydevd except ImportError: import pydevd pydevd.settrace(host=CONF.remote_debug.host, port=CONF.remote_debug.port, stdoutToServer=False, stderrToServer=False) LOG.warning(_LW('WARNING: Using the remote debug option changes how ' 'Nova uses the eventlet library to support async IO. This ' 'could result in failures that do not occur under normal ' 'operation. Use at your own risk.')) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/exception.py0000664000175000017500000021412200000000000016103 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Nova base exception handling. Includes decorator for re-raising Nova-type exceptions. SHOULD include dedicated exception logging. 
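A short usage sketch, mirroring the subclasses defined below (the 'reason'
value is illustrative): an exception type declares only a printf-style
msg_fmt template and is raised with matching keyword arguments, e.g.

    class EncryptionFailure(NovaException):
        msg_fmt = _("Failed to encrypt text: %(reason)s")

    raise EncryptionFailure(reason='no key available')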
""" from oslo_log import log as logging import six import webob.exc from webob import util as woutil from nova.i18n import _, _LE LOG = logging.getLogger(__name__) class ConvertedException(webob.exc.WSGIHTTPException): def __init__(self, code, title="", explanation=""): self.code = code # There is a strict rule about constructing status line for HTTP: # '...Status-Line, consisting of the protocol version followed by a # numeric status code and its associated textual phrase, with each # element separated by SP characters' # (http://www.faqs.org/rfcs/rfc2616.html) # 'code' and 'title' can not be empty because they correspond # to numeric status code and its associated text if title: self.title = title else: try: self.title = woutil.status_reasons[self.code] except KeyError: msg = _LE("Improper or unknown HTTP status code used: %d") LOG.error(msg, code) self.title = woutil.status_generic_reasons[self.code // 100] self.explanation = explanation super(ConvertedException, self).__init__() class NovaException(Exception): """Base Nova Exception To correctly use this class, inherit from it and define a 'msg_fmt' property. That msg_fmt will get printf'd with the keyword arguments provided to the constructor. """ msg_fmt = _("An unknown exception occurred.") code = 500 headers = {} safe = False def __init__(self, message=None, **kwargs): self.kwargs = kwargs if 'code' not in self.kwargs: try: self.kwargs['code'] = self.code except AttributeError: pass try: if not message: message = self.msg_fmt % kwargs else: message = six.text_type(message) except Exception: # NOTE(melwitt): This is done in a separate method so it can be # monkey-patched during testing to make it a hard failure. self._log_exception() message = self.msg_fmt self.message = message super(NovaException, self).__init__(message) def _log_exception(self): # kwargs doesn't match a variable in the message # log the issue and the kwargs LOG.exception(_LE('Exception in string format operation')) for name, value in self.kwargs.items(): LOG.error("%s: %s" % (name, value)) # noqa def format_message(self): # NOTE(mrodden): use the first argument to the python Exception object # which should be our full NovaException message, (see __init__) return self.args[0] def __repr__(self): dict_repr = self.__dict__ dict_repr['class'] = self.__class__.__name__ return str(dict_repr) class EncryptionFailure(NovaException): msg_fmt = _("Failed to encrypt text: %(reason)s") class VirtualInterfaceCreateException(NovaException): msg_fmt = _("Virtual Interface creation failed") class VirtualInterfaceMacAddressException(NovaException): msg_fmt = _("Creation of virtual interface with " "unique mac address failed") class VirtualInterfacePlugException(NovaException): msg_fmt = _("Virtual interface plugin failed") class VirtualInterfaceUnplugException(NovaException): msg_fmt = _("Failed to unplug virtual interface: %(reason)s") class GlanceConnectionFailed(NovaException): msg_fmt = _("Connection to glance host %(server)s failed: " "%(reason)s") class CinderConnectionFailed(NovaException): msg_fmt = _("Connection to cinder host failed: %(reason)s") class UnsupportedCinderAPIVersion(NovaException): msg_fmt = _('Nova does not support Cinder API version %(version)s') class CinderAPIVersionNotAvailable(NovaException): """Used to indicate that a requested Cinder API version, generally a microversion, is not available. 
""" msg_fmt = _('Cinder API version %(version)s is not available.') class Forbidden(NovaException): msg_fmt = _("Forbidden") code = 403 class ForbiddenWithAccelerators(NovaException): msg_fmt = _("Forbidden with instances that have accelerators.") code = 403 class AdminRequired(Forbidden): msg_fmt = _("User does not have admin privileges") class PolicyNotAuthorized(Forbidden): msg_fmt = _("Policy doesn't allow %(action)s to be performed.") class ImageNotActive(NovaException): # NOTE(jruzicka): IncorrectState is used for volumes only in EC2, # but it still seems like the most appropriate option. msg_fmt = _("Image %(image_id)s is not active.") class ImageNotAuthorized(NovaException): msg_fmt = _("Not authorized for image %(image_id)s.") class Invalid(NovaException): msg_fmt = _("Bad Request - Invalid Parameters") code = 400 class InvalidConfiguration(Invalid): msg_fmt = _("Configuration is Invalid.") class InvalidBDM(Invalid): msg_fmt = _("Block Device Mapping is Invalid.") class InvalidBDMSnapshot(InvalidBDM): msg_fmt = _("Block Device Mapping is Invalid: " "failed to get snapshot %(id)s.") class InvalidBDMVolume(InvalidBDM): msg_fmt = _("Block Device Mapping is Invalid: " "failed to get volume %(id)s.") class InvalidBDMImage(InvalidBDM): msg_fmt = _("Block Device Mapping is Invalid: " "failed to get image %(id)s.") class InvalidBDMBootSequence(InvalidBDM): msg_fmt = _("Block Device Mapping is Invalid: " "Boot sequence for the instance " "and image/block device mapping " "combination is not valid.") class InvalidBDMLocalsLimit(InvalidBDM): msg_fmt = _("Block Device Mapping is Invalid: " "You specified more local devices than the " "limit allows") class InvalidBDMEphemeralSize(InvalidBDM): msg_fmt = _("Ephemeral disks requested are larger than " "the instance type allows. If no size is given " "in one block device mapping, flavor ephemeral " "size will be used.") class InvalidBDMSwapSize(InvalidBDM): msg_fmt = _("Swap drive requested is larger than instance type allows.") class InvalidBDMFormat(InvalidBDM): msg_fmt = _("Block Device Mapping is Invalid: " "%(details)s") class InvalidBDMForLegacy(InvalidBDM): msg_fmt = _("Block Device Mapping cannot " "be converted to legacy format. ") class InvalidBDMVolumeNotBootable(InvalidBDM): msg_fmt = _("Block Device %(id)s is not bootable.") class TooManyDiskDevices(InvalidBDM): msg_fmt = _('The maximum allowed number of disk devices (%(maximum)d) to ' 'attach to a single instance has been exceeded.') code = 403 class InvalidBDMDiskBus(InvalidBDM): msg_fmr = _("Block Device Mapping is invalid: The provided disk bus " "%(disk_bus)s is not valid.") class InvalidAttribute(Invalid): msg_fmt = _("Attribute not supported: %(attr)s") class ValidationError(Invalid): msg_fmt = "%(detail)s" class VolumeAttachFailed(Invalid): msg_fmt = _("Volume %(volume_id)s could not be attached. " "Reason: %(reason)s") class VolumeDetachFailed(Invalid): msg_fmt = _("Volume %(volume_id)s could not be detached. " "Reason: %(reason)s") class MultiattachNotSupportedByVirtDriver(NovaException): # This exception indicates the compute hosting the instance does not # support multiattach volumes. This should generally be considered a # 409 HTTPConflict error in the API since we expect all virt drivers to # eventually support multiattach volumes. 
msg_fmt = _("Volume %(volume_id)s has 'multiattach' set, " "which is not supported for this instance.") code = 409 class MultiattachNotSupportedOldMicroversion(Invalid): msg_fmt = _('Multiattach volumes are only supported starting with ' 'compute API version 2.60.') class MultiattachToShelvedNotSupported(Invalid): msg_fmt = _("Attaching multiattach volumes is not supported for " "shelved-offloaded instances.") class MultiattachSwapVolumeNotSupported(Invalid): msg_fmt = _('Swapping multi-attach volumes with more than one read/write ' 'attachment is not supported.') class VolumeNotCreated(NovaException): msg_fmt = _("Volume %(volume_id)s did not finish being created" " even after we waited %(seconds)s seconds or %(attempts)s" " attempts. And its status is %(volume_status)s.") class ExtendVolumeNotSupported(Invalid): msg_fmt = _("Volume size extension is not supported by the hypervisor.") class VolumeEncryptionNotSupported(Invalid): msg_fmt = _("Volume encryption is not supported for %(volume_type)s " "volume %(volume_id)s") class VolumeTaggedAttachNotSupported(Invalid): msg_fmt = _("Tagged volume attachment is not supported for this server " "instance.") class VolumeTaggedAttachToShelvedNotSupported(VolumeTaggedAttachNotSupported): msg_fmt = _("Tagged volume attachment is not supported for " "shelved-offloaded instances.") class NetworkInterfaceTaggedAttachNotSupported(Invalid): msg_fmt = _("Tagged network interface attachment is not supported for " "this server instance.") class InvalidKeypair(Invalid): msg_fmt = _("Keypair data is invalid: %(reason)s") class InvalidRequest(Invalid): msg_fmt = _("The request is invalid.") class InvalidInput(Invalid): msg_fmt = _("Invalid input received: %(reason)s") class InvalidVolume(Invalid): msg_fmt = _("Invalid volume: %(reason)s") class InvalidVolumeAccessMode(Invalid): msg_fmt = _("Invalid volume access mode: %(access_mode)s") class StaleVolumeMount(InvalidVolume): msg_fmt = _("The volume mount at %(mount_path)s is unusable.") class InvalidMetadata(Invalid): msg_fmt = _("Invalid metadata: %(reason)s") class InvalidMetadataSize(Invalid): msg_fmt = _("Invalid metadata size: %(reason)s") class InvalidPortRange(Invalid): msg_fmt = _("Invalid port range %(from_port)s:%(to_port)s. %(msg)s") class InvalidIpProtocol(Invalid): msg_fmt = _("Invalid IP protocol %(protocol)s.") class InvalidContentType(Invalid): msg_fmt = _("Invalid content type %(content_type)s.") class InvalidAPIVersionString(Invalid): msg_fmt = _("API Version String %(version)s is of invalid format. Must " "be of format MajorNum.MinorNum.") class VersionNotFoundForAPIMethod(Invalid): msg_fmt = _("API version %(version)s is not supported on this method.") class InvalidGlobalAPIVersion(Invalid): msg_fmt = _("Version %(req_ver)s is not supported by the API. Minimum " "is %(min_ver)s and maximum is %(max_ver)s.") class ApiVersionsIntersect(Invalid): msg_fmt = _("Version of %(name)s %(min_ver)s %(max_ver)s intersects " "with another versions.") # Cannot be templated as the error syntax varies. # msg needs to be constructed when raised. class InvalidParameterValue(Invalid): msg_fmt = "%(err)s" class InvalidAggregateAction(Invalid): msg_fmt = _("Unacceptable parameters.") code = 400 class InvalidAggregateActionAdd(InvalidAggregateAction): msg_fmt = _("Cannot add host to aggregate " "%(aggregate_id)s. Reason: %(reason)s.") class InvalidAggregateActionDelete(InvalidAggregateAction): msg_fmt = _("Cannot remove host from aggregate " "%(aggregate_id)s. 
Reason: %(reason)s.") class InvalidAggregateActionUpdate(InvalidAggregateAction): msg_fmt = _("Cannot update aggregate " "%(aggregate_id)s. Reason: %(reason)s.") class InvalidAggregateActionUpdateMeta(InvalidAggregateAction): msg_fmt = _("Cannot update metadata of aggregate " "%(aggregate_id)s. Reason: %(reason)s.") class InvalidSortKey(Invalid): msg_fmt = _("Sort key supplied was not valid.") class InvalidStrTime(Invalid): msg_fmt = _("Invalid datetime string: %(reason)s") class InvalidNUMANodesNumber(Invalid): msg_fmt = _("The property 'numa_nodes' cannot be '%(nodes)s'. " "It must be a number greater than 0") class InvalidName(Invalid): msg_fmt = _("An invalid 'name' value was provided. " "The name must be: %(reason)s") class InstanceInvalidState(Invalid): msg_fmt = _("Instance %(instance_uuid)s in %(attr)s %(state)s. Cannot " "%(method)s while the instance is in this state.") class InstanceNotRunning(Invalid): msg_fmt = _("Instance %(instance_id)s is not running.") class InstanceNotInRescueMode(Invalid): msg_fmt = _("Instance %(instance_id)s is not in rescue mode") class InstanceNotRescuable(Invalid): msg_fmt = _("Instance %(instance_id)s cannot be rescued: %(reason)s") class InstanceNotReady(Invalid): msg_fmt = _("Instance %(instance_id)s is not ready") class InstanceSuspendFailure(Invalid): msg_fmt = _("Failed to suspend instance: %(reason)s") class InstanceResumeFailure(Invalid): msg_fmt = _("Failed to resume instance: %(reason)s") class InstancePowerOnFailure(Invalid): msg_fmt = _("Failed to power on instance: %(reason)s") class InstancePowerOffFailure(Invalid): msg_fmt = _("Failed to power off instance: %(reason)s") class InstanceRebootFailure(Invalid): msg_fmt = _("Failed to reboot instance: %(reason)s") class InstanceTerminationFailure(Invalid): msg_fmt = _("Failed to terminate instance: %(reason)s") class InstanceDeployFailure(Invalid): msg_fmt = _("Failed to deploy instance: %(reason)s") class MultiplePortsNotApplicable(Invalid): msg_fmt = _("Failed to launch instances: %(reason)s") class InvalidFixedIpAndMaxCountRequest(Invalid): msg_fmt = _("Failed to launch instances: %(reason)s") class ServiceUnavailable(Invalid): msg_fmt = _("Service is unavailable at this time.") class ServiceNotUnique(Invalid): msg_fmt = _("More than one possible service found.") class ComputeResourcesUnavailable(ServiceUnavailable): msg_fmt = _("Insufficient compute resources: %(reason)s.") class HypervisorUnavailable(NovaException): msg_fmt = _("Connection to the hypervisor is broken on host") class ComputeServiceUnavailable(ServiceUnavailable): msg_fmt = _("Compute service of %(host)s is unavailable at this time.") class ComputeServiceInUse(NovaException): msg_fmt = _("Compute service of %(host)s is still in use.") class UnableToMigrateToSelf(Invalid): msg_fmt = _("Unable to migrate instance (%(instance_id)s) " "to current host (%(host)s).") class OperationNotSupportedForSEV(NovaException): msg_fmt = _("Operation '%(operation)s' not supported for SEV-enabled " "instance (%(instance_uuid)s).") code = 409 class InvalidHypervisorType(Invalid): msg_fmt = _("The supplied hypervisor type of is invalid.") class HypervisorTooOld(Invalid): msg_fmt = _("This compute node's hypervisor is older than the minimum " "supported version: %(version)s.") class DestinationHypervisorTooOld(Invalid): msg_fmt = _("The instance requires a newer hypervisor version than " "has been provided.") class ServiceTooOld(Invalid): msg_fmt = _("This service is older (v%(thisver)i) than the minimum " "(v%(minver)i) version of the rest 
of the deployment. " "Unable to continue.") class TooOldComputeService(Invalid): msg_fmt = _("Current Nova version does not support computes older than " "%(oldest_supported_version)s but the minimum compute service " "level in your %(scope)s is %(min_service_level)d and the " "oldest supported service level is " "%(oldest_supported_service)d.") class DestinationDiskExists(Invalid): msg_fmt = _("The supplied disk path (%(path)s) already exists, " "it is expected not to exist.") class InvalidDevicePath(Invalid): msg_fmt = _("The supplied device path (%(path)s) is invalid.") class DevicePathInUse(Invalid): msg_fmt = _("The supplied device path (%(path)s) is in use.") code = 409 class InvalidCPUInfo(Invalid): msg_fmt = _("Unacceptable CPU info: %(reason)s") class InvalidIpAddressError(Invalid): msg_fmt = _("%(address)s is not a valid IP v4/6 address.") class InvalidDiskFormat(Invalid): msg_fmt = _("Disk format %(disk_format)s is not acceptable") class InvalidDiskInfo(Invalid): msg_fmt = _("Disk info file is invalid: %(reason)s") class DiskInfoReadWriteFail(Invalid): msg_fmt = _("Failed to read or write disk info file: %(reason)s") class ImageUnacceptable(Invalid): msg_fmt = _("Image %(image_id)s is unacceptable: %(reason)s") class ImageBadRequest(Invalid): msg_fmt = _("Request of image %(image_id)s got BadRequest response: " "%(response)s") class ImageQuotaExceeded(NovaException): msg_fmt = _("Quota exceeded or out of space for image %(image_id)s " "in the image service.") class InstanceUnacceptable(Invalid): msg_fmt = _("Instance %(instance_id)s is unacceptable: %(reason)s") class InvalidUUID(Invalid): msg_fmt = _("Expected a uuid but received %(uuid)s.") class InvalidID(Invalid): msg_fmt = _("Invalid ID received %(id)s.") class ConstraintNotMet(NovaException): msg_fmt = _("Constraint not met.") code = 412 class NotFound(NovaException): msg_fmt = _("Resource could not be found.") code = 404 class AgentBuildNotFound(NotFound): msg_fmt = _("No agent-build associated with id %(id)s.") class AgentBuildExists(NovaException): msg_fmt = _("Agent-build with hypervisor %(hypervisor)s os %(os)s " "architecture %(architecture)s exists.") class VolumeAttachmentNotFound(NotFound): msg_fmt = _("Volume attachment %(attachment_id)s could not be found.") class VolumeNotFound(NotFound): msg_fmt = _("Volume %(volume_id)s could not be found.") class VolumeTypeNotFound(NotFound): msg_fmt = _("Volume type %(id_or_name)s could not be found.") class UndefinedRootBDM(NovaException): msg_fmt = _("Undefined Block Device Mapping root: BlockDeviceMappingList " "contains Block Device Mappings from multiple instances.") class BDMNotFound(NotFound): msg_fmt = _("No Block Device Mapping with id %(id)s.") class VolumeBDMNotFound(NotFound): msg_fmt = _("No volume Block Device Mapping with id %(volume_id)s.") class VolumeBDMIsMultiAttach(Invalid): msg_fmt = _("Block Device Mapping %(volume_id)s is a multi-attach volume" " and is not valid for this operation.") class VolumeBDMPathNotFound(VolumeBDMNotFound): msg_fmt = _("No volume Block Device Mapping at path: %(path)s") class DeviceDetachFailed(NovaException): msg_fmt = _("Device detach failed for %(device)s: %(reason)s") class DeviceNotFound(NotFound): msg_fmt = _("Device '%(device)s' not found.") class SnapshotNotFound(NotFound): msg_fmt = _("Snapshot %(snapshot_id)s could not be found.") class DiskNotFound(NotFound): msg_fmt = _("No disk at %(location)s") class VolumeDriverNotFound(NotFound): msg_fmt = _("Could not find a handler for %(driver_type)s volume.") class 
InvalidImageRef(Invalid): msg_fmt = _("Invalid image href %(image_href)s.") class AutoDiskConfigDisabledByImage(Invalid): msg_fmt = _("Requested image %(image)s " "has automatic disk resize disabled.") class ImageNotFound(NotFound): msg_fmt = _("Image %(image_id)s could not be found.") class ImageDeleteConflict(NovaException): msg_fmt = _("Conflict deleting image. Reason: %(reason)s.") class ImageHandlerUnsupported(NovaException): msg_fmt = _("Error: unsupported image handler %(image_handler)s.") class PreserveEphemeralNotSupported(Invalid): msg_fmt = _("The current driver does not support " "preserving ephemeral partitions.") class StorageRepositoryNotFound(NotFound): msg_fmt = _("Cannot find SR to read/write VDI.") class InstanceMappingNotFound(NotFound): msg_fmt = _("Instance %(uuid)s has no mapping to a cell.") class InvalidCidr(Invalid): msg_fmt = _("%(cidr)s is not a valid IP network.") class NetworkNotFound(NotFound): msg_fmt = _("Network %(network_id)s could not be found.") class PortNotFound(NotFound): msg_fmt = _("Port id %(port_id)s could not be found.") class NetworkNotFoundForBridge(NetworkNotFound): msg_fmt = _("Network could not be found for bridge %(bridge)s") class NetworkNotFoundForInstance(NetworkNotFound): msg_fmt = _("Network could not be found for instance %(instance_id)s.") class NetworkAmbiguous(Invalid): msg_fmt = _("More than one possible network found. Specify " "network ID(s) to select which one(s) to connect to.") class UnableToAutoAllocateNetwork(Invalid): msg_fmt = _('Unable to automatically allocate a network for project ' '%(project_id)s') class NetworkRequiresSubnet(Invalid): msg_fmt = _("Network %(network_uuid)s requires a subnet in order to boot" " instances on.") class ExternalNetworkAttachForbidden(Forbidden): msg_fmt = _("It is not allowed to create an interface on " "external network %(network_uuid)s") class NetworkMissingPhysicalNetwork(NovaException): msg_fmt = _("Physical network is missing for network %(network_uuid)s") class VifDetailsMissingVhostuserSockPath(Invalid): msg_fmt = _("vhostuser_sock_path not present in vif_details" " for vif %(vif_id)s") class VifDetailsMissingMacvtapParameters(Invalid): msg_fmt = _("Parameters %(missing_params)s not present in" " vif_details for vif %(vif_id)s. Check your Neutron" " configuration to validate that the macvtap parameters are" " correct.") class DatastoreNotFound(NotFound): msg_fmt = _("Could not find the datastore reference(s) which the VM uses.") class PortInUse(Invalid): msg_fmt = _("Port %(port_id)s is still in use.") class PortRequiresFixedIP(Invalid): msg_fmt = _("Port %(port_id)s requires a FixedIP in order to be used.") class PortNotUsable(Invalid): msg_fmt = _("Port %(port_id)s not usable for instance %(instance)s.") class PortNotUsableDNS(Invalid): msg_fmt = _("Port %(port_id)s not usable for instance %(instance)s. 
" "Value %(value)s assigned to dns_name attribute does not " "match instance's hostname %(hostname)s") class PortBindingFailed(Invalid): msg_fmt = _("Binding failed for port %(port_id)s, please check neutron " "logs for more information.") class PortBindingDeletionFailed(NovaException): msg_fmt = _("Failed to delete binding for port %(port_id)s and host " "%(host)s.") class PortBindingActivationFailed(NovaException): msg_fmt = _("Failed to activate binding for port %(port_id)s and host " "%(host)s.") class PortUpdateFailed(Invalid): msg_fmt = _("Port update failed for port %(port_id)s: %(reason)s") class AttachSRIOVPortNotSupported(Invalid): msg_fmt = _('Attaching SR-IOV port %(port_id)s to server ' '%(instance_uuid)s is not supported. SR-IOV ports must be ' 'specified during server creation.') class FixedIpNotFoundForAddress(NotFound): msg_fmt = _("Fixed IP not found for address %(address)s.") class FixedIpNotFoundForInstance(NotFound): msg_fmt = _("Instance %(instance_uuid)s does not have fixed IP '%(ip)s'.") class FixedIpAlreadyInUse(NovaException): msg_fmt = _("Fixed IP address %(address)s is already in use on instance " "%(instance_uuid)s.") class FixedIpAssociatedWithMultipleInstances(NovaException): msg_fmt = _("More than one instance is associated with fixed IP address " "'%(address)s'.") class FixedIpInvalidOnHost(Invalid): msg_fmt = _("The fixed IP associated with port %(port_id)s is not " "compatible with the host.") class NoMoreFixedIps(NovaException): msg_fmt = _("No fixed IP addresses available for network: %(net)s") class FloatingIpNotFound(NotFound): msg_fmt = _("Floating IP not found for ID %(id)s.") class FloatingIpNotFoundForAddress(FloatingIpNotFound): msg_fmt = _("Floating IP not found for address %(address)s.") class FloatingIpMultipleFoundForAddress(NovaException): msg_fmt = _("Multiple floating IPs are found for address %(address)s.") class FloatingIpPoolNotFound(NotFound): msg_fmt = _("Floating IP pool not found.") safe = True class NoMoreFloatingIps(FloatingIpNotFound): msg_fmt = _("Zero floating IPs available.") safe = True class FloatingIpAssociated(NovaException): msg_fmt = _("Floating IP %(address)s is associated.") class NoFloatingIpInterface(NotFound): msg_fmt = _("Interface %(interface)s not found.") class FloatingIpAssociateFailed(NovaException): msg_fmt = _("Floating IP %(address)s association has failed.") class FloatingIpBadRequest(Invalid): msg_fmt = _("The floating IP request failed with a BadRequest") class KeypairNotFound(NotFound): msg_fmt = _("Keypair %(name)s not found for user %(user_id)s") class ServiceNotFound(NotFound): msg_fmt = _("Service %(service_id)s could not be found.") class ConfGroupForServiceTypeNotFound(ServiceNotFound): msg_fmt = _("No conf group name could be found for service type " "%(stype)s.") class ServiceBinaryExists(NovaException): msg_fmt = _("Service with host %(host)s binary %(binary)s exists.") class ServiceTopicExists(NovaException): msg_fmt = _("Service with host %(host)s topic %(topic)s exists.") class HostNotFound(NotFound): msg_fmt = _("Host %(host)s could not be found.") class ComputeHostNotFound(HostNotFound): msg_fmt = _("Compute host %(host)s could not be found.") class HostBinaryNotFound(NotFound): msg_fmt = _("Could not find binary %(binary)s on host %(host)s.") class InvalidQuotaValue(Invalid): msg_fmt = _("Change would make usage less than 0 for the following " "resources: %(unders)s") class InvalidQuotaMethodUsage(Invalid): msg_fmt = _("Wrong quota method %(method)s used on resource %(res)s") class 
QuotaNotFound(NotFound): msg_fmt = _("Quota could not be found") class QuotaExists(NovaException): msg_fmt = _("Quota exists for project %(project_id)s, " "resource %(resource)s") class QuotaResourceUnknown(QuotaNotFound): msg_fmt = _("Unknown quota resources %(unknown)s.") class ProjectUserQuotaNotFound(QuotaNotFound): msg_fmt = _("Quota for user %(user_id)s in project %(project_id)s " "could not be found.") class ProjectQuotaNotFound(QuotaNotFound): msg_fmt = _("Quota for project %(project_id)s could not be found.") class QuotaClassNotFound(QuotaNotFound): msg_fmt = _("Quota class %(class_name)s could not be found.") class QuotaClassExists(NovaException): msg_fmt = _("Quota class %(class_name)s exists for resource %(resource)s") class OverQuota(NovaException): msg_fmt = _("Quota exceeded for resources: %(overs)s") class SecurityGroupNotFound(NotFound): msg_fmt = _("Security group %(security_group_id)s not found.") class SecurityGroupNotFoundForProject(SecurityGroupNotFound): msg_fmt = _("Security group %(security_group_id)s not found " "for project %(project_id)s.") class SecurityGroupExists(Invalid): msg_fmt = _("Security group %(security_group_name)s already exists " "for project %(project_id)s.") class SecurityGroupCannotBeApplied(Invalid): msg_fmt = _("Network requires port_security_enabled and subnet associated" " in order to apply security groups.") class NoUniqueMatch(NovaException): msg_fmt = _("No Unique Match Found.") code = 409 class NoActiveMigrationForInstance(NotFound): msg_fmt = _("Active live migration for instance %(instance_id)s not found") class MigrationNotFound(NotFound): msg_fmt = _("Migration %(migration_id)s could not be found.") class MigrationNotFoundByStatus(MigrationNotFound): msg_fmt = _("Migration not found for instance %(instance_id)s " "with status %(status)s.") class MigrationNotFoundForInstance(MigrationNotFound): msg_fmt = _("Migration %(migration_id)s not found for instance " "%(instance_id)s") class InvalidMigrationState(Invalid): msg_fmt = _("Migration %(migration_id)s state of instance " "%(instance_uuid)s is %(state)s. Cannot %(method)s while the " "migration is in this state.") class ConsoleLogOutputException(NovaException): msg_fmt = _("Console log output could not be retrieved for instance " "%(instance_id)s. 
Reason: %(reason)s") class ConsoleNotAvailable(NotFound): msg_fmt = _("Guest does not have a console available.") class ConsoleTypeInvalid(Invalid): msg_fmt = _("Invalid console type %(console_type)s") class ConsoleTypeUnavailable(Invalid): msg_fmt = _("Unavailable console type %(console_type)s.") class ConsolePortRangeExhausted(NovaException): msg_fmt = _("The console port range %(min_port)d-%(max_port)d is " "exhausted.") class FlavorNotFound(NotFound): msg_fmt = _("Flavor %(flavor_id)s could not be found.") class FlavorNotFoundByName(FlavorNotFound): msg_fmt = _("Flavor with name %(flavor_name)s could not be found.") class FlavorAccessNotFound(NotFound): msg_fmt = _("Flavor access not found for %(flavor_id)s / " "%(project_id)s combination.") class FlavorExtraSpecUpdateCreateFailed(NovaException): msg_fmt = _("Flavor %(id)s extra spec cannot be updated or created " "after %(retries)d retries.") class CellTimeout(NotFound): msg_fmt = _("Timeout waiting for response from cell") class SchedulerHostFilterNotFound(NotFound): msg_fmt = _("Scheduler Host Filter %(filter_name)s could not be found.") class FlavorExtraSpecsNotFound(NotFound): msg_fmt = _("Flavor %(flavor_id)s has no extra specs with " "key %(extra_specs_key)s.") class ComputeHostMetricNotFound(NotFound): msg_fmt = _("Metric %(name)s could not be found on the compute " "host node %(host)s.%(node)s.") class FileNotFound(NotFound): msg_fmt = _("File %(file_path)s could not be found.") class ClassNotFound(NotFound): msg_fmt = _("Class %(class_name)s could not be found: %(exception)s") class InstanceTagNotFound(NotFound): msg_fmt = _("Instance %(instance_id)s has no tag '%(tag)s'") class KeyPairExists(NovaException): msg_fmt = _("Key pair '%(key_name)s' already exists.") class InstanceExists(NovaException): msg_fmt = _("Instance %(name)s already exists.") class FlavorExists(NovaException): msg_fmt = _("Flavor with name %(name)s already exists.") class FlavorIdExists(NovaException): msg_fmt = _("Flavor with ID %(flavor_id)s already exists.") class FlavorAccessExists(NovaException): msg_fmt = _("Flavor access already exists for flavor %(flavor_id)s " "and project %(project_id)s combination.") class InvalidSharedStorage(NovaException): msg_fmt = _("%(path)s is not on shared storage: %(reason)s") class InvalidLocalStorage(NovaException): msg_fmt = _("%(path)s is not on local storage: %(reason)s") class StorageError(NovaException): msg_fmt = _("Storage error: %(reason)s") class MigrationError(NovaException): msg_fmt = _("Migration error: %(reason)s") class MigrationPreCheckError(MigrationError): msg_fmt = _("Migration pre-check error: %(reason)s") class MigrationSchedulerRPCError(MigrationError): msg_fmt = _("Migration select destinations error: %(reason)s") class MalformedRequestBody(NovaException): msg_fmt = _("Malformed message body: %(reason)s") # NOTE(johannes): NotFound should only be used when a 404 error is # appropriate to be returned class ConfigNotFound(NovaException): msg_fmt = _("Could not find config at %(path)s") class PasteAppNotFound(NovaException): msg_fmt = _("Could not load paste app '%(name)s' from %(path)s") class CannotResizeToSameFlavor(NovaException): msg_fmt = _("When resizing, instances must change flavor!") class ResizeError(NovaException): msg_fmt = _("Resize error: %(reason)s") class CannotResizeDisk(NovaException): msg_fmt = _("Server disk was unable to be resized because: %(reason)s") class FlavorMemoryTooSmall(NovaException): msg_fmt = _("Flavor's memory is too small for requested image.") class 
FlavorDiskTooSmall(NovaException): msg_fmt = _("The created instance's disk would be too small.") class FlavorDiskSmallerThanImage(FlavorDiskTooSmall): msg_fmt = _("Flavor's disk is too small for requested image. Flavor disk " "is %(flavor_size)i bytes, image is %(image_size)i bytes.") class FlavorDiskSmallerThanMinDisk(FlavorDiskTooSmall): msg_fmt = _("Flavor's disk is smaller than the minimum size specified in " "image metadata. Flavor disk is %(flavor_size)i bytes, " "minimum size is %(image_min_disk)i bytes.") class VolumeSmallerThanMinDisk(FlavorDiskTooSmall): msg_fmt = _("Volume is smaller than the minimum size specified in image " "metadata. Volume size is %(volume_size)i bytes, minimum " "size is %(image_min_disk)i bytes.") class BootFromVolumeRequiredForZeroDiskFlavor(Forbidden): msg_fmt = _("Only volume-backed servers are allowed for flavors with " "zero disk.") class InsufficientFreeMemory(NovaException): msg_fmt = _("Insufficient free memory on compute node to start %(uuid)s.") class NoValidHost(NovaException): msg_fmt = _("No valid host was found. %(reason)s") class RequestFilterFailed(NovaException): msg_fmt = _("Scheduling failed: %(reason)s") class MaxRetriesExceeded(NoValidHost): msg_fmt = _("Exceeded maximum number of retries. %(reason)s") class QuotaError(NovaException): msg_fmt = _("Quota exceeded: code=%(code)s") # NOTE(cyeoh): 413 should only be used for the ec2 API # The error status code for out of quota for the nova api should be # 403 Forbidden. code = 413 safe = True class TooManyInstances(QuotaError): msg_fmt = _("Quota exceeded for %(overs)s: Requested %(req)s," " but already used %(used)s of %(allowed)s %(overs)s") class FloatingIpLimitExceeded(QuotaError): msg_fmt = _("Maximum number of floating IPs exceeded") class MetadataLimitExceeded(QuotaError): msg_fmt = _("Maximum number of metadata items exceeds %(allowed)d") class OnsetFileLimitExceeded(QuotaError): msg_fmt = _("Personality file limit exceeded") class OnsetFilePathLimitExceeded(OnsetFileLimitExceeded): msg_fmt = _("Personality file path exceeds maximum %(allowed)s") class OnsetFileContentLimitExceeded(OnsetFileLimitExceeded): msg_fmt = _("Personality file content exceeds maximum %(allowed)s") class KeypairLimitExceeded(QuotaError): msg_fmt = _("Maximum number of key pairs exceeded") class SecurityGroupLimitExceeded(QuotaError): msg_fmt = _("Maximum number of security groups or rules exceeded") class PortLimitExceeded(QuotaError): msg_fmt = _("Maximum number of ports exceeded") class AggregateError(NovaException): msg_fmt = _("Aggregate %(aggregate_id)s: action '%(action)s' " "caused an error: %(reason)s.") class AggregateNotFound(NotFound): msg_fmt = _("Aggregate %(aggregate_id)s could not be found.") class AggregateNameExists(NovaException): msg_fmt = _("Aggregate %(aggregate_name)s already exists.") class AggregateHostNotFound(NotFound): msg_fmt = _("Aggregate %(aggregate_id)s has no host %(host)s.") class AggregateMetadataNotFound(NotFound): msg_fmt = _("Aggregate %(aggregate_id)s has no metadata with " "key %(metadata_key)s.") class AggregateHostExists(NovaException): msg_fmt = _("Aggregate %(aggregate_id)s already has host %(host)s.") class InstancePasswordSetFailed(NovaException): msg_fmt = _("Failed to set admin password on %(instance)s " "because %(reason)s") safe = True class InstanceNotFound(NotFound): msg_fmt = _("Instance %(instance_id)s could not be found.") class InstanceInfoCacheNotFound(NotFound): msg_fmt = _("Info cache for instance %(instance_uuid)s could not be " "found.") class 
MarkerNotFound(NotFound): msg_fmt = _("Marker %(marker)s could not be found.") class CouldNotFetchImage(NovaException): msg_fmt = _("Could not fetch image %(image_id)s") class CouldNotUploadImage(NovaException): msg_fmt = _("Could not upload image %(image_id)s") class TaskAlreadyRunning(NovaException): msg_fmt = _("Task %(task_name)s is already running on host %(host)s") class TaskNotRunning(NovaException): msg_fmt = _("Task %(task_name)s is not running on host %(host)s") class InstanceIsLocked(InstanceInvalidState): msg_fmt = _("Instance %(instance_uuid)s is locked") class ConfigDriveInvalidValue(Invalid): msg_fmt = _("Invalid value for Config Drive option: %(option)s") class ConfigDriveUnsupportedFormat(Invalid): msg_fmt = _("Config drive format '%(format)s' is not supported.") class ConfigDriveMountFailed(NovaException): msg_fmt = _("Could not mount vfat config drive. %(operation)s failed. " "Error: %(error)s") class ConfigDriveUnknownFormat(NovaException): msg_fmt = _("Unknown config drive format %(format)s. Select one of " "iso9660 or vfat.") class ConfigDriveNotFound(NotFound): msg_fmt = _("Instance %(instance_uuid)s requires config drive, but it " "does not exist.") class InterfaceAttachFailed(NovaException): msg_fmt = _("Failed to attach network adapter device to " "%(instance_uuid)s") class InterfaceAttachFailedNoNetwork(Invalid): msg_fmt = _("No specific network was requested and none are available " "for project '%(project_id)s'.") class InterfaceDetachFailed(Invalid): msg_fmt = _("Failed to detach network adapter device from " "%(instance_uuid)s") class InstanceUserDataMalformed(NovaException): msg_fmt = _("User data needs to be valid base 64.") class InstanceUpdateConflict(NovaException): msg_fmt = _("Conflict updating instance %(instance_uuid)s. " "Expected: %(expected)s. 
Actual: %(actual)s") class UnknownInstanceUpdateConflict(InstanceUpdateConflict): msg_fmt = _("Conflict updating instance %(instance_uuid)s, but we were " "unable to determine the cause") class UnexpectedTaskStateError(InstanceUpdateConflict): pass class UnexpectedDeletingTaskStateError(UnexpectedTaskStateError): pass class InstanceActionNotFound(NovaException): msg_fmt = _("Action for request_id %(request_id)s on instance" " %(instance_uuid)s not found") class InstanceActionEventNotFound(NovaException): msg_fmt = _("Event %(event)s not found for action id %(action_id)s") class InstanceEvacuateNotSupported(Invalid): msg_fmt = _('Instance evacuate is not supported.') class DBNotAllowed(NovaException): msg_fmt = _('%(binary)s attempted direct database access which is ' 'not allowed by policy') class UnsupportedVirtType(Invalid): msg_fmt = _("Virtualization type '%(virt)s' is not supported by " "this compute driver") class UnsupportedHardware(Invalid): msg_fmt = _("Requested hardware '%(model)s' is not supported by " "the '%(virt)s' virt driver") class UnsupportedRescueBus(Invalid): msg_fmt = _("Requested rescue bus '%(bus)s' is not supported by " "the '%(virt)s' virt driver") class UnsupportedRescueDevice(Invalid): msg_fmt = _("Requested rescue device '%(device)s' is not supported") class UnsupportedRescueImage(Invalid): msg_fmt = _("Requested rescue image '%(image)s' is not supported") class Base64Exception(NovaException): msg_fmt = _("Invalid Base 64 data for file %(path)s") class BuildAbortException(NovaException): msg_fmt = _("Build of instance %(instance_uuid)s aborted: %(reason)s") class RescheduledException(NovaException): msg_fmt = _("Build of instance %(instance_uuid)s was re-scheduled: " "%(reason)s") class ShadowTableExists(NovaException): msg_fmt = _("Shadow table with name %(name)s already exists.") class InstanceFaultRollback(NovaException): def __init__(self, inner_exception=None): message = _("Instance rollback performed due to: %s") self.inner_exception = inner_exception super(InstanceFaultRollback, self).__init__(message % inner_exception) class OrphanedObjectError(NovaException): msg_fmt = _('Cannot call %(method)s on orphaned %(objtype)s object') class ObjectActionError(NovaException): msg_fmt = _('Object action %(action)s failed because: %(reason)s') class AgentError(NovaException): msg_fmt = _('Error during following call to agent: %(method)s') class AgentTimeout(AgentError): msg_fmt = _('Unable to contact guest agent. 
' 'The following call timed out: %(method)s') class AgentNotImplemented(AgentError): msg_fmt = _('Agent does not support the call: %(method)s') class InstanceGroupNotFound(NotFound): msg_fmt = _("Instance group %(group_uuid)s could not be found.") class InstanceGroupIdExists(NovaException): msg_fmt = _("Instance group %(group_uuid)s already exists.") class InstanceGroupSaveException(NovaException): msg_fmt = _("%(field)s should not be part of the updates.") class ResourceMonitorError(NovaException): msg_fmt = _("Error when creating resource monitor: %(monitor)s") class PciDeviceWrongAddressFormat(NovaException): msg_fmt = _("The PCI address %(address)s has an incorrect format.") class PciDeviceInvalidDeviceName(NovaException): msg_fmt = _("Invalid PCI Whitelist: " "The PCI whitelist can specify devname or address," " but not both") class PciDeviceNotFoundById(NotFound): msg_fmt = _("PCI device %(id)s not found") class PciDeviceNotFound(NotFound): msg_fmt = _("PCI Device %(node_id)s:%(address)s not found.") class PciDeviceInvalidStatus(Invalid): msg_fmt = _( "PCI device %(compute_node_id)s:%(address)s is %(status)s " "instead of %(hopestatus)s") class PciDeviceVFInvalidStatus(Invalid): msg_fmt = _( "Not all Virtual Functions of PF %(compute_node_id)s:%(address)s " "are free.") class PciDevicePFInvalidStatus(Invalid): msg_fmt = _( "Physical Function %(compute_node_id)s:%(address)s, related to VF" " %(compute_node_id)s:%(vf_address)s is %(status)s " "instead of %(hopestatus)s") class PciDeviceInvalidOwner(Invalid): msg_fmt = _( "PCI device %(compute_node_id)s:%(address)s is owned by %(owner)s " "instead of %(hopeowner)s") class PciDeviceRequestFailed(NovaException): msg_fmt = _( "PCI device request %(requests)s failed") class PciDevicePoolEmpty(NovaException): msg_fmt = _( "Attempt to consume PCI device %(compute_node_id)s:%(address)s " "from empty pool") class PciInvalidAlias(Invalid): msg_fmt = _("Invalid PCI alias definition: %(reason)s") class PciRequestAliasNotDefined(NovaException): msg_fmt = _("PCI alias %(alias)s is not defined") class PciConfigInvalidWhitelist(Invalid): msg_fmt = _("Invalid PCI devices Whitelist config: %(reason)s") class PciRequestFromVIFNotFound(NotFound): msg_fmt = _("Failed to locate PCI request associated with the given VIF " "PCI address: %(pci_slot)s on compute node: %(node_id)s") # Cannot be templated, msg needs to be constructed when raised. class InternalError(NovaException): """Generic hypervisor errors. Consider subclassing this to provide more specific exceptions. 
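Since ``msg_fmt`` is simply ``"%(err)s"``, the caller composes the message at
raise time; an illustrative (not prescriptive) call shape, where ``reason`` is
whatever detail the caller happens to have::

    raise InternalError(err='unexpected hypervisor failure: %s' % reason)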
""" msg_fmt = "%(err)s" class PciDevicePrepareFailed(NovaException): msg_fmt = _("Failed to prepare PCI device %(id)s for instance " "%(instance_uuid)s: %(reason)s") class PciDeviceDetachFailed(NovaException): msg_fmt = _("Failed to detach PCI device %(dev)s: %(reason)s") class PciDeviceUnsupportedHypervisor(NovaException): msg_fmt = _("%(type)s hypervisor does not support PCI devices") class KeyManagerError(NovaException): msg_fmt = _("Key manager error: %(reason)s") class VolumesNotRemoved(Invalid): msg_fmt = _("Failed to remove volume(s): (%(reason)s)") class VolumeRebaseFailed(NovaException): msg_fmt = _("Volume rebase failed: %(reason)s") class InvalidVideoMode(Invalid): msg_fmt = _("Provided video model (%(model)s) is not supported.") class RngDeviceNotExist(Invalid): msg_fmt = _("The provided RNG device path: (%(path)s) is not " "present on the host.") class RequestedVRamTooHigh(NovaException): msg_fmt = _("The requested amount of video memory %(req_vram)d is higher " "than the maximum allowed by flavor %(max_vram)d.") class SecurityProxyNegotiationFailed(NovaException): msg_fmt = _("Failed to negotiate security type with server: %(reason)s") class RFBAuthHandshakeFailed(NovaException): msg_fmt = _("Failed to complete auth handshake: %(reason)s") class RFBAuthNoAvailableScheme(NovaException): msg_fmt = _("No matching auth scheme: allowed types: '%(allowed_types)s', " "desired types: '%(desired_types)s'") class InvalidWatchdogAction(Invalid): msg_fmt = _("Provided watchdog action (%(action)s) is not supported.") class LiveMigrationNotSubmitted(NovaException): msg_fmt = _("Failed to submit live migration %(migration_uuid)s for " "instance %(instance_uuid)s for processing.") class SelectionObjectsWithOldRPCVersionNotSupported(NovaException): msg_fmt = _("Requests for Selection objects with alternates are not " "supported in select_destinations() before RPC version 4.5; " "version %(version)s requested.") class LiveMigrationURINotAvailable(NovaException): msg_fmt = _('No live migration URI configured and no default available ' 'for "%(virt_type)s" hypervisor virtualization type.') class UnshelveException(NovaException): msg_fmt = _("Error during unshelve instance %(instance_id)s: %(reason)s") class MismatchVolumeAZException(Invalid): msg_fmt = _("The availability zone between the server and its attached " "volumes do not match: %(reason)s.") code = 409 class UnshelveInstanceInvalidState(InstanceInvalidState): msg_fmt = _('Specifying an availability zone when unshelving server ' '%(instance_uuid)s with status "%(state)s" is not supported. 
' 'The server status must be SHELVED_OFFLOADED.') code = 409 class ImageVCPULimitsRangeExceeded(Invalid): msg_fmt = _('Image vCPU topology limits (sockets=%(image_sockets)d, ' 'cores=%(image_cores)d, threads=%(image_threads)d) exceeds ' 'the limits of the flavor (sockets=%(flavor_sockets)d, ' 'cores=%(flavor_cores)d, threads=%(flavor_threads)d)') class ImageVCPUTopologyRangeExceeded(Invalid): msg_fmt = _('Image vCPU topology (sockets=%(image_sockets)d, ' 'cores=%(image_cores)d, threads=%(image_threads)d) exceeds ' 'the limits of the flavor or image (sockets=%(max_sockets)d, ' 'cores=%(max_cores)d, threads=%(max_threads)d)') class ImageVCPULimitsRangeImpossible(Invalid): msg_fmt = _("Requested vCPU limits %(sockets)d:%(cores)d:%(threads)d " "are impossible to satisfy for vcpus count %(vcpus)d") class InvalidArchitectureName(Invalid): msg_fmt = _("Architecture name '%(arch)s' is not recognised") class ImageNUMATopologyIncomplete(Invalid): msg_fmt = _("CPU and memory allocation must be provided for all " "NUMA nodes") class ImageNUMATopologyForbidden(Forbidden): msg_fmt = _("Image property '%(name)s' is not permitted to override " "NUMA configuration set against the flavor") class ImageNUMATopologyRebuildConflict(Invalid): msg_fmt = _( "An instance's NUMA topology cannot be changed as part of a rebuild. " "The image provided is invalid for this instance.") class ImagePCINUMAPolicyForbidden(Forbidden): msg_fmt = _("Image property 'hw_pci_numa_affinity_policy' is not " "permitted to override the 'hw:pci_numa_affinity_policy' " "flavor extra spec.") class ImageNUMATopologyAsymmetric(Invalid): msg_fmt = _("Instance CPUs and/or memory cannot be evenly distributed " "across instance NUMA nodes. Explicit assignment of CPUs " "and memory to nodes is required") class ImageNUMATopologyCPUOutOfRange(Invalid): msg_fmt = _("CPU number %(cpunum)d is larger than max %(cpumax)d") class ImageNUMATopologyCPUDuplicates(Invalid): msg_fmt = _("CPU number %(cpunum)d is assigned to two nodes") class ImageNUMATopologyCPUsUnassigned(Invalid): msg_fmt = _("CPU number %(cpuset)s is not assigned to any node") class ImageNUMATopologyMemoryOutOfRange(Invalid): msg_fmt = _("%(memsize)d MB of memory assigned, but expected " "%(memtotal)d MB") class InvalidHostname(Invalid): msg_fmt = _("Invalid characters in hostname '%(hostname)s'") class NumaTopologyNotFound(NotFound): msg_fmt = _("Instance %(instance_uuid)s does not specify a NUMA topology") class MigrationContextNotFound(NotFound): msg_fmt = _("Instance %(instance_uuid)s does not specify a migration " "context.") class SocketPortRangeExhaustedException(NovaException): msg_fmt = _("Not able to acquire a free port for %(host)s") class SocketPortInUseException(NovaException): msg_fmt = _("Not able to bind %(host)s:%(port)d, %(error)s") class ImageSerialPortNumberInvalid(Invalid): msg_fmt = _("Number of serial ports specified in flavor is invalid: " "expected an integer, got '%(num_ports)s'") class ImageSerialPortNumberExceedFlavorValue(Invalid): msg_fmt = _("Forbidden to exceed flavor value of number of serial " "ports passed in image meta.") class SerialPortNumberLimitExceeded(Invalid): msg_fmt = _("Maximum number of serial port exceeds %(allowed)d " "for %(virt_type)s") class InvalidImageConfigDrive(Invalid): msg_fmt = _("Image's config drive option '%(config_drive)s' is invalid") class InvalidHypervisorVirtType(Invalid): msg_fmt = _("Hypervisor virtualization type '%(hv_type)s' is not " "recognised") class InvalidMachineType(Invalid): msg_fmt = _("Machine type 
'%(mtype)s' is not compatible with image " "%(image_name)s (%(image_id)s): %(reason)s") class InvalidVirtualMachineMode(Invalid): msg_fmt = _("Virtual machine mode '%(vmmode)s' is not recognised") class InvalidToken(Invalid): msg_fmt = _("The token '%(token)s' is invalid or has expired") class TokenInUse(Invalid): msg_fmt = _("The generated token is invalid") class InvalidConnectionInfo(Invalid): msg_fmt = _("Invalid Connection Info") class InstanceQuiesceNotSupported(Invalid): msg_fmt = _('Quiescing is not supported in instance %(instance_id)s') class InstanceAgentNotEnabled(Invalid): msg_fmt = _('Guest agent is not enabled for the instance') safe = True class QemuGuestAgentNotEnabled(InstanceAgentNotEnabled): msg_fmt = _('QEMU guest agent is not enabled') class SetAdminPasswdNotSupported(Invalid): msg_fmt = _('Set admin password is not supported') safe = True class MemoryPageSizeInvalid(Invalid): msg_fmt = _("Invalid memory page size '%(pagesize)s'") class MemoryPageSizeForbidden(Invalid): msg_fmt = _("Page size %(pagesize)s forbidden against '%(against)s'") class MemoryPageSizeNotSupported(Invalid): msg_fmt = _("Page size %(pagesize)s is not supported by the host.") class CPUPinningInvalid(Invalid): msg_fmt = _("CPU set to pin %(requested)s must be a subset of " "free CPU set %(available)s") class CPUUnpinningInvalid(Invalid): msg_fmt = _("CPU set to unpin %(requested)s must be a subset of " "pinned CPU set %(available)s") class CPUPinningUnknown(Invalid): msg_fmt = _("CPU set to pin %(requested)s must be a subset of " "known CPU set %(available)s") class CPUUnpinningUnknown(Invalid): msg_fmt = _("CPU set to unpin %(requested)s must be a subset of " "known CPU set %(available)s") class ImageCPUPinningForbidden(Forbidden): msg_fmt = _("Image property 'hw_cpu_policy' is not permitted to override " "CPU pinning policy set against the flavor") class ImageCPUThreadPolicyForbidden(Forbidden): msg_fmt = _("Image property 'hw_cpu_thread_policy' is not permitted to " "override CPU thread pinning policy set against the flavor") class ImagePMUConflict(Forbidden): msg_fmt = _("Image property 'hw_pmu' is not permitted to " "override the PMU policy set in the flavor") class UnsupportedPolicyException(Invalid): msg_fmt = _("ServerGroup policy is not supported: %(reason)s") class CellMappingNotFound(NotFound): msg_fmt = _("Cell %(uuid)s has no mapping.") class NUMATopologyUnsupported(Invalid): msg_fmt = _("Host does not support guests with NUMA topology set") class MemoryPagesUnsupported(Invalid): msg_fmt = _("Host does not support guests with custom memory page sizes") class InvalidImageFormat(Invalid): msg_fmt = _("Invalid image format '%(format)s'") class UnsupportedImageModel(Invalid): msg_fmt = _("Image model '%(image)s' is not supported") class HostMappingNotFound(Invalid): msg_fmt = _("Host '%(name)s' is not mapped to any cell") class HostMappingExists(Invalid): msg_fmt = _("Host '%(name)s' mapping already exists") class RealtimeConfigurationInvalid(Invalid): msg_fmt = _("Cannot set realtime policy in a non dedicated " "cpu pinning policy") class CPUThreadPolicyConfigurationInvalid(Invalid): msg_fmt = _("Cannot set cpu thread pinning policy in a non dedicated " "cpu pinning policy") class RequestSpecNotFound(NotFound): msg_fmt = _("RequestSpec not found for instance %(instance_uuid)s") class UEFINotSupported(Invalid): msg_fmt = _("UEFI is not supported") class TriggerCrashDumpNotSupported(Invalid): msg_fmt = _("Triggering crash dump is not supported") class 
UnsupportedHostCPUControlPolicy(Invalid): msg_fmt = _("Requested CPU control policy not supported by host") class LibguestfsCannotReadKernel(Invalid): msg_fmt = _("Libguestfs does not have permission to read host kernel.") class RealtimeMaskNotFoundOrInvalid(Invalid): msg_fmt = _("Realtime policy needs vCPU(s) mask configured with at least " "1 RT vCPU and 1 ordinary vCPU. See hw:cpu_realtime_mask " "or hw_cpu_realtime_mask") class OsInfoNotFound(NotFound): msg_fmt = _("No configuration information found for operating system " "%(os_name)s") class BuildRequestNotFound(NotFound): msg_fmt = _("BuildRequest not found for instance %(uuid)s") class AttachInterfaceNotSupported(Invalid): msg_fmt = _("Attaching interfaces is not supported for " "instance %(instance_uuid)s.") class AttachInterfaceWithQoSPolicyNotSupported(AttachInterfaceNotSupported): msg_fmt = _("Attaching interfaces with QoS policy is not supported for " "instance %(instance_uuid)s.") class NetworksWithQoSPolicyNotSupported(Invalid): msg_fmt = _("Using networks with QoS policy is not supported for " "instance %(instance_uuid)s. (Network ID is %(network_id)s)") class CreateWithPortResourceRequestOldVersion(Invalid): msg_fmt = _("Creating servers with ports having resource requests, like a " "port with a QoS minimum bandwidth policy, is not supported " "until microversion 2.72.") class InvalidReservedMemoryPagesOption(Invalid): msg_fmt = _("The format of the option 'reserved_huge_pages' is invalid. " "(found '%(conf)s') Please refer to the nova " "config-reference.") # An exception with this name is used on both sides of the placement/ # nova interaction. class ResourceProviderInUse(NovaException): msg_fmt = _("Resource provider has allocations.") class ResourceProviderRetrievalFailed(NovaException): msg_fmt = _("Failed to get resource provider with UUID %(uuid)s") class ResourceProviderAggregateRetrievalFailed(NovaException): msg_fmt = _("Failed to get aggregates for resource provider with UUID" " %(uuid)s") class ResourceProviderTraitRetrievalFailed(NovaException): msg_fmt = _("Failed to get traits for resource provider with UUID" " %(uuid)s") class ResourceProviderCreationFailed(NovaException): msg_fmt = _("Failed to create resource provider %(name)s") class ResourceProviderDeletionFailed(NovaException): msg_fmt = _("Failed to delete resource provider %(uuid)s") class ResourceProviderUpdateFailed(NovaException): msg_fmt = _("Failed to update resource provider via URL %(url)s: " "%(error)s") class ResourceProviderNotFound(NotFound): msg_fmt = _("No such resource provider %(name_or_uuid)s.") class ResourceProviderSyncFailed(NovaException): msg_fmt = _("Failed to synchronize the placement service with resource " "provider information supplied by the compute host.") class PlacementAPIConnectFailure(NovaException): msg_fmt = _("Unable to communicate with the Placement API.") class PlacementAPIConflict(NovaException): """Any 409 error from placement APIs should use (a subclass of) this exception. """ msg_fmt = _("A conflict was encountered attempting to invoke the " "placement API at URL %(url)s: %(error)s") class ResourceProviderUpdateConflict(PlacementAPIConflict): """A 409 caused by generation mismatch from attempting to update an existing provider record or its associated data (aggregates, traits, etc.). 
""" msg_fmt = _("A conflict was encountered attempting to update resource " "provider %(uuid)s (generation %(generation)d): %(error)s") class InvalidResourceClass(Invalid): msg_fmt = _("Resource class '%(resource_class)s' invalid.") class InvalidInventory(Invalid): msg_fmt = _("Inventory for '%(resource_class)s' on " "resource provider '%(resource_provider)s' invalid.") # An exception with this name is used on both sides of the placement/ # nova interaction. class InventoryInUse(InvalidInventory): pass class UsagesRetrievalFailed(NovaException): msg_fmt = _("Failed to retrieve usages for project '%(project_id)s' and " "user '%(user_id)s'.") class UnsupportedPointerModelRequested(Invalid): msg_fmt = _("Pointer model '%(model)s' requested is not supported by " "host.") class NotSupportedWithOption(Invalid): msg_fmt = _("%(operation)s is not supported in conjunction with the " "current %(option)s setting. Please refer to the nova " "config-reference.") class Unauthorized(NovaException): msg_fmt = _("Not authorized.") code = 401 class NeutronAdminCredentialConfigurationInvalid(Invalid): msg_fmt = _("Networking client is experiencing an unauthorized exception.") class InvalidEmulatorThreadsPolicy(Invalid): msg_fmt = _("CPU emulator threads option requested is invalid, " "given: '%(requested)s', available: '%(available)s'.") class InvalidCPUAllocationPolicy(Invalid): msg_fmt = _("CPU policy requested from '%(source)s' is invalid, " "given: '%(requested)s', available: '%(available)s'.") class InvalidCPUThreadAllocationPolicy(Invalid): msg_fmt = _("CPU thread policy requested from '%(source)s' is invalid, " "given: '%(requested)s', available: '%(available)s'.") class BadRequirementEmulatorThreadsPolicy(Invalid): msg_fmt = _("An isolated CPU emulator threads option requires a dedicated " "CPU policy option.") class InvalidNetworkNUMAAffinity(Invalid): msg_fmt = _("Invalid NUMA network affinity configured: %(reason)s") class InvalidPCINUMAAffinity(Invalid): msg_fmt = _("Invalid PCI NUMA affinity configured: %(policy)s") class PowerVMAPIFailed(NovaException): msg_fmt = _("PowerVM API failed to complete for instance=%(inst_name)s. 
" "%(reason)s") class TraitRetrievalFailed(NovaException): msg_fmt = _("Failed to retrieve traits from the placement API: %(error)s") class TraitCreationFailed(NovaException): msg_fmt = _("Failed to create trait %(name)s: %(error)s") class CannotMigrateToSameHost(NovaException): msg_fmt = _("Cannot migrate to the host where the server exists.") class VirtDriverNotReady(NovaException): msg_fmt = _("Virt driver is not ready.") class InvalidPeerList(NovaException): msg_fmt = _("Configured nova-compute peer list for the ironic virt " "driver is invalid on host %(host)s") class InstanceDiskMappingFailed(NovaException): msg_fmt = _("Failed to map boot disk of instance %(instance_name)s to " "the management partition from any Virtual I/O Server.") class NewMgmtMappingNotFoundException(NovaException): msg_fmt = _("Failed to find newly-created mapping of storage element " "%(stg_name)s from Virtual I/O Server %(vios_name)s to the " "management partition.") class NoDiskDiscoveryException(NovaException): msg_fmt = _("Having scanned SCSI bus %(bus)x on the management partition, " "disk with UDID %(udid)s failed to appear after %(polls)d " "polls over %(timeout)d seconds.") class UniqueDiskDiscoveryException(NovaException): msg_fmt = _("Expected to find exactly one disk on the management " "partition at %(path_pattern)s; found %(count)d.") class DeviceDeletionException(NovaException): msg_fmt = _("Device %(devpath)s is still present on the management " "partition after attempting to delete it. Polled %(polls)d " "times over %(timeout)d seconds.") class OptRequiredIfOtherOptValue(NovaException): msg_fmt = _("The %(then_opt)s option is required if %(if_opt)s is " "specified as '%(if_value)s'.") class AllocationCreateFailed(NovaException): msg_fmt = _('Failed to create allocations for instance %(instance)s ' 'against resource provider %(provider)s.') class AllocationUpdateFailed(NovaException): msg_fmt = _('Failed to update allocations for consumer %(consumer_uuid)s. ' 'Error: %(error)s') class AllocationMoveFailed(NovaException): msg_fmt = _('Failed to move allocations from consumer %(source_consumer)s ' 'to consumer %(target_consumer)s. ' 'Error: %(error)s') class AllocationDeleteFailed(NovaException): msg_fmt = _('Failed to delete allocations for consumer %(consumer_uuid)s. ' 'Error: %(error)s') class TooManyComputesForHost(NovaException): msg_fmt = _('Unexpected number of compute node records ' '(%(num_computes)d) found for host %(host)s. There should ' 'only be a one-to-one mapping.') class CertificateValidationFailed(NovaException): msg_fmt = _("Image signature certificate validation failed for " "certificate: %(cert_uuid)s. %(reason)s") class InstanceRescueFailure(NovaException): msg_fmt = _("Failed to move instance to rescue mode: %(reason)s") class InstanceUnRescueFailure(NovaException): msg_fmt = _("Failed to unrescue instance: %(reason)s") class IronicAPIVersionNotAvailable(NovaException): msg_fmt = _('Ironic API version %(version)s is not available.') class ZVMDriverException(NovaException): msg_fmt = _("ZVM Driver has error: %(error)s") class ZVMConnectorError(ZVMDriverException): msg_fmt = _("zVM Cloud Connector request failed: %(results)s") def __init__(self, message=None, **kwargs): """Exception for zVM ConnectorClient calls. :param results: The object returned from ZVMConnector.send_request. 
""" super(ZVMConnectorError, self).__init__(message=message, **kwargs) results = kwargs.get('results', {}) self.overallRC = results.get('overallRC') self.rc = results.get('rc') self.rs = results.get('rs') self.errmsg = results.get('errmsg') class NoResourceClass(NovaException): msg_fmt = _("Resource class not found for Ironic node %(node)s.") class ResourceProviderAllocationRetrievalFailed(NovaException): msg_fmt = _("Failed to retrieve allocations for resource provider " "%(rp_uuid)s: %(error)s") class ConsumerAllocationRetrievalFailed(NovaException): msg_fmt = _("Failed to retrieve allocations for consumer " "%(consumer_uuid)s: %(error)s") class ReshapeFailed(NovaException): msg_fmt = _("Resource provider inventory and allocation data migration " "failed: %(error)s") class ReshapeNeeded(NovaException): msg_fmt = _("Virt driver indicates that provider inventories need to be " "moved.") class FlavorImageConflict(NovaException): msg_fmt = _("Conflicting values for %(setting)s found in the flavor " "(%(flavor_val)s) and the image (%(image_val)s).") class MissingDomainCapabilityFeatureException(NovaException): msg_fmt = _("Guest config could not be built without domain capabilities " "including <%(feature)s> feature.") class HealPortAllocationException(NovaException): msg_fmt = _("Healing port allocation failed.") class MoreThanOneResourceProviderToHealFrom(HealPortAllocationException): msg_fmt = _("More than one matching resource provider %(rp_uuids)s is " "available for healing the port allocation for port " "%(port_id)s for instance %(instance_uuid)s. This script " "does not have enough information to select the proper " "resource provider from which to heal.") class NoResourceProviderToHealFrom(HealPortAllocationException): msg_fmt = _("No matching resource provider is " "available for healing the port allocation for port " "%(port_id)s for instance %(instance_uuid)s. There are no " "resource providers with matching traits %(traits)s in the " "provider tree of the resource provider %(node_uuid)s ." "This probably means that the neutron QoS configuration is " "wrong. Consult with " "https://docs.openstack.org/neutron/latest/admin/" "config-qos-min-bw.html for information on how to configure " "neutron. If the configuration is fixed the script can be run " "again.") class UnableToQueryPorts(HealPortAllocationException): msg_fmt = _("Unable to query ports for instance %(instance_uuid)s: " "%(error)s") class UnableToUpdatePorts(HealPortAllocationException): msg_fmt = _("Unable to update ports with allocations that are about to be " "created in placement: %(error)s. The healing of the " "instance is aborted. It is safe to try to heal the instance " "again.") class UnableToRollbackPortUpdates(HealPortAllocationException): msg_fmt = _("Failed to update neutron ports with allocation keys and the " "automatic rollback of the previously successful port updates " "also failed: %(error)s. Make sure that the " "binding:profile.allocation key of the affected ports " "%(port_uuids)s are manually cleaned in neutron according to " "document https://docs.openstack.org/nova/latest/cli/" "nova-manage.html#placement. If you re-run the script without " "the manual fix then the missing allocation for these ports " "will not be healed in placement.") class AssignedResourceNotFound(NovaException): msg_fmt = _("Assigned resources not found: %(reason)s") class PMEMNamespaceConfigInvalid(NovaException): msg_fmt = _("The pmem_namespaces configuration is invalid: %(reason)s, " "please check your conf file. 
") class GetPMEMNamespacesFailed(NovaException): msg_fmt = _("Get PMEM namespaces on host failed: %(reason)s.") class VPMEMCleanupFailed(NovaException): msg_fmt = _("Failed to clean up the vpmem backend device %(dev)s: " "%(error)s") class RequestGroupSuffixConflict(NovaException): msg_fmt = _("Duplicate request group suffix %(suffix)s.") class AmbiguousResourceProviderForPCIRequest(NovaException): msg_fmt = _("Allocating resources from multiple resource providers " "%(providers)s for a single pci request %(requester)s is not " "supported.") class UnexpectedResourceProviderNameForPCIRequest(NovaException): msg_fmt = _("Resource provider %(provider)s used to allocate resources " "for the pci request %(requester)s does not have a properly " "formatted name. Expected name format is " "::, but got " "%(provider_name)s") class DeviceProfileError(NovaException): msg_fmt = _("Device profile name %(name)s: %(msg)s") class AcceleratorRequestOpFailed(NovaException): msg_fmt = _("Failed to %(op)s accelerator requests: %(msg)s") class InvalidLibvirtGPUConfig(NovaException): msg_fmt = _('Invalid configuration for GPU devices: %(reason)s') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/exception_wrapper.py0000664000175000017500000000744700000000000017655 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import functools import inspect import traceback from oslo_utils import excutils import nova.conf from nova.notifications.objects import base from nova.notifications.objects import exception from nova.objects import fields from nova import rpc from nova import safe_utils CONF = nova.conf.CONF def _emit_exception_notification(notifier, context, ex, function_name, args, source, trace_back): _emit_legacy_exception_notification(notifier, context, ex, function_name, args) _emit_versioned_exception_notification(context, ex, source, trace_back) @rpc.if_notifications_enabled def _emit_versioned_exception_notification(context, ex, source, trace_back): versioned_exception_payload = \ exception.ExceptionPayload.from_exc_and_traceback(ex, trace_back) publisher = base.NotificationPublisher(host=CONF.host, source=source) event_type = base.EventType( object='compute', action=fields.NotificationAction.EXCEPTION) notification = exception.ExceptionNotification( publisher=publisher, event_type=event_type, priority=fields.NotificationPriority.ERROR, payload=versioned_exception_payload) notification.emit(context) def _emit_legacy_exception_notification(notifier, context, ex, function_name, args): payload = dict(exception=ex, args=args) notifier.error(context, function_name, payload) def wrap_exception(notifier=None, get_notifier=None, binary=None): """This decorator wraps a method to catch any exceptions that may get thrown. It also optionally sends the exception to the notification system. 
""" def inner(f): def wrapped(self, context, *args, **kw): # Don't store self or context in the payload, it now seems to # contain confidential information. try: return f(self, context, *args, **kw) except Exception as e: tb = traceback.format_exc() with excutils.save_and_reraise_exception(): if notifier or get_notifier: call_dict = _get_call_dict( f, self, context, *args, **kw) function_name = f.__name__ _emit_exception_notification( notifier or get_notifier(), context, e, function_name, call_dict, binary, tb) return functools.wraps(f)(wrapped) return inner def _get_call_dict(function, self, context, *args, **kw): wrapped_func = safe_utils.get_wrapped_function(function) call_dict = inspect.getcallargs(wrapped_func, self, context, *args, **kw) # self can't be serialized and shouldn't be in the # payload call_dict.pop('self', None) # NOTE(gibi) remove context as well as it contains sensitive information # and it can also contain circular references call_dict.pop('context', None) return _cleanse_dict(call_dict) def _cleanse_dict(original): """Strip all admin_password, new_pass, rescue_pass keys from a dict.""" return {k: v for k, v in original.items() if "_pass" not in k} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/filters.py0000664000175000017500000001172100000000000015555 0ustar00zuulzuul00000000000000# Copyright (c) 2011-2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Filter support """ from oslo_log import log as logging from nova.i18n import _LI from nova import loadables LOG = logging.getLogger(__name__) class BaseFilter(object): """Base class for all filter classes.""" def _filter_one(self, obj, spec_obj): """Return True if it passes the filter, False otherwise. Override this in a subclass. """ return True def filter_all(self, filter_obj_list, spec_obj): """Yield objects that pass the filter. Can be overridden in a subclass, if you need to base filtering decisions on all objects. Otherwise, one can just override _filter_one() to filter a single object. """ for obj in filter_obj_list: if self._filter_one(obj, spec_obj): yield obj # Set to true in a subclass if a filter only needs to be run once # for each request rather than for each instance run_filter_once_per_request = False def run_filter_for_index(self, index): """Return True if the filter needs to be run for the "index-th" instance in a request. Only need to override this if a filter needs anything other than "first only" or "all" behaviour. """ if self.run_filter_once_per_request and index > 0: return False else: return True class BaseFilterHandler(loadables.BaseLoader): """Base class to handle loading filter classes. This class should be subclassed where one needs to use filters. """ def get_filtered_objects(self, filters, objs, spec_obj, index=0): list_objs = list(objs) LOG.debug("Starting with %d host(s)", len(list_objs)) # Track the hosts as they are removed. 
The 'full_filter_results' list # contains the host/nodename info for every host that passes each # filter, while the 'part_filter_results' list just tracks the number # removed by each filter, unless the filter returns zero hosts, in # which case it records the host/nodename for the last batch that was # removed. Since the full_filter_results can be very large, it is only # recorded if the LOG level is set to debug. part_filter_results = [] full_filter_results = [] log_msg = "%(cls_name)s: (start: %(start)s, end: %(end)s)" for filter_ in filters: if filter_.run_filter_for_index(index): cls_name = filter_.__class__.__name__ start_count = len(list_objs) objs = filter_.filter_all(list_objs, spec_obj) if objs is None: LOG.debug("Filter %s says to stop filtering", cls_name) return list_objs = list(objs) end_count = len(list_objs) part_filter_results.append(log_msg % {"cls_name": cls_name, "start": start_count, "end": end_count}) if list_objs: remaining = [(getattr(obj, "host", obj), getattr(obj, "nodename", "")) for obj in list_objs] full_filter_results.append((cls_name, remaining)) else: LOG.info(_LI("Filter %s returned 0 hosts"), cls_name) full_filter_results.append((cls_name, None)) break LOG.debug("Filter %(cls_name)s returned " "%(obj_len)d host(s)", {'cls_name': cls_name, 'obj_len': len(list_objs)}) if not list_objs: # Log the filtration history msg_dict = { "inst_uuid": spec_obj.instance_uuid, "str_results": str(full_filter_results), } full_msg = ("Filtering removed all hosts for the request with " "instance ID " "'%(inst_uuid)s'. Filter results: %(str_results)s" ) % msg_dict LOG.debug(full_msg) msg_dict["str_results"] = str(part_filter_results) part_msg = ("Filtering removed all hosts for the request with " "instance ID " "'%(inst_uuid)s'. Filter results: %(str_results)s" ) % msg_dict LOG.info(part_msg) return list_objs ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3504698 nova-21.2.4/nova/hacking/0000775000175000017500000000000000000000000015135 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/hacking/__init__.py0000664000175000017500000000000000000000000017234 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/hacking/checks.py0000664000175000017500000010436700000000000016762 0ustar00zuulzuul00000000000000# Copyright (c) 2012, Cloudscaling # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Guidelines for writing new hacking checks - Use only for Nova specific tests. OpenStack general tests should be submitted to the common 'hacking' module. - Pick numbers in the range N3xx. Find the current test with the highest allocated number and then pick the next value. - Keep the test method code in the source file ordered based on the N3xx value. 
- List the new rule in the top level HACKING.rst file - Add test cases for each new rule to nova/tests/unit/test_hacking.py """ import ast import os import re from hacking import core import six UNDERSCORE_IMPORT_FILES = [] session_check = re.compile(r"\w*def [a-zA-Z0-9].*[(].*session.*[)]") cfg_re = re.compile(r".*\scfg\.") # Excludes oslo.config OptGroup objects cfg_opt_re = re.compile(r".*[\s\[]cfg\.[a-zA-Z]*Opt\(") rule_default_re = re.compile(r".*RuleDefault\(") policy_enforce_re = re.compile(r".*_ENFORCER\.enforce\(") virt_file_re = re.compile(r"\./nova/(?:tests/)?virt/(\w+)/") virt_import_re = re.compile( r"^\s*(?:import|from) nova\.(?:tests\.)?virt\.(\w+)") virt_config_re = re.compile( r"CONF\.import_opt\('.*?', 'nova\.virt\.(\w+)('|.)") asse_trueinst_re = re.compile( r"(.)*assertTrue\(isinstance\((\w|\.|\'|\"|\[|\])+, " r"(\w|\.|\'|\"|\[|\])+\)\)") asse_equal_type_re = re.compile( r"(.)*assertEqual\(type\((\w|\.|\'|\"|\[|\])+\), " r"(\w|\.|\'|\"|\[|\])+\)") asse_equal_in_end_with_true_or_false_re = re.compile(r"assertEqual\(" r"(\w|[][.'\"])+ in (\w|[][.'\", ])+, (True|False)\)") asse_equal_in_start_with_true_or_false_re = re.compile(r"assertEqual\(" r"(True|False), (\w|[][.'\"])+ in (\w|[][.'\", ])+\)") # NOTE(snikitin): Next two regexes weren't united to one for more readability. # asse_true_false_with_in_or_not_in regex checks # assertTrue/False(A in B) cases where B argument has no spaces # asse_true_false_with_in_or_not_in_spaces regex checks cases # where B argument has spaces and starts/ends with [, ', ". # For example: [1, 2, 3], "some string", 'another string'. # We have to separate these regexes to escape a false positives # results. B argument should have spaces only if it starts # with [, ", '. Otherwise checking of string # "assertFalse(A in B and C in D)" will be false positives. # In this case B argument is "B and C in D". asse_true_false_with_in_or_not_in = re.compile(r"assert(True|False)\(" r"(\w|[][.'\"])+( not)? in (\w|[][.'\",])+(, .*)?\)") asse_true_false_with_in_or_not_in_spaces = re.compile(r"assert(True|False)" r"\((\w|[][.'\"])+( not)? in [\[|'|\"](\w|[][.'\", ])+" r"[\[|'|\"](, .*)?\)") asse_raises_regexp = re.compile(r"assertRaisesRegexp\(") conf_attribute_set_re = re.compile(r"CONF\.[a-z0-9_.]+\s*=\s*\w") translated_log = re.compile( r"(.)*LOG\.(audit|error|info|critical|exception)" r"\(\s*_\(\s*('|\")") mutable_default_args = re.compile(r"^\s*def .+\((.+=\{\}|.+=\[\])") string_translation = re.compile(r"[^_]*_\(\s*('|\")") underscore_import_check = re.compile(r"(.)*import _(.)*") import_translation_for_log_or_exception = re.compile( r"(.)*(from\snova.i18n\simport)\s_") # We need this for cases where they have created their own _ function. 
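# Illustrative example (not from the upstream module; the module name below is
# invented): a file that defines its own translation marker, e.g.
#     _ = my_translators.primary
# is matched by the following regex, so the N323 check treats it the same as an
# explicit import of _.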
custom_underscore_check = re.compile(r"(.)*_\s*=\s*(.)*") api_version_re = re.compile(r"@.*\bapi_version\b") dict_constructor_with_list_copy_re = re.compile(r".*\bdict\((\[)?(\(|\[)") decorator_re = re.compile(r"@.*") http_not_implemented_re = re.compile(r"raise .*HTTPNotImplemented\(") spawn_re = re.compile( r".*(eventlet|greenthread)\.(?P<spawn_part>spawn(_n)?)\(.*\)") contextlib_nested = re.compile(r"^with (contextlib\.)?nested\(") doubled_words_re = re.compile( r"\b(then?|[iao]n|i[fst]|but|f?or|at|and|[dt]o)\s+\1\b") log_remove_context = re.compile( r"(.)*LOG\.(.*)\(.*(context=[_a-zA-Z0-9].*)+.*\)") return_not_followed_by_space = re.compile(r"^\s*return(?:\(|{|\"|'|#).*$") uuid4_re = re.compile(r"uuid4\(\)($|[^\.]|\.hex)") redundant_import_alias_re = re.compile(r"import (?:.*\.)?(.+) as \1$") yield_not_followed_by_space = re.compile(r"^\s*yield(?:\(|{|\[|\"|').*$") asse_regexpmatches = re.compile( r"(assertRegexpMatches|assertNotRegexpMatches)\(") privsep_file_re = re.compile('^nova/privsep[./]') privsep_import_re = re.compile( r"^(?:import|from).*\bprivsep\b") # Redundant parenthetical masquerading as a tuple, used with ``in``: # Space, "in", space, open paren # Optional single or double quote (so we match strings or symbols) # A sequence of the characters that can make up a symbol. (This is weak: a # string can contain other characters; and a numeric symbol can start with a # minus, and a method call has a param list, and... Not sure this gets better # without a lexer.) # The same closing quote # Close paren disguised_as_tuple_re = re.compile(r''' in \((['"]?)[a-zA-Z0-9_.]+\1\)''') # NOTE(takashin): The patterns of non-existent mock assertion methods and # attributes do not cover all cases. If you find a new pattern, # add the pattern in the following regex patterns. mock_assert_method_re = re.compile( r"\.((called_once(_with)*|has_calls)|" r"mock_assert_(called(_(once|with|once_with))?" r"|any_call|has_calls|not_called)|" r"(asser|asset|asssert|assset)_(called(_(once|with|once_with))?" r"|any_call|has_calls|not_called))\(") mock_attribute_re = re.compile(r"[\.\(](retrun_value)[,=\s]") # Regex for useless assertions useless_assertion_re = re.compile( r"\.((assertIsNone)\(None|(assertTrue)\((True|\d+|'.+'|\".+\")),") class BaseASTChecker(ast.NodeVisitor): """Provides a simple framework for writing AST-based checks. Subclasses should implement visit_* methods like any other AST visitor implementation. When they detect an error for a particular node the method should call ``self.add_error(offending_node)``. Details about where in the code the error occurred will be pulled from the node object. Subclasses should also provide a class variable named CHECK_DESC to be used for the human readable error message. """ def __init__(self, tree, filename): """This object is created automatically by pycodestyle.
:param tree: an AST tree :param filename: name of the file being analyzed (ignored by our checks) """ self._tree = tree self._errors = [] def run(self): """Called automatically by pycodestyle.""" self.visit(self._tree) return self._errors def add_error(self, node, message=None): """Add an error caused by a node to the list of errors.""" message = message or self.CHECK_DESC error = (node.lineno, node.col_offset, message, self.__class__) self._errors.append(error) def _check_call_names(self, call_node, names): if isinstance(call_node, ast.Call): if isinstance(call_node.func, ast.Name): if call_node.func.id in names: return True return False @core.flake8ext def import_no_db_in_virt(logical_line, filename): """Check for db calls from nova/virt As of grizzly-2 all the database calls have been removed from nova/virt, and we want to keep it that way. N307 """ if "nova/virt" in filename and not filename.endswith("fake.py"): if logical_line.startswith("from nova.db import api"): yield (0, "N307: nova.db.api import not allowed in nova/virt/*") @core.flake8ext def no_db_session_in_public_api(logical_line, filename): if "db/api.py" in filename: if session_check.match(logical_line): yield (0, "N309: public db api methods may not accept session") @core.flake8ext def use_timeutils_utcnow(logical_line, filename): # tools are OK to use the standard datetime module if "/tools/" in filename: return msg = "N310: timeutils.utcnow() must be used instead of datetime.%s()" datetime_funcs = ['now', 'utcnow'] for f in datetime_funcs: pos = logical_line.find('datetime.%s' % f) if pos != -1: yield (pos, msg % f) def _get_virt_name(regex, data): m = regex.match(data) if m is None: return None driver = m.group(1) # Ignore things we mis-detect as virt drivers in the regex if driver in ["test_virt_drivers", "driver", "disk", "api", "imagecache", "cpu", "hardware", "image"]: return None return driver @core.flake8ext def import_no_virt_driver_import_deps(physical_line, filename): """Check virt drivers' modules aren't imported by other drivers Modules under each virt driver's directory are considered private to that virt driver. Other drivers in Nova must not access those drivers. Any code that is to be shared should be refactored into a common module N311 """ thisdriver = _get_virt_name(virt_file_re, filename) thatdriver = _get_virt_name(virt_import_re, physical_line) if (thatdriver is not None and thisdriver is not None and thisdriver != thatdriver): return (0, "N311: importing code from other virt drivers forbidden") @core.flake8ext def import_no_virt_driver_config_deps(physical_line, filename): """Check virt drivers' config vars aren't used by other drivers Modules under each virt driver's directory are considered private to that virt driver. Other drivers in Nova must not use their config vars. 
Any config vars that are to be shared should be moved into a common module N312 """ thisdriver = _get_virt_name(virt_file_re, filename) thatdriver = _get_virt_name(virt_config_re, physical_line) if (thatdriver is not None and thisdriver is not None and thisdriver != thatdriver): return (0, "N312: using config vars from other virt drivers forbidden") @core.flake8ext def capital_cfg_help(logical_line, tokens): msg = "N313: capitalize help string" if cfg_re.match(logical_line): for t in range(len(tokens)): if tokens[t][1] == "help": txt = tokens[t + 2][1] if len(txt) > 1 and txt[1].islower(): yield (0, msg) @core.flake8ext def assert_true_instance(logical_line): """Check for assertTrue(isinstance(a, b)) sentences N316 """ if asse_trueinst_re.match(logical_line): yield (0, "N316: assertTrue(isinstance(a, b)) sentences not allowed") @core.flake8ext def assert_equal_type(logical_line): """Check for assertEqual(type(A), B) sentences N317 """ if asse_equal_type_re.match(logical_line): yield (0, "N317: assertEqual(type(A), B) sentences not allowed") @core.flake8ext def check_python3_xrange(logical_line): if re.search(r"\bxrange\s*\(", logical_line): yield (0, "N327: Do not use xrange(). 'xrange()' is not compatible " "with Python 3. Use range() or six.moves.range() instead.") @core.flake8ext def no_translate_debug_logs(logical_line, filename): """Check for 'LOG.debug(_(' As per our translation policy, https://wiki.openstack.org/wiki/LoggingStandards#Log_Translation we shouldn't translate debug level logs. * This check assumes that 'LOG' is a logger. * Use filename so we can start enforcing this in specific folders instead of needing to do so all at once. N319 """ if logical_line.startswith("LOG.debug(_("): yield (0, "N319 Don't translate debug level logs") @core.flake8ext def no_import_translation_in_tests(logical_line, filename): """Check for 'from nova.i18n import _' N337 """ if 'nova/tests/' in filename: res = import_translation_for_log_or_exception.match(logical_line) if res: yield (0, "N337 Don't import translation in tests") @core.flake8ext def no_setting_conf_directly_in_tests(logical_line, filename): """Check for setting CONF.* attributes directly in tests The value can leak out of tests affecting how subsequent tests run. Using self.flags(option=value) is the preferred method to temporarily set config options in tests. N320 """ if 'nova/tests/' in filename: res = conf_attribute_set_re.match(logical_line) if res: yield (0, "N320: Setting CONF.* attributes directly in tests is " "forbidden. Use self.flags(option=value) instead") @core.flake8ext def no_mutable_default_args(logical_line): msg = "N322: Method's default argument shouldn't be mutable!" if mutable_default_args.match(logical_line): yield (0, msg) @core.flake8ext def check_explicit_underscore_import(logical_line, filename): """Check for explicit import of the _ function We need to ensure that any files that are using the _() function to translate logs are explicitly importing the _ function. We can't trust unit test to catch whether the import has been added so we need to check for it here. """ # Build a list of the files that have _ imported. No further # checking needed once it is found. 
if filename in UNDERSCORE_IMPORT_FILES: pass elif (underscore_import_check.match(logical_line) or custom_underscore_check.match(logical_line)): UNDERSCORE_IMPORT_FILES.append(filename) elif (translated_log.match(logical_line) or string_translation.match(logical_line)): yield (0, "N323: Found use of _() without explicit import of _ !") @core.flake8ext def use_jsonutils(logical_line, filename): # the code below that path is not meant to be executed from neutron # tree where jsonutils module is present, so don't enforce its usage # for this subdirectory if "plugins/xenserver" in filename: return # tools are OK to use the standard json module if "/tools/" in filename: return msg = "N324: jsonutils.%(fun)s must be used instead of json.%(fun)s" if "json." in logical_line: json_funcs = ['dumps(', 'dump(', 'loads(', 'load('] for f in json_funcs: pos = logical_line.find('json.%s' % f) if pos != -1: yield (pos, msg % {'fun': f[:-1]}) @core.flake8ext def check_api_version_decorator(logical_line, previous_logical, blank_before, filename): msg = ("N332: the api_version decorator must be the first decorator" " on a method.") if blank_before == 0 and re.match(api_version_re, logical_line) \ and re.match(decorator_re, previous_logical): yield (0, msg) class CheckForStrUnicodeExc(BaseASTChecker): """Checks for the use of str() or unicode() on an exception. This currently only handles the case where str() or unicode() is used in the scope of an exception handler. If the exception is passed into a function, returned from an assertRaises, or used on an exception created in the same scope, this does not catch it. """ name = 'check_for_string_unicode_exc' version = '1.0' CHECK_DESC = ('N325 str() and unicode() cannot be used on an ' 'exception. Remove or use six.text_type()') def __init__(self, tree, filename): super(CheckForStrUnicodeExc, self).__init__(tree, filename) self.name = [] self.already_checked = [] # Python 2 produces ast.TryExcept and ast.TryFinally nodes, but Python 3 # only produces ast.Try nodes. if six.PY2: def visit_TryExcept(self, node): for handler in node.handlers: if handler.name: self.name.append(handler.name.id) super(CheckForStrUnicodeExc, self).generic_visit(node) self.name = self.name[:-1] else: super(CheckForStrUnicodeExc, self).generic_visit(node) else: def visit_Try(self, node): for handler in node.handlers: if handler.name: self.name.append(handler.name) super(CheckForStrUnicodeExc, self).generic_visit(node) self.name = self.name[:-1] else: super(CheckForStrUnicodeExc, self).generic_visit(node) def visit_Call(self, node): if self._check_call_names(node, ['str', 'unicode']): if node not in self.already_checked: self.already_checked.append(node) if isinstance(node.args[0], ast.Name): if node.args[0].id in self.name: self.add_error(node.args[0]) super(CheckForStrUnicodeExc, self).generic_visit(node) class CheckForTransAdd(BaseASTChecker): """Checks for the use of concatenation on a translated string. Translations should not be concatenated with other strings, but should instead include the string being added to the translated string to give the translators the most information. """ name = 'check_for_trans_add' version = '0.1' CHECK_DESC = ('N326 Translated messages cannot be concatenated. 
' 'String should be included in translated message.') TRANS_FUNC = ['_', '_LI', '_LW', '_LE', '_LC'] def visit_BinOp(self, node): if isinstance(node.op, ast.Add): if self._check_call_names(node.left, self.TRANS_FUNC): self.add_error(node.left) elif self._check_call_names(node.right, self.TRANS_FUNC): self.add_error(node.right) super(CheckForTransAdd, self).generic_visit(node) class _FindVariableReferences(ast.NodeVisitor): def __init__(self): super(_FindVariableReferences, self).__init__() self._references = [] def visit_Name(self, node): if isinstance(node.ctx, ast.Load): # This means the value of a variable was loaded. For example a # variable 'foo' was used like: # mocked_thing.bar = foo # foo() # self.assertRaises(exception, foo) self._references.append(node.id) super(_FindVariableReferences, self).generic_visit(node) class CheckForUncalledTestClosure(BaseASTChecker): """Look for closures that are never called in tests. A recurring pattern when using multiple mocks is to create a closure decorated with mocks like: def test_thing(self): @mock.patch.object(self.compute, 'foo') @mock.patch.object(self.compute, 'bar') def _do_test(mock_bar, mock_foo): # Test things _do_test() However it is easy to leave off the _do_test() and have the test pass because nothing runs. This check looks for methods defined within a test method and ensures that there is a reference to them. Only methods defined one level deep are checked. Something like: def test_thing(self): class FakeThing: def foo(self): would not ensure that foo is referenced. N349 """ name = 'check_for_uncalled_test_closure' version = '0.1' def __init__(self, tree, filename): super(CheckForUncalledTestClosure, self).__init__(tree, filename) self._filename = filename def visit_FunctionDef(self, node): # self._filename is 'stdin' in the unit test for this check. if (not os.path.basename(self._filename).startswith('test_') and os.path.basename(self._filename) != 'stdin'): return closures = [] references = [] # Walk just the direct nodes of the test method for child_node in ast.iter_child_nodes(node): if isinstance(child_node, ast.FunctionDef): closures.append(child_node.name) # Walk all nodes to find references find_references = _FindVariableReferences() find_references.generic_visit(node) references = find_references._references missed = set(closures) - set(references) if missed: self.add_error(node, 'N349: Test closures not called: %s' % ','.join(missed)) @core.flake8ext def assert_true_or_false_with_in(logical_line): """Check for assertTrue/False(A in B), assertTrue/False(A not in B), assertTrue/False(A in B, message) or assertTrue/False(A not in B, message) sentences. N334 """ res = (asse_true_false_with_in_or_not_in.search(logical_line) or asse_true_false_with_in_or_not_in_spaces.search(logical_line)) if res: yield (0, "N334: Use assertIn/NotIn(A, B) rather than " "assertTrue/False(A in/not in B) when checking collection " "contents.") @core.flake8ext def assert_raises_regexp(logical_line): """Check for usage of deprecated assertRaisesRegexp N335 """ res = asse_raises_regexp.search(logical_line) if res: yield (0, "N335: assertRaisesRegex must be used instead " "of assertRaisesRegexp") @core.flake8ext def dict_constructor_with_list_copy(logical_line): msg = ("N336: Must use a dict comprehension instead of a dict constructor" " with a sequence of key-value pairs." 
) if dict_constructor_with_list_copy_re.match(logical_line): yield (0, msg) @core.flake8ext def assert_equal_in(logical_line): """Check for assertEqual(A in B, True), assertEqual(True, A in B), assertEqual(A in B, False) or assertEqual(False, A in B) sentences N338 """ res = (asse_equal_in_start_with_true_or_false_re.search(logical_line) or asse_equal_in_end_with_true_or_false_re.search(logical_line)) if res: yield (0, "N338: Use assertIn/NotIn(A, B) rather than " "assertEqual(A in B, True/False) when checking collection " "contents.") @core.flake8ext def check_http_not_implemented(logical_line, filename, noqa): msg = ("N339: HTTPNotImplemented response must be implemented with" " common raise_feature_not_supported().") if noqa: return if ("nova/api/openstack/compute" not in filename): return if re.match(http_not_implemented_re, logical_line): yield (0, msg) @core.flake8ext def check_greenthread_spawns(logical_line, filename): """Check for use of greenthread.spawn(), greenthread.spawn_n(), eventlet.spawn(), and eventlet.spawn_n() N340 """ msg = ("N340: Use nova.utils.%(spawn)s() rather than " "greenthread.%(spawn)s() and eventlet.%(spawn)s()") if "nova/utils.py" in filename or "nova/tests/" in filename: return match = re.match(spawn_re, logical_line) if match: yield (0, msg % {'spawn': match.group('spawn_part')}) @core.flake8ext def check_no_contextlib_nested(logical_line, filename): msg = ("N341: contextlib.nested is deprecated. With Python 2.7 and later " "the with-statement supports multiple nested objects. See https://" "docs.python.org/2/library/contextlib.html#contextlib.nested for " "more information. nova.test.nested() is an alternative as well.") if contextlib_nested.match(logical_line): yield (0, msg) @core.flake8ext def check_config_option_in_central_place(logical_line, filename): msg = ("N342: Config options should be in the central location " "'/nova/conf/*'. Do not declare new config options outside " "of that folder.") # That's the correct location if "nova/conf/" in filename: return # (macsz) All config options (with exceptions that are clarified # in the list below) were moved to the central place. List below is for # all options that were impossible to move without doing a major impact # on code. Add full path to a module or folder. conf_exceptions = [ # CLI opts are allowed to be outside of nova/conf directory 'nova/cmd/manage.py', 'nova/cmd/policy.py', 'nova/cmd/status.py', # config options should not be declared in tests, but there is # another checker for it (N320) 'nova/tests', ] if any(f in filename for f in conf_exceptions): return if cfg_opt_re.match(logical_line): yield (0, msg) @core.flake8ext def check_policy_registration_in_central_place(logical_line, filename): msg = ('N350: Policy registration should be in the central location(s) ' '"/nova/policies/*"') # This is where registration should happen if "nova/policies/" in filename: return # A couple of policy tests register rules if "nova/tests/unit/test_policy.py" in filename: return if rule_default_re.match(logical_line): yield (0, msg) @core.flake8ext def check_policy_enforce(logical_line, filename): """Look for uses of nova.policy._ENFORCER.enforce() Now that policy defaults are registered in code the _ENFORCER.authorize method should be used. That ensures that only registered policies are used. Uses of _ENFORCER.enforce could allow unregistered policies to be used, so this check looks for uses of that method. N351 """ msg = ('N351: nova.policy._ENFORCER.enforce() should not be used. 
' 'Use the authorize() method instead.') if policy_enforce_re.match(logical_line): yield (0, msg) @core.flake8ext def check_doubled_words(physical_line, filename): """Check for the common doubled-word typos N343 """ msg = ("N343: Doubled word '%(word)s' typo found") match = re.search(doubled_words_re, physical_line) if match: return (0, msg % {'word': match.group(1)}) @core.flake8ext def check_python3_no_iteritems(logical_line): msg = ("N344: Use items() instead of dict.iteritems().") if re.search(r".*\.iteritems\(\)", logical_line): yield (0, msg) @core.flake8ext def check_python3_no_iterkeys(logical_line): msg = ("N345: Use six.iterkeys() instead of dict.iterkeys().") if re.search(r".*\.iterkeys\(\)", logical_line): yield (0, msg) @core.flake8ext def check_python3_no_itervalues(logical_line): msg = ("N346: Use six.itervalues() instead of dict.itervalues().") if re.search(r".*\.itervalues\(\)", logical_line): yield (0, msg) @core.flake8ext def no_os_popen(logical_line): """Disallow 'os.popen(' Deprecated library function os.popen() Replace it using subprocess https://bugs.launchpad.net/tempest/+bug/1529836 N348 """ if 'os.popen(' in logical_line: yield (0, 'N348 Deprecated library function os.popen(). ' 'Replace it using subprocess module. ') @core.flake8ext def no_log_warn(logical_line): """Disallow 'LOG.warn(' Deprecated LOG.warn(), instead use LOG.warning https://bugs.launchpad.net/senlin/+bug/1508442 N352 """ msg = ("N352: LOG.warn is deprecated, please use LOG.warning!") if "LOG.warn(" in logical_line: yield (0, msg) @core.flake8ext def check_context_log(logical_line, filename, noqa): """check whether context is being passed to the logs Not correct: LOG.info(_LI("Rebooting instance"), context=context) Correct: LOG.info(_LI("Rebooting instance")) https://bugs.launchpad.net/nova/+bug/1500896 N353 """ if noqa: return if "nova/tests" in filename: return if log_remove_context.match(logical_line): yield (0, "N353: Nova is using oslo.context's RequestContext " "which means the context object is in scope when " "doing logging using oslo.log, so no need to pass it as " "kwarg.") @core.flake8ext def no_assert_equal_true_false(logical_line): """Enforce use of assertTrue/assertFalse. Prevent use of assertEqual(A, True|False), assertEqual(True|False, A), assertNotEqual(A, True|False), and assertNotEqual(True|False, A). N355 """ _start_re = re.compile(r'assert(Not)?Equal\((True|False),') _end_re = re.compile(r'assert(Not)?Equal\(.*,\s+(True|False)\)$') if _start_re.search(logical_line) or _end_re.search(logical_line): yield (0, "N355: assertEqual(A, True|False), " "assertEqual(True|False, A), assertNotEqual(A, True|False), " "or assertEqual(True|False, A) sentences must not be used. " "Use assertTrue(A) or assertFalse(A) instead") @core.flake8ext def no_assert_true_false_is_not(logical_line): """Enforce use of assertIs/assertIsNot. Prevent use of assertTrue(A is|is not B) and assertFalse(A is|is not B). N356 """ _re = re.compile(r'assert(True|False)\(.+\s+is\s+(not\s+)?.+\)$') if _re.search(logical_line): yield (0, "N356: assertTrue(A is|is not B) or " "assertFalse(A is|is not B) sentences must not be used. " "Use assertIs(A, B) or assertIsNot(A, B) instead") @core.flake8ext def check_uuid4(logical_line): """Generating UUID Use oslo_utils.uuidutils or uuidsentinel(in case of test cases) to generate UUID instead of uuid4(). 
N357 """ msg = ("N357: Use oslo_utils.uuidutils or uuidsentinel(in case of test " "cases) to generate UUID instead of uuid4().") if uuid4_re.search(logical_line): yield (0, msg) @core.flake8ext def return_followed_by_space(logical_line): """Return should be followed by a space. Return should be followed by a space to clarify that return is not a function. Adding a space may force the developer to rethink if there are unnecessary parentheses in the written code. Not correct: return(42), return(a, b) Correct: return, return 42, return (a, b), return a, b N358 """ if return_not_followed_by_space.match(logical_line): yield (0, "N358: Return keyword should be followed by a space.") @core.flake8ext def no_redundant_import_alias(logical_line): """Check for redundant import aliases. Imports should not be in the forms below. from x import y as y import x as x import x.y as y N359 """ if re.search(redundant_import_alias_re, logical_line): yield (0, "N359: Import alias should not be redundant.") @core.flake8ext def yield_followed_by_space(logical_line): """Yield should be followed by a space. Yield should be followed by a space to clarify that yield is not a function. Adding a space may force the developer to rethink if there are unnecessary parentheses in the written code. Not correct: yield(x), yield(a, b) Correct: yield x, yield (a, b), yield a, b N360 """ if yield_not_followed_by_space.match(logical_line): yield (0, "N360: Yield keyword should be followed by a space.") @core.flake8ext def assert_regexpmatches(logical_line): """Check for usage of deprecated assertRegexpMatches/assertNotRegexpMatches N361 """ res = asse_regexpmatches.search(logical_line) if res: yield (0, "N361: assertRegex/assertNotRegex must be used instead " "of assertRegexpMatches/assertNotRegexpMatches.") @core.flake8ext def privsep_imports_not_aliased(logical_line, filename): """Do not abbreviate or alias privsep module imports. When accessing symbols under nova.privsep in code or tests, the full module path (e.g. nova.privsep.path.readfile(...)) should be used explicitly rather than importing and using an alias/abbreviation such as: from nova.privsep import path ... path.readfile(...) See Ief177dbcb018da6fbad13bb0ff153fc47292d5b9. N362 """ if ( # Give modules under nova.privsep a pass not privsep_file_re.match(filename) and # Any style of import of privsep... privsep_import_re.match(logical_line) and # ...that isn't 'import nova.privsep[.foo...]' logical_line.count(' ') > 1): yield (0, "N362: always import privsep modules so that the use of " "escalated permissions is obvious to callers. For example, " "use 'import nova.privsep.path' instead of " "'from nova.privsep import path'.") @core.flake8ext def did_you_mean_tuple(logical_line): """Disallow ``(not_a_tuple)`` because you meant ``(a_tuple_of_one,)``. N363 """ if disguised_as_tuple_re.search(logical_line): yield (0, "N363: You said ``in (not_a_tuple)`` when you almost " "certainly meant ``in (a_tuple_of_one,)``.") @core.flake8ext def nonexistent_assertion_methods_and_attributes(logical_line, filename): """Check non-existent mock assertion methods and attributes. The following assertion methods do not exist. - called_once() - called_once_with() - has_calls() - mock_assert_*() The following typos were found in the past cases. - asser_* - asset_* - assset_* - asssert_* - retrun_value N364 """ msg = ("N364: Non existent mock assertion method or attribute (%s) is " "used. 
Check a typo or whether the assertion method should begin " "with 'assert_'.") if 'nova/tests/' in filename: match = mock_assert_method_re.search(logical_line) if match: yield (0, msg % match.group(1)) match = mock_attribute_re.search(logical_line) if match: yield (0, msg % match.group(1)) @core.flake8ext def useless_assertion(logical_line, filename): """Check useless assertions in tests. The following assertions are useless. - assertIsNone(None, ...) - assertTrue(True, ...) - assertTrue(2, ...) # Constant number - assertTrue('Constant string', ...) - assertTrue("Constant string", ...) They are usually misuses of assertIsNone or assertTrue. N365 """ msg = "N365: Misuse of %s." if 'nova/tests/' in filename: match = useless_assertion_re.search(logical_line) if match: yield (0, msg % (match.group(2) or match.group(3))) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/hooks.py0000664000175000017500000001323400000000000015231 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Decorator and config option definitions for adding custom code (hooks) around callables. NOTE: as of Nova 13.0 hooks are DEPRECATED and will be removed in the near future. You should not build any new code using this facility. Any method may have the 'add_hook' decorator applied, which yields the ability to invoke Hook objects before or after the method. (i.e. pre and post) Hook objects are loaded by HookLoaders. Each named hook may invoke multiple Hooks. Example Hook object:: | class MyHook(object): | def pre(self, *args, **kwargs): | # do stuff before wrapped callable runs | | def post(self, rv, *args, **kwargs): | # do stuff after wrapped callable runs Example Hook object with function parameters:: | class MyHookWithFunction(object): | def pre(self, f, *args, **kwargs): | # do stuff with wrapped function info | def post(self, f, *args, **kwargs): | # do stuff with wrapped function info """ import functools from oslo_log import log as logging import stevedore from nova.i18n import _, _LE, _LW LOG = logging.getLogger(__name__) NS = 'nova.hooks' _HOOKS = {} # hook name => hook manager class FatalHookException(Exception): """Exception which should be raised by hooks to indicate that normal execution of the hooked function should be terminated. Raised exception will be logged and reraised. """ pass class HookManager(stevedore.hook.HookManager): def __init__(self, name): """Invoke_on_load creates an instance of the Hook class :param name: The name of the hooks to load. :type name: str """ super(HookManager, self).__init__(NS, name, invoke_on_load=True) def _run(self, name, method_type, args, kwargs, func=None): if method_type not in ('pre', 'post'): msg = _("Wrong type of hook method. 
" "Only 'pre' and 'post' type allowed") raise ValueError(msg) for e in self.extensions: obj = e.obj hook_method = getattr(obj, method_type, None) if hook_method: LOG.warning(_LW("Hooks are deprecated as of Nova 13.0 and " "will be removed in a future release")) LOG.debug("Running %(name)s %(type)s-hook: %(obj)s", {'name': name, 'type': method_type, 'obj': obj}) try: if func: hook_method(func, *args, **kwargs) else: hook_method(*args, **kwargs) except FatalHookException: msg = _LE("Fatal Exception running %(name)s " "%(type)s-hook: %(obj)s") LOG.exception(msg, {'name': name, 'type': method_type, 'obj': obj}) raise except Exception: msg = _LE("Exception running %(name)s " "%(type)s-hook: %(obj)s") LOG.exception(msg, {'name': name, 'type': method_type, 'obj': obj}) def run_pre(self, name, args, kwargs, f=None): """Execute optional pre methods of loaded hooks. :param name: The name of the loaded hooks. :param args: Positional arguments which would be transmitted into all pre methods of loaded hooks. :param kwargs: Keyword args which would be transmitted into all pre methods of loaded hooks. :param f: Target function. """ self._run(name=name, method_type='pre', args=args, kwargs=kwargs, func=f) def run_post(self, name, rv, args, kwargs, f=None): """Execute optional post methods of loaded hooks. :param name: The name of the loaded hooks. :param rv: Return values of target method call. :param args: Positional arguments which would be transmitted into all post methods of loaded hooks. :param kwargs: Keyword args which would be transmitted into all post methods of loaded hooks. :param f: Target function. """ self._run(name=name, method_type='post', args=(rv,) + args, kwargs=kwargs, func=f) def add_hook(name, pass_function=False): """Execute optional pre and post methods around the decorated function. This is useful for customization around callables. """ def outer(f): f.__hook_name__ = name @functools.wraps(f) def inner(*args, **kwargs): manager = _HOOKS.setdefault(name, HookManager(name)) function = None if pass_function: function = f manager.run_pre(name, args, kwargs, f=function) rv = f(*args, **kwargs) manager.run_post(name, rv, args, kwargs, f=function) return rv return inner return outer def reset(): """Clear loaded hooks.""" _HOOKS.clear() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/i18n.py0000664000175000017500000000250100000000000014660 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """oslo.i18n integration module. See https://docs.openstack.org/oslo.i18n/latest/user/index.html . """ import oslo_i18n DOMAIN = 'nova' _translators = oslo_i18n.TranslatorFactory(domain=DOMAIN) # The primary translation function using the well-known name "_" _ = _translators.primary # Translators for log levels. # # The abbreviated names are meant to reflect the usual use of a short # name like '_'. The "L" is for "log" and the other letter comes from # the level. 
_LI = _translators.log_info _LW = _translators.log_warning _LE = _translators.log_error _LC = _translators.log_critical def translate(value, user_locale): return oslo_i18n.translate(value, user_locale) def get_available_languages(): return oslo_i18n.get_available_languages(DOMAIN) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3504698 nova-21.2.4/nova/image/0000775000175000017500000000000000000000000014613 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/image/__init__.py0000664000175000017500000000000000000000000016712 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3504698 nova-21.2.4/nova/image/download/0000775000175000017500000000000000000000000016422 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/image/download/__init__.py0000664000175000017500000000423300000000000020535 0ustar00zuulzuul00000000000000# Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging import stevedore.driver import stevedore.extension LOG = logging.getLogger(__name__) def load_transfer_modules(): module_dictionary = {} ex = stevedore.extension.ExtensionManager('nova.image.download.modules') for module_name in ex.names(): mgr = stevedore.driver.DriverManager( namespace='nova.image.download.modules', name=module_name, invoke_on_load=False) schemes_list = mgr.driver.get_schemes() for scheme in schemes_list: if scheme in module_dictionary: LOG.error('%(scheme)s is registered as a module twice. ' '%(module_name)s is not being used.', {'scheme': scheme, 'module_name': module_name}) else: module_dictionary[scheme] = mgr.driver if module_dictionary: LOG.warning('The nova.image.download.modules extension point is ' 'deprecated for removal starting in the 17.0.0 Queens ' 'release and may be removed as early as the 18.0.0 Rocky ' 'release. It is not maintained and there is no indication ' 'of its use in production clouds. If you are using this ' 'extension point, please make the nova development team ' 'aware by contacting us in the #openstack-nova freenode ' 'IRC channel or on the openstack-discuss mailing list.') return module_dictionary ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/image/glance.py0000664000175000017500000014366200000000000016432 0ustar00zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Implementation of an image service that uses Glance as the backend.""" from __future__ import absolute_import import copy import inspect import itertools import os import random import re import stat import sys import time import cryptography from cursive import certificate_utils from cursive import exception as cursive_exception from cursive import signature_utils import glanceclient import glanceclient.exc from glanceclient.v2 import schemas from keystoneauth1 import loading as ks_loading from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import excutils from oslo_utils import timeutils import six from six.moves import range import six.moves.urllib.parse as urlparse import nova.conf from nova import exception import nova.image.download as image_xfers from nova import objects from nova.objects import fields from nova import profiler from nova import service_auth from nova import utils LOG = logging.getLogger(__name__) CONF = nova.conf.CONF _SESSION = None def _session_and_auth(context): # Session is cached, but auth needs to be pulled from context each time. global _SESSION if not _SESSION: _SESSION = ks_loading.load_session_from_conf_options( CONF, nova.conf.glance.glance_group.name) auth = service_auth.get_auth_plugin(context) return _SESSION, auth def _glanceclient_from_endpoint(context, endpoint, version): sess, auth = _session_and_auth(context) return glanceclient.Client(version, session=sess, auth=auth, endpoint_override=endpoint, global_request_id=context.global_id) def generate_glance_url(context): """Return a random glance url from the api servers we know about.""" return next(get_api_servers(context)) def _endpoint_from_image_ref(image_href): """Return the image_ref and guessed endpoint from an image url. :param image_href: href of an image :returns: a tuple of the form (image_id, endpoint_url) """ parts = image_href.split('/') image_id = parts[-1] # the endpoint is everything in the url except the last 3 bits # which are version, 'images', and image_id endpoint = '/'.join(parts[:-3]) return (image_id, endpoint) def generate_identity_headers(context, status='Confirmed'): return { 'X-Auth-Token': getattr(context, 'auth_token', None), 'X-User-Id': getattr(context, 'user_id', None), 'X-Tenant-Id': getattr(context, 'project_id', None), 'X-Roles': ','.join(getattr(context, 'roles', [])), 'X-Identity-Status': status, } def get_api_servers(context): """Shuffle a list of service endpoints and return an iterator that will cycle through the list, looping around to the beginning if necessary. """ # NOTE(efried): utils.get_ksa_adapter().get_endpoint() is the preferred # mechanism for endpoint discovery. Only use `api_servers` if you really # need to shuffle multiple endpoints. 
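# Illustrative sketch (not in the original source): callers consume the return
# value as an endless iterator, e.g.
#     servers = get_api_servers(context)
#     endpoint = next(servers)  # one glance endpoint, shuffled if several
# generate_glance_url() above is simply next(get_api_servers(context)).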
if CONF.glance.api_servers: api_servers = CONF.glance.api_servers random.shuffle(api_servers) else: sess, auth = _session_and_auth(context) ksa_adap = utils.get_ksa_adapter( nova.conf.glance.DEFAULT_SERVICE_TYPE, ksa_auth=auth, ksa_session=sess, min_version='2.0', max_version='2.latest') endpoint = utils.get_endpoint(ksa_adap) if endpoint: # NOTE(mriedem): Due to python-glanceclient bug 1707995 we have # to massage the endpoint URL otherwise it won't work properly. # We can't use glanceclient.common.utils.strip_version because # of bug 1748009. endpoint = re.sub(r'/v\d+(\.\d+)?/?$', '/', endpoint) api_servers = [endpoint] return itertools.cycle(api_servers) class GlanceClientWrapper(object): """Glance client wrapper class that implements retries.""" def __init__(self, context=None, endpoint=None): version = 2 if endpoint is not None: self.client = self._create_static_client(context, endpoint, version) else: self.client = None self.api_servers = None def _create_static_client(self, context, endpoint, version): """Create a client that we'll use for every call.""" self.api_server = str(endpoint) return _glanceclient_from_endpoint(context, endpoint, version) def _create_onetime_client(self, context, version): """Create a client that will be used for one call.""" if self.api_servers is None: self.api_servers = get_api_servers(context) self.api_server = next(self.api_servers) return _glanceclient_from_endpoint(context, self.api_server, version) def call(self, context, version, method, controller=None, args=None, kwargs=None): """Call a glance client method. If we get a connection error, retry the request according to CONF.glance.num_retries. :param context: RequestContext to use :param version: Numeric version of the *Glance API* to use :param method: string method name to execute on the glanceclient :param controller: optional string name of the client controller to use. Default (None) is to use the 'images' controller :param args: optional iterable of arguments to pass to the glanceclient method, splatted as positional args :param kwargs: optional dict of arguments to pass to the glanceclient, splatted into named arguments """ args = args or [] kwargs = kwargs or {} retry_excs = (glanceclient.exc.ServiceUnavailable, glanceclient.exc.InvalidEndpoint, glanceclient.exc.CommunicationError) num_attempts = 1 + CONF.glance.num_retries controller_name = controller or 'images' for attempt in range(1, num_attempts + 1): client = self.client or self._create_onetime_client(context, version) try: controller = getattr(client, controller_name) result = getattr(controller, method)(*args, **kwargs) if inspect.isgenerator(result): # Convert generator results to a list, so that we can # catch any potential exceptions now and retry the call. 
return list(result) return result except retry_excs as e: if attempt < num_attempts: extra = "retrying" else: extra = 'done trying' LOG.exception("Error contacting glance server " "'%(server)s' for '%(method)s', " "%(extra)s.", {'server': self.api_server, 'method': method, 'extra': extra}) if attempt == num_attempts: raise exception.GlanceConnectionFailed( server=str(self.api_server), reason=six.text_type(e)) time.sleep(1) class GlanceImageServiceV2(object): """Provides storage and retrieval of disk image objects within Glance.""" def __init__(self, client=None): self._client = client or GlanceClientWrapper() # NOTE(jbresnah) build the table of download handlers at the beginning # so that operators can catch errors at load time rather than whenever # a user attempts to use a module. Note this cannot be done in glance # space when this python module is loaded because the download module # may require configuration options to be parsed. self._download_handlers = {} download_modules = image_xfers.load_transfer_modules() for scheme, mod in download_modules.items(): if scheme not in CONF.glance.allowed_direct_url_schemes: continue try: self._download_handlers[scheme] = mod.get_download_handler() except Exception as ex: LOG.error('When loading the module %(module_str)s the ' 'following error occurred: %(ex)s', {'module_str': str(mod), 'ex': ex}) def show(self, context, image_id, include_locations=False, show_deleted=True): """Returns a dict with image data for the given opaque image id. :param context: The context object to pass to image client :param image_id: The UUID of the image :param include_locations: (Optional) include locations in the returned dict of information if the image service API supports it. If the image service API does not support the locations attribute, it will still be included in the returned dict, as an empty list. :param show_deleted: (Optional) show the image even the status of image is deleted. """ try: image = self._client.call(context, 2, 'get', args=(image_id,)) except Exception: _reraise_translated_image_exception(image_id) if not show_deleted and getattr(image, 'deleted', False): raise exception.ImageNotFound(image_id=image_id) if not _is_image_available(context, image): raise exception.ImageNotFound(image_id=image_id) image = _translate_from_glance(image, include_locations=include_locations) if include_locations: locations = image.get('locations', None) or [] du = image.get('direct_url', None) if du: locations.append({'url': du, 'metadata': {}}) image['locations'] = locations return image def _get_transfer_module(self, scheme): try: return self._download_handlers[scheme] except KeyError: return None except Exception: LOG.error("Failed to instantiate the download handler " "for %(scheme)s", {'scheme': scheme}) return def detail(self, context, **kwargs): """Calls out to Glance for a list of detailed image information.""" params = _extract_query_params_v2(kwargs) try: images = self._client.call(context, 2, 'list', kwargs=params) except Exception: _reraise_translated_exception() _images = [] for image in images: if _is_image_available(context, image): _images.append(_translate_from_glance(image)) return _images @staticmethod def _safe_fsync(fh): """Performs os.fsync on a filehandle only if it is supported. fsync on a pipe, FIFO, or socket raises OSError with EINVAL. This method discovers whether the target filehandle is one of these types and only performs fsync if it isn't. :param fh: Open filehandle (not a path or fileno) to maybe fsync. 
""" fileno = fh.fileno() mode = os.fstat(fileno).st_mode # A pipe answers True to S_ISFIFO if not any(check(mode) for check in (stat.S_ISFIFO, stat.S_ISSOCK)): os.fsync(fileno) def download(self, context, image_id, data=None, dst_path=None, trusted_certs=None): """Calls out to Glance for data and writes data.""" if CONF.glance.allowed_direct_url_schemes and dst_path is not None: image = self.show(context, image_id, include_locations=True) for entry in image.get('locations', []): loc_url = entry['url'] loc_meta = entry['metadata'] o = urlparse.urlparse(loc_url) xfer_mod = self._get_transfer_module(o.scheme) if xfer_mod: try: xfer_mod.download(context, o, dst_path, loc_meta) LOG.info("Successfully transferred using %s", o.scheme) return except Exception: LOG.exception("Download image error") try: image_chunks = self._client.call( context, 2, 'data', args=(image_id,)) except Exception: _reraise_translated_image_exception(image_id) if image_chunks.wrapped is None: # None is a valid return value, but there's nothing we can do with # a image with no associated data raise exception.ImageUnacceptable(image_id=image_id, reason='Image has no associated data') # Retrieve properties for verification of Glance image signature verifier = self._get_verifier(context, image_id, trusted_certs) close_file = False if data is None and dst_path: data = open(dst_path, 'wb') close_file = True if data is None: # Perform image signature verification if verifier: try: for chunk in image_chunks: verifier.update(chunk) verifier.verify() LOG.info('Image signature verification succeeded ' 'for image: %s', image_id) except cryptography.exceptions.InvalidSignature: with excutils.save_and_reraise_exception(): LOG.error('Image signature verification failed ' 'for image: %s', image_id) return image_chunks else: try: for chunk in image_chunks: if verifier: verifier.update(chunk) data.write(chunk) if verifier: verifier.verify() LOG.info('Image signature verification succeeded ' 'for image %s', image_id) except cryptography.exceptions.InvalidSignature: data.truncate(0) with excutils.save_and_reraise_exception(): LOG.error('Image signature verification failed ' 'for image: %s', image_id) except Exception as ex: with excutils.save_and_reraise_exception(): LOG.error("Error writing to %(path)s: %(exception)s", {'path': dst_path, 'exception': ex}) finally: if close_file: # Ensure that the data is pushed all the way down to # persistent storage. This ensures that in the event of a # subsequent host crash we don't have running instances # using a corrupt backing file. data.flush() self._safe_fsync(data) data.close() def _get_verifier(self, context, image_id, trusted_certs): verifier = None # Use the default certs if the user didn't provide any (and there are # default certs configured). 
if (not trusted_certs and CONF.glance.enable_certificate_validation and CONF.glance.default_trusted_certificate_ids): trusted_certs = objects.TrustedCerts( ids=CONF.glance.default_trusted_certificate_ids) # Verify image signature if feature is enabled or trusted # certificates were provided if trusted_certs or CONF.glance.verify_glance_signatures: image_meta_dict = self.show(context, image_id, include_locations=False) image_meta = objects.ImageMeta.from_dict(image_meta_dict) img_signature = image_meta.properties.get('img_signature') img_sig_hash_method = image_meta.properties.get( 'img_signature_hash_method' ) img_sig_cert_uuid = image_meta.properties.get( 'img_signature_certificate_uuid' ) img_sig_key_type = image_meta.properties.get( 'img_signature_key_type' ) try: verifier = signature_utils.get_verifier( context=context, img_signature_certificate_uuid=img_sig_cert_uuid, img_signature_hash_method=img_sig_hash_method, img_signature=img_signature, img_signature_key_type=img_sig_key_type, ) except cursive_exception.SignatureVerificationError: with excutils.save_and_reraise_exception(): LOG.error('Image signature verification failed ' 'for image: %s', image_id) # Validate image signature certificate if trusted certificates # were provided # NOTE(jackie-truong): Certificate validation will occur if # trusted_certs are provided, even if the certificate validation # feature is disabled. This is to provide safety for the user. # We may want to consider making this a "soft" check in the future. if trusted_certs: _verify_certs(context, img_sig_cert_uuid, trusted_certs) elif CONF.glance.enable_certificate_validation: msg = ('Image signature certificate validation enabled, ' 'but no trusted certificate IDs were provided. ' 'Unable to validate the certificate used to ' 'verify the image signature.') LOG.warning(msg) raise exception.CertificateValidationFailed(msg) else: LOG.debug('Certificate validation was not performed. A list ' 'of trusted image certificate IDs must be provided ' 'in order to validate an image certificate.') return verifier def create(self, context, image_meta, data=None): """Store the image data and return the new image object.""" # Here we workaround the situation when user wants to activate an # empty image right after the creation. In Glance v1 api (and # therefore in Nova) it is enough to set 'size = 0'. v2 api # doesn't allow this hack - we have to send an upload request with # empty data. force_activate = data is None and image_meta.get('size') == 0 # The "instance_owner" property is set in the API if a user, who is # not the owner of an instance, is creating the image, e.g. admin # snapshots or shelves another user's instance. This is used to add # member access to the image for the instance owner. sharing_member_id = image_meta.get('properties', {}).pop( 'instance_owner', None) sent_service_image_meta = _translate_to_glance(image_meta) try: image = self._create_v2(context, sent_service_image_meta, data, force_activate, sharing_member_id=sharing_member_id) except glanceclient.exc.HTTPException: _reraise_translated_exception() return _translate_from_glance(image) def _add_location(self, context, image_id, location): # 'show_multiple_locations' must be enabled in glance api conf file. 
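        # An illustrative glance-api.conf snippet for that (deployment
        # specific, shown only as an example):
        #
        #   [DEFAULT]
        #   show_multiple_locations = True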
try: return self._client.call( context, 2, 'add_location', args=(image_id, location, {})) except glanceclient.exc.HTTPBadRequest: _reraise_translated_exception() def _add_image_member(self, context, image_id, member_id): """Grant access to another project that does not own the image :param context: nova auth RequestContext where context.project_id is the owner of the image :param image_id: ID of the image on which to grant access :param member_id: ID of the member project to grant access to the image; this should not be the owner of the image :returns: A Member schema object of the created image member """ try: return self._client.call( context, 2, 'create', controller='image_members', args=(image_id, member_id)) except glanceclient.exc.HTTPBadRequest: _reraise_translated_exception() def _upload_data(self, context, image_id, data): self._client.call(context, 2, 'upload', args=(image_id, data)) return self._client.call(context, 2, 'get', args=(image_id,)) def _get_image_create_disk_format_default(self, context): """Gets an acceptable default image disk_format based on the schema. """ # These preferred disk formats are in order: # 1. we want qcow2 if possible (at least for backward compat) # 2. vhd for xenapi and hyperv # 3. vmdk for vmware # 4. raw should be universally accepted preferred_disk_formats = ( fields.DiskFormat.QCOW2, fields.DiskFormat.VHD, fields.DiskFormat.VMDK, fields.DiskFormat.RAW, ) # Get the image schema - note we don't cache this value since it could # change under us. This looks a bit funky, but what it's basically # doing is calling glanceclient.v2.Client.schemas.get('image'). image_schema = self._client.call( context, 2, 'get', args=('image',), controller='schemas') # get the disk_format schema property from the raw schema disk_format_schema = ( image_schema.raw()['properties'].get('disk_format') if image_schema else {} ) if disk_format_schema and 'enum' in disk_format_schema: supported_disk_formats = disk_format_schema['enum'] # try a priority ordered list for preferred_format in preferred_disk_formats: if preferred_format in supported_disk_formats: return preferred_format # alright, let's just return whatever is available LOG.debug('Unable to find a preferred disk_format for image ' 'creation with the Image Service v2 API. Using: %s', supported_disk_formats[0]) return supported_disk_formats[0] LOG.warning('Unable to determine disk_format schema from the ' 'Image Service v2 API. Defaulting to ' '%(preferred_disk_format)s.', {'preferred_disk_format': preferred_disk_formats[0]}) return preferred_disk_formats[0] def _create_v2(self, context, sent_service_image_meta, data=None, force_activate=False, sharing_member_id=None): # Glance v1 allows image activation without setting disk and # container formats, v2 doesn't. It leads to the dirtiest workaround # where we have to hardcode this parameters. if force_activate: data = '' if 'disk_format' not in sent_service_image_meta: sent_service_image_meta['disk_format'] = ( self._get_image_create_disk_format_default(context) ) if 'container_format' not in sent_service_image_meta: sent_service_image_meta['container_format'] = 'bare' location = sent_service_image_meta.pop('location', None) image = self._client.call( context, 2, 'create', kwargs=sent_service_image_meta) image_id = image['id'] # Sending image location in a separate request. if location: image = self._add_location(context, image_id, location) # Add image membership in a separate request. 
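        # Roughly, the full v2 creation sequence performed by this method is:
        #
        #   image = self._client.call(context, 2, 'create',
        #                             kwargs=sent_service_image_meta)
        #   image = self._add_location(context, image['id'], location)
        #   self._add_image_member(context, image['id'], sharing_member_id)
        #   image = self._upload_data(context, image['id'], data)
        #
        # where the location, membership and upload steps only happen when
        # the corresponding values were supplied.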
if sharing_member_id: LOG.debug('Adding access for member %s to image %s', sharing_member_id, image_id) self._add_image_member(context, image_id, sharing_member_id) # If we have some data we have to send it in separate request and # update the image then. if data is not None: image = self._upload_data(context, image_id, data) return image def update(self, context, image_id, image_meta, data=None, purge_props=True): """Modify the given image with the new data.""" sent_service_image_meta = _translate_to_glance(image_meta) # NOTE(bcwaldon): id is not an editable field, but it is likely to be # passed in by calling code. Let's be nice and ignore it. sent_service_image_meta.pop('id', None) sent_service_image_meta['image_id'] = image_id try: if purge_props: # In Glance v2 we have to explicitly set prop names # we want to remove. all_props = set(self.show( context, image_id)['properties'].keys()) props_to_update = set( image_meta.get('properties', {}).keys()) remove_props = list(all_props - props_to_update) sent_service_image_meta['remove_props'] = remove_props image = self._update_v2(context, sent_service_image_meta, data) except Exception: _reraise_translated_image_exception(image_id) return _translate_from_glance(image) def _update_v2(self, context, sent_service_image_meta, data=None): location = sent_service_image_meta.pop('location', None) image_id = sent_service_image_meta['image_id'] image = self._client.call( context, 2, 'update', kwargs=sent_service_image_meta) # Sending image location in a separate request. if location: image = self._add_location(context, image_id, location) # If we have some data we have to send it in separate request and # update the image then. if data is not None: image = self._upload_data(context, image_id, data) return image def delete(self, context, image_id): """Delete the given image. :raises: ImageNotFound if the image does not exist. :raises: NotAuthorized if the user is not an owner. :raises: ImageNotAuthorized if the user is not authorized. :raises: ImageDeleteConflict if the image is conflicted to delete. 
""" try: self._client.call(context, 2, 'delete', args=(image_id,)) except glanceclient.exc.NotFound: raise exception.ImageNotFound(image_id=image_id) except glanceclient.exc.HTTPForbidden: raise exception.ImageNotAuthorized(image_id=image_id) except glanceclient.exc.HTTPConflict as exc: raise exception.ImageDeleteConflict(reason=six.text_type(exc)) return True def _extract_query_params_v2(params): _params = {} accepted_params = ('filters', 'marker', 'limit', 'page_size', 'sort_key', 'sort_dir') for param in accepted_params: if params.get(param): _params[param] = params.get(param) # ensure filters is a dict _params.setdefault('filters', {}) # NOTE(vish): don't filter out private images _params['filters'].setdefault('is_public', 'none') # adopt filters to be accepted by glance v2 api filters = _params['filters'] new_filters = {} for filter_ in filters: # remove 'property-' prefix from filters by custom properties if filter_.startswith('property-'): new_filters[filter_.lstrip('property-')] = filters[filter_] elif filter_ == 'changes-since': # convert old 'changes-since' into new 'updated_at' filter updated_at = 'gte:' + filters['changes-since'] new_filters['updated_at'] = updated_at elif filter_ == 'is_public': # convert old 'is_public' flag into 'visibility' filter # omit the filter if is_public is None is_public = filters['is_public'] if is_public.lower() in ('true', '1'): new_filters['visibility'] = 'public' elif is_public.lower() in ('false', '0'): new_filters['visibility'] = 'private' else: new_filters[filter_] = filters[filter_] _params['filters'] = new_filters return _params def _is_image_available(context, image): """Check image availability. This check is needed in case Nova and Glance are deployed without authentication turned on. """ # The presence of an auth token implies this is an authenticated # request and we need not handle the noauth use-case. if hasattr(context, 'auth_token') and context.auth_token: return True def _is_image_public(image): # NOTE(jaypipes) V2 Glance API replaced the is_public attribute # with a visibility attribute. We do this here to prevent the # glanceclient for a V2 image model from throwing an # exception from warlock when trying to access an is_public # attribute. if hasattr(image, 'visibility'): return str(image.visibility).lower() == 'public' else: return image.is_public if context.is_admin or _is_image_public(image): return True properties = image.properties if context.project_id and ('owner_id' in properties): return str(properties['owner_id']) == str(context.project_id) if context.project_id and ('project_id' in properties): return str(properties['project_id']) == str(context.project_id) try: user_id = properties['user_id'] except KeyError: return False return str(user_id) == str(context.user_id) def _translate_to_glance(image_meta): image_meta = _convert_to_string(image_meta) image_meta = _remove_read_only(image_meta) image_meta = _convert_to_v2(image_meta) return image_meta def _convert_to_v2(image_meta): output = {} for name, value in image_meta.items(): if name == 'properties': for prop_name, prop_value in value.items(): # if allow_additional_image_properties is disabled we can't # define kernel_id and ramdisk_id as None, so we have to omit # these properties if they are not set. 
if prop_name in ('kernel_id', 'ramdisk_id') and \ prop_value is not None and \ prop_value.strip().lower() in ('none', ''): continue # in glance only string and None property values are allowed, # v1 client accepts any values and converts them to string, # v2 doesn't - so we have to take care of it. elif prop_value is None or isinstance( prop_value, six.string_types): output[prop_name] = prop_value else: output[prop_name] = str(prop_value) elif name in ('min_ram', 'min_disk'): output[name] = int(value) elif name == 'is_public': output['visibility'] = 'public' if value else 'private' elif name in ('size', 'deleted'): continue else: output[name] = value return output def _translate_from_glance(image, include_locations=False): image_meta = _extract_attributes_v2( image, include_locations=include_locations) image_meta = _convert_timestamps_to_datetimes(image_meta) image_meta = _convert_from_string(image_meta) return image_meta def _convert_timestamps_to_datetimes(image_meta): """Returns image with timestamp fields converted to datetime objects.""" for attr in ['created_at', 'updated_at', 'deleted_at']: if image_meta.get(attr): image_meta[attr] = timeutils.parse_isotime(image_meta[attr]) return image_meta # NOTE(bcwaldon): used to store non-string data in glance metadata def _json_loads(properties, attr): prop = properties[attr] if isinstance(prop, six.string_types): properties[attr] = jsonutils.loads(prop) def _json_dumps(properties, attr): prop = properties[attr] if not isinstance(prop, six.string_types): properties[attr] = jsonutils.dumps(prop) _CONVERT_PROPS = ('block_device_mapping', 'mappings') def _convert(method, metadata): metadata = copy.deepcopy(metadata) properties = metadata.get('properties') if properties: for attr in _CONVERT_PROPS: if attr in properties: method(properties, attr) return metadata def _convert_from_string(metadata): return _convert(_json_loads, metadata) def _convert_to_string(metadata): return _convert(_json_dumps, metadata) def _extract_attributes(image, include_locations=False): # TODO(mfedosin): Remove this function once we move to glance V2 # completely. # NOTE(hdd): If a key is not found, base.Resource.__getattr__() may perform # a get(), resulting in a useless request back to glance. This list is # therefore sorted, with dependent attributes as the end # 'deleted_at' depends on 'deleted' # 'checksum' depends on 'status' == 'active' IMAGE_ATTRIBUTES = ['size', 'disk_format', 'owner', 'container_format', 'status', 'id', 'name', 'created_at', 'updated_at', 'deleted', 'deleted_at', 'checksum', 'min_disk', 'min_ram', 'is_public', 'direct_url', 'locations'] queued = getattr(image, 'status') == 'queued' queued_exclude_attrs = ['disk_format', 'container_format'] include_locations_attrs = ['direct_url', 'locations'] output = {} for attr in IMAGE_ATTRIBUTES: if attr == 'deleted_at' and not output['deleted']: output[attr] = None elif attr == 'checksum' and output['status'] != 'active': output[attr] = None # image may not have 'name' attr elif attr == 'name': output[attr] = getattr(image, attr, None) # NOTE(liusheng): queued image may not have these attributes and 'name' elif queued and attr in queued_exclude_attrs: output[attr] = getattr(image, attr, None) # NOTE(mriedem): Only get location attrs if including locations. 
elif attr in include_locations_attrs: if include_locations: output[attr] = getattr(image, attr, None) # NOTE(mdorman): 'size' attribute must not be 'None', so use 0 instead elif attr == 'size': # NOTE(mriedem): A snapshot image may not have the size attribute # set so default to 0. output[attr] = getattr(image, attr, 0) or 0 else: # NOTE(xarses): Anything that is caught with the default value # will result in an additional lookup to glance for said attr. # Notable attributes that could have this issue: # disk_format, container_format, name, deleted, checksum output[attr] = getattr(image, attr, None) output['properties'] = getattr(image, 'properties', {}) return output def _extract_attributes_v2(image, include_locations=False): include_locations_attrs = ['direct_url', 'locations'] omit_attrs = ['self', 'schema', 'protected', 'virtual_size', 'file', 'tags'] raw_schema = image.schema schema = schemas.Schema(raw_schema) output = {'properties': {}, 'deleted': False, 'deleted_at': None, 'disk_format': None, 'container_format': None, 'name': None, 'checksum': None} for name, value in image.items(): if (name in omit_attrs or name in include_locations_attrs and not include_locations): continue elif name == 'visibility': output['is_public'] = value == 'public' elif name == 'size' and value is None: output['size'] = 0 elif schema.is_base_property(name): output[name] = value else: output['properties'][name] = value return output def _remove_read_only(image_meta): IMAGE_ATTRIBUTES = ['status', 'updated_at', 'created_at', 'deleted_at'] output = copy.deepcopy(image_meta) for attr in IMAGE_ATTRIBUTES: if attr in output: del output[attr] return output def _reraise_translated_image_exception(image_id): """Transform the exception for the image but keep its traceback intact.""" exc_type, exc_value, exc_trace = sys.exc_info() new_exc = _translate_image_exception(image_id, exc_value) six.reraise(type(new_exc), new_exc, exc_trace) def _reraise_translated_exception(): """Transform the exception but keep its traceback intact.""" exc_type, exc_value, exc_trace = sys.exc_info() new_exc = _translate_plain_exception(exc_value) six.reraise(type(new_exc), new_exc, exc_trace) def _translate_image_exception(image_id, exc_value): if isinstance(exc_value, (glanceclient.exc.Forbidden, glanceclient.exc.Unauthorized)): return exception.ImageNotAuthorized(image_id=image_id) if isinstance(exc_value, glanceclient.exc.NotFound): return exception.ImageNotFound(image_id=image_id) if isinstance(exc_value, glanceclient.exc.BadRequest): return exception.ImageBadRequest(image_id=image_id, response=six.text_type(exc_value)) if isinstance(exc_value, glanceclient.exc.HTTPOverLimit): return exception.ImageQuotaExceeded(image_id=image_id) return exc_value def _translate_plain_exception(exc_value): if isinstance(exc_value, (glanceclient.exc.Forbidden, glanceclient.exc.Unauthorized)): return exception.Forbidden(six.text_type(exc_value)) if isinstance(exc_value, glanceclient.exc.NotFound): return exception.NotFound(six.text_type(exc_value)) if isinstance(exc_value, glanceclient.exc.BadRequest): return exception.Invalid(six.text_type(exc_value)) return exc_value def _verify_certs(context, img_sig_cert_uuid, trusted_certs): try: certificate_utils.verify_certificate( context=context, certificate_uuid=img_sig_cert_uuid, trusted_certificate_uuids=trusted_certs.ids) LOG.debug('Image signature certificate validation ' 'succeeded for certificate: %s', img_sig_cert_uuid) except cursive_exception.SignatureVerificationError as e: LOG.warning('Image 
signature certificate validation ' 'failed for certificate: %s', img_sig_cert_uuid) raise exception.CertificateValidationFailed( cert_uuid=img_sig_cert_uuid, reason=six.text_type(e)) def get_remote_image_service(context, image_href): """Create an image_service and parse the id from the given image_href. The image_href param can be an href of the form 'http://example.com:9292/v1/images/b8b2c6f7-7345-4e2f-afa2-eedaba9cbbe3', or just an id such as 'b8b2c6f7-7345-4e2f-afa2-eedaba9cbbe3'. If the image_href is a standalone id, then the default image service is returned. :param image_href: href that describes the location of an image :returns: a tuple of the form (image_service, image_id) """ # NOTE(bcwaldon): If image_href doesn't look like a URI, assume its a # standalone image ID if '/' not in str(image_href): image_service = get_default_image_service() return image_service, image_href try: (image_id, endpoint) = _endpoint_from_image_ref(image_href) glance_client = GlanceClientWrapper(context=context, endpoint=endpoint) except ValueError: raise exception.InvalidImageRef(image_href=image_href) image_service = GlanceImageServiceV2(client=glance_client) return image_service, image_id def get_default_image_service(): return GlanceImageServiceV2() class UpdateGlanceImage(object): def __init__(self, context, image_id, metadata, stream): self.context = context self.image_id = image_id self.metadata = metadata self.image_stream = stream def start(self): image_service, image_id = get_remote_image_service( self.context, self.image_id) image_service.update(self.context, image_id, self.metadata, self.image_stream, purge_props=False) @profiler.trace_cls("nova_image") class API(object): """API for interacting with the image service.""" def _get_session_and_image_id(self, context, id_or_uri): """Returns a tuple of (session, image_id). If the supplied `id_or_uri` is an image ID, then the default client session will be returned for the context's user, along with the image ID. If the supplied `id_or_uri` parameter is a URI, then a client session connecting to the URI's image service endpoint will be returned along with a parsed image ID from that URI. :param context: The `nova.context.Context` object for the request :param id_or_uri: A UUID identifier or an image URI to look up image information for. """ return get_remote_image_service(context, id_or_uri) def _get_session(self, _context): """Returns a client session that can be used to query for image information. :param _context: The `nova.context.Context` object for the request """ # TODO(jaypipes): Refactor get_remote_image_service and # get_default_image_service into a single # method that takes a context and actually respects # it, returning a real session object that keeps # the context alive... return get_default_image_service() @staticmethod def generate_image_url(image_ref, context): """Generate an image URL from an image_ref. :param image_ref: The image ref to generate URL :param context: The `nova.context.Context` object for the request """ return "%s/images/%s" % (next(get_api_servers(context)), image_ref) def get_all(self, context, **kwargs): """Retrieves all information records about all disk images available to show to the requesting user. If the requesting user is an admin, all images in an ACTIVE status are returned. If the requesting user is not an admin, the all public images and all private images that are owned by the requesting user in the ACTIVE status are returned. 
:param context: The `nova.context.Context` object for the request :param kwargs: A dictionary of filter and pagination values that may be passed to the underlying image info driver. """ session = self._get_session(context) return session.detail(context, **kwargs) def get(self, context, id_or_uri, include_locations=False, show_deleted=True): """Retrieves the information record for a single disk image. If the supplied identifier parameter is a UUID, the default driver will be used to return information about the image. If the supplied identifier is a URI, then the driver that matches that URI endpoint will be used to query for image information. :param context: The `nova.context.Context` object for the request :param id_or_uri: A UUID identifier or an image URI to look up image information for. :param include_locations: (Optional) include locations in the returned dict of information if the image service API supports it. If the image service API does not support the locations attribute, it will still be included in the returned dict, as an empty list. :param show_deleted: (Optional) show the image even the status of image is deleted. """ session, image_id = self._get_session_and_image_id(context, id_or_uri) return session.show(context, image_id, include_locations=include_locations, show_deleted=show_deleted) def create(self, context, image_info, data=None): """Creates a new image record, optionally passing the image bits to backend storage. :param context: The `nova.context.Context` object for the request :param image_info: A dict of information about the image that is passed to the image registry. :param data: Optional file handle or bytestream iterator that is passed to backend storage. """ session = self._get_session(context) return session.create(context, image_info, data=data) def update(self, context, id_or_uri, image_info, data=None, purge_props=False): """Update the information about an image, optionally along with a file handle or bytestream iterator for image bits. If the optional file handle for updated image bits is supplied, the image may not have already uploaded bits for the image. :param context: The `nova.context.Context` object for the request :param id_or_uri: A UUID identifier or an image URI to look up image information for. :param image_info: A dict of information about the image that is passed to the image registry. :param data: Optional file handle or bytestream iterator that is passed to backend storage. :param purge_props: Optional, defaults to False. If set, the backend image registry will clear all image properties and replace them the image properties supplied in the image_info dictionary's 'properties' collection. """ session, image_id = self._get_session_and_image_id(context, id_or_uri) return session.update(context, image_id, image_info, data=data, purge_props=purge_props) def delete(self, context, id_or_uri): """Delete the information about an image and mark the image bits for deletion. :param context: The `nova.context.Context` object for the request :param id_or_uri: A UUID identifier or an image URI to look up image information for. """ session, image_id = self._get_session_and_image_id(context, id_or_uri) return session.delete(context, image_id) def download(self, context, id_or_uri, data=None, dest_path=None, trusted_certs=None): """Transfer image bits from Glance or a known source location to the supplied destination filepath. 
:param context: The `nova.context.RequestContext` object for the request :param id_or_uri: A UUID identifier or an image URI to look up image information for. :param data: A file object to use in downloading image data. :param dest_path: Filepath to transfer image bits to. :param trusted_certs: A 'nova.objects.trusted_certs.TrustedCerts' object with a list of trusted image certificate IDs. Note that because of the poor design of the `glance.ImageService.download` method, the function returns different things depending on what arguments are passed to it. If a data argument is supplied but no dest_path is specified (only done in the XenAPI virt driver's image.utils module) then None is returned from the method. If the data argument is not specified but a destination path *is* specified, then a writeable file handle to the destination path is constructed in the method and the image bits written to that file, and again, None is returned from the method. If no data argument is supplied and no dest_path argument is supplied (VMWare and XenAPI virt drivers), then the method returns an iterator to the image bits that the caller uses to write to wherever location it wants. Finally, if the allow_direct_url_schemes CONF option is set to something, then the nova.image.download modules are used to attempt to do an SCP copy of the image bits from a file location to the dest_path and None is returned after retrying one or more download locations (libvirt and Hyper-V virt drivers through nova.virt.images.fetch). I think the above points to just how hacky/wacky all of this code is, and the reason it needs to be cleaned up and standardized across the virt driver callers. """ # TODO(jaypipes): Deprecate and remove this method entirely when we # move to a system that simply returns a file handle # to a bytestream iterator and allows the caller to # handle streaming/copying/zero-copy as they see fit. session, image_id = self._get_session_and_image_id(context, id_or_uri) return session.download(context, image_id, data=data, dst_path=dest_path, trusted_certs=trusted_certs) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3504698 nova-21.2.4/nova/keymgr/0000775000175000017500000000000000000000000015027 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/keymgr/__init__.py0000664000175000017500000000000000000000000017126 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/keymgr/conf_key_mgr.py0000664000175000017500000001157200000000000020051 0ustar00zuulzuul00000000000000# Copyright (c) 2013 The Johns Hopkins University/Applied Physics Laboratory # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ An implementation of a key manager that reads its key from the project's configuration options. 
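For example (an illustrative value only), the key is supplied as a hex
string in the [key_manager] section of the nova configuration:

    [key_manager]
    fixed_key = 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef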
This key manager implementation provides limited security, assuming that the key remains secret. Using the volume encryption feature as an example, encryption provides protection against a lost or stolen disk, assuming that the configuration file that contains the key is not stored on the disk. Encryption also protects the confidentiality of data as it is transmitted via iSCSI from the compute host to the storage host (again assuming that an attacker who intercepts the data does not know the secret key). Because this implementation uses a single, fixed key, it proffers no protection once that key is compromised. In particular, different volumes encrypted with a key provided by this key manager actually share the same encryption key so *any* volume can be decrypted once the fixed key is known. """ import binascii from castellan.common.objects import symmetric_key as key from castellan.key_manager import key_manager from oslo_log import log as logging import nova.conf from nova import exception from nova.i18n import _ CONF = nova.conf.CONF LOG = logging.getLogger(__name__) class ConfKeyManager(key_manager.KeyManager): """This key manager implementation supports all the methods specified by the key manager interface. This implementation creates a single key in response to all invocations of create_key. Side effects (e.g., raising exceptions) for each method are handled as specified by the key manager interface. """ def __init__(self, configuration): LOG.warning('This key manager is insecure and is not recommended ' 'for production deployments') super(ConfKeyManager, self).__init__(configuration) self.key_id = '00000000-0000-0000-0000-000000000000' self.conf = CONF if configuration is None else configuration if CONF.key_manager.fixed_key is None: raise ValueError(_('keymgr.fixed_key not defined')) self._hex_key = CONF.key_manager.fixed_key super(ConfKeyManager, self).__init__(configuration) def _get_key(self): key_bytes = bytes(binascii.unhexlify(self._hex_key)) return key.SymmetricKey('AES', len(key_bytes) * 8, key_bytes) def create_key(self, context, algorithm, length, **kwargs): """Creates a symmetric key. This implementation returns a UUID for the key read from the configuration file. A Forbidden exception is raised if the specified context is None. """ if context is None: raise exception.Forbidden() return self.key_id def create_key_pair(self, context, **kwargs): raise NotImplementedError( "ConfKeyManager does not support asymmetric keys") def store(self, context, managed_object, **kwargs): """Stores (i.e., registers) a key with the key manager.""" if context is None: raise exception.Forbidden() if managed_object != self._get_key(): raise exception.KeyManagerError( reason="cannot store arbitrary keys") return self.key_id def get(self, context, managed_object_id): """Retrieves the key identified by the specified id. This implementation returns the key that is associated with the specified UUID. A Forbidden exception is raised if the specified context is None; a KeyError is raised if the UUID is invalid. """ if context is None: raise exception.Forbidden() if managed_object_id != self.key_id: raise KeyError(str(managed_object_id) + " != " + str(self.key_id)) return self._get_key() def delete(self, context, managed_object_id): """Represents deleting the key. Because the ConfKeyManager has only one key, which is read from the configuration file, the key is not actually deleted when this is called. 
""" if context is None: raise exception.Forbidden() if managed_object_id != self.key_id: raise exception.KeyManagerError( reason="cannot delete non-existent key") LOG.warning("Not deleting key %s", managed_object_id) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/loadables.py0000664000175000017500000001036400000000000016035 0ustar00zuulzuul00000000000000# Copyright (c) 2011-2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Generic Loadable class support. Meant to be used by such things as scheduler filters and weights where we want to load modules from certain directories and find certain types of classes within those modules. Note that this is quite different than generic plugins and the pluginmanager code that exists elsewhere. Usage: Create a directory with an __init__.py with code such as: class SomeLoadableClass(object): pass class MyLoader(nova.loadables.BaseLoader) def __init__(self): super(MyLoader, self).__init__(SomeLoadableClass) If you create modules in the same directory and subclass SomeLoadableClass within them, MyLoader().get_all_classes() will return a list of such classes. """ import inspect import os import sys from oslo_utils import importutils from nova import exception class BaseLoader(object): def __init__(self, loadable_cls_type): mod = sys.modules[self.__class__.__module__] self.path = os.path.abspath(mod.__path__[0]) self.package = mod.__package__ self.loadable_cls_type = loadable_cls_type def _is_correct_class(self, obj): """Return whether an object is a class of the correct type and is not prefixed with an underscore. """ return (inspect.isclass(obj) and (not obj.__name__.startswith('_')) and issubclass(obj, self.loadable_cls_type)) def _get_classes_from_module(self, module_name): """Get the classes from a module that match the type we want.""" classes = [] module = importutils.import_module(module_name) for obj_name in dir(module): # Skip objects that are meant to be private. if obj_name.startswith('_'): continue itm = getattr(module, obj_name) if self._is_correct_class(itm): classes.append(itm) return classes def get_all_classes(self): """Get the classes of the type we want from all modules found in the directory that defines this class. """ classes = [] for dirpath, _, filenames in os.walk(self.path): relpath = os.path.relpath(dirpath, self.path) if relpath == '.': relpkg = '' else: relpkg = '.%s' % '.'.join(relpath.split(os.sep)) for fname in filenames: root, ext = os.path.splitext(fname) if ext != '.py' or root == '__init__': continue module_name = "%s%s.%s" % (self.package, relpkg, root) mod_classes = self._get_classes_from_module(module_name) classes.extend(mod_classes) return classes def get_matching_classes(self, loadable_class_names): """Get loadable classes from a list of names. Each name can be a full module path or the full path to a method that returns classes to use. 
The latter behavior is useful to specify a method that returns a list of classes to use in a default case. """ classes = [] for cls_name in loadable_class_names: obj = importutils.import_class(cls_name) if self._is_correct_class(obj): classes.append(obj) elif inspect.isfunction(obj): # Get list of classes from a function for cls in obj(): classes.append(cls) else: error_str = 'Not a class of the correct type' raise exception.ClassNotFound(class_name=cls_name, exception=error_str) return classes ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0664723 nova-21.2.4/nova/locale/0000775000175000017500000000000000000000000014770 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0624723 nova-21.2.4/nova/locale/cs/0000775000175000017500000000000000000000000015375 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3504698 nova-21.2.4/nova/locale/cs/LC_MESSAGES/0000775000175000017500000000000000000000000017162 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/locale/cs/LC_MESSAGES/nova.po0000664000175000017500000030360000000000000020467 0ustar00zuulzuul00000000000000# Translations template for nova. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the nova project. # # Translators: # David Soukup , 2013 # FIRST AUTHOR , 2011 # Jaroslav Lichtblau , 2014 # Zbyněk Schwarz , 2013,2015 # Andreas Jaeger , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: nova VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2020-04-27 16:23+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-04-12 06:05+0000\n" "Last-Translator: Copied by Zanata \n" "Language: cs\n" "Plural-Forms: nplurals=3; plural=(n==1) ? 0 : (n>=2 && n<=4) ? 1 : 2;\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 4.3.3\n" "Language-Team: Czech\n" #, python-format msgid "%(address)s is not a valid IP v4/6 address." msgstr "%(address)s není platná IP adresa v4/6." #, python-format msgid "" "%(binary)s attempted direct database access which is not allowed by policy" msgstr "" "%(binary)s se pokusil o přímý přístup k databázi, což není povoleno zásadou" #, python-format msgid "%(cidr)s is not a valid IP network." msgstr "%(cidr)s není platná IP adresa sítě." #, python-format msgid "%(field)s should not be part of the updates." msgstr "%(field)s by nemělo být součástí aktualizací." #, python-format msgid "%(memsize)d MB of memory assigned, but expected %(memtotal)d MB" msgstr "%(memsize)d MB paměti přiděleno, ale očekáváno %(memtotal)d MB" #, python-format msgid "%(path)s is not on local storage: %(reason)s" msgstr "%(path)s není v místním úložišti: %(reason)s" #, python-format msgid "%(path)s is not on shared storage: %(reason)s" msgstr "%(path)s není ve sdíleném úložišti: %(reason)s" #, python-format msgid "%(type)s hypervisor does not support PCI devices" msgstr "Hypervizor %(type)s nepodporuje zařízení PCI" #, python-format msgid "%(worker_name)s value of %(workers)s is invalid, must be greater than 0" msgstr "" "Hodnota %(worker_name)s ve %(workers)s je neplatná, musí být větší než 0" #, python-format msgid "%s does not support disk hotplug." 
msgstr "%s nepodporuje zapojování disku za běhu." #, python-format msgid "%s format is not supported" msgstr "formát %s není podporován" #, python-format msgid "%s is not supported." msgstr "%s není podporováno." #, python-format msgid "%s must be either 'MANUAL' or 'AUTO'." msgstr "%s musí být buď 'MANUAL' nebo 'AUTO'." #, python-format msgid "'%(other)s' should be an instance of '%(cls)s'" msgstr "'%(other)s' by měla být instancí '%(cls)s'" msgid "'qemu-img info' parsing failed." msgstr "zpracování 'qemu-img info' selhalo." #, python-format msgid "'rxtx_factor' argument must be a float between 0 and %g" msgstr "Argument 'rxtx_factor' musí být desetinné číslo mezi 0 a %g" #, python-format msgid "A NetworkModel is required in field %s" msgstr "NetworkModel je v poli %s vyžadováno" #, python-format msgid "" "API Version String %(version)s is of invalid format. Must be of format " "MajorNum.MinorNum." msgstr "" "Řetězec verze API %(version)s je v neplatném formátu- Musí být ve formátu " "Hlavní číslo verze a Vedlejší číslo verze" #, python-format msgid "API version %(version)s is not supported on this method." msgstr "API s verzí %(version)s není v této metodě podporován." msgid "Access list not available for public flavors." msgstr "Seznam přístupu není dostupný pro veřejné konfigurace." #, python-format msgid "Action %s not found" msgstr "Činnost %s nenalezena" #, python-format msgid "" "Action for request_id %(request_id)s on instance %(instance_uuid)s not found" msgstr "" "Činnost pro id žádosti %(request_id)s v instanci %(instance_uuid)s nenalezena" #, python-format msgid "Action: '%(action)s', calling method: %(meth)s, body: %(body)s" msgstr "Činnost: '%(action)s', volání metody: %(meth)s, tělo: %(body)s" #, python-format msgid "Add metadata failed for aggregate %(id)s after %(retries)s retries" msgstr "" "Přidání popisných dat selhalo pro agregát %(id)s po %(retries)s pokusech" msgid "Affinity instance group policy was violated." msgstr "Zásada skupiny slučivosti instance byla porušena." #, python-format msgid "Agent does not support the call: %(method)s" msgstr "Agent volání nepodporuje: %(method)s" #, python-format msgid "" "Agent-build with hypervisor %(hypervisor)s os %(os)s architecture " "%(architecture)s exists." msgstr "" "Sestavení agenta existuje s hypervizorem %(hypervisor)s, operačním systémem " "%(os)s a architekturou %(architecture)s" #, python-format msgid "Aggregate %(aggregate_id)s already has host %(host)s." msgstr "Agregát %(aggregate_id)s již má hostitele %(host)s." #, python-format msgid "Aggregate %(aggregate_id)s could not be found." msgstr "Agregát %(aggregate_id)s nemohl být nalezen." #, python-format msgid "Aggregate %(aggregate_id)s has no host %(host)s." msgstr "Agregát %(aggregate_id)s nemá hostitele %(host)s." #, python-format msgid "Aggregate %(aggregate_id)s has no metadata with key %(metadata_key)s." msgstr "" "Agregát %(aggregate_id)s nemá žádná metadata s klíčem %(metadata_key)s." #, python-format msgid "" "Aggregate %(aggregate_id)s: action '%(action)s' caused an error: %(reason)s." msgstr "" "Agregát %(aggregate_id)s: činnost '%(action)s' způsobila chybu: %(reason)s." #, python-format msgid "Aggregate %(aggregate_name)s already exists." msgstr "Agregát %(aggregate_name)s již existuje." #, python-format msgid "Aggregate for host %(host)s count not be found." msgstr "Agregát pro počet hostitelů %(host)s nelze nalézt." msgid "An unknown error has occurred. Please try your request again." msgstr "Vyskytla se neznámá chyba. 
Prosím zopakujte Váš požadavek." msgid "An unknown exception occurred." msgstr "Vyskytla se neočekávaná výjimka." msgid "Anti-affinity instance group policy was violated." msgstr "Zásady skupiny proti slučivosti instance byla porušena." #, python-format msgid "Architecture name '%(arch)s' is not recognised" msgstr "Název architektury '%(arch)s' nebyl rozpoznán" #, python-format msgid "Architecture name '%s' is not valid" msgstr "Název architektury '%s' není platný" #, python-format msgid "" "Attempt to consume PCI device %(compute_node_id)s:%(address)s from empty pool" msgstr "" "Pokus o spotřebu zařízení PCI %(compute_node_id)s:%(address)s z prázdné " "zásoby" msgid "Attempted overwrite of an existing value." msgstr "Pokus o přepsání existující hodnoty." #, python-format msgid "Attribute not supported: %(attr)s" msgstr "Vlastnost není podporována: %(attr)s" #, python-format msgid "Bad network format: missing %s" msgstr "Špatný formát sítě: chybí %s" msgid "Bad networks format" msgstr "Špatný formát sítě" #, python-format msgid "Bad networks format: network uuid is not in proper format (%s)" msgstr "Špatný formát sítí: uuid sítě není ve správném formátu (%s)" #, python-format msgid "Bad prefix for network in cidr %s" msgstr "Špatná předpona pro síť v cidr %s" #, python-format msgid "" "Binding failed for port %(port_id)s, please check neutron logs for more " "information." msgstr "" "Svázání portu %(port_id)s selhalo, pro další informace zkontrolujte prosím " "záznamy neutron." msgid "Blank components" msgstr "Prázdné součásti" msgid "" "Blank volumes (source: 'blank', dest: 'volume') need to have non-zero size" msgstr "" "Prázdné svazky (zdroj: 'blank', cíl: 'volume') potřebují mít nenulovou " "velikost" #, python-format msgid "Block Device %(id)s is not bootable." msgstr "Blokové zařízení %(id)s nelze zavést." msgid "Block Device Mapping cannot be converted to legacy format. " msgstr "Mapování blokového zařízení nemůže být převedeny na zastaralý formát." msgid "Block Device Mapping is Invalid." msgstr "Mapování blokového zařízení je neplatné." #, python-format msgid "Block Device Mapping is Invalid: %(details)s" msgstr "Mapování blokového zařízení je neplatné: %(details)s" msgid "" "Block Device Mapping is Invalid: Boot sequence for the instance and image/" "block device mapping combination is not valid." msgstr "" "Mapování blokového zařízení je neplatné: Zaváděcí sekvence pro kombinaci " "instance a mapování obrazu/blokového zařízení je neplatná." msgid "" "Block Device Mapping is Invalid: You specified more local devices than the " "limit allows" msgstr "" "Mapování blokového zařízení je neplatné: Počet vámi zadaných místních " "zařízení přesahuje limit" #, python-format msgid "Block Device Mapping is Invalid: failed to get image %(id)s." msgstr "Mapování blokového zařízení je neplatné: nelze získat obraz %(id)s." #, python-format msgid "Block Device Mapping is Invalid: failed to get snapshot %(id)s." msgstr "Mapování blokového zařízení je neplatné: nelze získat snímek %(id)s." #, python-format msgid "Block Device Mapping is Invalid: failed to get volume %(id)s." msgstr "Mapování blokového zařízení je neplatné: nelze získat svazek %(id)s." msgid "Block migration can not be used with shared storage." msgstr "Přesunutí bloku nemůže být použito ve sdíleném úložišti." msgid "Boot index is invalid." msgstr "Index zavedení je neplatný." 
#, python-format msgid "Build of instance %(instance_uuid)s aborted: %(reason)s" msgstr "Sestavení instance %(instance_uuid)s ukončeno: %(reason)s" #, python-format msgid "Build of instance %(instance_uuid)s was re-scheduled: %(reason)s" msgstr "" "Sestavení instance %(instance_uuid)s bylo znovu naplánováno: %(reason)s" msgid "CPU and memory allocation must be provided for all NUMA nodes" msgstr "Přidělení CPU a paměti musí být provedeno u všech uzlů NUMA" #, python-format msgid "" "CPU doesn't have compatibility.\n" "\n" "%(ret)s\n" "\n" "Refer to %(u)s" msgstr "" "CPU nemá kompatibilitu.\n" "\n" "%(ret)s\n" "\n" "Prohlédněte si %(u)s" #, python-format msgid "CPU number %(cpunum)d is assigned to two nodes" msgstr "Počet CPU %(cpunum)d je přidělen ke dvěma uzlům" #, python-format msgid "CPU number %(cpunum)d is larger than max %(cpumax)d" msgstr "Počet CPU %(cpunum)d je větší než maximum %(cpumax)d" #, python-format msgid "CPU number %(cpuset)s is not assigned to any node" msgstr "Počet CPU %(cpuset)s není přidělen k žádným uzlům" msgid "Can not add access to a public flavor." msgstr "Nelze zpřístupnit veřejnou konfiguraci" msgid "Can not find requested image" msgstr "Nelze najít požadovaný obraz" #, python-format msgid "Can not handle authentication request for %d credentials" msgstr "Nelze zpracovat žádost o ověření přihlašovacích údajů %d" msgid "Can't resize a disk to 0 GB." msgstr "Nelze změnit velikost disku na 0 GB." msgid "Can't resize down ephemeral disks." msgstr "Nelze zmenšit efemerní disky." msgid "Can't retrieve root device path from instance libvirt configuration" msgstr "Nelze získat cestu kořenového zařízení z nastavení libvirt instance" #, python-format msgid "" "Cannot '%(action)s' instance %(server_id)s while it is in %(attr)s %(state)s" msgstr "" "Nelze '%(action)s' v instanci %(server_id)s zatímco je v %(attr)s %(state)s" #, python-format msgid "" "Cannot access \"%(instances_path)s\", make sure the path exists and that you " "have the proper permissions. In particular Nova-Compute must not be executed " "with the builtin SYSTEM account or other accounts unable to authenticate on " "a remote host." msgstr "" "\"%(instances_path)s\" se zdá být nedostupná, ujistěte se, že cesta " "existuje a že máte patřičná oprávnění. Zejména Nova-Compute nesmí být " "spouštěna zabudovaným účtem SYSTEM, nebo jinými účty, které se nemohou " "přihlásit na vzdálených hostitelích." #, python-format msgid "Cannot add host to aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Nelze přidat hostitele do agregátu %(aggregate_id)s. Důvod: %(reason)s." msgid "Cannot attach one or more volumes to multiple instances" msgstr "Nelze připojit jeden nebo více svazků k mnoha instancím" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "Nelze volat %(method)s na osiřelém objektu %(objtype)s" msgid "Cannot find SR of content-type ISO" msgstr "Nelze najít SR typu obsahu ISO" msgid "Cannot find SR to read/write VDI." msgstr "Nelze najít SR pro čtení/zápis VDI." msgid "Cannot find image for rebuild" msgstr "Nelze najít obraz ke znovu sestavení" #, python-format msgid "Cannot remove host %(host)s in aggregate %(id)s" msgstr "Nelze odstranit hostitele %(host)s z agregátu %(id)s" #, python-format msgid "Cannot remove host from aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Nelze odstranit hostitele z agregátu %(aggregate_id)s. Důvod: %(reason)s." 
msgid "Cannot rescue a volume-backed instance" msgstr "Nelze zachránit instanci zálohovanou na svazku" #, python-format msgid "" "Cannot resize the root disk to a smaller size. Current size: " "%(curr_root_gb)s GB. Requested size: %(new_root_gb)s GB." msgstr "" "Nelze zmenšit velikost kořenového disku. Současná velikost: %(curr_root_gb)s " "GB. Požadovaná velikost: %(new_root_gb)s GB." msgid "Cannot set realtime policy in a non dedicated cpu pinning policy" msgstr "" "Nelze nastavit zásadu v reálném čase v nevyhrazené zásadě pro připnutí " "procesoru" #, python-format msgid "Cannot update aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "Nelze aktualizovat agregát %(aggregate_id)s. Důvod: %(reason)s." #, python-format msgid "" "Cannot update metadata of aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Nelze aktualizovat popisná data agregátu %(aggregate_id)s. Důvod " "%(reason)s.: " #, python-format msgid "Cell %(uuid)s has no mapping." msgstr "Buňka %(uuid)s nemá žádné mapování." #, python-format msgid "" "Change would make usage less than 0 for the following resources: %(unders)s" msgstr "" "Změna by využití změnila na méně než 0 pro následující zdroje: %(unders)s" #, python-format msgid "Class %(class_name)s could not be found: %(exception)s" msgstr "Třída %(class_name)s nemohla být nalezena: %(exception)s" #, python-format msgid "" "Command Not supported. Please use Ironic command %(cmd)s to perform this " "action." msgstr "" "Příkaz není podporován. Pro provedení této činnosti prosím použijte příkaz " "Ironic %(cmd)s." #, python-format msgid "Compute host %(host)s could not be found." msgstr "Výpočetní hostitel %(host)s nemohl být nalezen." #, python-format msgid "Compute host %s not found." msgstr "Výpočetní hostitel %s nenalezen." #, python-format msgid "Compute service of %(host)s is still in use." msgstr "Výpočetní služba na %(host)s se stále používá." #, python-format msgid "Compute service of %(host)s is unavailable at this time." msgstr "Výpočetní služba na %(host)s je v současnosti nedostupná." #, python-format msgid "Config drive format '%(format)s' is not supported." msgstr "Jednotka s nastavením ve formátu '%(format)s' není podporována." #, python-format msgid "" "Config requested an explicit CPU model, but the current libvirt hypervisor " "'%s' does not support selecting CPU models" msgstr "" "Konfigurace vyžaduje konkrétní model procesu, ale současný hypervizor " "libvirt '%s' nepodporuje výběr modelů procesoru" #, python-format msgid "" "Conflict updating instance %(instance_uuid)s, but we were unable to " "determine the cause" msgstr "" "Konflikt při aktualizaci instance %(instance_uuid)s, ale příčina nemohla být " "zjištěna" #, python-format msgid "" "Conflict updating instance %(instance_uuid)s. Expected: %(expected)s. " "Actual: %(actual)s" msgstr "" "Konflikt při aktualizaci instance %(instance_uuid)s. Očekáváno %(expected)s. 
" "Skutečnost: %(actual)s" #, python-format msgid "Connection to cinder host failed: %(reason)s" msgstr "Připojení k hostiteli cinder selhalo: %(reason)s" #, python-format msgid "Connection to glance host %(server)s failed: %(reason)s" msgstr "Připojení k hostiteli glance %(server)s selhalo: %(reason)s" #, python-format msgid "Connection to libvirt lost: %s" msgstr "Připojování k libvirt ztraceno: %s" #, python-format msgid "Connection to the hypervisor is broken on host: %(host)s" msgstr "Připojení k hypervizoru je rozbité na hostiteli: %(host)s" #, python-format msgid "" "Console log output could not be retrieved for instance %(instance_id)s. " "Reason: %(reason)s" msgstr "" "Nelze získat výstup záznamu konzole instance %(instance_id)s. Důvod: " "%(reason)s" msgid "Constraint not met." msgstr "Omezení nesplněna." #, python-format msgid "Converted to raw, but format is now %s" msgstr "Převedeno na prosté, ale formát je nyní %s" #, python-format msgid "Could not attach image to loopback: %s" msgstr "Nelze připojit obraz do zpětné smyčky: %s" #, python-format msgid "Could not fetch image %(image_id)s" msgstr "Nelze získat obraz %(image_id)s" #, python-format msgid "Could not find a handler for %(driver_type)s volume." msgstr "Nelze najít obslužnou rutinu pro svazek %(driver_type)s." #, python-format msgid "Could not find binary %(binary)s on host %(host)s." msgstr "Nelze najít binární soubor %(binary)s v hostiteli %(host)s." #, python-format msgid "Could not find config at %(path)s" msgstr "Nelze najít nastavení v %(path)s" msgid "Could not find the datastore reference(s) which the VM uses." msgstr "Nelze najít odkazy datového úložiště, který VM používá." #, python-format msgid "Could not load line %(line)s, got error %(error)s" msgstr "Nelze načíst řádek %(line)s, obdržena chyba %(error)s" #, python-format msgid "Could not load paste app '%(name)s' from %(path)s" msgstr "Nelze načíst aplikaci vložení '%(name)s' z %(path)s" #, python-format msgid "" "Could not mount vfat config drive. %(operation)s failed. Error: %(error)s" msgstr "" "Nelze připojit konfigurační jednotky vfat. %(operation)s selhala. Chyba: " "%(error)s" #, python-format msgid "Could not upload image %(image_id)s" msgstr "Nelze nahrát obraz %(image_id)s" msgid "Creation of virtual interface with unique mac address failed" msgstr "Vytváření virtuálního rozhraní s jedinečnou mac adresou selhalo" #, python-format msgid "Datastore regex %s did not match any datastores" msgstr "" "Regulární výraz datového úložiště %s neodpovídá žádným datovým úložištím" msgid "Datetime is in invalid format" msgstr "Datum a čas jsou v neplatném formátu" msgid "Default PBM policy is required if PBM is enabled." msgstr "Výchozí zásada PVM je pro povolení PBM vyžadována." #, python-format msgid "Deleted %(records)d records from table '%(table_name)s'." msgstr "Smazáno %(records)d záznamů z tabulky '%(table_name)s'." #, python-format msgid "Device '%(device)s' not found." msgstr "Zařízení '%(device)s' nenalezeno." #, python-format msgid "" "Device id %(id)s specified is not supported by hypervisor version %(version)s" msgstr "" "ID zařízení %(id)s zadáno, ale není podporováno verzí hypervizoru %(version)s" msgid "Device name contains spaces." msgstr "Název zařízení obsahuje mezery." msgid "Device name empty or too long." msgstr "Název zařízení je prázdný nebo příliš dlouhý." 
#, python-format msgid "" "Different types in %(table)s.%(column)s and shadow table: %(c_type)s " "%(shadow_c_type)s" msgstr "" "Různé typy v %(table)s.%(column)s a stínové tabulce: %(c_type)s " "%(shadow_c_type)s" #, python-format msgid "Disk contains a filesystem we are unable to resize: %s" msgstr "Disk obsahuje souborový systém, jehož velikost nelze změnit: %s" #, python-format msgid "Disk format %(disk_format)s is not acceptable" msgstr "Formát disku %(disk_format)s není přijatelný" #, python-format msgid "Disk info file is invalid: %(reason)s" msgstr "Soubor informací o disku je neplatný: %(reason)s" msgid "Disk must have only one partition." msgstr "Disk musí mít pouze jeden oddíl." #, python-format msgid "Disk with id: %s not found attached to instance." msgstr "Disk s id: %s nebyl nalezen připojený k instanci." #, python-format msgid "Driver Error: %s" msgstr "Chyba ovladače: %s" #, python-format msgid "" "Error destroying the instance on node %(node)s. Provision state still " "'%(state)s'." msgstr "" "Chyba při ničení instance na uzlu %(node)s. Stav poskytování je stále " "'%(state)s'." #, python-format msgid "Error during following call to agent: %(method)s" msgstr "Chyba během následujícího volání agenta: %(method)s" #, python-format msgid "Error during unshelve instance %(instance_id)s: %(reason)s" msgstr "Chyba během vyskladňování instance %(instance_id)s: %(reason)s" #, python-format msgid "" "Error from libvirt while getting domain info for %(instance_name)s: [Error " "Code %(error_code)s] %(ex)s" msgstr "" "Chyba od libvirt při získávání informací o doméně pro %(instance_name)s: " "[Kód chyby %(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while looking up %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "Chyba od libvirt při hledání %(instance_name)s: [Kód chyby %(error_code)s] " "%(ex)s" #, python-format msgid "" "Error from libvirt while quiescing %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "Chyba od libvirt při ztišování %(instance_name)s: [Kód chyby %(error_code)s] " "%(ex)s" #, python-format msgid "" "Error from libvirt while set password for username \"%(user)s\": [Error Code " "%(error_code)s] %(ex)s" msgstr "" "Chyba od libvirt při nastavování hesla pro uživatele \"%(user)s\": [Kód " "chyby %(error_code)s] %(ex)s" #, python-format msgid "" "Error mounting %(device)s to %(dir)s in image %(image)s with libguestfs " "(%(e)s)" msgstr "" "Chyba při připojování %(device)s do %(dir)s v obrazu %(image)s pomocí " "libguestfs (%(e)s)" #, python-format msgid "Error mounting %(image)s with libguestfs (%(e)s)" msgstr "Chyba při připojování %(image)s pomocí libguestfs (%(e)s)" #, python-format msgid "Error when creating resource monitor: %(monitor)s" msgstr "Chyba při vytváření monitoru zdroje: %(monitor)s" msgid "Error: Agent is disabled" msgstr "Chyba: Agent je zakázán" #, python-format msgid "Event %(event)s not found for action id %(action_id)s" msgstr "Událost %(event)s nenalezena pro id činnosti %(action_id)s" msgid "Event must be an instance of nova.virt.event.Event" msgstr "Událost musí být instancí nova.virt.event.Event" #, python-format msgid "" "Exceeded max scheduling attempts %(max_attempts)d for instance " "%(instance_uuid)s. Last exception: %(exc_reason)s" msgstr "" "Překročen maximální počet pokusů o znovu naplánování %(max_attempts)d pro " "instanci %(instance_uuid)s. 
Poslední výjimka: %(exc_reason)s" #, python-format msgid "" "Exceeded max scheduling retries %(max_retries)d for instance " "%(instance_uuid)s during live migration" msgstr "" "Překročen maximální počet pokusů o znovu naplánování %(max_retries)d pro " "instanci %(instance_uuid)s během přesunu za běhu" #, python-format msgid "Exceeded maximum number of retries. %(reason)s" msgstr "Překročen maximální počet pokusů. %(reason)s" #, python-format msgid "Expected a uuid but received %(uuid)s." msgstr "Očekáváno uuid ale obdrženo %(uuid)s." #, python-format msgid "Extra column %(table)s.%(column)s in shadow table" msgstr "Sloupec %(table)s.%(column)s je ve stínové tabulce navíc" msgid "Extracting vmdk from OVA failed." msgstr "Extrahování vmdk z OVA selhalo." #, python-format msgid "Failed to access port %(port_id)s: %(reason)s" msgstr "K portu %(port_id)s není přístup: %(reason)s" #, python-format msgid "" "Failed to add deploy parameters on node %(node)s when provisioning the " "instance %(instance)s" msgstr "" "Nelze přidat parametry pro nasazení na uzlu %(node)s při poskytování " "instance %(instance)s" #, python-format msgid "Failed to allocate the network(s) with error %s, not rescheduling." msgstr "Nelze přidělit síť(ě) s chybou %s, nebudou znovu naplánovány" msgid "Failed to allocate the network(s), not rescheduling." msgstr "Nelze přidělit síť(ě), nebudou znovu naplánovány." #, python-format msgid "Failed to attach network adapter device to %(instance_uuid)s" msgstr "Nelze připojit zařízení síťového adaptéru k %(instance_uuid)s" #, python-format msgid "Failed to deploy instance: %(reason)s" msgstr "Nelze zavést instanci: %(reason)s" #, python-format msgid "Failed to detach PCI device %(dev)s: %(reason)s" msgstr "Nelze odpojit zařízení PCI %(dev)s: %(reason)s" #, python-format msgid "Failed to detach network adapter device from %(instance_uuid)s" msgstr "Nelze odpojit zařízení síťového adaptéru od %(instance_uuid)s" #, python-format msgid "Failed to encrypt text: %(reason)s" msgstr "Nelze zašifrovat text: %(reason)s" #, python-format msgid "Failed to launch instances: %(reason)s" msgstr "Nelze spustit instance: %(reason)s" #, python-format msgid "Failed to map partitions: %s" msgstr "Nelze mapovat oddíly: %s" #, python-format msgid "Failed to mount filesystem: %s" msgstr "Nelze připojit souborový systém: %s" msgid "Failed to parse information about a pci device for passthrough" msgstr "Nelze zpracovat informace o zařízení pci pro průchod" #, python-format msgid "Failed to power off instance: %(reason)s" msgstr "Nelze vypnout instanci: %(reason)s" #, python-format msgid "Failed to power on instance: %(reason)s" msgstr "Nelze zapnout instanci: %(reason)s" #, python-format msgid "" "Failed to prepare PCI device %(id)s for instance %(instance_uuid)s: " "%(reason)s" msgstr "" "Nelze připravit PCI zařízení %(id)s pro instanci %(instance_uuid)s: " "%(reason)s" #, python-format msgid "Failed to provision instance %(inst)s: %(reason)s" msgstr "Poskytnutí instance %(inst)s selhalo: %(reason)s" #, python-format msgid "Failed to read or write disk info file: %(reason)s" msgstr "Nelze přečíst nebo zapsat soubor informací o disku: %(reason)s" #, python-format msgid "Failed to reboot instance: %(reason)s" msgstr "Nelze restartovat instanci: %(reason)s" #, python-format msgid "Failed to remove volume(s): (%(reason)s)" msgstr "Nelze odpojit svazky: (%(reason)s)" #, python-format msgid "Failed to request Ironic to rebuild instance %(inst)s: %(reason)s" msgstr "Nelze zažádat Ironic o znovu sestavení 
instance %(inst)s: %(reason)s" #, python-format msgid "Failed to resume instance: %(reason)s" msgstr "Nelze obnovit instanci: %(reason)s" #, python-format msgid "Failed to run qemu-img info on %(path)s : %(error)s" msgstr "Nelze spustit qemu-img info na %(path)s : %(error)s" #, python-format msgid "Failed to set admin password on %(instance)s because %(reason)s" msgstr "Nelze nastavit heslo správce v %(instance)s z důvodu %(reason)s" msgid "Failed to spawn, rolling back" msgstr "Nelze vytvořit, vráceno zpět" #, python-format msgid "Failed to suspend instance: %(reason)s" msgstr "Nelze pozastavit instanci: %(reason)s" #, python-format msgid "Failed to terminate instance: %(reason)s" msgstr "Nelze ukončit instanci: %(reason)s" msgid "Failure prepping block device." msgstr "Chyba při přípravě blokového zařízení." #, python-format msgid "File %(file_path)s could not be found." msgstr "Soubor %(file_path)s nemohl být nalezen." #, python-format msgid "File path %s not valid" msgstr "Cesta souboru %s není platná" #, python-format msgid "Fixed IP %(ip)s is not a valid ip address for network %(network_id)s." msgstr "Pevná IP %(ip)s není platnou ip adresou pro síť %(network_id)s." #, python-format msgid "Fixed IP %s is already in use." msgstr "Pevná IP adresa %s se již používá." #, python-format msgid "" "Fixed IP address %(address)s is already in use on instance %(instance_uuid)s." msgstr "" "Pevná IP adresa %(address)s se již používá v instanci %(instance_uuid)s." #, python-format msgid "Fixed IP not found for address %(address)s." msgstr "Pevná IP není pro adresu %(address)s nalezena." #, python-format msgid "Flavor %(flavor_id)s could not be found." msgstr "Konfigurace %(flavor_id)s nemohla být nalezena." #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(extra_specs_key)s." msgstr "" "Konfigurace %(flavor_id)s nemá žádnou dodatečnou specifikaci s klíčem " "%(extra_specs_key)s." #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(key)s." msgstr "" "Konfigurace %(flavor_id)s nemá žádnou dodatečnou specifikaci s klíčem " "%(key)s." #, python-format msgid "" "Flavor access already exists for flavor %(flavor_id)s and project " "%(project_id)s combination." msgstr "" "Přístup ke konfiguraci již existuje u kombinace konfigurace %(flavor_id)s a " "projektu %(project_id)s." #, python-format msgid "Flavor access not found for %(flavor_id)s / %(project_id)s combination." msgstr "" "Přístup ke konfiguraci nelze nalézt pro kombinaci %(flavor_id)s / " "%(project_id)s." msgid "Flavor used by the instance could not be found." msgstr "Konfigurace použitá instancí nemohla být nalezena." #, python-format msgid "Flavor with ID %(flavor_id)s already exists." msgstr "Konfigurace s ID %(flavor_id)s již existuje." #, python-format msgid "Flavor with name %(flavor_name)s could not be found." msgstr "Konfigurace s názvem %(flavor_name)s nemohla být nalezena." #, python-format msgid "Flavor with name %(name)s already exists." msgstr "Konfigurace s názvem %(name)s již existuje." #, python-format msgid "" "Flavor's disk is smaller than the minimum size specified in image metadata. " "Flavor disk is %(flavor_size)i bytes, minimum size is %(image_min_disk)i " "bytes." msgstr "" "Disk konfigurace je menší než minimální velikost zadaná v popisných " "datech obrazu. Disk konfigurace má %(flavor_size)i bajtů, minimální velikost " "je %(image_min_disk)i bajtů." #, python-format msgid "" "Flavor's disk is too small for requested image. 
Flavor disk is " "%(flavor_size)i bytes, image is %(image_size)i bytes." msgstr "" "Disk konfigurace je pro požadovaný obraz příliš malý. Disk konfigurace má " "%(flavor_size)i bajtů, obraz má %(image_size)i bajtů." msgid "Flavor's memory is too small for requested image." msgstr "Paměť konfigurace je na požadovaný obraz příliš malá." #, python-format msgid "Floating IP %(address)s association has failed." msgstr "Přidružení plovoucí IP adresy %(address)s selhalo." #, python-format msgid "Floating IP %(address)s is associated." msgstr "Plovoucí IP %(address)s je přidružena." #, python-format msgid "Floating IP %(address)s is not associated with instance %(id)s." msgstr "Plovoucí IP adresa %(address)s není přidružena k instanci %(id)s." #, python-format msgid "Floating IP not found for ID %(id)s." msgstr "Plovoucí IP není nalezena pro ID %(id)s." #, python-format msgid "Floating IP not found for ID %s" msgstr "Plovoucí IP adresa nenalezena pro ID %s" #, python-format msgid "Floating IP not found for address %(address)s." msgstr "Plovoucí IP adresa nenalezena pro adresu %(address)s." msgid "Floating IP pool not found." msgstr "Zásoba plovoucích IP adres nenalezena." msgid "" "Forbidden to exceed flavor value of number of serial ports passed in image " "meta." msgstr "" "Je zakázáno překročit hodnotu z konfigurace ohledně počtu sériových portů " "předaných v popisných datech obrazu." msgid "Found no disk to snapshot." msgstr "Nenalezen žádný disk k pořízení snímku." #, python-format msgid "Found no network for bridge %s" msgstr "Žádná síť pro most %s nenalezena" #, python-format msgid "Found non-unique network for bridge %s" msgstr "Nalezena sít mostu %s, která není jedinečná" #, python-format msgid "Found non-unique network for name_label %s" msgstr "Nalezena síť s názvem štítku %s, který není jedinečný" #, python-format msgid "Host %(host)s could not be found." msgstr "Hostitel %(host)s nemohl být nalezen." #, python-format msgid "Host '%(name)s' is not mapped to any cell" msgstr "Hostitel '%(name)s' není namapován k žádné buňce" msgid "Host PowerOn is not supported by the Hyper-V driver" msgstr "Zapnutí hostitele není podporováno ovladačem Hyper-V" msgid "Host aggregate is not empty" msgstr "Agregát hostitele není prázdný" msgid "Host does not support guests with NUMA topology set" msgstr "Hostitel nepodporuje hosty s nastavenou topologií NUMA" msgid "Host does not support guests with custom memory page sizes" msgstr "Hostitel nepodporuje hosty s vlastní velikostí stránek paměti" msgid "Host startup on XenServer is not supported." msgstr "Spuštění hostitele na XenServer není podporováno." msgid "Hypervisor driver does not support post_live_migration_at_source method" msgstr "" "Ovladač hypervizoru nepodporuje metodu po přesunutí za provozu ve zdroji" #, python-format msgid "Hypervisor virt type '%s' is not valid" msgstr "Typ virtualizace hypervizoru '%s' není platný" #, python-format msgid "Hypervisor virtualization type '%(hv_type)s' is not recognised" msgstr "Typ virtualizace hypervizoru '%(hv_type)s' nebyl rozpoznán" #, python-format msgid "Hypervisor with ID '%s' could not be found." msgstr "Hypervizor s ID !%s! nemohl být nalezen." #, python-format msgid "IP allocation over quota in pool %s." msgstr "Přidělení IP adres přesahující kvótu v zásobě %s." msgid "IP allocation over quota." msgstr "Přidělení IP přesahuje kvótu." #, python-format msgid "Image %(image_id)s could not be found." msgstr "Obraz %(image_id)s nemohl být nalezen." 
#, python-format msgid "Image %(image_id)s is not active." msgstr "Obraz %(image_id)s není aktivní." #, python-format msgid "Image %(image_id)s is unacceptable: %(reason)s" msgstr "Obraz %(image_id)s je nepřijatelný: %(reason)s" msgid "Image is not raw format" msgstr "Obraz není v prostém formátu" msgid "Image metadata limit exceeded" msgstr "Popisná data obrazu překračují limit" #, python-format msgid "Image model '%(image)s' is not supported" msgstr "Model obrazu '%(image)s' není podporován" msgid "Image not found." msgstr "Obraz nenalezen" #, python-format msgid "" "Image property '%(name)s' is not permitted to override NUMA configuration " "set against the flavor" msgstr "" "Vlastnost obrazu '%(name)s' nemá povoleno potlačit nastavení NUMA dané z " "nastavení konfigurace" msgid "" "Image property 'hw_cpu_policy' is not permitted to override CPU pinning " "policy set against the flavor" msgstr "" "Vlastnost obrazu 'hw_cpu_policy' nemá oprávnění potlačit zásadu připnutí CPU " "danou z konfigurace" msgid "Image that the instance was started with could not be found." msgstr "Obraz, z kterého byla instance spuštěna, nemohl být nalezen." #, python-format msgid "Image's config drive option '%(config_drive)s' is invalid" msgstr "Volba konfigurační jednotky obrazu '%(config_drive)s' není platná" msgid "" "Images with destination_type 'volume' need to have a non-zero size specified" msgstr "" "Obrazy, které mají zadány 'volume' jako typ cíle, potřebují mít zadánu " "nenulovou velikost" msgid "In ERROR state" msgstr "ve stavu CHYBA" #, python-format msgid "In states %(vm_state)s/%(task_state)s, not RESIZED/None" msgstr "Ve stavech %(vm_state)s/%(task_state)s, není RESIZED/None" msgid "" "Incompatible settings: ephemeral storage encryption is supported only for " "LVM images." msgstr "" "Nekompatibilní nastavení: šifrování efemerního úložiště je podporováno pouze " "pro obrazy LVM." #, python-format msgid "Info cache for instance %(instance_uuid)s could not be found." msgstr "Mezipaměť informaci instance %(instance_uuid)s nemohla být nalezena." #, python-format msgid "" "Instance %(instance)s and volume %(vol)s are not in the same " "availability_zone. Instance is in %(ins_zone)s. Volume is in %(vol_zone)s" msgstr "" "Instance %(instance)s a svazek %(vol)s nejsou ve stejné zóně dostupnosti. " "Instance je v %(ins_zone)s. Svazek je v %(vol_zone)s" #, python-format msgid "Instance %(instance)s does not have a port with id %(port)s" msgstr "Instance %(instance)s nemá port s id %(port)s" #, python-format msgid "Instance %(instance_id)s cannot be rescued: %(reason)s" msgstr "Instanci %(instance_id)s nelze zachránit: %(reason)s" #, python-format msgid "Instance %(instance_id)s could not be found." msgstr "Instance %(instance_id)s nemohla být nalezena." #, python-format msgid "Instance %(instance_id)s has no tag '%(tag)s'" msgstr "Instance %(instance_id)s nemá žádnou značku '%(tag)s'" #, python-format msgid "Instance %(instance_id)s is not in rescue mode" msgstr "Instance %(instance_id)s není v nouzovém režimu." #, python-format msgid "Instance %(instance_id)s is not ready" msgstr "Instance %(instance_id)s není připravena" #, python-format msgid "Instance %(instance_id)s is not running." msgstr "Instance %(instance_id)s není spuštěna." 
#, python-format msgid "Instance %(instance_id)s is unacceptable: %(reason)s" msgstr "Instance %(instance_id)s je nepřijatelná: %(reason)s" #, python-format msgid "Instance %(instance_uuid)s does not specify a NUMA topology" msgstr "Instance %(instance_uuid)s neurčuje topologii NUMA" #, python-format msgid "Instance %(instance_uuid)s does not specify a migration context." msgstr "Instance %(instance_uuid)s neurčuje kontext přesunu." #, python-format msgid "" "Instance %(instance_uuid)s in %(attr)s %(state)s. Cannot %(method)s while " "the instance is in this state." msgstr "" "Instance %(instance_uuid)s v %(attr)s %(state)s. Nelze %(method)s, zatímco " "je instance v tomto stavu." #, python-format msgid "Instance %(instance_uuid)s is locked" msgstr "Instance %(instance_uuid)s je uzamčena" #, python-format msgid "" "Instance %(instance_uuid)s requires config drive, but it does not exist." msgstr "" "Instance %(instance_uuid)s vyžaduje jednotku s nastavením, ale ta neexistuje." #, python-format msgid "Instance %(name)s already exists." msgstr "Instance %(name)s již existuje." #, python-format msgid "Instance %(server_id)s is in an invalid state for '%(action)s'" msgstr "Instance %(server_id)s je v neplatném stavu pro '%(action)s'" #, python-format msgid "Instance %(uuid)s has no mapping to a cell." msgstr "Instance %(uuid)s nemá žádné mapování na buňku." #, python-format msgid "Instance %s not found" msgstr "Instance %s nenalezena" #, python-format msgid "Instance %s provisioning was aborted" msgstr "Poskytování instance %s bylo ukončeno" msgid "Instance could not be found" msgstr "Instance nemohla být nalezena" msgid "Instance disk to be encrypted but no context provided" msgstr "Disk instance má být šifrován, ale není zadán kontext" msgid "Instance event failed" msgstr "Událost instance selhala" #, python-format msgid "Instance group %(group_uuid)s already exists." msgstr "Skupina instance %(group_uuid)s již existuje." #, python-format msgid "Instance group %(group_uuid)s could not be found." msgstr "Skupina instance %(group_uuid)s nemohla být nalezena." msgid "Instance has no source host" msgstr "Instance nemá žádného zdrojového hostitele" msgid "Instance has not been resized." msgstr "Instanci nebyla změněna velikost." #, python-format msgid "Instance is already in Rescue Mode: %s" msgstr "Instance již je v záchranném režimu: %s" msgid "Instance is not a member of specified network" msgstr "Instance není členem zadané sítě" #, python-format msgid "Instance rollback performed due to: %s" msgstr "Provedeno zpětné vrácení instance kvůli: %s" #, python-format msgid "" "Insufficient Space on Volume Group %(vg)s. Only %(free_space)db available, " "but %(size)d bytes required by volume %(lv)s." msgstr "" "Nedostatečné místo ve skupině svazku %(vg)s. Dostupné pouze %(free_space)db, " "ale svazkem %(lv)s je vyžadováno %(size)d bajtů." #, python-format msgid "Insufficient compute resources: %(reason)s." msgstr "Nedostatečné výpočetní zdroje: %(reason)s." #, python-format msgid "Insufficient free memory on compute node to start %(uuid)s." msgstr "Pro spuštění %(uuid)s je ve výpočetním uzlu nedostatek volné paměti." #, python-format msgid "Interface %(interface)s not found." msgstr "Rozhraní %(interface)s nenalezeno." #, python-format msgid "Invalid Base 64 data for file %(path)s" msgstr "Neplatná data Base 64 pro soubor %(path)s" msgid "Invalid Connection Info" msgstr "Neplatné informace o připojení" #, python-format msgid "Invalid ID received %(id)s." msgstr "Obdrženo neplatné ID %(id)s." 
#, python-format msgid "Invalid IP format %s" msgstr "Neplatný formát IP adresy %s" #, python-format msgid "Invalid IP protocol %(protocol)s." msgstr "Neplatný protokol IP %(protocol)s." msgid "" "Invalid PCI Whitelist: The PCI whitelist can specify devname or address, but " "not both" msgstr "" "Neplatný seznam povolených PCI: Seznam povolených PCI může zadat název " "zařízení nebo adresu, ale ne oboje" #, python-format msgid "Invalid PCI alias definition: %(reason)s" msgstr "Neplatné určení přezdívky PCI: %(reason)s" #, python-format msgid "Invalid Regular Expression %s" msgstr "Neplatný regulární výraz %s" #, python-format msgid "Invalid characters in hostname '%(hostname)s'" msgstr "Neplatné znaky v názvu hostitele '%(hostname)s'" msgid "Invalid config_drive provided." msgstr "Zadána neplatná konfigurační jednotka." #, python-format msgid "Invalid config_drive_format \"%s\"" msgstr "Neplatný formát konfigurační jednotky \"%s\"" #, python-format msgid "Invalid console type %(console_type)s" msgstr "Neplatný typ konzole %(console_type)s " #, python-format msgid "Invalid content type %(content_type)s." msgstr "Neplatný typ obsahu %(content_type)s." #, python-format msgid "Invalid datetime string: %(reason)s" msgstr "Neplatný řetězec data a času: %(reason)s" msgid "Invalid device UUID." msgstr "Neplatné UUID zařízení." #, python-format msgid "Invalid entry: '%s'" msgstr "Neplatná položka: '%s'" #, python-format msgid "Invalid entry: '%s'; Expecting dict" msgstr "Neplatná položka: '%s'; Očekáváno dict" #, python-format msgid "Invalid entry: '%s'; Expecting list or dict" msgstr "Neplatná položka: '%s'; Očekáváno list nebo dict" #, python-format msgid "Invalid exclusion expression %r" msgstr "Neplatný výraz vyloučení %r" #, python-format msgid "Invalid image format '%(format)s'" msgstr "Neplatný formát obrazu '%(format)s'" #, python-format msgid "Invalid image href %(image_href)s." msgstr "Neplatný href %(image_href)s obrazu." #, python-format msgid "Invalid inclusion expression %r" msgstr "Neplatný výraz začlenění %r" #, python-format msgid "" "Invalid input for field/attribute %(path)s. Value: %(value)s. %(message)s" msgstr "" "Neplatný vstup pro pole/vlastnost %(path)s. Hodnota: %(value)s. %(message)s" #, python-format msgid "Invalid input received: %(reason)s" msgstr "Obdržen neplatný vstup: %(reason)s" msgid "Invalid instance image." msgstr "Neplatný obraz instance." #, python-format msgid "Invalid is_public filter [%s]" msgstr "Neplatný filtr is_public [%s]" msgid "Invalid key_name provided." msgstr "Zadán neplatný název klíče." #, python-format msgid "Invalid memory page size '%(pagesize)s'" msgstr "Neplatná velikost stránky paměti '%(pagesize)s'" msgid "Invalid metadata key" msgstr "Neplatný klíč popisných dat" #, python-format msgid "Invalid metadata size: %(reason)s" msgstr "Neplatná velikost popisných dat: %(reason)s" #, python-format msgid "Invalid metadata: %(reason)s" msgstr "Neplatná popisná data: %(reason)s" #, python-format msgid "Invalid minDisk filter [%s]" msgstr "Neplatný filtr minDisk [%s]" #, python-format msgid "Invalid minRam filter [%s]" msgstr "Neplatný filtr minRam [%s]" #, python-format msgid "Invalid port range %(from_port)s:%(to_port)s. %(msg)s" msgstr "Neplatný rozsah portů %(from_port)s:%(to_port)s. %(msg)s" msgid "Invalid proxy request signature." msgstr "Neplatný podpis žádosti od prostředníka." #, python-format msgid "Invalid range expression %r" msgstr "Neplatný výraz rozsahu %r" msgid "Invalid service catalog json." 
msgstr "Neplatný json katalog služeb" msgid "Invalid start time. The start time cannot occur after the end time." msgstr "Neplatný čas spuštění. Čas spuštění nemůže nastat po čase ukončení." msgid "Invalid state of instance files on shared storage" msgstr "Neplatný stav souborů instance na sdíleném úložišti" #, python-format msgid "Invalid timestamp for date %s" msgstr "Neplatné časové razítko pro datum %s" #, python-format msgid "Invalid usage_type: %s" msgstr "Neplatný typ použití: %s" #, python-format msgid "Invalid value for Config Drive option: %(option)s" msgstr "Neplatná hodnota pro volbu konfigurační jednotky: %(option)s" #, python-format msgid "Invalid virtual interface address %s in request" msgstr "Neplatná adresa virtuálního rozhraní %s v požadavku." #, python-format msgid "Invalid volume access mode: %(access_mode)s" msgstr "Neplatný režim přístupu svazku: %(access_mode)s" #, python-format msgid "Invalid volume: %(reason)s" msgstr "Neplatný svazek: %(reason)s" msgid "Invalid volume_size." msgstr "Neplatná velikost svazku" #, python-format msgid "Ironic node uuid not supplied to driver for instance %s." msgstr "UUID uzlu Ironic nebylo předáno ovladači instance %s." #, python-format msgid "" "It is not allowed to create an interface on external network %(network_uuid)s" msgstr "Na vnější síti %(network_uuid)s není povoleno vytvářet rozhraní" #, python-format msgid "" "Kernel/Ramdisk image is too large: %(vdi_size)d bytes, max %(max_size)d bytes" msgstr "" "Obraz Kernel/Ramdisk je příliš velký: %(vdi_size)d bajtů, max %(max_size)d " "bajtů" msgid "" "Key Names can only contain alphanumeric characters, periods, dashes, " "underscores, colons and spaces." msgstr "" "Názvy klíče mohou obsahovat pouze alfanumerické znaky, tečky, pomlčky, " "podtržítka, dvojtečky a mezery." #, python-format msgid "Key manager error: %(reason)s" msgstr "Chyba správce klíčů: %(reason)s" #, python-format msgid "Key pair '%(key_name)s' already exists." msgstr "Pár klíčů '%(key_name)s' již existuje." #, python-format msgid "Keypair %(name)s not found for user %(user_id)s" msgstr "Pár klíčů %(name)s nenalezena pro uživatele %(user_id)s" #, python-format msgid "Keypair data is invalid: %(reason)s" msgstr "Data páru klíčů jsou neplatná: %(reason)s" msgid "Keypair name contains unsafe characters" msgstr "Název páru klíčů obsahuje nebezpečné znaky" msgid "Keypair name must be string and between 1 and 255 characters long" msgstr "Název páru klíče musí být řetězec dlouhý 1 až 255 znaků" msgid "Limits only supported from vCenter 6.0 and above" msgstr "Limity jsou podporovány pouze ve vCenter verze 6.0 a vyšší" #, python-format msgid "Malformed message body: %(reason)s" msgstr "Poškozené tělo zprávy: %(reason)s" #, python-format msgid "" "Malformed request URL: URL's project_id '%(project_id)s' doesn't match " "Context's project_id '%(context_project_id)s'" msgstr "" "Poškozená URL požadavku: ID projektu URL '%(project_id)s' neodpovídá ID " "projektu obsahu '%(context_project_id)s'" msgid "Malformed request body" msgstr "Poškozené tělo požadavku" msgid "Mapping image to local is not supported." msgstr "Mapování obrazu na místní není podporováno" #, python-format msgid "Marker %(marker)s could not be found." msgstr "Indikátor %(marker)s nemohl být nalezen." 
msgid "Maximum number of floating IPs exceeded" msgstr "Překročen maximální počet plovoucích IP adres" msgid "Maximum number of key pairs exceeded" msgstr "Překročen maximální počet párů klíčů" #, python-format msgid "Maximum number of metadata items exceeds %(allowed)d" msgstr "Maximální počet popisných položek překračuje %(allowed)d" msgid "Maximum number of ports exceeded" msgstr "Překročen maximální počet portů" msgid "Maximum number of security groups or rules exceeded" msgstr "Překročen maximální počet bezpečnostních skupin nebo pravidel" msgid "Metadata item was not found" msgstr "Položka popisných dat nenalezena" msgid "Metadata property key greater than 255 characters" msgstr "Klíč vlastnosti popisných dat je větší než 255 znaků" msgid "Metadata property value greater than 255 characters" msgstr "Hodnota vlastnosti popisných dat je větší než 255 znaků" msgid "Metadata type should be dict." msgstr "Typ popisných dat by měl být dict." #, python-format msgid "" "Metric %(name)s could not be found on the compute host node %(host)s." "%(node)s." msgstr "" "Metrika %(name)s nemohla být nalezena v uzlu výpočetního hostitele %(host)s." "%(node)s." msgid "Migrate Receive failed" msgstr "Přijetí přesunu selhalo" msgid "Migrate Send failed" msgstr "Odeslání přesunu selhalo" #, python-format msgid "Migration %(migration_id)s could not be found." msgstr "Přesun %(migration_id)s nemohl být nalezen." #, python-format msgid "Migration error: %(reason)s" msgstr "Chyba přesunu: %(reason)s" msgid "Migration is not supported for LVM backed instances" msgstr "Přesun není podporován u instancí zálohovaných na LVM" #, python-format msgid "" "Migration not found for instance %(instance_id)s with status %(status)s." msgstr "Přesun nenalezen v instanci %(instance_id)s se stavem %(status)s." #, python-format msgid "Migration pre-check error: %(reason)s" msgstr "Chyba kontroly před přesunem: %(reason)s" #, python-format msgid "Missing arguments: %s" msgstr "Chybí argumenty: %s" #, python-format msgid "Missing column %(table)s.%(column)s in shadow table" msgstr "Ve stínové tabulce chybí sloupec %(table)s.%(column)s" msgid "Missing device UUID." msgstr "UUID zařízení chybí." msgid "Missing disabled reason field" msgstr "Chybí pole důvodu zakázání" msgid "Missing forced_down field" msgstr "Chybí pole forced_down" msgid "Missing imageRef attribute" msgstr "Chybí vlastnost imageRef" #, python-format msgid "Missing keys: %s" msgstr "Chybí klíče: %s" #, python-format msgid "Missing parameter %s" msgstr "Chybí parametr %s" msgid "Missing parameter dict" msgstr "Chybí parametr dict" #, python-format msgid "" "More than one instance is associated with fixed IP address '%(address)s'." msgstr "" "K pevné IP adrese '%(address)s' je přidružena více než jedna instance." msgid "" "More than one possible network found. Specify network ID(s) to select which " "one(s) to connect to." msgstr "" "Nalezena více než jedna možná síť. Zadejte ID sítě pro výběr té, ke které se " "chcete připojit." msgid "More than one swap drive requested." msgstr "Je požadován více než jeden odkládací disk." #, python-format msgid "Multi-boot operating system found in %s" msgstr "V %s nalezen operační systém s více zavaděči" msgid "Multiple X-Instance-ID headers found within request." msgstr "V žádosti nalezeno více hlaviček X-Instance-ID." msgid "Multiple X-Tenant-ID headers found within request." msgstr "V žádosti nalezeno více hlaviček X-Tenant-ID." 
#, python-format msgid "Multiple floating IP pools matches found for name '%s'" msgstr "Při hledání názvu '%s' nalezeno mnoho zásob plovoucích ip adres" #, python-format msgid "Multiple floating IPs are found for address %(address)s." msgstr "Nalezeno mnoho plovoucích IP pro adresu %(address)s." msgid "" "Multiple hosts may be managed by the VMWare vCenter driver; therefore we do " "not return uptime for just one host." msgstr "" "Ovladač VMware vCenter může spravovat mnoho hostitelů; proto se čas provozu " "pouze pro jednoho hostitele neměří." msgid "Multiple possible networks found, use a Network ID to be more specific." msgstr "" "Nalezeno mnoho možných sítí, použijte ID sítě, nebo buďte konkrétnější." #, python-format msgid "" "Multiple security groups found matching '%s'. Use an ID to be more specific." msgstr "" "Nalezeno mnoho bezpečnostních skupin odpovídající '%s'. Použijte ID nebo " "buďte konkrétnější." msgid "Must input network_id when request IP address" msgstr "Při žádání o IP adresu musíte zadat id sítě" msgid "Must not input both network_id and port_id" msgstr "Id sítě a id portu nesmí být zadány najednou" msgid "" "Must specify connection_url, connection_username (optionally), and " "connection_password to use compute_driver=xenapi.XenAPIDriver" msgstr "" "Pro použití compute_driver=xenapi.XenAPIDriver musíte zadat url připojení a " "volitelně uživatelské jméno a heslo připojení" msgid "" "Must specify host_ip, host_username and host_password to use vmwareapi." "VMwareVCDriver" msgstr "" "Pro použití vmwareapi.VMwareVCDriver musíte zadat ip hostitele, jeho " "uživatelské jméno a heslo" msgid "Must supply a positive value for max_rows" msgstr "Musíte zadat kladnou hodnotu pro maximum řádků" #, python-format msgid "Network %(network_id)s could not be found." msgstr "Síť %(network_id)s nemohla být nalezena." #, python-format msgid "" "Network %(network_uuid)s requires a subnet in order to boot instances on." msgstr "Síť %(network_uuid)s vyžaduje podsíť, na které může zavádět instance." #, python-format msgid "Network could not be found for bridge %(bridge)s" msgstr "Síť nemohla být pro most %(bridge)s nalezena." #, python-format msgid "Network could not be found for instance %(instance_id)s." msgstr "Síť nemohla být pro instanci %(instance_id)s nalezena." msgid "Network not found" msgstr "Síť nenalezena" msgid "" "Network requires port_security_enabled and subnet associated in order to " "apply security groups." msgstr "" "Síť vyžaduje povolení bezpečnostního portu a přidruženou podsíť, aby mohla " "používat bezpečnostní skupiny." msgid "New volume must be detached in order to swap." msgstr "Pro výměnu musí být nový svazek odpojen." msgid "New volume must be the same size or larger." msgstr "Nový svazek musí mít stejnou nebo větší velikost." #, python-format msgid "No Block Device Mapping with id %(id)s." msgstr "Žádné mapování blokového zařízení s id %(id)s." msgid "No Unique Match Found." msgstr "Nenalezena žádná jedinečná shoda." #, python-format msgid "No agent-build associated with id %(id)s." msgstr "Žádné sestavení agenta není přidruženo k id %(id)s." 
msgid "No compute host specified" msgstr "Nezadán žádný výpočetní hostitel" #, python-format msgid "No device with MAC address %s exists on the VM" msgstr "Na VM neexistuje žádné zařízení s MAC adresou %s" #, python-format msgid "No device with interface-id %s exists on VM" msgstr "Na VM neexistuje žádné zařízení mající id rozhraní %s" #, python-format msgid "No disk at %(location)s" msgstr "Źádný disk ve %(location)s" #, python-format msgid "No fixed IP addresses available for network: %(net)s" msgstr "Nejsou dostupné žádné pevné IP adresy pro síť: %(net)s" msgid "No fixed IPs associated to instance" msgstr "K instanci nejsou přidruženy žádné pevné IP adresy" msgid "No free nbd devices" msgstr "Žádná volná zařízení nbd" msgid "No host available on cluster" msgstr "V clusteru není dostupný žádný hostitel" #, python-format msgid "No hypervisor matching '%s' could be found." msgstr "Nemohl být nalezen žádný hypervizor shodující se s '%s'." msgid "No image locations are accessible" msgstr "Nejsou přístupné žádná umístění obrazu" msgid "No more floating IPs available." msgstr "Žádné další plovoucí IP adresy nejsou dostupné." #, python-format msgid "No more floating IPs in pool %s." msgstr "Žádné další plovoucí IP adresa v zásobě %s." #, python-format msgid "No mount points found in %(root)s of %(image)s" msgstr "V %(root)s z %(image)s nenalezeny žádné body připojení" #, python-format msgid "No operating system found in %s" msgstr "V %s nenalezen žádný operační systém" #, python-format msgid "No primary VDI found for %s" msgstr "Nenalezen žádný hlavní VDI pro %s" msgid "No root disk defined." msgstr "Nezadán žádný kořenový disk." msgid "No suitable network for migrate" msgstr "Žádné vhodné sítě pro přesun" msgid "No valid host found for cold migrate" msgstr "Nebyl nalezen žádný platný hostitel pro přesun při nepoužívání" msgid "No valid host found for resize" msgstr "Nebyl nalezen žádný platný hostitel pro změnu velikosti" #, python-format msgid "No valid host was found. %(reason)s" msgstr "Nebyl nalezen žádný platný hostitel. %(reason)s" #, python-format msgid "No volume Block Device Mapping at path: %(path)s" msgstr "Žádné mapování blokového zařízení svazku v cestě: %(path)s" #, python-format msgid "No volume Block Device Mapping with id %(volume_id)s." msgstr "Žádné mapování blokového zařízení svazku s id %(volume_id)s." #, python-format msgid "Node %s could not be found." msgstr "Uzel %s nemohl být nalezen." #, python-format msgid "Not able to acquire a free port for %(host)s" msgstr "Nelze získat volný port pro %(host)s" #, python-format msgid "Not able to bind %(host)s:%(port)d, %(error)s" msgstr "Nelze svázat %(host)s:%(port)d, %(error)s" msgid "Not an rbd snapshot" msgstr "Není snímkem rbd" #, python-format msgid "Not authorized for image %(image_id)s." msgstr "Nemáte oprávnění k použití obrazu %(image_id)s." msgid "Not authorized." msgstr "Neschváleno." msgid "Not enough parameters to build a valid rule." msgstr "Není dostatek parametrů k sestavení platného pravidla." msgid "Not implemented on Windows" msgstr "Nezavedeno ve Windows" msgid "Not stored in rbd" msgstr "Neuloženo v rbd" msgid "Nothing was archived." msgstr "Nic nebylo archivováno." #, python-format msgid "Nova requires libvirt version %s or greater." 
msgstr "Nova vyžaduje verzi libvirt %s nebo novější" msgid "Number of Rows Archived" msgstr "Počet archivovaných řádků" #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "Činnost objektu %(action)s selhala protože: %(reason)s" msgid "Old volume is attached to a different instance." msgstr "Starý svazek je připojen k jiné instanci." #, python-format msgid "One or more hosts already in availability zone(s) %s" msgstr "Jeden nebo více hostitelů již jsou v zónách dostupnosti %s" msgid "Only administrators may list deleted instances" msgstr "Pouze správci mohou vypsat smazané instance" #, python-format msgid "" "Only file-based SRs (ext/NFS) are supported by this feature. SR %(uuid)s is " "of type %(type)s" msgstr "" "Pouze souborové SR (ext/NFS) jsou touto funkcí podporovány. SR %(uuid)s má " "typ %(type)s" msgid "Origin header does not match this host." msgstr "Hlavička původu neodpovídá tomuto hostiteli." msgid "Origin header not valid." msgstr "Hlavička původu není platná." msgid "Origin header protocol does not match this host." msgstr "Protokol původu hlavičky neodpovídá tomuto hostiteli." #, python-format msgid "PCI Device %(node_id)s:%(address)s not found." msgstr "PCI zařízení %(node_id)s:%(address)s nenalezeno." #, python-format msgid "PCI alias %(alias)s is not defined" msgstr "Přezdívka PCI %(alias)s není určena" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is %(status)s instead of " "%(hopestatus)s" msgstr "" "PCI zařízení %(compute_node_id)s:%(address)s je %(status)s místo " "%(hopestatus)s" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is owned by %(owner)s instead of " "%(hopeowner)s" msgstr "" "PCI zařízení %(compute_node_id)s:%(address)s vlastní %(owner)s místo " "%(hopeowner)s" #, python-format msgid "PCI device %(id)s not found" msgstr "PCI zařízení %(id)s nenalezeno" #, python-format msgid "PCI device request %(requests)s failed" msgstr "Žádost zařízení PCI %(requests)s selhala" #, python-format msgid "PIF %s does not contain IP address" msgstr "Fyz. rozhraní %s neobsahuje IP adresu" #, python-format msgid "Page size %(pagesize)s forbidden against '%(against)s'" msgstr "Velikost stránky %(pagesize)s je zakázána v '%(against)s'" #, python-format msgid "Page size %(pagesize)s is not supported by the host." msgstr "Velikost stránky %(pagesize)s není hostitelem podporována" #, python-format msgid "" "Parameters %(missing_params)s not present in vif_details for vif %(vif_id)s. " "Check your Neutron configuration to validate that the macvtap parameters are " "correct." msgstr "" "Parametry %(missing_params)s nejsou uvedeny v podrobnostech vif %(vif_id)s. " "Zkontrolujte nastavení Neutron a ověřte, že parametry mavctap jsou správné." #, python-format msgid "Path %s must be LVM logical volume" msgstr "Cesta %s musí být logickým svazkem LVM" msgid "Paused" msgstr "Pozastaveno" msgid "Personality file limit exceeded" msgstr "Překročen limit osobnostního souboru" #, python-format msgid "Physical network is missing for network %(network_uuid)s" msgstr "Fyzická síť chybí u sítě %(network_uuid)s" #, python-format msgid "Policy doesn't allow %(action)s to be performed." msgstr "Zásady nedovolují, aby bylo %(action)s provedeno." #, python-format msgid "Port %(port_id)s is still in use." msgstr "Port %(port_id)s se stále používá." #, python-format msgid "Port %(port_id)s not usable for instance %(instance)s." msgstr "Port %(port_id)s není použitelný pro instanci %(instance)s." 
#, python-format msgid "Port %(port_id)s requires a FixedIP in order to be used." msgstr "Port %(port_id)s pro použití vyžaduje pevnou IP adresu." #, python-format msgid "Port %s is not attached" msgstr "Port %s není připojen" #, python-format msgid "Port id %(port_id)s could not be found." msgstr "Port id %(port_id)s nemohlo být nalezeno." #, python-format msgid "Provided video model (%(model)s) is not supported." msgstr "Zadaný model videa (%(model)s) není podporován." #, python-format msgid "Provided watchdog action (%(action)s) is not supported." msgstr "Zadaná činnost sledovače (%(action)s) není podporována" msgid "QEMU guest agent is not enabled" msgstr "Agent hosta QEMU není povolen" #, python-format msgid "Quiescing is not supported in instance %(instance_id)s" msgstr "Ztišení není podporováno v instanci %(instance_id)s" #, python-format msgid "Quota class %(class_name)s could not be found." msgstr "Třída kvóty %(class_name)s nemohla být nalezena." msgid "Quota could not be found" msgstr "Kvóta nemohla být nalezena." #, python-format msgid "" "Quota exceeded for %(overs)s: Requested %(req)s, but already used %(used)s " "of %(allowed)s %(overs)s" msgstr "" "Kvóta překročena pro %(overs)s: Požadováno %(req)s, ale již je použito " "%(used)s z %(allowed)s %(overs)s" #, python-format msgid "Quota exceeded for resources: %(overs)s" msgstr "Kvóta překročena pro zdroje: %(overs)s" msgid "Quota exceeded, too many key pairs." msgstr "Kvóta překročena, příliš mnoho párů klíčů." msgid "Quota exceeded, too many server groups." msgstr "Kvóta překročena, příliš mnoho skupin serveru." msgid "Quota exceeded, too many servers in group" msgstr "Kvóta překročena, příliš mnoho serverů ve skupině" #, python-format msgid "Quota exceeded: code=%(code)s" msgstr "Kvóta překročena: kód=%(code)s" #, python-format msgid "Quota exists for project %(project_id)s, resource %(resource)s" msgstr "Kvóta existuje pro projekt %(project_id)s, zdroj %(resource)s" #, python-format msgid "Quota for project %(project_id)s could not be found." msgstr "Kvóta pro projekt %(project_id)s nemohla být nalezena." #, python-format msgid "" "Quota for user %(user_id)s in project %(project_id)s could not be found." msgstr "" "Kvóta pro uživatele %(user_id)s v projektu %(project_id)s nemohla být " "nalezena." #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be greater than or equal to " "already used and reserved %(minimum)s." msgstr "" "Limit kvóty %(limit)s pro %(resource)s musí být větší nebo rovno již použité " "a vyhrazené %(minimum)s." #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be less than or equal to " "%(maximum)s." msgstr "" "Limit kvóty %(limit)s pro %(resource)s musí být menší nebo rovno %(maximum)s." #, python-format msgid "Reached maximum number of retries trying to unplug VBD %s" msgstr "Dosaženo maximálního počtu nových pokusů o odpojení VBD %s" msgid "Request body and URI mismatch" msgstr "Neshoda s tělem požadavku a URI" msgid "Request is too large." msgstr "Požadavek je příliš velký." 
#, python-format msgid "Request of image %(image_id)s got BadRequest response: %(response)s" msgstr "" "Žádost o obraz %(image_id)s obdržela odpověď o špatné žádosti: %(response)s" #, python-format msgid "RequestSpec not found for instance %(instance_uuid)s" msgstr "U instance %(instance_uuid)s nebyla nalezena žádost o specifikaci" msgid "Requested CPU control policy not supported by host" msgstr "" "Požadovaná zásada kontroly procesoru není podporována na tomto hostiteli" #, python-format msgid "" "Requested hardware '%(model)s' is not supported by the '%(virt)s' virt driver" msgstr "" "Požadovaný hardware '%(model)s' není podporován ovladačem virtualizace " "'%(virt)s'" #, python-format msgid "Requested image %(image)s has automatic disk resize disabled." msgstr "" "Požadovaný obraz %(image)s má zakázanou automatickou změnu své velikosti." msgid "" "Requested instance NUMA topology cannot fit the given host NUMA topology" msgstr "" "Požadovaná topologie NUMA instance se nemůže vejít do zadané topologie NUMA " "hostitele" msgid "" "Requested instance NUMA topology together with requested PCI devices cannot " "fit the given host NUMA topology" msgstr "" "Požadovaná topologie NUMA instance spolu s požadovanými zařízeními PCI se " "nemůže vejít do zadané topologie NUMA hostitele" #, python-format msgid "" "Requested vCPU limits %(sockets)d:%(cores)d:%(threads)d are impossible to " "satisfy for vcpus count %(vcpus)d" msgstr "" "Požadované limity vCPU %(sockets)d:%(cores)d:%(threads)d je nemožné splnit " "pro daný počet vCPU %(vcpus)d" #, python-format msgid "Rescue device does not exist for instance %s" msgstr "Záchranné zařízení neexistuje v instanci %s" #, python-format msgid "Resize error: %(reason)s" msgstr "Chyba změny velikosti: %(reason)s" msgid "Resize to zero disk flavor is not allowed." msgstr "Změna na konfiguraci s nulovým diskem není povolena." msgid "Resource could not be found." msgstr "Zdroj nemohl být nalezen." msgid "Resumed" msgstr "Obnoveno" #, python-format msgid "Root element name should be '%(name)s' not '%(tag)s'" msgstr "Název kořenového prvku by měl být '%(name)s' ne '%(tag)s'" #, python-format msgid "Scheduler Host Filter %(filter_name)s could not be found." msgstr "Filtr hostitelů plánovače %(filter_name)s nemohl být nalezen." #, python-format msgid "Security group %(name)s is not found for project %(project)s" msgstr "Bezpečnostní skupina %(name)s nenalezena v projektu %(project)s" #, python-format msgid "" "Security group %(security_group_id)s not found for project %(project_id)s." msgstr "" "Bezpečnostní skupina %(security_group_id)s není nalezena v projektu " "%(project_id)s." #, python-format msgid "Security group %(security_group_id)s not found." msgstr "Bezpečnostní skupina %(security_group_id)s není nalezena." #, python-format msgid "" "Security group %(security_group_name)s already exists for project " "%(project_id)s." msgstr "" "Bezpečnostní skupina %(security_group_name)s již existuje v projektu " "%(project_id)s." 
#, python-format msgid "" "Security group %(security_group_name)s not associated with the instance " "%(instance)s" msgstr "" "Bezpečnostní skupina %(security_group_name)s není přidružena k instanci " "%(instance)s" msgid "Security group id should be uuid" msgstr "ID bezpečnostní skupiny by mělo být uuid" msgid "Security group name cannot be empty" msgstr "Název bezpečnostní skupiny nemůže být prázdný" msgid "Security group not specified" msgstr "Není zadána bezpečnostní skupina" #, python-format msgid "Server disk was unable to be resized because: %(reason)s" msgstr "" "Změna velikosti disku serveru nemohla být provedena z důvodu: %(reason)s" msgid "Server does not exist" msgstr "Server neexistuje" #, python-format msgid "ServerGroup policy is not supported: %(reason)s" msgstr "Zásada ServerGroup není podporována: %(reason)s" msgid "ServerGroupAffinityFilter not configured" msgstr "Filtr slučivosti skupiny serveru není nastaven" msgid "ServerGroupAntiAffinityFilter not configured" msgstr "Filtr proti slučivosti skupiny serveru není nastaven" #, python-format msgid "Service %(service_id)s could not be found." msgstr "Služba %(service_id)s nemohla být nalezena." #, python-format msgid "Service %s not found." msgstr "Služba %s nenalezena." msgid "Service is unavailable at this time." msgstr "Služba je v tuto chvíli nedostupná." #, python-format msgid "Service with host %(host)s binary %(binary)s exists." msgstr "Služba na hostiteli %(host)s, binární soubor %(binary)s existuje." #, python-format msgid "Service with host %(host)s topic %(topic)s exists." msgstr "Služba na hostiteli %(host)s, téma %(topic)s existuje." msgid "Set admin password is not supported" msgstr "Nastavení hesla správce není podporováno" #, python-format msgid "Shadow table with name %(name)s already exists." msgstr "Stínová tabulka s názvem %(name)s již existuje." #, python-format msgid "Share '%s' is not supported" msgstr "Sdílení '%s' není podporováno" #, python-format msgid "Share level '%s' cannot have share configured" msgstr "Úroveň sdílení '%s' nemůže mít sdílení nastaveno" msgid "" "Shrinking the filesystem down with resize2fs has failed, please check if you " "have enough free space on your disk." msgstr "" "Zmenšení souborového systému pomocí resize2fs selhalo, prosím zkontrolujte, " "zda máte na svém disku dostatek volného místa." #, python-format msgid "Snapshot %(snapshot_id)s could not be found." msgstr "Snímek %(snapshot_id)s nemohl být nalezen." msgid "Some required fields are missing" msgstr "Některá povinná pole chybí" #, python-format msgid "" "Something went wrong when deleting a volume snapshot: rebasing a " "%(protocol)s network disk using qemu-img has not been fully tested" msgstr "" "Při mazání snímku svazku se něco zvrtlo: Přeskládání síťového disku " "%(protocol)s pomocí qemu-img nebylo plně otestováno." msgid "Sort direction size exceeds sort key size" msgstr "Velikost směru řazení převyšuje velikost klíče řazení" msgid "Sort key supplied was not valid." msgstr "Zadaný klíč řazení byl neplatný." 
msgid "Specified fixed address not assigned to instance" msgstr "Zadaná pevná adresa není k instanci přidělena" msgid "Specify `table_name` or `table` param" msgstr "Zadejte parametr `table_name` nebo `table`" msgid "Specify only one param `table_name` `table`" msgstr "Zadejte pouze jeden parametr `table_name` `table`" msgid "Started" msgstr "Spuštěno" msgid "Stopped" msgstr "Zastaveno" #, python-format msgid "Storage error: %(reason)s" msgstr "Chyba úložiště: %(reason)s" #, python-format msgid "Storage policy %s did not match any datastores" msgstr "Zásada uložení %s neodpovídá žádným datovým úložištím" msgid "Success" msgstr "Úspěch" msgid "Suspended" msgstr "Uspáno" msgid "Swap drive requested is larger than instance type allows." msgstr "Požadovaný odkládací disk je větší než typ instance umožňuje." msgid "Table" msgstr "Tabulka" #, python-format msgid "Task %(task_name)s is already running on host %(host)s" msgstr "Úkol %(task_name)s již je spuštěn na hostiteli %(host)s" #, python-format msgid "Task %(task_name)s is not running on host %(host)s" msgstr "Úkol %(task_name)s není spuštěn na hostiteli %(host)s" #, python-format msgid "The PCI address %(address)s has an incorrect format." msgstr "PCI adresa %(address)s je ve špatném formátu." msgid "The backlog must be more than 0" msgstr "parametr backlog musí být větší než 0." #, python-format msgid "The console port range %(min_port)d-%(max_port)d is exhausted." msgstr "Rozsah portů konzole %(min_port)d-%(max_port)d je vyčerpán." msgid "The created instance's disk would be too small." msgstr "Vytvořený disk instance by byl příliš malý." msgid "The current driver does not support preserving ephemeral partitions." msgstr "Současný ovladač nepodporuje zachování efemerních oddílů." msgid "The default PBM policy doesn't exist on the backend." msgstr "Výchozí zásada PBM neexistuje na této podpůrné vrstvě. " msgid "The floating IP request failed with a BadRequest" msgstr "Žádost o plovoucí IP selhala s chybou Špatný požadavek." msgid "" "The instance requires a newer hypervisor version than has been provided." msgstr "Instance vyžaduje novější verzi hypervizoru, než byla poskytnuta." #, python-format msgid "The number of defined ports: %(ports)d is over the limit: %(quota)d" msgstr "Počet zadaných portů: %(ports)d přesahuje limit: %(quota)d" msgid "The only partition should be partition 1." msgstr "Jediným oddílem by měl být oddíl 1." #, python-format msgid "The provided RNG device path: (%(path)s) is not present on the host." msgstr "Zadaná cesta zařízení RNG: (%(path)s) se nevyskytuje na hostiteli." msgid "The request body can't be empty" msgstr "Tělo žádosti nemůže být prázdné" msgid "The request is invalid." msgstr "Požadavek je neplatný." #, python-format msgid "" "The requested amount of video memory %(req_vram)d is higher than the maximum " "allowed by flavor %(max_vram)d." msgstr "" "Požadované množství video paměti %(req_vram)d je vyšší než maximální " "množství povolené konfigurací %(max_vram)d." msgid "The requested availability zone is not available" msgstr "Požadovaná zóna dostupnosti není dostupná" msgid "The requested console type details are not accessible" msgstr "Požadované podrobnosti typu konzole nejsou přístupné" msgid "The requested functionality is not supported." msgstr "Požadovaná funkce není podporována." #, python-format msgid "The specified cluster '%s' was not found in vCenter" msgstr "Zadaný cluster '%s' nebyl nalezen ve vCenter" #, python-format msgid "The supplied device path (%(path)s) is in use." 
msgstr "Zadaná cesta zařízení (%(path)s) se již používá." #, python-format msgid "The supplied device path (%(path)s) is invalid." msgstr "Zadaná cesta zařízení (%(path)s) je neplatná." #, python-format msgid "" "The supplied disk path (%(path)s) already exists, it is expected not to " "exist." msgstr "Zadaná cesta disku (%(path)s) již existuje, očekává se, že nebude." msgid "The supplied hypervisor type of is invalid." msgstr "Zadaný typ hypervizoru je neplatný." msgid "The target host can't be the same one." msgstr "Cílový hostitel nemůže být ten stejný." #, python-format msgid "The token '%(token)s' is invalid or has expired" msgstr "Známka '%(token)s' je neplatná nebo vypršela" #, python-format msgid "" "The volume cannot be assigned the same device name as the root device %s" msgstr "" "Svazek nemůže být přidělen ke stejnému názvu zařízení jako kořenové zařízení " "%s" #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. Run this command again with the --delete " "option after you have backed up any necessary data." msgstr "" "V tabulce '%(table_name)s' existuje %(records)d záznamů, kde uuid nebo uuid " "instance je prázdné. Poté, co jste provedli zálohu všech důležitých dat, " "spusťte tento příkaz znovu s volbou --delete." #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. These must be manually cleaned up before " "the migration will pass. Consider running the 'nova-manage db " "null_instance_uuid_scan' command." msgstr "" "V tabulce '%(table_name)s' existuje %(records)d záznamů, kde uuid nebo uuid " "instance je prázdné. Tyto musí být ručně vyčištěny předtím, než bude " "přesunutí úspěšně dokončeno. Zvažte spustit příkaz 'nova-manage db " "null_instance_uuid_scan'." msgid "There are not enough hosts available." msgstr "Není dostatek dostupných hostitelů." #, python-format msgid "" "There are still %(count)i unmigrated flavor records. Migration cannot " "continue until all instance flavor records have been migrated to the new " "format. Please run `nova-manage db migrate_flavor_data' first." msgstr "" "Stále existuje %(count)i nepřesunutých záznamů konfigurace. Přesun nemůže " "pokračovat, dokud nebudou všechny záznamy konfigurace instance přesunuty do " "nového formátu. Prosím nejdříve spusťte `nova-manage db migrate_flavor_data'." #, python-format msgid "There is no such action: %s" msgstr "Žádná taková činnost: %s" msgid "There were no records found where instance_uuid was NULL." msgstr "Nebyly nalezeny žádné záznamy, kde uuid instance bylo prázdné." #, python-format msgid "" "This compute node's hypervisor is older than the minimum supported version: " "%(version)s." msgstr "" "Hypervizor tohoto uzlu výpočtu je starší než minimální podporovaná verze: " "%(version)s." msgid "This domU must be running on the host specified by connection_url" msgstr "Tento domU musí být spuštěn na hostiteli zadaném v url připojení." msgid "" "This method needs to be called with either networks=None and port_ids=None " "or port_ids and networks as not none." msgstr "" "Tuto metodu je třeba zavolat buď s parametry networks=None a port_ids=None " "nebo id portů a sítě nezadány jako none." 
#, python-format msgid "This rule already exists in group %s" msgstr "Toto pravidlo již existuje ve skupině %s" #, python-format msgid "" "This service is older (v%(thisver)i) than the minimum (v%(minver)i) version " "of the rest of the deployment. Unable to continue." msgstr "" "Tato služba je starší (v%(thisver)i) než minimální verze (v%(minver)i) ve " "zbytku nasazení. Nelze pokračovat." #, python-format msgid "Timeout waiting for device %s to be created" msgstr "Vypršel časový limit při čekání na vytvoření zařízení %s" msgid "Timeout waiting for response from cell" msgstr "Při čekání na odpověď od buňky vypršel čas" #, python-format msgid "Timeout while checking if we can live migrate to host: %s" msgstr "" "Při kontrole možnosti přesunu za provozu na hostitele %s vypršel časový " "limit." msgid "To and From ports must be integers" msgstr "Porty Do a Od musí být celá čísla" msgid "Token not found" msgstr "Známka nenalezena" msgid "Type and Code must be integers for ICMP protocol type" msgstr "Typ a kód musí být v protokolu ICMP celá čísla" #, python-format msgid "" "Unable to associate floating IP %(address)s to any fixed IPs for instance " "%(id)s. Instance has no fixed IPv4 addresses to associate." msgstr "" "Nelze přidružit plovoucí IP adresu %(address)s k žádné z pevných IP adres " "instance %(id)s. Instance nemá žádné pevné adresy IPv4 k přidružení." #, python-format msgid "" "Unable to associate floating IP %(address)s to fixed IP %(fixed_address)s " "for instance %(id)s. Error: %(error)s" msgstr "" "Nelze přidružit plovoucí IP adresu %(address)s k pevné IP adrese " "%(fixed_address)s instance %(id)s. Chyba: %(error)s" msgid "Unable to authenticate Ironic client." msgstr "Nelze ověřit klienta ironic." #, python-format msgid "Unable to contact guest agent. The following call timed out: %(method)s" msgstr "" "Nelze kontaktovat agenta hosta. 
Následujícímu volání vypršel čas: %(method)s" #, python-format msgid "Unable to destroy VBD %s" msgstr "Nelze zničit VBD %s" #, python-format msgid "Unable to destroy VDI %s" msgstr "Nelze zničit VDI %s" #, python-format msgid "Unable to determine disk bus for '%s'" msgstr "Nelze zjistit řadič disku pro '%s'" #, python-format msgid "Unable to determine disk prefix for %s" msgstr "Nelze zjistit předponu disku pro %s" #, python-format msgid "Unable to eject %s from the pool; No master found" msgstr "Nelze vyjmout %s ze zásoby; Nenalezen žádný správce zásoby" #, python-format msgid "Unable to eject %s from the pool; pool not empty" msgstr "Nelze vyjmout %s ze zásoby; zásoba není prázdná" #, python-format msgid "Unable to find SR from VBD %s" msgstr "Nelze najít SR z VBD %s" #, python-format msgid "Unable to find SR from VDI %s" msgstr "Nelze najít SR z VDI %s" #, python-format msgid "Unable to find ca_file : %s" msgstr "Nelze najít soubor certifikační autority : %s" #, python-format msgid "Unable to find cert_file : %s" msgstr "Nelze najít soubor certifikátu : %s" #, python-format msgid "Unable to find host for Instance %s" msgstr "Nelze najít hostitele pro instanci %s" msgid "Unable to find iSCSI Target" msgstr "Nelze najít cíl ISCSI" #, python-format msgid "Unable to find key_file : %s" msgstr "Nelze najít soubor s klíčem : %s" msgid "Unable to find root VBD/VDI for VM" msgstr "Nelze najít kořen VBD/VDI pro VM" msgid "Unable to find volume" msgstr "Nelze najít svazek" msgid "Unable to get host UUID: /etc/machine-id does not exist" msgstr "Nelze získat UUID hostitele: /etc/machine-id neexistuje" msgid "Unable to get host UUID: /etc/machine-id is empty" msgstr "Nelze získat UUID hostitele: /etc/machine-id je prázdné" #, python-format msgid "Unable to get record of VDI %s on" msgstr "Nelze získat záznam VDI %s na" #, python-format msgid "Unable to introduce VDI for SR %s" msgstr "Nelze zavést VDI pro SR %s" #, python-format msgid "Unable to introduce VDI on SR %s" msgstr "Nelze zavést VDI na SR %s" #, python-format msgid "Unable to join %s in the pool" msgstr "Nelze připojit %s do zásoby" msgid "" "Unable to launch multiple instances with a single configured port ID. Please " "launch your instance one by one with different ports." msgstr "" "Nelze spustit více instancí s jedním nastaveným ID portu. Prosím spusťte " "vaše instance jednotlivě s odlišnými porty." #, python-format msgid "" "Unable to migrate %(instance_uuid)s to %(dest)s: Lack of memory(host:" "%(avail)s <= instance:%(mem_inst)s)" msgstr "" "Nelze přesunout %(instance_uuid)s do %(dest)s: Nedostatek paměti (hostitel:" "%(avail)s <= instance:%(mem_inst)s)" #, python-format msgid "" "Unable to migrate %(instance_uuid)s: Disk of instance is too large(available " "on destination host:%(available)s < need:%(necessary)s)" msgstr "" "Nelze přesunout %(instance_uuid)s: Disk instance je příliš velký (dostupná " "kapacita na cílovém hostiteli:%(available)s < nutná kapacita:%(necessary)s)" #, python-format msgid "" "Unable to migrate instance (%(instance_id)s) to current host (%(host)s)." msgstr "" "Nelze přesunout instanci (%(instance_id)s) na současného hostitele " "(%(host)s)." #, python-format msgid "Unable to obtain target information %s" msgstr "Nelze získat informace o cíli %s" msgid "Unable to resize disk down." msgstr "Nelze zmenšit velikost disku." msgid "Unable to set password on instance" msgstr "Nelze nastavit heslo instance" msgid "Unable to shrink disk." msgstr "Nelze zmenšit disk." msgid "Unable to terminate instance." 
msgstr "Nelze ukončit instanci." #, python-format msgid "Unable to unplug VBD %s" msgstr "Nelze odpojit VBD %s" #, python-format msgid "Unacceptable CPU info: %(reason)s" msgstr "Nepřijatelné informace o CPU: %(reason)s" msgid "Unacceptable parameters." msgstr "Nepřijatelné parametry." #, python-format msgid "Unavailable console type %(console_type)s." msgstr "Nedostupný typ konzole %(console_type)s." msgid "" "Undefined Block Device Mapping root: BlockDeviceMappingList contains Block " "Device Mappings from multiple instances." msgstr "" "Neurčený kořen mapování blokového zařízení: BlockDeviceMappingList obsahuje " "mapování blokového zařízení z více instancí." #, python-format msgid "" "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ " "and attach the Nova API log if possible.\n" "%s" msgstr "" "Neočekávaná chyba API. Prosím nahlaste ji na http://bugs.launchpad.net/nova/ " "a pokud možno připojte k ní záznam Nova API.\n" "%s" #, python-format msgid "Unexpected aggregate action %s" msgstr "Neočekávaná činnost agregátu %s" msgid "Unexpected type adding stats" msgstr "Neočekávaný typ při přidávání statistik" #, python-format msgid "Unexpected vif_type=%s" msgstr "Neočekávaný typ vif=%s" msgid "Unknown" msgstr "Neznámé" msgid "Unknown action" msgstr "Neznámá činnost" #, python-format msgid "Unknown config drive format %(format)s. Select one of iso9660 or vfat." msgstr "" "Neznámý formát konfigurační jednotky %(format)s. Vyberte jedno z iso9660 " "nebo vfat." #, python-format msgid "Unknown delete_info type %s" msgstr "Neznámý typ mazání informací %s" #, python-format msgid "Unknown image_type=%s" msgstr "Neznámý typ obrazu=%s" #, python-format msgid "Unknown quota resources %(unknown)s." msgstr "Neznámý zdroj kvóty %(unknown)s." msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "Neznámý směr řazení, musí být 'desc' nebo 'asc'" #, python-format msgid "Unknown type: %s" msgstr "Neznámý typ: %s" msgid "Unrecognized legacy format." msgstr "Nerozpoznaný zastaralý formát." #, python-format msgid "Unrecognized read_deleted value '%s'" msgstr "Nerozpoznaná hodnota čtení smazaných '%s'" #, python-format msgid "Unrecognized value '%s' for CONF.running_deleted_instance_action" msgstr "Nerozpoznaná hodnota '%s' pro CONF.running_deleted_instance_action" #, python-format msgid "Unshelve attempted but the image %s cannot be found." msgstr "Pokus o vyskladnění, ale obraz %s nemůže být nalezen." msgid "Unsupported Content-Type" msgstr "Nepodporovaný Content-Type" msgid "Upgrade DB using Essex release first." msgstr "Nejdříve aktualizujte DB pomocí verze z Essex." #, python-format msgid "User %(username)s not found in password file." msgstr "Uživatel %(username)s nenalezen v souboru hesel." #, python-format msgid "User %(username)s not found in shadow file." msgstr "Uživatel %(username)s nenalezen ve stínovém souboru." msgid "User data needs to be valid base 64." msgstr "Uživatelská data potřebují být v platném formátu base 64." msgid "User does not have admin privileges" msgstr "Uživatel nemá správcovská oprávnění" msgid "" "Using different block_device_mapping syntaxes is not allowed in the same " "request." msgstr "" "Použití různých syntaxí mapování blokového zařízení není povoleno ve stejné " "žádosti." #, python-format msgid "" "VDI %(vdi_ref)s is %(virtual_size)d bytes which is larger than flavor size " "of %(new_disk_size)d bytes." msgstr "" "VDI %(vdi_ref)s má %(virtual_size)d bajtů, což jje více než velikost " "konfigurace mající %(new_disk_size)d bajtů." 
#, python-format msgid "" "VDI not found on SR %(sr)s (vdi_uuid %(vdi_uuid)s, target_lun %(target_lun)s)" msgstr "" "VDi nenalezeno v SR %(sr)s (uuid vid %(vdi_uuid)s,cílový lun %(target_lun)s)" #, python-format msgid "VHD coalesce attempts exceeded (%d), giving up..." msgstr "Překročen počet pokusů (%d) o splynutí VHD, operace zrušena..." #, python-format msgid "" "Version %(req_ver)s is not supported by the API. Minimum is %(min_ver)s and " "maximum is %(max_ver)s." msgstr "" "Verze %(req_ver)s není API podporována. Minimální verze je %(min_ver)s a " "maximální %(max_ver)s." msgid "Virtual Interface creation failed" msgstr "Vytvoření virtuálního rozhraní selhalo" msgid "Virtual interface plugin failed" msgstr "Zásuvný modul virtuálního rozhraní selhalo" #, python-format msgid "Virtual machine mode '%(vmmode)s' is not recognised" msgstr "Uzel virtuálního stroje '%(vmmode)s' nebyl rozpoznán" #, python-format msgid "Virtual machine mode '%s' is not valid" msgstr "Režim virtuálního stroje '%s' není platný" #, python-format msgid "Virtualization type '%(virt)s' is not supported by this compute driver" msgstr "Typ virtualizace '%(virt)s' není podporováno tímto ovladačem výpočtu" #, python-format msgid "Volume %(volume_id)s could not be attached. Reason: %(reason)s" msgstr "Svazek %(volume_id)s nelze připojit. Důvod: %(reason)s" #, python-format msgid "Volume %(volume_id)s could not be found." msgstr "Svazek %(volume_id)s nemohl být nalezen." #, python-format msgid "" "Volume %(volume_id)s did not finish being created even after we waited " "%(seconds)s seconds or %(attempts)s attempts. And its status is " "%(volume_status)s." msgstr "" "Svazek %(volume_id)s nedokončil proces vytváření i po vyčkávání po dobu " "%(seconds)s vteřin nebo %(attempts)s pokusech. A jeho stav je " "%(volume_status)s." msgid "Volume does not belong to the requested instance." msgstr "Svazek nepatří do požadované instance." #, python-format msgid "" "Volume encryption is not supported for %(volume_type)s volume %(volume_id)s" msgstr "" "Šifrování svazku není podporováno v %(volume_type)s svazku %(volume_id)s" #, python-format msgid "" "Volume is smaller than the minimum size specified in image metadata. Volume " "size is %(volume_size)i bytes, minimum size is %(image_min_disk)i bytes." msgstr "" "Svazek je menší, než minimální zadaná velikost zadaná v popisných datech " "obrazu. Svazek má %(volume_size)i bajtů, minimální velikost je " "%(image_min_disk)i bajtů." #, python-format msgid "" "Volume sets block size, but the current libvirt hypervisor '%s' does not " "support custom block size" msgstr "" "Svazek nastavuje velikost bloku, ale současný hypervizor libvirt '%s' " "nepodporuje vlastní velikost bloku" #, python-format msgid "" "We do not support scheme '%s' under Python < 2.7.4, please use http or https" msgstr "" "Schéma '%s' není podporováno v Python s verzí < 2.7.4, prosím použijte http " "nebo https" msgid "When resizing, instances must change flavor!" msgstr "Při změně velikosti musí instance změnit konfiguraci!" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "Při provozování serveru v režimu SSL musíte zadat konfigurační volby " "cert_file a key_file ve vašem souboru s nastavení" #, python-format msgid "Wrong quota method %(method)s used on resource %(res)s" msgstr "Špatná metoda kvóty %(method)s použita na zdroj %(res)s" msgid "Wrong type of hook method. 
Only 'pre' and 'post' type allowed" msgstr "Špatný typ metody háku. Jsou povoleny pouze typy 'pre' a 'post'" msgid "X-Forwarded-For is missing from request." msgstr "X-Forwarded-For v žádosti chybí." msgid "X-Instance-ID header is missing from request." msgstr "Hlavička X-Instance-ID v žádosti chybí." msgid "X-Instance-ID-Signature header is missing from request." msgstr "Hlavička X-Instance-ID-Signature v žádosti chybí." msgid "X-Metadata-Provider is missing from request." msgstr "X-Metadata-Provider v žádosti chybí." msgid "X-Tenant-ID header is missing from request." msgstr "Hlavička X-Tenant-ID v žádosti chybí." msgid "XAPI supporting relax-xsm-sr-check=true required" msgstr "Vyžadováno XAPI podporující relax-xsm-sr-check=true" msgid "You are not allowed to delete the image." msgstr "Nemáte oprávnění smazat tento obraz." msgid "" "You are not authorized to access the image the instance was started with." msgstr "" "Nemáte oprávnění pro přístup k obrazu, z kterého byla instance spuštěna." msgid "You must implement __call__" msgstr "Musíte zavést __call__" msgid "You should specify images_rbd_pool flag to use rbd images." msgstr "Pro použití obrazů rbd byste měli zadat příznak images_rbd_pool." msgid "You should specify images_volume_group flag to use LVM images." msgstr "Pro použití obrazů LVM byste měli zadat příznak images_volume_group." msgid "Zero floating IPs available." msgstr "Je dostupných nula plovoucích IP adres." msgid "admin password can't be changed on existing disk" msgstr "heslo správce nelze měnit na existujícím disku" msgid "aggregate deleted" msgstr "agregát smazán" msgid "aggregate in error" msgstr "agregát má chybu" #, python-format msgid "assert_can_migrate failed because: %s" msgstr "assert_can_migrate selhalo protože: %s" msgid "cannot understand JSON" msgstr "JSON nelze porozumět" msgid "clone() is not implemented" msgstr "clone() není zavedeno." #, python-format msgid "connect info: %s" msgstr "informace o připojení: %s" #, python-format msgid "connecting to: %(host)s:%(port)s" msgstr "připojování k: %(host)s:%(port)s" #, python-format msgid "disk type '%s' not supported" msgstr "typ disku '%s' není podporován" #, python-format msgid "empty project id for instance %s" msgstr "prázdné id projektu pro instanci %s" msgid "error setting admin password" msgstr "chyba při nastavování hesla správce" #, python-format msgid "error: %s" msgstr "chyba: %s" #, python-format msgid "failed to generate X509 fingerprint. Error message: %s" msgstr "nelze vytvořit otisk X509. Chybová zpráva: %s" msgid "failed to generate fingerprint" msgstr "nelze vytvořit otisk" msgid "filename cannot be None" msgstr "název souboru nemůže být None" msgid "floating IP is already associated" msgstr "Plovoucí IP adresa již přidružena." 
msgid "floating IP not found" msgstr "Plovoucí IP adresa nenalezena" #, python-format msgid "fmt=%(fmt)s backed by: %(backing_file)s" msgstr "fmt=%(fmt)s zálohováno: %(backing_file)s" #, python-format msgid "href %s does not contain version" msgstr "href %s neobsahuje verzi" msgid "image already mounted" msgstr "obraz již je připojen" #, python-format msgid "instance %s is not running" msgstr "Instance %s není spuštěna" msgid "instance has a kernel or ramdisk but not both" msgstr "Instance má kernel nebo ramdisk, ale ne oba" msgid "instance is a required argument to use @refresh_cache" msgstr "instance je povinný argument pro použití @refresh_cache" msgid "instance is not in a suspended state" msgstr "instance není v pozastaveném stavu" msgid "instance is not powered on" msgstr "Instance není zapnuta" msgid "instance is powered off and cannot be suspended." msgstr "instance je vypnutá a nemůže být pozastavena" #, python-format msgid "instance_id %s could not be found as device id on any ports" msgstr "" "ID instance %s nemohlo být nalezeno na žádném z portů zařízení zadaného " "pomocí id" msgid "is_public must be a boolean" msgstr "is_public musí být boolean" msgid "keymgr.fixed_key not defined" msgstr "keymgr.fixed_key není zadáno" msgid "l3driver call to add floating IP failed" msgstr "Volání ovladače l3 pro přidání plovoucí IP adresy selhalo" #, python-format msgid "libguestfs installed but not usable (%s)" msgstr "libguestfs je nainstalováno ale nelze použít (%s)" #, python-format msgid "libguestfs is not installed (%s)" msgstr "libguestfs není nainstalováno (%s)" #, python-format msgid "marker [%s] not found" msgstr "značka [%s] nenalezena" msgid "max_count cannot be greater than 1 if an fixed_ip is specified." msgstr "Pokud je zadána pevná IP, maximální počet nemůže být větší než 1." msgid "min_count must be <= max_count" msgstr "min_count musí být <= max_count" #, python-format msgid "nbd device %s did not show up" msgstr "zařízení nbd %s se nezobrazilo" msgid "nbd unavailable: module not loaded" msgstr "nbd nedostupné: modul nenačten" msgid "no hosts to remove" msgstr "žádní hostitelé pro odstranění" #, python-format msgid "no match found for %s" msgstr "nebyla nalezena shoda pro %s" #, python-format msgid "not able to execute ssh command: %s" msgstr "nelze spustit příkaz ssh: %s" msgid "operation time out" msgstr "operace vypršela" #, python-format msgid "partition %s not found" msgstr "Oddíl %s nenalezen" #, python-format msgid "partition search unsupported with %s" msgstr "Hledání oddílu není podporováno v %s" msgid "pause not supported for vmwareapi" msgstr "pozastavení není v vmwareapi podporováno" #, python-format msgid "qemu-nbd error: %s" msgstr "chyba qemu-nbd: %s" msgid "rbd python libraries not found" msgstr "python knihovny rbd nenalezeny" #, python-format msgid "read_deleted can only be one of 'no', 'yes' or 'only', not %r" msgstr "čtení smazaného může být buď 'no', 'yes' nebo 'only', ne %r" msgid "serve() can only be called once" msgstr "serve() mlže být voláno pouze jednou" msgid "service is a mandatory argument for DB based ServiceGroup driver" msgstr "" "service je povinný argument pro ovladače Skupiny serveru založené na databázi" msgid "service is a mandatory argument for Memcached based ServiceGroup driver" msgstr "" "service je povinný argument pro ovladače Skupiny serveru založené na " "Memcached" msgid "set_admin_password is not implemented by this driver or guest instance." msgstr "set_admin_password není tímto ovladačem nebo hostem instance zavedeno." 
msgid "setup in progress" msgstr "probíhá nastavování" #, python-format msgid "snapshot for %s" msgstr "snímek pro %s" msgid "snapshot_id required in create_info" msgstr "Id snímku vyžadováno při vytváření informací" msgid "token not provided" msgstr "příznak není zadán" msgid "too many body keys" msgstr "příliš mnoho klíčů těla" msgid "unpause not supported for vmwareapi" msgstr "zrušení pozastavení není v vmwareapi podporováno" msgid "version should be an integer" msgstr "verze by měla být celé číslo" #, python-format msgid "vg %s must be LVM volume group" msgstr "vg %s musí být skupina svazku LVM" #, python-format msgid "vhostuser_sock_path not present in vif_details for vif %(vif_id)s" msgstr "" "cesta soketu uživatele vhostitele není přítomna v podrobnostech vif " "%(vif_id)s" #, python-format msgid "vif type %s not supported" msgstr "vif typu %s není podporován" msgid "vif_type parameter must be present for this vif_driver implementation" msgstr "parametr vif_type musí být přítomen pro toto zavedení ovladače vif." #, python-format msgid "volume %s already attached" msgstr "svazek %s již je připojen" #, python-format msgid "" "volume '%(vol)s' status must be 'in-use'. Currently in '%(status)s' status" msgstr "" "stav svazku '%(vol)s' musí být 'in-use'. Nyní je ve stavu '%(status)s'." #, python-format msgid "xenapi.fake does not have an implementation for %s" msgstr "xenapi.fake nemá zavedeno %s" #, python-format msgid "" "xenapi.fake does not have an implementation for %s or it has been called " "with the wrong number of arguments" msgstr "" "xenapi.fake nemá zavedeno %s, nebo byl zavolán se špatným počtem argumentů" ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0664723 nova-21.2.4/nova/locale/de/0000775000175000017500000000000000000000000015360 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3504698 nova-21.2.4/nova/locale/de/LC_MESSAGES/0000775000175000017500000000000000000000000017145 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/locale/de/LC_MESSAGES/nova.po0000664000175000017500000034301300000000000020454 0ustar00zuulzuul00000000000000# Translations template for nova. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the nova project. # # Translators: # Alec Hans , 2013 # Ettore Atalan , 2014 # FIRST AUTHOR , 2011 # iLennart21 , 2013 # Laera Loris , 2013 # matthew wagoner , 2012 # English translations for nova. # Andreas Jaeger , 2016. #zanata # Andreas Jaeger , 2019. #zanata # Andreas Jaeger , 2020. #zanata msgid "" msgstr "" "Project-Id-Version: nova VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2020-04-27 16:23+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2020-04-25 10:30+0000\n" "Last-Translator: Andreas Jaeger \n" "Language: de\n" "Language-Team: German\n" "Plural-Forms: nplurals=2; plural=(n != 1)\n" "Generated-By: Babel 2.2.0\n" "X-Generator: Zanata 4.3.3\n" #, python-format msgid "%(address)s is not a valid IP v4/6 address." msgstr "%(address)s ist keine gültige IP v4/6-Adresse." 
#, python-format msgid "" "%(binary)s attempted direct database access which is not allowed by policy" msgstr "" "%(binary)s hat versucht, direkt auf die Datenbank zuzugreifen, dies ist laut " "Richtlinie nicht zulässig" #, python-format msgid "%(cidr)s is not a valid IP network." msgstr "%(cidr)s ist kein gültiges IP-Netzwerk." #, python-format msgid "%(field)s should not be part of the updates." msgstr "%(field)s sollte nicht Teil der Aktualisierung sein." #, python-format msgid "%(memsize)d MB of memory assigned, but expected %(memtotal)d MB" msgstr "" "%(memsize)d MB Speicher zugewiesen, erwartet werden jedoch %(memtotal)d MB" #, python-format msgid "%(path)s is not on local storage: %(reason)s" msgstr "%(path)s ist nicht auf dem lokalen Speicher: %(reason)s" #, python-format msgid "%(path)s is not on shared storage: %(reason)s" msgstr "%(path)s ist nicht auf dem gemeinsamen Speicher: %(reason)s" #, python-format msgid "%(total)i rows matched query %(meth)s, %(done)i migrated" msgstr "" "%(total)i Zeilen stimmten mit der Abfrage %(meth)s überein. %(done)i wurden " "migriert." #, python-format msgid "%(type)s hypervisor does not support PCI devices" msgstr "%(type)s Hypervisor unterstützt PCI Gerät nicht" #, python-format msgid "%(worker_name)s value of %(workers)s is invalid, must be greater than 0" msgstr "" "Wert %(worker_name)s von %(workers)s ist ungültig; muss größer als 0 sein" #, python-format msgid "%s does not support disk hotplug." msgstr "%s unterstützt kein Anschließen von Platten im laufenden Betrieb." #, python-format msgid "%s format is not supported" msgstr "%s-Format wird nicht unterstützt" #, python-format msgid "%s is not supported." msgstr "%s wird nicht unterstützt." #, python-format msgid "%s must be either 'MANUAL' or 'AUTO'." msgstr "%s muss entweder 'MANUAL' oder 'AUTO' sein." #, python-format msgid "'%(other)s' should be an instance of '%(cls)s'" msgstr "'%(other)s' sollte keine Instanz sein von '%(cls)s'" msgid "'qemu-img info' parsing failed." msgstr "Auswertung von 'qemu-img info' fehlgeschlagen." #, python-format msgid "'rxtx_factor' argument must be a float between 0 and %g" msgstr "Argument 'rxtx_factor' muss eine Gleitkommazahl zwischen 0 und %g sein" #, python-format msgid "A NetworkModel is required in field %s" msgstr "Ein NetworkModel ist erforderlich im Feld %s" #, python-format msgid "" "API Version String %(version)s is of invalid format. Must be of format " "MajorNum.MinorNum." msgstr "" "API Versionszeichenkette %(version)s ist im falschen Format. Muss im Format " "sein MajorNum.MinorNum." #, python-format msgid "API version %(version)s is not supported on this method." msgstr "API Version %(version)s ist nicht unterstützt für diese Methode." msgid "Access list not available for public flavors." msgstr "Zugriffsliste ist für öffentliche Versionen nicht verfügbar. 
" #, python-format msgid "Action %s not found" msgstr "Aktion %s nicht gefunden" #, python-format msgid "" "Action for request_id %(request_id)s on instance %(instance_uuid)s not found" msgstr "" "Aktion für request_id '%(request_id)s' für Instanz '%(instance_uuid)s' nicht " "gefunden" #, python-format msgid "Action: '%(action)s', calling method: %(meth)s, body: %(body)s" msgstr "Aktion '%(action)s', Aufrufmethode %(meth)s, Hauptteil %(body)s" #, python-format msgid "Add metadata failed for aggregate %(id)s after %(retries)s retries" msgstr "" "Fehler beim Hinzufügen von Metadaten für Aggregat %(id)s nach %(retries)s " "Wiederholungen" msgid "Affinity instance group policy was violated." msgstr "Gruppenrichtlinie von Affinitätsinstanz wurde nicht eingehalten." #, python-format msgid "Agent does not support the call: %(method)s" msgstr "Agent unterstützt den Aufruf nicht: %(method)s" #, python-format msgid "" "Agent-build with hypervisor %(hypervisor)s os %(os)s architecture " "%(architecture)s exists." msgstr "" "Agentenbuild mit Hypervisor %(hypervisor)s, Betriebssystem %(os)s, " "Architektur %(architecture)s ist bereits vorhanden." #, python-format msgid "Aggregate %(aggregate_id)s already has host %(host)s." msgstr "Aggregate %(aggregate_id)s hat bereits einen Host %(host)s." #, python-format msgid "Aggregate %(aggregate_id)s could not be found." msgstr "Aggregat %(aggregate_id)s konnte nicht gefunden werden." #, python-format msgid "Aggregate %(aggregate_id)s has no host %(host)s." msgstr "Aggregat %(aggregate_id)s hat keinen Host %(host)s." #, python-format msgid "Aggregate %(aggregate_id)s has no metadata with key %(metadata_key)s." msgstr "" "Aggregat %(aggregate_id)s enthält keine Metadaten mit Schlüssel " "%(metadata_key)s." #, python-format msgid "" "Aggregate %(aggregate_id)s: action '%(action)s' caused an error: %(reason)s." msgstr "" "Aggregat %(aggregate_id)s: Aktion '%(action)s' hat einen Fehler verursacht: " "%(reason)s." #, python-format msgid "Aggregate %(aggregate_name)s already exists." msgstr "Aggregat %(aggregate_name)s ist bereits vorhanden." #, python-format msgid "Aggregate %s does not support empty named availability zone" msgstr "Aggregat %s unterstützt keine leeren, bezeichneten Verfügbarkeitszonen" #, python-format msgid "Aggregate for host %(host)s count not be found." msgstr "Aggregat für Host %(host)s konnte nicht gefunden werden. " #, python-format msgid "An invalid 'name' value was provided. The name must be: %(reason)s" msgstr "" "Es wurde ein ungültiger 'name'-Wert angegeben. Der Name muss lauten: " "%(reason)s" msgid "An unknown error has occurred. Please try your request again." msgstr "" "Ein unbekannter Fehler ist aufgetreten. Stellen Sie Ihre Anforderung erneut." msgid "An unknown exception occurred." msgstr "Eine unbekannte Ausnahme ist aufgetreten." msgid "Anti-affinity instance group policy was violated." msgstr "Gruppenrichtlinie von Antiaffinitätsinstanz wurde nicht eingehalten." #, python-format msgid "Architecture name '%(arch)s' is not recognised" msgstr "Architekturname '%(arch)s' wird nicht erkannt" #, python-format msgid "Architecture name '%s' is not valid" msgstr "Architekturname '%s' ist nicht gültig" #, python-format msgid "" "Attempt to consume PCI device %(compute_node_id)s:%(address)s from empty pool" msgstr "" "Versuche PCI Gerät %(compute_node_id)s:%(address)s vom leeren Pool zu " "beziehen" msgid "Attempted overwrite of an existing value." msgstr "Versuchte Überschreibung eines vorhandenen Wertes." 
#, python-format msgid "Attribute not supported: %(attr)s" msgstr "Attribut nicht unterstützt: %(attr)s" msgid "Bad Request - Invalid Parameters" msgstr "Fehlerhafte Anfrage - Ungültige Parameter" #, python-format msgid "Bad network format: missing %s" msgstr "Falsches Netzwerkformat: fehlendes %s" msgid "Bad networks format" msgstr "Falsches Netzwerkformat" #, python-format msgid "Bad networks format: network uuid is not in proper format (%s)" msgstr "Ungültiges Netzformat: Netz-UUID ist nicht im richtigen Format (%s)" #, python-format msgid "Bad prefix for network in cidr %s" msgstr "Falsches Präfix für Netz in CIDR %s" #, python-format msgid "" "Binding failed for port %(port_id)s, please check neutron logs for more " "information." msgstr "" "Binden fehlgeschlagen für Port %(port_id)s, bitte überprüfen Sie die Neutron " "Logs für mehr Informationen." msgid "Blank components" msgstr "Leere Komponenten" msgid "" "Blank volumes (source: 'blank', dest: 'volume') need to have non-zero size" msgstr "" "Leerer Datenträger (source: 'blank', dest: 'volume') muss eine Nicht-Null-" "Größe haben" #, python-format msgid "Block Device %(id)s is not bootable." msgstr "Blockgerät %(id)s ist nicht bootfähig." #, python-format msgid "" "Block Device Mapping %(volume_id)s is a multi-attach volume and is not valid " "for this operation." msgstr "" "Die Blockgerätezuordnung %(volume_id)s ist ein Datenträger mit mehreren " "Zuordnungen und für diese Operation nicht zulässig." msgid "Block Device Mapping cannot be converted to legacy format. " msgstr "" "Block-Geräte-Zuordnung kann nicht zu einem legalen Format konvertiert werden." msgid "Block Device Mapping is Invalid." msgstr "Block-Geräte-Zuordnung ist ungültig." #, python-format msgid "Block Device Mapping is Invalid: %(details)s" msgstr "Block-Geräte-Zuordnung ist ungültig; %(details)s" msgid "" "Block Device Mapping is Invalid: Boot sequence for the instance and image/" "block device mapping combination is not valid." msgstr "" "Blockgerätezuordnung ist ungültig; Bootsequenz der Instanz und Abbild/Block-" "Geräte-Kombination ist ungültig." msgid "" "Block Device Mapping is Invalid: You specified more local devices than the " "limit allows" msgstr "" "Blockgerätezuordnung ist ungültig; Sie haben mehr lokale Geräte " "spezifiziert, als das Limit erlaubt." #, python-format msgid "Block Device Mapping is Invalid: failed to get image %(id)s." msgstr "" "Blockgerätezuordnung ist ungültig; auslesen des Abbild %(id)s fehlgeschlagen." #, python-format msgid "Block Device Mapping is Invalid: failed to get snapshot %(id)s." msgstr "" "Block-Geräte-Zuordnung ist ungültig; auslesen der Schattenkopie %(id)s " "fehlgeschlagen." #, python-format msgid "Block Device Mapping is Invalid: failed to get volume %(id)s." msgstr "" "Blockgerätezuordnung ist ungültig; auslesen der Datenträger %(id)s " "fehlgeschlagen." msgid "Block migration can not be used with shared storage." msgstr "" "Blockmigration kann nicht mit gemeinsam genutztem Speicher verwendet werden." msgid "Boot index is invalid." msgstr "Bootindex ist ungültig." #, python-format msgid "Build of instance %(instance_uuid)s aborted: %(reason)s" msgstr "Build der Instanz %(instance_uuid)s abgebrochen: %(reason)s" #, python-format msgid "Build of instance %(instance_uuid)s was re-scheduled: %(reason)s" msgstr "Build der Instanz %(instance_uuid)s neu geplant: %(reason)s" #, python-format msgid "BuildRequest not found for instance %(uuid)s" msgstr "BuildRequest für Instanz %(uuid)s nicht gefunden." 
msgid "CPU and memory allocation must be provided for all NUMA nodes" msgstr "" "CPU- und Speicherzuordnung müssen für alle NUMA-Knoten angegeben werden" #, python-format msgid "" "CPU doesn't have compatibility.\n" "\n" "%(ret)s\n" "\n" "Refer to %(u)s" msgstr "" "CPU ist nicht kompatibel.\n" "\n" "%(ret)s\n" "\n" "Siehe %(u)s" #, python-format msgid "CPU number %(cpunum)d is assigned to two nodes" msgstr "CPU-Nummer %(cpunum)d ist zwei Knoten zugeordnet" #, python-format msgid "CPU number %(cpunum)d is larger than max %(cpumax)d" msgstr "CPU-Nummer %(cpunum)d ist größer als das Maximum %(cpumax)d" #, python-format msgid "CPU number %(cpuset)s is not assigned to any node" msgstr "CPU-Nummer %(cpuset)s ist keinem Knoten zugeordnet" msgid "Can not add access to a public flavor." msgstr "Kann keinen Zugriff auf eine öffentliche Version herstellen. " msgid "Can not find requested image" msgstr "Angefordertes Image kann nicht gefunden werden" #, python-format msgid "Can not handle authentication request for %d credentials" msgstr "" "Authentifizierungsanforderung für %d-Berechtigungsnachweis kann nicht " "verarbeitet werden" msgid "Can't resize a disk to 0 GB." msgstr "Die Größe einer Festplatte kann nicht auf 0 GB geändert werden." msgid "Can't resize down ephemeral disks." msgstr "Größe von inaktiven ephemeren Platten kann nicht geändert werden." msgid "Can't retrieve root device path from instance libvirt configuration" msgstr "" "Stammdatenträgerpfad kann nicht aus libvirt-Konfiguration der Instanz " "abgerufen werden" #, python-format msgid "" "Cannot '%(action)s' instance %(server_id)s while it is in %(attr)s %(state)s" msgstr "" "'%(action)s' für Instanz %(server_id)s nicht möglich, während sie sich in " "%(attr)s %(state)s befindet" #, python-format msgid "" "Cannot access \"%(instances_path)s\", make sure the path exists and that you " "have the proper permissions. In particular Nova-Compute must not be executed " "with the builtin SYSTEM account or other accounts unable to authenticate on " "a remote host." msgstr "" "Zugriff auf \"%(instances_path)s\" nicht möglich. Stellen Sie sicher, dass " "der Pfad vorhanden ist und dass Sie über die erforderlichen Berechtigungen " "verfügen. Insbesondere Nova-Compute darf nicht mit dem integrierten SYSTEM-" "Konto oder anderen Konten ausgeführt werden, für die keine Authentifizierung " "auf einem fernen Host möglich ist." #, python-format msgid "Cannot add host to aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Host kann nicht zu Aggregat %(aggregate_id)s hinzugefügt werden. Ursache: " "%(reason)s." msgid "Cannot attach one or more volumes to multiple instances" msgstr "" "Ein oder mehrere Datenträger können nicht an mehrere Instanzen angehängt " "werden" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "" "%(method)s kann nicht für ein verwaistes %(objtype)s-Objekt aufgerufen werden" #, python-format msgid "" "Cannot determine the parent storage pool for %s; cannot determine where to " "store images" msgstr "" "Übergeordneter Speicherpool für %s wurde nicht erkannt. Der Speicherort für " "Abbilder kann nicht ermittelt werden." msgid "Cannot find SR of content-type ISO" msgstr "SR mit 'content-type' = ISO kann nicht gefunden werden" msgid "Cannot find SR to read/write VDI." msgstr "SR zum Lesen/Schreiben von VDI kann nicht gefunden werden." 
msgid "Cannot find image for rebuild" msgstr "Image für Wiederherstellung kann nicht gefunden werden" #, python-format msgid "Cannot remove host %(host)s in aggregate %(id)s" msgstr "Host %(host)s in Aggregat %(id)s kann nicht entfernt werden" #, python-format msgid "Cannot remove host from aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Host kann nicht von Aggregat %(aggregate_id)s entfernt werden. Ursache: " "%(reason)s." msgid "Cannot rescue a volume-backed instance" msgstr "Eine Instanz vom Typ 'volume-backed' kann nicht gesichert werden" #, python-format msgid "" "Cannot resize the root disk to a smaller size. Current size: " "%(curr_root_gb)s GB. Requested size: %(new_root_gb)s GB." msgstr "" "Root-Festplatte kann nicht verkleinert werden. Aktuelle Größe: " "%(curr_root_gb)s GB. Angeforderte Größe: %(new_root_gb)s GB." msgid "" "Cannot set cpu thread pinning policy in a non dedicated cpu pinning policy" msgstr "" "In einer nicht dedizierten CPU-Pinning-Richtlinie kann keine CPU-Thread-" "Pinning-Richtlinie definiert werden." msgid "Cannot set realtime policy in a non dedicated cpu pinning policy" msgstr "" "In einer nicht dedizierten CPU-Pinning-Richtlinie kann keine " "Echtzeitrichtlinie definiert werden." #, python-format msgid "Cannot update aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Aggregat %(aggregate_id)s kann nicht aktualisiert werden. Ursache: " "%(reason)s." #, python-format msgid "" "Cannot update metadata of aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Metadaten von Aggregat %(aggregate_id)s können nicht aktualisiert werden. " "Ursache: %(reason)s." #, python-format msgid "Cell %(uuid)s has no mapping." msgstr "Zelle %(uuid)s hat keine Zuordnung." #, python-format msgid "" "Change would make usage less than 0 for the following resources: %(unders)s" msgstr "" "Durch die Änderung wäre die Nutzung kleiner als 0 für die folgenden " "Ressourcen: %(unders)s" #, python-format msgid "Cinder API version %(version)s is not available." msgstr "Die Cinder API-Version %(version)s ist nicht verfügbar." #, python-format msgid "Class %(class_name)s could not be found: %(exception)s" msgstr "Klasse %(class_name)s konnte nicht gefunden werden: %(exception)s" #, python-format msgid "" "Command Not supported. Please use Ironic command %(cmd)s to perform this " "action." msgstr "" "Befehl wird nicht unterstützt. Verwenden Sie den Ironic-Befehl %(cmd)s, um " "diese Aktion durchzuführen. " #, python-format msgid "Compute host %(host)s could not be found." msgstr "Rechenhost %(host)s konnte nicht gefunden werden." #, python-format msgid "Compute host %s not found." msgstr "Rechenhost %s nicht gefunden." #, python-format msgid "Compute service of %(host)s is still in use." msgstr "Compute Dienst von %(host)s wird noch verwendet." #, python-format msgid "Compute service of %(host)s is unavailable at this time." msgstr "Rechenservice von %(host)s ist derzeit nicht verfügbar." #, python-format msgid "Config drive format '%(format)s' is not supported." msgstr "" "Das Konfigurationslaufwerksformat '%(format)s' wird nicht unterstützt. " #, python-format msgid "" "Config requested an explicit CPU model, but the current libvirt hypervisor " "'%s' does not support selecting CPU models" msgstr "" "Config hat ein explizites CPU-Modell angefordert, aber der aktuelle libvirt-" "Hypervisor '%s' unterstützt nicht die Auswahl von CPU-Modellen" msgid "Configuration is Invalid." msgstr "Konfiguration ist ungültig." 
#, python-format msgid "" "Conflict updating instance %(instance_uuid)s, but we were unable to " "determine the cause" msgstr "" "Konflikt beim Aktualisieren der Instanz %(instance_uuid)s, aber wir waren " "nicht in der Lage den Grund herauszufinden" #, python-format msgid "" "Conflict updating instance %(instance_uuid)s. Expected: %(expected)s. " "Actual: %(actual)s" msgstr "" "Konflikt beim Aktualisieren der Instanz %(instance_uuid)s. Erwartet: " "%(expected)s. Aktuell: %(actual)s" #, python-format msgid "Connection to cinder host failed: %(reason)s" msgstr "Verbindung zu Cinder Host fehlgeschlagen: %(reason)s" #, python-format msgid "Connection to glance host %(server)s failed: %(reason)s" msgstr "Verbindung zu Glance-Host %(server)s fehlgeschlagen: %(reason)s" #, python-format msgid "Connection to libvirt lost: %s" msgstr "Verbindung zu libvirt verloren: %s" #, python-format msgid "Connection to the hypervisor is broken on host: %(host)s" msgstr "Verbindung zum Hypervisor ist unterbrochen auf Host: %(host)s" #, python-format msgid "" "Console log output could not be retrieved for instance %(instance_id)s. " "Reason: %(reason)s" msgstr "" "Die Ausgabe des Konsolenprotokolls konnte für die Instanz %(instance_id)s " "nicht abgerufen werden. Ursache: %(reason)s" msgid "Constraint not met." msgstr "Bedingung nicht erfüllt." #, python-format msgid "Converted to raw, but format is now %s" msgstr "In unformatierten Zustand konvertiert, Format ist nun jedoch %s" #, python-format msgid "Could not attach image to loopback: %s" msgstr "Image konnte nicht an Loopback angehängt werden: %s" #, python-format msgid "Could not fetch image %(image_id)s" msgstr "Abbild %(image_id)s konnte nicht abgerufen werden" #, python-format msgid "Could not find a handler for %(driver_type)s volume." msgstr "Konnte keinen Handler finden für %(driver_type)s Datenträger." #, python-format msgid "Could not find binary %(binary)s on host %(host)s." msgstr "" "Binärprogramm %(binary)s auf Host %(host)s konnte nicht gefunden werden." #, python-format msgid "Could not find config at %(path)s" msgstr "Konfiguration konnte unter %(path)s nicht gefunden werden" msgid "Could not find the datastore reference(s) which the VM uses." msgstr "" "Die von der VM verwendeten Datenspeicherverweise konnten nicht gefunden " "werden." #, python-format msgid "Could not load line %(line)s, got error %(error)s" msgstr "" "Zeile %(line)s konnte nicht geladen werden, Fehler %(error)s ist aufgetreten" #, python-format msgid "Could not load paste app '%(name)s' from %(path)s" msgstr "paste-App '%(name)s' konnte von %(path)s nicht geladen werden" #, python-format msgid "" "Could not mount vfat config drive. %(operation)s failed. Error: %(error)s" msgstr "" "VFAT-Konfigurationslaufwerk konnte nicht angehängt werden. %(operation)s " "fehlgeschlagen. Fehler: %(error)s" #, python-format msgid "Could not upload image %(image_id)s" msgstr "Abbild %(image_id)s konnte nicht hochgeladen werden" msgid "Creation of virtual interface with unique mac address failed" msgstr "" "Erstellung der virtuellen Schnittstelle mit eindeutiger MAC-Adresse " "fehlgeschlagen." #, python-format msgid "Datastore regex %s did not match any datastores" msgstr "Datenspeicher-regex %s stimmt mit keinem Datenspeicher überein" msgid "Datetime is in invalid format" msgstr "Datum/Uhrzeit hat ein ungültiges Format" msgid "Default PBM policy is required if PBM is enabled." msgstr "Standard-PBM-Richtlinie ist erforderlich, wenn PBM aktiviert ist." 
#, python-format msgid "Deleted %(records)d records from table '%(table_name)s'." msgstr "%(records)d Datensätze aus Tabelle '%(table_name)s' gelöscht." msgid "Destroy instance failed" msgstr "Zerstören der Instanz fehlgeschlagen" #, python-format msgid "Device '%(device)s' not found." msgstr "Das Gerät '%(device)s' wurde nicht gefunden." #, python-format msgid "Device detach failed for %(device)s: %(reason)s" msgstr "Abhängen des Geräts fehlgeschlagen für %(device)s: %(reason)s" #, python-format msgid "" "Device id %(id)s specified is not supported by hypervisor version %(version)s" msgstr "" "Angegebene Einheiten-ID %(id)s wird nicht unterstützt von Hypervisorversion " "%(version)s" msgid "Device name contains spaces." msgstr "Gerätename enthält Leerzeichen." msgid "Device name empty or too long." msgstr "Gerätename leer oder zu lang." #, python-format msgid "Device type mismatch for alias '%s'" msgstr "Gerätetypabweichung für Alias '%s'" #, python-format msgid "" "Different types in %(table)s.%(column)s and shadow table: %(c_type)s " "%(shadow_c_type)s" msgstr "" "Verschiedenen Typen in %(table)s.%(column)s und der Spiegeltabelle: " "%(c_type)s %(shadow_c_type)s" #, python-format msgid "Disk contains a filesystem we are unable to resize: %s" msgstr "" "Platte enthält ein Dateisystem, dessen Größe nicht geändert werden kann: %s" #, python-format msgid "Disk format %(disk_format)s is not acceptable" msgstr "Datenträgerformat %(disk_format)s ist nicht zulässig" #, python-format msgid "Disk info file is invalid: %(reason)s" msgstr "Datei mit Datenträgerinformationen ist ungültig: %(reason)s" msgid "Disk must have only one partition." msgstr "Festplatte darf nur eine Partition haben." #, python-format msgid "Disk with id: %s not found attached to instance." msgstr "Platte mit ID %s nicht an Instanz angehängt gefunden." #, python-format msgid "Driver Error: %s" msgstr "Treiberfehler: %s" #, python-format msgid "Error attempting to run %(method)s" msgstr "Fehler bei dem Versuch, %(method)s auszuführen." #, python-format msgid "" "Error destroying the instance on node %(node)s. Provision state still " "'%(state)s'." msgstr "" "Fehler beim Löschen der Instanz auf Knoten %(node)s. Bereitstellungsstatus " "ist noch '%(state)s'." 
#, python-format msgid "Error during following call to agent: %(method)s" msgstr "Fehler bei folgendem Aufruf an Agenten: %(method)s" #, python-format msgid "Error during unshelve instance %(instance_id)s: %(reason)s" msgstr "Fehler beim Aufnehmen von Instanz %(instance_id)s: %(reason)s" #, python-format msgid "" "Error from libvirt while getting domain info for %(instance_name)s: [Error " "Code %(error_code)s] %(ex)s" msgstr "" "Fehler von libvirt beim Abrufen der Domäneninformationen für " "%(instance_name)s: [Fehlercode %(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while looking up %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "Fehler von libvirt während Suche nach %(instance_name)s: [Fehlercode " "%(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while quiescing %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "Fehler von libvirt beim Versetzen in den Quiescemodus %(instance_name)s: " "[Fehlercode %(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while set password for username \"%(user)s\": [Error Code " "%(error_code)s] %(ex)s" msgstr "" "Fehler von libvirt beim Setzen des Passworts für Benutzername \"%(user)s\": " "[Fehlercode %(error_code)s] %(ex)s" #, python-format msgid "" "Error mounting %(device)s to %(dir)s in image %(image)s with libguestfs " "(%(e)s)" msgstr "" "Fehler beim Einhängen %(device)s zu %(dir)s in Abbild %(image)s mit " "libguestfs (%(e)s)" #, python-format msgid "Error mounting %(image)s with libguestfs (%(e)s)" msgstr "Fehler beim Einhängen %(image)s mit libguestfs (%(e)s)" #, python-format msgid "Error when creating resource monitor: %(monitor)s" msgstr "Fehler beim Erstellen von Ressourcenüberwachung: %(monitor)s" msgid "Error: Agent is disabled" msgstr "Fehler: Agent ist deaktiviert" #, python-format msgid "Event %(event)s not found for action id %(action_id)s" msgstr "Ereignis %(event)s nicht gefunden für Aktions-ID '%(action_id)s'" msgid "Event must be an instance of nova.virt.event.Event" msgstr "Ereignis muss eine Instanz von 'nova.virt.event.Event' sein" #, python-format msgid "" "Exceeded max scheduling attempts %(max_attempts)d for instance " "%(instance_uuid)s. Last exception: %(exc_reason)s" msgstr "" "Maximale Anzahl %(max_attempts)d an Planungsversuchen für die Instanz " "%(instance_uuid)s überschritten. Letzte Ausnahme: %(exc_reason)s" #, python-format msgid "" "Exceeded max scheduling retries %(max_retries)d for instance " "%(instance_uuid)s during live migration" msgstr "" "Maximale Anzahl %(max_retries)d Planungswiederholungen überschritten für " "Instanz %(instance_uuid)s während Livemigration" #, python-format msgid "Exceeded maximum number of retries. %(reason)s" msgstr "Maximale Anzahl der WIederholungen überschritten. %(reason)s" #, python-format msgid "Expected a uuid but received %(uuid)s." msgstr "UUID erwartet, aber %(uuid)s erhalten." #, python-format msgid "Extra column %(table)s.%(column)s in shadow table" msgstr "Zusatzspalte %(table)s.%(column)s in Spiegeltabelle" msgid "Extracting vmdk from OVA failed." msgstr "Extraktion von vmdk aus OVA fehlgeschlagen." 
#, python-format msgid "Failed to access port %(port_id)s: %(reason)s" msgstr "Zugriff auf Port %(port_id)s fehlgeschlagen: %(reason)s" #, python-format msgid "" "Failed to add deploy parameters on node %(node)s when provisioning the " "instance %(instance)s" msgstr "" "Fehler beim Hinzufügen der Implementierungsparameter auf Knoten %(node)s " "beim Bereitstellen der Instanz %(instance)s " #, python-format msgid "Failed to allocate the network(s) with error %s, not rescheduling." msgstr "" "Zuordnung von Netz(en) fehlgeschlagen mit Fehler %s; keine Neuterminierung." msgid "Failed to allocate the network(s), not rescheduling." msgstr "Netz(e) konnte(n) nicht zugeordnet werden; keine Neuterminierung." #, python-format msgid "Failed to attach network adapter device to %(instance_uuid)s" msgstr "" "Anhängen des Netzwerkgeräteadapters an %(instance_uuid)s fehlgeschlagen" #, python-format msgid "Failed to create vif %s" msgstr "Vif %s konnte nicht erstellt werden." #, python-format msgid "Failed to deploy instance: %(reason)s" msgstr "Instanz konnte nicht implementiert werden: %(reason)s" #, python-format msgid "Failed to detach PCI device %(dev)s: %(reason)s" msgstr "Abhängen des PCI Gerätes %(dev)s fehlgeschlagen: %(reason)s" #, python-format msgid "Failed to detach network adapter device from %(instance_uuid)s" msgstr "" "Abhängen des Netzwerkgeräteadapters von %(instance_uuid)s fehlgeschlagen" #, python-format msgid "Failed to encrypt text: %(reason)s" msgstr "Fehler beim Verschlüsseln des Textes: %(reason)s" #, python-format msgid "Failed to forget the SR for volume %s" msgstr "Fehler beim Vergessen des SR für Datenträger %s" #, python-format msgid "Failed to launch instances: %(reason)s" msgstr "Fehler beim Starten der Instanz: %(reason)s" #, python-format msgid "Failed to map partitions: %s" msgstr "Partitionen konnten nicht zugeordnet werden: %s" #, python-format msgid "Failed to mount filesystem: %s" msgstr "Fehler beim Anhängen von Dateisystem: %s" msgid "Failed to parse information about a pci device for passthrough" msgstr "" "Fehler beim Analysieren von Informationen zu einer PCI-Einheit für " "Passthrough" #, python-format msgid "Failed to power off instance: %(reason)s" msgstr "Instanz konnte nicht ausgeschaltet werden: %(reason)s" #, python-format msgid "Failed to power on instance: %(reason)s" msgstr "Instanz konnte nicht eingeschaltet werden: %(reason)s" #, python-format msgid "" "Failed to prepare PCI device %(id)s for instance %(instance_uuid)s: " "%(reason)s" msgstr "" "Vorbereitung des PCI Gerätes %(id)s für Instanz %(instance_uuid)s " "fehlgeschlagen: %(reason)s" #, python-format msgid "Failed to provision instance %(inst)s: %(reason)s" msgstr "Fehler beim Bereitstellen der Instanz %(inst)s: %(reason)s" #, python-format msgid "Failed to read or write disk info file: %(reason)s" msgstr "" "Datei mit Datenträgerinformationen konnte nicht gelesen oder geschrieben " "werden: %(reason)s" #, python-format msgid "Failed to reboot instance: %(reason)s" msgstr "Instanz konnte nicht neu gestartet werden: %(reason)s" #, python-format msgid "Failed to remove volume(s): (%(reason)s)" msgstr "Löschen Datenträger fehlgeschlagen: (%(reason)s)" #, python-format msgid "Failed to request Ironic to rebuild instance %(inst)s: %(reason)s" msgstr "" "Neuerstellung von Instanz %(inst)s konnte nicht bei Ironic angefordert " "werden: %(reason)s" #, python-format msgid "Failed to resume instance: %(reason)s" msgstr "Instanz konnte nicht wiederaufgenommen werden: %(reason)s" #, python-format msgid 
"Failed to run qemu-img info on %(path)s : %(error)s" msgstr "qemu-img info konnte nicht ausgeführt werden für %(path)s : %(error)s" #, python-format msgid "Failed to set admin password on %(instance)s because %(reason)s" msgstr "" "Administratorkennwort für %(instance)s konnte nicht festgelegt werden " "aufgrund von %(reason)s" msgid "Failed to spawn, rolling back" msgstr "Generierung nicht möglich, Rollback wird durchgeführt" #, python-format msgid "Failed to suspend instance: %(reason)s" msgstr "Instanz konnte nicht ausgesetzt werden: %(reason)s" #, python-format msgid "Failed to terminate instance: %(reason)s" msgstr "Instanz konnte nicht beendet werden: %(reason)s" #, python-format msgid "Failed to unplug vif %s" msgstr "Vif %s konnte nicht entfernt werden." #, python-format msgid "Failed to unplug virtual interface: %(reason)s" msgstr "Virtuelle Schnittstelle konnte nicht entfernt werden: %(reason)s" msgid "Failure prepping block device." msgstr "Fehler beim Vorbereiten des Block-Gerätes." #, python-format msgid "File %(file_path)s could not be found." msgstr "Datei %(file_path)s konnte nicht gefunden werden." #, python-format msgid "File path %s not valid" msgstr "Dateipfad %s nicht gültig" #, python-format msgid "Fixed IP %(ip)s is not a valid ip address for network %(network_id)s." msgstr "" "Feste IP %(ip)s ist keine gültige IP Adresse für Netzwerk %(network_id)s." #, python-format msgid "Fixed IP %s is already in use." msgstr "Feste IP %s wird bereits verwendet." #, python-format msgid "" "Fixed IP address %(address)s is already in use on instance %(instance_uuid)s." msgstr "" "Statische IP-Adresse %(address)s wird bereits verwendet in Instanz " "%(instance_uuid)s." #, python-format msgid "Fixed IP not found for address %(address)s." msgstr "Keine feste IP für Adresse %(address)s gefunden." #, python-format msgid "Flavor %(flavor_id)s could not be found." msgstr "Version %(flavor_id)s konnte nicht gefunden werden." #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(extra_specs_key)s." msgstr "" "Version %(flavor_id)s hat keine Sonderspezifikationen mit Schlüssel " "%(extra_specs_key)s." #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(key)s." msgstr "" "Version %(flavor_id)s hat keine Sonderspezifikationen mit Schlüssel %(key)s." #, python-format msgid "" "Flavor %(id)s extra spec cannot be updated or created after %(retries)d " "retries." msgstr "" "Zusätzliche Spezifikation für die Variante %(id)s kann nach %(retries)d " "Neuversuchen nicht aktualisiert oder erstellt werden." #, python-format msgid "" "Flavor access already exists for flavor %(flavor_id)s and project " "%(project_id)s combination." msgstr "" "Versionszugriff bereits vorhanden für Kombination aus Version %(flavor_id)s " "und Projekt %(project_id)s." #, python-format msgid "Flavor access not found for %(flavor_id)s / %(project_id)s combination." msgstr "" "Versionszugriff nicht gefunden für Kombination aus %(flavor_id)s und " "%(project_id)s." msgid "Flavor used by the instance could not be found." msgstr "Die von der Instanz verwendete Version konnte nicht gefunden werden. " #, python-format msgid "Flavor with ID %(flavor_id)s already exists." msgstr "Version mit ID '%(flavor_id)s' ist bereits vorhanden." #, python-format msgid "Flavor with name %(flavor_name)s could not be found." msgstr "Version mit dem Namen %(flavor_name)s konnte nicht gefunden werden." #, python-format msgid "Flavor with name %(name)s already exists." 
msgstr "Version mit dem Namen '%(name)s' ist bereits vorhanden." #, python-format msgid "" "Flavor's disk is smaller than the minimum size specified in image metadata. " "Flavor disk is %(flavor_size)i bytes, minimum size is %(image_min_disk)i " "bytes." msgstr "" "Variante des Datenträgers ist kleiner als die kleinste Größe der " "spezifizierten Abbild-Metadaten. Variante des Datenträgers ist " "%(flavor_size)i Bytes, kleinste Größe ist %(image_min_disk)i Bytes." #, python-format msgid "" "Flavor's disk is too small for requested image. Flavor disk is " "%(flavor_size)i bytes, image is %(image_size)i bytes." msgstr "" "Variante des Datenträgers ist zu klein für das angefragte Abbild. Variante " "des Datenträgers ist %(flavor_size)i Bytes, Abbild ist %(image_size)i Bytes." msgid "Flavor's memory is too small for requested image." msgstr "Speicher der Version ist zu klein für angefordertes Image." #, python-format msgid "Floating IP %(address)s association has failed." msgstr "Verknüpfung der Floating IP %(address)s ist fehlgeschlagen." #, python-format msgid "Floating IP %(address)s is associated." msgstr "Floating IP %(address)s ist zugeordnet." #, python-format msgid "Floating IP %(address)s is not associated with instance %(id)s." msgstr "Floating IP %(address)s ist Instanz %(id)s nicht zugeordnet." #, python-format msgid "Floating IP not found for ID %(id)s." msgstr "Floating IP für ID %(id)s nicht gefunden." #, python-format msgid "Floating IP not found for ID %s" msgstr "Floating IP für ID %s nicht gefunden." #, python-format msgid "Floating IP not found for address %(address)s." msgstr "Floating IP für Adresse %(address)s nicht gefunden." msgid "Floating IP pool not found." msgstr "Pool mit Floating IPs nicht gefunden." msgid "Forbidden" msgstr "Verboten" msgid "" "Forbidden to exceed flavor value of number of serial ports passed in image " "meta." msgstr "" "Es nicht zulässig, den Versionswert der Anzahl der seriellen Ports, die in " "den Imagemetadaten übergeben werden, zu überschreiten. " msgid "Found no disk to snapshot." msgstr "Es wurde keine Platte für eine Momentaufnahme gefunden." #, python-format msgid "Found no network for bridge %s" msgstr "Kein Netz für Brücke %s gefunden" #, python-format msgid "Found non-unique network for bridge %s" msgstr "Nicht eindeutiges Netz für Brücke %s gefunden" #, python-format msgid "Found non-unique network for name_label %s" msgstr "Nicht eindeutiges Netz für name_label %s gefunden" msgid "Guest does not have a console available" msgstr "Gast hat keine Konsole verfügbar" msgid "Guest does not have a console available." msgstr "Für Gast ist keine Konsole verfügbar." #, python-format msgid "Host %(host)s could not be found." msgstr "Host %(host)s konnte nicht gefunden werden." #, python-format msgid "Host %(host)s is already mapped to cell %(uuid)s" msgstr "Der Host %(host)s ist bereits der Zelle %(uuid)s zugeorndet." 
#, python-format msgid "Host '%(name)s' is not mapped to any cell" msgstr "Host '%(name)s' wird nicht abgebildet in irgendeiner Zelle" msgid "Host PowerOn is not supported by the Hyper-V driver" msgstr "Host-PowerOn wird vom Hyper-V-Treiber nicht unterstützt" msgid "Host aggregate is not empty" msgstr "Hostaggregat ist nicht leer" msgid "Host does not support guests with NUMA topology set" msgstr "Host unterstützt keine Gäste mit NUMA-Topologiegruppe" msgid "Host does not support guests with custom memory page sizes" msgstr "" "Host unterstützt keine Gäste mit benutzerdefinierter Speicherseitengröße" msgid "Host startup on XenServer is not supported." msgstr "Hoststart auf XenServer wird nicht unterstützt." msgid "Hypervisor driver does not support post_live_migration_at_source method" msgstr "" "Hypervisortreiber unterstützt die Methode post_live_migration_at_source nicht" #, python-format msgid "Hypervisor virt type '%s' is not valid" msgstr "Der Hypervisor-Virtualisierungstype '%s' ist nicht gültig" #, python-format msgid "Hypervisor virtualization type '%(hv_type)s' is not recognised" msgstr "Der Hypervisor-Virtualisierungstyp '%(hv_type)s' wird nicht erkannt" #, python-format msgid "Hypervisor with ID '%s' could not be found." msgstr "Hypervisor mit ID '%s' konnte nicht gefunden werden. " #, python-format msgid "IP allocation over quota in pool %s." msgstr "IP-Zuordnung über Quote in Pool %s." msgid "IP allocation over quota." msgstr "IP-Zuordnung über Quote." #, python-format msgid "Image %(image_id)s could not be found." msgstr "Abbild %(image_id)s konnte nicht gefunden werden." #, python-format msgid "Image %(image_id)s is not active." msgstr "Abbild %(image_id)s ist nicht aktiv." #, python-format msgid "Image %(image_id)s is unacceptable: %(reason)s" msgstr "Image %(image_id)s ist nicht zulässig: %(reason)s" msgid "Image disk size greater than requested disk size" msgstr "" "Abbildfestplattengröße ist größer als die angeforderte Festplattengröße" msgid "Image is not raw format" msgstr "Abbild ist kein Rohformat" msgid "Image metadata limit exceeded" msgstr "Grenzwert für Imagemetadaten überschritten" #, python-format msgid "Image model '%(image)s' is not supported" msgstr "Abbild-Modell '%(image)s' wird nicht unterstützt" msgid "Image not found." msgstr "Abbild nicht gefunden." #, python-format msgid "" "Image property '%(name)s' is not permitted to override NUMA configuration " "set against the flavor" msgstr "" "Imageeigenschaft '%(name)s' darf die NUMA-Konfiguration für die Version " "nicht überschreiben" msgid "" "Image property 'hw_cpu_policy' is not permitted to override CPU pinning " "policy set against the flavor" msgstr "" "Image-Eigenschaft 'hw_cpu_policy' darf die CPU-Pinn-Richtlinie, die für die " "Version festgelegt wurde, nicht überschreiben" msgid "" "Image property 'hw_cpu_thread_policy' is not permitted to override CPU " "thread pinning policy set against the flavor" msgstr "" "Die Abbildeigenschaft 'hw_cpu_thread_policy' darf die CPU-Thread-Pinning-" "Richtlinie, die für die Variante festgelegt wurde, nicht überschreiben." msgid "Image that the instance was started with could not be found." msgstr "" "Image, mit dem die Instanz gestartet wurde, konnte nicht gefunden werden." 
#, python-format msgid "Image's config drive option '%(config_drive)s' is invalid" msgstr "" "Die Konfigurationslaufwerkoption '%(config_drive)s' des Image ist ungültig" msgid "" "Images with destination_type 'volume' need to have a non-zero size specified" msgstr "" "Für Images mit destination_type 'volume' muss eine Größe ungleich null " "angegeben werden" msgid "In ERROR state" msgstr "Im FEHLER-Zustand" #, python-format msgid "In states %(vm_state)s/%(task_state)s, not RESIZED/None" msgstr "In den Status %(vm_state)s/%(task_state)s, nicht RESIZED/None" #, python-format msgid "In-progress live migration %(id)s is not found for server %(uuid)s." msgstr "" "Die in Bearbeitung befindliche Livemigration %(id)s wurde für den Server " "%(uuid)s nicht gefunden." msgid "" "Incompatible settings: ephemeral storage encryption is supported only for " "LVM images." msgstr "" "Inkompatible Einstellungen: Verschlüsselung für ephemeren Speicher wird nur " "unterstützt für LVM-Images. " #, python-format msgid "Info cache for instance %(instance_uuid)s could not be found." msgstr "Infocache für Instanz %(instance_uuid)s konnte nicht gefunden werden." #, python-format msgid "" "Instance %(instance)s and volume %(vol)s are not in the same " "availability_zone. Instance is in %(ins_zone)s. Volume is in %(vol_zone)s" msgstr "" "Instanz %(instance)s und Datenträger %(vol)s befinden sich nicht in " "derselben availability_zone. Die Instanz befindet sich in %(ins_zone)s. Der " "Datenträger befindet sich in %(vol_zone)s" #, python-format msgid "Instance %(instance)s does not have a port with id %(port)s" msgstr "Instanz %(instance)s weist keinen Port mit ID %(port)s auf" #, python-format msgid "Instance %(instance_id)s cannot be rescued: %(reason)s" msgstr "Instanz %(instance_id)s kann nicht gerettet werden: %(reason)s" #, python-format msgid "Instance %(instance_id)s could not be found." msgstr "Instanz %(instance_id)s konnte nicht gefunden werden." #, python-format msgid "Instance %(instance_id)s has no tag '%(tag)s'" msgstr "Instanz %(instance_id)s weist kein Tag '%(tag)s' auf" #, python-format msgid "Instance %(instance_id)s is not in rescue mode" msgstr "Instanz %(instance_id)s ist nicht im Rettungsmodus" #, python-format msgid "Instance %(instance_id)s is not ready" msgstr "Instanz %(instance_id)s ist nicht bereit" #, python-format msgid "Instance %(instance_id)s is not running." msgstr "Instanz %(instance_id)s läuft nicht." #, python-format msgid "Instance %(instance_id)s is unacceptable: %(reason)s" msgstr "Instanz %(instance_id)s ist nicht zulässig: %(reason)s" #, python-format msgid "Instance %(instance_uuid)s does not specify a NUMA topology" msgstr "Instanz %(instance_uuid)s gibt keine NUMA-Topologie an" #, python-format msgid "Instance %(instance_uuid)s does not specify a migration context." msgstr "Instanz %(instance_uuid)s hat keinen spezifischen Migrationskontext." #, python-format msgid "" "Instance %(instance_uuid)s in %(attr)s %(state)s. Cannot %(method)s while " "the instance is in this state." msgstr "" "Instanz %(instance_uuid)s in %(attr)s %(state)s. %(method)s nicht möglich, " "während sich die Instanz in diesem Zustand befindet." #, python-format msgid "Instance %(instance_uuid)s is locked" msgstr "Instanz %(instance_uuid)s ist gesperrt" #, python-format msgid "" "Instance %(instance_uuid)s requires config drive, but it does not exist." msgstr "" "Die Instanz %(instance_uuid)s erfordert ein Konfigurationslaufwerk. Es ist " "jedoch nicht vorhanden." 
#, python-format msgid "Instance %(name)s already exists." msgstr "Instanz %(name)s bereits vorhanden." #, python-format msgid "Instance %(server_id)s is in an invalid state for '%(action)s'" msgstr "" "Instanz %(server_id)s befindet sich in einem ungültigen Status für " "'%(action)s'" #, python-format msgid "Instance %(uuid)s has no mapping to a cell." msgstr "Instanz %(uuid)s hat keine Zuordung zu einer Zelle." #, python-format msgid "Instance %s not found" msgstr "Instanz %s nicht gefunden" #, python-format msgid "Instance %s provisioning was aborted" msgstr "Instanz %s Provisionierung wurde abgebrochen" msgid "Instance could not be found" msgstr "Instanz konnte nicht gefunden werden" msgid "Instance disk to be encrypted but no context provided" msgstr "" "Verschlüsselung der Platte der Instanz steht an, aber es ist kein Kontext " "angegeben" msgid "Instance event failed" msgstr "Instanzereignis fehlgeschlagen" #, python-format msgid "Instance group %(group_uuid)s already exists." msgstr "Instanzgruppe %(group_uuid)s bereits vorhanden." #, python-format msgid "Instance group %(group_uuid)s could not be found." msgstr "Instanzgruppe %(group_uuid)s konnte nicht gefunden werden." msgid "Instance has no source host" msgstr "Instanz weist keinen Quellenhost auf" msgid "Instance has not been resized." msgstr "Instanzgröße wurde nicht angepasst." #, python-format msgid "Instance hostname %(hostname)s is not a valid DNS name" msgstr "Instanzhostname %(hostname)s ist kein gültiger DNS-Name." #, python-format msgid "Instance is already in Rescue Mode: %s" msgstr "Instanz ist bereits im Rettungsmodus: %s" msgid "Instance is not a member of specified network" msgstr "Instanz ist nicht Mitglied des angegebenen Netzes" #, python-format msgid "Instance rollback performed due to: %s" msgstr "Instanz-Rollback ausgeführt. Ursache: %s" #, python-format msgid "" "Insufficient Space on Volume Group %(vg)s. Only %(free_space)db available, " "but %(size)d bytes required by volume %(lv)s." msgstr "" "Nicht ausreichend Speicherplatz in Datenträgergruppe %(vg)s. Nur " "%(free_space)db verfügbar, aber %(size)d Bytes für Datenträger %(lv)s " "erforderlich." #, python-format msgid "Insufficient compute resources: %(reason)s." msgstr "Nicht genug Compute Ressourcen: %(reason)s" #, python-format msgid "Insufficient free memory on compute node to start %(uuid)s." msgstr "" "Nicht genügend freier Speicherplatz auf Rechenknoten zum Starten von " "%(uuid)s." #, python-format msgid "Interface %(interface)s not found." msgstr "Schnittstelle %(interface)s nicht gefunden." #, python-format msgid "Invalid Base 64 data for file %(path)s" msgstr "Ungültige Basis-64-Daten für Datei %(path)s" msgid "Invalid Connection Info" msgstr "Ungültige Verbindungsinformation" #, python-format msgid "Invalid ID received %(id)s." msgstr "Ungültige Kennung erhalten %(id)s." #, python-format msgid "Invalid IP format %s" msgstr "Ungültiges IP-Format %s" #, python-format msgid "Invalid IP protocol %(protocol)s." msgstr "Ungültiges IP Protokoll %(protocol)s." 
msgid "" "Invalid PCI Whitelist: The PCI whitelist can specify devname or address, but " "not both" msgstr "" "Ungültige PCI-Whitelist: Die PCI-Whitelist kann den Einheitennamen oder die " "Adresse angeben, aber nicht beides" #, python-format msgid "Invalid PCI alias definition: %(reason)s" msgstr "Ungültige PCI-Aliasdefinition: %(reason)s" #, python-format msgid "Invalid Regular Expression %s" msgstr "Ungültiger Regulärer Ausdruck %s" #, python-format msgid "Invalid characters in hostname '%(hostname)s'" msgstr "Ungültige Zeichen im Hostnamen '%(hostname)s'" msgid "Invalid config_drive provided." msgstr "Ungültige Angabe für 'config_drive'." #, python-format msgid "Invalid config_drive_format \"%s\"" msgstr "Ungültiges config_drive_format \"%s\"" #, python-format msgid "Invalid console type %(console_type)s" msgstr "Ungültiger Konsolentyp %(console_type)s" #, python-format msgid "Invalid content type %(content_type)s." msgstr "Ungültiger Inhaltstyp %(content_type)s." #, python-format msgid "Invalid datetime string: %(reason)s" msgstr "Falsche Datumszeit-Zeichenkette: %(reason)s" msgid "Invalid device UUID." msgstr "Ungültige Gerät-UUID." #, python-format msgid "Invalid entry: '%s'" msgstr "Ungültiger Eintrag: '%s'" #, python-format msgid "Invalid entry: '%s'; Expecting dict" msgstr "Ungültiger Eintrag: '%s'; dict erwartet" #, python-format msgid "Invalid entry: '%s'; Expecting list or dict" msgstr "Ungültiger Eintrag: '%s'; list oder dict erwartet" #, python-format msgid "Invalid exclusion expression %r" msgstr "Ungültiger Ausschlussausdruck: %r" #, python-format msgid "Invalid image format '%(format)s'" msgstr "Ungültiges Abbild-Format '%(format)s'" #, python-format msgid "Invalid image href %(image_href)s." msgstr "Ungültiger Abbild-Hyperlink %(image_href)s." #, python-format msgid "Invalid image metadata. Error: %s" msgstr "Ungültige Abbildmetadaten. Fehler: %s" #, python-format msgid "Invalid inclusion expression %r" msgstr "Ungültiger Einschlussausdruck: %r" #, python-format msgid "" "Invalid input for field/attribute %(path)s. Value: %(value)s. %(message)s" msgstr "" "Ungültige Eingabe für Feld/Attribut %(path)s. Wert: %(value)s. %(message)s" #, python-format msgid "Invalid input received: %(reason)s" msgstr "Ungültige Eingabe erhalten: %(reason)s" msgid "Invalid instance image." msgstr "Ungültiges Instanzabbild." #, python-format msgid "Invalid is_public filter [%s]" msgstr "'is_public-Filter' [%s] ungültig" msgid "Invalid key_name provided." msgstr "Ungültige Angabe für 'key_name'." #, python-format msgid "Invalid memory page size '%(pagesize)s'" msgstr "Ungültige Speicherseitengröße '%(pagesize)s'" msgid "Invalid metadata key" msgstr "Ungültiger Metadatenschlüssel" #, python-format msgid "Invalid metadata size: %(reason)s" msgstr "Ungültige Metadatengröße: %(reason)s" #, python-format msgid "Invalid metadata: %(reason)s" msgstr "Ungültige Metadaten: %(reason)s" #, python-format msgid "Invalid minDisk filter [%s]" msgstr "'minDisk-Filter' [%s] ungültig" #, python-format msgid "Invalid minRam filter [%s]" msgstr "Ungültiger minRam-Filter [%s]" #, python-format msgid "Invalid port range %(from_port)s:%(to_port)s. %(msg)s" msgstr "Ungültiger Portbereich %(from_port)s:%(to_port)s. %(msg)s" msgid "Invalid proxy request signature." msgstr "Ungültige Proxy-Anforderungssignatur." #, python-format msgid "Invalid range expression %r" msgstr "Ungültiger Bereichsausdruck %r" msgid "Invalid service catalog json." msgstr "Ungültige Servicekatalog-JSON." msgid "Invalid start time. 
The start time cannot occur after the end time." msgstr "Ungültige Startzeit. Die Startzeit darf nicht nach der Endzeit liegen." msgid "Invalid state of instance files on shared storage" msgstr "Ungültiger Status der Instanzdateien im gemeinsam genutzten Speicher" msgid "Invalid status value" msgstr "Ungültiger Status Wert" #, python-format msgid "Invalid timestamp for date %s" msgstr "Ungültiger Zeitstempel für Datum %s" #, python-format msgid "Invalid usage_type: %s" msgstr "Ungültiger usage_type: %s" #, python-format msgid "Invalid value for Config Drive option: %(option)s" msgstr "Ungültiger Wert für Konfigurationslaufwerkoption: %(option)s" #, python-format msgid "Invalid virtual interface address %s in request" msgstr "Ungültige virtuelle Schnittstellenadresse %s in der Anforderung" #, python-format msgid "Invalid volume access mode: %(access_mode)s" msgstr "Ungültiger Datenträgerzugriffsmodus: %(access_mode)s" #, python-format msgid "Invalid volume: %(reason)s" msgstr "Ungültiger Datenträger: %(reason)s" msgid "Invalid volume_size." msgstr "Ungültige volume_size." #, python-format msgid "Ironic node uuid not supplied to driver for instance %s." msgstr "UUID des Ironic-Knotens nicht angegeben für Treiber für Instanz %s." #, python-format msgid "" "It is not allowed to create an interface on external network %(network_uuid)s" msgstr "" "Das Erstellen einer Schnittstelle auf externem Netz %(network_uuid)s ist " "nicht zulässig" #, python-format msgid "" "Kernel/Ramdisk image is too large: %(vdi_size)d bytes, max %(max_size)d bytes" msgstr "" "Kernel-/RAM-Plattenimage ist zu groß: %(vdi_size)d Byte, maximal " "%(max_size)d Byte" msgid "" "Key Names can only contain alphanumeric characters, periods, dashes, " "underscores, colons and spaces." msgstr "" "Schlüsselnamen dürfen nur alphanumerische Zeichen, Punkte, Gedankenstriche, " "Unterstriche, Doppelpunkte und Leerzeichen enthalten." #, python-format msgid "Key manager error: %(reason)s" msgstr "Schlüsselmanagerfehler: %(reason)s" #, python-format msgid "Key pair '%(key_name)s' already exists." msgstr "Schlüsselpaar '%(key_name)s' bereits vorhanden." #, python-format msgid "Keypair %(name)s not found for user %(user_id)s" msgstr "Schlüsselpaar %(name)s für Benutzer %(user_id)s nicht gefunden" #, python-format msgid "Keypair data is invalid: %(reason)s" msgstr "Schlüsselpaardaten ungültig: %(reason)s" msgid "Keypair name contains unsafe characters" msgstr "Name von Schlüsselpaar enthält unsichere Zeichen" msgid "Keypair name must be string and between 1 and 255 characters long" msgstr "" "Name des Schlüsselpares muss eine Zeichenkette und zwischen 1 und 255 " "Zeichen lang sein." msgid "Limits only supported from vCenter 6.0 and above" msgstr "Grenzwerte werden nur von vCenter ab Version 6.0 unterstützt." #, python-format msgid "Live migration %(id)s for server %(uuid)s is not in progress." msgstr "" "Die Livemigration %(id)s für den Server %(uuid)s ist nicht in Bearbeitung." #, python-format msgid "Malformed message body: %(reason)s" msgstr "Fehlerhafter Nachrichtentext: %(reason)s" #, python-format msgid "" "Malformed request URL: URL's project_id '%(project_id)s' doesn't match " "Context's project_id '%(context_project_id)s'" msgstr "" "Fehlerhafte Anforderungs-URL: project_id '%(project_id)s' der URL stimmt " "nicht mit der project_id '%(context_project_id)s' des Kontextes überein" msgid "Malformed request body" msgstr "Fehlerhafter Anforderungshauptteil" msgid "Mapping image to local is not supported." 
msgstr "Zuordnung Abbild zu Lokal ist nicht unterstuetzt." #, python-format msgid "Marker %(marker)s could not be found." msgstr "Marker %(marker)s konnte nicht gefunden werden. " msgid "Maximum number of floating IPs exceeded" msgstr "Maximale Anzahl an Floating IPs überschritten" msgid "Maximum number of key pairs exceeded" msgstr "Maximale Anzahl an Schlüsselpaaren überschritten" #, python-format msgid "Maximum number of metadata items exceeds %(allowed)d" msgstr "Maximale Anzahl an Metadatenelementen überschreitet %(allowed)d" msgid "Maximum number of ports exceeded" msgstr "Maximale Anzahl an Ports überschritten" msgid "Maximum number of security groups or rules exceeded" msgstr "Maximale Anzahl an Sicherheitsgruppen oder -regeln überschritten" msgid "Metadata item was not found" msgstr "Metadatenelement wurde nicht gefunden" msgid "Metadata property key greater than 255 characters" msgstr "Metadateneigenschaftenschlüssel größer als 255 Zeichen" msgid "Metadata property value greater than 255 characters" msgstr "Metadateneigenschaftenwert größer als 255 Zeichen" msgid "Metadata type should be dict." msgstr "Metadatentyp sollte 'dict' sein." #, python-format msgid "" "Metric %(name)s could not be found on the compute host node %(host)s." "%(node)s." msgstr "" "Messwert %(name)s konnte auf dem Rechenhostknoten %(host)s.%(node)s nicht " "gefunden werden." msgid "Migrate Receive failed" msgstr "Empfangen der Migration fehlgeschlagen" msgid "Migrate Send failed" msgstr "Senden der Migration fehlgeschlagen" #, python-format msgid "Migration %(id)s for server %(uuid)s is not live-migration." msgstr "Die Migration %(id)s für den Server %(uuid)s ist keine Livermigration." #, python-format msgid "Migration %(migration_id)s could not be found." msgstr "Migration %(migration_id)s konnte nicht gefunden werden." #, python-format msgid "Migration %(migration_id)s not found for instance %(instance_id)s" msgstr "Migration %(migration_id)s für Instanz %(instance_id)s nicht gefunden." #, python-format msgid "" "Migration %(migration_id)s state of instance %(instance_uuid)s is %(state)s. " "Cannot %(method)s while the migration is in this state." msgstr "" "Status der Migration %(migration_id)s von Instanz %(instance_uuid)s ist " "%(state)s. %(method)s nicht möglich, während sich die Miration in diesem " "Status befindet. " #, python-format msgid "Migration error: %(reason)s" msgstr "Migrationsfehler: %(reason)s" msgid "Migration is not supported for LVM backed instances" msgstr "Migration wird für LVM-gesicherte Instanzen nicht unterstützt" #, python-format msgid "" "Migration not found for instance %(instance_id)s with status %(status)s." msgstr "" "Migration für Instanz %(instance_id)s mit Status %(status)s nicht gefunden." #, python-format msgid "Migration pre-check error: %(reason)s" msgstr "Fehler bei der Migrationsvorabprüfung: %(reason)s" #, python-format msgid "Migration select destinations error: %(reason)s" msgstr "Fehler für ausgewählte Migrationsziele: %(reason)s" #, python-format msgid "Missing arguments: %s" msgstr "Fehlende Argumente: %s" #, python-format msgid "Missing column %(table)s.%(column)s in shadow table" msgstr "Fehlende Spalte %(table)s.%(column)s in Spiegeltabelle" msgid "Missing device UUID." msgstr "Fehlende Gerät-UUID." 
msgid "Missing disabled reason field" msgstr "Feld für Inaktivierungsgrund fehlt" msgid "Missing forced_down field" msgstr "Feld forced_down wird vermisst" msgid "Missing imageRef attribute" msgstr "Attribut 'imageRef' fehlt" #, python-format msgid "Missing keys: %s" msgstr "Fehlende Schlüssel: %s" #, python-format msgid "Missing parameter %s" msgstr "Fehlender Parameter %s" msgid "Missing parameter dict" msgstr "Parameter 'dict' fehlt" #, python-format msgid "" "More than one instance is associated with fixed IP address '%(address)s'." msgstr "Der festen IP-Adresse '%(address)s' sind mehrere Instanzen zugeordnet." msgid "" "More than one possible network found. Specify network ID(s) to select which " "one(s) to connect to." msgstr "" "Es wurde mehr als ein mögliches Netz gefunden. Geben Sie die Netz-ID(s) an, " "um auszuwählen, zu welchem Netz die Verbindung hergestellt werden soll." msgid "More than one swap drive requested." msgstr "Mehr als ein Swap-Laufwerk erforderlich." #, python-format msgid "Multi-boot operating system found in %s" msgstr "MultiBoot-Betriebssystem gefunden in %s" msgid "Multiple X-Instance-ID headers found within request." msgstr "Mehrere X-Instance-ID-Header in Anforderung gefunden. " msgid "Multiple X-Tenant-ID headers found within request." msgstr "Mehrere X-Tenant-ID-Header in Anforderung gefunden." #, python-format msgid "Multiple floating IP pools matches found for name '%s'" msgstr "" "Mehrere Übereinstimmungen in Pools dynamischer IP-Adressen gefunden für Name " "'%s'" #, python-format msgid "Multiple floating IPs are found for address %(address)s." msgstr "Mehrere Floating IPs für Adresse %(address)s gefunden." msgid "" "Multiple hosts may be managed by the VMWare vCenter driver; therefore we do " "not return uptime for just one host." msgstr "" "Mehrere Hosts können vom VMWare-vCenter-Treiber verwaltet werden; daher " "wurde nicht die Betriebszeit für nur einen Host zurückgegeben." msgid "Multiple possible networks found, use a Network ID to be more specific." msgstr "" "Mehrere mögliche Netze gefunden; verwenden Sie eine Netz-ID, die genauer ist." #, python-format msgid "" "Multiple security groups found matching '%s'. Use an ID to be more specific." msgstr "" "Mehrere mit '%s' übereinstimmende Sicherheitsgruppen gefunden. Verwenden Sie " "zur genaueren Bestimmung eine ID." msgid "Must input network_id when request IP address" msgstr "network_id muss beim Anfordern einer IP-Adresse eingegeben werden" msgid "Must not input both network_id and port_id" msgstr "Es dürfen nicht sowohl network_id als auch port_id eingegeben werden" msgid "" "Must specify connection_url, connection_username (optionally), and " "connection_password to use compute_driver=xenapi.XenAPIDriver" msgstr "" "Angabe von connection_url, connection_username (optional) und " "connection_password erforderlich zum Verwenden von compute_driver=xenapi." "XenAPIDriver" msgid "" "Must specify host_ip, host_username and host_password to use vmwareapi." 
"VMwareVCDriver" msgstr "" "Angabe von host_ip, host_username und host_password erforderlich für die " "Verwendung von vmwareapi.VMwareVCDriver" msgid "Must supply a positive value for max-count" msgstr "Für max-count muss ein positiver Wert angegeben werden" msgid "Must supply a positive value for max_number" msgstr "Für max_number muss ein positiver Wert angegeben werden" msgid "Must supply a positive value for max_rows" msgstr "Für max_rows muss ein positiver Wert angegeben werden" #, python-format msgid "Network %(network_id)s could not be found." msgstr "Netzwerk %(network_id)s konnte nicht gefunden werden." #, python-format msgid "" "Network %(network_uuid)s requires a subnet in order to boot instances on." msgstr "" "Netz %(network_uuid)s benötigt ein Teilnetz, damit Instanzen darauf gebootet " "werden können." #, python-format msgid "Network could not be found for bridge %(bridge)s" msgstr "Netz konnte für Brücke %(bridge)s nicht gefunden werden" #, python-format msgid "Network could not be found for instance %(instance_id)s." msgstr "Netzwerk konnten nicht gefunden werden für Instanz %(instance_id)s." msgid "Network not found" msgstr "Netzwerk nicht gefunden" msgid "" "Network requires port_security_enabled and subnet associated in order to " "apply security groups." msgstr "" "Netz erfordert 'port_security_enabled' und ein zugeordnetes Teilnetz, damit " "Sicherheitsgruppen zugeordnet werden können." msgid "New volume must be detached in order to swap." msgstr "Neuer Datenträger muss für den Austausch abgehängt werden." msgid "New volume must be the same size or larger." msgstr "Neuer Datenträger muss dieselbe Größe aufweisen oder größer sein." #, python-format msgid "No Block Device Mapping with id %(id)s." msgstr "Keine Block-Geräte-Zuordnung mit ID %(id)s." msgid "No Unique Match Found." msgstr "Keine eindeutige Übereinstimmung gefunden." #, python-format msgid "No agent-build associated with id %(id)s." msgstr "Kein Agenten-Build ist ID '%(id)s' zugeordnet." msgid "No compute host specified" msgstr "Kein Rechenhost angegeben " #, python-format msgid "No configuration information found for operating system %(os_name)s" msgstr "" "Es wurden keine Konfigurationsinformationen für das Betriebssystem " "%(os_name)s gefunden." #, python-format msgid "No device with MAC address %s exists on the VM" msgstr "Kein Gerät mit der MAC-Adresse %s ist auf der VM vorhanden" #, python-format msgid "No device with interface-id %s exists on VM" msgstr "Es ist keine Einheit mit interface-id %s auf VM vorhanden" #, python-format msgid "No disk at %(location)s" msgstr "Kein Datenträger unter %(location)s" #, python-format msgid "No fixed IP addresses available for network: %(net)s" msgstr "Keine statischen IP-Adressen verfügbar für Netz: %(net)s" msgid "No fixed IPs associated to instance" msgstr "Der Instanz sind keine festen IP-Adressen zugeordnet." msgid "No free nbd devices" msgstr "Keine freien nbd-Geräte" msgid "No host available on cluster" msgstr "Kein Host verfügbar auf Cluster" #, python-format msgid "No host with name %s found" msgstr "Kein Host mit dem Namen %s gefunden" msgid "No hosts found to map to cell, exiting." msgstr "Keine Host für die Zuordnung zur Zelle gefunden. Wird beendet." #, python-format msgid "No hypervisor matching '%s' could be found." msgstr "Kein mit '%s' übereinstimmender Hypervisor konnte gefunden werden. " msgid "No image locations are accessible" msgstr "Es sind keine Imagepositionen verfügbar." 
#, python-format msgid "" "No live migration URI configured and no default available for \"%(virt_type)s" "\" hypervisor virtualization type." msgstr "" "Keine URI für Livemigration konfiguriert und kein Standardwert für den " "Hypervisorvirtualisierungstyp \"%(virt_type)s\" verfügbar." msgid "No more floating IPs available." msgstr "Keine Floating IPs mehr verfügbar." #, python-format msgid "No more floating IPs in pool %s." msgstr "Keine Floating IPs mehr in Pool %s vorhanden." #, python-format msgid "No mount points found in %(root)s of %(image)s" msgstr "Kein Einhängepunkt gefunden in %(root)s von %(image)s" #, python-format msgid "No operating system found in %s" msgstr "Kein Betriebssystem gefunden in %s" #, python-format msgid "No primary VDI found for %s" msgstr "Kein primäres VDI für %s gefunden" msgid "No root disk defined." msgstr "Keine Root-Festplatte bestimmt." #, python-format msgid "" "No specific network was requested and none are available for project " "'%(project_id)s'." msgstr "" "Es wurde kein bestimmtes Netzwerk angefordert und für das Projekt " "'%(project_id)s' ist kein Netzwerk verfügbar." msgid "No suitable network for migrate" msgstr "Kein geeignetes Netzwerk zum Migrieren" msgid "No valid host found for cold migrate" msgstr "Keinen gültigen Host gefunden für Migration ohne Daten" msgid "No valid host found for resize" msgstr "Kein gültiger Host für die Größenänderung gefunden" #, python-format msgid "No valid host was found. %(reason)s" msgstr "Es wurde kein gültiger Host gefunden. %(reason)s" #, python-format msgid "No volume Block Device Mapping at path: %(path)s" msgstr "Keine Datenträger Block-Geräte-Zuordnung im Pfad: %(path)s" #, python-format msgid "No volume Block Device Mapping with id %(volume_id)s." msgstr "Keine Datenträger Block-Geräte-Zuordnung mit ID %(volume_id)s." #, python-format msgid "Node %s could not be found." msgstr "Knoten %s konnte nicht gefunden werden." #, python-format msgid "Not able to acquire a free port for %(host)s" msgstr "Es kann kein freier Port für %(host)s angefordert werden" #, python-format msgid "Not able to bind %(host)s:%(port)d, %(error)s" msgstr "%(host)s:%(port)d kann nicht gebunden werden, %(error)s" #, python-format msgid "" "Not all Virtual Functions of PF %(compute_node_id)s:%(address)s are free." msgstr "" "Nicht alle virtuellen Funktionen von PF %(compute_node_id)s:%(address)s sind " "frei." msgid "Not an rbd snapshot" msgstr "Keine RBD-Momentaufnahme" #, python-format msgid "Not authorized for image %(image_id)s." msgstr "Für Abbild %(image_id)s nicht autorisiert." msgid "Not authorized." msgstr "Nicht berechtigt." msgid "Not enough parameters to build a valid rule." msgstr "Nicht genügend Parameter zum Erstellen einer gültigen Regel." msgid "Not implemented on Windows" msgstr "Nicht implementiert auf Windows" msgid "Not stored in rbd" msgstr "Nicht in RBD gespeichert" msgid "Nothing was archived." msgstr "Es wurde nichts archiviert." #, python-format msgid "Nova requires QEMU version %s or greater." msgstr "Nova erfordert QEMU ab Version %s." #, python-format msgid "Nova requires libvirt version %s or greater." msgstr "Nova erfordert libvirt ab Version %s." msgid "Number of Rows Archived" msgstr "Anzahl der archivierten Zeilen" #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "Objektaktion %(action)s fehlgeschlagen, weil: %(reason)s" msgid "Old volume is attached to a different instance." msgstr "Alter Datenträger ist an eine andere Instanz angehängt." 
#, python-format msgid "One or more hosts already in availability zone(s) %s" msgstr "Eine oder mehrere Hosts sind schon in Verfügbarkeitszone(n) %s" #, python-format msgid "Only %d SCSI controllers are allowed to be created on this instance." msgstr "Nur %d SCSI-Controller dürfen in dieser Instanz erstellt werden." msgid "Only administrators may list deleted instances" msgstr "Nur Administratoren können gelöschte Instanzen auflisten" #, python-format msgid "" "Only file-based SRs (ext/NFS) are supported by this feature. SR %(uuid)s is " "of type %(type)s" msgstr "" "Nur dateibasierte SRs (ext/NFS) werden von dieser Funktion unterstützt. SR " "%(uuid)s weist den Typ %(type)s auf" msgid "Origin header does not match this host." msgstr "Ursprungsheader stimmt nicht mit diesem Host überein." msgid "Origin header not valid." msgstr "Ursprungsheader nicht gültig." msgid "Origin header protocol does not match this host." msgstr "Ursprungsheaderprotokoll stimmt nicht mit diesem Host überein." #, python-format msgid "PCI Device %(node_id)s:%(address)s not found." msgstr "PCI-Gerät %(node_id)s:%(address)s nicht gefunden." #, python-format msgid "PCI alias %(alias)s is not defined" msgstr "PCI-Alias %(alias)s ist nicht definiert" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is %(status)s instead of " "%(hopestatus)s" msgstr "" "PCI-Gerät %(compute_node_id)s:%(address)s ist %(status)s anstatt " "%(hopestatus)s" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is owned by %(owner)s instead of " "%(hopeowner)s" msgstr "" "PCI Gerät %(compute_node_id)s:%(address)s gehört zu %(owner)s anstatt " "%(hopeowner)s" #, python-format msgid "PCI device %(id)s not found" msgstr "PCI-Gerät %(id)s nicht gefunden" #, python-format msgid "PCI device request %(requests)s failed" msgstr "PCI-Geräteanforderung %(requests)s fehlgeschlagen" #, python-format msgid "PIF %s does not contain IP address" msgstr "PIF %s enthält keine IP-Adresse" #, python-format msgid "Page size %(pagesize)s forbidden against '%(against)s'" msgstr "Seitengröße %(pagesize)s nicht zulässig für '%(against)s'" #, python-format msgid "Page size %(pagesize)s is not supported by the host." msgstr "Seitengröße %(pagesize)s wird vom Host nicht unterstützt." #, python-format msgid "" "Parameters %(missing_params)s not present in vif_details for vif %(vif_id)s. " "Check your Neutron configuration to validate that the macvtap parameters are " "correct." msgstr "" "Parameter %(missing_params)s ist in vif_details für vif %(vif_id)s nicht " "vorhanden. Überprüfen Sie Ihre Neutron Konfiguration, ob der macvtap " "Parameter richtig gesetzt ist." #, python-format msgid "Path %s must be LVM logical volume" msgstr "Pfad '%s' muss sich auf dem logischen LVM-Datenträger befinden" msgid "Paused" msgstr "Pausiert" msgid "Personality file limit exceeded" msgstr "Grenzwert von Persönlichkeitsdatei überschritten" #, python-format msgid "" "Physical Function %(compute_node_id)s:%(address)s, related to VF " "%(compute_node_id)s:%(vf_address)s is %(status)s instead of %(hopestatus)s" msgstr "" "Physische Funktion %(compute_node_id)s:%(address)s in Relation zur VF " "%(compute_node_id)s:%(vf_address)s ist %(status)s anstatt %(hopestatus)s" #, python-format msgid "Physical network is missing for network %(network_uuid)s" msgstr "Physisches Netz für Netz %(network_uuid)s fehlt" #, python-format msgid "Policy doesn't allow %(action)s to be performed." msgstr "Richtlinie lässt Ausführung von %(action)s nicht zu." 
#, python-format msgid "Port %(port_id)s is still in use." msgstr "Port %(port_id)s ist noch im Gebrauch. " #, python-format msgid "Port %(port_id)s not usable for instance %(instance)s." msgstr "Port %(port_id)s ist für Instanz %(instance)s nicht verwendbar." #, python-format msgid "" "Port %(port_id)s not usable for instance %(instance)s. Value %(value)s " "assigned to dns_name attribute does not match instance's hostname " "%(hostname)s" msgstr "" "Der Port %(port_id)s kann für die Instanz %(instance)s nicht verwendet " "werden. Der Wert %(value)s, der dem dns_name-Attribut zugeordnet ist, stimmt " "nicht mit dem Instanzhostnamen %(hostname)s überein." #, python-format msgid "Port %(port_id)s requires a FixedIP in order to be used." msgstr "" "Damit Port %(port_id)s verwendet werden kann, ist eine statische IP-Adresse " "erforderlich." #, python-format msgid "Port %s is not attached" msgstr "Port %s ist nicht angehängt" #, python-format msgid "Port id %(port_id)s could not be found." msgstr "Portkennung %(port_id)s konnte nicht gefunden werden." #, python-format msgid "Provided video model (%(model)s) is not supported." msgstr "Angegebenes Videomodell (%(model)s) wird nicht unterstützt." #, python-format msgid "Provided watchdog action (%(action)s) is not supported." msgstr "Angegebene Watchdogaktion (%(action)s) wird nicht unterstützt." msgid "QEMU guest agent is not enabled" msgstr "QEMU-Gastagent nicht aktiviert" #, python-format msgid "Quiescing is not supported in instance %(instance_id)s" msgstr "Stillegen der Instanz %(instance_id)s wird nicht unterstützt" #, python-format msgid "Quota class %(class_name)s could not be found." msgstr "Quotenklasse %(class_name)s konnte nicht gefunden werden." msgid "Quota could not be found" msgstr "Quote konnte nicht gefunden werden" #, python-format msgid "" "Quota exceeded for %(overs)s: Requested %(req)s, but already used %(used)s " "of %(allowed)s %(overs)s" msgstr "" "Kontingent für %(overs)s: Requested %(req)s überschritten, schon %(used)s " "von %(allowed)s %(overs)s benutzt" #, python-format msgid "Quota exceeded for resources: %(overs)s" msgstr "Quote für Ressourcen überschritten: %(overs)s" msgid "Quota exceeded, too many key pairs." msgstr "Quote überschritten, zu viele Schlüsselpaare" msgid "Quota exceeded, too many server groups." msgstr "Quote überschritten, zu viele Servergruppen. " msgid "Quota exceeded, too many servers in group" msgstr "Quote überschritten, zu viele Server in Gruppe" #, python-format msgid "Quota exceeded: code=%(code)s" msgstr "Quote überschritten: code=%(code)s" #, python-format msgid "Quota exists for project %(project_id)s, resource %(resource)s" msgstr "" "Für Projekt %(project_id)s, Ressource %(resource)s ist eine Quote vorhanden" #, python-format msgid "Quota for project %(project_id)s could not be found." msgstr "Quote für Projekt %(project_id)s konnte nicht gefunden werden." #, python-format msgid "" "Quota for user %(user_id)s in project %(project_id)s could not be found." msgstr "" "Quote für Benutzer %(user_id)s im Projekt %(project_id)s konnte nicht " "gefunden werden." #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be greater than or equal to " "already used and reserved %(minimum)s." msgstr "" "Quotengrenzwert %(limit)s für %(resource)s muss größer-gleich dem bereits " "verwendeten und reservierten Wert %(minimum)s sein." #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be less than or equal to " "%(maximum)s." 
msgstr "" "Quotengrenzwert %(limit)s für %(resource)s muss kleiner-gleich %(maximum)s " "sein." #, python-format msgid "Reached maximum number of retries trying to unplug VBD %s" msgstr "Maximale Anzahl an Wiederholungen für das Trennen von VBD %s erreicht" msgid "" "Realtime policy needs vCPU(s) mask configured with at least 1 RT vCPU and 1 " "ordinary vCPU. See hw:cpu_realtime_mask or hw_cpu_realtime_mask" msgstr "" "Die Echtzeitrichtlinie erfordert eine mit 1 RT-vCPU und 1 normalen vCPU " "konfigurierte vCPU(s)-Maske. Weitere Informationen finden Sie unter 'hw:" "cpu_realtime_mask' bzw. 'hw_cpu_realtime_mask'." msgid "Request body and URI mismatch" msgstr "Abweichung zwischen Anforderungshauptteil und URI" msgid "Request is too large." msgstr "Anforderung ist zu groß." #, python-format msgid "Request of image %(image_id)s got BadRequest response: %(response)s" msgstr "" "Bei der Anforderung des Image %(image_id)s wurde eine BadRequest-Antwort " "empfangen: %(response)s" #, python-format msgid "RequestSpec not found for instance %(instance_uuid)s" msgstr "" "Die Anforderungsspezifikation für die Instanz %(instance_uuid)s wurde nicht " "gefunden." msgid "Requested CPU control policy not supported by host" msgstr "" "Die angeforderte CPU-Steuerungsrichtlinie wird vom Host nicht unterstützt." #, python-format msgid "" "Requested hardware '%(model)s' is not supported by the '%(virt)s' virt driver" msgstr "" "Angeforderte Hardware '%(model)s' wird vom virtuellen Treiber '%(virt)s' " "nicht unterstützt" #, python-format msgid "Requested image %(image)s has automatic disk resize disabled." msgstr "" "Für das angeforderte Image %(image)s wurde die automatische " "Plattengrößenänderung inaktiviert." msgid "" "Requested instance NUMA topology cannot fit the given host NUMA topology" msgstr "" "Angeforderte Instanz-NUMA-Topologie passt nicht zur angegebenen Host-NUMA-" "Topologie" msgid "" "Requested instance NUMA topology together with requested PCI devices cannot " "fit the given host NUMA topology" msgstr "" "Angefragte Instanz NUMA Topologie zusammen mit dem angefragten PCI Gerät " "stimmt nicht überein mit dem Host NUMA Topologie" #, python-format msgid "" "Requested vCPU limits %(sockets)d:%(cores)d:%(threads)d are impossible to " "satisfy for vcpus count %(vcpus)d" msgstr "" "Angeforderte vCPU-Grenzwerte %(sockets)d:%(cores)d:%(threads)d sind für die " "VCPU-Anzahl %(vcpus)d nicht ausreichend" #, python-format msgid "Rescue device does not exist for instance %s" msgstr "Rescue-Einheit ist für Instanz %s nicht vorhanden" #, python-format msgid "Resize error: %(reason)s" msgstr "Größenanpassungsfehler: %(reason)s" msgid "Resize to zero disk flavor is not allowed." msgstr "Größenänderung in Plattengröße null ist nicht zulässig." msgid "Resource could not be found." msgstr "Ressource konnte nicht gefunden werden." msgid "Resumed" msgstr "Fortgesetzt" #, python-format msgid "Root element name should be '%(name)s' not '%(tag)s'" msgstr "Stammelementname muss '%(name)s' lauten, nicht '%(tag)s'" #, python-format msgid "Running batches of %i until complete" msgstr "%i-Batches werden ausgeführt, bis Ausführung abgeschlossen ist." #, python-format msgid "Scheduler Host Filter %(filter_name)s could not be found." msgstr "Scheduler-Hostfilter %(filter_name)s konnte nicht gefunden werden." 
#, python-format msgid "Security group %(name)s is not found for project %(project)s" msgstr "Sicherheitsgruppe %(name)s nicht gefunden für Projekt %(project)s" #, python-format msgid "" "Security group %(security_group_id)s not found for project %(project_id)s." msgstr "" "Sicherheitsgruppe %(security_group_id)s für Projekt %(project_id)s nicht " "gefunden." #, python-format msgid "Security group %(security_group_id)s not found." msgstr "Sicherheitsgruppe %(security_group_id)s nicht gefunden." #, python-format msgid "" "Security group %(security_group_name)s already exists for project " "%(project_id)s." msgstr "" "Sicherheitsgruppe %(security_group_name)s für Projekt %(project_id)s ist " "bereits vorhanden." #, python-format msgid "" "Security group %(security_group_name)s not associated with the instance " "%(instance)s" msgstr "" "Sicherheitsgruppe %(security_group_name)s ist der Instanz %(instance)s nicht " "zugeordnet" msgid "Security group id should be uuid" msgstr "ID von Sicherheitsgruppe sollte eine UUID sein" msgid "Security group name cannot be empty" msgstr "Sicherheitsgruppenname darf nicht leer sein" msgid "Security group not specified" msgstr "Sicherheitsgruppe nicht angegeben" #, python-format msgid "Server %(server_id)s has no tag '%(tag)s'" msgstr "Server %(server_id)s hat kein Tag '%(tag)s'" #, python-format msgid "Server disk was unable to be resized because: %(reason)s" msgstr "" "Größe der Serverplatte konnte nicht geändert werden. Ursache: %(reason)s" msgid "Server does not exist" msgstr "Server ist nicht vorhanden" #, python-format msgid "ServerGroup policy is not supported: %(reason)s" msgstr "ServerGroup-Richtlinie wird nicht unterstützt: %(reason)s" msgid "ServerGroupAffinityFilter not configured" msgstr "ServerGroupAffinityFilter nicht konfiguriert" msgid "ServerGroupAntiAffinityFilter not configured" msgstr "ServerGroupAntiAffinityFilter nicht konfiguriert" msgid "ServerGroupSoftAffinityWeigher not configured" msgstr "ServerGroupSoftAffinityWeigher nicht konfiguriert" msgid "ServerGroupSoftAntiAffinityWeigher not configured" msgstr "ServerGroupSoftAntiAffinityWeigher nicht konfiguriert" #, python-format msgid "Service %(service_id)s could not be found." msgstr "Service %(service_id)s konnte nicht gefunden werden." #, python-format msgid "Service %s not found." msgstr "Dienst %s nicht gefunden." msgid "Service is unavailable at this time." msgstr "Service ist derzeit nicht verfügbar." #, python-format msgid "Service with host %(host)s binary %(binary)s exists." msgstr "Service mit Host %(host)s, Binärcode %(binary)s ist bereits vorhanden." #, python-format msgid "Service with host %(host)s topic %(topic)s exists." msgstr "Service mit Host %(host)s, Topic %(topic)s ist bereits vorhanden." msgid "Set admin password is not supported" msgstr "Das Setzen des Admin-Passwortes wird nicht unterstützt" #, python-format msgid "Shadow table with name %(name)s already exists." msgstr "Spiegeltabelle mit dem Namen '%(name)s' ist bereits vorhanden." #, python-format msgid "Share '%s' is not supported" msgstr "Das freigegebene Verzeichnis '%s' wird nicht unterstützt" #, python-format msgid "Share level '%s' cannot have share configured" msgstr "Geteilte Ebene '%s' kann keine geteilte Konfigration haben" msgid "" "Shrinking the filesystem down with resize2fs has failed, please check if you " "have enough free space on your disk." 
msgstr "" "Verkleinern des Dateisystems mit resize2fs ist fehlgeschlagen; überprüfen " "Sie, ob auf Ihrer Platte noch genügend freier Speicherplatz vorhanden ist." #, python-format msgid "Snapshot %(snapshot_id)s could not be found." msgstr "Momentaufnahme %(snapshot_id)s konnte nicht gefunden werden." msgid "Some required fields are missing" msgstr "Einige benötigte Felder fehlen" #, python-format msgid "" "Something went wrong when deleting a volume snapshot: rebasing a " "%(protocol)s network disk using qemu-img has not been fully tested" msgstr "" "Beim Löschen einer Datenträgerschattenkopie ist ein Fehler aufgetreten: " "Zurücksetzen einer %(protocol)s-Netzplatte mit 'qemu-img' wurde nicht " "vollständig getestet." msgid "Sort direction size exceeds sort key size" msgstr "Größe der Sortierrichtung überschreitet Sortierschlüsselgröße" msgid "Sort key supplied was not valid." msgstr "Der angegebene Sortierschlüssel war nicht gültig. " msgid "Specified fixed address not assigned to instance" msgstr "Angegebene statische Adresse ist nicht der Instanz zugeordnet" msgid "Specify `table_name` or `table` param" msgstr "Geben Sie den Parameter `table_name` oder `table` an" msgid "Specify only one param `table_name` `table`" msgstr "Geben Sie nur einen Parameter an, `table_name` oder `table`" msgid "Started" msgstr "Gestartet" msgid "Stopped" msgstr "Gestoppt" #, python-format msgid "Storage error: %(reason)s" msgstr "Speicherfehler: %(reason)s" #, python-format msgid "Storage policy %s did not match any datastores" msgstr "Speicherrichtlinie %s stimmt mit keinem Datenspeicher überein" msgid "Success" msgstr "Erfolg" msgid "Suspended" msgstr "Ausgesetzt" msgid "Swap drive requested is larger than instance type allows." msgstr "" "Angeforderte Auslagerungsplatte ist größer als der Instanz-Typ erlaubt." msgid "Table" msgstr "Tabelle" #, python-format msgid "Task %(task_name)s is already running on host %(host)s" msgstr "Task %(task_name)s wird bereits auf Host %(host)s ausgeführt" #, python-format msgid "Task %(task_name)s is not running on host %(host)s" msgstr "Task %(task_name)s wird nicht auf Host %(host)s ausgeführt" #, python-format msgid "The PCI address %(address)s has an incorrect format." msgstr "Die PCI-Adresse %(address)s hat ein falsches Format." msgid "The backlog must be more than 0" msgstr "Das Backlog muss größer 0 sein" #, python-format msgid "The console port range %(min_port)d-%(max_port)d is exhausted." msgstr "Der Konsolenportbereich %(min_port)d-%(max_port)d ist ausgeschöpft." msgid "The created instance's disk would be too small." msgstr "Der erstellte Datenträger für die Instanz würde zu klein sein." msgid "The current driver does not support preserving ephemeral partitions." msgstr "" "Das Beibehalten ephemerer Partitionen wird vom aktuellen Treiber nicht " "unterstützt." msgid "The default PBM policy doesn't exist on the backend." msgstr "Die Standard-PBM-Richtlinie ist auf dem Back-End nicht vorhanden." msgid "The floating IP request failed with a BadRequest" msgstr "" "Die Anfrage für dynamische IP-Adresse ist fehlgeschlagen mit BadRequest" #, python-format msgid "" "The format of the option 'reserved_huge_pages' is invalid. (found " "'%(conf)s') Please refer to the nova config-reference." msgstr "" "Das Format der Option 'reserved_huge_pages' ist ungültig ('%(conf)s' " "gefunden). Ziehen Sie die Nova-Konfigurationsreferenz zu Rate." msgid "" "The instance requires a newer hypervisor version than has been provided." 
msgstr "" "Die Instanz erfordert eine neuere als die bereitgestellte Hypervisorversion." #, python-format msgid "The number of defined ports: %(ports)d is over the limit: %(quota)d" msgstr "" "Die Anzahl der definierten Ports (%(ports)d) ist über dem Limit: %(quota)d" msgid "The only partition should be partition 1." msgstr "Die einzige Partition sollte Partition 1 sein." #, python-format msgid "The provided RNG device path: (%(path)s) is not present on the host." msgstr "" "Der angegebene RNG Gerätepfad: (%(path)s) existiert nicht auf dem Host." msgid "The request body can't be empty" msgstr "Der Anforderungshauptteil darf nicht leer sein" msgid "The request is invalid." msgstr "Die Anfrage ist ungültig." #, python-format msgid "" "The requested amount of video memory %(req_vram)d is higher than the maximum " "allowed by flavor %(max_vram)d." msgstr "" "Die angeforderte Menge an Bildspeicher %(req_vram)d ist größer als der für " "Version %(max_vram)d zulässige Höchstwert." msgid "The requested availability zone is not available" msgstr "Die angeforderte Verfügbarkeitszone ist nicht verfügbar" msgid "The requested console type details are not accessible" msgstr "Auf die angeforderten Konsolentypdetails kann nicht zugegriffen werden" msgid "The requested functionality is not supported." msgstr "Die angeforderte Funktionalität wird nicht unterstützt." #, python-format msgid "The specified cluster '%s' was not found in vCenter" msgstr "Der angegebene Cluster '%s' wurde im vCenter nicht gefunden" #, python-format msgid "The supplied device path (%(path)s) is in use." msgstr "Der gelieferte Gerätepfad (%(path)s) ist in Benutzung." #, python-format msgid "The supplied device path (%(path)s) is invalid." msgstr "Der gelieferte Gerätepfad (%(path)s) ist ungültig." #, python-format msgid "" "The supplied disk path (%(path)s) already exists, it is expected not to " "exist." msgstr "" "Der angegebene Plattenpfad (%(path)s) ist bereits vorhanden. Er sollte nicht " "vorhanden sein." msgid "The supplied hypervisor type of is invalid." msgstr "Der gelieferte Hypervisor-Typ ist ungültig." msgid "The target host can't be the same one." msgstr "Der Zielhost kann nicht der gleiche sein." #, python-format msgid "The token '%(token)s' is invalid or has expired" msgstr "Das Token '%(token)s' ist ungültig oder abgelaufen" #, python-format msgid "" "The volume cannot be assigned the same device name as the root device %s" msgstr "" "Der Datenträger kann nicht zum delben Gerätenamen wir das Root Gerät %s " "zugewiesen werden" #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. Run this command again with the --delete " "option after you have backed up any necessary data." msgstr "" "Es gibt %(records)d Datensätze in der Tabelle '%(table_name)s', bei denen " "die Spalte 'uuid' oder 'instance_uuid' NULL ist. Führen Sie diesen Befehl " "erneut mit der Option --delete aus, nachdem Sie alle erforderlichen Daten " "gesichert haben." #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. These must be manually cleaned up before " "the migration will pass. Consider running the 'nova-manage db " "null_instance_uuid_scan' command." msgstr "" "Es gibt %(records)d Datensätze in der Tabelle '%(table_name)s', bei denen " "die Spalte 'uuid' oder' instance_uuid' NULL ist. Diese müssen manuell " "bereinigt werden, bevor die Migration übergeben wird. 
Möglicherweise sollten " "Sie den Befehl 'nova-manage db null_instance_uuid_scan' ausführen." msgid "There are not enough hosts available." msgstr "Es sind nicht genügend Hosts verfügbar." #, python-format msgid "" "There are still %(count)i unmigrated flavor records. Migration cannot " "continue until all instance flavor records have been migrated to the new " "format. Please run `nova-manage db migrate_flavor_data' first." msgstr "" "Es gibt immer noch %(count)i nicht migrierte Versionseinträge. Migration " "kann nicht fortgesetzt werden, solange nicht alle Instanz-Versionseinträge " "zum neuen Format migriert sind. Bitte starten Sie zuerst `nova-manage db " "migrate_flavor_data'." #, python-format msgid "There is no such action: %s" msgstr "Aktion existiert nicht: %s" msgid "There were no records found where instance_uuid was NULL." msgstr "Es wurden keine Datensätze gefunden, in denen instance_uuid NULL war." #, python-format msgid "" "This compute node's hypervisor is older than the minimum supported version: " "%(version)s." msgstr "" "Die Hypervisorversion des Compute-Knotens ist älter als die Version, die für " "die Mindestunterstützung erforderlich ist: %(version)s." msgid "This domU must be running on the host specified by connection_url" msgstr "" "Diese domU muss auf dem von 'connection_url' angegebenen Host ausgeführt " "werden" msgid "" "This method needs to be called with either networks=None and port_ids=None " "or port_ids and networks as not none." msgstr "" "Diese Methode muss entweder mit Angabe von 'networks=None' und " "'port_ids=None' oder 'port_ids' und 'networks' mit einem Wert ungleich " "'None' aufgerufen werden. " #, python-format msgid "This rule already exists in group %s" msgstr "Diese Regel ist in Gruppe %s bereits vorhanden" #, python-format msgid "" "This service is older (v%(thisver)i) than the minimum (v%(minver)i) version " "of the rest of the deployment. Unable to continue." msgstr "" "Dieser Dienst ist älter (v%(thisver)i) als die Mindestversion (v%(minver)i) " "der übrigen Implementierung. Fortfahren nicht möglich. " #, python-format msgid "Timeout waiting for device %s to be created" msgstr "Zeitüberschreitung bei der Erstellung des Gerätes: %s" msgid "Timeout waiting for response from cell" msgstr "Zeitüberschreitung beim Warten auf Antwort von der Zelle" #, python-format msgid "Timeout while checking if we can live migrate to host: %s" msgstr "Zeitüberschreitung bei der Überprüfung der Live-Migration zu Host: %s" msgid "To and From ports must be integers" msgstr "Eingangs- und Ausgangsports müssen Ganzzahlen sein" msgid "Token not found" msgstr "Token nicht gefunden" msgid "Triggering crash dump is not supported" msgstr "Auslösen von Absturzabbildern wird nicht unterstützt. " msgid "Type and Code must be integers for ICMP protocol type" msgstr "Typ und Code müssen für den ICMP-Protokolltyp Ganzzahlen sein" msgid "UEFI is not supported" msgstr "UEFI wird nicht unterstützt." #, python-format msgid "" "Unable to associate floating IP %(address)s to any fixed IPs for instance " "%(id)s. Instance has no fixed IPv4 addresses to associate." msgstr "" "Verknüpfung der Floating IP %(address)s zu irgendeiner festen IP-Adresse für " "Instanz %(id)s fehlgeschlagen. Instanz hat keine feste IPv4-Adresse." #, python-format msgid "" "Unable to associate floating IP %(address)s to fixed IP %(fixed_address)s " "for instance %(id)s. 
Error: %(error)s" msgstr "" "Floating IP %(address)s kann nicht der festen IP-Adresse %(fixed_address)s " "für Instanz %(id)s zugeordnet werden. Fehler: %(error)s" msgid "Unable to authenticate Ironic client." msgstr "Ironic-Client kann nicht authentifiziert werden. " #, python-format msgid "Unable to contact guest agent. The following call timed out: %(method)s" msgstr "" "Gastagent konnte nicht kontaktiert werden. Der folgende Aufruf ist " "abgelaufen: %(method)s" #, python-format msgid "Unable to convert image to %(format)s: %(exp)s" msgstr "Abbild kann nicht konvertiert werden in %(format)s: %(exp)s" #, python-format msgid "Unable to convert image to raw: %(exp)s" msgstr "Abbild kann nicht in ein Rohformat konvertiert werden: %(exp)s" #, python-format msgid "Unable to destroy VBD %s" msgstr "VBD %s kann nicht gelöscht werden" #, python-format msgid "Unable to destroy VDI %s" msgstr "VDI %s kann nicht gelöscht werden" #, python-format msgid "Unable to determine disk bus for '%s'" msgstr "Plattenbus für '%s' kann nicht bestimmt werden" #, python-format msgid "Unable to determine disk prefix for %s" msgstr "Plattenpräfix für %s kann nicht bestimmt werden" #, python-format msgid "Unable to eject %s from the pool; No master found" msgstr "" "%s kann nicht aus Pool entnommen werden; keine übergeordnete Einheit gefunden" #, python-format msgid "Unable to eject %s from the pool; pool not empty" msgstr "%s kann nicht aus Pool entnommen werden; Pool ist nicht leer" msgid "Unable to find SR from VBD" msgstr "Konnte kein SR finden für VBD" #, python-format msgid "Unable to find SR from VBD %s" msgstr "SR kann nicht ausgehend von VBD '%s' gefunden werden" msgid "Unable to find SR from VDI" msgstr "Konnte kein SR finden für VDI" #, python-format msgid "Unable to find SR from VDI %s" msgstr "SR von VDI %s kann nicht gefunden werden" #, python-format msgid "Unable to find ca_file : %s" msgstr "'ca_file' konnte nicht gefunden werden: %s" #, python-format msgid "Unable to find cert_file : %s" msgstr "'cert_file' konnte nicht gefunden werden: %s" #, python-format msgid "Unable to find host for Instance %s" msgstr "Host für Instanz %s kann nicht gefunden werden" msgid "Unable to find iSCSI Target" msgstr "iSCSI-Ziel konnte nicht gefunden worden" #, python-format msgid "Unable to find key_file : %s" msgstr "'key_file' konnte nicht gefunden werden: %s" msgid "Unable to find root VBD/VDI for VM" msgstr "Root-VBD/VDI für VM kann nicht gefunden werden" msgid "Unable to find volume" msgstr "Datenträger kann nicht gefunden werden" msgid "Unable to get host UUID: /etc/machine-id does not exist" msgstr "Host UUID kann nicht abgerufen werden: /etc/machine-id existiert nicht" msgid "Unable to get host UUID: /etc/machine-id is empty" msgstr "Host UUID kann nicht abgerufen werden: /etc/machine-id ist leer" msgid "Unable to get record of VDI" msgstr "Konnte keinen EIntrag für VDI beziehen" #, python-format msgid "Unable to get record of VDI %s on" msgstr "Datensatz von VDI %s kann nicht abgerufen werden auf" msgid "Unable to introduce VDI for SR" msgstr "Bekanntmachung VDI an SR nicht möglich" #, python-format msgid "Unable to introduce VDI for SR %s" msgstr "VDI kann für SR %s nicht eingeführt werden" msgid "Unable to introduce VDI on SR" msgstr "Bekanntmachung VDI an SR nicht möglich" #, python-format msgid "Unable to introduce VDI on SR %s" msgstr "VDI kann nicht in SR '%s' eingeführt werden" #, python-format msgid "Unable to join %s in the pool" msgstr "Verknüpfung von %s im Pool nicht möglich" msgid "" "Unable to 
launch multiple instances with a single configured port ID. Please " "launch your instance one by one with different ports." msgstr "" "Es können nicht mehrere Instanzen mit einer einzelnen konfigurierten Port-ID " "gestartet werden. Starten Sie Ihre Instanzen nacheinander mit " "unterschiedlichen Ports." #, python-format msgid "" "Unable to migrate %(instance_uuid)s to %(dest)s: Lack of memory(host:" "%(avail)s <= instance:%(mem_inst)s)" msgstr "" "%(instance_uuid)s kann nicht nach %(dest)s migriert werden: Mangel an " "Speicher (Host:%(avail)s <= Instanz:%(mem_inst)s)" #, python-format msgid "" "Unable to migrate %(instance_uuid)s: Disk of instance is too large(available " "on destination host:%(available)s < need:%(necessary)s)" msgstr "" "%(instance_uuid)s kann nicht migriert werden: Platte der Instanz ist zu groß " "(verfügbar auf Zielhost: %(available)s < Bedarf:%(necessary)s)" #, python-format msgid "" "Unable to migrate instance (%(instance_id)s) to current host (%(host)s)." msgstr "" "Instanz (%(instance_id)s) kann nicht auf aktuellen Host (%(host)s) migriert " "werden." #, python-format msgid "Unable to obtain target information %s" msgstr "Zielinformationen zu %s können nicht abgerufen werden" msgid "Unable to resize disk down." msgstr "Größe der inaktiven Platte kann nicht geändert werden." msgid "Unable to set password on instance" msgstr "Es kann kein Kennwort für die Instanz festgelegt werden" msgid "Unable to shrink disk." msgstr "Platte kann nicht verkleinert werden." msgid "Unable to terminate instance." msgstr "Instanz kann nicht beendet werden." #, python-format msgid "Unable to unplug VBD %s" msgstr "Verbindung zu VBD %s kann nicht getrennt werden" #, python-format msgid "Unacceptable CPU info: %(reason)s" msgstr "Unzulässige CPU Information: %(reason)s" msgid "Unacceptable parameters." msgstr "Inakzeptable Parameter." #, python-format msgid "Unavailable console type %(console_type)s." msgstr "Nicht verfügbarer Konsolentyp %(console_type)s." msgid "" "Undefined Block Device Mapping root: BlockDeviceMappingList contains Block " "Device Mappings from multiple instances." msgstr "" "Nicht definiertes Stammverzeichnis für Blockgerätezuordnung: " "BlockDeviceMappingList enthält Blockgerätezuordnungen aus mehreren Instanzen." #, python-format msgid "" "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ " "and attach the Nova API log if possible.\n" "%s" msgstr "" "Unerwarteter API-Fehler. Melden Sie ihn unter http://bugs.launchpad.net/" "nova/ und hängen Sie das Nova-API-Protokoll an, falls möglich.\n" "%s" #, python-format msgid "Unexpected aggregate action %s" msgstr "Unerwartete Aggregataktion %s" msgid "Unexpected type adding stats" msgstr "Unerwarteter Typ beim Hinzufügen von Statistiken" #, python-format msgid "Unexpected vif_type=%s" msgstr "Unerwarteter vif_type=%s" msgid "Unknown" msgstr "Unbekannt" msgid "Unknown action" msgstr "Unbekannte Aktion" #, python-format msgid "Unknown auth type: %s" msgstr "Unbekannter Authentifizierungstyp: %s" #, python-format msgid "Unknown config drive format %(format)s. Select one of iso9660 or vfat." msgstr "" "Unbekanntes Format %(format)s des Konfigurationslaufwerks. Wählen Sie " "entweder 'iso9660' oder 'vfat' aus." #, python-format msgid "Unknown delete_info type %s" msgstr "Unbekannter delete_info-Typ %s" #, python-format msgid "Unknown image_type=%s" msgstr "Unbekannter image_type=%s" #, python-format msgid "Unknown quota resources %(unknown)s." msgstr "Unbekannte Quotenressourcen %(unknown)s." 
msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "" "Unbekannte Sortierrichtung; muss 'desc' (absteigend) oder " "'asc' (aufsteigend) sein" #, python-format msgid "Unknown type: %s" msgstr "Unbekannter Typ: %s" msgid "Unrecognized legacy format." msgstr "Nicht erkanntes Altformat." #, python-format msgid "Unrecognized read_deleted value '%s'" msgstr "Nicht erkannter read_deleted-Wert '%s'" #, python-format msgid "Unrecognized value '%s' for CONF.running_deleted_instance_action" msgstr "Nicht erkannter Wert '%s' für 'CONF.running_deleted_instance_action'" #, python-format msgid "Unshelve attempted but the image %s cannot be found." msgstr "Aufnahme wurde versucht, aber das Image %s kann nicht gefunden werden." msgid "Unsupported Content-Type" msgstr "Nicht unterstützter Inhaltstyp" msgid "Upgrade DB using Essex release first." msgstr "" "Führen Sie zuerst ein Upgrade für die Datenbank unter Verwendung des Essex-" "Release durch." #, python-format msgid "User %(username)s not found in password file." msgstr "Benutzer %(username)s in Kennwortdatei nicht gefunden." #, python-format msgid "User %(username)s not found in shadow file." msgstr "Benutzer %(username)s in Spiegeldatei nicht gefunden." msgid "User data needs to be valid base 64." msgstr "Benutzerdaten müssen gültige Base64-Daten sein. " msgid "User does not have admin privileges" msgstr "Benutzer hat keine Admin Rechte." msgid "" "Using different block_device_mapping syntaxes is not allowed in the same " "request." msgstr "" "Benutzung unterschiedlicher Block_Geräte_Zuordnung-Schreibweisen ist nicht " "erlaubt in der selben Anfrage." #, python-format msgid "" "VDI %(vdi_ref)s is %(virtual_size)d bytes which is larger than flavor size " "of %(new_disk_size)d bytes." msgstr "" "VDI %(vdi_ref)s hat eine Größe von %(virtual_size)d Bytes und ist damit " "größer als die Versionsgröße von %(new_disk_size)d Bytes." #, python-format msgid "" "VDI not found on SR %(sr)s (vdi_uuid %(vdi_uuid)s, target_lun %(target_lun)s)" msgstr "" "VDI nicht gefunden auf SR %(sr)s (vdi_uuid %(vdi_uuid)s, target_lun " "'%(target_lun)s')" #, python-format msgid "VHD coalesce attempts exceeded (%d), giving up..." msgstr "Mehr als (%d) VHD-Verbindungsversuche unternommen, Abbruch ..." #, python-format msgid "Value must match %s" msgstr "Wert muss %s entsprechen" #, python-format msgid "" "Version %(req_ver)s is not supported by the API. Minimum is %(min_ver)s and " "maximum is %(max_ver)s." msgstr "" "Version %(req_ver)s wird von der API nicht unterstützt. Minimum ist " "%(min_ver)s und Maximum ist %(max_ver)s." msgid "Virtual Interface creation failed" msgstr "Virtuelle Schnittstellenerstellung fehlgeschlagen" msgid "Virtual interface plugin failed" msgstr "Virtuelles Schnittstellen-Plugin fehlgeschlagen" #, python-format msgid "Virtual machine mode '%(vmmode)s' is not recognised" msgstr "Der Modus '%(vmmode)s' der virtuellen Maschine wird nicht erkannt" #, python-format msgid "Virtual machine mode '%s' is not valid" msgstr "Der Modus '%s' der virtuellen Machine ist nicht gültig" #, python-format msgid "Virtualization type '%(virt)s' is not supported by this compute driver" msgstr "" "Virtualisierungstyp '%(virt)s' wird von diesem Rechentreiber nicht " "unterstützt" #, python-format msgid "Volume %(volume_id)s could not be attached. Reason: %(reason)s" msgstr "" "Datenträger %(volume_id)s konnte nicht angehängt werden. Ursache: %(reason)s" #, python-format msgid "Volume %(volume_id)s could not be detached. 
Reason: %(reason)s" msgstr "" "Datenträger %(volume_id)s konnte nicht abgehängt werden. Ursache: %(reason)s" #, python-format msgid "Volume %(volume_id)s could not be found." msgstr "Datenträger %(volume_id)s konnte nicht gefunden werden." #, python-format msgid "" "Volume %(volume_id)s did not finish being created even after we waited " "%(seconds)s seconds or %(attempts)s attempts. And its status is " "%(volume_status)s." msgstr "" "Datenträger %(volume_id)s wurde nicht erstellt nach einer Wartezeit von " "%(seconds)s Sekunden oder %(attempts)s Versuchen. Und der Status ist " "%(volume_status)s." msgid "Volume does not belong to the requested instance." msgstr "Datenträger gehört nicht zur angeforderten Instanz." #, python-format msgid "" "Volume encryption is not supported for %(volume_type)s volume %(volume_id)s" msgstr "" "Datenträgerverschlüsselung wird nicht unterstützt für %(volume_type)s " "Datenträger %(volume_id)s" #, python-format msgid "" "Volume is smaller than the minimum size specified in image metadata. Volume " "size is %(volume_size)i bytes, minimum size is %(image_min_disk)i bytes." msgstr "" "Datenträger ist kleiner als die minimale Größe spezifiziert in den Abbild-" "Metadaten. Datenträgergröße ist %(volume_size)i Bytes, minimale Größe ist " "%(image_min_disk)i Bytes." #, python-format msgid "" "Volume sets block size, but the current libvirt hypervisor '%s' does not " "support custom block size" msgstr "" "Datenträger legt Blockgröße fest, aber der aktuelle libvirt-Hypervisor %s " "unterstützt keine angepassten Blockgrößen" #, python-format msgid "Volume type %(id_or_name)s could not be found." msgstr "Datenträgertyp %(id_or_name)s wurde nicht gefunden." #, python-format msgid "" "We do not support scheme '%s' under Python < 2.7.4, please use http or https" msgstr "" "Wir unterstützen Schema '%s' unter Python bis Version 2.7.4 nicht, verwenden " "Sie bitte HTTP oder HTTPS" msgid "When resizing, instances must change flavor!" msgstr "Beim Ändern der Größe muss die Version der Instanzen geändert werden!" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "Wenn der Server im SSL-Modus läuft, müssen Sie sowohl für die 'cert_file'- " "als auch für die 'key_file'-Option in Ihrer Konfigurationsdatei einen Wert " "angeben" #, python-format msgid "Wrong quota method %(method)s used on resource %(res)s" msgstr "Falsche Quotenmethode %(method)s für Ressource %(res)s verwendet" msgid "Wrong type of hook method. Only 'pre' and 'post' type allowed" msgstr "" "Falscher Typ von Hookmethode. Nur die Typen 'pre' und 'post' sind zulässig" msgid "X-Forwarded-For is missing from request." msgstr "X-Forwarded-For wird in der Anfrage vermisst." msgid "X-Instance-ID header is missing from request." msgstr "X-Instance-ID-Header fehlt in Anforderung. " msgid "X-Instance-ID-Signature header is missing from request." msgstr "X-Instance-ID-Signature-Header fehlt in Anforderung." msgid "X-Metadata-Provider is missing from request." msgstr "X-Metadata-Provider wird in der Anfrage vermisst." msgid "X-Tenant-ID header is missing from request." msgstr "X-Tenant-ID-Header fehlt in Anforderung." msgid "XAPI supporting relax-xsm-sr-check=true required" msgstr "XAPI mit Unterstützung für 'relax-xsm-sr-check=true' erforderlich" msgid "You are not allowed to delete the image." msgstr "Sie sind nicht berechtigt, dieses Image zu löschen." 
msgid "" "You are not authorized to access the image the instance was started with." msgstr "" "Sie sind nicht berechtigt, auf das Image zuzugreifen, mit dem die Instanz " "gestartet wurde." msgid "You must implement __call__" msgstr "Sie müssen '__call__' implementieren" msgid "You should specify images_rbd_pool flag to use rbd images." msgstr "" "Sie sollten das Flag 'images_rbd_pool' angeben, um RBD-Images zu verwenden" msgid "You should specify images_volume_group flag to use LVM images." msgstr "" "Sie sollten das Flag 'images_volume_group' angeben, um LVM-Images zu " "verwenden" msgid "Zero floating IPs available." msgstr "Keine Floating IPs verfügbar." msgid "admin password can't be changed on existing disk" msgstr "" "Das Administrator Passwort kann nicht auf der bestehenden Festplatte " "geändert werden" msgid "aggregate deleted" msgstr "Aggregat gelöscht" msgid "aggregate in error" msgstr "Aggregat fehlerhaft" #, python-format msgid "assert_can_migrate failed because: %s" msgstr "assert_can_migrate fehlgeschlagen. Ursache: %s" #, python-format msgid "attach network interface %s failed." msgstr "Anhängen von Netzwerkschnittstelle %s fehlgeschlagen." msgid "cannot understand JSON" msgstr "kann JSON nicht verstehen" msgid "clone() is not implemented" msgstr "clone() ist nicht implementiert" #, python-format msgid "connect info: %s" msgstr "Verbindungsinfo: %s" #, python-format msgid "connecting to: %(host)s:%(port)s" msgstr "verbinden mit: %(host)s:%(port)s" #, python-format msgid "detach network interface %s failed." msgstr "Abtrennen von Netzwerkschnittstelle %s fehlgeschlagen." msgid "direct_snapshot() is not implemented" msgstr "direct_snapshot() ist nicht implementiert" #, python-format msgid "disk type '%s' not supported" msgstr "Festplattentyp '%s' nicht unterstützt" #, python-format msgid "empty project id for instance %s" msgstr "leere Projektkennung für Instanz %s" msgid "error setting admin password" msgstr "Fehler beim Festlegen des Administratorkennworts" #, python-format msgid "error: %s" msgstr "Fehler: %s" #, python-format msgid "failed to generate X509 fingerprint. Error message: %s" msgstr "Generierung des X509 FIngerabdrucks fehlgeschlagen. Fehlermeldung: %s" msgid "failed to generate fingerprint" msgstr "Erzeugen des Fingerabdrucks ist fehlgeschlagen" msgid "filename cannot be None" msgstr "Dateiname darf nicht 'None' sein" msgid "floating IP is already associated" msgstr "Die Floating IP ist bereits zugeordnet." msgid "floating IP not found" msgstr "Floating IP nicht gefunden" #, python-format msgid "fmt=%(fmt)s backed by: %(backing_file)s" msgstr "fmt=%(fmt)s gesichert durch: %(backing_file)s" #, python-format msgid "href %s does not contain version" msgstr "Hyperlink %s enthält Version nicht" msgid "image already mounted" msgstr "Abbild bereits eingehängt" #, python-format msgid "instance %s is not running" msgstr "Instanz %s wird nicht ausgeführt" msgid "instance has a kernel or ramdisk but not both" msgstr "Instanz weist Kernel oder RAM-Platte auf, aber nicht beides" msgid "instance is a required argument to use @refresh_cache" msgstr "" "Instanz ist ein erforderliches Argument für die Verwendung von " "'@refresh_cache'" msgid "instance is not in a suspended state" msgstr "Instanz ist nicht im Aussetzstatus" msgid "instance is not powered on" msgstr "Instanz ist nicht eingeschaltet" msgid "instance is powered off and cannot be suspended." msgstr "Instanz wird ausgeschaltet und kann nicht ausgesetzt werden. 
" #, python-format msgid "instance_id %s could not be found as device id on any ports" msgstr "instance_id %s konnte auf keinem Port als Einheiten-ID gefunden werden" msgid "is_public must be a boolean" msgstr "'is_public' muss ein boolescher Wert sein" msgid "keymgr.fixed_key not defined" msgstr "keymgr.fixed_key nicht bestimmt" msgid "l3driver call to add floating IP failed" msgstr "'l3driver'-Aufruf zum Hinzufügen einer Floating IP ist fehlgeschlagen." #, python-format msgid "libguestfs installed but not usable (%s)" msgstr "libguestfs installiert, aber nicht benutzbar (%s)" #, python-format msgid "libguestfs is not installed (%s)" msgstr "libguestfs ist nicht installiert (%s)" #, python-format msgid "marker [%s] not found" msgstr "Marker [%s] nicht gefunden" #, python-format msgid "max rows must be <= %(max_value)d" msgstr "max. Zeilen müssen <= %(max_value)d sein" msgid "max_count cannot be greater than 1 if an fixed_ip is specified." msgstr "" "max_count darf nicht größer als 1 sein, wenn eine fixed_ip angegeben ist." msgid "min_count must be <= max_count" msgstr "'min_count' muss <= 'max_count' sein" #, python-format msgid "nbd device %s did not show up" msgstr "NBD-Einheit %s wurde nicht angezeigt" msgid "nbd unavailable: module not loaded" msgstr "nbd nicht verfügbar: Modul nicht geladen" msgid "no hosts to remove" msgstr "Keine Hosts zum Entfernen vorhanden" #, python-format msgid "no match found for %s" msgstr "keine Übereinstimmung gefunden für %s" #, python-format msgid "no usable parent snapshot for volume %s" msgstr "Kein verwendbares übergeordnetes Abbild für Datenträger %s" #, python-format msgid "no write permission on storage pool %s" msgstr "Keine Schreibberechtigung für Speicherpool %s" #, python-format msgid "not able to execute ssh command: %s" msgstr "SSH-Befehl kann nicht ausgeführt werden: %s" msgid "old style configuration can use only dictionary or memcached backends" msgstr "" "Alter Konfigurationsstil kann nur Verzeichnis- oder memcached-Backends " "verwenden." msgid "operation time out" msgstr "Vorgangszeitüberschreitung" #, python-format msgid "partition %s not found" msgstr "Partition %s nicht gefunden" #, python-format msgid "partition search unsupported with %s" msgstr "Partitionssuche nicht unterstützt mit %s" msgid "pause not supported for vmwareapi" msgstr "'pause' nicht unterstützt für 'vmwareapi'" msgid "printable characters with at least one non space character" msgstr "" "Druckbare Zeichen mit mindestens einem Zeichen, das kein Leerzeichen ist." msgid "printable characters. Can not start or end with whitespace." msgstr "Druckbare Zeichen. Keine Leerzeichen davor oder danach zulässig." #, python-format msgid "qemu-img failed to execute on %(path)s : %(exp)s" msgstr "qemu-img konnte nicht ausgeführt werden für %(path)s : %(exp)s" #, python-format msgid "qemu-nbd error: %s" msgstr "qemu-nbd-Fehler: %s" msgid "rbd python libraries not found" msgstr "rbd Python-Bibliotheken nicht gefunden" #, python-format msgid "read_deleted can only be one of 'no', 'yes' or 'only', not %r" msgstr "'read_deleted' kann nur 'no', 'yes' oder 'only' sein, nicht '%r'" msgid "serve() can only be called once" msgstr "serve() kann nur einmal aufgerufen werden." 
msgid "service is a mandatory argument for DB based ServiceGroup driver" msgstr "" "Service ist ein obligatorisches Argument für den datenbankbasierten " "ServiceGroup-Treiber" msgid "service is a mandatory argument for Memcached based ServiceGroup driver" msgstr "" "Service ist ein obligatorisches Argument für den Memcached-basierten " "ServiceGroup-Treiber" msgid "set_admin_password is not implemented by this driver or guest instance." msgstr "" "'set_admin_password' wird von diesem Treiber oder dieser Gastinstanz nicht " "implementiert" msgid "setup in progress" msgstr "Konfiguration in Bearbeitung" #, python-format msgid "snapshot for %s" msgstr "Momentaufnahme für %s" msgid "snapshot_id required in create_info" msgstr "snapshot_id in create_info erforderlich" msgid "token not provided" msgstr "Token nicht angegeben" msgid "too many body keys" msgstr "zu viele Textschlüssel" msgid "unpause not supported for vmwareapi" msgstr "'unpause' nicht unterstützt für 'vmwareapi'" msgid "version should be an integer" msgstr "Version sollte eine Ganzzahl sein" #, python-format msgid "vg %s must be LVM volume group" msgstr "Datenträgergruppe '%s' muss sich in LVM-Datenträgergruppe befinden" #, python-format msgid "vhostuser_sock_path not present in vif_details for vif %(vif_id)s" msgstr "vhostuser_sock_path nicht vorhanden in vif_details für vif %(vif_id)s" #, python-format msgid "vif type %s not supported" msgstr "Vif-Typ %s nicht unterstütztnot supported" msgid "vif_type parameter must be present for this vif_driver implementation" msgstr "" "Parameter 'vif_type' muss für diese vif_driver-Implementierung vorhanden sein" #, python-format msgid "volume %s already attached" msgstr "Datenträger %s ist bereits angehängt" #, python-format msgid "" "volume '%(vol)s' status must be 'in-use'. Currently in '%(status)s' status" msgstr "" "Status von Datenträger '%(vol)s' muss 'in-use' lauten. Weist derzeit den " "Status '%(status)s' auf" #, python-format msgid "xenapi.fake does not have an implementation for %s" msgstr "xenapi.fake hat keine Implementierung für %s" #, python-format msgid "" "xenapi.fake does not have an implementation for %s or it has been called " "with the wrong number of arguments" msgstr "" "xenapi.fake weist keine Implementierung für %s auf oder wurde mit einer " "falschen Anzahl von Argumenten aufgerufen" ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0664723 nova-21.2.4/nova/locale/es/0000775000175000017500000000000000000000000015377 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3504698 nova-21.2.4/nova/locale/es/LC_MESSAGES/0000775000175000017500000000000000000000000017164 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/locale/es/LC_MESSAGES/nova.po0000664000175000017500000033456300000000000020505 0ustar00zuulzuul00000000000000# Translations template for nova. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the nova project. # # Translators: # Adriana Chisco Landazábal , 2015 # Alberto Molina Coballes , 2012-2014 # Ying Chun Guo , 2013 # David Martinez Morata, 2014 # FIRST AUTHOR , 2011 # Jose Ramirez Garcia , 2014 # Edgar Carballo , 2013 # Pablo Sanchez , 2015 # Andreas Jaeger , 2016. 
#zanata msgid "" msgstr "" "Project-Id-Version: nova VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2020-04-27 16:23+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-04-12 06:05+0000\n" "Last-Translator: Copied by Zanata \n" "Language: es\n" "Plural-Forms: nplurals=2; plural=(n != 1);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 4.3.3\n" "Language-Team: Spanish\n" #, python-format msgid "%(address)s is not a valid IP v4/6 address." msgstr "%(address)s no es una dirección IP v4/6 válida." #, python-format msgid "" "%(binary)s attempted direct database access which is not allowed by policy" msgstr "" "%(binary)s ha intentado un acceso de bases de datos directo que no está " "permitido por la política." #, python-format msgid "%(cidr)s is not a valid IP network." msgstr "%(cidr)s no es una red de IP válida." #, python-format msgid "%(field)s should not be part of the updates." msgstr "%(field)s no debería formar parte de las actualizaciones." #, python-format msgid "%(memsize)d MB of memory assigned, but expected %(memtotal)d MB" msgstr "" "Se han asignado %(memsize)d MB de memoria, pero se esperaban %(memtotal)d MB" #, python-format msgid "%(path)s is not on local storage: %(reason)s" msgstr "%(path)s no está en un almacenamiento local: %(reason)s" #, python-format msgid "%(path)s is not on shared storage: %(reason)s" msgstr "%(path)s no está en un almacenamiento compartido: %(reason)s" #, python-format msgid "%(total)i rows matched query %(meth)s, %(done)i migrated" msgstr "" "%(total)i filas han coincidido con la consulta %(meth)s, %(done)i migradas" #, python-format msgid "%(type)s hypervisor does not support PCI devices" msgstr "El hipervisor %(type)s no soporta dispositivos PCI" #, python-format msgid "%(worker_name)s value of %(workers)s is invalid, must be greater than 0" msgstr "" "El valor %(worker_name)s de %(workers)s es inválido, debe ser mayor que 0." #, python-format msgid "%s does not support disk hotplug." msgstr "%s no soporta hotplug de disco." #, python-format msgid "%s format is not supported" msgstr "No se soporta formato %s" #, python-format msgid "%s is not supported." msgstr "%s no está soportada." #, python-format msgid "%s must be either 'MANUAL' or 'AUTO'." msgstr "%s debe ser 'MANUAL' o 'AUTO'." #, python-format msgid "'%(other)s' should be an instance of '%(cls)s'" msgstr "'%(other)s' debería ser una instancia de '%(cls)s'." msgid "'qemu-img info' parsing failed." msgstr "Se ha encontrado un error en el análisis de 'qemu-img info'." #, python-format msgid "'rxtx_factor' argument must be a float between 0 and %g" msgstr "El argumento 'rxtx_factor' debe ser un flotante entre 0 y %g" #, python-format msgid "A NetworkModel is required in field %s" msgstr "Se requiere un NetworkModel en campo %s" #, python-format msgid "" "API Version String %(version)s is of invalid format. Must be of format " "MajorNum.MinorNum." msgstr "" "Secuencia API Versión %(version)s tiene un formato no válido. Debe ser un " "formato MajorNum.MinorNum." #, python-format msgid "API version %(version)s is not supported on this method." msgstr "Versión API %(version)s, no soportada en este método." msgid "Access list not available for public flavors." msgstr "La lista de acceso no está disponible para tipos públicos. 
" #, python-format msgid "Action %s not found" msgstr "Acción %s no encontrada" #, python-format msgid "" "Action for request_id %(request_id)s on instance %(instance_uuid)s not found" msgstr "" "La acción para request_id %(request_id)s en la instancia %(instance_uuid)s " "no se ha encontrado." #, python-format msgid "Action: '%(action)s', calling method: %(meth)s, body: %(body)s" msgstr "Acción: '%(action)s', método de llamada: %(meth)s, cuerpo: %(body)s" #, python-format msgid "Add metadata failed for aggregate %(id)s after %(retries)s retries" msgstr "" "Fallo en adición de metadata para el agregado %(id)s después de %(retries)s " "intentos" msgid "Affinity instance group policy was violated." msgstr "Se ha infringido la política de afinidad de instancia de grupo " #, python-format msgid "Agent does not support the call: %(method)s" msgstr "El agente no soporta la llamada %(method)s" #, python-format msgid "" "Agent-build with hypervisor %(hypervisor)s os %(os)s architecture " "%(architecture)s exists." msgstr "" "Compilación agente con hipervisor %(hypervisor)s S.O. %(os)s arquitectura " "%(architecture)s existe." #, python-format msgid "Aggregate %(aggregate_id)s already has host %(host)s." msgstr "El agregado %(aggregate_id)s ya tiene el host %(host)s." #, python-format msgid "Aggregate %(aggregate_id)s could not be found." msgstr "No se ha podido encontrar el agregado %(aggregate_id)s." #, python-format msgid "Aggregate %(aggregate_id)s has no host %(host)s." msgstr "El agregado %(aggregate_id)s no tiene ningún host %(host)s." #, python-format msgid "Aggregate %(aggregate_id)s has no metadata with key %(metadata_key)s." msgstr "" "El agregado %(aggregate_id)s no tiene metadatos con la clave " "%(metadata_key)s." #, python-format msgid "" "Aggregate %(aggregate_id)s: action '%(action)s' caused an error: %(reason)s." msgstr "" "Agregado %(aggregate_id)s: la acción '%(action)s' ha producido un error: " "%(reason)s." #, python-format msgid "Aggregate %(aggregate_name)s already exists." msgstr "El agregado %(aggregate_name)s ya existe." #, python-format msgid "Aggregate %s does not support empty named availability zone" msgstr "" "El agregado %s no admite una zona de disponibilidad con el nombre vacío" #, python-format msgid "Aggregate for host %(host)s count not be found." msgstr "No se ha podido encontrar el agregado para el host %(host)s. " #, python-format msgid "An invalid 'name' value was provided. The name must be: %(reason)s" msgstr "" "Se ha proporcionado un valor no válido en el campo 'name'. El nombre debe " "ser: %(reason)s" msgid "An unknown error has occurred. Please try your request again." msgstr "" "Ha sucedido un error desconocido. Por favor repite el intento de nuevo." msgid "An unknown exception occurred." msgstr "Una excepción desconocida ha ocurrido" msgid "Anti-affinity instance group policy was violated." msgstr "la política de grupo de anti-afinidad fue violada." #, python-format msgid "Architecture name '%(arch)s' is not recognised" msgstr "No se reconoce el nombre de la Arquitectura '%(arch)s'" #, python-format msgid "Architecture name '%s' is not valid" msgstr "El nombre de la Arquitectura '%s' no es válido" #, python-format msgid "" "Attempt to consume PCI device %(compute_node_id)s:%(address)s from empty pool" msgstr "" "Intento de consumir dispositivo PCI %(compute_node_id)s:%(address)s de pool " "vacío" msgid "Attempted overwrite of an existing value." msgstr "Se ha intentado sobreescribir un valor ya existente." 
#, python-format msgid "Attribute not supported: %(attr)s" msgstr "Atributo no soportado: %(attr)s" #, python-format msgid "Bad network format: missing %s" msgstr "Formato de red erróneo: falta %s" msgid "Bad networks format" msgstr "Formato de redes erróneo" #, python-format msgid "Bad networks format: network uuid is not in proper format (%s)" msgstr "" "Formato incorrecto de redes: el uuid de red no está en el formato correcto " "(%s) " #, python-format msgid "Bad prefix for network in cidr %s" msgstr "Prefijo erróneo para red en cidr %s" #, python-format msgid "" "Binding failed for port %(port_id)s, please check neutron logs for more " "information." msgstr "" "El enlace ha fallado para el puerto %(port_id)s, compruebe los registros de " "neutron para más información." msgid "Blank components" msgstr "Componentes en blanco" msgid "" "Blank volumes (source: 'blank', dest: 'volume') need to have non-zero size" msgstr "" "Volumenes vacios (origen:'vacio',dest:'volume') necesitan tener un tamaño no " "nulo." #, python-format msgid "Block Device %(id)s is not bootable." msgstr "El dispositivo de bloque %(id)s no puede arrancar." #, python-format msgid "" "Block Device Mapping %(volume_id)s is a multi-attach volume and is not valid " "for this operation." msgstr "" "El mapeo de dispositivo de bloques %(volume_id)s es un volumen con diversos " "adjuntos y no es válido para esta operación." msgid "Block Device Mapping cannot be converted to legacy format. " msgstr "" "La correlación de dispositivo de bloque no puede ser convertida a formato " "heredado." msgid "Block Device Mapping is Invalid." msgstr "La correlación de dispositivo de bloque no es válida." #, python-format msgid "Block Device Mapping is Invalid: %(details)s" msgstr "La correlación de dispositivo de bloque es inválida: %(details)s" msgid "" "Block Device Mapping is Invalid: Boot sequence for the instance and image/" "block device mapping combination is not valid." msgstr "" "La correlación de dispositivo de bloque es inválida: La secuencia de " "arranque para la instancia y la combinación de la imagen/correlación de " "dispositivo de bloque no es válida." msgid "" "Block Device Mapping is Invalid: You specified more local devices than the " "limit allows" msgstr "" "La correlación de dispositivo de bloque es inválida: Ha especificado más " "dispositivos locales que el límite permitido" #, python-format msgid "Block Device Mapping is Invalid: failed to get image %(id)s." msgstr "" "La correlación de dispositivo de bloque es inválida: no ha sido posible " "obtener la imagen %(id)s." #, python-format msgid "Block Device Mapping is Invalid: failed to get snapshot %(id)s." msgstr "" "La correlación de dispositivo de bloque no es válida: no se ha podido " "obtener la instantánea %(id)s." #, python-format msgid "Block Device Mapping is Invalid: failed to get volume %(id)s." msgstr "" "La correlación de dispositivo de bloque no es válida: no se ha podido " "obtener el volumen %(id)s." msgid "Block migration can not be used with shared storage." msgstr "" "No se puede utilizar la migración de bloque con almacenamiento compartido. " msgid "Boot index is invalid." msgstr "El índice de arranque no es válido." 
#, python-format msgid "Build of instance %(instance_uuid)s aborted: %(reason)s" msgstr "Construcción de instancia %(instance_uuid)s abortada: %(reason)s" #, python-format msgid "Build of instance %(instance_uuid)s was re-scheduled: %(reason)s" msgstr "Construcción de instancia %(instance_uuid)s reprogramada: %(reason)s" #, python-format msgid "BuildRequest not found for instance %(uuid)s" msgstr "" "No se ha encontrado la solicitud de compilación (BuildRequest) para la " "instancia %(uuid)s" msgid "CPU and memory allocation must be provided for all NUMA nodes" msgstr "" "La asignación de CPU y memoria debe proporcionarse para todos los nodos NUMA" #, python-format msgid "" "CPU doesn't have compatibility.\n" "\n" "%(ret)s\n" "\n" "Refer to %(u)s" msgstr "" "CPU no tiene compatibilidad.\n" " \n" "%(ret)s\n" "\n" "Consulte %(u)s" #, python-format msgid "CPU number %(cpunum)d is assigned to two nodes" msgstr "El numero de CPU %(cpunum)d esta asignado a dos nodos" #, python-format msgid "CPU number %(cpunum)d is larger than max %(cpumax)d" msgstr "El numero de CPU %(cpunum)d es mas largo que le máximo %(cpumax)d" #, python-format msgid "CPU number %(cpuset)s is not assigned to any node" msgstr "El numero de CPU %(cpuset)s no esta asignado a ningún nodo" msgid "Can not add access to a public flavor." msgstr "No se puede añadir acceso al sabor público." msgid "Can not find requested image" msgstr "No se puede encontrar la imagen solicitada " #, python-format msgid "Can not handle authentication request for %d credentials" msgstr "" "No se puede manejar la solicitud de autenticación para las credenciales %d" msgid "Can't resize a disk to 0 GB." msgstr "No se puede cambiar el tamaño de archivo a 0 GB." msgid "Can't resize down ephemeral disks." msgstr "No se puede reducir el tamaño de los discos efímeros." msgid "Can't retrieve root device path from instance libvirt configuration" msgstr "" "No se puede recuperar la vía de acceso ed dispositivo raíz de la " "configuración de libvirt de instancia" #, python-format msgid "" "Cannot '%(action)s' instance %(server_id)s while it is in %(attr)s %(state)s" msgstr "" "No se puede '%(action)s' instancia %(server_id)s mientras está en %(attr)s " "%(state)s" #, python-format msgid "" "Cannot access \"%(instances_path)s\", make sure the path exists and that you " "have the proper permissions. In particular Nova-Compute must not be executed " "with the builtin SYSTEM account or other accounts unable to authenticate on " "a remote host." msgstr "" "No se puede acceder a \"%(instances_path)s\", asegúrese de que exsta la ruta " "y que tiene los permisos pertinentes. En particular Nova-Compute no se debe " "ejecutar con la cuenta SYSTEM incorporada u otras cuentas incapaces de " "autenticar en un host remoto." #, python-format msgid "Cannot add host to aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "No se puede añadir host al agregado %(aggregate_id)s. Razón: %(reason)s." 
msgid "Cannot attach one or more volumes to multiple instances" msgstr "No se pueden conectar uno o más volúmenes a varias instancias" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "No se puede ejecutar %(method)s en un objecto huérfano %(objtype)s" #, python-format msgid "" "Cannot determine the parent storage pool for %s; cannot determine where to " "store images" msgstr "" "No se puede determinar la agrupación de almacenamiento padre para %s; no se " "puede determinar dónde se deben almacenar las imágenes" msgid "Cannot find SR of content-type ISO" msgstr "No se puede encontrar SR de content-type ISO" msgid "Cannot find SR to read/write VDI." msgstr "No se puede encontrar SR para leer/grabar VDI." msgid "Cannot find image for rebuild" msgstr "No se puede encontrar la imagen para reconstrucción " #, python-format msgid "Cannot remove host %(host)s in aggregate %(id)s" msgstr "No se puede eliminar el host %(host)s en el agregado %(id)s" #, python-format msgid "Cannot remove host from aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "No se puede remover el host del agregado %(aggregate_id)s. Razón: %(reason)s." msgid "Cannot rescue a volume-backed instance" msgstr "No se puede rescatar una instancia de volume-backed" #, python-format msgid "" "Cannot resize the root disk to a smaller size. Current size: " "%(curr_root_gb)s GB. Requested size: %(new_root_gb)s GB." msgstr "" "No se puede cambiar el tamaño del disco raíz a un tamaño menor. Tamaño " "actual: %(curr_root_gb)s GB. Tamaño solicitado: %(new_root_gb)s GB." msgid "" "Cannot set cpu thread pinning policy in a non dedicated cpu pinning policy" msgstr "" "No se puede definir una política de anclaje de hebras de CPU en una política " "de anclaje de CPU no dedicada" msgid "Cannot set realtime policy in a non dedicated cpu pinning policy" msgstr "" "No se puede definir una política en tiempo real en una política de anclaje " "de CPU no dedicada." #, python-format msgid "Cannot update aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "No se puede actualizar agregado %(aggregate_id)s. Razón: %(reason)s." #, python-format msgid "" "Cannot update metadata of aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "No se puede actualizar metadatos de agregado %(aggregate_id)s. Razón: " "%(reason)s." #, python-format msgid "Cell %(uuid)s has no mapping." msgstr "La celda %(uuid)s no posee mapeo alguno" #, python-format msgid "" "Change would make usage less than 0 for the following resources: %(unders)s" msgstr "" "El cambio produciría un uso inferior a 0 para los recursos siguientes: " "%(unders)s." #, python-format msgid "Class %(class_name)s could not be found: %(exception)s" msgstr "No se ha podido encontrar la clase %(class_name)s: %(exception)s" #, python-format msgid "" "Command Not supported. Please use Ironic command %(cmd)s to perform this " "action." msgstr "" "No se soporta Comando. Por favor utilice el comando Ironic %(cmd)s para " "realizar esta acción." #, python-format msgid "Compute host %(host)s could not be found." msgstr "No se ha podido encontrar el host de Compute %(host)s." #, python-format msgid "Compute host %s not found." msgstr "No se ha encontrado Compute host %s." #, python-format msgid "Compute service of %(host)s is still in use." msgstr "El servicio Compute de %(host)s todavía se encuentra en uso." #, python-format msgid "Compute service of %(host)s is unavailable at this time." msgstr "El servicio Compute de %(host)s no está disponible en este momento." 
#, python-format msgid "Config drive format '%(format)s' is not supported." msgstr "El formato de unidad de configuración '%(format)s' no está soportado." #, python-format msgid "" "Config requested an explicit CPU model, but the current libvirt hypervisor " "'%s' does not support selecting CPU models" msgstr "" "La configuración ha solicitado un modelo CPU explícito, pero el hipervisor " "libvirt actual '%s' no soporta la selección de modelos de CPU" #, python-format msgid "" "Conflict updating instance %(instance_uuid)s, but we were unable to " "determine the cause" msgstr "" "Ha ocurrido un conflicto al actualizar la instancia %(instance_uuid)s pero " "no hemos podido establecer la causa." #, python-format msgid "" "Conflict updating instance %(instance_uuid)s. Expected: %(expected)s. " "Actual: %(actual)s" msgstr "" "Ha ocurrido un conflicto al actualizar la instancia %(instance_uuid)s. " "Esperado: %(expected)s. Actualmente: %(actual)s." #, python-format msgid "Connection to cinder host failed: %(reason)s" msgstr "Fallo en la conexión al alojamiento cinder: %(reason)s" #, python-format msgid "Connection to glance host %(server)s failed: %(reason)s" msgstr "La conexión con el host glance %(server)s ha fallado: %(reason)s" #, python-format msgid "Connection to libvirt lost: %s" msgstr "Conexión hacia libvirt perdida: %s" #, python-format msgid "Connection to the hypervisor is broken on host: %(host)s" msgstr "La conexión al hipervisor está perdida en el anfitrión: %(host)s" #, python-format msgid "" "Console log output could not be retrieved for instance %(instance_id)s. " "Reason: %(reason)s" msgstr "" "No se ha podido recuperar la salida del registro de la consola para la " "instancia %(instance_id)s. Motivo: %(reason)s" msgid "Constraint not met." msgstr "Restricción no cumplida." #, python-format msgid "Converted to raw, but format is now %s" msgstr "Convertido a sin formato, pero el formato es ahora %s" #, python-format msgid "Could not attach image to loopback: %s" msgstr "No se puede unir la imagen con el loopback: %s" #, python-format msgid "Could not fetch image %(image_id)s" msgstr "No se ha podido captar la imagen %(image_id)s" #, python-format msgid "Could not find a handler for %(driver_type)s volume." msgstr "" "No se ha podido encontrar un manejador para el volumen %(driver_type)s." #, python-format msgid "Could not find binary %(binary)s on host %(host)s." msgstr "No se ha podido encontrar el binario %(binary)s en el host %(host)s." #, python-format msgid "Could not find config at %(path)s" msgstr "No se ha podido encontrar configuración en %(path)s" msgid "Could not find the datastore reference(s) which the VM uses." msgstr "" "No se ha podido encontrar la(s) referencia(s) de almacén de datos que la MV " "utiliza." #, python-format msgid "Could not load line %(line)s, got error %(error)s" msgstr "" "No se puede cargar la linea %(line)s, se ha obtenido el error %(error)s" #, python-format msgid "Could not load paste app '%(name)s' from %(path)s" msgstr "No se ha podido cargar aplicación de pegar '%(name)s' desde %(path)s " #, python-format msgid "" "Could not mount vfat config drive. %(operation)s failed. Error: %(error)s" msgstr "" "No se ha podido montar la unidad de configuración vfat. %(operation)s ha " "fallado. 
Error: %(error)s" #, python-format msgid "Could not upload image %(image_id)s" msgstr "No se ha podido cargar la imagen %(image_id)s" msgid "Creation of virtual interface with unique mac address failed" msgstr "La creación de la interfaz virtual con dirección MAC única ha fallado" #, python-format msgid "Datastore regex %s did not match any datastores" msgstr "" "El valor regex %s del almacén de datos no concuerda con algún almacén de " "datos." msgid "Datetime is in invalid format" msgstr "El formato de fecha no es válido" msgid "Default PBM policy is required if PBM is enabled." msgstr "Se requiere una política PBM por defecto si se habilita PBM." #, python-format msgid "Deleted %(records)d records from table '%(table_name)s'." msgstr "Se han eliminado %(records)d registros de la tabla '%(table_name)s'." #, python-format msgid "Device '%(device)s' not found." msgstr "No se ha encontrado el disposisitvo'%(device)s'." #, python-format msgid "" "Device id %(id)s specified is not supported by hypervisor version %(version)s" msgstr "" "El dispositivo con identificador %(id)s especificado no está soportado por " "la versión del hipervisor %(version)s" msgid "Device name contains spaces." msgstr "El nombre del dispositivo contiene espacios." msgid "Device name empty or too long." msgstr "El nombre del dispositivo está vacío o es demasiado largo." #, python-format msgid "Device type mismatch for alias '%s'" msgstr "Discrepancia de tipo de dispositivo para el alias '%s'" #, python-format msgid "" "Different types in %(table)s.%(column)s and shadow table: %(c_type)s " "%(shadow_c_type)s" msgstr "" "Diferentes tipos en %(table)s.%(column)s y la tabla shadow: %(c_type)s " "%(shadow_c_type)s" #, python-format msgid "Disk contains a filesystem we are unable to resize: %s" msgstr "" "El disco contiene un sistema de archivos incapaz de modificar su tamaño: %s" #, python-format msgid "Disk format %(disk_format)s is not acceptable" msgstr "Formato de disco %(disk_format)s no es aceptable" #, python-format msgid "Disk info file is invalid: %(reason)s" msgstr "El archivo de información de disco es inválido: %(reason)s" msgid "Disk must have only one partition." msgstr "el disco debe tener una sola partición." #, python-format msgid "Disk with id: %s not found attached to instance." msgstr "Disco identificado como: %s no se ha encontrado adjunto a la instancia" #, python-format msgid "Driver Error: %s" msgstr "Error de dispositivo: %s" #, python-format msgid "Error attempting to run %(method)s" msgstr "Error al intentar ejecutar %(method)s" #, python-format msgid "" "Error destroying the instance on node %(node)s. Provision state still " "'%(state)s'." msgstr "" "Error al destruir la instancia en nodo %(node)s. El estado de provisión aún " "es '%(state)s'." 
#, python-format msgid "Error during following call to agent: %(method)s" msgstr "Error durante la siguiente llamada al agente: %(method)s" #, python-format msgid "Error during unshelve instance %(instance_id)s: %(reason)s" msgstr "" "Error durante la extracción de la instancia %(instance_id)s: %(reason)s" #, python-format msgid "" "Error from libvirt while getting domain info for %(instance_name)s: [Error " "Code %(error_code)s] %(ex)s" msgstr "" "Error de libvirt al obtener la información de dominio para " "%(instance_name)s: [Código de error %(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while looking up %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "Error de libvirt al buscar %(instance_name)s: [Código de error " "%(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while quiescing %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "Error de libvirt durante el modo inactivo %(instance_name)s: [Código de Error" "%(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while set password for username \"%(user)s\": [Error Code " "%(error_code)s] %(ex)s" msgstr "" "Error de libvert al establecer la contraseña para el usuario \"%(user)s\": " "[Código de error %(error_code)s] %(ex)s" #, python-format msgid "" "Error mounting %(device)s to %(dir)s in image %(image)s with libguestfs " "(%(e)s)" msgstr "" "Error al montar %(device)s a %(dir)s en imagen %(image)s con libguestfs " "(%(e)s)" #, python-format msgid "Error mounting %(image)s with libguestfs (%(e)s)" msgstr "Error al montar %(image)s con libguestfs (%(e)s)" #, python-format msgid "Error when creating resource monitor: %(monitor)s" msgstr "Error al crear monitor de recursos: %(monitor)s" msgid "Error: Agent is disabled" msgstr "Error: El agente está inhabilitado" #, python-format msgid "Event %(event)s not found for action id %(action_id)s" msgstr "" "No se ha encontrado el suceso %(event)s para el id de acción %(action_id)s" msgid "Event must be an instance of nova.virt.event.Event" msgstr "El suceso debe ser una instancia de un nova.virt.event.Event" #, python-format msgid "" "Exceeded max scheduling attempts %(max_attempts)d for instance " "%(instance_uuid)s. Last exception: %(exc_reason)s" msgstr "" "Se ha superado el número máximo de intentos de planificación " "%(max_attempts)d para la instancia %(instance_uuid)s. Última excepción: " "%(exc_reason)s" #, python-format msgid "" "Exceeded max scheduling retries %(max_retries)d for instance " "%(instance_uuid)s during live migration" msgstr "" "Se han excedido los intentos máximos de programación %(max_retries)d para la " "instancia %(instance_uuid)s durante la migración en vivo" #, python-format msgid "Exceeded maximum number of retries. %(reason)s" msgstr "Se ha excedido el número máximo de intentos. %(reason)s" #, python-format msgid "Expected a uuid but received %(uuid)s." msgstr "Se esperaba un uuid pero se ha recibido %(uuid)s." #, python-format msgid "Extra column %(table)s.%(column)s in shadow table" msgstr "Columna extra %(table)s.%(column)s en la tabla shadow" msgid "Extracting vmdk from OVA failed." msgstr "Error al extraer vmdk de OVA." 
#, python-format msgid "Failed to access port %(port_id)s: %(reason)s" msgstr "Error al acceder a puerto %(port_id)s: %(reason)s" #, python-format msgid "" "Failed to add deploy parameters on node %(node)s when provisioning the " "instance %(instance)s" msgstr "" "Error al añadir parámetros de implementación en nodo %(node)s mientras se " "proporcionaba la instancia %(instance)s" #, python-format msgid "Failed to allocate the network(s) with error %s, not rescheduling." msgstr "No fue posible asignar red(es) con error %s, no se reprogramará." msgid "Failed to allocate the network(s), not rescheduling." msgstr "Fallo al asociar la(s) red(es), no se reprogramará." #, python-format msgid "Failed to attach network adapter device to %(instance_uuid)s" msgstr "Error al conectar el dispositivo adaptador de red a %(instance_uuid)s" #, python-format msgid "Failed to create vif %s" msgstr "Error al crear la VIF %s" #, python-format msgid "Failed to deploy instance: %(reason)s" msgstr "Fallo al desplegar instancia: %(reason)s" #, python-format msgid "Failed to detach PCI device %(dev)s: %(reason)s" msgstr "Fallo al desasociar el dispositivo PCI %(dev)s: %(reason)s" #, python-format msgid "Failed to detach network adapter device from %(instance_uuid)s" msgstr "" "Error al desconectar el dispositivo adaptador de red desde %(instance_uuid)s" #, python-format msgid "Failed to encrypt text: %(reason)s" msgstr "No se ha podido cifrar el texto: %(reason)s" #, python-format msgid "Failed to launch instances: %(reason)s" msgstr "Fallo al ejecutar instancias: %(reason)s" #, python-format msgid "Failed to map partitions: %s" msgstr "No se han podido correlacionar particiones: %s" #, python-format msgid "Failed to mount filesystem: %s" msgstr "Fallo al montar el sistema de ficheros: %s" msgid "Failed to parse information about a pci device for passthrough" msgstr "Fallo al pasar información sobre el dispositivo pci para el traspaso" #, python-format msgid "Failed to power off instance: %(reason)s" msgstr "Fallo al apagar la instancia: %(reason)s" #, python-format msgid "Failed to power on instance: %(reason)s" msgstr "Fallo al arrancar la instancia: %(reason)s" #, python-format msgid "" "Failed to prepare PCI device %(id)s for instance %(instance_uuid)s: " "%(reason)s" msgstr "" "Fallo al preparar el dispositivo PCI %(id)s para la instancia " "%(instance_uuid)s: %(reason)s" #, python-format msgid "Failed to provision instance %(inst)s: %(reason)s" msgstr "Fallo al proporcionar la instancia %(inst)s: %(reason)s" #, python-format msgid "Failed to read or write disk info file: %(reason)s" msgstr "" "Fallo al leer o escribir el archivo de información de disco: %(reason)s" #, python-format msgid "Failed to reboot instance: %(reason)s" msgstr "Fallo al reiniciar la instancia: %(reason)s" #, python-format msgid "Failed to remove volume(s): (%(reason)s)" msgstr "Fallo al remover el(los) volumen(es): (%(reason)s)" #, python-format msgid "Failed to request Ironic to rebuild instance %(inst)s: %(reason)s" msgstr "" "Error al solicitar Ironic para reconstruir instancia %(inst)s: %(reason)s" #, python-format msgid "Failed to resume instance: %(reason)s" msgstr "Fallo al reanudar la instancia: %(reason)s" #, python-format msgid "Failed to run qemu-img info on %(path)s : %(error)s" msgstr "Error al ejecutar run qemu-img. 
Información en %(path)s : %(error)s" #, python-format msgid "Failed to set admin password on %(instance)s because %(reason)s" msgstr "" "No se ha podido establecer la contraseña de administrador en %(instance)s " "debido a %(reason)s" msgid "Failed to spawn, rolling back" msgstr "No se ha podido generar, retrotrayendo" #, python-format msgid "Failed to suspend instance: %(reason)s" msgstr "Fallo al suspender instancia: %(reason)s" #, python-format msgid "Failed to terminate instance: %(reason)s" msgstr "Fallo al terminar la instancia: %(reason)s" #, python-format msgid "Failed to unplug vif %s" msgstr "No se ha podido desconectar la VIF %s" msgid "Failure prepping block device." msgstr "Fallo al preparar el dispositivo de bloque." #, python-format msgid "File %(file_path)s could not be found." msgstr "No se ha podido encontrar el archivo %(file_path)s." #, python-format msgid "File path %s not valid" msgstr "La vía de acceso de archivo %s no es válida" #, python-format msgid "Fixed IP %(ip)s is not a valid ip address for network %(network_id)s." msgstr "" "La IP fija %(ip)s no es una direccion IP valida para la red %(network_id)s." #, python-format msgid "Fixed IP %s is already in use." msgstr "IP fija %s ya está en uso." #, python-format msgid "" "Fixed IP address %(address)s is already in use on instance %(instance_uuid)s." msgstr "" "La dirección IP fija %(address)s ya se está utilizando en la instancia " "%(instance_uuid)s." #, python-format msgid "Fixed IP not found for address %(address)s." msgstr "No se ha encontrado la IP fija de la dirección %(address)s." #, python-format msgid "Flavor %(flavor_id)s could not be found." msgstr "No se ha podido encontrar el tipo %(flavor_id)s." #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(extra_specs_key)s." msgstr "" "El tipo %(flavor_id)s no tiene especificaciones adicionales con clave " "%(extra_specs_key)s" #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(key)s." msgstr "" "Tipo %(flavor_id)s no tiene especificaciones adicionales con clave %(key)s." #, python-format msgid "" "Flavor %(id)s extra spec cannot be updated or created after %(retries)d " "retries." msgstr "" "No se puede crear o actualizar el tipo %(id)s de especificaciones " "adicionales después de %(retries)d intentos." #, python-format msgid "" "Flavor access already exists for flavor %(flavor_id)s and project " "%(project_id)s combination." msgstr "" "Acceso al tipo ya existe para la combinación del tipo %(flavor_id)s y el " "proyecto %(project_id)s." #, python-format msgid "Flavor access not found for %(flavor_id)s / %(project_id)s combination." msgstr "" "No se ha encontrado el acceso al tipo para la combinación %(flavor_id)s / " "%(project_id)s." msgid "Flavor used by the instance could not be found." msgstr "No se ha podido encontrar el tipo utilizado por la instancia." #, python-format msgid "Flavor with ID %(flavor_id)s already exists." msgstr "El tipo identificado como %(flavor_id)s ya existe." #, python-format msgid "Flavor with name %(flavor_name)s could not be found." msgstr "No se puede encontrar el tipo con nombre %(flavor_name)s." #, python-format msgid "Flavor with name %(name)s already exists." msgstr "El tipo de nombre %(name)s ya existe." #, python-format msgid "" "Flavor's disk is smaller than the minimum size specified in image metadata. " "Flavor disk is %(flavor_size)i bytes, minimum size is %(image_min_disk)i " "bytes." 
msgstr "" "El disco del sabor es más pequeño que el tamaño mínimo especificado en los " "metadatos del imagen. El disco del sabor es %(flavor_size)i bytes, tamaño " "mínimo es %(image_min_disk)i bytes." #, python-format msgid "" "Flavor's disk is too small for requested image. Flavor disk is " "%(flavor_size)i bytes, image is %(image_size)i bytes." msgstr "" "El disco Flavor es demasiado pequeño para la imagen solicitada. El disco " "Flavor tiene %(flavor_size)i bytes, la imagen tiene %(image_size)i bytes." msgid "Flavor's memory is too small for requested image." msgstr "La memoria del tipo es demasiado pequeña para la imagen solicitada." #, python-format msgid "Floating IP %(address)s association has failed." msgstr "Ha fallado la asociación de IP flotante %(address)s." #, python-format msgid "Floating IP %(address)s is associated." msgstr "La dirección IP flotante %(address)s está asociada." #, python-format msgid "Floating IP %(address)s is not associated with instance %(id)s." msgstr "La IP flotante %(address)s no está asociada a la instancia %(id)s." #, python-format msgid "Floating IP not found for ID %(id)s." msgstr "No se ha encontrado ninguna dirección IP flotante para el ID %(id)s." #, python-format msgid "Floating IP not found for ID %s" msgstr "No se ha encontrado la IP flotante para el IP %s." #, python-format msgid "Floating IP not found for address %(address)s." msgstr "" "No se ha encontrado ninguna dirección IP flotante para la dirección " "%(address)s." msgid "Floating IP pool not found." msgstr "No se ha encontrado el pool de IP flotantes." msgid "" "Forbidden to exceed flavor value of number of serial ports passed in image " "meta." msgstr "" "Se prohíbe exceder el tipo de serie del número de puertos seriales presentes " "en meta imagen." msgid "Found no disk to snapshot." msgstr "No se ha encontrado disco relacionado a instantánea." #, python-format msgid "Found no network for bridge %s" msgstr "No se ha encontrado red para el puente %s" #, python-format msgid "Found non-unique network for bridge %s" msgstr "Encontrada una red no única para el puente %s" #, python-format msgid "Found non-unique network for name_label %s" msgstr "Se ha encontrado una red no exclusiva para name_label %s" msgid "Guest does not have a console available." msgstr "El invitado no tiene una consola disponible." #, python-format msgid "Host %(host)s could not be found." msgstr "No se ha podido encontrar el host %(host)s." #, python-format msgid "Host %(host)s is already mapped to cell %(uuid)s" msgstr "El host %(host)s ya está correlacionado con la celda %(uuid)s" #, python-format msgid "Host '%(name)s' is not mapped to any cell" msgstr "Host '%(name)s' no esta mapeado a ninguna celda" msgid "Host PowerOn is not supported by the Hyper-V driver" msgstr "El controlador Hyper-V no soporta Host PowerOn" msgid "Host aggregate is not empty" msgstr "El agregado de anfitrión no está vacío" msgid "Host does not support guests with NUMA topology set" msgstr "Host no soporta invitados con conjunto de topología NUMA" msgid "Host does not support guests with custom memory page sizes" msgstr "" "Host no soporta invitados con tamaños de página de memoria perzonalizados" msgid "Host startup on XenServer is not supported." msgstr "No se soporta el arranque de host en XenServer." 
msgid "Hypervisor driver does not support post_live_migration_at_source method" msgstr "" "El controlador del hipervisor no soporta método post_live_migration_at_source" #, python-format msgid "Hypervisor virt type '%s' is not valid" msgstr "El tipo virtual de hipervisor '%s' no es válido" #, python-format msgid "Hypervisor virtualization type '%(hv_type)s' is not recognised" msgstr "No se reconoce el tipo de virtualización de hipervisor '%(hv_type)s'" #, python-format msgid "Hypervisor with ID '%s' could not be found." msgstr "El hipervisor con el ID '%s' no se ha podido encontrar. " #, python-format msgid "IP allocation over quota in pool %s." msgstr "La asignación IP excede la capacidad en pool %s." msgid "IP allocation over quota." msgstr "La asignación IP excede la capacidad." #, python-format msgid "Image %(image_id)s could not be found." msgstr "No se ha podido encontrar la imagen %(image_id)s. " #, python-format msgid "Image %(image_id)s is not active." msgstr "La imagen %(image_id)s no está activa." #, python-format msgid "Image %(image_id)s is unacceptable: %(reason)s" msgstr "La imagen %(image_id)s es inaceptable: %(reason)s" msgid "Image disk size greater than requested disk size" msgstr "La imagen de disco es más grande que el tamaño del disco solicitado" msgid "Image is not raw format" msgstr "La imagen no tiene formato original" msgid "Image metadata limit exceeded" msgstr "Se ha superado el límite de metadatos de imágenes" #, python-format msgid "Image model '%(image)s' is not supported" msgstr "No se soporta modelo de imagen '%(image)s'" msgid "Image not found." msgstr "Imagen no encontrada." #, python-format msgid "" "Image property '%(name)s' is not permitted to override NUMA configuration " "set against the flavor" msgstr "" "No se permite que la propiedad de imagen '%(name)s' elimine conjunto de " "configuración NUMA relativo al tipo" msgid "" "Image property 'hw_cpu_policy' is not permitted to override CPU pinning " "policy set against the flavor" msgstr "" "No se permite que la propiedad de imagen 'hw_cpu_policy' elimine conjunto de " "política de anclaje de CPU relativo al tipo" msgid "" "Image property 'hw_cpu_thread_policy' is not permitted to override CPU " "thread pinning policy set against the flavor" msgstr "" "No se permite que la propiedad de imagen 'hw_cpu_thread_policy' sustituya la " "política de anclaje de hebras de CPU definida para este tipo" msgid "Image that the instance was started with could not be found." msgstr "No se ha podido encontrar la imagen con la que se lanzó la instancia." #, python-format msgid "Image's config drive option '%(config_drive)s' is invalid" msgstr "" "Opción del controlador para la configuración de imagen '%(config_drive)s' " "no es válida." msgid "" "Images with destination_type 'volume' need to have a non-zero size specified" msgstr "" "Las imágenes con destination_type 'colume? necesitan tener un tamaño " "especificado diferente a cero" msgid "In ERROR state" msgstr "En estado de ERROR " #, python-format msgid "In states %(vm_state)s/%(task_state)s, not RESIZED/None" msgstr "En los estados %(vm_state)s/%(task_state)s, no REDIMENSIONADO/Ninguno" #, python-format msgid "In-progress live migration %(id)s is not found for server %(uuid)s." msgstr "" "La migración en vivo en curso %(id)s no se encuentra para el servidor " "%(uuid)s." msgid "" "Incompatible settings: ephemeral storage encryption is supported only for " "LVM images." 
msgstr "" "Configuraciones incompatibles: cifrado de almacenamiento efímero solo es " "soportado por imágenes LVM." #, python-format msgid "Info cache for instance %(instance_uuid)s could not be found." msgstr "" "No se ha podido encontrar la memoria caché de información para la instancia " "%(instance_uuid)s." #, python-format msgid "" "Instance %(instance)s and volume %(vol)s are not in the same " "availability_zone. Instance is in %(ins_zone)s. Volume is in %(vol_zone)s" msgstr "" "Instancia %(instance)s y volumen %(vol)s no están en la misma " "availability_zone. Instancia está en %(ins_zone)s. Volumen está en " "%(vol_zone)s" #, python-format msgid "Instance %(instance)s does not have a port with id %(port)s" msgstr "Instancia %(instance)s no tiene un puerto identificado como %(port)s" #, python-format msgid "Instance %(instance_id)s cannot be rescued: %(reason)s" msgstr "La instancia %(instance_id)s no se puede rescatar: %(reason)s." #, python-format msgid "Instance %(instance_id)s could not be found." msgstr "No se ha podido encontrar la instancia %(instance_id)s." #, python-format msgid "Instance %(instance_id)s has no tag '%(tag)s'" msgstr "Instancia %(instance_id)s no tiene etiqueta '%(tag)s'" #, python-format msgid "Instance %(instance_id)s is not in rescue mode" msgstr "La instancia %(instance_id)s no esta en modo de rescate" #, python-format msgid "Instance %(instance_id)s is not ready" msgstr "La instancia %(instance_id)s no está preparada" #, python-format msgid "Instance %(instance_id)s is not running." msgstr "La instacia %(instance_id)s no se esta ejecutando" #, python-format msgid "Instance %(instance_id)s is unacceptable: %(reason)s" msgstr "La instancia %(instance_id)s no es aceptable: %(reason)s" #, python-format msgid "Instance %(instance_uuid)s does not specify a NUMA topology" msgstr "La instancia %(instance_uuid)s no especifica una topología NUMA" #, python-format msgid "Instance %(instance_uuid)s does not specify a migration context." msgstr "La instancia %(instance_uuid)s no especifica un contexto de migración." #, python-format msgid "" "Instance %(instance_uuid)s in %(attr)s %(state)s. Cannot %(method)s while " "the instance is in this state." msgstr "" "La instancia %(instance_uuid)s está en %(attr)s %(state)s. No se puede " "%(method)s mientras la instancia está en este estado." #, python-format msgid "Instance %(instance_uuid)s is locked" msgstr "La instancia %(instance_uuid)s está bloqueada" #, python-format msgid "" "Instance %(instance_uuid)s requires config drive, but it does not exist." msgstr "" "La instancia %(instance_uuid)s requiere una unidad de configuración, pero no " "existe." #, python-format msgid "Instance %(name)s already exists." msgstr "La instancia %(name)s ya existe." #, python-format msgid "Instance %(server_id)s is in an invalid state for '%(action)s'" msgstr "" "La instancia %(server_id)s se encuentra en un estado no válido para " "'%(action)s'" #, python-format msgid "Instance %(uuid)s has no mapping to a cell." msgstr "Instancia %(uuid)s no tiene mapeo para una celda." 
#, python-format msgid "Instance %s not found" msgstr "No se ha encontrado la instancia %s" #, python-format msgid "Instance %s provisioning was aborted" msgstr "Se ha abortado la provisión de instancia %s" msgid "Instance could not be found" msgstr "No se ha podido encontrar la instancia" msgid "Instance disk to be encrypted but no context provided" msgstr "Se encriptará disco de instancia ero no se ha proporcionado contexto" msgid "Instance event failed" msgstr "El evento de instancia ha fallado" #, python-format msgid "Instance group %(group_uuid)s already exists." msgstr "El grupo de instancias %(group_uuid)s ya existe." #, python-format msgid "Instance group %(group_uuid)s could not be found." msgstr "No se ha podido encontrar el grupo de instancias %(group_uuid)s." msgid "Instance has no source host" msgstr "La instancia no tiene ningún host de origen" msgid "Instance has not been resized." msgstr "La instancia no se ha redimensionado." #, python-format msgid "Instance hostname %(hostname)s is not a valid DNS name" msgstr "El nombre de host de instancia %(hostname)s no es un nombre DNS válido" #, python-format msgid "Instance is already in Rescue Mode: %s" msgstr "La instancia ya está en modalidad de rescate: %s " msgid "Instance is not a member of specified network" msgstr "La instancia no es miembro de la red especificada" #, python-format msgid "Instance rollback performed due to: %s" msgstr "Reversión de instancia ejecutada debido a: %s" #, python-format msgid "" "Insufficient Space on Volume Group %(vg)s. Only %(free_space)db available, " "but %(size)d bytes required by volume %(lv)s." msgstr "" "Espacio insuficiente en grupo de volumen %(vg)s. Sólo hay %(free_space)db " "disponibles, pero se necesitan %(size)d bytes para el volumen %(lv)s." #, python-format msgid "Insufficient compute resources: %(reason)s." msgstr "Recursos de cómputo insuficientes: %(reason)s." #, python-format msgid "Insufficient free memory on compute node to start %(uuid)s." msgstr "" "No hay suficiente memoria libre en el nodo de cálculo para iniciar %(uuid)s." #, python-format msgid "Interface %(interface)s not found." msgstr "No se ha encontrado la interfaz %(interface)s." #, python-format msgid "Invalid Base 64 data for file %(path)s" msgstr "Datos Base-64 inválidos para el archivo %(path)s" msgid "Invalid Connection Info" msgstr "Información de conexión no válida" #, python-format msgid "Invalid ID received %(id)s." msgstr "Se ha recibido el ID %(id)s no válido." #, python-format msgid "Invalid IP format %s" msgstr "Formato IP inválido %s" #, python-format msgid "Invalid IP protocol %(protocol)s." msgstr "Protocolo IP invalido %(protocol)s" msgid "" "Invalid PCI Whitelist: The PCI whitelist can specify devname or address, but " "not both" msgstr "" "Lista blanca de PCI no válida: La lista blanca de PCI puede especificar un " "devname o una dirección, pero no ambas" #, python-format msgid "Invalid PCI alias definition: %(reason)s" msgstr "Definición de alias PCI inválido: %(reason)s" #, python-format msgid "Invalid Regular Expression %s" msgstr "La expresión regular %s es inválida" #, python-format msgid "Invalid characters in hostname '%(hostname)s'" msgstr "Caracteres invalidos en el nombre del host '%(hostname)s'" msgid "Invalid config_drive provided." msgstr "La config_drive proporcionada es inválida." 
#, python-format msgid "Invalid config_drive_format \"%s\"" msgstr "config_drive_format \"%s\" no válido" #, python-format msgid "Invalid console type %(console_type)s" msgstr "Tipo de consola %(console_type)s no válido " #, python-format msgid "Invalid content type %(content_type)s." msgstr "Tipo de contenido invalido %(content_type)s." #, python-format msgid "Invalid datetime string: %(reason)s" msgstr "Cadena date time invalida: %(reason)s" msgid "Invalid device UUID." msgstr "Dispositivo UUID invalido." #, python-format msgid "Invalid entry: '%s'" msgstr "Entrada no válida: '%s' " #, python-format msgid "Invalid entry: '%s'; Expecting dict" msgstr "Entrada no válida: '%s'; Esperando dict" #, python-format msgid "Invalid entry: '%s'; Expecting list or dict" msgstr "Entrada no válida: '%s'; esperando lista o dict" #, python-format msgid "Invalid exclusion expression %r" msgstr "Expresión de exclusión inválida %r" #, python-format msgid "Invalid image format '%(format)s'" msgstr "formato de imagen no válido '%(format)s' " #, python-format msgid "Invalid image href %(image_href)s." msgstr "href de imagen %(image_href)s no válida." #, python-format msgid "Invalid inclusion expression %r" msgstr "Expresión de inclusión inválida %r" #, python-format msgid "" "Invalid input for field/attribute %(path)s. Value: %(value)s. %(message)s" msgstr "" "Conenido inválido para el campo/atributo %(path)s. Valor: %(value)s. " "%(message)s" #, python-format msgid "Invalid input received: %(reason)s" msgstr "Entrada inválida recibida: %(reason)s" msgid "Invalid instance image." msgstr "Imagen de instancia no válida." #, python-format msgid "Invalid is_public filter [%s]" msgstr "Filtro is_public no válido [%s]" msgid "Invalid key_name provided." msgstr "Se ha proporcionado un nombre de clave no válido." #, python-format msgid "Invalid memory page size '%(pagesize)s'" msgstr "Tamaño de página de memoria no válido '%(pagesize)s'" msgid "Invalid metadata key" msgstr "Clave de metadatos no válida" #, python-format msgid "Invalid metadata size: %(reason)s" msgstr "Tamaño de metadatos inválido: %(reason)s" #, python-format msgid "Invalid metadata: %(reason)s" msgstr "Metadatos inválidos: %(reason)s" #, python-format msgid "Invalid minDisk filter [%s]" msgstr "Filtro minDisk no válido [%s]" #, python-format msgid "Invalid minRam filter [%s]" msgstr "Filtro minRam no válido [%s]" #, python-format msgid "Invalid port range %(from_port)s:%(to_port)s. %(msg)s" msgstr "Rango de puertos invalido %(from_port)s:%(to_port)s. %(msg)s" msgid "Invalid proxy request signature." msgstr "Firma de solicitud de proxy no válida." #, python-format msgid "Invalid range expression %r" msgstr "Expresión de intérvalo inválida %r" msgid "Invalid service catalog json." msgstr "JSON de catálogo de servicios no válido." msgid "Invalid start time. The start time cannot occur after the end time." msgstr "" "Hora de inicio no válida. La hora de inicio no pude tener lugar después de " "la hora de finalización." 
msgid "Invalid state of instance files on shared storage" msgstr "Estado no válido de archivos de instancia en almacenamiento compartido" #, python-format msgid "Invalid timestamp for date %s" msgstr "Indicación de fecha y hora no válida para la fecha %s" #, python-format msgid "Invalid usage_type: %s" msgstr "usage_type: %s no válido" #, python-format msgid "Invalid value for Config Drive option: %(option)s" msgstr "" "Valor inválido para la opción de configuración de controlador: %(option)s" #, python-format msgid "Invalid virtual interface address %s in request" msgstr "Dirección de interfaz virtual inválida %s en la solicitud" #, python-format msgid "Invalid volume access mode: %(access_mode)s" msgstr "Modo de acceso al volumen invalido: %(access_mode)s" #, python-format msgid "Invalid volume: %(reason)s" msgstr "Volumen inválido: %(reason)s" msgid "Invalid volume_size." msgstr "volume_size invalido." #, python-format msgid "Ironic node uuid not supplied to driver for instance %s." msgstr "" "No se ha proporcionado nodo uuid Ironic para controlador de instancia %s." #, python-format msgid "" "It is not allowed to create an interface on external network %(network_uuid)s" msgstr "" "No está permitido crear una interfaz en una red externa %(network_uuid)s" #, python-format msgid "" "Kernel/Ramdisk image is too large: %(vdi_size)d bytes, max %(max_size)d bytes" msgstr "" "La imagen de kernel/disco RAM es demasiado grande: %(vdi_size)d bytes, máx. " "%(max_size)d bytes" msgid "" "Key Names can only contain alphanumeric characters, periods, dashes, " "underscores, colons and spaces." msgstr "" "Los nombres de las claves solo pueden contener caracteres alfanuméricos, " "punto, guión, guión bajo, dos puntos y espacios." #, python-format msgid "Key manager error: %(reason)s" msgstr "error de administrador de claves: %(reason)s" #, python-format msgid "Key pair '%(key_name)s' already exists." msgstr "El par de claves '%(key_name)s' ya existe." #, python-format msgid "Keypair %(name)s not found for user %(user_id)s" msgstr "" "No se ha encontrado el par de claves %(name)s para el usuario %(user_id)s" #, python-format msgid "Keypair data is invalid: %(reason)s" msgstr "El conjunto de claves son inválidos: %(reason)s" msgid "Keypair name contains unsafe characters" msgstr "El nombre de par de claves contiene caracteres no seguros" msgid "Keypair name must be string and between 1 and 255 characters long" msgstr "" "El nombre de par de claves debe ser serial y contener de 1 a 255 caracteres" msgid "Limits only supported from vCenter 6.0 and above" msgstr "Sólo se admiten límites a partir de vCenter 6.0 " #, python-format msgid "Live migration %(id)s for server %(uuid)s is not in progress." msgstr "" "La migración en vivo %(id)s para el servidor %(uuid)s no está en curso." #, python-format msgid "Malformed message body: %(reason)s" msgstr "Cuerpo de mensaje con formato incorrecto: %(reason)s" #, python-format msgid "" "Malformed request URL: URL's project_id '%(project_id)s' doesn't match " "Context's project_id '%(context_project_id)s'" msgstr "" "Solicitud URL incorrecta: el project_id de la URL '%(project_id)s' no " "corresponde con el project_id del Contexto '%(context_project_id)s'" msgid "Malformed request body" msgstr "Cuerpo de solicitud incorrecto" msgid "Mapping image to local is not supported." msgstr "No se soporta el mapeo de imagen a local." #, python-format msgid "Marker %(marker)s could not be found." msgstr "No se ha podido encontrar el marcador %(marker)s." 
msgid "Maximum number of floating IPs exceeded" msgstr "Se ha superado el número máximo de IP flotantes" msgid "Maximum number of key pairs exceeded" msgstr "Se ha superado el número máximo de pares de claves" #, python-format msgid "Maximum number of metadata items exceeds %(allowed)d" msgstr "El número máximo de elementos de metadatos supera %(allowed)d" msgid "Maximum number of ports exceeded" msgstr "El número máximo de puertos ha sido excedido." msgid "Maximum number of security groups or rules exceeded" msgstr "Se ha superado el número máximo de grupos o reglas de seguridad" msgid "Metadata item was not found" msgstr "No se ha encontrado el elemento metadatos" msgid "Metadata property key greater than 255 characters" msgstr "Clave de propiedad metadatos de más de 255 caracteres " msgid "Metadata property value greater than 255 characters" msgstr "Valor de propiedad de metadatos de más de 255 caracteres " msgid "Metadata type should be dict." msgstr "El tipo de metadato debería ser dict." #, python-format msgid "" "Metric %(name)s could not be found on the compute host node %(host)s." "%(node)s." msgstr "" "La métrica %(name)s no se puede encontrar en el nodo de cómputo anfitrión " "%(host)s:%(node)s." msgid "Migrate Receive failed" msgstr "Ha fallado la recepción de migración" msgid "Migrate Send failed" msgstr "Ha fallado el envío de migración" #, python-format msgid "Migration %(id)s for server %(uuid)s is not live-migration." msgstr "" "La migración %(id)s para el servidor %(uuid)s no es una migración en vivo." #, python-format msgid "Migration %(migration_id)s could not be found." msgstr "No se ha podido encontrar la migración %(migration_id)s." #, python-format msgid "Migration %(migration_id)s not found for instance %(instance_id)s" msgstr "" "No se ha encontrado la migración %(migration_id)s para la instancia " "%(instance_id)s" #, python-format msgid "" "Migration %(migration_id)s state of instance %(instance_uuid)s is %(state)s. " "Cannot %(method)s while the migration is in this state." msgstr "" "El estado de la migración %(migration_id)s de la instancia " "%(instance_uuid)s es %(state)s. No se puede %(method)s mientras la instancia " "está en este estado." #, python-format msgid "Migration error: %(reason)s" msgstr "Error en migración: %(reason)s" msgid "Migration is not supported for LVM backed instances" msgstr "No se soporta la migración de instancias LVM respaldadas" #, python-format msgid "" "Migration not found for instance %(instance_id)s with status %(status)s." msgstr "" "No se ha encontrado la migración para la instancia %(instance_id)s con el " "estado %(status)s." #, python-format msgid "Migration pre-check error: %(reason)s" msgstr "Error de pre-verificación de migraión: %(reason)s" #, python-format msgid "Migration select destinations error: %(reason)s" msgstr "Error de selección de destinos de migración: %(reason)s" #, python-format msgid "Missing arguments: %s" msgstr "Faltan argumentos: %s" #, python-format msgid "Missing column %(table)s.%(column)s in shadow table" msgstr "Columna omitida %(table)s.%(column)s en la tabla de shadow" msgid "Missing device UUID." msgstr "Dispositivo UUID perdido." msgid "Missing disabled reason field" msgstr "Campo disabled reason omitido." msgid "Missing forced_down field" msgstr "Campo forced_down no presente." 
msgid "Missing imageRef attribute" msgstr "Falta el atributo imageRef" #, python-format msgid "Missing keys: %s" msgstr "Faltan claves: %s" #, python-format msgid "Missing parameter %s" msgstr "Falta parámetro %s" msgid "Missing parameter dict" msgstr "Falta el parámetro dict " #, python-format msgid "" "More than one instance is associated with fixed IP address '%(address)s'." msgstr "" "Hay más de una instancia asociada con la dirección IP fija '%(address)s'." msgid "" "More than one possible network found. Specify network ID(s) to select which " "one(s) to connect to." msgstr "" "Se ha encontrado más de una red posible. Especifique ID(s) de la red para " "seleccionar a cuál(es) conectarse." msgid "More than one swap drive requested." msgstr "Más de un controlador de intercambio ha sido solicitado." #, python-format msgid "Multi-boot operating system found in %s" msgstr "Se ha encontrado sistema operativo multiarranque en %s" msgid "Multiple X-Instance-ID headers found within request." msgstr "" "Se han encontrado varias cabeceas de ID de instancia X en la solicitud." msgid "Multiple X-Tenant-ID headers found within request." msgstr "Se han encontrado múltiples cabeceras X-Tenant-ID en la solicitud." #, python-format msgid "Multiple floating IP pools matches found for name '%s'" msgstr "" "Se han encontrado varias coincidencias de agrupaciones de IP flotante para " "el nombre '%s' " #, python-format msgid "Multiple floating IPs are found for address %(address)s." msgstr "Se han encontrado varias IP flotantes para la dirección %(address)s." msgid "" "Multiple hosts may be managed by the VMWare vCenter driver; therefore we do " "not return uptime for just one host." msgstr "" "Múltiples anfitrionespueden ser administrados por el controlador de vCenter " "de VMware; por lo tanto no se puede regresar tiempo de ejecución solamente " "para un huésped." msgid "Multiple possible networks found, use a Network ID to be more specific." msgstr "" "Se han encontrado múltiples redes posibles, usa un identificador de red para " "ser más específico." #, python-format msgid "" "Multiple security groups found matching '%s'. Use an ID to be more specific." msgstr "" "Se han encontrado varios grupos de seguridad que coinciden con '%s'. Utilice " "un ID para ser más específico." msgid "Must input network_id when request IP address" msgstr "Se debe ingresar a network_id cuando se solicite dirección IP" msgid "Must not input both network_id and port_id" msgstr "No se debe ingresar ni a network_id ni a port_id" msgid "" "Must specify connection_url, connection_username (optionally), and " "connection_password to use compute_driver=xenapi.XenAPIDriver" msgstr "" "Se debe especificar connection_url, connection_username (opcionalmente, y " "connection_password para utilizar compute_driver=xenapi.XenAPIDriver" msgid "" "Must specify host_ip, host_username and host_password to use vmwareapi." "VMwareVCDriver" msgstr "" "Debe especificar host_ip, host_username y host_password para usar vmwareapi." "VMwareVCDriver" msgid "Must supply a positive value for max_number" msgstr "Debe indicar un valor positivo para max_number" msgid "Must supply a positive value for max_rows" msgstr "Se debe proporcionar un valor positivo para max_rows" #, python-format msgid "Network %(network_id)s could not be found." msgstr "No se ha podido encontrar la red %(network_id)s." #, python-format msgid "" "Network %(network_uuid)s requires a subnet in order to boot instances on." 
msgstr "" "La red %(network_uuid)s requiere una subred para poder arrancar instancias." #, python-format msgid "Network could not be found for bridge %(bridge)s" msgstr "No se ha podido encontrar la red para el puente %(bridge)s" #, python-format msgid "Network could not be found for instance %(instance_id)s." msgstr "No se ha podido encontrar la red para la instancia %(instance_id)s." msgid "Network not found" msgstr "No se ha encontrado la red" msgid "" "Network requires port_security_enabled and subnet associated in order to " "apply security groups." msgstr "" "La red requiere port_security_enabled y una subred asociada para aplicar " "grupos de seguridad." msgid "New volume must be detached in order to swap." msgstr "" "El nuevo volumen debe ser desasociado para poder activar la memoria de " "intercambio." msgid "New volume must be the same size or larger." msgstr "El nuevo volumen debe ser del mismo o de mayor tamaño." #, python-format msgid "No Block Device Mapping with id %(id)s." msgstr "No hay mapeo de dispositivo de bloque identificado como %(id)s." msgid "No Unique Match Found." msgstr "No se ha encontrado una sola coincidencia." #, python-format msgid "No agent-build associated with id %(id)s." msgstr "No hay ninguna compilación de agente asociada con el id %(id)s." msgid "No compute host specified" msgstr "No se ha especificado ningún host de cálculo" #, python-format msgid "No configuration information found for operating system %(os_name)s" msgstr "" "No se ha encontrado ninguna información de configuración para el sistema " "operativo %(os_name)s" #, python-format msgid "No device with MAC address %s exists on the VM" msgstr "No existe dispositivo con dirección MAC %s en la VM" #, python-format msgid "No device with interface-id %s exists on VM" msgstr "No existe dispositivo con identificación de interfaz %s en VM" #, python-format msgid "No disk at %(location)s" msgstr "No hay ningún disco en %(location)s" #, python-format msgid "No fixed IP addresses available for network: %(net)s" msgstr "No hay dirección IP fija disponibles para red: %(net)s. " msgid "No fixed IPs associated to instance" msgstr "No hay IP fijas asociadas a la instancia " msgid "No free nbd devices" msgstr "No hay dispositivos nbd libres" msgid "No host available on cluster" msgstr "No hay anfitrión disponible en cluster." msgid "No hosts found to map to cell, exiting." msgstr "" "No se ha encontrado ningún host para correlacionar con la celda, saliendo." #, python-format msgid "No hypervisor matching '%s' could be found." msgstr "No es ha podido encontrar ningún hipervisor que coincida con '%s'. " msgid "No image locations are accessible" msgstr "No hay ubicaciones de imagen accesibles" #, python-format msgid "" "No live migration URI configured and no default available for \"%(virt_type)s" "\" hypervisor virtualization type." msgstr "" "No se ha configurado ninguna URI de migración en vivo ni hay ninguna " "predeterminada disponible para el tipo de virtualización de hipervisor " "\"%(virt_type)s\"." msgid "No more floating IPs available." msgstr "No hay más IP flotantes disponibles." #, python-format msgid "No more floating IPs in pool %s." msgstr "No hay más IP flotantes en la agrupación %s." 
#, python-format msgid "No mount points found in %(root)s of %(image)s" msgstr "No se han encontrado puntos de montaje en %(root)s de %(image)s" #, python-format msgid "No operating system found in %s" msgstr "No se ha encontrado ningún sistema operativo en %s" #, python-format msgid "No primary VDI found for %s" msgstr "No se ha encontrado VDI primario para %s" msgid "No root disk defined." msgstr "No se ha definido un disco raíz." #, python-format msgid "" "No specific network was requested and none are available for project " "'%(project_id)s'." msgstr "" "No se ha solicitado ninguna red específica y no hay ninguna disponible para " "el proyecto '%(project_id)s'." msgid "No suitable network for migrate" msgstr "No hay red adecuada para migrar" msgid "No valid host found for cold migrate" msgstr "No se ha encontrado anfitrión para migración en frío" msgid "No valid host found for resize" msgstr "No se ha encontrado un host válido para redimensionamiento" #, python-format msgid "No valid host was found. %(reason)s" msgstr "No se ha encontrado ningún host válido. %(reason)s" #, python-format msgid "No volume Block Device Mapping at path: %(path)s" msgstr "No hay mapeo de volumen de dispositivo de bloque en ruta: %(path)s" #, python-format msgid "No volume Block Device Mapping with id %(volume_id)s." msgstr "" "No hay volumen de Block Device Mapping con identificador %(volume_id)s." #, python-format msgid "Node %s could not be found." msgstr "No se puede encontrar nodo %s." #, python-format msgid "Not able to acquire a free port for %(host)s" msgstr "No se puede obtener un puerto libre para %(host)s" #, python-format msgid "Not able to bind %(host)s:%(port)d, %(error)s" msgstr "No se puede enlazar %(host)s:%(port)d, %(error)s" #, python-format msgid "" "Not all Virtual Functions of PF %(compute_node_id)s:%(address)s are free." msgstr "" "No todas las Funciones Virtuales de la PF %(compute_node_id)s:%(address)s " "son gratuitas." msgid "Not an rbd snapshot" msgstr "No es una instantánea rbd" #, python-format msgid "Not authorized for image %(image_id)s." msgstr "No está autorizado para la imagen %(image_id)s." msgid "Not authorized." msgstr "No Autorizado" msgid "Not enough parameters to build a valid rule." msgstr "No hay suficientes parámetros para crear una regla válida." msgid "Not implemented on Windows" msgstr "No implementado en Windows" msgid "Not stored in rbd" msgstr "No está almacenado en rbd" msgid "Nothing was archived." msgstr "No se ha archivado nada." #, python-format msgid "Nova requires libvirt version %s or greater." msgstr "Nova requiere versión libvirt %s o mayor." msgid "Number of Rows Archived" msgstr "Número de filas archivado" #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "La acción objeto %(action)s falló debido a: %(reason)s" msgid "Old volume is attached to a different instance." msgstr "Volumen antigüo está ligado a una instancia diferente." #, python-format msgid "One or more hosts already in availability zone(s) %s" msgstr "Uno o más hosts ya se encuentran en zona(s) de disponibilidad %s" msgid "Only administrators may list deleted instances" msgstr "Sólo los administradores pueden listar instancias suprimidas " #, python-format msgid "" "Only file-based SRs (ext/NFS) are supported by this feature. SR %(uuid)s is " "of type %(type)s" msgstr "" "Solo los SRs basados en archivo (ext/NFS) están soportados por esta " "característica. SR %(uuid)s es del tipo %(type)s" msgid "Origin header does not match this host." 
msgstr "Cabecera de origen no coincide con este host." msgid "Origin header not valid." msgstr "Cabecera de origen no válida" msgid "Origin header protocol does not match this host." msgstr "Protocolo de cabecera de origen no coincide con este host." #, python-format msgid "PCI Device %(node_id)s:%(address)s not found." msgstr "Dispositivo PCI %(node_id)s:%(address)s no encontrado." #, python-format msgid "PCI alias %(alias)s is not defined" msgstr "Alias PCI %(alias)s no definido" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is %(status)s instead of " "%(hopestatus)s" msgstr "" "el dispositivo PCI %(compute_node_id)s:%(address)s está %(status)s en lugar " "de %(hopestatus)s" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is owned by %(owner)s instead of " "%(hopeowner)s" msgstr "" "El dueño del dispositivo PCI %(compute_node_id)s:%(address)s es %(owner)s en " "lugar de %(hopeowner)s" #, python-format msgid "PCI device %(id)s not found" msgstr "Dispositivo PCI %(id)s no encontrado" #, python-format msgid "PCI device request %(requests)s failed" msgstr "La solicitud de dispositivo PCI %(requests)s ha fallado" #, python-format msgid "PIF %s does not contain IP address" msgstr "PIC %s no contiene una dirección IP" #, python-format msgid "Page size %(pagesize)s forbidden against '%(against)s'" msgstr "El tamaño de página %(pagesize)s no es permitido por '%(against)s'" #, python-format msgid "Page size %(pagesize)s is not supported by the host." msgstr "El host no soporta el tamaño de página %(pagesize)s." #, python-format msgid "" "Parameters %(missing_params)s not present in vif_details for vif %(vif_id)s. " "Check your Neutron configuration to validate that the macvtap parameters are " "correct." msgstr "" "Los parámetros %(missing_params)s no están disponibles en vif_details para " "vif %(vif_id)s. Refiérese a la configuration de Neutron para averificar que " "los parámetros de macvtap son correctos." #, python-format msgid "Path %s must be LVM logical volume" msgstr "La vía de acceso %s debe ser el volumen lógico LVM" msgid "Paused" msgstr "Pausada" msgid "Personality file limit exceeded" msgstr "Se ha superado el límite de archivo de personalidad" #, python-format msgid "" "Physical Function %(compute_node_id)s:%(address)s, related to VF " "%(compute_node_id)s:%(vf_address)s is %(status)s instead of %(hopestatus)s" msgstr "" "La Función Física %(compute_node_id)s:%(address)s, relacionada con la VF " "%(compute_node_id)s:%(vf_address)s tiene el estado %(status)s en lugar de " "%(hopestatus)s" #, python-format msgid "Physical network is missing for network %(network_uuid)s" msgstr "La red física no esta disponible para la red %(network_uuid)s" #, python-format msgid "Policy doesn't allow %(action)s to be performed." msgstr "La política no permite que la %(action)s se realice" #, python-format msgid "Port %(port_id)s is still in use." msgstr "El puerto %(port_id)s todavía se está utilizando." #, python-format msgid "Port %(port_id)s not usable for instance %(instance)s." msgstr "El puerto %(port_id)s no es utilizable para la instancia %(instance)s." #, python-format msgid "" "Port %(port_id)s not usable for instance %(instance)s. Value %(value)s " "assigned to dns_name attribute does not match instance's hostname " "%(hostname)s" msgstr "" "La instancia %(instance)s no puede utilizar el puerto %(port_id)s. 
El valor " "%(value)s asignado al atributo dns_name no coincide con el nombre de host de " "la instancia %(hostname)s" #, python-format msgid "Port %(port_id)s requires a FixedIP in order to be used." msgstr "El puerto %(port_id)s requiere una FixedIP para poder ser utilizado." #, python-format msgid "Port %s is not attached" msgstr "El puerto %s no se encuentra asignado" #, python-format msgid "Port id %(port_id)s could not be found." msgstr "No se ha podido encontrar el ID de puerto %(port_id)s." #, python-format msgid "Provided video model (%(model)s) is not supported." msgstr "Modelo de vídeo proporcionado (%(model)s) no está sopotado." #, python-format msgid "Provided watchdog action (%(action)s) is not supported." msgstr "La acción watchdog proporcionada (%(action)s) no está soportada." msgid "QEMU guest agent is not enabled" msgstr "Agente invitado QEMU no está habilitado" #, python-format msgid "Quiescing is not supported in instance %(instance_id)s" msgstr "No hay soporte para la desactivación en la instancia %(instance_id)s" #, python-format msgid "Quota class %(class_name)s could not be found." msgstr "No se ha encontrado la clase de cuota %(class_name)s." msgid "Quota could not be found" msgstr "No se ha podido encontrar la cuota" #, python-format msgid "" "Quota exceeded for %(overs)s: Requested %(req)s, but already used %(used)s " "of %(allowed)s %(overs)s" msgstr "" "Se ha superado la cuota para %(overs)s: Solicitado %(req)s, pero ya se " "utiliza %(used)s de %(allowed)s %(overs)s" #, python-format msgid "Quota exceeded for resources: %(overs)s" msgstr "Cuota superada para recursos: %(overs)s" msgid "Quota exceeded, too many key pairs." msgstr "Cuota superada, demasiados pares de claves." msgid "Quota exceeded, too many server groups." msgstr "Capacidad superada, demasiados grupos servidores. " msgid "Quota exceeded, too many servers in group" msgstr "Capacidad excedida, demasiados servidores en grupo" #, python-format msgid "Quota exceeded: code=%(code)s" msgstr "Cuota excedida: código=%(code)s" #, python-format msgid "Quota exists for project %(project_id)s, resource %(resource)s" msgstr "Cuota existente para el proyecto %(project_id)s, recurso %(resource)s" #, python-format msgid "Quota for project %(project_id)s could not be found." msgstr "No se ha encontrado la cuota para el proyecto %(project_id)s." #, python-format msgid "" "Quota for user %(user_id)s in project %(project_id)s could not be found." msgstr "" "No se ha encontrado la cuota para el usuario %(user_id)s en el proyecto " "%(project_id)s." #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be greater than or equal to " "already used and reserved %(minimum)s." msgstr "" "Capacidad límite %(limit)s para %(resource)s debe ser mayor o igual que la " "utilizada o en reserva %(minimum)s." #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be less than or equal to " "%(maximum)s." msgstr "" "Capacidad límite %(limit)s para %(resource)s debe ser menor o igual que " "%(maximum)s." #, python-format msgid "Reached maximum number of retries trying to unplug VBD %s" msgstr "Se ha alcanzado el número máximo de reintentos de desconectar VBD %s " msgid "" "Realtime policy needs vCPU(s) mask configured with at least 1 RT vCPU and 1 " "ordinary vCPU. See hw:cpu_realtime_mask or hw_cpu_realtime_mask" msgstr "" "La política en tiempo real requiere una máscara de vCPU(s) con al menos 1 RT " "vCPU y una vCPU ordinaria. 
Consulte hw:cpu_realtime_mask o " "hw_cpu_realtime_mask" msgid "Request body and URI mismatch" msgstr "Discrepancia de URI y cuerpo de solicitud" msgid "Request is too large." msgstr "La solicitud es demasiado larga." #, python-format msgid "Request of image %(image_id)s got BadRequest response: %(response)s" msgstr "" "La solicitud de la imagen %(image_id)s ha obtenido una respuesta de " "solicitud incorrecta (BadRequest): %(response)s" #, python-format msgid "RequestSpec not found for instance %(instance_uuid)s" msgstr "" "No se ha encontrado la especificación de solicitud (RequestSpec) para la " "instancia %(instance_uuid)s" msgid "Requested CPU control policy not supported by host" msgstr "El host no da soporte a la política de control de CPU solicitada" #, python-format msgid "" "Requested hardware '%(model)s' is not supported by the '%(virt)s' virt driver" msgstr "" "El hardware solicitado '%(model)s' no está soportado por el controlador de " "virtualización '%(virt)s'" #, python-format msgid "Requested image %(image)s has automatic disk resize disabled." msgstr "" "La imagen solicitada %(image)s tiene desactivada la modificación automática " "de tamaño de disco." msgid "" "Requested instance NUMA topology cannot fit the given host NUMA topology" msgstr "" "La topología de instancia NUMA no es compatible con la topología de host " "NUMA proporcionada" msgid "" "Requested instance NUMA topology together with requested PCI devices cannot " "fit the given host NUMA topology" msgstr "" "La topología de instancia NUMA solicitada junto con los dispositivos PCI " "solicitados no son compatibles con la topología de host NUMA proporcionada" #, python-format msgid "" "Requested vCPU limits %(sockets)d:%(cores)d:%(threads)d are impossible to " "satisfy for vcpus count %(vcpus)d" msgstr "" "Los límites VCPU solicitados %(sockets)d:%(cores)d:%(threads)d son " "imposibles de cumplir para el número de vcpus %(vcpus)d" #, python-format msgid "Rescue device does not exist for instance %s" msgstr "No existe dispositivo de rescate para instancia %s" #, python-format msgid "Resize error: %(reason)s" msgstr "Error de redimensionamiento: %(reason)s" msgid "Resize to zero disk flavor is not allowed." msgstr "No se permite redimensionamiento a tipo cero del disco." msgid "Resource could not be found." msgstr "No se ha podido encontrar el recurso." msgid "Resumed" msgstr "Reanudada" #, python-format msgid "Root element name should be '%(name)s' not '%(tag)s'" msgstr "El nombre de elemento raíz debe ser '%(name)s' no '%(tag)s'" #, python-format msgid "Running batches of %i until complete" msgstr "Ejecutando lotes de %i hasta finalizar" #, python-format msgid "Scheduler Host Filter %(filter_name)s could not be found." msgstr "" "No se ha podido encontrar el filtro de host de planificador %(filter_name)s." #, python-format msgid "Security group %(name)s is not found for project %(project)s" msgstr "" "El grupo de seguridad %(name)s no ha sido encontrado para el proyecto " "%(project)s" #, python-format msgid "" "Security group %(security_group_id)s not found for project %(project_id)s." msgstr "" "No se ha encontrado el grupo de seguridad %(security_group_id)s para el " "proyecto %(project_id)s." #, python-format msgid "Security group %(security_group_id)s not found." msgstr "No se ha encontrado el grupo de seguridad %(security_group_id)s." #, python-format msgid "" "Security group %(security_group_name)s already exists for project " "%(project_id)s." 
msgstr "" "El grupo de seguridad %(security_group_name)s ya existe para el proyecto " "%(project_id)s" #, python-format msgid "" "Security group %(security_group_name)s not associated with the instance " "%(instance)s" msgstr "" "El grupo de seguridad %(security_group_name)s no está asociado a la " "instancia %(instance)s" msgid "Security group id should be uuid" msgstr "El id de grupo de seguridad debe ser uuid" msgid "Security group name cannot be empty" msgstr "El nombre de grupo de seguridad no puede estar vacío" msgid "Security group not specified" msgstr "Grupo de seguridad no especificado" #, python-format msgid "Server disk was unable to be resized because: %(reason)s" msgstr "El disco del servidor fue incapaz de re-escalarse debido a: %(reason)s" msgid "Server does not exist" msgstr "El servidor no existe" #, python-format msgid "ServerGroup policy is not supported: %(reason)s" msgstr "No se soporta la política ServerGroup: %(reason)s" msgid "ServerGroupAffinityFilter not configured" msgstr "ServerGroupAffinityFilter no configurado" msgid "ServerGroupAntiAffinityFilter not configured" msgstr "ServerGroupAntiAffinityFilter no configurado" msgid "ServerGroupSoftAffinityWeigher not configured" msgstr "No se ha configurado ServerGroupSoftAffinityWeigher" msgid "ServerGroupSoftAntiAffinityWeigher not configured" msgstr "No se ha configurado ServerGroupSoftAntiAffinityWeigher" #, python-format msgid "Service %(service_id)s could not be found." msgstr "No se ha podido encontrar el servicio %(service_id)s." #, python-format msgid "Service %s not found." msgstr "Servicio %s no encontrado." msgid "Service is unavailable at this time." msgstr "El servicio no esta disponible en este momento" #, python-format msgid "Service with host %(host)s binary %(binary)s exists." msgstr "Servicio con host %(host)s binario %(binary)s existe." #, python-format msgid "Service with host %(host)s topic %(topic)s exists." msgstr "Servicio con host %(host)s asunto %(topic)s existe." msgid "Set admin password is not supported" msgstr "No se soporta el establecer de la constraseña del admin" #, python-format msgid "Shadow table with name %(name)s already exists." msgstr "Una Tabla Shadow con nombre %(name)s ya existe." #, python-format msgid "Share '%s' is not supported" msgstr "Compartido %s no está soportado." #, python-format msgid "Share level '%s' cannot have share configured" msgstr "Nivel compartido '%s' no puede tener configurado compartido" msgid "" "Shrinking the filesystem down with resize2fs has failed, please check if you " "have enough free space on your disk." msgstr "" "La reducción del sistema de archivos con resize2fs ha fallado, por favor " "verifica si tienes espacio libre suficiente en tu disco." #, python-format msgid "Snapshot %(snapshot_id)s could not be found." msgstr "No se ha podido encontrar la instantánea %(snapshot_id)s." msgid "Some required fields are missing" msgstr "Algunos campos obligatorios no están rellenos." #, python-format msgid "" "Something went wrong when deleting a volume snapshot: rebasing a " "%(protocol)s network disk using qemu-img has not been fully tested" msgstr "" "Ha habido algún problema al suprimir una instantánea de volumen: no se ha " "probado completamente el proceso de reorganizar un disco de red de " "%(protocol)s utilizando qemu-img" msgid "Sort direction size exceeds sort key size" msgstr "" "El tamaño de dirección de ordenación excede el tamaño de clave de ordenación" msgid "Sort key supplied was not valid." 
msgstr "La clave de clasificación proporcionada no es válida. " msgid "Specified fixed address not assigned to instance" msgstr "Dirección fija especificada no asignada a la instancia" msgid "Specify `table_name` or `table` param" msgstr "Especificar parámetro `table_name` o `table`" msgid "Specify only one param `table_name` `table`" msgstr "Especificar solamente un parámetro `table_name` `table`" msgid "Started" msgstr "Arrancado" msgid "Stopped" msgstr "Se ha detenido" #, python-format msgid "Storage error: %(reason)s" msgstr "Error de almacenamiento: %(reason)s" #, python-format msgid "Storage policy %s did not match any datastores" msgstr "" "La política de almacenamiento %s no coincide con ningún almacén de datos" msgid "Success" msgstr "Éxito" msgid "Suspended" msgstr "Suspendida" msgid "Swap drive requested is larger than instance type allows." msgstr "" "El controlador de intercambio solicitado es más grande que lo que permite la " "instancia." msgid "Table" msgstr "Tabla" #, python-format msgid "Task %(task_name)s is already running on host %(host)s" msgstr "La tarea %(task_name)s ya se está ejecutando en el host %(host)s" #, python-format msgid "Task %(task_name)s is not running on host %(host)s" msgstr "La tarea %(task_name)s no se está ejecutando en el host %(host)s" #, python-format msgid "The PCI address %(address)s has an incorrect format." msgstr "La dirección PCI %(address)s tiene un formato incorrecto." msgid "The backlog must be more than 0" msgstr "El retraso debe ser mayor que 0" #, python-format msgid "The console port range %(min_port)d-%(max_port)d is exhausted." msgstr "" "El puerto de rangos de consola %(min_port)d-%(max_port)d se ha agotado." msgid "The created instance's disk would be too small." msgstr "La capacidad del disco de la instancia creada sería demasiado pequeña." msgid "The current driver does not support preserving ephemeral partitions." msgstr "" "El dispositivo actual no soporta la preservación de particiones efímeras." msgid "The default PBM policy doesn't exist on the backend." msgstr "La política PBM por defecto no existe en el backend." msgid "The floating IP request failed with a BadRequest" msgstr "" "La solicitud de la IP flotante ha fallado con BadRequest (Solicitud " "incorrecta)" msgid "" "The instance requires a newer hypervisor version than has been provided." msgstr "" "La instancia necesita una versión de hipervisor más reciente que la " "proporcionada." #, python-format msgid "The number of defined ports: %(ports)d is over the limit: %(quota)d" msgstr "El número de puertos definidos: %(ports)d es más del límite: %(quota)d" msgid "The only partition should be partition 1." msgstr "La unica partición debe ser la partición 1." #, python-format msgid "The provided RNG device path: (%(path)s) is not present on the host." msgstr "" "La ruta del dispositivo RNG proporcionada: (%(path)s) no está presente en el " "anfitrión." msgid "The request body can't be empty" msgstr "El cuerpo de solicitud no puede estar vacío" msgid "The request is invalid." msgstr "La petición es inválida." #, python-format msgid "" "The requested amount of video memory %(req_vram)d is higher than the maximum " "allowed by flavor %(max_vram)d." msgstr "" "La cantidad solicitada de memoria de vídeo %(req_vram)d es mayor que la " "máxima permitida por el tipo %(max_vram)d." 
msgid "The requested availability zone is not available" msgstr "La zona de disponibilidad solicitada no está disponible" msgid "The requested console type details are not accessible" msgstr "Los detalles del tipo de consola solicitada no son accesibles" msgid "The requested functionality is not supported." msgstr "No se soporta la funcionalidad solicitada." #, python-format msgid "The specified cluster '%s' was not found in vCenter" msgstr "El clúster especificado '%s' no se ha encontrado en vCenter" #, python-format msgid "The supplied device path (%(path)s) is in use." msgstr "La ruta proporcionada al dispositivo (%(path)s) está en uso." #, python-format msgid "The supplied device path (%(path)s) is invalid." msgstr "La ruta proporcionada al dispositivo (%(path)s) no es válida." #, python-format msgid "" "The supplied disk path (%(path)s) already exists, it is expected not to " "exist." msgstr "" "La ruta de disco proporcionada (%(path)s) ya existe, se espera una que no " "exista." msgid "The supplied hypervisor type of is invalid." msgstr "El tipo de hipervisor proporcionado no es válido. " msgid "The target host can't be the same one." msgstr "El host de destino no puede ser el mismo." #, python-format msgid "The token '%(token)s' is invalid or has expired" msgstr "El token '%(token)s' no es válido o ha expirado" #, python-format msgid "" "The volume cannot be assigned the same device name as the root device %s" msgstr "" "No se puede asignar al volumen el mismo nombre de dispositivo del " "dispositivo principal %s" #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. Run this command again with the --delete " "option after you have backed up any necessary data." msgstr "" "Hay %(records)d registros en la '%(table_name)s' tabla donde el uuid o " "columna instance_uuid es NO VÁLIDA. Ejecute de nuevo este comando con la " "opción --eliminar después hacer una copia de seguridad de cualquier " "información necesaria." #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. These must be manually cleaned up before " "the migration will pass. Consider running the 'nova-manage db " "null_instance_uuid_scan' command." msgstr "" "Hay %(records)d registros en la '%(table_name)s' tabla donde el uuid o " "columna instance_uuid es NO VÁLIDA. Estos de deben limpiar manualmente antes " "de autorizar la migración. Considere ejecutar el comando 'nova-manage db " "null_instance_uuid_scan'." msgid "There are not enough hosts available." msgstr "No hay suficientes hosts disponibles." #, python-format msgid "" "There are still %(count)i unmigrated flavor records. Migration cannot " "continue until all instance flavor records have been migrated to the new " "format. Please run `nova-manage db migrate_flavor_data' first." msgstr "" "Aún hay %(count)i registros de tipo sin migrar. La migración no puede " "continuar hasta que todos los registros de tipo de instancia hayan sido " "migradas a un nuevo formato. Por favor ejecute primero la base de datos nova-" "manage migrate_flavor_data'" #, python-format msgid "There is no such action: %s" msgstr "No existe esta acción: %s" msgid "There were no records found where instance_uuid was NULL." msgstr "No se encontraron registros donde instance_uuid era NO VÁLIDA." #, python-format msgid "" "This compute node's hypervisor is older than the minimum supported version: " "%(version)s." 
msgstr "" "El hipervisor de este nodo de cálculo es anterior a la versión mínima " "soportada: %(version)s." msgid "This domU must be running on the host specified by connection_url" msgstr "" "Este domU debe estar en ejecución en el anfitrión especificado por " "connection_url" msgid "" "This method needs to be called with either networks=None and port_ids=None " "or port_ids and networks as not none." msgstr "" "Este método se tiene que llamar con networks=None y port_ids=None o bien con " "port_ids y networks con un valor distinto de None." #, python-format msgid "This rule already exists in group %s" msgstr "Esta regla ya existe en el grupo %s" #, python-format msgid "" "This service is older (v%(thisver)i) than the minimum (v%(minver)i) version " "of the rest of the deployment. Unable to continue." msgstr "" "Este servicio es anterior (v%(thisver)i) a la versión mímima soportada (v" "%(minver)i) del resto del despliegue. No se puede continuar." #, python-format msgid "Timeout waiting for device %s to be created" msgstr "Se ha excedido el tiempo esperando a que se creara el dispositivo %s" msgid "Timeout waiting for response from cell" msgstr "Se ha excedido el tiempo de espera de respuesta de la célula" #, python-format msgid "Timeout while checking if we can live migrate to host: %s" msgstr "" "Se ha agotado el tiempo de espera mientras se comprobaba si se puede migrar " "en directo al host: %s" msgid "To and From ports must be integers" msgstr "Puertos De y Hacia deben ser enteros" msgid "Token not found" msgstr "Token no encontrado" msgid "Triggering crash dump is not supported" msgstr "No se da soporte a desecadenar un volcado de memoria" msgid "Type and Code must be integers for ICMP protocol type" msgstr "Tipo y Código deben ser enteros del tipo de protocolo ICMP" msgid "UEFI is not supported" msgstr "UEFI no está soportado" #, python-format msgid "" "Unable to associate floating IP %(address)s to any fixed IPs for instance " "%(id)s. Instance has no fixed IPv4 addresses to associate." msgstr "" "No es posible asociar una IP flotante %(address)s a ninguna de las IPs fijas " "para instancia %(id)s. La instancia no tiene direcciones IPv4 fijas para " "asociar." #, python-format msgid "" "Unable to associate floating IP %(address)s to fixed IP %(fixed_address)s " "for instance %(id)s. Error: %(error)s" msgstr "" "No es posible asociar la IP flotante %(address)s a la IP fija " "%(fixed_address)s para instancia %(id)s. Error: %(error)s" msgid "Unable to authenticate Ironic client." msgstr "No se puede autenticar cliente Ironic." #, python-format msgid "Unable to contact guest agent. The following call timed out: %(method)s" msgstr "" "Unposible contactar al agente invitado. 
La siguiente llamada agotó su tiempo " "de espera: %(method)s" #, python-format msgid "Unable to convert image to %(format)s: %(exp)s" msgstr "No se puede convertir la imagen a %(format)s: %(exp)s" #, python-format msgid "Unable to convert image to raw: %(exp)s" msgstr "No se puede convertir la imagen a sin formato: %(exp)s" #, python-format msgid "Unable to destroy VBD %s" msgstr "Imposible destruir VBD %s" #, python-format msgid "Unable to destroy VDI %s" msgstr "No se puede destruir VDI %s" #, python-format msgid "Unable to determine disk bus for '%s'" msgstr "No se puede determinar el bus de disco para '%s'" #, python-format msgid "Unable to determine disk prefix for %s" msgstr "No se puede determinar el prefijo de disco para %s " #, python-format msgid "Unable to eject %s from the pool; No master found" msgstr "Incapaz de expulsar %s del conjunto: No se ha encontrado maestro" #, python-format msgid "Unable to eject %s from the pool; pool not empty" msgstr "Incapaz de expulsar %s del conjunto; el conjunto no está vacío" #, python-format msgid "Unable to find SR from VBD %s" msgstr "Imposible encontrar SR en VBD %s" #, python-format msgid "Unable to find SR from VDI %s" msgstr "No ha sido posible encontrar SR desde VDI %s" #, python-format msgid "Unable to find ca_file : %s" msgstr "No se puede encontrar ca_file: %s" #, python-format msgid "Unable to find cert_file : %s" msgstr "No se puede encontrar cert_file: %s" #, python-format msgid "Unable to find host for Instance %s" msgstr "No se puede encontrar el host para la instancia %s " msgid "Unable to find iSCSI Target" msgstr "No se puede encontrar el destino iSCSI " #, python-format msgid "Unable to find key_file : %s" msgstr "No se puede encontrar key_file: %s" msgid "Unable to find root VBD/VDI for VM" msgstr "No se puede encontrar VBD/VDI de raíz para VM" msgid "Unable to find volume" msgstr "No se puede encontrar volumen" msgid "Unable to get host UUID: /etc/machine-id does not exist" msgstr "No se puede obtener el UUID de host: /etc/machine-id no existe" msgid "Unable to get host UUID: /etc/machine-id is empty" msgstr "No se puede obtener el UUID de host: /etc/machine-id está vacío" #, python-format msgid "Unable to get record of VDI %s on" msgstr "Imposible obtener copia del VDI %s en" #, python-format msgid "Unable to introduce VDI for SR %s" msgstr "Inposible insertar VDI para SR %s" #, python-format msgid "Unable to introduce VDI on SR %s" msgstr "Incapaz de insertar VDI en SR %s" #, python-format msgid "Unable to join %s in the pool" msgstr "Incapaz de incorporar %s al conjunto" msgid "" "Unable to launch multiple instances with a single configured port ID. Please " "launch your instance one by one with different ports." msgstr "" "Incapaz de lanzar múltiples instancias con un solo identificador de puerto " "configurado. Por favor lanza tu instancia una por una con puertos diferentes." 
#, python-format msgid "" "Unable to migrate %(instance_uuid)s to %(dest)s: Lack of memory(host:" "%(avail)s <= instance:%(mem_inst)s)" msgstr "" "No se ha podido migrar %(instance_uuid)s a %(dest)s: falta de memoria (host:" "%(avail)s <= instancia:%(mem_inst)s)" #, python-format msgid "" "Unable to migrate %(instance_uuid)s: Disk of instance is too large(available " "on destination host:%(available)s < need:%(necessary)s)" msgstr "" "No se ha podido migrar %(instance_uuid)s: el disco de la instancia es " "demasiado grande (disponible en host de destino: %(available)s < necesario: " "%(necessary)s)" #, python-format msgid "" "Unable to migrate instance (%(instance_id)s) to current host (%(host)s)." msgstr "" "Incapaz de emigrar la instancia %(instance_id)s al actual anfitrion " "(%(host)s)" #, python-format msgid "Unable to obtain target information %s" msgstr "Incapaz de obtener información del destino %s" msgid "Unable to resize disk down." msgstr "Incapaz de reducir el tamaño del disco." msgid "Unable to set password on instance" msgstr "No se puede establecer contraseña en la instancia" msgid "Unable to shrink disk." msgstr "No se puede empaquetar disco." msgid "Unable to terminate instance." msgstr "Incapaz de terminar instancia." #, python-format msgid "Unable to unplug VBD %s" msgstr "Imposible desconectar VBD %s" #, python-format msgid "Unacceptable CPU info: %(reason)s" msgstr "Información de CPU inválida: %(reason)s" msgid "Unacceptable parameters." msgstr "Parametros inaceptables" #, python-format msgid "Unavailable console type %(console_type)s." msgstr "El tipo de consola %(console_type)s no está disponible." msgid "" "Undefined Block Device Mapping root: BlockDeviceMappingList contains Block " "Device Mappings from multiple instances." msgstr "" "Raíz de mapeo de dispositivo de bloques no definida: la lista de mapeos de " "dispositivos de bloques (BlockDeviceMappingList ) contiene mapeos de " "dispositivos de bloques de diversas instancias." #, python-format msgid "" "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ " "and attach the Nova API log if possible.\n" "%s" msgstr "" "Error inesperado de API. Por favor reporta esto en http://bugs.launchpad.net/" "nova/ e incluye el registro de Nova API en lo posible.\n" "%s" #, python-format msgid "Unexpected aggregate action %s" msgstr "Acción de agregado inesperada %s" msgid "Unexpected type adding stats" msgstr "Estado de adición de tipo inesperada" #, python-format msgid "Unexpected vif_type=%s" msgstr "vif_type=%s inesperado" msgid "Unknown" msgstr "Desconocido" msgid "Unknown action" msgstr "Acción desconocida" #, python-format msgid "Unknown config drive format %(format)s. Select one of iso9660 or vfat." msgstr "" "Formato de unidad de configuración desconocido %(format)s. Seleccione uno de " "iso9660 o vfat." #, python-format msgid "Unknown delete_info type %s" msgstr "Tipo delete_info %s desconocido" #, python-format msgid "Unknown image_type=%s" msgstr "image_type=%s desconocido " #, python-format msgid "Unknown quota resources %(unknown)s." msgstr "Recursos de cuota desconocidos %(unknown)s." msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "Dirección de clasificación desconocida, debe ser 'desc' o ' asc'" #, python-format msgid "Unknown type: %s" msgstr "Tipo desconocido: %s" msgid "Unrecognized legacy format." msgstr "Formato heredado no reconocido." 
#, python-format msgid "Unrecognized read_deleted value '%s'" msgstr "Valor de read_deleted no reconocido '%s'" #, python-format msgid "Unrecognized value '%s' for CONF.running_deleted_instance_action" msgstr "Valor '%s' no reconocido para CONF.running_deleted_instance_action" #, python-format msgid "Unshelve attempted but the image %s cannot be found." msgstr "Se ha intentado la extracción pero la imagen %s no ha sido encontrada." msgid "Unsupported Content-Type" msgstr "Tipo de contenido no soportado" msgid "Upgrade DB using Essex release first." msgstr "Actualice la base de datos utilizando primero el release de Essex." #, python-format msgid "User %(username)s not found in password file." msgstr "" "El usuario %(username)s no se ha encontrado en el archivo de contraseña." #, python-format msgid "User %(username)s not found in shadow file." msgstr "" "El usuario %(username)s no se ha encontrado en el archivo de duplicación. " msgid "User data needs to be valid base 64." msgstr "Los datos de usuario deben ser de base 64 válidos." msgid "User does not have admin privileges" msgstr "El usuario no tiene privilegios de administrador" msgid "" "Using different block_device_mapping syntaxes is not allowed in the same " "request." msgstr "" "El uso de sintáxis diferentes de block_device_mapping en la misma petición " "no está permitido." #, python-format msgid "" "VDI %(vdi_ref)s is %(virtual_size)d bytes which is larger than flavor size " "of %(new_disk_size)d bytes." msgstr "" "El VDI %(vdi_ref)s es de %(virtual_size)d bytes lo que es mayor que el " "tamaño de ll tipo de %(new_disk_size)d bytes." #, python-format msgid "" "VDI not found on SR %(sr)s (vdi_uuid %(vdi_uuid)s, target_lun %(target_lun)s)" msgstr "" "VDI no encontrado en SR %(sr)s (vdi_uuid %(vdi_uuid)s, target_lun " "%(target_lun)s)" #, python-format msgid "VHD coalesce attempts exceeded (%d), giving up..." msgstr "" "Intentos de incorporación de VHD excedidos (%d), dejando de intentar..." #, python-format msgid "" "Version %(req_ver)s is not supported by the API. Minimum is %(min_ver)s and " "maximum is %(max_ver)s." msgstr "" "Versión %(req_ver)s no soportada por el API. Mínimo es %(min_ver)s y máximo " "es %(max_ver)s." msgid "Virtual Interface creation failed" msgstr "Creacion de interfaz virtual fallida" msgid "Virtual interface plugin failed" msgstr "Plugin de interfaz virtual fallido" #, python-format msgid "Virtual machine mode '%(vmmode)s' is not recognised" msgstr "No se reconoce el modo de máquina virtual '%(vmmode)s' " #, python-format msgid "Virtual machine mode '%s' is not valid" msgstr "El modo de máquina virtual '%s' no es válido" #, python-format msgid "Virtualization type '%(virt)s' is not supported by this compute driver" msgstr "" "El tipo de virtualización '%(virt)s' no está soportado por este controlador " "de cálculo" #, python-format msgid "Volume %(volume_id)s could not be attached. Reason: %(reason)s" msgstr "No se ha podido adjuntar el volumen %(volume_id)s. Motivo: %(reason)s" #, python-format msgid "Volume %(volume_id)s could not be found." msgstr "No se ha podido encontrar el volumen %(volume_id)s." #, python-format msgid "" "Volume %(volume_id)s did not finish being created even after we waited " "%(seconds)s seconds or %(attempts)s attempts. And its status is " "%(volume_status)s." msgstr "" "La creación del volumen %(volume_id)s no se ha completado incluso después de " "esperar %(seconds)s segundos o %(attempts)s intentos. El estado es " "%(volume_status)s." 
msgid "Volume does not belong to the requested instance." msgstr "El volumen no pertenece a la instancia solicitada." #, python-format msgid "" "Volume encryption is not supported for %(volume_type)s volume %(volume_id)s" msgstr "" "La encriptación del volumen no es soportada por %(volume_type)s volume " "%(volume_id)s" #, python-format msgid "" "Volume is smaller than the minimum size specified in image metadata. Volume " "size is %(volume_size)i bytes, minimum size is %(image_min_disk)i bytes." msgstr "" "El volumen es menor del tamaño mínimo especificado en los metatarso de la " "imagen. El tamaño del volumen es %(volume_size)i bytes, el tamaño mínimo es " "%(image_min_disk)i bytes." #, python-format msgid "" "Volume sets block size, but the current libvirt hypervisor '%s' does not " "support custom block size" msgstr "" "El volúmen establece el tamaño de bloque, pero el hipervisor libvirt actual " "'%s' no soporta tamaño de bloque personalizado." #, python-format msgid "" "We do not support scheme '%s' under Python < 2.7.4, please use http or https" msgstr "" "No se soporta esquema '%s' bajo Python < 2.7.4, por favor utilice http o " "https" msgid "When resizing, instances must change flavor!" msgstr "Al redimensionarse, las instancias deben cambiar de tipo." msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "Al ejecutar el servidor en modalidad SSL, debe especificar un valor para las " "opciones cert_file y key_file en el archivo de configuración" #, python-format msgid "Wrong quota method %(method)s used on resource %(res)s" msgstr "Método de contingencia %(method)s usado en recurso %(res)s es erróneo" msgid "Wrong type of hook method. Only 'pre' and 'post' type allowed" msgstr "" "Método de tipo de enlace incorrecto. Solo se permiten los tipos 'pre' y " "'post'" msgid "X-Forwarded-For is missing from request." msgstr "X-Forwarded-For no está presente en la petición." msgid "X-Instance-ID header is missing from request." msgstr "Falta la cabecera de ID de instancia X en la solicitud." msgid "X-Instance-ID-Signature header is missing from request." msgstr "Cabecera X-Instance-ID-Signature hace falta en la solicitud." msgid "X-Metadata-Provider is missing from request." msgstr "X-Metadata-Provider no está presente en la petición." msgid "X-Tenant-ID header is missing from request." msgstr "Falta cabecera X-Tenant-ID en la solicitud." msgid "XAPI supporting relax-xsm-sr-check=true required" msgstr "Se requiere una XAPI que soporte relax-xsm-sr-check=true" msgid "You are not allowed to delete the image." msgstr "No le está permitido suprimir la imagen." msgid "" "You are not authorized to access the image the instance was started with." msgstr "" "No está autorizado a acceder a la imagen con la que se ha lanzado la " "instancia." msgid "You must implement __call__" msgstr "Debe implementar __call__" msgid "You should specify images_rbd_pool flag to use rbd images." msgstr "" "Debes especificar la bandera images_rbd_pool para utilizar imagenes rbd." msgid "You should specify images_volume_group flag to use LVM images." msgstr "" "Debes especificar la bandera images_volue_group para utilizar imagenes LVM." msgid "Zero floating IPs available." msgstr "No hay ninguna dirección IP flotante disponible." 
msgid "admin password can't be changed on existing disk" msgstr "" "No se puede cambiar la contraseña de administrador en el disco existente" msgid "aggregate deleted" msgstr "agregado eliminado" msgid "aggregate in error" msgstr "error en agregado" #, python-format msgid "assert_can_migrate failed because: %s" msgstr "assert_can_migrate ha fallado debido a: %s" msgid "cannot understand JSON" msgstr "no se puede entender JSON" msgid "clone() is not implemented" msgstr "no se ha implementado clone()" #, python-format msgid "connect info: %s" msgstr "información de conexión: %s" #, python-format msgid "connecting to: %(host)s:%(port)s" msgstr "conectando a: %(host)s:%(port)s" msgid "direct_snapshot() is not implemented" msgstr "direct_snapshot() no está implementado" #, python-format msgid "disk type '%s' not supported" msgstr "No se soporta tipo de disco '%s' " #, python-format msgid "empty project id for instance %s" msgstr "ID de proyecto vacío para la instancia %s" msgid "error setting admin password" msgstr "error al establecer contraseña de administrador" #, python-format msgid "error: %s" msgstr "error: %s" #, python-format msgid "failed to generate X509 fingerprint. Error message: %s" msgstr "falló al generar huella digital X509. Mensaje de error: %s" msgid "failed to generate fingerprint" msgstr "falló al generar huella digital" msgid "filename cannot be None" msgstr "nombre del fichero no puede ser Ninguno" msgid "floating IP is already associated" msgstr "Esta IP flotante ya está asociada" msgid "floating IP not found" msgstr "No se ha encontrado la IP flotante" #, python-format msgid "fmt=%(fmt)s backed by: %(backing_file)s" msgstr "fmt=%(fmt)s respaldado por: %(backing_file)s" #, python-format msgid "href %s does not contain version" msgstr "href %s no contiene la versión" msgid "image already mounted" msgstr "imagen ya montada" #, python-format msgid "instance %s is not running" msgstr "No se está ejecutando instancia %s" msgid "instance has a kernel or ramdisk but not both" msgstr "la instancia tiene un kernel o un disco RAM, pero no ambos" msgid "instance is a required argument to use @refresh_cache" msgstr "la instancia es un argumento necesario para utilizar @refresh_cache " msgid "instance is not in a suspended state" msgstr "la instancia no está en un estado suspendido" msgid "instance is not powered on" msgstr "instancia no activada" msgid "instance is powered off and cannot be suspended." msgstr "instancia está desactivada y no se puede suspender. " #, python-format msgid "instance_id %s could not be found as device id on any ports" msgstr "instance_id %s no ha sido encontrada como dispositivo en ningún puerto" msgid "is_public must be a boolean" msgstr "is_public debe ser un booleano" msgid "keymgr.fixed_key not defined" msgstr "keymgr:fixed_key no está definido" msgid "l3driver call to add floating IP failed" msgstr "La llamada l3driver para añadir IP flotante ha fallado" #, python-format msgid "libguestfs installed but not usable (%s)" msgstr "libguestfs está instalado pero no puede ser usado (%s)" #, python-format msgid "libguestfs is not installed (%s)" msgstr "libguestfs no está nstalado (%s)" #, python-format msgid "marker [%s] not found" msgstr "no se ha encontrado el marcador [%s]" #, python-format msgid "max rows must be <= %(max_value)d" msgstr "el número máximo de filas debe ser <= %(max_value)d" msgid "max_count cannot be greater than 1 if an fixed_ip is specified." msgstr "max_count no puede ser mayor a 1 si se especifica una fixed_ip." 
msgid "min_count must be <= max_count" msgstr "min_count debe ser <= max_count " #, python-format msgid "nbd device %s did not show up" msgstr "el dispositivo nbd %s no se ha mostrado" msgid "nbd unavailable: module not loaded" msgstr "nbd no disponible: módulo no cargado" msgid "no hosts to remove" msgstr "no hay hosts que eliminar" #, python-format msgid "no match found for %s" msgstr "No se ha encontrado coincidencia para %s" #, python-format msgid "no usable parent snapshot for volume %s" msgstr "no hay ninguna instantánea padre para el volumen %s" #, python-format msgid "no write permission on storage pool %s" msgstr "" "no dispone de permiso de escritura en la agrupación de almacenamiento%s." #, python-format msgid "not able to execute ssh command: %s" msgstr "No es posible ejecutar comando ssh: %s" msgid "old style configuration can use only dictionary or memcached backends" msgstr "" "La configuración antigua solo puede utilizar programas de fondo de ipo " "diccionario o guardados en la memoria caché" msgid "operation time out" msgstr "Tiempo de espera agotado para la operación" #, python-format msgid "partition %s not found" msgstr "No se ha encontrado la partición %s " #, python-format msgid "partition search unsupported with %s" msgstr "búsqueda de partición no soportada con %s " msgid "pause not supported for vmwareapi" msgstr "pausa no soportada para vmwareapi" msgid "printable characters with at least one non space character" msgstr "caracteres imprimibles con al menos un carácter que no sea un espacio." msgid "printable characters. Can not start or end with whitespace." msgstr "" "caracteres imprimibles. No pueden comenzar ni terminar con un espacio en " "blanco." #, python-format msgid "qemu-img failed to execute on %(path)s : %(exp)s" msgstr "No se ha podido ejecutar qemu-img en %(path)s : %(exp)s" #, python-format msgid "qemu-nbd error: %s" msgstr "error de qemu-nbd: %s" msgid "rbd python libraries not found" msgstr "Las librerías rbd python no han sido encontradas" #, python-format msgid "read_deleted can only be one of 'no', 'yes' or 'only', not %r" msgstr "read_deleted solo puede ser 'no', 'yes' o 'only', no %r" msgid "serve() can only be called once" msgstr "serve() sólo se puede llamar una vez " msgid "service is a mandatory argument for DB based ServiceGroup driver" msgstr "" "el servicio es un argumento obligatorio para el controlador ServiceGroup " "basado en base de datos" msgid "service is a mandatory argument for Memcached based ServiceGroup driver" msgstr "" "el servicio es un argumento obligatorio para el controlador de ServiceGroup " "basado en Memcached" msgid "set_admin_password is not implemented by this driver or guest instance." msgstr "" "esta instancia de invitado o controlador no implementa set_admin_password ." 
msgid "setup in progress" msgstr "Configuración en progreso" #, python-format msgid "snapshot for %s" msgstr "instantánea para %s " msgid "snapshot_id required in create_info" msgstr "snapshot_id es requerido en create_info" msgid "token not provided" msgstr "token no proporcionado" msgid "too many body keys" msgstr "demasiadas claves de cuerpo" msgid "unpause not supported for vmwareapi" msgstr "cancelación de pausa no soportada para vmwareapi" msgid "version should be an integer" msgstr "la versión debe ser un entero" #, python-format msgid "vg %s must be LVM volume group" msgstr "El grupo de volúmenes %s debe ser el grupo de volúmenes LVM" #, python-format msgid "vhostuser_sock_path not present in vif_details for vif %(vif_id)s" msgstr "" "vhostuser_sock_path no está presente en vif_details para vif %(vif_id)s" #, python-format msgid "vif type %s not supported" msgstr "El tipo VIF %s no está soportado" msgid "vif_type parameter must be present for this vif_driver implementation" msgstr "" "El parámetro vif_type debe estar presente para esta implementación de " "vif_driver" #, python-format msgid "volume %s already attached" msgstr "volumen %s ya ha sido adjuntado" #, python-format msgid "" "volume '%(vol)s' status must be 'in-use'. Currently in '%(status)s' status" msgstr "" "estado de volumen '%(vol)s' debe ser 'en-uso'. Actualmente en '%(status)s' " "estado" #, python-format msgid "xenapi.fake does not have an implementation for %s" msgstr "xenapi.fake no tiene una implementación para %s" #, python-format msgid "" "xenapi.fake does not have an implementation for %s or it has been called " "with the wrong number of arguments" msgstr "" "xenapi.fake no tiene una implementación para %s o ha sido llamada con un " "número incorrecto de argumentos" ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0664723 nova-21.2.4/nova/locale/fr/0000775000175000017500000000000000000000000015377 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3504698 nova-21.2.4/nova/locale/fr/LC_MESSAGES/0000775000175000017500000000000000000000000017164 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/locale/fr/LC_MESSAGES/nova.po0000664000175000017500000033616200000000000020502 0ustar00zuulzuul00000000000000# Translations template for nova. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the nova project. # # Translators: # ariivarua , 2013 # Fabien B. , 2013 # Corina Roe , 2014 # CryLegend , 2013 # EVEILLARD , 2013 # FIRST AUTHOR , 2011 # Frédéric , 2014 # GuiTsi , 2013 # Jonathan Dupart , 2014 # Kodoku , 2013 # Lucas Mascaro , 2015 # Eric Marques , 2013 # Maxime COQUEREL , 2014-2015 # Andrew Melim , 2014 # Olivier Buisson , 2012 # Patrice LACHANCE , 2013 # EVEILLARD , 2013 # Vincent JOBARD , 2013 # Benjamin Godard , 2013 # Andreas Jaeger , 2016. #zanata # Thomas Morin , 2017. 
#zanata msgid "" msgstr "" "Project-Id-Version: nova VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2020-04-27 16:23+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2017-12-01 11:31+0000\n" "Last-Translator: Thomas Morin \n" "Language: fr\n" "Plural-Forms: nplurals=2; plural=(n > 1);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 4.3.3\n" "Language-Team: French\n" #, python-format msgid "%(address)s is not a valid IP v4/6 address." msgstr "%(address)s n'est pas une adresse IP v4/6 valide" #, python-format msgid "" "%(binary)s attempted direct database access which is not allowed by policy" msgstr "" "%(binary)s a tenté un accès direct à la base de données qui n'est pas " "autorisé par la stratégie" #, python-format msgid "%(cidr)s is not a valid IP network." msgstr "%(cidr)s n'est pas une adresse IP réseau valide." #, python-format msgid "%(field)s should not be part of the updates." msgstr "%(field)s ne devrait pas faire partie des mises à jour." #, python-format msgid "%(memsize)d MB of memory assigned, but expected %(memtotal)d MB" msgstr "%(memsize)d MB de mémoire assignée, mais %(memtotal)d MB attendus" #, python-format msgid "%(path)s is not on local storage: %(reason)s" msgstr "%(path)s n'est pas sur un stockage local : %(reason)s" #, python-format msgid "%(path)s is not on shared storage: %(reason)s" msgstr "%(path)s n'est pas sur un stockage partagé : %(reason)s" #, python-format msgid "%(total)i rows matched query %(meth)s, %(done)i migrated" msgstr "%(total)i lignes correspondent à la requête %(meth)s, %(done)i migrées" #, python-format msgid "%(type)s hypervisor does not support PCI devices" msgstr "L'hyperviseur %(type)s ne supporte pas les périphériques PCI" #, python-format msgid "%(worker_name)s value of %(workers)s is invalid, must be greater than 0" msgstr "" "La valeur %(worker_name)s de %(workers)s est invalide, elle doit être " "supérieure à 0" #, python-format msgid "%s does not support disk hotplug." msgstr "%s ne prend pas en charge le branchement à chaud de disque." #, python-format msgid "%s format is not supported" msgstr "Le format %s n'est pas supporté" #, python-format msgid "%s is not supported." msgstr "%s n'est pas supporté." #, python-format msgid "%s must be either 'MANUAL' or 'AUTO'." msgstr "%s doit être 'MANUAL' ou 'AUTO'." #, python-format msgid "'%(other)s' should be an instance of '%(cls)s'" msgstr "'%(other)s' devrait être une instance de '%(cls)s'" msgid "'qemu-img info' parsing failed." msgstr "Echec de l'analyse syntaxique de 'qemu-img info'." #, python-format msgid "'rxtx_factor' argument must be a float between 0 and %g" msgstr "" "L'argument 'rxtx_factor' doit être une variable flottante entre 0 et %g." #, python-format msgid "A NetworkModel is required in field %s" msgstr "Un modèle de réseau est requis dans la zone %s" #, python-format msgid "" "API Version String %(version)s is of invalid format. Must be of format " "MajorNum.MinorNum." msgstr "" "Le format de l'écriture de la version %(version)s de l'API est invalide. Il " "doit être de forme: NumMajeur.NumMineur ." #, python-format msgid "API version %(version)s is not supported on this method." msgstr "La version %(version)s de l'API n'est pas supporté par cette méthode " msgid "Access list not available for public flavors." msgstr "Liste d'accès non disponible pour les versions publiques." 
#, python-format msgid "Action %s not found" msgstr "Action %s non trouvé" #, python-format msgid "" "Action for request_id %(request_id)s on instance %(instance_uuid)s not found" msgstr "" "Action de request_id %(request_id)s sur l'instance %(instance_uuid)s non " "trouvée" #, python-format msgid "Action: '%(action)s', calling method: %(meth)s, body: %(body)s" msgstr "Action: '%(action)s', appellant la méthode: %(meth)s, corps: %(body)s" #, python-format msgid "Add metadata failed for aggregate %(id)s after %(retries)s retries" msgstr "" "Echec de l'ajout de métadonnées pour l'agrégat %(id)s après %(retries)s " "tentatives" msgid "Affinity instance group policy was violated." msgstr "La stratégie de groupe d'instances anti-affinité a été violée." #, python-format msgid "Agent does not support the call: %(method)s" msgstr "L'agent ne supporte l'appel : %(method)s" #, python-format msgid "" "Agent-build with hypervisor %(hypervisor)s os %(os)s architecture " "%(architecture)s exists." msgstr "" "La génération d'agent avec l'hyperviseur %(hypervisor)s le système " "d'exploitation %(os)s et l'architecture %(architecture)s existe." #, python-format msgid "Aggregate %(aggregate_id)s already has host %(host)s." msgstr "L'agrégat %(aggregate_id)s a déjà l'hôte %(host)s." #, python-format msgid "Aggregate %(aggregate_id)s could not be found." msgstr "Agrégat %(aggregate_id)s introuvable." #, python-format msgid "Aggregate %(aggregate_id)s has no host %(host)s." msgstr "L'agrégat %(aggregate_id)s n'a pas d'hôte %(host)s." #, python-format msgid "Aggregate %(aggregate_id)s has no metadata with key %(metadata_key)s." msgstr "" "L'agrégat %(aggregate_id)s n'a pas de métadonnées avec la clé " "%(metadata_key)s." #, python-format msgid "" "Aggregate %(aggregate_id)s: action '%(action)s' caused an error: %(reason)s." msgstr "" "Agrégat %(aggregate_id)s : l'action '%(action)s' a généré une erreur : " "%(reason)s." #, python-format msgid "Aggregate %(aggregate_name)s already exists." msgstr "L'agrégat %(aggregate_name)s existe déjà." #, python-format msgid "Aggregate %s does not support empty named availability zone" msgstr "" "L'agrégat de %s ne prend pas en charge la zone de disponibilité nommée vide" #, python-format msgid "Aggregate for host %(host)s count not be found." msgstr "Agrégat introuvable pour le nombre d'hôtes %(host)s" #, python-format msgid "An invalid 'name' value was provided. The name must be: %(reason)s" msgstr "" "Une valeur 'name' non valide a été fournie. Le nom doit être : %(reason)s" msgid "An unknown error has occurred. Please try your request again." msgstr "" "Une erreur inopinée à eu lieu. Merci d'essayer votre requête à nouveau." msgid "An unknown exception occurred." msgstr "Une exception inconnue s'est produite." msgid "Anti-affinity instance group policy was violated." msgstr "La stratégie de groupe d'instances anti-affinité a été violée." #, python-format msgid "Architecture name '%(arch)s' is not recognised" msgstr "Nom d'architecture '%(arch)s' n'est pas reconnu" #, python-format msgid "Architecture name '%s' is not valid" msgstr "Le nom d'architecture '%s' n'est pas valide" #, python-format msgid "" "Attempt to consume PCI device %(compute_node_id)s:%(address)s from empty pool" msgstr "" "Tentative d'utilisation du périphérique PCI %(compute_node_id)s:%(address)s " "à partir d'un pool vide" msgid "Attempted overwrite of an existing value." msgstr "Tentative d'écriture d'une valeur existante." 
#, python-format msgid "Attribute not supported: %(attr)s" msgstr "Attribut %(attr)s non supporté " #, python-format msgid "Bad network format: missing %s" msgstr "Format de réseau incorrect : %s manquant" msgid "Bad networks format" msgstr "Format de réseaux incorrect" #, python-format msgid "Bad networks format: network uuid is not in proper format (%s)" msgstr "" "Format de réseaux incorrect : l'UUID du réseau n'est pas au format approprié " "(%s)" #, python-format msgid "Bad prefix for network in cidr %s" msgstr "Préfixe incorrecte pour le réseau dans cidr %s" #, python-format msgid "" "Binding failed for port %(port_id)s, please check neutron logs for more " "information." msgstr "" "Échec de l'attachement du port %(port_id)s, voir les logs neutron pour plus " "d'information." msgid "Blank components" msgstr "Composants vides" msgid "" "Blank volumes (source: 'blank', dest: 'volume') need to have non-zero size" msgstr "" "Les volumes vides (source : 'vide', dest : 'volume') doivent avoir une " "taille non zéro" #, python-format msgid "Block Device %(id)s is not bootable." msgstr "L'unité par bloc %(id)s n'est pas amorçable." #, python-format msgid "" "Block Device Mapping %(volume_id)s is a multi-attach volume and is not valid " "for this operation." msgstr "" "Le volume de mappage d'unité par bloc %(volume_id)s est un volume multi-" "connexion et n'est pas valide pour cette opération." msgid "Block Device Mapping cannot be converted to legacy format. " msgstr "Le mappage d'unité par bloc ne peut être converti à l'ancien format." msgid "Block Device Mapping is Invalid." msgstr "Le mappage d'unité par bloc n'est pas valide." #, python-format msgid "Block Device Mapping is Invalid: %(details)s" msgstr "Le mappage d'unité par bloc n'est pas valide : %(details)s" msgid "" "Block Device Mapping is Invalid: Boot sequence for the instance and image/" "block device mapping combination is not valid." msgstr "" "Le mappage des unités par bloc est invalide : La séquence de démarrage pour " "l'instance et la combinaison de mappage du périphérique image/bloc n'est pas " "valide." msgid "" "Block Device Mapping is Invalid: You specified more local devices than the " "limit allows" msgstr "" "Le mappage d'unité par bloc est invalide : Vous avez spécifié plus de " "périphériques locaux que la limite autorise" #, python-format msgid "Block Device Mapping is Invalid: failed to get image %(id)s." msgstr "" "Le mappage d'unité par bloc n'est pas valide : impossible d'obtenir l'image " "%(id)s." #, python-format msgid "Block Device Mapping is Invalid: failed to get snapshot %(id)s." msgstr "" "Le mappage d'unité par bloc n'est pas valide : échec d'obtention de " "l'instantané %(id)s." #, python-format msgid "Block Device Mapping is Invalid: failed to get volume %(id)s." msgstr "" "Le mappage d'unité par bloc n'est pas valide : échec d'obtention du volume " "%(id)s." msgid "Block migration can not be used with shared storage." msgstr "" "La migration par bloc ne peut pas être utilisée avec le stockage partagé." msgid "Boot index is invalid." msgstr "L'index de démarrage est invalide." 
#, python-format msgid "Build of instance %(instance_uuid)s aborted: %(reason)s" msgstr "Construction de l'instance %(instance_uuid)s interrompue : %(reason)s" #, python-format msgid "Build of instance %(instance_uuid)s was re-scheduled: %(reason)s" msgstr "" "La construction de l'instance %(instance_uuid)s a été reprogrammée : " "%(reason)s" #, python-format msgid "BuildRequest not found for instance %(uuid)s" msgstr "BuildRequest introuvable pour l'instance %(uuid)s." msgid "CPU and memory allocation must be provided for all NUMA nodes" msgstr "" "CPU et allocation mémoire doivent être fournis pour tous les nœuds NUMA" #, python-format msgid "" "CPU doesn't have compatibility.\n" "\n" "%(ret)s\n" "\n" "Refer to %(u)s" msgstr "" "L'UC n'a pas de compatibilité.\n" "\n" "%(ret)s\n" "\n" "Voir %(u)s" #, python-format msgid "CPU number %(cpunum)d is assigned to two nodes" msgstr "Le CPU numéro %(cpunum)d est assigné à deux nodes" #, python-format msgid "CPU number %(cpunum)d is larger than max %(cpumax)d" msgstr "Le nombre de CPU %(cpunum)d est plus grand que le maximum %(cpumax)d" #, python-format msgid "CPU number %(cpuset)s is not assigned to any node" msgstr "Le numéro de CPU %(cpuset)s n'est assigné à aucun node" msgid "Can not add access to a public flavor." msgstr "Impossible d'ajouter l'accès à un gabarit public." msgid "Can not find requested image" msgstr "Impossible de trouver l'image demandée" #, python-format msgid "Can not handle authentication request for %d credentials" msgstr "" "Impossible de traiter la demande d'authentification pour les données " "d'identification %d" msgid "Can't resize a disk to 0 GB." msgstr "Impossible de redimensionner un disque à 0 Go." msgid "Can't resize down ephemeral disks." msgstr "Impossible de réduire la taille des disques éphémères." msgid "Can't retrieve root device path from instance libvirt configuration" msgstr "" "Impossible d'extraire le chemin d'unité racine de la configuration libvirt " "de l'instance" #, python-format msgid "" "Cannot '%(action)s' instance %(server_id)s while it is in %(attr)s %(state)s" msgstr "" "Impossible de '%(action)s' l'instance %(server_id)s lorsque elle a l'état " "%(attr)s %(state)s" #, python-format msgid "" "Cannot access \"%(instances_path)s\", make sure the path exists and that you " "have the proper permissions. In particular Nova-Compute must not be executed " "with the builtin SYSTEM account or other accounts unable to authenticate on " "a remote host." msgstr "" "Impossible d'accéder à \"%(instances_path)s\". Assurez-vous que le chemin " "existe et que vous disposez des droits appropriés. En particulier, Nova-" "Compute ne doit pas être exécuté avec le compte SYSTEM intégré ni avec des " "comptes qui ne peuvent pas s'authentifier auprès d'un hôte distant." #, python-format msgid "Cannot add host to aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Impossible d'ajouter l’hôte sur l'ensemble %(aggregate_id)s. 
Raison: " "%(reason)s" msgid "Cannot attach one or more volumes to multiple instances" msgstr "Impossible de connecter un ou plusieurs volumes à plusieurs instances" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "Pas d'appel de %(method)s sur un objet %(objtype)s orphelin" #, python-format msgid "" "Cannot determine the parent storage pool for %s; cannot determine where to " "store images" msgstr "" "Impossible de déterminer le pool de stockage parent pour %s; impossible de " "déterminer où stocker les images" msgid "Cannot find SR of content-type ISO" msgstr "Impossible de trouver le référentiel de stockage ISO content-type" msgid "Cannot find SR to read/write VDI." msgstr "Impossible de trouver le SR pour lire/écrire le VDI." msgid "Cannot find image for rebuild" msgstr "Image introuvable pour la régénération" #, python-format msgid "Cannot remove host %(host)s in aggregate %(id)s" msgstr "Impossible de supprimer l'hôte %(host)s dans l'agrégat %(id)s" #, python-format msgid "Cannot remove host from aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Impossible de supprimer l’hôte de l'ensemble %(aggregate_id)s. Raison: " "%(reason)s" msgid "Cannot rescue a volume-backed instance" msgstr "Impossible de sauver une instance volume" #, python-format msgid "" "Cannot resize the root disk to a smaller size. Current size: " "%(curr_root_gb)s GB. Requested size: %(new_root_gb)s GB." msgstr "" "Impossible de réduire le disque à une taille inférieure. Taille actuelle: " "%(curr_root_gb)s Go. Taille souhaitée: %(new_root_gb)s Go." msgid "" "Cannot set cpu thread pinning policy in a non dedicated cpu pinning policy" msgstr "" "Impossible de définir une stratégie de réservation de thread de CPU dans une " "stratégie de réservation de CPU non dédiée" msgid "Cannot set realtime policy in a non dedicated cpu pinning policy" msgstr "" "Impossible de définir une stratégie en temps réel dans une stratégie de " "réservation de CPU non dédiée" #, python-format msgid "Cannot update aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Impossible de mettre a jour l'ensemble %(aggregate_id)s. Raison: %(reason)s" #, python-format msgid "" "Cannot update metadata of aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Impossible de mettre a jour les métadonnées de l'ensemble %(aggregate_id)s. " "Raison: %(reason)s." #, python-format msgid "Cell %(uuid)s has no mapping." msgstr "La cellule %(uuid)s ne possède aucun mappage." #, python-format msgid "" "Change would make usage less than 0 for the following resources: %(unders)s" msgstr "" "La modification définira une utilisation inférieure à 0 pour les ressources " "suivantes : %(unders)s" #, python-format msgid "Class %(class_name)s could not be found: %(exception)s" msgstr "Classe %(class_name)s introuvable : %(exception)s" #, python-format msgid "" "Command Not supported. Please use Ironic command %(cmd)s to perform this " "action." msgstr "" "Commande non supportée. Veuillez utiliser la commande Ironic %(cmd)s pour " "réaliser cette action." #, python-format msgid "Compute host %(host)s could not be found." msgstr "L'hôte de calcul %(host)s ne peut pas être trouvé." #, python-format msgid "Compute host %s not found." msgstr "Host Compute %s non trouvé." #, python-format msgid "Compute service of %(host)s is still in use." msgstr "Le service de calcul de %(host)s est encore en cours d'utilisation." #, python-format msgid "Compute service of %(host)s is unavailable at this time." 
msgstr "Le service de calcul de %(host)s est indisponible pour l'instant." #, python-format msgid "Config drive format '%(format)s' is not supported." msgstr "" "Le format de l'unité de configuration '%(format)s' n'est pas pris en charge." #, python-format msgid "" "Config requested an explicit CPU model, but the current libvirt hypervisor " "'%s' does not support selecting CPU models" msgstr "" "La config a demandé un modèle d'UC explicite, mais l'hyperviseur libvirt " "'%s' actuel ne prend pas en charge la sélection des modèles d'UC" #, python-format msgid "" "Conflict updating instance %(instance_uuid)s, but we were unable to " "determine the cause" msgstr "" "Conflit lors de la mise à jour de l'instance %(instance_uuid)s, cause " "inconnue" #, python-format msgid "" "Conflict updating instance %(instance_uuid)s. Expected: %(expected)s. " "Actual: %(actual)s" msgstr "" "Conflit lors de la mise à jour de l'instance %(instance_uuid)s. Attendu: " "%(expected)s. Actuel: %(actual)s" #, python-format msgid "Connection to cinder host failed: %(reason)s" msgstr "La connexion à l'hôte cinder a échoué : %(reason)s" #, python-format msgid "Connection to glance host %(server)s failed: %(reason)s" msgstr "La connexion à l'hôte Glance %(server)s a échoué : %(reason)s" #, python-format msgid "Connection to libvirt lost: %s" msgstr "Perte de la connexion à libvirt : %s" #, python-format msgid "Connection to the hypervisor is broken on host: %(host)s" msgstr "La connexion à l'hyperviseur est cassée sur l'hôte : %(host)s" #, python-format msgid "" "Console log output could not be retrieved for instance %(instance_id)s. " "Reason: %(reason)s" msgstr "" "Impossible d'extraire la sortie du journal de la console pour l'instance " "%(instance_id)s. Raison : %(reason)s" msgid "Constraint not met." msgstr "Contrainte non vérifiée." #, python-format msgid "Converted to raw, but format is now %s" msgstr "Converti au format brut, mais le format est maintenant %s" #, python-format msgid "Could not attach image to loopback: %s" msgstr "Impossible de lier l'image au loopback : %s" #, python-format msgid "Could not fetch image %(image_id)s" msgstr "Impossible d'extraire l'image %(image_id)s" #, python-format msgid "Could not find a handler for %(driver_type)s volume." msgstr "" "Impossible de trouver un gestionnaire pour le %(driver_type)s de volume." #, python-format msgid "Could not find binary %(binary)s on host %(host)s." msgstr "Impossible de trouver le binaire %(binary)s sur l'hôte %(host)s." #, python-format msgid "Could not find config at %(path)s" msgstr "Configuration introuvable dans %(path)s" msgid "Could not find the datastore reference(s) which the VM uses." msgstr "" "Impossible de trouver la ou les références de magasin de données utilisé par " "la machine virtuelle." #, python-format msgid "Could not load line %(line)s, got error %(error)s" msgstr "Impossible de charger la ligne %(line)s, erreur %(error)s obtenue" #, python-format msgid "Could not load paste app '%(name)s' from %(path)s" msgstr "Echec du chargement de l'app de collage '%(name)s' depuis %(path)s" #, python-format msgid "" "Could not mount vfat config drive. %(operation)s failed. Error: %(error)s" msgstr "" "Impossible d'installer l'unité de configuration vfat. %(operation)s a " "échoué. 
Erreur : %(error)s" #, python-format msgid "Could not upload image %(image_id)s" msgstr "Impossible de télécharger l'image %(image_id)s" msgid "Creation of virtual interface with unique mac address failed" msgstr "" "La création d'une interface virtuelle avec une adresse mac unique a échoué" #, python-format msgid "Datastore regex %s did not match any datastores" msgstr "" "L'expression régulière %s du magasin de données ne correspond à aucun " "magasin de données" msgid "Datetime is in invalid format" msgstr "Datetime est dans un format non valide" msgid "Default PBM policy is required if PBM is enabled." msgstr "La règle PBM par défaut est nécessaire si PBM est activé." #, python-format msgid "Deleted %(records)d records from table '%(table_name)s'." msgstr "%(records)d entrée supprimer de la table '%(table_name)s'." #, python-format msgid "Device '%(device)s' not found." msgstr "Device '%(device)s' introuvable." #, python-format msgid "" "Device id %(id)s specified is not supported by hypervisor version %(version)s" msgstr "" "L'ID d'unité %(id)s indiquée n'est pas prise en charge par la version " "%(version)s" msgid "Device name contains spaces." msgstr "Le nom du périphérique contient des espaces." msgid "Device name empty or too long." msgstr "Nom du périphérique vide ou trop long." #, python-format msgid "Device type mismatch for alias '%s'" msgstr "Type de périphérique non concordant pour l'alias '%s'" #, python-format msgid "" "Different types in %(table)s.%(column)s and shadow table: %(c_type)s " "%(shadow_c_type)s" msgstr "" " Types différents entre %(table)s.%(column)s et la table shadow: %(c_type)s " "%(shadow_c_type)s" #, python-format msgid "Disk contains a filesystem we are unable to resize: %s" msgstr "" "Le disque contient un système de fichiers qui ne peut pas être " "redimensionné : %s" #, python-format msgid "Disk format %(disk_format)s is not acceptable" msgstr "Le format de disque %(disk_format)s n'est pas acceptable" #, python-format msgid "Disk info file is invalid: %(reason)s" msgstr "Le ficher d'information du disque est invalide : %(reason)s" msgid "Disk must have only one partition." msgstr "Le disque doit comporter une seule partition." #, python-format msgid "Disk with id: %s not found attached to instance." msgstr "Disque introuvable avec l'ID %s et connecté à l'instance." #, python-format msgid "Driver Error: %s" msgstr "Erreur du pilote: %s" #, python-format msgid "Error attempting to run %(method)s" msgstr "Erreur lors de la tentative d'exécution de %(method)s" #, python-format msgid "" "Error destroying the instance on node %(node)s. Provision state still " "'%(state)s'." msgstr "" "Erreur lors de la destruction de l'instance sur le noeud %(node)s. L'état de " "la mise à disposition est encore '%(state)s'." 
#, python-format msgid "Error during following call to agent: %(method)s" msgstr "Erreur durant l'appel de l'agent: %(method)s" #, python-format msgid "Error during unshelve instance %(instance_id)s: %(reason)s" msgstr "" "Erreur durant la dé-réservation de l'instance %(instance_id)s: %(reason)s" #, python-format msgid "" "Error from libvirt while getting domain info for %(instance_name)s: [Error " "Code %(error_code)s] %(ex)s" msgstr "" "Erreur de libvirt lors de l'obtention des informations de domaine pour " "%(instance_name)s : [Code d'erreur %(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while looking up %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "Erreur de libvirt lors de la recherche de %(instance_name)s : [Code d'erreur " "%(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while quiescing %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "Erreur de libvirt lors de la mise au repos de %(instance_name)s : [Code " "d'erreur %(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while set password for username \"%(user)s\": [Error Code " "%(error_code)s] %(ex)s" msgstr "" "Erreur libvirt lors de la définition du mot de passe pour le nom " "d'utilisateur \"%(user)s\". [Code d'erreur : %(error_code)s] %(ex)s" #, python-format msgid "" "Error mounting %(device)s to %(dir)s in image %(image)s with libguestfs " "(%(e)s)" msgstr "" "Erreur de montage de %(device)s pour %(dir)s dans l'image %(image)s avec " "libguestfs (%(e)s)" #, python-format msgid "Error mounting %(image)s with libguestfs (%(e)s)" msgstr "Erreur lors du montage de %(image)s avec libguestfs (%(e)s)" #, python-format msgid "Error when creating resource monitor: %(monitor)s" msgstr "Erreur lors de la création du moniteur de ressource : %(monitor)s" msgid "Error: Agent is disabled" msgstr "Erreur : agent désactivé" #, python-format msgid "Event %(event)s not found for action id %(action_id)s" msgstr "Evénement %(event)s non trouvé pour l'ID action %(action_id)s" msgid "Event must be an instance of nova.virt.event.Event" msgstr "L'événement doit être une instance de nova.virt.event.Event" #, python-format msgid "" "Exceeded max scheduling attempts %(max_attempts)d for instance " "%(instance_uuid)s. Last exception: %(exc_reason)s" msgstr "" "Nombre de tentatives de planification max %(max_attempts)d pour l'instance " "%(instance_uuid)s dépassé. Dernière exception : %(exc_reason)s" #, python-format msgid "" "Exceeded max scheduling retries %(max_retries)d for instance " "%(instance_uuid)s during live migration" msgstr "" "Le nombre maximal de nouvelles tentatives de planification (%(max_retries)d) " "a été dépassé pour l'instance %(instance_uuid)s pendant la migration à chaud" #, python-format msgid "Exceeded maximum number of retries. %(reason)s" msgstr "Nombre maximum d'essai dépassé. %(reason)s." #, python-format msgid "Expected a uuid but received %(uuid)s." msgstr "UUID attendu mais %(uuid)s reçu." #, python-format msgid "Extra column %(table)s.%(column)s in shadow table" msgstr "Colonne supplémentaire %(table)s.%(column)s dans la table image" msgid "Extracting vmdk from OVA failed." msgstr "Echec de l'extraction de vmdk à partir d'OVA." 
#, python-format msgid "Failed to access port %(port_id)s: %(reason)s" msgstr "Impossible d'accéder au port %(port_id)s : %(reason)s" #, python-format msgid "" "Failed to add deploy parameters on node %(node)s when provisioning the " "instance %(instance)s" msgstr "" "Echec de l'ajout des paramètres de déploiement sur le noeud %(node)s lors de " "la mise à disposition de l'instance %(instance)s" #, python-format msgid "Failed to allocate the network(s) with error %s, not rescheduling." msgstr "" "Echec de l'allocation de réseau(x) avec l'erreur %s, ne pas replanifier." msgid "Failed to allocate the network(s), not rescheduling." msgstr "Echec de l'allocation de réseau(x), ne pas replanifier." #, python-format msgid "Failed to attach network adapter device to %(instance_uuid)s" msgstr "Impossible de connecter la carte réseau avec %(instance_uuid)s" #, python-format msgid "Failed to create vif %s" msgstr "Échec de création du vif %s" #, python-format msgid "Failed to deploy instance: %(reason)s" msgstr "Echec de déploiement de l'instance: %(reason)s" #, python-format msgid "Failed to detach PCI device %(dev)s: %(reason)s" msgstr "Échec du détachement du périphérique PCI %(dev)s: %(reason)s" #, python-format msgid "Failed to detach network adapter device from %(instance_uuid)s" msgstr "Impossible de déconnecter la carte réseau de %(instance_uuid)s" #, python-format msgid "Failed to encrypt text: %(reason)s" msgstr "Echec du chiffrement du texte : %(reason)s" #, python-format msgid "Failed to launch instances: %(reason)s" msgstr "Échec à lancer les instances : %(reason)s" #, python-format msgid "Failed to map partitions: %s" msgstr "Echec de mappage des partitions : %s" #, python-format msgid "Failed to mount filesystem: %s" msgstr "Impossible de monter le système de fichier : %s" msgid "Failed to parse information about a pci device for passthrough" msgstr "" "Echec de l'analyse des informations d'un périphérique pci pour le passe-" "système" #, python-format msgid "Failed to power off instance: %(reason)s" msgstr "Échec à éteindre l'instance : %(reason)s" #, python-format msgid "Failed to power on instance: %(reason)s" msgstr "Echec à faire fonctionner l'instance : %(reason)s" #, python-format msgid "" "Failed to prepare PCI device %(id)s for instance %(instance_uuid)s: " "%(reason)s" msgstr "" "Échec de la préparation du périphérique PCI %(id)s pour l'instance " "%(instance_uuid)s : %(reason)s" #, python-format msgid "Failed to provision instance %(inst)s: %(reason)s" msgstr "Echec de la mise à disposition de l'instance %(inst)s : %(reason)s" #, python-format msgid "Failed to read or write disk info file: %(reason)s" msgstr "Echec à lire ou à écrire le fichier d'information disque : %(reason)s" #, python-format msgid "Failed to reboot instance: %(reason)s" msgstr "Echec à redémarrer l'instance : %(reason)s" #, python-format msgid "Failed to remove volume(s): (%(reason)s)" msgstr "Echec de suppresion du volume(s): (%(reason)s)" #, python-format msgid "Failed to request Ironic to rebuild instance %(inst)s: %(reason)s" msgstr "" "Echec de la demande de régénération de l'instance %(inst)s à Ironic : " "%(reason)s" #, python-format msgid "Failed to resume instance: %(reason)s" msgstr "Echec à résumé l'instance : %(reason)s" #, python-format msgid "Failed to run qemu-img info on %(path)s : %(error)s" msgstr "Échec à lancer qemu-img info sur %(path)s : %(error)s" #, python-format msgid "Failed to set admin password on %(instance)s because %(reason)s" msgstr "" "Echec de définition du mot de passe 
d'admin sur %(instance)s car %(reason)s" msgid "Failed to spawn, rolling back" msgstr "Echec de la génération, annulation" #, python-format msgid "Failed to suspend instance: %(reason)s" msgstr "Échec à suspendre l'instance : %(reason)s" #, python-format msgid "Failed to terminate instance: %(reason)s" msgstr "Échec à terminer l'instance : %(reason)s" #, python-format msgid "Failed to unplug vif %s" msgstr "Impossible de déconnecter le vif %s" msgid "Failure prepping block device." msgstr "Echec de préparation de l'unité par bloc." #, python-format msgid "File %(file_path)s could not be found." msgstr "Fichier %(file_path)s introuvable." #, python-format msgid "File path %s not valid" msgstr "Chemin d'accès au fichier %s non valide" #, python-format msgid "Fixed IP %(ip)s is not a valid ip address for network %(network_id)s." msgstr "" "L'IP fixe %(ip)s n'est pas une adresse IP valide pour le réseau " "%(network_id)s." #, python-format msgid "Fixed IP %s is already in use." msgstr "L'adresse IP fixe %s est déjà utilisée." #, python-format msgid "" "Fixed IP address %(address)s is already in use on instance %(instance_uuid)s." msgstr "" "L'adresse IP fixe %(address)s est déjà utilisée sur l'instance " "%(instance_uuid)s." #, python-format msgid "Fixed IP not found for address %(address)s." msgstr "Pas d'IP fixe trouvée pour l'adresse %(address)s." #, python-format msgid "Flavor %(flavor_id)s could not be found." msgstr "Le Flavor %(flavor_id)s ne peut être trouvé." #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(extra_specs_key)s." msgstr "" "Le type d'instance %(flavor_id)s n'a pas de spécifications supplémentaires " "avec la clé %(extra_specs_key)s." #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(key)s." msgstr "" "Le type d'instance %(flavor_id)s n'a pas de spécifications supplémentaires " "avec la clé %(key)s" #, python-format msgid "" "Flavor %(id)s extra spec cannot be updated or created after %(retries)d " "retries." msgstr "" "Impossible de mettre à jour ou de créer la spécification supplémentaire " "%(id)s du gabarit après %(retries)d tentatives." #, python-format msgid "" "Flavor access already exists for flavor %(flavor_id)s and project " "%(project_id)s combination." msgstr "" "L'accès de version existe déjà pour la combinaison version %(flavor_id)s et " "projet %(project_id)s." #, python-format msgid "Flavor access not found for %(flavor_id)s / %(project_id)s combination." msgstr "" "Version inaccessible pour la combinaison %(flavor_id)s / %(project_id)s." msgid "Flavor used by the instance could not be found." msgstr "La version utilisée par l'instance est introuvable." #, python-format msgid "Flavor with ID %(flavor_id)s already exists." msgstr "Le type d'instance avec l'ID %(flavor_id)s existe déjà." #, python-format msgid "Flavor with name %(flavor_name)s could not be found." msgstr "Le type d'instance nommé %(flavor_name)s ne peut être trouvé." #, python-format msgid "Flavor with name %(name)s already exists." msgstr "Le type d'instance avec le nom %(name)s existe déjà." #, python-format msgid "" "Flavor's disk is smaller than the minimum size specified in image metadata. " "Flavor disk is %(flavor_size)i bytes, minimum size is %(image_min_disk)i " "bytes." msgstr "" "Le disque du gabarit est plus petit que la taille minimum spécifiée dans les " "métadonnées de l'image. La taille du disque est %(flavor_size)i octets, la " "taille minimum est %(image_min_disk)i octets." 
#, python-format msgid "" "Flavor's disk is too small for requested image. Flavor disk is " "%(flavor_size)i bytes, image is %(image_size)i bytes." msgstr "" "Le disque du gabaris est trop petit pour l'image demandée. Le disque du " "gabarit fait %(flavor_size)i bytes, l'image fait %(image_size)i bytes." msgid "Flavor's memory is too small for requested image." msgstr "La mémoire du type d'instance est trop petite pour l'image demandée." #, python-format msgid "Floating IP %(address)s association has failed." msgstr "L'association de l'IP floattante %(address)s a échoué." #, python-format msgid "Floating IP %(address)s is associated." msgstr "L'adresse IP flottante %(address)s est associée." #, python-format msgid "Floating IP %(address)s is not associated with instance %(id)s." msgstr "" "L'adresse IP flottante %(address)s n'est pas associée à l'instance %(id)s." #, python-format msgid "Floating IP not found for ID %(id)s." msgstr "Aucune IP dynamique trouvée pour l'id %(id)s." #, python-format msgid "Floating IP not found for ID %s" msgstr "Adresse IP flottante non trouvée pour l'ID %s" #, python-format msgid "Floating IP not found for address %(address)s." msgstr "Aucune IP dynamique trouvée pour l'adresse %(address)s." msgid "Floating IP pool not found." msgstr "Pool d'IP flottantes non trouvé." msgid "" "Forbidden to exceed flavor value of number of serial ports passed in image " "meta." msgstr "" "Le dépassement du nombre de port série du type d'instance passé dans la meta " "image est interdit" msgid "Found no disk to snapshot." msgstr "Aucun disque trouvé pour l'instantané." #, python-format msgid "Found no network for bridge %s" msgstr "Aucun réseau trouvé pour le bridge %s" #, python-format msgid "Found non-unique network for bridge %s" msgstr "Réseau non unique trouvé pour le bridge %s" #, python-format msgid "Found non-unique network for name_label %s" msgstr "Réseau non unique trouvé pour name_label %s" msgid "Guest does not have a console available." msgstr "Aucune console n'est disponible pour l'invité." #, python-format msgid "Host %(host)s could not be found." msgstr "L'hôte %(host)s ne peut pas être trouvé." #, python-format msgid "Host %(host)s is already mapped to cell %(uuid)s" msgstr "L'hôte %(host)s est déjà mappé à la cellule %(uuid)s" #, python-format msgid "Host '%(name)s' is not mapped to any cell" msgstr "L'hôte '%(name)s' n'est mappé à aucune cellule" msgid "Host PowerOn is not supported by the Hyper-V driver" msgstr "" "La mise sous tension de l'hôte n'est pas prise en charge par le pilote Hyper-" "V" msgid "Host aggregate is not empty" msgstr "L'agrégat d'hôte n'est pas vide" msgid "Host does not support guests with NUMA topology set" msgstr "L'hote ne supporte pas les invités avec le groupe topologique NUMA" msgid "Host does not support guests with custom memory page sizes" msgstr "" "L'hôte ne prend pas en charge les invités avec des tailles de pages de " "mémoire personnalisées" msgid "Host startup on XenServer is not supported." msgstr "Le démarrage à chaud sur XenServer n'est pas pris en charge." 
msgid "Hypervisor driver does not support post_live_migration_at_source method" msgstr "" "Le pilote de l'hyperviseur ne prend pas en charge la méthode " "post_live_migration_at_source" #, python-format msgid "Hypervisor virt type '%s' is not valid" msgstr "Le type virtuel d'hyperviseur '%s' n'est pas valide" #, python-format msgid "Hypervisor virtualization type '%(hv_type)s' is not recognised" msgstr "" "Le type de virtualisation de l'hyperviseur '%(hv_type)s' n'est pas reconnu." #, python-format msgid "Hypervisor with ID '%s' could not be found." msgstr "L'hyperviseur avec l'ID '%s' est introuvable." #, python-format msgid "IP allocation over quota in pool %s." msgstr "L'allocation IP dépasse le quota dans le pool %s." msgid "IP allocation over quota." msgstr "L'allocation IP dépasse le quota." #, python-format msgid "Image %(image_id)s could not be found." msgstr "L'image %(image_id)s n'a pas été trouvée." #, python-format msgid "Image %(image_id)s is not active." msgstr "L'image %(image_id)s n'est pas active." #, python-format msgid "Image %(image_id)s is unacceptable: %(reason)s" msgstr "L'image %(image_id)s est inacceptable: %(reason)s" msgid "Image disk size greater than requested disk size" msgstr "" "La taille du disque d'image est supérieure à la taille de disque demandée" msgid "Image is not raw format" msgstr "L'image n'est pas au format raw" msgid "Image metadata limit exceeded" msgstr "Limite de métadonnées d'image dépassée" #, python-format msgid "Image model '%(image)s' is not supported" msgstr "Le modèle d'image '%(image)s' n'est pas supporté" msgid "Image not found." msgstr "Image introuvable." #, python-format msgid "" "Image property '%(name)s' is not permitted to override NUMA configuration " "set against the flavor" msgstr "" "La propriété '%(name)s' de l'image n'est pas autorisée à réécrire la " "configuration NUMA réglé par rapport au type d'instance." msgid "" "Image property 'hw_cpu_policy' is not permitted to override CPU pinning " "policy set against the flavor" msgstr "" "La propriété d'image 'hw_cpu_policy' ne peut pas remplacer la règle de " "réservation d'unité centrale définie sur la version" msgid "" "Image property 'hw_cpu_thread_policy' is not permitted to override CPU " "thread pinning policy set against the flavor" msgstr "" "La propriété d'image 'hw_cpu_thread_policy' ne peut pas remplacer la règle " "de réservation de CPU définie sur la version" msgid "Image that the instance was started with could not be found." msgstr "L'image par laquelle l'instance a été démarrée est introuvable." #, python-format msgid "Image's config drive option '%(config_drive)s' is invalid" msgstr "" "L'option du lecteur de configuration de l'image '%(config_drive)s' est " "invalide" msgid "" "Images with destination_type 'volume' need to have a non-zero size specified" msgstr "" "Les images avec destination_type 'volume' doit avoir une taille différente " "de zéro ." msgid "In ERROR state" msgstr "A l'état ERREUR" #, python-format msgid "In states %(vm_state)s/%(task_state)s, not RESIZED/None" msgstr "Dans les états %(vm_state)s/%(task_state)s, non RESIZED/None" #, python-format msgid "In-progress live migration %(id)s is not found for server %(uuid)s." msgstr "" "La migration à chaud en cours %(id)s est introuvable pour le serveur " "%(uuid)s." msgid "" "Incompatible settings: ephemeral storage encryption is supported only for " "LVM images." msgstr "" "Paramètres incompatibles : le chiffrement éphémère du stockage est pris en " "charge uniquement pour les images LVM." 
#, python-format msgid "Info cache for instance %(instance_uuid)s could not be found." msgstr "Le cache d'infos pour l'instance %(instance_uuid)s est introuvable." #, python-format msgid "" "Instance %(instance)s and volume %(vol)s are not in the same " "availability_zone. Instance is in %(ins_zone)s. Volume is in %(vol_zone)s" msgstr "" "L'instance %(instance)s et le volume %(vol)s ne sont pas dans la même zone " "de disponibilité. L'instance est dans %(ins_zone)s. Le volume est dans " "%(vol_zone)s" #, python-format msgid "Instance %(instance)s does not have a port with id %(port)s" msgstr "L'instance %(instance)s n'a pas de port avec id %(port)s" #, python-format msgid "Instance %(instance_id)s cannot be rescued: %(reason)s" msgstr "Impossible de sauver l'instance %(instance_id)s : %(reason)s" #, python-format msgid "Instance %(instance_id)s could not be found." msgstr "L'instance %(instance_id)s n'a pas pu être trouvée." #, python-format msgid "Instance %(instance_id)s has no tag '%(tag)s'" msgstr "L'instance %(instance_id)s n'a pas d'étiquette '%(tag)s'." #, python-format msgid "Instance %(instance_id)s is not in rescue mode" msgstr "L'instance %(instance_id)s n'est pas en mode secours" #, python-format msgid "Instance %(instance_id)s is not ready" msgstr "L'instance %(instance_id)s n'est pas prête" #, python-format msgid "Instance %(instance_id)s is not running." msgstr "L'instance %(instance_id)s ne fonctionne pas." #, python-format msgid "Instance %(instance_id)s is unacceptable: %(reason)s" msgstr "L'instance %(instance_id)s est inacceptable: %(reason)s" #, python-format msgid "Instance %(instance_uuid)s does not specify a NUMA topology" msgstr "L'instance %(instance_uuid)s ne spécifie pas une topologie NUMA" #, python-format msgid "Instance %(instance_uuid)s does not specify a migration context." msgstr "L'instance %(instance_uuid)s ne spécifie pas de contexte de migration." #, python-format msgid "" "Instance %(instance_uuid)s in %(attr)s %(state)s. Cannot %(method)s while " "the instance is in this state." msgstr "" "L'instance %(instance_uuid)s dans %(attr)s %(state)s. Impossible de " "%(method)s pendant que l'instance est dans cet état." #, python-format msgid "Instance %(instance_uuid)s is locked" msgstr "L'instance %(instance_uuid)s est verrouillée" #, python-format msgid "" "Instance %(instance_uuid)s requires config drive, but it does not exist." msgstr "" "L'instance %(instance_uuid)s nécessite une unité de configuration, mais " "cette unité n'existe pas." #, python-format msgid "Instance %(name)s already exists." msgstr "L'instance %(name)s existe déjà." #, python-format msgid "Instance %(server_id)s is in an invalid state for '%(action)s'" msgstr "L'instance %(server_id)s est dans un état invalide pour '%(action)s'" #, python-format msgid "Instance %(uuid)s has no mapping to a cell." msgstr "L'instance %(uuid)s ne comporte aucun mappage vers une cellule." #, python-format msgid "Instance %s not found" msgstr "Instance %s non trouvé" #, python-format msgid "Instance %s provisioning was aborted" msgstr "Le provisionnement de l'instance %s a été interrompu" msgid "Instance could not be found" msgstr "Instance introuvable." msgid "Instance disk to be encrypted but no context provided" msgstr "Disque d'instance à chiffrer, mais aucun contexte fourni" msgid "Instance event failed" msgstr "Echec de l'événement d'instance" #, python-format msgid "Instance group %(group_uuid)s already exists." msgstr "Groupe d'instance %(group_uuid)s existe déjà." 
#, python-format msgid "Instance group %(group_uuid)s could not be found." msgstr "Groupe d'instance %(group_uuid)s ne peut pas etre trouvé." msgid "Instance has no source host" msgstr "L'instance n'a aucun hôte source" msgid "Instance has not been resized." msgstr "L'instance n'a pas été redimensionnée." #, python-format msgid "Instance hostname %(hostname)s is not a valid DNS name" msgstr "Le nom d'hôte de l'instance %(hostname)s n'est pas un nom DNS valide" #, python-format msgid "Instance is already in Rescue Mode: %s" msgstr "Instance déjà en mode secours : %s" msgid "Instance is not a member of specified network" msgstr "L'instance n'est pas un membre du réseau spécifié" #, python-format msgid "Instance rollback performed due to: %s" msgstr "Retour-arrière de l'instance réalisé du à: %s" #, python-format msgid "" "Insufficient Space on Volume Group %(vg)s. Only %(free_space)db available, " "but %(size)d bytes required by volume %(lv)s." msgstr "" "Espace insuffisant sur le groupe de volumes %(vg)s. Seulement " "%(free_space)db disponibles, mais %(size)db requis par volume %(lv)s." #, python-format msgid "Insufficient compute resources: %(reason)s." msgstr "Ressources de calcul insuffisante : %(reason)s." #, python-format msgid "Insufficient free memory on compute node to start %(uuid)s." msgstr "" "Mémoire libre insuffisante sur le noeud de calcul pour le démarrage de " "%(uuid)s." #, python-format msgid "Interface %(interface)s not found." msgstr "L'interface %(interface)s non trouvée." #, python-format msgid "Invalid Base 64 data for file %(path)s" msgstr "Contenu BAse 64 invalide pour le fichier %(path)s" msgid "Invalid Connection Info" msgstr "Informations de connexion non valides" #, python-format msgid "Invalid ID received %(id)s." msgstr "ID non valide %(id)s reçu." #, python-format msgid "Invalid IP format %s" msgstr "Format adresse IP non valide %s" #, python-format msgid "Invalid IP protocol %(protocol)s." msgstr "Le protocole IP %(protocol)s est invalide" msgid "" "Invalid PCI Whitelist: The PCI whitelist can specify devname or address, but " "not both" msgstr "" "Whitelist PCI invalide: la whitelist PCI peut spécifier le devname ou " "l'adresse, mais pas les deux." #, python-format msgid "Invalid PCI alias definition: %(reason)s" msgstr "Définition d'un alias PCI invalide : %(reason)s" #, python-format msgid "Invalid Regular Expression %s" msgstr "Expression régulière non valide %s" #, python-format msgid "Invalid characters in hostname '%(hostname)s'" msgstr "Caractères invalides dans le hostname '%(hostname)s'" msgid "Invalid config_drive provided." msgstr "Le config_drive fourni est invalide." #, python-format msgid "Invalid config_drive_format \"%s\"" msgstr "config_drive_format non valide \"%s\"" #, python-format msgid "Invalid console type %(console_type)s" msgstr "Type de console non valide %(console_type)s" #, python-format msgid "Invalid content type %(content_type)s." msgstr "Le type de contenu %(content_type)s est invalide" #, python-format msgid "Invalid datetime string: %(reason)s" msgstr "Chaîne de datetime invalide : %(reason)s" msgid "Invalid device UUID." msgstr "Périphérique UUID invalide." 
#, python-format msgid "Invalid entry: '%s'" msgstr "Entrée non valide : '%s'" #, python-format msgid "Invalid entry: '%s'; Expecting dict" msgstr "Entrée non valide : '%s' ; dictionnaire attendu" #, python-format msgid "Invalid entry: '%s'; Expecting list or dict" msgstr "Entrée non valide : '%s'; liste ou dictionnaire attendu" #, python-format msgid "Invalid exclusion expression %r" msgstr "Expression d'exclusion invalide %r" #, python-format msgid "Invalid image format '%(format)s'" msgstr "Format d'image invalide '%(format)s'" #, python-format msgid "Invalid image href %(image_href)s." msgstr "L'image href %(image_href)s est invalide." #, python-format msgid "Invalid inclusion expression %r" msgstr "Expression d'inclusion invalide %r" #, python-format msgid "" "Invalid input for field/attribute %(path)s. Value: %(value)s. %(message)s" msgstr "" "Entrée invalide pour champ/attribut %(path)s. Valeur : %(value)s. %(message)s" #, python-format msgid "Invalid input received: %(reason)s" msgstr "Entrée invalide reçue : %(reason)s" msgid "Invalid instance image." msgstr "Instance image non valide." #, python-format msgid "Invalid is_public filter [%s]" msgstr "Filtre is_public non valide [%s]" msgid "Invalid key_name provided." msgstr "key_name fourni non valide." #, python-format msgid "Invalid memory page size '%(pagesize)s'" msgstr "Taille de page de mémoire non valide '%(pagesize)s'" msgid "Invalid metadata key" msgstr "Clé de métadonnées non valide" #, python-format msgid "Invalid metadata size: %(reason)s" msgstr "Taille de métadonnée invalide : %(reason)s" #, python-format msgid "Invalid metadata: %(reason)s" msgstr "Métadonnée invalide : %(reason)s" #, python-format msgid "Invalid minDisk filter [%s]" msgstr "Filtre minDisk non valide [%s]" #, python-format msgid "Invalid minRam filter [%s]" msgstr "Filtre minRam non valide [%s]" #, python-format msgid "Invalid port range %(from_port)s:%(to_port)s. %(msg)s" msgstr "La plage de port %(from_port)s:%(to_port)s. %(msg)s est invalide" msgid "Invalid proxy request signature." msgstr "Signature de demande de proxy non valide." #, python-format msgid "Invalid range expression %r" msgstr "Valeur de %r invalide" msgid "Invalid service catalog json." msgstr "json de catalogue de service non valide." msgid "Invalid start time. The start time cannot occur after the end time." msgstr "" "Heure de début non valide. L'heure de début ne peut pas être définie après " "l'heure de fin." msgid "Invalid state of instance files on shared storage" msgstr "Etat non valide des fichiers d'instance sur la mémoire partagée" #, python-format msgid "Invalid timestamp for date %s" msgstr "Horodatage non valide pour la date %s" #, python-format msgid "Invalid usage_type: %s" msgstr "Type d'utilisation (usage_type) non valide : %s" #, python-format msgid "Invalid value for Config Drive option: %(option)s" msgstr "Valeur invalide pour l'option du lecteur de configuration : %(option)s" #, python-format msgid "Invalid virtual interface address %s in request" msgstr "Adresse d'interface virtuelle %s non valide dans la demande" #, python-format msgid "Invalid volume access mode: %(access_mode)s" msgstr "Mode d'accès au volume invalide : %(access_mode)s" #, python-format msgid "Invalid volume: %(reason)s" msgstr "Volume invalide : %(reason)s" msgid "Invalid volume_size." msgstr "volume_size invalide." #, python-format msgid "Ironic node uuid not supplied to driver for instance %s." msgstr "L' uuid du noeud Ironic non fourni au pilote pour instance %s." 
#, python-format msgid "" "It is not allowed to create an interface on external network %(network_uuid)s" msgstr "" "Il est interdit de créer une interface sur le réseau externe %(network_uuid)s" #, python-format msgid "" "Kernel/Ramdisk image is too large: %(vdi_size)d bytes, max %(max_size)d bytes" msgstr "" "L'image Kernel/Ramdisk est trop volumineuse : %(vdi_size)d octets, max " "%(max_size)d octets" msgid "" "Key Names can only contain alphanumeric characters, periods, dashes, " "underscores, colons and spaces." msgstr "" "Les noms de clé peuvent seulement contenir des caractères alphanumériques, " "des points, des tirets, des traits de soulignement, des deux-points et des " "espaces." #, python-format msgid "Key manager error: %(reason)s" msgstr "Erreur du gestionaire de clé: %(reason)s" #, python-format msgid "Key pair '%(key_name)s' already exists." msgstr "La paire de clés %(key_name)s' existe déjà." #, python-format msgid "Keypair %(name)s not found for user %(user_id)s" msgstr "" "La paire de clés %(name)s est introuvable pour l'utilisateur %(user_id)s" #, python-format msgid "Keypair data is invalid: %(reason)s" msgstr "La donnée de paire de clés est invalide : %(reason)s" msgid "Keypair name contains unsafe characters" msgstr "Le nom de la paire de clés contient des caractères non sécurisés" msgid "Keypair name must be string and between 1 and 255 characters long" msgstr "" "La paire de clé doit être une chaîne et de longueur comprise entre 1 et 255 " "caractères" msgid "Limits only supported from vCenter 6.0 and above" msgstr "Limites seulement supportées sur vCenter 6.0 et supérieur" #, python-format msgid "Live migration %(id)s for server %(uuid)s is not in progress." msgstr "" "La migration à chaud %(id)s pour le serveur %(uuid)s n'est pas en cours." #, python-format msgid "Malformed message body: %(reason)s" msgstr "Format de corps de message non valide : %(reason)s" #, python-format msgid "" "Malformed request URL: URL's project_id '%(project_id)s' doesn't match " "Context's project_id '%(context_project_id)s'" msgstr "" "URL de demande incorrectement formée : l'ID projet '%(project_id)s' de l'URL " "ne correspond pas à l'ID projet '%(context_project_id)s' du contexte" msgid "Malformed request body" msgstr "Format de corps de demande incorrect" msgid "Mapping image to local is not supported." msgstr "Le mappage de l'image sur local n'est pas pris en charge." #, python-format msgid "Marker %(marker)s could not be found." msgstr "Le marqueur %(marker)s est introuvable." msgid "Maximum number of floating IPs exceeded" msgstr "Nombre maximal d'adresses IP flottantes dépassé" msgid "Maximum number of key pairs exceeded" msgstr "Nombre maximal de paires de clés dépassé" #, python-format msgid "Maximum number of metadata items exceeds %(allowed)d" msgstr "Le nombre maximal d'éléments de métadonnées dépasse %(allowed)d" msgid "Maximum number of ports exceeded" msgstr "Nombre maximum de ports dépassé" msgid "Maximum number of security groups or rules exceeded" msgstr "Nombre maximal de groupes de sécurité ou de règles dépassé" msgid "Metadata item was not found" msgstr "Elément de métadonnées introuvable" msgid "Metadata property key greater than 255 characters" msgstr "" "Taille de la clé de propriété de métadonnées supérieure à 255 caractères" msgid "Metadata property value greater than 255 characters" msgstr "" "Taille de la valeur de propriété de métadonnées supérieure à 255 caractères" msgid "Metadata type should be dict." 
msgstr "Le type de métadonnée doit être un dictionnaire." #, python-format msgid "" "Metric %(name)s could not be found on the compute host node %(host)s." "%(node)s." msgstr "" "La métrique %(name)s ne peut être trouvé sur le noeud de calcul de l'hôte " "%(host)s.%(node)s." msgid "Migrate Receive failed" msgstr "Echec de réception de la migration" msgid "Migrate Send failed" msgstr "Echec d'envoi de la migration" #, python-format msgid "Migration %(id)s for server %(uuid)s is not live-migration." msgstr "" "La migration %(id)s pour le serveur %(uuid)s n'est pas une migration à chaud." #, python-format msgid "Migration %(migration_id)s could not be found." msgstr "La migration %(migration_id)s ne peut être trouvée." #, python-format msgid "Migration %(migration_id)s not found for instance %(instance_id)s" msgstr "Migration %(migration_id)s introuvable pour l'instance %(instance_id)s" #, python-format msgid "" "Migration %(migration_id)s state of instance %(instance_uuid)s is %(state)s. " "Cannot %(method)s while the migration is in this state." msgstr "" "L'état de la migration %(migration_id)s de l'instance %(instance_uuid)s est " "%(state)s. Impossible de %(method)s tant que la migration est dans cet état." #, python-format msgid "Migration error: %(reason)s" msgstr "Erreur de migration: %(reason)s" msgid "Migration is not supported for LVM backed instances" msgstr "" "La migration n'est pas prise en charge pour des instances sauvegardées par " "LVM" #, python-format msgid "" "Migration not found for instance %(instance_id)s with status %(status)s." msgstr "" "Migration non trouvée pour l'instance %(instance_id)s avec le statut " "%(status)s." #, python-format msgid "Migration pre-check error: %(reason)s" msgstr "Erreur lors de la vérification de la migration: %(reason)s" #, python-format msgid "Migration select destinations error: %(reason)s" msgstr "Erreur de sélection de destinations de migration : %(reason)s" #, python-format msgid "Missing arguments: %s" msgstr "Arguments manquants : %s" #, python-format msgid "Missing column %(table)s.%(column)s in shadow table" msgstr "Colonne %(table)s.%(column)s manquante dans la table shadow" msgid "Missing device UUID." msgstr "Périphérique UUID manquant." msgid "Missing disabled reason field" msgstr "Le champ de la raison de désactivation est manquant" msgid "Missing forced_down field" msgstr "Champ forced_down manquant" msgid "Missing imageRef attribute" msgstr "Attribut imageRef manquant" #, python-format msgid "Missing keys: %s" msgstr "Clés manquantes : %s" #, python-format msgid "Missing parameter %s" msgstr "Le paramètre %s est manquant" msgid "Missing parameter dict" msgstr "Paramètre dict manquant" #, python-format msgid "" "More than one instance is associated with fixed IP address '%(address)s'." msgstr "Plusieurs instances sont associées à l'adresse IP fixe '%(address)s'." msgid "" "More than one possible network found. Specify network ID(s) to select which " "one(s) to connect to." msgstr "" "Il y a plusieurs réseaux possibles. Veuillez indiquer les ID(s) de réseau " "pour que l'on sache auquel(s) se connecter." msgid "More than one swap drive requested." msgstr "Plusieurs unités demandées." #, python-format msgid "Multi-boot operating system found in %s" msgstr "Système d'exploitation à plusieurs démarrages trouvé dans %s" msgid "Multiple X-Instance-ID headers found within request." msgstr "Plusieurs en-têtes Multiple X-Instance-ID trouvés dans la demande." msgid "Multiple X-Tenant-ID headers found within request." 
msgstr "En-tête X-Tenant-ID multiple trouvés a l'interieur de la requete" #, python-format msgid "Multiple floating IP pools matches found for name '%s'" msgstr "" "Plusieurs correspondances de pools d'adresses IP flottantes trouvées pour le " "nom '%s'" #, python-format msgid "Multiple floating IPs are found for address %(address)s." msgstr "Plusieurs adresses IP flottantes trouvées pour l'adresse %(address)s." msgid "" "Multiple hosts may be managed by the VMWare vCenter driver; therefore we do " "not return uptime for just one host." msgstr "" "Plusieurs hôtes peuvent être gérés par le pilote VMWare vCenter ; par " "conséquent, nous ne retournons pas la disponibilité pour un seul hôte." msgid "Multiple possible networks found, use a Network ID to be more specific." msgstr "" "Plusieurs réseaux possibles trouvés. Utilisez un ID réseau pour préciser " "votre demande." #, python-format msgid "" "Multiple security groups found matching '%s'. Use an ID to be more specific." msgstr "" "Plusieurs groupes de sécurité ont été trouvés correspondant à '%s'. Utilisez " "un ID pour préciser votre demande." msgid "Must input network_id when request IP address" msgstr "network_id doit être entré lors de la demande d'adresse IP" msgid "Must not input both network_id and port_id" msgstr "Vous ne devez pas entrer à la fois network_id et port_id" msgid "" "Must specify connection_url, connection_username (optionally), and " "connection_password to use compute_driver=xenapi.XenAPIDriver" msgstr "" "Il faut indiquer connection_url, connection_username (facultatif) et " "connection_password pour utiliser compute_driver=xenapi.XenAPIDriver" msgid "" "Must specify host_ip, host_username and host_password to use vmwareapi." "VMwareVCDriver" msgstr "" "Il faut indiquer host_ip, host_username et host_password pour utiliser " "vmwareapi.VMwareVCDriver" msgid "Must supply a positive value for max_number" msgstr "Veuillez fournir une valeur positive pour max_number" msgid "Must supply a positive value for max_rows" msgstr "Veuillez fournir une valeur positive pour max_rows" #, python-format msgid "Network %(network_id)s could not be found." msgstr "Le réseau %(network_id)s n'a pas été trouvé." #, python-format msgid "" "Network %(network_uuid)s requires a subnet in order to boot instances on." msgstr "" "Réseau %(network_uuid)s demande un sous réseau pour pouvoir démarrer des " "instances dessus." #, python-format msgid "Network could not be found for bridge %(bridge)s" msgstr "Aucun réseau trouvé pour le pont %(bridge)s" #, python-format msgid "Network could not be found for instance %(instance_id)s." msgstr "Aucun réseau trouvé pour l'instance %(instance_id)s." msgid "Network not found" msgstr "Réseau non trouvé" msgid "" "Network requires port_security_enabled and subnet associated in order to " "apply security groups." msgstr "" "Le réseau nécessite port_security_enabled et le sous-réseau associé afin " "d'appliquer les groupes de sécurité." msgid "New volume must be detached in order to swap." msgstr "Le nouveau volume doit être détaché afin de permuter." msgid "New volume must be the same size or larger." msgstr "Le nouveau volume doit être de la même taille ou plus grand." #, python-format msgid "No Block Device Mapping with id %(id)s." msgstr "Pas de mappage d'unité par bloc avec l'id %(id)s." msgid "No Unique Match Found." msgstr "Correspondance unique non trouvée." #, python-format msgid "No agent-build associated with id %(id)s." msgstr "Aucune génération d'agent associée à l'ID %(id)s." 
msgid "No compute host specified" msgstr "Aucun hôte de calcul spécifié" #, python-format msgid "No configuration information found for operating system %(os_name)s" msgstr "" "Aucune information de configuration n'a été trouvée pour le système " "d'exploitation %(os_name)s" #, python-format msgid "No device with MAC address %s exists on the VM" msgstr "" "Aucun périphérique ayant pour adresse MAC %s n'existe sur la machine " "virtuelle" #, python-format msgid "No device with interface-id %s exists on VM" msgstr "" "Aucun périphérique ayant pour interface-id %s n'existe sur la machine " "virtuelle" #, python-format msgid "No disk at %(location)s" msgstr "Aucun disque sur %(location)s" #, python-format msgid "No fixed IP addresses available for network: %(net)s" msgstr "Pas d'adresses IP fixes disponibles pour le réseau : %(net)s" msgid "No fixed IPs associated to instance" msgstr "Aucune adresse IP fixe associée à l'instance" msgid "No free nbd devices" msgstr "Pas de device nbd libre" msgid "No host available on cluster" msgstr "Aucun hôte disponible sur le cluster" msgid "No hosts found to map to cell, exiting." msgstr "Aucun hôte à mapper à la cellule n'a été trouvé. Sortie..." #, python-format msgid "No hypervisor matching '%s' could be found." msgstr "Aucun hyperviseur correspondant à '%s' n'a été trouvé." msgid "No image locations are accessible" msgstr "Aucun emplacement d'image n'est accessible" #, python-format msgid "" "No live migration URI configured and no default available for \"%(virt_type)s" "\" hypervisor virtualization type." msgstr "" "Aucun URI de migration à chaud n'est configuré et aucune valeur par défaut " "n'est disponible pour le type de virtualisation d'hyperviseur \"%(virt_type)s" "\"." msgid "No more floating IPs available." msgstr "Plus d'adresses IP flottantes disponibles." #, python-format msgid "No more floating IPs in pool %s." msgstr "Plus d'adresses IP flottantes disponibles dans le pool %s." #, python-format msgid "No mount points found in %(root)s of %(image)s" msgstr "Aucun point de montage trouvé dans %(root)s de l'image %(image)s" #, python-format msgid "No operating system found in %s" msgstr "Aucun système d'exploitation trouvé dans %s" #, python-format msgid "No primary VDI found for %s" msgstr "Aucun VDI primaire trouvé pour %s" msgid "No root disk defined." msgstr "Aucun disque racine défini." #, python-format msgid "" "No specific network was requested and none are available for project " "'%(project_id)s'." msgstr "" "Aucun réseau spécifique n'a été demandé et il n'existe aucun réseau " "disponible pour le projet '%(project_id)s'." msgid "No suitable network for migrate" msgstr "Aucun réseau adéquat pour migrer" msgid "No valid host found for cold migrate" msgstr "Aucun hôte valide n'a été trouvé pour la migration à froid" msgid "No valid host found for resize" msgstr "Aucun hôte valide n'a été trouvé pour le redimensionnement" #, python-format msgid "No valid host was found. %(reason)s" msgstr "Hôte non valide trouvé. %(reason)s" #, python-format msgid "No volume Block Device Mapping at path: %(path)s" msgstr "Pas de volume de mappage d'unité de bloc en: %(path)s." #, python-format msgid "No volume Block Device Mapping with id %(volume_id)s." msgstr "Pas de volume de mappage d'unité de bloc avec l'id: %(volume_id)s." #, python-format msgid "Node %s could not be found." msgstr "Noeud %s introuvable." 
#, python-format msgid "Not able to acquire a free port for %(host)s" msgstr "Pas capable d'acquérir un port libre pour %(host)s" #, python-format msgid "Not able to bind %(host)s:%(port)d, %(error)s" msgstr "Pas capable de lier %(host)s : %(port)d, %(error)s " #, python-format msgid "" "Not all Virtual Functions of PF %(compute_node_id)s:%(address)s are free." msgstr "" "Les fonctions virtuelles de PF %(compute_node_id)s:%(address)s ne sont pas " "toutes libres." msgid "Not an rbd snapshot" msgstr "N'est pas un instantané rbd" #, python-format msgid "Not authorized for image %(image_id)s." msgstr "Non autorisé pour l'image %(image_id)s." msgid "Not authorized." msgstr "Non autorisé." msgid "Not enough parameters to build a valid rule." msgstr "Pas assez de parametres pour contruire un règle valide." msgid "Not implemented on Windows" msgstr "Non implémenté sous Windows" msgid "Not stored in rbd" msgstr "Non stocké dans rbd" msgid "Nothing was archived." msgstr "Aucun élément archivé." #, python-format msgid "Nova requires libvirt version %s or greater." msgstr "Nova nécessite libvirt %s ou une version ultérieure." msgid "Number of Rows Archived" msgstr "Nombre de lignes archivées" #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "L'action de l'objet %(action)s a échoué car : %(reason)s" msgid "Old volume is attached to a different instance." msgstr "L'ancien volume est attaché à une instance différente." #, python-format msgid "One or more hosts already in availability zone(s) %s" msgstr "" "Un ou plusieurs hôte(s) sont déjà présents dans la(les) zone(s) de " "disponibilité(s) %s" msgid "Only administrators may list deleted instances" msgstr "Seul l'administrateur peut afficher la liste des instances supprimées" #, python-format msgid "" "Only file-based SRs (ext/NFS) are supported by this feature. SR %(uuid)s is " "of type %(type)s" msgstr "" "Seuls les demandes de service (ext/NFS) basées sur des fichiers sont pris en " "charge par cette fonctionnalité. La demande de service %(uuid)s est de type " "%(type)s" msgid "Origin header does not match this host." msgstr "L'en-tête d'origine ne correspond pas à cet hôte." msgid "Origin header not valid." msgstr "En-tête d'origine non valide." msgid "Origin header protocol does not match this host." msgstr "Le protocole de l'en-tête d'origine ne correspond pas à cet hôte." #, python-format msgid "PCI Device %(node_id)s:%(address)s not found." msgstr "Périphérique PCI %(node_id)s:%(address)s introuvable." #, python-format msgid "PCI alias %(alias)s is not defined" msgstr "L'alias PCI %(alias)s n'est pas défini" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is %(status)s instead of " "%(hopestatus)s" msgstr "" "Le pérphérique PCI %(compute_node_id)s:%(address)s est %(status)s au lieu de " "%(hopestatus)s" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is owned by %(owner)s instead of " "%(hopeowner)s" msgstr "" "Le périphérique PCI %(compute_node_id)s:%(address)s appartient à %(owner)s " "au lieu de %(hopeowner)s" #, python-format msgid "PCI device %(id)s not found" msgstr "Périphérique PCI %(id)s introuvable" #, python-format msgid "PCI device request %(requests)s failed" msgstr "La requête %(requests)s au périphérique PCI a échoué." #, python-format msgid "PIF %s does not contain IP address" msgstr "INT. PHYS. 
%s ne contient pas l'adresse IP" #, python-format msgid "Page size %(pagesize)s forbidden against '%(against)s'" msgstr "Taille de page %(pagesize)s interdite sur '%(against)s'" #, python-format msgid "Page size %(pagesize)s is not supported by the host." msgstr "Taille de page %(pagesize)s non prise en charge par l'hôte." #, python-format msgid "" "Parameters %(missing_params)s not present in vif_details for vif %(vif_id)s. " "Check your Neutron configuration to validate that the macvtap parameters are " "correct." msgstr "" "Le paramètre %(missing_params)s est absent de vif_details pour le vif " "%(vif_id)s. Vérifiez votre configuration de Neutron pour valider que les " "parapmètres macvtap sont correct." #, python-format msgid "Path %s must be LVM logical volume" msgstr "Le chemin %s doit être un volume logique LVM." msgid "Paused" msgstr "En pause" msgid "Personality file limit exceeded" msgstr "Limite de fichier de personnalité dépassé" #, python-format msgid "" "Physical Function %(compute_node_id)s:%(address)s, related to VF " "%(compute_node_id)s:%(vf_address)s is %(status)s instead of %(hopestatus)s" msgstr "" "La fonction physique %(compute_node_id)s:%(address)s, associée à VF " "%(compute_node_id)s:%(vf_address)s est %(status)s au lieu de %(hopestatus)s" #, python-format msgid "Physical network is missing for network %(network_uuid)s" msgstr "Réseau physique manquant pour le réseau %(network_uuid)s" #, python-format msgid "Policy doesn't allow %(action)s to be performed." msgstr "Le réglage des droits n'autorise pas %(action)s à être effectué(e)(s)" #, python-format msgid "Port %(port_id)s is still in use." msgstr "Le port %(port_id)s est encore en cours d'utilisation." #, python-format msgid "Port %(port_id)s not usable for instance %(instance)s." msgstr "Port %(port_id)s inutilisable pour l'instance %(instance)s." #, python-format msgid "" "Port %(port_id)s not usable for instance %(instance)s. Value %(value)s " "assigned to dns_name attribute does not match instance's hostname " "%(hostname)s" msgstr "" "Le port %(port_id)s n'est pas utilisable pour l'instance %(instance)s. La " "valeur %(value)s affectée à l'attribut dns_name ne correspond pas au nom " "d'hôte de l'instance %(hostname)s" #, python-format msgid "Port %(port_id)s requires a FixedIP in order to be used." msgstr "Port %(port_id)s demande une IP fixe pour être utilisé." #, python-format msgid "Port %s is not attached" msgstr "Le port %s n'est pas connecté" #, python-format msgid "Port id %(port_id)s could not be found." msgstr "ID port %(port_id)s introuvable." #, python-format msgid "Provided video model (%(model)s) is not supported." msgstr "Le modèle de vidéo fourni (%(model)s) n'est pas supporté." #, python-format msgid "Provided watchdog action (%(action)s) is not supported." msgstr "L'action de garde fourni (%(action)s) n'est pas supportée" msgid "QEMU guest agent is not enabled" msgstr "L'agent invité QEMU n'est pas activé" #, python-format msgid "Quiescing is not supported in instance %(instance_id)s" msgstr "" "La mise au repos n'est pas prise en charge dans l'instance %(instance_id)s" #, python-format msgid "Quota class %(class_name)s could not be found." msgstr "Classe de quota %(class_name)s introuvable." 
msgid "Quota could not be found" msgstr "Le quota ne peut pas être trouvé" #, python-format msgid "" "Quota exceeded for %(overs)s: Requested %(req)s, but already used %(used)s " "of %(allowed)s %(overs)s" msgstr "" "Quota dépassé pour %(overs)s : demandé %(req)s, mais %(used)s déjà utilisés" "%(allowed)s %(overs)s" #, python-format msgid "Quota exceeded for resources: %(overs)s" msgstr "Quota dépassé pour les ressources : %(overs)s" msgid "Quota exceeded, too many key pairs." msgstr "Quota dépassé, trop de paires de clés." msgid "Quota exceeded, too many server groups." msgstr "Quota dépassé, trop de groupes de serveur." msgid "Quota exceeded, too many servers in group" msgstr "Quota dépassé, trop de serveurs dans le groupe" #, python-format msgid "Quota exceeded: code=%(code)s" msgstr "Quota dépassé: code=%(code)s" #, python-format msgid "Quota exists for project %(project_id)s, resource %(resource)s" msgstr "Le quota existe pour le projet %(project_id)s, ressource %(resource)s" #, python-format msgid "Quota for project %(project_id)s could not be found." msgstr "Le quota pour le projet %(project_id)s ne peut pas être trouvé." #, python-format msgid "" "Quota for user %(user_id)s in project %(project_id)s could not be found." msgstr "" "Le quota de l'utilisateur %(user_id)s dans le projet %(project_id)s ne peut " "être trouvé." #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be greater than or equal to " "already used and reserved %(minimum)s." msgstr "" "Le quota limite %(limit)s pour %(resource)s doit être supérieur ou égal a " "celle déjà utilisé et réservé: %(minimum)s." #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be less than or equal to " "%(maximum)s." msgstr "" "Le quota limite %(limit)s pour %(resource)s doit être inferieur ou égal à " "%(maximum)s." #, python-format msgid "Reached maximum number of retries trying to unplug VBD %s" msgstr "" "Nombre maximal de nouvelles tentatives atteint pour le débranchement de VBD " "%s" msgid "" "Realtime policy needs vCPU(s) mask configured with at least 1 RT vCPU and 1 " "ordinary vCPU. See hw:cpu_realtime_mask or hw_cpu_realtime_mask" msgstr "" "La stratégie en temps réel nécessite le masque vCPU(s) configuré avec au " "moins 1 RT vCPU et 1 vCPU ordinaire. Voir hw:cpu_realtime_mask ou " "hw_cpu_realtime_mask" msgid "Request body and URI mismatch" msgstr "Corps et URI de demande discordants" msgid "Request is too large." msgstr "La demande est trop grande." #, python-format msgid "Request of image %(image_id)s got BadRequest response: %(response)s" msgstr "" "La demande d'image %(image_id)s a reçu une réponse de demande incorrecte : " "%(response)s" #, python-format msgid "RequestSpec not found for instance %(instance_uuid)s" msgstr "RequestSpec non trouvé pour l'instance %(instance_uuid)s" msgid "Requested CPU control policy not supported by host" msgstr "" "La stratégie de contrôle d'UC demandée n'est pas prise en charge par l'hôte" #, python-format msgid "" "Requested hardware '%(model)s' is not supported by the '%(virt)s' virt driver" msgstr "" "Le matériel demandé '%(model)s' n'est pas pris en charge par le pilote " "virtuel '%(virt)s'" #, python-format msgid "Requested image %(image)s has automatic disk resize disabled." msgstr "" "Le redimensionnement automatique du disque est désactivé pour l'image " "requise : %(image)s." 
msgid "" "Requested instance NUMA topology cannot fit the given host NUMA topology" msgstr "" "La topologie NUMA de l'instance demandée ne tient pas dans la topologie NUMA " "de l'hôte donné" msgid "" "Requested instance NUMA topology together with requested PCI devices cannot " "fit the given host NUMA topology" msgstr "" "La topologie de l'instance NUMA demandée avec les périphériques PCI requis " "ne tient pas dans la topologie NUMA de l'hôte donné" #, python-format msgid "" "Requested vCPU limits %(sockets)d:%(cores)d:%(threads)d are impossible to " "satisfy for vcpus count %(vcpus)d" msgstr "" "Les limites vCPU demandées %(sockets)d:%(cores)d:%(threads)d ne peuvent être " "satisfaite pour le nombre de vcpus %(vcpus)d" #, python-format msgid "Rescue device does not exist for instance %s" msgstr "L'unité Rescue n'existe pas pour l'instance %s" #, python-format msgid "Resize error: %(reason)s" msgstr "Erreur de redimensionnement : %(reason)s" msgid "Resize to zero disk flavor is not allowed." msgstr "Le redimensionnement sur une version de disque nulle est interdit." msgid "Resource could not be found." msgstr "La ressource n'a pas pu être trouvée." msgid "Resumed" msgstr "Repris" #, python-format msgid "Root element name should be '%(name)s' not '%(tag)s'" msgstr "Le nom de l'élément racine doit être '%(name)s', pas '%(tag)s'" #, python-format msgid "Running batches of %i until complete" msgstr "Exécution des lots de %i jusqu'à la fin" #, python-format msgid "Scheduler Host Filter %(filter_name)s could not be found." msgstr "La plannification de filtre hôte %(filter_name)s ne peut être trouvée." #, python-format msgid "Security group %(name)s is not found for project %(project)s" msgstr "Groupe de sécurité %(name)s introuvable pour le projet %(project)s" #, python-format msgid "" "Security group %(security_group_id)s not found for project %(project_id)s." msgstr "" "Groupe de sécurité %(security_group_id)s non trouvé pour le projet " "%(project_id)s." #, python-format msgid "Security group %(security_group_id)s not found." msgstr "Groupe de sécurité %(security_group_id)s non trouvé." #, python-format msgid "" "Security group %(security_group_name)s already exists for project " "%(project_id)s." msgstr "" "Groupe de sécurité %(security_group_name)s existe déjà pour le projet " "%(project_id)s." 
#, python-format msgid "" "Security group %(security_group_name)s not associated with the instance " "%(instance)s" msgstr "" "Le groupe de sécurité %(security_group_name)s n'est pas associé à l'instance " "%(instance)s" msgid "Security group id should be uuid" msgstr "L'ID groupe de sécurité doit être un UUID" msgid "Security group name cannot be empty" msgstr "Le nom du groupe de sécurité ne peut pas être vide" msgid "Security group not specified" msgstr "Groupe de sécurité non spécifié" #, python-format msgid "Server disk was unable to be resized because: %(reason)s" msgstr "" "La taille du disque du serveur n'a pas pu être modifiée car : %(reason)s" msgid "Server does not exist" msgstr "Le serveur n'existe pas" #, python-format msgid "ServerGroup policy is not supported: %(reason)s" msgstr "La stratégie ServerGroup n'est pas supporté: %(reason)s" msgid "ServerGroupAffinityFilter not configured" msgstr "ServerGroupAffinityFilter non configuré" msgid "ServerGroupAntiAffinityFilter not configured" msgstr "ServerGroupAntiAffinityFilter non configuré" msgid "ServerGroupSoftAffinityWeigher not configured" msgstr "ServerGroupSoftAffinityWeigher non configuré" msgid "ServerGroupSoftAntiAffinityWeigher not configured" msgstr "ServerGroupSoftAntiAffinityWeigher non configuré" #, python-format msgid "Service %(service_id)s could not be found." msgstr "Le service %(service_id)s ne peut pas être trouvé." #, python-format msgid "Service %s not found." msgstr "Service %s non trouvé." msgid "Service is unavailable at this time." msgstr "Le service est indisponible actuellement." #, python-format msgid "Service with host %(host)s binary %(binary)s exists." msgstr "Le service avec l'hôte %(host)s et le binaire %(binary)s existe." #, python-format msgid "Service with host %(host)s topic %(topic)s exists." msgstr "Service avec l'hôte %(host)s et le topic %(topic)s existe." msgid "Set admin password is not supported" msgstr "La définition du mot de passe admin n'est pas supportée" #, python-format msgid "Shadow table with name %(name)s already exists." msgstr "La table fantôme avec le nom %(name)s existe déjà." #, python-format msgid "Share '%s' is not supported" msgstr "Le partage '%s' n'est pas pris en charge" #, python-format msgid "Share level '%s' cannot have share configured" msgstr "Le niveau de partage '%s' n'a pas de partage configuré" msgid "" "Shrinking the filesystem down with resize2fs has failed, please check if you " "have enough free space on your disk." msgstr "" "Echec de la réduction de la taille du système de fichiers avec resize2fs, " "veuillez vérifier si vous avez suffisamment d'espace disponible sur votre " "disque." #, python-format msgid "Snapshot %(snapshot_id)s could not be found." msgstr "Le snapshot %(snapshot_id)s n'a pas été trouvé." msgid "Some required fields are missing" msgstr "Des champs requis sont manquants" #, python-format msgid "" "Something went wrong when deleting a volume snapshot: rebasing a " "%(protocol)s network disk using qemu-img has not been fully tested" msgstr "" "Une erreur s'est produite lors de la suppression d'un instantané de volume : " "relocaliser un disque réseau %(protocol)s avec qemu-img n'a pas été " "entièrement testé" msgid "Sort direction size exceeds sort key size" msgstr "La taille du sens de tri dépasse la taille de la clé de tri" msgid "Sort key supplied was not valid." msgstr "La clé de tri fournie n'était pas valide." 
msgid "Specified fixed address not assigned to instance" msgstr "L'adresse fixe spécifiée n'est pas assignée à une instance" msgid "Specify `table_name` or `table` param" msgstr "Spécifiez un paramètre pour`table_name` ou `table`" msgid "Specify only one param `table_name` `table`" msgstr "Spécifiez seulement un paramètre pour `table_name` `table`" msgid "Started" msgstr "Démarré" msgid "Stopped" msgstr "Stoppé" #, python-format msgid "Storage error: %(reason)s" msgstr "Erreur de stockage: %(reason)s" #, python-format msgid "Storage policy %s did not match any datastores" msgstr "Les règles de stockage %s ne correspondent à aucun magasin de données" msgid "Success" msgstr "Succès" msgid "Suspended" msgstr "Suspendue" msgid "Swap drive requested is larger than instance type allows." msgstr "" "Le lecteur de swap demandé est plus grand que ce que le type d'instance " "autorise." msgid "Table" msgstr "Tableau" #, python-format msgid "Task %(task_name)s is already running on host %(host)s" msgstr "" "La tâche %(task_name)s est déjà en cours d'exécution sur l'hôte %(host)s" #, python-format msgid "Task %(task_name)s is not running on host %(host)s" msgstr "" "La tâche %(task_name)s n'est pas en cours d'exécution sur l'hôte %(host)s" #, python-format msgid "The PCI address %(address)s has an incorrect format." msgstr "L'adresse PCI %(address)s a un format incorrect." msgid "The backlog must be more than 0" msgstr "Le backlog doit être supérieur à 0" #, python-format msgid "The console port range %(min_port)d-%(max_port)d is exhausted." msgstr "L'intervalle de ports console %(min_port)d-%(max_port)d est épuisé." msgid "The created instance's disk would be too small." msgstr "Le disque de l'instance créée serait trop petit." msgid "The current driver does not support preserving ephemeral partitions." msgstr "Le pilote actuel ne permet pas de préserver les partitions éphémères." msgid "The default PBM policy doesn't exist on the backend." msgstr "La règle PBM par défaut n'existe pas sur le back-end." msgid "The floating IP request failed with a BadRequest" msgstr "La demande d'IP flottante a échouée avec l'erreur Mauvaise Requête" msgid "" "The instance requires a newer hypervisor version than has been provided." msgstr "" "L'instance nécessite une version plus récente de l'hyperviseur que celle " "fournie." #, python-format msgid "The number of defined ports: %(ports)d is over the limit: %(quota)d" msgstr "Le nombre de ports définis (%(ports)d) dépasse la limite (%(quota)d)" msgid "The only partition should be partition 1." msgstr "La seule partition doit être la partition 1." #, python-format msgid "The provided RNG device path: (%(path)s) is not present on the host." msgstr "" "Le chemin du périphérique RNG donné: (%(path)s) n'est pas présent sur l’hôte." msgid "The request body can't be empty" msgstr "Le corps de la requete ne peut être vide" msgid "The request is invalid." msgstr "La requête est invalide." #, python-format msgid "" "The requested amount of video memory %(req_vram)d is higher than the maximum " "allowed by flavor %(max_vram)d." msgstr "" "La quantité de mémoire vidéo demandée %(req_vram)d est plus élevée que le " "maximum autorisé par la version: %(max_vram)d." msgid "The requested availability zone is not available" msgstr "La zone de disponibilité demandée n'est pas disponible" msgid "The requested console type details are not accessible" msgstr "Les détails du type de console demandé ne sont pas accessibles" msgid "The requested functionality is not supported." 
msgstr "La fonctionnalité demandée n'est pas suportée" #, python-format msgid "The specified cluster '%s' was not found in vCenter" msgstr "Le cluster spécifié, '%s', est introuvable dans vCenter" #, python-format msgid "The supplied device path (%(path)s) is in use." msgstr "" "le chemin d'accès d'unité fourni (%(path)s) est en cours d'utilisation." #, python-format msgid "The supplied device path (%(path)s) is invalid." msgstr "Le chemin de périphérique (%(path)s) est invalide." #, python-format msgid "" "The supplied disk path (%(path)s) already exists, it is expected not to " "exist." msgstr "" "Le chemin d'accès du disque (%(path)s) existe déjà, il n'était pas prévu " "d'exister." msgid "The supplied hypervisor type of is invalid." msgstr "Le type de l'hyperviseur fourni n'est pas valide." msgid "The target host can't be the same one." msgstr "L'hôte de la cible ne peut pas être le même." #, python-format msgid "The token '%(token)s' is invalid or has expired" msgstr "Le jeton '%(token)s' est invalide ou a expiré" #, python-format msgid "" "The volume cannot be assigned the same device name as the root device %s" msgstr "" "Le volume ne peut pas recevoir le même nom d'unité que l'unité racine %s" #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. Run this command again with the --delete " "option after you have backed up any necessary data." msgstr "" "%(records)d enregistrements sont présents dans la table '%(table_name)s' " "dans laquelle la colonne uuid ou instance_uuid a pour valeur NULL. " "Réexécutez cette commande avec l'option --delete après la sauvegarde des " "données nécessaires." #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. These must be manually cleaned up before " "the migration will pass. Consider running the 'nova-manage db " "null_instance_uuid_scan' command." msgstr "" "%(records)d enregistrements sont présents dans la table '%(table_name)s' " "dans laquelle la colonne uuid ou instance_uuid a pour valeur NULL. Ils " "doivent être manuellement nettoyés avant la migration. Prévoyez d'exécuter " "la commande 'nova-manage db null_instance_uuid_scan'." msgid "There are not enough hosts available." msgstr "Le nombre d'hôtes disponibles est insuffisant" #, python-format msgid "" "There are still %(count)i unmigrated flavor records. Migration cannot " "continue until all instance flavor records have been migrated to the new " "format. Please run `nova-manage db migrate_flavor_data' first." msgstr "" "Il existe encore %(count)i enregistrements de version non migrés. La " "migration ne peut pas continuer tant que tous les enregistrements de version " "d'instance n'ont pas été migrés vers le nouveau format. Exécutez tout " "d'abord `nova-manage db migrate_flavor_data'." #, python-format msgid "There is no such action: %s" msgstr "Aucune action de ce type : %s" msgid "There were no records found where instance_uuid was NULL." msgstr "" "Aucun enregistrement n'a été trouvé lorsque instance_uuid avait pour valeur " "NULL." #, python-format msgid "" "This compute node's hypervisor is older than the minimum supported version: " "%(version)s." msgstr "" "L'hyperviseur de ce noeud de calcul est plus ancien que la version minimale " "prise en charge : %(version)s." 
msgid "This domU must be running on the host specified by connection_url" msgstr "Ce domU doit s'exécuter sur l'hôte indiqué par connection_url" msgid "" "This method needs to be called with either networks=None and port_ids=None " "or port_ids and networks as not none." msgstr "" "Cette méthode doit être appelée avec networks=None et port_ids=None ou avec " "des valeurs de port_ids et networks autres que None." #, python-format msgid "This rule already exists in group %s" msgstr "Cette règle existe déjà dans le groupe %s" #, python-format msgid "" "This service is older (v%(thisver)i) than the minimum (v%(minver)i) version " "of the rest of the deployment. Unable to continue." msgstr "" "Ce service est plus ancien (v%(thisver)i) que la version minimale (v" "%(minver)i) du reste du déploiement. Impossible de continuer." #, python-format msgid "Timeout waiting for device %s to be created" msgstr "Dépassement du délai d'attente pour l'unité %s à créer" msgid "Timeout waiting for response from cell" msgstr "Dépassement du délai d'attente pour la réponse de la cellule" #, python-format msgid "Timeout while checking if we can live migrate to host: %s" msgstr "" "Timeout lors de la vérification de la possibilité de migrer à chaud vers " "l'hôte: %s" msgid "To and From ports must be integers" msgstr "Les ports de destination et d'origine doivent être des entiers" msgid "Token not found" msgstr "Token non trouvé" msgid "Triggering crash dump is not supported" msgstr "Déclenchement de vidage sur incident non pris en charge" msgid "Type and Code must be integers for ICMP protocol type" msgstr "" "Le type et le code doivent être des entiers pour le type de protocole ICMP" msgid "UEFI is not supported" msgstr "UEFI n'est pas supporté" #, python-format msgid "" "Unable to associate floating IP %(address)s to any fixed IPs for instance " "%(id)s. Instance has no fixed IPv4 addresses to associate." msgstr "" "Impossible d'assigner l'IP flottante %(address)s à une IP fixe de l'instance " "%(id)s. L'instance n'a pas d'IPv4 fixe." #, python-format msgid "" "Unable to associate floating IP %(address)s to fixed IP %(fixed_address)s " "for instance %(id)s. Error: %(error)s" msgstr "" "Incapable d'associer une IP flottante %(address)s à une IP fixe " "%(fixed_address)s pour l'instance %(id)s. Erreur : %(error)s" msgid "Unable to authenticate Ironic client." msgstr "Impossible d'authentifier le client Ironic." #, python-format msgid "Unable to contact guest agent. The following call timed out: %(method)s" msgstr "" "Impossible d'appeler l'agent invité. 
L'appel suivant a mis trop de temps: " "%(method)s" #, python-format msgid "Unable to convert image to %(format)s: %(exp)s" msgstr "Impossible de convertir l'image en %(format)s : %(exp)s" #, python-format msgid "Unable to convert image to raw: %(exp)s" msgstr "Impossible de convertir l'image en raw : %(exp)s" #, python-format msgid "Unable to destroy VBD %s" msgstr "Impossible de supprimer le VBD %s" #, python-format msgid "Unable to destroy VDI %s" msgstr "Impossible de détruire VDI %s" #, python-format msgid "Unable to determine disk bus for '%s'" msgstr "Impossible de déterminer le bus de disque pour '%s'" #, python-format msgid "Unable to determine disk prefix for %s" msgstr "Impossible de déterminer le préfixe du disque pour %s" #, python-format msgid "Unable to eject %s from the pool; No master found" msgstr "Impossible d'éjecter %s du pool ; aucun maître trouvé" #, python-format msgid "Unable to eject %s from the pool; pool not empty" msgstr "Impossible d'éjecter %s du pool ; pool non vide" #, python-format msgid "Unable to find SR from VBD %s" msgstr "Impossible de trouver SR du VBD %s" #, python-format msgid "Unable to find SR from VDI %s" msgstr "Impossible de trouver la demande de service depuis VDI %s" #, python-format msgid "Unable to find ca_file : %s" msgstr "Impossible de trouver ca_file : %s" #, python-format msgid "Unable to find cert_file : %s" msgstr "Impossible de trouver cert_file : %s" #, python-format msgid "Unable to find host for Instance %s" msgstr "Impossible de trouver l'hôte pour l'instance %s" msgid "Unable to find iSCSI Target" msgstr "Cible iSCSI introuvable" #, python-format msgid "Unable to find key_file : %s" msgstr "Impossible de trouver key_file : %s" msgid "Unable to find root VBD/VDI for VM" msgstr "Impossible de trouver le VBD/VDI racine pour la machine virtuelle" msgid "Unable to find volume" msgstr "Volume introuvable" msgid "Unable to get host UUID: /etc/machine-id does not exist" msgstr "Impossible d'obtenir l'UUID de l'hôte : /etc/machine-id n'existe pas" msgid "Unable to get host UUID: /etc/machine-id is empty" msgstr "Impossible d'obtenir l'UUID de l'hôte : /etc/machine-id est vide" #, python-format msgid "Unable to get record of VDI %s on" msgstr "Impossible de récupérer l'enregistrement du VDI %s sur" #, python-format msgid "Unable to introduce VDI for SR %s" msgstr "Impossible d'introduire le VDI pour SR %s" #, python-format msgid "Unable to introduce VDI on SR %s" msgstr "Impossible d'introduire VDI sur SR %s" #, python-format msgid "Unable to join %s in the pool" msgstr "Impossible de joindre %s dans le pool" msgid "" "Unable to launch multiple instances with a single configured port ID. Please " "launch your instance one by one with different ports." msgstr "" "Impossible de lancer plusieurs instances avec un seul ID de port configuré. " "Veuillez lancer vos instances une à une avec des ports différents." 
#, python-format msgid "" "Unable to migrate %(instance_uuid)s to %(dest)s: Lack of memory(host:" "%(avail)s <= instance:%(mem_inst)s)" msgstr "" "Impossible de migrer %(instance_uuid)s vers %(dest)s : manque de " "mémoire(hôte : %(avail)s <= instance : %(mem_inst)s)" #, python-format msgid "" "Unable to migrate %(instance_uuid)s: Disk of instance is too large(available " "on destination host:%(available)s < need:%(necessary)s)" msgstr "" "Impossible de migrer %(instance_uuid)s : Le disque de l'instance est trop " "grand (disponible sur l'hôte de destination :%(available)s < requis :" "%(necessary)s)" #, python-format msgid "" "Unable to migrate instance (%(instance_id)s) to current host (%(host)s)." msgstr "" "Impossible de migrer l'instance (%(instance_id)s) vers l'hôte actuel " "(%(host)s)." #, python-format msgid "Unable to obtain target information %s" msgstr "Impossible d'obtenir les informations de la cible %s" msgid "Unable to resize disk down." msgstr "Impossible de redimensionner le disque à la baisse." msgid "Unable to set password on instance" msgstr "Impossible de définir le mot de passe sur l'instance" msgid "Unable to shrink disk." msgstr "Impossible de redimensionner le disque." msgid "Unable to terminate instance." msgstr "Impossibilité de mettre fin à l'instance." #, python-format msgid "Unable to unplug VBD %s" msgstr "Impossible de deconnecter le VBD %s" #, python-format msgid "Unacceptable CPU info: %(reason)s" msgstr "Information CPU inacceptable : %(reason)s" msgid "Unacceptable parameters." msgstr "Paramètres inacceptables." #, python-format msgid "Unavailable console type %(console_type)s." msgstr "Type de console indisponible %(console_type)s." msgid "" "Undefined Block Device Mapping root: BlockDeviceMappingList contains Block " "Device Mappings from multiple instances." msgstr "" "Racine du mappage d'unité par bloc non définie : BlockDeviceMappingList " "contient des mappages d'unité par bloc provenant de plusieurs instances." #, python-format msgid "" "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ " "and attach the Nova API log if possible.\n" "%s" msgstr "" "Erreur de l'API inattendue. Merci de la reporter sur http://bugs.launchpad." "net/nova/ et d'y joindre le rapport de L'API Nova si possible.\n" "%s" #, python-format msgid "Unexpected aggregate action %s" msgstr "Action d'agrégat inattendue : %s" msgid "Unexpected type adding stats" msgstr "Type inattendu d'ajout des statistiques" #, python-format msgid "Unexpected vif_type=%s" msgstr "vif_type = %s inattendu" msgid "Unknown" msgstr "Inconnu" msgid "Unknown action" msgstr "Action inconnu" #, python-format msgid "Unknown config drive format %(format)s. Select one of iso9660 or vfat." msgstr "" "Format d'unité de config inconnu %(format)s. Sélectionnez iso9660 ou vfat." #, python-format msgid "Unknown delete_info type %s" msgstr "Type inconnu delete_info %s" #, python-format msgid "Unknown image_type=%s" msgstr "image_type=%s inconnu" #, python-format msgid "Unknown quota resources %(unknown)s." msgstr "Ressources de quota inconnues %(unknown)s." msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "Direction d'ordonnancement inconnue, choisir 'desc' ou 'asc'" #, python-format msgid "Unknown type: %s" msgstr "Type inconnu: %s" msgid "Unrecognized legacy format." msgstr "Ancien format non reconnu." 
#, python-format msgid "Unrecognized read_deleted value '%s'" msgstr "Valeur read_deleted non reconnue '%s'" #, python-format msgid "Unrecognized value '%s' for CONF.running_deleted_instance_action" msgstr "Valeur non reconnue '%s' pour CONF.running_deleted_instance_action" #, python-format msgid "Unshelve attempted but the image %s cannot be found." msgstr "Extraction tentée mais l'image %s est introuvable." msgid "Unsupported Content-Type" msgstr "Type de contenu non pris en charge" msgid "Upgrade DB using Essex release first." msgstr "Mettez à jour la BD en utilisant la version Essex préalablement." #, python-format msgid "User %(username)s not found in password file." msgstr "Utilisateur %(username)s non trouvé dans le fichier de mot de passe." #, python-format msgid "User %(username)s not found in shadow file." msgstr "Utilisateur %(username)s non trouvé dans le fichier fantôme." msgid "User data needs to be valid base 64." msgstr "Les données utilisateur doivent être des données base 64 valides." msgid "User does not have admin privileges" msgstr "L’utilisateur n'a pas les privilèges administrateur" msgid "" "Using different block_device_mapping syntaxes is not allowed in the same " "request." msgstr "" "Utiliser différentes syntaxes de block_device_mapping n'est pas autorisé " "dans la même requête." #, python-format msgid "" "VDI %(vdi_ref)s is %(virtual_size)d bytes which is larger than flavor size " "of %(new_disk_size)d bytes." msgstr "" "VDI %(vdi_ref)s a pour taille %(virtual_size)d octets, qui est supérieure à " "la taille de version de %(new_disk_size)d octets." #, python-format msgid "" "VDI not found on SR %(sr)s (vdi_uuid %(vdi_uuid)s, target_lun %(target_lun)s)" msgstr "" "VDI introuvable sur le référentiel de stockage %(sr)s (vdi_uuid " "%(vdi_uuid)s, target_lun %(target_lun)s)" #, python-format msgid "VHD coalesce attempts exceeded (%d), giving up..." msgstr "" "Nombre de tentatives de coalescence du disque VHD supérieur à (%d), " "abandon..." #, python-format msgid "" "Version %(req_ver)s is not supported by the API. Minimum is %(min_ver)s and " "maximum is %(max_ver)s." msgstr "" "La version %(req_ver)s n'est pas supporté par l'API. Minimum requis: " "%(min_ver)s et le maximum: %(max_ver)s" msgid "Virtual Interface creation failed" msgstr "La création de l'Interface Virtuelle a échoué" msgid "Virtual interface plugin failed" msgstr "Echec du plugin d'interface virtuelle" #, python-format msgid "Virtual machine mode '%(vmmode)s' is not recognised" msgstr "Le mode de la machine virtuelle '%(vmmode)s' n'est pas reconnu" #, python-format msgid "Virtual machine mode '%s' is not valid" msgstr "Le mode de machine virtuelle '%s' n'est pas valide" #, python-format msgid "Virtualization type '%(virt)s' is not supported by this compute driver" msgstr "" "Le type de virtualisation '%(virt)s' n'est pas pris en charge par ce pilote " "de calcul" #, python-format msgid "Volume %(volume_id)s could not be attached. Reason: %(reason)s" msgstr "Impossible de connecter le volume %(volume_id)s. Raison : %(reason)s" #, python-format msgid "Volume %(volume_id)s could not be found." msgstr "Le volume %(volume_id)s n'a pas pu être trouvé." #, python-format msgid "" "Volume %(volume_id)s did not finish being created even after we waited " "%(seconds)s seconds or %(attempts)s attempts. And its status is " "%(volume_status)s." msgstr "" "Création du %(volume_id)s non terminée après attente de %(seconds)s secondes " "ou %(attempts)s tentatives. Son statut est %(volume_status)s." 
msgid "Volume does not belong to the requested instance." msgstr "Le volume n'appartient pas à l'instance demandée." #, python-format msgid "" "Volume encryption is not supported for %(volume_type)s volume %(volume_id)s" msgstr "" "L'encryptage de volume n'est pas supporté pour le volume %(volume_id)s de " "type %(volume_type)s" #, python-format msgid "" "Volume is smaller than the minimum size specified in image metadata. Volume " "size is %(volume_size)i bytes, minimum size is %(image_min_disk)i bytes." msgstr "" "Le volume est plus petit que la taille minimum spécifiée dans les " "métadonnées de l'image. La taille du volume est %(volume_size)i octets, la " "taille minimum est %(image_min_disk)i octets." #, python-format msgid "" "Volume sets block size, but the current libvirt hypervisor '%s' does not " "support custom block size" msgstr "" "Le volume définit la taille de bloc, mais l'hyperviseur libvirt en cours " "'%s' ne prend pas en charge la taille de bloc personnalisée" #, python-format msgid "" "We do not support scheme '%s' under Python < 2.7.4, please use http or https" msgstr "" "Schéma '%s' non pris en charge sous Python < 2.7.4. Utilisez HTTP ou HTTPS" msgid "When resizing, instances must change flavor!" msgstr "Lors du redimensionnement, les instances doivent changer la version !" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "Lors de l'exécution du serveur en mode SSL, vous devez spécifier une valeur " "d'option cert_file et key_file dans votre fichier de configuration" #, python-format msgid "Wrong quota method %(method)s used on resource %(res)s" msgstr "Mauvaise méthode de quota %(method)s utilisée sur la ressource %(res)s" msgid "Wrong type of hook method. Only 'pre' and 'post' type allowed" msgstr "" "Type de point d'ancrage non valide. Seuls les types 'pre' et 'post' sont " "autorisés" msgid "X-Forwarded-For is missing from request." msgstr "X-Forwarded-For est manquant dans la requête" msgid "X-Instance-ID header is missing from request." msgstr "L'en-tête X-Instance-ID est manquant dans la demande." msgid "X-Instance-ID-Signature header is missing from request." msgstr "L'en-tête X-Instance-ID-Signature est absent de la demande." msgid "X-Metadata-Provider is missing from request." msgstr "X-Metadata-Provider est manquant dans la requête" msgid "X-Tenant-ID header is missing from request." msgstr "L'entête X-Tenant-ID est manquante dans la requête." msgid "XAPI supporting relax-xsm-sr-check=true required" msgstr "XAPI prenant en charge relax-xsm-sr-check=true obligatoire" msgid "You are not allowed to delete the image." msgstr "Vous n'êtes pas autorisé à supprimer l'image." msgid "" "You are not authorized to access the image the instance was started with." msgstr "" "Vous n'êtes pas autorisé à accéder à l'image par laquelle l'instance a été " "démarrée." msgid "You must implement __call__" msgstr "Vous devez implémenter __call__" msgid "You should specify images_rbd_pool flag to use rbd images." msgstr "" "Vous devez indiquer l'indicateur images_rbd_pool pour utiliser les images " "rbd." msgid "You should specify images_volume_group flag to use LVM images." msgstr "" "Vous devez spécifier l'indicateur images_volume_group pour utiliser les " "images LVM." msgid "Zero floating IPs available." msgstr "Aucune adresse IP flottante n'est disponible." 
msgid "admin password can't be changed on existing disk" msgstr "Impossible de modifier le mot de passe admin sur le disque existant" msgid "aggregate deleted" msgstr "agrégat supprimé" msgid "aggregate in error" msgstr "agrégat en erreur" #, python-format msgid "assert_can_migrate failed because: %s" msgstr "assert_can_migrate a échoué à cause de : %s" msgid "cannot understand JSON" msgstr "impossible de comprendre JSON" msgid "clone() is not implemented" msgstr "clone() n'est pas implémenté" #, python-format msgid "connect info: %s" msgstr "Information de connexion: %s" #, python-format msgid "connecting to: %(host)s:%(port)s" msgstr "connexion à : %(host)s:%(port)s" msgid "direct_snapshot() is not implemented" msgstr "direct_snapshot() n'est pas implémenté" #, python-format msgid "disk type '%s' not supported" msgstr "type disque '%s' non supporté" #, python-format msgid "empty project id for instance %s" msgstr "ID projet vide pour l'instance %s" msgid "error setting admin password" msgstr "erreur lors de la définition du mot de passe admin" #, python-format msgid "error: %s" msgstr "erreur: %s" #, python-format msgid "failed to generate X509 fingerprint. Error message: %s" msgstr "Échec lors de la génération de l'emprunte X509. Message d'erreur: %s " msgid "failed to generate fingerprint" msgstr "Échec dans la génération de l'empreinte" msgid "filename cannot be None" msgstr "Nom de fichier ne peut pas etre \"vide\"" msgid "floating IP is already associated" msgstr "L'adresse IP flottante est déjà associée" msgid "floating IP not found" msgstr "Adresse IP flottante non trouvée" #, python-format msgid "fmt=%(fmt)s backed by: %(backing_file)s" msgstr "fmt=%(fmt)s sauvegardé par : %(backing_file)s" #, python-format msgid "href %s does not contain version" msgstr "href %s ne contient pas de version" msgid "image already mounted" msgstr "image déjà montée" #, python-format msgid "instance %s is not running" msgstr "instance %s n'est pas en cours d'exécution" msgid "instance has a kernel or ramdisk but not both" msgstr "l'instance a un noyau ou un disque mais pas les deux" msgid "instance is a required argument to use @refresh_cache" msgstr "" "l'instance est un argument obligatoire pour l'utilisation de @refresh_cache" msgid "instance is not in a suspended state" msgstr "l'instance n'est pas à l'état suspendu" msgid "instance is not powered on" msgstr "l'instance n'est pas mise sous tension" msgid "instance is powered off and cannot be suspended." msgstr "L'instance est hors tension et ne peut pas être interrompue." #, python-format msgid "instance_id %s could not be found as device id on any ports" msgstr "" "l'instance_id %s est introuvable comme identificateur d'unité sur aucun port" msgid "is_public must be a boolean" msgstr "is_public doit être booléen." msgid "keymgr.fixed_key not defined" msgstr "keymgr.fixed_key n'est pas défini" msgid "l3driver call to add floating IP failed" msgstr "Échec de l'ajout d'une adresse IP flottant par l'appel l3driver" #, python-format msgid "libguestfs installed but not usable (%s)" msgstr "libguestfs est installé mais n'est pas utilisable (%s)" #, python-format msgid "libguestfs is not installed (%s)" msgstr "libguestfs n'est pas installé (%s)" #, python-format msgid "marker [%s] not found" msgstr "le marqueur [%s] est introuvable" #, python-format msgid "max rows must be <= %(max_value)d" msgstr "Le nombre maximum doit être <= %(max_value)d" msgid "max_count cannot be greater than 1 if an fixed_ip is specified." 
msgstr "max_count ne peut être supérieur à 1 si un fixed_ip est spécifié." msgid "min_count must be <= max_count" msgstr "min_count doit être <= max_count" #, python-format msgid "nbd device %s did not show up" msgstr "Device nbd %s n'est pas apparu" msgid "nbd unavailable: module not loaded" msgstr "nbd non disponible : module non chargé" msgid "no hosts to remove" msgstr "aucun hôte à retirer" #, python-format msgid "no match found for %s" msgstr "aucune occurrence trouvée pour %s" #, python-format msgid "no usable parent snapshot for volume %s" msgstr "aucun instantané parent utilisable pour le volume %s" #, python-format msgid "no write permission on storage pool %s" msgstr "aucun droit en écriture sur le pool de stockage %s" #, python-format msgid "not able to execute ssh command: %s" msgstr "impossible d'exécuter la commande ssh : %s" msgid "old style configuration can use only dictionary or memcached backends" msgstr "" "une ancienne configuration ne peut utiliser que des back-ends de type " "dictionary ou memcached" msgid "operation time out" msgstr "l'opération a dépassé le délai d'attente" #, python-format msgid "partition %s not found" msgstr "partition %s non trouvée" #, python-format msgid "partition search unsupported with %s" msgstr "recherche de partition non pris en charge avec %s" msgid "pause not supported for vmwareapi" msgstr "mise en pause non prise en charge pour vmwareapi" msgid "printable characters with at least one non space character" msgstr "" "caractères imprimables avec au moins un caractère différent d'un espace" msgid "printable characters. Can not start or end with whitespace." msgstr "" "caractères imprimables. Ne peut pas commencer ou se terminer par un espace." #, python-format msgid "qemu-img failed to execute on %(path)s : %(exp)s" msgstr "Echec d'exécution de qemu-img sur %(path)s : %(exp)s" #, python-format msgid "qemu-nbd error: %s" msgstr "erreur qemu-nbd : %s" msgid "rbd python libraries not found" msgstr "Librairies python rbd non trouvé" #, python-format msgid "read_deleted can only be one of 'no', 'yes' or 'only', not %r" msgstr "" "read_deleted peut uniquement correspondre à 'no', 'yes' ou 'only', et non %r" msgid "serve() can only be called once" msgstr "serve() peut uniquement être appelé une fois" msgid "service is a mandatory argument for DB based ServiceGroup driver" msgstr "" "Le service est un argument obligatoire pour le pilote ServiceGroup utilisant " "la base de données" msgid "service is a mandatory argument for Memcached based ServiceGroup driver" msgstr "" "Le service est un argument obligatoire pour le pilote ServiceGroup utilisant " "Memcached" msgid "set_admin_password is not implemented by this driver or guest instance." msgstr "" "set_admin_password n'est pas implémenté par ce pilote ou par cette instance " "invitée." 
msgid "setup in progress" msgstr "Configuration en cours" #, python-format msgid "snapshot for %s" msgstr "instantané pour %s" msgid "snapshot_id required in create_info" msgstr "snapshot_id requis dans create_info" msgid "token not provided" msgstr "Jeton non fourni" msgid "too many body keys" msgstr "trop de clés de corps" msgid "unpause not supported for vmwareapi" msgstr "annulation de la mise en pause non prise en charge pour vmwareapi" msgid "version should be an integer" msgstr "la version doit être un entier" #, python-format msgid "vg %s must be LVM volume group" msgstr "vg %s doit être un groupe de volumes LVM" #, python-format msgid "vhostuser_sock_path not present in vif_details for vif %(vif_id)s" msgstr "vhostuser_sock_path absent de vif_détails pour le vif %(vif_id)s" #, python-format msgid "vif type %s not supported" msgstr "Type vif %s non pris en charge" msgid "vif_type parameter must be present for this vif_driver implementation" msgstr "" "Le paramètre vif_type doit être présent pour cette implémentation de " "vif_driver." #, python-format msgid "volume %s already attached" msgstr "Le volume %s est déjà connecté " #, python-format msgid "" "volume '%(vol)s' status must be 'in-use'. Currently in '%(status)s' status" msgstr "" "Le statut du volume '%(vol)s' doit être 'in-use'. Statut actuel : " "'%(status)s'" #, python-format msgid "xenapi.fake does not have an implementation for %s" msgstr "xenapi.fake n'a pas d'implémentation pour %s" #, python-format msgid "" "xenapi.fake does not have an implementation for %s or it has been called " "with the wrong number of arguments" msgstr "" "xenapi.fake n'a pas d'implementation pour %s ou il a été appelé avec le " "mauvais nombre d'arguments" ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0664723 nova-21.2.4/nova/locale/it/0000775000175000017500000000000000000000000015404 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3504698 nova-21.2.4/nova/locale/it/LC_MESSAGES/0000775000175000017500000000000000000000000017171 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/locale/it/LC_MESSAGES/nova.po0000664000175000017500000033125500000000000020505 0ustar00zuulzuul00000000000000# Translations template for nova. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the nova project. # # Translators: # Ying Chun Guo , 2013 # FIRST AUTHOR , 2011 # Loris Strozzini, 2012 # ls, 2012 # Mariano Iumiento , 2013 # Andreas Jaeger , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: nova VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2020-04-27 16:23+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-04-12 06:04+0000\n" "Last-Translator: Copied by Zanata \n" "Language: it\n" "Plural-Forms: nplurals=2; plural=(n != 1);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 4.3.3\n" "Language-Team: Italian\n" #, python-format msgid "%(address)s is not a valid IP v4/6 address." msgstr "%(address)s non è un indirizzo v4/6 IP valido." 
#, python-format msgid "" "%(binary)s attempted direct database access which is not allowed by policy" msgstr "" "%(binary)s ha tentato l'accesso diretto al database che non è consentito " "dalla politica" #, python-format msgid "%(cidr)s is not a valid IP network." msgstr "%(cidr)s non è una rete IP valida." #, python-format msgid "%(field)s should not be part of the updates." msgstr "%(field)s non deve fare parte degli aggiornamenti." #, python-format msgid "%(memsize)d MB of memory assigned, but expected %(memtotal)d MB" msgstr "%(memsize)d MB di memoria assegnata, ma previsti MB di %(memtotal)d" #, python-format msgid "%(path)s is not on local storage: %(reason)s" msgstr "%(path)s non si trova nella memoria locale: %(reason)s" #, python-format msgid "%(path)s is not on shared storage: %(reason)s" msgstr "%(path)s non si trova nella memoria condivisa: %(reason)s" #, python-format msgid "%(total)i rows matched query %(meth)s, %(done)i migrated" msgstr "" "%(total)i righe corrispondenti alla query %(meth)s, %(done)i sono state " "migrate" #, python-format msgid "%(type)s hypervisor does not support PCI devices" msgstr "l'hypervisor %(type)s non supporta i dispositivi PCI" #, python-format msgid "%(worker_name)s value of %(workers)s is invalid, must be greater than 0" msgstr "" "Il valore %(worker_name)s di %(workers)s non è valido, deve essere maggiore " "di 0" #, python-format msgid "%s does not support disk hotplug." msgstr "%s non supporta il collegamento a caldo del disco." #, python-format msgid "%s format is not supported" msgstr "Il formato %s non è supportato" #, python-format msgid "%s is not supported." msgstr "%s non è supportato." #, python-format msgid "%s must be either 'MANUAL' or 'AUTO'." msgstr "%s deve essere 'MANUAL' o 'AUTO'." #, python-format msgid "'%(other)s' should be an instance of '%(cls)s'" msgstr "'%(other)s' deve essere un'istanza di '%(cls)s'" msgid "'qemu-img info' parsing failed." msgstr "analisi di 'qemu-img info' non riuscita." #, python-format msgid "'rxtx_factor' argument must be a float between 0 and %g" msgstr "" "L'argomento 'rxtx_factor' deve essere un valore a virgola mobile compreso " "tra 0 e %g" #, python-format msgid "A NetworkModel is required in field %s" msgstr "Un modello di rete è richiesto nel campo %s" #, python-format msgid "" "API Version String %(version)s is of invalid format. Must be of format " "MajorNum.MinorNum." msgstr "" "Stringa della versione API %(version)s in formato non valido. Deve essere in " "formato MajorNum.MinorNum." #, python-format msgid "API version %(version)s is not supported on this method." msgstr "Versione API %(version)s non supportata in questo metodo." msgid "Access list not available for public flavors." msgstr "Elenco accessi non disponibile per i flavor pubblici." #, python-format msgid "Action %s not found" msgstr "Azione %s non trovata" #, python-format msgid "" "Action for request_id %(request_id)s on instance %(instance_uuid)s not found" msgstr "" "L'azione per request_id %(request_id)s nell'istanza %(instance_uuid)s non è " "stata trovata" #, python-format msgid "Action: '%(action)s', calling method: %(meth)s, body: %(body)s" msgstr "Azione: '%(action)s', metodo chiamata: %(meth)s, corpo: %(body)s" #, python-format msgid "Add metadata failed for aggregate %(id)s after %(retries)s retries" msgstr "" "L'aggiunta dei metadati non è riuscita per l'aggregato %(id)s dopo " "%(retries)s tentativi" msgid "Affinity instance group policy was violated." 
msgstr "La politica di affinità del gruppo di istanze è stata violata." #, python-format msgid "Agent does not support the call: %(method)s" msgstr "L'agent non supporta la chiamata: %(method)s" #, python-format msgid "" "Agent-build with hypervisor %(hypervisor)s os %(os)s architecture " "%(architecture)s exists." msgstr "" "L'agent-build con architettura hypervisor %(hypervisor)s os %(os)s " "%(architecture)s esiste." #, python-format msgid "Aggregate %(aggregate_id)s already has host %(host)s." msgstr "L'aggregato %(aggregate_id)s dispone già dell'host %(host)s." #, python-format msgid "Aggregate %(aggregate_id)s could not be found." msgstr "Impossibile trovare l'aggregato %(aggregate_id)s." #, python-format msgid "Aggregate %(aggregate_id)s has no host %(host)s." msgstr "L'aggregato %(aggregate_id)s non contiene alcun host %(host)s." #, python-format msgid "Aggregate %(aggregate_id)s has no metadata with key %(metadata_key)s." msgstr "" "L'aggregato %(aggregate_id)s non contiene metadati con la chiave " "%(metadata_key)s." #, python-format msgid "" "Aggregate %(aggregate_id)s: action '%(action)s' caused an error: %(reason)s." msgstr "" "Aggregato %(aggregate_id)s: azione '%(action)s' ha causato un errore: " "%(reason)s." #, python-format msgid "Aggregate %(aggregate_name)s already exists." msgstr "L'aggregato %(aggregate_name)s esiste già." #, python-format msgid "Aggregate %s does not support empty named availability zone" msgstr "" "L'aggregazione %s non supporta la zona di disponibilità denominata vuota" #, python-format msgid "Aggregate for host %(host)s count not be found." msgstr "Aggregato per il conteggio host %(host)s non è stato trovato." #, python-format msgid "An invalid 'name' value was provided. The name must be: %(reason)s" msgstr "" "È stato fornito un valore 'name' non valido. Il nome deve essere: %(reason)s" msgid "An unknown error has occurred. Please try your request again." msgstr "Si è verificato un errore sconosciuto. Ritentare la richiesta." msgid "An unknown exception occurred." msgstr "E' stato riscontrato un errore sconosciuto" msgid "Anti-affinity instance group policy was violated." msgstr "La politica di anti-affinità del gruppo di istanze è stata violata." #, python-format msgid "Architecture name '%(arch)s' is not recognised" msgstr "Il nome architettura '%(arch)s' non è riconosciuto" #, python-format msgid "Architecture name '%s' is not valid" msgstr "Il nome architettura '%s' non è valido" #, python-format msgid "" "Attempt to consume PCI device %(compute_node_id)s:%(address)s from empty pool" msgstr "" "Tentativo di utilizzare il dispositivo PCI %(compute_node_id)s:%(address)s " "dal pool al di fuori del pool" msgid "Attempted overwrite of an existing value." msgstr "Si è tentato di sovrascrivere un valore esistente." #, python-format msgid "Attribute not supported: %(attr)s" msgstr "Attributo non supportato: %(attr)s" #, python-format msgid "Bad network format: missing %s" msgstr "Formato rete non corretto: manca %s" msgid "Bad networks format" msgstr "Formato reti non corretto" #, python-format msgid "Bad networks format: network uuid is not in proper format (%s)" msgstr "" "Il formato delle reti non è corretto: il formato (%s) uuid della rete non è " "corretto" #, python-format msgid "Bad prefix for network in cidr %s" msgstr "Prefisso errato per la rete in cidr %s" #, python-format msgid "" "Binding failed for port %(port_id)s, please check neutron logs for more " "information." 
msgstr "" "Bind non riuscito per la porta %(port_id)s, controllare i log neutron per " "ulteriori informazioni." msgid "Blank components" msgstr "Componenti vuoti" msgid "" "Blank volumes (source: 'blank', dest: 'volume') need to have non-zero size" msgstr "" "I volumi vuoti (origine: 'blank', dest: 'volume') devono avere una " "dimensione diversa da zero" #, python-format msgid "Block Device %(id)s is not bootable." msgstr "Il dispositivo di blocco %(id)s non è riavviabile." #, python-format msgid "" "Block Device Mapping %(volume_id)s is a multi-attach volume and is not valid " "for this operation." msgstr "" "L'associazione del dispositivo di blocco %(volume_id)s è un volume multi-" "attach e non è valida per questa operazione." msgid "Block Device Mapping cannot be converted to legacy format. " msgstr "" "L'associazione del dispositivo di blocco non può essere convertita in " "formato legacy. " msgid "Block Device Mapping is Invalid." msgstr "La mappatura unità di blocco non è valida." #, python-format msgid "Block Device Mapping is Invalid: %(details)s" msgstr "L'associazione del dispositivo di blocco non è valida: %(details)s" msgid "" "Block Device Mapping is Invalid: Boot sequence for the instance and image/" "block device mapping combination is not valid." msgstr "" "L'associazione del dispositivo di blocco non è valida: la sequenza di avvio " "per l'istanza e la combinazione dell'associazione del dispositivo immagine/" "blocco non è valida." msgid "" "Block Device Mapping is Invalid: You specified more local devices than the " "limit allows" msgstr "" "L'associazione del dispositivo di blocco non è valida: sono stati " "specificati più dispositivi locali del limite consentito" #, python-format msgid "Block Device Mapping is Invalid: failed to get image %(id)s." msgstr "" "L'associazione del dispositivo di blocco non è valida: impossibile ottenere " "l'immagine %(id)s." #, python-format msgid "Block Device Mapping is Invalid: failed to get snapshot %(id)s." msgstr "" "La mappatura unità di blocco non è valida: impossibile ottenere " "un'istantanea %(id)s." #, python-format msgid "Block Device Mapping is Invalid: failed to get volume %(id)s." msgstr "" "La mappatura unità di blocco non è valida: impossibile ottenere il volume " "%(id)s." msgid "Block migration can not be used with shared storage." msgstr "" "La migrazione blocchi non può essere utilizzata con l'archivio condiviso." msgid "Boot index is invalid." msgstr "L'indice boot non è valido." 
#, python-format msgid "Build of instance %(instance_uuid)s aborted: %(reason)s" msgstr "La build dell'istanza %(instance_uuid)s è stata interrotta: %(reason)s" #, python-format msgid "Build of instance %(instance_uuid)s was re-scheduled: %(reason)s" msgstr "" "La build dell'istanza %(instance_uuid)s è stata ripianificata: %(reason)s" #, python-format msgid "BuildRequest not found for instance %(uuid)s" msgstr "BuildRequest non trovata per l'istanza %(uuid)s" msgid "CPU and memory allocation must be provided for all NUMA nodes" msgstr "" "La CPU e l'allocazione di memoria devono essere forniti per tutti i nodi NUMA" #, python-format msgid "" "CPU doesn't have compatibility.\n" "\n" "%(ret)s\n" "\n" "Refer to %(u)s" msgstr "" "CPU non ha compatibilità.\n" "\n" "%(ret)s\n" "\n" "Fare riferimento a %(u)s" #, python-format msgid "CPU number %(cpunum)d is assigned to two nodes" msgstr "Il numero CPU %(cpunum)d è assegnato a due nodi" #, python-format msgid "CPU number %(cpunum)d is larger than max %(cpumax)d" msgstr "Il numero CPU %(cpunum)d è superiore a quello massimo %(cpumax)d" #, python-format msgid "CPU number %(cpuset)s is not assigned to any node" msgstr "Il numero CPU %(cpuset)s non è assegnato a nessun nodo" msgid "Can not add access to a public flavor." msgstr "Impossibile aggiungere l'accesso a una versione pubblica." msgid "Can not find requested image" msgstr "Impossibile trovare l'immagine richiesta" #, python-format msgid "Can not handle authentication request for %d credentials" msgstr "" "Impossibile gestire la richiesta di autenticazione per le credenziali %d" msgid "Can't resize a disk to 0 GB." msgstr "Impossibile ridimensionare un disco a 0 GB." msgid "Can't resize down ephemeral disks." msgstr "Impossibile ridimensionare verso il basso i dischi effimeri." msgid "Can't retrieve root device path from instance libvirt configuration" msgstr "" "Impossibile recuperare il percorso root dell'unità dalla configurazione " "libvirt dell'istanza" #, python-format msgid "" "Cannot '%(action)s' instance %(server_id)s while it is in %(attr)s %(state)s" msgstr "" "Impossibile '%(action)s' l'istanza %(server_id)s mentre si trova in %(attr)s " "%(state)s" #, python-format msgid "" "Cannot access \"%(instances_path)s\", make sure the path exists and that you " "have the proper permissions. In particular Nova-Compute must not be executed " "with the builtin SYSTEM account or other accounts unable to authenticate on " "a remote host." msgstr "" "Impossibile accedere a \"%(instances_path)s\", verificare che il percorso " "esista e che siano disponibili le autorizzazioni richieste. In particolare " "Nova-Compute non deve essere eseguito con l'account SYSTEM integrato o altri " "account non in grado di eseguire l'autenticazione su un host remoto." #, python-format msgid "Cannot add host to aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Impossibile aggiungere l'host all'aggregato %(aggregate_id)s. Motivo: " "%(reason)s." 
msgid "Cannot attach one or more volumes to multiple instances" msgstr "Impossibile collegare uno o più volume a più istanze" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "Impossibile chiamare %(method)s su oggetto orfano %(objtype)s" #, python-format msgid "" "Cannot determine the parent storage pool for %s; cannot determine where to " "store images" msgstr "" "Impossibile determinare il pool di archiviazione parent per %s; impossibile " "determinare dove archiviare le immagini" msgid "Cannot find SR of content-type ISO" msgstr "Impossibile trovare SR del tipo di contenuto ISO" msgid "Cannot find SR to read/write VDI." msgstr "Impossibile trovare SR per la lettura/scrittura di VDI." msgid "Cannot find image for rebuild" msgstr "Impossibile trovare l'immagine per la nuova build" #, python-format msgid "Cannot remove host %(host)s in aggregate %(id)s" msgstr "Impossibile rimuovere l'host %(host)s nell'aggregato %(id)s" #, python-format msgid "Cannot remove host from aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Impossibile rimuovere l'host dall'aggregato %(aggregate_id)s. Motivo: " "%(reason)s." msgid "Cannot rescue a volume-backed instance" msgstr "Impossibile ripristinare un'istanza volume-backed" #, python-format msgid "" "Cannot resize the root disk to a smaller size. Current size: " "%(curr_root_gb)s GB. Requested size: %(new_root_gb)s GB." msgstr "" "Impossibile ridimensionare il disco root in modo da ridurne la dimensione. " "Dimensione corrente: %(curr_root_gb)s GB. Dimensione richiesta: " "%(new_root_gb)s GB." msgid "" "Cannot set cpu thread pinning policy in a non dedicated cpu pinning policy" msgstr "" "Impossibile impostare la politica di blocco del thread della CPU in una " "politica di blocco della CPU non dedicata" msgid "Cannot set realtime policy in a non dedicated cpu pinning policy" msgstr "" "Impossibile impostare la politica in tempo reale in una politica di blocco " "della CPU non dedicata " #, python-format msgid "Cannot update aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Impossibile aggiornare l'aggregato %(aggregate_id)s. Motivo: %(reason)s." #, python-format msgid "" "Cannot update metadata of aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Impossibile aggiornare i metadati dell'aggregato %(aggregate_id)s. Motivo: " "%(reason)s." #, python-format msgid "Cell %(uuid)s has no mapping." msgstr "La cella %(uuid)s non dispone di alcuna associazione." #, python-format msgid "" "Change would make usage less than 0 for the following resources: %(unders)s" msgstr "" "La modifica renderebbe l'utilizzo inferiore a 0 per le seguenti risorse: " "%(unders)s" #, python-format msgid "Class %(class_name)s could not be found: %(exception)s" msgstr "Impossibile trovare la classe %(class_name)s: %(exception)s" #, python-format msgid "" "Command Not supported. Please use Ironic command %(cmd)s to perform this " "action." msgstr "" "Comando non supportato. Utilizzare il comando Ironic %(cmd)s per eseguire " "questa azione." #, python-format msgid "Compute host %(host)s could not be found." msgstr "Impossibile trovare l'host compute %(host)s." #, python-format msgid "Compute host %s not found." msgstr "Impossibile trovare l'host compute %s." #, python-format msgid "Compute service of %(host)s is still in use." msgstr "Il servizio compute di %(host)s è ancora in uso." #, python-format msgid "Compute service of %(host)s is unavailable at this time." 
msgstr "Il servizio compute di %(host)s non è disponibile in questo momento." #, python-format msgid "Config drive format '%(format)s' is not supported." msgstr "Il formato dell'unità di configurazione '%(format)s' non è supportato." #, python-format msgid "" "Config requested an explicit CPU model, but the current libvirt hypervisor " "'%s' does not support selecting CPU models" msgstr "" "Config ha richiesto un modello di CPU esplicito, ma l'hypervisor libvirt " "'%s' non supporta la selezione dei modelli di CPU" #, python-format msgid "" "Conflict updating instance %(instance_uuid)s, but we were unable to " "determine the cause" msgstr "" "Conflitto durante l'aggiornamento dell'istanza %(instance_uuid)s, ma non è " "stato possibile determinare la causa." #, python-format msgid "" "Conflict updating instance %(instance_uuid)s. Expected: %(expected)s. " "Actual: %(actual)s" msgstr "" "Conflitto durante l'aggiornamento dell'istanza %(instance_uuid)s. Previsto: " "%(expected)s. Reale: %(actual)s" #, python-format msgid "Connection to cinder host failed: %(reason)s" msgstr "Connessione a cinder host non riuscita: %(reason)s" #, python-format msgid "Connection to glance host %(server)s failed: %(reason)s" msgstr "Connessione all'host glance %(server)s non riuscita: %(reason)s" #, python-format msgid "Connection to libvirt lost: %s" msgstr "Connessione a libvirt non riuscita: %s" #, python-format msgid "Connection to the hypervisor is broken on host: %(host)s" msgstr "La connessione all'hypervisor è interrotta sull'host: %(host)s" #, python-format msgid "" "Console log output could not be retrieved for instance %(instance_id)s. " "Reason: %(reason)s" msgstr "" "Impossibile richiamare l'output del log della console per l'istanza " "%(instance_id)s. Motivo: %(reason)s" msgid "Constraint not met." msgstr "Vincolo non soddisfatto." #, python-format msgid "Converted to raw, but format is now %s" msgstr "Convertito in non elaborato, ma il formato ora è %s" #, python-format msgid "Could not attach image to loopback: %s" msgstr "Impossibile collegare l'immagine al loopback: %s" #, python-format msgid "Could not fetch image %(image_id)s" msgstr "Impossibile recuperare l'immagine %(image_id)s" #, python-format msgid "Could not find a handler for %(driver_type)s volume." msgstr "Impossibile trovare un gestore per il volume %(driver_type)s." #, python-format msgid "Could not find binary %(binary)s on host %(host)s." msgstr "Impossibile trovare il binario %(binary)s nell'host %(host)s." #, python-format msgid "Could not find config at %(path)s" msgstr "Impossibile trovare la configurazione in %(path)s" msgid "Could not find the datastore reference(s) which the VM uses." msgstr "" "Impossibile trovare il riferimento(i) archivio dati utilizzato dalla VM." #, python-format msgid "Could not load line %(line)s, got error %(error)s" msgstr "Impossibile caricare la linea %(line)s, ricevuto l'errore %(error)s" #, python-format msgid "Could not load paste app '%(name)s' from %(path)s" msgstr "Impossibile caricare l'app paste '%(name)s' in %(path)s" #, python-format msgid "" "Could not mount vfat config drive. %(operation)s failed. Error: %(error)s" msgstr "" "Impossibile montare l'unità vfat config. %(operation)s non riuscito. 
Errore: " "%(error)s" #, python-format msgid "Could not upload image %(image_id)s" msgstr "Impossibile caricare l'immagine %(image_id)s" msgid "Creation of virtual interface with unique mac address failed" msgstr "" "Creazione dell'interfaccia virtuale con indirizzo mac univoco non riuscita" #, python-format msgid "Datastore regex %s did not match any datastores" msgstr "L'archivio dati regex %s non corrispondeva ad alcun archivio dati" msgid "Datetime is in invalid format" msgstr "La data/ora è in un formato non valido" msgid "Default PBM policy is required if PBM is enabled." msgstr "La politica PBM predefinita è richiesta se PBM è abilitato." #, python-format msgid "Deleted %(records)d records from table '%(table_name)s'." msgstr "%(records)d record eliminati dalla tabella '%(table_name)s'." #, python-format msgid "Device '%(device)s' not found." msgstr "Dispositivo '%(device)s' non trovato." #, python-format msgid "" "Device id %(id)s specified is not supported by hypervisor version %(version)s" msgstr "" "L'ID dispositivo %(id)s specificato non è supportato dalla versione " "hypervisor %(version)s" msgid "Device name contains spaces." msgstr "Il nome dispositivo contiene degli spazi." msgid "Device name empty or too long." msgstr "Nome dispositivo vuoto o troppo lungo." #, python-format msgid "Device type mismatch for alias '%s'" msgstr "Mancata corrispondenza del tipo di dispositivo per l'alias '%s'" #, python-format msgid "" "Different types in %(table)s.%(column)s and shadow table: %(c_type)s " "%(shadow_c_type)s" msgstr "" "Tipi differenti in %(table)s.%(column)s e nella tabella cronologica: " "%(c_type)s %(shadow_c_type)s" #, python-format msgid "Disk contains a filesystem we are unable to resize: %s" msgstr "" "Il disco contiene un file system che non è in grado di eseguire il " "ridimensionamento: %s" #, python-format msgid "Disk format %(disk_format)s is not acceptable" msgstr "Il formato disco %(disk_format)s non è accettabile" #, python-format msgid "Disk info file is invalid: %(reason)s" msgstr "Il file di informazioni sul disco non è valido: %(reason)s" msgid "Disk must have only one partition." msgstr "Il disco deve avere solo una partizione." #, python-format msgid "Disk with id: %s not found attached to instance." msgstr "Il disco con ID: %s non è stato trovato collegato all'istanza." #, python-format msgid "Driver Error: %s" msgstr "Errore del driver: %s" #, python-format msgid "Error attempting to run %(method)s" msgstr "Errore nel tentativo di eseguire %(method)s" #, python-format msgid "" "Error destroying the instance on node %(node)s. Provision state still " "'%(state)s'." msgstr "" "Errore durante la distruzione dell'istanza sul nodo %(node)s. Stato fornito " "ancora '%(state)s'." 
#, python-format msgid "Error during following call to agent: %(method)s" msgstr "Errore durante la seguente chiamata all'agent: %(method)s" #, python-format msgid "Error during unshelve instance %(instance_id)s: %(reason)s" msgstr "Errore durante l'istanza non rinviata %(instance_id)s: %(reason)s" #, python-format msgid "" "Error from libvirt while getting domain info for %(instance_name)s: [Error " "Code %(error_code)s] %(ex)s" msgstr "" "Errore da libvirt durante l'acquisizione delle informazioni sul dominio per " "%(instance_name)s: [Codice di errore %(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while looking up %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "Errore da libvirt durante la ricerca di %(instance_name)s: [Codice di errore " "%(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while quiescing %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "Errore da libvirt durante la disattivazione di %(instance_name)s: [Codice di " "errore %(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while set password for username \"%(user)s\": [Error Code " "%(error_code)s] %(ex)s" msgstr "" "Errore da libvirt durante l'impostazione della password per il nome utente " "\"%(user)s\": [Codice di errore %(error_code)s] %(ex)s" #, python-format msgid "" "Error mounting %(device)s to %(dir)s in image %(image)s with libguestfs " "(%(e)s)" msgstr "" "Errore di montaggio %(device)s in %(dir)s nell'immagine %(image)s con " "libguestfs (%(e)s)" #, python-format msgid "Error mounting %(image)s with libguestfs (%(e)s)" msgstr "Errore di montaggio %(image)s con libguestfs (%(e)s)" #, python-format msgid "Error when creating resource monitor: %(monitor)s" msgstr "Errore durante la creazione del monitor di risorse: %(monitor)s" msgid "Error: Agent is disabled" msgstr "Errore: l'agent è disabilitato" #, python-format msgid "Event %(event)s not found for action id %(action_id)s" msgstr "L'evento %(event)s per l'id azione %(action_id)s non è stato trovato" msgid "Event must be an instance of nova.virt.event.Event" msgstr "L'evento deve essere un'istanza di nova.virt.event.Event" #, python-format msgid "" "Exceeded max scheduling attempts %(max_attempts)d for instance " "%(instance_uuid)s. Last exception: %(exc_reason)s" msgstr "" "Superamento numero max tentativi di pianificazione %(max_attempts)d per " "l'istanza %(instance_uuid)s. Ultima eccezione: %(exc_reason)s" #, python-format msgid "" "Exceeded max scheduling retries %(max_retries)d for instance " "%(instance_uuid)s during live migration" msgstr "" "Superamento numero max tentativi di pianificazione %(max_retries)d per " "l'istanza %(instance_uuid)s durante la migrazione attiva" #, python-format msgid "Exceeded maximum number of retries. %(reason)s" msgstr "Superato numero massimo di tentativi. %(reason)s" #, python-format msgid "Expected a uuid but received %(uuid)s." msgstr "Era previsto un uuid ma è stato ricevuto %(uuid)s." #, python-format msgid "Extra column %(table)s.%(column)s in shadow table" msgstr "Colonna supplementare %(table)s.%(column)s nella tabella cronologica" msgid "Extracting vmdk from OVA failed." msgstr "Estrazione di vmdk da OVA non riuscita." 
#, python-format msgid "Failed to access port %(port_id)s: %(reason)s" msgstr "Impossibile accedere alla porta %(port_id)s: %(reason)s" #, python-format msgid "" "Failed to add deploy parameters on node %(node)s when provisioning the " "instance %(instance)s" msgstr "" "Impossibile aggiungere i parametri di distribuzione sul nodo %(node)s " "quando si esegue il provisioning all'istanza %(instance)s" #, python-format msgid "Failed to allocate the network(s) with error %s, not rescheduling." msgstr "" "Impossibile allocare la rete con errore %s, nuova pianificazione non " "prevista." msgid "Failed to allocate the network(s), not rescheduling." msgstr "" "Impossibile allocare una o più reti, nuova pianificazione non prevista." #, python-format msgid "Failed to attach network adapter device to %(instance_uuid)s" msgstr "" "Impossibile collegare il dispositivo dell'adattatore di rete a " "%(instance_uuid)s" #, python-format msgid "Failed to create vif %s" msgstr "Impossibile creare vif %s" #, python-format msgid "Failed to deploy instance: %(reason)s" msgstr "Impossibile distribuire l'istanza: %(reason)s" #, python-format msgid "Failed to detach PCI device %(dev)s: %(reason)s" msgstr "Impossibile scollegare il dispositivo PCI %(dev)s: %(reason)s" #, python-format msgid "Failed to detach network adapter device from %(instance_uuid)s" msgstr "" "Impossibile scollegare il dispositivo dell'adattatore di rete da " "%(instance_uuid)s" #, python-format msgid "Failed to encrypt text: %(reason)s" msgstr "Impossibile decodificare testo: %(reason)s" #, python-format msgid "Failed to launch instances: %(reason)s" msgstr "Impossibile avviare le istanze: %(reason)s" #, python-format msgid "Failed to map partitions: %s" msgstr "Impossibile associare le partizioni: %s" #, python-format msgid "Failed to mount filesystem: %s" msgstr "Impossibile montare il file system: %s" msgid "Failed to parse information about a pci device for passthrough" msgstr "" "Impossibile analizzare le informazioni relative ad una periferica PCI per il " "trasferimento" #, python-format msgid "Failed to power off instance: %(reason)s" msgstr "Impossibile disattivare l'istanza: %(reason)s" #, python-format msgid "Failed to power on instance: %(reason)s" msgstr "Impossibile alimentare l'istanza: %(reason)s" #, python-format msgid "" "Failed to prepare PCI device %(id)s for instance %(instance_uuid)s: " "%(reason)s" msgstr "" "Impossibile preparare il dispositivo PCI %(id)s per l'istanza " "%(instance_uuid)s: %(reason)s" #, python-format msgid "Failed to provision instance %(inst)s: %(reason)s" msgstr "Impossibile fornire l'istanza %(inst)s: %(reason)s" #, python-format msgid "Failed to read or write disk info file: %(reason)s" msgstr "" "Impossibile leggere o scrivere nel file di informazioni sul disco: %(reason)s" #, python-format msgid "Failed to reboot instance: %(reason)s" msgstr "Impossibile riavviare l'istanza: %(reason)s" #, python-format msgid "Failed to remove volume(s): (%(reason)s)" msgstr "Impossibile rimuovere il volume: (%(reason)s)" #, python-format msgid "Failed to request Ironic to rebuild instance %(inst)s: %(reason)s" msgstr "" "Impossibile richiedere ad Ironic di ricreare l'istanza %(inst)s: %(reason)s" #, python-format msgid "Failed to resume instance: %(reason)s" msgstr "Impossibile ripristinare l'istanza: %(reason)s" #, python-format msgid "Failed to run qemu-img info on %(path)s : %(error)s" msgstr "Impossibile eseguire qemu-img info in %(path)s : %(error)s" #, python-format msgid "Failed to set admin password on 
%(instance)s because %(reason)s" msgstr "" "Impossibile impostare la password admin in %(instance)s perché %(reason)s" msgid "Failed to spawn, rolling back" msgstr "Generazione non riuscita, ripristino in corso" #, python-format msgid "Failed to suspend instance: %(reason)s" msgstr "Impossibile sospendere l'istanza: %(reason)s" #, python-format msgid "Failed to terminate instance: %(reason)s" msgstr "Impossibile terminare l'istanza: %(reason)s" #, python-format msgid "Failed to unplug vif %s" msgstr "Impossibile scollegare vif %s" msgid "Failure prepping block device." msgstr "Errore durante l'esecuzione preparatoria del dispositivo di blocco." #, python-format msgid "File %(file_path)s could not be found." msgstr "Impossibile trovare il file %(file_path)s." #, python-format msgid "File path %s not valid" msgstr "Percorso file %s non valido" #, python-format msgid "Fixed IP %(ip)s is not a valid ip address for network %(network_id)s." msgstr "" "L'IP fisso %(ip)s non è un indirizzo IP valido per la rete %(network_id)s." #, python-format msgid "Fixed IP %s is already in use." msgstr "IP fisso %s già in uso." #, python-format msgid "" "Fixed IP address %(address)s is already in use on instance %(instance_uuid)s." msgstr "" "L'indirizzo IP fisso %(address)s è già in uso nell'istanza %(instance_uuid)s." #, python-format msgid "Fixed IP not found for address %(address)s." msgstr "Impossibile trovare un IP fisso per l'indirizzo %(address)s." #, python-format msgid "Flavor %(flavor_id)s could not be found." msgstr "Impossibile trovare la tipologia %(flavor_id)s." #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(extra_specs_key)s." msgstr "" "Il flavor %(flavor_id)s non ha ulteriori specifiche con chiave " "%(extra_specs_key)s." #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(key)s." msgstr "" "Il flavor %(flavor_id)s non ha ulteriori specifiche con chiave %(key)s." #, python-format msgid "" "Flavor %(id)s extra spec cannot be updated or created after %(retries)d " "retries." msgstr "" "Le specifiche supplementari del flavor %(id)s non possono essere aggiornate " "o create dopo %(retries)d tentativi." #, python-format msgid "" "Flavor access already exists for flavor %(flavor_id)s and project " "%(project_id)s combination." msgstr "" "L'accesso a flavor esiste già per flavor %(flavor_id)s e progetto " "%(project_id)s." #, python-format msgid "Flavor access not found for %(flavor_id)s / %(project_id)s combination." msgstr "" "Impossibile trovare l'accesso a flavor per la combinazione %(flavor_id)s / " "%(project_id)s." msgid "Flavor used by the instance could not be found." msgstr "Impossibile trovare flavor utilizzato dall'istanza." #, python-format msgid "Flavor with ID %(flavor_id)s already exists." msgstr "Il flavor con ID %(flavor_id)s esiste già." #, python-format msgid "Flavor with name %(flavor_name)s could not be found." msgstr "Impossibile trovare il flavor con nome %(flavor_name)s." #, python-format msgid "Flavor with name %(name)s already exists." msgstr "Il flavor con nome %(name)s esiste già." #, python-format msgid "" "Flavor's disk is smaller than the minimum size specified in image metadata. " "Flavor disk is %(flavor_size)i bytes, minimum size is %(image_min_disk)i " "bytes." msgstr "" "Il disco della versione è più piccolo della dimensione minima specificata " "nei metadati dell'immagine. Il disco versione è %(flavor_size)i byte, la " "dimensione minima è %(image_min_disk)i byte." 
#, python-format msgid "" "Flavor's disk is too small for requested image. Flavor disk is " "%(flavor_size)i bytes, image is %(image_size)i bytes." msgstr "" "Il disco della versione è troppo piccolo per l'immagine richiesta. Il disco " "versione è %(flavor_size)i byte, l'immagine è %(image_size)i byte." msgid "Flavor's memory is too small for requested image." msgstr "La memoria flavor è troppo piccola per l'immagine richiesta." #, python-format msgid "Floating IP %(address)s association has failed." msgstr "Associazione IP %(address)s mobile non riuscita." #, python-format msgid "Floating IP %(address)s is associated." msgstr "L'IP mobile %(address)s è associato." #, python-format msgid "Floating IP %(address)s is not associated with instance %(id)s." msgstr "L'IP mobile %(address)s non è associato a un'istanza %(id)s." #, python-format msgid "Floating IP not found for ID %(id)s." msgstr "Impossibile trovare l'IP mobile per l'ID %(id)s." #, python-format msgid "Floating IP not found for ID %s" msgstr "Impossibile trovare l'IP mobile per l'ID %s." #, python-format msgid "Floating IP not found for address %(address)s." msgstr "Impossibile trovare l'IP mobile per l'indirizzo %(address)s." msgid "Floating IP pool not found." msgstr "Impossibile trovare pool di IP mobili." msgid "" "Forbidden to exceed flavor value of number of serial ports passed in image " "meta." msgstr "" "È vietato superare il valore flavor di numero di porte seriali trasferito ai " "metadati immagine." msgid "Found no disk to snapshot." msgstr "Non è stato trovato nessun disco per l'istantanea." #, python-format msgid "Found no network for bridge %s" msgstr "Non sono state trovate reti per il bridge %s" #, python-format msgid "Found non-unique network for bridge %s" msgstr "Sono state trovate reti non univoche per il bridge %s" #, python-format msgid "Found non-unique network for name_label %s" msgstr "Sono state trovate reti non univoche per name_label %s" msgid "Guest does not have a console available." msgstr "Guest non dispone di una console disponibile." #, python-format msgid "Host %(host)s could not be found." msgstr "Impossibile trovare l'host %(host)s." #, python-format msgid "Host %(host)s is already mapped to cell %(uuid)s" msgstr "L'host %(host)s è già associato alla cella %(uuid)s" #, python-format msgid "Host '%(name)s' is not mapped to any cell" msgstr "L'host '%(name)s' non è associato a una cella" msgid "Host PowerOn is not supported by the Hyper-V driver" msgstr "Host PowerOn non supportato dal driver Hyper-V" msgid "Host aggregate is not empty" msgstr "L'aggregato host non è vuoto" msgid "Host does not support guests with NUMA topology set" msgstr "L'host non supporta guest con la topologia NUMA impostata" msgid "Host does not support guests with custom memory page sizes" msgstr "" "L'host non supporta guest con dimensioni pagina di memoria personalizzate" msgid "Host startup on XenServer is not supported." msgstr "L'avvio dell'host su XenServer non è supportato." msgid "Hypervisor driver does not support post_live_migration_at_source method" msgstr "" "Il driver hypervisor non supporta il metodo post_live_migration_at_source" #, python-format msgid "Hypervisor virt type '%s' is not valid" msgstr "Tipo virt hypervisor '%s' non valido" #, python-format msgid "Hypervisor virtualization type '%(hv_type)s' is not recognised" msgstr "" "Il tipo di virtualizzazione di hypervisor '%(hv_type)s' non è riconosciuto" #, python-format msgid "Hypervisor with ID '%s' could not be found." 
msgstr "Impossibile trovare hypervisor con ID '%s'." #, python-format msgid "IP allocation over quota in pool %s." msgstr "L'allocazione IP supera la quota nel pool %s." msgid "IP allocation over quota." msgstr "L'allocazione IP supera la quota." #, python-format msgid "Image %(image_id)s could not be found." msgstr "Impossibile trovare l'immagine %(image_id)s." #, python-format msgid "Image %(image_id)s is not active." msgstr "L'immagine %(image_id)s non è attiva." #, python-format msgid "Image %(image_id)s is unacceptable: %(reason)s" msgstr "L'immagine %(image_id)s non è accettabile: %(reason)s" msgid "Image disk size greater than requested disk size" msgstr "Dimensione disco immagine maggiore della dimensione disco richiesta" msgid "Image is not raw format" msgstr "L'immagine non è nel formato non elaborato" msgid "Image metadata limit exceeded" msgstr "Superato il limite dei metadati dell'immagine" #, python-format msgid "Image model '%(image)s' is not supported" msgstr "Modello immagine '%(image)s' non supportato" msgid "Image not found." msgstr "Immagine non trovata." #, python-format msgid "" "Image property '%(name)s' is not permitted to override NUMA configuration " "set against the flavor" msgstr "" "La proprietà immagine '%(name)s' non è consentita per sovrascrivere " "l'impostazione di configurazione NUMA rispetto al flavor" msgid "" "Image property 'hw_cpu_policy' is not permitted to override CPU pinning " "policy set against the flavor" msgstr "" "La proprietà immagine 'hw_cpu_policy' non può sostituire la politica di " "blocco della CPU impostata sul flavor" msgid "" "Image property 'hw_cpu_thread_policy' is not permitted to override CPU " "thread pinning policy set against the flavor" msgstr "" "La proprietà immagine 'hw_cpu_thread_policy' non può sostituire la politica " "di blocco del thread della CPU impostata sul flavor" msgid "Image that the instance was started with could not be found." msgstr "Impossibile trovare l'immagine con cui è stata avviata l'istanza." #, python-format msgid "Image's config drive option '%(config_drive)s' is invalid" msgstr "" "L'opzione dell'unità di configurazione dell'immagine '%(config_drive)s' non " "è valida" msgid "" "Images with destination_type 'volume' need to have a non-zero size specified" msgstr "" "Le immagini con destination-type 'volume' devono avere specificata una " "dimensione diversa da zero" msgid "In ERROR state" msgstr "In stato di ERRORE" #, python-format msgid "In states %(vm_state)s/%(task_state)s, not RESIZED/None" msgstr "Negli stati %(vm_state)s/%(task_state)s, non RESIZED/None" #, python-format msgid "In-progress live migration %(id)s is not found for server %(uuid)s." msgstr "" "La migrazione attiva in corso %(id)s non è stata trovata per il server " "%(uuid)s." msgid "" "Incompatible settings: ephemeral storage encryption is supported only for " "LVM images." msgstr "" "Impostazioni incompatibili: la codifica della memoria temporanea è " "supportata solo per le immagini LVM." #, python-format msgid "Info cache for instance %(instance_uuid)s could not be found." msgstr "" "Impossibile trovare le informazioni della cache per l'istanza " "%(instance_uuid)s." #, python-format msgid "" "Instance %(instance)s and volume %(vol)s are not in the same " "availability_zone. Instance is in %(ins_zone)s. Volume is in %(vol_zone)s" msgstr "" "L'istanza %(instance)s ed il volume %(vol)s non si trovano nella stessa " "availability_zone. L'istanza si trova in %(ins_zone)s. 
Il volume si trova in " "%(vol_zone)s" #, python-format msgid "Instance %(instance)s does not have a port with id %(port)s" msgstr "L'istanza %(instance)s non dispone di una porta con id%(port)s" #, python-format msgid "Instance %(instance_id)s cannot be rescued: %(reason)s" msgstr "L'istanza %(instance_id)s non può essere ripristinata: %(reason)s" #, python-format msgid "Instance %(instance_id)s could not be found." msgstr "Impossibile trovare l'istanza %(instance_id)s." #, python-format msgid "Instance %(instance_id)s has no tag '%(tag)s'" msgstr "L'istanza %(instance_id)s non dispone di tag '%(tag)s'" #, python-format msgid "Instance %(instance_id)s is not in rescue mode" msgstr "L'istanza %(instance_id)s non è in modalità di ripristino" #, python-format msgid "Instance %(instance_id)s is not ready" msgstr "L'istanza %(instance_id)s non è pronta" #, python-format msgid "Instance %(instance_id)s is not running." msgstr "L'istanza %(instance_id)s non è in esecuzione." #, python-format msgid "Instance %(instance_id)s is unacceptable: %(reason)s" msgstr "L'istanza %(instance_id)s non è accettabile: %(reason)s" #, python-format msgid "Instance %(instance_uuid)s does not specify a NUMA topology" msgstr "L'istanza %(instance_uuid)s non specifica una topologia NUMA" #, python-format msgid "Instance %(instance_uuid)s does not specify a migration context." msgstr "L'istanza %(instance_uuid)s non specifica un contesto di migrazione." #, python-format msgid "" "Instance %(instance_uuid)s in %(attr)s %(state)s. Cannot %(method)s while " "the instance is in this state." msgstr "" "Istanza %(instance_uuid)s in %(attr)s %(state)s. Impossibile %(method)s " "quando l'istanza è in questo stato." #, python-format msgid "Instance %(instance_uuid)s is locked" msgstr "L'istanza %(instance_uuid)s è bloccata" #, python-format msgid "" "Instance %(instance_uuid)s requires config drive, but it does not exist." msgstr "" "L'istanza %(instance_uuid)s richiede l'unità di configurazione ma non esiste." #, python-format msgid "Instance %(name)s already exists." msgstr "L'istanza %(name)s esiste già." #, python-format msgid "Instance %(server_id)s is in an invalid state for '%(action)s'" msgstr "" "L'istanza %(server_id)s si trova in uno stato non valido per '%(action)s'" #, python-format msgid "Instance %(uuid)s has no mapping to a cell." msgstr "L'istanza %(uuid)s non dispone di alcuna associazione a una cella." #, python-format msgid "Instance %s not found" msgstr "Istanza %s non trovata" #, python-format msgid "Instance %s provisioning was aborted" msgstr "Il provisioning dell'istanza %s è stato interrotto" msgid "Instance could not be found" msgstr "Impossibile trovare l'istanza" msgid "Instance disk to be encrypted but no context provided" msgstr "" "Disco dell'istanza da codificare, ma non è stato fornito alcun contesto" msgid "Instance event failed" msgstr "Evento istanza non riuscito" #, python-format msgid "Instance group %(group_uuid)s already exists." msgstr "Il gruppo di istanze %(group_uuid)s esiste già." #, python-format msgid "Instance group %(group_uuid)s could not be found." msgstr "Impossibile trovare il gruppo di istanze %(group_uuid)s." msgid "Instance has no source host" msgstr "L'istanza non dispone alcun host di origine" msgid "Instance has not been resized." msgstr "L'istanza non è stata ridmensionata." 
#, python-format msgid "Instance hostname %(hostname)s is not a valid DNS name" msgstr "Il nome host %(hostname)s dell'istanza non è un nome DNS valido" #, python-format msgid "Instance is already in Rescue Mode: %s" msgstr "L'istanza è già in modalità di ripristino: %s" msgid "Instance is not a member of specified network" msgstr "L'istanza non è un membro della rete specificata" #, python-format msgid "Instance rollback performed due to: %s" msgstr "Ripristino dell'istanza eseguito a causa di: %s" #, python-format msgid "" "Insufficient Space on Volume Group %(vg)s. Only %(free_space)db available, " "but %(size)d bytes required by volume %(lv)s." msgstr "" "Spazio insufficiente nel gruppo volume %(vg)s. Solo %(free_space)db " "disponibile, ma %(size)d byte richiesti dal volume %(lv)s." #, python-format msgid "Insufficient compute resources: %(reason)s." msgstr "Risorse di elaborazione insufficienti: %(reason)s." #, python-format msgid "Insufficient free memory on compute node to start %(uuid)s." msgstr "" "Memoria disponibile insufficiente sul nodo di calcolo per avviare %(uuid)s." #, python-format msgid "Interface %(interface)s not found." msgstr "Impossibile trovare l'interfaccia %(interface)s." #, python-format msgid "Invalid Base 64 data for file %(path)s" msgstr "I dati della base 64 non sono validi per il file %(path)s" msgid "Invalid Connection Info" msgstr "Informazioni sulla connessione non valide" #, python-format msgid "Invalid ID received %(id)s." msgstr "Ricevuto ID non valido %(id)s." #, python-format msgid "Invalid IP format %s" msgstr "Formato IP non valido %s" #, python-format msgid "Invalid IP protocol %(protocol)s." msgstr "Protocollo IP non valido %(protocol)s." msgid "" "Invalid PCI Whitelist: The PCI whitelist can specify devname or address, but " "not both" msgstr "" "La whitelist PCI non è valida: la whitelist PCI può specificare il devname o " "l'indirizzo, ma non entrambi" #, python-format msgid "Invalid PCI alias definition: %(reason)s" msgstr "Definizione alias PCI non valida: %(reason)s" #, python-format msgid "Invalid Regular Expression %s" msgstr "Espressione regolare non valida %s" #, python-format msgid "Invalid characters in hostname '%(hostname)s'" msgstr "Caratteri non validi nel nome host '%(hostname)s'" msgid "Invalid config_drive provided." msgstr "config_drive specificato non è valido." #, python-format msgid "Invalid config_drive_format \"%s\"" msgstr "config_drive_format \"%s\" non valido" #, python-format msgid "Invalid console type %(console_type)s" msgstr "Tipo di console non valido %(console_type)s" #, python-format msgid "Invalid content type %(content_type)s." msgstr "Tipo di contenuto non valido%(content_type)s." #, python-format msgid "Invalid datetime string: %(reason)s" msgstr "Stringa data/ora non valida: %(reason)s" msgid "Invalid device UUID." msgstr "UUID del dispositivo non valido." #, python-format msgid "Invalid entry: '%s'" msgstr "Voce non valida: '%s'" #, python-format msgid "Invalid entry: '%s'; Expecting dict" msgstr "Voce non valida: '%s'; è previsto dict" #, python-format msgid "Invalid entry: '%s'; Expecting list or dict" msgstr "Voce non valida: '%s'; è previsto list o dict" #, python-format msgid "Invalid exclusion expression %r" msgstr "Espressione di esclusione %r non valida" #, python-format msgid "Invalid image format '%(format)s'" msgstr "Formato immagine non valido '%(format)s'" #, python-format msgid "Invalid image href %(image_href)s." msgstr "href immagine %(image_href)s non valido." 
#, python-format msgid "Invalid inclusion expression %r" msgstr "Espressione di inclusione %r non valida" #, python-format msgid "" "Invalid input for field/attribute %(path)s. Value: %(value)s. %(message)s" msgstr "" "Input non valido per campo/attributo %(path)s. Valore: %(value)s. %(message)s" #, python-format msgid "Invalid input received: %(reason)s" msgstr "Input ricevuto non valido: %(reason)s" msgid "Invalid instance image." msgstr "Immagine istanza non valida." #, python-format msgid "Invalid is_public filter [%s]" msgstr "Filtro is_public non valido [%s]" msgid "Invalid key_name provided." msgstr "Il nome_chiave specificato non è valido." #, python-format msgid "Invalid memory page size '%(pagesize)s'" msgstr "Dimensione pagina di memoria non valida '%(pagesize)s'" msgid "Invalid metadata key" msgstr "La chiave di metadati non è valida" #, python-format msgid "Invalid metadata size: %(reason)s" msgstr "Dimensione metadati non valida: %(reason)s" #, python-format msgid "Invalid metadata: %(reason)s" msgstr "Metadati non validi: %(reason)s" #, python-format msgid "Invalid minDisk filter [%s]" msgstr "Filtro minDisk non valido [%s]" #, python-format msgid "Invalid minRam filter [%s]" msgstr "Filtro minRam non valido [%s]" #, python-format msgid "Invalid port range %(from_port)s:%(to_port)s. %(msg)s" msgstr "Intervallo di porta non valido %(from_port)s:%(to_port)s. %(msg)s" msgid "Invalid proxy request signature." msgstr "Firma della richiesta del proxy non valida." #, python-format msgid "Invalid range expression %r" msgstr "Espressione di intervallo %r non valida" msgid "Invalid service catalog json." msgstr "json del catalogo del servizio non è valido." msgid "Invalid start time. The start time cannot occur after the end time." msgstr "" "Ora di inizio non valida. L'ora di inizio non può essere successiva all'ora " "di fine." msgid "Invalid state of instance files on shared storage" msgstr "Stato non valido dei file dell'istanza nella memoria condivisa" #, python-format msgid "Invalid timestamp for date %s" msgstr "Data/ora non valida per la data %s" #, python-format msgid "Invalid usage_type: %s" msgstr "usage_type non valido: %s" #, python-format msgid "Invalid value for Config Drive option: %(option)s" msgstr "Valore non valido per l'opzione unità di config: %(option)s" #, python-format msgid "Invalid virtual interface address %s in request" msgstr "Indirizzo interfaccia virtuale non valido %s nella richiesta" #, python-format msgid "Invalid volume access mode: %(access_mode)s" msgstr "Modalità di accesso al volume non valida: %(access_mode)s" #, python-format msgid "Invalid volume: %(reason)s" msgstr "Volume non valido: %(reason)s" msgid "Invalid volume_size." msgstr "Volume_size non valido." #, python-format msgid "Ironic node uuid not supplied to driver for instance %s." msgstr "L'UUID del nodo Ironic non è stato fornito al driver per l'istanza %s." #, python-format msgid "" "It is not allowed to create an interface on external network %(network_uuid)s" msgstr "" "Non è consentito creare un'interfaccia sulla rete esterna %(network_uuid)s" #, python-format msgid "" "Kernel/Ramdisk image is too large: %(vdi_size)d bytes, max %(max_size)d bytes" msgstr "" "L'immagine Kernel/Ramdisk è troppo grande: %(vdi_size)d bytes, massimo " "%(max_size)d byte" msgid "" "Key Names can only contain alphanumeric characters, periods, dashes, " "underscores, colons and spaces." 
msgstr "" "I nomi chiave possono solo contenere caratteri alfanumerici, punti, " "trattini, caratteri di sottolineatura, due punti e spazi." #, python-format msgid "Key manager error: %(reason)s" msgstr "Errore gestore chiavi: %(reason)s" #, python-format msgid "Key pair '%(key_name)s' already exists." msgstr "La coppia di chiavi '%(key_name)s' esiste già." #, python-format msgid "Keypair %(name)s not found for user %(user_id)s" msgstr "" "Impossibile trovare la coppia di chiavi %(name)s per l'utente %(user_id)s" #, python-format msgid "Keypair data is invalid: %(reason)s" msgstr "I dati della coppia di chiavi non sono validi: %(reason)s" msgid "Keypair name contains unsafe characters" msgstr "Il nome coppia di chiavi contiene caratteri non sicuri" msgid "Keypair name must be string and between 1 and 255 characters long" msgstr "" "Il nome coppia di chiavi deve essere una stringa compresa tra 1 e 255 " "caratteri" msgid "Limits only supported from vCenter 6.0 and above" msgstr "Limiti supportati solo da vCenter 6.0 e successivi" #, python-format msgid "Live migration %(id)s for server %(uuid)s is not in progress." msgstr "La migrazione %(id)s per il server %(uuid)s non è in corso." #, python-format msgid "Malformed message body: %(reason)s" msgstr "Corpo del messaggio non valido: %(reason)s" #, python-format msgid "" "Malformed request URL: URL's project_id '%(project_id)s' doesn't match " "Context's project_id '%(context_project_id)s'" msgstr "" "URL richiesto non valido: il project_id del progetto '%(project_id)s' non " "corrisponde al project_id del progetto '%(context_project_id)s'" msgid "Malformed request body" msgstr "Corpo richiesta non corretto" msgid "Mapping image to local is not supported." msgstr "Associazione dell'immagine all'elemento locale non supportata." #, python-format msgid "Marker %(marker)s could not be found." msgstr "Impossibile trovare l'indicatore %(marker)s." msgid "Maximum number of floating IPs exceeded" msgstr "Il numero massimo di IP mobili è stato superato" msgid "Maximum number of key pairs exceeded" msgstr "Il numero massimo di coppie di chiavi è stato superato" #, python-format msgid "Maximum number of metadata items exceeds %(allowed)d" msgstr "Il numero massimo di elementi metadati è stato superato %(allowed)d" msgid "Maximum number of ports exceeded" msgstr "Numero massimo di porte superato" msgid "Maximum number of security groups or rules exceeded" msgstr "Il numero massimo di gruppi di sicurezza o di regole è stato superato" msgid "Metadata item was not found" msgstr "L'elemento metadati non è stato trovato" msgid "Metadata property key greater than 255 characters" msgstr "La chiave della proprietà dei metadati supera 255 caratteri" msgid "Metadata property value greater than 255 characters" msgstr "Il valore della proprietà dei metadati supera 255 caratteri" msgid "Metadata type should be dict." msgstr "Tipo di metadati deve essere dict." #, python-format msgid "" "Metric %(name)s could not be found on the compute host node %(host)s." "%(node)s." msgstr "" "Impossibile trovare la metrica %(name)s sul nodo host compute %(host)s." "%(node)s." msgid "Migrate Receive failed" msgstr "Migrazione di Receive non riuscita" msgid "Migrate Send failed" msgstr "Migrazione di Send non riuscita" #, python-format msgid "Migration %(id)s for server %(uuid)s is not live-migration." msgstr "" "La migrazione %(id)s per il server %(uuid)s non è una migrazione attiva." #, python-format msgid "Migration %(migration_id)s could not be found." 
msgstr "Impossibile trovare la migrazione %(migration_id)s." #, python-format msgid "Migration %(migration_id)s not found for instance %(instance_id)s" msgstr "Migrazione %(migration_id)s non trovata per l'istanza %(instance_id)s " #, python-format msgid "" "Migration %(migration_id)s state of instance %(instance_uuid)s is %(state)s. " "Cannot %(method)s while the migration is in this state." msgstr "" "Lo stato dell'istanza %(instance_uuid)s della migrazione %(migration_id)s è " "%(state)s. Impossibile %(method)s mentre la migrazione è in questo stato." #, python-format msgid "Migration error: %(reason)s" msgstr "Errore di migrazione: %(reason)s" msgid "Migration is not supported for LVM backed instances" msgstr "" "La migrazione non è supportata per le istanze sottoposte a backup da LVM" #, python-format msgid "" "Migration not found for instance %(instance_id)s with status %(status)s." msgstr "" "La migrazione per l'istanza %(instance_id)s non è stata trovata con lo stato " "%(status)s." #, python-format msgid "Migration pre-check error: %(reason)s" msgstr "Errore di verifica preliminare della migrazione: %(reason)s" #, python-format msgid "Migration select destinations error: %(reason)s" msgstr "Errore delle destinazioni di selezione della migrazione: %(reason)s" #, python-format msgid "Missing arguments: %s" msgstr "Argomenti mancanti: %s" #, python-format msgid "Missing column %(table)s.%(column)s in shadow table" msgstr "Manca la colonna %(table)s.%(column)s nella tabella cronologica" msgid "Missing device UUID." msgstr "Manca l'UUID del dispositivo." msgid "Missing disabled reason field" msgstr "Manca il campo causa disabilitata" msgid "Missing forced_down field" msgstr "Campo forced_down mancante" msgid "Missing imageRef attribute" msgstr "Manca l'attributo imageRef" #, python-format msgid "Missing keys: %s" msgstr "Mancano le chiavi: %s" #, python-format msgid "Missing parameter %s" msgstr "Parametro mancante %s" msgid "Missing parameter dict" msgstr "Manca il parametro dict" #, python-format msgid "" "More than one instance is associated with fixed IP address '%(address)s'." msgstr "Più di un'istanza è associata all'indirizzo IP fisso '%(address)s'." msgid "" "More than one possible network found. Specify network ID(s) to select which " "one(s) to connect to." msgstr "" "Trovate più reti possibili. Specificare l'ID rete(i) per selezionare " "quella(e) a cui effettuare la connessione." msgid "More than one swap drive requested." msgstr "È richiesta più di un'unità di scambio." #, python-format msgid "Multi-boot operating system found in %s" msgstr "Rilevato avvio multiplo del sistema operativo in %s" msgid "Multiple X-Instance-ID headers found within request." msgstr "" "Sono state trovate più intestazioni X-Instance-ID all'interno della " "richiesta." msgid "Multiple X-Tenant-ID headers found within request." msgstr "" "Sono state trovate più intestazioni X-Tenant-ID all'interno della richiesta." #, python-format msgid "Multiple floating IP pools matches found for name '%s'" msgstr "" "Sono state trovate più corrispondenze di pool di IP mobili per il nome '%s'" #, python-format msgid "Multiple floating IPs are found for address %(address)s." msgstr "Più IP mobili sono stati trovati per l'indirizzo %(address)s." msgid "" "Multiple hosts may be managed by the VMWare vCenter driver; therefore we do " "not return uptime for just one host." msgstr "" "Più host possono essere gestiti dal driver VMWare vCenter; pertanto non " "restituiamo l'ora di aggiornamento solo per un host." 
msgid "Multiple possible networks found, use a Network ID to be more specific." msgstr "Trovate più reti possibili, utilizzare un ID rete più specifico." #, python-format msgid "" "Multiple security groups found matching '%s'. Use an ID to be more specific." msgstr "" "Trovati più gruppi sicurezza che corrispondono a '%s'. Utilizzare un ID per " "essere più precisi." msgid "Must input network_id when request IP address" msgstr "È necessario immettere network_id quando è richiesto l'indirizzo IP" msgid "Must not input both network_id and port_id" msgstr "Non si deve immettere entrambi network_id e port_id" msgid "" "Must specify connection_url, connection_username (optionally), and " "connection_password to use compute_driver=xenapi.XenAPIDriver" msgstr "" "È necessario specificare connection_url, connection_username " "(facoltativamente) e connection_password per l'utilizzo di " "compute_driver=xenapi.XenAPIDriver" msgid "" "Must specify host_ip, host_username and host_password to use vmwareapi." "VMwareVCDriver" msgstr "" "È necessario specificare host_ip, host_username e host_password per " "l'utilizzo di vmwareapi.VMwareVCDriver" msgid "Must supply a positive value for max_number" msgstr "È necessario fornire un valore positivo per max_number" msgid "Must supply a positive value for max_rows" msgstr "È necessario fornire un valore positivo per max_rows" #, python-format msgid "Network %(network_id)s could not be found." msgstr "Impossibile trovare la rete %(network_id)s." #, python-format msgid "" "Network %(network_uuid)s requires a subnet in order to boot instances on." msgstr "" "La rete %(network_uuid)s richiede una sottorete per avviare le istanze." #, python-format msgid "Network could not be found for bridge %(bridge)s" msgstr "Impossibile trovare la rete per il bridge %(bridge)s" #, python-format msgid "Network could not be found for instance %(instance_id)s." msgstr "Impossibile trovare la rete per l'istanza %(instance_id)s." msgid "Network not found" msgstr "Rete non trovata" msgid "" "Network requires port_security_enabled and subnet associated in order to " "apply security groups." msgstr "" "La rete richiede port_security_enabled e la sottorete associata al fine di " "applicare i gruppi sicurezza." msgid "New volume must be detached in order to swap." msgstr "Il nuovo volume deve essere scollegato per lo scambio." msgid "New volume must be the same size or larger." msgstr "Il nuovo volume deve avere la stessa dimensione o superiore." #, python-format msgid "No Block Device Mapping with id %(id)s." msgstr "Nessuna associazione dispositivo di blocco con id %(id)s." msgid "No Unique Match Found." msgstr "Non è stata trovata nessuna corrispondenza univoca." #, python-format msgid "No agent-build associated with id %(id)s." msgstr "Nessuna agent-build associata all'id %(id)s." 
msgid "No compute host specified" msgstr "Nessun host di calcolo specificato" #, python-format msgid "No configuration information found for operating system %(os_name)s" msgstr "" "Nessuna informazione di configurazione trovata per il sistema operativo " "%(os_name)s" #, python-format msgid "No device with MAC address %s exists on the VM" msgstr "Nessun dispositivo con l'indirizzo MAC %s esiste sulla VM" #, python-format msgid "No device with interface-id %s exists on VM" msgstr "Nessun dispositivo con interface-id %s esiste sulla VM" #, python-format msgid "No disk at %(location)s" msgstr "Nessun disco in %(location)s" #, python-format msgid "No fixed IP addresses available for network: %(net)s" msgstr "Nessun indirizzo IP fisso disponibile per la rete: %(net)s" msgid "No fixed IPs associated to instance" msgstr "Nessun IP fisso associato all'istanza" msgid "No free nbd devices" msgstr "Nessuna unità nbd disponibile" msgid "No host available on cluster" msgstr "Nessun host disponibile sul cluster" msgid "No hosts found to map to cell, exiting." msgstr "Nessun host trovato da associare alla cella, uscita." #, python-format msgid "No hypervisor matching '%s' could be found." msgstr "Non è stata trovata alcuna corrispondenza hypervisor '%s'." msgid "No image locations are accessible" msgstr "Nessuna ubicazione immagine è accessibile" #, python-format msgid "" "No live migration URI configured and no default available for \"%(virt_type)s" "\" hypervisor virtualization type." msgstr "" "Nessun URI di migrazione attiva configurato e nessun valore predefinito " "disponibile per il tipo di virtualizzazione dell'hypervisor \"%(virt_type)s" "\"." msgid "No more floating IPs available." msgstr "IP mobili non più disponibili." #, python-format msgid "No more floating IPs in pool %s." msgstr "Non ci sono più IP mobili nel pool %s." #, python-format msgid "No mount points found in %(root)s of %(image)s" msgstr "Nessun punto di montaggio trovato in %(root)s di %(image)s" #, python-format msgid "No operating system found in %s" msgstr "Nessun sistema operativo rilevato in %s" #, python-format msgid "No primary VDI found for %s" msgstr "Nessuna VDI principale trovata per %s" msgid "No root disk defined." msgstr "Nessun disco root definito" #, python-format msgid "" "No specific network was requested and none are available for project " "'%(project_id)s'." msgstr "" "Nessuna rete specifica era richiesta e nessuna è disponibile per il progetto " "'%(project_id)s'." msgid "No suitable network for migrate" msgstr "Nessuna rete adatta per la migrazione" msgid "No valid host found for cold migrate" msgstr "Nessun host valido trovato per la migrazione a freddo" msgid "No valid host found for resize" msgstr "Nessun host valido trovato per il ridimensionamento" #, python-format msgid "No valid host was found. %(reason)s" msgstr "Non è stato trovato alcun host valido. %(reason)s" #, python-format msgid "No volume Block Device Mapping at path: %(path)s" msgstr "" "Nessuna associazione dell'unità di blocco del volume nel percorso: %(path)s" #, python-format msgid "No volume Block Device Mapping with id %(volume_id)s." msgstr "Nessun volume di associazione unità di blocco con id %(volume_id)s." #, python-format msgid "Node %s could not be found." msgstr "Impossibile trovare il nodo %s." 
#, python-format msgid "Not able to acquire a free port for %(host)s" msgstr "Impossibile acquisire una porta libera per %(host)s" #, python-format msgid "Not able to bind %(host)s:%(port)d, %(error)s" msgstr "Impossibile collegare %(host)s:%(port)d, %(error)s" #, python-format msgid "" "Not all Virtual Functions of PF %(compute_node_id)s:%(address)s are free." msgstr "" "Non tutte le funzioni virtuali di PF %(compute_node_id)s:%(address)s sono " "disponibili." msgid "Not an rbd snapshot" msgstr "Non è un'istantanea rbd" #, python-format msgid "Not authorized for image %(image_id)s." msgstr "Non autorizzato per l'immagine %(image_id)s." msgid "Not authorized." msgstr "Non autorizzato." msgid "Not enough parameters to build a valid rule." msgstr "Parametri non sufficienti per creare una regola valida" msgid "Not implemented on Windows" msgstr "Non implementato su Windows" msgid "Not stored in rbd" msgstr "Non memorizzato in rbd" msgid "Nothing was archived." msgstr "Non è stato archiviato alcun elemento." #, python-format msgid "Nova requires libvirt version %s or greater." msgstr "Nova richiede libvirt versione %s o successiva." msgid "Number of Rows Archived" msgstr "Numero di righe archiviate" #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "Azione dell'oggetto %(action)s non riuscita perché: %(reason)s" msgid "Old volume is attached to a different instance." msgstr "Il volume precedente è collegato ad un'istanza diversa." #, python-format msgid "One or more hosts already in availability zone(s) %s" msgstr "Uno o più host sono già nelle zone di disponibilità %s" msgid "Only administrators may list deleted instances" msgstr "Solo gli amministratori possono elencare le istanze eliminate" #, python-format msgid "" "Only file-based SRs (ext/NFS) are supported by this feature. SR %(uuid)s is " "of type %(type)s" msgstr "" "In questa funzione sono supportati solo gli SR basati su file (ext/NFS). SR " "%(uuid)s è di tipo %(type)s" msgid "Origin header does not match this host." msgstr "L'intestazione origine non corrisponde a questo host." msgid "Origin header not valid." msgstr "Intestazione origine non valida." msgid "Origin header protocol does not match this host." msgstr "Il protocollo dell'intestazione origine non corrisponde a questo host." #, python-format msgid "PCI Device %(node_id)s:%(address)s not found." msgstr "Dispositivo PCI %(node_id)s:%(address)s non trovato." 
#, python-format msgid "PCI alias %(alias)s is not defined" msgstr "L'alias PCI %(alias)s non è definito" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is %(status)s instead of " "%(hopestatus)s" msgstr "" "Il dispositivo PCI %(compute_node_id)s:%(address)s è %(status)s anziché di " "%(hopestatus)s" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is owned by %(owner)s instead of " "%(hopeowner)s" msgstr "" "Il dispositivo PCI %(compute_node_id)s:%(address)s è di proprietà di " "%(owner)s anziché di %(hopeowner)s" #, python-format msgid "PCI device %(id)s not found" msgstr "Dispositivo PCI %(id)s non trovato" #, python-format msgid "PCI device request %(requests)s failed" msgstr "La richiesta del dispositivo PCI %(requests)s non è riuscita" #, python-format msgid "PIF %s does not contain IP address" msgstr "PIF %s non contiene l'indirizzo IP" #, python-format msgid "Page size %(pagesize)s forbidden against '%(against)s'" msgstr "Dimensione pagina %(pagesize)s non consentita su '%(against)s'" #, python-format msgid "Page size %(pagesize)s is not supported by the host." msgstr "Dimensione pagina %(pagesize)s non supportata dall'host." #, python-format msgid "" "Parameters %(missing_params)s not present in vif_details for vif %(vif_id)s. " "Check your Neutron configuration to validate that the macvtap parameters are " "correct." msgstr "" "Parametri %(missing_params)s non presenti in vif_details per vif %(vif_id)s. " "Controllare la configurazione neutron per confermare che i parametri macvtap " "siano corretti." #, python-format msgid "Path %s must be LVM logical volume" msgstr "Il percorso %s deve essere un volume logico LVM" msgid "Paused" msgstr "In pausa" msgid "Personality file limit exceeded" msgstr "Il limite del file di personalità è stato superato" #, python-format msgid "" "Physical Function %(compute_node_id)s:%(address)s, related to VF " "%(compute_node_id)s:%(vf_address)s is %(status)s instead of %(hopestatus)s" msgstr "" "La funzione fisica %(compute_node_id)s:%(address)s, correlata a VF " "%(compute_node_id)s:%(vf_address)s è %(status)s anziché %(hopestatus)s" #, python-format msgid "Physical network is missing for network %(network_uuid)s" msgstr "Manca la rete fisica per la rete %(network_uuid)s" #, python-format msgid "Policy doesn't allow %(action)s to be performed." msgstr "La politica non consente di eseguire l'azione %(action)s ." #, python-format msgid "Port %(port_id)s is still in use." msgstr "La porta %(port_id)s è ancora in uso." #, python-format msgid "Port %(port_id)s not usable for instance %(instance)s." msgstr "La porta %(port_id)s non è utilizzabile per l'istanza %(instance)s." #, python-format msgid "" "Port %(port_id)s not usable for instance %(instance)s. Value %(value)s " "assigned to dns_name attribute does not match instance's hostname " "%(hostname)s" msgstr "" "La porta %(port_id)s non è utilizzabile per l'istanza %(instance)s. Il " "valore %(value)s assegnato all'attributo dns_name non corrisponde al nome " "host dell'istanza %(hostname)s" #, python-format msgid "Port %(port_id)s requires a FixedIP in order to be used." msgstr "La porta %(port_id)s richiede un FixedIP per essere utilizzata." #, python-format msgid "Port %s is not attached" msgstr "La porta %s non è collegata" #, python-format msgid "Port id %(port_id)s could not be found." msgstr "Impossibile trovare l'd porta %(port_id)s." #, python-format msgid "Provided video model (%(model)s) is not supported." 
msgstr "Il modello video fornito (%(model)s) non è supportato." #, python-format msgid "Provided watchdog action (%(action)s) is not supported." msgstr "L'azione watchdog (%(action)s) non è supportata." msgid "QEMU guest agent is not enabled" msgstr "Agent guest QEMU non abilitato" #, python-format msgid "Quiescing is not supported in instance %(instance_id)s" msgstr "Sospensione non supportata per l'istanza %(instance_id)s" #, python-format msgid "Quota class %(class_name)s could not be found." msgstr "Impossibile trovare la classe di quota %(class_name)s." msgid "Quota could not be found" msgstr "Impossibile trovare la quota" #, python-format msgid "" "Quota exceeded for %(overs)s: Requested %(req)s, but already used %(used)s " "of %(allowed)s %(overs)s" msgstr "" "Quota superata per %(overs)s: Richiesto %(req)s, ma già utilizzato %(used)s " "%(allowed)s %(overs)s" #, python-format msgid "Quota exceeded for resources: %(overs)s" msgstr "Quota superata per le risorse: %(overs)s" msgid "Quota exceeded, too many key pairs." msgstr "Quota superata, troppe coppie di chiavi." msgid "Quota exceeded, too many server groups." msgstr "Quota superata, troppi gruppi di server." msgid "Quota exceeded, too many servers in group" msgstr "Quota superata, troppi server nel gruppo" #, python-format msgid "Quota exceeded: code=%(code)s" msgstr "Quota superata: code=%(code)s" #, python-format msgid "Quota exists for project %(project_id)s, resource %(resource)s" msgstr "la quota per il progetto %(project_id)s esiste, risorsa %(resource)s" #, python-format msgid "Quota for project %(project_id)s could not be found." msgstr "Impossibile trovare la quota per il progetto %(project_id)s." #, python-format msgid "" "Quota for user %(user_id)s in project %(project_id)s could not be found." msgstr "" "Impossibile trovare la quota per l'utente %(user_id)s nel progetto " "%(project_id)s." #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be greater than or equal to " "already used and reserved %(minimum)s." msgstr "" "Il limite della quota %(limit)s per %(resource)s deve essere maggiore o " "uguale a quello già utilizzato e prenotato %(minimum)s." #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be less than or equal to " "%(maximum)s." msgstr "" "Il limite della quota %(limit)s per %(resource)s deve essere inferiore o " "uguale a %(maximum)s." #, python-format msgid "Reached maximum number of retries trying to unplug VBD %s" msgstr "Raggiunto numero massimo di tentativi per scollegare VBD %s" msgid "" "Realtime policy needs vCPU(s) mask configured with at least 1 RT vCPU and 1 " "ordinary vCPU. See hw:cpu_realtime_mask or hw_cpu_realtime_mask" msgstr "" "La politica in tempo reale necessita della maschera vCPU(s) configurata con " "almeno 1 vCPU RT e 1 vCPU ordinaria. Vedere hw:cpu_realtime_mask o " "hw_cpu_realtime_mask" msgid "Request body and URI mismatch" msgstr "Il corpo della richiesta e l'URI non corrispondono" msgid "Request is too large." msgstr "La richiesta è troppo grande." 
#, python-format msgid "Request of image %(image_id)s got BadRequest response: %(response)s" msgstr "" "La richiesta dell'immagine %(image_id)s ha ricevuto una risposta BadRequest: " "%(response)s" #, python-format msgid "RequestSpec not found for instance %(instance_uuid)s" msgstr "RequestSpec non trovata per l'istanza %(instance_uuid)s" msgid "Requested CPU control policy not supported by host" msgstr "" "La politica di controllo della CPU richiesta non è supportata dall'host" #, python-format msgid "" "Requested hardware '%(model)s' is not supported by the '%(virt)s' virt driver" msgstr "" "L'hardware richiesto '%(model)s' non è supportato dal driver virtuale " "'%(virt)s'" #, python-format msgid "Requested image %(image)s has automatic disk resize disabled." msgstr "" "Il ridimensionamento automatico del disco per l'immagine richiesta %(image)s " "è disabilitato." msgid "" "Requested instance NUMA topology cannot fit the given host NUMA topology" msgstr "" "La topologia NUMA dell'istanza richiesta non si adatta alla topologia NUMA " "dell'host fornito" msgid "" "Requested instance NUMA topology together with requested PCI devices cannot " "fit the given host NUMA topology" msgstr "" "La topologia NUMA dell'istanza richiesta insieme ai dispositivi PCI " "richiesti non si adatta alla topologia NUMA dell'host fornito" #, python-format msgid "" "Requested vCPU limits %(sockets)d:%(cores)d:%(threads)d are impossible to " "satisfy for vcpus count %(vcpus)d" msgstr "" "I limiti vCPU richiesti %(sockets)d:%(cores)d:%(threads)d non sono possibili " "per soddisfare il conteggio vcpu %(vcpus)d" #, python-format msgid "Rescue device does not exist for instance %s" msgstr "Il dispositivo di ripristino non esiste per l'istanza %s" #, python-format msgid "Resize error: %(reason)s" msgstr "Errore di ridimensionamento: %(reason)s" msgid "Resize to zero disk flavor is not allowed." msgstr "Il ridimensionamento del flavor disco su zero non è consentito." msgid "Resource could not be found." msgstr "Impossibile trovare la risorsa." msgid "Resumed" msgstr "Ripristinato" #, python-format msgid "Root element name should be '%(name)s' not '%(tag)s'" msgstr "Il nome dell'elemento root deve essere '%(name)s', non '%(tag)s'" #, python-format msgid "Running batches of %i until complete" msgstr "Esecuzione di batch di %i fino al completamento" #, python-format msgid "Scheduler Host Filter %(filter_name)s could not be found." msgstr "Impossibile trovare il filtro Scheduler Host %(filter_name)s." #, python-format msgid "Security group %(name)s is not found for project %(project)s" msgstr "" "Il gruppo di sicurezza %(name)s non è stato trovato per il progetto " "%(project)s" #, python-format msgid "" "Security group %(security_group_id)s not found for project %(project_id)s." msgstr "" "Impossibile trovare il gruppo di sicurezza %(security_group_id)s per il " "progetto %(project_id)s." #, python-format msgid "Security group %(security_group_id)s not found." msgstr "Impossibile trovare il gruppo di sicurezza %(security_group_id)s." #, python-format msgid "" "Security group %(security_group_name)s already exists for project " "%(project_id)s." msgstr "" "Il gruppo di sicurezza %(security_group_name)s esiste già per il progetto " "%(project_id)s." 
#, python-format msgid "" "Security group %(security_group_name)s not associated with the instance " "%(instance)s" msgstr "" "Il gruppo di sicurezza %(security_group_name)s non è associato all'istanza " "%(instance)s" msgid "Security group id should be uuid" msgstr "L'id gruppo sicurezza deve essere uuid" msgid "Security group name cannot be empty" msgstr "Il nome gruppo di sicurezza non può essere vuoto" msgid "Security group not specified" msgstr "Gruppo di sicurezza non specificato" #, python-format msgid "Server disk was unable to be resized because: %(reason)s" msgstr "" "Non è stato possibile ridimensionare il disco del server perché: %(reason)s" msgid "Server does not exist" msgstr "Il server non esiste" #, python-format msgid "ServerGroup policy is not supported: %(reason)s" msgstr "Politica ServerGroup non supportata: %(reason)s" msgid "ServerGroupAffinityFilter not configured" msgstr "ServerGroupAffinityFilter non configurato" msgid "ServerGroupAntiAffinityFilter not configured" msgstr "ServerGroupAntiAffinityFilter non configurato" msgid "ServerGroupSoftAffinityWeigher not configured" msgstr "ServerGroupSoftAffinityWeigher non configurato" msgid "ServerGroupSoftAntiAffinityWeigher not configured" msgstr "ServerGroupSoftAntiAffinityWeigher non configurato" #, python-format msgid "Service %(service_id)s could not be found." msgstr "Impossibile trovare il servizio %(service_id)s." #, python-format msgid "Service %s not found." msgstr "Il servizio %s non è stato trovato." msgid "Service is unavailable at this time." msgstr "Il servizio non è disponibile in questo momento." #, python-format msgid "Service with host %(host)s binary %(binary)s exists." msgstr "Il servizio con valore host %(host)s binario %(binary)s esiste già." #, python-format msgid "Service with host %(host)s topic %(topic)s exists." msgstr "Il servizio con host %(host)s topic %(topic)s esiste." msgid "Set admin password is not supported" msgstr "L'impostazione della password admin non è supportata" #, python-format msgid "Shadow table with name %(name)s already exists." msgstr "La tabella cronologia con il nome %(name)s esiste già." #, python-format msgid "Share '%s' is not supported" msgstr "La condivisione '%s' non è supportata" #, python-format msgid "Share level '%s' cannot have share configured" msgstr "" "Il livello di condivisione '%s' non può avere la condivisione configurata" msgid "" "Shrinking the filesystem down with resize2fs has failed, please check if you " "have enough free space on your disk." msgstr "" "Riduzione del filesystem con resize2fs non riuscita, controllare se si " "dispone di spazio sufficiente sul proprio disco." #, python-format msgid "Snapshot %(snapshot_id)s could not be found." msgstr "Impossibile trovare l'istantanea %(snapshot_id)s." msgid "Some required fields are missing" msgstr "Mancano alcuni campi obbligatori" #, python-format msgid "" "Something went wrong when deleting a volume snapshot: rebasing a " "%(protocol)s network disk using qemu-img has not been fully tested" msgstr "" "Si è verificato un errore durante l'eliminazione di un'istantanea del " "volume: la creazione di una nuova base per un disco di rete %(protocol)s " "tramite qemu-img non è stata testata completamente" msgid "Sort direction size exceeds sort key size" msgstr "" "La dimensione del criterio di ordinamento supera la dimensione della chiave " "di ordinamento" msgid "Sort key supplied was not valid." msgstr "La chiave di ordinamento fornita non è valida." 
msgid "Specified fixed address not assigned to instance" msgstr "L'indirizzo fisso specificato non è stato assegnato all'istanza" msgid "Specify `table_name` or `table` param" msgstr "Specificare il parametro `table_name` o `table`" msgid "Specify only one param `table_name` `table`" msgstr "Specificare solo un parametro `table_name` `table`" msgid "Started" msgstr "Avviato" msgid "Stopped" msgstr "Di arresto" #, python-format msgid "Storage error: %(reason)s" msgstr "Errore di memoria: %(reason)s" #, python-format msgid "Storage policy %s did not match any datastores" msgstr "La politica di archiviazione %s non corrisponde ad alcun archivio dati" msgid "Success" msgstr "Riuscito" msgid "Suspended" msgstr "Sospeso" msgid "Swap drive requested is larger than instance type allows." msgstr "" "L'unità di scambio richiesta è più grande di quanto consentito dal tipo di " "istanza." msgid "Table" msgstr "Tabella" #, python-format msgid "Task %(task_name)s is already running on host %(host)s" msgstr "L'attività %(task_name)s è già in esecuzione nell'host %(host)s" #, python-format msgid "Task %(task_name)s is not running on host %(host)s" msgstr "L'attività %(task_name)s non è in esecuzione nell'host %(host)s" #, python-format msgid "The PCI address %(address)s has an incorrect format." msgstr "Il formato dell'indirizzo PCI %(address)s non è corretto." msgid "The backlog must be more than 0" msgstr "Il backlog deve essere maggiore di 0" #, python-format msgid "The console port range %(min_port)d-%(max_port)d is exhausted." msgstr "La serie di porte di console %(min_port)d-%(max_port)d è esaurita." msgid "The created instance's disk would be too small." msgstr "Il disco dell'istanza creata potrebbe essere troppo piccolo." msgid "The current driver does not support preserving ephemeral partitions." msgstr "" "Il driver corrente non supporta la conservazione di partizioni effimere." msgid "The default PBM policy doesn't exist on the backend." msgstr "La politica PBM predefinita non esiste sul backend." msgid "The floating IP request failed with a BadRequest" msgstr "Richiesta IP mobile non riuscita con errore Richiesta non corretta" msgid "" "The instance requires a newer hypervisor version than has been provided." msgstr "" "L'istanza richiede una versione di hypervisor più recente di quella fornita." #, python-format msgid "The number of defined ports: %(ports)d is over the limit: %(quota)d" msgstr "" "Il numero di porte definite: %(ports)d è superiore al limite: %(quota)d" msgid "The only partition should be partition 1." msgstr "L'unica partizione dovrebbe essere la partizione 1." #, python-format msgid "The provided RNG device path: (%(path)s) is not present on the host." msgstr "" "Il percorso del dispositivo RNG fornito (%(path)s) non è presente sull'host." msgid "The request body can't be empty" msgstr "Il corpo della richiesta non può essere vuoto" msgid "The request is invalid." msgstr "La richiesta non è valida." #, python-format msgid "" "The requested amount of video memory %(req_vram)d is higher than the maximum " "allowed by flavor %(max_vram)d." msgstr "" "La quantità di memoria video richiesta %(req_vram)d è superiore a quella " "massima generalmente consentita %(max_vram)d." msgid "The requested availability zone is not available" msgstr "L'area di disponibilità richiesta non è disponibile" msgid "The requested console type details are not accessible" msgstr "I dettagli del tipo di console richiesta non sono accessibili" msgid "The requested functionality is not supported." 
msgstr "La funzionalità richiesta non è supportata." #, python-format msgid "The specified cluster '%s' was not found in vCenter" msgstr "Il cluster specificato '%s' non è stato trovato in vCenter" #, python-format msgid "The supplied device path (%(path)s) is in use." msgstr "Il percorso unità specificato (%(path)s) è in uso." #, python-format msgid "The supplied device path (%(path)s) is invalid." msgstr "Il percorso unità specificato (%(path)s) non è valido." #, python-format msgid "" "The supplied disk path (%(path)s) already exists, it is expected not to " "exist." msgstr "" "Il percorso disco (%(path)s) specificato esiste già, è previsto che non " "esista." msgid "The supplied hypervisor type of is invalid." msgstr "Il tipo di hypervisor fornito non è valido." msgid "The target host can't be the same one." msgstr "L'host di destinazione non può essere lo stesso." #, python-format msgid "The token '%(token)s' is invalid or has expired" msgstr "Il token '%(token)s' non è valido oppure è scaduto" #, python-format msgid "" "The volume cannot be assigned the same device name as the root device %s" msgstr "" "Non è possibile assegnare al volume lo stesso nome dispositivo assegnato al " "dispositivo root %s" #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. Run this command again with the --delete " "option after you have backed up any necessary data." msgstr "" "Sono presenti %(records)d record nella tabella '%(table_name)s' in cui la " "colonna uuid o instance_uuid è NULL. Eseguire nuovamente questo comando con " "l'opzione --delete dopo aver eseguito il backup dei dati necessari." #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. These must be manually cleaned up before " "the migration will pass. Consider running the 'nova-manage db " "null_instance_uuid_scan' command." msgstr "" "Sono presenti %(records)d record nella tabella '%(table_name)s' in cui la " "colonna uuid o instance_uuid è NULL. È necessario eliminarli manualmente " "prima della migrazione. Prendere in considerazione l'esecuzione del comando " "'nova-manage db null_instance_uuid_scan'." msgid "There are not enough hosts available." msgstr "Numero di host disponibili non sufficiente." #, python-format msgid "" "There are still %(count)i unmigrated flavor records. Migration cannot " "continue until all instance flavor records have been migrated to the new " "format. Please run `nova-manage db migrate_flavor_data' first." msgstr "" "Ci sono ancora %(count)i record di versione non migrati. La migrazione non " "può continuare finché tutti i record di versione istanza non sono stati " "migrati al nuovo formato. Eseguire prima 'nova-manage db " "migrate_flavor_data'." #, python-format msgid "There is no such action: %s" msgstr "Non esiste alcuna azione simile: %s" msgid "There were no records found where instance_uuid was NULL." msgstr "Nessun record trovato in cui instance_uuid era NULL." #, python-format msgid "" "This compute node's hypervisor is older than the minimum supported version: " "%(version)s." msgstr "" "L' hypervisor del nodo di calcolo è più vecchio della versione minima " "supportata: %(version)s." 
msgid "This domU must be running on the host specified by connection_url" msgstr "" "Questo domU deve essere in esecuzione sull'host specificato da connection_url" msgid "" "This method needs to be called with either networks=None and port_ids=None " "or port_ids and networks as not none." msgstr "" "Questo metodo deve essere richiamato con netoworks=None e port_ids=None o " "port_ids e networks non none." #, python-format msgid "This rule already exists in group %s" msgstr "Questa regola già esiste nel gruppo %s" #, python-format msgid "" "This service is older (v%(thisver)i) than the minimum (v%(minver)i) version " "of the rest of the deployment. Unable to continue." msgstr "" "Questo servizio è più vecchio (v%(thisver)i) della versione minima (v" "%(minver)i) del resto della distribuzione. Impossibile continuare." #, python-format msgid "Timeout waiting for device %s to be created" msgstr "Timeout in attesa che l'unità %s venga creata" msgid "Timeout waiting for response from cell" msgstr "Timeout in attesa di risposta dalla cella" #, python-format msgid "Timeout while checking if we can live migrate to host: %s" msgstr "" "Timeout durante il controllo della possibilità di eseguire la migrazione " "live all'host: %s" msgid "To and From ports must be integers" msgstr "Le porte 'Da' e 'A' devono essere numeri interi" msgid "Token not found" msgstr "Token non trovato" msgid "Triggering crash dump is not supported" msgstr "L'attivazione del dump di crash non è supportata" msgid "Type and Code must be integers for ICMP protocol type" msgstr "Tipo e codice devono essere numeri interi per il tipo protocollo ICMP" msgid "UEFI is not supported" msgstr "UEFI non è supportato" #, python-format msgid "" "Unable to associate floating IP %(address)s to any fixed IPs for instance " "%(id)s. Instance has no fixed IPv4 addresses to associate." msgstr "" "Impossibile associare l'IP mobile %(address)s all'IP fisso per l'istanza " "%(id)s. L'istanza non presenta alcun indirizzo IPv4 fisso da associare." #, python-format msgid "" "Unable to associate floating IP %(address)s to fixed IP %(fixed_address)s " "for instance %(id)s. Error: %(error)s" msgstr "" "Impossibile associare l'IP mobile %(address)s all'IP fisso %(fixed_address)s " "per l'istanza %(id)s. Errore: %(error)s" msgid "Unable to authenticate Ironic client." msgstr "Impossibile autenticare il client Ironic." #, python-format msgid "Unable to contact guest agent. The following call timed out: %(method)s" msgstr "" "Impossibile contattare l'agent guest. 
La seguente chiamata è scaduta: " "%(method)s" #, python-format msgid "Unable to convert image to %(format)s: %(exp)s" msgstr "Impossibile convertire l'immagine in %(format)s: %(exp)s" #, python-format msgid "Unable to convert image to raw: %(exp)s" msgstr "Impossibile convertire l'immagine in immagine non elaborata: %(exp)s" #, python-format msgid "Unable to destroy VBD %s" msgstr "Impossibile distruggere VBD %s" #, python-format msgid "Unable to destroy VDI %s" msgstr "Impossibile distruggere VDI %s" #, python-format msgid "Unable to determine disk bus for '%s'" msgstr "Impossibile determinare il bus del disco per '%s'" #, python-format msgid "Unable to determine disk prefix for %s" msgstr "Impossibile determinare il prefisso del disco per %s" #, python-format msgid "Unable to eject %s from the pool; No master found" msgstr "Impossibile espellere %s dal pool; Non è stato trovato alcun master" #, python-format msgid "Unable to eject %s from the pool; pool not empty" msgstr "Impossibile espellere %s dal pool; il pool non è vuoto" #, python-format msgid "Unable to find SR from VBD %s" msgstr "Impossibile trovare SR da VBD %s" #, python-format msgid "Unable to find SR from VDI %s" msgstr "Impossibile trovare SR da VDI %s" #, python-format msgid "Unable to find ca_file : %s" msgstr "Impossibile trovare il file_ca: %s" #, python-format msgid "Unable to find cert_file : %s" msgstr "Impossibile trovare il file_cert : %s" #, python-format msgid "Unable to find host for Instance %s" msgstr "Impossibile trovare l'host per l'istanza %s" msgid "Unable to find iSCSI Target" msgstr "Impossibile trovare la destinazione iSCSI" #, python-format msgid "Unable to find key_file : %s" msgstr "Impossibile trovare il file_chiavi : %s" msgid "Unable to find root VBD/VDI for VM" msgstr "Impossibile trovare la root VBD/VDI per la VM" msgid "Unable to find volume" msgstr "Impossibile trovare il volume" msgid "Unable to get host UUID: /etc/machine-id does not exist" msgstr "Impossibile richiamare l'UUID host: /etc/machine-id non esiste" msgid "Unable to get host UUID: /etc/machine-id is empty" msgstr "Impossibile richiamare l'UUID host: /etc/machine-id è vuoto" #, python-format msgid "Unable to get record of VDI %s on" msgstr "Impossibile acquisire un record di VDI %s in" #, python-format msgid "Unable to introduce VDI for SR %s" msgstr "Impossibile introdurre VDI per SR %s" #, python-format msgid "Unable to introduce VDI on SR %s" msgstr "Impossibile introdurre VDI in SR %s" #, python-format msgid "Unable to join %s in the pool" msgstr "Impossibile unire %s nel pool" msgid "" "Unable to launch multiple instances with a single configured port ID. Please " "launch your instance one by one with different ports." msgstr "" "Impossibile avviare più istanze con un singolo ID porta configurato. Avviare " "le proprie istanze una per volta con porte differenti." 
#, python-format msgid "" "Unable to migrate %(instance_uuid)s to %(dest)s: Lack of memory(host:" "%(avail)s <= instance:%(mem_inst)s)" msgstr "" "Impossibile migrare %(instance_uuid)s to %(dest)s: mancanza di memoria (host:" "%(avail)s <= istanza:%(mem_inst)s)" #, python-format msgid "" "Unable to migrate %(instance_uuid)s: Disk of instance is too large(available " "on destination host:%(available)s < need:%(necessary)s)" msgstr "" "Impossibile migrare %(instance_uuid)s: il disco dell'istanza è troppo grande " "(disponibile nell'host di destinazione: %(available)s < necessario: " "%(necessary)s)" #, python-format msgid "" "Unable to migrate instance (%(instance_id)s) to current host (%(host)s)." msgstr "" "Impossibile migrare l'istanza (%(instance_id)s) nell'host corrente " "(%(host)s)." #, python-format msgid "Unable to obtain target information %s" msgstr "Impossibile ottenere le informazioni sulla destinazione %s" msgid "Unable to resize disk down." msgstr "Impossibile ridurre il disco a dimensioni inferiori." msgid "Unable to set password on instance" msgstr "Impossibile impostare la password sull'istanza" msgid "Unable to shrink disk." msgstr "Impossibile ridurre il disco." msgid "Unable to terminate instance." msgstr "Impossibile terminare l'istanza." #, python-format msgid "Unable to unplug VBD %s" msgstr "Impossibile scollegare VBD %s" #, python-format msgid "Unacceptable CPU info: %(reason)s" msgstr "Informazioni CPU non accettabili: %(reason)s" msgid "Unacceptable parameters." msgstr "Parametri inaccettabili." #, python-format msgid "Unavailable console type %(console_type)s." msgstr "Tipo di console non disponibile %(console_type)s." msgid "" "Undefined Block Device Mapping root: BlockDeviceMappingList contains Block " "Device Mappings from multiple instances." msgstr "" "Root di associazione dispositivo di blocco non definita: " "BlockDeviceMappingList contiene le associazioni del dispositivo di blocco di " "più istanze." #, python-format msgid "" "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ " "and attach the Nova API log if possible.\n" "%s" msgstr "" "Errore API non previsto. Segnalarlo a http://bugs.launchpad.net/nova/ e " "allegare il log Nova API, se possibile.\n" "%s" #, python-format msgid "Unexpected aggregate action %s" msgstr "Azione aggregato non prevista %s" msgid "Unexpected type adding stats" msgstr "Tipo non previsto durante l'aggiunta di statistiche" #, python-format msgid "Unexpected vif_type=%s" msgstr "vif_type=%s imprevisto" msgid "Unknown" msgstr "Sconosciuto" msgid "Unknown action" msgstr "Azione sconosciuta" #, python-format msgid "Unknown config drive format %(format)s. Select one of iso9660 or vfat." msgstr "" "Formato unità di configurazione sconosciuto %(format)s. Selezionare una di " "iso9660 o vfat." #, python-format msgid "Unknown delete_info type %s" msgstr "Tipo di delete_info %s sconosciuto" #, python-format msgid "Unknown image_type=%s" msgstr "image_type=%s sconosciuto" #, python-format msgid "Unknown quota resources %(unknown)s." msgstr "Risorse quota sconosciute %(unknown)s." msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "Direzione ordinamento sconosciuta, deve essere 'desc' o 'asc'" #, python-format msgid "Unknown type: %s" msgstr "Tipo sconosciuto: %s" msgid "Unrecognized legacy format." msgstr "Formato legacy non riconosciuto." 
#, python-format msgid "Unrecognized read_deleted value '%s'" msgstr "Valore read_deleted non riconosciuto '%s'" #, python-format msgid "Unrecognized value '%s' for CONF.running_deleted_instance_action" msgstr "Valore non riconosciuto '%s' per CONF.running_deleted_instance_action" #, python-format msgid "Unshelve attempted but the image %s cannot be found." msgstr "Accantonamento tentato, ma l'immagine %s non è stata trovata" msgid "Unsupported Content-Type" msgstr "Tipo-contenuto non supportato" msgid "Upgrade DB using Essex release first." msgstr "Aggiorna il DB utilizzando prima la release Essex." #, python-format msgid "User %(username)s not found in password file." msgstr "Utente %(username)s non trovato nel file di password." #, python-format msgid "User %(username)s not found in shadow file." msgstr "Utente %(username)s non trovato nel file shadow." msgid "User data needs to be valid base 64." msgstr "I dati utente devono avere una valida base 64." msgid "User does not have admin privileges" msgstr "L'utente non ha i privilegi dell'amministratore" msgid "" "Using different block_device_mapping syntaxes is not allowed in the same " "request." msgstr "" "L'utilizzo di sintassi block_device_mapping differenti non è consentito " "nella stessa CSR1vk." #, python-format msgid "" "VDI %(vdi_ref)s is %(virtual_size)d bytes which is larger than flavor size " "of %(new_disk_size)d bytes." msgstr "" "VDI %(vdi_ref)s è %(virtual_size)d byte che è maggiore della dimensione del " "flavor di %(new_disk_size)d byte." #, python-format msgid "" "VDI not found on SR %(sr)s (vdi_uuid %(vdi_uuid)s, target_lun %(target_lun)s)" msgstr "" "VDI non trovato su SR %(sr)s (vdi_uuid %(vdi_uuid)s, target_lun " "%(target_lun)s)" #, python-format msgid "VHD coalesce attempts exceeded (%d), giving up..." msgstr "" "I tentativi di unione VHD sono stati superati (%d), rinuncia in corso..." #, python-format msgid "" "Version %(req_ver)s is not supported by the API. Minimum is %(min_ver)s and " "maximum is %(max_ver)s." msgstr "" "La versione %(req_ver)s non è supportata dall'API. Il valore minimo è " "%(min_ver)s ed il massimo è %(max_ver)s." msgid "Virtual Interface creation failed" msgstr "Creazione interfaccia virtuale non riuscita" msgid "Virtual interface plugin failed" msgstr "Plugin dell'interfaccia virtuale non riuscito" #, python-format msgid "Virtual machine mode '%(vmmode)s' is not recognised" msgstr "La modalità della macchina virtuale '%(vmmode)s' non è riconosciuta" #, python-format msgid "Virtual machine mode '%s' is not valid" msgstr "La modalità della macchina virtuale '%s' non è valida" #, python-format msgid "Virtualization type '%(virt)s' is not supported by this compute driver" msgstr "" "Il tipo di virtualizzazione '%(virt)s' non è supportato dal questo driver " "compute" #, python-format msgid "Volume %(volume_id)s could not be attached. Reason: %(reason)s" msgstr "Impossibile collegare il volume %(volume_id)s. Motivo: %(reason)s" #, python-format msgid "Volume %(volume_id)s could not be found." msgstr "Impossibile trovare il volume %(volume_id)s." #, python-format msgid "" "Volume %(volume_id)s did not finish being created even after we waited " "%(seconds)s seconds or %(attempts)s attempts. And its status is " "%(volume_status)s." msgstr "" "La creazione del volume %(volume_id)s non è stata completata anche dopo " "un'attesa di %(seconds)s secondi o %(attempts)s tentativi e lo stato è " "%(volume_status)s." msgid "Volume does not belong to the requested instance." 
msgstr "Il volume non appartiene all'istanza richiesta." #, python-format msgid "" "Volume encryption is not supported for %(volume_type)s volume %(volume_id)s" msgstr "" "Codifica volume non supportata per volume %(volume_type)s %(volume_id)s" #, python-format msgid "" "Volume is smaller than the minimum size specified in image metadata. Volume " "size is %(volume_size)i bytes, minimum size is %(image_min_disk)i bytes." msgstr "" "Il volume è più piccolo della dimensione minima specificata nei metadati " "dell'immagine. Dimensione volume %(volume_size)i byte, dimensione minima " "%(image_min_disk)i byte." #, python-format msgid "" "Volume sets block size, but the current libvirt hypervisor '%s' does not " "support custom block size" msgstr "" "Il volume imposta la dimensione del blocco ma l'hypervisor libvirt corrente " "'%s' non supporta la dimensione del blocco personalizzata" #, python-format msgid "" "We do not support scheme '%s' under Python < 2.7.4, please use http or https" msgstr "" "Lo schema '%s' non è supportato in Python < 2.7.4, utilizzare http o https" msgid "When resizing, instances must change flavor!" msgstr "Durante il ridimensionamento, le istanze devono cambiare tipologia!" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "Quando si esegue il server in modalità SSL, è necessario specificare sia un " "valore dell'opzione cert_file che key_file nel file di configurazione" #, python-format msgid "Wrong quota method %(method)s used on resource %(res)s" msgstr "Metodo quota errato %(method)s utilizzato per la risorsa %(res)s" msgid "Wrong type of hook method. Only 'pre' and 'post' type allowed" msgstr "" "Tipo di metodo hook non valido. Sono consentiti solo i tipi 'pre' e 'post'" msgid "X-Forwarded-For is missing from request." msgstr "X-Forwarded-For manca dalla richiesta." msgid "X-Instance-ID header is missing from request." msgstr "L'intestazione X-Instance-ID manca nella richiesta." msgid "X-Instance-ID-Signature header is missing from request." msgstr "Intestazione X-Instance-ID-Signature non presente nella richiesta." msgid "X-Metadata-Provider is missing from request." msgstr "X-Metadata-Provider manca dalla richiesta." msgid "X-Tenant-ID header is missing from request." msgstr "L'intestazione X-Tenant-ID non è presente nella richiesta." msgid "XAPI supporting relax-xsm-sr-check=true required" msgstr "XAPI richiesto per il supporto di relax-xsm-sr-check=true" msgid "You are not allowed to delete the image." msgstr "Non è consentito eliminare l'immagine." msgid "" "You are not authorized to access the image the instance was started with." msgstr "" "Non si è autorizzati ad accedere all'immagine con cui è stata avviata " "l'istanza." msgid "You must implement __call__" msgstr "È necessario implementare __call__" msgid "You should specify images_rbd_pool flag to use rbd images." msgstr "" "È necessario specificare l'indicatore images_rbd_pool per utilizzare le " "immagini rbd." msgid "You should specify images_volume_group flag to use LVM images." msgstr "" "È necessario specificare l'indicatore images_volume_group per utilizzare le " "immagini LVM." msgid "Zero floating IPs available." msgstr "Nessun IP mobile disponibile." 
msgid "admin password can't be changed on existing disk" msgstr "La password admin non può essere modificata sul disco esistente" msgid "aggregate deleted" msgstr "aggregato eliminato" msgid "aggregate in error" msgstr "aggregato in errore" #, python-format msgid "assert_can_migrate failed because: %s" msgstr "assert_can_migrate non riuscito a causa di: %s" msgid "cannot understand JSON" msgstr "impossibile riconoscere JSON" msgid "clone() is not implemented" msgstr "Il clone () non è implementato" #, python-format msgid "connect info: %s" msgstr "informazioni di connessione: %s" #, python-format msgid "connecting to: %(host)s:%(port)s" msgstr "connessione a: %(host)s:%(port)s" msgid "direct_snapshot() is not implemented" msgstr "direct_snapshot() non implementato" #, python-format msgid "disk type '%s' not supported" msgstr "tipo di disco '%s' non supportato" #, python-format msgid "empty project id for instance %s" msgstr "id progetto vuoto per l'istanza %s" msgid "error setting admin password" msgstr "errore di impostazione della password admin" #, python-format msgid "error: %s" msgstr "errore: %s" #, python-format msgid "failed to generate X509 fingerprint. Error message: %s" msgstr "impossibile generare l'impronta digitale X509. Messaggio di errore: %s" msgid "failed to generate fingerprint" msgstr "impossibile generare l'impronta digitale" msgid "filename cannot be None" msgstr "il nome file non può essere None" msgid "floating IP is already associated" msgstr "l'IP mobile è già associato" msgid "floating IP not found" msgstr "IP mobile non trovato" #, python-format msgid "fmt=%(fmt)s backed by: %(backing_file)s" msgstr "fmt=%(fmt)s sottoposto a backup da: %(backing_file)s" #, python-format msgid "href %s does not contain version" msgstr "href %s non contiene la versione" msgid "image already mounted" msgstr "immagine già montata" #, python-format msgid "instance %s is not running" msgstr "l'istanza %s non è in esecuzione" msgid "instance has a kernel or ramdisk but not both" msgstr "l'istanza ha un kernel o ramdisk ma non entrambi" msgid "instance is a required argument to use @refresh_cache" msgstr "istanza è un argomento obbligatorio per utilizzare @refresh_cache" msgid "instance is not in a suspended state" msgstr "Lo stato dell'istanza non è suspended" msgid "instance is not powered on" msgstr "l'istanza non è accesa" msgid "instance is powered off and cannot be suspended." msgstr "l'istanza è spenta e non può essere sospesa." #, python-format msgid "instance_id %s could not be found as device id on any ports" msgstr "" "Non è stato possibile trovare l'instance_id %s come ID dispositivo su " "qualsiasi porta" msgid "is_public must be a boolean" msgstr "is_public deve essere un booleano" msgid "keymgr.fixed_key not defined" msgstr "keymgr.fixed_key non definito" msgid "l3driver call to add floating IP failed" msgstr "chiamata di l3driver per aggiungere IP mobile non riuscita" #, python-format msgid "libguestfs installed but not usable (%s)" msgstr "libguestfs installato, ma non utilizzabile (%s)" #, python-format msgid "libguestfs is not installed (%s)" msgstr "libguestfs non è installato (%s)" #, python-format msgid "marker [%s] not found" msgstr "indicatore [%s] non trovato" #, python-format msgid "max rows must be <= %(max_value)d" msgstr "il numero massimo di righe deve essere <= %(max_value)d" msgid "max_count cannot be greater than 1 if an fixed_ip is specified." msgstr "max_count non può essere maggiore di 1 se è specificato un fixed_ip." 
msgid "min_count must be <= max_count" msgstr "min_count deve essere <= max_count" #, python-format msgid "nbd device %s did not show up" msgstr "unità nbd %s non visualizzata" msgid "nbd unavailable: module not loaded" msgstr "nbd non disponibile: modulo non caricato" msgid "no hosts to remove" msgstr "nessun host da rimuovere" #, python-format msgid "no match found for %s" msgstr "nessuna corrispondenza trovata per %s" #, python-format msgid "no usable parent snapshot for volume %s" msgstr "nessuna istantanea parent utilizzabile per il volume %s" #, python-format msgid "no write permission on storage pool %s" msgstr "nessuna autorizzazione di scrittura nel pool di archiviazione %s" #, python-format msgid "not able to execute ssh command: %s" msgstr "Impossibile eseguire il comando ssh: %s" msgid "old style configuration can use only dictionary or memcached backends" msgstr "" "La configurazione old style può utilizzare solo backend di dizionario o " "memorizzati nella cache" msgid "operation time out" msgstr "timeout operazione" #, python-format msgid "partition %s not found" msgstr "partizione %s non trovata" #, python-format msgid "partition search unsupported with %s" msgstr "ricerca partizione non supportata con %s" msgid "pause not supported for vmwareapi" msgstr "sospensione non supportata per vmwareapi" msgid "printable characters with at least one non space character" msgstr "caratteri stampabili con almeno un carattere diverso dallo spazio " msgid "printable characters. Can not start or end with whitespace." msgstr "caratteri stampabili. Non possono iniziare o terminare con uno spazio." #, python-format msgid "qemu-img failed to execute on %(path)s : %(exp)s" msgstr "Impossibile eseguire qemu-img su %(path)s : %(exp)s" #, python-format msgid "qemu-nbd error: %s" msgstr "errore qemu-nbd: %s" msgid "rbd python libraries not found" msgstr "Impossibile trovare le librerie rbd python" #, python-format msgid "read_deleted can only be one of 'no', 'yes' or 'only', not %r" msgstr "read_deleted può essere solo 'no', 'yes' o 'only', non %r" msgid "serve() can only be called once" msgstr "il servizio() può essere chiamato solo una volta" msgid "service is a mandatory argument for DB based ServiceGroup driver" msgstr "" "il servizio è un argomento obbligatorio per il driver ServiceGroup basato su " "DB" msgid "service is a mandatory argument for Memcached based ServiceGroup driver" msgstr "il servizio è un argomento obbligatorio per il driver basato Memcached" msgid "set_admin_password is not implemented by this driver or guest instance." msgstr "" "set_admin_password non è implementato da questo driver o istanza guest." 
msgid "setup in progress" msgstr "impostazione in corso" #, python-format msgid "snapshot for %s" msgstr "istantanea per %s" msgid "snapshot_id required in create_info" msgstr "snapshot_id obbligatorio in create_info" msgid "token not provided" msgstr "token non fornito" msgid "too many body keys" msgstr "troppe chiavi del corpo" msgid "unpause not supported for vmwareapi" msgstr "annullamento sospensione non supportato per vmwareapi" msgid "version should be an integer" msgstr "la versione deve essere un numero intero" #, python-format msgid "vg %s must be LVM volume group" msgstr "vg %s deve essere il gruppo di volumi LVM" #, python-format msgid "vhostuser_sock_path not present in vif_details for vif %(vif_id)s" msgstr "vhostuser_sock_path non presente in vif_details per vif %(vif_id)s" #, python-format msgid "vif type %s not supported" msgstr "Tipo vif %s non supportato" msgid "vif_type parameter must be present for this vif_driver implementation" msgstr "" "il parametro vif_type deve essere presente per questa implementazione di " "vif_driver" #, python-format msgid "volume %s already attached" msgstr "volume %s già collegato" #, python-format msgid "" "volume '%(vol)s' status must be 'in-use'. Currently in '%(status)s' status" msgstr "" "Lo stato del volume '%(vol)s' deve essere 'in-use'. Attualmente lo stato è " "'%(status)s'" #, python-format msgid "xenapi.fake does not have an implementation for %s" msgstr "xenapi.fake non dispone di un'implementazione per %s" #, python-format msgid "" "xenapi.fake does not have an implementation for %s or it has been called " "with the wrong number of arguments" msgstr "" "xenapi.fake non dispone di un'implementazione per %s o è stato chiamato con " "il numero errato di argomenti" ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0664723 nova-21.2.4/nova/locale/ja/0000775000175000017500000000000000000000000015362 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3504698 nova-21.2.4/nova/locale/ja/LC_MESSAGES/0000775000175000017500000000000000000000000017147 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/locale/ja/LC_MESSAGES/nova.po0000664000175000017500000037171000000000000020463 0ustar00zuulzuul00000000000000# Translations template for nova. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the nova project. # # Translators: # FIRST AUTHOR , 2011 # Sasuke(Kyohei MORIYAMA) <>, 2015 # *pokotan-in-the-sky* <>, 2012 # Tom Fifield , 2013 # Tomoyuki KATO , 2013 # Andreas Jaeger , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: nova VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2020-04-27 16:23+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-04-12 06:06+0000\n" "Last-Translator: Copied by Zanata \n" "Language: ja\n" "Plural-Forms: nplurals=1; plural=0;\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 4.3.3\n" "Language-Team: Japanese\n" #, python-format msgid "%(address)s is not a valid IP v4/6 address." 
msgstr "%(address)s が有効な IP v4/6 アドレスではありません。" #, python-format msgid "" "%(binary)s attempted direct database access which is not allowed by policy" msgstr "" "%(binary)s が直接データベースへアクセスしようとしましたが、これはポリシーで許" "可されていません" #, python-format msgid "%(cidr)s is not a valid IP network." msgstr "%(cidr)s や有効な IP ネットワークではありません。" #, python-format msgid "%(field)s should not be part of the updates." msgstr "%(field)s を更新に含めることはできません。" #, python-format msgid "%(memsize)d MB of memory assigned, but expected %(memtotal)d MB" msgstr "" "%(memsize)d MB のメモリーが割り当てられていますが、%(memtotal)d MB が期待され" "ていました" #, python-format msgid "%(path)s is not on local storage: %(reason)s" msgstr "%(path)s はローカルストレージ上にありません: %(reason)s" #, python-format msgid "%(path)s is not on shared storage: %(reason)s" msgstr "%(path)s は共有ストレージ上にありません: %(reason)s" #, python-format msgid "%(total)i rows matched query %(meth)s, %(done)i migrated" msgstr "%(total)i 行がクエリー %(meth)s に合致し、%(done)i が移行しました" #, python-format msgid "%(type)s hypervisor does not support PCI devices" msgstr "%(type)s ハイパーバイザーは PCI デバイスをサポートしていません" #, python-format msgid "%(worker_name)s value of %(workers)s is invalid, must be greater than 0" msgstr "" "%(workers)s の %(worker_name)s 値が無効です。0 より大きい値にしなければなりま" "せん" #, python-format msgid "%s does not support disk hotplug." msgstr "%s ではディスクのホットプラグはサポートされていません。" #, python-format msgid "%s format is not supported" msgstr "%s 形式はサポートされていません" #, python-format msgid "%s is not supported." msgstr "%s はサポートされていません。" #, python-format msgid "%s must be either 'MANUAL' or 'AUTO'." msgstr "%s は 'MANUAL' または 'AUTO' のいずれかでなければいけません。" #, python-format msgid "'%(other)s' should be an instance of '%(cls)s'" msgstr "'%(other)s' は'%(cls)s' のインスタンスである必要があります" msgid "'qemu-img info' parsing failed." msgstr "'qemu-img info' の解析に失敗しました。" #, python-format msgid "'rxtx_factor' argument must be a float between 0 and %g" msgstr "" "'rxtx_factor' 引数は 0 から %g の範囲内の浮動小数点数でなければなりません" #, python-format msgid "A NetworkModel is required in field %s" msgstr "フィールド %s に NetworkModel が必要です" #, python-format msgid "" "API Version String %(version)s is of invalid format. Must be of format " "MajorNum.MinorNum." msgstr "" "API バージョン文字列 %(version)s の形式が無効です。形式は MajorNum.MinorNum " "でなければなりません。" #, python-format msgid "API version %(version)s is not supported on this method." msgstr "API バージョン %(version)s はこのメソッドではサポートされていません。" msgid "Access list not available for public flavors." msgstr "パブリックフレーバーではアクセスリストを使用できません。" #, python-format msgid "Action %s not found" msgstr "アクション %s が見つかりません" #, python-format msgid "" "Action for request_id %(request_id)s on instance %(instance_uuid)s not found" msgstr "" "インスタンス %(instance_uuid)s に対する request_id %(request_id)s のアクショ" "ンが見つかりません" #, python-format msgid "Action: '%(action)s', calling method: %(meth)s, body: %(body)s" msgstr "アクション: '%(action)s'、呼び出しメソッド: %(meth)s、本文: %(body)s" #, python-format msgid "Add metadata failed for aggregate %(id)s after %(retries)s retries" msgstr "" "アグリゲート %(id)s にメタデータを追加しようと %(retries)s 回再試行しました" "が、追加できませんでした" msgid "Affinity instance group policy was violated." msgstr "Affinity インスタンスグループポリシーに違反しました" #, python-format msgid "Agent does not support the call: %(method)s" msgstr "エージェントは、呼び出し %(method)s をサポートしていません" #, python-format msgid "" "Agent-build with hypervisor %(hypervisor)s os %(os)s architecture " "%(architecture)s exists." 
msgstr "" "ハイパーバイザー %(hypervisor)s の OS %(os)s アーキテクチャー " "%(architecture)s のエージェントビルドが存在します。" #, python-format msgid "Aggregate %(aggregate_id)s already has host %(host)s." msgstr "アグリゲート %(aggregate_id)s には既にホスト %(host)s があります。" #, python-format msgid "Aggregate %(aggregate_id)s could not be found." msgstr "アグリゲート %(aggregate_id)s が見つかりませんでした。" #, python-format msgid "Aggregate %(aggregate_id)s has no host %(host)s." msgstr "アグリゲート %(aggregate_id)s にはホスト %(host)s がありません。" #, python-format msgid "Aggregate %(aggregate_id)s has no metadata with key %(metadata_key)s." msgstr "" "アグリゲート %(aggregate_id)s にはキー %(metadata_key)s を持つメタデータはあ" "りません。" #, python-format msgid "" "Aggregate %(aggregate_id)s: action '%(action)s' caused an error: %(reason)s." msgstr "" "アグリゲート %(aggregate_id)s: アクション '%(action)s' でエラーが発生しまし" "た: %(reason)s。" #, python-format msgid "Aggregate %(aggregate_name)s already exists." msgstr "アグリゲート %(aggregate_name)s は既に存在します。" #, python-format msgid "Aggregate %s does not support empty named availability zone" msgstr "アグリゲート %s は空の名前のアベイラビリティーゾーンをサポートしません" #, fuzzy, python-format msgid "Aggregate for host %(host)s count not be found." msgstr "ホスト %(host)s カウントの総計が見つかりません。" #, python-format msgid "An invalid 'name' value was provided. The name must be: %(reason)s" msgstr "" "無効な 'name' の値が提供されました。名前は以下である必要があります: " "%(reason)s" msgid "An unknown error has occurred. Please try your request again." msgstr "未知のエラーが発生しました。再度リクエストを実行してください。" msgid "An unknown exception occurred." msgstr "不明な例外が発生しました。" msgid "Anti-affinity instance group policy was violated." msgstr "anti-affinity インスタンスグループポリシーに違反しました。" #, python-format msgid "Architecture name '%(arch)s' is not recognised" msgstr "アーキテクチャー名 '%(arch)s' は認識できません" #, python-format msgid "Architecture name '%s' is not valid" msgstr "アーキテクチャー名 '%s' が有効ではありません" #, python-format msgid "" "Attempt to consume PCI device %(compute_node_id)s:%(address)s from empty pool" msgstr "" "空のプールから PCI デバイス %(compute_node_id)s:%(address)s を取り込んでみて" "ください" msgid "Attempted overwrite of an existing value." msgstr "既存の値を上書きしようとしました。" #, python-format msgid "Attribute not supported: %(attr)s" msgstr "この属性はサポートされていません: %(attr)s" #, python-format msgid "Bad network format: missing %s" msgstr "ネットワークの形式が正しくありません。%s がありません" msgid "Bad networks format" msgstr "ネットワークの形式が正しくありません" #, python-format msgid "Bad networks format: network uuid is not in proper format (%s)" msgstr "" "ネットワークの形式が正しくありません。ネットワーク UUID が適切な形式になって" "いません (%s)" #, python-format msgid "Bad prefix for network in cidr %s" msgstr "CIDR %s 内のネットワークでは無効なプレフィックス" #, python-format msgid "" "Binding failed for port %(port_id)s, please check neutron logs for more " "information." msgstr "" "ポート %(port_id)s でバインドが失敗しました。詳細情報については neutron のロ" "グを確認してください。" msgid "Blank components" msgstr "空白コンポーネント" msgid "" "Blank volumes (source: 'blank', dest: 'volume') need to have non-zero size" msgstr "" "ブランクのボリューム (ソース: 'blank'、宛先: 'volume') にはゼロでないサイズを" "設定する必要があります" #, python-format msgid "Block Device %(id)s is not bootable." msgstr "ブロックデバイス %(id)s がブート可能ではありません。" #, python-format msgid "" "Block Device Mapping %(volume_id)s is a multi-attach volume and is not valid " "for this operation." msgstr "" "ブロックデバイスマッピング %(volume_id)s は複数の接続を持つボリュームであり、" "この処理には有効ではありません。" msgid "Block Device Mapping cannot be converted to legacy format. " msgstr "ブロックデバイスマッピングを以前の形式に変換することはできません。" msgid "Block Device Mapping is Invalid." 
msgstr "ブロックデバイスマッピングが無効です。" #, python-format msgid "Block Device Mapping is Invalid: %(details)s" msgstr "ブロックデバイスマッピングが無効です: %(details)s" msgid "" "Block Device Mapping is Invalid: Boot sequence for the instance and image/" "block device mapping combination is not valid." msgstr "" "ブロックデバイスマッピングが無効です。指定されたインスタンスとイメージ/ブロッ" "クデバイスマッピングの組み合わせでのブートシーケンスは無効です。" msgid "" "Block Device Mapping is Invalid: You specified more local devices than the " "limit allows" msgstr "" "ブロックデバイスマッピングが無効です。制限で許可されているよりも多くのローカ" "ルデバイスが指定されました。" #, python-format msgid "Block Device Mapping is Invalid: failed to get image %(id)s." msgstr "" "ブロックデバイスマッピングが無効です。イメージ %(id)s の取得に失敗しました。" #, python-format msgid "Block Device Mapping is Invalid: failed to get snapshot %(id)s." msgstr "" "ブロックデバイスマッピングが無効です。スナップショット %(id)s の取得に失敗し" "ました。" #, python-format msgid "Block Device Mapping is Invalid: failed to get volume %(id)s." msgstr "" "ブロックデバイスマッピングが無効です。ボリューム %(id)s の取得に失敗しまし" "た。" msgid "Block migration can not be used with shared storage." msgstr "" "ブロックマイグレーションを使用するときに、共有ストレージを使用することはでき" "ません。" msgid "Boot index is invalid." msgstr "boot インデックスが無効です。" #, python-format msgid "Build of instance %(instance_uuid)s aborted: %(reason)s" msgstr "インスタンス %(instance_uuid)s の作成は打ち切られました: %(reason)s" #, python-format msgid "Build of instance %(instance_uuid)s was re-scheduled: %(reason)s" msgstr "" "インスタンス %(instance_uuid)s の作成は再スケジュールされました: %(reason)s" #, python-format msgid "BuildRequest not found for instance %(uuid)s" msgstr "インスタンス %(uuid)s に関する BuildRequest が見つかりません" msgid "CPU and memory allocation must be provided for all NUMA nodes" msgstr "" "CPU とメモリーの割り当ては、すべての NUMA ノードに指定しなければなりません" #, python-format msgid "" "CPU doesn't have compatibility.\n" "\n" "%(ret)s\n" "\n" "Refer to %(u)s" msgstr "" "CPU に互換性がありません。\n" "\n" "%(ret)s\n" "\n" "%(u)s を参照" #, python-format msgid "CPU number %(cpunum)d is assigned to two nodes" msgstr "CPU 番号 %(cpunum)d は 2 つのノードに割り当てられています" #, python-format msgid "CPU number %(cpunum)d is larger than max %(cpumax)d" msgstr "CPU 番号 %(cpunum)d は最大値 %(cpumax)d を超えています" #, python-format msgid "CPU number %(cpuset)s is not assigned to any node" msgstr "CPU 番号 %(cpuset)s はノードに割り当てられていません" msgid "Can not add access to a public flavor." msgstr "パブリックフレーバーにアクセスを追加できません" msgid "Can not find requested image" msgstr "要求されたイメージが見つかりません" #, python-format msgid "Can not handle authentication request for %d credentials" msgstr "%d 認証情報に関する認証要求を処理できません" msgid "Can't resize a disk to 0 GB." msgstr "ディスクのサイズを 0 GB に変更することはできません。" msgid "Can't resize down ephemeral disks." msgstr "一時ディスクのサイズを減らすことはできません。" msgid "Can't retrieve root device path from instance libvirt configuration" msgstr "インスタンスの libvirt 設定からルートデバイスのパスを取得できません" #, python-format msgid "" "Cannot '%(action)s' instance %(server_id)s while it is in %(attr)s %(state)s" msgstr "" "インスタンス %(server_id)s が %(attr)s %(state)s にある間は '%(action)s' を行" "うことはできません" #, python-format msgid "" "Cannot access \"%(instances_path)s\", make sure the path exists and that you " "have the proper permissions. In particular Nova-Compute must not be executed " "with the builtin SYSTEM account or other accounts unable to authenticate on " "a remote host." msgstr "" "\"%(instances_path)s\" にアクセスできません。パスが存在し、適切な許可を持って" "いることを確認してください。特に、Nova-Compute は、リモートホスト上で認証でき" "ない組み込みのシステムアカウント等のアカウントで実行してはなりません。" #, python-format msgid "Cannot add host to aggregate %(aggregate_id)s. Reason: %(reason)s." 
msgstr "" "ホストをアグリゲート %(aggregate_id)s に追加できません。理由: %(reason)s。" msgid "Cannot attach one or more volumes to multiple instances" msgstr "1 つのボリュームを複数のインスタンスに接続できません" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "" "親のない %(objtype)s オブジェクトで %(method)s を呼び出すことはできません" #, python-format msgid "" "Cannot determine the parent storage pool for %s; cannot determine where to " "store images" msgstr "" "%s の親のストレージプールを検出できません。イメージを保存する場所を決定できま" "せん。" msgid "Cannot find SR of content-type ISO" msgstr "コンテンツタイプが ISO の SR が見つかりません" msgid "Cannot find SR to read/write VDI." msgstr "VDI の読み取り/書き込み用の SR が見つかりません。" msgid "Cannot find image for rebuild" msgstr "再作成用のイメージが見つかりません" #, python-format msgid "Cannot remove host %(host)s in aggregate %(id)s" msgstr "ホスト %(host)s をアグリゲート %(id)s から削除できません" #, python-format msgid "Cannot remove host from aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "ホストをアグリゲート %(aggregate_id)s から削除できません。理由: %(reason)s。" msgid "Cannot rescue a volume-backed instance" msgstr "ボリュームを使ったインスタンスはレスキューできません" #, python-format msgid "" "Cannot resize the root disk to a smaller size. Current size: " "%(curr_root_gb)s GB. Requested size: %(new_root_gb)s GB." msgstr "" "ルートディスクのサイズを小さくすることはできません。現在のサイズ: " "%(curr_root_gb)s GB。要求されたサイズ: %(new_root_gb)s GB。" msgid "" "Cannot set cpu thread pinning policy in a non dedicated cpu pinning policy" msgstr "" "専用でない CPU の固定ポリシーではCPU スレッドの固定ポリシーを設定できません" msgid "Cannot set realtime policy in a non dedicated cpu pinning policy" msgstr "" "専用でない CPU の固定ポリシーではリアルタイムのポリシーを設定できません" #, python-format msgid "Cannot update aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "アグリゲート %(aggregate_id)s を更新できません。理由: %(reason)s。" #, python-format msgid "" "Cannot update metadata of aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "アグリゲート %(aggregate_id)s のメタデータを更新できません。理由: " "%(reason)s。" #, python-format msgid "Cell %(uuid)s has no mapping." msgstr "セル %(uuid)s にはマッピングがありません。" #, python-format msgid "" "Change would make usage less than 0 for the following resources: %(unders)s" msgstr "変更によって、リソース %(unders)s の使用量が 0 未満になります" #, python-format msgid "Class %(class_name)s could not be found: %(exception)s" msgstr "クラス %(class_name)s が見つかりませんでした: %(exception)s" #, python-format msgid "" "Command Not supported. Please use Ironic command %(cmd)s to perform this " "action." msgstr "" "コマンドはサポートされていません。このアクションを実行するには、Ironic コマン" "ド %(cmd)s を使用してください。" #, python-format msgid "Compute host %(host)s could not be found." msgstr "コンピュートホスト %(host)s が見つかりませんでした。" #, python-format msgid "Compute host %s not found." msgstr "コンピュートホスト %s が見つかりません。" #, python-format msgid "Compute service of %(host)s is still in use." msgstr "%(host)s のコンピュートサービスが依然として使用されています。" #, python-format msgid "Compute service of %(host)s is unavailable at this time." msgstr "この時点では %(host)s のコンピュートサービスを使用できません。" #, python-format msgid "Config drive format '%(format)s' is not supported." msgstr "コンフィグドライブ形式 %(format)s がサポートされません。" #, python-format msgid "" "Config requested an explicit CPU model, but the current libvirt hypervisor " "'%s' does not support selecting CPU models" msgstr "" "設定で、明示的な CPU モデルが要求されましたが、現在の libvirt ハイパーバイ" "ザー '%s' は CPU モデルの選択をサポートしていません" #, python-format msgid "" "Conflict updating instance %(instance_uuid)s, but we were unable to " "determine the cause" msgstr "" "インスタンス %(instance_uuid)s の更新で競合が発生したものの、原因を特定できま" "せんでした。" #, python-format msgid "" "Conflict updating instance %(instance_uuid)s. 
Expected: %(expected)s. " "Actual: %(actual)s" msgstr "" "インスタンス %(instance_uuid)s の更新で競合が発生しました。%(expected)s を期" "待したものの、実際には %(actual)s が得られました" #, python-format msgid "Connection to cinder host failed: %(reason)s" msgstr "cinder ホストへの接続に失敗しました: %(reason)s" #, python-format msgid "Connection to glance host %(server)s failed: %(reason)s" msgstr "glance ホスト %(server)s への接続に失敗しました: %(reason)s" #, python-format msgid "Connection to libvirt lost: %s" msgstr "libvirt との接続が失われました: %s" #, python-format msgid "Connection to the hypervisor is broken on host: %(host)s" msgstr "ホスト %(host)s でハイパーバイザーへの接続がおかしくなっています" #, python-format msgid "" "Console log output could not be retrieved for instance %(instance_id)s. " "Reason: %(reason)s" msgstr "" "インスタンス %(instance_id)s についてコンソールログの出力を取得できませんでし" "た。理由: %(reason)s" msgid "Constraint not met." msgstr "制約が満たされていません。" #, python-format msgid "Converted to raw, but format is now %s" msgstr "raw 形式に変換されましたが、現在の形式は %s です" #, fuzzy, python-format msgid "Could not attach image to loopback: %s" msgstr "イメージをループバック %s にアタッチできません。" #, python-format msgid "Could not fetch image %(image_id)s" msgstr "イメージ %(image_id)s を取り出すことができませんでした" #, python-format msgid "Could not find a handler for %(driver_type)s volume." msgstr "%(driver_type)s ボリュームのハンドラーが見つかりませんでした。" #, python-format msgid "Could not find binary %(binary)s on host %(host)s." msgstr "ホスト %(host)s 上でバイナリー %(binary)s が見つかりませんでした。" #, python-format msgid "Could not find config at %(path)s" msgstr "%(path)s に config が見つかりませんでした" msgid "Could not find the datastore reference(s) which the VM uses." msgstr "VM が使用するデータストア参照が見つかりませんでした。" #, python-format msgid "Could not load line %(line)s, got error %(error)s" msgstr "" "行 %(line)s をロードできませんでした。エラー %(error)s を受け取りました" #, python-format msgid "Could not load paste app '%(name)s' from %(path)s" msgstr "" "paste アプリケーション '%(name)s' を %(path)s からロードできませんでした" #, python-format msgid "" "Could not mount vfat config drive. %(operation)s failed. Error: %(error)s" msgstr "" "vfat コンフィグドライブをマウントできません。%(operation)s が失敗しました。エ" "ラー: %(error)s" #, python-format msgid "Could not upload image %(image_id)s" msgstr "イメージ %(image_id)s をアップロードできませんでした" msgid "Creation of virtual interface with unique mac address failed" msgstr "一意な MAC アドレスを持つ仮想インターフェースを作成できませんでした" #, python-format msgid "Datastore regex %s did not match any datastores" msgstr "データストア regex %s がどのデータストアとも一致しませんでした" msgid "Datetime is in invalid format" msgstr "日時が無効な形式です" msgid "Default PBM policy is required if PBM is enabled." msgstr "PBM が有効になっている場合、デフォルト PBM ポリシーは必須です。" #, python-format msgid "Deleted %(records)d records from table '%(table_name)s'." msgstr "レコード %(records)d がテーブル '%(table_name)s' から削除されました。" #, python-format msgid "Device '%(device)s' not found." msgstr "デバイス '%(device)s' が見つかりません。" #, python-format msgid "" "Device id %(id)s specified is not supported by hypervisor version %(version)s" msgstr "" "指定されたデバイス id %(id)s はハイパーバイザーバージョン %(version)s ではサ" "ポートされていません" msgid "Device name contains spaces." msgstr "デバイス名に空白が含まれています。" msgid "Device name empty or too long." 
msgstr "デバイス名が空か、長すぎます。" #, python-format msgid "Device type mismatch for alias '%s'" msgstr "別名 '%s' のデバイスタイプが一致しません" #, python-format msgid "" "Different types in %(table)s.%(column)s and shadow table: %(c_type)s " "%(shadow_c_type)s" msgstr "" "%(table)s.%(column)s とシャドーテーブル内のタイプが異なります: %(c_type)s " "%(shadow_c_type)s" #, python-format msgid "Disk contains a filesystem we are unable to resize: %s" msgstr "ディスクにサイズ変更できないファイルシステムが含まれています: %s" #, python-format msgid "Disk format %(disk_format)s is not acceptable" msgstr "ディスク形式 %(disk_format)s は受け付けられません" #, python-format msgid "Disk info file is invalid: %(reason)s" msgstr "ディスク情報ファイルが無効です: %(reason)s" msgid "Disk must have only one partition." msgstr "ディスクのパーティションは 1 つのみでなければなりません。" #, python-format msgid "Disk with id: %s not found attached to instance." msgstr "ID が %s のインスタンスに接続されたディスクが見つかりません。" #, python-format msgid "Driver Error: %s" msgstr "ドライバーエラー: %s" #, python-format msgid "Error attempting to run %(method)s" msgstr "%(method)s を実行しようとしてエラーが発生しました" #, python-format msgid "" "Error destroying the instance on node %(node)s. Provision state still " "'%(state)s'." msgstr "" "インスタンスをノード %(node)s で破棄しているときにエラーが発生しました。プロ" "ビジョニング状態は '%(state)s' です。" #, python-format msgid "Error during following call to agent: %(method)s" msgstr "エージェントに対する %(method)s の呼び出し中に、エラーが発生しました" #, python-format msgid "Error during unshelve instance %(instance_id)s: %(reason)s" msgstr "インスタンス %(instance_id)s の復元中のエラー: %(reason)s" #, python-format msgid "" "Error from libvirt while getting domain info for %(instance_name)s: [Error " "Code %(error_code)s] %(ex)s" msgstr "" "%(instance_name)s のドメイン情報を取得している際に、libvirt でエラーが発生し" "ました: [エラーコード %(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while looking up %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "%(instance_name)s の検索中に libvirt でエラーが発生しました: [エラーコード " "%(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while quiescing %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "%(instance_name)s の正常終了中に libvirt でエラーが発生しました: [エラーコー" "ド %(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while set password for username \"%(user)s\": [Error Code " "%(error_code)s] %(ex)s" msgstr "" "ユーザー名 \"%(user)s\" のパスワードの設定中に libvirt でエラーが発生しまし" "た: [エラーコード %(error_code)s] %(ex)s" #, python-format msgid "" "Error mounting %(device)s to %(dir)s in image %(image)s with libguestfs " "(%(e)s)" msgstr "" "libguestfs (%(e)s) を使用してイメージ %(image)s の %(dir)s に %(device)s をマ" "ウントする際にエラーが発生しました" #, python-format msgid "Error mounting %(image)s with libguestfs (%(e)s)" msgstr "" "libguestfs (%(e)s) を使用してイメージ %(image)s をマウントする際にエラーが発" "生しました" #, python-format msgid "Error when creating resource monitor: %(monitor)s" msgstr "リソースモニター %(monitor)s を作成するときにエラーが発生しました" msgid "Error: Agent is disabled" msgstr "エラー: エージェントは無効になっています" #, python-format msgid "Event %(event)s not found for action id %(action_id)s" msgstr "" "アクション ID %(action_id)s に対応するイベント %(event)s が見つかりません" msgid "Event must be an instance of nova.virt.event.Event" msgstr "イベントは nova.virt.event.Event のインスタンスでなければなりません" #, python-format msgid "" "Exceeded max scheduling attempts %(max_attempts)d for instance " "%(instance_uuid)s. 
Last exception: %(exc_reason)s" msgstr "" "インスタンス %(instance_uuid)s に関してスケジューリング可能な最大試行回数 " "%(max_attempts)d を超えました。直近の例外: %(exc_reason)s" #, python-format msgid "" "Exceeded max scheduling retries %(max_retries)d for instance " "%(instance_uuid)s during live migration" msgstr "" "ライブマイグレーション時にインスタンス %(instance_uuid)s の最大スケジューリン" "グ再試行回数 %(max_retries)d を超えました" #, python-format msgid "Exceeded maximum number of retries. %(reason)s" msgstr "再試行の最大回数を超えました。%(reason)s" #, python-format msgid "Expected a uuid but received %(uuid)s." msgstr "UUID が必要ですが、%(uuid)s を受け取りました。" #, python-format msgid "Extra column %(table)s.%(column)s in shadow table" msgstr "シャドーテーブルに余分なカラム %(table)s.%(column)s があります" msgid "Extracting vmdk from OVA failed." msgstr "OVA からの vmdk の取得に失敗しました。" #, python-format msgid "Failed to access port %(port_id)s: %(reason)s" msgstr "ポート %(port_id)s へのアクセスに失敗しました: %(reason)s" #, python-format msgid "" "Failed to add deploy parameters on node %(node)s when provisioning the " "instance %(instance)s" msgstr "" "インスタンス %(instance)s のプロビジョニング中に、ノード %(node)s でデプロイ" "パラメーターを追加できませんでした。" #, python-format msgid "Failed to allocate the network(s) with error %s, not rescheduling." msgstr "" "エラー %s が発生したため、ネットワークを割り当てることができませんでした。再" "スケジュールは行われません。" msgid "Failed to allocate the network(s), not rescheduling." msgstr "" "ネットワークを割り当てることができませんでした。再スケジュールは行われませ" "ん。" #, python-format msgid "Failed to attach network adapter device to %(instance_uuid)s" msgstr "" "ネットワークアダプターデバイスを %(instance_uuid)s に接続できませんでした" #, python-format msgid "Failed to create vif %s" msgstr "vif %s の作成に失敗しました" #, python-format msgid "Failed to deploy instance: %(reason)s" msgstr "インスタンスをデプロイできませんでした: %(reason)s" #, python-format msgid "Failed to detach PCI device %(dev)s: %(reason)s" msgstr "PCI デバイス %(dev)s を切り離すことができませんでした: %(reason)s" #, python-format msgid "Failed to detach network adapter device from %(instance_uuid)s" msgstr "" "ネットワークアダプターデバイスを %(instance_uuid)s から切り離すことができませ" "んでした" #, python-format msgid "Failed to encrypt text: %(reason)s" msgstr "テキストの暗号化に失敗しました: %(reason)s" #, python-format msgid "Failed to launch instances: %(reason)s" msgstr "インスタンスを起動できませんでした: %(reason)s" #, python-format msgid "Failed to map partitions: %s" msgstr "パーティションのマッピングに失敗しました: %s" #, python-format msgid "Failed to mount filesystem: %s" msgstr "ファイルシステム %s のマウントに失敗しました。" msgid "Failed to parse information about a pci device for passthrough" msgstr "パススルー用の PCI デバイスに関する情報の解析に失敗しました" #, python-format msgid "Failed to power off instance: %(reason)s" msgstr "インスタンスの電源オフに失敗しました: %(reason)s" #, python-format msgid "Failed to power on instance: %(reason)s" msgstr "インスタンスの電源オンに失敗しました: %(reason)s" #, python-format msgid "" "Failed to prepare PCI device %(id)s for instance %(instance_uuid)s: " "%(reason)s" msgstr "" "インスタンス %(instance_uuid)s 用に PCI デバイス %(id)s を準備できませんでし" "た: %(reason)s" #, python-format msgid "Failed to provision instance %(inst)s: %(reason)s" msgstr "インスタンス %(inst)s をプロビジョニングできませんでした: %(reason)s" #, python-format msgid "Failed to read or write disk info file: %(reason)s" msgstr "ディスク情報ファイルの読み取りまたは書き込みに失敗しました: %(reason)s" #, python-format msgid "Failed to reboot instance: %(reason)s" msgstr "インスタンスをリブートできませんでした: %(reason)s" #, python-format msgid "Failed to remove volume(s): (%(reason)s)" msgstr "ボリュームの削除に失敗しました: (%(reason)s)" #, python-format msgid "Failed to request Ironic to rebuild instance %(inst)s: %(reason)s" msgstr "" "インスタンス %(inst)s の再構築を Ironic に要求できませんでした: %(reason)s" #, python-format msgid "Failed to resume 
instance: %(reason)s" msgstr "インスタンスの再開に失敗しました: %(reason)s" #, python-format msgid "Failed to run qemu-img info on %(path)s : %(error)s" msgstr "qemu-img info を %(path)s に対して実行できませんでした: %(error)s" #, python-format msgid "Failed to set admin password on %(instance)s because %(reason)s" msgstr "%(instance)s で管理者パスワードの設定に失敗しました。理由: %(reason)s" msgid "Failed to spawn, rolling back" msgstr "起動に失敗しました。ロールバックしています" #, python-format msgid "Failed to suspend instance: %(reason)s" msgstr "インスタンスを休止できませんでした: %(reason)s" #, python-format msgid "Failed to terminate instance: %(reason)s" msgstr "インスタンスを削除できませんでした: %(reason)s" #, python-format msgid "Failed to unplug vif %s" msgstr "vif %s の取り外しに失敗しました" msgid "Failure prepping block device." msgstr "ブロックデバイスを準備できませんでした" #, python-format msgid "File %(file_path)s could not be found." msgstr "ファイル %(file_path)s が見つかりませんでした。" #, python-format msgid "File path %s not valid" msgstr "ファイルパス %s は無効です" #, python-format msgid "Fixed IP %(ip)s is not a valid ip address for network %(network_id)s." msgstr "" "Fixed IP %(ip)s はネットワーク %(network_id)s の有効な IP アドレスではありま" "せん。" #, python-format msgid "Fixed IP %s is already in use." msgstr "Fixed IP %s は既に使用中です。" #, python-format msgid "" "Fixed IP address %(address)s is already in use on instance %(instance_uuid)s." msgstr "" "Fixed IP アドレス %(address)s はインスタンス %(instance_uuid)s で既に使用され" "ています。" #, python-format msgid "Fixed IP not found for address %(address)s." msgstr " アドレス %(address)s に対応する Fixed IP が見つかりません。" #, python-format msgid "Flavor %(flavor_id)s could not be found." msgstr "フレーバー %(flavor_id)s が見つかりませんでした。" #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(extra_specs_key)s." msgstr "" "フレーバー %(flavor_id)s にはキー %(extra_specs_key)s を持つ追加スペックはあ" "りません。" #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(key)s." msgstr "" "フレーバー %(flavor_id)s にはキー %(key)s を持つ追加スペックはありません。" #, python-format msgid "" "Flavor %(id)s extra spec cannot be updated or created after %(retries)d " "retries." msgstr "" "%(retries)d 回の再試行の後では、フレーバー %(id)s の追加仕様の更新と作成を行" "うことはできません。" #, python-format msgid "" "Flavor access already exists for flavor %(flavor_id)s and project " "%(project_id)s combination." msgstr "" "フレーバー %(flavor_id)s とプロジェクト %(project_id)s の組み合わせに対応する" "フレーバーアクセスは既に存在します。" #, python-format msgid "Flavor access not found for %(flavor_id)s / %(project_id)s combination." msgstr "" "%(flavor_id)s / %(project_id)s の組み合わせに対応するフレーバーアクセスが見つ" "かりません。" msgid "Flavor used by the instance could not be found." msgstr "インスタンスで使用されたフレーバーが見つかりませんでした" #, python-format msgid "Flavor with ID %(flavor_id)s already exists." msgstr "ID %(flavor_id)s を持つフレーバーは既に存在します。" #, python-format msgid "Flavor with name %(flavor_name)s could not be found." msgstr "名前が %(flavor_name)s のフレーバーが見つかりませんでした。" #, python-format msgid "Flavor with name %(name)s already exists." msgstr "名前が %(name)s のフレーバーは既に存在します。" #, python-format msgid "" "Flavor's disk is smaller than the minimum size specified in image metadata. " "Flavor disk is %(flavor_size)i bytes, minimum size is %(image_min_disk)i " "bytes." msgstr "" "フレーバーのディスクサイズがイメージのメタデータで指定された最小サイズよりも" "小さくなっています。フレーバーディスクは %(flavor_size)i バイト、最小サイズ" "は %(image_min_disk)i バイトです。" #, python-format msgid "" "Flavor's disk is too small for requested image. Flavor disk is " "%(flavor_size)i bytes, image is %(image_size)i bytes." 
msgstr "" "フレーバーのディスクサイズがリクエストされたイメージに対し小さすぎます。フ" "レーバーディスクは %(flavor_size)i バイト、イメージは %(image_size)i バイトで" "す。" msgid "Flavor's memory is too small for requested image." msgstr "フレーバーのメモリーは要求されたイメージに対して小さすぎます。" #, python-format msgid "Floating IP %(address)s association has failed." msgstr "Floating IP %(address)s の割り当てに失敗しました。" #, python-format msgid "Floating IP %(address)s is associated." msgstr "Floating IP %(address)s が割り当てられています。" #, python-format msgid "Floating IP %(address)s is not associated with instance %(id)s." msgstr "" "Floating IP %(address)s はインスタンス %(id)s に割り当てられていません。" #, python-format msgid "Floating IP not found for ID %(id)s." msgstr "ID %(id)s の Floating IP が見つかりません。" #, python-format msgid "Floating IP not found for ID %s" msgstr "ID %s の Floating IP が見つかりません" #, python-format msgid "Floating IP not found for address %(address)s." msgstr "アドレス %(address)s に対応する Floating IP が見つかりません。" msgid "Floating IP pool not found." msgstr "Floating IP プールが見つかりません。" msgid "" "Forbidden to exceed flavor value of number of serial ports passed in image " "meta." msgstr "" "イメージメタデータで渡されるシリアルポート数のフレーバー値を超えないようにし" "てください。" msgid "Found no disk to snapshot." msgstr "スナップショットの作成対象のディスクが見つかりません" #, python-format msgid "Found no network for bridge %s" msgstr "ブリッジ %s に対するネットワークが存在しません。" #, python-format msgid "Found non-unique network for bridge %s" msgstr "ブリッジ %s について一意でないネットワークが見つかりました" #, python-format msgid "Found non-unique network for name_label %s" msgstr "name_label %s について一意でないネットワークが見つかりました" msgid "Guest does not have a console available." msgstr "ゲストはコンソールを使用することはできません。" #, python-format msgid "Host %(host)s could not be found." msgstr "ホスト %(host)s が見つかりませんでした。" #, python-format msgid "Host %(host)s is already mapped to cell %(uuid)s" msgstr "ホスト %(host)s が既にセル %(uuid)s にマッピングされています" #, python-format msgid "Host '%(name)s' is not mapped to any cell" msgstr "ホスト '%(name)s' がどのセルにもマッピングされません" msgid "Host PowerOn is not supported by the Hyper-V driver" msgstr "ホスト電源オンは Hyper-V ドライバーではサポートされていません" msgid "Host aggregate is not empty" msgstr "ホストアグリゲートが空ではありません" msgid "Host does not support guests with NUMA topology set" msgstr "ホストが NUMA トポロジーが設定されたゲストをサポートしていません" msgid "Host does not support guests with custom memory page sizes" msgstr "" "ホストがカスタムのメモリーページサイズが指定されたゲストをサポートしていませ" "ん" msgid "Host startup on XenServer is not supported." msgstr "XenServer 上でのホストの起動はサポートされていません。" msgid "Hypervisor driver does not support post_live_migration_at_source method" msgstr "" "ハイパーバイザードライバーが post_live_migration_at_source メソッドをサポート" "していません" #, python-format msgid "Hypervisor virt type '%s' is not valid" msgstr "ハイバーバイザーの仮想化タイプ '%s' が有効ではありません" #, python-format msgid "Hypervisor virtualization type '%(hv_type)s' is not recognised" msgstr "ハイパーバイザー仮想化タイプ '%(hv_type)s' は認識されていません" #, python-format msgid "Hypervisor with ID '%s' could not be found." msgstr "ID '%s' のハイパーバイザーが見つかりませんでした。" #, python-format msgid "IP allocation over quota in pool %s." msgstr "IP の割り当て量がプール %s 内のクォータを超えています。" msgid "IP allocation over quota." msgstr "IP の割り当て量がクォータを超えています。" #, python-format msgid "Image %(image_id)s could not be found." msgstr "イメージ %(image_id)s が見つかりませんでした。" #, python-format msgid "Image %(image_id)s is not active." 
msgstr "イメージ %(image_id)s はアクティブではありません。" #, python-format msgid "Image %(image_id)s is unacceptable: %(reason)s" msgstr "イメージ %(image_id)s は受け付けられません: %(reason)s" msgid "Image disk size greater than requested disk size" msgstr "イメージディスクが、要求されたディスクサイズよりも大きなサイズです" msgid "Image is not raw format" msgstr "イメージは raw 形式ではありません" msgid "Image metadata limit exceeded" msgstr "イメージメタデータ数の上限を超えました" #, python-format msgid "Image model '%(image)s' is not supported" msgstr "イメージモデル '%(image)s' はサポートされません" msgid "Image not found." msgstr "イメージが見つかりません。" #, python-format msgid "" "Image property '%(name)s' is not permitted to override NUMA configuration " "set against the flavor" msgstr "" "イメージプロパティー '%(name)s' で、フレーバーに対して設定された NUMA 構成を" "オーバーライドすることは許可されません" msgid "" "Image property 'hw_cpu_policy' is not permitted to override CPU pinning " "policy set against the flavor" msgstr "" "イメージプロパティー 'hw_cpu_policy' はこのフレーバーでは設定されたCPU コア固" "定ポリシーのオーバーライドを許可されていません" msgid "" "Image property 'hw_cpu_thread_policy' is not permitted to override CPU " "thread pinning policy set against the flavor" msgstr "" "イメージプロパティー 'hw_cpu_thread_policy' は、フレーバーに設定された CPU ス" "レッドの固定ポリシーを上書きすることはできません。" msgid "Image that the instance was started with could not be found." msgstr "インスタンスの起動時に使用されたイメージが見つかりませんでした。" #, python-format msgid "Image's config drive option '%(config_drive)s' is invalid" msgstr "イメージのコンフィグドライブのオプション '%(config_drive)s' は無効です" msgid "" "Images with destination_type 'volume' need to have a non-zero size specified" msgstr "" "destination_type 'volume' を含むイメージには、ゼロ以外のサイズが指定されてい" "る必要があります" msgid "In ERROR state" msgstr "エラー状態です" #, python-format msgid "In states %(vm_state)s/%(task_state)s, not RESIZED/None" msgstr "状態は %(vm_state)s/%(task_state)s です。RESIZED/None ではありません" #, python-format msgid "In-progress live migration %(id)s is not found for server %(uuid)s." msgstr "" "サーバー %(uuid)s で進行中のライブマイグレーション %(id)s が見つかりません。" msgid "" "Incompatible settings: ephemeral storage encryption is supported only for " "LVM images." msgstr "" "設定に互換性がありません: 一時ストレージ暗号化は LVM イメージでのみサポート" "されています。" #, python-format msgid "Info cache for instance %(instance_uuid)s could not be found." msgstr "" "インスタンス %(instance_uuid)s 用の情報キャッシュが見つかりませんでした。" #, python-format msgid "" "Instance %(instance)s and volume %(vol)s are not in the same " "availability_zone. Instance is in %(ins_zone)s. Volume is in %(vol_zone)s" msgstr "" "インスタンス %(instance)s とボリューム %(vol)s は同じアベイラビリティーゾーン" "にありません。インスタンスは %(ins_zone)s に、ボリュームは %(vol_zone)s にあ" "ります" #, python-format msgid "Instance %(instance)s does not have a port with id %(port)s" msgstr "インスタンス %(instance)s に ID %(port)s のポートがありません" #, python-format msgid "Instance %(instance_id)s cannot be rescued: %(reason)s" msgstr "インスタンス %(instance_id)s をレスキューできません: %(reason)s" #, python-format msgid "Instance %(instance_id)s could not be found." msgstr "インスタンス %(instance_id)s が見つかりませんでした。" #, python-format msgid "Instance %(instance_id)s has no tag '%(tag)s'" msgstr "インスタンス %(instance_id)s にタグ \"%(tag)s\" がありません" #, python-format msgid "Instance %(instance_id)s is not in rescue mode" msgstr "インスタンス %(instance_id)s はレスキューモードではありません。" #, python-format msgid "Instance %(instance_id)s is not ready" msgstr "インスタンス %(instance_id)s の準備ができていません" #, python-format msgid "Instance %(instance_id)s is not running." 
msgstr "インスタンス %(instance_id)s は実行されていません。" #, python-format msgid "Instance %(instance_id)s is unacceptable: %(reason)s" msgstr "インスタンス %(instance_id)s は受け付けられません: %(reason)s" #, python-format msgid "Instance %(instance_uuid)s does not specify a NUMA topology" msgstr "インスタンス %(instance_uuid)s で NUMA トポロジーが指定されていません" #, python-format msgid "Instance %(instance_uuid)s does not specify a migration context." msgstr "" "インスタンス %(instance_uuid)s がマイグレーションのコンテキストを設定していま" "せん。" #, python-format msgid "" "Instance %(instance_uuid)s in %(attr)s %(state)s. Cannot %(method)s while " "the instance is in this state." msgstr "" "インスタンス %(instance_uuid)s は %(attr)s %(state)s 状態です。インスタンスが" "この状態にある間は %(method)s を行えません。" #, python-format msgid "Instance %(instance_uuid)s is locked" msgstr "インスタンス %(instance_uuid)s はロックされています" #, python-format msgid "" "Instance %(instance_uuid)s requires config drive, but it does not exist." msgstr "" "インスタンス %(instance_uuid)s にはコンフィグドライブが必要ですが、存在しませ" "ん。" #, python-format msgid "Instance %(name)s already exists." msgstr "インスタンス %(name)s は既に存在します。" #, python-format msgid "Instance %(server_id)s is in an invalid state for '%(action)s'" msgstr "" "インスタンス %(server_id)s は '%(action)s' が実行できない状態にあります" #, python-format msgid "Instance %(uuid)s has no mapping to a cell." msgstr "インスタンス %(uuid)s にはセルに対するマッピングがありません。" #, python-format msgid "Instance %s not found" msgstr "インスタンス %s が見つかりません" #, python-format msgid "Instance %s provisioning was aborted" msgstr "インスタンス %s のプロビジョニングが中止しました。" msgid "Instance could not be found" msgstr "インスタンスが見つかりませんでした" msgid "Instance disk to be encrypted but no context provided" msgstr "" "インスタンスディスクの暗号化が必要ですが、コンテキストが指定されていません。" msgid "Instance event failed" msgstr "インスタンスイベントが失敗しました" #, python-format msgid "Instance group %(group_uuid)s already exists." msgstr "インスタンスグループ %(group_uuid)s は既に存在します。" #, python-format msgid "Instance group %(group_uuid)s could not be found." msgstr "インスタンスグループ %(group_uuid)s が見つかりませんでした。" #, fuzzy msgid "Instance has no source host" msgstr "インスタンスにソースホストがありません" msgid "Instance has not been resized." msgstr "インスタンスのサイズ変更が行われていません" #, python-format msgid "Instance hostname %(hostname)s is not a valid DNS name" msgstr "インスタンスのホスト名 %(hostname)s は有効な DNS 名ではありません。" #, python-format msgid "Instance is already in Rescue Mode: %s" msgstr "インスタンスは既にレスキューモードです: %s" msgid "Instance is not a member of specified network" msgstr "インスタンスは指定されたネットワークのメンバーではありません" #, python-format msgid "Instance rollback performed due to: %s" msgstr "インスタンスのロールバックが実行されました。原因: %s" #, python-format msgid "" "Insufficient Space on Volume Group %(vg)s. Only %(free_space)db available, " "but %(size)d bytes required by volume %(lv)s." msgstr "" "ボリュームグループ %(vg)s に十分なスペースがありません。使用可能なのは " "%(free_space)db のみですが、ボリューム %(lv)s には %(size)d バイト必要です。" #, python-format msgid "Insufficient compute resources: %(reason)s." msgstr "コンピュートリソースが不十分です: %(reason)s。" #, python-format msgid "Insufficient free memory on compute node to start %(uuid)s." msgstr "" "コンピュートノードには %(uuid)s を開始するための十分な空きメモリーがありませ" "ん。" #, python-format msgid "Interface %(interface)s not found." msgstr "インターフェース %(interface)s が見つかりません。" #, python-format msgid "Invalid Base 64 data for file %(path)s" msgstr "ファイル %(path)s の Base64 データが無効です" msgid "Invalid Connection Info" msgstr "無効な接続情報" #, python-format msgid "Invalid ID received %(id)s." msgstr "無効な ID %(id)s を受信しました。" #, python-format msgid "Invalid IP format %s" msgstr "%s は無効な IP 形式です" #, python-format msgid "Invalid IP protocol %(protocol)s." 
msgstr "無効な IP プロトコル %(protocol)s。" msgid "" "Invalid PCI Whitelist: The PCI whitelist can specify devname or address, but " "not both" msgstr "" "無効な PCI ホワイトリスト: PCI ホワイトリストでは devname またはアドレスを指" "定できますが、両方を指定することはできません" #, python-format msgid "Invalid PCI alias definition: %(reason)s" msgstr "PCI エイリアス定義が無効です: %(reason)s" #, python-format msgid "Invalid Regular Expression %s" msgstr "正規表現 %s は無効です" #, python-format msgid "Invalid characters in hostname '%(hostname)s'" msgstr "ホスト名 '%(hostname)s' に無効な文字があります" msgid "Invalid config_drive provided." msgstr "無効な config_drive が指定されました。" #, python-format msgid "Invalid config_drive_format \"%s\"" msgstr "無効な config_drive_format \"%s\"" #, python-format msgid "Invalid console type %(console_type)s" msgstr "無効なコンソールタイプ %(console_type)s" #, python-format msgid "Invalid content type %(content_type)s." msgstr "無効なコンテンツ形式 %(content_type)s。" #, python-format msgid "Invalid datetime string: %(reason)s" msgstr "日時の文字列が無効です: %(reason)s" msgid "Invalid device UUID." msgstr "デバイス UUID が無効です。" #, python-format msgid "Invalid entry: '%s'" msgstr "項目 '%s' は無効です" #, python-format msgid "Invalid entry: '%s'; Expecting dict" msgstr "項目 '%s' は無効です。辞書型が期待されています" #, python-format msgid "Invalid entry: '%s'; Expecting list or dict" msgstr "項目 '%s' は無効です。リストまたは辞書型が期待されています" #, python-format msgid "Invalid exclusion expression %r" msgstr "排他式 %r は無効です" #, python-format msgid "Invalid image format '%(format)s'" msgstr "イメージ形式 '%(format)s' は無効です" #, python-format msgid "Invalid image href %(image_href)s." msgstr "無効なイメージ href %(image_href)s。" #, python-format msgid "Invalid inclusion expression %r" msgstr "包含式 %r は無効です" #, python-format msgid "" "Invalid input for field/attribute %(path)s. Value: %(value)s. %(message)s" msgstr "フィールド/属性 %(path)s の入力が無効です。値: %(value)s。%(message)s" #, python-format msgid "Invalid input received: %(reason)s" msgstr "無効な入力を受信しました: %(reason)s" msgid "Invalid instance image." msgstr "インスタンスイメージが無効です。" #, python-format msgid "Invalid is_public filter [%s]" msgstr "無効な is_public フィルター [%s]" msgid "Invalid key_name provided." msgstr "無効な key_name が指定されました。" #, python-format msgid "Invalid memory page size '%(pagesize)s'" msgstr "メモリーページサイズ \"%(pagesize)s\" が無効です" msgid "Invalid metadata key" msgstr "無効なメタデータキーです" #, python-format msgid "Invalid metadata size: %(reason)s" msgstr "無効なメタデータサイズ: %(reason)s" #, python-format msgid "Invalid metadata: %(reason)s" msgstr "メタデータが無効です: %(reason)s" #, python-format msgid "Invalid minDisk filter [%s]" msgstr "無効な minDisk フィルター [%s]" #, python-format msgid "Invalid minRam filter [%s]" msgstr "無効な minRam フィルター [%s]" #, python-format msgid "Invalid port range %(from_port)s:%(to_port)s. %(msg)s" msgstr "無効なポート範囲 %(from_port)s:%(to_port)s。 %(msg)s" msgid "Invalid proxy request signature." msgstr "無効なプロキシー要求シグニチャー" #, python-format msgid "Invalid range expression %r" msgstr "範囲式 %r は無効です" msgid "Invalid service catalog json." msgstr "無効なサービスカタログ JSON。" msgid "Invalid start time. The start time cannot occur after the end time." 
msgstr "無効な開始時刻。開始時刻を終了時刻より後にすることはできません。" msgid "Invalid state of instance files on shared storage" msgstr "共有ストレージ上のインスタンスファイルの無効な状態" #, python-format msgid "Invalid timestamp for date %s" msgstr "日付 %s のタイムスタンプが無効です" #, python-format msgid "Invalid usage_type: %s" msgstr "usage_type %s は無効です" #, python-format msgid "Invalid value for Config Drive option: %(option)s" msgstr "コンフィグドライブのオプション %(option)s の値が無効です" #, python-format msgid "Invalid virtual interface address %s in request" msgstr "リクエストに無効な仮想インターフェースアドレス %s があります" #, python-format msgid "Invalid volume access mode: %(access_mode)s" msgstr "ボリュームアクセスモードが無効です: %(access_mode)s" #, python-format msgid "Invalid volume: %(reason)s" msgstr "無効なボリューム: %(reason)s" msgid "Invalid volume_size." msgstr "volume_size が無効です。" #, python-format msgid "Ironic node uuid not supplied to driver for instance %s." msgstr "" "Ironic ノード uuid が、インスタンス %s のドライバーに提供されていません。" #, python-format msgid "" "It is not allowed to create an interface on external network %(network_uuid)s" msgstr "" "外部ネットワーク %(network_uuid)s でインターフェースを作成することは許可され" "ていません" #, python-format msgid "" "Kernel/Ramdisk image is too large: %(vdi_size)d bytes, max %(max_size)d bytes" msgstr "" "カーネルイメージ/RAM ディスクイメージが大きすぎます: %(vdi_size)d バイト、最" "大値は %(max_size)d バイト" msgid "" "Key Names can only contain alphanumeric characters, periods, dashes, " "underscores, colons and spaces." msgstr "" "キー名に使用できるのは、英数字、ピリオド、ダッシュ、アンダースコアー、コロ" "ン、および空白のみです。" #, python-format msgid "Key manager error: %(reason)s" msgstr "鍵マネージャーエラー: %(reason)s" #, python-format msgid "Key pair '%(key_name)s' already exists." msgstr "キーペア '%(key_name)s' は既に存在します。" #, python-format msgid "Keypair %(name)s not found for user %(user_id)s" msgstr "ユーザー %(user_id)s のキーペア %(name)s が見つかりません" #, python-format msgid "Keypair data is invalid: %(reason)s" msgstr "キーペアデータが無効です: %(reason)s" msgid "Keypair name contains unsafe characters" msgstr "キーペア名に安全ではない文字が含まれています" msgid "Keypair name must be string and between 1 and 255 characters long" msgstr "キーペア名は 1 から 255 文字の長さの文字列でなければなりません" msgid "Limits only supported from vCenter 6.0 and above" msgstr "上限が適用されるのは、vCenter 6.0 以降の場合のみです。" #, python-format msgid "Live migration %(id)s for server %(uuid)s is not in progress." msgstr "サーバー %(uuid)s のマイグレーション %(id)s は進行中ではありません。" #, python-format msgid "Malformed message body: %(reason)s" msgstr "メッセージ本文の形式に誤りがあります: %(reason)s" #, python-format msgid "" "Malformed request URL: URL's project_id '%(project_id)s' doesn't match " "Context's project_id '%(context_project_id)s'" msgstr "" "誤った形式のリクエスト URL です。URL の project_id '%(project_id)s' がコンテ" "キストの project_id '%(context_project_id)s' と一致しません" msgid "Malformed request body" msgstr "誤った形式のリクエスト本文" msgid "Mapping image to local is not supported." msgstr "ローカルへのイメージマッピングはサポートしていません。" #, python-format msgid "Marker %(marker)s could not be found." 
msgstr "マーカー %(marker)s が見つかりませんでした。" msgid "Maximum number of floating IPs exceeded" msgstr "Floating IP の最大数を超えました" msgid "Maximum number of key pairs exceeded" msgstr "キーペアの最大数を超えました" #, python-format msgid "Maximum number of metadata items exceeds %(allowed)d" msgstr "メタデータ項目の最大数が %(allowed)d を超えています" msgid "Maximum number of ports exceeded" msgstr "最大ポート数を超えました" msgid "Maximum number of security groups or rules exceeded" msgstr "セキュリティーグループまたはルールの最大数を超えました" msgid "Metadata item was not found" msgstr "メタデータ項目が見つかりませんでした" msgid "Metadata property key greater than 255 characters" msgstr "メタデータプロパティーのキーが 255 文字を超えています" msgid "Metadata property value greater than 255 characters" msgstr "メタデータプロパティーの値が 255 文字を超えています" msgid "Metadata type should be dict." msgstr "メタデータタイプは dict でなければなりません。" #, python-format msgid "" "Metric %(name)s could not be found on the compute host node %(host)s." "%(node)s." msgstr "" "コンピュートホストノード %(host)s.%(node)s では、メトリック %(name)s は見つか" "りませんでした。" msgid "Migrate Receive failed" msgstr "マイグレーションの受け取りが失敗しました" msgid "Migrate Send failed" msgstr "マイグレーションの送信が失敗しました" #, python-format msgid "Migration %(id)s for server %(uuid)s is not live-migration." msgstr "" "サーバー %(uuid)s のマイグレーション %(id)s はライブマイグレーションではあり" "ません。" #, python-format msgid "Migration %(migration_id)s could not be found." msgstr "マイグレーション %(migration_id)s が見つかりませんでした。" #, python-format msgid "Migration %(migration_id)s not found for instance %(instance_id)s" msgstr "" "インスタンス %(instance_id)s でマイグレーション %(migration_id)s が見つかりま" "せん" #, python-format msgid "" "Migration %(migration_id)s state of instance %(instance_uuid)s is %(state)s. " "Cannot %(method)s while the migration is in this state." msgstr "" "インスタンス %(instance_uuid)s のマイグレーション %(migration_id)s の状態が " "%(state)s です。マイグレーションがこの状態にある場合、%(method)s を実行できま" "せん。" #, python-format msgid "Migration error: %(reason)s" msgstr "マイグレーションエラー: %(reason)s" msgid "Migration is not supported for LVM backed instances" msgstr "" "マイグレーションは LVM 形式のイメージを使用するインスタンスではサポートされて" "いません" #, python-format msgid "" "Migration not found for instance %(instance_id)s with status %(status)s." msgstr "" "状態が %(status)s のインスタンス %(instance_id)s のマイグレーションが見つかり" "ません。" #, python-format msgid "Migration pre-check error: %(reason)s" msgstr "マイグレーション事前検査エラー: %(reason)s" #, python-format msgid "Migration select destinations error: %(reason)s" msgstr "マイグレーション先の選択エラー: %(reason)s" #, python-format msgid "Missing arguments: %s" msgstr "引数 %s がありません" #, python-format msgid "Missing column %(table)s.%(column)s in shadow table" msgstr "シャドーテーブルにカラム %(table)s.%(column)s がありません" msgid "Missing device UUID." msgstr "デバイス UUID がありません。" msgid "Missing disabled reason field" msgstr "「無効化の理由」フィールドがありません" msgid "Missing forced_down field" msgstr "forced_down フィールドがありません" msgid "Missing imageRef attribute" msgstr "imageRef 属性が指定されていません" #, python-format msgid "Missing keys: %s" msgstr "キーがありません: %s" #, python-format msgid "Missing parameter %s" msgstr "パラメーター %s が欠落しています" msgid "Missing parameter dict" msgstr "パラメーター dict が指定されていません" #, python-format msgid "" "More than one instance is associated with fixed IP address '%(address)s'." msgstr "" "複数のインスタンスが Fixed IP アドレス '%(address)s' に割り当てられています。" msgid "" "More than one possible network found. Specify network ID(s) to select which " "one(s) to connect to." msgstr "" "複数の使用可能なネットワークが見つかりました。接続先のネットワークを選択する" "には、ネットワーク ID を指定してください。" msgid "More than one swap drive requested." 
msgstr "複数のスワップドライブが要求されました。" #, python-format msgid "Multi-boot operating system found in %s" msgstr "%s 内にブート可能なオペレーティングシステムが複数見つかりました" msgid "Multiple X-Instance-ID headers found within request." msgstr "リクエストに複数の X-Instance-ID ヘッダーが検出されました。" msgid "Multiple X-Tenant-ID headers found within request." msgstr "リクエストに複数の X-Tenant-ID ヘッダーが検出されました。" #, python-format msgid "Multiple floating IP pools matches found for name '%s'" msgstr "名前が '%s' の Floating IP プールが複数見つかりました" #, python-format msgid "Multiple floating IPs are found for address %(address)s." msgstr "アドレス %(address)s に対して複数の Floating IP が見つかりました。" msgid "" "Multiple hosts may be managed by the VMWare vCenter driver; therefore we do " "not return uptime for just one host." msgstr "" "複数のホストが VMWare vCenter ドライバーによって管理されている可能性がありま" "す。このため ホスト単位の稼働時間は返しません。" msgid "Multiple possible networks found, use a Network ID to be more specific." msgstr "" "使用可能なネットワークが複数見つかりました。ネットワーク ID を具体的に指定し" "てください。" #, python-format msgid "" "Multiple security groups found matching '%s'. Use an ID to be more specific." msgstr "" "'%s' に一致するセキュリティーグループが複数見つかりました。より具体的な ID を" "使用してください。" msgid "Must input network_id when request IP address" msgstr "IP アドレスを要求するときは、network_id を入力する必要があります" msgid "Must not input both network_id and port_id" msgstr "network_id と port_id の両方を入力しないでください" msgid "" "Must specify connection_url, connection_username (optionally), and " "connection_password to use compute_driver=xenapi.XenAPIDriver" msgstr "" "compute_driver=xenapi.XenAPIDriver を使用するには、connection_url、" "connection_username (オプション)、および connection_password を指定する必要が" "あります" msgid "" "Must specify host_ip, host_username and host_password to use vmwareapi." "VMwareVCDriver" msgstr "" "vmwareapi.VMwareVCDriver を使用するためのホスト IP、ユーザー名、ホストパス" "ワードを指定する必要があります" msgid "Must supply a positive value for max_number" msgstr "max_number には正の値を指定する必要があります" msgid "Must supply a positive value for max_rows" msgstr "max_rows には正の値を指定する必要があります" #, python-format msgid "Network %(network_id)s could not be found." msgstr "ネットワーク %(network_id)s が見つかりませんでした。" #, python-format msgid "" "Network %(network_uuid)s requires a subnet in order to boot instances on." msgstr "" "ネットワーク %(network_uuid)s でインスタンスをブートするには、サブネットが必" "要です。" #, python-format msgid "Network could not be found for bridge %(bridge)s" msgstr "ブリッジ %(bridge)s のネットワークが見つかりませんでした" #, python-format msgid "Network could not be found for instance %(instance_id)s." msgstr "インスタンス %(instance_id)s のネットワークが見つかりませんでした。" msgid "Network not found" msgstr "ネットワークが見つかりません" msgid "" "Network requires port_security_enabled and subnet associated in order to " "apply security groups." msgstr "" "セキュリティーグループを適用するには、ネットワークが port_security_enabled に" "なっていて、サブネットが関連付けられている必要があります。" msgid "New volume must be detached in order to swap." msgstr "スワップを行うには、新規ボリュームを切断する必要があります。" msgid "New volume must be the same size or larger." msgstr "新規ボリュームは同じサイズか、それ以上でなければなりません。" #, python-format msgid "No Block Device Mapping with id %(id)s." msgstr "ID %(id)s を持つブロックデバイスマッピングがありません。" msgid "No Unique Match Found." msgstr "1 つだけ一致するデータが見つかりません。" #, python-format msgid "No agent-build associated with id %(id)s." 
msgstr "ID %(id)s に関連付けられたエージェントビルドはありません。" msgid "No compute host specified" msgstr "コンピュートホストが指定されていません" #, python-format msgid "No configuration information found for operating system %(os_name)s" msgstr "オペレーティングシステム %(os_name)s に関する設定情報が見つかりません" #, python-format msgid "No device with MAC address %s exists on the VM" msgstr "MAC アドレス %s を持つデバイスが VM にありません" #, python-format msgid "No device with interface-id %s exists on VM" msgstr "interface-id %s を持つデバイスが VM にありません" #, python-format msgid "No disk at %(location)s" msgstr "%(location)s にディスクがありません" #, python-format msgid "No fixed IP addresses available for network: %(net)s" msgstr "ネットワーク %(net)s で使用可能な Fixed IP アドレスがありません。" msgid "No fixed IPs associated to instance" msgstr "Fixed IP がインスタンスに割り当てられていません" msgid "No free nbd devices" msgstr "空きの nbd デバイスがありません" msgid "No host available on cluster" msgstr "クラスター上に使用可能なホストがありません" msgid "No hosts found to map to cell, exiting." msgstr "セルにマッピングするホストが見つかりません。処理を終了します。" #, python-format msgid "No hypervisor matching '%s' could be found." msgstr "'%s' と合致するハイパーバイザーが見つかりませんでした。" msgid "No image locations are accessible" msgstr "イメージの場所にアクセスできません" #, python-format msgid "" "No live migration URI configured and no default available for \"%(virt_type)s" "\" hypervisor virtualization type." msgstr "" "ライブマイグレーションの URI が設定されておらず、ハイパーバイザーの仮想化タイ" "プの \"%(virt_type)s\" で使用可能なデフォルトが存在しません。" msgid "No more floating IPs available." msgstr "使用可能な Floating IP はこれ以上ありません。" #, python-format msgid "No more floating IPs in pool %s." msgstr "プール %s 内に Floating IP はこれ以上ありません。" #, python-format msgid "No mount points found in %(root)s of %(image)s" msgstr "%(image)s の %(root)s にマウントポイントが見つかりません" #, python-format msgid "No operating system found in %s" msgstr "%s 内にオペレーティングシステムが見つかりません" #, python-format msgid "No primary VDI found for %s" msgstr "%s のプライマリー VDI が見つかりません" msgid "No root disk defined." msgstr "ルートディスクが定義されていません。" #, python-format msgid "" "No specific network was requested and none are available for project " "'%(project_id)s'." msgstr "" "特定のネットワークが要求されず、プロジェクト '%(project_id)s' で利用可能な" "ネットワークがありません。" msgid "No suitable network for migrate" msgstr "マイグレーションに適切なネットワークがありません" msgid "No valid host found for cold migrate" msgstr "コールドマイグレーションに有効なホストが見つかりません" msgid "No valid host found for resize" msgstr "サイズ変更の対象として有効なホストが見つかりません" #, python-format msgid "No valid host was found. %(reason)s" msgstr "有効なホストが見つかりませんでした。%(reason)s" #, python-format msgid "No volume Block Device Mapping at path: %(path)s" msgstr "ボリュームのブロックデバイスマッピングがパス %(path)s にありません" #, python-format msgid "No volume Block Device Mapping with id %(volume_id)s." msgstr "" "ID %(volume_id)s を持つボリュームのブロックデバイスマッピングがありません。" #, python-format msgid "Node %s could not be found." msgstr "ノード %s が見つかりませんでした。" #, python-format msgid "Not able to acquire a free port for %(host)s" msgstr "%(host)s 用の未使用ポートを取得できません" #, python-format msgid "Not able to bind %(host)s:%(port)d, %(error)s" msgstr "%(host)s:%(port)d をバインドできません。%(error)s" #, python-format msgid "" "Not all Virtual Functions of PF %(compute_node_id)s:%(address)s are free." msgstr "" "PF %(compute_node_id)s の %(address)s のすべての Virtual Function に空きがあ" "るとは限りません。" msgid "Not an rbd snapshot" msgstr "rbd スナップショットではありません" #, python-format msgid "Not authorized for image %(image_id)s." msgstr "イメージ %(image_id)s への権限がありません。" msgid "Not authorized." msgstr "権限がありません。" msgid "Not enough parameters to build a valid rule." 
msgstr "有効なルールを作成するだけの十分なパラメータがありません" msgid "Not implemented on Windows" msgstr "Windows では実装されていません" msgid "Not stored in rbd" msgstr "rbd 内に保管されていません" msgid "Nothing was archived." msgstr "アーカイブは行われませんでした" #, python-format msgid "Nova requires libvirt version %s or greater." msgstr "Nova には libvirt バージョン %s 以降が必要です。" msgid "Number of Rows Archived" msgstr "アーカイブ済みの行数" #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "オブジェクトのアクション %(action)s が失敗しました。原因: %(reason)s" msgid "Old volume is attached to a different instance." msgstr "旧ボリュームは別のインスタンスに接続されています。" #, python-format msgid "One or more hosts already in availability zone(s) %s" msgstr "アベイラビリティーゾーン %s に 1 つ以上のホストが既にあります" msgid "Only administrators may list deleted instances" msgstr "削除済みインスタンスの一覧を取得できるのは管理者のみです。" #, python-format msgid "" "Only file-based SRs (ext/NFS) are supported by this feature. SR %(uuid)s is " "of type %(type)s" msgstr "" "この機能でサポートされるのは、ファイルベースの SR (ext/NFS) のみです。SR " "%(uuid)s のタイプは %(type)s です。" msgid "Origin header does not match this host." msgstr "オリジンヘッダーがこのホストに一致しません。" msgid "Origin header not valid." msgstr "オリジンヘッダーが無効です。" msgid "Origin header protocol does not match this host." msgstr "オリジンヘッダープロトコルがこのホストに一致しません。" #, python-format msgid "PCI Device %(node_id)s:%(address)s not found." msgstr "PCI デバイス %(node_id)s:%(address)s が見つかりません。" #, python-format msgid "PCI alias %(alias)s is not defined" msgstr "PCI エイリアス %(alias)s が定義されていません" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is %(status)s instead of " "%(hopestatus)s" msgstr "" "PCI デバイス %(compute_node_id)s:%(address)s は %(hopestatus)s ではなく " "%(status)s です" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is owned by %(owner)s instead of " "%(hopeowner)s" msgstr "" "PCI デバイス %(compute_node_id)s:%(address)s の所有者は、%(hopeowner)s ではな" "く %(owner)s です" #, python-format msgid "PCI device %(id)s not found" msgstr "PCI デバイス %(id)s が見つかりません" #, python-format msgid "PCI device request %(requests)s failed" msgstr "PCI デバイス要求 %(requests)s が失敗しました" #, python-format msgid "PIF %s does not contain IP address" msgstr "PIF %s に IP アドレスが含まれていません" #, python-format msgid "Page size %(pagesize)s forbidden against '%(against)s'" msgstr "ページサイズ %(pagesize)s は \"%(against)s\" に対して禁止されています" #, python-format msgid "Page size %(pagesize)s is not supported by the host." msgstr "ページサイズ %(pagesize)s はこのホストではサポートされていません。" #, python-format msgid "" "Parameters %(missing_params)s not present in vif_details for vif %(vif_id)s. " "Check your Neutron configuration to validate that the macvtap parameters are " "correct." msgstr "" "vif %(vif_id)s について、パラメーター %(missing_params)s が vif_details 内に" "存在しません。macvtapパラーメーターが正確に設定されているか Neutron の設定を" "確認してください。" #, python-format msgid "Path %s must be LVM logical volume" msgstr "パス %s は LVM 論理ボリュームでなければなりません" msgid "Paused" msgstr "一時停止済み" msgid "Personality file limit exceeded" msgstr "パーソナリティーファイル数の上限を超えました" #, python-format msgid "" "Physical Function %(compute_node_id)s:%(address)s, related to VF " "%(compute_node_id)s:%(vf_address)s is %(status)s instead of %(hopestatus)s" msgstr "" "VF %(compute_node_id)s に関連する %(address)s のPhysical Function " "%(compute_node_id)s。%(vf_address)s は %(status)s ではなく %(hopestatus)sです" #, python-format msgid "Physical network is missing for network %(network_uuid)s" msgstr "ネットワーク %(network_uuid)s に対応する物理ネットワークがありません" #, python-format msgid "Policy doesn't allow %(action)s to be performed." 
msgstr "ポリシーにより %(action)s の実行が許可されていません" #, python-format msgid "Port %(port_id)s is still in use." msgstr "ポート %(port_id)s はまだ使用中です。" #, python-format msgid "Port %(port_id)s not usable for instance %(instance)s." msgstr "ポート %(port_id)s はインスタンス %(instance)s では使用できません。" #, python-format msgid "" "Port %(port_id)s not usable for instance %(instance)s. Value %(value)s " "assigned to dns_name attribute does not match instance's hostname " "%(hostname)s" msgstr "" "ポート %(port_id)s はインスタンス %(instance)s では使用できません。dns_name " "属性に割り当てられた値 %(value)s がインスタンスのホスト名 %(hostname)s と合致" "しません" #, python-format msgid "Port %(port_id)s requires a FixedIP in order to be used." msgstr "ポート %(port_id)s を使用するには、Fixed IP が必要です。" #, python-format msgid "Port %s is not attached" msgstr "ポート %s は接続されていません" #, python-format msgid "Port id %(port_id)s could not be found." msgstr "ポート ID %(port_id)s が見つかりませんでした。" #, python-format msgid "Provided video model (%(model)s) is not supported." msgstr "指定されたビデオモデル (%(model)s) はサポートされていません。" #, python-format msgid "Provided watchdog action (%(action)s) is not supported." msgstr "" "指定されたウォッチドッグアクション (%(action)s) はサポートされていません。" msgid "QEMU guest agent is not enabled" msgstr "QEMU ゲストエージェントが有効になっていません" #, python-format msgid "Quiescing is not supported in instance %(instance_id)s" msgstr "インスタンス %(instance_id)s を正常に終了することができません。" #, python-format msgid "Quota class %(class_name)s could not be found." msgstr "クォータクラス %(class_name)s が見つかりませんでした。" msgid "Quota could not be found" msgstr "クォータが見つかりませんでした" #, python-format msgid "" "Quota exceeded for %(overs)s: Requested %(req)s, but already used %(used)s " "of %(allowed)s %(overs)s" msgstr "" "%(overs)s のクオータを超えました: %(req)s がリクエストされましたが、既に " "%(allowed)s のうち %(used)s を使用しています (%(overs)s)" #, python-format msgid "Quota exceeded for resources: %(overs)s" msgstr "リソース %(overs)s がクォータを超過しました" msgid "Quota exceeded, too many key pairs." msgstr "クォータを超過しました。キーペアが多すぎます。" msgid "Quota exceeded, too many server groups." msgstr "クォータを超過しました。サーバーグループが多すぎます。" msgid "Quota exceeded, too many servers in group" msgstr "クォータを超過しました。グループ内のサーバーが多すぎます" #, python-format msgid "Quota exceeded: code=%(code)s" msgstr "クォータを超過しました: code=%(code)s" #, python-format msgid "Quota exists for project %(project_id)s, resource %(resource)s" msgstr "" "プロジェクト %(project_id)s、リソース %(resource)s のクォータが存在します" #, python-format msgid "Quota for project %(project_id)s could not be found." msgstr "プロジェクト %(project_id)s のクォータが見つかりませんでした。" #, python-format msgid "" "Quota for user %(user_id)s in project %(project_id)s could not be found." msgstr "" "プロジェクト %(project_id)s のユーザー %(user_id)s のクォータが見つかりません" "でした。" #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be greater than or equal to " "already used and reserved %(minimum)s." msgstr "" "%(resource)s のクォータ上限 %(limit)s は、既に使用もしくは予約されている数で" "ある %(minimum)s 以上でなければなりません。" #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be less than or equal to " "%(maximum)s." msgstr "" "%(resource)s のクォータ上限 %(limit)s は、%(maximum)s 以下でなければなりませ" "ん。" #, python-format msgid "Reached maximum number of retries trying to unplug VBD %s" msgstr "VBD %s の取り外しの最大試行回数に達しました" msgid "" "Realtime policy needs vCPU(s) mask configured with at least 1 RT vCPU and 1 " "ordinary vCPU. 
See hw:cpu_realtime_mask or hw_cpu_realtime_mask" msgstr "" "リアルタイムポリシーでは、1 つ以上の RT vCPU と 1 つの通常の vCPU を使用して " "vCPU のマスクを設定する必要があります。hw:cpu_realtime_mask または " "hw_cpu_realtime_mask を参照してください" msgid "Request body and URI mismatch" msgstr "リクエスト本文と URI の不一致" msgid "Request is too large." msgstr "リクエストが大きすぎます。" #, python-format msgid "Request of image %(image_id)s got BadRequest response: %(response)s" msgstr "" "イメージ %(image_id)s のリクエストに対して BadRequest のレスポンスが返されま" "した: %(response)s" #, python-format msgid "RequestSpec not found for instance %(instance_uuid)s" msgstr "インスタンス %(instance_uuid)s に対する RequestSpec が見つかりません" msgid "Requested CPU control policy not supported by host" msgstr "ホストは要求された CPU の制御ポリシーをサポートしません" #, python-format msgid "" "Requested hardware '%(model)s' is not supported by the '%(virt)s' virt driver" msgstr "" "要求されたハードウェア '%(model)s' は '%(virt)s' virt ドライバーではサポート" "されません" #, python-format msgid "Requested image %(image)s has automatic disk resize disabled." msgstr "" "要求されたイメージ %(image)s ではディスクサイズの自動変更が無効になっていま" "す。" msgid "" "Requested instance NUMA topology cannot fit the given host NUMA topology" msgstr "" "要求されたインスタンス NUMA トポロジーは、指定されたホスト NUMA トポロジーに" "適合しません" msgid "" "Requested instance NUMA topology together with requested PCI devices cannot " "fit the given host NUMA topology" msgstr "" "PCI デバイスとともに要求されたインスタンスの NUMA トポロジーは、指定されたホ" "ストの NUMA トポロジーに適合できません" #, python-format msgid "" "Requested vCPU limits %(sockets)d:%(cores)d:%(threads)d are impossible to " "satisfy for vcpus count %(vcpus)d" msgstr "" "要求された vCPU 制限 %(sockets)d:%(cores)d:%(threads)d は、vCPU カウント " "%(vcpus)d を満たすことができません" #, python-format msgid "Rescue device does not exist for instance %s" msgstr "インスタンス %s 用のレスキューデバイスが存在しません" #, python-format msgid "Resize error: %(reason)s" msgstr "サイズ変更エラー: %(reason)s" msgid "Resize to zero disk flavor is not allowed." msgstr "ディスクが 0 のフレーバーにサイズ変更することはできません。" msgid "Resource could not be found." msgstr "リソースを見つけられませんでした。" msgid "Resumed" msgstr "再開済み" #, python-format msgid "Root element name should be '%(name)s' not '%(tag)s'" msgstr "" "ルートエレメント名は '%(tag)s' ではなく '%(name)s' でなければなりません" #, python-format msgid "Running batches of %i until complete" msgstr "完了するまで %i のバッチを実行します" #, python-format msgid "Scheduler Host Filter %(filter_name)s could not be found." msgstr "" "スケジューラーホストフィルター %(filter_name)s が見つかりませんでした。" #, python-format msgid "Security group %(name)s is not found for project %(project)s" msgstr "" "プロジェクト %(project)s のセキュリティーグループ %(name)s が見つかりません" #, python-format msgid "" "Security group %(security_group_id)s not found for project %(project_id)s." msgstr "" "プロジェクト %(project_id)s のセキュリティーグループ %(security_group_id)s が" "見つかりません。" #, python-format msgid "Security group %(security_group_id)s not found." msgstr "セキュリティーグループ %(security_group_id)s が見つかりません。" #, python-format msgid "" "Security group %(security_group_name)s already exists for project " "%(project_id)s." 
msgstr "" "プロジェクト %(project_id)s にはセキュリティーグループ " "%(security_group_name)s がすでに存在します。" #, python-format msgid "" "Security group %(security_group_name)s not associated with the instance " "%(instance)s" msgstr "" "セキュリティーグループ %(security_group_name)s がインスタンス %(instance)s に" "関連付けられていません" msgid "Security group id should be uuid" msgstr "セキュリティーグループ ID は UUID でなければなりません" msgid "Security group name cannot be empty" msgstr "セキュリティーグループ名を空にすることはできません" msgid "Security group not specified" msgstr "セキュリティーグループが指定されていません" #, python-format msgid "Server disk was unable to be resized because: %(reason)s" msgstr "サーバーディスクのサイズを変更できませんでした。理由: %(reason)s" msgid "Server does not exist" msgstr "サーバーが存在しません。" #, python-format msgid "ServerGroup policy is not supported: %(reason)s" msgstr "サーバーグループポリシーはサポートされていません: %(reason)s" msgid "ServerGroupAffinityFilter not configured" msgstr "ServerGroupAffinityFilter が設定されていません" msgid "ServerGroupAntiAffinityFilter not configured" msgstr "ServerGroupAntiAffinityFilter が設定されていません" msgid "ServerGroupSoftAffinityWeigher not configured" msgstr "ServerGroupSoftAffinityWeigher が設定されていません" msgid "ServerGroupSoftAntiAffinityWeigher not configured" msgstr "ServerGroupSoftAntiAffinityWeigher が設定されていません" #, python-format msgid "Service %(service_id)s could not be found." msgstr "サービス %(service_id)s が見つかりませんでした。" #, python-format msgid "Service %s not found." msgstr "サービス %s が見つかりません。" msgid "Service is unavailable at this time." msgstr "サービスが現在利用できません。" #, python-format msgid "Service with host %(host)s binary %(binary)s exists." msgstr "" "ホスト %(host)s のバイナリー %(binary)s を使用するサービスが存在します。" #, python-format msgid "Service with host %(host)s topic %(topic)s exists." msgstr "ホスト %(host)s のトピック %(topic)s を使用するサービスが存在します。" msgid "Set admin password is not supported" msgstr "設定された管理者パスワードがサポートされません" #, python-format msgid "Shadow table with name %(name)s already exists." msgstr "名前が %(name)s のシャドーテーブルは既に存在します。" #, python-format msgid "Share '%s' is not supported" msgstr "シェア '%s' はサポートされません" #, python-format msgid "Share level '%s' cannot have share configured" msgstr "シェアレベル '%s' に設定されたシェアがありません" msgid "" "Shrinking the filesystem down with resize2fs has failed, please check if you " "have enough free space on your disk." msgstr "" "resize2fs でファイルシステムのサイズを縮小できませんでした。ディスク上に十分" "な空き容量があるかどうかを確認してください。" #, python-format msgid "Snapshot %(snapshot_id)s could not be found." msgstr "スナップショット %(snapshot_id)s が見つかりませんでした。" msgid "Some required fields are missing" msgstr "いくつかの必須フィールドがありません。" #, python-format msgid "" "Something went wrong when deleting a volume snapshot: rebasing a " "%(protocol)s network disk using qemu-img has not been fully tested" msgstr "" "ボリュームのスナップショットの削除中に問題が発生しました: qemu-img を使用し" "て %(protocol)s のネットワークディスクを再設定することは十分に検証されていま" "せん。" msgid "Sort direction size exceeds sort key size" msgstr "ソート方向の数がソートキーの数より多いです" msgid "Sort key supplied was not valid." 
msgstr "指定されたソートキーが無効でした。" msgid "Specified fixed address not assigned to instance" msgstr "指定された固定アドレスはインスタンスに割り当てられていません" msgid "Specify `table_name` or `table` param" msgstr "'table_name' または 'table' のパラメーターを指定してください" msgid "Specify only one param `table_name` `table`" msgstr "" "'table_name' または 'table' のパラメーターのいずれか 1 つのみを指定してくださ" "い" msgid "Started" msgstr "開始済み" msgid "Stopped" msgstr "停止済み" #, python-format msgid "Storage error: %(reason)s" msgstr "ストレージエラー: %(reason)s" #, python-format msgid "Storage policy %s did not match any datastores" msgstr "ストレージポリシー %s がどのデータストアにも一致しませんでした" msgid "Success" msgstr "成功" msgid "Suspended" msgstr "休止済み" msgid "Swap drive requested is larger than instance type allows." msgstr "" "要求されたスワップドライブのサイズが、インスタンスタイプで許可されているサイ" "ズを超えています。" msgid "Table" msgstr "テーブル" #, python-format msgid "Task %(task_name)s is already running on host %(host)s" msgstr "タスク %(task_name)s はホスト %(host)s 上で既に実行中です" #, python-format msgid "Task %(task_name)s is not running on host %(host)s" msgstr "タスク %(task_name)s はホスト %(host)s 上で実行されていません" #, python-format msgid "The PCI address %(address)s has an incorrect format." msgstr "PCI アドレス %(address)s の形式が正しくありません。" msgid "The backlog must be more than 0" msgstr "バックログは 0 より大きい値である必要があります" #, python-format msgid "The console port range %(min_port)d-%(max_port)d is exhausted." msgstr "" "コンソール用のポート範囲 %(min_port)d-%(max_port)d を使い切られています。" msgid "The created instance's disk would be too small." msgstr "作成されたインスタンスのディスクが小さすぎます。" msgid "The current driver does not support preserving ephemeral partitions." msgstr "現行ドライバーは、一時パーティションの保持をサポートしていません。" msgid "The default PBM policy doesn't exist on the backend." msgstr "デフォルト PBM ポリシーがバックエンドに存在しません。" msgid "The floating IP request failed with a BadRequest" msgstr "Floating IP のリクエストが BadRequest により失敗しました。" msgid "" "The instance requires a newer hypervisor version than has been provided." msgstr "" "このインスタンスは使用されているものよりも新しいバージョンのハイパーバイザー" "を必要とします。" #, python-format msgid "The number of defined ports: %(ports)d is over the limit: %(quota)d" msgstr "定義したポート %(ports)d の数が上限 %(quota)d を超えています" msgid "The only partition should be partition 1." msgstr "唯一のパーティションはパーティション 1 でなければなりません。" #, python-format msgid "The provided RNG device path: (%(path)s) is not present on the host." msgstr "指定された RNG デバイスパス (%(path)s) がホスト上にありません。" msgid "The request body can't be empty" msgstr "リクエスト本文は空にはできません" msgid "The request is invalid." msgstr "リクエスト内容が無効です。" #, python-format msgid "" "The requested amount of video memory %(req_vram)d is higher than the maximum " "allowed by flavor %(max_vram)d." msgstr "" "要求されたビデオメモリー容量 %(req_vram)d がフレーバー %(max_vram)d で許可さ" "れている最大値を上回っています。" msgid "The requested availability zone is not available" msgstr "要求されたアベイラビリティーゾーンは使用不可です" msgid "The requested console type details are not accessible" msgstr "要求されたコンソールタイプの詳細にはアクセスできません" msgid "The requested functionality is not supported." msgstr "要求された機能はサポートされていません。" #, python-format msgid "The specified cluster '%s' was not found in vCenter" msgstr "指定されたクラスター '%s' が vCenter で見つかりませんでした" #, python-format msgid "The supplied device path (%(path)s) is in use." msgstr "指定されたデバイスパス (%(path)s) は使用中です。" #, python-format msgid "The supplied device path (%(path)s) is invalid." msgstr "指定されたデバイスパス (%(path)s) が無効です。" #, python-format msgid "" "The supplied disk path (%(path)s) already exists, it is expected not to " "exist." 
msgstr "" "指定されたディスクパス (%(path)s) は既にに存在します。これは存在しているべき" "ではありません。" msgid "The supplied hypervisor type of is invalid." msgstr "指定されたハイパーバイザーは無効です。" msgid "The target host can't be the same one." msgstr "宛先ホストを同じホストにすることはできません。" #, python-format msgid "The token '%(token)s' is invalid or has expired" msgstr "トークン \"%(token)s\" が無効か、有効期限切れです" #, python-format msgid "" "The volume cannot be assigned the same device name as the root device %s" msgstr "" "ボリュームにルートデバイス %s と同じデバイス名を割り当てることはできません" #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. Run this command again with the --delete " "option after you have backed up any necessary data." msgstr "" "uuid 列または instance_uuid 列がヌルのレコード %(records)d がテーブル " "'%(table_name)s' にあります。必要なデータをバックアップした後で、--delete オ" "プションを指定してこのコマンドを再度実行してください。" #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. These must be manually cleaned up before " "the migration will pass. Consider running the 'nova-manage db " "null_instance_uuid_scan' command." msgstr "" "uuid 列または instance_uuid 列がヌルのレコード %(records)d がテーブル " "'%(table_name)s' にあります。マイグレーションを行う前に、これらを手動でクリー" "ンアップする必要があります。 'nova-manage db null_instance_uuid_scan' コマン" "ドの実行を検討してください。" msgid "There are not enough hosts available." msgstr "使用可能なホストが不足しています。" #, python-format msgid "" "There are still %(count)i unmigrated flavor records. Migration cannot " "continue until all instance flavor records have been migrated to the new " "format. Please run `nova-manage db migrate_flavor_data' first." msgstr "" "まだマイグレーションが行われていないフレーバーレコードの件数が %(count)i あり" "ます。すべてのインスタンスのフレーバーレコードが新形式に移行するまで、マイグ" "レーションを継続することはできません。最初に 'nova-manage db " "migrate_flavor_data' を実行してください。" #, python-format msgid "There is no such action: %s" msgstr "このようなアクションはありません: %s" msgid "There were no records found where instance_uuid was NULL." msgstr "instance_uuid がヌルのレコードはありませんでした。" #, python-format msgid "" "This compute node's hypervisor is older than the minimum supported version: " "%(version)s." msgstr "" "このコンピュートノードのハイパーバイザーがサポートされる最小バージョンよりも" "古くなっています: %(version)s。" msgid "This domU must be running on the host specified by connection_url" msgstr "" "この domU は、connection_url で指定されたホスト上で実行されている必要がありま" "す" msgid "" "This method needs to be called with either networks=None and port_ids=None " "or port_ids and networks as not none." msgstr "" "このメソッドを呼び出すには、networks と port_ids に None を設定するか、 " "port_ids と networks に None 以外の値を設定する必要があります。" #, python-format msgid "This rule already exists in group %s" msgstr "指定されたルールは既にグループ %s に存在しています。" #, python-format msgid "" "This service is older (v%(thisver)i) than the minimum (v%(minver)i) version " "of the rest of the deployment. Unable to continue." 
msgstr "" "このサービスが実装環境の残りの部分の最小 (v%(minver)i) バージョンよりも古く " "(v%(thisver)i) なっています。処理を継続できません。" #, python-format msgid "Timeout waiting for device %s to be created" msgstr "デバイス %s が作成されるのを待っている際にタイムアウトになりました" msgid "Timeout waiting for response from cell" msgstr "セルからの応答を待機中にタイムアウトになりました" #, python-format msgid "Timeout while checking if we can live migrate to host: %s" msgstr "" "ホスト %s にライブマイグレーションできるか確認中にタイムアウトが発生しました" msgid "To and From ports must be integers" msgstr "開始ポートと終了ポートは整数でなければなりません" msgid "Token not found" msgstr "トークンが見つかりません" msgid "Triggering crash dump is not supported" msgstr "クラッシュダンプのトリガーはサポートされません" msgid "Type and Code must be integers for ICMP protocol type" msgstr "ICMP プロトコルのタイプおよびコードは整数でなければなりません" msgid "UEFI is not supported" msgstr "UEFI はサポートされません" #, python-format msgid "" "Unable to associate floating IP %(address)s to any fixed IPs for instance " "%(id)s. Instance has no fixed IPv4 addresses to associate." msgstr "" "インスタンス %(id)s において Floating IP %(address)s を Fixed IP に割り当てる" "ことができません。インスタンスに割り当てを行うべき Fixed IPv4 がありません。" #, python-format msgid "" "Unable to associate floating IP %(address)s to fixed IP %(fixed_address)s " "for instance %(id)s. Error: %(error)s" msgstr "" "インスタンス %(id)s において Floating IP %(address)s を Fixed IP " "%(fixed_address)s に割り当てることができません。エラー: %(error)s" msgid "Unable to authenticate Ironic client." msgstr "Ironic クライアントを認証できません。" #, python-format msgid "Unable to contact guest agent. The following call timed out: %(method)s" msgstr "" "ゲストエージェントに接続できません。次の呼び出しがタイムアウトになりました: " "%(method)s" #, python-format msgid "Unable to convert image to %(format)s: %(exp)s" msgstr "イメージを %(format)s に変換できません: %(exp)s" #, python-format msgid "Unable to convert image to raw: %(exp)s" msgstr "イメージを raw 形式に変換できません: %(exp)s" #, python-format msgid "Unable to destroy VBD %s" msgstr "VBD %s を削除できません" #, python-format msgid "Unable to destroy VDI %s" msgstr "VDI %s を破棄できません" #, python-format msgid "Unable to determine disk bus for '%s'" msgstr "ディスク '%s' のバスを判別できません" #, python-format msgid "Unable to determine disk prefix for %s" msgstr "%s のディスクプレフィックスを判別できません" #, python-format msgid "Unable to eject %s from the pool; No master found" msgstr "プールから %s を削除できません。マスターが見つかりません" #, python-format msgid "Unable to eject %s from the pool; pool not empty" msgstr "プールから %s を削除できません。プールは空ではありません" #, python-format msgid "Unable to find SR from VBD %s" msgstr "VBD %s から SR を取得できません。" #, python-format msgid "Unable to find SR from VDI %s" msgstr "VDI %s から SR を取得できません" #, python-format msgid "Unable to find ca_file : %s" msgstr "ca_file が見つかりません: %s" #, python-format msgid "Unable to find cert_file : %s" msgstr "cert_file が見つかりません: %s" #, python-format msgid "Unable to find host for Instance %s" msgstr "インスタンス %s のホストが見つかりません" msgid "Unable to find iSCSI Target" msgstr "iSCSI ターゲットが見つかりません" #, python-format msgid "Unable to find key_file : %s" msgstr "key_file %s が見つかりません" msgid "Unable to find root VBD/VDI for VM" msgstr "VM のルート VBD/VDI が見つかりません" msgid "Unable to find volume" msgstr "ボリュームが見つかりません" msgid "Unable to get host UUID: /etc/machine-id does not exist" msgstr " ホストの UUID を取得できません: /etc/machine-id が存在しません" msgid "Unable to get host UUID: /etc/machine-id is empty" msgstr "ホストの UUID が取得できません: /etc/machine-id が空です" #, python-format msgid "Unable to get record of VDI %s on" msgstr "VDI %s のレコードを取得できません。" #, python-format msgid "Unable to introduce VDI for SR %s" msgstr "SR %s で VDI を実装できません。" #, python-format msgid "Unable to introduce VDI on SR %s" msgstr "SR 
%s で VDI を実装できません。" #, python-format msgid "Unable to join %s in the pool" msgstr "プール内の %s を追加できません" msgid "" "Unable to launch multiple instances with a single configured port ID. Please " "launch your instance one by one with different ports." msgstr "" "1個の作成済みのポート ID で複数のインスタンスの起動はできません。1 つ 1 つの" "インスタンスを別々のポートで起動してください。" #, python-format msgid "" "Unable to migrate %(instance_uuid)s to %(dest)s: Lack of memory(host:" "%(avail)s <= instance:%(mem_inst)s)" msgstr "" "%(instance_uuid)s を %(dest)s にマイグレーションできません: メモリー不足です " "(host:%(avail)s <= instance:%(mem_inst)s)" #, python-format msgid "" "Unable to migrate %(instance_uuid)s: Disk of instance is too large(available " "on destination host:%(available)s < need:%(necessary)s)" msgstr "" "%(instance_uuid)s をマイグレーションできません: インスタンスのディスクが大き" "すぎます (宛先ホスト上の使用可能量: %(available)s < 必要量: %(necessary)s)" #, python-format msgid "" "Unable to migrate instance (%(instance_id)s) to current host (%(host)s)." msgstr "" "インスタンス (%(instance_id)s) を現在と同じホスト (%(host)s) にマイグレーショ" "ンすることはできません。" #, python-format msgid "Unable to obtain target information %s" msgstr "ターゲットの情報 %s を取得できません" msgid "Unable to resize disk down." msgstr "ディスクのサイズを縮小することができません。" msgid "Unable to set password on instance" msgstr "インスタンスにパスワードを設定できません" msgid "Unable to shrink disk." msgstr "ディスクを縮小できません。" #, fuzzy msgid "Unable to terminate instance." msgstr "インスタンスを強制終了できません。" #, python-format msgid "Unable to unplug VBD %s" msgstr "VBD %s の取り外しに失敗しました。" #, python-format msgid "Unacceptable CPU info: %(reason)s" msgstr "指定できない CPU 情報: %(reason)s" msgid "Unacceptable parameters." msgstr "指定できないパラメーターです。" #, python-format msgid "Unavailable console type %(console_type)s." msgstr "コンソールタイプ %(console_type)s は使用できません。" msgid "" "Undefined Block Device Mapping root: BlockDeviceMappingList contains Block " "Device Mappings from multiple instances." msgstr "" "ブロックデバイスマッピングのルートが定義されていません: " "BlockDeviceMappingList に複数のインスタンスのブロックデバイスのマッピングが含" "まれています。" #, python-format msgid "" "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ " "and attach the Nova API log if possible.\n" "%s" msgstr "" "想定しない API エラーが発生しました。http://bugs.launchpad.net/nova/ でこれを" "報告して、可能な場合は Nova API ログを添付してください。\n" "%s" #, python-format msgid "Unexpected aggregate action %s" msgstr "想定しないアグリゲートのアクション %s" msgid "Unexpected type adding stats" msgstr "統計の追加中に想定しないタイプが見つかりました" #, python-format msgid "Unexpected vif_type=%s" msgstr "想定しない vif_type=%s" msgid "Unknown" msgstr "不明" msgid "Unknown action" msgstr "不明なアクション" #, python-format msgid "Unknown config drive format %(format)s. Select one of iso9660 or vfat." msgstr "" "不明なコンフィグドライブ形式 %(format)s です。「iso9660」または「vfat」のいず" "れかを選択してください。" #, python-format msgid "Unknown delete_info type %s" msgstr "不明な delete_info タイプ %s" #, python-format msgid "Unknown image_type=%s" msgstr "不明な image_type=%s" #, python-format msgid "Unknown quota resources %(unknown)s." msgstr "不明なクォータリソース %(unknown)s。" msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "ソート方向が不明です。'desc' または 'asc' でなければなりません" #, python-format msgid "Unknown type: %s" msgstr "不明なタイプ: %s" msgid "Unrecognized legacy format." msgstr "認識できない以前のフォーマットです。" #, python-format msgid "Unrecognized read_deleted value '%s'" msgstr "認識されない read_deleted 値 '%s'" #, python-format msgid "Unrecognized value '%s' for CONF.running_deleted_instance_action" msgstr "CONF.running_deleted_instance_action で認識されない値 '%s'" #, python-format msgid "Unshelve attempted but the image %s cannot be found." 
msgstr "復元が試行されましたが、イメージ %s が見つかりません。" msgid "Unsupported Content-Type" msgstr "サポートされない Content-Type" msgid "Upgrade DB using Essex release first." msgstr "最初に Essex リリースを使用して DB をアップグレードします。" #, python-format msgid "User %(username)s not found in password file." msgstr "パスワードファイルにユーザー %(username)s が見つかりません。" #, python-format msgid "User %(username)s not found in shadow file." msgstr "shadow ファイルにユーザー %(username)s が見つかりません。" msgid "User data needs to be valid base 64." msgstr "ユーザーデータは有効な Base64 でなければなりません。" msgid "User does not have admin privileges" msgstr "ユーザーに管理者権限がありません" msgid "" "Using different block_device_mapping syntaxes is not allowed in the same " "request." msgstr "同じリクエスト内で異なる block_device_mapping 指定は使用できません。" #, python-format msgid "" "VDI %(vdi_ref)s is %(virtual_size)d bytes which is larger than flavor size " "of %(new_disk_size)d bytes." msgstr "" "VDI %(vdi_ref)s は %(virtual_size)d バイトです。これは、フレーバーのサイズで" "ある %(new_disk_size)d バイトを超えています。" #, python-format msgid "" "VDI not found on SR %(sr)s (vdi_uuid %(vdi_uuid)s, target_lun %(target_lun)s)" msgstr "" "SR %(sr)s で VDI が見つかりません (vdi_uuid %(vdi_uuid)s、target_lun " "%(target_lun)s)" #, fuzzy, python-format msgid "VHD coalesce attempts exceeded (%d), giving up..." msgstr "VHD 統合の試行時に (%d) を超過したため、中止します..." #, python-format msgid "" "Version %(req_ver)s is not supported by the API. Minimum is %(min_ver)s and " "maximum is %(max_ver)s." msgstr "" "バージョン %(req_ver)s はこの API ではサポートされていません。最小" "は%(min_ver)s、最大は %(max_ver)s です。" msgid "Virtual Interface creation failed" msgstr "仮想インターフェースの作成に失敗しました" msgid "Virtual interface plugin failed" msgstr "仮想インターフェースの接続に失敗しました" #, python-format msgid "Virtual machine mode '%(vmmode)s' is not recognised" msgstr "仮想マシンモード '%(vmmode)s' は認識できません" #, python-format msgid "Virtual machine mode '%s' is not valid" msgstr "仮想マシンモード '%s' が有効ではありません" #, python-format msgid "Virtualization type '%(virt)s' is not supported by this compute driver" msgstr "" "このコンピュートドライバーでは仮想化タイプ '%(virt)s' はサポートされません" #, python-format msgid "Volume %(volume_id)s could not be attached. Reason: %(reason)s" msgstr "ボリューム %(volume_id)s を接続できません。理由: %(reason)s" #, python-format msgid "Volume %(volume_id)s could not be found." msgstr "ボリューム %(volume_id)s が見つかりませんでした。" #, python-format msgid "" "Volume %(volume_id)s did not finish being created even after we waited " "%(seconds)s seconds or %(attempts)s attempts. And its status is " "%(volume_status)s." msgstr "" "%(seconds)s 秒待機し、%(attempts)s 回試行したものの、ボリューム " "%(volume_id)s を作成することができませんでした。状況は %(volume_status)s で" "す。" msgid "Volume does not belong to the requested instance." msgstr "ボリュームが要求されたインスタンスに属していません。" #, python-format msgid "" "Volume encryption is not supported for %(volume_type)s volume %(volume_id)s" msgstr "" "%(volume_type)s のボリューム %(volume_id)s に関してボリュームの暗号化はサポー" "トされません。" #, python-format msgid "" "Volume is smaller than the minimum size specified in image metadata. Volume " "size is %(volume_size)i bytes, minimum size is %(image_min_disk)i bytes." 
msgstr "" "ボリュームがイメージのメタデータで指定された最小サイズよりも小さくなっていま" "す。ボリュームサイズは %(volume_size)i バイト、最小サイズは " "%(image_min_disk)i バイトです。" #, python-format msgid "" "Volume sets block size, but the current libvirt hypervisor '%s' does not " "support custom block size" msgstr "" "ボリュームによってブロックサイズが設定されますが、現在の libvirt ハイパーバイ" "ザー '%s' はカスタムブロックサイズをサポートしていません" #, python-format msgid "" "We do not support scheme '%s' under Python < 2.7.4, please use http or https" msgstr "" "Python < 2.7.4 ではスキーム '%s' はサポートされません。http またはhttps を使" "用してください" msgid "When resizing, instances must change flavor!" msgstr "サイズ変更の際は、インスタンスのフレーバーを変更する必要があります。" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "サーバーを SSL モードで実行する場合は、設定ファイルで cert_file と key_file " "の両方のオプションに値を指定する必要があります" #, python-format msgid "Wrong quota method %(method)s used on resource %(res)s" msgstr "" "リソース %(res)s で使用されるクォータメソッド %(method)s が正しくありません" msgid "Wrong type of hook method. Only 'pre' and 'post' type allowed" msgstr "" "フックメソッドのタイプが正しくありません。タイプ「pre」および「post」のみが許" "可されています" msgid "X-Forwarded-For is missing from request." msgstr "リクエストに X-Forwarded-For がありません。" msgid "X-Instance-ID header is missing from request." msgstr "リクエストに X-Instance-ID ヘッダーがありません。" msgid "X-Instance-ID-Signature header is missing from request." msgstr "X-Instance-ID-Signature ヘッダーがリクエストにありません。" msgid "X-Metadata-Provider is missing from request." msgstr "リクエストに X-Metadata-Provider がありません。" msgid "X-Tenant-ID header is missing from request." msgstr "リクエストに X-Tenant-ID ヘッダーがありません。" msgid "XAPI supporting relax-xsm-sr-check=true required" msgstr "relax-xsm-sr-check=true をサポートする XAPI が必要です" msgid "You are not allowed to delete the image." msgstr "このイメージの削除は許可されていません。" msgid "" "You are not authorized to access the image the instance was started with." msgstr "" "インスタンスの起動時に使用されたイメージへのアクセスが許可されていません。" msgid "You must implement __call__" msgstr "__call__ を実装しなければなりません" msgid "You should specify images_rbd_pool flag to use rbd images." msgstr "" "rbd イメージを使用するには images_rbd_pool フラグを指定する必要があります。" msgid "You should specify images_volume_group flag to use LVM images." msgstr "" "LVM イメージを使用するには images_volume_group フラグを指定する必要がありま" "す。" msgid "Zero floating IPs available." msgstr "使用可能な Floating IP はありません。" msgid "admin password can't be changed on existing disk" msgstr "既存のディスク上で管理者パスワードを変更することはできません" msgid "aggregate deleted" msgstr "アグリゲートが削除されました" msgid "aggregate in error" msgstr "アグリゲートでエラーが発生しました" #, python-format msgid "assert_can_migrate failed because: %s" msgstr "assert_can_migrate が失敗しました: 理由 %s" msgid "cannot understand JSON" msgstr "JSON を解釈できません" msgid "clone() is not implemented" msgstr "clone() は実装されていません" #, python-format msgid "connect info: %s" msgstr "接続情報: %s" #, python-format msgid "connecting to: %(host)s:%(port)s" msgstr "%(host)s:%(port)s に接続中です" msgid "direct_snapshot() is not implemented" msgstr "direct_snapshot() が実装されていません" #, python-format msgid "disk type '%s' not supported" msgstr "ディスクタイプ '%s' はサポートされていません" #, python-format msgid "empty project id for instance %s" msgstr "インスタンス %s のプロジェクト ID が空です" msgid "error setting admin password" msgstr "管理者パスワードの設定中にエラーが発生しました" #, python-format msgid "error: %s" msgstr "エラー: %s" #, python-format msgid "failed to generate X509 fingerprint. 
Error message: %s" msgstr "X.509 フィンガープリントの生成に失敗しました。エラーメッセージ: %s" msgid "failed to generate fingerprint" msgstr "フィンガープリントの生成に失敗しました" msgid "filename cannot be None" msgstr "ファイル名を None にすることはできません" msgid "floating IP is already associated" msgstr "Floating IP は既に割り当てられています" msgid "floating IP not found" msgstr "Floating IP が見つかりません" #, python-format msgid "fmt=%(fmt)s backed by: %(backing_file)s" msgstr "fmt=%(fmt)s は %(backing_file)s でサポートされています" #, python-format msgid "href %s does not contain version" msgstr "href %s にバージョンが含まれていません" msgid "image already mounted" msgstr "イメージは既にマウントされています" #, python-format msgid "instance %s is not running" msgstr "インスタンス %s は実行されていません" msgid "instance has a kernel or ramdisk but not both" msgstr "" "インスタンスにはカーネルディスクと RAM ディスクの一方はありますが、両方はあり" "ません" msgid "instance is a required argument to use @refresh_cache" msgstr "@refresh_cache を使用する場合、インスタンスは必須の引数です" msgid "instance is not in a suspended state" msgstr "インスタンスは休止状態ではありません" msgid "instance is not powered on" msgstr "インスタンスの電源がオンになっていません" msgid "instance is powered off and cannot be suspended." msgstr "インスタンスは電源オフになっています。休止できません。" #, python-format msgid "instance_id %s could not be found as device id on any ports" msgstr "instance_id %s がデバイス ID に設定されたポートが見つかりませんでした" msgid "is_public must be a boolean" msgstr "is_public はブール値でなければなりません" msgid "keymgr.fixed_key not defined" msgstr "keymgr.fixed_key が定義されていません" msgid "l3driver call to add floating IP failed" msgstr "Floating IP を追加するための l3driver の呼び出しが失敗しました" #, python-format msgid "libguestfs installed but not usable (%s)" msgstr "libguestfs はインストールされていますが、使用できません (%s)" #, python-format msgid "libguestfs is not installed (%s)" msgstr "libguestfs がインストールされていません (%s)" #, python-format msgid "marker [%s] not found" msgstr "マーカー [%s] が見つかりません" #, python-format msgid "max rows must be <= %(max_value)d" msgstr "最大行数は %(max_value)d 以上である必要があります" msgid "max_count cannot be greater than 1 if an fixed_ip is specified." msgstr "" "fixed_ip が指定されている場合、max_count を 1 より大きくすることはできませ" "ん。" msgid "min_count must be <= max_count" msgstr "min_count は max_count 以下でなければなりません" #, python-format msgid "nbd device %s did not show up" msgstr "nbd デバイス %s が出現しません" msgid "nbd unavailable: module not loaded" msgstr "nbd が使用不可です: モジュールがロードされていません" msgid "no hosts to remove" msgstr "削除するホストがありません" #, python-format msgid "no match found for %s" msgstr "%s に合致するものが見つかりません" #, python-format msgid "no usable parent snapshot for volume %s" msgstr "ボリューム %s に関して使用可能な親スナップショットがありません" #, python-format msgid "no write permission on storage pool %s" msgstr "ストレージプール %s に書き込み権限がありません" #, python-format msgid "not able to execute ssh command: %s" msgstr "ssh コマンドを実行できません: %s" msgid "old style configuration can use only dictionary or memcached backends" msgstr "" "古い形式の設定では、ディクショナリーと memcached のバックエンドのみを使用でき" "ます" msgid "operation time out" msgstr "操作がタイムアウトしました" #, python-format msgid "partition %s not found" msgstr "パーティション %s が見つかりません" #, python-format msgid "partition search unsupported with %s" msgstr "パーティションの検索は %s ではサポートされていません" msgid "pause not supported for vmwareapi" msgstr "vmwareapi では一時停止はサポートされていません" msgid "printable characters with at least one non space character" msgstr "1 つ以上のスペースではない文字を含む印刷可能な文字。" msgid "printable characters. Can not start or end with whitespace." 
msgstr "印刷可能な文字。空白で開始または終了することはできません。" #, python-format msgid "qemu-img failed to execute on %(path)s : %(exp)s" msgstr "%(path)s で qemu-img を実行できませんでした: %(exp)s" #, python-format msgid "qemu-nbd error: %s" msgstr "qemu-nbd エラー: %s" msgid "rbd python libraries not found" msgstr "rbd python ライブラリーが見つかりません" #, python-format msgid "read_deleted can only be one of 'no', 'yes' or 'only', not %r" msgstr "" "read_deleted に指定できるのは 'no', 'yes', 'only' のいずれかです。%r は指定で" "きません。" msgid "serve() can only be called once" msgstr "serve() は一度しか呼び出せません" msgid "service is a mandatory argument for DB based ServiceGroup driver" msgstr "サービスは DB ベースの ServiceGroup ドライバーの必須の引数です" msgid "service is a mandatory argument for Memcached based ServiceGroup driver" msgstr "service は Memcached ベースの ServiceGroup ドライバーの必須の引数です" msgid "set_admin_password is not implemented by this driver or guest instance." msgstr "" "set_admin_password は、このドライバーまたはゲストインスタンスでは実装されてい" "ません。" msgid "setup in progress" msgstr "セットアップが進行中です" #, python-format msgid "snapshot for %s" msgstr "%s のスナップショット" msgid "snapshot_id required in create_info" msgstr "create_info には snapshot_id が必要です" msgid "token not provided" msgstr "トークンが指定されていません" msgid "too many body keys" msgstr "本文にキーが多すぎます" msgid "unpause not supported for vmwareapi" msgstr "vmwareapi では一時停止解除はサポートされていません" msgid "version should be an integer" msgstr "バージョンは整数でなければなりません" #, python-format msgid "vg %s must be LVM volume group" msgstr "vg %s は LVM ボリュームグループでなければなりません" #, python-format msgid "vhostuser_sock_path not present in vif_details for vif %(vif_id)s" msgstr "vhostuser_sock_path が vif %(vif_id)s の vif_details にありません" #, python-format msgid "vif type %s not supported" msgstr "vif タイプ %s はサポートされません" msgid "vif_type parameter must be present for this vif_driver implementation" msgstr "この vif_driver の実装では vif_type パラメーターが必要です" #, python-format msgid "volume %s already attached" msgstr "ボリューム %s は既に接続されています" #, python-format msgid "" "volume '%(vol)s' status must be 'in-use'. Currently in '%(status)s' status" msgstr "" "ボリューム '%(vol)s' の状況は「使用中」でなければなりませんが、現在の状況は " "'%(status)s' です" #, python-format msgid "xenapi.fake does not have an implementation for %s" msgstr "xenapi.fake には %s が実装されていません。" #, python-format msgid "" "xenapi.fake does not have an implementation for %s or it has been called " "with the wrong number of arguments" msgstr "xenapi.fake に %s の実装がないか、引数の数が誤っています。" ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0664723 nova-21.2.4/nova/locale/ko_KR/0000775000175000017500000000000000000000000015775 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3504698 nova-21.2.4/nova/locale/ko_KR/LC_MESSAGES/0000775000175000017500000000000000000000000017562 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/locale/ko_KR/LC_MESSAGES/nova.po0000664000175000017500000035277700000000000021112 0ustar00zuulzuul00000000000000# Translations template for nova. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the nova project. # # Translators: # Seunghyo Chun , 2013 # Seunghyo Chun , 2013 # Sungjin Kang , 2013 # Sungjin Kang , 2013 # Andreas Jaeger , 2016. #zanata # Jongwon Lee , 2016. #zanata # Ian Y. Choi , 2017. #zanata # Lee Dogeon , 2018. 
#zanata msgid "" msgstr "" "Project-Id-Version: nova VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2020-04-27 16:23+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2018-09-23 12:50+0000\n" "Last-Translator: Lee Dogeon \n" "Language: ko_KR\n" "Plural-Forms: nplurals=1; plural=0;\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 4.3.3\n" "Language-Team: Korean (South Korea)\n" #, python-format msgid "%(address)s is not a valid IP v4/6 address." msgstr "%(address)s는 v4/6주소에 맞지 않은 IP입니다." #, python-format msgid "" "%(binary)s attempted direct database access which is not allowed by policy" msgstr "" "%(binary)s에서 정책적으로 허용되지 않는 직접 데이터베이스 액세스를 시도함" #, python-format msgid "%(cidr)s is not a valid IP network." msgstr "%(cidr)s은(는) 올바른 IP 네트워크가 아닙니다." #, python-format msgid "%(field)s should not be part of the updates." msgstr "%(field)s은(는) 업데이트의 일부여서는 안 됩니다. " #, python-format msgid "%(memsize)d MB of memory assigned, but expected %(memtotal)d MB" msgstr "%(memtotal)dMB를 예상했지만 %(memsize)dMB의 메모리가 지정됨" #, python-format msgid "%(path)s is not on local storage: %(reason)s" msgstr "%(path)s이(가) 로컬 스토리지에 없음: %(reason)s" #, python-format msgid "%(path)s is not on shared storage: %(reason)s" msgstr "%(path)s이(가) 공유 스토리지에 없음: %(reason)s" #, python-format msgid "%(total)i rows matched query %(meth)s, %(done)i migrated" msgstr "%(total)i 행이 %(meth)s 조회와 일치함, %(done)i이(가) 마이그레이션됨" #, python-format msgid "%(type)s hypervisor does not support PCI devices" msgstr "%(type)s 하이퍼바이저가 PCI 디바이스를 지원하지 않음" #, python-format msgid "%(worker_name)s value of %(workers)s is invalid, must be greater than 0" msgstr "" "%(workers)s의 %(worker_name)s 값이 올바르지 않습니다. 해당 값은 0보다 커야 합" "니다." #, python-format msgid "%s does not support disk hotplug." msgstr "%s에서 디스크 hotplug를 지원하지 않습니다." #, python-format msgid "%s format is not supported" msgstr "%s 형식이 지원되지 않음" #, python-format msgid "%s is not supported." msgstr "%s이(가) 지원되지 않습니다." #, python-format msgid "%s must be either 'MANUAL' or 'AUTO'." msgstr "%s은(는) 'MANUAL' 또는 'AUTO'여야 합니다. " #, python-format msgid "'%(other)s' should be an instance of '%(cls)s'" msgstr "'%(other)s'은(는) '%(cls)s'의 인스턴스여야 함" msgid "'qemu-img info' parsing failed." msgstr "'qemu-img info' 구문 분석에 실패했습니다. " #, python-format msgid "'rxtx_factor' argument must be a float between 0 and %g" msgstr "'rxtx_factor' 인수는 0에서 %g 까지의 부동수여야 함 " #, python-format msgid "A NetworkModel is required in field %s" msgstr "NetworkModel이 필드 %s에 필요함" #, python-format msgid "" "API Version String %(version)s is of invalid format. Must be of format " "MajorNum.MinorNum." msgstr "" "API 버전 문자열 %(version)s 형식이 올바르지 않습니다. 형식은 MajorNum." "MinorNum 이어야 합니다." #, python-format msgid "API version %(version)s is not supported on this method." msgstr "API 버전 %(version)s에서는 이 메소드를 지원하지 않습니다.." msgid "Access list not available for public flavors." msgstr "액세스 목록이 공용 플레이버에 사용할 수 없습니다. 
" #, python-format msgid "Action %s not found" msgstr "조치 %s을(를) 찾을 수 없음" #, python-format msgid "" "Action for request_id %(request_id)s on instance %(instance_uuid)s not found" msgstr "" "%(instance_uuid)s 인스턴스에서 request_id %(request_id)s에 대한 조치를 찾을 " "수 없음" #, python-format msgid "Action: '%(action)s', calling method: %(meth)s, body: %(body)s" msgstr "조치: '%(action)s', 호출 메소드: %(meth)s, 본문: %(body)s" #, python-format msgid "Active live migration for instance %(instance_id)s not found" msgstr "" "인스턴스 %(instance_id)s에 대한 활성화 상태의 마이그레이션을 찾을 수 없음" #, python-format msgid "Add metadata failed for aggregate %(id)s after %(retries)s retries" msgstr "" "%(retries)s 재시도 후 %(id)s 집합에 대한 메타데이터 추가를 실패했습니다" msgid "Affinity instance group policy was violated." msgstr "선호도 인스턴스 그룹 정책을 위반했습니다. " #, python-format msgid "Agent does not support the call: %(method)s" msgstr "에이전트가 호출을 지원하지 않음: %(method)s" #, python-format msgid "" "Agent-build with hypervisor %(hypervisor)s os %(os)s architecture " "%(architecture)s exists." msgstr "" "하이퍼바이저 %(hypervisor)s OS %(os)s 아키텍처%(architecture)s이(가) 있는 에" "이전트 빌드가 존재합니다." #, python-format msgid "Aggregate %(aggregate_id)s already has host %(host)s." msgstr "%(aggregate_id)s 집합에 이미 %(host)s 호스트가 있습니다. " #, python-format msgid "Aggregate %(aggregate_id)s could not be found." msgstr "%(aggregate_id)s 집합을 찾을 수 없습니다. " #, python-format msgid "Aggregate %(aggregate_id)s has no host %(host)s." msgstr "%(aggregate_id)s 집합에 %(host)s 호스트가 없습니다. " #, python-format msgid "Aggregate %(aggregate_id)s has no metadata with key %(metadata_key)s." msgstr "" "%(aggregate_id)s 집합에 %(metadata_key)s 키를 갖는 메타데이터가 없습니다. " #, python-format msgid "" "Aggregate %(aggregate_id)s: action '%(action)s' caused an error: %(reason)s." msgstr "" "%(aggregate_id)s 집합: '%(action)s' 조치로 다음 오류가 발생함: %(reason)s." #, python-format msgid "Aggregate %(aggregate_name)s already exists." msgstr "%(aggregate_name)s 집합이 이미 존재합니다. " #, python-format msgid "Aggregate %s does not support empty named availability zone" msgstr "%s 집합에서 이름 지정된 비어 있는 가용 구역을 지원하지 않음" #, python-format msgid "Aggregate for host %(host)s count not be found." msgstr "%(host)s 호스트에 대한 집합을 찾을 수 없습니다. " #, python-format msgid "An invalid 'name' value was provided. The name must be: %(reason)s" msgstr "" "올바르지 않은 'name' 값이 제공되었습니다. 이름은 %(reason)s이어야 합니다." msgid "An unknown error has occurred. Please try your request again." msgstr "알 수 없는 오류가 발생했습니다. 요청을 다시 시도하십시오. " msgid "An unknown exception occurred." msgstr "알 수 없는 예외가 발생했습니다. " msgid "Anti-affinity instance group policy was violated." msgstr "안티 선호도 인스턴스 그룹 정책을 위반했습니다." #, python-format msgid "Architecture name '%(arch)s' is not recognised" msgstr "아키텍처 이름 '%(arch)s'이(가) 인식되지 않음" #, python-format msgid "Architecture name '%s' is not valid" msgstr "아키텍처 이름 '%s'이(가) 올바르지 않음" #, python-format msgid "" "Attempt to consume PCI device %(compute_node_id)s:%(address)s from empty pool" msgstr "" "비어 있는 풀에서 PCI 디바이스 %(compute_node_id)s:%(address)s을(를) 이용하려" "고시도함" msgid "Attempted overwrite of an existing value." msgstr "기존 값을 겹쳐쓰려 했습니다." 
#, python-format msgid "Attribute not supported: %(attr)s" msgstr "지원하지 않는 속성입니다: %(attr)s" msgid "Bad Request - Invalid Parameters" msgstr "잘못 된 요청 - 유효하지 않은 매개변수" #, python-format msgid "Bad network format: missing %s" msgstr "잘못된 네트워크 형식: %s 누락" msgid "Bad networks format" msgstr "잘못된 네트워크 형식" #, python-format msgid "Bad networks format: network uuid is not in proper format (%s)" msgstr "잘못된 네트워크 형식: 네트워크 uuid의 적절한 형식(%s)이 아님" #, python-format msgid "Bad prefix for network in cidr %s" msgstr "cidr %s의 네트워크에 대한 접두부가 올바르지 않음" #, python-format msgid "" "Binding failed for port %(port_id)s, please check neutron logs for more " "information." msgstr "" "포트 %(port_id)s에 대해 바인딩에 실패했습니다. 자세한 정보는 neutron 로그를확" "인하십시오. " msgid "Blank components" msgstr "비어 있는 구성요소" msgid "" "Blank volumes (source: 'blank', dest: 'volume') need to have non-zero size" msgstr "공백 볼륨(소스: 'blank', 대상: 'volume')은 크기가 0(영)이 아니어야 함" #, python-format msgid "Block Device %(id)s is not bootable." msgstr "%(id)s 블록 디바이스로 부팅할 수 없습니다." #, python-format msgid "" "Block Device Mapping %(volume_id)s is a multi-attach volume and is not valid " "for this operation." msgstr "" "블록 디바이스 맵핑 %(volume_id)s은(는) 다중 연결 볼륨이므로 이 작업에는 올바" "르지 않습니다." msgid "Block Device Mapping cannot be converted to legacy format. " msgstr "블록 디바이스 맵핑을 레거시 형식으로 전환할 수 없습니다. " msgid "Block Device Mapping is Invalid." msgstr "블록 디바이스 맵핑이 올바르지 않습니다. " #, python-format msgid "Block Device Mapping is Invalid: %(details)s" msgstr "블록 디바이스 맵핑이 올바르지 않습니다: %(details)s" msgid "" "Block Device Mapping is Invalid: Boot sequence for the instance and image/" "block device mapping combination is not valid." msgstr "" "블록 디바이스 맵핑이 올바르지 않습니다: 인스턴스와 이미지/블록 디바이스 맵핑 " "조합에 대한 부트 시퀀스가 올바르지 않습니다." msgid "" "Block Device Mapping is Invalid: You specified more local devices than the " "limit allows" msgstr "" "블록 디바이스 맵핑이 올바르지 않습니다: 허용 한도보다 많은 로컬 디바이스를 지" "정했습니다." #, python-format msgid "Block Device Mapping is Invalid: failed to get image %(id)s." msgstr "" "블록 디바이스 맵핑이 올바르지 않습니다: %(id)s 이미지를 가져오지 못했습니다. " #, python-format msgid "Block Device Mapping is Invalid: failed to get snapshot %(id)s." msgstr "" "블록 디바이스 맵핑이 올바르지 않습니다: %(id)s 스냅샷을 가져오지 못했습니다. " #, python-format msgid "Block Device Mapping is Invalid: failed to get volume %(id)s." msgstr "" "블록 디바이스 맵핑이 올바르지 않습니다: %(id)s 볼륨을 가져오지 못했습니다. " msgid "Block migration can not be used with shared storage." msgstr "블록 마이그레이션은 공유 스토리지에서 사용할 수 없습니다. " msgid "Boot index is invalid." msgstr "부트 인덱스가 올바르지 않습니다." #, python-format msgid "Build of instance %(instance_uuid)s aborted: %(reason)s" msgstr "인스턴스 %(instance_uuid)s의 빌드가 중단됨: %(reason)s" #, python-format msgid "Build of instance %(instance_uuid)s was re-scheduled: %(reason)s" msgstr "인스턴스 %(instance_uuid)s의 빌드가 다시 예정됨: %(reason)s" #, python-format msgid "BuildRequest not found for instance %(uuid)s" msgstr "%(uuid)s 인스턴스의 BuildRequest를 찾을 수 없음" msgid "CPU and memory allocation must be provided for all NUMA nodes" msgstr "모든 NUMA 노드에 CPU 및 메모리 할당을 제공해야 함" #, python-format msgid "" "CPU doesn't have compatibility.\n" "\n" "%(ret)s\n" "\n" "Refer to %(u)s" msgstr "" "CPU가 호환성을 갖지 않습니다.\n" "\n" "%(ret)s\n" "\n" "%(u)s을(를) 참조하십시오. 
" #, python-format msgid "CPU number %(cpunum)d is assigned to two nodes" msgstr "CPU 번호 %(cpunum)d이(가) 두 개의 노드에 지정됨" #, python-format msgid "CPU number %(cpunum)d is larger than max %(cpumax)d" msgstr "CPU 번호 %(cpunum)d은(는) 최대값 %(cpumax)d 보다 큼" #, python-format msgid "CPU number %(cpuset)s is not assigned to any node" msgstr "CPU 번호 %(cpuset)s이(가) 어느 노드에도 지정되지 않았음" msgid "Can not add access to a public flavor." msgstr "공용 플레이버에 대한 액세스를 추가할 수 없습니다. " msgid "Can not find requested image" msgstr "요청된 이미지를 찾을 수 없음" #, python-format msgid "Can not handle authentication request for %d credentials" msgstr "%d 신임 정보에 대한 인증 정보를 처리할 수 없음" msgid "Can't resize a disk to 0 GB." msgstr "디스크 크기를 0GB로 조정할 수 없습니다." msgid "Can't resize down ephemeral disks." msgstr "ephemeral 디스크의 크기를 줄일 수 없습니다." msgid "Can't retrieve root device path from instance libvirt configuration" msgstr "인스턴스 libvirt 구성에서 루트 디바이스 경로를 검색할 수 없음" #, python-format msgid "" "Cannot '%(action)s' instance %(server_id)s while it is in %(attr)s %(state)s" msgstr "" "다음 상태에서는 인스턴스 %(server_id)s에 '%(action)s' 조치를 수행할 수 없음: " "%(attr)s %(state)s" #, python-format msgid "" "Cannot access \"%(instances_path)s\", make sure the path exists and that you " "have the proper permissions. In particular Nova-Compute must not be executed " "with the builtin SYSTEM account or other accounts unable to authenticate on " "a remote host." msgstr "" "\"%(instances_path)s\"에 액세스할 수 없습니다. 경로가 있는지 필요한 권한이 있" "는지 확인하십시오. 특히 Nova-Compute를 내장 SYSTEM 계정이나 원격 호스트에서 " "인증할 수 없는 다른 계정으로 실행하지 않아야 합니다." #, python-format msgid "Cannot add host to aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "호스트를 집합 %(aggregate_id)s에 추가할 수 없습니다. 이유: %(reason)s." msgid "Cannot attach one or more volumes to multiple instances" msgstr "복수 인스턴스에 하나 이상의 볼륨을 첨부할 수 없음" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "고아 %(objtype)s 오브젝트에서 %(method)s 메소드를 호출할 수 없음" #, python-format msgid "" "Cannot determine the parent storage pool for %s; cannot determine where to " "store images" msgstr "" "%s;의 상위 스토리지 풀을 판별할 수 없습니다. 이미지 저장 위치를 판별할 수 없" "습니다." msgid "Cannot find SR of content-type ISO" msgstr "컨텐츠 유형 ISO의 SR을 찾을 수 없음" msgid "Cannot find SR to read/write VDI." msgstr "VDI를 읽기/쓰기할 SR을 찾을 수 없습니다. " msgid "Cannot find image for rebuild" msgstr "다시 빌드할 이미지를 찾을 수 없음" #, python-format msgid "Cannot remove host %(host)s in aggregate %(id)s" msgstr "%(id)s 집합에서 %(host)s 호스트를 제거할 수 없습니다" #, python-format msgid "Cannot remove host from aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "%(aggregate_id)s 집합에서 호스트를 제거할 수 없습니다. 이유: %(reason)s." msgid "Cannot rescue a volume-backed instance" msgstr "volume-backed 인스턴스를 구조할 수 없음" #, python-format msgid "" "Cannot resize the root disk to a smaller size. Current size: " "%(curr_root_gb)s GB. Requested size: %(new_root_gb)s GB." msgstr "" "루트 디스크를 더 적은 크기로 조정할 수 없음. 현재 크기: %(curr_root_gb)sGB. " "요청된 크기: %(new_root_gb)sGB." msgid "" "Cannot set cpu thread pinning policy in a non dedicated cpu pinning policy" msgstr "전용이 아닌 CPU 고정 정책에 CPU 스레드 고정 정책을 설정할 수 없음" msgid "Cannot set realtime policy in a non dedicated cpu pinning policy" msgstr "전용이 아닌 CPU 고정 정책에서 실시간 정책을 설정할 수 없음" #, python-format msgid "Cannot update aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "%(aggregate_id)s 집합을 업데이트할 수 없습니다. 이유: %(reason)s." #, python-format msgid "" "Cannot update metadata of aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "%(aggregate_id)s 집합 메타데이터를 업데이트할 수 없습니다. 이유: %(reason)s." 
#, python-format msgid "Cell %(uuid)s has no mapping." msgstr "셀 %(uuid)s에 맵핑이 없습니다." #, python-format msgid "" "Change would make usage less than 0 for the following resources: %(unders)s" msgstr "변경하면 다음 자원에 대한 사용량이 0보다 작아집니다: %(unders)s" #, python-format msgid "Cinder API version %(version)s is not available." msgstr "Cinder API 버전 %(version)s 을 찾을 수 없습니다." #, python-format msgid "Class %(class_name)s could not be found: %(exception)s" msgstr "%(class_name)s 클래스를 찾을 수 없음: %(exception)s" #, python-format msgid "" "Command Not supported. Please use Ironic command %(cmd)s to perform this " "action." msgstr "" "명령이 지원되지 않습니다. 이 조치를 수행하려면 아이로닉 명령 %(cmd)s을(를) 사" "용하십시오." #, python-format msgid "Compute host %(host)s could not be found." msgstr "%(host)s 계산 호스트를 찾을 수 없습니다. " #, python-format msgid "Compute host %s not found." msgstr "계산 호스트 %s을(를) 찾을 수 없음." #, python-format msgid "Compute service of %(host)s is still in use." msgstr "%(host)s Compute 서비스를 사용하고 있습니다." #, python-format msgid "Compute service of %(host)s is unavailable at this time." msgstr "%(host)s Compute 서비스를 지금 사용할 수 없습니다." #, python-format msgid "Config drive format '%(format)s' is not supported." msgstr "구성 드라이브 형식 '%(format)s'은(는) 지원되지 않습니다." #, python-format msgid "" "Config requested an explicit CPU model, but the current libvirt hypervisor " "'%s' does not support selecting CPU models" msgstr "" "구성이 명시적 CPU 모델을 요청했지만 현재 libvirt 하이퍼바이저 '%s'이(가) CPU " "모델 선택을 지원하지 않음" #, python-format msgid "" "Conflict updating instance %(instance_uuid)s, but we were unable to " "determine the cause" msgstr "" "인스턴스 %(instance_uuid)s 업데이트 중에 충돌이 발생했지만 원인을 판별할 수 " "없음" #, python-format msgid "" "Conflict updating instance %(instance_uuid)s. Expected: %(expected)s. " "Actual: %(actual)s" msgstr "" "인스턴스 %(instance_uuid)s 업데이트 중에 충돌이 발생했습니다. 예상: " "%(expected)s. 실제: %(actual)s" #, python-format msgid "Connection to cinder host failed: %(reason)s" msgstr "Cinder 호스트 연결하지 못했습니다: %(reason)s" #, python-format msgid "Connection to glance host %(server)s failed: %(reason)s" msgstr "Glance 호스트 %(server)s에 연결하는 데 실패: %(reason)s" #, python-format msgid "Connection to libvirt lost: %s" msgstr "libvirt 연결 유실: %s" #, python-format msgid "Connection to the hypervisor is broken on host: %(host)s" msgstr "하이퍼바이저 연결이 호스트에서 끊겼습니다: %(host)s" #, python-format msgid "" "Console log output could not be retrieved for instance %(instance_id)s. " "Reason: %(reason)s" msgstr "" "%(instance_id)s 인스턴스의 콘솔 로그 출력을 검색할 수 없습니다. 이유: " "%(reason)s" msgid "Constraint not met." msgstr "제한조건이 만족되지 않았습니다. " #, python-format msgid "Converted to raw, but format is now %s" msgstr "원시로 변환되었지만 형식은 지금 %s임" #, python-format msgid "Could not attach image to loopback: %s" msgstr "루프백에 이미지를 첨부할 수 없음: %s" #, python-format msgid "Could not fetch image %(image_id)s" msgstr "%(image_id)s 이미지를 페치할 수 없음" #, python-format msgid "Could not find a handler for %(driver_type)s volume." msgstr "%(driver_type)s 볼륨에 대한 핸들러를 찾을 수 없습니다. " #, python-format msgid "Could not find binary %(binary)s on host %(host)s." msgstr "%(host)s 호스트에서 2진 %(binary)s을(를) 찾을 수 없습니다. " #, python-format msgid "Could not find config at %(path)s" msgstr "%(path)s에서 구성을 찾을 수 없음" msgid "Could not find the datastore reference(s) which the VM uses." msgstr "VM이 사용하는 데이터 저장소 참조를 찾을 수 없습니다. " #, python-format msgid "Could not load line %(line)s, got error %(error)s" msgstr "%(line)s 행을 로드할 수 없음. 
%(error)s 오류가 발생했음" #, python-format msgid "Could not load paste app '%(name)s' from %(path)s" msgstr "%(path)s에서 페이스트 앱 '%(name)s'을(를) 로드할 수 없음" #, python-format msgid "" "Could not mount vfat config drive. %(operation)s failed. Error: %(error)s" msgstr "" "vfat 구성 드라이브를 마운트할 수 없습니다. %(operation)s에 실패했습니다. 오" "류: %(error)s" #, python-format msgid "Could not upload image %(image_id)s" msgstr "%(image_id)s 이미지를 업로드할 수 없음" msgid "Creation of virtual interface with unique mac address failed" msgstr "고유 MAC 주소가 있는 가상 인터페이 생성에 실패했습니다" #, python-format msgid "Datastore regex %s did not match any datastores" msgstr "데이터 저장소 regex %s이(가) 데이터 저장소와 일치하지 않음" msgid "Datetime is in invalid format" msgstr "Datetime이 올바르지 않은 형식임" msgid "Default PBM policy is required if PBM is enabled." msgstr "PBM을 사용하는 경우 기본 PBM 정책이 필요합니다." #, python-format msgid "Deleted %(records)d records from table '%(table_name)s'." msgstr "테이블 '%(table_name)s'에서 %(records)d개의 레코드를 삭제했습니다." #, python-format msgid "Device '%(device)s' not found." msgstr "'%(device)s' 디바이스를 찾을 수 없습니다." #, python-format msgid "Device detach failed for %(device)s: %(reason)s" msgstr "장치 해제가 %(device)s: %(reason)s 때문에 실패했습니다." #, python-format msgid "" "Device id %(id)s specified is not supported by hypervisor version %(version)s" msgstr "지정된 디바이스 ID %(id)s은(는) 하이퍼바이저 버전 %(version)s임" msgid "Device name contains spaces." msgstr "장치 이름에 공백이 있습니다." msgid "Device name empty or too long." msgstr "장치 이름이 비어있거나 너무 깁니다." #, python-format msgid "Device type mismatch for alias '%s'" msgstr "'%s' 별명의 디바이스 유형이 일치하지 않음" #, python-format msgid "" "Different types in %(table)s.%(column)s and shadow table: %(c_type)s " "%(shadow_c_type)s" msgstr "" "%(table)s.%(column)s 및 새도우 테이블에서 서로 다른 유형: %(c_type)s " "%(shadow_c_type)s" #, python-format msgid "Disk contains a filesystem we are unable to resize: %s" msgstr "디스크에 사용자가 크기를 조정할 수 없는 파일 시스템이 포함됨: %s" #, python-format msgid "Disk format %(disk_format)s is not acceptable" msgstr "Disk format %(disk_format)s를 알 수 없습니다." #, python-format msgid "Disk info file is invalid: %(reason)s" msgstr "디스크 정보 파일이 올바르지 않음: %(reason)s" msgid "Disk must have only one partition." msgstr "디스크에는 하나의 파티션만 있어야 합니다. " #, python-format msgid "Disk with id: %s not found attached to instance." msgstr "인스턴스에 접속된 ID가 %s인 디스크를 찾을 수 없습니다." #, python-format msgid "Driver Error: %s" msgstr "드라이버 오류: %s" msgid "" "Ephemeral disks requested are larger than the instance type allows. If no " "size is given in one block device mapping, flavor ephemeral size will be " "used." msgstr "" "임시 디스크는 인스턴스 유형이 허용하는 것 이상으로 요청 될 수 있습니다. 하나" "의 블록 장치 매핑에서 크기가 주어지지 않는다면, 선호하는 임시 크기로 사용 됩" "니다." #, python-format msgid "Error attempting to run %(method)s" msgstr "%(method)s을(를) 실행하는 중에 오류 발생" #, python-format msgid "" "Error destroying the instance on node %(node)s. Provision state still " "'%(state)s'." msgstr "" "%(node)s 노드에서 인스턴스를 영구 삭제하는 중 오류가 발생했습니다. 프로비저" "닝 상태는 아직 '%(state)s'입니다." 
#, python-format msgid "Error during following call to agent: %(method)s" msgstr "에이전트에 대한 다음 호출 중 오류: %(method)s" #, python-format msgid "Error during unshelve instance %(instance_id)s: %(reason)s" msgstr "%(instance_id)s 인스턴스 언쉘브 중 오류 발생: %(reason)s" #, python-format msgid "" "Error from libvirt while getting domain info for %(instance_name)s: [Error " "Code %(error_code)s] %(ex)s" msgstr "" "%(instance_name)s의 도메인 정보를 가져오는 중 libvirt에서 오류 발생: [오류 코" "드 %(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while looking up %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "%(instance_name)s 검색 중 libvirt에서 오류 발생: [오류 코드 %(error_code)s] " "%(ex)s" #, python-format msgid "" "Error from libvirt while quiescing %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "%(instance_name)s을(를) Quiesce하는 중 libvirt에서 오류 발생: [오류 코드 " "%(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while set password for username \"%(user)s\": [Error Code " "%(error_code)s] %(ex)s" msgstr "" "사용자 이름 \"%(user)s\"에 대한 비밀번호 설정 중 libvirt에서 오류 발생: [오" "류 코드 %(error_code)s] %(ex)s" #, python-format msgid "" "Error mounting %(device)s to %(dir)s in image %(image)s with libguestfs " "(%(e)s)" msgstr "" "%(image)s 이미지에서 %(device)s을(를) %(dir)s에 마운트하는 중 오류 발생" "(libguestfs(%(e)s)) " #, python-format msgid "Error mounting %(image)s with libguestfs (%(e)s)" msgstr "libguestfs(%(e)s)를 갖는 %(image)s 마운트 오류" #, python-format msgid "Error when creating resource monitor: %(monitor)s" msgstr "자원 모니터 작성 중에 오류 발생: %(monitor)s" msgid "Error: Agent is disabled" msgstr "오류: 에이전트가 사용 안됨" #, python-format msgid "Event %(event)s not found for action id %(action_id)s" msgstr "조치 ID %(action_id)s에 대한 %(event)s 이벤트를 찾을 수 없음" msgid "Event must be an instance of nova.virt.event.Event" msgstr "이벤트는 nova.virt.event.Event의 인스턴스여야 함" #, python-format msgid "" "Exceeded max scheduling attempts %(max_attempts)d for instance " "%(instance_uuid)s. Last exception: %(exc_reason)s" msgstr "" "%(instance_uuid)s 인스턴스에 대한 최대 스케줄링 시도 %(max_attempts)d을(를) " "초과했습니다. 마지막 예외: %(exc_reason)s" #, python-format msgid "" "Exceeded max scheduling retries %(max_retries)d for instance " "%(instance_uuid)s during live migration" msgstr "" "인스턴스에 대한 최대 스케줄링 재시도 %(max_retries)d을(를) 초과" "함%(instance_uuid)s" #, python-format msgid "Exceeded maximum number of retries. %(reason)s" msgstr "최대 재시도 횟수를 초과했습니다. %(reason)s" #, python-format msgid "Expected a uuid but received %(uuid)s." msgstr "uuid를 예상했지만 %(uuid)s을(를) 수신했습니다. " #, python-format msgid "Extra column %(table)s.%(column)s in shadow table" msgstr "새도우 테이블에 %(table)s.%(column)s 열이 추가로 있음" msgid "Extracting vmdk from OVA failed." msgstr "OVA에서 vmdk의 압축을 풀지 못했습니다." #, python-format msgid "Failed to access port %(port_id)s: %(reason)s" msgstr "포트 %(port_id)s에 액세스 실패: %(reason)s" #, python-format msgid "" "Failed to add deploy parameters on node %(node)s when provisioning the " "instance %(instance)s" msgstr "" "인스턴스 %(instance)s을(를) 프로비저닝할 때 %(node)s 노드에 배치 매개변수 추" "가 실패" #, python-format msgid "Failed to allocate the network(s) with error %s, not rescheduling." msgstr "%s 오류로 인해 네트워크 할당 실패. 다시 스케줄하지 않음" msgid "Failed to allocate the network(s), not rescheduling." msgstr "네트워크 할당 실패. 
다시 스케줄하지 않음" #, python-format msgid "Failed to attach network adapter device to %(instance_uuid)s" msgstr "네트워크 어댑터 디바이스를 %(instance_uuid)s에 접속하는 데 실패함" msgid "Failed to create the interim network for vif" msgstr "vif에 대한 임시 네트워크 구성에 실패함" #, python-format msgid "Failed to create vif %s" msgstr "vif %s을(를) 생성하는 데 실패" msgid "Failed to delete bridge" msgstr "브릿지 제거에 실패함" #, python-format msgid "Failed to deploy instance: %(reason)s" msgstr "인스턴스 배치 실패: %(reason)s" #, python-format msgid "Failed to detach PCI device %(dev)s: %(reason)s" msgstr "PCI 디바이스 %(dev)s을(를) 분리하지 못함: %(reason)s" #, python-format msgid "Failed to detach network adapter device from %(instance_uuid)s" msgstr "네트워크 어댑터 디바이스를 %(instance_uuid)s에서 분리하는 데 실패함" #, python-format msgid "Failed to encrypt text: %(reason)s" msgstr "텍스트를 암호화하지 못했습니다: %(reason)s" msgid "Failed to find bridge for vif" msgstr "vif 브릿지 검색에 실패함" #, python-format msgid "Failed to get resource provider with UUID %(uuid)s" msgstr "UUID로 리소스 공급자 가져오기 실패: %(uuid)s" #, python-format msgid "Failed to launch instances: %(reason)s" msgstr "인스턴스 실행 실패: %(reason)s" #, python-format msgid "Failed to map partitions: %s" msgstr "파티션을 맵핑하지 못했음: %s" #, python-format msgid "Failed to mount filesystem: %s" msgstr "파일 시스템 마운트 실패: %s" msgid "Failed to parse information about a pci device for passthrough" msgstr "패스스루용 pci 디바이스에 대한 정책을 구문 분석하지 못함" #, python-format msgid "Failed to power off instance: %(reason)s" msgstr "인스턴스 전원 끔 실패: %(reason)s" #, python-format msgid "Failed to power on instance: %(reason)s" msgstr "인스턴스 전원 꼄 실패: %(reason)s" #, python-format msgid "" "Failed to prepare PCI device %(id)s for instance %(instance_uuid)s: " "%(reason)s" msgstr "" "%(instance_uuid)s 인스턴스에 대해 PCI 디바이스 %(id)s을(를) 준비하지 못함: " "%(reason)s" #, python-format msgid "Failed to provision instance %(inst)s: %(reason)s" msgstr "인스턴스 %(inst)s 프로비저닝 실패: %(reason)s" #, python-format msgid "Failed to read or write disk info file: %(reason)s" msgstr "디스크 정보 파일을 읽거나 쓰지 못함: %(reason)s" #, python-format msgid "Failed to reboot instance: %(reason)s" msgstr "인스턴스 재부팅 실패: %(reason)s" #, python-format msgid "Failed to remove volume(s): (%(reason)s)" msgstr "볼륨을 제거하지 못함: (%(reason)s)" #, python-format msgid "Failed to request Ironic to rebuild instance %(inst)s: %(reason)s" msgstr "%(inst)s 인스턴스를 다시 빌드하기 위한 아이로닉 요청 실패: %(reason)s" #, python-format msgid "Failed to resume instance: %(reason)s" msgstr "인스턴스 재개 실패: %(reason)s" #, python-format msgid "Failed to run qemu-img info on %(path)s : %(error)s" msgstr "%(path)s에서 qemu-img 정보 실행 실패: %(error)s" #, python-format msgid "Failed to set admin password on %(instance)s because %(reason)s" msgstr "%(reason)s 때문에 %(instance)s에 관리 비밀번호를 설정하지 못했음" msgid "Failed to spawn, rolling back" msgstr "파생 실패. 롤백 중" #, python-format msgid "Failed to suspend instance: %(reason)s" msgstr "인스턴스 일시중단 실패: %(reason)s" #, python-format msgid "Failed to terminate instance: %(reason)s" msgstr "인스턴스 종료 실패: %(reason)s" #, python-format msgid "Failed to unplug vif %s" msgstr "vif %s의 플러그를 해제하는 데 실패" #, python-format msgid "Failed to unplug virtual interface: %(reason)s" msgstr "가상 인터페이스 해제 실패: %(reason)s" msgid "Failure prepping block device." msgstr "블록 디바이스 준비 실패" #, python-format msgid "File %(file_path)s could not be found." msgstr "%(file_path)s 파일을 찾을 수 없습니다. " #, python-format msgid "File path %s not valid" msgstr "파일 경로 %s이(가) 올바르지 않음" #, python-format msgid "Fixed IP %(ip)s is not a valid ip address for network %(network_id)s." 
msgstr "" "고정 IP %(ip)s이(가) 네트워크 %(network_id)s에 대해 올바른 IP 주소가 아닙니" "다. " #, python-format msgid "Fixed IP %s is already in use." msgstr "고정 IP %s을(를) 이미 사용하고 있습니다." #, python-format msgid "" "Fixed IP address %(address)s is already in use on instance %(instance_uuid)s." msgstr "" "고정 IP 주소 %(address)s이(가) 이미 %(instance_uuid)s 인스턴스에서 사용되고 " "있습니다." #, python-format msgid "Fixed IP not found for address %(address)s." msgstr "%(address)s 주소의 Fixed IP를 찾을 수 없습니다." #, python-format msgid "Flavor %(flavor_id)s could not be found." msgstr "%(flavor_id)s 플레이버를 찾을 수 없습니다. " #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(extra_specs_key)s." msgstr "" "플레이버 %(flavor_id)s에 %(extra_specs_key)s 키가 있는 추가 스펙이 없습니다." #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(key)s." msgstr "플레이버 %(flavor_id)s에 %(key)s 키가 있는 추가 스펙이 없습니다." #, python-format msgid "" "Flavor %(id)s extra spec cannot be updated or created after %(retries)d " "retries." msgstr "" "%(retries)d번 재시도 후에 Flavor %(id)s 추가 스펙을 업데이트하거나 작성할 수 " "없습니다." #, python-format msgid "" "Flavor access already exists for flavor %(flavor_id)s and project " "%(project_id)s combination." msgstr "" "플레이버 %(flavor_id)s 및 %(project_id)s 프로젝트 조합에 대한 플레이버 액세스" "가 이미 존재합니다. " #, python-format msgid "Flavor access not found for %(flavor_id)s / %(project_id)s combination." msgstr "" "%(flavor_id)s / %(project_id)s 조합에 대한 플레이버 액세스를 찾을 수 없습니" "다. " msgid "Flavor used by the instance could not be found." msgstr "인스턴스가 사용한 플레이버를 찾을 수 없습니다. " #, python-format msgid "Flavor with ID %(flavor_id)s already exists." msgstr "ID가 %(flavor_id)s인 플레이버가 이미 있습니다." #, python-format msgid "Flavor with name %(flavor_name)s could not be found." msgstr "이름이 %(flavor_name)s인 플레이버를 찾을 수 없습니다." #, python-format msgid "Flavor with name %(name)s already exists." msgstr "이름이 %(name)s인 플레이버가 이미 있습니다." #, python-format msgid "" "Flavor's disk is smaller than the minimum size specified in image metadata. " "Flavor disk is %(flavor_size)i bytes, minimum size is %(image_min_disk)i " "bytes." msgstr "" "플레이버의 디스크가 이미지 메타데이터에서 지정된 최소 크기보다 작습니다. 플레" "이버 디스크는 %(flavor_size)i바이트이고 최소 크기는 %(image_min_disk)i바이트" "입니다. " #, python-format msgid "" "Flavor's disk is too small for requested image. Flavor disk is " "%(flavor_size)i bytes, image is %(image_size)i bytes." msgstr "" "플레이버의 디스크가 요청된 이미지에 비해 너무 작습니다. 플레이버 디스크는 " "%(flavor_size)i바이트이고 이미지는 %(image_size)i바이트입니다. " msgid "Flavor's memory is too small for requested image." msgstr "플레이버의 메모리가 요청된 이미지에 대해 너무 작습니다." #, python-format msgid "Floating IP %(address)s association has failed." msgstr "부동 IP %(address)s 연관에 실패했습니다. " #, python-format msgid "Floating IP %(address)s is associated." msgstr "Floating IP %(address)s이(가) 연관되어 있습니다." #, python-format msgid "Floating IP %(address)s is not associated with instance %(id)s." msgstr "Floating IP %(address)s이(가) %(id)s 인스턴스와 연관되지 않았습니다." #, python-format msgid "Floating IP not found for ID %(id)s." msgstr "ID %(id)s의 Floating IP를 찾을 수 없습니다. " #, python-format msgid "Floating IP not found for ID %s" msgstr "ID %s의 Floating IP를 찾을 수 없음" #, python-format msgid "Floating IP not found for address %(address)s." msgstr "%(address)s 주소의 Floating IP를 찾을 수 없습니다." msgid "Floating IP pool not found." msgstr "Floating IP 풀을 찾을 수 없습니다." msgid "Forbidden" msgstr "허용 되지 않은" msgid "" "Forbidden to exceed flavor value of number of serial ports passed in image " "meta." msgstr "" "이미지 메타에 패스된 직렬 포트 수의 플레이버 값 초과가 금지되어 있습니다." msgid "Found no disk to snapshot." 
msgstr "스냅샷할 디스크를 찾지 못함." #, python-format msgid "Found no network for bridge %s" msgstr "브릿지 %s에 대한 네트워크를 발견하지 못함" #, python-format msgid "Found non-unique network for bridge %s" msgstr "브릿지 %s에 대한 고유하지 않은 네트워크 발견" #, python-format msgid "Found non-unique network for name_label %s" msgstr "name_label %s에 대한 고유하지 않은 네트워크 발견" msgid "Guest agent is not enabled for the instance" msgstr "게스트 에이전트는 해당 인스터스에 활성화 되지 않습니다." msgid "Guest does not have a console available." msgstr "게스트에 사용할 수 있는 콘솔이 없습니다." #, python-format msgid "Host %(host)s could not be found." msgstr "%(host)s 호스트를 찾을 수 없습니다. " #, python-format msgid "Host %(host)s is already mapped to cell %(uuid)s" msgstr "%(host)s 호스트가 이미 %(uuid)s 셀에 맵핑됨" #, python-format msgid "Host '%(name)s' is not mapped to any cell" msgstr "'%(name)s' 호스트가 셀에 맵핑되지 않음" msgid "Host PowerOn is not supported by the Hyper-V driver" msgstr "Hyper-V 드라이버에서 호스트 PowerOn을 지원하지 않습니다." msgid "Host aggregate is not empty" msgstr "호스트 집합이 비어 있지 않습니다" msgid "Host does not support guests with NUMA topology set" msgstr "호스트에서 NUMA 토폴로지 세트가 있는 게스트를 지원하지 않음" msgid "Host does not support guests with custom memory page sizes" msgstr "" "호스트에서 사용자 정의 메모리 페이지 크기를 사용하는 게스트를 지원하지 않음" msgid "Host startup on XenServer is not supported." msgstr "XenServer에서의 호스트 시작은 지원되지 않습니다. " msgid "Hypervisor driver does not support post_live_migration_at_source method" msgstr "" "하이퍼바이저 드라이버가 post_live_migration_at_source 메소드를 지원하지 않음" #, python-format msgid "Hypervisor virt type '%s' is not valid" msgstr "하이퍼바이저 가상화 유형 '%s'이(가) 올바르지 않음" #, python-format msgid "Hypervisor virtualization type '%(hv_type)s' is not recognised" msgstr "하이퍼바이저 가상화 유형 '%(hv_type)s'이(가) 인식되지 않음" #, python-format msgid "Hypervisor with ID '%s' could not be found." msgstr "ID가 '%s'인 하이퍼바이저를 찾을 수 없습니다. " #, python-format msgid "IP allocation over quota in pool %s." msgstr "%s 풀에서 IP 할당이 할당량을 초과했습니다." msgid "IP allocation over quota." msgstr "IP 할당이 할당량을 초과했습니다." #, python-format msgid "Image %(image_id)s could not be found." msgstr "%(image_id)s 이미지를 찾을 수 없습니다. " #, python-format msgid "Image %(image_id)s is not active." msgstr "%(image_id)s 이미지가 active 상태가 아닙니다. " #, python-format msgid "Image %(image_id)s is unacceptable: %(reason)s" msgstr "%(image_id)s 이미지는 허용할 수 없음: %(reason)s" msgid "Image disk size greater than requested disk size" msgstr "이미지 디스크 크기가 요청된 디스크 크기보다 큼" msgid "Image is not raw format" msgstr "이미지가 원시 형식이 아님" msgid "Image metadata limit exceeded" msgstr "이미지 메타데이터 한계 초과" #, python-format msgid "Image model '%(image)s' is not supported" msgstr "이미지 모델 '%(image)s'은(는) 지원되지 않음" msgid "Image not found." msgstr "이미지를 찾을 수 없습니다. " #, python-format msgid "" "Image property '%(name)s' is not permitted to override NUMA configuration " "set against the flavor" msgstr "" "이미지 특성 '%(name)s'은(는) 플레이버에 대해 설정된 NUMA 구성을 대체할 수 없" "음" msgid "" "Image property 'hw_cpu_policy' is not permitted to override CPU pinning " "policy set against the flavor" msgstr "" "이미지 특성 'hw_cpu_policy'는 플레이버에 맞지 않는 CPU 고정 정책 세트를 대체" "할 수 없음" msgid "" "Image property 'hw_cpu_thread_policy' is not permitted to override CPU " "thread pinning policy set against the flavor" msgstr "" "'hw_cpu_thread_policy' 이미지 특성은 Flavor에 대한 CPU 스레드 고정 정책 세트" "를 대체할 수 없음" msgid "Image that the instance was started with could not be found." msgstr "인스턴스가 시작되었던 해당 이미지를 찾을 수 없음. " #, python-format msgid "Image's config drive option '%(config_drive)s' is invalid" msgstr "" "이미지의 구성 드라이브 옵션 '%(config_drive)s'이(가) 올바르지 않습니다. 
" msgid "" "Images with destination_type 'volume' need to have a non-zero size specified" msgstr "destination_type이 '볼륨'인 이미지는 0이 아닌 크기를 지정해야 함" msgid "In ERROR state" msgstr "ERROR 상태에 있음" #, python-format msgid "In states %(vm_state)s/%(task_state)s, not RESIZED/None" msgstr "%(vm_state)s/%(task_state)s 상태에 있음, RESIZED/None이 아님" #, python-format msgid "In-progress live migration %(id)s is not found for server %(uuid)s." msgstr "" "%(uuid)s 서버의 진행 중인 라이브 마이그레이션 %(id)s을(를) 찾을 수 없습니다." msgid "" "Incompatible settings: ephemeral storage encryption is supported only for " "LVM images." msgstr "" "호환되지 않는 설정: ephemeral 스토리지 암호화가 LVM 이미지에만 지원됩니다." #, python-format msgid "Info cache for instance %(instance_uuid)s could not be found." msgstr "%(instance_uuid)s 인스턴스에 대한 정보 캐시를 찾을 수 없습니다." #, python-format msgid "" "Instance %(instance)s and volume %(vol)s are not in the same " "availability_zone. Instance is in %(ins_zone)s. Volume is in %(vol_zone)s" msgstr "" "인스턴스 %(instance)s 및 볼륨 %(vol)s이(가) 같은 availability_zone에 있지 않" "습니다. 인스턴스는 %(ins_zone)s에 볼륨은 %(vol_zone)s에 있음" #, python-format msgid "Instance %(instance)s does not have a port with id %(port)s" msgstr "%(instance)s 인스턴스에 ID가 %(port)s인 포트가 없음" #, python-format msgid "Instance %(instance_id)s cannot be rescued: %(reason)s" msgstr "%(instance_id)s 인스턴스를 구조할 수 없습니다: %(reason)s" #, python-format msgid "Instance %(instance_id)s could not be found." msgstr "%(instance_id)s 인스턴스를 찾을 수 없습니다. " #, python-format msgid "Instance %(instance_id)s has no tag '%(tag)s'" msgstr "인스턴스 %(instance_id)s에 '%(tag)s' 태그가 없음" #, python-format msgid "Instance %(instance_id)s is not in rescue mode" msgstr "%(instance_id)s 인스턴스가 구조 모드에 있지 않습니다" #, python-format msgid "Instance %(instance_id)s is not ready" msgstr "%(instance_id)s 인스턴스가 준비 상태가 아닙니다" #, python-format msgid "Instance %(instance_id)s is not running." msgstr "%(instance_id)s 인스턴스가 실행 중이 아닙니다. " #, python-format msgid "Instance %(instance_id)s is unacceptable: %(reason)s" msgstr "%(instance_id)s 인스턴스는 허용할 수 없음: %(reason)s" #, python-format msgid "Instance %(instance_uuid)s does not specify a NUMA topology" msgstr "인스턴스 %(instance_uuid)s이(가) NUMA 토폴로지를 지정하지 않음" #, python-format msgid "Instance %(instance_uuid)s does not specify a migration context." msgstr "" "인스턴스 %(instance_uuid)s이(가) 마이그레이션 컨텍스트를 지정하지 않습니다. " #, python-format msgid "" "Instance %(instance_uuid)s in %(attr)s %(state)s. Cannot %(method)s while " "the instance is in this state." msgstr "" "%(instance_uuid)s 인스턴스가 %(attr)s %(state)s에 있습니다. 인스턴스가 이 상" "태에 있는 중에는 %(method)s할 수 없습니다." #, python-format msgid "Instance %(instance_uuid)s is locked" msgstr "%(instance_uuid)s 인스턴스가 잠겼음" #, python-format msgid "" "Instance %(instance_uuid)s requires config drive, but it does not exist." msgstr "%(instance_uuid)s 인스턴스에 구성 드라이브가 필요하지만 없습니다." #, python-format msgid "Instance %(name)s already exists." msgstr "%(name)s 인스턴스가 이미 존재합니다. " #, python-format msgid "Instance %(server_id)s is in an invalid state for '%(action)s'" msgstr "인스턴스 %(server_id)s의 상태가 '%(action)s'에 대해 올바르지 않음" #, python-format msgid "Instance %(uuid)s has no mapping to a cell." msgstr "%(uuid)s 인스턴스에 셀에 대한 맵핑이 없습니다." 
#, python-format msgid "Instance %s not found" msgstr "%s 인스턴스를 찾을 수 없음" #, python-format msgid "Instance %s provisioning was aborted" msgstr "인스턴스 %s 프로비저닝이 중단됨" msgid "Instance could not be found" msgstr "인스턴스를 찾을 수 없음" msgid "Instance disk to be encrypted but no context provided" msgstr "인스턴스 디스크를 암호화히자만 제공된 텍스트가 없음" msgid "Instance event failed" msgstr "인스턴스 이벤트에 실패" #, python-format msgid "Instance group %(group_uuid)s already exists." msgstr "%(group_uuid)s 인스턴스 그룹이 이미 존재합니다. " #, python-format msgid "Instance group %(group_uuid)s could not be found." msgstr "인스턴스 그룹 %(group_uuid)s을(를) 찾을 수 없습니다. " msgid "Instance has no source host" msgstr "인스턴스에 소스 호스트가 없음" msgid "Instance has not been resized." msgstr "인스턴스 크기가 조정되지 않았습니다. " #, python-format msgid "Instance hostname %(hostname)s is not a valid DNS name" msgstr "인스턴스 호스트 이름 %(hostname)s이(가) 올바른 DNS 이름이 아님" #, python-format msgid "Instance is already in Rescue Mode: %s" msgstr "인스턴스가 이미 복구 모드에 있음: %s" msgid "Instance is not a member of specified network" msgstr "인스턴스가 지정된 네트워크의 멤버가 아님" msgid "Instance network is not ready yet" msgstr "인스턴스 네트워크가 아직 준비 되지 않았습니다." #, python-format msgid "Instance rollback performed due to: %s" msgstr "인스턴스 롤백이 수행됨. 원인: %s" #, python-format msgid "" "Insufficient Space on Volume Group %(vg)s. Only %(free_space)db available, " "but %(size)d bytes required by volume %(lv)s." msgstr "" "%(vg)s 볼륨 그룹의 공간이 충분하지 않습니다. %(free_space)db만 사용할 수 있지" "만, %(lv)s 볼륨에 %(size)d바이트가 필요합니다." #, python-format msgid "Insufficient compute resources: %(reason)s." msgstr "Compute 리소스가 충분하지 않습니다: %(reason)s." #, python-format msgid "Insufficient free memory on compute node to start %(uuid)s." msgstr "" "%(uuid)s을(를) 시작하기에는 계산 노드의 사용 가능한 메모리가 부족합니다. " #, python-format msgid "Interface %(interface)s not found." msgstr "%(interface)s 인터페이스를 찾을 수 없습니다. " #, python-format msgid "Invalid Base 64 data for file %(path)s" msgstr "파일 %(path)s에 대해 올바르지 않은 Base 64 데이터" msgid "Invalid Connection Info" msgstr "올바르지 않은 연결 정보" #, python-format msgid "Invalid ID received %(id)s." msgstr "올바르지 않은 ID가 %(id)s을(를) 수신했습니다." #, python-format msgid "Invalid IP format %s" msgstr "올바르지 않은 IP 형식 %s" #, python-format msgid "Invalid IP protocol %(protocol)s." msgstr "올바르지 않은 IP 프로토콜 %(protocol)s." msgid "" "Invalid PCI Whitelist: The PCI whitelist can specify devname or address, but " "not both" msgstr "" "올바르지 않은 PCI 화이트리스트: PCI 화이트리스트는 디바이스 이름 또는 주소를 " "지정할 수 있지만 둘 다 지정할 수는 없음" #, python-format msgid "Invalid PCI alias definition: %(reason)s" msgstr "올바르지 않은 PCI 별명 정의: %(reason)s" #, python-format msgid "Invalid Regular Expression %s" msgstr "올바르지 않은 정규식 %s" #, python-format msgid "Invalid characters in hostname '%(hostname)s'" msgstr "호스트 이름 '%(hostname)s'의 올바르지 않은 문자" msgid "Invalid config_drive provided." msgstr "올바르지 않은 config_drive가 제공되었습니다. " #, python-format msgid "Invalid config_drive_format \"%s\"" msgstr "올바르지 않은 config_drive_format \"%s\"" #, python-format msgid "Invalid console type %(console_type)s" msgstr "올바르지 않은 콘솔 유형 %(console_type)s" #, python-format msgid "Invalid content type %(content_type)s." msgstr "올바르지 않은 컨텐츠 유형 %(content_type)s." #, python-format msgid "Invalid datetime string: %(reason)s" msgstr "올바르지 않은 Datetime 문자열: %(reason)s" msgid "Invalid device UUID." msgstr "장치 UUID가 올바르지 않습니다." 
#, python-format msgid "Invalid entry: '%s'" msgstr "올바르지 않은 항목: '%s'" #, python-format msgid "Invalid entry: '%s'; Expecting dict" msgstr "올바르지 않은 항목: '%s', 사전 예상" #, python-format msgid "Invalid entry: '%s'; Expecting list or dict" msgstr "올바르지 않은 항목: '%s', 목록 또는 사전 예상" #, python-format msgid "Invalid exclusion expression %r" msgstr "올바르지 않은 제외 표현식 %r" #, python-format msgid "Invalid image format '%(format)s'" msgstr "올바르지 않은 이미지 형식 '%(format)s'" #, python-format msgid "Invalid image href %(image_href)s." msgstr "올바르지 않은 이미지 href %(image_href)s." #, python-format msgid "Invalid inclusion expression %r" msgstr "올바르지 않은 포함 표현식 %r" #, python-format msgid "" "Invalid input for field/attribute %(path)s. Value: %(value)s. %(message)s" msgstr "" "필드/속성 %(path)s에 대한 올바르지 않은 입력. 값: %(value)s. %(message)s" #, python-format msgid "Invalid input received: %(reason)s" msgstr "올바르지 않은 입력을 받았습니다: %(reason)s" msgid "Invalid instance image." msgstr "올바르지 않은 인스턴스 이미지" #, python-format msgid "Invalid is_public filter [%s]" msgstr "올바르지 않은 is_public 필터 [%s]" msgid "Invalid key_name provided." msgstr "올바르지 않은 key_name이 제공되었습니다. " #, python-format msgid "Invalid memory page size '%(pagesize)s'" msgstr "올바르지 않은 메모리 페이지 크기 '%(pagesize)s'" msgid "Invalid metadata key" msgstr "올바르지 않은 메타데이터 키" #, python-format msgid "Invalid metadata size: %(reason)s" msgstr "올바르지 않은 메타데이터 크기: %(reason)s" #, python-format msgid "Invalid metadata: %(reason)s" msgstr "올바르지 않은 메타데이터: %(reason)s" #, python-format msgid "Invalid minDisk filter [%s]" msgstr "올바르지 않은 minDisk 필터 [%s]" #, python-format msgid "Invalid minRam filter [%s]" msgstr "올바르지 않은 minRam 필터 [%s]" #, python-format msgid "Invalid port range %(from_port)s:%(to_port)s. %(msg)s" msgstr "올바르지 않은 포트 범위 %(from_port)s:%(to_port)s. %(msg)s" msgid "Invalid proxy request signature." msgstr "올바르지 않은 프록시 요청 서명입니다. " #, python-format msgid "Invalid range expression %r" msgstr "올바르지 않은 범위 표현식 %r" msgid "Invalid service catalog json." msgstr "올바르지 않은 서비스 카탈로그 json입니다. " msgid "Invalid start time. The start time cannot occur after the end time." msgstr "" "시작 시간이 올바르지 않습니다. 종료 시간 후에는 시작 시간이 올 수 없습니다." msgid "Invalid state of instance files on shared storage" msgstr "공유 스토리지에서 인스턴스 파일의 올바르지 않은 상태" #, python-format msgid "Invalid timestamp for date %s" msgstr "날짜 %s에 대한 올바르지 않은 시간소인" #, python-format msgid "Invalid usage_type: %s" msgstr "올바르지 않은 usage_type: %s" #, python-format msgid "Invalid value for Config Drive option: %(option)s" msgstr "구성 드라이브 옵션에 대해 올바르지 않은 값: %(option)s" #, python-format msgid "Invalid virtual interface address %s in request" msgstr "요청에서 올바르지 않은 가상 인터페이스 주소 %s" #, python-format msgid "Invalid volume access mode: %(access_mode)s" msgstr "올바르지 않은 볼륨 접근 모드: %(access_mode)s" #, python-format msgid "Invalid volume: %(reason)s" msgstr "올바르지 않은 볼륨: %(reason)s" msgid "Invalid volume_size." msgstr "volume_size가 올바르지 않습니다." #, python-format msgid "Ironic node uuid not supplied to driver for instance %s." msgstr "아이로닉 노드 uuid가 인스턴스 %s의 드라이버에 제공되지 않습니다." #, python-format msgid "" "It is not allowed to create an interface on external network %(network_uuid)s" msgstr "외부 네트워크 %(network_uuid)s에 인터페이스를 작성할 수 없음" #, python-format msgid "" "Kernel/Ramdisk image is too large: %(vdi_size)d bytes, max %(max_size)d bytes" msgstr "" "커널/램디스크 이미지가 너무 큼: %(vdi_size)d 바이트, 최대 %(max_size)d 바이트" msgid "" "Key Names can only contain alphanumeric characters, periods, dashes, " "underscores, colons and spaces." msgstr "" "키 이름은 영숫자 문자, 마침표, 대시, 밑줄, 콜론, 공백만 포함할 수 있습니다." 
#, python-format msgid "Key manager error: %(reason)s" msgstr "키 관리자 오류: %(reason)s" #, python-format msgid "Key pair '%(key_name)s' already exists." msgstr "'%(key_name)s' 키 쌍이 이미 존재합니다. " #, python-format msgid "Keypair %(name)s not found for user %(user_id)s" msgstr "%(user_id)s 사용자에 대한 키 쌍 %(name)s을(를) 찾을 수 없음" #, python-format msgid "Keypair data is invalid: %(reason)s" msgstr "키 쌍 데이터가 올바르지 않습니다: %(reason)s" msgid "Keypair name contains unsafe characters" msgstr "키 쌍 이름에 안전하지 않은 문자가 들어있음" msgid "Keypair name must be string and between 1 and 255 characters long" msgstr "키 쌍 이름은 문자열이고 길이가 1 - 255자 범위에 속해야 함" msgid "Libguestfs does not have permission to read host kernel." msgstr "Libguestfs에게는 커널 호스트를 읽어올 수 있는 권한이 없습니다" msgid "Limits only supported from vCenter 6.0 and above" msgstr "vCenter 6.0 이상에서만 지원되는 한계" #, python-format msgid "Live migration %(id)s for server %(uuid)s is not in progress." msgstr "%(uuid)s 서버의 라이브 마이그레이션 %(id)s이(가) 진행 중이 아닙니다." #, python-format msgid "Malformed message body: %(reason)s" msgstr "잘못된 메시지 본문: %(reason)s" #, python-format msgid "" "Malformed request URL: URL's project_id '%(project_id)s' doesn't match " "Context's project_id '%(context_project_id)s'" msgstr "" "잘못 구성된 요청 URL: URL의 프로젝트 ID '%(project_id)s'이(가)컨텍스트의 프로" "젝트 ID '%(context_project_id)s'과(와) 일치하지 않습니다. " msgid "Malformed request body" msgstr "형식이 틀린 요청 본문" msgid "Mapping image to local is not supported." msgstr "로컬에 매핑된 이미지는 지원하지 않습니다." #, python-format msgid "Marker %(marker)s could not be found." msgstr "%(marker)s 마커를 찾을 수 없습니다. " msgid "Maximum number of floating IPs exceeded" msgstr "Floating IP의 최대수 초과" msgid "Maximum number of key pairs exceeded" msgstr "키 쌍의 최대 수 초과" #, python-format msgid "Maximum number of metadata items exceeds %(allowed)d" msgstr "메타데이터의 최대 수가 %(allowed)d을(를) 초과함" msgid "Maximum number of ports exceeded" msgstr "최대 포트 수를 초과함" msgid "Maximum number of security groups or rules exceeded" msgstr "보안 그룹 또는 규칙의 최대 수 초과" #, python-format msgid "Maximum number of serial port exceeds %(allowed)d for %(virt_type)s" msgstr "" " 시리얼 포트에 대한 최대 허용 개수가 %(allowed)d for %(virt_type)s 를 초과했" "습니다." msgid "Metadata item was not found" msgstr "메타데이터 항목이 없음" msgid "Metadata property key greater than 255 characters" msgstr "메타데이터 특성 키가 255자보다 큼" msgid "Metadata property value greater than 255 characters" msgstr "메타데이터 특성 값이 255자보다 큼" msgid "Metadata type should be dict." msgstr "메타데이터 유형은 dict여야 합니다." #, python-format msgid "" "Metric %(name)s could not be found on the compute host node %(host)s." "%(node)s." msgstr "" "메트릭 %(name)s을(를) 계산 호스트 노드 %(host)s.%(node)s.에서 찾을 수 없습니" "다." msgid "Migrate Receive failed" msgstr "마이그레이션 수신 실패" msgid "Migrate Send failed" msgstr "마이그레이션 전송 실패" #, python-format msgid "Migration %(id)s for server %(uuid)s is not live-migration." msgstr "%(uuid)s 서버의 %(id)s 마이그레이션이 라이브 마이그레이션이 아닙니다." #, python-format msgid "Migration %(migration_id)s could not be found." msgstr "%(migration_id)s 마이그레이션을 찾을 수 없습니다. " #, python-format msgid "Migration %(migration_id)s not found for instance %(instance_id)s" msgstr "" "%(instance_id)s 인스턴스의 %(migration_id)s 마이그레이션을 찾을 수 없음" #, python-format msgid "" "Migration %(migration_id)s state of instance %(instance_uuid)s is %(state)s. " "Cannot %(method)s while the migration is in this state." msgstr "" "%(instance_uuid)s 인스턴스의 %(migration_id)s 마이그레이션 상태는 %(state)s입" "니다. 마이그레이션이 이 상태인 경우 %(method)s을(를) 수행할 수 없습니다." 
#, python-format msgid "Migration error: %(reason)s" msgstr "마이그레이션 오류: %(reason)s" msgid "Migration is not supported for LVM backed instances" msgstr "LVM 지원 인스턴스에 마이그레이션이 지원되지 않음" #, python-format msgid "" "Migration not found for instance %(instance_id)s with status %(status)s." msgstr "" "%(status)s 상태를 갖는 %(instance_id)s 인스턴스에 대한 마이그레이션을 찾을 " "수 없습니다. " #, python-format msgid "Migration pre-check error: %(reason)s" msgstr "마이그레이션 사전 확인 오류: %(reason)s" #, python-format msgid "Migration select destinations error: %(reason)s" msgstr "마이그레이션 선택 대상 오류: %(reason)s" #, python-format msgid "Missing arguments: %s" msgstr "누락된 인수: %s" #, python-format msgid "Missing column %(table)s.%(column)s in shadow table" msgstr "새도우 테이블에 %(table)s.%(column)s 열이 누락됨" msgid "Missing device UUID." msgstr "장치 UUID가 비어 있습니다." msgid "Missing disabled reason field" msgstr "사용 안함 이유 필드가 누락됨" msgid "Missing forced_down field" msgstr "forced_down 필드 누락" msgid "Missing imageRef attribute" msgstr "imageRef 속성 누락" #, python-format msgid "Missing keys: %s" msgstr "누락 키: %s" #, python-format msgid "Missing parameter %s" msgstr "누락된 매개변수 %s" msgid "Missing parameter dict" msgstr "매개변수 사전 누락" #, python-format msgid "" "More than one instance is associated with fixed IP address '%(address)s'." msgstr "" "둘 이상의 인스턴스가 Fixed IP 주소 '%(address)s'과(와) 연관되어 있습니다. " msgid "" "More than one possible network found. Specify network ID(s) to select which " "one(s) to connect to." msgstr "" "사용 가능한 네트워크를 두 개 이상 발견했습니다. 네트워크 ID를 지정하여 연결" "할 항목을 선택하십시오." msgid "More than one swap drive requested." msgstr "둘 이상의 스왑 드라이브를 요청함 " #, python-format msgid "Multi-boot operating system found in %s" msgstr "%s에 다중 부트 운영 체제가 있음" msgid "Multiple X-Instance-ID headers found within request." msgstr "요청에 다중 X-Instance-ID 헤더가 있습니다. " msgid "Multiple X-Tenant-ID headers found within request." msgstr "요청 내에 다중 ltiple X-Tenant-ID 헤더가 있습니다." #, python-format msgid "Multiple floating IP pools matches found for name '%s'" msgstr "'%s' 이름에 대해 다중 부동 IP 풀 일치가 발견됨" #, python-format msgid "Multiple floating IPs are found for address %(address)s." msgstr "%(address)s 주소의 Floating IP가 여러 개 있습니다." msgid "" "Multiple hosts may be managed by the VMWare vCenter driver; therefore we do " "not return uptime for just one host." msgstr "" "VMWare vCenter 드라이버에서 다중 호스트를 관리할 수도 있습니다. 따라서 단지 " "하나의 호스트에 대해서만 가동 시간을 리턴하지는 않습니다." msgid "Multiple possible networks found, use a Network ID to be more specific." msgstr "" "가능한 여러 개의 네트워크가 발견됨. 좀 더 구체적인 네트워크 ID를 사용하십시" "오. " #, python-format msgid "" "Multiple security groups found matching '%s'. Use an ID to be more specific." msgstr "" "여러 개의 보안 그룹에서 일치하는 '%s'을(를) 찾았습니다. 좀 더 구체적인 ID를 " "사용하십시오." msgid "Must input network_id when request IP address" msgstr "IP 주소 요청 시 network_id를 입력해야 함" msgid "Must not input both network_id and port_id" msgstr "network_id 및 port_id 둘 다 입력하지 않아야 함" msgid "" "Must specify connection_url, connection_username (optionally), and " "connection_password to use compute_driver=xenapi.XenAPIDriver" msgstr "" "connection_url, connection_username (optionally), connection_password를 지정" "해야 compute_driver=xenapi.XenAPIDriver를 사용할 수 있음" msgid "" "Must specify host_ip, host_username and host_password to use vmwareapi." "VMwareVCDriver" msgstr "" "host_ip, host_username 및 host_password를 지정해야 vmwareapi.VMwareVCDriver" "를 사용할 수 있음" msgid "Must supply a positive value for max_number" msgstr "max_number에 양의 값을 제공해야 함" msgid "Must supply a positive value for max_rows" msgstr "최대 행 값으로 양수를 제공해야 합니다. 
" #, python-format msgid "Network %(network_id)s could not be found." msgstr "%(network_id)s 네트워크를 찾을 수 없습니다. " #, python-format msgid "" "Network %(network_uuid)s requires a subnet in order to boot instances on." msgstr "인스턴스를 부팅하려면 %(network_uuid)s 네트워크에 서브넷이 필요합니다." #, python-format msgid "Network could not be found for bridge %(bridge)s" msgstr "%(bridge)s 브릿지에 대한 네트워크를 찾을 수 없음" #, python-format msgid "Network could not be found for instance %(instance_id)s." msgstr "%(instance_id)s 인스턴스에 대한 네트워크를 찾을 수 없습니다. " msgid "Network not found" msgstr "네트워크를 찾을 수 없음" msgid "" "Network requires port_security_enabled and subnet associated in order to " "apply security groups." msgstr "" "보안 그룹을 적용하기 위해서는 네트워크에 port_security_enabled 및 서브넷이 연" "관되어 있어야 합니다." msgid "New volume must be detached in order to swap." msgstr "스왑하려면 새 볼륨을 분리해야 합니다." msgid "New volume must be the same size or larger." msgstr "새 볼륨은 동일한 크기이거나 이상이어야 합니다." #, python-format msgid "No Block Device Mapping with id %(id)s." msgstr "ID가 %(id)s인 블록 디바이스 맵핑이 없습니다. " msgid "No Unique Match Found." msgstr "고유한 일치점을 찾지 못했습니다." #, python-format msgid "No agent-build associated with id %(id)s." msgstr "ID %(id)s과(와) 연관된 에이전트 빌드가 없습니다. " msgid "No compute host specified" msgstr "지정된 계산 호스트가 없음" #, python-format msgid "No configuration information found for operating system %(os_name)s" msgstr "%(os_name)s 운영 체제의 구성 정보를 찾을 수 없음" #, python-format msgid "No device with MAC address %s exists on the VM" msgstr "VM에 MAC 주소가 %s인 디바이스가 없음" #, python-format msgid "No device with interface-id %s exists on VM" msgstr "VM에 인터페이스 ID가 %s인 디바이스가 없음" #, python-format msgid "No disk at %(location)s" msgstr "%(location)s에 디스크가 없음" #, python-format msgid "No fixed IP addresses available for network: %(net)s" msgstr "네트워크에 사용 가능한 고정 IP 주소가 없음: %(net)s" msgid "No fixed IPs associated to instance" msgstr "인스턴스와 연관된 Fixed IP가 없음" msgid "No free nbd devices" msgstr "여유 nbd 디바이스 없음" msgid "No host available on cluster" msgstr "클러스터에서 사용 가능한 호스트가 없음" msgid "No hosts found to map to cell, exiting." msgstr "셀에 맵핑할 호스트를 찾지 못하여 종료합니다." #, python-format msgid "No hypervisor matching '%s' could be found." msgstr "'%s'과(와) 일치하는 하이퍼바이저를 찾을 수 없습니다. " msgid "No image locations are accessible" msgstr "액세스 가능한 이미지 위치가 없음" #, python-format msgid "" "No live migration URI configured and no default available for \"%(virt_type)s" "\" hypervisor virtualization type." msgstr "" "라이브 마이그레이션 URI가 구성되지 않았으며 \"%(virt_type)s\" 하이퍼바이저 가" "상화 유형에 사용 가능한 기본값이 없습니다." msgid "No more floating IPs available." msgstr "더 이상 사용할 수 있는 Floating IP가 없습니다." #, python-format msgid "No more floating IPs in pool %s." msgstr "%s 풀에 추가 Floating IP가 없습니다." #, python-format msgid "No mount points found in %(root)s of %(image)s" msgstr "%(image)s의 %(root)s에 마운트 지점이 없음" #, python-format msgid "No operating system found in %s" msgstr "%s에 운영 체제가 없음" #, python-format msgid "No primary VDI found for %s" msgstr "%s에 대한 1차 VDI를 찾을 수 없음" msgid "No root disk defined." msgstr "루트 디스크가 정의되지 않았습니다." #, python-format msgid "" "No specific network was requested and none are available for project " "'%(project_id)s'." msgstr "" "'%(project_id)s' 프로젝트에 특정 네트워크가 요청되었지만, 사용할 수 있는 네트" "워크가 없습니다." msgid "No suitable network for migrate" msgstr "마이그레이션을 위한 지속 가능한 네트워크 없음" msgid "No valid host found for cold migrate" msgstr "콜드 마이그레이션에 대한 유효한 호스트를 찾을 수 없음" msgid "No valid host found for resize" msgstr "크기 조정할 올바른 호스트를 찾을 수 없음" #, python-format msgid "No valid host was found. %(reason)s" msgstr "유효한 호스트가 없습니다. 
%(reason)s" #, python-format msgid "No volume Block Device Mapping at path: %(path)s" msgstr "경로 %(path)s에 볼륨 블록 디바이스 맵핑이 없음" #, python-format msgid "No volume Block Device Mapping with id %(volume_id)s." msgstr "ID가 %(volume_id)s인 볼륨 블록 디바이스 맵핑이 없습니다." #, python-format msgid "Node %s could not be found." msgstr "%s 노드를 찾을 수 없습니다. " #, python-format msgid "Not able to acquire a free port for %(host)s" msgstr "%(host)s에 사용 가능한 포트를 획득할 수 없음" #, python-format msgid "Not able to bind %(host)s:%(port)d, %(error)s" msgstr "%(host)s:%(port)d, %(error)s을(를) 바인드할 수 없음" #, python-format msgid "" "Not all Virtual Functions of PF %(compute_node_id)s:%(address)s are free." msgstr "" "PF %(compute_node_id)s:%(address)s의 일부 가상 기능은 사용할 수 없습니다." msgid "Not an rbd snapshot" msgstr "rbd 스냅샷이 아님" #, python-format msgid "Not authorized for image %(image_id)s." msgstr "%(image_id)s 이미지에 대한 권한이 없습니다. " msgid "Not authorized." msgstr "권한이 없습니다. " msgid "Not enough parameters to build a valid rule." msgstr "유효한 규칙을 빌드하기엔 매개변수가 부족합니다. " msgid "Not implemented on Windows" msgstr "Windows에서 구현되지 않음" msgid "Not stored in rbd" msgstr "rbd에 저장되지 않음" msgid "Nothing was archived." msgstr "보관된 사항이 없습니다." #, python-format msgid "Nova does not support Cinder API version %(version)s" msgstr "Nova는 Cinder API 버전 %(version)s 를 지원하지 않습니다" #, python-format msgid "Nova requires libvirt version %s or greater." msgstr "Nova에 libvirt 버전 %s 이상이 필요합니다." msgid "Number of Rows Archived" msgstr "보관된 행 수" #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "%(action)s 오브젝트 조치가 실패함. 이유: %(reason)s" msgid "Old volume is attached to a different instance." msgstr "이전 볼륨이 다른 인스턴스에 접속되어 있습니다." #, python-format msgid "One or more hosts already in availability zone(s) %s" msgstr "하나 이상의 호스트가 이미 가용성 구역 %s에 있음" msgid "Only administrators may list deleted instances" msgstr "관리자만 삭제된 인스턴스를 나열할 수 있음" #, python-format msgid "" "Only file-based SRs (ext/NFS) are supported by this feature. SR %(uuid)s is " "of type %(type)s" msgstr "" "이 기능에서는 파일 기반 SR(ext/NFS)만 지원합니다. SR %(uuid)s이(가) %(type)s " "유형입니다." msgid "Origin header does not match this host." msgstr "원본 헤더가 이 호스트와 일치하지 않습니다." msgid "Origin header not valid." msgstr "원본 헤더가 올바르지 않습니다." msgid "Origin header protocol does not match this host." msgstr "원본 헤더 프로토콜이 이 호스트와 일치하지 않습니다." #, python-format msgid "PCI Device %(node_id)s:%(address)s not found." msgstr "PCI 디바이스 %(node_id)s:%(address)s을(를) 찾을 수 없음" #, python-format msgid "PCI alias %(alias)s is not defined" msgstr "PCI 별명 %(alias)s이(가) 정의되지 않음 " #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is %(status)s instead of " "%(hopestatus)s" msgstr "" "PCI 디바이스 %(compute_node_id)s:%(address)s이(가) %(status)s 상태임" "(%(hopestatus)s 대신)" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is owned by %(owner)s instead of " "%(hopeowner)s" msgstr "" "PCI 디바이스 %(compute_node_id)s:%(address)s을(를) %(owner)s이(가) 소유함" "(%(hopeowner)s 대신)" #, python-format msgid "PCI device %(id)s not found" msgstr "PCI 디바이스 %(id)s을(를) 찾을 수 없음" #, python-format msgid "PCI device request %(requests)s failed" msgstr "PCI 디바이스 요청 %(requests)s에 실패함" #, python-format msgid "PIF %s does not contain IP address" msgstr "PIF %s에 IP 주소가 없음" #, python-format msgid "Page size %(pagesize)s forbidden against '%(against)s'" msgstr "페이지 크기 %(pagesize)s이(가) '%(against)s'에 대해 금지됨" #, python-format msgid "Page size %(pagesize)s is not supported by the host." msgstr "호스트에서 페이지 크기 %(pagesize)s을(를) 지원하지 않습니다." 
#, python-format msgid "" "Parameters %(missing_params)s not present in vif_details for vif %(vif_id)s. " "Check your Neutron configuration to validate that the macvtap parameters are " "correct." msgstr "" "매개변수 %(missing_params)s이(가) vif %(vif_id)s에 대한 vif_details에 없습니" "다. Neutron 구성을 확인하여 macvtap 매개변수가 올바른지 유효성 검증하십시오. " #, python-format msgid "Path %s must be LVM logical volume" msgstr "경로 %s은(는) LVM 논리적 볼륨이어야 함" msgid "Paused" msgstr "정지함" msgid "Personality file limit exceeded" msgstr "특성 파일 한계 초과" #, python-format msgid "" "Physical Function %(compute_node_id)s:%(address)s, related to VF " "%(compute_node_id)s:%(vf_address)s is %(status)s instead of %(hopestatus)s" msgstr "" "VF %(compute_node_id)s:%(vf_address)s과(와) 관련된 실제 기능 " "%(compute_node_id)s:%(address)s은(는) %(hopestatus)s이(가) 아니라 %(status)s" "입니다." #, python-format msgid "Physical network is missing for network %(network_uuid)s" msgstr "%(network_uuid)s 네트워크의 실제 네트워크가 누락됨" #, python-format msgid "Policy doesn't allow %(action)s to be performed." msgstr "%(action)s 정책이 수행되도록 허용되지 않았습니다." #, python-format msgid "Port %(port_id)s is still in use." msgstr "%(port_id)s 포트가 아직 사용 중입니다. " #, python-format msgid "Port %(port_id)s not usable for instance %(instance)s." msgstr "%(port_id)s 포트를 %(instance)s 인스턴스에 사용할 수 없습니다. " #, python-format msgid "" "Port %(port_id)s not usable for instance %(instance)s. Value %(value)s " "assigned to dns_name attribute does not match instance's hostname " "%(hostname)s" msgstr "" "%(instance)s에 %(port_id)s 포트를 사용할 수 없습니다. dns_name 속성에 할당된 " "%(value)s 값은 인스턴스의 호스트 이름 %(hostname)s과(와) 일치하지 않습니다." #, python-format msgid "Port %(port_id)s requires a FixedIP in order to be used." msgstr "%(port_id)s 포트를 사용하려면 FixedIP가 필요합니다." #, python-format msgid "Port %s is not attached" msgstr "%s 포트가 접속되지 않음" #, python-format msgid "Port id %(port_id)s could not be found." msgstr "포트 ID %(port_id)s을(를) 찾을 수 없습니다." #, python-format msgid "Port update failed for port %(port_id)s: %(reason)s" msgstr "포트 %(port_id)s: %(reason)s 때문에 포트 업데이트가 실패했습니다." #, python-format msgid "Provided video model (%(model)s) is not supported." msgstr "제공된 비디오 모델(%(model)s)이 지원되지 않습니다." #, python-format msgid "Provided watchdog action (%(action)s) is not supported." msgstr "제공된 watchdog 조치(%(action)s)가 지원되지 않습니다." msgid "QEMU guest agent is not enabled" msgstr "QEMU 게스트 에이전트가 사용되지 않음" #, python-format msgid "Quiescing is not supported in instance %(instance_id)s" msgstr "인스턴스 %(instance_id)s에서 Quiesce가 지원되지 않음" #, python-format msgid "Quota class %(class_name)s could not be found." msgstr "%(class_name)s 할당량 클래스를 찾을 수 없습니다. " msgid "Quota could not be found" msgstr "할당량을 찾을 수 없음" #, python-format msgid "" "Quota exceeded for %(overs)s: Requested %(req)s, but already used %(used)s " "of %(allowed)s %(overs)s" msgstr "" "%(overs)s에 대한 할당량 초과: %(req)s을(를) 요청했지만 이미 %(allowed)s " "%(overs)s 중 %(used)s을(를) 사용했습니다. " #, python-format msgid "Quota exceeded for resources: %(overs)s" msgstr "자원에 대한 할당량 초과: %(overs)s" msgid "Quota exceeded, too many key pairs." msgstr "할당량 초과. 키 쌍이 너무 많습니다. " msgid "Quota exceeded, too many server groups." msgstr "할당량 초과. 서버 그룹이 너무 많습니다. " msgid "Quota exceeded, too many servers in group" msgstr "할당량 초과. 그룹에 서버가 너무 많습니다. " #, python-format msgid "Quota exceeded: code=%(code)s" msgstr "할당량 초과: 코드=%(code)s" #, python-format msgid "Quota exists for project %(project_id)s, resource %(resource)s" msgstr "" "프로젝트 %(project_id)s, 자원 %(resource)s에 대한 할당량이 존재합니다. 
" #, python-format msgid "Quota for project %(project_id)s could not be found." msgstr "%(project_id)s 프로젝트에 대한 할당량을 찾을 수 없습니다. " #, python-format msgid "" "Quota for user %(user_id)s in project %(project_id)s could not be found." msgstr "" "%(project_id)s 프로젝트의 %(user_id)s 사용자에 대한 할당량을 찾을 수 없습니" "다." #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be greater than or equal to " "already used and reserved %(minimum)s." msgstr "" "%(resource)s의 할당량 한계 %(limit)s은(는) 이미 사용되고 예약된 %(minimum)s " "이상이어야 합니다." #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be less than or equal to " "%(maximum)s." msgstr "" "%(resource)s의 할당량 한계 %(limit)s은(는) %(maximum)s 이하여야 합니다." #, python-format msgid "Reached maximum number of retries trying to unplug VBD %s" msgstr "VBD %s을(를) 언플러그하려는 최대 재시도 횟수에 도달했음" msgid "" "Realtime policy needs vCPU(s) mask configured with at least 1 RT vCPU and 1 " "ordinary vCPU. See hw:cpu_realtime_mask or hw_cpu_realtime_mask" msgstr "" "실시간 정책에는 하나 이상의 1 RT vCPU와 1 일반 vCPU로 구성된 vCPU(s) 마스크" "가 필요합니다. hw:cpu_realtime_mask 또는 hw_cpu_realtime_mask를 참조하십시오." msgid "Request body and URI mismatch" msgstr "요청 본문 및 URI 불일치" msgid "Request is too large." msgstr "요청이 너무 큽니다. " #, python-format msgid "Request of image %(image_id)s got BadRequest response: %(response)s" msgstr "%(image_id)s 이미지 요청 결과 BadRequest 응답 발생: %(response)s" #, python-format msgid "RequestSpec not found for instance %(instance_uuid)s" msgstr "%(instance_uuid)s 인스턴스의 RequestSpec을 찾을 수 없음" msgid "Requested CPU control policy not supported by host" msgstr "요청된 CPU 제어 정책은 호스트에서 지원되지 않음" #, python-format msgid "" "Requested hardware '%(model)s' is not supported by the '%(virt)s' virt driver" msgstr "" "'%(virt)s' virt 드라이버가 요청된 하드웨어 '%(model)s'을(를) 지원하지 않음" #, python-format msgid "Requested image %(image)s has automatic disk resize disabled." msgstr "" "요청한 이미지 %(image)s에서 자동 디스크 크기 조정을 사용할 수 없습니다." msgid "" "Requested instance NUMA topology cannot fit the given host NUMA topology" msgstr "" "요청된 인스턴스 NUMA 토폴로지를 제공된 호스트 NUMA 토폴로지에 맞출 수 없음" msgid "" "Requested instance NUMA topology together with requested PCI devices cannot " "fit the given host NUMA topology" msgstr "" "요청된 인스턴스 NUMA 토폴로지와 요청된 PCI 디바이스를 함께 제공된 호스트 " "NUMA 토폴로지에 맞출 수 없음" #, python-format msgid "" "Requested vCPU limits %(sockets)d:%(cores)d:%(threads)d are impossible to " "satisfy for vcpus count %(vcpus)d" msgstr "" "요청된 vCPU 한계 %(sockets)d:%(cores)d:%(threads)d은(는) vcpus 개수 %(vcpus)d" "을(를) 충족시킬 수 없음" #, python-format msgid "Rescue device does not exist for instance %s" msgstr "%s 인스턴스에 대한 복구 디바이스가 없음" #, python-format msgid "Resize error: %(reason)s" msgstr "크기 조정 오류: %(reason)s" msgid "Resize to zero disk flavor is not allowed." msgstr "0 디스크 플레이버로의 크기 조정은 허용되지 않습니다." msgid "Resource could not be found." msgstr "자원을 찾을 수 없습니다. " msgid "Resource provider has allocations." msgstr "자원 제공자는 할당량을 갖고 있습니다." msgid "Resumed" msgstr "재시작함" #, python-format msgid "Root element name should be '%(name)s' not '%(tag)s'" msgstr "루트 요소 이름은 '%(tag)s'이(가) 아닌 '%(name)s'이어야 함" #, python-format msgid "Running batches of %i until complete" msgstr "완료될 때까지 %i 일괄처리 실행" #, python-format msgid "Scheduler Host Filter %(filter_name)s could not be found." msgstr "스케줄러 호스트 필터 %(filter_name)s을(를) 찾을 수 없습니다. 
" #, python-format msgid "Security group %(name)s is not found for project %(project)s" msgstr "프로젝트 %(project)s에 대해 보안 그룹 %(name)s을(를) 찾을 수 없음" #, python-format msgid "" "Security group %(security_group_id)s not found for project %(project_id)s." msgstr "" "%(project_id)s 프로젝트에 대한 %(security_group_id)s 보안 그룹을 찾을 수 없습" "니다. " #, python-format msgid "Security group %(security_group_id)s not found." msgstr "%(security_group_id)s 보안 그룹을 찾을 수 없습니다. " #, python-format msgid "" "Security group %(security_group_name)s already exists for project " "%(project_id)s." msgstr "보안 그룹 %(security_group_name)s이(가) 프로젝트%(project_id)s." #, python-format msgid "" "Security group %(security_group_name)s not associated with the instance " "%(instance)s" msgstr "" "보안 그룹 %(security_group_name)s이(가) 인스턴스와 연관되지 않음%(instance)s " "인스턴스와 연관되지 않음" msgid "Security group id should be uuid" msgstr "보안 그룹 ID는 uuid여야 함" msgid "Security group name cannot be empty" msgstr "보안 그룹 이름은 공백일 수 없음" msgid "Security group not specified" msgstr "보안 그룹이 지정되지 않음" #, python-format msgid "Server %(server_id)s has no tag '%(tag)s'" msgstr "서버 %(server_id)s는 '%(tag)s' 태그가 없습니다." #, python-format msgid "Server disk was unable to be resized because: %(reason)s" msgstr "서버 디스크의 크기를 재조정할 수 없음. 이유: %(reason)s" msgid "Server does not exist" msgstr "서버가 없음" #, python-format msgid "ServerGroup policy is not supported: %(reason)s" msgstr "ServerGroup 정책이 지원되지 않음: %(reason)s" msgid "ServerGroupAffinityFilter not configured" msgstr "ServerGroupAffinityFilter가 구성되지 않았음" msgid "ServerGroupAntiAffinityFilter not configured" msgstr "ServerGroupAntiAffinityFilter가 구성되지 않았음" msgid "ServerGroupSoftAffinityWeigher not configured" msgstr "ServerGroupSoftAffinityWeigher이 구성되지 않음" msgid "ServerGroupSoftAntiAffinityWeigher not configured" msgstr "ServerGroupSoftAntiAffinityWeigher이 구성되지 않음" #, python-format msgid "Service %(service_id)s could not be found." msgstr "%(service_id)s 서비스를 찾을 수 없습니다. " #, python-format msgid "Service %s not found." msgstr "%s 서비스를 찾을 수 없음" msgid "Service is unavailable at this time." msgstr "지금 서비스를 사용할 수 없습니다." #, python-format msgid "Service with host %(host)s binary %(binary)s exists." msgstr "호스트 %(host)s 바이너리 %(binary)s인 서비스가 존재합니다. " #, python-format msgid "Service with host %(host)s topic %(topic)s exists." msgstr "호스트 %(host)s 주제 %(topic)s인 서비스가 존재합니다. " msgid "Set admin password is not supported" msgstr "설정된 관리 비밀번호가 지원되지 않음" #, python-format msgid "Shadow table with name %(name)s already exists." msgstr "이름이 %(name)s인 새도우 테이블이 이미 존재합니다. " #, python-format msgid "Share '%s' is not supported" msgstr "공유 '%s'은(는) 지원되지 않음" #, python-format msgid "Share level '%s' cannot have share configured" msgstr "공유 레벨 '%s'에는 공유를 구성할 수 없음" msgid "" "Shrinking the filesystem down with resize2fs has failed, please check if you " "have enough free space on your disk." msgstr "" "resize2fs를 사용한 파일 시스템 축소에 실패했습니다. 사용자의 디스크에충분한 " "여유 공간이 있는지 확인하십시오." #, python-format msgid "Snapshot %(snapshot_id)s could not be found." msgstr "%(snapshot_id)s 스냅샷을 찾을 수 없습니다. " msgid "Some required fields are missing" msgstr "일부 필수 필드가 비어있습니다." #, python-format msgid "" "Something went wrong when deleting a volume snapshot: rebasing a " "%(protocol)s network disk using qemu-img has not been fully tested" msgstr "" "볼륨 스냅샷을 삭제할 때 문제 발생: qemu-img를 사용한 %(protocol)s 네트워크 디" "스크의 리베이스가 완전히 테스트되지 않음" msgid "Sort direction size exceeds sort key size" msgstr "정렬 방향 크기가 정렬 키 크기를 초과함" msgid "Sort key supplied was not valid." msgstr "제공되는 정렬 키가 올바르지 않습니다. 
" msgid "Specified fixed address not assigned to instance" msgstr "지정된 고정 주소가 인스턴스에 연관되지 않음" msgid "Specify `table_name` or `table` param" msgstr "`table_name` 또는 `table` 매개변수를 지정하십시오. " msgid "Specify only one param `table_name` `table`" msgstr "하나의 매개변수 `table_name` 또는 `table`만 지정하십시오." msgid "Started" msgstr "작동함" msgid "Stopped" msgstr "중지됨" #, python-format msgid "Storage error: %(reason)s" msgstr "스토리지 오류: %(reason)s" #, python-format msgid "Storage policy %s did not match any datastores" msgstr "스토리지 정책 %s이(가) 데이터 저장소와 일치하지 않음" msgid "Success" msgstr "완료" msgid "Suspended" msgstr "Suspended" msgid "Swap drive requested is larger than instance type allows." msgstr "요청한 스왑 드라이브가 허용되는 인스턴스 유형보다 큽니다." msgid "Table" msgstr "테이블" #, python-format msgid "Task %(task_name)s is already running on host %(host)s" msgstr "%(task_name)s 태스크가 이미 %(host)s 호스트에서 실행 중임" #, python-format msgid "Task %(task_name)s is not running on host %(host)s" msgstr "%(task_name)s 태스크가 %(host)s 호스트에서 실행 중이 아님" #, python-format msgid "The PCI address %(address)s has an incorrect format." msgstr "PCI 주소 %(address)s에 올바르지 않은 형식이 있습니다." msgid "The backlog must be more than 0" msgstr "백로그는 0보다 커야 함" #, python-format msgid "The console port range %(min_port)d-%(max_port)d is exhausted." msgstr "콘솔 포트 범위 %(min_port)d-%(max_port)d이(가) 소진되었습니다." msgid "The created instance's disk would be too small." msgstr "작성된 인스턴스의 디스크가 너무 작습니다. " msgid "The current driver does not support preserving ephemeral partitions." msgstr "현재 드라이버는 임시 파티션 유지를 지원하지 않습니다." msgid "The default PBM policy doesn't exist on the backend." msgstr "백엔드에 기본 PBM 정책이 없습니다." #, python-format msgid "" "The fixed IP associated with port %(port_id)s is not compatible with the " "host." msgstr "" " %(port_id)s와 연관되어 있는 고정 IP주소는 해당 호스트와 호환되지 않습니다." msgid "The floating IP request failed with a BadRequest" msgstr "부동 IP 요청이 실패하여 BadRequest가 생성됨" msgid "" "The instance requires a newer hypervisor version than has been provided." msgstr "인스턴스는 제공된 것보다 최신 하이퍼바이저 버전이 필요합니다. " #, python-format msgid "The number of defined ports: %(ports)d is over the limit: %(quota)d" msgstr "정의된 포트 수 %(ports)d이(가) 한계를 초과함: %(quota)d" msgid "The only partition should be partition 1." msgstr "유일한 파티션은 파티션 1이어야 합니다." #, python-format msgid "" "The property 'numa_nodes' cannot be '%(nodes)s'. It must be a number greater " "than 0" msgstr "" "속성 'numa_nodes'가 '%(nodes)s'이 될 순 없습니다. 반드시 숫자 0보다 커야 합" "니다. " #, python-format msgid "The provided RNG device path: (%(path)s) is not present on the host." msgstr "제공된 RNG 디바이스 경로: (%(path)s)이(가) 호스트에 없습니다." msgid "The request body can't be empty" msgstr "요청 분문은 비어 있을 수 없음" msgid "The request is invalid." msgstr "요청이 올바르지 않습니다." #, python-format msgid "" "The requested amount of video memory %(req_vram)d is higher than the maximum " "allowed by flavor %(max_vram)d." msgstr "" "요청된 양의 비디오 메모리 %(req_vram)d이(가) %(max_vram)d 플레이버에 의해 허" "용된 최대값보다 높습니다." msgid "The requested availability zone is not available" msgstr "요청한 가용성 구역을 사용할 수 없음" msgid "The requested console type details are not accessible" msgstr "요청된 콘솔 유형 세부사항에 액세스할 수 없습니다." msgid "The requested functionality is not supported." msgstr "요청된 기능이 지원되지 않습니다." #, python-format msgid "The specified cluster '%s' was not found in vCenter" msgstr "지정된 클러스터 '%s'을(를) vCenter에서 찾을 수 없음" #, python-format msgid "The supplied device path (%(path)s) is in use." msgstr "제공된 디바이스 경로(%(path)s)가 사용 중입니다. " #, python-format msgid "The supplied device path (%(path)s) is invalid." 
msgstr "제공된 디바이스 경로(%(path)s)가 올바르지 않습니다. " #, python-format msgid "" "The supplied disk path (%(path)s) already exists, it is expected not to " "exist." msgstr "제공된 디스크 경로(%(path)s)가 이미 존재합니다. 없어야합니다." msgid "The supplied hypervisor type of is invalid." msgstr "제공된 하이퍼바이저 유형이 올바르지 않습니다. " msgid "The target host can't be the same one." msgstr "대상 호스트가 동일한 것이어서는 안됩니다." #, python-format msgid "The token '%(token)s' is invalid or has expired" msgstr "토큰 '%(token)s'이(가) 올바르지 않거나 만료됨" #, python-format msgid "" "The volume cannot be assigned the same device name as the root device %s" msgstr "볼륨에 루트 디바이스 %s과(와) 같은 디바이스 이름을 지정할 수 없음" #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. Run this command again with the --delete " "option after you have backed up any necessary data." msgstr "" "uuid 또는 instance_uuid 열이 널인 '%(table_name)s' 테이블에 %(records)d개의 " "레코드가 있습니다. 필요한 데이터를 백업한 후에 --delete 옵션을 사용하여 이 명" "령을 다시 실행하십시오." #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. These must be manually cleaned up before " "the migration will pass. Consider running the 'nova-manage db " "null_instance_uuid_scan' command." msgstr "" "uuid 또는 instance_uuid 열이 널인 '%(table_name)s' 테이블에 %(records)d개의 " "레코드가 있습니다. 이러한 레코드는 마이그레이션이 지나기 전에 수동으로 정리해" "야 합니다. 'nova-manage db null_instance_uuid_scan' 명령을 사용해 보십시오. " msgid "There are not enough hosts available." msgstr "사용 가능한 호스트가 부족합니다." #, python-format msgid "" "There are still %(count)i unmigrated flavor records. Migration cannot " "continue until all instance flavor records have been migrated to the new " "format. Please run `nova-manage db migrate_flavor_data' first." msgstr "" "여전히 %(count)i개의 플레이버 레코드가 마이그레이션되지 않았습니다. 모든 인스" "턴스 플레이버 레코드가 새로운 형식으로 마이그레이션될 때까지 마이그레이션을 " "계속할 수 없습니다. 먼저 `nova-manage db migrate_flavor_data'를 실행하십시" "오. " #, python-format msgid "There is no such action: %s" msgstr "해당 조치가 없음: %s" msgid "There were no records found where instance_uuid was NULL." msgstr "instance_uuid가 널인 레코드가 없습니다." #, python-format msgid "" "This compute node's hypervisor is older than the minimum supported version: " "%(version)s." msgstr "" "이 컴퓨트 노드의 하이퍼바이저가 지원되는 최소 버전 %(version)s보다 이전입니" "다." msgid "This domU must be running on the host specified by connection_url" msgstr "이 domU가 connection_url로 지정되는 호스트에서 실행 중이어야 함" msgid "" "This method needs to be called with either networks=None and port_ids=None " "or port_ids and networks as not none." msgstr "" "이 메소드는 networks=None 및 port_ids=None 또는 port_ids와 networks를 none으" "로 지정하지 않은 상태로 호출해야 합니다." #, python-format msgid "This rule already exists in group %s" msgstr "이 규칙이 이미 %s 그룹에 존재함" #, python-format msgid "" "This service is older (v%(thisver)i) than the minimum (v%(minver)i) version " "of the rest of the deployment. Unable to continue." msgstr "" "이 서비스는 나머지 배치의 최소 (v%(minver)i) 버전보다 이전(v%(thisver)i)입니" "다." 
#, python-format msgid "Timeout waiting for device %s to be created" msgstr "%s 디바이스가 작성되기를 기다리다가 제한시간 초과함" msgid "Timeout waiting for response from cell" msgstr "셀의 응답을 대시하는 중에 제한시간 초과" #, python-format msgid "Timeout while checking if we can live migrate to host: %s" msgstr "" "호스트로 라이브 마이그레이션할 수 있는지 확인하는 중에 제한시간 초과 발생: %s" msgid "To and From ports must be integers" msgstr "발신 및 수신 포트는 정수여야 함" msgid "Token not found" msgstr "토큰을 찾을 수 없음" msgid "Triggering crash dump is not supported" msgstr "충돌 덤프 트리거가 지원되지 않음" msgid "Type and Code must be integers for ICMP protocol type" msgstr "ICMP 프로토콜 유형의 경우 유형 및 코드는 정수여야 함" msgid "UEFI is not supported" msgstr "UEFI가 지원되지 않음" #, python-format msgid "" "Unable to associate floating IP %(address)s to any fixed IPs for instance " "%(id)s. Instance has no fixed IPv4 addresses to associate." msgstr "" "Floating IP %(address)s을(를) %(id)s 인스턴스의 Fixed IP와 연관시킬 수 없습니" "다. 인스턴스에 연관시킬 Fixed IPv4 주소가 없습니다." #, python-format msgid "" "Unable to associate floating IP %(address)s to fixed IP %(fixed_address)s " "for instance %(id)s. Error: %(error)s" msgstr "" "Floating IP %(address)s을(를) 인스턴스 %(id)s의 Fixed IP %(fixed_address)s에 " "연관시킬 수 없습니다. 오류: %(error)s" msgid "Unable to authenticate Ironic client." msgstr "아이로닉 클라이언트를 인증할 수 없습니다." #, python-format msgid "Unable to automatically allocate a network for project %(project_id)s" msgstr "%(project_id)s 때문에 자동으로 네트워크를 할당할 수 없습니다." #, python-format msgid "Unable to contact guest agent. The following call timed out: %(method)s" msgstr "" "게스트 에이전트에 접속할 수 없음. 다음 호출의 제한시간이 초과됨: %(method)s" #, python-format msgid "Unable to convert image to %(format)s: %(exp)s" msgstr "이미지를 %(format)s(으)로 변환할 수 없음: %(exp)s" #, python-format msgid "Unable to convert image to raw: %(exp)s" msgstr "이미지를 원시로 변환할 수 없음: %(exp)s" #, python-format msgid "Unable to destroy VBD %s" msgstr "VBD %s을(를) 영구 삭제할 수 없음" #, python-format msgid "Unable to destroy VDI %s" msgstr "VDI %s을(를) 영구 삭제할 수 없음" #, python-format msgid "Unable to determine disk bus for '%s'" msgstr "'%s'의 디스크 버스를 판별할 수 없음" #, python-format msgid "Unable to determine disk prefix for %s" msgstr "%s의 디스크 접두부를 판별할 수 없음" #, python-format msgid "Unable to eject %s from the pool; No master found" msgstr "풀에서 %s을(를) 방출할 수 없음. 마스터를 찾을 수 없음" #, python-format msgid "Unable to eject %s from the pool; pool not empty" msgstr "풀에서 %s을(를) 방출할 수 없음. 
풀이 비어 있지 않음" #, python-format msgid "Unable to find SR from VBD %s" msgstr "VBD %s에서 SR을 찾을 수 없음" #, python-format msgid "Unable to find SR from VDI %s" msgstr "VDI %s에서 SR을 찾을 수 없음" #, python-format msgid "Unable to find ca_file : %s" msgstr "ca_file을 찾을 수 없음: %s" #, python-format msgid "Unable to find cert_file : %s" msgstr "cert_file을 찾을 수 없음: %s" #, python-format msgid "Unable to find host for Instance %s" msgstr "%s 인스턴스에 대한 호스트를 찾을 수 없음" msgid "Unable to find iSCSI Target" msgstr "iSCSI 대상을 찾을 수 없음" #, python-format msgid "Unable to find key_file : %s" msgstr "key_file을 찾을 수 없음: %s" msgid "Unable to find root VBD/VDI for VM" msgstr "VM에 대한 루트 VBD/VDI를 찾을 수 없음" msgid "Unable to find volume" msgstr "볼륨을 찾을 수 없음" msgid "Unable to get host UUID: /etc/machine-id does not exist" msgstr "호스트 UUID를 가져올 수 없음: /etc/machine-id가 존재하지 않음" msgid "Unable to get host UUID: /etc/machine-id is empty" msgstr "호스트 UUID를 가져올 수 없음: /etc/machine-id가 비어 있음" #, python-format msgid "Unable to get record of VDI %s on" msgstr "VDI %s의 레코드를 가져올 수 없음" #, python-format msgid "Unable to introduce VDI for SR %s" msgstr "SR %s에 대한 VDI를 도입할 수 없음" #, python-format msgid "Unable to introduce VDI on SR %s" msgstr "SR %s에서 VDI를 도입할 수 없음" #, python-format msgid "Unable to join %s in the pool" msgstr "풀에 %s을(를) 결합할 수 없음" msgid "" "Unable to launch multiple instances with a single configured port ID. Please " "launch your instance one by one with different ports." msgstr "" "단일 포트 ID가 구성된 다중 인스턴스를 실행할 수 없습니다.인스턴스를 다른 포트" "와 함께 하나씩 실행하십시오." #, python-format msgid "" "Unable to migrate %(instance_uuid)s to %(dest)s: Lack of memory(host:" "%(avail)s <= instance:%(mem_inst)s)" msgstr "" "%(instance_uuid)s을(를) %(dest)s(으)로 마이그레이션할 수 없음. 메모리 부족(호" "스트:%(avail)s <= 인스턴스:%(mem_inst)s)" #, python-format msgid "" "Unable to migrate %(instance_uuid)s: Disk of instance is too large(available " "on destination host:%(available)s < need:%(necessary)s)" msgstr "" "%(instance_uuid)s을(를) 마이그레이션할 수 없음: 인스턴스의 디스크가 너무 큼" "(대상 호스트의 사용 가능량:%(available)s < 필요량:%(necessary)s)" #, python-format msgid "" "Unable to migrate instance (%(instance_id)s) to current host (%(host)s)." msgstr "" "인스턴스(%(instance_id)s)를 현재 호스트(%(host)s)로 마이그레이션할 수 없습니" "다. " #, python-format msgid "Unable to obtain target information %s" msgstr "대상 정보 %s을(를) 얻을 수 없음" msgid "Unable to resize disk down." msgstr "디스크 크기를 줄일 수 없습니다." msgid "Unable to set password on instance" msgstr "인스턴스에 대한 비밀번호를 설정할 수 없음" msgid "Unable to shrink disk." msgstr "디스크를 줄일 수 없습니다." msgid "Unable to terminate instance." msgstr "인스턴스를 종료할 수 없음" #, python-format msgid "Unable to unplug VBD %s" msgstr "VBD %s을(를) 언플러그할 수 없음" #, python-format msgid "Unacceptable CPU info: %(reason)s" msgstr "CPU 정보를 확인할 수 없습니다: %(reason)s" msgid "Unacceptable parameters." msgstr "사용할 수 없는 매개변수입니다. " #, python-format msgid "Unavailable console type %(console_type)s." msgstr "사용 불가능한 콘솔 유형 %(console_type)s입니다." msgid "" "Undefined Block Device Mapping root: BlockDeviceMappingList contains Block " "Device Mappings from multiple instances." msgstr "" "정의되지 않은 블록 디바이스 맵핑 루트: BlockDeviceMappingList에는 여러 인스턴" "스의 블록 디바이스 맵핑이 포함되어 있습니다." #, python-format msgid "" "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ " "and attach the Nova API log if possible.\n" "%s" msgstr "" "API Error가 발생했습니다. http://bugs.launchpad.net/nova/ 에 상세 내용을 보내" "주십시오. 가능하면 Nova API 로그를 포함하여 보내주십시오. 
%s" #, python-format msgid "Unexpected aggregate action %s" msgstr "예상치 않은 집합 %s 작업" msgid "Unexpected type adding stats" msgstr "통계를 추가하는 예기치 않은 유형이 있음" #, python-format msgid "Unexpected vif_type=%s" msgstr "예기치 않은 vif_type=%s" msgid "Unknown" msgstr "알 수 없음" msgid "Unknown action" msgstr "알 수 없는 조치" #, python-format msgid "Unknown config drive format %(format)s. Select one of iso9660 or vfat." msgstr "" "알 수 없는 구성 드라이브 형식 %(format)s입니다. iso9660 또는 vfat 중 하나를 " "선택하십시오. " #, python-format msgid "Unknown delete_info type %s" msgstr "알 수 없는 delete_info 유형: %s" #, python-format msgid "Unknown image_type=%s" msgstr "알 수 없는 image_type=%s" #, python-format msgid "Unknown quota resources %(unknown)s." msgstr "알 수 없는 할당량 자원 %(unknown)s." msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "알 수 없는 정렬 방향입니다. 'desc' 또는 'asc'여야 함" #, python-format msgid "Unknown type: %s" msgstr "알 수 없는 유형: %s" msgid "Unrecognized legacy format." msgstr "인식할 수 없는 레거시 포맷입니다." #, python-format msgid "Unrecognized read_deleted value '%s'" msgstr "인식되지 않는 read_deleted 값 '%s'" #, python-format msgid "Unrecognized value '%s' for CONF.running_deleted_instance_action" msgstr "CONF.running_deleted_instance_action에 대한 인식되지 않는 값 '%s'" #, python-format msgid "Unshelve attempted but the image %s cannot be found." msgstr "언쉘브를 시도했으나 %s 이미지를 찾을 수 없습니다." msgid "Unsupported Content-Type" msgstr "지원되지 않는 Content-Type" msgid "Upgrade DB using Essex release first." msgstr "먼저 Essex 릴리스를 사용하여 DB를 업그레이드하십시오. " #, python-format msgid "User %(username)s not found in password file." msgstr "%(username)s 사용자가 비밀번호 파일에 없습니다. " #, python-format msgid "User %(username)s not found in shadow file." msgstr "%(username)s 사용자가 새도우 파일에 없습니다. " msgid "User data needs to be valid base 64." msgstr "사용자 데이터는 유효한 base64여야 합니다. " msgid "User does not have admin privileges" msgstr "사용자에게 관리 권한이 없습니다" msgid "" "Using different block_device_mapping syntaxes is not allowed in the same " "request." msgstr "" "동일한 요청에서 다른 block_device_mapping 구문 사용은 요청을 건너뛰는 중입니" "다." #, python-format msgid "" "VDI %(vdi_ref)s is %(virtual_size)d bytes which is larger than flavor size " "of %(new_disk_size)d bytes." msgstr "" "VDI %(vdi_ref)s은(는) %(virtual_size)d바이트이며 이는 플레이버 크기인 " "%(new_disk_size)d바이트보다 큽니다." #, python-format msgid "" "VDI not found on SR %(sr)s (vdi_uuid %(vdi_uuid)s, target_lun %(target_lun)s)" msgstr "" "SR %(sr)s에서 VDI를 찾지 못함(vdi_uuid %(vdi_uuid)s, target_lun " "%(target_lun)s)" #, python-format msgid "VHD coalesce attempts exceeded (%d), giving up..." msgstr "VHD 합병 시도가 (%d)을(를) 초과했음, 포기하는 중..." #, python-format msgid "" "Version %(req_ver)s is not supported by the API. Minimum is %(min_ver)s and " "maximum is %(max_ver)s." msgstr "" "API에서 %(req_ver)s 버전을 지원하지 않습니다. 최소 %(min_ver)s 이상, 최대 " "%(max_ver)s 이하여야 합니다." #, python-format msgid "" "Version of %(name)s %(min_ver)s %(max_ver)s intersects with another versions." msgstr "%(name)s %(min_ver)s %(max_ver)s의 버전이 다른 버전과 겹칩니다." msgid "Virtual Interface creation failed" msgstr "가상 인터페이스 생성 실패하였습니다" msgid "Virtual interface plugin failed" msgstr "가상 인터페이스 연결 실패했습니다" #, python-format msgid "Virtual machine mode '%(vmmode)s' is not recognised" msgstr "가상 머신 모드 '%(vmmode)s'이(가) 인식되지 않음" #, python-format msgid "Virtual machine mode '%s' is not valid" msgstr "가상 머신 모드 '%s'이(가) 올바르지 않음" #, python-format msgid "Virtualization type '%(virt)s' is not supported by this compute driver" msgstr "이 컴퓨터 드라이버가 가상화 유형 '%(virt)s'을(를) 지원하지 않음" #, python-format msgid "Volume %(volume_id)s could not be attached. 
Reason: %(reason)s" msgstr "%(volume_id)s 볼륨을 연결할 수 없습니다. 이유: %(reason)s" #, python-format msgid "Volume %(volume_id)s could not be detached. Reason: %(reason)s" msgstr "%(volume_id)s 볼륨을 분리할 수 없습니다. 이유: %(reason)s" #, python-format msgid "Volume %(volume_id)s could not be found." msgstr "%(volume_id)s 볼륨을 찾을 수 없습니다. " #, python-format msgid "" "Volume %(volume_id)s did not finish being created even after we waited " "%(seconds)s seconds or %(attempts)s attempts. And its status is " "%(volume_status)s." msgstr "" "%(seconds)s 초 동안 기다리거나 %(attempts)s 시도 하였으나 %(volume_id)s 볼륨" "이 생성되지 않았습니다. 지금 상태는 %(volume_status)s 입니다." msgid "Volume does not belong to the requested instance." msgstr "볼륨이 요청된 인스턴스에 속해 있지 않습니다." #, python-format msgid "" "Volume encryption is not supported for %(volume_type)s volume %(volume_id)s" msgstr "볼륨 암호화는 %(volume_type)s 볼륨 %(volume_id)s 를 지원하지 않습니다" #, python-format msgid "" "Volume is smaller than the minimum size specified in image metadata. Volume " "size is %(volume_size)i bytes, minimum size is %(image_min_disk)i bytes." msgstr "" "볼륨이 이미지 메타데이터에서 지정된 최소 크기보다 작습니다. 볼륨크기는 " "%(volume_size)i바이트이고 최소 크기는 %(image_min_disk)i바이트입니다. " #, python-format msgid "" "Volume sets block size, but the current libvirt hypervisor '%s' does not " "support custom block size" msgstr "" "볼륨에서 블록 크기를 설정하지만 현재 libvirt 하이퍼바이저 '%s'이(가) 사용자 " "정의 블록 크기를 지원하지 않음" msgid "Volume size extension is not supported by the hypervisor." msgstr "하이퍼바이저에서 볼륨크기 확장을 지원하지 않습니다" #, python-format msgid "" "We do not support scheme '%s' under Python < 2.7.4, please use http or https" msgstr "" "2.7.4 미만의 Python에서 스키마 '%s'을(를) 지원하지 않습니다. http 또는 https" "를 사용하십시오." msgid "When resizing, instances must change flavor!" msgstr "크기를 조정할 때 인스턴스는 플레이버를 변경해야 합니다!" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "서버를 SSL 모드에서 실행할 때 구성 파일에 cert_file 및 key_file 옵션 값을 모" "두 지정해야 함" #, python-format msgid "Wrong quota method %(method)s used on resource %(res)s" msgstr "%(res)s 자원에서 올바르지 않은 할당량 메소드 %(method)s이(가) 사용됨" msgid "Wrong type of hook method. Only 'pre' and 'post' type allowed" msgstr "잘못된 유형의 후크 메소드임. 'pre' 및 'post' 유형만 허용됨" msgid "X-Forwarded-For is missing from request." msgstr "X-Forwarded-For가 요청에서 누락되었습니다. " msgid "X-Instance-ID header is missing from request." msgstr "X-Instance-ID 헤더가 요청에서 누락되었습니다. " msgid "X-Instance-ID-Signature header is missing from request." msgstr "X-Instance-ID-Signature 헤더가 요청에서 누락되었습니다." msgid "X-Metadata-Provider is missing from request." msgstr "X-Metadata-Provider가 요청에서 누락되었습니다. " msgid "X-Tenant-ID header is missing from request." msgstr "X-Tenant-ID 헤더가 요청에서 누락되었습니다." msgid "XAPI supporting relax-xsm-sr-check=true required" msgstr "XAPI 지원 relax-xsm-sr-check=true가 필요함" msgid "You are not allowed to delete the image." msgstr "이미지를 삭제할 수 없습니다." msgid "" "You are not authorized to access the image the instance was started with." msgstr "인스턴스가 시작되는 해당 이미지에 액세스할 권한이 없습니다. " #, python-format msgid "" "You can't use %s options in vzstorage_mount_opts configuration parameter." msgstr "vzstorage_mount_opts 구성 매개변수에서 %s 옵션을 사용할 수 없습니다." msgid "You must implement __call__" msgstr "__call__을 구현해야 합니다. " msgid "You should specify images_rbd_pool flag to use rbd images." msgstr "rbd 이미지를 사용하려면 images_rbd_pool 플래그를 지정해야 합니다." msgid "You should specify images_volume_group flag to use LVM images." msgstr "LVM 이미지를 사용하려면 images_volume_group 플래그를 지정해야 합니다." msgid "Zero floating IPs available." 
msgstr "사용할 수 있는 Floating IP가 0개입니다." msgid "admin password can't be changed on existing disk" msgstr "관리 비밀번호는 기존 디스크에서 변경될 수 없음" msgid "aggregate deleted" msgstr "집합이 삭제되었습니다" msgid "aggregate in error" msgstr "집합에 오류가 있습니다" #, python-format msgid "assert_can_migrate failed because: %s" msgstr "assert_can_migrate 실패. 원인: %s" msgid "cannot understand JSON" msgstr "JSON을 이해할 수 없음" msgid "clone() is not implemented" msgstr "clone()이 구현되지 않음" #, python-format msgid "connect info: %s" msgstr "연결 정보: %s" #, python-format msgid "connecting to: %(host)s:%(port)s" msgstr "%(host)s:%(port)s에 연결 중" msgid "direct_snapshot() is not implemented" msgstr "direct_snapshot()이 구현되지 않음" #, python-format msgid "disk type '%s' not supported" msgstr "디스크 유형 '%s'이(가) 지원되지 않음" #, python-format msgid "empty project id for instance %s" msgstr "%s 인스턴스에 대한 비어 있는 프로젝트 ID" msgid "error setting admin password" msgstr "관리 비밀번호 설정 오류" #, python-format msgid "error: %s" msgstr "오류: %s" #, python-format msgid "failed to generate X509 fingerprint. Error message: %s" msgstr "X509 fingerprint를 생성하지 못했습니다. 에러 메시지: %s" msgid "failed to generate fingerprint" msgstr "Fingerprint를 생성하지 못했습니다" msgid "filename cannot be None" msgstr "파일 이름은 None일 수 없음" msgid "floating IP is already associated" msgstr "Floating IP가 이미 연관되어 있음" msgid "floating IP not found" msgstr "Floating IP를 찾을 수 없음" #, python-format msgid "fmt=%(fmt)s backed by: %(backing_file)s" msgstr "fmt=%(fmt)s 백업: %(backing_file)s" #, python-format msgid "href %s does not contain version" msgstr "href %s에 버전이 없음" msgid "image already mounted" msgstr "이미지가 이미 마운트되었음" #, python-format msgid "instance %s is not running" msgstr "인스턴스 %s이(가) 실행 중이 아님" msgid "instance has a kernel or ramdisk but not both" msgstr "인스턴스가 커널 또는 램디스크를 갖지만 둘 다 갖지는 않음" msgid "instance is a required argument to use @refresh_cache" msgstr "인스턴스는 @refresh_cache를 사용하기 위한 필수 인수임" msgid "instance is not in a suspended state" msgstr "인스턴스가 일시중단 상태에 있지 않음" msgid "instance is not powered on" msgstr "인스턴스가 전원 공급되지 않음" msgid "instance is powered off and cannot be suspended." msgstr "인스턴스가 전원 차단되었고 일시중단될 수 없습니다. " #, python-format msgid "instance_id %s could not be found as device id on any ports" msgstr "포트의 디바이스 ID로 instance_id %s을(를) 찾을 수 없음" msgid "is_public must be a boolean" msgstr "is_public은 부울이어야 함" msgid "keymgr.fixed_key not defined" msgstr "keymgr.fixed_key가 정의되지 않음" msgid "l3driver call to add floating IP failed" msgstr "Floating IP를 추가하기 위한 l3driver 호출에 실패" #, python-format msgid "libguestfs installed but not usable (%s)" msgstr "libguestfs가 설치되었지만 사용할 수 없음(%s)" #, python-format msgid "libguestfs is not installed (%s)" msgstr "libguestfs가 설치되지 않음(%s)" #, python-format msgid "marker [%s] not found" msgstr "마커 [%s]을(를) 찾을 수 없음" #, python-format msgid "max rows must be <= %(max_value)d" msgstr "최대 행은 <= %(max_value)d이어야 함" msgid "max_count cannot be greater than 1 if an fixed_ip is specified." msgstr "fixed_ip가 지정된 경우 max_count는 1 이하여야 합니다." 
msgid "min_count must be <= max_count" msgstr "min_count는 max_count 이하여야 함" #, python-format msgid "nbd device %s did not show up" msgstr "nbd 디바이스 %s이(가) 표시되지 않음" msgid "nbd unavailable: module not loaded" msgstr "nbd 사용 불가능: 모듈이 로드되지 않았음" msgid "no hosts to remove" msgstr "제거할 호스트가 없음" #, python-format msgid "no match found for %s" msgstr "%s에 대한 일치 항목을 찾을 수 없음" #, python-format msgid "no usable parent snapshot for volume %s" msgstr "%s 볼륨에 사용 가능한 상위 스냅샷이 없음" #, python-format msgid "no write permission on storage pool %s" msgstr "스토리지 풀 %s에 쓰기 권한이 없음" #, python-format msgid "not able to execute ssh command: %s" msgstr "ssh 명령을 실행할 수 없음: %s" msgid "old style configuration can use only dictionary or memcached backends" msgstr "이전 스타일 구성에서는 사전 또는 memcached 백엔드만 사용할 수 있음" msgid "operation time out" msgstr "조작 제한시간이 초과됨" #, python-format msgid "partition %s not found" msgstr "%s 파티션을 찾을 수 없음" #, python-format msgid "partition search unsupported with %s" msgstr "파티션 검색이 %s에서 지원되지 않음" msgid "pause not supported for vmwareapi" msgstr "vmwareapi에 대한 일시정지는 지원되지 않음" msgid "printable characters with at least one non space character" msgstr "공백이 아닌 문자가 하나 이상 포함된 인쇄 가능한 문자" msgid "printable characters. Can not start or end with whitespace." msgstr "인쇄 가능한 문자입니다. 공백으로 시작하거나 종료할 수 없습니다." #, python-format msgid "qemu-img failed to execute on %(path)s : %(exp)s" msgstr "qemu-img가 %(path)s에서 실행하는 데 실패 : %(exp)s" #, python-format msgid "qemu-nbd error: %s" msgstr "qemu-nbd 오류: %s" msgid "rbd python libraries not found" msgstr "rbd python 라이브러리를 찾을 수 없음" #, python-format msgid "read_deleted can only be one of 'no', 'yes' or 'only', not %r" msgstr "" "read_deleted는 'no', 'yes', 'only' 중 하나만 가능하며, %r은(는) 사용하지 못 " "합니다." msgid "serve() can only be called once" msgstr "serve()는 한 번만 호출할 수 있음" msgid "service is a mandatory argument for DB based ServiceGroup driver" msgstr "서비스는 DB 기반 ServiceGroup 드라이버의 필수 인수임" msgid "service is a mandatory argument for Memcached based ServiceGroup driver" msgstr "서비스는 Memcached 기반 ServiceGroup 드라이버의 필수 인수임" msgid "set_admin_password is not implemented by this driver or guest instance." msgstr "" "set_admin_password가 이 드라이버 또는 게스트 인스턴스에 의해 구현되지 않습니" "다. " msgid "setup in progress" msgstr "설정 진행 중" #, python-format msgid "snapshot for %s" msgstr "%s 스냅샷" msgid "snapshot_id required in create_info" msgstr "create_info에 snapshot_id가 필요함" msgid "token not provided" msgstr "토큰이 제공되지 않음" msgid "too many body keys" msgstr "본문 키가 너무 많음" msgid "unpause not supported for vmwareapi" msgstr "vmwareapi에 대한 일시정지 해제는 지원되지 않음" msgid "version should be an integer" msgstr "버전은 정수여야 함" #, python-format msgid "vg %s must be LVM volume group" msgstr "vg %s은(는) LVM 볼륨 그룹이어야 함" #, python-format msgid "vhostuser_sock_path not present in vif_details for vif %(vif_id)s" msgstr "vif %(vif_id)s의 vif_details에 vhostuser_sock_path가 표시되지 않음" #, python-format msgid "vif type %s not supported" msgstr "vif 유형 %s이(가) 지원되지 않음" msgid "vif_type parameter must be present for this vif_driver implementation" msgstr "이 vif_driver 구현을 위해 vif_type 매개변수가 존재해야 함" #, python-format msgid "volume %s already attached" msgstr "볼륨 %s이(가) 이미 접속됨" #, python-format msgid "" "volume '%(vol)s' status must be 'in-use'. Currently in '%(status)s' status" msgstr "볼륨 '%(vol)s' 상태는 '사용 중'이어야 합니다. 
현재 상태 '%(status)s'" #, python-format msgid "xenapi.fake does not have an implementation for %s" msgstr "xenapi.fake가 %s에 대한 구현을 갖지 않음" #, python-format msgid "" "xenapi.fake does not have an implementation for %s or it has been called " "with the wrong number of arguments" msgstr "" "xenapi.fake에 %s에 대한 구현이 없거나 잘못된 수의 인수를 사용하여 호출됨" ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0664723 nova-21.2.4/nova/locale/pt_BR/0000775000175000017500000000000000000000000015776 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3504698 nova-21.2.4/nova/locale/pt_BR/LC_MESSAGES/0000775000175000017500000000000000000000000017563 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/locale/pt_BR/LC_MESSAGES/nova.po0000664000175000017500000033200100000000000021065 0ustar00zuulzuul00000000000000# Translations template for nova. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the nova project. # # Translators: # Francisco Demontiê dos Santos Junior , 2013 # FIRST AUTHOR , 2011 # Francisco Demontiê dos Santos Junior , 2013 # Gabriel Wainer, 2013 # Josemar Muller Lohn , 2013 # Leonardo Rodrigues de Mello <>, 2012 # Marcelo Dieder , 2013 # MichaelBr , 2013 # Volmar Oliveira Junior , 2013 # Welkson Renny de Medeiros , 2012 # Wiliam Souza , 2013 # Andreas Jaeger , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: nova VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2020-04-27 16:23+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-04-12 06:08+0000\n" "Last-Translator: Copied by Zanata \n" "Language: pt_BR\n" "Plural-Forms: nplurals=2; plural=(n > 1);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 4.3.3\n" "Language-Team: Portuguese (Brazil)\n" #, python-format msgid "%(address)s is not a valid IP v4/6 address." msgstr "%(address)s não é um endereço IPv4/6 válido." #, python-format msgid "" "%(binary)s attempted direct database access which is not allowed by policy" msgstr "" "%(binary)s tentou o acesso direto ao banco de dados, que não é permitido " "pela política" #, python-format msgid "%(cidr)s is not a valid IP network." msgstr "%(cidr)s não é uma rede de IP válida." #, python-format msgid "%(field)s should not be part of the updates." msgstr "%(field)s não deve fazer parte das atualizações." 
#, python-format msgid "%(memsize)d MB of memory assigned, but expected %(memtotal)d MB" msgstr "%(memsize)d MB de memória designada, mas esperada %(memtotal)d MB" #, python-format msgid "%(path)s is not on local storage: %(reason)s" msgstr "%(path)s não está no armazenamento local: %(reason)s" #, python-format msgid "%(path)s is not on shared storage: %(reason)s" msgstr "%(path)s não está no armazenamento compartilhado: %(reason)s" #, python-format msgid "%(total)i rows matched query %(meth)s, %(done)i migrated" msgstr "%(total)i linhas corresponderal à consulta %(meth)s, %(done)i migradas" #, python-format msgid "%(type)s hypervisor does not support PCI devices" msgstr "O hypervisor %(type)s não suporta dispositivos PCI" #, python-format msgid "%(worker_name)s value of %(workers)s is invalid, must be greater than 0" msgstr "%(worker_name)s valor de %(workers)s é inválido, deve ser maior que 0" #, python-format msgid "%s does not support disk hotplug." msgstr "%s não suporta o hotplug do disco." #, python-format msgid "%s format is not supported" msgstr "O formato %s não é suportado" #, python-format msgid "%s is not supported." msgstr "%s não é suportado." #, python-format msgid "%s must be either 'MANUAL' or 'AUTO'." msgstr "%s deve ser 'MANUAL' ou 'AUTO'." #, python-format msgid "'%(other)s' should be an instance of '%(cls)s'" msgstr "'%(other)s' deve ser uma instância de '%(cls)s'" msgid "'qemu-img info' parsing failed." msgstr "Falha na análise de 'qemu-img info'." #, python-format msgid "'rxtx_factor' argument must be a float between 0 and %g" msgstr "Argumento 'rxtx_factor' deve ser um valor flutuante entre 0 e %g" #, python-format msgid "A NetworkModel is required in field %s" msgstr "Um NetworkModel é requerido no campo %s" #, python-format msgid "" "API Version String %(version)s is of invalid format. Must be of format " "MajorNum.MinorNum." msgstr "" "String de Versão de API %(version)s é de formato inválido. Deve estar no " "formato MajorNum.MinorNum." #, python-format msgid "API version %(version)s is not supported on this method." msgstr "Versão de API %(version)s não é suportada nesse método." msgid "Access list not available for public flavors." msgstr "Lista de acesso não disponível para métodos públicos." #, python-format msgid "Action %s not found" msgstr "Ação %s não localizada" #, python-format msgid "" "Action for request_id %(request_id)s on instance %(instance_uuid)s not found" msgstr "" "Ação para request_id %(request_id)s na instância %(instance_uuid)s não " "localizada" #, python-format msgid "Action: '%(action)s', calling method: %(meth)s, body: %(body)s" msgstr "Ação: '%(action)s', método de chamada: %(meth)s, corpo: %(body)s" #, python-format msgid "Add metadata failed for aggregate %(id)s after %(retries)s retries" msgstr "" "A inclusão de metadados falhou para o agregado %(id)s após %(retries)s novas " "tentativas" msgid "Affinity instance group policy was violated." msgstr "A política de grupo da instância de afinidade foi violada." #, python-format msgid "Agent does not support the call: %(method)s" msgstr "O agente não suporta a chamada: %(method)s" #, python-format msgid "" "Agent-build with hypervisor %(hypervisor)s os %(os)s architecture " "%(architecture)s exists." msgstr "" "A construção do agente com hypervisor %(hypervisor)s os %(os)s arquitetura " "%(architecture)s existe." #, python-format msgid "Aggregate %(aggregate_id)s already has host %(host)s." msgstr "O agregado %(aggregate_id)s já possui o host %(host)s." 
#, python-format msgid "Aggregate %(aggregate_id)s could not be found." msgstr "O agregado %(aggregate_id)s não pôde ser localizado." #, python-format msgid "Aggregate %(aggregate_id)s has no host %(host)s." msgstr "O agregado %(aggregate_id)s não possui host %(host)s." #, python-format msgid "Aggregate %(aggregate_id)s has no metadata with key %(metadata_key)s." msgstr "" "O agregado %(aggregate_id)s não possui metadados com a chave " "%(metadata_key)s." #, python-format msgid "" "Aggregate %(aggregate_id)s: action '%(action)s' caused an error: %(reason)s." msgstr "" "Agregado %(aggregate_id)s: ação '%(action)s' causou um erro: %(reason)s." #, python-format msgid "Aggregate %(aggregate_name)s already exists." msgstr "O agregado %(aggregate_name)s já existe." #, python-format msgid "Aggregate %s does not support empty named availability zone" msgstr "O agregado %s não suporta zona de disponibilidade nomeada vazia" #, python-format msgid "Aggregate for host %(host)s count not be found." msgstr "Agregar para a contagem de host %(host)s não pode ser localizada." #, python-format msgid "An invalid 'name' value was provided. The name must be: %(reason)s" msgstr "Um valor 'name' inválido foi fornecido. O nome deve ser: %(reason)s" msgid "An unknown error has occurred. Please try your request again." msgstr "" "Ocorreu um erro desconhecido. Por favor tente sua requisição novamente." msgid "An unknown exception occurred." msgstr "Ocorreu uma exceção desconhecida." msgid "Anti-affinity instance group policy was violated." msgstr "Política de grupo de instância Antiafinidade foi violada." #, python-format msgid "Architecture name '%(arch)s' is not recognised" msgstr "Nome de arquitetura '%(arch)s' não é reconhecido" #, python-format msgid "Architecture name '%s' is not valid" msgstr "O nome de arquitetura '%s' não é válido" #, python-format msgid "" "Attempt to consume PCI device %(compute_node_id)s:%(address)s from empty pool" msgstr "" "Tentativa de consumir dispositivo PCI %(compute_node_id)s:%(address)s do " "conjunto vazio" msgid "Attempted overwrite of an existing value." msgstr "Tentativa de sobrescrever um valor existente." #, python-format msgid "Attribute not supported: %(attr)s" msgstr "Atributo não suportado: %(attr)s" #, python-format msgid "Bad network format: missing %s" msgstr "Formato de rede inválido: %s ausente" msgid "Bad networks format" msgstr "Formato de redes inválido" #, python-format msgid "Bad networks format: network uuid is not in proper format (%s)" msgstr "" "Formato de redes inválido: o uuid da rede não está em formato adequado (%s)" #, python-format msgid "Bad prefix for network in cidr %s" msgstr "Prefixo inválido para rede em cidr %s" #, python-format msgid "" "Binding failed for port %(port_id)s, please check neutron logs for more " "information." msgstr "" "A ligação falhou para a porta %(port_id)s, verifique os logs do neutron para " "obter informações adicionais." msgid "Blank components" msgstr "Componentes em branco" msgid "" "Blank volumes (source: 'blank', dest: 'volume') need to have non-zero size" msgstr "" "Volumes em branco (origem: 'blank', dest: 'volume') precisam ter tamanho " "diferente de zero" #, python-format msgid "Block Device %(id)s is not bootable." msgstr "%(id)s do Dispositivo de Bloco não é inicializável." #, python-format msgid "" "Block Device Mapping %(volume_id)s is a multi-attach volume and is not valid " "for this operation." 
msgstr "" "O Mapeamento de Dispositivo de Bloco %(volume_id)s é um volume de diversas " "conexões e não é válido para essa operação." msgid "Block Device Mapping cannot be converted to legacy format. " msgstr "" "O Mapeamento do Dispositivo de Bloco não pode ser convertido para um formato " "legado." msgid "Block Device Mapping is Invalid." msgstr "O Mapeamento de Dispositivo de Bloco é Inválido." #, python-format msgid "Block Device Mapping is Invalid: %(details)s" msgstr "O Mapeamento do Dispositivo de Bloco é Inválido: %(details)s" msgid "" "Block Device Mapping is Invalid: Boot sequence for the instance and image/" "block device mapping combination is not valid." msgstr "" "Mapeamento de Dispositivo de Bloco inválido: A sequência de boot para a " "instância e a combinação de mapeamento de dispositivo de imagem/bloco é " "inválida." msgid "" "Block Device Mapping is Invalid: You specified more local devices than the " "limit allows" msgstr "" "Mapeamento de Dispositivo de Bloco inválido: Você especificou mais " "dispositivos locais que o limite permitido" #, python-format msgid "Block Device Mapping is Invalid: failed to get image %(id)s." msgstr "" "Mapeamento de Dispositivo de Bloco inválido: falha ao obter imagem %(id)s." #, python-format msgid "Block Device Mapping is Invalid: failed to get snapshot %(id)s." msgstr "" "O Mapeamento de Dispositivo de Bloco é Inválido: falha ao obter captura " "instantânea %(id)s." #, python-format msgid "Block Device Mapping is Invalid: failed to get volume %(id)s." msgstr "" "O Mapeamento de Dispositivo de Bloco é Inválido: falha ao obter volume " "%(id)s." msgid "Block migration can not be used with shared storage." msgstr "Migração de bloco não pode ser usada com armazenamento compartilhado." msgid "Boot index is invalid." msgstr "Índice de inicialização inválido." #, python-format msgid "Build of instance %(instance_uuid)s aborted: %(reason)s" msgstr "Construção da instância %(instance_uuid)s interrompida: %(reason)s" #, python-format msgid "Build of instance %(instance_uuid)s was re-scheduled: %(reason)s" msgstr "" "A construção da instância %(instance_uuid)s foi replanejada: %(reason)s" #, python-format msgid "BuildRequest not found for instance %(uuid)s" msgstr "BuildRequest não localizado para a instância %(uuid)s" msgid "CPU and memory allocation must be provided for all NUMA nodes" msgstr "CPU e alocação de memória devem ser fornecidos para todos os nós NUMA" #, python-format msgid "" "CPU doesn't have compatibility.\n" "\n" "%(ret)s\n" "\n" "Refer to %(u)s" msgstr "" "A CPU não possui compatibilidade.\n" "\n" "%(ret)s\n" "\n" "Consulte %(u)s" #, python-format msgid "CPU number %(cpunum)d is assigned to two nodes" msgstr "Número de CPU %(cpunum)d é designado a dois nós" #, python-format msgid "CPU number %(cpunum)d is larger than max %(cpumax)d" msgstr "Número de CPU %(cpunum)d é maior que o máximo %(cpumax)d" #, python-format msgid "CPU number %(cpuset)s is not assigned to any node" msgstr "Número de CPU %(cpuset)s não é designado a nenhum nó" msgid "Can not add access to a public flavor." msgstr "Não é possível incluir acesso em um tipo público." msgid "Can not find requested image" msgstr "Não é possível localizar a imagem solicitada" #, python-format msgid "Can not handle authentication request for %d credentials" msgstr "" "Não é possível manipular solicitação de autenticação para %d credenciais" msgid "Can't resize a disk to 0 GB." msgstr "Não é possível redimensionar um disco para 0 GB." msgid "Can't resize down ephemeral disks." 
msgstr "Não é possível redimensionar o disco temporário." msgid "Can't retrieve root device path from instance libvirt configuration" msgstr "" "Não é possível recuperar o caminho de dispositivo raiz da configuração de " "libvirt da instância" #, python-format msgid "" "Cannot '%(action)s' instance %(server_id)s while it is in %(attr)s %(state)s" msgstr "" "Não é possível '%(action)s' da instância %(server_id)s enquanto ele está em " "%(attr)s %(state)s" #, python-format msgid "" "Cannot access \"%(instances_path)s\", make sure the path exists and that you " "have the proper permissions. In particular Nova-Compute must not be executed " "with the builtin SYSTEM account or other accounts unable to authenticate on " "a remote host." msgstr "" "Não é possível acessar \"%(instances_path)s\", certifique-se de que o " "caminho exista e que você tenha as permissões apropriadas. Em particular " "Nova-Compute não deve ser executado com a conta do SYSTEM integrado ou " "outras contas que não conseguem autenticar em um host remoto." #, python-format msgid "Cannot add host to aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Não é possível adicionar o host para o agregado %(aggregate_id)s. Motivo: " "%(reason)s." msgid "Cannot attach one or more volumes to multiple instances" msgstr "Não é possível anexar um ou mais volumes a várias instâncias" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "Não é possível chamar %(method)s no objeto órfão %(objtype)s" #, python-format msgid "" "Cannot determine the parent storage pool for %s; cannot determine where to " "store images" msgstr "" "Não é possível determinar o conjunto de armazenamentos pai para %s; não é " "possível determinar onde armazenar as imagens" msgid "Cannot find SR of content-type ISO" msgstr "Não é possível localizar SR do tipo de conteúdo ISO" msgid "Cannot find SR to read/write VDI." msgstr "Não é possível localizar SR para VDI de leitura/gravação." msgid "Cannot find image for rebuild" msgstr "Não foi possível localizar a imagem para reconstrução" #, python-format msgid "Cannot remove host %(host)s in aggregate %(id)s" msgstr "Não é possível remover o host %(host)s do agregado %(id)s" #, python-format msgid "Cannot remove host from aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Não é possível remover o host do agregado %(aggregate_id)s. Motivo: " "%(reason)s." msgid "Cannot rescue a volume-backed instance" msgstr "Não é possível resgatar uma instância suportada por volume" #, python-format msgid "" "Cannot resize the root disk to a smaller size. Current size: " "%(curr_root_gb)s GB. Requested size: %(new_root_gb)s GB." msgstr "" "Não é possível redimensionar o disco raiz para um tamanho menor. Tamanho " "atual: %(curr_root_gb)s GB. Tamanho solicitado: %(new_root_gb)s GB." msgid "" "Cannot set cpu thread pinning policy in a non dedicated cpu pinning policy" msgstr "" "Não é possível configurar a política de pinning de encadeamento de CPU em " "uma política de pinning de CPU dedicada" msgid "Cannot set realtime policy in a non dedicated cpu pinning policy" msgstr "" "Não é possível configurar a política em tempo real em uma política de " "pinning de CPU dedicada" #, python-format msgid "Cannot update aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Não é possível atualizar o agregado %(aggregate_id)s. Motivo: %(reason)s." #, python-format msgid "" "Cannot update metadata of aggregate %(aggregate_id)s. Reason: %(reason)s." 
msgstr "" "Não é possível atualizar o metadado do agregado %(aggregate_id)s. Motivo: " "%(reason)s." #, python-format msgid "Cell %(uuid)s has no mapping." msgstr "Célula %(uuid)s não possui mapeamento." #, python-format msgid "" "Change would make usage less than 0 for the following resources: %(unders)s" msgstr "A mudança faria uso de menos de 0 dos recursos a seguir: %(unders)s" #, python-format msgid "Class %(class_name)s could not be found: %(exception)s" msgstr "A classe %(class_name)s não pôde ser localizada: %(exception)s" #, python-format msgid "" "Command Not supported. Please use Ironic command %(cmd)s to perform this " "action." msgstr "" "Comando não suportado. Utilize o comando Ironic %(cmd)s para realizar essa " "ação." #, python-format msgid "Compute host %(host)s could not be found." msgstr "O host de cálculo %(host)s não pôde ser localizado." #, python-format msgid "Compute host %s not found." msgstr "Compute host %s não pode ser encontrado." #, python-format msgid "Compute service of %(host)s is still in use." msgstr "Serviço de cálculo de %(host)s ainda está em uso." #, python-format msgid "Compute service of %(host)s is unavailable at this time." msgstr "O serviço de cálculo de %(host)s está indisponível no momento." #, python-format msgid "Config drive format '%(format)s' is not supported." msgstr "O formato da unidade de configuração %(format)s não é suportado." #, python-format msgid "" "Config requested an explicit CPU model, but the current libvirt hypervisor " "'%s' does not support selecting CPU models" msgstr "" "Configuração solicitou um modelo de CPU explícito, mas o hypervisor libvirt " "atual '%s' não suporta a seleção de modelos de CPU" #, python-format msgid "" "Conflict updating instance %(instance_uuid)s, but we were unable to " "determine the cause" msgstr "" "Conflito ao atualizar a instância %(instance_uuid)s, mas não foi possível " "determinar a causa" #, python-format msgid "" "Conflict updating instance %(instance_uuid)s. Expected: %(expected)s. " "Actual: %(actual)s" msgstr "" "Conflito ao atualizar a instância %(instance_uuid)s. Esperado: %(expected)s. " "Real: %(actual)s" #, python-format msgid "Connection to cinder host failed: %(reason)s" msgstr "A conexão com o host Cinder falhou: %(reason)s" #, python-format msgid "Connection to glance host %(server)s failed: %(reason)s" msgstr "A conexão com o host Glance %(server)s falhou: %(reason)s" #, python-format msgid "Connection to libvirt lost: %s" msgstr "Conexão com libvirt perdida: %s" #, python-format msgid "Connection to the hypervisor is broken on host: %(host)s" msgstr "A conexão com o hypervisor for interrompida no host: %(host)s" #, python-format msgid "" "Console log output could not be retrieved for instance %(instance_id)s. " "Reason: %(reason)s" msgstr "" "A saída de log do console não pôde ser recuperada para a instância " "%(instance_id)s. Motivo: %(reason)s" msgid "Constraint not met." msgstr "Restrição não atendida." #, python-format msgid "Converted to raw, but format is now %s" msgstr "Convertido em bruto, mas o formato é agora %s" #, python-format msgid "Could not attach image to loopback: %s" msgstr "Não foi possível anexar imagem ao loopback: %s" #, python-format msgid "Could not fetch image %(image_id)s" msgstr "Não foi possível buscar a imagem %(image_id)s" #, python-format msgid "Could not find a handler for %(driver_type)s volume." msgstr "" "Não foi possível localizar um manipulador para o volume %(driver_type)s." 
#, python-format msgid "Could not find binary %(binary)s on host %(host)s." msgstr "Não foi possível localizar o binário %(binary)s no host %(host)s." #, python-format msgid "Could not find config at %(path)s" msgstr "Não foi possível localizar a configuração em %(path)s" msgid "Could not find the datastore reference(s) which the VM uses." msgstr "" "Não foi possível localizar a(s) referência(s) do armazenamento de dados que " "a VM " "usa." #, python-format msgid "Could not load line %(line)s, got error %(error)s" msgstr "Não foi possível carregar a linha %(line)s, obteve erro %(error)s" #, python-format msgid "Could not load paste app '%(name)s' from %(path)s" msgstr "" "Não foi possível carregar o aplicativo paste app '%(name)s' a partir do " "%(path)s" #, python-format msgid "" "Could not mount vfat config drive. %(operation)s failed. Error: %(error)s" msgstr "" "Não foi possível montar a unidade de configuração vfat. Falha de " "%(operation)s. Erro: %(error)s" #, python-format msgid "Could not upload image %(image_id)s" msgstr "Não foi possível fazer upload da imagem %(image_id)s" msgid "Creation of virtual interface with unique mac address failed" msgstr "Criação da interface virtual com endereço mac único falhou" #, python-format msgid "Datastore regex %s did not match any datastores" msgstr "" "O regex do armazenamento de dados %s não correspondeu a nenhum armazenamento " "de dados" msgid "Datetime is in invalid format" msgstr "Data/hora estão em formato inválido" msgid "Default PBM policy is required if PBM is enabled." msgstr "Política de PBM padrão será necessária se PBM for ativado." #, python-format msgid "Deleted %(records)d records from table '%(table_name)s'." msgstr "Registros %(records)d excluídos da tabela ‘%(table_name)s‘." #, python-format msgid "Device '%(device)s' not found." msgstr "Dispositivo '%(device)s' não localizado." #, python-format msgid "" "Device id %(id)s specified is not supported by hypervisor version %(version)s" msgstr "" "id do dispositivo %(id)s especificado não é suportado pela versão do " "hypervisor %(version)s" msgid "Device name contains spaces." msgstr "Nome do dispositivo contém espaços." msgid "Device name empty or too long." msgstr "Nome de dispositivo vazio ou muito longo." #, python-format msgid "Device type mismatch for alias '%s'" msgstr "Tipo de dispositivo incompatível para o alias '%s'" #, python-format msgid "" "Different types in %(table)s.%(column)s and shadow table: %(c_type)s " "%(shadow_c_type)s" msgstr "" "Tipos diferentes em %(table)s.%(column)s e na tabela de sombra: %(c_type)s " "%(shadow_c_type)s" #, python-format msgid "Disk contains a filesystem we are unable to resize: %s" msgstr "" "O disco contém um sistema de arquivos que não é possível redimensionar: %s" #, python-format msgid "Disk format %(disk_format)s is not acceptable" msgstr "Formato do disco %(disk_format)s não é aceito" #, python-format msgid "Disk info file is invalid: %(reason)s" msgstr "Arquivo de informações de disco é inválido: %(reason)s" msgid "Disk must have only one partition." msgstr "O disco deve ter apenas uma partição." #, python-format msgid "Disk with id: %s not found attached to instance." msgstr "Disco com o ID: %s não foi encontrado anexado à instância." #, python-format msgid "Driver Error: %s" msgstr "Erro de driver: %s" #, python-format msgid "Error attempting to run %(method)s" msgstr "Erro ao tentar executar %(method)s" #, python-format msgid "" "Error destroying the instance on node %(node)s. Provision state still " "'%(state)s'." 
msgstr "" "Erro ao destruir a instância no nó %(node)s. O estado da provisão ainda é " "'%(state)s'." #, python-format msgid "Error during following call to agent: %(method)s" msgstr "Erro durante a seguinte chamada ao agente: %(method)s" #, python-format msgid "Error during unshelve instance %(instance_id)s: %(reason)s" msgstr "Erro durante a instância unshelve %(instance_id)s: %(reason)s" #, python-format msgid "" "Error from libvirt while getting domain info for %(instance_name)s: [Error " "Code %(error_code)s] %(ex)s" msgstr "" "erro de libvirt ao obter informações do domínio para %(instance_name)s: " "[Código de Erro %(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while looking up %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "Erro de libvirt ao consultar %(instance_name)s: [Código de Erro " "%(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while quiescing %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "Erro de libvirt ao efetuar quiesce %(instance_name)s: [Código de Erro " "%(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while set password for username \"%(user)s\": [Error Code " "%(error_code)s] %(ex)s" msgstr "" "Erro de libvirt ao configurar senha para o nome do usuário \"%(user)s\": " "[Código de erro %(error_code)s] %(ex)s" #, python-format msgid "" "Error mounting %(device)s to %(dir)s in image %(image)s with libguestfs " "(%(e)s)" msgstr "" "Erro ao montar %(device)s para %(dir)s na imagem %(image)s com libguestfs " "(%(e)s)" #, python-format msgid "Error mounting %(image)s with libguestfs (%(e)s)" msgstr "Erro ao montar %(image)s com libguestfs (%(e)s)" #, python-format msgid "Error when creating resource monitor: %(monitor)s" msgstr "Erro ao criar monitor de recurso: %(monitor)s" msgid "Error: Agent is disabled" msgstr "Erro: O agente está desativado" #, python-format msgid "Event %(event)s not found for action id %(action_id)s" msgstr "Evento %(event)s não localizado para o ID da ação %(action_id)s" msgid "Event must be an instance of nova.virt.event.Event" msgstr "O evento deve ser uma instância de nova.virt.event.Event" #, python-format msgid "" "Exceeded max scheduling attempts %(max_attempts)d for instance " "%(instance_uuid)s. Last exception: %(exc_reason)s" msgstr "" "Máximo excedido de tentativas de planejamento %(max_attempts)d para a " "instância %(instance_uuid)s. Última exceção:%(exc_reason)s" #, python-format msgid "" "Exceeded max scheduling retries %(max_retries)d for instance " "%(instance_uuid)s during live migration" msgstr "" "Excedido o máximo de novas tentativas de planejamento %(max_retries)d para a " "instância %(instance_uuid)s durante migração em tempo real" #, python-format msgid "Exceeded maximum number of retries. %(reason)s" msgstr "Foi excedido o número máximo de tentativas. %(reason)s" #, python-format msgid "Expected a uuid but received %(uuid)s." msgstr "Esperado um uuid, mas recebido %(uuid)s." #, python-format msgid "Extra column %(table)s.%(column)s in shadow table" msgstr "Coluna adicional %(table)s.%(column)s na tabela de sombra" msgid "Extracting vmdk from OVA failed." msgstr "A extração de vmdk de OVA falhou." 
#, python-format msgid "Failed to access port %(port_id)s: %(reason)s" msgstr "Falha ao acessar a porta %(port_id)s: %(reason)s" #, python-format msgid "" "Failed to add deploy parameters on node %(node)s when provisioning the " "instance %(instance)s" msgstr "" "Falha ao incluir parâmetros de implementação no nó %(node)s ao fornecer a " "instância %(instance)s" #, python-format msgid "Failed to allocate the network(s) with error %s, not rescheduling." msgstr "Falha ao alocar a(s) rede(s) com erro %s, não reagendando." msgid "Failed to allocate the network(s), not rescheduling." msgstr "Falha ao alocar a rede(s), não replanejando." #, python-format msgid "Failed to attach network adapter device to %(instance_uuid)s" msgstr "" "Falha ao anexar o dispositivo de adaptador de rede para %(instance_uuid)s" #, python-format msgid "Failed to create vif %s" msgstr "Falha ao criar o vif %s" #, python-format msgid "Failed to deploy instance: %(reason)s" msgstr "Falha ao provisionar a instância: %(reason)s" #, python-format msgid "Failed to detach PCI device %(dev)s: %(reason)s" msgstr "Falha ao remover o dispositivo PCI %(dev)s: %(reason)s" #, python-format msgid "Failed to detach network adapter device from %(instance_uuid)s" msgstr "" "Falha ao remover o dispositivo de adaptador de rede de %(instance_uuid)s" #, python-format msgid "Failed to encrypt text: %(reason)s" msgstr "Falha ao criptografar texto: %(reason)s" #, python-format msgid "Failed to launch instances: %(reason)s" msgstr "Falha ao ativar instâncias: %(reason)s" #, python-format msgid "Failed to map partitions: %s" msgstr "Falha ao mapear partições: %s" #, python-format msgid "Failed to mount filesystem: %s" msgstr "Falhou em montar sistema de arquivo: %s" msgid "Failed to parse information about a pci device for passthrough" msgstr "Falha ao analisar informações sobre um dispositivo pci para passagem" #, python-format msgid "Failed to power off instance: %(reason)s" msgstr "Falha ao desativar a instância: %(reason)s" #, python-format msgid "Failed to power on instance: %(reason)s" msgstr "Falha ao ativar a instância: %(reason)s" #, python-format msgid "" "Failed to prepare PCI device %(id)s for instance %(instance_uuid)s: " "%(reason)s" msgstr "" "Falha ao preparar o dispositivo PCI %(id)s para a instância " "%(instance_uuid)s: %(reason)s" #, python-format msgid "Failed to provision instance %(inst)s: %(reason)s" msgstr "Falha ao fornecer instância %(inst)s: %(reason)s" #, python-format msgid "Failed to read or write disk info file: %(reason)s" msgstr "Falha ao ler ou gravar o arquivo de informações do disco: %(reason)s" #, python-format msgid "Failed to reboot instance: %(reason)s" msgstr "Falha ao reinicializar a instância: %(reason)s" #, python-format msgid "Failed to remove volume(s): (%(reason)s)" msgstr "Falha ao remover o(s) volume(s): (%(reason)s)" #, python-format msgid "Failed to request Ironic to rebuild instance %(inst)s: %(reason)s" msgstr "" "Falha ao solicitar reconstrução da instância pelo Ironic %(inst)s: %(reason)s" #, python-format msgid "Failed to resume instance: %(reason)s" msgstr "Falha ao continuar a instância: %(reason)s" #, python-format msgid "Failed to run qemu-img info on %(path)s : %(error)s" msgstr "Falha ao executar as informações de qemu-img em %(path)s : %(error)s" #, python-format msgid "Failed to set admin password on %(instance)s because %(reason)s" msgstr "" "Falha ao configurar a senha de administrador em %(instance)s porque " "%(reason)s" msgid "Failed to spawn, rolling back" msgstr "Falha ao 
fazer spawn; recuperando" #, python-format msgid "Failed to suspend instance: %(reason)s" msgstr "Falha ao suspender a instância: %(reason)s" #, python-format msgid "Failed to terminate instance: %(reason)s" msgstr "Falha ao finalizar instância: %(reason)s" #, python-format msgid "Failed to unplug vif %s" msgstr "Falha ao desconectar o vif %s" msgid "Failure prepping block device." msgstr "Falha na preparação do dispositivo de bloco." #, python-format msgid "File %(file_path)s could not be found." msgstr "O arquivo %(file_path)s não pôde ser localizado." #, python-format msgid "File path %s not valid" msgstr "Caminho de arquivo %s inválido" #, python-format msgid "Fixed IP %(ip)s is not a valid ip address for network %(network_id)s." msgstr "" "O IP fixo %(ip)s não é um endereço IP válido para a rede %(network_id)s." #, python-format msgid "Fixed IP %s is already in use." msgstr "IP fixo %s já está em uso." #, python-format msgid "" "Fixed IP address %(address)s is already in use on instance %(instance_uuid)s." msgstr "" "O endereço IP fixo %(address)s já está em uso na instância %(instance_uuid)s." #, python-format msgid "Fixed IP not found for address %(address)s." msgstr "IP fixo não localizado para o endereço %(address)s." #, python-format msgid "Flavor %(flavor_id)s could not be found." msgstr "O método %(flavor_id)s não pôde ser localizado." #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(extra_specs_key)s." msgstr "" "Tipo %(flavor_id)s não possui especificações extras com a chave " "%(extra_specs_key)s." #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(key)s." msgstr "" "Tipo %(flavor_id)s não possui especificações extras com a chave %(key)s." #, python-format msgid "" "Flavor %(id)s extra spec cannot be updated or created after %(retries)d " "retries." msgstr "" "A especificação extra de tipo %(id)s não pode ser atualizada ou criada após " "%(retries)d novas tentativas." #, python-format msgid "" "Flavor access already exists for flavor %(flavor_id)s and project " "%(project_id)s combination." msgstr "" "Acesso flavor já existe para o flavor %(flavor_id)s e o projeto " "%(project_id)s." #, python-format msgid "Flavor access not found for %(flavor_id)s / %(project_id)s combination." msgstr "" "Acesso ao método não localizado para a combinação %(flavor_id)s / " "%(project_id)s." msgid "Flavor used by the instance could not be found." msgstr "O método usado pela instância não pôde ser localizado." #, python-format msgid "Flavor with ID %(flavor_id)s already exists." msgstr "Tipo com ID %(flavor_id)s já existe." #, python-format msgid "Flavor with name %(flavor_name)s could not be found." msgstr "Tipo com nome %(flavor_name)s não pôde ser localizado." #, python-format msgid "Flavor with name %(name)s already exists." msgstr "Tipo com nome %(name)s já existe." #, python-format msgid "" "Flavor's disk is smaller than the minimum size specified in image metadata. " "Flavor disk is %(flavor_size)i bytes, minimum size is %(image_min_disk)i " "bytes." msgstr "" "O disco do tipo é menor que o tamanho mínimo especificado nos metadados de " "imagem. O disco do tipo tem %(flavor_size)i bytes; o tamanho mínimo é " "%(image_min_disk)i bytes." #, python-format msgid "" "Flavor's disk is too small for requested image. Flavor disk is " "%(flavor_size)i bytes, image is %(image_size)i bytes." msgstr "" "O disco do tipo é muito pequeno para a imagem solicitada. O disco do tipo " "tem %(flavor_size)i bytes; a imagem tem %(image_size)i bytes." 
msgid "Flavor's memory is too small for requested image." msgstr "Memória do tipo é muito pequena para a imagem solicitada." #, python-format msgid "Floating IP %(address)s association has failed." msgstr "A associação de IP flutuante %(address)s falhou." #, python-format msgid "Floating IP %(address)s is associated." msgstr "O IP flutuante %(address)s está associado." #, python-format msgid "Floating IP %(address)s is not associated with instance %(id)s." msgstr "O IP flutuante %(address)s não está associado à instância %(id)s." #, python-format msgid "Floating IP not found for ID %(id)s." msgstr "IP flutuante não localizado para o ID %(id)s." #, python-format msgid "Floating IP not found for ID %s" msgstr "IP flutuante não localizado para ID %s" #, python-format msgid "Floating IP not found for address %(address)s." msgstr "IP flutuante não localizado para o endereço %(address)s." msgid "Floating IP pool not found." msgstr "Conjunto de IPs flutuantes não localizado." msgid "" "Forbidden to exceed flavor value of number of serial ports passed in image " "meta." msgstr "" "Proibido exceder valor do tipo do número de portas seriais passadas nos " "metadados de imagem." msgid "Found no disk to snapshot." msgstr "Não foi localizado nenhum disco para captura instantânea." #, python-format msgid "Found no network for bridge %s" msgstr "Não foi encontrada rede para bridge %s" #, python-format msgid "Found non-unique network for bridge %s" msgstr "Encontrado múltiplas redes para a bridge %s" #, python-format msgid "Found non-unique network for name_label %s" msgstr "Localizada a rede não exclusiva para name_label %s" msgid "Guest does not have a console available." msgstr "O convidado não possui um console disponível" #, python-format msgid "Host %(host)s could not be found." msgstr "Host %(host)s não encontrado." #, python-format msgid "Host %(host)s is already mapped to cell %(uuid)s" msgstr "O host %(host)s já está mapeado para a célula %(uuid)s" #, python-format msgid "Host '%(name)s' is not mapped to any cell" msgstr "Host '%(name)s' não mapeado para qualquer célula" msgid "Host PowerOn is not supported by the Hyper-V driver" msgstr "O host PowerOn não é suportado pelo driver Hyper-V" msgid "Host aggregate is not empty" msgstr "Agregado do host não está vazio" msgid "Host does not support guests with NUMA topology set" msgstr "O host não suporta convidados com a topologia NUMA configurada" msgid "Host does not support guests with custom memory page sizes" msgstr "" "O host não suporta convidados com tamanhos de página de memória customizados" msgid "Host startup on XenServer is not supported." msgstr "A inicialização do host em XenServer não é suportada." msgid "Hypervisor driver does not support post_live_migration_at_source method" msgstr "" "Driver do hypervisor não suporta o método post_live_migration_at_source" #, python-format msgid "Hypervisor virt type '%s' is not valid" msgstr "O tipo hypervisor virt '%s' não é válido" #, python-format msgid "Hypervisor virtualization type '%(hv_type)s' is not recognised" msgstr "Tipo de virtualização do hypervisor '%(hv_type)s' não é reconhecido" #, python-format msgid "Hypervisor with ID '%s' could not be found." msgstr "O hypervisor com o ID '%s' não pôde ser localizado." #, python-format msgid "IP allocation over quota in pool %s." msgstr "Alocação de IP de cota no conjunto %s." msgid "IP allocation over quota." msgstr "Alocação de IP acima da cota." #, python-format msgid "Image %(image_id)s could not be found." 
msgstr "Imagem %(image_id)s não foi encontrada." #, python-format msgid "Image %(image_id)s is not active." msgstr "A imagem %(image_id)s não está ativa." #, python-format msgid "Image %(image_id)s is unacceptable: %(reason)s" msgstr "A imagem %(image_id)s é inaceitável: %(reason)s" msgid "Image disk size greater than requested disk size" msgstr "Tamanho do disco de imagem maior que o tamanho do disco solicitado" msgid "Image is not raw format" msgstr "A imagem não é um formato bruto" msgid "Image metadata limit exceeded" msgstr "Limite excedido de metadados da imagem" #, python-format msgid "Image model '%(image)s' is not supported" msgstr "O modelo de imagem '%(image)s' não é suportado" msgid "Image not found." msgstr "Imagem não encontrada." #, python-format msgid "" "Image property '%(name)s' is not permitted to override NUMA configuration " "set against the flavor" msgstr "" "Propriedade de imagem '%(name)s' não é permitida para substituir a " "configuração NUMA definida com relação ao tipo" msgid "" "Image property 'hw_cpu_policy' is not permitted to override CPU pinning " "policy set against the flavor" msgstr "" "Propriedade de imagem 'hw_cpu_policy' não é permitida para substituir o " "pinning da CPU conjunto de política em relação ao tipo" msgid "" "Image property 'hw_cpu_thread_policy' is not permitted to override CPU " "thread pinning policy set against the flavor" msgstr "" "A propriedade de imagem 'hw_cpu_thread_policy' não é permitida para " "substituir o conjunto de política de pinning de encadeamento de CPU em " "relação ao tipo" msgid "Image that the instance was started with could not be found." msgstr "A imagem que foi iniciada pela instância não pode ser localizada." #, python-format msgid "Image's config drive option '%(config_drive)s' is invalid" msgstr "Opção de unidade de configuração da imagem '%(config_drive)s' inválida" msgid "" "Images with destination_type 'volume' need to have a non-zero size specified" msgstr "" "As imagens com destination_type 'volume' precisam ter um tamanho diferente " "de zero especificado" msgid "In ERROR state" msgstr "No estado ERROR" #, python-format msgid "In states %(vm_state)s/%(task_state)s, not RESIZED/None" msgstr "Nos estados %(vm_state)s/%(task_state)s, não RESIZED/Nenhum" #, python-format msgid "In-progress live migration %(id)s is not found for server %(uuid)s." msgstr "" "A migração em tempo real em andamento %(id)s não está localizada para o " "servidor %(uuid)s." msgid "" "Incompatible settings: ephemeral storage encryption is supported only for " "LVM images." msgstr "" "Configurações incompatíveis: criptografia de armazenamento efêmera suportada " "somente para imagens LVM." #, python-format msgid "Info cache for instance %(instance_uuid)s could not be found." msgstr "" "O cache de informações para a instância %(instance_uuid)s não pôde ser " "localizado." #, python-format msgid "" "Instance %(instance)s and volume %(vol)s are not in the same " "availability_zone. Instance is in %(ins_zone)s. Volume is in %(vol_zone)s" msgstr "" "Instância %(instance)s e volume %(vol)s não estão na mesma " "availability_zone. A instância está em %(ins_zone)s. 
O volume está em " "%(vol_zone)s" #, python-format msgid "Instance %(instance)s does not have a port with id %(port)s" msgstr "A instância %(instance)s não possui uma porta com o ID%(port)s" #, python-format msgid "Instance %(instance_id)s cannot be rescued: %(reason)s" msgstr "A instância %(instance_id)s não pode ser resgatada: %(reason)s" #, python-format msgid "Instance %(instance_id)s could not be found." msgstr "A instância %(instance_id)s não pôde ser localizada." #, python-format msgid "Instance %(instance_id)s has no tag '%(tag)s'" msgstr "Instância %(instance_id)s não possui identificação ‘%(tag)s‘" #, python-format msgid "Instance %(instance_id)s is not in rescue mode" msgstr "A instância %(instance_id)s não está no modo de resgate" #, python-format msgid "Instance %(instance_id)s is not ready" msgstr "A instância %(instance_id)s não está pronta" #, python-format msgid "Instance %(instance_id)s is not running." msgstr "A instância %(instance_id)s não está executando." #, python-format msgid "Instance %(instance_id)s is unacceptable: %(reason)s" msgstr "A instância %(instance_id)s é inaceitável: %(reason)s" #, python-format msgid "Instance %(instance_uuid)s does not specify a NUMA topology" msgstr "A instância %(instance_uuid)s não especifica uma topologia NUMA" #, python-format msgid "Instance %(instance_uuid)s does not specify a migration context." msgstr "A instância %(instance_uuid)s não especifica um contexto de migração." #, python-format msgid "" "Instance %(instance_uuid)s in %(attr)s %(state)s. Cannot %(method)s while " "the instance is in this state." msgstr "" "Instância %(instance_uuid)s em %(attr)s %(state)s. Não é possível %(method)s " "enquanto a instância está nesse estado." #, python-format msgid "Instance %(instance_uuid)s is locked" msgstr "A instância %(instance_uuid)s está bloqueada" #, python-format msgid "" "Instance %(instance_uuid)s requires config drive, but it does not exist." msgstr "" "A instância %(instance_uuid)s requer a unidade de configuração, mas ela não " "existe." #, python-format msgid "Instance %(name)s already exists." msgstr "A instância %(name)s já existe." #, python-format msgid "Instance %(server_id)s is in an invalid state for '%(action)s'" msgstr "A instância %(server_id)s está em um estado inválido para '%(action)s'" #, python-format msgid "Instance %(uuid)s has no mapping to a cell." msgstr "A instância %(uuid)s não possui nenhum mapeamento para uma célula." #, python-format msgid "Instance %s not found" msgstr "Instância %s não encontrada" #, python-format msgid "Instance %s provisioning was aborted" msgstr "A instância %s que está sendo provisionada foi interrompida" msgid "Instance could not be found" msgstr "A instância não pôde ser localizada" msgid "Instance disk to be encrypted but no context provided" msgstr "Disco da instância a ser criptografado, porém, sem contexto fornecido" msgid "Instance event failed" msgstr "Evento de instância com falha" #, python-format msgid "Instance group %(group_uuid)s already exists." msgstr "O grupo de instância %(group_uuid)s já existe." #, python-format msgid "Instance group %(group_uuid)s could not be found." msgstr "O grupo de instância %(group_uuid)s não pôde ser localizado." msgid "Instance has no source host" msgstr "A instância não possui host de origem" msgid "Instance has not been resized." msgstr "A instância não foi redimensionada." 
#, python-format msgid "Instance hostname %(hostname)s is not a valid DNS name" msgstr "O nome do host da instância %(hostname)s não é um nome DNS válido" #, python-format msgid "Instance is already in Rescue Mode: %s" msgstr "A instância já está em Modo de Resgate: %s" msgid "Instance is not a member of specified network" msgstr "A instância não é um membro de rede especificado" #, python-format msgid "Instance rollback performed due to: %s" msgstr "Retrocesso de instância executado devido a: %s" #, python-format msgid "" "Insufficient Space on Volume Group %(vg)s. Only %(free_space)db available, " "but %(size)d bytes required by volume %(lv)s." msgstr "" "Espaço Insuficiente no Grupo de Volumes %(vg)s. Apenas %(free_space)db " "disponíveis, mas %(size)d bytes requeridos pelo volume %(lv)s." #, python-format msgid "Insufficient compute resources: %(reason)s." msgstr "Recursos de cálculo insuficientes: %(reason)s." #, python-format msgid "Insufficient free memory on compute node to start %(uuid)s." msgstr "" "Memória livre insuficiente no nodo de computação para iniciar %(uuid)s." #, python-format msgid "Interface %(interface)s not found." msgstr "Interface %(interface)s não encontrada." #, python-format msgid "Invalid Base 64 data for file %(path)s" msgstr "Dados Base 64 inválidos para o arquivo %(path)s" msgid "Invalid Connection Info" msgstr "Informações de conexão inválidas" #, python-format msgid "Invalid ID received %(id)s." msgstr "ID inválido recebido %(id)s." #, python-format msgid "Invalid IP format %s" msgstr "Formato de IP inválido %s" #, python-format msgid "Invalid IP protocol %(protocol)s." msgstr "Protocolo IP %(protocol)s é inválido." msgid "" "Invalid PCI Whitelist: The PCI whitelist can specify devname or address, but " "not both" msgstr "" "Lista de desbloqueio de PCI inválida: A lista de desbloqueio de PCI pode " "especificar o devname ou endereço , mas não ambos" #, python-format msgid "Invalid PCI alias definition: %(reason)s" msgstr "Definição de alias de PCI inválida: %(reason)s" #, python-format msgid "Invalid Regular Expression %s" msgstr "Expressão Regular inválida %s" #, python-format msgid "Invalid characters in hostname '%(hostname)s'" msgstr "Caracteres inválidos no nome do host '%(hostname)s'" msgid "Invalid config_drive provided." msgstr "config_drive inválida fornecida." #, python-format msgid "Invalid config_drive_format \"%s\"" msgstr "config_drive_format inválido \"%s\"" #, python-format msgid "Invalid console type %(console_type)s" msgstr "Tipo de console inválido %(console_type)s" #, python-format msgid "Invalid content type %(content_type)s." msgstr "Tipo de conteúdo %(content_type)s é inválido." #, python-format msgid "Invalid datetime string: %(reason)s" msgstr "String datetime inválida: %(reason)s" msgid "Invalid device UUID." msgstr "UUID de dispositivo inválido." #, python-format msgid "Invalid entry: '%s'" msgstr "Entrada inválida: '%s'" #, python-format msgid "Invalid entry: '%s'; Expecting dict" msgstr "Entrada inválida: '%s'; Esperando dicionário" #, python-format msgid "Invalid entry: '%s'; Expecting list or dict" msgstr "Entrada inválida: '%s'; Esperando dicionário ou lista" #, python-format msgid "Invalid exclusion expression %r" msgstr "Expressão de exclusão inválida %r" #, python-format msgid "Invalid image format '%(format)s'" msgstr "Formato de imagem inválido '%(format)s'" #, python-format msgid "Invalid image href %(image_href)s." msgstr "Imagem inválida href %(image_href)s." 
#, python-format msgid "Invalid inclusion expression %r" msgstr "Expressão de inclusão inválida %r" #, python-format msgid "" "Invalid input for field/attribute %(path)s. Value: %(value)s. %(message)s" msgstr "" "Entrada inválida para campo/atributo %(path)s. Valor: %(value)s. %(message)s" #, python-format msgid "Invalid input received: %(reason)s" msgstr "Entrada inválida recebida: %(reason)s" msgid "Invalid instance image." msgstr "Imagem de instância inválida." #, python-format msgid "Invalid is_public filter [%s]" msgstr "Filtro is_public inválido [%s]" msgid "Invalid key_name provided." msgstr "key_name inválido fornecido." #, python-format msgid "Invalid memory page size '%(pagesize)s'" msgstr "Tamanho de página de memória inválido ‘%(pagesize)s‘" msgid "Invalid metadata key" msgstr "Chave de metadados inválida" #, python-format msgid "Invalid metadata size: %(reason)s" msgstr "Tamanho de metadados inválido: %(reason)s" #, python-format msgid "Invalid metadata: %(reason)s" msgstr "Metadados inválidos: %(reason)s" #, python-format msgid "Invalid minDisk filter [%s]" msgstr "Filtro minDisk inválido [%s]" #, python-format msgid "Invalid minRam filter [%s]" msgstr "Filtro minRam inválido [%s]" #, python-format msgid "Invalid port range %(from_port)s:%(to_port)s. %(msg)s" msgstr "Sequencia de porta %(from_port)s:%(to_port)s é inválida. %(msg)s" msgid "Invalid proxy request signature." msgstr "Assinatura da solicitação de proxy inválida." #, python-format msgid "Invalid range expression %r" msgstr "Expressão de intervalo inválido %r" msgid "Invalid service catalog json." msgstr "Catálogo de serviço json inválido." msgid "Invalid start time. The start time cannot occur after the end time." msgstr "" "Horário de início inválido. O horário de início não pode ocorrer após o " "horário de encerramento." msgid "Invalid state of instance files on shared storage" msgstr "" "Estado inválido de arquivos de instância em armazenamento compartilhado" #, python-format msgid "Invalid timestamp for date %s" msgstr "Registro de data e hora inválido para a data %s" #, python-format msgid "Invalid usage_type: %s" msgstr "Usage_type inválido: %s" #, python-format msgid "Invalid value for Config Drive option: %(option)s" msgstr "Valor inválido para a opção Configuração de Unidade: %(option)s" #, python-format msgid "Invalid virtual interface address %s in request" msgstr "Endereço de interface virtual inválido %s na solicitação" #, python-format msgid "Invalid volume access mode: %(access_mode)s" msgstr "Modo de acesso a volume inválido: %(access_mode)s" #, python-format msgid "Invalid volume: %(reason)s" msgstr "Volume inválido: %(reason)s" msgid "Invalid volume_size." msgstr "volume_size inválido." #, python-format msgid "Ironic node uuid not supplied to driver for instance %s." msgstr "UUID do nó do Ironic não fornecido ao driver para a instância %s." #, python-format msgid "" "It is not allowed to create an interface on external network %(network_uuid)s" msgstr "Não é permitido criar uma interface na rede externa %(network_uuid)s" #, python-format msgid "" "Kernel/Ramdisk image is too large: %(vdi_size)d bytes, max %(max_size)d bytes" msgstr "" "A imagem de Kernel/Ramdisk é muito grande: %(vdi_size)d bytes, máx. " "%(max_size)d bytes" msgid "" "Key Names can only contain alphanumeric characters, periods, dashes, " "underscores, colons and spaces." msgstr "" "Nomes de Chave podem conter apenas caracteres alfanuméricos, pontos, hifens, " "sublinhados, dois-pontos e espaços." 
#, python-format msgid "Key manager error: %(reason)s" msgstr "Erro do gerenciador de chaves: %(reason)s" #, python-format msgid "Key pair '%(key_name)s' already exists." msgstr "O par de chaves '%(key_name)s' já existe." #, python-format msgid "Keypair %(name)s not found for user %(user_id)s" msgstr "Par de chaves %(name)s não localizado para o usuário %(user_id)s" #, python-format msgid "Keypair data is invalid: %(reason)s" msgstr "Dados do par de chaves é inválido: %(reason)s" msgid "Keypair name contains unsafe characters" msgstr "O nome do par de chaves contém caracteres não seguros" msgid "Keypair name must be string and between 1 and 255 characters long" msgstr "" "O nome do par de chaves deve ser uma sequência e entre 1 e 255 caracteres de " "comprimento" msgid "Limits only supported from vCenter 6.0 and above" msgstr "Limites suportados somente a partir do vCenter 6.0 e acima" #, python-format msgid "Live migration %(id)s for server %(uuid)s is not in progress." msgstr "" "A migração em tempo real %(id)s para o servidor %(uuid)s não está em " "andamento. " #, python-format msgid "Malformed message body: %(reason)s" msgstr "Corpo da mensagem malformado: %(reason)s" #, python-format msgid "" "Malformed request URL: URL's project_id '%(project_id)s' doesn't match " "Context's project_id '%(context_project_id)s'" msgstr "" "URL de solicitação Malformada: project_id '%(project_id)s' da URL não " "corresponde ao project_id '%(context_project_id)s' do contexto" msgid "Malformed request body" msgstr "Corpo do pedido está mal formado" msgid "Mapping image to local is not supported." msgstr "Mapeamento de imagem para local não é suportado." #, python-format msgid "Marker %(marker)s could not be found." msgstr "O marcador %(marker)s não pôde ser localizado." msgid "Maximum number of floating IPs exceeded" msgstr "Número máximo de IPs flutuantes excedido" msgid "Maximum number of key pairs exceeded" msgstr "Número máximo de pares de chaves excedido" #, python-format msgid "Maximum number of metadata items exceeds %(allowed)d" msgstr "O número máximo de itens de metadados excede %(allowed)d" msgid "Maximum number of ports exceeded" msgstr "Número máximo de portas excedido" msgid "Maximum number of security groups or rules exceeded" msgstr "Número máximo de grupos de segurança ou regras excedido" msgid "Metadata item was not found" msgstr "O item de metadados não foi localizado" msgid "Metadata property key greater than 255 characters" msgstr "Chave da propriedade de metadados com mais de 255 caracteres" msgid "Metadata property value greater than 255 characters" msgstr "Valor da propriedade de metadados com mais de 255 caracteres" msgid "Metadata type should be dict." msgstr "Tipo de metadados deve ser dic." #, python-format msgid "" "Metric %(name)s could not be found on the compute host node %(host)s." "%(node)s." msgstr "" "Métrica %(name)s não pôde ser localizada no nó de host de cálculo %(host)s." "%(node)s." msgid "Migrate Receive failed" msgstr "Falha em Migrar Recebimento" msgid "Migrate Send failed" msgstr "Falha em Migrar Envio" #, python-format msgid "Migration %(id)s for server %(uuid)s is not live-migration." msgstr "" "A migração %(id)s para o servidor %(uuid)s não é uma migração em tempo " "real. " #, python-format msgid "Migration %(migration_id)s could not be found." msgstr "A migração %(migration_id)s não pôde ser localizada." 
#, python-format msgid "Migration %(migration_id)s not found for instance %(instance_id)s" msgstr "" "Migração %(migration_id)s não localizada para a instância %(instance_id)s" #, python-format msgid "" "Migration %(migration_id)s state of instance %(instance_uuid)s is %(state)s. " "Cannot %(method)s while the migration is in this state." msgstr "" "O estado da migração %(migration_id)s da instância %(instance_uuid)s é " "%(state)s. O método %(method)s não é possível enquanto a migração estiver " "nesse estado." #, python-format msgid "Migration error: %(reason)s" msgstr "Erro de migração: %(reason)s" msgid "Migration is not supported for LVM backed instances" msgstr "A migração não é suportada para instâncias do LVM de backup" #, python-format msgid "" "Migration not found for instance %(instance_id)s with status %(status)s." msgstr "" "Migração não localizada para a instância %(instance_id)s com o status " "%(status)s." #, python-format msgid "Migration pre-check error: %(reason)s" msgstr "Erro de pré-verificação de migração: %(reason)s" #, python-format msgid "Migration select destinations error: %(reason)s" msgstr "Erro de destinos de seleção de migração: %(reason)s" #, python-format msgid "Missing arguments: %s" msgstr "Argumentos ausentes: %s" #, python-format msgid "Missing column %(table)s.%(column)s in shadow table" msgstr "Coluna ausente %(table)s.%(column)s na tabela de sombra" msgid "Missing device UUID." msgstr "UUID de dispositivo faltando." msgid "Missing disabled reason field" msgstr "Está faltando o campo do motivo da desativação" msgid "Missing forced_down field" msgstr "Faltando campo forced_down" msgid "Missing imageRef attribute" msgstr "Atributo imageRef ausente" #, python-format msgid "Missing keys: %s" msgstr "Chaves ausentes: %s" #, python-format msgid "Missing parameter %s" msgstr "Parâmetro ausente %s" msgid "Missing parameter dict" msgstr "Dicionário de parâmetros ausente" #, python-format msgid "" "More than one instance is associated with fixed IP address '%(address)s'." msgstr "" "Mais de uma instância está associada ao endereço IP fixo '%(address)s'." msgid "" "More than one possible network found. Specify network ID(s) to select which " "one(s) to connect to." msgstr "" "Mais de uma rede possível localizada. Especifique ID(s) de rede para " "selecionar qual(is) a se conectar." msgid "More than one swap drive requested." msgstr "Mais de uma unidade de troca solicitada." #, python-format msgid "Multi-boot operating system found in %s" msgstr "Sistema operacional de multi-inicialização localizado em %s" msgid "Multiple X-Instance-ID headers found within request." msgstr "Vários cabeçalhos X-Instance-ID localizados dentro da solicitação." msgid "Multiple X-Tenant-ID headers found within request." msgstr "Vários cabeçalhos X-Tenant-ID localizados dentro da solicitação." #, python-format msgid "Multiple floating IP pools matches found for name '%s'" msgstr "" "Vários correspondências de conjuntos de IP flutuantes localizadas para o " "nome '%s'" #, python-format msgid "Multiple floating IPs are found for address %(address)s." msgstr "Vários IPs flutuantes foram localizados para o endereço %(address)s." msgid "" "Multiple hosts may be managed by the VMWare vCenter driver; therefore we do " "not return uptime for just one host." msgstr "" "Vários hosts podem ser gerenciados pelo driver vCenter VMWare; portanto, não " "retorne o tempo de atividade apenas para um host." msgid "Multiple possible networks found, use a Network ID to be more specific." 
msgstr "" "Várias redes possíveis localizados, use um ID de Rede para ser mais " "específico." #, python-format msgid "" "Multiple security groups found matching '%s'. Use an ID to be more specific." msgstr "" "Foram localizados vários grupos de segurança que correspondem a '%s'. Use um " "ID para ser mais específico." msgid "Must input network_id when request IP address" msgstr "network_id deve ser inserido quando o endereço IP for solicitado" msgid "Must not input both network_id and port_id" msgstr "Ambos network_id e port_id não devem ser inseridos" msgid "" "Must specify connection_url, connection_username (optionally), and " "connection_password to use compute_driver=xenapi.XenAPIDriver" msgstr "" "Deve especificar connection_url, connection_username (opcionalmente) e " "connection_password para usar compute_driver=xenapi.XenAPIDriver" msgid "" "Must specify host_ip, host_username and host_password to use vmwareapi." "VMwareVCDriver" msgstr "" "Deve especificar host_ip, host_username e host_password para utilizar " "vmwareapi.VMwareVCDriver" msgid "Must supply a positive value for max_number" msgstr "Deve-se fornecer um valor positivo para o max_number" msgid "Must supply a positive value for max_rows" msgstr "Deve ser fornecido um valor positivo para max_rows" #, python-format msgid "Network %(network_id)s could not be found." msgstr "Rede %(network_id)s não foi encontrada." #, python-format msgid "" "Network %(network_uuid)s requires a subnet in order to boot instances on." msgstr "" "A rede %(network_uuid)s requer uma sub-rede para inicializar instâncias." #, python-format msgid "Network could not be found for bridge %(bridge)s" msgstr "A rede não pôde ser localizada para a ponte %(bridge)s" #, python-format msgid "Network could not be found for instance %(instance_id)s." msgstr "A rede não pôde ser localizada para a instância %(instance_id)s." msgid "Network not found" msgstr "Rede não localizada" msgid "" "Network requires port_security_enabled and subnet associated in order to " "apply security groups." msgstr "" "A rede requer port_security_enabled e sub-rede associados para aplicar " "grupos de segurança." msgid "New volume must be detached in order to swap." msgstr "O novo volume deve ser removido para a troca." msgid "New volume must be the same size or larger." msgstr "O novo volume deve ser do mesmo tamanho ou maior." #, python-format msgid "No Block Device Mapping with id %(id)s." msgstr "Nenhum Mapeamento de Dispositivo de Bloco com id %(id)s." msgid "No Unique Match Found." msgstr "Nenhuma Correspondência Exclusiva Localizada." #, python-format msgid "No agent-build associated with id %(id)s." msgstr "Nenhuma criação de agente associada ao id %(id)s." 
msgid "No compute host specified" msgstr "Nenhum host de cálculo especificado" #, python-format msgid "No configuration information found for operating system %(os_name)s" msgstr "" "Nenhuma informação de configuração localizada para o sistema operacional " "%(os_name)s" #, python-format msgid "No device with MAC address %s exists on the VM" msgstr "Nenhum dispositivo com endereço MAC %s existe na VM" #, python-format msgid "No device with interface-id %s exists on VM" msgstr "Nenhum dispositivo com interface-id %s existe na VM" #, python-format msgid "No disk at %(location)s" msgstr "Nenhum disco em %(location)s" #, python-format msgid "No fixed IP addresses available for network: %(net)s" msgstr "Nenhum endereço IP fixo disponível para rede: %(net)s" msgid "No fixed IPs associated to instance" msgstr "Nenhum IP fixo associado à instância" msgid "No free nbd devices" msgstr "Nenhum dispositivo nbd livre" msgid "No host available on cluster" msgstr "Nenhum host disponível no cluster" msgid "No hosts found to map to cell, exiting." msgstr "Nenhum host localizado para mapear para a célula, encerrando." #, python-format msgid "No hypervisor matching '%s' could be found." msgstr "Nenhum hypervisor corresponde a '%s' pôde ser localizado." msgid "No image locations are accessible" msgstr "Nenhum local da imagem é acessível" #, python-format msgid "" "No live migration URI configured and no default available for \"%(virt_type)s" "\" hypervisor virtualization type." msgstr "" "Nenhum URI de migração em tempo real configurado e nenhum padrão disponível " "para o tipo de virtualização do hypervisor \"%(virt_type)s\"." msgid "No more floating IPs available." msgstr "Nenhum IP flutuante disponível." #, python-format msgid "No more floating IPs in pool %s." msgstr "Sem IPs flutuantes no conjunto %s." #, python-format msgid "No mount points found in %(root)s of %(image)s" msgstr "Nenhum ponto de montagem localizado em %(root)s de %(image)s" #, python-format msgid "No operating system found in %s" msgstr "Nenhum sistema operacional localizado em %s" #, python-format msgid "No primary VDI found for %s" msgstr "Nenhum VDI primário localizado para %s" msgid "No root disk defined." msgstr "Nenhum disco raiz definido." #, python-format msgid "" "No specific network was requested and none are available for project " "'%(project_id)s'." msgstr "" "Nenhuma rede específica foi solicitada e nenhuma está disponível para o " "projeto '%(project_id)s'." msgid "No suitable network for migrate" msgstr "Nenhuma rede adequada para migração" msgid "No valid host found for cold migrate" msgstr "Nenhum host válido localizado para a migração a frio" msgid "No valid host found for resize" msgstr "Nenhum host válido localizado para redimensionamento" #, python-format msgid "No valid host was found. %(reason)s" msgstr "Nenhum host válido localizado. %(reason)s" #, python-format msgid "No volume Block Device Mapping at path: %(path)s" msgstr "" "Nenhum Mapeamento do Dispositivo de Bloco do volume no caminho: %(path)s" #, python-format msgid "No volume Block Device Mapping with id %(volume_id)s." msgstr "" "Nenhum Mapeamento de Dispositivo de Bloco do volume com o ID %(volume_id)s." #, python-format msgid "Node %s could not be found." msgstr "O nó %s não pôde ser localizado." 
#, python-format msgid "Not able to acquire a free port for %(host)s" msgstr "Não é possível adquirir uma porta livre para %(host)s" #, python-format msgid "Not able to bind %(host)s:%(port)d, %(error)s" msgstr "Não é possível ligar %(host)s:%(port)d, %(error)s" #, python-format msgid "" "Not all Virtual Functions of PF %(compute_node_id)s:%(address)s are free." msgstr "" "Nem todas as Funções Virtuais do %(compute_node_id)s:%(address)s estão " "livres." msgid "Not an rbd snapshot" msgstr "Não uma captura instantânea de rbd" #, python-format msgid "Not authorized for image %(image_id)s." msgstr "Não autorizado para a imagem %(image_id)s." msgid "Not authorized." msgstr "Não autorizado." msgid "Not enough parameters to build a valid rule." msgstr "Não há parâmetros suficientes para construir uma regra válida." msgid "Not implemented on Windows" msgstr "Não implementado no Windows" msgid "Not stored in rbd" msgstr "Não armazenado em rbd" msgid "Nothing was archived." msgstr "Nada foi arquivado" #, python-format msgid "Nova requires libvirt version %s or greater." msgstr "Nova requer a versão libvirt %s ou superior." msgid "Number of Rows Archived" msgstr "Número de Linhas Arquivadas" #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "A ação do objeto %(action)s falhou porque: %(reason)s" msgid "Old volume is attached to a different instance." msgstr "Um volume antigo está anexado a uma instância diferente." #, python-format msgid "One or more hosts already in availability zone(s) %s" msgstr "Um ou mais hosts já na(s) zona(s) de disponibilidade %s" msgid "Only administrators may list deleted instances" msgstr "Apenas administradores podem listar instância excluídas" #, python-format msgid "" "Only file-based SRs (ext/NFS) are supported by this feature. SR %(uuid)s is " "of type %(type)s" msgstr "" "Apenas SRs bseados em arquivo (ext/NFS) são suportados por este recurso. SR " "%(uuid)s é do tipo %(type)s" msgid "Origin header does not match this host." msgstr "Cabeçalho de origem não corresponde a esse host." msgid "Origin header not valid." msgstr "Cabeçalho de origem não é válido." msgid "Origin header protocol does not match this host." msgstr "Protocolo do cabeçalho de origem não corresponde a esse host." #, python-format msgid "PCI Device %(node_id)s:%(address)s not found." msgstr "Dispositivo PCI %(node_id)s:%(address)s não localizado." #, python-format msgid "PCI alias %(alias)s is not defined" msgstr "O alias de PCI %(alias)s não está definido" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is %(status)s instead of " "%(hopestatus)s" msgstr "" "Dispositivo PCI %(compute_node_id)s:%(address)s é %(status)s ao invés de " "%(hopestatus)s" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is owned by %(owner)s instead of " "%(hopeowner)s" msgstr "" "Dispositivo PCI %(compute_node_id)s:%(address)s pertence a %(owner)s ao " "invés de %(hopeowner)s" #, python-format msgid "PCI device %(id)s not found" msgstr "Dispositivo PCI %(id)s não localizado" #, python-format msgid "PCI device request %(requests)s failed" msgstr "A solicitação de dispositivo PCI %(requests)s falhou" #, python-format msgid "PIF %s does not contain IP address" msgstr "PIF %s não contém endereço IP" #, python-format msgid "Page size %(pagesize)s forbidden against '%(against)s'" msgstr "Tamanho da página %(pagesize)s proibido contra '%(against)s'" #, python-format msgid "Page size %(pagesize)s is not supported by the host." 
msgstr "Tamanho da página %(pagesize)s não é suportado pelo host." #, python-format msgid "" "Parameters %(missing_params)s not present in vif_details for vif %(vif_id)s. " "Check your Neutron configuration to validate that the macvtap parameters are " "correct." msgstr "" "Parâmetros %(missing_params)s não presentes em vif_details para vif " "%(vif_id)s. Verifique a configuração do Neutron para validar se os " "parâmetros macvtap estão corretos." #, python-format msgid "Path %s must be LVM logical volume" msgstr "O caminho %s deve ser um volume lógico LVM" msgid "Paused" msgstr "Interrompido" msgid "Personality file limit exceeded" msgstr "Limite excedido do arquivo de personalidade" #, python-format msgid "" "Physical Function %(compute_node_id)s:%(address)s, related to VF " "%(compute_node_id)s:%(vf_address)s is %(status)s instead of %(hopestatus)s" msgstr "" "A Função Física %(compute_node_id)s:%(address)s relacionada ao VF " "%(compute_node_id)s:%(vf_address)s é %(status)s em vez de %(hopestatus)s" #, python-format msgid "Physical network is missing for network %(network_uuid)s" msgstr "A rede física está ausente para a rede %(network_uuid)s" #, python-format msgid "Policy doesn't allow %(action)s to be performed." msgstr "A política não permite que %(action)s sejam executadas." #, python-format msgid "Port %(port_id)s is still in use." msgstr "A porta %(port_id)s ainda está em uso." #, python-format msgid "Port %(port_id)s not usable for instance %(instance)s." msgstr "Porta %(port_id)s não utilizável para a instância %(instance)s." #, python-format msgid "" "Port %(port_id)s not usable for instance %(instance)s. Value %(value)s " "assigned to dns_name attribute does not match instance's hostname " "%(hostname)s" msgstr "" "Porta %(port_id)s não utilizável para a instância %(instance)s. O valor " "%(value)s designado para o atributo dns_name não corresponde ao nome do host " "da instância %(hostname)s" #, python-format msgid "Port %(port_id)s requires a FixedIP in order to be used." msgstr "A porta %(port_id)s requer um FixedIP para ser usado." #, python-format msgid "Port %s is not attached" msgstr "Porta %s não está conectada" #, python-format msgid "Port id %(port_id)s could not be found." msgstr "O ID da porta %(port_id)s não pôde ser localizado." #, python-format msgid "Provided video model (%(model)s) is not supported." msgstr "Modelo de vídeo fornecido (%(model)s) não é suportado." #, python-format msgid "Provided watchdog action (%(action)s) is not supported." msgstr "Ação de watchdog fornecida (%(action)s) não é suportada." msgid "QEMU guest agent is not enabled" msgstr "O agente convidado QEMU não está ativado" #, python-format msgid "Quiescing is not supported in instance %(instance_id)s" msgstr "Quiesce não é suportado na instância %(instance_id)s" #, python-format msgid "Quota class %(class_name)s could not be found." msgstr "A classe da cota %(class_name)s não pôde ser localizada." msgid "Quota could not be found" msgstr "A cota não pôde ser localizada" #, python-format msgid "" "Quota exceeded for %(overs)s: Requested %(req)s, but already used %(used)s " "of %(allowed)s %(overs)s" msgstr "" "Cota excedida para %(overs)s: Solicitados %(req)s, mas já usados %(used)s " "%(allowed)s %(overs)s" #, python-format msgid "Quota exceeded for resources: %(overs)s" msgstr "Cota excedida para os recursos: %(overs)s" msgid "Quota exceeded, too many key pairs." msgstr "Cota excedida; excesso de pares de chaves." msgid "Quota exceeded, too many server groups." 
msgstr "Cota excedida, grupos de servidor em excesso." msgid "Quota exceeded, too many servers in group" msgstr "Cota excedida, servidores em excesso no grupo" #, python-format msgid "Quota exceeded: code=%(code)s" msgstr "Quota excedida: codigo=%(code)s" #, python-format msgid "Quota exists for project %(project_id)s, resource %(resource)s" msgstr "Existe cota para o projeto %(project_id)s, recurso %(resource)s" #, python-format msgid "Quota for project %(project_id)s could not be found." msgstr "A cota para o projeto %(project_id)s não pôde ser localizada." #, python-format msgid "" "Quota for user %(user_id)s in project %(project_id)s could not be found." msgstr "" "Cota para o usuário %(user_id)s no projeto %(project_id)s não pôde ser " "encontrada." #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be greater than or equal to " "already used and reserved %(minimum)s." msgstr "" "O limite de cota %(limit)s para %(resource)s deve ser maior ou igual ao " "%(minimum)s já utilizado e reservado." #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be less than or equal to " "%(maximum)s." msgstr "" "O limite de cota %(limit)s para %(resource)s deve ser menor ou igual a " "%(maximum)s." #, python-format msgid "Reached maximum number of retries trying to unplug VBD %s" msgstr "" "Número máximo de novas tentativas atingido ao tentar desconectar o VBD %s" msgid "" "Realtime policy needs vCPU(s) mask configured with at least 1 RT vCPU and 1 " "ordinary vCPU. See hw:cpu_realtime_mask or hw_cpu_realtime_mask" msgstr "" "A política em tempo real precisa da máscara de vCPU(s) configurada com pelo " "menos 1 vCPU RT e 1 vCPU ordinária. Consulte hw:cpu_realtime_mask ou " "hw_cpu_realtime_mask" msgid "Request body and URI mismatch" msgstr "Corpo do pedido e incompatibilidade URI" msgid "Request is too large." msgstr "A solicitação é muito grande." #, python-format msgid "Request of image %(image_id)s got BadRequest response: %(response)s" msgstr "" "A solicitação da imagem %(image_id)s obteve a resposta BadRequest: " "%(response)s" #, python-format msgid "RequestSpec not found for instance %(instance_uuid)s" msgstr "RequestSpec não localizado para a instância %(instance_uuid)s" msgid "Requested CPU control policy not supported by host" msgstr "Política de controle de CPU solicitada não suportada pelo host. " #, python-format msgid "" "Requested hardware '%(model)s' is not supported by the '%(virt)s' virt driver" msgstr "" "O hardware solicitado '%(model)s' não é suportado pelo driver virt '%(virt)s'" #, python-format msgid "Requested image %(image)s has automatic disk resize disabled." msgstr "" "A imagem solicitada %(image)s possui o redimensionamento automático de disco " "desativado." 
msgid "" "Requested instance NUMA topology cannot fit the given host NUMA topology" msgstr "" "A topologia NUMA da instância solicitada não pode ser ajustada na topologia " "NUMA do host determinado" msgid "" "Requested instance NUMA topology together with requested PCI devices cannot " "fit the given host NUMA topology" msgstr "" "A topologia NUMA da instância solicitada juntamente com os dispositivos PCI " "solicitados não cabe na topologia NUMA do host determinado" #, python-format msgid "" "Requested vCPU limits %(sockets)d:%(cores)d:%(threads)d are impossible to " "satisfy for vcpus count %(vcpus)d" msgstr "" "Os limites de vCPU solicitados %(sockets)d:%(cores)d:%(threads)d são " "impossíveis de satisfazer para contagens de vcpus %(vcpus)d" #, python-format msgid "Rescue device does not exist for instance %s" msgstr "Dispositivo de resgate não existe para a instância %s" #, python-format msgid "Resize error: %(reason)s" msgstr "Erro de redimensionamento: %(reason)s" msgid "Resize to zero disk flavor is not allowed." msgstr "Redimensionar para tipo de disco zero não é permitido." msgid "Resource could not be found." msgstr "O recurso não pôde ser localizado." msgid "Resumed" msgstr "Retomado" #, python-format msgid "Root element name should be '%(name)s' not '%(tag)s'" msgstr "Nome do elemento raiz deve ser '%(name)s‘, não '%(tag)s'" #, python-format msgid "Running batches of %i until complete" msgstr "Executando lotes de %i até a conclusão" #, python-format msgid "Scheduler Host Filter %(filter_name)s could not be found." msgstr "" "O Filtro do Host do Planejador %(filter_name)s não pôde ser localizado." #, python-format msgid "Security group %(name)s is not found for project %(project)s" msgstr "Grupo de segurança %(name)s não localizado para o projeto %(project)s" #, python-format msgid "" "Security group %(security_group_id)s not found for project %(project_id)s." msgstr "" "Grupo de segurança %(security_group_id)s não localizado para o projeto " "%(project_id)s." #, python-format msgid "Security group %(security_group_id)s not found." msgstr "Grupo de segurança %(security_group_id)s não localizado." #, python-format msgid "" "Security group %(security_group_name)s already exists for project " "%(project_id)s." msgstr "" "O grupo de segurança %(security_group_name)s já existe para o projeto " "%(project_id)s." 
#, python-format msgid "" "Security group %(security_group_name)s not associated with the instance " "%(instance)s" msgstr "" "Grupo de segurança %(security_group_name)s não associado à instância " "%(instance)s" msgid "Security group id should be uuid" msgstr "O ID do grupo de segurança deve ser uuid" msgid "Security group name cannot be empty" msgstr "O nome do grupo de segurança não pode estar vazio" msgid "Security group not specified" msgstr "Grupo de segurança não especificado" #, python-format msgid "Server disk was unable to be resized because: %(reason)s" msgstr "O disco do servidor não pôde ser redimensionado porque: %(reason)s" msgid "Server does not exist" msgstr "O servidor não existe" #, python-format msgid "ServerGroup policy is not supported: %(reason)s" msgstr "Política do ServerGroup não é suportada: %(reason)s" msgid "ServerGroupAffinityFilter not configured" msgstr "ServerGroupAffinityFilter não configurado" msgid "ServerGroupAntiAffinityFilter not configured" msgstr "ServerGroupAntiAffinityFilter não configurado" msgid "ServerGroupSoftAffinityWeigher not configured" msgstr "ServerGroupSoftAffinityWeigher não configurado" msgid "ServerGroupSoftAntiAffinityWeigher not configured" msgstr "ServerGroupSoftAntiAffinityWeigher não configurado" #, python-format msgid "Service %(service_id)s could not be found." msgstr "Serviço %(service_id)s não encontrado." #, python-format msgid "Service %s not found." msgstr "Serviço %s não localizado." msgid "Service is unavailable at this time." msgstr "Serviço está indisponível neste momento" #, python-format msgid "Service with host %(host)s binary %(binary)s exists." msgstr "O serviço com host %(host)s binário %(binary)s existe." #, python-format msgid "Service with host %(host)s topic %(topic)s exists." msgstr "O serviço com host %(host)s tópico %(topic)s existe." msgid "Set admin password is not supported" msgstr "Definir senha admin não é suportado" #, python-format msgid "Shadow table with name %(name)s already exists." msgstr "A tabela de sombra com o nome %(name)s já existe." #, python-format msgid "Share '%s' is not supported" msgstr "O compartilhamento '%s' não é suportado" #, python-format msgid "Share level '%s' cannot have share configured" msgstr "" "O nível de compartilhamento '%s' não pode ter compartilhamento configurado" msgid "" "Shrinking the filesystem down with resize2fs has failed, please check if you " "have enough free space on your disk." msgstr "" "Redução do sistema de arquivos com resize2fs falhou, verifique se você tem " "espaço livre suficiente em disco." #, python-format msgid "Snapshot %(snapshot_id)s could not be found." msgstr "A captura instantânea %(snapshot_id)s não pôde ser localizada." msgid "Some required fields are missing" msgstr "Alguns campos requeridos estão faltando." #, python-format msgid "" "Something went wrong when deleting a volume snapshot: rebasing a " "%(protocol)s network disk using qemu-img has not been fully tested" msgstr "" "|Algo de errado ocorreu ao excluir uma captura instantânea de um volume: O " "rebaseamento de um disco de rede %(protocol)s usando qemu-img não foi " "testado totalmente" msgid "Sort direction size exceeds sort key size" msgstr "" "O tamanho de direção de classificação excede o tamanho da chave de " "classificação" msgid "Sort key supplied was not valid." msgstr "A chave de classificação fornecida não era válida." 
msgid "Specified fixed address not assigned to instance" msgstr "Endereço fixo especificado não designado à instância" msgid "Specify `table_name` or `table` param" msgstr "Spe" msgid "Specify only one param `table_name` `table`" msgstr "Especifique apenas um parâmetro `table_name` `table`" msgid "Started" msgstr "Iniciado" msgid "Stopped" msgstr "Interrompido" #, python-format msgid "Storage error: %(reason)s" msgstr "Erro de armazenamento: %(reason)s" #, python-format msgid "Storage policy %s did not match any datastores" msgstr "" "Política de armazenamento %s não corresponde a nenhum armazenamento de dados" msgid "Success" msgstr "Sucesso" msgid "Suspended" msgstr "Suspenso" msgid "Swap drive requested is larger than instance type allows." msgstr "Drive de swap é maior do que o tipo de instância permite." msgid "Table" msgstr "Tabela" #, python-format msgid "Task %(task_name)s is already running on host %(host)s" msgstr "A tarefa %(task_name)s já está em execução no host %(host)s" #, python-format msgid "Task %(task_name)s is not running on host %(host)s" msgstr "A tarefa %(task_name)s não está em execução no host %(host)s" #, python-format msgid "The PCI address %(address)s has an incorrect format." msgstr "O endereço PCI %(address)s possui um formato incorreto." msgid "The backlog must be more than 0" msgstr "O backlog deve ser maior que 0" #, python-format msgid "The console port range %(min_port)d-%(max_port)d is exhausted." msgstr "" "O intervalo de portas do console %(min_port)d-%(max_port)d está esgotado." msgid "The created instance's disk would be too small." msgstr "O disco da instância criada seria muito pequeno." msgid "The current driver does not support preserving ephemeral partitions." msgstr "O driver atual não suporta a preservação partições temporárias." msgid "The default PBM policy doesn't exist on the backend." msgstr "A política de PBM padrão não existe no backend." msgid "The floating IP request failed with a BadRequest" msgstr "A solicitação de IP flutuante falhou com um BadRequest" msgid "" "The instance requires a newer hypervisor version than has been provided." msgstr "" "A instância requer uma versão de hypervisor mais recente do que a fornecida." #, python-format msgid "The number of defined ports: %(ports)d is over the limit: %(quota)d" msgstr "" "O número de portas definidas: %(ports)d está acima do limite: %(quota)d" msgid "The only partition should be partition 1." msgstr "A única partição deve ser a partição 1." #, python-format msgid "The provided RNG device path: (%(path)s) is not present on the host." msgstr "" "O caminho de dispositivo RNG fornecido: (%(path)s) não está presente no host." msgid "The request body can't be empty" msgstr "O corpo da solicitação não pode estar vazio" msgid "The request is invalid." msgstr "A requisição é inválida." #, python-format msgid "" "The requested amount of video memory %(req_vram)d is higher than the maximum " "allowed by flavor %(max_vram)d." msgstr "" "A quantidade solicitada de memória de vídeo %(req_vram)d é maior que o " "máximo permitido pelo tipo %(max_vram)d." msgid "The requested availability zone is not available" msgstr "A zona de disponibilidade solicitada não está disponível" msgid "The requested console type details are not accessible" msgstr "Os detalhes do tipo de console solicitado não estão acessíveis" msgid "The requested functionality is not supported." msgstr "A funcionalidade solicitada não é suportada." 
#, python-format msgid "The specified cluster '%s' was not found in vCenter" msgstr "O cluster especificado '%s' não foi localizado no vCenter" #, python-format msgid "The supplied device path (%(path)s) is in use." msgstr "O caminho de dispositivo fornecido (%(path)s) está em uso." #, python-format msgid "The supplied device path (%(path)s) is invalid." msgstr "O caminho do dispositivo fornecido (%(path)s) é inválido." #, python-format msgid "" "The supplied disk path (%(path)s) already exists, it is expected not to " "exist." msgstr "" "O caminho de disco fornecido (%(path)s) já existe, era esperado não existir." msgid "The supplied hypervisor type of is invalid." msgstr "O tipo de hypervisor fornecido é inválido." msgid "The target host can't be the same one." msgstr "O host de destino não pode ser o mesmo." #, python-format msgid "The token '%(token)s' is invalid or has expired" msgstr "O token ‘%(token)s‘ é inválido ou expirou" #, python-format msgid "" "The volume cannot be assigned the same device name as the root device %s" msgstr "" "O volume não pode ser atribuído ao mesmo nome de dispositivo que o " "dispositivo raiz %s" #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. Run this command again with the --delete " "option after you have backed up any necessary data." msgstr "" "Existem registros %(records)d na tabela ‘%(table_name)s‘ em que a coluna " "uuid ou instance_uuid é NULA. Execute este comando novamente com a opção --" "delete depois de ter feito backup de todos os dados necessários." #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. These must be manually cleaned up before " "the migration will pass. Consider running the 'nova-manage db " "null_instance_uuid_scan' command." msgstr "" "Existem registros %(records)d na tabela ‘%(table_name)s‘ em que a coluna " "uuid ou instance_uuid é NULA. Esses devem ser limpos manualmente antes da " "migração ser aprovada. Considere executar o comando 'nova-manage db " "null_instance_uuid_scan'." msgid "There are not enough hosts available." msgstr "Não há hosts suficientes disponíveis." #, python-format msgid "" "There are still %(count)i unmigrated flavor records. Migration cannot " "continue until all instance flavor records have been migrated to the new " "format. Please run `nova-manage db migrate_flavor_data' first." msgstr "" "Ainda há %(count)i registros de tipo não migrados. A migração não pode " "continuar até que todos os registros de tipo de instância tenham sido " "migrados para o novo formato. Execute `nova-manage db migrate_flavor_data' " "primeiro." #, python-format msgid "There is no such action: %s" msgstr "Essa ação não existe: %s" msgid "There were no records found where instance_uuid was NULL." msgstr "Não houve registros localizados em que instance_uuid era NULO." #, python-format msgid "" "This compute node's hypervisor is older than the minimum supported version: " "%(version)s." msgstr "" "Esse hypervisor de nó de cálculo é mais antigo que a versão mínima " "suportada: %(version)s." msgid "This domU must be running on the host specified by connection_url" msgstr "" "Este domU deve estar em execução no host especificado pela connection_url" msgid "" "This method needs to be called with either networks=None and port_ids=None " "or port_ids and networks as not none." 
msgstr "" "Esse método precisa ser chamado com networks=None e port_ids=None ou com " "port_ids e rede como não none." #, python-format msgid "This rule already exists in group %s" msgstr "Esta regra já existe no grupo %s" #, python-format msgid "" "This service is older (v%(thisver)i) than the minimum (v%(minver)i) version " "of the rest of the deployment. Unable to continue." msgstr "" "Esse serviço é mais antigo (v%(thisver)i) que a versão mínima (v%(minver)i) " "do resto da implementação. Não é possível continuar." #, python-format msgid "Timeout waiting for device %s to be created" msgstr "Tempo limite de espera para que o dispositivo %s seja criado" msgid "Timeout waiting for response from cell" msgstr "Aguardando tempo limite para a resposta da célula" #, python-format msgid "Timeout while checking if we can live migrate to host: %s" msgstr "" "Tempo limite atingido ao verificar se é possível migrar em tempo real para o " "host: %s" msgid "To and From ports must be integers" msgstr "Portas Para e De devem ser números inteiros" msgid "Token not found" msgstr "Token não localizado" msgid "Triggering crash dump is not supported" msgstr "O acionamento de dump de travamento não é suportado." msgid "Type and Code must be integers for ICMP protocol type" msgstr "Tipo e Código devem ser números inteiros para o tipo de protocolo ICMP" msgid "UEFI is not supported" msgstr "UEFI não é suportado" #, python-format msgid "" "Unable to associate floating IP %(address)s to any fixed IPs for instance " "%(id)s. Instance has no fixed IPv4 addresses to associate." msgstr "" "Não é possível associar o IP flutuante %(address)s a nenhum IP fixo para a " "instância %(id)s. A instância não possui nenhum endereço IPv4 fixo para " "associar." #, python-format msgid "" "Unable to associate floating IP %(address)s to fixed IP %(fixed_address)s " "for instance %(id)s. Error: %(error)s" msgstr "" "Não é possível associar o IP flutuante %(address)s ao IP fixo " "%(fixed_address)s para a instância %(id)s. Erro: %(error)s" msgid "Unable to authenticate Ironic client." msgstr "Não é possível autenticar cliente Ironic." #, python-format msgid "Unable to contact guest agent. The following call timed out: %(method)s" msgstr "" "Não é possível entrar em contato com o agente convidado. 
A chamada a seguir " "atingiu o tempo limite: %(method)s" #, python-format msgid "Unable to convert image to %(format)s: %(exp)s" msgstr "Não é possível converter a imagem em %(format)s: %(exp)s" #, python-format msgid "Unable to convert image to raw: %(exp)s" msgstr "Não é possível converter a imagem para bruto: %(exp)s" #, python-format msgid "Unable to destroy VBD %s" msgstr "Não é possível destruir o VBD %s" #, python-format msgid "Unable to destroy VDI %s" msgstr "Não foi possível destruir o VDI %s" #, python-format msgid "Unable to determine disk bus for '%s'" msgstr "Não é possível determinar o barramento de disco para '%s'" #, python-format msgid "Unable to determine disk prefix for %s" msgstr "Não é possível determinar o prefixo do disco para %s" #, python-format msgid "Unable to eject %s from the pool; No master found" msgstr "Não foi possível ejetar %s do conjunto; Nenhum principal localizado" #, python-format msgid "Unable to eject %s from the pool; pool not empty" msgstr "Não foi possível ejetar %s do conjunto; conjunto não vazio" #, python-format msgid "Unable to find SR from VBD %s" msgstr "Não foi possível localizar SR a partir de VBD %s" #, python-format msgid "Unable to find SR from VDI %s" msgstr "Não é possível localizar SR a partir de VDI %s" #, python-format msgid "Unable to find ca_file : %s" msgstr "Não é possível localizar ca_file : %s" #, python-format msgid "Unable to find cert_file : %s" msgstr "Não é possível localizar cert_file : %s" #, python-format msgid "Unable to find host for Instance %s" msgstr "Não é possível localizar o host para a Instância %s" msgid "Unable to find iSCSI Target" msgstr "Não é possível localizar o Destino iSCSI" #, python-format msgid "Unable to find key_file : %s" msgstr "Não é possível localizar key_file : %s" msgid "Unable to find root VBD/VDI for VM" msgstr "Não é possível localizar VBD/VDI raiz para VM" msgid "Unable to find volume" msgstr "Não é possível localizar o volume" msgid "Unable to get host UUID: /etc/machine-id does not exist" msgstr "Não é possível obter UUID do host: /etc/machine-id não existe" msgid "Unable to get host UUID: /etc/machine-id is empty" msgstr "Não é possível obter UUID do host: /etc/machine-id está vazio" #, python-format msgid "Unable to get record of VDI %s on" msgstr "Não foi possível obter registro de VDI %s em" #, python-format msgid "Unable to introduce VDI for SR %s" msgstr "Não foi possível introduzir VDI para SR %s" #, python-format msgid "Unable to introduce VDI on SR %s" msgstr "Não foi possível introduzir VDI em SR %s" #, python-format msgid "Unable to join %s in the pool" msgstr "Não é possível associar %s ao conjunto" msgid "" "Unable to launch multiple instances with a single configured port ID. Please " "launch your instance one by one with different ports." msgstr "" "Não é possível ativar várias instâncias com um único ID de porta " "configurada. Inicie sua instância uma por uma com portas diferentes." 
#, python-format msgid "" "Unable to migrate %(instance_uuid)s to %(dest)s: Lack of memory(host:" "%(avail)s <= instance:%(mem_inst)s)" msgstr "" "Não é possível migrar %(instance_uuid)s para %(dest)s: Falta de memória " "(host:%(avail)s <= instância:%(mem_inst)s)" #, python-format msgid "" "Unable to migrate %(instance_uuid)s: Disk of instance is too large(available " "on destination host:%(available)s < need:%(necessary)s)" msgstr "" "Não é possível migrar %(instance_uuid)s: O disco da instância é muito grande " "(disponível no host de destino: %(available)s < necessário: %(necessary)s)" #, python-format msgid "" "Unable to migrate instance (%(instance_id)s) to current host (%(host)s)." msgstr "" "Não é possível migrar a instância (%(instance_id)s) para o host atual " "(%(host)s)." #, python-format msgid "Unable to obtain target information %s" msgstr "Não é possível obter informações de destino %s" msgid "Unable to resize disk down." msgstr "Não é possível redimensionar o disco para um tamanho menor." msgid "Unable to set password on instance" msgstr "Não é possível configurar senha na instância" msgid "Unable to shrink disk." msgstr "Não é possível reduzir disco." msgid "Unable to terminate instance." msgstr "Não é possível finalizar a instância." #, python-format msgid "Unable to unplug VBD %s" msgstr "Não é possível desconectar o VBD %s" #, python-format msgid "Unacceptable CPU info: %(reason)s" msgstr "Informações de CPU inaceitáveis: %(reason)s" msgid "Unacceptable parameters." msgstr "Parâmetros inaceitáveis." #, python-format msgid "Unavailable console type %(console_type)s." msgstr "Tipo de console indisponível %(console_type)s." msgid "" "Undefined Block Device Mapping root: BlockDeviceMappingList contains Block " "Device Mappings from multiple instances." msgstr "" "Raiz do Mapeamento de Dispositivo de Bloco indefinido: " "BlockDeviceMappingList contém os Mapeamentos de Dispositivo de Bloco a " "partir de diversas instâncias." #, python-format msgid "" "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ " "and attach the Nova API log if possible.\n" "%s" msgstr "" "Erro inesperado da API. Relate isso em http://bugs.launchpad.net/nova/ e " "anexe o log da API Nova se possível.\n" "%s" #, python-format msgid "Unexpected aggregate action %s" msgstr "Ação inesperada %s agregada" msgid "Unexpected type adding stats" msgstr "Estatísticas de inclusão de tipo inesperado" #, python-format msgid "Unexpected vif_type=%s" msgstr "vif_type inesperado=%s" msgid "Unknown" msgstr "Desconhecido" msgid "Unknown action" msgstr "Ação desconhecida" #, python-format msgid "Unknown config drive format %(format)s. Select one of iso9660 or vfat." msgstr "" "Formato da unidade de configuração %(format)s desconhecido. Selecione um de " "iso9660 ou vfat." #, python-format msgid "Unknown delete_info type %s" msgstr "Tipo de delete_info desconhecido %s" #, python-format msgid "Unknown image_type=%s" msgstr "image_type desconhecido=%s" #, python-format msgid "Unknown quota resources %(unknown)s." msgstr "Recursos da cota desconhecidos %(unknown)s." msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "Direção de classificação desconhecida; deve ser 'desc' ou 'asc'" #, python-format msgid "Unknown type: %s" msgstr "Tipo desconhecido: %s" msgid "Unrecognized legacy format." msgstr "Formato legado não reconhecido." 
#, python-format msgid "Unrecognized read_deleted value '%s'" msgstr "Valor read_deleted não reconhecido '%s'" #, python-format msgid "Unrecognized value '%s' for CONF.running_deleted_instance_action" msgstr "Valor não reconhecido '%s' para CONF.running_deleted_instance_action" #, python-format msgid "Unshelve attempted but the image %s cannot be found." msgstr "" "Tentativa de remover adiamento, mas a imagem %s não pode ser localizada." msgid "Unsupported Content-Type" msgstr "Tipo de Conteúdo Não Suportado" msgid "Upgrade DB using Essex release first." msgstr "Faça o upgrade do BD usando a liberação de Essex primeiro." #, python-format msgid "User %(username)s not found in password file." msgstr "Usuário %(username)s não localizado no arquivo de senha." #, python-format msgid "User %(username)s not found in shadow file." msgstr "Usuário %(username)s não localizado no arquivo de sombra." msgid "User data needs to be valid base 64." msgstr "Os dados do usuário devem ser base 64 válidos." msgid "User does not have admin privileges" msgstr "Usuário não tem privilégios de administrador" msgid "" "Using different block_device_mapping syntaxes is not allowed in the same " "request." msgstr "" "O uso de sintaxes block_device_mapping diferentes não é permitido na mesma " "solicitação." #, python-format msgid "" "VDI %(vdi_ref)s is %(virtual_size)d bytes which is larger than flavor size " "of %(new_disk_size)d bytes." msgstr "" "VDI %(vdi_ref)s é %(virtual_size)d bytes que é maior do que o tamanho do " "tipo %(new_disk_size)d bytes." #, python-format msgid "" "VDI not found on SR %(sr)s (vdi_uuid %(vdi_uuid)s, target_lun %(target_lun)s)" msgstr "" "VDI não foi localizado no SR %(sr)s (vdi_uuid %(vdi_uuid)s, target_lun " "%(target_lun)s)" #, python-format msgid "VHD coalesce attempts exceeded (%d), giving up..." msgstr "Tentativas de união de VHD excedeu (%d), concedendo..." #, python-format msgid "" "Version %(req_ver)s is not supported by the API. Minimum is %(min_ver)s and " "maximum is %(max_ver)s." msgstr "" "Versão %(req_ver)s não é suportada pela API. Mínimo é %(min_ver)s e máximo " "é %(max_ver)s." msgid "Virtual Interface creation failed" msgstr "Falha na criação da Interface Virtual" msgid "Virtual interface plugin failed" msgstr "Plugin da interface virtual falhou." #, python-format msgid "Virtual machine mode '%(vmmode)s' is not recognised" msgstr "Modo da máquina virtual '%(vmmode)s' não reconhecido" #, python-format msgid "Virtual machine mode '%s' is not valid" msgstr "O modo de máquina virtual '%s' não é válido" #, python-format msgid "Virtualization type '%(virt)s' is not supported by this compute driver" msgstr "" "O tipo de virtualização '%(virt)s' não é suportado por esse driver de cálculo" #, python-format msgid "Volume %(volume_id)s could not be attached. Reason: %(reason)s" msgstr "O volume %(volume_id)s não pôde ser anexado. Motivo: %(reason)s" #, python-format msgid "Volume %(volume_id)s could not be found." msgstr "Volume %(volume_id)s não pode ser encontrado." #, python-format msgid "" "Volume %(volume_id)s did not finish being created even after we waited " "%(seconds)s seconds or %(attempts)s attempts. And its status is " "%(volume_status)s." msgstr "" "Volume %(volume_id)s acabou não sendo criado mesmo depois de esperarmos " "%(seconds)s segundos ou %(attempts)s tentativas. E seu estado é " "%(volume_status)s." msgid "Volume does not belong to the requested instance." msgstr "Volume não pertence à instância solicitada." 
#, python-format msgid "" "Volume encryption is not supported for %(volume_type)s volume %(volume_id)s" msgstr "" "Criptografia de volume não é suportada para %(volume_type)s volume " "%(volume_id)s" #, python-format msgid "" "Volume is smaller than the minimum size specified in image metadata. Volume " "size is %(volume_size)i bytes, minimum size is %(image_min_disk)i bytes." msgstr "" "O volume é menor que o tamanho mínimo especificado nos metadados de imagem. " "O tamanho do volume é %(volume_size)i bytes; o tamanho mínimo é " "%(image_min_disk)i bytes." #, python-format msgid "" "Volume sets block size, but the current libvirt hypervisor '%s' does not " "support custom block size" msgstr "" "O volume configura o tamanho de bloco, mas o hypervisor libvirt atual '%s' " "não suporta o tamanho de bloco customizado" #, python-format msgid "" "We do not support scheme '%s' under Python < 2.7.4, please use http or https" msgstr "Não suportamos esquema ‘%s' sob Python < 2.7.4, use http ou https" msgid "When resizing, instances must change flavor!" msgstr "Ao redimensionar, as instâncias devem alterar o método!" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "Ao executar o servidor no modo SSL, você deve especificar os valores de " "opção cert_file e key_file em seu arquivo de configuração" #, python-format msgid "Wrong quota method %(method)s used on resource %(res)s" msgstr "Método de cota errado %(method)s usado no recurso %(res)s" msgid "Wrong type of hook method. Only 'pre' and 'post' type allowed" msgstr "" "Tipo errado de método de gancho. Somente o tipo 'pré' e ‘pós' permitido" msgid "X-Forwarded-For is missing from request." msgstr "X-Forwarded-For está ausente da solicitação." msgid "X-Instance-ID header is missing from request." msgstr "O cabeçalho X-Instance-ID está ausente da solicitação." msgid "X-Instance-ID-Signature header is missing from request." msgstr "Cabeçalho X-Instance-ID-Signature está ausente da solicitação." msgid "X-Metadata-Provider is missing from request." msgstr "X-Metadata-Provider está ausente da solicitação." msgid "X-Tenant-ID header is missing from request." msgstr "Cabeçalho X-Tenant-ID está ausente da solicitação." msgid "XAPI supporting relax-xsm-sr-check=true required" msgstr "XAPI que suporte relax-xsm-sr-check=true necessário" msgid "You are not allowed to delete the image." msgstr "Você não tem permissão para excluir a imagem." msgid "" "You are not authorized to access the image the instance was started with." msgstr "" "Você não está autorizado a acessar a imagem com a qual a instância foi " "iniciada." msgid "You must implement __call__" msgstr "Você deve implementar __call__" msgid "You should specify images_rbd_pool flag to use rbd images." msgstr "" "Você deve especificar o sinalizador images_rbd_pool para usar imagens rbd." msgid "You should specify images_volume_group flag to use LVM images." msgstr "" "Você deve especificar o sinalizador images_volume_group para usar imagens " "LVM." msgid "Zero floating IPs available." msgstr "Nenhum IPs flutuantes disponíveis." 
msgid "admin password can't be changed on existing disk" msgstr "senha do administrador não pode ser alterada no disco existente" msgid "aggregate deleted" msgstr "agregação excluída" msgid "aggregate in error" msgstr "agregação em erro" #, python-format msgid "assert_can_migrate failed because: %s" msgstr "assert_can_migrate falhou porque: %s" msgid "cannot understand JSON" msgstr "não é possível entender JSON" msgid "clone() is not implemented" msgstr "clone() não está implementado" #, python-format msgid "connect info: %s" msgstr "informações de conexão: %s" #, python-format msgid "connecting to: %(host)s:%(port)s" msgstr "conectando a:%(host)s:%(port)s" msgid "direct_snapshot() is not implemented" msgstr "direct_snapshot() não está implementada" #, python-format msgid "disk type '%s' not supported" msgstr "tipo de disco '%s' não suportado" #, python-format msgid "empty project id for instance %s" msgstr "ID do projeto vazio para a instância %s" msgid "error setting admin password" msgstr "erro ao configurar senha de administrador" #, python-format msgid "error: %s" msgstr "erro: %s" #, python-format msgid "failed to generate X509 fingerprint. Error message: %s" msgstr "falha ao gerar a impressão digital X509. Mensagem de erro:%s" msgid "failed to generate fingerprint" msgstr "falha ao gerar a impressão digital" msgid "filename cannot be None" msgstr "nome de arquivo não pode ser Nenhum" msgid "floating IP is already associated" msgstr "O IP flutuante já está associado" msgid "floating IP not found" msgstr "IP flutuante não localizado" #, python-format msgid "fmt=%(fmt)s backed by: %(backing_file)s" msgstr "fmt=%(fmt)s retornado por: %(backing_file)s" #, python-format msgid "href %s does not contain version" msgstr "href %s não contém versão" msgid "image already mounted" msgstr "imagem já montada" #, python-format msgid "instance %s is not running" msgstr "instância %s não está em execução" msgid "instance has a kernel or ramdisk but not both" msgstr "a instância possui um kernel ou ramdisk, mas não ambos" msgid "instance is a required argument to use @refresh_cache" msgstr "a instância é um argumento necessário para usar @refresh_cache" msgid "instance is not in a suspended state" msgstr "a instância não está em um estado suspenso" msgid "instance is not powered on" msgstr "a instância não está ativada" msgid "instance is powered off and cannot be suspended." msgstr "a instância está desligada e não pode ser suspensa." #, python-format msgid "instance_id %s could not be found as device id on any ports" msgstr "" "instance_id %s não pôde ser localizado como id do dispositivo em nenhuma " "porta" msgid "is_public must be a boolean" msgstr "is_public deve ser um booleano" msgid "keymgr.fixed_key not defined" msgstr "keymgr.fixed_key não definido" msgid "l3driver call to add floating IP failed" msgstr "Falha na chamada l3driver para incluir IP flutuante" #, python-format msgid "libguestfs installed but not usable (%s)" msgstr "libguestfs instalado, mas não utilizável (%s)" #, python-format msgid "libguestfs is not installed (%s)" msgstr "libguestfs não está instalado (%s)" #, python-format msgid "marker [%s] not found" msgstr "marcador [%s] não localizado" #, python-format msgid "max rows must be <= %(max_value)d" msgstr "O máx. de linhas deve ser <= %(max_value)d" msgid "max_count cannot be greater than 1 if an fixed_ip is specified." msgstr "max_count não pode ser maior que 1 se um fixed_ip for especificado." 
msgid "min_count must be <= max_count" msgstr "min_count deve ser <= max_count" #, python-format msgid "nbd device %s did not show up" msgstr "dispositivo nbd %s não mostrado" msgid "nbd unavailable: module not loaded" msgstr "nbd indisponível: módulo não carregado" msgid "no hosts to remove" msgstr "nenhum host para remover" #, python-format msgid "no match found for %s" msgstr "nenhuma correspondência localizada para %s" #, python-format msgid "no usable parent snapshot for volume %s" msgstr "Nenhuma captura instantânea pai utilizável para o volume %s." #, python-format msgid "no write permission on storage pool %s" msgstr "Nenhuma permissão de gravação para o conjunto de armazenamentos %s" #, python-format msgid "not able to execute ssh command: %s" msgstr "não foi possível executar o comando ssh: %s" msgid "old style configuration can use only dictionary or memcached backends" msgstr "" "A configuração de estilo antigo pode usar somente backends de dicionário ou " "memcached" msgid "operation time out" msgstr "tempo limite da operação" #, python-format msgid "partition %s not found" msgstr "partição %s não localizada" #, python-format msgid "partition search unsupported with %s" msgstr "procura de partição não suportada com %s" msgid "pause not supported for vmwareapi" msgstr "pausa não suportada para vmwareapi" msgid "printable characters with at least one non space character" msgstr "" "Caracteres imprimíveis com pelo menos um caractere diferente de espaço." msgid "printable characters. Can not start or end with whitespace." msgstr "" "Caracteres imprimíveis não podem iniciar ou terminar com espaço em branco." #, python-format msgid "qemu-img failed to execute on %(path)s : %(exp)s" msgstr "qemu-img falhou ao executar no %(path)s : %(exp)s" #, python-format msgid "qemu-nbd error: %s" msgstr "erro qemu-nbd: %s" msgid "rbd python libraries not found" msgstr "Bibliotecas rbd python não localizadas" #, python-format msgid "read_deleted can only be one of 'no', 'yes' or 'only', not %r" msgstr "read_deleted pode ser apenas um de 'no', 'yes' ou 'only', não %r" msgid "serve() can only be called once" msgstr "serve() pode ser chamado apenas uma vez" msgid "service is a mandatory argument for DB based ServiceGroup driver" msgstr "" "o serviço é um argumento obrigatório para driver do ServiceGroup baseado em " "BD" msgid "service is a mandatory argument for Memcached based ServiceGroup driver" msgstr "" "o serviço é um argumento obrigatório para o driver do ServiceGroup com base " "em Memcached" msgid "set_admin_password is not implemented by this driver or guest instance." msgstr "" "set_admin_password não está implementado por este driver ou esta instância " "convidada." 
msgid "setup in progress" msgstr "configuração em andamento" #, python-format msgid "snapshot for %s" msgstr "captura instantânea para %s" msgid "snapshot_id required in create_info" msgstr "snapshot_id necessário em create_info" msgid "token not provided" msgstr "token não fornecido" msgid "too many body keys" msgstr "excesso de chaves de corpo" msgid "unpause not supported for vmwareapi" msgstr "cancelamento de pausa não suportado para vmwareapi" msgid "version should be an integer" msgstr "a versão deve ser um número inteiro" #, python-format msgid "vg %s must be LVM volume group" msgstr "vg %s deve estar no grupo de volumes LVM" #, python-format msgid "vhostuser_sock_path not present in vif_details for vif %(vif_id)s" msgstr "vhostuser_sock_path ausente no vif_details para vif %(vif_id)s" #, python-format msgid "vif type %s not supported" msgstr "Tipo de vif %s não suportado" msgid "vif_type parameter must be present for this vif_driver implementation" msgstr "" "o parâmetro vif_type deve estar presente para esta implementação de " "vif_driver" #, python-format msgid "volume %s already attached" msgstr "volume %s já conectado" #, python-format msgid "" "volume '%(vol)s' status must be 'in-use'. Currently in '%(status)s' status" msgstr "" "o volume '%(vol)s' de status deve estar 'em uso'. Atualmente em '%(status)s' " "de status" #, python-format msgid "xenapi.fake does not have an implementation for %s" msgstr "xenapi.fake não tem uma implementação para %s" #, python-format msgid "" "xenapi.fake does not have an implementation for %s or it has been called " "with the wrong number of arguments" msgstr "" "xenapi.fake não tem implementação para %s ou foi chamado com um número de " "argumentos inválido" ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0664723 nova-21.2.4/nova/locale/ru/0000775000175000017500000000000000000000000015416 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3504698 nova-21.2.4/nova/locale/ru/LC_MESSAGES/0000775000175000017500000000000000000000000017203 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/locale/ru/LC_MESSAGES/nova.po0000664000175000017500000041143400000000000020515 0ustar00zuulzuul00000000000000# Translations template for nova. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the nova project. # # Translators: # Ilya Alekseyev , 2013 # Aleksandr Brezhnev , 2013 # Alexei Rudenko , 2013 # FIRST AUTHOR , 2011 # lykoz , 2012 # Alexei Rudenko , 2013 # Stanislav Hanzhin , 2013 # dvy , 2014 # Andreas Jaeger , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: nova VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2020-04-27 16:23+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-04-12 06:06+0000\n" "Last-Translator: Copied by Zanata \n" "Language: ru\n" "Plural-Forms: nplurals=4; plural=(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n" "%10<=4 && (n%100<12 || n%100>14) ? 1 : n%10==0 || (n%10>=5 && n%10<=9) || (n" "%100>=11 && n%100<=14)? 2 : 3);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 4.3.3\n" "Language-Team: Russian\n" #, python-format msgid "%(address)s is not a valid IP v4/6 address." 
msgstr "%(address)s не является допустимым IP-адресом в4/6." #, python-format msgid "" "%(binary)s attempted direct database access which is not allowed by policy" msgstr "" "%(binary)s произвел попытку прямого доступа к базе данных, что не разрешено " "стратегией" #, python-format msgid "%(cidr)s is not a valid IP network." msgstr "%(cidr)s не является допустимой IP-сетью." #, python-format msgid "%(field)s should not be part of the updates." msgstr "%(field)s не должно входить в состав обновлений." #, python-format msgid "%(memsize)d MB of memory assigned, but expected %(memtotal)d MB" msgstr "Назначено %(memsize)d МБ, но ожидалось %(memtotal)d МБ" #, python-format msgid "%(path)s is not on local storage: %(reason)s" msgstr "%(path)s не находится в локальном хранилище: %(reason)s" #, python-format msgid "%(path)s is not on shared storage: %(reason)s" msgstr "%(path)s не находится в общем хранилище: %(reason)s" #, python-format msgid "%(total)i rows matched query %(meth)s, %(done)i migrated" msgstr "%(total)i строк, совпадающих с запросом %(meth)s, %(done)i перенесено" #, python-format msgid "%(type)s hypervisor does not support PCI devices" msgstr "Гипервизор %(type)s не поддерживает устройства PCI" #, python-format msgid "%(worker_name)s value of %(workers)s is invalid, must be greater than 0" msgstr "" "Значение %(worker_name)s, равное %(workers)s, является недопустимым. " "Значение должно быть больше 0" #, python-format msgid "%s does not support disk hotplug." msgstr "%s не поддерживает горячее подключение дисков." #, python-format msgid "%s format is not supported" msgstr "Формат %s не поддерживается" #, python-format msgid "%s is not supported." msgstr "%s не поддерживается." #, python-format msgid "%s must be either 'MANUAL' or 'AUTO'." msgstr "%s должен быть 'MANUAL' или 'AUTO'." #, python-format msgid "'%(other)s' should be an instance of '%(cls)s'" msgstr "'%(other)s' должен быть экземпляром '%(cls)s'" msgid "'qemu-img info' parsing failed." msgstr "Ошибка анализа 'qemu-img info'." #, python-format msgid "'rxtx_factor' argument must be a float between 0 and %g" msgstr "" "Аргумент 'rxtx_factor' должен быть числом с плавающей точкой в диапазоне от " "0 до %g" #, python-format msgid "A NetworkModel is required in field %s" msgstr "В поле %s требуется NetworkModel " #, python-format msgid "" "API Version String %(version)s is of invalid format. Must be of format " "MajorNum.MinorNum." msgstr "" "Недопустимый формат строки версии API %(version)s. Требуется формат: " "MajorNum.MinorNum." #, python-format msgid "API version %(version)s is not supported on this method." msgstr "Версия API %(version)s не поддерживается этим методом." msgid "Access list not available for public flavors." msgstr "Список прав доступа не доступен для общих разновидностей." #, python-format msgid "Action %s not found" msgstr "Действие %s не найдено" #, python-format msgid "" "Action for request_id %(request_id)s on instance %(instance_uuid)s not found" msgstr "" "Действие для request_id %(request_id)s в экземпляре %(instance_uuid)s не " "найдено" #, python-format msgid "Action: '%(action)s', calling method: %(meth)s, body: %(body)s" msgstr "Действие: %(action)s, вызывающий метод: %(meth)s, тело: %(body)s" #, python-format msgid "Add metadata failed for aggregate %(id)s after %(retries)s retries" msgstr "" "Добавление метаданных не выполнено для множества %(id)s после %(retries)s " "попыток" msgid "Affinity instance group policy was violated." msgstr "Нарушена стратегия группы экземпляров привязки." 
#, python-format msgid "Agent does not support the call: %(method)s" msgstr "Агент не поддерживает вызов: %(method)s" #, python-format msgid "" "Agent-build with hypervisor %(hypervisor)s os %(os)s architecture " "%(architecture)s exists." msgstr "" "Agent-build с гипервизором %(hypervisor)s, операционная система %(os)s, " "архитектура %(architecture)s, существует." #, python-format msgid "Aggregate %(aggregate_id)s already has host %(host)s." msgstr "Множество %(aggregate_id)s уже имеет хост %(host)s." #, python-format msgid "Aggregate %(aggregate_id)s could not be found." msgstr "Множество %(aggregate_id)s не найдено." #, python-format msgid "Aggregate %(aggregate_id)s has no host %(host)s." msgstr "Множество %(aggregate_id)s не имеет хоста %(host)s." #, python-format msgid "Aggregate %(aggregate_id)s has no metadata with key %(metadata_key)s." msgstr "" "Множество %(aggregate_id)s не имеет метаданных с ключом %(metadata_key)s." #, python-format msgid "" "Aggregate %(aggregate_id)s: action '%(action)s' caused an error: %(reason)s." msgstr "" "Множество %(aggregate_id)s: действие '%(action)s' вызвало ошибку: %(reason)s." #, python-format msgid "Aggregate %(aggregate_name)s already exists." msgstr "Множество %(aggregate_name)s уже существует." #, python-format msgid "Aggregate %s does not support empty named availability zone" msgstr "Совокупный ресурс %s не поддерживает зону доступности с пустым именем" #, python-format msgid "Aggregate for host %(host)s count not be found." msgstr "Не найдено множество для числа хостов %(host)s." #, python-format msgid "An invalid 'name' value was provided. The name must be: %(reason)s" msgstr "Недопустимое значение 'name'. Должно быть указано: %(reason)s" msgid "An unknown error has occurred. Please try your request again." msgstr "" "Произошла неизвестная ошибка. Пожалуйста, попытайтесь повторить ваш запрос." msgid "An unknown exception occurred." msgstr "Обнаружено неизвестное исключение." msgid "Anti-affinity instance group policy was violated." msgstr "Нарушена стратегия строгой распределенности группы экземпляров." #, python-format msgid "Architecture name '%(arch)s' is not recognised" msgstr "Имя архитектуры %(arch)s не распознано" #, python-format msgid "Architecture name '%s' is not valid" msgstr "Недопустимое имя архитектуры: '%s'" #, python-format msgid "" "Attempt to consume PCI device %(compute_node_id)s:%(address)s from empty pool" msgstr "" "Попытка приема устройства PCI %(compute_node_id)s:%(address)s из пустого из " "пула" msgid "Attempted overwrite of an existing value." msgstr "Попытка заменить существующее значение." #, python-format msgid "Attribute not supported: %(attr)s" msgstr "Атрибут не поддерживается: %(attr)s" #, python-format msgid "Bad network format: missing %s" msgstr "Недопустимый сетевой формат: отсутствует %s" msgid "Bad networks format" msgstr "Недопустимый сетевой формат" #, python-format msgid "Bad networks format: network uuid is not in proper format (%s)" msgstr "" "Недопустимый сетевой формат: сетевой uuid имеет неправильный формат (%s)" #, python-format msgid "Bad prefix for network in cidr %s" msgstr "Неверный префикс для сети в cidr %s" #, python-format msgid "" "Binding failed for port %(port_id)s, please check neutron logs for more " "information." msgstr "" "Ошибка создания привязки для порта %(port_id)s. Дополнительные сведения " "можно найти в протоколах neutron." 
msgid "Blank components" msgstr "Пустые компоненты" msgid "" "Blank volumes (source: 'blank', dest: 'volume') need to have non-zero size" msgstr "" "Пустые тома (source: 'blank', dest: 'volume') должны иметь ненулевой размер" #, python-format msgid "Block Device %(id)s is not bootable." msgstr "Блочное устройство не загрузочное %(id)s." #, python-format msgid "" "Block Device Mapping %(volume_id)s is a multi-attach volume and is not valid " "for this operation." msgstr "" "Связывание блочного устройства %(volume_id)s - это том с множественным " "подключением, что недопустимо для этой операции." msgid "Block Device Mapping cannot be converted to legacy format. " msgstr "" "Связывание блочного устройства не может быть преобразовано в устаревший " "формат. " msgid "Block Device Mapping is Invalid." msgstr "Недопустимое связывание блочного устройства." #, python-format msgid "Block Device Mapping is Invalid: %(details)s" msgstr "Недопустимое связывание блочного устройства: %(details)s" msgid "" "Block Device Mapping is Invalid: Boot sequence for the instance and image/" "block device mapping combination is not valid." msgstr "" "Недопустимое связывание блочного устройства: Последовательность загрузки для " "данного экземпляра и сочетание связывания образа/блочного устройства " "является недопустимым." msgid "" "Block Device Mapping is Invalid: You specified more local devices than the " "limit allows" msgstr "" "Недопустимое связывание блочного устройства: Вы указали больше локальных " "устройств, чем допускает ограничение" #, python-format msgid "Block Device Mapping is Invalid: failed to get image %(id)s." msgstr "" "Недопустимое связывание блочного устройства: не удалось получить образ " "%(id)s." #, python-format msgid "Block Device Mapping is Invalid: failed to get snapshot %(id)s." msgstr "" "Недопустимое связывание блочного устройства: не удалось получить " "моментальную копию %(id)s." #, python-format msgid "Block Device Mapping is Invalid: failed to get volume %(id)s." msgstr "" "Недопустимое связывание блочного устройства: не удалось получить том %(id)s." msgid "Block migration can not be used with shared storage." msgstr "Блочный перенос не может выполняться с общим хранилищем." msgid "Boot index is invalid." msgstr "Недопустимый индекс загрузки." #, python-format msgid "Build of instance %(instance_uuid)s aborted: %(reason)s" msgstr "Компоновка экземпляра %(instance_uuid)s прервана: %(reason)s" #, python-format msgid "Build of instance %(instance_uuid)s was re-scheduled: %(reason)s" msgstr "" "Компоновка экземпляра %(instance_uuid)s повторно запланирована: %(reason)s" #, python-format msgid "BuildRequest not found for instance %(uuid)s" msgstr "BuildRequest не найден для экземпляра %(uuid)s" msgid "CPU and memory allocation must be provided for all NUMA nodes" msgstr "Выделение CPU и памяти должно обеспечиваться для всех узлов NUMA" #, python-format msgid "" "CPU doesn't have compatibility.\n" "\n" "%(ret)s\n" "\n" "Refer to %(u)s" msgstr "" "CPU не совместим.\n" "\n" "%(ret)s\n" "\n" "Ссылка на %(u)s" #, python-format msgid "CPU number %(cpunum)d is assigned to two nodes" msgstr "Число CPU %(cpunum)d назначено двум узлам" #, python-format msgid "CPU number %(cpunum)d is larger than max %(cpumax)d" msgstr "Число CPU %(cpunum)d превышает максимальное (%(cpumax)d)" #, python-format msgid "CPU number %(cpuset)s is not assigned to any node" msgstr "Число CPU %(cpuset)s не назначено ни одному узлу" msgid "Can not add access to a public flavor." 
msgstr "Невозможно добавить права доступа к общедоступной разновидности." msgid "Can not find requested image" msgstr "Невозможно найти запрошенный образ" #, python-format msgid "Can not handle authentication request for %d credentials" msgstr "" "Невозможно обработать запрос идентификации для идентификационных данных %d" msgid "Can't resize a disk to 0 GB." msgstr "Не удается изменить размер диска на 0 ГБ." msgid "Can't resize down ephemeral disks." msgstr "Не удается изменить размер временных дисков на меньший." msgid "Can't retrieve root device path from instance libvirt configuration" msgstr "" "Невозможно извлечь корневой путь к устройству из конфигурации libvirt " "экземпляра" #, python-format msgid "" "Cannot '%(action)s' instance %(server_id)s while it is in %(attr)s %(state)s" msgstr "" "Невозможно выполнить действие '%(action)s' для экземпляра %(server_id)s в " "состоянии %(attr)s %(state)s" #, python-format msgid "" "Cannot access \"%(instances_path)s\", make sure the path exists and that you " "have the proper permissions. In particular Nova-Compute must not be executed " "with the builtin SYSTEM account or other accounts unable to authenticate on " "a remote host." msgstr "" "Нет доступа к \"%(instances_path)s\", убедитесь в существовании пути и " "наличии требуемых прав доступа. В частности, Nova-Compute не должен " "выполняться со встроенной учетной записью SYSTEM или другими учетными " "записями, которые невозможно идентифицировать на удаленном хосте." #, python-format msgid "Cannot add host to aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Не удается добавить хост в составной объект %(aggregate_id)s. Причина: " "%(reason)s." msgid "Cannot attach one or more volumes to multiple instances" msgstr "Невозможно подключить один или несколько томов нескольким экземплярам" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "Невозможно вызвать %(method)s в неприсвоенном объекте %(objtype)s" #, python-format msgid "" "Cannot determine the parent storage pool for %s; cannot determine where to " "store images" msgstr "" "Неизвестен родительский пул памяти для %s. Не удается определить " "расположение для сохранения образов" msgid "Cannot find SR of content-type ISO" msgstr "Невозможно найти SR типа содержимого ISO" msgid "Cannot find SR to read/write VDI." msgstr "Невозможно найти SR для чтения/записи VDI." msgid "Cannot find image for rebuild" msgstr "Невозможно найти образ для перекомпоновки" #, python-format msgid "Cannot remove host %(host)s in aggregate %(id)s" msgstr "Не удается убрать %(host)s из агрегата %(id)s" #, python-format msgid "Cannot remove host from aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Не удается удалить хост из составного объекта %(aggregate_id)s. Причина: " "%(reason)s." msgid "Cannot rescue a volume-backed instance" msgstr "Невозможно аварийно восстановить сохраненный в томе экземпляр" #, python-format msgid "" "Cannot resize the root disk to a smaller size. Current size: " "%(curr_root_gb)s GB. Requested size: %(new_root_gb)s GB." msgstr "" "Невозможно изменить размер корневого диска в меньшую сторону. Текущий " "размер: %(curr_root_gb)s ГБ. Запрошенный размер: %(new_root_gb)s ГБ." 
msgid "" "Cannot set cpu thread pinning policy in a non dedicated cpu pinning policy" msgstr "" "Невозможно задать стратегию прикрепления CPU к нити в стратегии прикрепления " "невыделенного CPU" msgid "Cannot set realtime policy in a non dedicated cpu pinning policy" msgstr "" "Невозможно задать стратегию реального времени в стратегии прикрепления " "невыделенного CPU" #, python-format msgid "Cannot update aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Не удается обновить составной объект %(aggregate_id)s. Причина: %(reason)s." #, python-format msgid "" "Cannot update metadata of aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "Не удается обновить метаданные в составном объекте %(aggregate_id)s. " "Причина: %(reason)s." #, python-format msgid "Cell %(uuid)s has no mapping." msgstr "Ячейка %(uuid)s не имеет связей." #, python-format msgid "" "Change would make usage less than 0 for the following resources: %(unders)s" msgstr "" "Изменение будет использовать менее 0 для следующих ресурсов: %(unders)s" #, python-format msgid "Class %(class_name)s could not be found: %(exception)s" msgstr "Класс %(class_name)s не найден: %(exception)s" #, python-format msgid "" "Command Not supported. Please use Ironic command %(cmd)s to perform this " "action." msgstr "" "Команда не поддерживается. Используйте команду Ironic %(cmd)s для выполнения " "этого действия." #, python-format msgid "Compute host %(host)s could not be found." msgstr "Узел сompute %(host)s не найден." #, python-format msgid "Compute host %s not found." msgstr "Не найден хост вычисления %s." #, python-format msgid "Compute service of %(host)s is still in use." msgstr "Служба вычисления %(host)s по-прежнему занята." #, python-format msgid "Compute service of %(host)s is unavailable at this time." msgstr "Служба вычисления %(host)s недоступна в данный момент." #, python-format msgid "Config drive format '%(format)s' is not supported." msgstr "Формат диска конфигурации %(format)s не поддерживается." #, python-format msgid "" "Config requested an explicit CPU model, but the current libvirt hypervisor " "'%s' does not support selecting CPU models" msgstr "" "Конфигурация запросила явную модель CPU, но гипервизор текущей libvirt '%s' " "не поддерживает выбор моделей CPU" #, python-format msgid "" "Conflict updating instance %(instance_uuid)s, but we were unable to " "determine the cause" msgstr "" "Конфликт при обновлении экземпляра %(instance_uuid)s, но не удалось " "определить причину." #, python-format msgid "" "Conflict updating instance %(instance_uuid)s. Expected: %(expected)s. " "Actual: %(actual)s" msgstr "" "Конфликт при обновлении экземпляра %(instance_uuid)s. Ожидалось: " "%(expected)s. Фактическое значение: %(actual)s" #, python-format msgid "Connection to cinder host failed: %(reason)s" msgstr "Не удалось подключиться к хосту cinder: %(reason)s" #, python-format msgid "Connection to glance host %(server)s failed: %(reason)s" msgstr "" "Не удалось установить соединение с хостом glance %(server)s: %(reason)s" #, python-format msgid "Connection to libvirt lost: %s" msgstr "Соединение с libvirt потеряно: %s" #, python-format msgid "Connection to the hypervisor is broken on host: %(host)s" msgstr "Соединение с гипервизором разорвано на хосте: %(host)s" #, python-format msgid "" "Console log output could not be retrieved for instance %(instance_id)s. " "Reason: %(reason)s" msgstr "" "Не удается получить вывод протокола консоли %(instance_id)s. Причина: " "%(reason)s" msgid "Constraint not met." 
msgstr "Ограничение не выполнено." #, python-format msgid "Converted to raw, but format is now %s" msgstr "Преобразование в необработанный, но текущий формат %s" #, python-format msgid "Could not attach image to loopback: %s" msgstr "Невозможно прикрепить образ для замыкания: %s" #, python-format msgid "Could not fetch image %(image_id)s" msgstr "Невозможно извлечь образ %(image_id)s" #, python-format msgid "Could not find a handler for %(driver_type)s volume." msgstr "Невозможно найти обработчик для тома %(driver_type)s." #, python-format msgid "Could not find binary %(binary)s on host %(host)s." msgstr "Не удалось найти двоичный файл %(binary)s на хосте %(host)s." #, python-format msgid "Could not find config at %(path)s" msgstr "Невозможно найти конфигурацию по адресу %(path)s" msgid "Could not find the datastore reference(s) which the VM uses." msgstr "Не удалось найти ссылки на хранилища данных, используемых VM." #, python-format msgid "Could not load line %(line)s, got error %(error)s" msgstr "Не удалось загрузить строку %(line)s. Ошибка: %(error)s" #, python-format msgid "Could not load paste app '%(name)s' from %(path)s" msgstr "Невозможно загрузить приложение '%(name)s' из %(path)s" #, python-format msgid "" "Could not mount vfat config drive. %(operation)s failed. Error: %(error)s" msgstr "" "Невозможно смонтировать диск конфигурации vfat. %(operation)s не выполнено. " "Ошибка: %(error)s" #, python-format msgid "Could not upload image %(image_id)s" msgstr "Невозможно передать образ %(image_id)s" msgid "Creation of virtual interface with unique mac address failed" msgstr "" "Не удалось создать виртуальный интерфейс с помощью уникального MAC-адреса" #, python-format msgid "Datastore regex %s did not match any datastores" msgstr "" "Регулярное выражение %s хранилища данных не соответствует ни одному " "хранилищу данных" msgid "Datetime is in invalid format" msgstr "Недопустимый формат даты/времени" msgid "Default PBM policy is required if PBM is enabled." msgstr "Стратегия PBM по умолчанию является обязательной, если включен PBM." #, python-format msgid "Deleted %(records)d records from table '%(table_name)s'." msgstr "Из таблицы '%(table_name)s' удалены записи (%(records)d)." #, python-format msgid "Device '%(device)s' not found." msgstr "Устройство '%(device)s' не найдено." #, python-format msgid "" "Device id %(id)s specified is not supported by hypervisor version %(version)s" msgstr "" "Устройство с указанным ИД %(id)s не поддерживается в гипервизоре версии " "%(version)s" msgid "Device name contains spaces." msgstr "Имя устройства содержит пробелы." msgid "Device name empty or too long." msgstr "Имя устройства пустое или слишком длинное." #, python-format msgid "Device type mismatch for alias '%s'" msgstr "Несоответствие типа устройства для псевдонима '%s'" #, python-format msgid "" "Different types in %(table)s.%(column)s and shadow table: %(c_type)s " "%(shadow_c_type)s" msgstr "" "Различные типы в %(table)s.%(column)s и теневой таблице: %(c_type)s " "%(shadow_c_type)s" #, python-format msgid "Disk contains a filesystem we are unable to resize: %s" msgstr "Диск содержит файловую систему, размер которой невозможно изменить: %s" #, python-format msgid "Disk format %(disk_format)s is not acceptable" msgstr "Форматирование диска %(disk_format)s недопустимо" #, python-format msgid "Disk info file is invalid: %(reason)s" msgstr "Недопустимый файл информации о диске: %(reason)s" msgid "Disk must have only one partition." msgstr "Диск должен иметь только один раздел." 
#, python-format msgid "Disk with id: %s not found attached to instance." msgstr "Диск с ИД %s не подключен к экземпляру." #, python-format msgid "Driver Error: %s" msgstr "Ошибка драйвера: %s" #, python-format msgid "Error attempting to run %(method)s" msgstr "Ошибка при попытке выполнения %(method)s" #, python-format msgid "" "Error destroying the instance on node %(node)s. Provision state still " "'%(state)s'." msgstr "" "Ошибка уничтожения экземпляра на узле %(node)s. Состояние выделения ресурсов " "все еще '%(state)s'." #, python-format msgid "Error during following call to agent: %(method)s" msgstr "Ошибка при вызове агента: %(method)s" #, python-format msgid "Error during unshelve instance %(instance_id)s: %(reason)s" msgstr "" "Ошибка возврата из отложенного состояния экземпляра %(instance_id)s: " "%(reason)s" #, python-format msgid "" "Error from libvirt while getting domain info for %(instance_name)s: [Error " "Code %(error_code)s] %(ex)s" msgstr "" "Ошибка в libvirt при получении информации о домене для %(instance_name)s: " "[Код ошибки: %(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while looking up %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "Ошибка libvirt во время поиска %(instance_name)s: [Код ошибки " "%(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while quiescing %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "Ошибка в libvirt во время приостановки %(instance_name)s: [Код ошибки: " "%(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while set password for username \"%(user)s\": [Error Code " "%(error_code)s] %(ex)s" msgstr "" "Ошибка в libvirt при установке пароля для имени пользователя \"%(user)s\": " "[Код ошибки %(error_code)s] %(ex)s" #, python-format msgid "" "Error mounting %(device)s to %(dir)s in image %(image)s with libguestfs " "(%(e)s)" msgstr "" "Ошибка при монтировании %(device)s на %(dir)s в образе %(image)s с " "libguestfs (%(e)s)" #, python-format msgid "Error mounting %(image)s with libguestfs (%(e)s)" msgstr "Ошибка при монтировании %(image)s с помощью libguestfs (%(e)s)" #, python-format msgid "Error when creating resource monitor: %(monitor)s" msgstr "Ошибка при создании монитора ресурсов: %(monitor)s" msgid "Error: Agent is disabled" msgstr "Ошибка: Агент выключен" #, python-format msgid "Event %(event)s not found for action id %(action_id)s" msgstr "Событие %(event)s не найдено для ИД действия %(action_id)s" msgid "Event must be an instance of nova.virt.event.Event" msgstr "Событие должно быть экземпляром nova.virt.event.Event" #, python-format msgid "" "Exceeded max scheduling attempts %(max_attempts)d for instance " "%(instance_uuid)s. Last exception: %(exc_reason)s" msgstr "" "Выполнено максимальное запланированное число попыток %(max_attempts)d для " "экземпляра %(instance_uuid)s. Последняя исключительная ситуация: " "%(exc_reason)s" #, python-format msgid "" "Exceeded max scheduling retries %(max_retries)d for instance " "%(instance_uuid)s during live migration" msgstr "" "Выполнено максимальное запланированное число повторных попыток " "%(max_retries)d для экземпляра %(instance_uuid)s в процессе оперативного " "переноса" #, python-format msgid "Exceeded maximum number of retries. %(reason)s" msgstr "Превышено максимальное количество попыток. %(reason)s" #, python-format msgid "Expected a uuid but received %(uuid)s." msgstr "Ожидался uuid, а получен %(uuid)s." 
#, python-format msgid "Extra column %(table)s.%(column)s in shadow table" msgstr "Дополнительный столбец %(table)s.%(column)s в теневой таблице" msgid "Extracting vmdk from OVA failed." msgstr "Извлечение vmdk из OVA не выполнено." #, python-format msgid "Failed to access port %(port_id)s: %(reason)s" msgstr "Не удалось обратиться к порту %(port_id)s: %(reason)s" #, python-format msgid "" "Failed to add deploy parameters on node %(node)s when provisioning the " "instance %(instance)s" msgstr "" "Не удалось добавить параметры развертывания на узле %(node)s при выделении " "ресурсов экземпляру %(instance)s" #, python-format msgid "Failed to allocate the network(s) with error %s, not rescheduling." msgstr "" "Не удалось выделить сеть (сети). Ошибка: %s. Перепланировка не выполняется." msgid "Failed to allocate the network(s), not rescheduling." msgstr "Не удалось выделить сеть(сети), перепланировка не выполняется." #, python-format msgid "Failed to attach network adapter device to %(instance_uuid)s" msgstr "Не удалось подключить устройство сетевого адаптера к %(instance_uuid)s" #, python-format msgid "Failed to create vif %s" msgstr "Не удалось создать vif %s" #, python-format msgid "Failed to deploy instance: %(reason)s" msgstr "Не удалось развернуть экземпляр: %(reason)s" #, python-format msgid "Failed to detach PCI device %(dev)s: %(reason)s" msgstr "Не удалось отключить устройство PCI %(dev)s: %(reason)s" #, python-format msgid "Failed to detach network adapter device from %(instance_uuid)s" msgstr "Не удалось отключить устройство сетевого адаптера от %(instance_uuid)s" #, python-format msgid "Failed to encrypt text: %(reason)s" msgstr "Не удалось зашифровать текст: %(reason)s" #, python-format msgid "Failed to launch instances: %(reason)s" msgstr "Не удалось запустить экземпляры: %(reason)s" #, python-format msgid "Failed to map partitions: %s" msgstr "Не удалось отобразить разделы: %s" #, python-format msgid "Failed to mount filesystem: %s" msgstr "Ошибка монтирования файловой системы: %s" msgid "Failed to parse information about a pci device for passthrough" msgstr "" "Не удалось проанализировать информацию об устройстве pci на предмет " "удаленного входа" #, python-format msgid "Failed to power off instance: %(reason)s" msgstr "Не удалось выключить экземпляр: %(reason)s" #, python-format msgid "Failed to power on instance: %(reason)s" msgstr "Не удалось включить экземпляр: %(reason)s" #, python-format msgid "" "Failed to prepare PCI device %(id)s for instance %(instance_uuid)s: " "%(reason)s" msgstr "" "Не удалось подготовить устройство PCI %(id)s для экземпляра " "%(instance_uuid)s: %(reason)s" #, python-format msgid "Failed to provision instance %(inst)s: %(reason)s" msgstr "Не удалось выделить ресурсы экземпляру %(inst)s: %(reason)s" #, python-format msgid "Failed to read or write disk info file: %(reason)s" msgstr "Не удалось прочитать или записать файл информации о диске: %(reason)s" #, python-format msgid "Failed to reboot instance: %(reason)s" msgstr "Не удалось перезагрузить экземпляр: %(reason)s" #, python-format msgid "Failed to remove volume(s): (%(reason)s)" msgstr "Не удалось удалить тома: (%(reason)s)" #, python-format msgid "Failed to request Ironic to rebuild instance %(inst)s: %(reason)s" msgstr "" "Не удалось запросить Ironic для перекомпоновки экземпляра %(inst)s: " "%(reason)s" #, python-format msgid "Failed to resume instance: %(reason)s" msgstr "Не удалось возобновить экземпляр: %(reason)s" #, python-format msgid "Failed to run qemu-img info on %(path)s : 
%(error)s" msgstr "Не удалось выполнить команду qemu-img info в %(path)s : %(error)s" #, python-format msgid "Failed to set admin password on %(instance)s because %(reason)s" msgstr "" "Не удалось установить пароль администратора в %(instance)s по причине: " "%(reason)s" msgid "Failed to spawn, rolling back" msgstr "Не удалось выполнить порождение, выполняется откат" #, python-format msgid "Failed to suspend instance: %(reason)s" msgstr "Не удалось приостановить экземпляр: %(reason)s" #, python-format msgid "Failed to terminate instance: %(reason)s" msgstr "Не удалось завершить экземпляр: %(reason)s" #, python-format msgid "Failed to unplug vif %s" msgstr "Не удалось отсоединить vif %s" msgid "Failure prepping block device." msgstr "Сбой при подготовке блочного устройства." #, python-format msgid "File %(file_path)s could not be found." msgstr "Файл %(file_path)s не может быть найден." #, python-format msgid "File path %s not valid" msgstr "Путь к файлу %s не верен" #, python-format msgid "Fixed IP %(ip)s is not a valid ip address for network %(network_id)s." msgstr "" "Фиксированный IP %(ip)s не является допустимым IP-адресом для сети " "%(network_id)s." #, python-format msgid "Fixed IP %s is already in use." msgstr "Фиксированный IP-адрес %s уже используется." #, python-format msgid "" "Fixed IP address %(address)s is already in use on instance %(instance_uuid)s." msgstr "" "Фиксированный IP-адрес %(address)s уже используется в экземпляре " "%(instance_uuid)s." #, python-format msgid "Fixed IP not found for address %(address)s." msgstr "Для адреса %(address)s не найдено фиксированных IP." #, python-format msgid "Flavor %(flavor_id)s could not be found." msgstr "Разновидность %(flavor_id)s не найдена." #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(extra_specs_key)s." msgstr "" "Разновидность %(flavor_id)s не содержит дополнительных спецификаций с ключом " "%(extra_specs_key)s." #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(key)s." msgstr "" "Разновидность %(flavor_id)s не содержит дополнительных спецификаций с ключом " "%(key)s." #, python-format msgid "" "Flavor %(id)s extra spec cannot be updated or created after %(retries)d " "retries." msgstr "" "Дополнительную спецификацию %(id)s разновидности не удалось создать или " "изменить за %(retries)d попыток." #, python-format msgid "" "Flavor access already exists for flavor %(flavor_id)s and project " "%(project_id)s combination." msgstr "" "Права доступа к разновидности уже существуют для комбинации разновидности " "%(flavor_id)s и проекта %(project_id)s." #, python-format msgid "Flavor access not found for %(flavor_id)s / %(project_id)s combination." msgstr "" "Права доступа к разновидности не найдены для комбинации %(flavor_id)s / " "%(project_id)s." msgid "Flavor used by the instance could not be found." msgstr "Используемая экземпляром разновидность не найдена." #, python-format msgid "Flavor with ID %(flavor_id)s already exists." msgstr "Разновидность с ИД %(flavor_id)s уже существует." #, python-format msgid "Flavor with name %(flavor_name)s could not be found." msgstr "Не удалось найти разновидность с именем %(flavor_name)s." #, python-format msgid "Flavor with name %(name)s already exists." msgstr "Разновидность с именем %(name)s уже существует." #, python-format msgid "" "Flavor's disk is smaller than the minimum size specified in image metadata. " "Flavor disk is %(flavor_size)i bytes, minimum size is %(image_min_disk)i " "bytes." 
msgstr "" "Объем диска разновидности меньше минимального размера, указанного в " "метаданных образа. Объем диска разновидности: %(flavor_size)i байт, " "минимальный объем - %(image_min_disk)i байт." #, python-format msgid "" "Flavor's disk is too small for requested image. Flavor disk is " "%(flavor_size)i bytes, image is %(image_size)i bytes." msgstr "" "Диск разновидности слишком мал для запрошенного образа. Диск разновидности " "составляет %(flavor_size)i байт, размер образа - %(image_size)i байт." msgid "Flavor's memory is too small for requested image." msgstr "Память разновидности слишком мала для запрошенного образа." #, python-format msgid "Floating IP %(address)s association has failed." msgstr "Сбой связи нефиксированного IP-адреса %(address)s." #, python-format msgid "Floating IP %(address)s is associated." msgstr "Нефиксированный IP %(address)s связан." #, python-format msgid "Floating IP %(address)s is not associated with instance %(id)s." msgstr "Нефиксированный IP %(address)s не связан с экземпляром %(id)s." #, python-format msgid "Floating IP not found for ID %(id)s." msgstr "Для ИД %(id)s не найден нефиксированный IP." #, python-format msgid "Floating IP not found for ID %s" msgstr "Нефиксированный IP не найден для ИД %s" #, python-format msgid "Floating IP not found for address %(address)s." msgstr "Нефиксированный IP не найден для адреса %(address)s." msgid "Floating IP pool not found." msgstr "Пул нефиксированных IP не найден." msgid "" "Forbidden to exceed flavor value of number of serial ports passed in image " "meta." msgstr "" "Запрещено превышать значение разновидности для числа последовательных " "портов, передаваемого в метаданные образа." msgid "Found no disk to snapshot." msgstr "Не найден диск для создания моментальной копии." #, python-format msgid "Found no network for bridge %s" msgstr "Не найдена сеть для моста %s" #, python-format msgid "Found non-unique network for bridge %s" msgstr "Найдена не уникальная сеть для моста %s" #, python-format msgid "Found non-unique network for name_label %s" msgstr "Найдена не уникальная сеть для name_label %s" msgid "Guest does not have a console available." msgstr "Гость не имеет доступной консоли." #, python-format msgid "Host %(host)s could not be found." msgstr "Узел %(host)s не найден." #, python-format msgid "Host %(host)s is already mapped to cell %(uuid)s" msgstr "Хост %(host)s уже связан с ячейкой %(uuid)s" #, python-format msgid "Host '%(name)s' is not mapped to any cell" msgstr "Хост '%(name)s' не привязан ни к одной ячейке" msgid "Host PowerOn is not supported by the Hyper-V driver" msgstr "Драйвер Hyper-V не поддерживает включение питания хоста" msgid "Host aggregate is not empty" msgstr "Множество хостов не пустое" msgid "Host does not support guests with NUMA topology set" msgstr "Хост не поддерживает гостей с топологией NUMA" msgid "Host does not support guests with custom memory page sizes" msgstr "" "Хост не поддерживает гостей с пользовательскими размерами страниц памяти" msgid "Host startup on XenServer is not supported." msgstr "Запуск узла на XenServer не поддерживается." 
msgid "Hypervisor driver does not support post_live_migration_at_source method" msgstr "" "Драйвер гипервизора не поддерживает метод post_live_migration_at_source" #, python-format msgid "Hypervisor virt type '%s' is not valid" msgstr "Недопустимый тип виртуализации гипервизора '%s'" #, python-format msgid "Hypervisor virtualization type '%(hv_type)s' is not recognised" msgstr "Тип виртуализации гипервизора %(hv_type)s не распознан" #, python-format msgid "Hypervisor with ID '%s' could not be found." msgstr "Гипервизор с ИД '%s' не найден." #, python-format msgid "IP allocation over quota in pool %s." msgstr "Превышение квоты выделения IP-адресов в пуле %s." msgid "IP allocation over quota." msgstr "Превышение квоты выделения IP-адресов." #, python-format msgid "Image %(image_id)s could not be found." msgstr "Образ %(image_id)s не найден." #, python-format msgid "Image %(image_id)s is not active." msgstr "Образ %(image_id)s не активен." #, python-format msgid "Image %(image_id)s is unacceptable: %(reason)s" msgstr "Образ %(image_id)s недопустим: %(reason)s" msgid "Image disk size greater than requested disk size" msgstr "Размер диска образа превышает запрашиваемый размер диска" msgid "Image is not raw format" msgstr "Образ не в формате raw" msgid "Image metadata limit exceeded" msgstr "Ограничение метаданных образа превышено" #, python-format msgid "Image model '%(image)s' is not supported" msgstr "Модель образа '%(image)s' не поддерживается" msgid "Image not found." msgstr "образ не найден." #, python-format msgid "" "Image property '%(name)s' is not permitted to override NUMA configuration " "set against the flavor" msgstr "" "Свойству %(name)s образа не разрешено переопределять конфигурацию NUMA, " "заданную для разновидности" msgid "" "Image property 'hw_cpu_policy' is not permitted to override CPU pinning " "policy set against the flavor" msgstr "" "Свойству образа 'hw_cpu_policy' не разрешено переопределять стратегию " "прикрепления CPU для данной разновидности" msgid "" "Image property 'hw_cpu_thread_policy' is not permitted to override CPU " "thread pinning policy set against the flavor" msgstr "" "Свойству образа 'hw_cpu_thread_policy' не разрешено переопределять стратегию " "прикрепления CPU к нити для данной разновидности" msgid "Image that the instance was started with could not be found." msgstr "Образ, с помощью которого запущен экземпляр, не найден." #, python-format msgid "Image's config drive option '%(config_drive)s' is invalid" msgstr "" "В образе указано недопустимое значение параметра %(config_drive)s диска " "конфигурации" msgid "" "Images with destination_type 'volume' need to have a non-zero size specified" msgstr "" "У образов с destination_type 'volume' должен быть указан ненулевой размер " msgid "In ERROR state" msgstr "В состоянии ОШИБКА" #, python-format msgid "In states %(vm_state)s/%(task_state)s, not RESIZED/None" msgstr "В состояниях %(vm_state)s/%(task_state)s, не RESIZED/None" #, python-format msgid "In-progress live migration %(id)s is not found for server %(uuid)s." msgstr "Оперативный перенос %(id)s не найден для сервера %(uuid)s." msgid "" "Incompatible settings: ephemeral storage encryption is supported only for " "LVM images." msgstr "" "Несовместимые параметры: шифрование временной памяти поддерживается только " "для образов LVM." #, python-format msgid "Info cache for instance %(instance_uuid)s could not be found." msgstr "Кэш информации для экземпляра %(instance_uuid)s не найден." 
#, python-format msgid "" "Instance %(instance)s and volume %(vol)s are not in the same " "availability_zone. Instance is in %(ins_zone)s. Volume is in %(vol_zone)s" msgstr "" "Экземпляр %(instance)s и том %(vol)s не находятся в одной зоне доступности " "availability_zone. Экземпляр находится в %(ins_zone)s. Том находится в " "%(vol_zone)s" #, python-format msgid "Instance %(instance)s does not have a port with id %(port)s" msgstr "В экземпляре %(instance)s нет порта с ИД %(port)s" #, python-format msgid "Instance %(instance_id)s cannot be rescued: %(reason)s" msgstr "" "Экземпляр %(instance_id)s не может быть аварийно восстановлен: %(reason)s" #, python-format msgid "Instance %(instance_id)s could not be found." msgstr "Копия %(instance_id)s не найдена." #, python-format msgid "Instance %(instance_id)s has no tag '%(tag)s'" msgstr "Экземпляр %(instance_id)s не имеет тега '%(tag)s'" #, python-format msgid "Instance %(instance_id)s is not in rescue mode" msgstr "Копия %(instance_id)s не переведена в режим восстановления" #, python-format msgid "Instance %(instance_id)s is not ready" msgstr "Экземпляр %(instance_id)s не готов" #, python-format msgid "Instance %(instance_id)s is not running." msgstr "Копия %(instance_id)s не выполняется." #, python-format msgid "Instance %(instance_id)s is unacceptable: %(reason)s" msgstr "Копия %(instance_id)s недопустима: %(reason)s" #, python-format msgid "Instance %(instance_uuid)s does not specify a NUMA topology" msgstr "В экземпляре %(instance_uuid)s не указана топология NUMA" #, python-format msgid "Instance %(instance_uuid)s does not specify a migration context." msgstr "Экземпляр %(instance_uuid)s не задает контекст миграции." #, python-format msgid "" "Instance %(instance_uuid)s in %(attr)s %(state)s. Cannot %(method)s while " "the instance is in this state." msgstr "" "Копия %(instance_uuid)s в %(attr)s %(state)s. Невозможно %(method)s во время " "нахождения копии в этом состоянии." #, python-format msgid "Instance %(instance_uuid)s is locked" msgstr "Экземпляр %(instance_uuid)s заблокирован" #, python-format msgid "" "Instance %(instance_uuid)s requires config drive, but it does not exist." msgstr "" "Для экземпляра %(instance_uuid)s необходим диск конфигурации, но оне не " "существует." #, python-format msgid "Instance %(name)s already exists." msgstr "Копия %(name)s уже существует." #, python-format msgid "Instance %(server_id)s is in an invalid state for '%(action)s'" msgstr "" "Экземпляр %(server_id)s находится в недопустимом состоянии для действия " "'%(action)s'" #, python-format msgid "Instance %(uuid)s has no mapping to a cell." msgstr "Экземпляр %(uuid)s не имеет связей с ячейкой." #, python-format msgid "Instance %s not found" msgstr "Экземпляр %s не найден" #, python-format msgid "Instance %s provisioning was aborted" msgstr "Предоставление ресурсов для экземпляра %s прервано." msgid "Instance could not be found" msgstr "Копия не найдена" msgid "Instance disk to be encrypted but no context provided" msgstr "Диск экземпляра должен шифроваться, но не передан контекст" msgid "Instance event failed" msgstr "Сбой события экземпляра" #, python-format msgid "Instance group %(group_uuid)s already exists." msgstr "Группа экземпляров %(group_uuid)s уже существует." #, python-format msgid "Instance group %(group_uuid)s could not be found." msgstr "Группа экземпляров %(group_uuid)s не найдена." msgid "Instance has no source host" msgstr "В экземпляре отсутствует исходный хост" msgid "Instance has not been resized." 
msgstr "С копией не производилось изменение размера." #, python-format msgid "Instance hostname %(hostname)s is not a valid DNS name" msgstr "Недопустимое имя DNS экземпляра %(hostname)s" #, python-format msgid "Instance is already in Rescue Mode: %s" msgstr "Копия в состоянии режима восстановления: %s" msgid "Instance is not a member of specified network" msgstr "Копия не является участником заданной сети" #, python-format msgid "Instance rollback performed due to: %s" msgstr "Выполнен откат экземпляра вследствие: %s" #, python-format msgid "" "Insufficient Space on Volume Group %(vg)s. Only %(free_space)db available, " "but %(size)d bytes required by volume %(lv)s." msgstr "" "Недостаточное пространство в группе томов %(vg)s. Доступно только " "%(free_space)db, но %(size)d байт запрошено томом %(lv)s." #, python-format msgid "Insufficient compute resources: %(reason)s." msgstr "Недостаточно вычислительных ресурсов: %(reason)s." #, python-format msgid "Insufficient free memory on compute node to start %(uuid)s." msgstr "Недостаточно памяти на узле сети compute для запуска %(uuid)s." #, python-format msgid "Interface %(interface)s not found." msgstr "Интерфейс %(interface)s не найден." #, python-format msgid "Invalid Base 64 data for file %(path)s" msgstr "Недопустимые данные Base 64 для файла %(path)s" msgid "Invalid Connection Info" msgstr "Неверная информация о соединении" #, python-format msgid "Invalid ID received %(id)s." msgstr "Получен неверный ИД %(id)s." #, python-format msgid "Invalid IP format %s" msgstr "Недопустимый формат IP %s" #, python-format msgid "Invalid IP protocol %(protocol)s." msgstr "Недопустимый протокол IP %(protocol)s." msgid "" "Invalid PCI Whitelist: The PCI whitelist can specify devname or address, but " "not both" msgstr "" "Недопустимый белый список PCI: в белом списке PCI может быть задано либо имя " "устройства, либо адрес, но не то и другое одновременно" #, python-format msgid "Invalid PCI alias definition: %(reason)s" msgstr "Недопустимое определение псевдонима PCI: %(reason)s" #, python-format msgid "Invalid Regular Expression %s" msgstr "Недопустимое регулярное выражение %s" #, python-format msgid "Invalid characters in hostname '%(hostname)s'" msgstr "Недопустимые символы в имени хоста %(hostname)s" msgid "Invalid config_drive provided." msgstr "Указан неверный config_drive." #, python-format msgid "Invalid config_drive_format \"%s\"" msgstr "Неверный config_drive_format \"%s\"" #, python-format msgid "Invalid console type %(console_type)s" msgstr "Недопустимый тип консоли %(console_type)s" #, python-format msgid "Invalid content type %(content_type)s." msgstr "Недопустимый тип содержимого %(content_type)s." #, python-format msgid "Invalid datetime string: %(reason)s" msgstr "Недопустимая строка даты/времени: %(reason)s" msgid "Invalid device UUID." msgstr "Недопустимый UUID устройства." #, python-format msgid "Invalid entry: '%s'" msgstr "Недопустимая запись: '%s'" #, python-format msgid "Invalid entry: '%s'; Expecting dict" msgstr "Недопустимая запись: '%s'; ожидался dict" #, python-format msgid "Invalid entry: '%s'; Expecting list or dict" msgstr "Недопустимая запись: '%s'; ожидается список или dict" #, python-format msgid "Invalid exclusion expression %r" msgstr "Недопустимое выражение исключения %r" #, python-format msgid "Invalid image format '%(format)s'" msgstr "Недопустимый формат образа '%(format)s'" #, python-format msgid "Invalid image href %(image_href)s." msgstr "Недопустимый образ href %(image_href)s." 
#, python-format msgid "Invalid inclusion expression %r" msgstr "Недопустимое выражение включения %r" #, python-format msgid "" "Invalid input for field/attribute %(path)s. Value: %(value)s. %(message)s" msgstr "" "Недопустимые входные данные для поля/атрибута %(path)s. Значение: %(value)s. " "%(message)s" #, python-format msgid "Invalid input received: %(reason)s" msgstr "Получены недопустимые входные данные: %(reason)s" msgid "Invalid instance image." msgstr "Неверный образ экземпляра." #, python-format msgid "Invalid is_public filter [%s]" msgstr "Неверный фильтр is_public [%s]" msgid "Invalid key_name provided." msgstr "Предоставлен недопустимый key_name." #, python-format msgid "Invalid memory page size '%(pagesize)s'" msgstr "Недопустимый размер страницы памяти '%(pagesize)s'" msgid "Invalid metadata key" msgstr "Неправильный ключ метаданных" #, python-format msgid "Invalid metadata size: %(reason)s" msgstr "Недопустимый размер метаданных: %(reason)s" #, python-format msgid "Invalid metadata: %(reason)s" msgstr "Недопустимые метаданные: %(reason)s" #, python-format msgid "Invalid minDisk filter [%s]" msgstr "Неверный фильтр minDisk_public [%s]" #, python-format msgid "Invalid minRam filter [%s]" msgstr "Неверный фильтр minRam_public [%s]" #, python-format msgid "Invalid port range %(from_port)s:%(to_port)s. %(msg)s" msgstr "Недопустимый диапазон портов %(from_port)s:%(to_port)s. %(msg)s" msgid "Invalid proxy request signature." msgstr "Неверная подпись запроса прокси." #, python-format msgid "Invalid range expression %r" msgstr "Недопустимое выражение диапазона %r" msgid "Invalid service catalog json." msgstr "Недопустимый json каталога службы." msgid "Invalid start time. The start time cannot occur after the end time." msgstr "" "Неверное время запуска. Время запуска не может быть больше конечного времени." msgid "Invalid state of instance files on shared storage" msgstr "Недопустимое состояние файлов экземпляров в общем хранилище" #, python-format msgid "Invalid timestamp for date %s" msgstr "Неверное системное время для даты %s" #, python-format msgid "Invalid usage_type: %s" msgstr "Недопустимый usage_type: %s" #, python-format msgid "Invalid value for Config Drive option: %(option)s" msgstr "Недопустимое значение для опции Диск конфигурации: %(option)s" #, python-format msgid "Invalid virtual interface address %s in request" msgstr "Неверный адрес виртуального интерфейса %s в запросе" #, python-format msgid "Invalid volume access mode: %(access_mode)s" msgstr "Недопустимый режим доступа к тому: %(access_mode)s" #, python-format msgid "Invalid volume: %(reason)s" msgstr "Недопустимый том: %(reason)s" msgid "Invalid volume_size." msgstr "Недопустимое значение volume_size." #, python-format msgid "Ironic node uuid not supplied to driver for instance %s." msgstr "Драйверу не передан UUID узла Ironic для экземпляра %s." #, python-format msgid "" "It is not allowed to create an interface on external network %(network_uuid)s" msgstr "Не разрешено создавать интерфейсы во внешней сети %(network_uuid)s" #, python-format msgid "" "Kernel/Ramdisk image is too large: %(vdi_size)d bytes, max %(max_size)d bytes" msgstr "" "Превышен размер ядра/Ramdisk образа: %(vdi_size)d байт, макс. %(max_size)d " "байт" msgid "" "Key Names can only contain alphanumeric characters, periods, dashes, " "underscores, colons and spaces." msgstr "" "Имена ключей могут содержать в себе только алфавитно-цифровые символы, " "точки, дефисы, подчеркивания, двоеточия и пробелы." 
#, python-format msgid "Key manager error: %(reason)s" msgstr "Ошибка администратора ключей: %(reason)s" #, python-format msgid "Key pair '%(key_name)s' already exists." msgstr "Пара ключа '%(key_name)s' уже существует." #, python-format msgid "Keypair %(name)s not found for user %(user_id)s" msgstr "" "Криптографическая пара %(name)s не найдена для пользователя %(user_id)s" #, python-format msgid "Keypair data is invalid: %(reason)s" msgstr "Недопустимая пара ключей: %(reason)s" msgid "Keypair name contains unsafe characters" msgstr "Имя криптографической пары содержит ненадежные символы" msgid "Keypair name must be string and between 1 and 255 characters long" msgstr "" "Имя криптографической пары должно быть строкой длиной от 1 до 255 символов" msgid "Limits only supported from vCenter 6.0 and above" msgstr "Ограничения поддерживаются только в vCenter 6.0 и выше" #, python-format msgid "Live migration %(id)s for server %(uuid)s is not in progress." msgstr "Оперативный перенос %(id)s для сервера %(uuid)s не выполняется." #, python-format msgid "Malformed message body: %(reason)s" msgstr "Неправильное тело сообщения: %(reason)s" #, python-format msgid "" "Malformed request URL: URL's project_id '%(project_id)s' doesn't match " "Context's project_id '%(context_project_id)s'" msgstr "" "Неверный формат URL запроса: project_id '%(project_id)s' для URL не " "соответствует project_id '%(context_project_id)s' для контекста" msgid "Malformed request body" msgstr "Неправильное тело запроса" msgid "Mapping image to local is not supported." msgstr "Преобразование образа в локальный не поддерживается." #, python-format msgid "Marker %(marker)s could not be found." msgstr "Маркер %(marker)s не найден." msgid "Maximum number of floating IPs exceeded" msgstr "Превышено максимальное число нефиксированных IP" msgid "Maximum number of key pairs exceeded" msgstr "Максимальное число пар ключей превышено" #, python-format msgid "Maximum number of metadata items exceeds %(allowed)d" msgstr "Максимальное число элементов метаданных превышает %(allowed)d" msgid "Maximum number of ports exceeded" msgstr "Превышено максимальное число портов" msgid "Maximum number of security groups or rules exceeded" msgstr "Максимальное число групп защиты или правил превышено" msgid "Metadata item was not found" msgstr "Элемент метаданных не найден" msgid "Metadata property key greater than 255 characters" msgstr "Ключ свойства метаданных превышает 256 символов" msgid "Metadata property value greater than 255 characters" msgstr "Значение свойства метаданных превышает 256 символов" msgid "Metadata type should be dict." msgstr "Тип метаданных должен быть задан как dict." #, python-format msgid "" "Metric %(name)s could not be found on the compute host node %(host)s." "%(node)s." msgstr "" "Не удалось найти показатель %(name)s на вычислительном узле хоста %(host)s." "%(node)s." msgid "Migrate Receive failed" msgstr "Не удалось выполнить получение переноса" msgid "Migrate Send failed" msgstr "Не удалось выполнить отправку переноса" #, python-format msgid "Migration %(id)s for server %(uuid)s is not live-migration." msgstr "" "Перенос %(id)s для сервера %(uuid)s не выполняется в оперативном режиме." #, python-format msgid "Migration %(migration_id)s could not be found." msgstr "Перемещение %(migration_id)s не найдено." 
#, python-format msgid "Migration %(migration_id)s not found for instance %(instance_id)s" msgstr "Перенос %(migration_id)s не найден для экземпляра %(instance_id)s" #, python-format msgid "" "Migration %(migration_id)s state of instance %(instance_uuid)s is %(state)s. " "Cannot %(method)s while the migration is in this state." msgstr "" "Состояние переноса %(migration_id)s для экземпляра %(instance_uuid)s - " "%(state)s. В этом состоянии переноса выполнить %(method)s невозможно." #, python-format msgid "Migration error: %(reason)s" msgstr "Ошибка переноса: %(reason)s" msgid "Migration is not supported for LVM backed instances" msgstr "Перенос не поддерживается для зарезервированных экземпляров LVM" #, python-format msgid "" "Migration not found for instance %(instance_id)s with status %(status)s." msgstr "" "Перемещение не найдено для копии %(instance_id)s в состоянии %(status)s." #, python-format msgid "Migration pre-check error: %(reason)s" msgstr "Ошибка предварительной проверки переноса: %(reason)s" #, python-format msgid "Migration select destinations error: %(reason)s" msgstr "Ошибка выбора целевых объектов переноса: %(reason)s" #, python-format msgid "Missing arguments: %s" msgstr "Отсутствуют аргументы: %s" #, python-format msgid "Missing column %(table)s.%(column)s in shadow table" msgstr "Отсутствует столбец %(table)s.%(column)s в теневой таблице" msgid "Missing device UUID." msgstr "Не указан UUID устройства." msgid "Missing disabled reason field" msgstr "Отсутствует поле причины выключения" msgid "Missing forced_down field" msgstr "Отсутствует поле forced_down " msgid "Missing imageRef attribute" msgstr "Отсутствует атрибут imageRef" #, python-format msgid "Missing keys: %s" msgstr "Отсутствуют ключи: %s" #, python-format msgid "Missing parameter %s" msgstr "Не указан параметр %s" msgid "Missing parameter dict" msgstr "Отсутствует параметр dict" #, python-format msgid "" "More than one instance is associated with fixed IP address '%(address)s'." msgstr "" "Более одного экземпляра связано с фиксированным IP-адресом '%(address)s'." msgid "" "More than one possible network found. Specify network ID(s) to select which " "one(s) to connect to." msgstr "" "Найдено более одной возможной сети. Укажите ИД сетей, к которой требуется " "выполнить подключение." msgid "More than one swap drive requested." msgstr "Запрошено несколько временных дисков." #, python-format msgid "Multi-boot operating system found in %s" msgstr "Операционная система альтернативной загрузки найдена в %s" msgid "Multiple X-Instance-ID headers found within request." msgstr "Несколько заголовков X-Instance-ID находится в запросе." msgid "Multiple X-Tenant-ID headers found within request." msgstr "В запросе обнаружено несколько заголовков X-Tenant-ID." #, python-format msgid "Multiple floating IP pools matches found for name '%s'" msgstr "Найдено несколько соответствий пулов нефиксированных IP для имени '%s'" #, python-format msgid "Multiple floating IPs are found for address %(address)s." msgstr "Несколько нефиксированных IP найдено для адреса %(address)s." msgid "" "Multiple hosts may be managed by the VMWare vCenter driver; therefore we do " "not return uptime for just one host." msgstr "" "Драйвером VMWare vCenter может управляться несколько хостов, поэтому время " "работы для отдельного хоста не возвращается." msgid "Multiple possible networks found, use a Network ID to be more specific." msgstr "" "Найдено несколько возможных сетей, используйте более определенный ИД сети." 
#, python-format msgid "" "Multiple security groups found matching '%s'. Use an ID to be more specific." msgstr "" "Найдено несколько групп защиты, соответствующих '%s'. Используйте ИД, " "который будет более определенным." msgid "Must input network_id when request IP address" msgstr "Необходимо указывать network_id при запросе IP-адреса" msgid "Must not input both network_id and port_id" msgstr "Нельзя вводить и network_id, и port_id" msgid "" "Must specify connection_url, connection_username (optionally), and " "connection_password to use compute_driver=xenapi.XenAPIDriver" msgstr "" "Необходимо указать connection_url, connection_username (необязательно) и " "connection_password для использования compute_driver=xenapi.XenAPIDriver" msgid "" "Must specify host_ip, host_username and host_password to use vmwareapi." "VMwareVCDriver" msgstr "" "Необходимо указать host_ip, host_username и host_password для использования " "vmwareapi.VMwareVCDriver" msgid "Must supply a positive value for max_number" msgstr "max_number должно быть положительным числом" msgid "Must supply a positive value for max_rows" msgstr "Необходимо предоставить положительное значение для max_rows" #, python-format msgid "Network %(network_id)s could not be found." msgstr "Сеть %(network_id)s не найдена." #, python-format msgid "" "Network %(network_uuid)s requires a subnet in order to boot instances on." msgstr "Сети %(network_uuid)s требуется подсеть для загрузки экземпляров." #, python-format msgid "Network could not be found for bridge %(bridge)s" msgstr "Сеть не может быть найдена для моста %(bridge)s" #, python-format msgid "Network could not be found for instance %(instance_id)s." msgstr "Сеть не найдена для копии %(instance_id)s." msgid "Network not found" msgstr "Сеть не найдена" msgid "" "Network requires port_security_enabled and subnet associated in order to " "apply security groups." msgstr "" "Сеть требует связи port_security_enabled и subnet, для того чтобы применить " "группы защиты." msgid "New volume must be detached in order to swap." msgstr "Для подкачки новый том необходимо отключить." msgid "New volume must be the same size or larger." msgstr "Размер нового тома должен быть тем же или большим." #, python-format msgid "No Block Device Mapping with id %(id)s." msgstr "Отсутствует связь блочного устройства с ИД %(id)s." msgid "No Unique Match Found." msgstr "Уникальное соответствие не найдено." #, python-format msgid "No agent-build associated with id %(id)s." msgstr "С %(id)s не связан ни один agent-build." 
msgid "No compute host specified" msgstr "Хост вычислений не указан" #, python-format msgid "No configuration information found for operating system %(os_name)s" msgstr "" "Не найдена информация о конфигурации для операционной системы %(os_name)s" #, python-format msgid "No device with MAC address %s exists on the VM" msgstr "В виртуальной машине нет устройства с MAC-адресом %s" #, python-format msgid "No device with interface-id %s exists on VM" msgstr "В виртуальной машине нет устройства с interface-id %s" #, python-format msgid "No disk at %(location)s" msgstr "Отсутствует диск в %(location)s" #, python-format msgid "No fixed IP addresses available for network: %(net)s" msgstr "Нет доступных фиксированных IP-адресов для сети %(net)s" msgid "No fixed IPs associated to instance" msgstr "Нет фиксированных IP, связанных с экземпляром" msgid "No free nbd devices" msgstr "Нет свободных устройств nbd" msgid "No host available on cluster" msgstr "Отсутствует хост в кластере" msgid "No hosts found to map to cell, exiting." msgstr "Нет хостов для связывания с ячейкой, выход." #, python-format msgid "No hypervisor matching '%s' could be found." msgstr "Гипервизор, соответствующий '%s', не найден." msgid "No image locations are accessible" msgstr "Нет доступных расположений образов" #, python-format msgid "" "No live migration URI configured and no default available for \"%(virt_type)s" "\" hypervisor virtualization type." msgstr "" "Не настроен URI оперативной миграции, и не задано значение по умолчанию для " "типа виртуализации гипервизора \"%(virt_type)s\"." msgid "No more floating IPs available." msgstr "Нет доступных нефиксированных IP." #, python-format msgid "No more floating IPs in pool %s." msgstr "Нет доступных нефиксированных IP в пуле %s." #, python-format msgid "No mount points found in %(root)s of %(image)s" msgstr "Точки монтирования не найдены в %(root)s из %(image)s" #, python-format msgid "No operating system found in %s" msgstr "Операционная система не найдена в %s" #, python-format msgid "No primary VDI found for %s" msgstr "Первичный VDI не найден для %s" msgid "No root disk defined." msgstr "Не определен корневой диск." #, python-format msgid "" "No specific network was requested and none are available for project " "'%(project_id)s'." msgstr "" "Требуемая сеть не задана, и нет доступных сетей для проекта '%(project_id)s'." msgid "No suitable network for migrate" msgstr "Нет подходящей сети для переноса" msgid "No valid host found for cold migrate" msgstr "Не найден допустимый хост для холодного переноса" msgid "No valid host found for resize" msgstr "Не найдены допустимые хосты для изменения размера" #, python-format msgid "No valid host was found. %(reason)s" msgstr "Допустимый узел не найден. %(reason)s" #, python-format msgid "No volume Block Device Mapping at path: %(path)s" msgstr "Нет отображения блочных устройств тома в пути: %(path)s" #, python-format msgid "No volume Block Device Mapping with id %(volume_id)s." msgstr "Отсутствует связывание блочного устройства тома с ИД %(volume_id)s." #, python-format msgid "Node %s could not be found." msgstr "Узел %s не найден." #, python-format msgid "Not able to acquire a free port for %(host)s" msgstr "Не удалось получить свободный порт для %(host)s" #, python-format msgid "Not able to bind %(host)s:%(port)d, %(error)s" msgstr "Не удалось связать %(host)s:%(port)d, %(error)s" #, python-format msgid "" "Not all Virtual Functions of PF %(compute_node_id)s:%(address)s are free." 
msgstr "" "Не все виртуальные функции для PF %(compute_node_id)s:%(address)s свободны." msgid "Not an rbd snapshot" msgstr "Не является моментальной копией rbd" #, python-format msgid "Not authorized for image %(image_id)s." msgstr "Нет доступа к образу %(image_id)s." msgid "Not authorized." msgstr "Не авторизировано." msgid "Not enough parameters to build a valid rule." msgstr "Недостаточно параметров для сбора правильного правила." msgid "Not implemented on Windows" msgstr "Не реализован в Windows" msgid "Not stored in rbd" msgstr "Не сохранено в rbd" msgid "Nothing was archived." msgstr "Ничего не архивировано." #, python-format msgid "Nova requires libvirt version %s or greater." msgstr "Для Nova требуется версия libvirt %s или выше." msgid "Number of Rows Archived" msgstr "Число архивированных строк" #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "Действие объекта %(action)s не выполнено, причина: %(reason)s" msgid "Old volume is attached to a different instance." msgstr "Старый том подключен к другому экземпляру." #, python-format msgid "One or more hosts already in availability zone(s) %s" msgstr "Один или несколько хостов уже находятся в зоне готовности %s" msgid "Only administrators may list deleted instances" msgstr "Только администраторы могут выводить список удаленных экземпляров" #, python-format msgid "" "Only file-based SRs (ext/NFS) are supported by this feature. SR %(uuid)s is " "of type %(type)s" msgstr "" "В этой функции поддерживаются только SR (ext/NFS) на основе файлов. SR " "%(uuid)s имеет тип %(type)s" msgid "Origin header does not match this host." msgstr "Заголовок Origin не соответствует данному хосту." msgid "Origin header not valid." msgstr "Недопустимый заголовок Origin." msgid "Origin header protocol does not match this host." msgstr "Протокол заголовка Origin не соответствует данному хосту." #, python-format msgid "PCI Device %(node_id)s:%(address)s not found." msgstr "Устройство PCI %(node_id)s:%(address)s не найдено." #, python-format msgid "PCI alias %(alias)s is not defined" msgstr "Псевдоним PCI %(alias)s не определен" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is %(status)s instead of " "%(hopestatus)s" msgstr "" "Устройство PCI %(compute_node_id)s:%(address)s является %(status)s, а не " "%(hopestatus)s" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is owned by %(owner)s instead of " "%(hopeowner)s" msgstr "" "Устройство PCI %(compute_node_id)s:%(address)s принадлежит %(owner)s, а не " "%(hopeowner)s" #, python-format msgid "PCI device %(id)s not found" msgstr "Устройство PCI %(id)s не найдено" #, python-format msgid "PCI device request %(requests)s failed" msgstr "Запрос устройства PCI %(requests)s не выполнен" #, python-format msgid "PIF %s does not contain IP address" msgstr "PIF %s не содержит IP-адрес" #, python-format msgid "Page size %(pagesize)s forbidden against '%(against)s'" msgstr "Размер страницы %(pagesize)s запрещен для '%(against)s'" #, python-format msgid "Page size %(pagesize)s is not supported by the host." msgstr "Размер страницы %(pagesize)s не поддерживается хостом." #, python-format msgid "" "Parameters %(missing_params)s not present in vif_details for vif %(vif_id)s. " "Check your Neutron configuration to validate that the macvtap parameters are " "correct." msgstr "" "Параметры %(missing_params)s отсутствуют в vif_details для vif %(vif_id)s. " "Проверьте правильность параметров macvtap в конфигурации Neutron. 
" #, python-format msgid "Path %s must be LVM logical volume" msgstr "Путь %s должен быть логическим томом LVM" msgid "Paused" msgstr "На паузе" msgid "Personality file limit exceeded" msgstr "Превышено ограничение файла личных параметров" #, python-format msgid "" "Physical Function %(compute_node_id)s:%(address)s, related to VF " "%(compute_node_id)s:%(vf_address)s is %(status)s instead of %(hopestatus)s" msgstr "" "Состояние физической функции %(compute_node_id)s:%(address)s, связанной с VF " "%(compute_node_id)s:%(vf_address)s - это %(status)s вместо %(hopestatus)s" #, python-format msgid "Physical network is missing for network %(network_uuid)s" msgstr "Отсутствует физическая сеть для сети %(network_uuid)s" #, python-format msgid "Policy doesn't allow %(action)s to be performed." msgstr "Политика не допускает выполнения %(action)s." #, python-format msgid "Port %(port_id)s is still in use." msgstr "Порт %(port_id)s по-прежнему занят." #, python-format msgid "Port %(port_id)s not usable for instance %(instance)s." msgstr "Порт %(port_id)s не применим для экземпляра %(instance)s." #, python-format msgid "" "Port %(port_id)s not usable for instance %(instance)s. Value %(value)s " "assigned to dns_name attribute does not match instance's hostname " "%(hostname)s" msgstr "" "Порт %(port_id)s не применим для экземпляра %(instance)s. Значение " "%(value)s, присвоенное атрибуту dns_name attribute, не совпадает с именем " "хоста %(hostname)s экземпляра" #, python-format msgid "Port %(port_id)s requires a FixedIP in order to be used." msgstr "Для использования порта %(port_id)s требуется FixedIP." #, python-format msgid "Port %s is not attached" msgstr "Порт %s не подключен" #, python-format msgid "Port id %(port_id)s could not be found." msgstr "Не удалось найти ИД порта %(port_id)s." #, python-format msgid "Provided video model (%(model)s) is not supported." msgstr "Переданная модель видео (%(model)s) не поддерживается." #, python-format msgid "Provided watchdog action (%(action)s) is not supported." msgstr "" "Указанное действие сторожевого устройства %(action)s) не поддерживается." msgid "QEMU guest agent is not enabled" msgstr "Гостевой агент QEMU не включен" #, python-format msgid "Quiescing is not supported in instance %(instance_id)s" msgstr "В экземпляре %(instance_id)s приостановка не поддерживается " #, python-format msgid "Quota class %(class_name)s could not be found." msgstr "Класс квоты %(class_name)s не найден." msgid "Quota could not be found" msgstr "Квота не найдена" #, python-format msgid "" "Quota exceeded for %(overs)s: Requested %(req)s, but already used %(used)s " "of %(allowed)s %(overs)s" msgstr "" "Превышена квота для %(overs)s: Запрошено %(req)s, но уже используется " "%(used)s %(allowed)s %(overs)s" #, python-format msgid "Quota exceeded for resources: %(overs)s" msgstr "Квота превышена для ресурсов: %(overs)s" msgid "Quota exceeded, too many key pairs." msgstr "Квота превышена, слишком много пар ключей." msgid "Quota exceeded, too many server groups." msgstr "Превышена квота, слишком много групп серверов." msgid "Quota exceeded, too many servers in group" msgstr "Превышена квота, слишком много серверов в группе." #, python-format msgid "Quota exceeded: code=%(code)s" msgstr "Квота превышена: код=%(code)s" #, python-format msgid "Quota exists for project %(project_id)s, resource %(resource)s" msgstr "Квота существует для проекта %(project_id)s, ресурс %(resource)s" #, python-format msgid "Quota for project %(project_id)s could not be found." 
msgstr "Квота проекта %(project_id)s не найдена." #, python-format msgid "" "Quota for user %(user_id)s in project %(project_id)s could not be found." msgstr "" "Не удалось найти квоту для пользователя %(user_id)s в проекте %(project_id)s." #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be greater than or equal to " "already used and reserved %(minimum)s." msgstr "" "Ограничение квоты %(limit)s для %(resource)s должно быть не меньше уже " "занятого и зарезервированного объема %(minimum)s." #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be less than or equal to " "%(maximum)s." msgstr "" "Ограничение квоты %(limit)s для %(resource)s должно быть не больше " "%(maximum)s." #, python-format msgid "Reached maximum number of retries trying to unplug VBD %s" msgstr "Достигнуто максимальное число попыток отсоединения VBD %s" msgid "" "Realtime policy needs vCPU(s) mask configured with at least 1 RT vCPU and 1 " "ordinary vCPU. See hw:cpu_realtime_mask or hw_cpu_realtime_mask" msgstr "" "Стратегия реального времени требует, чтобы маска vCPU(s) была настроена с " "хотя бы одним 1 vCPU реального времени и 1 обычным vCPU. См. hw:" "cpu_realtime_mask или hw_cpu_realtime_mask" msgid "Request body and URI mismatch" msgstr "Тело запроса и URI не совпадают" msgid "Request is too large." msgstr "Запрос слишком велик." #, python-format msgid "Request of image %(image_id)s got BadRequest response: %(response)s" msgstr "На запрос образа %(image_id)s получен ответ BadRequest: %(response)s" #, python-format msgid "RequestSpec not found for instance %(instance_uuid)s" msgstr "RequestSpec не найден для %(instance_uuid)s" msgid "Requested CPU control policy not supported by host" msgstr "Запрошенная стратегия управления CPU не поддерживается хостом" #, python-format msgid "" "Requested hardware '%(model)s' is not supported by the '%(virt)s' virt driver" msgstr "" "Запрошенное аппаратное обеспечение '%(model)s' не поддерживается виртуальным " "драйвером '%(virt)s' " #, python-format msgid "Requested image %(image)s has automatic disk resize disabled." msgstr "" "Для запрашиваемого образа %(image)s отключена функция автоматического " "изменения размера диска." msgid "" "Requested instance NUMA topology cannot fit the given host NUMA topology" msgstr "" "Запрошенная топология NUMA экземпляра не подходит для данной топологии NUMA " "хоста" msgid "" "Requested instance NUMA topology together with requested PCI devices cannot " "fit the given host NUMA topology" msgstr "" "Запрошенная топология NUMA экземпляра и запрошенные устройства PCI не " "помещаются в заданной топологии NUMA хоста" #, python-format msgid "" "Requested vCPU limits %(sockets)d:%(cores)d:%(threads)d are impossible to " "satisfy for vcpus count %(vcpus)d" msgstr "" "Запрошенные ограничения VCPU %(sockets)d:%(cores)d:%(threads)d невозможно " "удовлетворить для %(vcpus)d VCPU" #, python-format msgid "Rescue device does not exist for instance %s" msgstr "Устройство аварийного восстановления не существует для экземпляра %s" #, python-format msgid "Resize error: %(reason)s" msgstr "Ошибка при изменении размера: %(reason)s" msgid "Resize to zero disk flavor is not allowed." msgstr "Разновидность диска нельзя сделать нулевого размера." msgid "Resource could not be found." msgstr "Ресурс не может быть найден." 
msgid "Resumed" msgstr "Продолжено" #, python-format msgid "Root element name should be '%(name)s' not '%(tag)s'" msgstr "Имя корневого элемента должно быть %(name)s, а не %(tag)s" #, python-format msgid "Running batches of %i until complete" msgstr "Пакеты %i будут выполняться до полного завершения" #, python-format msgid "Scheduler Host Filter %(filter_name)s could not be found." msgstr "Фильтр узлов диспетчера %(filter_name)s не найден" #, python-format msgid "Security group %(name)s is not found for project %(project)s" msgstr "Группа защиты %(name)s не найдена для проекта %(project)s" #, python-format msgid "" "Security group %(security_group_id)s not found for project %(project_id)s." msgstr "" "Группа безопасности %(security_group_id)s не найдена для проекта " "%(project_id)s." #, python-format msgid "Security group %(security_group_id)s not found." msgstr "Группа безопасности %(security_group_id)s не найдена." #, python-format msgid "" "Security group %(security_group_name)s already exists for project " "%(project_id)s." msgstr "" "Группа защиты %(security_group_name)s уже существует для проекта. " "%(project_id)s." #, python-format msgid "" "Security group %(security_group_name)s not associated with the instance " "%(instance)s" msgstr "" "Группа защиты %(security_group_name)s не связана с экземпляром %(instance)s" msgid "Security group id should be uuid" msgstr "ИД группы защиты должен быть uuid" msgid "Security group name cannot be empty" msgstr "Наименование группы безопасности не может отсутствовать" msgid "Security group not specified" msgstr "Группа безопасности не задана" #, python-format msgid "Server disk was unable to be resized because: %(reason)s" msgstr "Размер диска сервера не может изменен, причина: %(reason)s" msgid "Server does not exist" msgstr "Сервер не существует" #, python-format msgid "ServerGroup policy is not supported: %(reason)s" msgstr "Стратегия ServerGroup не поддерживается: %(reason)s" msgid "ServerGroupAffinityFilter not configured" msgstr "ServerGroupAffinityFilter не настроен" msgid "ServerGroupAntiAffinityFilter not configured" msgstr "ServerGroupAntiAffinityFilter не настроен" msgid "ServerGroupSoftAffinityWeigher not configured" msgstr "ServerGroupSoftAffinityWeigher не настроен" msgid "ServerGroupSoftAntiAffinityWeigher not configured" msgstr "ServerGroupSoftAntiAffinityWeigher не настроен" #, python-format msgid "Service %(service_id)s could not be found." msgstr "Служба %(service_id)s не найдена." #, python-format msgid "Service %s not found." msgstr "Служба %s не найдена." msgid "Service is unavailable at this time." msgstr "В данный момент служба недоступна." #, python-format msgid "Service with host %(host)s binary %(binary)s exists." msgstr "Служба с хостом %(host)s для двоичного файла %(binary)s существует." #, python-format msgid "Service with host %(host)s topic %(topic)s exists." msgstr "Служба с хостом %(host)s для раздела %(topic)s существует." msgid "Set admin password is not supported" msgstr "Указание пароля администратора не поддерживается." #, python-format msgid "Shadow table with name %(name)s already exists." msgstr "Теневая таблица с именем %(name)s уже существует." #, python-format msgid "Share '%s' is not supported" msgstr "Общий ресурс '%s' не поддерживается" #, python-format msgid "Share level '%s' cannot have share configured" msgstr "Для уровня '%s' общего ресурса нельзя настраивать общий ресурс. 
" msgid "" "Shrinking the filesystem down with resize2fs has failed, please check if you " "have enough free space on your disk." msgstr "" "Сокращение размера файловой системы с resize2fs не выполнено, проверьте, " "достаточно ли свободного места на диске." #, python-format msgid "Snapshot %(snapshot_id)s could not be found." msgstr "Снимок %(snapshot_id)s не может быть найден." msgid "Some required fields are missing" msgstr "Не указаны некоторые обязательные поля" #, python-format msgid "" "Something went wrong when deleting a volume snapshot: rebasing a " "%(protocol)s network disk using qemu-img has not been fully tested" msgstr "" "Непредвиденная ошибка при удалении моментальной копии тома: изменение базы " "сетевого диска %(protocol)s с помощью qemu-img не полностью отлажено" msgid "Sort direction size exceeds sort key size" msgstr "Размер направления сортировки превышает размер ключа сортировки" msgid "Sort key supplied was not valid." msgstr "Указанный ключ сортировки неверен." msgid "Specified fixed address not assigned to instance" msgstr "Указанный фиксированный адрес не назначен экземпляру" msgid "Specify `table_name` or `table` param" msgstr "Укажите параметр `table_name` или `table`" msgid "Specify only one param `table_name` `table`" msgstr "Укажите только один параметр `table_name` или `table`" msgid "Started" msgstr "Начато" msgid "Stopped" msgstr "Остановлен" #, python-format msgid "Storage error: %(reason)s" msgstr "Ошибка хранилища: %(reason)s" #, python-format msgid "Storage policy %s did not match any datastores" msgstr "Стратегия хранения %s не соответствует ни одному хранилищу данных" msgid "Success" msgstr "Успешно" msgid "Suspended" msgstr "Приостановлено" msgid "Swap drive requested is larger than instance type allows." msgstr "" "Размер запрашиваемого съемного диска превышает допустимый для данного типа " "экземпляра." msgid "Table" msgstr "Таблица" #, python-format msgid "Task %(task_name)s is already running on host %(host)s" msgstr "Задача %(task_name)s уже выполняется на хосте %(host)s" #, python-format msgid "Task %(task_name)s is not running on host %(host)s" msgstr "Задача %(task_name)s не выполняется на хосте %(host)s" #, python-format msgid "The PCI address %(address)s has an incorrect format." msgstr "Некорректный формат адреса PCI %(address)s." msgid "The backlog must be more than 0" msgstr "Значение запаса должно быть больше 0" #, python-format msgid "The console port range %(min_port)d-%(max_port)d is exhausted." msgstr "Диапазон портов консоли %(min_port)d-%(max_port)d исчерпан." msgid "The created instance's disk would be too small." msgstr "Созданный диск экземпляра будет недостаточным." msgid "The current driver does not support preserving ephemeral partitions." msgstr "Текущий драйвер не поддерживает сохранение временных разделов." msgid "The default PBM policy doesn't exist on the backend." msgstr "Стратегия PBM по умолчанию не существует на базовом сервере." msgid "The floating IP request failed with a BadRequest" msgstr "Сбой нефиксированного IP-адреса с BadRequest" msgid "" "The instance requires a newer hypervisor version than has been provided." msgstr "Копии необходима новая версия гипервизора, вместо предоставленной." #, python-format msgid "The number of defined ports: %(ports)d is over the limit: %(quota)d" msgstr "" "Число определенных портов %(ports)dis превышает максимально разрешенное: " "%(quota)d" msgid "The only partition should be partition 1." msgstr "Единственный раздел должен быть разделом 1." 
#, python-format msgid "The provided RNG device path: (%(path)s) is not present on the host." msgstr "Указанный путь к устройству RNG: (%(path)s) не существует на хосте." msgid "The request body can't be empty" msgstr "Тело запроса не может быть пустым" msgid "The request is invalid." msgstr "Недопустимый запрос." #, python-format msgid "" "The requested amount of video memory %(req_vram)d is higher than the maximum " "allowed by flavor %(max_vram)d." msgstr "" "Запрошенный объем видео-памяти %(req_vram)d превышает максимально допустимый " "для разновидности %(max_vram)d." msgid "The requested availability zone is not available" msgstr "Запрашиваемая зона готовности недоступна" msgid "The requested console type details are not accessible" msgstr "Запрашиваемые сведения о типе консоли недоступны" msgid "The requested functionality is not supported." msgstr "Запрошенная функциональность не поддерживается." #, python-format msgid "The specified cluster '%s' was not found in vCenter" msgstr "Указанный кластер '%s' не найден в vCenter" #, python-format msgid "The supplied device path (%(path)s) is in use." msgstr "Указанный путь к устройству (%(path)s) занят." #, python-format msgid "The supplied device path (%(path)s) is invalid." msgstr "Недопустимое размещение предоставленного устройства (%(path)s)." #, python-format msgid "" "The supplied disk path (%(path)s) already exists, it is expected not to " "exist." msgstr "" "Предоставленный адрес диска (%(path)s) уже существует, но ожидалось, что " "отсутствует." msgid "The supplied hypervisor type of is invalid." msgstr "Представленный тип гипервизора неверен. " msgid "The target host can't be the same one." msgstr "Целевой хост не может быть тем же самым." #, python-format msgid "The token '%(token)s' is invalid or has expired" msgstr "Маркер '%(token)s' недопустим или устарел" #, python-format msgid "" "The volume cannot be assigned the same device name as the root device %s" msgstr "" "Том нельзя назначить имени устройства, совпадающему с корневым устройством %s" #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. Run this command again with the --delete " "option after you have backed up any necessary data." msgstr "" "В таблице '%(table_name)s' существуют записи (%(records)d), для которых " "столбец uuid или instance_uuid равен NULL. Запустите команду повторно с " "опцией --delete после создания резервной копии всей нужных данных." #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. These must be manually cleaned up before " "the migration will pass. Consider running the 'nova-manage db " "null_instance_uuid_scan' command." msgstr "" "В таблице '%(table_name)s' существуют записи (%(records)d), для которых " "столбец uuid или instance_uuid равен NULL. Они должны быть очищены вручную " "до выполнения переноса. Запустите команду 'nova-manage db " "null_instance_uuid_scan'." msgid "There are not enough hosts available." msgstr "Нет достаточного числа доступных хостов." #, python-format msgid "" "There are still %(count)i unmigrated flavor records. Migration cannot " "continue until all instance flavor records have been migrated to the new " "format. Please run `nova-manage db migrate_flavor_data' first." msgstr "" "Осталось %(count)i записей разновидности, которые не были перенесены. 
" "Продолжение миграции невозможно, пока все записи разновидности экземпляра не " "будут перенесены в новый формат. Вначале необходимо выполнить команду 'nova-" "manage db migrate_flavor_data'." #, python-format msgid "There is no such action: %s" msgstr "Не существует такого действия: %s" msgid "There were no records found where instance_uuid was NULL." msgstr "Не обнаружены записи, для которых instance_uuid равен NULL." #, python-format msgid "" "This compute node's hypervisor is older than the minimum supported version: " "%(version)s." msgstr "Версия гипервизора этого узла ниже минимально допустимой: %(version)s." msgid "This domU must be running on the host specified by connection_url" msgstr "Этот domU должен быть запущен на хосте, указанном в connection_url" msgid "" "This method needs to be called with either networks=None and port_ids=None " "or port_ids and networks as not none." msgstr "" "Метод должны быть вызван или с networks=None и port_ids=None, или с port_ids " "и networks отличными от none." #, python-format msgid "This rule already exists in group %s" msgstr "Это правило уже существует в группе %s" #, python-format msgid "" "This service is older (v%(thisver)i) than the minimum (v%(minver)i) version " "of the rest of the deployment. Unable to continue." msgstr "" "Версия этой службы (v%(thisver)i) меньше минимальной версии (v%(minver)i) " "остальных компонентов развертывания. Продолжение работы невозможно." #, python-format msgid "Timeout waiting for device %s to be created" msgstr "Время ожидания при создании устройства %s" msgid "Timeout waiting for response from cell" msgstr "Тайм-аут ожидания ответа от ячейки" #, python-format msgid "Timeout while checking if we can live migrate to host: %s" msgstr "" "Произошел тайм-аут при проверке возможности оперативной миграции на хост: %s" msgid "To and From ports must be integers" msgstr "Порты От и К должны быть целыми числами" msgid "Token not found" msgstr "Маркер не найден" msgid "Triggering crash dump is not supported" msgstr "Активация дампа сбоя не поддерживается" msgid "Type and Code must be integers for ICMP protocol type" msgstr "Значения Тип и Код должны быть целыми числами для типа протокола ICMP" msgid "UEFI is not supported" msgstr "UEFI не поддерживается" #, python-format msgid "" "Unable to associate floating IP %(address)s to any fixed IPs for instance " "%(id)s. Instance has no fixed IPv4 addresses to associate." msgstr "" "Не удается связать нефиксированный IP %(address)s с каким-либо из " "фиксированных IP для экземпляра %(id)s. У экземпляра нет фиксированных " "адресов IPv4 для связывания." #, python-format msgid "" "Unable to associate floating IP %(address)s to fixed IP %(fixed_address)s " "for instance %(id)s. Error: %(error)s" msgstr "" "Не удалось связать нефиксированный IP-адрес %(address)s с фиксированным IP-" "адресом %(fixed_address)s для экземпляра %(id)s. Ошибка: %(error)s" msgid "Unable to authenticate Ironic client." msgstr "Не удалось идентифицировать клиент Ironic." #, python-format msgid "Unable to contact guest agent. The following call timed out: %(method)s" msgstr "" "Невозможно связаться с гостевым агентом. 
Возник тайм-аут следующего вызова: " "%(method)s" #, python-format msgid "Unable to convert image to %(format)s: %(exp)s" msgstr "Не удается преобразовать образ в %(format)s: %(exp)s" #, python-format msgid "Unable to convert image to raw: %(exp)s" msgstr "Не удается преобразовать образ в формат raw: %(exp)s" #, python-format msgid "Unable to destroy VBD %s" msgstr "Невозможно ликвидировать VBD %s" #, python-format msgid "Unable to destroy VDI %s" msgstr "Невозможно ликвидировать VDI %s" #, python-format msgid "Unable to determine disk bus for '%s'" msgstr "Невозможно определить шину диска для '%s'" #, python-format msgid "Unable to determine disk prefix for %s" msgstr "Невозможно определить префикс диска для %s" #, python-format msgid "Unable to eject %s from the pool; No master found" msgstr "Невозможно удалить %s из пула; главный узел не найден" #, python-format msgid "Unable to eject %s from the pool; pool not empty" msgstr "Невозможно удалить %s из пула; пул не пуст" #, python-format msgid "Unable to find SR from VBD %s" msgstr "Невозможно найти SR из VBD %s" #, python-format msgid "Unable to find SR from VDI %s" msgstr "Не найден SR из VDI %s" #, python-format msgid "Unable to find ca_file : %s" msgstr "Не удалось найти ca_file : %s" #, python-format msgid "Unable to find cert_file : %s" msgstr "Не удалось найти cert_file : %s" #, python-format msgid "Unable to find host for Instance %s" msgstr "Невозможно найти узел для копии %s" msgid "Unable to find iSCSI Target" msgstr "Невозможно найти назначение iSCSI" #, python-format msgid "Unable to find key_file : %s" msgstr "Невозможно найти key_file: %s" msgid "Unable to find root VBD/VDI for VM" msgstr "Невозможно найти корневой VBD/VDI для VM" msgid "Unable to find volume" msgstr "Невозможно найти том" msgid "Unable to get host UUID: /etc/machine-id does not exist" msgstr "Не удалось получить UUID хоста: /etc/machine-id не существует" msgid "Unable to get host UUID: /etc/machine-id is empty" msgstr "Не удалось получить UUID хоста: /etc/machine-id пуст" #, python-format msgid "Unable to get record of VDI %s on" msgstr "Невозможно получить запись VDI %s на" #, python-format msgid "Unable to introduce VDI for SR %s" msgstr "Невозможно внедрить VDI для SR %s" #, python-format msgid "Unable to introduce VDI on SR %s" msgstr "Невозможно внедрить VDI на SR %s" #, python-format msgid "Unable to join %s in the pool" msgstr "Невозможно подключить %s в пул" msgid "" "Unable to launch multiple instances with a single configured port ID. Please " "launch your instance one by one with different ports." msgstr "" "Невозможно запустить несколько экземпляров с одним заданным ИД порта. " "Запустите свой экземпляр последовательно на разных портах." #, python-format msgid "" "Unable to migrate %(instance_uuid)s to %(dest)s: Lack of memory(host:" "%(avail)s <= instance:%(mem_inst)s)" msgstr "" "Невозможно перенести %(instance_uuid)s в %(dest)s: Недостаточно памяти(хост:" "%(avail)s <= экземпляр:%(mem_inst)s)" #, python-format msgid "" "Unable to migrate %(instance_uuid)s: Disk of instance is too large(available " "on destination host:%(available)s < need:%(necessary)s)" msgstr "" "Невозможно перенести %(instance_uuid)s: Диск экземпляра слишком велик " "(доступно на целевом хосте:%(available)s < необходимо:%(necessary)s)" #, python-format msgid "" "Unable to migrate instance (%(instance_id)s) to current host (%(host)s)." msgstr "" "Невозможно переместить копию (%(instance_id)s) на текущий узел (%(host)s)." 
#, python-format msgid "Unable to obtain target information %s" msgstr "Невозможно получить целевую информацию %s" msgid "Unable to resize disk down." msgstr "Нельзя уменьшить размер диска." msgid "Unable to set password on instance" msgstr "Невозможно установить пароль для экземпляра" msgid "Unable to shrink disk." msgstr "Не удалось уменьшить размер диска." msgid "Unable to terminate instance." msgstr "Невозможно завершить экземпляр." #, python-format msgid "Unable to unplug VBD %s" msgstr "Невозможно отсоединить VBD %s" #, python-format msgid "Unacceptable CPU info: %(reason)s" msgstr "Неприменимая информация о CPU: %(reason)s" msgid "Unacceptable parameters." msgstr "Недопустимые параметры." #, python-format msgid "Unavailable console type %(console_type)s." msgstr "Недопустимый тип консоли %(console_type)s." msgid "" "Undefined Block Device Mapping root: BlockDeviceMappingList contains Block " "Device Mappings from multiple instances." msgstr "" "Не определен корневой элемент связывания блочного устройства: " "BlockDeviceMappingList содержит связывания блочных устройств из нескольких " "экземпляров." #, python-format msgid "" "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ " "and attach the Nova API log if possible.\n" "%s" msgstr "" "Неожиданная ошибка API. Сообщите об этом в http://bugs.launchpad.net/nova/ и " "прикрепите протокол API Nova, если возможно.\n" "%s" #, python-format msgid "Unexpected aggregate action %s" msgstr "Непредвиденное составное действие %s" msgid "Unexpected type adding stats" msgstr "Непредвиденный тип добавления статистики" #, python-format msgid "Unexpected vif_type=%s" msgstr "Неожиданный vif_type=%s" msgid "Unknown" msgstr "Неизвестно" msgid "Unknown action" msgstr "Неизвестное действие" #, python-format msgid "Unknown config drive format %(format)s. Select one of iso9660 or vfat." msgstr "" "Неизвестный формат диска конфигурации %(format)s. Выберите iso9660 или vfat." #, python-format msgid "Unknown delete_info type %s" msgstr "Неизвестный тип delete_info %s" #, python-format msgid "Unknown image_type=%s" msgstr "Неизвестный image_type=%s" #, python-format msgid "Unknown quota resources %(unknown)s." msgstr "Неизвестные ресурсы квоты: %(unknown)s." msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "Неизвестное направление сортировки, должно быть 'desc' или 'asc'" #, python-format msgid "Unknown type: %s" msgstr "Неизвестный тип: %s" msgid "Unrecognized legacy format." msgstr "Нераспознанный устаревший формат." #, python-format msgid "Unrecognized read_deleted value '%s'" msgstr "Нераспознанное значение read_deleted '%s'" #, python-format msgid "Unrecognized value '%s' for CONF.running_deleted_instance_action" msgstr "Нераспознанное значение '%s' для CONF.running_deleted_instance_action" #, python-format msgid "Unshelve attempted but the image %s cannot be found." msgstr "" "Предпринята попытка возврата из отложенного состояния, но образ %s не найден." msgid "Unsupported Content-Type" msgstr "Не поддерживаемый тип содержимого" msgid "Upgrade DB using Essex release first." msgstr "Обновите сначала базу данных с помощью выпуска Essex." #, python-format msgid "User %(username)s not found in password file." msgstr "Пользователь %(username)s не найден в файле паролей." #, python-format msgid "User %(username)s not found in shadow file." msgstr "Пользователь %(username)s не найден в теневом файле." msgid "User data needs to be valid base 64." msgstr "Пользовательские данные должны иметь верный формат base64." 
msgid "User does not have admin privileges" msgstr "Пользователь не имеет административных привилегий" msgid "" "Using different block_device_mapping syntaxes is not allowed in the same " "request." msgstr "" "Использование разных форматов block_device_mapping в одном запросе не " "разрешено." #, python-format msgid "" "VDI %(vdi_ref)s is %(virtual_size)d bytes which is larger than flavor size " "of %(new_disk_size)d bytes." msgstr "" "VDI %(vdi_ref)s %(virtual_size)d байт, что больше размера разновидности " "(%(new_disk_size)d байт)." #, python-format msgid "" "VDI not found on SR %(sr)s (vdi_uuid %(vdi_uuid)s, target_lun %(target_lun)s)" msgstr "" "VDI не найден в SR %(sr)s (vdi_uuid %(vdi_uuid)s, target_lun %(target_lun)s)" #, python-format msgid "VHD coalesce attempts exceeded (%d), giving up..." msgstr "Число попыток объединения VHD превысило (%d), отказ..." #, python-format msgid "" "Version %(req_ver)s is not supported by the API. Minimum is %(min_ver)s and " "maximum is %(max_ver)s." msgstr "" "Версия %(req_ver)s не поддерживается в API. Минимальная требуемая версия: " "%(min_ver)s, максимальная: %(max_ver)s." msgid "Virtual Interface creation failed" msgstr "Ошибка создания виртуального интерфейса" msgid "Virtual interface plugin failed" msgstr "Ошибка в модуле виртуального интерфейса" #, python-format msgid "Virtual machine mode '%(vmmode)s' is not recognised" msgstr "Режим виртуальной машины %(vmmode)s не распознан" #, python-format msgid "Virtual machine mode '%s' is not valid" msgstr "Недопустимый режим виртуальной машины '%s'" #, python-format msgid "Virtualization type '%(virt)s' is not supported by this compute driver" msgstr "" "Тип виртуализации '%(virt)s' не поддерживается этим драйвером вычисления" #, python-format msgid "Volume %(volume_id)s could not be attached. Reason: %(reason)s" msgstr "Не удалось подключить том %(volume_id)s. Причина: %(reason)s" #, python-format msgid "Volume %(volume_id)s could not be found." msgstr "Том %(volume_id)s не найден." #, python-format msgid "" "Volume %(volume_id)s did not finish being created even after we waited " "%(seconds)s seconds or %(attempts)s attempts. And its status is " "%(volume_status)s." msgstr "" "Создание тома %(volume_id)s не завершается долгое время %(seconds)s секунд " "или %(attempts)s попыток. Состояние тома: %(volume_status)s." msgid "Volume does not belong to the requested instance." msgstr "Том не относится к запрашиваемому экземпляру." #, python-format msgid "" "Volume encryption is not supported for %(volume_type)s volume %(volume_id)s" msgstr "Для тома %(volume_type)s %(volume_id)s не поддерживается шифрование " #, python-format msgid "" "Volume is smaller than the minimum size specified in image metadata. Volume " "size is %(volume_size)i bytes, minimum size is %(image_min_disk)i bytes." msgstr "" "Размер тома меньше, чем минимальный размер, указанный в метаданных образа. " "Размер тома составляет %(volume_size)i байт, минимальный размер - " "%(image_min_disk)i байт." #, python-format msgid "" "Volume sets block size, but the current libvirt hypervisor '%s' does not " "support custom block size" msgstr "" "Том указывает размер блока, но текущий гипервизор libvirt '%s' не " "поддерживает нестандартный размер блока" #, python-format msgid "" "We do not support scheme '%s' under Python < 2.7.4, please use http or https" msgstr "" "Схема '%s' не поддерживается в Python < 2.7.4, используйте http или https" msgid "When resizing, instances must change flavor!" 
msgstr "При изменении размера экземпляры должны изменить разновидность!" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "При работе сервера в режиме SSL необходимо указать cert_file и key_file в " "файле конфигурации" #, python-format msgid "Wrong quota method %(method)s used on resource %(res)s" msgstr "" "Используется неверный метод контроля квоты %(method)s для ресурса %(res)s" msgid "Wrong type of hook method. Only 'pre' and 'post' type allowed" msgstr "Недопустимый тип метода перехватчика. Допустимые типы: 'pre' и 'post'" msgid "X-Forwarded-For is missing from request." msgstr "В запросе отсутствует X-Forwarded-For." msgid "X-Instance-ID header is missing from request." msgstr "Заголовок X-Instance-ID отсутствует в запросе." msgid "X-Instance-ID-Signature header is missing from request." msgstr "В запросе отсутствует заголовок X-Instance-ID-Signature." msgid "X-Metadata-Provider is missing from request." msgstr "В запросе отсутствует X-Metadata-Provider." msgid "X-Tenant-ID header is missing from request." msgstr "Заголовок X-Tenant-ID отсутствует в запросе." msgid "XAPI supporting relax-xsm-sr-check=true required" msgstr "Требуется поддержка XAPI relax-xsm-sr-check=true" msgid "You are not allowed to delete the image." msgstr "Вам не разрешено удалять образ." msgid "" "You are not authorized to access the image the instance was started with." msgstr "У вас нет прав доступа к образу, с помощью которого запущен экземпляр." msgid "You must implement __call__" msgstr "Отсутствует реализация __call__" msgid "You should specify images_rbd_pool flag to use rbd images." msgstr "Необходимо указать флаг images_rbd_pool для использования образов rbd." msgid "You should specify images_volume_group flag to use LVM images." msgstr "" "Необходимо указать флаг images_volume_group для использования образов LVM." msgid "Zero floating IPs available." msgstr "Нет доступных нефиксированных IP." msgid "admin password can't be changed on existing disk" msgstr "пароль администратора не может быть изменен на существующем диске" msgid "aggregate deleted" msgstr "составной объект удален" msgid "aggregate in error" msgstr "Ошибка в составном объекте" #, python-format msgid "assert_can_migrate failed because: %s" msgstr "Не удалось выполнить assert_can_migrate по причине: %s" msgid "cannot understand JSON" msgstr "невозможно понять JSON" msgid "clone() is not implemented" msgstr "Функция clone() не реализована" #, python-format msgid "connect info: %s" msgstr "информация о соединении: %s" #, python-format msgid "connecting to: %(host)s:%(port)s" msgstr "подключение к: %(host)s:%(port)s" msgid "direct_snapshot() is not implemented" msgstr "Функция direct_snapshot() не реализована" #, python-format msgid "disk type '%s' not supported" msgstr "тип диска %s не поддерживается" #, python-format msgid "empty project id for instance %s" msgstr "пустой ИД проекта для экземпляра %s" msgid "error setting admin password" msgstr "ошибка при установке пароля администратора" #, python-format msgid "error: %s" msgstr "Ошибка: %s" #, python-format msgid "failed to generate X509 fingerprint. Error message: %s" msgstr "не удалось создать отпечаток X509. 
Сообщение об ошибке: %s" msgid "failed to generate fingerprint" msgstr "не удалось создать отпечаток" msgid "filename cannot be None" msgstr "имя файла не может быть None" msgid "floating IP is already associated" msgstr "нефиксированный IP уже связан" msgid "floating IP not found" msgstr "нефиксированный IP не найден" #, python-format msgid "fmt=%(fmt)s backed by: %(backing_file)s" msgstr "fmt=%(fmt)s backed by: %(backing_file)s" #, python-format msgid "href %s does not contain version" msgstr "href %s не содержит версию" msgid "image already mounted" msgstr "образ уже присоединён" #, python-format msgid "instance %s is not running" msgstr "Экземпляр %s не запущен" msgid "instance has a kernel or ramdisk but not both" msgstr "копия содержит ядро или ramdisk, но не оба" msgid "instance is a required argument to use @refresh_cache" msgstr "" "экземпляр является требуемым аргументом для использования @refresh_cache" msgid "instance is not in a suspended state" msgstr "копия не в приостановленном состоянии" msgid "instance is not powered on" msgstr "копия не включена" msgid "instance is powered off and cannot be suspended." msgstr "экземпляр выключен и не может быть приостановлен." #, python-format msgid "instance_id %s could not be found as device id on any ports" msgstr "Не удалось найти instance_id %s как ИД устройства ни на одном порту" msgid "is_public must be a boolean" msgstr "is_public должен быть boolean" msgid "keymgr.fixed_key not defined" msgstr "keymgr.fixed_key не определен" msgid "l3driver call to add floating IP failed" msgstr "Не удалось выполнить вызов l3driver для добавления нефиксированного IP" #, python-format msgid "libguestfs installed but not usable (%s)" msgstr "libguestfs установлена, но ее невозможно использовать (%s)" #, python-format msgid "libguestfs is not installed (%s)" msgstr "Не установлена libguestfs (%s)" #, python-format msgid "marker [%s] not found" msgstr "маркер [%s] не найден" #, python-format msgid "max rows must be <= %(max_value)d" msgstr "максимальное число строк должно быть <= %(max_value)d" msgid "max_count cannot be greater than 1 if an fixed_ip is specified." msgstr "" "Значение max_count не может быть больше 1, если указано значение fixed_ip." 
msgid "min_count must be <= max_count" msgstr "min_count должен быть <= max_count" #, python-format msgid "nbd device %s did not show up" msgstr "Устройство nbd %s не показан" msgid "nbd unavailable: module not loaded" msgstr "nbd недоступен: модуль не загружен" msgid "no hosts to remove" msgstr "Нет хостов для удаления" #, python-format msgid "no match found for %s" msgstr "не найдено соответствие для %s" #, python-format msgid "no usable parent snapshot for volume %s" msgstr "не найдена пригодная родительская моментальная копия для тома %s" #, python-format msgid "no write permission on storage pool %s" msgstr "нет прав записи в пул памяти %s" #, python-format msgid "not able to execute ssh command: %s" msgstr "не может выполнить команду ssh: %s" msgid "old style configuration can use only dictionary or memcached backends" msgstr "" "Конфигурация в старом стиле может основываться только на словаре или " "memcached" msgid "operation time out" msgstr "тайм-аут операции" #, python-format msgid "partition %s not found" msgstr "раздел %s не найден" #, python-format msgid "partition search unsupported with %s" msgstr "поиск раздела не поддерживается %s" msgid "pause not supported for vmwareapi" msgstr "остановка не поддерживается для vmwareapi" msgid "printable characters with at least one non space character" msgstr "печатаемые символы хотя бы один символ, не являющийся пробелом." msgid "printable characters. Can not start or end with whitespace." msgstr "печатаемые символы. Не может начинаться или заканчиваться пробелом." #, python-format msgid "qemu-img failed to execute on %(path)s : %(exp)s" msgstr "qemu-img не выполнен в %(path)s : %(exp)s" #, python-format msgid "qemu-nbd error: %s" msgstr "ошибка qemu-nbd: %s" msgid "rbd python libraries not found" msgstr "Не найдены библиотеки rbd и python" #, python-format msgid "read_deleted can only be one of 'no', 'yes' or 'only', not %r" msgstr "" "read_deleted может принимать значения 'no', 'yes' или 'only', значение %r " "недопустимо" msgid "serve() can only be called once" msgstr "serve() может быть вызван только один раз" msgid "service is a mandatory argument for DB based ServiceGroup driver" msgstr "" "служба является обязательным аргументом для базы данных на основе драйвера " "ServiceGroup" msgid "service is a mandatory argument for Memcached based ServiceGroup driver" msgstr "" "служба является обязательным аргументом для Memcached на основе драйвера " "ServiceGroup" msgid "set_admin_password is not implemented by this driver or guest instance." msgstr "" "set_admin_password не реализован этим драйвером или гостевым экземпляром." 
msgid "setup in progress" msgstr "Выполняется настройка" #, python-format msgid "snapshot for %s" msgstr "моментальная копия для %s" msgid "snapshot_id required in create_info" msgstr "snapshot_id обязателен в create_info" msgid "token not provided" msgstr "маркер не указан" msgid "too many body keys" msgstr "слишком много ключей тела" msgid "unpause not supported for vmwareapi" msgstr "отмена остановки не поддерживается для vmwareapi" msgid "version should be an integer" msgstr "версия должна быть целым числом" #, python-format msgid "vg %s must be LVM volume group" msgstr "vg %s должен быть группой томов LVM" #, python-format msgid "vhostuser_sock_path not present in vif_details for vif %(vif_id)s" msgstr "vhostuser_sock_path отсутствует в vif_details для vif %(vif_id)s" #, python-format msgid "vif type %s not supported" msgstr "Тип vif %s не поддерживается" msgid "vif_type parameter must be present for this vif_driver implementation" msgstr "Параметр vif_type должен присутствовать для этой реализации vif_driver" #, python-format msgid "volume %s already attached" msgstr "Том %s уже присоединен" #, python-format msgid "" "volume '%(vol)s' status must be 'in-use'. Currently in '%(status)s' status" msgstr "" "Требуемое состояние '%(vol)s' тома: 'in-use'. Текущее состояние: '%(status)s'" #, python-format msgid "xenapi.fake does not have an implementation for %s" msgstr "xenapi.fake не имеет реализации для %s" #, python-format msgid "" "xenapi.fake does not have an implementation for %s or it has been called " "with the wrong number of arguments" msgstr "" "xenapi.fake не имеет реализации для %s или был вызван с использованием " "неправильным числом аргументов" ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0664723 nova-21.2.4/nova/locale/tr_TR/0000775000175000017500000000000000000000000016022 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3544698 nova-21.2.4/nova/locale/tr_TR/LC_MESSAGES/0000775000175000017500000000000000000000000017607 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/locale/tr_TR/LC_MESSAGES/nova.po0000664000175000017500000025116100000000000021120 0ustar00zuulzuul00000000000000# Translations template for nova. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the nova project. # # Translators: # Özcan Zafer AYAN , 2013 # Özcan Zafer AYAN , 2013 # Andreas Jaeger , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: nova VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2020-04-27 16:23+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-04-12 06:09+0000\n" "Last-Translator: Copied by Zanata \n" "Language: tr_TR\n" "Plural-Forms: nplurals=1; plural=0;\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 4.3.3\n" "Language-Team: Turkish (Turkey)\n" #, python-format msgid "%(address)s is not a valid IP v4/6 address." msgstr "%(address)s geçerli bir IP v4/6 adresi değildir." 
#, python-format msgid "" "%(binary)s attempted direct database access which is not allowed by policy" msgstr "" "%(binary)s ilkesel olarak izin verilmeyen şekilde doğrudan veri tabanı " "erişimine kalkıştı" #, python-format msgid "%(memsize)d MB of memory assigned, but expected %(memtotal)d MB" msgstr "%(memsize)d MB hafıza atanmış, %(memtotal)d MB bekleniyordu" #, python-format msgid "%(path)s is not on local storage: %(reason)s" msgstr "%(path)s yerel depoda değil: %(reason)s" #, python-format msgid "%(path)s is not on shared storage: %(reason)s" msgstr "%(path)s paylaşımlı depoda değil: %(reason)s" #, python-format msgid "%(type)s hypervisor does not support PCI devices" msgstr "%(type)s hipervizörü PCI aygıtlarını desteklemiyor" #, python-format msgid "%(worker_name)s value of %(workers)s is invalid, must be greater than 0" msgstr "%(workers)s in %(worker_name)s değeri geçersiz, 0'dan büyük olmalı" #, python-format msgid "%s does not support disk hotplug." msgstr "%s disk canlı takmayı desteklemiyor." #, python-format msgid "%s format is not supported" msgstr "%s biçimi desteklenmiyor" #, python-format msgid "%s is not supported." msgstr "%s desteklenmiyor." #, python-format msgid "%s must be either 'MANUAL' or 'AUTO'." msgstr "%s 'MANUAL' veya 'AUTO' olmak zorunda" msgid "'qemu-img info' parsing failed." msgstr "'qemu-img info' ayrıştırma başarısız." #, python-format msgid "'rxtx_factor' argument must be a float between 0 and %g" msgstr "" "'rxtx_factor' bağımsız değişkeni 0 ve %g arasında kesirli bir sayı olmalı " #, python-format msgid "A NetworkModel is required in field %s" msgstr "%s alanında bir AğModeli gerekli" #, python-format msgid "" "API Version String %(version)s is of invalid format. Must be of format " "MajorNum.MinorNum." msgstr "" "API Sürüm Karakter Dizisi %(version)s geçersiz bir biçimde. MajorNum." "MinorNum biçiminde olmalı." #, python-format msgid "API version %(version)s is not supported on this method." msgstr "API sürümü %(version)s bu metodda desteklenmiyor." msgid "Access list not available for public flavors." msgstr "Erişim listesi açık nitelikler için kullanılamaz." #, python-format msgid "Action %s not found" msgstr "Eylem %s bulunamadı" #, python-format msgid "" "Action for request_id %(request_id)s on instance %(instance_uuid)s not found" msgstr "" "%(instance_uuid)s sunucusu üzerinde request_id %(request_id)s için eylem " "bulunamadı" #, python-format msgid "Action: '%(action)s', calling method: %(meth)s, body: %(body)s" msgstr "Eylem: '%(action)s', çağıran metod: %(meth)s, gövde: %(body)s" #, python-format msgid "Add metadata failed for aggregate %(id)s after %(retries)s retries" msgstr "" "Metadata ekleme %(id)s takımı için %(retries)s denemeden sonra başarısız oldu" msgid "Affinity instance group policy was violated." msgstr "İlişki sunucu grubu ilkesi ihlal edildi." #, python-format msgid "Agent does not support the call: %(method)s" msgstr "Ajan çağrıyı desteklemiyor: %(method)s" #, python-format msgid "" "Agent-build with hypervisor %(hypervisor)s os %(os)s architecture " "%(architecture)s exists." msgstr "" "%(hypervisor)s hipervizörüne %(os)s işletim sistemine %(architecture)s " "mimarisine sahip ajan-inşası mevcut." #, python-format msgid "Aggregate %(aggregate_id)s already has host %(host)s." msgstr "%(aggregate_id)s kümesi zaten%(host)s sunucusuna sahip." #, python-format msgid "Aggregate %(aggregate_id)s could not be found." msgstr "%(aggregate_id)s kümesi bulunamadı." 
#, python-format msgid "Aggregate %(aggregate_id)s has no host %(host)s." msgstr "%(aggregate_id)s kümesi %(host)s sunucusuna sahip değil." #, python-format msgid "Aggregate %(aggregate_id)s has no metadata with key %(metadata_key)s." msgstr "" "%(aggregate_id)s kümesi %(metadata_key)s. anahtarı ile hiç metadata'sı yok." #, python-format msgid "" "Aggregate %(aggregate_id)s: action '%(action)s' caused an error: %(reason)s." msgstr "" "Takım %(aggregate_id)s: eylem '%(action)s' hataya sebep oldu: %(reason)s." #, python-format msgid "Aggregate %(aggregate_name)s already exists." msgstr "%(aggregate_name)s kümesi zaten var." #, python-format msgid "Aggregate for host %(host)s count not be found." msgstr "İstemci %(host)s sayısı için takım bulunamadı." msgid "An unknown error has occurred. Please try your request again." msgstr "Bilinmeyen bir hata oluştu. Lütfen tekrar deneyin." msgid "An unknown exception occurred." msgstr "Bilinmeyen bir istisna oluştu." msgid "Anti-affinity instance group policy was violated." msgstr "Zıt-ilişki sunucu grubu ilkesi ihlal edildi." #, python-format msgid "Architecture name '%(arch)s' is not recognised" msgstr "Mimari ismi '%(arch)s' tanınmıyor" #, python-format msgid "Architecture name '%s' is not valid" msgstr "Mimari adı '%s' geçerli değil" #, python-format msgid "" "Attempt to consume PCI device %(compute_node_id)s:%(address)s from empty pool" msgstr "" "Boş havuzdan PCI aygıtı %(compute_node_id)s:%(address)s tüketme çalışması" msgid "Attempted overwrite of an existing value." msgstr "Mevcut değerin üzerine yazılması girişimi." #, python-format msgid "Attribute not supported: %(attr)s" msgstr "Öznitelik desteklenmiyor: %(attr)s" #, python-format msgid "Bad network format: missing %s" msgstr "Yanlış ağ biçimi: %s bulunamadı" msgid "Bad networks format" msgstr "Hatalı ağ biçimi" #, python-format msgid "Bad networks format: network uuid is not in proper format (%s)" msgstr "Yanlış ağ biçimi: ağ UUID'si uygun formatta değil(%s)" #, python-format msgid "Bad prefix for network in cidr %s" msgstr "%s cidr'indeki ağ için kötü ön ek" msgid "Blank components" msgstr "Boş bileşenler" msgid "" "Blank volumes (source: 'blank', dest: 'volume') need to have non-zero size" msgstr "" "Boş mantıksal sürücülerin (kaynak: 'boş', hedef:'mantıksal sürücü') sıfırdan " "farklı boyutu olmalı" #, python-format msgid "Block Device %(id)s is not bootable." msgstr "Blok Aygıtı %(id)s ön yüklenebilir değil." msgid "Block Device Mapping cannot be converted to legacy format. " msgstr "Blok Aygıt Eşleştirmesi eski biçime dönüştürülemiyor. " msgid "Block Device Mapping is Invalid." msgstr "Blok Aygıt Eşleştirmesi Geçersiz." #, python-format msgid "Block Device Mapping is Invalid: %(details)s" msgstr "Blok Aygıt Eşleştirmesi Geçersiz: %(details)s" msgid "" "Block Device Mapping is Invalid: Boot sequence for the instance and image/" "block device mapping combination is not valid." msgstr "" "Blok Aygıt Eşleştirmesi Geçersiz: Sunucu için ön yükleme sırası ve imaj/blok " "aygıt haritası bileşimi geçersiz." msgid "" "Block Device Mapping is Invalid: You specified more local devices than the " "limit allows" msgstr "" "Blok Aygıt Eşleştirmesi Geçersiz: Sınırın izin verdiğinden çok yerel aygıt " "tanımladınız" #, python-format msgid "Block Device Mapping is Invalid: failed to get image %(id)s." msgstr "Blok Aygıt Eşleştirmesi Geçersiz: imaj %(id)s alınamadı." #, python-format msgid "Block Device Mapping is Invalid: failed to get snapshot %(id)s." 
msgstr "Blok Aygıt Eşleştirmesi Geçersiz: Anlık görüntü %(id)s alınamadı." #, python-format msgid "Block Device Mapping is Invalid: failed to get volume %(id)s." msgstr "Blok Aygıt Eşleştirmesi Geçersiz: mantıksal sürücü %(id)s alınamadı." msgid "Block migration can not be used with shared storage." msgstr "Blok göçü paylaşılan hafıza ile kullanılamaz." msgid "Boot index is invalid." msgstr "Yükleme indeksi geçersiz." #, python-format msgid "Build of instance %(instance_uuid)s aborted: %(reason)s" msgstr "%(instance_uuid)s sunucusunun inşası iptal edildi: %(reason)s" #, python-format msgid "Build of instance %(instance_uuid)s was re-scheduled: %(reason)s" msgstr "%(instance_uuid)s sunucusunun inşası yeniden zamanlandı: %(reason)s" msgid "CPU and memory allocation must be provided for all NUMA nodes" msgstr "Tüm NUMA düğümleri için ayrılan CPU ve hafıza sağlanmalıdır" #, python-format msgid "" "CPU doesn't have compatibility.\n" "\n" "%(ret)s\n" "\n" "Refer to %(u)s" msgstr "" "CPU uyumluluğu yok. \n" " \n" " %(ret)s \n" " \n" " Bkz: %(u)s" #, python-format msgid "CPU number %(cpunum)d is assigned to two nodes" msgstr "%(cpunum)d CPU sayısı iki düğüme atanmış" #, python-format msgid "CPU number %(cpunum)d is larger than max %(cpumax)d" msgstr "CPU sayısı %(cpunum)d azami sayı %(cpumax)d den fazla" #, python-format msgid "CPU number %(cpuset)s is not assigned to any node" msgstr "CPU sayısı %(cpuset)s herhangi bir düğüme atanmamış" msgid "Can not find requested image" msgstr "İstenilen imaj dosyası bulunamadı" #, python-format msgid "Can not handle authentication request for %d credentials" msgstr "%d kimlik bilgileri için kimlik doğrulama isteği ele alınamadı" msgid "Can't resize a disk to 0 GB." msgstr "Bir disk 0 GB'ye boyutlandırılamaz." msgid "Can't resize down ephemeral disks." msgstr "Geçici disklerin boyutu küçültülemedi." msgid "Can't retrieve root device path from instance libvirt configuration" msgstr "Sunucu libvirt yapılandırmasından kök aygıt yolu alınamadı" #, python-format msgid "" "Cannot '%(action)s' instance %(server_id)s while it is in %(attr)s %(state)s" msgstr "" "%(attr)s %(state)s halindeyken sunucu %(server_id)s '%(action)s' yapılamaz" #, python-format msgid "" "Cannot access \"%(instances_path)s\", make sure the path exists and that you " "have the proper permissions. In particular Nova-Compute must not be executed " "with the builtin SYSTEM account or other accounts unable to authenticate on " "a remote host." msgstr "" "\"%(instances_path)s\"e erişilemiyor, yolun var olduğuna ve gerekli " "izinleriniz olduğuna emin olun. Nova-Hesaplama dahili SİSTEM hesabıyla ya da " "uzak istemciye doğrulama yapamayan diğer hesaplarla çalıştırılmamalıdır." #, python-format msgid "Cannot add host to aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "İstemci %(aggregate_id)s takımına eklenemiyor. Sebep: %(reason)s." msgid "Cannot attach one or more volumes to multiple instances" msgstr "" "Bir ya da daha fazla mantıksal sürücü birden fazla sunucuya eklenemiyor" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "Artık %(objtype)s objesi üzerinde %(method)s çağrılamaz" msgid "Cannot find SR of content-type ISO" msgstr "ISO içerik türünün SR'si bulunamıyor" msgid "Cannot find SR to read/write VDI." msgstr "VDI'ya okuma/yazma yapılırken SR(Saklama deposu) bulunamadı." msgid "Cannot find image for rebuild" msgstr "Yeniden kurulum için imaj dosyası bulunamadı." 
#, python-format msgid "Cannot remove host %(host)s in aggregate %(id)s" msgstr "%(id)s takımındaki %(host)s istemcisi çıkarılamıyor" #, python-format msgid "Cannot remove host from aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "İstemci %(aggregate_id)s takımından çıkarılamıyor. Sebep: %(reason)s." msgid "Cannot rescue a volume-backed instance" msgstr "Mantıksal sürücü destekli sunucu kurtarılamıyor" #, python-format msgid "Cannot update aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "%(aggregate_id)s takımı güncellenemiyor. Sebep: %(reason)s." #, python-format msgid "" "Cannot update metadata of aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "" "%(aggregate_id)s takımının metadata'sı güncellenemiyor. Sebep: %(reason)s." #, python-format msgid "Cell %(uuid)s has no mapping." msgstr "%(uuid)s hücresinin eşleştirmesi yok." #, python-format msgid "" "Change would make usage less than 0 for the following resources: %(unders)s" msgstr "Değişiklik şu kaynaklar için kullanımı 0 altında düşürür: %(unders)s" #, python-format msgid "Class %(class_name)s could not be found: %(exception)s" msgstr "%(class_name)s sınıfı bulunamadı: %(exception)s" #, python-format msgid "" "Command Not supported. Please use Ironic command %(cmd)s to perform this " "action." msgstr "" "Komut desteklenmiyor. Lütfen bu eylemi gerçekleştirmek için %(cmd)s Ironic " "komutunu kullanın." #, python-format msgid "Compute host %(host)s could not be found." msgstr "%(host)s hesaplama sunucusu bulunamadı." #, python-format msgid "Compute host %s not found." msgstr "Hesap istemcisi %s bulunamadı." #, python-format msgid "Compute service of %(host)s is still in use." msgstr "%(host)s hesaplama servisi hala kullanımda." #, python-format msgid "Compute service of %(host)s is unavailable at this time." msgstr "%(host)s hesaplama servisi şu an kullanılabilir değil." #, python-format msgid "" "Config requested an explicit CPU model, but the current libvirt hypervisor " "'%s' does not support selecting CPU models" msgstr "" "Yapılandırma belirli bir CPU modeli istedi, ama mevcut libvirt hipervizörü " "'%s' CPU modeli seçme desteklemiyor" #, python-format msgid "Connection to cinder host failed: %(reason)s" msgstr "Cinder istemcisine bağlantı başarısız: %(reason)s" #, python-format msgid "Connection to libvirt lost: %s" msgstr "libvirt bağlantısı kayboldu: %s" #, python-format msgid "Connection to the hypervisor is broken on host: %(host)s" msgstr "Hipervizör bağlantısı istemci üzerinde bozuk: %(host)s" msgid "Constraint not met." msgstr "Kısıtlama karşılanmadı." #, python-format msgid "Converted to raw, but format is now %s" msgstr "Ham şekle dönüştürüldü, ama biçim artık %s" #, python-format msgid "Could not attach image to loopback: %s" msgstr "İmaj geri dönüşe eklenemiyor: %s" #, python-format msgid "Could not fetch image %(image_id)s" msgstr "%(image_id)s imajı getirilemedi" #, python-format msgid "Could not find a handler for %(driver_type)s volume." msgstr "%(driver_type)s bölümü için bir işleyici bulunamadı." #, python-format msgid "Could not find binary %(binary)s on host %(host)s." msgstr "%(host)s sunucusunda %(binary)s ikilisi bulunamadı." #, python-format msgid "Could not find config at %(path)s" msgstr "%(path)s'deki yapılandırma bulunamadı" msgid "Could not find the datastore reference(s) which the VM uses." msgstr "VM'nin kullandığı veri deposu referansı(ları) bulunamadı." 
#, python-format msgid "Could not load line %(line)s, got error %(error)s" msgstr "%(line)s satırı yüklenemedi, %(error)s hatası alındı" #, python-format msgid "Could not load paste app '%(name)s' from %(path)s" msgstr "Yapıştırma uygulaması '%(name)s' %(path)s yolundan yüklenemedi" #, python-format msgid "" "Could not mount vfat config drive. %(operation)s failed. Error: %(error)s" msgstr "" "vfat yapılandırma sürücüsü bağlanamadı. %(operation)s başarısız. Hata: " "%(error)s" #, python-format msgid "Could not upload image %(image_id)s" msgstr "İmaj %(image_id)s yüklenemedi" msgid "Creation of virtual interface with unique mac address failed" msgstr "Benzersiz mac adresine sahip sanal arayüzün oluşturulması başarısız" #, python-format msgid "Datastore regex %s did not match any datastores" msgstr "Veridepolama düzenli ifadesi %s herhangi bir verideposuyla eşleşmedi" msgid "Datetime is in invalid format" msgstr "Datetime geçersiz biçimde" msgid "Default PBM policy is required if PBM is enabled." msgstr "PBM etkin ise varsayılan PBM ilkesi gerekir." #, python-format msgid "Deleted %(records)d records from table '%(table_name)s'." msgstr "'%(table_name)s' tablosundan %(records)d kayıt silindi." #, python-format msgid "" "Device id %(id)s specified is not supported by hypervisor version %(version)s" msgstr "" "Belirtilen aygıt kimliği %(id)s hipervizör sürüm %(version)s tarafından " "desteklenmiyor" msgid "Device name contains spaces." msgstr "Aygıt adı boşluk içeriyor." msgid "Device name empty or too long." msgstr "Aygıt adı boş veya çok uzun." #, python-format msgid "" "Different types in %(table)s.%(column)s and shadow table: %(c_type)s " "%(shadow_c_type)s" msgstr "" "%(table)s.%(column)s ve gölge tabloda değişik türler: %(c_type)s " "%(shadow_c_type)s" #, python-format msgid "Disk contains a filesystem we are unable to resize: %s" msgstr "Disk, yeniden boyutlandıramadığımız bir dosya sistemi içeriyor: %s" #, python-format msgid "Disk format %(disk_format)s is not acceptable" msgstr "%(disk_format)s disk formatı kabul edilemez." #, python-format msgid "Disk info file is invalid: %(reason)s" msgstr "Disk bilgi dosyası geçersiz: %(reason)s" msgid "Disk must have only one partition." msgstr "Diskin tek bir bölümü olmalı." #, python-format msgid "Disk with id: %s not found attached to instance." msgstr "id:%s diski sunucuya ekli şekilde bulunmadı." #, python-format msgid "Driver Error: %s" msgstr "Sürücü Hatası: %s" #, python-format msgid "" "Error destroying the instance on node %(node)s. Provision state still " "'%(state)s'." msgstr "" "%(node)s düğümündeki sunucu silinirken hata. Hazırlık durumu hala " "'%(state)s'." 
#, python-format msgid "Error during following call to agent: %(method)s" msgstr "Ajana yapılan şu çağrıda hata: %(method)s" #, python-format msgid "Error during unshelve instance %(instance_id)s: %(reason)s" msgstr "Askıdan almadan hata sunucu %(instance_id)s: %(reason)s" #, python-format msgid "" "Error from libvirt while getting domain info for %(instance_name)s: [Error " "Code %(error_code)s] %(ex)s" msgstr "" "%(instance_name)s için alan bilgisi alınırken libvirt hatası: [Hata Kodu " "%(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while looking up %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "%(instance_name)s aranırken libvirt hatası: [Hata Kodu %(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while quiescing %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "%(instance_name)s susturulurken libvirt hatası: [Hata Kodu %(error_code)s] " "%(ex)s" #, python-format msgid "" "Error mounting %(device)s to %(dir)s in image %(image)s with libguestfs " "(%(e)s)" msgstr "" "%(device)s %(dir)s e %(image)s imajında libguestfs (%(e)s) ile bağlanamıyor" #, python-format msgid "Error mounting %(image)s with libguestfs (%(e)s)" msgstr "%(image)s'in libguestfs (%(e)s) ile bağlanmasında hata" #, python-format msgid "Error when creating resource monitor: %(monitor)s" msgstr "Kaynak izleme oluşturulurken hata: %(monitor)s" msgid "Error: Agent is disabled" msgstr "Hata: Ajan kapalı" #, python-format msgid "Event %(event)s not found for action id %(action_id)s" msgstr "Olay %(event)s eylem kimliği %(action_id)s için bulunamadı" msgid "Event must be an instance of nova.virt.event.Event" msgstr "Olay bir nova.virt.event.Event örneği olmalı" #, python-format msgid "" "Exceeded max scheduling retries %(max_retries)d for instance " "%(instance_uuid)s during live migration" msgstr "" "Canlı göç sırasında %(instance_uuid)s sunucusu için azami zamanlama yeniden " "deneme sayısı %(max_retries)d aşıldı" #, python-format msgid "Expected a uuid but received %(uuid)s." msgstr "Bir uuid bekleniyordu ama %(uuid)s alındı." #, python-format msgid "Extra column %(table)s.%(column)s in shadow table" msgstr "Gölge tabloda ek sütun %(table)s.%(column)s" msgid "Extracting vmdk from OVA failed." msgstr "OVA'dan vmdk çıkarma başarısız." #, python-format msgid "Failed to access port %(port_id)s: %(reason)s" msgstr "%(port_id)s bağlantı noktasına erişim başarısız: %(reason)s" #, python-format msgid "" "Failed to add deploy parameters on node %(node)s when provisioning the " "instance %(instance)s" msgstr "" "%(instance)s sunucusu hazırlanırken %(node)s düğümü üzerinde açma " "parametreleri eklenemedi" #, python-format msgid "Failed to allocate the network(s) with error %s, not rescheduling." msgstr "Ağ(lar) ayırma %s hatasıyla başarısız, yeniden zamanlanmıyor." msgid "Failed to allocate the network(s), not rescheduling." msgstr "Ağ(lar) ayrılamadı, yeniden zamanlanmıyor." 
#, python-format msgid "Failed to attach network adapter device to %(instance_uuid)s" msgstr "Ağ bağdaştırıcısı aygıtı %(instance_uuid)s e eklenemedi" #, python-format msgid "Failed to deploy instance: %(reason)s" msgstr "Sunucu açılamadı: %(reason)s" #, python-format msgid "Failed to detach PCI device %(dev)s: %(reason)s" msgstr "PCI aygıtı %(dev)s ayrılamadı: %(reason)s" #, python-format msgid "Failed to detach network adapter device from %(instance_uuid)s" msgstr "Ağ bağdaştırıcısı aygıtı %(instance_uuid)s den ayrılamadı" #, python-format msgid "Failed to encrypt text: %(reason)s" msgstr "Metin şifrelenemedi: %(reason)s" #, python-format msgid "Failed to launch instances: %(reason)s" msgstr "Sunucular açılamadı: %(reason)s" #, python-format msgid "Failed to map partitions: %s" msgstr "Bölümler eşlenemiyor: %s" #, python-format msgid "Failed to mount filesystem: %s" msgstr "Dosya sistemi bağlanamadı: %s" msgid "Failed to parse information about a pci device for passthrough" msgstr "" "Düzgeçiş için bir pci aygıtıyla ilgili bilginin ayrıştırılması başarısız" #, python-format msgid "Failed to power off instance: %(reason)s" msgstr "Sunucu kapatılamadı: %(reason)s" #, python-format msgid "Failed to power on instance: %(reason)s" msgstr "Sunucu açılamadı: %(reason)s" #, python-format msgid "" "Failed to prepare PCI device %(id)s for instance %(instance_uuid)s: " "%(reason)s" msgstr "" "%(id)s PCI aygıtı %(instance_uuid)s sunucusu için hazırlanamadı: %(reason)s" #, python-format msgid "Failed to provision instance %(inst)s: %(reason)s" msgstr "%(inst)s sunucusu hazırlanamadı: %(reason)s" #, python-format msgid "Failed to read or write disk info file: %(reason)s" msgstr "Disk bilgi dosyası okuma ya da yazması başarısız: %(reason)s" #, python-format msgid "Failed to reboot instance: %(reason)s" msgstr "Sunucu yeniden başlatılamadı: %(reason)s" #, python-format msgid "Failed to remove volume(s): (%(reason)s)" msgstr "Mantıksal sürücü(ler) kaldırılamadı: (%(reason)s)" #, python-format msgid "Failed to request Ironic to rebuild instance %(inst)s: %(reason)s" msgstr "" "Ironic'e %(inst)s sunucusunu yeniden inşa etmesi isteği başarısız: %(reason)s" #, python-format msgid "Failed to resume instance: %(reason)s" msgstr "Sunucu sürdürülemedi: %(reason)s" #, python-format msgid "Failed to run qemu-img info on %(path)s : %(error)s" msgstr "%(path)s üzerinde qemu-img info çalıştırılamadı: %(error)s" #, python-format msgid "Failed to set admin password on %(instance)s because %(reason)s" msgstr "%(reason)s yüzünden %(instance)s üzerinde parola ayarlanamadı" msgid "Failed to spawn, rolling back" msgstr "Oluşturma başarısız, geri alınıyor" #, python-format msgid "Failed to suspend instance: %(reason)s" msgstr "Sunucu askıya alınamadı: %(reason)s" #, python-format msgid "Failed to terminate instance: %(reason)s" msgstr "Sunucu sonlandırılamadı: %(reason)s" msgid "Failure prepping block device." msgstr "Blok aygıt hazırlama başarısız." #, python-format msgid "File %(file_path)s could not be found." msgstr "%(file_path)s dosyası bulunamadı." #, python-format msgid "File path %s not valid" msgstr "Dosya yolu %s geçerli değil" #, python-format msgid "Fixed IP %(ip)s is not a valid ip address for network %(network_id)s." msgstr "Sabit IP %(ip)s %(network_id)s için gereçli bir ip adresi değil." #, python-format msgid "Fixed IP %s is already in use." msgstr "Sabit IP %s zaten kullanılıyor." #, python-format msgid "" "Fixed IP address %(address)s is already in use on instance %(instance_uuid)s." 
msgstr "" "Sabit IP adresi %(address)s %(instance_uuid)s sunucusu üzerinde zaten " "kullanımda." #, python-format msgid "Flavor %(flavor_id)s could not be found." msgstr "%(flavor_id)s örnek türü bulunamadı." #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(extra_specs_key)s." msgstr "" "%(flavor_id)s niteliğinin %(extra_specs_key)s anahtarına sahip ek özelliği " "yok." #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(key)s." msgstr "%(flavor_id)s niteliğinin %(key)s anahtarına sahip ek özelliği yok." #, python-format msgid "" "Flavor access already exists for flavor %(flavor_id)s and project " "%(project_id)s combination." msgstr "" "Nitelik erişimi %(flavor_id)s nitelik ve %(project_id)s proje katışımı için " "zaten mevcut." #, python-format msgid "Flavor access not found for %(flavor_id)s / %(project_id)s combination." msgstr "" "Nitelik erişimi %(flavor_id)s / %(project_id)s katışımı için bulunamadı." msgid "Flavor used by the instance could not be found." msgstr "Sunucu tarafından kullanılan nitelik bulunamadı." #, python-format msgid "Flavor with ID %(flavor_id)s already exists." msgstr "%(flavor_id)s kimliğine sahip nitelik zaten mevcut." #, python-format msgid "Flavor with name %(flavor_name)s could not be found." msgstr "%(flavor_name)s ismine sahip nitelik bulunamadı." #, python-format msgid "Flavor with name %(name)s already exists." msgstr "%(name)s ismine sahip nitelik zaten mevcut." msgid "Flavor's memory is too small for requested image." msgstr "Niteliğin hafızası istenen imaj için çok küçük." #, python-format msgid "Floating IP %(address)s association has failed." msgstr "Değişken IP %(address)s ilişkilendirmesi başarısız." msgid "" "Forbidden to exceed flavor value of number of serial ports passed in image " "meta." msgstr "" "İmaj metasına geçirilen nitelik seri bağlantı noktası sayısı değerini " "geçmeye izin verilmez." msgid "Found no disk to snapshot." msgstr "Anlık görüntüsü alınacak disk bulunamadı." #, python-format msgid "Found no network for bridge %s" msgstr "Köprü %s için ağ bulunamadı" #, python-format msgid "Found non-unique network for bridge %s" msgstr "Köprü %s için benzersiz olmayan ağ bulundu" #, python-format msgid "Found non-unique network for name_label %s" msgstr "name_label %s için benzersiz olmayan ağ bulundu" #, python-format msgid "Host %(host)s could not be found." msgstr "%(host)s sunucusu bulunamadı." msgid "Host PowerOn is not supported by the Hyper-V driver" msgstr "İstemci GüçAçma Hyper-V sürücüsü tarafından desteklenmiyor" msgid "Host aggregate is not empty" msgstr "İstemci takımı boş değil" msgid "Host does not support guests with NUMA topology set" msgstr "İstemci NUMA toploji kümesine sahip konukları desteklemiyor" msgid "Host does not support guests with custom memory page sizes" msgstr "İstemci özel hafıza sayfa boyutlarına sahip konukları desteklemiyor" msgid "Host startup on XenServer is not supported." msgstr "XenSunucu üzerinde istemci başlatma desteklenmiyor." msgid "Hypervisor driver does not support post_live_migration_at_source method" msgstr "" "Hipervizör sürücüsü post_live_migration_at_source yöntemini desteklemiyor" #, python-format msgid "Hypervisor virt type '%s' is not valid" msgstr "Hipervizör sanallaştırma türü '%s' geçerli değil" #, python-format msgid "Hypervisor virtualization type '%(hv_type)s' is not recognised" msgstr "Hipervizör sanallaştırma türü '%(hv_type)s' tanınmıyor" #, python-format msgid "Hypervisor with ID '%s' could not be found." 
msgstr "'%s' kimlikli hipervizör bulunamadı." #, python-format msgid "IP allocation over quota in pool %s." msgstr "%s havuzundaki IP ayırma kota üzerinde." msgid "IP allocation over quota." msgstr "IP ayırma kota üzerinde." #, python-format msgid "Image %(image_id)s could not be found." msgstr "%(image_id)s imaj kaynak dosyası bulunamadı." #, python-format msgid "Image %(image_id)s is not active." msgstr "İmaj %(image_id)s etkin değil." #, python-format msgid "Image %(image_id)s is unacceptable: %(reason)s" msgstr "%(image_id)s imajı kabul edilemez: %(reason)s" msgid "Image disk size greater than requested disk size" msgstr "İmaj disk boyutu istenen disk boyutundan büyük" msgid "Image is not raw format" msgstr "İmaj ham biçim değil" msgid "Image metadata limit exceeded" msgstr "İmaj üstveri sınırı aşıldı" #, python-format msgid "Image model '%(image)s' is not supported" msgstr "İmaj modeli '%(image)s' desteklenmiyor" msgid "Image not found." msgstr "İmaj bulunamadı" #, python-format msgid "" "Image property '%(name)s' is not permitted to override NUMA configuration " "set against the flavor" msgstr "" "İmaj özelliği '%(name)s' NUMA yapılandırma kümesini nitelikte belirtilene " "karşı yazma hakkına sahip değil" msgid "" "Image property 'hw_cpu_policy' is not permitted to override CPU pinning " "policy set against the flavor" msgstr "" "İmaj özelliği 'hw_cpu_policy' niteliğe karşı ayarlanmış CPU iğneleme " "ilkesini ezme iznine sahip değil" msgid "Image that the instance was started with could not be found." msgstr "Sunucunun başlatıldığı imaj bulunamadı." #, python-format msgid "Image's config drive option '%(config_drive)s' is invalid" msgstr "İmajın yapılandırma sürücüsü seçeneği '%(config_drive)s' geçersiz" msgid "" "Images with destination_type 'volume' need to have a non-zero size specified" msgstr "" "Hedef türü 'mantıksal sürücü' olan imajların boyutu sıfırdan farklı " "belirtilmelidir" msgid "In ERROR state" msgstr "HATA durumunda" #, python-format msgid "In states %(vm_state)s/%(task_state)s, not RESIZED/None" msgstr "%(vm_state)s/%(task_state)s durumu içinde, RESIZED/None değil" msgid "" "Incompatible settings: ephemeral storage encryption is supported only for " "LVM images." msgstr "" "Uyumsuz ayarlar: geçici depolama şifreleme yalnızca LVM imajlarında " "desteklenir." #, python-format msgid "Info cache for instance %(instance_uuid)s could not be found." msgstr "%(instance_uuid)s sunucusu için bilgi zulası bulunamadı." #, python-format msgid "" "Instance %(instance)s and volume %(vol)s are not in the same " "availability_zone. Instance is in %(ins_zone)s. Volume is in %(vol_zone)s" msgstr "" "Sunucu %(instance)s ve mantıksal sürücü %(vol)s aynı kullanılabilir bölgede " "değil. Sunucu %(ins_zone)s içinde. Mantıksal sürücüler %(vol_zone)s" #, python-format msgid "Instance %(instance)s does not have a port with id %(port)s" msgstr "" "%(instance)s sunucusu %(port)s kimliğine sahip bir bağlantı noktasına sahip " "değil" #, python-format msgid "Instance %(instance_id)s cannot be rescued: %(reason)s" msgstr "Sunucu %(instance_id)s kurtarılamıyor: %(reason)s" #, python-format msgid "Instance %(instance_id)s could not be found." msgstr "%(instance_id)s örneği bulunamadı." 
#, python-format msgid "Instance %(instance_id)s has no tag '%(tag)s'" msgstr "%(instance_id)s sunucusunun '%(tag)s' etiketi yok" #, python-format msgid "Instance %(instance_id)s is not in rescue mode" msgstr "%(instance_id)s örneği kurtarma modunda değil" #, python-format msgid "Instance %(instance_id)s is not ready" msgstr "Sunucu %(instance_id)s hazır değil" #, python-format msgid "Instance %(instance_id)s is not running." msgstr "%(instance_id)s örneği çalışmıyor." #, python-format msgid "Instance %(instance_id)s is unacceptable: %(reason)s" msgstr "%(instance_id)s örneği kabul edilemez: %(reason)s" #, python-format msgid "Instance %(instance_uuid)s does not specify a NUMA topology" msgstr "%(instance_uuid)s sunucusu bir NUMA topolojisi belirtmiyor" #, python-format msgid "" "Instance %(instance_uuid)s in %(attr)s %(state)s. Cannot %(method)s while " "the instance is in this state." msgstr "" "%(attr)s %(state)s 'deki %(instance_uuid)s örneği. Örnek bu durumda iken " "%(method)s yapılamaz." #, python-format msgid "Instance %(instance_uuid)s is locked" msgstr "Sunucu %(instance_uuid)s kilitli" #, python-format msgid "Instance %(name)s already exists." msgstr "%(name)s örneği zaten var." #, python-format msgid "Instance %(server_id)s is in an invalid state for '%(action)s'" msgstr "%(server_id)s sunucusu '%(action)s' eylemi için geçersiz bir durumda" #, python-format msgid "Instance %(uuid)s has no mapping to a cell." msgstr "%(uuid)s sunucusunun bir hücreye eşlemi yok." #, python-format msgid "Instance %s not found" msgstr "Sunucu %s bulunamadı" #, python-format msgid "Instance %s provisioning was aborted" msgstr "Sunucu %s hazırlığı iptal edildi" msgid "Instance could not be found" msgstr "Örnek bulunamadı." msgid "Instance disk to be encrypted but no context provided" msgstr "Sunucu diski şifrelenecek ama içerik sağlanmamış" msgid "Instance event failed" msgstr "Sunucu olayı başarısız" #, python-format msgid "Instance group %(group_uuid)s already exists." msgstr "Sunucu grubu %(group_uuid)s zaten mevcut." #, python-format msgid "Instance group %(group_uuid)s could not be found." msgstr "Sunucu grubu %(group_uuid)s bulunamadı." msgid "Instance has no source host" msgstr "Sunucunun kaynak istemcisi yok" msgid "Instance has not been resized." msgstr "Örnek tekrar boyutlandırılacak şekilde ayarlanmadı." #, python-format msgid "Instance is already in Rescue Mode: %s" msgstr "Sunucu zaten Kurtarma Kipinde: %s" msgid "Instance is not a member of specified network" msgstr "Örnek belirlenmiş ağın bir üyesi değil" #, python-format msgid "Instance rollback performed due to: %s" msgstr "Sunucu geri döndürme şu sebepten yapıldı: %s" #, python-format msgid "Insufficient compute resources: %(reason)s." msgstr "Yetersiz hesaplama kaynağı: %(reason)s." #, python-format msgid "Insufficient free memory on compute node to start %(uuid)s." msgstr "%(uuid)s hesaplama düğümü başlatmada yetersiz boş hafıza." #, python-format msgid "Interface %(interface)s not found." msgstr "%(interface)s arayüzü bulunamadı." #, python-format msgid "Invalid Base 64 data for file %(path)s" msgstr "%(path)s dosyası için geçersiz base 64 verisi" msgid "Invalid Connection Info" msgstr "Geçersiz Bağlantı Bilgisi" #, python-format msgid "Invalid ID received %(id)s." msgstr "Geçersiz ID alındı %(id)s." #, python-format msgid "Invalid IP format %s" msgstr "Geçersiz IP biçimi %s" #, python-format msgid "Invalid IP protocol %(protocol)s." msgstr "Geçersiz IP %(protocol)s." 
msgid "" "Invalid PCI Whitelist: The PCI whitelist can specify devname or address, but " "not both" msgstr "" "Geçersiz PCI Beyaz listesi: PCI beeyaz listesi aygıt ismi ya da adresi " "belirtebilir, ama ikisini birden belirtemez" #, python-format msgid "Invalid PCI alias definition: %(reason)s" msgstr "Geçersiz PCI rumuzu tanımı: %(reason)s" #, python-format msgid "Invalid Regular Expression %s" msgstr "Geçersiz Düzenli İfade %s" #, python-format msgid "Invalid characters in hostname '%(hostname)s'" msgstr "Ana makine adında geçersiz karakter '%(hostname)s'" msgid "Invalid config_drive provided." msgstr "Sağlanan yapılandırma_sürücüsü geçersiz." #, python-format msgid "Invalid config_drive_format \"%s\"" msgstr "Geçersiz config_drive_format \"%s\"" #, python-format msgid "Invalid console type %(console_type)s" msgstr "Geçersiz konsol türü %(console_type)s" #, python-format msgid "Invalid content type %(content_type)s." msgstr "Geçersiz içerik türü %(content_type)s." #, python-format msgid "Invalid datetime string: %(reason)s" msgstr "Geçersiz datetime karakter dizisi: %(reason)s" msgid "Invalid device UUID." msgstr "Geçersiz aygıt UUID." #, python-format msgid "Invalid entry: '%s'" msgstr "Geçersiz girdi: '%s'" #, python-format msgid "Invalid entry: '%s'; Expecting dict" msgstr "Geçersiz girdi: '%s'; Sözlük bekleniyordu" #, python-format msgid "Invalid entry: '%s'; Expecting list or dict" msgstr "Geçersiz girdi: '%s'. liste veya sözlük bekleniyor" #, python-format msgid "Invalid exclusion expression %r" msgstr "Geçersiz dışlama ifadesi %r" #, python-format msgid "Invalid image format '%(format)s'" msgstr "Geçersiz imaj biçimi '%(format)s'" #, python-format msgid "Invalid image href %(image_href)s." msgstr "Geçersiz %(image_href)s imaj kaynak dosyası." #, python-format msgid "Invalid inclusion expression %r" msgstr "Geçersiz dahil etme ifadesi %r" #, python-format msgid "" "Invalid input for field/attribute %(path)s. Value: %(value)s. %(message)s" msgstr "" "Alan/öznitelik %(path)s için geçersiz girdi. Değer: %(value)s. %(message)s" #, python-format msgid "Invalid input received: %(reason)s" msgstr "Geçersiz girdi alındı: %(reason)s" msgid "Invalid instance image." msgstr "Geçersiz sunucu imajı." #, python-format msgid "Invalid is_public filter [%s]" msgstr "Geçersiz is_public filtresi [%s]" msgid "Invalid key_name provided." msgstr "Geçersiz anahtar adı verildi." #, python-format msgid "Invalid memory page size '%(pagesize)s'" msgstr "Geçersiz hafıza sayfa boyutu '%(pagesize)s'" msgid "Invalid metadata key" msgstr "Geçersiz özellik anahtarı" #, python-format msgid "Invalid metadata size: %(reason)s" msgstr "Geçersiz metadata boyutu: %(reason)s" #, python-format msgid "Invalid metadata: %(reason)s" msgstr "Geçersiz metadata: %(reason)s" #, python-format msgid "Invalid minDisk filter [%s]" msgstr "Geçersiz minDisk filtresi [%s]" #, python-format msgid "Invalid minRam filter [%s]" msgstr "geçersiz minRam filtresi [%s]" #, python-format msgid "Invalid port range %(from_port)s:%(to_port)s. %(msg)s" msgstr "Geçersiz port aralığı %(from_port)s:%(to_port)s. %(msg)s" msgid "Invalid proxy request signature." msgstr "Geçersiz vekil istek imzası." #, python-format msgid "Invalid range expression %r" msgstr "Geçersiz aralık ifadesi %r" msgid "Invalid service catalog json." msgstr "Geçersiz servis kataloğu json'u." msgid "Invalid start time. The start time cannot occur after the end time." msgstr "" "Geçersiz başlangıç zamanı. Başlangıç zamanı bitiş zamanından sonra olamaz." 
msgid "Invalid state of instance files on shared storage" msgstr "Paylaşımlı depolamada geçersiz sunucu durumu dosyaları" #, python-format msgid "Invalid timestamp for date %s" msgstr "%s tarihi için geçersiz zaman damgası" #, python-format msgid "Invalid usage_type: %s" msgstr "Geçersiz usage_type: %s" #, python-format msgid "Invalid value for Config Drive option: %(option)s" msgstr "Yapılandırma Sürücüsü seçeneği için geçersiz değer: %(option)s" #, python-format msgid "Invalid virtual interface address %s in request" msgstr "İstekte geçersiz sanal arayüz adresi %s" #, python-format msgid "Invalid volume access mode: %(access_mode)s" msgstr "Geçersiz mantıksal sürücü erişim kipi: %(access_mode)s" #, python-format msgid "Invalid volume: %(reason)s" msgstr "Geçersiz mantıksal sürücü: %(reason)s" msgid "Invalid volume_size." msgstr "Geçersiz volume_size." #, python-format msgid "Ironic node uuid not supplied to driver for instance %s." msgstr "Ironic düğümü uuid'si %s sunucusu için sürücüye sağlanmamış." #, python-format msgid "" "It is not allowed to create an interface on external network %(network_uuid)s" msgstr "Harici ağ %(network_uuid)s üzerinde arayüz oluşturmaya izin verilmiyor" #, python-format msgid "" "Kernel/Ramdisk image is too large: %(vdi_size)d bytes, max %(max_size)d bytes" msgstr "" "Çekirdek/Ramdisk imajı çok büyük: %(vdi_size)d bayt, azami %(max_size)d bayt" msgid "" "Key Names can only contain alphanumeric characters, periods, dashes, " "underscores, colons and spaces." msgstr "" "Anahtar İsimleri yalnızca alfanümerik karakterler, nokta, tire, alt çizgi, " "sütun ve boşluk içerebilir." #, python-format msgid "Key manager error: %(reason)s" msgstr "Anahtar yöneticisi hatası: %(reason)s" #, python-format msgid "Key pair '%(key_name)s' already exists." msgstr "Anahtar çifti '%(key_name)s' zaten mevcut." #, python-format msgid "Keypair %(name)s not found for user %(user_id)s" msgstr "%(user_id)s kullanıcısı için %(name)s anahtar çifti bulunamadı" #, python-format msgid "Keypair data is invalid: %(reason)s" msgstr "Anahtar çifti verisi geçersiz: %(reason)s" msgid "Keypair name contains unsafe characters" msgstr "Anahtar çifti ismi güvensiz karakterler içeriyor" msgid "Keypair name must be string and between 1 and 255 characters long" msgstr "" "Anahtar çifti adı karakter dizisi ve 1 ve 255 karakter uzunluğu arasında " "olmalıdır" #, python-format msgid "Malformed message body: %(reason)s" msgstr "Hatalı biçimlendirilmiş mesaj gövdesi: %(reason)s" #, python-format msgid "" "Malformed request URL: URL's project_id '%(project_id)s' doesn't match " "Context's project_id '%(context_project_id)s'" msgstr "" "Bozuk istek URL'si: URL'nin proje_id'si '%(project_id)s' İçeriğin " "proje_id'si '%(context_project_id)s' ile eşleşmiyor" msgid "Malformed request body" msgstr "Kusurlu istek gövdesi" msgid "Mapping image to local is not supported." msgstr "İmajın yerele eşleştirilmesi desteklenmiyor." #, python-format msgid "Marker %(marker)s could not be found." msgstr "İşaretçi %(marker)s bulunamadı." 
msgid "Maximum number of key pairs exceeded" msgstr "Azami anahtar çifti sayısı aşıldı" #, python-format msgid "Maximum number of metadata items exceeds %(allowed)d" msgstr "Azami metadata öğesi sayısı %(allowed)d sayısını aşıyor" msgid "Maximum number of ports exceeded" msgstr "Azami bağlantı noktası sayısı aşıldı" msgid "Maximum number of security groups or rules exceeded" msgstr "Azami güvenlik grubu veya kural sayısı aşıldı" msgid "Metadata item was not found" msgstr "İçerik özelliği bilgisi bulunamadı" msgid "Metadata property key greater than 255 characters" msgstr "Metadata özellik anahtarı 255 karakterden büyük" msgid "Metadata property value greater than 255 characters" msgstr "Metadata özellik değeri 255 karakterden büyük" msgid "Metadata type should be dict." msgstr "Metadata türü sözlük olmalı." #, python-format msgid "" "Metric %(name)s could not be found on the compute host node %(host)s." "%(node)s." msgstr "" "%(name)s ölçüsü %(host)s.%(node)s hesaplama istemci düğümünde bulunamadı." msgid "Migrate Receive failed" msgstr "Göç Alma başarısız" msgid "Migrate Send failed" msgstr "Göç Gönderme başarısız" #, python-format msgid "Migration %(migration_id)s could not be found." msgstr "%(migration_id)s göçü bulunamadı." #, python-format msgid "Migration error: %(reason)s" msgstr "Göç hatası: %(reason)s" msgid "Migration is not supported for LVM backed instances" msgstr "LVM destekli sunucularda göç desteklenmiyor" #, python-format msgid "" "Migration not found for instance %(instance_id)s with status %(status)s." msgstr "%(status)s durumuyla %(instance_id)s örneği için göç bulunamadı." #, python-format msgid "Migration pre-check error: %(reason)s" msgstr "Göç ön-kontrol hatası: %(reason)s" #, python-format msgid "Missing arguments: %s" msgstr "Eksik bağımsız değişken: %s" #, python-format msgid "Missing column %(table)s.%(column)s in shadow table" msgstr "Gölge tabloda eksik sütun %(table)s.%(column)s" msgid "Missing device UUID." msgstr "Eksik aygıt UUID." msgid "Missing disabled reason field" msgstr "Kapatılma sebebi alanı eksik" msgid "Missing imageRef attribute" msgstr "İmaj referans özelliği eksik" #, python-format msgid "Missing keys: %s" msgstr "Eksik anahtarlar: %s" #, python-format msgid "Missing parameter %s" msgstr "Eksik parametre %s" msgid "Missing parameter dict" msgstr "Parametre dizini eksik" msgid "" "More than one possible network found. Specify network ID(s) to select which " "one(s) to connect to." msgstr "" "Birden fazla olası ağ bulundu. Hangilerine bağlanılacağını seçmek için ağ " "ID(leri) belirtin." msgid "More than one swap drive requested." msgstr "Birden fazla swap sürücü istendi." #, python-format msgid "Multi-boot operating system found in %s" msgstr "%s içinde birden fazla ön yüklemeli işletim sistemi bulundu" msgid "Multiple X-Instance-ID headers found within request." msgstr "İstekte birden fazla X-Instance-ID başlığı bulundu." msgid "Multiple X-Tenant-ID headers found within request." msgstr "İstekte birden fazla X-Tenant-ID başlığı bulundu." #, python-format msgid "Multiple floating IP pools matches found for name '%s'" msgstr "'%s' ismi için birden fazla değişken IP havuzu eşleşmesi bulundu" msgid "" "Multiple hosts may be managed by the VMWare vCenter driver; therefore we do " "not return uptime for just one host." msgstr "" "Birden fazla istemci VMWare vCenter sürücüsü tarafından yönetiliyor " "olabilir; bu yüzden tek bir istemci için açık kalma süresi döndürmüyoruz." 
msgid "Multiple possible networks found, use a Network ID to be more specific." msgstr "" "Birden fazla olası ağ bulundu, daha belirli olmak için Ağ Kimliği kullanın." #, python-format msgid "" "Multiple security groups found matching '%s'. Use an ID to be more specific." msgstr "" "'%s' ile eşleşen birden fazla güvenlik grubu bulundu. Daha özgü olmak için " "bir ID kullanın." msgid "Must input network_id when request IP address" msgstr "IP adresi istendiğinde network_id girdisi verilmelidir" msgid "Must not input both network_id and port_id" msgstr "Hem network_id hem port_id verilmemelidir" msgid "" "Must specify connection_url, connection_username (optionally), and " "connection_password to use compute_driver=xenapi.XenAPIDriver" msgstr "" "compute_driver=xenapi.XenAPIDriver kullanmak için bağlantı_url'si, " "bağlantı_kullanıcıadı (isteğe bağlı), ve bağlantı parolası belirtilmeli" msgid "" "Must specify host_ip, host_username and host_password to use vmwareapi." "VMwareVCDriver" msgstr "" "vmwareapi.VMwareVCDriver kullanmak için host_ip, host_username ve " "host_password belirtilmeli" msgid "Must supply a positive value for max_rows" msgstr "max_rows için pozitif bir değer verilmeli" #, python-format msgid "Network %(network_id)s could not be found." msgstr "%(network_id)s ağı bulunamadı." #, python-format msgid "" "Network %(network_uuid)s requires a subnet in order to boot instances on." msgstr "" "Ağ %(network_uuid)s sunucuları başlatmak için alt ağlara ihtiyaç duyar." #, python-format msgid "Network could not be found for bridge %(bridge)s" msgstr " %(bridge)s köprüsü için ağ bulunamadı." #, python-format msgid "Network could not be found for instance %(instance_id)s." msgstr "%(instance_id)s örneği için ağ bulunamadı." msgid "Network not found" msgstr "Ağ bulunamadı" msgid "" "Network requires port_security_enabled and subnet associated in order to " "apply security groups." msgstr "" "Güvenlik gruplarını uygulamak için ağın port_security_enabled olmasına ve " "alt ağın ilişkilendirilmesine ihtiyaç var." msgid "New volume must be detached in order to swap." msgstr "" "Yeni mantıksal sürücünün değiştirilebilmesi için ayrılmış olması gerekir." msgid "New volume must be the same size or larger." msgstr "Yeni mantıksal sürücü aynı boyutta ya da daha büyük olmalı." #, python-format msgid "No Block Device Mapping with id %(id)s." msgstr "%(id)s kimliğine sahip bir Blok Aygıt Eşleştirmesi yok." msgid "No Unique Match Found." msgstr "Benzersiz Eşleşme Bulunamadı." #, python-format msgid "No agent-build associated with id %(id)s." msgstr "Hiçbir ajan-inşası id %(id)s ile ilişkilendirilmemiş." msgid "No compute host specified" msgstr "Hesap istemcisi belirtilmedi" #, python-format msgid "No device with MAC address %s exists on the VM" msgstr "VM'de %s MAC adresine sahip bir aygıt yok" #, python-format msgid "No device with interface-id %s exists on VM" msgstr "VM'de interface-id %s'a ait bir aygıt yok" #, python-format msgid "No disk at %(location)s" msgstr "%(location)s'da disk yok." #, python-format msgid "No fixed IP addresses available for network: %(net)s" msgstr "Şu ağ için kullanılabilir sabit UP adresleri yok: %(net)s" msgid "No free nbd devices" msgstr "Boş nbd aygıtı yok" msgid "No host available on cluster" msgstr "Kümede kullanılabilir istemci yok" #, python-format msgid "No hypervisor matching '%s' could be found." msgstr "'%s' ile eşleşen bir hipervizör bulunamadı." 
msgid "No image locations are accessible" msgstr "İmaj konumları erişilebilir değil" #, python-format msgid "No mount points found in %(root)s of %(image)s" msgstr "%(image)s %(root)s içinde bağlantı noktası bulunmadı" #, python-format msgid "No operating system found in %s" msgstr "%s içinde işletim sistemi bulunamadı" #, python-format msgid "No primary VDI found for %s" msgstr "%s için birincil VDI bulunamadı" msgid "No root disk defined." msgstr "Kök disk tanımlanmamış." msgid "No suitable network for migrate" msgstr "Göç için uygun bir ağ yok" msgid "No valid host found for cold migrate" msgstr "Soğuk göç için geçerli bir istemci bulunamadı" msgid "No valid host found for resize" msgstr "Yeniden boyutlama için geçerli bir istemci bulunamadı" #, python-format msgid "No valid host was found. %(reason)s" msgstr "Geçerli bir sunucu bulunamadı: %(reason)s" #, python-format msgid "No volume Block Device Mapping at path: %(path)s" msgstr "Şu yolda mantıksal sürücü Blok Aygıt eşleştirmesi yok: %(path)s" #, python-format msgid "No volume Block Device Mapping with id %(volume_id)s." msgstr "" "%(volume_id)s kimliğine sahip mantıksal sürücü Blok Aygıt Eşleştirmesi yok." #, python-format msgid "Node %s could not be found." msgstr "Düğüm %s bulunamadı." #, python-format msgid "Not able to acquire a free port for %(host)s" msgstr "%(host)s için boş bir bağlantı noktası edinilemedi" #, python-format msgid "Not able to bind %(host)s:%(port)d, %(error)s" msgstr "%(host)s:%(port)d bağlanamadı, %(error)s" msgid "Not an rbd snapshot" msgstr "Rbd anlık görüntüsü değil" #, python-format msgid "Not authorized for image %(image_id)s." msgstr "İmaj %(image_id)s için yetkilendirilmemiş." msgid "Not authorized." msgstr "Yetkiniz yok." msgid "Not enough parameters to build a valid rule." msgstr "Geçerli bir kuralı oluşturmak için yeterli parametre yok." msgid "Not implemented on Windows" msgstr "Windows'da uygulanmamış" msgid "Not stored in rbd" msgstr "Rbd'de depolanmadı" #, python-format msgid "Nova requires libvirt version %s or greater." msgstr "Nova libvirt sürüm %s ya da daha yenisine ihtiyaç duyar." #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "Nesne eylemi %(action)s başarısız çünkü: %(reason)s" msgid "Old volume is attached to a different instance." msgstr "Eski mantıksal sürücü başka bir sunucuya eklenmiş." #, python-format msgid "One or more hosts already in availability zone(s) %s" msgstr "Bir ya da daha fazla istemci zaten kullanılabilir bölge(ler)de %s" msgid "Only administrators may list deleted instances" msgstr "Yalnızca yöneticiler silinen sunucuları listeleyebilir" #, python-format msgid "" "Only file-based SRs (ext/NFS) are supported by this feature. SR %(uuid)s is " "of type %(type)s" msgstr "" "Bu özellik tarafından yalnızca dosya-tabanlı SR'ler (ext/NFS) desteklenir. " "SR %(uuid)s ise %(type)s türünde" msgid "Origin header does not match this host." msgstr "Kaynak başlık bu istemciyle eşleşmiyor." msgid "Origin header not valid." msgstr "Kaynak başlık geçerli değil." msgid "Origin header protocol does not match this host." msgstr "Kaynak başlık iletişim kuralı bu istemciyle eşleşmiyor." #, python-format msgid "PCI Device %(node_id)s:%(address)s not found." msgstr "PCI Aygıtı %(node_id)s:%(address)s bulunamadı." 
#, python-format msgid "PCI alias %(alias)s is not defined" msgstr "PCI rumuzu %(alias)s tanımlanmamış" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is %(status)s instead of " "%(hopestatus)s" msgstr "" "PCI aygıtı %(compute_node_id)s:%(address)s %(hopestatus)s olacağına " "%(status)s" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is owned by %(owner)s instead of " "%(hopeowner)s" msgstr "" "PCI aygıtı %(compute_node_id)s:%(address)s %(hopeowner)s yerine %(owner)s " "tarafından sahiplenilmiş" #, python-format msgid "PCI device %(id)s not found" msgstr "PCI aygıtı %(id)s bulunamadı" #, python-format msgid "PIF %s does not contain IP address" msgstr "PIF %s IP adresi içermiyor" #, python-format msgid "Page size %(pagesize)s forbidden against '%(against)s'" msgstr "Sayfa boyutu %(pagesize)s '%(against)s' e karşı yasaklı" #, python-format msgid "Path %s must be LVM logical volume" msgstr "Yol %s LVM mantıksal sürücüsü olmalı" msgid "Paused" msgstr "Durduruldu" msgid "Personality file limit exceeded" msgstr "Kişisel dosya limiti aşıldı" #, python-format msgid "Physical network is missing for network %(network_uuid)s" msgstr "%(network_uuid)s ağı için fiziksel ağ eksik" #, python-format msgid "Policy doesn't allow %(action)s to be performed." msgstr "%(action)s uygulanmasına izin verilmiyor." #, python-format msgid "Port %(port_id)s is still in use." msgstr "Bağlantı noktası %(port_id)s hala kullanımda." #, python-format msgid "Port %(port_id)s not usable for instance %(instance)s." msgstr "" "Bağlantı noktası %(port_id)s %(instance)s sunucusu için kullanılabilir değil." #, python-format msgid "Port %(port_id)s requires a FixedIP in order to be used." msgstr "" "Bağlantı noktası %(port_id)s kullanılabilmek için bir SabitIP'ye ihtiyaç " "duyuyor." #, python-format msgid "Port %s is not attached" msgstr "Bağlantı noktası %s eklenmemiş" #, python-format msgid "Port id %(port_id)s could not be found." msgstr "Bağlantı noktası kimliği %(port_id)s bulunamadı." #, python-format msgid "Provided video model (%(model)s) is not supported." msgstr "Sağlanan video modeli (%(model)s) desteklenmiyor." #, python-format msgid "Provided watchdog action (%(action)s) is not supported." msgstr "Sağlanan watchdog eylemi (%(action)s) desteklenmiyor." msgid "QEMU guest agent is not enabled" msgstr "QEMU konuk ajan etkin değil" #, python-format msgid "Quota class %(class_name)s could not be found." msgstr "Kota sınıfı %(class_name)s bulunamadı." msgid "Quota could not be found" msgstr "Kota bulunamadı." #, python-format msgid "Quota exceeded for resources: %(overs)s" msgstr "Kaynaklar için kota aşıldı: %(overs)s" msgid "Quota exceeded, too many key pairs." msgstr "Kota aşıldı, çok fazla anahtar çifti." msgid "Quota exceeded, too many server groups." msgstr "Kota aşıldı, çok fazla sunucu grubu." msgid "Quota exceeded, too many servers in group" msgstr "Kota aşıldı, grupta çok fazla sunucu var" #, python-format msgid "Quota exceeded: code=%(code)s" msgstr "Kota aşıldı: kod=%(code)s" #, python-format msgid "Quota exists for project %(project_id)s, resource %(resource)s" msgstr "Kota %(project_id)s projesi, %(resource)s kaynağı için mevcut" #, python-format msgid "Quota for project %(project_id)s could not be found." msgstr "%(project_id)s projesi için bir kota bulunamadı." #, python-format msgid "" "Quota for user %(user_id)s in project %(project_id)s could not be found." msgstr "" "%(project_id)s projesindeki %(user_id)s kullanıcısı için kota bulunamadı." 
#, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be greater than or equal to " "already used and reserved %(minimum)s." msgstr "" "%(resource)s için %(limit)s kota sınırı zaten kullanılan ve ayrılmış olan " "%(minimum)s den büyük ya da eşit olmalı." #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be less than or equal to " "%(maximum)s." msgstr "" "%(resource)s için %(limit)s kota sınırı %(maximum)s den küçük ya da eşit " "olmalı." #, python-format msgid "Reached maximum number of retries trying to unplug VBD %s" msgstr "VBD %s çıkarılmaya çalışılırken azami deneme sayısına ulaşıldı" msgid "Request body and URI mismatch" msgstr "URI ve gövde isteği uyumsuz" msgid "Request is too large." msgstr "İstek çok geniş" #, python-format msgid "" "Requested hardware '%(model)s' is not supported by the '%(virt)s' virt driver" msgstr "" "İstenen donanım '%(model)s' '%(virt)s' sanallaştırma sürücüsü tarafından " "desteklenmiyor" #, python-format msgid "Requested image %(image)s has automatic disk resize disabled." msgstr "İstenen imajda %(image)s otomatik disk yeniden boyutlandırma kapalı." msgid "" "Requested instance NUMA topology cannot fit the given host NUMA topology" msgstr "" "İstenen sunucu NUMA topolojisi verilen istemci NUMA topolojisine uymuyor" msgid "" "Requested instance NUMA topology together with requested PCI devices cannot " "fit the given host NUMA topology" msgstr "" "İstenen NUMA topolojisi istenen PCI aygıtlarıyla birlikte istemci NUMA " "topolojisine uymuyor" #, python-format msgid "" "Requested vCPU limits %(sockets)d:%(cores)d:%(threads)d are impossible to " "satisfy for vcpus count %(vcpus)d" msgstr "" "İzin verilen vCPU sınırları %(sockets)d:%(cores)d:%(threads)d %(vcpus)d vcpu " "sayısını tatmin etmek için yetersiz" #, python-format msgid "Rescue device does not exist for instance %s" msgstr "%s sunucusu için kurtarma aygıtı mevcut değil" #, python-format msgid "Resize error: %(reason)s" msgstr "Yeniden boyutlama hatası: %(reason)s" msgid "Resize to zero disk flavor is not allowed." msgstr "Sıfıra yeniden boyutlandırma disk niteliğine izin verilmiyor." msgid "Resource could not be found." msgstr "Kaynak bulunamadı." msgid "Resumed" msgstr "Devam Edildi" #, python-format msgid "Root element name should be '%(name)s' not '%(tag)s'" msgstr "Kök öğe ismi '%(name)s' olmalı '%(tag)s' değil" #, python-format msgid "Scheduler Host Filter %(filter_name)s could not be found." msgstr "%(filter_name)s zamanlayıcı sunucu filtresi bulunamadı." #, python-format msgid "Security group %(name)s is not found for project %(project)s" msgstr "Güvenlik grubu %(name)s %(project)s projesi için bulunamadı" #, python-format msgid "" "Security group %(security_group_id)s not found for project %(project_id)s." msgstr "" "%(project_id)s projesi için %(security_group_id)s güvenlik grubu bulunamadı." #, python-format msgid "Security group %(security_group_id)s not found." msgstr "%(security_group_id)s güvenlik grubu bulunamadı." #, python-format msgid "" "Security group %(security_group_name)s already exists for project " "%(project_id)s." msgstr "" "%(security_group_name)s güvenlik grubu %(project_id)s projesi için zaten " "mevcut." 
#, python-format msgid "" "Security group %(security_group_name)s not associated with the instance " "%(instance)s" msgstr "" "%(security_group_name)s güvenlik grubu %(instance)s sunucusu ile " "ilişkilendirilmemiş" msgid "Security group id should be uuid" msgstr "Güvenlik grubu kimliği uuid olmalı" msgid "Security group name cannot be empty" msgstr "Güvenlik grup adı boş bırakılamaz" msgid "Security group not specified" msgstr "Güvenlik grubu belirlenmedi" #, python-format msgid "Server disk was unable to be resized because: %(reason)s" msgstr "Sunucu diski yeniden boyutlandırılamadı çünkü: %(reason)s" msgid "Server does not exist" msgstr "Sunucu mevcut değil" #, python-format msgid "ServerGroup policy is not supported: %(reason)s" msgstr "SunucuGrubu ilkesi desteklenmiyor: %(reason)s" msgid "ServerGroupAffinityFilter not configured" msgstr "ServerGroupAffinityFilter yapılandırılmamış" msgid "ServerGroupAntiAffinityFilter not configured" msgstr "ServerGroupAntiAffinityFilter yapılandırılmamış" #, python-format msgid "Service %(service_id)s could not be found." msgstr "%(service_id)s servisi bulunamadı." #, python-format msgid "Service %s not found." msgstr "Servis %s bulunamadı." msgid "Service is unavailable at this time." msgstr "Şu anda servis kullanılamıyor." #, python-format msgid "Service with host %(host)s binary %(binary)s exists." msgstr "İstemci %(host)s ikiliğine %(binary)s sahip servis mevcut." #, python-format msgid "Service with host %(host)s topic %(topic)s exists." msgstr "İstemci %(host)s başlığına %(topic)s sahip servis mevcut." #, python-format msgid "Shadow table with name %(name)s already exists." msgstr "%(name)s isminde bir gölge tablo zaten var." #, python-format msgid "Share '%s' is not supported" msgstr "Paylaşım '%s' desteklenmiyor" #, python-format msgid "Share level '%s' cannot have share configured" msgstr "Paylaşım seviyesi '%s' paylaşım yapılandırmasına sahip olamaz" msgid "" "Shrinking the filesystem down with resize2fs has failed, please check if you " "have enough free space on your disk." msgstr "" "resize2fs ile dosya sisteminin küçültülmesi başarısız, lütfen diskinizde " "yeterli alan olduğundan emin olun." #, python-format msgid "Snapshot %(snapshot_id)s could not be found." msgstr "%(snapshot_id)s sistem anlık görüntüsü bulunamadı." msgid "Some required fields are missing" msgstr "Bazı gerekli alanlar eksik" msgid "Sort direction size exceeds sort key size" msgstr "Sıralama yönü boyutu sııralama anahtarı boyutunu geçiyor" msgid "Sort key supplied was not valid." msgstr "Verilen sıralama anahtarı geçerli değil." msgid "Specified fixed address not assigned to instance" msgstr "Belirtilen sabit adres sunucuya atanmamış" msgid "Specify `table_name` or `table` param" msgstr "`tablo_ismi` veya `tablo` parametresi belirtin" msgid "Specify only one param `table_name` `table`" msgstr "Yalnızca bir parametre belirtin `table_name` `table`" msgid "Started" msgstr "Başlatıldı" msgid "Stopped" msgstr "Durduruldu" #, python-format msgid "Storage error: %(reason)s" msgstr "Depolama hatası: %(reason)s" #, python-format msgid "Storage policy %s did not match any datastores" msgstr "Depolama ilkesi %s hiçbir verideposuyla eşleşmedi" msgid "Success" msgstr "Başarılı" msgid "Suspended" msgstr "Askıda" msgid "Swap drive requested is larger than instance type allows." msgstr "İstenen swap sürücüsü sunucu türünün izin verdiğinden daha büyük." 
#, python-format msgid "Task %(task_name)s is already running on host %(host)s" msgstr "Görev %(task_name)s zaten %(host)s istemcisi üstünde çalışıyor" #, python-format msgid "Task %(task_name)s is not running on host %(host)s" msgstr "Görev %(task_name)s %(host)s istemcisi üzerinde çalışmıyor" #, python-format msgid "The PCI address %(address)s has an incorrect format." msgstr "PCI adresi %(address)s yanlış biçimde." #, python-format msgid "The console port range %(min_port)d-%(max_port)d is exhausted." msgstr "Konsol bağlantı noktası aralığı %(min_port)d-%(max_port)d tükenmiş." msgid "The current driver does not support preserving ephemeral partitions." msgstr "Mevcut sürücü geçici bölümleri korumayı desteklemiyor." msgid "The default PBM policy doesn't exist on the backend." msgstr "Varsayılan PBM ilkesi arka uçta mevcut değil." msgid "" "The instance requires a newer hypervisor version than has been provided." msgstr "Örnek şu ankinden daha yeni hypervisor versiyonu gerektirir." msgid "The only partition should be partition 1." msgstr "Tek bölüm bölüm 1 olmalı." #, python-format msgid "The provided RNG device path: (%(path)s) is not present on the host." msgstr "Sağlanan RNG aygıt yolu: (%(path)s) istemci üzerinde mevcut değil." msgid "The request body can't be empty" msgstr "İstek gövdesi boş olamaz" msgid "The request is invalid." msgstr "İstek geçersiz" #, python-format msgid "" "The requested amount of video memory %(req_vram)d is higher than the maximum " "allowed by flavor %(max_vram)d." msgstr "" "İstenen miktarda video hafızası %(req_vram)d nitelik tarafından izin veirlen " "%(max_vram)d den yüksek." msgid "The requested availability zone is not available" msgstr "İstenen kullanılabilirlik bölgesi uygun değil" msgid "The requested console type details are not accessible" msgstr "İstenen konsol türü deteayları erişilebilir değil" msgid "The requested functionality is not supported." msgstr "İstenen işlevsellik desteklenmiyor" #, python-format msgid "The supplied device path (%(path)s) is in use." msgstr "Verilen aygıt yolu (%(path)s) kullanımda." #, python-format msgid "The supplied device path (%(path)s) is invalid." msgstr "Desteklenen cihaz yolu (%(path)s) geçersiz." #, python-format msgid "" "The supplied disk path (%(path)s) already exists, it is expected not to " "exist." msgstr "Desteklenen disk yolu (%(path)s) halen var,fakat var olmaması gerekir." msgid "The supplied hypervisor type of is invalid." msgstr "Desteklenen hypervisor türü geçersiz." msgid "The target host can't be the same one." msgstr "Hedef istemci aynısı olamaz." #, python-format msgid "The token '%(token)s' is invalid or has expired" msgstr "Jeton '%(token)s' geçersiz ya da süresi dolmuş" #, python-format msgid "" "The volume cannot be assigned the same device name as the root device %s" msgstr "Mantıksal sürücü kök aygıt %s ile aynı aygıt ismiyle atanamaz" #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. Run this command again with the --delete " "option after you have backed up any necessary data." msgstr "" "'%(table_name)s' tablosunda uuid veya instance_uuid sütunu NULL olan " "%(records)d kayıt var. Tüm gerekli veriyi yedekledikten sonra bu komutu " "tekrar --delete seçeneğiyle çalıştırın." #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. These must be manually cleaned up before " "the migration will pass. 
Consider running the 'nova-manage db " "null_instance_uuid_scan' command." msgstr "" "'%(table_name)s' tablosunda %(records)d kaydın uuid veya instance_uuid " "sütunu NULL. Göç devam etmeden önce bunların elle temizlenmesi gerekiyor. " "'nova-manage db null_instance_uuid_scan' komutunu çalıştırmayı " "deneyebilirsiniz." msgid "There are not enough hosts available." msgstr "Yeterince kullanılabilir istemci yok." #, python-format msgid "" "There are still %(count)i unmigrated flavor records. Migration cannot " "continue until all instance flavor records have been migrated to the new " "format. Please run `nova-manage db migrate_flavor_data' first." msgstr "" "Hala %(count)i göç etmemiş nitelik kaydı var. Göç tüm sunucu nitelik " "kayıtları yeni biçime göç ettirilmeden devam edemez. Lütfen önce `nova-" "manage db migrate_flavor_data' çalıştırın." #, python-format msgid "There is no such action: %s" msgstr "Böyle bir işlem yok: %s" msgid "There were no records found where instance_uuid was NULL." msgstr "instance_uuid'in NULL olduğu kayıt bulunamadı." msgid "This domU must be running on the host specified by connection_url" msgstr "Bu domU connection_url ile belirtilen istemcide çalışıyor olmalı" #, python-format msgid "This rule already exists in group %s" msgstr "Bu kural zaten grupta var %s" #, python-format msgid "Timeout waiting for device %s to be created" msgstr "%s aygıtının oluşturulması beklenirken zaman aşımı" msgid "Timeout waiting for response from cell" msgstr "Hücreden cevap beklerken zaman aşımı" msgid "To and From ports must be integers" msgstr "Hedef ve Kaynak bağlantı noktaları tam sayı olmalı" msgid "Token not found" msgstr "Jeton bulunamadı" msgid "Type and Code must be integers for ICMP protocol type" msgstr "ICMP iletişim kuralı türü için Tür ve Kod tam sayı olmalı" msgid "Unable to authenticate Ironic client." msgstr "Ironic istemcisi doğrulanamıyor." #, python-format msgid "Unable to contact guest agent. The following call timed out: %(method)s" msgstr "Konuk ajana bağlanılamıyor. 
Şu çağrı zaman aşımına uğradı: %(method)s" #, python-format msgid "Unable to destroy VBD %s" msgstr "VBD %s silinemedi" #, python-format msgid "Unable to destroy VDI %s" msgstr "VDI %s silinemedi" #, python-format msgid "Unable to determine disk bus for '%s'" msgstr "'%s' için disk veri yolu belirlenemiyor" #, python-format msgid "Unable to determine disk prefix for %s" msgstr "%s için disk ön eki belirlenemiyor" #, python-format msgid "Unable to eject %s from the pool; No master found" msgstr "%s havuzdan çıkarılamıyor; Ana bulunamadı" #, python-format msgid "Unable to eject %s from the pool; pool not empty" msgstr "%s havuzdan boşaltılamadı; havuz boş değil" #, python-format msgid "Unable to find SR from VBD %s" msgstr "VBD %s'den SR bulunamadı" #, python-format msgid "Unable to find SR from VDI %s" msgstr "VDI %s'den SR bulunamadı" #, python-format msgid "Unable to find ca_file : %s" msgstr "ca_file bulunamadı: %s" #, python-format msgid "Unable to find cert_file : %s" msgstr "cert_file bulunamadı: %s" #, python-format msgid "Unable to find host for Instance %s" msgstr "%s örneği için sunucu bulma başarısız" msgid "Unable to find iSCSI Target" msgstr "iSCSI Hedefi bulunamadı" #, python-format msgid "Unable to find key_file : %s" msgstr "key_file bulunamadı: %s" msgid "Unable to find root VBD/VDI for VM" msgstr "VM için kök VBD/VDI bulunamadı" msgid "Unable to find volume" msgstr "Mantıksal sürücü bulunamadı" #, python-format msgid "Unable to get record of VDI %s on" msgstr "VDI %s kaydı alınamadı" #, python-format msgid "Unable to introduce VDI for SR %s" msgstr "SR %s için VDI getirilemedi" #, python-format msgid "Unable to introduce VDI on SR %s" msgstr "SR %s üzerine VDI getirilemiyor" #, python-format msgid "Unable to join %s in the pool" msgstr "Havuzda %s'e katılınamadı" msgid "" "Unable to launch multiple instances with a single configured port ID. Please " "launch your instance one by one with different ports." msgstr "" "Birden fazla sunucu tekil yapılandırılan bağlantı noktası ID'si ile " "başlatılamadı. Lütfen sunucunuzu tek tek değişik bağlantı noktalarıyla " "başlatın." #, python-format msgid "" "Unable to migrate %(instance_uuid)s to %(dest)s: Lack of memory(host:" "%(avail)s <= instance:%(mem_inst)s)" msgstr "" "%(instance_uuid)s %(dest)s e göç ettirilemiyor: Hafıza yetersiz(istemci:" "%(avail)s <= sunucu:%(mem_inst)s)" #, python-format msgid "" "Unable to migrate %(instance_uuid)s: Disk of instance is too large(available " "on destination host:%(available)s < need:%(necessary)s)" msgstr "" "%(instance_uuid)s göç ettirilemiyor: Sunucunun diski çok büyük(hedef " "istemcide kullanılabilir olan:%(available)s < ihtiyaç duyulan:%(necessary)s)" #, python-format msgid "" "Unable to migrate instance (%(instance_id)s) to current host (%(host)s)." msgstr "Mevcut (%(host)s) sunucusundan (%(instance_id)s) örneği geçirilemez." #, python-format msgid "Unable to obtain target information %s" msgstr "Hedef bilgi alınamadı %s" msgid "Unable to resize disk down." msgstr "Disk boyutu küçültülemedi." msgid "Unable to set password on instance" msgstr "Sunucuya parola ayarlanamadı" msgid "Unable to shrink disk." msgstr "Disk küçültülemiyor." msgid "Unable to terminate instance." msgstr "Sunucu sonlandırılamadı." #, python-format msgid "Unable to unplug VBD %s" msgstr "VBD %s çıkarılamıyor" #, python-format msgid "Unacceptable CPU info: %(reason)s" msgstr "Kabul edilemez CPU bilgisi: %(reason)s" msgid "Unacceptable parameters." msgstr "Kabul edilemez parametreler var." 
#, python-format msgid "Unavailable console type %(console_type)s." msgstr "Uygun olmayan konsol türü %(console_type)s." #, python-format msgid "" "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ " "and attach the Nova API log if possible.\n" "%s" msgstr "" "Beklenmeyen API Hatası: Lütfen bunu http://bugs.launchpad.net/nova/ adresine " "raporlayın ve mümkünse Nova API kaydını da ekleyin.\n" "%s" #, python-format msgid "Unexpected aggregate action %s" msgstr "Beklenmedik takım eylemi %s" msgid "Unexpected type adding stats" msgstr "Durum eklemede beklenmedik tür" #, python-format msgid "Unexpected vif_type=%s" msgstr "Beklenmeyen vif_type=%s" msgid "Unknown" msgstr "Bilinmeyen" msgid "Unknown action" msgstr "Bilinmeyen eylem" #, python-format msgid "Unknown delete_info type %s" msgstr "Bilinmeyen delete_info türü %s" #, python-format msgid "Unknown image_type=%s" msgstr "Bilinmeyen image_type=%s" #, python-format msgid "Unknown quota resources %(unknown)s." msgstr "Bilinmeyen kota kaynakları %(unknown)s." msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "Bilinmeyen sıralama yönü, 'desc' veya 'asc' olmalı" #, python-format msgid "Unknown type: %s" msgstr "Bilinmeyen tür: %s" msgid "Unrecognized legacy format." msgstr "Tanınmayan geçmişe ait biçim." #, python-format msgid "Unrecognized read_deleted value '%s'" msgstr "Tanınmayan silinmiş okuma değeri '%s'" #, python-format msgid "Unrecognized value '%s' for CONF.running_deleted_instance_action" msgstr "CONF.running_deleted_instance_action için tanınmayan değer '%s'" #, python-format msgid "Unshelve attempted but the image %s cannot be found." msgstr "Askıdan almaya çalışıldı ama %s imajı bulunamadı." msgid "Unsupported Content-Type" msgstr "Desteklenmeyen içerik türü" msgid "Upgrade DB using Essex release first." msgstr "Önce Essex sürümünü kullanarak veritabanı yükseltimi yapın." #, python-format msgid "User %(username)s not found in password file." msgstr "Kullanıcı %(username)s parola dosyasında bulunamadı." #, python-format msgid "User %(username)s not found in shadow file." msgstr "Kullanıcı %(username)s gölge dosyasında bulunamadı." msgid "User data needs to be valid base 64." msgstr "Kullanıcı verisi geçerli base 64 olmalıdır." msgid "User does not have admin privileges" msgstr "Kullanıcı yönetici ayrıcalıklarına sahip değil" msgid "" "Using different block_device_mapping syntaxes is not allowed in the same " "request." msgstr "" "Aynı istekte değişik blok_aygıt_eşleştirmesi söz dizimi kullanımına izin " "verilmiyor." #, python-format msgid "" "VDI %(vdi_ref)s is %(virtual_size)d bytes which is larger than flavor size " "of %(new_disk_size)d bytes." msgstr "" "VDI %(vdi_ref)s %(virtual_size)d bayt ki bu nitelik boyutu olan " "%(new_disk_size)d bayttan fazla." #, python-format msgid "" "VDI not found on SR %(sr)s (vdi_uuid %(vdi_uuid)s, target_lun %(target_lun)s)" msgstr "" "SR %(sr)s üzerinde VDI bulunamadı (vdi_uuid %(vdi_uuid)s, target_lun " "%(target_lun)s)" #, python-format msgid "VHD coalesce attempts exceeded (%d), giving up..." msgstr "VHD ergitme girişimi aşıldı (%d), vaz geçiliyor..." #, python-format msgid "" "Version %(req_ver)s is not supported by the API. Minimum is %(min_ver)s and " "maximum is %(max_ver)s." msgstr "" "Sürüm %(req_ver)s API tarafından desteklenmiyor. Asgari %(min_ver)s ve azami " "%(max_ver)s." 
msgid "Virtual Interface creation failed" msgstr "Sanal arayüz oluşturma hatası" msgid "Virtual interface plugin failed" msgstr "Sanal arayüz eklentisi başarısız" #, python-format msgid "Virtual machine mode '%(vmmode)s' is not recognised" msgstr "Sanal makine kipi '%(vmmode)s' tanınmıyor" #, python-format msgid "Virtual machine mode '%s' is not valid" msgstr "Sanal makine kipi '%s' geçerli değil" #, python-format msgid "Virtualization type '%(virt)s' is not supported by this compute driver" msgstr "" "Sanallaştırma türü '%(virt)s' bu hesaplama sürücüsü tarafından desteklenmiyor" #, python-format msgid "Volume %(volume_id)s could not be found." msgstr "%(volume_id)s bölümü bulunamadı." #, python-format msgid "" "Volume %(volume_id)s did not finish being created even after we waited " "%(seconds)s seconds or %(attempts)s attempts. And its status is " "%(volume_status)s." msgstr "" "Mantıksal sürücü %(volume_id)s oluşturulması %(seconds)s saniye beklememize " "ya da %(attempts)s kere denemeye rağmen bitmedi. Ve durumu %(volume_status)s." msgid "Volume does not belong to the requested instance." msgstr "Mantıksal sürücü istenen sunucuya ait değil." #, python-format msgid "" "Volume sets block size, but the current libvirt hypervisor '%s' does not " "support custom block size" msgstr "" "Mantıksal sürücü blok boyutu ayarlıyor, ama mevcut libvirt hipervizörü '%s' " "özel blok boyutunu desteklemiyor" #, python-format msgid "" "We do not support scheme '%s' under Python < 2.7.4, please use http or https" msgstr "" "Python < 2.7.4 altında '%s' şablonunu desteklemiyoruz, lütfen http ya da " "https kullanın" msgid "When resizing, instances must change flavor!" msgstr "Yeniden boyutlandırırken, sunucular nitelik değiştirmelidir!" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "Sunucu SSL kipinde çalıştırıldığında, yapılandırma dosyanızda hem cert_file " "hem key_file seçeneklerinin değerlerini belirtmelisiniz" #, python-format msgid "Wrong quota method %(method)s used on resource %(res)s" msgstr "Kaynak %(res)s üstünde yanlış kota metodu %(method)s kullanıldı" msgid "Wrong type of hook method. Only 'pre' and 'post' type allowed" msgstr "" "Yanlış türde kanca metodu. Yalnızca 'pre' ve 'post' türlerine izin verilir" msgid "X-Instance-ID header is missing from request." msgstr "İstekte X-Instance-ID başlığı eksik." msgid "X-Instance-ID-Signature header is missing from request." msgstr "İstekte X-Instance-ID-Signature başlığı eksik." msgid "X-Tenant-ID header is missing from request." msgstr "İstekte X-Tenant-ID başlığı eksik." msgid "XAPI supporting relax-xsm-sr-check=true required" msgstr "relax-xsm-sr-check=true destekleyen XAPI gerekli" msgid "You are not allowed to delete the image." msgstr "İmajı silmeye yetkili değilsiniz." msgid "" "You are not authorized to access the image the instance was started with." msgstr "Sunucunun başlatıldığı imaja erişme yetkiniz yok." msgid "You must implement __call__" msgstr "__call__ fonksiyonunu uygulamalısınız." msgid "You should specify images_rbd_pool flag to use rbd images." msgstr "" "rbd imajlarını kullanmak için images_rbd_pool bayrağını belirtmelisiniz." msgid "You should specify images_volume_group flag to use LVM images." msgstr "LVM imajları kullanmak için images_volume_group belirtmelisiniz." 
msgid "admin password can't be changed on existing disk" msgstr "yönetici parolası mevcut diskte değiştirilemez" msgid "aggregate deleted" msgstr "takım silindi" msgid "aggregate in error" msgstr "takım hata durumunda" #, python-format msgid "assert_can_migrate failed because: %s" msgstr "assert_can_migrate başarısız çünkü: %s" msgid "cannot understand JSON" msgstr "JSON dosyası anlaşılamadı" msgid "clone() is not implemented" msgstr "clone() uygulanmamış" #, python-format msgid "connect info: %s" msgstr "bağlantı bilgisi: %s" #, python-format msgid "connecting to: %(host)s:%(port)s" msgstr "bağlanılıyor: %(host)s:%(port)s" #, python-format msgid "disk type '%s' not supported" msgstr "disk türü '%s' desteklenmiyor" #, python-format msgid "empty project id for instance %s" msgstr "%s sunucusu için boş proje kimliği" msgid "error setting admin password" msgstr "yönetici parolası ayarlarken hata" #, python-format msgid "error: %s" msgstr "hata: %s" #, python-format msgid "failed to generate X509 fingerprint. Error message: %s" msgstr "X509 parmak izi oluşturulamadı. Hata iletisi: %s" msgid "failed to generate fingerprint" msgstr "parmak izi oluşturulamadı" msgid "filename cannot be None" msgstr "dosya ismi None olamaz" #, python-format msgid "fmt=%(fmt)s backed by: %(backing_file)s" msgstr "%(backing_file)s tarafından desteklenen fmt=%(fmt)s" #, python-format msgid "href %s does not contain version" msgstr "%s referansı versiyon içermiyor" msgid "image already mounted" msgstr "imaj zaten bağlanmış" #, python-format msgid "instance %s is not running" msgstr "sunucu %s çalışmıyor" msgid "instance has a kernel or ramdisk but not both" msgstr "sunucu bir çekirdek veya ramdisk'e sahip ama her ikisine birden değil" msgid "instance is a required argument to use @refresh_cache" msgstr "sunucu @refresh_cache kullanmak için gerekli bir bağımsız değişken" msgid "instance is not in a suspended state" msgstr "sunucu bekletilme durumunda değil" msgid "instance is not powered on" msgstr "sunucunun gücü açılmamış" msgid "instance is powered off and cannot be suspended." msgstr "sunucunun gücü kapalı ve bekletilemez." #, python-format msgid "instance_id %s could not be found as device id on any ports" msgstr "" "instance_id %s herhangi bir bağlantı noktasında aygıt kimliği olarak " "bulunamadı" msgid "is_public must be a boolean" msgstr "is_public bool olmalı" msgid "keymgr.fixed_key not defined" msgstr "keymgr.fixed_key tanımlanmamış" #, python-format msgid "libguestfs installed but not usable (%s)" msgstr "libguestfs kurulu ama kullanılabilir değil (%s)" #, python-format msgid "libguestfs is not installed (%s)" msgstr "libguestfs kurulu değil (%s)" #, python-format msgid "marker [%s] not found" msgstr " [%s] göstergesi bulunamadı" msgid "max_count cannot be greater than 1 if an fixed_ip is specified." msgstr "sabit ip belirtildiyse max_count 1'den büyük olamaz." 
msgid "min_count must be <= max_count" msgstr "min_count max_count'dan küçük olmalı" #, python-format msgid "nbd device %s did not show up" msgstr "nbd aygıtı %s ortaya çıkmadı" msgid "nbd unavailable: module not loaded" msgstr "nbd kullanılabilir değil: modül yüklenmemiş" msgid "no hosts to remove" msgstr "silinecek istemci yok" #, python-format msgid "no match found for %s" msgstr "%s için eşleşme bulunamadı" #, python-format msgid "not able to execute ssh command: %s" msgstr "ssh komutu çalıştırılamadı: %s" msgid "operation time out" msgstr "işlem zaman aşımına uğradı" #, python-format msgid "partition %s not found" msgstr "bölüm %s bulunamadı" #, python-format msgid "partition search unsupported with %s" msgstr "bölüm arama %s ile desteklenmiyor" msgid "pause not supported for vmwareapi" msgstr "vmwareapi için duraklatma desteklenmiyor" #, python-format msgid "qemu-nbd error: %s" msgstr "qemu-nbd hatası: %s" msgid "rbd python libraries not found" msgstr "rbd python kitaplıkları bulunamadı" #, python-format msgid "read_deleted can only be one of 'no', 'yes' or 'only', not %r" msgstr "" "read_deleted değişkeni 'no', 'yes' veya 'only' değerlerini alabilir, %r " "olamaz" msgid "serve() can only be called once" msgstr "serve() yalnızca bir kere çağrılabilir" msgid "service is a mandatory argument for DB based ServiceGroup driver" msgstr "servis, DB tabanlı Servis Grubu sürücüsü için gerekli bir değişkendir" msgid "service is a mandatory argument for Memcached based ServiceGroup driver" msgstr "" "servis, Memcached tabanlı Servis Grubu sürücüsü için gerekli bir bağımsız " "değişkendir" msgid "set_admin_password is not implemented by this driver or guest instance." msgstr "" "set_admin_password bu sürücü ya da konuk sunucu tarafından uygulanmıyor." msgid "setup in progress" msgstr "kurulum sürüyor" #, python-format msgid "snapshot for %s" msgstr "%s için anlık görüntü" msgid "snapshot_id required in create_info" msgstr "create_info'da snapshot_id gerekiyor" msgid "token not provided" msgstr "jeton sağlanmamış" msgid "too many body keys" msgstr "Çok sayıda gövde anahtarları" msgid "unpause not supported for vmwareapi" msgstr "vmwareapi için sürdürme desteklenmiyor" msgid "version should be an integer" msgstr "Sürüm tam sayı olmak zorunda" #, python-format msgid "vg %s must be LVM volume group" msgstr "vg %s LVM mantıksal sürücü grubu olmalı" #, python-format msgid "vhostuser_sock_path not present in vif_details for vif %(vif_id)s" msgstr "" "vif_details kısmında %(vif_id)s vif'i için vhostuser_sock_path mevcut değil" msgid "vif_type parameter must be present for this vif_driver implementation" msgstr "Bu vif_driver uygulaması için vif_type parametresi mevcut olmalı" #, python-format msgid "volume %s already attached" msgstr "mantıksal sürücü %s zaten ekli" #, python-format msgid "" "volume '%(vol)s' status must be 'in-use'. Currently in '%(status)s' status" msgstr "" "mantıksal sürücü '%(vol)s' durumu 'kullanımda' olmalı. 
Şu an '%(status)s' " "durumunda" #, python-format msgid "xenapi.fake does not have an implementation for %s" msgstr "xenapi.fake'in %s için bir uygulaması yok" #, python-format msgid "" "xenapi.fake does not have an implementation for %s or it has been called " "with the wrong number of arguments" msgstr "" "xenapi.fake'in %s için bir uygulaması yok veya yanlış sayıda bağımsız " "değişken ile çağrılmış" ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0664723 nova-21.2.4/nova/locale/zh_CN/0000775000175000017500000000000000000000000015771 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3544698 nova-21.2.4/nova/locale/zh_CN/LC_MESSAGES/0000775000175000017500000000000000000000000017556 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/locale/zh_CN/LC_MESSAGES/nova.po0000664000175000017500000031605500000000000021073 0ustar00zuulzuul00000000000000# Translations template for nova. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the nova project. # # Translators: # Amos Huang , 2013 # XiaoYong Yuan , 2013 # Ying Chun Guo , 2013 # donghua , 2013 # LIU Yulong , 2013 # LIU Yulong , 2013 # FIRST AUTHOR , 2011 # hamo , 2012 # hanxue , 2012 # honglei, 2015 # Jack River , 2013 # kwang1971 , 2014 # kwang1971 , 2014 # Lee Anthony , 2013 # Jack River , 2013 # Shuwen SUN , 2014 # Tom Fifield , 2013-2014 # Xiao Xi LIU , 2014 # XiaoYong Yuan , 2013 # 颜海峰 , 2014 # Yu Zhang, 2013 # Yu Zhang, 2013 # 汪军 , 2015 # 颜海峰 , 2014 # English translations for nova. # Andreas Jaeger , 2016. #zanata # zzxwill , 2016. #zanata # blkart , 2017. #zanata # Yikun Jiang , 2018. #zanata msgid "" msgstr "" "Project-Id-Version: nova VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2020-04-27 16:23+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2018-06-25 11:53+0000\n" "Last-Translator: Yikun Jiang \n" "Language: zh_CN\n" "Language-Team: Chinese (China)\n" "Plural-Forms: nplurals=1; plural=0\n" "Generated-By: Babel 2.2.0\n" "X-Generator: Zanata 4.3.3\n" msgid "\"Look for the VDIs failed" msgstr "查找VDI失败" #, python-format msgid "%(address)s is not a valid IP v4/6 address." msgstr "%(address)s 不是有效的IP v4/6地址。" #, python-format msgid "" "%(binary)s attempted direct database access which is not allowed by policy" msgstr "%(binary)s 尝试了直接访问数据库,策略不允许进行此访问" #, python-format msgid "%(cidr)s is not a valid IP network." msgstr "%(cidr)s 是无效 IP 网络。" #, python-format msgid "%(field)s should not be part of the updates." 
msgstr "%(field)s应该是更新的部分。" #, python-format msgid "%(memsize)d MB of memory assigned, but expected %(memtotal)d MB" msgstr "已分配 %(memsize)d MB 内存,但需要 %(memtotal)d MB" #, python-format msgid "%(path)s is not on local storage: %(reason)s" msgstr "%(path)s 没有在本地存储器上:%(reason)s" #, python-format msgid "%(path)s is not on shared storage: %(reason)s" msgstr "%(path)s 没有在共享存储器上:%(reason)s" #, python-format msgid "%(total)i rows matched query %(meth)s, %(done)i migrated" msgstr "%(total)i 行与查询 %(meth)s 匹配,%(done)i 已迁移" #, python-format msgid "%(type)s hypervisor does not support PCI devices" msgstr "%(type)s监测器不支持PCI设备" #, python-format msgid "%(worker_name)s value of %(workers)s is invalid, must be greater than 0" msgstr "工作线程%(worker_name)s的数量%(workers)s非法,必须大于0" #, python-format msgid "%s does not support disk hotplug." msgstr "%s 不支持磁盘热插。" #, python-format msgid "%s format is not supported" msgstr "不支持格式%s" #, python-format msgid "%s is not supported." msgstr "不支持%s" #, python-format msgid "%s must be either 'MANUAL' or 'AUTO'." msgstr "%s 必须是'MANUAL' 或者 'AUTO'。" #, python-format msgid "'%(other)s' should be an instance of '%(cls)s'" msgstr "'%(other)s'应该是'%(cls)s'的实例" msgid "'qemu-img info' parsing failed." msgstr "'qemu-img info'解析失败" #, python-format msgid "'rxtx_factor' argument must be a float between 0 and %g" msgstr "'rxtx_factor'参数必须是0和%g之间的浮点数" #, python-format msgid "A NetworkModel is required in field %s" msgstr "在字段%s中网络模型是必须的" #, python-format msgid "" "API Version String %(version)s is of invalid format. Must be of format " "MajorNum.MinorNum." msgstr "API 版本字符%(version)s格式无效。必须是大版本号.小版本号。" #, python-format msgid "API version %(version)s is not supported on this method." msgstr "这个方法不支持%(version)s版本的API。" msgid "Access list not available for public flavors." msgstr "未提供公用云主机类型的访问列表。" #, python-format msgid "Action %s not found" msgstr "行为 %s 未定义" #, python-format msgid "" "Action for request_id %(request_id)s on instance %(instance_uuid)s not found" msgstr "找不到实例 %(instance_uuid)s 上针对 request_id %(request_id)s 的操作" #, python-format msgid "Action: '%(action)s', calling method: %(meth)s, body: %(body)s" msgstr "操作:“%(action)s”,调用方法:%(meth)s,主体:%(body)s" #, python-format msgid "Add metadata failed for aggregate %(id)s after %(retries)s retries" msgstr "在%(retries)s尝试后,为聚合%(id)s 添加元数据" msgid "Affinity instance group policy was violated." msgstr "违反亲和力实例组策略。" #, python-format msgid "Agent does not support the call: %(method)s" msgstr "代理不支持调用:%(method)s" #, python-format msgid "" "Agent-build with hypervisor %(hypervisor)s os %(os)s architecture " "%(architecture)s exists." msgstr "" "虚拟机监控程序 %(hypervisor)s 操作系统 %(os)s 体系结构 %(architecture)s 的代" "理构建已存在。" #, python-format msgid "Aggregate %(aggregate_id)s already has host %(host)s." msgstr "聚合 %(aggregate_id)s已经有主机 %(host)s。" #, python-format msgid "Aggregate %(aggregate_id)s could not be found." msgstr "找不到聚合 %(aggregate_id)s。" #, python-format msgid "Aggregate %(aggregate_id)s has no host %(host)s." msgstr "聚合 %(aggregate_id)s没有主机 %(host)s。" #, python-format msgid "Aggregate %(aggregate_id)s has no metadata with key %(metadata_key)s." msgstr "聚合 %(aggregate_id)s 没有键为 %(metadata_key)s 的元数据。" #, python-format msgid "" "Aggregate %(aggregate_id)s: action '%(action)s' caused an error: %(reason)s." msgstr "聚集 %(aggregate_id)s:操作“%(action)s”导致了错误:%(reason)s。" #, python-format msgid "Aggregate %(aggregate_name)s already exists." 
msgstr "聚合 %(aggregate_name)s 已经存在。" #, python-format msgid "Aggregate %s does not support empty named availability zone" msgstr "聚集 %s 不支持名称为空的可用区域" #, python-format msgid "Aggregate for host %(host)s count not be found." msgstr "找不到主机 %(host)s 计数的汇总。" #, python-format msgid "An invalid 'name' value was provided. The name must be: %(reason)s" msgstr "提供了无效“name”值。name 必须为:%(reason)s" msgid "An unknown error has occurred. Please try your request again." msgstr "发生了一个未知的错误. 请重试你的请求." msgid "An unknown exception occurred." msgstr "发生未知异常。" msgid "Anti-affinity instance group policy was violated." msgstr "违反反亲和力实例组策略。" #, python-format msgid "Architecture name '%(arch)s' is not recognised" msgstr "体系结构名称“%(arch)s”不可识别" #, python-format msgid "Architecture name '%s' is not valid" msgstr "体系结构名称 '%s' 无效" msgid "Archiving" msgstr "归档" #, python-format msgid "" "Attempt to consume PCI device %(compute_node_id)s:%(address)s from empty pool" msgstr "尝试从空池子中消费PCI设备%(compute_node_id)s:%(address)s" msgid "Attempted overwrite of an existing value." msgstr "尝试覆盖一个已存在的值。" #, python-format msgid "Attribute not supported: %(attr)s" msgstr "属性不受支持: %(attr)s" msgid "Bad Request - Invalid Parameters" msgstr "错误请求——参数无效" #, python-format msgid "Bad network format: missing %s" msgstr "错误的网络格式:丢失%s" msgid "Bad networks format" msgstr "错误的网络格式" #, python-format msgid "Bad networks format: network uuid is not in proper format (%s)" msgstr "损坏的网络格式:网络 uuid 格式不正确 (%s)" #, python-format msgid "Bad prefix for network in cidr %s" msgstr "cidr %s 中网络前缀不正确" #, python-format msgid "" "Binding failed for port %(port_id)s, please check neutron logs for more " "information." msgstr "绑定端口%(port_id)s失败,更多细节请查看日志。" msgid "Blank components" msgstr "空组件" msgid "" "Blank volumes (source: 'blank', dest: 'volume') need to have non-zero size" msgstr "空白卷(source: 'blank', dest: 'volume')需要未非0的大小。" #, python-format msgid "Block Device %(id)s is not bootable." msgstr "块设备 %(id)s 不能被引导。" #, python-format msgid "" "Block Device Mapping %(volume_id)s is a multi-attach volume and is not valid " "for this operation." msgstr "块设备映射 %(volume_id)s 是多重附加卷,对此操作无效。" msgid "Block Device Mapping cannot be converted to legacy format. " msgstr "块设备映射不能被转换为旧格式。" msgid "Block Device Mapping is Invalid." msgstr "块设备映射无效。" #, python-format msgid "Block Device Mapping is Invalid: %(details)s" msgstr "块设备映射无效: %(details)s" msgid "" "Block Device Mapping is Invalid: Boot sequence for the instance and image/" "block device mapping combination is not valid." msgstr "块设备映射无效: 当前云主机和镜像/块设备映射组合的启动序列无效。" msgid "" "Block Device Mapping is Invalid: You specified more local devices than the " "limit allows" msgstr "块设备映射无效: 你指定了多于限定允许的本地设备。" #, python-format msgid "Block Device Mapping is Invalid: failed to get image %(id)s." msgstr "块设备映射无效: 无法获取镜像 %(id)s." #, python-format msgid "Block Device Mapping is Invalid: failed to get snapshot %(id)s." msgstr "块设备映射无效:未能获取快照 %(id)s。" #, python-format msgid "Block Device Mapping is Invalid: failed to get volume %(id)s." msgstr "块设备映射无效:未能获取卷 %(id)s。" msgid "Block migration can not be used with shared storage." msgstr "块存储迁移无法在共享存储使用" msgid "Boot index is invalid." 
msgstr "启动索引编号无效。" #, python-format msgid "Build of instance %(instance_uuid)s aborted: %(reason)s" msgstr "实例%(instance_uuid)s的构建已中止:%(reason)s" #, python-format msgid "Build of instance %(instance_uuid)s was re-scheduled: %(reason)s" msgstr "实例%(instance_uuid)s的构建已被重新安排:%(reason)s" #, python-format msgid "BuildRequest not found for instance %(uuid)s" msgstr "找不到对应实例 %(uuid)s 的无效请求" msgid "CPU and memory allocation must be provided for all NUMA nodes" msgstr "必须为所有 NUMA 节点提供 CPU 和内存分配" #, python-format msgid "" "CPU doesn't have compatibility.\n" "\n" "%(ret)s\n" "\n" "Refer to %(u)s" msgstr "" "CPU 不兼容.\n" "\n" "%(ret)s\n" "\n" "参考 %(u)s" #, python-format msgid "CPU number %(cpunum)d is assigned to two nodes" msgstr "CPU 编号 %(cpunum)d 分配给两个节点" #, python-format msgid "CPU number %(cpunum)d is larger than max %(cpumax)d" msgstr "CPU 数 %(cpunum)d 大于最大值 %(cpumax)d" #, python-format msgid "CPU number %(cpuset)s is not assigned to any node" msgstr "CPU 编号 %(cpuset)s 未分配为任何节点" msgid "Can not add access to a public flavor." msgstr "不能添加访问到公共云主机类型。" msgid "Can not find requested image" msgstr "无法找到请求的镜像" #, python-format msgid "Can not handle authentication request for %d credentials" msgstr "无法为 %d 凭证处理认证请求" msgid "Can't resize a disk to 0 GB." msgstr "不能调整磁盘到0GB." msgid "Can't resize down ephemeral disks." msgstr "不能向下调整瞬时磁盘。" msgid "Can't retrieve root device path from instance libvirt configuration" msgstr "无法从实例 libvirt 配置中检索到根设备路径" #, python-format msgid "" "Cannot '%(action)s' instance %(server_id)s while it is in %(attr)s %(state)s" msgstr "" "在实例 %(server_id)s 处于 %(attr)s %(state)s 状态时,无法对该实例执" "行“%(action)s”" #, python-format msgid "" "Cannot access \"%(instances_path)s\", make sure the path exists and that you " "have the proper permissions. In particular Nova-Compute must not be executed " "with the builtin SYSTEM account or other accounts unable to authenticate on " "a remote host." msgstr "" "不能访问\"%(instances_path)s\",确保路径存在并且您有恰当的权限。实际系统内建" "账号在一个远程机无法认证的账号不能执行命令Nova-Compute。" #, python-format msgid "Cannot add host to aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "不能添加主机到聚合%(aggregate_id)s。原因是:%(reason)s。" msgid "Cannot attach one or more volumes to multiple instances" msgstr "无法将一个或多个卷连接至多个实例" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "对于孤立对象 %(objtype)s 无法调用 %(method)s" #, python-format msgid "" "Cannot determine the parent storage pool for %s; cannot determine where to " "store images" msgstr "无法确定 %s 的父存储池;无法确定镜像的存储位置" msgid "Cannot find SR of content-type ISO" msgstr "无法找到content-type ISO的存储库" msgid "Cannot find SR to read/write VDI." msgstr "没有找到存储库来读写VDI。" msgid "Cannot find image for rebuild" msgstr "无法找到用来重新创建的镜像" #, python-format msgid "Cannot remove host %(host)s in aggregate %(id)s" msgstr "无法在聚集 %(id)s 中除去主机 %(host)s" #, python-format msgid "Cannot remove host from aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "不能从聚合%(aggregate_id)s移除主机。原因是:%(reason)s。" msgid "Cannot rescue a volume-backed instance" msgstr "无法急救卷支持的实例" #, python-format msgid "" "Cannot resize the root disk to a smaller size. Current size: " "%(curr_root_gb)s GB. Requested size: %(new_root_gb)s GB." 
msgstr "" "无法缩小根磁盘大小。当前大小:%(curr_root_gb)s GB。所请求大小:" "%(new_root_gb)s GB。" msgid "" "Cannot set cpu thread pinning policy in a non dedicated cpu pinning policy" msgstr "无法在非专用 CPU 锁定策略中设置 CPU 线程锁定策略。" msgid "Cannot set realtime policy in a non dedicated cpu pinning policy" msgstr "无法在非专用 CPU 锁定策略中设置实时策略。" #, python-format msgid "Cannot update aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "不能更新聚合%(aggregate_id)s。原因是:%(reason)s。" #, python-format msgid "" "Cannot update metadata of aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "不能更新聚合%(aggregate_id)s的元素数据。原因是:%(reason)s。" #, python-format msgid "Cell %(uuid)s has no mapping." msgstr "单元%(uuid)s没有映射" #, python-format msgid "" "Change would make usage less than 0 for the following resources: %(unders)s" msgstr "对于下列资源,更改将导致使用量小于 0:%(unders)s" #, python-format msgid "Cinder API version %(version)s is not available." msgstr "Cinder API 版本 %(version)s 不可用." #, python-format msgid "Class %(class_name)s could not be found: %(exception)s" msgstr "找不到类 %(class_name)s :异常 %(exception)s" #, python-format msgid "" "Command Not supported. Please use Ironic command %(cmd)s to perform this " "action." msgstr "命令不支持。请使用Ironic命令%(cmd)s执行这个操作。" msgid "Completed" msgstr "完成" #, python-format msgid "Compute host %(host)s could not be found." msgstr "计算主机 %(host)s 没有找到。" #, python-format msgid "Compute host %s not found." msgstr "计算主机 %s 未找到。" #, python-format msgid "Compute service of %(host)s is still in use." msgstr "%(host)s 的计算服务仍然在使用。" #, python-format msgid "Compute service of %(host)s is unavailable at this time." msgstr "%(host)s 的计算服务此时不可用。" #, python-format msgid "Config drive format '%(format)s' is not supported." msgstr "配置驱动器格式“%(format)s”不受支持。" #, python-format msgid "" "Config requested an explicit CPU model, but the current libvirt hypervisor " "'%s' does not support selecting CPU models" msgstr "" "配置已请求显式 CPU 模型,但是当前 libvirt 管理程序“%s”不支持选择 CPU 模型" #, python-format msgid "" "Conflict updating instance %(instance_uuid)s, but we were unable to " "determine the cause" msgstr "更新实例%(instance_uuid)s冲突,但是我们不能判断原因" #, python-format msgid "" "Conflict updating instance %(instance_uuid)s. Expected: %(expected)s. " "Actual: %(actual)s" msgstr "更新实例%(instance_uuid)s冲突。期望:%(expected)s。实际:%(actual)s" #, python-format msgid "Connection to cinder host failed: %(reason)s" msgstr "连接cinder主机失败: %(reason)s" #, python-format msgid "Connection to glance host %(server)s failed: %(reason)s" msgstr "连接 Glance 主机 %(server)s 失败:%(reason)s" #, python-format msgid "Connection to libvirt failed: %s" msgstr "与 libvirt 的连接失败:%s" msgid "Connection to libvirt lost" msgstr "libvirt 连接丢失" #, python-format msgid "Connection to libvirt lost: %s" msgstr "到libvirt的连接丢失:%s" #, python-format msgid "Connection to the hypervisor is broken on host: %(host)s" msgstr "在主机: %(host)s上面,监测器的连接断开。" #, python-format msgid "" "Console log output could not be retrieved for instance %(instance_id)s. " "Reason: %(reason)s" msgstr "无法检索实例 %(instance_id)s 控制台日志输出。原因:%(reason)s" msgid "Constraint not met." msgstr "未符合约束。" #, python-format msgid "Converted to raw, but format is now %s" msgstr "转化为裸格式,但目前格式是 %s" #, python-format msgid "Could not attach image to loopback: %s" msgstr "无法给loopback附加镜像:%s" #, python-format msgid "Could not fetch image %(image_id)s" msgstr "未能访存映像 %(image_id)s" #, python-format msgid "Could not find a handler for %(driver_type)s volume." msgstr "无法为 %(driver_type)s 卷找到句柄。" #, python-format msgid "Could not find binary %(binary)s on host %(host)s." 
msgstr "没有找到二进制 %(binary)s 在主机 %(host)s 上。" #, python-format msgid "Could not find config at %(path)s" msgstr "在 %(path)s 找不到配置文件。" #, python-format msgid "Could not find disk path. Volume id: %s" msgstr "磁盘路径未找到. 卷 id: %s" msgid "Could not find the datastore reference(s) which the VM uses." msgstr "无法找到虚拟机使用的数据存储引用。" #, python-format msgid "Could not load line %(line)s, got error %(error)s" msgstr "不能加载行%(line)s,得到的错误是%(error)s" #, python-format msgid "Could not load paste app '%(name)s' from %(path)s" msgstr "无法从路径 %(path)s 中加载应用 '%(name)s'" #, python-format msgid "" "Could not mount vfat config drive. %(operation)s failed. Error: %(error)s" msgstr "未能安装 vfat 配置驱动器。%(operation)s 失败。错误:%(error)s" #, python-format msgid "Could not upload image %(image_id)s" msgstr "未能上载映像 %(image_id)s" #, python-format msgid "Couldn't unmount the Quobyte Volume at %s" msgstr "不能卸载在%s的Quobyte卷" msgid "Creation of virtual interface with unique mac address failed" msgstr "为使用特殊MAC地址的vm的创建失败" msgid "Database Connection" msgstr "数据库连接" #, python-format msgid "Datastore regex %s did not match any datastores" msgstr "存储正则表达式%s没有匹配存储" msgid "Datetime is in invalid format" msgstr "时间格式无效" msgid "Default PBM policy is required if PBM is enabled." msgstr "如果PBM启用,缺省的PBM策略是必须的。" #, python-format msgid "Deleted %(records)d records from table '%(table_name)s'." msgstr "从表格 '%(table_name)s'删除%(records)d记录。" #, python-format msgid "Device '%(device)s' not found." msgstr "找不到设备“%(device)s”。" #, python-format msgid "Device detach failed for %(device)s: %(reason)s" msgstr "对 %(device)s 执行设备拆离失败:%(reason)s)" #, python-format msgid "" "Device id %(id)s specified is not supported by hypervisor version %(version)s" msgstr "在监测器版本%(version)s,不支持指定的设备%(id)s" msgid "Device name contains spaces." msgstr "Device名称中包含了空格" msgid "Device name empty or too long." msgstr "设备名称为空或者太长" #, python-format msgid "Device type mismatch for alias '%s'" msgstr "别名“%s”的设备类型不匹配" #, python-format msgid "" "Different types in %(table)s.%(column)s and shadow table: %(c_type)s " "%(shadow_c_type)s" msgstr "" "在%(table)s.%(column)s和影子表 : %(c_type)s %(shadow_c_type)s有不同的类型" #, python-format msgid "Disk contains a filesystem we are unable to resize: %s" msgstr "磁盘包含一个我们无法调整的文件系统:%s" #, python-format msgid "Disk format %(disk_format)s is not acceptable" msgstr "磁盘格式 %(disk_format)s 不能接受" #, python-format msgid "Disk info file is invalid: %(reason)s" msgstr "磁盘信息文件无效:%(reason)s" msgid "Disk must have only one partition." msgstr "磁盘必须只能有一个分区。" #, python-format msgid "Disk with id: %s not found attached to instance." msgstr " id: %s 磁盘没有绑定到实例。" #, python-format msgid "Driver Error: %s" msgstr "驱动错误:%s" #, python-format msgid "Error attempting to run %(method)s" msgstr "尝试运行 %(method)s 时出错" #, python-format msgid "" "Error destroying the instance on node %(node)s. Provision state still " "'%(state)s'." 
msgstr "在节点%(node)s销毁实例出错。准备状态仍然是'%(state)s'。" #, python-format msgid "Error during following call to agent: %(method)s" msgstr "调用代理的%(method)s 方法出错" #, python-format msgid "Error during unshelve instance %(instance_id)s: %(reason)s" msgstr "取消搁置实例 %(instance_id)s 期间出错:%(reason)s" #, python-format msgid "" "Error from libvirt while getting domain info for %(instance_name)s: [Error " "Code %(error_code)s] %(ex)s" msgstr "" "获取%(instance_name)s域信息时libvirt报错:[错误号 %(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while looking up %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "查找 %(instance_name)s时libvirt出错:[错误代码 %(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while quiescing %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "当停顿%(instance_name)s时,libvirt报错: [错误代号 %(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while set password for username \"%(user)s\": [Error Code " "%(error_code)s] %(ex)s" msgstr "" "当为用户\"%(user)s\"设置密码时,libvirt报错:[错误号 %(error_code)s] %(ex)s" #, python-format msgid "" "Error mounting %(device)s to %(dir)s in image %(image)s with libguestfs " "(%(e)s)" msgstr "使用libguestfs在镜像%(image)s中挂载%(device)s 到 %(dir)s出错 (%(e)s)" #, python-format msgid "Error mounting %(image)s with libguestfs (%(e)s)" msgstr "使用 libguestfs挂载%(image)s出错 (%(e)s)" #, python-format msgid "Error when creating resource monitor: %(monitor)s" msgstr "创建资源监控时出错:%(monitor)s" #, python-format msgid "" "Error:\n" "%s" msgstr "" "错误:\n" "%s" msgid "Error: Agent is disabled" msgstr "错误:代理已禁用" #, python-format msgid "Event %(event)s not found for action id %(action_id)s" msgstr "对于操作标识 %(action_id)s,找不到事件 %(event)s" msgid "Event must be an instance of nova.virt.event.Event" msgstr "事件必须是 nova.virt.event.Event 的实例" #, python-format msgid "" "Exceeded max scheduling attempts %(max_attempts)d for instance " "%(instance_uuid)s. Last exception: %(exc_reason)s" msgstr "" "已超过实例 %(instance_uuid)s 的最大调度尝试次数 %(max_attempts)d。最近一次异" "常:%(exc_reason)s" #, python-format msgid "" "Exceeded max scheduling retries %(max_retries)d for instance " "%(instance_uuid)s during live migration" msgstr "在热迁移期间,对于实例%(instance_uuid)s超过最大调度次数%(max_retries)d" #, python-format msgid "Exceeded maximum number of retries. %(reason)s" msgstr "超过最大尝试次数。%(reason)s" #, python-format msgid "Expected a uuid but received %(uuid)s." msgstr "期望 uuid,但是接收到 %(uuid)s。" #, python-format msgid "Extra column %(table)s.%(column)s in shadow table" msgstr "在影子表中有额外列%(table)s.%(column)s" msgid "Extracting vmdk from OVA failed." msgstr "从OVA提前vmdk失败。" #, python-format msgid "Failed to access port %(port_id)s: %(reason)s" msgstr "访问端口%(port_id)s失败:%(reason)s" #, python-format msgid "" "Failed to add deploy parameters on node %(node)s when provisioning the " "instance %(instance)s" msgstr "在准备实例%(instance)s时,在主机%(node)s上停机部署参数失败" #, python-format msgid "Failed to allocate the network(s) with error %s, not rescheduling." msgstr "分配网络失败,错误是%s,不重新调度。" msgid "Failed to allocate the network(s), not rescheduling." 
msgstr "分配网络失败,不重新调度。" #, python-format msgid "Failed to attach network adapter device to %(instance_uuid)s" msgstr "连接网络适配器设备到%(instance_uuid)s失败" #, python-format msgid "Failed to attach volume at mountpoint: %s" msgstr "在挂载点%s绑定卷失败" #, python-format msgid "Failed to connect to libvirt: %(msg)s" msgstr "连接到 libvirt 失败: %(msg)s" #, python-format msgid "Failed to create vif %s" msgstr "创建 vif %s 失败" msgid "Failed to delete bridge" msgstr "删除 bridge 失败" #, python-format msgid "Failed to deploy instance: %(reason)s" msgstr "未能部署实例: %(reason)s" #, python-format msgid "Failed to destroy instance: %s" msgstr "未能销毁云主机:%s" #, python-format msgid "Failed to detach PCI device %(dev)s: %(reason)s" msgstr "断开PCI设备%(dev)s失败:%(reason)s" #, python-format msgid "Failed to detach network adapter device from %(instance_uuid)s" msgstr "从实例%(instance_uuid)s 断开网络适配器设备失败" #, python-format msgid "Failed to encrypt text: %(reason)s" msgstr "未能对文本进行加密:%(reason)s" #, python-format msgid "Failed to launch instances: %(reason)s" msgstr "无法启动云主机:%(reason)s" #, python-format msgid "Failed to map partitions: %s" msgstr "映射分区失败:%s" #, python-format msgid "Failed to mount filesystem: %s" msgstr "挂载文件系统失败:%s" msgid "Failed to parse information about a pci device for passthrough" msgstr "为了直通,解析pci设备的信息失败" #, python-format msgid "Failed to power off instance: %(reason)s" msgstr "云主机无法关机:%(reason)s" #, python-format msgid "Failed to power on instance: %(reason)s" msgstr "云主机无法开机:%(reason)s" #, python-format msgid "" "Failed to prepare PCI device %(id)s for instance %(instance_uuid)s: " "%(reason)s" msgstr "为实例%(instance_uuid)s准备PCI设备%(id)s失败:%(reason)s" #, python-format msgid "Failed to provision instance %(inst)s: %(reason)s" msgstr "准备实例%(inst)s失败:%(reason)s" #, python-format msgid "Failed to read or write disk info file: %(reason)s" msgstr "读写磁盘信息文件错误:%(reason)s" #, python-format msgid "Failed to reboot instance: %(reason)s" msgstr "云主机无法重启:%(reason)s" #, python-format msgid "Failed to remove snapshot for VM %s" msgstr "移除 VM %s 快照失败" #, python-format msgid "Failed to remove volume(s): (%(reason)s)" msgstr "移除卷失败:(%(reason)s)" #, python-format msgid "Failed to request Ironic to rebuild instance %(inst)s: %(reason)s" msgstr "请求Ironic 重新构建实例%(inst)s失败:%(reason)s" #, python-format msgid "Failed to resume instance: %(reason)s" msgstr "无法恢复云主机:%(reason)s" #, python-format msgid "Failed to run qemu-img info on %(path)s : %(error)s" msgstr "在%(path)s运行 qemu-img info 失败:%(error)s" #, python-format msgid "Failed to set admin password on %(instance)s because %(reason)s" msgstr "未能对 %(instance)s 设置管理员密码,原因如下:%(reason)s" msgid "Failed to spawn, rolling back" msgstr "未能衍生,正在回滚" #, python-format msgid "Failed to suspend instance: %(reason)s" msgstr "无法挂起云主机:%(reason)s" msgid "Failed to teardown container filesystem" msgstr "未能卸载容器文件系统" #, python-format msgid "Failed to terminate instance: %(reason)s" msgstr "无法终止云主机:%(reason)s" msgid "Failed to umount container filesystem" msgstr "未能取消挂载容器文件系统" #, python-format msgid "Failed to unplug vif %s" msgstr "拔除 vif %s 失败" msgid "Failed while plugging vif" msgstr "插入vif时失败" msgid "Failed while unplugging vif" msgstr "拔出 vif 时失败" msgid "Failure prepping block device." msgstr "准备块设备失败。" #, python-format msgid "File %(file_path)s could not be found." msgstr "找不到文件 %(file_path)s。" #, python-format msgid "File path %s not valid" msgstr "文件路径 %s 无效" #, python-format msgid "Fixed IP %(ip)s is not a valid ip address for network %(network_id)s." 
msgstr "对于网络%(network_id)s,固定IP %(ip)s 不是一个有效的ip地址。" #, python-format msgid "Fixed IP %s is already in use." msgstr "固定IP%s已在使用。" #, python-format msgid "" "Fixed IP address %(address)s is already in use on instance %(instance_uuid)s." msgstr "在实例 %(instance_uuid)s 上,固定 IP 地址 %(address)s 已在使用中。" #, python-format msgid "Fixed IP not found for address %(address)s." msgstr "找不到对应地址 %(address)s 的固定 IP。" #, python-format msgid "Flavor %(flavor_id)s could not be found." msgstr "云主机类型 %(flavor_id)s 没有找到。" #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(extra_specs_key)s." msgstr "云主机类型 %(flavor_id)s 中没有名为 %(extra_specs_key)s 的附加规格。" #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(key)s." msgstr "云主机类型%(flavor_id)s中没有名为%(key)s的附加规格。" #, python-format msgid "" "Flavor %(id)s extra spec cannot be updated or created after %(retries)d " "retries." msgstr " %(retries)d 次重试后,无法更新或创建类型 %(id)s 的特别设定。" #, python-format msgid "" "Flavor access already exists for flavor %(flavor_id)s and project " "%(project_id)s combination." msgstr "项目 %(project_id)s已经拥有对于云主机类型 %(flavor_id)s的访问权限。" #, python-format msgid "Flavor access not found for %(flavor_id)s / %(project_id)s combination." msgstr "项目%(project_id)s中未发现云主机类型%(flavor_id)s。" msgid "Flavor used by the instance could not be found." msgstr "找不到实例使用的云主机类型。" #, python-format msgid "Flavor with ID %(flavor_id)s already exists." msgstr "ID为%(flavor_id)s的云主机类型已经存在。" #, python-format msgid "Flavor with name %(flavor_name)s could not be found." msgstr "没有找到名为%(flavor_name)s的云主机类型。" #, python-format msgid "Flavor with name %(name)s already exists." msgstr "名为%(name)s的云主机类型已经存在。" #, python-format msgid "" "Flavor's disk is smaller than the minimum size specified in image metadata. " "Flavor disk is %(flavor_size)i bytes, minimum size is %(image_min_disk)i " "bytes." msgstr "" "云主机类型的磁盘比在镜像元数据中指定的最小值还小。云主机类型磁盘大小" "是%(flavor_size)i 字节,最小值大小是%(image_min_disk)i字节。" #, python-format msgid "" "Flavor's disk is too small for requested image. Flavor disk is " "%(flavor_size)i bytes, image is %(image_size)i bytes." msgstr "" "对于请求的镜像,云主机类型的磁盘太小。云主机类型磁盘大小是%(flavor_size)i字" "节,镜像大小是%(image_size)i字节。" msgid "Flavor's memory is too small for requested image." msgstr "相对于申请的镜像,该云主机类型的内存太小。" #, python-format msgid "Floating IP %(address)s association has failed." msgstr "浮动IP %(address)s绑定失败。" #, python-format msgid "Floating IP %(address)s is associated." msgstr "浮动 IP %(address)s 已关联。" #, python-format msgid "Floating IP %(address)s is not associated with instance %(id)s." msgstr "浮动 IP %(address)s 未与实例 %(id)s 关联。" #, python-format msgid "Floating IP not found for ID %(id)s." msgstr "找不到对应标识 %(id)s 的浮动 IP。" #, python-format msgid "Floating IP not found for ID %s" msgstr "找不到对应标识 %s 的浮动 IP" #, python-format msgid "Floating IP not found for address %(address)s." msgstr "找不到对应地址 %(address)s 的浮动 IP。" msgid "Floating IP pool not found." msgstr "找不到浮动 IP 池。" msgid "Forbidden" msgstr "禁止" msgid "" "Forbidden to exceed flavor value of number of serial ports passed in image " "meta." msgstr "传入镜像元数据的串口数不能超过云主机类型中的设定。" msgid "Found no disk to snapshot." 
msgstr "发现没有盘来做快照。" #, python-format msgid "Found no network for bridge %s" msgstr "发现网桥 %s 没有网络" #, python-format msgid "Found non-unique network for bridge %s" msgstr "发现桥 %s 的网络不唯一" #, python-format msgid "Found non-unique network for name_label %s" msgstr "发现不唯一的网络 name_label %s" msgid "Guest agent is not enabled for the instance" msgstr "该云主机未启用 Guest agent" msgid "Guest does not have a console available" msgstr "Guest 没有可用控制台" msgid "Guest does not have a console available." msgstr "访客没有可用控制台。" #, python-format msgid "Host %(host)s could not be found." msgstr "主机 %(host)s 没有找到。" #, python-format msgid "Host %(host)s is already mapped to cell %(uuid)s" msgstr "主机 %(host)s 已映射至单元 %(uuid)s" #, python-format msgid "Host '%(name)s' is not mapped to any cell" msgstr "主机 '%(name)s'没有映射到任何单元" msgid "Host PowerOn is not supported by the Hyper-V driver" msgstr "Hyper-V驱动不支持主机开机" msgid "Host aggregate is not empty" msgstr "主机聚合不能为空" msgid "Host does not support guests with NUMA topology set" msgstr "主机不支持具有 NUMA 拓扑集的客户机" msgid "Host does not support guests with custom memory page sizes" msgstr "主机不支持定制内存页大小的客户机" msgid "Host startup on XenServer is not supported." msgstr "不支持在XenServer启动主机" msgid "Hypervisor driver does not support post_live_migration_at_source method" msgstr "监测器驱动不支持post_live_migration_at_source方法" #, python-format msgid "Hypervisor virt type '%s' is not valid" msgstr "监测器虚拟化类型'%s'无效" #, python-format msgid "Hypervisor virtualization type '%(hv_type)s' is not recognised" msgstr "监测器虚拟化类型“%(hv_type)s”无法识别" #, python-format msgid "Hypervisor with ID '%s' could not be found." msgstr "找不到具有标识“%s”的管理程序。" #, python-format msgid "IP allocation over quota in pool %s." msgstr "IP分配操作池%s的配额。" msgid "IP allocation over quota." msgstr "IP分配超过配额。" #, python-format msgid "Image %(image_id)s could not be found." msgstr "镜像 %(image_id)s 没有找到。" #, python-format msgid "Image %(image_id)s is not active." msgstr "映像 %(image_id)s 处于不活动状态。" #, python-format msgid "Image %(image_id)s is unacceptable: %(reason)s" msgstr "镜像 %(image_id)s 无法接受,原因是: %(reason)s" msgid "Image disk size greater than requested disk size" msgstr "镜像磁盘大小大于请求磁盘大小" msgid "Image is not raw format" msgstr "镜像不是裸格式镜像" msgid "Image metadata limit exceeded" msgstr "超过镜像元数据限制" #, python-format msgid "Image model '%(image)s' is not supported" msgstr "镜像模式 '%(image)s' 不支持" msgid "Image not found." msgstr "镜像没有找到。" #, python-format msgid "" "Image property '%(name)s' is not permitted to override NUMA configuration " "set against the flavor" msgstr "" "镜像属性'%(name)s'无法覆盖云主机类型设定的NUMA(非一致性内存访问)配置。" msgid "" "Image property 'hw_cpu_policy' is not permitted to override CPU pinning " "policy set against the flavor" msgstr "对照实例类型,不允许镜像属性“hw_cpu_policy”覆盖 CPU 锁定策略集" msgid "" "Image property 'hw_cpu_thread_policy' is not permitted to override CPU " "thread pinning policy set against the flavor" msgstr "" "不允许使用镜像属性“hw_cpu_thread_policy”覆盖针对该类型设置的 CPU 线程锁定策略" msgid "Image that the instance was started with could not be found." 
msgstr "实例启动的镜像没有找到。" #, python-format msgid "Image's config drive option '%(config_drive)s' is invalid" msgstr "镜像的配置驱动器选项“%(config_drive)s”无效" msgid "" "Images with destination_type 'volume' need to have a non-zero size specified" msgstr "带目的类型'volume'的镜像需要指定非0的大小。" msgid "In ERROR state" msgstr "处于“错误”状态" #, python-format msgid "In states %(vm_state)s/%(task_state)s, not RESIZED/None" msgstr "处于状态 %(vm_state)s/%(task_state)s,而不是处于状态“已调整大小”/“无”" #, python-format msgid "In-progress live migration %(id)s is not found for server %(uuid)s." msgstr "找不到对应服务器 %(uuid)s 的进行中的实时迁移 %(id)s。" msgid "" "Incompatible settings: ephemeral storage encryption is supported only for " "LVM images." msgstr "不兼容的设置:只有LVM镜像支持瞬时存储加密。" #, python-format msgid "Info cache for instance %(instance_uuid)s could not be found." msgstr "找不到用于实例 %(instance_uuid)s 的信息高速缓存。" #, python-format msgid "" "Instance %(instance)s and volume %(vol)s are not in the same " "availability_zone. Instance is in %(ins_zone)s. Volume is in %(vol_zone)s" msgstr "" "实例%(instance)s和卷%(vol)s没有在同一个可用域availability_zone。实例" "在%(ins_zone)s。卷在%(vol_zone)s" #, python-format msgid "Instance %(instance)s does not have a port with id %(port)s" msgstr "实例%(instance)s没有唯一标识为 %(port)s的端口" #, python-format msgid "Instance %(instance_id)s cannot be rescued: %(reason)s" msgstr "无法急救实例 %(instance_id)s:%(reason)s" #, python-format msgid "Instance %(instance_id)s could not be found." msgstr "实例 %(instance_id)s 没有找到。" #, python-format msgid "Instance %(instance_id)s has no tag '%(tag)s'" msgstr "实例%(instance_id)s没有标签'%(tag)s'" #, python-format msgid "Instance %(instance_id)s is not in rescue mode" msgstr "实例 %(instance_id)s 不在救援模式。" #, python-format msgid "Instance %(instance_id)s is not ready" msgstr "实例 %(instance_id)s 未就绪" #, python-format msgid "Instance %(instance_id)s is not running." msgstr "实例 %(instance_id)s 没有运行。" #, python-format msgid "Instance %(instance_id)s is unacceptable: %(reason)s" msgstr "实例 %(instance_id)s 无法接受,原因是: %(reason)s" #, python-format msgid "Instance %(instance_uuid)s does not specify a NUMA topology" msgstr "实例 %(instance_uuid)s 未指定 NUMA 拓扑" #, python-format msgid "Instance %(instance_uuid)s does not specify a migration context." msgstr "实例 %(instance_uuid)s未指定迁移上下文" #, python-format msgid "" "Instance %(instance_uuid)s in %(attr)s %(state)s. Cannot %(method)s while " "the instance is in this state." msgstr "" "实例 %(instance_uuid)s 处于%(attr)s %(state)s 中。该实例在这种状态下不能执行 " "%(method)s。" #, python-format msgid "Instance %(instance_uuid)s is locked" msgstr "实例 %(instance_uuid)s 已锁定" #, python-format msgid "" "Instance %(instance_uuid)s requires config drive, but it does not exist." msgstr "实例 %(instance_uuid)s 需要配置驱动器,但配置驱动器不存在。" #, python-format msgid "Instance %(name)s already exists." msgstr "实例 %(name)s 已经存在。" #, python-format msgid "Instance %(server_id)s is in an invalid state for '%(action)s'" msgstr "对于“%(action)s”,实例 %(server_id)s 处于无效状态" #, python-format msgid "Instance %(uuid)s has no mapping to a cell." msgstr "实例 %(uuid)s 没有映射到一个单元。" #, python-format msgid "Instance %s not found" msgstr "找不到实例 %s" #, python-format msgid "Instance %s provisioning was aborted" msgstr "废弃准备实例%s" msgid "Instance could not be found" msgstr "无法找到实例" msgid "Instance disk to be encrypted but no context provided" msgstr "实例磁盘将被加密,但是没有提供上下文" msgid "Instance event failed" msgstr "实例事件无效" #, python-format msgid "Instance group %(group_uuid)s already exists." 
msgstr "实例组 %(group_uuid)s 已存在。" #, python-format msgid "Instance group %(group_uuid)s could not be found." msgstr "找不到实例组 %(group_uuid)s" msgid "Instance has no source host" msgstr "实例没有任何源主机" msgid "Instance has not been resized." msgstr "实例还没有调整大小。" #, python-format msgid "Instance hostname %(hostname)s is not a valid DNS name" msgstr "实例主机名 %(hostname)s 是无效 DNS 名称" #, python-format msgid "Instance is already in Rescue Mode: %s" msgstr "实例已处于救援模式:%s" msgid "Instance is not a member of specified network" msgstr "实例并不是指定网络的成员" msgid "Instance network is not ready yet" msgstr "云主机网络尚未就绪" #, python-format msgid "Instance rollback performed due to: %s" msgstr "由于%s,实例回滚已被执行" #, python-format msgid "" "Insufficient Space on Volume Group %(vg)s. Only %(free_space)db available, " "but %(size)d bytes required by volume %(lv)s." msgstr "" "卷组 %(vg)s 上的空间不足。只有 %(free_space)db 可用,但卷 %(lv)s 需要 " "%(size)d 字节。" #, python-format msgid "Insufficient compute resources: %(reason)s." msgstr "计算资源不足:%(reason)s。" #, python-format msgid "Insufficient free memory on compute node to start %(uuid)s." msgstr "没有足够的可用内存来启动计算节点 %(uuid)s。" #, python-format msgid "Interface %(interface)s not found." msgstr "接口 %(interface)s没有找到。" #, python-format msgid "Invalid Base 64 data for file %(path)s" msgstr "文件%(path)s的Base 64数据非法" msgid "Invalid Connection Info" msgstr "连接信息无效" #, python-format msgid "Invalid ID received %(id)s." msgstr "接收到的标识 %(id)s 无效。" #, python-format msgid "Invalid IP format %s" msgstr "无效IP格式%s" #, python-format msgid "Invalid IP protocol %(protocol)s." msgstr "无效的IP协议 %(protocol)s。" msgid "" "Invalid PCI Whitelist: The PCI whitelist can specify devname or address, but " "not both" msgstr "无效PCI白名单:PCI白名单可以指定设备名或地址,而不是两个同时指定" #, python-format msgid "Invalid PCI alias definition: %(reason)s" msgstr "无效PCI别名定义:%(reason)s" #, python-format msgid "Invalid Regular Expression %s" msgstr "无效的正则表达式%s" #, python-format msgid "Invalid characters in hostname '%(hostname)s'" msgstr "主机名“%(hostname)s”中的有无效字符" msgid "Invalid config_drive provided." msgstr "提供了无效的config_drive" #, python-format msgid "Invalid config_drive_format \"%s\"" msgstr "config_drive_format“%s”无效" #, python-format msgid "Invalid console type %(console_type)s" msgstr "控制台类型 %(console_type)s 无效" #, python-format msgid "Invalid content type %(content_type)s." msgstr "无效的内容类型 %(content_type)s。" #, python-format msgid "Invalid datetime string: %(reason)s" msgstr "日期字符串无效: %(reason)s" msgid "Invalid device UUID." msgstr "无效的设备UUID" #, python-format msgid "Invalid entry: '%s'" msgstr "无效输入:'%s'" #, python-format msgid "Invalid entry: '%s'; Expecting dict" msgstr "无效输入: '%s';期望词典" #, python-format msgid "Invalid entry: '%s'; Expecting list or dict" msgstr "无效输入:'%s',期望列表或词典" #, python-format msgid "Invalid exclusion expression %r" msgstr "无效排它表达式%r" #, python-format msgid "Invalid image format '%(format)s'" msgstr "无效镜像格式'%(format)s'" #, python-format msgid "Invalid image href %(image_href)s." msgstr "无效的镜像href %(image_href)s。" #, python-format msgid "Invalid image metadata. Error: %s" msgstr "无效镜像元数据。错误:%s" #, python-format msgid "Invalid inclusion expression %r" msgstr "无效包含表达式%r" #, python-format msgid "" "Invalid input for field/attribute %(path)s. Value: %(value)s. %(message)s" msgstr "对于字段/属性 %(path)s,输入无效。值:%(value)s。%(message)s" #, python-format msgid "Invalid input received: %(reason)s" msgstr "输入无效: %(reason)s" msgid "Invalid instance image." 
msgstr "无效实例镜像。" #, python-format msgid "Invalid is_public filter [%s]" msgstr "is_public 过滤器 [%s] 无效" msgid "Invalid key_name provided." msgstr "提供了无效的key_name。" #, python-format msgid "Invalid memory page size '%(pagesize)s'" msgstr "内存页大小“%(pagesize)s”无效" msgid "Invalid metadata key" msgstr "无效的元数据键" #, python-format msgid "Invalid metadata size: %(reason)s" msgstr "元数据大小无效: %(reason)s" #, python-format msgid "Invalid metadata: %(reason)s" msgstr "元数据无效: %(reason)s" #, python-format msgid "Invalid minDisk filter [%s]" msgstr "minDisk 过滤器 [%s] 无效" #, python-format msgid "Invalid minRam filter [%s]" msgstr "minRam 过滤器 [%s] 无效" #, python-format msgid "Invalid port range %(from_port)s:%(to_port)s. %(msg)s" msgstr "无效的端口范围 %(from_port)s:%(to_port)s. %(msg)s" msgid "Invalid proxy request signature." msgstr "代理请求签名无效。" #, python-format msgid "Invalid range expression %r" msgstr "无效范围表达式%r" msgid "Invalid service catalog json." msgstr "服务目录 json 无效。" msgid "Invalid start time. The start time cannot occur after the end time." msgstr "开始时间无效。开始时间不能出现在结束时间之后。" msgid "Invalid state of instance files on shared storage" msgstr "共享存储器上实例文件的状态无效" msgid "Invalid status value" msgstr "状态值无效" #, python-format msgid "Invalid timestamp for date %s" msgstr "对于日期 %s,时间戳记无效" #, python-format msgid "Invalid usage_type: %s" msgstr "usage_type: %s无效" #, python-format msgid "Invalid value for Config Drive option: %(option)s" msgstr "非法的Config Drive值: %(option)s" #, python-format msgid "Invalid virtual interface address %s in request" msgstr "请求中无效虚拟接口地址%s" #, python-format msgid "Invalid volume access mode: %(access_mode)s" msgstr "无效的卷访问模式: %(access_mode)s" #, python-format msgid "Invalid volume: %(reason)s" msgstr "卷无效: %(reason)s" msgid "Invalid volume_size." msgstr "无效volume_size." #, python-format msgid "Ironic node uuid not supplied to driver for instance %s." msgstr "Ironic节点uuid不提供实例 %s的驱动。" #, python-format msgid "" "It is not allowed to create an interface on external network %(network_uuid)s" msgstr "在外部网络%(network_uuid)s创建一个接口是不允许的" #, python-format msgid "" "Kernel/Ramdisk image is too large: %(vdi_size)d bytes, max %(max_size)d bytes" msgstr "内核/内存盘镜像太大:%(vdi_size)d 字节,最大 %(max_size)d 字节" msgid "" "Key Names can only contain alphanumeric characters, periods, dashes, " "underscores, colons and spaces." msgstr "关键字名字只能包含数字字母,句号,破折号,下划线,冒号和空格。" #, python-format msgid "Key manager error: %(reason)s" msgstr "关键管理者错误:%(reason)s" #, python-format msgid "Key pair '%(key_name)s' already exists." msgstr "密钥对 %(key_name)s 已经存在" #, python-format msgid "Keypair %(name)s not found for user %(user_id)s" msgstr "密钥对 %(name)s 没有为用户 %(user_id)s 找到。" #, python-format msgid "Keypair data is invalid: %(reason)s" msgstr "密钥对数据不合法: %(reason)s" msgid "Keypair name contains unsafe characters" msgstr "密钥对名称包含不安全的字符" msgid "Keypair name must be string and between 1 and 255 characters long" msgstr "密钥对必须是字符串,并且长度在1到255个字符" msgid "Limits only supported from vCenter 6.0 and above" msgstr "仅 vCenter 6.0 及以上版本支持限制" #, python-format msgid "Live migration %(id)s for server %(uuid)s is not in progress." 
msgstr "对应服务器 %(uuid)s 的实时迁移 %(id)s 未在进行。" #, python-format msgid "Malformed message body: %(reason)s" msgstr "错误格式的消息体: %(reason)s" #, python-format msgid "" "Malformed request URL: URL's project_id '%(project_id)s' doesn't match " "Context's project_id '%(context_project_id)s'" msgstr "" "请求 URL 的格式不正确:URL 的 project_id“%(project_id)s”与上下文的 " "project_id“%(context_project_id)s”不匹配" msgid "Malformed request body" msgstr "错误格式的请求主体" msgid "Mapping image to local is not supported." msgstr "不支持映射镜像到本地。" #, python-format msgid "Marker %(marker)s could not be found." msgstr "找不到标记符 %(marker)s。" msgid "Maximum number of floating IPs exceeded" msgstr "已超过最大浮动 IP 数" msgid "Maximum number of key pairs exceeded" msgstr "已超过最大密钥对数" #, python-format msgid "Maximum number of metadata items exceeds %(allowed)d" msgstr "最大元数据项数超过 %(allowed)d" msgid "Maximum number of ports exceeded" msgstr "超过端口的最大数" msgid "Maximum number of security groups or rules exceeded" msgstr "已超过最大安全组数或最大规则数" msgid "Metadata item was not found" msgstr "元数据项目未找到" msgid "Metadata property key greater than 255 characters" msgstr "元数据属性关键字超过 255 个字符" msgid "Metadata property value greater than 255 characters" msgstr "元数据属性值超过 255 个字符" msgid "Metadata type should be dict." msgstr "元数据类型必须为词典。" #, python-format msgid "" "Metric %(name)s could not be found on the compute host node %(host)s." "%(node)s." msgstr "在计算主机节点上%(host)s.%(node)s,测量%(name)s没有找到。" msgid "Migrate Receive failed" msgstr "Migrate Receive 失败" msgid "Migrate Send failed" msgstr "Migrate Send 失败" msgid "Migration" msgstr "迁移" #, python-format msgid "Migration %(id)s for server %(uuid)s is not live-migration." msgstr "对应服务器 %(uuid)s 的迁移 %(id)s 并非实时迁移。" #, python-format msgid "Migration %(migration_id)s could not be found." msgstr "迁移 %(migration_id)s 没有找到。" #, python-format msgid "Migration %(migration_id)s not found for instance %(instance_id)s" msgstr "找不到对应实例 %(instance_id)s 的迁移 %(migration_id)s" #, python-format msgid "" "Migration %(migration_id)s state of instance %(instance_uuid)s is %(state)s. " "Cannot %(method)s while the migration is in this state." msgstr "" "实例 %(instance_uuid)s 的迁移 %(migration_id)s 状态为 %(state)s。迁移处于此状" "态时,无法执行 %(method)s。" #, python-format msgid "Migration error: %(reason)s" msgstr "迁移错误:%(reason)s" msgid "Migration is not supported for LVM backed instances" msgstr "对于LVM作为后台的实例,不支持迁移" #, python-format msgid "" "Migration not found for instance %(instance_id)s with status %(status)s." msgstr "没有为实例 %(instance_id)s 找到迁移其状态为 %(status)s 。" #, python-format msgid "Migration pre-check error: %(reason)s" msgstr "迁移预检查错误:%(reason)s" #, python-format msgid "Migration select destinations error: %(reason)s" msgstr "迁移选择目标错误:%(reason)s" #, python-format msgid "Missing arguments: %s" msgstr "缺少参数: %s" #, python-format msgid "Missing column %(table)s.%(column)s in shadow table" msgstr "在影子表格中缺少列%(table)s.%(column)s" msgid "Missing device UUID." msgstr "缺少设备UUID" msgid "Missing disabled reason field" msgstr "缺少禁用的原因字段" msgid "Missing forced_down field" msgstr "缺少forced_down字段" msgid "Missing imageRef attribute" msgstr "缺少属性imageRef" #, python-format msgid "Missing keys: %s" msgstr "缺少键:%s" #, python-format msgid "Missing parameter %s" msgstr "缺少参数%s" msgid "Missing parameter dict" msgstr "缺少参数 dict" #, python-format msgid "Missing vlan number in %s" msgstr " %s 中缺少 vlan id" #, python-format msgid "" "More than one instance is associated with fixed IP address '%(address)s'." 
msgstr "多个关联实例与固定 IP 地址“%(address)s”相关联。" msgid "" "More than one possible network found. Specify network ID(s) to select which " "one(s) to connect to." msgstr "发现不止一个可能的网络。指定网络ID来选择连接到那个网络。" msgid "More than one swap drive requested." msgstr "请求了多于一个交换驱动" #, python-format msgid "Multi-boot operating system found in %s" msgstr "在 %s 中找到多重引导操作系统" msgid "Multiple X-Instance-ID headers found within request." msgstr "在请求内找到多个 X-Instance-ID 头。" msgid "Multiple X-Tenant-ID headers found within request." msgstr "在请求内找到多个 X-Tenant-ID 。" #, python-format msgid "Multiple floating IP pools matches found for name '%s'" msgstr "对于名称“%s”,找到多个浮动 IP 池匹配项" #, python-format msgid "Multiple floating IPs are found for address %(address)s." msgstr "发现对应地址 %(address)s 的多个浮动 IP。" msgid "" "Multiple hosts may be managed by the VMWare vCenter driver; therefore we do " "not return uptime for just one host." msgstr "" " VMWare vCenter驱动可能管理者多个主机;然而,我们因为只有一台主机,没有及时反" "馈。" msgid "Multiple possible networks found, use a Network ID to be more specific." msgstr "发现多个可能的网络,用Network ID会更加明确。" #, python-format msgid "" "Multiple security groups found matching '%s'. Use an ID to be more specific." msgstr "找到多个与“%s”匹配的安全组。请使用标识以更具体地进行查找。" msgid "Must input network_id when request IP address" msgstr "请求IP地址,必须输入network_id" msgid "Must not input both network_id and port_id" msgstr "network_id和port_id必须同时输入" msgid "" "Must specify connection_url, connection_username (optionally), and " "connection_password to use compute_driver=xenapi.XenAPIDriver" msgstr "" "为了使用compute_driver=xenapi.XenAPIDriver,必须指定connection_url, " "connection_username (可选), 和 connection_password" msgid "" "Must specify host_ip, host_username and host_password to use vmwareapi." "VMwareVCDriver" msgstr "" "使用vmwareapi.VMwareVCDriver必须指定host_ip, host_username 和 host_password" msgid "Must supply a positive value for max_number" msgstr "必须对 max_number 提供正值" msgid "Must supply a positive value for max_rows" msgstr "必须为最大行max_rows提供一个正数" msgid "Name" msgstr "名称" #, python-format msgid "Network %(network_id)s could not be found." msgstr "网络 %(network_id)s 没有找到。" #, python-format msgid "" "Network %(network_uuid)s requires a subnet in order to boot instances on." msgstr "网络%(network_uuid)s需要一个子网,以便启动实例。" #, python-format msgid "Network could not be found for bridge %(bridge)s" msgstr "无法为桥 %(bridge)s 找到网络" #, python-format msgid "Network could not be found for instance %(instance_id)s." msgstr "无法为实例 %(instance_id)s 找到网络。" msgid "Network not found" msgstr "没有找到网络" msgid "" "Network requires port_security_enabled and subnet associated in order to " "apply security groups." msgstr "网络需要关联的 port_security_enabled 和子网,以便应用安全组。" msgid "New volume must be detached in order to swap." msgstr "为了进行交换,新卷必须断开。" msgid "New volume must be the same size or larger." msgstr "新卷必须大小相同或者更大。" #, python-format msgid "No Block Device Mapping with id %(id)s." msgstr "没有块设备与ID %(id)s进行映射。" msgid "No Unique Match Found." msgstr "找不到任何唯一匹配项。" #, python-format msgid "No agent-build associated with id %(id)s." 
msgstr "不存在任何与标识 %(id)s 关联的代理构建。" msgid "No compute host specified" msgstr "未指定计算宿主机" #, python-format msgid "No configuration information found for operating system %(os_name)s" msgstr "找不到对应操作系统 %(os_name)s 的配置信息" #, python-format msgid "No device with MAC address %s exists on the VM" msgstr "在VM上面没有MAC 地址 %s 的设备" #, python-format msgid "No device with interface-id %s exists on VM" msgstr "在VM上面没有 interface-id %s 的设备" #, python-format msgid "No disk at %(location)s" msgstr "在 %(location)s 没有磁盘" #, python-format msgid "No fixed IP addresses available for network: %(net)s" msgstr "没有可用的固定IP给网络:%(net)s" msgid "No fixed IPs associated to instance" msgstr "没有固定 IP 与实例相关联" msgid "No free nbd devices" msgstr "没有空闲NBD设备" msgid "No host available on cluster" msgstr "没有可用的主机在集群中" msgid "No hosts found to map to cell, exiting." msgstr "找不到要映射至单元的主机,正在退出。" #, python-format msgid "No hypervisor matching '%s' could be found." msgstr "找不到任何与“%s”匹配的管理程序。" msgid "No image locations are accessible" msgstr "没有镜像位置可以访问" #, python-format msgid "" "No live migration URI configured and no default available for \"%(virt_type)s" "\" hypervisor virtualization type." msgstr "" "未配置任何实时迁移 URI,并且未提供对应 “%(virt_type)s” hypervisor 虚拟化类型" "的缺省值。" msgid "No more floating IPs available." msgstr "没有其他可用浮动 IP。" #, python-format msgid "No more floating IPs in pool %s." msgstr "池 %s 中没有其他浮动 IP。" #, python-format msgid "No mount points found in %(root)s of %(image)s" msgstr "在%(image)s的%(root)s没有找到挂载点" #, python-format msgid "No operating system found in %s" msgstr "在 %s 中找不到任何操作系统" #, python-format msgid "No primary VDI found for %s" msgstr "对于%s,没有找到主VDI" msgid "No root disk defined." msgstr "没有定义根磁盘。" #, python-format msgid "" "No specific network was requested and none are available for project " "'%(project_id)s'." msgstr "未请求特定网络,项目“%(project_id)s”没有可用网络。" msgid "No suitable network for migrate" msgstr "对于迁移,没有合适的网络" msgid "No valid host found for cold migrate" msgstr "冷迁移过程中发现无效主机" msgid "No valid host found for resize" msgstr "重新配置过程中没有发现有效主机" #, python-format msgid "No valid host was found. %(reason)s" msgstr "找不到有效主机,原因是 %(reason)s。" #, python-format msgid "No volume Block Device Mapping at path: %(path)s" msgstr "没有卷块设备映射到以下路径:%(path)s" #, python-format msgid "No volume Block Device Mapping with id %(volume_id)s." msgstr "没有卷块设备与ID %(volume_id)s进行映射。" #, python-format msgid "Node %s could not be found." msgstr "找不到节点%s。" #, python-format msgid "Not able to acquire a free port for %(host)s" msgstr "无法为 %(host)s 获取可用端口" #, python-format msgid "Not able to bind %(host)s:%(port)d, %(error)s" msgstr "无法绑定 %(host)s:%(port)d,%(error)s" #, python-format msgid "" "Not all Virtual Functions of PF %(compute_node_id)s:%(address)s are free." msgstr "并非 PF %(compute_node_id)s:%(address)s 的所有虚拟功能都可用。" msgid "Not an rbd snapshot" msgstr "不是 rbd 快照" #, python-format msgid "Not authorized for image %(image_id)s." msgstr "未针对映像 %(image_id)s 授权。" msgid "Not authorized." msgstr "未授权。" msgid "Not enough parameters to build a valid rule." msgstr "参数不够创建有效规则。" msgid "Not implemented on Windows" msgstr "未在 Windows 上实现" msgid "Not stored in rbd" msgstr "未存储在 rbd 中" msgid "Nothing was archived." msgstr "未归档任何内容。" #, python-format msgid "Nova does not support Cinder API version %(version)s" msgstr "Nova 不支持的 Cinder API 版本 %(version)s" #, python-format msgid "Nova requires QEMU version %s or greater." msgstr "Nova 需要 %s 或更高版本的 QEMU." #, python-format msgid "Nova requires Virtuozzo version %s or greater." msgstr "Nova 需要 %s 或更高版本的 Virtuozzo." 
#, python-format msgid "Nova requires libvirt version %s or greater." msgstr "Nova请求的 libvirt 版本%s 或更高。" msgid "Number of Rows Archived" msgstr "已归档行数" #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "由于 %(reason)s,对象操作 %(action)s 失败" msgid "Old volume is attached to a different instance." msgstr "旧卷绑定到一个不同的实例。" #, python-format msgid "One or more hosts already in availability zone(s) %s" msgstr "在可用区域%s中,已经有一个或多个主机" #, python-format msgid "Only administrators can sort servers by %s" msgstr "只有管理员可以基于 %s 对服务器进行排序" msgid "Only administrators may list deleted instances" msgstr "仅管理员可列示已删除的实例" #, python-format msgid "" "Only file-based SRs (ext/NFS) are supported by this feature. SR %(uuid)s is " "of type %(type)s" msgstr "只有基于文件的SRs(ext/NFS)支持这个特性。SR %(uuid)s 是类型 %(type)s" msgid "Origin header does not match this host." msgstr "源头与这台主机不匹配。" msgid "Origin header not valid." msgstr "源头无效。" msgid "Origin header protocol does not match this host." msgstr "源头协议与这台不匹配主机。" #, python-format msgid "PCI Device %(node_id)s:%(address)s not found." msgstr "未找到PCI设备%(node_id)s:%(address)s。" #, python-format msgid "PCI alias %(alias)s is not defined" msgstr "PCI别名%(alias)s没有定义" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is %(status)s instead of " "%(hopestatus)s" msgstr "" "PCI设备%(compute_node_id)s:%(address)s是%(status)s,而不是%(hopestatus)s" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is owned by %(owner)s instead of " "%(hopeowner)s" msgstr "" "PCI设备%(compute_node_id)s:%(address)s归%(owner)s所有,而不是%(hopeowner)s" #, python-format msgid "PCI device %(id)s not found" msgstr "无法找到PCI设备 %(id)s" #, python-format msgid "PCI device request %(requests)s failed" msgstr "PCI 设备请求 %(requests)s 失败" #, python-format msgid "PIF %s does not contain IP address" msgstr "PIF %s 没有包含IP地址" #, python-format msgid "Page size %(pagesize)s forbidden against '%(against)s'" msgstr "已对照“%(against)s”禁用页大小 %(pagesize)s" #, python-format msgid "Page size %(pagesize)s is not supported by the host." msgstr "主机不支持页大小 %(pagesize)s。" #, python-format msgid "" "Parameters %(missing_params)s not present in vif_details for vif %(vif_id)s. " "Check your Neutron configuration to validate that the macvtap parameters are " "correct." msgstr "" "在vif %(vif_id)s的vif_details详情中不存在参数%(missing_params)s。检查您的网络" "配置来验证macvtap参数是正确的。" #, python-format msgid "Path %s must be LVM logical volume" msgstr "路径 %s 必须为 LVM 逻辑卷" msgid "Paused" msgstr "已暂停" msgid "Personality file limit exceeded" msgstr "超过个性化文件限制" #, python-format msgid "" "Physical Function %(compute_node_id)s:%(address)s, related to VF " "%(compute_node_id)s:%(vf_address)s is %(status)s instead of %(hopestatus)s" msgstr "" "与 VF %(compute_node_id)s:%(vf_address)s 相关的物理功能 %(compute_node_id)s:" "%(address)s 为 %(status)s 而不是 %(hopestatus)s" #, python-format msgid "Physical network is missing for network %(network_uuid)s" msgstr "网络%(network_uuid)s的物理网络丢失" #, python-format msgid "Policy doesn't allow %(action)s to be performed." msgstr "政策不允许 %(action)s 被执行。" #, python-format msgid "Port %(port_id)s is still in use." msgstr "端口 %(port_id)s 仍在使用中。" #, python-format msgid "Port %(port_id)s not usable for instance %(instance)s." msgstr "对于实例 %(instance)s,端口 %(port_id)s 不可用。" #, python-format msgid "" "Port %(port_id)s not usable for instance %(instance)s. 
Value %(value)s " "assigned to dns_name attribute does not match instance's hostname " "%(hostname)s" msgstr "" "端口 %(port_id)s 对实例 %(instance)s 不可用。分配给 dns_name 属性的值 " "%(value)s 与实例的主机名 %(hostname)s 不匹配" #, python-format msgid "Port %(port_id)s requires a FixedIP in order to be used." msgstr "端口%(port_id)s需要有一个固定IP才可以使用。" #, python-format msgid "Port %s is not attached" msgstr "端口%s没有连接" #, python-format msgid "Port id %(port_id)s could not be found." msgstr "找不到端口标识 %(port_id)s。" #, python-format msgid "Port update failed for port %(port_id)s: %(reason)s" msgstr "端口 %(port_id)s 更新失败: %(reason)s" #, python-format msgid "Provided video model (%(model)s) is not supported." msgstr "不支持提供视频模型(%(model)s)。" #, python-format msgid "Provided watchdog action (%(action)s) is not supported." msgstr "不支持提供的看守程序操作(%(action)s)。" msgid "QEMU guest agent is not enabled" msgstr "QEMU客户端代理未启动" #, python-format msgid "Quiescing is not supported in instance %(instance_id)s" msgstr "实例%(instance_id)s不支持停顿" #, python-format msgid "Quota class %(class_name)s could not be found." msgstr "找不到配额类 %(class_name)s。" msgid "Quota could not be found" msgstr "配额没有找到。" #, python-format msgid "" "Quota exceeded for %(overs)s: Requested %(req)s, but already used %(used)s " "of %(allowed)s %(overs)s" msgstr "" "对于%(overs)s配额已超过:请求%(req)s,但是已经使用 %(allowed)s %(overs)s的 " "%(used)s" #, python-format msgid "Quota exceeded for resources: %(overs)s" msgstr "对于资源,已超过配额:%(overs)s" msgid "Quota exceeded, too many key pairs." msgstr "已超过配额,密钥对太多。" msgid "Quota exceeded, too many server groups." msgstr "已超过配额,服务器组太多。" msgid "Quota exceeded, too many servers in group" msgstr "已超过配额,组中太多的服务器" #, python-format msgid "Quota exceeded: code=%(code)s" msgstr "配额用尽:code=%(code)s" #, python-format msgid "Quota exists for project %(project_id)s, resource %(resource)s" msgstr "项目 %(project_id)s 资源 %(resource)s 的配额存在。" #, python-format msgid "Quota for project %(project_id)s could not be found." msgstr "没有为项目 %(project_id)s 找到配额。" #, python-format msgid "" "Quota for user %(user_id)s in project %(project_id)s could not be found." msgstr "无法在项目 %(project_id)s 中为用户 %(user_id)s 找到配额。" #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be greater than or equal to " "already used and reserved %(minimum)s." msgstr "" "资源%(resource)s的配额限制%(limit)s 必须大于或等于已用和保留值%(minimum)s。" #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be less than or equal to " "%(maximum)s." msgstr "资源%(resource)s的配额限制%(limit)s 必须少于或等于%(maximum)s。" #, python-format msgid "Reached maximum number of retries trying to unplug VBD %s" msgstr "已达到尝试拔出 VBD %s 的最大重试次数" msgid "" "Realtime policy needs vCPU(s) mask configured with at least 1 RT vCPU and 1 " "ordinary vCPU. See hw:cpu_realtime_mask or hw_cpu_realtime_mask" msgstr "" "实时策略要求 vCPU 掩码配置有至少一个 1 个 RT vCPU 和 1 个普通 vCPU。请参阅hw:" "cpu_realtime_mask 或 hw_cpu_realtime_mask" msgid "Request body and URI mismatch" msgstr "请求主体和URI不匹配" msgid "Request is too large." 
msgstr "请求太大。" #, python-format msgid "Request of image %(image_id)s got BadRequest response: %(response)s" msgstr "镜像 %(image_id)s 的请求收到无效请求响应:%(response)s" #, python-format msgid "RequestSpec not found for instance %(instance_uuid)s" msgstr "找不到对应实例 %(instance_uuid)s 的 RequestSpec" msgid "Requested CPU control policy not supported by host" msgstr "主机不支持所请求 CPU 控制策略" #, python-format msgid "" "Requested hardware '%(model)s' is not supported by the '%(virt)s' virt driver" msgstr "“%(virt)s”虚拟驱动程序不支持所请求硬件“%(model)s”" #, python-format msgid "Requested image %(image)s has automatic disk resize disabled." msgstr "请求的镜像%(image)s禁用磁盘自动调整大小功能。" msgid "" "Requested instance NUMA topology cannot fit the given host NUMA topology" msgstr "请求实例的NUMA拓扑不能满足给定主机的NUMA拓扑" msgid "" "Requested instance NUMA topology together with requested PCI devices cannot " "fit the given host NUMA topology" msgstr "与在PCI设备请求一起的NUMA拓扑请求不能满足给定主机的NUMA拓扑" #, python-format msgid "" "Requested vCPU limits %(sockets)d:%(cores)d:%(threads)d are impossible to " "satisfy for vcpus count %(vcpus)d" msgstr "" "请求的 vCPU 限制 %(sockets)d:%(cores)d:%(threads)d 无法满足vcpus 计数 " "%(vcpus)d" #, python-format msgid "Rescue device does not exist for instance %s" msgstr "对于实例%s,营救设备不存在" #, python-format msgid "Resize error: %(reason)s" msgstr "发生调整大小错误:%(reason)s" msgid "Resize to zero disk flavor is not allowed." msgstr "不允许将云主机类型的磁盘大小缩减为零。" msgid "Resource could not be found." msgstr "资源没有找到。" msgid "Resumed" msgstr "已恢复" #, python-format msgid "Root element name should be '%(name)s' not '%(tag)s'" msgstr "根元素名应该是 '%(name)s' 而不是'%(tag)s'" #, python-format msgid "Running batches of %i until complete" msgstr "运行 %i 的批处理直到完成" #, python-format msgid "Scheduler Host Filter %(filter_name)s could not be found." msgstr "调度器主机过滤器 %(filter_name)s 没有找到。" #, python-format msgid "Security group %(name)s is not found for project %(project)s" msgstr "在项目%(project)s没有找到安全组%(name)s" #, python-format msgid "" "Security group %(security_group_id)s not found for project %(project_id)s." msgstr "没有找到安全组 %(security_group_id)s 针对项目 %(project_id)s 。" #, python-format msgid "Security group %(security_group_id)s not found." msgstr "安全组 %(security_group_id)s 没有找到。" #, python-format msgid "" "Security group %(security_group_name)s already exists for project " "%(project_id)s." 
msgstr "对于项目 %(project_id)s,安全组 %(security_group_name)s 已经存在。" #, python-format msgid "" "Security group %(security_group_name)s not associated with the instance " "%(instance)s" msgstr "安全组%(security_group_name)s 没有与实例%(instance)s 绑定" msgid "Security group id should be uuid" msgstr "安全组标识应该为 uuid" msgid "Security group name cannot be empty" msgstr "安全组名称不能是空" msgid "Security group not specified" msgstr "没有指定安全组" #, python-format msgid "Server %(server_id)s has no tag '%(tag)s'" msgstr "服务器%(server_id)s上未发现标签'%(tag)s'" #, python-format msgid "Server disk was unable to be resized because: %(reason)s" msgstr "由于: %(reason)s,实例磁盘空间不能修改" msgid "Server does not exist" msgstr "服务器不存在" #, python-format msgid "ServerGroup policy is not supported: %(reason)s" msgstr "服务器组策略不支持: %(reason)s" msgid "ServerGroupAffinityFilter not configured" msgstr "没有配置ServerGroupAffinityFilter" msgid "ServerGroupAntiAffinityFilter not configured" msgstr "没有配置ServerGroupAntiAffinityFilter" msgid "ServerGroupSoftAffinityWeigher not configured" msgstr "未配置 ServerGroupSoftAffinityWeigher" msgid "ServerGroupSoftAntiAffinityWeigher not configured" msgstr "未配置 ServerGroupSoftAntiAffinityWeigher" #, python-format msgid "Service %(service_id)s could not be found." msgstr "服务 %(service_id)s 没有找到。" #, python-format msgid "Service %s not found." msgstr "服务%s没有找到。" msgid "Service is unavailable at this time." msgstr "此时的付不可用。" #, python-format msgid "Service with host %(host)s binary %(binary)s exists." msgstr "主机 %(host)s 二进制 %(binary)s 的服务已存在。" #, python-format msgid "Service with host %(host)s topic %(topic)s exists." msgstr "主机 %(host)s 主题 %(topic)s 的服务已存在。" msgid "Set admin password is not supported" msgstr "设置管理员密码不支持" msgid "Set the cell name." msgstr "设置 cell 名称" #, python-format msgid "Shadow table with name %(name)s already exists." msgstr "影子数据表 %(name)s已经存在" #, python-format msgid "Share '%s' is not supported" msgstr "不支持共享 '%s' " #, python-format msgid "Share level '%s' cannot have share configured" msgstr "共享级别 '%s' 不用共享配置" msgid "" "Shrinking the filesystem down with resize2fs has failed, please check if you " "have enough free space on your disk." msgstr "" "使用resize2fs向下压缩文件系统失败,请检查您的磁盘上是否有足够的剩余空间。" msgid "Shutting down VM (cleanly) failed." msgstr "关闭虚拟机(软)失败。" msgid "Shutting down VM (hard) failed" msgstr "关闭虚拟机(硬)失败" #, python-format msgid "Snapshot %(snapshot_id)s could not be found." msgstr "快照 %(snapshot_id)s 没有找到。" msgid "Some required fields are missing" msgstr "有些必填字段没有填写" #, python-format msgid "" "Something went wrong when deleting a volume snapshot: rebasing a " "%(protocol)s network disk using qemu-img has not been fully tested" msgstr "" "删除卷快照时出错:使用 qemu-img 对 %(protocol)s 网络磁盘重建基础的操作未经全" "面测试" msgid "Sort direction size exceeds sort key size" msgstr "分类大小超过分类关键值大小" msgid "Sort key supplied was not valid." msgstr "提供的排序键无效。" msgid "Specified fixed address not assigned to instance" msgstr "指定的固定IP地址没有分配给实例" msgid "Specify `table_name` or `table` param" msgstr "指定`table_name` 或 `table`参数" msgid "Specify only one param `table_name` `table`" msgstr "只能指定一个参数`table_name` `table`" msgid "Started" msgstr "已开始" msgid "Stopped" msgstr "已停止" #, python-format msgid "Storage error: %(reason)s" msgstr "存储错误:%(reason)s" #, python-format msgid "Storage policy %s did not match any datastores" msgstr "存储策略 %s 不匹配任何存储" msgid "Success" msgstr "成功" msgid "Suspended" msgstr "已挂起" msgid "Swap drive requested is larger than instance type allows." 
msgstr "交换驱动器的请求大于实例类型允许。" msgid "Table" msgstr "表" #, python-format msgid "Task %(task_name)s is already running on host %(host)s" msgstr "任务 %(task_name)s 已在主机 %(host)s 上运行" #, python-format msgid "Task %(task_name)s is not running on host %(host)s" msgstr "任务 %(task_name)s 未在主机 %(host)s 上运行" #, python-format msgid "The PCI address %(address)s has an incorrect format." msgstr "PCI地址%(address)s格式不正确。" msgid "The backlog must be more than 0" msgstr "backlog必须大于0" #, python-format msgid "The console port range %(min_port)d-%(max_port)d is exhausted." msgstr "控制台端口范围%(min_port)d-%(max_port)d已经耗尽。" msgid "The created instance's disk would be too small." msgstr "将创建的实例磁盘太小。" msgid "The current driver does not support preserving ephemeral partitions." msgstr "当前驱动不支持保存瞬时分区。" msgid "The default PBM policy doesn't exist on the backend." msgstr "在后台,缺省的PBM策略不存在。" msgid "The floating IP request failed with a BadRequest" msgstr "由于坏的请求,浮动IP请求失败" msgid "" "The instance requires a newer hypervisor version than has been provided." msgstr "该实例需要比当前版本更新的虚拟机管理程序。" #, python-format msgid "The number of defined ports: %(ports)d is over the limit: %(quota)d" msgstr "定义的端口数量:%(ports)d 超过限制:%(quota)d" #, python-format msgid "The number of tags exceeded the per-server limit %d" msgstr "标签数量超过了保护限制,上限数量为%d" msgid "The only partition should be partition 1." msgstr "唯一的分区应该是分区1。" #, python-format msgid "The provided RNG device path: (%(path)s) is not present on the host." msgstr "主机上不存在提供的RNG设备路径:(%(path)s)。" msgid "The request body can't be empty" msgstr "请求题不能为空" msgid "The request is invalid." msgstr "请求无效。" #, python-format msgid "" "The requested amount of video memory %(req_vram)d is higher than the maximum " "allowed by flavor %(max_vram)d." msgstr "申请的显存数 %(req_vram)d 超过了云主机类型允许的最大值 %(max_vram)d 。" msgid "The requested availability zone is not available" msgstr "请求的可用域是不可用的!" msgid "The requested console type details are not accessible" msgstr "不能访问请求的控制台类型详情" msgid "The requested functionality is not supported." msgstr "不支持请求的功能" #, python-format msgid "The specified cluster '%s' was not found in vCenter" msgstr "在vCenter中没有找到指定的集群'%s'" #, python-format msgid "The supplied device path (%(path)s) is in use." msgstr "提供的设备路径 (%(path)s) 正在使用中。" #, python-format msgid "The supplied device path (%(path)s) is invalid." msgstr "提供的设备路径 (%(path)s) 是无效的。" #, python-format msgid "" "The supplied disk path (%(path)s) already exists, it is expected not to " "exist." msgstr "提供的磁盘路径 (%(path)s) 已经存在,预计是不存在的。" msgid "The supplied hypervisor type of is invalid." msgstr "提供的虚拟机管理程序类型无效。" msgid "The target host can't be the same one." msgstr "目标主机不能是当前主机。" #, python-format msgid "The token '%(token)s' is invalid or has expired" msgstr "令牌“%(token)s”无效或已到期" msgid "The uuid of the cell to delete." msgstr "要删除的 cell 的 uuid" #, python-format msgid "" "The volume cannot be assigned the same device name as the root device %s" msgstr "卷不能分配不能用与root设备%s一样的设备名称" #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. Run this command again with the --delete " "option after you have backed up any necessary data." msgstr "" "在表格 '%(table_name)s'中有%(records)d记录,uuid或者instance_uuid列值为 " "NULL。在您备份任何需要的数据之后,使用参数 --delete再运行一次这个命令。" #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. These must be manually cleaned up before " "the migration will pass. 
Consider running the 'nova-manage db " "null_instance_uuid_scan' command." msgstr "" "在表格 '%(table_name)s'中有%(records)d记录,uuid或者instance_uuid列值为 " "NULL。在迁移通过前,必须手动清除。考虑运行命令'nova-manage db " "null_instance_uuid_scan'。" msgid "There are not enough hosts available." msgstr "没有足够的主机可用。" #, python-format msgid "" "There are still %(count)i unmigrated flavor records. Migration cannot " "continue until all instance flavor records have been migrated to the new " "format. Please run `nova-manage db migrate_flavor_data' first." msgstr "" "仍然有%(count)i 条未迁移的云主机类型。在所有实例类型记录被迁移到新的格式之" "前,迁移不能继续。请运行首先 `nova-manage db migrate_flavor_data'。" #, python-format msgid "There is no such action: %s" msgstr "没有该动作:%s" msgid "There were no records found where instance_uuid was NULL." msgstr "没有找到instance_uuid为NULL的记录。" #, python-format msgid "" "This compute node's hypervisor is older than the minimum supported version: " "%(version)s." msgstr "此计算节点的 hypervisor 的版本低于最低受支持版本:%(version)s。" msgid "This domU must be running on the host specified by connection_url" msgstr "这个domU 必须运行在connection_url指定的主机上" msgid "" "This method needs to be called with either networks=None and port_ids=None " "or port_ids and networks as not none." msgstr "" "调用此方法时,需要以下设置:将 networks 和 port_ids 指定为 None,或者将 " "networks 和 port_ids 指定为非 None。" #, python-format msgid "This rule already exists in group %s" msgstr "这条规则已经存在于组%s 中" #, python-format msgid "" "This service is older (v%(thisver)i) than the minimum (v%(minver)i) version " "of the rest of the deployment. Unable to continue." msgstr "" "此服务的版本 (v%(thisver)i) 低于部署的余下部分的最低版本 (v%(minver)i)。无法" "继续。" #, python-format msgid "Timeout waiting for device %s to be created" msgstr "等待设备 %s 创建超时" msgid "Timeout waiting for response from cell" msgstr "等待来自单元的响应时发生超时" #, python-format msgid "Timeout while checking if we can live migrate to host: %s" msgstr "当检查我们是否可以在线迁移到主机%s时,超时。" msgid "To and From ports must be integers" msgstr "目的和源端口必须是整数" msgid "Token not found" msgstr "找不到令牌" msgid "Triggering crash dump is not supported" msgstr "触发崩溃转储不受支持" msgid "Type and Code must be integers for ICMP protocol type" msgstr "类型和编码必须是ICMP协议类型" msgid "UEFI is not supported" msgstr "UEFI 不受支持" msgid "UUID" msgstr "UUID" #, python-format msgid "" "Unable to associate floating IP %(address)s to any fixed IPs for instance " "%(id)s. Instance has no fixed IPv4 addresses to associate." msgstr "" "无法将浮动 IP %(address)s 关联至实例 %(id)s 的任何固定 IP。实例没有要关联的固" "定 IPv4 地址。" #, python-format msgid "" "Unable to associate floating IP %(address)s to fixed IP %(fixed_address)s " "for instance %(id)s. Error: %(error)s" msgstr "" "无法将浮动 IP %(address)s 关联至实例 %(id)s 的任何固定 IP %(fixed_address)s。" "错误:%(error)s" msgid "Unable to authenticate Ironic client." msgstr "不能认证Ironic客户端。" #, python-format msgid "Unable to automatically allocate a network for project %(project_id)s" msgstr "无法为项目 %(project_id)s 自动分配网络" #, python-format msgid "Unable to contact guest agent. 
The following call timed out: %(method)s" msgstr "不能连接客户端代理。以下调用超时:%(method)s" #, python-format msgid "Unable to convert image to %(format)s: %(exp)s" msgstr "无法将镜像转换为 %(format)s:%(exp)s" #, python-format msgid "Unable to convert image to raw: %(exp)s" msgstr "无法将镜像转换为原始格式:%(exp)s" msgid "Unable to destroy VBD" msgstr "不能销毁VBD" #, python-format msgid "Unable to destroy VBD %s" msgstr "无法销毁 VBD %s" #, python-format msgid "Unable to destroy VDI %s" msgstr "无法销毁 VDI %s" #, python-format msgid "Unable to determine disk bus for '%s'" msgstr "无法为“%s”确定磁盘总线" #, python-format msgid "Unable to determine disk prefix for %s" msgstr "无法为 %s 确定磁盘前缀" #, python-format msgid "Unable to eject %s from the pool; No master found" msgstr "不能从池中弹出%s;没有找到主" #, python-format msgid "Unable to eject %s from the pool; pool not empty" msgstr "不能从池中弹出%s ;池不空" msgid "Unable to find SR from VBD" msgstr "不能从VBD找到SR" #, python-format msgid "Unable to find SR from VBD %s" msgstr "无法在VBD %s找到存储库" msgid "Unable to find SR from VDI" msgstr "不能从VDI找到SR" #, python-format msgid "Unable to find SR from VDI %s" msgstr "从VDI %s 不能找到SR" #, python-format msgid "Unable to find ca_file : %s" msgstr "找不到 ca_file:%s" #, python-format msgid "Unable to find cert_file : %s" msgstr "找不到 cert_file:%s" #, python-format msgid "Unable to find host for Instance %s" msgstr "无法找到实例 %s 的宿主机" msgid "Unable to find iSCSI Target" msgstr "找不到 iSCSI 目标" #, python-format msgid "Unable to find key_file : %s" msgstr "找不到 key_file:%s" msgid "Unable to find root VBD/VDI for VM" msgstr "找不到 VM 的根 VBD/VDI" msgid "Unable to find volume" msgstr "找不到卷" msgid "Unable to get host UUID: /etc/machine-id does not exist" msgstr "不能获取主机UUID:/etc/machine-id不存在" msgid "Unable to get host UUID: /etc/machine-id is empty" msgstr "不能获取主机UUID:/etc/machine-id 为空" #, python-format msgid "Unable to get record of VDI %s on" msgstr "无法使得VDI %s 的记录运行" msgid "Unable to get updated status" msgstr "无法获取已更新的状态" #, python-format msgid "Unable to introduce VDI for SR %s" msgstr "无法为存储库 %s 引入VDI" #, python-format msgid "Unable to introduce VDI on SR %s" msgstr "无法在存储库 %s 上引入VDI" #, python-format msgid "Unable to join %s in the pool" msgstr "在池中不能加入%s" msgid "" "Unable to launch multiple instances with a single configured port ID. Please " "launch your instance one by one with different ports." msgstr "" "无法启动多个实例,这些实例配置使用同一个端口ID。请一个一个启动您的实例,并且" "实例间使用不同的端口。" #, python-format msgid "" "Unable to migrate %(instance_uuid)s to %(dest)s: Lack of memory(host:" "%(avail)s <= instance:%(mem_inst)s)" msgstr "" "无法将 %(instance_uuid)s 迁移至 %(dest)s:内存不足(主机:%(avail)s <= 实例:" "%(mem_inst)s)" #, python-format msgid "" "Unable to migrate %(instance_uuid)s: Disk of instance is too large(available " "on destination host:%(available)s < need:%(necessary)s)" msgstr "" "无法迁移 %(instance_uuid)s:实例的磁盘太大(在目标主机上可用的:" "%(available)s < 需要:%(necessary)s)" #, python-format msgid "" "Unable to migrate instance (%(instance_id)s) to current host (%(host)s)." msgstr "无法把实例 (%(instance_id)s) 迁移到当前主机 (%(host)s)。" #, python-format msgid "Unable to obtain target information %s" msgstr "不能获取目标信息%s" #, python-format msgid "Unable to parse rrd of %s" msgstr "不能解析 %s 的rrd" msgid "Unable to resize disk down." msgstr "不能向下调整磁盘。" msgid "Unable to set password on instance" msgstr "无法对实例设置密码" msgid "Unable to shrink disk." msgstr "不能压缩磁盘。" msgid "Unable to terminate instance." 
msgstr "不能终止实例。" msgid "Unable to unplug VBD" msgstr "不能拔出VBD" #, python-format msgid "Unable to unplug VBD %s" msgstr "无法移除 VBD %s" #, python-format msgid "Unacceptable CPU info: %(reason)s" msgstr "CPU信息不能被接受:%(reason)s。" msgid "Unacceptable parameters." msgstr "无法接受的参数。" #, python-format msgid "Unavailable console type %(console_type)s." msgstr "不可用的控制台类型 %(console_type)s." msgid "" "Undefined Block Device Mapping root: BlockDeviceMappingList contains Block " "Device Mappings from multiple instances." msgstr "" "未定义的块设备映射根:BlockDeviceMappingList 包含来自多个实例的块设备映射。" #, python-format msgid "" "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ " "and attach the Nova API log if possible.\n" "%s" msgstr "" "发生意外 API 错误。请在 http://bugs.launchpad.net/nova/ 处报告此错误,并且附" "上 Nova API 日志(如果可能)。\n" "%s" #, python-format msgid "Unexpected aggregate action %s" msgstr "未预期的聚合动作%s" msgid "Unexpected type adding stats" msgstr "类型添加统计发生意外" #, python-format msgid "Unexpected vif_type=%s" msgstr "存在意外 vif_type=%s" msgid "Unknown" msgstr "未知" msgid "Unknown action" msgstr "未知操作" #, python-format msgid "Unknown auth type: %s" msgstr "未知认证类型:%s" #, python-format msgid "Unknown config drive format %(format)s. Select one of iso9660 or vfat." msgstr "配置驱动器格式 %(format)s 未知。请选择下列其中一项:iso9660 或 vfat。" #, python-format msgid "Unknown delete_info type %s" msgstr "未知delete_info类型%s" #, python-format msgid "Unknown image_type=%s" msgstr "image_type=%s 未知" #, python-format msgid "Unknown quota resources %(unknown)s." msgstr "配额资源 %(unknown)s 未知。" msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "排序方向未知,必须为“降序”或“升序”" #, python-format msgid "Unknown type: %s" msgstr "未知类型:%s" msgid "Unrecognized legacy format." msgstr "不能识别的格式。" #, python-format msgid "Unrecognized read_deleted value '%s'" msgstr "无法识别的 read_deleted 取值”%s“" #, python-format msgid "Unrecognized value '%s' for CONF.running_deleted_instance_action" msgstr "对于CONF.running_deleted_instance_action,无法识别的值 '%s'" #, python-format msgid "Unshelve attempted but the image %s cannot be found." msgstr "试图取消废弃但是镜像%s没有找到。" msgid "Unsupported Content-Type" msgstr "不支持的Content-Type" msgid "Upgrade DB using Essex release first." msgstr "请首先使用 Essex 发行版来升级数据库。" #, python-format msgid "User %(username)s not found in password file." msgstr "在密码文件中找不到用户 %(username)s。" #, python-format msgid "User %(username)s not found in shadow file." msgstr "在影子文件中找不到用户 %(username)s。" msgid "User data needs to be valid base 64." msgstr "用户数据需要是有效的基本 64 位。" msgid "User does not have admin privileges" msgstr "用户没有管理员权限" msgid "" "Using different block_device_mapping syntaxes is not allowed in the same " "request." msgstr "在同一个请求中,不允许使用不同的 block_device_mapping语法。" #, python-format msgid "" "VDI %(vdi_ref)s is %(virtual_size)d bytes which is larger than flavor size " "of %(new_disk_size)d bytes." msgstr "" "VDI %(vdi_ref)s 的大小为 %(virtual_size)d 字节,比云主机类型定义的 " "%(new_disk_size)d 字节还大。" #, python-format msgid "" "VDI not found on SR %(sr)s (vdi_uuid %(vdi_uuid)s, target_lun %(target_lun)s)" msgstr "" "SR %(sr)s (vdi_uuid %(vdi_uuid)s上没有找到VDI,目标lun target_lun " "%(target_lun)s)" #, python-format msgid "VHD coalesce attempts exceeded (%d), giving up..." msgstr "VHD coalesce 尝试超过(%d),放弃。。。" #, python-format msgid "" "Version %(req_ver)s is not supported by the API. Minimum is %(min_ver)s and " "maximum is %(max_ver)s." 
msgstr "" "API不支持版本%(req_ver)s。小版本号是%(min_ver)s,大版本号是%(max_ver)s。" msgid "Virtual Interface creation failed" msgstr "虚拟接口创建失败" msgid "Virtual interface plugin failed" msgstr "虚拟接口插件失效" #, python-format msgid "Virtual machine mode '%(vmmode)s' is not recognised" msgstr "虚拟机模式“%(vmmode)s”无法识别" #, python-format msgid "Virtual machine mode '%s' is not valid" msgstr "虚拟机模式 '%s'无效" #, python-format msgid "Virtualization type '%(virt)s' is not supported by this compute driver" msgstr "此计算驱动程序不支持虚拟化类型“%(virt)s”" #, python-format msgid "Volume %(volume_id)s could not be attached. Reason: %(reason)s" msgstr "无法挂载卷 %(volume_id)s。原因:%(reason)s" #, python-format msgid "Volume %(volume_id)s could not be detached. Reason: %(reason)s" msgstr "无法卸载卷 %(volume_id)s 。原因:%(reason)s" #, python-format msgid "Volume %(volume_id)s could not be found." msgstr "卷 %(volume_id)s 没有找到。" #, python-format msgid "" "Volume %(volume_id)s did not finish being created even after we waited " "%(seconds)s seconds or %(attempts)s attempts. And its status is " "%(volume_status)s." msgstr "" "在等待%(seconds)s 秒或%(attempts)s尝试后,卷%(volume_id)s仍然没有创建成功。它" "的状态是%(volume_status)s。" msgid "Volume does not belong to the requested instance." msgstr "卷不属于请求的实例。" #, python-format msgid "" "Volume encryption is not supported for %(volume_type)s volume %(volume_id)s" msgstr "%(volume_type)s 类型的卷 %(volume_id)s不支持卷加密" #, python-format msgid "" "Volume is smaller than the minimum size specified in image metadata. Volume " "size is %(volume_size)i bytes, minimum size is %(image_min_disk)i bytes." msgstr "" "卷比镜像元数据中指定的最小值还小。卷大小是%(volume_size)i字节,最小值" "是%(image_min_disk)i字节。" #, python-format msgid "" "Volume sets block size, but the current libvirt hypervisor '%s' does not " "support custom block size" msgstr "卷设置块大小,但是当前libvirt监测器 '%s'不支持定制化块大小" #, python-format msgid "" "We do not support scheme '%s' under Python < 2.7.4, please use http or https" msgstr "Python<2.7.4 不支持实例 '%s',请使用http或者https" msgid "When resizing, instances must change flavor!" msgstr "调整大小时,实例必须更换云主机类型!" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "以 SSL 方式运行服务器时,必须在配置文件中同时指定 cert_file 和 key_file 选项" "值" #, python-format msgid "Wrong quota method %(method)s used on resource %(res)s" msgstr "错误的配额方法%(method)s用在资源%(res)s" msgid "Wrong type of hook method. Only 'pre' and 'post' type allowed" msgstr "钩子方法的类型不正确。仅允许“pre”和“post”类型" msgid "X-Forwarded-For is missing from request." msgstr "请求中缺少 X-Forwarded-For 。" msgid "X-Instance-ID header is missing from request." msgstr "请求中缺少 X-Instance-ID 头。" msgid "X-Instance-ID-Signature header is missing from request." msgstr "请求中缺少 X-Instance-ID-Signature 。" msgid "X-Metadata-Provider is missing from request." msgstr "请求中缺少X-Metadata-Provider。" msgid "X-Tenant-ID header is missing from request." msgstr "请求中缺少 X-Tenant-ID 。" msgid "XAPI supporting relax-xsm-sr-check=true required" msgstr "XAPI 支持必须的 relax-xsm-sr-check=true" msgid "You are not allowed to delete the image." msgstr "不允许删除该映像。" msgid "" "You are not authorized to access the image the instance was started with." msgstr "您无权访问实例启动的镜像。" msgid "You must implement __call__" msgstr "你必须执行 __call__" msgid "You should specify images_rbd_pool flag to use rbd images." msgstr "为了使用rbd镜像,您应该指定 images_rbd_pool标记。" msgid "You should specify images_volume_group flag to use LVM images." msgstr "为了使用LVM镜像,您应该指定 images_volume_group标记" msgid "Zero floating IPs available." 
msgstr "没有可用浮动 IP。" msgid "admin password can't be changed on existing disk" msgstr "无法在现有磁盘上更改管理员密码" msgid "aggregate deleted" msgstr "删除的聚合" msgid "aggregate in error" msgstr "聚合在错误状态" #, python-format msgid "assert_can_migrate failed because: %s" msgstr "assert_can_migrate失败,原因是:%s" #, python-format msgid "attach network interface %s failed." msgstr "绑定网络接口 %s 失败." msgid "cannot understand JSON" msgstr "无法理解JSON" msgid "clone() is not implemented" msgstr "clone()没有实现" msgid "complete" msgstr "完成" #, python-format msgid "connect info: %s" msgstr "连接信息:%s" #, python-format msgid "connecting to: %(host)s:%(port)s" msgstr "连接到:%(host)s:%(port)s" #, python-format msgid "detach network interface %s failed." msgstr "解绑网络接口 %s 失败." msgid "direct_snapshot() is not implemented" msgstr "未实现 direct_snapshot()" #, python-format msgid "disk type '%s' not supported" msgstr "不支持磁盘类型'%s' " #, python-format msgid "empty project id for instance %s" msgstr "用于实例 %s 的项目标识为空" #, python-format msgid "error opening rbd image %s" msgstr "打开rbd镜像%s 出错" msgid "error setting admin password" msgstr "设置管理员密码时出错" #, python-format msgid "error: %s" msgstr "错误:%s" #, python-format msgid "failed to generate X509 fingerprint. Error message: %s" msgstr "生成X509指纹失败。错误消息:%s" msgid "failed to generate fingerprint" msgstr "生成fingerprint失败" msgid "filename cannot be None" msgstr "文件名不能为None" msgid "floating IP is already associated" msgstr "已关联浮动 IP" msgid "floating IP not found" msgstr "找不到浮动 IP" #, python-format msgid "fmt=%(fmt)s backed by: %(backing_file)s" msgstr "fmt=%(fmt)s 由 %(backing_file)s 支持" #, python-format msgid "href %s does not contain version" msgstr "href %s 不包含版本" msgid "image already mounted" msgstr "镜像已经挂载" #, python-format msgid "instance %s is not running" msgstr "实例%s没有运行" msgid "instance has a kernel or ramdisk but not both" msgstr "实例拥有内核或者内存盘,但不是二者均有" msgid "instance is a required argument to use @refresh_cache" msgstr "使用 @refresh_cache 时,instance 是必需的自变量" msgid "instance is not in a suspended state" msgstr "实例不在挂起状态" msgid "instance is not powered on" msgstr "实例未启动" msgid "instance is powered off and cannot be suspended." msgstr "已对实例关闭电源,无法暂挂该实例。" #, python-format msgid "instance_id %s could not be found as device id on any ports" msgstr "当设备在任意端口上时,无法找到实例id %s" msgid "is_public must be a boolean" msgstr "is_public 必须为布尔值" msgid "keymgr.fixed_key not defined" msgstr "keymgr.fixed_key 没有定义" msgid "l3driver call to add floating IP failed" msgstr "用于添加浮动 IP 的 l3driver 调用失败" #, python-format msgid "libguestfs installed but not usable (%s)" msgstr "libguestfs安装了,但是不可用(%s)" #, python-format msgid "libguestfs is not installed (%s)" msgstr "libguestfs没有安装 (%s)" #, python-format msgid "marker [%s] not found" msgstr "没有找到标记 [%s]" #, python-format msgid "max rows must be <= %(max_value)d" msgstr "最大行数必须小于或等于 %(max_value)d" msgid "max_count cannot be greater than 1 if an fixed_ip is specified." 
msgstr "如果指定了一个固定IP,最大数max_count不能大于1。" msgid "min_count must be <= max_count" msgstr "min_count 必须小于或等于 max_count" #, python-format msgid "nbd device %s did not show up" msgstr "nbd 设备 %s 没有出现" msgid "nbd unavailable: module not loaded" msgstr "NBD不可用:模块没有加载" msgid "no hosts to remove" msgstr "没有主见可移除" #, python-format msgid "no match found for %s" msgstr "对于%s没有找到匹配的" #, python-format msgid "no usable parent snapshot for volume %s" msgstr "卷 %s 没有可用父快照" #, python-format msgid "no write permission on storage pool %s" msgstr "没有对存储池 %s 的写许可权" #, python-format msgid "not able to execute ssh command: %s" msgstr "不能执行ssh命令:%s" msgid "old style configuration can use only dictionary or memcached backends" msgstr "旧样式配置只能使用字典或 memcached 后端" msgid "operation time out" msgstr "操作超时" #, python-format msgid "partition %s not found" msgstr "找不到分区 %s" #, python-format msgid "partition search unsupported with %s" msgstr "在具有 %s 的情况下,不支持搜索分区" msgid "pause not supported for vmwareapi" msgstr "vmwareapi 不支持暂停" msgid "printable characters with at least one non space character" msgstr "可显示字符,带有至少一个非空格字符" msgid "printable characters. Can not start or end with whitespace." msgstr "可显示字符。不能以空格开头或结尾。" #, python-format msgid "qemu-img failed to execute on %(path)s : %(exp)s" msgstr "对 %(path)s 执行 qemu-img 失败:%(exp)s" #, python-format msgid "qemu-nbd error: %s" msgstr "qemu-nbd 错误:%s" msgid "rbd python libraries not found" msgstr "没有找到rbd pyhon库" #, python-format msgid "read_deleted can only be one of 'no', 'yes' or 'only', not %r" msgstr "read_deleted 只能是“no”、“yes”或“only”其中一项,而不能是 %r" msgid "serve() can only be called once" msgstr "serve() 只能调用一次" msgid "service is a mandatory argument for DB based ServiceGroup driver" msgstr "service 对于基于数据库的 ServiceGroup 驱动程序是必需的自变量" msgid "service is a mandatory argument for Memcached based ServiceGroup driver" msgstr "service 对于基于 Memcached 的 ServiceGroup 驱动程序是必需的自变量" msgid "set_admin_password is not implemented by this driver or guest instance." msgstr "此驱动程序或 guest 实例未实现 set_admin_password。" msgid "setup in progress" msgstr "建立处理中" #, python-format msgid "snapshot for %s" msgstr "%s 的快照" msgid "snapshot_id required in create_info" msgstr "在create_info中必须有snapshot_id" msgid "stopped" msgstr "已停止" msgid "token not provided" msgstr "令牌没有提供" msgid "too many body keys" msgstr "过多主体密钥" msgid "unpause not supported for vmwareapi" msgstr "vmwareapi 不支持取消暂停" msgid "version should be an integer" msgstr "version应该是整数" #, python-format msgid "vg %s must be LVM volume group" msgstr "vg %s 必须为 LVM 卷组" #, python-format msgid "vhostuser_sock_path not present in vif_details for vif %(vif_id)s" msgstr "在vif %(vif_id)s 的vif_details详情中不存在vhostuser_sock_path" #, python-format msgid "vif type %s not supported" msgstr "vif 类型 %s 不受支持" msgid "vif_type parameter must be present for this vif_driver implementation" msgstr "对于此 vif_driver 实现,必须存在 vif_type 参数" #, python-format msgid "volume %s already attached" msgstr "卷%s已经绑定" #, python-format msgid "" "volume '%(vol)s' status must be 'in-use'. 
Currently in '%(status)s' status" msgstr "卷 '%(vol)s' 状态必须是‘使用中‘。当前处于 '%(status)s' 状态" #, python-format msgid "xenapi.fake does not have an implementation for %s" msgstr "xenapi.fake 没有 %s 的实现" #, python-format msgid "" "xenapi.fake does not have an implementation for %s or it has been called " "with the wrong number of arguments" msgstr "xenapi.fake 没有 %s 的实现或者调用时用了错误数目的参数" ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0664723 nova-21.2.4/nova/locale/zh_TW/0000775000175000017500000000000000000000000016023 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3544698 nova-21.2.4/nova/locale/zh_TW/LC_MESSAGES/0000775000175000017500000000000000000000000017610 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/locale/zh_TW/LC_MESSAGES/nova.po0000664000175000017500000030534100000000000021121 0ustar00zuulzuul00000000000000# Translations template for nova. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the nova project. # # Translators: # Chao-Hsiung Liao , 2012 # FIRST AUTHOR , 2011 # Pellaeon Lin , 2013 # Pellaeon Lin , 2013 # Andreas Jaeger , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: nova VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2020-04-27 16:23+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-04-12 06:10+0000\n" "Last-Translator: Copied by Zanata \n" "Language: zh_TW\n" "Plural-Forms: nplurals=1; plural=0;\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 4.3.3\n" "Language-Team: Chinese (Taiwan)\n" #, python-format msgid "%(address)s is not a valid IP v4/6 address." msgstr "%(address)s 不是有效的 IPv4/IPv6 位址。" #, python-format msgid "" "%(binary)s attempted direct database access which is not allowed by policy" msgstr "%(binary)s 嘗試直接存取資料庫,但原則不容許這樣做" #, python-format msgid "%(cidr)s is not a valid IP network." msgstr "%(cidr)s 不是有效的 IP 網路。" #, python-format msgid "%(field)s should not be part of the updates." msgstr "%(field)s 不應是更新的一部分。" #, python-format msgid "%(memsize)d MB of memory assigned, but expected %(memtotal)d MB" msgstr "已指派 %(memsize)d MB 記憶體,但預期為 %(memtotal)d MB" #, python-format msgid "%(path)s is not on local storage: %(reason)s" msgstr "%(path)s 不在本端儲存體上:%(reason)s" #, python-format msgid "%(path)s is not on shared storage: %(reason)s" msgstr "%(path)s 不在共用儲存體上:%(reason)s" #, python-format msgid "%(total)i rows matched query %(meth)s, %(done)i migrated" msgstr "%(total)i 列與查詢 %(meth)s 相符,已移轉 %(done)i" #, python-format msgid "%(type)s hypervisor does not support PCI devices" msgstr "%(type)s Hypervisor 不支援 PCI 裝置" #, python-format msgid "%(worker_name)s value of %(workers)s is invalid, must be greater than 0" msgstr "%(workers)s 的 %(worker_name)s 值無效,必須大於 0" #, python-format msgid "%s does not support disk hotplug." msgstr "%s 不支援磁碟熱插拔。" #, python-format msgid "%s format is not supported" msgstr "不支援 %s 格式" #, python-format msgid "%s is not supported." msgstr "不支援 %s。" #, python-format msgid "%s must be either 'MANUAL' or 'AUTO'." msgstr "%s 必須是 'MANUAL' 或 'AUTO'。" #, python-format msgid "'%(other)s' should be an instance of '%(cls)s'" msgstr "'%(other)s' 應該是 '%(cls)s' 的實例" msgid "'qemu-img info' parsing failed." 
msgstr "'qemu-img info' 剖析失敗。" #, python-format msgid "'rxtx_factor' argument must be a float between 0 and %g" msgstr "'rxtx_factor' 引數必須是介於 0 和 %g 之間的浮點數" #, python-format msgid "A NetworkModel is required in field %s" msgstr "欄位 %s 需要 NetworkModel" #, python-format msgid "" "API Version String %(version)s is of invalid format. Must be of format " "MajorNum.MinorNum." msgstr "API 版本字串 %(version)s 格式無效。格式必須為 MajorNum.MinorNum。" #, python-format msgid "API version %(version)s is not supported on this method." msgstr "此方法不支援 API %(version)s 版。" msgid "Access list not available for public flavors." msgstr "存取清單不適用於公用特性。" #, python-format msgid "Action %s not found" msgstr "找不到動作 %s" #, python-format msgid "" "Action for request_id %(request_id)s on instance %(instance_uuid)s not found" msgstr "" "在實例 %(instance_uuid)s 上找不到對 request_id %(request_id)s 執行的動作" #, python-format msgid "Action: '%(action)s', calling method: %(meth)s, body: %(body)s" msgstr "動作:'%(action)s',呼叫方法:%(meth)s,主體:%(body)s" #, python-format msgid "Add metadata failed for aggregate %(id)s after %(retries)s retries" msgstr "重試 %(retries)s 次之後,對聚集 %(id)s 新增 meta 資料仍失敗" msgid "Affinity instance group policy was violated." msgstr "違反了親緣性實例群組原則。" #, python-format msgid "Agent does not support the call: %(method)s" msgstr "代理程式不支援呼叫:%(method)s" #, python-format msgid "" "Agent-build with hypervisor %(hypervisor)s os %(os)s architecture " "%(architecture)s exists." msgstr "" "Hypervisor 為 %(hypervisor)s、OS 為 %(os)s 且架構為%(architecture)s 的 agent-" "build 已存在。" #, python-format msgid "Aggregate %(aggregate_id)s already has host %(host)s." msgstr "聚集 %(aggregate_id)s 已有主機 %(host)s。" #, python-format msgid "Aggregate %(aggregate_id)s could not be found." msgstr "找不到聚集 %(aggregate_id)s。" #, python-format msgid "Aggregate %(aggregate_id)s has no host %(host)s." msgstr "聚集 %(aggregate_id)s 沒有主機 %(host)s。" #, python-format msgid "Aggregate %(aggregate_id)s has no metadata with key %(metadata_key)s." msgstr "聚集 %(aggregate_id)s 沒有索引鍵為 %(metadata_key)s 的 meta 資料。" #, python-format msgid "" "Aggregate %(aggregate_id)s: action '%(action)s' caused an error: %(reason)s." msgstr "聚集 %(aggregate_id)s:動作 '%(action)s' 造成錯誤:%(reason)s。" #, python-format msgid "Aggregate %(aggregate_name)s already exists." msgstr "聚集 %(aggregate_name)s 已存在。" #, python-format msgid "Aggregate %s does not support empty named availability zone" msgstr "聚集 %s 不支援空白命名的可用區域" #, python-format msgid "Aggregate for host %(host)s count not be found." msgstr "找不到主機 %(host)s 計數的聚集。" #, python-format msgid "An invalid 'name' value was provided. The name must be: %(reason)s" msgstr "所提供的「名稱」值無效。名稱必須是:%(reason)s" msgid "An unknown error has occurred. Please try your request again." msgstr "發生不明錯誤。請重試要求。" msgid "An unknown exception occurred." msgstr "發生一個未知例外" msgid "Anti-affinity instance group policy was violated." msgstr "違反了反親緣性實例群組原則。" #, python-format msgid "Architecture name '%(arch)s' is not recognised" msgstr "未辨識架構名稱 '%(arch)s'" #, python-format msgid "Architecture name '%s' is not valid" msgstr "架構名稱 '%s' 無效" #, python-format msgid "" "Attempt to consume PCI device %(compute_node_id)s:%(address)s from empty pool" msgstr "嘗試從空儲存區耗用 PCI 裝置 %(compute_node_id)s:%(address)s" msgid "Attempted overwrite of an existing value." 
msgstr "試圖改寫現有值。" #, python-format msgid "Attribute not supported: %(attr)s" msgstr "不支援屬性:%(attr)s" #, python-format msgid "Bad network format: missing %s" msgstr "錯誤的網路格式:遺漏了 %s" msgid "Bad networks format" msgstr "錯誤的網路格式" #, python-format msgid "Bad networks format: network uuid is not in proper format (%s)" msgstr "錯誤的網路格式:網路 UUID 不是適當的格式 (%s)" #, python-format msgid "Bad prefix for network in cidr %s" msgstr "CIDR %s 中網路的字首錯誤" #, python-format msgid "" "Binding failed for port %(port_id)s, please check neutron logs for more " "information." msgstr "針對埠 %(port_id)s 的連結失敗,請檢查 Neutron 日誌,以取得相關資訊。" msgid "Blank components" msgstr "空白元件" msgid "" "Blank volumes (source: 'blank', dest: 'volume') need to have non-zero size" msgstr "空白磁區(來源:'blank',目的地:'volume')需要具有非零大小" #, python-format msgid "Block Device %(id)s is not bootable." msgstr "區塊裝置 %(id)s 不可啟動。" #, python-format msgid "" "Block Device Mapping %(volume_id)s is a multi-attach volume and is not valid " "for this operation." msgstr "「區塊裝置對映」%(volume_id)s 是一個多重連接磁區,且不適用於此作業。" msgid "Block Device Mapping cannot be converted to legacy format. " msgstr "無法將「區塊裝置對映」轉換為舊式格式。" msgid "Block Device Mapping is Invalid." msgstr "「區塊裝置對映」無效。" #, python-format msgid "Block Device Mapping is Invalid: %(details)s" msgstr "「區塊裝置對映」無效:%(details)s" msgid "" "Block Device Mapping is Invalid: Boot sequence for the instance and image/" "block device mapping combination is not valid." msgstr "「區塊裝置對映」無效:實例的開機順序和映像檔或區塊裝置對應的組合無效。" msgid "" "Block Device Mapping is Invalid: You specified more local devices than the " "limit allows" msgstr "「區塊裝置對映」無效:您指定了超越限制數量的本地裝置" #, python-format msgid "Block Device Mapping is Invalid: failed to get image %(id)s." msgstr "「區塊裝置對映」無效:無法取得映像檔 %(id)s。" #, python-format msgid "Block Device Mapping is Invalid: failed to get snapshot %(id)s." msgstr "「區塊裝置對映」無效:無法取得 Snapshot %(id)s。" #, python-format msgid "Block Device Mapping is Invalid: failed to get volume %(id)s." msgstr "「區塊裝置對映」無效:無法取得磁區 %(id)s。" msgid "Block migration can not be used with shared storage." msgstr "區塊移轉不能與共用儲存體配合使用。" msgid "Boot index is invalid." msgstr "啟動索引無效。" #, python-format msgid "Build of instance %(instance_uuid)s aborted: %(reason)s" msgstr "建置實例 %(instance_uuid)s 已中止:%(reason)s" #, python-format msgid "Build of instance %(instance_uuid)s was re-scheduled: %(reason)s" msgstr "建置實例 %(instance_uuid)s 已重新排定:%(reason)s" #, python-format msgid "BuildRequest not found for instance %(uuid)s" msgstr "找不到實例 %(uuid)s 的 BuildRequest" msgid "CPU and memory allocation must be provided for all NUMA nodes" msgstr "必須為所有 NUMA 節點都提供 CPU 和記憶體配置" #, python-format msgid "" "CPU doesn't have compatibility.\n" "\n" "%(ret)s\n" "\n" "Refer to %(u)s" msgstr "" "CPU 不相容。\n" "\n" "%(ret)s\n" "\n" "請參閱 %(u)s" #, python-format msgid "CPU number %(cpunum)d is assigned to two nodes" msgstr "CPU 數目 %(cpunum)d 已指派給兩個節點" #, python-format msgid "CPU number %(cpunum)d is larger than max %(cpumax)d" msgstr "CPU 數目 %(cpunum)d 大於上限 %(cpumax)d" #, python-format msgid "CPU number %(cpuset)s is not assigned to any node" msgstr "CPU 數目 %(cpuset)s 未指派給任何節點" msgid "Can not add access to a public flavor." msgstr "無法新增對公用特性的存取權。" msgid "Can not find requested image" msgstr "找不到所要求的映像檔" #, python-format msgid "Can not handle authentication request for %d credentials" msgstr "無法處理對 %d 認證的鑑別要求" msgid "Can't resize a disk to 0 GB." msgstr "無法將磁碟的大小調整為 0 GB。" msgid "Can't resize down ephemeral disks." 
msgstr "無法將暫時磁碟調小。" msgid "Can't retrieve root device path from instance libvirt configuration" msgstr "無法從實例 libVirt 配置擷取根裝置路徑" #, python-format msgid "" "Cannot '%(action)s' instance %(server_id)s while it is in %(attr)s %(state)s" msgstr "無法%(action)s實例 %(server_id)s,因為該實例處於 %(attr)s %(state)s" #, python-format msgid "" "Cannot access \"%(instances_path)s\", make sure the path exists and that you " "have the proper permissions. In particular Nova-Compute must not be executed " "with the builtin SYSTEM account or other accounts unable to authenticate on " "a remote host." msgstr "" "無法存取 \"%(instances_path)s\",請確保路徑存在,以及您具有適當的權限。特別" "是,不得使用內建系統帳戶或其他無法在遠端主機上進行鑑別的帳戶來執行 Nova-" "Compute。" #, python-format msgid "Cannot add host to aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "無法將主機新增至聚集 %(aggregate_id)s。原因:%(reason)s。" msgid "Cannot attach one or more volumes to multiple instances" msgstr "無法將一個以上的磁區連接至多個實例" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "無法對孤立的 %(objtype)s 物件呼叫 %(method)s" #, python-format msgid "" "Cannot determine the parent storage pool for %s; cannot determine where to " "store images" msgstr "無法判定 %s 的母項儲存區;無法判定用來儲存映像檔的位置" msgid "Cannot find SR of content-type ISO" msgstr "找不到內容類型為 ISO 的「儲存體儲存庫 (SR)」" msgid "Cannot find SR to read/write VDI." msgstr "找不到「儲存體儲存庫 (SR)」來讀寫 VDI。" msgid "Cannot find image for rebuild" msgstr "找不到要重建的映像檔" #, python-format msgid "Cannot remove host %(host)s in aggregate %(id)s" msgstr "無法移除聚集 %(id)s 中的主機 %(host)s" #, python-format msgid "Cannot remove host from aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "無法從聚集 %(aggregate_id)s 移除主機。原因:%(reason)s。" msgid "Cannot rescue a volume-backed instance" msgstr "無法救援以磁區為基礎的實例" #, python-format msgid "" "Cannot resize the root disk to a smaller size. Current size: " "%(curr_root_gb)s GB. Requested size: %(new_root_gb)s GB." msgstr "" "無法將根磁碟調小。現行大小:%(curr_root_gb)s GB。所要求的大小:" "%(new_root_gb)s GB。" msgid "" "Cannot set cpu thread pinning policy in a non dedicated cpu pinning policy" msgstr "無法在非專用 CPU 固定原則中設定 CPU 執行緒固定原則" msgid "Cannot set realtime policy in a non dedicated cpu pinning policy" msgstr "無法在非專用 CPU 固定原則中設定即時原則" #, python-format msgid "Cannot update aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "無法更新聚集 %(aggregate_id)s。原因:%(reason)s。" #, python-format msgid "" "Cannot update metadata of aggregate %(aggregate_id)s. Reason: %(reason)s." msgstr "無法更新聚集 %(aggregate_id)s 的 meta 資料。原因:%(reason)s。" #, python-format msgid "Cell %(uuid)s has no mapping." msgstr "Cell %(uuid)s 沒有對映。" #, python-format msgid "" "Change would make usage less than 0 for the following resources: %(unders)s" msgstr "變更會使下列資源的用量小於 0:%(unders)s" #, python-format msgid "Class %(class_name)s could not be found: %(exception)s" msgstr "找不到類別 %(class_name)s:%(exception)s" #, python-format msgid "" "Command Not supported. Please use Ironic command %(cmd)s to perform this " "action." msgstr "指令不受支援。請使用 Ironic 指令 %(cmd)s 來執行此動作。" #, python-format msgid "Compute host %(host)s could not be found." msgstr "找不到計算主機 %(host)s。" #, python-format msgid "Compute host %s not found." msgstr "找不到計算主機 %s。" #, python-format msgid "Compute service of %(host)s is still in use." msgstr "%(host)s 的計算服務仍在使用中。" #, python-format msgid "Compute service of %(host)s is unavailable at this time." msgstr "此時無法使用 %(host)s 的計算服務。" #, python-format msgid "Config drive format '%(format)s' is not supported." 
msgstr "不支援配置磁碟機格式 '%(format)s'。" #, python-format msgid "" "Config requested an explicit CPU model, but the current libvirt hypervisor " "'%s' does not support selecting CPU models" msgstr "" "配置已要求明確的 CPU 型號,但現行 libVirt Hypervisor '%s' 不支援選取 CPU 型號" #, python-format msgid "" "Conflict updating instance %(instance_uuid)s, but we were unable to " "determine the cause" msgstr "更新實例 %(instance_uuid)s 時發生衝突,但是無法判定原因" #, python-format msgid "" "Conflict updating instance %(instance_uuid)s. Expected: %(expected)s. " "Actual: %(actual)s" msgstr "" "更新實例 %(instance_uuid)s 時發生衝突。預期:%(expected)s。實際:%(actual)s" #, python-format msgid "Connection to cinder host failed: %(reason)s" msgstr "連線 Cinder 主機失敗:%(reason)s" #, python-format msgid "Connection to glance host %(server)s failed: %(reason)s" msgstr "與 Glance 主機 %(server)s 的連線失敗:%(reason)s" #, python-format msgid "Connection to libvirt lost: %s" msgstr "libVirt 連線已中斷:%s" #, python-format msgid "Connection to the hypervisor is broken on host: %(host)s" msgstr "Hypervisor 連線在主機 %(host)s 上已中斷" #, python-format msgid "" "Console log output could not be retrieved for instance %(instance_id)s. " "Reason: %(reason)s" msgstr "無法擷取實例 %(instance_id)s 的主控台日誌輸出。原因:%(reason)s" msgid "Constraint not met." msgstr "不符合限制。" #, python-format msgid "Converted to raw, but format is now %s" msgstr "已轉換為原始,但格式現在為 %s" #, python-format msgid "Could not attach image to loopback: %s" msgstr "無法將映像檔連接至迴圈:%s" #, python-format msgid "Could not fetch image %(image_id)s" msgstr "無法提取映像檔 %(image_id)s" #, python-format msgid "Could not find a handler for %(driver_type)s volume." msgstr "找不到 %(driver_type)s 磁區的處理程式。" #, python-format msgid "Could not find binary %(binary)s on host %(host)s." msgstr "在主機 %(host)s 上找不到二進位檔 %(binary)s。" #, python-format msgid "Could not find config at %(path)s" msgstr "在 %(path)s 處找不到配置" msgid "Could not find the datastore reference(s) which the VM uses." msgstr "找不到 VM 所使用的資料儲存庫參照。" #, python-format msgid "Could not load line %(line)s, got error %(error)s" msgstr "無法載入行 %(line)s,取得錯誤 %(error)s" #, python-format msgid "Could not load paste app '%(name)s' from %(path)s" msgstr "無法從 %(path)s 載入 paste 應用程式 '%(name)s'" #, python-format msgid "" "Could not mount vfat config drive. %(operation)s failed. Error: %(error)s" msgstr "無法裝載 vfat 配置磁碟機。%(operation)s 失敗。錯誤:%(error)s" #, python-format msgid "Could not upload image %(image_id)s" msgstr "無法上傳映像檔 %(image_id)s" msgid "Creation of virtual interface with unique mac address failed" msgstr "使用唯一 MAC 位址來建立虛擬介面失敗" #, python-format msgid "Datastore regex %s did not match any datastores" msgstr "資料儲存庫正規表示式 %s 不符合任何資料儲存庫" msgid "Datetime is in invalid format" msgstr "日期時間的格式無效" msgid "Default PBM policy is required if PBM is enabled." msgstr "如果已啟用 PBM,則需要預設 PBM 原則。" #, python-format msgid "Deleted %(records)d records from table '%(table_name)s'." msgstr "已從表格 '%(table_name)s' 刪除 %(records)d 筆記錄。" #, python-format msgid "Device '%(device)s' not found." msgstr "找不到裝置 '%(device)s'。" #, python-format msgid "" "Device id %(id)s specified is not supported by hypervisor version %(version)s" msgstr "Hypervisor 版本 %(version)s 不支援所指定的裝置 ID %(id)s" msgid "Device name contains spaces." msgstr "裝置名稱包含空格。" msgid "Device name empty or too long." 
msgstr "裝置名稱為空,或者太長。" #, python-format msgid "Device type mismatch for alias '%s'" msgstr "別名 '%s' 的裝置類型不符" #, python-format msgid "" "Different types in %(table)s.%(column)s and shadow table: %(c_type)s " "%(shadow_c_type)s" msgstr "" "%(table)s.%(column)s 與備份副本表格中的類型不同:%(c_type)s %(shadow_c_type)s" #, python-format msgid "Disk contains a filesystem we are unable to resize: %s" msgstr "磁碟包含一個無法調整大小的檔案系統:%s" #, python-format msgid "Disk format %(disk_format)s is not acceptable" msgstr "無法接受磁碟格式 %(disk_format)s" #, python-format msgid "Disk info file is invalid: %(reason)s" msgstr "磁碟資訊檔無效:%(reason)s" msgid "Disk must have only one partition." msgstr "磁碟只能具有一個分割區。" #, python-format msgid "Disk with id: %s not found attached to instance." msgstr "找不到已連接至實例且 ID 為 %s 的磁碟。" #, python-format msgid "Driver Error: %s" msgstr "驅動程式錯誤:%s" #, python-format msgid "Error attempting to run %(method)s" msgstr "嘗試執行 %(method)s 時發生錯誤" #, python-format msgid "" "Error destroying the instance on node %(node)s. Provision state still " "'%(state)s'." msgstr "毀損節點 %(node)s 上的實例時發生錯誤。供應狀態仍為'%(state)s'。" #, python-format msgid "Error during following call to agent: %(method)s" msgstr "對代理程式進行下列呼叫期間發生錯誤:%(method)s" #, python-format msgid "Error during unshelve instance %(instance_id)s: %(reason)s" msgstr "解除擱置實例 %(instance_id)s 期間發生錯誤:%(reason)s" #, python-format msgid "" "Error from libvirt while getting domain info for %(instance_name)s: [Error " "Code %(error_code)s] %(ex)s" msgstr "" "獲取 %(instance_name)s 的網域資訊時 libVirt 傳回錯誤:[錯誤碼 " "%(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while looking up %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "查閱 %(instance_name)s 時 libVirt 傳回錯誤:[錯誤碼 %(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while quiescing %(instance_name)s: [Error Code " "%(error_code)s] %(ex)s" msgstr "" "對 %(instance_name)s 執行靜止動作時,libVirt 中發生錯誤:[錯誤碼 " "%(error_code)s] %(ex)s" #, python-format msgid "" "Error from libvirt while set password for username \"%(user)s\": [Error Code " "%(error_code)s] %(ex)s" msgstr "" "設定使用者名稱 \"%(user)s\" 的密碼時,libvirt 傳回了錯誤:[錯誤" "碼%(error_code)s] %(ex)s" #, python-format msgid "" "Error mounting %(device)s to %(dir)s in image %(image)s with libguestfs " "(%(e)s)" msgstr "" "使用 libguestfs 將 %(device)s 裝載到映像檔 %(image)s 中的 %(dir)s 時發生錯誤" "(%(e)s)" #, python-format msgid "Error mounting %(image)s with libguestfs (%(e)s)" msgstr "裝載具有 libguestfs (%(e)s) 的 %(image)s 時發生錯誤" #, python-format msgid "Error when creating resource monitor: %(monitor)s" msgstr "建立資源監視器 %(monitor)s 時發生錯誤" msgid "Error: Agent is disabled" msgstr "錯誤:代理程式已停用" #, python-format msgid "Event %(event)s not found for action id %(action_id)s" msgstr "找不到動作識別碼 %(action_id)s 的事件 %(event)s" msgid "Event must be an instance of nova.virt.event.Event" msgstr "事件必須是 nova.virt.event.Event 的實例" #, python-format msgid "" "Exceeded max scheduling attempts %(max_attempts)d for instance " "%(instance_uuid)s. Last exception: %(exc_reason)s" msgstr "" "已超出實例 %(instance_uuid)s 的排程嘗試次數上限 %(max_attempts)d。前次異常狀" "況:%(exc_reason)s" #, python-format msgid "" "Exceeded max scheduling retries %(max_retries)d for instance " "%(instance_uuid)s during live migration" msgstr "" "在即時移轉期間,超出了實例 %(instance_uuid)s 的排程重試次數上限 " "%(max_retries)d" #, python-format msgid "Exceeded maximum number of retries. %(reason)s" msgstr "已超出重試次數上限。%(reason)s" #, python-format msgid "Expected a uuid but received %(uuid)s." 
msgstr "需要 UUID,但收到 %(uuid)s。" #, python-format msgid "Extra column %(table)s.%(column)s in shadow table" msgstr "備份副本表格中存在額外直欄 %(table)s.%(column)s" msgid "Extracting vmdk from OVA failed." msgstr "從 OVA 擷取 VMDK 失敗。" #, python-format msgid "Failed to access port %(port_id)s: %(reason)s" msgstr "無法存取埠 %(port_id)s:%(reason)s" #, python-format msgid "" "Failed to add deploy parameters on node %(node)s when provisioning the " "instance %(instance)s" msgstr "供應實例 %(instance)s 時,無法在節點 %(node)s 上新增部署參數" #, python-format msgid "Failed to allocate the network(s) with error %s, not rescheduling." msgstr "無法配置網路,發生錯誤 %s,將不重新排定。" msgid "Failed to allocate the network(s), not rescheduling." msgstr "無法配置網路,將不會重新排定。" #, python-format msgid "Failed to attach network adapter device to %(instance_uuid)s" msgstr "無法將網路配接卡裝置連接至 %(instance_uuid)s" #, python-format msgid "Failed to create vif %s" msgstr "無法建立 VIF %s" #, python-format msgid "Failed to deploy instance: %(reason)s" msgstr "無法部署實例:%(reason)s" #, python-format msgid "Failed to detach PCI device %(dev)s: %(reason)s" msgstr "無法分離 PCI 裝置 %(dev)s:%(reason)s" #, python-format msgid "Failed to detach network adapter device from %(instance_uuid)s" msgstr "無法將網路配接卡裝置從 %(instance_uuid)s 分離" #, python-format msgid "Failed to encrypt text: %(reason)s" msgstr "無法加密文字:%(reason)s" #, python-format msgid "Failed to launch instances: %(reason)s" msgstr "無法啟動實例:%(reason)s" #, python-format msgid "Failed to map partitions: %s" msgstr "無法對映分割區:%s" #, python-format msgid "Failed to mount filesystem: %s" msgstr "無法裝載檔案系統:%s" msgid "Failed to parse information about a pci device for passthrough" msgstr "無法剖析 PCI passthrough 裝置的相關資訊" #, python-format msgid "Failed to power off instance: %(reason)s" msgstr "無法關閉實例的電源:%(reason)s" #, python-format msgid "Failed to power on instance: %(reason)s" msgstr "無法開啟實例的電源:%(reason)s" #, python-format msgid "" "Failed to prepare PCI device %(id)s for instance %(instance_uuid)s: " "%(reason)s" msgstr "無法為實例 %(instance_uuid)s 準備 PCI 裝置 %(id)s:%(reason)s" #, python-format msgid "Failed to provision instance %(inst)s: %(reason)s" msgstr "無法供應實例 %(inst)s:%(reason)s" #, python-format msgid "Failed to read or write disk info file: %(reason)s" msgstr "無法讀取或寫入磁碟資訊檔:%(reason)s" #, python-format msgid "Failed to reboot instance: %(reason)s" msgstr "無法重新啟動實例:%(reason)s" #, python-format msgid "Failed to remove volume(s): (%(reason)s)" msgstr "無法移除磁區:(%(reason)s)" #, python-format msgid "Failed to request Ironic to rebuild instance %(inst)s: %(reason)s" msgstr "無法要求 Ironic 來重建實例 %(inst)s:%(reason)s" #, python-format msgid "Failed to resume instance: %(reason)s" msgstr "無法回復實例:%(reason)s" #, python-format msgid "Failed to run qemu-img info on %(path)s : %(error)s" msgstr "無法在 %(path)s 上執行 qemu-img 資訊:%(error)s" #, python-format msgid "Failed to set admin password on %(instance)s because %(reason)s" msgstr "無法在 %(instance)s 上設定管理者密碼,因為 %(reason)s" msgid "Failed to spawn, rolling back" msgstr "無法大量產生,正在回復" #, python-format msgid "Failed to suspend instance: %(reason)s" msgstr "無法懸置實例:%(reason)s" #, python-format msgid "Failed to terminate instance: %(reason)s" msgstr "無法終止實例:%(reason)s" #, python-format msgid "Failed to unplug vif %s" msgstr "無法拔除 VIF %s" msgid "Failure prepping block device." msgstr "準備區塊裝置時失敗。" #, python-format msgid "File %(file_path)s could not be found." 
msgstr "找不到檔案 %(file_path)s。" #, python-format msgid "File path %s not valid" msgstr "檔案路徑 %s 無效" #, python-format msgid "Fixed IP %(ip)s is not a valid ip address for network %(network_id)s." msgstr "固定 IP %(ip)s 不是網路 %(network_id)s 的有效 IP 位址。" #, python-format msgid "Fixed IP %s is already in use." msgstr "固定 IP %s 已在使用中。" #, python-format msgid "" "Fixed IP address %(address)s is already in use on instance %(instance_uuid)s." msgstr "實例 %(instance_uuid)s 上已使用固定 IP 位址 %(address)s。" #, python-format msgid "Fixed IP not found for address %(address)s." msgstr "找不到位址 %(address)s 的固定 IP。" #, python-format msgid "Flavor %(flavor_id)s could not be found." msgstr "找不到特性 %(flavor_id)s。" #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(extra_specs_key)s." msgstr "特性 %(flavor_id)s 沒有索引鍵為 %(extra_specs_key)s 的額外規格。" #, python-format msgid "Flavor %(flavor_id)s has no extra specs with key %(key)s." msgstr "特性 %(flavor_id)s 沒有索引鍵為 %(key)s 的額外規格。" #, python-format msgid "" "Flavor %(id)s extra spec cannot be updated or created after %(retries)d " "retries." msgstr "在嘗試 %(retries)d 次之後,無法更新或建立特性 %(id)s 額外規格。" #, python-format msgid "" "Flavor access already exists for flavor %(flavor_id)s and project " "%(project_id)s combination." msgstr "特性 %(flavor_id)s 及專案%(project_id)s 組合已存在特性存取。" #, python-format msgid "Flavor access not found for %(flavor_id)s / %(project_id)s combination." msgstr "找不到 %(flavor_id)s / %(project_id)s 組合的特性存取。" msgid "Flavor used by the instance could not be found." msgstr "找不到實例所使用的特性。" #, python-format msgid "Flavor with ID %(flavor_id)s already exists." msgstr "ID 為 %(flavor_id)s 的特性已存在。" #, python-format msgid "Flavor with name %(flavor_name)s could not be found." msgstr "找不到名稱為 %(flavor_name)s 的特性。" #, python-format msgid "Flavor with name %(name)s already exists." msgstr "名稱為 %(name)s 的特性已存在。" #, python-format msgid "" "Flavor's disk is smaller than the minimum size specified in image metadata. " "Flavor disk is %(flavor_size)i bytes, minimum size is %(image_min_disk)i " "bytes." msgstr "" "特性磁碟小於映像檔 meta 資料中指定的大小下限。特性磁碟為 %(flavor_size)i 位元" "組,大小下限為 %(image_min_disk)i 位元組。" #, python-format msgid "" "Flavor's disk is too small for requested image. Flavor disk is " "%(flavor_size)i bytes, image is %(image_size)i bytes." msgstr "" "針對所要求的映像檔而言,特性磁碟太小。特性磁碟為%(flavor_size)i 位元組,映像" "檔為 %(image_size)i 位元組。" msgid "Flavor's memory is too small for requested image." msgstr "特性的記憶體太小,裝不下所要求的映像檔。" #, python-format msgid "Floating IP %(address)s association has failed." msgstr "浮動 IP %(address)s 關聯失敗。" #, python-format msgid "Floating IP %(address)s is associated." msgstr "已與浮動 IP %(address)s 產生關聯。" #, python-format msgid "Floating IP %(address)s is not associated with instance %(id)s." msgstr "浮動 IP %(address)s 未與實例 %(id)s 產生關聯。" #, python-format msgid "Floating IP not found for ID %(id)s." msgstr "找不到 ID %(id)s 的浮動 IP。" #, python-format msgid "Floating IP not found for ID %s" msgstr "找不到 ID %s 的浮動 IP" #, python-format msgid "Floating IP not found for address %(address)s." msgstr "找不到位址 %(address)s 的浮動 IP。" msgid "Floating IP pool not found." msgstr "找不到浮動 IP 儲存區。" msgid "" "Forbidden to exceed flavor value of number of serial ports passed in image " "meta." msgstr "已禁止超出映像檔 meta 中傳遞之序列埠數目的特性值。" msgid "Found no disk to snapshot." 
msgstr "找不到磁碟來取得 Snapshot。" #, python-format msgid "Found no network for bridge %s" msgstr "找不到橋接器 %s 的網路" #, python-format msgid "Found non-unique network for bridge %s" msgstr "發現橋接器 %s 的網路不是唯一的" #, python-format msgid "Found non-unique network for name_label %s" msgstr "發現 name_label %s 的網路不是唯一的" msgid "Guest does not have a console available." msgstr "訪客沒有主控台可用。" #, python-format msgid "Host %(host)s could not be found." msgstr "找不到主機 %(host)s。" #, python-format msgid "Host %(host)s is already mapped to cell %(uuid)s" msgstr "主機 %(host)s 已經對映至 Cell %(uuid)s" #, python-format msgid "Host '%(name)s' is not mapped to any cell" msgstr "主機 '%(name)s' 未對映至任何 Cell" msgid "Host PowerOn is not supported by the Hyper-V driver" msgstr "Hyper-V 驅動程式不支援主機 PowerOn" msgid "Host aggregate is not empty" msgstr "主機聚集不為空" msgid "Host does not support guests with NUMA topology set" msgstr "主機不支援具有 NUMA 拓蹼集的訪客" msgid "Host does not support guests with custom memory page sizes" msgstr "主機不支援具有自訂記憶體頁面大小的訪客" msgid "Host startup on XenServer is not supported." msgstr "不支援在 XenServer 上啟動主機。" msgid "Hypervisor driver does not support post_live_migration_at_source method" msgstr "Hypervisor 驅動程式不支援 post_live_migration_at_source 方法" #, python-format msgid "Hypervisor virt type '%s' is not valid" msgstr "Hypervisor virt 類型 '%s' 無效" #, python-format msgid "Hypervisor virtualization type '%(hv_type)s' is not recognised" msgstr "未辨識 Hypervisor 虛擬化類型 '%(hv_type)s'" #, python-format msgid "Hypervisor with ID '%s' could not be found." msgstr "找不到 ID 為 '%s' 的 Hypervisor。" #, python-format msgid "IP allocation over quota in pool %s." msgstr "IP 配置超過儲存區 %s 中的配額。" msgid "IP allocation over quota." msgstr "IP 配置超過配額。" #, python-format msgid "Image %(image_id)s could not be found." msgstr "找不到映像檔 %(image_id)s。" #, python-format msgid "Image %(image_id)s is not active." msgstr "映像檔 %(image_id)s 不在作用中。" #, python-format msgid "Image %(image_id)s is unacceptable: %(reason)s" msgstr "無法接受映像檔 %(image_id)s:%(reason)s" msgid "Image disk size greater than requested disk size" msgstr "映像檔磁碟大小大於所要求的磁碟大小" msgid "Image is not raw format" msgstr "映像檔不是原始格式" msgid "Image metadata limit exceeded" msgstr "已超出映像檔 meta 資料限制" #, python-format msgid "Image model '%(image)s' is not supported" msgstr "不支援映像檔模型 '%(image)s'" msgid "Image not found." msgstr "找不到映像檔。" #, python-format msgid "" "Image property '%(name)s' is not permitted to override NUMA configuration " "set against the flavor" msgstr "不允許映像檔內容 '%(name)s' 針對特性置換 NUMA 配置集" msgid "" "Image property 'hw_cpu_policy' is not permitted to override CPU pinning " "policy set against the flavor" msgstr "" "不允許使用映像檔內容 'hw_cpu_policy' 來置換針對特性所設定的 CPU 固定原則" msgid "" "Image property 'hw_cpu_thread_policy' is not permitted to override CPU " "thread pinning policy set against the flavor" msgstr "" "針對該特性,不允許映像檔內容 'hw_cpu_thread_policy' 置換 CPU 執行緒固定原則集" msgid "Image that the instance was started with could not be found." msgstr "找不到已用來啟動實例的映像檔。" #, python-format msgid "Image's config drive option '%(config_drive)s' is invalid" msgstr "映像檔的配置驅動選項 '%(config_drive)s' 無效" msgid "" "Images with destination_type 'volume' need to have a non-zero size specified" msgstr "destination_type 為 'volume' 的映像檔需要指定非零大小" msgid "In ERROR state" msgstr "處於 ERROR 狀態" #, python-format msgid "In states %(vm_state)s/%(task_state)s, not RESIZED/None" msgstr "處於狀態 %(vm_state)s/%(task_state)s,而不是 RESIZED/None" #, python-format msgid "In-progress live migration %(id)s is not found for server %(uuid)s." 
msgstr "找不到伺服器 %(uuid)s 的進行中即時移轉 %(id)s。" msgid "" "Incompatible settings: ephemeral storage encryption is supported only for " "LVM images." msgstr "不相容的設定:只有 LVM 映像檔才支援暫時儲存體加密。" #, python-format msgid "Info cache for instance %(instance_uuid)s could not be found." msgstr "找不到實例 %(instance_uuid)s 的資訊快取。" #, python-format msgid "" "Instance %(instance)s and volume %(vol)s are not in the same " "availability_zone. Instance is in %(ins_zone)s. Volume is in %(vol_zone)s" msgstr "" "實例 %(instance)s 與磁區 %(vol)s 不在同一可用性區域中。實例在 %(ins_zone)s " "中,而磁區在 %(vol_zone)s 中" #, python-format msgid "Instance %(instance)s does not have a port with id %(port)s" msgstr "實例 %(instance)s 沒有 ID 為 %(port)s 的埠" #, python-format msgid "Instance %(instance_id)s cannot be rescued: %(reason)s" msgstr "無法救援實例 %(instance_id)s:%(reason)s" #, python-format msgid "Instance %(instance_id)s could not be found." msgstr "找不到實例 %(instance_id)s。" #, python-format msgid "Instance %(instance_id)s has no tag '%(tag)s'" msgstr "實例 %(instance_id)s 沒有標籤 '%(tag)s'" #, python-format msgid "Instance %(instance_id)s is not in rescue mode" msgstr "實例 %(instance_id)s 不處於救援模式" #, python-format msgid "Instance %(instance_id)s is not ready" msgstr "實例 %(instance_id)s 未備妥" #, python-format msgid "Instance %(instance_id)s is not running." msgstr "實例 %(instance_id)s 不在執行中。" #, python-format msgid "Instance %(instance_id)s is unacceptable: %(reason)s" msgstr "無法接受實例 %(instance_id)s:%(reason)s" #, python-format msgid "Instance %(instance_uuid)s does not specify a NUMA topology" msgstr "實例 %(instance_uuid)s 未指定 NUMA 拓蹼" #, python-format msgid "Instance %(instance_uuid)s does not specify a migration context." msgstr "實例 %(instance_uuid)s 未指定移轉環境定義。" #, python-format msgid "" "Instance %(instance_uuid)s in %(attr)s %(state)s. Cannot %(method)s while " "the instance is in this state." msgstr "" "實例 %(instance_uuid)s 處於 %(attr)s %(state)s。實例處於此狀態時,無法 " "%(method)s。" #, python-format msgid "Instance %(instance_uuid)s is locked" msgstr "已鎖定實例 %(instance_uuid)s" #, python-format msgid "" "Instance %(instance_uuid)s requires config drive, but it does not exist." msgstr "實例 %(instance_uuid)s 需要配置磁碟機,但該磁碟機不存在。" #, python-format msgid "Instance %(name)s already exists." msgstr "實例 %(name)s 已存在。" #, python-format msgid "Instance %(server_id)s is in an invalid state for '%(action)s'" msgstr "實例 %(server_id)s 處於無效的狀態,無法%(action)s" #, python-format msgid "Instance %(uuid)s has no mapping to a cell." msgstr "實例 %(uuid)s 沒有與 Cell 的對映。" #, python-format msgid "Instance %s not found" msgstr "找不到實例 %s" #, python-format msgid "Instance %s provisioning was aborted" msgstr "已中斷實例 %s 供應" msgid "Instance could not be found" msgstr "找不到實例" msgid "Instance disk to be encrypted but no context provided" msgstr "即將加密實例磁碟,但卻未提供環境定義" msgid "Instance event failed" msgstr "實例事件失敗" #, python-format msgid "Instance group %(group_uuid)s already exists." msgstr "實例群組 %(group_uuid)s 已存在。" #, python-format msgid "Instance group %(group_uuid)s could not be found." msgstr "找不到實例群組 %(group_uuid)s。" msgid "Instance has no source host" msgstr "實例沒有來源主機" msgid "Instance has not been resized." 
msgstr "尚未調整實例大小。" #, python-format msgid "Instance hostname %(hostname)s is not a valid DNS name" msgstr "實例主機名 %(hostname)s 不是有效的 DNS 名稱" #, python-format msgid "Instance is already in Rescue Mode: %s" msgstr "實例已處於救援模式:%s" msgid "Instance is not a member of specified network" msgstr "實例不是所指定網路的成員" #, python-format msgid "Instance rollback performed due to: %s" msgstr "已執行實例回復作業,原因:%s" #, python-format msgid "" "Insufficient Space on Volume Group %(vg)s. Only %(free_space)db available, " "but %(size)d bytes required by volume %(lv)s." msgstr "" "磁區群組 %(vg)s 上的空間不足。僅 %(free_space)db 可用,但磁區 %(lv)s 需要 " "%(size)d 位元組。" #, python-format msgid "Insufficient compute resources: %(reason)s." msgstr "計算資源不足:%(reason)s。" #, python-format msgid "Insufficient free memory on compute node to start %(uuid)s." msgstr "計算節點上的可用記憶體不足以啟動 %(uuid)s。" #, python-format msgid "Interface %(interface)s not found." msgstr "找不到介面 %(interface)s。" #, python-format msgid "Invalid Base 64 data for file %(path)s" msgstr "檔案 %(path)s 的 Base 64 資料無效" msgid "Invalid Connection Info" msgstr "無效的連線資訊" #, python-format msgid "Invalid ID received %(id)s." msgstr "收到無效的 ID %(id)s。" #, python-format msgid "Invalid IP format %s" msgstr "無效的 IP 格式 %s" #, python-format msgid "Invalid IP protocol %(protocol)s." msgstr "無效的 IP 通訊協定 %(protocol)s。" msgid "" "Invalid PCI Whitelist: The PCI whitelist can specify devname or address, but " "not both" msgstr "無效的 PCI 白名單:PCI 白名單可以指定裝置名稱或位址,但不能指定這兩者" #, python-format msgid "Invalid PCI alias definition: %(reason)s" msgstr "無效的 PCI 別名定義:%(reason)s" #, python-format msgid "Invalid Regular Expression %s" msgstr "無效的正規表示式 %s" #, python-format msgid "Invalid characters in hostname '%(hostname)s'" msgstr "主機名稱 '%(hostname)s' 中有無效字元" msgid "Invalid config_drive provided." msgstr "提供的 config_drive 無效。" #, python-format msgid "Invalid config_drive_format \"%s\"" msgstr "config_drive_format \"%s\" 無效" #, python-format msgid "Invalid console type %(console_type)s" msgstr "無效的主控台類型 %(console_type)s" #, python-format msgid "Invalid content type %(content_type)s." msgstr "無效的內容類型 %(content_type)s。" #, python-format msgid "Invalid datetime string: %(reason)s" msgstr "無效的日期時間字串:%(reason)s" msgid "Invalid device UUID." msgstr "無效的裝置 UUID。" #, python-format msgid "Invalid entry: '%s'" msgstr "項目 '%s' 無效" #, python-format msgid "Invalid entry: '%s'; Expecting dict" msgstr "項目 '%s' 無效;預期字典" #, python-format msgid "Invalid entry: '%s'; Expecting list or dict" msgstr "項目 '%s' 無效;預期清單或字典" #, python-format msgid "Invalid exclusion expression %r" msgstr "無效的排除表示式 %r" #, python-format msgid "Invalid image format '%(format)s'" msgstr "映像檔格式 '%(format)s' 無效" #, python-format msgid "Invalid image href %(image_href)s." msgstr "無效的映像檔 href %(image_href)s。" #, python-format msgid "Invalid inclusion expression %r" msgstr "無效的併入表示式 %r" #, python-format msgid "" "Invalid input for field/attribute %(path)s. Value: %(value)s. %(message)s" msgstr "欄位/屬性 %(path)s 的輸入無效。值:%(value)s。%(message)s" #, python-format msgid "Invalid input received: %(reason)s" msgstr "收到的輸入無效:%(reason)s" msgid "Invalid instance image." msgstr "無效的實例映像檔。" #, python-format msgid "Invalid is_public filter [%s]" msgstr "無效的 is_public 過濾器 [%s]" msgid "Invalid key_name provided." 
msgstr "提供的 key_name 無效。" #, python-format msgid "Invalid memory page size '%(pagesize)s'" msgstr "記憶體頁面大小 '%(pagesize)s' 無效" msgid "Invalid metadata key" msgstr "無效的 meta 資料索引鍵" #, python-format msgid "Invalid metadata size: %(reason)s" msgstr "無效的 meta 資料大小:%(reason)s" #, python-format msgid "Invalid metadata: %(reason)s" msgstr "無效的 meta 資料:%(reason)s" #, python-format msgid "Invalid minDisk filter [%s]" msgstr "無效的 minDisk 過濾器 [%s]" #, python-format msgid "Invalid minRam filter [%s]" msgstr "無效的 minRam 過濾器 [%s]" #, python-format msgid "Invalid port range %(from_port)s:%(to_port)s. %(msg)s" msgstr "無效的埠範圍 %(from_port)s:%(to_port)s。%(msg)s" msgid "Invalid proxy request signature." msgstr "無效的 Proxy 要求簽章。" #, python-format msgid "Invalid range expression %r" msgstr "無效的範圍表示式 %r" msgid "Invalid service catalog json." msgstr "無效的服務型錄 JSON。" msgid "Invalid start time. The start time cannot occur after the end time." msgstr "無效的開始時間。開始時間不能在結束時間之後。" msgid "Invalid state of instance files on shared storage" msgstr "共用儲存體上實例檔案的狀態無效" #, python-format msgid "Invalid timestamp for date %s" msgstr "日期 %s 的時間戳記無效" #, python-format msgid "Invalid usage_type: %s" msgstr "usage_type %s 無效" #, python-format msgid "Invalid value for Config Drive option: %(option)s" msgstr "「配置驅動」選項 %(option)s 的值無效" #, python-format msgid "Invalid virtual interface address %s in request" msgstr "要求中的虛擬介面位址 %s 無效" #, python-format msgid "Invalid volume access mode: %(access_mode)s" msgstr "無效的磁區存取模式:%(access_mode)s" #, python-format msgid "Invalid volume: %(reason)s" msgstr "無效的磁區:%(reason)s" msgid "Invalid volume_size." msgstr "無效的 volume_size。" #, python-format msgid "Ironic node uuid not supplied to driver for instance %s." msgstr "未將 Ironic 節點 UUID 提供給實例 %s 的驅動程式。" #, python-format msgid "" "It is not allowed to create an interface on external network %(network_uuid)s" msgstr "不容許在下列外部網路上建立介面:%(network_uuid)s" #, python-format msgid "" "Kernel/Ramdisk image is too large: %(vdi_size)d bytes, max %(max_size)d bytes" msgstr "" "核心/Ramdisk 映像檔太大:%(vdi_size)d 個位元組,上限為 %(max_size)d 個位元組" msgid "" "Key Names can only contain alphanumeric characters, periods, dashes, " "underscores, colons and spaces." msgstr "索引鍵名稱只能包含英數字元、句點、橫線、底線、冒號及空格。" #, python-format msgid "Key manager error: %(reason)s" msgstr "金鑰管理程式錯誤:%(reason)s" #, python-format msgid "Key pair '%(key_name)s' already exists." msgstr "金鑰組 '%(key_name)s' 已存在。" #, python-format msgid "Keypair %(name)s not found for user %(user_id)s" msgstr "找不到使用者 %(user_id)s 的金鑰組 %(name)s" #, python-format msgid "Keypair data is invalid: %(reason)s" msgstr "金鑰組資料無效:%(reason)s" msgid "Keypair name contains unsafe characters" msgstr "金鑰組名稱包含不安全的字元" msgid "Keypair name must be string and between 1 and 255 characters long" msgstr "金鑰組名稱必須是字串,並且長度必須介於 1 和 255 個字元之間" msgid "Limits only supported from vCenter 6.0 and above" msgstr "只有 vCenter 6.0 及更高版本中的限制才受支援" #, python-format msgid "Live migration %(id)s for server %(uuid)s is not in progress." msgstr "伺服器 %(uuid)s 的即時移轉 %(id)s 不在進行中。" #, python-format msgid "Malformed message body: %(reason)s" msgstr "訊息內文的格式不正確:%(reason)s" #, python-format msgid "" "Malformed request URL: URL's project_id '%(project_id)s' doesn't match " "Context's project_id '%(context_project_id)s'" msgstr "" "要求 URL 的格式不正確:URL 的 project_id '%(project_id)s',與環境定義的 " "project_id '%(context_project_id)s' 不符" msgid "Malformed request body" msgstr "要求內文的格式不正確" msgid "Mapping image to local is not supported." 
msgstr "不支援將映像檔對映至本端。" #, python-format msgid "Marker %(marker)s could not be found." msgstr "找不到標記 %(marker)s。" msgid "Maximum number of floating IPs exceeded" msgstr "超過了浮動 IP 數目上限" msgid "Maximum number of key pairs exceeded" msgstr "已超出金鑰組數目上限" #, python-format msgid "Maximum number of metadata items exceeds %(allowed)d" msgstr "meta 資料項目數目上限已超出所允許的 %(allowed)d" msgid "Maximum number of ports exceeded" msgstr "已超出埠數目上限" msgid "Maximum number of security groups or rules exceeded" msgstr "已超出安全群組或規則數目上限" msgid "Metadata item was not found" msgstr "找不到 meta 資料項目" msgid "Metadata property key greater than 255 characters" msgstr "meta 資料內容索引鍵超過 255 個字元" msgid "Metadata property value greater than 255 characters" msgstr "meta 資料內容值超過 255 個字元" msgid "Metadata type should be dict." msgstr "meta 資料類型應該是字典。" #, python-format msgid "" "Metric %(name)s could not be found on the compute host node %(host)s." "%(node)s." msgstr "在計算主機節點 %(host)s.%(node)s 上找不到度量 %(name)s。" msgid "Migrate Receive failed" msgstr "移轉接收失敗" msgid "Migrate Send failed" msgstr "移轉傳送失敗" #, python-format msgid "Migration %(id)s for server %(uuid)s is not live-migration." msgstr "伺服器 %(uuid)s 的移轉 %(id)s 不是即時移轉。" #, python-format msgid "Migration %(migration_id)s could not be found." msgstr "找不到移轉 %(migration_id)s。" #, python-format msgid "Migration %(migration_id)s not found for instance %(instance_id)s" msgstr "找不到實例 %(instance_id)s 的移轉 %(migration_id)s" #, python-format msgid "" "Migration %(migration_id)s state of instance %(instance_uuid)s is %(state)s. " "Cannot %(method)s while the migration is in this state." msgstr "" "實例 %(instance_uuid)s 的移轉 %(migration_id)s 狀態為 %(state)s。當移轉處於這" "種狀態時,無法執行 %(method)s。" #, python-format msgid "Migration error: %(reason)s" msgstr "移轉錯誤:%(reason)s" msgid "Migration is not supported for LVM backed instances" msgstr "不支援移轉以 LVM 為基礎的實例" #, python-format msgid "" "Migration not found for instance %(instance_id)s with status %(status)s." msgstr "找不到實例 %(instance_id)s(狀態為 %(status)s)的移轉。" #, python-format msgid "Migration pre-check error: %(reason)s" msgstr "移轉事先檢查發生錯誤:%(reason)s" #, python-format msgid "Migration select destinations error: %(reason)s" msgstr "移轉選取目的地錯誤:%(reason)s" #, python-format msgid "Missing arguments: %s" msgstr "遺漏引數:%s" #, python-format msgid "Missing column %(table)s.%(column)s in shadow table" msgstr "備份副本表格中遺漏了直欄 %(table)s.%(column)s" msgid "Missing device UUID." msgstr "遺漏裝置 UUID。" msgid "Missing disabled reason field" msgstr "遺漏了停用原因欄位" msgid "Missing forced_down field" msgstr "遺漏 forced_down 欄位" msgid "Missing imageRef attribute" msgstr "遺漏了 imageRef 屬性" #, python-format msgid "Missing keys: %s" msgstr "遺漏了索引鍵:%s" #, python-format msgid "Missing parameter %s" msgstr "遺漏參數 %s" msgid "Missing parameter dict" msgstr "遺漏了參數字典" #, python-format msgid "" "More than one instance is associated with fixed IP address '%(address)s'." msgstr "有多個實例與固定 IP 位址 '%(address)s' 相關聯。" msgid "" "More than one possible network found. Specify network ID(s) to select which " "one(s) to connect to." msgstr "找到多個可能的網路。請指定網路 ID 以選取要連接的網路。" msgid "More than one swap drive requested." msgstr "已要求多個交換磁碟機。" #, python-format msgid "Multi-boot operating system found in %s" msgstr "在 %s 中找到了多重啟動作業系統" msgid "Multiple X-Instance-ID headers found within request." msgstr "在要求中發現多個 X-Instance-ID 標頭。" msgid "Multiple X-Tenant-ID headers found within request." 
msgstr "在要求中發現多個 X-Tenant-ID 標頭。" #, python-format msgid "Multiple floating IP pools matches found for name '%s'" msgstr "找到名稱 '%s' 的多個浮動 IP 儲存區相符項" #, python-format msgid "Multiple floating IPs are found for address %(address)s." msgstr "找到位址 %(address)s 的多個浮動 IP。" msgid "" "Multiple hosts may be managed by the VMWare vCenter driver; therefore we do " "not return uptime for just one host." msgstr "" "多個主機可能由 VMWare vCenter 驅動程式進行管理;因此,將不會儘傳回一個主機執" "行時間。" msgid "Multiple possible networks found, use a Network ID to be more specific." msgstr "找到多個可能的網路,請使用更明確的網路 ID。" #, python-format msgid "" "Multiple security groups found matching '%s'. Use an ID to be more specific." msgstr "找到多個與 '%s' 相符的安全群組。請使用更明確的 ID。" msgid "Must input network_id when request IP address" msgstr "要求 IP 位址時,必須輸入 network_id" msgid "Must not input both network_id and port_id" msgstr "不得同時輸入 network_id 和 port_id" msgid "" "Must specify connection_url, connection_username (optionally), and " "connection_password to use compute_driver=xenapi.XenAPIDriver" msgstr "" "必須指定 connection_url、connection_username(選用項目)及" "connection_password,才能使用 compute_driver=xenapi.XenAPIDriver" msgid "" "Must specify host_ip, host_username and host_password to use vmwareapi." "VMwareVCDriver" msgstr "" "必須指定 host_ip、host_username 及 host_password,才能使用vmwareapi." "VMwareVCDriver" msgid "Must supply a positive value for max_number" msgstr "必須為 max_number 提供一個正值" msgid "Must supply a positive value for max_rows" msgstr "必須為 max_rows 提供正值" #, python-format msgid "Network %(network_id)s could not be found." msgstr "找不到網路 %(network_id)s。" #, python-format msgid "" "Network %(network_uuid)s requires a subnet in order to boot instances on." msgstr "網路 %(network_uuid)s 需要子網路才能啟動實例。" #, python-format msgid "Network could not be found for bridge %(bridge)s" msgstr "找不到橋接器 %(bridge)s 的網路" #, python-format msgid "Network could not be found for instance %(instance_id)s." msgstr "找不到實例 %(instance_id)s 的網路。" msgid "Network not found" msgstr "找不到網路" msgid "" "Network requires port_security_enabled and subnet associated in order to " "apply security groups." msgstr "網路需要 port_security_enabled 及相關聯的子網路,才能套用安全群組。" msgid "New volume must be detached in order to swap." msgstr "新磁區必須分離才能交換。" msgid "New volume must be the same size or larger." msgstr "新磁區必須具有相同大小或者更大。" #, python-format msgid "No Block Device Mapping with id %(id)s." msgstr "沒有 ID 為 %(id)s 的區塊裝置對映。" msgid "No Unique Match Found." msgstr "找不到唯一相符項。" #, python-format msgid "No agent-build associated with id %(id)s." msgstr "ID %(id)s 沒有相關聯的 agent-build。" msgid "No compute host specified" msgstr "未指定計算主機" #, python-format msgid "No configuration information found for operating system %(os_name)s" msgstr "找不到作業系統 %(os_name)s 的配置資訊" #, python-format msgid "No device with MAC address %s exists on the VM" msgstr "VM 上不存在 MAC 位址為 %s 的裝置" #, python-format msgid "No device with interface-id %s exists on VM" msgstr "VM 上不存在 interface-id 為 %s 的裝置" #, python-format msgid "No disk at %(location)s" msgstr "%(location)s 處沒有磁碟" #, python-format msgid "No fixed IP addresses available for network: %(net)s" msgstr "網路 %(net)s 沒有可用的固定 IP 位址" msgid "No fixed IPs associated to instance" msgstr "沒有固定 IP 與實例相關聯" msgid "No free nbd devices" msgstr "沒有可用的 NBD 裝置" msgid "No host available on cluster" msgstr "叢集上沒有可用的主機" msgid "No hosts found to map to cell, exiting." msgstr "找不到要對映至 Cell 的主機,正在結束。" #, python-format msgid "No hypervisor matching '%s' could be found." 
msgstr "找不到與 '%s' 相符的 Hypervisor。" msgid "No image locations are accessible" msgstr "任何映像檔位置均不可存取" #, python-format msgid "" "No live migration URI configured and no default available for \"%(virt_type)s" "\" hypervisor virtualization type." msgstr "" "未配置任何即時移轉 URI,且 \"%(virt_type)s\" Hypervisor 虛擬化類型無法使用任" "何預設值。" msgid "No more floating IPs available." msgstr "沒有更多的浮動 IP 可供使用。" #, python-format msgid "No more floating IPs in pool %s." msgstr "儲存區 %s 中沒有更多的浮動 IP。" #, python-format msgid "No mount points found in %(root)s of %(image)s" msgstr "在 %(image)s 的 %(root)s 中找不到裝載點" #, python-format msgid "No operating system found in %s" msgstr "在 %s 中找不到作業系統" #, python-format msgid "No primary VDI found for %s" msgstr "找不到 %s 的主要 VDI" msgid "No root disk defined." msgstr "未定義根磁碟。" #, python-format msgid "" "No specific network was requested and none are available for project " "'%(project_id)s'." msgstr "未要求任何特定網路,且專案 '%(project_id)s' 無法使用任何網路。" msgid "No suitable network for migrate" msgstr "沒有適合於移轉的網路" msgid "No valid host found for cold migrate" msgstr "找不到有效的主機進行冷移轉" msgid "No valid host found for resize" msgstr "找不到要調整其大小的有效主機" #, python-format msgid "No valid host was found. %(reason)s" msgstr "找不到有效的主機。%(reason)s" #, python-format msgid "No volume Block Device Mapping at path: %(path)s" msgstr "路徑 %(path)s 處不存在磁區區塊裝置對映" #, python-format msgid "No volume Block Device Mapping with id %(volume_id)s." msgstr "不存在 ID 為 %(volume_id)s 的磁區區塊裝置對映。" #, python-format msgid "Node %s could not be found." msgstr "找不到節點 %s。" #, python-format msgid "Not able to acquire a free port for %(host)s" msgstr "無法獲得 %(host)s 的可用埠" #, python-format msgid "Not able to bind %(host)s:%(port)d, %(error)s" msgstr "無法連結 %(host)s:%(port)d,%(error)s" #, python-format msgid "" "Not all Virtual Functions of PF %(compute_node_id)s:%(address)s are free." msgstr "並非 PF %(compute_node_id)s:%(address)s 的所有虛擬函數都可用。" msgid "Not an rbd snapshot" msgstr "不是 rbd Snapshot" #, python-format msgid "Not authorized for image %(image_id)s." msgstr "未獲映像檔 %(image_id)s 的授權。" msgid "Not authorized." msgstr "未被授權" msgid "Not enough parameters to build a valid rule." msgstr "參數數目不足以建置有效的規則。" msgid "Not implemented on Windows" msgstr "未在 Windows 上實作" msgid "Not stored in rbd" msgstr "未儲存在 rbd 中" msgid "Nothing was archived." msgstr "未保存任何內容。" #, python-format msgid "Nova requires libvirt version %s or greater." msgstr "Nova 需要 libVirt %s 版或更高版本。" msgid "Number of Rows Archived" msgstr "已保存的列數" #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "物件動作 %(action)s 失敗,原因:%(reason)s" msgid "Old volume is attached to a different instance." msgstr "已將舊磁區連接至其他實例。" #, python-format msgid "One or more hosts already in availability zone(s) %s" msgstr "一個以上的主機已經位於可用性區域 %s 中" msgid "Only administrators may list deleted instances" msgstr "只有管理者才能列出已刪除的實例" #, python-format msgid "" "Only file-based SRs (ext/NFS) are supported by this feature. SR %(uuid)s is " "of type %(type)s" msgstr "" "此特性僅支援檔案型「儲存體儲存庫 (SR)」(ext/NFS)。「儲存體儲存庫 " "(SR)」%(uuid)s 的類型是%(type)s" msgid "Origin header does not match this host." msgstr "原始標頭與此主機不符。" msgid "Origin header not valid." msgstr "原始標頭無效。" msgid "Origin header protocol does not match this host." msgstr "原始標頭通訊協定與此主機不符。" #, python-format msgid "PCI Device %(node_id)s:%(address)s not found." 
msgstr "找不到 PCI 裝置 %(node_id)s:%(address)s。" #, python-format msgid "PCI alias %(alias)s is not defined" msgstr "未定義 PCI 別名 %(alias)s" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is %(status)s instead of " "%(hopestatus)s" msgstr "" "PCI 裝置 %(compute_node_id)s:%(address)s 的狀態是 %(status)s,而不" "是%(hopestatus)s" #, python-format msgid "" "PCI device %(compute_node_id)s:%(address)s is owned by %(owner)s instead of " "%(hopeowner)s" msgstr "" "PCI 裝置 %(compute_node_id)s:%(address)s 的擁有者是 %(owner)s,而不" "是%(hopeowner)s" #, python-format msgid "PCI device %(id)s not found" msgstr "找不到 PCI 裝置 %(id)s" #, python-format msgid "PCI device request %(requests)s failed" msgstr "PCI 裝置要求 %(requests)s 失敗" #, python-format msgid "PIF %s does not contain IP address" msgstr "PIF %s 不包含 IP 位址" #, python-format msgid "Page size %(pagesize)s forbidden against '%(against)s'" msgstr "針對 '%(against)s',已禁止頁面大小 %(pagesize)s" #, python-format msgid "Page size %(pagesize)s is not supported by the host." msgstr "主機不支援頁面大小 %(pagesize)s。" #, python-format msgid "" "Parameters %(missing_params)s not present in vif_details for vif %(vif_id)s. " "Check your Neutron configuration to validate that the macvtap parameters are " "correct." msgstr "" "參數 %(missing_params)s 未呈現在 vif %(vif_id)s 的 vif_details 中。 請檢查 " "Neutron 配置,以確認 macvtap 參數是正確的。" #, python-format msgid "Path %s must be LVM logical volume" msgstr "路徑 %s 必須是 LVM 邏輯磁區" msgid "Paused" msgstr "已暫停" msgid "Personality file limit exceeded" msgstr "已超出特質檔案限制" #, python-format msgid "" "Physical Function %(compute_node_id)s:%(address)s, related to VF " "%(compute_node_id)s:%(vf_address)s is %(status)s instead of %(hopestatus)s" msgstr "" "與 VF %(compute_node_id)s:%(vf_address)s 相關的實體函數 %(compute_node_id)s:" "%(address)s 處於 %(status)s 狀態,而不是 %(hopestatus)s 狀態" #, python-format msgid "Physical network is missing for network %(network_uuid)s" msgstr "遺漏了用於網路 %(network_uuid)s 的實體網路" #, python-format msgid "Policy doesn't allow %(action)s to be performed." msgstr "原則不容許執行 %(action)s。" #, python-format msgid "Port %(port_id)s is still in use." msgstr "埠 %(port_id)s 仍在使用中。" #, python-format msgid "Port %(port_id)s not usable for instance %(instance)s." msgstr "埠 %(port_id)s 不適用於實例 %(instance)s。" #, python-format msgid "" "Port %(port_id)s not usable for instance %(instance)s. Value %(value)s " "assigned to dns_name attribute does not match instance's hostname " "%(hostname)s" msgstr "" "實例 %(instance)s 無法使用埠 %(port_id)s。指派給 dns_name 屬性的值 %(value)s " "與實例的主機名 %(hostname)s 不符" #, python-format msgid "Port %(port_id)s requires a FixedIP in order to be used." msgstr "埠 %(port_id)s 需要固定 IP 才能使用。" #, python-format msgid "Port %s is not attached" msgstr "未連接埠 %s" #, python-format msgid "Port id %(port_id)s could not be found." msgstr "找不到埠 ID %(port_id)s。" #, python-format msgid "Provided video model (%(model)s) is not supported." msgstr "不支援提供的視訊模型 (%(model)s)。" #, python-format msgid "Provided watchdog action (%(action)s) is not supported." msgstr "不支援所提供的監視器動作 (%(action)s)。" msgid "QEMU guest agent is not enabled" msgstr "未啟用 QEMU 訪客代理程式" #, python-format msgid "Quiescing is not supported in instance %(instance_id)s" msgstr "實例 %(instance_id)s 不支援靜止" #, python-format msgid "Quota class %(class_name)s could not be found." 
msgstr "找不到配額類別 %(class_name)s。" msgid "Quota could not be found" msgstr "找不到配額" #, python-format msgid "" "Quota exceeded for %(overs)s: Requested %(req)s, but already used %(used)s " "of %(allowed)s %(overs)s" msgstr "" "%(overs)s 已超出配額:要求 %(req)s,但已經使用了 %(used)s %(allowed)s " "%(overs)s" #, python-format msgid "Quota exceeded for resources: %(overs)s" msgstr "資源已超出配額:%(overs)s" msgid "Quota exceeded, too many key pairs." msgstr "已超出配額,金鑰組太多。" msgid "Quota exceeded, too many server groups." msgstr "已超出配額,伺服器群組太多。" msgid "Quota exceeded, too many servers in group" msgstr "已超出配額,群組中的伺服器太多" #, python-format msgid "Quota exceeded: code=%(code)s" msgstr "已超出配額:錯誤碼 = %(code)s" #, python-format msgid "Quota exists for project %(project_id)s, resource %(resource)s" msgstr "專案 %(project_id)s 資源 %(resource)s 已存在配額" #, python-format msgid "Quota for project %(project_id)s could not be found." msgstr "找不到專案 %(project_id)s 的配額。" #, python-format msgid "" "Quota for user %(user_id)s in project %(project_id)s could not be found." msgstr "找不到專案 %(project_id)s 中使用者 %(user_id)s 的配額。" #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be greater than or equal to " "already used and reserved %(minimum)s." msgstr "" "%(resource)s 的配額限制 %(limit)s 必須大於或等於已經使用並保留的 " "%(minimum)s。" #, python-format msgid "" "Quota limit %(limit)s for %(resource)s must be less than or equal to " "%(maximum)s." msgstr "%(resource)s 的配額限制 %(limit)s 必須小於或等於%(maximum)s。" #, python-format msgid "Reached maximum number of retries trying to unplug VBD %s" msgstr "嘗試拔除 VBD %s 時達到了重試次數上限" msgid "" "Realtime policy needs vCPU(s) mask configured with at least 1 RT vCPU and 1 " "ordinary vCPU. See hw:cpu_realtime_mask or hw_cpu_realtime_mask" msgstr "" "即時原則需要使用至少 1 個 RT vCPU 和 1 個普通 vCPU 進行配置的 vCPU 遮罩。請參" "閱 hw:cpu_realtime_mask 或 hw_cpu_realtime_mask" msgid "Request body and URI mismatch" msgstr "要求內文與 URI 不符" msgid "Request is too large." msgstr "要求太大。" #, python-format msgid "Request of image %(image_id)s got BadRequest response: %(response)s" msgstr "映像檔 %(image_id)s 的要求取得 BadRequest 回應:%(response)s" #, python-format msgid "RequestSpec not found for instance %(instance_uuid)s" msgstr "找不到實例 %(instance_uuid)s 的 RequestSpec" msgid "Requested CPU control policy not supported by host" msgstr "主機不支援所要求的 CPU 控制原則" #, python-format msgid "" "Requested hardware '%(model)s' is not supported by the '%(virt)s' virt driver" msgstr "'%(virt)s' 虛擬化驅動程式不支援所要求的硬體 '%(model)s'" #, python-format msgid "Requested image %(image)s has automatic disk resize disabled." msgstr "所要求的映像檔 %(image)s 已禁止自動調整磁碟大小。" msgid "" "Requested instance NUMA topology cannot fit the given host NUMA topology" msgstr "所要求的實例 NUMA 拓蹼無法適合給定的主機 NUMA 拓蹼" msgid "" "Requested instance NUMA topology together with requested PCI devices cannot " "fit the given host NUMA topology" msgstr "所要求的實例 NUMA 拓蹼及所要求的 PCI 裝置無法適合給定的主機 NUMA 拓蹼" #, python-format msgid "" "Requested vCPU limits %(sockets)d:%(cores)d:%(threads)d are impossible to " "satisfy for vcpus count %(vcpus)d" msgstr "" "所要求的 vCPU 限制 %(sockets)d:%(cores)d:%(threads)d 無法滿足 vCPU 計數 " "%(vcpus)d" #, python-format msgid "Rescue device does not exist for instance %s" msgstr "實例 %s 的救援裝置不存在" #, python-format msgid "Resize error: %(reason)s" msgstr "調整大小錯誤:%(reason)s" msgid "Resize to zero disk flavor is not allowed." msgstr "不容許調整大小至 0 磁碟特性。" msgid "Resource could not be found." 
msgstr "找不到資源。" msgid "Resumed" msgstr "已恢復" #, python-format msgid "Root element name should be '%(name)s' not '%(tag)s'" msgstr "根元素名稱應該為 '%(name)s' 而不是 '%(tag)s'" #, python-format msgid "Running batches of %i until complete" msgstr "正在執行 %i 的各個批次,直到完成為止" #, python-format msgid "Scheduler Host Filter %(filter_name)s could not be found." msgstr "找不到「排程器主機過濾器」%(filter_name)s。" #, python-format msgid "Security group %(name)s is not found for project %(project)s" msgstr "找不到專案 %(project)s 的安全群組 %(name)s" #, python-format msgid "" "Security group %(security_group_id)s not found for project %(project_id)s." msgstr "找不到專案 %(project_id)s 的安全群組 %(security_group_id)s。" #, python-format msgid "Security group %(security_group_id)s not found." msgstr "找不到安全群組 %(security_group_id)s。" #, python-format msgid "" "Security group %(security_group_name)s already exists for project " "%(project_id)s." msgstr "專案 %(project_id)s 已存在安全群組%(security_group_name)s。" #, python-format msgid "" "Security group %(security_group_name)s not associated with the instance " "%(instance)s" msgstr "安全群組 %(security_group_name)s 未與實例 %(instance)s 產生關聯" msgid "Security group id should be uuid" msgstr "安全群組 ID 應該是 UUID" msgid "Security group name cannot be empty" msgstr "安全群組名稱不能是空的" msgid "Security group not specified" msgstr "未指定安全群組" #, python-format msgid "Server disk was unable to be resized because: %(reason)s" msgstr "無法調整伺服器磁碟的大小,原因:%(reason)s" msgid "Server does not exist" msgstr "伺服器不存在" #, python-format msgid "ServerGroup policy is not supported: %(reason)s" msgstr "不支援 ServerGroup 原則:%(reason)s" msgid "ServerGroupAffinityFilter not configured" msgstr "未配置 ServerGroupAffinityFilter" msgid "ServerGroupAntiAffinityFilter not configured" msgstr "未配置 ServerGroupAntiAffinityFilter" msgid "ServerGroupSoftAffinityWeigher not configured" msgstr "未配置 ServerGroupSoftAffinityWeigher" msgid "ServerGroupSoftAntiAffinityWeigher not configured" msgstr "未配置 ServerGroupSoftAntiAffinityWeigher" #, python-format msgid "Service %(service_id)s could not be found." msgstr "找不到服務 %(service_id)s。" #, python-format msgid "Service %s not found." msgstr "找不到服務 %s。" msgid "Service is unavailable at this time." msgstr "此時無法使用服務。" #, python-format msgid "Service with host %(host)s binary %(binary)s exists." msgstr "主機為 %(host)s 且二進位檔為 %(binary)s 的服務已存在。" #, python-format msgid "Service with host %(host)s topic %(topic)s exists." msgstr "主機為 %(host)s 且主題為 %(topic)s 的服務已存在。" msgid "Set admin password is not supported" msgstr "不支援設定管理密碼" #, python-format msgid "Shadow table with name %(name)s already exists." msgstr "名稱為 %(name)s 的備份副本表格已存在。" #, python-format msgid "Share '%s' is not supported" msgstr "不支援共用 '%s'" #, python-format msgid "Share level '%s' cannot have share configured" msgstr "共用層次 '%s' 不能配置共用" msgid "" "Shrinking the filesystem down with resize2fs has failed, please check if you " "have enough free space on your disk." msgstr "" "使用 resize2fs 來縮小檔案系統時失敗,請檢查磁碟上是否具有足夠的可用空間。" #, python-format msgid "Snapshot %(snapshot_id)s could not be found." msgstr "找不到 Snapshot %(snapshot_id)s。" msgid "Some required fields are missing" msgstr "遺漏了部分必要欄位" #, python-format msgid "" "Something went wrong when deleting a volume snapshot: rebasing a " "%(protocol)s network disk using qemu-img has not been fully tested" msgstr "" "刪除磁區 Snapshot 時發生問題:尚未完全測試使用 qemu-img 來重設 %(protocol)s " "網路磁碟的基線" msgid "Sort direction size exceeds sort key size" msgstr "排序方向大小超過排序鍵大小" msgid "Sort key supplied was not valid." 
msgstr "提供的排序鍵無效。" msgid "Specified fixed address not assigned to instance" msgstr "沒有將所指定的固定位址指派給實例" msgid "Specify `table_name` or `table` param" msgstr "請指定 `table_name` 或 `table` 參數" msgid "Specify only one param `table_name` `table`" msgstr "請僅指定 `table_name` 或 `table` 中的一個參數" msgid "Started" msgstr "已開始" msgid "Stopped" msgstr "已停止" #, python-format msgid "Storage error: %(reason)s" msgstr "儲存體錯誤:%(reason)s" #, python-format msgid "Storage policy %s did not match any datastores" msgstr "儲存體原則 %s 不符合任何資料儲存庫" msgid "Success" msgstr "成功" msgid "Suspended" msgstr "已停止" msgid "Swap drive requested is larger than instance type allows." msgstr "所要求的交換磁碟機,大於實例類型所容許的容量。" msgid "Table" msgstr "表格" #, python-format msgid "Task %(task_name)s is already running on host %(host)s" msgstr "作業 %(task_name)s 已經在主機 %(host)s 上執行" #, python-format msgid "Task %(task_name)s is not running on host %(host)s" msgstr "作業 %(task_name)s 未在主機 %(host)s 上執行" #, python-format msgid "The PCI address %(address)s has an incorrect format." msgstr "PCI 位址 %(address)s 的格式不正確。" msgid "The backlog must be more than 0" msgstr "待辦事項必須大於 0" #, python-format msgid "The console port range %(min_port)d-%(max_port)d is exhausted." msgstr "主控台埠範圍 %(min_port)d-%(max_port)d 已耗盡。" msgid "The created instance's disk would be too small." msgstr "已建立實例的磁碟將太小。" msgid "The current driver does not support preserving ephemeral partitions." msgstr "現行驅動程式不支援保留暫時分割區。" msgid "The default PBM policy doesn't exist on the backend." msgstr "預設 PBM 原則不存在於後端上。" msgid "The floating IP request failed with a BadRequest" msgstr "浮動 IP 要求失敗,發生 BadRequest" msgid "" "The instance requires a newer hypervisor version than has been provided." msgstr "實例需要比所提供版本還新的 Hypervisor 版本。" #, python-format msgid "The number of defined ports: %(ports)d is over the limit: %(quota)d" msgstr "所定義的埠數目 %(ports)d 超出限制:%(quota)d" msgid "The only partition should be partition 1." msgstr "唯一的分割區應該是分割區 1。" #, python-format msgid "The provided RNG device path: (%(path)s) is not present on the host." msgstr "主機上不存在所提供的 RNG 裝置路徑:(%(path)s)。" msgid "The request body can't be empty" msgstr "要求內文不能是空的" msgid "The request is invalid." msgstr "要求無效。" #, python-format msgid "" "The requested amount of video memory %(req_vram)d is higher than the maximum " "allowed by flavor %(max_vram)d." msgstr "" "所要求的視訊記憶體數量 %(req_vram)d 大於特性所容許的上限 %(max_vram)d。" msgid "The requested availability zone is not available" msgstr "所要求的可用性區域無法使用" msgid "The requested console type details are not accessible" msgstr "無法存取所要求的主控台類型詳細資料" msgid "The requested functionality is not supported." msgstr "所要求的功能不受支援。" #, python-format msgid "The specified cluster '%s' was not found in vCenter" msgstr "在 vCenter 中找不到指定的叢集 '%s'" #, python-format msgid "The supplied device path (%(path)s) is in use." msgstr "提供的裝置路徑 (%(path)s) 已在使用中。" #, python-format msgid "The supplied device path (%(path)s) is invalid." msgstr "提供的裝置路徑 (%(path)s) 無效。" #, python-format msgid "" "The supplied disk path (%(path)s) already exists, it is expected not to " "exist." msgstr "提供的磁碟路徑 (%(path)s) 已存在,但它不應該存在。" msgid "The supplied hypervisor type of is invalid." msgstr "提供的 Hypervisor 類型無效。" msgid "The target host can't be the same one." 
msgstr "目標主機不能是相同主機。" #, python-format msgid "The token '%(token)s' is invalid or has expired" msgstr "記號 '%(token)s' 無效或已過期" #, python-format msgid "" "The volume cannot be assigned the same device name as the root device %s" msgstr "無法對磁區指派與根裝置 %s 相同的裝置名稱" #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. Run this command again with the --delete " "option after you have backed up any necessary data." msgstr "" "'%(table_name)s' 表格中有 %(records)d 筆 UUID 或 instance_uuid 直欄為空值的記" "錄。在備份任何必要資料之後,請使用 --delete 選項再次執行此指令。" #, python-format msgid "" "There are %(records)d records in the '%(table_name)s' table where the uuid " "or instance_uuid column is NULL. These must be manually cleaned up before " "the migration will pass. Consider running the 'nova-manage db " "null_instance_uuid_scan' command." msgstr "" "'%(table_name)s' 表格中有 %(records)d 筆 UUID 或 instance_uuid 直欄為空值的記" "錄。必須先手動清除這些記錄,移轉才能通過。請考量執行 'nova-manage db " "null_instance_uuid_scan' 指令。" msgid "There are not enough hosts available." msgstr "沒有足夠的可用主機。" #, python-format msgid "" "There are still %(count)i unmigrated flavor records. Migration cannot " "continue until all instance flavor records have been migrated to the new " "format. Please run `nova-manage db migrate_flavor_data' first." msgstr "" "仍有 %(count)i 個未移轉的特性記錄。移轉無法繼續,直到將所有實例特性記錄都移轉" "為新的格式為止。請先執行 `nova-manage db migrate_flavor_data'。" #, python-format msgid "There is no such action: %s" msgstr "沒有這樣的動作:%s" msgid "There were no records found where instance_uuid was NULL." msgstr "找不到 instance_uuid 為空值的記錄。" #, python-format msgid "" "This compute node's hypervisor is older than the minimum supported version: " "%(version)s." msgstr "這部電腦節點的 Hypervisor 版本低於所支援的版本下限:%(version)s。" msgid "This domU must be running on the host specified by connection_url" msgstr "此 domU 必須正在 connection_url 所指定的主機上執行" msgid "" "This method needs to be called with either networks=None and port_ids=None " "or port_ids and networks as not none." msgstr "" "需要在 networks=None 且 port_ids=None 或者 port_ids 及 networks 都不為 None " "時,呼叫此方法。" #, python-format msgid "This rule already exists in group %s" msgstr "此規則已存在於群組 %s 中" #, python-format msgid "" "This service is older (v%(thisver)i) than the minimum (v%(minver)i) version " "of the rest of the deployment. Unable to continue." msgstr "" "此服務的版本 (v%(thisver)i) 低於其餘部署的版本下限 (v%(minver)i)。無法繼續。" #, python-format msgid "Timeout waiting for device %s to be created" msgstr "等待建立裝置 %s 時發生逾時" msgid "Timeout waiting for response from cell" msgstr "等候 Cell 回應時發生逾時" #, python-format msgid "Timeout while checking if we can live migrate to host: %s" msgstr "在檢查是否可以即時移轉至主機時逾時:%s" msgid "To and From ports must be integers" msgstr "目標埠和來源埠必須是整數" msgid "Token not found" msgstr "找不到記號" msgid "Triggering crash dump is not supported" msgstr "不支援觸發損毀傾出" msgid "Type and Code must be integers for ICMP protocol type" msgstr "ICMP 通訊協定類型的類型及代碼必須是整數" msgid "UEFI is not supported" msgstr "不支援 UEFI" #, python-format msgid "" "Unable to associate floating IP %(address)s to any fixed IPs for instance " "%(id)s. Instance has no fixed IPv4 addresses to associate." msgstr "" "無法將浮動 IP %(address)s 關聯至實例 %(id)s 的任何固定 IP。該實例沒有固定 " "IPv4 位址與其相關聯。" #, python-format msgid "" "Unable to associate floating IP %(address)s to fixed IP %(fixed_address)s " "for instance %(id)s. 
Error: %(error)s" msgstr "" "無法將浮動 IP %(address)s 關聯至實例 %(id)s 的固定 IP %(fixed_address)s。錯" "誤:%(error)s" msgid "Unable to authenticate Ironic client." msgstr "無法鑑別 Ironic 用戶端。" #, python-format msgid "Unable to contact guest agent. The following call timed out: %(method)s" msgstr "無法聯絡來賓代理程式。下列呼叫已逾時:%(method)s" #, python-format msgid "Unable to convert image to %(format)s: %(exp)s" msgstr "無法將映像檔轉換為 %(format)s:%(exp)s" #, python-format msgid "Unable to convert image to raw: %(exp)s" msgstr "無法將映像檔轉換為原始格式:%(exp)s" #, python-format msgid "Unable to destroy VBD %s" msgstr "無法毀損 VBD %s" #, python-format msgid "Unable to destroy VDI %s" msgstr "無法毀損 VDI %s" #, python-format msgid "Unable to determine disk bus for '%s'" msgstr "無法判定 '%s' 的磁碟匯流排" #, python-format msgid "Unable to determine disk prefix for %s" msgstr "無法判定 %s 的磁碟字首" #, python-format msgid "Unable to eject %s from the pool; No master found" msgstr "無法將 %s 從儲存區中退出;找不到主要主機" #, python-format msgid "Unable to eject %s from the pool; pool not empty" msgstr "無法將 %s 從儲存區中退出;儲存區不是空的" #, python-format msgid "Unable to find SR from VBD %s" msgstr "在 VBD %s 中找不到「儲存體儲存庫 (SR)」" #, python-format msgid "Unable to find SR from VDI %s" msgstr "從 VDI %s 中找不到 SR" #, python-format msgid "Unable to find ca_file : %s" msgstr "找不到 ca_file:%s" #, python-format msgid "Unable to find cert_file : %s" msgstr "找不到 cert_file:%s" #, python-format msgid "Unable to find host for Instance %s" msgstr "找不到實例 %s 的主機" msgid "Unable to find iSCSI Target" msgstr "找不到 iSCSI 目標" #, python-format msgid "Unable to find key_file : %s" msgstr "找不到 key_file:%s" msgid "Unable to find root VBD/VDI for VM" msgstr "找不到 VM 的根 VBD/VDI" msgid "Unable to find volume" msgstr "找不到磁區" msgid "Unable to get host UUID: /etc/machine-id does not exist" msgstr "無法取得主機 UUID:/etc/machine-id 不存在" msgid "Unable to get host UUID: /etc/machine-id is empty" msgstr "無法取得主機 UUID:/etc/machine-id 是空的" #, python-format msgid "Unable to get record of VDI %s on" msgstr "無法取得下列位置上 VDI %s 的記錄:" #, python-format msgid "Unable to introduce VDI for SR %s" msgstr "無法給「儲存體儲存庫 (SR)」%s 建立 VDI" #, python-format msgid "Unable to introduce VDI on SR %s" msgstr "無法在「儲存體儲存庫 (SR)」%s 上建立 VDI" #, python-format msgid "Unable to join %s in the pool" msgstr "無法結合儲存區中的 %s" msgid "" "Unable to launch multiple instances with a single configured port ID. Please " "launch your instance one by one with different ports." msgstr "無法以單一配置埠 ID 來啟動多個實例。請使用不同的埠,逐一啟動實例。" #, python-format msgid "" "Unable to migrate %(instance_uuid)s to %(dest)s: Lack of memory(host:" "%(avail)s <= instance:%(mem_inst)s)" msgstr "" "無法將 %(instance_uuid)s 移轉至 %(dest)s:記憶體不足(主機:%(avail)s <= 實" "例:%(mem_inst)s)" #, python-format msgid "" "Unable to migrate %(instance_uuid)s: Disk of instance is too large(available " "on destination host:%(available)s < need:%(necessary)s)" msgstr "" "無法移轉 %(instance_uuid)s:實例的磁碟太大(目的地主機上的可用空間:" "%(available)s < 需要的空間:%(necessary)s)" #, python-format msgid "" "Unable to migrate instance (%(instance_id)s) to current host (%(host)s)." msgstr "無法將實例 (%(instance_id)s) 移轉至現行主機 (%(host)s)。" #, python-format msgid "Unable to obtain target information %s" msgstr "無法取得目標資訊 %s" msgid "Unable to resize disk down." msgstr "無法將磁碟大小調小。" msgid "Unable to set password on instance" msgstr "無法在實例上設定密碼" msgid "Unable to shrink disk." msgstr "無法收縮磁碟。" msgid "Unable to terminate instance." 
msgstr "無法終止實例。" #, python-format msgid "Unable to unplug VBD %s" msgstr "無法拔除 VBD %s" #, python-format msgid "Unacceptable CPU info: %(reason)s" msgstr "無法接受的 CPU 資訊:%(reason)s" msgid "Unacceptable parameters." msgstr "不可接受的參數值" #, python-format msgid "Unavailable console type %(console_type)s." msgstr "無法使用的主控台類型 %(console_type)s。" msgid "" "Undefined Block Device Mapping root: BlockDeviceMappingList contains Block " "Device Mappings from multiple instances." msgstr "" "未定義的「區塊裝置對映」根:BlockDeviceMappingList 包含來自多個實例的「區塊裝" "置對映」。" #, python-format msgid "" "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ " "and attach the Nova API log if possible.\n" "%s" msgstr "" "非預期的 API 錯誤。請在網站http://bugs.launchpad.net/nova/ 上報告此問題,而且" "如有可能,請附加 Nova API 日誌。\n" "%s" #, python-format msgid "Unexpected aggregate action %s" msgstr "非預期的聚集動作 %s" msgid "Unexpected type adding stats" msgstr "新增統計資料時遇到非預期的類型" #, python-format msgid "Unexpected vif_type=%s" msgstr "非預期的 vif_type = %s" msgid "Unknown" msgstr "未知" msgid "Unknown action" msgstr "不明動作" #, python-format msgid "Unknown config drive format %(format)s. Select one of iso9660 or vfat." msgstr "不明的配置磁碟機格式 %(format)s。請選取 iso9660 或 vfat 的其中之一。" #, python-format msgid "Unknown delete_info type %s" msgstr "不明的 delete_info 類型 %s" #, python-format msgid "Unknown image_type=%s" msgstr "不明的 image_type = %s" #, python-format msgid "Unknown quota resources %(unknown)s." msgstr "不明的配額資源 %(unknown)s。" msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "不明的排序方向,必須為 'desc' 或 'asc'" #, python-format msgid "Unknown type: %s" msgstr "不明的類型:%s" msgid "Unrecognized legacy format." msgstr "無法辨識的舊格式。" #, python-format msgid "Unrecognized read_deleted value '%s'" msgstr "無法辨識 read_deleted 值 '%s'" #, python-format msgid "Unrecognized value '%s' for CONF.running_deleted_instance_action" msgstr "無法辨識 CONF.running_deleted_instance_action 的值 '%s'" #, python-format msgid "Unshelve attempted but the image %s cannot be found." msgstr "已嘗試解除擱置,但卻找不到映像檔 %s。" msgid "Unsupported Content-Type" msgstr "不支援的內容類型" msgid "Upgrade DB using Essex release first." msgstr "請先使用 Essex 版本來升級 DB。" #, python-format msgid "User %(username)s not found in password file." msgstr "在密碼檔中找不到使用者 %(username)s。" #, python-format msgid "User %(username)s not found in shadow file." msgstr "在備份副本檔中找不到使用者 %(username)s。" msgid "User data needs to be valid base 64." msgstr "使用者資料必須是有效的 base64。" msgid "User does not have admin privileges" msgstr "使用者並沒有管理者權力" msgid "" "Using different block_device_mapping syntaxes is not allowed in the same " "request." msgstr "同一個要求中不容許使用其他 block_device_mapping語法。" #, python-format msgid "" "VDI %(vdi_ref)s is %(virtual_size)d bytes which is larger than flavor size " "of %(new_disk_size)d bytes." msgstr "" "VDI %(vdi_ref)s 為 %(virtual_size)d 位元組,這大於特性大小%(new_disk_size)d " "位元組。" #, python-format msgid "" "VDI not found on SR %(sr)s (vdi_uuid %(vdi_uuid)s, target_lun %(target_lun)s)" msgstr "" "在「儲存體儲存庫 (SR)」%(sr)s 上找不到 VDI(vdi_uuid %(vdi_uuid)s、" "target_lun %(target_lun)s)" #, python-format msgid "VHD coalesce attempts exceeded (%d), giving up..." msgstr "VHD 聯合嘗試次數已超出 (%d) 次,正在放棄..." #, python-format msgid "" "Version %(req_ver)s is not supported by the API. Minimum is %(min_ver)s and " "maximum is %(max_ver)s." 
msgstr "API 不支援 %(req_ver)s 版。最低為 %(min_ver)s,最高為 %(max_ver)s。" msgid "Virtual Interface creation failed" msgstr "建立虛擬介面失敗" msgid "Virtual interface plugin failed" msgstr "虛擬介面外掛程式失敗" #, python-format msgid "Virtual machine mode '%(vmmode)s' is not recognised" msgstr "未辨識虛擬機器模式 '%(vmmode)s'" #, python-format msgid "Virtual machine mode '%s' is not valid" msgstr "虛擬機器模式 '%s' 無效" #, python-format msgid "Virtualization type '%(virt)s' is not supported by this compute driver" msgstr "此計算驅動程式不支援虛擬化類型 '%(virt)s'" #, python-format msgid "Volume %(volume_id)s could not be attached. Reason: %(reason)s" msgstr "無法連接磁區 %(volume_id)s。原因:%(reason)s" #, python-format msgid "Volume %(volume_id)s could not be found." msgstr "找不到磁區 %(volume_id)s。" #, python-format msgid "" "Volume %(volume_id)s did not finish being created even after we waited " "%(seconds)s seconds or %(attempts)s attempts. And its status is " "%(volume_status)s." msgstr "" "即使在等待 %(seconds)s 秒或者嘗試 %(attempts)s 次之後,也未完成建立磁區 " "%(volume_id)s。並且它的狀態是%(volume_status)s。" msgid "Volume does not belong to the requested instance." msgstr "磁區不屬於所要求的實例。" #, python-format msgid "" "Volume encryption is not supported for %(volume_type)s volume %(volume_id)s" msgstr "%(volume_type)s 的磁區 %(volume_id)s 不支援磁區加密" #, python-format msgid "" "Volume is smaller than the minimum size specified in image metadata. Volume " "size is %(volume_size)i bytes, minimum size is %(image_min_disk)i bytes." msgstr "" "磁區小於映像檔 meta 資料中指定的大小下限。磁區大小為 %(volume_size)i 位元組," "大小下限為 %(image_min_disk)i 位元組。" #, python-format msgid "" "Volume sets block size, but the current libvirt hypervisor '%s' does not " "support custom block size" msgstr "由磁區設定區塊大小,但現行 libVirt Hypervisor '%s' 不支援自訂區塊大小" #, python-format msgid "" "We do not support scheme '%s' under Python < 2.7.4, please use http or https" msgstr "在低於 2.7.4 的 Python 下,不支援架構 '%s',請使用 HTTP 或HTTPS" msgid "When resizing, instances must change flavor!" msgstr "重新調整大小時,實例必須變更特性!" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "在 SSL 模式下執行伺服器時,必須在配置檔中指定 cert_file 及 key_file 選項值" #, python-format msgid "Wrong quota method %(method)s used on resource %(res)s" msgstr "在資源 %(res)s 上使用了錯誤的配額方法 %(method)s" msgid "Wrong type of hook method. Only 'pre' and 'post' type allowed" msgstr "連結鉤方法類型錯誤。僅容許 'pre' 及 'post' 類型" msgid "X-Forwarded-For is missing from request." msgstr "要求遺漏了 X-Forwarded-For。" msgid "X-Instance-ID header is missing from request." msgstr "要求遺漏了 X-Instance-ID 標頭。" msgid "X-Instance-ID-Signature header is missing from request." msgstr "要求中遺漏 X-Instance-ID-Signature 標頭。" msgid "X-Metadata-Provider is missing from request." msgstr "要求遺漏了 X-Metadata-Provider。" msgid "X-Tenant-ID header is missing from request." msgstr "要求遺漏了 X-Tenant-ID 標頭。" msgid "XAPI supporting relax-xsm-sr-check=true required" msgstr "需要支援 relax-xsm-sr-check=true 的 XAPI" msgid "You are not allowed to delete the image." msgstr "不容許您刪除該映像檔。" msgid "" "You are not authorized to access the image the instance was started with." msgstr "您未獲授權來存取已用來啟動實例的映像檔。" msgid "You must implement __call__" msgstr "必須實作 __call__" msgid "You should specify images_rbd_pool flag to use rbd images." msgstr "應該指定 images_rbd_pool 旗標以使用 rbd 映像檔。" msgid "You should specify images_volume_group flag to use LVM images." msgstr "應該指定 images_volume_group 旗標以使用 LVM 映像檔。" msgid "Zero floating IPs available." 
msgstr "有 0 個浮動 IP 可供使用。" msgid "admin password can't be changed on existing disk" msgstr "無法在現有磁碟上變更管理者密碼" msgid "aggregate deleted" msgstr "已刪除聚集" msgid "aggregate in error" msgstr "聚集發生錯誤" #, python-format msgid "assert_can_migrate failed because: %s" msgstr "assert_can_migrate 失敗,原因:%s" msgid "cannot understand JSON" msgstr "無法理解 JSON" msgid "clone() is not implemented" msgstr "未實作 clone()" #, python-format msgid "connect info: %s" msgstr "連接資訊:%s" #, python-format msgid "connecting to: %(host)s:%(port)s" msgstr "正在連接至:%(host)s:%(port)s" msgid "direct_snapshot() is not implemented" msgstr "未實作 direct_snapshot()" #, python-format msgid "disk type '%s' not supported" msgstr "磁碟類型 '%s' 不受支援" #, python-format msgid "empty project id for instance %s" msgstr "實例 %s 的專案 ID 是空的" msgid "error setting admin password" msgstr "設定管理者密碼時發生錯誤" #, python-format msgid "error: %s" msgstr "錯誤:%s" #, python-format msgid "failed to generate X509 fingerprint. Error message: %s" msgstr "無法產生 X509 指紋。錯誤訊息:%s" msgid "failed to generate fingerprint" msgstr "無法產生指紋" msgid "filename cannot be None" msgstr "檔名不能為 None" msgid "floating IP is already associated" msgstr "已與浮動 IP 產生關聯" msgid "floating IP not found" msgstr "找不到浮動 IP" #, python-format msgid "fmt=%(fmt)s backed by: %(backing_file)s" msgstr "fmt = %(fmt)s 受 %(backing_file)s 支援" #, python-format msgid "href %s does not contain version" msgstr "href %s 不包含版本" msgid "image already mounted" msgstr "已裝載映像檔" #, python-format msgid "instance %s is not running" msgstr "實例 %s 未在執行中" msgid "instance has a kernel or ramdisk but not both" msgstr "實例具有核心或 Ramdisk,而不是兩者兼有" msgid "instance is a required argument to use @refresh_cache" msgstr "實例是使用 @refresh_cache 的必要引數" msgid "instance is not in a suspended state" msgstr "實例不處於暫停狀態" msgid "instance is not powered on" msgstr "未開啟實例的電源" msgid "instance is powered off and cannot be suspended." msgstr "實例已關閉電源,無法暫停。" #, python-format msgid "instance_id %s could not be found as device id on any ports" msgstr "找不到 instance_id %s 來作為任何埠上的裝置 ID" msgid "is_public must be a boolean" msgstr "is_public 必須是布林值" msgid "keymgr.fixed_key not defined" msgstr "未定義 keymgr.fixed_key" msgid "l3driver call to add floating IP failed" msgstr "l3driver 呼叫以新增浮動 IP 失敗" #, python-format msgid "libguestfs installed but not usable (%s)" msgstr "libguestfs 已安裝,但卻無法使用 (%s)" #, python-format msgid "libguestfs is not installed (%s)" msgstr "未安裝 libguestfs (%s)" #, python-format msgid "marker [%s] not found" msgstr "找不到標記 [%s]" #, python-format msgid "max rows must be <= %(max_value)d" msgstr "列數上限必須小於或等於 %(max_value)d" msgid "max_count cannot be greater than 1 if an fixed_ip is specified." 
msgstr "如果指定了 fixed_ip,則 max_count 不得大於 1。" msgid "min_count must be <= max_count" msgstr "min_count 必須 <= max_count" #, python-format msgid "nbd device %s did not show up" msgstr "NBD 裝置 %s 未顯示" msgid "nbd unavailable: module not loaded" msgstr "NBD 無法使用:未載入模組" msgid "no hosts to remove" msgstr "沒有要移除的主機" #, python-format msgid "no match found for %s" msgstr "找不到 %s 的相符項" #, python-format msgid "no usable parent snapshot for volume %s" msgstr "磁區 %s 沒有可使用的母項 Snapshot" #, python-format msgid "no write permission on storage pool %s" msgstr "對儲存區 %s 沒有寫入權" #, python-format msgid "not able to execute ssh command: %s" msgstr "無法執行 SSH 指令:%s" msgid "old style configuration can use only dictionary or memcached backends" msgstr "舊樣式配置只能使用字典或 Memcached 後端" msgid "operation time out" msgstr "作業逾時" #, python-format msgid "partition %s not found" msgstr "找不到分割區 %s" #, python-format msgid "partition search unsupported with %s" msgstr "%s 不支援進行分割區搜尋" msgid "pause not supported for vmwareapi" msgstr "vmwareapi 不支援暫停" msgid "printable characters with at least one non space character" msgstr "含有至少一個非空格字元的可列印字元" msgid "printable characters. Can not start or end with whitespace." msgstr "可列印字元。不能以空格開頭或結尾。" #, python-format msgid "qemu-img failed to execute on %(path)s : %(exp)s" msgstr "qemu-img 無法在 %(path)s 上執行:%(exp)s" #, python-format msgid "qemu-nbd error: %s" msgstr "qemu-nbd 錯誤:%s" msgid "rbd python libraries not found" msgstr "找不到 rbd Python 程式庫" #, python-format msgid "read_deleted can only be one of 'no', 'yes' or 'only', not %r" msgstr "read_deleted 只能是 'no'、'yes' 或 'only' 其中之一,不能是 %r" msgid "serve() can only be called once" msgstr "只能呼叫 serve() 一次" msgid "service is a mandatory argument for DB based ServiceGroup driver" msgstr "服務是 DB 型 ServiceGroup 驅動程式的必要引數" msgid "service is a mandatory argument for Memcached based ServiceGroup driver" msgstr "服務是 Memcached 型 ServiceGroup 驅動程式的必要引數" msgid "set_admin_password is not implemented by this driver or guest instance." msgstr "set_admin_password 不是由此驅動程式或來賓實例實作。" msgid "setup in progress" msgstr "正在進行設定" #, python-format msgid "snapshot for %s" msgstr "%s 的 Snapshot" msgid "snapshot_id required in create_info" msgstr "create_info 中需要 snapshot_id" msgid "token not provided" msgstr "未提供記號" msgid "too many body keys" msgstr "主體金鑰太多" msgid "unpause not supported for vmwareapi" msgstr "vmwareapi 不支援取消暫停" msgid "version should be an integer" msgstr "版本應該是整數" #, python-format msgid "vg %s must be LVM volume group" msgstr "磁區群組 %s 必須是 LVM 磁區群組" #, python-format msgid "vhostuser_sock_path not present in vif_details for vif %(vif_id)s" msgstr "vhostuser_sock_path 未出現在 VIF %(vif_id)s 的 vif_details 中" #, python-format msgid "vif type %s not supported" msgstr "VIF 類型 %s 不受支援" msgid "vif_type parameter must be present for this vif_driver implementation" msgstr "此 vif_driver 實作的 vif_type 參數必須存在" #, python-format msgid "volume %s already attached" msgstr "已連接磁區 %s" #, python-format msgid "" "volume '%(vol)s' status must be 'in-use'. 
Currently in '%(status)s' status" msgstr "磁區 '%(vol)s' 狀態必須為「使用中」。目前處於「%(status)s」狀態" #, python-format msgid "xenapi.fake does not have an implementation for %s" msgstr "xenapi.fake 沒有 %s 的實作" #, python-format msgid "" "xenapi.fake does not have an implementation for %s or it has been called " "with the wrong number of arguments" msgstr "xenapi.fake 沒有 %s 的實作,或者已使用錯誤的引數數目進行呼叫" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/manager.py0000664000175000017500000001244700000000000015525 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Base Manager class. Managers are responsible for a certain aspect of the system. It is a logical grouping of code relating to a portion of the system. In general other components should be using the manager to make changes to the components that it is responsible for. For example, other components that need to deal with volumes in some way, should do so by calling methods on the VolumeManager instead of directly changing fields in the database. This allows us to keep all of the code relating to volumes in the same place. We have adopted a basic strategy of Smart managers and dumb data, which means rather than attaching methods to data objects, components should call manager methods that act on the data. Methods on managers that can be executed locally should be called directly. If a particular method must execute on a remote host, this should be done via rpc to the service that wraps the manager Managers should be responsible for most of the db access, and non-implementation specific data. Anything implementation specific that can't be generalized should be done by the Driver. In general, we prefer to have one manager with multiple drivers for different implementations, but sometimes it makes sense to have multiple managers. You can think of it this way: Abstract different overall strategies at the manager level(FlatNetwork vs VlanNetwork), and different implementations at the driver level(LinuxNetDriver vs CiscoNetDriver). Managers will often provide methods for initial setup of a host or periodic tasks to a wrapping service. This module provides Manager, a base class for managers. """ from oslo_service import periodic_task import six import nova.conf from nova.db import base from nova import profiler from nova import rpc CONF = nova.conf.CONF class PeriodicTasks(periodic_task.PeriodicTasks): def __init__(self): super(PeriodicTasks, self).__init__(CONF) class ManagerMeta(profiler.get_traced_meta(), type(PeriodicTasks)): """Metaclass to trace all children of a specific class. This metaclass wraps every public method (not starting with _ or __) of the class using it. All children classes of the class using ManagerMeta will be profiled as well. 
Adding this metaclass requires that the __trace_args__ attribute be added to the class we want to modify. That attribute is a dictionary with one mandatory key: "name". "name" defines the name of the action to be traced (for example, wsgi, rpc, db). The OSprofiler-based tracing, although, will only happen if profiler instance was initiated somewhere before in the thread, that can only happen if profiling is enabled in nova.conf and the API call to Nova API contained specific headers. """ @six.add_metaclass(ManagerMeta) class Manager(base.Base, PeriodicTasks): __trace_args__ = {"name": "rpc"} def __init__(self, host=None, service_name='undefined'): if not host: host = CONF.host self.host = host self.backdoor_port = None self.service_name = service_name self.notifier = rpc.get_notifier(self.service_name, self.host) self.additional_endpoints = [] super(Manager, self).__init__() def periodic_tasks(self, context, raise_on_error=False): """Tasks to be run at a periodic interval.""" return self.run_periodic_tasks(context, raise_on_error=raise_on_error) def init_host(self): """Hook to do additional manager initialization when one requests the service be started. This is called before any service record is created. Child classes should override this method. """ pass def cleanup_host(self): """Hook to do cleanup work when the service shuts down. Child classes should override this method. """ pass def pre_start_hook(self): """Hook to provide the manager the ability to do additional start-up work before any RPC queues/consumers are created. This is called after other initialization has succeeded and a service record is created. Child classes should override this method. """ pass def post_start_hook(self): """Hook to provide the manager the ability to do additional start-up work immediately after a service creates RPC consumers and starts 'running'. Child classes should override this method. """ pass def reset(self): """Hook called on SIGHUP to signal the manager to re-read any dynamic configuration or do any reconfiguration tasks. """ pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/middleware.py0000664000175000017500000000255400000000000016226 0ustar00zuulzuul00000000000000# Copyright 2016 Hewlett Packard Enterprise Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
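# ---------------------------------------------------------------------------
# Editor's illustration (hypothetical, not part of upstream Nova): a minimal
# sketch of how a service-specific manager builds on nova/manager.py above.
# ManagerMeta traces every public method under the name given by
# __trace_args__, init_host() is the pre-service-record setup hook called by
# the wrapping service, and any method decorated with
# periodic_task.periodic_task() is collected and run via periodic_tasks().
# The class and method names below are invented for illustration only.
from oslo_service import periodic_task

from nova import manager


class ExampleManager(manager.Manager):
    """Hypothetical manager used only to illustrate the hooks above."""

    # Inherited from Manager; repeated here because ManagerMeta requires the
    # attribute on classes it profiles.
    __trace_args__ = {"name": "rpc"}

    def init_host(self):
        # One-time host initialization, called before any RPC consumers exist.
        pass

    @periodic_task.periodic_task(spacing=60)
    def _example_task(self, context):
        # Picked up by run_periodic_tasks() roughly every 60 seconds.
        pass
# ---------------------------------------------------------------------------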
from oslo_middleware import cors def set_defaults(): """Update default configuration options for oslo.middleware.""" cors.set_defaults( allow_headers=['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id'], expose_headers=['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Subject-Token', 'X-Service-Token'], allow_methods=['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/monkey_patch.py0000664000175000017500000001167200000000000016573 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2011 Justin Santa Barbara # Copyright 2019 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Enable eventlet monkey patching.""" import os def _monkey_patch(): # See https://bugs.launchpad.net/nova/+bug/1164822 # TODO(mdbooth): This feature was deprecated and removed in eventlet at # some point but brought back in version 0.21.0, presumably because some # users still required it to work round issues. However, there have been a # number of greendns fixes in eventlet since then. Specifically, it looks # as though the originally reported IPv6 issue may have been fixed in # version 0.24.0. We should remove this when we can confirm that the # original issue is fixed. # NOTE(artom) eventlet processes environment variables at import-time. We # therefore set this here, before importing eventlet, in order to correctly # disable greendns. os.environ['EVENTLET_NO_GREENDNS'] = 'yes' # NOTE(mdbooth): Anything imported here will not be monkey patched. It is # important to take care not to import anything here which requires monkey # patching. import eventlet import sys # NOTE(mdbooth): Imports only sys (2019-01-30). Other modules imported at # runtime on execution of debugger.init(). from nova import debugger # Note any modules with known monkey-patching issues which have been # imported before monkey patching. # urllib3: https://bugs.launchpad.net/nova/+bug/1808951 # oslo_context.context: https://bugs.launchpad.net/nova/+bug/1773102 problems = (set(['urllib3', 'oslo_context.context']) & set(sys.modules.keys())) if debugger.enabled(): # turn off thread patching to enable the remote debugger eventlet.monkey_patch(thread=False) elif os.name == 'nt': # for nova-compute running on Windows(Hyper-v) # pipes don't support non-blocking I/O eventlet.monkey_patch(os=False) else: eventlet.monkey_patch() # Monkey patch the original current_thread to use the up-to-date _active # global variable. 
See https://bugs.launchpad.net/bugs/1863021 and # https://github.com/eventlet/eventlet/issues/592 import __original_module_threading as orig_threading import threading orig_threading.current_thread.__globals__['_active'] = threading._active # NOTE(rpodolyaka): import oslo_service first, so that it makes eventlet # hub use a monotonic clock to avoid issues with drifts of system time (see # LP 1510234 for details) # NOTE(mdbooth): This was fixed in eventlet 0.21.0. Remove when bumping # eventlet version. import oslo_service # noqa eventlet.hubs.use_hub("oslo_service:service_hub") # NOTE(mdbooth): Log here instead of earlier to avoid loading oslo logging # before monkey patching. # NOTE(mdbooth): Ideally we would raise an exception here, as this is # likely to cause problems when executing nova code. However, some non-nova # tools load nova only to extract metadata and do not execute it. Two # examples are oslopolicy-policy-generator and sphinx, both of which can # fail if we assert here. It is not ideal that these utilities are monkey # patching at all, but we should not break them. # TODO(mdbooth): If there is any way to reliably determine if we are being # loaded in that kind of context without breaking existing callers, we # should do it and bypass monkey patching here entirely. if problems: from oslo_log import log as logging LOG = logging.getLogger(__name__) LOG.warning("Modules with known eventlet monkey patching issues were " "imported prior to eventlet monkey patching: %s. This " "warning can usually be ignored if the caller is only " "importing and not executing nova code.", ', '.join(problems)) # NOTE(mdbooth): This workaround is required to avoid breaking sphinx. See # separate comment in doc/source/conf.py. It may also be useful for other # non-nova utilities. Ideally the requirement for this workaround will be # removed as soon as possible, so do not rely on, or extend it. if (os.environ.get('OS_NOVA_DISABLE_EVENTLET_PATCHING', '').lower() not in ('1', 'true', 'yes')): _monkey_patch() ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3544698 nova-21.2.4/nova/network/0000775000175000017500000000000000000000000015222 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/network/__init__.py0000664000175000017500000000000000000000000017321 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/network/constants.py0000664000175000017500000000236100000000000017612 0ustar00zuulzuul00000000000000# Copyright 2013 UnitedStack Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
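# ---------------------------------------------------------------------------
# Editor's illustration (hypothetical, not part of upstream Nova): the guard
# at the end of nova/monkey_patch.py above only monkey patches when the
# OS_NOVA_DISABLE_EVENTLET_PATCHING environment variable is unset or not one
# of '1'/'true'/'yes'. A metadata-only consumer (for example a docs build)
# can therefore opt out by exporting that variable before nova is imported
# for the first time, roughly as sketched below. The helper name is invented.
def _example_skip_eventlet_patching():
    import os

    # Must happen before the first import of nova.monkey_patch; once
    # eventlet.monkey_patch() has run it cannot be undone.
    os.environ['OS_NOVA_DISABLE_EVENTLET_PATCHING'] = 'true'
    import nova.monkey_patch  # noqa: F401  (now a no-op)
# ---------------------------------------------------------------------------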
QOS_QUEUE = 'QoS Queue' NET_EXTERNAL = 'router:external' VNIC_INDEX_EXT = 'VNIC Index' DNS_INTEGRATION = 'DNS Integration' MULTI_NET_EXT = 'Multi Provider Network' FIP_PORT_DETAILS = 'Floating IP Port Details Extension' SUBSTR_PORT_FILTERING = 'IP address substring filtering' PORT_BINDING = 'Port Binding' PORT_BINDING_EXTENDED = 'Port Bindings Extended' LIVE_MIGRATION = 'live-migration' DEFAULT_SECGROUP = 'default' BINDING_PROFILE = 'binding:profile' BINDING_HOST_ID = 'binding:host_id' MIGRATING_ATTR = 'migrating_to' L3_NETWORK_TYPES = ['vxlan', 'gre', 'geneve'] ALLOCATION = 'allocation' RESOURCE_REQUEST = 'resource_request' ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/network/model.py0000664000175000017500000005226000000000000016701 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import functools import netaddr from oslo_serialization import jsonutils import six from nova import exception from nova.i18n import _ from nova import utils # Constants for the 'vif_type' field in VIF class VIF_TYPE_OVS = 'ovs' VIF_TYPE_IVS = 'ivs' VIF_TYPE_DVS = 'dvs' VIF_TYPE_IOVISOR = 'iovisor' VIF_TYPE_BRIDGE = 'bridge' VIF_TYPE_802_QBG = '802.1qbg' VIF_TYPE_802_QBH = '802.1qbh' VIF_TYPE_HW_VEB = 'hw_veb' VIF_TYPE_HYPERV = 'hyperv' VIF_TYPE_HOSTDEV = 'hostdev_physical' VIF_TYPE_IB_HOSTDEV = 'ib_hostdev' VIF_TYPE_MIDONET = 'midonet' VIF_TYPE_VHOSTUSER = 'vhostuser' VIF_TYPE_VROUTER = 'vrouter' VIF_TYPE_OTHER = 'other' VIF_TYPE_TAP = 'tap' VIF_TYPE_MACVTAP = 'macvtap' VIF_TYPE_AGILIO_OVS = 'agilio_ovs' VIF_TYPE_BINDING_FAILED = 'binding_failed' VIF_TYPE_VIF = 'vif' VIF_TYPE_UNBOUND = 'unbound' # Constants for dictionary keys in the 'vif_details' field in the VIF # class VIF_DETAILS_PORT_FILTER = 'port_filter' VIF_DETAILS_OVS_HYBRID_PLUG = 'ovs_hybrid_plug' VIF_DETAILS_PHYSICAL_NETWORK = 'physical_network' VIF_DETAILS_BRIDGE_NAME = 'bridge_name' VIF_DETAILS_OVS_DATAPATH_TYPE = 'datapath_type' # The following constant defines an SR-IOV related parameter in the # 'vif_details'. 'profileid' should be used for VIF_TYPE_802_QBH VIF_DETAILS_PROFILEID = 'profileid' # The following constant defines an SR-IOV and macvtap related parameter in # the 'vif_details'. 'vlan' should be used for VIF_TYPE_HW_VEB or # VIF_TYPE_MACVTAP VIF_DETAILS_VLAN = 'vlan' # The following three constants define the macvtap related fields in # the 'vif_details'. VIF_DETAILS_MACVTAP_SOURCE = 'macvtap_source' VIF_DETAILS_MACVTAP_MODE = 'macvtap_mode' VIF_DETAILS_PHYS_INTERFACE = 'physical_interface' # Constants for vhost-user related fields in 'vif_details'. # Sets mode on vhost-user socket, valid values are 'client' # and 'server' VIF_DETAILS_VHOSTUSER_MODE = 'vhostuser_mode' # vhost-user socket path VIF_DETAILS_VHOSTUSER_SOCKET = 'vhostuser_socket' # Specifies whether vhost-user socket should be plugged # into ovs bridge. 
Valid values are True and False VIF_DETAILS_VHOSTUSER_OVS_PLUG = 'vhostuser_ovs_plug' # Specifies whether vhost-user socket should be used to # create a fp netdevice interface. VIF_DETAILS_VHOSTUSER_FP_PLUG = 'vhostuser_fp_plug' # Specifies whether vhost-user socket should be used to # create a vrouter netdevice interface # TODO(mhenkel): Consider renaming this to be contrail-specific. VIF_DETAILS_VHOSTUSER_VROUTER_PLUG = 'vhostuser_vrouter_plug' # Constants for dictionary keys in the 'vif_details' field that are # valid for VIF_TYPE_TAP. VIF_DETAILS_TAP_MAC_ADDRESS = 'mac_address' # Open vSwitch datapath types. VIF_DETAILS_OVS_DATAPATH_SYSTEM = 'system' VIF_DETAILS_OVS_DATAPATH_NETDEV = 'netdev' # Define supported virtual NIC types. VNIC_TYPE_DIRECT and VNIC_TYPE_MACVTAP # are used for SR-IOV ports VNIC_TYPE_NORMAL = 'normal' VNIC_TYPE_DIRECT = 'direct' VNIC_TYPE_MACVTAP = 'macvtap' VNIC_TYPE_DIRECT_PHYSICAL = 'direct-physical' VNIC_TYPE_BAREMETAL = 'baremetal' VNIC_TYPE_VIRTIO_FORWARDER = 'virtio-forwarder' # Define list of ports which needs pci request. # Note: The macvtap port needs a PCI request as it is a tap interface # with VF as the lower physical interface. # Note: Currently, VNIC_TYPE_VIRTIO_FORWARDER assumes a 1:1 # relationship with a VF. This is expected to change in the future. VNIC_TYPES_SRIOV = (VNIC_TYPE_DIRECT, VNIC_TYPE_MACVTAP, VNIC_TYPE_DIRECT_PHYSICAL, VNIC_TYPE_VIRTIO_FORWARDER) # Define list of ports which are passthrough to the guest # and need a special treatment on snapshot and suspend/resume VNIC_TYPES_DIRECT_PASSTHROUGH = (VNIC_TYPE_DIRECT, VNIC_TYPE_DIRECT_PHYSICAL) # Constants for the 'vif_model' values VIF_MODEL_VIRTIO = 'virtio' VIF_MODEL_NE2K_PCI = 'ne2k_pci' VIF_MODEL_PCNET = 'pcnet' VIF_MODEL_RTL8139 = 'rtl8139' VIF_MODEL_E1000 = 'e1000' VIF_MODEL_E1000E = 'e1000e' VIF_MODEL_NETFRONT = 'netfront' VIF_MODEL_SPAPR_VLAN = 'spapr-vlan' VIF_MODEL_LAN9118 = 'lan9118' VIF_MODEL_SRIOV = 'sriov' VIF_MODEL_VMXNET = 'vmxnet' VIF_MODEL_VMXNET3 = 'vmxnet3' VIF_MODEL_ALL = ( VIF_MODEL_VIRTIO, VIF_MODEL_NE2K_PCI, VIF_MODEL_PCNET, VIF_MODEL_RTL8139, VIF_MODEL_E1000, VIF_MODEL_E1000E, VIF_MODEL_NETFRONT, VIF_MODEL_SPAPR_VLAN, VIF_MODEL_LAN9118, VIF_MODEL_SRIOV, VIF_MODEL_VMXNET, VIF_MODEL_VMXNET3, ) # these types have been leaked to guests in network_data.json LEGACY_EXPOSED_VIF_TYPES = ( VIF_TYPE_BRIDGE, VIF_TYPE_DVS, VIF_TYPE_HW_VEB, VIF_TYPE_HYPERV, VIF_TYPE_OVS, VIF_TYPE_TAP, VIF_TYPE_VHOSTUSER, VIF_TYPE_VIF, ) # Constant for max length of network interface names # eg 'bridge' in the Network class or 'devname' in # the VIF class NIC_NAME_LEN = 14 class Model(dict): """Defines some necessary structures for most of the network models.""" def __repr__(self): return jsonutils.dumps(self) def _set_meta(self, kwargs): # pull meta out of kwargs if it's there self['meta'] = kwargs.pop('meta', {}) # update meta with any additional kwargs that may exist self['meta'].update(kwargs) def get_meta(self, key, default=None): """calls get(key, default) on self['meta'].""" return self['meta'].get(key, default) class IP(Model): """Represents an IP address in Nova.""" def __init__(self, address=None, type=None, **kwargs): super(IP, self).__init__() self['address'] = address self['type'] = type self['version'] = kwargs.pop('version', None) self._set_meta(kwargs) # determine version from address if not passed in if self['address'] and not self['version']: try: self['version'] = netaddr.IPAddress(self['address']).version except netaddr.AddrFormatError: msg = _("Invalid IP format %s") % 
self['address'] raise exception.InvalidIpAddressError(msg) def __eq__(self, other): keys = ['address', 'type', 'version'] return all(self[k] == other[k] for k in keys) def __ne__(self, other): return not self.__eq__(other) def is_in_subnet(self, subnet): if self['address'] and subnet['cidr']: return (netaddr.IPAddress(self['address']) in netaddr.IPNetwork(subnet['cidr'])) else: return False @classmethod def hydrate(cls, ip): if ip: return cls(**ip) return None class FixedIP(IP): """Represents a Fixed IP address in Nova.""" def __init__(self, floating_ips=None, **kwargs): super(FixedIP, self).__init__(**kwargs) self['floating_ips'] = floating_ips or [] if not self['type']: self['type'] = 'fixed' def add_floating_ip(self, floating_ip): if floating_ip not in self['floating_ips']: self['floating_ips'].append(floating_ip) def floating_ip_addresses(self): return [ip['address'] for ip in self['floating_ips']] @staticmethod def hydrate(fixed_ip): fixed_ip = FixedIP(**fixed_ip) fixed_ip['floating_ips'] = [IP.hydrate(floating_ip) for floating_ip in fixed_ip['floating_ips']] return fixed_ip def __eq__(self, other): keys = ['address', 'type', 'version', 'floating_ips'] return all(self[k] == other[k] for k in keys) def __ne__(self, other): return not self.__eq__(other) class Route(Model): """Represents an IP Route in Nova.""" def __init__(self, cidr=None, gateway=None, interface=None, **kwargs): super(Route, self).__init__() self['cidr'] = cidr self['gateway'] = gateway # FIXME(mriedem): Is this actually used? It's never set. self['interface'] = interface self._set_meta(kwargs) @classmethod def hydrate(cls, route): route = cls(**route) route['gateway'] = IP.hydrate(route['gateway']) return route class Subnet(Model): """Represents a Subnet in Nova.""" def __init__(self, cidr=None, dns=None, gateway=None, ips=None, routes=None, **kwargs): super(Subnet, self).__init__() self['cidr'] = cidr self['dns'] = dns or [] self['gateway'] = gateway self['ips'] = ips or [] self['routes'] = routes or [] self['version'] = kwargs.pop('version', None) self._set_meta(kwargs) if self['cidr'] and not self['version']: self['version'] = netaddr.IPNetwork(self['cidr']).version def __eq__(self, other): keys = ['cidr', 'dns', 'gateway', 'ips', 'routes', 'version'] return all(self[k] == other[k] for k in keys) def __ne__(self, other): return not self.__eq__(other) def add_route(self, new_route): if new_route not in self['routes']: self['routes'].append(new_route) def add_dns(self, dns): if dns not in self['dns']: self['dns'].append(dns) def add_ip(self, ip): if ip not in self['ips']: self['ips'].append(ip) def as_netaddr(self): """Convenient function to get cidr as a netaddr object.""" return netaddr.IPNetwork(self['cidr']) @classmethod def hydrate(cls, subnet): subnet = cls(**subnet) subnet['dns'] = [IP.hydrate(dns) for dns in subnet['dns']] subnet['ips'] = [FixedIP.hydrate(ip) for ip in subnet['ips']] subnet['routes'] = [Route.hydrate(route) for route in subnet['routes']] subnet['gateway'] = IP.hydrate(subnet['gateway']) return subnet class Network(Model): """Represents a Network in Nova.""" def __init__(self, id=None, bridge=None, label=None, subnets=None, **kwargs): super(Network, self).__init__() self['id'] = id self['bridge'] = bridge self['label'] = label self['subnets'] = subnets or [] self._set_meta(kwargs) def add_subnet(self, subnet): if subnet not in self['subnets']: self['subnets'].append(subnet) @classmethod def hydrate(cls, network): if network: network = cls(**network) network['subnets'] = [Subnet.hydrate(subnet) 
for subnet in network['subnets']] return network def __eq__(self, other): keys = ['id', 'bridge', 'label', 'subnets'] return all(self[k] == other[k] for k in keys) def __ne__(self, other): return not self.__eq__(other) class VIF8021QbgParams(Model): """Represents the parameters for a 802.1qbg VIF.""" def __init__(self, managerid, typeid, typeidversion, instanceid): super(VIF8021QbgParams, self).__init__() self['managerid'] = managerid self['typeid'] = typeid self['typeidversion'] = typeidversion self['instanceid'] = instanceid class VIF8021QbhParams(Model): """Represents the parameters for a 802.1qbh VIF.""" def __init__(self, profileid): super(VIF8021QbhParams, self).__init__() self['profileid'] = profileid class VIF(Model): """Represents a Virtual Interface in Nova.""" def __init__(self, id=None, address=None, network=None, type=None, details=None, devname=None, ovs_interfaceid=None, qbh_params=None, qbg_params=None, active=False, vnic_type=VNIC_TYPE_NORMAL, profile=None, preserve_on_delete=False, **kwargs): super(VIF, self).__init__() self['id'] = id self['address'] = address self['network'] = network or None self['type'] = type self['details'] = details or {} self['devname'] = devname self['ovs_interfaceid'] = ovs_interfaceid self['qbh_params'] = qbh_params self['qbg_params'] = qbg_params self['active'] = active self['vnic_type'] = vnic_type self['profile'] = profile self['preserve_on_delete'] = preserve_on_delete self._set_meta(kwargs) def __eq__(self, other): keys = ['id', 'address', 'network', 'vnic_type', 'type', 'profile', 'details', 'devname', 'ovs_interfaceid', 'qbh_params', 'qbg_params', 'active', 'preserve_on_delete'] return all(self[k] == other[k] for k in keys) def __ne__(self, other): return not self.__eq__(other) def fixed_ips(self): if self['network']: return [fixed_ip for subnet in self['network']['subnets'] for fixed_ip in subnet['ips']] else: return [] def floating_ips(self): return [floating_ip for fixed_ip in self.fixed_ips() for floating_ip in fixed_ip['floating_ips']] def labeled_ips(self): """Returns the list of all IPs The return value looks like this flat structure:: {'network_label': 'my_network', 'network_id': 'n8v29837fn234782f08fjxk3ofhb84', 'ips': [{'address': '123.123.123.123', 'version': 4, 'type: 'fixed', 'meta': {...}}, {'address': '124.124.124.124', 'version': 4, 'type': 'floating', 'meta': {...}}, {'address': 'fe80::4', 'version': 6, 'type': 'fixed', 'meta': {...}}] """ if self['network']: # remove unnecessary fields on fixed_ips ips = [IP(**ip) for ip in self.fixed_ips()] for ip in ips: # remove floating ips from IP, since this is a flat structure # of all IPs del ip['meta']['floating_ips'] # add floating ips to list (if any) ips.extend(self.floating_ips()) return {'network_label': self['network']['label'], 'network_id': self['network']['id'], 'ips': ips} return [] def has_bind_time_event(self, migration): """Returns whether this VIF's network-vif-plugged external event will be sent by Neutron at "bind-time" - in other words, as soon as the port binding is updated. This is in the context of updating the port binding to a host that already has the instance in a shutoff state - in practice, this means reverting either a cold migration or a non-same-host resize. 
""" return (self.is_hybrid_plug_enabled() and not migration.is_same_host()) @property def has_live_migration_plug_time_event(self): """Returns whether this VIF's network-vif-plugged external event will be sent by Neutron at "plugtime" - in other words, as soon as neutron completes configuring the network backend. """ return self.is_hybrid_plug_enabled() def is_hybrid_plug_enabled(self): return self['details'].get(VIF_DETAILS_OVS_HYBRID_PLUG, False) def is_neutron_filtering_enabled(self): return self['details'].get(VIF_DETAILS_PORT_FILTER, False) def get_physical_network(self): phy_network = self['network']['meta'].get('physical_network') if not phy_network: phy_network = self['details'].get(VIF_DETAILS_PHYSICAL_NETWORK) return phy_network @classmethod def hydrate(cls, vif): vif = cls(**vif) vif['network'] = Network.hydrate(vif['network']) return vif def has_allocation(self): return self['profile'] and bool(self['profile'].get('allocation')) def get_netmask(ip, subnet): """Returns the netmask appropriate for injection into a guest.""" if ip['version'] == 4: return str(subnet.as_netaddr().netmask) return subnet.as_netaddr()._prefixlen class NetworkInfo(list): """Stores and manipulates network information for a Nova instance.""" # NetworkInfo is a list of VIFs def fixed_ips(self): """Returns all fixed_ips without floating_ips attached.""" return [ip for vif in self for ip in vif.fixed_ips()] def floating_ips(self): """Returns all floating_ips.""" return [ip for vif in self for ip in vif.floating_ips()] @classmethod def hydrate(cls, network_info): if isinstance(network_info, six.string_types): network_info = jsonutils.loads(network_info) return cls([VIF.hydrate(vif) for vif in network_info]) def wait(self, do_raise=True): """Wait for asynchronous call to finish.""" # There is no asynchronous call for this class, so this is a no-op # here, but subclasses may override to provide asynchronous # capabilities. Must be defined here in the parent class so that code # which works with both parent and subclass types can reference this # method. pass def json(self): return jsonutils.dumps(self) def get_bind_time_events(self, migration): """Returns a list of external events for any VIFs that have "bind-time" events during cold migration. """ return [('network-vif-plugged', vif['id']) for vif in self if vif.has_bind_time_event(migration)] def get_live_migration_plug_time_events(self): """Returns a list of external events for any VIFs that have "plug-time" events during live migration. """ return [('network-vif-plugged', vif['id']) for vif in self if vif.has_live_migration_plug_time_event] def get_plug_time_events(self, migration): """Returns a list of external events for any VIFs that have "plug-time" events during cold migration. """ return [('network-vif-plugged', vif['id']) for vif in self if not vif.has_bind_time_event(migration)] def has_port_with_allocation(self): return any(vif.has_allocation() for vif in self) class NetworkInfoAsyncWrapper(NetworkInfo): """Wrapper around NetworkInfo that allows retrieving NetworkInfo in an async manner. This allows one to start querying for network information before you know you will need it. If you have a long-running operation, this allows the network model retrieval to occur in the background. When you need the data, it will ensure the async operation has completed. 
As an example: def allocate_net_info(arg1, arg2) return call_neutron_to_allocate(arg1, arg2) network_info = NetworkInfoAsyncWrapper(allocate_net_info, arg1, arg2) [do a long running operation -- real network_info will be retrieved in the background] [do something with network_info] """ def __init__(self, async_method, *args, **kwargs): super(NetworkInfoAsyncWrapper, self).__init__() self._gt = utils.spawn(async_method, *args, **kwargs) methods = ['json', 'fixed_ips', 'floating_ips'] for method in methods: fn = getattr(self, method) wrapper = functools.partial(self._sync_wrapper, fn) functools.update_wrapper(wrapper, fn) setattr(self, method, wrapper) def _sync_wrapper(self, wrapped, *args, **kwargs): """Synchronize the model before running a method.""" self.wait() return wrapped(*args, **kwargs) def __getitem__(self, *args, **kwargs): fn = super(NetworkInfoAsyncWrapper, self).__getitem__ return self._sync_wrapper(fn, *args, **kwargs) def __iter__(self, *args, **kwargs): fn = super(NetworkInfoAsyncWrapper, self).__iter__ return self._sync_wrapper(fn, *args, **kwargs) def __len__(self, *args, **kwargs): fn = super(NetworkInfoAsyncWrapper, self).__len__ return self._sync_wrapper(fn, *args, **kwargs) def __str__(self, *args, **kwargs): fn = super(NetworkInfoAsyncWrapper, self).__str__ return self._sync_wrapper(fn, *args, **kwargs) def __repr__(self, *args, **kwargs): fn = super(NetworkInfoAsyncWrapper, self).__repr__ return self._sync_wrapper(fn, *args, **kwargs) def wait(self, do_raise=True): """Wait for asynchronous call to finish.""" if self._gt is not None: try: # NOTE(comstud): This looks funky, but this object is # subclassed from list. In other words, 'self' is really # just a list with a bunch of extra methods. So this # line just replaces the current list (which should be # empty) with the result. self[:] = self._gt.wait() except Exception: if do_raise: raise finally: self._gt = None ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/network/neutron.py0000664000175000017500000051401200000000000017271 0ustar00zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved # Copyright (c) 2012 NEC Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ API and utilities for nova-network interactions. 
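(Editor's note: despite the historical "nova-network" wording, this module
implements Nova's client-side integration with the Neutron networking
service, as the imports and the API class below make clear.)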
""" import copy import functools import time from keystoneauth1 import loading as ks_loading from neutronclient.common import exceptions as neutron_client_exc from neutronclient.v2_0 import client as clientv20 from oslo_concurrency import lockutils from oslo_log import log as logging from oslo_utils import excutils from oslo_utils import strutils from oslo_utils import uuidutils import six from nova.compute import utils as compute_utils import nova.conf from nova import context as nova_context from nova.db import base from nova import exception from nova import hooks from nova.i18n import _ from nova.network import constants from nova.network import model as network_model from nova import objects from nova.objects import fields as obj_fields from nova.pci import manager as pci_manager from nova.pci import request as pci_request from nova.pci import utils as pci_utils from nova.pci import whitelist as pci_whitelist from nova.policies import servers as servers_policies from nova import profiler from nova import service_auth from nova import utils CONF = nova.conf.CONF LOG = logging.getLogger(__name__) _SESSION = None _ADMIN_AUTH = None def reset_state(): global _ADMIN_AUTH global _SESSION _ADMIN_AUTH = None _SESSION = None def _load_auth_plugin(conf): auth_plugin = ks_loading.load_auth_from_conf_options(conf, nova.conf.neutron.NEUTRON_GROUP) if auth_plugin: return auth_plugin if conf.neutron.auth_type is None: # If we're coming in through a REST API call for something like # creating a server, the end user is going to get a 500 response # which is accurate since the system is mis-configured, but we should # leave a breadcrumb for the operator that is checking the logs. LOG.error('The [neutron] section of your nova configuration file ' 'must be configured for authentication with the networking ' 'service endpoint. See the networking service install guide ' 'for details: ' 'https://docs.openstack.org/neutron/latest/install/') err_msg = _('Unknown auth type: %s') % conf.neutron.auth_type raise neutron_client_exc.Unauthorized(message=err_msg) def get_binding_profile(port): """Convenience method to get the binding:profile from the port The binding:profile in the port is undefined in the networking service API and is dependent on backend configuration. This means it could be an empty dict, None, or have some values. :param port: dict port response body from the networking service API :returns: The port binding:profile dict; empty if not set on the port """ return port.get(constants.BINDING_PROFILE, {}) or {} @hooks.add_hook('instance_network_info') def update_instance_cache_with_nw_info(impl, context, instance, nw_info=None): if instance.deleted: LOG.debug('Instance is deleted, no further info cache update', instance=instance) return try: if not isinstance(nw_info, network_model.NetworkInfo): nw_info = None if nw_info is None: nw_info = impl._get_instance_nw_info(context, instance) LOG.debug('Updating instance_info_cache with network_info: %s', nw_info, instance=instance) # NOTE(comstud): The save() method actually handles updating or # creating the instance. We don't need to retrieve the object # from the DB first. 
ic = objects.InstanceInfoCache.new(context, instance.uuid) ic.network_info = nw_info ic.save() instance.info_cache = ic except Exception: with excutils.save_and_reraise_exception(): LOG.exception('Failed storing info cache', instance=instance) def refresh_cache(f): """Decorator to update the instance_info_cache Requires context and instance as function args """ argspec = utils.getargspec(f) @functools.wraps(f) def wrapper(self, context, *args, **kwargs): try: # get the instance from arguments (or raise ValueError) instance = kwargs.get('instance') if not instance: instance = args[argspec.args.index('instance') - 2] except ValueError: msg = _('instance is a required argument to use @refresh_cache') raise Exception(msg) with lockutils.lock('refresh_cache-%s' % instance.uuid): # We need to call the wrapped function with the lock held to ensure # that it can call _get_instance_nw_info safely. res = f(self, context, *args, **kwargs) update_instance_cache_with_nw_info(self, context, instance, nw_info=res) # return the original function's return value return res return wrapper @profiler.trace_cls("neutron_api") class ClientWrapper(clientv20.Client): """A Neutron client wrapper class. Wraps the callable methods, catches Unauthorized,Forbidden from Neutron and convert it to a 401,403 for Nova clients. """ def __init__(self, base_client, admin): # Expose all attributes from the base_client instance self.__dict__ = base_client.__dict__ self.base_client = base_client self.admin = admin def __getattribute__(self, name): obj = object.__getattribute__(self, name) if callable(obj): obj = object.__getattribute__(self, 'proxy')(obj) return obj def proxy(self, obj): def wrapper(*args, **kwargs): try: ret = obj(*args, **kwargs) except neutron_client_exc.Unauthorized: if not self.admin: # Token is expired so Neutron is raising a # unauthorized exception, we should convert it to # raise a 401 to make client to handle a retry by # regenerating a valid token and trying a new # attempt. raise exception.Unauthorized() # In admin context if token is invalid Neutron client # should be able to regenerate a valid by using the # Neutron admin credential configuration located in # nova.conf. LOG.error("Neutron client was not able to generate a " "valid admin token, please verify Neutron " "admin credential located in nova.conf") raise exception.NeutronAdminCredentialConfigurationInvalid() except neutron_client_exc.Forbidden as e: raise exception.Forbidden(six.text_type(e)) return ret return wrapper def _get_auth_plugin(context, admin=False): # NOTE(dprince): In the case where no auth_token is present we allow use of # neutron admin tenant credentials if it is an admin context. This is to # support some services (metadata API) where an admin context is used # without an auth token. 
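    # For reference, the admin credentials loaded here come from the
    # [neutron] section of nova.conf; an illustrative (not authoritative)
    # keystoneauth password-plugin configuration looks roughly like:
    #
    #   [neutron]
    #   auth_type = password
    #   auth_url = http://controller:5000/v3
    #   username = neutron
    #   password = <service password>
    #   project_name = service
    #   user_domain_name = Default
    #   project_domain_name = Default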
global _ADMIN_AUTH if admin or (context.is_admin and not context.auth_token): if not _ADMIN_AUTH: _ADMIN_AUTH = _load_auth_plugin(CONF) return _ADMIN_AUTH if context.auth_token: return service_auth.get_auth_plugin(context) # We did not get a user token and we should not be using # an admin token so log an error raise exception.Unauthorized() def _get_session(): global _SESSION if not _SESSION: _SESSION = ks_loading.load_session_from_conf_options( CONF, nova.conf.neutron.NEUTRON_GROUP) return _SESSION def get_client(context, admin=False): auth_plugin = _get_auth_plugin(context, admin=admin) session = _get_session() client_args = dict(session=session, auth=auth_plugin, global_request_id=context.global_id, connect_retries=CONF.neutron.http_retries) # NOTE(efried): We build an adapter # to pull conf options # to pass to neutronclient # which uses them to build an Adapter. # This should be unwound at some point. adap = utils.get_ksa_adapter( 'network', ksa_auth=auth_plugin, ksa_session=session) client_args = dict(client_args, service_type=adap.service_type, service_name=adap.service_name, interface=adap.interface, region_name=adap.region_name, endpoint_override=adap.endpoint_override) return ClientWrapper(clientv20.Client(**client_args), admin=admin or context.is_admin) def _get_ksa_client(context, admin=False): """Returns a keystoneauth Adapter This method should only be used if python-neutronclient does not yet provide the necessary API bindings. :param context: User request context :param admin: If True, uses the configured credentials, else uses the existing auth_token in the context (the user token). :returns: keystoneauth1 Adapter object """ auth_plugin = _get_auth_plugin(context, admin=admin) session = _get_session() client = utils.get_ksa_adapter( 'network', ksa_auth=auth_plugin, ksa_session=session) client.additional_headers = {'accept': 'application/json'} client.connect_retries = CONF.neutron.http_retries return client def _is_not_duplicate(item, items, items_list_name, instance): present = item in items # The expectation from this function's perspective is that the # item is not part of the items list so if it is part of it # we should at least log it as a warning if present: LOG.warning("%(item)s already exists in list: %(list_name)s " "containing: %(items)s. ignoring it", {'item': item, 'list_name': items_list_name, 'items': items}, instance=instance) return not present def _ensure_no_port_binding_failure(port): binding_vif_type = port.get('binding:vif_type') if binding_vif_type == network_model.VIF_TYPE_BINDING_FAILED: raise exception.PortBindingFailed(port_id=port['id']) class API(base.Base): """API for interacting with the neutron 2.x API.""" def __init__(self): super(API, self).__init__() self.last_neutron_extension_sync = None self.extensions = {} self.pci_whitelist = pci_whitelist.Whitelist( CONF.pci.passthrough_whitelist) def _update_port_with_migration_profile( self, instance, port_id, port_profile, admin_client): try: updated_port = admin_client.update_port( port_id, {'port': {constants.BINDING_PROFILE: port_profile}}) return updated_port except Exception as ex: with excutils.save_and_reraise_exception(): LOG.error("Unable to update binding profile " "for port: %(port)s due to failure: %(error)s", {'port': port_id, 'error': ex}, instance=instance) def _clear_migration_port_profile( self, context, instance, admin_client, ports): for p in ports: # If the port already has a migration profile and if # it is to be torn down, then we need to clean up # the migration profile. 
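            # An in-flight migration typically leaves something like
            # (illustrative values only):
            #     binding:profile = {'migrating_to': 'dest-compute', ...}
            # on the port; the code below strips that key again.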
port_profile = get_binding_profile(p) if not port_profile: continue if constants.MIGRATING_ATTR in port_profile: del port_profile[constants.MIGRATING_ATTR] LOG.debug("Removing port %s migration profile", p['id'], instance=instance) self._update_port_with_migration_profile( instance, p['id'], port_profile, admin_client) def _setup_migration_port_profile( self, context, instance, host, admin_client, ports): # Migrating to a new host for p in ports: # If the host hasn't changed, there is nothing to do. # But if the destination host is different than the # current one, please update the port_profile with # the 'migrating_to'(constants.MIGRATING_ATTR) key pointing to # the given 'host'. host_id = p.get(constants.BINDING_HOST_ID) if host_id != host: port_profile = get_binding_profile(p) # If the "migrating_to" attribute already points at the given # host, then skip the port update call since we're not changing # anything. if host != port_profile.get(constants.MIGRATING_ATTR): port_profile[constants.MIGRATING_ATTR] = host self._update_port_with_migration_profile( instance, p['id'], port_profile, admin_client) LOG.debug("Port %(port_id)s updated with migration " "profile %(profile_data)s successfully", {'port_id': p['id'], 'profile_data': port_profile}, instance=instance) def setup_networks_on_host(self, context, instance, host=None, teardown=False): """Setup or teardown the network structures. :param context: The user request context. :param instance: The instance with attached ports. :param host: Optional host used to control the setup. If provided and is not the same as the current instance.host, this method assumes the instance is being migrated and sets the "migrating_to" attribute in the binding profile for the attached ports. :param teardown: Whether or not network information for the ports should be cleaned up. If True, at a minimum the "migrating_to" attribute is cleared in the binding profile for the ports. If a host is also provided, then port bindings for that host are deleted when teardown is True as long as the host does not match the current instance.host. :raises: nova.exception.PortBindingDeletionFailed if host is not None, teardown is True, and port binding deletion fails. """ # Check if the instance is migrating to a new host. port_migrating = host and (instance.host != host) # If the port is migrating to a new host or if it is a # teardown on the original host, then proceed. if port_migrating or teardown: search_opts = {'device_id': instance.uuid, 'tenant_id': instance.project_id, constants.BINDING_HOST_ID: instance.host} # Now get the port details to process the ports # binding profile info. data = self.list_ports(context, **search_opts) ports = data['ports'] admin_client = get_client(context, admin=True) if teardown: # Reset the port profile self._clear_migration_port_profile( context, instance, admin_client, ports) # If a host was provided, delete any bindings between that # host and the ports as long as the host isn't the same as # the current instance.host. has_binding_ext = self.supports_port_binding_extension(context) if port_migrating and has_binding_ext: self._delete_port_bindings(context, ports, host) elif port_migrating: # Setup the port profile self._setup_migration_port_profile( context, instance, host, admin_client, ports) def _delete_port_bindings(self, context, ports, host): """Attempt to delete all port bindings on the host. :param context: The user request context. 
:param ports: list of port dicts to cleanup; the 'id' field is required per port dict in the list :param host: host from which to delete port bindings :raises: PortBindingDeletionFailed if port binding deletion fails. """ failed_port_ids = [] for port in ports: # This call is safe in that 404s for non-existing # bindings are ignored. try: self.delete_port_binding( context, port['id'], host) except exception.PortBindingDeletionFailed: # delete_port_binding will log an error for each # failure but since we're iterating a list we want # to keep track of all failures to build a generic # exception to raise failed_port_ids.append(port['id']) if failed_port_ids: msg = (_("Failed to delete binding for port(s) " "%(port_ids)s and host %(host)s.") % {'port_ids': ','.join(failed_port_ids), 'host': host}) raise exception.PortBindingDeletionFailed(msg) def _get_available_networks(self, context, project_id, net_ids=None, neutron=None, auto_allocate=False): """Return a network list available for the tenant. The list contains networks owned by the tenant and public networks. If net_ids specified, it searches networks with requested IDs only. """ if not neutron: neutron = get_client(context) if net_ids: # If user has specified to attach instance only to specific # networks then only add these to **search_opts. This search will # also include 'shared' networks. search_opts = {'id': net_ids} nets = neutron.list_networks(**search_opts).get('networks', []) else: # (1) Retrieve non-public network list owned by the tenant. search_opts = {'tenant_id': project_id, 'shared': False} if auto_allocate: # The auto-allocated-topology extension may create complex # network topologies and it does so in a non-transactional # fashion. Therefore API users may be exposed to resources that # are transient or partially built. A client should use # resources that are meant to be ready and this can be done by # checking their admin_state_up flag. search_opts['admin_state_up'] = True nets = neutron.list_networks(**search_opts).get('networks', []) # (2) Retrieve public network list. search_opts = {'shared': True} nets += neutron.list_networks(**search_opts).get('networks', []) _ensure_requested_network_ordering( lambda x: x['id'], nets, net_ids) return nets def _cleanup_created_port(self, port_client, port_id, instance): try: port_client.delete_port(port_id) except neutron_client_exc.NeutronClientException: LOG.exception( 'Failed to delete port %(port_id)s while cleaning up after an ' 'error.', {'port_id': port_id}, instance=instance) def _create_port_minimal(self, port_client, instance, network_id, fixed_ip=None, security_group_ids=None): """Attempts to create a port for the instance on the given network. :param port_client: The client to use to create the port. :param instance: Create the port for the given instance. :param network_id: Create the port on the given network. :param fixed_ip: Optional fixed IP to use from the given network. :param security_group_ids: Optional list of security group IDs to apply to the port. :returns: The created port. :raises PortLimitExceeded: If neutron fails with an OverQuota error. :raises NoMoreFixedIps: If neutron fails with IpAddressGenerationFailure error. :raises: PortBindingFailed: If port binding failed. :raises NetworksWithQoSPolicyNotSupported: if the created port has resource request. 
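        The request body sent to Neutron is built incrementally below; a
        minimal body looks roughly like (illustrative placeholders)::

            {'port': {'device_id': '<instance uuid>',
                      'network_id': '<network id>',
                      'admin_state_up': True,
                      'tenant_id': '<project id>'}}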
""" # Set the device_id so it's clear who this port was created for, # and to stop other instances trying to use it port_req_body = {'port': {'device_id': instance.uuid}} try: if fixed_ip: port_req_body['port']['fixed_ips'] = [ {'ip_address': str(fixed_ip)}] port_req_body['port']['network_id'] = network_id port_req_body['port']['admin_state_up'] = True port_req_body['port']['tenant_id'] = instance.project_id if security_group_ids: port_req_body['port']['security_groups'] = security_group_ids port_response = port_client.create_port(port_req_body) port = port_response['port'] port_id = port['id'] # NOTE(gibi): Checking if the created port has resource request as # such ports are currently not supported as they would at least # need resource allocation manipulation in placement but might also # need a new scheduling if resource on this host is not available. if port.get(constants.RESOURCE_REQUEST, None): msg = _( "The auto-created port %(port_id)s is being deleted due " "to its network having QoS policy.") LOG.info(msg, {'port_id': port_id}) self._cleanup_created_port(port_client, port_id, instance) # NOTE(gibi): This limitation regarding server create can be # removed when the port creation is moved to the conductor. But # this code also limits attaching a network that has QoS # minimum bandwidth rule. raise exception.NetworksWithQoSPolicyNotSupported( instance_uuid=instance.uuid, network_id=network_id) try: _ensure_no_port_binding_failure(port) except exception.PortBindingFailed: with excutils.save_and_reraise_exception(): port_client.delete_port(port_id) LOG.debug('Successfully created port: %s', port_id, instance=instance) return port except neutron_client_exc.InvalidIpForNetworkClient: LOG.warning('Neutron error: %(ip)s is not a valid IP address ' 'for network %(network_id)s.', {'ip': fixed_ip, 'network_id': network_id}, instance=instance) msg = (_('Fixed IP %(ip)s is not a valid ip address for ' 'network %(network_id)s.') % {'ip': fixed_ip, 'network_id': network_id}) raise exception.InvalidInput(reason=msg) except (neutron_client_exc.IpAddressInUseClient, neutron_client_exc.IpAddressAlreadyAllocatedClient): LOG.warning('Neutron error: Fixed IP %s is ' 'already in use.', fixed_ip, instance=instance) msg = _("Fixed IP %s is already in use.") % fixed_ip raise exception.FixedIpAlreadyInUse(message=msg) except neutron_client_exc.OverQuotaClient: LOG.warning( 'Neutron error: Port quota exceeded in tenant: %s', port_req_body['port']['tenant_id'], instance=instance) raise exception.PortLimitExceeded() except neutron_client_exc.IpAddressGenerationFailureClient: LOG.warning('Neutron error: No more fixed IPs in network: %s', network_id, instance=instance) raise exception.NoMoreFixedIps(net=network_id) except neutron_client_exc.NeutronClientException: with excutils.save_and_reraise_exception(): LOG.exception('Neutron error creating port on network %s', network_id, instance=instance) def _update_port(self, port_client, instance, port_id, port_req_body): try: port_response = port_client.update_port(port_id, port_req_body) port = port_response['port'] _ensure_no_port_binding_failure(port) LOG.debug('Successfully updated port: %s', port_id, instance=instance) return port except neutron_client_exc.MacAddressInUseClient: mac_address = port_req_body['port'].get('mac_address') network_id = port_req_body['port'].get('network_id') LOG.warning('Neutron error: MAC address %(mac)s is already ' 'in use on network %(network)s.', {'mac': mac_address, 'network': network_id}, instance=instance) raise 
exception.PortInUse(port_id=mac_address) except neutron_client_exc.HostNotCompatibleWithFixedIpsClient: network_id = port_req_body['port'].get('network_id') LOG.warning('Neutron error: Tried to bind a port with ' 'fixed_ips to a host in the wrong segment on ' 'network %(network)s.', {'network': network_id}, instance=instance) raise exception.FixedIpInvalidOnHost(port_id=port_id) def _check_external_network_attach(self, context, nets): """Check if attaching to external network is permitted.""" if not context.can(servers_policies.NETWORK_ATTACH_EXTERNAL, fatal=False): for net in nets: # Perform this check here rather than in validate_networks to # ensure the check is performed every time # allocate_for_instance is invoked if net.get('router:external') and not net.get('shared'): raise exception.ExternalNetworkAttachForbidden( network_uuid=net['id']) def _unbind_ports(self, context, ports, neutron, port_client=None): """Unbind the given ports by clearing their device_id, device_owner and dns_name. :param context: The request context. :param ports: list of port IDs. :param neutron: neutron client for the current context. :param port_client: The client with appropriate karma for updating the ports. """ if port_client is None: # Requires admin creds to set port bindings port_client = get_client(context, admin=True) networks = {} for port_id in ports: # A port_id is optional in the NetworkRequest object so check here # in case the caller forgot to filter the list. if port_id is None: continue port_req_body = {'port': {'device_id': '', 'device_owner': ''}} port_req_body['port'][constants.BINDING_HOST_ID] = None try: port = self._show_port( context, port_id, neutron_client=neutron, fields=[constants.BINDING_PROFILE, 'network_id']) except exception.PortNotFound: LOG.debug('Unable to show port %s as it no longer ' 'exists.', port_id) return except Exception: # NOTE: In case we can't retrieve the binding:profile or # network info assume that they are empty LOG.exception("Unable to get binding:profile for port '%s'", port_id) port_profile = {} network = {} else: port_profile = get_binding_profile(port) net_id = port.get('network_id') if net_id in networks: network = networks.get(net_id) else: network = neutron.show_network(net_id, fields=['dns_domain'] ).get('network') networks[net_id] = network # NOTE: We're doing this to remove the binding information # for the physical device but don't want to overwrite the other # information in the binding profile. for profile_key in ('pci_vendor_info', 'pci_slot', constants.ALLOCATION): if profile_key in port_profile: del port_profile[profile_key] port_req_body['port'][constants.BINDING_PROFILE] = port_profile # NOTE: For internal DNS integration (network does not have a # dns_domain), or if we cannot retrieve network info, we use the # admin client to reset dns_name. if self._has_dns_extension() and not network.get('dns_domain'): port_req_body['port']['dns_name'] = '' try: port_client.update_port(port_id, port_req_body) except neutron_client_exc.PortNotFoundClient: LOG.debug('Unable to unbind port %s as it no longer ' 'exists.', port_id) except Exception: LOG.exception("Unable to clear device ID for port '%s'", port_id) # NOTE: For external DNS integration, we use the neutron client # with user's context to reset the dns_name since the recordset is # under user's zone. 
self._reset_port_dns_name(network, port_id, neutron) def _validate_requested_port_ids(self, context, instance, neutron, requested_networks, attach=False): """Processes and validates requested networks for allocation. Iterates over the list of NetworkRequest objects, validating the request and building sets of ports and networks to use for allocating ports for the instance. :param context: The user request context. :type context: nova.context.RequestContext :param instance: allocate networks on this instance :type instance: nova.objects.Instance :param neutron: neutron client session :type neutron: neutronclient.v2_0.client.Client :param requested_networks: List of user-requested networks and/or ports :type requested_networks: nova.objects.NetworkRequestList :param attach: Boolean indicating if a port is being attached to an existing running instance. Should be False during server create. :type attach: bool :returns: tuple of: - ports: dict mapping of port id to port dict - ordered_networks: list of nova.objects.NetworkRequest objects for requested networks (either via explicit network request or the network for an explicit port request) :raises nova.exception.PortNotFound: If a requested port is not found in Neutron. :raises nova.exception.PortNotUsable: If a requested port is not owned by the same tenant that the instance is created under. :raises nova.exception.PortInUse: If a requested port is already attached to another instance. :raises nova.exception.PortNotUsableDNS: If a requested port has a value assigned to its dns_name attribute. :raises nova.exception.AttachSRIOVPortNotSupported: If a requested port is an SR-IOV port and ``attach=True``. """ ports = {} ordered_networks = [] # If we're asked to auto-allocate the network then there won't be any # ports or real neutron networks to lookup, so just return empty # results. if requested_networks and not requested_networks.auto_allocate: for request in requested_networks: # Process a request to use a pre-existing neutron port. if request.port_id: # Make sure the port exists. port = self._show_port(context, request.port_id, neutron_client=neutron) # Make sure the instance has access to the port. if port['tenant_id'] != instance.project_id: raise exception.PortNotUsable(port_id=request.port_id, instance=instance.uuid) # Make sure the port isn't already attached to another # instance. if port.get('device_id'): raise exception.PortInUse(port_id=request.port_id) # Make sure that if the user assigned a value to the port's # dns_name attribute, it is equal to the instance's # hostname if port.get('dns_name'): if port['dns_name'] != instance.hostname: raise exception.PortNotUsableDNS( port_id=request.port_id, instance=instance.uuid, value=port['dns_name'], hostname=instance.hostname) # Make sure the port is usable _ensure_no_port_binding_failure(port) # Make sure the port can be attached. if attach: # SR-IOV port attach is not supported. vnic_type = port.get('binding:vnic_type', network_model.VNIC_TYPE_NORMAL) if vnic_type in network_model.VNIC_TYPES_SRIOV: raise exception.AttachSRIOVPortNotSupported( port_id=port['id'], instance_uuid=instance.uuid) # If requesting a specific port, automatically process # the network for that port as if it were explicitly # requested. request.network_id = port['network_id'] ports[request.port_id] = port # Process a request to use a specific neutron network. 
if request.network_id: ordered_networks.append(request) return ports, ordered_networks def _clean_security_groups(self, security_groups): """Cleans security groups requested from Nova API Neutron already passes a 'default' security group when creating ports so it's not necessary to specify it to the request. """ if not security_groups: security_groups = [] elif security_groups == [constants.DEFAULT_SECGROUP]: security_groups = [] return security_groups def _process_security_groups(self, instance, neutron, security_groups): """Processes and validates requested security groups for allocation. Iterates over the list of requested security groups, validating the request and filtering out the list of security group IDs to use for port allocation. :param instance: allocate networks on this instance :type instance: nova.objects.Instance :param neutron: neutron client session :type neutron: neutronclient.v2_0.client.Client :param security_groups: list of requested security group name or IDs to use when allocating new ports for the instance :return: list of security group IDs to use when allocating new ports :raises nova.exception.NoUniqueMatch: If multiple security groups are requested with the same name. :raises nova.exception.SecurityGroupNotFound: If a requested security group is not in the tenant-filtered list of available security groups in Neutron. """ security_group_ids = [] # TODO(arosen) Should optimize more to do direct query for security # group if len(security_groups) == 1 if len(security_groups): # NOTE(slaweq): fields other than name and id aren't really needed # so asking only about those fields will allow Neutron to not # prepare list of rules for each found security group. That may # speed processing of this request a lot in case when tenant has # got many security groups sg_fields = ['id', 'name'] search_opts = {'tenant_id': instance.project_id} user_security_groups = neutron.list_security_groups( fields=sg_fields, **search_opts).get('security_groups') for security_group in security_groups: name_match = None uuid_match = None for user_security_group in user_security_groups: if user_security_group['name'] == security_group: # If there was a name match in a previous iteration # of the loop, we have a conflict. if name_match: raise exception.NoUniqueMatch( _("Multiple security groups found matching" " '%s'. Use an ID to be more specific.") % security_group) name_match = user_security_group['id'] if user_security_group['id'] == security_group: uuid_match = user_security_group['id'] # If a user names the security group the same as # another's security groups uuid, the name takes priority. if name_match: security_group_ids.append(name_match) elif uuid_match: security_group_ids.append(uuid_match) else: raise exception.SecurityGroupNotFound( security_group_id=security_group) return security_group_ids def _validate_requested_network_ids(self, context, instance, neutron, requested_networks, ordered_networks): """Check requested networks using the Neutron API. Check the user has access to the network they requested, and that it is a suitable network to connect to. This includes getting the network details for any ports that have been passed in, because the request will have been updated with the network_id in _validate_requested_port_ids. If the user has not requested any ports or any networks, we get back a full list of networks the user has access to, and if there is only one network, we update ordered_networks so we will connect the instance to that network. 
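        For example (illustrative only), a request for a single specific
        network results in a mapping such as::

            {'<net-uuid>': {'id': '<net-uuid>', 'shared': False, ...}}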
:param context: The request context. :param instance: nova.objects.instance.Instance object. :param neutron: neutron client :param requested_networks: nova.objects.NetworkRequestList, list of user-requested networks and/or ports; may be empty :param ordered_networks: output from _validate_requested_port_ids that will be used to create and update ports :returns: dict, keyed by network ID, of networks to use :raises InterfaceAttachFailedNoNetwork: If no specific networks were requested and none are available. :raises NetworkAmbiguous: If no specific networks were requested but more than one is available. :raises ExternalNetworkAttachForbidden: If the policy rules forbid the request context from using an external non-shared network but one was requested (or available). """ # Get networks from Neutron # If net_ids is empty, this actually returns all available nets auto_allocate = requested_networks and requested_networks.auto_allocate net_ids = [request.network_id for request in ordered_networks] nets = self._get_available_networks(context, instance.project_id, net_ids, neutron=neutron, auto_allocate=auto_allocate) if not nets: if requested_networks: # There are no networks available for the project to use and # none specifically requested, so check to see if we're asked # to auto-allocate the network. if auto_allocate: # During validate_networks we checked to see if # auto-allocation is available so we don't need to do that # again here. nets = [self._auto_allocate_network(instance, neutron)] else: # NOTE(chaochin): If user specifies a network id and the # network can not be found, raise NetworkNotFound error. for request in requested_networks: if not request.port_id and request.network_id: raise exception.NetworkNotFound( network_id=request.network_id) else: # no requested nets and user has no available nets return {} # if this function is directly called without a requested_network param # or if it is indirectly called through allocate_port_for_instance() # with None params=(network_id=None, requested_ip=None, port_id=None, # pci_request_id=None): if (not requested_networks or requested_networks.is_single_unspecified or requested_networks.auto_allocate): # If no networks were requested and none are available, consider # it a bad request. if not nets: raise exception.InterfaceAttachFailedNoNetwork( project_id=instance.project_id) # bug/1267723 - if no network is requested and more # than one is available then raise NetworkAmbiguous Exception if len(nets) > 1: msg = _("Multiple possible networks found, use a Network " "ID to be more specific.") raise exception.NetworkAmbiguous(msg) ordered_networks.append( objects.NetworkRequest(network_id=nets[0]['id'])) # NOTE(melwitt): check external net attach permission after the # check for ambiguity, there could be another # available net which is permitted bug/1364344 self._check_external_network_attach(context, nets) return {net['id']: net for net in nets} def _create_ports_for_instance(self, context, instance, ordered_networks, nets, neutron, security_group_ids): """Create port for network_requests that don't have a port_id :param context: The request context. :param instance: nova.objects.instance.Instance object. 
:param ordered_networks: objects.NetworkRequestList in requested order :param nets: a dict of network_id to networks returned from neutron :param neutron: neutronclient built from users request context :param security_group_ids: a list of security group IDs to be applied to any ports created :returns a list of pairs (NetworkRequest, created_port_uuid); note that created_port_uuid will be None for the pair where a pre-existing port was part of the user request """ created_port_ids = [] requests_and_created_ports = [] for request in ordered_networks: network = nets.get(request.network_id) # if network_id did not pass validate_networks() and not available # here then skip it safely not continuing with a None Network if not network: continue try: port_security_enabled = network.get( 'port_security_enabled', True) if port_security_enabled: if not network.get('subnets'): # Neutron can't apply security groups to a port # for a network without L3 assignments. LOG.debug('Network with port security enabled does ' 'not have subnets so security groups ' 'cannot be applied: %s', network, instance=instance) raise exception.SecurityGroupCannotBeApplied() else: if security_group_ids: # We don't want to apply security groups on port # for a network defined with # 'port_security_enabled=False'. LOG.debug('Network has port security disabled so ' 'security groups cannot be applied: %s', network, instance=instance) raise exception.SecurityGroupCannotBeApplied() created_port_id = None if not request.port_id: # create minimal port, if port not already created by user created_port = self._create_port_minimal( neutron, instance, request.network_id, request.address, security_group_ids) created_port_id = created_port['id'] created_port_ids.append(created_port_id) requests_and_created_ports.append(( request, created_port_id)) except Exception: with excutils.save_and_reraise_exception(): if created_port_ids: self._delete_ports( neutron, instance, created_port_ids) return requests_and_created_ports def allocate_for_instance(self, context, instance, vpn, requested_networks, security_groups=None, bind_host_id=None, attach=False, resource_provider_mapping=None): """Allocate network resources for the instance. :param context: The request context. :param instance: nova.objects.instance.Instance object. :param vpn: A boolean, ignored by this driver. :param requested_networks: objects.NetworkRequestList object. :param security_groups: None or security groups to allocate for instance. :param bind_host_id: the host ID to attach to the ports being created. :param attach: Boolean indicating if a port is being attached to an existing running instance. Should be False during server create. :param resource_provider_mapping: a dict keyed by ids of the entities (for example Neutron port) requesting resources for this instance mapped to a list of resource provider UUIDs that are fulfilling such a resource request. :returns: network info as from get_instance_nw_info() """ LOG.debug('allocate_for_instance()', instance=instance) if not instance.project_id: msg = _('empty project id for instance %s') raise exception.InvalidInput( reason=msg % instance.uuid) # We do not want to create a new neutron session for each call neutron = get_client(context) # We always need admin_client to build nw_info, # we sometimes need it when updating ports admin_client = get_client(context, admin=True) # # Validate ports and networks with neutron. 
The requested_ports_dict # variable is a dict, keyed by port ID, of ports that were on the user # request and may be empty. The ordered_networks variable is a list of # NetworkRequest objects for any networks or ports specifically # requested by the user, which again may be empty. # # NOTE(gibi): we use the admin_client here to ensure that the returned # ports has the resource_request attribute filled as later we use this # information to decide when to add allocation key to the port binding. # See bug 1849657. requested_ports_dict, ordered_networks = ( self._validate_requested_port_ids( context, instance, admin_client, requested_networks, attach=attach)) nets = self._validate_requested_network_ids( context, instance, neutron, requested_networks, ordered_networks) if not nets: LOG.debug("No network configured", instance=instance) return network_model.NetworkInfo([]) # Validate requested security groups security_groups = self._clean_security_groups(security_groups) security_group_ids = self._process_security_groups( instance, neutron, security_groups) # Tell Neutron which resource provider fulfills the ports' resource # request. # We only consider pre-created ports here as ports created # below based on requested networks are not scheduled to have their # resource request fulfilled. for port in requested_ports_dict.values(): # only communicate the allocations if the port has resource # requests if port.get(constants.RESOURCE_REQUEST): profile = get_binding_profile(port) # NOTE(gibi): In the resource provider mapping there can be # more than one RP fulfilling a request group. But resource # requests of a Neutron port is always mapped to a # numbered request group that is always fulfilled by one # resource provider. So we only pass that single RP UUID here. profile[constants.ALLOCATION] = resource_provider_mapping[ port['id']][0] port[constants.BINDING_PROFILE] = profile # Create ports from the list of ordered_networks. The returned # requests_and_created_ports variable is a list of 2-item tuples of # the form (NetworkRequest, created_port_id). Note that a tuple pair # will have None for the created_port_id if the NetworkRequest already # contains a port_id, meaning the user requested a specific # pre-existing port so one wasn't created here. The ports will be # updated later in _update_ports_for_instance to be bound to the # instance and compute host. requests_and_created_ports = self._create_ports_for_instance( context, instance, ordered_networks, nets, neutron, security_group_ids) # # Update existing and newly created ports # ordered_nets, ordered_port_ids, preexisting_port_ids, \ created_port_ids = self._update_ports_for_instance( context, instance, neutron, admin_client, requests_and_created_ports, nets, bind_host_id, requested_ports_dict) # # Perform a full update of the network_info_cache, # including re-fetching lots of the required data from neutron # nw_info = self.get_instance_nw_info( context, instance, networks=ordered_nets, port_ids=ordered_port_ids, admin_client=admin_client, preexisting_port_ids=preexisting_port_ids) # Only return info about ports we processed in this run, which might # have been pre-existing neutron ports or ones that nova created. In # the initial allocation case (server create), this will be everything # we processed, and in later runs will only be what was processed that # time. For example, if the instance was created with port A and # then port B was attached in this call, only port B would be returned. 
# Thus, this filtering only affects the attach case. return network_model.NetworkInfo([vif for vif in nw_info if vif['id'] in created_port_ids + preexisting_port_ids]) def _update_ports_for_instance(self, context, instance, neutron, admin_client, requests_and_created_ports, nets, bind_host_id, requested_ports_dict): """Update ports from network_requests. Updates the pre-existing ports and the ones created in ``_create_ports_for_instance`` with ``device_id``, ``device_owner``, optionally ``mac_address`` and, depending on the loaded extensions, ``rxtx_factor``, ``binding:host_id``, ``dns_name``. :param context: The request context. :param instance: nova.objects.instance.Instance object. :param neutron: client using user context :param admin_client: client using admin context :param requests_and_created_ports: [(NetworkRequest, created_port_id)]; Note that created_port_id will be None for any user-requested pre-existing port. :param nets: a dict of network_id to networks returned from neutron :param bind_host_id: a string for port['binding:host_id'] :param requested_ports_dict: dict, keyed by port ID, of ports requested by the user :returns: tuple with the following:: * list of network dicts in their requested order * list of port IDs in their requested order - note that does not mean the port was requested by the user, it could be a port created on a network requested by the user * list of pre-existing port IDs requested by the user * list of created port IDs """ # We currently require admin creds to set port bindings. port_client = admin_client preexisting_port_ids = [] created_port_ids = [] ports_in_requested_order = [] nets_in_requested_order = [] created_vifs = [] # this list is for cleanups if we fail for request, created_port_id in requests_and_created_ports: vifobj = objects.VirtualInterface(context) vifobj.instance_uuid = instance.uuid vifobj.tag = request.tag if 'tag' in request else None network = nets.get(request.network_id) # if network_id did not pass validate_networks() and not available # here then skip it safely not continuing with a None Network if not network: continue nets_in_requested_order.append(network) zone = 'compute:%s' % instance.availability_zone port_req_body = {'port': {'device_id': instance.uuid, 'device_owner': zone}} if (requested_ports_dict and request.port_id in requested_ports_dict and get_binding_profile(requested_ports_dict[request.port_id])): port_req_body['port'][constants.BINDING_PROFILE] = \ get_binding_profile(requested_ports_dict[request.port_id]) try: self._populate_neutron_extension_values( context, instance, request.pci_request_id, port_req_body, network=network, neutron=neutron, bind_host_id=bind_host_id) self._populate_pci_mac_address(instance, request.pci_request_id, port_req_body) if created_port_id: port_id = created_port_id created_port_ids.append(port_id) else: port_id = request.port_id ports_in_requested_order.append(port_id) # After port is created, update other bits updated_port = self._update_port( port_client, instance, port_id, port_req_body) # NOTE(danms): The virtual_interfaces table enforces global # uniqueness on MAC addresses, which clearly does not match # with neutron's view of the world. Since address is a 255-char # string we can namespace it with our port id. Using '/' should # be safely excluded from MAC address notations as well as # UUIDs. We can stop doing this now that we've removed # nova-network, but we need to leave the read translation in # for longer than that of course. 
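                # An address stored this way looks like, e.g. (illustrative
                # values): 'fa:16:3e:12:34:56/<port uuid>'.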
vifobj.address = '%s/%s' % (updated_port['mac_address'], updated_port['id']) vifobj.uuid = port_id vifobj.create() created_vifs.append(vifobj) if not created_port_id: # only add if update worked and port create not called preexisting_port_ids.append(port_id) self._update_port_dns_name(context, instance, network, ports_in_requested_order[-1], neutron) except Exception: with excutils.save_and_reraise_exception(): self._unbind_ports(context, preexisting_port_ids, neutron, port_client) self._delete_ports(neutron, instance, created_port_ids) for vif in created_vifs: vif.destroy() return (nets_in_requested_order, ports_in_requested_order, preexisting_port_ids, created_port_ids) def _refresh_neutron_extensions_cache(self, context, neutron=None): """Refresh the neutron extensions cache when necessary.""" if (not self.last_neutron_extension_sync or ((time.time() - self.last_neutron_extension_sync) >= CONF.neutron.extension_sync_interval)): if neutron is None: neutron = get_client(context) extensions_list = neutron.list_extensions()['extensions'] self.last_neutron_extension_sync = time.time() self.extensions.clear() self.extensions = {ext['name']: ext for ext in extensions_list} def _has_multi_provider_extension(self, context, neutron=None): self._refresh_neutron_extensions_cache(context, neutron=neutron) return constants.MULTI_NET_EXT in self.extensions def _has_dns_extension(self): return constants.DNS_INTEGRATION in self.extensions def _has_qos_queue_extension(self, context, neutron=None): self._refresh_neutron_extensions_cache(context, neutron=neutron) return constants.QOS_QUEUE in self.extensions def _has_fip_port_details_extension(self, context, neutron=None): self._refresh_neutron_extensions_cache(context, neutron=neutron) return constants.FIP_PORT_DETAILS in self.extensions def has_substr_port_filtering_extension(self, context): self._refresh_neutron_extensions_cache(context) return constants.SUBSTR_PORT_FILTERING in self.extensions def supports_port_binding_extension(self, context): """This is a simple check to see if the neutron "binding-extended" extension exists and is enabled. The "binding-extended" extension allows nova to bind a port to multiple hosts at the same time, like during live migration. :param context: the user request context :returns: True if the binding-extended API extension is available, False otherwise """ self._refresh_neutron_extensions_cache(context) return constants.PORT_BINDING_EXTENDED in self.extensions def bind_ports_to_host(self, context, instance, host, vnic_types=None, port_profiles=None): """Attempts to bind the ports from the instance on the given host If the ports are already actively bound to another host, like the source host during live migration, then the new port bindings will be inactive, assuming $host is the destination host for the live migration. In the event of an error, any ports which were successfully bound to the host should have those host bindings removed from the ports. This method should not be used if "supports_port_binding_extension" returns False. :param context: the user request context :type context: nova.context.RequestContext :param instance: the instance with a set of ports :type instance: nova.objects.Instance :param host: the host on which to bind the ports which are attached to the instance :type host: str :param vnic_types: optional dict for the host port binding :type vnic_types: dict of : :param port_profiles: optional dict per port ID for the host port binding profile. 
note that the port binding profile is mutable via the networking "Port Binding" API so callers that pass in a profile should ensure they have the latest version from neutron with their changes merged, which can be determined using the "revision_number" attribute of the port. :type port_profiles: dict of : :raises: PortBindingFailed if any of the ports failed to be bound to the destination host :returns: dict, keyed by port ID, of a new host port binding dict per port that was bound """ # Get the current ports off the instance. This assumes the cache is # current. network_info = instance.get_network_info() if not network_info: # The instance doesn't have any ports so there is nothing to do. LOG.debug('Instance does not have any ports.', instance=instance) return {} client = _get_ksa_client(context, admin=True) bindings_by_port_id = {} for vif in network_info: # Now bind each port to the destination host and keep track of each # port that is bound to the resulting binding so we can rollback in # the event of a failure, or return the results if everything is OK port_id = vif['id'] binding = dict(host=host) if vnic_types is None or port_id not in vnic_types: binding['vnic_type'] = vif['vnic_type'] else: binding['vnic_type'] = vnic_types[port_id] if port_profiles is None or port_id not in port_profiles: binding['profile'] = vif['profile'] else: binding['profile'] = port_profiles[port_id] data = dict(binding=binding) resp = self._create_port_binding(context, client, port_id, data) if resp: bindings_by_port_id[port_id] = resp.json()['binding'] else: # Something failed, so log the error and rollback any # successful bindings. LOG.error('Binding failed for port %s and host %s. ' 'Error: (%s %s)', port_id, host, resp.status_code, resp.text, instance=instance) for rollback_port_id in bindings_by_port_id: try: self.delete_port_binding( context, rollback_port_id, host) except exception.PortBindingDeletionFailed: LOG.warning('Failed to remove binding for port %s on ' 'host %s.', rollback_port_id, host, instance=instance) raise exception.PortBindingFailed(port_id=port_id) return bindings_by_port_id @staticmethod def _create_port_binding(context, client, port_id, data): """Creates a port binding with the specified data. :param context: The request context for the operation. :param client: keystoneauth1.adapter.Adapter :param port_id: The ID of the port on which to create the binding. :param data: dict of port binding data (requires at least the host), for example:: {'binding': {'host': 'dest.host.com'}} :return: requests.Response object """ return client.post( '/v2.0/ports/%s/bindings' % port_id, json=data, raise_exc=False, global_request_id=context.global_id) def delete_port_binding(self, context, port_id, host): """Delete the port binding for the given port ID and host This method should not be used if "supports_port_binding_extension" returns False. :param context: The request context for the operation. :param port_id: The ID of the port with a binding to the host. :param host: The host from which port bindings should be deleted. :raises: nova.exception.PortBindingDeletionFailed if a non-404 error response is received from neutron. """ client = _get_ksa_client(context, admin=True) resp = self._delete_port_binding(context, client, port_id, host) if resp: LOG.debug('Deleted binding for port %s and host %s.', port_id, host) else: # We can safely ignore 404s since we're trying to delete # the thing that wasn't found anyway. if resp.status_code != 404: # Log the details, raise an exception. 
LOG.error('Unexpected error trying to delete binding ' 'for port %s and host %s. Code: %s. ' 'Error: %s', port_id, host, resp.status_code, resp.text) raise exception.PortBindingDeletionFailed( port_id=port_id, host=host) @staticmethod def _delete_port_binding(context, client, port_id, host): """Deletes the binding for the given host on the given port. :param context: The request context for the operation. :param client: keystoneauth1.adapter.Adapter :param port_id: ID of the port from which to delete the binding :param host: A string name of the host on which the port is bound :return: requests.Response object """ return client.delete( '/v2.0/ports/%s/bindings/%s' % (port_id, host), raise_exc=False, global_request_id=context.global_id) def activate_port_binding(self, context, port_id, host): """Activates an inactive port binding. If there are two port bindings to different hosts, activating the inactive binding atomically changes the other binding to inactive. :param context: The request context for the operation. :param port_id: The ID of the port with an inactive binding on the host. :param host: The host on which the inactive port binding should be activated. :raises: nova.exception.PortBindingActivationFailed if a non-409 error response is received from neutron. """ client = _get_ksa_client(context, admin=True) # This is a bit weird in that we don't PUT and update the status # to ACTIVE, it's more like a POST action method in the compute API. resp = self._activate_port_binding(context, client, port_id, host) if resp: LOG.debug('Activated binding for port %s and host %s.', port_id, host) # A 409 means the port binding is already active, which shouldn't # happen if the caller is doing things in the correct order. elif resp.status_code == 409: LOG.warning('Binding for port %s and host %s is already ' 'active.', port_id, host) else: # Log the details, raise an exception. LOG.error('Unexpected error trying to activate binding ' 'for port %s and host %s. Code: %s. ' 'Error: %s', port_id, host, resp.status_code, resp.text) raise exception.PortBindingActivationFailed( port_id=port_id, host=host) @staticmethod def _activate_port_binding(context, client, port_id, host): """Activates an inactive port binding. :param context: The request context for the operation. :param client: keystoneauth1.adapter.Adapter :param port_id: ID of the port to activate the binding on :param host: A string name of the host identifying the binding to be activated :return: requests.Response object """ return client.put( '/v2.0/ports/%s/bindings/%s/activate' % (port_id, host), raise_exc=False, global_request_id=context.global_id) @staticmethod def _get_port_binding(context, client, port_id, host): """Returns a port binding of a given port on a given host :param context: The request context for the operation. 
:param client: keystoneauth1.adapter.Adapter :param port_id: ID of the port to get the binding :param host: A string name of the host identifying the binding to be returned :return: requests.Response object """ return client.get( '/v2.0/ports/%s/bindings/%s' % (port_id, host), raise_exc=False, global_request_id=context.global_id) def _get_pci_device_profile(self, pci_dev): dev_spec = self.pci_whitelist.get_devspec(pci_dev) if dev_spec: return {'pci_vendor_info': "%s:%s" % (pci_dev.vendor_id, pci_dev.product_id), 'pci_slot': pci_dev.address, 'physical_network': dev_spec.get_tags().get('physical_network')} raise exception.PciDeviceNotFound(node_id=pci_dev.compute_node_id, address=pci_dev.address) def _populate_neutron_binding_profile(self, instance, pci_request_id, port_req_body): """Populate neutron binding:profile. Populate it with SR-IOV related information :raises PciDeviceNotFound: If a claimed PCI device for the given pci_request_id cannot be found on the instance. """ if pci_request_id: pci_devices = pci_manager.get_instance_pci_devs( instance, pci_request_id) if not pci_devices: # The pci_request_id likely won't mean much except for tracing # through the logs since it is generated per request. LOG.error('Unable to find PCI device using PCI request ID in ' 'list of claimed instance PCI devices: %s. Is the ' '[pci]/passthrough_whitelist configuration correct?', # Convert to a primitive list to stringify it. list(instance.pci_devices), instance=instance) raise exception.PciDeviceNotFound( _('PCI device not found for request ID %s.') % pci_request_id) pci_dev = pci_devices.pop() profile = copy.deepcopy(get_binding_profile(port_req_body['port'])) profile.update(self._get_pci_device_profile(pci_dev)) port_req_body['port'][constants.BINDING_PROFILE] = profile @staticmethod def _populate_pci_mac_address(instance, pci_request_id, port_req_body): """Add the updated MAC address value to the update_port request body. Currently this is done only for PF passthrough. """ if pci_request_id is not None: pci_devs = pci_manager.get_instance_pci_devs( instance, pci_request_id) if len(pci_devs) != 1: # NOTE(ndipanov): We shouldn't ever get here since # InstancePCIRequest instances built from network requests # only ever index a single device, which needs to be # successfully claimed for this to be called as part of # allocate_networks method LOG.error("PCI request %s does not have a " "unique device associated with it. Unable to " "determine MAC address", pci_request_id, instance=instance) return pci_dev = pci_devs[0] if pci_dev.dev_type == obj_fields.PciDeviceType.SRIOV_PF: try: mac = pci_utils.get_mac_by_pci_address(pci_dev.address) except exception.PciDeviceNotFoundById as e: LOG.error( "Could not determine MAC address for %(addr)s, " "error: %(e)s", {"addr": pci_dev.address, "e": e}, instance=instance) else: port_req_body['port']['mac_address'] = mac def _populate_neutron_extension_values(self, context, instance, pci_request_id, port_req_body, network=None, neutron=None, bind_host_id=None): """Populate neutron extension values for the instance. If the extensions loaded contain QOS_QUEUE then pass the rxtx_factor. 
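        Depending on the loaded extensions, the resulting update body may
        carry entries roughly like (illustrative values)::

            {'port': {'rxtx_factor': 1.0,
                      'binding:host_id': 'compute-1',
                      'dns_name': 'myserver'}}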
""" if self._has_qos_queue_extension(context, neutron=neutron): flavor = instance.get_flavor() rxtx_factor = flavor.get('rxtx_factor') port_req_body['port']['rxtx_factor'] = rxtx_factor port_req_body['port'][constants.BINDING_HOST_ID] = bind_host_id self._populate_neutron_binding_profile(instance, pci_request_id, port_req_body) if self._has_dns_extension(): # If the DNS integration extension is enabled in Neutron, most # ports will get their dns_name attribute set in the port create or # update requests in allocate_for_instance. So we just add the # dns_name attribute to the payload of those requests. The # exception is when the port binding extension is enabled in # Neutron and the port is on a network that has a non-blank # dns_domain attribute. This case requires to be processed by # method _update_port_dns_name if (not network.get('dns_domain')): port_req_body['port']['dns_name'] = instance.hostname def _update_port_dns_name(self, context, instance, network, port_id, neutron): """Update an instance port dns_name attribute with instance.hostname. The dns_name attribute of a port on a network with a non-blank dns_domain attribute will be sent to the external DNS service (Designate) if DNS integration is enabled in Neutron. This requires the assignment of the dns_name to the port to be done with a Neutron client using the user's context. allocate_for_instance uses a port with admin context if the port binding extensions is enabled in Neutron. In this case, we assign in this method the dns_name attribute to the port with an additional update request. Only a very small fraction of ports will require this additional update request. """ if self._has_dns_extension() and network.get('dns_domain'): try: port_req_body = {'port': {'dns_name': instance.hostname}} neutron.update_port(port_id, port_req_body) except neutron_client_exc.BadRequest: LOG.warning('Neutron error: Instance hostname ' '%(hostname)s is not a valid DNS name', {'hostname': instance.hostname}, instance=instance) msg = (_('Instance hostname %(hostname)s is not a valid DNS ' 'name') % {'hostname': instance.hostname}) raise exception.InvalidInput(reason=msg) def _reset_port_dns_name(self, network, port_id, neutron_client): """Reset an instance port dns_name attribute to empty when using external DNS service. _unbind_ports uses a client with admin context to reset the dns_name if the DNS extension is enabled and network does not have dns_domain set. When external DNS service is enabled, we use this method to make the request with a Neutron client using user's context, so that the DNS record can be found under user's zone and domain. 
""" if self._has_dns_extension() and network.get('dns_domain'): try: port_req_body = {'port': {'dns_name': ''}} neutron_client.update_port(port_id, port_req_body) except neutron_client_exc.NeutronClientException: LOG.exception("Failed to reset dns_name for port %s", port_id) def _delete_ports(self, neutron, instance, ports, raise_if_fail=False): exceptions = [] for port in ports: try: neutron.delete_port(port) except neutron_client_exc.NeutronClientException as e: if e.status_code == 404: LOG.warning("Port %s does not exist", port, instance=instance) else: exceptions.append(e) LOG.warning("Failed to delete port %s for instance.", port, instance=instance, exc_info=True) if len(exceptions) > 0 and raise_if_fail: raise exceptions[0] def deallocate_for_instance(self, context, instance, **kwargs): """Deallocate all network resources related to the instance.""" LOG.debug('deallocate_for_instance()', instance=instance) search_opts = {'device_id': instance.uuid} neutron = get_client(context) data = neutron.list_ports(**search_opts) ports = [port['id'] for port in data.get('ports', [])] requested_networks = kwargs.get('requested_networks') or [] # NOTE(danms): Temporary and transitional if isinstance(requested_networks, objects.NetworkRequestList): requested_networks = requested_networks.as_tuples() ports_to_skip = set([port_id for nets, fips, port_id, pci_request_id in requested_networks]) # NOTE(boden): requested_networks only passed in when deallocating # from a failed build / spawn call. Therefore we need to include # preexisting ports when deallocating from a standard delete op # in which case requested_networks is not provided. ports_to_skip |= set(self._get_preexisting_port_ids(instance)) ports = set(ports) - ports_to_skip # Reset device_id and device_owner for the ports that are skipped self._unbind_ports(context, ports_to_skip, neutron) # Delete the rest of the ports self._delete_ports(neutron, instance, ports, raise_if_fail=True) # deallocate vifs (mac addresses) objects.VirtualInterface.delete_by_instance_uuid( context, instance.uuid) # NOTE(arosen): This clears out the network_cache only if the instance # hasn't already been deleted. This is needed when an instance fails to # launch and is rescheduled onto another compute node. If the instance # has already been deleted this call does nothing. update_instance_cache_with_nw_info(self, context, instance, network_model.NetworkInfo([])) def allocate_port_for_instance(self, context, instance, port_id, network_id=None, requested_ip=None, bind_host_id=None, tag=None): """Allocate a port for the instance.""" requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id=network_id, address=requested_ip, port_id=port_id, pci_request_id=None, tag=tag)]) return self.allocate_for_instance(context, instance, vpn=False, requested_networks=requested_networks, bind_host_id=bind_host_id, attach=True) def deallocate_port_for_instance(self, context, instance, port_id): """Remove a specified port from the instance. :param context: the request context :param instance: the instance object the port is detached from :param port_id: the UUID of the port being detached :return: A NetworkInfo, port_allocation tuple where the port_allocation is a dict which contains the resource allocation of the port per resource provider uuid. 
E.g.: { rp_uuid: { NET_BW_EGR_KILOBIT_PER_SEC: 10000, NET_BW_IGR_KILOBIT_PER_SEC: 20000, } } Note that right now this dict only contains a single key as a neutron port only allocates from a single resource provider. """ neutron = get_client(context) port_allocation = {} try: # NOTE(gibi): we need to read the port resource information from # neutron here as we might delete the port below port = neutron.show_port(port_id)['port'] except exception.PortNotFound: LOG.debug('Unable to determine port %s resource allocation ' 'information as the port no longer exists.', port_id) port = None preexisting_ports = self._get_preexisting_port_ids(instance) if port_id in preexisting_ports: self._unbind_ports(context, [port_id], neutron) else: self._delete_ports(neutron, instance, [port_id], raise_if_fail=True) # Delete the VirtualInterface for the given port_id. vif = objects.VirtualInterface.get_by_uuid(context, port_id) if vif: if 'tag' in vif and vif.tag: self._delete_nic_metadata(instance, vif) vif.destroy() else: LOG.debug('VirtualInterface not found for port: %s', port_id, instance=instance) if port: # if there is resource associated to this port then that needs to # be deallocated so lets return info about such allocation resource_request = port.get(constants.RESOURCE_REQUEST) profile = get_binding_profile(port) allocated_rp = profile.get(constants.ALLOCATION) if resource_request and allocated_rp: port_allocation = { allocated_rp: resource_request.get('resources', {})} else: # Check the info_cache. If the port is still in the info_cache and # in that cache there is allocation in the profile then we suspect # that the port is disappeared without deallocating the resources. for vif in instance.get_network_info(): if vif['id'] == port_id: profile = vif.get('profile') or {} rp_uuid = profile.get(constants.ALLOCATION) if rp_uuid: LOG.warning( 'Port %s disappeared during deallocate but it had ' 'resource allocation on resource provider %s. ' 'Resource allocation for this port may be ' 'leaked.', port_id, rp_uuid, instance=instance) break return self.get_instance_nw_info(context, instance), port_allocation def _delete_nic_metadata(self, instance, vif): for device in instance.device_metadata.devices: if (isinstance(device, objects.NetworkInterfaceMetadata) and device.mac == vif.address): instance.device_metadata.devices.remove(device) instance.save() break def list_ports(self, context, **search_opts): """List ports for the client based on search options.""" return get_client(context).list_ports(**search_opts) def show_port(self, context, port_id): """Return the port for the client given the port id. :param context: Request context. :param port_id: The id of port to be queried. :returns: A dict containing port data keyed by 'port', e.g. :: {'port': {'port_id': 'abcd', 'fixed_ip_address': '1.2.3.4'}} """ return dict(port=self._show_port(context, port_id)) def _show_port(self, context, port_id, neutron_client=None, fields=None): """Return the port for the client given the port id. :param context: Request context. :param port_id: The id of port to be queried. :param neutron_client: A neutron client. :param fields: The condition fields to query port data. :returns: A dict of port data. e.g. 
{'port_id': 'abcd', 'fixed_ip_address': '1.2.3.4'} """ if not neutron_client: neutron_client = get_client(context) try: if fields: result = neutron_client.show_port(port_id, fields=fields) else: result = neutron_client.show_port(port_id) return result.get('port') except neutron_client_exc.PortNotFoundClient: raise exception.PortNotFound(port_id=port_id) except neutron_client_exc.Unauthorized: raise exception.Forbidden() except neutron_client_exc.NeutronClientException as exc: msg = (_("Failed to access port %(port_id)s: %(reason)s") % {'port_id': port_id, 'reason': exc}) raise exception.NovaException(message=msg) def get_instance_nw_info(self, context, instance, **kwargs): """Returns all network info related to an instance.""" with lockutils.lock('refresh_cache-%s' % instance.uuid): result = self._get_instance_nw_info(context, instance, **kwargs) update_instance_cache_with_nw_info(self, context, instance, nw_info=result) return result def _get_instance_nw_info(self, context, instance, networks=None, port_ids=None, admin_client=None, preexisting_port_ids=None, refresh_vif_id=None, force_refresh=False, **kwargs): # NOTE(danms): This is an inner method intended to be called # by other code that updates instance nwinfo. It *must* be # called with the refresh_cache-%(instance_uuid) lock held! if force_refresh: LOG.debug('Forcefully refreshing network info cache for instance', instance=instance) elif refresh_vif_id: LOG.debug('Refreshing network info cache for port %s', refresh_vif_id, instance=instance) else: LOG.debug('Building network info cache for instance', instance=instance) # Ensure that we have an up to date copy of the instance info cache. # Otherwise multiple requests could collide and cause cache # corruption. compute_utils.refresh_info_cache_for_instance(context, instance) nw_info = self._build_network_info_model(context, instance, networks, port_ids, admin_client, preexisting_port_ids, refresh_vif_id, force_refresh=force_refresh) return network_model.NetworkInfo.hydrate(nw_info) def _gather_port_ids_and_networks(self, context, instance, networks=None, port_ids=None, neutron=None): """Return an instance's complete list of port_ids and networks. The results are based on the instance info_cache in the nova db, not the instance's current list of ports in neutron. """ if ((networks is None and port_ids is not None) or (port_ids is None and networks is not None)): message = _("This method needs to be called with either " "networks=None and port_ids=None or port_ids and " "networks as not none.") raise exception.NovaException(message=message) ifaces = instance.get_network_info() # This code path is only done when refreshing the network_cache if port_ids is None: port_ids = [iface['id'] for iface in ifaces] net_ids = [iface['network']['id'] for iface in ifaces] if networks is None: networks = self._get_available_networks(context, instance.project_id, net_ids, neutron) # an interface was added/removed from instance. else: # Prepare the network ids list for validation purposes networks_ids = [network['id'] for network in networks] # Validate that interface networks doesn't exist in networks. # Though this issue can and should be solved in methods # that prepare the networks list, this method should have this # ignore-duplicate-networks/port-ids mechanism to reduce the # probability of failing to boot the VM. 
networks = networks + [ {'id': iface['network']['id'], 'name': iface['network']['label'], 'tenant_id': iface['network']['meta']['tenant_id']} for iface in ifaces if _is_not_duplicate(iface['network']['id'], networks_ids, "networks", instance)] # Include existing interfaces so they are not removed from the db. # Validate that the interface id is not in the port_ids port_ids = [iface['id'] for iface in ifaces if _is_not_duplicate(iface['id'], port_ids, "port_ids", instance)] + port_ids return networks, port_ids @refresh_cache def add_fixed_ip_to_instance(self, context, instance, network_id): """Add a fixed IP to the instance from specified network.""" neutron = get_client(context) search_opts = {'network_id': network_id} data = neutron.list_subnets(**search_opts) ipam_subnets = data.get('subnets', []) if not ipam_subnets: raise exception.NetworkNotFoundForInstance( instance_id=instance.uuid) zone = 'compute:%s' % instance.availability_zone search_opts = {'device_id': instance.uuid, 'device_owner': zone, 'network_id': network_id} data = neutron.list_ports(**search_opts) ports = data['ports'] for p in ports: for subnet in ipam_subnets: fixed_ips = p['fixed_ips'] fixed_ips.append({'subnet_id': subnet['id']}) port_req_body = {'port': {'fixed_ips': fixed_ips}} try: neutron.update_port(p['id'], port_req_body) return self._get_instance_nw_info(context, instance) except Exception as ex: msg = ("Unable to update port %(portid)s on subnet " "%(subnet_id)s with failure: %(exception)s") LOG.debug(msg, {'portid': p['id'], 'subnet_id': subnet['id'], 'exception': ex}, instance=instance) raise exception.NetworkNotFoundForInstance( instance_id=instance.uuid) @refresh_cache def remove_fixed_ip_from_instance(self, context, instance, address): """Remove a fixed IP from the instance.""" neutron = get_client(context) zone = 'compute:%s' % instance.availability_zone search_opts = {'device_id': instance.uuid, 'device_owner': zone, 'fixed_ips': 'ip_address=%s' % address} data = neutron.list_ports(**search_opts) ports = data['ports'] for p in ports: fixed_ips = p['fixed_ips'] new_fixed_ips = [] for fixed_ip in fixed_ips: if fixed_ip['ip_address'] != address: new_fixed_ips.append(fixed_ip) port_req_body = {'port': {'fixed_ips': new_fixed_ips}} try: neutron.update_port(p['id'], port_req_body) except Exception as ex: msg = ("Unable to update port %(portid)s with" " failure: %(exception)s") LOG.debug(msg, {'portid': p['id'], 'exception': ex}, instance=instance) return self._get_instance_nw_info(context, instance) raise exception.FixedIpNotFoundForInstance( instance_uuid=instance.uuid, ip=address) def _get_physnet_tunneled_info(self, context, neutron, net_id): """Retrieve detailed network info. :param context: The request context. :param neutron: The neutron client object. :param net_id: The ID of the network to retrieve information for. :return: A tuple containing the physnet name, if defined, and the tunneled status of the network. If the network uses multiple segments, the first segment that defines a physnet value will be used for the physnet name. """ if self._has_multi_provider_extension(context, neutron=neutron): network = neutron.show_network(net_id, fields='segments').get('network') segments = network.get('segments', {}) for net in segments: # NOTE(vladikr): In general, "multi-segments" network is a # combination of L2 segments. The current implementation # contains a vxlan and vlan(s) segments, where only a vlan # network will have a physical_network specified, but may # change in the future. 
The purpose of this method # is to find a first segment that provides a physical network. # TODO(vladikr): Additional work will be required to handle the # case of multiple vlan segments associated with different # physical networks. physnet_name = net.get('provider:physical_network') if physnet_name: return physnet_name, False # Raising here as at least one segment should # have a physical network provided. if segments: msg = (_("None of the segments of network %s provides a " "physical_network") % net_id) raise exception.NovaException(message=msg) net = neutron.show_network( net_id, fields=['provider:physical_network', 'provider:network_type']).get('network') return (net.get('provider:physical_network'), net.get('provider:network_type') in constants.L3_NETWORK_TYPES) @staticmethod def _get_trusted_mode_from_port(port): """Returns whether trusted mode is requested If port binding does not provide any information about trusted status this function is returning None """ value = get_binding_profile(port).get('trusted') if value is not None: # This allows the user to specify things like '1' and 'yes' in # the port binding profile and we can handle it as a boolean. return strutils.bool_from_string(value) def _get_port_vnic_info(self, context, neutron, port_id): """Retrieve port vNIC info :param context: The request context :param neutron: The Neutron client :param port_id: The id of port to be queried :return: A tuple of vNIC type, trusted status, network ID and resource request of the port if any. Trusted status only affects SR-IOV ports and will always be None for other port types. """ port = self._show_port( context, port_id, neutron_client=neutron, fields=['binding:vnic_type', constants.BINDING_PROFILE, 'network_id', constants.RESOURCE_REQUEST]) network_id = port.get('network_id') trusted = None vnic_type = port.get('binding:vnic_type', network_model.VNIC_TYPE_NORMAL) if vnic_type in network_model.VNIC_TYPES_SRIOV: trusted = self._get_trusted_mode_from_port(port) # NOTE(gibi): Get the port resource_request which may or may not be # set depending on neutron configuration, e.g. if QoS rules are # applied to the port/network and the port-resource-request API # extension is enabled. resource_request = port.get(constants.RESOURCE_REQUEST, None) return vnic_type, trusted, network_id, resource_request def create_resource_requests( self, context, requested_networks, pci_requests=None, affinity_policy=None): """Retrieve all information for the networks passed at the time of creating the server. :param context: The request context. :param requested_networks: The networks requested for the server. :type requested_networks: nova.objects.NetworkRequestList :param pci_requests: The list of PCI requests to which additional PCI requests created here will be added. 
:type pci_requests: nova.objects.InstancePCIRequests :param affinity_policy: requested pci numa affinity policy :type affinity_policy: nova.objects.fields.PCINUMAAffinityPolicy :returns: A tuple with an instance of ``objects.NetworkMetadata`` for use by the scheduler or None and a list of RequestGroup objects representing the resource needs of each requested port """ if not requested_networks or requested_networks.no_allocate: return None, [] physnets = set() tunneled = False neutron = get_client(context, admin=True) resource_requests = [] for request_net in requested_networks: physnet = None trusted = None tunneled_ = False vnic_type = network_model.VNIC_TYPE_NORMAL pci_request_id = None requester_id = None if request_net.port_id: result = self._get_port_vnic_info( context, neutron, request_net.port_id) vnic_type, trusted, network_id, resource_request = result physnet, tunneled_ = self._get_physnet_tunneled_info( context, neutron, network_id) if resource_request: # InstancePCIRequest.requester_id is semantically linked # to a port with a resource_request. requester_id = request_net.port_id # NOTE(gibi): explicitly orphan the RequestGroup by setting # context=None as we never intended to save it to the DB. resource_requests.append( objects.RequestGroup.from_port_request( context=None, port_uuid=request_net.port_id, port_resource_request=resource_request)) elif request_net.network_id and not request_net.auto_allocate: network_id = request_net.network_id physnet, tunneled_ = self._get_physnet_tunneled_info( context, neutron, network_id) # All tunneled traffic must use the same logical NIC so we just # need to know if there is one or more tunneled networks present. tunneled = tunneled or tunneled_ # ...conversely, there can be multiple physnets, which will # generally be mapped to different NICs, and some requested # networks may use the same physnet. As a result, we need to know # the *set* of physnets from every network requested if physnet: physnets.add(physnet) if vnic_type in network_model.VNIC_TYPES_SRIOV: # TODO(moshele): To differentiate between the SR-IOV legacy # and SR-IOV ovs hardware offload we will leverage the nic # feature based scheduling in nova. This mean we will need # libvirt to expose the nic feature. At the moment # there is a limitation that deployers cannot use both # SR-IOV modes (legacy and ovs) in the same deployment. spec = {pci_request.PCI_NET_TAG: physnet} dev_type = pci_request.DEVICE_TYPE_FOR_VNIC_TYPE.get(vnic_type) if dev_type: spec[pci_request.PCI_DEVICE_TYPE_TAG] = dev_type if trusted is not None: # We specifically have requested device on a pool # with a tag trusted set to true or false. We # convert the value to string since tags are # compared in that way. spec[pci_request.PCI_TRUSTED_TAG] = str(trusted) request = objects.InstancePCIRequest( count=1, spec=[spec], request_id=uuidutils.generate_uuid(), requester_id=requester_id) if affinity_policy: request.numa_policy = affinity_policy pci_requests.requests.append(request) pci_request_id = request.request_id # Add pci_request_id into the requested network request_net.pci_request_id = pci_request_id return (objects.NetworkMetadata(physnets=physnets, tunneled=tunneled), resource_requests) def _can_auto_allocate_network(self, context, neutron): """Helper method to determine if we can auto-allocate networks :param context: nova request context :param neutron: neutron client :returns: True if it's possible to auto-allocate networks, False otherwise. 
""" # run the dry-run validation, which will raise a 409 if not ready try: neutron.validate_auto_allocated_topology_requirements( context.project_id) LOG.debug('Network auto-allocation is available for project ' '%s', context.project_id) return True except neutron_client_exc.Conflict as ex: LOG.debug('Unable to auto-allocate networks. %s', six.text_type(ex)) return False def _auto_allocate_network(self, instance, neutron): """Automatically allocates a network for the given project. :param instance: create the network for the project that owns this instance :param neutron: neutron client :returns: Details of the network that was created. :raises: nova.exception.UnableToAutoAllocateNetwork :raises: nova.exception.NetworkNotFound """ project_id = instance.project_id LOG.debug('Automatically allocating a network for project %s.', project_id, instance=instance) try: topology = neutron.get_auto_allocated_topology( project_id)['auto_allocated_topology'] except neutron_client_exc.Conflict: raise exception.UnableToAutoAllocateNetwork(project_id=project_id) try: network = neutron.show_network(topology['id'])['network'] except neutron_client_exc.NetworkNotFoundClient: # This shouldn't happen since we just created the network, but # handle it anyway. LOG.error('Automatically allocated network %(network_id)s ' 'was not found.', {'network_id': topology['id']}, instance=instance) raise exception.UnableToAutoAllocateNetwork(project_id=project_id) LOG.debug('Automatically allocated network: %s', network, instance=instance) return network def _ports_needed_per_instance(self, context, neutron, requested_networks): # TODO(danms): Remove me when all callers pass an object if requested_networks and isinstance(requested_networks[0], tuple): requested_networks = objects.NetworkRequestList.from_tuples( requested_networks) ports_needed_per_instance = 0 if (requested_networks is None or len(requested_networks) == 0 or requested_networks.auto_allocate): nets = self._get_available_networks(context, context.project_id, neutron=neutron) if len(nets) > 1: # Attaching to more than one network by default doesn't # make sense, as the order will be arbitrary and the guest OS # won't know which to configure msg = _("Multiple possible networks found, use a Network " "ID to be more specific.") raise exception.NetworkAmbiguous(msg) if not nets and ( requested_networks and requested_networks.auto_allocate): # If there are no networks available to this project and we # were asked to auto-allocate a network, check to see that we # can do that first. LOG.debug('No networks are available for project %s; checking ' 'to see if we can automatically allocate a network.', context.project_id) if not self._can_auto_allocate_network(context, neutron): raise exception.UnableToAutoAllocateNetwork( project_id=context.project_id) ports_needed_per_instance = 1 else: net_ids_requested = [] for request in requested_networks: if request.port_id: port = self._show_port(context, request.port_id, neutron_client=neutron) if port.get('device_id', None): raise exception.PortInUse(port_id=request.port_id) deferred_ip = port.get('ip_allocation') == 'deferred' # NOTE(carl_baldwin) A deferred IP port doesn't have an # address here. If it fails to get one later when nova # updates it with host info, Neutron will error which # raises an exception. 
if not deferred_ip and not port.get('fixed_ips'): raise exception.PortRequiresFixedIP( port_id=request.port_id) request.network_id = port['network_id'] else: ports_needed_per_instance += 1 net_ids_requested.append(request.network_id) # NOTE(jecarey) There is currently a race condition. # That is, if you have more than one request for a specific # fixed IP at the same time then only one will be allocated # the ip. The fixed IP will be allocated to only one of the # instances that will run. The second instance will fail on # spawn. That instance will go into error state. # TODO(jecarey) Need to address this race condition once we # have the ability to update mac addresses in Neutron. if request.address: # TODO(jecarey) Need to look at consolidating list_port # calls once able to OR filters. search_opts = {'network_id': request.network_id, 'fixed_ips': 'ip_address=%s' % ( request.address), 'fields': 'device_id'} existing_ports = neutron.list_ports( **search_opts)['ports'] if existing_ports: i_uuid = existing_ports[0]['device_id'] raise exception.FixedIpAlreadyInUse( address=request.address, instance_uuid=i_uuid) # Now check to see if all requested networks exist if net_ids_requested: nets = self._get_available_networks( context, context.project_id, net_ids_requested, neutron=neutron) for net in nets: if not net.get('subnets'): raise exception.NetworkRequiresSubnet( network_uuid=net['id']) if len(nets) != len(net_ids_requested): requested_netid_set = set(net_ids_requested) returned_netid_set = set([net['id'] for net in nets]) lostid_set = requested_netid_set - returned_netid_set if lostid_set: id_str = '' for _id in lostid_set: id_str = id_str and id_str + ', ' + _id or _id raise exception.NetworkNotFound(network_id=id_str) return ports_needed_per_instance def get_requested_resource_for_instance(self, context, instance_uuid): """Collect resource requests from the ports associated to the instance :param context: nova request context :param instance_uuid: The UUID of the instance :return: A list of RequestGroup objects """ # NOTE(gibi): We need to use an admin client as otherwise a non admin # initiated resize causes that neutron does not fill the # resource_request field of the port and this will lead to resource # allocation issues. See bug 1849695 neutron = get_client(context, admin=True) # get the ports associated to this instance data = neutron.list_ports( device_id=instance_uuid, fields=['id', 'resource_request']) resource_requests = [] for port in data.get('ports', []): if port.get('resource_request'): # NOTE(gibi): explicitly orphan the RequestGroup by setting # context=None as we never intended to save it to the DB. resource_requests.append( objects.RequestGroup.from_port_request( context=None, port_uuid=port['id'], port_resource_request=port['resource_request'])) return resource_requests def validate_networks(self, context, requested_networks, num_instances): """Validate that the tenant can use the requested networks. Return the number of instances than can be successfully allocated with the requested network configuration. """ LOG.debug('validate_networks() for %s', requested_networks) neutron = get_client(context) ports_needed_per_instance = self._ports_needed_per_instance( context, neutron, requested_networks) # Note(PhilD): Ideally Nova would create all required ports as part of # network validation, but port creation requires some details # from the hypervisor. 
So we just check the quota and return # how many of the requested number of instances can be created if ports_needed_per_instance: quotas = neutron.show_quota(context.project_id)['quota'] if quotas.get('port', -1) == -1: # Unlimited Port Quota return num_instances # We only need the port count so only ask for ids back. params = dict(tenant_id=context.project_id, fields=['id']) ports = neutron.list_ports(**params)['ports'] free_ports = quotas.get('port') - len(ports) if free_ports < 0: msg = (_("The number of defined ports: %(ports)d " "is over the limit: %(quota)d") % {'ports': len(ports), 'quota': quotas.get('port')}) raise exception.PortLimitExceeded(msg) ports_needed = ports_needed_per_instance * num_instances if free_ports >= ports_needed: return num_instances else: return free_ports // ports_needed_per_instance return num_instances def _get_instance_uuids_by_ip(self, context, address): """Retrieve instance uuids associated with the given IP address. :returns: A list of dicts containing the uuids keyed by 'instance_uuid' e.g. [{'instance_uuid': uuid}, ...] """ search_opts = {"fixed_ips": 'ip_address=%s' % address} data = get_client(context).list_ports(**search_opts) ports = data.get('ports', []) return [{'instance_uuid': port['device_id']} for port in ports if port['device_id']] def _get_port_id_by_fixed_address(self, client, instance, address): """Return port_id from a fixed address.""" zone = 'compute:%s' % instance.availability_zone search_opts = {'device_id': instance.uuid, 'device_owner': zone} data = client.list_ports(**search_opts) ports = data['ports'] port_id = None for p in ports: for ip in p['fixed_ips']: if ip['ip_address'] == address: port_id = p['id'] break if not port_id: raise exception.FixedIpNotFoundForAddress(address=address) return port_id @refresh_cache def associate_floating_ip(self, context, instance, floating_address, fixed_address, affect_auto_assigned=False): """Associate a floating IP with a fixed IP.""" # Note(amotoki): 'affect_auto_assigned' is not respected # since it is not used anywhere in nova code and I could # find why this parameter exists. client = get_client(context) port_id = self._get_port_id_by_fixed_address(client, instance, fixed_address) fip = self._get_floating_ip_by_address(client, floating_address) param = {'port_id': port_id, 'fixed_ip_address': fixed_address} try: client.update_floatingip(fip['id'], {'floatingip': param}) except neutron_client_exc.Conflict as e: raise exception.FloatingIpAssociateFailed(six.text_type(e)) # If the floating IP was associated with another server, try to refresh # the cache for that instance to avoid a window of time where multiple # servers in the API say they are using the same floating IP. if fip['port_id']: # Trap and log any errors from # _update_inst_info_cache_for_disassociated_fip but not let them # raise back up to the caller since this refresh is best effort. try: self._update_inst_info_cache_for_disassociated_fip( context, instance, client, fip) except Exception as e: LOG.warning('An error occurred while trying to refresh the ' 'network info cache for an instance associated ' 'with port %s. Error: %s', fip['port_id'], e) def _update_inst_info_cache_for_disassociated_fip(self, context, instance, client, fip): """Update the network info cache when a floating IP is re-assigned. 
:param context: nova auth RequestContext :param instance: The instance to which the floating IP is now assigned :param client: ClientWrapper instance for using the Neutron API :param fip: dict for the floating IP that was re-assigned where the the ``port_id`` value represents the port that was associated with another server. """ port = self._show_port(context, fip['port_id'], neutron_client=client) orig_instance_uuid = port['device_id'] msg_dict = dict(address=fip['floating_ip_address'], instance_id=orig_instance_uuid) LOG.info('re-assign floating IP %(address)s from ' 'instance %(instance_id)s', msg_dict, instance=instance) orig_instance = self._get_instance_by_uuid_using_api_db( context, orig_instance_uuid) if orig_instance: # purge cached nw info for the original instance; pass the # context from the instance in case we found it in another cell update_instance_cache_with_nw_info( self, orig_instance._context, orig_instance) else: # Leave a breadcrumb about not being able to refresh the # the cache for the original instance. LOG.info('Unable to refresh the network info cache for ' 'instance %s after disassociating floating IP %s. ' 'If the instance still exists, its info cache may ' 'be healed automatically.', orig_instance_uuid, fip['id']) @staticmethod def _get_instance_by_uuid_using_api_db(context, instance_uuid): """Look up the instance by UUID This method is meant to be used sparingly since it tries to find the instance by UUID in the cell-targeted context. If the instance is not found, this method will try to determine if it's not found because it is deleted or if it is just in another cell. Therefore it assumes to have access to the API database and should only be called from methods that are used in the control plane services. :param context: cell-targeted nova auth RequestContext :param instance_uuid: UUID of the instance to find :returns: Instance object if the instance was found, else None. """ try: return objects.Instance.get_by_uuid(context, instance_uuid) except exception.InstanceNotFound: # The instance could be deleted or it could be in another cell. # To determine if its in another cell, check the instance # mapping in the API DB. try: inst_map = objects.InstanceMapping.get_by_instance_uuid( context, instance_uuid) except exception.InstanceMappingNotFound: # The instance is gone so just return. return # We have the instance mapping, look up the instance in the # cell the instance is in. with nova_context.target_cell( context, inst_map.cell_mapping) as cctxt: try: return objects.Instance.get_by_uuid(cctxt, instance_uuid) except exception.InstanceNotFound: # Alright it's really gone. 
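                    # Returning None lets the caller log a breadcrumb instead
                    # of attempting to refresh the cache of an instance that
                    # no longer exists anywhere.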
return def get_all(self, context): """Get all networks for client.""" client = get_client(context) return client.list_networks().get('networks') def get(self, context, network_uuid): """Get specific network for client.""" client = get_client(context) try: return client.show_network(network_uuid).get('network') or {} except neutron_client_exc.NetworkNotFoundClient: raise exception.NetworkNotFound(network_id=network_uuid) def get_fixed_ip_by_address(self, context, address): """Return instance uuids given an address.""" uuid_maps = self._get_instance_uuids_by_ip(context, address) if len(uuid_maps) == 1: return uuid_maps[0] elif not uuid_maps: raise exception.FixedIpNotFoundForAddress(address=address) else: raise exception.FixedIpAssociatedWithMultipleInstances( address=address) def get_floating_ip(self, context, id): """Return floating IP object given the floating IP id.""" client = get_client(context) try: fip = client.show_floatingip(id)['floatingip'] except neutron_client_exc.NeutronClientException as e: if e.status_code == 404: raise exception.FloatingIpNotFound(id=id) with excutils.save_and_reraise_exception(): LOG.exception('Unable to access floating IP %s', id) # retrieve and cache the network details now since many callers need # the network name which isn't present in the response from neutron network_uuid = fip['floating_network_id'] try: fip['network_details'] = client.show_network( network_uuid)['network'] except neutron_client_exc.NetworkNotFoundClient: raise exception.NetworkNotFound(network_id=network_uuid) # ...and retrieve the port details for the same reason, but only if # they're not already there because the fip-port-details extension is # present if not self._has_fip_port_details_extension(context, client): port_id = fip['port_id'] try: fip['port_details'] = client.show_port( port_id)['port'] except neutron_client_exc.PortNotFoundClient: # it's possible to create floating IPs without a port fip['port_details'] = None return fip def get_floating_ip_by_address(self, context, address): """Return a floating IP given an address.""" client = get_client(context) fip = self._get_floating_ip_by_address(client, address) # retrieve and cache the network details now since many callers need # the network name which isn't present in the response from neutron network_uuid = fip['floating_network_id'] try: fip['network_details'] = client.show_network( network_uuid)['network'] except neutron_client_exc.NetworkNotFoundClient: raise exception.NetworkNotFound(network_id=network_uuid) # ...and retrieve the port details for the same reason, but only if # they're not already there because the fip-port-details extension is # present if not self._has_fip_port_details_extension(context, client): port_id = fip['port_id'] try: fip['port_details'] = client.show_port( port_id)['port'] except neutron_client_exc.PortNotFoundClient: # it's possible to create floating IPs without a port fip['port_details'] = None return fip def get_floating_ip_pools(self, context): """Return floating IP pools a.k.a. 
external networks.""" client = get_client(context) data = client.list_networks(**{constants.NET_EXTERNAL: True}) return data['networks'] def get_floating_ips_by_project(self, context): client = get_client(context) project_id = context.project_id fips = self._safe_get_floating_ips(client, tenant_id=project_id) if not fips: return fips # retrieve and cache the network details now since many callers need # the network name which isn't present in the response from neutron networks = {net['id']: net for net in self._get_available_networks( context, project_id, [fip['floating_network_id'] for fip in fips], client)} for fip in fips: network_uuid = fip['floating_network_id'] if network_uuid not in networks: raise exception.NetworkNotFound(network_id=network_uuid) fip['network_details'] = networks[network_uuid] # ...and retrieve the port details for the same reason, but only if # they're not already there because the fip-port-details extension is # present if not self._has_fip_port_details_extension(context, client): ports = {port['id']: port for port in client.list_ports( **{'tenant_id': project_id})['ports']} for fip in fips: port_id = fip['port_id'] if port_id in ports: fip['port_details'] = ports[port_id] else: # it's possible to create floating IPs without a port fip['port_details'] = None return fips def get_instance_id_by_floating_address(self, context, address): """Return the instance id a floating IP's fixed IP is allocated to.""" client = get_client(context) fip = self._get_floating_ip_by_address(client, address) if not fip['port_id']: return None try: port = self._show_port(context, fip['port_id'], neutron_client=client) except exception.PortNotFound: # NOTE: Here is a potential race condition between _show_port() and # _get_floating_ip_by_address(). fip['port_id'] shows a port which # is the server instance's. At _get_floating_ip_by_address(), # Neutron returns the list which includes the instance. Just after # that, the deletion of the instance happens and Neutron returns # 404 on _show_port(). 
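            # Treat the vanished port as "no instance is currently
            # associated with this floating IP" and fall through to
            # return None.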
LOG.debug('The port(%s) is not found', fip['port_id']) return None return port['device_id'] def get_vifs_by_instance(self, context, instance): return objects.VirtualInterfaceList.get_by_instance_uuid(context, instance.uuid) def _get_floating_ip_pool_id_by_name_or_id(self, client, name_or_id): search_opts = {constants.NET_EXTERNAL: True, 'fields': 'id'} if uuidutils.is_uuid_like(name_or_id): search_opts.update({'id': name_or_id}) else: search_opts.update({'name': name_or_id}) data = client.list_networks(**search_opts) nets = data['networks'] if len(nets) == 1: return nets[0]['id'] elif len(nets) == 0: raise exception.FloatingIpPoolNotFound() else: msg = (_("Multiple floating IP pools matches found for name '%s'") % name_or_id) raise exception.NovaException(message=msg) def allocate_floating_ip(self, context, pool=None): """Add a floating IP to a project from a pool.""" client = get_client(context) pool = pool or CONF.neutron.default_floating_pool pool_id = self._get_floating_ip_pool_id_by_name_or_id(client, pool) param = {'floatingip': {'floating_network_id': pool_id}} try: fip = client.create_floatingip(param) except (neutron_client_exc.IpAddressGenerationFailureClient, neutron_client_exc.ExternalIpAddressExhaustedClient) as e: raise exception.NoMoreFloatingIps(six.text_type(e)) except neutron_client_exc.OverQuotaClient as e: raise exception.FloatingIpLimitExceeded(six.text_type(e)) except neutron_client_exc.BadRequest as e: raise exception.FloatingIpBadRequest(six.text_type(e)) return fip['floatingip']['floating_ip_address'] def _safe_get_floating_ips(self, client, **kwargs): """Get floating IP gracefully handling 404 from Neutron.""" try: return client.list_floatingips(**kwargs)['floatingips'] # If a neutron plugin does not implement the L3 API a 404 from # list_floatingips will be raised. except neutron_client_exc.NotFound: return [] except neutron_client_exc.NeutronClientException as e: # bug/1513879 neutron client is currently using # NeutronClientException when there is no L3 API if e.status_code == 404: return [] with excutils.save_and_reraise_exception(): LOG.exception('Unable to access floating IP for %s', ', '.join(['%s %s' % (k, v) for k, v in kwargs.items()])) def _get_floating_ip_by_address(self, client, address): """Get floating IP from floating IP address.""" if not address: raise exception.FloatingIpNotFoundForAddress(address=address) fips = self._safe_get_floating_ips(client, floating_ip_address=address) if len(fips) == 0: raise exception.FloatingIpNotFoundForAddress(address=address) elif len(fips) > 1: raise exception.FloatingIpMultipleFoundForAddress(address=address) return fips[0] def _get_floating_ips_by_fixed_and_port(self, client, fixed_ip, port): """Get floating IPs from fixed IP and port.""" return self._safe_get_floating_ips(client, fixed_ip_address=fixed_ip, port_id=port) def release_floating_ip(self, context, address, affect_auto_assigned=False): """Remove a floating IP with the given address from a project.""" # Note(amotoki): We cannot handle a case where multiple pools # have overlapping IP address range. In this case we cannot use # 'address' as a unique key. # This is a limitation of the current nova. # Note(amotoki): 'affect_auto_assigned' is not respected # since it is not used anywhere in nova code and I could # find why this parameter exists. self._release_floating_ip(context, address) def disassociate_and_release_floating_ip(self, context, instance, floating_ip): """Removes (deallocates) and deletes the floating IP. 
This api call was added to allow this to be done in one operation if using neutron. """ @refresh_cache def _release_floating_ip_and_refresh_cache(self, context, instance, floating_ip): self._release_floating_ip( context, floating_ip['floating_ip_address'], raise_if_associated=False) if instance: _release_floating_ip_and_refresh_cache(self, context, instance, floating_ip) else: self._release_floating_ip( context, floating_ip['floating_ip_address'], raise_if_associated=False) def _release_floating_ip(self, context, address, raise_if_associated=True): client = get_client(context) fip = self._get_floating_ip_by_address(client, address) if raise_if_associated and fip['port_id']: raise exception.FloatingIpAssociated(address=address) try: client.delete_floatingip(fip['id']) except neutron_client_exc.NotFound: raise exception.FloatingIpNotFoundForAddress( address=address ) @refresh_cache def disassociate_floating_ip(self, context, instance, address, affect_auto_assigned=False): """Disassociate a floating IP from the instance.""" # Note(amotoki): 'affect_auto_assigned' is not respected # since it is not used anywhere in nova code and I could # find why this parameter exists. client = get_client(context) fip = self._get_floating_ip_by_address(client, address) client.update_floatingip(fip['id'], {'floatingip': {'port_id': None}}) def migrate_instance_start(self, context, instance, migration): """Start to migrate the network of an instance. If the instance has port bindings on the destination compute host, they are activated in this method which will atomically change the source compute host port binding to inactive and also change the port "binding:host_id" attribute to the destination host. If there are no binding resources for the attached ports on the given destination host, this method is a no-op. :param context: The user request context. :param instance: The instance being migrated. :param migration: dict with required keys:: "source_compute": The name of the source compute host. "dest_compute": The name of the destination compute host. :raises: nova.exception.PortBindingActivationFailed if any port binding activation fails """ if not self.supports_port_binding_extension(context): # If neutron isn't new enough yet for the port "binding-extended" # API extension, we just no-op. The port binding host will be # be updated in migrate_instance_finish, which is functionally OK, # it's just not optimal. LOG.debug('Neutron is not new enough to perform early destination ' 'host port binding activation. Port bindings will be ' 'updated later.', instance=instance) return client = _get_ksa_client(context, admin=True) dest_host = migration['dest_compute'] for vif in instance.get_network_info(): # Not all compute migration flows use the port binding-extended # API yet, so first check to see if there is a binding for the # port and destination host. resp = self._get_port_binding( context, client, vif['id'], dest_host) if resp: if resp.json()['binding']['status'] != 'ACTIVE': self.activate_port_binding(context, vif['id'], dest_host) # TODO(mriedem): Do we need to call # _clear_migration_port_profile? migrate_instance_finish # would normally take care of clearing the "migrating_to" # attribute on each port when updating the port's # binding:host_id to point to the destination host. else: # We might be racing with another thread that's handling # post-migrate operations and already activated the port # binding for the destination host. 
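                    # Nothing further to do in that case: the desired end
                    # state (an ACTIVE binding on the destination host) is
                    # already in place, so it is only logged below.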
LOG.debug('Port %s binding to destination host %s is ' 'already ACTIVE.', vif['id'], dest_host, instance=instance) elif resp.status_code == 404: # If there is no port binding record for the destination host, # we can safely assume none of the ports attached to the # instance are using the binding-extended API in this flow and # exit early. return else: # We don't raise an exception here because we assume that # port bindings will be updated correctly when # migrate_instance_finish runs. LOG.error('Unexpected error trying to get binding info ' 'for port %s and destination host %s. Code: %i. ' 'Error: %s', vif['id'], dest_host, resp.status_code, resp.text) def migrate_instance_finish( self, context, instance, migration, provider_mappings): """Finish migrating the network of an instance. :param context: nova auth request context :param instance: Instance object being migrated :param migration: Migration object for the operation; used to determine the phase of the migration which dictates what to do with claimed PCI devices for SR-IOV ports :param provider_mappings: a dict of list of resource provider uuids keyed by port uuid """ self._update_port_binding_for_instance( context, instance, migration.dest_compute, migration=migration, provider_mappings=provider_mappings) def _nw_info_get_ips(self, client, port): network_IPs = [] for fixed_ip in port['fixed_ips']: fixed = network_model.FixedIP(address=fixed_ip['ip_address']) floats = self._get_floating_ips_by_fixed_and_port( client, fixed_ip['ip_address'], port['id']) for ip in floats: fip = network_model.IP(address=ip['floating_ip_address'], type='floating') fixed.add_floating_ip(fip) network_IPs.append(fixed) return network_IPs def _nw_info_get_subnets(self, context, port, network_IPs, client=None): subnets = self._get_subnets_from_port(context, port, client) for subnet in subnets: subnet['ips'] = [fixed_ip for fixed_ip in network_IPs if fixed_ip.is_in_subnet(subnet)] return subnets def _nw_info_build_network(self, context, port, networks, subnets): # TODO(stephenfin): Pass in an existing admin client if available. neutron = get_client(context, admin=True) network_name = None network_mtu = None for net in networks: if port['network_id'] == net['id']: network_name = net['name'] tenant_id = net['tenant_id'] network_mtu = net.get('mtu') break else: tenant_id = port['tenant_id'] LOG.warning("Network %(id)s not matched with the tenants " "network! 
The ports tenant %(tenant_id)s will be " "used.", {'id': port['network_id'], 'tenant_id': tenant_id}) bridge = None ovs_interfaceid = None # Network model metadata should_create_bridge = None vif_type = port.get('binding:vif_type') port_details = port.get('binding:vif_details', {}) if vif_type in [network_model.VIF_TYPE_OVS, network_model.VIF_TYPE_AGILIO_OVS]: bridge = port_details.get(network_model.VIF_DETAILS_BRIDGE_NAME, CONF.neutron.ovs_bridge) ovs_interfaceid = port['id'] elif vif_type == network_model.VIF_TYPE_BRIDGE: bridge = port_details.get(network_model.VIF_DETAILS_BRIDGE_NAME, "brq" + port['network_id']) should_create_bridge = True elif vif_type == network_model.VIF_TYPE_DVS: # The name of the DVS port group will contain the neutron # network id bridge = port['network_id'] elif (vif_type == network_model.VIF_TYPE_VHOSTUSER and port_details.get(network_model.VIF_DETAILS_VHOSTUSER_OVS_PLUG, False)): bridge = port_details.get(network_model.VIF_DETAILS_BRIDGE_NAME, CONF.neutron.ovs_bridge) ovs_interfaceid = port['id'] elif (vif_type == network_model.VIF_TYPE_VHOSTUSER and port_details.get(network_model.VIF_DETAILS_VHOSTUSER_FP_PLUG, False)): bridge = port_details.get(network_model.VIF_DETAILS_BRIDGE_NAME, "brq" + port['network_id']) # Prune the bridge name if necessary. For the DVS this is not done # as the bridge is a '-'. if bridge is not None and vif_type != network_model.VIF_TYPE_DVS: bridge = bridge[:network_model.NIC_NAME_LEN] physnet, tunneled = self._get_physnet_tunneled_info( context, neutron, port['network_id']) network = network_model.Network( id=port['network_id'], bridge=bridge, injected=CONF.flat_injected, label=network_name, tenant_id=tenant_id, mtu=network_mtu, physical_network=physnet, tunneled=tunneled ) network['subnets'] = subnets if should_create_bridge is not None: network['should_create_bridge'] = should_create_bridge return network, ovs_interfaceid def _get_preexisting_port_ids(self, instance): """Retrieve the preexisting ports associated with the given instance. These ports were not created by nova and hence should not be deallocated upon instance deletion. """ net_info = instance.get_network_info() if not net_info: LOG.debug('Instance cache missing network info.', instance=instance) return [vif['id'] for vif in net_info if vif.get('preserve_on_delete')] def _build_vif_model(self, context, client, current_neutron_port, networks, preexisting_port_ids): """Builds a ``nova.network.model.VIF`` object based on the parameters and current state of the port in Neutron. :param context: Request context. :param client: Neutron client. :param current_neutron_port: The current state of a Neutron port from which to build the VIF object model. :param networks: List of dicts which represent Neutron networks associated with the ports currently attached to a given server instance. :param preexisting_port_ids: List of IDs of ports attached to a given server instance which Nova did not create and therefore should not delete when the port is detached from the server. :return: nova.network.model.VIF object which represents a port in the instance network info cache. 
""" vif_active = False if (current_neutron_port['admin_state_up'] is False or current_neutron_port['status'] == 'ACTIVE'): vif_active = True network_IPs = self._nw_info_get_ips(client, current_neutron_port) subnets = self._nw_info_get_subnets(context, current_neutron_port, network_IPs, client) devname = "tap" + current_neutron_port['id'] devname = devname[:network_model.NIC_NAME_LEN] network, ovs_interfaceid = ( self._nw_info_build_network(context, current_neutron_port, networks, subnets)) preserve_on_delete = (current_neutron_port['id'] in preexisting_port_ids) return network_model.VIF( id=current_neutron_port['id'], address=current_neutron_port['mac_address'], network=network, vnic_type=current_neutron_port.get('binding:vnic_type', network_model.VNIC_TYPE_NORMAL), type=current_neutron_port.get('binding:vif_type'), profile=get_binding_profile(current_neutron_port), details=current_neutron_port.get('binding:vif_details'), ovs_interfaceid=ovs_interfaceid, devname=devname, active=vif_active, preserve_on_delete=preserve_on_delete) def _build_network_info_model(self, context, instance, networks=None, port_ids=None, admin_client=None, preexisting_port_ids=None, refresh_vif_id=None, force_refresh=False): """Return list of ordered VIFs attached to instance. :param context: Request context. :param instance: Instance we are returning network info for. :param networks: List of networks being attached to an instance. If value is None this value will be populated from the existing cached value. :param port_ids: List of port_ids that are being attached to an instance in order of attachment. If value is None this value will be populated from the existing cached value. :param admin_client: A neutron client for the admin context. :param preexisting_port_ids: List of port_ids that nova didn't allocate and there shouldn't be deleted when an instance is de-allocated. Supplied list will be added to the cached list of preexisting port IDs for this instance. :param refresh_vif_id: Optional port ID to refresh within the existing cache rather than the entire cache. This can be triggered via a "network-changed" server external event from Neutron. :param force_refresh: If ``networks`` and ``port_ids`` are both None, by default the instance.info_cache will be used to populate the network info. Pass ``True`` to force collection of ports and networks from neutron directly. """ search_opts = {'tenant_id': instance.project_id, 'device_id': instance.uuid, } if admin_client is None: client = get_client(context, admin=True) else: client = admin_client data = client.list_ports(**search_opts) current_neutron_ports = data.get('ports', []) if preexisting_port_ids is None: preexisting_port_ids = [] preexisting_port_ids = set( preexisting_port_ids + self._get_preexisting_port_ids(instance)) current_neutron_port_map = {} for current_neutron_port in current_neutron_ports: current_neutron_port_map[current_neutron_port['id']] = ( current_neutron_port) # Figure out what kind of operation we're processing. If we're given # a single port to refresh then we try to optimize and update just the # information for that VIF in the existing cache rather than try to # rebuild the entire thing. if refresh_vif_id is not None: # TODO(mriedem): Consider pulling this out into it's own method. nw_info = instance.get_network_info() if nw_info: current_neutron_port = current_neutron_port_map.get( refresh_vif_id) if current_neutron_port: # Get the network for the port. 
networks = self._get_available_networks( context, instance.project_id, [current_neutron_port['network_id']], client) # Build the VIF model given the latest port information. refreshed_vif = self._build_vif_model( context, client, current_neutron_port, networks, preexisting_port_ids) for index, vif in enumerate(nw_info): if vif['id'] == refresh_vif_id: # Update the existing entry. nw_info[index] = refreshed_vif LOG.debug('Updated VIF entry in instance network ' 'info cache for port %s.', refresh_vif_id, instance=instance) break else: # If it wasn't in the existing cache, add it. nw_info.append(refreshed_vif) LOG.debug('Added VIF to instance network info cache ' 'for port %s.', refresh_vif_id, instance=instance) else: # This port is no longer associated with the instance, so # simply remove it from the nw_info cache. for index, vif in enumerate(nw_info): if vif['id'] == refresh_vif_id: LOG.info('Port %s from network info_cache is no ' 'longer associated with instance in ' 'Neutron. Removing from network ' 'info_cache.', refresh_vif_id, instance=instance) del nw_info[index] break return nw_info # else there is no existing cache and we need to build it # Determine if we're doing a full refresh (_heal_instance_info_cache) # or if we are refreshing because we have attached/detached a port. # TODO(mriedem); we should leverage refresh_vif_id in the latter case # since we are unnecessarily rebuilding the entire cache for one port nw_info_refresh = networks is None and port_ids is None if nw_info_refresh and force_refresh: # Use the current set of ports from neutron rather than the cache. port_ids = self._get_ordered_port_list(context, instance, current_neutron_ports) net_ids = [current_neutron_port_map.get(port_id).get('network_id') for port_id in port_ids] # This is copied from _gather_port_ids_and_networks. networks = self._get_available_networks( context, instance.project_id, net_ids, client) else: # We are refreshing the full cache using the existing cache rather # than what is currently in neutron. networks, port_ids = self._gather_port_ids_and_networks( context, instance, networks, port_ids, client) nw_info = network_model.NetworkInfo() for port_id in port_ids: current_neutron_port = current_neutron_port_map.get(port_id) if current_neutron_port: vif = self._build_vif_model( context, client, current_neutron_port, networks, preexisting_port_ids) nw_info.append(vif) elif nw_info_refresh: LOG.info('Port %s from network info_cache is no ' 'longer associated with instance in Neutron. ' 'Removing from network info_cache.', port_id, instance=instance) return nw_info def _get_ordered_port_list(self, context, instance, current_neutron_ports): """Returns ordered port list using nova virtual_interface data.""" # a dict, keyed by port UUID, of the port's "index" # so that we can order the returned port UUIDs by the # original insertion order followed by any newly-attached # ports port_uuid_to_index_map = {} port_order_list = [] ports_without_order = [] # Get set of ports from nova vifs vifs = self.get_vifs_by_instance(context, instance) for port in current_neutron_ports: # NOTE(mjozefcz): For each port check if we have its index from # nova virtual_interfaces objects. If not - it seems # to be a new port - add it at the end of list. # Find port index if it was attached before. for vif in vifs: if vif.uuid == port['id']: port_uuid_to_index_map[port['id']] = vif.id break if port['id'] not in port_uuid_to_index_map: # Assume that it's new port and add it to the end of port list. 
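                # Worked example (hypothetical IDs): with VIF records giving
                # indexes {port-A: 1, port-B: 2} and neutron returning
                # [port-B, port-C, port-A], the sort below yields
                # [port-A, port-B] and the newly-attached port-C is appended
                # at the end.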
ports_without_order.append(port['id']) # Lets sort created port order_list by given index. port_order_list = sorted(port_uuid_to_index_map, key=lambda k: port_uuid_to_index_map[k]) # Add ports without order to the end of list port_order_list.extend(ports_without_order) return port_order_list def _get_subnets_from_port(self, context, port, client=None): """Return the subnets for a given port.""" fixed_ips = port['fixed_ips'] # No fixed_ips for the port means there is no subnet associated # with the network the port is created on. # Since list_subnets(id=[]) returns all subnets visible for the # current tenant, returned subnets may contain subnets which is not # related to the port. To avoid this, the method returns here. if not fixed_ips: return [] if not client: client = get_client(context) search_opts = {'id': list(set(ip['subnet_id'] for ip in fixed_ips))} data = client.list_subnets(**search_opts) ipam_subnets = data.get('subnets', []) subnets = [] for subnet in ipam_subnets: subnet_dict = {'cidr': subnet['cidr'], 'gateway': network_model.IP( address=subnet['gateway_ip'], type='gateway'), } if subnet.get('ipv6_address_mode'): subnet_dict['ipv6_address_mode'] = subnet['ipv6_address_mode'] # attempt to populate DHCP server field search_opts = {'network_id': subnet['network_id'], 'device_owner': 'network:dhcp'} data = client.list_ports(**search_opts) dhcp_ports = data.get('ports', []) for p in dhcp_ports: for ip_pair in p['fixed_ips']: if ip_pair['subnet_id'] == subnet['id']: subnet_dict['dhcp_server'] = ip_pair['ip_address'] break # NOTE(arnaudmorin): If enable_dhcp is set on subnet, but, for # some reason neutron did not have any DHCP port yet, we still # want the network_info to be populated with a valid dhcp_server # value. This is mostly useful for the metadata API (which is # relying on this value to give network_data to the instance). # # This will also help some providers which are using external # DHCP servers not handled by neutron. # In this case, neutron will never create any DHCP port in the # subnet. # # Also note that we cannot set the value to None because then the # value would be discarded by the metadata API. # So the subnet gateway will be used as fallback. if subnet.get('enable_dhcp') and 'dhcp_server' not in subnet_dict: subnet_dict['dhcp_server'] = subnet['gateway_ip'] subnet_object = network_model.Subnet(**subnet_dict) for dns in subnet.get('dns_nameservers', []): subnet_object.add_dns( network_model.IP(address=dns, type='dns')) for route in subnet.get('host_routes', []): subnet_object.add_route( network_model.Route(cidr=route['destination'], gateway=network_model.IP( address=route['nexthop'], type='gateway'))) subnets.append(subnet_object) return subnets def setup_instance_network_on_host( self, context, instance, host, migration=None, provider_mappings=None): """Setup network for specified instance on host. :param context: The request context. :param instance: nova.objects.instance.Instance object. :param host: The host which network should be setup for instance. :param migration: The migration object if the instance is being tracked with a migration. :param provider_mappings: a dict of lists of resource provider uuids keyed by port uuid """ self._update_port_binding_for_instance( context, instance, host, migration, provider_mappings) def cleanup_instance_network_on_host(self, context, instance, host): """Cleanup network for specified instance on host. Port bindings for the given host are deleted. 
The ports associated with the instance via the port device_id field are left intact. :param context: The user request context. :param instance: Instance object with the associated ports :param host: host from which to delete port bindings :raises: PortBindingDeletionFailed if port binding deletion fails. """ # First check to see if the port binding extension is supported. if not self.supports_port_binding_extension(context): LOG.info("Neutron extension '%s' is not supported; not cleaning " "up port bindings for host %s.", constants.PORT_BINDING_EXTENDED, host, instance=instance) return # Now get the ports associated with the instance. We go directly to # neutron rather than rely on the info cache just like # setup_networks_on_host. search_opts = {'device_id': instance.uuid, 'tenant_id': instance.project_id, 'fields': ['id']} # we only need the port id data = self.list_ports(context, **search_opts) self._delete_port_bindings(context, data['ports'], host) def _get_pci_mapping_for_migration(self, instance, migration): if not instance.migration_context: return {} # In case of revert, swap old and new devices to # update the ports back to the original devices. revert = (migration and migration.get('status') == 'reverted') return instance.migration_context.get_pci_mapping_for_migration(revert) def _update_port_binding_for_instance( self, context, instance, host, migration=None, provider_mappings=None): neutron = get_client(context, admin=True) search_opts = {'device_id': instance.uuid, 'tenant_id': instance.project_id} data = neutron.list_ports(**search_opts) pci_mapping = None port_updates = [] ports = data['ports'] FAILED_VIF_TYPES = (network_model.VIF_TYPE_UNBOUND, network_model.VIF_TYPE_BINDING_FAILED) for p in ports: updates = {} binding_profile = get_binding_profile(p) # We need to update the port binding if the host has changed or if # the binding is clearly wrong due to previous lost messages. vif_type = p.get('binding:vif_type') if (p.get(constants.BINDING_HOST_ID) != host or vif_type in FAILED_VIF_TYPES): updates[constants.BINDING_HOST_ID] = host # If the host changed, the AZ could have also changed so we # need to update the device_owner. updates['device_owner'] = ( 'compute:%s' % instance.availability_zone) # NOTE: Before updating the port binding make sure we # remove the pre-migration status from the binding profile if binding_profile.get(constants.MIGRATING_ATTR): del binding_profile[constants.MIGRATING_ATTR] updates[constants.BINDING_PROFILE] = binding_profile # Update port with newly allocated PCI devices. Even if the # resize is happening on the same host, a new PCI device can be # allocated. Note that this only needs to happen if a migration # is in progress such as in a resize / migrate. It is possible # that this function is called without a migration object, such # as in an unshelve operation. 
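# NOTE: an illustrative condensation of the rebinding decision above: a port
# binding is rewritten when the bound host differs from the target host or
# when the current vif_type shows a failed/unbound binding left over from
# lost messages. The key and constant strings mirror the ones used in this
# module, but the helper itself is only a sketch.
def _sketch_needs_rebinding(port, host):
    failed_vif_types = ('unbound', 'binding_failed')
    return (port.get('binding:host_id') != host or
            port.get('binding:vif_type') in failed_vif_types)


# Example: a port whose binding previously failed is re-bound even though
# the host did not change.
assert _sketch_needs_rebinding(
    {'binding:host_id': 'compute1', 'binding:vif_type': 'binding_failed'},
    'compute1')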
vnic_type = p.get('binding:vnic_type') if (vnic_type in network_model.VNIC_TYPES_SRIOV and migration is not None and migration['migration_type'] != constants.LIVE_MIGRATION): # Note(adrianc): for live migration binding profile was already # updated in conductor when calling bind_ports_to_host() if not pci_mapping: pci_mapping = self._get_pci_mapping_for_migration( instance, migration) pci_slot = binding_profile.get('pci_slot') new_dev = pci_mapping.get(pci_slot) if new_dev: binding_profile.update( self._get_pci_device_profile(new_dev)) updates[constants.BINDING_PROFILE] = binding_profile else: raise exception.PortUpdateFailed(port_id=p['id'], reason=_("Unable to correlate PCI slot %s") % pci_slot) # NOTE(gibi): during live migration the conductor already sets the # allocation key in the port binding. However during resize, cold # migrate, evacuate and unshelve we have to set the binding here. # Also note that during unshelve no migration object is created. if (p.get('resource_request') and (migration is None or migration['migration_type'] != constants.LIVE_MIGRATION)): if not provider_mappings: # TODO(gibi): Remove this check when compute RPC API is # bumped to 6.0 # NOTE(gibi): This should not happen as the API level # minimum compute service version check ensures that the # compute services already send the RequestSpec during # the move operations between the source and the # destination and the dest compute calculates the # mapping based on that. LOG.warning( "Provider mappings are not available to the compute " "service but are required for ports with a resource " "request. If compute RPC API versions are pinned for " "a rolling upgrade, you will need to retry this " "operation once the RPC version is unpinned and the " "nova-compute services are all upgraded.", instance=instance) raise exception.PortUpdateFailed( port_id=p['id'], reason=_( "Provider mappings are not available to the " "compute service but are required for ports with " "a resource request.")) # NOTE(gibi): In the resource provider mapping there can be # more than one RP fulfilling a request group. But resource # requests of a Neutron port is always mapped to a # numbered request group that is always fulfilled by one # resource provider. So we only pass that single RP UUID here. binding_profile[constants.ALLOCATION] = \ provider_mappings[p['id']][0] updates[constants.BINDING_PROFILE] = binding_profile port_updates.append((p['id'], updates)) # Avoid rolling back updates if we catch an error above. # TODO(lbeliveau): Batch up the port updates in one neutron call. for port_id, updates in port_updates: if updates: LOG.info("Updating port %(port)s with " "attributes %(attributes)s", {"port": port_id, "attributes": updates}, instance=instance) try: neutron.update_port(port_id, {'port': updates}) except Exception: with excutils.save_and_reraise_exception(): LOG.exception("Unable to update binding details " "for port %s", port_id, instance=instance) def update_instance_vnic_index(self, context, instance, vif, index): """Update instance vnic index. When the 'VNIC index' extension is supported this method will update the vnic index of the instance on the port. An instance may have more than one vnic. :param context: The request context. :param instance: nova.objects.instance.Instance object. :param vif: The VIF in question. :param index: The index on the instance for the VIF. 
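# NOTE: a minimal sketch of how the allocation key is derived above for a
# port that carries a resource request: the port's numbered request group is
# always fulfilled by a single resource provider, so only the first UUID
# from the mapping is written into the binding profile. The 'allocation' key
# mirrors constants.ALLOCATION; all other names are illustrative.
def _sketch_apply_allocation(binding_profile, port_id, provider_mappings):
    if not provider_mappings or port_id not in provider_mappings:
        raise ValueError(
            'Provider mappings are required for ports with a resource '
            'request.')
    binding_profile['allocation'] = provider_mappings[port_id][0]
    return binding_profile


# Example usage with a single-provider mapping for the port.
_sketch_apply_allocation({}, 'port-1', {'port-1': ['rp-uuid-1']})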
""" self._refresh_neutron_extensions_cache(context) if constants.VNIC_INDEX_EXT in self.extensions: neutron = get_client(context) port_req_body = {'port': {'vnic_index': index}} try: neutron.update_port(vif['id'], port_req_body) except Exception: with excutils.save_and_reraise_exception(): LOG.exception('Unable to update instance VNIC index ' 'for port %s.', vif['id'], instance=instance) def _ensure_requested_network_ordering(accessor, unordered, preferred): """Sort a list with respect to the preferred network ordering.""" if preferred: unordered.sort(key=lambda i: preferred.index(accessor(i))) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/network/os_vif_util.py0000664000175000017500000004351300000000000020124 0ustar00zuulzuul00000000000000# Copyright 2016 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. ''' This module contains code for converting from the original nova.network.model data structure, to the new os-vif based versioned object model os_vif.objects.* ''' from os_vif import objects from oslo_config import cfg from oslo_log import log as logging from nova import exception from nova.i18n import _ from nova.network import model LOG = logging.getLogger(__name__) CONF = cfg.CONF LEGACY_VIFS = { model.VIF_TYPE_DVS, model.VIF_TYPE_IOVISOR, model.VIF_TYPE_802_QBG, model.VIF_TYPE_802_QBH, model.VIF_TYPE_HW_VEB, model.VIF_TYPE_HOSTDEV, model.VIF_TYPE_IB_HOSTDEV, model.VIF_TYPE_MIDONET, model.VIF_TYPE_TAP, model.VIF_TYPE_MACVTAP } def _get_vif_name(vif): """Get a VIF device name :param vif: the nova.network.model.VIF instance Get a string suitable for use as a host OS network device name :returns: a device name """ if vif.get('devname', None) is not None: return vif['devname'] return ('nic' + vif['id'])[:model.NIC_NAME_LEN] def _get_hybrid_bridge_name(vif): """Get a bridge device name :param vif: the nova.network.model.VIF instance Get a string suitable for use as a host OS bridge device name :returns: a bridge name """ return ('qbr' + vif['id'])[:model.NIC_NAME_LEN] def _set_vhostuser_settings(vif, obj): """Set vhostuser socket mode and path :param vif: the nova.network.model.VIF instance :param obj: a os_vif.objects.vif.VIFVHostUser instance :raises: exception.VifDetailsMissingVhostuserSockPath """ obj.mode = vif['details'].get( model.VIF_DETAILS_VHOSTUSER_MODE, 'server') path = vif['details'].get( model.VIF_DETAILS_VHOSTUSER_SOCKET, None) if path: obj.path = path else: raise exception.VifDetailsMissingVhostuserSockPath( vif_id=vif['id']) def nova_to_osvif_instance(instance): """Convert a Nova instance object to an os-vif instance object :param vif: a nova.objects.Instance instance :returns: a os_vif.objects.instance_info.InstanceInfo """ info = objects.instance_info.InstanceInfo( uuid=instance.uuid, name=instance.name) if (instance.obj_attr_is_set("project_id") and instance.project_id is not None): info.project_id = instance.project_id return info def _nova_to_osvif_ip(ip): """Convert Nova IP object into os_vif 
object :param route: nova.network.model.IP instance :returns: os_vif.objects.fixed_ip.FixedIP instance """ floating_ips = [fip['address'] for fip in ip.get('floating_ips', [])] return objects.fixed_ip.FixedIP( address=ip['address'], floating_ips=floating_ips) def _nova_to_osvif_ips(ips): """Convert Nova IP list into os_vif object :param routes: list of nova.network.model.IP instances :returns: os_vif.objects.fixed_ip.FixedIPList instance """ return objects.fixed_ip.FixedIPList( objects=[_nova_to_osvif_ip(ip) for ip in ips]) def _nova_to_osvif_route(route): """Convert Nova route object into os_vif object :param route: nova.network.model.Route instance :returns: os_vif.objects.route.Route instance """ obj = objects.route.Route( cidr=route['cidr']) if route['interface'] is not None: obj.interface = route['interface'] if (route['gateway'] is not None and route['gateway']['address'] is not None): obj.gateway = route['gateway']['address'] return obj def _nova_to_osvif_routes(routes): """Convert Nova route list into os_vif object :param routes: list of nova.network.model.Route instances :returns: os_vif.objects.route.RouteList instance """ return objects.route.RouteList( objects=[_nova_to_osvif_route(route) for route in routes]) def _nova_to_osvif_subnet(subnet): """Convert Nova subnet object into os_vif object :param subnet: nova.network.model.Subnet instance :returns: os_vif.objects.subnet.Subnet instance """ dnsaddrs = [ip['address'] for ip in subnet['dns']] obj = objects.subnet.Subnet( dns=dnsaddrs, ips=_nova_to_osvif_ips(subnet['ips']), routes=_nova_to_osvif_routes(subnet['routes'])) if subnet['cidr'] is not None: obj.cidr = subnet['cidr'] if (subnet['gateway'] is not None and subnet['gateway']['address'] is not None): obj.gateway = subnet['gateway']['address'] return obj def _nova_to_osvif_subnets(subnets): """Convert Nova subnet list into os_vif object :param subnets: list of nova.network.model.Subnet instances :returns: os_vif.objects.subnet.SubnetList instance """ return objects.subnet.SubnetList( objects=[_nova_to_osvif_subnet(subnet) for subnet in subnets]) def _nova_to_osvif_network(network): """Convert Nova network object into os_vif object :param network: nova.network.model.Network instance :returns: os_vif.objects.network.Network instance """ netobj = objects.network.Network( id=network['id'], bridge_interface=network.get_meta("bridge_interface"), subnets=_nova_to_osvif_subnets(network['subnets'])) if network["bridge"] is not None: netobj.bridge = network['bridge'] if network['label'] is not None: netobj.label = network['label'] if network.get_meta("mtu") is not None: netobj.mtu = network.get_meta("mtu") if network.get_meta("multi_host") is not None: netobj.multi_host = network.get_meta("multi_host") if network.get_meta("should_create_bridge") is not None: netobj.should_provide_bridge = network.get_meta("should_create_bridge") if network.get_meta("should_create_vlan") is not None: netobj.should_provide_vlan = network.get_meta("should_create_vlan") if network.get_meta("vlan") is None: raise exception.NovaException(_("Missing vlan number in %s") % network) netobj.vlan = network.get_meta("vlan") return netobj def _get_vif_instance(vif, cls, plugin, **kwargs): """Instantiate an os-vif VIF instance :param vif: the nova.network.model.VIF instance :param cls: class for a os_vif.objects.vif.VIFBase subclass :returns: a os_vif.objects.vif.VIFBase instance """ return cls( id=vif['id'], address=vif['address'], network=_nova_to_osvif_network(vif['network']), 
has_traffic_filtering=vif.is_neutron_filtering_enabled(), preserve_on_delete=vif['preserve_on_delete'], active=vif['active'], plugin=plugin, **kwargs) def _set_representor_datapath_offload_settings(vif, obj): """Populate the representor datapath offload metadata in the port profile. This function should only be called if the VIF's ``vnic_type`` is in the VNIC_TYPES_SRIOV list, and the ``port_profile`` field of ``obj`` has been populated. :param vif: the nova.network.model.VIF instance :param obj: an os_vif.objects.vif.VIFBase instance """ datapath_offload = objects.vif.DatapathOffloadRepresentor( representor_name=_get_vif_name(vif), representor_address=vif["profile"]["pci_slot"]) obj.port_profile.datapath_offload = datapath_offload def _get_vnic_direct_vif_instance(vif, port_profile, plugin, set_bridge=True): """Instantiate an os-vif VIF instance for ``vnic_type`` = VNIC_TYPE_DIRECT :param vif: the nova.network.model.VIF instance :param port_profile: an os_vif.objects.vif.VIFPortProfileBase instance :param plugin: the os-vif plugin name :param set_bridge: if True, populate obj.network.bridge :returns: an os_vif.objects.vif.VIFHostDevice instance """ obj = _get_vif_instance( vif, objects.vif.VIFHostDevice, port_profile=port_profile, plugin=plugin, dev_address=vif["profile"]["pci_slot"], dev_type=objects.fields.VIFHostDeviceDevType.ETHERNET ) if set_bridge and vif["network"]["bridge"] is not None: obj.network.bridge = vif["network"]["bridge"] return obj def _get_ovs_representor_port_profile(vif): """Instantiate an os-vif port_profile object. :param vif: the nova.network.model.VIF instance :returns: an os_vif.objects.vif.VIFPortProfileOVSRepresentor instance """ # TODO(jangutter): in accordance with the generic-os-vif-offloads spec, # the datapath offload info is duplicated in both interfaces for Stein. # The port profile should be transitioned to VIFPortProfileOpenVSwitch # during Train. 
return objects.vif.VIFPortProfileOVSRepresentor( interface_id=vif.get('ovs_interfaceid') or vif['id'], representor_name=_get_vif_name(vif), representor_address=vif["profile"]['pci_slot']) # VIF_TYPE_BRIDGE = 'bridge' def _nova_to_osvif_vif_bridge(vif): obj = _get_vif_instance( vif, objects.vif.VIFBridge, plugin="linux_bridge", vif_name=_get_vif_name(vif)) if vif["network"]["bridge"] is not None: obj.bridge_name = vif["network"]["bridge"] return obj # VIF_TYPE_OVS = 'ovs' def _nova_to_osvif_vif_ovs(vif): vif_name = _get_vif_name(vif) vnic_type = vif.get('vnic_type', model.VNIC_TYPE_NORMAL) profile = objects.vif.VIFPortProfileOpenVSwitch( interface_id=vif.get('ovs_interfaceid') or vif['id'], datapath_type=vif['details'].get( model.VIF_DETAILS_OVS_DATAPATH_TYPE)) if vnic_type == model.VNIC_TYPE_DIRECT: obj = _get_vnic_direct_vif_instance( vif, port_profile=_get_ovs_representor_port_profile(vif), plugin="ovs") _set_representor_datapath_offload_settings(vif, obj) elif vif.is_hybrid_plug_enabled(): obj = _get_vif_instance( vif, objects.vif.VIFBridge, port_profile=profile, plugin="ovs", vif_name=vif_name, bridge_name=_get_hybrid_bridge_name(vif)) else: obj = _get_vif_instance( vif, objects.vif.VIFOpenVSwitch, port_profile=profile, plugin="ovs", vif_name=vif_name) if vif["network"]["bridge"] is not None: obj.bridge_name = vif["network"]["bridge"] return obj # VIF_TYPE_AGILIO_OVS = 'agilio_ovs' def _nova_to_osvif_vif_agilio_ovs(vif): vnic_type = vif.get('vnic_type', model.VNIC_TYPE_NORMAL) if vnic_type == model.VNIC_TYPE_DIRECT: obj = _get_vnic_direct_vif_instance( vif, plugin="agilio_ovs", port_profile=_get_ovs_representor_port_profile(vif)) _set_representor_datapath_offload_settings(vif, obj) elif vnic_type == model.VNIC_TYPE_VIRTIO_FORWARDER: obj = _get_vif_instance( vif, objects.vif.VIFVHostUser, port_profile=_get_ovs_representor_port_profile(vif), plugin="agilio_ovs", vif_name=_get_vif_name(vif)) _set_representor_datapath_offload_settings(vif, obj) _set_vhostuser_settings(vif, obj) if vif["network"]["bridge"] is not None: obj.network.bridge = vif["network"]["bridge"] else: LOG.debug("agilio_ovs falling through to ovs %s", vif) obj = _nova_to_osvif_vif_ovs(vif) return obj # VIF_TYPE_VHOST_USER = 'vhostuser' def _nova_to_osvif_vif_vhostuser(vif): if vif['details'].get(model.VIF_DETAILS_VHOSTUSER_FP_PLUG, False): if vif['details'].get(model.VIF_DETAILS_VHOSTUSER_OVS_PLUG, False): profile = objects.vif.VIFPortProfileFPOpenVSwitch( interface_id=vif.get('ovs_interfaceid') or vif['id'], datapath_type=vif['details'].get( model.VIF_DETAILS_OVS_DATAPATH_TYPE)) if vif.is_hybrid_plug_enabled(): profile.bridge_name = _get_hybrid_bridge_name(vif) profile.hybrid_plug = True else: profile.hybrid_plug = False if vif["network"]["bridge"] is not None: profile.bridge_name = vif["network"]["bridge"] else: profile = objects.vif.VIFPortProfileFPBridge() if vif["network"]["bridge"] is not None: profile.bridge_name = vif["network"]["bridge"] obj = _get_vif_instance(vif, objects.vif.VIFVHostUser, plugin="vhostuser_fp", vif_name=_get_vif_name(vif), port_profile=profile) _set_vhostuser_settings(vif, obj) return obj elif vif['details'].get(model.VIF_DETAILS_VHOSTUSER_OVS_PLUG, False): profile = objects.vif.VIFPortProfileOpenVSwitch( interface_id=vif.get('ovs_interfaceid') or vif['id'], datapath_type=vif['details'].get( model.VIF_DETAILS_OVS_DATAPATH_TYPE)) vif_name = ('vhu' + vif['id'])[:model.NIC_NAME_LEN] obj = _get_vif_instance(vif, objects.vif.VIFVHostUser, port_profile=profile, plugin="ovs", vif_name=vif_name) if 
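# NOTE: an illustrative reduction of the OVS branching above: a DIRECT vnic
# becomes a host-device VIF with a representor port profile, a hybrid-plugged
# port becomes a Linux bridge VIF on a per-port 'qbr' bridge, and everything
# else is plugged straight into Open vSwitch. The returned strings stand in
# for the os_vif object classes and are examples only.
def _sketch_ovs_vif_kind(vnic_type, hybrid_plug):
    if vnic_type == 'direct':
        return 'VIFHostDevice (OVS representor profile)'
    if hybrid_plug:
        return 'VIFBridge (qbr hybrid bridge)'
    return 'VIFOpenVSwitch'


# Example: a normal port with hybrid plug enabled uses the hybrid bridge.
_sketch_ovs_vif_kind('normal', hybrid_plug=True)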
vif["network"]["bridge"] is not None: obj.bridge_name = vif["network"]["bridge"] _set_vhostuser_settings(vif, obj) return obj elif vif['details'].get(model.VIF_DETAILS_VHOSTUSER_VROUTER_PLUG, False): obj = _get_vif_instance(vif, objects.vif.VIFVHostUser, plugin="contrail_vrouter", vif_name=_get_vif_name(vif)) _set_vhostuser_settings(vif, obj) return obj else: obj = _get_vif_instance(vif, objects.vif.VIFVHostUser, plugin="noop", vif_name=_get_vif_name(vif)) _set_vhostuser_settings(vif, obj) return obj # VIF_TYPE_IVS = 'ivs' def _nova_to_osvif_vif_ivs(vif): if vif.is_hybrid_plug_enabled(): obj = _get_vif_instance( vif, objects.vif.VIFBridge, plugin="ivs", vif_name=_get_vif_name(vif), bridge_name=_get_hybrid_bridge_name(vif)) else: obj = _get_vif_instance( vif, objects.vif.VIFGeneric, plugin="ivs", vif_name=_get_vif_name(vif)) return obj # VIF_TYPE_VROUTER = 'vrouter' def _nova_to_osvif_vif_vrouter(vif): vif_name = _get_vif_name(vif) vnic_type = vif.get('vnic_type', model.VNIC_TYPE_NORMAL) if vnic_type == model.VNIC_TYPE_NORMAL: obj = _get_vif_instance( vif, objects.vif.VIFGeneric, plugin="vrouter", vif_name=vif_name) elif vnic_type == model.VNIC_TYPE_DIRECT: obj = _get_vnic_direct_vif_instance( vif, port_profile=objects.vif.VIFPortProfileBase(), plugin="vrouter", set_bridge=False) _set_representor_datapath_offload_settings(vif, obj) elif vnic_type == model.VNIC_TYPE_VIRTIO_FORWARDER: obj = _get_vif_instance( vif, objects.vif.VIFVHostUser, port_profile=objects.vif.VIFPortProfileBase(), plugin="vrouter", vif_name=vif_name) _set_representor_datapath_offload_settings(vif, obj) _set_vhostuser_settings(vif, obj) else: raise NotImplementedError() return obj def nova_to_osvif_vif(vif): """Convert a Nova VIF model to an os-vif object :param vif: a nova.network.model.VIF instance Attempt to convert a nova VIF instance into an os-vif VIF object, pointing to a suitable plugin. This will return None if there is no os-vif plugin available yet. :returns: a os_vif.objects.vif.VIFBase subclass, or None if not supported with os-vif yet """ LOG.debug("Converting VIF %s", vif) vif_type = vif['type'] if vif_type in LEGACY_VIFS: # We want to explicitly fall back to the legacy path for these VIF # types LOG.debug('No conversion for VIF type %s yet', vif_type) return None if vif_type in {model.VIF_TYPE_BINDING_FAILED, model.VIF_TYPE_UNBOUND}: # These aren't real VIF types. VIF_TYPE_BINDING_FAILED indicates port # binding to a host failed and we are trying to plug the VIFs again, # which will fail because we do not know the actual real VIF type, like # VIF_TYPE_OVS, VIF_TYPE_BRIDGE, etc. VIF_TYPE_UNBOUND, by comparison, # is the default VIF type of a driver when it is not bound to any host, # i.e. we have not set the host ID in the binding driver. This should # also only happen in error cases. 
# TODO(stephenfin): We probably want a more meaningful log here LOG.debug('No conversion for VIF type %s yet', vif_type) return None if vif_type == model.VIF_TYPE_OVS: vif_obj = _nova_to_osvif_vif_ovs(vif) elif vif_type == model.VIF_TYPE_IVS: vif_obj = _nova_to_osvif_vif_ivs(vif) elif vif_type == model.VIF_TYPE_BRIDGE: vif_obj = _nova_to_osvif_vif_bridge(vif) elif vif_type == model.VIF_TYPE_AGILIO_OVS: vif_obj = _nova_to_osvif_vif_agilio_ovs(vif) elif vif_type == model.VIF_TYPE_VHOSTUSER: vif_obj = _nova_to_osvif_vif_vhostuser(vif) elif vif_type == model.VIF_TYPE_VROUTER: vif_obj = _nova_to_osvif_vif_vrouter(vif) else: raise exception.NovaException('Unsupported VIF type %s' % vif_type) LOG.debug('Converted object %s', vif_obj) return vif_obj ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/network/security_group_api.py0000664000175000017500000006722300000000000021522 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2011 Piston Cloud Computing, Inc. # Copyright 2012 Red Hat, Inc. # Copyright 2013 Nicira, Inc. # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import sys import netaddr from neutronclient.common import exceptions as n_exc from neutronclient.neutron import v2_0 as neutronv20 from oslo_log import log as logging from oslo_utils import encodeutils from oslo_utils import excutils from oslo_utils import netutils from oslo_utils import uuidutils import six from six.moves import urllib from webob import exc from nova import context as nova_context from nova import exception from nova.i18n import _ from nova.network import neutron as neutronapi from nova.objects import security_group as security_group_obj from nova import utils LOG = logging.getLogger(__name__) # NOTE: Neutron client has a max URL length of 8192, so we have # to limit the number of IDs we include in any single search. Really # doesn't seem to be any point in making this a config value. MAX_SEARCH_IDS = 150 def validate_id(id): if not uuidutils.is_uuid_like(id): msg = _("Security group id should be uuid") raise exception.Invalid(msg) return id def validate_name( context: nova_context.RequestContext, name: str): """Validate a security group name and return the corresponding UUID. :param context: The nova request context. :param name: The name of the security group. :raises NoUniqueMatch: If there is no unique match for the provided name. :raises SecurityGroupNotFound: If there's no match for the provided name. :raises NeutronClientException: For all other exceptions. 
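# NOTE: nova_to_osvif_vif() above is essentially a dispatch on vif['type'];
# the sketch below shows the same shape with a plain dict, returning None
# for legacy types (and the pseudo types 'binding_failed'/'unbound') that
# still take the non-os-vif code path. The converter names are placeholder
# strings for the module-level functions above, not callables.
_SKETCH_CONVERTERS = {
    'ovs': '_nova_to_osvif_vif_ovs',
    'ivs': '_nova_to_osvif_vif_ivs',
    'bridge': '_nova_to_osvif_vif_bridge',
    'agilio_ovs': '_nova_to_osvif_vif_agilio_ovs',
    'vhostuser': '_nova_to_osvif_vif_vhostuser',
    'vrouter': '_nova_to_osvif_vif_vrouter',
}


def _sketch_lookup_converter(vif_type, legacy_vifs):
    if vif_type in legacy_vifs or vif_type in ('binding_failed', 'unbound'):
        return None          # legacy path / not a real VIF type
    try:
        return _SKETCH_CONVERTERS[vif_type]
    except KeyError:
        raise ValueError('Unsupported VIF type %s' % vif_type)


# Example: a bridge port maps to the bridge converter.
_sketch_lookup_converter('bridge', legacy_vifs={'tap', 'macvtap'})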
""" neutron = neutronapi.get_client(context) try: return neutronv20.find_resourceid_by_name_or_id( neutron, 'security_group', name, context.project_id) except n_exc.NeutronClientNoUniqueMatch as e: raise exception.NoUniqueMatch(six.text_type(e)) except n_exc.NeutronClientException as e: exc_info = sys.exc_info() if e.status_code == 404: LOG.debug('Neutron security group %s not found', name) raise exception.SecurityGroupNotFound(six.text_type(e)) else: LOG.error('Neutron Error: %s', e) six.reraise(*exc_info) def parse_cidr(cidr): if not cidr: return '0.0.0.0/0' try: cidr = encodeutils.safe_decode(urllib.parse.unquote(cidr)) except Exception: raise exception.InvalidCidr(cidr=cidr) if not netutils.is_valid_cidr(cidr): raise exception.InvalidCidr(cidr=cidr) return cidr def new_group_ingress_rule(grantee_group_id, protocol, from_port, to_port): return _new_ingress_rule( protocol, from_port, to_port, group_id=grantee_group_id) def new_cidr_ingress_rule(grantee_cidr, protocol, from_port, to_port): return _new_ingress_rule( protocol, from_port, to_port, cidr=grantee_cidr) def _new_ingress_rule(ip_protocol, from_port, to_port, group_id=None, cidr=None): values = {} if group_id: values['group_id'] = group_id # Open everything if an explicit port range or type/code are not # specified, but only if a source group was specified. ip_proto_upper = ip_protocol.upper() if ip_protocol else '' if (ip_proto_upper == 'ICMP' and from_port is None and to_port is None): from_port = -1 to_port = -1 elif (ip_proto_upper in ['TCP', 'UDP'] and from_port is None and to_port is None): from_port = 1 to_port = 65535 elif cidr: values['cidr'] = cidr if ip_protocol and from_port is not None and to_port is not None: ip_protocol = str(ip_protocol) try: # Verify integer conversions from_port = int(from_port) to_port = int(to_port) except ValueError: if ip_protocol.upper() == 'ICMP': raise exception.InvalidInput(reason=_("Type and" " Code must be integers for ICMP protocol type")) else: raise exception.InvalidInput(reason=_("To and From ports " "must be integers")) if ip_protocol.upper() not in ['TCP', 'UDP', 'ICMP']: raise exception.InvalidIpProtocol(protocol=ip_protocol) # Verify that from_port must always be less than # or equal to to_port if (ip_protocol.upper() in ['TCP', 'UDP'] and (from_port > to_port)): raise exception.InvalidPortRange(from_port=from_port, to_port=to_port, msg="Former value cannot" " be greater than the later") # Verify valid TCP, UDP port ranges if (ip_protocol.upper() in ['TCP', 'UDP'] and (from_port < 1 or to_port > 65535)): raise exception.InvalidPortRange(from_port=from_port, to_port=to_port, msg="Valid %s ports should" " be between 1-65535" % ip_protocol.upper()) # Verify ICMP type and code if (ip_protocol.upper() == "ICMP" and (from_port < -1 or from_port > 255 or to_port < -1 or to_port > 255)): raise exception.InvalidPortRange(from_port=from_port, to_port=to_port, msg="For ICMP, the" " type:code must be valid") values['protocol'] = ip_protocol values['from_port'] = from_port values['to_port'] = to_port else: # If cidr based filtering, protocol and ports are mandatory if cidr: return None return values def create_security_group_rule(context, security_group, new_rule): if _rule_exists(security_group, new_rule): msg = (_('This rule already exists in group %s') % new_rule['parent_group_id']) raise exception.Invalid(msg) return add_rules(context, new_rule['parent_group_id'], security_group['name'], [new_rule])[0] def _rule_exists(security_group, new_rule): """Indicates whether the specified rule is 
already defined in the given security group. """ for rule in security_group['rules']: keys = ('group_id', 'cidr', 'from_port', 'to_port', 'protocol') for key in keys: if rule.get(key) != new_rule.get(key): break else: return rule.get('id') or True return False def validate_property(value, property, allowed): """Validate given security group property. :param value: the value to validate, as a string or unicode :param property: the property, either 'name' or 'description' :param allowed: the range of characters allowed, but not used because Neutron is allowing any characters. """ utils.check_string_length(value, name=property, min_length=0, max_length=255) def populate_security_groups(security_groups): """Build and return a SecurityGroupList. :param security_groups: list of requested security group names or uuids :type security_groups: list :returns: nova.objects.security_group.SecurityGroupList """ if not security_groups: # Make sure it's an empty SecurityGroupList and not None return security_group_obj.SecurityGroupList() return security_group_obj.make_secgroup_list(security_groups) def create_security_group(context, name, description): neutron = neutronapi.get_client(context) body = _make_neutron_security_group_dict(name, description) try: security_group = neutron.create_security_group( body).get('security_group') except n_exc.BadRequest as e: raise exception.Invalid(six.text_type(e)) except n_exc.NeutronClientException as e: exc_info = sys.exc_info() LOG.exception("Neutron Error creating security group %s", name) if e.status_code == 401: # TODO(arosen) Cannot raise generic response from neutron here # as this error code could be related to bad input or over # quota raise exc.HTTPBadRequest() elif e.status_code == 409: raise exception.SecurityGroupLimitExceeded(six.text_type(e)) six.reraise(*exc_info) return _convert_to_nova_security_group_format(security_group) def update_security_group(context, security_group, name, description): neutron = neutronapi.get_client(context) body = _make_neutron_security_group_dict(name, description) try: security_group = neutron.update_security_group( security_group['id'], body).get('security_group') except n_exc.NeutronClientException as e: exc_info = sys.exc_info() LOG.exception("Neutron Error updating security group %s", name) if e.status_code == 401: # TODO(arosen) Cannot raise generic response from neutron here # as this error code could be related to bad input or over # quota raise exc.HTTPBadRequest() six.reraise(*exc_info) return _convert_to_nova_security_group_format(security_group) def _convert_to_nova_security_group_format(security_group): nova_group = {} nova_group['id'] = security_group['id'] nova_group['description'] = security_group['description'] nova_group['name'] = security_group['name'] nova_group['project_id'] = security_group['tenant_id'] nova_group['rules'] = [] for rule in security_group.get('security_group_rules', []): if rule['direction'] == 'ingress': nova_group['rules'].append( _convert_to_nova_security_group_rule_format(rule)) return nova_group def _convert_to_nova_security_group_rule_format(rule): nova_rule = {} nova_rule['id'] = rule['id'] nova_rule['parent_group_id'] = rule['security_group_id'] nova_rule['protocol'] = rule['protocol'] if (nova_rule['protocol'] and rule.get('port_range_min') is None and rule.get('port_range_max') is None): if rule['protocol'].upper() in ['TCP', 'UDP']: nova_rule['from_port'] = 1 nova_rule['to_port'] = 65535 else: nova_rule['from_port'] = -1 nova_rule['to_port'] = -1 else: nova_rule['from_port'] 
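# NOTE: a standalone sketch of the duplicate-rule check in _rule_exists()
# above: two rules are considered the same when all of the compared keys
# match, which the original expresses with Python's for/else construct.
# Plain dicts are used here purely for illustration.
def _sketch_rule_exists(existing_rules, new_rule):
    keys = ('group_id', 'cidr', 'from_port', 'to_port', 'protocol')
    for rule in existing_rules:
        if all(rule.get(key) == new_rule.get(key) for key in keys):
            return rule.get('id') or True
    return False


# Example: the same CIDR/port/protocol combination is reported as existing.
_sketch_rule_exists(
    [{'id': 'r1', 'cidr': '0.0.0.0/0', 'from_port': 22, 'to_port': 22,
      'protocol': 'tcp'}],
    {'cidr': '0.0.0.0/0', 'from_port': 22, 'to_port': 22, 'protocol': 'tcp'})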
= rule.get('port_range_min') nova_rule['to_port'] = rule.get('port_range_max') nova_rule['group_id'] = rule['remote_group_id'] nova_rule['cidr'] = parse_cidr(rule.get('remote_ip_prefix')) return nova_rule def get(context, id): neutron = neutronapi.get_client(context) try: group = neutron.show_security_group(id).get('security_group') return _convert_to_nova_security_group_format(group) except n_exc.NeutronClientException as e: exc_info = sys.exc_info() if e.status_code == 404: LOG.debug('Neutron security group %s not found', id) raise exception.SecurityGroupNotFound(six.text_type(e)) else: LOG.error("Neutron Error: %s", e) six.reraise(*exc_info) def list(context, project, search_opts=None): """Returns list of security group rules owned by tenant.""" neutron = neutronapi.get_client(context) params = {} search_opts = search_opts if search_opts else {} # NOTE(jeffrey4l): list all the security groups when following # conditions are met # * names and ids don't exist. # * it is admin context and all_tenants exist in search_opts. # * project is not specified. list_all_tenants = (context.is_admin and 'all_tenants' in search_opts) # NOTE(jeffrey4l): neutron doesn't have `all-tenants` concept. # All the security group will be returned if the project/tenant # id is not passed. if not list_all_tenants: params['tenant_id'] = project try: security_groups = neutron.list_security_groups(**params).get( 'security_groups') except n_exc.NeutronClientException: with excutils.save_and_reraise_exception(): LOG.exception("Neutron Error getting security groups") converted_rules = [] for security_group in security_groups: converted_rules.append( _convert_to_nova_security_group_format(security_group)) return converted_rules def destroy(context, security_group): """This function deletes a security group.""" neutron = neutronapi.get_client(context) try: neutron.delete_security_group(security_group['id']) except n_exc.NeutronClientException as e: exc_info = sys.exc_info() if e.status_code == 404: raise exception.SecurityGroupNotFound(six.text_type(e)) elif e.status_code == 409: raise exception.Invalid(six.text_type(e)) else: LOG.error("Neutron Error: %s", e) six.reraise(*exc_info) def add_rules(context, id, name, vals): """Add security group rule(s) to security group. Note: the Nova security group API doesn't support adding multiple security group rules at once but the EC2 one does. Therefore, this function is written to support both. Multiple rules are installed to a security group in neutron using bulk support. 
""" neutron = neutronapi.get_client(context) body = _make_neutron_security_group_rules_list(vals) try: rules = neutron.create_security_group_rule( body).get('security_group_rules') except n_exc.NeutronClientException as e: exc_info = sys.exc_info() if e.status_code == 404: LOG.exception("Neutron Error getting security group %s", name) raise exception.SecurityGroupNotFound(six.text_type(e)) elif e.status_code == 409: LOG.exception("Neutron Error adding rules to security " "group %s", name) raise exception.SecurityGroupLimitExceeded(six.text_type(e)) elif e.status_code == 400: LOG.exception("Neutron Error: %s", e) raise exception.Invalid(six.text_type(e)) else: six.reraise(*exc_info) converted_rules = [] for rule in rules: converted_rules.append( _convert_to_nova_security_group_rule_format(rule)) return converted_rules def _make_neutron_security_group_dict(name, description): return {'security_group': {'name': name, 'description': description}} def _make_neutron_security_group_rules_list(rules): new_rules = [] for rule in rules: new_rule = {} # nova only supports ingress rules so all rules are ingress. new_rule['direction'] = "ingress" new_rule['protocol'] = rule.get('protocol') # FIXME(arosen) Nova does not expose ethertype on security group # rules. Therefore, in the case of self referential rules we # should probably assume they want to allow both IPv4 and IPv6. # Unfortunately, this would require adding two rules in neutron. # The reason we do not do this is because when the user using the # nova api wants to remove the rule we'd have to have some way to # know that we should delete both of these rules in neutron. # For now, self referential rules only support IPv4. if not rule.get('cidr'): new_rule['ethertype'] = 'IPv4' else: version = netaddr.IPNetwork(rule.get('cidr')).version new_rule['ethertype'] = 'IPv6' if version == 6 else 'IPv4' new_rule['remote_ip_prefix'] = rule.get('cidr') new_rule['security_group_id'] = rule.get('parent_group_id') new_rule['remote_group_id'] = rule.get('group_id') if 'from_port' in rule and rule['from_port'] != -1: new_rule['port_range_min'] = rule['from_port'] if 'to_port' in rule and rule['to_port'] != -1: new_rule['port_range_max'] = rule['to_port'] new_rules.append(new_rule) return {'security_group_rules': new_rules} def remove_rules(context, security_group, rule_ids): neutron = neutronapi.get_client(context) rule_ids = set(rule_ids) try: # The ec2 api allows one to delete multiple security group rules # at once. Since there is no bulk delete for neutron the best # thing we can do is delete the rules one by one and hope this # works.... 
:/ for rule_id in range(0, len(rule_ids)): neutron.delete_security_group_rule(rule_ids.pop()) except n_exc.NeutronClientException: with excutils.save_and_reraise_exception(): LOG.exception("Neutron Error unable to delete %s", rule_ids) def get_rule(context, id): neutron = neutronapi.get_client(context) try: rule = neutron.show_security_group_rule( id).get('security_group_rule') except n_exc.NeutronClientException as e: exc_info = sys.exc_info() if e.status_code == 404: LOG.debug("Neutron security group rule %s not found", id) raise exception.SecurityGroupNotFound(six.text_type(e)) else: LOG.error("Neutron Error: %s", e) six.reraise(*exc_info) return _convert_to_nova_security_group_rule_format(rule) def _get_ports_from_server_list(servers, neutron): """Returns a list of ports used by the servers.""" def _chunk_by_ids(servers, limit): ids = [] for server in servers: ids.append(server['id']) if len(ids) >= limit: yield ids ids = [] if ids: yield ids # Note: Have to split the query up as the search criteria # form part of the URL, which has a fixed max size ports = [] for ids in _chunk_by_ids(servers, MAX_SEARCH_IDS): search_opts = {'device_id': ids} try: ports.extend(neutron.list_ports(**search_opts).get('ports')) except n_exc.PortNotFoundClient: # There could be a race between deleting an instance and # retrieving its port groups from Neutron. In this case # PortNotFoundClient is raised and it can be safely ignored LOG.debug("Port not found for device with id %s", ids) return ports def _get_secgroups_from_port_list(ports, neutron, fields=None): """Returns a dict of security groups keyed by their ids.""" def _chunk_by_ids(sg_ids, limit): sg_id_list = [] for sg_id in sg_ids: sg_id_list.append(sg_id) if len(sg_id_list) >= limit: yield sg_id_list sg_id_list = [] if sg_id_list: yield sg_id_list # Find the set of unique SecGroup IDs to search for sg_ids = set() for port in ports: sg_ids.update(port.get('security_groups', [])) # Note: Have to split the query up as the search criteria # form part of the URL, which has a fixed max size security_groups = {} for sg_id_list in _chunk_by_ids(sg_ids, MAX_SEARCH_IDS): sg_search_opts = {'id': sg_id_list} if fields: sg_search_opts['fields'] = fields search_results = neutron.list_security_groups(**sg_search_opts) for sg in search_results.get('security_groups'): security_groups[sg['id']] = sg return security_groups def get_instances_security_groups_bindings(context, servers, detailed=False): """Returns a dict(instance_id, [security_groups]) to allow obtaining all of the instances and their security groups in one shot. If detailed is False only the security group name is returned. """ neutron = neutronapi.get_client(context) ports = _get_ports_from_server_list(servers, neutron) # If detailed is True, we want all fields from the security groups # including the potentially slow-to-join security_group_rules field. # But if detailed is False, only get the id and name fields since # that's all we'll use below. 
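# NOTE: a small standalone version of the chunking idiom used above: because
# the Neutron client builds search criteria into the request URL (which has
# a fixed maximum length), ID lists are split into batches of at most
# MAX_SEARCH_IDS before querying. The generator below mirrors the shape of
# _chunk_by_ids() and is illustrative only.
def _sketch_chunk_ids(ids, limit):
    batch = []
    for item_id in ids:
        batch.append(item_id)
        if len(batch) >= limit:
            yield batch
            batch = []
    if batch:
        yield batch


# Example: seven IDs with a limit of three yield batches of 3, 3 and 1.
list(_sketch_chunk_ids(['i%d' % i for i in range(7)], 3))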
fields = None if detailed else ['id', 'name'] security_groups = _get_secgroups_from_port_list( ports, neutron, fields=fields) instances_security_group_bindings = {} for port in ports: for port_sg_id in port.get('security_groups', []): # Note: have to check we found port_sg as its possible # the port has an SG that this user doesn't have access to port_sg = security_groups.get(port_sg_id) if port_sg: if detailed: sg_entry = _convert_to_nova_security_group_format( port_sg) instances_security_group_bindings.setdefault( port['device_id'], []).append(sg_entry) else: # name is optional in neutron so if not specified # return id name = port_sg.get('name') if not name: name = port_sg.get('id') sg_entry = {'name': name} instances_security_group_bindings.setdefault( port['device_id'], []).append(sg_entry) return instances_security_group_bindings def get_instance_security_groups(context, instance, detailed=False): """Returns the security groups that are associated with an instance. If detailed is True then it also returns the full details of the security groups associated with an instance, otherwise just the security group name. """ servers = [{'id': instance.uuid}] sg_bindings = get_instances_security_groups_bindings( context, servers, detailed) return sg_bindings.get(instance.uuid, []) def _has_security_group_requirements(port): port_security_enabled = port.get('port_security_enabled', True) has_ip = port.get('fixed_ips') deferred_ip = port.get('ip_allocation') == 'deferred' if has_ip or deferred_ip: return port_security_enabled return False def add_to_instance(context, instance, security_group_name): """Add security group to the instance.""" neutron = neutronapi.get_client(context) try: security_group_id = neutronv20.find_resourceid_by_name_or_id( neutron, 'security_group', security_group_name, context.project_id) except n_exc.NeutronClientNoUniqueMatch as e: raise exception.NoUniqueMatch(six.text_type(e)) except n_exc.NeutronClientException as e: exc_info = sys.exc_info() if e.status_code == 404: msg = (_("Security group %(name)s is not found for " "project %(project)s") % {'name': security_group_name, 'project': context.project_id}) raise exception.SecurityGroupNotFound(msg) else: six.reraise(*exc_info) params = {'device_id': instance.uuid} try: ports = neutron.list_ports(**params).get('ports') except n_exc.NeutronClientException: with excutils.save_and_reraise_exception(): LOG.exception("Neutron Error:") if not ports: msg = (_("instance_id %s could not be found as device id on" " any ports") % instance.uuid) raise exception.SecurityGroupNotFound(msg) for port in ports: if not _has_security_group_requirements(port): LOG.warning("Cannot add security group %(name)s to " "%(instance)s since the port %(port_id)s " "does not meet security requirements", {'name': security_group_name, 'instance': instance.uuid, 'port_id': port['id']}) raise exception.SecurityGroupCannotBeApplied() if 'security_groups' not in port: port['security_groups'] = [] port['security_groups'].append(security_group_id) updated_port = {'security_groups': port['security_groups']} try: LOG.info("Adding security group %(security_group_id)s to " "port %(port_id)s", {'security_group_id': security_group_id, 'port_id': port['id']}) neutron.update_port(port['id'], {'port': updated_port}) except n_exc.NeutronClientException as e: exc_info = sys.exc_info() if e.status_code == 400: raise exception.SecurityGroupCannotBeApplied( six.text_type(e)) else: six.reraise(*exc_info) except Exception: with excutils.save_and_reraise_exception(): 
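# NOTE: an illustrative reduction of the binding aggregation above: ports
# are grouped by their device_id (the instance UUID) and each reachable
# security group is appended to that instance's list, falling back to the
# group id when Neutron did not return a name. Plain dicts stand in for the
# Neutron payloads.
def _sketch_bindings(ports, security_groups):
    bindings = {}
    for port in ports:
        for sg_id in port.get('security_groups', []):
            sg = security_groups.get(sg_id)
            if not sg:
                # The caller may not have access to this security group.
                continue
            entry = {'name': sg.get('name') or sg.get('id')}
            bindings.setdefault(port['device_id'], []).append(entry)
    return bindings


# Example: one instance with one port bound to a named group.
_sketch_bindings(
    [{'device_id': 'inst-1', 'security_groups': ['sg-1']}],
    {'sg-1': {'id': 'sg-1', 'name': 'default'}})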
LOG.exception("Neutron Error:") def remove_from_instance(context, instance, security_group_name): """Remove the security group associated with the instance.""" neutron = neutronapi.get_client(context) try: security_group_id = neutronv20.find_resourceid_by_name_or_id( neutron, 'security_group', security_group_name, context.project_id) except n_exc.NeutronClientException as e: exc_info = sys.exc_info() if e.status_code == 404: msg = (_("Security group %(name)s is not found for " "project %(project)s") % {'name': security_group_name, 'project': context.project_id}) raise exception.SecurityGroupNotFound(msg) else: six.reraise(*exc_info) params = {'device_id': instance.uuid} try: ports = neutron.list_ports(**params).get('ports') except n_exc.NeutronClientException: with excutils.save_and_reraise_exception(): LOG.exception("Neutron Error:") if not ports: msg = (_("instance_id %s could not be found as device id on" " any ports") % instance.uuid) raise exception.SecurityGroupNotFound(msg) found_security_group = False for port in ports: try: port.get('security_groups', []).remove(security_group_id) except ValueError: # When removing a security group from an instance the security # group should be on both ports since it was added this way if # done through the nova api. In case it is not a 404 is only # raised if the security group is not found on any of the # ports on the instance. continue updated_port = {'security_groups': port['security_groups']} try: LOG.info("Removing security group %(security_group_id)s from " "port %(port_id)s", {'security_group_id': security_group_id, 'port_id': port['id']}) neutron.update_port(port['id'], {'port': updated_port}) found_security_group = True except Exception: with excutils.save_and_reraise_exception(): LOG.exception("Neutron Error:") if not found_security_group: msg = (_("Security group %(security_group_name)s not associated " "with the instance %(instance)s") % {'security_group_name': security_group_name, 'instance': instance.uuid}) raise exception.SecurityGroupNotFound(msg) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3544698 nova-21.2.4/nova/notifications/0000775000175000017500000000000000000000000016402 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/notifications/__init__.py0000664000175000017500000000237100000000000020516 0ustar00zuulzuul00000000000000# Copyright (c) 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# Note(gibi): Importing publicly called functions so the caller code does not # need to be changed after we moved these function inside the package # Todo(gibi): remove these imports after legacy notifications using these are # transformed to versioned notifications from nova.notifications.base import audit_period_bounds # noqa from nova.notifications.base import bandwidth_usage # noqa from nova.notifications.base import image_meta # noqa from nova.notifications.base import info_from_instance # noqa from nova.notifications.base import send_update # noqa from nova.notifications.base import send_update_with_states # noqa ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/notifications/base.py0000664000175000017500000004254100000000000017674 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # Copyright 2013 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Functionality related to notifications common to multiple layers of the system. """ import datetime from keystoneauth1 import exceptions as ks_exc from oslo_log import log from oslo_utils import excutils from oslo_utils import timeutils import nova.conf import nova.context from nova import exception from nova.image import glance from nova.network import model as network_model from nova.network import neutron from nova.notifications.objects import base as notification_base from nova.notifications.objects import instance as instance_notification from nova import objects from nova.objects import base as obj_base from nova.objects import fields from nova import rpc from nova import utils LOG = log.getLogger(__name__) CONF = nova.conf.CONF def send_update(context, old_instance, new_instance, service="compute", host=None): """Send compute.instance.update notification to report any changes occurred in that instance """ if not CONF.notifications.notify_on_state_change: # skip all this if updates are disabled return update_with_state_change = False old_vm_state = old_instance["vm_state"] new_vm_state = new_instance["vm_state"] old_task_state = old_instance["task_state"] new_task_state = new_instance["task_state"] # we should check if we need to send a state change or a regular # notification if old_vm_state != new_vm_state: # yes, the vm state is changing: update_with_state_change = True elif (CONF.notifications.notify_on_state_change == "vm_and_task_state" and old_task_state != new_task_state): # yes, the task state is changing: update_with_state_change = True if update_with_state_change: # send a notification with state changes # value of verify_states need not be True as the check for states is # already done here send_update_with_states(context, new_instance, old_vm_state, new_vm_state, old_task_state, new_task_state, service, host) else: try: old_display_name = None if new_instance["display_name"] != old_instance["display_name"]: old_display_name = old_instance["display_name"] send_instance_update_notification(context, 
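# NOTE: a standalone sketch of the decision made in send_update() above:
# nothing is emitted when state-change notifications are disabled, a changed
# vm_state always triggers a state-change notification, and a changed
# task_state only does so in "vm_and_task_state" mode. The string values
# mirror the notify_on_state_change configuration option; the helper is
# illustrative only.
def _sketch_update_kind(notify_on_state_change, old_vm, new_vm,
                        old_task, new_task):
    if not notify_on_state_change:
        return None
    if old_vm != new_vm:
        return 'state_change'
    if (notify_on_state_change == 'vm_and_task_state' and
            old_task != new_task):
        return 'state_change'
    return 'plain_update'


# Example: only the task state changed, so the two modes behave differently.
_sketch_update_kind('vm_state', 'active', 'active', None, 'rebooting')
_sketch_update_kind('vm_and_task_state', 'active', 'active', None,
                    'rebooting')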
new_instance, service=service, host=host, old_display_name=old_display_name) except exception.InstanceNotFound: LOG.debug('Failed to send instance update notification. The ' 'instance could not be found and was most likely ' 'deleted.', instance=new_instance) except Exception: LOG.exception("Failed to send state update notification", instance=new_instance) def send_update_with_states(context, instance, old_vm_state, new_vm_state, old_task_state, new_task_state, service="compute", host=None, verify_states=False): """Send compute.instance.update notification to report changes if there are any, in the instance """ if not CONF.notifications.notify_on_state_change: # skip all this if updates are disabled return fire_update = True # send update notification by default if verify_states: # check whether we need to send notification related to state changes fire_update = False # do not send notification if the conditions for vm and(or) task state # are not satisfied if old_vm_state != new_vm_state: # yes, the vm state is changing: fire_update = True elif (CONF.notifications.notify_on_state_change == "vm_and_task_state" and old_task_state != new_task_state): # yes, the task state is changing: fire_update = True if fire_update: # send either a state change or a regular notification try: send_instance_update_notification(context, instance, old_vm_state=old_vm_state, old_task_state=old_task_state, new_vm_state=new_vm_state, new_task_state=new_task_state, service=service, host=host) except exception.InstanceNotFound: LOG.debug('Failed to send instance update notification. The ' 'instance could not be found and was most likely ' 'deleted.', instance=instance) except Exception: LOG.exception("Failed to send state update notification", instance=instance) def _compute_states_payload(instance, old_vm_state=None, old_task_state=None, new_vm_state=None, new_task_state=None): # If the states were not specified we assume the current instance # states are the correct information. This is important to do for # both old and new states because otherwise we create some really # confusing notifications like: # # None(None) => Building(none) # # When we really were just continuing to build if new_vm_state is None: new_vm_state = instance["vm_state"] if new_task_state is None: new_task_state = instance["task_state"] if old_vm_state is None: old_vm_state = instance["vm_state"] if old_task_state is None: old_task_state = instance["task_state"] states_payload = { "old_state": old_vm_state, "state": new_vm_state, "old_task_state": old_task_state, "new_task_state": new_task_state, } return states_payload def send_instance_update_notification(context, instance, old_vm_state=None, old_task_state=None, new_vm_state=None, new_task_state=None, service="compute", host=None, old_display_name=None): """Send 'compute.instance.update' notification to inform observers about instance state changes. """ # NOTE(gibi): The image_ref_url is only used in unversioned notifications. # Calling the generate_image_url() could be costly as it calls # the Keystone API. So only do the call if the actual value will be # used. 
populate_image_ref_url = (CONF.notifications.notification_format in ('both', 'unversioned')) payload = info_from_instance(context, instance, None, populate_image_ref_url=populate_image_ref_url) # determine how we'll report states payload.update( _compute_states_payload( instance, old_vm_state, old_task_state, new_vm_state, new_task_state)) # add audit fields: (audit_start, audit_end) = audit_period_bounds(current_period=True) payload["audit_period_beginning"] = null_safe_isotime(audit_start) payload["audit_period_ending"] = null_safe_isotime(audit_end) # add bw usage info: bw = bandwidth_usage(context, instance, audit_start) payload["bandwidth"] = bw # add old display name if it is changed if old_display_name: payload["old_display_name"] = old_display_name rpc.get_notifier(service, host).info(context, 'compute.instance.update', payload) _send_versioned_instance_update(context, instance, payload, host, service) @rpc.if_notifications_enabled def _send_versioned_instance_update(context, instance, payload, host, service): def _map_legacy_service_to_source(legacy_service): if not legacy_service.startswith('nova-'): return 'nova-' + service else: return service state_update = instance_notification.InstanceStateUpdatePayload( old_state=payload.get('old_state'), state=payload.get('state'), old_task_state=payload.get('old_task_state'), new_task_state=payload.get('new_task_state')) audit_period = instance_notification.AuditPeriodPayload( audit_period_beginning=payload.get('audit_period_beginning'), audit_period_ending=payload.get('audit_period_ending')) bandwidth = [instance_notification.BandwidthPayload( network_name=label, in_bytes=bw['bw_in'], out_bytes=bw['bw_out']) for label, bw in payload['bandwidth'].items()] versioned_payload = instance_notification.InstanceUpdatePayload( context=context, instance=instance, state_update=state_update, audit_period=audit_period, bandwidth=bandwidth, old_display_name=payload.get('old_display_name')) notification = instance_notification.InstanceUpdateNotification( priority=fields.NotificationPriority.INFO, event_type=notification_base.EventType( object='instance', action=fields.NotificationAction.UPDATE), publisher=notification_base.NotificationPublisher( host=host or CONF.host, source=_map_legacy_service_to_source(service)), payload=versioned_payload) notification.emit(context) def audit_period_bounds(current_period=False): """Get the start and end of the relevant audit usage period :param current_period: if True, this will generate a usage for the current usage period; if False, this will generate a usage for the previous audit period. """ begin, end = utils.last_completed_audit_period() if current_period: audit_start = end audit_end = timeutils.utcnow() else: audit_start = begin audit_end = end return (audit_start, audit_end) def bandwidth_usage(context, instance_ref, audit_start, ignore_missing_network_data=True): """Get bandwidth usage information for the instance for the specified audit period. 
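# NOTE: a minimal illustration of how the legacy bandwidth dict (keyed by
# network label) is flattened into per-network entries for the versioned
# payload above. Plain dicts stand in for
# instance_notification.BandwidthPayload objects.
def _sketch_bandwidth_entries(bandwidth):
    return [{'network_name': label,
             'in_bytes': usage['bw_in'],
             'out_bytes': usage['bw_out']}
            for label, usage in bandwidth.items()]


# Example conversion of a single-network usage record.
_sketch_bandwidth_entries({'private': {'bw_in': 1024, 'bw_out': 2048}})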
""" admin_context = context.elevated(read_deleted='yes') def _get_nwinfo_old_skool(): """Support for getting network info without objects.""" if (instance_ref.get('info_cache') and instance_ref['info_cache'].get('network_info') is not None): cached_info = instance_ref['info_cache']['network_info'] if isinstance(cached_info, network_model.NetworkInfo): return cached_info return network_model.NetworkInfo.hydrate(cached_info) try: return neutron.API().get_instance_nw_info(admin_context, instance_ref) except Exception: try: with excutils.save_and_reraise_exception(): LOG.exception('Failed to get nw_info', instance=instance_ref) except Exception: if ignore_missing_network_data: return raise # FIXME(comstud): Temporary as we transition to objects. if isinstance(instance_ref, obj_base.NovaObject): nw_info = instance_ref.info_cache.network_info if nw_info is None: nw_info = network_model.NetworkInfo() else: nw_info = _get_nwinfo_old_skool() macs = [vif['address'] for vif in nw_info] uuids = [instance_ref["uuid"]] bw_usages = objects.BandwidthUsageList.get_by_uuids(admin_context, uuids, audit_start) bw = {} for b in bw_usages: if b.mac in macs: label = 'net-name-not-found-%s' % b.mac for vif in nw_info: if vif['address'] == b.mac: label = vif['network']['label'] break bw[label] = dict(bw_in=b.bw_in, bw_out=b.bw_out) return bw def image_meta(system_metadata): """Format image metadata for use in notifications from the instance system metadata. """ image_meta = {} for md_key, md_value in system_metadata.items(): if md_key.startswith('image_'): image_meta[md_key[6:]] = md_value return image_meta def null_safe_str(s): return str(s) if s else '' def null_safe_isotime(s): if isinstance(s, datetime.datetime): return utils.strtime(s) else: return str(s) if s else '' def info_from_instance(context, instance, network_info, populate_image_ref_url=False, **kw): """Get detailed instance information for an instance which is common to all notifications. :param:instance: nova.objects.Instance :param:network_info: network_info provided if not None :param:populate_image_ref_url: If True then the full URL of the image of the instance is generated and returned. This, depending on the configuration, might mean a call to Keystone. If false, None value is returned in the dict at the image_ref_url key. """ image_ref_url = None if populate_image_ref_url: try: # NOTE(mriedem): We can eventually drop this when we no longer # support legacy notifications since versioned notifications don't # use this. image_ref_url = glance.API().generate_image_url( instance.image_ref, context) except ks_exc.EndpointNotFound: # We might be running from a periodic task with no auth token and # CONF.glance.api_servers isn't set, so we can't get the image API # endpoint URL from the service catalog, therefore just use the # image id for the URL (yes it's a lie, but it's best effort at # this point). 
with excutils.save_and_reraise_exception() as exc_ctx: if context.auth_token is None: image_ref_url = instance.image_ref exc_ctx.reraise = False instance_type = instance.get_flavor() instance_type_name = instance_type.get('name', '') instance_flavorid = instance_type.get('flavorid', '') instance_info = dict( # Owner properties tenant_id=instance.project_id, user_id=instance.user_id, # Identity properties instance_id=instance.uuid, display_name=instance.display_name, reservation_id=instance.reservation_id, hostname=instance.hostname, # Type properties instance_type=instance_type_name, instance_type_id=instance.instance_type_id, instance_flavor_id=instance_flavorid, architecture=instance.architecture, # Capacity properties memory_mb=instance.flavor.memory_mb, disk_gb=instance.flavor.root_gb + instance.flavor.ephemeral_gb, vcpus=instance.flavor.vcpus, # Note(dhellmann): This makes the disk_gb value redundant, but # we are keeping it for backwards-compatibility with existing # users of notifications. root_gb=instance.flavor.root_gb, ephemeral_gb=instance.flavor.ephemeral_gb, # Location properties host=instance.host, node=instance.node, availability_zone=instance.availability_zone, cell_name=null_safe_str(instance.cell_name), # Date properties created_at=str(instance.created_at), # Terminated and Deleted are slightly different (although being # terminated and not deleted is a transient state), so include # both and let the recipient decide which they want to use. terminated_at=null_safe_isotime(instance.get('terminated_at', None)), deleted_at=null_safe_isotime(instance.get('deleted_at', None)), launched_at=null_safe_isotime(instance.get('launched_at', None)), # Image properties image_ref_url=image_ref_url, os_type=instance.os_type, kernel_id=instance.kernel_id, ramdisk_id=instance.ramdisk_id, # Status properties state=instance.vm_state, state_description=null_safe_str(instance.task_state), # NOTE(gibi): It might seems wrong to default the progress to an empty # string but this is how legacy work and this code only used by the # legacy notification so try to keep the compatibility here but also # keep it contained. 
progress=int(instance.progress) if instance.progress else '', # accessIPs access_ip_v4=instance.access_ip_v4, access_ip_v6=instance.access_ip_v6, ) if network_info is not None: fixed_ips = [] for vif in network_info: for ip in vif.fixed_ips(): ip["label"] = vif["network"]["label"] ip["vif_mac"] = vif["address"] fixed_ips.append(ip) instance_info['fixed_ips'] = fixed_ips # add image metadata image_meta_props = image_meta(instance.system_metadata) instance_info["image_meta"] = image_meta_props # add instance metadata instance_info['metadata'] = instance.metadata instance_info.update(kw) return instance_info ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3584697 nova-21.2.4/nova/notifications/objects/0000775000175000017500000000000000000000000020033 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/notifications/objects/__init__.py0000664000175000017500000000000000000000000022132 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/notifications/objects/aggregate.py0000664000175000017500000001013200000000000022330 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
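# NOTE: the following is an illustrative sketch (not part of nova) of how the
# aggregate payload and notification classes defined in this module compose
# into an emitted versioned notification. The real caller lives in
# nova.compute.utils (notify_about_aggregate_action); the function name and
# variables used here are hypothetical.
def _example_emit_aggregate_create_start(context, aggregate):
    import nova.conf
    from nova.notifications.objects import aggregate as aggregate_notification
    from nova.notifications.objects import base as notification_base
    from nova.objects import fields

    # Populate the payload from the Aggregate object via its SCHEMA mapping.
    payload = aggregate_notification.AggregatePayload(aggregate)
    notification = aggregate_notification.AggregateNotification(
        priority=fields.NotificationPriority.INFO,
        publisher=notification_base.NotificationPublisher(
            host=nova.conf.CONF.host, source=fields.NotificationSource.API),
        event_type=notification_base.EventType(
            object='aggregate',
            action=fields.NotificationAction.CREATE,
            phase=fields.NotificationPhase.START),
        payload=payload)
    # emit() serializes the payload and sends an 'aggregate.create.start'
    # versioned notification on the notification bus.
    notification.emit(context)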
from nova.notifications.objects import base from nova.objects import base as nova_base from nova.objects import fields @nova_base.NovaObjectRegistry.register_notification class AggregatePayload(base.NotificationPayloadBase): SCHEMA = { 'id': ('aggregate', 'id'), 'uuid': ('aggregate', 'uuid'), 'name': ('aggregate', 'name'), 'hosts': ('aggregate', 'hosts'), 'metadata': ('aggregate', 'metadata'), } # Version 1.0: Initial version # 1.1: Making the id field nullable VERSION = '1.1' fields = { # NOTE(gibi): id is nullable as aggregate.create.start is sent before # the id is generated by the db 'id': fields.IntegerField(nullable=True), 'uuid': fields.UUIDField(nullable=False), 'name': fields.StringField(), 'hosts': fields.ListOfStringsField(nullable=True), 'metadata': fields.DictOfStringsField(nullable=True), } def __init__(self, aggregate): super(AggregatePayload, self).__init__() self.populate_schema(aggregate=aggregate) @base.notification_sample('aggregate-create-start.json') @base.notification_sample('aggregate-create-end.json') @base.notification_sample('aggregate-delete-start.json') @base.notification_sample('aggregate-delete-end.json') @base.notification_sample('aggregate-add_host-start.json') @base.notification_sample('aggregate-add_host-end.json') @base.notification_sample('aggregate-remove_host-start.json') @base.notification_sample('aggregate-remove_host-end.json') @base.notification_sample('aggregate-update_metadata-start.json') @base.notification_sample('aggregate-update_metadata-end.json') @base.notification_sample('aggregate-update_prop-start.json') @base.notification_sample('aggregate-update_prop-end.json') @base.notification_sample('aggregate-cache_images-start.json') @base.notification_sample('aggregate-cache_images-end.json') @nova_base.NovaObjectRegistry.register_notification class AggregateNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('AggregatePayload') } @nova_base.NovaObjectRegistry.register_notification class AggregateCachePayload(base.NotificationPayloadBase): SCHEMA = { 'id': ('aggregate', 'id'), 'uuid': ('aggregate', 'uuid'), 'name': ('aggregate', 'name'), } # Version 1.0: Initial version VERSION = '1.0' fields = { 'id': fields.IntegerField(), 'uuid': fields.UUIDField(), 'name': fields.StringField(), # The host that we just worked 'host': fields.StringField(), # The images that were downloaded or are already there 'images_cached': fields.ListOfStringsField(), # The images that are unable to be cached for some reason 'images_failed': fields.ListOfStringsField(), # The N/M progress information for this operation 'index': fields.IntegerField(), 'total': fields.IntegerField(), } def __init__(self, aggregate, host, index, total): super(AggregateCachePayload, self).__init__() self.populate_schema(aggregate=aggregate) self.host = host self.index = index self.total = total @base.notification_sample('aggregate-cache_images-progress.json') @nova_base.NovaObjectRegistry.register_notification class AggregateCacheNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('AggregateCachePayload'), } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/notifications/objects/base.py0000664000175000017500000002465100000000000021327 0ustar00zuulzuul00000000000000# All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from oslo_utils import excutils from oslo_versionedobjects import exception as ovo_exception from nova import exception from nova.objects import base from nova.objects import fields from nova import rpc LOG = logging.getLogger(__name__) @base.NovaObjectRegistry.register_if(False) class NotificationObject(base.NovaObject): """Base class for every notification related versioned object.""" # Version 1.0: Initial version VERSION = '1.0' def __init__(self, **kwargs): super(NotificationObject, self).__init__(**kwargs) # The notification objects are created on the fly when nova emits the # notification. This causes that every object shows every field as # changed. We don't want to send this meaningless information so we # reset the object after creation. self.obj_reset_changes(recursive=False) @base.NovaObjectRegistry.register_notification class EventType(NotificationObject): # Version 1.0: Initial version # Version 1.1: New valid actions values are added to the # NotificationActionField enum # Version 1.2: DELETE value is added to the NotificationActionField enum # Version 1.3: Set of new values are added to NotificationActionField enum # Version 1.4: Another set of new values are added to # NotificationActionField enum # Version 1.5: Aggregate related values have been added to # NotificationActionField enum # Version 1.6: ADD_FIX_IP replaced with INTERFACE_ATTACH in # NotificationActionField enum # Version 1.7: REMOVE_FIXED_IP replaced with INTERFACE_DETACH in # NotificationActionField enum # Version 1.8: IMPORT value is added to NotificationActionField enum # Version 1.9: ADD_MEMBER value is added to NotificationActionField enum # Version 1.10: UPDATE_METADATA value is added to the # NotificationActionField enum # Version 1.11: LOCK is added to NotificationActionField enum # Version 1.12: UNLOCK is added to NotificationActionField enum # Version 1.13: REBUILD_SCHEDULED value is added to the # NotificationActionField enum # Version 1.14: UPDATE_PROP value is added to the NotificationActionField # enum # Version 1.15: LIVE_MIGRATION_FORCE_COMPLETE is added to the # NotificationActionField enum # Version 1.16: CONNECT is added to NotificationActionField enum # Version 1.17: USAGE is added to NotificationActionField enum # Version 1.18: ComputeTask related values have been added to # NotificationActionField enum # Version 1.19: SELECT_DESTINATIONS is added to the NotificationActionField # enum # Version 1.20: IMAGE_CACHE is added to the NotificationActionField enum # Version 1.21: PROGRESS added to NotificationPhase enum VERSION = '1.21' fields = { 'object': fields.StringField(nullable=False), 'action': fields.NotificationActionField(nullable=False), 'phase': fields.NotificationPhaseField(nullable=True), } def __init__(self, object, action, phase=None): super(EventType, self).__init__() self.object = object self.action = action self.phase = phase def to_notification_event_type_field(self): """Serialize the object to the wire 
format.""" s = '%s.%s' % (self.object, self.action) if self.phase: s += '.%s' % self.phase return s @base.NovaObjectRegistry.register_if(False) class NotificationPayloadBase(NotificationObject): """Base class for the payload of versioned notifications.""" # SCHEMA defines how to populate the payload fields. It is a dictionary # where every key value pair has the following format: # : (, # ) # The is the name where the data will be stored in the # payload object, this field has to be defined as a field of the payload. # The shall refer to name of the parameter passed as # kwarg to the payload's populate_schema() call and this object will be # used as the source of the data. The shall be # a valid field of the passed argument. # The SCHEMA needs to be applied with the populate_schema() call before the # notification can be emitted. # The value of the payload. field will be set by the # . field. The # will not be part of the payload object internal or # external representation. # Payload fields that are not set by the SCHEMA can be filled in the same # way as in any versioned object. SCHEMA = {} # Version 1.0: Initial version VERSION = '1.0' def __init__(self): super(NotificationPayloadBase, self).__init__() self.populated = not self.SCHEMA @rpc.if_notifications_enabled def populate_schema(self, set_none=True, **kwargs): """Populate the object based on the SCHEMA and the source objects :param kwargs: A dict contains the source object at the key defined in the SCHEMA """ for key, (obj, field) in self.SCHEMA.items(): source = kwargs[obj] # trigger lazy-load if possible try: setattr(self, key, getattr(source, field)) # ObjectActionError - not lazy loadable field # NotImplementedError - obj_load_attr() is not even defined # OrphanedObjectError - lazy loadable field but context is None except (exception.ObjectActionError, NotImplementedError, exception.OrphanedObjectError, ovo_exception.OrphanedObjectError): if set_none: # If it is unset or non lazy loadable in the source object # then we cannot do anything else but try to default it # in the payload object we are generating here. # NOTE(gibi): This will fail if the payload field is not # nullable, but that means that either the source object # is not properly initialized or the payload field needs # to be defined as nullable setattr(self, key, None) except Exception: with excutils.save_and_reraise_exception(): LOG.error('Failed trying to populate attribute "%s" ' 'using field: %s', key, field) self.populated = True # the schema population will create changed fields but we don't need # this information in the notification self.obj_reset_changes(recursive=True) @base.NovaObjectRegistry.register_notification class NotificationPublisher(NotificationObject): # Version 1.0: Initial version # 2.0: The binary field has been renamed to source # 2.1: The type of the source field changed from string to enum. 
# This only needs a minor bump as the enum uses the possible # values of the previous string field # 2.2: New enum for source fields added VERSION = '2.2' # TODO(stephenfin): Remove 'nova-cells' from 'NotificationSourceField' enum # when bumping this object to version 3.0 fields = { 'host': fields.StringField(nullable=False), 'source': fields.NotificationSourceField(nullable=False), } def __init__(self, host, source): super(NotificationPublisher, self).__init__() self.host = host self.source = source @classmethod def from_service_obj(cls, service): source = fields.NotificationSource.get_source_by_binary(service.binary) return cls(host=service.host, source=source) @base.NovaObjectRegistry.register_if(False) class NotificationBase(NotificationObject): """Base class for versioned notifications. Every subclass shall define a 'payload' field. """ # Version 1.0: Initial version VERSION = '1.0' fields = { 'priority': fields.NotificationPriorityField(), 'event_type': fields.ObjectField('EventType'), 'publisher': fields.ObjectField('NotificationPublisher'), } def _emit(self, context, event_type, publisher_id, payload): notifier = rpc.get_versioned_notifier(publisher_id) notify = getattr(notifier, self.priority) notify(context, event_type=event_type, payload=payload) @rpc.if_notifications_enabled def emit(self, context): """Send the notification.""" assert self.payload.populated # Note(gibi): notification payload will be a newly populated object # therefore every field of it will look changed so this does not carry # any extra information so we drop this from the payload. self.payload.obj_reset_changes(recursive=True) self._emit(context, event_type= self.event_type.to_notification_event_type_field(), publisher_id='%s:%s' % (self.publisher.source, self.publisher.host), payload=self.payload.obj_to_primitive()) def notification_sample(sample): """Class decorator to attach the notification sample information to the notification object for documentation generation purposes. :param sample: the path of the sample json file relative to the doc/notification_samples/ directory in the nova repository root. """ def wrap(cls): if not getattr(cls, 'samples', None): cls.samples = [sample] else: cls.samples.append(sample) return cls return wrap ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/notifications/objects/compute_task.py0000664000175000017500000000421000000000000023100 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.notifications.objects import base from nova.notifications.objects import request_spec as reqspec_payload from nova.objects import base as nova_base from nova.objects import fields @nova_base.NovaObjectRegistry.register_notification class ComputeTaskPayload(base.NotificationPayloadBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'instance_uuid': fields.UUIDField(), # There are some cases that request_spec is None. # e.g. 
Old instances can still have no RequestSpec object # attached to them. 'request_spec': fields.ObjectField('RequestSpecPayload', nullable=True), 'state': fields.InstanceStateField(nullable=True), 'reason': fields.ObjectField('ExceptionPayload') } def __init__(self, instance_uuid, request_spec, state, reason): super(ComputeTaskPayload, self).__init__() self.instance_uuid = instance_uuid self.request_spec = reqspec_payload.RequestSpecPayload( request_spec) if request_spec is not None else None self.state = state self.reason = reason @base.notification_sample('compute_task-build_instances-error.json') @base.notification_sample('compute_task-migrate_server-error.json') @base.notification_sample('compute_task-rebuild_server-error.json') @nova_base.NovaObjectRegistry.register_notification class ComputeTaskNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('ComputeTaskPayload') } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/notifications/objects/exception.py0000664000175000017500000000472000000000000022406 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import inspect import six from nova.notifications.objects import base from nova.objects import base as nova_base from nova.objects import fields @nova_base.NovaObjectRegistry.register_notification class ExceptionPayload(base.NotificationPayloadBase): # Version 1.0: Initial version # Version 1.1: Add traceback field to ExceptionPayload VERSION = '1.1' fields = { 'module_name': fields.StringField(), 'function_name': fields.StringField(), 'exception': fields.StringField(), 'exception_message': fields.StringField(), 'traceback': fields.StringField() } def __init__(self, module_name, function_name, exception, exception_message, traceback): super(ExceptionPayload, self).__init__() self.module_name = module_name self.function_name = function_name self.exception = exception self.exception_message = exception_message self.traceback = traceback @classmethod def from_exc_and_traceback(cls, fault, traceback): trace = inspect.trace()[-1] # TODO(gibi): apply strutils.mask_password on exception_message and # consider emitting the exception_message only if the safe flag is # true in the exception like in the REST API module = inspect.getmodule(trace[0]) module_name = module.__name__ if module else 'unknown' return cls( function_name=trace[3], module_name=module_name, exception=fault.__class__.__name__, exception_message=six.text_type(fault), traceback=traceback) @base.notification_sample('compute-exception.json') @nova_base.NovaObjectRegistry.register_notification class ExceptionNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('ExceptionPayload') } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
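# NOTE: illustrative sketch (not part of nova) of how the ExceptionPayload
# defined above is typically produced from a caught fault; the helper name is
# hypothetical. from_exc_and_traceback() walks inspect.trace(), so it only
# works while an exception is actively being handled (i.e. inside an except
# block).
def _example_build_exception_payload():
    import traceback

    from nova import exception as nova_exception
    from nova.notifications.objects import exception as exception_payload

    try:
        raise nova_exception.NovaException('something went wrong')
    except nova_exception.NovaException as fault:
        return exception_payload.ExceptionPayload.from_exc_and_traceback(
            fault, traceback.format_exc())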
nova-21.2.4/nova/notifications/objects/flavor.py0000664000175000017500000000677700000000000021717 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.notifications.objects import base from nova.objects import base as nova_base from nova.objects import fields @base.notification_sample('flavor-create.json') @base.notification_sample('flavor-update.json') @base.notification_sample('flavor-delete.json') @nova_base.NovaObjectRegistry.register_notification class FlavorNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('FlavorPayload') } @nova_base.NovaObjectRegistry.register_notification class FlavorPayload(base.NotificationPayloadBase): # Version 1.0: Initial version # Version 1.1: Add other fields for Flavor # Version 1.2: Add extra_specs and projects fields # Version 1.3: Make projects and extra_specs field nullable as they are # not always available when a notification is emitted. # Version 1.4: Added description field. VERSION = '1.4' # NOTE: if we'd want to rename some fields(memory_mb->ram, root_gb->disk, # ephemeral_gb: ephemeral), bumping to payload version 2.0 will be needed. SCHEMA = { 'flavorid': ('flavor', 'flavorid'), 'memory_mb': ('flavor', 'memory_mb'), 'vcpus': ('flavor', 'vcpus'), 'root_gb': ('flavor', 'root_gb'), 'ephemeral_gb': ('flavor', 'ephemeral_gb'), 'name': ('flavor', 'name'), 'swap': ('flavor', 'swap'), 'rxtx_factor': ('flavor', 'rxtx_factor'), 'vcpu_weight': ('flavor', 'vcpu_weight'), 'disabled': ('flavor', 'disabled'), 'is_public': ('flavor', 'is_public'), 'extra_specs': ('flavor', 'extra_specs'), 'projects': ('flavor', 'projects'), 'description': ('flavor', 'description') } fields = { 'flavorid': fields.StringField(nullable=True), 'memory_mb': fields.IntegerField(nullable=True), 'vcpus': fields.IntegerField(nullable=True), 'root_gb': fields.IntegerField(nullable=True), 'ephemeral_gb': fields.IntegerField(nullable=True), 'name': fields.StringField(), 'swap': fields.IntegerField(), 'rxtx_factor': fields.FloatField(nullable=True), 'vcpu_weight': fields.IntegerField(nullable=True), 'disabled': fields.BooleanField(), 'is_public': fields.BooleanField(), 'extra_specs': fields.DictOfStringsField(nullable=True), 'projects': fields.ListOfStringsField(nullable=True), 'description': fields.StringField(nullable=True) } def __init__(self, flavor): super(FlavorPayload, self).__init__() if 'projects' not in flavor: # NOTE(danms): If projects is not loaded in the flavor, # don't attempt to load it. If we're in a child cell then # we can't load the real flavor, and if we're a flavor on # an instance then we don't want to anyway. 
flavor = flavor.obj_clone() flavor._context = None self.populate_schema(flavor=flavor) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/notifications/objects/image.py0000664000175000017500000001336200000000000021474 0ustar00zuulzuul00000000000000# Copyright 2018 NTT Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.notifications.objects import base from nova.objects import base as nova_base from nova.objects import fields from nova.objects import image_meta @nova_base.NovaObjectRegistry.register_notification class ImageMetaPayload(base.NotificationPayloadBase): # Version 1.0: Initial version VERSION = '1.0' SCHEMA = { 'id': ('image_meta', 'id'), 'name': ('image_meta', 'name'), 'status': ('image_meta', 'status'), 'visibility': ('image_meta', 'visibility'), 'protected': ('image_meta', 'protected'), 'checksum': ('image_meta', 'checksum'), 'owner': ('image_meta', 'owner'), 'size': ('image_meta', 'size'), 'virtual_size': ('image_meta', 'virtual_size'), 'container_format': ('image_meta', 'container_format'), 'disk_format': ('image_meta', 'disk_format'), 'created_at': ('image_meta', 'created_at'), 'updated_at': ('image_meta', 'updated_at'), 'tags': ('image_meta', 'tags'), 'direct_url': ('image_meta', 'direct_url'), 'min_ram': ('image_meta', 'min_ram'), 'min_disk': ('image_meta', 'min_disk') } # NOTE(takashin): The reason that each field is nullable is as follows. # # a. It is defined as "The value might be null (JSON null data type)." # in the "Show image" API (GET /v2/images/{image_id}) # in the glance API v2 Reference. # (https://docs.openstack.org/api-ref/image/v2/index.html) # # * checksum # * container_format # * disk_format # * min_disk # * min_ram # * name # * owner # * size # * updated_at # * virtual_size # # b. It is optional in the response from glance. # * direct_url # # a. It is defined as nullable in the ImageMeta object. # * created_at # # c. It cannot be got in the boot from volume case. # See VIM_IMAGE_ATTRIBUTES in nova/utils.py. 
# # * id (not 'image_id') # * visibility # * protected # * status # * tags fields = { 'id': fields.UUIDField(nullable=True), 'name': fields.StringField(nullable=True), 'status': fields.StringField(nullable=True), 'visibility': fields.StringField(nullable=True), 'protected': fields.FlexibleBooleanField(nullable=True), 'checksum': fields.StringField(nullable=True), 'owner': fields.StringField(nullable=True), 'size': fields.IntegerField(nullable=True), 'virtual_size': fields.IntegerField(nullable=True), 'container_format': fields.StringField(nullable=True), 'disk_format': fields.StringField(nullable=True), 'created_at': fields.DateTimeField(nullable=True), 'updated_at': fields.DateTimeField(nullable=True), 'tags': fields.ListOfStringsField(nullable=True), 'direct_url': fields.StringField(nullable=True), 'min_ram': fields.IntegerField(nullable=True), 'min_disk': fields.IntegerField(nullable=True), 'properties': fields.ObjectField('ImageMetaPropsPayload') } def __init__(self, image_meta): super(ImageMetaPayload, self).__init__() self.properties = ImageMetaPropsPayload( image_meta_props=image_meta.properties) self.populate_schema(image_meta=image_meta) @nova_base.NovaObjectRegistry.register_notification class ImageMetaPropsPayload(base.NotificationPayloadBase): """Built dynamically from ImageMetaProps. This has the following implications: * When you make a versioned update to ImageMetaProps, you must *also* bump the version of this object, even though you didn't make any explicit changes here. There's an object hash test that should catch this for you. * As currently written, this relies on all of the fields of ImageMetaProps being initialized with no arguments. If you add one with arguments (e.g. ``nullable=True`` or with a ``default``), something needs to change here. """ # Version 1.0: Initial version # Version 1.1: Added 'gop', 'virtio' and 'none' to hw_video_model field # Version 1.2: Added hw_pci_numa_affinity_policy field # Version 1.3: Added hw_mem_encryption, hw_pmu and hw_time_hpet fields VERSION = '1.3' SCHEMA = { k: ('image_meta_props', k) for k in image_meta.ImageMetaProps.fields} # NOTE(efried): This logic currently relies on all of the fields of # ImageMetaProps being initialized with no arguments. See the docstring. # NOTE(efried): It's possible this could just be: # fields = image_meta.ImageMetaProps.fields # But it is not clear that OVO can tolerate the same *instance* of a type # class being used in more than one place. fields = { k: v.__class__() for k, v in image_meta.ImageMetaProps.fields.items()} def __init__(self, image_meta_props): super(ImageMetaPropsPayload, self).__init__() # NOTE(takashin): If fields are not set in the ImageMetaProps object, # it will not set the fields in the ImageMetaPropsPayload # in order to avoid too many fields whose values are None. self.populate_schema(set_none=False, image_meta_props=image_meta_props) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/notifications/objects/instance.py0000664000175000017500000007336400000000000022226 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import nova.conf from nova.notifications.objects import base from nova.notifications.objects import flavor as flavor_payload from nova.notifications.objects import keypair as keypair_payload from nova.objects import base as nova_base from nova.objects import fields CONF = nova.conf.CONF @nova_base.NovaObjectRegistry.register_notification class InstancePayload(base.NotificationPayloadBase): SCHEMA = { 'uuid': ('instance', 'uuid'), 'user_id': ('instance', 'user_id'), 'tenant_id': ('instance', 'project_id'), 'reservation_id': ('instance', 'reservation_id'), 'display_name': ('instance', 'display_name'), 'display_description': ('instance', 'display_description'), 'host_name': ('instance', 'hostname'), 'host': ('instance', 'host'), 'node': ('instance', 'node'), 'os_type': ('instance', 'os_type'), 'architecture': ('instance', 'architecture'), 'availability_zone': ('instance', 'availability_zone'), 'image_uuid': ('instance', 'image_ref'), 'key_name': ('instance', 'key_name'), 'kernel_id': ('instance', 'kernel_id'), 'ramdisk_id': ('instance', 'ramdisk_id'), 'created_at': ('instance', 'created_at'), 'launched_at': ('instance', 'launched_at'), 'terminated_at': ('instance', 'terminated_at'), 'deleted_at': ('instance', 'deleted_at'), 'updated_at': ('instance', 'updated_at'), 'state': ('instance', 'vm_state'), 'power_state': ('instance', 'power_state'), 'task_state': ('instance', 'task_state'), 'progress': ('instance', 'progress'), 'metadata': ('instance', 'metadata'), 'locked': ('instance', 'locked'), 'auto_disk_config': ('instance', 'auto_disk_config') } # Version 1.0: Initial version # Version 1.1: add locked and display_description field # Version 1.2: Add auto_disk_config field # Version 1.3: Add key_name field # Version 1.4: Add BDM related data # Version 1.5: Add updated_at field # Version 1.6: Add request_id field # Version 1.7: Added action_initiator_user and action_initiator_project to # InstancePayload # Version 1.8: Added locked_reason field VERSION = '1.8' fields = { 'uuid': fields.UUIDField(), 'user_id': fields.StringField(nullable=True), 'tenant_id': fields.StringField(nullable=True), 'reservation_id': fields.StringField(nullable=True), 'display_name': fields.StringField(nullable=True), 'display_description': fields.StringField(nullable=True), 'host_name': fields.StringField(nullable=True), 'host': fields.StringField(nullable=True), 'node': fields.StringField(nullable=True), 'os_type': fields.StringField(nullable=True), 'architecture': fields.StringField(nullable=True), 'availability_zone': fields.StringField(nullable=True), 'flavor': fields.ObjectField('FlavorPayload'), 'image_uuid': fields.StringField(nullable=True), 'key_name': fields.StringField(nullable=True), 'kernel_id': fields.StringField(nullable=True), 'ramdisk_id': fields.StringField(nullable=True), 'created_at': fields.DateTimeField(nullable=True), 'launched_at': fields.DateTimeField(nullable=True), 'terminated_at': fields.DateTimeField(nullable=True), 'deleted_at': fields.DateTimeField(nullable=True), 'updated_at': fields.DateTimeField(nullable=True), 'state': fields.InstanceStateField(nullable=True), 'power_state': 
fields.InstancePowerStateField(nullable=True), 'task_state': fields.InstanceTaskStateField(nullable=True), 'progress': fields.IntegerField(nullable=True), 'ip_addresses': fields.ListOfObjectsField('IpPayload'), 'block_devices': fields.ListOfObjectsField('BlockDevicePayload', nullable=True), 'metadata': fields.DictOfStringsField(), 'locked': fields.BooleanField(), 'auto_disk_config': fields.DiskConfigField(), 'request_id': fields.StringField(nullable=True), 'action_initiator_user': fields.StringField(nullable=True), 'action_initiator_project': fields.StringField(nullable=True), 'locked_reason': fields.StringField(nullable=True), } def __init__(self, context, instance, bdms=None): super(InstancePayload, self).__init__() network_info = instance.get_network_info() self.ip_addresses = IpPayload.from_network_info(network_info) self.flavor = flavor_payload.FlavorPayload(flavor=instance.flavor) if bdms is not None: self.block_devices = BlockDevicePayload.from_bdms(bdms) else: self.block_devices = BlockDevicePayload.from_instance(instance) # NOTE(Kevin_Zheng): Don't include request_id for periodic tasks, # RequestContext for periodic tasks does not include project_id # and user_id. Consider modify this once periodic tasks got a # consistent request_id. self.request_id = context.request_id if (context.project_id and context.user_id) else None self.action_initiator_user = context.user_id self.action_initiator_project = context.project_id self.locked_reason = instance.system_metadata.get("locked_reason") self.populate_schema(instance=instance) @nova_base.NovaObjectRegistry.register_notification class InstanceActionPayload(InstancePayload): # No SCHEMA as all the additional fields are calculated # Version 1.1: locked and display_description added to InstancePayload # Version 1.2: Added auto_disk_config field to InstancePayload # Version 1.3: Added key_name field to InstancePayload # Version 1.4: Add BDM related data # Version 1.5: Added updated_at field to InstancePayload # Version 1.6: Added request_id field to InstancePayload # Version 1.7: Added action_initiator_user and action_initiator_project to # InstancePayload # Version 1.8: Added locked_reason field to InstancePayload VERSION = '1.8' fields = { 'fault': fields.ObjectField('ExceptionPayload', nullable=True), 'request_id': fields.StringField(nullable=True), } def __init__(self, context, instance, fault, bdms=None): super(InstanceActionPayload, self).__init__(context=context, instance=instance, bdms=bdms) self.fault = fault @nova_base.NovaObjectRegistry.register_notification class InstanceActionVolumePayload(InstanceActionPayload): # Version 1.0: Initial version # Version 1.1: Added key_name field to InstancePayload # Version 1.2: Add BDM related data # Version 1.3: Added updated_at field to InstancePayload # Version 1.4: Added request_id field to InstancePayload # Version 1.5: Added action_initiator_user and action_initiator_project to # InstancePayload # Version 1.6: Added locked_reason field to InstancePayload VERSION = '1.6' fields = { 'volume_id': fields.UUIDField() } def __init__(self, context, instance, fault, volume_id): super(InstanceActionVolumePayload, self).__init__( context=context, instance=instance, fault=fault) self.volume_id = volume_id @nova_base.NovaObjectRegistry.register_notification class InstanceActionVolumeSwapPayload(InstanceActionPayload): # No SCHEMA as all the additional fields are calculated # Version 1.1: locked and display_description added to InstancePayload # Version 1.2: Added auto_disk_config field to 
InstancePayload # Version 1.3: Added key_name field to InstancePayload # Version 1.4: Add BDM related data # Version 1.5: Added updated_at field to InstancePayload # Version 1.6: Added request_id field to InstancePayload # Version 1.7: Added action_initiator_user and action_initiator_project to # InstancePayload # Version 1.8: Added locked_reason field to InstancePayload VERSION = '1.8' fields = { 'old_volume_id': fields.UUIDField(), 'new_volume_id': fields.UUIDField(), } def __init__(self, context, instance, fault, old_volume_id, new_volume_id): super(InstanceActionVolumeSwapPayload, self).__init__( context=context, instance=instance, fault=fault) self.old_volume_id = old_volume_id self.new_volume_id = new_volume_id @nova_base.NovaObjectRegistry.register_notification class InstanceCreatePayload(InstanceActionPayload): # No SCHEMA as all the additional fields are calculated # Version 1.2: Initial version. It starts at 1.2 to match with the version # of the InstanceActionPayload at the time when this specific # payload is created as a child of it so that the # instance.create notification using this new payload does not # have decreasing version. # 1.3: Add keypairs field # 1.4: Add key_name field to InstancePayload # 1.5: Add BDM related data to InstancePayload # 1.6: Add tags field to InstanceCreatePayload # 1.7: Added updated_at field to InstancePayload # 1.8: Added request_id field to InstancePayload # 1.9: Add trusted_image_certificates field to # InstanceCreatePayload # 1.10: Added action_initiator_user and action_initiator_project to # InstancePayload # 1.11: Added instance_name to InstanceCreatePayload # Version 1.12: Added locked_reason field to InstancePayload VERSION = '1.12' fields = { 'keypairs': fields.ListOfObjectsField('KeypairPayload'), 'tags': fields.ListOfStringsField(), 'trusted_image_certificates': fields.ListOfStringsField( nullable=True), 'instance_name': fields.StringField(nullable=True), } def __init__(self, context, instance, fault, bdms): super(InstanceCreatePayload, self).__init__( context=context, instance=instance, fault=fault, bdms=bdms) self.keypairs = [keypair_payload.KeypairPayload(keypair=keypair) for keypair in instance.keypairs] self.tags = [instance_tag.tag for instance_tag in instance.tags] self.trusted_image_certificates = None if instance.trusted_certs: self.trusted_image_certificates = instance.trusted_certs.ids self.instance_name = instance.name @nova_base.NovaObjectRegistry.register_notification class InstanceActionResizePrepPayload(InstanceActionPayload): # No SCHEMA as all the additional fields are calculated # Version 1.0: Initial version # Version 1.1: Added request_id field to InstancePayload # Version 1.2: Added action_initiator_user and action_initiator_project to # InstancePayload # Version 1.3: Added locked_reason field to InstancePayload VERSION = '1.3' fields = { 'new_flavor': fields.ObjectField('FlavorPayload', nullable=True) } def __init__(self, context, instance, fault, new_flavor): super(InstanceActionResizePrepPayload, self).__init__( context=context, instance=instance, fault=fault) self.new_flavor = new_flavor @nova_base.NovaObjectRegistry.register_notification class InstanceUpdatePayload(InstancePayload): # Version 1.0: Initial version # Version 1.1: locked and display_description added to InstancePayload # Version 1.2: Added tags field # Version 1.3: Added auto_disk_config field to InstancePayload # Version 1.4: Added key_name field to InstancePayload # Version 1.5: Add BDM related data # Version 1.6: Added updated_at field to 
InstancePayload # Version 1.7: Added request_id field to InstancePayload # Version 1.8: Added action_initiator_user and action_initiator_project to # InstancePayload # Version 1.9: Added locked_reason field to InstancePayload VERSION = '1.9' fields = { 'state_update': fields.ObjectField('InstanceStateUpdatePayload'), 'audit_period': fields.ObjectField('AuditPeriodPayload'), 'bandwidth': fields.ListOfObjectsField('BandwidthPayload'), 'old_display_name': fields.StringField(nullable=True), 'tags': fields.ListOfStringsField(), } def __init__(self, context, instance, state_update, audit_period, bandwidth, old_display_name): super(InstanceUpdatePayload, self).__init__( context=context, instance=instance) self.state_update = state_update self.audit_period = audit_period self.bandwidth = bandwidth self.old_display_name = old_display_name self.tags = [instance_tag.tag for instance_tag in instance.tags.objects] @nova_base.NovaObjectRegistry.register_notification class InstanceActionRescuePayload(InstanceActionPayload): # Version 1.0: Initial version # Version 1.1: Added request_id field to InstancePayload # Version 1.2: Added action_initiator_user and action_initiator_project to # InstancePayload # Version 1.3: Added locked_reason field to InstancePayload VERSION = '1.3' fields = { 'rescue_image_ref': fields.UUIDField(nullable=True) } def __init__(self, context, instance, fault, rescue_image_ref): super(InstanceActionRescuePayload, self).__init__( context=context, instance=instance, fault=fault) self.rescue_image_ref = rescue_image_ref @nova_base.NovaObjectRegistry.register_notification class InstanceActionRebuildPayload(InstanceActionPayload): # No SCHEMA as all the additional fields are calculated # Version 1.7: Initial version. It starts at 1.7 to equal one more than # the version of the InstanceActionPayload at the time # when this specific payload is created so that the # instance.rebuild.* notifications using this new payload # signal the change of nova_object.name. # Version 1.8: Added action_initiator_user and action_initiator_project to # InstancePayload # Version 1.9: Added locked_reason field to InstancePayload VERSION = '1.9' fields = { 'trusted_image_certificates': fields.ListOfStringsField( nullable=True) } def __init__(self, context, instance, fault, bdms=None): super(InstanceActionRebuildPayload, self).__init__( context=context, instance=instance, fault=fault, bdms=bdms) self.trusted_image_certificates = None if instance.trusted_certs: self.trusted_image_certificates = instance.trusted_certs.ids @nova_base.NovaObjectRegistry.register_notification class IpPayload(base.NotificationPayloadBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'label': fields.StringField(), 'mac': fields.MACAddressField(), 'meta': fields.DictOfStringsField(), 'port_uuid': fields.UUIDField(nullable=True), 'version': fields.IntegerField(), 'address': fields.IPV4AndV6AddressField(), 'device_name': fields.StringField(nullable=True) } def __init__(self, label, mac, meta, port_uuid, version, address, device_name): super(IpPayload, self).__init__() self.label = label self.mac = mac self.meta = meta self.port_uuid = port_uuid self.version = version self.address = address self.device_name = device_name @classmethod def from_network_info(cls, network_info): """Returns a list of IpPayload object based on the passed network_info. 
""" ips = [] if network_info is not None: for vif in network_info: for ip in vif.fixed_ips(): ips.append(cls( label=vif["network"]["label"], mac=vif["address"], meta=vif["meta"], port_uuid=vif["id"], version=ip["version"], address=ip["address"], device_name=vif["devname"])) return ips @nova_base.NovaObjectRegistry.register_notification class BandwidthPayload(base.NotificationPayloadBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'network_name': fields.StringField(), 'in_bytes': fields.IntegerField(), 'out_bytes': fields.IntegerField(), } def __init__(self, network_name, in_bytes, out_bytes): super(BandwidthPayload, self).__init__() self.network_name = network_name self.in_bytes = in_bytes self.out_bytes = out_bytes @nova_base.NovaObjectRegistry.register_notification class AuditPeriodPayload(base.NotificationPayloadBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'audit_period_beginning': fields.DateTimeField(), 'audit_period_ending': fields.DateTimeField(), } def __init__(self, audit_period_beginning, audit_period_ending): super(AuditPeriodPayload, self).__init__() self.audit_period_beginning = audit_period_beginning self.audit_period_ending = audit_period_ending @nova_base.NovaObjectRegistry.register_notification class BlockDevicePayload(base.NotificationPayloadBase): # Version 1.0: Initial version VERSION = '1.0' SCHEMA = { 'device_name': ('bdm', 'device_name'), 'boot_index': ('bdm', 'boot_index'), 'delete_on_termination': ('bdm', 'delete_on_termination'), 'volume_id': ('bdm', 'volume_id'), 'tag': ('bdm', 'tag') } fields = { 'device_name': fields.StringField(nullable=True), 'boot_index': fields.IntegerField(nullable=True), 'delete_on_termination': fields.BooleanField(default=False), 'volume_id': fields.UUIDField(), 'tag': fields.StringField(nullable=True) } def __init__(self, bdm): super(BlockDevicePayload, self).__init__() self.populate_schema(bdm=bdm) @classmethod def from_instance(cls, instance): """Returns a list of BlockDevicePayload objects based on the passed bdms. """ if not CONF.notifications.bdms_in_notifications: return None instance_bdms = instance.get_bdms() if instance_bdms is not None: return cls.from_bdms(instance_bdms) else: return [] @classmethod def from_bdms(cls, bdms): """Returns a list of BlockDevicePayload objects based on the passed BlockDeviceMappingList. 
""" payloads = [] for bdm in bdms: if bdm.volume_id is not None: payloads.append(cls(bdm)) return payloads @nova_base.NovaObjectRegistry.register_notification class InstanceStateUpdatePayload(base.NotificationPayloadBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'old_state': fields.StringField(nullable=True), 'state': fields.StringField(nullable=True), 'old_task_state': fields.StringField(nullable=True), 'new_task_state': fields.StringField(nullable=True), } def __init__(self, old_state, state, old_task_state, new_task_state): super(InstanceStateUpdatePayload, self).__init__() self.old_state = old_state self.state = state self.old_task_state = old_task_state self.new_task_state = new_task_state @base.notification_sample('instance-delete-start.json') @base.notification_sample('instance-delete-end.json') @base.notification_sample('instance-pause-start.json') @base.notification_sample('instance-pause-end.json') @base.notification_sample('instance-unpause-start.json') @base.notification_sample('instance-unpause-end.json') @base.notification_sample('instance-resize-start.json') @base.notification_sample('instance-resize-end.json') @base.notification_sample('instance-resize-error.json') @base.notification_sample('instance-suspend-start.json') @base.notification_sample('instance-suspend-end.json') @base.notification_sample('instance-power_on-start.json') @base.notification_sample('instance-power_on-end.json') @base.notification_sample('instance-power_off-start.json') @base.notification_sample('instance-power_off-end.json') @base.notification_sample('instance-reboot-start.json') @base.notification_sample('instance-reboot-end.json') @base.notification_sample('instance-reboot-error.json') @base.notification_sample('instance-shutdown-start.json') @base.notification_sample('instance-shutdown-end.json') @base.notification_sample('instance-interface_attach-start.json') @base.notification_sample('instance-interface_attach-end.json') @base.notification_sample('instance-interface_attach-error.json') @base.notification_sample('instance-shelve-start.json') @base.notification_sample('instance-shelve-end.json') @base.notification_sample('instance-resume-start.json') @base.notification_sample('instance-resume-end.json') @base.notification_sample('instance-restore-start.json') @base.notification_sample('instance-restore-end.json') @base.notification_sample('instance-evacuate.json') @base.notification_sample('instance-resize_finish-start.json') @base.notification_sample('instance-resize_finish-end.json') @base.notification_sample('instance-live_migration_pre-start.json') @base.notification_sample('instance-live_migration_pre-end.json') @base.notification_sample('instance-live_migration_abort-start.json') @base.notification_sample('instance-live_migration_abort-end.json') @base.notification_sample('instance-live_migration_post-start.json') @base.notification_sample('instance-live_migration_post-end.json') @base.notification_sample('instance-live_migration_post_dest-start.json') @base.notification_sample('instance-live_migration_post_dest-end.json') @base.notification_sample('instance-live_migration_rollback-start.json') @base.notification_sample('instance-live_migration_rollback-end.json') @base.notification_sample('instance-live_migration_rollback_dest-start.json') @base.notification_sample('instance-live_migration_rollback_dest-end.json') @base.notification_sample('instance-interface_detach-start.json') @base.notification_sample('instance-interface_detach-end.json') 
@base.notification_sample('instance-resize_confirm-start.json') @base.notification_sample('instance-resize_confirm-end.json') @base.notification_sample('instance-resize_revert-start.json') @base.notification_sample('instance-resize_revert-end.json') @base.notification_sample('instance-live_migration_force_complete-start.json') @base.notification_sample('instance-live_migration_force_complete-end.json') @base.notification_sample('instance-shelve_offload-start.json') @base.notification_sample('instance-shelve_offload-end.json') @base.notification_sample('instance-soft_delete-start.json') @base.notification_sample('instance-soft_delete-end.json') @base.notification_sample('instance-trigger_crash_dump-start.json') @base.notification_sample('instance-trigger_crash_dump-end.json') @base.notification_sample('instance-unrescue-start.json') @base.notification_sample('instance-unrescue-end.json') @base.notification_sample('instance-unshelve-start.json') @base.notification_sample('instance-unshelve-end.json') @base.notification_sample('instance-lock.json') @base.notification_sample('instance-unlock.json') @nova_base.NovaObjectRegistry.register_notification class InstanceActionNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('InstanceActionPayload') } @base.notification_sample('instance-update.json') @nova_base.NovaObjectRegistry.register_notification class InstanceUpdateNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('InstanceUpdatePayload') } @base.notification_sample('instance-volume_swap-start.json') @base.notification_sample('instance-volume_swap-end.json') @base.notification_sample('instance-volume_swap-error.json') @nova_base.NovaObjectRegistry.register_notification class InstanceActionVolumeSwapNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('InstanceActionVolumeSwapPayload') } @base.notification_sample('instance-volume_attach-start.json') @base.notification_sample('instance-volume_attach-end.json') @base.notification_sample('instance-volume_attach-error.json') @base.notification_sample('instance-volume_detach-start.json') @base.notification_sample('instance-volume_detach-end.json') @nova_base.NovaObjectRegistry.register_notification class InstanceActionVolumeNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('InstanceActionVolumePayload') } @base.notification_sample('instance-create-start.json') @base.notification_sample('instance-create-end.json') @base.notification_sample('instance-create-error.json') @nova_base.NovaObjectRegistry.register_notification class InstanceCreateNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('InstanceCreatePayload') } @base.notification_sample('instance-resize_prep-start.json') @base.notification_sample('instance-resize_prep-end.json') @nova_base.NovaObjectRegistry.register_notification class InstanceActionResizePrepNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('InstanceActionResizePrepPayload') } @base.notification_sample('instance-snapshot-start.json') @base.notification_sample('instance-snapshot-end.json') @nova_base.NovaObjectRegistry.register_notification class 
InstanceActionSnapshotNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('InstanceActionSnapshotPayload') } @base.notification_sample('instance-rescue-start.json') @base.notification_sample('instance-rescue-end.json') @nova_base.NovaObjectRegistry.register_notification class InstanceActionRescueNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('InstanceActionRescuePayload') } @base.notification_sample('instance-rebuild_scheduled.json') @base.notification_sample('instance-rebuild-start.json') @base.notification_sample('instance-rebuild-end.json') @base.notification_sample('instance-rebuild-error.json') @nova_base.NovaObjectRegistry.register_notification class InstanceActionRebuildNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('InstanceActionRebuildPayload') } @nova_base.NovaObjectRegistry.register_notification class InstanceActionSnapshotPayload(InstanceActionPayload): # Version 1.6: Initial version. It starts at version 1.6 as # instance.snapshot.start and .end notifications are switched # from using InstanceActionPayload 1.5 to this new payload and # also it added a new field so we wanted to keep the version # number increasing to signal the change. # Version 1.7: Added request_id field to InstancePayload # Version 1.8: Added action_initiator_user and action_initiator_project to # InstancePayload # Version 1.9: Added locked_reason field to InstancePayload VERSION = '1.9' fields = { 'snapshot_image_id': fields.UUIDField(), } def __init__(self, context, instance, fault, snapshot_image_id): super(InstanceActionSnapshotPayload, self).__init__( context=context, instance=instance, fault=fault) self.snapshot_image_id = snapshot_image_id @nova_base.NovaObjectRegistry.register_notification class InstanceExistsPayload(InstancePayload): # Version 1.0: Initial version # Version 1.1: Added action_initiator_user and action_initiator_project to # InstancePayload # Version 1.2: Added locked_reason field to InstancePayload VERSION = '1.2' fields = { 'audit_period': fields.ObjectField('AuditPeriodPayload'), 'bandwidth': fields.ListOfObjectsField('BandwidthPayload'), } def __init__(self, context, instance, audit_period, bandwidth): super(InstanceExistsPayload, self).__init__(context=context, instance=instance) self.audit_period = audit_period self.bandwidth = bandwidth @base.notification_sample('instance-exists.json') @nova_base.NovaObjectRegistry.register_notification class InstanceExistsNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('InstanceExistsPayload') } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/notifications/objects/keypair.py0000664000175000017500000000405200000000000022052 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from nova.notifications.objects import base from nova.objects import base as nova_base from nova.objects import fields @nova_base.NovaObjectRegistry.register_notification class KeypairPayload(base.NotificationPayloadBase): SCHEMA = { 'user_id': ('keypair', 'user_id'), 'name': ('keypair', 'name'), 'public_key': ('keypair', 'public_key'), 'fingerprint': ('keypair', 'fingerprint'), 'type': ('keypair', 'type') } # Version 1.0: Initial version VERSION = '1.0' fields = { 'user_id': fields.StringField(nullable=True), 'name': fields.StringField(nullable=False), 'fingerprint': fields.StringField(nullable=True), 'public_key': fields.StringField(nullable=True), 'type': fields.StringField(nullable=False), } def __init__(self, keypair, **kwargs): super(KeypairPayload, self).__init__(**kwargs) self.populate_schema(keypair=keypair) @base.notification_sample('keypair-create-start.json') @base.notification_sample('keypair-create-end.json') @base.notification_sample('keypair-delete-start.json') @base.notification_sample('keypair-delete-end.json') @base.notification_sample('keypair-import-start.json') @base.notification_sample('keypair-import-end.json') @nova_base.NovaObjectRegistry.register_notification class KeypairNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('KeypairPayload') } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/notifications/objects/libvirt.py0000664000175000017500000000260600000000000022064 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.notifications.objects import base from nova.objects import base as nova_base from nova.objects import fields @nova_base.NovaObjectRegistry.register_notification class LibvirtErrorPayload(base.NotificationPayloadBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'ip': fields.StringField(), 'reason': fields.ObjectField('ExceptionPayload'), } def __init__(self, ip, reason): super(LibvirtErrorPayload, self).__init__() self.ip = ip self.reason = reason @base.notification_sample('libvirt-connect-error.json') @nova_base.NovaObjectRegistry.register_notification class LibvirtErrorNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('LibvirtErrorPayload') } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/notifications/objects/metrics.py0000664000175000017500000000551600000000000022062 0ustar00zuulzuul00000000000000# Copyright 2018 NTT Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.notifications.objects import base from nova.objects import base as nova_base from nova.objects import fields @base.notification_sample('metrics-update.json') @nova_base.NovaObjectRegistry.register_notification class MetricsNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('MetricsPayload') } @nova_base.NovaObjectRegistry.register_notification class MetricPayload(base.NotificationPayloadBase): # Version 1.0: Initial version VERSION = '1.0' SCHEMA = { 'name': ('monitor_metric', 'name'), 'value': ('monitor_metric', 'value'), 'numa_membw_values': ('monitor_metric', 'numa_membw_values'), 'timestamp': ('monitor_metric', 'timestamp'), 'source': ('monitor_metric', 'source'), } fields = { 'name': fields.MonitorMetricTypeField(), 'value': fields.IntegerField(), 'numa_membw_values': fields.DictOfIntegersField(nullable=True), 'timestamp': fields.DateTimeField(), 'source': fields.StringField(), } def __init__(self, monitor_metric): super(MetricPayload, self).__init__() self.populate_schema(monitor_metric=monitor_metric) @classmethod def from_monitor_metric_list_obj(cls, monitor_metric_list): """Returns a list of MetricPayload objects based on the passed MonitorMetricList object. """ payloads = [] for monitor_metric in monitor_metric_list: payloads.append(cls(monitor_metric)) return payloads @nova_base.NovaObjectRegistry.register_notification class MetricsPayload(base.NotificationPayloadBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'host': fields.StringField(), 'host_ip': fields.StringField(), 'nodename': fields.StringField(), 'metrics': fields.ListOfObjectsField('MetricPayload'), } def __init__(self, host, host_ip, nodename, monitor_metric_list): super(MetricsPayload, self).__init__() self.host = host self.host_ip = host_ip self.nodename = nodename self.metrics = MetricPayload.from_monitor_metric_list_obj( monitor_metric_list) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/notifications/objects/request_spec.py0000664000175000017500000003212700000000000023114 0ustar00zuulzuul00000000000000# Copyright 2018 NTT Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
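# Illustrative usage sketch for the metrics payloads defined above (assumed
# name: ``metric_list`` stands for an existing objects.MonitorMetricList
# collected by a compute monitor):
#
#     payload = MetricsPayload(host='compute-1', host_ip='192.0.2.10',
#                              nodename='compute-1',
#                              monitor_metric_list=metric_list)
#     notification = MetricsNotification(payload=payload)
#
# MetricsPayload.__init__ fans the list out into one MetricPayload per metric
# via MetricPayload.from_monitor_metric_list_obj().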
from nova.notifications.objects import base from nova.notifications.objects import flavor as flavor_payload from nova.notifications.objects import image as image_payload from nova.notifications.objects import server_group as server_group_payload from nova.objects import base as nova_base from nova.objects import fields @nova_base.NovaObjectRegistry.register_notification class RequestSpecPayload(base.NotificationPayloadBase): # Version 1.0: Initial version # Version 1.1: Add force_hosts, force_nodes, ignore_hosts, image_meta, # instance_group, requested_destination, retry, # scheduler_hints and security_groups fields VERSION = '1.1' SCHEMA = { 'ignore_hosts': ('request_spec', 'ignore_hosts'), 'instance_uuid': ('request_spec', 'instance_uuid'), 'project_id': ('request_spec', 'project_id'), 'user_id': ('request_spec', 'user_id'), 'availability_zone': ('request_spec', 'availability_zone'), 'num_instances': ('request_spec', 'num_instances'), 'scheduler_hints': ('request_spec', 'scheduler_hints'), } fields = { 'instance_uuid': fields.UUIDField(), 'project_id': fields.StringField(nullable=True), 'user_id': fields.StringField(nullable=True), 'availability_zone': fields.StringField(nullable=True), 'flavor': fields.ObjectField('FlavorPayload', nullable=True), 'force_hosts': fields.StringField(nullable=True), 'force_nodes': fields.StringField(nullable=True), 'ignore_hosts': fields.ListOfStringsField(nullable=True), 'image_meta': fields.ObjectField('ImageMetaPayload', nullable=True), 'instance_group': fields.ObjectField('ServerGroupPayload', nullable=True), 'image': fields.ObjectField('ImageMetaPayload', nullable=True), 'numa_topology': fields.ObjectField('InstanceNUMATopologyPayload', nullable=True), 'pci_requests': fields.ObjectField('InstancePCIRequestsPayload', nullable=True), 'num_instances': fields.IntegerField(default=1), 'requested_destination': fields.ObjectField('DestinationPayload', nullable=True), 'retry': fields.ObjectField('SchedulerRetriesPayload', nullable=True), 'scheduler_hints': fields.DictOfListOfStringsField(nullable=True), 'security_groups': fields.ListOfStringsField(), } def __init__(self, request_spec): super(RequestSpecPayload, self).__init__() self.flavor = flavor_payload.FlavorPayload( request_spec.flavor) if request_spec.obj_attr_is_set( 'flavor') else None self.image = image_payload.ImageMetaPayload( request_spec.image) if request_spec.image else None if request_spec.numa_topology is not None: if not request_spec.numa_topology.obj_attr_is_set('instance_uuid'): request_spec.numa_topology.instance_uuid = ( request_spec.instance_uuid) self.numa_topology = InstanceNUMATopologyPayload( request_spec.numa_topology) else: self.numa_topology = None if request_spec.pci_requests is not None: if not request_spec.pci_requests.obj_attr_is_set('instance_uuid'): request_spec.pci_requests.instance_uuid = ( request_spec.instance_uuid) self.pci_requests = InstancePCIRequestsPayload( request_spec.pci_requests) else: self.pci_requests = None if 'requested_destination' in request_spec \ and request_spec.requested_destination: self.requested_destination = DestinationPayload( destination=request_spec.requested_destination) else: self.requested_destination = None if 'retry' in request_spec and request_spec.retry: self.retry = SchedulerRetriesPayload( retry=request_spec.retry) else: self.retry = None self.security_groups = [ sec_group.identifier for sec_group in request_spec.security_groups] if 'instance_group' in request_spec and request_spec.instance_group: self.instance_group = 
server_group_payload.ServerGroupPayload( group=request_spec.instance_group) else: self.instance_group = None if 'force_hosts' in request_spec and request_spec.force_hosts: self.force_hosts = request_spec.force_hosts[0] else: self.force_hosts = None if 'force_nodes' in request_spec and request_spec.force_nodes: self.force_nodes = request_spec.force_nodes[0] else: self.force_nodes = None self.populate_schema(request_spec=request_spec) @nova_base.NovaObjectRegistry.register_notification class InstanceNUMATopologyPayload(base.NotificationPayloadBase): # Version 1.0: Initial version VERSION = '1.0' SCHEMA = { 'instance_uuid': ('numa_topology', 'instance_uuid'), 'emulator_threads_policy': ('numa_topology', 'emulator_threads_policy') } fields = { 'instance_uuid': fields.UUIDField(), 'cells': fields.ListOfObjectsField('InstanceNUMACellPayload'), 'emulator_threads_policy': fields.CPUEmulatorThreadsPolicyField( nullable=True) } def __init__(self, numa_topology): super(InstanceNUMATopologyPayload, self).__init__() self.cells = InstanceNUMACellPayload.from_numa_cell_list_obj( numa_topology.cells) self.populate_schema(numa_topology=numa_topology) @nova_base.NovaObjectRegistry.register_notification class InstanceNUMACellPayload(base.NotificationPayloadBase): # Version 1.0: Initial version VERSION = '1.0' SCHEMA = { 'id': ('numa_cell', 'id'), 'cpuset': ('numa_cell', 'cpuset'), 'memory': ('numa_cell', 'memory'), 'pagesize': ('numa_cell', 'pagesize'), 'cpu_pinning_raw': ('numa_cell', 'cpu_pinning_raw'), 'cpu_policy': ('numa_cell', 'cpu_policy'), 'cpu_thread_policy': ('numa_cell', 'cpu_thread_policy'), 'cpuset_reserved': ('numa_cell', 'cpuset_reserved'), } fields = { 'id': fields.IntegerField(), 'cpuset': fields.SetOfIntegersField(), 'memory': fields.IntegerField(), 'pagesize': fields.IntegerField(nullable=True), 'cpu_topology': fields.ObjectField('VirtCPUTopologyPayload', nullable=True), 'cpu_pinning_raw': fields.DictOfIntegersField(nullable=True), 'cpu_policy': fields.CPUAllocationPolicyField(nullable=True), 'cpu_thread_policy': fields.CPUThreadAllocationPolicyField( nullable=True), 'cpuset_reserved': fields.SetOfIntegersField(nullable=True) } def __init__(self, numa_cell): super(InstanceNUMACellPayload, self).__init__() if (numa_cell.obj_attr_is_set('cpu_topology') and numa_cell.cpu_topology is not None): self.cpu_topology = VirtCPUTopologyPayload(numa_cell.cpu_topology) else: self.cpu_topology = None self.populate_schema(numa_cell=numa_cell) @classmethod def from_numa_cell_list_obj(cls, numa_cell_list): """Returns a list of InstanceNUMACellPayload objects based on the passed list of InstanceNUMACell objects. 
""" payloads = [] for numa_cell in numa_cell_list: payloads.append(cls(numa_cell)) return payloads @nova_base.NovaObjectRegistry.register_notification class VirtCPUTopologyPayload(base.NotificationPayloadBase): # Version 1.0: Initial version VERSION = '1.0' SCHEMA = { 'sockets': ('virt_cpu_topology', 'sockets'), 'cores': ('virt_cpu_topology', 'cores'), 'threads': ('virt_cpu_topology', 'threads'), } fields = { 'sockets': fields.IntegerField(nullable=True, default=1), 'cores': fields.IntegerField(nullable=True, default=1), 'threads': fields.IntegerField(nullable=True, default=1), } def __init__(self, virt_cpu_topology): super(VirtCPUTopologyPayload, self).__init__() self.populate_schema(virt_cpu_topology=virt_cpu_topology) @nova_base.NovaObjectRegistry.register_notification class InstancePCIRequestsPayload(base.NotificationPayloadBase): # Version 1.0: Initial version VERSION = '1.0' SCHEMA = { 'instance_uuid': ('pci_requests', 'instance_uuid') } fields = { 'instance_uuid': fields.UUIDField(), 'requests': fields.ListOfObjectsField('InstancePCIRequestPayload') } def __init__(self, pci_requests): super(InstancePCIRequestsPayload, self).__init__() self.requests = InstancePCIRequestPayload.from_pci_request_list_obj( pci_requests.requests) self.populate_schema(pci_requests=pci_requests) @nova_base.NovaObjectRegistry.register_notification class InstancePCIRequestPayload(base.NotificationPayloadBase): # Version 1.0: Initial version VERSION = '1.0' SCHEMA = { 'count': ('pci_request', 'count'), 'spec': ('pci_request', 'spec'), 'alias_name': ('pci_request', 'alias_name'), 'request_id': ('pci_request', 'request_id'), 'numa_policy': ('pci_request', 'numa_policy') } fields = { 'count': fields.IntegerField(), 'spec': fields.ListOfDictOfNullableStringsField(), 'alias_name': fields.StringField(nullable=True), 'request_id': fields.UUIDField(nullable=True), 'numa_policy': fields.PCINUMAAffinityPolicyField(nullable=True) } def __init__(self, pci_request): super(InstancePCIRequestPayload, self).__init__() self.populate_schema(pci_request=pci_request) @classmethod def from_pci_request_list_obj(cls, pci_request_list): """Returns a list of InstancePCIRequestPayload objects based on the passed list of InstancePCIRequest objects. 
""" payloads = [] for pci_request in pci_request_list: payloads.append(cls(pci_request)) return payloads @nova_base.NovaObjectRegistry.register_notification class DestinationPayload(base.NotificationPayloadBase): # Version 1.0: Initial version VERSION = '1.0' SCHEMA = { 'aggregates': ('destination', 'aggregates'), } fields = { 'host': fields.StringField(), 'node': fields.StringField(nullable=True), 'cell': fields.ObjectField('CellMappingPayload', nullable=True), 'aggregates': fields.ListOfStringsField(nullable=True, default=None), } def __init__(self, destination): super(DestinationPayload, self).__init__() if (destination.obj_attr_is_set('host') and destination.host is not None): self.host = destination.host if (destination.obj_attr_is_set('node') and destination.node is not None): self.node = destination.node if (destination.obj_attr_is_set('cell') and destination.cell is not None): self.cell = CellMappingPayload(destination.cell) self.populate_schema(destination=destination) @nova_base.NovaObjectRegistry.register_notification class SchedulerRetriesPayload(base.NotificationPayloadBase): # Version 1.0: Initial version VERSION = '1.0' SCHEMA = { 'num_attempts': ('retry', 'num_attempts'), } fields = { 'num_attempts': fields.IntegerField(), 'hosts': fields.ListOfStringsField(), } def __init__(self, retry): super(SchedulerRetriesPayload, self).__init__() self.hosts = [] for compute_node in retry.hosts: self.hosts.append(compute_node.hypervisor_hostname) self.populate_schema(retry=retry) @nova_base.NovaObjectRegistry.register_notification class CellMappingPayload(base.NotificationPayloadBase): # Version 1.0: Initial version # Version 2.0: Remove transport_url and database_connection fields. VERSION = '2.0' SCHEMA = { 'uuid': ('cell', 'uuid'), 'name': ('cell', 'name'), 'disabled': ('cell', 'disabled'), } fields = { 'uuid': fields.UUIDField(), 'name': fields.StringField(nullable=True), 'disabled': fields.BooleanField(default=False), } def __init__(self, cell): super(CellMappingPayload, self).__init__() self.populate_schema(cell=cell) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/notifications/objects/scheduler.py0000664000175000017500000000207100000000000022363 0ustar00zuulzuul00000000000000# Copyright 2017 Ericsson # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from nova.notifications.objects import base from nova.objects import base as nova_base from nova.objects import fields @base.notification_sample('scheduler-select_destinations-start.json') @base.notification_sample('scheduler-select_destinations-end.json') @nova_base.NovaObjectRegistry.register_notification class SelectDestinationsNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('RequestSpecPayload') } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/notifications/objects/server_group.py0000664000175000017500000000511500000000000023131 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.notifications.objects import base from nova.objects import base as nova_base from nova.objects import fields @nova_base.NovaObjectRegistry.register_notification class ServerGroupPayload(base.NotificationPayloadBase): SCHEMA = { 'uuid': ('group', 'uuid'), 'name': ('group', 'name'), 'user_id': ('group', 'user_id'), 'project_id': ('group', 'project_id'), 'policies': ('group', 'policies'), 'members': ('group', 'members'), 'hosts': ('group', 'hosts'), 'policy': ('group', 'policy'), 'rules': ('group', 'rules'), } # Version 1.0: Initial version # Version 1.1: Deprecate policies, add policy and add rules VERSION = '1.1' fields = { 'uuid': fields.UUIDField(), 'name': fields.StringField(nullable=True), 'user_id': fields.StringField(nullable=True), 'project_id': fields.StringField(nullable=True), # NOTE(yikun): policies is deprecated and should # be removed on the next major version bump 'policies': fields.ListOfStringsField(nullable=True), 'members': fields.ListOfStringsField(nullable=True), 'hosts': fields.ListOfStringsField(nullable=True), 'policy': fields.StringField(nullable=True), 'rules': fields.DictOfStringsField(), } def __init__(self, group): super(ServerGroupPayload, self).__init__() # Note: The group is orphaned here to avoid triggering lazy-loading of # the group.hosts field. cgroup = copy.deepcopy(group) cgroup._context = None self.populate_schema(group=cgroup) @base.notification_sample('server_group-add_member.json') @base.notification_sample('server_group-create.json') @base.notification_sample('server_group-delete.json') @nova_base.NovaObjectRegistry.register_notification class ServerGroupNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('ServerGroupPayload') } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/notifications/objects/service.py0000664000175000017500000000506200000000000022050 0ustar00zuulzuul00000000000000# Copyright (c) 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
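# Illustrative usage sketch for the ServerGroupPayload defined above (assumed
# name: ``group`` stands for an existing server group object exposing the
# SCHEMA attributes such as uuid, name, policy, rules and members):
#
#     payload = ServerGroupPayload(group=group)
#     notification = ServerGroupNotification(payload=payload)
#
# The constructor deep-copies the group and clears its context before calling
# populate_schema(), so building the payload cannot lazy-load group.hosts.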
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.notifications.objects import base from nova.objects import base as nova_base from nova.objects import fields @base.notification_sample('service-create.json') @base.notification_sample('service-update.json') @base.notification_sample('service-delete.json') @nova_base.NovaObjectRegistry.register_notification class ServiceStatusNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('ServiceStatusPayload') } @nova_base.NovaObjectRegistry.register_notification class ServiceStatusPayload(base.NotificationPayloadBase): SCHEMA = { 'host': ('service', 'host'), 'binary': ('service', 'binary'), 'topic': ('service', 'topic'), 'report_count': ('service', 'report_count'), 'disabled': ('service', 'disabled'), 'disabled_reason': ('service', 'disabled_reason'), 'availability_zone': ('service', 'availability_zone'), 'last_seen_up': ('service', 'last_seen_up'), 'forced_down': ('service', 'forced_down'), 'version': ('service', 'version'), 'uuid': ('service', 'uuid') } # Version 1.0: Initial version # Version 1.1: Added uuid field. VERSION = '1.1' fields = { 'host': fields.StringField(nullable=True), 'binary': fields.StringField(nullable=True), 'topic': fields.StringField(nullable=True), 'report_count': fields.IntegerField(), 'disabled': fields.BooleanField(), 'disabled_reason': fields.StringField(nullable=True), 'availability_zone': fields.StringField(nullable=True), 'last_seen_up': fields.DateTimeField(nullable=True), 'forced_down': fields.BooleanField(), 'version': fields.IntegerField(), 'uuid': fields.UUIDField() } def __init__(self, service): super(ServiceStatusPayload, self).__init__() self.populate_schema(service=service) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/notifications/objects/volume.py0000664000175000017500000000452500000000000021722 0ustar00zuulzuul00000000000000# Copyright 2018 NTT Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
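# Illustrative usage sketch for the ServiceStatusPayload defined above
# (assumed name: ``service`` stands for an existing objects.Service):
#
#     payload = ServiceStatusPayload(service=service)
#     notification = ServiceStatusNotification(payload=payload)
#
# All payload fields are filled from the service record via the SCHEMA
# mapping; no extra arguments are needed.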
from nova.notifications.objects import base from nova.objects import base as nova_base from nova.objects import fields @base.notification_sample('volume-usage.json') @nova_base.NovaObjectRegistry.register_notification class VolumeUsageNotification(base.NotificationBase): # Version 1.0: Initial version VERSION = '1.0' fields = { 'payload': fields.ObjectField('VolumeUsagePayload') } @nova_base.NovaObjectRegistry.register_notification class VolumeUsagePayload(base.NotificationPayloadBase): # Version 1.0: Initial version VERSION = '1.0' SCHEMA = { 'volume_id': ('vol_usage', 'volume_id'), 'project_id': ('vol_usage', 'project_id'), 'user_id': ('vol_usage', 'user_id'), 'availability_zone': ('vol_usage', 'availability_zone'), 'instance_uuid': ('vol_usage', 'instance_uuid'), 'last_refreshed': ('vol_usage', 'last_refreshed'), 'reads': ('vol_usage', 'reads'), 'read_bytes': ('vol_usage', 'read_bytes'), 'writes': ('vol_usage', 'writes'), 'write_bytes': ('vol_usage', 'write_bytes') } fields = { 'volume_id': fields.UUIDField(), 'project_id': fields.StringField(nullable=True), 'user_id': fields.StringField(nullable=True), 'availability_zone': fields.StringField(nullable=True), 'instance_uuid': fields.UUIDField(nullable=True), 'last_refreshed': fields.DateTimeField(nullable=True), 'reads': fields.IntegerField(), 'read_bytes': fields.IntegerField(), 'writes': fields.IntegerField(), 'write_bytes': fields.IntegerField() } def __init__(self, vol_usage): super(VolumeUsagePayload, self).__init__() self.populate_schema(vol_usage=vol_usage) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3664696 nova-21.2.4/nova/objects/0000775000175000017500000000000000000000000015162 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/__init__.py0000664000175000017500000000624700000000000017304 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # NOTE(comstud): You may scratch your head as you see code that imports # this module and then accesses attributes for objects such as Instance, # etc, yet you do not see these attributes in here. Never fear, there is # a little bit of magic. When objects are registered, an attribute is set # on this module automatically, pointing to the newest/latest version of # the object. def register_all(): # NOTE(danms): You must make sure your object gets imported in this # function in order for it to be registered by services that may # need to receive it via RPC. 
__import__('nova.objects.agent') __import__('nova.objects.aggregate') __import__('nova.objects.bandwidth_usage') __import__('nova.objects.block_device') __import__('nova.objects.build_request') __import__('nova.objects.cell_mapping') __import__('nova.objects.compute_node') __import__('nova.objects.diagnostics') __import__('nova.objects.console_auth_token') __import__('nova.objects.ec2') __import__('nova.objects.external_event') __import__('nova.objects.flavor') __import__('nova.objects.host_mapping') __import__('nova.objects.hv_spec') __import__('nova.objects.image_meta') __import__('nova.objects.instance') __import__('nova.objects.instance_action') __import__('nova.objects.instance_fault') __import__('nova.objects.instance_group') __import__('nova.objects.instance_info_cache') __import__('nova.objects.instance_mapping') __import__('nova.objects.instance_numa') __import__('nova.objects.instance_pci_requests') __import__('nova.objects.keypair') __import__('nova.objects.migrate_data') __import__('nova.objects.virt_device_metadata') __import__('nova.objects.migration') __import__('nova.objects.migration_context') __import__('nova.objects.monitor_metric') __import__('nova.objects.network_metadata') __import__('nova.objects.network_request') __import__('nova.objects.numa') __import__('nova.objects.pci_device') __import__('nova.objects.pci_device_pool') __import__('nova.objects.request_spec') __import__('nova.objects.tag') __import__('nova.objects.quotas') __import__('nova.objects.resource') __import__('nova.objects.security_group') __import__('nova.objects.selection') __import__('nova.objects.service') __import__('nova.objects.task_log') __import__('nova.objects.trusted_certs') __import__('nova.objects.vcpu_model') __import__('nova.objects.virt_cpu_topology') __import__('nova.objects.virtual_interface') __import__('nova.objects.volume_usage') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/agent.py0000664000175000017500000000545000000000000016636 0ustar00zuulzuul00000000000000# Copyright 2014 Red Hat, Inc # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
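# Illustrative usage sketch for register_all() defined above (a minimal
# sketch; it assumes a configured nova environment):
#
#     from nova import objects
#
#     objects.register_all()
#     inst_cls = objects.Instance   # attribute set by the registry hook,
#                                   # always the newest-versioned class
#
# Services call register_all() once at startup so that any object they may
# receive over RPC is already registered.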
from nova.db import api as db from nova import exception from nova import objects from nova.objects import base from nova.objects import fields @base.NovaObjectRegistry.register class Agent(base.NovaPersistentObject, base.NovaObject): VERSION = '1.0' fields = { 'id': fields.IntegerField(read_only=True), 'hypervisor': fields.StringField(), 'os': fields.StringField(), 'architecture': fields.StringField(), 'version': fields.StringField(), 'url': fields.StringField(), 'md5hash': fields.StringField(), } @staticmethod def _from_db_object(context, agent, db_agent): for name in agent.fields: setattr(agent, name, db_agent[name]) agent._context = context agent.obj_reset_changes() return agent @base.remotable_classmethod def get_by_triple(cls, context, hypervisor, os, architecture): db_agent = db.agent_build_get_by_triple(context, hypervisor, os, architecture) if not db_agent: return None return cls._from_db_object(context, objects.Agent(), db_agent) @base.remotable def create(self): updates = self.obj_get_changes() if 'id' in updates: raise exception.ObjectActionError(action='create', reason='Already Created') db_agent = db.agent_build_create(self._context, updates) self._from_db_object(self._context, self, db_agent) @base.remotable def destroy(self): db.agent_build_destroy(self._context, self.id) @base.remotable def save(self): updates = self.obj_get_changes() db.agent_build_update(self._context, self.id, updates) self.obj_reset_changes() @base.NovaObjectRegistry.register class AgentList(base.ObjectListBase, base.NovaObject): VERSION = '1.0' fields = { 'objects': fields.ListOfObjectsField('Agent'), } @base.remotable_classmethod def get_all(cls, context, hypervisor=None): db_agents = db.agent_build_get_all(context, hypervisor=hypervisor) return base.obj_make_list(context, cls(), objects.Agent, db_agents) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/aggregate.py0000664000175000017500000005307300000000000017472 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
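# Illustrative usage sketch for the Agent object defined above (assumed
# names: ``ctxt`` is an existing RequestContext, field values are invented):
#
#     agent = objects.Agent(context=ctxt, hypervisor='xen', os='windows',
#                           architecture='x86_64', version='1.0',
#                           url='http://example.com/agent.zip',
#                           md5hash='abc123')
#     agent.create()        # INSERT via db.agent_build_create()
#     agent.version = '1.1'
#     agent.save()          # persists only the changed fields
#     found = objects.Agent.get_by_triple(ctxt, 'xen', 'windows', 'x86_64')
#     all_xen = objects.AgentList.get_all(ctxt, hypervisor='xen')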
from oslo_db import exception as db_exc from oslo_log import log as logging from oslo_utils import excutils from oslo_utils import uuidutils from sqlalchemy.orm import contains_eager from sqlalchemy.orm import joinedload from nova.compute import utils as compute_utils from nova.db.sqlalchemy import api as db_api from nova.db.sqlalchemy import api_models from nova import exception from nova.i18n import _ from nova import objects from nova.objects import base from nova.objects import fields LOG = logging.getLogger(__name__) DEPRECATED_FIELDS = ['deleted', 'deleted_at'] @db_api.api_context_manager.reader def _aggregate_get_from_db(context, aggregate_id): query = context.session.query(api_models.Aggregate).\ options(joinedload('_hosts')).\ options(joinedload('_metadata')) query = query.filter(api_models.Aggregate.id == aggregate_id) aggregate = query.first() if not aggregate: raise exception.AggregateNotFound(aggregate_id=aggregate_id) return aggregate @db_api.api_context_manager.reader def _aggregate_get_from_db_by_uuid(context, aggregate_uuid): query = context.session.query(api_models.Aggregate).\ options(joinedload('_hosts')).\ options(joinedload('_metadata')) query = query.filter(api_models.Aggregate.uuid == aggregate_uuid) aggregate = query.first() if not aggregate: raise exception.AggregateNotFound(aggregate_id=aggregate_uuid) return aggregate def _host_add_to_db(context, aggregate_id, host): try: with db_api.api_context_manager.writer.using(context): # Check to see if the aggregate exists _aggregate_get_from_db(context, aggregate_id) host_ref = api_models.AggregateHost() host_ref.update({"host": host, "aggregate_id": aggregate_id}) host_ref.save(context.session) return host_ref except db_exc.DBDuplicateEntry: raise exception.AggregateHostExists(host=host, aggregate_id=aggregate_id) def _host_delete_from_db(context, aggregate_id, host): count = 0 with db_api.api_context_manager.writer.using(context): # Check to see if the aggregate exists _aggregate_get_from_db(context, aggregate_id) query = context.session.query(api_models.AggregateHost) query = query.filter(api_models.AggregateHost.aggregate_id == aggregate_id) count = query.filter_by(host=host).delete() if count == 0: raise exception.AggregateHostNotFound(aggregate_id=aggregate_id, host=host) def _metadata_add_to_db(context, aggregate_id, metadata, max_retries=10, set_delete=False): all_keys = metadata.keys() for attempt in range(max_retries): try: with db_api.api_context_manager.writer.using(context): query = context.session.query(api_models.AggregateMetadata).\ filter_by(aggregate_id=aggregate_id) if set_delete: query.filter(~api_models.AggregateMetadata.key. 
in_(all_keys)).\ delete(synchronize_session=False) already_existing_keys = set() if all_keys: query = query.filter( api_models.AggregateMetadata.key.in_(all_keys)) for meta_ref in query.all(): key = meta_ref.key meta_ref.update({"value": metadata[key]}) already_existing_keys.add(key) new_entries = [] for key, value in metadata.items(): if key in already_existing_keys: continue new_entries.append({"key": key, "value": value, "aggregate_id": aggregate_id}) if new_entries: context.session.execute( api_models.AggregateMetadata.__table__.insert(None), new_entries) return metadata except db_exc.DBDuplicateEntry: # a concurrent transaction has been committed, # try again unless this was the last attempt with excutils.save_and_reraise_exception() as ctxt: if attempt < max_retries - 1: ctxt.reraise = False else: msg = _("Add metadata failed for aggregate %(id)s " "after %(retries)s retries") % \ {"id": aggregate_id, "retries": max_retries} LOG.warning(msg) @db_api.api_context_manager.writer def _metadata_delete_from_db(context, aggregate_id, key): # Check to see if the aggregate exists _aggregate_get_from_db(context, aggregate_id) query = context.session.query(api_models.AggregateMetadata) query = query.filter(api_models.AggregateMetadata.aggregate_id == aggregate_id) count = query.filter_by(key=key).delete() if count == 0: raise exception.AggregateMetadataNotFound( aggregate_id=aggregate_id, metadata_key=key) @db_api.api_context_manager.writer def _aggregate_create_in_db(context, values, metadata=None): query = context.session.query(api_models.Aggregate) query = query.filter(api_models.Aggregate.name == values['name']) aggregate = query.first() if not aggregate: aggregate = api_models.Aggregate() aggregate.update(values) aggregate.save(context.session) # We don't want these to be lazy loaded later. We know there is # nothing here since we just created this aggregate. 
aggregate._hosts = [] aggregate._metadata = [] else: raise exception.AggregateNameExists(aggregate_name=values['name']) if metadata: _metadata_add_to_db(context, aggregate.id, metadata) context.session.expire(aggregate, ['_metadata']) aggregate._metadata return aggregate @db_api.api_context_manager.writer def _aggregate_delete_from_db(context, aggregate_id): # Delete Metadata first context.session.query(api_models.AggregateMetadata).\ filter_by(aggregate_id=aggregate_id).\ delete() count = context.session.query(api_models.Aggregate).\ filter(api_models.Aggregate.id == aggregate_id).\ delete() if count == 0: raise exception.AggregateNotFound(aggregate_id=aggregate_id) @db_api.api_context_manager.writer def _aggregate_update_to_db(context, aggregate_id, values): aggregate = _aggregate_get_from_db(context, aggregate_id) set_delete = True if "availability_zone" in values: az = values.pop('availability_zone') if 'metadata' not in values: values['metadata'] = {'availability_zone': az} set_delete = False else: values['metadata']['availability_zone'] = az metadata = values.get('metadata') if metadata is not None: _metadata_add_to_db(context, aggregate_id, values.pop('metadata'), set_delete=set_delete) aggregate.update(values) try: aggregate.save(context.session) except db_exc.DBDuplicateEntry: if 'name' in values: raise exception.AggregateNameExists( aggregate_name=values['name']) else: raise return _aggregate_get_from_db(context, aggregate_id) @base.NovaObjectRegistry.register class Aggregate(base.NovaPersistentObject, base.NovaObject): # Version 1.0: Initial version # Version 1.1: String attributes updated to support unicode # Version 1.2: Added uuid field # Version 1.3: Added get_by_uuid method VERSION = '1.3' fields = { 'id': fields.IntegerField(), 'uuid': fields.UUIDField(nullable=False), 'name': fields.StringField(), 'hosts': fields.ListOfStringsField(nullable=True), 'metadata': fields.DictOfStringsField(nullable=True), } obj_extra_fields = ['availability_zone'] @staticmethod def _from_db_object(context, aggregate, db_aggregate): for key in aggregate.fields: if key == 'metadata': db_key = 'metadetails' elif key in DEPRECATED_FIELDS and key not in db_aggregate: continue else: db_key = key setattr(aggregate, key, db_aggregate[db_key]) # NOTE: This can be removed when we bump Aggregate to v2.0 aggregate.deleted_at = None aggregate.deleted = False aggregate._context = context aggregate.obj_reset_changes() return aggregate def _assert_no_hosts(self, action): if 'hosts' in self.obj_what_changed(): raise exception.ObjectActionError( action=action, reason='hosts updated inline') @base.remotable_classmethod def get_by_id(cls, context, aggregate_id): db_aggregate = _aggregate_get_from_db(context, aggregate_id) return cls._from_db_object(context, cls(), db_aggregate) @base.remotable_classmethod def get_by_uuid(cls, context, aggregate_uuid): db_aggregate = _aggregate_get_from_db_by_uuid(context, aggregate_uuid) return cls._from_db_object(context, cls(), db_aggregate) @base.remotable def create(self): if self.obj_attr_is_set('id'): raise exception.ObjectActionError(action='create', reason='already created') self._assert_no_hosts('create') updates = self.obj_get_changes() payload = dict(updates) if 'metadata' in updates: # NOTE(danms): For some reason the notification format is weird payload['meta_data'] = payload.pop('metadata') if 'uuid' not in updates: updates['uuid'] = uuidutils.generate_uuid() self.uuid = updates['uuid'] LOG.debug('Generated uuid %(uuid)s for aggregate', dict(uuid=updates['uuid'])) 
compute_utils.notify_about_aggregate_update(self._context, "create.start", payload) compute_utils.notify_about_aggregate_action( context=self._context, aggregate=self, action=fields.NotificationAction.CREATE, phase=fields.NotificationPhase.START) metadata = updates.pop('metadata', None) db_aggregate = _aggregate_create_in_db(self._context, updates, metadata=metadata) self._from_db_object(self._context, self, db_aggregate) payload['aggregate_id'] = self.id compute_utils.notify_about_aggregate_update(self._context, "create.end", payload) compute_utils.notify_about_aggregate_action( context=self._context, aggregate=self, action=fields.NotificationAction.CREATE, phase=fields.NotificationPhase.END) @base.remotable def save(self): self._assert_no_hosts('save') updates = self.obj_get_changes() payload = {'aggregate_id': self.id} if 'metadata' in updates: payload['meta_data'] = updates['metadata'] compute_utils.notify_about_aggregate_update(self._context, "updateprop.start", payload) compute_utils.notify_about_aggregate_action( context=self._context, aggregate=self, action=fields.NotificationAction.UPDATE_PROP, phase=fields.NotificationPhase.START) updates.pop('id', None) db_aggregate = _aggregate_update_to_db(self._context, self.id, updates) compute_utils.notify_about_aggregate_update(self._context, "updateprop.end", payload) compute_utils.notify_about_aggregate_action( context=self._context, aggregate=self, action=fields.NotificationAction.UPDATE_PROP, phase=fields.NotificationPhase.END) self._from_db_object(self._context, self, db_aggregate) @base.remotable def update_metadata(self, updates): payload = {'aggregate_id': self.id, 'meta_data': updates} compute_utils.notify_about_aggregate_update(self._context, "updatemetadata.start", payload) compute_utils.notify_about_aggregate_action( context=self._context, aggregate=self, action=fields.NotificationAction.UPDATE_METADATA, phase=fields.NotificationPhase.START) to_add = {} for key, value in updates.items(): if value is None: try: _metadata_delete_from_db(self._context, self.id, key) except exception.AggregateMetadataNotFound: pass try: self.metadata.pop(key) except KeyError: pass else: to_add[key] = value self.metadata[key] = value _metadata_add_to_db(self._context, self.id, to_add) compute_utils.notify_about_aggregate_update(self._context, "updatemetadata.end", payload) compute_utils.notify_about_aggregate_action( context=self._context, aggregate=self, action=fields.NotificationAction.UPDATE_METADATA, phase=fields.NotificationPhase.END) self.obj_reset_changes(fields=['metadata']) @base.remotable def destroy(self): _aggregate_delete_from_db(self._context, self.id) @base.remotable def add_host(self, host): _host_add_to_db(self._context, self.id, host) if self.hosts is None: self.hosts = [] self.hosts.append(host) self.obj_reset_changes(fields=['hosts']) @base.remotable def delete_host(self, host): _host_delete_from_db(self._context, self.id, host) self.hosts.remove(host) self.obj_reset_changes(fields=['hosts']) @property def availability_zone(self): return self.metadata.get('availability_zone', None) @db_api.api_context_manager.reader def _get_all_from_db(context): query = context.session.query(api_models.Aggregate).\ options(joinedload('_hosts')).\ options(joinedload('_metadata')) return query.all() @db_api.api_context_manager.reader def _get_by_host_from_db(context, host, key=None): query = context.session.query(api_models.Aggregate).\ options(joinedload('_hosts')).\ options(joinedload('_metadata')) query = query.join('_hosts') query = 
query.filter(api_models.AggregateHost.host == host) if key: query = query.join("_metadata").filter( api_models.AggregateMetadata.key == key) return query.all() @db_api.api_context_manager.reader def _get_by_metadata_from_db(context, key=None, value=None): assert(key is not None or value is not None) query = context.session.query(api_models.Aggregate) query = query.join("_metadata") if key is not None: query = query.filter(api_models.AggregateMetadata.key == key) if value is not None: query = query.filter(api_models.AggregateMetadata.value == value) query = query.options(contains_eager("_metadata")) query = query.options(joinedload("_hosts")) return query.all() @db_api.api_context_manager.reader def _get_non_matching_by_metadata_keys_from_db(context, ignored_keys, key_prefix, value): """Filter aggregates based on non matching metadata. Find aggregates with at least one ${key_prefix}*[=${value}] metadata where the metadata key are not in the ignored_keys list. :return: Aggregates with any metadata entry: - whose key starts with `key_prefix`; and - whose value is `value` and - whose key is *not* in the `ignored_keys` list. """ if not key_prefix: raise ValueError(_('key_prefix mandatory field.')) query = context.session.query(api_models.Aggregate) query = query.join("_metadata") query = query.filter(api_models.AggregateMetadata.value == value) query = query.filter(api_models.AggregateMetadata.key.like( key_prefix + '%')) if len(ignored_keys) > 0: query = query.filter(~api_models.AggregateMetadata.key.in_( ignored_keys)) query = query.options(contains_eager("_metadata")) query = query.options(joinedload("_hosts")) return query.all() @base.NovaObjectRegistry.register class AggregateList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version # Version 1.1: Added key argument to get_by_host() # Aggregate <= version 1.1 # Version 1.2: Added get_by_metadata_key # Version 1.3: Added get_by_metadata VERSION = '1.3' fields = { 'objects': fields.ListOfObjectsField('Aggregate'), } @classmethod def _filter_db_aggregates(cls, db_aggregates, hosts): if not isinstance(hosts, set): hosts = set(hosts) filtered_aggregates = [] for db_aggregate in db_aggregates: for host in db_aggregate['hosts']: if host in hosts: filtered_aggregates.append(db_aggregate) break return filtered_aggregates @base.remotable_classmethod def get_all(cls, context): db_aggregates = _get_all_from_db(context) return base.obj_make_list(context, cls(context), objects.Aggregate, db_aggregates) @base.remotable_classmethod def get_by_host(cls, context, host, key=None): db_aggregates = _get_by_host_from_db(context, host, key=key) return base.obj_make_list(context, cls(context), objects.Aggregate, db_aggregates) @base.remotable_classmethod def get_by_metadata_key(cls, context, key, hosts=None): db_aggregates = _get_by_metadata_from_db(context, key=key) if hosts is not None: db_aggregates = cls._filter_db_aggregates(db_aggregates, hosts) return base.obj_make_list(context, cls(context), objects.Aggregate, db_aggregates) @base.remotable_classmethod def get_by_metadata(cls, context, key=None, value=None): """Return aggregates with a metadata key set to value. This returns a list of all aggregates that have a metadata key set to some value. If key is specified, then only values for that key will qualify. 
""" db_aggregates = _get_by_metadata_from_db(context, key=key, value=value) return base.obj_make_list(context, cls(context), objects.Aggregate, db_aggregates) @classmethod def get_non_matching_by_metadata_keys(cls, context, ignored_keys, key_prefix, value): """Return aggregates that are not matching with metadata. For example, we have aggregates with metadata as below: 'agg1' with trait:HW_CPU_X86_MMX="required" 'agg2' with trait:HW_CPU_X86_SGX="required" 'agg3' with trait:HW_CPU_X86_MMX="required" 'agg3' with trait:HW_CPU_X86_SGX="required" Assume below request: aggregate_obj.AggregateList.get_non_matching_by_metadata_keys( self.context, ['trait:HW_CPU_X86_MMX'], 'trait:', value='required') It will return 'agg2' and 'agg3' as aggregates that are not matching with metadata. :param context: The security context :param ignored_keys: List of keys to match with the aggregate metadata keys that starts with key_prefix. :param key_prefix: Only compares metadata keys that starts with the key_prefix :param value: Value of metadata :returns: List of aggregates that doesn't match metadata keys that starts with key_prefix with the supplied keys. """ db_aggregates = _get_non_matching_by_metadata_keys_from_db( context, ignored_keys, key_prefix, value) return base.obj_make_list(context, objects.AggregateList(context), objects.Aggregate, db_aggregates) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/bandwidth_usage.py0000664000175000017500000000771100000000000020672 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova.db import api as db from nova.objects import base from nova.objects import fields @base.NovaObjectRegistry.register class BandwidthUsage(base.NovaPersistentObject, base.NovaObject): # Version 1.0: Initial version # Version 1.1: Add use_slave to get_by_instance_uuid_and_mac # Version 1.2: Add update_cells to create VERSION = '1.2' fields = { 'instance_uuid': fields.UUIDField(), 'mac': fields.StringField(), 'start_period': fields.DateTimeField(), 'last_refreshed': fields.DateTimeField(), 'bw_in': fields.IntegerField(), 'bw_out': fields.IntegerField(), 'last_ctr_in': fields.IntegerField(), 'last_ctr_out': fields.IntegerField() } @staticmethod def _from_db_object(context, bw_usage, db_bw_usage): for field in bw_usage.fields: if field == 'instance_uuid': setattr(bw_usage, field, db_bw_usage['uuid']) else: setattr(bw_usage, field, db_bw_usage[field]) bw_usage._context = context bw_usage.obj_reset_changes() return bw_usage @staticmethod @db.select_db_reader_mode def _db_bw_usage_get(context, uuid, start_period, mac, use_slave=False): return db.bw_usage_get(context, uuid=uuid, start_period=start_period, mac=mac) @base.serialize_args @base.remotable_classmethod def get_by_instance_uuid_and_mac(cls, context, instance_uuid, mac, start_period=None, use_slave=False): db_bw_usage = cls._db_bw_usage_get(context, uuid=instance_uuid, start_period=start_period, mac=mac, use_slave=use_slave) if db_bw_usage: return cls._from_db_object(context, cls(), db_bw_usage) # TODO(stephenfin): Remove 'update_cells' in version 2.0 of the object @base.serialize_args @base.remotable def create(self, uuid, mac, bw_in, bw_out, last_ctr_in, last_ctr_out, start_period=None, last_refreshed=None, update_cells=True): db_bw_usage = db.bw_usage_update( self._context, uuid, mac, start_period, bw_in, bw_out, last_ctr_in, last_ctr_out, last_refreshed=last_refreshed) self._from_db_object(self._context, self, db_bw_usage) @base.NovaObjectRegistry.register class BandwidthUsageList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version # Version 1.1: Add use_slave to get_by_uuids # Version 1.2: BandwidthUsage <= version 1.2 VERSION = '1.2' fields = { 'objects': fields.ListOfObjectsField('BandwidthUsage'), } @staticmethod @db.select_db_reader_mode def _db_bw_usage_get_by_uuids(context, uuids, start_period, use_slave=False): return db.bw_usage_get_by_uuids(context, uuids=uuids, start_period=start_period) @base.serialize_args @base.remotable_classmethod def get_by_uuids(cls, context, uuids, start_period=None, use_slave=False): db_bw_usages = cls._db_bw_usage_get_by_uuids(context, uuids=uuids, start_period=start_period, use_slave=use_slave) return base.obj_make_list(context, cls(), BandwidthUsage, db_bw_usages) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/base.py0000664000175000017500000003612500000000000016455 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
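# Illustrative usage sketch for the BandwidthUsage object defined above
# (assumed names: ``ctxt`` is a RequestContext, ``uuid`` an instance uuid and
# ``start`` a datetime marking the start of the audit period):
#
#     usage = objects.BandwidthUsage(context=ctxt)
#     usage.create(uuid, 'fa:16:3e:00:00:01', bw_in=1024, bw_out=2048,
#                  last_ctr_in=10, last_ctr_out=20, start_period=start)
#     found = objects.BandwidthUsage.get_by_instance_uuid_and_mac(
#         ctxt, uuid, 'fa:16:3e:00:00:01', start_period=start)
#
# create() delegates to db.bw_usage_update() with the given counters.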
"""Nova common internal object model""" import contextlib import datetime import functools import traceback import netaddr import oslo_messaging as messaging from oslo_utils import versionutils from oslo_versionedobjects import base as ovoo_base from oslo_versionedobjects import exception as ovoo_exc import six from nova import objects from nova.objects import fields as obj_fields from nova import utils def all_things_equal(obj_a, obj_b): if obj_b is None: return False for name in obj_a.fields: set_a = name in obj_a set_b = name in obj_b if set_a != set_b: return False elif not set_a: continue if getattr(obj_a, name) != getattr(obj_b, name): return False return True def get_attrname(name): """Return the mangled name of the attribute's underlying storage.""" # FIXME(danms): This is just until we use o.vo's class properties # and object base. return '_obj_' + name class NovaObjectRegistry(ovoo_base.VersionedObjectRegistry): notification_classes = [] def registration_hook(self, cls, index): # NOTE(danms): This is called when an object is registered, # and is responsible for maintaining nova.objects.$OBJECT # as the highest-versioned implementation of a given object. version = versionutils.convert_version_to_tuple(cls.VERSION) if not hasattr(objects, cls.obj_name()): setattr(objects, cls.obj_name(), cls) else: cur_version = versionutils.convert_version_to_tuple( getattr(objects, cls.obj_name()).VERSION) if version >= cur_version: setattr(objects, cls.obj_name(), cls) @classmethod def register_notification(cls, notification_cls): """Register a class as notification. Use only to register concrete notification or payload classes, do not register base classes intended for inheritance only. """ cls.register_if(False)(notification_cls) cls.notification_classes.append(notification_cls) return notification_cls @classmethod def register_notification_objects(cls): """Register previously decorated notification as normal ovos. This is not intended for production use but only for testing and document generation purposes. """ for notification_cls in cls.notification_classes: cls.register(notification_cls) remotable_classmethod = ovoo_base.remotable_classmethod remotable = ovoo_base.remotable obj_make_list = ovoo_base.obj_make_list NovaObjectDictCompat = ovoo_base.VersionedObjectDictCompat NovaTimestampObject = ovoo_base.TimestampedObject class NovaObject(ovoo_base.VersionedObject): """Base class and object factory. This forms the base of all objects that can be remoted or instantiated via RPC. Simply defining a class that inherits from this base class will make it remotely instantiatable. Objects should implement the necessary "get" classmethod routines as well as "save" object methods as appropriate. """ OBJ_SERIAL_NAMESPACE = 'nova_object' OBJ_PROJECT_NAMESPACE = 'nova' # NOTE(ndipanov): This is nova-specific @staticmethod def should_migrate_data(): """A check that can be used to inhibit online migration behavior This is usually used to check if all services that will be accessing the db directly are ready for the new format. """ raise NotImplementedError() # NOTE(danms): This is nova-specific @contextlib.contextmanager def obj_alternate_context(self, context): original_context = self._context self._context = context try: yield finally: self._context = original_context class NovaPersistentObject(object): """Mixin class for Persistent objects. This adds the fields that we use in common for most persistent objects. 
""" fields = { 'created_at': obj_fields.DateTimeField(nullable=True), 'updated_at': obj_fields.DateTimeField(nullable=True), 'deleted_at': obj_fields.DateTimeField(nullable=True), 'deleted': obj_fields.BooleanField(default=False), } # NOTE(danms): This is copied from oslo.versionedobjects ahead of # a release. Do not use it directly or modify it. # TODO(danms): Remove this when we can get it from oslo.versionedobjects class EphemeralObject(object): """Mix-in to provide more recognizable field defaulting. If an object should have all fields with a default= set to those values during instantiation, inherit from this class. The base VersionedObject class is designed in such a way that all fields are optional, which makes sense when representing a remote database row where not all columns are transported across RPC and not all columns should be set during an update operation. This is why fields with default= are not set implicitly during object instantiation, to avoid clobbering existing fields in the database. However, objects based on VersionedObject are also used to represent all-or-nothing blobs stored in the database, or even used purely in RPC to represent things that are not ever stored in the database. Thus, this mix-in is provided for these latter object use cases where the desired behavior is to always have default= fields be set at __init__ time. """ def __init__(self, *args, **kwargs): super(EphemeralObject, self).__init__(*args, **kwargs) # Not specifying any fields causes all defaulted fields to be set self.obj_set_defaults() class NovaEphemeralObject(EphemeralObject, NovaObject): """Base class for objects that are not row-column in the DB. Objects that are used purely over RPC (i.e. not persisted) or are written to the database in blob form or otherwise do not represent rows directly as fields should inherit from this object. The principal difference is that fields with a default value will be set at __init__ time instead of requiring manual intervention. """ pass class ObjectListBase(ovoo_base.ObjectListBase): # NOTE(danms): These are for transition to using the oslo # base object and can be removed when we move to it. @classmethod def _obj_primitive_key(cls, field): return 'nova_object.%s' % field @classmethod def _obj_primitive_field(cls, primitive, field, default=obj_fields.UnspecifiedDefault): key = cls._obj_primitive_key(field) if default == obj_fields.UnspecifiedDefault: return primitive[key] else: return primitive.get(key, default) class NovaObjectSerializer(messaging.NoOpSerializer): """A NovaObject-aware Serializer. This implements the Oslo Serializer interface and provides the ability to serialize and deserialize NovaObject entities. Any service that needs to accept or return NovaObjects as arguments or result values should pass this to its RPCClient and RPCServer objects. 
""" @property def conductor(self): if not hasattr(self, '_conductor'): from nova import conductor self._conductor = conductor.API() return self._conductor def _process_object(self, context, objprim): try: objinst = NovaObject.obj_from_primitive(objprim, context=context) except ovoo_exc.IncompatibleObjectVersion: objver = objprim['nova_object.version'] if objver.count('.') == 2: # NOTE(danms): For our purposes, the .z part of the version # should be safe to accept without requiring a backport objprim['nova_object.version'] = \ '.'.join(objver.split('.')[:2]) return self._process_object(context, objprim) objname = objprim['nova_object.name'] version_manifest = ovoo_base.obj_tree_get_versions(objname) if objname in version_manifest: objinst = self.conductor.object_backport_versions( context, objprim, version_manifest) else: raise return objinst def _process_iterable(self, context, action_fn, values): """Process an iterable, taking an action on each value. :param:context: Request context :param:action_fn: Action to take on each item in values :param:values: Iterable container of things to take action on :returns: A new container of the same type (except set) with items from values having had action applied. """ iterable = values.__class__ if issubclass(iterable, dict): return iterable(**{k: action_fn(context, v) for k, v in values.items()}) else: # NOTE(danms, gibi) A set can't have an unhashable value inside, # such as a dict. Convert the set to list, which is fine, since we # can't send them over RPC anyway. We convert it to list as this # way there will be no semantic change between the fake rpc driver # used in functional test and a normal rpc driver. if iterable == set: iterable = list return iterable([action_fn(context, value) for value in values]) def serialize_entity(self, context, entity): if isinstance(entity, (tuple, list, set, dict)): entity = self._process_iterable(context, self.serialize_entity, entity) elif (hasattr(entity, 'obj_to_primitive') and callable(entity.obj_to_primitive)): entity = entity.obj_to_primitive() return entity def deserialize_entity(self, context, entity): if isinstance(entity, dict) and 'nova_object.name' in entity: entity = self._process_object(context, entity) elif isinstance(entity, (tuple, list, set, dict)): entity = self._process_iterable(context, self.deserialize_entity, entity) return entity def obj_to_primitive(obj): """Recursively turn an object into a python primitive. A NovaObject becomes a dict, and anything that implements ObjectListBase becomes a list. """ if isinstance(obj, ObjectListBase): return [obj_to_primitive(x) for x in obj] elif isinstance(obj, NovaObject): result = {} for key in obj.obj_fields: if obj.obj_attr_is_set(key) or key in obj.obj_extra_fields: result[key] = obj_to_primitive(getattr(obj, key)) return result elif isinstance(obj, netaddr.IPAddress): return str(obj) elif isinstance(obj, netaddr.IPNetwork): return str(obj) else: return obj def obj_make_dict_of_lists(context, list_cls, obj_list, item_key): """Construct a dictionary of object lists, keyed by item_key. 
:param:context: Request context :param:list_cls: The ObjectListBase class :param:obj_list: The list of objects to place in the dictionary :param:item_key: The object attribute name to use as a dictionary key """ obj_lists = {} for obj in obj_list: key = getattr(obj, item_key) if key not in obj_lists: obj_lists[key] = list_cls() obj_lists[key].objects = [] obj_lists[key].objects.append(obj) for key in obj_lists: obj_lists[key]._context = context obj_lists[key].obj_reset_changes() return obj_lists def serialize_args(fn): """Decorator that will do the arguments serialization before remoting.""" def wrapper(obj, *args, **kwargs): args = [utils.strtime(arg) if isinstance(arg, datetime.datetime) else arg for arg in args] for k, v in kwargs.items(): if k == 'exc_val' and v: try: # NOTE(danms): When we run this for a remotable method, # we need to attempt to format_message() the exception to # get the sanitized message, and if it's not a # NovaException, fall back to just the exception class # name. However, a remotable will end up calling this again # on the other side of the RPC call, so we must not try # to do that again, otherwise we will always end up with # just str. So, only do that if exc_val is an Exception # class. kwargs[k] = (v.format_message() if isinstance(v, Exception) else v) except Exception: kwargs[k] = v.__class__.__name__ elif k == 'exc_tb' and v and not isinstance(v, six.string_types): kwargs[k] = ''.join(traceback.format_tb(v)) elif isinstance(v, datetime.datetime): kwargs[k] = utils.strtime(v) if hasattr(fn, '__call__'): return fn(obj, *args, **kwargs) # NOTE(danms): We wrap a descriptor, so use that protocol return fn.__get__(None, obj)(*args, **kwargs) # NOTE(danms): Make this discoverable wrapper.remotable = getattr(fn, 'remotable', False) wrapper.original_fn = fn return (functools.wraps(fn)(wrapper) if hasattr(fn, '__call__') else classmethod(wrapper)) def obj_equal_prims(obj_1, obj_2, ignore=None): """Compare two primitives for equivalence ignoring some keys. This operation tests the primitives of two objects for equivalence. Object primitives may contain a list identifying fields that have been changed - this is ignored in the comparison. The ignore parameter lists any other keys to be ignored. :param:obj1: The first object in the comparison :param:obj2: The second object in the comparison :param:ignore: A list of fields to ignore :returns: True if the primitives are equal ignoring changes and specified fields, otherwise False. """ def _strip(prim, keys): if isinstance(prim, dict): for k in keys: prim.pop(k, None) for v in prim.values(): _strip(v, keys) if isinstance(prim, list): for v in prim: _strip(v, keys) return prim if ignore is not None: keys = ['nova_object.changes'] + ignore else: keys = ['nova_object.changes'] prim_1 = _strip(obj_1.obj_to_primitive(), keys) prim_2 = _strip(obj_2.obj_to_primitive(), keys) return prim_1 == prim_2 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/block_device.py0000664000175000017500000004315100000000000020151 0ustar00zuulzuul00000000000000# Copyright 2013 Red Hat Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_db import api as oslo_db_api from oslo_db.sqlalchemy import update_match from oslo_log import log as logging from oslo_utils import uuidutils from oslo_utils import versionutils from nova import block_device from nova.db import api as db from nova.db.sqlalchemy import api as db_api from nova.db.sqlalchemy import models as db_models from nova import exception from nova.i18n import _ from nova import objects from nova.objects import base from nova.objects import fields LOG = logging.getLogger(__name__) _BLOCK_DEVICE_OPTIONAL_JOINED_FIELD = ['instance'] BLOCK_DEVICE_OPTIONAL_ATTRS = _BLOCK_DEVICE_OPTIONAL_JOINED_FIELD def _expected_cols(expected_attrs): return [attr for attr in expected_attrs if attr in _BLOCK_DEVICE_OPTIONAL_JOINED_FIELD] # TODO(berrange): Remove NovaObjectDictCompat @base.NovaObjectRegistry.register class BlockDeviceMapping(base.NovaPersistentObject, base.NovaObject, base.NovaObjectDictCompat): # Version 1.0: Initial version # Version 1.1: Add instance_uuid to get_by_volume_id method # Version 1.2: Instance version 1.14 # Version 1.3: Instance version 1.15 # Version 1.4: Instance version 1.16 # Version 1.5: Instance version 1.17 # Version 1.6: Instance version 1.18 # Version 1.7: Add update_or_create method # Version 1.8: Instance version 1.19 # Version 1.9: Instance version 1.20 # Version 1.10: Changed source_type field to BlockDeviceSourceTypeField. # Version 1.11: Changed destination_type field to # BlockDeviceDestinationTypeField. # Version 1.12: Changed device_type field to BlockDeviceTypeField. # Version 1.13: Instance version 1.21 # Version 1.14: Instance version 1.22 # Version 1.15: Instance version 1.23 # Version 1.16: Deprecate get_by_volume_id(), add # get_by_volume() and get_by_volume_and_instance() # Version 1.17: Added tag field # Version 1.18: Added attachment_id # Version 1.19: Added uuid # Version 1.20: Added volume_type VERSION = '1.20' fields = { 'id': fields.IntegerField(), 'uuid': fields.UUIDField(), 'instance_uuid': fields.UUIDField(), 'instance': fields.ObjectField('Instance', nullable=True), 'source_type': fields.BlockDeviceSourceTypeField(nullable=True), 'destination_type': fields.BlockDeviceDestinationTypeField( nullable=True), 'guest_format': fields.StringField(nullable=True), 'device_type': fields.BlockDeviceTypeField(nullable=True), 'disk_bus': fields.StringField(nullable=True), 'boot_index': fields.IntegerField(nullable=True), 'device_name': fields.StringField(nullable=True), 'delete_on_termination': fields.BooleanField(default=False), 'snapshot_id': fields.StringField(nullable=True), 'volume_id': fields.StringField(nullable=True), 'volume_size': fields.IntegerField(nullable=True), 'image_id': fields.StringField(nullable=True), 'no_device': fields.BooleanField(default=False), 'connection_info': fields.SensitiveStringField(nullable=True), 'tag': fields.StringField(nullable=True), 'attachment_id': fields.UUIDField(nullable=True), # volume_type field can be a volume type name or ID(UUID). 
'volume_type': fields.StringField(nullable=True), } def obj_make_compatible(self, primitive, target_version): target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 20) and 'volume_type' in primitive: del primitive['volume_type'] if target_version < (1, 19) and 'uuid' in primitive: del primitive['uuid'] if target_version < (1, 18) and 'attachment_id' in primitive: del primitive['attachment_id'] if target_version < (1, 17) and 'tag' in primitive: del primitive['tag'] @classmethod def populate_uuids(cls, context, count): @db_api.pick_context_manager_reader def get_bdms_no_uuid(context): return context.session.query(db_models.BlockDeviceMapping).\ filter_by(uuid=None).limit(count).all() db_bdms = get_bdms_no_uuid(context) done = 0 for db_bdm in db_bdms: cls._create_uuid(context, db_bdm['id']) done += 1 return done, done @staticmethod @oslo_db_api.wrap_db_retry(max_retries=1, retry_on_deadlock=True) def _create_uuid(context, bdm_id): # NOTE(mdbooth): This method is only required until uuid is made # non-nullable in a future release. # NOTE(mdbooth): We wrap this method in a retry loop because it can # fail (safely) on multi-master galera if concurrent updates happen on # different masters. It will never fail on single-master. We can only # ever need one retry. uuid = uuidutils.generate_uuid() values = {'uuid': uuid} compare = db_models.BlockDeviceMapping(id=bdm_id, uuid=None) # NOTE(mdbooth): We explicitly use an independent transaction context # here so as not to fail if: # 1. We retry. # 2. We're in a read transaction.This is an edge case of what's # normally a read operation. Forcing everything (transitively) # which reads a BDM to be in a write transaction for a narrow # temporary edge case is undesirable. tctxt = db_api.get_context_manager(context).writer.independent with tctxt.using(context): query = context.session.query(db_models.BlockDeviceMapping).\ filter_by(id=bdm_id) try: query.update_on_match(compare, 'id', values) except update_match.NoRowsMatched: # We can only get here if we raced, and another writer already # gave this bdm a uuid result = query.one() uuid = result['uuid'] assert(uuid is not None) return uuid @classmethod def _from_db_object(cls, context, block_device_obj, db_block_device, expected_attrs=None): if expected_attrs is None: expected_attrs = [] for key in block_device_obj.fields: if key in BLOCK_DEVICE_OPTIONAL_ATTRS: continue if key == 'uuid' and not db_block_device.get(key): # NOTE(danms): While the records could be nullable, # generate a UUID on read since the object requires it bdm_id = db_block_device['id'] db_block_device[key] = cls._create_uuid(context, bdm_id) block_device_obj[key] = db_block_device[key] if 'instance' in expected_attrs: my_inst = objects.Instance(context) my_inst._from_db_object(context, my_inst, db_block_device['instance']) block_device_obj.instance = my_inst block_device_obj._context = context block_device_obj.obj_reset_changes() return block_device_obj def _create(self, context, update_or_create=False): """Create the block device record in the database. In case the id field is set on the object, and if the instance is set raise an ObjectActionError. Resets all the changes on the object. Returns None :param context: security context used for database calls :param update_or_create: consider existing block devices for the instance based on the device name and swap, and only update the ones that match. Normally only used when creating the instance for the first time. 
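# NOTE: Illustrative sketch, not part of this module. It shows the general
# shape of the obj_make_compatible() down-levelling above: fields added in
# newer object versions are dropped from the primitive so that an older
# receiver never sees keys it does not understand. The field/version pairs
# below are examples only.
from oslo_utils import versionutils

_FIELD_ADDED_IN = [((1, 17), 'tag'), ((1, 18), 'attachment_id'),
                   ((1, 19), 'uuid'), ((1, 20), 'volume_type')]


def make_compatible(primitive, target_version):
    target = versionutils.convert_version_to_tuple(target_version)
    for added_in, field in _FIELD_ADDED_IN:
        if target < added_in and field in primitive:
            del primitive[field]
    return primitive


prim = {'volume_id': 'vol-1', 'tag': 'db-disk', 'volume_type': 'fast'}
print(make_compatible(dict(prim), '1.16'))  # drops 'tag' and 'volume_type'
print(make_compatible(dict(prim), '1.19'))  # drops only 'volume_type'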
""" if self.obj_attr_is_set('id'): raise exception.ObjectActionError(action='create', reason='already created') updates = self.obj_get_changes() if 'instance' in updates: raise exception.ObjectActionError(action='create', reason='instance assigned') if update_or_create: db_bdm = db.block_device_mapping_update_or_create( context, updates, legacy=False) else: db_bdm = db.block_device_mapping_create( context, updates, legacy=False) self._from_db_object(context, self, db_bdm) @base.remotable def create(self): self._create(self._context) @base.remotable def update_or_create(self): self._create(self._context, update_or_create=True) @base.remotable def destroy(self): if not self.obj_attr_is_set('id'): raise exception.ObjectActionError(action='destroy', reason='already destroyed') db.block_device_mapping_destroy(self._context, self.id) delattr(self, base.get_attrname('id')) @base.remotable def save(self): updates = self.obj_get_changes() if 'instance' in updates: raise exception.ObjectActionError(action='save', reason='instance changed') updates.pop('id', None) updated = db.block_device_mapping_update(self._context, self.id, updates, legacy=False) if not updated: raise exception.BDMNotFound(id=self.id) self._from_db_object(self._context, self, updated) # NOTE(danms): This method is deprecated and will be removed in # v2.0 of the object @base.remotable_classmethod def get_by_volume_id(cls, context, volume_id, instance_uuid=None, expected_attrs=None): if expected_attrs is None: expected_attrs = [] db_bdms = db.block_device_mapping_get_all_by_volume_id( context, volume_id, _expected_cols(expected_attrs)) if not db_bdms: raise exception.VolumeBDMNotFound(volume_id=volume_id) if len(db_bdms) > 1: LOG.warning('Legacy get_by_volume_id() call found multiple ' 'BDMs for volume %(volume)s', {'volume': volume_id}) db_bdm = db_bdms[0] # NOTE (ndipanov): Move this to the db layer into a # get_by_instance_and_volume_id method if instance_uuid and instance_uuid != db_bdm['instance_uuid']: raise exception.InvalidVolume( reason=_("Volume does not belong to the " "requested instance.")) return cls._from_db_object(context, cls(), db_bdm, expected_attrs=expected_attrs) @base.remotable_classmethod def get_by_volume_and_instance(cls, context, volume_id, instance_uuid, expected_attrs=None): if expected_attrs is None: expected_attrs = [] db_bdm = db.block_device_mapping_get_by_instance_and_volume_id( context, volume_id, instance_uuid, _expected_cols(expected_attrs)) if not db_bdm: raise exception.VolumeBDMNotFound(volume_id=volume_id) return cls._from_db_object(context, cls(), db_bdm, expected_attrs=expected_attrs) @base.remotable_classmethod def get_by_volume(cls, context, volume_id, expected_attrs=None): if expected_attrs is None: expected_attrs = [] db_bdms = db.block_device_mapping_get_all_by_volume_id( context, volume_id, _expected_cols(expected_attrs)) if not db_bdms: raise exception.VolumeBDMNotFound(volume_id=volume_id) if len(db_bdms) > 1: raise exception.VolumeBDMIsMultiAttach(volume_id=volume_id) return cls._from_db_object(context, cls(), db_bdms[0], expected_attrs=expected_attrs) @property def is_root(self): return self.boot_index == 0 @property def is_volume(self): return (self.destination_type == fields.BlockDeviceDestinationType.VOLUME) @property def is_image(self): return self.source_type == fields.BlockDeviceSourceType.IMAGE def get_image_mapping(self): return block_device.BlockDeviceDict(self).get_image_mapping() def obj_load_attr(self, attrname): if attrname not in BLOCK_DEVICE_OPTIONAL_ATTRS: raise 
exception.ObjectActionError( action='obj_load_attr', reason='attribute %s not lazy-loadable' % attrname) if not self._context: raise exception.OrphanedObjectError(method='obj_load_attr', objtype=self.obj_name()) LOG.debug("Lazy-loading '%(attr)s' on %(name)s using uuid %(uuid)s", {'attr': attrname, 'name': self.obj_name(), 'uuid': self.instance_uuid, }) self.instance = objects.Instance.get_by_uuid(self._context, self.instance_uuid) self.obj_reset_changes(fields=['instance']) @base.NovaObjectRegistry.register class BlockDeviceMappingList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version # Version 1.1: BlockDeviceMapping <= version 1.1 # Version 1.2: Added use_slave to get_by_instance_uuid # Version 1.3: BlockDeviceMapping <= version 1.2 # Version 1.4: BlockDeviceMapping <= version 1.3 # Version 1.5: BlockDeviceMapping <= version 1.4 # Version 1.6: BlockDeviceMapping <= version 1.5 # Version 1.7: BlockDeviceMapping <= version 1.6 # Version 1.8: BlockDeviceMapping <= version 1.7 # Version 1.9: BlockDeviceMapping <= version 1.8 # Version 1.10: BlockDeviceMapping <= version 1.9 # Version 1.11: BlockDeviceMapping <= version 1.10 # Version 1.12: BlockDeviceMapping <= version 1.11 # Version 1.13: BlockDeviceMapping <= version 1.12 # Version 1.14: BlockDeviceMapping <= version 1.13 # Version 1.15: BlockDeviceMapping <= version 1.14 # Version 1.16: BlockDeviceMapping <= version 1.15 # Version 1.17: Add get_by_instance_uuids() VERSION = '1.17' fields = { 'objects': fields.ListOfObjectsField('BlockDeviceMapping'), } @property def instance_uuids(self): return set( bdm.instance_uuid for bdm in self if bdm.obj_attr_is_set('instance_uuid') ) @classmethod def bdms_by_instance_uuid(cls, context, instance_uuids): bdms = cls.get_by_instance_uuids(context, instance_uuids) return base.obj_make_dict_of_lists( context, cls, bdms, 'instance_uuid') @staticmethod @db.select_db_reader_mode def _db_block_device_mapping_get_all_by_instance_uuids( context, instance_uuids, use_slave=False): return db.block_device_mapping_get_all_by_instance_uuids( context, instance_uuids) @base.remotable_classmethod def get_by_instance_uuids(cls, context, instance_uuids, use_slave=False): db_bdms = cls._db_block_device_mapping_get_all_by_instance_uuids( context, instance_uuids, use_slave=use_slave) return base.obj_make_list( context, cls(), objects.BlockDeviceMapping, db_bdms or []) @staticmethod @db.select_db_reader_mode def _db_block_device_mapping_get_all_by_instance( context, instance_uuid, use_slave=False): return db.block_device_mapping_get_all_by_instance( context, instance_uuid) @base.remotable_classmethod def get_by_instance_uuid(cls, context, instance_uuid, use_slave=False): db_bdms = cls._db_block_device_mapping_get_all_by_instance( context, instance_uuid, use_slave=use_slave) return base.obj_make_list( context, cls(), objects.BlockDeviceMapping, db_bdms or []) def root_bdm(self): """It only makes sense to call this method when the BlockDeviceMappingList contains BlockDeviceMappings from exactly one instance rather than BlockDeviceMappings from multiple instances. For example, you should not call this method from a BlockDeviceMappingList created by get_by_instance_uuids(), but you may call this method from a BlockDeviceMappingList created by get_by_instance_uuid(). 
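# NOTE: Illustrative usage sketch, not part of this module. It assumes a
# request context (ctxt) from a running Nova service and an instance whose
# BDMs have device_name and boot_index populated; it only shows the
# intended call pattern of BlockDeviceMappingList and root_bdm().
from nova import objects


def describe_root_disk(ctxt, instance_uuid):
    bdms = objects.BlockDeviceMappingList.get_by_instance_uuid(
        ctxt, instance_uuid)
    # Safe here because every BDM in the list belongs to a single instance.
    root = bdms.root_bdm()
    if root is None:
        return 'no root device mapping'
    return 'root device %s (volume-backed: %s)' % (root.device_name,
                                                   root.is_volume)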
""" if len(self.instance_uuids) > 1: raise exception.UndefinedRootBDM() try: return next(bdm_obj for bdm_obj in self if bdm_obj.is_root) except StopIteration: return def block_device_make_list(context, db_list, **extra_args): return base.obj_make_list(context, objects.BlockDeviceMappingList(context), objects.BlockDeviceMapping, db_list, **extra_args) def block_device_make_list_from_dicts(context, bdm_dicts_list): bdm_objects = [objects.BlockDeviceMapping(context=context, **bdm) for bdm in bdm_dicts_list] return BlockDeviceMappingList(objects=bdm_objects) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/build_request.py0000664000175000017500000004720000000000000020406 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import functools import re from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import versionutils from oslo_versionedobjects import exception as ovoo_exc import six from nova.db.sqlalchemy import api as db from nova.db.sqlalchemy import api_models from nova import exception from nova import objects from nova.objects import base from nova.objects import fields LOG = logging.getLogger(__name__) @base.NovaObjectRegistry.register class BuildRequest(base.NovaObject): # Version 1.0: Initial version # Version 1.1: Added block_device_mappings # Version 1.2: Added save() method # Version 1.3: Added tags VERSION = '1.3' fields = { 'id': fields.IntegerField(), 'instance_uuid': fields.UUIDField(), 'project_id': fields.StringField(), 'instance': fields.ObjectField('Instance'), 'block_device_mappings': fields.ObjectField('BlockDeviceMappingList'), # NOTE(alaski): Normally these would come from the NovaPersistentObject # mixin but they're being set explicitly because we only need # created_at/updated_at. There is no soft delete for this object. 'created_at': fields.DateTimeField(nullable=True), 'updated_at': fields.DateTimeField(nullable=True), 'tags': fields.ObjectField('TagList'), } def obj_make_compatible(self, primitive, target_version): super(BuildRequest, self).obj_make_compatible(primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 1) and 'block_device_mappings' in primitive: del primitive['block_device_mappings'] elif target_version < (1, 3) and 'tags' in primitive: del primitive['tags'] def _load_instance(self, db_instance): # NOTE(alaski): Be very careful with instance loading because it # changes more than most objects. try: self.instance = objects.Instance.obj_from_primitive( jsonutils.loads(db_instance)) except TypeError: LOG.debug('Failed to load instance from BuildRequest with uuid ' '%s because it is None', self.instance_uuid) raise exception.BuildRequestNotFound(uuid=self.instance_uuid) except ovoo_exc.IncompatibleObjectVersion: # This should only happen if proper service upgrade strategies are # not followed. Log the exception and raise BuildRequestNotFound. 
# If the instance can't be loaded this object is useless and may # as well not exist. LOG.debug('Could not deserialize instance store in BuildRequest ' 'with uuid %(instance_uuid)s. Found version %(version)s ' 'which is not supported here.', dict(instance_uuid=self.instance_uuid, version=jsonutils.loads( db_instance)["nova_object.version"])) LOG.exception('Could not deserialize instance in BuildRequest') raise exception.BuildRequestNotFound(uuid=self.instance_uuid) # NOTE(sbauza): The instance primitive should already have the deleted # field being set, so when hydrating it back here, we should get the # right value but in case we don't have it, let's suppose that the # instance is not deleted, which is the default value for that field. # NOTE(mriedem): Same for the "hidden" field. self.instance.obj_set_defaults('deleted', 'hidden') # NOTE(alaski): Set some fields on instance that are needed by the api, # not lazy-loadable, and don't change. self.instance.disable_terminate = False self.instance.terminated_at = None self.instance.host = None self.instance.node = None self.instance.launched_at = None self.instance.launched_on = None self.instance.cell_name = None # The fields above are not set until the instance is in a cell at # which point this BuildRequest will be gone. locked_by could # potentially be set by an update so it should not be overwritten. if not self.instance.obj_attr_is_set('locked_by'): self.instance.locked_by = None # created_at/updated_at are not on the serialized instance because it # was never persisted. self.instance.created_at = self.created_at self.instance.updated_at = self.updated_at self.instance.tags = self.tags def _load_block_device_mappings(self, db_bdms): # 'db_bdms' is a serialized BlockDeviceMappingList object. If it's None # we're in a mixed version nova-api scenario and can't retrieve the # actual list. Set it to an empty list here which will cause a # temporary API inconsistency that will be resolved as soon as the # instance is scheduled and on a compute. if db_bdms is None: LOG.debug('Failed to load block_device_mappings from BuildRequest ' 'for instance %s because it is None', self.instance_uuid) self.block_device_mappings = objects.BlockDeviceMappingList() return self.block_device_mappings = ( objects.BlockDeviceMappingList.obj_from_primitive( jsonutils.loads(db_bdms))) def _load_tags(self, db_tags): # 'db_tags' is a serialized TagList object. If it's None # we're in a mixed version nova-api scenario and can't retrieve the # actual list. Set it to an empty list here which will cause a # temporary API inconsistency that will be resolved as soon as the # instance is scheduled and on a compute. if db_tags is None: LOG.debug('Failed to load tags from BuildRequest ' 'for instance %s because it is None', self.instance_uuid) self.tags = objects.TagList() return self.tags = ( objects.TagList.obj_from_primitive( jsonutils.loads(db_tags))) @staticmethod def _from_db_object(context, req, db_req): # Set this up front so that it can be pulled for error messages or # logging at any point. 
req.instance_uuid = db_req['instance_uuid'] for key in req.fields: if key == 'instance': continue elif isinstance(req.fields[key], fields.ObjectField): try: getattr(req, '_load_%s' % key)(db_req[key]) except AttributeError: LOG.exception('No load handler for %s', key) else: setattr(req, key, db_req[key]) # Load instance last because other fields on req may be referenced req._load_instance(db_req['instance']) req.obj_reset_changes(recursive=True) req._context = context return req @staticmethod @db.api_context_manager.reader def _get_by_instance_uuid_from_db(context, instance_uuid): db_req = context.session.query(api_models.BuildRequest).filter_by( instance_uuid=instance_uuid).first() if not db_req: raise exception.BuildRequestNotFound(uuid=instance_uuid) return db_req @base.remotable_classmethod def get_by_instance_uuid(cls, context, instance_uuid): db_req = cls._get_by_instance_uuid_from_db(context, instance_uuid) return cls._from_db_object(context, cls(), db_req) @staticmethod @db.api_context_manager.writer def _create_in_db(context, updates): db_req = api_models.BuildRequest() db_req.update(updates) db_req.save(context.session) return db_req def _get_update_primitives(self): updates = self.obj_get_changes() for key, value in updates.items(): if isinstance(self.fields[key], fields.ObjectField): updates[key] = jsonutils.dumps(value.obj_to_primitive()) return updates @base.remotable def create(self): if self.obj_attr_is_set('id'): raise exception.ObjectActionError(action='create', reason='already created') if not self.obj_attr_is_set('instance_uuid'): # We can't guarantee this is not null in the db so check here raise exception.ObjectActionError(action='create', reason='instance_uuid must be set') updates = self._get_update_primitives() db_req = self._create_in_db(self._context, updates) self._from_db_object(self._context, self, db_req) @staticmethod @db.api_context_manager.writer def _destroy_in_db(context, instance_uuid): result = context.session.query(api_models.BuildRequest).filter_by( instance_uuid=instance_uuid).delete() if not result: raise exception.BuildRequestNotFound(uuid=instance_uuid) @base.remotable def destroy(self): self._destroy_in_db(self._context, self.instance_uuid) @db.api_context_manager.writer def _save_in_db(self, context, req_id, updates): db_req = context.session.query( api_models.BuildRequest).filter_by(id=req_id).first() if not db_req: raise exception.BuildRequestNotFound(uuid=self.instance_uuid) db_req.update(updates) context.session.add(db_req) return db_req @base.remotable def save(self): updates = self._get_update_primitives() db_req = self._save_in_db(self._context, self.id, updates) self._from_db_object(self._context, self, db_req) def get_new_instance(self, context): # NOTE(danms): This is a hack to make sure that the returned # instance has all dirty fields. There are probably better # ways to do this, but they kinda involve o.vo internals # so this is okay for the moment. instance = objects.Instance(context) for field in self.instance.obj_fields: # NOTE(danms): Don't copy the defaulted tags field # as instance.create() won't handle it properly. # TODO(zhengzhenyu): Handle this when the API supports creating # servers with tags. 
if field == 'tags': continue if self.instance.obj_attr_is_set(field): setattr(instance, field, getattr(self.instance, field)) return instance @base.NovaObjectRegistry.register class BuildRequestList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version VERSION = '1.0' fields = { 'objects': fields.ListOfObjectsField('BuildRequest'), } @staticmethod @db.api_context_manager.reader def _get_all_from_db(context): query = context.session.query(api_models.BuildRequest) if not context.is_admin: query = query.filter_by(project_id=context.project_id) db_reqs = query.all() return db_reqs @base.remotable_classmethod def get_all(cls, context): db_build_reqs = cls._get_all_from_db(context) return base.obj_make_list(context, cls(context), objects.BuildRequest, db_build_reqs) @staticmethod def _pass_exact_filters(instance, filters): for filter_key, filter_val in filters.items(): if filter_key in ('metadata', 'system_metadata'): if isinstance(filter_val, list): for item in filter_val: for k, v in item.items(): if (k not in instance.metadata or v != instance.metadata[k]): return False else: for k, v in filter_val.items(): if (k not in instance.metadata or v != instance.metadata[k]): return False elif filter_key in ( 'tags', 'tags-any', 'not-tags', 'not-tags-any'): # Get the list of simple string tags first. tags = ([tag.tag for tag in instance.tags] if instance.tags else []) if filter_key == 'tags': for item in filter_val: if item not in tags: return False elif filter_key == 'tags-any': found = [] for item in filter_val: if item in tags: found.append(item) if not found: return False elif filter_key == 'not-tags': found = [] for item in filter_val: if item in tags: found.append(item) if len(found) == len(filter_val): return False elif filter_key == 'not-tags-any': for item in filter_val: if item in tags: return False elif isinstance(filter_val, (list, tuple, set, frozenset)): if not filter_val: # Special value to indicate that nothing will match. return None if instance.get(filter_key, None) not in filter_val: return False else: if instance.get(filter_key, None) != filter_val: return False return True @staticmethod def _pass_regex_filters(instance, filters): for filter_name, filter_val in filters.items(): try: instance_attr = getattr(instance, filter_name) except AttributeError: continue # Sometimes the REGEX filter value is not a string if not isinstance(filter_val, six.string_types): filter_val = str(filter_val) filter_re = re.compile(filter_val) if instance_attr and not filter_re.search(str(instance_attr)): return False return True @staticmethod def _sort_build_requests(build_req_list, sort_keys, sort_dirs): # build_req_list is a [] of build_reqs sort_keys.reverse() sort_dirs.reverse() def sort_attr(sort_key, build_req): if sort_key == 'id': # 'id' is not set on the instance yet. Use the BuildRequest # 'id' instead. return build_req.id return getattr(build_req.instance, sort_key) for sort_key, sort_dir in zip(sort_keys, sort_dirs): reverse = False if sort_dir.lower().startswith('asc') else True build_req_list.sort(key=functools.partial(sort_attr, sort_key), reverse=reverse) return build_req_list @base.remotable_classmethod def get_by_filters(cls, context, filters, limit=None, marker=None, sort_keys=None, sort_dirs=None): # Short-circuit on anything that will not yield results. # 'deleted' records can not be returned from here since build_requests # are not soft deleted. # 'cleaned' records won't exist as they would need to be deleted. 
if (limit == 0 or filters.get('deleted', False) or filters.get('cleaned', False)): # If we have a marker honor the MarkerNotFound semantics. if marker: raise exception.MarkerNotFound(marker=marker) return cls(context, objects=[]) # Because the build_requests table stores an instance as a serialized # versioned object it is not feasible to do the filtering and sorting # in the database. Just get all potentially relevant records and # process them here. It should be noted that build requests are short # lived so there should not be a lot of results to deal with. build_requests = cls.get_all(context) # Fortunately some filters do not apply here. # 'changes-since' works off of the updated_at field which has not yet # been set at the point in the boot process where build_request still # exists. So it can be ignored. # 'deleted' and 'cleaned' are handled above. sort_keys, sort_dirs = db.process_sort_params(sort_keys, sort_dirs, default_dir='desc') # For other filters that don't match this, we will do regexp matching # Taken from db/sqlalchemy/api.py exact_match_filter_names = ['project_id', 'user_id', 'image_ref', 'vm_state', 'instance_type_id', 'uuid', 'metadata', 'host', 'task_state', 'system_metadata', 'tags', 'tags-any', 'not-tags', 'not-tags-any'] exact_filters = {} regex_filters = {} for key, value in filters.items(): if key in exact_match_filter_names: exact_filters[key] = value else: regex_filters[key] = value # As much as possible this copies the logic from db/sqlalchemy/api.py # instance_get_all_by_filters_sort. The main difference is that method # builds a sql query and this filters in python. filtered_build_reqs = [] for build_req in build_requests: instance = build_req.instance filter_result = cls._pass_exact_filters(instance, exact_filters) if filter_result is None: # The filter condition is such that nothing will match. # Bail early. return cls(context, objects=[]) if filter_result is False: continue if not cls._pass_regex_filters(instance, regex_filters): continue filtered_build_reqs.append(build_req) if (((len(filtered_build_reqs) < 2) or (not sort_keys)) and not marker): # No need to sort return cls(context, objects=filtered_build_reqs) sorted_build_reqs = cls._sort_build_requests(filtered_build_reqs, sort_keys, sort_dirs) marker_index = 0 if marker: for i, build_req in enumerate(sorted_build_reqs): if build_req.instance.uuid == marker: # The marker is the last seen item in the last page, so # we increment the index to the next item immediately # after the marker so the marker is not returned. marker_index = i + 1 break else: raise exception.MarkerNotFound(marker=marker) len_build_reqs = len(sorted_build_reqs) limit_index = len_build_reqs if limit: limit_index = marker_index + limit if limit_index > len_build_reqs: limit_index = len_build_reqs return cls(context, objects=sorted_build_reqs[marker_index:limit_index]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/cell_mapping.py0000664000175000017500000002372200000000000020174 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from oslo_utils import versionutils import six.moves.urllib.parse as urlparse from sqlalchemy.sql.expression import asc from sqlalchemy.sql import false from sqlalchemy.sql import true import nova.conf from nova.db.sqlalchemy import api as db_api from nova.db.sqlalchemy import api_models from nova import exception from nova.objects import base from nova.objects import fields CONF = nova.conf.CONF LOG = logging.getLogger(__name__) def _parse_netloc(netloc): """Parse a user:pass@host:port and return a dict suitable for formatting a cell mapping template. """ these = { 'username': None, 'password': None, 'hostname': None, 'port': None, } if '@' in netloc: userpass, hostport = netloc.split('@', 1) else: hostport = netloc userpass = '' if hostport.startswith('['): host_end = hostport.find(']') if host_end < 0: raise ValueError('Invalid IPv6 URL') these['hostname'] = hostport[1:host_end] these['port'] = hostport[host_end + 1:] elif ':' in hostport: these['hostname'], these['port'] = hostport.split(':', 1) else: these['hostname'] = hostport if ':' in userpass: these['username'], these['password'] = userpass.split(':', 1) else: these['username'] = userpass return these @base.NovaObjectRegistry.register class CellMapping(base.NovaTimestampObject, base.NovaObject): # Version 1.0: Initial version # Version 1.1: Added disabled field VERSION = '1.1' CELL0_UUID = '00000000-0000-0000-0000-000000000000' fields = { 'id': fields.IntegerField(read_only=True), 'uuid': fields.UUIDField(), 'name': fields.StringField(nullable=True), 'transport_url': fields.StringField(), 'database_connection': fields.StringField(), 'disabled': fields.BooleanField(default=False), } def obj_make_compatible(self, primitive, target_version): super(CellMapping, self).obj_make_compatible(primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 1): if 'disabled' in primitive: del primitive['disabled'] @property def identity(self): if 'name' in self and self.name: return '%s(%s)' % (self.uuid, self.name) else: return self.uuid @staticmethod def _format_url(url, default): default_url = urlparse.urlparse(default) subs = { 'username': default_url.username, 'password': default_url.password, 'hostname': default_url.hostname, 'port': default_url.port, 'scheme': default_url.scheme, 'query': default_url.query, 'fragment': default_url.fragment, 'path': default_url.path.lstrip('/'), } # NOTE(danms): oslo.messaging has an extended format for the URL # which we need to support: # scheme://user:pass@host:port[,user1:pass@host1:port, ...]/path # Encode these values, if they exist, as indexed keys like # username1, password1, hostname1, port1. 
if ',' in default_url.netloc: netlocs = default_url.netloc.split(',') index = 0 for netloc in netlocs: index += 1 these = _parse_netloc(netloc) for key in these: subs['%s%i' % (key, index)] = these[key] return url.format(**subs) @staticmethod def format_db_url(url): if CONF.database.connection is None: if '{' in url: LOG.error('Cell mapping database_connection is a template, ' 'but [database]/connection is not set') return url try: return CellMapping._format_url(url, CONF.database.connection) except Exception: LOG.exception('Failed to parse [database]/connection to ' 'format cell mapping') return url @staticmethod def format_mq_url(url): if CONF.transport_url is None: if '{' in url: LOG.error('Cell mapping transport_url is a template, but ' '[DEFAULT]/transport_url is not set') return url try: return CellMapping._format_url(url, CONF.transport_url) except Exception: LOG.exception('Failed to parse [DEFAULT]/transport_url to ' 'format cell mapping') return url @staticmethod def _from_db_object(context, cell_mapping, db_cell_mapping): for key in cell_mapping.fields: val = db_cell_mapping[key] if key == 'database_connection': val = cell_mapping.format_db_url(val) elif key == 'transport_url': val = cell_mapping.format_mq_url(val) setattr(cell_mapping, key, val) cell_mapping.obj_reset_changes() cell_mapping._context = context return cell_mapping @staticmethod @db_api.api_context_manager.reader def _get_by_uuid_from_db(context, uuid): db_mapping = context.session.query(api_models.CellMapping).filter_by( uuid=uuid).first() if not db_mapping: raise exception.CellMappingNotFound(uuid=uuid) return db_mapping @base.remotable_classmethod def get_by_uuid(cls, context, uuid): db_mapping = cls._get_by_uuid_from_db(context, uuid) return cls._from_db_object(context, cls(), db_mapping) @staticmethod @db_api.api_context_manager.writer def _create_in_db(context, updates): db_mapping = api_models.CellMapping() db_mapping.update(updates) db_mapping.save(context.session) return db_mapping @base.remotable def create(self): db_mapping = self._create_in_db(self._context, self.obj_get_changes()) self._from_db_object(self._context, self, db_mapping) @staticmethod @db_api.api_context_manager.writer def _save_in_db(context, uuid, updates): db_mapping = context.session.query( api_models.CellMapping).filter_by(uuid=uuid).first() if not db_mapping: raise exception.CellMappingNotFound(uuid=uuid) db_mapping.update(updates) context.session.add(db_mapping) return db_mapping @base.remotable def save(self): changes = self.obj_get_changes() db_mapping = self._save_in_db(self._context, self.uuid, changes) self._from_db_object(self._context, self, db_mapping) self.obj_reset_changes() @staticmethod @db_api.api_context_manager.writer def _destroy_in_db(context, uuid): result = context.session.query(api_models.CellMapping).filter_by( uuid=uuid).delete() if not result: raise exception.CellMappingNotFound(uuid=uuid) @base.remotable def destroy(self): self._destroy_in_db(self._context, self.uuid) def is_cell0(self): return self.obj_attr_is_set('uuid') and self.uuid == self.CELL0_UUID @base.NovaObjectRegistry.register class CellMappingList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version # Version 1.1: Add get_by_disabled() VERSION = '1.1' fields = { 'objects': fields.ListOfObjectsField('CellMapping'), } @staticmethod @db_api.api_context_manager.reader def _get_all_from_db(context): return context.session.query(api_models.CellMapping).order_by( asc(api_models.CellMapping.id)).all() @base.remotable_classmethod def 
get_all(cls, context): db_mappings = cls._get_all_from_db(context) return base.obj_make_list(context, cls(), CellMapping, db_mappings) @staticmethod @db_api.api_context_manager.reader def _get_by_disabled_from_db(context, disabled): if disabled: return context.session.query(api_models.CellMapping).filter_by( disabled=true()).order_by(asc(api_models.CellMapping.id)).all() else: return context.session.query(api_models.CellMapping).filter_by( disabled=false()).order_by(asc( api_models.CellMapping.id)).all() @base.remotable_classmethod def get_by_disabled(cls, context, disabled): db_mappings = cls._get_by_disabled_from_db(context, disabled) return base.obj_make_list(context, cls(), CellMapping, db_mappings) @staticmethod @db_api.api_context_manager.reader def _get_by_project_id_from_db(context, project_id): # SELECT DISTINCT cell_id FROM instance_mappings \ # WHERE project_id = $project_id; cell_ids = context.session.query( api_models.InstanceMapping.cell_id).filter_by( project_id=project_id).distinct().subquery() # SELECT cell_mappings WHERE cell_id IN ($cell_ids); return context.session.query(api_models.CellMapping).filter( api_models.CellMapping.id.in_(cell_ids)).all() @classmethod def get_by_project_id(cls, context, project_id): """Return a list of CellMapping objects which correspond to cells in which project_id has InstanceMappings. """ db_mappings = cls._get_by_project_id_from_db(context, project_id) return base.obj_make_list(context, cls(), CellMapping, db_mappings) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/compute_node.py0000664000175000017500000005562500000000000020232 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
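# NOTE: Illustrative sketch, not part of this tree. It shows, with plain
# URL parsing, how the cell mapping templating in cell_mapping.py above
# resolves a template such as
# 'mysql+pymysql://{username}:{password}@{hostname}/nova_cell1' against a
# configured default connection. The URLs are made-up examples.
import six.moves.urllib.parse as urlparse


def format_url(template, default):
    parsed = urlparse.urlparse(default)
    subs = {'username': parsed.username, 'password': parsed.password,
            'hostname': parsed.hostname, 'port': parsed.port,
            'scheme': parsed.scheme, 'query': parsed.query,
            'fragment': parsed.fragment, 'path': parsed.path.lstrip('/')}
    return template.format(**subs)


print(format_url(
    'mysql+pymysql://{username}:{password}@{hostname}/nova_cell1',
    'mysql+pymysql://nova:secret@db.example.com/nova_api'))
# -> mysql+pymysql://nova:secret@db.example.com/nova_cell1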
from oslo_serialization import jsonutils from oslo_utils import uuidutils from oslo_utils import versionutils from sqlalchemy import or_ from sqlalchemy.sql import null import nova.conf from nova.db import api as db from nova.db.sqlalchemy import api as sa_api from nova.db.sqlalchemy import models from nova import exception from nova import objects from nova.objects import base from nova.objects import fields from nova.objects import pci_device_pool CONF = nova.conf.CONF @base.NovaObjectRegistry.register class ComputeNode(base.NovaPersistentObject, base.NovaObject): # Version 1.0: Initial version # Version 1.1: Added get_by_service_id() # Version 1.2: String attributes updated to support unicode # Version 1.3: Added stats field # Version 1.4: Added host ip field # Version 1.5: Added numa_topology field # Version 1.6: Added supported_hv_specs # Version 1.7: Added host field # Version 1.8: Added get_by_host_and_nodename() # Version 1.9: Added pci_device_pools # Version 1.10: Added get_first_node_by_host_for_old_compat() # Version 1.11: PciDevicePoolList version 1.1 # Version 1.12: HVSpec version 1.1 # Version 1.13: Changed service_id field to be nullable # Version 1.14: Added cpu_allocation_ratio and ram_allocation_ratio # Version 1.15: Added uuid # Version 1.16: Added disk_allocation_ratio # Version 1.17: Added mapped # Version 1.18: Added get_by_uuid(). # Version 1.19: Added get_by_nodename(). VERSION = '1.19' fields = { 'id': fields.IntegerField(read_only=True), 'uuid': fields.UUIDField(read_only=True), 'service_id': fields.IntegerField(nullable=True), 'host': fields.StringField(nullable=True), 'vcpus': fields.IntegerField(), 'memory_mb': fields.IntegerField(), 'local_gb': fields.IntegerField(), 'vcpus_used': fields.IntegerField(), 'memory_mb_used': fields.IntegerField(), 'local_gb_used': fields.IntegerField(), 'hypervisor_type': fields.StringField(), 'hypervisor_version': fields.IntegerField(), 'hypervisor_hostname': fields.StringField(nullable=True), 'free_ram_mb': fields.IntegerField(nullable=True), 'free_disk_gb': fields.IntegerField(nullable=True), 'current_workload': fields.IntegerField(nullable=True), 'running_vms': fields.IntegerField(nullable=True), # TODO(melwitt): cpu_info is non-nullable in the schema but we must # wait until version 2.0 of ComputeNode to change it to non-nullable 'cpu_info': fields.StringField(nullable=True), 'disk_available_least': fields.IntegerField(nullable=True), 'metrics': fields.StringField(nullable=True), 'stats': fields.DictOfNullableStringsField(nullable=True), 'host_ip': fields.IPAddressField(nullable=True), # TODO(rlrossit): because of history, numa_topology is held here as a # StringField, not a NUMATopology object. 
In version 2 of ComputeNode # this will be converted over to a fields.ObjectField('NUMATopology') 'numa_topology': fields.StringField(nullable=True), # NOTE(pmurray): the supported_hv_specs field maps to the # supported_instances field in the database 'supported_hv_specs': fields.ListOfObjectsField('HVSpec'), # NOTE(pmurray): the pci_device_pools field maps to the # pci_stats field in the database 'pci_device_pools': fields.ObjectField('PciDevicePoolList', nullable=True), 'cpu_allocation_ratio': fields.FloatField(), 'ram_allocation_ratio': fields.FloatField(), 'disk_allocation_ratio': fields.FloatField(), 'mapped': fields.IntegerField(), } def obj_make_compatible(self, primitive, target_version): super(ComputeNode, self).obj_make_compatible(primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 17): if 'mapped' in primitive: del primitive['mapped'] if target_version < (1, 16): if 'disk_allocation_ratio' in primitive: del primitive['disk_allocation_ratio'] if target_version < (1, 15): if 'uuid' in primitive: del primitive['uuid'] if target_version < (1, 14): if 'ram_allocation_ratio' in primitive: del primitive['ram_allocation_ratio'] if 'cpu_allocation_ratio' in primitive: del primitive['cpu_allocation_ratio'] if target_version < (1, 13) and primitive.get('service_id') is None: # service_id is non-nullable in versions before 1.13 try: service = objects.Service.get_by_compute_host( self._context, primitive['host']) primitive['service_id'] = service.id except (exception.ComputeHostNotFound, KeyError): # NOTE(hanlind): In case anything goes wrong like service not # found or host not being set, catch and set a fake value just # to allow for older versions that demand a value to work. # Setting to -1 will, if value is later used result in a # ServiceNotFound, so should be safe. 
primitive['service_id'] = -1 if target_version < (1, 9) and 'pci_device_pools' in primitive: del primitive['pci_device_pools'] if target_version < (1, 7) and 'host' in primitive: del primitive['host'] if target_version < (1, 6) and 'supported_hv_specs' in primitive: del primitive['supported_hv_specs'] if target_version < (1, 5) and 'numa_topology' in primitive: del primitive['numa_topology'] if target_version < (1, 4) and 'host_ip' in primitive: del primitive['host_ip'] if target_version < (1, 3) and 'stats' in primitive: # pre 1.3 version does not have a stats field del primitive['stats'] @staticmethod def _host_from_db_object(compute, db_compute): if (('host' not in db_compute or db_compute['host'] is None) and 'service_id' in db_compute and db_compute['service_id'] is not None): # FIXME(sbauza) : Unconverted compute record, provide compatibility # This has to stay until we can be sure that any/all compute nodes # in the database have been converted to use the host field # Service field of ComputeNode could be deprecated in a next patch, # so let's use directly the Service object try: service = objects.Service.get_by_id( compute._context, db_compute['service_id']) except exception.ServiceNotFound: compute.host = None return try: compute.host = service.host except (AttributeError, exception.OrphanedObjectError): # Host can be nullable in Service compute.host = None elif 'host' in db_compute and db_compute['host'] is not None: # New-style DB having host as a field compute.host = db_compute['host'] else: # We assume it should not happen but in case, let's set it to None compute.host = None @staticmethod def _from_db_object(context, compute, db_compute): special_cases = set([ 'stats', 'supported_hv_specs', 'host', 'pci_device_pools', ]) fields = set(compute.fields) - special_cases online_updates = {} for key in fields: value = db_compute[key] # NOTE(sbauza): Since all compute nodes don't possibly run the # latest RT code updating allocation ratios, we need to provide # a backwards compatible way of hydrating them. # As we want to care about our operators and since we don't want to # ask them to change their configuration files before upgrading, we # prefer to hardcode the default values for the ratios here until # the next release (Newton) where the opt default values will be # restored for both cpu (16.0), ram (1.5) and disk (1.0) # allocation ratios. # TODO(yikun): Remove this online migration code when all ratio # values are NOT 0.0 or NULL ratio_keys = ['cpu_allocation_ratio', 'ram_allocation_ratio', 'disk_allocation_ratio'] if key in ratio_keys and value in (None, 0.0): # ResourceTracker is not updating the value (old node) # or the compute node is updated but the default value has # not been changed r = getattr(CONF, key) # NOTE(yikun): If the allocation ratio record is not set, the # allocation ratio will be changed to the # CONF.x_allocation_ratio value if x_allocation_ratio is # set, and fallback to use the CONF.initial_x_allocation_ratio # otherwise. 
init_x_ratio = getattr(CONF, 'initial_%s' % key) value = r if r else init_x_ratio online_updates[key] = value elif key == 'mapped': value = 0 if value is None else value setattr(compute, key, value) if online_updates: db.compute_node_update(context, compute.id, online_updates) stats = db_compute['stats'] if stats: compute.stats = jsonutils.loads(stats) sup_insts = db_compute.get('supported_instances') if sup_insts: hv_specs = jsonutils.loads(sup_insts) hv_specs = [objects.HVSpec.from_list(hv_spec) for hv_spec in hv_specs] compute.supported_hv_specs = hv_specs pci_stats = db_compute.get('pci_stats') if pci_stats is not None: pci_stats = pci_device_pool.from_pci_stats(pci_stats) compute.pci_device_pools = pci_stats compute._context = context # Make sure that we correctly set the host field depending on either # host column is present in the table or not compute._host_from_db_object(compute, db_compute) compute.obj_reset_changes() return compute @base.remotable_classmethod def get_by_id(cls, context, compute_id): db_compute = db.compute_node_get(context, compute_id) return cls._from_db_object(context, cls(), db_compute) @base.remotable_classmethod def get_by_uuid(cls, context, compute_uuid): nodes = ComputeNodeList.get_all_by_uuids(context, [compute_uuid]) # We have a unique index on the uuid column so we can get back 0 or 1. if not nodes: raise exception.ComputeHostNotFound(host=compute_uuid) return nodes[0] # NOTE(hanlind): This is deprecated and should be removed on the next # major version bump @base.remotable_classmethod def get_by_service_id(cls, context, service_id): db_computes = db.compute_nodes_get_by_service_id(context, service_id) # NOTE(sbauza): Old version was returning an item, we need to keep this # behaviour for backwards compatibility db_compute = db_computes[0] return cls._from_db_object(context, cls(), db_compute) @base.remotable_classmethod def get_by_host_and_nodename(cls, context, host, nodename): db_compute = db.compute_node_get_by_host_and_nodename( context, host, nodename) return cls._from_db_object(context, cls(), db_compute) @base.remotable_classmethod def get_by_nodename(cls, context, hypervisor_hostname): '''Get by node name (i.e. hypervisor hostname). Raises ComputeHostNotFound if hypervisor_hostname with the given name doesn't exist. ''' db_compute = db.compute_node_get_by_nodename( context, hypervisor_hostname) return cls._from_db_object(context, cls(), db_compute) # TODO(pkholkin): Remove this method in the next major version bump @base.remotable_classmethod def get_first_node_by_host_for_old_compat(cls, context, host, use_slave=False): computes = ComputeNodeList.get_all_by_host(context, host, use_slave) # FIXME(sbauza): Ironic deployments can return multiple # nodes per host, we should return all the nodes and modify the callers # instead. # Arbitrarily returning the first node. 
return computes[0] @staticmethod def _convert_stats_to_db_format(updates): stats = updates.pop('stats', None) if stats is not None: updates['stats'] = jsonutils.dumps(stats) @staticmethod def _convert_host_ip_to_db_format(updates): host_ip = updates.pop('host_ip', None) if host_ip: updates['host_ip'] = str(host_ip) @staticmethod def _convert_supported_instances_to_db_format(updates): hv_specs = updates.pop('supported_hv_specs', None) if hv_specs is not None: hv_specs = [hv_spec.to_list() for hv_spec in hv_specs] updates['supported_instances'] = jsonutils.dumps(hv_specs) @staticmethod def _convert_pci_stats_to_db_format(updates): if 'pci_device_pools' in updates: pools = updates.pop('pci_device_pools') if pools is not None: pools = jsonutils.dumps(pools.obj_to_primitive()) updates['pci_stats'] = pools @base.remotable def create(self): if self.obj_attr_is_set('id'): raise exception.ObjectActionError(action='create', reason='already created') updates = self.obj_get_changes() if 'uuid' not in updates: updates['uuid'] = uuidutils.generate_uuid() self.uuid = updates['uuid'] self._convert_stats_to_db_format(updates) self._convert_host_ip_to_db_format(updates) self._convert_supported_instances_to_db_format(updates) self._convert_pci_stats_to_db_format(updates) db_compute = db.compute_node_create(self._context, updates) self._from_db_object(self._context, self, db_compute) @base.remotable def save(self, prune_stats=False): # NOTE(belliott) ignore prune_stats param, no longer relevant updates = self.obj_get_changes() updates.pop('id', None) self._convert_stats_to_db_format(updates) self._convert_host_ip_to_db_format(updates) self._convert_supported_instances_to_db_format(updates) self._convert_pci_stats_to_db_format(updates) db_compute = db.compute_node_update(self._context, self.id, updates) self._from_db_object(self._context, self, db_compute) @base.remotable def destroy(self): db.compute_node_delete(self._context, self.id) def update_from_virt_driver(self, resources): # NOTE(pmurray): the virt driver provides a dict of values that # can be copied into the compute node. The names and representation # do not exactly match. # TODO(pmurray): the resources dict should be formalized. keys = ["vcpus", "memory_mb", "local_gb", "cpu_info", "vcpus_used", "memory_mb_used", "local_gb_used", "numa_topology", "hypervisor_type", "hypervisor_version", "hypervisor_hostname", "disk_available_least", "host_ip", "uuid"] for key in keys: if key in resources: # The uuid field is read-only so it should only be set when # creating the compute node record for the first time. Ignore # it otherwise. 
if key == 'uuid' and 'uuid' in self: continue setattr(self, key, resources[key]) # supported_instances has a different name in compute_node if 'supported_instances' in resources: si = resources['supported_instances'] self.supported_hv_specs = [objects.HVSpec.from_list(s) for s in si] @base.NovaObjectRegistry.register class ComputeNodeList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version # ComputeNode <= version 1.2 # Version 1.1 ComputeNode version 1.3 # Version 1.2 Add get_by_service() # Version 1.3 ComputeNode version 1.4 # Version 1.4 ComputeNode version 1.5 # Version 1.5 Add use_slave to get_by_service # Version 1.6 ComputeNode version 1.6 # Version 1.7 ComputeNode version 1.7 # Version 1.8 ComputeNode version 1.8 + add get_all_by_host() # Version 1.9 ComputeNode version 1.9 # Version 1.10 ComputeNode version 1.10 # Version 1.11 ComputeNode version 1.11 # Version 1.12 ComputeNode version 1.12 # Version 1.13 ComputeNode version 1.13 # Version 1.14 ComputeNode version 1.14 # Version 1.15 Added get_by_pagination() # Version 1.16: Added get_all_by_uuids() # Version 1.17: Added get_all_by_not_mapped() VERSION = '1.17' fields = { 'objects': fields.ListOfObjectsField('ComputeNode'), } @base.remotable_classmethod def get_all(cls, context): db_computes = db.compute_node_get_all(context) return base.obj_make_list(context, cls(context), objects.ComputeNode, db_computes) @base.remotable_classmethod def get_all_by_not_mapped(cls, context, mapped_less_than): """Return ComputeNode records that are not mapped at a certain level""" db_computes = db.compute_node_get_all_mapped_less_than( context, mapped_less_than) return base.obj_make_list(context, cls(context), objects.ComputeNode, db_computes) @base.remotable_classmethod def get_by_pagination(cls, context, limit=None, marker=None): db_computes = db.compute_node_get_all_by_pagination( context, limit=limit, marker=marker) return base.obj_make_list(context, cls(context), objects.ComputeNode, db_computes) @base.remotable_classmethod def get_by_hypervisor(cls, context, hypervisor_match): db_computes = db.compute_node_search_by_hypervisor(context, hypervisor_match) return base.obj_make_list(context, cls(context), objects.ComputeNode, db_computes) # NOTE(hanlind): This is deprecated and should be removed on the next # major version bump @base.remotable_classmethod def _get_by_service(cls, context, service_id, use_slave=False): try: db_computes = db.compute_nodes_get_by_service_id( context, service_id) except exception.ServiceNotFound: # NOTE(sbauza): Previous behaviour was returning an empty list # if the service was created with no computes, we need to keep it. 
db_computes = [] return base.obj_make_list(context, cls(context), objects.ComputeNode, db_computes) @staticmethod @db.select_db_reader_mode def _db_compute_node_get_all_by_host(context, host, use_slave=False): return db.compute_node_get_all_by_host(context, host) @base.remotable_classmethod def get_all_by_host(cls, context, host, use_slave=False): db_computes = cls._db_compute_node_get_all_by_host(context, host, use_slave=use_slave) return base.obj_make_list(context, cls(context), objects.ComputeNode, db_computes) @staticmethod @db.select_db_reader_mode def _db_compute_node_get_all_by_uuids(context, compute_uuids): db_computes = sa_api.model_query(context, models.ComputeNode).filter( models.ComputeNode.uuid.in_(compute_uuids)).all() return db_computes @base.remotable_classmethod def get_all_by_uuids(cls, context, compute_uuids): db_computes = cls._db_compute_node_get_all_by_uuids(context, compute_uuids) return base.obj_make_list(context, cls(context), objects.ComputeNode, db_computes) @staticmethod @db.select_db_reader_mode def _db_compute_node_get_by_hv_type(context, hv_type): db_computes = context.session.query(models.ComputeNode).filter( models.ComputeNode.hypervisor_type == hv_type).all() return db_computes @classmethod def get_by_hypervisor_type(cls, context, hv_type): db_computes = cls._db_compute_node_get_by_hv_type(context, hv_type) return base.obj_make_list(context, cls(context), objects.ComputeNode, db_computes) def _get_node_empty_ratio(context, max_count): """Query the DB for non-deleted compute_nodes with 0.0/None alloc ratios Results are limited by ``max_count``. """ return context.session.query(models.ComputeNode).filter(or_( models.ComputeNode.ram_allocation_ratio == '0.0', models.ComputeNode.cpu_allocation_ratio == '0.0', models.ComputeNode.disk_allocation_ratio == '0.0', models.ComputeNode.ram_allocation_ratio == null(), models.ComputeNode.cpu_allocation_ratio == null(), models.ComputeNode.disk_allocation_ratio == null() )).filter(models.ComputeNode.deleted == 0).limit(max_count).all() @sa_api.pick_context_manager_writer def migrate_empty_ratio(context, max_count): cns = _get_node_empty_ratio(context, max_count) # NOTE(yikun): If it's an existing record with 0.0 or None values, # we need to migrate this record using 'xxx_allocation_ratio' config # if it's set, and fallback to use the 'initial_xxx_allocation_ratio' # otherwise. for cn in cns: for t in ['cpu', 'disk', 'ram']: current_ratio = getattr(cn, '%s_allocation_ratio' % t) if current_ratio in (0.0, None): r = getattr(CONF, "%s_allocation_ratio" % t) init_x_ratio = getattr(CONF, "initial_%s_allocation_ratio" % t) conf_alloc_ratio = r if r else init_x_ratio setattr(cn, '%s_allocation_ratio' % t, conf_alloc_ratio) context.session.add(cn) found = done = len(cns) return found, done ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/console_auth_token.py0000664000175000017500000001702200000000000021421 0ustar00zuulzuul00000000000000# Copyright 2015 Intel Corp # Copyright 2016 Hewlett Packard Enterprise Development Company LP # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_db.exception import DBDuplicateEntry from oslo_log import log as logging from oslo_utils import strutils from oslo_utils import timeutils from oslo_utils import uuidutils import six.moves.urllib.parse as urlparse from nova.db import api as db from nova import exception from nova.i18n import _ from nova.objects import base from nova.objects import fields from nova import utils LOG = logging.getLogger(__name__) @base.NovaObjectRegistry.register class ConsoleAuthToken(base.NovaTimestampObject, base.NovaObject): # Version 1.0: Initial version # Version 1.1: Add clean_expired_console_auths method. # The clean_expired_console_auths_for_host method # was deprecated. VERSION = '1.1' fields = { 'id': fields.IntegerField(), 'console_type': fields.StringField(nullable=False), 'host': fields.StringField(nullable=False), 'port': fields.IntegerField(nullable=False), 'internal_access_path': fields.StringField(nullable=True), 'instance_uuid': fields.UUIDField(nullable=False), 'access_url_base': fields.StringField(nullable=True), # NOTE(PaulMurray): The unhashed token field is not stored in the # database. A hash of the token is stored instead and is not a # field on the object. 'token': fields.StringField(nullable=False), } @property def access_url(self): """The access url with token parameter. :returns: the access url with credential parameters access_url_base is the base url used to access a console. Adding the unhashed token as a parameter in a query string makes it specific to this authorization. """ if self.obj_attr_is_set('id'): if self.console_type == 'novnc': # NOTE(melwitt): As of noVNC v1.1.0, we must use the 'path' # query parameter to pass the auth token within, as the # top-level 'token' query parameter was removed. The 'path' # parameter is supported in older noVNC versions, so it is # backward compatible. qparams = {'path': '?token=%s' % self.token} return '%s?%s' % (self.access_url_base, urlparse.urlencode(qparams)) else: return '%s?token=%s' % (self.access_url_base, self.token) @staticmethod def _from_db_object(context, obj, db_obj): # NOTE(PaulMurray): token is not stored in the database but # this function assumes it is in db_obj. The unhashed token # field is populated in the authorize method after the token # authorization is created in the database. for field in obj.fields: setattr(obj, field, db_obj[field]) obj._context = context obj.obj_reset_changes() return obj @base.remotable def authorize(self, ttl): """Authorise the console token and store in the database. :param ttl: time to live in seconds :returns: an authorized token The expires value is set for ttl seconds in the future and the token hash is stored in the database. This function can only succeed if the token is unique and the object has not already been stored. 
""" if self.obj_attr_is_set('id'): raise exception.ObjectActionError( action='authorize', reason=_('must be a new object to authorize')) token = uuidutils.generate_uuid() token_hash = utils.get_sha256_str(token) expires = timeutils.utcnow_ts() + ttl updates = self.obj_get_changes() # NOTE(melwitt): token could be in the updates if authorize() has been # called twice on the same object. 'token' is not a database column and # should not be included in the call to create the database record. if 'token' in updates: del updates['token'] updates['token_hash'] = token_hash updates['expires'] = expires try: db_obj = db.console_auth_token_create(self._context, updates) db_obj['token'] = token self._from_db_object(self._context, self, db_obj) except DBDuplicateEntry: # NOTE(PaulMurray) we are generating the token above so this # should almost never happen - but technically its possible raise exception.TokenInUse() LOG.debug("Authorized token with expiry %(expires)s for console " "connection %(console)s", {'expires': expires, 'console': strutils.mask_password(self)}) return token @base.remotable_classmethod def validate(cls, context, token): """Validate the token. :param context: the context :param token: the token for the authorization :returns: The ConsoleAuthToken object if valid The token is valid if the token is in the database and the expires time has not passed. """ token_hash = utils.get_sha256_str(token) db_obj = db.console_auth_token_get_valid(context, token_hash) if db_obj is not None: db_obj['token'] = token obj = cls._from_db_object(context, cls(), db_obj) LOG.debug("Validated token - console connection is " "%(console)s", {'console': strutils.mask_password(obj)}) return obj else: LOG.debug("Token validation failed") raise exception.InvalidToken(token='***') @base.remotable_classmethod def clean_console_auths_for_instance(cls, context, instance_uuid): """Remove all console authorizations for the instance. :param context: the context :param instance_uuid: the instance to be cleaned All authorizations related to the specified instance will be removed from the database. """ db.console_auth_token_destroy_all_by_instance(context, instance_uuid) @base.remotable_classmethod def clean_expired_console_auths(cls, context): """Remove all expired console authorizations. :param context: the context All expired authorizations will be removed. Tokens that have not expired will remain. """ db.console_auth_token_destroy_expired(context) # TODO(takashin): This method was deprecated and will be removed # in a next major version bump. @base.remotable_classmethod def clean_expired_console_auths_for_host(cls, context, host): """Remove all expired console authorizations for the host. :param context: the context :param host: the host name All expired authorizations related to the specified host will be removed. Tokens that have not expired will remain. """ db.console_auth_token_destroy_expired_by_host(context, host) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/objects/diagnostics.py0000664000175000017500000001513000000000000020043 0ustar00zuulzuul00000000000000# Copyright (c) 2014 VMware, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.objects import base from nova.objects import fields @base.NovaObjectRegistry.register class CpuDiagnostics(base.NovaObject): # Version 1.0: Initial version VERSION = '1.0' fields = { 'id': fields.IntegerField(nullable=True), 'time': fields.IntegerField(nullable=True), 'utilisation': fields.IntegerField(nullable=True), } @base.NovaObjectRegistry.register class NicDiagnostics(base.NovaObject): # Version 1.0: Initial version VERSION = '1.0' fields = { 'mac_address': fields.MACAddressField(nullable=True), 'rx_octets': fields.IntegerField(nullable=True), 'rx_errors': fields.IntegerField(nullable=True), 'rx_drop': fields.IntegerField(nullable=True), 'rx_packets': fields.IntegerField(nullable=True), 'rx_rate': fields.IntegerField(nullable=True), 'tx_octets': fields.IntegerField(nullable=True), 'tx_errors': fields.IntegerField(nullable=True), 'tx_drop': fields.IntegerField(nullable=True), 'tx_packets': fields.IntegerField(nullable=True), 'tx_rate': fields.IntegerField(nullable=True) } @base.NovaObjectRegistry.register class DiskDiagnostics(base.NovaObject): # Version 1.0: Initial version VERSION = '1.0' fields = { 'read_bytes': fields.IntegerField(nullable=True), 'read_requests': fields.IntegerField(nullable=True), 'write_bytes': fields.IntegerField(nullable=True), 'write_requests': fields.IntegerField(nullable=True), 'errors_count': fields.IntegerField(nullable=True) } @base.NovaObjectRegistry.register class MemoryDiagnostics(base.NovaObject): # Version 1.0: Initial version VERSION = '1.0' fields = { 'maximum': fields.IntegerField(nullable=True), 'used': fields.IntegerField(nullable=True) } @base.NovaObjectRegistry.register class Diagnostics(base.NovaEphemeralObject): # Version 1.0: Initial version VERSION = '1.0' fields = { 'state': fields.InstancePowerStateField(), 'driver': fields.HypervisorDriverField(), 'hypervisor': fields.StringField(nullable=True), 'hypervisor_os': fields.StringField(nullable=True), 'uptime': fields.IntegerField(nullable=True), 'config_drive': fields.BooleanField(), 'memory_details': fields.ObjectField('MemoryDiagnostics', default=MemoryDiagnostics()), 'cpu_details': fields.ListOfObjectsField('CpuDiagnostics', default=[]), 'nic_details': fields.ListOfObjectsField('NicDiagnostics', default=[]), 'disk_details': fields.ListOfObjectsField('DiskDiagnostics', default=[]), 'num_cpus': fields.IntegerField(), 'num_nics': fields.IntegerField(), 'num_disks': fields.IntegerField() } def __init__(self, *args, **kwargs): super(Diagnostics, self).__init__(*args, **kwargs) self.num_cpus = len(self.cpu_details) self.num_nics = len(self.nic_details) self.num_disks = len(self.disk_details) def add_cpu(self, id=None, time=None, utilisation=None): """Add a new CpuDiagnostics object :param id: The virtual cpu number (Integer) :param time: CPU Time in nano seconds (Integer) :param utilisation: CPU utilisation in percentages (Integer) """ self.num_cpus += 1 self.cpu_details.append( CpuDiagnostics(id=id, time=time, utilisation=utilisation)) def add_nic(self, mac_address=None, rx_octets=None, rx_errors=None, rx_drop=None, rx_packets=None, rx_rate=None, tx_octets=None, tx_errors=None, 
tx_drop=None, tx_packets=None, tx_rate=None): """Add a new NicDiagnostics object :param mac_address: Mac address of the interface (String) :param rx_octets: Received octets (Integer) :param rx_errors: Received errors (Integer) :param rx_drop: Received packets dropped (Integer) :param rx_packets: Received packets (Integer) :param rx_rate: Receive rate (Integer) :param tx_octets: Transmitted Octets (Integer) :param tx_errors: Transmit errors (Integer) :param tx_drop: Transmit dropped packets (Integer) :param tx_packets: Transmit packets (Integer) :param tx_rate: Transmit rate (Integer) """ self.num_nics += 1 self.nic_details.append(NicDiagnostics(mac_address=mac_address, rx_octets=rx_octets, rx_errors=rx_errors, rx_drop=rx_drop, rx_packets=rx_packets, rx_rate=rx_rate, tx_octets=tx_octets, tx_errors=tx_errors, tx_drop=tx_drop, tx_packets=tx_packets, tx_rate=tx_rate)) def add_disk(self, read_bytes=None, read_requests=None, write_bytes=None, write_requests=None, errors_count=None): """Create a new DiskDiagnostics object :param read_bytes: Disk reads in bytes(Integer) :param read_requests: Read requests (Integer) :param write_bytes: Disk writes in bytes (Integer) :param write_requests: Write requests (Integer) :param errors_count: Disk errors (Integer) """ self.num_disks += 1 self.disk_details.append(DiskDiagnostics(read_bytes=read_bytes, read_requests=read_requests, write_bytes=write_bytes, write_requests=write_requests, errors_count=errors_count)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/ec2.py0000664000175000017500000001610400000000000016207 0ustar00zuulzuul00000000000000# Copyright 2014 Red Hat Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import functools from oslo_utils import uuidutils from nova import cache_utils from nova.db import api as db from nova import exception from nova.objects import base from nova.objects import fields # NOTE(vish): cache mapping for one week _CACHE_TIME = 7 * 24 * 60 * 60 _CACHE = None def memoize(func): @functools.wraps(func) def memoizer(context, reqid): global _CACHE if not _CACHE: _CACHE = cache_utils.get_client(expiration_time=_CACHE_TIME) key = "%s:%s" % (func.__name__, reqid) key = str(key) value = _CACHE.get(key) if value is None: value = func(context, reqid) _CACHE.set(key, value) return value return memoizer def id_to_ec2_id(instance_id, template='i-%08x'): """Convert an instance ID (int) to an ec2 ID (i-[base 16 number]).""" return template % int(instance_id) def id_to_ec2_inst_id(context, instance_id): """Get or create an ec2 instance ID (i-[base 16 number]) from uuid.""" if instance_id is None: return None elif uuidutils.is_uuid_like(instance_id): int_id = get_int_id_from_instance_uuid(context, instance_id) return id_to_ec2_id(int_id) else: return id_to_ec2_id(instance_id) @memoize def get_int_id_from_instance_uuid(context, instance_uuid): if instance_uuid is None: return try: imap = EC2InstanceMapping.get_by_uuid(context, instance_uuid) return imap.id except exception.NotFound: imap = EC2InstanceMapping(context) imap.uuid = instance_uuid imap.create() return imap.id def glance_id_to_ec2_id(context, glance_id, image_type='ami'): image_id = glance_id_to_id(context, glance_id) if image_id is None: return template = image_type + '-%08x' return id_to_ec2_id(image_id, template=template) @memoize def glance_id_to_id(context, glance_id): """Convert a glance id to an internal (db) id.""" if not glance_id: return try: return S3ImageMapping.get_by_uuid(context, glance_id).id except exception.NotFound: s3imap = S3ImageMapping(context, uuid=glance_id) s3imap.create() return s3imap.id def glance_type_to_ec2_type(image_type): """Converts to a three letter image type. 
aki, kernel => aki ari, ramdisk => ari anything else => ami """ if image_type == 'kernel': return 'aki' if image_type == 'ramdisk': return 'ari' if image_type not in ['aki', 'ari']: return 'ami' return image_type @base.NovaObjectRegistry.register class EC2InstanceMapping(base.NovaPersistentObject, base.NovaObject): # Version 1.0: Initial version VERSION = '1.0' fields = { 'id': fields.IntegerField(), 'uuid': fields.UUIDField(), } @staticmethod def _from_db_object(context, imap, db_imap): for field in imap.fields: setattr(imap, field, db_imap[field]) imap._context = context imap.obj_reset_changes() return imap @base.remotable def create(self): if self.obj_attr_is_set('id'): raise exception.ObjectActionError(action='create', reason='already created') db_imap = db.ec2_instance_create(self._context, self.uuid) self._from_db_object(self._context, self, db_imap) @base.remotable_classmethod def get_by_uuid(cls, context, instance_uuid): db_imap = db.ec2_instance_get_by_uuid(context, instance_uuid) if db_imap: return cls._from_db_object(context, cls(), db_imap) @base.remotable_classmethod def get_by_id(cls, context, ec2_id): db_imap = db.ec2_instance_get_by_id(context, ec2_id) if db_imap: return cls._from_db_object(context, cls(), db_imap) @base.NovaObjectRegistry.register class S3ImageMapping(base.NovaPersistentObject, base.NovaObject): # Version 1.0: Initial version VERSION = '1.0' fields = { 'id': fields.IntegerField(read_only=True), 'uuid': fields.UUIDField(), } @staticmethod def _from_db_object(context, s3imap, db_s3imap): for field in s3imap.fields: setattr(s3imap, field, db_s3imap[field]) s3imap._context = context s3imap.obj_reset_changes() return s3imap @base.remotable def create(self): if self.obj_attr_is_set('id'): raise exception.ObjectActionError(action='create', reason='already created') db_s3imap = db.s3_image_create(self._context, self.uuid) self._from_db_object(self._context, self, db_s3imap) @base.remotable_classmethod def get_by_uuid(cls, context, s3_image_uuid): db_s3imap = db.s3_image_get_by_uuid(context, s3_image_uuid) if db_s3imap: return cls._from_db_object(context, cls(context), db_s3imap) @base.remotable_classmethod def get_by_id(cls, context, s3_id): db_s3imap = db.s3_image_get(context, s3_id) if db_s3imap: return cls._from_db_object(context, cls(context), db_s3imap) @base.NovaObjectRegistry.register class EC2Ids(base.NovaObject): # Version 1.0: Initial version VERSION = '1.0' fields = { 'instance_id': fields.StringField(read_only=True), 'ami_id': fields.StringField(nullable=True, read_only=True), 'kernel_id': fields.StringField(nullable=True, read_only=True), 'ramdisk_id': fields.StringField(nullable=True, read_only=True), } @staticmethod def _from_dict(ec2ids, dict_ec2ids): for field in ec2ids.fields: setattr(ec2ids, field, dict_ec2ids[field]) return ec2ids @staticmethod def _get_ec2_ids(context, instance): ec2_ids = {} ec2_ids['instance_id'] = id_to_ec2_inst_id(context, instance.uuid) ec2_ids['ami_id'] = glance_id_to_ec2_id(context, instance.image_ref) for image_type in ['kernel', 'ramdisk']: image_id = getattr(instance, '%s_id' % image_type) ec2_id = None if image_id is not None: ec2_image_type = glance_type_to_ec2_type(image_type) ec2_id = glance_id_to_ec2_id(context, image_id, ec2_image_type) ec2_ids['%s_id' % image_type] = ec2_id return ec2_ids @base.remotable_classmethod def get_by_instance(cls, context, instance): ec2_ids = cls._get_ec2_ids(context, instance) return cls._from_dict(cls(context), ec2_ids) 
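# NOTE(editor): illustrative sketch only, not part of the upstream module.
# It exercises the pure conversion helpers defined above; the
# context-dependent helpers (id_to_ec2_inst_id, glance_id_to_ec2_id) also
# need a RequestContext and database-backed mappings, so they are not shown.
def _demo_ec2_id_conversions():
    # 'i-%08x' renders the integer id as zero-padded base 16: 42 -> 'i-0000002a'
    assert id_to_ec2_id(42) == 'i-0000002a'
    # Glance image types collapse to the three EC2 prefixes aki / ari / ami.
    assert glance_type_to_ec2_type('kernel') == 'aki'
    assert glance_type_to_ec2_type('ramdisk') == 'ari'
    assert glance_type_to_ec2_type('qcow2') == 'ami'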
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/objects/external_event.py0000664000175000017500000000442700000000000020566 0ustar00zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.objects import base as obj_base from nova.objects import fields EVENT_NAMES = [ # Network has changed for this instance, rebuild info_cache 'network-changed', # VIF plugging notifications, tag is port_id 'network-vif-plugged', 'network-vif-unplugged', 'network-vif-deleted', # Volume was extended for this instance, tag is volume_id 'volume-extended', # Power state has changed for this instance 'power-update', # Accelerator Request got bound, tag is ARQ uuid. # Sent when an ARQ for an instance has been bound or failed to bind. 'accelerator-request-bound', ] EVENT_STATUSES = ['failed', 'completed', 'in-progress'] # Possible tag values for the power-update event. POWER_ON = 'POWER_ON' POWER_OFF = 'POWER_OFF' @obj_base.NovaObjectRegistry.register class InstanceExternalEvent(obj_base.NovaObject): # Version 1.0: Initial version # Supports network-changed and vif-plugged # Version 1.1: adds network-vif-deleted event # Version 1.2: adds volume-extended event # Version 1.3: adds power-update event # Version 1.4: adds accelerator-request-bound event VERSION = '1.4' fields = { 'instance_uuid': fields.UUIDField(), 'name': fields.EnumField(valid_values=EVENT_NAMES), 'status': fields.EnumField(valid_values=EVENT_STATUSES), 'tag': fields.StringField(nullable=True), 'data': fields.DictOfStringsField(), } @staticmethod def make_key(name, tag=None): if tag is not None: return '%s-%s' % (name, tag) else: return name @property def key(self): return self.make_key(self.name, self.tag) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/fields.py0000664000175000017500000010662100000000000017010 0ustar00zuulzuul00000000000000# Copyright 2013 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import os import re from cursive import signature_utils from oslo_serialization import jsonutils from oslo_versionedobjects import fields import six from nova import exception from nova.i18n import _ from nova.network import model as network_model from nova.virt import arch # Import field errors from oslo.versionedobjects KeyTypeError = fields.KeyTypeError ElementTypeError = fields.ElementTypeError # Import fields from oslo.versionedobjects BooleanField = fields.BooleanField UnspecifiedDefault = fields.UnspecifiedDefault IntegerField = fields.IntegerField NonNegativeIntegerField = fields.NonNegativeIntegerField UUIDField = fields.UUIDField FloatField = fields.FloatField NonNegativeFloatField = fields.NonNegativeFloatField StringField = fields.StringField SensitiveStringField = fields.SensitiveStringField EnumField = fields.EnumField DateTimeField = fields.DateTimeField DictOfStringsField = fields.DictOfStringsField DictOfNullableStringsField = fields.DictOfNullableStringsField DictOfIntegersField = fields.DictOfIntegersField ListOfStringsField = fields.ListOfStringsField ListOfUUIDField = fields.ListOfUUIDField SetOfIntegersField = fields.SetOfIntegersField ListOfSetsOfIntegersField = fields.ListOfSetsOfIntegersField ListOfDictOfNullableStringsField = fields.ListOfDictOfNullableStringsField DictProxyField = fields.DictProxyField ObjectField = fields.ObjectField ListOfObjectsField = fields.ListOfObjectsField VersionPredicateField = fields.VersionPredicateField FlexibleBooleanField = fields.FlexibleBooleanField DictOfListOfStringsField = fields.DictOfListOfStringsField IPAddressField = fields.IPAddressField IPV4AddressField = fields.IPV4AddressField IPV6AddressField = fields.IPV6AddressField IPV4AndV6AddressField = fields.IPV4AndV6AddressField IPNetworkField = fields.IPNetworkField IPV4NetworkField = fields.IPV4NetworkField IPV6NetworkField = fields.IPV6NetworkField AutoTypedField = fields.AutoTypedField BaseEnumField = fields.BaseEnumField MACAddressField = fields.MACAddressField ListOfIntegersField = fields.ListOfIntegersField PCIAddressField = fields.PCIAddressField # NOTE(danms): These are things we need to import for some of our # own implementations below, our tests, or other transitional # bits of code. These should be removable after we finish our # conversion. So do not use these nova fields directly in any new code; # instead, use the oslo.versionedobjects fields. Enum = fields.Enum Field = fields.Field FieldType = fields.FieldType Set = fields.Set Dict = fields.Dict List = fields.List Object = fields.Object IPAddress = fields.IPAddress IPV4Address = fields.IPV4Address IPV6Address = fields.IPV6Address IPNetwork = fields.IPNetwork IPV4Network = fields.IPV4Network IPV6Network = fields.IPV6Network class ResourceClass(fields.StringPattern): PATTERN = r"^[A-Z0-9_]+$" _REGEX = re.compile(PATTERN) @staticmethod def coerce(obj, attr, value): if isinstance(value, six.string_types): uppered = value.upper() if ResourceClass._REGEX.match(uppered): return uppered raise ValueError(_("Malformed Resource Class %s") % value) class ResourceClassField(AutoTypedField): AUTO_TYPE = ResourceClass() class SetOfStringsField(AutoTypedField): AUTO_TYPE = Set(fields.String()) class BaseNovaEnum(Enum): def __init__(self, **kwargs): super(BaseNovaEnum, self).__init__(valid_values=self.__class__.ALL) class Architecture(BaseNovaEnum): """Represents CPU architectures. Provides the standard names for all known processor architectures. 
Many have multiple variants to deal with big-endian vs little-endian modes, as well as 32 vs 64 bit word sizes. These names are chosen to be identical to the architecture names expected by libvirt, so if ever adding new ones then ensure it matches libvirt's expectation. """ ALPHA = arch.ALPHA ARMV6 = arch.ARMV6 ARMV7 = arch.ARMV7 ARMV7B = arch.ARMV7B AARCH64 = arch.AARCH64 CRIS = arch.CRIS I686 = arch.I686 IA64 = arch.IA64 LM32 = arch.LM32 M68K = arch.M68K MICROBLAZE = arch.MICROBLAZE MICROBLAZEEL = arch.MICROBLAZEEL MIPS = arch.MIPS MIPSEL = arch.MIPSEL MIPS64 = arch.MIPS64 MIPS64EL = arch.MIPS64EL OPENRISC = arch.OPENRISC PARISC = arch.PARISC PARISC64 = arch.PARISC64 PPC = arch.PPC PPCLE = arch.PPCLE PPC64 = arch.PPC64 PPC64LE = arch.PPC64LE PPCEMB = arch.PPCEMB S390 = arch.S390 S390X = arch.S390X SH4 = arch.SH4 SH4EB = arch.SH4EB SPARC = arch.SPARC SPARC64 = arch.SPARC64 UNICORE32 = arch.UNICORE32 X86_64 = arch.X86_64 XTENSA = arch.XTENSA XTENSAEB = arch.XTENSAEB ALL = arch.ALL @classmethod def from_host(cls): """Get the architecture of the host OS :returns: the canonicalized host architecture """ return cls.canonicalize(os.uname()[4]) @classmethod def is_valid(cls, name): """Check if a string is a valid architecture :param name: architecture name to validate :returns: True if @name is valid """ return name in cls.ALL @classmethod def canonicalize(cls, name): """Canonicalize the architecture name :param name: architecture name to canonicalize :returns: a canonical architecture name """ if name is None: return None newname = name.lower() if newname in ("i386", "i486", "i586"): newname = cls.I686 # Xen mistake from Icehouse or earlier if newname in ("x86_32", "x86_32p"): newname = cls.I686 if newname == "amd64": newname = cls.X86_64 if not cls.is_valid(newname): raise exception.InvalidArchitectureName(arch=name) return newname def coerce(self, obj, attr, value): try: value = self.canonicalize(value) except exception.InvalidArchitectureName: msg = _("Architecture name '%s' is not valid") % value raise ValueError(msg) return super(Architecture, self).coerce(obj, attr, value) class BlockDeviceDestinationType(BaseNovaEnum): """Represents possible destination_type values for a BlockDeviceMapping.""" LOCAL = 'local' VOLUME = 'volume' ALL = (LOCAL, VOLUME) class BlockDeviceSourceType(BaseNovaEnum): """Represents the possible source_type values for a BlockDeviceMapping.""" BLANK = 'blank' IMAGE = 'image' SNAPSHOT = 'snapshot' VOLUME = 'volume' ALL = (BLANK, IMAGE, SNAPSHOT, VOLUME) class BlockDeviceType(BaseNovaEnum): """Represents possible device_type values for a BlockDeviceMapping.""" CDROM = 'cdrom' DISK = 'disk' FLOPPY = 'floppy' FS = 'fs' LUN = 'lun' ALL = (CDROM, DISK, FLOPPY, FS, LUN) class ConfigDrivePolicy(BaseNovaEnum): OPTIONAL = "optional" MANDATORY = "mandatory" ALL = (OPTIONAL, MANDATORY) class CPUAllocationPolicy(BaseNovaEnum): DEDICATED = "dedicated" SHARED = "shared" ALL = (DEDICATED, SHARED) class CPUThreadAllocationPolicy(BaseNovaEnum): # prefer (default): The host may or may not have hyperthreads. This # retains the legacy behavior, whereby siblings are preferred when # available. This is the default if no policy is specified. PREFER = "prefer" # isolate: The host may or many not have hyperthreads. If hyperthreads are # present, each vCPU will be placed on a different core and no vCPUs from # other guests will be able to be placed on the same core, i.e. one # thread sibling is guaranteed to always be unused. 
If hyperthreads are # not present, each vCPU will still be placed on a different core and # there are no thread siblings to be concerned with. ISOLATE = "isolate" # require: The host must have hyperthreads. Each vCPU will be allocated on # thread siblings. REQUIRE = "require" ALL = (PREFER, ISOLATE, REQUIRE) class CPUEmulatorThreadsPolicy(BaseNovaEnum): # share (default): Emulator threads float across the pCPUs # associated to the guest. SHARE = "share" # isolate: Emulator threads are isolated on a single pCPU. ISOLATE = "isolate" ALL = (SHARE, ISOLATE) class CPUMode(BaseNovaEnum): CUSTOM = 'custom' HOST_MODEL = 'host-model' HOST_PASSTHROUGH = 'host-passthrough' ALL = (CUSTOM, HOST_MODEL, HOST_PASSTHROUGH) class CPUMatch(BaseNovaEnum): MINIMUM = 'minimum' EXACT = 'exact' STRICT = 'strict' ALL = (MINIMUM, EXACT, STRICT) class CPUFeaturePolicy(BaseNovaEnum): FORCE = 'force' REQUIRE = 'require' OPTIONAL = 'optional' DISABLE = 'disable' FORBID = 'forbid' ALL = (FORCE, REQUIRE, OPTIONAL, DISABLE, FORBID) class DiskBus(BaseNovaEnum): # NOTE(aspiers): If you change this, don't forget to update the # docs and metadata for hw_*_bus in glance. # NOTE(lyarwood): Also update the possible values in the api-ref for the # block_device_mapping_v2.disk_bus parameter. FDC = "fdc" IDE = "ide" SATA = "sata" SCSI = "scsi" USB = "usb" VIRTIO = "virtio" XEN = "xen" LXC = "lxc" UML = "uml" ALL = (FDC, IDE, SATA, SCSI, USB, VIRTIO, XEN, LXC, UML) class DiskConfig(BaseNovaEnum): MANUAL = "MANUAL" AUTO = "AUTO" ALL = (MANUAL, AUTO) def coerce(self, obj, attr, value): enum_value = DiskConfig.AUTO if value else DiskConfig.MANUAL return super(DiskConfig, self).coerce(obj, attr, enum_value) class FirmwareType(BaseNovaEnum): UEFI = "uefi" BIOS = "bios" ALL = (UEFI, BIOS) class HVType(BaseNovaEnum): """Represents virtualization types. Provide the standard names for all known guest virtualization types. This is not to be confused with the Nova hypervisor driver types, since one driver may support multiple virtualization types and one virtualization type (eg 'xen') may be supported by multiple drivers ('XenAPI' or 'Libvirt-Xen'). 
""" BAREMETAL = 'baremetal' BHYVE = 'bhyve' DOCKER = 'docker' FAKE = 'fake' HYPERV = 'hyperv' IRONIC = 'ironic' KQEMU = 'kqemu' KVM = 'kvm' LXC = 'lxc' LXD = 'lxd' OPENVZ = 'openvz' PARALLELS = 'parallels' VIRTUOZZO = 'vz' PHYP = 'phyp' QEMU = 'qemu' TEST = 'test' UML = 'uml' VBOX = 'vbox' VMWARE = 'vmware' XEN = 'xen' ZVM = 'zvm' PRSM = 'prsm' ALL = (BAREMETAL, BHYVE, DOCKER, FAKE, HYPERV, IRONIC, KQEMU, KVM, LXC, LXD, OPENVZ, PARALLELS, PHYP, QEMU, TEST, UML, VBOX, VIRTUOZZO, VMWARE, XEN, ZVM, PRSM) def coerce(self, obj, attr, value): try: value = self.canonicalize(value) except exception.InvalidHypervisorVirtType: msg = _("Hypervisor virt type '%s' is not valid") % value raise ValueError(msg) return super(HVType, self).coerce(obj, attr, value) @classmethod def is_valid(cls, name): """Check if a string is a valid hypervisor type :param name: hypervisor type name to validate :returns: True if @name is valid """ return name in cls.ALL @classmethod def canonicalize(cls, name): """Canonicalize the hypervisor type name :param name: hypervisor type name to canonicalize :returns: a canonical hypervisor type name """ if name is None: return None newname = name.lower() if newname == 'xapi': newname = cls.XEN if not cls.is_valid(newname): raise exception.InvalidHypervisorVirtType(hv_type=name) return newname class ImageSignatureHashType(BaseNovaEnum): # Represents the possible hash methods used for image signing ALL = tuple(sorted(signature_utils.HASH_METHODS.keys())) class ImageSignatureKeyType(BaseNovaEnum): # Represents the possible keypair types used for image signing ALL = ( 'DSA', 'ECC_SECP384R1', 'ECC_SECP521R1', 'ECC_SECT409K1', 'ECC_SECT409R1', 'ECC_SECT571K1', 'ECC_SECT571R1', 'RSA-PSS' ) class OSType(BaseNovaEnum): LINUX = "linux" WINDOWS = "windows" ALL = (LINUX, WINDOWS) def coerce(self, obj, attr, value): # Some code/docs use upper case or initial caps # so canonicalize to all lower case value = value.lower() return super(OSType, self).coerce(obj, attr, value) class RNGModel(BaseNovaEnum): # NOTE(kchamart): Along with "virtio", we may need to extend this (if a # good reason shows up) to allow two more values for VirtIO # transitional and non-transitional devices (available since libvirt # 5.2.0): # # - virtio-transitional # - virtio-nontransitional # # This allows one to choose whether you want to have compatibility # with older guest operating systems. The value you select will in # turn decide the kind of PCI topology the guest will get. 
# # Details: # https://libvirt.org/formatdomain.html#elementsVirtioTransitional VIRTIO = "virtio" ALL = (VIRTIO,) class SCSIModel(BaseNovaEnum): BUSLOGIC = "buslogic" IBMVSCSI = "ibmvscsi" LSILOGIC = "lsilogic" LSISAS1068 = "lsisas1068" LSISAS1078 = "lsisas1078" VIRTIO_SCSI = "virtio-scsi" VMPVSCSI = "vmpvscsi" ALL = (BUSLOGIC, IBMVSCSI, LSILOGIC, LSISAS1068, LSISAS1078, VIRTIO_SCSI, VMPVSCSI) def coerce(self, obj, attr, value): # Some compat for strings we'd see in the legacy # vmware_adaptertype image property value = value.lower() if value == "lsilogicsas": value = SCSIModel.LSISAS1068 elif value == "paravirtual": value = SCSIModel.VMPVSCSI return super(SCSIModel, self).coerce(obj, attr, value) class SecureBoot(BaseNovaEnum): REQUIRED = "required" DISABLED = "disabled" OPTIONAL = "optional" ALL = (REQUIRED, DISABLED, OPTIONAL) class VideoModel(BaseNovaEnum): CIRRUS = "cirrus" QXL = "qxl" VGA = "vga" VMVGA = "vmvga" XEN = "xen" VIRTIO = 'virtio' GOP = 'gop' NONE = 'none' ALL = (CIRRUS, QXL, VGA, VMVGA, XEN, VIRTIO, GOP, NONE) class VIFModel(BaseNovaEnum): LEGACY_VALUES = {"virtuale1000": network_model.VIF_MODEL_E1000, "virtuale1000e": network_model.VIF_MODEL_E1000E, "virtualpcnet32": network_model.VIF_MODEL_PCNET, "virtualsriovethernetcard": network_model.VIF_MODEL_SRIOV, "virtualvmxnet": network_model.VIF_MODEL_VMXNET, "virtualvmxnet3": network_model.VIF_MODEL_VMXNET3, } ALL = network_model.VIF_MODEL_ALL def coerce(self, obj, attr, value): # Some compat for strings we'd see in the legacy # hw_vif_model image property value = value.lower() value = VIFModel.LEGACY_VALUES.get(value, value) return super(VIFModel, self).coerce(obj, attr, value) class VMMode(BaseNovaEnum): """Represents possible vm modes for instances. Compute instance VM modes represent the host/guest ABI used for the virtual machine or container. Individual hypervisors may support multiple different vm modes per host. Available VM modes for a hypervisor driver may also vary according to the architecture it is running on. 
""" HVM = 'hvm' # Native ABI (aka fully virtualized) XEN = 'xen' # Xen 3.0 paravirtualized UML = 'uml' # User Mode Linux paravirtualized EXE = 'exe' # Executables in containers ALL = (HVM, XEN, UML, EXE) def coerce(self, obj, attr, value): try: value = self.canonicalize(value) except exception.InvalidVirtualMachineMode: msg = _("Virtual machine mode '%s' is not valid") % value raise ValueError(msg) return super(VMMode, self).coerce(obj, attr, value) @classmethod def get_from_instance(cls, instance): """Get the vm mode for an instance :param instance: instance object to query :returns: canonicalized vm mode for the instance """ mode = instance.vm_mode return cls.canonicalize(mode) @classmethod def is_valid(cls, name): """Check if a string is a valid vm mode :param name: vm mode name to validate :returns: True if @name is valid """ return name in cls.ALL @classmethod def canonicalize(cls, mode): """Canonicalize the vm mode :param name: vm mode name to canonicalize :returns: a canonical vm mode name """ if mode is None: return None mode = mode.lower() # For compatibility with pre-Folsom deployments if mode == 'pv': mode = cls.XEN if mode == 'hv': mode = cls.HVM if mode == 'baremetal': mode = cls.HVM if not cls.is_valid(mode): raise exception.InvalidVirtualMachineMode(vmmode=mode) return mode class WatchdogAction(BaseNovaEnum): NONE = "none" PAUSE = "pause" POWEROFF = "poweroff" RESET = "reset" DISABLED = "disabled" ALL = (NONE, PAUSE, POWEROFF, RESET, DISABLED) class MonitorMetricType(BaseNovaEnum): CPU_FREQUENCY = "cpu.frequency" CPU_USER_TIME = "cpu.user.time" CPU_KERNEL_TIME = "cpu.kernel.time" CPU_IDLE_TIME = "cpu.idle.time" CPU_IOWAIT_TIME = "cpu.iowait.time" CPU_USER_PERCENT = "cpu.user.percent" CPU_KERNEL_PERCENT = "cpu.kernel.percent" CPU_IDLE_PERCENT = "cpu.idle.percent" CPU_IOWAIT_PERCENT = "cpu.iowait.percent" CPU_PERCENT = "cpu.percent" NUMA_MEM_BW_MAX = "numa.membw.max" NUMA_MEM_BW_CURRENT = "numa.membw.current" ALL = ( CPU_FREQUENCY, CPU_USER_TIME, CPU_KERNEL_TIME, CPU_IDLE_TIME, CPU_IOWAIT_TIME, CPU_USER_PERCENT, CPU_KERNEL_PERCENT, CPU_IDLE_PERCENT, CPU_IOWAIT_PERCENT, CPU_PERCENT, NUMA_MEM_BW_MAX, NUMA_MEM_BW_CURRENT, ) class HostStatus(BaseNovaEnum): UP = "UP" # The nova-compute is up. DOWN = "DOWN" # The nova-compute is forced_down. MAINTENANCE = "MAINTENANCE" # The nova-compute is disabled. UNKNOWN = "UNKNOWN" # The nova-compute has not reported. NONE = "" # No host or nova-compute. ALL = (UP, DOWN, MAINTENANCE, UNKNOWN, NONE) class PciDeviceStatus(BaseNovaEnum): AVAILABLE = "available" CLAIMED = "claimed" ALLOCATED = "allocated" REMOVED = "removed" # The device has been hot-removed and not yet deleted DELETED = "deleted" # The device is marked not available/deleted. UNCLAIMABLE = "unclaimable" UNAVAILABLE = "unavailable" ALL = (AVAILABLE, CLAIMED, ALLOCATED, REMOVED, DELETED, UNAVAILABLE, UNCLAIMABLE) class PciDeviceType(BaseNovaEnum): # NOTE(jaypipes): It's silly that the word "type-" is in these constants, # but alas, these were the original constant strings used... 
STANDARD = "type-PCI" SRIOV_PF = "type-PF" SRIOV_VF = "type-VF" ALL = (STANDARD, SRIOV_PF, SRIOV_VF) class PCINUMAAffinityPolicy(BaseNovaEnum): REQUIRED = "required" LEGACY = "legacy" PREFERRED = "preferred" ALL = (REQUIRED, LEGACY, PREFERRED) class DiskFormat(BaseNovaEnum): RBD = "rbd" LVM = "lvm" QCOW2 = "qcow2" RAW = "raw" PLOOP = "ploop" VHD = "vhd" VMDK = "vmdk" VDI = "vdi" ISO = "iso" ALL = (RBD, LVM, QCOW2, RAW, PLOOP, VHD, VMDK, VDI, ISO) class HypervisorDriver(BaseNovaEnum): LIBVIRT = "libvirt" XENAPI = "xenapi" VMWAREAPI = "vmwareapi" IRONIC = "ironic" HYPERV = "hyperv" ALL = (LIBVIRT, XENAPI, VMWAREAPI, IRONIC, HYPERV) class PointerModelType(BaseNovaEnum): USBTABLET = "usbtablet" ALL = (USBTABLET) class NotificationPriority(BaseNovaEnum): AUDIT = 'audit' CRITICAL = 'critical' DEBUG = 'debug' INFO = 'info' ERROR = 'error' SAMPLE = 'sample' WARN = 'warn' ALL = (AUDIT, CRITICAL, DEBUG, INFO, ERROR, SAMPLE, WARN) class NotificationPhase(BaseNovaEnum): START = 'start' END = 'end' ERROR = 'error' PROGRESS = 'progress' ALL = (START, END, ERROR, PROGRESS) class NotificationSource(BaseNovaEnum): """Represents possible nova binary service names in notification envelope. The publisher_id field of the nova notifications consists of the name of the host and the name of the service binary that emits the notification. The below values are the ones that is used in every notification. Please note that on the REST API the nova-api service binary is called nova-osapi_compute. This is not reflected here as notifications always used the name nova-api instead. """ COMPUTE = 'nova-compute' API = 'nova-api' CONDUCTOR = 'nova-conductor' SCHEDULER = 'nova-scheduler' # TODO(stephenfin): Remove 'NETWORK' when 'NotificationPublisher' is # updated to version 3.0 NETWORK = 'nova-network' # TODO(stephenfin): Remove 'CONSOLEAUTH' when 'NotificationPublisher' is # updated to version 3.0 CONSOLEAUTH = 'nova-consoleauth' # TODO(stephenfin): Remove when 'NotificationPublisher' object version is # bumped to 3.0 CELLS = 'nova-cells' # TODO(stephenfin): Remove when 'NotificationPublisher' object version is # bumped to 3.0 CONSOLE = 'nova-console' METADATA = 'nova-metadata' ALL = (API, COMPUTE, CONDUCTOR, SCHEDULER, NETWORK, CONSOLEAUTH, CELLS, CONSOLE, METADATA) @staticmethod def get_source_by_binary(binary): # nova-osapi_compute binary name needs to be translated to nova-api # notification source enum value. 
return "nova-api" if binary == "nova-osapi_compute" else binary class NotificationAction(BaseNovaEnum): UPDATE = 'update' EXCEPTION = 'exception' DELETE = 'delete' PAUSE = 'pause' UNPAUSE = 'unpause' RESIZE = 'resize' VOLUME_SWAP = 'volume_swap' SUSPEND = 'suspend' POWER_ON = 'power_on' POWER_OFF = 'power_off' REBOOT = 'reboot' SHUTDOWN = 'shutdown' SNAPSHOT = 'snapshot' INTERFACE_ATTACH = 'interface_attach' SHELVE = 'shelve' RESUME = 'resume' RESTORE = 'restore' EXISTS = 'exists' RESCUE = 'rescue' VOLUME_ATTACH = 'volume_attach' VOLUME_DETACH = 'volume_detach' CREATE = 'create' IMPORT = 'import' EVACUATE = 'evacuate' RESIZE_FINISH = 'resize_finish' LIVE_MIGRATION_ABORT = 'live_migration_abort' LIVE_MIGRATION_POST_DEST = 'live_migration_post_dest' LIVE_MIGRATION_POST = 'live_migration_post' LIVE_MIGRATION_PRE = 'live_migration_pre' LIVE_MIGRATION_ROLLBACK_DEST = 'live_migration_rollback_dest' LIVE_MIGRATION_ROLLBACK = 'live_migration_rollback' LIVE_MIGRATION_FORCE_COMPLETE = 'live_migration_force_complete' REBUILD = 'rebuild' REBUILD_SCHEDULED = 'rebuild_scheduled' INTERFACE_DETACH = 'interface_detach' RESIZE_CONFIRM = 'resize_confirm' RESIZE_PREP = 'resize_prep' RESIZE_REVERT = 'resize_revert' SELECT_DESTINATIONS = 'select_destinations' SHELVE_OFFLOAD = 'shelve_offload' SOFT_DELETE = 'soft_delete' TRIGGER_CRASH_DUMP = 'trigger_crash_dump' UNRESCUE = 'unrescue' UNSHELVE = 'unshelve' ADD_HOST = 'add_host' REMOVE_HOST = 'remove_host' ADD_MEMBER = 'add_member' UPDATE_METADATA = 'update_metadata' LOCK = 'lock' UNLOCK = 'unlock' UPDATE_PROP = 'update_prop' CONNECT = 'connect' USAGE = 'usage' BUILD_INSTANCES = 'build_instances' MIGRATE_SERVER = 'migrate_server' REBUILD_SERVER = 'rebuild_server' IMAGE_CACHE = 'cache_images' ALL = (UPDATE, EXCEPTION, DELETE, PAUSE, UNPAUSE, RESIZE, VOLUME_SWAP, SUSPEND, POWER_ON, REBOOT, SHUTDOWN, SNAPSHOT, INTERFACE_ATTACH, POWER_OFF, SHELVE, RESUME, RESTORE, EXISTS, RESCUE, VOLUME_ATTACH, VOLUME_DETACH, CREATE, IMPORT, EVACUATE, RESIZE_FINISH, LIVE_MIGRATION_ABORT, LIVE_MIGRATION_POST_DEST, LIVE_MIGRATION_POST, LIVE_MIGRATION_PRE, LIVE_MIGRATION_ROLLBACK, LIVE_MIGRATION_ROLLBACK_DEST, REBUILD, INTERFACE_DETACH, RESIZE_CONFIRM, RESIZE_PREP, RESIZE_REVERT, SHELVE_OFFLOAD, SOFT_DELETE, TRIGGER_CRASH_DUMP, UNRESCUE, UNSHELVE, ADD_HOST, REMOVE_HOST, ADD_MEMBER, UPDATE_METADATA, LOCK, UNLOCK, REBUILD_SCHEDULED, UPDATE_PROP, LIVE_MIGRATION_FORCE_COMPLETE, CONNECT, USAGE, BUILD_INSTANCES, MIGRATE_SERVER, REBUILD_SERVER, SELECT_DESTINATIONS, IMAGE_CACHE) # TODO(rlrossit): These should be changed over to be a StateMachine enum from # oslo.versionedobjects using the valid state transitions described in # nova.compute.vm_states class InstanceState(BaseNovaEnum): ACTIVE = 'active' BUILDING = 'building' PAUSED = 'paused' SUSPENDED = 'suspended' STOPPED = 'stopped' RESCUED = 'rescued' RESIZED = 'resized' SOFT_DELETED = 'soft-delete' DELETED = 'deleted' ERROR = 'error' SHELVED = 'shelved' SHELVED_OFFLOADED = 'shelved_offloaded' ALL = (ACTIVE, BUILDING, PAUSED, SUSPENDED, STOPPED, RESCUED, RESIZED, SOFT_DELETED, DELETED, ERROR, SHELVED, SHELVED_OFFLOADED) # TODO(rlrossit): These should be changed over to be a StateMachine enum from # oslo.versionedobjects using the valid state transitions described in # nova.compute.task_states class InstanceTaskState(BaseNovaEnum): SCHEDULING = 'scheduling' BLOCK_DEVICE_MAPPING = 'block_device_mapping' NETWORKING = 'networking' SPAWNING = 'spawning' IMAGE_SNAPSHOT = 'image_snapshot' IMAGE_SNAPSHOT_PENDING = 'image_snapshot_pending' 
IMAGE_PENDING_UPLOAD = 'image_pending_upload' IMAGE_UPLOADING = 'image_uploading' IMAGE_BACKUP = 'image_backup' UPDATING_PASSWORD = 'updating_password' RESIZE_PREP = 'resize_prep' RESIZE_MIGRATING = 'resize_migrating' RESIZE_MIGRATED = 'resize_migrated' RESIZE_FINISH = 'resize_finish' RESIZE_REVERTING = 'resize_reverting' RESIZE_CONFIRMING = 'resize_confirming' REBOOTING = 'rebooting' REBOOT_PENDING = 'reboot_pending' REBOOT_STARTED = 'reboot_started' REBOOTING_HARD = 'rebooting_hard' REBOOT_PENDING_HARD = 'reboot_pending_hard' REBOOT_STARTED_HARD = 'reboot_started_hard' PAUSING = 'pausing' UNPAUSING = 'unpausing' SUSPENDING = 'suspending' RESUMING = 'resuming' POWERING_OFF = 'powering-off' POWERING_ON = 'powering-on' RESCUING = 'rescuing' UNRESCUING = 'unrescuing' REBUILDING = 'rebuilding' REBUILD_BLOCK_DEVICE_MAPPING = "rebuild_block_device_mapping" REBUILD_SPAWNING = 'rebuild_spawning' MIGRATING = "migrating" DELETING = 'deleting' SOFT_DELETING = 'soft-deleting' RESTORING = 'restoring' SHELVING = 'shelving' SHELVING_IMAGE_PENDING_UPLOAD = 'shelving_image_pending_upload' SHELVING_IMAGE_UPLOADING = 'shelving_image_uploading' SHELVING_OFFLOADING = 'shelving_offloading' UNSHELVING = 'unshelving' ALL = (SCHEDULING, BLOCK_DEVICE_MAPPING, NETWORKING, SPAWNING, IMAGE_SNAPSHOT, IMAGE_SNAPSHOT_PENDING, IMAGE_PENDING_UPLOAD, IMAGE_UPLOADING, IMAGE_BACKUP, UPDATING_PASSWORD, RESIZE_PREP, RESIZE_MIGRATING, RESIZE_MIGRATED, RESIZE_FINISH, RESIZE_REVERTING, RESIZE_CONFIRMING, REBOOTING, REBOOT_PENDING, REBOOT_STARTED, REBOOTING_HARD, REBOOT_PENDING_HARD, REBOOT_STARTED_HARD, PAUSING, UNPAUSING, SUSPENDING, RESUMING, POWERING_OFF, POWERING_ON, RESCUING, UNRESCUING, REBUILDING, REBUILD_BLOCK_DEVICE_MAPPING, REBUILD_SPAWNING, MIGRATING, DELETING, SOFT_DELETING, RESTORING, SHELVING, SHELVING_IMAGE_PENDING_UPLOAD, SHELVING_IMAGE_UPLOADING, SHELVING_OFFLOADING, UNSHELVING) class InstancePowerState(Enum): _UNUSED = '_unused' NOSTATE = 'pending' RUNNING = 'running' PAUSED = 'paused' SHUTDOWN = 'shutdown' CRASHED = 'crashed' SUSPENDED = 'suspended' # The order is important here. If you make changes, only *append* # values to the end of the list. ALL = ( NOSTATE, RUNNING, _UNUSED, PAUSED, SHUTDOWN, _UNUSED, CRASHED, SUSPENDED, ) def __init__(self): super(InstancePowerState, self).__init__( valid_values=InstancePowerState.ALL) def coerce(self, obj, attr, value): try: value = int(value) value = self.from_index(value) except (ValueError, KeyError): pass return super(InstancePowerState, self).coerce(obj, attr, value) @classmethod def index(cls, value): """Return an index into the Enum given a value.""" return cls.ALL.index(value) @classmethod def from_index(cls, index): """Return the Enum value at a given index.""" return cls.ALL[index] class NetworkModel(FieldType): @staticmethod def coerce(obj, attr, value): if isinstance(value, network_model.NetworkInfo): return value elif isinstance(value, six.string_types): # Hmm, do we need this? 
return network_model.NetworkInfo.hydrate(value) else: raise ValueError(_('A NetworkModel is required in field %s') % attr) @staticmethod def to_primitive(obj, attr, value): return value.json() @staticmethod def from_primitive(obj, attr, value): return network_model.NetworkInfo.hydrate(value) def stringify(self, value): return 'NetworkModel(%s)' % ( ','.join([str(vif['id']) for vif in value])) def get_schema(self): return {'type': ['string']} class NetworkVIFModel(FieldType): """Represents a nova.network.model.VIF object, which is a dict of stuff.""" @staticmethod def coerce(obj, attr, value): if isinstance(value, network_model.VIF): return value elif isinstance(value, six.string_types): return NetworkVIFModel.from_primitive(obj, attr, value) else: raise ValueError(_('A nova.network.model.VIF object is required ' 'in field %s') % attr) @staticmethod def to_primitive(obj, attr, value): return jsonutils.dumps(value) @staticmethod def from_primitive(obj, attr, value): return network_model.VIF.hydrate(jsonutils.loads(value)) def get_schema(self): return {'type': ['string']} class AddressBase(FieldType): @staticmethod def coerce(obj, attr, value): if re.match(obj.PATTERN, str(value)): return str(value) else: raise ValueError(_('Value must match %s') % obj.PATTERN) def get_schema(self): return {'type': ['string'], 'pattern': self.PATTERN} class USBAddress(AddressBase): PATTERN = '[a-f0-9]+:[a-f0-9]+' @staticmethod def coerce(obj, attr, value): return AddressBase.coerce(USBAddress, attr, value) class SCSIAddress(AddressBase): PATTERN = '[a-f0-9]+:[a-f0-9]+:[a-f0-9]+:[a-f0-9]+' @staticmethod def coerce(obj, attr, value): return AddressBase.coerce(SCSIAddress, attr, value) class IDEAddress(AddressBase): PATTERN = '[0-1]:[0-1]' @staticmethod def coerce(obj, attr, value): return AddressBase.coerce(IDEAddress, attr, value) class XenAddress(AddressBase): PATTERN = '(00[0-9]{2}00)|[1-9][0-9]+' @staticmethod def coerce(obj, attr, value): return AddressBase.coerce(XenAddress, attr, value) class USBAddressField(AutoTypedField): AUTO_TYPE = USBAddress() class SCSIAddressField(AutoTypedField): AUTO_TYPE = SCSIAddress() class IDEAddressField(AutoTypedField): AUTO_TYPE = IDEAddress() class XenAddressField(AutoTypedField): AUTO_TYPE = XenAddress() class ArchitectureField(BaseEnumField): AUTO_TYPE = Architecture() class BlockDeviceDestinationTypeField(BaseEnumField): AUTO_TYPE = BlockDeviceDestinationType() class BlockDeviceSourceTypeField(BaseEnumField): AUTO_TYPE = BlockDeviceSourceType() class BlockDeviceTypeField(BaseEnumField): AUTO_TYPE = BlockDeviceType() class ConfigDrivePolicyField(BaseEnumField): AUTO_TYPE = ConfigDrivePolicy() class CPUAllocationPolicyField(BaseEnumField): AUTO_TYPE = CPUAllocationPolicy() class CPUThreadAllocationPolicyField(BaseEnumField): AUTO_TYPE = CPUThreadAllocationPolicy() class CPUEmulatorThreadsPolicyField(BaseEnumField): AUTO_TYPE = CPUEmulatorThreadsPolicy() class CPUModeField(BaseEnumField): AUTO_TYPE = CPUMode() class CPUMatchField(BaseEnumField): AUTO_TYPE = CPUMatch() class CPUFeaturePolicyField(BaseEnumField): AUTO_TYPE = CPUFeaturePolicy() class DiskBusField(BaseEnumField): AUTO_TYPE = DiskBus() class DiskConfigField(BaseEnumField): AUTO_TYPE = DiskConfig() class FirmwareTypeField(BaseEnumField): AUTO_TYPE = FirmwareType() class HVTypeField(BaseEnumField): AUTO_TYPE = HVType() class ImageSignatureHashTypeField(BaseEnumField): AUTO_TYPE = ImageSignatureHashType() class ImageSignatureKeyTypeField(BaseEnumField): AUTO_TYPE = ImageSignatureKeyType() class 
OSTypeField(BaseEnumField): AUTO_TYPE = OSType() class RNGModelField(BaseEnumField): AUTO_TYPE = RNGModel() class SCSIModelField(BaseEnumField): AUTO_TYPE = SCSIModel() class SecureBootField(BaseEnumField): AUTO_TYPE = SecureBoot() class VideoModelField(BaseEnumField): AUTO_TYPE = VideoModel() class VIFModelField(BaseEnumField): AUTO_TYPE = VIFModel() class VMModeField(BaseEnumField): AUTO_TYPE = VMMode() class WatchdogActionField(BaseEnumField): AUTO_TYPE = WatchdogAction() class MonitorMetricTypeField(BaseEnumField): AUTO_TYPE = MonitorMetricType() class PciDeviceStatusField(BaseEnumField): AUTO_TYPE = PciDeviceStatus() class PciDeviceTypeField(BaseEnumField): AUTO_TYPE = PciDeviceType() class PCINUMAAffinityPolicyField(BaseEnumField): AUTO_TYPE = PCINUMAAffinityPolicy() class DiskFormatField(BaseEnumField): AUTO_TYPE = DiskFormat() class HypervisorDriverField(BaseEnumField): AUTO_TYPE = HypervisorDriver() class PointerModelField(BaseEnumField): AUTO_TYPE = PointerModelType() class NotificationPriorityField(BaseEnumField): AUTO_TYPE = NotificationPriority() class NotificationPhaseField(BaseEnumField): AUTO_TYPE = NotificationPhase() class NotificationActionField(BaseEnumField): AUTO_TYPE = NotificationAction() class NotificationSourceField(BaseEnumField): AUTO_TYPE = NotificationSource() class InstanceStateField(BaseEnumField): AUTO_TYPE = InstanceState() class InstanceTaskStateField(BaseEnumField): AUTO_TYPE = InstanceTaskState() class InstancePowerStateField(BaseEnumField): AUTO_TYPE = InstancePowerState() class ListOfListsOfStringsField(AutoTypedField): AUTO_TYPE = List(List(fields.String())) class DictOfSetOfIntegersField(AutoTypedField): AUTO_TYPE = Dict(Set(fields.Integer())) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/flavor.py0000664000175000017500000006274200000000000017040 0ustar00zuulzuul00000000000000# Copyright 2013 Red Hat, Inc # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_db import exception as db_exc from oslo_db.sqlalchemy import utils as sqlalchemyutils from oslo_utils import versionutils from sqlalchemy import or_ from sqlalchemy.orm import joinedload from sqlalchemy.sql.expression import asc from sqlalchemy.sql import true import nova.conf from nova.db.sqlalchemy import api as db_api from nova.db.sqlalchemy.api import require_context from nova.db.sqlalchemy import api_models from nova import exception from nova.notifications.objects import base as notification from nova.notifications.objects import flavor as flavor_notification from nova import objects from nova.objects import base from nova.objects import fields OPTIONAL_FIELDS = ['extra_specs', 'projects'] # Remove these fields in version 2.0 of the object. DEPRECATED_FIELDS = ['deleted', 'deleted_at'] # Non-joined fields which can be updated. 
MUTABLE_FIELDS = set(['description']) CONF = nova.conf.CONF def _dict_with_extra_specs(flavor_model): extra_specs = {x['key']: x['value'] for x in flavor_model['extra_specs']} return dict(flavor_model, extra_specs=extra_specs) # NOTE(danms): There are some issues with the oslo_db context manager # decorators with static methods. We pull these out for now and can # move them back into the actual staticmethods on the object when those # issues are resolved. @db_api.api_context_manager.reader def _get_projects_from_db(context, flavorid): db_flavor = context.session.query(api_models.Flavors).\ filter_by(flavorid=flavorid).\ options(joinedload('projects')).\ first() if not db_flavor: raise exception.FlavorNotFound(flavor_id=flavorid) return [x['project_id'] for x in db_flavor['projects']] @db_api.api_context_manager.writer def _flavor_add_project(context, flavor_id, project_id): project = api_models.FlavorProjects() project.update({'flavor_id': flavor_id, 'project_id': project_id}) try: project.save(context.session) except db_exc.DBDuplicateEntry: raise exception.FlavorAccessExists(flavor_id=flavor_id, project_id=project_id) @db_api.api_context_manager.writer def _flavor_del_project(context, flavor_id, project_id): result = context.session.query(api_models.FlavorProjects).\ filter_by(project_id=project_id).\ filter_by(flavor_id=flavor_id).\ delete() if result == 0: raise exception.FlavorAccessNotFound(flavor_id=flavor_id, project_id=project_id) @db_api.api_context_manager.writer def _flavor_extra_specs_add(context, flavor_id, specs, max_retries=10): writer = db_api.api_context_manager.writer for attempt in range(max_retries): try: spec_refs = context.session.query( api_models.FlavorExtraSpecs).\ filter_by(flavor_id=flavor_id).\ filter(api_models.FlavorExtraSpecs.key.in_( specs.keys())).\ all() existing_keys = set() for spec_ref in spec_refs: key = spec_ref["key"] existing_keys.add(key) with writer.savepoint.using(context): spec_ref.update({"value": specs[key]}) for key, value in specs.items(): if key in existing_keys: continue spec_ref = api_models.FlavorExtraSpecs() with writer.savepoint.using(context): spec_ref.update({"key": key, "value": value, "flavor_id": flavor_id}) context.session.add(spec_ref) return specs except db_exc.DBDuplicateEntry: # a concurrent transaction has been committed, # try again unless this was the last attempt if attempt == max_retries - 1: raise exception.FlavorExtraSpecUpdateCreateFailed( id=flavor_id, retries=max_retries) @db_api.api_context_manager.writer def _flavor_extra_specs_del(context, flavor_id, key): result = context.session.query(api_models.FlavorExtraSpecs).\ filter_by(flavor_id=flavor_id).\ filter_by(key=key).\ delete() if result == 0: raise exception.FlavorExtraSpecsNotFound( extra_specs_key=key, flavor_id=flavor_id) @db_api.api_context_manager.writer def _flavor_create(context, values): specs = values.get('extra_specs') db_specs = [] if specs: for k, v in specs.items(): db_spec = api_models.FlavorExtraSpecs() db_spec['key'] = k db_spec['value'] = v db_specs.append(db_spec) projects = values.get('projects') db_projects = [] if projects: for project in set(projects): db_project = api_models.FlavorProjects() db_project['project_id'] = project db_projects.append(db_project) values['extra_specs'] = db_specs values['projects'] = db_projects db_flavor = api_models.Flavors() db_flavor.update(values) try: db_flavor.save(context.session) except db_exc.DBDuplicateEntry as e: if 'flavorid' in e.columns: raise exception.FlavorIdExists(flavor_id=values['flavorid']) 
raise exception.FlavorExists(name=values['name']) except Exception as e: raise db_exc.DBError(e) return _dict_with_extra_specs(db_flavor) @db_api.api_context_manager.writer def _flavor_destroy(context, flavor_id=None, flavorid=None): query = context.session.query(api_models.Flavors) if flavor_id is not None: query = query.filter(api_models.Flavors.id == flavor_id) else: query = query.filter(api_models.Flavors.flavorid == flavorid) result = query.first() if not result: raise exception.FlavorNotFound(flavor_id=(flavor_id or flavorid)) context.session.query(api_models.FlavorProjects).\ filter_by(flavor_id=result.id).delete() context.session.query(api_models.FlavorExtraSpecs).\ filter_by(flavor_id=result.id).delete() context.session.delete(result) return result # TODO(berrange): Remove NovaObjectDictCompat # TODO(mriedem): Remove NovaPersistentObject in version 2.0 @base.NovaObjectRegistry.register class Flavor(base.NovaPersistentObject, base.NovaObject, base.NovaObjectDictCompat): # Version 1.0: Initial version # Version 1.1: Added save_projects(), save_extra_specs(), removed # remotable from save() # Version 1.2: Added description field. Note: this field should not be # persisted with the embedded instance.flavor. VERSION = '1.2' fields = { 'id': fields.IntegerField(), 'name': fields.StringField(nullable=True), 'memory_mb': fields.IntegerField(), 'vcpus': fields.IntegerField(), 'root_gb': fields.IntegerField(), 'ephemeral_gb': fields.IntegerField(), 'flavorid': fields.StringField(), 'swap': fields.IntegerField(), 'rxtx_factor': fields.FloatField(nullable=True, default=1.0), 'vcpu_weight': fields.IntegerField(nullable=True), 'disabled': fields.BooleanField(), 'is_public': fields.BooleanField(), 'extra_specs': fields.DictOfStringsField(), 'projects': fields.ListOfStringsField(), 'description': fields.StringField(nullable=True) } def __init__(self, *args, **kwargs): super(Flavor, self).__init__(*args, **kwargs) self._orig_extra_specs = {} self._orig_projects = [] def obj_make_compatible(self, primitive, target_version): super(Flavor, self).obj_make_compatible(primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 2) and 'description' in primitive: del primitive['description'] @staticmethod def _from_db_object(context, flavor, db_flavor, expected_attrs=None): if expected_attrs is None: expected_attrs = [] flavor._context = context for name, field in flavor.fields.items(): if name in OPTIONAL_FIELDS: continue if name in DEPRECATED_FIELDS and name not in db_flavor: continue value = db_flavor[name] if isinstance(field, fields.IntegerField): value = value if value is not None else 0 flavor[name] = value # NOTE(danms): This is to support processing the API flavor # model, which does not have these deprecated fields. When we # remove compatibility with the old InstanceType model, we can # remove this as well. 
if any(f not in db_flavor for f in DEPRECATED_FIELDS): flavor.deleted_at = None flavor.deleted = False if 'extra_specs' in expected_attrs: flavor.extra_specs = db_flavor['extra_specs'] if 'projects' in expected_attrs: if 'projects' in db_flavor: flavor['projects'] = [x['project_id'] for x in db_flavor['projects']] else: flavor._load_projects() flavor.obj_reset_changes() return flavor @staticmethod @db_api.api_context_manager.reader def _flavor_get_query_from_db(context): query = context.session.query(api_models.Flavors).\ options(joinedload('extra_specs')) if not context.is_admin: the_filter = [api_models.Flavors.is_public == true()] the_filter.extend([ api_models.Flavors.projects.any(project_id=context.project_id) ]) query = query.filter(or_(*the_filter)) return query @staticmethod @require_context def _flavor_get_from_db(context, id): """Returns a dict describing specific flavor.""" result = Flavor._flavor_get_query_from_db(context).\ filter_by(id=id).\ first() if not result: raise exception.FlavorNotFound(flavor_id=id) return _dict_with_extra_specs(result) @staticmethod @require_context def _flavor_get_by_name_from_db(context, name): """Returns a dict describing specific flavor.""" result = Flavor._flavor_get_query_from_db(context).\ filter_by(name=name).\ first() if not result: raise exception.FlavorNotFoundByName(flavor_name=name) return _dict_with_extra_specs(result) @staticmethod @require_context def _flavor_get_by_flavor_id_from_db(context, flavor_id): """Returns a dict describing specific flavor_id.""" result = Flavor._flavor_get_query_from_db(context).\ filter_by(flavorid=flavor_id).\ order_by(asc(api_models.Flavors.id)).\ first() if not result: raise exception.FlavorNotFound(flavor_id=flavor_id) return _dict_with_extra_specs(result) @staticmethod def _get_projects_from_db(context, flavorid): return _get_projects_from_db(context, flavorid) @base.remotable def _load_projects(self): self.projects = self._get_projects_from_db(self._context, self.flavorid) self.obj_reset_changes(['projects']) def obj_load_attr(self, attrname): # NOTE(danms): Only projects could be lazy-loaded right now if attrname != 'projects': raise exception.ObjectActionError( action='obj_load_attr', reason='unable to load %s' % attrname) self._load_projects() def obj_reset_changes(self, fields=None, recursive=False): super(Flavor, self).obj_reset_changes(fields=fields, recursive=recursive) if fields is None or 'extra_specs' in fields: self._orig_extra_specs = (dict(self.extra_specs) if self.obj_attr_is_set('extra_specs') else {}) if fields is None or 'projects' in fields: self._orig_projects = (list(self.projects) if self.obj_attr_is_set('projects') else []) def obj_what_changed(self): changes = super(Flavor, self).obj_what_changed() if ('extra_specs' in self and self.extra_specs != self._orig_extra_specs): changes.add('extra_specs') if 'projects' in self and self.projects != self._orig_projects: changes.add('projects') return changes @classmethod def _obj_from_primitive(cls, context, objver, primitive): self = super(Flavor, cls)._obj_from_primitive(context, objver, primitive) changes = self.obj_what_changed() if 'extra_specs' not in changes: # This call left extra_specs "clean" so update our tracker self._orig_extra_specs = (dict(self.extra_specs) if self.obj_attr_is_set('extra_specs') else {}) if 'projects' not in changes: # This call left projects "clean" so update our tracker self._orig_projects = (list(self.projects) if self.obj_attr_is_set('projects') else []) return self @base.remotable_classmethod def 
get_by_id(cls, context, id): db_flavor = cls._flavor_get_from_db(context, id) return cls._from_db_object(context, cls(context), db_flavor, expected_attrs=['extra_specs']) @base.remotable_classmethod def get_by_name(cls, context, name): db_flavor = cls._flavor_get_by_name_from_db(context, name) return cls._from_db_object(context, cls(context), db_flavor, expected_attrs=['extra_specs']) @base.remotable_classmethod def get_by_flavor_id(cls, context, flavor_id, read_deleted=None): db_flavor = cls._flavor_get_by_flavor_id_from_db(context, flavor_id) return cls._from_db_object(context, cls(context), db_flavor, expected_attrs=['extra_specs']) @staticmethod def _flavor_add_project(context, flavor_id, project_id): return _flavor_add_project(context, flavor_id, project_id) @staticmethod def _flavor_del_project(context, flavor_id, project_id): return _flavor_del_project(context, flavor_id, project_id) def _add_access(self, project_id): self._flavor_add_project(self._context, self.id, project_id) @base.remotable def add_access(self, project_id): if 'projects' in self.obj_what_changed(): raise exception.ObjectActionError(action='add_access', reason='projects modified') self._add_access(project_id) self._load_projects() self._send_notification(fields.NotificationAction.UPDATE) def _remove_access(self, project_id): self._flavor_del_project(self._context, self.id, project_id) @base.remotable def remove_access(self, project_id): if 'projects' in self.obj_what_changed(): raise exception.ObjectActionError(action='remove_access', reason='projects modified') self._remove_access(project_id) self._load_projects() self._send_notification(fields.NotificationAction.UPDATE) @staticmethod def _flavor_create(context, updates): return _flavor_create(context, updates) @base.remotable def create(self): if self.obj_attr_is_set('id'): raise exception.ObjectActionError(action='create', reason='already created') updates = self.obj_get_changes() expected_attrs = [] for attr in OPTIONAL_FIELDS: if attr in updates: expected_attrs.append(attr) db_flavor = self._flavor_create(self._context, updates) self._from_db_object(self._context, self, db_flavor, expected_attrs=expected_attrs) self._send_notification(fields.NotificationAction.CREATE) @base.remotable def save_projects(self, to_add=None, to_delete=None): """Add or delete projects. :param:to_add: A list of projects to add :param:to_delete: A list of projects to remove """ to_add = to_add if to_add is not None else [] to_delete = to_delete if to_delete is not None else [] for project_id in to_add: self._add_access(project_id) for project_id in to_delete: self._remove_access(project_id) self.obj_reset_changes(['projects']) @staticmethod def _flavor_extra_specs_add(context, flavor_id, specs, max_retries=10): return _flavor_extra_specs_add(context, flavor_id, specs, max_retries) @staticmethod def _flavor_extra_specs_del(context, flavor_id, key): return _flavor_extra_specs_del(context, flavor_id, key) @base.remotable def save_extra_specs(self, to_add=None, to_delete=None): """Add or delete extra_specs. 
:param:to_add: A dict of new keys to add/update :param:to_delete: A list of keys to remove """ to_add = to_add if to_add is not None else {} to_delete = to_delete if to_delete is not None else [] if to_add: self._flavor_extra_specs_add(self._context, self.id, to_add) for key in to_delete: self._flavor_extra_specs_del(self._context, self.id, key) self.obj_reset_changes(['extra_specs']) # NOTE(mriedem): This method is not remotable since we only expect the API # to be able to make updates to a flavor. @db_api.api_context_manager.writer def _save(self, context, values): db_flavor = context.session.query(api_models.Flavors).\ filter_by(id=self.id).first() if not db_flavor: raise exception.FlavorNotFound(flavor_id=self.id) db_flavor.update(values) db_flavor.save(context.session) # Refresh ourselves from the DB object so we get the new updated_at. self._from_db_object(context, self, db_flavor) self.obj_reset_changes() def save(self): updates = self.obj_get_changes() projects = updates.pop('projects', None) extra_specs = updates.pop('extra_specs', None) if updates: # Only allowed to update from the whitelist of mutable fields. if set(updates.keys()) - MUTABLE_FIELDS: raise exception.ObjectActionError( action='save', reason='read-only fields were changed') self._save(self._context, updates) if extra_specs is not None: deleted_keys = (set(self._orig_extra_specs.keys()) - set(extra_specs.keys())) added_keys = self.extra_specs else: added_keys = deleted_keys = None if projects is not None: deleted_projects = set(self._orig_projects) - set(projects) added_projects = set(projects) - set(self._orig_projects) else: added_projects = deleted_projects = None # NOTE(danms): The first remotable method we call will reset # our of the original values for projects and extra_specs. Thus, # we collect the added/deleted lists for both above and /then/ # call these methods to update them. if added_keys or deleted_keys: self.save_extra_specs(self.extra_specs, deleted_keys) if added_projects or deleted_projects: self.save_projects(added_projects, deleted_projects) if (added_keys or deleted_keys or added_projects or deleted_projects or updates): self._send_notification(fields.NotificationAction.UPDATE) @staticmethod def _flavor_destroy(context, flavor_id=None, flavorid=None): return _flavor_destroy(context, flavor_id=flavor_id, flavorid=flavorid) @base.remotable def destroy(self): # NOTE(danms): Historically the only way to delete a flavor # is via name, which is not very precise. We need to be able to # support the light construction of a flavor object and subsequent # delete request with only our name filled out. However, if we have # our id property, we should instead delete with that since it's # far more specific. if 'id' in self: db_flavor = self._flavor_destroy(self._context, flavor_id=self.id) else: db_flavor = self._flavor_destroy(self._context, flavorid=self.flavorid) self._from_db_object(self._context, self, db_flavor) self._send_notification(fields.NotificationAction.DELETE) def _send_notification(self, action): # NOTE(danms): Instead of making the below notification # lazy-load projects (which is a problem for instance-bound # flavors and compute-cell operations), just load them here. if 'projects' not in self: # If the flavor is deleted we can't lazy-load projects. # FlavorPayload will orphan the flavor which will make the # NotificationPayloadBase set projects=None in the notification # payload. 
if action != fields.NotificationAction.DELETE: self._load_projects() notification_type = flavor_notification.FlavorNotification payload_type = flavor_notification.FlavorPayload payload = payload_type(self) notification_type( publisher=notification.NotificationPublisher( host=CONF.host, source=fields.NotificationSource.API), event_type=notification.EventType(object="flavor", action=action), priority=fields.NotificationPriority.INFO, payload=payload).emit(self._context) @db_api.api_context_manager.reader def _flavor_get_all_from_db(context, inactive, filters, sort_key, sort_dir, limit, marker): """Returns all flavors. """ filters = filters or {} query = Flavor._flavor_get_query_from_db(context) if 'min_memory_mb' in filters: query = query.filter( api_models.Flavors.memory_mb >= filters['min_memory_mb']) if 'min_root_gb' in filters: query = query.filter( api_models.Flavors.root_gb >= filters['min_root_gb']) if 'disabled' in filters: query = query.filter( api_models.Flavors.disabled == filters['disabled']) if 'is_public' in filters and filters['is_public'] is not None: the_filter = [api_models.Flavors.is_public == filters['is_public']] if filters['is_public'] and context.project_id is not None: the_filter.extend([api_models.Flavors.projects.any( project_id=context.project_id)]) if len(the_filter) > 1: query = query.filter(or_(*the_filter)) else: query = query.filter(the_filter[0]) marker_row = None if marker is not None: marker_row = Flavor._flavor_get_query_from_db(context).\ filter_by(flavorid=marker).\ first() if not marker_row: raise exception.MarkerNotFound(marker=marker) query = sqlalchemyutils.paginate_query(query, api_models.Flavors, limit, [sort_key, 'id'], marker=marker_row, sort_dir=sort_dir) return [_dict_with_extra_specs(i) for i in query.all()] @base.NovaObjectRegistry.register class FlavorList(base.ObjectListBase, base.NovaObject): VERSION = '1.1' fields = { 'objects': fields.ListOfObjectsField('Flavor'), } @base.remotable_classmethod def get_all(cls, context, inactive=False, filters=None, sort_key='flavorid', sort_dir='asc', limit=None, marker=None): api_db_flavors = _flavor_get_all_from_db(context, inactive=inactive, filters=filters, sort_key=sort_key, sort_dir=sort_dir, limit=limit, marker=marker) return base.obj_make_list(context, cls(context), objects.Flavor, api_db_flavors, expected_attrs=['extra_specs']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/host_mapping.py0000664000175000017500000002462400000000000020234 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
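# HostMapping records, in the API database, which cell a given compute host
# belongs to; discover_hosts() further down in this module is the routine
# behind host discovery (e.g. the ``nova-manage cell_v2 discover_hosts``
# command).  A minimal usage sketch, illustrative only -- ``ctxt`` is assumed
# to be an admin RequestContext and ``_status`` is a hypothetical callback:
#
#     from nova.objects import host_mapping
#
#     def _status(msg):
#         print(msg)
#
#     added = host_mapping.discover_hosts(ctxt, status_fn=_status)
#     for hm in added:
#         print(hm.host, hm.cell_mapping.uuid)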
from oslo_db import exception as db_exc from sqlalchemy.orm import joinedload from nova import context from nova.db.sqlalchemy import api as db_api from nova.db.sqlalchemy import api_models from nova import exception from nova.i18n import _ from nova.objects import base from nova.objects import cell_mapping from nova.objects import fields def _cell_id_in_updates(updates): cell_mapping_obj = updates.pop("cell_mapping", None) if cell_mapping_obj: updates["cell_id"] = cell_mapping_obj.id def _apply_updates(context, db_mapping, updates): db_mapping.update(updates) db_mapping.save(context.session) # NOTE: This is done because a later access will trigger a lazy load # outside of the db session so it will fail. We don't lazy load # cell_mapping on the object later because we never need a HostMapping # without the CellMapping. db_mapping.cell_mapping return db_mapping @base.NovaObjectRegistry.register class HostMapping(base.NovaTimestampObject, base.NovaObject): # Version 1.0: Initial version VERSION = '1.0' fields = { 'id': fields.IntegerField(read_only=True), 'host': fields.StringField(), 'cell_mapping': fields.ObjectField('CellMapping'), } def _get_cell_mapping(self): with db_api.api_context_manager.reader.using(self._context) as session: cell_map = (session.query(api_models.CellMapping) .join(api_models.HostMapping) .filter(api_models.HostMapping.host == self.host) .first()) if cell_map is not None: return cell_mapping.CellMapping._from_db_object( self._context, cell_mapping.CellMapping(), cell_map) def _load_cell_mapping(self): self.cell_mapping = self._get_cell_mapping() def obj_load_attr(self, attrname): if attrname == 'cell_mapping': self._load_cell_mapping() @staticmethod def _from_db_object(context, host_mapping, db_host_mapping): for key in host_mapping.fields: db_value = db_host_mapping.get(key) if key == "cell_mapping": # NOTE(dheeraj): If cell_mapping is stashed in db object # we load it here. 
Otherwise, lazy loading will happen # when .cell_mapping is accessed later if not db_value: continue db_value = cell_mapping.CellMapping._from_db_object( host_mapping._context, cell_mapping.CellMapping(), db_value) setattr(host_mapping, key, db_value) host_mapping.obj_reset_changes() host_mapping._context = context return host_mapping @staticmethod @db_api.api_context_manager.reader def _get_by_host_from_db(context, host): db_mapping = (context.session.query(api_models.HostMapping) .options(joinedload('cell_mapping')) .filter(api_models.HostMapping.host == host)).first() if not db_mapping: raise exception.HostMappingNotFound(name=host) return db_mapping @base.remotable_classmethod def get_by_host(cls, context, host): db_mapping = cls._get_by_host_from_db(context, host) return cls._from_db_object(context, cls(), db_mapping) @staticmethod @db_api.api_context_manager.writer def _create_in_db(context, updates): db_mapping = api_models.HostMapping() return _apply_updates(context, db_mapping, updates) @base.remotable def create(self): changes = self.obj_get_changes() # cell_mapping must be mapped to cell_id for create _cell_id_in_updates(changes) db_mapping = self._create_in_db(self._context, changes) self._from_db_object(self._context, self, db_mapping) @staticmethod @db_api.api_context_manager.writer def _save_in_db(context, obj, updates): db_mapping = context.session.query(api_models.HostMapping).filter_by( id=obj.id).first() if not db_mapping: raise exception.HostMappingNotFound(name=obj.host) return _apply_updates(context, db_mapping, updates) @base.remotable def save(self): changes = self.obj_get_changes() # cell_mapping must be mapped to cell_id for updates _cell_id_in_updates(changes) db_mapping = self._save_in_db(self._context, self, changes) self._from_db_object(self._context, self, db_mapping) self.obj_reset_changes() @staticmethod @db_api.api_context_manager.writer def _destroy_in_db(context, host): result = context.session.query(api_models.HostMapping).filter_by( host=host).delete() if not result: raise exception.HostMappingNotFound(name=host) @base.remotable def destroy(self): self._destroy_in_db(self._context, self.host) @base.NovaObjectRegistry.register class HostMappingList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version # Version 1.1: Add get_all method VERSION = '1.1' fields = { 'objects': fields.ListOfObjectsField('HostMapping'), } @staticmethod @db_api.api_context_manager.reader def _get_from_db(context, cell_id=None): query = (context.session.query(api_models.HostMapping) .options(joinedload('cell_mapping'))) if cell_id: query = query.filter(api_models.HostMapping.cell_id == cell_id) return query.all() @base.remotable_classmethod def get_by_cell_id(cls, context, cell_id): db_mappings = cls._get_from_db(context, cell_id) return base.obj_make_list(context, cls(), HostMapping, db_mappings) @base.remotable_classmethod def get_all(cls, context): db_mappings = cls._get_from_db(context) return base.obj_make_list(context, cls(), HostMapping, db_mappings) def _create_host_mapping(host_mapping): try: host_mapping.create() except db_exc.DBDuplicateEntry: raise exception.HostMappingExists(name=host_mapping.host) def _check_and_create_node_host_mappings(ctxt, cm, compute_nodes, status_fn): host_mappings = [] for compute in compute_nodes: status_fn(_("Checking host mapping for compute host " "'%(host)s': %(uuid)s") % {'host': compute.host, 'uuid': compute.uuid}) try: HostMapping.get_by_host(ctxt, compute.host) except exception.HostMappingNotFound: 
status_fn(_("Creating host mapping for compute host " "'%(host)s': %(uuid)s") % {'host': compute.host, 'uuid': compute.uuid}) host_mapping = HostMapping( ctxt, host=compute.host, cell_mapping=cm) _create_host_mapping(host_mapping) host_mappings.append(host_mapping) compute.mapped = 1 compute.save() return host_mappings def _check_and_create_service_host_mappings(ctxt, cm, services, status_fn): host_mappings = [] for service in services: try: HostMapping.get_by_host(ctxt, service.host) except exception.HostMappingNotFound: status_fn(_('Creating host mapping for service %(srv)s') % {'srv': service.host}) host_mapping = HostMapping( ctxt, host=service.host, cell_mapping=cm) _create_host_mapping(host_mapping) host_mappings.append(host_mapping) return host_mappings def _check_and_create_host_mappings(ctxt, cm, status_fn, by_service): from nova import objects if by_service: services = objects.ServiceList.get_by_binary( ctxt, 'nova-compute', include_disabled=True) added_hm = _check_and_create_service_host_mappings(ctxt, cm, services, status_fn) else: compute_nodes = objects.ComputeNodeList.get_all_by_not_mapped( ctxt, 1) added_hm = _check_and_create_node_host_mappings(ctxt, cm, compute_nodes, status_fn) return added_hm def discover_hosts(ctxt, cell_uuid=None, status_fn=None, by_service=False): # TODO(alaski): If this is not run on a host configured to use the API # database most of the lookups below will fail and may not provide a # great error message. Add a check which will raise a useful error # message about running this from an API host. from nova import objects if not status_fn: status_fn = lambda x: None if cell_uuid: cell_mappings = [objects.CellMapping.get_by_uuid(ctxt, cell_uuid)] else: cell_mappings = objects.CellMappingList.get_all(ctxt) status_fn(_('Found %s cell mappings.') % len(cell_mappings)) host_mappings = [] for cm in cell_mappings: if cm.is_cell0(): status_fn(_('Skipping cell0 since it does not contain hosts.')) continue if 'name' in cm and cm.name: status_fn(_("Getting computes from cell '%(name)s': " "%(uuid)s") % {'name': cm.name, 'uuid': cm.uuid}) else: status_fn(_("Getting computes from cell: %(uuid)s") % {'uuid': cm.uuid}) with context.target_cell(ctxt, cm) as cctxt: added_hm = _check_and_create_host_mappings(cctxt, cm, status_fn, by_service) status_fn(_('Found %(num)s unmapped computes in cell: %(uuid)s') % {'num': len(added_hm), 'uuid': cm.uuid}) host_mappings.extend(added_hm) return host_mappings ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/objects/hv_spec.py0000664000175000017500000000352600000000000017171 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_utils import versionutils from nova.objects import base from nova.objects import fields @base.NovaObjectRegistry.register class HVSpec(base.NovaObject): # Version 1.0: Initial version # Version 1.1: Added 'vz' hypervisor # Version 1.2: Added 'lxd' hypervisor VERSION = '1.2' fields = { 'arch': fields.ArchitectureField(), 'hv_type': fields.HVTypeField(), 'vm_mode': fields.VMModeField(), } # NOTE(pmurray): for backward compatibility, the supported instance # data is stored in the database as a list. @classmethod def from_list(cls, data): return cls(arch=data[0], hv_type=data[1], vm_mode=data[2]) def to_list(self): return [self.arch, self.hv_type, self.vm_mode] def obj_make_compatible(self, primitive, target_version): super(HVSpec, self).obj_make_compatible(primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if (target_version < (1, 1) and 'hv_type' in primitive and fields.HVType.VIRTUOZZO == primitive['hv_type']): primitive['hv_type'] = fields.HVType.PARALLELS ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/image_meta.py0000664000175000017500000006506700000000000017642 0ustar00zuulzuul00000000000000# Copyright 2014 Red Hat, Inc # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from oslo_utils import versionutils import six from nova import exception from nova import objects from nova.objects import base from nova.objects import fields from nova import utils from nova.virt import hardware NULLABLE_STRING_FIELDS = ['name', 'checksum', 'owner', 'container_format', 'disk_format'] NULLABLE_INTEGER_FIELDS = ['size', 'virtual_size'] @base.NovaObjectRegistry.register class ImageMeta(base.NovaObject): # Version 1.0: Initial version # Version 1.1: updated ImageMetaProps # Version 1.2: ImageMetaProps version 1.2 # Version 1.3: ImageMetaProps version 1.3 # Version 1.4: ImageMetaProps version 1.4 # Version 1.5: ImageMetaProps version 1.5 # Version 1.6: ImageMetaProps version 1.6 # Version 1.7: ImageMetaProps version 1.7 # Version 1.8: ImageMetaProps version 1.8 VERSION = '1.8' # These are driven by what the image client API returns # to Nova from Glance. This is defined in the glance # code glance/api/v2/images.py get_base_properties() # method. A few things are currently left out: # self, file, schema - Nova does not appear to ever use # these field; locations - modelling the arbitrary # data in the 'metadata' subfield is non-trivial as # there's no clear spec. 
# # TODO(ft): In version 2.0, these fields should be nullable: # name, checksum, owner, size, virtual_size, container_format, disk_format # fields = { 'id': fields.UUIDField(), 'name': fields.StringField(), 'status': fields.StringField(), 'visibility': fields.StringField(), 'protected': fields.FlexibleBooleanField(), 'checksum': fields.StringField(), 'owner': fields.StringField(), 'size': fields.IntegerField(), 'virtual_size': fields.IntegerField(), 'container_format': fields.StringField(), 'disk_format': fields.StringField(), 'created_at': fields.DateTimeField(nullable=True), 'updated_at': fields.DateTimeField(nullable=True), 'tags': fields.ListOfStringsField(), 'direct_url': fields.StringField(), 'min_ram': fields.IntegerField(), 'min_disk': fields.IntegerField(), 'properties': fields.ObjectField('ImageMetaProps'), } @classmethod def from_dict(cls, image_meta): """Create instance from image metadata dict :param image_meta: image metadata dictionary Creates a new object instance, initializing from the properties associated with the image metadata instance :returns: an ImageMeta instance """ if image_meta is None: image_meta = {} # We must turn 'properties' key dict into an object # so copy image_meta to avoid changing original image_meta = copy.deepcopy(image_meta) image_meta["properties"] = \ objects.ImageMetaProps.from_dict( image_meta.get("properties", {})) # Some fields are nullable in Glance DB schema, but was not marked that # in ImageMeta initially by mistake. To keep compatibility with compute # nodes which are run with previous versions these fields are still # not nullable in ImageMeta, but the code below converts None to # appropriate empty values. for fld in NULLABLE_STRING_FIELDS: if fld in image_meta and image_meta[fld] is None: image_meta[fld] = '' for fld in NULLABLE_INTEGER_FIELDS: if fld in image_meta and image_meta[fld] is None: image_meta[fld] = 0 return cls(**image_meta) @classmethod def from_instance(cls, instance): """Create instance from instance system metadata :param instance: Instance object Creates a new object instance, initializing from the system metadata "image_*" properties associated with instance :returns: an ImageMeta instance """ sysmeta = utils.instance_sys_meta(instance) image_meta = utils.get_image_from_system_metadata(sysmeta) return cls.from_dict(image_meta) @classmethod def from_image_ref(cls, context, image_api, image_ref): """Create instance from glance image :param context: the request context :param image_api: the glance client API :param image_ref: the glance image identifier Creates a new object instance, initializing from the properties associated with a glance image :returns: an ImageMeta instance """ image_meta = image_api.get(context, image_ref) image = cls.from_dict(image_meta) setattr(image, "id", image_ref) return image @base.NovaObjectRegistry.register class ImageMetaProps(base.NovaObject): # Version 1.0: Initial version # Version 1.1: added os_require_quiesce field # Version 1.2: added img_hv_type and img_hv_requested_version fields # Version 1.3: HVSpec version 1.1 # Version 1.4: added hw_vif_multiqueue_enabled field # Version 1.5: added os_admin_user field # Version 1.6: Added 'lxc' and 'uml' enum types to DiskBusField # Version 1.7: added img_config_drive field # Version 1.8: Added 'lxd' to hypervisor types # Version 1.9: added hw_cpu_thread_policy field # Version 1.10: added hw_cpu_realtime_mask field # Version 1.11: Added hw_firmware_type field # Version 1.12: Added properties for image signature verification # Version 1.13: 
added os_secure_boot field # Version 1.14: Added 'hw_pointer_model' field # Version 1.15: Added hw_rescue_bus and hw_rescue_device. # Version 1.16: WatchdogActionField supports 'disabled' enum. # Version 1.17: Add lan9118 as valid nic for hw_vif_model property for qemu # Version 1.18: Pull signature properties from cursive library # Version 1.19: Added 'img_hide_hypervisor_id' type field # Version 1.20: Added 'traits_required' list field # Version 1.21: Added 'hw_time_hpet' field # Version 1.22: Added 'gop', 'virtio' and 'none' to hw_video_model field # Version 1.23: Added 'hw_pmu' field # Version 1.24: Added 'hw_mem_encryption' field # Version 1.25: Added 'hw_pci_numa_affinity_policy' field # NOTE(efried): When bumping this version, the version of # ImageMetaPropsPayload must also be bumped. See its docstring for details. VERSION = '1.25' def obj_make_compatible(self, primitive, target_version): super(ImageMetaProps, self).obj_make_compatible(primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 25): primitive.pop('hw_pci_numa_affinity_policy', None) if target_version < (1, 24): primitive.pop('hw_mem_encryption', None) if target_version < (1, 23): primitive.pop('hw_pmu', None) # NOTE(sean-k-mooney): unlike other nova object we version this object # when composed object are updated. if target_version < (1, 22): video = primitive.get('hw_video_model', None) if video in ('gop', 'virtio', 'none'): raise exception.ObjectActionError( action='obj_make_compatible', reason='hw_video_model=%s not supported in version %s' % ( video, target_version)) if target_version < (1, 21): primitive.pop('hw_time_hpet', None) if target_version < (1, 20): primitive.pop('traits_required', None) if target_version < (1, 19): primitive.pop('img_hide_hypervisor_id', None) if target_version < (1, 16) and 'hw_watchdog_action' in primitive: # Check to see if hw_watchdog_action was set to 'disabled' and if # so, remove it since not specifying it is the same behavior. 
if primitive['hw_watchdog_action'] == \ fields.WatchdogAction.DISABLED: primitive.pop('hw_watchdog_action') if target_version < (1, 15): primitive.pop('hw_rescue_bus', None) primitive.pop('hw_rescue_device', None) if target_version < (1, 14): primitive.pop('hw_pointer_model', None) if target_version < (1, 13): primitive.pop('os_secure_boot', None) if target_version < (1, 11): primitive.pop('hw_firmware_type', None) if target_version < (1, 10): primitive.pop('hw_cpu_realtime_mask', None) if target_version < (1, 9): primitive.pop('hw_cpu_thread_policy', None) if target_version < (1, 7): primitive.pop('img_config_drive', None) if target_version < (1, 5): primitive.pop('os_admin_user', None) if target_version < (1, 4): primitive.pop('hw_vif_multiqueue_enabled', None) if target_version < (1, 2): primitive.pop('img_hv_type', None) primitive.pop('img_hv_requested_version', None) if target_version < (1, 1): primitive.pop('os_require_quiesce', None) if target_version < (1, 6): bus = primitive.get('hw_disk_bus', None) if bus in ('lxc', 'uml'): raise exception.ObjectActionError( action='obj_make_compatible', reason='hw_disk_bus=%s not supported in version %s' % ( bus, target_version)) # Maximum number of NUMA nodes permitted for the guest topology NUMA_NODES_MAX = 128 # 'hw_' - settings affecting the guest virtual machine hardware # 'img_' - settings affecting the use of images by the compute node # 'os_' - settings affecting the guest operating system setup # 'traits_required' - The required traits associated with the image fields = { # name of guest hardware architecture eg i686, x86_64, ppc64 'hw_architecture': fields.ArchitectureField(), # used to decide to expand root disk partition and fs to full size of # root disk 'hw_auto_disk_config': fields.StringField(), # whether to display BIOS boot device menu 'hw_boot_menu': fields.FlexibleBooleanField(), # name of the CDROM bus to use eg virtio, scsi, ide 'hw_cdrom_bus': fields.DiskBusField(), # preferred number of CPU cores per socket 'hw_cpu_cores': fields.IntegerField(), # preferred number of CPU sockets 'hw_cpu_sockets': fields.IntegerField(), # maximum number of CPU cores per socket 'hw_cpu_max_cores': fields.IntegerField(), # maximum number of CPU sockets 'hw_cpu_max_sockets': fields.IntegerField(), # maximum number of CPU threads per core 'hw_cpu_max_threads': fields.IntegerField(), # CPU allocation policy 'hw_cpu_policy': fields.CPUAllocationPolicyField(), # CPU thread allocation policy 'hw_cpu_thread_policy': fields.CPUThreadAllocationPolicyField(), # CPU mask indicates which vCPUs will have realtime enable, # example ^0-1 means that all vCPUs except 0 and 1 will have a # realtime policy. 
'hw_cpu_realtime_mask': fields.StringField(), # preferred number of CPU threads per core 'hw_cpu_threads': fields.IntegerField(), # guest ABI version for guest xentools either 1 or 2 (or 3 - depends on # Citrix PV tools version installed in image) 'hw_device_id': fields.IntegerField(), # name of the hard disk bus to use eg virtio, scsi, ide 'hw_disk_bus': fields.DiskBusField(), # allocation mode eg 'preallocated' 'hw_disk_type': fields.StringField(), # name of the floppy disk bus to use eg fd, scsi, ide 'hw_floppy_bus': fields.DiskBusField(), # This indicates the guest needs UEFI firmware 'hw_firmware_type': fields.FirmwareTypeField(), # boolean - used to trigger code to inject networking when booting a CD # image with a network boot image 'hw_ipxe_boot': fields.FlexibleBooleanField(), # There are sooooooooooo many possible machine types in # QEMU - several new ones with each new release - that it # is not practical to enumerate them all. So we use a free # form string 'hw_machine_type': fields.StringField(), # boolean indicating that the guest needs to be booted with # encrypted memory 'hw_mem_encryption': fields.FlexibleBooleanField(), # One of the magic strings 'small', 'any', 'large' # or an explicit page size in KB (eg 4, 2048, ...) 'hw_mem_page_size': fields.StringField(), # Number of guest NUMA nodes 'hw_numa_nodes': fields.IntegerField(), # Each list entry corresponds to a guest NUMA node and the # set members indicate CPUs for that node 'hw_numa_cpus': fields.ListOfSetsOfIntegersField(), # Each list entry corresponds to a guest NUMA node and the # list value indicates the memory size of that node. 'hw_numa_mem': fields.ListOfIntegersField(), # Enum field to specify pci device NUMA affinity. 'hw_pci_numa_affinity_policy': fields.PCINUMAAffinityPolicyField(), # Generic property to specify the pointer model type. 'hw_pointer_model': fields.PointerModelField(), # boolean 'true' or 'false' to enable virtual performance # monitoring unit (vPMU). 'hw_pmu': fields.FlexibleBooleanField(), # boolean 'yes' or 'no' to enable QEMU guest agent 'hw_qemu_guest_agent': fields.FlexibleBooleanField(), # name of the rescue bus to use with the associated rescue device. 'hw_rescue_bus': fields.DiskBusField(), # name of rescue device to use. 'hw_rescue_device': fields.BlockDeviceTypeField(), # name of the RNG device type eg virtio # NOTE(kchamart): Although this is currently not used anymore, # we should not remove / deprecate it yet, as we are likely to # extend this field to allow two more values to support "VirtIO # transitional/non-transitional devices" (refer to the note in # RNGModel() class in nova/objects/fields.py), and thus expose # to the user again. 
'hw_rng_model': fields.RNGModelField(), # boolean 'true' or 'false' to enable HPET 'hw_time_hpet': fields.FlexibleBooleanField(), # number of serial ports to create 'hw_serial_port_count': fields.IntegerField(), # name of the SCSI bus controller eg 'virtio-scsi', 'lsilogic', etc 'hw_scsi_model': fields.SCSIModelField(), # name of the video adapter model to use, eg cirrus, vga, xen, qxl 'hw_video_model': fields.VideoModelField(), # MB of video RAM to provide eg 64 'hw_video_ram': fields.IntegerField(), # name of a NIC device model eg virtio, e1000, rtl8139 'hw_vif_model': fields.VIFModelField(), # "xen" vs "hvm" 'hw_vm_mode': fields.VMModeField(), # action to take when watchdog device fires eg reset, poweroff, pause, # none 'hw_watchdog_action': fields.WatchdogActionField(), # boolean - If true, this will enable the virtio-multiqueue feature 'hw_vif_multiqueue_enabled': fields.FlexibleBooleanField(), # if true download using bittorrent 'img_bittorrent': fields.FlexibleBooleanField(), # Which data format the 'img_block_device_mapping' field is # using to represent the block device mapping 'img_bdm_v2': fields.FlexibleBooleanField(), # Block device mapping - the may can be in one or two completely # different formats. The 'img_bdm_v2' field determines whether # it is in legacy format, or the new current format. Ideally # we would have a formal data type for this field instead of a # dict, but with 2 different formats to represent this is hard. # See nova/block_device.py from_legacy_mapping() for the complex # conversion code. So for now leave it as a dict and continue # to use existing code that is able to convert dict into the # desired internal BDM formats 'img_block_device_mapping': fields.ListOfDictOfNullableStringsField(), # boolean - if True, and image cache set to "some" decides if image # should be cached on host when server is booted on that host 'img_cache_in_nova': fields.FlexibleBooleanField(), # Compression level for images. (1-9) 'img_compression_level': fields.IntegerField(), # hypervisor supported version, eg. '>=2.6' 'img_hv_requested_version': fields.VersionPredicateField(), # type of the hypervisor, eg kvm, ironic, xen 'img_hv_type': fields.HVTypeField(), # Whether the image needs/expected config drive 'img_config_drive': fields.ConfigDrivePolicyField(), # boolean flag to set space-saving or performance behavior on the # Datastore 'img_linked_clone': fields.FlexibleBooleanField(), # Image mappings - related to Block device mapping data - mapping # of virtual image names to device names. This could be represented # as a formal data type, but is left as dict for same reason as # img_block_device_mapping field. It would arguably make sense for # the two to be combined into a single field and data type in the # future. 
'img_mappings': fields.ListOfDictOfNullableStringsField(), # image project id (set on upload) 'img_owner_id': fields.StringField(), # root device name, used in snapshotting eg /dev/ 'img_root_device_name': fields.StringField(), # boolean - if false don't talk to nova agent 'img_use_agent': fields.FlexibleBooleanField(), # integer value 1 'img_version': fields.IntegerField(), # base64 of encoding of image signature 'img_signature': fields.StringField(), # string indicating hash method used to compute image signature 'img_signature_hash_method': fields.ImageSignatureHashTypeField(), # string indicating Castellan uuid of certificate # used to compute the image's signature 'img_signature_certificate_uuid': fields.UUIDField(), # string indicating type of key used to compute image signature 'img_signature_key_type': fields.ImageSignatureKeyTypeField(), # boolean - hide hypervisor signature on instance 'img_hide_hypervisor_id': fields.FlexibleBooleanField(), # string of username with admin privileges 'os_admin_user': fields.StringField(), # string of boot time command line arguments for the guest kernel 'os_command_line': fields.StringField(), # the name of the specific guest operating system distro. This # is not done as an Enum since the list of operating systems is # growing incredibly fast, and valid values can be arbitrarily # user defined. Nova has no real need for strict validation so # leave it freeform 'os_distro': fields.StringField(), # boolean - if true, then guest must support disk quiesce # or snapshot operation will be denied 'os_require_quiesce': fields.FlexibleBooleanField(), # Secure Boot feature will be enabled by setting the "os_secure_boot" # image property to "required". Other options can be: "disabled" or # "optional". # "os:secure_boot" flavor extra spec value overrides the image property # value. 'os_secure_boot': fields.SecureBootField(), # boolean - if using agent don't inject files, assume someone else is # doing that (cloud-init) 'os_skip_agent_inject_files_at_boot': fields.FlexibleBooleanField(), # boolean - if using agent don't try inject ssh key, assume someone # else is doing that (cloud-init) 'os_skip_agent_inject_ssh': fields.FlexibleBooleanField(), # The guest operating system family such as 'linux', 'windows' - this # is a fairly generic type. For a detailed type consider os_distro # instead 'os_type': fields.OSTypeField(), # The required traits associated with the image. 
Traits are expected to # be defined as starting with `trait:` like below: # trait:HW_CPU_X86_AVX2=required # for trait in image_meta.traits_required: # will yield trait strings such as 'HW_CPU_X86_AVX2' 'traits_required': fields.ListOfStringsField(), } # The keys are the legacy property names and # the values are the current preferred names _legacy_property_map = { 'architecture': 'hw_architecture', 'owner_id': 'img_owner_id', 'vmware_disktype': 'hw_disk_type', 'vmware_image_version': 'img_version', 'vmware_ostype': 'os_distro', 'auto_disk_config': 'hw_auto_disk_config', 'ipxe_boot': 'hw_ipxe_boot', 'xenapi_device_id': 'hw_device_id', 'xenapi_image_compression_level': 'img_compression_level', 'vmware_linked_clone': 'img_linked_clone', 'xenapi_use_agent': 'img_use_agent', 'xenapi_skip_agent_inject_ssh': 'os_skip_agent_inject_ssh', 'xenapi_skip_agent_inject_files_at_boot': 'os_skip_agent_inject_files_at_boot', 'cache_in_nova': 'img_cache_in_nova', 'vm_mode': 'hw_vm_mode', 'bittorrent': 'img_bittorrent', 'mappings': 'img_mappings', 'block_device_mapping': 'img_block_device_mapping', 'bdm_v2': 'img_bdm_v2', 'root_device_name': 'img_root_device_name', 'hypervisor_version_requires': 'img_hv_requested_version', 'hypervisor_type': 'img_hv_type', } # TODO(berrange): Need to run this from a data migration # at some point so we can eventually kill off the compat def _set_attr_from_legacy_names(self, image_props): for legacy_key in self._legacy_property_map: new_key = self._legacy_property_map[legacy_key] if legacy_key not in image_props: continue setattr(self, new_key, image_props[legacy_key]) vmware_adaptertype = image_props.get("vmware_adaptertype") if vmware_adaptertype == "ide": setattr(self, "hw_disk_bus", "ide") elif vmware_adaptertype: setattr(self, "hw_disk_bus", "scsi") setattr(self, "hw_scsi_model", vmware_adaptertype) def _set_numa_mem(self, image_props): hw_numa_mem = [] hw_numa_mem_set = False for cellid in range(ImageMetaProps.NUMA_NODES_MAX): memprop = "hw_numa_mem.%d" % cellid if memprop not in image_props: break hw_numa_mem.append(int(image_props[memprop])) hw_numa_mem_set = True del image_props[memprop] if hw_numa_mem_set: self.hw_numa_mem = hw_numa_mem def _set_numa_cpus(self, image_props): hw_numa_cpus = [] hw_numa_cpus_set = False for cellid in range(ImageMetaProps.NUMA_NODES_MAX): cpuprop = "hw_numa_cpus.%d" % cellid if cpuprop not in image_props: break hw_numa_cpus.append( hardware.parse_cpu_spec(image_props[cpuprop])) hw_numa_cpus_set = True del image_props[cpuprop] if hw_numa_cpus_set: self.hw_numa_cpus = hw_numa_cpus def _set_attr_from_current_names(self, image_props): for key in self.fields: # The two NUMA fields need special handling to # un-stringify them correctly if key == "hw_numa_mem": self._set_numa_mem(image_props) elif key == "hw_numa_cpus": self._set_numa_cpus(image_props) else: # traits_required will be populated by # _set_attr_from_trait_names if key not in image_props or key == "traits_required": continue setattr(self, key, image_props[key]) def _set_attr_from_trait_names(self, image_props): for trait in [six.text_type(k[6:]) for k, v in image_props.items() if six.text_type(k).startswith("trait:") and six.text_type(v) == six.text_type('required')]: if 'traits_required' not in self: self.traits_required = [] self.traits_required.append(trait) @classmethod def from_dict(cls, image_props): """Create instance from image properties dict :param image_props: dictionary of image metadata properties Creates a new object instance, initializing from a dictionary of image 
metadata properties :returns: an ImageMetaProps instance """ obj = cls() # We look to see if the dict has entries for any # of the legacy property names first. Then we use # the current property names. That way if both the # current and legacy names are set, the value # associated with the current name takes priority obj._set_attr_from_legacy_names(image_props) obj._set_attr_from_current_names(image_props) obj._set_attr_from_trait_names(image_props) return obj def get(self, name, defvalue=None): """Get the value of an attribute :param name: the attribute to request :param defvalue: the default value if not set This returns the value of an attribute if it is currently set, otherwise it will return None. This differs from accessing props.attrname, because that will raise an exception if the attribute has no value set. So instead of if image_meta.properties.obj_attr_is_set("some_attr"): val = image_meta.properties.some_attr else val = None Callers can rely on unconditional access val = image_meta.properties.get("some_attr") :returns: the attribute value or None """ if not self.obj_attr_is_set(name): return defvalue return getattr(self, name) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/instance.py0000664000175000017500000020553100000000000017346 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib from oslo_config import cfg from oslo_db import exception as db_exc from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import timeutils from oslo_utils import versionutils from sqlalchemy import or_ from sqlalchemy.sql import false from sqlalchemy.sql import func from sqlalchemy.sql import null from nova import availability_zones as avail_zone from nova.compute import task_states from nova.compute import vm_states from nova.db import api as db from nova.db.sqlalchemy import api as db_api from nova.db.sqlalchemy import models from nova import exception from nova.i18n import _ from nova.network import model as network_model from nova import notifications from nova import objects from nova.objects import base from nova.objects import fields from nova import utils CONF = cfg.CONF LOG = logging.getLogger(__name__) # List of fields that can be joined in DB layer. 
_INSTANCE_OPTIONAL_JOINED_FIELDS = ['metadata', 'system_metadata', 'info_cache', 'security_groups', 'pci_devices', 'tags', 'services', 'fault'] # These are fields that are optional but don't translate to db columns _INSTANCE_OPTIONAL_NON_COLUMN_FIELDS = ['flavor', 'old_flavor', 'new_flavor', 'ec2_ids'] # These are fields that are optional and in instance_extra _INSTANCE_EXTRA_FIELDS = ['numa_topology', 'pci_requests', 'flavor', 'vcpu_model', 'migration_context', 'keypairs', 'device_metadata', 'trusted_certs', 'resources'] # These are fields that applied/drooped by migration_context _MIGRATION_CONTEXT_ATTRS = ['numa_topology', 'pci_requests', 'pci_devices', 'resources'] # These are fields that can be specified as expected_attrs INSTANCE_OPTIONAL_ATTRS = (_INSTANCE_OPTIONAL_JOINED_FIELDS + _INSTANCE_OPTIONAL_NON_COLUMN_FIELDS + _INSTANCE_EXTRA_FIELDS) # These are fields that most query calls load by default INSTANCE_DEFAULT_FIELDS = ['metadata', 'system_metadata', 'info_cache', 'security_groups'] # Maximum count of tags to one instance MAX_TAG_COUNT = 50 def _expected_cols(expected_attrs): """Return expected_attrs that are columns needing joining. NB: This function may modify expected_attrs if one requested attribute requires another. """ if not expected_attrs: return expected_attrs simple_cols = [attr for attr in expected_attrs if attr in _INSTANCE_OPTIONAL_JOINED_FIELDS] complex_cols = ['extra.%s' % field for field in _INSTANCE_EXTRA_FIELDS if field in expected_attrs] if complex_cols: simple_cols.append('extra') simple_cols = [x for x in simple_cols if x not in _INSTANCE_EXTRA_FIELDS] expected_cols = simple_cols + complex_cols # NOTE(pumaranikar): expected_cols list can contain duplicates since # caller appends column attributes to expected_attr without checking if # it is already present in the list or not. Hence, we remove duplicates # here, if any. The resultant list is sorted based on list index to # maintain the insertion order. 
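# Illustrative example: for expected_attrs=['metadata', 'flavor', 'metadata']
# the list built above is ['metadata', 'metadata', 'extra', 'extra.flavor'],
# and the return below de-duplicates it to ['metadata', 'extra',
# 'extra.flavor'] while keeping first-seen order.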
return sorted(list(set(expected_cols)), key=expected_cols.index) _NO_DATA_SENTINEL = object() # TODO(berrange): Remove NovaObjectDictCompat @base.NovaObjectRegistry.register class Instance(base.NovaPersistentObject, base.NovaObject, base.NovaObjectDictCompat): # Version 2.0: Initial version # Version 2.1: Added services # Version 2.2: Added keypairs # Version 2.3: Added device_metadata # Version 2.4: Added trusted_certs # Version 2.5: Added hard_delete kwarg in destroy # Version 2.6: Added hidden # Version 2.7: Added resources VERSION = '2.7' fields = { 'id': fields.IntegerField(), 'user_id': fields.StringField(nullable=True), 'project_id': fields.StringField(nullable=True), 'image_ref': fields.StringField(nullable=True), 'kernel_id': fields.StringField(nullable=True), 'ramdisk_id': fields.StringField(nullable=True), 'hostname': fields.StringField(nullable=True), 'launch_index': fields.IntegerField(nullable=True), 'key_name': fields.StringField(nullable=True), 'key_data': fields.StringField(nullable=True), 'power_state': fields.IntegerField(nullable=True), 'vm_state': fields.StringField(nullable=True), 'task_state': fields.StringField(nullable=True), 'services': fields.ObjectField('ServiceList'), 'memory_mb': fields.IntegerField(nullable=True), 'vcpus': fields.IntegerField(nullable=True), 'root_gb': fields.IntegerField(nullable=True), 'ephemeral_gb': fields.IntegerField(nullable=True), 'ephemeral_key_uuid': fields.UUIDField(nullable=True), 'host': fields.StringField(nullable=True), 'node': fields.StringField(nullable=True), 'instance_type_id': fields.IntegerField(nullable=True), 'user_data': fields.StringField(nullable=True), 'reservation_id': fields.StringField(nullable=True), 'launched_at': fields.DateTimeField(nullable=True), 'terminated_at': fields.DateTimeField(nullable=True), 'availability_zone': fields.StringField(nullable=True), 'display_name': fields.StringField(nullable=True), 'display_description': fields.StringField(nullable=True), 'launched_on': fields.StringField(nullable=True), 'locked': fields.BooleanField(default=False), 'locked_by': fields.StringField(nullable=True), 'os_type': fields.StringField(nullable=True), 'architecture': fields.StringField(nullable=True), 'vm_mode': fields.StringField(nullable=True), 'uuid': fields.UUIDField(), 'root_device_name': fields.StringField(nullable=True), 'default_ephemeral_device': fields.StringField(nullable=True), 'default_swap_device': fields.StringField(nullable=True), 'config_drive': fields.StringField(nullable=True), 'access_ip_v4': fields.IPV4AddressField(nullable=True), 'access_ip_v6': fields.IPV6AddressField(nullable=True), 'auto_disk_config': fields.BooleanField(default=False), 'progress': fields.IntegerField(nullable=True), 'shutdown_terminate': fields.BooleanField(default=False), 'disable_terminate': fields.BooleanField(default=False), # TODO(stephenfin): Remove this in version 3.0 of the object 'cell_name': fields.StringField(nullable=True), 'metadata': fields.DictOfStringsField(), 'system_metadata': fields.DictOfNullableStringsField(), 'info_cache': fields.ObjectField('InstanceInfoCache', nullable=True), # TODO(stephenfin): Remove this in version 3.0 of the object as it's # related to nova-network 'security_groups': fields.ObjectField('SecurityGroupList'), 'fault': fields.ObjectField('InstanceFault', nullable=True), 'cleaned': fields.BooleanField(default=False), 'pci_devices': fields.ObjectField('PciDeviceList', nullable=True), 'numa_topology': fields.ObjectField('InstanceNUMATopology', nullable=True), 'pci_requests': 
fields.ObjectField('InstancePCIRequests', nullable=True), 'device_metadata': fields.ObjectField('InstanceDeviceMetadata', nullable=True), 'tags': fields.ObjectField('TagList'), 'flavor': fields.ObjectField('Flavor'), 'old_flavor': fields.ObjectField('Flavor', nullable=True), 'new_flavor': fields.ObjectField('Flavor', nullable=True), 'vcpu_model': fields.ObjectField('VirtCPUModel', nullable=True), 'ec2_ids': fields.ObjectField('EC2Ids'), 'migration_context': fields.ObjectField('MigrationContext', nullable=True), 'keypairs': fields.ObjectField('KeyPairList'), 'trusted_certs': fields.ObjectField('TrustedCerts', nullable=True), 'hidden': fields.BooleanField(default=False), 'resources': fields.ObjectField('ResourceList', nullable=True), } obj_extra_fields = ['name'] def obj_make_compatible(self, primitive, target_version): super(Instance, self).obj_make_compatible(primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (2, 7) and 'resources' in primitive: del primitive['resources'] if target_version < (2, 6) and 'hidden' in primitive: del primitive['hidden'] if target_version < (2, 4) and 'trusted_certs' in primitive: del primitive['trusted_certs'] if target_version < (2, 3) and 'device_metadata' in primitive: del primitive['device_metadata'] if target_version < (2, 2) and 'keypairs' in primitive: del primitive['keypairs'] if target_version < (2, 1) and 'services' in primitive: del primitive['services'] def __init__(self, *args, **kwargs): super(Instance, self).__init__(*args, **kwargs) self._reset_metadata_tracking() @property def image_meta(self): return objects.ImageMeta.from_instance(self) def _reset_metadata_tracking(self, fields=None): if fields is None or 'system_metadata' in fields: self._orig_system_metadata = (dict(self.system_metadata) if 'system_metadata' in self else {}) if fields is None or 'metadata' in fields: self._orig_metadata = (dict(self.metadata) if 'metadata' in self else {}) def obj_clone(self): """Create a copy of this instance object.""" nobj = super(Instance, self).obj_clone() # Since the base object only does a deep copy of the defined fields, # need to make sure to also copy the additional tracking metadata # attributes so they don't show as changed and cause the metadata # to always be updated even when stale information. if hasattr(self, '_orig_metadata'): nobj._orig_metadata = dict(self._orig_metadata) if hasattr(self, '_orig_system_metadata'): nobj._orig_system_metadata = dict(self._orig_system_metadata) return nobj def obj_reset_changes(self, fields=None, recursive=False): super(Instance, self).obj_reset_changes(fields, recursive=recursive) self._reset_metadata_tracking(fields=fields) def obj_what_changed(self): changes = super(Instance, self).obj_what_changed() if 'metadata' in self and self.metadata != self._orig_metadata: changes.add('metadata') if 'system_metadata' in self and (self.system_metadata != self._orig_system_metadata): changes.add('system_metadata') return changes @classmethod def _obj_from_primitive(cls, context, objver, primitive): self = super(Instance, cls)._obj_from_primitive(context, objver, primitive) self._reset_metadata_tracking() return self @property def name(self): try: base_name = CONF.instance_name_template % self.id except TypeError: # Support templates like "uuid-%(uuid)s", etc. info = {} # NOTE(russellb): Don't use self.iteritems() here, as it will # result in infinite recursion on the name property. 
for key in self.fields: if key == 'name': # NOTE(danms): prevent recursion continue elif not self.obj_attr_is_set(key): # NOTE(danms): Don't trigger lazy-loads continue info[key] = self[key] try: base_name = CONF.instance_name_template % info except KeyError: base_name = self.uuid except (exception.ObjectActionError, exception.OrphanedObjectError): # This indicates self.id was not set and/or could not be # lazy loaded. What this means is the instance has not # been persisted to a db yet, which should indicate it has # not been scheduled yet. In this situation it will have a # blank name. if (self.vm_state == vm_states.BUILDING and self.task_state == task_states.SCHEDULING): base_name = '' else: # If the vm/task states don't indicate that it's being booted # then we have a bug here. Log an error and attempt to return # the uuid which is what an error above would return. LOG.error('Could not lazy-load instance.id while ' 'attempting to generate the instance name.') base_name = self.uuid return base_name def _flavor_from_db(self, db_flavor): """Load instance flavor information from instance_extra.""" # Before we stored flavors in instance_extra, certain fields, defined # in nova.compute.flavors.system_metadata_flavor_props, were stored # in the instance.system_metadata for the embedded instance.flavor. # The "disabled" and "is_public" fields weren't one of those keys, # however, so really old instances that had their embedded flavor # converted to the serialized instance_extra form won't have the # disabled attribute set and we need to default those here so callers # don't explode trying to load instance.flavor.disabled. def _default_flavor_values(flavor): if 'disabled' not in flavor: flavor.disabled = False if 'is_public' not in flavor: flavor.is_public = True flavor_info = jsonutils.loads(db_flavor) self.flavor = objects.Flavor.obj_from_primitive(flavor_info['cur']) _default_flavor_values(self.flavor) if flavor_info['old']: self.old_flavor = objects.Flavor.obj_from_primitive( flavor_info['old']) _default_flavor_values(self.old_flavor) else: self.old_flavor = None if flavor_info['new']: self.new_flavor = objects.Flavor.obj_from_primitive( flavor_info['new']) _default_flavor_values(self.new_flavor) else: self.new_flavor = None self.obj_reset_changes(['flavor', 'old_flavor', 'new_flavor']) @staticmethod def _from_db_object(context, instance, db_inst, expected_attrs=None): """Method to help with migration to objects. Converts a database entity to a formal object. 
""" instance._context = context if expected_attrs is None: expected_attrs = [] # Most of the field names match right now, so be quick for field in instance.fields: if field in INSTANCE_OPTIONAL_ATTRS: continue elif field == 'deleted': instance.deleted = db_inst['deleted'] == db_inst['id'] elif field == 'cleaned': instance.cleaned = db_inst['cleaned'] == 1 else: instance[field] = db_inst[field] if 'metadata' in expected_attrs: instance['metadata'] = utils.instance_meta(db_inst) if 'system_metadata' in expected_attrs: instance['system_metadata'] = utils.instance_sys_meta(db_inst) if 'fault' in expected_attrs: instance['fault'] = ( objects.InstanceFault.get_latest_for_instance( context, instance.uuid)) if 'ec2_ids' in expected_attrs: instance._load_ec2_ids() if 'info_cache' in expected_attrs: if db_inst.get('info_cache') is None: instance.info_cache = None elif not instance.obj_attr_is_set('info_cache'): # TODO(danms): If this ever happens on a backlevel instance # passed to us by a backlevel service, things will break instance.info_cache = objects.InstanceInfoCache(context) if instance.info_cache is not None: instance.info_cache._from_db_object(context, instance.info_cache, db_inst['info_cache']) # TODO(danms): If we are updating these on a backlevel instance, # we'll end up sending back new versions of these objects (see # above note for new info_caches if 'pci_devices' in expected_attrs: pci_devices = base.obj_make_list( context, objects.PciDeviceList(context), objects.PciDevice, db_inst['pci_devices']) instance['pci_devices'] = pci_devices # TODO(stephenfin): Remove this as it's related to nova-network if 'security_groups' in expected_attrs: sec_groups = base.obj_make_list( context, objects.SecurityGroupList(context), objects.SecurityGroup, db_inst.get('security_groups', [])) instance['security_groups'] = sec_groups if 'tags' in expected_attrs: tags = base.obj_make_list( context, objects.TagList(context), objects.Tag, db_inst['tags']) instance['tags'] = tags if 'services' in expected_attrs: services = base.obj_make_list( context, objects.ServiceList(context), objects.Service, db_inst['services']) instance['services'] = services instance._extra_attributes_from_db_object(instance, db_inst, expected_attrs) instance.obj_reset_changes() return instance @staticmethod def _extra_attributes_from_db_object(instance, db_inst, expected_attrs=None): """Method to help with migration of extra attributes to objects. 
""" if expected_attrs is None: expected_attrs = [] # NOTE(danms): We can be called with a dict instead of a # SQLAlchemy object, so we have to be careful here if hasattr(db_inst, '__dict__'): have_extra = 'extra' in db_inst.__dict__ and db_inst['extra'] else: have_extra = 'extra' in db_inst and db_inst['extra'] if 'numa_topology' in expected_attrs: if have_extra: instance._load_numa_topology( db_inst['extra'].get('numa_topology')) else: instance.numa_topology = None if 'pci_requests' in expected_attrs: if have_extra: instance._load_pci_requests( db_inst['extra'].get('pci_requests')) else: instance.pci_requests = None if 'device_metadata' in expected_attrs: if have_extra: instance._load_device_metadata( db_inst['extra'].get('device_metadata')) else: instance.device_metadata = None if 'vcpu_model' in expected_attrs: if have_extra: instance._load_vcpu_model( db_inst['extra'].get('vcpu_model')) else: instance.vcpu_model = None if 'migration_context' in expected_attrs: if have_extra: instance._load_migration_context( db_inst['extra'].get('migration_context')) else: instance.migration_context = None if 'keypairs' in expected_attrs: if have_extra: instance._load_keypairs(db_inst['extra'].get('keypairs')) if 'trusted_certs' in expected_attrs: if have_extra: instance._load_trusted_certs( db_inst['extra'].get('trusted_certs')) else: instance.trusted_certs = None if 'resources' in expected_attrs: if have_extra: instance._load_resources( db_inst['extra'].get('resources')) else: instance.resources = None if any([x in expected_attrs for x in ('flavor', 'old_flavor', 'new_flavor')]): if have_extra and db_inst['extra'].get('flavor'): instance._flavor_from_db(db_inst['extra']['flavor']) @staticmethod @db.select_db_reader_mode def _db_instance_get_by_uuid(context, uuid, columns_to_join, use_slave=False): return db.instance_get_by_uuid(context, uuid, columns_to_join=columns_to_join) @base.remotable_classmethod def get_by_uuid(cls, context, uuid, expected_attrs=None, use_slave=False): if expected_attrs is None: expected_attrs = ['info_cache', 'security_groups'] columns_to_join = _expected_cols(expected_attrs) db_inst = cls._db_instance_get_by_uuid(context, uuid, columns_to_join, use_slave=use_slave) return cls._from_db_object(context, cls(), db_inst, expected_attrs) @base.remotable_classmethod def get_by_id(cls, context, inst_id, expected_attrs=None): if expected_attrs is None: expected_attrs = ['info_cache', 'security_groups'] columns_to_join = _expected_cols(expected_attrs) db_inst = db.instance_get(context, inst_id, columns_to_join=columns_to_join) return cls._from_db_object(context, cls(), db_inst, expected_attrs) @base.remotable def create(self): if self.obj_attr_is_set('id'): raise exception.ObjectActionError(action='create', reason='already created') if self.obj_attr_is_set('deleted') and self.deleted: raise exception.ObjectActionError(action='create', reason='already deleted') updates = self.obj_get_changes() # NOTE(danms): We know because of the check above that deleted # is either unset or false. Since we need to avoid passing False # down to the DB layer (which uses an integer), we can always # default it to zero here. 
updates['deleted'] = 0 expected_attrs = [attr for attr in INSTANCE_DEFAULT_FIELDS if attr in updates] # TODO(stephenfin): Remove this as it's related to nova-network if 'security_groups' in updates: updates['security_groups'] = [x.name for x in updates['security_groups']] if 'info_cache' in updates: updates['info_cache'] = { 'network_info': updates['info_cache'].network_info.json() } updates['extra'] = {} numa_topology = updates.pop('numa_topology', None) expected_attrs.append('numa_topology') if numa_topology: updates['extra']['numa_topology'] = numa_topology._to_json() else: updates['extra']['numa_topology'] = None pci_requests = updates.pop('pci_requests', None) expected_attrs.append('pci_requests') if pci_requests: updates['extra']['pci_requests'] = ( pci_requests.to_json()) else: updates['extra']['pci_requests'] = None device_metadata = updates.pop('device_metadata', None) expected_attrs.append('device_metadata') if device_metadata: updates['extra']['device_metadata'] = ( device_metadata._to_json()) else: updates['extra']['device_metadata'] = None flavor = updates.pop('flavor', None) if flavor: expected_attrs.append('flavor') old = ((self.obj_attr_is_set('old_flavor') and self.old_flavor) and self.old_flavor.obj_to_primitive() or None) new = ((self.obj_attr_is_set('new_flavor') and self.new_flavor) and self.new_flavor.obj_to_primitive() or None) flavor_info = { 'cur': self.flavor.obj_to_primitive(), 'old': old, 'new': new, } self._nullify_flavor_description(flavor_info) updates['extra']['flavor'] = jsonutils.dumps(flavor_info) keypairs = updates.pop('keypairs', None) if keypairs is not None: expected_attrs.append('keypairs') updates['extra']['keypairs'] = jsonutils.dumps( keypairs.obj_to_primitive()) vcpu_model = updates.pop('vcpu_model', None) expected_attrs.append('vcpu_model') if vcpu_model: updates['extra']['vcpu_model'] = ( jsonutils.dumps(vcpu_model.obj_to_primitive())) else: updates['extra']['vcpu_model'] = None trusted_certs = updates.pop('trusted_certs', None) expected_attrs.append('trusted_certs') if trusted_certs: updates['extra']['trusted_certs'] = jsonutils.dumps( trusted_certs.obj_to_primitive()) else: updates['extra']['trusted_certs'] = None resources = updates.pop('resources', None) expected_attrs.append('resources') if resources: updates['extra']['resources'] = jsonutils.dumps( resources.obj_to_primitive()) else: updates['extra']['resources'] = None db_inst = db.instance_create(self._context, updates) self._from_db_object(self._context, self, db_inst, expected_attrs) # NOTE(danms): The EC2 ids are created on their first load. In order # to avoid them being missing and having to be loaded later, we # load them once here on create now that the instance record is # created. 
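# Illustrative usage sketch (hypothetical context and values, not a
# verbatim caller from the tree):
#
#   inst = objects.Instance(context=ctxt, project_id=ctxt.project_id,
#                           user_id=ctxt.user_id,
#                           uuid=uuidutils.generate_uuid(),
#                           flavor=flavor, metadata={}, system_metadata={})
#   inst.create()   # persists the row plus its instance_extra content;
#                   # afterwards inst.ec2_ids is already populated (below)
#   inst.create()   # a second call raises ObjectActionError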
self._load_ec2_ids() self.obj_reset_changes(['ec2_ids']) @base.remotable def destroy(self, hard_delete=False): if not self.obj_attr_is_set('id'): raise exception.ObjectActionError(action='destroy', reason='already destroyed') if not self.obj_attr_is_set('uuid'): raise exception.ObjectActionError(action='destroy', reason='no uuid') if not self.obj_attr_is_set('host') or not self.host: # NOTE(danms): If our host is not set, avoid a race constraint = db.constraint(host=db.equal_any(None)) else: constraint = None try: db_inst = db.instance_destroy(self._context, self.uuid, constraint=constraint, hard_delete=hard_delete) self._from_db_object(self._context, self, db_inst) except exception.ConstraintNotMet: raise exception.ObjectActionError(action='destroy', reason='host changed') delattr(self, base.get_attrname('id')) def _save_info_cache(self, context): if self.info_cache: with self.info_cache.obj_alternate_context(context): self.info_cache.save() # TODO(stephenfin): Remove this as it's related to nova-network def _save_security_groups(self, context): security_groups = self.security_groups or [] for secgroup in security_groups: with secgroup.obj_alternate_context(context): secgroup.save() self.security_groups.obj_reset_changes() def _save_fault(self, context): # NOTE(danms): I don't think we need to worry about this, do we? pass def _save_pci_requests(self, context): # TODO(danms): Unfortunately, extra.pci_requests is not a serialized # PciRequests object (!), so we have to handle it specially here. # That should definitely be fixed! self._extra_values_to_save['pci_requests'] = ( self.pci_requests.to_json()) def _save_pci_devices(self, context): # NOTE(yjiang5): All devices held by PCI tracker, only PCI tracker # permitted to update the DB. all change to devices from here will # be dropped. pass def _save_tags(self, context): # NOTE(gibi): tags are not saved through the instance pass def _save_services(self, context): # NOTE(mriedem): services are not saved through the instance pass @staticmethod def _nullify_flavor_description(flavor_info): """Helper method to nullify descriptions from a set of primitive flavors. Note that we don't remove the flavor description since that would make the versioned notification FlavorPayload have to handle the field not being set on the embedded instance.flavor. :param dict: dict of primitive flavor objects where the values are the flavors which get persisted in the instance_extra.flavor table. """ for flavor in flavor_info.values(): if flavor and 'description' in flavor['nova_object.data']: flavor['nova_object.data']['description'] = None def _save_flavor(self, context): if not any([x in self.obj_what_changed() for x in ('flavor', 'old_flavor', 'new_flavor')]): return flavor_info = { 'cur': self.flavor.obj_to_primitive(), 'old': (self.old_flavor and self.old_flavor.obj_to_primitive() or None), 'new': (self.new_flavor and self.new_flavor.obj_to_primitive() or None), } self._nullify_flavor_description(flavor_info) self._extra_values_to_save['flavor'] = jsonutils.dumps(flavor_info) self.obj_reset_changes(['flavor', 'old_flavor', 'new_flavor']) def _save_old_flavor(self, context): if 'old_flavor' in self.obj_what_changed(): self._save_flavor(context) def _save_new_flavor(self, context): if 'new_flavor' in self.obj_what_changed(): self._save_flavor(context) def _save_ec2_ids(self, context): # NOTE(hanlind): Read-only so no need to save this. 
pass def _save_keypairs(self, context): if 'keypairs' in self.obj_what_changed(): self._save_extra_generic('keypairs') self.obj_reset_changes(['keypairs'], recursive=True) def _save_extra_generic(self, field): if field in self.obj_what_changed(): obj = getattr(self, field) value = None if obj is not None: value = jsonutils.dumps(obj.obj_to_primitive()) self._extra_values_to_save[field] = value # TODO(stephenfin): Remove the 'admin_state_reset' field in version 3.0 of # the object @base.remotable def save(self, expected_vm_state=None, expected_task_state=None, admin_state_reset=False): """Save updates to this instance Column-wise updates will be made based on the result of self.obj_what_changed(). If expected_task_state is provided, it will be checked against the in-database copy of the instance before updates are made. :param expected_vm_state: Optional tuple of valid vm states for the instance to be in :param expected_task_state: Optional tuple of valid task states for the instance to be in :param admin_state_reset: True if admin API is forcing setting of task_state/vm_state """ context = self._context self._extra_values_to_save = {} updates = {} changes = self.obj_what_changed() for field in self.fields: # NOTE(danms): For object fields, we construct and call a # helper method like self._save_$attrname() if (self.obj_attr_is_set(field) and isinstance(self.fields[field], fields.ObjectField)): try: getattr(self, '_save_%s' % field)(context) except AttributeError: if field in _INSTANCE_EXTRA_FIELDS: self._save_extra_generic(field) continue LOG.exception('No save handler for %s', field, instance=self) except db_exc.DBReferenceError as exp: if exp.key != 'instance_uuid': raise # NOTE(melwitt): This will happen if we instance.save() # before an instance.create() and FK constraint fails. # In practice, this occurs in cells during a delete of # an unscheduled instance. Otherwise, it could happen # as a result of bug. raise exception.InstanceNotFound(instance_id=self.uuid) elif field in changes: updates[field] = self[field] if self._extra_values_to_save: db.instance_extra_update_by_uuid(context, self.uuid, self._extra_values_to_save) if not updates: return # Cleaned needs to be turned back into an int here if 'cleaned' in updates: if updates['cleaned']: updates['cleaned'] = 1 else: updates['cleaned'] = 0 if expected_task_state is not None: updates['expected_task_state'] = expected_task_state if expected_vm_state is not None: updates['expected_vm_state'] = expected_vm_state expected_attrs = [attr for attr in _INSTANCE_OPTIONAL_JOINED_FIELDS if self.obj_attr_is_set(attr)] if 'pci_devices' in expected_attrs: # NOTE(danms): We don't refresh pci_devices on save right now expected_attrs.remove('pci_devices') # NOTE(alaski): We need to pull system_metadata for the # notification.send_update() below. If we don't there's a KeyError # when it tries to extract the flavor. if 'system_metadata' not in expected_attrs: expected_attrs.append('system_metadata') old_ref, inst_ref = db.instance_update_and_get_original( context, self.uuid, updates, columns_to_join=_expected_cols(expected_attrs)) self._from_db_object(context, self, inst_ref, expected_attrs=expected_attrs) # NOTE(danms): We have to be super careful here not to trigger # any lazy-loads that will unmigrate or unbackport something. So, # make a copy of the instance for notifications first. 
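# For illustration, a typical compare-and-swap style caller of save()
# (hypothetical caller; passing expected states makes the update fail,
# per nova.exception.UnexpectedTaskStateError, instead of silently
# clobbering a concurrent change):
#
#   inst.task_state = task_states.REBOOTING
#   inst.save(expected_task_state=[None])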
new_ref = self.obj_clone() notifications.send_update(context, old_ref, new_ref) self.obj_reset_changes() @base.remotable def refresh(self, use_slave=False): extra = [field for field in INSTANCE_OPTIONAL_ATTRS if self.obj_attr_is_set(field)] current = self.__class__.get_by_uuid(self._context, uuid=self.uuid, expected_attrs=extra, use_slave=use_slave) # NOTE(danms): We orphan the instance copy so we do not unexpectedly # trigger a lazy-load (which would mean we failed to calculate the # expected_attrs properly) current._context = None for field in self.fields: if field not in self: continue if field not in current: # If the field isn't in current we should not # touch it, triggering a likely-recursive lazy load. # Log it so we can see it happening though, as it # probably isn't expected in most cases. LOG.debug('Field %s is set but not in refreshed ' 'instance, skipping', field) continue if field == 'info_cache': self.info_cache.refresh() elif self[field] != current[field]: self[field] = current[field] self.obj_reset_changes() def _load_generic(self, attrname): instance = self.__class__.get_by_uuid(self._context, uuid=self.uuid, expected_attrs=[attrname]) if attrname not in instance: # NOTE(danms): Never allow us to recursively-load raise exception.ObjectActionError( action='obj_load_attr', reason=_('loading %s requires recursion') % attrname) # NOTE(danms): load anything we don't already have from the # instance we got from the database to make the most of the # performance hit. for field in self.fields: if field in instance and field not in self: setattr(self, field, getattr(instance, field)) def _load_fault(self): self.fault = objects.InstanceFault.get_latest_for_instance( self._context, self.uuid) def _load_numa_topology(self, db_topology=_NO_DATA_SENTINEL): if db_topology is None: self.numa_topology = None elif db_topology is not _NO_DATA_SENTINEL: self.numa_topology = \ objects.InstanceNUMATopology.obj_from_db_obj(self.uuid, db_topology) else: try: self.numa_topology = \ objects.InstanceNUMATopology.get_by_instance_uuid( self._context, self.uuid) except exception.NumaTopologyNotFound: self.numa_topology = None def _load_pci_requests(self, db_requests=_NO_DATA_SENTINEL): if db_requests is not _NO_DATA_SENTINEL: self.pci_requests = objects.InstancePCIRequests.obj_from_db( self._context, self.uuid, db_requests) else: self.pci_requests = \ objects.InstancePCIRequests.get_by_instance_uuid( self._context, self.uuid) def _load_device_metadata(self, db_dev_meta=_NO_DATA_SENTINEL): if db_dev_meta is None: self.device_metadata = None elif db_dev_meta is not _NO_DATA_SENTINEL: self.device_metadata = \ objects.InstanceDeviceMetadata.obj_from_db( self._context, db_dev_meta) else: self.device_metadata = \ objects.InstanceDeviceMetadata.get_by_instance_uuid( self._context, self.uuid) def _load_flavor(self): instance = self.__class__.get_by_uuid( self._context, uuid=self.uuid, expected_attrs=['flavor']) # NOTE(danms): Orphan the instance to make sure we don't lazy-load # anything below instance._context = None self.flavor = instance.flavor self.old_flavor = instance.old_flavor self.new_flavor = instance.new_flavor def _load_vcpu_model(self, db_vcpu_model=_NO_DATA_SENTINEL): if db_vcpu_model is None: self.vcpu_model = None elif db_vcpu_model is _NO_DATA_SENTINEL: self.vcpu_model = objects.VirtCPUModel.get_by_instance_uuid( self._context, self.uuid) else: db_vcpu_model = jsonutils.loads(db_vcpu_model) self.vcpu_model = objects.VirtCPUModel.obj_from_primitive( db_vcpu_model) def _load_ec2_ids(self): 
self.ec2_ids = objects.EC2Ids.get_by_instance(self._context, self) # TODO(stephenfin): Remove this as it's related to nova-network def _load_security_groups(self): self.security_groups = objects.SecurityGroupList.get_by_instance( self._context, self) def _load_pci_devices(self): self.pci_devices = objects.PciDeviceList.get_by_instance_uuid( self._context, self.uuid) def _load_migration_context(self, db_context=_NO_DATA_SENTINEL): if db_context is _NO_DATA_SENTINEL: try: self.migration_context = ( objects.MigrationContext.get_by_instance_uuid( self._context, self.uuid)) except exception.MigrationContextNotFound: self.migration_context = None elif db_context is None: self.migration_context = None else: self.migration_context = objects.MigrationContext.obj_from_db_obj( db_context) def _load_keypairs(self, db_keypairs=_NO_DATA_SENTINEL): if db_keypairs is _NO_DATA_SENTINEL: inst = objects.Instance.get_by_uuid(self._context, self.uuid, expected_attrs=['keypairs']) if 'keypairs' in inst: self.keypairs = inst.keypairs self.keypairs.obj_reset_changes(recursive=True) self.obj_reset_changes(['keypairs']) else: self.keypairs = objects.KeyPairList(objects=[]) # NOTE(danms): We leave the keypairs attribute dirty in hopes # someone else will save it for us elif db_keypairs: self.keypairs = objects.KeyPairList.obj_from_primitive( jsonutils.loads(db_keypairs)) self.obj_reset_changes(['keypairs']) def _load_tags(self): self.tags = objects.TagList.get_by_resource_id( self._context, self.uuid) def _load_trusted_certs(self, db_trusted_certs=_NO_DATA_SENTINEL): if db_trusted_certs is None: self.trusted_certs = None elif db_trusted_certs is _NO_DATA_SENTINEL: self.trusted_certs = objects.TrustedCerts.get_by_instance_uuid( self._context, self.uuid) else: self.trusted_certs = objects.TrustedCerts.obj_from_primitive( jsonutils.loads(db_trusted_certs)) def _load_resources(self, db_resources=_NO_DATA_SENTINEL): if db_resources is None: self.resources = None elif db_resources is _NO_DATA_SENTINEL: self.resources = objects.ResourceList.get_by_instance_uuid( self._context, self.uuid) else: self.resources = objects.ResourceList.obj_from_primitive( jsonutils.loads(db_resources)) def apply_migration_context(self): if self.migration_context: self._set_migration_context_to_instance(prefix='new_') else: LOG.debug("Trying to apply a migration context that does not " "seem to be set for this instance", instance=self) def revert_migration_context(self): if self.migration_context: self._set_migration_context_to_instance(prefix='old_') else: LOG.debug("Trying to revert a migration context that does not " "seem to be set for this instance", instance=self) def _set_migration_context_to_instance(self, prefix): for inst_attr_name in _MIGRATION_CONTEXT_ATTRS: setattr(self, inst_attr_name, None) attr_name = prefix + inst_attr_name if attr_name in self.migration_context: attr_value = getattr( self.migration_context, attr_name) setattr(self, inst_attr_name, attr_value) @contextlib.contextmanager def mutated_migration_context(self): """Context manager to temporarily apply the migration context. Calling .save() from within the context manager means that the mutated context will be saved which can cause incorrect resource tracking, and should be avoided. """ # First check to see if we even have a migration context set and if not # we can exit early without lazy-loading other attributes. 
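# Typical caller, for illustration (claim_resources is a hypothetical
# helper): during a resize or live migration the caller temporarily sees
# the migration context's new_* values, and the original attribute values
# are restored on exit:
#
#   with instance.mutated_migration_context():
#       claim_resources(instance)   # instance.numa_topology etc. reflect
#                                   # the migration context's new_* values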
if 'migration_context' in self and self.migration_context is None: yield return current_values = {} for attr_name in _MIGRATION_CONTEXT_ATTRS: current_values[attr_name] = getattr(self, attr_name) self.apply_migration_context() try: yield finally: for attr_name in _MIGRATION_CONTEXT_ATTRS: setattr(self, attr_name, current_values[attr_name]) @base.remotable def drop_migration_context(self): if self.migration_context: db.instance_extra_update_by_uuid(self._context, self.uuid, {'migration_context': None}) self.migration_context = None def clear_numa_topology(self): numa_topology = self.numa_topology if numa_topology is not None: self.numa_topology = numa_topology.clear_host_pinning() def obj_load_attr(self, attrname): # NOTE(danms): We can't lazy-load anything without a context and a uuid if not self._context: raise exception.OrphanedObjectError(method='obj_load_attr', objtype=self.obj_name()) if 'uuid' not in self: raise exception.ObjectActionError( action='obj_load_attr', reason=_('attribute %s not lazy-loadable') % attrname) LOG.debug("Lazy-loading '%(attr)s' on %(name)s uuid %(uuid)s", {'attr': attrname, 'name': self.obj_name(), 'uuid': self.uuid, }) with utils.temporary_mutation(self._context, read_deleted='yes'): self._obj_load_attr(attrname) def _obj_load_attr(self, attrname): """Internal method for loading attributes from instances. NOTE: Do not use this directly. This method contains the implementation of lazy-loading attributes from Instance object, minus some massaging of the context and error-checking. This should always be called with the object-local context set for reading deleted instances and with uuid set. All of the code below depends on those two things. Thus, this should only be called from obj_load_attr() itself. :param attrname: The name of the attribute to be loaded """ # NOTE(danms): We handle some fields differently here so that we # can be more efficient if attrname == 'fault': self._load_fault() elif attrname == 'numa_topology': self._load_numa_topology() elif attrname == 'device_metadata': self._load_device_metadata() elif attrname == 'pci_requests': self._load_pci_requests() elif attrname == 'vcpu_model': self._load_vcpu_model() elif attrname == 'ec2_ids': self._load_ec2_ids() elif attrname == 'migration_context': self._load_migration_context() elif attrname == 'keypairs': # NOTE(danms): Let keypairs control its own destiny for # resetting changes. return self._load_keypairs() elif attrname == 'trusted_certs': return self._load_trusted_certs() elif attrname == 'resources': return self._load_resources() elif attrname == 'security_groups': self._load_security_groups() elif attrname == 'pci_devices': self._load_pci_devices() elif 'flavor' in attrname: self._load_flavor() elif attrname == 'services' and self.deleted: # NOTE(mriedem): The join in the data model for instances.services # filters on instances.deleted == 0, so if the instance is deleted # don't attempt to even load services since we'll fail. self.services = objects.ServiceList(self._context) elif attrname == 'tags': if self.deleted: # NOTE(mriedem): Same story as services, the DB API query # in instance_tag_get_by_instance_uuid will fail if the # instance has been deleted so just return an empty tag list. self.tags = objects.TagList(self._context) else: self._load_tags() elif attrname in self.fields and attrname != 'id': # NOTE(danms): We've never let 'id' be lazy-loaded, and use its # absence as a sentinel that it hasn't been created in the database # yet, so refuse to do so here. 
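# For illustration, the net effect of the dispatch above for a typical
# caller (hypothetical context/uuid); any field not covered by
# expected_attrs is fetched on first attribute access, one extra DB round
# trip per lazy-loaded field:
#
#   inst = objects.Instance.get_by_uuid(ctxt, uuid)  # loads info_cache and
#                                                    # security_groups only
#   inst.flavor   # triggers obj_load_attr('flavor') -> _load_flavor()
#   inst.fault    # triggers obj_load_attr('fault') -> _load_fault()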
self._load_generic(attrname) else: # NOTE(danms): This is historically what we did for # something not in a field that was force-loaded. So, just # do this for consistency. raise exception.ObjectActionError( action='obj_load_attr', reason=_('attribute %s not lazy-loadable') % attrname) self.obj_reset_changes([attrname]) def get_flavor(self, namespace=None): prefix = ('%s_' % namespace) if namespace is not None else '' attr = '%sflavor' % prefix try: return getattr(self, attr) except exception.FlavorNotFound: # NOTE(danms): This only happens in the case where we don't # have flavor information in instance_extra, and doing # this triggers a lookup based on our instance_type_id for # (very) legacy instances. That legacy code expects a None here, # so emulate it for this helper, even though the actual attribute # is not nullable. return None @base.remotable def delete_metadata_key(self, key): """Optimized metadata delete method. This provides a more efficient way to delete a single metadata key, instead of just calling instance.save(). This should be called with the key still present in self.metadata, which it will update after completion. """ db.instance_metadata_delete(self._context, self.uuid, key) md_was_changed = 'metadata' in self.obj_what_changed() del self.metadata[key] self._orig_metadata.pop(key, None) notifications.send_update(self._context, self, self) if not md_was_changed: self.obj_reset_changes(['metadata']) def get_network_info(self): if self.info_cache is None: return network_model.NetworkInfo.hydrate([]) return self.info_cache.network_info def get_bdms(self): return objects.BlockDeviceMappingList.get_by_instance_uuid( self._context, self.uuid) def _make_instance_list(context, inst_list, db_inst_list, expected_attrs): get_fault = expected_attrs and 'fault' in expected_attrs inst_faults = {} if get_fault: # Build an instance_uuid:latest-fault mapping expected_attrs.remove('fault') instance_uuids = [inst['uuid'] for inst in db_inst_list] faults = objects.InstanceFaultList.get_by_instance_uuids( context, instance_uuids) for fault in faults: if fault.instance_uuid not in inst_faults: inst_faults[fault.instance_uuid] = fault inst_cls = objects.Instance inst_list.objects = [] for db_inst in db_inst_list: inst_obj = inst_cls._from_db_object( context, inst_cls(context), db_inst, expected_attrs=expected_attrs) if get_fault: inst_obj.fault = inst_faults.get(inst_obj.uuid, None) inst_list.objects.append(inst_obj) inst_list.obj_reset_changes() return inst_list @db_api.pick_context_manager_writer def populate_missing_availability_zones(context, count): # instances without host have no reasonable AZ to set not_empty_host = models.Instance.host != None # noqa E711 instances = (context.session.query(models.Instance). filter(not_empty_host). 
filter_by(availability_zone=None).limit(count).all()) count_all = len(instances) count_hit = 0 for instance in instances: az = avail_zone.get_instance_availability_zone(context, instance) instance.availability_zone = az instance.save(context.session) count_hit += 1 return count_all, count_hit @base.NovaObjectRegistry.register class InstanceList(base.ObjectListBase, base.NovaObject): # Version 2.0: Initial Version # Version 2.1: Add get_uuids_by_host() # Version 2.2: Pagination for get_active_by_window_joined() # Version 2.3: Add get_count_by_vm_state() # Version 2.4: Add get_counts() # Version 2.5: Add get_uuids_by_host_and_node() # Version 2.6: Add get_uuids_by_hosts() VERSION = '2.6' fields = { 'objects': fields.ListOfObjectsField('Instance'), } @classmethod @db.select_db_reader_mode def _get_by_filters_impl(cls, context, filters, sort_key='created_at', sort_dir='desc', limit=None, marker=None, expected_attrs=None, use_slave=False, sort_keys=None, sort_dirs=None): if sort_keys or sort_dirs: db_inst_list = db.instance_get_all_by_filters_sort( context, filters, limit=limit, marker=marker, columns_to_join=_expected_cols(expected_attrs), sort_keys=sort_keys, sort_dirs=sort_dirs) else: db_inst_list = db.instance_get_all_by_filters( context, filters, sort_key, sort_dir, limit=limit, marker=marker, columns_to_join=_expected_cols(expected_attrs)) return db_inst_list @base.remotable_classmethod def get_by_filters(cls, context, filters, sort_key='created_at', sort_dir='desc', limit=None, marker=None, expected_attrs=None, use_slave=False, sort_keys=None, sort_dirs=None): db_inst_list = cls._get_by_filters_impl( context, filters, sort_key=sort_key, sort_dir=sort_dir, limit=limit, marker=marker, expected_attrs=expected_attrs, use_slave=use_slave, sort_keys=sort_keys, sort_dirs=sort_dirs) # NOTE(melwitt): _make_instance_list could result in joined objects' # (from expected_attrs) _from_db_object methods being called during # Instance._from_db_object, each of which might choose to perform # database writes. So, we call this outside of _get_by_filters_impl to # avoid being nested inside a 'reader' database transaction context. return _make_instance_list(context, cls(), db_inst_list, expected_attrs) @staticmethod @db.select_db_reader_mode def _db_instance_get_all_by_host(context, host, columns_to_join, use_slave=False): return db.instance_get_all_by_host(context, host, columns_to_join=columns_to_join) @base.remotable_classmethod def get_by_host(cls, context, host, expected_attrs=None, use_slave=False): db_inst_list = cls._db_instance_get_all_by_host( context, host, columns_to_join=_expected_cols(expected_attrs), use_slave=use_slave) return _make_instance_list(context, cls(), db_inst_list, expected_attrs) @base.remotable_classmethod def get_by_host_and_node(cls, context, host, node, expected_attrs=None): db_inst_list = db.instance_get_all_by_host_and_node( context, host, node, columns_to_join=_expected_cols(expected_attrs)) return _make_instance_list(context, cls(), db_inst_list, expected_attrs) @staticmethod @db_api.pick_context_manager_reader def _get_uuids_by_host_and_node(context, host, node): return context.session.query( models.Instance.uuid).filter_by( host=host).filter_by(node=node).filter_by(deleted=0).all() @base.remotable_classmethod def get_uuids_by_host_and_node(cls, context, host, node): """Return non-deleted instance UUIDs for the given host and node. :param context: nova auth request context :param host: Filter instances on this host. :param node: Filter instances on this node. 
:returns: list of non-deleted instance UUIDs on the given host and node """ return cls._get_uuids_by_host_and_node(context, host, node) @base.remotable_classmethod def get_by_host_and_not_type(cls, context, host, type_id=None, expected_attrs=None): db_inst_list = db.instance_get_all_by_host_and_not_type( context, host, type_id=type_id) return _make_instance_list(context, cls(), db_inst_list, expected_attrs) @base.remotable_classmethod def get_all(cls, context, expected_attrs=None): """Returns all instances on all nodes.""" db_instances = db.instance_get_all( context, columns_to_join=_expected_cols(expected_attrs)) return _make_instance_list(context, cls(), db_instances, expected_attrs) @base.remotable_classmethod def get_hung_in_rebooting(cls, context, reboot_window, expected_attrs=None): db_inst_list = db.instance_get_all_hung_in_rebooting(context, reboot_window) return _make_instance_list(context, cls(), db_inst_list, expected_attrs) @staticmethod @db.select_db_reader_mode def _db_instance_get_active_by_window_joined( context, begin, end, project_id, host, columns_to_join, use_slave=False, limit=None, marker=None): return db.instance_get_active_by_window_joined( context, begin, end, project_id, host, columns_to_join=columns_to_join, limit=limit, marker=marker) @base.remotable_classmethod def _get_active_by_window_joined(cls, context, begin, end=None, project_id=None, host=None, expected_attrs=None, use_slave=False, limit=None, marker=None): # NOTE(mriedem): We need to convert the begin/end timestamp strings # to timezone-aware datetime objects for the DB API call. begin = timeutils.parse_isotime(begin) end = timeutils.parse_isotime(end) if end else None db_inst_list = cls._db_instance_get_active_by_window_joined( context, begin, end, project_id, host, columns_to_join=_expected_cols(expected_attrs), use_slave=use_slave, limit=limit, marker=marker) return _make_instance_list(context, cls(), db_inst_list, expected_attrs) @classmethod def get_active_by_window_joined(cls, context, begin, end=None, project_id=None, host=None, expected_attrs=None, use_slave=False, limit=None, marker=None): """Get instances and joins active during a certain time window. :param:context: nova request context :param:begin: datetime for the start of the time window :param:end: datetime for the end of the time window :param:project_id: used to filter instances by project :param:host: used to filter instances on a given compute host :param:expected_attrs: list of related fields that can be joined in the database layer when querying for instances :param use_slave if True, ship this query off to a DB slave :param limit: maximum number of instances to return per page :param marker: last instance uuid from the previous page :returns: InstanceList """ # NOTE(mriedem): We have to convert the datetime objects to string # primitives for the remote call. 
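# Illustrative usage sketch (hypothetical values): callers pass real
# datetime objects here; they are serialized to ISO8601 strings below for
# the remote call and parsed back in _get_active_by_window_joined():
#
#   begin = timeutils.utcnow() - datetime.timedelta(hours=1)
#   insts = objects.InstanceList.get_active_by_window_joined(
#       ctxt, begin, project_id=project_id, expected_attrs=['flavor'])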
begin = utils.isotime(begin) end = utils.isotime(end) if end else None return cls._get_active_by_window_joined(context, begin, end, project_id, host, expected_attrs, use_slave=use_slave, limit=limit, marker=marker) # TODO(stephenfin): Remove this as it's related to nova-network @base.remotable_classmethod def get_by_security_group_id(cls, context, security_group_id): db_secgroup = db.security_group_get( context, security_group_id, columns_to_join=['instances.info_cache', 'instances.system_metadata']) return _make_instance_list(context, cls(), db_secgroup['instances'], ['info_cache', 'system_metadata']) # TODO(stephenfin): Remove this as it's related to nova-network @classmethod def get_by_security_group(cls, context, security_group): return cls.get_by_security_group_id(context, security_group.id) # TODO(stephenfin): Remove this as it's related to nova-network @base.remotable_classmethod def get_by_grantee_security_group_ids(cls, context, security_group_ids): raise NotImplementedError() def fill_faults(self): """Batch query the database for our instances' faults. :returns: A list of instance uuids for which faults were found. """ uuids = [inst.uuid for inst in self] faults = objects.InstanceFaultList.get_latest_by_instance_uuids( self._context, uuids) faults_by_uuid = {} for fault in faults: faults_by_uuid[fault.instance_uuid] = fault for instance in self: if instance.uuid in faults_by_uuid: instance.fault = faults_by_uuid[instance.uuid] else: # NOTE(danms): Otherwise the caller will cause a lazy-load # when checking it, and we know there are none instance.fault = None instance.obj_reset_changes(['fault']) return faults_by_uuid.keys() @base.remotable_classmethod def get_uuids_by_host(cls, context, host): return db.instance_get_all_uuids_by_hosts(context, [host])[host] @base.remotable_classmethod def get_uuids_by_hosts(cls, context, hosts): """Returns a dict, keyed by hypervisor hostname, of a list of instance UUIDs associated with that compute node. """ return db.instance_get_all_uuids_by_hosts(context, hosts) @staticmethod @db_api.pick_context_manager_reader def _get_count_by_vm_state_in_db(context, project_id, user_id, vm_state): return context.session.query(models.Instance.id).\ filter_by(deleted=0).\ filter_by(project_id=project_id).\ filter_by(user_id=user_id).\ filter_by(vm_state=vm_state).\ count() @base.remotable_classmethod def get_count_by_vm_state(cls, context, project_id, user_id, vm_state): return cls._get_count_by_vm_state_in_db(context, project_id, user_id, vm_state) @staticmethod @db_api.pick_context_manager_reader def _get_counts_in_db(context, project_id, user_id=None): # NOTE(melwitt): Copied from nova/db/sqlalchemy/api.py: # It would be better to have vm_state not be nullable # but until then we test it explicitly as a workaround. not_soft_deleted = or_( models.Instance.vm_state != vm_states.SOFT_DELETED, models.Instance.vm_state == null() ) project_query = context.session.query( func.count(models.Instance.id), func.sum(models.Instance.vcpus), func.sum(models.Instance.memory_mb)).\ filter_by(deleted=0).\ filter(not_soft_deleted).\ filter_by(project_id=project_id) # NOTE(mriedem): Filter out hidden instances since there should be a # non-hidden version of the instance in another cell database and the # API will only show one of them, so we don't count the hidden copy. 
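# For context, an illustrative consumer of these counts (hypothetical
# caller, sketching the kind of usage the quota code relies on):
#
#   counts = objects.InstanceList.get_counts(ctxt, project_id,
#                                            user_id=user_id)
#   used_cores = counts['project']['cores']
#   used_ram_mb = counts['project']['ram']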
project_query = project_query.filter( or_(models.Instance.hidden == false(), models.Instance.hidden == null())) project_result = project_query.first() fields = ('instances', 'cores', 'ram') project_counts = {field: int(project_result[idx] or 0) for idx, field in enumerate(fields)} counts = {'project': project_counts} if user_id: user_result = project_query.filter_by(user_id=user_id).first() user_counts = {field: int(user_result[idx] or 0) for idx, field in enumerate(fields)} counts['user'] = user_counts return counts @base.remotable_classmethod def get_counts(cls, context, project_id, user_id=None): """Get the counts of Instance objects in the database. :param context: The request context for database access :param project_id: The project_id to count across :param user_id: The user_id to count across :returns: A dict containing the project-scoped counts and user-scoped counts if user_id is specified. For example: {'project': {'instances': <count across project>, 'cores': <count across project>, 'ram': <count across project>}, 'user': {'instances': <count across user>, 'cores': <count across user>, 'ram': <count across user>}} """ return cls._get_counts_in_db(context, project_id, user_id=user_id) @staticmethod @db_api.pick_context_manager_reader def _get_count_by_hosts(context, hosts): return context.session.query(models.Instance).\ filter_by(deleted=0).\ filter(models.Instance.host.in_(hosts)).count() @classmethod def get_count_by_hosts(cls, context, hosts): return cls._get_count_by_hosts(context, hosts) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/instance_action.py0000664000175000017500000002775600000000000020706 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import timeutils from oslo_utils import versionutils import six from nova.compute import utils as compute_utils from nova.db import api as db from nova import exception from nova import objects from nova.objects import base from nova.objects import fields # TODO(berrange): Remove NovaObjectDictCompat @base.NovaObjectRegistry.register class InstanceAction(base.NovaPersistentObject, base.NovaObject, base.NovaObjectDictCompat): # Version 1.0: Initial version # Version 1.1: String attributes updated to support unicode # Version 1.2: Add create() method.
VERSION = '1.2' fields = { 'id': fields.IntegerField(), 'action': fields.StringField(nullable=True), 'instance_uuid': fields.UUIDField(nullable=True), 'request_id': fields.StringField(nullable=True), 'user_id': fields.StringField(nullable=True), 'project_id': fields.StringField(nullable=True), 'start_time': fields.DateTimeField(nullable=True), 'finish_time': fields.DateTimeField(nullable=True), 'message': fields.StringField(nullable=True), } @staticmethod def _from_db_object(context, action, db_action): for field in action.fields: action[field] = db_action[field] action._context = context action.obj_reset_changes() return action @staticmethod def pack_action_start(context, instance_uuid, action_name): values = {'request_id': context.request_id, 'instance_uuid': instance_uuid, 'user_id': context.user_id, 'project_id': context.project_id, 'action': action_name, 'start_time': context.timestamp, 'updated_at': context.timestamp} return values @staticmethod def pack_action_finish(context, instance_uuid): utcnow = timeutils.utcnow() values = {'request_id': context.request_id, 'instance_uuid': instance_uuid, 'finish_time': utcnow, 'updated_at': utcnow} return values @base.remotable_classmethod def get_by_request_id(cls, context, instance_uuid, request_id): db_action = db.action_get_by_request_id(context, instance_uuid, request_id) if db_action: return cls._from_db_object(context, cls(), db_action) @base.remotable_classmethod def action_start(cls, context, instance_uuid, action_name, want_result=True): values = cls.pack_action_start(context, instance_uuid, action_name) db_action = db.action_start(context, values) if want_result: return cls._from_db_object(context, cls(), db_action) @base.remotable_classmethod def action_finish(cls, context, instance_uuid, want_result=True): values = cls.pack_action_finish(context, instance_uuid) db_action = db.action_finish(context, values) if want_result: return cls._from_db_object(context, cls(), db_action) @base.remotable def finish(self): values = self.pack_action_finish(self._context, self.instance_uuid) db_action = db.action_finish(self._context, values) self._from_db_object(self._context, self, db_action) # NOTE(mriedem): In most cases, the action_start() method should be used # to create new InstanceAction records. This method should only be used # in specific exceptional cases like when cloning actions from one cell # database to another. 
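# For illustration, that normal path looks roughly like the sketch below
# (hypothetical caller; the action-name constants such as REBOOT live in
# nova.compute.instance_actions): an API-level operation records the action
# up front and the compute side finishes it when the work is done:
#
#   objects.InstanceAction.action_start(ctxt, instance.uuid,
#                                       instance_actions.REBOOT)
#   ...perform the operation...
#   objects.InstanceAction.action_finish(ctxt, instance.uuid)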
@base.remotable def create(self): if 'id' in self: raise exception.ObjectActionError(action='create', reason='already created') updates = self.obj_get_changes() db_action = db.action_start(self._context, updates) self._from_db_object(self._context, self, db_action) @base.NovaObjectRegistry.register class InstanceActionList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version # Version 1.1: get_by_instance_uuid added pagination and filters support VERSION = '1.1' fields = { 'objects': fields.ListOfObjectsField('InstanceAction'), } @base.remotable_classmethod def get_by_instance_uuid(cls, context, instance_uuid, limit=None, marker=None, filters=None): db_actions = db.actions_get( context, instance_uuid, limit, marker, filters) return base.obj_make_list(context, cls(), InstanceAction, db_actions) # TODO(berrange): Remove NovaObjectDictCompat @base.NovaObjectRegistry.register class InstanceActionEvent(base.NovaPersistentObject, base.NovaObject, base.NovaObjectDictCompat): # Version 1.0: Initial version # Version 1.1: event_finish_with_failure decorated with serialize_args # Version 1.2: Add 'host' field # Version 1.3: Add create() method. # Version 1.4: Added 'details' field. VERSION = '1.4' fields = { 'id': fields.IntegerField(), 'event': fields.StringField(nullable=True), 'action_id': fields.IntegerField(nullable=True), 'start_time': fields.DateTimeField(nullable=True), 'finish_time': fields.DateTimeField(nullable=True), 'result': fields.StringField(nullable=True), 'traceback': fields.StringField(nullable=True), 'host': fields.StringField(nullable=True), 'details': fields.StringField(nullable=True) } def obj_make_compatible(self, primitive, target_version): target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 4) and 'details' in primitive: del primitive['details'] if target_version < (1, 2) and 'host' in primitive: del primitive['host'] @staticmethod def _from_db_object(context, event, db_event): for field in event.fields: event[field] = db_event[field] event._context = context event.obj_reset_changes() return event @staticmethod def pack_action_event_start(context, instance_uuid, event_name, host=None): values = {'event': event_name, 'instance_uuid': instance_uuid, 'request_id': context.request_id, 'start_time': timeutils.utcnow(), 'host': host} return values @staticmethod def pack_action_event_finish(context, instance_uuid, event_name, exc_val=None, exc_tb=None): values = {'event': event_name, 'instance_uuid': instance_uuid, 'request_id': context.request_id, 'finish_time': timeutils.utcnow()} if exc_tb is None: values['result'] = 'Success' else: values['result'] = 'Error' # Store the details using the same logic as storing an instance # fault message. if exc_val: # If we got a string for exc_val it's probably because of # the serialize_args decorator on event_finish_with_failure # so pass that as the message to exception_to_dict otherwise # the details will just the exception class name since it # cannot format the message as a NovaException. 
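# For illustration, the calling pattern that feeds exc_val/exc_tb into this
# packing logic (hypothetical event name and step): events bracket the
# individual steps of an action, and failures land in the 'result',
# 'details' and 'traceback' columns:
#
#   objects.InstanceActionEvent.event_start(ctxt, uuid, 'compute_reboot',
#                                           host=host)
#   try:
#       do_reboot()
#   except Exception as exc:
#       objects.InstanceActionEvent.event_finish_with_failure(
#           ctxt, uuid, 'compute_reboot', exc_val=exc, exc_tb=tb)
#       raise
#   else:
#       objects.InstanceActionEvent.event_finish(ctxt, uuid,
#                                                'compute_reboot')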
message = ( exc_val if isinstance(exc_val, six.string_types) else None) values['details'] = compute_utils.exception_to_dict( exc_val, message=message)['message'] values['traceback'] = exc_tb return values @base.remotable_classmethod def get_by_id(cls, context, action_id, event_id): db_event = db.action_event_get_by_id(context, action_id, event_id) return cls._from_db_object(context, cls(), db_event) @base.remotable_classmethod def event_start(cls, context, instance_uuid, event_name, want_result=True, host=None): values = cls.pack_action_event_start(context, instance_uuid, event_name, host=host) db_event = db.action_event_start(context, values) if want_result: return cls._from_db_object(context, cls(), db_event) @base.serialize_args @base.remotable_classmethod def event_finish_with_failure(cls, context, instance_uuid, event_name, exc_val=None, exc_tb=None, want_result=None): values = cls.pack_action_event_finish(context, instance_uuid, event_name, exc_val=exc_val, exc_tb=exc_tb) db_event = db.action_event_finish(context, values) if want_result: return cls._from_db_object(context, cls(), db_event) @base.remotable_classmethod def event_finish(cls, context, instance_uuid, event_name, want_result=True): return cls.event_finish_with_failure(context, instance_uuid, event_name, exc_val=None, exc_tb=None, want_result=want_result) @base.remotable def finish_with_failure(self, exc_val, exc_tb): values = self.pack_action_event_finish(self._context, self.instance_uuid, self.event, exc_val=exc_val, exc_tb=exc_tb) db_event = db.action_event_finish(self._context, values) self._from_db_object(self._context, self, db_event) @base.remotable def finish(self): self.finish_with_failure(self._context, exc_val=None, exc_tb=None) # NOTE(mriedem): In most cases, the event_start() method should be used # to create new InstanceActionEvent records. This method should only be # used in specific exceptional cases like when cloning events from one cell # database to another. @base.remotable def create(self, instance_uuid, request_id): if 'id' in self: raise exception.ObjectActionError(action='create', reason='already created') updates = self.obj_get_changes() # The instance_uuid and request_id uniquely identify the "parent" # InstanceAction for this event and are used in action_event_start(). # TODO(mriedem): This could be optimized if we just didn't use # db.action_event_start and inserted the record ourselves and passed # in the action_id. updates['instance_uuid'] = instance_uuid updates['request_id'] = request_id db_event = db.action_event_start(self._context, updates) self._from_db_object(self._context, self, db_event) @base.NovaObjectRegistry.register class InstanceActionEventList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version # Version 1.1: InstanceActionEvent <= 1.1 VERSION = '1.1' fields = { 'objects': fields.ListOfObjectsField('InstanceActionEvent'), } @base.remotable_classmethod def get_by_action(cls, context, action_id): db_events = db.action_events_get(context, action_id) return base.obj_make_list(context, cls(context), objects.InstanceActionEvent, db_events) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/instance_fault.py0000664000175000017500000001007500000000000020536 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import itertools from oslo_log import log as logging from nova.db import api as db from nova import exception from nova import objects from nova.objects import base from nova.objects import fields LOG = logging.getLogger(__name__) # TODO(berrange): Remove NovaObjectDictCompat @base.NovaObjectRegistry.register class InstanceFault(base.NovaPersistentObject, base.NovaObject, base.NovaObjectDictCompat): # Version 1.0: Initial version # Version 1.1: String attributes updated to support unicode # Version 1.2: Added create() VERSION = '1.2' fields = { 'id': fields.IntegerField(), 'instance_uuid': fields.UUIDField(), 'code': fields.IntegerField(), 'message': fields.StringField(nullable=True), 'details': fields.StringField(nullable=True), 'host': fields.StringField(nullable=True), } @staticmethod def _from_db_object(context, fault, db_fault): # NOTE(danms): These are identical right now for key in fault.fields: fault[key] = db_fault[key] fault._context = context fault.obj_reset_changes() return fault @base.remotable_classmethod def get_latest_for_instance(cls, context, instance_uuid): db_faults = db.instance_fault_get_by_instance_uuids(context, [instance_uuid]) if instance_uuid in db_faults and db_faults[instance_uuid]: return cls._from_db_object(context, cls(), db_faults[instance_uuid][0]) @base.remotable def create(self): if self.obj_attr_is_set('id'): raise exception.ObjectActionError(action='create', reason='already created') values = { 'instance_uuid': self.instance_uuid, 'code': self.code, 'message': self.message, 'details': self.details, 'host': self.host, } db_fault = db.instance_fault_create(self._context, values) self._from_db_object(self._context, self, db_fault) self.obj_reset_changes() @base.NovaObjectRegistry.register class InstanceFaultList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version # InstanceFault <= version 1.1 # Version 1.1: InstanceFault version 1.2 # Version 1.2: Added get_latest_by_instance_uuids() method VERSION = '1.2' fields = { 'objects': fields.ListOfObjectsField('InstanceFault'), } @base.remotable_classmethod def get_latest_by_instance_uuids(cls, context, instance_uuids): db_faultdict = db.instance_fault_get_by_instance_uuids(context, instance_uuids, latest=True) db_faultlist = itertools.chain(*db_faultdict.values()) return base.obj_make_list(context, cls(context), objects.InstanceFault, db_faultlist) @base.remotable_classmethod def get_by_instance_uuids(cls, context, instance_uuids): db_faultdict = db.instance_fault_get_by_instance_uuids(context, instance_uuids) db_faultlist = itertools.chain(*db_faultdict.values()) return base.obj_make_list(context, cls(context), objects.InstanceFault, db_faultlist) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/instance_group.py0000664000175000017500000005674700000000000020577 0ustar00zuulzuul00000000000000# Copyright (c) 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from oslo_db import exception as db_exc from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import uuidutils from oslo_utils import versionutils from sqlalchemy.orm import contains_eager from sqlalchemy.orm import joinedload from nova.compute import utils as compute_utils from nova.db.sqlalchemy import api as db_api from nova.db.sqlalchemy import api_models from nova import exception from nova import objects from nova.objects import base from nova.objects import fields LAZY_LOAD_FIELDS = ['hosts'] LOG = logging.getLogger(__name__) def _instance_group_get_query(context, id_field=None, id=None): query = context.session.query(api_models.InstanceGroup).\ options(joinedload('_policies')).\ options(joinedload('_members')) if not context.is_admin: query = query.filter_by(project_id=context.project_id) if id and id_field: query = query.filter(id_field == id) return query def _instance_group_model_get_query(context, model_class, group_id): return context.session.query(model_class).filter_by(group_id=group_id) def _instance_group_model_add(context, model_class, items, item_models, field, group_id, append_to_models=None): models = [] already_existing = set() for db_item in item_models: already_existing.add(getattr(db_item, field)) models.append(db_item) for item in items: if item in already_existing: continue model = model_class() values = {'group_id': group_id} values[field] = item model.update(values) context.session.add(model) if append_to_models: append_to_models.append(model) models.append(model) return models def _instance_group_members_add(context, group, members): query = _instance_group_model_get_query(context, api_models.InstanceGroupMember, group.id) query = query.filter( api_models.InstanceGroupMember.instance_uuid.in_(set(members))) return _instance_group_model_add(context, api_models.InstanceGroupMember, members, query.all(), 'instance_uuid', group.id, append_to_models=group._members) def _instance_group_members_add_by_uuid(context, group_uuid, members): # NOTE(melwitt): The condition on the join limits the number of members # returned to only those we wish to check as already existing. group = context.session.query(api_models.InstanceGroup).\ outerjoin(api_models.InstanceGroupMember, api_models.InstanceGroupMember.instance_uuid.in_(set(members))).\ filter(api_models.InstanceGroup.uuid == group_uuid).\ options(contains_eager('_members')).first() if not group: raise exception.InstanceGroupNotFound(group_uuid=group_uuid) return _instance_group_model_add(context, api_models.InstanceGroupMember, members, group._members, 'instance_uuid', group.id) # TODO(berrange): Remove NovaObjectDictCompat # TODO(mriedem): Replace NovaPersistentObject with TimestampedObject in v2.0. 
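# For illustration (hypothetical primitive): when an InstanceGroup is sent
# to a service that only understands versions older than 1.11,
# obj_make_compatible() below turns {'policy': 'anti-affinity',
# 'rules': {...}} back into the old single-entry list form
# {'policies': ['anti-affinity']} and drops 'rules'.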
@base.NovaObjectRegistry.register class InstanceGroup(base.NovaPersistentObject, base.NovaObject, base.NovaObjectDictCompat): # Version 1.0: Initial version # Version 1.1: String attributes updated to support unicode # Version 1.2: Use list/dict helpers for policies, metadetails, members # Version 1.3: Make uuid a non-None real string # Version 1.4: Add add_members() # Version 1.5: Add get_hosts() # Version 1.6: Add get_by_name() # Version 1.7: Deprecate metadetails # Version 1.8: Add count_members_by_user() # Version 1.9: Add get_by_instance_uuid() # Version 1.10: Add hosts field # Version 1.11: Add policy and deprecate policies, add _rules VERSION = '1.11' fields = { 'id': fields.IntegerField(), 'user_id': fields.StringField(nullable=True), 'project_id': fields.StringField(nullable=True), 'uuid': fields.UUIDField(), 'name': fields.StringField(nullable=True), 'policies': fields.ListOfStringsField(nullable=True, read_only=True), 'members': fields.ListOfStringsField(nullable=True), 'hosts': fields.ListOfStringsField(nullable=True), 'policy': fields.StringField(nullable=True), # NOTE(danms): Use rules not _rules for general access '_rules': fields.DictOfStringsField(), } def __init__(self, *args, **kwargs): if 'rules' in kwargs: kwargs['_rules'] = kwargs.pop('rules') super(InstanceGroup, self).__init__(*args, **kwargs) @property def rules(self): if '_rules' not in self: return {} # NOTE(danms): Coerce our rules into a typed dict for convenience rules = {} if 'max_server_per_host' in self._rules: rules['max_server_per_host'] = \ int(self._rules['max_server_per_host']) return rules def obj_make_compatible(self, primitive, target_version): target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 11): # NOTE(yikun): Before 1.11, we had a policies property which is # the list of policy name, even though it was a list, there was # ever only one entry in the list. policy = primitive.pop('policy', None) if policy: primitive['policies'] = [policy] else: primitive['policies'] = [] primitive.pop('rules', None) if target_version < (1, 7): # NOTE(danms): Before 1.7, we had an always-empty # metadetails property primitive['metadetails'] = {} @staticmethod def _from_db_object(context, instance_group, db_inst): """Method to help with migration to objects. Converts a database entity to a formal object. """ # Most of the field names match right now, so be quick for field in instance_group.fields: if field in LAZY_LOAD_FIELDS: continue # This is needed to handle db models from both the api # database and the main database. In the migration to # the api database, we have removed soft-delete, so # the object fields for delete must be filled in with # default values for db models from the api database. # TODO(mriedem): Remove this when NovaPersistentObject is removed. ignore = {'deleted': False, 'deleted_at': None} if '_rules' == field: db_policy = db_inst['policy'] instance_group._rules = ( jsonutils.loads(db_policy['rules']) if db_policy and db_policy['rules'] else {}) elif field in ignore and not hasattr(db_inst, field): instance_group[field] = ignore[field] elif 'policies' == field: continue # NOTE(yikun): The obj.policies is deprecated and marked as # read_only in version 1.11, and there is no "policies" property # in InstanceGroup model anymore, so we just skip to set # "policies" and then load the "policies" when "policy" is set. 
elif 'policy' == field: db_policy = db_inst['policy'] if db_policy: instance_group.policy = db_policy['policy'] instance_group.policies = [instance_group.policy] else: instance_group.policy = None instance_group.policies = [] else: instance_group[field] = db_inst[field] instance_group._context = context instance_group.obj_reset_changes() return instance_group @staticmethod @db_api.api_context_manager.reader def _get_from_db_by_uuid(context, uuid): grp = _instance_group_get_query(context, id_field=api_models.InstanceGroup.uuid, id=uuid).first() if not grp: raise exception.InstanceGroupNotFound(group_uuid=uuid) return grp @staticmethod @db_api.api_context_manager.reader def _get_from_db_by_id(context, id): grp = _instance_group_get_query(context, id_field=api_models.InstanceGroup.id, id=id).first() if not grp: raise exception.InstanceGroupNotFound(group_uuid=id) return grp @staticmethod @db_api.api_context_manager.reader def _get_from_db_by_name(context, name): grp = _instance_group_get_query(context).filter_by(name=name).first() if not grp: raise exception.InstanceGroupNotFound(group_uuid=name) return grp @staticmethod @db_api.api_context_manager.reader def _get_from_db_by_instance(context, instance_uuid): grp_member = context.session.query(api_models.InstanceGroupMember).\ filter_by(instance_uuid=instance_uuid).first() if not grp_member: raise exception.InstanceGroupNotFound(group_uuid='') grp = InstanceGroup._get_from_db_by_id(context, grp_member.group_id) return grp @staticmethod @db_api.api_context_manager.writer def _save_in_db(context, group_uuid, values): grp = InstanceGroup._get_from_db_by_uuid(context, group_uuid) values_copy = copy.copy(values) members = values_copy.pop('members', None) grp.update(values_copy) if members is not None: _instance_group_members_add(context, grp, members) return grp @staticmethod @db_api.api_context_manager.writer def _create_in_db(context, values, policies=None, members=None, policy=None, rules=None): try: group = api_models.InstanceGroup() group.update(values) group.save(context.session) except db_exc.DBDuplicateEntry: raise exception.InstanceGroupIdExists(group_uuid=values['uuid']) if policies: db_policy = api_models.InstanceGroupPolicy( group_id=group['id'], policy=policies[0], rules=None) group._policies = [db_policy] group.rules = None elif policy: db_rules = jsonutils.dumps(rules or {}) db_policy = api_models.InstanceGroupPolicy( group_id=group['id'], policy=policy, rules=db_rules) group._policies = [db_policy] else: group._policies = [] if group._policies: group.save(context.session) if members: group._members = _instance_group_members_add(context, group, members) else: group._members = [] return group @staticmethod @db_api.api_context_manager.writer def _destroy_in_db(context, group_uuid): qry = _instance_group_get_query(context, id_field=api_models.InstanceGroup.uuid, id=group_uuid) if qry.count() == 0: raise exception.InstanceGroupNotFound(group_uuid=group_uuid) # Delete policies and members group_id = qry.first().id instance_models = [api_models.InstanceGroupPolicy, api_models.InstanceGroupMember] for model in instance_models: context.session.query(model).filter_by(group_id=group_id).delete() qry.delete() @staticmethod @db_api.api_context_manager.writer def _add_members_in_db(context, group_uuid, members): return _instance_group_members_add_by_uuid(context, group_uuid, members) @staticmethod @db_api.api_context_manager.writer def _remove_members_in_db(context, group_id, instance_uuids): # There is no public method provided for removing 
members because the # user-facing API doesn't allow removal of instance group members. We # need to be able to remove members to address quota races. context.session.query(api_models.InstanceGroupMember).\ filter_by(group_id=group_id).\ filter(api_models.InstanceGroupMember.instance_uuid. in_(set(instance_uuids))).\ delete(synchronize_session=False) @staticmethod @db_api.api_context_manager.writer def _destroy_members_bulk_in_db(context, instance_uuids): return context.session.query(api_models.InstanceGroupMember).filter( api_models.InstanceGroupMember.instance_uuid.in_(instance_uuids)).\ delete(synchronize_session=False) @classmethod def destroy_members_bulk(cls, context, instance_uuids): return cls._destroy_members_bulk_in_db(context, instance_uuids) def obj_load_attr(self, attrname): # NOTE(sbauza): Only hosts could be lazy-loaded right now if attrname != 'hosts': raise exception.ObjectActionError( action='obj_load_attr', reason='unable to load %s' % attrname) LOG.debug("Lazy-loading '%(attr)s' on %(name)s uuid %(uuid)s", {'attr': attrname, 'name': self.obj_name(), 'uuid': self.uuid, }) self.hosts = self.get_hosts() self.obj_reset_changes(['hosts']) @base.remotable_classmethod def get_by_uuid(cls, context, uuid): db_group = cls._get_from_db_by_uuid(context, uuid) return cls._from_db_object(context, cls(), db_group) @base.remotable_classmethod def get_by_name(cls, context, name): db_group = cls._get_from_db_by_name(context, name) return cls._from_db_object(context, cls(), db_group) @base.remotable_classmethod def get_by_instance_uuid(cls, context, instance_uuid): db_group = cls._get_from_db_by_instance(context, instance_uuid) return cls._from_db_object(context, cls(), db_group) @classmethod def get_by_hint(cls, context, hint): if uuidutils.is_uuid_like(hint): return cls.get_by_uuid(context, hint) else: return cls.get_by_name(context, hint) @base.remotable def save(self): """Save updates to this instance group.""" updates = self.obj_get_changes() # NOTE(sbauza): We do NOT save the set of compute nodes that an # instance group is connected to in this method. Instance groups are # implicitly connected to compute nodes when the # InstanceGroup.add_members() method is called, which adds the mapping # table entries. # So, since the only way to have hosts in the updates is to set that # field explicitly, we prefer to raise an Exception so the developer # knows he has to call obj_reset_changes(['hosts']) right after setting # the field. if 'hosts' in updates: raise exception.InstanceGroupSaveException(field='hosts') # NOTE(yikun): You have to provide exactly one policy on group create, # and also there are no group update APIs, so we do NOT support # policies update. 
if 'policies' in updates: raise exception.InstanceGroupSaveException(field='policies') if not updates: return payload = dict(updates) payload['server_group_id'] = self.uuid db_group = self._save_in_db(self._context, self.uuid, updates) self._from_db_object(self._context, self, db_group) compute_utils.notify_about_server_group_update(self._context, "update", payload) @base.remotable def refresh(self): """Refreshes the instance group.""" current = self.__class__.get_by_uuid(self._context, self.uuid) for field in self.fields: if self.obj_attr_is_set(field) and self[field] != current[field]: self[field] = current[field] self.obj_reset_changes() @base.remotable def create(self): if self.obj_attr_is_set('id'): raise exception.ObjectActionError(action='create', reason='already created') updates = self.obj_get_changes() payload = dict(updates) updates.pop('id', None) policies = updates.pop('policies', None) policy = updates.pop('policy', None) rules = updates.pop('_rules', None) members = updates.pop('members', None) if 'uuid' not in updates: self.uuid = uuidutils.generate_uuid() updates['uuid'] = self.uuid db_group = self._create_in_db(self._context, updates, policies=policies, members=members, policy=policy, rules=rules) self._from_db_object(self._context, self, db_group) payload['server_group_id'] = self.uuid compute_utils.notify_about_server_group_update(self._context, "create", payload) compute_utils.notify_about_server_group_action( context=self._context, group=self, action=fields.NotificationAction.CREATE) @base.remotable def destroy(self): payload = {'server_group_id': self.uuid} self._destroy_in_db(self._context, self.uuid) self.obj_reset_changes() compute_utils.notify_about_server_group_update(self._context, "delete", payload) compute_utils.notify_about_server_group_action( context=self._context, group=self, action=fields.NotificationAction.DELETE) @base.remotable_classmethod def add_members(cls, context, group_uuid, instance_uuids): payload = {'server_group_id': group_uuid, 'instance_uuids': instance_uuids} members = cls._add_members_in_db(context, group_uuid, instance_uuids) members = [member['instance_uuid'] for member in members] compute_utils.notify_about_server_group_update(context, "addmember", payload) compute_utils.notify_about_server_group_add_member(context, group_uuid) return list(members) @base.remotable def get_hosts(self, exclude=None): """Get a list of hosts for non-deleted instances in the group This method allows you to get a list of the hosts where instances in this group are currently running. There's also an option to exclude certain instance UUIDs from this calculation. """ filter_uuids = self.members if exclude: filter_uuids = set(filter_uuids) - set(exclude) filters = {'uuid': filter_uuids, 'deleted': False} # Pass expected_attrs=[] to avoid unnecessary joins. # TODO(mriedem): This is pretty inefficient since all we care about # are the hosts. 
We could optimize this with a single-purpose SQL query # like: # SELECT host FROM instances WHERE deleted=0 AND host IS NOT NULL # AND uuid IN ($filter_uuids) GROUP BY host; instances = objects.InstanceList.get_by_filters(self._context, filters=filters, expected_attrs=[]) return list(set([instance.host for instance in instances if instance.host])) @base.remotable def count_members_by_user(self, user_id): """Count the number of instances in a group belonging to a user.""" filter_uuids = self.members filters = {'uuid': filter_uuids, 'user_id': user_id, 'deleted': False} instances = objects.InstanceList.get_by_filters(self._context, filters=filters) return len(instances) @base.NovaObjectRegistry.register class InstanceGroupList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version # InstanceGroup <= version 1.3 # Version 1.1: InstanceGroup <= version 1.4 # Version 1.2: InstanceGroup <= version 1.5 # Version 1.3: InstanceGroup <= version 1.6 # Version 1.4: InstanceGroup <= version 1.7 # Version 1.5: InstanceGroup <= version 1.8 # Version 1.6: InstanceGroup <= version 1.9 # Version 1.7: InstanceGroup <= version 1.10 # Version 1.8: Added get_counts() for quotas VERSION = '1.8' fields = { 'objects': fields.ListOfObjectsField('InstanceGroup'), } @staticmethod @db_api.api_context_manager.reader def _get_from_db(context, project_id=None): query = _instance_group_get_query(context) if project_id is not None: query = query.filter_by(project_id=project_id) return query.all() @staticmethod @db_api.api_context_manager.reader def _get_counts_from_db(context, project_id, user_id=None): query = context.session.query(api_models.InstanceGroup.id).\ filter_by(project_id=project_id) counts = {} counts['project'] = {'server_groups': query.count()} if user_id: query = query.filter_by(user_id=user_id) counts['user'] = {'server_groups': query.count()} return counts @base.remotable_classmethod def get_by_project_id(cls, context, project_id): api_db_groups = cls._get_from_db(context, project_id=project_id) return base.obj_make_list(context, cls(context), objects.InstanceGroup, api_db_groups) @base.remotable_classmethod def get_all(cls, context): api_db_groups = cls._get_from_db(context) return base.obj_make_list(context, cls(context), objects.InstanceGroup, api_db_groups) @base.remotable_classmethod def get_counts(cls, context, project_id, user_id=None): """Get the counts of InstanceGroup objects in the database. :param context: The request context for database access :param project_id: The project_id to count across :param user_id: The user_id to count across :returns: A dict containing the project-scoped counts and user-scoped counts if user_id is specified. For example: {'project': {'server_groups': }, 'user': {'server_groups': }} """ return cls._get_counts_from_db(context, project_id, user_id=user_id) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/instance_info_cache.py0000664000175000017500000000730200000000000021500 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from nova.db import api as db from nova import exception from nova.objects import base from nova.objects import fields LOG = logging.getLogger(__name__) @base.NovaObjectRegistry.register class InstanceInfoCache(base.NovaPersistentObject, base.NovaObject): # Version 1.0: Initial version # Version 1.1: Converted network_info to store the model. # Version 1.2: Added new() and update_cells kwarg to save(). # Version 1.3: Added delete() # Version 1.4: String attributes updated to support unicode # Version 1.5: Actually set the deleted, created_at, updated_at, and # deleted_at attributes VERSION = '1.5' fields = { 'instance_uuid': fields.UUIDField(), 'network_info': fields.Field(fields.NetworkModel(), nullable=True), } @staticmethod def _from_db_object(context, info_cache, db_obj): for field in info_cache.fields: setattr(info_cache, field, db_obj[field]) info_cache.obj_reset_changes() info_cache._context = context return info_cache @classmethod def new(cls, context, instance_uuid): """Create an InfoCache object that can be used to create the DB entry for the first time. When save()ing this object, the info_cache_update() DB call will properly handle creating it if it doesn't exist already. """ info_cache = cls() info_cache.instance_uuid = instance_uuid info_cache.network_info = None info_cache._context = context # Leave the fields dirty return info_cache @base.remotable_classmethod def get_by_instance_uuid(cls, context, instance_uuid): db_obj = db.instance_info_cache_get(context, instance_uuid) if not db_obj: raise exception.InstanceInfoCacheNotFound( instance_uuid=instance_uuid) return cls._from_db_object(context, cls(context), db_obj) # TODO(stephenfin): Remove 'update_cells' in version 2.0 @base.remotable def save(self, update_cells=True): if 'network_info' in self.obj_what_changed(): nw_info_json = self.fields['network_info'].to_primitive( self, 'network_info', self.network_info) rv = db.instance_info_cache_update(self._context, self.instance_uuid, {'network_info': nw_info_json}) self._from_db_object(self._context, self, rv) self.obj_reset_changes() @base.remotable def delete(self): db.instance_info_cache_delete(self._context, self.instance_uuid) @base.remotable def refresh(self): current = self.__class__.get_by_instance_uuid(self._context, self.instance_uuid) current._context = None for field in self.fields: if (self.obj_attr_is_set(field) and getattr(self, field) != getattr(current, field)): setattr(self, field, getattr(current, field)) self.obj_reset_changes() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/instance_mapping.py0000664000175000017500000004762600000000000021072 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
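# Illustrative usage sketch (not part of the upstream module): the typical
# flow in which conductor records which cell an instance was scheduled to,
# using the InstanceMapping object defined below. `ctxt`, `instance` and
# `cell_uuid` are placeholder assumptions.
#
#   im = InstanceMapping(ctxt, instance_uuid=instance.uuid,
#                        project_id=ctxt.project_id, user_id=ctxt.user_id)
#   im.create()                         # queued_for_delete defaults to False
#   im.cell_mapping = objects.CellMapping.get_by_uuid(ctxt, cell_uuid)
#   im.save()                           # cell_mapping is translated to
#                                       # cell_id by _update_with_cell_id()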
import collections from oslo_log import log as logging from oslo_utils import versionutils import six from sqlalchemy.orm import exc as orm_exc from sqlalchemy.orm import joinedload from sqlalchemy.sql import false from sqlalchemy.sql import func from sqlalchemy.sql import or_ from nova import context as nova_context from nova.db.sqlalchemy import api as db_api from nova.db.sqlalchemy import api_models from nova import exception from nova.i18n import _ from nova import objects from nova.objects import base from nova.objects import cell_mapping from nova.objects import fields from nova.objects import virtual_interface LOG = logging.getLogger(__name__) @base.NovaObjectRegistry.register class InstanceMapping(base.NovaTimestampObject, base.NovaObject): # Version 1.0: Initial version # Version 1.1: Add queued_for_delete # Version 1.2: Add user_id VERSION = '1.2' fields = { 'id': fields.IntegerField(read_only=True), 'instance_uuid': fields.UUIDField(), 'cell_mapping': fields.ObjectField('CellMapping', nullable=True), 'project_id': fields.StringField(), 'user_id': fields.StringField(), 'queued_for_delete': fields.BooleanField(default=False), } def obj_make_compatible(self, primitive, target_version): super(InstanceMapping, self).obj_make_compatible(primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 2) and 'user_id' in primitive: del primitive['user_id'] if target_version < (1, 1): if 'queued_for_delete' in primitive: del primitive['queued_for_delete'] def obj_load_attr(self, attrname): if attrname == 'user_id': LOG.error('The unset user_id attribute of an unmigrated instance ' 'mapping should not be accessed.') raise exception.ObjectActionError( action='obj_load_attr', reason=_('attribute user_id is not lazy-loadable')) super(InstanceMapping, self).obj_load_attr(attrname) def _update_with_cell_id(self, updates): cell_mapping_obj = updates.pop("cell_mapping", None) if cell_mapping_obj: updates["cell_id"] = cell_mapping_obj.id return updates @staticmethod def _from_db_object(context, instance_mapping, db_instance_mapping): for key in instance_mapping.fields: db_value = db_instance_mapping.get(key) if key == 'cell_mapping': # cell_mapping can be None indicating that the instance has # not been scheduled yet. if db_value: db_value = cell_mapping.CellMapping._from_db_object( context, cell_mapping.CellMapping(), db_value) if key == 'user_id' and db_value is None: # NOTE(melwitt): If user_id is NULL, we can't set the field # because it's non-nullable. We don't plan for any code to read # the user_id field at this time, so skip setting it. 
continue setattr(instance_mapping, key, db_value) instance_mapping.obj_reset_changes() instance_mapping._context = context return instance_mapping @staticmethod @db_api.api_context_manager.reader def _get_by_instance_uuid_from_db(context, instance_uuid): db_mapping = (context.session.query(api_models.InstanceMapping) .options(joinedload('cell_mapping')) .filter( api_models.InstanceMapping.instance_uuid == instance_uuid)).first() if not db_mapping: raise exception.InstanceMappingNotFound(uuid=instance_uuid) return db_mapping @base.remotable_classmethod def get_by_instance_uuid(cls, context, instance_uuid): db_mapping = cls._get_by_instance_uuid_from_db(context, instance_uuid) return cls._from_db_object(context, cls(), db_mapping) @staticmethod @db_api.api_context_manager.writer def _create_in_db(context, updates): db_mapping = api_models.InstanceMapping() db_mapping.update(updates) db_mapping.save(context.session) # NOTE: This is done because a later access will trigger a lazy load # outside of the db session so it will fail. We don't lazy load # cell_mapping on the object later because we never need an # InstanceMapping without the CellMapping. db_mapping.cell_mapping return db_mapping @base.remotable def create(self): changes = self.obj_get_changes() changes = self._update_with_cell_id(changes) if 'queued_for_delete' not in changes: # NOTE(danms): If we are creating a mapping, it should be # not queued_for_delete (unless we are being asked to # create one in deleted state for some reason). changes['queued_for_delete'] = False db_mapping = self._create_in_db(self._context, changes) self._from_db_object(self._context, self, db_mapping) @staticmethod @db_api.api_context_manager.writer def _save_in_db(context, instance_uuid, updates): db_mapping = context.session.query( api_models.InstanceMapping).filter_by( instance_uuid=instance_uuid).first() if not db_mapping: raise exception.InstanceMappingNotFound(uuid=instance_uuid) db_mapping.update(updates) # NOTE: This is done because a later access will trigger a lazy load # outside of the db session so it will fail. We don't lazy load # cell_mapping on the object later because we never need an # InstanceMapping without the CellMapping. db_mapping.cell_mapping context.session.add(db_mapping) return db_mapping @base.remotable def save(self): changes = self.obj_get_changes() changes = self._update_with_cell_id(changes) try: db_mapping = self._save_in_db(self._context, self.instance_uuid, changes) except orm_exc.StaleDataError: # NOTE(melwitt): If the instance mapping has been deleted out from # under us by conductor (delete requested while booting), we will # encounter a StaleDataError after we retrieved the row and try to # update it after it's been deleted. We can treat this like an # instance mapping not found and allow the caller to handle it. 
raise exception.InstanceMappingNotFound(uuid=self.instance_uuid) self._from_db_object(self._context, self, db_mapping) self.obj_reset_changes() @staticmethod @db_api.api_context_manager.writer def _destroy_in_db(context, instance_uuid): result = context.session.query(api_models.InstanceMapping).filter_by( instance_uuid=instance_uuid).delete() if not result: raise exception.InstanceMappingNotFound(uuid=instance_uuid) @base.remotable def destroy(self): self._destroy_in_db(self._context, self.instance_uuid) @db_api.api_context_manager.writer def populate_queued_for_delete(context, max_count): cells = objects.CellMappingList.get_all(context) processed = 0 for cell in cells: ims = ( # Get a direct list of instance mappings for this cell which # have not yet received a defined value decision for # queued_for_delete context.session.query(api_models.InstanceMapping) .filter( api_models.InstanceMapping.queued_for_delete == None) # noqa .filter(api_models.InstanceMapping.cell_id == cell.id) .limit(max_count).all()) ims_by_inst = {im.instance_uuid: im for im in ims} if not ims_by_inst: # If there is nothing from this cell to migrate, move on. continue with nova_context.target_cell(context, cell) as cctxt: filters = {'uuid': list(ims_by_inst.keys()), 'deleted': True, 'soft_deleted': True} instances = objects.InstanceList.get_by_filters( cctxt, filters, expected_attrs=[]) # Walk through every deleted instance that has a mapping needing # to be updated and update it for instance in instances: im = ims_by_inst.pop(instance.uuid) im.queued_for_delete = True context.session.add(im) processed += 1 # Any instances we did not just hit must be not-deleted, so # update the remaining mappings for non_deleted_im in ims_by_inst.values(): non_deleted_im.queued_for_delete = False context.session.add(non_deleted_im) processed += 1 max_count -= len(ims) if max_count <= 0: break return processed, processed @db_api.api_context_manager.writer def populate_user_id(context, max_count): cells = objects.CellMappingList.get_all(context) cms_by_id = {cell.id: cell for cell in cells} done = 0 unmigratable_ims = False ims = ( # Get a list of instance mappings which do not have user_id populated. # We need to include records with queued_for_delete=True because they # include SOFT_DELETED instances, which could be restored at any time # in the future. If we don't migrate SOFT_DELETED instances now, we # wouldn't be able to retire this migration code later. Also filter # out the marker instance created by the virtual interface migration. context.session.query(api_models.InstanceMapping) .filter_by(user_id=None) .filter(api_models.InstanceMapping.project_id != virtual_interface.FAKE_UUID) .limit(max_count).all()) found = len(ims) ims_by_inst_uuid = {} inst_uuids_by_cell_id = collections.defaultdict(set) for im in ims: ims_by_inst_uuid[im.instance_uuid] = im inst_uuids_by_cell_id[im.cell_id].add(im.instance_uuid) for cell_id, inst_uuids in inst_uuids_by_cell_id.items(): # We cannot migrate instance mappings that don't have a cell yet. if cell_id is None: unmigratable_ims = True continue with nova_context.target_cell(context, cms_by_id[cell_id]) as cctxt: # We need to migrate SOFT_DELETED instances because they could be # restored at any time in the future, preventing us from being able # to remove any other interim online data migration code we have, # if we don't migrate them here. # NOTE: it's not possible to query only for SOFT_DELETED instances. # We must query for both deleted and SOFT_DELETED instances. 
filters = {'uuid': inst_uuids} try: instances = objects.InstanceList.get_by_filters( cctxt, filters, expected_attrs=[]) except Exception as exp: LOG.warning('Encountered exception: "%s" while querying ' 'instances from cell: %s. Continuing to the next ' 'cell.', six.text_type(exp), cms_by_id[cell_id].identity) continue # Walk through every instance that has a mapping needing to be updated # and update it. for instance in instances: im = ims_by_inst_uuid.pop(instance.uuid) im.user_id = instance.user_id context.session.add(im) done += 1 if ims_by_inst_uuid: unmigratable_ims = True if done >= max_count: break if unmigratable_ims: LOG.warning('Some instance mappings were not migratable. This may ' 'be transient due to in-flight instance builds, or could ' 'be due to stale data that will be cleaned up after ' 'running "nova-manage db archive_deleted_rows --purge".') return found, done @base.NovaObjectRegistry.register class InstanceMappingList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version # Version 1.1: Added get_by_cell_id method. # Version 1.2: Added get_by_instance_uuids method # Version 1.3: Added get_counts() VERSION = '1.3' fields = { 'objects': fields.ListOfObjectsField('InstanceMapping'), } @staticmethod @db_api.api_context_manager.reader def _get_by_project_id_from_db(context, project_id): return (context.session.query(api_models.InstanceMapping) .options(joinedload('cell_mapping')) .filter( api_models.InstanceMapping.project_id == project_id)).all() @base.remotable_classmethod def get_by_project_id(cls, context, project_id): db_mappings = cls._get_by_project_id_from_db(context, project_id) return base.obj_make_list(context, cls(), objects.InstanceMapping, db_mappings) @staticmethod @db_api.api_context_manager.reader def _get_by_cell_id_from_db(context, cell_id): return (context.session.query(api_models.InstanceMapping) .options(joinedload('cell_mapping')) .filter(api_models.InstanceMapping.cell_id == cell_id)).all() @base.remotable_classmethod def get_by_cell_id(cls, context, cell_id): db_mappings = cls._get_by_cell_id_from_db(context, cell_id) return base.obj_make_list(context, cls(), objects.InstanceMapping, db_mappings) @staticmethod @db_api.api_context_manager.reader def _get_by_instance_uuids_from_db(context, uuids): return (context.session.query(api_models.InstanceMapping) .options(joinedload('cell_mapping')) .filter(api_models.InstanceMapping.instance_uuid.in_(uuids)) .all()) @base.remotable_classmethod def get_by_instance_uuids(cls, context, uuids): db_mappings = cls._get_by_instance_uuids_from_db(context, uuids) return base.obj_make_list(context, cls(), objects.InstanceMapping, db_mappings) @staticmethod @db_api.api_context_manager.writer def _destroy_bulk_in_db(context, instance_uuids): return context.session.query(api_models.InstanceMapping).filter( api_models.InstanceMapping.instance_uuid.in_(instance_uuids)).\ delete(synchronize_session=False) @classmethod def destroy_bulk(cls, context, instance_uuids): return cls._destroy_bulk_in_db(context, instance_uuids) @staticmethod @db_api.api_context_manager.reader def _get_not_deleted_by_cell_and_project_from_db(context, cell_uuid, project_id, limit): query = context.session.query(api_models.InstanceMapping) if project_id is not None: # Note that the project_id can be None in case # instances are being listed for the all-tenants case. 
query = query.filter_by(project_id=project_id) # Both the values NULL (for cases when the online data migration for # queued_for_delete was not run) and False (cases when the online # data migration for queued_for_delete was run) are assumed to mean # that the instance is not queued for deletion. query = (query.filter(or_( api_models.InstanceMapping.queued_for_delete == false(), api_models.InstanceMapping.queued_for_delete.is_(None))) .join('cell_mapping') .options(joinedload('cell_mapping')) .filter(api_models.CellMapping.uuid == cell_uuid)) if limit is not None: query = query.limit(limit) return query.all() @classmethod def get_not_deleted_by_cell_and_project(cls, context, cell_uuid, project_id, limit=None): """Return a limit restricted list of InstanceMapping objects which are mapped to the specified cell_uuid, belong to the specified project_id and are not queued for deletion (note that unlike the other InstanceMappingList query methods which return all mappings irrespective of whether they are queued for deletion this method explicitly queries only for those mappings that are *not* queued for deletion as is evident from the naming of the method). """ db_mappings = cls._get_not_deleted_by_cell_and_project_from_db( context, cell_uuid, project_id, limit) return base.obj_make_list(context, cls(), objects.InstanceMapping, db_mappings) @staticmethod @db_api.api_context_manager.reader def _get_counts_in_db(context, project_id, user_id=None): project_query = context.session.query( func.count(api_models.InstanceMapping.id)).\ filter_by(queued_for_delete=False).\ filter_by(project_id=project_id) project_result = project_query.scalar() counts = {'project': {'instances': project_result}} if user_id: user_result = project_query.filter_by(user_id=user_id).scalar() counts['user'] = {'instances': user_result} return counts @base.remotable_classmethod def get_counts(cls, context, project_id, user_id=None): """Get the counts of InstanceMapping objects in the database. The count is used to represent the count of instances for the purpose of counting quota usage. Instances that are queued_for_deleted=True are not included in the count (deleted and SOFT_DELETED instances). Instances that are queued_for_deleted=None are not included in the count because we are not certain about whether or not they are deleted. :param context: The request context for database access :param project_id: The project_id to count across :param user_id: The user_id to count across :returns: A dict containing the project-scoped counts and user-scoped counts if user_id is specified. For example: {'project': {'instances': }, 'user': {'instances': }} """ return cls._get_counts_in_db(context, project_id, user_id=user_id) @staticmethod @db_api.api_context_manager.reader def _get_count_by_uuids_and_user_in_db(context, uuids, user_id): query = (context.session.query( func.count(api_models.InstanceMapping.id)) .filter(api_models.InstanceMapping.instance_uuid.in_(uuids)) .filter_by(queued_for_delete=False) .filter_by(user_id=user_id)) return query.scalar() @classmethod def get_count_by_uuids_and_user(cls, context, uuids, user_id): """Get the count of InstanceMapping objects by UUIDs and user_id. The count is used to represent the count of server group members belonging to a particular user, for the purpose of counting quota usage. Instances that are queued_for_deleted=True are not included in the count (deleted and SOFT_DELETED instances). 
:param uuids: List of instance UUIDs on which to filter :param user_id: The user_id on which to filter :returns: An integer for the count """ return cls._get_count_by_uuids_and_user_in_db(context, uuids, user_id) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/instance_numa.py0000664000175000017500000002140700000000000020364 0ustar00zuulzuul00000000000000# Copyright 2014 Red Hat Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import itertools from oslo_serialization import jsonutils from oslo_utils import versionutils from nova.db import api as db from nova import exception from nova.objects import base from nova.objects import fields as obj_fields from nova.virt import hardware # TODO(berrange): Remove NovaObjectDictCompat @base.NovaObjectRegistry.register class InstanceNUMACell(base.NovaEphemeralObject, base.NovaObjectDictCompat): # Version 1.0: Initial version # Version 1.1: Add pagesize field # Version 1.2: Add cpu_pinning_raw and topology fields # Version 1.3: Add cpu_policy and cpu_thread_policy fields # Version 1.4: Add cpuset_reserved field VERSION = '1.4' def obj_make_compatible(self, primitive, target_version): super(InstanceNUMACell, self).obj_make_compatible(primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 4): primitive.pop('cpuset_reserved', None) if target_version < (1, 3): primitive.pop('cpu_policy', None) primitive.pop('cpu_thread_policy', None) fields = { 'id': obj_fields.IntegerField(), 'cpuset': obj_fields.SetOfIntegersField(), 'memory': obj_fields.IntegerField(), 'pagesize': obj_fields.IntegerField(nullable=True, default=None), 'cpu_topology': obj_fields.ObjectField('VirtCPUTopology', nullable=True), 'cpu_pinning_raw': obj_fields.DictOfIntegersField(nullable=True, default=None), 'cpu_policy': obj_fields.CPUAllocationPolicyField(nullable=True, default=None), 'cpu_thread_policy': obj_fields.CPUThreadAllocationPolicyField( nullable=True, default=None), # These physical CPUs are reserved for use by the hypervisor 'cpuset_reserved': obj_fields.SetOfIntegersField(nullable=True, default=None), } cpu_pinning = obj_fields.DictProxyField('cpu_pinning_raw') def __len__(self): return len(self.cpuset) @classmethod def _from_dict(cls, data_dict): # NOTE(sahid): Used as legacy, could be renamed in # _legacy_from_dict_ to the future to avoid confusing. 
cpuset = hardware.parse_cpu_spec(data_dict.get('cpus', '')) memory = data_dict.get('mem', {}).get('total', 0) cell_id = data_dict.get('id') pagesize = data_dict.get('pagesize') return cls(id=cell_id, cpuset=cpuset, memory=memory, pagesize=pagesize) @property def siblings(self): cpu_list = sorted(list(self.cpuset)) threads = 0 if ('cpu_topology' in self) and self.cpu_topology: threads = self.cpu_topology.threads if threads == 1: threads = 0 return list(map(set, zip(*[iter(cpu_list)] * threads))) @property def cpu_pinning_requested(self): return self.cpu_policy == obj_fields.CPUAllocationPolicy.DEDICATED def pin(self, vcpu, pcpu): if vcpu not in self.cpuset: return pinning_dict = self.cpu_pinning or {} pinning_dict[vcpu] = pcpu self.cpu_pinning = pinning_dict def pin_vcpus(self, *cpu_pairs): for vcpu, pcpu in cpu_pairs: self.pin(vcpu, pcpu) def clear_host_pinning(self): """Clear any data related to how this cell is pinned to the host. Needed for aborting claims as we do not want to keep stale data around. """ self.id = -1 self.cpu_pinning = {} return self # TODO(berrange): Remove NovaObjectDictCompat @base.NovaObjectRegistry.register class InstanceNUMATopology(base.NovaObject, base.NovaObjectDictCompat): # Version 1.0: Initial version # Version 1.1: Takes into account pagesize # Version 1.2: InstanceNUMACell 1.2 # Version 1.3: Add emulator threads policy VERSION = '1.3' def obj_make_compatible(self, primitive, target_version): super(InstanceNUMATopology, self).obj_make_compatible(primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 3): primitive.pop('emulator_threads_policy', None) fields = { # NOTE(danms): The 'id' field is no longer used and should be # removed in the future when convenient 'id': obj_fields.IntegerField(), 'instance_uuid': obj_fields.UUIDField(), 'cells': obj_fields.ListOfObjectsField('InstanceNUMACell'), 'emulator_threads_policy': ( obj_fields.CPUEmulatorThreadsPolicyField(nullable=True)), } @classmethod def obj_from_primitive(cls, primitive, context=None): if 'nova_object.name' in primitive: obj_topology = super(InstanceNUMATopology, cls).obj_from_primitive( primitive, context=None) else: # NOTE(sahid): This compatibility code needs to stay until we can # guarantee that there are no cases of the old format stored in # the database (or forever, if we can never guarantee that). 
obj_topology = InstanceNUMATopology._from_dict(primitive) obj_topology.id = 0 return obj_topology @classmethod def obj_from_db_obj(cls, instance_uuid, db_obj): primitive = jsonutils.loads(db_obj) obj_topology = cls.obj_from_primitive(primitive) if 'nova_object.name' not in db_obj: obj_topology.instance_uuid = instance_uuid # No benefit to store a list of changed fields obj_topology.obj_reset_changes() return obj_topology # TODO(ndipanov) Remove this method on the major version bump to 2.0 @base.remotable def create(self): values = {'numa_topology': self._to_json()} db.instance_extra_update_by_uuid(self._context, self.instance_uuid, values) self.obj_reset_changes() @base.remotable_classmethod def get_by_instance_uuid(cls, context, instance_uuid): db_extra = db.instance_extra_get_by_instance_uuid( context, instance_uuid, columns=['numa_topology']) if not db_extra: raise exception.NumaTopologyNotFound(instance_uuid=instance_uuid) if db_extra['numa_topology'] is None: return None return cls.obj_from_db_obj(instance_uuid, db_extra['numa_topology']) def _to_json(self): return jsonutils.dumps(self.obj_to_primitive()) def __len__(self): """Defined so that boolean testing works the same as for lists.""" return len(self.cells) @classmethod def _from_dict(cls, data_dict): # NOTE(sahid): Used as legacy, could be renamed in _legacy_from_dict_ # in the future to avoid confusing. return cls(cells=[ InstanceNUMACell._from_dict(cell_dict) for cell_dict in data_dict.get('cells', [])]) @property def cpu_pinning(self): """Return a set of all host CPUs this NUMATopology is pinned to.""" return set(itertools.chain.from_iterable([ cell.cpu_pinning.values() for cell in self.cells if cell.cpu_pinning])) @property def cpu_pinning_requested(self): return all(cell.cpu_pinning_requested for cell in self.cells) def clear_host_pinning(self): """Clear any data related to how instance is pinned to the host. Needed for aborting claims as we do not want to keep stale data around. """ for cell in self.cells: cell.clear_host_pinning() return self @property def emulator_threads_isolated(self): """Determines whether emulator threads should be isolated""" return (self.obj_attr_is_set('emulator_threads_policy') and (self.emulator_threads_policy == obj_fields.CPUEmulatorThreadsPolicy.ISOLATE)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/instance_pci_requests.py0000664000175000017500000001510500000000000022130 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
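# Illustrative usage sketch (not part of the upstream module): building a
# flavor-alias PCI request with the objects defined below and serializing it
# the way it is stored alongside the instance. `ctxt` and `instance` are
# placeholder assumptions; the vendor/product ids are arbitrary examples.
#
#   request = InstancePCIRequest(
#       count=1,
#       spec=[{'vendor_id': '8086', 'product_id': '1520'}],
#       alias_name='a1')
#   pci_requests = InstancePCIRequests(context=ctxt,
#                                      instance_uuid=instance.uuid,
#                                      requests=[request])
#   blob = pci_requests.to_json()       # JSON blob kept in instance_extra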
from oslo_serialization import jsonutils from oslo_utils import versionutils from nova.db import api as db from nova.objects import base from nova.objects import fields # TODO(berrange): Remove NovaObjectDictCompat @base.NovaObjectRegistry.register class InstancePCIRequest(base.NovaObject, base.NovaObjectDictCompat): # Version 1.0: Initial version # Version 1.1: Add request_id # Version 1.2: Add PCI NUMA affinity policy # Version 1.3: Add requester_id VERSION = '1.3' # Possible sources for a PCI request: # FLAVOR_ALIAS : Request originated from a flavor alias. # NEUTRON_PORT : Request originated from a neutron port. FLAVOR_ALIAS = 0 NEUTRON_PORT = 1 fields = { 'count': fields.IntegerField(), 'spec': fields.ListOfDictOfNullableStringsField(), 'alias_name': fields.StringField(nullable=True), # Note(moshele): is_new is deprecated and should be removed # on major version bump 'is_new': fields.BooleanField(default=False), 'request_id': fields.UUIDField(nullable=True), 'requester_id': fields.StringField(nullable=True), 'numa_policy': fields.PCINUMAAffinityPolicyField(nullable=True), } @property def source(self): # PCI requests originate from two sources: instance flavor alias and # neutron SR-IOV ports. # SR-IOV ports pci_request don't have an alias_name. return (InstancePCIRequest.NEUTRON_PORT if self.alias_name is None else InstancePCIRequest.FLAVOR_ALIAS) def obj_load_attr(self, attr): setattr(self, attr, None) def obj_make_compatible(self, primitive, target_version): super(InstancePCIRequest, self).obj_make_compatible(primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 3) and 'requester_id' in primitive: del primitive['requester_id'] if target_version < (1, 2) and 'numa_policy' in primitive: del primitive['numa_policy'] if target_version < (1, 1) and 'request_id' in primitive: del primitive['request_id'] # TODO(berrange): Remove NovaObjectDictCompat @base.NovaObjectRegistry.register class InstancePCIRequests(base.NovaObject, base.NovaObjectDictCompat): # Version 1.0: Initial version # Version 1.1: InstancePCIRequest 1.1 VERSION = '1.1' fields = { 'instance_uuid': fields.UUIDField(), 'requests': fields.ListOfObjectsField('InstancePCIRequest'), } @classmethod def obj_from_db(cls, context, instance_uuid, db_requests): self = cls(context=context, requests=[], instance_uuid=instance_uuid) if db_requests is not None: requests = jsonutils.loads(db_requests) else: requests = [] for request in requests: # Note(moshele): is_new is deprecated and therefore we load it # with default value of False request_obj = InstancePCIRequest( count=request['count'], spec=request['spec'], alias_name=request['alias_name'], is_new=False, numa_policy=request.get('numa_policy', fields.PCINUMAAffinityPolicy.LEGACY), request_id=request['request_id'], requester_id=request.get('requester_id')) request_obj.obj_reset_changes() self.requests.append(request_obj) self.obj_reset_changes() return self @base.remotable_classmethod def get_by_instance_uuid(cls, context, instance_uuid): db_pci_requests = db.instance_extra_get_by_instance_uuid( context, instance_uuid, columns=['pci_requests']) if db_pci_requests is not None: db_pci_requests = db_pci_requests['pci_requests'] return cls.obj_from_db(context, instance_uuid, db_pci_requests) @staticmethod def _load_legacy_requests(sysmeta_value, is_new=False): if sysmeta_value is None: return [] requests = [] db_requests = jsonutils.loads(sysmeta_value) for db_request in db_requests: request = InstancePCIRequest( 
count=db_request['count'], spec=db_request['spec'], alias_name=db_request['alias_name'], is_new=is_new) request.obj_reset_changes() requests.append(request) return requests @classmethod def get_by_instance(cls, context, instance): # NOTE (baoli): not all callers are passing instance as object yet. # Therefore, use the dict syntax in this routine if 'pci_requests' in instance['system_metadata']: # NOTE(danms): This instance hasn't been converted to use # instance_extra yet, so extract the data from sysmeta sysmeta = instance['system_metadata'] _requests = ( cls._load_legacy_requests(sysmeta['pci_requests']) + cls._load_legacy_requests(sysmeta.get('new_pci_requests'), is_new=True)) requests = cls(instance_uuid=instance['uuid'], requests=_requests) requests.obj_reset_changes() return requests else: return cls.get_by_instance_uuid(context, instance['uuid']) def to_json(self): blob = [{'count': x.count, 'spec': x.spec, 'alias_name': x.alias_name, 'is_new': x.is_new, 'numa_policy': x.numa_policy, 'request_id': x.request_id, 'requester_id': x.requester_id} for x in self.requests] return jsonutils.dumps(blob) @classmethod def from_request_spec_instance_props(cls, pci_requests): objs = [InstancePCIRequest(**request) for request in pci_requests['requests']] return cls(requests=objs, instance_uuid=pci_requests['instance_uuid']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/keypair.py0000664000175000017500000002035700000000000017207 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
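# Illustrative usage sketch (not part of the upstream module): creating a
# keypair in the API database and reading it back through the KeyPair object
# defined below. `ctxt` and the key material shown are placeholder
# assumptions.
#
#   kp = KeyPair(ctxt, user_id='fake-user', name='mykey',
#                type=KEYPAIR_TYPE_SSH,
#                public_key='ssh-rsa AAAA... user@host',
#                fingerprint='aa:bb:cc')
#   kp.create()                         # stored via _create_in_db()
#   kp = KeyPair.get_by_name(ctxt, 'fake-user', 'mykey')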
from oslo_db import exception as db_exc from oslo_db.sqlalchemy import utils as sqlalchemyutils from oslo_log import log as logging from oslo_utils import versionutils from nova.db import api as db from nova.db.sqlalchemy import api as db_api from nova.db.sqlalchemy import api_models from nova import exception from nova import objects from nova.objects import base from nova.objects import fields KEYPAIR_TYPE_SSH = 'ssh' KEYPAIR_TYPE_X509 = 'x509' LOG = logging.getLogger(__name__) @db_api.api_context_manager.reader def _get_from_db(context, user_id, name=None, limit=None, marker=None): query = context.session.query(api_models.KeyPair).\ filter(api_models.KeyPair.user_id == user_id) if name is not None: db_keypair = query.filter(api_models.KeyPair.name == name).\ first() if not db_keypair: raise exception.KeypairNotFound(user_id=user_id, name=name) return db_keypair marker_row = None if marker is not None: marker_row = context.session.query(api_models.KeyPair).\ filter(api_models.KeyPair.name == marker).\ filter(api_models.KeyPair.user_id == user_id).first() if not marker_row: raise exception.MarkerNotFound(marker=marker) query = sqlalchemyutils.paginate_query( query, api_models.KeyPair, limit, ['name'], marker=marker_row) return query.all() @db_api.api_context_manager.reader def _get_count_from_db(context, user_id): return context.session.query(api_models.KeyPair).\ filter(api_models.KeyPair.user_id == user_id).\ count() @db_api.api_context_manager.writer def _create_in_db(context, values): kp = api_models.KeyPair() kp.update(values) try: kp.save(context.session) except db_exc.DBDuplicateEntry: raise exception.KeyPairExists(key_name=values['name']) return kp @db_api.api_context_manager.writer def _destroy_in_db(context, user_id, name): result = context.session.query(api_models.KeyPair).\ filter_by(user_id=user_id).\ filter_by(name=name).\ delete() if not result: raise exception.KeypairNotFound(user_id=user_id, name=name) # TODO(berrange): Remove NovaObjectDictCompat @base.NovaObjectRegistry.register class KeyPair(base.NovaPersistentObject, base.NovaObject, base.NovaObjectDictCompat): # Version 1.0: Initial version # Version 1.1: String attributes updated to support unicode # Version 1.2: Added keypair type # Version 1.3: Name field is non-null # Version 1.4: Add localonly flag to get_by_name() VERSION = '1.4' fields = { 'id': fields.IntegerField(), 'name': fields.StringField(nullable=False), 'user_id': fields.StringField(nullable=True), 'fingerprint': fields.StringField(nullable=True), 'public_key': fields.StringField(nullable=True), 'type': fields.StringField(nullable=False), } def obj_make_compatible(self, primitive, target_version): super(KeyPair, self).obj_make_compatible(primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 2) and 'type' in primitive: del primitive['type'] @staticmethod def _from_db_object(context, keypair, db_keypair): ignore = {'deleted': False, 'deleted_at': None} for key in keypair.fields: if key in ignore and not hasattr(db_keypair, key): keypair[key] = ignore[key] else: keypair[key] = db_keypair[key] keypair._context = context keypair.obj_reset_changes() return keypair @staticmethod def _get_from_db(context, user_id, name): return _get_from_db(context, user_id, name=name) @staticmethod def _destroy_in_db(context, user_id, name): return _destroy_in_db(context, user_id, name) @staticmethod def _create_in_db(context, values): return _create_in_db(context, values) @base.remotable_classmethod def 
get_by_name(cls, context, user_id, name, localonly=False): db_keypair = None if not localonly: try: db_keypair = cls._get_from_db(context, user_id, name) except exception.KeypairNotFound: pass if db_keypair is None: db_keypair = db.key_pair_get(context, user_id, name) return cls._from_db_object(context, cls(), db_keypair) @base.remotable_classmethod def destroy_by_name(cls, context, user_id, name): try: cls._destroy_in_db(context, user_id, name) except exception.KeypairNotFound: db.key_pair_destroy(context, user_id, name) @base.remotable def create(self): if self.obj_attr_is_set('id'): raise exception.ObjectActionError(action='create', reason='already created') # NOTE(danms): Check to see if it exists in the old DB before # letting them create in the API DB, since we won't get protection # from the UC. try: db.key_pair_get(self._context, self.user_id, self.name) raise exception.KeyPairExists(key_name=self.name) except exception.KeypairNotFound: pass self._create() def _create(self): updates = self.obj_get_changes() db_keypair = self._create_in_db(self._context, updates) self._from_db_object(self._context, self, db_keypair) @base.remotable def destroy(self): try: self._destroy_in_db(self._context, self.user_id, self.name) except exception.KeypairNotFound: db.key_pair_destroy(self._context, self.user_id, self.name) @base.NovaObjectRegistry.register class KeyPairList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version # KeyPair <= version 1.1 # Version 1.1: KeyPair <= version 1.2 # Version 1.2: KeyPair <= version 1.3 # Version 1.3: Add new parameters 'limit' and 'marker' to get_by_user() VERSION = '1.3' fields = { 'objects': fields.ListOfObjectsField('KeyPair'), } @staticmethod def _get_from_db(context, user_id, limit, marker): return _get_from_db(context, user_id, limit=limit, marker=marker) @staticmethod def _get_count_from_db(context, user_id): return _get_count_from_db(context, user_id) @base.remotable_classmethod def get_by_user(cls, context, user_id, limit=None, marker=None): try: api_db_keypairs = cls._get_from_db( context, user_id, limit=limit, marker=marker) # NOTE(pkholkin): If we were asked for a marker and found it in # results from the API DB, we must continue our pagination with # just the limit (if any) to the main DB. marker = None except exception.MarkerNotFound: api_db_keypairs = [] if limit is not None: limit_more = limit - len(api_db_keypairs) else: limit_more = None if limit_more is None or limit_more > 0: main_db_keypairs = db.key_pair_get_all_by_user( context, user_id, limit=limit_more, marker=marker) else: main_db_keypairs = [] return base.obj_make_list(context, cls(context), objects.KeyPair, api_db_keypairs + main_db_keypairs) @base.remotable_classmethod def get_count_by_user(cls, context, user_id): return (cls._get_count_from_db(context, user_id) + db.key_pair_count_by_user(context, user_id)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/migrate_data.py0000664000175000017500000004116600000000000020165 0ustar00zuulzuul00000000000000# Copyright 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from oslo_log import log from oslo_serialization import jsonutils from oslo_utils import versionutils from nova import exception from nova.objects import base as obj_base from nova.objects import fields LOG = log.getLogger(__name__) @obj_base.NovaObjectRegistry.register class VIFMigrateData(obj_base.NovaObject): # Version 1.0: Initial version VERSION = '1.0' # The majority of the fields here represent a port binding on the # **destination** host during a live migration. The vif_type, among # other fields, could be different from the existing binding on the # source host, which is represented by the "source_vif" field. fields = { 'port_id': fields.StringField(), 'vnic_type': fields.StringField(), # TODO(sean-k-mooney): make enum? 'vif_type': fields.StringField(), # vif_details is a dict whose contents are dependent on the vif_type # and can be any number of types for the values, so we just store it # as a serialized dict 'vif_details_json': fields.StringField(), # profile is in the same random dict of terrible boat as vif_details # so it's stored as a serialized json string 'profile_json': fields.StringField(), 'host': fields.StringField(), # The source_vif attribute is a copy of the VIF network model # representation of the port on the source host which can be used # for filling in blanks about the VIF (port) when building a # configuration reference for the destination host. # NOTE(mriedem): This might not be sufficient based on how the # destination host is configured for all vif types. See the note in # the libvirt driver here: https://review.opendev.org/#/c/551370/ # 29/nova/virt/libvirt/driver.py@7036 'source_vif': fields.Field(fields.NetworkVIFModel()), } @property def vif_details(self): return jsonutils.loads(self.vif_details_json) @vif_details.setter def vif_details(self, vif_details_dict): self.vif_details_json = jsonutils.dumps(vif_details_dict) @property def profile(self): return jsonutils.loads(self.profile_json) @profile.setter def profile(self, profile_dict): self.profile_json = jsonutils.dumps(profile_dict) def get_dest_vif(self): """Get a destination VIF representation of this object. This method takes the source_vif and updates it to include the destination host port binding information using the other fields on this object. :return: nova.network.model.VIF object """ if 'source_vif' not in self: raise exception.ObjectActionError( action='get_dest_vif', reason='source_vif is not set') vif = copy.deepcopy(self.source_vif) vif['type'] = self.vif_type vif['vnic_type'] = self.vnic_type vif['profile'] = self.profile vif['details'] = self.vif_details return vif @classmethod def create_skeleton_migrate_vifs(cls, vifs): """Create migrate vifs for live migration. :param vifs: a list of VIFs. :return: list of VIFMigrateData object corresponding to the provided VIFs. 
""" vif_mig_data = [] for vif in vifs: mig_vif = cls(port_id=vif['id'], source_vif=vif) vif_mig_data.append(mig_vif) return vif_mig_data @obj_base.NovaObjectRegistry.register class LibvirtLiveMigrateNUMAInfo(obj_base.NovaObject): # Version 1.0: Initial version VERSION = '1.0' fields = { # NOTE(artom) We need a 1:many cardinality here, so DictOfIntegers with # its 1:1 cardinality cannot work here. cpu_pins can have a single # guest CPU pinned to multiple host CPUs. 'cpu_pins': fields.DictOfSetOfIntegersField(), # NOTE(artom) Currently we never pin a guest cell to more than a single # host cell, so cell_pins could be a DictOfIntegers, but # DictOfSetOfIntegers is more future-proof. 'cell_pins': fields.DictOfSetOfIntegersField(), 'emulator_pins': fields.SetOfIntegersField(), 'sched_vcpus': fields.SetOfIntegersField(), 'sched_priority': fields.IntegerField(), } @obj_base.NovaObjectRegistry.register_if(False) class LiveMigrateData(obj_base.NovaObject): # Version 1.0: Initial version # Version 1.1: Added old_vol_attachment_ids field. # Version 1.2: Added wait_for_vif_plugged # Version 1.3: Added vifs field. VERSION = '1.3' fields = { 'is_volume_backed': fields.BooleanField(), 'migration': fields.ObjectField('Migration'), # old_vol_attachment_ids is a dict used to store the old attachment_ids # for each volume so they can be restored on a migration rollback. The # key is the volume_id, and the value is the attachment_id. # TODO(mdbooth): This field was made redundant by change Ibe9215c0. We # should eventually remove it. 'old_vol_attachment_ids': fields.DictOfStringsField(), # wait_for_vif_plugged is set in pre_live_migration on the destination # compute host based on the [compute]/live_migration_wait_for_vif_plug # config option value; a default value is not set here since the # default for the config option may change in the future 'wait_for_vif_plugged': fields.BooleanField(), 'vifs': fields.ListOfObjectsField('VIFMigrateData'), } @obj_base.NovaObjectRegistry.register class LibvirtLiveMigrateBDMInfo(obj_base.NovaObject): # VERSION 1.0 : Initial version # VERSION 1.1 : Added encryption_secret_uuid for tracking volume secret # uuid created on dest during migration with encrypted vols. VERSION = '1.1' fields = { # FIXME(danms): some of these can be enums? 'serial': fields.StringField(), 'bus': fields.StringField(), 'dev': fields.StringField(), 'type': fields.StringField(), 'format': fields.StringField(nullable=True), 'boot_index': fields.IntegerField(nullable=True), 'connection_info_json': fields.StringField(), 'encryption_secret_uuid': fields.UUIDField(nullable=True), } def obj_make_compatible(self, primitive, target_version): super(LibvirtLiveMigrateBDMInfo, self).obj_make_compatible( primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 1) and 'encryption_secret_uuid' in primitive: del primitive['encryption_secret_uuid'] # NOTE(danms): We don't have a connection_info object right # now, and instead mostly store/pass it as JSON that we're # careful with. When we get a connection_info object in the # future, we should use it here, so make this easy to convert # for later. 
@property def connection_info(self): return jsonutils.loads(self.connection_info_json) @connection_info.setter def connection_info(self, info): self.connection_info_json = jsonutils.dumps(info) def as_disk_info(self): info_dict = { 'dev': self.dev, 'bus': self.bus, 'type': self.type, } if self.obj_attr_is_set('format') and self.format: info_dict['format'] = self.format if self.obj_attr_is_set('boot_index') and self.boot_index is not None: info_dict['boot_index'] = str(self.boot_index) return info_dict @obj_base.NovaObjectRegistry.register class LibvirtLiveMigrateData(LiveMigrateData): # Version 1.0: Initial version # Version 1.1: Added target_connect_addr # Version 1.2: Added 'serial_listen_ports' to allow live migration with # serial console. # Version 1.3: Added 'supported_perf_events' # Version 1.4: Added old_vol_attachment_ids # Version 1.5: Added src_supports_native_luks # Version 1.6: Added wait_for_vif_plugged # Version 1.7: Added dst_wants_file_backed_memory # Version 1.8: Added file_backed_memory_discard # Version 1.9: Inherited vifs from LiveMigrateData # Version 1.10: Added dst_numa_info, src_supports_numa_live_migration, and # dst_supports_numa_live_migration fields VERSION = '1.10' fields = { 'filename': fields.StringField(), # FIXME: image_type should be enum? 'image_type': fields.StringField(), 'block_migration': fields.BooleanField(), 'disk_over_commit': fields.BooleanField(), 'disk_available_mb': fields.IntegerField(nullable=True), 'is_shared_instance_path': fields.BooleanField(), 'is_shared_block_storage': fields.BooleanField(), 'instance_relative_path': fields.StringField(), 'graphics_listen_addr_vnc': fields.IPAddressField(nullable=True), 'graphics_listen_addr_spice': fields.IPAddressField(nullable=True), 'serial_listen_addr': fields.StringField(nullable=True), 'serial_listen_ports': fields.ListOfIntegersField(), 'bdms': fields.ListOfObjectsField('LibvirtLiveMigrateBDMInfo'), 'target_connect_addr': fields.StringField(nullable=True), 'supported_perf_events': fields.ListOfStringsField(), 'src_supports_native_luks': fields.BooleanField(), 'dst_wants_file_backed_memory': fields.BooleanField(), # file_backed_memory_discard is ignored unless # dst_wants_file_backed_memory is set 'file_backed_memory_discard': fields.BooleanField(), # TODO(artom) (src|dst)_supports_numa_live_migration are only used as # flags to indicate that the compute host is new enough to perform a # NUMA-aware live migration. Remove in version 2.0. 
'src_supports_numa_live_migration': fields.BooleanField(), 'dst_supports_numa_live_migration': fields.BooleanField(), 'dst_numa_info': fields.ObjectField('LibvirtLiveMigrateNUMAInfo'), } def obj_make_compatible(self, primitive, target_version): super(LibvirtLiveMigrateData, self).obj_make_compatible( primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if (target_version < (1, 10) and 'src_supports_numa_live_migration' in primitive): del primitive['src_supports_numa_live_migration'] if target_version < (1, 10) and 'dst_numa_info' in primitive: del primitive['dst_numa_info'] if target_version < (1, 9) and 'vifs' in primitive: del primitive['vifs'] if target_version < (1, 8): if 'file_backed_memory_discard' in primitive: del primitive['file_backed_memory_discard'] if target_version < (1, 7): if 'dst_wants_file_backed_memory' in primitive: del primitive['dst_wants_file_backed_memory'] if target_version < (1, 6) and 'wait_for_vif_plugged' in primitive: del primitive['wait_for_vif_plugged'] if target_version < (1, 5): if 'src_supports_native_luks' in primitive: del primitive['src_supports_native_luks'] if target_version < (1, 4): if 'old_vol_attachment_ids' in primitive: del primitive['old_vol_attachment_ids'] if target_version < (1, 3): if 'supported_perf_events' in primitive: del primitive['supported_perf_events'] if target_version < (1, 2): if 'serial_listen_ports' in primitive: del primitive['serial_listen_ports'] if target_version < (1, 1) and 'target_connect_addr' in primitive: del primitive['target_connect_addr'] def is_on_shared_storage(self): return self.is_shared_block_storage or self.is_shared_instance_path @obj_base.NovaObjectRegistry.register class XenapiLiveMigrateData(LiveMigrateData): # Version 1.0: Initial version # Version 1.1: Added vif_uuid_map # Version 1.2: Added old_vol_attachment_ids # Version 1.3: Added wait_for_vif_plugged # Version 1.4: Inherited vifs from LiveMigrateData VERSION = '1.4' fields = { 'block_migration': fields.BooleanField(nullable=True), 'destination_sr_ref': fields.StringField(nullable=True), 'migrate_send_data': fields.DictOfStringsField(nullable=True), 'sr_uuid_map': fields.DictOfStringsField(), 'kernel_file': fields.StringField(), 'ramdisk_file': fields.StringField(), 'vif_uuid_map': fields.DictOfStringsField(), } def obj_make_compatible(self, primitive, target_version): super(XenapiLiveMigrateData, self).obj_make_compatible( primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 4) and 'vifs' in primitive: del primitive['vifs'] if target_version < (1, 3) and 'wait_for_vif_plugged' in primitive: del primitive['wait_for_vif_plugged'] if target_version < (1, 2): if 'old_vol_attachment_ids' in primitive: del primitive['old_vol_attachment_ids'] if target_version < (1, 1): if 'vif_uuid_map' in primitive: del primitive['vif_uuid_map'] @obj_base.NovaObjectRegistry.register class HyperVLiveMigrateData(LiveMigrateData): # Version 1.0: Initial version # Version 1.1: Added is_shared_instance_path # Version 1.2: Added old_vol_attachment_ids # Version 1.3: Added wait_for_vif_plugged # Version 1.4: Inherited vifs from LiveMigrateData VERSION = '1.4' fields = {'is_shared_instance_path': fields.BooleanField()} def obj_make_compatible(self, primitive, target_version): super(HyperVLiveMigrateData, self).obj_make_compatible( primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 4) and 'vifs' 
in primitive: del primitive['vifs'] if target_version < (1, 3) and 'wait_for_vif_plugged' in primitive: del primitive['wait_for_vif_plugged'] if target_version < (1, 2): if 'old_vol_attachment_ids' in primitive: del primitive['old_vol_attachment_ids'] if target_version < (1, 1): if 'is_shared_instance_path' in primitive: del primitive['is_shared_instance_path'] @obj_base.NovaObjectRegistry.register class PowerVMLiveMigrateData(LiveMigrateData): # Version 1.0: Initial version # Version 1.1: Added the Virtual Ethernet Adapter VLAN mappings. # Version 1.2: Added old_vol_attachment_ids # Version 1.3: Added wait_for_vif_plugged # Version 1.4: Inherited vifs from LiveMigrateData VERSION = '1.4' fields = { 'host_mig_data': fields.DictOfNullableStringsField(), 'dest_ip': fields.StringField(), 'dest_user_id': fields.StringField(), 'dest_sys_name': fields.StringField(), 'public_key': fields.StringField(), 'dest_proc_compat': fields.StringField(), 'vol_data': fields.DictOfNullableStringsField(), 'vea_vlan_mappings': fields.DictOfNullableStringsField(), } def obj_make_compatible(self, primitive, target_version): super(PowerVMLiveMigrateData, self).obj_make_compatible( primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 4) and 'vifs' in primitive: del primitive['vifs'] if target_version < (1, 3) and 'wait_for_vif_plugged' in primitive: del primitive['wait_for_vif_plugged'] if target_version < (1, 2): if 'old_vol_attachment_ids' in primitive: del primitive['old_vol_attachment_ids'] if target_version < (1, 1): if 'vea_vlan_mappings' in primitive: del primitive['vea_vlan_mappings'] @obj_base.NovaObjectRegistry.register class VMwareLiveMigrateData(LiveMigrateData): VERSION = '1.0' fields = { 'cluster_name': fields.StringField(nullable=False), 'datastore_regex': fields.StringField(nullable=False), } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/migration.py0000664000175000017500000002564700000000000017543 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
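# NOTE(editor): the sketch below refers to the LibvirtLiveMigrateData class
# in nova/objects/migrate_data.py above; it is illustrative only and not part
# of this module. obj_make_compatible() strips fields that a peer running an
# older object version does not understand (field values invented):
#
#     data = objects.LibvirtLiveMigrateData()
#     prim = {'wait_for_vif_plugged': True,       # added in version 1.6
#             'serial_listen_ports': [10000]}     # added in version 1.2
#     data.obj_make_compatible(prim, '1.5')
#     # 'wait_for_vif_plugged' is dropped (1.5 < 1.6) while
#     # 'serial_listen_ports' is kept (1.5 >= 1.2).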
from oslo_db import exception as db_exc from oslo_utils import uuidutils from oslo_utils import versionutils from nova.db import api as db from nova import exception from nova.i18n import _ from nova import objects from nova.objects import base from nova.objects import fields def determine_migration_type(migration): if migration['old_instance_type_id'] != migration['new_instance_type_id']: return 'resize' else: return 'migration' # TODO(berrange): Remove NovaObjectDictCompat @base.NovaObjectRegistry.register class Migration(base.NovaPersistentObject, base.NovaObject, base.NovaObjectDictCompat): # Version 1.0: Initial version # Version 1.1: String attributes updated to support unicode # Version 1.2: Added migration_type and hidden # Version 1.3: Added get_by_id_and_instance() # Version 1.4: Added migration progress detail # Version 1.5: Added uuid # Version 1.6: Added cross_cell_move and get_by_uuid(). # Version 1.7: Added user_id and project_id VERSION = '1.7' fields = { 'id': fields.IntegerField(), 'uuid': fields.UUIDField(), 'source_compute': fields.StringField(nullable=True), # source hostname 'dest_compute': fields.StringField(nullable=True), # dest hostname 'source_node': fields.StringField(nullable=True), # source nodename 'dest_node': fields.StringField(nullable=True), # dest nodename 'dest_host': fields.StringField(nullable=True), # dest host IP 'old_instance_type_id': fields.IntegerField(nullable=True), 'new_instance_type_id': fields.IntegerField(nullable=True), 'instance_uuid': fields.StringField(nullable=True), 'status': fields.StringField(nullable=True), 'migration_type': fields.EnumField(['migration', 'resize', 'live-migration', 'evacuation'], nullable=False), 'hidden': fields.BooleanField(nullable=False, default=False), 'memory_total': fields.IntegerField(nullable=True), 'memory_processed': fields.IntegerField(nullable=True), 'memory_remaining': fields.IntegerField(nullable=True), 'disk_total': fields.IntegerField(nullable=True), 'disk_processed': fields.IntegerField(nullable=True), 'disk_remaining': fields.IntegerField(nullable=True), 'cross_cell_move': fields.BooleanField(default=False), # request context user id 'user_id': fields.StringField(nullable=True), # request context project id 'project_id': fields.StringField(nullable=True), } @staticmethod def _from_db_object(context, migration, db_migration): for key in migration.fields: value = db_migration[key] if key == 'migration_type' and value is None: value = determine_migration_type(db_migration) elif key == 'uuid' and value is None: continue migration[key] = value migration._context = context migration.obj_reset_changes() migration._ensure_uuid() return migration def obj_make_compatible(self, primitive, target_version): super(Migration, self).obj_make_compatible(primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 2): if 'migration_type' in primitive: del primitive['migration_type'] del primitive['hidden'] if target_version < (1, 4): if 'memory_total' in primitive: del primitive['memory_total'] del primitive['memory_processed'] del primitive['memory_remaining'] del primitive['disk_total'] del primitive['disk_processed'] del primitive['disk_remaining'] if target_version < (1, 5): if 'uuid' in primitive: del primitive['uuid'] if target_version < (1, 6) and 'cross_cell_move' in primitive: del primitive['cross_cell_move'] if target_version < (1, 7): if 'user_id' in primitive: del primitive['user_id'] if 'project_id' in primitive: del 
primitive['project_id'] def obj_load_attr(self, attrname): if attrname == 'migration_type': # NOTE(danms): The only reason we'd need to load this is if # some older node sent us one. So, guess the type. self.migration_type = determine_migration_type(self) elif attrname in ['hidden', 'cross_cell_move']: self.obj_set_defaults(attrname) else: super(Migration, self).obj_load_attr(attrname) def _ensure_uuid(self): if 'uuid' in self: return self.uuid = uuidutils.generate_uuid() try: self.save() except db_exc.DBDuplicateEntry: # NOTE(danms) We raced to generate a uuid for this, # so fetch the winner and use that uuid fresh = self.__class__.get_by_id(self.context, self.id) self.uuid = fresh.uuid @base.remotable_classmethod def get_by_uuid(cls, context, migration_uuid): db_migration = db.migration_get_by_uuid(context, migration_uuid) return cls._from_db_object(context, cls(), db_migration) @base.remotable_classmethod def get_by_id(cls, context, migration_id): db_migration = db.migration_get(context, migration_id) return cls._from_db_object(context, cls(), db_migration) @base.remotable_classmethod def get_by_id_and_instance(cls, context, migration_id, instance_uuid): db_migration = db.migration_get_by_id_and_instance( context, migration_id, instance_uuid) return cls._from_db_object(context, cls(), db_migration) @base.remotable_classmethod def get_by_instance_and_status(cls, context, instance_uuid, status): db_migration = db.migration_get_by_instance_and_status( context, instance_uuid, status) return cls._from_db_object(context, cls(), db_migration) @base.remotable def create(self): if self.obj_attr_is_set('id'): raise exception.ObjectActionError(action='create', reason='already created') if 'uuid' not in self: self.uuid = uuidutils.generate_uuid() # Record who is initiating the migration which is # not necessarily the owner of the instance. if 'user_id' not in self: self.user_id = self._context.user_id if 'project_id' not in self: self.project_id = self._context.project_id updates = self.obj_get_changes() if 'migration_type' not in updates: raise exception.ObjectActionError( action="create", reason=_("cannot create a Migration object without a " "migration_type set")) db_migration = db.migration_create(self._context, updates) self._from_db_object(self._context, self, db_migration) @base.remotable def save(self): updates = self.obj_get_changes() updates.pop('id', None) db_migration = db.migration_update(self._context, self.id, updates) self._from_db_object(self._context, self, db_migration) self.obj_reset_changes() @property def instance(self): if not hasattr(self, '_cached_instance'): self._cached_instance = objects.Instance.get_by_uuid( self._context, self.instance_uuid, expected_attrs=['migration_context', 'flavor']) return self._cached_instance @instance.setter def instance(self, instance): self._cached_instance = instance def is_same_host(self): return self.source_compute == self.dest_compute @base.NovaObjectRegistry.register class MigrationList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version # Migration <= 1.1 # Version 1.1: Added use_slave to get_unconfirmed_by_dest_compute # Version 1.2: Migration version 1.2 # Version 1.3: Added a new function to get in progress migrations # for an instance. # Version 1.4: Added sort_keys, sort_dirs, limit, marker kwargs to # get_by_filters for migrations pagination support. 
VERSION = '1.4' fields = { 'objects': fields.ListOfObjectsField('Migration'), } @staticmethod @db.select_db_reader_mode def _db_migration_get_unconfirmed_by_dest_compute( context, confirm_window, dest_compute, use_slave=False): return db.migration_get_unconfirmed_by_dest_compute( context, confirm_window, dest_compute) @base.remotable_classmethod def get_unconfirmed_by_dest_compute(cls, context, confirm_window, dest_compute, use_slave=False): db_migrations = cls._db_migration_get_unconfirmed_by_dest_compute( context, confirm_window, dest_compute, use_slave=use_slave) return base.obj_make_list(context, cls(context), objects.Migration, db_migrations) @base.remotable_classmethod def get_in_progress_by_host_and_node(cls, context, host, node): db_migrations = db.migration_get_in_progress_by_host_and_node( context, host, node) return base.obj_make_list(context, cls(context), objects.Migration, db_migrations) @base.remotable_classmethod def get_by_filters(cls, context, filters, sort_keys=None, sort_dirs=None, limit=None, marker=None): db_migrations = db.migration_get_all_by_filters( context, filters, sort_keys=sort_keys, sort_dirs=sort_dirs, limit=limit, marker=marker) return base.obj_make_list(context, cls(context), objects.Migration, db_migrations) @base.remotable_classmethod def get_in_progress_by_instance(cls, context, instance_uuid, migration_type=None): db_migrations = db.migration_get_in_progress_by_instance( context, instance_uuid, migration_type) return base.obj_make_list(context, cls(context), objects.Migration, db_migrations) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/migration_context.py0000664000175000017500000001342500000000000021276 0ustar00zuulzuul00000000000000# Copyright 2015 Red Hat Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import versionutils from nova.db import api as db from nova import exception from nova import objects from nova.objects import base from nova.objects import fields LOG = logging.getLogger(__name__) @base.NovaObjectRegistry.register class MigrationContext(base.NovaPersistentObject, base.NovaObject): """Data representing additional resources related to a migration. Some resources cannot be calculated from knowing the flavor alone for the purpose of resources tracking, but need to be persisted at the time the claim was made, for subsequent resource tracking runs to be consistent. MigrationContext objects are created when the claim is done and are there to facilitate resource tracking and final provisioning of the instance on the destination host. 
""" # Version 1.0: Initial version # Version 1.1: Add old/new pci_devices and pci_requests # Version 1.2: Add old/new resources VERSION = '1.2' fields = { 'instance_uuid': fields.UUIDField(), 'migration_id': fields.IntegerField(), 'new_numa_topology': fields.ObjectField('InstanceNUMATopology', nullable=True), 'old_numa_topology': fields.ObjectField('InstanceNUMATopology', nullable=True), 'new_pci_devices': fields.ObjectField('PciDeviceList', nullable=True), 'old_pci_devices': fields.ObjectField('PciDeviceList', nullable=True), 'new_pci_requests': fields.ObjectField('InstancePCIRequests', nullable=True), 'old_pci_requests': fields.ObjectField('InstancePCIRequests', nullable=True), 'new_resources': fields.ObjectField('ResourceList', nullable=True), 'old_resources': fields.ObjectField('ResourceList', nullable=True), } @classmethod def obj_make_compatible(cls, primitive, target_version): target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 2): primitive.pop('old_resources', None) primitive.pop('new_resources', None) if target_version < (1, 1): primitive.pop('old_pci_devices', None) primitive.pop('new_pci_devices', None) primitive.pop('old_pci_requests', None) primitive.pop('new_pci_requests', None) @classmethod def obj_from_db_obj(cls, db_obj): primitive = jsonutils.loads(db_obj) return cls.obj_from_primitive(primitive) @base.remotable_classmethod def get_by_instance_uuid(cls, context, instance_uuid): db_extra = db.instance_extra_get_by_instance_uuid( context, instance_uuid, columns=['migration_context']) if not db_extra: raise exception.MigrationContextNotFound( instance_uuid=instance_uuid) if db_extra['migration_context'] is None: return None return cls.obj_from_db_obj(db_extra['migration_context']) def get_pci_mapping_for_migration(self, revert): """Get the mapping between the old PCI devices and the new PCI devices that have been allocated during this migration. The correlation is based on PCI request ID which is unique per PCI devices for SR-IOV ports. :param revert: If True, return a reverse mapping i.e mapping between new PCI devices and old PCI devices. :returns: dictionary of PCI mapping. if revert==False: {'': } if revert==True: {'': } """ step = -1 if revert else 1 current_pci_devs, updated_pci_devs = (self.old_pci_devices, self.new_pci_devices)[::step] if current_pci_devs and updated_pci_devs: LOG.debug("Determining PCI devices mapping using migration " "context: current_pci_devs: %(cur)s, " "updated_pci_devs: %(upd)s", {'cur': [dev for dev in current_pci_devs], 'upd': [dev for dev in updated_pci_devs]}) return {curr_dev.address: upd_dev for curr_dev in current_pci_devs for upd_dev in updated_pci_devs if curr_dev.request_id == upd_dev.request_id} return {} def is_cross_cell_move(self): """Helper to determine if this is a context for a cross-cell move. Based on the ``migration_id`` in this context, gets the Migration object and returns its ``cross_cell_move`` value. :return: True if this is a cross cell move migration, False otherwise. """ migration = objects.Migration.get_by_id( self._context, self.migration_id) return migration.cross_cell_move ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/objects/monitor_metric.py0000664000175000017500000001116400000000000020571 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils from oslo_utils import versionutils from nova.objects import base from nova.objects import fields from nova import utils # NOTE(jwcroppe): Used to determine which fields whose value we need to adjust # (read: divide by 100.0) before sending information to the RPC notifier since # these values were expected to be within the range [0, 1]. FIELDS_REQUIRING_CONVERSION = [fields.MonitorMetricType.CPU_USER_PERCENT, fields.MonitorMetricType.CPU_KERNEL_PERCENT, fields.MonitorMetricType.CPU_IDLE_PERCENT, fields.MonitorMetricType.CPU_IOWAIT_PERCENT, fields.MonitorMetricType.CPU_PERCENT] @base.NovaObjectRegistry.register class MonitorMetric(base.NovaObject): # Version 1.0: Initial version # Version 1.1: Added NUMA support VERSION = '1.1' fields = { 'name': fields.MonitorMetricTypeField(nullable=False), 'value': fields.IntegerField(nullable=False), 'numa_membw_values': fields.DictOfIntegersField(nullable=True), 'timestamp': fields.DateTimeField(nullable=False), # This will be the stevedore extension full class name # for the plugin from which the metric originates. 'source': fields.StringField(nullable=False), } def obj_make_compatible(self, primitive, target_version): super(MonitorMetric, self).obj_make_compatible(primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 1) and 'numa_membw_values' in primitive: del primitive['numa_membw_values'] # NOTE(jaypipes): This method exists to convert the object to the # format expected by the RPC notifier for metrics events. def to_dict(self): dict_to_return = { 'name': self.name, # NOTE(jaypipes): This is what jsonutils.dumps() does to # datetime.datetime objects, which is what timestamp is in # this object as well as the original simple dict metrics 'timestamp': utils.strtime(self.timestamp), 'source': self.source, } if self.obj_attr_is_set('value'): if self.name in FIELDS_REQUIRING_CONVERSION: dict_to_return['value'] = self.value / 100.0 else: dict_to_return['value'] = self.value elif self.obj_attr_is_set('numa_membw_values'): dict_to_return['numa_membw_values'] = self.numa_membw_values return dict_to_return @base.NovaObjectRegistry.register class MonitorMetricList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version # Version 1.1: MonitorMetric version 1.1 VERSION = '1.1' fields = { 'objects': fields.ListOfObjectsField('MonitorMetric'), } @classmethod def from_json(cls, metrics): """Converts a legacy json object into a list of MonitorMetric objs and finally returns of MonitorMetricList :param metrics: a string of json serialized objects :returns: a MonitorMetricList Object. """ metrics = jsonutils.loads(metrics) if metrics else [] # NOTE(suro-patz): While instantiating the MonitorMetric() from # JSON-ified string, we need to re-convert the # normalized metrics to avoid truncation to 0 by # typecasting into an integer. 
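        # NOTE(editor): illustrative example, not upstream code: a legacy
        # JSON metric with a percent-type name and 'value': 0.25 stored the
        # value normalized to [0, 1]; multiplying by 100 below yields 25,
        # which survives the IntegerField cast, and to_dict() divides by
        # 100.0 again when building the notification payload.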
metric_list = [] for metric in metrics: if ('value' in metric and metric['name'] in FIELDS_REQUIRING_CONVERSION): metric['value'] = metric['value'] * 100 metric_list.append(MonitorMetric(**metric)) return MonitorMetricList(objects=metric_list) # NOTE(jaypipes): This method exists to convert the object to the # format expected by the RPC notifier for metrics events. def to_list(self): return [m.to_dict() for m in self.objects] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/objects/network_metadata.py0000664000175000017500000000414700000000000021073 0ustar00zuulzuul00000000000000# Copyright 2018 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.objects import base from nova.objects import fields @base.NovaObjectRegistry.register class NetworkMetadata(base.NovaObject): """Hold aggregate metadata for a collection of networks. This object holds aggregate information for a collection of neutron networks. There are two types of network collections we care about and use this for: the collection of networks configured or requested for a guest and the collection of networks available to a host. We want this information to allow us to map a given neutron network to the logical NICs it does or will use (or, rather, to identify the NUMA affinity of those NICs and therefore the networks). Given that there are potentially tens of thousands of neutron networks accessible from a given host and tens or hundreds of networks configured for an instance, we need a way to group networks by some common attribute that would identify the logical NIC it would use. For L2 networks, this is the physnet attribute (e.g. ``provider:physical_network=provider1``), which is an arbitrary string used to distinguish between multiple physical (in the sense of physical wiring) networks. For L3 (tunneled) networks, this is merely the fact that they are L3 networks (e.g. ``provider:network_type=vxlan``) because, in neutron, *all* L3 networks must use the same logical NIC. """ # Version 1.0: Initial version VERSION = '1.0' fields = { 'physnets': fields.SetOfStringsField(), 'tunneled': fields.BooleanField(), } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/network_request.py0000664000175000017500000000720700000000000021003 0ustar00zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
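# NOTE(editor): the sketch below refers to the NetworkMetadata object in
# nova/objects/network_metadata.py above; it is illustrative only and the
# physnet names are invented:
#
#     host_meta = objects.NetworkMetadata(physnets={'provider1', 'provider2'},
#                                         tunneled=True)
#     instance_meta = objects.NetworkMetadata(physnets={'provider1'},
#                                             tunneled=False)
#     # NUMA-affinity code can then compare the two, e.g. check whether
#     # instance_meta.physnets <= host_meta.physnets and whether tunneled
#     # networks are requested at all.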
from oslo_utils import versionutils from nova.objects import base as obj_base from nova.objects import fields # These are special case enums for the auto-allocate scenario. 'none' means # do not allocate a network on server create. 'auto' means auto-allocate a # network (if possible) if none are already available to the project. Other # values for network_id can be a specific network id, or None, where None # is the case before auto-allocation was supported in the compute API. NETWORK_ID_NONE = 'none' NETWORK_ID_AUTO = 'auto' @obj_base.NovaObjectRegistry.register class NetworkRequest(obj_base.NovaObject): # Version 1.0: Initial version # Version 1.1: Added pci_request_id # Version 1.2: Added tag field VERSION = '1.2' fields = { 'network_id': fields.StringField(nullable=True), 'address': fields.IPAddressField(nullable=True), 'port_id': fields.UUIDField(nullable=True), 'pci_request_id': fields.UUIDField(nullable=True), 'tag': fields.StringField(nullable=True), } def obj_make_compatible(self, primitive, target_version): target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 2) and 'tag' in primitive: del primitive['tag'] def obj_load_attr(self, attr): setattr(self, attr, None) def to_tuple(self): address = str(self.address) if self.address is not None else None return self.network_id, address, self.port_id, self.pci_request_id @classmethod def from_tuple(cls, net_tuple): network_id, address, port_id, pci_request_id = net_tuple return cls(network_id=network_id, address=address, port_id=port_id, pci_request_id=pci_request_id) @property def auto_allocate(self): return self.network_id == NETWORK_ID_AUTO @property def no_allocate(self): return self.network_id == NETWORK_ID_NONE @obj_base.NovaObjectRegistry.register class NetworkRequestList(obj_base.ObjectListBase, obj_base.NovaObject): fields = { 'objects': fields.ListOfObjectsField('NetworkRequest'), } VERSION = '1.1' def as_tuples(self): return [x.to_tuple() for x in self.objects] @classmethod def from_tuples(cls, net_tuples): """Convenience method for converting a list of network request tuples into a NetworkRequestList object. :param net_tuples: list of network request tuples :returns: NetworkRequestList object """ requested_networks = cls(objects=[NetworkRequest.from_tuple(t) for t in net_tuples]) return requested_networks @property def is_single_unspecified(self): return ((len(self.objects) == 1) and (self.objects[0].to_tuple() == NetworkRequest().to_tuple())) @property def auto_allocate(self): return len(self.objects) == 1 and self.objects[0].auto_allocate @property def no_allocate(self): return len(self.objects) == 1 and self.objects[0].no_allocate ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/numa.py0000664000175000017500000002346500000000000016506 0ustar00zuulzuul00000000000000# Copyright 2014 Red Hat Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
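# NOTE(editor): the sketch below refers to the NetworkRequest object in
# nova/objects/network_request.py above; it is illustrative only:
#
#     req = objects.NetworkRequest(network_id='auto', address=None,
#                                  port_id=None, pci_request_id=None)
#     req.auto_allocate      # True
#     req.to_tuple()         # ('auto', None, None, None)
#     objects.NetworkRequest.from_tuple(
#         ('none', None, None, None)).no_allocate  # True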
from oslo_serialization import jsonutils from oslo_utils import versionutils from nova import exception from nova.objects import base from nova.objects import fields as obj_fields from nova.virt import hardware @base.NovaObjectRegistry.register class NUMACell(base.NovaObject): # Version 1.0: Initial version # Version 1.1: Added pinned_cpus and siblings fields # Version 1.2: Added mempages field # Version 1.3: Add network_metadata field # Version 1.4: Add pcpuset VERSION = '1.4' fields = { 'id': obj_fields.IntegerField(read_only=True), 'cpuset': obj_fields.SetOfIntegersField(), 'pcpuset': obj_fields.SetOfIntegersField(), 'memory': obj_fields.IntegerField(), 'cpu_usage': obj_fields.IntegerField(default=0), 'memory_usage': obj_fields.IntegerField(default=0), 'pinned_cpus': obj_fields.SetOfIntegersField(), 'siblings': obj_fields.ListOfSetsOfIntegersField(), 'mempages': obj_fields.ListOfObjectsField('NUMAPagesTopology'), 'network_metadata': obj_fields.ObjectField('NetworkMetadata'), } def obj_make_compatible(self, primitive, target_version): super(NUMACell, self).obj_make_compatible(primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 4): primitive.pop('pcpuset', None) if target_version < (1, 3): primitive.pop('network_metadata', None) def __eq__(self, other): return base.all_things_equal(self, other) def __ne__(self, other): return not (self == other) @property def free_pcpus(self): """Return available dedicated CPUs.""" return self.pcpuset - self.pinned_cpus or set() @property def free_siblings(self): """Return available dedicated CPUs in their sibling set form.""" return [sibling_set & self.free_pcpus for sibling_set in self.siblings] @property def avail_pcpus(self): """Return number of available dedicated CPUs.""" return len(self.free_pcpus) @property def avail_memory(self): return self.memory - self.memory_usage @property def has_threads(self): """Check if SMT threads, a.k.a. HyperThreads, are present.""" return any(len(sibling_set) > 1 for sibling_set in self.siblings) def pin_cpus(self, cpus): if cpus - self.pcpuset: raise exception.CPUPinningUnknown(requested=list(cpus), available=list(self.pcpuset)) if self.pinned_cpus & cpus: available = list(self.pcpuset - self.pinned_cpus) raise exception.CPUPinningInvalid(requested=list(cpus), available=available) self.pinned_cpus |= cpus def unpin_cpus(self, cpus): if cpus - self.pcpuset: raise exception.CPUUnpinningUnknown(requested=list(cpus), available=list(self.pcpuset)) if (self.pinned_cpus & cpus) != cpus: raise exception.CPUUnpinningInvalid(requested=list(cpus), available=list( self.pinned_cpus)) self.pinned_cpus -= cpus def pin_cpus_with_siblings(self, cpus): pin_siblings = set() for sib in self.siblings: if cpus & sib: pin_siblings.update(sib) self.pin_cpus(pin_siblings) def unpin_cpus_with_siblings(self, cpus): pin_siblings = set() for sib in self.siblings: if cpus & sib: pin_siblings.update(sib) self.unpin_cpus(pin_siblings) @classmethod def _from_dict(cls, data_dict): cpuset = hardware.parse_cpu_spec( data_dict.get('cpus', '')) cpu_usage = data_dict.get('cpu_usage', 0) memory = data_dict.get('mem', {}).get('total', 0) memory_usage = data_dict.get('mem', {}).get('used', 0) cell_id = data_dict.get('id') return cls(id=cell_id, cpuset=cpuset, memory=memory, cpu_usage=cpu_usage, memory_usage=memory_usage, mempages=[], pinned_cpus=set([]), siblings=[]) def can_fit_pagesize(self, pagesize, memory, use_free=True): """Returns whether memory can fit into a given pagesize. 
:param pagesize: a page size in KibB :param memory: a memory size asked to fit in KiB :param use_free: if true, assess based on free memory rather than total memory. This means overcommit is not allowed, which should be the case for hugepages since these are memlocked by the kernel and can't be swapped out. :returns: whether memory can fit in hugepages :raises: MemoryPageSizeNotSupported if page size not supported """ for pages in self.mempages: avail_kb = pages.free_kb if use_free else pages.total_kb if pages.size_kb == pagesize: return memory <= avail_kb and (memory % pages.size_kb) == 0 raise exception.MemoryPageSizeNotSupported(pagesize=pagesize) @base.NovaObjectRegistry.register class NUMAPagesTopology(base.NovaObject): # Version 1.0: Initial version # Version 1.1: Adds reserved field VERSION = '1.1' fields = { 'size_kb': obj_fields.IntegerField(), 'total': obj_fields.IntegerField(), 'used': obj_fields.IntegerField(default=0), 'reserved': obj_fields.IntegerField(default=0), } def obj_make_compatible(self, primitive, target_version): super(NUMAPagesTopology, self).obj_make_compatible(primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 1): primitive.pop('reserved', None) def __eq__(self, other): return base.all_things_equal(self, other) def __ne__(self, other): return not (self == other) @property def free(self): """Returns the number of avail pages.""" if not self.obj_attr_is_set('reserved'): # In case where an old compute node is sharing resource to # an updated node we must ensure that this property is defined. self.reserved = 0 return self.total - self.used - self.reserved @property def free_kb(self): """Returns the avail memory size in KiB.""" return self.free * self.size_kb @property def total_kb(self): """Returns the total memory size in KiB.""" return self.total * self.size_kb @base.NovaObjectRegistry.register class NUMATopology(base.NovaObject): # Version 1.0: Initial version # Version 1.1: Update NUMACell to 1.1 # Version 1.2: Update NUMACell to 1.2 VERSION = '1.2' fields = { 'cells': obj_fields.ListOfObjectsField('NUMACell'), } def __eq__(self, other): return base.all_things_equal(self, other) def __ne__(self, other): return not (self == other) @property def has_threads(self): """Check if any cell use SMT threads (a.k.a. Hyperthreads)""" return any(cell.has_threads for cell in self.cells) @classmethod def obj_from_primitive(cls, primitive, context=None): if 'nova_object.name' in primitive: obj_topology = super(NUMATopology, cls).obj_from_primitive( primitive, context=context) else: # NOTE(sahid): This compatibility code needs to stay until we can # guarantee that there are no cases of the old format stored in # the database (or forever, if we can never guarantee that). obj_topology = NUMATopology._from_dict(primitive) return obj_topology def _to_json(self): return jsonutils.dumps(self.obj_to_primitive()) @classmethod def obj_from_db_obj(cls, db_obj): """Convert serialized representation to object. Deserialize instances of this object that have been stored as JSON blobs in the database. 
""" return cls.obj_from_primitive(jsonutils.loads(db_obj)) def __len__(self): """Defined so that boolean testing works the same as for lists.""" return len(self.cells) @classmethod def _from_dict(cls, data_dict): return cls(cells=[ NUMACell._from_dict(cell_dict) for cell_dict in data_dict.get('cells', [])]) @base.NovaObjectRegistry.register class NUMATopologyLimits(base.NovaObject): # Version 1.0: Initial version # Version 1.1: Add network_metadata field VERSION = '1.1' fields = { 'cpu_allocation_ratio': obj_fields.FloatField(), 'ram_allocation_ratio': obj_fields.FloatField(), 'network_metadata': obj_fields.ObjectField('NetworkMetadata'), } def obj_make_compatible(self, primitive, target_version): super(NUMATopologyLimits, self).obj_make_compatible(primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 1): primitive.pop('network_metadata', None) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/pci_device.py0000664000175000017500000005173500000000000017641 0ustar00zuulzuul00000000000000# Copyright 2013 Intel Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import uuidutils from oslo_utils import versionutils import six from nova.db import api as db from nova import exception from nova import objects from nova.objects import base from nova.objects import fields LOG = logging.getLogger(__name__) def compare_pci_device_attributes(obj_a, obj_b): if not isinstance(obj_b, PciDevice): return False pci_ignore_fields = base.NovaPersistentObject.fields.keys() for name in obj_a.obj_fields: if name in pci_ignore_fields: continue is_set_a = obj_a.obj_attr_is_set(name) is_set_b = obj_b.obj_attr_is_set(name) if is_set_a != is_set_b: return False if is_set_a: if getattr(obj_a, name) != getattr(obj_b, name): return False return True @base.NovaObjectRegistry.register class PciDevice(base.NovaPersistentObject, base.NovaObject): """Object to represent a PCI device on a compute node. PCI devices are managed by the compute resource tracker, which discovers the devices from the hardware platform, claims, allocates and frees devices for instances. The PCI device information is permanently maintained in a database. This makes it convenient to get PCI device information, like physical function for a VF device, adjacent switch IP address for a NIC, hypervisor identification for a PCI device, etc. It also provides a convenient way to check device allocation information for administrator purposes. A device can be in available/claimed/allocated/deleted/removed state. A device is available when it is discovered.. A device is claimed prior to being allocated to an instance. Normally the transition from claimed to allocated is quick. 
However, during a resize operation the transition can take longer, because devices are claimed in prep_resize and allocated in finish_resize. A device becomes removed when hot removed from a node (i.e. not found in the next auto-discover) but not yet synced with the DB. A removed device should not be allocated to any instance, and once deleted from the DB, the device object is changed to deleted state and no longer synced with the DB. Filed notes:: | 'dev_id': | Hypervisor's identification for the device, the string format | is hypervisor specific | 'extra_info': | Device-specific properties like PF address, switch ip address etc. """ # Version 1.0: Initial version # Version 1.1: String attributes updated to support unicode # Version 1.2: added request_id field # Version 1.3: Added field to represent PCI device NUMA node # Version 1.4: Added parent_addr field # Version 1.5: Added 2 new device statuses: UNCLAIMABLE and UNAVAILABLE # Version 1.6: Added uuid field VERSION = '1.6' fields = { 'id': fields.IntegerField(), 'uuid': fields.UUIDField(), # Note(yjiang5): the compute_node_id may be None because the pci # device objects are created before the compute node is created in DB 'compute_node_id': fields.IntegerField(nullable=True), 'address': fields.StringField(), 'vendor_id': fields.StringField(), 'product_id': fields.StringField(), 'dev_type': fields.PciDeviceTypeField(), 'status': fields.PciDeviceStatusField(), 'dev_id': fields.StringField(nullable=True), 'label': fields.StringField(nullable=True), 'instance_uuid': fields.StringField(nullable=True), 'request_id': fields.StringField(nullable=True), 'extra_info': fields.DictOfStringsField(default={}), 'numa_node': fields.IntegerField(nullable=True), 'parent_addr': fields.StringField(nullable=True), } def obj_make_compatible(self, primitive, target_version): target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 2) and 'request_id' in primitive: del primitive['request_id'] if target_version < (1, 4) and 'parent_addr' in primitive: if primitive['parent_addr'] is not None: extra_info = primitive.get('extra_info', {}) extra_info['phys_function'] = primitive['parent_addr'] del primitive['parent_addr'] if target_version < (1, 5) and 'parent_addr' in primitive: added_statuses = (fields.PciDeviceStatus.UNCLAIMABLE, fields.PciDeviceStatus.UNAVAILABLE) status = primitive['status'] if status in added_statuses: raise exception.ObjectActionError( action='obj_make_compatible', reason='status=%s not supported in version %s' % ( status, target_version)) if target_version < (1, 6) and 'uuid' in primitive: del primitive['uuid'] def update_device(self, dev_dict): """Sync the content from device dictionary to device object. The resource tracker updates the available devices periodically. To avoid meaningless syncs with the database, we update the device object only if a value changed. """ # Note(yjiang5): status/instance_uuid should only be updated by # functions like claim/allocate etc. The id is allocated by # database. The extra_info is created by the object. 
no_changes = ('status', 'instance_uuid', 'id', 'extra_info') for key in no_changes: dev_dict.pop(key, None) # NOTE(ndipanov): This needs to be set as it's accessed when matching dev_dict.setdefault('parent_addr') for k, v in dev_dict.items(): if k in self.fields.keys(): setattr(self, k, v) else: # NOTE(yjiang5): extra_info.update does not update # obj_what_changed, set it explicitly # NOTE(ralonsoh): list of parameters currently added to # "extra_info" dict: # - "capabilities": dict of (strings/list of strings) extra_info = self.extra_info data = (v if isinstance(v, six.string_types) else jsonutils.dumps(v)) extra_info.update({k: data}) self.extra_info = extra_info def __init__(self, *args, **kwargs): super(PciDevice, self).__init__(*args, **kwargs) # NOTE(ndipanov): These are required to build an in-memory device tree # but don't need to be proper fields (and can't easily be as they would # hold circular references) self.parent_device = None self.child_devices = [] def obj_load_attr(self, attr): if attr in ['extra_info']: # NOTE(danms): extra_info used to be defaulted during init, # so make sure any bare instantiations of this object can # rely on the expectation that referencing that field will # not fail. self.obj_set_defaults(attr) else: super(PciDevice, self).obj_load_attr(attr) def __eq__(self, other): return compare_pci_device_attributes(self, other) def __ne__(self, other): return not (self == other) @staticmethod def _from_db_object(context, pci_device, db_dev): for key in pci_device.fields: if key == 'uuid' and db_dev['uuid'] is None: # Older records might not have a uuid field set in the # database so we need to skip those here and auto-generate # a uuid later below. continue elif key != 'extra_info': setattr(pci_device, key, db_dev[key]) else: extra_info = db_dev.get("extra_info") pci_device.extra_info = jsonutils.loads(extra_info) pci_device._context = context pci_device.obj_reset_changes() # TODO(jaypipes): Remove in 2.0 version of object. This does an inline # migration to populate the uuid field. A similar inline migration is # performed in the save() method. if db_dev['uuid'] is None: pci_device.uuid = uuidutils.generate_uuid() pci_device.save() return pci_device @base.remotable_classmethod def get_by_dev_addr(cls, context, compute_node_id, dev_addr): db_dev = db.pci_device_get_by_addr( context, compute_node_id, dev_addr) return cls._from_db_object(context, cls(), db_dev) @base.remotable_classmethod def get_by_dev_id(cls, context, id): db_dev = db.pci_device_get_by_id(context, id) return cls._from_db_object(context, cls(), db_dev) @classmethod def create(cls, context, dev_dict): """Create a PCI device based on hypervisor information. As the device object is just created and is not synced with db yet thus we should not reset changes here for fields from dict. """ pci_device = cls() # NOTE(danms): extra_info used to always be defaulted during init, # so make sure we replicate that behavior outside of init here # for compatibility reasons. pci_device.obj_set_defaults('extra_info') pci_device.update_device(dev_dict) pci_device.status = fields.PciDeviceStatus.AVAILABLE pci_device.uuid = uuidutils.generate_uuid() pci_device._context = context return pci_device @base.remotable def save(self): if self.status == fields.PciDeviceStatus.REMOVED: self.status = fields.PciDeviceStatus.DELETED db.pci_device_destroy(self._context, self.compute_node_id, self.address) elif self.status != fields.PciDeviceStatus.DELETED: # TODO(jaypipes): Remove in 2.0 version of object. 
This does an # inline migration to populate the uuid field. A similar migration # is done in the _from_db_object() method to migrate objects as # they are read from the DB. if 'uuid' not in self: self.uuid = uuidutils.generate_uuid() updates = self.obj_get_changes() if 'extra_info' in updates: updates['extra_info'] = jsonutils.dumps(updates['extra_info']) if updates: db_pci = db.pci_device_update(self._context, self.compute_node_id, self.address, updates) self._from_db_object(self._context, self, db_pci) @staticmethod def _bulk_update_status(dev_list, status): for dev in dev_list: dev.status = status def claim(self, instance_uuid): if self.status != fields.PciDeviceStatus.AVAILABLE: raise exception.PciDeviceInvalidStatus( compute_node_id=self.compute_node_id, address=self.address, status=self.status, hopestatus=[fields.PciDeviceStatus.AVAILABLE]) if self.dev_type == fields.PciDeviceType.SRIOV_PF: # Update PF status to CLAIMED if all of it dependants are free # and set their status to UNCLAIMABLE vfs_list = self.child_devices if not all([vf.is_available() for vf in vfs_list]): raise exception.PciDeviceVFInvalidStatus( compute_node_id=self.compute_node_id, address=self.address) self._bulk_update_status(vfs_list, fields.PciDeviceStatus.UNCLAIMABLE) elif self.dev_type == fields.PciDeviceType.SRIOV_VF: # Update VF status to CLAIMED if it's parent has not been # previously allocated or claimed # When claiming/allocating a VF, it's parent PF becomes # unclaimable/unavailable. Therefore, it is expected to find the # parent PF in an unclaimable/unavailable state for any following # claims to a sibling VF parent_ok_statuses = (fields.PciDeviceStatus.AVAILABLE, fields.PciDeviceStatus.UNCLAIMABLE, fields.PciDeviceStatus.UNAVAILABLE) parent = self.parent_device if parent: if parent.status not in parent_ok_statuses: raise exception.PciDevicePFInvalidStatus( compute_node_id=self.compute_node_id, address=self.parent_addr, status=self.status, vf_address=self.address, hopestatus=parent_ok_statuses) # Set PF status if parent.status == fields.PciDeviceStatus.AVAILABLE: parent.status = fields.PciDeviceStatus.UNCLAIMABLE else: LOG.debug('Physical function addr: %(pf_addr)s parent of ' 'VF addr: %(vf_addr)s was not found', {'pf_addr': self.parent_addr, 'vf_addr': self.address}) self.status = fields.PciDeviceStatus.CLAIMED self.instance_uuid = instance_uuid def allocate(self, instance): ok_statuses = (fields.PciDeviceStatus.AVAILABLE, fields.PciDeviceStatus.CLAIMED) parent_ok_statuses = (fields.PciDeviceStatus.AVAILABLE, fields.PciDeviceStatus.UNCLAIMABLE, fields.PciDeviceStatus.UNAVAILABLE) dependants_ok_statuses = (fields.PciDeviceStatus.AVAILABLE, fields.PciDeviceStatus.UNCLAIMABLE) if self.status not in ok_statuses: raise exception.PciDeviceInvalidStatus( compute_node_id=self.compute_node_id, address=self.address, status=self.status, hopestatus=ok_statuses) if (self.status == fields.PciDeviceStatus.CLAIMED and self.instance_uuid != instance['uuid']): raise exception.PciDeviceInvalidOwner( compute_node_id=self.compute_node_id, address=self.address, owner=self.instance_uuid, hopeowner=instance['uuid']) if self.dev_type == fields.PciDeviceType.SRIOV_PF: vfs_list = self.child_devices if not all([vf.status in dependants_ok_statuses for vf in vfs_list]): raise exception.PciDeviceVFInvalidStatus( compute_node_id=self.compute_node_id, address=self.address) self._bulk_update_status(vfs_list, fields.PciDeviceStatus.UNAVAILABLE) elif (self.dev_type == fields.PciDeviceType.SRIOV_VF): parent = self.parent_device if 
parent: if parent.status not in parent_ok_statuses: raise exception.PciDevicePFInvalidStatus( compute_node_id=self.compute_node_id, address=self.parent_addr, status=self.status, vf_address=self.address, hopestatus=parent_ok_statuses) # Set PF status parent.status = fields.PciDeviceStatus.UNAVAILABLE else: LOG.debug('Physical function addr: %(pf_addr)s parent of ' 'VF addr: %(vf_addr)s was not found', {'pf_addr': self.parent_addr, 'vf_addr': self.address}) self.status = fields.PciDeviceStatus.ALLOCATED self.instance_uuid = instance['uuid'] # Notes(yjiang5): remove this check when instance object for # compute manager is finished if isinstance(instance, dict): if 'pci_devices' not in instance: instance['pci_devices'] = [] instance['pci_devices'].append(copy.copy(self)) else: instance.pci_devices.objects.append(copy.copy(self)) def remove(self): if self.status != fields.PciDeviceStatus.AVAILABLE: raise exception.PciDeviceInvalidStatus( compute_node_id=self.compute_node_id, address=self.address, status=self.status, hopestatus=[fields.PciDeviceStatus.AVAILABLE]) self.status = fields.PciDeviceStatus.REMOVED self.instance_uuid = None self.request_id = None def free(self, instance=None): ok_statuses = (fields.PciDeviceStatus.ALLOCATED, fields.PciDeviceStatus.CLAIMED) free_devs = [] if self.status not in ok_statuses: raise exception.PciDeviceInvalidStatus( compute_node_id=self.compute_node_id, address=self.address, status=self.status, hopestatus=ok_statuses) if instance and self.instance_uuid != instance['uuid']: raise exception.PciDeviceInvalidOwner( compute_node_id=self.compute_node_id, address=self.address, owner=self.instance_uuid, hopeowner=instance['uuid']) if self.dev_type == fields.PciDeviceType.SRIOV_PF: # Set all PF dependants status to AVAILABLE vfs_list = self.child_devices self._bulk_update_status(vfs_list, fields.PciDeviceStatus.AVAILABLE) free_devs.extend(vfs_list) if self.dev_type == fields.PciDeviceType.SRIOV_VF: # Set PF status to AVAILABLE if all of it's VFs are free parent = self.parent_device if not parent: LOG.debug('Physical function addr: %(pf_addr)s parent of ' 'VF addr: %(vf_addr)s was not found', {'pf_addr': self.parent_addr, 'vf_addr': self.address}) else: vfs_list = parent.child_devices if all([vf.is_available() for vf in vfs_list if vf.id != self.id]): parent.status = fields.PciDeviceStatus.AVAILABLE free_devs.append(parent) old_status = self.status self.status = fields.PciDeviceStatus.AVAILABLE free_devs.append(self) self.instance_uuid = None self.request_id = None if old_status == fields.PciDeviceStatus.ALLOCATED and instance: # Notes(yjiang5): remove this check when instance object for # compute manager is finished existed = next((dev for dev in instance['pci_devices'] if dev.id == self.id)) if isinstance(instance, dict): instance['pci_devices'].remove(existed) else: instance.pci_devices.objects.remove(existed) return free_devs def is_available(self): return self.status == fields.PciDeviceStatus.AVAILABLE @base.NovaObjectRegistry.register class PciDeviceList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version # PciDevice <= 1.1 # Version 1.1: PciDevice 1.2 # Version 1.2: PciDevice 1.3 # Version 1.3: Adds get_by_parent_address VERSION = '1.3' fields = { 'objects': fields.ListOfObjectsField('PciDevice'), } def __init__(self, *args, **kwargs): super(PciDeviceList, self).__init__(*args, **kwargs) if 'objects' not in kwargs: self.objects = [] self.obj_reset_changes() @base.remotable_classmethod def get_by_compute_node(cls, context, node_id): db_dev_list 
= db.pci_device_get_all_by_node(context, node_id) return base.obj_make_list(context, cls(context), objects.PciDevice, db_dev_list) @base.remotable_classmethod def get_by_instance_uuid(cls, context, uuid): db_dev_list = db.pci_device_get_all_by_instance_uuid(context, uuid) return base.obj_make_list(context, cls(context), objects.PciDevice, db_dev_list) @base.remotable_classmethod def get_by_parent_address(cls, context, node_id, parent_addr): db_dev_list = db.pci_device_get_all_by_parent_addr(context, node_id, parent_addr) return base.obj_make_list(context, cls(context), objects.PciDevice, db_dev_list) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/pci_device_pool.py0000664000175000017500000000734700000000000020672 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from oslo_serialization import jsonutils from oslo_utils import versionutils import six from nova import objects from nova.objects import base from nova.objects import fields @base.NovaObjectRegistry.register class PciDevicePool(base.NovaObject): # Version 1.0: Initial version # Version 1.1: Added numa_node field VERSION = '1.1' fields = { 'product_id': fields.StringField(), 'vendor_id': fields.StringField(), 'numa_node': fields.IntegerField(nullable=True), 'tags': fields.DictOfNullableStringsField(), 'count': fields.IntegerField(), } def obj_make_compatible(self, primitive, target_version): target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 1) and 'numa_node' in primitive: del primitive['numa_node'] # NOTE(pmurray): before this object existed the pci device pool data was # stored as a dict. For backward compatibility we need to be able to read # it in from a dict @classmethod def from_dict(cls, value): pool_dict = copy.copy(value) pool = cls() pool.vendor_id = pool_dict.pop("vendor_id") pool.product_id = pool_dict.pop("product_id") pool.numa_node = pool_dict.pop("numa_node", None) pool.count = pool_dict.pop("count") pool.tags = pool_dict return pool # NOTE(sbauza): Before using objects, pci stats was a list of # dictionaries not having tags. For compatibility with other modules, let's # create a reversible method def to_dict(self): pci_pool = base.obj_to_primitive(self) tags = pci_pool.pop('tags', {}) for k, v in tags.items(): pci_pool[k] = v return pci_pool @base.NovaObjectRegistry.register class PciDevicePoolList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version # PciDevicePool <= 1.0 # Version 1.1: PciDevicePool version 1.1 VERSION = '1.1' fields = { 'objects': fields.ListOfObjectsField('PciDevicePool'), } def from_pci_stats(pci_stats): """Create and return a PciDevicePoolList from the data stored in the db, which can be either the serialized object, or, prior to the creation of the device pool objects, a simple dict or a list of such dicts. 
""" pools = [] if isinstance(pci_stats, six.string_types): try: pci_stats = jsonutils.loads(pci_stats) except (ValueError, TypeError): pci_stats = None if pci_stats: # Check for object-ness, or old-style storage format. if 'nova_object.namespace' in pci_stats: return objects.PciDevicePoolList.obj_from_primitive(pci_stats) else: # This can be either a dict or a list of dicts if isinstance(pci_stats, list): pools = [objects.PciDevicePool.from_dict(stat) for stat in pci_stats] else: pools = [objects.PciDevicePool.from_dict(pci_stats)] return objects.PciDevicePoolList(objects=pools) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/quotas.py0000664000175000017500000006323300000000000017057 0ustar00zuulzuul00000000000000# Copyright 2013 Rackspace Hosting. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections from oslo_db import exception as db_exc from nova.db import api as db from nova.db.sqlalchemy import api as db_api from nova.db.sqlalchemy import api_models from nova.db.sqlalchemy import models as main_models from nova import exception from nova.objects import base from nova.objects import fields from nova import quota def ids_from_instance(context, instance): if (context.is_admin and context.project_id != instance['project_id']): project_id = instance['project_id'] else: project_id = context.project_id if context.user_id != instance['user_id']: user_id = instance['user_id'] else: user_id = context.user_id return project_id, user_id # TODO(lyj): This method needs to be cleaned up once the # ids_from_instance helper method is renamed or some common # method is added for objects.quotas. def ids_from_security_group(context, security_group): return ids_from_instance(context, security_group) # TODO(PhilD): This method needs to be cleaned up once the # ids_from_instance helper method is renamed or some common # method is added for objects.quotas. def ids_from_server_group(context, server_group): return ids_from_instance(context, server_group) @base.NovaObjectRegistry.register class Quotas(base.NovaObject): # Version 1.0: initial version # Version 1.1: Added create_limit() and update_limit() # Version 1.2: Added limit_check() and count() # Version 1.3: Added check_deltas(), limit_check_project_and_user(), # and count_as_dict() VERSION = '1.3' fields = { # TODO(melwitt): Remove this field in version 2.0 of the object. 'reservations': fields.ListOfStringsField(nullable=True, default=[]), 'project_id': fields.StringField(nullable=True, default=None), 'user_id': fields.StringField(nullable=True, default=None), } def obj_load_attr(self, attr): self.obj_set_defaults(attr) # NOTE(danms): This is strange because resetting these would cause # them not to be saved to the database. I would imagine this is # from overzealous defaulting and that all three fields ultimately # get set all the time. However, quotas are weird, so replicate the # longstanding behavior of setting defaults and clearing their # dirty bit. 
self.obj_reset_changes(fields=[attr]) @staticmethod @db_api.api_context_manager.reader def _get_from_db(context, project_id, resource, user_id=None): model = api_models.ProjectUserQuota if user_id else api_models.Quota query = context.session.query(model).\ filter_by(project_id=project_id).\ filter_by(resource=resource) if user_id: query = query.filter_by(user_id=user_id) result = query.first() if not result: if user_id: raise exception.ProjectUserQuotaNotFound(project_id=project_id, user_id=user_id) else: raise exception.ProjectQuotaNotFound(project_id=project_id) return result @staticmethod @db_api.api_context_manager.reader def _get_all_from_db(context, project_id): return context.session.query(api_models.ProjectUserQuota).\ filter_by(project_id=project_id).\ all() @staticmethod @db_api.api_context_manager.reader def _get_all_from_db_by_project(context, project_id): # by_project refers to the returned dict that has a 'project_id' key rows = context.session.query(api_models.Quota).\ filter_by(project_id=project_id).\ all() result = {'project_id': project_id} for row in rows: result[row.resource] = row.hard_limit return result @staticmethod @db_api.api_context_manager.reader def _get_all_from_db_by_project_and_user(context, project_id, user_id): # by_project_and_user refers to the returned dict that has # 'project_id' and 'user_id' keys columns = (api_models.ProjectUserQuota.resource, api_models.ProjectUserQuota.hard_limit) user_quotas = context.session.query(*columns).\ filter_by(project_id=project_id).\ filter_by(user_id=user_id).\ all() result = {'project_id': project_id, 'user_id': user_id} for user_quota in user_quotas: result[user_quota.resource] = user_quota.hard_limit return result @staticmethod @db_api.api_context_manager.writer def _destroy_all_in_db_by_project(context, project_id): per_project = context.session.query(api_models.Quota).\ filter_by(project_id=project_id).\ delete(synchronize_session=False) per_user = context.session.query(api_models.ProjectUserQuota).\ filter_by(project_id=project_id).\ delete(synchronize_session=False) if not per_project and not per_user: raise exception.ProjectQuotaNotFound(project_id=project_id) @staticmethod @db_api.api_context_manager.writer def _destroy_all_in_db_by_project_and_user(context, project_id, user_id): result = context.session.query(api_models.ProjectUserQuota).\ filter_by(project_id=project_id).\ filter_by(user_id=user_id).\ delete(synchronize_session=False) if not result: raise exception.ProjectUserQuotaNotFound(project_id=project_id, user_id=user_id) @staticmethod @db_api.api_context_manager.reader def _get_class_from_db(context, class_name, resource): result = context.session.query(api_models.QuotaClass).\ filter_by(class_name=class_name).\ filter_by(resource=resource).\ first() if not result: raise exception.QuotaClassNotFound(class_name=class_name) return result @staticmethod @db_api.api_context_manager.reader def _get_all_class_from_db_by_name(context, class_name): # by_name refers to the returned dict that has a 'class_name' key rows = context.session.query(api_models.QuotaClass).\ filter_by(class_name=class_name).\ all() result = {'class_name': class_name} for row in rows: result[row.resource] = row.hard_limit return result @staticmethod @db_api.api_context_manager.writer def _create_limit_in_db(context, project_id, resource, limit, user_id=None): # TODO(melwitt): We won't have per project resources after nova-network # is removed. # TODO(stephenfin): We need to do something here now...but what? 
per_user = (user_id and resource not in db_api.quota_get_per_project_resources()) quota_ref = (api_models.ProjectUserQuota() if per_user else api_models.Quota()) if per_user: quota_ref.user_id = user_id quota_ref.project_id = project_id quota_ref.resource = resource quota_ref.hard_limit = limit try: quota_ref.save(context.session) except db_exc.DBDuplicateEntry: raise exception.QuotaExists(project_id=project_id, resource=resource) return quota_ref @staticmethod @db_api.api_context_manager.writer def _update_limit_in_db(context, project_id, resource, limit, user_id=None): # TODO(melwitt): We won't have per project resources after nova-network # is removed. # TODO(stephenfin): We need to do something here now...but what? per_user = (user_id and resource not in db_api.quota_get_per_project_resources()) model = api_models.ProjectUserQuota if per_user else api_models.Quota query = context.session.query(model).\ filter_by(project_id=project_id).\ filter_by(resource=resource) if per_user: query = query.filter_by(user_id=user_id) result = query.update({'hard_limit': limit}) if not result: if per_user: raise exception.ProjectUserQuotaNotFound(project_id=project_id, user_id=user_id) else: raise exception.ProjectQuotaNotFound(project_id=project_id) @staticmethod @db_api.api_context_manager.writer def _create_class_in_db(context, class_name, resource, limit): # NOTE(melwitt): There's no unique constraint on the QuotaClass model, # so check for duplicate manually. try: Quotas._get_class_from_db(context, class_name, resource) except exception.QuotaClassNotFound: pass else: raise exception.QuotaClassExists(class_name=class_name, resource=resource) quota_class_ref = api_models.QuotaClass() quota_class_ref.class_name = class_name quota_class_ref.resource = resource quota_class_ref.hard_limit = limit quota_class_ref.save(context.session) return quota_class_ref @staticmethod @db_api.api_context_manager.writer def _update_class_in_db(context, class_name, resource, limit): result = context.session.query(api_models.QuotaClass).\ filter_by(class_name=class_name).\ filter_by(resource=resource).\ update({'hard_limit': limit}) if not result: raise exception.QuotaClassNotFound(class_name=class_name) # TODO(melwitt): Remove this method in version 2.0 of the object. @base.remotable def reserve(self, expire=None, project_id=None, user_id=None, **deltas): # Honor the expected attributes even though we're not reserving # anything anymore. This will protect against things exploding if # someone has an Ocata compute host running by accident, for example. self.reservations = None self.project_id = project_id self.user_id = user_id self.obj_reset_changes() # TODO(melwitt): Remove this method in version 2.0 of the object. @base.remotable def commit(self): pass # TODO(melwitt): Remove this method in version 2.0 of the object. @base.remotable def rollback(self): pass @base.remotable_classmethod def limit_check(cls, context, project_id=None, user_id=None, **values): """Check quota limits.""" return quota.QUOTAS.limit_check( context, project_id=project_id, user_id=user_id, **values) @base.remotable_classmethod def limit_check_project_and_user(cls, context, project_values=None, user_values=None, project_id=None, user_id=None): """Check values against quota limits.""" return quota.QUOTAS.limit_check_project_and_user(context, project_values=project_values, user_values=user_values, project_id=project_id, user_id=user_id) # NOTE(melwitt): This can be removed once no old code can call count(). 
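# Illustrative sketch, not part of the upstream module: count_as_dict()
# returns counts per scope, while count() collapses them and prefers the
# 'user' scope when present, e.g. (hypothetical values, other resource
# keys elided):
#   Quotas.count_as_dict(ctxt, 'instances', project_id, user_id=user_id)
#   # -> {'project': {'instances': 5}, 'user': {'instances': 2}}
#   Quotas.count(ctxt, 'instances', project_id, user_id=user_id)  # -> 2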
@base.remotable_classmethod def count(cls, context, resource, *args, **kwargs): """Count a resource.""" count = quota.QUOTAS.count_as_dict(context, resource, *args, **kwargs) key = 'user' if 'user' in count else 'project' return count[key][resource] @base.remotable_classmethod def count_as_dict(cls, context, resource, *args, **kwargs): """Count a resource and return a dict.""" return quota.QUOTAS.count_as_dict( context, resource, *args, **kwargs) @base.remotable_classmethod def check_deltas(cls, context, deltas, *count_args, **count_kwargs): """Check usage delta against quota limits. This does a Quotas.count_as_dict() followed by a Quotas.limit_check_project_and_user() using the provided deltas. :param context: The request context, for access checks :param deltas: A dict of {resource_name: delta, ...} to check against the quota limits :param count_args: Optional positional arguments to pass to count_as_dict() :param count_kwargs: Optional keyword arguments to pass to count_as_dict() :param check_project_id: Optional project_id for scoping the limit check to a different project than in the context :param check_user_id: Optional user_id for scoping the limit check to a different user than in the context :raises: exception.OverQuota if the limit check exceeds the quota limits """ # We can't do f(*args, kw=None, **kwargs) in python 2.x check_project_id = count_kwargs.pop('check_project_id', None) check_user_id = count_kwargs.pop('check_user_id', None) check_kwargs = collections.defaultdict(dict) for resource in deltas: # If we already counted a resource in a batch count, avoid # unnecessary re-counting and avoid creating empty dicts in # the defaultdict. if (resource in check_kwargs.get('project_values', {}) or resource in check_kwargs.get('user_values', {})): continue count = cls.count_as_dict(context, resource, *count_args, **count_kwargs) for res in count.get('project', {}): if res in deltas: total = count['project'][res] + deltas[res] check_kwargs['project_values'][res] = total for res in count.get('user', {}): if res in deltas: total = count['user'][res] + deltas[res] check_kwargs['user_values'][res] = total if check_project_id is not None: check_kwargs['project_id'] = check_project_id if check_user_id is not None: check_kwargs['user_id'] = check_user_id try: cls.limit_check_project_and_user(context, **check_kwargs) except exception.OverQuota as exc: # Report usage in the exception when going over quota key = 'user' if 'user' in count else 'project' exc.kwargs['usages'] = count[key] raise exc @base.remotable_classmethod def create_limit(cls, context, project_id, resource, limit, user_id=None): try: db.quota_get(context, project_id, resource, user_id=user_id) except exception.QuotaNotFound: cls._create_limit_in_db(context, project_id, resource, limit, user_id=user_id) else: raise exception.QuotaExists(project_id=project_id, resource=resource) @base.remotable_classmethod def update_limit(cls, context, project_id, resource, limit, user_id=None): try: cls._update_limit_in_db(context, project_id, resource, limit, user_id=user_id) except exception.QuotaNotFound: db.quota_update(context, project_id, resource, limit, user_id=user_id) @classmethod def create_class(cls, context, class_name, resource, limit): try: db.quota_class_get(context, class_name, resource) except exception.QuotaClassNotFound: cls._create_class_in_db(context, class_name, resource, limit) else: raise exception.QuotaClassExists(class_name=class_name, resource=resource) @classmethod def update_class(cls, context, class_name, 
resource, limit): try: cls._update_class_in_db(context, class_name, resource, limit) except exception.QuotaClassNotFound: db.quota_class_update(context, class_name, resource, limit) # NOTE(melwitt): The following methods are not remotable and return # dict-like database model objects. We are using classmethods to provide # a common interface for accessing the api/main databases. @classmethod def get(cls, context, project_id, resource, user_id=None): try: quota = cls._get_from_db(context, project_id, resource, user_id=user_id) except exception.QuotaNotFound: quota = db.quota_get(context, project_id, resource, user_id=user_id) return quota @classmethod def get_all(cls, context, project_id): api_db_quotas = cls._get_all_from_db(context, project_id) main_db_quotas = db.quota_get_all(context, project_id) return api_db_quotas + main_db_quotas @classmethod def get_all_by_project(cls, context, project_id): api_db_quotas_dict = cls._get_all_from_db_by_project(context, project_id) main_db_quotas_dict = db.quota_get_all_by_project(context, project_id) for k, v in api_db_quotas_dict.items(): main_db_quotas_dict[k] = v return main_db_quotas_dict @classmethod def get_all_by_project_and_user(cls, context, project_id, user_id): api_db_quotas_dict = cls._get_all_from_db_by_project_and_user( context, project_id, user_id) main_db_quotas_dict = db.quota_get_all_by_project_and_user( context, project_id, user_id) for k, v in api_db_quotas_dict.items(): main_db_quotas_dict[k] = v return main_db_quotas_dict @classmethod def destroy_all_by_project(cls, context, project_id): try: cls._destroy_all_in_db_by_project(context, project_id) except exception.ProjectQuotaNotFound: db.quota_destroy_all_by_project(context, project_id) @classmethod def destroy_all_by_project_and_user(cls, context, project_id, user_id): try: cls._destroy_all_in_db_by_project_and_user(context, project_id, user_id) except exception.ProjectUserQuotaNotFound: db.quota_destroy_all_by_project_and_user(context, project_id, user_id) @classmethod def get_class(cls, context, class_name, resource): try: qclass = cls._get_class_from_db(context, class_name, resource) except exception.QuotaClassNotFound: qclass = db.quota_class_get(context, class_name, resource) return qclass @classmethod def get_default_class(cls, context): try: qclass = cls._get_all_class_from_db_by_name( context, db_api._DEFAULT_QUOTA_NAME) except exception.QuotaClassNotFound: qclass = db.quota_class_get_default(context) return qclass @classmethod def get_all_class_by_name(cls, context, class_name): api_db_quotas_dict = cls._get_all_class_from_db_by_name(context, class_name) main_db_quotas_dict = db.quota_class_get_all_by_name(context, class_name) for k, v in api_db_quotas_dict.items(): main_db_quotas_dict[k] = v return main_db_quotas_dict @base.NovaObjectRegistry.register class QuotasNoOp(Quotas): # TODO(melwitt): Remove this method in version 2.0 of the object. def reserve(context, expire=None, project_id=None, user_id=None, **deltas): pass # TODO(melwitt): Remove this method in version 2.0 of the object. def commit(self, context=None): pass # TODO(melwitt): Remove this method in version 2.0 of the object. 
def rollback(self, context=None): pass def check_deltas(cls, context, deltas, *count_args, **count_kwargs): pass @db_api.require_context @db_api.pick_context_manager_reader def _get_main_per_project_limits(context, limit): return context.session.query(main_models.Quota).\ filter_by(deleted=0).\ limit(limit).\ all() @db_api.require_context @db_api.pick_context_manager_reader def _get_main_per_user_limits(context, limit): return context.session.query(main_models.ProjectUserQuota).\ filter_by(deleted=0).\ limit(limit).\ all() @db_api.require_context @db_api.pick_context_manager_writer def _destroy_main_per_project_limits(context, project_id, resource): context.session.query(main_models.Quota).\ filter_by(deleted=0).\ filter_by(project_id=project_id).\ filter_by(resource=resource).\ soft_delete(synchronize_session=False) @db_api.require_context @db_api.pick_context_manager_writer def _destroy_main_per_user_limits(context, project_id, resource, user_id): context.session.query(main_models.ProjectUserQuota).\ filter_by(deleted=0).\ filter_by(project_id=project_id).\ filter_by(user_id=user_id).\ filter_by(resource=resource).\ soft_delete(synchronize_session=False) @db_api.api_context_manager.writer def _create_limits_in_api_db(context, db_limits, per_user=False): for db_limit in db_limits: user_id = db_limit.user_id if per_user else None Quotas._create_limit_in_db(context, db_limit.project_id, db_limit.resource, db_limit.hard_limit, user_id=user_id) def migrate_quota_limits_to_api_db(context, count): # Migrate per project limits main_per_project_limits = _get_main_per_project_limits(context, count) done = 0 try: # Create all the limits in a single transaction. _create_limits_in_api_db(context, main_per_project_limits) except exception.QuotaExists: # NOTE(melwitt): This can happen if the migration is interrupted after # limits were created in the api db but before they were deleted from # the main db, and the migration is re-run. pass # Delete the limits separately. for db_limit in main_per_project_limits: _destroy_main_per_project_limits(context, db_limit.project_id, db_limit.resource) done += 1 if done == count: return len(main_per_project_limits), done # Migrate per user limits count -= done main_per_user_limits = _get_main_per_user_limits(context, count) try: # Create all the limits in a single transaction. _create_limits_in_api_db(context, main_per_user_limits, per_user=True) except exception.QuotaExists: # NOTE(melwitt): This can happen if the migration is interrupted after # limits were created in the api db but before they were deleted from # the main db, and the migration is re-run. pass # Delete the limits separately. 
for db_limit in main_per_user_limits: _destroy_main_per_user_limits(context, db_limit.project_id, db_limit.resource, db_limit.user_id) done += 1 return len(main_per_project_limits) + len(main_per_user_limits), done @db_api.require_context @db_api.pick_context_manager_reader def _get_main_quota_classes(context, limit): return context.session.query(main_models.QuotaClass).\ filter_by(deleted=0).\ limit(limit).\ all() @db_api.pick_context_manager_writer def _destroy_main_quota_classes(context, db_classes): for db_class in db_classes: context.session.query(main_models.QuotaClass).\ filter_by(deleted=0).\ filter_by(id=db_class.id).\ soft_delete(synchronize_session=False) @db_api.api_context_manager.writer def _create_classes_in_api_db(context, db_classes): for db_class in db_classes: Quotas._create_class_in_db(context, db_class.class_name, db_class.resource, db_class.hard_limit) def migrate_quota_classes_to_api_db(context, count): main_quota_classes = _get_main_quota_classes(context, count) done = 0 try: # Create all the classes in a single transaction. _create_classes_in_api_db(context, main_quota_classes) except exception.QuotaClassExists: # NOTE(melwitt): This can happen if the migration is interrupted after # classes were created in the api db but before they were deleted from # the main db, and the migration is re-run. pass # Delete the classes in a single transaction. _destroy_main_quota_classes(context, main_quota_classes) found = done = len(main_quota_classes) return found, done ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/request_spec.py0000664000175000017500000015565500000000000020257 0ustar00zuulzuul00000000000000# Copyright 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import copy import itertools import os_resource_classes as orc from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import versionutils import six from nova.db.sqlalchemy import api as db from nova.db.sqlalchemy import api_models from nova import exception from nova import objects from nova.objects import base from nova.objects import fields from nova.objects import instance as obj_instance LOG = logging.getLogger(__name__) REQUEST_SPEC_OPTIONAL_ATTRS = ['requested_destination', 'security_groups', 'network_metadata', 'requested_resources', 'request_level_params'] @base.NovaObjectRegistry.register class RequestSpec(base.NovaObject): # Version 1.0: Initial version # Version 1.1: ImageMeta version 1.6 # Version 1.2: SchedulerRetries version 1.1 # Version 1.3: InstanceGroup version 1.10 # Version 1.4: ImageMeta version 1.7 # Version 1.5: Added get_by_instance_uuid(), create(), save() # Version 1.6: Added requested_destination # Version 1.7: Added destroy() # Version 1.8: Added security_groups # Version 1.9: Added user_id # Version 1.10: Added network_metadata # Version 1.11: Added is_bfv # Version 1.12: Added requested_resources # Version 1.13: Added request_level_params VERSION = '1.13' fields = { 'id': fields.IntegerField(), 'image': fields.ObjectField('ImageMeta', nullable=True), 'numa_topology': fields.ObjectField('InstanceNUMATopology', nullable=True), 'pci_requests': fields.ObjectField('InstancePCIRequests', nullable=True), # TODO(mriedem): The project_id shouldn't be nullable since the # scheduler relies on it being set. 'project_id': fields.StringField(nullable=True), 'user_id': fields.StringField(nullable=True), 'availability_zone': fields.StringField(nullable=True), 'flavor': fields.ObjectField('Flavor', nullable=False), 'num_instances': fields.IntegerField(default=1), # NOTE(alex_xu): This field won't be persisted. 'ignore_hosts': fields.ListOfStringsField(nullable=True), # NOTE(mriedem): In reality, you can only ever have one # host in the force_hosts list. The fact this is a list # is a mistake perpetuated over time. 'force_hosts': fields.ListOfStringsField(nullable=True), # NOTE(mriedem): In reality, you can only ever have one # node in the force_nodes list. The fact this is a list # is a mistake perpetuated over time. 'force_nodes': fields.ListOfStringsField(nullable=True), # NOTE(alex_xu): This field won't be persisted. 'requested_destination': fields.ObjectField('Destination', nullable=True, default=None), # NOTE(alex_xu): This field won't be persisted. 'retry': fields.ObjectField('SchedulerRetries', nullable=True), 'limits': fields.ObjectField('SchedulerLimits', nullable=True), 'instance_group': fields.ObjectField('InstanceGroup', nullable=True), # NOTE(sbauza): Since hints are depending on running filters, we prefer # to leave the API correctly validating the hints per the filters and # just provide to the RequestSpec object a free-form dictionary 'scheduler_hints': fields.DictOfListOfStringsField(nullable=True), 'instance_uuid': fields.UUIDField(), # TODO(stephenfin): Remove this as it's related to nova-network 'security_groups': fields.ObjectField('SecurityGroupList'), # NOTE(alex_xu): This field won't be persisted. 'network_metadata': fields.ObjectField('NetworkMetadata'), 'is_bfv': fields.BooleanField(), # NOTE(gibi): Eventually we want to store every resource request as # RequestGroup objects here. However currently the flavor based # resources like vcpu, ram, disk, and flavor.extra_spec based resources # are not handled this way. 
See the Todo in from_components() where # requested_resources are set. # NOTE(alex_xu): This field won't be persisted. 'requested_resources': fields.ListOfObjectsField('RequestGroup', nullable=True, default=None), # NOTE(efried): This field won't be persisted. 'request_level_params': fields.ObjectField('RequestLevelParams'), } def obj_make_compatible(self, primitive, target_version): super(RequestSpec, self).obj_make_compatible(primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 13) and 'request_level_params' in primitive: del primitive['request_level_params'] if target_version < (1, 12): if 'requested_resources' in primitive: del primitive['requested_resources'] if target_version < (1, 11) and 'is_bfv' in primitive: del primitive['is_bfv'] if target_version < (1, 10): if 'network_metadata' in primitive: del primitive['network_metadata'] if target_version < (1, 9): if 'user_id' in primitive: del primitive['user_id'] if target_version < (1, 8): if 'security_groups' in primitive: del primitive['security_groups'] if target_version < (1, 6): if 'requested_destination' in primitive: del primitive['requested_destination'] def obj_load_attr(self, attrname): if attrname not in REQUEST_SPEC_OPTIONAL_ATTRS: raise exception.ObjectActionError( action='obj_load_attr', reason='attribute %s not lazy-loadable' % attrname) if attrname == 'security_groups': self.security_groups = objects.SecurityGroupList(objects=[]) return if attrname == 'network_metadata': self.network_metadata = objects.NetworkMetadata( physnets=set(), tunneled=False) return if attrname == 'request_level_params': self.request_level_params = RequestLevelParams() return # NOTE(sbauza): In case the primitive was not providing that field # because of a previous RequestSpec version, we want to default # that field in order to have the same behaviour. self.obj_set_defaults(attrname) @property def vcpus(self): return self.flavor.vcpus @property def memory_mb(self): return self.flavor.memory_mb @property def root_gb(self): return self.flavor.root_gb @property def ephemeral_gb(self): return self.flavor.ephemeral_gb @property def swap(self): return self.flavor.swap @property def root_required(self): # self.request_level_params and .root_required lazy-default via their # respective obj_load_attr methods. return self.request_level_params.root_required @property def root_forbidden(self): # self.request_level_params and .root_forbidden lazy-default via their # respective obj_load_attr methods. return self.request_level_params.root_forbidden def _image_meta_from_image(self, image): if isinstance(image, objects.ImageMeta): self.image = image elif isinstance(image, dict): # NOTE(sbauza): Until Nova is fully providing an ImageMeta object # for getting properties, we still need to hydrate it here # TODO(sbauza): To be removed once all RequestSpec hydrations are # done on the conductor side and if the image is an ImageMeta self.image = objects.ImageMeta.from_dict(image) else: self.image = None def _from_instance(self, instance): if isinstance(instance, obj_instance.Instance): # NOTE(sbauza): Instance should normally be a NovaObject... getter = getattr elif isinstance(instance, dict): # NOTE(sbauza): ... 
but there are some cases where request_spec # has an instance key as a dictionary, just because # select_destinations() is getting a request_spec dict made by # sched_utils.build_request_spec() # TODO(sbauza): To be removed once all RequestSpec hydrations are # done on the conductor side getter = lambda x, y: x.get(y) else: # If the instance is None, there is no reason to set the fields return instance_fields = ['numa_topology', 'pci_requests', 'uuid', 'project_id', 'user_id', 'availability_zone'] for field in instance_fields: if field == 'uuid': setattr(self, 'instance_uuid', getter(instance, field)) elif field == 'pci_requests': self._from_instance_pci_requests(getter(instance, field)) elif field == 'numa_topology': self._from_instance_numa_topology(getter(instance, field)) else: setattr(self, field, getter(instance, field)) def _from_instance_pci_requests(self, pci_requests): if isinstance(pci_requests, dict): pci_req_cls = objects.InstancePCIRequests self.pci_requests = pci_req_cls.from_request_spec_instance_props( pci_requests) else: self.pci_requests = pci_requests def _from_instance_numa_topology(self, numa_topology): if isinstance(numa_topology, six.string_types): numa_topology = objects.InstanceNUMATopology.obj_from_primitive( jsonutils.loads(numa_topology)) self.numa_topology = numa_topology def _from_flavor(self, flavor): if isinstance(flavor, objects.Flavor): self.flavor = flavor elif isinstance(flavor, dict): # NOTE(sbauza): Again, request_spec is primitived by # sched_utils.build_request_spec() and passed to # select_destinations() like this # TODO(sbauza): To be removed once all RequestSpec hydrations are # done on the conductor side self.flavor = objects.Flavor(**flavor) def _from_retry(self, retry_dict): self.retry = (SchedulerRetries.from_dict(self._context, retry_dict) if retry_dict else None) def _populate_group_info(self, filter_properties): if filter_properties.get('instance_group'): # New-style group information as a NovaObject, we can directly set # the field self.instance_group = filter_properties.get('instance_group') elif filter_properties.get('group_updated') is True: # Old-style group information having ugly dict keys containing sets # NOTE(sbauza): Can be dropped once select_destinations is removed policies = list(filter_properties.get('group_policies')) hosts = list(filter_properties.get('group_hosts')) members = list(filter_properties.get('group_members')) self.instance_group = objects.InstanceGroup(policy=policies[0], hosts=hosts, members=members) # InstanceGroup.uuid is not nullable so only set it if we got it. group_uuid = filter_properties.get('group_uuid') if group_uuid: self.instance_group.uuid = group_uuid # hosts has to be not part of the updates for saving the object self.instance_group.obj_reset_changes(['hosts']) else: # Set the value anyway to avoid any call to obj_attr_is_set for it self.instance_group = None def _from_limits(self, limits): if isinstance(limits, dict): self.limits = SchedulerLimits.from_dict(limits) else: # Already a SchedulerLimits object. self.limits = limits def _from_hints(self, hints_dict): if hints_dict is None: self.scheduler_hints = None return self.scheduler_hints = { hint: value if isinstance(value, list) else [value] for hint, value in hints_dict.items()} @classmethod def from_primitives(cls, context, request_spec, filter_properties): """Returns a new RequestSpec object by hydrating it from legacy dicts. Deprecated. A RequestSpec object is created early in the boot process using the from_components method. 
That object will either be passed to places that require it, or it can be looked up with get_by_instance_uuid. This method can be removed when there are no longer any callers. Because the method is not remotable it is not tied to object versioning. This helper is not intended to keep the legacy dicts alive in the nova codebase; it is just a temporary solution for populating the Spec object until we get rid of scheduler_utils' build_request_spec() and the filter_properties hydration in the conductor. :param context: a context object :param request_spec: An old-style request_spec dictionary :param filter_properties: An old-style filter_properties dictionary """ num_instances = request_spec.get('num_instances', 1) spec = cls(context, num_instances=num_instances) # Hydrate from request_spec first image = request_spec.get('image') spec._image_meta_from_image(image) instance = request_spec.get('instance_properties') spec._from_instance(instance) flavor = request_spec.get('instance_type') spec._from_flavor(flavor) # Hydrate now from filter_properties spec.ignore_hosts = filter_properties.get('ignore_hosts') spec.force_hosts = filter_properties.get('force_hosts') spec.force_nodes = filter_properties.get('force_nodes') retry = filter_properties.get('retry', {}) spec._from_retry(retry) limits = filter_properties.get('limits', {}) spec._from_limits(limits) spec._populate_group_info(filter_properties) scheduler_hints = filter_properties.get('scheduler_hints', {}) spec._from_hints(scheduler_hints) spec.requested_destination = filter_properties.get( 'requested_destination') # NOTE(sbauza): Default the other fields that are not part of the # original contract spec.obj_set_defaults() return spec def get_scheduler_hint(self, hint_name, default=None): """Convenience helper for accessing a particular scheduler hint since it is hydrated by putting a single item into a list. In order to reduce the complexity, this helper returns a string if the requested hint is a list of only one value, and if not, returns the value directly (i.e. the list). If the hint does not exist (or scheduler_hints is None), it returns the default value. :param hint_name: name of the hint :param default: the default value if the hint is not there """ if (not self.obj_attr_is_set('scheduler_hints') or self.scheduler_hints is None): return default hint_val = self.scheduler_hints.get(hint_name, default) return (hint_val[0] if isinstance(hint_val, list) and len(hint_val) == 1 else hint_val) def _to_legacy_image(self): return base.obj_to_primitive(self.image) if ( self.obj_attr_is_set('image') and self.image) else {} def _to_legacy_instance(self): # NOTE(sbauza): Since the RequestSpec only persists a few Instance # fields, we can only return a dict. instance = {} instance_fields = ['numa_topology', 'pci_requests', 'project_id', 'user_id', 'availability_zone', 'instance_uuid'] for field in instance_fields: if not self.obj_attr_is_set(field): continue if field == 'instance_uuid': instance['uuid'] = getattr(self, field) else: instance[field] = getattr(self, field) flavor_fields = ['root_gb', 'ephemeral_gb', 'memory_mb', 'vcpus'] if not self.obj_attr_is_set('flavor'): return instance for field in flavor_fields: instance[field] = getattr(self.flavor, field) return instance def _to_legacy_group_info(self): # NOTE(sbauza): Since this is only needed until the AffinityFilters are # modified to use the RequestSpec object directly, we need to keep # the existing dictionary as a primitive.
return {'group_updated': True, 'group_hosts': set(self.instance_group.hosts), 'group_policies': set([self.instance_group.policy]), 'group_members': set(self.instance_group.members), 'group_uuid': self.instance_group.uuid} def to_legacy_request_spec_dict(self): """Returns a legacy request_spec dict from the RequestSpec object. Since we need to manage backwards compatibility and rolling upgrades within our RPC API, we need to accept to provide an helper for primitiving the right RequestSpec object into a legacy dict until we drop support for old Scheduler RPC API versions. If you don't understand why this method is needed, please don't use it. """ req_spec = {} if not self.obj_attr_is_set('num_instances'): req_spec['num_instances'] = self.fields['num_instances'].default else: req_spec['num_instances'] = self.num_instances req_spec['image'] = self._to_legacy_image() req_spec['instance_properties'] = self._to_legacy_instance() if self.obj_attr_is_set('flavor'): req_spec['instance_type'] = self.flavor else: req_spec['instance_type'] = {} return req_spec def to_legacy_filter_properties_dict(self): """Returns a legacy filter_properties dict from the RequestSpec object. Since we need to manage backwards compatibility and rolling upgrades within our RPC API, we need to accept to provide an helper for primitiving the right RequestSpec object into a legacy dict until we drop support for old Scheduler RPC API versions. If you don't understand why this method is needed, please don't use it. """ filt_props = {} if self.obj_attr_is_set('ignore_hosts') and self.ignore_hosts: filt_props['ignore_hosts'] = self.ignore_hosts if self.obj_attr_is_set('force_hosts') and self.force_hosts: filt_props['force_hosts'] = self.force_hosts if self.obj_attr_is_set('force_nodes') and self.force_nodes: filt_props['force_nodes'] = self.force_nodes if self.obj_attr_is_set('retry') and self.retry: filt_props['retry'] = self.retry.to_dict() if self.obj_attr_is_set('limits') and self.limits: filt_props['limits'] = self.limits.to_dict() if self.obj_attr_is_set('instance_group') and self.instance_group: filt_props.update(self._to_legacy_group_info()) if self.obj_attr_is_set('scheduler_hints') and self.scheduler_hints: # NOTE(sbauza): We need to backport all the hints correctly since # we had to hydrate the field by putting a single item into a list. filt_props['scheduler_hints'] = {hint: self.get_scheduler_hint( hint) for hint in self.scheduler_hints} if self.obj_attr_is_set('requested_destination' ) and self.requested_destination: filt_props['requested_destination'] = self.requested_destination return filt_props @classmethod def from_components(cls, context, instance_uuid, image, flavor, numa_topology, pci_requests, filter_properties, instance_group, availability_zone, security_groups=None, project_id=None, user_id=None, port_resource_requests=None): """Returns a new RequestSpec object hydrated by various components. This helper is useful in creating the RequestSpec from the various objects that are assembled early in the boot process. This method creates a complete RequestSpec object with all properties set or intentionally left blank. 
:param context: a context object :param instance_uuid: the uuid of the instance to schedule :param image: a dict of properties for an image or volume :param flavor: a flavor NovaObject :param numa_topology: InstanceNUMATopology or None :param pci_requests: InstancePCIRequests :param filter_properties: a dict of properties for scheduling :param instance_group: None or an instance group NovaObject :param availability_zone: an availability_zone string :param security_groups: A SecurityGroupList object. If None, don't set security_groups on the resulting object. :param project_id: The project_id for the requestspec (should match the instance project_id). :param user_id: The user_id for the requestspec (should match the instance user_id). :param port_resource_requests: a list of RequestGroup objects representing the resource needs of the neutron ports """ spec_obj = cls(context) spec_obj.num_instances = 1 spec_obj.instance_uuid = instance_uuid spec_obj.instance_group = instance_group if spec_obj.instance_group is None and filter_properties: spec_obj._populate_group_info(filter_properties) spec_obj.project_id = project_id or context.project_id spec_obj.user_id = user_id or context.user_id spec_obj._image_meta_from_image(image) spec_obj._from_flavor(flavor) spec_obj._from_instance_pci_requests(pci_requests) spec_obj._from_instance_numa_topology(numa_topology) spec_obj.ignore_hosts = filter_properties.get('ignore_hosts') spec_obj.force_hosts = filter_properties.get('force_hosts') spec_obj.force_nodes = filter_properties.get('force_nodes') spec_obj._from_retry(filter_properties.get('retry', {})) spec_obj._from_limits(filter_properties.get('limits', {})) spec_obj._from_hints(filter_properties.get('scheduler_hints', {})) spec_obj.availability_zone = availability_zone if security_groups is not None: spec_obj.security_groups = security_groups spec_obj.requested_destination = filter_properties.get( 'requested_destination') # TODO(gibi): do the creation of the unnumbered group and any # numbered group from the flavor by moving the logic from # nova.scheduler.utils.resources_from_request_spec() here. See also # the comment in the definition of requested_resources field. spec_obj.requested_resources = [] if port_resource_requests: spec_obj.requested_resources.extend(port_resource_requests) # NOTE(efried): We don't need to handle request_level_params here yet # because they're set dynamically by the scheduler. That could change # in the future. 
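# Illustrative sketch, not part of the upstream module: a minimal caller
# early in the boot process might build a spec roughly like this
# (all argument values are hypothetical):
#   spec = objects.RequestSpec.from_components(
#       ctxt, instance.uuid, image_meta, flavor,
#       numa_topology=None, pci_requests=None, filter_properties={},
#       instance_group=None, availability_zone=None,
#       project_id=instance.project_id, user_id=instance.user_id)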
# NOTE(sbauza): Default the other fields that are not part of the # original contract spec_obj.obj_set_defaults() return spec_obj def ensure_project_and_user_id(self, instance): if 'project_id' not in self or self.project_id is None: self.project_id = instance.project_id if 'user_id' not in self or self.user_id is None: self.user_id = instance.user_id def ensure_network_metadata(self, instance): if not (instance.info_cache and instance.info_cache.network_info): return physnets = set([]) tunneled = True # physical_network and tunneled might not be in the cache for old # instances that haven't had their info_cache healed yet for vif in instance.info_cache.network_info: physnet = vif.get('network', {}).get('meta', {}).get( 'physical_network', None) if physnet: physnets.add(physnet) tunneled |= vif.get('network', {}).get('meta', {}).get( 'tunneled', False) self.network_metadata = objects.NetworkMetadata( physnets=physnets, tunneled=tunneled) @staticmethod def _from_db_object(context, spec, db_spec): spec_obj = spec.obj_from_primitive(jsonutils.loads(db_spec['spec'])) for key in spec.fields: # Load these from the db model not the serialized object within, # though they should match. if key in ['id', 'instance_uuid']: setattr(spec, key, db_spec[key]) elif key in ('requested_destination', 'requested_resources', 'network_metadata', 'request_level_params'): # Do not override what we already have in the object as this # field is not persisted. If save() is called after # one of these fields is populated, it will reset the field to # None and we'll lose what is set (but not persisted) on the # object. continue elif key in ('retry', 'ignore_hosts'): # NOTE(takashin): Do not override the 'retry' or 'ignore_hosts' # fields which are not persisted. They are not lazy-loadable # fields. If they are not set, set None. if not spec.obj_attr_is_set(key): setattr(spec, key, None) elif key in spec_obj: setattr(spec, key, getattr(spec_obj, key)) spec._context = context if 'instance_group' in spec and spec.instance_group: # NOTE(mriedem): We could have a half-baked instance group with no # uuid if some legacy translation was performed on this spec in the # past. In that case, try to workaround the issue by getting the # group uuid from the scheduler hint. if 'uuid' not in spec.instance_group: spec.instance_group.uuid = spec.get_scheduler_hint('group') # NOTE(danms): We don't store the full instance group in # the reqspec since it would be stale almost immediately. # Instead, load it by uuid here so it's up-to-date. 
try: spec.instance_group = objects.InstanceGroup.get_by_uuid( context, spec.instance_group.uuid) except exception.InstanceGroupNotFound: # NOTE(danms): Instance group may have been deleted spec.instance_group = None spec.obj_reset_changes() return spec @staticmethod @db.api_context_manager.reader def _get_by_instance_uuid_from_db(context, instance_uuid): db_spec = context.session.query(api_models.RequestSpec).filter_by( instance_uuid=instance_uuid).first() if not db_spec: raise exception.RequestSpecNotFound( instance_uuid=instance_uuid) return db_spec @base.remotable_classmethod def get_by_instance_uuid(cls, context, instance_uuid): db_spec = cls._get_by_instance_uuid_from_db(context, instance_uuid) return cls._from_db_object(context, cls(), db_spec) @staticmethod @db.api_context_manager.writer def _create_in_db(context, updates): db_spec = api_models.RequestSpec() db_spec.update(updates) db_spec.save(context.session) return db_spec def _get_update_primitives(self): """Serialize object to match the db model. We store copies of embedded objects rather than references to these objects because we want a snapshot of the request at this point. If the references changed or were deleted we would not be able to reschedule this instance under the same conditions as it was originally scheduled with. """ updates = self.obj_get_changes() db_updates = None # NOTE(alaski): The db schema is the full serialized object in a # 'spec' column. If anything has changed we rewrite the full thing. if updates: # NOTE(danms): Don't persist the could-be-large and could-be-stale # properties of InstanceGroup spec = self.obj_clone() if 'instance_group' in spec and spec.instance_group: spec.instance_group.members = None spec.instance_group.hosts = None # NOTE(mriedem): Don't persist these since they are per-request for excluded in ('retry', 'requested_destination', 'requested_resources', 'ignore_hosts', 'request_level_params'): if excluded in spec and getattr(spec, excluded): setattr(spec, excluded, None) # NOTE(stephenfin): Don't persist network metadata since we have # no need for it after scheduling if 'network_metadata' in spec and spec.network_metadata: del spec.network_metadata db_updates = {'spec': jsonutils.dumps(spec.obj_to_primitive())} if 'instance_uuid' in updates: db_updates['instance_uuid'] = updates['instance_uuid'] return db_updates @base.remotable def create(self): if self.obj_attr_is_set('id'): raise exception.ObjectActionError(action='create', reason='already created') updates = self._get_update_primitives() if not updates: raise exception.ObjectActionError(action='create', reason='no fields are set') db_spec = self._create_in_db(self._context, updates) self._from_db_object(self._context, self, db_spec) @staticmethod @db.api_context_manager.writer def _save_in_db(context, instance_uuid, updates): # FIXME(sbauza): Provide a classmethod when oslo.db bug #1520195 is # fixed and released db_spec = RequestSpec._get_by_instance_uuid_from_db(context, instance_uuid) db_spec.update(updates) db_spec.save(context.session) return db_spec @base.remotable def save(self): updates = self._get_update_primitives() if updates: db_spec = self._save_in_db(self._context, self.instance_uuid, updates) self._from_db_object(self._context, self, db_spec) self.obj_reset_changes() @staticmethod @db.api_context_manager.writer def _destroy_in_db(context, instance_uuid): result = context.session.query(api_models.RequestSpec).filter_by( instance_uuid=instance_uuid).delete() if not result: raise 
exception.RequestSpecNotFound(instance_uuid=instance_uuid) @base.remotable def destroy(self): self._destroy_in_db(self._context, self.instance_uuid) @staticmethod @db.api_context_manager.writer def _destroy_bulk_in_db(context, instance_uuids): return context.session.query(api_models.RequestSpec).filter( api_models.RequestSpec.instance_uuid.in_(instance_uuids)).\ delete(synchronize_session=False) @classmethod def destroy_bulk(cls, context, instance_uuids): return cls._destroy_bulk_in_db(context, instance_uuids) def reset_forced_destinations(self): """Clears the forced destination fields from the RequestSpec object. This method is for making sure we don't ask the scheduler to give us again the same destination(s) without persisting the modifications. """ self.force_hosts = None self.force_nodes = None # NOTE(sbauza): Make sure we don't persist this, we need to keep the # original request for the forced hosts self.obj_reset_changes(['force_hosts', 'force_nodes']) @property def maps_requested_resources(self): """Returns True if this RequestSpec needs to map requested_resources to resource providers, False otherwise. """ return 'requested_resources' in self and self.requested_resources def _is_valid_group_rp_mapping( self, group_rp_mapping, placement_allocations, provider_traits): """Decides if the mapping is valid from resources and traits perspective. :param group_rp_mapping: A list of RequestGroup - RP UUID two tuples representing a mapping between request groups in this RequestSpec and RPs from the allocation. It contains every RequestGroup in this RequestSpec but the mapping might not be valid from resources and traits perspective. :param placement_allocations: The overall allocation made by the scheduler for this RequestSpec :param provider_traits: A dict keyed by resource provider uuids containing the list of traits the given RP has. This dict contains info only about RPs appearing in the placement_allocations param. :return: True if each group's resource and trait request can be fulfilled from the RP it is mapped to. False otherwise. """ # Check that traits are matching for each group - rp pair in # this mapping for group, rp_uuid in group_rp_mapping: if not group.required_traits.issubset(provider_traits[rp_uuid]): return False # TODO(gibi): add support for groups with forbidden_traits and # aggregates # Check that each group can consume the requested resources from the rp # that it is mapped to in the current mapping. Consume each group's # request from the allocation, if anything drops below zero, then this # is not a solution rcs = set() allocs = copy.deepcopy(placement_allocations) for group, rp_uuid in group_rp_mapping: rp_allocs = allocs[rp_uuid]['resources'] for rc, amount in group.resources.items(): rcs.add(rc) if rc in rp_allocs: rp_allocs[rc] -= amount if rp_allocs[rc] < 0: return False else: return False # Check that all the allocations are consumed from the resource # classes that appear in the request groups. It should never happen # that we have a match but also have some leftover if placement returns # valid allocation candidates. Except if the leftover in the allocation # are due to the RC requested in the unnumbered group. 
for rp_uuid in allocs: rp_allocs = allocs[rp_uuid]['resources'] for rc, amount in group.resources.items(): if rc in rcs and rc in rp_allocs: if rp_allocs[rc] != 0: LOG.debug( 'Found valid group - RP mapping %s but there are ' 'allocations leftover in %s from resource class ' '%s', group_rp_mapping, allocs, rc) return False # If both the traits and the allocations are OK then mapping is valid return True def map_requested_resources_to_providers( self, placement_allocations, provider_traits): """Fill the provider_uuids field in each RequestGroup objects in the requested_resources field. The mapping is generated based on the overall allocation made for this RequestSpec, the request in each RequestGroup, and the traits of the RPs in the allocation. Limitations: * only groups with use_same_provider = True is mapped, the un-numbered group are not supported. * mapping is generated only based on the resource request and the required traits, aggregate membership and forbidden traits are not supported. * requesting the same resource class in numbered and un-numbered group is not supported We can live with these limitations today as Neutron does not use forbidden traits and aggregates in the request and each Neutron port is mapped to a numbered group and the resources class used by neutron ports are never requested through the flavor extra_spec. This is a workaround as placement does not return which RP fulfills which granular request group in the allocation candidate request. There is a spec proposing a solution in placement: https://review.opendev.org/#/c/597601/ :param placement_allocations: The overall allocation made by the scheduler for this RequestSpec :param provider_traits: A dict keyed by resource provider uuids containing the list of traits the given RP has. This dict contains info only about RPs appearing in the placement_allocations param. """ if not self.maps_requested_resources: # Nothing to do, so let's return early return for group in self.requested_resources: # See the limitations in the func doc above if (not group.use_same_provider or group.aggregates or group.forbidden_traits): raise NotImplementedError() # Iterate through every possible group - RP mappings and try to find a # valid one. If there are more than one possible solution then it is # enough to find one as these solutions are interchangeable from # backend (e.g. Neutron) perspective. LOG.debug('Trying to find a valid group - RP mapping for groups %s to ' 'allocations %s with traits %s', self.requested_resources, placement_allocations, provider_traits) # This generator first creates permutations with repetition of the RPs # with length of the number of groups we have. So if there is # 2 RPs (rp1, rp2) and # 3 groups (g1, g2, g3). # Then the itertools.product(('rp1', 'rp2'), repeat=3)) will be: # (rp1, rp1, rp1) # (rp1, rp1, rp2) # (rp1, rp2, rp1) # ... # (rp2, rp2, rp2) # Then we zip each of this permutations to our group list resulting in # a list of list of group - rp pairs: # [[('g1', 'rp1'), ('g2', 'rp1'), ('g3', 'rp1')], # [('g1', 'rp1'), ('g2', 'rp1'), ('g3', 'rp2')], # [('g1', 'rp1'), ('g2', 'rp2'), ('g3', 'rp1')], # ... # [('g1', 'rp2'), ('g2', 'rp2'), ('g3', 'rp2')]] # NOTE(gibi): the list() around the zip() below is needed as the # algorithm looks into the mapping more than once and zip returns an # iterator in py3.x. Still we need to generate a mapping once hence the # generator expression. 
every_possible_mapping = (list(zip(self.requested_resources, rps)) for rps in itertools.product( placement_allocations.keys(), repeat=len(self.requested_resources))) for mapping in every_possible_mapping: if self._is_valid_group_rp_mapping( mapping, placement_allocations, provider_traits): for group, rp in mapping: # NOTE(gibi): un-numbered group might be mapped to more # than one RP but we do not support that yet here. group.provider_uuids = [rp] LOG.debug('Found valid group - RP mapping %s', mapping) return # if we reached this point then none of the possible mappings was # valid. This should never happen as Placement returns allocation # candidates based on the overall resource request of the server # including the request of the groups. raise ValueError('No valid group - RP mapping is found for ' 'groups %s, allocation %s and provider traits %s' % (self.requested_resources, placement_allocations, provider_traits)) def get_request_group_mapping(self): """Return request group resource - provider mapping. This is currently used for Neutron ports that have resource request due to the port having QoS minimum bandwidth policy rule attached. :returns: A dict keyed by RequestGroup requester_id, currently Neutron port_id, to a list of resource provider UUIDs which provide resource for that RequestGroup. """ if ('requested_resources' in self and self.requested_resources is not None): return { group.requester_id: group.provider_uuids for group in self.requested_resources } @base.NovaObjectRegistry.register class Destination(base.NovaObject): # Version 1.0: Initial version # Version 1.1: Add cell field # Version 1.2: Add aggregates field # Version 1.3: Add allow_cross_cell_move field. # Version 1.4: Add forbidden_aggregates field VERSION = '1.4' fields = { 'host': fields.StringField(), # NOTE(sbauza): Given we want to split the host/node relationship later # and also remove the possibility to have multiple nodes per service, # let's provide a possible nullable node here. 'node': fields.StringField(nullable=True), 'cell': fields.ObjectField('CellMapping', nullable=True), # NOTE(dansmith): These are required aggregates (or sets) and # are passed to placement. See require_aggregates() below. 'aggregates': fields.ListOfStringsField(nullable=True, default=None), # NOTE(mriedem): allow_cross_cell_move defaults to False so that the # scheduler by default selects hosts from the cell specified in the # cell field. 'allow_cross_cell_move': fields.BooleanField(default=False), # NOTE(vrushali): These are forbidden aggregates passed to placement as # query params to the allocation candidates API. Nova uses this field # to implement the isolate_aggregates request filter. 'forbidden_aggregates': fields.SetOfStringsField(nullable=True, default=None), } def obj_make_compatible(self, primitive, target_version): super(Destination, self).obj_make_compatible(primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 4): if 'forbidden_aggregates' in primitive: del primitive['forbidden_aggregates'] if target_version < (1, 3) and 'allow_cross_cell_move' in primitive: del primitive['allow_cross_cell_move'] if target_version < (1, 2): if 'aggregates' in primitive: del primitive['aggregates'] if target_version < (1, 1): if 'cell' in primitive: del primitive['cell'] def obj_load_attr(self, attrname): self.obj_set_defaults(attrname) def require_aggregates(self, aggregates): """Add a set of aggregates to the list of required aggregates. 
This will take a list of aggregates, which are to be logically OR'd together and add them to the list of required aggregates that will be used to query placement. Aggregate sets provided in sequential calls to this method will be AND'd together. For example, the following set of calls: dest.require_aggregates(['foo', 'bar']) dest.require_aggregates(['baz']) will generate the following logical query to placement: "Candidates should be in 'foo' OR 'bar', but definitely in 'baz'" :param aggregates: A list of aggregates, at least one of which must contain the destination host. """ if self.aggregates is None: self.aggregates = [] self.aggregates.append(','.join(aggregates)) def append_forbidden_aggregates(self, forbidden_aggregates): """Add a set of aggregates to the forbidden aggregates. This will take a set of forbidden aggregates that should be ignored by the placement service. :param forbidden_aggregates: A set of aggregates which should be ignored by the placement service. """ if self.forbidden_aggregates is None: self.forbidden_aggregates = set([]) self.forbidden_aggregates |= forbidden_aggregates @base.NovaObjectRegistry.register class SchedulerRetries(base.NovaObject): # Version 1.0: Initial version # Version 1.1: ComputeNodeList version 1.14 VERSION = '1.1' fields = { 'num_attempts': fields.IntegerField(), # NOTE(sbauza): Even if we are only using host/node strings, we need to # know which compute nodes were tried 'hosts': fields.ObjectField('ComputeNodeList'), } @classmethod def from_dict(cls, context, retry_dict): # NOTE(sbauza): We are not persisting the user context since it's only # needed for hydrating the Retry object retry_obj = cls() if not ('num_attempts' and 'hosts') in retry_dict: # NOTE(sbauza): We prefer to return an empty object if the # primitive is not good enough return retry_obj retry_obj.num_attempts = retry_dict.get('num_attempts') # NOTE(sbauza): each retry_dict['hosts'] item is a list of [host, node] computes = [objects.ComputeNode(context=context, host=host, hypervisor_hostname=node) for host, node in retry_dict.get('hosts')] retry_obj.hosts = objects.ComputeNodeList(objects=computes) return retry_obj def to_dict(self): legacy_hosts = [[cn.host, cn.hypervisor_hostname] for cn in self.hosts] return {'num_attempts': self.num_attempts, 'hosts': legacy_hosts} @base.NovaObjectRegistry.register class SchedulerLimits(base.NovaObject): # Version 1.0: Initial version VERSION = '1.0' fields = { 'numa_topology': fields.ObjectField('NUMATopologyLimits', nullable=True, default=None), 'vcpu': fields.IntegerField(nullable=True, default=None), 'disk_gb': fields.IntegerField(nullable=True, default=None), 'memory_mb': fields.IntegerField(nullable=True, default=None), } @classmethod def from_dict(cls, limits_dict): limits = cls(**limits_dict) # NOTE(sbauza): Since the limits can be set for each field or not, we # prefer to have the fields nullable, but default the value to None. # Here we accept that the object is always generated from a primitive # hence the use of obj_set_defaults exceptionally. limits.obj_set_defaults() return limits def to_dict(self): limits = {} for field in self.fields: if getattr(self, field) is not None: limits[field] = getattr(self, field) return limits @base.NovaObjectRegistry.register class RequestGroup(base.NovaEphemeralObject): """Versioned object based on the unversioned nova.api.openstack.placement.lib.RequestGroup object. 
""" # Version 1.0: Initial version # Version 1.1: add requester_id and provider_uuids fields # Version 1.2: add in_tree field # Version 1.3: Add forbidden_aggregates field VERSION = '1.3' fields = { 'use_same_provider': fields.BooleanField(default=True), 'resources': fields.DictOfIntegersField(default={}), 'required_traits': fields.SetOfStringsField(default=set()), 'forbidden_traits': fields.SetOfStringsField(default=set()), # The aggregates field has a form of # [[aggregate_UUID1], # [aggregate_UUID2, aggregate_UUID3]] # meaning that the request should be fulfilled from an RP that is a # member of the aggregate aggregate_UUID1 and member of the aggregate # aggregate_UUID2 or aggregate_UUID3 . 'aggregates': fields.ListOfListsOfStringsField(default=[]), # The forbidden_aggregates field has a form of # set(['aggregate_UUID1', 'aggregate_UUID12', 'aggregate_UUID3']) # meaning that the request should not be fulfilled from an RP # belonging to any of the aggregates in forbidden_aggregates field. 'forbidden_aggregates': fields.SetOfStringsField(default=set()), # The entity the request is coming from (e.g. the Neutron port uuid) # which may not always be a UUID. 'requester_id': fields.StringField(nullable=True, default=None), # The resource provider UUIDs that together fulfill the request # NOTE(gibi): this can be more than one if this is the unnumbered # request group (i.e. use_same_provider=False) 'provider_uuids': fields.ListOfUUIDField(default=[]), 'in_tree': fields.UUIDField(nullable=True, default=None), } @classmethod def from_port_request(cls, context, port_uuid, port_resource_request): """Init the group from the resource request of a neutron port :param context: the request context :param port_uuid: the port requesting the resources :param port_resource_request: the resource_request attribute of the neutron port For example: port_resource_request = { "resources": { "NET_BW_IGR_KILOBIT_PER_SEC": 1000, "NET_BW_EGR_KILOBIT_PER_SEC": 1000}, "required": ["CUSTOM_PHYSNET_2", "CUSTOM_VNIC_TYPE_NORMAL"] } """ # NOTE(gibi): Assumptions: # * a port requests resource from a single provider. # * a port only specifies resources and required traits # NOTE(gibi): Placement rejects allocation candidates where a request # group has traits but no resources specified. This is why resources # are handled as mandatory below but not traits. obj = cls(context=context, use_same_provider=True, resources=port_resource_request['resources'], required_traits=set(port_resource_request.get( 'required', [])), requester_id=port_uuid) obj.obj_set_defaults() return obj def obj_make_compatible(self, primitive, target_version): super(RequestGroup, self).obj_make_compatible( primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 3): if 'forbidden_aggregates' in primitive: del primitive['forbidden_aggregates'] if target_version < (1, 2): if 'in_tree' in primitive: del primitive['in_tree'] if target_version < (1, 1): if 'requester_id' in primitive: del primitive['requester_id'] if 'provider_uuids' in primitive: del primitive['provider_uuids'] def add_resource(self, rclass, amount): # Validate the class. if not (rclass.startswith(orc.CUSTOM_NAMESPACE) or rclass in orc.STANDARDS): LOG.warning( "Received an invalid ResourceClass '%(key)s' in extra_specs.", {"key": rclass}) return # val represents the amount. Convert to int, or warn and skip. 
try: amount = int(amount) if amount < 0: raise ValueError() except ValueError: LOG.warning( "Resource amounts must be nonnegative integers. Received '%s'", amount) return self.resources[rclass] = amount def add_trait(self, trait_name, trait_type): # Currently the only valid values for a trait entry are 'required' # and 'forbidden' trait_vals = ('required', 'forbidden') if trait_type == 'required': self.required_traits.add(trait_name) elif trait_type == 'forbidden': self.forbidden_traits.add(trait_name) else: LOG.warning( "Only (%(tvals)s) traits are supported. Received '%(val)s'.", {"tvals": ', '.join(trait_vals), "val": trait_type}) def is_empty(self): return not any(( self.resources, self.required_traits, self.forbidden_traits, self.aggregates, self.forbidden_aggregates)) def strip_zeros(self): """Remove any resources whose amount is zero.""" for rclass in list(self.resources): if self.resources[rclass] == 0: self.resources.pop(rclass) @base.NovaObjectRegistry.register class RequestLevelParams(base.NovaObject): """Options destined for the "top level" of the placement allocation candidates query (parallel to, but not including, the list of RequestGroup). """ # Version 1.0: Initial version VERSION = '1.0' fields = { # Traits required on the root provider 'root_required': fields.SetOfStringsField(default=set()), # Traits forbidden on the root provider 'root_forbidden': fields.SetOfStringsField(default=set()), # NOTE(efried): group_policy would be appropriate to include here, once # we have a use case for it. } def obj_load_attr(self, attrname): self.obj_set_defaults(attrname) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/resource.py0000664000175000017500000000711100000000000017363 0ustar00zuulzuul00000000000000# Copyright 2019 Intel Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils from nova.db import api as db from nova.objects import base from nova.objects import fields @base.NovaObjectRegistry.register class ResourceMetadata(base.NovaObject): # Version 1.0: Initial version VERSION = "1.0" # This is parent object of specific resources. # And it's used to be a object field of Resource, # that is to say Resource.metadata. 
def __eq__(self, other): return base.all_things_equal(self, other) def __ne__(self, other): return not (self == other) @base.NovaObjectRegistry.register class Resource(base.NovaObject): # Version 1.0: Initial version VERSION = "1.0" fields = { # UUID of resource provider 'provider_uuid': fields.UUIDField(), # resource class of the Resource 'resource_class': fields.ResourceClassField(), # identifier is used to identify resource, it is up to virt drivers # for mdev, it will be a UUID, for vpmem, it's backend namespace name 'identifier': fields.StringField(), # metadata is used to contain virt driver specific resource info 'metadata': fields.ObjectField('ResourceMetadata', subclasses=True), } def __eq__(self, other): return base.all_things_equal(self, other) def __ne__(self, other): return not (self == other) def __hash__(self): metadata = self.metadata if 'metadata' in self else None return hash((self.provider_uuid, self.resource_class, self.identifier, metadata)) @base.NovaObjectRegistry.register class ResourceList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version VERSION = "1.0" fields = { 'objects': fields.ListOfObjectsField('Resource'), } @base.remotable_classmethod def get_by_instance_uuid(cls, context, instance_uuid): db_extra = db.instance_extra_get_by_instance_uuid( context, instance_uuid, columns=['resources']) if not db_extra or db_extra['resources'] is None: return None primitive = jsonutils.loads(db_extra['resources']) resources = cls.obj_from_primitive(primitive) return resources @base.NovaObjectRegistry.register class LibvirtVPMEMDevice(ResourceMetadata): # Version 1.0: Initial version VERSION = "1.0" fields = { # This is configured in file, used to generate resource class name # CUSTOM_PMEM_NAMESPACE_$LABEL 'label': fields.StringField(), # Backend pmem namespace's name 'name': fields.StringField(), # Backend pmem namespace's size 'size': fields.IntegerField(), # Backend device path 'devpath': fields.StringField(), # Backend pmem namespace's alignment 'align': fields.IntegerField(), } def __hash__(self): # Be sure all fields are set before using hash method return hash((self.label, self.name, self.size, self.devpath, self.align)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/security_group.py0000664000175000017500000001576500000000000020635 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
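# ---------------------------------------------------------------------------
# Illustrative sketch (not part of Nova): how the Resource and
# LibvirtVPMEMDevice objects defined in nova/objects/resource.py above could
# be combined by a virt driver to describe an assigned vPMEM namespace. The
# concrete field values and the helper name are hypothetical; a Nova
# environment where these versioned objects are importable is assumed.
from nova.objects import resource as resource_obj


def _example_vpmem_resource(provider_uuid):
    # Metadata describing the backend pmem namespace (values are made up).
    metadata = resource_obj.LibvirtVPMEMDevice(
        label='SMALL', name='ns_0', size=4292870144,
        devpath='/dev/dax0.0', align=2097152)
    # The resource itself: identified by the namespace name and tied to the
    # resource provider exposing CUSTOM_PMEM_NAMESPACE_SMALL inventory.
    return resource_obj.Resource(
        provider_uuid=provider_uuid,
        resource_class='CUSTOM_PMEM_NAMESPACE_SMALL',
        identifier='ns_0',
        metadata=metadata)
# Because Resource implements __eq__ and __hash__ over all of its fields,
# objects built this way can be collected in sets and deduplicated when
# claiming or freeing devices.
# ---------------------------------------------------------------------------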
# TODO(stephenfin): This is all nova-network related and can be deleted as soon # as we remove the 'security_group' field from the 'Instance' object from oslo_utils import uuidutils from oslo_utils import versionutils from nova.db import api as db from nova.db.sqlalchemy import api as db_api from nova.db.sqlalchemy import models from nova import objects from nova.objects import base from nova.objects import fields @base.NovaObjectRegistry.register class SecurityGroup(base.NovaPersistentObject, base.NovaObject): # Version 1.0: Initial version # Version 1.1: String attributes updated to support unicode # Version 1.2: Added uuid field for Neutron security groups. VERSION = '1.2' fields = { 'id': fields.IntegerField(), 'name': fields.StringField(), 'description': fields.StringField(), 'user_id': fields.StringField(), 'project_id': fields.StringField(), # The uuid field is only used for Neutron security groups and is not # persisted to the Nova database. 'uuid': fields.UUIDField() } def obj_make_compatible(self, primitive, target_version): target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 2) and 'uuid' in primitive: del primitive['uuid'] @staticmethod def _from_db_object(context, secgroup, db_secgroup): for field in secgroup.fields: if field != 'uuid': setattr(secgroup, field, db_secgroup[field]) secgroup._context = context secgroup.obj_reset_changes() return secgroup @base.remotable_classmethod def get(cls, context, secgroup_id): db_secgroup = db.security_group_get(context, secgroup_id) return cls._from_db_object(context, cls(), db_secgroup) @base.remotable_classmethod def get_by_name(cls, context, project_id, group_name): db_secgroup = db.security_group_get_by_name(context, project_id, group_name) return cls._from_db_object(context, cls(), db_secgroup) @base.remotable def in_use(self): return db.security_group_in_use(self._context, self.id) @base.remotable def save(self): updates = self.obj_get_changes() # We don't store uuid in the Nova database so remove it if someone # mistakenly tried to save a neutron security group object. We only # need the uuid in the object for obj_to_primitive() calls where this # object is serialized and stored in the RequestSpec object. 
updates.pop('uuid', None) if updates: db_secgroup = db.security_group_update(self._context, self.id, updates) self._from_db_object(self._context, self, db_secgroup) self.obj_reset_changes() @base.remotable def refresh(self): self._from_db_object(self._context, self, db.security_group_get(self._context, self.id)) @property def identifier(self): return self.uuid if 'uuid' in self else self.name @base.NovaObjectRegistry.register class SecurityGroupList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version # SecurityGroup <= version 1.1 # Version 1.1: Added get_counts() for quotas VERSION = '1.1' fields = { 'objects': fields.ListOfObjectsField('SecurityGroup'), } def __init__(self, *args, **kwargs): super(SecurityGroupList, self).__init__(*args, **kwargs) self.objects = [] self.obj_reset_changes() @staticmethod @db_api.pick_context_manager_reader def _get_counts_from_db(context, project_id, user_id=None): query = context.session.query(models.SecurityGroup.id).\ filter_by(deleted=0).\ filter_by(project_id=project_id) counts = {} counts['project'] = {'security_groups': query.count()} if user_id: query = query.filter_by(user_id=user_id) counts['user'] = {'security_groups': query.count()} return counts @base.remotable_classmethod def get_all(cls, context): groups = db.security_group_get_all(context) return base.obj_make_list(context, cls(context), objects.SecurityGroup, groups) @base.remotable_classmethod def get_by_project(cls, context, project_id): groups = db.security_group_get_by_project(context, project_id) return base.obj_make_list(context, cls(context), objects.SecurityGroup, groups) @base.remotable_classmethod def get_by_instance(cls, context, instance): groups = db.security_group_get_by_instance(context, instance.uuid) return base.obj_make_list(context, cls(context), objects.SecurityGroup, groups) @base.remotable_classmethod def get_counts(cls, context, project_id, user_id=None): """Get the counts of SecurityGroup objects in the database. :param context: The request context for database access :param project_id: The project_id to count across :param user_id: The user_id to count across :returns: A dict containing the project-scoped counts and user-scoped counts if user_id is specified. For example: {'project': {'security_groups': }, 'user': {'security_groups': }} """ return cls._get_counts_from_db(context, project_id, user_id=user_id) def make_secgroup_list(security_groups): """A helper to make security group objects from a list of names or uuids. Note that this does not make them save-able or have the rest of the attributes they would normally have, but provides a quick way to fill, for example, an instance object during create. """ secgroups = objects.SecurityGroupList() secgroups.objects = [] for sg in security_groups: secgroup = objects.SecurityGroup() if uuidutils.is_uuid_like(sg): # This is a neutron security group uuid so store in the uuid field. secgroup.uuid = sg else: # This is neutron's special 'default' security group secgroup.name = sg secgroups.objects.append(secgroup) return secgroups ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/objects/selection.py0000664000175000017500000001174500000000000017531 0ustar00zuulzuul00000000000000# Copyright (c) 2017 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils from oslo_utils import versionutils from oslo_versionedobjects import base as ovo_base from oslo_versionedobjects import fields from nova import conf from nova import objects from nova.objects import base from nova.scheduler.filters import utils as filter_utils CONF = conf.CONF @base.NovaObjectRegistry.register class Selection(base.NovaObject, ovo_base.ComparableVersionedObject): """Represents a destination that has been selected by the Scheduler. Note that these objects are not persisted to the database. """ # Version 1.0: Initial version # Version 1.1: Added availability_zone field. VERSION = "1.1" fields = { "compute_node_uuid": fields.UUIDField(), "service_host": fields.StringField(), "nodename": fields.StringField(), "cell_uuid": fields.UUIDField(), "limits": fields.ObjectField("SchedulerLimits", nullable=True), # An allocation_request is a non-trivial dict, and so it will be stored # as an encoded string. "allocation_request": fields.StringField(nullable=True), "allocation_request_version": fields.StringField(nullable=True), # The availability_zone represents the AZ the service_host is in at # the time of scheduling. This is nullable for two reasons: # 1. The Instance.availability_zone field is nullable - though that's # not a great reason, the bigger reason is: # 2. The host may not be in an AZ, and CONF.default_availability_zone # is a StrOpt which technically could be set to None, so we have to # account for it being a None value (rather than just not setting # the field). 'availability_zone': fields.StringField(nullable=True), } def obj_make_compatible(self, primitive, target_version): super(Selection, self).obj_make_compatible(primitive, target_version) target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 1): primitive.pop('availability_zone', None) @classmethod def from_host_state(cls, host_state, allocation_request=None, allocation_request_version=None): """A convenience method for converting a HostState, an allocation_request, and an allocation_request_version into a Selection object. Note that allocation_request and allocation_request_version must be passed separately, as they are not part of the HostState. """ allocation_request_json = jsonutils.dumps(allocation_request) limits = objects.SchedulerLimits.from_dict(host_state.limits) # Note that the AZ logic here is similar to the AvailabilityZoneFilter. metadata = filter_utils.aggregate_metadata_get_by_host( host_state, key='availability_zone') availability_zone = metadata.get('availability_zone') if availability_zone: # aggregate_metadata_get_by_host returns a set for the value but # a host can only be in one AZ. 
availability_zone = list(availability_zone)[0] else: availability_zone = CONF.default_availability_zone return cls(compute_node_uuid=host_state.uuid, service_host=host_state.host, nodename=host_state.nodename, cell_uuid=host_state.cell_uuid, limits=limits, allocation_request=allocation_request_json, allocation_request_version=allocation_request_version, availability_zone=availability_zone) def to_dict(self): if self.limits is not None: limits = self.limits.to_dict() else: limits = {} # The NUMATopologyFilter can set 'numa_topology' in the limits dict to # a NUMATopologyLimits object which we need to convert to a primitive # before this hits jsonutils.to_primitive(). We only check for that # known case specifically as we don't care about handling out of tree # filters or drivers injecting non-serializable things in the limits # dict. numa_limit = limits.get("numa_topology") if numa_limit is not None: limits['numa_topology'] = numa_limit.obj_to_primitive() return { 'host': self.service_host, 'nodename': self.nodename, 'limits': limits, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/service.py0000664000175000017500000006776500000000000017221 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from oslo_utils import uuidutils from oslo_utils import versionutils from nova import availability_zones from nova import context as nova_context from nova.db import api as db from nova import exception from nova.notifications.objects import base as notification from nova.notifications.objects import service as service_notification from nova import objects from nova.objects import base from nova.objects import fields LOG = logging.getLogger(__name__) # NOTE(danms): This is the global service version counter SERVICE_VERSION = 51 # NOTE(danms): This is our SERVICE_VERSION history. The idea is that any # time we bump the version, we will put an entry here to record the change, # along with any pertinent data. For things that we can programatically # detect that need a bump, we put something in _collect_things() below to # assemble a dict of things we can check. For example, we pretty much always # want to consider the compute RPC API version a thing that requires a service # bump so that we can drive version pins from it. We could include other # service RPC versions at some point, minimum object versions, etc. # # The TestServiceVersion test will fail if the calculated set of # things differs from the value in the last item of the list below, # indicating that a version bump is needed. # # Also note that there are other reasons we may want to bump this, # which will not be caught by the test. An example of this would be # triggering (or disabling) an online data migration once all services # in the cluster are at the same level. 
# # If a version bump is required for something mechanical, just document # that generic thing here (like compute RPC version bumps). No need to # replicate the details from compute/rpcapi.py here. However, for more # complex service interactions, extra detail should be provided SERVICE_VERSION_HISTORY = ( # Version 0: Pre-history {'compute_rpc': '4.0'}, # Version 1: Introduction of SERVICE_VERSION {'compute_rpc': '4.4'}, # Version 2: Compute RPC version 4.5 {'compute_rpc': '4.5'}, # Version 3: Compute RPC version 4.6 {'compute_rpc': '4.6'}, # Version 4: Add PciDevice.parent_addr (data migration needed) {'compute_rpc': '4.6'}, # Version 5: Compute RPC version 4.7 {'compute_rpc': '4.7'}, # Version 6: Compute RPC version 4.8 {'compute_rpc': '4.8'}, # Version 7: Compute RPC version 4.9 {'compute_rpc': '4.9'}, # Version 8: Compute RPC version 4.10 {'compute_rpc': '4.10'}, # Version 9: Compute RPC version 4.11 {'compute_rpc': '4.11'}, # Version 10: Compute node conversion to Inventories {'compute_rpc': '4.11'}, # Version 11: Compute RPC version 4.12 {'compute_rpc': '4.12'}, # Version 12: The network APIs and compute manager support a NetworkRequest # object where the network_id value is 'auto' or 'none'. BuildRequest # objects are populated by nova-api during instance boot. {'compute_rpc': '4.12'}, # Version 13: Compute RPC version 4.13 {'compute_rpc': '4.13'}, # Version 14: The compute manager supports setting device tags. {'compute_rpc': '4.13'}, # Version 15: Indicate that nova-conductor will stop a boot if BuildRequest # is deleted before RPC to nova-compute. {'compute_rpc': '4.13'}, # Version 16: Indicate that nova-compute will refuse to start if it doesn't # have a placement section configured. {'compute_rpc': '4.13'}, # Version 17: Add 'reserve_volume' to the boot from volume flow and # remove 'check_attach'. The service version bump is needed to fall back to # the old check in the API as the old computes fail if the volume is moved # to 'attaching' state by reserve. {'compute_rpc': '4.13'}, # Version 18: Compute RPC version 4.14 {'compute_rpc': '4.14'}, # Version 19: Compute RPC version 4.15 {'compute_rpc': '4.15'}, # Version 20: Compute RPC version 4.16 {'compute_rpc': '4.16'}, # Version 21: Compute RPC version 4.17 {'compute_rpc': '4.17'}, # Version 22: A marker for the behaviour change of auto-healing code on the # compute host regarding allocations against an instance {'compute_rpc': '4.17'}, # Version 23: Compute hosts allow pre-creation of the migration object # for cold migration. {'compute_rpc': '4.18'}, # Version 24: Add support for Cinder v3 attach/detach API. {'compute_rpc': '4.18'}, # Version 25: Compute hosts allow migration-based allocations # for live migration. {'compute_rpc': '4.18'}, # Version 26: Adds a 'host_list' parameter to build_and_run_instance() {'compute_rpc': '4.19'}, # Version 27: Compute RPC version 4.20; adds multiattach argument to # reserve_block_device_name(). {'compute_rpc': '4.20'}, # Version 28: Adds a 'host_list' parameter to prep_resize() {'compute_rpc': '4.21'}, # Version 29: Compute RPC version 4.22 {'compute_rpc': '4.22'}, # Version 30: Compute RPC version 5.0 {'compute_rpc': '5.0'}, # Version 31: The compute manager checks if 'trusted_certs' are supported {'compute_rpc': '5.0'}, # Version 32: Add 'file_backed_memory' support. 
The service version bump is # needed to allow the destination of a live migration to reject the # migration if 'file_backed_memory' is enabled and the source does not # support 'file_backed_memory' {'compute_rpc': '5.0'}, # Version 33: Add support for check on the server group with # 'max_server_per_host' rules {'compute_rpc': '5.0'}, # Version 34: Adds support to abort queued/preparing live migrations. {'compute_rpc': '5.0'}, # Version 35: Indicates that nova-compute supports live migration with # ports bound early on the destination host using VIFMigrateData. {'compute_rpc': '5.0'}, # Version 36: Indicates that nova-compute supports specifying volume # type when booting a volume-backed server. {'compute_rpc': '5.0'}, # Version 37: prep_resize takes a RequestSpec object {'compute_rpc': '5.1'}, # Version 38: set_host_enabled reflects COMPUTE_STATUS_DISABLED trait {'compute_rpc': '5.1'}, # Version 39: resize_instance, finish_resize, revert_resize, # finish_revert_resize, unshelve_instance takes a RequestSpec object {'compute_rpc': '5.2'}, # Version 40: Add migration and limits parameters to # check_can_live_migrate_destination(), new # drop_move_claim_at_destination() method, and numa_live_migration # parameter to check_can_live_migrate_source() {'compute_rpc': '5.3'}, # Version 41: Add cache_images() to compute rpcapi (version 5.4) {'compute_rpc': '5.4'}, # Version 42: Compute RPC version 5.5; +prep_snapshot_based_resize_at_dest {'compute_rpc': '5.5'}, # Version 43: Compute RPC version 5.6: prep_snapshot_based_resize_at_source {'compute_rpc': '5.6'}, # Version 44: Compute RPC version 5.7: finish_snapshot_based_resize_at_dest {'compute_rpc': '5.7'}, # Version 45: Compute RPC v5.8: confirm_snapshot_based_resize_at_source {'compute_rpc': '5.8'}, # Version 46: Compute RPC v5.9: revert_snapshot_based_resize_at_dest {'compute_rpc': '5.9'}, # Version 47: Compute RPC v5.10: # finish_revert_snapshot_based_resize_at_source {'compute_rpc': '5.10'}, # Version 48: Drivers report COMPUTE_SAME_HOST_COLD_MIGRATE trait. {'compute_rpc': '5.10'}, # Version 49: Compute now support server move operations with qos ports {'compute_rpc': '5.10'}, # Version 50: Compute RPC v5.11: # Add accel_uuids (accelerator requests) param to build_and_run_instance {'compute_rpc': '5.11'}, # Version 51: Add support for live migration with vpmem {'compute_rpc': '5.11'}, ) # This is used to raise an error at service startup if older than N-1 computes # are detected. 
Update this at the beginning of every release cycle OLDEST_SUPPORTED_SERVICE_VERSION = 'Ussuri' SERVICE_VERSION_ALIASES = { 'Ussuri': 41 } # TODO(berrange): Remove NovaObjectDictCompat @base.NovaObjectRegistry.register class Service(base.NovaPersistentObject, base.NovaObject, base.NovaObjectDictCompat): # Version 1.0: Initial version # Version 1.1: Added compute_node nested object # Version 1.2: String attributes updated to support unicode # Version 1.3: ComputeNode version 1.5 # Version 1.4: Added use_slave to get_by_compute_host # Version 1.5: ComputeNode version 1.6 # Version 1.6: ComputeNode version 1.7 # Version 1.7: ComputeNode version 1.8 # Version 1.8: ComputeNode version 1.9 # Version 1.9: ComputeNode version 1.10 # Version 1.10: Changes behaviour of loading compute_node # Version 1.11: Added get_by_host_and_binary # Version 1.12: ComputeNode version 1.11 # Version 1.13: Added last_seen_up # Version 1.14: Added forced_down # Version 1.15: ComputeNode version 1.12 # Version 1.16: Added version # Version 1.17: ComputeNode version 1.13 # Version 1.18: ComputeNode version 1.14 # Version 1.19: Added get_minimum_version() # Version 1.20: Added get_minimum_version_multi() # Version 1.21: Added uuid # Version 1.22: Added get_by_uuid() VERSION = '1.22' fields = { 'id': fields.IntegerField(read_only=True), 'uuid': fields.UUIDField(), 'host': fields.StringField(nullable=True), 'binary': fields.StringField(nullable=True), 'topic': fields.StringField(nullable=True), 'report_count': fields.IntegerField(), 'disabled': fields.BooleanField(), 'disabled_reason': fields.StringField(nullable=True), 'availability_zone': fields.StringField(nullable=True), 'compute_node': fields.ObjectField('ComputeNode'), 'last_seen_up': fields.DateTimeField(nullable=True), 'forced_down': fields.BooleanField(), 'version': fields.IntegerField(), } _MIN_VERSION_CACHE = {} _SERVICE_VERSION_CACHING = False def __init__(self, *args, **kwargs): # NOTE(danms): We're going against the rules here and overriding # init. The reason is that we want to *ensure* that we're always # setting the current service version on our objects, overriding # whatever else might be set in the database, or otherwise (which # is the normal reason not to override init). # # We also need to do this here so that it's set on the client side # all the time, such that create() and save() operations will # include the current service version. 
if 'version' in kwargs: raise exception.ObjectActionError( action='init', reason='Version field is immutable') super(Service, self).__init__(*args, **kwargs) self.version = SERVICE_VERSION def obj_make_compatible_from_manifest(self, primitive, target_version, version_manifest): super(Service, self).obj_make_compatible_from_manifest( primitive, target_version, version_manifest) _target_version = versionutils.convert_version_to_tuple(target_version) if _target_version < (1, 21) and 'uuid' in primitive: del primitive['uuid'] if _target_version < (1, 16) and 'version' in primitive: del primitive['version'] if _target_version < (1, 14) and 'forced_down' in primitive: del primitive['forced_down'] if _target_version < (1, 13) and 'last_seen_up' in primitive: del primitive['last_seen_up'] if _target_version < (1, 10): # service.compute_node was not lazy-loaded, we need to provide it # when called self._do_compute_node(self._context, primitive, version_manifest) def _do_compute_node(self, context, primitive, version_manifest): try: target_version = version_manifest['ComputeNode'] # NOTE(sbauza): Ironic deployments can have multiple # nodes for the same service, but for keeping same behaviour, # returning only the first elem of the list compute = objects.ComputeNodeList.get_all_by_host( context, primitive['host'])[0] except Exception: return primitive['compute_node'] = compute.obj_to_primitive( target_version=target_version, version_manifest=version_manifest) @staticmethod def _from_db_object(context, service, db_service): allow_missing = ('availability_zone',) for key in service.fields: if key in allow_missing and key not in db_service: continue if key == 'compute_node': # NOTE(sbauza); We want to only lazy-load compute_node continue elif key == 'version': # NOTE(danms): Special handling of the version field, since # it is read_only and set in our init. setattr(service, base.get_attrname(key), db_service[key]) elif key == 'uuid' and not db_service.get(key): # Leave uuid off the object if undefined in the database # so that it will be generated below. 
continue else: service[key] = db_service[key] service._context = context service.obj_reset_changes() return service def obj_load_attr(self, attrname): if not self._context: raise exception.OrphanedObjectError(method='obj_load_attr', objtype=self.obj_name()) LOG.debug("Lazy-loading '%(attr)s' on %(name)s id %(id)s", {'attr': attrname, 'name': self.obj_name(), 'id': self.id, }) if attrname != 'compute_node': raise exception.ObjectActionError( action='obj_load_attr', reason='attribute %s not lazy-loadable' % attrname) if self.binary == 'nova-compute': # Only n-cpu services have attached compute_node(s) compute_nodes = objects.ComputeNodeList.get_all_by_host( self._context, self.host) else: # NOTE(sbauza); Previous behaviour was raising a ServiceNotFound, # we keep it for backwards compatibility raise exception.ServiceNotFound(service_id=self.id) # NOTE(sbauza): Ironic deployments can have multiple nodes # for the same service, but for keeping same behaviour, returning only # the first elem of the list self.compute_node = compute_nodes[0] @base.remotable_classmethod def get_by_id(cls, context, service_id): db_service = db.service_get(context, service_id) return cls._from_db_object(context, cls(), db_service) @base.remotable_classmethod def get_by_uuid(cls, context, service_uuid): db_service = db.service_get_by_uuid(context, service_uuid) return cls._from_db_object(context, cls(), db_service) @base.remotable_classmethod def get_by_host_and_topic(cls, context, host, topic): db_service = db.service_get_by_host_and_topic(context, host, topic) return cls._from_db_object(context, cls(), db_service) @base.remotable_classmethod def get_by_host_and_binary(cls, context, host, binary): try: db_service = db.service_get_by_host_and_binary(context, host, binary) except exception.HostBinaryNotFound: return return cls._from_db_object(context, cls(), db_service) @staticmethod @db.select_db_reader_mode def _db_service_get_by_compute_host(context, host, use_slave=False): return db.service_get_by_compute_host(context, host) @base.remotable_classmethod def get_by_compute_host(cls, context, host, use_slave=False): db_service = cls._db_service_get_by_compute_host(context, host, use_slave=use_slave) return cls._from_db_object(context, cls(), db_service) # NOTE(ndipanov): This is deprecated and should be removed on the next # major version bump @base.remotable_classmethod def get_by_args(cls, context, host, binary): db_service = db.service_get_by_host_and_binary(context, host, binary) return cls._from_db_object(context, cls(), db_service) def _check_minimum_version(self): """Enforce that we are not older that the minimum version. This is a loose check to avoid creating or updating our service record if we would do so with a version that is older that the current minimum of all services. This could happen if we were started with older code by accident, either due to a rollback or an old and un-updated node suddenly coming back onto the network. There is technically a race here between the check and the update, but since the minimum version should always roll forward and never backwards, we don't need to worry about doing it atomically. Further, the consequence for getting this wrong is minor, in that we'll just fail to send messages that other services understand. 
""" if not self.obj_attr_is_set('version'): return if not self.obj_attr_is_set('binary'): return minver = self.get_minimum_version(self._context, self.binary) if minver > self.version: raise exception.ServiceTooOld(thisver=self.version, minver=minver) @base.remotable def create(self): if self.obj_attr_is_set('id'): raise exception.ObjectActionError(action='create', reason='already created') self._check_minimum_version() updates = self.obj_get_changes() if 'uuid' not in updates: updates['uuid'] = uuidutils.generate_uuid() self.uuid = updates['uuid'] db_service = db.service_create(self._context, updates) self._from_db_object(self._context, self, db_service) self._send_notification(fields.NotificationAction.CREATE) @base.remotable def save(self): updates = self.obj_get_changes() updates.pop('id', None) self._check_minimum_version() db_service = db.service_update(self._context, self.id, updates) self._from_db_object(self._context, self, db_service) self._send_status_update_notification(updates) def _send_status_update_notification(self, updates): # Note(gibi): We do not trigger notification on version as that field # is always dirty, which would cause that nova sends notification on # every other field change. See the comment in save() too. if set(updates.keys()).intersection( {'disabled', 'disabled_reason', 'forced_down'}): self._send_notification(fields.NotificationAction.UPDATE) def _send_notification(self, action): payload = service_notification.ServiceStatusPayload(self) service_notification.ServiceStatusNotification( publisher=notification.NotificationPublisher.from_service_obj( self), event_type=notification.EventType( object='service', action=action), priority=fields.NotificationPriority.INFO, payload=payload).emit(self._context) @base.remotable def destroy(self): db.service_destroy(self._context, self.id) self._send_notification(fields.NotificationAction.DELETE) @classmethod def enable_min_version_cache(cls): cls.clear_min_version_cache() cls._SERVICE_VERSION_CACHING = True @classmethod def clear_min_version_cache(cls): cls._MIN_VERSION_CACHE = {} @staticmethod @db.select_db_reader_mode def _db_service_get_minimum_version(context, binaries, use_slave=False): return db.service_get_minimum_version(context, binaries) @base.remotable_classmethod def get_minimum_version_multi(cls, context, binaries, use_slave=False): if not all(binary.startswith('nova-') for binary in binaries): LOG.warning('get_minimum_version called with likely-incorrect ' 'binaries `%s\'', ','.join(binaries)) raise exception.ObjectActionError(action='get_minimum_version', reason='Invalid binary prefix') if (not cls._SERVICE_VERSION_CACHING or any(binary not in cls._MIN_VERSION_CACHE for binary in binaries)): min_versions = cls._db_service_get_minimum_version( context, binaries, use_slave=use_slave) if min_versions: min_versions = {binary: version or 0 for binary, version in min_versions.items()} cls._MIN_VERSION_CACHE.update(min_versions) else: min_versions = {binary: cls._MIN_VERSION_CACHE[binary] for binary in binaries} if min_versions: version = min(min_versions.values()) else: version = 0 # NOTE(danms): Since our return value is not controlled by object # schema, be explicit here. version = int(version) return version @base.remotable_classmethod def get_minimum_version(cls, context, binary, use_slave=False): return cls.get_minimum_version_multi(context, [binary], use_slave=use_slave) def get_minimum_version_all_cells(context, binaries, require_all=False): """Get the minimum service version, checking all cells. 
This attempts to calculate the minimum service version for a set of binaries across all the cells in the system. If require_all is False, then any cells that fail to report a version will be ignored (assuming they won't be candidates for scheduling and thus excluding them from the minimum version calculation is reasonable). If require_all is True, then a failing cell will cause this to raise exception.CellTimeout, as would be appropriate for gating some data migration until everything is new enough. Note that services that do not report a positive version are excluded from this, as it crosses all cells which will naturally not have all services. """ if not all(binary.startswith('nova-') for binary in binaries): LOG.warning('get_minimum_version_all_cells called with ' 'likely-incorrect binaries `%s\'', ','.join(binaries)) raise exception.ObjectActionError( action='get_minimum_version_all_cells', reason='Invalid binary prefix') # NOTE(danms): Instead of using Service.get_minimum_version_multi(), we # replicate the call directly to the underlying DB method here because # we want to defeat the caching and we need to filter non-present # services differently from the single-cell method. results = nova_context.scatter_gather_all_cells( context, Service._db_service_get_minimum_version, binaries) min_version = None for cell_uuid, result in results.items(): if result is nova_context.did_not_respond_sentinel: LOG.warning('Cell %s did not respond when getting minimum ' 'service version', cell_uuid) if require_all: raise exception.CellTimeout() elif isinstance(result, Exception): LOG.warning('Failed to get minimum service version for cell %s', cell_uuid) if require_all: # NOTE(danms): Okay, this isn't necessarily a timeout, but # it's functionally the same from the caller's perspective # and we logged the fact that it was actually a failure # for the forensic investigator during the scatter/gather # routine. raise exception.CellTimeout() else: # NOTE(danms): Don't consider a zero or None result as the minimum # since we're crossing cells and will likely not have all the # services being probed. relevant_versions = [version for version in result.values() if version] if relevant_versions: min_version_cell = min(relevant_versions) min_version = (min(min_version, min_version_cell) if min_version else min_version_cell) # NOTE(danms): If we got no matches at all (such as at first startup) # then report that as zero to be consistent with the other such # methods. 
return min_version or 0 @base.NovaObjectRegistry.register class ServiceList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version # Service <= version 1.2 # Version 1.1 Service version 1.3 # Version 1.2: Service version 1.4 # Version 1.3: Service version 1.5 # Version 1.4: Service version 1.6 # Version 1.5: Service version 1.7 # Version 1.6: Service version 1.8 # Version 1.7: Service version 1.9 # Version 1.8: Service version 1.10 # Version 1.9: Added get_by_binary() and Service version 1.11 # Version 1.10: Service version 1.12 # Version 1.11: Service version 1.13 # Version 1.12: Service version 1.14 # Version 1.13: Service version 1.15 # Version 1.14: Service version 1.16 # Version 1.15: Service version 1.17 # Version 1.16: Service version 1.18 # Version 1.17: Service version 1.19 # Version 1.18: Added include_disabled parameter to get_by_binary() # Version 1.19: Added get_all_computes_by_hv_type() VERSION = '1.19' fields = { 'objects': fields.ListOfObjectsField('Service'), } @base.remotable_classmethod def get_by_topic(cls, context, topic): db_services = db.service_get_all_by_topic(context, topic) return base.obj_make_list(context, cls(context), objects.Service, db_services) # NOTE(paul-carlton2): In v2.0 of the object the include_disabled flag # will be removed so both enabled and disabled hosts are returned @base.remotable_classmethod def get_by_binary(cls, context, binary, include_disabled=False): db_services = db.service_get_all_by_binary( context, binary, include_disabled=include_disabled) return base.obj_make_list(context, cls(context), objects.Service, db_services) @base.remotable_classmethod def get_by_host(cls, context, host): db_services = db.service_get_all_by_host(context, host) return base.obj_make_list(context, cls(context), objects.Service, db_services) @base.remotable_classmethod def get_all(cls, context, disabled=None, set_zones=False): db_services = db.service_get_all(context, disabled=disabled) if set_zones: db_services = availability_zones.set_availability_zones( context, db_services) return base.obj_make_list(context, cls(context), objects.Service, db_services) @base.remotable_classmethod def get_all_computes_by_hv_type(cls, context, hv_type): db_services = db.service_get_all_computes_by_hv_type( context, hv_type, include_disabled=False) return base.obj_make_list(context, cls(context), objects.Service, db_services) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/tag.py0000664000175000017500000000473400000000000016317 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
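# ---------------------------------------------------------------------------
# Illustrative sketch (not part of Nova): the typical consumer pattern for
# get_minimum_version_all_cells() defined in nova/objects/service.py above,
# i.e. gating a new code path on the oldest nova-compute service version
# found across every cell. The helper name and threshold are hypothetical.
from nova import context as nova_context
from nova.objects import service as service_obj


def _all_computes_support_feature(min_required_version=51):
    """Return True only when every reporting nova-compute is new enough."""
    ctxt = nova_context.get_admin_context()
    minver = service_obj.get_minimum_version_all_cells(
        ctxt, ['nova-compute'])
    return minver >= min_required_version
# Callers that must not proceed on partial information can instead pass
# require_all=True and handle exception.CellTimeout, as described in the
# docstring above.
# ---------------------------------------------------------------------------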
from nova.db import api as db from nova import objects from nova.objects import base from nova.objects import fields MAX_TAG_LENGTH = 60 @base.NovaObjectRegistry.register class Tag(base.NovaObject): # Version 1.0: Initial version # Version 1.1: Added method exists() VERSION = '1.1' fields = { 'resource_id': fields.StringField(), 'tag': fields.StringField(), } @staticmethod def _from_db_object(context, tag, db_tag): for key in tag.fields: setattr(tag, key, db_tag[key]) tag.obj_reset_changes() tag._context = context return tag @base.remotable def create(self): db_tag = db.instance_tag_add(self._context, self.resource_id, self.tag) self._from_db_object(self._context, self, db_tag) @base.remotable_classmethod def destroy(cls, context, resource_id, name): db.instance_tag_delete(context, resource_id, name) @base.remotable_classmethod def exists(cls, context, resource_id, name): return db.instance_tag_exists(context, resource_id, name) @base.NovaObjectRegistry.register class TagList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version # Version 1.1: Tag <= version 1.1 VERSION = '1.1' fields = { 'objects': fields.ListOfObjectsField('Tag'), } @base.remotable_classmethod def get_by_resource_id(cls, context, resource_id): db_tags = db.instance_tag_get_by_instance_uuid(context, resource_id) return base.obj_make_list(context, cls(), objects.Tag, db_tags) @base.remotable_classmethod def create(cls, context, resource_id, tags): db_tags = db.instance_tag_set(context, resource_id, tags) return base.obj_make_list(context, cls(), objects.Tag, db_tags) @base.remotable_classmethod def destroy(cls, context, resource_id): db.instance_tag_delete_all(context, resource_id) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/task_log.py0000664000175000017500000000603400000000000017342 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
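# ---------------------------------------------------------------------------
# Illustrative sketch (not part of Nova): basic use of the Tag and TagList
# objects defined in nova/objects/tag.py above to replace an instance's tags
# and then query one of them. The helper name and tag values are
# hypothetical, and a reachable database is assumed since these remotable
# classmethods go through nova.db.api.
from nova import context as nova_context
from nova.objects import tag as tag_obj


def _example_tag_instance(instance_uuid):
    ctxt = nova_context.get_admin_context()
    # instance_tag_set semantics: the given list becomes the full tag set.
    tag_obj.TagList.create(ctxt, instance_uuid, ['web', 'production'])
    # Returns True/False without loading the whole tag list.
    return tag_obj.Tag.exists(ctxt, instance_uuid, 'web')
# ---------------------------------------------------------------------------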
from nova.db import api as db from nova.objects import base from nova.objects import fields @base.NovaObjectRegistry.register class TaskLog(base.NovaPersistentObject, base.NovaObject): # Version 1.0: Initial version VERSION = '1.0' fields = { 'id': fields.IntegerField(read_only=True), 'task_name': fields.StringField(), 'state': fields.StringField(read_only=True), 'host': fields.StringField(), 'period_beginning': fields.DateTimeField(), 'period_ending': fields.DateTimeField(), 'message': fields.StringField(), 'task_items': fields.IntegerField(), 'errors': fields.IntegerField(), } @staticmethod def _from_db_object(context, task_log, db_task_log): for field in task_log.fields: setattr(task_log, field, db_task_log[field]) task_log._context = context task_log.obj_reset_changes() return task_log @base.serialize_args @base.remotable_classmethod def get(cls, context, task_name, period_beginning, period_ending, host, state=None): db_task_log = db.task_log_get(context, task_name, period_beginning, period_ending, host, state=state) if db_task_log: return cls._from_db_object(context, cls(context), db_task_log) @base.remotable def begin_task(self): db.task_log_begin_task( self._context, self.task_name, self.period_beginning, self.period_ending, self.host, task_items=self.task_items, message=self.message) @base.remotable def end_task(self): db.task_log_end_task( self._context, self.task_name, self.period_beginning, self.period_ending, self.host, errors=self.errors, message=self.message) @base.NovaObjectRegistry.register class TaskLogList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version VERSION = '1.0' fields = { 'objects': fields.ListOfObjectsField('TaskLog'), } @base.serialize_args @base.remotable_classmethod def get_all(cls, context, task_name, period_beginning, period_ending, host=None, state=None): db_task_logs = db.task_log_get_all(context, task_name, period_beginning, period_ending, host=host, state=state) return base.obj_make_list(context, cls(context), TaskLog, db_task_logs) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/trusted_certs.py0000664000175000017500000000243600000000000020433 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
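# ---------------------------------------------------------------------------
# Illustrative sketch (not part of Nova): the begin_task()/end_task() pattern
# used with the TaskLog object defined in nova/objects/task_log.py above,
# mirroring how a periodic audit task records its progress. The task name,
# messages and counters here are hypothetical.
from nova import context as nova_context
from nova.objects import task_log as task_log_obj


def _example_audit_period(host, begin, end, num_instances):
    ctxt = nova_context.get_admin_context()
    task_log = task_log_obj.TaskLog(ctxt)
    task_log.task_name = 'instance_usage_audit'
    task_log.period_beginning = begin
    task_log.period_ending = end
    task_log.host = host
    task_log.task_items = num_instances
    task_log.message = 'Audit started'
    # Persists the record with its initial message and item count.
    task_log.begin_task()
    errors = 0
    # ... per-item work would happen here, incrementing errors on failure ...
    task_log.errors = errors
    task_log.message = 'Audit finished'
    # Marks the record as done and stores the error count and final message.
    task_log.end_task()
# ---------------------------------------------------------------------------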
from oslo_serialization import jsonutils from nova.db import api as db from nova.objects import base from nova.objects import fields @base.NovaObjectRegistry.register class TrustedCerts(base.NovaObject): # Version 1.0: Initial version VERSION = '1.0' fields = { 'ids': fields.ListOfStringsField(nullable=False), } @base.remotable_classmethod def get_by_instance_uuid(cls, context, instance_uuid): db_extra = db.instance_extra_get_by_instance_uuid( context, instance_uuid, columns=['trusted_certs']) if not db_extra or not db_extra['trusted_certs']: return None return cls.obj_from_primitive( jsonutils.loads(db_extra['trusted_certs'])) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/vcpu_model.py0000664000175000017500000000441500000000000017675 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils from nova.db import api as db from nova.objects import base from nova.objects import fields @base.NovaObjectRegistry.register class VirtCPUModel(base.NovaObject): # Version 1.0: Initial version VERSION = '1.0' fields = { 'arch': fields.ArchitectureField(nullable=True), 'vendor': fields.StringField(nullable=True), 'topology': fields.ObjectField('VirtCPUTopology', nullable=True), 'features': fields.ListOfObjectsField("VirtCPUFeature", default=[]), 'mode': fields.CPUModeField(nullable=True), 'model': fields.StringField(nullable=True), 'match': fields.CPUMatchField(nullable=True), } def obj_load_attr(self, attrname): setattr(self, attrname, None) def to_json(self): return jsonutils.dumps(self.obj_to_primitive()) @classmethod def from_json(cls, jsonstr): return cls.obj_from_primitive(jsonutils.loads(jsonstr)) @base.remotable_classmethod def get_by_instance_uuid(cls, context, instance_uuid): db_extra = db.instance_extra_get_by_instance_uuid( context, instance_uuid, columns=['vcpu_model']) if not db_extra or not db_extra['vcpu_model']: return None return cls.obj_from_primitive(jsonutils.loads(db_extra['vcpu_model'])) @base.NovaObjectRegistry.register class VirtCPUFeature(base.NovaObject): VERSION = '1.0' fields = { 'policy': fields.CPUFeaturePolicyField(nullable=True), 'name': fields.StringField(nullable=False), } def obj_load_attr(self, attrname): setattr(self, attrname, None) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/objects/virt_cpu_topology.py0000664000175000017500000000266400000000000021333 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from nova.objects import base from nova.objects import fields @base.NovaObjectRegistry.register class VirtCPUTopology(base.NovaObject): # Version 1.0: Initial version VERSION = '1.0' fields = { 'sockets': fields.IntegerField(nullable=True, default=1), 'cores': fields.IntegerField(nullable=True, default=1), 'threads': fields.IntegerField(nullable=True, default=1), } # NOTE(jaypipes): for backward compatibility, the virt CPU topology # data is stored in the database as a nested dict. @classmethod def from_dict(cls, data): return cls(sockets=data.get('sockets'), cores=data.get('cores'), threads=data.get('threads')) def to_dict(self): return { 'sockets': self.sockets, 'cores': self.cores, 'threads': self.threads } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/virt_device_metadata.py0000664000175000017500000000733600000000000021710 0ustar00zuulzuul00000000000000# Copyright (C) 2016, Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils from oslo_utils import versionutils from nova.db import api as db from nova.objects import base from nova.objects import fields @base.NovaObjectRegistry.register class DeviceBus(base.NovaObject): VERSION = '1.0' @base.NovaObjectRegistry.register class PCIDeviceBus(DeviceBus): VERSION = '1.0' fields = { 'address': fields.PCIAddressField(), } @base.NovaObjectRegistry.register class USBDeviceBus(DeviceBus): VERSION = '1.0' fields = { 'address': fields.USBAddressField(), } @base.NovaObjectRegistry.register class SCSIDeviceBus(DeviceBus): VERSION = '1.0' fields = { 'address': fields.SCSIAddressField(), } @base.NovaObjectRegistry.register class IDEDeviceBus(DeviceBus): VERSION = '1.0' fields = { 'address': fields.IDEAddressField(), } @base.NovaObjectRegistry.register class XenDeviceBus(DeviceBus): VERSION = '1.0' fields = { 'address': fields.XenAddressField(), } @base.NovaObjectRegistry.register class DeviceMetadata(base.NovaObject): VERSION = '1.0' fields = { 'bus': fields.ObjectField("DeviceBus", subclasses=True), 'tags': fields.ListOfStringsField(), } @base.NovaObjectRegistry.register class NetworkInterfaceMetadata(DeviceMetadata): # Version 1.0: Initial version # Version 1.1: Add vlans field # Version 1.2: Add vf_trusted field VERSION = '1.2' fields = { 'mac': fields.MACAddressField(), 'vlan': fields.IntegerField(), 'vf_trusted': fields.BooleanField(default=False), } def obj_make_compatible(self, primitive, target_version): target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 1) and 'vlan' in primitive: del primitive['vlan'] if target_version < (1, 2) and 'vf_trusted' in primitive: del primitive['vf_trusted'] @base.NovaObjectRegistry.register class DiskMetadata(DeviceMetadata): VERSION = '1.0' fields = { 'serial': fields.StringField(nullable=True), 'path': fields.StringField(nullable=True), } 
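# NOTE(editor): hedged example of composing the device metadata objects
# above; the MAC, PCI address and tag values are invented for illustration.
#
#     bus = PCIDeviceBus(address='0000:00:1e.0')
#     nic = NetworkInterfaceMetadata(mac='52:54:00:f6:35:8f', bus=bus,
#                                    tags=['nic1'], vf_trusted=True)
#     data = nic.obj_to_primitive()['nova_object.data']
#     # Backlevelling to 1.0 drops the fields added in 1.1/1.2 when present
#     # ('vlan', 'vf_trusted'), as obj_make_compatible() above implements.
#     nic.obj_make_compatible(data, '1.0')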
@base.NovaObjectRegistry.register class InstanceDeviceMetadata(base.NovaObject): VERSION = '1.0' fields = { 'devices': fields.ListOfObjectsField('DeviceMetadata', subclasses=True), } @classmethod def obj_from_db(cls, context, db_dev_meta): primitive = jsonutils.loads(db_dev_meta) device_metadata = cls.obj_from_primitive(primitive) return device_metadata @base.remotable_classmethod def get_by_instance_uuid(cls, context, instance_uuid): db_extra = db.instance_extra_get_by_instance_uuid( context, instance_uuid, columns=['device_metadata']) if not db_extra or db_extra['device_metadata'] is None: return None primitive = jsonutils.loads(db_extra['device_metadata']) device_metadata = cls.obj_from_primitive(primitive) return device_metadata def _to_json(self): return jsonutils.dumps(self.obj_to_primitive()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/virtual_interface.py0000664000175000017500000003256500000000000021255 0ustar00zuulzuul00000000000000# Copyright (C) 2014, Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from oslo_utils import versionutils from nova import context as nova_context from nova.db import api as db from nova.db.sqlalchemy import api as db_api from nova.db.sqlalchemy import models from nova import exception from nova import objects from nova.objects import base from nova.objects import fields LOG = logging.getLogger(__name__) VIF_OPTIONAL_FIELDS = ['network_id'] FAKE_UUID = '00000000-0000-0000-0000-000000000000' @base.NovaObjectRegistry.register class VirtualInterface(base.NovaPersistentObject, base.NovaObject): # Version 1.0: Initial version # Version 1.1: Add tag field # Version 1.2: Adding a save method # Version 1.3: Added destroy() method VERSION = '1.3' fields = { 'id': fields.IntegerField(), # This is a MAC address. 'address': fields.StringField(nullable=True), 'network_id': fields.IntegerField(), 'instance_uuid': fields.UUIDField(), 'uuid': fields.UUIDField(), 'tag': fields.StringField(nullable=True), } def obj_make_compatible(self, primitive, target_version): target_version = versionutils.convert_version_to_tuple(target_version) if target_version < (1, 1) and 'tag' in primitive: del primitive['tag'] @staticmethod def _from_db_object(context, vif, db_vif): for field in vif.fields: if not db_vif[field] and field in VIF_OPTIONAL_FIELDS: continue else: setattr(vif, field, db_vif[field]) # NOTE(danms): The neutronv2 module namespaces mac addresses # with port id to avoid uniqueness constraints currently on # our table. Strip that out here so nobody else needs to care. 
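# NOTE(editor): e.g. (illustrative value) an address stored as
# 'fa:16:3e:aa:bb:cc/<port uuid>' is exposed to callers simply as
# 'fa:16:3e:aa:bb:cc'.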
if 'address' in vif and '/' in vif.address: vif.address, _ = vif.address.split('/', 1) vif._context = context vif.obj_reset_changes() return vif @base.remotable_classmethod def get_by_id(cls, context, vif_id): db_vif = db.virtual_interface_get(context, vif_id) if db_vif: return cls._from_db_object(context, cls(), db_vif) @base.remotable_classmethod def get_by_uuid(cls, context, vif_uuid): db_vif = db.virtual_interface_get_by_uuid(context, vif_uuid) if db_vif: return cls._from_db_object(context, cls(), db_vif) @base.remotable_classmethod def get_by_address(cls, context, address): db_vif = db.virtual_interface_get_by_address(context, address) if db_vif: return cls._from_db_object(context, cls(), db_vif) @base.remotable_classmethod def get_by_instance_and_network(cls, context, instance_uuid, network_id): db_vif = db.virtual_interface_get_by_instance_and_network(context, instance_uuid, network_id) if db_vif: return cls._from_db_object(context, cls(), db_vif) @base.remotable def create(self): if self.obj_attr_is_set('id'): raise exception.ObjectActionError(action='create', reason='already created') updates = self.obj_get_changes() db_vif = db.virtual_interface_create(self._context, updates) self._from_db_object(self._context, self, db_vif) @base.remotable def save(self): updates = self.obj_get_changes() if 'address' in updates: raise exception.ObjectActionError(action='save', reason='address is not mutable') db_vif = db.virtual_interface_update(self._context, self.address, updates) return self._from_db_object(self._context, self, db_vif) @base.remotable_classmethod def delete_by_instance_uuid(cls, context, instance_uuid): db.virtual_interface_delete_by_instance(context, instance_uuid) @base.remotable def destroy(self): db.virtual_interface_delete(self._context, self.id) @base.NovaObjectRegistry.register class VirtualInterfaceList(base.ObjectListBase, base.NovaObject): # Version 1.0: Initial version VERSION = '1.0' fields = { 'objects': fields.ListOfObjectsField('VirtualInterface'), } @base.remotable_classmethod def get_all(cls, context): db_vifs = db.virtual_interface_get_all(context) return base.obj_make_list(context, cls(context), objects.VirtualInterface, db_vifs) @staticmethod @db.select_db_reader_mode def _db_virtual_interface_get_by_instance(context, instance_uuid, use_slave=False): return db.virtual_interface_get_by_instance(context, instance_uuid) @base.remotable_classmethod def get_by_instance_uuid(cls, context, instance_uuid, use_slave=False): db_vifs = cls._db_virtual_interface_get_by_instance( context, instance_uuid, use_slave=use_slave) return base.obj_make_list(context, cls(context), objects.VirtualInterface, db_vifs) @db_api.api_context_manager.writer def fill_virtual_interface_list(context, max_count): """This fills missing VirtualInterface Objects in Nova DB""" count_hit = 0 count_all = 0 def _regenerate_vif_list_base_on_cache(context, instance, old_vif_list, nw_info): # Set old VirtualInterfaces as deleted. for vif in old_vif_list: vif.destroy() # Generate list based on current cache: for vif in nw_info: vif_obj = objects.VirtualInterface(context) vif_obj.uuid = vif['id'] vif_obj.address = "%s/%s" % (vif['address'], vif['id']) vif_obj.instance_uuid = instance['uuid'] # Find tag from previous VirtualInterface object if exist. 
old_vif = [x for x in old_vif_list if x.uuid == vif['id']] vif_obj.tag = old_vif[0].tag if len(old_vif) > 0 else None vif_obj.create() cells = objects.CellMappingList.get_all(context) for cell in cells: if count_all == max_count: # We reached the limit of checked instances per # this function run. # Stop, do not go to other cell. break with nova_context.target_cell(context, cell) as cctxt: marker = _get_marker_for_migrate_instances(cctxt) filters = {'deleted': False} # Adjust the limit of migrated instances. # If user wants to process a total of 100 instances # and we did a 75 in cell1, then we only need to # verify 25 more in cell2, no more. adjusted_limit = max_count - count_all instances = objects.InstanceList.get_by_filters( cctxt, filters=filters, sort_key='created_at', sort_dir='asc', marker=marker, limit=adjusted_limit) for instance in instances: # We don't want to fill vif for FAKE instance. if instance.uuid == FAKE_UUID: continue try: info_cache = objects.InstanceInfoCache.\ get_by_instance_uuid(cctxt, instance.get('uuid')) if not info_cache.network_info: LOG.info('InstanceInfoCache object has not set ' 'NetworkInfo field. ' 'Skipping build of VirtualInterfaceList.') continue except exception.InstanceInfoCacheNotFound: LOG.info('Instance has no InstanceInfoCache object. ' 'Skipping build of VirtualInterfaceList for it.') continue # It by design filters out deleted vifs. vif_list = VirtualInterfaceList.\ get_by_instance_uuid(cctxt, instance.get('uuid')) nw_info = info_cache.network_info # This should be list with proper order of vifs, # but we're not sure about that. cached_vif_ids = [vif['id'] for vif in nw_info] # This is ordered list of vifs taken from db. db_vif_ids = [vif.uuid for vif in vif_list] count_all += 1 if cached_vif_ids == db_vif_ids: # The list of vifs and its order in cache and in # virtual_interfaces is the same. So we could end here. continue elif len(db_vif_ids) < len(cached_vif_ids): # Seems to be an instance from release older than # Newton and we don't have full VirtualInterfaceList for # it. Rewrite whole VirtualInterfaceList using interface # order from InstanceInfoCache. count_hit += 1 LOG.info('Got an instance %s with less VIFs defined in DB ' 'than in cache. Could be Pre-Newton instance. ' 'Building new VirtualInterfaceList for it.', instance.uuid) _regenerate_vif_list_base_on_cache(cctxt, instance, vif_list, nw_info) elif len(db_vif_ids) > len(cached_vif_ids): # Seems vif list is inconsistent with cache. # it could be a broken cache or interface # during attach. Do nothing. LOG.info('Got an unexpected number of VIF records in the ' 'database compared to what was stored in the ' 'instance_info_caches table for instance %s. ' 'Perhaps it is an instance during interface ' 'attach. Do nothing.', instance.uuid) continue else: # The order is different between lists. # We need a source of truth, so rebuild order # from cache. count_hit += 1 LOG.info('Got an instance %s with different order of ' 'VIFs between DB and cache. ' 'We need a source of truth, so rebuild order ' 'from cache.', instance.uuid) _regenerate_vif_list_base_on_cache(cctxt, instance, vif_list, nw_info) # Set marker to point last checked instance. if instances: marker = instances[-1].uuid _set_or_delete_marker_for_migrate_instances(cctxt, marker) return count_all, count_hit # NOTE(mjozefcz): This is similiar to marker mechanism made for # RequestSpecs object creation. # Since we have a lot of instances to be check this # will add a FAKE row that points to last instance # we checked. 
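# NOTE(editor): for illustration, the marker row written by
# _set_or_delete_marker_for_migrate_instances() below looks roughly like:
#   instance_uuid=FAKE_UUID, uuid=FAKE_UUID,
#   tag=<uuid of the last checked instance>,
#   address='ff:ff:ff:ff:ff:ff/' + FAKE_UUID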
# Please notice that because of virtual_interfaces_instance_uuid_fkey # we need to have FAKE_UUID instance object, even deleted one. @db_api.pick_context_manager_writer def _set_or_delete_marker_for_migrate_instances(context, marker=None): context.session.query(models.VirtualInterface).filter_by( instance_uuid=FAKE_UUID).delete() # Create FAKE_UUID instance objects, only for marker, if doesn't exist. # It is needed due constraint: virtual_interfaces_instance_uuid_fkey instance = context.session.query(models.Instance).filter_by( uuid=FAKE_UUID).first() if not instance: instance = objects.Instance(context) instance.uuid = FAKE_UUID instance.project_id = FAKE_UUID instance.user_id = FAKE_UUID instance.create() # Thats fake instance, lets destroy it. # We need only its row to solve constraint issue. instance.destroy() if marker is not None: # ... but there can be a new marker to set db_mapping = objects.VirtualInterface(context) db_mapping.instance_uuid = FAKE_UUID db_mapping.uuid = FAKE_UUID db_mapping.tag = marker db_mapping.address = 'ff:ff:ff:ff:ff:ff/%s' % FAKE_UUID db_mapping.create() @db_api.pick_context_manager_reader def _get_marker_for_migrate_instances(context): vif = (context.session.query(models.VirtualInterface).filter_by( instance_uuid=FAKE_UUID)).first() marker = vif['tag'] if vif else None return marker ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/objects/volume_usage.py0000664000175000017500000000741700000000000020240 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
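# NOTE(editor): hedged sketch of the counter semantics of the VolumeUsage
# object defined below; the numbers are invented. The 'tot_*' fields hold
# already rolled-up totals, the 'curr_*' fields hold counters accumulated
# since the last rollover, and the read-only properties add the two:
#
#     usage = VolumeUsage(tot_reads=100, curr_reads=25,
#                         tot_read_bytes=4096, curr_read_bytes=512)
#     usage.reads        # -> 125
#     usage.read_bytes   # -> 4608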
from nova.db import api as db from nova.objects import base from nova.objects import fields @base.NovaObjectRegistry.register class VolumeUsage(base.NovaPersistentObject, base.NovaObject): # Version 1.0: Initial version VERSION = '1.0' fields = { 'id': fields.IntegerField(read_only=True), 'volume_id': fields.UUIDField(), 'instance_uuid': fields.UUIDField(nullable=True), 'project_id': fields.StringField(nullable=True), 'user_id': fields.StringField(nullable=True), 'availability_zone': fields.StringField(nullable=True), 'tot_last_refreshed': fields.DateTimeField(nullable=True, read_only=True), 'tot_reads': fields.IntegerField(read_only=True), 'tot_read_bytes': fields.IntegerField(read_only=True), 'tot_writes': fields.IntegerField(read_only=True), 'tot_write_bytes': fields.IntegerField(read_only=True), 'curr_last_refreshed': fields.DateTimeField(nullable=True, read_only=True), 'curr_reads': fields.IntegerField(), 'curr_read_bytes': fields.IntegerField(), 'curr_writes': fields.IntegerField(), 'curr_write_bytes': fields.IntegerField() } @property def last_refreshed(self): if self.tot_last_refreshed and self.curr_last_refreshed: return max(self.tot_last_refreshed, self.curr_last_refreshed) elif self.tot_last_refreshed: return self.tot_last_refreshed else: # curr_last_refreshed must be set return self.curr_last_refreshed @property def reads(self): return self.tot_reads + self.curr_reads @property def read_bytes(self): return self.tot_read_bytes + self.curr_read_bytes @property def writes(self): return self.tot_writes + self.curr_writes @property def write_bytes(self): return self.tot_write_bytes + self.curr_write_bytes @staticmethod def _from_db_object(context, vol_usage, db_vol_usage): for field in vol_usage.fields: setattr(vol_usage, field, db_vol_usage[field]) vol_usage._context = context vol_usage.obj_reset_changes() return vol_usage @base.remotable def save(self, update_totals=False): db_vol_usage = db.vol_usage_update( self._context, self.volume_id, self.curr_reads, self.curr_read_bytes, self.curr_writes, self.curr_write_bytes, self.instance_uuid, self.project_id, self.user_id, self.availability_zone, update_totals=update_totals) self._from_db_object(self._context, self, db_vol_usage) def to_dict(self): return { 'volume_id': self.volume_id, 'tenant_id': self.project_id, 'user_id': self.user_id, 'availability_zone': self.availability_zone, 'instance_id': self.instance_uuid, 'last_refreshed': str( self.last_refreshed) if self.last_refreshed else '', 'reads': self.reads, 'read_bytes': self.read_bytes, 'writes': self.writes, 'write_bytes': self.write_bytes } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3664696 nova-21.2.4/nova/pci/0000775000175000017500000000000000000000000014304 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/pci/__init__.py0000664000175000017500000000000000000000000016403 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/pci/devspec.py0000664000175000017500000002614400000000000016316 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import re import string import six from nova import exception from nova.i18n import _ from nova.pci import utils MAX_VENDOR_ID = 0xFFFF MAX_PRODUCT_ID = 0xFFFF MAX_FUNC = 0x7 MAX_DOMAIN = 0xFFFF MAX_BUS = 0xFF MAX_SLOT = 0x1F ANY = '*' REGEX_ANY = '.*' @six.add_metaclass(abc.ABCMeta) class PciAddressSpec(object): """Abstract class for all PCI address spec styles This class checks the address fields of the pci.passthrough_whitelist """ @abc.abstractmethod def match(self, pci_addr): pass def is_single_address(self): return all([ all(c in string.hexdigits for c in self.domain), all(c in string.hexdigits for c in self.bus), all(c in string.hexdigits for c in self.slot), all(c in string.hexdigits for c in self.func)]) def _set_pci_dev_info(self, prop, maxval, hex_value): a = getattr(self, prop) if a == ANY: return try: v = int(a, 16) except ValueError: raise exception.PciConfigInvalidWhitelist( reason=_("property %(property)s ('%(attr)s') does not parse " "as a hex number.") % {'property': prop, 'attr': a}) if v > maxval: raise exception.PciConfigInvalidWhitelist( reason=_("property %(property)s (%(attr)s) is greater than " "the maximum allowable value (%(max)X).") % {'property': prop, 'attr': a, 'max': maxval}) setattr(self, prop, hex_value % v) class PhysicalPciAddress(PciAddressSpec): """Manages the address fields for a fully-qualified PCI address. This function class will validate the address fields for a single PCI device. """ def __init__(self, pci_addr): try: if isinstance(pci_addr, dict): self.domain = pci_addr['domain'] self.bus = pci_addr['bus'] self.slot = pci_addr['slot'] self.func = pci_addr['function'] else: self.domain, self.bus, self.slot, self.func = ( utils.get_pci_address_fields(pci_addr)) self._set_pci_dev_info('func', MAX_FUNC, '%1x') self._set_pci_dev_info('domain', MAX_DOMAIN, '%04x') self._set_pci_dev_info('bus', MAX_BUS, '%02x') self._set_pci_dev_info('slot', MAX_SLOT, '%02x') except (KeyError, ValueError): raise exception.PciDeviceWrongAddressFormat(address=pci_addr) def match(self, phys_pci_addr): conditions = [ self.domain == phys_pci_addr.domain, self.bus == phys_pci_addr.bus, self.slot == phys_pci_addr.slot, self.func == phys_pci_addr.func, ] return all(conditions) class PciAddressGlobSpec(PciAddressSpec): """Manages the address fields with glob style. This function class will validate the address fields with glob style, check for wildcards, and insert wildcards where the field is left blank. """ def __init__(self, pci_addr): self.domain = ANY self.bus = ANY self.slot = ANY self.func = ANY dbs, sep, func = pci_addr.partition('.') if func: self.func = func.strip() self._set_pci_dev_info('func', MAX_FUNC, '%01x') if dbs: dbs_fields = dbs.split(':') if len(dbs_fields) > 3: raise exception.PciDeviceWrongAddressFormat(address=pci_addr) # If we got a partial address like ":00.", we need to turn this # into a domain of ANY, a bus of ANY, and a slot of 00. 
This code # allows the address bus and/or domain to be left off dbs_all = [ANY] * (3 - len(dbs_fields)) dbs_all.extend(dbs_fields) dbs_checked = [s.strip() or ANY for s in dbs_all] self.domain, self.bus, self.slot = dbs_checked self._set_pci_dev_info('domain', MAX_DOMAIN, '%04x') self._set_pci_dev_info('bus', MAX_BUS, '%02x') self._set_pci_dev_info('slot', MAX_SLOT, '%02x') def match(self, phys_pci_addr): conditions = [ self.domain in (ANY, phys_pci_addr.domain), self.bus in (ANY, phys_pci_addr.bus), self.slot in (ANY, phys_pci_addr.slot), self.func in (ANY, phys_pci_addr.func) ] return all(conditions) class PciAddressRegexSpec(PciAddressSpec): """Manages the address fields with regex style. This function class will validate the address fields with regex style. The validation includes check for all PCI address attributes and validate their regex. """ def __init__(self, pci_addr): try: self.domain = pci_addr.get('domain', REGEX_ANY) self.bus = pci_addr.get('bus', REGEX_ANY) self.slot = pci_addr.get('slot', REGEX_ANY) self.func = pci_addr.get('function', REGEX_ANY) self.domain_regex = re.compile(self.domain) self.bus_regex = re.compile(self.bus) self.slot_regex = re.compile(self.slot) self.func_regex = re.compile(self.func) except re.error: raise exception.PciDeviceWrongAddressFormat(address=pci_addr) def match(self, phys_pci_addr): conditions = [ bool(self.domain_regex.match(phys_pci_addr.domain)), bool(self.bus_regex.match(phys_pci_addr.bus)), bool(self.slot_regex.match(phys_pci_addr.slot)), bool(self.func_regex.match(phys_pci_addr.func)) ] return all(conditions) class WhitelistPciAddress(object): """Manages the address fields of the whitelist. This class checks the address fields of the pci.passthrough_whitelist configuration option, validating the address fields. Example configs: | [pci] | passthrough_whitelist = {"address":"*:0a:00.*", | "physical_network":"physnet1"} | passthrough_whitelist = {"address": {"domain": ".*", "bus": "02", "slot": "01", "function": "[0-2]"}, "physical_network":"net1"} | passthrough_whitelist = {"vendor_id":"1137","product_id":"0071"} """ def __init__(self, pci_addr, is_physical_function): self.is_physical_function = is_physical_function self._init_address_fields(pci_addr) def _check_physical_function(self): if self.pci_address_spec.is_single_address(): self.is_physical_function = ( utils.is_physical_function( self.pci_address_spec.domain, self.pci_address_spec.bus, self.pci_address_spec.slot, self.pci_address_spec.func)) def _init_address_fields(self, pci_addr): if not self.is_physical_function: if isinstance(pci_addr, six.string_types): self.pci_address_spec = PciAddressGlobSpec(pci_addr) elif isinstance(pci_addr, dict): self.pci_address_spec = PciAddressRegexSpec(pci_addr) else: raise exception.PciDeviceWrongAddressFormat(address=pci_addr) self._check_physical_function() else: self.pci_address_spec = PhysicalPciAddress(pci_addr) def match(self, pci_addr, pci_phys_addr): """Match a device to this PciAddress. Assume this is called given pci_addr and pci_phys_addr reported by libvirt, no attempt is made to verify if pci_addr is a VF of pci_phys_addr. :param pci_addr: PCI address of the device to match. :param pci_phys_addr: PCI address of the parent of the device to match (or None if the device is not a VF). """ # Try to match on the parent PCI address if the PciDeviceSpec is a # PF (sriov is available) and the device to match is a VF. 
This # makes it possible to specify the PCI address of a PF in the # pci.passthrough_whitelist to match any of its VFs' PCI addresses. if self.is_physical_function and pci_phys_addr: pci_phys_addr_obj = PhysicalPciAddress(pci_phys_addr) if self.pci_address_spec.match(pci_phys_addr_obj): return True # Try to match on the device PCI address only. pci_addr_obj = PhysicalPciAddress(pci_addr) return self.pci_address_spec.match(pci_addr_obj) class PciDeviceSpec(PciAddressSpec): def __init__(self, dev_spec): self.tags = dev_spec self._init_dev_details() def _init_dev_details(self): self.vendor_id = self.tags.pop("vendor_id", ANY) self.product_id = self.tags.pop("product_id", ANY) # Note(moshele): The address attribute can be a string or a dict. # For glob syntax or specific pci it is a string and for regex syntax # it is a dict. The WhitelistPciAddress class handles both types. self.address = self.tags.pop("address", None) self.dev_name = self.tags.pop("devname", None) self.vendor_id = self.vendor_id.strip() self._set_pci_dev_info('vendor_id', MAX_VENDOR_ID, '%04x') self._set_pci_dev_info('product_id', MAX_PRODUCT_ID, '%04x') if self.address and self.dev_name: raise exception.PciDeviceInvalidDeviceName() if not self.dev_name: pci_address = self.address or "*:*:*.*" self.address = WhitelistPciAddress(pci_address, False) def match(self, dev_dict): if self.dev_name: address_str, pf = utils.get_function_by_ifname( self.dev_name) if not address_str: return False # Note(moshele): In this case we always passing a string # of the PF pci address address_obj = WhitelistPciAddress(address_str, pf) elif self.address: address_obj = self.address return all([ self.vendor_id in (ANY, dev_dict['vendor_id']), self.product_id in (ANY, dev_dict['product_id']), address_obj.match(dev_dict['address'], dev_dict.get('parent_addr'))]) def match_pci_obj(self, pci_obj): return self.match({'vendor_id': pci_obj.vendor_id, 'product_id': pci_obj.product_id, 'address': pci_obj.address, 'parent_addr': pci_obj.parent_addr}) def get_tags(self): return self.tags ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/pci/manager.py0000664000175000017500000004662100000000000016301 0ustar00zuulzuul00000000000000# Copyright (c) 2013 Intel, Inc. # Copyright (c) 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections from oslo_config import cfg from oslo_log import log as logging from oslo_serialization import jsonutils import six from nova import exception from nova import objects from nova.objects import fields from nova.pci import stats from nova.pci import whitelist CONF = cfg.CONF LOG = logging.getLogger(__name__) class PciDevTracker(object): """Manage pci devices in a compute node. This class fetches pci passthrough information from hypervisor and tracks the usage of these devices. 
It's called by compute node resource tracker to allocate and free devices to/from instances, and to update the available pci passthrough device information from the hypervisor periodically. The `pci_devs` attribute of this class is the in-memory "master copy" of all devices on each compute host, and all data changes that happen when claiming/allocating/freeing devices HAVE TO be made against instances contained in `pci_devs` list, because they are periodically flushed to the DB when the save() method is called. It is unsafe to fetch PciDevice objects elsewhere in the code for update purposes as those changes will end up being overwritten when the `pci_devs` are saved. """ def __init__(self, context, node_id=None): """Create a pci device tracker. If a node_id is passed in, it will fetch pci devices information from database, otherwise, it will create an empty devices list and the resource tracker will update the node_id information later. """ super(PciDevTracker, self).__init__() self.stale = {} self.node_id = node_id self.dev_filter = whitelist.Whitelist(CONF.pci.passthrough_whitelist) self.stats = stats.PciDeviceStats(dev_filter=self.dev_filter) self._context = context if node_id: self.pci_devs = objects.PciDeviceList.get_by_compute_node( context, node_id) else: self.pci_devs = objects.PciDeviceList(objects=[]) self._build_device_tree(self.pci_devs) self._initial_instance_usage() def _initial_instance_usage(self): self.allocations = collections.defaultdict(list) self.claims = collections.defaultdict(list) for dev in self.pci_devs: uuid = dev.instance_uuid if dev.status == fields.PciDeviceStatus.CLAIMED: self.claims[uuid].append(dev) elif dev.status == fields.PciDeviceStatus.ALLOCATED: self.allocations[uuid].append(dev) elif dev.status == fields.PciDeviceStatus.AVAILABLE: self.stats.add_device(dev) def save(self, context): for dev in self.pci_devs: if dev.obj_what_changed(): with dev.obj_alternate_context(context): dev.save() if dev.status == fields.PciDeviceStatus.DELETED: self.pci_devs.objects.remove(dev) @property def pci_stats(self): return self.stats def update_devices_from_hypervisor_resources(self, devices_json): """Sync the pci device tracker with hypervisor information. To support pci device hot plug, we sync with the hypervisor periodically, fetching all devices information from hypervisor, update the tracker and sync the DB information. Devices should not be hot-plugged when assigned to a guest, but possibly the hypervisor has no such guarantee. The best we can do is to give a warning if a device is changed or removed while assigned. :param devices_json: The JSON-ified string of device information that is returned from the virt driver's get_available_resource() call in the pci_passthrough_devices key. """ devices = [] for dev in jsonutils.loads(devices_json): try: if self.dev_filter.device_assignable(dev): devices.append(dev) except exception.PciConfigInvalidWhitelist as e: # The raised exception is misleading as the problem is not with # the whitelist config but with the host PCI device reported by # libvirt. The code that matches the host PCI device to the # withelist spec reuses the WhitelistPciAddress object to parse # the host PCI device address. That parsing can fail if the # PCI address has a 32 bit domain. But this should not prevent # processing the rest of the devices. So we simply skip this # device and continue. # Please note that this except block does not ignore the # invalid whitelist configuration. 
The whitelist config has # already been parsed or rejected in case it was invalid. At # this point the self.dev_filter representes the parsed and # validated whitelist config. LOG.debug( 'Skipping PCI device %s reported by the hypervisor: %s', {k: v for k, v in dev.items() if k in ['address', 'parent_addr']}, # NOTE(gibi): this is ugly but the device_assignable() call # uses the PhysicalPciAddress class to parse the PCI # addresses and that class reuses the code from # PciAddressSpec that was originally designed to parse # whitelist spec. Hence the raised exception talks about # whitelist config. This is misleading as in our case the # PCI address that we failed to parse came from the # hypervisor. # TODO(gibi): refactor the false abstraction to make the # code reuse clean from the false assumption that we only # parse whitelist config with # devspec.PciAddressSpec._set_pci_dev_info() six.text_type(e).replace( 'Invalid PCI devices Whitelist config:', 'The')) self._set_hvdevs(devices) @staticmethod def _build_device_tree(all_devs): """Build a tree of devices that represents parent-child relationships. We need to have the relationships set up so that we can easily make all the necessary changes to parent/child devices without having to figure it out at each call site. This method just adds references to relevant instances already found in `pci_devs` to `child_devices` and `parent_device` fields of each one. Currently relationships are considered for SR-IOV PFs/VFs only. """ # Ensures that devices are ordered in ASC so VFs will come # after their PFs. all_devs.sort(key=lambda x: x.address) parents = {} for dev in all_devs: if dev.status in (fields.PciDeviceStatus.REMOVED, fields.PciDeviceStatus.DELETED): # NOTE(ndipanov): Removed devs are pruned from # self.pci_devs on save() so we need to make sure we # are not looking at removed ones as we may build up # the tree sooner than they are pruned. continue if dev.dev_type == fields.PciDeviceType.SRIOV_PF: dev.child_devices = [] parents[dev.address] = dev elif dev.dev_type == fields.PciDeviceType.SRIOV_VF: dev.parent_device = parents.get(dev.parent_addr) if dev.parent_device: parents[dev.parent_addr].child_devices.append(dev) def _set_hvdevs(self, devices): exist_addrs = set([dev.address for dev in self.pci_devs]) new_addrs = set([dev['address'] for dev in devices]) for existed in self.pci_devs: if existed.address in exist_addrs - new_addrs: # Remove previously tracked PCI devices that are either # no longer reported by the hypervisor or have been removed # from the pci whitelist. try: existed.remove() except exception.PciDeviceInvalidStatus as e: LOG.warning("Unable to remove device with %(status)s " "ownership %(instance_uuid)s because of " "%(pci_exception)s. " "Check your [pci]passthrough_whitelist " "configuration to make sure this allocated " "device is whitelisted. If you have removed " "the device from the whitelist intentionally " "or the device is no longer available on the " "host you will need to delete the server or " "migrate it to another host to silence this " "warning.", {'status': existed.status, 'instance_uuid': existed.instance_uuid, 'pci_exception': e.format_message()}) # NOTE(sean-k-mooney): the device may not be tracked for # two reasons: first the device could have been removed # from the host or second the whitelist could have been # updated. While force removing may seam reasonable, if # the device is allocated to a vm, force removing the # device entry from the resource tracker can prevent the vm # from rebooting. 
If the PCI device was removed due to an # update to the PCI whitelist which was later reverted, # removing the entry from the database and adding it back # later may lead to the scheduler incorrectly selecting # this host and the ResourceTracker assigning the PCI # device to a second vm. To prevent this bug we skip # deleting the device from the db in this iteration and # will try again on the next sync. continue else: # Note(yjiang5): no need to update stats if an assigned # device is hot removed. self.stats.remove_device(existed) else: # Update tracked devices. new_value = next((dev for dev in devices if dev['address'] == existed.address)) new_value['compute_node_id'] = self.node_id if existed.status in (fields.PciDeviceStatus.CLAIMED, fields.PciDeviceStatus.ALLOCATED): # Pci properties may change while assigned because of # hotplug or config changes. Although normally this should # not happen. # As the devices have been assigned to an instance, # we defer the change till the instance is destroyed. # We will not sync the new properties with database # before that. # TODO(yjiang5): Not sure if this is a right policy, but # at least it avoids some confusion and, if needed, # we can add more action like killing the instance # by force in future. self.stale[new_value['address']] = new_value else: existed.update_device(new_value) self.stats.update_device(existed) # Track newly discovered devices. for dev in [dev for dev in devices if dev['address'] in new_addrs - exist_addrs]: dev['compute_node_id'] = self.node_id dev_obj = objects.PciDevice.create(self._context, dev) self.pci_devs.objects.append(dev_obj) self.stats.add_device(dev_obj) self._build_device_tree(self.pci_devs) def _claim_instance(self, context, pci_requests, instance_numa_topology): instance_cells = None if instance_numa_topology: instance_cells = instance_numa_topology.cells devs = self.stats.consume_requests(pci_requests.requests, instance_cells) if not devs: return None instance_uuid = pci_requests.instance_uuid for dev in devs: dev.claim(instance_uuid) if instance_numa_topology and any( dev.numa_node is None for dev in devs): LOG.warning("Assigning a pci device without numa affinity to " "instance %(instance)s which has numa topology", {'instance': instance_uuid}) return devs def _allocate_instance(self, instance, devs): for dev in devs: dev.allocate(instance) def allocate_instance(self, instance): devs = self.claims.pop(instance['uuid'], []) self._allocate_instance(instance, devs) if devs: self.allocations[instance['uuid']] += devs def claim_instance(self, context, pci_requests, instance_numa_topology): devs = [] if self.pci_devs and pci_requests.requests: instance_uuid = pci_requests.instance_uuid devs = self._claim_instance(context, pci_requests, instance_numa_topology) if devs: self.claims[instance_uuid] = devs return devs def free_device(self, dev, instance): """Free device from pci resource tracker :param dev: cloned pci device object that needs to be free :param instance: the instance that this pci device is allocated to """ for pci_dev in self.pci_devs: # Find the matching pci device in the pci resource tracker. # Once found, free it. 
if dev.id == pci_dev.id and dev.instance_uuid == instance['uuid']: self._remove_device_from_pci_mapping( instance['uuid'], pci_dev, self.allocations) self._remove_device_from_pci_mapping( instance['uuid'], pci_dev, self.claims) self._free_device(pci_dev) break def _remove_device_from_pci_mapping( self, instance_uuid, pci_device, pci_mapping): """Remove a PCI device from allocations or claims. If there are no more PCI devices, pop the uuid. """ pci_devices = pci_mapping.get(instance_uuid, []) if pci_device in pci_devices: pci_devices.remove(pci_device) if len(pci_devices) == 0: pci_mapping.pop(instance_uuid, None) def _free_device(self, dev, instance=None): freed_devs = dev.free(instance) stale = self.stale.pop(dev.address, None) if stale: dev.update_device(stale) for dev in freed_devs: self.stats.add_device(dev) def free_instance_allocations(self, context, instance): """Free devices that are in ALLOCATED state for instance. :param context: user request context (nova.context.RequestContext) :param instance: instance object """ if self.allocations.pop(instance['uuid'], None): for dev in self.pci_devs: if (dev.status == fields.PciDeviceStatus.ALLOCATED and dev.instance_uuid == instance['uuid']): self._free_device(dev) def free_instance_claims(self, context, instance): """Free devices that are in CLAIMED state for instance. :param context: user request context (nova.context.RequestContext) :param instance: instance object """ if self.claims.pop(instance['uuid'], None): for dev in self.pci_devs: if (dev.status == fields.PciDeviceStatus.CLAIMED and dev.instance_uuid == instance['uuid']): self._free_device(dev) def free_instance(self, context, instance): """Free devices that are in CLAIMED or ALLOCATED state for instance. :param context: user request context (nova.context.RequestContext) :param instance: instance object """ # Note(yjiang5): When an instance is resized, the devices in the # destination node are claimed to the instance in prep_resize stage. # However, the instance contains only allocated devices # information, not the claimed one. So we can't use # instance['pci_devices'] to check the devices to be freed. self.free_instance_allocations(context, instance) self.free_instance_claims(context, instance) def update_pci_for_instance(self, context, instance, sign): """Update PCI usage information if devices are de/allocated. """ if not self.pci_devs: return if sign == -1: self.free_instance(context, instance) if sign == 1: self.allocate_instance(instance) def clean_usage(self, instances, migrations, orphans): """Remove all usages for instances not passed in the parameter. The caller should hold the COMPUTE_RESOURCE_SEMAPHORE lock """ existed = set(inst['uuid'] for inst in instances) existed |= set(mig['instance_uuid'] for mig in migrations) existed |= set(inst['uuid'] for inst in orphans) # need to copy keys, because the dict is modified in the loop body for uuid in list(self.claims): if uuid not in existed: devs = self.claims.pop(uuid, []) for dev in devs: self._free_device(dev) # need to copy keys, because the dict is modified in the loop body for uuid in list(self.allocations): if uuid not in existed: devs = self.allocations.pop(uuid, []) for dev in devs: self._free_device(dev) def get_instance_pci_devs(inst, request_id=None): """Get the devices allocated to one or all requests for an instance. - For generic PCI request, the request id is None. 
- For sr-iov networking, the request id is a valid uuid - There are a couple of cases where all the PCI devices allocated to an instance need to be returned. Refer to libvirt driver that handles soft_reboot and hard_boot of 'xen' instances. """ pci_devices = inst.pci_devices if pci_devices is None: return [] return [device for device in pci_devices if device.request_id == request_id or request_id == 'all'] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/pci/request.py0000664000175000017500000002273000000000000016352 0ustar00zuulzuul00000000000000# Copyright 2013 Intel Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Example of a PCI alias:: | [pci] | alias = '{ | "name": "QuickAssist", | "product_id": "0443", | "vendor_id": "8086", | "device_type": "type-PCI", | "numa_policy": "legacy" | }' Aliases with the same name, device_type and numa_policy are ORed:: | [pci] | alias = '{ | "name": "QuickAssist", | "product_id": "0442", | "vendor_id": "8086", | "device_type": "type-PCI", | }' These two aliases define a device request meaning: vendor_id is "8086" and product_id is "0442" or "0443". """ import jsonschema from oslo_log import log as logging from oslo_serialization import jsonutils import six import nova.conf from nova import exception from nova.i18n import _ from nova.network import model as network_model from nova import objects from nova.objects import fields as obj_fields from nova.pci import utils LOG = logging.getLogger(__name__) PCI_NET_TAG = 'physical_network' PCI_TRUSTED_TAG = 'trusted' PCI_DEVICE_TYPE_TAG = 'dev_type' DEVICE_TYPE_FOR_VNIC_TYPE = { network_model.VNIC_TYPE_DIRECT_PHYSICAL: obj_fields.PciDeviceType.SRIOV_PF } CONF = nova.conf.CONF _ALIAS_CAP_TYPE = ['pci'] _ALIAS_SCHEMA = { "type": "object", "additionalProperties": False, "properties": { "name": { "type": "string", "minLength": 1, "maxLength": 256, }, # TODO(stephenfin): This isn't used anywhere outside of tests and # should probably be removed. "capability_type": { "type": "string", "enum": _ALIAS_CAP_TYPE, }, "product_id": { "type": "string", "pattern": utils.PCI_VENDOR_PATTERN, }, "vendor_id": { "type": "string", "pattern": utils.PCI_VENDOR_PATTERN, }, "device_type": { "type": "string", "enum": list(obj_fields.PciDeviceType.ALL), }, "numa_policy": { "type": "string", "enum": list(obj_fields.PCINUMAAffinityPolicy.ALL), }, }, "required": ["name"], } def _get_alias_from_config(): """Parse and validate PCI aliases from the nova config. :returns: A dictionary where the keys are device names and the values are tuples of form ``(specs, numa_policy)``. ``specs`` is a list of PCI device specs, while ``numa_policy`` describes the required NUMA affinity of the device(s). :raises: exception.PciInvalidAlias if two aliases with the same name have different device types or different NUMA policies. 
""" jaliases = CONF.pci.alias aliases = {} # map alias name to alias spec list try: for jsonspecs in jaliases: spec = jsonutils.loads(jsonspecs) jsonschema.validate(spec, _ALIAS_SCHEMA) name = spec.pop('name').strip() numa_policy = spec.pop('numa_policy', None) if not numa_policy: numa_policy = obj_fields.PCINUMAAffinityPolicy.LEGACY dev_type = spec.pop('device_type', None) if dev_type: spec['dev_type'] = dev_type if name not in aliases: aliases[name] = (numa_policy, [spec]) continue if aliases[name][0] != numa_policy: reason = _("NUMA policy mismatch for alias '%s'") % name raise exception.PciInvalidAlias(reason=reason) if aliases[name][1][0]['dev_type'] != spec['dev_type']: reason = _("Device type mismatch for alias '%s'") % name raise exception.PciInvalidAlias(reason=reason) aliases[name][1].append(spec) except exception.PciInvalidAlias: raise except jsonschema.exceptions.ValidationError as exc: raise exception.PciInvalidAlias(reason=exc.message) except Exception as exc: raise exception.PciInvalidAlias(reason=six.text_type(exc)) return aliases def _translate_alias_to_requests(alias_spec, affinity_policy=None): """Generate complete pci requests from pci aliases in extra_spec.""" pci_aliases = _get_alias_from_config() pci_requests = [] for name, count in [spec.split(':') for spec in alias_spec.split(',')]: name = name.strip() if name not in pci_aliases: raise exception.PciRequestAliasNotDefined(alias=name) count = int(count) numa_policy, spec = pci_aliases[name] policy = affinity_policy or numa_policy # NOTE(gibi): InstancePCIRequest has a requester_id field that could # be filled with the flavor.flavorid but currently there is no special # handling for InstancePCIRequests created from the flavor. So it is # left empty. pci_requests.append(objects.InstancePCIRequest( count=count, spec=spec, alias_name=name, numa_policy=policy)) return pci_requests def get_instance_pci_request_from_vif(context, instance, vif): """Given an Instance, return the PCI request associated to the PCI device related to the given VIF (if any) on the compute node the instance is currently running. In this method we assume a VIF is associated with a PCI device if 'pci_slot' attribute exists in the vif 'profile' dict. :param context: security context :param instance: instance object :param vif: network VIF model object :raises: raises PciRequestFromVIFNotFound if a pci device is requested but not found on current host :return: instance's PCIRequest object associated with the given VIF or None if no PCI device is requested """ # Get PCI device address for VIF if exists vif_pci_dev_addr = vif['profile'].get('pci_slot') \ if vif['profile'] else None if not vif_pci_dev_addr: return None try: cn_id = objects.ComputeNode.get_by_host_and_nodename( context, instance.host, instance.node).id except exception.NotFound: LOG.warning("expected to find compute node with host %s " "and node %s when getting instance PCI request " "from VIF", instance.host, instance.node) return None # Find PCIDevice associated with vif_pci_dev_addr on the compute node # the instance is running on. 
found_pci_dev = None for pci_dev in instance.pci_devices: if (pci_dev.compute_node_id == cn_id and pci_dev.address == vif_pci_dev_addr): found_pci_dev = pci_dev break if not found_pci_dev: return None # Find PCIRequest associated with the given PCIDevice in instance for pci_req in instance.pci_requests.requests: if pci_req.request_id == found_pci_dev.request_id: return pci_req raise exception.PciRequestFromVIFNotFound( pci_slot=vif_pci_dev_addr, node_id=cn_id) def get_pci_requests_from_flavor(flavor, affinity_policy=None): """Validate and return PCI requests. The ``pci_passthrough:alias`` extra spec describes the flavor's PCI requests. The extra spec's value is a comma-separated list of format ``alias_name_x:count, alias_name_y:count, ... ``, where ``alias_name`` is defined in ``pci.alias`` configurations. The flavor's requirement is translated into a PCI requests list. Each entry in the list is an instance of nova.objects.InstancePCIRequests with four keys/attributes. - 'spec' states the PCI device properties requirement - 'count' states the number of devices - 'alias_name' (optional) is the corresponding alias definition name - 'numa_policy' (optional) states the required NUMA affinity of the devices For example, assume alias configuration is:: { 'vendor_id':'8086', 'device_id':'1502', 'name':'alias_1' } While flavor extra specs includes:: 'pci_passthrough:alias': 'alias_1:2' The returned ``pci_requests`` are:: [{ 'count':2, 'specs': [{'vendor_id':'8086', 'device_id':'1502'}], 'alias_name': 'alias_1' }] :param flavor: The flavor to be checked :param affinity_policy: pci numa affinity policy :returns: A list of PCI requests :rtype: nova.objects.InstancePCIRequests :raises: exception.PciRequestAliasNotDefined if an invalid PCI alias is provided :raises: exception.PciInvalidAlias if the configuration contains invalid aliases. """ pci_requests = [] if ('extra_specs' in flavor and 'pci_passthrough:alias' in flavor['extra_specs']): pci_requests = _translate_alias_to_requests( flavor['extra_specs']['pci_passthrough:alias'], affinity_policy=affinity_policy) return objects.InstancePCIRequests(requests=pci_requests) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/pci/stats.py0000664000175000017500000004555000000000000016025 0ustar00zuulzuul00000000000000# Copyright (c) 2013 Intel, Inc. # Copyright (c) 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from oslo_config import cfg from oslo_log import log as logging import six from nova import exception from nova.objects import fields from nova.objects import pci_device_pool from nova.pci import utils from nova.pci import whitelist CONF = cfg.CONF LOG = logging.getLogger(__name__) class PciDeviceStats(object): """PCI devices summary information. According to the PCI SR-IOV spec, a PCI physical function can have up to 256 PCI virtual functions, thus the number of assignable PCI functions in a cloud can be big. 
The scheduler needs to know all device availability information in order to determine which compute hosts can support a PCI request. Passing individual virtual device information to the scheduler does not scale, so we provide summary information. Usually the virtual functions provided by a host PCI device have the same value for most properties, like vendor_id, product_id and class type. The PCI stats class summarizes this information for the scheduler. The pci stats information is maintained exclusively by compute node resource tracker and updated to database. The scheduler fetches the information and selects the compute node accordingly. If a compute node is selected, the resource tracker allocates the devices to the instance and updates the pci stats information. This summary information will be helpful for cloud management also. """ pool_keys = ['product_id', 'vendor_id', 'numa_node', 'dev_type'] def __init__(self, stats=None, dev_filter=None): super(PciDeviceStats, self).__init__() # NOTE(sbauza): Stats are a PCIDevicePoolList object self.pools = [pci_pool.to_dict() for pci_pool in stats] if stats else [] self.pools.sort(key=lambda item: len(item)) self.dev_filter = dev_filter or whitelist.Whitelist( CONF.pci.passthrough_whitelist) def _equal_properties(self, dev, entry, matching_keys): return all(dev.get(prop) == entry.get(prop) for prop in matching_keys) def _find_pool(self, dev_pool): """Return the first pool that matches dev.""" for pool in self.pools: pool_keys = pool.copy() del pool_keys['count'] del pool_keys['devices'] if (len(pool_keys.keys()) == len(dev_pool.keys()) and self._equal_properties(dev_pool, pool_keys, dev_pool.keys())): return pool def _create_pool_keys_from_dev(self, dev): """create a stats pool dict that this dev is supposed to be part of Note that this pool dict contains the stats pool's keys and their values. 'count' and 'devices' are not included. """ # Don't add a device that doesn't have a matching device spec. # This can happen during initial sync up with the controller devspec = self.dev_filter.get_devspec(dev) if not devspec: return tags = devspec.get_tags() pool = {k: getattr(dev, k) for k in self.pool_keys} if tags: pool.update(tags) # NOTE(gibi): parent_ifname acts like a tag during pci claim but # not provided as part of the whitelist spec as it is auto detected # by the virt driver. # This key is used for match InstancePciRequest backed by neutron ports # that has resource_request and therefore that has resource allocation # already in placement. if dev.extra_info.get('parent_ifname'): pool['parent_ifname'] = dev.extra_info['parent_ifname'] return pool def _get_pool_with_device_type_mismatch(self, dev): """Check for device type mismatch in the pools for a given device. Return (pool, device) if device type does not match or a single None if the device type matches. 
""" for pool in self.pools: for device in pool['devices']: if device.address == dev.address: if dev.dev_type != pool["dev_type"]: return pool, device return None return None def update_device(self, dev): """Update a device to its matching pool.""" pool_device_info = self._get_pool_with_device_type_mismatch(dev) if pool_device_info is None: return pool, device = pool_device_info pool['devices'].remove(device) self._decrease_pool_count(self.pools, pool) self.add_device(dev) def add_device(self, dev): """Add a device to its matching pool.""" dev_pool = self._create_pool_keys_from_dev(dev) if dev_pool: pool = self._find_pool(dev_pool) if not pool: dev_pool['count'] = 0 dev_pool['devices'] = [] self.pools.append(dev_pool) self.pools.sort(key=lambda item: len(item)) pool = dev_pool pool['count'] += 1 pool['devices'].append(dev) @staticmethod def _decrease_pool_count(pool_list, pool, count=1): """Decrement pool's size by count. If pool becomes empty, remove pool from pool_list. """ if pool['count'] > count: pool['count'] -= count count = 0 else: count -= pool['count'] pool_list.remove(pool) return count def remove_device(self, dev): """Remove one device from the first pool that it matches.""" dev_pool = self._create_pool_keys_from_dev(dev) if dev_pool: pool = self._find_pool(dev_pool) if not pool: raise exception.PciDevicePoolEmpty( compute_node_id=dev.compute_node_id, address=dev.address) pool['devices'].remove(dev) self._decrease_pool_count(self.pools, pool) def get_free_devs(self): free_devs = [] for pool in self.pools: free_devs.extend(pool['devices']) return free_devs def consume_requests(self, pci_requests, numa_cells=None): alloc_devices = [] for request in pci_requests: count = request.count spec = request.spec # For now, keep the same algorithm as during scheduling: # a spec may be able to match multiple pools. pools = self._filter_pools_for_spec(self.pools, spec) if numa_cells: numa_policy = None if 'numa_policy' in request: numa_policy = request.numa_policy pools = self._filter_pools_for_numa_cells( pools, numa_cells, numa_policy, count) pools = self._filter_non_requested_pfs(pools, request) # Failed to allocate the required number of devices # Return the devices already allocated back to their pools if sum([pool['count'] for pool in pools]) < count: LOG.error("Failed to allocate PCI devices for instance. " "Unassigning devices back to pools. " "This should not happen, since the scheduler " "should have accurate information, and allocation " "during claims is controlled via a hold " "on the compute node semaphore.") for d in range(len(alloc_devices)): self.add_device(alloc_devices.pop()) return None for pool in pools: if pool['count'] >= count: num_alloc = count else: num_alloc = pool['count'] count -= num_alloc pool['count'] -= num_alloc for d in range(num_alloc): pci_dev = pool['devices'].pop() self._handle_device_dependents(pci_dev) pci_dev.request_id = request.request_id alloc_devices.append(pci_dev) if count == 0: break return alloc_devices def _handle_device_dependents(self, pci_dev): """Remove device dependents or a parent from pools. In case the device is a PF, all of it's dependent VFs should be removed from pools count, if these are present. When the device is a VF, it's parent PF pool count should be decreased, unless it is no longer in a pool. 
""" if pci_dev.dev_type == fields.PciDeviceType.SRIOV_PF: vfs_list = pci_dev.child_devices if vfs_list: for vf in vfs_list: self.remove_device(vf) elif pci_dev.dev_type == fields.PciDeviceType.SRIOV_VF: try: parent = pci_dev.parent_device # Make sure not to decrease PF pool count if this parent has # been already removed from pools if parent in self.get_free_devs(): self.remove_device(parent) except exception.PciDeviceNotFound: return @staticmethod def _filter_pools_for_spec(pools, request_specs): return [pool for pool in pools if utils.pci_device_prop_match(pool, request_specs)] @classmethod def _filter_pools_for_numa_cells(cls, pools, numa_cells, numa_policy, requested_count): """Filter out pools with the wrong NUMA affinity, if required. Exclude pools that do not have *suitable* PCI NUMA affinity. ``numa_policy`` determines what *suitable* means, being one of PREFERRED (nice-to-have), LEGACY (must-have-if-available) and REQUIRED (must-have). We iterate through the various policies in order of strictness. This means that even if we only *prefer* PCI-NUMA affinity, we will still attempt to provide it if possible. :param pools: A list of PCI device pool dicts :param numa_cells: A list of InstanceNUMACell objects whose ``id`` corresponds to the ``id`` of host NUMACells. :param numa_policy: The PCI NUMA affinity policy to apply. :param requested_count: The number of PCI devices requested. :returns: A list of pools that can, together, provide at least ``requested_count`` PCI devices with the level of NUMA affinity required by ``numa_policy``, else all pools that can satisfy this policy even if it's not enough. """ # NOTE(stephenfin): We may wish to change the default policy at a later # date requested_policy = numa_policy or fields.PCINUMAAffinityPolicy.LEGACY numa_cell_ids = [cell.id for cell in numa_cells] # filter out pools which numa_node is not included in numa_cell_ids filtered_pools = [ pool for pool in pools if any(utils.pci_device_prop_match( pool, [{'numa_node': cell}]) for cell in numa_cell_ids)] # we can't apply a less strict policy than the one requested, so we # need to return if we've demanded a NUMA affinity of REQUIRED. # However, NUMA affinity is a good thing. If we can get enough devices # with the stricter policy then we will use them. if requested_policy == fields.PCINUMAAffinityPolicy.REQUIRED or sum( pool['count'] for pool in filtered_pools) >= requested_count: return filtered_pools # some systems don't report NUMA node info for PCI devices, in which # case None is reported in 'pci_device.numa_node'. The LEGACY policy # allows us to use these devices so we include None in the list of # suitable NUMA cells. numa_cell_ids.append(None) # filter out pools which numa_node is not included in numa_cell_ids filtered_pools = [ pool for pool in pools if any(utils.pci_device_prop_match( pool, [{'numa_node': cell}]) for cell in numa_cell_ids)] # once again, we can't apply a less strict policy than the one # requested, so we need to return if we've demanded a NUMA affinity of # LEGACY. Similarly, we will also return if we have enough devices to # satisfy this somewhat strict policy. if requested_policy == fields.PCINUMAAffinityPolicy.LEGACY or sum( pool['count'] for pool in filtered_pools) >= requested_count: return filtered_pools # if we've got here, we're using the PREFERRED policy and weren't able # to provide anything with stricter affinity. Use whatever devices you # can, folks. 
return sorted( pools, key=lambda pool: pool.get('numa_node') not in numa_cell_ids) @classmethod def _filter_non_requested_pfs(cls, pools, request): # Remove SRIOV_PFs from pools, unless it has been explicitly requested # This is especially needed in cases where PFs and VFs have the same # product_id. if all(spec.get('dev_type') != fields.PciDeviceType.SRIOV_PF for spec in request.spec): pools = cls._filter_pools_for_pfs(pools) return pools @staticmethod def _filter_pools_for_pfs(pools): return [pool for pool in pools if not pool.get('dev_type') == fields.PciDeviceType.SRIOV_PF] def _apply_request(self, pools, request, numa_cells=None): """Apply a PCI request. Apply a PCI request against a given set of PCI device pools, which are collections of devices with similar traits. If ``numa_cells`` is provided then NUMA locality may be taken into account, depending on the value of ``request.numa_policy``. :param pools: A list of PCI device pool dicts :param request: An InstancePCIRequest object describing the type, quantity and required NUMA affinity of device(s) we want.. :param numa_cells: A list of InstanceNUMACell objects whose ``id`` corresponds to the ``id`` of host NUMACells. :returns: True if the request was applied against the provided pools successfully, else False. """ # NOTE(vladikr): This code maybe open to race conditions. # Two concurrent requests may succeed when called support_requests # because this method does not remove related devices from the pools count = request.count # Firstly, let's exclude all devices that don't match our spec (e.g. # they've got different PCI IDs or something) matching_pools = self._filter_pools_for_spec(pools, request.spec) # Next, let's exclude all devices that aren't on the correct NUMA node # *assuming* we have devices and care about that, as determined by # policy if numa_cells: numa_policy = None if 'numa_policy' in request: numa_policy = request.numa_policy matching_pools = self._filter_pools_for_numa_cells(matching_pools, numa_cells, numa_policy, count) # Finally, if we're not requesting PFs then we should not use these. # Exclude them. matching_pools = self._filter_non_requested_pfs(matching_pools, request) # Do we still have any devices left? if sum([pool['count'] for pool in matching_pools]) < count: return False else: for pool in matching_pools: count = self._decrease_pool_count(pools, pool, count) if not count: break return True def support_requests(self, requests, numa_cells=None): """Determine if the PCI requests can be met. Determine, based on a compute node's PCI stats, if an instance can be scheduled on the node. **Support does not mean real allocation**. If ``numa_cells`` is provided then NUMA locality may be taken into account, depending on the value of ``numa_policy``. :param requests: A list of InstancePCIRequest object describing the types, quantities and required NUMA affinities of devices we want. :type requests: nova.objects.InstancePCIRequests :param numa_cells: A list of InstanceNUMACell objects whose ``id`` corresponds to the ``id`` of host NUMACells, or None. :returns: Whether this compute node can satisfy the given request. """ # note (yjiang5): this function has high possibility to fail, # so no exception should be triggered for performance reason. pools = copy.deepcopy(self.pools) return all(self._apply_request(pools, r, numa_cells) for r in requests) def apply_requests(self, requests, numa_cells=None): """Apply PCI requests to the PCI stats. 
This is used in multiple instance creation, when the scheduler has to maintain how the resources are consumed by the instances. If ``numa_cells`` is provided then NUMA locality may be taken into account, depending on the value of ``numa_policy``. :param requests: A list of InstancePCIRequest object describing the types, quantities and required NUMA affinities of devices we want. :type requests: nova.objects.InstancePCIRequests :param numa_cells: A list of InstanceNUMACell objects whose ``id`` corresponds to the ``id`` of host NUMACells, or None. :raises: exception.PciDeviceRequestFailed if this compute node cannot satisfy the given request. """ if not all(self._apply_request(self.pools, r, numa_cells) for r in requests): raise exception.PciDeviceRequestFailed(requests=requests) def __iter__(self): # 'devices' shouldn't be part of stats pools = [] for pool in self.pools: tmp = {k: v for k, v in pool.items() if k != 'devices'} pools.append(tmp) return iter(pools) def clear(self): """Clear all the stats maintained.""" self.pools = [] def __eq__(self, other): return self.pools == other.pools if six.PY2: def __ne__(self, other): return not (self == other) def to_device_pools_obj(self): """Return the contents of the pools as a PciDevicePoolList object.""" stats = [x for x in self] return pci_device_pool.from_pci_stats(stats) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/pci/utils.py0000664000175000017500000001767600000000000016037 0ustar00zuulzuul00000000000000# Copyright (c) 2013 Intel, Inc. # Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import glob import os import re from oslo_log import log as logging import six from nova import exception LOG = logging.getLogger(__name__) PCI_VENDOR_PATTERN = "^(hex{4})$".replace("hex", r"[\da-fA-F]") _PCI_ADDRESS_PATTERN = ("^(hex{4}):(hex{2}):(hex{2}).(oct{1})$". replace("hex", r"[\da-fA-F]"). replace("oct", "[0-7]")) _PCI_ADDRESS_REGEX = re.compile(_PCI_ADDRESS_PATTERN) _SRIOV_TOTALVFS = "sriov_totalvfs" def pci_device_prop_match(pci_dev, specs): """Check if the pci_dev meet spec requirement Specs is a list of PCI device property requirements. An example of device requirement that the PCI should be either: a) Device with vendor_id as 0x8086 and product_id as 0x8259, or b) Device with vendor_id as 0x10de and product_id as 0x10d8: [{"vendor_id":"8086", "product_id":"8259"}, {"vendor_id":"10de", "product_id":"10d8", "capabilities_network": ["rx", "tx", "tso", "gso"]}] """ def _matching_devices(spec): for k, v in spec.items(): pci_dev_v = pci_dev.get(k) if isinstance(v, list) and isinstance(pci_dev_v, list): if not all(x in pci_dev.get(k) for x in v): return False else: # We don't need to check case for tags in order to avoid any # mismatch with the tags provided by users for port # binding profile and the ones configured by operators # with pci whitelist option. 
if isinstance(v, six.string_types): v = v.lower() if isinstance(pci_dev_v, six.string_types): pci_dev_v = pci_dev_v.lower() if pci_dev_v != v: return False return True return any(_matching_devices(spec) for spec in specs) def parse_address(address): """Returns (domain, bus, slot, function) from PCI address that is stored in PciDevice DB table. """ m = _PCI_ADDRESS_REGEX.match(address) if not m: raise exception.PciDeviceWrongAddressFormat(address=address) return m.groups() def get_pci_address_fields(pci_addr): """Parse a fully-specified PCI device address. Does not validate that the components are valid hex or wildcard values. :param pci_addr: A string of the form "::.". :return: A 4-tuple of strings ("", "", "", "") """ dbs, sep, func = pci_addr.partition('.') domain, bus, slot = dbs.split(':') return domain, bus, slot, func def get_pci_address(domain, bus, slot, func): """Assembles PCI address components into a fully-specified PCI address. Does not validate that the components are valid hex or wildcard values. :param domain, bus, slot, func: Hex or wildcard strings. :return: A string of the form "::.". """ return '%s:%s:%s.%s' % (domain, bus, slot, func) def get_function_by_ifname(ifname): """Given the device name, returns the PCI address of a device and returns True if the address is in a physical function. """ dev_path = "/sys/class/net/%s/device" % ifname sriov_totalvfs = 0 if os.path.isdir(dev_path): try: # sriov_totalvfs contains the maximum possible VFs for this PF with open(os.path.join(dev_path, _SRIOV_TOTALVFS)) as fd: sriov_totalvfs = int(fd.read()) return (os.readlink(dev_path).strip("./"), sriov_totalvfs > 0) except (IOError, ValueError): return os.readlink(dev_path).strip("./"), False return None, False def is_physical_function(domain, bus, slot, function): dev_path = "/sys/bus/pci/devices/%(d)s:%(b)s:%(s)s.%(f)s/" % { "d": domain, "b": bus, "s": slot, "f": function} if os.path.isdir(dev_path): try: with open(dev_path + _SRIOV_TOTALVFS) as fd: sriov_totalvfs = int(fd.read()) return sriov_totalvfs > 0 except (IOError, ValueError): pass return False def _get_sysfs_netdev_path(pci_addr, pf_interface): """Get the sysfs path based on the PCI address of the device. Assumes a networking device - will not check for the existence of the path. """ if pf_interface: return "/sys/bus/pci/devices/%s/physfn/net" % pci_addr return "/sys/bus/pci/devices/%s/net" % pci_addr def get_ifname_by_pci_address(pci_addr, pf_interface=False): """Get the interface name based on a VF's pci address. The returned interface name is either the parent PF's or that of the VF itself based on the argument of pf_interface. """ dev_path = _get_sysfs_netdev_path(pci_addr, pf_interface) try: dev_info = os.listdir(dev_path) return dev_info.pop() except Exception: raise exception.PciDeviceNotFoundById(id=pci_addr) def get_mac_by_pci_address(pci_addr, pf_interface=False): """Get the MAC address of the nic based on its PCI address. Raises PciDeviceNotFoundById in case the pci device is not a NIC """ dev_path = _get_sysfs_netdev_path(pci_addr, pf_interface) if_name = get_ifname_by_pci_address(pci_addr, pf_interface) addr_file = os.path.join(dev_path, if_name, 'address') try: with open(addr_file) as f: mac = next(f).strip() return mac except (IOError, StopIteration) as e: LOG.warning("Could not find the expected sysfs file for " "determining the MAC address of the PCI device " "%(addr)s. May not be a NIC. 
Error: %(e)s", {'addr': pci_addr, 'e': e}) raise exception.PciDeviceNotFoundById(id=pci_addr) def get_vf_num_by_pci_address(pci_addr): """Get the VF number based on a VF's pci address A VF is associated with an VF number, which ip link command uses to configure it. This number can be obtained from the PCI device filesystem. """ VIRTFN_RE = re.compile(r"virtfn(\d+)") virtfns_path = "/sys/bus/pci/devices/%s/physfn/virtfn*" % (pci_addr) vf_num = None try: for vf_path in glob.iglob(virtfns_path): if re.search(pci_addr, os.readlink(vf_path)): t = VIRTFN_RE.search(vf_path) vf_num = t.group(1) break except Exception: pass if vf_num is None: raise exception.PciDeviceNotFoundById(id=pci_addr) return vf_num def get_net_name_by_vf_pci_address(vfaddress): """Given the VF PCI address, returns the net device name. Every VF is associated to a PCI network device. This function returns the libvirt name given to this network device; e.g.: net_enp8s0f0_90_e2_ba_5e_a6_40 ... In the libvirt parser information tree, the network device stores the network capabilities associated to this device. """ try: mac = get_mac_by_pci_address(vfaddress).split(':') ifname = get_ifname_by_pci_address(vfaddress) return ("net_%(ifname)s_%(mac)s" % {'ifname': ifname, 'mac': '_'.join(mac)}) except Exception: LOG.warning("No net device was found for VF %(vfaddress)s", {'vfaddress': vfaddress}) return ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/pci/whitelist.py0000664000175000017500000000655100000000000016701 0ustar00zuulzuul00000000000000# Copyright (c) 2013 Intel, Inc. # Copyright (c) 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils from nova import exception from nova.i18n import _ from nova.pci import devspec class Whitelist(object): """White list class to represent assignable pci devices. Not all devices on a compute node can be assigned to a guest. The cloud administrator decides which devices can be assigned based on ``vendor_id`` or ``product_id``, etc. If no white list is specified, no devices will be assignable. """ def __init__(self, whitelist_spec=None): """White list constructor For example, the following json string specifies that devices whose vendor_id is '8086' and product_id is '1520' can be assigned to guests. :: '[{"product_id":"1520", "vendor_id":"8086"}]' :param whitelist_spec: A JSON string for a dictionary or list thereof. Each dictionary specifies the pci device properties requirement. See the definition of ``passthrough_whitelist`` in ``nova.conf.pci`` for details and examples. 
""" if whitelist_spec: self.specs = self._parse_white_list_from_config(whitelist_spec) else: self.specs = [] @staticmethod def _parse_white_list_from_config(whitelists): """Parse and validate the pci whitelist from the nova config.""" specs = [] for jsonspec in whitelists: try: dev_spec = jsonutils.loads(jsonspec) except ValueError: raise exception.PciConfigInvalidWhitelist( reason=_("Invalid entry: '%s'") % jsonspec) if isinstance(dev_spec, dict): dev_spec = [dev_spec] elif not isinstance(dev_spec, list): raise exception.PciConfigInvalidWhitelist( reason=_("Invalid entry: '%s'; " "Expecting list or dict") % jsonspec) for ds in dev_spec: if not isinstance(ds, dict): raise exception.PciConfigInvalidWhitelist( reason=_("Invalid entry: '%s'; " "Expecting dict") % ds) spec = devspec.PciDeviceSpec(ds) specs.append(spec) return specs def device_assignable(self, dev): """Check if a device can be assigned to a guest. :param dev: A dictionary describing the device properties """ for spec in self.specs: if spec.match(dev): return True return False def get_devspec(self, pci_dev): for spec in self.specs: if spec.match_pci_obj(pci_dev): return spec ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3744698 nova-21.2.4/nova/policies/0000775000175000017500000000000000000000000015340 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/__init__.py0000664000175000017500000001134100000000000017451 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import itertools from nova.policies import admin_actions from nova.policies import admin_password from nova.policies import agents from nova.policies import aggregates from nova.policies import assisted_volume_snapshots from nova.policies import attach_interfaces from nova.policies import availability_zone from nova.policies import baremetal_nodes from nova.policies import base from nova.policies import console_auth_tokens from nova.policies import console_output from nova.policies import create_backup from nova.policies import deferred_delete from nova.policies import evacuate from nova.policies import extended_server_attributes from nova.policies import extensions from nova.policies import flavor_access from nova.policies import flavor_extra_specs from nova.policies import flavor_manage from nova.policies import floating_ip_pools from nova.policies import floating_ips from nova.policies import hosts from nova.policies import hypervisors from nova.policies import instance_actions from nova.policies import instance_usage_audit_log from nova.policies import ips from nova.policies import keypairs from nova.policies import limits from nova.policies import lock_server from nova.policies import migrate_server from nova.policies import migrations from nova.policies import multinic from nova.policies import networks from nova.policies import pause_server from nova.policies import quota_class_sets from nova.policies import quota_sets from nova.policies import remote_consoles from nova.policies import rescue from nova.policies import security_groups from nova.policies import server_diagnostics from nova.policies import server_external_events from nova.policies import server_groups from nova.policies import server_metadata from nova.policies import server_password from nova.policies import server_tags from nova.policies import server_topology from nova.policies import servers from nova.policies import servers_migrations from nova.policies import services from nova.policies import shelve from nova.policies import simple_tenant_usage from nova.policies import suspend_server from nova.policies import tenant_networks from nova.policies import volumes from nova.policies import volumes_attachments def list_rules(): return itertools.chain( base.list_rules(), admin_actions.list_rules(), admin_password.list_rules(), agents.list_rules(), aggregates.list_rules(), assisted_volume_snapshots.list_rules(), attach_interfaces.list_rules(), availability_zone.list_rules(), baremetal_nodes.list_rules(), console_auth_tokens.list_rules(), console_output.list_rules(), create_backup.list_rules(), deferred_delete.list_rules(), evacuate.list_rules(), extended_server_attributes.list_rules(), extensions.list_rules(), flavor_access.list_rules(), flavor_extra_specs.list_rules(), flavor_manage.list_rules(), floating_ip_pools.list_rules(), floating_ips.list_rules(), hosts.list_rules(), hypervisors.list_rules(), instance_actions.list_rules(), instance_usage_audit_log.list_rules(), ips.list_rules(), keypairs.list_rules(), limits.list_rules(), lock_server.list_rules(), migrate_server.list_rules(), migrations.list_rules(), multinic.list_rules(), networks.list_rules(), pause_server.list_rules(), quota_class_sets.list_rules(), quota_sets.list_rules(), remote_consoles.list_rules(), rescue.list_rules(), security_groups.list_rules(), server_diagnostics.list_rules(), server_external_events.list_rules(), server_groups.list_rules(), server_metadata.list_rules(), server_password.list_rules(), server_tags.list_rules(), 
server_topology.list_rules(), servers.list_rules(), servers_migrations.list_rules(), services.list_rules(), shelve.list_rules(), simple_tenant_usage.list_rules(), suspend_server.list_rules(), tenant_networks.list_rules(), volumes.list_rules(), volumes_attachments.list_rules() ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/admin_actions.py0000664000175000017500000000370600000000000020530 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:os-admin-actions:%s' admin_actions_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'reset_state', check_str=base.SYSTEM_ADMIN, description="Reset the state of a given server", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (os-resetState)' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'inject_network_info', check_str=base.SYSTEM_ADMIN, description="Inject network information into the server", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (injectNetworkInfo)' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'reset_network', check_str=base.SYSTEM_ADMIN, description="Reset networking on a server", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (resetNetwork)' } ], scope_types=['system', 'project']) ] def list_rules(): return admin_actions_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/admin_password.py0000664000175000017500000000233400000000000020726 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
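# Illustrative sketch of how the rule registered below is typically
# consumed by the API layer; ``context.can`` and the target dict are
# assumptions based on nova's RequestContext helper, shown only as an
# example:
#
#     context.can('os_compute_api:os-admin-password',
#                 target={'project_id': instance.project_id})
#
# i.e. the policy name defined in this module is what gets passed to the
# enforcer when a changePassword server action is requested.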
from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-admin-password' admin_password_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME, check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Change the administrative password for a server", operations=[ { 'path': '/servers/{server_id}/action (changePassword)', 'method': 'POST' } ], scope_types=['system', 'project']) ] def list_rules(): return admin_password_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/agents.py0000664000175000017500000000656000000000000017202 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-agents:%s' DEPRECATED_AGENTS_POLICY = policy.DeprecatedRule( 'os_compute_api:os-agents', base.RULE_ADMIN_API, ) DEPRECATED_REASON = """ Nova API policies are introducing new default roles with scope_type capabilities. Old policies are deprecated and silently going to be ignored in nova 23.0.0 release. """ agents_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'list', check_str=base.SYSTEM_READER, description="""List guest agent builds This is XenAPI driver specific. It is used to force the upgrade of the XenAPI guest agent on instance boot. """, operations=[ { 'path': '/os-agents', 'method': 'GET' }, ], scope_types=['system'], deprecated_rule=DEPRECATED_AGENTS_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'create', check_str=base.SYSTEM_ADMIN, description="""Create guest agent builds This is XenAPI driver specific. It is used to force the upgrade of the XenAPI guest agent on instance boot. """, operations=[ { 'path': '/os-agents', 'method': 'POST' }, ], scope_types=['system'], deprecated_rule=DEPRECATED_AGENTS_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'update', check_str=base.SYSTEM_ADMIN, description="""Update guest agent builds This is XenAPI driver specific. It is used to force the upgrade of the XenAPI guest agent on instance boot. """, operations=[ { 'path': '/os-agents/{agent_build_id}', 'method': 'PUT' }, ], scope_types=['system'], deprecated_rule=DEPRECATED_AGENTS_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'delete', check_str=base.SYSTEM_ADMIN, description="""Delete guest agent builds This is XenAPI driver specific. It is used to force the upgrade of the XenAPI guest agent on instance boot. 
""", operations=[ { 'path': '/os-agents/{agent_build_id}', 'method': 'DELETE' } ], scope_types=['system'], deprecated_rule=DEPRECATED_AGENTS_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), ] def list_rules(): return agents_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/aggregates.py0000664000175000017500000000770600000000000020035 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:os-aggregates:%s' NEW_POLICY_ROOT = 'compute:aggregates:%s' aggregates_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'set_metadata', check_str=base.SYSTEM_ADMIN, description="Create or replace metadata for an aggregate", operations=[ { 'path': '/os-aggregates/{aggregate_id}/action (set_metadata)', 'method': 'POST' } ], scope_types=['system']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'add_host', check_str=base.SYSTEM_ADMIN, description="Add a host to an aggregate", operations=[ { 'path': '/os-aggregates/{aggregate_id}/action (add_host)', 'method': 'POST' } ], scope_types=['system']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'create', check_str=base.SYSTEM_ADMIN, description="Create an aggregate", operations=[ { 'path': '/os-aggregates', 'method': 'POST' } ], scope_types=['system']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'remove_host', check_str=base.SYSTEM_ADMIN, description="Remove a host from an aggregate", operations=[ { 'path': '/os-aggregates/{aggregate_id}/action (remove_host)', 'method': 'POST' } ], scope_types=['system']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'update', check_str=base.SYSTEM_ADMIN, description="Update name and/or availability zone for an aggregate", operations=[ { 'path': '/os-aggregates/{aggregate_id}', 'method': 'PUT' } ], scope_types=['system']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'index', check_str=base.SYSTEM_READER, description="List all aggregates", operations=[ { 'path': '/os-aggregates', 'method': 'GET' } ], scope_types=['system']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'delete', check_str=base.SYSTEM_ADMIN, description="Delete an aggregate", operations=[ { 'path': '/os-aggregates/{aggregate_id}', 'method': 'DELETE' } ], scope_types=['system']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'show', check_str=base.SYSTEM_READER, description="Show details for an aggregate", operations=[ { 'path': '/os-aggregates/{aggregate_id}', 'method': 'GET' } ], scope_types=['system']), policy.DocumentedRuleDefault( name=NEW_POLICY_ROOT % 'images', check_str=base.SYSTEM_ADMIN, description="Request image caching for an aggregate", operations=[ { 'path': '/os-aggregates/{aggregate_id}/images', 'method': 'POST' } ], scope_types=['system']), ] def list_rules(): return aggregates_policies 
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/assisted_volume_snapshots.py0000664000175000017500000000306600000000000023227 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:os-assisted-volume-snapshots:%s' assisted_volume_snapshots_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'create', check_str=base.SYSTEM_ADMIN, description="Create an assisted volume snapshot", operations=[ { 'path': '/os-assisted-volume-snapshots', 'method': 'POST' } ], scope_types=['system']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'delete', check_str=base.SYSTEM_ADMIN, description="Delete an assisted volume snapshot", operations=[ { 'path': '/os-assisted-volume-snapshots/{snapshot_id}', 'method': 'DELETE' } ], scope_types=['system']), ] def list_rules(): return assisted_volume_snapshots_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/attach_interfaces.py0000664000175000017500000000636400000000000021372 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-attach-interfaces' POLICY_ROOT = 'os_compute_api:os-attach-interfaces:%s' DEPRECATED_INTERFACES_POLICY = policy.DeprecatedRule( BASE_POLICY_NAME, base.RULE_ADMIN_OR_OWNER, ) DEPRECATED_REASON = """ Nova API policies are introducing new default roles with scope_type capabilities. Old policies are deprecated and silently going to be ignored in nova 23.0.0 release. 
""" attach_interfaces_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'list', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="List port interfaces attached to a server", operations=[ { 'method': 'GET', 'path': '/servers/{server_id}/os-interface' }, ], scope_types=['system', 'project'], deprecated_rule=DEPRECATED_INTERFACES_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'show', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="Show details of a port interface attached to a server", operations=[ { 'method': 'GET', 'path': '/servers/{server_id}/os-interface/{port_id}' } ], scope_types=['system', 'project'], deprecated_rule=DEPRECATED_INTERFACES_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'create', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Attach an interface to a server", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/os-interface' } ], scope_types=['system', 'project'], deprecated_rule=DEPRECATED_INTERFACES_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'delete', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Detach an interface from a server", operations=[ { 'method': 'DELETE', 'path': '/servers/{server_id}/os-interface/{port_id}' } ], scope_types=['system', 'project'], deprecated_rule=DEPRECATED_INTERFACES_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0') ] def list_rules(): return attach_interfaces_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/availability_zone.py0000664000175000017500000000312600000000000021421 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:os-availability-zone:%s' availability_zone_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'list', check_str=base.RULE_ANY, description="List availability zone information without host " "information", operations=[ { 'method': 'GET', 'path': '/os-availability-zone' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'detail', check_str=base.SYSTEM_READER, description="List detailed availability zone information with host " "information", operations=[ { 'method': 'GET', 'path': '/os-availability-zone/detail' } ], scope_types=['system']) ] def list_rules(): return availability_zone_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/baremetal_nodes.py0000664000175000017500000000244100000000000021037 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-baremetal-nodes' baremetal_nodes_policies = [ policy.DocumentedRuleDefault( BASE_POLICY_NAME, base.RULE_ADMIN_API, """List and show details of bare metal nodes. These APIs are proxy calls to the Ironic service and are deprecated. """, [ { 'method': 'GET', 'path': '/os-baremetal-nodes' }, { 'method': 'GET', 'path': '/os-baremetal-nodes/{node_id}' } ]), ] def list_rules(): return baremetal_nodes_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/base.py0000664000175000017500000001507500000000000016634 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy RULE_ADMIN_OR_OWNER = 'rule:admin_or_owner' # Admins or owners of the resource RULE_ADMIN_API = 'rule:admin_api' # Allow only users with the admin role RULE_ANY = '@' # Any user is allowed to perform the action. RULE_NOBODY = '!' # No users are allowed to perform the action. DEPRECATED_ADMIN_POLICY = policy.DeprecatedRule( name=RULE_ADMIN_API, check_str='is_admin:True', ) DEPRECATED_ADMIN_OR_OWNER_POLICY = policy.DeprecatedRule( name=RULE_ADMIN_OR_OWNER, check_str='is_admin:True or project_id:%(project_id)s', ) DEPRECATED_REASON = """ Nova API policies are introducing new default roles with scope_type capabilities. Old policies are deprecated and silently going to be ignored in nova 23.0.0 release. """ # TODO(gmann): # Special string ``system_scope:all`` is added for system # scoped policies for backwards compatibility where ``nova.conf [oslo_policy] # enforce_scope = False``. # Otherwise, this might open up APIs to be more permissive unintentionally if a # deployment isn't enforcing scope. For example, the 'list all servers' # policy will be System Scoped Reader with ``role:reader`` and # scope_type=['system'] Until enforce_scope=True by default, it would # be possible for users with the ``reader`` role on a project to access the # 'list all servers' API. Once nova defaults ``nova.conf [oslo_policy] # enforce_scope=True``, the ``system_scope:all`` bits of these check strings # can be removed since that will be handled automatically by scope_types in # oslo.policy's RuleDefault objects. 
SYSTEM_ADMIN = 'rule:system_admin_api' SYSTEM_READER = 'rule:system_reader_api' PROJECT_ADMIN = 'rule:project_admin_api' PROJECT_MEMBER = 'rule:project_member_api' PROJECT_READER = 'rule:project_reader_api' PROJECT_MEMBER_OR_SYSTEM_ADMIN = 'rule:system_admin_or_owner' PROJECT_READER_OR_SYSTEM_READER = 'rule:system_or_project_reader' # NOTE(gmann): Below is the mapping of new roles and scope_types # with legacy roles:: # Legacy Rule | New Rules |Operation |scope_type| # -------------------+----------------------------------+----------+----------- # |-> SYSTEM_ADMIN |Global | [system] # RULE_ADMIN_API | Write # |-> SYSTEM_READER |Global | [system] # | |Read | # # |-> PROJECT_MEMBER_OR_SYSTEM_ADMIN |Project | [system, # RULE_ADMIN_OR_OWNER| |Write | project] # |-> PROJECT_READER_OR_SYSTEM_READER|Project | [system, # |Read | project] # NOTE(johngarbutt) The base rules here affect so many APIs the list # of related API operations has not been populated. It would be # crazy hard to manually maintain such a list. # NOTE(gmann): Keystone already support implied roles means assignment # of one role implies the assignment of another. New defaults roles # `reader`, `member` also has been added in bootstrap. If the bootstrap # process is re-run, and a `reader`, `member`, or `admin` role already # exists, a role implication chain will be created: `admin` implies # `member` implies `reader`. # For example: If we give access to 'reader' it means the 'admin' and # 'member' also get access. rules = [ policy.RuleDefault( "context_is_admin", "role:admin", "Decides what is required for the 'is_admin:True' check to succeed."), policy.RuleDefault( "admin_or_owner", "is_admin:True or project_id:%(project_id)s", "Default rule for most non-Admin APIs.", deprecated_for_removal=True, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.RuleDefault( "admin_api", "is_admin:True", "Default rule for most Admin APIs.", deprecated_for_removal=True, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.RuleDefault( name="system_admin_api", check_str='role:admin and system_scope:all', description="Default rule for System Admin APIs.", deprecated_rule=DEPRECATED_ADMIN_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.RuleDefault( name="system_reader_api", check_str="role:reader and system_scope:all", description="Default rule for System level read only APIs.", deprecated_rule=DEPRECATED_ADMIN_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.RuleDefault( "project_admin_api", "role:admin and project_id:%(project_id)s", "Default rule for Project level admin APIs.", deprecated_rule=DEPRECATED_ADMIN_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.RuleDefault( "project_member_api", "role:member and project_id:%(project_id)s", "Default rule for Project level non admin APIs.", deprecated_rule=DEPRECATED_ADMIN_OR_OWNER_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.RuleDefault( "project_reader_api", "role:reader and project_id:%(project_id)s", "Default rule for Project level read only APIs."), policy.RuleDefault( name="system_admin_or_owner", check_str="rule:system_admin_api or rule:project_member_api", description="Default rule for System admin+owner APIs.", deprecated_rule=DEPRECATED_ADMIN_OR_OWNER_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.RuleDefault( "system_or_project_reader", "rule:system_reader_api or rule:project_reader_api", 
"Default rule for System+Project read only APIs.", deprecated_rule=DEPRECATED_ADMIN_OR_OWNER_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0') ] def list_rules(): return rules ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/console_auth_tokens.py0000664000175000017500000000236100000000000021762 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-console-auth-tokens' console_auth_tokens_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME, check_str=base.SYSTEM_READER, description="Show console connection information for a given console " "authentication token", operations=[ { 'method': 'GET', 'path': '/os-console-auth-tokens/{console_token}' } ], scope_types=['system']) ] def list_rules(): return console_auth_tokens_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/console_output.py0000664000175000017500000000232200000000000020773 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-console-output' console_output_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME, check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description='Show console output for a server', operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (os-getConsoleOutput)' } ], scope_types=['system', 'project']) ] def list_rules(): return console_output_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/create_backup.py0000664000175000017500000000230400000000000020501 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-create-backup' create_backup_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME, check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description='Create a back up of a server', operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (createBackup)' } ], scope_types=['system', 'project']) ] def list_rules(): return create_backup_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/deferred_delete.py0000664000175000017500000000423100000000000021014 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-deferred-delete:%s' DEPRECATED_POLICY = policy.DeprecatedRule( 'os_compute_api:os-deferred-delete', base.RULE_ADMIN_OR_OWNER, ) DEPRECATED_REASON = """ Nova API policies are introducing new default roles with scope_type capabilities. Old policies are deprecated and silently going to be ignored in nova 23.0.0 release. """ deferred_delete_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'restore', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Restore a soft deleted server", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (restore)' }, ], scope_types=['system', 'project'], deprecated_rule=DEPRECATED_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'force', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Force delete a server before deferred cleanup", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (forceDelete)' } ], scope_types=['system', 'project'], deprecated_rule=DEPRECATED_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0') ] def list_rules(): return deferred_delete_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/evacuate.py0000664000175000017500000000226600000000000017515 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-evacuate' evacuate_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME, check_str=base.SYSTEM_ADMIN, description="Evacuate a server from a failed host to a new host", operations=[ { 'path': '/servers/{server_id}/action (evacuate)', 'method': 'POST' } ], scope_types=['system', 'project']), ] def list_rules(): return evacuate_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/extended_server_attributes.py0000664000175000017500000000445400000000000023355 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-extended-server-attributes' extended_server_attributes_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME, check_str=base.SYSTEM_ADMIN, description="""Return extended attributes for server. This rule will control the visibility for a set of servers attributes: - ``OS-EXT-SRV-ATTR:host`` - ``OS-EXT-SRV-ATTR:instance_name`` - ``OS-EXT-SRV-ATTR:reservation_id`` (since microversion 2.3) - ``OS-EXT-SRV-ATTR:launch_index`` (since microversion 2.3) - ``OS-EXT-SRV-ATTR:hostname`` (since microversion 2.3) - ``OS-EXT-SRV-ATTR:kernel_id`` (since microversion 2.3) - ``OS-EXT-SRV-ATTR:ramdisk_id`` (since microversion 2.3) - ``OS-EXT-SRV-ATTR:root_device_name`` (since microversion 2.3) - ``OS-EXT-SRV-ATTR:user_data`` (since microversion 2.3) Microvision 2.75 added the above attributes in the ``PUT /servers/{server_id}`` and ``POST /servers/{server_id}/action (rebuild)`` API responses which are also controlled by this policy rule, like the ``GET /servers*`` APIs. """, operations=[ { 'method': 'GET', 'path': '/servers/{id}' }, { 'method': 'GET', 'path': '/servers/detail' }, { 'method': 'PUT', 'path': '/servers/{server_id}' }, { 'method': 'POST', 'path': '/servers/{server_id}/action (rebuild)' } ], scope_types=['system', 'project'] ), ] def list_rules(): return extended_server_attributes_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/extensions.py0000664000175000017500000000234000000000000020110 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:extensions' extensions_policies = [ policy.DocumentedRuleDefault( BASE_POLICY_NAME, base.RULE_ADMIN_OR_OWNER, "List available extensions and show information for an extension " "by alias", [ { 'method': 'GET', 'path': '/extensions' }, { 'method': 'GET', 'path': '/extensions/{alias}' } ]), ] def list_rules(): return extensions_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/flavor_access.py0000664000175000017500000000567300000000000020537 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-flavor-access' POLICY_ROOT = 'os_compute_api:os-flavor-access:%s' # NOTE(gmann): Deprecating this policy explicitly as old defaults # admin or owner is not suitable for that which should be admin (Bug#1867840) # but changing that will break old deployment so let's keep supporting # the old default also and new default can be SYSTEM_READER # SYSTEM_READER rule in base class is defined with the deprecated rule of admin # not admin or owner which is the main reason that we need to explicitly # deprecate this policy here. DEPRECATED_FLAVOR_ACCESS_POLICY = policy.DeprecatedRule( BASE_POLICY_NAME, base.RULE_ADMIN_OR_OWNER, ) DEPRECATED_REASON = """ Nova API policies are introducing new default roles with scope_type capabilities. Old policies are deprecated and silently going to be ignored in nova 23.0.0 release. """ flavor_access_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'add_tenant_access', check_str=base.SYSTEM_ADMIN, description="Add flavor access to a tenant", operations=[ { 'method': 'POST', 'path': '/flavors/{flavor_id}/action (addTenantAccess)' } ], scope_types=['system']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'remove_tenant_access', check_str=base.SYSTEM_ADMIN, description="Remove flavor access from a tenant", operations=[ { 'method': 'POST', 'path': '/flavors/{flavor_id}/action (removeTenantAccess)' } ], scope_types=['system']), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME, check_str=base.SYSTEM_READER, description="""List flavor access information Allows access to the full list of tenants that have access to a flavor via an os-flavor-access API. 
""", operations=[ { 'method': 'GET', 'path': '/flavors/{flavor_id}/os-flavor-access' }, ], scope_types=['system'], deprecated_rule=DEPRECATED_FLAVOR_ACCESS_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), ] def list_rules(): return flavor_access_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/flavor_extra_specs.py0000664000175000017500000001006000000000000021600 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:os-flavor-extra-specs:%s' flavor_extra_specs_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'show', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="Show an extra spec for a flavor", operations=[ { 'path': '/flavors/{flavor_id}/os-extra_specs/' '{flavor_extra_spec_key}', 'method': 'GET' } ], scope_types=['system', 'project'] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'create', check_str=base.SYSTEM_ADMIN, description="Create extra specs for a flavor", operations=[ { 'path': '/flavors/{flavor_id}/os-extra_specs/', 'method': 'POST' } ], scope_types=['system'] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'update', check_str=base.SYSTEM_ADMIN, description="Update an extra spec for a flavor", operations=[ { 'path': '/flavors/{flavor_id}/os-extra_specs/' '{flavor_extra_spec_key}', 'method': 'PUT' } ], scope_types=['system'] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'delete', check_str=base.SYSTEM_ADMIN, description="Delete an extra spec for a flavor", operations=[ { 'path': '/flavors/{flavor_id}/os-extra_specs/' '{flavor_extra_spec_key}', 'method': 'DELETE' } ], scope_types=['system'] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'index', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="List extra specs for a flavor. Starting with " "microversion 2.47, the flavor used for a server is also returned " "in the response when showing server details, updating a server or " "rebuilding a server. 
Starting with microversion 2.61, extra specs " "may be returned in responses for the flavor resource.", operations=[ { 'path': '/flavors/{flavor_id}/os-extra_specs/', 'method': 'GET' }, # Microversion 2.47 operations for servers: { 'path': '/servers/detail', 'method': 'GET' }, { 'path': '/servers/{server_id}', 'method': 'GET' }, { 'path': '/servers/{server_id}', 'method': 'PUT' }, { 'path': '/servers/{server_id}/action (rebuild)', 'method': 'POST' }, # Microversion 2.61 operations for flavors: { 'path': '/flavors', 'method': 'POST' }, { 'path': '/flavors/detail', 'method': 'GET' }, { 'path': '/flavors/{flavor_id}', 'method': 'GET' }, { 'path': '/flavors/{flavor_id}', 'method': 'PUT' } ], scope_types=['system', 'project'] ), ] def list_rules(): return flavor_extra_specs_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/flavor_manage.py0000664000175000017500000000340100000000000020511 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:os-flavor-manage:%s' flavor_manage_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'create', check_str=base.SYSTEM_ADMIN, description="Create a flavor", operations=[ { 'method': 'POST', 'path': '/flavors' } ], scope_types=['system']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'update', check_str=base.SYSTEM_ADMIN, description="Update a flavor", operations=[ { 'method': 'PUT', 'path': '/flavors/{flavor_id}' } ], scope_types=['system']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'delete', check_str=base.SYSTEM_ADMIN, description="Delete a flavor", operations=[ { 'method': 'DELETE', 'path': '/flavors/{flavor_id}' } ], scope_types=['system']), ] def list_rules(): return flavor_manage_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/floating_ip_pools.py0000664000175000017500000000216200000000000021422 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-floating-ip-pools' floating_ip_pools_policies = [ policy.DocumentedRuleDefault( BASE_POLICY_NAME, base.RULE_ADMIN_OR_OWNER, "List floating IP pools. 
This API is deprecated.", [ { 'method': 'GET', 'path': '/os-floating-ip-pools' } ]), ] def list_rules(): return floating_ip_pools_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/floating_ips.py0000664000175000017500000000333500000000000020374 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-floating-ips' floating_ips_policies = [ policy.DocumentedRuleDefault( BASE_POLICY_NAME, base.RULE_ADMIN_OR_OWNER, "Manage a project's floating IPs. These APIs are all deprecated.", [ { 'method': 'POST', 'path': '/servers/{server_id}/action (addFloatingIp)' }, { 'method': 'POST', 'path': '/servers/{server_id}/action (removeFloatingIp)' }, { 'method': 'GET', 'path': '/os-floating-ips' }, { 'method': 'POST', 'path': '/os-floating-ips' }, { 'method': 'GET', 'path': '/os-floating-ips/{floating_ip_id}' }, { 'method': 'DELETE', 'path': '/os-floating-ips/{floating_ip_id}' }, ]), ] def list_rules(): return floating_ips_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/hosts.py0000664000175000017500000000330000000000000017046 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-hosts' hosts_policies = [ policy.DocumentedRuleDefault( BASE_POLICY_NAME, base.RULE_ADMIN_API, """List, show and manage physical hosts. These APIs are all deprecated in favor of os-hypervisors and os-services.""", [ { 'method': 'GET', 'path': '/os-hosts' }, { 'method': 'GET', 'path': '/os-hosts/{host_name}' }, { 'method': 'PUT', 'path': '/os-hosts/{host_name}' }, { 'method': 'GET', 'path': '/os-hosts/{host_name}/reboot' }, { 'method': 'GET', 'path': '/os-hosts/{host_name}/shutdown' }, { 'method': 'GET', 'path': '/os-hosts/{host_name}/startup' } ]), ] def list_rules(): return hosts_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/hypervisors.py0000664000175000017500000001105600000000000020312 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-hypervisors:%s' DEPRECATED_POLICY = policy.DeprecatedRule( 'os_compute_api:os-hypervisors', base.RULE_ADMIN_API, ) DEPRECATED_REASON = """ Nova API policies are introducing new default roles with scope_type capabilities. Old policies are deprecated and silently going to be ignored in nova 23.0.0 release. """ hypervisors_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'list', check_str=base.SYSTEM_READER, description="List all hypervisors.", operations=[ { 'path': '/os-hypervisors', 'method': 'GET' }, ], scope_types=['system'], deprecated_rule=DEPRECATED_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'list-detail', check_str=base.SYSTEM_READER, description="List all hypervisors with details", operations=[ { 'path': '/os-hypervisors/details', 'method': 'GET' }, ], scope_types=['system'], deprecated_rule=DEPRECATED_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'statistics', check_str=base.SYSTEM_READER, description="Show summary statistics for all hypervisors " "over all compute nodes.", operations=[ { 'path': '/os-hypervisors/statistics', 'method': 'GET' }, ], scope_types=['system'], deprecated_rule=DEPRECATED_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.SYSTEM_READER, description="Show details for a hypervisor.", operations=[ { 'path': '/os-hypervisors/{hypervisor_id}', 'method': 'GET' }, ], scope_types=['system'], deprecated_rule=DEPRECATED_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'uptime', check_str=base.SYSTEM_READER, description="Show the uptime of a hypervisor.", operations=[ { 'path': '/os-hypervisors/{hypervisor_id}/uptime', 'method': 'GET' }, ], scope_types=['system'], deprecated_rule=DEPRECATED_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'search', check_str=base.SYSTEM_READER, description="Search hypervisor by hypervisor_hostname pattern.", operations=[ { 'path': '/os-hypervisors/{hypervisor_hostname_pattern}/search', 'method': 'GET' }, ], scope_types=['system'], deprecated_rule=DEPRECATED_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'servers', check_str=base.SYSTEM_READER, description="List all servers on hypervisors that can match " "the provided hypervisor_hostname pattern.", operations=[ { 'path': '/os-hypervisors/{hypervisor_hostname_pattern}/servers', 'method': 'GET' } ], scope_types=['system'], deprecated_rule=DEPRECATED_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0', ), ] def list_rules(): 
return hypervisors_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/instance_actions.py0000664000175000017500000000766000000000000021247 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base ROOT_POLICY = 'os_compute_api:os-instance-actions' BASE_POLICY_NAME = 'os_compute_api:os-instance-actions:%s' DEPRECATED_INSTANCE_ACTION_POLICY = policy.DeprecatedRule( ROOT_POLICY, base.RULE_ADMIN_OR_OWNER, ) DEPRECATED_REASON = """ Nova API policies are introducing new default roles with scope_type capabilities. Old policies are deprecated and silently going to be ignored in nova 23.0.0 release. """ instance_actions_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'events:details', check_str=base.SYSTEM_READER, description="""Add "details" key in action events for a server. This check is performed only after the check os_compute_api:os-instance-actions:show passes. Beginning with Microversion 2.84, new field 'details' is exposed via API which can have more details about event failure. That field is controlled by this policy which is system reader by default. Making the 'details' field visible to the non-admin user helps to understand the nature of the problem (i.e. if the action can be retried), but in the other hand it might leak information about the deployment (e.g. the type of the hypervisor). """, operations=[ { 'method': 'GET', 'path': '/servers/{server_id}/os-instance-actions/{request_id}' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'events', check_str=base.SYSTEM_READER, description="""Add events details in action details for a server. This check is performed only after the check os_compute_api:os-instance-actions:show passes. Beginning with Microversion 2.51, events details are always included; traceback information is provided per event if policy enforcement passes. 
Beginning with Microversion 2.62, each event includes a hashed host identifier and, if policy enforcement passes, the name of the host.""", operations=[ { 'method': 'GET', 'path': '/servers/{server_id}/os-instance-actions/{request_id}' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'list', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="""List actions for a server.""", operations=[ { 'method': 'GET', 'path': '/servers/{server_id}/os-instance-actions' } ], scope_types=['system', 'project'], deprecated_rule=DEPRECATED_INSTANCE_ACTION_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="""Show action details for a server.""", operations=[ { 'method': 'GET', 'path': '/servers/{server_id}/os-instance-actions/{request_id}' } ], scope_types=['system', 'project'], deprecated_rule=DEPRECATED_INSTANCE_ACTION_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), ] def list_rules(): return instance_actions_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/instance_usage_audit_log.py0000664000175000017500000000433100000000000022732 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-instance-usage-audit-log:%s' DEPRECATED_POLICY = policy.DeprecatedRule( 'os_compute_api:os-instance-usage-audit-log', base.RULE_ADMIN_API, ) DEPRECATED_REASON = """ Nova API policies are introducing new default roles with scope_type capabilities. Old policies are deprecated and silently going to be ignored in nova 23.0.0 release. 
""" instance_usage_audit_log_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'list', check_str=base.SYSTEM_READER, description="List all usage audits.", operations=[ { 'method': 'GET', 'path': '/os-instance_usage_audit_log' }, ], scope_types=['system'], deprecated_rule=DEPRECATED_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.SYSTEM_READER, description="List all usage audits occurred before " "a specified time for all servers on all compute hosts where " "usage auditing is configured", operations=[ { 'method': 'GET', 'path': '/os-instance_usage_audit_log/{before_timestamp}' } ], scope_types=['system'], deprecated_rule=DEPRECATED_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), ] def list_rules(): return instance_usage_audit_log_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/ips.py0000664000175000017500000000311600000000000016506 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:ips:%s' ips_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'show', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="Show IP addresses details for a network label of a " " server", operations=[ { 'method': 'GET', 'path': '/servers/{server_id}/ips/{network_label}' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'index', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="List IP addresses that are assigned to a server", operations=[ { 'method': 'GET', 'path': '/servers/{server_id}/ips' } ], scope_types=['system', 'project']), ] def list_rules(): return ips_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/keypairs.py0000664000175000017500000000437700000000000017554 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:os-keypairs:%s' keypairs_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'index', check_str='(' + base.SYSTEM_READER + ') or user_id:%(user_id)s', description="List all keypairs", operations=[ { 'path': '/os-keypairs', 'method': 'GET' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'create', check_str='(' + base.SYSTEM_ADMIN + ') or user_id:%(user_id)s', description="Create a keypair", operations=[ { 'path': '/os-keypairs', 'method': 'POST' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'delete', check_str='(' + base.SYSTEM_ADMIN + ') or user_id:%(user_id)s', description="Delete a keypair", operations=[ { 'path': '/os-keypairs/{keypair_name}', 'method': 'DELETE' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'show', check_str='(' + base.SYSTEM_READER + ') or user_id:%(user_id)s', description="Show details of a keypair", operations=[ { 'path': '/os-keypairs/{keypair_name}', 'method': 'GET' } ], scope_types=['system', 'project']), ] def list_rules(): return keypairs_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/limits.py0000664000175000017500000000421500000000000017215 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:limits' OTHER_PROJECT_LIMIT_POLICY_NAME = 'os_compute_api:limits:other_project' DEPRECATED_POLICY = policy.DeprecatedRule( 'os_compute_api:os-used-limits', base.RULE_ADMIN_API, ) DEPRECATED_REASON = """ Nova API policies are introducing new default roles with scope_type capabilities. Old policies are deprecated and silently going to be ignored in nova 23.0.0 release. """ limits_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME, check_str=base.RULE_ANY, description="Show rate and absolute limits for the current user " "project", operations=[ { 'method': 'GET', 'path': '/limits' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=OTHER_PROJECT_LIMIT_POLICY_NAME, check_str=base.SYSTEM_READER, description="""Show rate and absolute limits of other project. This policy only checks if the user has access to the requested project limits. 
And this check is performed only after the check os_compute_api:limits passes""", operations=[ { 'method': 'GET', 'path': '/limits' } ], scope_types=['system'], deprecated_rule=DEPRECATED_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), ] def list_rules(): return limits_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/lock_server.py0000664000175000017500000000402400000000000020230 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:os-lock-server:%s' lock_server_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'lock', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Lock a server", operations=[ { 'path': '/servers/{server_id}/action (lock)', 'method': 'POST' } ], scope_types=['system', 'project'] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'unlock', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Unlock a server", operations=[ { 'path': '/servers/{server_id}/action (unlock)', 'method': 'POST' } ], scope_types=['system', 'project'] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'unlock:unlock_override', check_str=base.SYSTEM_ADMIN, description="""Unlock a server, regardless who locked the server. This check is performed only after the check os_compute_api:os-lock-server:unlock passes""", operations=[ { 'path': '/servers/{server_id}/action (unlock)', 'method': 'POST' } ], scope_types=['system', 'project'] ), ] def list_rules(): return lock_server_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/migrate_server.py0000664000175000017500000000311000000000000020723 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
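# NOTE: The migration rules below default to SYSTEM_ADMIN, but like every
# rule in this package they are only defaults: whatever an operator sets in
# the configured policy file takes precedence. A rough, illustrative sketch of
# the programmatic equivalent of such an override (the ``enforcer`` object and
# the chosen check string are assumptions for the example):
#
#     from oslo_policy import policy as oslo_policy
#
#     override = oslo_policy.Rules.from_dict(
#         {'os_compute_api:os-migrate-server:migrate_live': 'rule:admin_api'})
#     # overwrite=False merges the override into the already-loaded rules
#     # instead of replacing them wholesale.
#     enforcer.set_rules(override, overwrite=False)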
from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:os-migrate-server:%s' migrate_server_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'migrate', check_str=base.SYSTEM_ADMIN, description="Cold migrate a server to a host", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (migrate)' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'migrate_live', check_str=base.SYSTEM_ADMIN, description="Live migrate a server to a new host without a reboot", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (os-migrateLive)' } ], scope_types=['system', 'project']), ] def list_rules(): return migrate_server_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/migrations.py0000664000175000017500000000217100000000000020067 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:os-migrations:%s' migrations_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'index', check_str=base.SYSTEM_READER, description="List migrations", operations=[ { 'method': 'GET', 'path': '/os-migrations' } ], scope_types=['system']), ] def list_rules(): return migrations_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/multinic.py0000664000175000017500000000250200000000000017535 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-multinic' multinic_policies = [ policy.DocumentedRuleDefault( BASE_POLICY_NAME, base.RULE_ADMIN_OR_OWNER, """Add or remove a fixed IP address from a server. These APIs are proxy calls to the Network service. 
These are all deprecated.""", [ { 'method': 'POST', 'path': '/servers/{server_id}/action (addFixedIp)' }, { 'method': 'POST', 'path': '/servers/{server_id}/action (removeFixedIp)' } ]), ] def list_rules(): return multinic_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/networks.py0000664000175000017500000000244200000000000017570 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:os-networks:%s' networks_policies = [ policy.DocumentedRuleDefault( POLICY_ROOT % 'view', base.RULE_ADMIN_OR_OWNER, """List networks for the project and show details for a network. These APIs are proxy calls to the Network service. These are all deprecated.""", [ { 'method': 'GET', 'path': '/os-networks' }, { 'method': 'GET', 'path': '/os-networks/{network_id}' } ]), ] def list_rules(): return networks_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/pause_server.py0000664000175000017500000000306200000000000020416 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:os-pause-server:%s' pause_server_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'pause', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Pause a server", operations=[ { 'path': '/servers/{server_id}/action (pause)', 'method': 'POST' } ], scope_types=['system', 'project'] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'unpause', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Unpause a paused server", operations=[ { 'path': '/servers/{server_id}/action (unpause)', 'method': 'POST' } ], scope_types=['system', 'project'] ), ] def list_rules(): return pause_server_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/quota_class_sets.py0000664000175000017500000000303100000000000021263 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:os-quota-class-sets:%s' quota_class_sets_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'show', check_str=base.SYSTEM_READER, description="List quotas for a specific quota class", operations=[ { 'method': 'GET', 'path': '/os-quota-class-sets/{quota_class}' } ], scope_types=['system']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'update', check_str=base.SYSTEM_ADMIN, description='Update quotas for a specific quota class', operations=[ { 'method': 'PUT', 'path': '/os-quota-class-sets/{quota_class}' } ], scope_types=['system']), ] def list_rules(): return quota_class_sets_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/quota_sets.py0000664000175000017500000000501100000000000020076 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
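# NOTE: Several quota rules below, like the quota class rules above, are
# limited to scope_types=['system']. Scope is only enforced when the operator
# enables ``[oslo_policy] enforce_scope``; until then a scope mismatch merely
# logs a warning. A hedged sketch of what enabling it looks like (the
# ``enforcer`` name and the credential contents are assumptions for the
# example):
#
#     from oslo_config import cfg
#     from oslo_policy import policy as oslo_policy
#
#     cfg.CONF.set_override('enforce_scope', True, group='oslo_policy')
#     # With enforcement on, a project-scoped token is expected to be rejected
#     # for a system-scoped rule such as os-quota-sets:update.
#     try:
#         enforcer.enforce('os_compute_api:os-quota-sets:update', {},
#                          {'project_id': 'p1', 'roles': ['admin']},
#                          do_raise=True)
#     except oslo_policy.InvalidScope:
#         pass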
from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:os-quota-sets:%s' quota_sets_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'update', check_str=base.SYSTEM_ADMIN, description="Update the quotas", operations=[ { 'method': 'PUT', 'path': '/os-quota-sets/{tenant_id}' } ], scope_types=['system']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'defaults', check_str=base.RULE_ANY, description="List default quotas", operations=[ { 'method': 'GET', 'path': '/os-quota-sets/{tenant_id}/defaults' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'show', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="Show a quota", operations=[ { 'method': 'GET', 'path': '/os-quota-sets/{tenant_id}' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'delete', check_str=base.SYSTEM_ADMIN, description="Revert quotas to defaults", operations=[ { 'method': 'DELETE', 'path': '/os-quota-sets/{tenant_id}' } ], scope_types=['system']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'detail', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="Show the detail of quota", operations=[ { 'method': 'GET', 'path': '/os-quota-sets/{tenant_id}/detail' } ], scope_types=['system', 'project']), ] def list_rules(): return quota_sets_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/remote_consoles.py0000664000175000017500000000370500000000000021117 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-remote-consoles' remote_consoles_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME, check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="""Generate a URL to access a remote server console. This policy is for the ``POST /remote-consoles`` API and for the following deprecated server action APIs: - ``os-getRDPConsole`` - ``os-getSerialConsole`` - ``os-getSPICEConsole`` - ``os-getVNCConsole``.""", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (os-getRDPConsole)' }, { 'method': 'POST', 'path': '/servers/{server_id}/action (os-getSerialConsole)' }, { 'method': 'POST', 'path': '/servers/{server_id}/action (os-getSPICEConsole)' }, { 'method': 'POST', 'path': '/servers/{server_id}/action (os-getVNCConsole)' }, { 'method': 'POST', 'path': '/servers/{server_id}/remote-consoles' }, ], scope_types=['system', 'project']), ] def list_rules(): return remote_consoles_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/rescue.py0000664000175000017500000000367200000000000017204 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved.
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-rescue' UNRESCUE_POLICY_NAME = 'os_compute_api:os-unrescue' DEPRECATED_POLICY = policy.DeprecatedRule( 'os_compute_api:os-rescue', base.RULE_ADMIN_OR_OWNER, ) DEPRECATED_REASON = """ Rescue/Unrescue API policies are made granular with new policy for unrescue and keeping old policy for rescue. """ rescue_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME, check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Rescue a server", operations=[ { 'path': '/servers/{server_id}/action (rescue)', 'method': 'POST' }, ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=UNRESCUE_POLICY_NAME, check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Unrescue a server", operations=[ { 'path': '/servers/{server_id}/action (unrescue)', 'method': 'POST' } ], scope_types=['system', 'project'], deprecated_rule=DEPRECATED_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0' ), ] def list_rules(): return rescue_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/security_groups.py0000664000175000017500000000730400000000000021164 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-security-groups' POLICY_NAME = 'os_compute_api:os-security-groups:%s' DEPRECATED_POLICY = policy.DeprecatedRule( BASE_POLICY_NAME, base.RULE_ADMIN_OR_OWNER, ) DEPRECATED_REASON = """ Nova API policies are introducing new default roles with scope_type capabilities. Old policies are deprecated and silently going to be ignored in nova 23.0.0 release. """ security_groups_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME, check_str=base.RULE_ADMIN_OR_OWNER, description="""List, show, add, or remove security groups. APIs which are directly related to security groups resource are deprecated: Lists, shows information for, creates, updates and deletes security groups. Creates and deletes security group rules. 
All these APIs are deprecated.""", operations=[ { 'method': 'GET', 'path': '/os-security-groups' }, { 'method': 'GET', 'path': '/os-security-groups/{security_group_id}' }, { 'method': 'POST', 'path': '/os-security-groups' }, { 'method': 'PUT', 'path': '/os-security-groups/{security_group_id}' }, { 'method': 'DELETE', 'path': '/os-security-groups/{security_group_id}' }, ]), policy.DocumentedRuleDefault( name=POLICY_NAME % 'list', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="List security groups of server.", operations=[ { 'method': 'GET', 'path': '/servers/{server_id}/os-security-groups' }, ], scope_types=['system', 'project'], deprecated_rule=DEPRECATED_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.DocumentedRuleDefault( name=POLICY_NAME % 'add', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Add security groups to server.", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (addSecurityGroup)' }, ], scope_types=['system', 'project'], deprecated_rule=DEPRECATED_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.DocumentedRuleDefault( name=POLICY_NAME % 'remove', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Remove security groups from server.", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (removeSecurityGroup)' }, ], scope_types=['system', 'project'], deprecated_rule=DEPRECATED_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), ] def list_rules(): return security_groups_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/server_diagnostics.py0000664000175000017500000000227300000000000021613 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-server-diagnostics' server_diagnostics_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME, check_str=base.SYSTEM_ADMIN, description="Show the usage data for a server", operations=[ { 'method': 'GET', 'path': '/servers/{server_id}/diagnostics' } ], scope_types=['system', 'project']), ] def list_rules(): return server_diagnostics_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/server_external_events.py0000664000175000017500000000227500000000000022514 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:os-server-external-events:%s' server_external_events_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'create', check_str=base.SYSTEM_ADMIN, description="Create one or more external events", operations=[ { 'method': 'POST', 'path': '/os-server-external-events' } ], scope_types=['system']), ] def list_rules(): return server_external_events_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/server_groups.py0000664000175000017500000000625600000000000020630 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:os-server-groups:%s' server_groups_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'create', check_str=base.PROJECT_MEMBER, description="Create a new server group", operations=[ { 'path': '/os-server-groups', 'method': 'POST' } ], # (NOTE)gmann: Reason for 'project' only scope: # POST SG need project_id to create the serve groups # system scope members do not have project id for which # SG needs to be created. # If we allow system scope role also then created SG will have # project_id of system role, not the one he/she wants to create the SG # for (nobody can create the SG for other projects because API does # not take project id in request ). So keeping this scoped to project # only as these roles are the only ones who will be creating SG. 
scope_types=['project'] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'delete', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Delete a server group", operations=[ { 'path': '/os-server-groups/{server_group_id}', 'method': 'DELETE' } ], scope_types=['system', 'project'] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'index', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="List all server groups", operations=[ { 'path': '/os-server-groups', 'method': 'GET' } ], scope_types=['system', 'project'] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'index:all_projects', check_str=base.SYSTEM_READER, description="List all server groups for all projects", operations=[ { 'path': '/os-server-groups', 'method': 'GET' } ], scope_types=['system'] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'show', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="Show details of a server group", operations=[ { 'path': '/os-server-groups/{server_group_id}', 'method': 'GET' } ], scope_types=['system', 'project'] ), ] def list_rules(): return server_groups_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/server_metadata.py0000664000175000017500000000606600000000000021070 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
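# NOTE: Every entry below is a DocumentedRuleDefault, so the rule name, the
# default check string and the documented API operations can be introspected
# directly; the same metadata feeds the rendered policy documentation and the
# ``oslopolicy-sample-generator --namespace nova`` output. A small,
# illustrative sketch (the choice of this module is arbitrary):
#
#     from nova.policies import server_metadata
#
#     for rule in server_metadata.list_rules():
#         # e.g. "os_compute_api:server-metadata:index" and its check string
#         print(rule.name, rule.check_str)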
from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:server-metadata:%s' server_metadata_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'index', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="List all metadata of a server", operations=[ { 'path': '/servers/{server_id}/metadata', 'method': 'GET' } ], scope_types=['system', 'project'] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'show', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="Show metadata for a server", operations=[ { 'path': '/servers/{server_id}/metadata/{key}', 'method': 'GET' } ], scope_types=['system', 'project'] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'create', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Create metadata for a server", operations=[ { 'path': '/servers/{server_id}/metadata', 'method': 'POST' } ], scope_types=['system', 'project'] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'update_all', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Replace metadata for a server", operations=[ { 'path': '/servers/{server_id}/metadata', 'method': 'PUT' } ], scope_types=['system', 'project'] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'update', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Update metadata from a server", operations=[ { 'path': '/servers/{server_id}/metadata/{key}', 'method': 'PUT' } ], scope_types=['system', 'project'] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'delete', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Delete metadata from a server", operations=[ { 'path': '/servers/{server_id}/metadata/{key}', 'method': 'DELETE' } ], scope_types=['system', 'project'] ), ] def list_rules(): return server_metadata_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/server_password.py0000664000175000017500000000432300000000000021144 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-server-password:%s' DEPRECATED_POLICY = policy.DeprecatedRule( 'os_compute_api:os-server-password', base.RULE_ADMIN_OR_OWNER, ) DEPRECATED_REASON = """ Nova API policies are introducing new default roles with scope_type capabilities. Old policies are deprecated and silently going to be ignored in nova 23.0.0 release. 
""" server_password_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="Show the encrypted administrative " "password of a server", operations=[ { 'method': 'GET', 'path': '/servers/{server_id}/os-server-password' }, ], scope_types=['system', 'project'], deprecated_rule=DEPRECATED_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'clear', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Clear the encrypted administrative " "password of a server", operations=[ { 'method': 'DELETE', 'path': '/servers/{server_id}/os-server-password' } ], scope_types=['system', 'project'], deprecated_rule=DEPRECATED_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), ] def list_rules(): return server_password_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/server_tags.py0000664000175000017500000000617300000000000020245 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:os-server-tags:%s' server_tags_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'delete_all', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Delete all the server tags", operations=[ { 'method': 'DELETE', 'path': '/servers/{server_id}/tags' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'index', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="List all tags for given server", operations=[ { 'method': 'GET', 'path': '/servers/{server_id}/tags' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'update_all', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Replace all tags on specified server with the new set " "of tags.", operations=[ { 'method': 'PUT', 'path': '/servers/{server_id}/tags' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'delete', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Delete a single tag from the specified server", operations=[ { 'method': 'DELETE', 'path': '/servers/{server_id}/tags/{tag}' } ], scope_types=['system', 'project'] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'update', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Add a single tag to the server if server has no " "specified tag", operations=[ { 'method': 'PUT', 'path': '/servers/{server_id}/tags/{tag}' } ], scope_types=['system', 'project'] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'show', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="Check tag existence on the server.", operations=[ { 'method': 'GET', 'path': '/servers/{server_id}/tags/{tag}' } ], scope_types=['system', 'project'] ), 
] def list_rules(): return server_tags_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/server_topology.py0000664000175000017500000000315500000000000021160 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'compute:server:topology:%s' server_topology_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'index', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="Show the NUMA topology data for a server", operations=[ { 'method': 'GET', 'path': '/servers/{server_id}/topology' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( # Control host NUMA node and cpu pinning information name=BASE_POLICY_NAME % 'host:index', check_str=base.SYSTEM_READER, description="Show the NUMA topology data for a server with host " "NUMA ID and CPU pinning information", operations=[ { 'method': 'GET', 'path': '/servers/{server_id}/topology' } ], scope_types=['system']), ] def list_rules(): return server_topology_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/servers.py0000664000175000017500000004143000000000000017405 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
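# A rough, illustrative sketch of how the rules defined in this module are
# enforced (the target dict shown here is an example, not copied from a
# specific handler): the API layer calls something like
#
#     context.can(SERVERS % 'show',
#                 target={'project_id': instance.project_id})
#
# which resolves to the 'os_compute_api:servers:show' rule registered below
# and can be overridden by operators in policy.yaml.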
from oslo_policy import policy from nova.policies import base RULE_AOO = base.RULE_ADMIN_OR_OWNER SERVERS = 'os_compute_api:servers:%s' NETWORK_ATTACH_EXTERNAL = 'network:attach_external_network' ZERO_DISK_FLAVOR = SERVERS % 'create:zero_disk_flavor' REQUESTED_DESTINATION = 'compute:servers:create:requested_destination' CROSS_CELL_RESIZE = 'compute:servers:resize:cross_cell' rules = [ policy.DocumentedRuleDefault( name=SERVERS % 'index', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="List all servers", operations=[ { 'method': 'GET', 'path': '/servers' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=SERVERS % 'detail', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="List all servers with detailed information", operations=[ { 'method': 'GET', 'path': '/servers/detail' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=SERVERS % 'index:get_all_tenants', check_str=base.SYSTEM_READER, description="List all servers for all projects", operations=[ { 'method': 'GET', 'path': '/servers' } ], scope_types=['system']), policy.DocumentedRuleDefault( name=SERVERS % 'detail:get_all_tenants', check_str=base.SYSTEM_READER, description="List all servers with detailed information for " "all projects", operations=[ { 'method': 'GET', 'path': '/servers/detail' } ], scope_types=['system']), policy.DocumentedRuleDefault( name=SERVERS % 'allow_all_filters', check_str=base.SYSTEM_READER, description="Allow all filters when listing servers", operations=[ { 'method': 'GET', 'path': '/servers' }, { 'method': 'GET', 'path': '/servers/detail' } ], scope_types=['system']), policy.DocumentedRuleDefault( name=SERVERS % 'show', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="Show a server", operations=[ { 'method': 'GET', 'path': '/servers/{server_id}' } ], scope_types=['system', 'project']), # the details in host_status are pretty sensitive; only admins # should see them by default. policy.DocumentedRuleDefault( name=SERVERS % 'show:host_status', check_str=base.SYSTEM_ADMIN, description=""" Show a server with additional host status information. This means host_status will be shown irrespective of status value. If showing only host_status UNKNOWN is desired, use the ``os_compute_api:servers:show:host_status:unknown-only`` policy rule. Microversion 2.75 added the ``host_status`` attribute in the ``PUT /servers/{server_id}`` and ``POST /servers/{server_id}/action (rebuild)`` API responses which are also controlled by this policy rule, like the ``GET /servers*`` APIs. """, operations=[ { 'method': 'GET', 'path': '/servers/{server_id}' }, { 'method': 'GET', 'path': '/servers/detail' }, { 'method': 'PUT', 'path': '/servers/{server_id}' }, { 'method': 'POST', 'path': '/servers/{server_id}/action (rebuild)' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=SERVERS % 'show:host_status:unknown-only', check_str=base.SYSTEM_ADMIN, description=""" Show a server with additional host status information, only if host status is UNKNOWN. This policy rule will only be enforced when the ``os_compute_api:servers:show:host_status`` policy rule does not pass for the request. An example policy configuration could be where the ``os_compute_api:servers:show:host_status`` rule is set to allow admin-only and the ``os_compute_api:servers:show:host_status:unknown-only`` rule is set to allow everyone.
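For example, an illustrative ``policy.yaml`` snippet for that configuration (the exact rule values are deployment specific and shown here only as a sketch) could be::

    "os_compute_api:servers:show:host_status": "rule:admin_api"
    "os_compute_api:servers:show:host_status:unknown-only": "@"

With this override, non-admin users only see ``host_status`` when its value is UNKNOWN.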
""", operations=[ { 'method': 'GET', 'path': '/servers/{server_id}' }, { 'method': 'GET', 'path': '/servers/detail' }, { 'method': 'PUT', 'path': '/servers/{server_id}' }, { 'method': 'POST', 'path': '/servers/{server_id}/action (rebuild)' } ], scope_types=['system', 'project'],), policy.DocumentedRuleDefault( name=SERVERS % 'create', check_str=base.PROJECT_MEMBER, description="Create a server", operations=[ { 'method': 'POST', 'path': '/servers' } ], scope_types=['project']), policy.DocumentedRuleDefault( name=SERVERS % 'create:forced_host', # TODO(gmann): We need to make it SYSTEM_ADMIN. # PROJECT_ADMIN is added for now because create server # policy is project scoped and there is no way to # pass the project_id in request body for system scoped # roles so that create server for other project with force host. # To achieve that, we need to update the create server API to # accept the project_id for whom the server needs to be created # and then change the scope of this policy to system-only # Because that is API change it needs to be done with new # microversion. check_str=base.PROJECT_ADMIN, description=""" Create a server on the specified host and/or node. In this case, the server is forced to launch on the specified host and/or node by bypassing the scheduler filters unlike the ``compute:servers:create:requested_destination`` rule. """, operations=[ { 'method': 'POST', 'path': '/servers' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=REQUESTED_DESTINATION, check_str=base.RULE_ADMIN_API, description=""" Create a server on the requested compute service host and/or hypervisor_hostname. In this case, the requested host and/or hypervisor_hostname is validated by the scheduler filters unlike the ``os_compute_api:servers:create:forced_host`` rule. """, operations=[ { 'method': 'POST', 'path': '/servers' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=SERVERS % 'create:attach_volume', check_str=base.PROJECT_MEMBER, description="Create a server with the requested volume attached to it", operations=[ { 'method': 'POST', 'path': '/servers' } ], scope_types=['project']), policy.DocumentedRuleDefault( name=SERVERS % 'create:attach_network', check_str=base.PROJECT_MEMBER, description="Create a server with the requested network attached " " to it", operations=[ { 'method': 'POST', 'path': '/servers' } ], scope_types=['project']), policy.DocumentedRuleDefault( name=SERVERS % 'create:trusted_certs', check_str=base.PROJECT_MEMBER, description="Create a server with trusted image certificate IDs", operations=[ { 'method': 'POST', 'path': '/servers' } ], scope_types=['project']), policy.DocumentedRuleDefault( name=ZERO_DISK_FLAVOR, # TODO(gmann): We need to make it SYSTEM_ADMIN. # PROJECT_ADMIN is added for now because create server # policy is project scoped and there is no way to # pass the project_id in request body for system scoped # roles so that create server for other project with zero disk flavor. # To achieve that, we need to update the create server API to # accept the project_id for whom the server needs to be created # and then change the scope of this policy to system-only # Because that is API change it needs to be done with new # microversion. check_str=base.PROJECT_ADMIN, description=""" This rule controls the compute API validation behavior of creating a server with a flavor that has 0 disk, indicating the server should be volume-backed. 
For a flavor with disk=0, the root disk will be set to exactly the size of the image used to deploy the instance. However, in this case the filter_scheduler cannot select the compute host based on the virtual image size. Therefore, 0 should only be used for volume booted instances or for testing purposes. WARNING: It is a potential security exposure to enable this policy rule if users can upload their own images since repeated attempts to create a disk=0 flavor instance with a large image can exhaust the local disk of the compute (or shared storage cluster). See bug https://bugs.launchpad.net/nova/+bug/1739646 for details. """, operations=[ { 'method': 'POST', 'path': '/servers' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=NETWORK_ATTACH_EXTERNAL, # TODO(gmann): We need to make it SYSTEM_ADMIN. # PROJECT_ADMIN is added for now because create server # policy is project scoped and there is no way to # pass the project_id in request body for system scoped # roles so that create server for other project or attach the # external network. To achieve that, we need to update the # create server API to accept the project_id for whom the # server needs to be created and then change the scope of this # policy to system-only Because that is API change it needs to # be done with new microversion. check_str=base.PROJECT_ADMIN, description="Attach an unshared external network to a server", operations=[ # Create a server with a requested network or port. { 'method': 'POST', 'path': '/servers' }, # Attach a network or port to an existing server. { 'method': 'POST', 'path': '/servers/{server_id}/os-interface' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=SERVERS % 'delete', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Delete a server", operations=[ { 'method': 'DELETE', 'path': '/servers/{server_id}' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=SERVERS % 'update', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Update a server", operations=[ { 'method': 'PUT', 'path': '/servers/{server_id}' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=SERVERS % 'confirm_resize', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Confirm a server resize", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (confirmResize)' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=SERVERS % 'revert_resize', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Revert a server resize", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (revertResize)' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=SERVERS % 'reboot', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Reboot a server", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (reboot)' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=SERVERS % 'resize', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Resize a server", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (resize)' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=CROSS_CELL_RESIZE, check_str=base.RULE_NOBODY, description="Resize a server across cells. By default, this is " "disabled for all users and recommended to be tested in a " "deployment for admin users before opening it up to non-admin users. 
" "Resizing within a cell is the default preferred behavior even if " "this is enabled. ", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (resize)' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=SERVERS % 'rebuild', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Rebuild a server", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (rebuild)' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=SERVERS % 'rebuild:trusted_certs', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Rebuild a server with trusted image certificate IDs", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (rebuild)' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=SERVERS % 'create_image', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Create an image from a server", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (createImage)' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=SERVERS % 'create_image:allow_volume_backed', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Create an image from a volume backed server", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (createImage)' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=SERVERS % 'start', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Start a server", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (os-start)' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=SERVERS % 'stop', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Stop a server", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (os-stop)' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=SERVERS % 'trigger_crash_dump', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Trigger crash dump in a server", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (trigger_crash_dump)' } ], scope_types=['system', 'project']), ] def list_rules(): return rules ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/servers_migrations.py0000664000175000017500000000472300000000000021645 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:servers:migrations:%s' servers_migrations_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'show', check_str=base.SYSTEM_READER, description="Show details for an in-progress live migration for a " "given server", operations=[ { 'method': 'GET', 'path': '/servers/{server_id}/migrations/{migration_id}' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'force_complete', check_str=base.SYSTEM_ADMIN, description="Force an in-progress live migration for a given server " "to complete", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/migrations/{migration_id}' '/action (force_complete)' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'delete', check_str=base.SYSTEM_ADMIN, description="Delete(Abort) an in-progress live migration", operations=[ { 'method': 'DELETE', 'path': '/servers/{server_id}/migrations/{migration_id}' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'index', check_str=base.SYSTEM_READER, description="Lists in-progress live migrations for a given server", operations=[ { 'method': 'GET', 'path': '/servers/{server_id}/migrations' } ], scope_types=['system', 'project']), ] def list_rules(): return servers_migrations_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/services.py0000664000175000017500000000505700000000000017544 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-services:%s' DEPRECATED_SERVICE_POLICY = policy.DeprecatedRule( 'os_compute_api:os-services', base.RULE_ADMIN_API, ) DEPRECATED_REASON = """ Nova API policies are introducing new default roles with scope_type capabilities. Old policies are deprecated and silently going to be ignored in nova 23.0.0 release. """ services_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'list', check_str=base.SYSTEM_READER, description="List all running Compute services in a region.", operations=[ { 'method': 'GET', 'path': '/os-services' } ], scope_types=['system'], deprecated_rule=DEPRECATED_SERVICE_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'update', check_str=base.SYSTEM_ADMIN, description="Update a Compute service.", operations=[ { # Added in microversion 2.53. 
'method': 'PUT', 'path': '/os-services/{service_id}' }, ], scope_types=['system'], deprecated_rule=DEPRECATED_SERVICE_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'delete', check_str=base.SYSTEM_ADMIN, description="Delete a Compute service.", operations=[ { 'method': 'DELETE', 'path': '/os-services/{service_id}' } ], scope_types=['system'], deprecated_rule=DEPRECATED_SERVICE_POLICY, deprecated_reason=DEPRECATED_REASON, deprecated_since='21.0.0'), ] def list_rules(): return services_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/shelve.py0000664000175000017500000000363400000000000017206 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:os-shelve:%s' shelve_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'shelve', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Shelve server", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (shelve)' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'unshelve', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Unshelve (restore) shelved server", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (unshelve)' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'shelve_offload', check_str=base.SYSTEM_ADMIN, description="Shelf-offload (remove) server", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (shelveOffload)' } ], scope_types=['system', 'project']), ] def list_rules(): return shelve_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/simple_tenant_usage.py0000664000175000017500000000310400000000000021736 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:os-simple-tenant-usage:%s' simple_tenant_usage_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'show', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="Show usage statistics for a specific tenant", operations=[ { 'method': 'GET', 'path': '/os-simple-tenant-usage/{tenant_id}' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'list', check_str=base.SYSTEM_READER, description="List per tenant usage statistics for all tenants", operations=[ { 'method': 'GET', 'path': '/os-simple-tenant-usage' } ], scope_types=['system']), ] def list_rules(): return simple_tenant_usage_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/suspend_server.py0000664000175000017500000000306000000000000020760 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:os-suspend-server:%s' suspend_server_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'resume', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Resume suspended server", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (resume)' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'suspend', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Suspend server", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/action (suspend)' } ], scope_types=['system', 'project']), ] def list_rules(): return suspend_server_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/tenant_networks.py0000664000175000017500000000250700000000000021143 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-tenant-networks' tenant_networks_policies = [ policy.DocumentedRuleDefault( BASE_POLICY_NAME, base.RULE_ADMIN_OR_OWNER, """Create, list, show information for, and delete project networks. These APIs are proxy calls to the Network service. 
These are all deprecated.""", [ { 'method': 'GET', 'path': '/os-tenant-networks' }, { 'method': 'GET', 'path': '/os-tenant-networks/{network_id}' }, ]), ] def list_rules(): return tenant_networks_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policies/volumes.py0000664000175000017500000000421000000000000017401 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from nova.policies import base BASE_POLICY_NAME = 'os_compute_api:os-volumes' volumes_policies = [ policy.DocumentedRuleDefault( BASE_POLICY_NAME, base.RULE_ADMIN_OR_OWNER, """Manage volumes for use with the Compute API. Lists, shows details, creates, and deletes volumes and snapshots. These APIs are proxy calls to the Volume service. These are all deprecated. """, [ { 'method': 'GET', 'path': '/os-volumes' }, { 'method': 'POST', 'path': '/os-volumes' }, { 'method': 'GET', 'path': '/os-volumes/detail' }, { 'method': 'GET', 'path': '/os-volumes/{volume_id}' }, { 'method': 'DELETE', 'path': '/os-volumes/{volume_id}' }, { 'method': 'GET', 'path': '/os-snapshots' }, { 'method': 'POST', 'path': '/os-snapshots' }, { 'method': 'GET', 'path': '/os-snapshots/detail' }, { 'method': 'GET', 'path': '/os-snapshots/{snapshot_id}' }, { 'method': 'DELETE', 'path': '/os-snapshots/{snapshot_id}' } ]), ] def list_rules(): return volumes_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/policies/volumes_attachments.py0000664000175000017500000000661300000000000022005 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_policy import policy from nova.policies import base POLICY_ROOT = 'os_compute_api:os-volumes-attachments:%s' volumes_attachments_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'index', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="List volume attachments for an instance", operations=[ {'method': 'GET', 'path': '/servers/{server_id}/os-volume_attachments' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'create', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Attach a volume to an instance", operations=[ { 'method': 'POST', 'path': '/servers/{server_id}/os-volume_attachments' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'show', check_str=base.PROJECT_READER_OR_SYSTEM_READER, description="Show details of a volume attachment", operations=[ { 'method': 'GET', 'path': '/servers/{server_id}/os-volume_attachments/{volume_id}' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'update', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="""Update a volume attachment. For a 'swap + update' request (which is possible only with microversion >= 2.85) only the 'swap' policy is checked. The 'swap' policy is expected to always be a superset of the permissions granted by this policy. """, operations=[ { 'method': 'PUT', 'path': '/servers/{server_id}/os-volume_attachments/{volume_id}' } ], scope_types=['system', 'project']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'swap', check_str=base.SYSTEM_ADMIN, description="Update a volume attachment with a different volumeId", operations=[ { 'method': 'PUT', 'path': '/servers/{server_id}/os-volume_attachments/{volume_id}' } ], scope_types=['system']), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'delete', check_str=base.PROJECT_MEMBER_OR_SYSTEM_ADMIN, description="Detach a volume from an instance", operations=[ { 'method': 'DELETE', 'path': '/servers/{server_id}/os-volume_attachments/{volume_id}' } ], scope_types=['system', 'project']), ] def list_rules(): return volumes_attachments_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/policy.py0000664000175000017500000002400500000000000015403 0ustar00zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Policy Engine For Nova.""" import copy import re from oslo_config import cfg from oslo_log import log as logging from oslo_policy import policy from oslo_utils import excutils from nova import exception from nova.i18n import _LE, _LW from nova import policies CONF = cfg.CONF LOG = logging.getLogger(__name__) _ENFORCER = None # This list contains the resources which support user-based policy enforcement. # Avoid sending deprecation warnings for those resources.
USER_BASED_RESOURCES = ['os-keypairs'] # oslo_policy will read the policy configuration file again when the file # is changed at runtime, so the old policy rules are saved to # saved_file_rules and compared with the new rules to determine # whether the rules were updated. saved_file_rules = [] KEY_EXPR = re.compile(r'%\((\w+)\)s') def reset(): global _ENFORCER if _ENFORCER: _ENFORCER.clear() _ENFORCER = None def init(policy_file=None, rules=None, default_rule=None, use_conf=True, suppress_deprecation_warnings=False): """Init an Enforcer class. :param policy_file: Custom policy file to use, if none is specified, `CONF.policy_file` will be used. :param rules: Default dictionary / Rules to use. It will be considered just in the first instantiation. :param default_rule: Default rule to use, CONF.default_rule will be used if none is specified. :param use_conf: Whether to load rules from config file. :param suppress_deprecation_warnings: Whether to suppress the deprecation warnings. """ global _ENFORCER global saved_file_rules if not _ENFORCER: _ENFORCER = policy.Enforcer(CONF, policy_file=policy_file, rules=rules, default_rule=default_rule, use_conf=use_conf) # NOTE(gmann): Explicitly disable the warnings for policies # changing their default check_str. During policy-defaults-refresh # work, all the policy defaults have been changed and the warning for # each policy started filling the logs and hitting log limits for # various tools. # Once we move to the new-defaults-only world we can enable these # warnings again. _ENFORCER.suppress_default_change_warnings = True if suppress_deprecation_warnings: _ENFORCER.suppress_deprecation_warnings = True register_rules(_ENFORCER) _ENFORCER.load_rules() # Only the rules which are loaded from file may be changed. current_file_rules = _ENFORCER.file_rules current_file_rules = _serialize_rules(current_file_rules) # Check whether the rules were updated at runtime if saved_file_rules != current_file_rules: _warning_for_deprecated_user_based_rules(current_file_rules) saved_file_rules = copy.deepcopy(current_file_rules) def _serialize_rules(rules): """Serialize all the Rule objects as strings, which are used to compare the rules list. """ result = [(rule_name, str(rule)) for rule_name, rule in rules.items()] return sorted(result, key=lambda rule: rule[0]) def _warning_for_deprecated_user_based_rules(rules): """Warn when user-based policy enforcement is used in a rule that doesn't support it. """ for rule in rules: # We will skip the warning for the resources which support user based # policy enforcement. if [resource for resource in USER_BASED_RESOURCES if resource in rule[0]]: continue if 'user_id' in KEY_EXPR.findall(rule[1]): LOG.warning(_LW("The user_id attribute isn't supported in the " "rule '%s'. All the user_id based policy " "enforcement will be removed in the " "future."), rule[0]) def set_rules(rules, overwrite=True, use_conf=False): """Set rules based on the provided dict of rules. :param rules: New rules to use. It should be an instance of dict. :param overwrite: Whether to overwrite current rules or update them with the new rules. :param use_conf: Whether to reload rules from config file. """ init(use_conf=False) _ENFORCER.set_rules(rules, overwrite, use_conf) def authorize(context, action, target=None, do_raise=True, exc=None): """Verifies that the action is valid on the target in this context. :param context: nova context :param action: string representing the action to be checked; this should be colon separated for clarity, i.e.
``compute:create_instance``, ``compute:attach_volume``, ``volume:attach_volume`` :param target: dictionary representing the object of the action for object creation this should be a dictionary representing the location of the object e.g. ``{'project_id': instance.project_id}`` If None, then this default target will be considered: {'project_id': self.project_id, 'user_id': self.user_id} :param do_raise: if True (the default), raises PolicyNotAuthorized; if False, returns False :param exc: Class of the exception to raise if the check fails. Any remaining arguments passed to :meth:`authorize` (both positional and keyword arguments) will be passed to the exception class. If not specified, :class:`PolicyNotAuthorized` will be used. :raises nova.exception.PolicyNotAuthorized: if verification fails and do_raise is True. Or if 'exc' is specified it will raise an exception of that type. :return: returns a non-False value (not necessarily "True") if authorized, and the exact value False if not authorized and do_raise is False. """ init() if not exc: exc = exception.PolicyNotAuthorized # Legacy fallback for emtpy target from context.can() # should be removed once we improve testing and scope checks if target is None: target = default_target(context) try: result = _ENFORCER.authorize(action, target, context, do_raise=do_raise, exc=exc, action=action) except policy.PolicyNotRegistered: with excutils.save_and_reraise_exception(): LOG.exception(_LE('Policy not registered')) except policy.InvalidScope: LOG.debug('Policy check for %(action)s failed with scope check ' '%(credentials)s', {'action': action, 'credentials': context.to_policy_values()}) raise exc(action=action) except Exception: with excutils.save_and_reraise_exception(): LOG.debug('Policy check for %(action)s failed with credentials ' '%(credentials)s', {'action': action, 'credentials': context.to_policy_values()}) return result def default_target(context): return {'project_id': context.project_id, 'user_id': context.user_id} def check_is_admin(context): """Whether or not roles contains 'admin' role according to policy setting. """ init() # the target is user-self target = default_target(context) return _ENFORCER.authorize('context_is_admin', target, context) @policy.register('is_admin') class IsAdminCheck(policy.Check): """An explicit check for is_admin.""" def __init__(self, kind, match): """Initialize the check.""" self.expected = (match.lower() == 'true') super(IsAdminCheck, self).__init__(kind, str(self.expected)) def __call__(self, target, creds, enforcer): """Determine whether is_admin matches the requested value.""" return creds['is_admin'] == self.expected def get_rules(): if _ENFORCER: return _ENFORCER.rules def register_rules(enforcer): enforcer.register_defaults(policies.list_rules()) def get_enforcer(): # This method is used by oslopolicy CLI scripts in order to generate policy # files from overrides on disk and defaults in code. cfg.CONF([], project='nova') init() return _ENFORCER def verify_deprecated_policy(old_policy, new_policy, default_rule, context): """Check the rule of the deprecated policy action If the current rule of the deprecated policy action is set to a non-default value, then a warning message is logged stating that the new policy action should be used to dictate permissions as the old policy action is being deprecated. 
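An illustrative usage sketch (the flavor-manage policy names below are shown only as an example of the pattern, not as a definitive list of callers)::

    if not verify_deprecated_policy(
            'os_compute_api:os-flavor-manage',
            'os_compute_api:os-flavor-manage:create',
            'rule:admin_api', context):
        context.can('os_compute_api:os-flavor-manage:create', target={})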
:param old_policy: policy action that is being deprecated :param new_policy: policy action that is replacing old_policy :param default_rule: the old_policy action default rule value :param context: the nova context """ if _ENFORCER: current_rule = str(_ENFORCER.rules[old_policy]) else: current_rule = None if current_rule != default_rule: LOG.warning("Start using the new action '%(new_policy)s'. " "The existing action '%(old_policy)s' is being deprecated " "and will be removed in future release.", {'new_policy': new_policy, 'old_policy': old_policy}) context.can(old_policy) return True else: return False ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3744698 nova-21.2.4/nova/privsep/0000775000175000017500000000000000000000000015221 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/privsep/__init__.py0000664000175000017500000000221400000000000017331 0ustar00zuulzuul00000000000000# Copyright 2016 Red Hat, Inc # Copyright 2017 Rackspace Australia # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Setup privsep decorator.""" from oslo_privsep import capabilities from oslo_privsep import priv_context sys_admin_pctxt = priv_context.PrivContext( 'nova', cfg_section='nova_sys_admin', pypath=__name__ + '.sys_admin_pctxt', capabilities=[capabilities.CAP_CHOWN, capabilities.CAP_DAC_OVERRIDE, capabilities.CAP_DAC_READ_SEARCH, capabilities.CAP_FOWNER, capabilities.CAP_NET_ADMIN, capabilities.CAP_SYS_ADMIN], ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/privsep/fs.py0000664000175000017500000003000500000000000016201 0ustar00zuulzuul00000000000000# Copyright 2016 Red Hat, Inc # Copyright 2017 Rackspace Australia # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Helpers for filesystem related routines. 
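As an illustrative sketch of how these helpers are invoked (the device and mountpoint below are made-up example values), callers simply import the module and call the entrypoints, which are executed by the privsep daemon with the elevated capabilities defined in nova/privsep/__init__.py::

    import nova.privsep.fs
    nova.privsep.fs.mount('ext4', '/dev/sdb1', '/mnt/ephemeral', None)
    nova.privsep.fs.umount('/mnt/ephemeral')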
""" import hashlib import six from oslo_concurrency import processutils from oslo_log import log as logging import nova.privsep LOG = logging.getLogger(__name__) @nova.privsep.sys_admin_pctxt.entrypoint def mount(fstype, device, mountpoint, options): mount_cmd = ['mount'] if fstype: mount_cmd.extend(['-t', fstype]) if options is not None: mount_cmd.extend(options) mount_cmd.extend([device, mountpoint]) return processutils.execute(*mount_cmd) @nova.privsep.sys_admin_pctxt.entrypoint def umount(mountpoint): processutils.execute('umount', mountpoint, attempts=3, delay_on_retry=True) @nova.privsep.sys_admin_pctxt.entrypoint def lvcreate(size, lv, vg, preallocated=None): cmd = ['lvcreate'] if not preallocated: cmd.extend(['-L', '%db' % size]) else: cmd.extend(['-L', '%db' % preallocated, '--virtualsize', '%db' % size]) cmd.extend(['-n', lv, vg]) processutils.execute(*cmd, attempts=3) @nova.privsep.sys_admin_pctxt.entrypoint def vginfo(vg): return processutils.execute('vgs', '--noheadings', '--nosuffix', '--separator', '|', '--units', 'b', '-o', 'vg_size,vg_free', vg) @nova.privsep.sys_admin_pctxt.entrypoint def lvlist(vg): return processutils.execute('lvs', '--noheadings', '-o', 'lv_name', vg) @nova.privsep.sys_admin_pctxt.entrypoint def lvinfo(path): return processutils.execute('lvs', '-o', 'vg_all,lv_all', '--separator', '|', path) @nova.privsep.sys_admin_pctxt.entrypoint def lvremove(path): processutils.execute('lvremove', '-f', path, attempts=3) @nova.privsep.sys_admin_pctxt.entrypoint def blockdev_size(path): return processutils.execute('blockdev', '--getsize64', path) @nova.privsep.sys_admin_pctxt.entrypoint def blockdev_flush(path): return processutils.execute('blockdev', '--flushbufs', path) @nova.privsep.sys_admin_pctxt.entrypoint def clear(path, volume_size, shred=False): cmd = ['shred'] if shred: cmd.extend(['-n3']) else: cmd.extend(['-n0', '-z']) cmd.extend(['-s%d' % volume_size, path]) processutils.execute(*cmd) @nova.privsep.sys_admin_pctxt.entrypoint def loopsetup(path): return processutils.execute('losetup', '--find', '--show', path) @nova.privsep.sys_admin_pctxt.entrypoint def loopremove(device): return processutils.execute('losetup', '--detach', device, attempts=3) @nova.privsep.sys_admin_pctxt.entrypoint def nbd_connect(device, image): return processutils.execute('qemu-nbd', '-c', device, image) @nova.privsep.sys_admin_pctxt.entrypoint def nbd_disconnect(device): return processutils.execute('qemu-nbd', '-d', device) @nova.privsep.sys_admin_pctxt.entrypoint def create_device_maps(device): return processutils.execute('kpartx', '-a', device) @nova.privsep.sys_admin_pctxt.entrypoint def remove_device_maps(device): return processutils.execute('kpartx', '-d', device) @nova.privsep.sys_admin_pctxt.entrypoint def get_filesystem_type(device): return processutils.execute('blkid', '-o', 'value', '-s', 'TYPE', device, check_exit_code=[0, 2]) @nova.privsep.sys_admin_pctxt.entrypoint def e2fsck(image, flags='-fp'): unprivileged_e2fsck(image, flags=flags) # NOTE(mikal): this method is deliberately not wrapped in a privsep # entrypoint. This is not for unit testing, there are some callers who do # not require elevated permissions when calling this. 
def unprivileged_e2fsck(image, flags='-fp'): processutils.execute('e2fsck', flags, image, check_exit_code=[0, 1, 2]) @nova.privsep.sys_admin_pctxt.entrypoint def resize2fs(image, check_exit_code, size=None): unprivileged_resize2fs(image, check_exit_code=check_exit_code, size=size) # NOTE(mikal): this method is deliberately not wrapped in a privsep # entrypoint. This is not for unit testing, there are some callers who do # not require elevated permissions when calling this. def unprivileged_resize2fs(image, check_exit_code, size=None): if size: cmd = ['resize2fs', image, size] else: cmd = ['resize2fs', image] processutils.execute(*cmd, check_exit_code=check_exit_code) @nova.privsep.sys_admin_pctxt.entrypoint def create_partition_table(device, style, check_exit_code=True): processutils.execute('parted', '--script', device, 'mklabel', style, check_exit_code=check_exit_code) @nova.privsep.sys_admin_pctxt.entrypoint def create_partition(device, style, start, end, check_exit_code=True): processutils.execute('parted', '--script', device, '--', 'mkpart', style, start, end, check_exit_code=check_exit_code) @nova.privsep.sys_admin_pctxt.entrypoint def list_partitions(device): return unprivileged_list_partitions(device) # NOTE(mikal): this method is deliberately not wrapped in a privsep # entrypoint. This is not for unit testing, there are some callers who do # not require elevated permissions when calling this. def unprivileged_list_partitions(device): """Return partition information (num, size, type) for a device.""" out, _err = processutils.execute('parted', '--script', '--machine', device, 'unit s', 'print') lines = [line for line in out.split('\n') if line] partitions = [] LOG.debug('Partitions:') for line in lines[2:]: line = line.rstrip(';') num, start, end, size, fstype, name, flags = line.split(':') num = int(num) start = int(start.rstrip('s')) end = int(end.rstrip('s')) size = int(size.rstrip('s')) LOG.debug(' %(num)s: %(fstype)s %(size)d sectors', {'num': num, 'fstype': fstype, 'size': size}) partitions.append((num, start, size, fstype, name, flags)) return partitions @nova.privsep.sys_admin_pctxt.entrypoint def resize_partition(device, start, end, bootable): processutils.execute('parted', '--script', device, 'rm', '1') processutils.execute('parted', '--script', device, 'mkpart', 'primary', '%ds' % start, '%ds' % end) if bootable: processutils.execute('parted', '--script', device, 'set', '1', 'boot', 'on') @nova.privsep.sys_admin_pctxt.entrypoint def ext_journal_disable(device): processutils.execute('tune2fs', '-O ^has_journal', device) @nova.privsep.sys_admin_pctxt.entrypoint def ext_journal_enable(device): processutils.execute('tune2fs', '-j', device) # NOTE(mikal): nova allows deployers to configure the command line which is # used to create a filesystem of a given type. This is frankly a little bit # weird, but its also historical and probably should be in some sort of # museum. So, we do that thing here, but it requires a funny dance in order # to load that configuration at startup. # NOTE(mikal): I really feel like this whole thing should be deprecated, I # just don't think its a great idea to let people specify a command in a # configuration option to run as root. 
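# As an illustrative example of that deployer configuration (the values are
# deployment specific and shown here only as a sketch), nova.conf can carry
# entries such as:
#
#     [DEFAULT]
#     virt_mkfs = linux=mkfs.ext4 -L %(fs_label)s -F %(target)s
#     virt_mkfs = windows=mkfs.ntfs --force --fast --label %(fs_label)s %(target)s
#
# Each entry is roughly "<os_type>=<mkfs command>" and is handed to
# load_mkfs_command() below when the service starts.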
_MKFS_COMMAND = {} _DEFAULT_MKFS_COMMAND = None FS_FORMAT_EXT2 = "ext2" FS_FORMAT_EXT3 = "ext3" FS_FORMAT_EXT4 = "ext4" FS_FORMAT_XFS = "xfs" FS_FORMAT_NTFS = "ntfs" FS_FORMAT_VFAT = "vfat" SUPPORTED_FS_TO_EXTEND = ( FS_FORMAT_EXT2, FS_FORMAT_EXT3, FS_FORMAT_EXT4) _DEFAULT_FILE_SYSTEM = FS_FORMAT_VFAT _DEFAULT_FS_BY_OSTYPE = {'linux': FS_FORMAT_EXT4, 'windows': FS_FORMAT_NTFS} def load_mkfs_command(os_type, command): global _MKFS_COMMAND global _DEFAULT_MKFS_COMMAND _MKFS_COMMAND[os_type] = command if os_type == 'default': _DEFAULT_MKFS_COMMAND = command def get_fs_type_for_os_type(os_type): global _MKFS_COMMAND return os_type if _MKFS_COMMAND.get(os_type) else 'default' # NOTE(mikal): this method needs to be duplicated from utils because privsep # can't depend on code outside the privsep directory. def _get_hash_str(base_str): """Returns string that represents MD5 hash of base_str (in hex format). If base_str is a Unicode string, encode it to UTF-8. """ if isinstance(base_str, six.text_type): base_str = base_str.encode('utf-8') return hashlib.md5(base_str).hexdigest() def get_file_extension_for_os_type(os_type, default_ephemeral_format, specified_fs=None): global _MKFS_COMMAND global _DEFAULT_MKFS_COMMAND mkfs_command = _MKFS_COMMAND.get(os_type, _DEFAULT_MKFS_COMMAND) if mkfs_command: extension = mkfs_command else: if not specified_fs: specified_fs = default_ephemeral_format if not specified_fs: specified_fs = _DEFAULT_FS_BY_OSTYPE.get(os_type, _DEFAULT_FILE_SYSTEM) extension = specified_fs return _get_hash_str(extension)[:7] @nova.privsep.sys_admin_pctxt.entrypoint def mkfs(fs, path, label=None): unprivileged_mkfs(fs, path, label=label) # NOTE(mikal): this method is deliberately not wrapped in a privsep # entrypoint. This is not for unit testing, there are some callers who do # not require elevated permissions when calling this. def unprivileged_mkfs(fs, path, label=None): """Format a file or block device :param fs: Filesystem type (examples include 'swap', 'ext3', 'ext4', 'btrfs', etc.) :param path: Path to file or block device to format :param label: Volume label to use """ if fs == 'swap': args = ['mkswap'] else: args = ['mkfs', '-t', fs] # add -F to force non-interactive execution on a non-block device. if fs in ('ext3', 'ext4', 'ntfs'): args.extend(['-F']) if label: if fs in ('msdos', 'vfat'): label_opt = '-n' else: label_opt = '-L' args.extend([label_opt, label]) args.append(path) processutils.execute(*args) @nova.privsep.sys_admin_pctxt.entrypoint def _inner_configurable_mkfs(os_type, fs_label, target): mkfs_command = (_MKFS_COMMAND.get(os_type, _DEFAULT_MKFS_COMMAND) or '') % {'fs_label': fs_label, 'target': target} processutils.execute(*mkfs_command.split()) # NOTE(mikal): this method is deliberately not wrapped in a privsep entrypoint def configurable_mkfs(os_type, fs_label, target, run_as_root, default_ephemeral_format, specified_fs=None): # Format a file or block device using a user-provided command for each # os type. If the user has not provided any configuration, the format type # is chosen according to a default_ephemeral_format configuration or a # system default.
global _MKFS_COMMAND global _DEFAULT_MKFS_COMMAND mkfs_command = (_MKFS_COMMAND.get(os_type, _DEFAULT_MKFS_COMMAND) or '') % {'fs_label': fs_label, 'target': target} if mkfs_command: if run_as_root: _inner_configurable_mkfs(os_type, fs_label, target) else: processutils.execute(*mkfs_command.split()) else: if not specified_fs: specified_fs = default_ephemeral_format if not specified_fs: specified_fs = _DEFAULT_FS_BY_OSTYPE.get(os_type, _DEFAULT_FILE_SYSTEM) if run_as_root: mkfs(specified_fs, target, fs_label) else: unprivileged_mkfs(specified_fs, target, fs_label) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/privsep/idmapshift.py0000664000175000017500000001102400000000000017721 0ustar00zuulzuul00000000000000# Copyright 2014 Rackspace, Andrew Melton # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ IDMapShift is a tool that properly sets the ownership of a filesystem for use with linux user namespaces. When using user namespaces with linux containers, the filesystem of the container must be owned by the targeted user and group ids being applied to that container. Otherwise, processes inside the container won't be able to access the filesystem. For example, when using the id map string '0:10000:2000', this means that user ids inside the container between 0 and 1999 will map to user ids on the host between 10000 and 11999. Root (0) becomes 10000, user 1 becomes 10001, user 50 becomes 10050 and user 1999 becomes 11999. This means that files that are owned by root need to actually be owned by user 10000, and files owned by 50 need to be owned by 10050, and so on. IDMapShift will take the uid and gid strings used for user namespaces and properly set up the filesystem for use by those users. Uids and gids outside of provided ranges will be mapped to nobody (max uid/gid) so that they are inaccessible inside the container. 
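For example (an illustrative call; the path and ranges are made up)::

    shift('/var/lib/machines/instance-1/rootfs',
          [(0, 10000, 2000)], [(0, 10000, 2000)])

walks the tree and chowns every file according to the '0:10000:2000' style mapping described above, sending any uid or gid outside the configured ranges to the nobody id (65534).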
""" import os from oslo_log import log as logging import nova.privsep LOG = logging.getLogger(__name__) NOBODY_ID = 65534 def find_target_id(fsid, mappings, nobody, memo): if fsid not in memo: for start, target, count in mappings: if start <= fsid < start + count: memo[fsid] = (fsid - start) + target break else: memo[fsid] = nobody return memo[fsid] def print_chown(path, uid, gid, target_uid, target_gid): LOG.debug('%s %s:%s -> %s:%s', path, uid, gid, target_uid, target_gid) def shift_path(path, uid_mappings, gid_mappings, nobody, uid_memo, gid_memo): stat = os.lstat(path) uid = stat.st_uid gid = stat.st_gid target_uid = find_target_id(uid, uid_mappings, nobody, uid_memo) target_gid = find_target_id(gid, gid_mappings, nobody, gid_memo) print_chown(path, uid, gid, target_uid, target_gid) os.lchown(path, target_uid, target_gid) def shift_dir(fsdir, uid_mappings, gid_mappings, nobody): uid_memo = dict() gid_memo = dict() def shift_path_short(p): shift_path(p, uid_mappings, gid_mappings, nobody, uid_memo=uid_memo, gid_memo=gid_memo) shift_path_short(fsdir) for root, dirs, files in os.walk(fsdir): for d in dirs: path = os.path.join(root, d) shift_path_short(path) for f in files: path = os.path.join(root, f) shift_path_short(path) def confirm_path(path, uid_ranges, gid_ranges, nobody): stat = os.lstat(path) uid = stat.st_uid gid = stat.st_gid uid_in_range = True if uid == nobody else False gid_in_range = True if gid == nobody else False if not uid_in_range or not gid_in_range: for (start, end) in uid_ranges: if start <= uid <= end: uid_in_range = True break for (start, end) in gid_ranges: if start <= gid <= end: gid_in_range = True break return uid_in_range and gid_in_range def get_ranges(maps): return [(target, target + count - 1) for (start, target, count) in maps] def confirm_dir(fsdir, uid_mappings, gid_mappings, nobody): uid_ranges = get_ranges(uid_mappings) gid_ranges = get_ranges(gid_mappings) if not confirm_path(fsdir, uid_ranges, gid_ranges, nobody): return False for root, dirs, files in os.walk(fsdir): for d in dirs: path = os.path.join(root, d) if not confirm_path(path, uid_ranges, gid_ranges, nobody): return False for f in files: path = os.path.join(root, f) if not confirm_path(path, uid_ranges, gid_ranges, nobody): return False return True @nova.privsep.sys_admin_pctxt.entrypoint def shift(path, uid_map, gid_map): if confirm_dir(uid_map, gid_map, path, NOBODY_ID): return shift_dir(path, uid_map, gid_map, NOBODY_ID) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/privsep/libvirt.py0000664000175000017500000002136500000000000017255 0ustar00zuulzuul00000000000000# Copyright 2016 Red Hat, Inc # Copyright 2017 Rackspace Australia # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ libvirt specific routines. 
""" import binascii import os import stat from oslo_concurrency import processutils from oslo_log import log as logging from oslo_utils import units from oslo_utils import uuidutils from nova.i18n import _ import nova.privsep LOG = logging.getLogger(__name__) @nova.privsep.sys_admin_pctxt.entrypoint def dmcrypt_create_volume(target, device, cipher, key_size, key): """Sets up a dmcrypt mapping :param target: device mapper logical device name :param device: underlying block device :param cipher: encryption cipher string digestible by cryptsetup :param key_size: encryption key size :param key: encoded encryption key bytestring """ cmd = ('cryptsetup', 'create', target, device, '--cipher=' + cipher, '--key-size=' + str(key_size), '--key-file=-') key = binascii.hexlify(key).decode('utf-8') processutils.execute(*cmd, process_input=key) @nova.privsep.sys_admin_pctxt.entrypoint def dmcrypt_delete_volume(target): """Deletes a dmcrypt mapping :param target: name of the mapped logical device """ processutils.execute('cryptsetup', 'remove', target) @nova.privsep.sys_admin_pctxt.entrypoint def ploop_init(size, disk_format, fs_type, disk_path): """Initialize ploop disk, make it readable for non-root user :param disk_format: data allocation format (raw or expanded) :param fs_type: filesystem (ext4, ext3, none) :param disk_path: ploop image file """ processutils.execute('ploop', 'init', '-s', size, '-f', disk_format, '-t', fs_type, disk_path, check_exit_code=True) # Add read access for all users, because "ploop init" creates # disk with rw rights only for root. OpenStack user should have access # to the disk to request info via "qemu-img info" # TODO(mikal): this is a faithful rendition of the pre-privsep code from # the libvirt driver, but it seems undesirable to me. It would be good to # create the loop file with the right owner or group such that we don't # need to have it world readable. I don't have access to a system to test # this on however. 
st = os.stat(disk_path) os.chmod(disk_path, st.st_mode | stat.S_IROTH) @nova.privsep.sys_admin_pctxt.entrypoint def ploop_resize(disk_path, size): """Resize ploop disk :param disk_path: ploop image file :param size: new size (in bytes) """ processutils.execute('prl_disk_tool', 'resize', '--size', '%dM' % (size // units.Mi), '--resize_partition', '--hdd', disk_path, check_exit_code=True) @nova.privsep.sys_admin_pctxt.entrypoint def ploop_restore_descriptor(image_dir, base_delta, fmt): """Restore ploop disk descriptor XML :param image_dir: path to where descriptor XML is created :param base_delta: ploop image file containing the data :param fmt: ploop data allocation format (raw or expanded) """ processutils.execute('ploop', 'restore-descriptor', '-f', fmt, image_dir, base_delta, check_exit_code=True) @nova.privsep.sys_admin_pctxt.entrypoint def plug_infiniband_vif(vnic_mac, device_id, fabric, net_model, pci_slot): processutils.execute('ebrctl', 'add-port', vnic_mac, device_id, fabric, net_model, pci_slot) @nova.privsep.sys_admin_pctxt.entrypoint def unplug_infiniband_vif(fabric, vnic_mac): processutils.execute('ebrctl', 'del-port', fabric, vnic_mac) @nova.privsep.sys_admin_pctxt.entrypoint def plug_midonet_vif(port_id, dev): processutils.execute('mm-ctl', '--bind-port', port_id, dev) @nova.privsep.sys_admin_pctxt.entrypoint def unplug_midonet_vif(port_id): processutils.execute('mm-ctl', '--unbind-port', port_id) @nova.privsep.sys_admin_pctxt.entrypoint def plug_plumgrid_vif(dev, iface_id, vif_address, net_id, tenant_id): processutils.execute('ifc_ctl', 'gateway', 'add_port', dev) processutils.execute('ifc_ctl', 'gateway', 'ifup', dev, 'access_vm', iface_id, vif_address, 'pgtag2=%s' % net_id, 'pgtag1=%s' % tenant_id) @nova.privsep.sys_admin_pctxt.entrypoint def unplug_plumgrid_vif(dev): processutils.execute('ifc_ctl', 'gateway', 'ifdown', dev) processutils.execute('ifc_ctl', 'gateway', 'del_port', dev) @nova.privsep.sys_admin_pctxt.entrypoint def readpty(path): # TODO(mikal): I'm not a huge fan that we don't enforce a valid pty path # here, but I haven't come up with a great way of doing that. # NOTE(mikal): I am deliberately not catching the ImportError # exception here... Some platforms (I'm looking at you Windows) # don't have a fcntl and we may as well let them know that # with an ImportError, not that they should be calling this at all. import fcntl try: with open(path, 'r') as f: current_flags = fcntl.fcntl(f.fileno(), fcntl.F_GETFL) fcntl.fcntl(f.fileno(), fcntl.F_SETFL, current_flags | os.O_NONBLOCK) return f.read() except Exception as e: # NOTE(mikal): dear internet, I see you looking at me with your # judging eyes. There's a story behind why we do this. You see, the # previous implementation did this: # # out, err = utils.execute('dd', # 'if=%s' % pty, # 'iflag=nonblock', # run_as_root=True, # check_exit_code=False) # return out # # So, it never checked stderr or the return code of the process it # ran to read the pty. Doing something better than that has turned # out to be unexpectedly hard because there are a surprisingly large # variety of errors which appear to be thrown when doing this read. # # Therefore for now we log the errors, but keep on rolling. Volunteers # to help clean this up are welcome and will receive free beverages. 
LOG.info(_('Ignored error while reading from instance console ' 'pty: %s'), e) return '' @nova.privsep.sys_admin_pctxt.entrypoint def xend_probe(): processutils.execute('xend', 'status', check_exit_code=True) @nova.privsep.sys_admin_pctxt.entrypoint def create_mdev(physical_device, mdev_type, uuid=None): """Instantiate a mediated device.""" if uuid is None: uuid = uuidutils.generate_uuid() fpath = '/sys/class/mdev_bus/{0}/mdev_supported_types/{1}/create' fpath = fpath.format(physical_device, mdev_type) with open(fpath, 'w') as f: f.write(uuid) return uuid @nova.privsep.sys_admin_pctxt.entrypoint def systemd_run_qb_mount(qb_vol, mnt_base, cfg_file=None): """Mount QB volume in separate CGROUP""" # Note(kaisers): Details on why we run without --user at bug #1756823 sysdr_cmd = ['systemd-run', '--scope', 'mount.quobyte', '--disable-xattrs', qb_vol, mnt_base] if cfg_file: sysdr_cmd.extend(['-c', cfg_file]) return processutils.execute(*sysdr_cmd) # NOTE(kaisers): this method is deliberately not wrapped in a privsep entry. def unprivileged_qb_mount(qb_vol, mnt_base, cfg_file=None): """Mount QB volume""" mnt_cmd = ['mount.quobyte', '--disable-xattrs', qb_vol, mnt_base] if cfg_file: mnt_cmd.extend(['-c', cfg_file]) return processutils.execute(*mnt_cmd) @nova.privsep.sys_admin_pctxt.entrypoint def umount(mnt_base): """Unmount volume""" unprivileged_umount(mnt_base) # NOTE(kaisers): this method is deliberately not wrapped in a privsep entry. def unprivileged_umount(mnt_base): """Unmount volume""" umnt_cmd = ['umount', mnt_base] return processutils.execute(*umnt_cmd) @nova.privsep.sys_admin_pctxt.entrypoint def get_pmem_namespaces(): ndctl_cmd = ['ndctl', 'list', '-X'] nss_info = processutils.execute(*ndctl_cmd)[0] return nss_info @nova.privsep.sys_admin_pctxt.entrypoint def cleanup_vpmem(devpath): daxio_cmd = ['daxio', '-z', '-o', '%s' % devpath] processutils.execute(*daxio_cmd) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/privsep/linux_net.py0000664000175000017500000001075300000000000017606 0ustar00zuulzuul00000000000000# Copyright 2016 Red Hat, Inc # Copyright 2017 Rackspace Australia # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Linux network specific helpers. 
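For illustration, a call such as ``set_device_mtu('eth0', 1450)`` below ends
up running ``ip link set eth0 mtu 1450`` inside the privsep context (the
device name and MTU are example values only).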
""" import os from oslo_concurrency import processutils from oslo_log import log as logging from oslo_utils import excutils import nova.privsep.linux_net LOG = logging.getLogger(__name__) def device_exists(device): """Check if ethernet device exists.""" return os.path.exists('/sys/class/net/%s' % device) def delete_net_dev(dev): """Delete a network device only if it exists.""" if device_exists(dev): try: delete_net_dev_escalated(dev) LOG.debug("Net device removed: '%s'", dev) except processutils.ProcessExecutionError: with excutils.save_and_reraise_exception(): LOG.error("Failed removing net device: '%s'", dev) @nova.privsep.sys_admin_pctxt.entrypoint def delete_net_dev_escalated(dev): processutils.execute('ip', 'link', 'delete', dev, check_exit_code=[0, 2, 254]) @nova.privsep.sys_admin_pctxt.entrypoint def set_device_mtu(dev, mtu): if mtu: processutils.execute('ip', 'link', 'set', dev, 'mtu', mtu, check_exit_code=[0, 2, 254]) @nova.privsep.sys_admin_pctxt.entrypoint def set_device_enabled(dev): _set_device_enabled_inner(dev) def _set_device_enabled_inner(dev): processutils.execute('ip', 'link', 'set', dev, 'up', check_exit_code=[0, 2, 254]) @nova.privsep.sys_admin_pctxt.entrypoint def set_device_trust(dev, vf_num, trusted): _set_device_trust_inner(dev, vf_num, trusted) def _set_device_trust_inner(dev, vf_num, trusted): processutils.execute('ip', 'link', 'set', dev, 'vf', vf_num, 'trust', bool(trusted) and 'on' or 'off', check_exit_code=[0, 2, 254]) @nova.privsep.sys_admin_pctxt.entrypoint def set_device_macaddr(dev, mac_addr, port_state=None): _set_device_macaddr_inner(dev, mac_addr, port_state=port_state) def _set_device_macaddr_inner(dev, mac_addr, port_state=None): if port_state: processutils.execute('ip', 'link', 'set', dev, 'address', mac_addr, port_state, check_exit_code=[0, 2, 254]) else: processutils.execute('ip', 'link', 'set', dev, 'address', mac_addr, check_exit_code=[0, 2, 254]) @nova.privsep.sys_admin_pctxt.entrypoint def set_device_macaddr_and_vlan(dev, vf_num, mac_addr, vlan): processutils.execute('ip', 'link', 'set', dev, 'vf', vf_num, 'mac', mac_addr, 'vlan', vlan, run_as_root=True, check_exit_code=[0, 2, 254]) @nova.privsep.sys_admin_pctxt.entrypoint def create_tap_dev(dev, mac_address=None, multiqueue=False): if not device_exists(dev): try: # First, try with 'ip' cmd = ('ip', 'tuntap', 'add', dev, 'mode', 'tap') if multiqueue: cmd = cmd + ('multi_queue', ) processutils.execute(*cmd, check_exit_code=[0, 2, 254]) except processutils.ProcessExecutionError: if multiqueue: LOG.warning( 'Failed to create a tap device with ip tuntap. ' 'tunctl does not support creation of multi-queue ' 'enabled devices, skipping fallback.') raise # Second option: tunctl processutils.execute('tunctl', '-b', '-t', dev) if mac_address: _set_device_macaddr_inner(dev, mac_address) _set_device_enabled_inner(dev) @nova.privsep.sys_admin_pctxt.entrypoint def add_vlan(bridge_interface, interface, vlan_num): processutils.execute('ip', 'link', 'add', 'link', bridge_interface, 'name', interface, 'type', 'vlan', 'id', vlan_num, check_exit_code=[0, 2, 254]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/privsep/path.py0000664000175000017500000000647200000000000016540 0ustar00zuulzuul00000000000000# Copyright 2016 Red Hat, Inc # Copyright 2017 Rackspace Australia # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Routines that bypass file-system checks.""" import errno import os from oslo_utils import fileutils from nova import exception import nova.privsep @nova.privsep.sys_admin_pctxt.entrypoint def readfile(path): if not os.path.exists(path): raise exception.FileNotFound(file_path=path) with open(path, 'r') as f: return f.read() @nova.privsep.sys_admin_pctxt.entrypoint def writefile(path, mode, content): if not os.path.exists(os.path.dirname(path)): raise exception.FileNotFound(file_path=path) with open(path, mode) as f: f.write(content) @nova.privsep.sys_admin_pctxt.entrypoint def readlink(path): if not os.path.exists(path): raise exception.FileNotFound(file_path=path) return os.readlink(path) @nova.privsep.sys_admin_pctxt.entrypoint def chown(path, uid=-1, gid=-1): if not os.path.exists(path): raise exception.FileNotFound(file_path=path) return os.chown(path, uid, gid) @nova.privsep.sys_admin_pctxt.entrypoint def makedirs(path): fileutils.ensure_tree(path) @nova.privsep.sys_admin_pctxt.entrypoint def chmod(path, mode): if not os.path.exists(path): raise exception.FileNotFound(file_path=path) os.chmod(path, mode) @nova.privsep.sys_admin_pctxt.entrypoint def utime(path): if not os.path.exists(path): raise exception.FileNotFound(file_path=path) # NOTE(mikal): the old version of this used execute(touch, ...), which # would apparently fail on shared storage when multiple instances were # being launched at the same time. If we see failures here, we might need # to wrap this in a try / except. os.utime(path, None) @nova.privsep.sys_admin_pctxt.entrypoint def rmdir(path): if not os.path.exists(path): raise exception.FileNotFound(file_path=path) os.rmdir(path) class path(object): @staticmethod @nova.privsep.sys_admin_pctxt.entrypoint def exists(path): return os.path.exists(path) @nova.privsep.sys_admin_pctxt.entrypoint def last_bytes(path, num): """Return num bytes from the end of the file, and remaining byte count. :param path: The file to read :param num: The number of bytes to return :returns: (data, remaining) """ with open(path, 'rb') as f: try: f.seek(-num, os.SEEK_END) except IOError as e: # seek() fails with EINVAL when trying to go before the start of # the file. It means that num is larger than the file size, so # just go to the start. if e.errno == errno.EINVAL: f.seek(0, os.SEEK_SET) else: raise remaining = f.tell() return (f.read(), remaining) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/privsep/qemu.py0000664000175000017500000001222600000000000016545 0ustar00zuulzuul00000000000000# Copyright 2018 Michael Still and Aptira # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """ Helpers for qemu tasks. """ import os from oslo_concurrency import processutils from oslo_log import log as logging from oslo_utils import units from nova import exception from nova.i18n import _ import nova.privsep.utils LOG = logging.getLogger(__name__) QEMU_IMG_LIMITS = processutils.ProcessLimits( cpu_time=30, address_space=1 * units.Gi) @nova.privsep.sys_admin_pctxt.entrypoint def convert_image(source, dest, in_format, out_format, instances_path, compress): unprivileged_convert_image(source, dest, in_format, out_format, instances_path, compress) # NOTE(mikal): this method is deliberately not wrapped in a privsep entrypoint def unprivileged_convert_image(source, dest, in_format, out_format, instances_path, compress): # NOTE(mdbooth, kchamart): `qemu-img convert` defaults to # 'cache=writeback' for the source image, and 'cache=unsafe' for the # target, which means that data is not synced to disk at completion. # We explicitly use 'cache=none' here, for the target image, to (1) # ensure that we don't interfere with other applications using the # host's I/O cache, and (2) ensure that the data is on persistent # storage when the command exits. Without (2), a host crash may # leave a corrupt image in the image cache, which Nova cannot # recover automatically. # NOTE(zigo, kchamart): We cannot use `qemu-img convert -t none` if # the 'instance_dir' is mounted on a filesystem that doesn't support # O_DIRECT, which is the case, for example, with 'tmpfs'. This # simply crashes `openstack server create` in environments like live # distributions. In such cases, the best choice is 'writeback', # which (a) makes the conversion multiple times faster; and (b) is # as safe as it can be, because at the end of the conversion it, # just like 'writethrough', calls fsync(2)|fdatasync(2), which # ensures to safely write the data to the physical disk. # NOTE(mikal): there is an assumption here that the source and destination # are in the instances_path. Is that worth enforcing? if nova.privsep.utils.supports_direct_io(instances_path): cache_mode = 'none' else: cache_mode = 'writeback' cmd = ('qemu-img', 'convert', '-t', cache_mode, '-O', out_format) if in_format is not None: cmd = cmd + ('-f', in_format) if compress: cmd += ('-c',) cmd = cmd + (source, dest) processutils.execute(*cmd) @nova.privsep.sys_admin_pctxt.entrypoint def privileged_qemu_img_info(path, format=None): """Return an oject containing the parsed output from qemu-img info This is a privileged call to qemu-img info using the sys_admin_pctxt entrypoint allowing host block devices etc to be accessed. 
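    An illustrative call (the path is an example only)::

        out = privileged_qemu_img_info('/dev/sdb')

    where ``out`` is the JSON text produced by ``qemu-img info --output=json``
    (see ``unprivileged_qemu_img_info`` below), which the caller is expected
    to parse.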
""" return unprivileged_qemu_img_info(path, format=format) def unprivileged_qemu_img_info(path, format=None): """Return an object containing the parsed output from qemu-img info.""" try: # The following check is about ploop images that reside within # directories and always have DiskDescriptor.xml file beside them if (os.path.isdir(path) and os.path.exists(os.path.join(path, "DiskDescriptor.xml"))): path = os.path.join(path, "root.hds") cmd = ( 'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info', path, '--force-share', '--output=json', ) if format is not None: cmd = cmd + ('-f', format) out, err = processutils.execute(*cmd, prlimit=QEMU_IMG_LIMITS) except processutils.ProcessExecutionError as exp: if exp.exit_code == -9: # this means we hit prlimits, make the exception more specific msg = (_("qemu-img aborted by prlimits when inspecting " "%(path)s : %(exp)s") % {'path': path, 'exp': exp}) elif exp.exit_code == 1 and 'No such file or directory' in exp.stderr: # The os.path.exists check above can race so this is a simple # best effort at catching that type of failure and raising a more # specific error. raise exception.DiskNotFound(location=path) else: msg = (_("qemu-img failed to execute on %(path)s : %(exp)s") % {'path': path, 'exp': exp}) raise exception.InvalidDiskInfo(reason=msg) if not out: msg = (_("Failed to run qemu-img info on %(path)s : %(error)s") % {'path': path, 'error': err}) raise exception.InvalidDiskInfo(reason=msg) return out ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/privsep/utils.py0000664000175000017500000000621700000000000016741 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2011 Justin Santa Barbara # Copyright 2018 Michael Still and Aptira # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This module is utility methods that privsep depends on. Privsep isn't allowed # to depend on anything outside the privsep directory, so these need to be # here. That said, other parts of nova can call into these utilities if # needed. import errno import mmap import os import random import sys from oslo_log import log as logging from oslo_utils import excutils # NOTE(mriedem): Avoid importing nova.utils since that can cause a circular # import with the privsep code. In fact, avoid importing anything outside # of nova/privsep/ if possible. LOG = logging.getLogger(__name__) def generate_random_string(): return str(random.randint(0, sys.maxsize)) def supports_direct_io(dirpath): if not hasattr(os, 'O_DIRECT'): LOG.debug("This python runtime does not support direct I/O") return False # Use a random filename to avoid issues with $dirpath being on shared # storage. 
file_name = "%s.%s" % (".directio.test", generate_random_string()) testfile = os.path.join(dirpath, file_name) hasDirectIO = True fd = None try: fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT) # Check is the write allowed with 4096 byte alignment align_size = 4096 m = mmap.mmap(-1, align_size) m.write(b"x" * align_size) os.write(fd, m) LOG.debug("Path '%(path)s' supports direct I/O", {'path': dirpath}) except OSError as e: if e.errno in (errno.EINVAL, errno.ENOENT): LOG.debug("Path '%(path)s' does not support direct I/O: " "'%(ex)s'", {'path': dirpath, 'ex': e}) hasDirectIO = False else: with excutils.save_and_reraise_exception(): LOG.error("Error on '%(path)s' while checking " "direct I/O: '%(ex)s'", {'path': dirpath, 'ex': e}) except Exception as e: with excutils.save_and_reraise_exception(): LOG.error("Error on '%(path)s' while checking direct I/O: " "'%(ex)s'", {'path': dirpath, 'ex': e}) finally: # ensure unlink(filepath) will actually remove the file by deleting # the remaining link to it in close(fd) if fd is not None: os.close(fd) try: os.unlink(testfile) except Exception: pass return hasDirectIO ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/privsep/xenapi.py0000664000175000017500000000234300000000000017061 0ustar00zuulzuul00000000000000# Copyright 2018 Michael Still and Aptira # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ xenapi specific routines. """ from oslo_concurrency import processutils import nova.privsep @nova.privsep.sys_admin_pctxt.entrypoint def xenstore_read(path): return processutils.execute('xenstore-read', path) @nova.privsep.sys_admin_pctxt.entrypoint def block_copy(src_path, dst_path, block_size, num_blocks): processutils.execute('dd', 'if=%s' % src_path, 'of=%s' % dst_path, 'bs=%d' % block_size, 'count=%d' % num_blocks, 'iflag=direct,sync', 'oflag=direct,sync') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/profiler.py0000664000175000017500000000440400000000000015727 0ustar00zuulzuul00000000000000# Copyright 2016 IBM Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_utils import importutils import webob.dec import nova.conf profiler = importutils.try_import('osprofiler.profiler') profiler_web = importutils.try_import('osprofiler.web') CONF = nova.conf.CONF class WsgiMiddleware(object): def __init__(self, application, **kwargs): self.application = application @classmethod def factory(cls, global_conf, **local_conf): if profiler_web: return profiler_web.WsgiMiddleware.factory(global_conf, **local_conf) def filter_(app): return cls(app, **local_conf) return filter_ @webob.dec.wsgify def __call__(self, request): return request.get_response(self.application) def get_traced_meta(): if profiler and 'profiler' in CONF and CONF.profiler.enabled: return profiler.TracedMeta else: # NOTE(rpodolyaka): if we do not return a child of type, then Python # fails to build a correct MRO when osprofiler is not installed class NoopMeta(type): pass return NoopMeta def trace_cls(name, **kwargs): """Wrap the OSProfiler trace_cls decorator so that it will not try to patch the class unless OSProfiler is present and enabled in the config :param name: The name of action. E.g. wsgi, rpc, db, etc.. :param kwargs: Any other keyword args used by profiler.trace_cls """ def decorator(cls): if profiler and 'profiler' in CONF and CONF.profiler.enabled: trace_decorator = profiler.trace_cls(name, kwargs) return trace_decorator(cls) return cls return decorator ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/quota.py0000664000175000017500000017200300000000000015237 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Quotas for resources per project.""" import copy from oslo_log import log as logging from oslo_utils import importutils from sqlalchemy.sql import and_ from sqlalchemy.sql import false from sqlalchemy.sql import null from sqlalchemy.sql import or_ import nova.conf from nova import context as nova_context from nova.db import api as db from nova.db.sqlalchemy import api as db_api from nova.db.sqlalchemy import api_models from nova import exception from nova import objects from nova.scheduler.client import report from nova import utils LOG = logging.getLogger(__name__) CONF = nova.conf.CONF # Lazy-loaded on first access. # Avoid constructing the KSA adapter and provider tree on every access. PLACEMENT_CLIENT = None # If user_id and queued_for_delete are populated for a project, cache the # result to avoid doing unnecessary EXISTS database queries. UID_QFD_POPULATED_CACHE_BY_PROJECT = set() # For the server group members check, we do not scope to a project, so if all # user_id and queued_for_delete are populated for all projects, cache the # result to avoid doing unnecessary EXISTS database queries. 
UID_QFD_POPULATED_CACHE_ALL = False class DbQuotaDriver(object): """Driver to perform necessary checks to enforce quotas and obtain quota information. The default driver utilizes the local database. """ UNLIMITED_VALUE = -1 def get_defaults(self, context, resources): """Given a list of resources, retrieve the default quotas. Use the class quotas named `_DEFAULT_QUOTA_NAME` as default quotas, if it exists. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. """ quotas = {} default_quotas = objects.Quotas.get_default_class(context) for resource in resources.values(): # resource.default returns the config options. So if there's not # an entry for the resource in the default class, it uses the # config option. quotas[resource.name] = default_quotas.get(resource.name, resource.default) return quotas def get_class_quotas(self, context, resources, quota_class): """Given a list of resources, retrieve the quotas for the given quota class. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param quota_class: The name of the quota class to return quotas for. """ quotas = {} class_quotas = objects.Quotas.get_all_class_by_name(context, quota_class) for resource in resources.values(): quotas[resource.name] = class_quotas.get(resource.name, resource.default) return quotas def _process_quotas(self, context, resources, project_id, quotas, quota_class=None, usages=None, remains=False): modified_quotas = {} # Get the quotas for the appropriate class. If the project ID # matches the one in the context, we use the quota_class from # the context, otherwise, we use the provided quota_class (if # any) if project_id == context.project_id: quota_class = context.quota_class if quota_class: class_quotas = objects.Quotas.get_all_class_by_name(context, quota_class) else: class_quotas = {} default_quotas = self.get_defaults(context, resources) for resource in resources.values(): limit = quotas.get(resource.name, class_quotas.get( resource.name, default_quotas[resource.name])) modified_quotas[resource.name] = dict(limit=limit) # Include usages if desired. This is optional because one # internal consumer of this interface wants to access the # usages directly from inside a transaction. if usages: usage = usages.get(resource.name, {}) modified_quotas[resource.name].update( in_use=usage.get('in_use', 0), ) # Initialize remains quotas with the default limits. if remains: modified_quotas[resource.name].update(remains=limit) if remains: # Get all user quotas for a project and subtract their limits # from the class limits to get the remains. For example, if the # class/default is 20 and there are two users each with quota of 5, # then there is quota of 10 left to give out. all_quotas = objects.Quotas.get_all(context, project_id) for quota in all_quotas: if quota.resource in modified_quotas: modified_quotas[quota.resource]['remains'] -= \ quota.hard_limit return modified_quotas def _get_usages(self, context, resources, project_id, user_id=None): """Get usages of specified resources. This function is called to get resource usages for validating quota limit creates or updates in the os-quota-sets API and for displaying resource usages in the os-used-limits API. This function is not used for checking resource usage against quota limits. 
:param context: The request context for access checks :param resources: The dict of Resources for which to get usages :param project_id: The project_id for scoping the usage count :param user_id: Optional user_id for scoping the usage count :returns: A dict containing resources and their usage information, for example: {'project_id': 'project-uuid', 'user_id': 'user-uuid', 'instances': {'in_use': 5}} """ usages = {} for resource in resources.values(): # NOTE(melwitt): We should skip resources that are not countable, # such as AbsoluteResources. if not isinstance(resource, CountableResource): continue if resource.name in usages: # This is needed because for any of the resources: # ('instances', 'cores', 'ram'), they are counted at the same # time for efficiency (query the instances table once instead # of multiple times). So, a count of any one of them contains # counts for the others and we can avoid re-counting things. continue if resource.name in ('key_pairs', 'server_group_members'): # These per user resources are special cases whose usages # are not considered when validating limit create/update or # displaying used limits. They are always zero. usages[resource.name] = {'in_use': 0} else: if resource.name in db.quota_get_per_project_resources(): count = resource.count_as_dict(context, project_id) key = 'project' else: # NOTE(melwitt): This assumes a specific signature for # count_as_dict(). Usages used to be records in the # database but now we are counting resources. The # count_as_dict() function signature needs to match this # call, else it should get a conditional in this function. count = resource.count_as_dict(context, project_id, user_id=user_id) key = 'user' if user_id else 'project' # Example count_as_dict() return value: # {'project': {'instances': 5}, # 'user': {'instances': 2}} counted_resources = count[key].keys() for res in counted_resources: count_value = count[key][res] usages[res] = {'in_use': count_value} return usages def get_user_quotas(self, context, resources, project_id, user_id, quota_class=None, usages=True, project_quotas=None, user_quotas=None): """Given a list of resources, retrieve the quotas for the given user and project. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param project_id: The ID of the project to return quotas for. :param user_id: The ID of the user to return quotas for. :param quota_class: If project_id != context.project_id, the quota class cannot be determined. This parameter allows it to be specified. It will be ignored if project_id == context.project_id. :param usages: If True, the current counts will also be returned. :param project_quotas: Quotas dictionary for the specified project. :param user_quotas: Quotas dictionary for the specified project and user. """ if user_quotas: user_quotas = user_quotas.copy() else: user_quotas = objects.Quotas.get_all_by_project_and_user( context, project_id, user_id) # Use the project quota for default user quota. 
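# For example (values illustrative): if the project defines 'cores': 20 and
# the user has no per-user override for 'cores', the loop below copies the
# project value of 20 into user_quotas.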
proj_quotas = project_quotas or objects.Quotas.get_all_by_project( context, project_id) for key, value in proj_quotas.items(): if key not in user_quotas.keys(): user_quotas[key] = value user_usages = {} if usages: user_usages = self._get_usages(context, resources, project_id, user_id=user_id) return self._process_quotas(context, resources, project_id, user_quotas, quota_class, usages=user_usages) def get_project_quotas(self, context, resources, project_id, quota_class=None, usages=True, remains=False, project_quotas=None): """Given a list of resources, retrieve the quotas for the given project. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param project_id: The ID of the project to return quotas for. :param quota_class: If project_id != context.project_id, the quota class cannot be determined. This parameter allows it to be specified. It will be ignored if project_id == context.project_id. :param usages: If True, the current counts will also be returned. :param remains: If True, the current remains of the project will will be returned. :param project_quotas: Quotas dictionary for the specified project. """ project_quotas = project_quotas or objects.Quotas.get_all_by_project( context, project_id) project_usages = {} if usages: project_usages = self._get_usages(context, resources, project_id) return self._process_quotas(context, resources, project_id, project_quotas, quota_class, usages=project_usages, remains=remains) def _is_unlimited_value(self, v): """A helper method to check for unlimited value. """ return v <= self.UNLIMITED_VALUE def _sum_quota_values(self, v1, v2): """A helper method that handles unlimited values when performing sum operation. """ if self._is_unlimited_value(v1) or self._is_unlimited_value(v2): return self.UNLIMITED_VALUE return v1 + v2 def _sub_quota_values(self, v1, v2): """A helper method that handles unlimited values when performing subtraction operation. """ if self._is_unlimited_value(v1) or self._is_unlimited_value(v2): return self.UNLIMITED_VALUE return v1 - v2 def get_settable_quotas(self, context, resources, project_id, user_id=None): """Given a list of resources, retrieve the range of settable quotas for the given user or project. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param project_id: The ID of the project to return quotas for. :param user_id: The ID of the user to return quotas for. """ settable_quotas = {} db_proj_quotas = objects.Quotas.get_all_by_project(context, project_id) project_quotas = self.get_project_quotas(context, resources, project_id, remains=True, project_quotas=db_proj_quotas) if user_id: setted_quotas = objects.Quotas.get_all_by_project_and_user( context, project_id, user_id) user_quotas = self.get_user_quotas(context, resources, project_id, user_id, project_quotas=db_proj_quotas, user_quotas=setted_quotas) for key, value in user_quotas.items(): # Maximum is the remaining quota for a project (class/default # minus the sum of all user quotas in the project), plus the # given user's quota. So if the class/default is 20 and there # are two users each with quota of 5, then there is quota of # 10 remaining. The given user currently has quota of 5, so # the maximum you could update their quota to would be 15. # Class/default 20 - currently used in project 10 + current # user 5 = 15. 
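# (The "10" in the example above is the sum of the existing per user limits,
# 5 + 5, as subtracted in _process_quotas() when remains=True, not instance
# usage; so remains = 20 - 10 = 10 and maximum = 10 + 5 = 15.)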
maximum = \ self._sum_quota_values(project_quotas[key]['remains'], setted_quotas.get(key, 0)) # This function is called for the quota_sets api and the # corresponding nova-manage command. The idea is when someone # attempts to update a quota, the value chosen must be at least # as much as the current usage and less than or equal to the # project limit less the sum of existing per user limits. minimum = value['in_use'] settable_quotas[key] = {'minimum': minimum, 'maximum': maximum} else: for key, value in project_quotas.items(): minimum = \ max(int(self._sub_quota_values(value['limit'], value['remains'])), int(value['in_use'])) settable_quotas[key] = {'minimum': minimum, 'maximum': -1} return settable_quotas def _get_quotas(self, context, resources, keys, project_id=None, user_id=None, project_quotas=None): """A helper method which retrieves the quotas for the specific resources identified by keys, and which apply to the current context. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param keys: A list of the desired quotas to retrieve. :param project_id: Specify the project_id if current context is admin and admin wants to impact on common user's tenant. :param user_id: Specify the user_id if current context is admin and admin wants to impact on common user. :param project_quotas: Quotas dictionary for the specified project. """ # Filter resources desired = set(keys) sub_resources = {k: v for k, v in resources.items() if k in desired} # Make sure we accounted for all of them... if len(keys) != len(sub_resources): unknown = desired - set(sub_resources.keys()) raise exception.QuotaResourceUnknown(unknown=sorted(unknown)) if user_id: LOG.debug('Getting quotas for user %(user_id)s and project ' '%(project_id)s. Resources: %(keys)s', {'user_id': user_id, 'project_id': project_id, 'keys': keys}) # Grab and return the quotas (without usages) quotas = self.get_user_quotas(context, sub_resources, project_id, user_id, context.quota_class, usages=False, project_quotas=project_quotas) else: LOG.debug('Getting quotas for project %(project_id)s. Resources: ' '%(keys)s', {'project_id': project_id, 'keys': keys}) # Grab and return the quotas (without usages) quotas = self.get_project_quotas(context, sub_resources, project_id, context.quota_class, usages=False, project_quotas=project_quotas) return {k: v['limit'] for k, v in quotas.items()} def limit_check(self, context, resources, values, project_id=None, user_id=None): """Check simple quota limits. For limits--those quotas for which there is no usage synchronization function--this method checks that a set of proposed values are permitted by the limit restriction. This method will raise a QuotaResourceUnknown exception if a given resource is unknown or if it is not a simple limit resource. If any of the proposed values is over the defined quota, an OverQuota exception will be raised with the sorted list of the resources which are too high. Otherwise, the method returns nothing. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param values: A dictionary of the values to check against the quota. :param project_id: Specify the project_id if current context is admin and admin wants to impact on common user's tenant. :param user_id: Specify the user_id if current context is admin and admin wants to impact on common user. 
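        An illustrative call (resource names and values are examples only)::

            driver.limit_check(context, resources,
                               {'metadata_items': 128, 'injected_files': 5},
                               project_id=project_id)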
""" _valid_method_call_check_resources(values, 'check', resources) # Ensure no value is less than zero unders = [key for key, val in values.items() if val < 0] if unders: raise exception.InvalidQuotaValue(unders=sorted(unders)) # If project_id is None, then we use the project_id in context if project_id is None: project_id = context.project_id # If user id is None, then we use the user_id in context if user_id is None: user_id = context.user_id # Get the applicable quotas project_quotas = objects.Quotas.get_all_by_project(context, project_id) quotas = self._get_quotas(context, resources, values.keys(), project_id=project_id, project_quotas=project_quotas) user_quotas = self._get_quotas(context, resources, values.keys(), project_id=project_id, user_id=user_id, project_quotas=project_quotas) # Check the quotas and construct a list of the resources that # would be put over limit by the desired values overs = [key for key, val in values.items() if quotas[key] >= 0 and quotas[key] < val or (user_quotas[key] >= 0 and user_quotas[key] < val)] if overs: headroom = {} for key in overs: headroom[key] = min( val for val in (quotas.get(key), project_quotas.get(key)) if val is not None ) raise exception.OverQuota(overs=sorted(overs), quotas=quotas, usages={}, headroom=headroom) def limit_check_project_and_user(self, context, resources, project_values=None, user_values=None, project_id=None, user_id=None): """Check values (usage + desired delta) against quota limits. For limits--this method checks that a set of proposed values are permitted by the limit restriction. This method will raise a QuotaResourceUnknown exception if a given resource is unknown or if it is not a simple limit resource. If any of the proposed values is over the defined quota, an OverQuota exception will be raised with the sorted list of the resources which are too high. Otherwise, the method returns nothing. :param context: The request context, for access checks :param resources: A dictionary of the registered resources :param project_values: Optional dict containing the resource values to check against project quota, e.g. {'instances': 1, 'cores': 2, 'memory_mb': 512} :param user_values: Optional dict containing the resource values to check against user quota, e.g. {'instances': 1, 'cores': 2, 'memory_mb': 512} :param project_id: Optional project_id for scoping the limit check to a different project than in the context :param user_id: Optional user_id for scoping the limit check to a different user than in the context """ if project_values is None: project_values = {} if user_values is None: user_values = {} _valid_method_call_check_resources(project_values, 'check', resources) _valid_method_call_check_resources(user_values, 'check', resources) if not any([project_values, user_values]): raise exception.Invalid( 'Must specify at least one of project_values or user_values ' 'for the limit check.') # Ensure no value is less than zero for vals in (project_values, user_values): unders = [key for key, val in vals.items() if val < 0] if unders: raise exception.InvalidQuotaValue(unders=sorted(unders)) # Get a set of all keys for calling _get_quotas() so we get all of the # resource limits we need. all_keys = set(project_values).union(user_values) # Keys that are in both project_values and user_values need to be # checked against project quota and user quota, respectively. # Keys that are not in both only need to be checked against project # quota or user quota, if it is defined. 
Separate the keys that don't # need to be checked against both quotas, merge them into one dict, # and remove them from project_values and user_values. keys_to_merge = set(project_values).symmetric_difference(user_values) merged_values = {} for key in keys_to_merge: # The key will be either in project_values or user_values based on # the earlier symmetric_difference. Default to 0 in case the found # value is 0 and won't take precedence over a None default. merged_values[key] = (project_values.get(key, 0) or user_values.get(key, 0)) project_values.pop(key, None) user_values.pop(key, None) # If project_id is None, then we use the project_id in context if project_id is None: project_id = context.project_id # If user id is None, then we use the user_id in context if user_id is None: user_id = context.user_id # Get the applicable quotas. They will be merged together (taking the # min limit) if project_values and user_values were not specified # together. # per project quota limits (quotas that have no concept of # user-scoping: ) project_quotas = objects.Quotas.get_all_by_project(context, project_id) # per user quotas, project quota limits (for quotas that have # user-scoping, limits for the project) quotas = self._get_quotas(context, resources, all_keys, project_id=project_id, project_quotas=project_quotas) # per user quotas, user quota limits (for quotas that have # user-scoping, the limits for the user) user_quotas = self._get_quotas(context, resources, all_keys, project_id=project_id, user_id=user_id, project_quotas=project_quotas) if merged_values: # This is for resources that are not counted across a project and # must pass both the quota for the project and the quota for the # user. # Combine per user project quotas and user_quotas for use in the # checks, taking the minimum limit between the two. merged_quotas = copy.deepcopy(quotas) for k, v in user_quotas.items(): if k in merged_quotas: merged_quotas[k] = min(merged_quotas[k], v) else: merged_quotas[k] = v # Check the quotas and construct a list of the resources that # would be put over limit by the desired values overs = [key for key, val in merged_values.items() if merged_quotas[key] >= 0 and merged_quotas[key] < val] if overs: headroom = {} for key in overs: headroom[key] = merged_quotas[key] raise exception.OverQuota(overs=sorted(overs), quotas=merged_quotas, usages={}, headroom=headroom) # This is for resources that are counted across a project and # across a user (instances, cores, ram, server_groups). The # project_values must pass the quota for the project and the # user_values must pass the quota for the user. over_user_quota = False overs = [] for key in user_values.keys(): # project_values and user_values should contain the same keys or # be empty after the keys in the symmetric_difference were removed # from both dicts. if quotas[key] >= 0 and quotas[key] < project_values[key]: overs.append(key) elif (user_quotas[key] >= 0 and user_quotas[key] < user_values[key]): overs.append(key) over_user_quota = True if overs: quotas_exceeded = user_quotas if over_user_quota else quotas headroom = {} for key in overs: headroom[key] = quotas_exceeded[key] raise exception.OverQuota(overs=sorted(overs), quotas=quotas_exceeded, usages={}, headroom=headroom) class NoopQuotaDriver(object): """Driver that turns quotas calls into no-ops and pretends that quotas for all resources are unlimited. This can be used if you do not wish to have any quota checking. 
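    The driver in use is chosen by the ``[quota] driver`` configuration
    option; pointing it at ``nova.quota.NoopQuotaDriver`` selects this class
    (see ``QuotaEngine._driver`` below, which instantiates whatever
    ``CONF.quota.driver`` names).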
""" def get_defaults(self, context, resources): """Given a list of resources, retrieve the default quotas. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. """ quotas = {} for resource in resources.values(): quotas[resource.name] = -1 return quotas def get_class_quotas(self, context, resources, quota_class): """Given a list of resources, retrieve the quotas for the given quota class. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param quota_class: The name of the quota class to return quotas for. """ quotas = {} for resource in resources.values(): quotas[resource.name] = -1 return quotas def _get_noop_quotas(self, resources, usages=None, remains=False): quotas = {} for resource in resources.values(): quotas[resource.name] = {} quotas[resource.name]['limit'] = -1 if usages: quotas[resource.name]['in_use'] = -1 if remains: quotas[resource.name]['remains'] = -1 return quotas def get_user_quotas(self, context, resources, project_id, user_id, quota_class=None, usages=True): """Given a list of resources, retrieve the quotas for the given user and project. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param project_id: The ID of the project to return quotas for. :param user_id: The ID of the user to return quotas for. :param quota_class: If project_id != context.project_id, the quota class cannot be determined. This parameter allows it to be specified. It will be ignored if project_id == context.project_id. :param usages: If True, the current counts will also be returned. """ return self._get_noop_quotas(resources, usages=usages) def get_project_quotas(self, context, resources, project_id, quota_class=None, usages=True, remains=False): """Given a list of resources, retrieve the quotas for the given project. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param project_id: The ID of the project to return quotas for. :param quota_class: If project_id != context.project_id, the quota class cannot be determined. This parameter allows it to be specified. It will be ignored if project_id == context.project_id. :param usages: If True, the current counts will also be returned. :param remains: If True, the current remains of the project will will be returned. """ return self._get_noop_quotas(resources, usages=usages, remains=remains) def get_settable_quotas(self, context, resources, project_id, user_id=None): """Given a list of resources, retrieve the range of settable quotas for the given user or project. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param project_id: The ID of the project to return quotas for. :param user_id: The ID of the user to return quotas for. """ quotas = {} for resource in resources.values(): quotas[resource.name] = {'minimum': 0, 'maximum': -1} return quotas def limit_check(self, context, resources, values, project_id=None, user_id=None): """Check simple quota limits. For limits--those quotas for which there is no usage synchronization function--this method checks that a set of proposed values are permitted by the limit restriction. This method will raise a QuotaResourceUnknown exception if a given resource is unknown or if it is not a simple limit resource. 
If any of the proposed values is over the defined quota, an OverQuota exception will be raised with the sorted list of the resources which are too high. Otherwise, the method returns nothing. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param values: A dictionary of the values to check against the quota. :param project_id: Specify the project_id if current context is admin and admin wants to impact on common user's tenant. :param user_id: Specify the user_id if current context is admin and admin wants to impact on common user. """ pass def limit_check_project_and_user(self, context, resources, project_values=None, user_values=None, project_id=None, user_id=None): """Check values against quota limits. For limits--this method checks that a set of proposed values are permitted by the limit restriction. This method will raise a QuotaResourceUnknown exception if a given resource is unknown or if it is not a simple limit resource. If any of the proposed values is over the defined quota, an OverQuota exception will be raised with the sorted list of the resources which are too high. Otherwise, the method returns nothing. :param context: The request context, for access checks :param resources: A dictionary of the registered resources :param project_values: Optional dict containing the resource values to check against project quota, e.g. {'instances': 1, 'cores': 2, 'memory_mb': 512} :param user_values: Optional dict containing the resource values to check against user quota, e.g. {'instances': 1, 'cores': 2, 'memory_mb': 512} :param project_id: Optional project_id for scoping the limit check to a different project than in the context :param user_id: Optional user_id for scoping the limit check to a different user than in the context """ pass class BaseResource(object): """Describe a single resource for quota checking.""" def __init__(self, name, flag=None): """Initializes a Resource. :param name: The name of the resource, i.e., "instances". :param flag: The name of the flag or configuration option which specifies the default value of the quota for this resource. """ self.name = name self.flag = flag @property def default(self): """Return the default value of the quota.""" return CONF.quota[self.flag] if self.flag else -1 class AbsoluteResource(BaseResource): """Describe a resource that does not correspond to database objects.""" valid_method = 'check' class CountableResource(AbsoluteResource): """Describe a resource where the counts aren't based solely on the project ID. """ def __init__(self, name, count_as_dict, flag=None): """Initializes a CountableResource. Countable resources are those resources which directly correspond to objects in the database, but for which a count by project ID is inappropriate e.g. keypairs A CountableResource must be constructed with a counting function, which will be called to determine the current counts of the resource. The counting function will be passed the context, along with the extra positional and keyword arguments that are passed to Quota.count_as_dict(). It should return a dict specifying the count scoped to a project and/or a user. Example count of instances, cores, or ram returned as a rollup of all the resources since we only want to query the instances table once, not multiple times, for each resource. 
Instances, cores, and ram are counted across a project and across a user: {'project': {'instances': 5, 'cores': 8, 'ram': 4096}, 'user': {'instances': 1, 'cores': 2, 'ram': 512}} Example count of server groups keeping a consistent format. Server groups are counted across a project and across a user: {'project': {'server_groups': 7}, 'user': {'server_groups': 2}} Example count of key pairs keeping a consistent format. Key pairs are counted across a user only: {'user': {'key_pairs': 5}} Note that this counting is not performed in a transaction-safe manner. This resource class is a temporary measure to provide required functionality, until a better approach to solving this problem can be evolved. :param name: The name of the resource, i.e., "instances". :param count_as_dict: A callable which returns the count of the resource as a dict. The arguments passed are as described above. :param flag: The name of the flag or configuration option which specifies the default value of the quota for this resource. """ super(CountableResource, self).__init__(name, flag=flag) self.count_as_dict = count_as_dict class QuotaEngine(object): """Represent the set of recognized quotas.""" def __init__(self, quota_driver=None, resources=None): """Initialize a Quota object. :param quota_driver: a QuotaDriver object (only used in testing. if None (default), instantiates a driver from the CONF.quota.driver option) :param resources: iterable of Resource objects """ resources = resources or [] self._resources = { resource.name: resource for resource in resources } # NOTE(mriedem): quota_driver is ever only supplied in tests with a # fake driver. self.__driver = quota_driver @property def _driver(self): if self.__driver: return self.__driver self.__driver = importutils.import_object(CONF.quota.driver) return self.__driver def get_defaults(self, context): """Retrieve the default quotas. :param context: The request context, for access checks. """ return self._driver.get_defaults(context, self._resources) def get_class_quotas(self, context, quota_class): """Retrieve the quotas for the given quota class. :param context: The request context, for access checks. :param quota_class: The name of the quota class to return quotas for. """ return self._driver.get_class_quotas(context, self._resources, quota_class) def get_user_quotas(self, context, project_id, user_id, quota_class=None, usages=True): """Retrieve the quotas for the given user and project. :param context: The request context, for access checks. :param project_id: The ID of the project to return quotas for. :param user_id: The ID of the user to return quotas for. :param quota_class: If project_id != context.project_id, the quota class cannot be determined. This parameter allows it to be specified. :param usages: If True, the current counts will also be returned. """ return self._driver.get_user_quotas(context, self._resources, project_id, user_id, quota_class=quota_class, usages=usages) def get_project_quotas(self, context, project_id, quota_class=None, usages=True, remains=False): """Retrieve the quotas for the given project. :param context: The request context, for access checks. :param project_id: The ID of the project to return quotas for. :param quota_class: If project_id != context.project_id, the quota class cannot be determined. This parameter allows it to be specified. :param usages: If True, the current counts will also be returned. :param remains: If True, the current remains of the project will will be returned. 
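        For illustration (resource names and numbers are examples only), with
        ``usages=True`` and ``remains=True`` the returned mapping is shaped
        like::

            {'instances': {'limit': 10, 'in_use': 2, 'remains': 10},
             'cores': {'limit': 20, 'in_use': 4, 'remains': 20}}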
""" return self._driver.get_project_quotas(context, self._resources, project_id, quota_class=quota_class, usages=usages, remains=remains) def get_settable_quotas(self, context, project_id, user_id=None): """Given a list of resources, retrieve the range of settable quotas for the given user or project. :param context: The request context, for access checks. :param project_id: The ID of the project to return quotas for. :param user_id: The ID of the user to return quotas for. """ return self._driver.get_settable_quotas(context, self._resources, project_id, user_id=user_id) def count_as_dict(self, context, resource, *args, **kwargs): """Count a resource and return a dict. For countable resources, invokes the count_as_dict() function and returns its result. Arguments following the context and resource are passed directly to the count function declared by the resource. :param context: The request context, for access checks. :param resource: The name of the resource, as a string. :returns: A dict containing the count(s) for the resource, for example: {'project': {'instances': 2, 'cores': 4, 'ram': 1024}, 'user': {'instances': 1, 'cores': 2, 'ram': 512}} another example: {'user': {'key_pairs': 5}} """ # Get the resource res = self._resources.get(resource) if not res or not hasattr(res, 'count_as_dict'): raise exception.QuotaResourceUnknown(unknown=[resource]) return res.count_as_dict(context, *args, **kwargs) # TODO(melwitt): This can be removed once no old code can call # limit_check(). It will be replaced with limit_check_project_and_user(). def limit_check(self, context, project_id=None, user_id=None, **values): """Check simple quota limits. For limits--those quotas for which there is no usage synchronization function--this method checks that a set of proposed values are permitted by the limit restriction. The values to check are given as keyword arguments, where the key identifies the specific quota limit to check, and the value is the proposed value. This method will raise a QuotaResourceUnknown exception if a given resource is unknown or if it is not a simple limit resource. If any of the proposed values is over the defined quota, an OverQuota exception will be raised with the sorted list of the resources which are too high. Otherwise, the method returns nothing. :param context: The request context, for access checks. :param project_id: Specify the project_id if current context is admin and admin wants to impact on common user's tenant. :param user_id: Specify the user_id if current context is admin and admin wants to impact on common user. """ return self._driver.limit_check(context, self._resources, values, project_id=project_id, user_id=user_id) def limit_check_project_and_user(self, context, project_values=None, user_values=None, project_id=None, user_id=None): """Check values against quota limits. For limits--this method checks that a set of proposed values are permitted by the limit restriction. This method will raise a QuotaResourceUnknown exception if a given resource is unknown or if it is not a simple limit resource. If any of the proposed values is over the defined quota, an OverQuota exception will be raised with the sorted list of the resources which are too high. Otherwise, the method returns nothing. :param context: The request context, for access checks :param project_values: Optional dict containing the resource values to check against project quota, e.g. 
{'instances': 1, 'cores': 2, 'memory_mb': 512} :param user_values: Optional dict containing the resource values to check against user quota, e.g. {'instances': 1, 'cores': 2, 'memory_mb': 512} :param project_id: Optional project_id for scoping the limit check to a different project than in the context :param user_id: Optional user_id for scoping the limit check to a different user than in the context """ return self._driver.limit_check_project_and_user( context, self._resources, project_values=project_values, user_values=user_values, project_id=project_id, user_id=user_id) @property def resources(self): return sorted(self._resources.keys()) def get_reserved(self): if isinstance(self._driver, NoopQuotaDriver): return -1 return 0 @db_api.api_context_manager.reader def _user_id_queued_for_delete_populated(context, project_id=None): """Determine whether user_id and queued_for_delete are set. This will be used to determine whether we need to fall back on the legacy quota counting method (if we cannot rely on counting instance mappings for the instance count). If any records with user_id=None and queued_for_delete=False are found, we need to fall back to the legacy counting method. If any records with queued_for_delete=None are found, we need to fall back to the legacy counting method. Note that this check specifies queued_for_deleted=False, which excludes deleted and SOFT_DELETED instances. The 'populate_user_id' data migration migrates SOFT_DELETED instances because they could be restored at any time in the future. However, for this quota-check-time method, it is acceptable to ignore SOFT_DELETED instances, since we just want to know if it is safe to use instance mappings to count instances at this point in time (and SOFT_DELETED instances do not count against quota limits). We also want to fall back to the legacy counting method if we detect any records that have not yet populated the queued_for_delete field. We do this instead of counting queued_for_delete=None records since that might not accurately reflect the project or project user's quota usage. :param project_id: The project to check :returns: True if user_id is set for all non-deleted instances and queued_for_delete is set for all instances, else False """ user_id_not_populated = and_( api_models.InstanceMapping.user_id == null(), api_models.InstanceMapping.queued_for_delete == false()) # If either queued_for_delete or user_id are unmigrated, we will return # False. unmigrated_filter = or_( api_models.InstanceMapping.queued_for_delete == null(), user_id_not_populated) query = context.session.query(api_models.InstanceMapping).filter( unmigrated_filter) if project_id: query = query.filter_by(project_id=project_id) return not context.session.query(query.exists()).scalar() def _keypair_get_count_by_user(context, user_id): count = objects.KeyPairList.get_count_by_user(context, user_id) return {'user': {'key_pairs': count}} def _server_group_count_members_by_user_legacy(context, group, user_id): # NOTE(melwitt): This is mostly duplicated from # InstanceGroup.count_members_by_user() to query across multiple cells. # We need to be able to pass the correct cell context to # InstanceList.get_by_filters(). # NOTE(melwitt): Counting across cells for instances means we will miss # counting resources if a cell is down. 
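    # A rough sketch of the per-cell fan-out performed below, assuming the
    # objects/context helpers already used in this module; the helper name
    # `_example_count_group_members` is hypothetical and not part of Nova:
    #
    #   def _example_count_group_members(context, group, user_id):
    #       filters = {'deleted': False, 'user_id': user_id,
    #                  'uuid': group.members}
    #       total = 0
    #       for cm in objects.CellMappingList.get_all(context):
    #           with nova_context.target_cell(context, cm) as cctxt:
    #               total += len(objects.InstanceList.get_by_filters(
    #                   cctxt, filters, expected_attrs=[]))
    #       return total
    #
    # The real implementation below additionally spawns a greenthread per
    # cell and de-duplicates against build requests for group members that
    # have not yet landed in a cell.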
cell_mappings = objects.CellMappingList.get_all(context) greenthreads = [] filters = {'deleted': False, 'user_id': user_id, 'uuid': group.members} for cell_mapping in cell_mappings: with nova_context.target_cell(context, cell_mapping) as cctxt: greenthreads.append(utils.spawn( objects.InstanceList.get_by_filters, cctxt, filters, expected_attrs=[])) instances = objects.InstanceList(objects=[]) for greenthread in greenthreads: found = greenthread.wait() instances = instances + found # Count build requests using the same filters to catch group members # that are not yet created in a cell. # NOTE(mriedem): BuildRequestList.get_by_filters is not very efficient for # what we need and we can optimize this with a new query method. build_requests = objects.BuildRequestList.get_by_filters(context, filters) # Ignore any duplicates since build requests and instances can co-exist # for a short window of time after the instance is created in a cell but # before the build request is deleted. instance_uuids = [inst.uuid for inst in instances] count = len(instances) for build_request in build_requests: if build_request.instance_uuid not in instance_uuids: count += 1 return {'user': {'server_group_members': count}} def _server_group_count_members_by_user(context, group, user_id): """Get the count of server group members for a group by user. :param context: The request context for database access :param group: The InstanceGroup object with members to count :param user_id: The user_id to count across :returns: A dict containing the user-scoped count. For example: {'user': {'server_group_members': <count across user>}} """ # Because server group members quota counting is not scoped to a project, # but scoped to a particular InstanceGroup and user, we have no reasonable # way of pruning down our migration check to only a subset of all instance # mapping records. # So, we check whether user_id/queued_for_delete is populated for all # records and cache the result to prevent unnecessary checking once the # data migration has been completed. global UID_QFD_POPULATED_CACHE_ALL if not UID_QFD_POPULATED_CACHE_ALL: LOG.debug('Checking whether user_id and queued_for_delete are ' 'populated for all projects') UID_QFD_POPULATED_CACHE_ALL = _user_id_queued_for_delete_populated( context) if UID_QFD_POPULATED_CACHE_ALL: count = objects.InstanceMappingList.get_count_by_uuids_and_user( context, group.members, user_id) return {'user': {'server_group_members': count}} LOG.warning('Falling back to legacy quota counting method for server ' 'group members') return _server_group_count_members_by_user_legacy(context, group, user_id) def _instances_cores_ram_count_legacy(context, project_id, user_id=None): """Get the counts of instances, cores, and ram in cell databases. :param context: The request context for database access :param project_id: The project_id to count across :param user_id: The user_id to count across :returns: A dict containing the project-scoped counts and user-scoped counts if user_id is specified. For example: {'project': {'instances': <count across project>, 'cores': <count across project>, 'ram': <count across project>}, 'user': {'instances': <count across user>, 'cores': <count across user>, 'ram': <count across user>}} """ # NOTE(melwitt): Counting across cells for instances, cores, and ram means # we will miss counting resources if a cell is down. # NOTE(tssurya): We only go into those cells in which the tenant has # instances. We could optimize this to avoid the CellMappingList query # for single-cell deployments by checking the cell cache and only doing # this filtering if there is more than one non-cell0 cell.
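    # Shape of the per-cell results gathered below (values illustrative
    # only; cells that time out or error are represented by failure
    # sentinels and skipped):
    #
    #   {cell1_uuid: {'project': {'instances': 3, 'cores': 6, 'ram': 3072},
    #                 'user': {'instances': 1, 'cores': 2, 'ram': 1024}},
    #    cell2_uuid: {'project': {'instances': 2, 'cores': 2, 'ram': 1024},
    #                 'user': {'instances': 0, 'cores': 0, 'ram': 0}}}
    #
    # The totals returned to the caller are the element-wise sums of these
    # per-cell dicts.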
# TODO(tssurya): Consider adding a scatter_gather_cells_for_project # variant that makes this native to nova.context. if CONF.api.instance_list_per_project_cells: cell_mappings = objects.CellMappingList.get_by_project_id( context, project_id) else: nova_context.load_cells() cell_mappings = nova_context.CELLS results = nova_context.scatter_gather_cells( context, cell_mappings, nova_context.CELL_TIMEOUT, objects.InstanceList.get_counts, project_id, user_id=user_id) total_counts = {'project': {'instances': 0, 'cores': 0, 'ram': 0}} if user_id: total_counts['user'] = {'instances': 0, 'cores': 0, 'ram': 0} for result in results.values(): if not nova_context.is_cell_failure_sentinel(result): for resource, count in result['project'].items(): total_counts['project'][resource] += count if user_id: for resource, count in result['user'].items(): total_counts['user'][resource] += count return total_counts def _cores_ram_count_placement(context, project_id, user_id=None): global PLACEMENT_CLIENT if not PLACEMENT_CLIENT: PLACEMENT_CLIENT = report.SchedulerReportClient() return PLACEMENT_CLIENT.get_usages_counts_for_quota(context, project_id, user_id=user_id) def _instances_cores_ram_count_api_db_placement(context, project_id, user_id=None): # Will return a dict with format: {'project': {'instances': M}, # 'user': {'instances': N}} # where the 'user' key is optional. total_counts = objects.InstanceMappingList.get_counts(context, project_id, user_id=user_id) cores_ram_counts = _cores_ram_count_placement(context, project_id, user_id=user_id) total_counts['project'].update(cores_ram_counts['project']) if 'user' in total_counts: total_counts['user'].update(cores_ram_counts['user']) return total_counts def _instances_cores_ram_count(context, project_id, user_id=None): """Get the counts of instances, cores, and ram. :param context: The request context for database access :param project_id: The project_id to count across :param user_id: The user_id to count across :returns: A dict containing the project-scoped counts and user-scoped counts if user_id is specified. For example: {'project': {'instances': , 'cores': , 'ram': }, 'user': {'instances': , 'cores': , 'ram': }} """ global UID_QFD_POPULATED_CACHE_BY_PROJECT if CONF.quota.count_usage_from_placement: # If a project has all user_id and queued_for_delete data populated, # cache the result to avoid needless database checking in the future. if (not UID_QFD_POPULATED_CACHE_ALL and project_id not in UID_QFD_POPULATED_CACHE_BY_PROJECT): LOG.debug('Checking whether user_id and queued_for_delete are ' 'populated for project_id %s', project_id) uid_qfd_populated = _user_id_queued_for_delete_populated( context, project_id) if uid_qfd_populated: UID_QFD_POPULATED_CACHE_BY_PROJECT.add(project_id) else: uid_qfd_populated = True if uid_qfd_populated: return _instances_cores_ram_count_api_db_placement(context, project_id, user_id=user_id) LOG.warning('Falling back to legacy quota counting method for ' 'instances, cores, and ram') return _instances_cores_ram_count_legacy(context, project_id, user_id=user_id) def _server_group_count(context, project_id, user_id=None): """Get the counts of server groups in the database. :param context: The request context for database access :param project_id: The project_id to count across :param user_id: The user_id to count across :returns: A dict containing the project-scoped counts and user-scoped counts if user_id is specified. 
For example: {'project': {'server_groups': }, 'user': {'server_groups': }} """ return objects.InstanceGroupList.get_counts(context, project_id, user_id=user_id) QUOTAS = QuotaEngine( resources=[ CountableResource( 'instances', _instances_cores_ram_count, 'instances'), CountableResource( 'cores', _instances_cores_ram_count, 'cores'), CountableResource( 'ram', _instances_cores_ram_count, 'ram'), AbsoluteResource( 'metadata_items', 'metadata_items'), AbsoluteResource( 'injected_files', 'injected_files'), AbsoluteResource( 'injected_file_content_bytes', 'injected_file_content_bytes'), AbsoluteResource( 'injected_file_path_bytes', 'injected_file_path_length'), CountableResource( 'key_pairs', _keypair_get_count_by_user, 'key_pairs'), CountableResource( 'server_groups', _server_group_count, 'server_groups'), CountableResource( 'server_group_members', _server_group_count_members_by_user, 'server_group_members'), # Deprecated nova-network quotas, retained to avoid changing API # responses AbsoluteResource('fixed_ips'), AbsoluteResource('floating_ips'), AbsoluteResource('security_groups'), AbsoluteResource('security_group_rules'), ], ) def _valid_method_call_check_resource(name, method, resources): if name not in resources: raise exception.InvalidQuotaMethodUsage(method=method, res=name) res = resources[name] if res.valid_method != method: raise exception.InvalidQuotaMethodUsage(method=method, res=name) def _valid_method_call_check_resources(resource_values, method, resources): """A method to check whether the resource can use the quota method. :param resource_values: Dict containing the resource names and values :param method: The quota method to check :param resources: Dict containing Resource objects to validate against """ for name in resource_values.keys(): _valid_method_call_check_resource(name, method, resources) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/rpc.py0000664000175000017500000004010400000000000014666 0ustar00zuulzuul00000000000000# Copyright 2013 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
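# A minimal usage sketch for this module, assuming a configured transport
# and an oslo.messaging Target; the topic and version values below are
# illustrative only:
#
#   import nova.conf
#   import oslo_messaging as messaging
#   from nova import rpc
#
#   rpc.init(nova.conf.CONF)
#   target = messaging.Target(topic='compute', version='5.0')
#   client = rpc.get_client(target, version_cap='5.0')
#   notifier = rpc.get_notifier('compute', host='compute-1')
#
# get_client()/get_server() wrap oslo.messaging with the request-context
# serializers defined below, and get_notifier() returns the legacy
# (unversioned) notifier wrapper.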
import functools from oslo_log import log as logging import oslo_messaging as messaging from oslo_messaging.rpc import dispatcher from oslo_serialization import jsonutils from oslo_service import periodic_task from oslo_utils import importutils import six import nova.conf import nova.context import nova.exception from nova.i18n import _ __all__ = [ 'init', 'cleanup', 'set_defaults', 'add_extra_exmods', 'clear_extra_exmods', 'get_allowed_exmods', 'RequestContextSerializer', 'get_client', 'get_server', 'get_notifier', ] profiler = importutils.try_import("osprofiler.profiler") CONF = nova.conf.CONF LOG = logging.getLogger(__name__) # TODO(stephenfin): These should be private TRANSPORT = None LEGACY_NOTIFIER = None NOTIFICATION_TRANSPORT = None NOTIFIER = None # NOTE(danms): If rpc_response_timeout is over this value (per-call or # globally), we will enable heartbeating HEARTBEAT_THRESHOLD = 60 ALLOWED_EXMODS = [ nova.exception.__name__, ] EXTRA_EXMODS = [] def init(conf): global TRANSPORT, NOTIFICATION_TRANSPORT, LEGACY_NOTIFIER, NOTIFIER exmods = get_allowed_exmods() TRANSPORT = create_transport(get_transport_url()) NOTIFICATION_TRANSPORT = messaging.get_notification_transport( conf, allowed_remote_exmods=exmods) serializer = RequestContextSerializer(JsonPayloadSerializer()) if conf.notifications.notification_format == 'unversioned': LEGACY_NOTIFIER = messaging.Notifier(NOTIFICATION_TRANSPORT, serializer=serializer) NOTIFIER = messaging.Notifier(NOTIFICATION_TRANSPORT, serializer=serializer, driver='noop') elif conf.notifications.notification_format == 'both': LEGACY_NOTIFIER = messaging.Notifier(NOTIFICATION_TRANSPORT, serializer=serializer) NOTIFIER = messaging.Notifier( NOTIFICATION_TRANSPORT, serializer=serializer, topics=conf.notifications.versioned_notifications_topics) else: LEGACY_NOTIFIER = messaging.Notifier(NOTIFICATION_TRANSPORT, serializer=serializer, driver='noop') NOTIFIER = messaging.Notifier( NOTIFICATION_TRANSPORT, serializer=serializer, topics=conf.notifications.versioned_notifications_topics) def cleanup(): global TRANSPORT, NOTIFICATION_TRANSPORT, LEGACY_NOTIFIER, NOTIFIER assert TRANSPORT is not None assert NOTIFICATION_TRANSPORT is not None assert LEGACY_NOTIFIER is not None assert NOTIFIER is not None TRANSPORT.cleanup() NOTIFICATION_TRANSPORT.cleanup() TRANSPORT = NOTIFICATION_TRANSPORT = LEGACY_NOTIFIER = NOTIFIER = None def set_defaults(control_exchange): messaging.set_transport_defaults(control_exchange) def add_extra_exmods(*args): EXTRA_EXMODS.extend(args) def clear_extra_exmods(): del EXTRA_EXMODS[:] def get_allowed_exmods(): return ALLOWED_EXMODS + EXTRA_EXMODS class JsonPayloadSerializer(messaging.NoOpSerializer): @staticmethod def fallback(obj): """Serializer fallback This method is used to serialize an object which jsonutils.to_primitive does not otherwise know how to handle. This is mostly only needed in tests because of the use of the nova CheatingSerializer fixture which keeps some non-serializable fields on the RequestContext, like db_connection. """ if isinstance(obj, nova.context.RequestContext): # This matches RequestContextSerializer.serialize_context(). return obj.to_dict() # The default fallback in jsonutils.to_primitive() is six.text_type. 
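        # Illustrative effect (hypothetical entity): passing a dict such as
        # {'ctxt': <RequestContext>, 'when': <datetime>} to
        # serialize_entity() yields JSON-friendly primitives, with the
        # RequestContext handled above via to_dict() and anything
        # to_primitive() cannot otherwise convert reduced to text here.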
return six.text_type(obj) def serialize_entity(self, context, entity): return jsonutils.to_primitive(entity, convert_instances=True, fallback=self.fallback) class RequestContextSerializer(messaging.Serializer): def __init__(self, base): self._base = base def serialize_entity(self, context, entity): if not self._base: return entity return self._base.serialize_entity(context, entity) def deserialize_entity(self, context, entity): if not self._base: return entity return self._base.deserialize_entity(context, entity) def serialize_context(self, context): return context.to_dict() def deserialize_context(self, context): return nova.context.RequestContext.from_dict(context) class ProfilerRequestContextSerializer(RequestContextSerializer): def serialize_context(self, context): _context = super(ProfilerRequestContextSerializer, self).serialize_context(context) prof = profiler.get() if prof: # FIXME(DinaBelova): we'll add profiler.get_info() method # to extract this info -> we'll need to update these lines trace_info = { "hmac_key": prof.hmac_key, "base_id": prof.get_base_id(), "parent_id": prof.get_id() } _context.update({"trace_info": trace_info}) return _context def deserialize_context(self, context): trace_info = context.pop("trace_info", None) if trace_info: profiler.init(**trace_info) return super(ProfilerRequestContextSerializer, self).deserialize_context(context) def get_transport_url(url_str=None): return messaging.TransportURL.parse(CONF, url_str) def get_client(target, version_cap=None, serializer=None, call_monitor_timeout=None): assert TRANSPORT is not None if profiler: serializer = ProfilerRequestContextSerializer(serializer) else: serializer = RequestContextSerializer(serializer) return messaging.RPCClient(TRANSPORT, target, version_cap=version_cap, serializer=serializer, call_monitor_timeout=call_monitor_timeout) def get_server(target, endpoints, serializer=None): assert TRANSPORT is not None if profiler: serializer = ProfilerRequestContextSerializer(serializer) else: serializer = RequestContextSerializer(serializer) access_policy = dispatcher.DefaultRPCAccessPolicy return messaging.get_rpc_server(TRANSPORT, target, endpoints, executor='eventlet', serializer=serializer, access_policy=access_policy) def get_notifier(service, host=None, publisher_id=None): assert LEGACY_NOTIFIER is not None if not publisher_id: publisher_id = "%s.%s" % (service, host or CONF.host) return LegacyValidatingNotifier( LEGACY_NOTIFIER.prepare(publisher_id=publisher_id)) def get_versioned_notifier(publisher_id): assert NOTIFIER is not None return NOTIFIER.prepare(publisher_id=publisher_id) def if_notifications_enabled(f): """Calls decorated method only if versioned notifications are enabled.""" @functools.wraps(f) def wrapped(*args, **kwargs): if (NOTIFIER.is_enabled() and CONF.notifications.notification_format in ('both', 'versioned')): return f(*args, **kwargs) else: return None return wrapped def create_transport(url): exmods = get_allowed_exmods() return messaging.get_rpc_transport(CONF, url=url, allowed_remote_exmods=exmods) class LegacyValidatingNotifier(object): """Wraps an oslo.messaging Notifier and checks for allowed event_types.""" # If true an exception is thrown if the event_type is not allowed, if false # then only a WARNING is logged fatal = False # This list contains the already existing therefore allowed legacy # notification event_types. New items shall not be added to the list as # Nova does not allow new legacy notifications any more. 
This list will be # removed when all the notification is transformed to versioned # notifications. allowed_legacy_notification_event_types = [ 'aggregate.addhost.end', 'aggregate.addhost.start', 'aggregate.create.end', 'aggregate.create.start', 'aggregate.delete.end', 'aggregate.delete.start', 'aggregate.removehost.end', 'aggregate.removehost.start', 'aggregate.updatemetadata.end', 'aggregate.updatemetadata.start', 'aggregate.updateprop.end', 'aggregate.updateprop.start', 'compute.instance.create.end', 'compute.instance.create.error', 'compute.instance.create_ip.end', 'compute.instance.create_ip.start', 'compute.instance.create.start', 'compute.instance.delete.end', 'compute.instance.delete_ip.end', 'compute.instance.delete_ip.start', 'compute.instance.delete.start', 'compute.instance.evacuate', 'compute.instance.exists', 'compute.instance.finish_resize.end', 'compute.instance.finish_resize.start', 'compute.instance.live.migration.abort.start', 'compute.instance.live.migration.abort.end', 'compute.instance.live.migration.force.complete.start', 'compute.instance.live.migration.force.complete.end', 'compute.instance.live_migration.post.dest.end', 'compute.instance.live_migration.post.dest.start', 'compute.instance.live_migration._post.end', 'compute.instance.live_migration._post.start', 'compute.instance.live_migration.pre.end', 'compute.instance.live_migration.pre.start', 'compute.instance.live_migration.rollback.dest.end', 'compute.instance.live_migration.rollback.dest.start', 'compute.instance.live_migration._rollback.end', 'compute.instance.live_migration._rollback.start', 'compute.instance.pause.end', 'compute.instance.pause.start', 'compute.instance.power_off.end', 'compute.instance.power_off.start', 'compute.instance.power_on.end', 'compute.instance.power_on.start', 'compute.instance.reboot.end', 'compute.instance.reboot.error', 'compute.instance.reboot.start', 'compute.instance.rebuild.end', 'compute.instance.rebuild.error', 'compute.instance.rebuild.scheduled', 'compute.instance.rebuild.start', 'compute.instance.rescue.end', 'compute.instance.rescue.start', 'compute.instance.resize.confirm.end', 'compute.instance.resize.confirm.start', 'compute.instance.resize.end', 'compute.instance.resize.error', 'compute.instance.resize.prep.end', 'compute.instance.resize.prep.start', 'compute.instance.resize.revert.end', 'compute.instance.resize.revert.start', 'compute.instance.resize.start', 'compute.instance.restore.end', 'compute.instance.restore.start', 'compute.instance.resume.end', 'compute.instance.resume.start', 'compute.instance.shelve.end', 'compute.instance.shelve_offload.end', 'compute.instance.shelve_offload.start', 'compute.instance.shelve.start', 'compute.instance.shutdown.end', 'compute.instance.shutdown.start', 'compute.instance.snapshot.end', 'compute.instance.snapshot.start', 'compute.instance.soft_delete.end', 'compute.instance.soft_delete.start', 'compute.instance.suspend.end', 'compute.instance.suspend.start', 'compute.instance.trigger_crash_dump.end', 'compute.instance.trigger_crash_dump.start', 'compute.instance.unpause.end', 'compute.instance.unpause.start', 'compute.instance.unrescue.end', 'compute.instance.unrescue.start', 'compute.instance.unshelve.start', 'compute.instance.unshelve.end', 'compute.instance.update', 'compute.instance.volume.attach', 'compute.instance.volume.detach', 'compute.libvirt.error', 'compute.metrics.update', 'compute_task.build_instances', 'compute_task.migrate_server', 'compute_task.rebuild_server', 'HostAPI.power_action.end', 
'HostAPI.power_action.start', 'HostAPI.set_enabled.end', 'HostAPI.set_enabled.start', 'HostAPI.set_maintenance.end', 'HostAPI.set_maintenance.start', 'keypair.create.start', 'keypair.create.end', 'keypair.delete.start', 'keypair.delete.end', 'keypair.import.start', 'keypair.import.end', 'network.floating_ip.allocate', 'network.floating_ip.associate', 'network.floating_ip.deallocate', 'network.floating_ip.disassociate', 'scheduler.select_destinations.end', 'scheduler.select_destinations.start', 'servergroup.addmember', 'servergroup.create', 'servergroup.delete', 'volume.usage', ] message = _('%(event_type)s is not a versioned notification and not ' 'whitelisted. See ./doc/source/reference/notifications.rst') def __init__(self, notifier): self.notifier = notifier for priority in ['debug', 'info', 'warn', 'error', 'critical']: setattr(self, priority, functools.partial(self._notify, priority)) def _is_wrap_exception_notification(self, payload): # nova.exception_wrapper.wrap_exception decorator emits notification # where the event_type is the name of the decorated function. This # is used in many places but it will be converted to versioned # notification in one run by updating the decorator so it is pointless # to white list all the function names here we white list the # notification itself detected by the special payload keys. return {'exception', 'args'} == set(payload.keys()) def _notify(self, priority, ctxt, event_type, payload): if (event_type not in self.allowed_legacy_notification_event_types and not self._is_wrap_exception_notification(payload)): if self.fatal: raise AssertionError(self.message % {'event_type': event_type}) else: LOG.warning(self.message, {'event_type': event_type}) getattr(self.notifier, priority)(ctxt, event_type, payload) class ClientRouter(periodic_task.PeriodicTasks): """Creates RPC clients that honor the context's RPC transport or provides a default. """ def __init__(self, default_client): super(ClientRouter, self).__init__(CONF) self.default_client = default_client self.target = default_client.target self.version_cap = default_client.version_cap self.serializer = default_client.serializer def client(self, context): transport = context.mq_connection if transport: cmt = self.default_client.call_monitor_timeout return messaging.RPCClient(transport, self.target, version_cap=self.version_cap, serializer=self.serializer, call_monitor_timeout=cmt) else: return self.default_client ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/safe_utils.py0000664000175000017500000000303700000000000016244 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2011 Justin Santa Barbara # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
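# Illustrative sketch of how the helper defined below is typically used;
# the decorator and function names here are hypothetical:
#
#   import functools
#
#   def logged(func):
#       @functools.wraps(func)
#       def wrapper(*args, **kwargs):
#           return func(*args, **kwargs)
#       return wrapper
#
#   @logged
#   def build_instance(instance):
#       pass
#
#   # get_wrapped_function(build_instance) walks the decorator's closure
#   # cells and returns the original, undecorated build_instance.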
"""Utilities and helper functions that won't produce circular imports.""" def get_wrapped_function(function): """Get the method at the bottom of a stack of decorators.""" if not hasattr(function, '__closure__') or not function.__closure__: return function def _get_wrapped_function(function): if not hasattr(function, '__closure__') or not function.__closure__: return None for closure in function.__closure__: func = closure.cell_contents deeper_func = _get_wrapped_function(func) if deeper_func: return deeper_func elif hasattr(closure.cell_contents, '__call__'): return closure.cell_contents return function return _get_wrapped_function(function) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3784697 nova-21.2.4/nova/scheduler/0000775000175000017500000000000000000000000015507 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/__init__.py0000664000175000017500000000151000000000000017615 0ustar00zuulzuul00000000000000# Copyright (c) 2010 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ :mod:`nova.scheduler` -- Scheduler Nodes ===================================================== .. automodule:: nova.scheduler :platform: Unix :synopsis: Module that picks a compute node to run a VM instance. """ ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3784697 nova-21.2.4/nova/scheduler/client/0000775000175000017500000000000000000000000016765 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/client/__init__.py0000664000175000017500000000000000000000000021064 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/client/query.py0000664000175000017500000001030100000000000020477 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova.scheduler import rpcapi as scheduler_rpcapi class SchedulerQueryClient(object): """Client class for querying to the scheduler.""" def __init__(self): self.scheduler_rpcapi = scheduler_rpcapi.SchedulerAPI() def select_destinations(self, context, spec_obj, instance_uuids, return_objects=False, return_alternates=False): """Returns destinations(s) best suited for this request_spec and filter_properties. When return_objects is False, the result will be the "old-style" list of dicts with 'host', 'nodename' and 'limits' as keys. The value of return_alternates is ignored. When return_objects is True, the result will be a list of lists of Selection objects, with one list per instance. Each instance's list will contain a Selection representing the selected (and claimed) host, and, if return_alternates is True, zero or more Selection objects that represent alternate hosts. The number of alternates returned depends on the configuration setting `CONF.scheduler.max_attempts`. """ return self.scheduler_rpcapi.select_destinations(context, spec_obj, instance_uuids, return_objects, return_alternates) def update_aggregates(self, context, aggregates): """Updates HostManager internal aggregates information. :param aggregates: Aggregate(s) to update :type aggregates: :class:`nova.objects.Aggregate` or :class:`nova.objects.AggregateList` """ self.scheduler_rpcapi.update_aggregates(context, aggregates) def delete_aggregate(self, context, aggregate): """Deletes HostManager internal information about a specific aggregate. :param aggregate: Aggregate to delete :type aggregate: :class:`nova.objects.Aggregate` """ self.scheduler_rpcapi.delete_aggregate(context, aggregate) def update_instance_info(self, context, host_name, instance_info): """Updates the HostManager with the current information about the instances on a host. :param context: local context :param host_name: name of host sending the update :param instance_info: an InstanceList object. """ self.scheduler_rpcapi.update_instance_info(context, host_name, instance_info) def delete_instance_info(self, context, host_name, instance_uuid): """Updates the HostManager with the current information about an instance that has been deleted on a host. :param context: local context :param host_name: name of host sending the update :param instance_uuid: the uuid of the deleted instance """ self.scheduler_rpcapi.delete_instance_info(context, host_name, instance_uuid) def sync_instance_info(self, context, host_name, instance_uuids): """Notifies the HostManager of the current instances on a host by sending a list of the uuids for those instances. The HostManager can then compare that with its in-memory view of the instances to detect when they are out of sync. :param context: local context :param host_name: name of host sending the update :param instance_uuids: a list of UUID strings representing the current instances on the specified host """ self.scheduler_rpcapi.sync_instance_info(context, host_name, instance_uuids) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/scheduler/client/report.py0000664000175000017500000034067300000000000020667 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import contextlib import copy import functools import random import time from keystoneauth1 import exceptions as ks_exc import os_resource_classes as orc import os_traits from oslo_log import log as logging from oslo_middleware import request_id from oslo_utils import excutils from oslo_utils import versionutils import retrying import six from nova.compute import provider_tree import nova.conf from nova import exception from nova.i18n import _ from nova import objects from nova import utils CONF = nova.conf.CONF LOG = logging.getLogger(__name__) WARN_EVERY = 10 ROOT_REQUIRED_VERSION = '1.35' RESHAPER_VERSION = '1.30' CONSUMER_GENERATION_VERSION = '1.28' ALLOW_RESERVED_EQUAL_TOTAL_INVENTORY_VERSION = '1.26' POST_RPS_RETURNS_PAYLOAD_API_VERSION = '1.20' AGGREGATE_GENERATION_VERSION = '1.19' NESTED_PROVIDER_API_VERSION = '1.14' POST_ALLOCATIONS_API_VERSION = '1.13' GET_USAGES_VERSION = '1.9' AggInfo = collections.namedtuple('AggInfo', ['aggregates', 'generation']) TraitInfo = collections.namedtuple('TraitInfo', ['traits', 'generation']) ProviderAllocInfo = collections.namedtuple( 'ProviderAllocInfo', ['allocations']) def warn_limit(self, msg): if self._warn_count: self._warn_count -= 1 else: self._warn_count = WARN_EVERY LOG.warning(msg) def safe_connect(f): @functools.wraps(f) def wrapper(self, *a, **k): try: return f(self, *a, **k) except ks_exc.EndpointNotFound: warn_limit( self, 'The placement API endpoint was not found.') # Reset client session so there is a new catalog, which # gets cached when keystone is first successfully contacted. self._client = self._create_client() except ks_exc.MissingAuthPlugin: warn_limit( self, 'No authentication information found for placement API.') except ks_exc.Unauthorized: warn_limit( self, 'Placement service credentials do not work.') except ks_exc.DiscoveryFailure: # TODO(_gryf): Looks like DiscoveryFailure is not the only missing # exception here. In Pike we should take care about keystoneauth1 # failures handling globally. warn_limit(self, 'Discovering suitable URL for placement API failed.') except ks_exc.ConnectFailure: LOG.warning('Placement API service is not responding.') return wrapper class Retry(Exception): def __init__(self, operation, reason): self.operation = operation self.reason = reason def retries(f): """Decorator to retry a call three times if it raises Retry Note that this returns the actual value of the inner call on success or returns False if all the retries fail. 
""" @functools.wraps(f) def wrapper(self, *a, **k): for retry in range(0, 4): try: sleep_time = random.uniform(0, retry * 2) time.sleep(sleep_time) return f(self, *a, **k) except Retry as e: LOG.debug( 'Unable to %(op)s because %(reason)s; retrying...', {'op': e.operation, 'reason': e.reason}) LOG.error('Failed scheduler client operation %s: out of retries', f.__name__) return False return wrapper def _move_operation_alloc_request(source_allocs, dest_alloc_req): """Given existing allocations for a source host and a new allocation request for a destination host, return a new allocation_request that contains resources claimed against both source and destination, accounting for shared providers. This is expected to only be used during an evacuate operation. :param source_allocs: Dict, keyed by resource provider UUID, of resources allocated on the source host :param dest_alloc_req: The allocation_request for resources against the destination host """ LOG.debug("Doubling-up allocation_request for move operation. Current " "allocations: %s", source_allocs) # Remove any allocations against resource providers that are # already allocated against on the source host (like shared storage # providers) cur_rp_uuids = set(source_allocs.keys()) new_rp_uuids = set(dest_alloc_req['allocations']) - cur_rp_uuids current_allocs = { cur_rp_uuid: {'resources': alloc['resources']} for cur_rp_uuid, alloc in source_allocs.items() } new_alloc_req = {'allocations': current_allocs} for rp_uuid in dest_alloc_req['allocations']: if rp_uuid in new_rp_uuids: new_alloc_req['allocations'][rp_uuid] = dest_alloc_req[ 'allocations'][rp_uuid] LOG.debug("New allocation_request containing both source and " "destination hosts in move operation: %s", new_alloc_req) return new_alloc_req def get_placement_request_id(response): if response is not None: return response.headers.get(request_id.HTTP_RESP_HEADER_REQUEST_ID) # TODO(mriedem): Consider making SchedulerReportClient a global singleton so # that things like the compute API do not have to lazy-load it. That would # likely require inspecting methods that use a ProviderTree cache to see if # they need locks. class SchedulerReportClient(object): """Client class for updating the scheduler.""" def __init__(self, adapter=None): """Initialize the report client. :param adapter: A prepared keystoneauth1 Adapter for API communication. If unspecified, one is created based on config options in the [placement] section. """ self._adapter = adapter # An object that contains a nova-compute-side cache of resource # provider and inventory information self._provider_tree = None # Track the last time we updated providers' aggregates and traits self._association_refresh_time = None self._client = self._create_client() # NOTE(danms): Keep track of how naggy we've been self._warn_count = 0 def clear_provider_cache(self, init=False): if not init: LOG.info("Clearing the report client's provider cache.") self._provider_tree = provider_tree.ProviderTree() self._association_refresh_time = {} def _clear_provider_cache_for_tree(self, rp_uuid): """Clear the provider cache for only the tree containing rp_uuid. This exists for situations where we encounter an error updating placement, and therefore need to refresh the provider tree cache before redriving the update. However, it would be wasteful and inefficient to clear the *entire* cache, which may contain many separate trees (e.g. ironic nodes or sharing providers) which should be unaffected by the error. 
:param rp_uuid: UUID of a resource provider, which may be anywhere in a a tree hierarchy, i.e. need not be a root. For non-root providers, we still clear the cache for the entire tree including descendants, ancestors up to the root, siblings/cousins and *their* ancestors/descendants. """ try: uuids = self._provider_tree.get_provider_uuids_in_tree(rp_uuid) except ValueError: # If the provider isn't in the tree, it should also not be in the # timer dict, so nothing to clear. return # get_provider_uuids_in_tree returns UUIDs in top-down order, so the # first one is the root; and .remove() is recursive. self._provider_tree.remove(uuids[0]) for uuid in uuids: self._association_refresh_time.pop(uuid, None) def _create_client(self): """Create the HTTP session accessing the placement service.""" # Flush provider tree and associations so we start from a clean slate. self.clear_provider_cache(init=True) client = self._adapter or utils.get_sdk_adapter('placement') # Set accept header on every request to ensure we notify placement # service of our response body media type preferences. client.additional_headers = {'accept': 'application/json'} return client def get(self, url, version=None, global_request_id=None): return self._client.get(url, microversion=version, global_request_id=global_request_id) def post(self, url, data, version=None, global_request_id=None): # NOTE(sdague): using json= instead of data= sets the # media type to application/json for us. Placement API is # more sensitive to this than other APIs in the OpenStack # ecosystem. return self._client.post(url, json=data, microversion=version, global_request_id=global_request_id) def put(self, url, data, version=None, global_request_id=None): # NOTE(sdague): using json= instead of data= sets the # media type to application/json for us. Placement API is # more sensitive to this than other APIs in the OpenStack # ecosystem. return self._client.put(url, json=data, microversion=version, global_request_id=global_request_id) def delete(self, url, version=None, global_request_id=None): return self._client.delete(url, microversion=version, global_request_id=global_request_id) @safe_connect def get_allocation_candidates(self, context, resources): """Returns a tuple of (allocation_requests, provider_summaries, allocation_request_version). The allocation_requests are a collection of potential JSON objects that can be passed to the PUT /allocations/{consumer_uuid} Placement REST API to claim resources against one or more resource providers that meet the requested resource constraints. The provider summaries is a dict, keyed by resource provider UUID, of inventory and capacity information and traits for any resource provider involved in the allocation_requests. :returns: A tuple with a list of allocation_request dicts, a dict of provider information, and the microversion used to request this data from placement, or (None, None, None) if the request failed :param context: The security context :param nova.scheduler.utils.ResourceRequest resources: A ResourceRequest object representing the requested resources, traits, and aggregates from the request spec. 
Example member_of (aggregates) value in resources: [('foo', 'bar'), ('baz',)] translates to: "Candidates are in either 'foo' or 'bar', but definitely in 'baz'" """ # Note that claim_resources() will use this version as well to # make allocations by `PUT /allocations/{consumer_uuid}` version = ROOT_REQUIRED_VERSION qparams = resources.to_querystring() url = "/allocation_candidates?%s" % qparams resp = self.get(url, version=version, global_request_id=context.global_id) if resp.status_code == 200: data = resp.json() return (data['allocation_requests'], data['provider_summaries'], version) args = { 'resource_request': str(resources), 'status_code': resp.status_code, 'err_text': resp.text, } msg = ("Failed to retrieve allocation candidates from placement " "API for filters: %(resource_request)s\n" "Got %(status_code)d: %(err_text)s.") LOG.error(msg, args) return None, None, None @safe_connect def _get_provider_aggregates(self, context, rp_uuid): """Queries the placement API for a resource provider's aggregates. :param rp_uuid: UUID of the resource provider to grab aggregates for. :return: A namedtuple comprising: * .aggregates: A set() of string aggregate UUIDs, which may be empty if the specified provider is associated with no aggregates. * .generation: The resource provider generation. :raise: ResourceProviderAggregateRetrievalFailed on errors. In particular, we raise this exception (as opposed to returning None or the empty set()) if the specified resource provider does not exist. """ resp = self.get("/resource_providers/%s/aggregates" % rp_uuid, version=AGGREGATE_GENERATION_VERSION, global_request_id=context.global_id) if resp.status_code == 200: data = resp.json() return AggInfo(aggregates=set(data['aggregates']), generation=data['resource_provider_generation']) placement_req_id = get_placement_request_id(resp) msg = ("[%(placement_req_id)s] Failed to retrieve aggregates from " "placement API for resource provider with UUID %(uuid)s. " "Got %(status_code)d: %(err_text)s.") args = { 'placement_req_id': placement_req_id, 'uuid': rp_uuid, 'status_code': resp.status_code, 'err_text': resp.text, } LOG.error(msg, args) raise exception.ResourceProviderAggregateRetrievalFailed(uuid=rp_uuid) def get_provider_traits(self, context, rp_uuid): """Queries the placement API for a resource provider's traits. :param context: The security context :param rp_uuid: UUID of the resource provider to grab traits for. :return: A namedtuple comprising: * .traits: A set() of string trait names, which may be empty if the specified provider has no traits. * .generation: The resource provider generation. :raise: ResourceProviderTraitRetrievalFailed on errors. In particular, we raise this exception (as opposed to returning None or the empty set()) if the specified resource provider does not exist. :raise: keystoneauth1.exceptions.ClientException if placement API communication fails. """ resp = self.get("/resource_providers/%s/traits" % rp_uuid, version='1.6', global_request_id=context.global_id) if resp.status_code == 200: json = resp.json() return TraitInfo(traits=set(json['traits']), generation=json['resource_provider_generation']) placement_req_id = get_placement_request_id(resp) LOG.error( "[%(placement_req_id)s] Failed to retrieve traits from " "placement API for resource provider with UUID %(uuid)s. 
Got " "%(status_code)d: %(err_text)s.", {'placement_req_id': placement_req_id, 'uuid': rp_uuid, 'status_code': resp.status_code, 'err_text': resp.text}) raise exception.ResourceProviderTraitRetrievalFailed(uuid=rp_uuid) def get_resource_provider_name(self, context, uuid): """Return the name of a RP. It tries to use the internal of RPs or falls back to calling placement directly. :param context: The security context :param uuid: UUID identifier for the resource provider to look up :return: The name of the RP :raise: ResourceProviderRetrievalFailed if the RP is not in the cache and the communication with the placement is failed. :raise: ResourceProviderNotFound if the RP does not exist. """ try: return self._provider_tree.data(uuid).name except ValueError: rsp = self._get_resource_provider(context, uuid) if rsp is None: raise exception.ResourceProviderNotFound(name_or_uuid=uuid) else: return rsp['name'] @safe_connect def _get_resource_provider(self, context, uuid): """Queries the placement API for a resource provider record with the supplied UUID. :param context: The security context :param uuid: UUID identifier for the resource provider to look up :return: A dict of resource provider information if found or None if no such resource provider could be found. :raise: ResourceProviderRetrievalFailed on error. """ resp = self.get("/resource_providers/%s" % uuid, version=NESTED_PROVIDER_API_VERSION, global_request_id=context.global_id) if resp.status_code == 200: data = resp.json() return data elif resp.status_code == 404: return None else: placement_req_id = get_placement_request_id(resp) msg = ("[%(placement_req_id)s] Failed to retrieve resource " "provider record from placement API for UUID %(uuid)s. Got " "%(status_code)d: %(err_text)s.") args = { 'uuid': uuid, 'status_code': resp.status_code, 'err_text': resp.text, 'placement_req_id': placement_req_id, } LOG.error(msg, args) raise exception.ResourceProviderRetrievalFailed(uuid=uuid) @safe_connect def _get_sharing_providers(self, context, agg_uuids): """Queries the placement API for a list of the resource providers associated with any of the specified aggregates and possessing the MISC_SHARES_VIA_AGGREGATE trait. :param context: The security context :param agg_uuids: Iterable of string UUIDs of aggregates to filter on. :return: A list of dicts of resource provider information, which may be empty if no provider exists with the specified UUID. :raise: ResourceProviderRetrievalFailed on error. """ if not agg_uuids: return [] aggs = ','.join(agg_uuids) url = "/resource_providers?member_of=in:%s&required=%s" % ( aggs, os_traits.MISC_SHARES_VIA_AGGREGATE) resp = self.get(url, version='1.18', global_request_id=context.global_id) if resp.status_code == 200: return resp.json()['resource_providers'] msg = _("[%(placement_req_id)s] Failed to retrieve sharing resource " "providers associated with the following aggregates from " "placement API: %(aggs)s. Got %(status_code)d: %(err_text)s.") args = { 'aggs': aggs, 'status_code': resp.status_code, 'err_text': resp.text, 'placement_req_id': get_placement_request_id(resp), } LOG.error(msg, args) raise exception.ResourceProviderRetrievalFailed(message=msg % args) def get_providers_in_tree(self, context, uuid): """Queries the placement API for a list of the resource providers in the tree associated with the specified UUID. 
:param context: The security context :param uuid: UUID identifier for the resource provider to look up :return: A list of dicts of resource provider information, which may be empty if no provider exists with the specified UUID. :raise: ResourceProviderRetrievalFailed on error. :raise: keystoneauth1.exceptions.ClientException if placement API communication fails. """ resp = self.get("/resource_providers?in_tree=%s" % uuid, version=NESTED_PROVIDER_API_VERSION, global_request_id=context.global_id) if resp.status_code == 200: return resp.json()['resource_providers'] # Some unexpected error placement_req_id = get_placement_request_id(resp) msg = ("[%(placement_req_id)s] Failed to retrieve resource provider " "tree from placement API for UUID %(uuid)s. Got " "%(status_code)d: %(err_text)s.") args = { 'uuid': uuid, 'status_code': resp.status_code, 'err_text': resp.text, 'placement_req_id': placement_req_id, } LOG.error(msg, args) raise exception.ResourceProviderRetrievalFailed(uuid=uuid) @safe_connect def _create_resource_provider(self, context, uuid, name, parent_provider_uuid=None): """Calls the placement API to create a new resource provider record. :param context: The security context :param uuid: UUID of the new resource provider :param name: Name of the resource provider :param parent_provider_uuid: Optional UUID of the immediate parent :return: A dict of resource provider information object representing the newly-created resource provider. :raise: ResourceProviderCreationFailed or ResourceProviderRetrievalFailed on error. """ url = "/resource_providers" payload = { 'uuid': uuid, 'name': name, } if parent_provider_uuid is not None: payload['parent_provider_uuid'] = parent_provider_uuid # Bug #1746075: First try the microversion that returns the new # provider's payload. resp = self.post(url, payload, version=POST_RPS_RETURNS_PAYLOAD_API_VERSION, global_request_id=context.global_id) placement_req_id = get_placement_request_id(resp) if resp: msg = ("[%(placement_req_id)s] Created resource provider record " "via placement API for resource provider with UUID " "%(uuid)s and name %(name)s.") args = { 'uuid': uuid, 'name': name, 'placement_req_id': placement_req_id, } LOG.info(msg, args) return resp.json() # TODO(efried): Push error codes from placement, and use 'em. name_conflict = 'Conflicting resource provider name:' if resp.status_code == 409 and name_conflict not in resp.text: # Another thread concurrently created a resource provider with the # same UUID. Log a warning and then just return the resource # provider object from _get_resource_provider() msg = ("[%(placement_req_id)s] Another thread already created a " "resource provider with the UUID %(uuid)s. Grabbing that " "record from the placement API.") args = { 'uuid': uuid, 'placement_req_id': placement_req_id, } LOG.info(msg, args) return self._get_resource_provider(context, uuid) # A provider with the same *name* already exists, or some other error. msg = ("[%(placement_req_id)s] Failed to create resource provider " "record in placement API for UUID %(uuid)s. Got " "%(status_code)d: %(err_text)s.") args = { 'uuid': uuid, 'status_code': resp.status_code, 'err_text': resp.text, 'placement_req_id': placement_req_id, } LOG.error(msg, args) raise exception.ResourceProviderCreationFailed(name=name) def _ensure_resource_provider(self, context, uuid, name=None, parent_provider_uuid=None): """Ensures that the placement API has a record of a resource provider with the supplied UUID. 
If not, creates the resource provider record in the placement API for the supplied UUID, passing in a name for the resource provider. If found or created, the provider's UUID is returned from this method. If the resource provider for the supplied uuid was not found and the resource provider record could not be created in the placement API, an exception is raised. If this method returns successfully, callers are assured that the placement API contains a record of the provider; and that the local cache of resource provider information contains a record of: - The specified provider - All providers in its tree - All sharing providers associated via aggregate with all providers in said tree and for each of those providers: - The UUIDs of its aggregates - The trait strings associated with the provider Note that if the provider did not exist prior to this call, the above reduces to just the specified provider as a root, with no aggregates or traits. :param context: The security context :param uuid: UUID identifier for the resource provider to ensure exists :param name: Optional name for the resource provider if the record does not exist. If empty, the name is set to the UUID value :param parent_provider_uuid: Optional UUID of the immediate parent, which must have been previously _ensured. :raise ResourceProviderCreationFailed: If we expected to be creating providers, but couldn't. :raise: keystoneauth1.exceptions.ClientException if placement API communication fails. """ # NOTE(efried): We currently have no code path where we need to set the # parent_provider_uuid on a previously-parent-less provider - so we do # NOT handle that scenario here. # If we already have the root provider in the cache, and it's not # stale, don't refresh it; and use the cache to determine the # descendants to (soft) refresh. # NOTE(efried): This assumes the compute service only cares about # providers it "owns". If that ever changes, we'll need a way to find # out about out-of-band changes here. Options that have been # brainstormed at this time: # - Make this condition more frequently True # - Some kind of notification subscription so a separate thread is # alerted when . # - "Cascading generations" - i.e. a change to a leaf node percolates # generation bump up the tree so that we bounce 409 the next time we # try to update anything and have to refresh. if (self._provider_tree.exists(uuid) and not self._associations_stale(uuid)): uuids_to_refresh = [ u for u in self._provider_tree.get_provider_uuids(uuid) if self._associations_stale(u)] else: # We either don't have it locally or it's stale. Pull or create it. created_rp = None rps_to_refresh = self.get_providers_in_tree(context, uuid) if not rps_to_refresh: created_rp = self._create_resource_provider( context, uuid, name or uuid, parent_provider_uuid=parent_provider_uuid) # If @safe_connect can't establish a connection to the # placement service, like if placement isn't running or # nova-compute is mis-configured for authentication, we'll get # None back and need to treat it like we couldn't create the # provider (because we couldn't). if created_rp is None: raise exception.ResourceProviderCreationFailed( name=name or uuid) # Don't add the created_rp to rps_to_refresh. Since we just # created it, it has no aggregates or traits. # But do mark it as having just been "refreshed". 
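                # Rough sketch of the staleness test applied by
                # _associations_stale() further below; the helper name
                # `_example_is_stale` is hypothetical:
                #
                #   def _example_is_stale(refresh_times, uuid, max_age):
                #       # max_age mirrors
                #       # CONF.compute.resource_provider_association_refresh
                #       last = refresh_times.get(uuid, 0)
                #       if max_age == 0 and last != 0:
                #           return False  # refresh disabled once loaded
                #       return (time.time() - last) > max_age
                #
                # Recording time.time() below therefore marks the
                # just-created provider as freshly refreshed.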
self._association_refresh_time[uuid] = time.time() self._provider_tree.populate_from_iterable( rps_to_refresh or [created_rp]) uuids_to_refresh = [rp['uuid'] for rp in rps_to_refresh] # At this point, the whole tree exists in the local cache. for uuid_to_refresh in uuids_to_refresh: self._refresh_associations(context, uuid_to_refresh, force=True) return uuid def _delete_provider(self, rp_uuid, global_request_id=None): resp = self.delete('/resource_providers/%s' % rp_uuid, global_request_id=global_request_id) # Check for 404 since we don't need to warn/raise if we tried to delete # something which doesn"t actually exist. if resp or resp.status_code == 404: if resp: LOG.info("Deleted resource provider %s", rp_uuid) # clean the caches try: self._provider_tree.remove(rp_uuid) except ValueError: pass self._association_refresh_time.pop(rp_uuid, None) return msg = ("[%(placement_req_id)s] Failed to delete resource provider " "with UUID %(uuid)s from the placement API. Got " "%(status_code)d: %(err_text)s.") args = { 'placement_req_id': get_placement_request_id(resp), 'uuid': rp_uuid, 'status_code': resp.status_code, 'err_text': resp.text } LOG.error(msg, args) # On conflict, the caller may wish to delete allocations and # redrive. (Note that this is not the same as a # PlacementAPIConflict case.) if resp.status_code == 409: raise exception.ResourceProviderInUse() raise exception.ResourceProviderDeletionFailed(uuid=rp_uuid) def _get_inventory(self, context, rp_uuid): url = '/resource_providers/%s/inventories' % rp_uuid result = self.get(url, global_request_id=context.global_id) if not result: # TODO(efried): Log. return None return result.json() def _refresh_and_get_inventory(self, context, rp_uuid): """Helper method that retrieves the current inventory for the supplied resource provider according to the placement API. If the cached generation of the resource provider is not the same as the generation returned from the placement API, we update the cached generation and attempt to update inventory if any exists, otherwise return empty inventories. """ curr = self._get_inventory(context, rp_uuid) if curr is None: return None LOG.debug('Updating ProviderTree inventory for provider %s from ' '_refresh_and_get_inventory using data: %s', rp_uuid, curr['inventories']) self._provider_tree.update_inventory( rp_uuid, curr['inventories'], generation=curr['resource_provider_generation']) return curr def _refresh_associations(self, context, rp_uuid, force=False, refresh_sharing=True): """Refresh inventories, aggregates, traits, and (optionally) aggregate- associated sharing providers for the specified resource provider uuid. Only refresh if there has been no refresh during the lifetime of this process, CONF.compute.resource_provider_association_refresh seconds have passed, or the force arg has been set to True. :param context: The security context :param rp_uuid: UUID of the resource provider to check for fresh inventories, aggregates, and traits :param force: If True, force the refresh :param refresh_sharing: If True, fetch all the providers associated by aggregate with the specified provider, including their inventories, traits, and aggregates (but not *their* sharing providers). :raise: On various placement API errors, one of: - ResourceProviderAggregateRetrievalFailed - ResourceProviderTraitRetrievalFailed - ResourceProviderRetrievalFailed :raise: keystoneauth1.exceptions.ClientException if placement API communication fails. 
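        Illustrative call patterns (arguments as documented above)::

            # refresh this provider and its aggregate-associated sharing
            # providers, but only if the cached data is stale
            self._refresh_associations(context, rp_uuid)

            # unconditionally refresh just this provider
            self._refresh_associations(context, rp_uuid, force=True,
                                       refresh_sharing=False)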
""" if force or self._associations_stale(rp_uuid): # Refresh inventories msg = "Refreshing inventories for resource provider %s" LOG.debug(msg, rp_uuid) self._refresh_and_get_inventory(context, rp_uuid) # Refresh aggregates agg_info = self._get_provider_aggregates(context, rp_uuid) # If @safe_connect makes the above return None, this will raise # TypeError. Good. aggs, generation = agg_info.aggregates, agg_info.generation msg = ("Refreshing aggregate associations for resource provider " "%s, aggregates: %s") LOG.debug(msg, rp_uuid, ','.join(aggs or ['None'])) # NOTE(efried): This will blow up if called for a RP that doesn't # exist in our _provider_tree. self._provider_tree.update_aggregates( rp_uuid, aggs, generation=generation) # Refresh traits trait_info = self.get_provider_traits(context, rp_uuid) traits, generation = trait_info.traits, trait_info.generation msg = ("Refreshing trait associations for resource provider %s, " "traits: %s") LOG.debug(msg, rp_uuid, ','.join(traits or ['None'])) # NOTE(efried): This will blow up if called for a RP that doesn't # exist in our _provider_tree. self._provider_tree.update_traits( rp_uuid, traits, generation=generation) if refresh_sharing: # Refresh providers associated by aggregate for rp in self._get_sharing_providers(context, aggs): if not self._provider_tree.exists(rp['uuid']): # NOTE(efried): Right now sharing providers are always # treated as roots. This is deliberate. From the # context of this compute's RP, it doesn't matter if a # sharing RP is part of a tree. self._provider_tree.new_root( rp['name'], rp['uuid'], generation=rp['generation']) # Now we have to (populate or) refresh that provider's # traits, aggregates, and inventories (but not *its* # aggregate-associated providers). No need to override # force=True for newly-added providers - the missing # timestamp will always trigger them to refresh. self._refresh_associations(context, rp['uuid'], force=force, refresh_sharing=False) self._association_refresh_time[rp_uuid] = time.time() def _associations_stale(self, uuid): """Respond True if aggregates and traits have not been refreshed "recently". Associations are stale if association_refresh_time for this uuid is not set or is more than CONF.compute.resource_provider_association_refresh seconds ago. Always False if CONF.compute.resource_provider_association_refresh is zero. """ rpar = CONF.compute.resource_provider_association_refresh refresh_time = self._association_refresh_time.get(uuid, 0) # If refresh is disabled, associations are "never" stale. (But still # load them if we haven't yet done so.) if rpar == 0 and refresh_time != 0: # TODO(efried): If refresh is disabled, we could avoid touching the # _association_refresh_time dict anywhere, but that would take some # nontrivial refactoring. return False return (time.time() - refresh_time) > rpar def get_provider_tree_and_ensure_root(self, context, rp_uuid, name=None, parent_provider_uuid=None): """Returns a fresh ProviderTree representing all providers which are in the same tree or in the same aggregate as the specified provider, including their aggregates, traits, and inventories. If the specified provider does not exist, it is created with the specified UUID, name, and parent provider (which *must* already exist). :param context: The security context :param rp_uuid: UUID of the resource provider for which to populate the tree. (This doesn't need to be the UUID of the root.) :param name: Optional name for the resource provider if the record does not exist. 
If empty, the name is set to the UUID value :param parent_provider_uuid: Optional UUID of the immediate parent, which must have been previously _ensured. :return: A new ProviderTree object. """ # TODO(efried): We would like to have the caller handle create-and/or- # cache-if-not-already, but the resource tracker is currently # structured to handle initialization and update in a single path. At # some point this should be refactored, and this method can *just* # return a deep copy of the local _provider_tree cache. # (Re)populate the local ProviderTree self._ensure_resource_provider( context, rp_uuid, name=name, parent_provider_uuid=parent_provider_uuid) # Return a *copy* of the tree. return copy.deepcopy(self._provider_tree) def set_inventory_for_provider(self, context, rp_uuid, inv_data): """Given the UUID of a provider, set the inventory records for the provider to the supplied dict of resources. The provider must exist - this method does not attempt to create it. :param context: The security context :param rp_uuid: The UUID of the provider whose inventory is to be updated. :param inv_data: Dict, keyed by resource class name, of inventory data to set for the provider. Use None or the empty dict to remove all inventory for the provider. :raises: InventoryInUse if inv_data indicates removal of inventory in a resource class which has active allocations for this provider. :raises: InvalidResourceClass if inv_data contains a resource class which cannot be created. :raises: ResourceProviderUpdateConflict if the provider's generation doesn't match the generation in the cache. Callers may choose to retrieve the provider and its associations afresh and redrive this operation. :raises: ResourceProviderUpdateFailed on any other placement API failure. """ # NOTE(efried): This is here because _ensure_resource_class already has # @safe_connect, so we don't want to decorate this whole method with it @safe_connect def do_put(url, payload): # NOTE(vdrok): in microversion 1.26 it is allowed to have inventory # records with reserved value equal to total return self.put( url, payload, global_request_id=context.global_id, version=ALLOW_RESERVED_EQUAL_TOTAL_INVENTORY_VERSION) # If not different from what we've got, short out if not self._provider_tree.has_inventory_changed(rp_uuid, inv_data): LOG.debug('Inventory has not changed for provider %s based ' 'on inventory data: %s', rp_uuid, inv_data) return # Ensure non-standard resource classes exist, creating them if needed. self._ensure_resource_classes(context, set(inv_data)) url = '/resource_providers/%s/inventories' % rp_uuid inv_data = inv_data or {} generation = self._provider_tree.data(rp_uuid).generation payload = { 'resource_provider_generation': generation, 'inventories': inv_data, } resp = do_put(url, payload) if resp.status_code == 200: LOG.debug('Updated inventory for provider %s with generation %s ' 'in Placement from set_inventory_for_provider using ' 'data: %s', rp_uuid, generation, inv_data) json = resp.json() self._provider_tree.update_inventory( rp_uuid, json['inventories'], generation=json['resource_provider_generation']) return # Some error occurred; log it msg = ("[%(placement_req_id)s] Failed to update inventory to " "[%(inv_data)s] for resource provider with UUID %(uuid)s. 
Got " "%(status_code)d: %(err_text)s") args = { 'placement_req_id': get_placement_request_id(resp), 'uuid': rp_uuid, 'inv_data': str(inv_data), 'status_code': resp.status_code, 'err_text': resp.text, } LOG.error(msg, args) if resp.status_code == 409: # If a conflict attempting to remove inventory in a resource class # with active allocations, raise InventoryInUse err = resp.json()['errors'][0] # TODO(efried): If there's ever a lib exporting symbols for error # codes, use it. if err['code'] == 'placement.inventory.inuse': # The error detail includes the resource class and provider. raise exception.InventoryInUse(err['detail']) # Other conflicts are generation mismatch: raise conflict exception raise exception.ResourceProviderUpdateConflict( uuid=rp_uuid, generation=generation, error=resp.text) # Otherwise, raise generic exception raise exception.ResourceProviderUpdateFailed(url=url, error=resp.text) @safe_connect def _ensure_traits(self, context, traits): """Make sure all specified traits exist in the placement service. :param context: The security context :param traits: Iterable of trait strings to ensure exist. :raises: TraitCreationFailed if traits contains a trait that did not exist in placement, and couldn't be created. When this exception is raised, it is possible that *some* of the requested traits were created. :raises: TraitRetrievalFailed if the initial query of existing traits was unsuccessful. In this scenario, it is guaranteed that no traits were created. """ if not traits: return # Query for all the requested traits. Whichever ones we *don't* get # back, we need to create. # NOTE(efried): We don't attempt to filter based on our local idea of # standard traits, which may not be in sync with what the placement # service knows. If the caller tries to ensure a nonexistent # "standard" trait, they deserve the TraitCreationFailed exception # they'll get. resp = self.get('/traits?name=in:' + ','.join(traits), version='1.6', global_request_id=context.global_id) if resp.status_code == 200: traits_to_create = set(traits) - set(resp.json()['traits']) # Might be neat to have a batch create. But creating multiple # traits will generally happen once, at initial startup, if at all. for trait in traits_to_create: resp = self.put('/traits/' + trait, None, version='1.6', global_request_id=context.global_id) if not resp: raise exception.TraitCreationFailed(name=trait, error=resp.text) return # The initial GET failed msg = ("[%(placement_req_id)s] Failed to retrieve the list of traits. " "Got %(status_code)d: %(err_text)s") args = { 'placement_req_id': get_placement_request_id(resp), 'status_code': resp.status_code, 'err_text': resp.text, } LOG.error(msg, args) raise exception.TraitRetrievalFailed(error=resp.text) @safe_connect def set_traits_for_provider(self, context, rp_uuid, traits): """Replace a provider's traits with those specified. The provider must exist - this method does not attempt to create it. :param context: The security context :param rp_uuid: The UUID of the provider whose traits are to be updated :param traits: Iterable of traits to set on the provider :raises: ResourceProviderUpdateConflict if the provider's generation doesn't match the generation in the cache. Callers may choose to retrieve the provider and its associations afresh and redrive this operation. :raises: ResourceProviderUpdateFailed on any other placement API failure. :raises: TraitCreationFailed if traits contains a trait that did not exist in placement, and couldn't be created. 
:raises: TraitRetrievalFailed if the initial query of existing traits was unsuccessful. """ # If not different from what we've got, short out if not self._provider_tree.have_traits_changed(rp_uuid, traits): return self._ensure_traits(context, traits) url = '/resource_providers/%s/traits' % rp_uuid # NOTE(efried): Don't use the DELETE API when traits is empty, because # that method doesn't return content, and we need to update the cached # provider tree with the new generation. traits = list(traits) if traits else [] generation = self._provider_tree.data(rp_uuid).generation payload = { 'resource_provider_generation': generation, 'traits': traits, } resp = self.put(url, payload, version='1.6', global_request_id=context.global_id) if resp.status_code == 200: json = resp.json() self._provider_tree.update_traits( rp_uuid, json['traits'], generation=json['resource_provider_generation']) return # Some error occurred; log it msg = ("[%(placement_req_id)s] Failed to update traits to " "[%(traits)s] for resource provider with UUID %(uuid)s. Got " "%(status_code)d: %(err_text)s") args = { 'placement_req_id': get_placement_request_id(resp), 'uuid': rp_uuid, 'traits': ','.join(traits), 'status_code': resp.status_code, 'err_text': resp.text, } LOG.error(msg, args) # If a conflict, raise special conflict exception if resp.status_code == 409: raise exception.ResourceProviderUpdateConflict( uuid=rp_uuid, generation=generation, error=resp.text) # Otherwise, raise generic exception raise exception.ResourceProviderUpdateFailed(url=url, error=resp.text) @safe_connect def set_aggregates_for_provider(self, context, rp_uuid, aggregates, use_cache=True, generation=None): """Replace a provider's aggregates with those specified. The provider must exist - this method does not attempt to create it. :param context: The security context :param rp_uuid: The UUID of the provider whose aggregates are to be updated. :param aggregates: Iterable of aggregates to set on the provider. :param use_cache: If False, indicates not to update the cache of resource providers. :param generation: Resource provider generation. Required if use_cache is False. :raises: ResourceProviderUpdateConflict if the provider's generation doesn't match the generation in the cache. Callers may choose to retrieve the provider and its associations afresh and redrive this operation. :raises: ResourceProviderUpdateFailed on any other placement API failure. """ # If a generation is specified, it trumps whatever's in the cache. # Otherwise... if generation is None: if use_cache: generation = self._provider_tree.data(rp_uuid).generation else: # Either cache or generation is required raise ValueError( _("generation is required with use_cache=False")) # Check whether aggregates need updating. We can only do this if we # have a cache entry with a matching generation. try: if (self._provider_tree.data(rp_uuid).generation == generation and not self._provider_tree.have_aggregates_changed( rp_uuid, aggregates)): return except ValueError: # Not found in the cache; proceed pass url = '/resource_providers/%s/aggregates' % rp_uuid aggregates = list(aggregates) if aggregates else [] payload = {'aggregates': aggregates, 'resource_provider_generation': generation} resp = self.put(url, payload, version=AGGREGATE_GENERATION_VERSION, global_request_id=context.global_id) if resp.status_code == 200: # Try to update the cache regardless. If use_cache=False, ignore # any failures. 
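            # For illustration, a successful PUT answers with a body shaped
            # like the following (values invented), which is exactly what the
            # json parsing below consumes:
            #     {"aggregates": ["14a340e8-...", ...],
            #      "resource_provider_generation": 43}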
try: data = resp.json() self._provider_tree.update_aggregates( rp_uuid, data['aggregates'], generation=data['resource_provider_generation']) except ValueError: if use_cache: # The entry should've been there raise return # Some error occurred; log it msg = ("[%(placement_req_id)s] Failed to update aggregates to " "[%(aggs)s] for resource provider with UUID %(uuid)s. Got " "%(status_code)d: %(err_text)s") args = { 'placement_req_id': get_placement_request_id(resp), 'uuid': rp_uuid, 'aggs': ','.join(aggregates), 'status_code': resp.status_code, 'err_text': resp.text, } # If a conflict, invalidate the cache and raise special exception if resp.status_code == 409: # No reason to condition cache invalidation on use_cache - if we # got a 409, the cache entry is still bogus if it exists; and the # below is a no-op if it doesn't. try: self._provider_tree.remove(rp_uuid) except ValueError: pass self._association_refresh_time.pop(rp_uuid, None) LOG.warning(msg, args) raise exception.ResourceProviderUpdateConflict( uuid=rp_uuid, generation=generation, error=resp.text) # Otherwise, raise generic exception LOG.error(msg, args) raise exception.ResourceProviderUpdateFailed(url=url, error=resp.text) @safe_connect def _ensure_resource_classes(self, context, names): """Make sure resource classes exist. :param context: The security context :param names: Iterable of string names of the resource classes to check/create. Must not be None. :raises: exception.InvalidResourceClass if an attempt is made to create an invalid resource class. """ # Placement API version that supports PUT /resource_classes/CUSTOM_* # to create (or validate the existence of) a consumer-specified # resource class. version = '1.7' to_ensure = set(n for n in names if n.startswith(orc.CUSTOM_NAMESPACE)) for name in to_ensure: # no payload on the put request resp = self.put( "/resource_classes/%s" % name, None, version=version, global_request_id=context.global_id) if not resp: msg = ("Failed to ensure resource class record with placement " "API for resource class %(rc_name)s. Got " "%(status_code)d: %(err_text)s.") args = { 'rc_name': name, 'status_code': resp.status_code, 'err_text': resp.text, } LOG.error(msg, args) raise exception.InvalidResourceClass(resource_class=name) def _reshape(self, context, inventories, allocations): """Perform atomic inventory & allocation data migration. :param context: The security context :param inventories: A dict, keyed by resource provider UUID, of: { "inventories": { inventory dicts, keyed by resource class }, "resource_provider_generation": $RP_GEN } :param allocations: A dict, keyed by consumer UUID, of: { "project_id": $PROJ_ID, "user_id": $USER_ID, "consumer_generation": $CONSUMER_GEN, "allocations": { $RP_UUID: { "resources": { $RC: $AMOUNT, ... } }, ... } } :return: The Response object representing a successful API call. :raises: ReshapeFailed if the POST /reshaper request fails. :raises: keystoneauth1.exceptions.ClientException if placement API communication fails. 
""" # We have to make sure any new resource classes exist for invs in inventories.values(): self._ensure_resource_classes(context, list(invs['inventories'])) payload = {"inventories": inventories, "allocations": allocations} resp = self.post('/reshaper', payload, version=RESHAPER_VERSION, global_request_id=context.global_id) if not resp: raise exception.ReshapeFailed(error=resp.text) return resp def _set_up_and_do_reshape(self, context, old_tree, new_tree, allocations): LOG.info("Performing resource provider inventory and allocation " "data migration.") new_uuids = new_tree.get_provider_uuids() inventories = {} for rp_uuid in new_uuids: data = new_tree.data(rp_uuid) inventories[rp_uuid] = { "inventories": data.inventory, "resource_provider_generation": data.generation } # Even though we're going to delete them immediately, we still want # to send "inventory changes" for to-be-removed providers in this # reshape request so they're done atomically. This prevents races # where the scheduler could allocate between here and when we # delete the providers. to_remove = set(old_tree.get_provider_uuids()) - set(new_uuids) for rp_uuid in to_remove: inventories[rp_uuid] = { "inventories": {}, "resource_provider_generation": old_tree.data(rp_uuid).generation } # Now we're ready to POST /reshaper. This can raise ReshapeFailed, # but we also need to convert any other exception (including e.g. # PlacementAPIConnectFailure) to ReshapeFailed because we want any # failure here to be fatal to the caller. try: self._reshape(context, inventories, allocations) except exception.ReshapeFailed: raise except Exception as e: # Make sure the original stack trace gets logged. LOG.exception('Reshape failed') raise exception.ReshapeFailed(error=e) def update_from_provider_tree(self, context, new_tree, allocations=None): """Flush changes from a specified ProviderTree back to placement. The specified ProviderTree is compared against the local cache. Any changes are flushed back to the placement service. Upon successful completion, the local cache should reflect the specified ProviderTree. This method is best-effort and not atomic. When exceptions are raised, it is possible that some of the changes have been flushed back, leaving the placement database in an inconsistent state. This should be recoverable through subsequent calls. :param context: The security context :param new_tree: A ProviderTree instance representing the desired state of providers in placement. :param allocations: A dict, keyed by consumer UUID, of allocation records of the form returned by GET /allocations/{consumer_uuid} representing the comprehensive final picture of the allocations for each consumer therein. A value of None indicates that no reshape is being performed. :raises: ResourceProviderUpdateConflict if a generation conflict was encountered - i.e. we are attempting to update placement based on a stale view of it. :raises: ResourceProviderSyncFailed if any errors were encountered attempting to perform the necessary API operations, except reshape (see below). :raises: ReshapeFailed if a reshape was signaled (allocations not None) and it fails for any reason. :raises: keystoneauth1.exceptions.base.ClientException on failure to communicate with the placement API """ # NOTE(efried): We currently do not handle the "rename" case. This is # where new_tree contains a provider named Y whose UUID already exists # but is named X. 
@contextlib.contextmanager def catch_all(rp_uuid): """Convert all "expected" exceptions from placement API helpers to ResourceProviderSyncFailed* and invalidate the caches for the tree around `rp_uuid`. * Except ResourceProviderUpdateConflict, which signals the caller to redrive the operation; and ReshapeFailed, which triggers special error handling behavior in the resource tracker and compute manager. """ # TODO(efried): Make a base exception class from which all these # can inherit. helper_exceptions = ( exception.InvalidResourceClass, exception.InventoryInUse, exception.ResourceProviderAggregateRetrievalFailed, exception.ResourceProviderDeletionFailed, exception.ResourceProviderInUse, exception.ResourceProviderRetrievalFailed, exception.ResourceProviderTraitRetrievalFailed, exception.ResourceProviderUpdateFailed, exception.TraitCreationFailed, exception.TraitRetrievalFailed, # NOTE(efried): We do not trap/convert ReshapeFailed - that one # needs to bubble up right away and be handled specially. ) try: yield except exception.ResourceProviderUpdateConflict: # Invalidate the tree around the failing provider and reraise # the conflict exception. This signals the resource tracker to # redrive the update right away rather than waiting until the # next periodic. with excutils.save_and_reraise_exception(): self._clear_provider_cache_for_tree(rp_uuid) except helper_exceptions: # Invalidate the relevant part of the cache. It gets rebuilt on # the next pass. self._clear_provider_cache_for_tree(rp_uuid) raise exception.ResourceProviderSyncFailed() # Helper methods herein will be updating the local cache (this is # intentional) so we need to grab up front any data we need to operate # on in its "original" form. old_tree = self._provider_tree old_uuids = old_tree.get_provider_uuids() new_uuids = new_tree.get_provider_uuids() uuids_to_add = set(new_uuids) - set(old_uuids) uuids_to_remove = set(old_uuids) - set(new_uuids) # In case a reshape is happening, we first have to create (or load) any # "new" providers. # We have to do additions in top-down order, so we don't error # attempting to create a child before its parent exists. for uuid in new_uuids: if uuid not in uuids_to_add: continue provider = new_tree.data(uuid) with catch_all(uuid): self._ensure_resource_provider( context, uuid, name=provider.name, parent_provider_uuid=provider.parent_uuid) # We have to stuff the freshly-created provider's generation # into the new_tree so we don't get conflicts updating its # inventories etc. later. # TODO(efried): We don't have a good way to set the generation # independently; this is a hack. new_tree.update_inventory( uuid, new_tree.data(uuid).inventory, generation=self._provider_tree.data(uuid).generation) # If we need to reshape, do it here. if allocations is not None: # NOTE(efried): We do not catch_all here, because ReshapeFailed # needs to bubble up right away and be handled specially. self._set_up_and_do_reshape(context, old_tree, new_tree, allocations) # The reshape updated provider generations, so the ones we have in # the cache are now stale. The inventory update below will short # out, but we would still bounce with a provider generation # conflict on the trait and aggregate updates. for uuid in new_uuids: # TODO(efried): GET /resource_providers?uuid=in:[list] would be # handy here. Meanwhile, this is an already-written, if not # obvious, way to refresh provider generations in the cache. 
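            # Concretely, _refresh_and_get_inventory() GETs
            # /resource_providers/{uuid}/inventories and feeds the returned
            # resource_provider_generation back into self._provider_tree,
            # which is the generation refresh we need here.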
with catch_all(uuid): self._refresh_and_get_inventory(context, uuid) # Now we can do provider deletions, because we should have moved any # allocations off of them via reshape. # We have to do deletions in bottom-up order, so we don't error # attempting to delete a parent who still has children. (We get the # UUIDs in bottom-up order by reversing old_uuids, which was given to # us in top-down order per ProviderTree.get_provider_uuids().) for uuid in reversed(old_uuids): if uuid not in uuids_to_remove: continue with catch_all(uuid): self._delete_provider(uuid) # At this point the local cache should have all the same providers as # new_tree. Whether we added them or not, walk through and diff/flush # inventories, traits, and aggregates as necessary. Note that, if we # reshaped above, any inventory changes have already been done. But the # helper methods are set up to check and short out when the relevant # property does not differ from what's in the cache. # If we encounter any error and remove a provider from the cache, all # its descendants are also removed, and set_*_for_provider methods on # it wouldn't be able to get started. Walking the tree in bottom-up # order ensures we at least try to process all of the providers. (We # get the UUIDs in bottom-up order by reversing new_uuids, which was # given to us in top-down order per ProviderTree.get_provider_uuids().) for uuid in reversed(new_uuids): pd = new_tree.data(uuid) with catch_all(pd.uuid): self.set_inventory_for_provider( context, pd.uuid, pd.inventory) self.set_aggregates_for_provider( context, pd.uuid, pd.aggregates) self.set_traits_for_provider(context, pd.uuid, pd.traits) # TODO(efried): Cut users of this method over to get_allocs_for_consumer def get_allocations_for_consumer(self, context, consumer): """Legacy method for allocation retrieval. Callers should move to using get_allocs_for_consumer, which handles errors properly and returns the entire payload. :param context: The nova.context.RequestContext auth context :param consumer: UUID of the consumer resource :returns: A dict of the form: { $RP_UUID: { "generation": $RP_GEN, "resources": { $RESOURCE_CLASS: $AMOUNT ... }, }, ... } """ try: return self.get_allocs_for_consumer( context, consumer)['allocations'] except ks_exc.ClientException as e: LOG.warning("Failed to get allocations for consumer %(consumer)s: " "%(error)s", {'consumer': consumer, 'error': e}) # Because this is what @safe_connect did return None except exception.ConsumerAllocationRetrievalFailed as e: LOG.warning(e) # Because this is how we used to treat non-200 return {} def get_allocs_for_consumer(self, context, consumer): """Makes a GET /allocations/{consumer} call to Placement. :param context: The nova.context.RequestContext auth context :param consumer: UUID of the consumer resource :return: Dict of the form: { "allocations": { $RP_UUID: { "generation": $RP_GEN, "resources": { $RESOURCE_CLASS: $AMOUNT ... }, }, ... 
}, "consumer_generation": $CONSUMER_GEN, "project_id": $PROJ_ID, "user_id": $USER_ID, } :raises: keystoneauth1.exceptions.base.ClientException on failure to communicate with the placement API :raises: ConsumerAllocationRetrievalFailed if the placement API call fails """ resp = self.get('/allocations/%s' % consumer, version=CONSUMER_GENERATION_VERSION, global_request_id=context.global_id) if not resp: # TODO(efried): Use code/title/detail to make a better exception raise exception.ConsumerAllocationRetrievalFailed( consumer_uuid=consumer, error=resp.text) return resp.json() def get_allocations_for_consumer_by_provider(self, context, rp_uuid, consumer): """Return allocations for a consumer and a resource provider. :param context: The nova.context.RequestContext auth context :param rp_uuid: UUID of the resource provider :param consumer: UUID of the consumer :return: the resources dict of the consumer's allocation keyed by resource classes """ # NOTE(cdent): This trims to just the allocations being # used on this resource provider. In the future when there # are shared resources there might be other providers. allocations = self.get_allocations_for_consumer(context, consumer) if allocations is None: # safe_connect can return None on 404 allocations = {} return allocations.get( rp_uuid, {}).get('resources', {}) # NOTE(jaypipes): Currently, this method is ONLY used in three places: # 1. By the scheduler to allocate resources on the selected destination # hosts. # 2. By the conductor LiveMigrationTask to allocate resources on a forced # destination host. In this case, the source node allocations have # already been moved to the migration record so the instance should not # have allocations and _move_operation_alloc_request will not be called. # 3. By the conductor ComputeTaskManager to allocate resources on a forced # destination host during evacuate. This case will call the # _move_operation_alloc_request method. # This method should not be called by the resource tracker. @safe_connect @retries def claim_resources(self, context, consumer_uuid, alloc_request, project_id, user_id, allocation_request_version, consumer_generation=None): """Creates allocation records for the supplied instance UUID against the supplied resource providers. We check to see if resources have already been claimed for this consumer. If so, we assume that a move operation is underway and the scheduler is attempting to claim resources against the new (destination host). In order to prevent compute nodes currently performing move operations from being scheduled to improperly, we create a "doubled-up" allocation that consumes resources on *both* the source and the destination host during the move operation. :param context: The security context :param consumer_uuid: The instance's UUID. :param alloc_request: The JSON body of the request to make to the placement's PUT /allocations API :param project_id: The project_id associated with the allocations. :param user_id: The user_id associated with the allocations. :param allocation_request_version: The microversion used to request the allocations. :param consumer_generation: The expected generation of the consumer. None if a new consumer is expected :returns: True if the allocations were created, False otherwise. :raise AllocationUpdateFailed: If consumer_generation in the alloc_request does not match with the placement view. 
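        For illustration (resource classes and amounts invented), when a move
        is detected the PUT body produced by _move_operation_alloc_request()
        looks roughly like::

            {"allocations": {"<source rp uuid>": {"resources": {"VCPU": 2}},
                             "<dest rp uuid>": {"resources": {"VCPU": 2}}},
             "project_id": "<project>", "user_id": "<user>",
             "consumer_generation": <generation read back from placement>}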
""" # Ensure we don't change the supplied alloc request since it's used in # a loop within the scheduler against multiple instance claims ar = copy.deepcopy(alloc_request) url = '/allocations/%s' % consumer_uuid payload = ar # We first need to determine if this is a move operation and if so # create the "doubled-up" allocation that exists for the duration of # the move operation against both the source and destination hosts r = self.get(url, global_request_id=context.global_id, version=CONSUMER_GENERATION_VERSION) if r.status_code == 200: body = r.json() current_allocs = body['allocations'] if current_allocs: if 'consumer_generation' not in ar: # this is non-forced evacuation. Evacuation does not use # the migration.uuid to hold the source host allocation # therefore when the scheduler calls claim_resources() then # the two allocations need to be combined. Scheduler does # not know that this is not a new consumer as it only sees # allocation candidates. # Therefore we need to use the consumer generation from # the above GET. # If between the GET and the PUT the consumer generation # changes in placement then we raise # AllocationUpdateFailed. # NOTE(gibi): This only detect a small portion of possible # cases when allocation is modified outside of the this # code path. The rest can only be detected if nova would # cache at least the consumer generation of the instance. consumer_generation = body['consumer_generation'] else: # this is forced evacuation and the caller # claim_resources_on_destination() provides the consumer # generation it sees in the conductor when it generates the # request. consumer_generation = ar['consumer_generation'] payload = _move_operation_alloc_request(current_allocs, ar) payload['project_id'] = project_id payload['user_id'] = user_id if (versionutils.convert_version_to_tuple( allocation_request_version) >= versionutils.convert_version_to_tuple( CONSUMER_GENERATION_VERSION)): payload['consumer_generation'] = consumer_generation r = self._put_allocations( context, consumer_uuid, payload, version=allocation_request_version) if r.status_code != 204: err = r.json()['errors'][0] if err['code'] == 'placement.concurrent_update': # NOTE(jaypipes): Yes, it sucks doing string comparison like # this but we have no error codes, only error messages. # TODO(gibi): Use more granular error codes when available if 'consumer generation conflict' in err['detail']: reason = ('another process changed the consumer %s after ' 'the report client read the consumer state ' 'during the claim ' % consumer_uuid) raise exception.AllocationUpdateFailed( consumer_uuid=consumer_uuid, error=reason) # this is not a consumer generation conflict so it can only be # a resource provider generation conflict. The caller does not # provide resource provider generation so this is just a # placement internal race. We can blindly retry locally. reason = ('another process changed the resource providers ' 'involved in our attempt to put allocations for ' 'consumer %s' % consumer_uuid) raise Retry('claim_resources', reason) return r.status_code == 204 def remove_resources_from_instance_allocation( self, context, consumer_uuid, resources): """Removes certain resources from the current allocation of the consumer. :param context: the request context :param consumer_uuid: the uuid of the consumer to update :param resources: a dict of resources. E.g.: { : { : amount : amount } : { : amount } } :raises AllocationUpdateFailed: if the requested resource cannot be removed from the current allocation (e.g. 
rp is missing from the allocation) or there were multiple successive generation conflicts and we ran out of retries. :raises ConsumerAllocationRetrievalFailed: If the current allocation cannot be read from placement. :raises: keystoneauth1.exceptions.base.ClientException on failure to communicate with the placement API """ # NOTE(gibi): It is just a small wrapper to raise instead of return # if we run out of retries. if not self._remove_resources_from_instance_allocation( context, consumer_uuid, resources): error_reason = _("Cannot remove resources %s from the allocation " "due to multiple successive generation conflicts " "in placement.") raise exception.AllocationUpdateFailed( consumer_uuid=consumer_uuid, error=error_reason % resources) @retries def _remove_resources_from_instance_allocation( self, context, consumer_uuid, resources): if not resources: # Nothing to remove so do not query or update allocation in # placement. # The True value is only here because the retry decorator returns # False when it runs out of retries. It would be nicer to raise in # that case too. return True current_allocs = self.get_allocs_for_consumer(context, consumer_uuid) if not current_allocs['allocations']: error_reason = _("Cannot remove resources %(resources)s from " "allocation %(allocations)s. The allocation is " "empty.") raise exception.AllocationUpdateFailed( consumer_uuid=consumer_uuid, error=error_reason % {'resources': resources, 'allocations': current_allocs}) try: for rp_uuid, resources_to_remove in resources.items(): allocation_on_rp = current_allocs['allocations'][rp_uuid] for rc, value in resources_to_remove.items(): allocation_on_rp['resources'][rc] -= value if allocation_on_rp['resources'][rc] < 0: error_reason = _( "Cannot remove resources %(resources)s from " "allocation %(allocations)s. There are not enough " "allocated resources left on %(rp_uuid)s resource " "provider to remove %(amount)d amount of " "%(resource_class)s resources.") raise exception.AllocationUpdateFailed( consumer_uuid=consumer_uuid, error=error_reason % {'resources': resources, 'allocations': current_allocs, 'rp_uuid': rp_uuid, 'amount': value, 'resource_class': rc}) if allocation_on_rp['resources'][rc] == 0: # if no allocation left for this rc then remove it # from the allocation del allocation_on_rp['resources'][rc] except KeyError as e: error_reason = _("Cannot remove resources %(resources)s from " "allocation %(allocations)s. Key %(missing_key)s " "is missing from the allocation.") # rp_uuid is missing from the allocation or resource class is # missing from the allocation raise exception.AllocationUpdateFailed( consumer_uuid=consumer_uuid, error=error_reason % {'resources': resources, 'allocations': current_allocs, 'missing_key': e}) # we have to remove the rps that have no resources left from the # allocation current_allocs['allocations'] = { rp_uuid: alloc for rp_uuid, alloc in current_allocs['allocations'].items() if alloc['resources']} r = self._put_allocations( context, consumer_uuid, current_allocs) if r.status_code != 204: err = r.json()['errors'][0] if err['code'] == 'placement.concurrent_update': reason = ('another process changed the resource providers or ' 'the consumer involved in our attempt to update ' 'allocations for consumer %s so we cannot remove ' 'resources %s from the current allocation %s' % (consumer_uuid, resources, current_allocs)) # NOTE(gibi): automatic retry is meaningful if we can still # remove the resources from the updated allocations.
Retry # works here as this function (re)queries the allocations. raise Retry( 'remove_resources_from_instance_allocation', reason) # It is only here because the retry decorator returns False when runs # out of retries. It would be nicer to raise in that case too. return True def remove_provider_tree_from_instance_allocation(self, context, consumer_uuid, root_rp_uuid): """Removes every allocation from the consumer that is on the specified provider tree. Note that this function does not try to remove allocations from sharing providers. :param context: The security context :param consumer_uuid: The UUID of the consumer to manipulate :param root_rp_uuid: The root of the provider tree :raises: keystoneauth1.exceptions.base.ClientException on failure to communicate with the placement API :raises: ConsumerAllocationRetrievalFailed if this call cannot read the current state of the allocations from placement :raises: ResourceProviderRetrievalFailed if it cannot collect the RPs in the tree specified by root_rp_uuid. """ current_allocs = self.get_allocs_for_consumer(context, consumer_uuid) if not current_allocs['allocations']: LOG.error("Expected to find current allocations for %s, but " "found none.", consumer_uuid) # TODO(gibi): do not return False as none of the callers # do anything with the return value except log return False rps = self.get_providers_in_tree(context, root_rp_uuid) rp_uuids = [rp['uuid'] for rp in rps] # go through the current allocations and remove every RP from it that # belongs to the RP tree identified by the root_rp_uuid parameter has_changes = False for rp_uuid in rp_uuids: changed = bool( current_allocs['allocations'].pop(rp_uuid, None)) has_changes = has_changes or changed # If nothing changed then don't do anything if not has_changes: LOG.warning( "Expected to find allocations referencing resource " "provider tree rooted at %s for %s, but found none.", root_rp_uuid, consumer_uuid) # TODO(gibi): do not return a value as none of the callers # do anything with the return value except logging return True r = self._put_allocations(context, consumer_uuid, current_allocs) # TODO(gibi): do not return a value as none of the callers # do anything with the return value except logging return r.status_code == 204 def _put_allocations( self, context, consumer_uuid, payload, version=CONSUMER_GENERATION_VERSION): url = '/allocations/%s' % consumer_uuid r = self.put(url, payload, version=version, global_request_id=context.global_id) if r.status_code != 204: LOG.warning("Failed to save allocation for %s. Got HTTP %s: %s", consumer_uuid, r.status_code, r.text) return r @safe_connect @retries def move_allocations(self, context, source_consumer_uuid, target_consumer_uuid): """Move allocations from one consumer to the other Note that this call moves the current allocation from the source consumer to the target consumer. If parallel update happens on either consumer during this call then Placement will detect that and this code will raise AllocationMoveFailed. If you want to move a known piece of allocation from source to target then this function might not be what you want as it always moves what source has in Placement. If the target consumer has allocations but the source consumer does not, this method assumes the allocations were already moved and returns True. 
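        Illustratively (placeholder values), the POST /allocations body built
        below looks like::

            {"<source_consumer_uuid>": {"allocations": {},
                                        "project_id": "<project>",
                                        "user_id": "<user>",
                                        "consumer_generation": 3},
             "<target_consumer_uuid>": {"allocations": <source's allocations>,
                                        "project_id": "<project>",
                                        "user_id": "<user>",
                                        "consumer_generation":
                                            <target's, possibly None>}}

        The empty "allocations" dict on the source consumer is what removes
        the allocations there.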
:param context: The security context :param source_consumer_uuid: the UUID of the consumer from which allocations are moving :param target_consumer_uuid: the UUID of the target consumer for the allocations :returns: True if the move was successful (or already done), False otherwise. :raises AllocationMoveFailed: If the source or the target consumer has been modified while this call tries to move allocations. """ source_alloc = self.get_allocs_for_consumer( context, source_consumer_uuid) target_alloc = self.get_allocs_for_consumer( context, target_consumer_uuid) if target_alloc and target_alloc['allocations']: # Check to see if the source allocations still exist because if # they don't they might have already been moved to the target. if not (source_alloc and source_alloc['allocations']): LOG.info('Allocations not found for consumer %s; assuming ' 'they were already moved to consumer %s', source_consumer_uuid, target_consumer_uuid) return True LOG.debug('Overwriting current allocation %(allocation)s on ' 'consumer %(consumer)s', {'allocation': target_alloc, 'consumer': target_consumer_uuid}) new_allocs = { source_consumer_uuid: { # 'allocations': {} means we are removing the allocation from # the source consumer 'allocations': {}, 'project_id': source_alloc['project_id'], 'user_id': source_alloc['user_id'], 'consumer_generation': source_alloc['consumer_generation']}, target_consumer_uuid: { 'allocations': source_alloc['allocations'], # NOTE(gibi): Is there any case when we need to keep the # project_id and user_id of the target allocation that we are # about to overwrite? 'project_id': source_alloc['project_id'], 'user_id': source_alloc['user_id'], 'consumer_generation': target_alloc.get('consumer_generation') } } r = self.post('/allocations', new_allocs, version=CONSUMER_GENERATION_VERSION, global_request_id=context.global_id) if r.status_code != 204: err = r.json()['errors'][0] if err['code'] == 'placement.concurrent_update': # NOTE(jaypipes): Yes, it sucks doing string comparison like # this but we have no error codes, only error messages. # TODO(gibi): Use more granular error codes when available if 'consumer generation conflict' in err['detail']: raise exception.AllocationMoveFailed( source_consumer=source_consumer_uuid, target_consumer=target_consumer_uuid, error=r.text) reason = ('another process changed the resource providers ' 'involved in our attempt to post allocations for ' 'consumer %s' % target_consumer_uuid) raise Retry('move_allocations', reason) else: LOG.warning( 'Unable to post allocations for consumer ' '%(uuid)s (%(code)i %(text)s)', {'uuid': target_consumer_uuid, 'code': r.status_code, 'text': r.text}) return r.status_code == 204 @retries def put_allocations(self, context, consumer_uuid, payload): """Creates allocation records for the supplied consumer UUID based on the provided allocation dict :param context: The security context :param consumer_uuid: The instance's UUID. :param payload: Dict in the format expected by the placement PUT /allocations/{consumer_uuid} API :returns: True if the allocations were created, False otherwise. :raises: Retry if the operation should be retried due to a concurrent resource provider update. 
:raises: AllocationUpdateFailed if placement returns a consumer generation conflict :raises: PlacementAPIConnectFailure on failure to communicate with the placement API """ try: r = self._put_allocations(context, consumer_uuid, payload) except ks_exc.ClientException: raise exception.PlacementAPIConnectFailure() if r.status_code != 204: err = r.json()['errors'][0] # NOTE(jaypipes): Yes, it sucks doing string comparison like this # but we have no error codes, only error messages. # TODO(gibi): Use more granular error codes when available if err['code'] == 'placement.concurrent_update': if 'consumer generation conflict' in err['detail']: raise exception.AllocationUpdateFailed( consumer_uuid=consumer_uuid, error=err['detail']) # this is not a consumer generation conflict so it can only be # a resource provider generation conflict. The caller does not # provide resource provider generation so this is just a # placement internal race. We can blindly retry locally. reason = ('another process changed the resource providers ' 'involved in our attempt to put allocations for ' 'consumer %s' % consumer_uuid) raise Retry('put_allocations', reason) return r.status_code == 204 @safe_connect def delete_allocation_for_instance(self, context, uuid, consumer_type='instance'): """Delete the instance allocation from placement :param context: The security context :param uuid: the instance or migration UUID which will be used as the consumer UUID towards placement :param consumer_type: The type of the consumer specified by uuid. 'instance' or 'migration' (Default: instance) :return: Returns True if the allocation is successfully deleted by this call. Returns False if the allocation does not exist. :raises AllocationDeleteFailed: If the allocation cannot be read from placement or it is changed by another process while we tried to delete it. """ url = '/allocations/%s' % uuid # We read the consumer generation then try to put an empty allocation # for that consumer. If between the GET and the PUT the consumer # generation changes then we raise AllocationDeleteFailed. # NOTE(gibi): This only detect a small portion of possible cases when # allocation is modified outside of the delete code path. The rest can # only be detected if nova would cache at least the consumer generation # of the instance. # NOTE(gibi): placement does not return 404 for non-existing consumer # but returns an empty consumer instead. Putting an empty allocation to # that non-existing consumer won't be 404 or other error either. r = self.get(url, global_request_id=context.global_id, version=CONSUMER_GENERATION_VERSION) if not r: # at the moment there is no way placement returns a failure so we # could even delete this code LOG.warning('Unable to delete allocation for %(consumer_type)s ' '%(uuid)s. 
Got %(code)i while retrieving existing ' 'allocations: (%(text)s)', {'consumer_type': consumer_type, 'uuid': uuid, 'code': r.status_code, 'text': r.text}) raise exception.AllocationDeleteFailed(consumer_uuid=uuid, error=r.text) allocations = r.json() if allocations['allocations'] == {}: # the consumer did not exist in the first place LOG.debug('Cannot delete allocation for %s consumer in placement ' 'as consumer does not exist', uuid) return False # removing all resources from the allocation will auto delete the # consumer in placement allocations['allocations'] = {} r = self.put(url, allocations, global_request_id=context.global_id, version=CONSUMER_GENERATION_VERSION) if r.status_code == 204: LOG.info('Deleted allocation for %(consumer_type)s %(uuid)s', {'consumer_type': consumer_type, 'uuid': uuid}) return True else: LOG.warning('Unable to delete allocation for %(consumer_type)s ' '%(uuid)s: (%(code)i %(text)s)', {'consumer_type': consumer_type, 'uuid': uuid, 'code': r.status_code, 'text': r.text}) raise exception.AllocationDeleteFailed(consumer_uuid=uuid, error=r.text) def get_allocations_for_resource_provider(self, context, rp_uuid): """Retrieves the allocations for a specific provider. :param context: The nova.context.RequestContext auth context :param rp_uuid: The UUID of the provider. :return: ProviderAllocInfo namedtuple. :raises: keystoneauth1.exceptions.base.ClientException on failure to communicate with the placement API :raises: ResourceProviderAllocationRetrievalFailed if the placement API call fails. """ url = '/resource_providers/%s/allocations' % rp_uuid resp = self.get(url, global_request_id=context.global_id) if not resp: raise exception.ResourceProviderAllocationRetrievalFailed( rp_uuid=rp_uuid, error=resp.text) data = resp.json() return ProviderAllocInfo(allocations=data['allocations']) def get_allocations_for_provider_tree(self, context, nodename): """Retrieve allocation records associated with all providers in the provider tree. This method uses the cache exclusively to discover providers. The caller must ensure that the cache is populated. This method is (and should remain) used exclusively in the reshaper flow by the resource tracker. Note that, in addition to allocations on providers in this compute node's provider tree, this method will return allocations on sharing providers if those allocations are associated with a consumer on this compute node. This is intentional and desirable. But it may also return allocations belonging to other hosts, e.g. if this is happening in the middle of an evacuate. ComputeDriver.update_provider_tree is supposed to ignore such allocations if they appear. :param context: The security context :param nodename: The name of a node for whose tree we are getting allocations. :returns: A dict, keyed by consumer UUID, of allocation records: { $CONSUMER_UUID: { # The shape of each "allocations" dict below is identical # to the return from GET /allocations/{consumer_uuid} "allocations": { $RP_UUID: { "generation": $RP_GEN, "resources": { $RESOURCE_CLASS: $AMOUNT, ... }, }, ... }, "project_id": $PROJ_ID, "user_id": $USER_ID, "consumer_generation": $CONSUMER_GEN, }, ... } :raises: keystoneauth1.exceptions.ClientException if placement API communication fails. :raises: ResourceProviderAllocationRetrievalFailed if a placement API call fails. :raises: ValueError if there's no provider with the specified nodename. """ # NOTE(efried): Despite our best efforts, there are some scenarios # (e.g. 
mid-evacuate) where we can still wind up returning allocations # against providers belonging to other hosts. We count on the consumer # of this information (i.e. the reshaper flow of a virt driver's # update_provider_tree) to ignore allocations associated with any # provider it is not reshaping - and it should never be reshaping # providers belonging to other hosts. # We can't get *all* allocations for associated sharing providers # because some of those will belong to consumers on other hosts. So we # have to discover all the consumers associated with the providers in # the "local" tree (we use the nodename to figure out which providers # are "local"). # All we want to do at this point is accumulate the set of consumers we # care about. consumers = set() # TODO(efried): This could be more efficient if placement offered an # operation like GET /allocations?rp_uuid=in: for u in self._provider_tree.get_provider_uuids(name_or_uuid=nodename): alloc_info = self.get_allocations_for_resource_provider(context, u) # The allocations dict is keyed by consumer UUID consumers.update(alloc_info.allocations) # Now get all the allocations for each of these consumers to build the # result. This will include allocations on sharing providers, which is # intentional and desirable. But it may also include allocations # belonging to other hosts, e.g. if this is happening in the middle of # an evacuate. ComputeDriver.update_provider_tree is supposed to ignore # such allocations if they appear. # TODO(efried): This could be more efficient if placement offered an # operation like GET /allocations?consumer_uuid=in: return {consumer: self.get_allocs_for_consumer(context, consumer) for consumer in consumers} def delete_resource_provider(self, context, compute_node, cascade=False): """Deletes the ResourceProvider record for the compute_node. :param context: The security context :param compute_node: The nova.objects.ComputeNode object that is the resource provider being deleted. :param cascade: Boolean value that, when True, will first delete any associated Allocation records for the compute node :raises: keystoneauth1.exceptions.base.ClientException on failure to communicate with the placement API """ nodename = compute_node.hypervisor_hostname host = compute_node.host rp_uuid = compute_node.uuid if cascade: # Delete any allocations for this resource provider. # Since allocations are by consumer, we get the consumers on this # host, which are its instances. # NOTE(mriedem): This assumes the only allocations on this node # are instances, but there could be migration consumers if the # node is deleted during a migration or allocations from an # evacuated host (bug 1829479). Obviously an admin shouldn't # do that but...you know. I guess the provider deletion should fail # in that case which is what we'd want to happen. instance_uuids = objects.InstanceList.get_uuids_by_host_and_node( context, host, nodename) for instance_uuid in instance_uuids: self.delete_allocation_for_instance(context, instance_uuid) try: self._delete_provider(rp_uuid, global_request_id=context.global_id) except (exception.ResourceProviderInUse, exception.ResourceProviderDeletionFailed): # TODO(efried): Raise these. Right now this is being left a no-op # for backward compatibility. pass def get_provider_by_name(self, context, name): """Queries the placement API for resource provider information matching a supplied name. 
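        For example (illustrative values), a successful lookup resolves to the
        single matching record from ``GET /resource_providers?name=<name>``::

            {"uuid": "30742363-...", "name": "compute1.example.com",
             "generation": 7, ...}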
:param context: The security context :param name: Name of the resource provider to look up :return: A dict of resource provider information including the provider's UUID and generation :raises: `exception.ResourceProviderNotFound` when no such provider was found :raises: PlacementAPIConnectFailure if there was an issue making the API call to placement. """ try: resp = self.get("/resource_providers?name=%s" % name, global_request_id=context.global_id) except ks_exc.ClientException as ex: LOG.error('Failed to get resource provider by name: %s. Error: %s', name, six.text_type(ex)) raise exception.PlacementAPIConnectFailure() if resp.status_code == 200: data = resp.json() records = data['resource_providers'] num_recs = len(records) if num_recs == 1: return records[0] elif num_recs > 1: msg = ("Found multiple resource provider records for resource " "provider name %(rp_name)s: %(rp_uuids)s. " "This should not happen.") LOG.warning(msg, { 'rp_name': name, 'rp_uuids': ','.join([r['uuid'] for r in records]) }) elif resp.status_code != 404: msg = ("Failed to retrieve resource provider information by name " "for resource provider %s. Got %d: %s") LOG.warning(msg, name, resp.status_code, resp.text) raise exception.ResourceProviderNotFound(name_or_uuid=name) @retrying.retry(stop_max_attempt_number=4, retry_on_exception=lambda e: isinstance( e, exception.ResourceProviderUpdateConflict)) def aggregate_add_host(self, context, agg_uuid, host_name=None, rp_uuid=None): """Looks up a resource provider by the supplied host name, and adds the aggregate with supplied UUID to that resource provider. :note: This method does NOT use the cached provider tree. It is only called from the Compute API when a nova host aggregate is modified :param context: The security context :param agg_uuid: UUID of the aggregate being modified :param host_name: Name of the nova-compute service worker to look up a resource provider for. Either host_name or rp_uuid is required. :param rp_uuid: UUID of the resource provider to add to the aggregate. Either host_name or rp_uuid is required. :raises: `exceptions.ResourceProviderNotFound` if no resource provider matching the host name could be found from the placement API :raises: `exception.ResourceProviderAggregateRetrievalFailed` when failing to get a provider's existing aggregates :raises: `exception.ResourceProviderUpdateFailed` if there was a failure attempting to save the provider aggregates :raises: `exception.ResourceProviderUpdateConflict` if a concurrent update to the provider was detected. :raises: PlacementAPIConnectFailure if there was an issue making an API call to placement. """ if host_name is None and rp_uuid is None: raise ValueError(_("Either host_name or rp_uuid is required")) if rp_uuid is None: rp_uuid = self.get_provider_by_name(context, host_name)['uuid'] # Now attempt to add the aggregate to the resource provider. 
We don't # want to overwrite any other aggregates the provider may be associated # with, however, so we first grab the list of aggregates for this # provider and add the aggregate to the list of aggregates it already # has agg_info = self._get_provider_aggregates(context, rp_uuid) # @safe_connect can make the above return None if agg_info is None: raise exception.PlacementAPIConnectFailure() existing_aggs, gen = agg_info.aggregates, agg_info.generation if agg_uuid in existing_aggs: return new_aggs = existing_aggs | set([agg_uuid]) self.set_aggregates_for_provider( context, rp_uuid, new_aggs, use_cache=False, generation=gen) @retrying.retry(stop_max_attempt_number=4, retry_on_exception=lambda e: isinstance( e, exception.ResourceProviderUpdateConflict)) def aggregate_remove_host(self, context, agg_uuid, host_name): """Looks up a resource provider by the supplied host name, and removes the aggregate with supplied UUID from that resource provider. :note: This method does NOT use the cached provider tree. It is only called from the Compute API when a nova host aggregate is modified :param context: The security context :param agg_uuid: UUID of the aggregate being modified :param host_name: Name of the nova-compute service worker to look up a resource provider for :raises: `exceptions.ResourceProviderNotFound` if no resource provider matching the host name could be found from the placement API :raises: `exception.ResourceProviderAggregateRetrievalFailed` when failing to get a provider's existing aggregates :raises: `exception.ResourceProviderUpdateFailed` if there was a failure attempting to save the provider aggregates :raises: `exception.ResourceProviderUpdateConflict` if a concurrent update to the provider was detected. :raises: PlacementAPIConnectFailure if there was an issue making an API call to placement. """ rp_uuid = self.get_provider_by_name(context, host_name)['uuid'] # Now attempt to remove the aggregate from the resource provider. We # don't want to overwrite any other aggregates the provider may be # associated with, however, so we first grab the list of aggregates for # this provider and remove the aggregate from the list of aggregates it # already has agg_info = self._get_provider_aggregates(context, rp_uuid) # @safe_connect can make the above return None if agg_info is None: raise exception.PlacementAPIConnectFailure() existing_aggs, gen = agg_info.aggregates, agg_info.generation if agg_uuid not in existing_aggs: return new_aggs = existing_aggs - set([agg_uuid]) self.set_aggregates_for_provider( context, rp_uuid, new_aggs, use_cache=False, generation=gen) @staticmethod def _handle_usages_error_from_placement(resp, project_id, user_id=None): msg = ('[%(placement_req_id)s] Failed to retrieve usages for project ' '%(project_id)s and user %(user_id)s. 
Got %(status_code)d: ' '%(err_text)s') args = {'placement_req_id': get_placement_request_id(resp), 'project_id': project_id, 'user_id': user_id or 'N/A', 'status_code': resp.status_code, 'err_text': resp.text} LOG.error(msg, args) raise exception.UsagesRetrievalFailed(project_id=project_id, user_id=user_id or 'N/A') @retrying.retry(stop_max_attempt_number=4, retry_on_exception=lambda e: isinstance( e, ks_exc.ConnectFailure)) def _get_usages(self, context, project_id, user_id=None): url = '/usages?project_id=%s' % project_id if user_id: url = ''.join([url, '&user_id=%s' % user_id]) return self.get(url, version=GET_USAGES_VERSION, global_request_id=context.global_id) def get_usages_counts_for_quota(self, context, project_id, user_id=None): """Get the usages counts for the purpose of counting quota usage. :param context: The request context :param project_id: The project_id to count across :param user_id: The user_id to count across :returns: A dict containing the project-scoped and user-scoped counts if user_id is specified. For example: {'project': {'cores': <count across project>, 'ram': <count across project>}, 'user': {'cores': <count across user>, 'ram': <count across user>}} :raises: `exception.UsagesRetrievalFailed` if a placement API call fails """ def _get_core_usages(usages): """For backward compatibility with existing behavior, the quota limit applies to flavor.vcpus, which covers both shared and dedicated CPUs, so we need to count both orc.VCPU and orc.PCPU here. """ vcpus = usages['usages'].get(orc.VCPU, 0) pcpus = usages['usages'].get(orc.PCPU, 0) return vcpus + pcpus total_counts = {'project': {}} # First query counts across all users of a project LOG.debug('Getting usages for project_id %s from placement', project_id) resp = self._get_usages(context, project_id) if resp: data = resp.json() # The response from placement will not contain a resource class if # there is no usage. We can consider a missing class to be 0 usage. cores = _get_core_usages(data) ram = data['usages'].get(orc.MEMORY_MB, 0) total_counts['project'] = {'cores': cores, 'ram': ram} else: self._handle_usages_error_from_placement(resp, project_id) # If specified, second query counts across one user in the project if user_id: LOG.debug('Getting usages for project_id %s and user_id %s from ' 'placement', project_id, user_id) resp = self._get_usages(context, project_id, user_id=user_id) if resp: data = resp.json() cores = _get_core_usages(data) ram = data['usages'].get(orc.MEMORY_MB, 0) total_counts['user'] = {'cores': cores, 'ram': ram} else: self._handle_usages_error_from_placement(resp, project_id, user_id=user_id) return total_counts ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/scheduler/driver.py0000664000175000017500000000523600000000000017362 0ustar00zuulzuul00000000000000# Copyright (c) 2010 OpenStack Foundation # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
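# --- Illustrative sketch (not part of the upstream module) ------------------
# Standalone restatement of the counting done by get_usages_counts_for_quota()
# above.  The payload mirrors the shape of a placement ``GET /usages``
# response; the numbers themselves are made up.
def _count_cores_and_ram(payload):
    usages = payload['usages']
    # Quota limits apply to flavor.vcpus, which covers both shared (VCPU) and
    # dedicated (PCPU) CPUs, so the two classes are summed.  A missing
    # resource class simply means zero usage.
    cores = usages.get('VCPU', 0) + usages.get('PCPU', 0)
    ram = usages.get('MEMORY_MB', 0)
    return {'cores': cores, 'ram': ram}

assert _count_cores_and_ram(
    {'usages': {'VCPU': 4, 'PCPU': 2, 'MEMORY_MB': 6144}}
) == {'cores': 6, 'ram': 6144}
# ----------------------------------------------------------------------------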
""" Scheduler base class that all Schedulers should inherit from """ import abc import six from nova import objects from nova.scheduler import host_manager from nova import servicegroup @six.add_metaclass(abc.ABCMeta) class Scheduler(object): """The base class that all Scheduler classes should inherit from.""" # TODO(mriedem): We should remove this flag now so that all scheduler # drivers, both in-tree and out-of-tree, must rely on placement for # scheduling decisions. We're likely going to have more and more code # over time that relies on the scheduler creating allocations and it # will not be sustainable to try and keep compatibility code around for # scheduler drivers that do not create allocations in Placement. USES_ALLOCATION_CANDIDATES = True """Indicates that the scheduler driver calls the Placement API for allocation candidates and uses those allocation candidates in its decision-making. """ def __init__(self): self.host_manager = host_manager.HostManager() self.servicegroup_api = servicegroup.API() def run_periodic_tasks(self, context): """Manager calls this so drivers can perform periodic tasks.""" pass def hosts_up(self, context, topic): """Return the list of hosts that have a running service for topic.""" services = objects.ServiceList.get_by_topic(context, topic) return [service.host for service in services if self.servicegroup_api.service_is_up(service)] @abc.abstractmethod def select_destinations(self, context, spec_obj, instance_uuids, alloc_reqs_by_rp_uuid, provider_summaries, allocation_request_version=None, return_alternates=False): """Returns a list of lists of Selection objects that have been chosen by the scheduler driver, one for each requested instance. """ return [] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/scheduler/filter_scheduler.py0000664000175000017500000006365000000000000021416 0ustar00zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ The FilterScheduler is for creating instances locally. You can customize this scheduler by specifying your own Host Filters and Weighing Functions. 
""" import random from oslo_log import log as logging from six.moves import range from nova.compute import utils as compute_utils import nova.conf from nova import exception from nova.i18n import _ from nova import objects from nova.objects import fields as fields_obj from nova import rpc from nova.scheduler.client import report from nova.scheduler import driver from nova.scheduler import utils CONF = nova.conf.CONF LOG = logging.getLogger(__name__) class FilterScheduler(driver.Scheduler): """Scheduler that can be used for filtering and weighing.""" def __init__(self, *args, **kwargs): super(FilterScheduler, self).__init__(*args, **kwargs) self.notifier = rpc.get_notifier('scheduler') self.placement_client = report.SchedulerReportClient() def select_destinations(self, context, spec_obj, instance_uuids, alloc_reqs_by_rp_uuid, provider_summaries, allocation_request_version=None, return_alternates=False): """Returns a list of lists of Selection objects, which represent the hosts and (optionally) alternates for each instance. :param context: The RequestContext object :param spec_obj: The RequestSpec object :param instance_uuids: List of UUIDs, one for each value of the spec object's num_instances attribute :param alloc_reqs_by_rp_uuid: Optional dict, keyed by resource provider UUID, of the allocation_requests that may be used to claim resources against matched hosts. If None, indicates either the placement API wasn't reachable or that there were no allocation_requests returned by the placement API. If the latter, the provider_summaries will be an empty dict, not None. :param provider_summaries: Optional dict, keyed by resource provider UUID, of information that will be used by the filters/weighers in selecting matching hosts for a request. If None, indicates that the scheduler driver should grab all compute node information locally and that the Placement API is not used. If an empty dict, indicates the Placement API returned no potential matches for the requested resources. :param allocation_request_version: The microversion used to request the allocations. :param return_alternates: When True, zero or more alternate hosts are returned with each selected host. The number of alternates is determined by the configuration option `CONF.scheduler.max_attempts`. """ self.notifier.info( context, 'scheduler.select_destinations.start', dict(request_spec=spec_obj.to_legacy_request_spec_dict())) compute_utils.notify_about_scheduler_action( context=context, request_spec=spec_obj, action=fields_obj.NotificationAction.SELECT_DESTINATIONS, phase=fields_obj.NotificationPhase.START) host_selections = self._schedule(context, spec_obj, instance_uuids, alloc_reqs_by_rp_uuid, provider_summaries, allocation_request_version, return_alternates) self.notifier.info( context, 'scheduler.select_destinations.end', dict(request_spec=spec_obj.to_legacy_request_spec_dict())) compute_utils.notify_about_scheduler_action( context=context, request_spec=spec_obj, action=fields_obj.NotificationAction.SELECT_DESTINATIONS, phase=fields_obj.NotificationPhase.END) return host_selections def _schedule(self, context, spec_obj, instance_uuids, alloc_reqs_by_rp_uuid, provider_summaries, allocation_request_version=None, return_alternates=False): """Returns a list of lists of Selection objects. :param context: The RequestContext object :param spec_obj: The RequestSpec object :param instance_uuids: List of instance UUIDs to place or move. 
:param alloc_reqs_by_rp_uuid: Optional dict, keyed by resource provider UUID, of the allocation_requests that may be used to claim resources against matched hosts. If None, indicates either the placement API wasn't reachable or that there were no allocation_requests returned by the placement API. If the latter, the provider_summaries will be an empty dict, not None. :param provider_summaries: Optional dict, keyed by resource provider UUID, of information that will be used by the filters/weighers in selecting matching hosts for a request. If None, indicates that the scheduler driver should grab all compute node information locally and that the Placement API is not used. If an empty dict, indicates the Placement API returned no potential matches for the requested resources. :param allocation_request_version: The microversion used to request the allocations. :param return_alternates: When True, zero or more alternate hosts are returned with each selected host. The number of alternates is determined by the configuration option `CONF.scheduler.max_attempts`. """ elevated = context.elevated() # Find our local list of acceptable hosts by repeatedly # filtering and weighing our options. Each time we choose a # host, we virtually consume resources on it so subsequent # selections can adjust accordingly. # Note: remember, we are using a generator-iterator here. So only # traverse this list once. This can bite you if the hosts # are being scanned in a filter or weighing function. hosts = self._get_all_host_states(elevated, spec_obj, provider_summaries) # NOTE(sbauza): The RequestSpec.num_instances field contains the number # of instances created when the RequestSpec was used to first boot some # instances. This is incorrect when doing a move or resize operation, # so prefer the length of instance_uuids unless it is None. num_instances = (len(instance_uuids) if instance_uuids else spec_obj.num_instances) # For each requested instance, we want to return a host whose resources # for the instance have been claimed, along with zero or more # alternates. These alternates will be passed to the cell that the # selected host is in, so that if for some reason the build fails, the # cell conductor can retry building the instance on one of these # alternates instead of having to simply fail. The number of alternates # is based on CONF.scheduler.max_attempts; note that if there are not # enough filtered hosts to provide the full number of alternates, the # list of hosts may be shorter than this amount. num_alts = (CONF.scheduler.max_attempts - 1 if return_alternates else 0) if (instance_uuids is None or not self.USES_ALLOCATION_CANDIDATES or alloc_reqs_by_rp_uuid is None): # We still support external scheduler drivers that don't use the # placement API (and set USES_ALLOCATION_CANDIDATE = False) and # therefore we skip all the claiming logic for those scheduler # drivers. Also, if there was a problem communicating with the # placement API, alloc_reqs_by_rp_uuid will be None, so we skip # claiming in that case as well. In the case where instance_uuids # is None, that indicates an older conductor, so we need to return # the objects without alternates. They will be converted back to # the older dict format representing HostState objects. return self._legacy_find_hosts(context, num_instances, spec_obj, hosts, num_alts, instance_uuids=instance_uuids) # A list of the instance UUIDs that were successfully claimed against # in the placement API. 
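# --- Illustrative sketch (not part of the upstream module) ------------------
# The two sizing decisions made earlier in _schedule(), restated with made-up
# inputs: prefer the length of instance_uuids (authoritative for moves and
# resizes) over the stale RequestSpec.num_instances, and derive the alternate
# count from the configured max_attempts only when alternates were requested.
instance_uuids_example = ['uuid-1', 'uuid-2']   # placeholder identifiers
spec_num_instances = 1        # value recorded at the original boot request
max_attempts = 3              # stands in for CONF.scheduler.max_attempts
return_alternates = True

num_instances = (len(instance_uuids_example) if instance_uuids_example
                 else spec_num_instances)
num_alts = max_attempts - 1 if return_alternates else 0
assert (num_instances, num_alts) == (2, 2)
# ----------------------------------------------------------------------------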
If we are not able to successfully claim for # all involved instances, we use this list to remove those allocations # before returning claimed_instance_uuids = [] # The list of hosts that have been selected (and claimed). claimed_hosts = [] for num, instance_uuid in enumerate(instance_uuids): # In a multi-create request, the first request spec from the list # is passed to the scheduler and that request spec's instance_uuid # might not be the same as the instance we're processing, so we # update the instance_uuid in that case before passing the request # spec to filters since at least one filter # (ServerGroupAntiAffinityFilter) depends on that information being # accurate. spec_obj.instance_uuid = instance_uuid # Reset the field so it's not persisted accidentally. spec_obj.obj_reset_changes(['instance_uuid']) hosts = self._get_sorted_hosts(spec_obj, hosts, num) if not hosts: # NOTE(jaypipes): If we get here, that means not all instances # in instance_uuids were able to be matched to a selected host. # Any allocations will be cleaned up in the # _ensure_sufficient_hosts() call. break # Attempt to claim the resources against one or more resource # providers, looping over the sorted list of possible hosts # looking for an allocation_request that contains that host's # resource provider UUID claimed_host = None for host in hosts: cn_uuid = host.uuid if cn_uuid not in alloc_reqs_by_rp_uuid: msg = ("A host state with uuid = '%s' that did not have a " "matching allocation_request was encountered while " "scheduling. This host was skipped.") LOG.debug(msg, cn_uuid) continue alloc_reqs = alloc_reqs_by_rp_uuid[cn_uuid] # TODO(jaypipes): Loop through all allocation_requests instead # of just trying the first one. For now, since we'll likely # want to order the allocation_requests in the future based on # information in the provider summaries, we'll just try to # claim resources using the first allocation_request alloc_req = alloc_reqs[0] if utils.claim_resources(elevated, self.placement_client, spec_obj, instance_uuid, alloc_req, allocation_request_version=allocation_request_version): claimed_host = host break if claimed_host is None: # We weren't able to claim resources in the placement API # for any of the sorted hosts identified. So, clean up any # successfully-claimed resources for prior instances in # this request and return an empty list which will cause # select_destinations() to raise NoValidHost LOG.debug("Unable to successfully claim against any host.") break claimed_instance_uuids.append(instance_uuid) claimed_hosts.append(claimed_host) # Now consume the resources so the filter/weights will change for # the next instance. self._consume_selected_host(claimed_host, spec_obj, instance_uuid=instance_uuid) # Check if we were able to fulfill the request. If not, this call will # raise a NoValidHost exception. self._ensure_sufficient_hosts(context, claimed_hosts, num_instances, claimed_instance_uuids) # We have selected and claimed hosts for each instance. Now we need to # find alternates for each host. selections_to_return = self._get_alternate_hosts( claimed_hosts, spec_obj, hosts, num, num_alts, alloc_reqs_by_rp_uuid, allocation_request_version) return selections_to_return def _ensure_sufficient_hosts(self, context, hosts, required_count, claimed_uuids=None): """Checks that we have selected a host for each requested instance. If not, log this failure, remove allocations for any claimed instances, and raise a NoValidHost exception. """ if len(hosts) == required_count: # We have enough hosts. 
return if claimed_uuids: self._cleanup_allocations(context, claimed_uuids) # NOTE(Rui Chen): If multiple creates failed, set the updated time # of selected HostState to None so that these HostStates are # refreshed according to database in next schedule, and release # the resource consumed by instance in the process of selecting # host. for host in hosts: host.updated = None # Log the details but don't put those into the reason since # we don't want to give away too much information about our # actual environment. LOG.debug('There are %(hosts)d hosts available but ' '%(required_count)d instances requested to build.', {'hosts': len(hosts), 'required_count': required_count}) reason = _('There are not enough hosts available.') raise exception.NoValidHost(reason=reason) def _cleanup_allocations(self, context, instance_uuids): """Removes allocations for the supplied instance UUIDs.""" if not instance_uuids: return LOG.debug("Cleaning up allocations for %s", instance_uuids) for uuid in instance_uuids: self.placement_client.delete_allocation_for_instance(context, uuid) def _legacy_find_hosts(self, context, num_instances, spec_obj, hosts, num_alts, instance_uuids=None): """Some schedulers do not do claiming, or we can sometimes not be able to if the Placement service is not reachable. Additionally, we may be working with older conductors that don't pass in instance_uuids. """ # The list of hosts selected for each instance selected_hosts = [] for num in range(num_instances): instance_uuid = instance_uuids[num] if instance_uuids else None if instance_uuid: # Update the RequestSpec.instance_uuid before sending it to # the filters in case we're doing a multi-create request, but # don't persist the change. spec_obj.instance_uuid = instance_uuid spec_obj.obj_reset_changes(['instance_uuid']) hosts = self._get_sorted_hosts(spec_obj, hosts, num) if not hosts: # No hosts left, so break here, and the # _ensure_sufficient_hosts() call below will handle this. break selected_host = hosts[0] selected_hosts.append(selected_host) self._consume_selected_host(selected_host, spec_obj, instance_uuid=instance_uuid) # Check if we were able to fulfill the request. If not, this call will # raise a NoValidHost exception. self._ensure_sufficient_hosts(context, selected_hosts, num_instances) # This the overall list of values to be returned. There will be one # item per instance, and each item will be a list of Selection objects # representing the selected host along with zero or more alternates # from the same cell. selections_to_return = self._get_alternate_hosts(selected_hosts, spec_obj, hosts, num, num_alts) return selections_to_return @staticmethod def _consume_selected_host(selected_host, spec_obj, instance_uuid=None): LOG.debug("Selected host: %(host)s", {'host': selected_host}, instance_uuid=instance_uuid) selected_host.consume_from_request(spec_obj) # If we have a server group, add the selected host to it for the # (anti-)affinity filters to filter out hosts for subsequent instances # in a multi-create request. if spec_obj.instance_group is not None: spec_obj.instance_group.hosts.append(selected_host.host) # hosts has to be not part of the updates when saving spec_obj.instance_group.obj_reset_changes(['hosts']) # The ServerGroupAntiAffinityFilter also relies on # HostState.instances being accurate within a multi-create request. if instance_uuid and instance_uuid not in selected_host.instances: # Set a stub since ServerGroupAntiAffinityFilter only cares # about the keys. 
selected_host.instances[instance_uuid] = ( objects.Instance(uuid=instance_uuid)) def _get_alternate_hosts(self, selected_hosts, spec_obj, hosts, index, num_alts, alloc_reqs_by_rp_uuid=None, allocation_request_version=None): # We only need to filter/weigh the hosts again if we're dealing with # more than one instance and are going to be picking alternates. if index > 0 and num_alts > 0: # The selected_hosts have all had resources 'claimed' via # _consume_selected_host, so we need to filter/weigh and sort the # hosts again to get an accurate count for alternates. hosts = self._get_sorted_hosts(spec_obj, hosts, index) # This is the overall list of values to be returned. There will be one # item per instance, and each item will be a list of Selection objects # representing the selected host along with alternates from the same # cell. selections_to_return = [] for selected_host in selected_hosts: # This is the list of hosts for one particular instance. if alloc_reqs_by_rp_uuid: selected_alloc_req = alloc_reqs_by_rp_uuid.get( selected_host.uuid)[0] else: selected_alloc_req = None selection = objects.Selection.from_host_state(selected_host, allocation_request=selected_alloc_req, allocation_request_version=allocation_request_version) selected_plus_alts = [selection] cell_uuid = selected_host.cell_uuid # This will populate the alternates with many of the same unclaimed # hosts. This is OK, as it should be rare for a build to fail. And # if there are not enough hosts to fully populate the alternates, # it's fine to return fewer than we'd like. Note that we exclude # any claimed host from consideration as an alternate because it # will have had its resources reduced and will have a much lower # chance of being able to fit another instance on it. for host in hosts: if len(selected_plus_alts) >= num_alts + 1: break if host.cell_uuid == cell_uuid and host not in selected_hosts: if alloc_reqs_by_rp_uuid is not None: alt_uuid = host.uuid if alt_uuid not in alloc_reqs_by_rp_uuid: msg = ("A host state with uuid = '%s' that did " "not have a matching allocation_request " "was encountered while scheduling. This " "host was skipped.") LOG.debug(msg, alt_uuid) continue # TODO(jaypipes): Loop through all allocation_requests # instead of just trying the first one. For now, since # we'll likely want to order the allocation_requests in # the future based on information in the provider # summaries, we'll just try to claim resources using # the first allocation_request alloc_req = alloc_reqs_by_rp_uuid[alt_uuid][0] alt_selection = ( objects.Selection.from_host_state(host, alloc_req, allocation_request_version)) else: alt_selection = objects.Selection.from_host_state(host) selected_plus_alts.append(alt_selection) selections_to_return.append(selected_plus_alts) return selections_to_return def _get_sorted_hosts(self, spec_obj, host_states, index): """Returns a list of HostState objects that match the required scheduling constraints for the request spec object and have been sorted according to the weighers. """ filtered_hosts = self.host_manager.get_filtered_hosts(host_states, spec_obj, index) LOG.debug("Filtered %(hosts)s", {'hosts': filtered_hosts}) if not filtered_hosts: return [] weighed_hosts = self.host_manager.get_weighed_hosts(filtered_hosts, spec_obj) if CONF.filter_scheduler.shuffle_best_same_weighed_hosts: # NOTE(pas-ha) Randomize best hosts, relying on weighed_hosts # being already sorted by weight in descending order. 
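# --- Illustrative sketch (not part of the upstream module) ------------------
# Worked example of the randomisation described in this comment block: the
# hosts sharing the top weight are shuffled in place, then one host is picked
# from the first ``host_subset_size`` entries and moved to the front.  Plain
# (name, weight) tuples stand in for the WeighedHost objects.
import random

weighed = [('host1', 1.0), ('host2', 1.0), ('host3', 0.5)]  # already sorted
best = [h for h in weighed if h[1] == weighed[0][1]]
random.shuffle(best)
weighed = best + weighed[len(best):]

host_subset_size = 2                 # stands in for the config option
chosen = random.choice(weighed[:host_subset_size])
ordered = [chosen] + [h for h in weighed if h is not chosen]
assert ordered[0][1] == 1.0 and len(ordered) == 3
# ----------------------------------------------------------------------------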
# This decreases possible contention and rescheduling attempts # when there is a large number of hosts having the same best # weight, especially so when host_subset_size is 1 (default) best_hosts = [w for w in weighed_hosts if w.weight == weighed_hosts[0].weight] random.shuffle(best_hosts) weighed_hosts = best_hosts + weighed_hosts[len(best_hosts):] # Log the weighed hosts before stripping off the wrapper class so that # the weight value gets logged. LOG.debug("Weighed %(hosts)s", {'hosts': weighed_hosts}) # Strip off the WeighedHost wrapper class... weighed_hosts = [h.obj for h in weighed_hosts] # We randomize the first element in the returned list to alleviate # congestion where the same host is consistently selected among # numerous potential hosts for similar request specs. host_subset_size = CONF.filter_scheduler.host_subset_size if host_subset_size < len(weighed_hosts): weighed_subset = weighed_hosts[0:host_subset_size] else: weighed_subset = weighed_hosts chosen_host = random.choice(weighed_subset) weighed_hosts.remove(chosen_host) return [chosen_host] + weighed_hosts def _get_all_host_states(self, context, spec_obj, provider_summaries): """Template method, so a subclass can implement caching.""" # NOTE(jaypipes): provider_summaries being None is treated differently # from an empty dict. provider_summaries is None when we want to grab # all compute nodes, for instance when using a scheduler driver that # sets USES_ALLOCATION_CANDIDATES=False. # The provider_summaries variable will be an empty dict when the # Placement API found no providers that match the requested # constraints, which in turn makes compute_uuids an empty list and # get_host_states_by_uuids will return an empty generator-iterator # also, which will eventually result in a NoValidHost error. compute_uuids = None if provider_summaries is not None: compute_uuids = list(provider_summaries.keys()) return self.host_manager.get_host_states_by_uuids(context, compute_uuids, spec_obj) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3824697 nova-21.2.4/nova/scheduler/filters/0000775000175000017500000000000000000000000017157 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/filters/__init__.py0000664000175000017500000000460100000000000021271 0ustar00zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Scheduler host filters """ from nova import filters class BaseHostFilter(filters.BaseFilter): """Base class for host filters.""" # This is set to True if this filter should be run for rebuild. # For example, with rebuild, we need to ask the scheduler if the # existing host is still legit for a rebuild with the new image and # other parameters. We care about running policy filters (i.e. # ImagePropertiesFilter) but not things that check usage on the # existing compute node, etc. 
RUN_ON_REBUILD = False def _filter_one(self, obj, spec): """Return True if the object passes the filter, otherwise False.""" # Do this here so we don't get scheduler.filters.utils from nova.scheduler import utils if not self.RUN_ON_REBUILD and utils.request_is_rebuild(spec): # If we don't filter, default to passing the host. return True else: # We are either a rebuild filter, in which case we always run, # or this request is not rebuild in which case all filters # should run. return self.host_passes(obj, spec) def host_passes(self, host_state, filter_properties): """Return True if the HostState passes the filter, otherwise False. Override this in a subclass. """ raise NotImplementedError() class HostFilterHandler(filters.BaseFilterHandler): def __init__(self): super(HostFilterHandler, self).__init__(BaseHostFilter) def all_filters(): """Return a list of filter classes found in this directory. This method is used as the default for available scheduler filters and should return a list of all filter classes available. """ return HostFilterHandler().get_all_classes() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/filters/affinity_filter.py0000664000175000017500000001432000000000000022707 0ustar00zuulzuul00000000000000# Copyright 2012, Piston Cloud Computing, Inc. # Copyright 2012, OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import netaddr from oslo_log import log as logging from nova.scheduler import filters from nova.scheduler.filters import utils LOG = logging.getLogger(__name__) class DifferentHostFilter(filters.BaseHostFilter): """Schedule the instance on a different host from a set of instances.""" # The hosts the instances are running on doesn't change within a request run_filter_once_per_request = True RUN_ON_REBUILD = False def host_passes(self, host_state, spec_obj): affinity_uuids = spec_obj.get_scheduler_hint('different_host') if affinity_uuids: overlap = utils.instance_uuids_overlap(host_state, affinity_uuids) return not overlap # With no different_host key return True class SameHostFilter(filters.BaseHostFilter): """Schedule the instance on the same host as another instance in a set of instances. 
""" # The hosts the instances are running on doesn't change within a request run_filter_once_per_request = True RUN_ON_REBUILD = False def host_passes(self, host_state, spec_obj): affinity_uuids = spec_obj.get_scheduler_hint('same_host') if affinity_uuids: overlap = utils.instance_uuids_overlap(host_state, affinity_uuids) return overlap # With no same_host key return True class SimpleCIDRAffinityFilter(filters.BaseHostFilter): """Schedule the instance on a host with a particular cidr""" # The address of a host doesn't change within a request run_filter_once_per_request = True RUN_ON_REBUILD = False def host_passes(self, host_state, spec_obj): affinity_cidr = spec_obj.get_scheduler_hint('cidr', '/24') affinity_host_addr = spec_obj.get_scheduler_hint('build_near_host_ip') host_ip = host_state.host_ip if affinity_host_addr: affinity_net = netaddr.IPNetwork(str.join('', (affinity_host_addr, affinity_cidr))) return netaddr.IPAddress(host_ip) in affinity_net # We don't have an affinity host address. return True class _GroupAntiAffinityFilter(filters.BaseHostFilter): """Schedule the instance on a different host from a set of group hosts. """ RUN_ON_REBUILD = False def host_passes(self, host_state, spec_obj): # Only invoke the filter if 'anti-affinity' is configured instance_group = spec_obj.instance_group policy = instance_group.policy if instance_group else None if self.policy_name != policy: return True # NOTE(hanrong): Move operations like resize can check the same source # compute node where the instance is. That case, AntiAffinityFilter # must not return the source as a non-possible destination. if spec_obj.instance_uuid in host_state.instances.keys(): return True # The list of instances UUIDs on the given host instances = set(host_state.instances.keys()) # The list of instances UUIDs which are members of this group members = set(spec_obj.instance_group.members) # The set of instances on the host that are also members of this group servers_on_host = instances.intersection(members) rules = instance_group.rules if rules and 'max_server_per_host' in rules: max_server_per_host = rules['max_server_per_host'] else: max_server_per_host = 1 # Very old request specs don't have a full InstanceGroup with the UUID group_uuid = (instance_group.uuid if instance_group and 'uuid' in instance_group else 'n/a') LOG.debug("Group anti-affinity: check if the number of servers from " "group %(group_uuid)s on host %(host)s is less than " "%(max_server)s.", {'group_uuid': group_uuid, 'host': host_state.host, 'max_server': max_server_per_host}) # NOTE(yikun): If the number of servers from same group on this host # is less than the max_server_per_host, this filter will accept the # given host. In the default case(max_server_per_host=1), this filter # will accept the given host if there are 0 servers from the group # already on this host. return len(servers_on_host) < max_server_per_host class ServerGroupAntiAffinityFilter(_GroupAntiAffinityFilter): def __init__(self): self.policy_name = 'anti-affinity' super(ServerGroupAntiAffinityFilter, self).__init__() class _GroupAffinityFilter(filters.BaseHostFilter): """Schedule the instance on to host from a set of group hosts. 
""" RUN_ON_REBUILD = False def host_passes(self, host_state, spec_obj): # Only invoke the filter if 'affinity' is configured policies = (spec_obj.instance_group.policies if spec_obj.instance_group else []) if self.policy_name not in policies: return True group_hosts = (spec_obj.instance_group.hosts if spec_obj.instance_group else []) LOG.debug("Group affinity: check if %(host)s in " "%(configured)s", {'host': host_state.host, 'configured': group_hosts}) if group_hosts: return host_state.host in group_hosts # No groups configured return True class ServerGroupAffinityFilter(_GroupAffinityFilter): def __init__(self): self.policy_name = 'affinity' super(ServerGroupAffinityFilter, self).__init__() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/filters/aggregate_image_properties_isolation.py0000664000175000017500000000536200000000000027164 0ustar00zuulzuul00000000000000# Copyright (c) 2013 Cloudwatt # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging import nova.conf from nova.scheduler import filters from nova.scheduler.filters import utils CONF = nova.conf.CONF LOG = logging.getLogger(__name__) class AggregateImagePropertiesIsolation(filters.BaseHostFilter): """AggregateImagePropertiesIsolation works with image properties.""" # Aggregate data and instance type does not change within a request run_filter_once_per_request = True RUN_ON_REBUILD = True def host_passes(self, host_state, spec_obj): """Checks a host in an aggregate that metadata key/value match with image properties. """ cfg_namespace = (CONF.filter_scheduler. aggregate_image_properties_isolation_namespace) cfg_separator = (CONF.filter_scheduler. aggregate_image_properties_isolation_separator) image_props = spec_obj.image.properties if spec_obj.image else {} metadata = utils.aggregate_metadata_get_by_host(host_state) for key, options in metadata.items(): if (cfg_namespace and not key.startswith(cfg_namespace + cfg_separator)): continue prop = None try: prop = image_props.get(key) except AttributeError: LOG.warning("Host '%(host)s' has a metadata key '%(key)s' " "that is not present in the image metadata.", {"host": host_state.host, "key": key}) continue # NOTE(sbauza): Aggregate metadata is only strings, we need to # stringify the property to match with the option # TODO(sbauza): Fix that very ugly pattern matching if prop and str(prop) not in options: LOG.debug("%(host_state)s fails image aggregate properties " "requirements. 
Property %(prop)s does not " "match %(options)s.", {'host_state': host_state, 'prop': prop, 'options': options}) return False return True ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/scheduler/filters/aggregate_instance_extra_specs.py0000664000175000017500000000577200000000000025756 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # Copyright (c) 2012 Cloudscaling # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from nova.scheduler import filters from nova.scheduler.filters import extra_specs_ops from nova.scheduler.filters import utils LOG = logging.getLogger(__name__) _SCOPE = 'aggregate_instance_extra_specs' class AggregateInstanceExtraSpecsFilter(filters.BaseHostFilter): """AggregateInstanceExtraSpecsFilter works with InstanceType records.""" # Aggregate data and instance type does not change within a request run_filter_once_per_request = True RUN_ON_REBUILD = False def host_passes(self, host_state, spec_obj): """Return a list of hosts that can create instance_type Check that the extra specs associated with the instance type match the metadata provided by aggregates. If not present return False. """ instance_type = spec_obj.flavor # If 'extra_specs' is not present or extra_specs are empty then we # need not proceed further if (not instance_type.obj_attr_is_set('extra_specs') or not instance_type.extra_specs): return True metadata = utils.aggregate_metadata_get_by_host(host_state) for key, req in instance_type.extra_specs.items(): # Either not scope format, or aggregate_instance_extra_specs scope scope = key.split(':', 1) if len(scope) > 1: if scope[0] != _SCOPE: continue else: del scope[0] key = scope[0] aggregate_vals = metadata.get(key, None) if not aggregate_vals: LOG.debug( "%(host_state)s fails instance_type extra_specs " "requirements. Extra_spec %(key)s is not in aggregate.", {'host_state': host_state, 'key': key}) return False for aggregate_val in aggregate_vals: if extra_specs_ops.match(aggregate_val, req): break else: LOG.debug("%(host_state)s fails instance_type extra_specs " "requirements. '%(aggregate_vals)s' do not " "match '%(req)s'", {'host_state': host_state, 'req': req, 'aggregate_vals': aggregate_vals}) return False return True ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/filters/aggregate_multitenancy_isolation.py0000664000175000017500000000406700000000000026343 0ustar00zuulzuul00000000000000# Copyright (c) 2011-2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
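# --- Illustrative sketch (not part of the upstream module) ------------------
# Standalone restatement of the key handling in
# AggregateInstanceExtraSpecsFilter above: an extra spec may be written bare
# ("ssd") or scoped ("aggregate_instance_extra_specs:ssd"); keys in other
# namespaces are ignored, and every remaining key must be satisfied by the
# aggregate metadata gathered for the host.  Simple equality stands in for
# extra_specs_ops.match().
def _aggregate_specs_pass(extra_specs, aggregate_metadata):
    for key, req in extra_specs.items():
        scope = key.split(':', 1)
        if len(scope) > 1:
            if scope[0] != 'aggregate_instance_extra_specs':
                continue                     # another filter's namespace
            key = scope[1]
        values = aggregate_metadata.get(key)
        if not values or not any(v == req for v in values):
            return False
    return True

assert _aggregate_specs_pass(
    {'aggregate_instance_extra_specs:ssd': 'true', 'hw:cpu_policy': 'shared'},
    {'ssd': {'true'}})
# ----------------------------------------------------------------------------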
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from nova.scheduler import filters from nova.scheduler.filters import utils LOG = logging.getLogger(__name__) class AggregateMultiTenancyIsolation(filters.BaseHostFilter): """Isolate tenants in specific aggregates.""" # Aggregate data and tenant do not change within a request run_filter_once_per_request = True RUN_ON_REBUILD = False def host_passes(self, host_state, spec_obj): """If a host is in an aggregate that has the metadata key "filter_tenant_id" it can only create instances from that tenant(s). A host can be in different aggregates. If a host doesn't belong to an aggregate with the metadata key "filter_tenant_id" it can create instances from all tenants. """ tenant_id = spec_obj.project_id metadata = utils.aggregate_metadata_get_by_host(host_state, key="filter_tenant_id") if metadata != {}: configured_tenant_ids = metadata.get("filter_tenant_id") if configured_tenant_ids: if tenant_id not in configured_tenant_ids: LOG.debug("%s fails tenant id on aggregate", host_state) return False LOG.debug("Host tenant id %s matched", tenant_id) else: LOG.debug("No tenant id's defined on host. Host passes.") return True ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/filters/all_hosts_filter.py0000664000175000017500000000170600000000000023072 0ustar00zuulzuul00000000000000# Copyright (c) 2011-2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.scheduler import filters class AllHostsFilter(filters.BaseHostFilter): """NOOP host filter. Returns all hosts.""" # list of hosts doesn't change within a request run_filter_once_per_request = True RUN_ON_REBUILD = False def host_passes(self, host_state, spec_obj): return True ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/scheduler/filters/availability_zone_filter.py0000664000175000017500000000415300000000000024606 0ustar00zuulzuul00000000000000# Copyright (c) 2011-2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
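# --- Illustrative sketch (not part of the upstream module) ------------------
# The AggregateMultiTenancyIsolation rule above, restated standalone: if any
# of the host's aggregates set ``filter_tenant_id``, only the listed tenants
# may land on the host; without that key every tenant is allowed.  The dict
# mimics what aggregate_metadata_get_by_host() returns (values are sets).
def _tenant_allowed(project_id, metadata):
    configured = metadata.get('filter_tenant_id')
    if configured:
        return project_id in configured
    return True

assert _tenant_allowed('tenant-a', {'filter_tenant_id': {'tenant-a'}})
assert not _tenant_allowed('tenant-b', {'filter_tenant_id': {'tenant-a'}})
assert _tenant_allowed('tenant-b', {})     # no isolation metadata at all
# ----------------------------------------------------------------------------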
See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging import nova.conf from nova.scheduler import filters from nova.scheduler.filters import utils LOG = logging.getLogger(__name__) CONF = nova.conf.CONF class AvailabilityZoneFilter(filters.BaseHostFilter): """Filters Hosts by availability zone. Works with aggregate metadata availability zones, using the key 'availability_zone' Note: in theory a compute node can be part of multiple availability_zones """ # Availability zones do not change within a request run_filter_once_per_request = True RUN_ON_REBUILD = False def host_passes(self, host_state, spec_obj): availability_zone = spec_obj.availability_zone if not availability_zone: return True metadata = utils.aggregate_metadata_get_by_host( host_state, key='availability_zone') if 'availability_zone' in metadata: hosts_passes = availability_zone in metadata['availability_zone'] host_az = metadata['availability_zone'] else: hosts_passes = availability_zone == CONF.default_availability_zone host_az = CONF.default_availability_zone if not hosts_passes: LOG.debug("Availability Zone '%(az)s' requested. " "%(host_state)s has AZs: %(host_az)s", {'host_state': host_state, 'az': availability_zone, 'host_az': host_az}) return hosts_passes ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/scheduler/filters/compute_capabilities_filter.py0000664000175000017500000001140200000000000025261 0ustar00zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from oslo_serialization import jsonutils import six from nova.scheduler import filters from nova.scheduler.filters import extra_specs_ops LOG = logging.getLogger(__name__) class ComputeCapabilitiesFilter(filters.BaseHostFilter): """HostFilter hard-coded to work with InstanceType records.""" # Instance type and host capabilities do not change within a request run_filter_once_per_request = True RUN_ON_REBUILD = False def _get_capabilities(self, host_state, scope): cap = host_state for index in range(0, len(scope)): try: if isinstance(cap, six.string_types): try: cap = jsonutils.loads(cap) except ValueError as e: LOG.debug("%(host_state)s fails. The capabilities " "'%(cap)s' couldn't be loaded from JSON: " "%(error)s", {'host_state': host_state, 'cap': cap, 'error': e}) return None if not isinstance(cap, dict): if getattr(cap, scope[index], None) is None: # If can't find, check stats dict cap = cap.stats.get(scope[index], None) else: cap = getattr(cap, scope[index], None) else: cap = cap.get(scope[index], None) except AttributeError as e: LOG.debug("%(host_state)s fails. The capabilities couldn't " "be retrieved: %(error)s.", {'host_state': host_state, 'error': e}) return None if cap is None: LOG.debug("%(host_state)s fails. 
There are no capabilities " "to retrieve.", {'host_state': host_state}) return None return cap def _satisfies_extra_specs(self, host_state, instance_type): """Check that the host_state provided by the compute service satisfies the extra specs associated with the instance type. """ if 'extra_specs' not in instance_type: return True for key, req in instance_type.extra_specs.items(): # Either not scope format, or in capabilities scope scope = key.split(':') # If key does not have a namespace, the scope's size is 1, check # whether host_state contains the key as an attribute. If not, # ignore it. If it contains, deal with it in the same way as # 'capabilities:key'. This is for backward-compatible. # If the key has a namespace, the scope's size will be bigger than # 1, check that whether the namespace is 'capabilities'. If not, # ignore it. if len(scope) == 1: stats = getattr(host_state, 'stats', {}) has_attr = hasattr(host_state, key) or key in stats if not has_attr: continue else: if scope[0] != "capabilities": continue else: del scope[0] cap = self._get_capabilities(host_state, scope) if cap is None: return False if not extra_specs_ops.match(str(cap), req): LOG.debug("%(host_state)s fails extra_spec requirements. " "'%(req)s' does not match '%(cap)s'", {'host_state': host_state, 'req': req, 'cap': cap}) return False return True def host_passes(self, host_state, spec_obj): """Return a list of hosts that can create instance_type.""" instance_type = spec_obj.flavor if not self._satisfies_extra_specs(host_state, instance_type): LOG.debug("%(host_state)s fails instance_type extra_specs " "requirements", {'host_state': host_state}) return False return True ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/filters/compute_filter.py0000664000175000017500000000326500000000000022560 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
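# --- Illustrative sketch (not part of the upstream module) ------------------
# Simplified version of the scope walking done by
# ComputeCapabilitiesFilter._get_capabilities()/_satisfies_extra_specs()
# above: a key such as "capabilities:cpu_info:features" is split on ':' and
# the remaining parts are looked up through nested structures.  The real host
# state mixes object attributes, JSON strings and a stats dict; plain nested
# dicts stand in for all of that here.
def _get_capability(host_caps, key):
    parts = key.split(':')
    if parts[0] != 'capabilities':
        return None                          # another namespace; ignored
    cap = host_caps
    for part in parts[1:]:
        if not isinstance(cap, dict):
            return None
        cap = cap.get(part)
        if cap is None:
            return None
    return cap

assert _get_capability({'cpu_info': {'features': ['avx2', 'sse4.2']}},
                       'capabilities:cpu_info:features') == ['avx2', 'sse4.2']
# ----------------------------------------------------------------------------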
from oslo_log import log as logging from nova.scheduler import filters from nova import servicegroup LOG = logging.getLogger(__name__) class ComputeFilter(filters.BaseHostFilter): """Filter on active Compute nodes.""" RUN_ON_REBUILD = False def __init__(self): self.servicegroup_api = servicegroup.API() # Host state does not change within a request run_filter_once_per_request = True def host_passes(self, host_state, spec_obj): """Returns True for only active compute nodes.""" service = host_state.service if service['disabled']: LOG.debug("%(host_state)s is disabled, reason: %(reason)s", {'host_state': host_state, 'reason': service.get('disabled_reason')}) return False else: if not self.servicegroup_api.service_is_up(service): LOG.warning("%(host_state)s has not been heard from in a " "while", {'host_state': host_state}) return False return True ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/scheduler/filters/core_filter.py0000664000175000017500000000761700000000000022041 0ustar00zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # Copyright (c) 2012 Justin Santa Barbara # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from nova.scheduler import filters from nova.scheduler.filters import utils LOG = logging.getLogger(__name__) class AggregateCoreFilter(filters.BaseHostFilter): """DEPRECATED: AggregateCoreFilter with per-aggregate allocation ratio. Fall back to global cpu_allocation_ratio if no per-aggregate setting found. """ RUN_ON_REBUILD = False def __init__(self): super(AggregateCoreFilter, self).__init__() LOG.warning('The AggregateCoreFilter is deprecated since the 20.0.0 ' 'Train release. VCPU filtering is performed natively ' 'using the Placement service when using the ' 'filter_scheduler driver. Operators should define cpu ' 'allocation ratios either per host in the nova.conf ' 'or via the placement API.') def _get_cpu_allocation_ratio(self, host_state, spec_obj): aggregate_vals = utils.aggregate_values_from_key( host_state, 'cpu_allocation_ratio') try: ratio = utils.validate_num_values( aggregate_vals, host_state.cpu_allocation_ratio, cast_to=float) except ValueError as e: LOG.warning("Could not decode cpu_allocation_ratio: '%s'", e) ratio = host_state.cpu_allocation_ratio return ratio def host_passes(self, host_state, spec_obj): """Return True if host has sufficient CPU cores. :param host_state: nova.scheduler.host_manager.HostState :param spec_obj: filter options :return: boolean """ if not host_state.vcpus_total: # Fail safe LOG.warning("VCPUs not set; assuming CPU collection broken") return True instance_vcpus = spec_obj.vcpus cpu_allocation_ratio = self._get_cpu_allocation_ratio(host_state, spec_obj) vcpus_total = host_state.vcpus_total * cpu_allocation_ratio # Only provide a VCPU limit to compute if the virt driver is reporting # an accurate count of installed VCPUs. 
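# --- Illustrative sketch (not part of the upstream module) ------------------
# The VCPU arithmetic used right here in AggregateCoreFilter.host_passes(),
# worked through with made-up numbers: the physical core count is scaled by
# the allocation ratio, an instance may never exceed the *physical* count on
# its own, and it must also fit in whatever scaled capacity is still free.
vcpus_total_physical = 16
cpu_allocation_ratio = 4.0
vcpus_used = 40
instance_vcpus = 8

vcpus_limit = vcpus_total_physical * cpu_allocation_ratio    # 64.0
fits_alone = instance_vcpus <= vcpus_total_physical          # True
free_vcpus = vcpus_limit - vcpus_used                        # 24.0
assert fits_alone and free_vcpus >= instance_vcpus
# ----------------------------------------------------------------------------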
(XenServer driver does not) if vcpus_total > 0: host_state.limits['vcpu'] = vcpus_total # Do not allow an instance to overcommit against itself, only # against other instances. if instance_vcpus > host_state.vcpus_total: LOG.debug("%(host_state)s does not have %(instance_vcpus)d " "total cpus before overcommit, it only has %(cpus)d", {'host_state': host_state, 'instance_vcpus': instance_vcpus, 'cpus': host_state.vcpus_total}) return False free_vcpus = vcpus_total - host_state.vcpus_used if free_vcpus < instance_vcpus: LOG.debug("%(host_state)s does not have %(instance_vcpus)d " "usable vcpus, it only has %(free_vcpus)d usable " "vcpus", {'host_state': host_state, 'instance_vcpus': instance_vcpus, 'free_vcpus': free_vcpus}) return False return True ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/scheduler/filters/disk_filter.py0000664000175000017500000000757600000000000022047 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from nova.scheduler import filters from nova.scheduler.filters import utils LOG = logging.getLogger(__name__) class AggregateDiskFilter(filters.BaseHostFilter): """DEPRECATED: AggregateDiskFilter with per-aggregate disk allocation ratio Fall back to global disk_allocation_ratio if no per-aggregate setting found. """ RUN_ON_REBUILD = False def __init__(self): super(AggregateDiskFilter, self).__init__() LOG.warning('The AggregateDiskFilter is deprecated since the 20.0.0 ' 'Train release. DISK_GB filtering is performed natively ' 'using the Placement service when using the ' 'filter_scheduler driver. Operators should define disk ' 'allocation ratios either per host in the nova.conf ' 'or via the placement API.') def _get_disk_allocation_ratio(self, host_state, spec_obj): aggregate_vals = utils.aggregate_values_from_key( host_state, 'disk_allocation_ratio') try: ratio = utils.validate_num_values( aggregate_vals, host_state.disk_allocation_ratio, cast_to=float) except ValueError as e: LOG.warning("Could not decode disk_allocation_ratio: '%s'", e) ratio = host_state.disk_allocation_ratio return ratio def host_passes(self, host_state, spec_obj): """Filter based on disk usage.""" requested_disk = (1024 * (spec_obj.root_gb + spec_obj.ephemeral_gb) + spec_obj.swap) free_disk_mb = host_state.free_disk_mb total_usable_disk_mb = host_state.total_usable_disk_gb * 1024 # Do not allow an instance to overcommit against itself, only against # other instances. In other words, if there isn't room for even just # this one instance in total_usable_disk space, consider the host full. 
        if total_usable_disk_mb < requested_disk:
            LOG.debug("%(host_state)s does not have %(requested_disk)s "
                      "MB usable disk space before overcommit, it only "
                      "has %(physical_disk_size)s MB.",
                      {'host_state': host_state,
                       'requested_disk': requested_disk,
                       'physical_disk_size': total_usable_disk_mb})
            return False

        disk_allocation_ratio = self._get_disk_allocation_ratio(
            host_state, spec_obj)

        disk_mb_limit = total_usable_disk_mb * disk_allocation_ratio
        used_disk_mb = total_usable_disk_mb - free_disk_mb
        usable_disk_mb = disk_mb_limit - used_disk_mb
        if not usable_disk_mb >= requested_disk:
            LOG.debug("%(host_state)s does not have %(requested_disk)s MB "
                      "usable disk, it only has %(usable_disk_mb)s MB usable "
                      "disk.",
                      {'host_state': host_state,
                       'requested_disk': requested_disk,
                       'usable_disk_mb': usable_disk_mb})
            return False

        disk_gb_limit = disk_mb_limit / 1024
        host_state.limits['disk_gb'] = disk_gb_limit
        return True
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/filters/extra_specs_ops.py0000664000175000017500000000431700000000000022737 0ustar00zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import operator

# 1. The following operations are supported:
# =, s==, s!=, s>=, s>, s<=, s<, <in>, <all-in>, <or>, ==, !=, >=, <=
# 2. Note that <or> is handled in a different way below.
# 3. If the first word in the extra_specs is not one of the operators,
# it is ignored.
op_methods = {'=': lambda x, y: float(x) >= float(y),
              '<in>': lambda x, y: y in x,
              '<all-in>': lambda x, y: all(val in x for val in y),
              '==': lambda x, y: float(x) == float(y),
              '!=': lambda x, y: float(x) != float(y),
              '>=': lambda x, y: float(x) >= float(y),
              '<=': lambda x, y: float(x) <= float(y),
              's==': operator.eq,
              's!=': operator.ne,
              's<': operator.lt,
              's<=': operator.le,
              's>': operator.gt,
              's>=': operator.ge}


def match(value, req):
    words = req.split()

    op = method = None
    if words:
        op = words.pop(0)
        method = op_methods.get(op)

    if op != '<or>' and not method:
        return value == req

    if value is None:
        return False

    if op == '<or>':  # Ex: <or> v1 <or> v2 <or> v3
        while True:
            if words.pop(0) == value:
                return True
            if not words:
                break
            words.pop(0)  # remove a keyword <or>
            if not words:
                break
        return False

    if words:
        if op == '<all-in>':  # requires a list not a string
            return method(value, words)
        return method(value, words[0])
    return False
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/filters/image_props_filter.py0000664000175000017500000001125300000000000023405 0ustar00zuulzuul00000000000000# Copyright (c) 2011-2012 OpenStack Foundation
# Copyright (c) 2012 Canonical Ltd
# Copyright (c) 2012 SUSE LINUX Products GmbH
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from distutils import versionpredicate from oslo_log import log as logging from oslo_utils import versionutils import nova.conf from nova.objects import fields from nova.scheduler import filters LOG = logging.getLogger(__name__) CONF = nova.conf.CONF class ImagePropertiesFilter(filters.BaseHostFilter): """Filter compute nodes that satisfy instance image properties. The ImagePropertiesFilter filters compute nodes that satisfy any architecture, hypervisor type, or virtual machine mode properties specified on the instance's image properties. Image properties are contained in the image dictionary in the request_spec. """ RUN_ON_REBUILD = True # Image Properties and Compute Capabilities do not change within # a request run_filter_once_per_request = True def _get_default_architecture(self): return CONF.filter_scheduler.image_properties_default_architecture def _instance_supported(self, host_state, image_props, hypervisor_version): default_img_arch = self._get_default_architecture() img_arch = image_props.get('hw_architecture', default_img_arch) img_h_type = image_props.get('img_hv_type') img_vm_mode = image_props.get('hw_vm_mode') checked_img_props = ( fields.Architecture.canonicalize(img_arch), fields.HVType.canonicalize(img_h_type), fields.VMMode.canonicalize(img_vm_mode) ) # Supported if no compute-related instance properties are specified if not any(checked_img_props): return True supp_instances = host_state.supported_instances # Not supported if an instance property is requested but nothing # advertised by the host. if not supp_instances: LOG.debug("Instance contains properties %(image_props)s, " "but no corresponding supported_instances are " "advertised by the compute node", {'image_props': image_props}) return False def _compare_props(props, other_props): for i in props: if i and i not in other_props: return False return True def _compare_product_version(hyper_version, image_props): version_required = image_props.get('img_hv_requested_version') if not(hypervisor_version and version_required): return True img_prop_predicate = versionpredicate.VersionPredicate( 'image_prop (%s)' % version_required) hyper_ver_str = versionutils.convert_version_to_str(hyper_version) return img_prop_predicate.satisfied_by(hyper_ver_str) for supp_inst in supp_instances: if _compare_props(checked_img_props, supp_inst): if _compare_product_version(hypervisor_version, image_props): return True LOG.debug("Instance contains properties %(image_props)s " "that are not provided by the compute node " "supported_instances %(supp_instances)s or " "hypervisor version %(hypervisor_version)s do not match", {'image_props': image_props, 'supp_instances': supp_instances, 'hypervisor_version': hypervisor_version}) return False def host_passes(self, host_state, spec_obj): """Check if host passes specified image properties. Returns True for compute nodes that satisfy image properties contained in the request_spec. 
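        For example (illustrative values), an image with
        hw_architecture='x86_64' and img_hv_type='qemu' only passes hosts
        whose supported_instances advertise a matching (architecture,
        hypervisor type, vm mode) combination.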
""" image_props = spec_obj.image.properties if spec_obj.image else {} if not self._instance_supported(host_state, image_props, host_state.hypervisor_version): LOG.debug("%(host_state)s does not support requested " "instance_properties", {'host_state': host_state}) return False return True ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/filters/io_ops_filter.py0000664000175000017500000000465400000000000022377 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging import nova.conf from nova.scheduler import filters from nova.scheduler.filters import utils LOG = logging.getLogger(__name__) CONF = nova.conf.CONF class IoOpsFilter(filters.BaseHostFilter): """Filter out hosts with too many concurrent I/O operations.""" RUN_ON_REBUILD = False def _get_max_io_ops_per_host(self, host_state, spec_obj): return CONF.filter_scheduler.max_io_ops_per_host def host_passes(self, host_state, spec_obj): """Use information about current vm and task states collected from compute node statistics to decide whether to filter. """ num_io_ops = host_state.num_io_ops max_io_ops = self._get_max_io_ops_per_host( host_state, spec_obj) passes = num_io_ops < max_io_ops if not passes: LOG.debug("%(host_state)s fails I/O ops check: Max IOs per host " "is set to %(max_io_ops)s", {'host_state': host_state, 'max_io_ops': max_io_ops}) return passes class AggregateIoOpsFilter(IoOpsFilter): """AggregateIoOpsFilter with per-aggregate the max io operations. Fall back to global max_io_ops_per_host if no per-aggregate setting found. """ def _get_max_io_ops_per_host(self, host_state, spec_obj): max_io_ops_per_host = CONF.filter_scheduler.max_io_ops_per_host aggregate_vals = utils.aggregate_values_from_key( host_state, 'max_io_ops_per_host') try: value = utils.validate_num_values( aggregate_vals, max_io_ops_per_host, cast_to=int) except ValueError as e: LOG.warning("Could not decode max_io_ops_per_host: '%s'", e) value = max_io_ops_per_host return value ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/filters/isolated_hosts_filter.py0000664000175000017500000000570000000000000024124 0ustar00zuulzuul00000000000000# Copyright (c) 2011-2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import nova.conf from nova.scheduler import filters CONF = nova.conf.CONF class IsolatedHostsFilter(filters.BaseHostFilter): """Keep specified images to selected hosts.""" # The configuration values do not change within a request run_filter_once_per_request = True RUN_ON_REBUILD = True def host_passes(self, host_state, spec_obj): """Result Matrix with 'restrict_isolated_hosts_to_isolated_images' set to True:: | | isolated_image | non_isolated_image | -------------+----------------+------------------- | iso_host | True | False | non_iso_host | False | True Result Matrix with 'restrict_isolated_hosts_to_isolated_images' set to False:: | | isolated_image | non_isolated_image | -------------+----------------+------------------- | iso_host | True | True | non_iso_host | False | True """ # If the configuration does not list any hosts, the filter will always # return True, assuming a configuration error, so letting all hosts # through. isolated_hosts = CONF.filter_scheduler.isolated_hosts isolated_images = CONF.filter_scheduler.isolated_images restrict_isolated_hosts_to_isolated_images = ( CONF.filter_scheduler.restrict_isolated_hosts_to_isolated_images) if not isolated_images: # As there are no images to match, return True if the filter is # not restrictive otherwise return False if the host is in the # isolation list. return ((not restrict_isolated_hosts_to_isolated_images) or (host_state.host not in isolated_hosts)) # Check to see if the image id is set since volume-backed instances # can be created without an imageRef in the server create request. image_ref = spec_obj.image.id \ if spec_obj.image and 'id' in spec_obj.image else None image_isolated = image_ref in isolated_images host_isolated = host_state.host in isolated_hosts if restrict_isolated_hosts_to_isolated_images: return (image_isolated == host_isolated) else: return (not image_isolated) or host_isolated ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/scheduler/filters/json_filter.py0000664000175000017500000001120700000000000022050 0ustar00zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import operator from oslo_serialization import jsonutils import six from nova.scheduler import filters class JsonFilter(filters.BaseHostFilter): """Host Filter to allow simple JSON-based grammar for selecting hosts. """ RUN_ON_REBUILD = False def _op_compare(self, args, op): """Returns True if the specified operator can successfully compare the first item in the args with all the rest. Will return False if only one item is in the list. 
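        For example (illustrative values), args of [4, 4, 4] with operator.eq
        returns True, args of [4, 5] returns False, and a single-item args of
        [4] also returns False.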
""" if len(args) < 2: return False if op is operator.contains: bad = args[0] not in args[1:] else: bad = [arg for arg in args[1:] if not op(args[0], arg)] return not bool(bad) def _equals(self, args): """First term is == all the other terms.""" return self._op_compare(args, operator.eq) def _less_than(self, args): """First term is < all the other terms.""" return self._op_compare(args, operator.lt) def _greater_than(self, args): """First term is > all the other terms.""" return self._op_compare(args, operator.gt) def _in(self, args): """First term is in set of remaining terms.""" return self._op_compare(args, operator.contains) def _less_than_equal(self, args): """First term is <= all the other terms.""" return self._op_compare(args, operator.le) def _greater_than_equal(self, args): """First term is >= all the other terms.""" return self._op_compare(args, operator.ge) def _not(self, args): """Flip each of the arguments.""" return [not arg for arg in args] def _or(self, args): """True if any arg is True.""" return any(args) def _and(self, args): """True if all args are True.""" return all(args) commands = { '=': _equals, '<': _less_than, '>': _greater_than, 'in': _in, '<=': _less_than_equal, '>=': _greater_than_equal, 'not': _not, 'or': _or, 'and': _and, } def _parse_string(self, string, host_state): """Strings prefixed with $ are capability lookups in the form '$variable' where 'variable' is an attribute in the HostState class. If $variable is a dictionary, you may use: $variable.dictkey """ if not string: return None if not string.startswith("$"): return string path = string[1:].split(".") obj = getattr(host_state, path[0], None) if obj is None: return None for item in path[1:]: obj = obj.get(item, None) if obj is None: return None return obj def _process_filter(self, query, host_state): """Recursively parse the query structure.""" if not query: return True cmd = query[0] method = self.commands[cmd] cooked_args = [] for arg in query[1:]: if isinstance(arg, list): arg = self._process_filter(arg, host_state) elif isinstance(arg, six.string_types): arg = self._parse_string(arg, host_state) if arg is not None: cooked_args.append(arg) result = method(self, cooked_args) return result def host_passes(self, host_state, spec_obj): """Return a list of hosts that can fulfill the requirements specified in the query. """ query = spec_obj.get_scheduler_hint('query') if not query: return True # NOTE(comstud): Not checking capabilities or service for # enabled/disabled so that a provided json filter can decide result = self._process_filter(jsonutils.loads(query), host_state) if isinstance(result, list): # If any succeeded, include the host result = any(result) if result: # Filter it out. return True return False ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/filters/metrics_filter.py0000664000175000017500000000355600000000000022555 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Intel, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging import nova.conf from nova.scheduler import filters from nova.scheduler import utils LOG = logging.getLogger(__name__) CONF = nova.conf.CONF class MetricsFilter(filters.BaseHostFilter): """Metrics Filter This filter is used to filter out those hosts which don't have the corresponding metrics so these the metrics weigher won't fail due to these hosts. """ RUN_ON_REBUILD = False def __init__(self): super(MetricsFilter, self).__init__() opts = utils.parse_options(CONF.metrics.weight_setting, sep='=', converter=float, name="metrics.weight_setting") self.keys = set([x[0] for x in opts]) def host_passes(self, host_state, spec_obj): metrics_on_host = set(m.name for m in host_state.metrics) if not self.keys.issubset(metrics_on_host): unavail = metrics_on_host - self.keys LOG.debug("%(host_state)s does not have the following " "metrics: %(metrics)s", {'host_state': host_state, 'metrics': ', '.join(unavail)}) return False return True ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/filters/num_instances_filter.py0000664000175000017500000000456100000000000023752 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging import nova.conf from nova.scheduler import filters from nova.scheduler.filters import utils LOG = logging.getLogger(__name__) CONF = nova.conf.CONF class NumInstancesFilter(filters.BaseHostFilter): """Filter out hosts with too many instances.""" RUN_ON_REBUILD = False def _get_max_instances_per_host(self, host_state, spec_obj): return CONF.filter_scheduler.max_instances_per_host def host_passes(self, host_state, spec_obj): num_instances = host_state.num_instances max_instances = self._get_max_instances_per_host( host_state, spec_obj) passes = num_instances < max_instances if not passes: LOG.debug("%(host_state)s fails num_instances check: Max " "instances per host is set to %(max_instances)s", {'host_state': host_state, 'max_instances': max_instances}) return passes class AggregateNumInstancesFilter(NumInstancesFilter): """AggregateNumInstancesFilter with per-aggregate the max num instances. Fall back to global max_num_instances_per_host if no per-aggregate setting found. 
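    For example (illustrative value), setting the metadata key
    max_instances_per_host=50 on an aggregate caps each member host at 50
    instances; if several aggregates define the key for a host, the minimum
    value is used.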
""" def _get_max_instances_per_host(self, host_state, spec_obj): max_instances_per_host = CONF.filter_scheduler.max_instances_per_host aggregate_vals = utils.aggregate_values_from_key( host_state, 'max_instances_per_host') try: value = utils.validate_num_values( aggregate_vals, max_instances_per_host, cast_to=int) except ValueError as e: LOG.warning("Could not decode max_instances_per_host: '%s'", e) value = max_instances_per_host return value ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/filters/numa_topology_filter.py0000664000175000017500000001224700000000000024000 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from nova import objects from nova.objects import fields from nova.scheduler import filters from nova.virt import hardware LOG = logging.getLogger(__name__) class NUMATopologyFilter(filters.BaseHostFilter): """Filter on requested NUMA topology.""" # NOTE(sean-k-mooney): In change I0322d872bdff68936033a6f5a54e8296a6fb343 # we validate that the NUMA topology does not change in the api. If the # requested image would alter the NUMA constraints we reject the rebuild # request and therefore do not need to run this filter on rebuild. RUN_ON_REBUILD = False def _satisfies_cpu_policy(self, host_state, extra_specs, image_props): """Check that the host_state provided satisfies any available CPU policy requirements. """ host_topology = host_state.numa_topology # NOTE(stephenfin): There can be conflicts between the policy # specified by the image and that specified by the instance, but this # is not the place to resolve these. We do this during scheduling. cpu_policy = [extra_specs.get('hw:cpu_policy'), image_props.get('hw_cpu_policy')] cpu_thread_policy = [extra_specs.get('hw:cpu_thread_policy'), image_props.get('hw_cpu_thread_policy')] if not host_topology: return True if fields.CPUAllocationPolicy.DEDICATED not in cpu_policy: return True if fields.CPUThreadAllocationPolicy.REQUIRE not in cpu_thread_policy: return True if not host_topology.has_threads: LOG.debug("%(host_state)s fails CPU policy requirements. " "Host does not have hyperthreading or " "hyperthreading is disabled, but 'require' threads " "policy was requested.", {'host_state': host_state}) return False return True def host_passes(self, host_state, spec_obj): # TODO(stephenfin): The 'numa_fit_instance_to_host' function has the # unfortunate side effect of modifying 'spec_obj.numa_topology' - an # InstanceNUMATopology object - by populating the 'cpu_pinning' field. # This is rather rude and said function should be reworked to avoid # doing this. That's a large, non-backportable cleanup however, so for # now we just duplicate spec_obj to prevent changes propagating to # future filter calls. 
spec_obj = spec_obj.obj_clone() ram_ratio = host_state.ram_allocation_ratio cpu_ratio = host_state.cpu_allocation_ratio extra_specs = spec_obj.flavor.extra_specs image_props = spec_obj.image.properties requested_topology = spec_obj.numa_topology host_topology = host_state.numa_topology pci_requests = spec_obj.pci_requests network_metadata = None if 'network_metadata' in spec_obj: network_metadata = spec_obj.network_metadata if pci_requests: pci_requests = pci_requests.requests if not self._satisfies_cpu_policy(host_state, extra_specs, image_props): return False if requested_topology and host_topology: limits = objects.NUMATopologyLimits( cpu_allocation_ratio=cpu_ratio, ram_allocation_ratio=ram_ratio) if network_metadata: limits.network_metadata = network_metadata instance_topology = (hardware.numa_fit_instance_to_host( host_topology, requested_topology, limits=limits, pci_requests=pci_requests, pci_stats=host_state.pci_stats)) if not instance_topology: LOG.debug("%(host)s, %(node)s fails NUMA topology " "requirements. The instance does not fit on this " "host.", {'host': host_state.host, 'node': host_state.nodename}, instance_uuid=spec_obj.instance_uuid) return False host_state.limits['numa_topology'] = limits return True elif requested_topology: LOG.debug("%(host)s, %(node)s fails NUMA topology requirements. " "No host NUMA topology while the instance specified " "one.", {'host': host_state.host, 'node': host_state.nodename}, instance_uuid=spec_obj.instance_uuid) return False else: return True ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/filters/pci_passthrough_filter.py0000664000175000017500000000376500000000000024313 0ustar00zuulzuul00000000000000# Copyright (c) 2013 ISP RAS. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from nova.scheduler import filters LOG = logging.getLogger(__name__) class PciPassthroughFilter(filters.BaseHostFilter): """Pci Passthrough Filter based on PCI request Filter that schedules instances on a host if the host has devices to meet the device requests in the 'extra_specs' for the flavor. PCI resource tracker provides updated summary information about the PCI devices for each host, like:: | [{"count": 5, "vendor_id": "8086", "product_id": "1520", | "extra_info":'{}'}], and VM requests PCI devices via PCI requests, like:: | [{"count": 1, "vendor_id": "8086", "product_id": "1520",}]. The filter checks if the host passes or not based on this information. 
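    A flavor typically expresses such a request through a PCI alias in its
    extra_specs, for example (illustrative alias name and count)::

        | pci_passthrough:alias='a1:2'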
""" RUN_ON_REBUILD = False def host_passes(self, host_state, spec_obj): """Return true if the host has the required PCI devices.""" pci_requests = spec_obj.pci_requests if not pci_requests or not pci_requests.requests: return True if (not host_state.pci_stats or not host_state.pci_stats.support_requests(pci_requests.requests)): LOG.debug("%(host_state)s doesn't have the required PCI devices" " (%(requests)s)", {'host_state': host_state, 'requests': pci_requests}) return False return True ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/scheduler/filters/ram_filter.py0000664000175000017500000000720600000000000021662 0ustar00zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # Copyright (c) 2012 Cloudscaling # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from nova.scheduler import filters from nova.scheduler.filters import utils LOG = logging.getLogger(__name__) class AggregateRamFilter(filters.BaseHostFilter): """DEPRECATED: AggregateRamFilter with per-aggregate ram subscription flag. Fall back to global ram_allocation_ratio if no per-aggregate setting found. """ RUN_ON_REBUILD = False def __init__(self): super(AggregateRamFilter, self).__init__() LOG.warning('The AggregateRamFilter is deprecated since the 20.0.0 ' 'Train release. MEMORY_MB filtering is performed natively ' 'using the Placement service when using the ' 'filter_scheduler driver. Operators should define ram ' 'allocation ratios either per host in the nova.conf ' 'or via the placement API.') def _get_ram_allocation_ratio(self, host_state, spec_obj): aggregate_vals = utils.aggregate_values_from_key( host_state, 'ram_allocation_ratio') try: ratio = utils.validate_num_values( aggregate_vals, host_state.ram_allocation_ratio, cast_to=float) except ValueError as e: LOG.warning("Could not decode ram_allocation_ratio: '%s'", e) ratio = host_state.ram_allocation_ratio return ratio def host_passes(self, host_state, spec_obj): """Only return hosts with sufficient available RAM.""" requested_ram = spec_obj.memory_mb free_ram_mb = host_state.free_ram_mb total_usable_ram_mb = host_state.total_usable_ram_mb # Do not allow an instance to overcommit against itself, only against # other instances. 
if not total_usable_ram_mb >= requested_ram: LOG.debug("%(host_state)s does not have %(requested_ram)s MB " "usable ram before overcommit, it only has " "%(usable_ram)s MB.", {'host_state': host_state, 'requested_ram': requested_ram, 'usable_ram': total_usable_ram_mb}) return False ram_allocation_ratio = self._get_ram_allocation_ratio(host_state, spec_obj) memory_mb_limit = total_usable_ram_mb * ram_allocation_ratio used_ram_mb = total_usable_ram_mb - free_ram_mb usable_ram = memory_mb_limit - used_ram_mb if not usable_ram >= requested_ram: LOG.debug("%(host_state)s does not have %(requested_ram)s MB " "usable ram, it only has %(usable_ram)s MB usable ram.", {'host_state': host_state, 'requested_ram': requested_ram, 'usable_ram': usable_ram}) return False # save oversubscription limit for compute node to test against: host_state.limits['memory_mb'] = memory_mb_limit return True ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/scheduler/filters/retry_filter.py0000664000175000017500000000433500000000000022250 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from nova.scheduler import filters LOG = logging.getLogger(__name__) class RetryFilter(filters.BaseHostFilter): """Filter out nodes that have already been attempted for scheduling purposes """ # NOTE(danms): This does not affect _where_ an instance lands, so not # related to rebuild. RUN_ON_REBUILD = False def __init__(self): super(RetryFilter, self).__init__() LOG.warning('The RetryFilter is deprecated since the 20.0.0 Train ' 'release. Since the 17.0.0 (Queens) release, the ' 'scheduler has provided alternate hosts for rescheduling ' 'so the scheduler does not need to be called during a ' 'reschedule which makes the RetryFilter useless.') def host_passes(self, host_state, spec_obj): """Skip nodes that have already been attempted.""" retry = spec_obj.retry if not retry: return True # TODO(sbauza): Once the HostState is actually a ComputeNode, we could # easily get this one... host = [host_state.host, host_state.nodename] # TODO(sbauza)... and we wouldn't need to primitive the hosts into # lists hosts = [[cn.host, cn.hypervisor_hostname] for cn in retry.hosts] passes = host not in hosts if not passes: LOG.info("Host %(host)s fails. Previously tried hosts: " "%(hosts)s", {'host': host, 'hosts': hosts}) # Host passes if it's not in the list of previously attempted hosts: return passes ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/scheduler/filters/type_filter.py0000664000175000017500000000274500000000000022067 0ustar00zuulzuul00000000000000# Copyright (c) 2012 The Cloudscaling Group, Inc. # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.scheduler import filters from nova.scheduler.filters import utils class AggregateTypeAffinityFilter(filters.BaseHostFilter): """AggregateTypeAffinityFilter limits instance_type by aggregate return True if no instance_type key is set or if the aggregate metadata key 'instance_type' has the instance_type name as a value """ # Aggregate data does not change within a request run_filter_once_per_request = True RUN_ON_REBUILD = False def host_passes(self, host_state, spec_obj): instance_type = spec_obj.flavor aggregate_vals = utils.aggregate_values_from_key( host_state, 'instance_type') for val in aggregate_vals: if (instance_type.name in [x.strip() for x in val.split(',')]): return True return not aggregate_vals ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/scheduler/filters/utils.py0000664000175000017500000000562700000000000020703 0ustar00zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Bench of utility methods used by filters.""" import collections from oslo_log import log as logging import six LOG = logging.getLogger(__name__) def aggregate_values_from_key(host_state, key_name): """Returns a set of values based on a metadata key for a specific host.""" aggrlist = host_state.aggregates return {aggr.metadata[key_name] for aggr in aggrlist if key_name in aggr.metadata } def aggregate_metadata_get_by_host(host_state, key=None): """Returns a dict of all metadata based on a metadata key for a specific host. If the key is not provided, returns a dict of all metadata. """ aggrlist = host_state.aggregates metadata = collections.defaultdict(set) for aggr in aggrlist: if key is None or key in aggr.metadata: for k, v in aggr.metadata.items(): metadata[k].update(x.strip() for x in v.split(',')) return metadata def validate_num_values(vals, default=None, cast_to=int, based_on=min): """Returns a correctly casted value based on a set of values. This method is useful to work with per-aggregate filters, It takes a set of values then return the 'based_on'{min/max} converted to 'cast_to' of the set or the default value. 
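    For example (illustrative values), validate_num_values({'1.5', '2.0'},
    1.0, cast_to=float) returns 1.5 with the default based_on=min, while an
    empty set returns the default of 1.0.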
Note: The cast implies a possible ValueError """ num_values = len(vals) if num_values == 0: return default if num_values > 1: if based_on == min: LOG.info("%(num_values)d values found, " "of which the minimum value will be used.", {'num_values': num_values}) else: LOG.info("%(num_values)d values found, " "of which the maximum value will be used.", {'num_values': num_values}) return based_on([cast_to(val) for val in vals]) def instance_uuids_overlap(host_state, uuids): """Tests for overlap between a host_state and a list of uuids. Returns True if any of the supplied uuids match any of the instance.uuid values in the host_state. """ if isinstance(uuids, six.string_types): uuids = [uuids] set_uuids = set(uuids) # host_state.instances is a dict whose keys are the instance uuids host_uuids = set(host_state.instances.keys()) return bool(host_uuids.intersection(set_uuids)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/scheduler/host_manager.py0000664000175000017500000012230100000000000020527 0ustar00zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Manage hosts in the current zone. """ import collections import functools import time try: from collections import UserDict as IterableUserDict # Python 3 except ImportError: from UserDict import IterableUserDict # Python 2 import iso8601 from oslo_log import log as logging from oslo_utils import timeutils import six import nova.conf from nova import context as context_module from nova import exception from nova import objects from nova.pci import stats as pci_stats from nova.scheduler import filters from nova.scheduler import weights from nova import utils from nova.virt import hardware CONF = nova.conf.CONF LOG = logging.getLogger(__name__) HOST_INSTANCE_SEMAPHORE = "host_instance" class ReadOnlyDict(IterableUserDict): """A read-only dict.""" def __init__(self, source=None): self.data = {} if source: self.data.update(source) def __setitem__(self, key, item): raise TypeError() def __delitem__(self, key): raise TypeError() def clear(self): raise TypeError() def pop(self, key, *args): raise TypeError() def popitem(self): raise TypeError() def update(self): raise TypeError() @utils.expects_func_args('self', 'spec_obj') def set_update_time_on_success(function): """Set updated time of HostState when consuming succeed.""" @functools.wraps(function) def decorated_function(self, spec_obj): return_value = None try: return_value = function(self, spec_obj) except Exception as e: # Ignores exception raised from consume_from_request() so that # booting instance would fail in the resource claim of compute # node, other suitable node may be chosen during scheduling retry. LOG.warning("Selected host: %(host)s failed to consume from " "instance. 
Error: %(error)s", {'host': self.host, 'error': e}) else: now = timeutils.utcnow() # NOTE(sbauza): Objects are UTC tz-aware by default self.updated = now.replace(tzinfo=iso8601.UTC) return return_value return decorated_function class HostState(object): """Mutable and immutable information tracked for a host. This is an attempt to remove the ad-hoc data structures previously used and lock down access. """ def __init__(self, host, node, cell_uuid): self.host = host self.nodename = node self.uuid = None self._lock_name = (host, node) # Mutable available resources. # These will change as resources are virtually "consumed". self.total_usable_ram_mb = 0 self.total_usable_disk_gb = 0 self.disk_mb_used = 0 self.free_ram_mb = 0 self.free_disk_mb = 0 self.vcpus_total = 0 self.vcpus_used = 0 self.pci_stats = None self.numa_topology = None # Additional host information from the compute node stats: self.num_instances = 0 self.num_io_ops = 0 self.failed_builds = 0 # Other information self.host_ip = None self.hypervisor_type = None self.hypervisor_version = None self.hypervisor_hostname = None self.cpu_info = None self.supported_instances = None # Resource oversubscription values for the compute host: self.limits = {} # Generic metrics from compute nodes self.metrics = None # List of aggregates the host belongs to self.aggregates = [] # Instances on this host self.instances = {} # Allocation ratios for this host self.ram_allocation_ratio = None self.cpu_allocation_ratio = None self.disk_allocation_ratio = None # Host cell (v2) membership self.cell_uuid = cell_uuid self.updated = None def update(self, compute=None, service=None, aggregates=None, inst_dict=None): """Update all information about a host.""" @utils.synchronized(self._lock_name) def _locked_update(self, compute, service, aggregates, inst_dict): # Scheduler API is inherently multi-threaded as every incoming RPC # message will be dispatched in it's own green thread. So the # shared host state should be updated in a consistent way to make # sure its data is valid under concurrent write operations. if compute is not None: LOG.debug("Update host state from compute node: %s", compute) self._update_from_compute_node(compute) if aggregates is not None: LOG.debug("Update host state with aggregates: %s", aggregates) self.aggregates = aggregates if service is not None: LOG.debug("Update host state with service dict: %s", service) self.service = ReadOnlyDict(service) if inst_dict is not None: LOG.debug("Update host state with instances: %s", list(inst_dict)) self.instances = inst_dict return _locked_update(self, compute, service, aggregates, inst_dict) def _update_from_compute_node(self, compute): """Update information about a host from a ComputeNode object.""" # NOTE(jichenjc): if the compute record is just created but not updated # some field such as free_disk_gb can be None if 'free_disk_gb' not in compute or compute.free_disk_gb is None: LOG.debug('Ignoring compute node %s as its usage has not been ' 'updated yet.', compute.uuid) return if (self.updated and compute.updated_at and self.updated > compute.updated_at): return all_ram_mb = compute.memory_mb self.uuid = compute.uuid # Assume virtual size is all consumed by instances if use qcow2 disk. 
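        # Illustrative example with hypothetical numbers: if free_disk_gb
        # reports 100 GB based on virtual image sizes but
        # disk_available_least reports only 40 GB actually left on the
        # backing store, the smaller 40 GB figure is used below.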
free_gb = compute.free_disk_gb least_gb = compute.disk_available_least if least_gb is not None: if least_gb > free_gb: # can occur when an instance in database is not on host LOG.warning( "Host %(hostname)s has more disk space than database " "expected (%(physical)s GB > %(database)s GB)", {'physical': least_gb, 'database': free_gb, 'hostname': compute.hypervisor_hostname}) free_gb = min(least_gb, free_gb) free_disk_mb = free_gb * 1024 self.disk_mb_used = compute.local_gb_used * 1024 # NOTE(jogo) free_ram_mb can be negative self.free_ram_mb = compute.free_ram_mb self.total_usable_ram_mb = all_ram_mb self.total_usable_disk_gb = compute.local_gb self.free_disk_mb = free_disk_mb self.vcpus_total = compute.vcpus self.vcpus_used = compute.vcpus_used self.updated = compute.updated_at # the ComputeNode.numa_topology field is a StringField so deserialize self.numa_topology = objects.NUMATopology.obj_from_db_obj( compute.numa_topology) if compute.numa_topology else None self.pci_stats = pci_stats.PciDeviceStats( stats=compute.pci_device_pools) # All virt drivers report host_ip self.host_ip = compute.host_ip self.hypervisor_type = compute.hypervisor_type self.hypervisor_version = compute.hypervisor_version self.hypervisor_hostname = compute.hypervisor_hostname self.cpu_info = compute.cpu_info if compute.supported_hv_specs: self.supported_instances = [spec.to_list() for spec in compute.supported_hv_specs] else: self.supported_instances = [] # Don't store stats directly in host_state to make sure these don't # overwrite any values, or get overwritten themselves. Store in self so # filters can schedule with them. self.stats = compute.stats or {} # Track number of instances on host self.num_instances = int(self.stats.get('num_instances', 0)) self.num_io_ops = int(self.stats.get('io_workload', 0)) # update metrics self.metrics = objects.MonitorMetricList.from_json(compute.metrics) # update allocation ratios given by the ComputeNode object self.cpu_allocation_ratio = compute.cpu_allocation_ratio self.ram_allocation_ratio = compute.ram_allocation_ratio self.disk_allocation_ratio = compute.disk_allocation_ratio # update failed_builds counter reported by the compute self.failed_builds = int(self.stats.get('failed_builds', 0)) def consume_from_request(self, spec_obj): """Incrementally update host state from a RequestSpec object.""" @utils.synchronized(self._lock_name) @set_update_time_on_success def _locked(self, spec_obj): # Scheduler API is inherently multi-threaded as every incoming RPC # message will be dispatched in its own green thread. So the # shared host state should be consumed in a consistent way to make # sure its data is valid under concurrent write operations. self._locked_consume_from_request(spec_obj) return _locked(self, spec_obj) def _locked_consume_from_request(self, spec_obj): disk_mb = (spec_obj.root_gb + spec_obj.ephemeral_gb) * 1024 ram_mb = spec_obj.memory_mb vcpus = spec_obj.vcpus self.free_ram_mb -= ram_mb self.free_disk_mb -= disk_mb self.vcpus_used += vcpus # Track number of instances on host self.num_instances += 1 pci_requests = spec_obj.pci_requests if pci_requests and self.pci_stats: pci_requests = pci_requests.requests else: pci_requests = None # Calculate the NUMA usage... 
if self.numa_topology and spec_obj.numa_topology: spec_obj.numa_topology = hardware.numa_fit_instance_to_host( self.numa_topology, spec_obj.numa_topology, limits=self.limits.get('numa_topology'), pci_requests=pci_requests, pci_stats=self.pci_stats) self.numa_topology = hardware.numa_usage_from_instance_numa( self.numa_topology, spec_obj.numa_topology) # ...and the PCI usage if pci_requests: instance_cells = None if spec_obj.numa_topology: instance_cells = spec_obj.numa_topology.cells self.pci_stats.apply_requests(pci_requests, instance_cells) # NOTE(sbauza): By considering all cases when the scheduler is called # and when consume_from_request() is run, we can safely say that there # is always an IO operation because we want to move the instance self.num_io_ops += 1 def __repr__(self): return ("(%(host)s, %(node)s) ram: %(free_ram)sMB " "disk: %(free_disk)sMB io_ops: %(num_io_ops)s " "instances: %(num_instances)s" % {'host': self.host, 'node': self.nodename, 'free_ram': self.free_ram_mb, 'free_disk': self.free_disk_mb, 'num_io_ops': self.num_io_ops, 'num_instances': self.num_instances}) class HostManager(object): """Base HostManager class.""" # Can be overridden in a subclass def host_state_cls(self, host, node, cell, **kwargs): return HostState(host, node, cell) def __init__(self): self.refresh_cells_caches() self.filter_handler = filters.HostFilterHandler() filter_classes = self.filter_handler.get_matching_classes( CONF.filter_scheduler.available_filters) self.filter_cls_map = {cls.__name__: cls for cls in filter_classes} self.filter_obj_map = {} self.enabled_filters = self._choose_host_filters(self._load_filters()) self.weight_handler = weights.HostWeightHandler() weigher_classes = self.weight_handler.get_matching_classes( CONF.filter_scheduler.weight_classes) self.weighers = [cls() for cls in weigher_classes] # Dict of aggregates keyed by their ID self.aggs_by_id = {} # Dict of set of aggregate IDs keyed by the name of the host belonging # to those aggregates self.host_aggregates_map = collections.defaultdict(set) self._init_aggregates() self.track_instance_changes = ( CONF.filter_scheduler.track_instance_changes) # Dict of instances and status, keyed by host self._instance_info = {} if self.track_instance_changes: self._init_instance_info() def _load_filters(self): return CONF.filter_scheduler.enabled_filters def _init_aggregates(self): elevated = context_module.get_admin_context() aggs = objects.AggregateList.get_all(elevated) for agg in aggs: self.aggs_by_id[agg.id] = agg for host in agg.hosts: self.host_aggregates_map[host].add(agg.id) def update_aggregates(self, aggregates): """Updates internal HostManager information about aggregates.""" if isinstance(aggregates, (list, objects.AggregateList)): for agg in aggregates: self._update_aggregate(agg) else: self._update_aggregate(aggregates) def _update_aggregate(self, aggregate): self.aggs_by_id[aggregate.id] = aggregate for host in aggregate.hosts: self.host_aggregates_map[host].add(aggregate.id) # Refreshing the mapping dict to remove all hosts that are no longer # part of the aggregate for host in self.host_aggregates_map: if (aggregate.id in self.host_aggregates_map[host] and host not in aggregate.hosts): self.host_aggregates_map[host].remove(aggregate.id) def delete_aggregate(self, aggregate): """Deletes internal HostManager information about a specific aggregate. 
""" if aggregate.id in self.aggs_by_id: del self.aggs_by_id[aggregate.id] for host in self.host_aggregates_map: if aggregate.id in self.host_aggregates_map[host]: self.host_aggregates_map[host].remove(aggregate.id) def _init_instance_info(self, computes_by_cell=None): """Creates the initial view of instances for all hosts. As this initial population of instance information may take some time, we don't wish to block the scheduler's startup while this completes. The async method allows us to simply mock out the _init_instance_info() method in tests. :param compute_nodes: a list of nodes to populate instances info for if is None, compute_nodes will be looked up in database """ def _async_init_instance_info(computes_by_cell): context = context_module.get_admin_context() LOG.debug("START:_async_init_instance_info") self._instance_info = {} count = 0 if not computes_by_cell: computes_by_cell = {} for cell in self.cells.values(): with context_module.target_cell(context, cell) as cctxt: cell_cns = objects.ComputeNodeList.get_all( cctxt).objects computes_by_cell[cell] = cell_cns count += len(cell_cns) LOG.debug("Total number of compute nodes: %s", count) for cell, compute_nodes in computes_by_cell.items(): # Break the queries into batches of 10 to reduce the total # number of calls to the DB. batch_size = 10 start_node = 0 end_node = batch_size while start_node <= len(compute_nodes): curr_nodes = compute_nodes[start_node:end_node] start_node += batch_size end_node += batch_size filters = {"host": [curr_node.host for curr_node in curr_nodes], "deleted": False} with context_module.target_cell(context, cell) as cctxt: result = objects.InstanceList.get_by_filters( cctxt, filters) instances = result.objects LOG.debug("Adding %s instances for hosts %s-%s", len(instances), start_node, end_node) for instance in instances: host = instance.host if host not in self._instance_info: self._instance_info[host] = {"instances": {}, "updated": False} inst_dict = self._instance_info[host] inst_dict["instances"][instance.uuid] = instance # Call sleep() to cooperatively yield time.sleep(0) LOG.debug("END:_async_init_instance_info") # Run this async so that we don't block the scheduler start-up utils.spawn_n(_async_init_instance_info, computes_by_cell) def _choose_host_filters(self, filter_cls_names): """Since the caller may specify which filters to use we need to have an authoritative list of what is permissible. This function checks the filter names against a predefined set of acceptable filters. 
""" if not isinstance(filter_cls_names, (list, tuple)): filter_cls_names = [filter_cls_names] good_filters = [] bad_filters = [] for filter_name in filter_cls_names: if filter_name not in self.filter_obj_map: if filter_name not in self.filter_cls_map: bad_filters.append(filter_name) continue filter_cls = self.filter_cls_map[filter_name] self.filter_obj_map[filter_name] = filter_cls() good_filters.append(self.filter_obj_map[filter_name]) if bad_filters: msg = ", ".join(bad_filters) raise exception.SchedulerHostFilterNotFound(filter_name=msg) return good_filters def get_filtered_hosts(self, hosts, spec_obj, index=0): """Filter hosts and return only ones passing all filters.""" def _strip_ignore_hosts(host_map, hosts_to_ignore): ignored_hosts = [] for host in hosts_to_ignore: for (hostname, nodename) in list(host_map.keys()): if host.lower() == hostname.lower(): del host_map[(hostname, nodename)] ignored_hosts.append(host) ignored_hosts_str = ', '.join(ignored_hosts) LOG.info('Host filter ignoring hosts: %s', ignored_hosts_str) def _match_forced_hosts(host_map, hosts_to_force): forced_hosts = [] lowered_hosts_to_force = [host.lower() for host in hosts_to_force] for (hostname, nodename) in list(host_map.keys()): if hostname.lower() not in lowered_hosts_to_force: del host_map[(hostname, nodename)] else: forced_hosts.append(hostname) if host_map: forced_hosts_str = ', '.join(forced_hosts) LOG.info('Host filter forcing available hosts to %s', forced_hosts_str) else: forced_hosts_str = ', '.join(hosts_to_force) LOG.info("No hosts matched due to not matching " "'force_hosts' value of '%s'", forced_hosts_str) def _match_forced_nodes(host_map, nodes_to_force): forced_nodes = [] for (hostname, nodename) in list(host_map.keys()): if nodename not in nodes_to_force: del host_map[(hostname, nodename)] else: forced_nodes.append(nodename) if host_map: forced_nodes_str = ', '.join(forced_nodes) LOG.info('Host filter forcing available nodes to %s', forced_nodes_str) else: forced_nodes_str = ', '.join(nodes_to_force) LOG.info("No nodes matched due to not matching " "'force_nodes' value of '%s'", forced_nodes_str) def _get_hosts_matching_request(hosts, requested_destination): """Get hosts through matching the requested destination. We will both set host and node to requested destination object and host will never be None and node will be None in some cases. Starting with API 2.74 microversion, we also can specify the host/node to select hosts to launch a server: - If only host(or only node)(or both host and node) is supplied and we get one node from get_compute_nodes_by_host_or_node which is called in resources_from_request_spec function, the destination will be set both host and node. - If only host is supplied and we get more than one node from get_compute_nodes_by_host_or_node which is called in resources_from_request_spec function, the destination will only include host. """ (host, node) = (requested_destination.host, requested_destination.node) if node: requested_nodes = [x for x in hosts if x.host == host and x.nodename == node] else: requested_nodes = [x for x in hosts if x.host == host] if requested_nodes: LOG.info('Host filter only checking host %(host)s and ' 'node %(node)s', {'host': host, 'node': node}) else: # NOTE(sbauza): The API level should prevent the user from # providing a wrong destination but let's make sure a wrong # destination doesn't trample the scheduler still. 
LOG.info('No hosts matched due to not matching requested ' 'destination (%(host)s, %(node)s)', {'host': host, 'node': node}) return iter(requested_nodes) ignore_hosts = spec_obj.ignore_hosts or [] force_hosts = spec_obj.force_hosts or [] force_nodes = spec_obj.force_nodes or [] requested_node = spec_obj.requested_destination if requested_node is not None and 'host' in requested_node: # NOTE(sbauza): Reduce a potentially long set of hosts as much as # possible to any requested destination nodes before passing the # list to the filters hosts = _get_hosts_matching_request(hosts, requested_node) if ignore_hosts or force_hosts or force_nodes: # NOTE(deva): we can't assume "host" is unique because # one host may have many nodes. name_to_cls_map = {(x.host, x.nodename): x for x in hosts} if ignore_hosts: _strip_ignore_hosts(name_to_cls_map, ignore_hosts) if not name_to_cls_map: return [] # NOTE(deva): allow force_hosts and force_nodes independently if force_hosts: _match_forced_hosts(name_to_cls_map, force_hosts) if force_nodes: _match_forced_nodes(name_to_cls_map, force_nodes) check_type = ('scheduler_hints' in spec_obj and spec_obj.scheduler_hints.get('_nova_check_type')) if not check_type and (force_hosts or force_nodes): # NOTE(deva,dansmith): Skip filters when forcing host or node # unless we've declared the internal check type flag, in which # case we're asking for a specific host and for filtering to # be done. if name_to_cls_map: return name_to_cls_map.values() else: return [] hosts = six.itervalues(name_to_cls_map) return self.filter_handler.get_filtered_objects(self.enabled_filters, hosts, spec_obj, index) def get_weighed_hosts(self, hosts, spec_obj): """Weigh the hosts.""" return self.weight_handler.get_weighed_objects(self.weighers, hosts, spec_obj) def _get_computes_for_cells(self, context, cells, compute_uuids=None): """Get a tuple of compute node and service information. :param context: request context :param cells: list of CellMapping objects :param compute_uuids: list of ComputeNode UUIDs. If this is None, all compute nodes from each specified cell will be returned, otherwise only the ComputeNode objects with a UUID in the list of UUIDs in any given cell is returned. If this is an empty list, the returned compute_nodes tuple item will be an empty dict. 
Returns a tuple (compute_nodes, services) where: - compute_nodes is cell-uuid keyed dict of compute node lists - services is a dict of services indexed by hostname """ def targeted_operation(cctxt): services = objects.ServiceList.get_by_binary( cctxt, 'nova-compute', include_disabled=True) if compute_uuids is None: return services, objects.ComputeNodeList.get_all(cctxt) else: return services, objects.ComputeNodeList.get_all_by_uuids( cctxt, compute_uuids) timeout = context_module.CELL_TIMEOUT results = context_module.scatter_gather_cells(context, cells, timeout, targeted_operation) compute_nodes = collections.defaultdict(list) services = {} for cell_uuid, result in results.items(): if isinstance(result, Exception): LOG.warning('Failed to get computes for cell %s', cell_uuid) elif result is context_module.did_not_respond_sentinel: LOG.warning('Timeout getting computes for cell %s', cell_uuid) else: _services, _compute_nodes = result compute_nodes[cell_uuid].extend(_compute_nodes) services.update({service.host: service for service in _services}) return compute_nodes, services def _get_cell_by_host(self, ctxt, host): '''Get CellMapping object of a cell the given host belongs to.''' try: host_mapping = objects.HostMapping.get_by_host(ctxt, host) return host_mapping.cell_mapping except exception.HostMappingNotFound: LOG.warning('No host-to-cell mapping found for selected ' 'host %(host)s.', {'host': host}) return def get_compute_nodes_by_host_or_node(self, ctxt, host, node, cell=None): '''Get compute nodes from given host or node''' def return_empty_list_for_not_found(func): def wrapper(*args, **kwargs): try: ret = func(*args, **kwargs) except exception.NotFound: ret = objects.ComputeNodeList() return ret return wrapper @return_empty_list_for_not_found def _get_by_host_and_node(ctxt): compute_node = objects.ComputeNode.get_by_host_and_nodename( ctxt, host, node) return objects.ComputeNodeList(objects=[compute_node]) @return_empty_list_for_not_found def _get_by_host(ctxt): return objects.ComputeNodeList.get_all_by_host(ctxt, host) @return_empty_list_for_not_found def _get_by_node(ctxt): compute_node = objects.ComputeNode.get_by_nodename(ctxt, node) return objects.ComputeNodeList(objects=[compute_node]) if host and node: target_fnc = _get_by_host_and_node elif host: target_fnc = _get_by_host else: target_fnc = _get_by_node if host and not cell: # optimization not to issue queries to every cell DB cell = self._get_cell_by_host(ctxt, host) cells = [cell] if cell else self.enabled_cells timeout = context_module.CELL_TIMEOUT nodes_by_cell = context_module.scatter_gather_cells( ctxt, cells, timeout, target_fnc) # Only one cell should have values for the compute nodes # so we get them here, or return an empty list if no cell # has a value; be sure to filter out cell failures. nodes = next( (nodes for nodes in nodes_by_cell.values() if nodes and not context_module.is_cell_failure_sentinel(nodes)), objects.ComputeNodeList()) return nodes def refresh_cells_caches(self): # NOTE(tssurya): This function is called from the scheduler manager's # reset signal handler and also upon startup of the scheduler. context = context_module.get_admin_context() temp_cells = objects.CellMappingList.get_all(context) # NOTE(tssurya): filtering cell0 from the list since it need # not be considered for scheduling. 
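# A hedged illustration (hypothetical cell layout): with cell0, cell1
# and a disabled cell2 mapped in the API database, self.cells ends up
# keyed by the uuids of cell1 and cell2, self.enabled_cells holds only
# cell1, and cell0 is dropped below so it is never considered for
# scheduling.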
for c in temp_cells: if c.is_cell0(): temp_cells.objects.remove(c) # once its done break for optimization break # NOTE(danms, tssurya): global dict, keyed by cell uuid, of cells # cached which will be refreshed every time a SIGHUP is sent to the # scheduler. self.cells = {cell.uuid: cell for cell in temp_cells} LOG.debug('Found %(count)i cells: %(cells)s', {'count': len(self.cells), 'cells': ', '.join(self.cells)}) # NOTE(tssurya): Global cache of only the enabled cells. This way # scheduling is limited only to the enabled cells. However this # cache will be refreshed every time a cell is disabled or enabled # or when a new cell is created as long as a SIGHUP signal is sent # to the scheduler. self.enabled_cells = [c for c in temp_cells if not c.disabled] # Filtering the disabled cells only for logging purposes. if LOG.isEnabledFor(logging.DEBUG): disabled_cells = [c for c in temp_cells if c.disabled] LOG.debug('Found %(count)i disabled cells: %(cells)s', {'count': len(disabled_cells), 'cells': ', '.join( [c.identity for c in disabled_cells])}) # Dict, keyed by host name, to cell UUID to be used to look up the # cell a particular host is in (used with self.cells). self.host_to_cell_uuid = {} def get_host_states_by_uuids(self, context, compute_uuids, spec_obj): if not self.cells: LOG.warning("No cells were found") # Restrict to a single cell if and only if the request spec has a # requested cell and allow_cross_cell_move=False. if (spec_obj and 'requested_destination' in spec_obj and spec_obj.requested_destination and 'cell' in spec_obj.requested_destination and not spec_obj.requested_destination.allow_cross_cell_move): only_cell = spec_obj.requested_destination.cell else: only_cell = None if only_cell: cells = [only_cell] else: cells = self.enabled_cells compute_nodes, services = self._get_computes_for_cells( context, cells, compute_uuids=compute_uuids) return self._get_host_states(context, compute_nodes, services) def _get_host_states(self, context, compute_nodes, services): """Returns a generator over HostStates given a list of computes. Also updates the HostStates internal mapping for the HostManager. """ # Get resource usage across the available compute nodes: host_state_map = {} seen_nodes = set() for cell_uuid, computes in compute_nodes.items(): for compute in computes: service = services.get(compute.host) if not service: LOG.warning( "No compute service record found for host %(host)s", {'host': compute.host}) continue host = compute.host node = compute.hypervisor_hostname state_key = (host, node) host_state = host_state_map.get(state_key) if not host_state: host_state = self.host_state_cls(host, node, cell_uuid, compute=compute) host_state_map[state_key] = host_state # We force to update the aggregates info each time a # new request comes in, because some changes on the # aggregates could have been happening after setting # this field for the first time host_state.update(compute, dict(service), self._get_aggregates_info(host), self._get_instance_info(context, compute)) seen_nodes.add(state_key) return (host_state_map[host] for host in seen_nodes) def _get_aggregates_info(self, host): return [self.aggs_by_id[agg_id] for agg_id in self.host_aggregates_map[host]] def _get_cell_mapping_for_host(self, context, host_name): """Finds the CellMapping for a particular host name Relies on a cache to quickly fetch the CellMapping if we have looked up this host before, otherwise gets the CellMapping via the HostMapping record for the given host name. 
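A hedged sketch of the lookup order used below: the host_to_cell_uuid
cache is consulted first, then the self.cells cache, and only on a
miss is objects.HostMapping.get_by_host() queried, with the result
cached for subsequent calls.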
:param context: nova auth request context :param host_name: compute service host name :returns: CellMapping object :raises: HostMappingNotFound if the host is not mapped to a cell """ # Check to see if we have the host in our cache. if host_name in self.host_to_cell_uuid: cell_uuid = self.host_to_cell_uuid[host_name] if cell_uuid in self.cells: return self.cells[cell_uuid] # Something is wrong so log a warning and just fall through to # lookup the HostMapping. LOG.warning('Host %s is expected to be in cell %s but that cell ' 'uuid was not found in our cache. The service may ' 'need to be restarted to refresh the cache.', host_name, cell_uuid) # We have not cached this host yet so get the HostMapping, cache the # result and return the CellMapping. hm = objects.HostMapping.get_by_host(context, host_name) cell_mapping = hm.cell_mapping self.host_to_cell_uuid[host_name] = cell_mapping.uuid return cell_mapping def _get_instances_by_host(self, context, host_name): try: cm = self._get_cell_mapping_for_host(context, host_name) except exception.HostMappingNotFound: # It's possible to hit this when the compute service first starts # up and casts to update_instance_info with an empty list but # before the host is mapped in the API database. LOG.info('Host mapping not found for host %s. Not tracking ' 'instance info for this host.', host_name) return {} with context_module.target_cell(context, cm) as cctxt: uuids = objects.InstanceList.get_uuids_by_host(cctxt, host_name) # Putting the context in the otherwise fake Instance object at # least allows out of tree filters to lazy-load fields. return {uuid: objects.Instance(cctxt, uuid=uuid) for uuid in uuids} def _get_instance_info(self, context, compute): """Gets the host instance info from the compute host. Some sites may disable ``track_instance_changes`` for performance or isolation reasons. In either of these cases, there will either be no information for the host, or the 'updated' value for that host dict will be False. In those cases, we need to grab the current InstanceList instead of relying on the version in _instance_info. """ host_name = compute.host host_info = self._instance_info.get(host_name) if host_info and host_info.get("updated"): inst_dict = host_info["instances"] else: # Updates aren't flowing from nova-compute. inst_dict = self._get_instances_by_host(context, host_name) return inst_dict def _recreate_instance_info(self, context, host_name): """Get the InstanceList for the specified host, and store it in the _instance_info dict. """ inst_dict = self._get_instances_by_host(context, host_name) host_info = self._instance_info[host_name] = {} host_info["instances"] = inst_dict host_info["updated"] = False @utils.synchronized(HOST_INSTANCE_SEMAPHORE) def update_instance_info(self, context, host_name, instance_info): """Receives an InstanceList object from a compute node. This method receives information from a compute node when it starts up, or when its instances have changed, and updates its view of hosts and instances with it. """ host_info = self._instance_info.get(host_name) if host_info: inst_dict = host_info.get("instances") for instance in instance_info.objects: # Overwrite the entry (if any) with the new info. inst_dict[instance.uuid] = instance host_info["updated"] = True else: instances = instance_info.objects if len(instances) > 1: # This is a host sending its full instance list, so use it. 
host_info = self._instance_info[host_name] = {} host_info["instances"] = {instance.uuid: instance for instance in instances} host_info["updated"] = True else: self._recreate_instance_info(context, host_name) LOG.info("Received an update from an unknown host '%s'. " "Re-created its InstanceList.", host_name) @utils.synchronized(HOST_INSTANCE_SEMAPHORE) def delete_instance_info(self, context, host_name, instance_uuid): """Receives the UUID from a compute node when one of its instances is terminated. The instance in the local view of the host's instances is removed. """ host_info = self._instance_info.get(host_name) if host_info: inst_dict = host_info["instances"] # Remove the existing Instance object, if any inst_dict.pop(instance_uuid, None) host_info["updated"] = True else: self._recreate_instance_info(context, host_name) LOG.info("Received a delete update from an unknown host '%s'. " "Re-created its InstanceList.", host_name) @utils.synchronized(HOST_INSTANCE_SEMAPHORE) def sync_instance_info(self, context, host_name, instance_uuids): """Receives the uuids of the instances on a host. This method is periodically called by the compute nodes, which send a list of all the UUID values for the instances on that node. This is used by the scheduler's HostManager to detect when its view of the compute node's instances is out of sync. """ host_info = self._instance_info.get(host_name) if host_info: local_set = set(host_info["instances"].keys()) compute_set = set(instance_uuids) if not local_set == compute_set: self._recreate_instance_info(context, host_name) LOG.info("The instance sync for host '%s' did not match. " "Re-created its InstanceList.", host_name) return host_info["updated"] = True LOG.debug("Successfully synced instances from host '%s'.", host_name) else: self._recreate_instance_info(context, host_name) LOG.info("Received a sync request from an unknown host '%s'. " "Re-created its InstanceList.", host_name) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/scheduler/manager.py0000664000175000017500000002776000000000000017507 0ustar00zuulzuul00000000000000# Copyright (c) 2010 OpenStack Foundation # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
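# A rough sketch of the flow implemented in this module (names as used
# below): nova-conductor calls SchedulerManager.select_destinations()
# over RPC, the manager runs the Placement request filters, asks
# Placement for allocation candidates, hands filtering and weighing to
# the driver loaded from CONF.scheduler.driver via stevedore, and
# returns Selection objects (or legacy host dicts when the caller sets
# return_objects=False).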
""" Scheduler Service """ import collections from oslo_log import log as logging import oslo_messaging as messaging from oslo_serialization import jsonutils from oslo_service import periodic_task import six from stevedore import driver import nova.conf from nova import exception from nova import manager from nova import objects from nova.objects import host_mapping as host_mapping_obj from nova import quota from nova.scheduler.client import report from nova.scheduler import request_filter from nova.scheduler import utils LOG = logging.getLogger(__name__) CONF = nova.conf.CONF QUOTAS = quota.QUOTAS HOST_MAPPING_EXISTS_WARNING = False class SchedulerManager(manager.Manager): """Chooses a host to run instances on.""" target = messaging.Target(version='4.5') _sentinel = object() def __init__(self, *args, **kwargs): self.placement_client = report.SchedulerReportClient() self.driver = driver.DriverManager( 'nova.scheduler.driver', CONF.scheduler.driver, invoke_on_load=True ).driver super(SchedulerManager, self).__init__( service_name='scheduler', *args, **kwargs ) @periodic_task.periodic_task( spacing=CONF.scheduler.discover_hosts_in_cells_interval, run_immediately=True) def _discover_hosts_in_cells(self, context): global HOST_MAPPING_EXISTS_WARNING try: host_mappings = host_mapping_obj.discover_hosts(context) if host_mappings: LOG.info('Discovered %(count)i new hosts: %(hosts)s', {'count': len(host_mappings), 'hosts': ','.join(['%s:%s' % (hm.cell_mapping.name, hm.host) for hm in host_mappings])}) except exception.HostMappingExists as exp: msg = ('This periodic task should only be enabled on a single ' 'scheduler to prevent collisions between multiple ' 'schedulers: %s' % six.text_type(exp)) if not HOST_MAPPING_EXISTS_WARNING: LOG.warning(msg) HOST_MAPPING_EXISTS_WARNING = True else: LOG.debug(msg) @periodic_task.periodic_task(spacing=CONF.scheduler.periodic_task_interval, run_immediately=True) def _run_periodic_tasks(self, context): self.driver.run_periodic_tasks(context) def reset(self): # NOTE(tssurya): This is a SIGHUP handler which will reset the cells # and enabled cells caches in the host manager. So every time an # existing cell is disabled or enabled or a new cell is created, a # SIGHUP signal has to be sent to the scheduler for proper scheduling. # NOTE(mriedem): Similarly there is a host-to-cell cache which should # be reset if a host is deleted from a cell and "discovered" in another # cell. self.driver.host_manager.refresh_cells_caches() @messaging.expected_exceptions(exception.NoValidHost) def select_destinations(self, ctxt, request_spec=None, filter_properties=None, spec_obj=_sentinel, instance_uuids=None, return_objects=False, return_alternates=False): """Returns destinations(s) best suited for this RequestSpec. Starting in Queens, this method returns a list of lists of Selection objects, with one list for each requested instance. Each instance's list will have its first element be the Selection object representing the chosen host for the instance, and if return_alternates is True, zero or more alternate objects that could also satisfy the request. The number of alternates is determined by the configuration option `CONF.scheduler.max_attempts`. The ability of a calling method to handle this format of returned destinations is indicated by a True value in the parameter `return_objects`. 
However, there may still be some older conductors in a deployment that have not been updated to Queens, and in that case return_objects will be False, and the result will be a list of dicts with 'host', 'nodename' and 'limits' as keys. When return_objects is False, the value of return_alternates has no effect. The reason there are two kwarg parameters return_objects and return_alternates is so we can differentiate between callers that understand the Selection object format but *don't* want to get alternate hosts, as is the case with the conductors that handle certain move operations. """ LOG.debug("Starting to schedule for instances: %s", instance_uuids) # TODO(sbauza): Change the method signature to only accept a spec_obj # argument once API v5 is provided. if spec_obj is self._sentinel: spec_obj = objects.RequestSpec.from_primitives(ctxt, request_spec, filter_properties) is_rebuild = utils.request_is_rebuild(spec_obj) alloc_reqs_by_rp_uuid, provider_summaries, allocation_request_version \ = None, None, None if self.driver.USES_ALLOCATION_CANDIDATES and not is_rebuild: # Only process the Placement request spec filters when Placement # is used. try: request_filter.process_reqspec(ctxt, spec_obj) except exception.RequestFilterFailed as e: raise exception.NoValidHost(reason=e.message) resources = utils.resources_from_request_spec( ctxt, spec_obj, self.driver.host_manager, enable_pinning_translate=True) res = self.placement_client.get_allocation_candidates(ctxt, resources) if res is None: # We have to handle the case that we failed to connect to the # Placement service and the safe_connect decorator on # get_allocation_candidates returns None. res = None, None, None alloc_reqs, provider_summaries, allocation_request_version = res alloc_reqs = alloc_reqs or [] provider_summaries = provider_summaries or {} # if the user requested pinned CPUs, we make a second query to # placement for allocation candidates using VCPUs instead of PCPUs. # This is necessary because users might not have modified all (or # any) of their compute nodes meaning said compute nodes will not # be reporting PCPUs yet. This is okay to do because the # NUMATopologyFilter (scheduler) or virt driver (compute node) will # weed out hosts that are actually using new style configuration # but simply don't have enough free PCPUs (or any PCPUs). # TODO(stephenfin): Remove when we drop support for 'vcpu_pin_set' if (resources.cpu_pinning_requested and not CONF.workarounds.disable_fallback_pcpu_query): LOG.debug('Requesting fallback allocation candidates with ' 'VCPU instead of PCPU') resources = utils.resources_from_request_spec( ctxt, spec_obj, self.driver.host_manager, enable_pinning_translate=False) res = self.placement_client.get_allocation_candidates( ctxt, resources) if res: # merge the allocation requests and provider summaries from # the two requests together alloc_reqs_fallback, provider_summaries_fallback, _ = res alloc_reqs.extend(alloc_reqs_fallback) provider_summaries.update(provider_summaries_fallback) if not alloc_reqs: LOG.info("Got no allocation candidates from the Placement " "API. 
This could be due to insufficient resources " "or a temporary occurrence as compute nodes start " "up.") raise exception.NoValidHost(reason="") else: # Build a dict of lists of allocation requests, keyed by # provider UUID, so that when we attempt to claim resources for # a host, we can grab an allocation request easily alloc_reqs_by_rp_uuid = collections.defaultdict(list) for ar in alloc_reqs: for rp_uuid in ar['allocations']: alloc_reqs_by_rp_uuid[rp_uuid].append(ar) # Only return alternates if both return_objects and return_alternates # are True. return_alternates = return_alternates and return_objects selections = self.driver.select_destinations(ctxt, spec_obj, instance_uuids, alloc_reqs_by_rp_uuid, provider_summaries, allocation_request_version, return_alternates) # If `return_objects` is False, we need to convert the selections to # the older format, which is a list of host state dicts. if not return_objects: selection_dicts = [sel[0].to_dict() for sel in selections] return jsonutils.to_primitive(selection_dicts) return selections def update_aggregates(self, ctxt, aggregates): """Updates HostManager internal aggregates information. :param aggregates: Aggregate(s) to update :type aggregates: :class:`nova.objects.Aggregate` or :class:`nova.objects.AggregateList` """ # NOTE(sbauza): We're dropping the user context now as we don't need it self.driver.host_manager.update_aggregates(aggregates) def delete_aggregate(self, ctxt, aggregate): """Deletes HostManager internal information about a specific aggregate. :param aggregate: Aggregate to delete :type aggregate: :class:`nova.objects.Aggregate` """ # NOTE(sbauza): We're dropping the user context now as we don't need it self.driver.host_manager.delete_aggregate(aggregate) def update_instance_info(self, context, host_name, instance_info): """Receives information about changes to a host's instances, and updates the driver's HostManager with that information. """ self.driver.host_manager.update_instance_info(context, host_name, instance_info) def delete_instance_info(self, context, host_name, instance_uuid): """Receives information about the deletion of one of a host's instances, and updates the driver's HostManager with that information. """ self.driver.host_manager.delete_instance_info(context, host_name, instance_uuid) def sync_instance_info(self, context, host_name, instance_uuids): """Receives a sync request from a host, and passes it on to the driver's HostManager. """ self.driver.host_manager.sync_instance_info(context, host_name, instance_uuids) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/scheduler/request_filter.py0000664000175000017500000002377400000000000021133 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
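# A rough sketch of the pattern used in this module: each request
# filter below is a plain function taking (ctxt, request_spec) that
# mutates the RequestSpec in place (required/forbidden root traits,
# destination aggregates, ...) and returns True if it applied;
# process_reqspec() simply runs every entry of ALL_REQUEST_FILTERS in
# order before the Placement query is built.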
import functools import os_traits from oslo_log import log as logging from oslo_utils import timeutils import nova.conf from nova import exception from nova.i18n import _ from nova import objects from nova.scheduler import utils CONF = nova.conf.CONF LOG = logging.getLogger(__name__) TENANT_METADATA_KEY = 'filter_tenant_id' def trace_request_filter(fn): @functools.wraps(fn) def wrapper(ctxt, request_spec): timer = timeutils.StopWatch() ran = False with timer: try: ran = fn(ctxt, request_spec) finally: if ran: # Only log info if the filter was enabled and not # excluded for some reason LOG.debug('Request filter %r took %.1f seconds', fn.__name__, timer.elapsed()) return ran return wrapper @trace_request_filter def isolate_aggregates(ctxt, request_spec): """Prepare list of aggregates that should be isolated. This filter will prepare the list of aggregates that should be ignored by the placement service. It checks if aggregates has metadata 'trait:='required' and if is not present in either of flavor extra specs or image properties, then those aggregates will be included in the list of isolated aggregates. Precisely this filter gets the trait request form the image and flavor and unions them. Then it accumulates the set of aggregates that request traits are "non_matching_by_metadata_keys" and uses that to produce the list of isolated aggregates. """ if not CONF.scheduler.enable_isolated_aggregate_filtering: return False # Get required traits set in flavor and image res_req = utils.ResourceRequest(request_spec) required_traits = res_req.all_required_traits keys = ['trait:%s' % trait for trait in required_traits] isolated_aggregates = ( objects.aggregate.AggregateList.get_non_matching_by_metadata_keys( ctxt, keys, 'trait:', value='required')) # Set list of isolated aggregates to destination object of request_spec if isolated_aggregates: if ('requested_destination' not in request_spec or request_spec.requested_destination is None): request_spec.requested_destination = objects.Destination() destination = request_spec.requested_destination destination.append_forbidden_aggregates( agg.uuid for agg in isolated_aggregates) return True @trace_request_filter def require_tenant_aggregate(ctxt, request_spec): """Require hosts in an aggregate based on tenant id. This will modify request_spec to request hosts in an aggregate defined specifically for the tenant making the request. We do that by looking for a nova host aggregate with metadata indicating which tenant it is for, and passing that aggregate uuid to placement to limit results accordingly. 
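A hedged example (hypothetical values): an aggregate whose metadata
contains filter_tenant_id=<project_id> (only the key prefix is
checked, so e.g. filter_tenant_id2 also matches) causes requests from
that project to require membership of that aggregate via
destination.require_aggregates(); if
placement_aggregate_required_for_tenants is set and no aggregate
matches, the request fails with RequestFilterFailed.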
""" enabled = CONF.scheduler.limit_tenants_to_placement_aggregate agg_required = CONF.scheduler.placement_aggregate_required_for_tenants if not enabled: return False aggregates = objects.AggregateList.get_by_metadata( ctxt, value=request_spec.project_id) aggregate_uuids_for_tenant = set([]) for agg in aggregates: for key, value in agg.metadata.items(): if key.startswith(TENANT_METADATA_KEY): aggregate_uuids_for_tenant.add(agg.uuid) break if aggregate_uuids_for_tenant: if ('requested_destination' not in request_spec or request_spec.requested_destination is None): request_spec.requested_destination = objects.Destination() destination = request_spec.requested_destination destination.require_aggregates(aggregate_uuids_for_tenant) LOG.debug('require_tenant_aggregate request filter added ' 'aggregates %s for tenant %r', ','.join(aggregate_uuids_for_tenant), request_spec.project_id) elif agg_required: LOG.warning('Tenant %(tenant)s has no available aggregates', {'tenant': request_spec.project_id}) raise exception.RequestFilterFailed( reason=_('No hosts available for tenant')) return True @trace_request_filter def map_az_to_placement_aggregate(ctxt, request_spec): """Map requested nova availability zones to placement aggregates. This will modify request_spec to request hosts in an aggregate that matches the desired AZ of the user's request. """ if not CONF.scheduler.query_placement_for_availability_zone: return False az_hint = request_spec.availability_zone if not az_hint: return False aggregates = objects.AggregateList.get_by_metadata(ctxt, key='availability_zone', value=az_hint) if aggregates: if ('requested_destination' not in request_spec or request_spec.requested_destination is None): request_spec.requested_destination = objects.Destination() agg_uuids = [agg.uuid for agg in aggregates] request_spec.requested_destination.require_aggregates(agg_uuids) LOG.debug('map_az_to_placement_aggregate request filter added ' 'aggregates %s for az %r', ','.join(agg_uuids), az_hint) return True @trace_request_filter def require_image_type_support(ctxt, request_spec): """Request type-specific trait on candidates. This will modify the request_spec to request hosts that support the disk_format of the image provided. """ if not CONF.scheduler.query_placement_for_image_type_support: return False if request_spec.is_bfv: # We are booting from volume, and thus compute node image # disk_format support does not matter. return False disk_format = request_spec.image.disk_format trait_name = 'COMPUTE_IMAGE_TYPE_%s' % disk_format.upper() if not hasattr(os_traits, trait_name): LOG.error(('Computed trait name %r is not valid; ' 'is os-traits up to date?'), trait_name) return False request_spec.root_required.add(trait_name) LOG.debug('require_image_type_support request filter added required ' 'trait %s', trait_name) return True @trace_request_filter def transform_image_metadata(ctxt, request_spec): """Transform image metadata to required traits. This will modify the request_spec to request hosts that support virtualisation capabilities based on the image metadata properties. 
""" if not CONF.scheduler.image_metadata_prefilter: return False prefix_map = { 'hw_cdrom_bus': 'COMPUTE_STORAGE_BUS', 'hw_disk_bus': 'COMPUTE_STORAGE_BUS', 'hw_video_model': 'COMPUTE_GRAPHICS_MODEL', 'hw_vif_model': 'COMPUTE_NET_VIF_MODEL', } trait_names = [] for key, prefix in prefix_map.items(): if key in request_spec.image.properties: value = request_spec.image.properties.get(key).replace( '-', '_').upper() trait_name = f'{prefix}_{value}' if not hasattr(os_traits, trait_name): LOG.error(('Computed trait name %r is not valid; ' 'is os-traits up to date?'), trait_name) return False trait_names.append(trait_name) for trait_name in trait_names: LOG.debug( 'transform_image_metadata request filter added required ' 'trait %s', trait_name ) request_spec.root_required.add(trait_name) return True @trace_request_filter def compute_status_filter(ctxt, request_spec): """Pre-filter compute node resource providers using COMPUTE_STATUS_DISABLED The ComputeFilter filters out hosts for compute services that are disabled. Compute node resource providers managed by a disabled compute service should have the COMPUTE_STATUS_DISABLED trait set and be excluded by this mandatory pre-filter. """ trait_name = os_traits.COMPUTE_STATUS_DISABLED request_spec.root_forbidden.add(trait_name) LOG.debug('compute_status_filter request filter added forbidden ' 'trait %s', trait_name) return True @trace_request_filter def accelerators_filter(ctxt, request_spec): """Allow only compute nodes with accelerator support. This filter retains only nodes whose compute manager published the COMPUTE_ACCELERATORS trait, thus indicating the version of n-cpu is sufficient to handle accelerator requests. """ trait_name = os_traits.COMPUTE_ACCELERATORS if request_spec.flavor.extra_specs.get('accel:device_profile'): request_spec.root_required.add(trait_name) LOG.debug('accelerators_filter request filter added required ' 'trait %s', trait_name) return True ALL_REQUEST_FILTERS = [ require_tenant_aggregate, map_az_to_placement_aggregate, require_image_type_support, compute_status_filter, isolate_aggregates, transform_image_metadata, accelerators_filter, ] def process_reqspec(ctxt, request_spec): """Process an objects.ReqestSpec before calling placement. :param ctxt: A RequestContext :param request_spec: An objects.RequestSpec to be inspected/modified """ for filter in ALL_REQUEST_FILTERS: filter(ctxt, request_spec) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/rpcapi.py0000664000175000017500000001736000000000000017346 0ustar00zuulzuul00000000000000# Copyright 2013, Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Client side of the scheduler manager RPC API. 
""" import oslo_messaging as messaging import nova.conf from nova import exception as exc from nova.objects import base as objects_base from nova import profiler from nova import rpc CONF = nova.conf.CONF RPC_TOPIC = "scheduler" @profiler.trace_cls("rpc") class SchedulerAPI(object): '''Client side of the scheduler rpc API. API version history: * 1.0 - Initial version. * 1.1 - Changes to prep_resize(): * remove instance_uuid, add instance * remove instance_type_id, add instance_type * remove topic, it was unused * 1.2 - Remove topic from run_instance, it was unused * 1.3 - Remove instance_id, add instance to live_migration * 1.4 - Remove update_db from prep_resize * 1.5 - Add reservations argument to prep_resize() * 1.6 - Remove reservations argument to run_instance() * 1.7 - Add create_volume() method, remove topic from live_migration() * 2.0 - Remove 1.x backwards compat * 2.1 - Add image_id to create_volume() * 2.2 - Remove reservations argument to create_volume() * 2.3 - Remove create_volume() * 2.4 - Change update_service_capabilities() * accepts a list of capabilities * 2.5 - Add get_backdoor_port() * 2.6 - Add select_hosts() ... Grizzly supports message version 2.6. So, any changes to existing methods in 2.x after that point should be done such that they can handle the version_cap being set to 2.6. * 2.7 - Add select_destinations() * 2.8 - Deprecate prep_resize() -- JUST KIDDING. It is still used by the compute manager for retries. * 2.9 - Added the legacy_bdm_in_spec parameter to run_instance() ... Havana supports message version 2.9. So, any changes to existing methods in 2.x after that point should be done such that they can handle the version_cap being set to 2.9. * Deprecated live_migration() call, moved to conductor * Deprecated select_hosts() 3.0 - Removed backwards compat ... Icehouse and Juno support message version 3.0. So, any changes to existing methods in 3.x after that point should be done such that they can handle the version_cap being set to 3.0. * 3.1 - Made select_destinations() send flavor object * 4.0 - Removed backwards compat for Icehouse * 4.1 - Add update_aggregates() and delete_aggregate() * 4.2 - Added update_instance_info(), delete_instance_info(), and sync_instance_info() methods ... Kilo and Liberty support message version 4.2. So, any changes to existing methods in 4.x after that point should be done such that they can handle the version_cap being set to 4.2. * 4.3 - Modify select_destinations() signature by providing a RequestSpec obj ... Mitaka, Newton, and Ocata support message version 4.3. So, any changes to existing methods in 4.x after that point should be done such that they can handle the version_cap being set to 4.3. * 4.4 - Modify select_destinations() signature by providing the instance_uuids for the request. ... Pike supports message version 4.4. So any changes to existing methods in 4.x after that point should be done such that they can handle the version_cap being set to 4.4. * 4.5 - Modify select_destinations() to optionally return a list of lists of Selection objects, along with zero or more alternates. 
''' VERSION_ALIASES = { 'grizzly': '2.6', 'havana': '2.9', 'icehouse': '3.0', 'juno': '3.0', 'kilo': '4.2', 'liberty': '4.2', 'mitaka': '4.3', 'newton': '4.3', 'ocata': '4.3', 'pike': '4.4', } def __init__(self): super(SchedulerAPI, self).__init__() target = messaging.Target(topic=RPC_TOPIC, version='4.0') version_cap = self.VERSION_ALIASES.get(CONF.upgrade_levels.scheduler, CONF.upgrade_levels.scheduler) serializer = objects_base.NovaObjectSerializer() self.client = rpc.get_client(target, version_cap=version_cap, serializer=serializer) def select_destinations(self, ctxt, spec_obj, instance_uuids, return_objects=False, return_alternates=False): # Modify the parameters if an older version is requested version = '4.5' msg_args = {'instance_uuids': instance_uuids, 'spec_obj': spec_obj, 'return_objects': return_objects, 'return_alternates': return_alternates} if not self.client.can_send_version(version): if msg_args['return_objects'] or msg_args['return_alternates']: # The client is requesting an RPC version we can't support. raise exc.SelectionObjectsWithOldRPCVersionNotSupported( version=self.client.version_cap) del msg_args['return_objects'] del msg_args['return_alternates'] version = '4.4' if not self.client.can_send_version(version): del msg_args['instance_uuids'] version = '4.3' if not self.client.can_send_version(version): del msg_args['spec_obj'] msg_args['request_spec'] = spec_obj.to_legacy_request_spec_dict() msg_args['filter_properties' ] = spec_obj.to_legacy_filter_properties_dict() version = '4.0' cctxt = self.client.prepare( version=version, call_monitor_timeout=CONF.rpc_response_timeout, timeout=CONF.long_rpc_timeout) return cctxt.call(ctxt, 'select_destinations', **msg_args) def update_aggregates(self, ctxt, aggregates): # NOTE(sbauza): Yes, it's a fanout, we need to update all schedulers cctxt = self.client.prepare(fanout=True, version='4.1') cctxt.cast(ctxt, 'update_aggregates', aggregates=aggregates) def delete_aggregate(self, ctxt, aggregate): # NOTE(sbauza): Yes, it's a fanout, we need to update all schedulers cctxt = self.client.prepare(fanout=True, version='4.1') cctxt.cast(ctxt, 'delete_aggregate', aggregate=aggregate) def update_instance_info(self, ctxt, host_name, instance_info): cctxt = self.client.prepare(version='4.2', fanout=True) return cctxt.cast(ctxt, 'update_instance_info', host_name=host_name, instance_info=instance_info) def delete_instance_info(self, ctxt, host_name, instance_uuid): cctxt = self.client.prepare(version='4.2', fanout=True) return cctxt.cast(ctxt, 'delete_instance_info', host_name=host_name, instance_uuid=instance_uuid) def sync_instance_info(self, ctxt, host_name, instance_uuids): cctxt = self.client.prepare(version='4.2', fanout=True) return cctxt.cast(ctxt, 'sync_instance_info', host_name=host_name, instance_uuids=instance_uuids) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/scheduler/utils.py0000664000175000017500000016221700000000000017232 0ustar00zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """Utility methods for scheduling.""" import collections import re import sys import traceback import os_resource_classes as orc import os_traits from oslo_log import log as logging from oslo_serialization import jsonutils from six.moves.urllib import parse from nova.compute import flavors from nova.compute import utils as compute_utils import nova.conf from nova import context as nova_context from nova import exception from nova.i18n import _ from nova import objects from nova.objects import base as obj_base from nova.objects import fields as obj_fields from nova.objects import instance as obj_instance from nova import rpc from nova.scheduler.filters import utils as filters_utils from nova.virt import hardware LOG = logging.getLogger(__name__) CONF = nova.conf.CONF GroupDetails = collections.namedtuple('GroupDetails', ['hosts', 'policy', 'members']) class ResourceRequest(object): """Presents a granular resource request via RequestGroup instances.""" # extra_specs-specific consts XS_RES_PREFIX = 'resources' XS_TRAIT_PREFIX = 'trait' # Regex patterns for suffixed or unsuffixed resources/trait keys XS_KEYPAT = re.compile(r"^(%s)([a-zA-Z0-9_-]{1,64})?:(.*)$" % '|'.join((XS_RES_PREFIX, XS_TRAIT_PREFIX))) def __init__(self, request_spec, enable_pinning_translate=True): """Create a new instance of ResourceRequest from a RequestSpec. Examines the flavor, flavor extra specs, (optional) image metadata, and (optional) requested_resources and request_level_params of the provided ``request_spec``. For extra specs, items of the following form are examined: - ``resources:$RESOURCE_CLASS``: $AMOUNT - ``resources$S:$RESOURCE_CLASS``: $AMOUNT - ``trait:$TRAIT_NAME``: "required" - ``trait$S:$TRAIT_NAME``: "required" ...where ``$S`` is a string suffix as supported via Placement microversion 1.33 https://docs.openstack.org/placement/train/specs/train/implemented/2005575-nested-magic-1.html#arbitrary-group-suffixes .. note:: This does *not* yet handle ``member_of[$S]``. The string suffix is used as the RequestGroup.requester_id to facilitate mapping of requests to allocation candidates using the ``mappings`` piece of the response added in Placement microversion 1.34 https://docs.openstack.org/placement/train/specs/train/implemented/placement-resource-provider-request-group-mapping-in-allocation-candidates.html For image metadata, traits are extracted from the ``traits_required`` property, if present. For the flavor, ``VCPU``, ``MEMORY_MB`` and ``DISK_GB`` are calculated from Flavor properties, though these are only used if they aren't overridden by flavor extra specs. requested_resources, which are existing RequestGroup instances created on the RequestSpec based on resources specified outside of the flavor/ image (e.g. from ports) are incorporated as is, but ensuring that they get unique group suffixes. request_level_params - settings associated with the request as a whole rather than with a specific RequestGroup - are incorporated as is. :param request_spec: An instance of ``objects.RequestSpec``. :param enable_pinning_translate: True if the CPU policy extra specs should be translated to placement resources and traits. 
""" # { ident: RequestGroup } self._rg_by_id = {} self._group_policy = None # root_required+=these self._root_required = request_spec.root_required # root_required+=!these self._root_forbidden = request_spec.root_forbidden # Default to the configured limit but _limit can be # set to None to indicate "no limit". self._limit = CONF.scheduler.max_placement_results # TODO(efried): Handle member_of[$S], which will need to be reconciled # with destination.aggregates handling in resources_from_request_spec # request_spec.image is nullable if 'image' in request_spec and request_spec.image: image = request_spec.image else: image = objects.ImageMeta(properties=objects.ImageMetaProps()) # Parse the flavor extra specs self._process_extra_specs(request_spec.flavor) self.suffixed_groups_from_flavor = self.get_num_of_suffixed_groups() # Now parse the (optional) image metadata self._process_image_meta(image) # TODO(stephenfin): Remove this parameter once we drop support for # 'vcpu_pin_set' self.cpu_pinning_requested = False if enable_pinning_translate: # Next up, let's handle those pesky CPU pinning policies self._translate_pinning_policies(request_spec.flavor, image) # Add on any request groups that came from outside of the flavor/image, # e.g. from ports or device profiles. self._process_requested_resources(request_spec) # Parse the flavor itself, though we'll only use these fields if they # don't conflict with something already provided by the flavor extra # specs. These are all added to the unsuffixed request group. merged_resources = self.merged_resources() if (orc.VCPU not in merged_resources and orc.PCPU not in merged_resources): self._add_resource(None, orc.VCPU, request_spec.vcpus) if orc.MEMORY_MB not in merged_resources: self._add_resource(None, orc.MEMORY_MB, request_spec.memory_mb) if orc.DISK_GB not in merged_resources: disk = request_spec.ephemeral_gb disk += compute_utils.convert_mb_to_ceil_gb(request_spec.swap) if 'is_bfv' not in request_spec or not request_spec.is_bfv: disk += request_spec.root_gb if disk: self._add_resource(None, orc.DISK_GB, disk) self._translate_memory_encryption(request_spec.flavor, image) self._translate_vpmems_request(request_spec.flavor) self.strip_zeros() def _process_requested_resources(self, request_spec): requested_resources = (request_spec.requested_resources if 'requested_resources' in request_spec and request_spec.requested_resources else []) for group in requested_resources: self._add_request_group(group) def _process_extra_specs(self, flavor): if 'extra_specs' not in flavor: return for key, val in flavor.extra_specs.items(): if key == 'group_policy': self._add_group_policy(val) continue match = self.XS_KEYPAT.match(key) if not match: continue # 'prefix' is 'resources' or 'trait' # 'suffix' is $S or None # 'name' is either the resource class name or the trait name. 
prefix, suffix, name = match.groups() # Process "resources[$S]" if prefix == self.XS_RES_PREFIX: self._add_resource(suffix, name, val) # Process "trait[$S]" elif prefix == self.XS_TRAIT_PREFIX: self._add_trait(suffix, name, val) def _process_image_meta(self, image): if not image or 'properties' not in image: return for trait in image.properties.get('traits_required', []): # required traits from the image are always added to the # unsuffixed request group, granular request groups are not # supported in image traits self._add_trait(None, trait, "required") def _translate_memory_encryption(self, flavor, image): """When the hw:mem_encryption extra spec or the hw_mem_encryption image property are requested, translate into a request for resources:MEM_ENCRYPTION_CONTEXT=1 which requires a slot on a host which can support encryption of the guest memory. """ # NOTE(aspiers): In theory this could raise FlavorImageConflict, # but we already check it in the API layer, so that should never # happen. if not hardware.get_mem_encryption_constraint(flavor, image): # No memory encryption required, so no further action required. return self._add_resource(None, orc.MEM_ENCRYPTION_CONTEXT, 1) LOG.debug("Added %s=1 to requested resources", orc.MEM_ENCRYPTION_CONTEXT) def _translate_vpmems_request(self, flavor): """When the hw:pmem extra spec is present, require hosts which can provide enough vpmem resources. """ vpmem_labels = hardware.get_vpmems(flavor) if not vpmem_labels: # No vpmems required return amount_by_rc = collections.defaultdict(int) for vpmem_label in vpmem_labels: resource_class = orc.normalize_name( "PMEM_NAMESPACE_" + vpmem_label) amount_by_rc[resource_class] += 1 for resource_class, amount in amount_by_rc.items(): self._add_resource(None, resource_class, amount) LOG.debug("Added resource %s=%d to requested resources", resource_class, amount) def _translate_pinning_policies(self, flavor, image): """Translate the legacy pinning policies to resource requests.""" # NOTE(stephenfin): These can raise exceptions but these have already # been validated by 'nova.virt.hardware.numa_get_constraints' in the # API layer (see change I06fad233006c7bab14749a51ffa226c3801f951b). # This call also handles conflicts between explicit VCPU/PCPU # requests and implicit 'hw:cpu_policy'-based requests, mismatches # between the number of CPUs in the flavor and explicit VCPU/PCPU # requests, etc. 
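# A hedged example (hypothetical flavor): 'hw:cpu_policy': 'dedicated'
# on a 4-vCPU flavor is translated below into PCPU=4 with no VCPU
# request; adding 'hw:emulator_threads_policy': 'isolate' bumps that to
# PCPU=5, while 'hw:cpu_thread_policy': 'require' adds a required
# HW_CPU_HYPERTHREADING trait ('isolate' forbids it instead).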
cpu_policy = hardware.get_cpu_policy_constraint( flavor, image) cpu_thread_policy = hardware.get_cpu_thread_policy_constraint( flavor, image) emul_thread_policy = hardware.get_emulator_thread_policy_constraint( flavor) # We don't need to worry about handling 'SHARED' - that will result in # VCPUs which we include by default if cpu_policy == obj_fields.CPUAllocationPolicy.DEDICATED: # TODO(stephenfin): Remove when we drop support for 'vcpu_pin_set' self.cpu_pinning_requested = True # Switch VCPU -> PCPU cpus = flavor.vcpus LOG.debug('Translating request for %(vcpu_rc)s=%(cpus)d to ' '%(vcpu_rc)s=0,%(pcpu_rc)s=%(cpus)d', {'vcpu_rc': orc.VCPU, 'pcpu_rc': orc.PCPU, 'cpus': cpus}) if emul_thread_policy == 'isolate': cpus += 1 LOG.debug('Adding additional %(pcpu_rc)s to account for ' 'emulator threads', {'pcpu_rc': orc.PCPU}) self._add_resource(None, orc.PCPU, cpus) trait = { obj_fields.CPUThreadAllocationPolicy.ISOLATE: 'forbidden', obj_fields.CPUThreadAllocationPolicy.REQUIRE: 'required', }.get(cpu_thread_policy) if trait: LOG.debug('Adding %(trait)s=%(value)s trait', {'trait': os_traits.HW_CPU_HYPERTHREADING, 'value': trait}) self._add_trait(None, os_traits.HW_CPU_HYPERTHREADING, trait) @property def group_policy(self): return self._group_policy @group_policy.setter def group_policy(self, value): self._group_policy = value def get_request_group(self, ident): if ident not in self._rg_by_id: rq_grp = objects.RequestGroup( use_same_provider=bool(ident), requester_id=ident) self._rg_by_id[ident] = rq_grp return self._rg_by_id[ident] def _add_request_group(self, request_group): """Inserts the existing group with a unique suffix. The groups coming from the flavor can have arbitrary suffixes; those are guaranteed to be unique within the flavor. A group coming from "outside" (ports, device profiles) must be associated with a requester_id, such as a port UUID. We use this requester_id as the group suffix (but ensure that it is unique in combination with suffixes from the flavor). Groups coming from "outside" are not allowed to be no-ops. That is, they must provide resources and/or required/forbidden traits/aggregates :param request_group: the RequestGroup to be added. :raise: ValueError if request_group has no requester_id, or if it provides no resources or (required/forbidden) traits or aggregates. :raise: RequestGroupSuffixConflict if request_group.requester_id already exists in this ResourceRequest. """ # NOTE(efried): Deliberately check False-ness rather than None-ness # here, since both would result in the unsuffixed request group being # used, and that's bad. if not request_group.requester_id: # NOTE(efried): An "outside" RequestGroup is created by a # programmatic agent and that agent is responsible for guaranteeing # the presence of a unique requester_id. This is in contrast to # flavor extra_specs where a human is responsible for the group # suffix. raise ValueError( _('Missing requester_id in RequestGroup! This is probably a ' 'programmer error. %s') % request_group) if request_group.is_empty(): # NOTE(efried): It is up to the calling code to enforce a nonempty # RequestGroup with suitable logic and exceptions. raise ValueError( _('Refusing to add no-op RequestGroup with requester_id=%s. 
' 'This is a probably a programmer error.') % request_group.requester_id) if request_group.requester_id in self._rg_by_id: raise exception.RequestGroupSuffixConflict( suffix=request_group.requester_id) self._rg_by_id[request_group.requester_id] = request_group def _add_resource(self, groupid, rclass, amount): self.get_request_group(groupid).add_resource(rclass, amount) def _add_trait(self, groupid, trait_name, trait_type): self.get_request_group(groupid).add_trait(trait_name, trait_type) def _add_group_policy(self, policy): # The only valid values for group_policy are 'none' and 'isolate'. if policy not in ('none', 'isolate'): LOG.warning( "Invalid group_policy '%s'. Valid values are 'none' and " "'isolate'.", policy) return self._group_policy = policy def get_num_of_suffixed_groups(self): return len([ident for ident in self._rg_by_id.keys() if ident is not None]) def merged_resources(self): """Returns a merge of {resource_class: amount} for all resource groups. Amounts of the same resource class from different groups are added together. :return: A dict of the form {resource_class: amount} """ ret = collections.defaultdict(lambda: 0) for rg in self._rg_by_id.values(): for resource_class, amount in rg.resources.items(): ret[resource_class] += amount return dict(ret) def strip_zeros(self): """Remove any resources whose amounts are zero.""" for rg in self._rg_by_id.values(): rg.strip_zeros() # Get rid of any empty RequestGroup instances. for ident, rg in list(self._rg_by_id.items()): if rg.is_empty(): self._rg_by_id.pop(ident) def to_querystring(self): """Produce a querystring of the form expected by GET /allocation_candidates. """ # TODO(gibi): We have a RequestGroup OVO so we can move this to that # class as a member function. # NOTE(efried): The sorting herein is not necessary for the API; it is # to make testing easier and logging/debugging predictable. def to_queryparams(request_group, suffix): res = request_group.resources required_traits = request_group.required_traits forbidden_traits = request_group.forbidden_traits aggregates = request_group.aggregates in_tree = request_group.in_tree forbidden_aggregates = request_group.forbidden_aggregates resource_query = ",".join( sorted("%s:%s" % (rc, amount) for (rc, amount) in res.items())) qs_params = [('resources%s' % suffix, resource_query)] # Assemble required and forbidden traits, allowing for either/both # to be empty. required_val = ','.join( sorted(required_traits) + ['!%s' % ft for ft in sorted(forbidden_traits)]) if required_val: qs_params.append(('required%s' % suffix, required_val)) if aggregates: aggs = [] # member_of$S is a list of lists. We need a tuple of # ('member_of$S', 'in:uuid,uuid,...') for each inner list. for agglist in aggregates: aggs.append(('member_of%s' % suffix, 'in:' + ','.join(sorted(agglist)))) qs_params.extend(sorted(aggs)) if in_tree: qs_params.append(('in_tree%s' % suffix, in_tree)) if forbidden_aggregates: # member_of$S is a list of aggregate uuids. We need a # tuple of ('member_of$S, '!in:uuid,uuid,...'). forbidden_aggs = '!in:' + ','.join( sorted(forbidden_aggregates)) qs_params.append(('member_of%s' % suffix, forbidden_aggs)) return qs_params if self._limit is not None: qparams = [('limit', self._limit)] else: qparams = [] if self._group_policy is not None: qparams.append(('group_policy', self._group_policy)) if self._root_required or self._root_forbidden: vals = sorted(self._root_required) + ['!' 
+ t for t in sorted(self._root_forbidden)] qparams.append(('root_required', ','.join(vals))) for ident, rg in self._rg_by_id.items(): # [('resources[$S]', 'rclass:amount,rclass:amount,...'), # ('required[$S]', 'trait_name,!trait_name,...'), # ('member_of[$S]', 'in:uuid,uuid,...'), # ('member_of[$S]', 'in:uuid,uuid,...')] qparams.extend(to_queryparams(rg, ident or '')) return parse.urlencode(sorted(qparams)) @property def all_required_traits(self): traits = set() for rr in self._rg_by_id.values(): traits = traits.union(rr.required_traits) return traits def __str__(self): return ', '.join(sorted( list(str(rg) for rg in list(self._rg_by_id.values())))) def build_request_spec(image, instances, instance_type=None): """Build a request_spec (ahem, not a RequestSpec) for the scheduler. The request_spec assumes that all instances to be scheduled are the same type. :param image: optional primitive image meta dict :param instances: list of instances; objects will be converted to primitives :param instance_type: optional flavor; objects will be converted to primitives :return: dict with the following keys:: 'image': the image dict passed in or {} 'instance_properties': primitive version of the first instance passed 'instance_type': primitive version of the instance_type or None 'num_instances': the number of instances passed in """ instance = instances[0] if instance_type is None: if isinstance(instance, obj_instance.Instance): instance_type = instance.get_flavor() else: instance_type = flavors.extract_flavor(instance) if isinstance(instance, obj_instance.Instance): instance = obj_base.obj_to_primitive(instance) # obj_to_primitive doesn't copy this enough, so be sure # to detach our metadata blob because we modify it below. instance['system_metadata'] = dict(instance.get('system_metadata', {})) if isinstance(instance_type, objects.Flavor): instance_type = obj_base.obj_to_primitive(instance_type) # NOTE(danms): Replicate this old behavior because the # scheduler RPC interface technically expects it to be # there. Remove this when we bump the scheduler RPC API to # v5.0 try: flavors.save_flavor_info(instance.get('system_metadata', {}), instance_type) except KeyError: # If the flavor isn't complete (which is legit with a # flavor object, just don't put it in the request spec pass request_spec = { 'image': image or {}, 'instance_properties': instance, 'instance_type': instance_type, 'num_instances': len(instances)} # NOTE(mriedem): obj_to_primitive above does not serialize everything # in an object, like datetime fields, so we need to still call to_primitive # to recursively serialize the items in the request_spec dict. return jsonutils.to_primitive(request_spec) def resources_from_flavor(instance, flavor): """Convert a flavor into a set of resources for placement, taking into account boot-from-volume instances. This takes an instance and a flavor and returns a dict of resource_class:amount based on the attributes of the flavor, accounting for any overrides that are made in extra_specs. """ is_bfv = compute_utils.is_volume_backed_instance(instance._context, instance) # create a fake RequestSpec as a wrapper to the caller req_spec = objects.RequestSpec(flavor=flavor, is_bfv=is_bfv) # TODO(efried): This method is currently only used from places that # assume the compute node is the only resource provider. So for now, we # just merge together all the resources specified in the flavor and pass # them along. This will need to be adjusted when nested and/or shared RPs # are in play. 
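# A hedged example (hypothetical flavor): vcpus=2, memory_mb=2048,
# root_gb=20 with no swap/ephemeral and no overriding extra specs
# yields {'VCPU': 2, 'MEMORY_MB': 2048, 'DISK_GB': 20}; for a
# boot-from-volume instance the root disk is not counted, so DISK_GB
# is omitted entirely when there is also no swap or ephemeral disk.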
res_req = ResourceRequest(req_spec) return res_req.merged_resources() def resources_from_request_spec(ctxt, spec_obj, host_manager, enable_pinning_translate=True): """Given a RequestSpec object, returns a ResourceRequest of the resources, traits, and aggregates it represents. :param context: The request context. :param spec_obj: A RequestSpec object. :param host_manager: A HostManager object. :param enable_pinning_translate: True if the CPU policy extra specs should be translated to placement resources and traits. :return: A ResourceRequest object. :raises NoValidHost: If the specified host/node is not found in the DB. """ res_req = ResourceRequest(spec_obj, enable_pinning_translate) # values to get the destination target compute uuid target_host = None target_node = None target_cell = None if 'requested_destination' in spec_obj: destination = spec_obj.requested_destination if destination: if 'host' in destination: target_host = destination.host if 'node' in destination: target_node = destination.node if 'cell' in destination: target_cell = destination.cell if destination.aggregates: grp = res_req.get_request_group(None) # If the target must be either in aggA *or* in aggB and must # definitely be in aggC, the destination.aggregates would be # ['aggA,aggB', 'aggC'] # Here we are converting it to # [['aggA', 'aggB'], ['aggC']] grp.aggregates = [ored.split(',') for ored in destination.aggregates] if destination.forbidden_aggregates: grp = res_req.get_request_group(None) grp.forbidden_aggregates |= destination.forbidden_aggregates if 'force_hosts' in spec_obj and spec_obj.force_hosts: # Prioritize the value from requested_destination just in case # so that we don't inadvertently overwrite it to the old value # of force_hosts persisted in the DB target_host = target_host or spec_obj.force_hosts[0] if 'force_nodes' in spec_obj and spec_obj.force_nodes: # Prioritize the value from requested_destination just in case # so that we don't inadvertently overwrite it to the old value # of force_nodes persisted in the DB target_node = target_node or spec_obj.force_nodes[0] if target_host or target_node: nodes = host_manager.get_compute_nodes_by_host_or_node( ctxt, target_host, target_node, cell=target_cell) if not nodes: reason = (_('No such host - host: %(host)s node: %(node)s ') % {'host': target_host, 'node': target_node}) raise exception.NoValidHost(reason=reason) if len(nodes) == 1: if 'requested_destination' in spec_obj and destination: # When we only supply hypervisor_hostname in api to create a # server, the destination object will only include the node. # Here when we get one node, we set both host and node to # destination object. So we can reduce the number of HostState # objects to run through the filters. destination.host = nodes[0].host destination.node = nodes[0].hypervisor_hostname grp = res_req.get_request_group(None) grp.in_tree = nodes[0].uuid else: # Multiple nodes are found when a target host is specified # without a specific node. Since placement doesn't support # multiple uuids in the `in_tree` queryparam, what we can do here # is to remove the limit from the `GET /a_c` query to prevent # the found nodes from being filtered out in placement. res_req._limit = None # Don't limit allocation candidates when using affinity/anti-affinity. 
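    # (Illustrative note, assuming the usual configuration) res_req._limit
    # is normally seeded from [scheduler]/max_placement_results, which caps
    # how many allocation candidates placement returns. The (anti-)affinity
    # decision for the hints below is made later by the scheduler filters,
    # so a capped candidate list could accidentally drop every host that
    # satisfies the server group constraint; clearing the limit avoids that.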
if ('scheduler_hints' in spec_obj and any( key in ['group', 'same_host', 'different_host'] for key in spec_obj.scheduler_hints)): res_req._limit = None if res_req.get_num_of_suffixed_groups() >= 2 and not res_req.group_policy: LOG.warning( "There is more than one numbered request group in the " "allocation candidate query but the flavor did not specify " "any group policy. This query would fail in placement due to " "the missing group policy. If you specified more than one " "numbered request group in the flavor extra_spec then you need to " "specify the group policy in the flavor extra_spec. If it is OK " "to let these groups be satisfied by overlapping resource " "providers then use 'group_policy': 'none'. If you want each " "group to be satisfied from a separate resource provider then " "use 'group_policy': 'isolate'.") if res_req.suffixed_groups_from_flavor <= 1: LOG.info( "At least one numbered request group is defined outside of " "the flavor (e.g. in a port that has a QoS minimum bandwidth " "policy rule attached) but the flavor did not specify any " "group policy. To avoid the placement failure nova defaults " "the group policy to 'none'.") res_req.group_policy = 'none' return res_req def claim_resources_on_destination( context, reportclient, instance, source_node, dest_node, source_allocations=None, consumer_generation=None): """Copies allocations from source node to dest node in Placement Normally the scheduler will allocate resources on a chosen destination node during a move operation like evacuate and live migration. However, because of the ability to force a host and bypass the scheduler, this method can be used to manually copy allocations from the source node to the forced destination node. This is only appropriate when the instance flavor on the source node is the same on the destination node, i.e. don't use this for resize. :param context: The request context. :param reportclient: An instance of the SchedulerReportClient. :param instance: The instance being moved. :param source_node: source ComputeNode where the instance currently lives :param dest_node: destination ComputeNode where the instance is being moved :param source_allocations: The consumer's current allocations on the source compute :param consumer_generation: The expected generation of the consumer. None if a new consumer is expected :raises NoValidHost: If the allocation claim on the destination node fails. :raises: keystoneauth1.exceptions.base.ClientException on failure to communicate with the placement API :raises: ConsumerAllocationRetrievalFailed if the placement API call fails :raises: AllocationUpdateFailed: If a parallel consumer update changed the consumer """ # Get the current allocations for the source node and the instance. # NOTE(gibi) For the live migrate case, the caller provided the # allocation that needs to be used on the dest_node along with the # expected consumer_generation of the consumer (which is the instance). if not source_allocations: # NOTE(gibi): This is the forced evacuate case where the caller did not # provide any allocation request. So we ask placement here for the # current allocation and consumer generation and use that for the new # allocation on the dest_node. If the allocation fails due to consumer # generation conflict then the claim will raise and the operation will # be aborted. # NOTE(gibi): This only detect a small portion of possible # cases when allocation is modified outside of the this # code path. 
The rest can only be detected if nova would # cache at least the consumer generation of the instance. allocations = reportclient.get_allocs_for_consumer( context, instance.uuid) source_allocations = allocations.get('allocations', {}) consumer_generation = allocations.get('consumer_generation') if not source_allocations: # This shouldn't happen, so just raise an error since we cannot # proceed. raise exception.ConsumerAllocationRetrievalFailed( consumer_uuid=instance.uuid, error=_( 'Expected to find allocations for source node resource ' 'provider %s. Retry the operation without forcing a ' 'destination host.') % source_node.uuid) # Generate an allocation request for the destination node. # NOTE(gibi): if the source allocation allocates from more than one RP # then we need to fail as the dest allocation might also need to be # complex (e.g. nested) and we cannot calculate that allocation request # properly without a placement allocation candidate call. # Alternatively we could sum up the source allocation and try to # allocate that from the root RP of the dest host. It would only work # if the dest host would not require nested allocation for this server # which is really a rare case. if len(source_allocations) > 1: reason = (_('Unable to move instance %(instance_uuid)s to ' 'host %(host)s. The instance has complex allocations ' 'on the source host so move cannot be forced.') % {'instance_uuid': instance.uuid, 'host': dest_node.host}) raise exception.NoValidHost(reason=reason) alloc_request = { 'allocations': { dest_node.uuid: { 'resources': source_allocations[source_node.uuid]['resources']} }, } # import locally to avoid cyclic import from nova.scheduler.client import report # The claim_resources method will check for existing allocations # for the instance and effectively "double up" the allocations for # both the source and destination node. That's why when requesting # allocations for resources on the destination node before we move, # we use the existing resource allocations from the source node. if reportclient.claim_resources( context, instance.uuid, alloc_request, instance.project_id, instance.user_id, allocation_request_version=report.CONSUMER_GENERATION_VERSION, consumer_generation=consumer_generation): LOG.debug('Instance allocations successfully created on ' 'destination node %(dest)s: %(alloc_request)s', {'dest': dest_node.uuid, 'alloc_request': alloc_request}, instance=instance) else: # We have to fail even though the user requested that we force # the host. This is because we need Placement to have an # accurate reflection of what's allocated on all nodes so the # scheduler can make accurate decisions about which nodes have # capacity for building an instance. reason = (_('Unable to move instance %(instance_uuid)s to ' 'host %(host)s. There is not enough capacity on ' 'the host for the instance.') % {'instance_uuid': instance.uuid, 'host': dest_node.host}) raise exception.NoValidHost(reason=reason) def set_vm_state_and_notify(context, instance_uuid, service, method, updates, ex, request_spec): """Updates the instance, sets the fault and sends an error notification. :param context: The request context. :param instance_uuid: The UUID of the instance to update. :param service: The name of the originating service, e.g. 'compute_task'. This becomes part of the publisher_id for the notification payload. :param method: The method that failed, e.g. 'migrate_server'. :param updates: dict of updates for the instance object, typically a vm_state and task_state value. 
:param ex: An exception which occurred during the given method. :param request_spec: Optional request spec. """ # e.g. "Failed to compute_task_migrate_server: No valid host was found" LOG.warning("Failed to %(service)s_%(method)s: %(ex)s", {'service': service, 'method': method, 'ex': ex}) # Convert the request spec to a dict if needed. if request_spec is not None: if isinstance(request_spec, objects.RequestSpec): request_spec = request_spec.to_legacy_request_spec_dict() else: request_spec = {} # TODO(mriedem): We should make vm_state optional since not all callers # of this method want to change the vm_state, e.g. the Exception block # in ComputeTaskManager._cold_migrate. vm_state = updates['vm_state'] properties = request_spec.get('instance_properties', {}) notifier = rpc.get_notifier(service) state = vm_state.upper() LOG.warning('Setting instance to %s state.', state, instance_uuid=instance_uuid) instance = objects.Instance(context=context, uuid=instance_uuid, **updates) instance.obj_reset_changes(['uuid']) instance.save() compute_utils.add_instance_fault_from_exc( context, instance, ex, sys.exc_info()) payload = dict(request_spec=request_spec, instance_properties=properties, instance_id=instance_uuid, state=vm_state, method=method, reason=ex) event_type = '%s.%s' % (service, method) notifier.error(context, event_type, payload) compute_utils.notify_about_compute_task_error( context, method, instance_uuid, request_spec, vm_state, ex, traceback.format_exc()) def build_filter_properties(scheduler_hints, forced_host, forced_node, instance_type): """Build the filter_properties dict from data in the boot request.""" filter_properties = dict(scheduler_hints=scheduler_hints) filter_properties['instance_type'] = instance_type # TODO(alaski): It doesn't seem necessary that these are conditionally # added. Let's just add empty lists if not forced_host/node. if forced_host: filter_properties['force_hosts'] = [forced_host] if forced_node: filter_properties['force_nodes'] = [forced_node] return filter_properties def populate_filter_properties(filter_properties, selection): """Add additional information to the filter properties after a node has been selected by the scheduling process. :param filter_properties: dict of filter properties (the legacy form of the RequestSpec) :param selection: Selection object """ host = selection.service_host nodename = selection.nodename # Need to convert SchedulerLimits object to older dict format. if "limits" in selection and selection.limits is not None: limits = selection.limits.to_dict() else: limits = {} # Adds a retry entry for the selected compute host and node: _add_retry_host(filter_properties, host, nodename) # Adds oversubscription policy if not filter_properties.get('force_hosts'): filter_properties['limits'] = limits def populate_retry(filter_properties, instance_uuid): max_attempts = CONF.scheduler.max_attempts force_hosts = filter_properties.get('force_hosts', []) force_nodes = filter_properties.get('force_nodes', []) # In the case of multiple force hosts/nodes, scheduler should not # disable retry filter but traverse all force hosts/nodes one by # one till scheduler gets a valid target host. 
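    # (Illustrative example) After two failed attempts the retry entry
    # built below looks roughly like:
    #     filter_properties['retry'] = {
    #         'num_attempts': 2,
    #         'hosts': [['host1', 'node1'], ['host2', 'node2']],
    #     }
    # where each [host, node] pair is appended by _add_retry_host() once a
    # destination has been selected.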
if (max_attempts == 1 or len(force_hosts) == 1 or len(force_nodes) == 1): # re-scheduling is disabled, log why if max_attempts == 1: LOG.debug('Re-scheduling is disabled due to "max_attempts" config') else: LOG.debug("Re-scheduling is disabled due to forcing a host (%s) " "and/or node (%s)", force_hosts, force_nodes) return # retry is enabled, update attempt count: retry = filter_properties.setdefault( 'retry', { 'num_attempts': 0, 'hosts': [] # list of compute hosts tried }) retry['num_attempts'] += 1 _log_compute_error(instance_uuid, retry) exc_reason = retry.pop('exc_reason', None) if retry['num_attempts'] > max_attempts: msg = (_('Exceeded max scheduling attempts %(max_attempts)d ' 'for instance %(instance_uuid)s. ' 'Last exception: %(exc_reason)s') % {'max_attempts': max_attempts, 'instance_uuid': instance_uuid, 'exc_reason': exc_reason}) raise exception.MaxRetriesExceeded(reason=msg) def _log_compute_error(instance_uuid, retry): """If the request contained an exception from a previous compute build/resize operation, log it to aid debugging """ exc = retry.get('exc') # string-ified exception from compute if not exc: return # no exception info from a previous attempt, skip hosts = retry.get('hosts', None) if not hosts: return # no previously attempted hosts, skip last_host, last_node = hosts[-1] LOG.error( 'Error from last host: %(last_host)s (node %(last_node)s): %(exc)s', {'last_host': last_host, 'last_node': last_node, 'exc': exc}, instance_uuid=instance_uuid) def _add_retry_host(filter_properties, host, node): """Add a retry entry for the selected compute node. In the event that the request gets re-scheduled, this entry will signal that the given node has already been tried. """ retry = filter_properties.get('retry', None) if not retry: return hosts = retry['hosts'] hosts.append([host, node]) def parse_options(opts, sep='=', converter=str, name=""): """Parse a list of options, each in the format of . Also use the converter to convert the value into desired type. :params opts: list of options, e.g. from oslo_config.cfg.ListOpt :params sep: the separator :params converter: callable object to convert the value, should raise ValueError for conversion failure :params name: name of the option :returns: a lists of tuple of values (key, converted_value) """ good = [] bad = [] for opt in opts: try: key, seen_sep, value = opt.partition(sep) value = converter(value) except ValueError: key = None value = None if key and seen_sep and value is not None: good.append((key, value)) else: bad.append(opt) if bad: LOG.warning("Ignoring the invalid elements of the option " "%(name)s: %(options)s", {'name': name, 'options': ", ".join(bad)}) return good def validate_filter(filter): """Validates that the filter is configured in the default filters.""" return filter in CONF.filter_scheduler.enabled_filters def validate_weigher(weigher): """Validates that the weigher is configured in the default weighers.""" weight_classes = CONF.filter_scheduler.weight_classes if 'nova.scheduler.weights.all_weighers' in weight_classes: return True return weigher in weight_classes _SUPPORTS_AFFINITY = None _SUPPORTS_ANTI_AFFINITY = None _SUPPORTS_SOFT_AFFINITY = None _SUPPORTS_SOFT_ANTI_AFFINITY = None def _get_group_details(context, instance_uuid, user_group_hosts=None): """Provide group_hosts and group_policies sets related to instances if those instances are belonging to a group and if corresponding filters are enabled. 
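    Illustrative example (not from the source): for an instance that is a
    member of an 'anti-affinity' server group whose other members already
    run on host1 and host2, and with ServerGroupAntiAffinityFilter enabled,
    the return value would be roughly
    GroupDetails(hosts={'host1', 'host2'}, policy='anti-affinity',
    members=group.members).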
:param instance_uuid: UUID of the instance to check :param user_group_hosts: Hosts from the group or empty set :returns: None or namedtuple GroupDetails """ global _SUPPORTS_AFFINITY if _SUPPORTS_AFFINITY is None: _SUPPORTS_AFFINITY = validate_filter( 'ServerGroupAffinityFilter') global _SUPPORTS_ANTI_AFFINITY if _SUPPORTS_ANTI_AFFINITY is None: _SUPPORTS_ANTI_AFFINITY = validate_filter( 'ServerGroupAntiAffinityFilter') global _SUPPORTS_SOFT_AFFINITY if _SUPPORTS_SOFT_AFFINITY is None: _SUPPORTS_SOFT_AFFINITY = validate_weigher( 'nova.scheduler.weights.affinity.ServerGroupSoftAffinityWeigher') global _SUPPORTS_SOFT_ANTI_AFFINITY if _SUPPORTS_SOFT_ANTI_AFFINITY is None: _SUPPORTS_SOFT_ANTI_AFFINITY = validate_weigher( 'nova.scheduler.weights.affinity.' 'ServerGroupSoftAntiAffinityWeigher') if not instance_uuid: return try: group = objects.InstanceGroup.get_by_instance_uuid(context, instance_uuid) except exception.InstanceGroupNotFound: return policies = set(('anti-affinity', 'affinity', 'soft-affinity', 'soft-anti-affinity')) if group.policy in policies: if not _SUPPORTS_AFFINITY and 'affinity' == group.policy: msg = _("ServerGroupAffinityFilter not configured") LOG.error(msg) raise exception.UnsupportedPolicyException(reason=msg) if not _SUPPORTS_ANTI_AFFINITY and 'anti-affinity' == group.policy: msg = _("ServerGroupAntiAffinityFilter not configured") LOG.error(msg) raise exception.UnsupportedPolicyException(reason=msg) if (not _SUPPORTS_SOFT_AFFINITY and 'soft-affinity' == group.policy): msg = _("ServerGroupSoftAffinityWeigher not configured") LOG.error(msg) raise exception.UnsupportedPolicyException(reason=msg) if (not _SUPPORTS_SOFT_ANTI_AFFINITY and 'soft-anti-affinity' == group.policy): msg = _("ServerGroupSoftAntiAffinityWeigher not configured") LOG.error(msg) raise exception.UnsupportedPolicyException(reason=msg) group_hosts = set(group.get_hosts()) user_hosts = set(user_group_hosts) if user_group_hosts else set() return GroupDetails(hosts=user_hosts | group_hosts, policy=group.policy, members=group.members) def _get_instance_group_hosts_all_cells(context, instance_group): def get_hosts_in_cell(cell_context): # NOTE(melwitt): The obj_alternate_context is going to mutate the # cell_instance_group._context and to do this in a scatter-gather # with multiple parallel greenthreads, we need the instance groups # to be separate object copies. cell_instance_group = instance_group.obj_clone() with cell_instance_group.obj_alternate_context(cell_context): return cell_instance_group.get_hosts() results = nova_context.scatter_gather_skip_cell0(context, get_hosts_in_cell) hosts = [] for result in results.values(): # TODO(melwitt): We will need to handle scenarios where an exception # is raised while targeting a cell and when a cell does not respond # as part of the "handling of a down cell" spec: # https://blueprints.launchpad.net/nova/+spec/handling-down-cell if not nova_context.is_cell_failure_sentinel(result): hosts.extend(result) return hosts def setup_instance_group(context, request_spec): """Add group_hosts and group_policies fields to filter_properties dict based on instance uuids provided in request_spec, if those instances are belonging to a group. :param request_spec: Request spec """ # NOTE(melwitt): Proactively query for the instance group hosts instead of # relying on a lazy-load via the 'hosts' field of the InstanceGroup object. 
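    # (Illustrative example) In a two-cell deployment the scatter-gather in
    # _get_instance_group_hosts_all_cells() returns something like
    # {cell1_uuid: ['hostA'], cell2_uuid: ['hostB', 'hostC']}, and
    # group.hosts below ends up as ['hostA', 'hostB', 'hostC'].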
if (request_spec.instance_group and 'hosts' not in request_spec.instance_group): group = request_spec.instance_group # If the context is already targeted to a cell (during a move # operation), we don't need to scatter-gather. We do need to use # obj_alternate_context here because the RequestSpec is queried at the # start of a move operation in compute/api, before the context has been # targeted. # NOTE(mriedem): If doing a cross-cell move and the group policy # is anti-affinity, this could be wrong since there could be # instances in the group on other hosts in other cells. However, # ServerGroupAntiAffinityFilter does not look at group.hosts. if context.db_connection: with group.obj_alternate_context(context): group.hosts = group.get_hosts() else: group.hosts = _get_instance_group_hosts_all_cells(context, group) if request_spec.instance_group and request_spec.instance_group.hosts: group_hosts = request_spec.instance_group.hosts else: group_hosts = None instance_uuid = request_spec.instance_uuid # This queries the group details for the group where the instance is a # member. The group_hosts passed in are the hosts that contain members of # the requested instance group. group_info = _get_group_details(context, instance_uuid, group_hosts) if group_info is not None: request_spec.instance_group.hosts = list(group_info.hosts) request_spec.instance_group.policy = group_info.policy request_spec.instance_group.members = group_info.members def request_is_rebuild(spec_obj): """Returns True if request is for a rebuild. :param spec_obj: An objects.RequestSpec to examine (or None). """ if not spec_obj: return False if 'scheduler_hints' not in spec_obj: return False check_type = spec_obj.scheduler_hints.get('_nova_check_type') return check_type == ['rebuild'] def claim_resources(ctx, client, spec_obj, instance_uuid, alloc_req, allocation_request_version=None): """Given an instance UUID (representing the consumer of resources) and the allocation_request JSON object returned from Placement, attempt to claim resources for the instance in the placement API. Returns True if the claim process was successful, False otherwise. :param ctx: The RequestContext object :param client: The scheduler client to use for making the claim call :param spec_obj: The RequestSpec object - needed to get the project_id :param instance_uuid: The UUID of the consuming instance :param alloc_req: The allocation_request received from placement for the resources we want to claim against the chosen host. The allocation_request satisfies the original request for resources and can be supplied as-is (along with the project and user ID to the placement API's PUT /allocations/{consumer_uuid} call to claim resources for the instance :param allocation_request_version: The microversion used to request the allocations. """ if request_is_rebuild(spec_obj): # NOTE(danms): This is a rebuild-only scheduling request, so we should # not be doing any extra claiming LOG.debug('Not claiming resources in the placement API for ' 'rebuild-only scheduling of instance %(uuid)s', {'uuid': instance_uuid}) return True LOG.debug("Attempting to claim resources in the placement API for " "instance %s", instance_uuid) project_id = spec_obj.project_id # We didn't start storing the user_id in the RequestSpec until Rocky so # if it's not set on an old RequestSpec, use the user_id from the context. 
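    # (Illustrative) The check below is effectively
    #     user_id = spec_obj.user_id or ctx.user_id
    # except that a pre-Rocky RequestSpec may not have the field set at
    # all, hence the explicit "'user_id' in spec_obj" membership test.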
if 'user_id' in spec_obj and spec_obj.user_id: user_id = spec_obj.user_id else: # FIXME(mriedem): This would actually break accounting if we relied on # the allocations for something like counting quota usage because in # the case of migrating or evacuating an instance, the user here is # likely the admin, not the owner of the instance, so the allocation # would be tracked against the wrong user. user_id = ctx.user_id # NOTE(gibi): this could raise AllocationUpdateFailed which means there is # a serious issue with the instance_uuid as a consumer. Every caller of # utils.claim_resources() assumes that instance_uuid will be a new consumer # and therefore we passing None as expected consumer_generation to # reportclient.claim_resources() here. If the claim fails # due to consumer generation conflict, which in this case means the # consumer is not new, then we let the AllocationUpdateFailed propagate and # fail the build / migrate as the instance is in inconsistent state. return client.claim_resources(ctx, instance_uuid, alloc_req, project_id, user_id, allocation_request_version=allocation_request_version, consumer_generation=None) def get_weight_multiplier(host_state, multiplier_name, multiplier_config): """Given a HostState object, multplier_type name and multiplier_config, returns the weight multiplier. It reads the "multiplier_name" from "aggregate metadata" in host_state to override the multiplier_config. If the aggregate metadata doesn't contain the multiplier_name, the multiplier_config will be returned directly. :param host_state: The HostState object, which contains aggregate metadata :param multiplier_name: The weight multiplier name, like "cpu_weight_multiplier". :param multiplier_config: The weight multiplier configuration value """ aggregate_vals = filters_utils.aggregate_values_from_key(host_state, multiplier_name) try: value = filters_utils.validate_num_values( aggregate_vals, multiplier_config, cast_to=float) except ValueError as e: LOG.warning("Could not decode '%(name)s' weight multiplier: %(exce)s", {'exce': e, 'name': multiplier_name}) value = multiplier_config return value def fill_provider_mapping(request_spec, host_selection): """Fills out the request group - resource provider mapping in the request spec. :param request_spec: The RequestSpec object associated with the operation :param host_selection: The Selection object returned by the scheduler for this operation """ # Exit early if this request spec does not require mappings. if not request_spec.maps_requested_resources: return # Technically out-of-tree scheduler drivers can still not create # allocations in placement but if request_spec.maps_requested_resources # is not empty and the scheduling succeeded then placement has to be # involved mappings = jsonutils.loads(host_selection.allocation_request)['mappings'] for request_group in request_spec.requested_resources: # NOTE(efried): We can count on request_group.requester_id being set: # - For groups from flavors, ResourceRequest.get_request_group sets it # to the group suffix. # - For groups from other sources (e.g. ports, accelerators), it is # required to be set by ResourceRequest._add_request_group, and that # method uses it as the suffix. # And we can count on mappings[requester_id] existing because each # RequestGroup translated into a (replete - empties are disallowed by # ResourceRequest._add_request_group) group fed to Placement. 
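    # (Illustrative example) The 'mappings' dict returned by placement maps
    # request group suffixes to lists of resource provider UUIDs, e.g.
    # roughly {'': [root_rp_uuid], '1': [child_rp_uuid]} where '' is the
    # unsuffixed group and '1' a numbered group from the flavor.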
request_group.provider_uuids = mappings[request_group.requester_id] def fill_provider_mapping_based_on_allocation( context, report_client, request_spec, allocation): """Fills out the request group - resource provider mapping in the request spec based on the current allocation of the instance. The fill_provider_mapping() variant is expected to be called in every scenario when a Selection object is available from the scheduler. However in case of revert operations such Selection does not exists. In this case the mapping is calculated based on the allocation of the source host the move operation is reverting to. This is a workaround as placement does not return which RP fulfills which granular request group except in the allocation candidate request (because request groups are ephemeral, only existing in the scope of that request). .. todo:: Figure out a better way to preserve the mappings so we can get rid of this workaround. :param context: The security context :param report_client: SchedulerReportClient instance to be used to communicate with placement :param request_spec: The RequestSpec object associated with the operation :param allocation: allocation dict of the instance, keyed by RP UUID. """ # Exit early if this request spec does not require mappings. if not request_spec.maps_requested_resources: return # NOTE(gibi): Getting traits from placement for each instance in a # instance multi-create scenario is unnecessarily expensive. But # instance multi-create cannot be used with pre-created neutron ports # and this code can only be triggered with such pre-created ports so # instance multi-create is not an issue. If this ever become an issue # in the future then we could stash the RP->traits mapping on the # Selection object since we can pull the traits for each provider from # the GET /allocation_candidates response in the scheduler (or leverage # the change from the spec mentioned in the docstring above). provider_traits = { rp_uuid: report_client.get_provider_traits( context, rp_uuid).traits for rp_uuid in allocation} # NOTE(gibi): The allocation dict is in the format of the PUT /allocations # and that format can change. The current format can be detected from # allocation_request_version key of the Selection object. request_spec.map_requested_resources_to_providers( allocation, provider_traits) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3824697 nova-21.2.4/nova/scheduler/weights/0000775000175000017500000000000000000000000017161 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/weights/__init__.py0000664000175000017500000000255200000000000021276 0ustar00zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Scheduler host weights """ from nova import weights class WeighedHost(weights.WeighedObject): def to_dict(self): x = dict(weight=self.weight) x['host'] = self.obj.host return x def __repr__(self): return "WeighedHost [host: %r, weight: %s]" % ( self.obj, self.weight) class BaseHostWeigher(weights.BaseWeigher): """Base class for host weights.""" pass class HostWeightHandler(weights.BaseWeightHandler): object_class = WeighedHost def __init__(self): super(HostWeightHandler, self).__init__(BaseHostWeigher) def all_weighers(): """Return a list of weight plugin classes found in this directory.""" return HostWeightHandler().get_all_classes() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/weights/affinity.py0000664000175000017500000000505200000000000021346 0ustar00zuulzuul00000000000000# Copyright (c) 2015 Ericsson AB # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Affinity Weighers. Weigh hosts by the number of instances from a given host. AffinityWeigher implements the soft-affinity policy for server groups by preferring the hosts that has more instances from the given group. AntiAffinityWeigher implements the soft-anti-affinity policy for server groups by preferring the hosts that has less instances from the given group. 
""" from oslo_config import cfg from oslo_log import log as logging from nova.scheduler import utils from nova.scheduler import weights CONF = cfg.CONF LOG = logging.getLogger(__name__) class _SoftAffinityWeigherBase(weights.BaseHostWeigher): policy_name = None def _weigh_object(self, host_state, request_spec): """Higher weights win.""" if not request_spec.instance_group: return 0 policy = request_spec.instance_group.policy if self.policy_name != policy: return 0 instances = set(host_state.instances.keys()) members = set(request_spec.instance_group.members) member_on_host = instances.intersection(members) return len(member_on_host) class ServerGroupSoftAffinityWeigher(_SoftAffinityWeigherBase): policy_name = 'soft-affinity' def weight_multiplier(self, host_state): return utils.get_weight_multiplier( host_state, 'soft_affinity_weight_multiplier', CONF.filter_scheduler.soft_affinity_weight_multiplier) class ServerGroupSoftAntiAffinityWeigher(_SoftAffinityWeigherBase): policy_name = 'soft-anti-affinity' def weight_multiplier(self, host_state): return utils.get_weight_multiplier( host_state, 'soft_anti_affinity_weight_multiplier', CONF.filter_scheduler.soft_anti_affinity_weight_multiplier) def _weigh_object(self, host_state, request_spec): weight = super(ServerGroupSoftAntiAffinityWeigher, self)._weigh_object( host_state, request_spec) return -1 * weight ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/weights/compute.py0000664000175000017500000000250300000000000021207 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ BuildFailure Weigher. Weigh hosts by the number of recent failed boot attempts. """ import nova.conf from nova.scheduler import utils from nova.scheduler import weights CONF = nova.conf.CONF class BuildFailureWeigher(weights.BaseHostWeigher): def weight_multiplier(self, host_state): """Override the weight multiplier. Note this is negated.""" return -1 * utils.get_weight_multiplier( host_state, 'build_failure_weight_multiplier', CONF.filter_scheduler.build_failure_weight_multiplier) def _weigh_object(self, host_state, weight_properties): """Higher weights win. Our multiplier is negative, so reduce our weight by number of failed builds. """ return host_state.failed_builds ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/weights/cpu.py0000664000175000017500000000314600000000000020326 0ustar00zuulzuul00000000000000# Copyright (c) 2016, Red Hat Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ CPU Weigher. Weigh hosts by their CPU usage. The default is to spread instances across all hosts evenly. If you prefer stacking, you can set the 'cpu_weight_multiplier' option (by configuration or aggregate metadata) to a negative number and the weighing has the opposite effect of the default. """ import nova.conf from nova.scheduler import utils from nova.scheduler import weights CONF = nova.conf.CONF class CPUWeigher(weights.BaseHostWeigher): minval = 0 def weight_multiplier(self, host_state): """Override the weight multiplier.""" return utils.get_weight_multiplier( host_state, 'cpu_weight_multiplier', CONF.filter_scheduler.cpu_weight_multiplier) def _weigh_object(self, host_state, weight_properties): """Higher weights win. We want spreading to be the default.""" vcpus_free = ( host_state.vcpus_total * host_state.cpu_allocation_ratio - host_state.vcpus_used) return vcpus_free ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/weights/cross_cell.py0000664000175000017500000000556100000000000021672 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Cross-cell move weigher. Weighs hosts based on which cell they are in. "Local" cells are preferred when moving an instance. In other words, select a host from the source cell all other things being equal. """ from nova import conf from nova.scheduler import utils from nova.scheduler import weights CONF = conf.CONF class CrossCellWeigher(weights.BaseHostWeigher): def weight_multiplier(self, host_state): """How weighted this weigher should be.""" return utils.get_weight_multiplier( host_state, 'cross_cell_move_weight_multiplier', CONF.filter_scheduler.cross_cell_move_weight_multiplier) def _weigh_object(self, host_state, weight_properties): """Higher weights win. Hosts within the "preferred" cell are weighed higher than hosts in other cells. :param host_state: nova.scheduler.host_manager.HostState object representing a ComputeNode in a cell :param weight_properties: nova.objects.RequestSpec - this is inspected to see if there is a preferred cell via the requested_destination field and if so, is the request spec allowing cross-cell move :returns: 1 if cross-cell move and host_state is within the preferred cell, -1 if cross-cell move and host_state is *not* within the preferred cell, 0 for all other cases """ # RequestSpec.requested_destination.cell should only be set for # move operations. The allow_cross_cell_move value will only be True if # policy allows. 
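        # (Illustrative example) For a resize whose RequestSpec names the
        # source cell as the preferred cell, a candidate host in that cell
        # is weighed 1 and a host in any other cell -1; a plain boot, where
        # no preferred cell is set, falls through to the final 'return 0'.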
if ('requested_destination' in weight_properties and weight_properties.requested_destination and 'cell' in weight_properties.requested_destination and weight_properties.requested_destination.cell and weight_properties.requested_destination.allow_cross_cell_move): # Determine if the given host is in the "preferred" cell from # the request spec. If it is, weigh it higher. if (host_state.cell_uuid == weight_properties.requested_destination.cell.uuid): return 1 # The host is in another cell, so weigh it lower. return -1 # We don't know or don't care what cell we're going to be in, so noop. return 0 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/weights/disk.py0000664000175000017500000000277600000000000020501 0ustar00zuulzuul00000000000000# Copyright (c) 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Disk Weigher. Weigh hosts by their disk usage. The default is to spread instances across all hosts evenly. If you prefer stacking, you can set the 'disk_weight_multiplier' option (by configuration or aggregate metadata) to a negative number and the weighing has the opposite effect of the default. """ import nova.conf from nova.scheduler import utils from nova.scheduler import weights CONF = nova.conf.CONF class DiskWeigher(weights.BaseHostWeigher): minval = 0 def weight_multiplier(self, host_state): """Override the weight multiplier.""" return utils.get_weight_multiplier( host_state, 'disk_weight_multiplier', CONF.filter_scheduler.disk_weight_multiplier) def _weigh_object(self, host_state, weight_properties): """Higher weights win. We want spreading to be the default.""" return host_state.free_disk_mb ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/weights/io_ops.py0000664000175000017500000000311100000000000021017 0ustar00zuulzuul00000000000000# Copyright (c) 2014 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Io Ops Weigher. Weigh hosts by their io ops number. The default is to preferably choose light workload compute hosts. If you prefer choosing heavy workload compute hosts, you can set 'io_ops_weight_multiplier' option (by configuration or aggregate metadata) to a positive number and the weighing has the opposite effect of the default. 
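Illustrative configuration example (values are only an assumption):

    [filter_scheduler]
    io_ops_weight_multiplier = 1.0

This would prefer hosts already busy with build/resize/migrate I/O,
inverting the default preference for lightly loaded hosts.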
""" import nova.conf from nova.scheduler import utils from nova.scheduler import weights CONF = nova.conf.CONF class IoOpsWeigher(weights.BaseHostWeigher): minval = 0 def weight_multiplier(self, host_state): """Override the weight multiplier.""" return utils.get_weight_multiplier( host_state, 'io_ops_weight_multiplier', CONF.filter_scheduler.io_ops_weight_multiplier) def _weigh_object(self, host_state, weight_properties): """Higher weights win. We want to choose light workload host to be the default. """ return host_state.num_io_ops ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/weights/metrics.py0000664000175000017500000000554500000000000021212 0ustar00zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Metrics Weigher. Weigh hosts by their metrics. This weigher can compute the weight based on the compute node host's various metrics. The to-be weighed metrics and their weighing ratio are specified in the configuration file as the followings: [metrics] weight_setting = name1=1.0, name2=-1.0 The final weight would be name1.value * 1.0 + name2.value * -1.0. """ import nova.conf from nova import exception from nova.scheduler import utils from nova.scheduler import weights CONF = nova.conf.CONF class MetricsWeigher(weights.BaseHostWeigher): def __init__(self): self._parse_setting() def _parse_setting(self): self.setting = utils.parse_options(CONF.metrics.weight_setting, sep='=', converter=float, name="metrics.weight_setting") def weight_multiplier(self, host_state): """Override the weight multiplier.""" return utils.get_weight_multiplier( host_state, 'metrics_weight_multiplier', CONF.metrics.weight_multiplier) def _weigh_object(self, host_state, weight_properties): value = 0.0 # NOTE(sbauza): Keying a dict of Metrics per metric name given that we # have a MonitorMetricList object metrics_dict = {m.name: m for m in host_state.metrics or []} for (name, ratio) in self.setting: try: value += metrics_dict[name].value * ratio except KeyError: if CONF.metrics.required: raise exception.ComputeHostMetricNotFound( host=host_state.host, node=host_state.nodename, name=name) else: # We treat the unavailable metric as the most negative # factor, i.e. set the value to make this obj would be # at the end of the ordered weighed obj list # Do nothing if ratio or weight_multiplier is 0. if ratio * self.weight_multiplier(host_state) != 0: return CONF.metrics.weight_of_unavailable return value ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/weights/pci.py0000664000175000017500000000561100000000000020311 0ustar00zuulzuul00000000000000# Copyright (c) 2016, Red Hat Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ PCI Affinity Weigher. Weigh hosts by their PCI availability. Prefer hosts with PCI devices for instances with PCI requirements and vice versa. Configure the importance of this affinitization using the 'pci_weight_multiplier' option (by configuration or aggregate metadata). """ import nova.conf from nova.scheduler import utils from nova.scheduler import weights CONF = nova.conf.CONF # An arbitrary value used to ensure PCI-requesting instances are stacked rather # than spread on hosts with PCI devices. The actual value of this filter is in # the scarcity case, where there are very few PCI devices left in the cloud and # we want to preserve the ones that do exist. To this end, we don't really mind # if a host with 2000 PCI devices is weighted the same as one with 500 devices, # as there's clearly no shortage there. MAX_DEVS = 100 class PCIWeigher(weights.BaseHostWeigher): def weight_multiplier(self, host_state): """Override the weight multiplier.""" return utils.get_weight_multiplier( host_state, 'pci_weight_multiplier', CONF.filter_scheduler.pci_weight_multiplier) def _weigh_object(self, host_state, request_spec): """Higher weights win. We want to keep PCI hosts free unless needed. Prefer hosts with the least number of PCI devices. If the instance requests PCI devices, this will ensure a stacking behavior and reserve as many totally free PCI hosts as possible. If PCI devices are not requested, this will ensure hosts with PCI devices are avoided completely, if possible. """ pools = host_state.pci_stats.pools if host_state.pci_stats else [] free = sum(pool['count'] for pool in pools) or 0 # reverse the "has PCI" values. For instances *without* PCI device # requests, this ensures we avoid the hosts with the most free PCI # devices. For the instances *with* PCI devices requests, this helps to # prevent fragmentation. If we didn't do this, hosts with the most PCI # devices would be weighted highest and would be used first which would # prevent instances requesting a larger number of PCI devices from # launching successfully. weight = MAX_DEVS - min(free, MAX_DEVS - 1) return weight ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/scheduler/weights/ram.py0000664000175000017500000000276700000000000020326 0ustar00zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ RAM Weigher. Weigh hosts by their RAM usage. The default is to spread instances across all hosts evenly. 
If you prefer stacking, you can set the 'ram_weight_multiplier' option (by configuration or aggregate metadata) to a negative number and the weighing has the opposite effect of the default. """ import nova.conf from nova.scheduler import utils from nova.scheduler import weights CONF = nova.conf.CONF class RAMWeigher(weights.BaseHostWeigher): minval = 0 def weight_multiplier(self, host_state): """Override the weight multiplier.""" return utils.get_weight_multiplier( host_state, 'ram_weight_multiplier', CONF.filter_scheduler.ram_weight_multiplier) def _weigh_object(self, host_state, weight_properties): """Higher weights win. We want spreading to be the default.""" return host_state.free_ram_mb ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/service.py0000664000175000017500000004425000000000000015550 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2011 Justin Santa Barbara # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Generic Node base class for all workers that run on hosts.""" import os import random import sys from oslo_concurrency import processutils from oslo_log import log as logging import oslo_messaging as messaging from oslo_service import service from oslo_utils import importutils import six from nova.api import wsgi as api_wsgi from nova import baserpc from nova import conductor import nova.conf from nova import context from nova import debugger from nova import exception from nova.i18n import _, _LE, _LI, _LW from nova import objects from nova.objects import base as objects_base from nova.objects import service as service_obj from nova import rpc from nova import servicegroup from nova import utils from nova import version from nova import wsgi osprofiler = importutils.try_import("osprofiler") osprofiler_initializer = importutils.try_import("osprofiler.initializer") LOG = logging.getLogger(__name__) CONF = nova.conf.CONF SERVICE_MANAGERS = { 'nova-compute': 'nova.compute.manager.ComputeManager', 'nova-conductor': 'nova.conductor.manager.ConductorManager', 'nova-scheduler': 'nova.scheduler.manager.SchedulerManager', } def _create_service_ref(this_service, context): service = objects.Service(context) service.host = this_service.host service.binary = this_service.binary service.topic = this_service.topic service.report_count = 0 service.create() return service def _update_service_ref(service): if service.version != service_obj.SERVICE_VERSION: LOG.info(_LI('Updating service version for %(binary)s on ' '%(host)s from %(old)i to %(new)i'), {'binary': service.binary, 'host': service.host, 'old': service.version, 'new': service_obj.SERVICE_VERSION}) service.version = service_obj.SERVICE_VERSION service.save() def setup_profiler(binary, host): if osprofiler and CONF.profiler.enabled: osprofiler.initializer.init_from_conf( conf=CONF, 
context=context.get_admin_context().to_dict(), project="nova", service=binary, host=host) LOG.info(_LI("OSProfiler is enabled.")) def assert_eventlet_uses_monotonic_clock(): from eventlet import hubs import monotonic hub = hubs.get_hub() if hub.clock is not monotonic.monotonic: raise RuntimeError( 'eventlet hub is not using a monotonic clock - ' 'periodic tasks will be affected by drifts of system time.') class Service(service.Service): """Service object for binaries running on hosts. A service takes a manager and enables rpc by listening to queues based on topic. It also periodically runs tasks on the manager and reports its state to the database services table. """ def __init__(self, host, binary, topic, manager, report_interval=None, periodic_enable=None, periodic_fuzzy_delay=None, periodic_interval_max=None, *args, **kwargs): super(Service, self).__init__() self.host = host self.binary = binary self.topic = topic self.manager_class_name = manager self.servicegroup_api = servicegroup.API() manager_class = importutils.import_class(self.manager_class_name) if objects_base.NovaObject.indirection_api: conductor_api = conductor.API() conductor_api.wait_until_ready(context.get_admin_context()) self.manager = manager_class(host=self.host, *args, **kwargs) self.rpcserver = None self.report_interval = report_interval self.periodic_enable = periodic_enable self.periodic_fuzzy_delay = periodic_fuzzy_delay self.periodic_interval_max = periodic_interval_max self.saved_args, self.saved_kwargs = args, kwargs self.backdoor_port = None setup_profiler(binary, self.host) def __repr__(self): return "<%(cls_name)s: host=%(host)s, binary=%(binary)s, " \ "manager_class_name=%(manager)s>" % { 'cls_name': self.__class__.__name__, 'host': self.host, 'binary': self.binary, 'manager': self.manager_class_name } def start(self): """Start the service. This includes starting an RPC service, initializing periodic tasks, etc. """ # NOTE(melwitt): Clear the cell cache holding database transaction # context manager objects. We do this to ensure we create new internal # oslo.db locks to avoid a situation where a child process receives an # already locked oslo.db lock when it is forked. When a child process # inherits a locked oslo.db lock, database accesses through that # transaction context manager will never be able to acquire the lock # and requests will fail with CellTimeout errors. # See https://bugs.python.org/issue6721 for more information. # With python 3.7, it would be possible for oslo.db to make use of the # os.register_at_fork() method to reinitialize its lock. Until we # require python 3.7 as a mininum version, we must handle the situation # outside of oslo.db. context.CELL_CACHE = {} assert_eventlet_uses_monotonic_clock() verstr = version.version_string_with_package() LOG.info(_LI('Starting %(topic)s node (version %(version)s)'), {'topic': self.topic, 'version': verstr}) self.basic_config_check() self.manager.init_host() self.model_disconnected = False ctxt = context.get_admin_context() self.service_ref = objects.Service.get_by_host_and_binary( ctxt, self.host, self.binary) if self.service_ref: _update_service_ref(self.service_ref) else: try: self.service_ref = _create_service_ref(self, ctxt) except (exception.ServiceTopicExists, exception.ServiceBinaryExists): # NOTE(danms): If we race to create a record with a sibling # worker, don't fail here. 
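                # (Illustrative) e.g. two workers for the same host/binary
                # starting at once can both miss the existing row; one
                # create() succeeds, the other hits ServiceTopicExists or
                # ServiceBinaryExists and simply re-reads the record here.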
self.service_ref = objects.Service.get_by_host_and_binary( ctxt, self.host, self.binary) self.manager.pre_start_hook() if self.backdoor_port is not None: self.manager.backdoor_port = self.backdoor_port LOG.debug("Creating RPC server for service %s", self.topic) target = messaging.Target(topic=self.topic, server=self.host) endpoints = [ self.manager, baserpc.BaseRPCAPI(self.manager.service_name, self.backdoor_port) ] endpoints.extend(self.manager.additional_endpoints) serializer = objects_base.NovaObjectSerializer() self.rpcserver = rpc.get_server(target, endpoints, serializer) self.rpcserver.start() self.manager.post_start_hook() LOG.debug("Join ServiceGroup membership for this service %s", self.topic) # Add service to the ServiceGroup membership group. self.servicegroup_api.join(self.host, self.topic, self) if self.periodic_enable: if self.periodic_fuzzy_delay: initial_delay = random.randint(0, self.periodic_fuzzy_delay) else: initial_delay = None self.tg.add_dynamic_timer(self.periodic_tasks, initial_delay=initial_delay, periodic_interval_max= self.periodic_interval_max) def __getattr__(self, key): manager = self.__dict__.get('manager', None) return getattr(manager, key) @classmethod def create(cls, host=None, binary=None, topic=None, manager=None, report_interval=None, periodic_enable=None, periodic_fuzzy_delay=None, periodic_interval_max=None): """Instantiates class and passes back application object. :param host: defaults to CONF.host :param binary: defaults to basename of executable :param topic: defaults to bin_name - 'nova-' part :param manager: defaults to CONF._manager :param report_interval: defaults to CONF.report_interval :param periodic_enable: defaults to CONF.periodic_enable :param periodic_fuzzy_delay: defaults to CONF.periodic_fuzzy_delay :param periodic_interval_max: if set, the max time to wait between runs """ if not host: host = CONF.host if not binary: binary = os.path.basename(sys.argv[0]) if not topic: topic = binary.rpartition('nova-')[2] if not manager: manager = SERVICE_MANAGERS.get(binary) if report_interval is None: report_interval = CONF.report_interval if periodic_enable is None: periodic_enable = CONF.periodic_enable if periodic_fuzzy_delay is None: periodic_fuzzy_delay = CONF.periodic_fuzzy_delay debugger.init() service_obj = cls(host, binary, topic, manager, report_interval=report_interval, periodic_enable=periodic_enable, periodic_fuzzy_delay=periodic_fuzzy_delay, periodic_interval_max=periodic_interval_max) # NOTE(gibi): This have to be after the service object creation as # that is the point where we can safely use the RPC to the conductor. # E.g. the Service.__init__ actually waits for the conductor to start # up before it allows the service to be created. The # raise_if_old_compute() depends on the RPC to be up and does not # implement its own retry mechanism to connect to the conductor. try: utils.raise_if_old_compute() except exception.TooOldComputeService as e: LOG.warning(six.text_type(e)) return service_obj def kill(self): """Destroy the service object in the datastore. NOTE: Although this method is not used anywhere else than tests, it is convenient to have it here, so the tests might easily and in clean way stop and remove the service_ref. 
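        Illustrative usage from a test (assumed, not from the source):

            service = Service.create(binary='nova-compute')
            service.start()
            # ... exercise the service ...
            service.kill()   # stops it and deletes its service record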
""" self.stop() try: self.service_ref.destroy() except exception.NotFound: LOG.warning(_LW('Service killed that has no database entry')) def stop(self): """stop the service and clean up.""" try: self.rpcserver.stop() self.rpcserver.wait() except Exception: pass try: self.manager.cleanup_host() except Exception: LOG.exception(_LE('Service error occurred during cleanup_host')) pass super(Service, self).stop() def periodic_tasks(self, raise_on_error=False): """Tasks to be run at a periodic interval.""" ctxt = context.get_admin_context() return self.manager.periodic_tasks(ctxt, raise_on_error=raise_on_error) def basic_config_check(self): """Perform basic config checks before starting processing.""" # Make sure the tempdir exists and is writable try: with utils.tempdir(): pass except Exception as e: LOG.error(_LE('Temporary directory is invalid: %s'), e) sys.exit(1) def reset(self): """reset the service.""" self.manager.reset() # Reset the cell cache that holds database transaction context managers context.CELL_CACHE = {} class WSGIService(service.Service): """Provides ability to launch API from a 'paste' configuration.""" def __init__(self, name, loader=None, use_ssl=False, max_url_len=None): """Initialize, but do not start the WSGI server. :param name: The name of the WSGI server given to the loader. :param loader: Loads the WSGI application using the given name. :returns: None """ self.name = name # NOTE(danms): Name can be metadata, osapi_compute, per # nova.service's enabled_apis self.binary = 'nova-%s' % name LOG.warning('Running %s using eventlet is deprecated. Deploy with ' 'a WSGI server such as uwsgi or mod_wsgi.', self.binary) self.topic = None self.manager = self._get_manager() self.loader = loader or api_wsgi.Loader() self.app = self.loader.load_app(name) # inherit all compute_api worker counts from osapi_compute if name.startswith('openstack_compute_api'): wname = 'osapi_compute' else: wname = name self.host = getattr(CONF, '%s_listen' % name, "0.0.0.0") self.port = getattr(CONF, '%s_listen_port' % name, 0) self.workers = (getattr(CONF, '%s_workers' % wname, None) or processutils.get_worker_count()) if self.workers and self.workers < 1: worker_name = '%s_workers' % name msg = (_("%(worker_name)s value of %(workers)s is invalid, " "must be greater than 0") % {'worker_name': worker_name, 'workers': str(self.workers)}) raise exception.InvalidInput(msg) self.use_ssl = use_ssl self.server = wsgi.Server(name, self.app, host=self.host, port=self.port, use_ssl=self.use_ssl, max_url_len=max_url_len) # Pull back actual port used self.port = self.server.port self.backdoor_port = None setup_profiler(name, self.host) def reset(self): """Reset the following: * server greenpool size to default * service version cache * cell cache holding database transaction context managers :returns: None """ self.server.reset() service_obj.Service.clear_min_version_cache() context.CELL_CACHE = {} def _get_manager(self): """Initialize a Manager object appropriate for this service. Use the service name to look up a Manager subclass from the configuration and initialize an instance. If no class name is configured, just return None. :returns: a Manager instance, or None. """ manager = SERVICE_MANAGERS.get(self.binary) if manager is None: return None manager_class = importutils.import_class(manager) return manager_class() def start(self): """Start serving this service using loaded configuration. Also, retrieve updated port number in case '0' was passed in, which indicates a random port should be used. 
:returns: None """ # NOTE(melwitt): Clear the cell cache holding database transaction # context manager objects. We do this to ensure we create new internal # oslo.db locks to avoid a situation where a child process receives an # already locked oslo.db lock when it is forked. When a child process # inherits a locked oslo.db lock, database accesses through that # transaction context manager will never be able to acquire the lock # and requests will fail with CellTimeout errors. # See https://bugs.python.org/issue6721 for more information. # With python 3.7, it would be possible for oslo.db to make use of the # os.register_at_fork() method to reinitialize its lock. Until we # require python 3.7 as a mininum version, we must handle the situation # outside of oslo.db. context.CELL_CACHE = {} ctxt = context.get_admin_context() service_ref = objects.Service.get_by_host_and_binary(ctxt, self.host, self.binary) if service_ref: _update_service_ref(service_ref) else: try: service_ref = _create_service_ref(self, ctxt) except (exception.ServiceTopicExists, exception.ServiceBinaryExists): # NOTE(danms): If we race to create a record wth a sibling, # don't fail here. service_ref = objects.Service.get_by_host_and_binary( ctxt, self.host, self.binary) if self.manager: self.manager.init_host() self.manager.pre_start_hook() if self.backdoor_port is not None: self.manager.backdoor_port = self.backdoor_port self.server.start() if self.manager: self.manager.post_start_hook() def stop(self): """Stop serving this API. :returns: None """ self.server.stop() def wait(self): """Wait for the service to stop serving this API. :returns: None """ self.server.wait() def process_launcher(): return service.ProcessLauncher(CONF, restart_method='mutate') # NOTE(vish): the global launcher is to maintain the existing # functionality of calling service.serve + # service.wait _launcher = None def serve(server, workers=None): global _launcher if _launcher: raise RuntimeError(_('serve() can only be called once')) _launcher = service.launch(CONF, server, workers=workers, restart_method='mutate') def wait(): _launcher.wait() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/service_auth.py0000664000175000017500000000342000000000000016563 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
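# Illustrative usage sketch, added for clarity; not part of the original
# module. Callers that make requests to other OpenStack services on behalf
# of a user typically wrap the user's auth plugin with get_auth_plugin()
# (defined below) so that, when [service_user]/send_service_user_token is
# enabled, outgoing requests carry a service token alongside the user token.
# The keystoneauth1 session wiring and the `ctxt` variable here are
# assumptions for the example, not taken from this file:
#
#     from keystoneauth1 import session as ks_session
#     from nova import service_auth
#
#     auth = service_auth.get_auth_plugin(ctxt)  # ctxt: a nova RequestContext
#     sess = ks_session.Session(auth=auth)
#     # requests made through `sess` now include a service token when the
#     # [service_user] options are configured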
from keystoneauth1 import loading as ks_loading from keystoneauth1 import service_token from oslo_log import log as logging import nova.conf CONF = nova.conf.CONF LOG = logging.getLogger(__name__) _SERVICE_AUTH = None def reset_globals(): """For async unit test consistency.""" global _SERVICE_AUTH _SERVICE_AUTH = None def get_auth_plugin(context): user_auth = context.get_auth_plugin() if CONF.service_user.send_service_user_token: global _SERVICE_AUTH if not _SERVICE_AUTH: _SERVICE_AUTH = ks_loading.load_auth_from_conf_options( CONF, group= nova.conf.service_token.SERVICE_USER_GROUP) if _SERVICE_AUTH is None: # This indicates a misconfiguration so log a warning and # return the user_auth. LOG.warning('Unable to load auth from [service_user] ' 'configuration. Ensure "auth_type" is set.') return user_auth return service_token.ServiceTokenAuthWrapper( user_auth=user_auth, service_auth=_SERVICE_AUTH) return user_auth ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3824697 nova-21.2.4/nova/servicegroup/0000775000175000017500000000000000000000000016246 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/servicegroup/__init__.py0000664000175000017500000000144700000000000020365 0ustar00zuulzuul00000000000000# Copyright 2012 IBM Corp. # Copyright (c) AT&T Labs Inc. 2012 Yun Mao # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """ The membership service for Nova. Different implementations can be plugged according to the Nova configuration. """ from nova.servicegroup import api API = api.API ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/servicegroup/api.py0000664000175000017500000000616300000000000017377 0ustar00zuulzuul00000000000000# Copyright 2012 IBM Corp. # Copyright (c) AT&T Labs Inc. 2012 Yun Mao # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Define APIs for the servicegroup access.""" from oslo_log import log as logging from oslo_utils import importutils import nova.conf from nova.i18n import _LW LOG = logging.getLogger(__name__) _driver_name_class_mapping = { 'db': 'nova.servicegroup.drivers.db.DbDriver', 'mc': 'nova.servicegroup.drivers.mc.MemcachedDriver' } CONF = nova.conf.CONF # NOTE(geekinutah): By default drivers wait 5 seconds before reporting INITIAL_REPORTING_DELAY = 5 class API(object): def __init__(self, *args, **kwargs): '''Create an instance of the servicegroup API. 
args and kwargs are passed down to the servicegroup driver when it gets created. ''' # Make sure report interval is less than service down time report_interval = CONF.report_interval if CONF.service_down_time <= report_interval: new_service_down_time = int(report_interval * 2.5) LOG.warning(_LW("Report interval must be less than service down " "time. Current config: . Setting service_down_time " "to: %(new_service_down_time)s"), {'service_down_time': CONF.service_down_time, 'report_interval': report_interval, 'new_service_down_time': new_service_down_time}) CONF.set_override('service_down_time', new_service_down_time) driver_class = _driver_name_class_mapping[CONF.servicegroup_driver] self._driver = importutils.import_object(driver_class, *args, **kwargs) def join(self, member, group, service=None): """Add a new member to a service group. :param member: the joined member ID/name :param group: the group ID/name, of the joined member :param service: a `nova.service.Service` object """ return self._driver.join(member, group, service) def service_is_up(self, member): """Check if the given member is up.""" # NOTE(johngarbutt) no logging in this method, # so this doesn't slow down the scheduler if member.get('forced_down'): return False return self._driver.is_up(member) def get_updated_time(self, member): """Get the updated time from drivers except db""" return self._driver.updated_time(member) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3824697 nova-21.2.4/nova/servicegroup/drivers/0000775000175000017500000000000000000000000017724 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/servicegroup/drivers/__init__.py0000664000175000017500000000000000000000000022023 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/servicegroup/drivers/base.py0000664000175000017500000000221500000000000021210 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. class Driver(object): """Base class for all ServiceGroup drivers.""" def join(self, member, group, service=None): """Add a new member to a service group. :param member: the joined member ID/name :param group: the group ID/name, of the joined member :param service: a `nova.service.Service` object """ raise NotImplementedError() def is_up(self, member): """Check whether the given member is up.""" raise NotImplementedError() def updated_time(self, service_ref): """Get the updated time""" raise NotImplementedError() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/servicegroup/drivers/db.py0000664000175000017500000001262600000000000020672 0ustar00zuulzuul00000000000000# Copyright 2012 IBM Corp. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_log import log as logging import oslo_messaging as messaging from oslo_utils import timeutils import six import nova.conf from nova import exception from nova.i18n import _, _LI, _LW, _LE from nova.servicegroup import api from nova.servicegroup.drivers import base CONF = nova.conf.CONF LOG = logging.getLogger(__name__) class DbDriver(base.Driver): def __init__(self, *args, **kwargs): self.service_down_time = CONF.service_down_time def join(self, member, group, service=None): """Add a new member to a service group. :param member: the joined member ID/name :param group: the group ID/name, of the joined member :param service: a `nova.service.Service` object """ LOG.debug('DB_Driver: join new ServiceGroup member %(member)s to ' 'the %(group)s group, service = %(service)s', {'member': member, 'group': group, 'service': service}) if service is None: raise RuntimeError(_('service is a mandatory argument for DB based' ' ServiceGroup driver')) report_interval = service.report_interval if report_interval: service.tg.add_timer_args( report_interval, self._report_state, args=[service], initial_delay=api.INITIAL_REPORTING_DELAY) def is_up(self, service_ref): """Moved from nova.utils Check whether a service is up based on last heartbeat. """ last_heartbeat = (service_ref.get('last_seen_up') or service_ref['created_at']) if isinstance(last_heartbeat, six.string_types): # NOTE(russellb) If this service_ref came in over rpc via # conductor, then the timestamp will be a string and needs to be # converted back to a datetime. last_heartbeat = timeutils.parse_strtime(last_heartbeat) else: # Objects have proper UTC timezones, but the timeutils comparison # below does not (and will fail) last_heartbeat = last_heartbeat.replace(tzinfo=None) # Timestamps in DB are UTC. elapsed = timeutils.delta_seconds(last_heartbeat, timeutils.utcnow()) is_up = abs(elapsed) <= self.service_down_time if not is_up: LOG.debug('Seems service %(binary)s on host %(host)s is down. ' 'Last heartbeat was %(lhb)s. Elapsed time is %(el)s', {'binary': service_ref.get('binary'), 'host': service_ref.get('host'), 'lhb': str(last_heartbeat), 'el': str(elapsed)}) return is_up def updated_time(self, service_ref): """Get the updated time from db""" return service_ref['updated_at'] def _report_state(self, service): """Update the state of this service in the datastore.""" try: service.service_ref.report_count += 1 service.service_ref.save() # TODO(termie): make this pattern be more elegant. if getattr(service, 'model_disconnected', False): service.model_disconnected = False LOG.info( _LI('Recovered from being unable to report status.')) except messaging.MessagingTimeout: # NOTE(johngarbutt) during upgrade we will see messaging timeouts # as nova-conductor is restarted, so only log this error once. 
if not getattr(service, 'model_disconnected', False): service.model_disconnected = True LOG.warning(_LW('Lost connection to nova-conductor ' 'for reporting service status.')) except exception.ServiceNotFound: # The service may have been deleted via the API but the actual # process is still running. Provide a useful error message rather # than the noisy traceback in the generic Exception block below. LOG.error('The services table record for the %s service on ' 'host %s is gone. You either need to stop this service ' 'if it should be deleted or restart it to recreate the ' 'record in the database.', service.service_ref.binary, service.service_ref.host) service.model_disconnected = True except Exception: # NOTE(rpodolyaka): we'd like to avoid catching of all possible # exceptions here, but otherwise it would become possible for # the state reporting thread to stop abruptly, and thus leave # the service unusable until it's restarted. LOG.exception( _LE('Unexpected error while reporting service status')) # trigger the recovery log message, if this error goes away service.model_disconnected = True ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/servicegroup/drivers/mc.py0000664000175000017500000001026500000000000020701 0ustar00zuulzuul00000000000000# Service heartbeat driver using Memcached # Copyright (c) 2013 Akira Yoshiyama # # This is derived from nova/servicegroup/drivers/db.py. # Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import iso8601 from oslo_log import log as logging from oslo_utils import timeutils from nova import cache_utils import nova.conf from nova.i18n import _, _LI, _LW from nova.servicegroup import api from nova.servicegroup.drivers import base CONF = nova.conf.CONF LOG = logging.getLogger(__name__) class MemcachedDriver(base.Driver): def __init__(self, *args, **kwargs): self.mc = cache_utils.get_memcached_client( expiration_time=CONF.service_down_time) def join(self, member_id, group_id, service=None): """Join the given service with its group.""" LOG.debug('Memcached_Driver: join new ServiceGroup member ' '%(member_id)s to the %(group_id)s group, ' 'service = %(service)s', {'member_id': member_id, 'group_id': group_id, 'service': service}) if service is None: raise RuntimeError(_('service is a mandatory argument for ' 'Memcached based ServiceGroup driver')) report_interval = service.report_interval if report_interval: service.tg.add_timer_args( report_interval, self._report_state, args=[service], initial_delay=api.INITIAL_REPORTING_DELAY) def is_up(self, service_ref): """Moved from nova.utils Check whether a service is up based on last heartbeat. 
""" key = "%(topic)s:%(host)s" % service_ref is_up = self.mc.get(str(key)) is not None if not is_up: LOG.debug('Seems service %s is down', key) return is_up def updated_time(self, service_ref): """Get the updated time from memcache""" key = "%(topic)s:%(host)s" % service_ref updated_time_in_mc = self.mc.get(str(key)) updated_time_in_db = service_ref['updated_at'] if updated_time_in_mc: # Change mc time to offset-aware time updated_time_in_mc = updated_time_in_mc.replace(tzinfo=iso8601.UTC) # If [DEFAULT]/enable_new_services is set to be false, the # ``updated_time_in_db`` will be None, in this case, use # ``updated_time_in_mc`` instead. if (not updated_time_in_db or updated_time_in_db <= updated_time_in_mc): return updated_time_in_mc return updated_time_in_db def _report_state(self, service): """Update the state of this service in the datastore.""" try: key = "%(topic)s:%(host)s" % service.service_ref # memcached has data expiration time capability. # set(..., time=CONF.service_down_time) uses it and # reduces key-deleting code. self.mc.set(str(key), timeutils.utcnow()) # TODO(termie): make this pattern be more elegant. if getattr(service, 'model_disconnected', False): service.model_disconnected = False LOG.info( _LI('Recovered connection to memcache server ' 'for reporting service status.')) # TODO(vish): this should probably only catch connection errors except Exception: if not getattr(service, 'model_disconnected', False): service.model_disconnected = True LOG.warning(_LW('Lost connection to memcache server ' 'for reporting service status.')) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/test.py0000664000175000017500000011131200000000000015061 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Base classes for our unit tests. Allows overriding of flags for use of fakes, and some black magic for inline callbacks. 
""" import nova.monkey_patch # noqa import abc import collections import copy import datetime import inspect import itertools import os import os.path import pprint import sys import fixtures import mock from oslo_cache import core as cache from oslo_concurrency import lockutils from oslo_config import cfg from oslo_config import fixture as config_fixture from oslo_log.fixture import logging_error as log_fixture from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from oslo_versionedobjects import fixture as ovo_fixture from oslotest import base from oslotest import mock_fixture import six from six.moves import builtins from sqlalchemy.dialects import sqlite import testtools from nova.api.openstack import wsgi_app from nova.compute import rpcapi as compute_rpcapi from nova import context from nova.db.sqlalchemy import api as sqlalchemy_api from nova import exception from nova import objects from nova.objects import base as objects_base from nova import quota from nova.tests import fixtures as nova_fixtures from nova.tests.unit import conf_fixture from nova.tests.unit import matchers from nova.tests.unit import policy_fixture from nova import utils from nova.virt import images if six.PY2: import contextlib2 as contextlib else: import contextlib CONF = cfg.CONF logging.register_options(CONF) CONF.set_override('use_stderr', False) logging.setup(CONF, 'nova') cache.configure(CONF) LOG = logging.getLogger(__name__) _TRUE_VALUES = ('True', 'true', '1', 'yes') CELL1_NAME = 'cell1' # For compatibility with the large number of tests which use test.nested nested = utils.nested_contexts class TestingException(Exception): pass # NOTE(claudiub): this needs to be called before any mock.patch calls are # being done, and especially before any other test classes load. This fixes # the mock.patch autospec issue: # https://github.com/testing-cabal/mock/issues/396 mock_fixture.patch_mock_module() def _poison_unfair_compute_resource_semaphore_locking(): """Ensure that every locking on COMPUTE_RESOURCE_SEMAPHORE is called with fair=True. """ orig_synchronized = utils.synchronized def poisoned_synchronized(*args, **kwargs): # Only check fairness if the decorator is used with # COMPUTE_RESOURCE_SEMAPHORE. But the name of the semaphore can be # passed as args or as kwargs. # Note that we cannot import COMPUTE_RESOURCE_SEMAPHORE as that would # apply the decorators we want to poison here. if len(args) >= 1: name = args[0] else: name = kwargs.get("name") if name == "compute_resources" and not kwargs.get("fair", False): raise AssertionError( 'Locking on COMPUTE_RESOURCE_SEMAPHORE should always be fair. 
' 'See bug 1864122.') # go and act like the original decorator return orig_synchronized(*args, **kwargs) # replace the synchronized decorator factory with our own that checks the # params passed in utils.synchronized = poisoned_synchronized # NOTE(gibi): This poisoning needs to be done in import time as decorators are # applied in import time on the ResourceTracker _poison_unfair_compute_resource_semaphore_locking() class NovaExceptionReraiseFormatError(object): real_log_exception = exception.NovaException._log_exception @classmethod def patch(cls): exception.NovaException._log_exception = cls._wrap_log_exception @staticmethod def _wrap_log_exception(self): exc_info = sys.exc_info() NovaExceptionReraiseFormatError.real_log_exception(self) six.reraise(*exc_info) # NOTE(melwitt) This needs to be done at import time in order to also catch # NovaException format errors that are in mock decorators. In these cases, the # errors will be raised during test listing, before tests actually run. NovaExceptionReraiseFormatError.patch() class TestCase(base.BaseTestCase): """Test case base class for all unit tests. Due to the slowness of DB access, please consider deriving from `NoDBTestCase` first. """ # USES_DB is set to False for tests that inherit from NoDBTestCase. USES_DB = True # USES_DB_SELF is set to True in tests that specifically want to use the # database but need to configure it themselves, for example to setup the # API DB but not the cell DB. In those cases the test will override # USES_DB_SELF = True but inherit from the NoDBTestCase class so it does # not get the default fixture setup when using a database (which is the # API and cell DBs, and adding the default flavors). USES_DB_SELF = False REQUIRES_LOCKING = False # Setting to True makes the test use the RPCFixture. STUB_RPC = True # The number of non-cell0 cells to create. This is only used in the # base class when USES_DB is True. NUMBER_OF_CELLS = 1 def setUp(self): """Run before each test method to initialize test environment.""" # Ensure BaseTestCase's ConfigureLogging fixture is disabled since # we're using our own (StandardLogging). with fixtures.EnvironmentVariable('OS_LOG_CAPTURE', '0'): super(TestCase, self).setUp() # How many of which service we've started. {$service-name: $count} self._service_fixture_count = collections.defaultdict(int) self.useFixture(nova_fixtures.OpenStackSDKFixture()) self.useFixture(log_fixture.get_logging_handle_error_fixture()) self.stdlog = self.useFixture(nova_fixtures.StandardLogging()) # NOTE(sdague): because of the way we were using the lock # wrapper we ended up with a lot of tests that started # relying on global external locking being set up for them. We # consider all of these to be *bugs*. Tests should not require # global external locking, or if they do, they should # explicitly set it up themselves. # # The following REQUIRES_LOCKING class parameter is provided # as a bridge to get us there. No new tests should be added # that require it, and existing classes and tests should be # fixed to not need it. 
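        # Illustrative sketch, added for clarity (not part of the original
        # comment): a legacy test class that still genuinely needs global
        # external locking would opt in by overriding the class attribute,
        # e.g.
        #
        #     class MyLegacyLockingTest(test.TestCase):
        #         REQUIRES_LOCKING = True
        #
        # New tests should avoid this, per the note above.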
if self.REQUIRES_LOCKING: lock_path = self.useFixture(fixtures.TempDir()).path self.fixture = self.useFixture( config_fixture.Config(lockutils.CONF)) self.fixture.config(lock_path=lock_path, group='oslo_concurrency') self.useFixture(conf_fixture.ConfFixture(CONF)) if self.STUB_RPC: self.useFixture(nova_fixtures.RPCFixture('nova.test')) # we cannot set this in the ConfFixture as oslo only registers the # notification opts at the first instantiation of a Notifier that # happens only in the RPCFixture CONF.set_default('driver', ['test'], group='oslo_messaging_notifications') # NOTE(danms): Make sure to reset us back to non-remote objects # for each test to avoid interactions. Also, backup the object # registry. objects_base.NovaObject.indirection_api = None self._base_test_obj_backup = copy.copy( objects_base.NovaObjectRegistry._registry._obj_classes) self.addCleanup(self._restore_obj_registry) objects.Service.clear_min_version_cache() # NOTE(danms): Reset the cached list of cells from nova.compute import api api.CELLS = [] context.CELL_CACHE = {} context.CELLS = [] self.computes = {} self.cell_mappings = {} self.host_mappings = {} # NOTE(danms): If the test claims to want to set up the database # itself, then it is responsible for all the mapping stuff too. if self.USES_DB: # NOTE(danms): Full database setup involves a cell0, cell1, # and the relevant mappings. self.useFixture(nova_fixtures.Database(database='api')) self._setup_cells() self.useFixture(nova_fixtures.DefaultFlavorsFixture()) elif not self.USES_DB_SELF: # NOTE(danms): If not using the database, we mock out the # mapping stuff and effectively collapse everything to a # single cell. self.useFixture(nova_fixtures.SingleCellSimple()) self.useFixture(nova_fixtures.DatabasePoisonFixture()) # NOTE(blk-u): WarningsFixture must be after the Database fixture # because sqlalchemy-migrate messes with the warnings filters. self.useFixture(nova_fixtures.WarningsFixture()) self.useFixture(ovo_fixture.StableObjectJsonFixture()) # Reset the global QEMU version flag. images.QEMU_VERSION = None # Reset the compute RPC API globals (mostly the _ROUTER). compute_rpcapi.reset_globals() self.addCleanup(self._clear_attrs) self.useFixture(fixtures.EnvironmentVariable('http_proxy')) self.policy = self.useFixture(policy_fixture.PolicyFixture()) self.useFixture(nova_fixtures.PoisonFunctions()) self.useFixture(nova_fixtures.ForbidNewLegacyNotificationFixture()) # NOTE(mikal): make sure we don't load a privsep helper accidentally self.useFixture(nova_fixtures.PrivsepNoHelperFixture()) self.useFixture(mock_fixture.MockAutospecFixture()) # FIXME(danms): Disable this for all tests by default to avoid breaking # any that depend on default/previous ordering self.flags(build_failure_weight_multiplier=0.0, group='filter_scheduler') # NOTE(melwitt): Reset the cached set of projects quota.UID_QFD_POPULATED_CACHE_BY_PROJECT = set() quota.UID_QFD_POPULATED_CACHE_ALL = False # make sure that the wsgi app is fully initialized for all testcase # instead of only once initialized for test worker wsgi_app.init_global_data.reset() def _setup_cells(self): """Setup a normal cellsv2 environment. This sets up the CellDatabase fixture with two cells, one cell0 and one normal cell. CellMappings are created for both so that cells-aware code can find those two databases. 
""" celldbs = nova_fixtures.CellDatabases() ctxt = context.get_context() fake_transport = 'fake://nowhere/' c0 = objects.CellMapping( context=ctxt, uuid=objects.CellMapping.CELL0_UUID, name='cell0', transport_url=fake_transport, database_connection=objects.CellMapping.CELL0_UUID) c0.create() self.cell_mappings[c0.name] = c0 celldbs.add_cell_database(objects.CellMapping.CELL0_UUID) for x in range(self.NUMBER_OF_CELLS): name = 'cell%i' % (x + 1) uuid = getattr(uuids, name) cell = objects.CellMapping( context=ctxt, uuid=uuid, name=name, transport_url=fake_transport, database_connection=uuid) cell.create() self.cell_mappings[name] = cell # cell1 is the default cell celldbs.add_cell_database(uuid, default=(x == 0)) self.useFixture(celldbs) def _restore_obj_registry(self): objects_base.NovaObjectRegistry._registry._obj_classes = \ self._base_test_obj_backup def _clear_attrs(self): # Delete attributes that don't start with _ so they don't pin # memory around unnecessarily for the duration of the test # suite for key in [k for k in self.__dict__.keys() if k[0] != '_']: # NOTE(gmann): Skip attribute 'id' because if tests are being # generated using testscenarios then, 'id' attribute is being # added during cloning the tests. And later that 'id' attribute # is being used by test suite to generate the results for each # newly generated tests by testscenarios. if key != 'id': del self.__dict__[key] def stub_out(self, old, new): """Replace a function for the duration of the test. Use the monkey patch fixture to replace a function for the duration of a test. Useful when you want to provide fake methods instead of mocks during testing. """ self.useFixture(fixtures.MonkeyPatch(old, new)) @staticmethod def patch_exists(patched_path, result): """Provide a static method version of patch_exists(), which if you haven't already imported nova.test can be slightly easier to use as a context manager within a test method via: def test_something(self): with self.patch_exists(path, True): ... """ return patch_exists(patched_path, result) @staticmethod def patch_open(patched_path, read_data): """Provide a static method version of patch_open() which is easier to use as a context manager within a test method via: def test_something(self): with self.patch_open(path, "fake contents of file"): ... """ return patch_open(patched_path, read_data) def flags(self, **kw): """Override flag variables for a test.""" group = kw.pop('group', None) for k, v in kw.items(): CONF.set_override(k, v, group) def enforce_fk_constraints(self, engine=None): if engine is None: engine = sqlalchemy_api.get_engine() dialect = engine.url.get_dialect() if dialect == sqlite.dialect: # We're seeing issues with foreign key support in SQLite 3.6.20 # SQLAlchemy doesn't support it at all with < SQLite 3.6.19 # It works fine in SQLite 3.7. # So return early to skip this test if running SQLite < 3.7 import sqlite3 tup = sqlite3.sqlite_version_info if tup[0] < 3 or (tup[0] == 3 and tup[1] < 7): self.skipTest( 'sqlite version too old for reliable SQLA foreign_keys') engine.connect().execute("PRAGMA foreign_keys = ON") def start_service(self, name, host=None, cell_name=None, **kwargs): # Disallow starting multiple scheduler services if name == 'scheduler' and self._service_fixture_count[name]: raise TestingException("Duplicate start_service(%s)!" 
% name) cell = None # if the host is None then the CONF.host remains defaulted to # 'fake-mini' (originally done in ConfFixture) if host is not None: # Make sure that CONF.host is relevant to the right hostname self.useFixture(nova_fixtures.ConfPatcher(host=host)) if name == 'compute' and self.USES_DB: # NOTE(danms): We need to create the HostMapping first, because # otherwise we'll fail to update the scheduler while running # the compute node startup routines below. ctxt = context.get_context() cell_name = cell_name or CELL1_NAME cell = self.cell_mappings[cell_name] if (host or name) not in self.host_mappings: # NOTE(gibi): If the HostMapping does not exists then this is # the first start of the service so we create the mapping. hm = objects.HostMapping(context=ctxt, host=host or name, cell_mapping=cell) hm.create() self.host_mappings[hm.host] = hm svc = self.useFixture( nova_fixtures.ServiceFixture(name, host, cell=cell, **kwargs)) # Keep track of how many instances of this service are running. self._service_fixture_count[name] += 1 real_stop = svc.service.stop # Make sure stopping the service decrements the active count, so that # start,stop,start doesn't trigger the "Duplicate start_service" # exception. def patch_stop(*a, **k): self._service_fixture_count[name] -= 1 return real_stop(*a, **k) self.useFixture(fixtures.MockPatchObject( svc.service, 'stop', patch_stop)) return svc.service def _start_compute(self, host, cell_name=None): """Start a nova compute service on the given host :param host: the name of the host that will be associated to the compute service. :param cell_name: optional name of the cell in which to start the compute service :return: the nova compute service object """ compute = self.start_service('compute', host=host, cell_name=cell_name) self.computes[host] = compute return compute def _run_periodics(self): """Run the update_available_resource task on every compute manager This runs periodics on the computes in an undefined order; some child class redefine this function to force a specific order. """ ctx = context.get_admin_context() for host, compute in self.computes.items(): LOG.info('Running periodic for compute (%s)', host) # Make sure the context is targeted to the proper cell database # for multi-cell tests. with context.target_cell( ctx, self.host_mappings[host].cell_mapping) as cctxt: compute.manager.update_available_resource(cctxt) LOG.info('Finished with periodics') def restart_compute_service(self, compute, keep_hypervisor_state=True): """Stops the service and starts a new one to have realistic restart :param:compute: the nova-compute service to be restarted :param:keep_hypervisor_state: If true then already defined instances will survive the compute service restart. If false then the new service will see an empty hypervisor :returns: a new compute service instance serving the same host and and node """ # NOTE(gibi): The service interface cannot be used to simulate a real # service restart as the manager object will not be recreated after a # service.stop() and service.start() therefore the manager state will # survive. For example the resource tracker will not be recreated after # a stop start. The service.kill() call cannot help as it deletes # the service from the DB which is unrealistic and causes that some # operation that refers to the killed host (e.g. evacuate) fails. # So this helper method will stop the original service and then starts # a brand new compute service for the same host and node. 
This way # a new ComputeManager instance will be created and initialized during # the service startup. compute.stop() # this service was running previously so we have to make sure that # we restart it in the same cell cell_name = self.host_mappings[compute.host].cell_mapping.name if keep_hypervisor_state: # NOTE(gibi): FakeDriver does not provide a meaningful way to # define some servers that exists already on the hypervisor when # the driver is (re)created during the service startup. This means # that we cannot simulate that the definition of a server # survives a nova-compute service restart on the hypervisor. # Instead here we save the FakeDriver instance that knows about # the defined servers and inject that driver into the new Manager # class during the startup of the compute service. old_driver = compute.manager.driver with mock.patch( 'nova.virt.driver.load_compute_driver') as load_driver: load_driver.return_value = old_driver new_compute = self.start_service( 'compute', host=compute.host, cell_name=cell_name) else: new_compute = self.start_service( 'compute', host=compute.host, cell_name=cell_name) return new_compute def assertJsonEqual(self, expected, observed, message=''): """Asserts that 2 complex data structures are json equivalent. We use data structures which serialize down to json throughout the code, and often times we just need to know that these are json equivalent. This means that list order is not important, and should be sorted. Because this is a recursive set of assertions, when failure happens we want to expose both the local failure and the global view of the 2 data structures being compared. So a MismatchError which includes the inner failure as the mismatch, and the passed in expected / observed as matchee / matcher. """ if isinstance(expected, six.string_types): expected = jsonutils.loads(expected) if isinstance(observed, six.string_types): observed = jsonutils.loads(observed) def sort_key(x): if isinstance(x, (set, list)) or isinstance(x, datetime.datetime): return str(x) if isinstance(x, dict): items = ((sort_key(key), sort_key(value)) for key, value in x.items()) return sorted(items) return x def inner(expected, observed, path='root'): if isinstance(expected, dict) and isinstance(observed, dict): self.assertEqual( len(expected), len(observed), ('path: %s. Different dict key sets\n' 'expected=%s\n' 'observed=%s\n' 'difference=%s') % (path, sorted(expected.keys()), sorted(observed.keys()), list(set(expected.keys()).symmetric_difference( set(observed.keys()))))) expected_keys = sorted(expected) observed_keys = sorted(observed) self.assertEqual( expected_keys, observed_keys, 'path: %s. Dict keys are not equal' % path) for key in list(six.iterkeys(expected)): inner(expected[key], observed[key], path + '.%s' % key) elif (isinstance(expected, (list, tuple, set)) and isinstance(observed, (list, tuple, set))): self.assertEqual( len(expected), len(observed), ('path: %s. 
Different list items\n' 'expected=%s\n' 'observed=%s\n' 'difference=%s') % (path, sorted(expected, key=sort_key), sorted(observed, key=sort_key), [a for a in itertools.chain(expected, observed) if (a not in expected) or (a not in observed)])) expected_values_iter = iter(sorted(expected, key=sort_key)) observed_values_iter = iter(sorted(observed, key=sort_key)) for i in range(len(expected)): inner(next(expected_values_iter), next(observed_values_iter), path + '[%s]' % i) else: self.assertEqual(expected, observed, 'path: %s' % path) try: inner(expected, observed) except testtools.matchers.MismatchError as e: difference = e.mismatch.describe() if message: message = 'message: %s\n' % message msg = "\nexpected:\n%s\nactual:\n%s\ndifference:\n%s\n%s" % ( pprint.pformat(expected), pprint.pformat(observed), difference, message) error = AssertionError(msg) error.expected = expected error.observed = observed error.difference = difference raise error def assertXmlEqual(self, expected, observed, **options): self.assertThat(observed, matchers.XMLMatches(expected, **options)) def assertPublicAPISignatures(self, baseinst, inst): def get_public_apis(inst): methods = {} def findmethods(object): return inspect.ismethod(object) or inspect.isfunction(object) for (name, value) in inspect.getmembers(inst, findmethods): if name.startswith("_"): continue methods[name] = value return methods baseclass = baseinst.__class__.__name__ basemethods = get_public_apis(baseinst) implmethods = get_public_apis(inst) extranames = [] for name in sorted(implmethods.keys()): if name not in basemethods: extranames.append(name) self.assertEqual([], extranames, "public APIs not listed in base class %s" % baseclass) for name in sorted(implmethods.keys()): baseargs = utils.getargspec(basemethods[name]) implargs = utils.getargspec(implmethods[name]) self.assertEqual(baseargs, implargs, "%s args don't match base class %s" % (name, baseclass)) class APICoverage(object): cover_api = None def test_api_methods(self): self.assertIsNotNone(self.cover_api) api_methods = [x for x in dir(self.cover_api) if not x.startswith('_')] test_methods = [x[5:] for x in dir(self) if x.startswith('test_')] self.assertThat( test_methods, testtools.matchers.ContainsAll(api_methods)) @six.add_metaclass(abc.ABCMeta) class SubclassSignatureTestCase(testtools.TestCase): """Ensure all overridden methods of all subclasses of the class under test exactly match the signature of the base class. A subclass of SubclassSignatureTestCase should define a method _get_base_class which: * Returns a base class whose subclasses will all be checked * Ensures that all subclasses to be tested have been imported SubclassSignatureTestCase defines a single test, test_signatures, which does a recursive, depth-first check of all subclasses, ensuring that their method signatures are identical to those of the base class. """ @abc.abstractmethod def _get_base_class(self): raise NotImplementedError() def setUp(self): self.base = self._get_base_class() super(SubclassSignatureTestCase, self).setUp() @staticmethod def _get_argspecs(cls): """Return a dict of method_name->argspec for every method of cls.""" argspecs = {} # getmembers returns all members, including members inherited from # the base class. It's redundant for us to test these, but as # they'll always pass it's not worth the complexity to filter them out. 
for (name, method) in inspect.getmembers(cls, inspect.ismethod): # Subclass __init__ methods can usually be legitimately different if name == '__init__': continue while hasattr(method, '__wrapped__'): # This is a wrapped function. The signature we're going to # see here is that of the wrapper, which is almost certainly # going to involve varargs and kwargs, and therefore is # unlikely to be what we want. If the wrapper manupulates the # arguments taken by the wrapped function, the wrapped function # isn't what we want either. In that case we're just stumped: # if it ever comes up, add more knobs here to work round it (or # stop using a dynamic language). # # Here we assume the wrapper doesn't manipulate the arguments # to the wrapped function and inspect the wrapped function # instead. method = getattr(method, '__wrapped__') argspecs[name] = utils.getargspec(method) return argspecs @staticmethod def _clsname(cls): """Return the fully qualified name of cls.""" return "%s.%s" % (cls.__module__, cls.__name__) def _test_signatures_recurse(self, base, base_argspecs): for sub in base.__subclasses__(): sub_argspecs = self._get_argspecs(sub) # Check that each subclass method matches the signature of the # base class for (method, sub_argspec) in sub_argspecs.items(): # Methods which don't override methods in the base class # are good. if method in base_argspecs: self.assertEqual(base_argspecs[method], sub_argspec, 'Signature of %(sub)s.%(method)s ' 'differs from superclass %(base)s' % {'base': self._clsname(base), 'sub': self._clsname(sub), 'method': method}) # Recursively check this subclass self._test_signatures_recurse(sub, sub_argspecs) def test_signatures(self): self._test_signatures_recurse(self.base, self._get_argspecs(self.base)) class TimeOverride(fixtures.Fixture): """Fixture to start and remove time override.""" def setUp(self): super(TimeOverride, self).setUp() timeutils.set_time_override() self.addCleanup(timeutils.clear_time_override) class NoDBTestCase(TestCase): """`NoDBTestCase` differs from TestCase in that DB access is not supported. This makes tests run significantly faster. If possible, all new tests should derive from this class. """ USES_DB = False class BaseHookTestCase(NoDBTestCase): def assert_has_hook(self, expected_name, func): self.assertTrue(hasattr(func, '__hook_name__')) self.assertEqual(expected_name, func.__hook_name__) class MatchType(object): """Matches any instance of a specified type The MatchType class is a helper for use with the mock.assert_called_with() method that lets you assert that a particular parameter has a specific data type. It enables stricter checking than the built in mock.ANY helper. Example usage could be: mock_some_method.assert_called_once_with( "hello", MatchType(objects.Instance), mock.ANY, "world", MatchType(objects.KeyPair)) """ def __init__(self, wanttype): self.wanttype = wanttype def __eq__(self, other): return type(other) == self.wanttype def __ne__(self, other): return type(other) != self.wanttype def __repr__(self): return "" class MatchObjPrims(object): """Matches objects with equal primitives.""" def __init__(self, want_obj): self.want_obj = want_obj def __eq__(self, other): return objects_base.obj_equal_prims(other, self.want_obj) def __ne__(self, other): return not other == self.want_obj def __repr__(self): return '' class ContainKeyValue(object): """Checks whether a key/value pair is in a dict parameter. 
The ContainKeyValue class is a helper for use with the mock.assert_*() method that lets you assert that a particular dict contain a key/value pair. It enables stricter checking than the built in mock.ANY helper. Example usage could be: mock_some_method.assert_called_once_with( "hello", ContainKeyValue('foo', bar), mock.ANY, "world", ContainKeyValue('hello', world)) """ def __init__(self, wantkey, wantvalue): self.wantkey = wantkey self.wantvalue = wantvalue def __eq__(self, other): try: return other[self.wantkey] == self.wantvalue except (KeyError, TypeError): return False def __ne__(self, other): try: return other[self.wantkey] != self.wantvalue except (KeyError, TypeError): return True def __repr__(self): return "" @contextlib.contextmanager def patch_exists(patched_path, result): """Selectively patch os.path.exists() so that if it's called with patched_path, return result. Calls with any other path are passed through to the real os.path.exists() function. Either import and use as a decorator / context manager, or use the nova.TestCase.patch_exists() static method as a context manager. Currently it is *not* recommended to use this if any of the following apply: - You want to patch via decorator *and* make assertions about how the mock is called (since using it in the decorator form will not make the mock available to your code). - You want the result of the patched exists() call to be determined programmatically (e.g. by matching substrings of patched_path). - You expect exists() to be called multiple times on the same path and return different values each time. Additionally within unit tests which only test a very limited code path, it may be possible to ensure that the code path only invokes exists() once, in which case it's slightly overkill to do selective patching based on the path. In this case something like like this may be more appropriate: @mock.patch('os.path.exists', return_value=True) def test_my_code(self, mock_exists): ... mock_exists.assert_called_once_with(path) """ real_exists = os.path.exists def fake_exists(path): if path == patched_path: return result return real_exists(path) with mock.patch.object(os.path, "exists") as mock_exists: mock_exists.side_effect = fake_exists yield mock_exists @contextlib.contextmanager def patch_open(patched_path, read_data): """Selectively patch open() so that if it's called with patched_path, return a mock which makes it look like the file contains read_data. Calls with any other path are passed through to the real open() function. Either import and use as a decorator, or use the nova.TestCase.patch_open() static method as a context manager. Currently it is *not* recommended to use this if any of the following apply: - The code under test will attempt to write to patched_path. - You want to patch via decorator *and* make assertions about how the mock is called (since using it in the decorator form will not make the mock available to your code). - You want the faked file contents to be determined programmatically (e.g. by matching substrings of patched_path). - You expect open() to be called multiple times on the same path and return different file contents each time. Additionally within unit tests which only test a very limited code path, it may be possible to ensure that the code path only invokes open() once, in which case it's slightly overkill to do selective patching based on the path. In this case something like like this may be more appropriate: @mock.patch(six.moves.builtins, 'open') def test_my_code(self, mock_open): ... 
mock_open.assert_called_once_with(path) """ real_open = builtins.open m = mock.mock_open(read_data=read_data) def selective_fake_open(path, *args, **kwargs): if path == patched_path: return m(patched_path) return real_open(path, *args, **kwargs) with mock.patch.object(builtins, 'open') as mock_open: mock_open.side_effect = selective_fake_open yield m ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3824697 nova-21.2.4/nova/tests/0000775000175000017500000000000000000000000014673 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/__init__.py0000664000175000017500000000000000000000000016772 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/fixtures.py0000664000175000017500000032060400000000000017123 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Fixtures for Nova tests.""" from __future__ import absolute_import import collections from contextlib import contextmanager import copy import logging as std_logging import os import random import warnings import fixtures import futurist from keystoneauth1 import adapter as ksa_adap import mock from neutronclient.common import exceptions as neutron_client_exc from openstack import service_description import os_resource_classes as orc from oslo_concurrency import lockutils from oslo_config import cfg from oslo_db import exception as db_exc from oslo_log import log as logging import oslo_messaging as messaging from oslo_messaging import conffixture as messaging_conffixture from oslo_privsep import daemon as privsep_daemon from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel from oslo_utils import uuidutils from requests import adapters from sqlalchemy import exc as sqla_exc from wsgi_intercept import interceptor from nova.api.openstack import wsgi_app from nova.api import wsgi from nova.compute import multi_cell_list from nova.compute import rpcapi as compute_rpcapi from nova import context from nova.db import migration from nova.db.sqlalchemy import api as session from nova import exception from nova.network import constants as neutron_constants from nova.network import model as network_model from nova import objects from nova.objects import base as obj_base from nova.objects import service as service_obj import nova.privsep from nova import quota as nova_quota from nova import rpc from nova.scheduler import weights from nova import service from nova.tests.functional.api import client from nova.tests.unit import fake_requests _TRUE_VALUES = ('True', 'true', '1', 'yes') CONF = cfg.CONF DB_SCHEMA = {'main': "", 'api': ""} SESSION_CONFIGURED = False LOG = 
logging.getLogger(__name__) class ServiceFixture(fixtures.Fixture): """Run a service as a test fixture.""" def __init__(self, name, host=None, cell=None, **kwargs): name = name # If not otherwise specified, the host will default to the # name of the service. Some things like aggregates care that # this is stable. host = host or name kwargs.setdefault('host', host) kwargs.setdefault('binary', 'nova-%s' % name) self.cell = cell self.kwargs = kwargs def setUp(self): super(ServiceFixture, self).setUp() self.ctxt = context.get_admin_context() if self.cell: context.set_target_cell(self.ctxt, self.cell) with mock.patch('nova.context.get_admin_context', return_value=self.ctxt): self.service = service.Service.create(**self.kwargs) self.service.start() self.addCleanup(self.service.kill) class NullHandler(std_logging.Handler): """custom default NullHandler to attempt to format the record. Used in conjunction with log_fixture.get_logging_handle_error_fixture to detect formatting errors in debug level logs without saving the logs. """ def handle(self, record): self.format(record) def emit(self, record): pass def createLock(self): self.lock = None class StandardLogging(fixtures.Fixture): """Setup Logging redirection for tests. There are a number of things we want to handle with logging in tests: * Redirect the logging to somewhere that we can test or dump it later. * Ensure that as many DEBUG messages as possible are actually executed, to ensure they are actually syntactically valid (they often have not been). * Ensure that we create useful output for tests that doesn't overwhelm the testing system (which means we can't capture the 100 MB of debug logging on every run). To do this we create a logger fixture at the root level, which defaults to INFO and create a Null Logger at DEBUG which lets us execute log messages at DEBUG but not keep the output. To support local debugging OS_DEBUG=True can be set in the environment, which will print out the full debug logging. There are also a set of overrides for particularly verbose modules to be even less than INFO. """ def setUp(self): super(StandardLogging, self).setUp() # set root logger to debug root = std_logging.getLogger() root.setLevel(std_logging.DEBUG) # supports collecting debug level for local runs if os.environ.get('OS_DEBUG') in _TRUE_VALUES: level = std_logging.DEBUG else: level = std_logging.INFO # Collect logs fs = '%(asctime)s %(levelname)s [%(name)s] %(message)s' self.logger = self.useFixture( fixtures.FakeLogger(format=fs, level=None)) # TODO(sdague): why can't we send level through the fake # logger? Tests prove that it breaks, but it's worth getting # to the bottom of. root.handlers[0].setLevel(level) if level > std_logging.DEBUG: # Just attempt to format debug level logs, but don't save them handler = NullHandler() self.useFixture(fixtures.LogHandler(handler, nuke_handlers=False)) handler.setLevel(std_logging.DEBUG) # Don't log every single DB migration step std_logging.getLogger( 'migrate.versioning.api').setLevel(std_logging.WARNING) # Or alembic for model comparisons. std_logging.getLogger('alembic').setLevel(std_logging.WARNING) # At times we end up calling back into main() functions in # testing. This has the possibility of calling logging.setup # again, which completely unwinds the logging capture we've # created here. Once we've setup the logging the way we want, # disable the ability for the test to change this. 
def fake_logging_setup(*args): pass self.useFixture( fixtures.MonkeyPatch('oslo_log.log.setup', fake_logging_setup)) def delete_stored_logs(self): # NOTE(gibi): this depends on the internals of the fixtures.FakeLogger. # This could be enhanced once the PR # https://github.com/testing-cabal/fixtures/pull/42 merges self.logger._output.truncate(0) class DatabasePoisonFixture(fixtures.Fixture): def setUp(self): super(DatabasePoisonFixture, self).setUp() self.useFixture(fixtures.MonkeyPatch( 'oslo_db.sqlalchemy.enginefacade._TransactionFactory.' '_create_session', self._poison_configure)) def _poison_configure(self, *a, **k): # If you encounter this error, you might be tempted to just not # inherit from NoDBTestCase. Bug #1568414 fixed a few hundred of these # errors, and not once was that the correct solution. Instead, # consider some of the following tips (when applicable): # # - mock at the object layer rather than the db layer, for example: # nova.objects.instance.Instance.get # vs. # nova.db.instance_get # # - mock at the api layer rather than the object layer, for example: # nova.api.openstack.common.get_instance # vs. # nova.objects.instance.Instance.get # # - mock code that requires the database but is otherwise tangential # to the code you're testing (for example: EventReporterStub) # # - peruse some of the other database poison warning fixes here: # https://review.opendev.org/#/q/topic:bug/1568414 raise Exception('This test uses methods that set internal oslo_db ' 'state, but it does not claim to use the database. ' 'This will conflict with the setup of tests that ' 'do use the database and cause failures later.') class SingleCellSimple(fixtures.Fixture): """Setup the simplest cells environment possible This should be used when you do not care about multiple cells, or having a "real" environment for tests that should not care. This will give you a single cell, and map any and all accesses to that cell (even things that would go to cell0). If you need to distinguish between cell0 and cellN, then you should use the CellDatabases fixture. If instances should appear to still be in scheduling state, pass instances_created=False to init. 
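    A minimal usage sketch (illustrative only; it assumes this module is
    imported as ``nova_fixtures``, matching the convention used by the other
    examples in this file)::

        self.useFixture(nova_fixtures.SingleCellSimple())
        # Or, if instances should still look unscheduled:
        self.useFixture(
            nova_fixtures.SingleCellSimple(instances_created=False))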
""" def __init__(self, instances_created=True): self.instances_created = instances_created def setUp(self): super(SingleCellSimple, self).setUp() self.useFixture(fixtures.MonkeyPatch( 'nova.objects.CellMappingList._get_all_from_db', self._fake_cell_list)) self.useFixture(fixtures.MonkeyPatch( 'nova.objects.CellMappingList._get_by_project_id_from_db', self._fake_cell_list)) self.useFixture(fixtures.MonkeyPatch( 'nova.objects.CellMapping._get_by_uuid_from_db', self._fake_cell_get)) self.useFixture(fixtures.MonkeyPatch( 'nova.objects.HostMapping._get_by_host_from_db', self._fake_hostmapping_get)) self.useFixture(fixtures.MonkeyPatch( 'nova.objects.InstanceMapping._get_by_instance_uuid_from_db', self._fake_instancemapping_get)) self.useFixture(fixtures.MonkeyPatch( 'nova.objects.InstanceMappingList._get_by_instance_uuids_from_db', self._fake_instancemapping_get_uuids)) self.useFixture(fixtures.MonkeyPatch( 'nova.objects.InstanceMapping._save_in_db', self._fake_instancemapping_get_save)) self.useFixture(fixtures.MonkeyPatch( 'nova.context.target_cell', self._fake_target_cell)) self.useFixture(fixtures.MonkeyPatch( 'nova.context.set_target_cell', self._fake_set_target_cell)) def _fake_hostmapping_get(self, *args): return {'id': 1, 'updated_at': None, 'created_at': None, 'host': 'host1', 'cell_mapping': self._fake_cell_list()[0]} def _fake_instancemapping_get_common(self, instance_uuid): return { 'id': 1, 'updated_at': None, 'created_at': None, 'instance_uuid': instance_uuid, 'cell_id': (self.instances_created and 1 or None), 'project_id': 'project', 'cell_mapping': ( self.instances_created and self._fake_cell_get() or None), } def _fake_instancemapping_get_save(self, *args): return self._fake_instancemapping_get_common(args[-2]) def _fake_instancemapping_get(self, *args): return self._fake_instancemapping_get_common(args[-1]) def _fake_instancemapping_get_uuids(self, *args): return [self._fake_instancemapping_get(uuid) for uuid in args[-1]] def _fake_cell_get(self, *args): return self._fake_cell_list()[0] def _fake_cell_list(self, *args): return [{'id': 1, 'updated_at': None, 'created_at': None, 'uuid': uuidsentinel.cell1, 'name': 'onlycell', 'transport_url': 'fake://nowhere/', 'database_connection': 'sqlite:///', 'disabled': False}] @contextmanager def _fake_target_cell(self, context, target_cell): # Just do something simple and set/unset the cell_uuid on the context. if target_cell: context.cell_uuid = getattr(target_cell, 'uuid', uuidsentinel.cell1) else: context.cell_uuid = None yield context def _fake_set_target_cell(self, context, cell_mapping): # Just do something simple and set/unset the cell_uuid on the context. if cell_mapping: context.cell_uuid = getattr(cell_mapping, 'uuid', uuidsentinel.cell1) else: context.cell_uuid = None class CheatingSerializer(rpc.RequestContextSerializer): """A messaging.RequestContextSerializer that helps with cells. Our normal serializer does not pass in the context like db_connection and mq_connection, for good reason. We don't really want/need to force a remote RPC server to use our values for this. However, during unit and functional tests, since we're all in the same process, we want cell-targeted RPC calls to preserve these values. Unless we had per-service config and database layer state for the fake services we start, this is a reasonable cheat. 
""" def serialize_context(self, context): """Serialize context with the db_connection inside.""" values = super(CheatingSerializer, self).serialize_context(context) values['db_connection'] = context.db_connection values['mq_connection'] = context.mq_connection return values def deserialize_context(self, values): """Deserialize context and honor db_connection if present.""" ctxt = super(CheatingSerializer, self).deserialize_context(values) ctxt.db_connection = values.pop('db_connection', None) ctxt.mq_connection = values.pop('mq_connection', None) return ctxt class CellDatabases(fixtures.Fixture): """Create per-cell databases for testing. How to use:: fix = CellDatabases() fix.add_cell_database('connection1') fix.add_cell_database('connection2', default=True) self.useFixture(fix) Passing default=True tells the fixture which database should be given to code that doesn't target a specific cell. """ def __init__(self): self._ctxt_mgrs = {} self._last_ctxt_mgr = None self._default_ctxt_mgr = None # NOTE(danms): Use a ReaderWriterLock to synchronize our # global database muckery here. If we change global db state # to point to a cell, we need to take an exclusive lock to # prevent any other calls to get_context_manager() until we # reset to the default. self._cell_lock = lockutils.ReaderWriterLock() def _cache_schema(self, connection_str): # NOTE(melwitt): See the regular Database fixture for why # we do this. global DB_SCHEMA if not DB_SCHEMA['main']: ctxt_mgr = self._ctxt_mgrs[connection_str] engine = ctxt_mgr.writer.get_engine() conn = engine.connect() migration.db_sync(database='main') DB_SCHEMA['main'] = "".join(line for line in conn.connection.iterdump()) engine.dispose() @contextmanager def _wrap_target_cell(self, context, cell_mapping): # NOTE(danms): This method is responsible for switching global # database state in a safe way such that code that doesn't # know anything about cell targeting (i.e. compute node code) # can continue to operate when called from something that has # targeted a specific cell. In order to make this safe from a # dining-philosopher-style deadlock, we need to be able to # support multiple threads talking to the same cell at the # same time and potentially recursion within the same thread # from code that would otherwise be running on separate nodes # in real life, but where we're actually recursing in the # tests. # # The basic logic here is: # 1. Grab a reader lock to see if the state is already pointing at # the cell we want. If it is, we can yield and return without # altering the global state further. The read lock ensures that # global state won't change underneath us, and multiple threads # can be working at the same time, as long as they are looking # for the same cell. # 2. If we do need to change the global state, grab a writer lock # to make that change, which assumes that nothing else is looking # at a cell right now. We do only non-schedulable things while # holding that lock to avoid the deadlock mentioned above. # 3. We then re-lock with a reader lock just as step #1 above and # yield to do the actual work. We can do schedulable things # here and not exclude other threads from making progress. # If an exception is raised, we capture that and save it. # 4. If we changed state in #2, we need to change it back. So we grab # a writer lock again and do that. # 5. Finally, if an exception was raised in #3 while state was # changed, we raise it to the caller. 
if cell_mapping: desired = self._ctxt_mgrs[cell_mapping.database_connection] else: desired = self._default_ctxt_mgr with self._cell_lock.read_lock(): if self._last_ctxt_mgr == desired: with self._real_target_cell(context, cell_mapping) as c: yield c return raised_exc = None with self._cell_lock.write_lock(): if cell_mapping is not None: # This assumes the next local DB access is the same cell that # was targeted last time. self._last_ctxt_mgr = desired with self._cell_lock.read_lock(): if self._last_ctxt_mgr != desired: # NOTE(danms): This is unlikely to happen, but it's possible # another waiting writer changed the state between us letting # it go and re-acquiring as a reader. If lockutils supported # upgrading and downgrading locks, this wouldn't be a problem. # Regardless, assert that it is still as we left it here # so we don't hit the wrong cell. If this becomes a problem, # we just need to retry the write section above until we land # here with the cell we want. raise RuntimeError('Global DB state changed underneath us') try: with self._real_target_cell(context, cell_mapping) as ccontext: yield ccontext except Exception as exc: raised_exc = exc with self._cell_lock.write_lock(): # Once we have returned from the context, we need # to restore the default context manager for any # subsequent calls self._last_ctxt_mgr = self._default_ctxt_mgr if raised_exc: raise raised_exc def _wrap_create_context_manager(self, connection=None): ctxt_mgr = self._ctxt_mgrs[connection] return ctxt_mgr def _wrap_get_context_manager(self, context): try: # If already targeted, we can proceed without a lock if context.db_connection: return context.db_connection except AttributeError: # Unit tests with None, FakeContext, etc pass # NOTE(melwitt): This is a hack to try to deal with # local accesses i.e. non target_cell accesses. with self._cell_lock.read_lock(): # FIXME(mriedem): This is actually misleading and means we don't # catch things like bug 1717000 where a context should be targeted # to a cell but it's not, and the fixture here just returns the # last targeted context that was used. return self._last_ctxt_mgr def _wrap_get_server(self, target, endpoints, serializer=None): """Mirror rpc.get_server() but with our special sauce.""" serializer = CheatingSerializer(serializer) return messaging.get_rpc_server(rpc.TRANSPORT, target, endpoints, executor='eventlet', serializer=serializer) def _wrap_get_client(self, target, version_cap=None, serializer=None, call_monitor_timeout=None): """Mirror rpc.get_client() but with our special sauce.""" serializer = CheatingSerializer(serializer) return messaging.RPCClient(rpc.TRANSPORT, target, version_cap=version_cap, serializer=serializer, call_monitor_timeout=call_monitor_timeout) def add_cell_database(self, connection_str, default=False): """Add a cell database to the fixture. :param connection_str: An identifier used to represent the connection string for this database. It should match the database_connection field in the corresponding CellMapping. """ # NOTE(danms): Create a new context manager for the cell, which # will house the sqlite:// connection for this cell's in-memory # database. Store/index it by the connection string, which is # how we identify cells in CellMapping. ctxt_mgr = session.create_context_manager() self._ctxt_mgrs[connection_str] = ctxt_mgr # NOTE(melwitt): The first DB access through service start is # local so this initializes _last_ctxt_mgr for that and needs # to be a compute cell. 
self._last_ctxt_mgr = ctxt_mgr # NOTE(danms): Record which context manager should be the default # so we can restore it when we return from target-cell contexts. # If none has been provided yet, store the current one in case # no default is ever specified. if self._default_ctxt_mgr is None or default: self._default_ctxt_mgr = ctxt_mgr def get_context_manager(context): return ctxt_mgr # NOTE(danms): This is a temporary MonkeyPatch just to get # a new database created with the schema we need and the # context manager for it stashed. with fixtures.MonkeyPatch( 'nova.db.sqlalchemy.api.get_context_manager', get_context_manager): self._cache_schema(connection_str) engine = ctxt_mgr.writer.get_engine() engine.dispose() conn = engine.connect() conn.connection.executescript(DB_SCHEMA['main']) def setUp(self): super(CellDatabases, self).setUp() self.addCleanup(self.cleanup) self._real_target_cell = context.target_cell # NOTE(danms): These context managers are in place for the # duration of the test (unlike the temporary ones above) and # provide the actual "runtime" switching of connections for us. self.useFixture(fixtures.MonkeyPatch( 'nova.db.sqlalchemy.api.create_context_manager', self._wrap_create_context_manager)) self.useFixture(fixtures.MonkeyPatch( 'nova.db.sqlalchemy.api.get_context_manager', self._wrap_get_context_manager)) self.useFixture(fixtures.MonkeyPatch( 'nova.context.target_cell', self._wrap_target_cell)) self.useFixture(fixtures.MonkeyPatch( 'nova.rpc.get_server', self._wrap_get_server)) self.useFixture(fixtures.MonkeyPatch( 'nova.rpc.get_client', self._wrap_get_client)) def cleanup(self): for ctxt_mgr in self._ctxt_mgrs.values(): engine = ctxt_mgr.writer.get_engine() engine.dispose() class Database(fixtures.Fixture): def __init__(self, database='main', connection=None): """Create a database fixture. :param database: The type of database, 'main', or 'api' :param connection: The connection string to use """ super(Database, self).__init__() # NOTE(pkholkin): oslo_db.enginefacade is configured in tests the same # way as it is done for any other service that uses db global SESSION_CONFIGURED if not SESSION_CONFIGURED: session.configure(CONF) SESSION_CONFIGURED = True self.database = database if database == 'main': if connection is not None: ctxt_mgr = session.create_context_manager( connection=connection) self.get_engine = ctxt_mgr.writer.get_engine else: self.get_engine = session.get_engine elif database == 'api': self.get_engine = session.get_api_engine def _cache_schema(self): global DB_SCHEMA if not DB_SCHEMA[self.database]: engine = self.get_engine() conn = engine.connect() migration.db_sync(database=self.database) DB_SCHEMA[self.database] = "".join(line for line in conn.connection.iterdump()) engine.dispose() def cleanup(self): engine = self.get_engine() engine.dispose() def reset(self): self._cache_schema() engine = self.get_engine() engine.dispose() conn = engine.connect() conn.connection.executescript(DB_SCHEMA[self.database]) def setUp(self): super(Database, self).setUp() self.reset() self.addCleanup(self.cleanup) class DatabaseAtVersion(fixtures.Fixture): def __init__(self, version, database='main'): """Create a database fixture. 
:param version: Max version to sync to (or None for current) :param database: The type of database, 'main', 'api' """ super(DatabaseAtVersion, self).__init__() self.database = database self.version = version if database == 'main': self.get_engine = session.get_engine elif database == 'api': self.get_engine = session.get_api_engine def cleanup(self): engine = self.get_engine() engine.dispose() def reset(self): engine = self.get_engine() engine.dispose() engine.connect() migration.db_sync(version=self.version, database=self.database) def setUp(self): super(DatabaseAtVersion, self).setUp() self.reset() self.addCleanup(self.cleanup) class DefaultFlavorsFixture(fixtures.Fixture): def setUp(self): super(DefaultFlavorsFixture, self).setUp() ctxt = context.get_admin_context() defaults = {'rxtx_factor': 1.0, 'disabled': False, 'is_public': True, 'ephemeral_gb': 0, 'swap': 0} extra_specs = { "hw:numa_nodes": "1" } default_flavors = [ objects.Flavor(context=ctxt, memory_mb=512, vcpus=1, root_gb=1, flavorid='1', name='m1.tiny', **defaults), objects.Flavor(context=ctxt, memory_mb=2048, vcpus=1, root_gb=20, flavorid='2', name='m1.small', **defaults), objects.Flavor(context=ctxt, memory_mb=4096, vcpus=2, root_gb=40, flavorid='3', name='m1.medium', **defaults), objects.Flavor(context=ctxt, memory_mb=8192, vcpus=4, root_gb=80, flavorid='4', name='m1.large', **defaults), objects.Flavor(context=ctxt, memory_mb=16384, vcpus=8, root_gb=160, flavorid='5', name='m1.xlarge', **defaults), objects.Flavor(context=ctxt, memory_mb=512, vcpus=1, root_gb=1, flavorid='6', name='m1.tiny.specs', extra_specs=extra_specs, **defaults), ] for flavor in default_flavors: flavor.create() class RPCFixture(fixtures.Fixture): def __init__(self, *exmods): super(RPCFixture, self).__init__() self.exmods = [] self.exmods.extend(exmods) self._buses = {} def _fake_create_transport(self, url): # FIXME(danms): Right now, collapse all connections # to a single bus. This is how our tests expect things # to work. When the tests are fixed, this fixture can # support simulating multiple independent buses, and this # hack should be removed. url = None # NOTE(danms): This will be called with a non-None url by # cells-aware code that is requesting to contact something on # one of the many transports we're multplexing here. if url not in self._buses: exmods = rpc.get_allowed_exmods() self._buses[url] = messaging.get_rpc_transport( CONF, url=url, allowed_remote_exmods=exmods) return self._buses[url] def setUp(self): super(RPCFixture, self).setUp() self.addCleanup(rpc.cleanup) rpc.add_extra_exmods(*self.exmods) self.addCleanup(rpc.clear_extra_exmods) self.messaging_conf = messaging_conffixture.ConfFixture(CONF) self.messaging_conf.transport_url = 'fake:/' self.useFixture(self.messaging_conf) self.useFixture(fixtures.MonkeyPatch( 'nova.rpc.create_transport', self._fake_create_transport)) # NOTE(danms): Execute the init with get_transport_url() as None, # instead of the parsed TransportURL(None) so that we can cache # it as it will be called later if the default is requested by # one of our mq-switching methods. 
with mock.patch('nova.rpc.get_transport_url') as mock_gtu: mock_gtu.return_value = None rpc.init(CONF) def cleanup_in_flight_rpc_messages(): messaging._drivers.impl_fake.FakeExchangeManager._exchanges = {} self.addCleanup(cleanup_in_flight_rpc_messages) class WarningsFixture(fixtures.Fixture): """Filters out warnings during test runs.""" def setUp(self): super(WarningsFixture, self).setUp() # NOTE(sdague): Make deprecation warnings only happen once. Otherwise # this gets kind of crazy given the way that upstream python libs use # this. warnings.simplefilter("once", DeprecationWarning) warnings.filterwarnings('ignore', message='With-statements now directly support' ' multiple context managers') # NOTE(sdague): nova does not use pkg_resources directly, this # is all very long standing deprecations about other tools # using it. None of this is useful to Nova development. warnings.filterwarnings('ignore', module='pkg_resources') # NOTE(sdague): this remains an unresolved item around the way # forward on is_admin, the deprecation is definitely really premature. warnings.filterwarnings('ignore', message='Policy enforcement is depending on the value of is_admin.' ' This key is deprecated. Please update your policy ' 'file to use the standard policy values.') # NOTE(mriedem): Ignore scope check UserWarnings from oslo.policy. warnings.filterwarnings('ignore', message="Policy .* failed scope check", category=UserWarning) # NOTE(gibi): The UUIDFields emits a warning if the value is not a # valid UUID. Let's escalate that to an exception in the test to # prevent adding violations. warnings.filterwarnings('error', message=".*invalid UUID.*") # NOTE(mriedem): Avoid adding anything which tries to convert an # object to a primitive which jsonutils.to_primitive() does not know # how to handle (or isn't given a fallback callback). warnings.filterwarnings( 'error', message="Cannot convert : {: }} self._port_bindings = collections.defaultdict(dict) # The fixture does not allow network, subnet or security group updates # so we don't have to deepcopy here self._networks = { self.network_1['id']: self.network_1 } self._subnets = { self.subnet_1['id']: self.subnet_1, self.subnet_ipv6_1['id']: self.subnet_ipv6_1, } self._security_groups = { self.security_group['id']: self.security_group, } def setUp(self): super(NeutronFixture, self).setUp() # NOTE(gibi): This is the simplest way to unblock nova during live # migration. A nicer way would be to actually send network-vif-plugged # events to the nova-api from NeutronFixture when the port is bound but # calling nova API from this fixture needs a big surgery and sending # event right at the binding request means that such event will arrive # to nova earlier than the compute manager starts waiting for it. self.test.flags(vif_plugging_timeout=0) self.test.stub_out( 'nova.network.neutron.API.add_fixed_ip_to_instance', lambda *args, **kwargs: network_model.NetworkInfo.hydrate( self.nw_info)) self.test.stub_out( 'nova.network.neutron.API.remove_fixed_ip_from_instance', lambda *args, **kwargs: network_model.NetworkInfo.hydrate( self.nw_info)) # Stub out port binding APIs which go through a KSA client Adapter # rather than python-neutronclient. 
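        # The stubs below route those binding calls to the in-memory
        # self._port_bindings dict maintained by this fixture, so tests can
        # exercise migration-style port binding flows without a real Neutron
        # service.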
self.test.stub_out( 'nova.network.neutron._get_ksa_client', lambda *args, **kwargs: mock.Mock( spec=ksa_adap.Adapter)) self.test.stub_out( 'nova.network.neutron.API._create_port_binding', self.create_port_binding) self.test.stub_out( 'nova.network.neutron.API._delete_port_binding', self.delete_port_binding) self.test.stub_out( 'nova.network.neutron.API._activate_port_binding', self.activate_port_binding) self.test.stub_out( 'nova.network.neutron.API._get_port_binding', self.get_port_binding) self.test.stub_out('nova.network.neutron.get_client', self._get_client) def _get_client(self, context, admin=False): # This logic is copied from nova.network.neutron._get_auth_plugin admin = admin or context.is_admin and not context.auth_token return _FakeNeutronClient(self, admin) def create_port_binding(self, context, client, port_id, data): if port_id not in self._ports: return fake_requests.FakeResponse( 404, content='Port %s not found' % port_id) host = data['binding']['host'] # We assume that every binding that is created is inactive. # This is only true from the current nova code perspective where # explicit binding creation only happen for migration where the port # is already actively bound to the source host. # TODO(gibi): enhance update_port to detect if the port is bound by # the update and create a binding internally in _port_bindings. Then # we can change the logic here to mimic neutron better by making the # first binding active by default. data['binding']['status'] = 'INACTIVE' self._port_bindings[port_id][host] = copy.deepcopy(data['binding']) return fake_requests.FakeResponse(200, content=jsonutils.dumps(data)) def _get_failure_response_if_port_or_binding_not_exists( self, port_id, host): if port_id not in self._ports: return fake_requests.FakeResponse( 404, content='Port %s not found' % port_id) if host not in self._port_bindings[port_id]: return fake_requests.FakeResponse( 404, content='Binding for host %s for port %s not found' % (host, port_id)) def delete_port_binding(self, context, client, port_id, host): failure = self._get_failure_response_if_port_or_binding_not_exists( port_id, host) if failure is not None: return failure del self._port_bindings[port_id][host] return fake_requests.FakeResponse(204) def activate_port_binding(self, context, client, port_id, host): failure = self._get_failure_response_if_port_or_binding_not_exists( port_id, host) if failure is not None: return failure # It makes sure that only one binding is active for a port for h, binding in self._port_bindings[port_id].items(): if h == host: # NOTE(gibi): neutron returns 409 if this binding is already # active but nova does not depend on this behaviour yet. binding['status'] = 'ACTIVE' else: binding['status'] = 'INACTIVE' return fake_requests.FakeResponse(200) def get_port_binding(self, context, client, port_id, host): failure = self._get_failure_response_if_port_or_binding_not_exists( port_id, host) if failure is not None: return failure binding = {"binding": self._port_bindings[port_id][host]} return fake_requests.FakeResponse( 200, content=jsonutils.dumps(binding)) def _list_resource(self, resources, retrieve_all, **_params): # If 'fields' is passed we need to strip that out since it will mess # up the filtering as 'fields' is not a filter parameter. 
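        # Each remaining key/value in _params acts as an exact-match filter:
        # a resource is returned only if it matches every filter, and a
        # list/tuple value means the resource must match one of its elements.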
_params.pop('fields', None) result = [] for resource in resources.values(): for key, val in _params.items(): # params can be strings or lists/tuples and these need to be # handled differently if isinstance(val, list) or isinstance(val, tuple): if not any(resource.get(key) == v for v in val): break else: if resource.get(key) != val: break else: # triggers if we didn't hit a break above result.append(copy.deepcopy(resource)) return result def list_extensions(self, *args, **kwargs): return { 'extensions': [ { # Copied from neutron-lib portbindings_extended.py "updated": "2017-07-17T10:00:00-00:00", "name": neutron_constants.PORT_BINDING_EXTENDED, "links": [], "alias": "binding-extended", "description": "Expose port bindings of a virtual port to " "external application" } ] } def _get_active_binding(self, port_id): for host, binding in self._port_bindings[port_id].items(): if binding['status'] == 'ACTIVE': return binding def _merge_in_active_binding(self, port): """Update the port dict with the currently active port binding""" if port['id'] not in self._port_bindings: return binding = self._get_active_binding(port['id']) or {} for key, value in binding.items(): # keys in the binding is like 'vnic_type' but in the port response # they are like 'binding:vnic_type'. Except for the host_id that # is called 'host' in the binding but 'binding:host_id' in the # port response. if key != 'host': port['binding:' + key] = value else: port['binding:host_id'] = binding['host'] def show_port(self, port_id, **_params): if port_id not in self._ports: raise exception.PortNotFound(port_id=port_id) port = copy.deepcopy(self._ports[port_id]) self._merge_in_active_binding(port) return {'port': port} def delete_port(self, port_id, **_params): if port_id in self._ports: del self._ports[port_id] # Not all flow use explicit binding creation by calling # neutronv2.api.API.bind_ports_to_host(). Non live migration flows # simply update the port to bind it. So we need to delete bindings # conditionally if port_id in self._port_bindings: del self._port_bindings[port_id] def list_ports(self, is_admin, retrieve_all=True, **_params): ports = self._list_resource(self._ports, retrieve_all, **_params) for port in ports: self._merge_in_active_binding(port) # Neutron returns None instead of the real resource_request if # the ports are queried by a non-admin. So simulate this behavior # here if not is_admin: if 'resource_request' in port: port['resource_request'] = None return {'ports': ports} def show_network(self, network_id, **_params): if network_id not in self._networks: raise neutron_client_exc.NetworkNotFoundClient() return {'network': copy.deepcopy(self._networks[network_id])} def list_networks(self, retrieve_all=True, **_params): return {'networks': self._list_resource( self._networks, retrieve_all, **_params)} def list_subnets(self, retrieve_all=True, **_params): # NOTE(gibi): The fixture does not support filtering for subnets return {'subnets': copy.deepcopy(list(self._subnets.values()))} def list_floatingips(self, retrieve_all=True, **_params): return {'floatingips': []} def list_security_groups(self, retrieve_all=True, **_params): return {'security_groups': self._list_resource( self._security_groups, retrieve_all, **_params)} def create_port(self, body=None): body = body or {'port': {}} # Note(gibi): Some of the test expects that a pre-defined port is # created. This is port_2. So if that port is not created yet then # that is the one created here. 
new_port = copy.deepcopy(body['port']) new_port.update(copy.deepcopy(self.port_2)) if self.port_2['id'] in self._ports: # If port_2 is already created then create a new port based on # the request body, the port_2 as a template, and assign new # port_id and mac_address for the new port # we need truly random uuids instead of named sentinels as some # tests needs more than 3 ports new_port.update({ 'id': str(uuidutils.generate_uuid()), 'mac_address': '00:' + ':'.join( ['%02x' % random.randint(0, 255) for _ in range(5)]), }) self._ports[new_port['id']] = new_port # we need to copy again what we return as nova might modify the # returned port locally and we don't want that it effects the port in # the self._ports dict. return {'port': copy.deepcopy(new_port)} def update_port(self, port_id, body=None): # TODO(gibi): check if the port update binds the port and update the # internal _port_bindings dict accordingly. Such a binding always # becomes and active port binding of the port. port = self._ports[port_id] # We need to deepcopy here as well as the body can have a nested dict # which can be modified by the caller after this update_port call port.update(copy.deepcopy(body['port'])) return {'port': copy.deepcopy(port)} def show_quota(self, project_id): # unlimited quota return {'quota': {'port': -1}} def validate_auto_allocated_topology_requirements(self, project_id): # from https://github.com/openstack/python-neutronclient/blob/6.14.0/ # neutronclient/v2_0/client.py#L2009-L2011 return self.get_auto_allocated_topology(project_id, fields=['dry-run']) def get_auto_allocated_topology(self, project_id, **_params): # from https://github.com/openstack/neutron/blob/14.0.0/ # neutron/services/auto_allocate/db.py#L134-L162 if _params == {'fields': ['dry-run']}: return {'id': 'dry-run=pass', 'tenant_id': project_id} return { 'auto_allocated_topology': { 'id': self.network_1['id'], 'tenant_id': project_id, } } class _NoopConductor(object): def __getattr__(self, key): def _noop_rpc(*args, **kwargs): return None return _noop_rpc class NoopConductorFixture(fixtures.Fixture): """Stub out the conductor API to do nothing""" def setUp(self): super(NoopConductorFixture, self).setUp() self.useFixture(fixtures.MonkeyPatch( 'nova.conductor.ComputeTaskAPI', _NoopConductor)) self.useFixture(fixtures.MonkeyPatch( 'nova.conductor.API', _NoopConductor)) class EventReporterStub(fixtures.Fixture): def setUp(self): super(EventReporterStub, self).setUp() self.useFixture(fixtures.MonkeyPatch( 'nova.compute.utils.EventReporter', lambda *args, **kwargs: mock.MagicMock())) class CinderFixture(fixtures.Fixture): """A fixture to volume operations with the new Cinder attach/detach API""" # the default project_id in OSAPIFixtures tenant_id = '6f70656e737461636b20342065766572' SWAP_OLD_VOL = 'a07f71dc-8151-4e7d-a0cc-cd24a3f11113' SWAP_NEW_VOL = '227cc671-f30b-4488-96fd-7d0bf13648d8' SWAP_ERR_OLD_VOL = '828419fa-3efb-4533-b458-4267ca5fe9b1' SWAP_ERR_NEW_VOL = '9c6d9c2d-7a8f-4c80-938d-3bf062b8d489' SWAP_ERR_ATTACH_ID = '4a3cd440-b9c2-11e1-afa6-0800200c9a66' MULTIATTACH_VOL = '4757d51f-54eb-4442-8684-3399a6431f67' # This represents a bootable image-backed volume to test # boot-from-volume scenarios. IMAGE_BACKED_VOL = '6ca404f3-d844-4169-bb96-bc792f37de98' # This represents a bootable image-backed volume with required traits # as part of volume image metadata IMAGE_WITH_TRAITS_BACKED_VOL = '6194fc02-c60e-4a01-a8e5-600798208b5f' def __init__(self, test, az='nova'): """Initialize this instance of the CinderFixture. 
:param test: The TestCase using this fixture. :param az: The availability zone to return in volume GET responses. Defaults to "nova" since that is the default we would see from Cinder's storage_availability_zone config option. """ super(CinderFixture, self).__init__() self.test = test self.swap_volume_instance_uuid = None self.swap_volume_instance_error_uuid = None self.attachment_error_id = None self.az = az # A dict, keyed by volume id, to a dict, keyed by attachment id, # with keys: # - id: the attachment id # - instance_uuid: uuid of the instance attached to the volume # - connector: host connector dict; None if not connected # Note that a volume can have multiple attachments even without # multi-attach, as some flows create a blank 'reservation' attachment # before deleting another attachment. However, a non-multiattach volume # can only have at most one attachment with a host connector at a time. self.volume_to_attachment = collections.defaultdict(dict) def volume_ids_for_instance(self, instance_uuid): for volume_id, attachments in self.volume_to_attachment.items(): for attachment in attachments.values(): if attachment['instance_uuid'] == instance_uuid: # we might have multiple volumes attached to this instance # so yield rather than return yield volume_id break def attachment_ids_for_instance(self, instance_uuid): attachment_ids = [] for volume_id, attachments in self.volume_to_attachment.items(): for attachment in attachments.values(): if attachment['instance_uuid'] == instance_uuid: attachment_ids.append(attachment['id']) return attachment_ids def setUp(self): super(CinderFixture, self).setUp() def fake_get(self_api, context, volume_id, microversion=None): # Check for the special swap volumes. attachments = self.volume_to_attachment[volume_id] if volume_id in (self.SWAP_OLD_VOL, self.SWAP_ERR_OLD_VOL): volume = { 'status': 'available', 'display_name': 'TEST1', 'attach_status': 'detached', 'id': volume_id, 'multiattach': False, 'size': 1 } if ((self.swap_volume_instance_uuid and volume_id == self.SWAP_OLD_VOL) or (self.swap_volume_instance_error_uuid and volume_id == self.SWAP_ERR_OLD_VOL)): instance_uuid = (self.swap_volume_instance_uuid if volume_id == self.SWAP_OLD_VOL else self.swap_volume_instance_error_uuid) if attachments: attachment = list(attachments.values())[0] volume.update({ 'status': 'in-use', 'attachments': { instance_uuid: { 'mountpoint': '/dev/vdb', 'attachment_id': attachment['id'] } }, 'attach_status': 'attached' }) return volume # Check to see if the volume is attached. if attachments: # The volume is attached. attachment = list(attachments.values())[0] volume = { 'status': 'in-use', 'display_name': volume_id, 'attach_status': 'attached', 'id': volume_id, 'multiattach': volume_id == self.MULTIATTACH_VOL, 'size': 1, 'attachments': { attachment['instance_uuid']: { 'attachment_id': attachment['id'], 'mountpoint': '/dev/vdb' } } } else: # This is a test that does not care about the actual details. volume = { 'status': 'available', 'display_name': 'TEST2', 'attach_status': 'detached', 'id': volume_id, 'multiattach': volume_id == self.MULTIATTACH_VOL, 'size': 1 } if 'availability_zone' not in volume: volume['availability_zone'] = self.az # Check for our special image-backed volume. if volume_id in (self.IMAGE_BACKED_VOL, self.IMAGE_WITH_TRAITS_BACKED_VOL): # Make it a bootable volume. volume['bootable'] = True if volume_id == self.IMAGE_BACKED_VOL: # Add the image_id metadata. volume['volume_image_metadata'] = { # There would normally be more image metadata in here. 
'image_id': '155d900f-4e14-4e4c-a73d-069cbf4541e6' } elif volume_id == self.IMAGE_WITH_TRAITS_BACKED_VOL: # Add the image_id metadata with traits. volume['volume_image_metadata'] = { 'image_id': '155d900f-4e14-4e4c-a73d-069cbf4541e6', "trait:HW_CPU_X86_SGX": "required", } return volume def fake_migrate_volume_completion(_self, context, old_volume_id, new_volume_id, error): return {'save_volume_id': new_volume_id} def _find_attachment(attachment_id): """Find attachment corresponding to ``attachment_id``. Returns: A tuple of the volume ID, an attachment dict for the given attachment ID, and a dict (keyed by attachment id) of attachment dicts for the volume. """ for volume_id, attachments in self.volume_to_attachment.items(): for attachment in attachments.values(): if attachment_id == attachment['id']: return volume_id, attachment, attachments raise exception.VolumeAttachmentNotFound( attachment_id=attachment_id) def fake_attachment_create(_self, context, volume_id, instance_uuid, connector=None, mountpoint=None): attachment_id = uuidutils.generate_uuid() if self.attachment_error_id is not None: attachment_id = self.attachment_error_id attachment = {'id': attachment_id, 'connection_info': {'data': {}}} self.volume_to_attachment[volume_id][attachment_id] = { 'id': attachment_id, 'instance_uuid': instance_uuid, 'connector': connector} LOG.info('Created attachment %s for volume %s. Total ' 'attachments for volume: %d', attachment_id, volume_id, len(self.volume_to_attachment[volume_id])) return attachment def fake_attachment_delete(_self, context, attachment_id): # 'attachment' is a tuple defining a attachment-instance mapping volume_id, attachment, attachments = ( _find_attachment(attachment_id)) del attachments[attachment_id] LOG.info('Deleted attachment %s for volume %s. Total attachments ' 'for volume: %d', attachment_id, volume_id, len(attachments)) def fake_attachment_update(_self, context, attachment_id, connector, mountpoint=None): # Ensure the attachment exists volume_id, attachment, attachments = ( _find_attachment(attachment_id)) # Cinder will only allow one "connected" attachment per # non-multiattach volume at a time. if volume_id != self.MULTIATTACH_VOL: for _attachment in attachments.values(): if _attachment['connector'] is not None: raise exception.InvalidInput( 'Volume %s is already connected with attachment ' '%s on host %s' % (volume_id, _attachment['id'], _attachment['connector'].get('host'))) attachment['connector'] = connector LOG.info('Updating volume attachment: %s', attachment_id) attachment_ref = {'driver_volume_type': 'fake_type', 'id': attachment_id, 'connection_info': {'data': {'foo': 'bar', 'target_lun': '1'}}} if attachment_id == self.SWAP_ERR_ATTACH_ID: # This intentionally triggers a TypeError for the # instance.volume_swap.error versioned notification tests. attachment_ref = {'connection_info': ()} return attachment_ref def fake_attachment_get(_self, context, attachment_id): # Ensure the attachment exists _find_attachment(attachment_id) attachment_ref = {'driver_volume_type': 'fake_type', 'id': attachment_id, 'connection_info': {'data': {'foo': 'bar', 'target_lun': '1'}}} return attachment_ref def fake_get_all_volume_types(*args, **kwargs): return [{ # This is used in the 2.67 API sample test. 
'id': '5f9204ec-3e94-4f27-9beb-fe7bb73b6eb9', 'name': 'lvm-1' }] def fake_attachment_complete(_self, _context, attachment_id): # Ensure the attachment exists _find_attachment(attachment_id) LOG.info('Completing volume attachment: %s', attachment_id) self.test.stub_out('nova.volume.cinder.API.attachment_create', fake_attachment_create) self.test.stub_out('nova.volume.cinder.API.attachment_delete', fake_attachment_delete) self.test.stub_out('nova.volume.cinder.API.attachment_update', fake_attachment_update) self.test.stub_out('nova.volume.cinder.API.attachment_complete', fake_attachment_complete) self.test.stub_out('nova.volume.cinder.API.attachment_get', fake_attachment_get) self.test.stub_out('nova.volume.cinder.API.begin_detaching', lambda *args, **kwargs: None) self.test.stub_out('nova.volume.cinder.API.get', fake_get) self.test.stub_out( 'nova.volume.cinder.API.migrate_volume_completion', fake_migrate_volume_completion) self.test.stub_out('nova.volume.cinder.API.roll_detaching', lambda *args, **kwargs: None) self.test.stub_out('nova.volume.cinder.is_microversion_supported', lambda ctxt, microversion: None) self.test.stub_out('nova.volume.cinder.API.check_attached', lambda *args, **kwargs: None) self.test.stub_out('nova.volume.cinder.API.get_all_volume_types', fake_get_all_volume_types) class UnHelperfulClientChannel(privsep_daemon._ClientChannel): def __init__(self, context): raise Exception('You have attempted to start a privsep helper. ' 'This is not allowed in the gate, and ' 'indicates a failure to have mocked your tests.') class PrivsepNoHelperFixture(fixtures.Fixture): """A fixture to catch failures to mock privsep's rootwrap helper. If you fail to mock away a privsep'd method in a unit test, then you may well end up accidentally running the privsep rootwrap helper. This will fail in the gate, but it fails in a way which doesn't identify which test is missing a mock. Instead, we raise an exception so that you at least know where you've missed something. """ def setUp(self): super(PrivsepNoHelperFixture, self).setUp() self.useFixture(fixtures.MonkeyPatch( 'oslo_privsep.daemon.RootwrapClientChannel', UnHelperfulClientChannel)) class PrivsepFixture(fixtures.Fixture): """Disable real privsep checking so we can test the guts of methods decorated with sys_admin_pctxt. """ def setUp(self): super(PrivsepFixture, self).setUp() self.useFixture(fixtures.MockPatchObject( nova.privsep.sys_admin_pctxt, 'client_mode', False)) class NoopQuotaDriverFixture(fixtures.Fixture): """A fixture to run tests using the NoopQuotaDriver. We can't simply set self.flags to the NoopQuotaDriver in tests to use the NoopQuotaDriver because the QuotaEngine object is global. Concurrently running tests will fail intermittently because they might get the NoopQuotaDriver globally when they expected the default DbQuotaDriver behavior. So instead, we can patch the _driver property of the QuotaEngine class on a per-test basis. """ def setUp(self): super(NoopQuotaDriverFixture, self).setUp() self.useFixture(fixtures.MonkeyPatch('nova.quota.QuotaEngine._driver', nova_quota.NoopQuotaDriver())) # Set the config option just so that code checking for the presence of # the NoopQuotaDriver setting will see it as expected. # For some reason, this does *not* work when TestCase.flags is used. # When using self.flags, the concurrent test failures returned. 
CONF.set_override('driver', 'nova.quota.NoopQuotaDriver', 'quota') self.addCleanup(CONF.clear_override, 'driver', 'quota') class DownCellFixture(fixtures.Fixture): """A fixture to simulate when a cell is down either due to error or timeout This fixture will stub out the scatter_gather_cells routine and target_cell used in various cells-related API operations like listing/showing server details to return a ``oslo_db.exception.DBError`` per cell in the results. Therefore it is best used with a test scenario like this: 1. Create a server successfully. 2. Using the fixture, list/show servers. Depending on the microversion used, the API should either return minimal results or by default skip the results from down cells. Example usage:: with nova_fixtures.DownCellFixture(): # List servers with down cells. self.api.get_servers() # Show a server in a down cell. self.api.get_server(server['id']) # List services with down cells. self.admin_api.api_get('/os-services') """ def __init__(self, down_cell_mappings=None): self.down_cell_mappings = down_cell_mappings def setUp(self): super(DownCellFixture, self).setUp() def stub_scatter_gather_cells(ctxt, cell_mappings, timeout, fn, *args, **kwargs): # Return a dict with an entry per cell mapping where the results # are some kind of exception. up_cell_mappings = objects.CellMappingList() if not self.down_cell_mappings: # User has not passed any down cells explicitly, so all cells # are considered as down cells. self.down_cell_mappings = cell_mappings else: # User has passed down cell mappings, so the rest of the cells # should be up meaning we should return the right results. # We assume that down cells will be a subset of the # cell_mappings. down_cell_uuids = [cell.uuid for cell in self.down_cell_mappings] up_cell_mappings.objects = [cell for cell in cell_mappings if cell.uuid not in down_cell_uuids] def wrap(cell_uuid, thing): # We should embed the cell_uuid into the context before # wrapping since its used to calcualte the cells_timed_out and # cells_failed properties in the object. ctxt.cell_uuid = cell_uuid return multi_cell_list.RecordWrapper(ctxt, sort_ctx, thing) if fn is multi_cell_list.query_wrapper: # If the function called through scatter-gather utility is the # multi_cell_list.query_wrapper, we should wrap the exception # object into the multi_cell_list.RecordWrapper. This is # because unlike the other functions where the exception object # is returned directly, the query_wrapper wraps this into the # RecordWrapper object format. So if we do not wrap it will # blow up at the point of generating results from heapq further # down the stack. sort_ctx = multi_cell_list.RecordSortContext([], []) ret1 = { cell_mapping.uuid: [wrap(cell_mapping.uuid, db_exc.DBError())] for cell_mapping in self.down_cell_mappings } else: ret1 = { cell_mapping.uuid: db_exc.DBError() for cell_mapping in self.down_cell_mappings } ret2 = {} for cell in up_cell_mappings: ctxt.cell_uuid = cell.uuid cctxt = context.RequestContext.from_dict(ctxt.to_dict()) context.set_target_cell(cctxt, cell) result = fn(cctxt, *args, **kwargs) ret2[cell.uuid] = result return dict(list(ret1.items()) + list(ret2.items())) @contextmanager def stub_target_cell(ctxt, cell_mapping): # This is to give the freedom to simulate down cells for each # individual cell targeted function calls. if not self.down_cell_mappings: # User has not passed any down cells explicitly, so all cells # are considered as down cells. 
self.down_cell_mappings = [cell_mapping] raise db_exc.DBError() else: # if down_cell_mappings are passed, then check if this cell # is down or up. down_cell_uuids = [cell.uuid for cell in self.down_cell_mappings] if cell_mapping.uuid in down_cell_uuids: # its a down cell raise the exception straight away raise db_exc.DBError() else: # its an up cell, so yield its context cctxt = context.RequestContext.from_dict(ctxt.to_dict()) context.set_target_cell(cctxt, cell_mapping) yield cctxt self.useFixture(fixtures.MonkeyPatch( 'nova.context.scatter_gather_cells', stub_scatter_gather_cells)) self.useFixture(fixtures.MonkeyPatch( 'nova.context.target_cell', stub_target_cell)) class AvailabilityZoneFixture(fixtures.Fixture): """Fixture to stub out the nova.availability_zones module The list of ``zones`` provided to the fixture are what get returned from ``get_availability_zones``. ``get_instance_availability_zone`` will return the availability_zone requested when creating a server otherwise the instance.availabilty_zone or default_availability_zone is returned. """ def __init__(self, zones): self.zones = zones def setUp(self): super(AvailabilityZoneFixture, self).setUp() def fake_get_availability_zones( ctxt, hostapi, get_only_available=False, with_hosts=False, services=None): # A 2-item tuple is returned if get_only_available=False. if not get_only_available: return self.zones, [] return self.zones self.useFixture(fixtures.MonkeyPatch( 'nova.availability_zones.get_availability_zones', fake_get_availability_zones)) def fake_get_instance_availability_zone(ctxt, instance): # If the server was created with a specific AZ, return it. reqspec = objects.RequestSpec.get_by_instance_uuid( ctxt, instance.uuid) requested_az = reqspec.availability_zone if requested_az: return requested_az # Otherwise return the instance.availability_zone if set else # the default AZ. return instance.availability_zone or CONF.default_availability_zone self.useFixture(fixtures.MonkeyPatch( 'nova.availability_zones.get_instance_availability_zone', fake_get_instance_availability_zone)) class KSAFixture(fixtures.Fixture): """Lets us initialize an openstack.connection.Connection by stubbing the auth plugin. """ def setUp(self): super(KSAFixture, self).setUp() self.mock_load_auth = self.useFixture(fixtures.MockPatch( 'keystoneauth1.loading.load_auth_from_conf_options')).mock self.mock_load_sess = self.useFixture(fixtures.MockPatch( 'keystoneauth1.loading.load_session_from_conf_options')).mock # For convenience, an attribute for the "Session" itself self.mock_session = self.mock_load_sess.return_value class OpenStackSDKFixture(fixtures.Fixture): # This satisfies tests that happen to run through get_sdk_adapter but don't # care about the adapter itself (default mocks are fine). # TODO(efried): Get rid of this and use fixtures from openstacksdk once # https://storyboard.openstack.org/#!/story/2005475 is resolved. 
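    # Illustrative usage (not part of the original source): a test that only
    # needs get_sdk_adapter() to succeed would typically just do
    #     self.useFixture(nova_fixtures.OpenStackSDKFixture())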
def setUp(self): super(OpenStackSDKFixture, self).setUp() self.useFixture(fixtures.MockPatch( 'openstack.proxy.Proxy.get_endpoint')) real_make_proxy = service_description.ServiceDescription._make_proxy _stub_service_types = {'placement'} def fake_make_proxy(self, instance): if self.service_type in _stub_service_types: return instance.config.get_session_client( self.service_type, allow_version_hack=True, ) return real_make_proxy(self, instance) self.useFixture(fixtures.MockPatchObject( service_description.ServiceDescription, '_make_proxy', fake_make_proxy)) class HostNameWeigher(weights.BaseHostWeigher): """Weigher to make the scheduler host selection deterministic. Note that this weigher is supposed to be used via HostNameWeigherFixture and will fail to instantiate if used without that fixture. """ def __init__(self): self.weights = self.get_weights() def get_weights(self): raise NotImplementedError() def _weigh_object(self, host_state, weight_properties): # Any unspecified host gets no weight. return self.weights.get(host_state.host, 0) class HostNameWeigherFixture(fixtures.Fixture): """Fixture to make the scheduler host selection deterministic. Note that this fixture needs to be used before the scheduler service is started as it changes the scheduler configuration. """ def __init__(self, weights=None): """Create the fixture :param weights: A dict of weights keyed by host names. Defaulted to {'host1': 100, 'host2': 50, 'host3': 10}" """ if weights: self.weights = weights else: # default weights good for most of the functional tests self.weights = {'host1': 100, 'host2': 50, 'host3': 10} def setUp(self): super(HostNameWeigherFixture, self).setUp() # Make sure that when the scheduler instantiate the HostNameWeigher it # is initialized with the weights that is configured in this fixture self.useFixture(fixtures.MockPatchObject( HostNameWeigher, 'get_weights', return_value=self.weights)) # Make sure that the scheduler loads the HostNameWeigher and only that self.useFixture(ConfPatcher( weight_classes=[__name__ + '.HostNameWeigher'], group='filter_scheduler')) def _get_device_profile(dp_name, trait): dp = [ {'name': dp_name, 'uuid': 'cbec22f3-ac29-444e-b4bb-98509f32faae', 'groups': [{ 'resources:FPGA': '1', 'trait:' + trait: 'required', }], # Skipping links key in Cyborg API return value } ] return dp def get_arqs(dp_name): arq = { 'uuid': 'b59d34d3-787b-4fb0-a6b9-019cd81172f8', 'device_profile_name': dp_name, 'device_profile_group_id': 0, 'state': 'Initial', 'device_rp_uuid': None, 'hostname': None, 'instance_uuid': None, 'attach_handle_info': {}, 'attach_handle_type': '', } bound_arq = copy.deepcopy(arq) bound_arq.update( {'state': 'Bound', 'attach_handle_type': 'TEST_PCI', 'attach_handle_info': { 'bus': '0c', 'device': '0', 'domain': '0000', 'function': '0' }, }) return [arq], [bound_arq] class CyborgFixture(fixtures.Fixture): """Fixture that mocks Cyborg APIs used by nova/accelerator/cyborg.py""" dp_name = 'fakedev-dp' trait = 'CUSTOM_FAKE_DEVICE' arq_list, bound_arq_list = get_arqs(dp_name) # NOTE(Sundar): The bindings passed to the fake_bind_arqs() from the # conductor are indexed by ARQ UUID and include the host name, device # RP UUID and instance UUID. (See params to fake_bind_arqs below.) # # Later, when the compute manager calls fake_get_arqs_for_instance() with # the instance UUID, the returned ARQs must contain the host name and # device RP UUID. But these can vary from test to test. 
# # So, fake_bind_arqs() below takes bindings indexed by ARQ UUID and # converts them to bindings indexed by instance UUID, which are then # stored in the dict below. This dict looks like: # { $instance_uuid: [ # {'hostname': $hostname, # 'device_rp_uuid': $device_rp_uuid, # 'arq_uuid': $arq_uuid # } # ] # } # Since it is indexed by instance UUID, and that is presumably unique # across concurrently executing tests, this should be safe for # concurrent access. bindings_by_instance = {} @staticmethod def fake_bind_arqs(bindings): """Simulate Cyborg ARQ bindings. Since Nova calls Cyborg for binding on per-instance basis, the instance UUIDs would be the same for all ARQs in a single call. This function converts bindings indexed by ARQ UUID to bindings indexed by instance UUID, so that fake_get_arqs_for_instance can retrieve them later. :param bindings: { "$arq_uuid": { "hostname": STRING "device_rp_uuid": UUID "instance_uuid": UUID }, ... } :returns: None """ binding_by_instance = collections.defaultdict(list) for index, arq_uuid in enumerate(bindings): arq_binding = bindings[arq_uuid] # instance_uuid is same for all ARQs in a single call. instance_uuid = arq_binding['instance_uuid'] newbinding = { 'hostname': arq_binding['hostname'], 'device_rp_uuid': arq_binding['device_rp_uuid'], 'arq_uuid': arq_uuid, } binding_by_instance[instance_uuid].append(newbinding) CyborgFixture.bindings_by_instance.update(binding_by_instance) @staticmethod def fake_get_arqs_for_instance(instance_uuid, only_resolved=False): """Get list of bound ARQs for this instance. This function uses bindings indexed by instance UUID to populate the bound ARQ templates in CyborgFixture.bound_arq_list. """ arq_host_rp_list = CyborgFixture.bindings_by_instance[instance_uuid] # The above looks like: # [{'hostname': $hostname, # 'device_rp_uuid': $device_rp_uuid, # 'arq_uuid': $arq_uuid # }] bound_arq_list = copy.deepcopy(CyborgFixture.bound_arq_list) for arq in bound_arq_list: match = [(arq_host_rp['hostname'], arq_host_rp['device_rp_uuid'], instance_uuid) for arq_host_rp in arq_host_rp_list if arq_host_rp['arq_uuid'] == arq['uuid'] ] # Only 1 ARQ UUID would match, so len(match) == 1 arq['hostname'], arq['device_rp_uuid'], arq['instance_uuid'] = ( match[0][0], match[0][1], match[0][2]) return bound_arq_list @staticmethod def fake_delete_arqs_for_instance(instance_uuid): return None def setUp(self): super(CyborgFixture, self).setUp() self.mock_get_dp = self.useFixture(fixtures.MockPatch( 'nova.accelerator.cyborg._CyborgClient._get_device_profile_list', return_value=_get_device_profile(self.dp_name, self.trait))).mock self.mock_create_arqs = self.useFixture(fixtures.MockPatch( 'nova.accelerator.cyborg._CyborgClient._create_arqs', return_value=self.arq_list)).mock self.mock_bind_arqs = self.useFixture(fixtures.MockPatch( 'nova.accelerator.cyborg._CyborgClient.bind_arqs', side_effect=self.fake_bind_arqs)).mock self.mock_get_arqs = self.useFixture(fixtures.MockPatch( 'nova.accelerator.cyborg._CyborgClient.' 'get_arqs_for_instance', side_effect=self.fake_get_arqs_for_instance)).mock self.mock_del_arqs = self.useFixture(fixtures.MockPatch( 'nova.accelerator.cyborg._CyborgClient.' 
'delete_arqs_for_instance', side_effect=self.fake_delete_arqs_for_instance)).mock ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3904696 nova-21.2.4/nova/tests/functional/0000775000175000017500000000000000000000000017035 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/__init__.py0000664000175000017500000000146100000000000021150 0ustar00zuulzuul00000000000000# Copyright (c) 2011 Justin Santa Barbara # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ :mod:`functional` -- Nova functional tests ===================================================== .. automodule:: nova.tests.functional :platform: Unix """ import nova.monkey_patch # noqa ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3904696 nova-21.2.4/nova/tests/functional/api/0000775000175000017500000000000000000000000017606 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api/__init__.py0000664000175000017500000000133500000000000021721 0ustar00zuulzuul00000000000000# Copyright (c) 2011 Justin Santa Barbara # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ :mod:`api` -- OpenStack API client, for testing rather than production ================================= """ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api/client.py0000664000175000017500000005155500000000000021451 0ustar00zuulzuul00000000000000# Copyright (c) 2011 Justin Santa Barbara # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_log import log as logging from oslo_serialization import jsonutils import requests from six.moves.urllib import parse LOG = logging.getLogger(__name__) class APIResponse(object): """Decoded API Response This provides a decoded version of the Requests response which includes a json decoded body, far more convenient for testing that returned structures are correct, or for using parts of returned structures in tests. This class is a simple wrapper around dictionaries for API responses in tests. It includes extra attributes (status, content and headers) so that these can be inspected in addition to the decoded body. All json responses from Nova APIs are dictionary compatible, or blank, so other possible base classes are not needed. """ status = 200 """The HTTP status code as an int""" content = "" """The raw HTTP response body as a string""" body = {} """The decoded json body as a dictionary""" headers = {} """Response headers as a dictionary""" def __init__(self, response): """Construct an API response from a Requests response :param response: a ``requests`` library response """ super(APIResponse, self).__init__() self.status = response.status_code self.content = response.content if self.content: # The Compute API and Placement API handle error responses a bit # differently so we need to check the content-type header to # figure out what to do. content_type = response.headers.get('content-type') if 'application/json' in content_type: self.body = response.json() elif 'text/html' in content_type: self.body = response.text else: raise ValueError('Unexpected response content-type: %s' % content_type) self.headers = response.headers def __str__(self): # because __str__ falls back to __repr__ we can still use repr # on self but add in the other attributes. return "<Response body:%r, status_code:%s>" % (self.body, self.status) class OpenStackApiException(Exception): def __init__(self, message=None, response=None): self.response = response if not message: message = 'Unspecified error' if response: _status = response.status_code _body = response.content message = ('%(message)s\nStatus Code: %(_status)s\n' 'Body: %(_body)s' % {'message': message, '_status': _status, '_body': _body}) super(OpenStackApiException, self).__init__(message) class OpenStackApiAuthenticationException(OpenStackApiException): def __init__(self, response=None, message=None): if not message: message = "Authentication error" super(OpenStackApiAuthenticationException, self).__init__(message, response) class OpenStackApiAuthorizationException(OpenStackApiException): def __init__(self, response=None, message=None): if not message: message = "Authorization error" super(OpenStackApiAuthorizationException, self).__init__(message, response) class OpenStackApiNotFoundException(OpenStackApiException): def __init__(self, response=None, message=None): if not message: message = "Item not found" super(OpenStackApiNotFoundException, self).__init__(message, response) class TestOpenStackClient(object): """Simple OpenStack API Client. 
This is a really basic OpenStack API client that is under our control, so we can make changes / insert hooks for testing """ def __init__(self, auth_user, base_url, project_id=None): super(TestOpenStackClient, self).__init__() self.auth_user = auth_user self.base_url = base_url if project_id is None: self.project_id = "6f70656e737461636b20342065766572" else: self.project_id = project_id self.microversion = None def request(self, url, method='GET', body=None, headers=None): _headers = {'Content-Type': 'application/json'} _headers.update(headers or {}) response = requests.request(method, url, data=body, headers=_headers) return response def api_request(self, relative_uri, check_response_status=None, strip_version=False, **kwargs): base_uri = self.base_url if strip_version: # The base_uri is either http://%(host)s:%(port)s/%(api_version)s # or http://%(host)s:%(port)s/%(api_version)s/%(project_id)s # NOTE(efried): Using urlparse was not easier :) chunked = base_uri.split('/') base_uri = '/'.join(chunked[:3]) # Restore the project ID if present if len(chunked) == 5: base_uri += '/' + chunked[-1] full_uri = '%s/%s' % (base_uri, relative_uri) headers = kwargs.setdefault('headers', {}) if ('X-OpenStack-Nova-API-Version' in headers or 'OpenStack-API-Version' in headers): raise Exception('Microversion should be set via ' 'microversion attribute in API client.') elif self.microversion: headers['X-OpenStack-Nova-API-Version'] = self.microversion headers['OpenStack-API-Version'] = 'compute %s' % self.microversion headers.setdefault('X-Auth-User', self.auth_user) headers.setdefault('X-User-Id', self.auth_user) headers.setdefault('X-Auth-Project-Id', self.project_id) response = self.request(full_uri, **kwargs) http_status = response.status_code LOG.debug("%(relative_uri)s => code %(http_status)s", {'relative_uri': relative_uri, 'http_status': http_status}) if check_response_status: if http_status not in check_response_status: if http_status == 404: raise OpenStackApiNotFoundException(response=response) elif http_status == 401: raise OpenStackApiAuthorizationException(response=response) else: raise OpenStackApiException( message="Unexpected status code: %s" % response.text, response=response) return response def _decode_json(self, response): resp = APIResponse(status=response.status_code) if response.content: resp.body = jsonutils.loads(response.content) return resp def api_get(self, relative_uri, **kwargs): kwargs.setdefault('check_response_status', [200]) return APIResponse(self.api_request(relative_uri, **kwargs)) def api_post(self, relative_uri, body, **kwargs): kwargs['method'] = 'POST' if body: headers = kwargs.setdefault('headers', {}) headers['Content-Type'] = 'application/json' kwargs['body'] = jsonutils.dumps(body) kwargs.setdefault('check_response_status', [200, 201, 202, 204]) return APIResponse(self.api_request(relative_uri, **kwargs)) def api_put(self, relative_uri, body, **kwargs): kwargs['method'] = 'PUT' if body: headers = kwargs.setdefault('headers', {}) headers['Content-Type'] = 'application/json' kwargs['body'] = jsonutils.dumps(body) kwargs.setdefault('check_response_status', [200, 202, 204]) return APIResponse(self.api_request(relative_uri, **kwargs)) def api_delete(self, relative_uri, **kwargs): kwargs['method'] = 'DELETE' kwargs.setdefault('check_response_status', [200, 202, 204]) return APIResponse(self.api_request(relative_uri, **kwargs)) ##################################### # # Convenience methods # # The following are a set of convenience methods to get well known # 
resources, they can be helpful in setting up resources in # tests. All of these convenience methods throw exceptions if they # get a non 20x status code, so will appropriately abort tests if # they fail. # # They all return the most relevant part of their response body as # decoded data structure. # ##################################### def get_server(self, server_id): return self.api_get('/servers/%s' % server_id).body['server'] def get_servers(self, detail=True, search_opts=None): rel_url = '/servers/detail' if detail else '/servers' if search_opts is not None: qparams = {} for opt, val in search_opts.items(): qparams[opt] = val if qparams: query_string = "?%s" % parse.urlencode(qparams) rel_url += query_string return self.api_get(rel_url).body['servers'] def post_server(self, server): response = self.api_post('/servers', server).body if 'reservation_id' in response: return response else: return response['server'] def put_server(self, server_id, server): return self.api_put('/servers/%s' % server_id, server).body def post_server_action(self, server_id, data, **kwargs): return self.api_post( '/servers/%s/action' % server_id, data, **kwargs).body def delete_server(self, server_id): return self.api_delete('/servers/%s' % server_id) def force_down_service(self, host, binary, forced_down): req = { "host": host, "binary": binary, "forced_down": forced_down } return self.api_put('/os-services/force-down', req).body['service'] def get_image(self, image_id): return self.api_get('/images/%s' % image_id).body['image'] def get_images(self, detail=True): rel_url = '/images/detail' if detail else '/images' return self.api_get(rel_url).body['images'] def post_image(self, image): return self.api_post('/images', image).body['image'] def delete_image(self, image_id): return self.api_delete('/images/%s' % image_id) def put_image_meta_key(self, image_id, key, value): """Creates or updates a given image metadata key/value pair.""" req_body = { 'meta': { key: value } } return self.api_put('/images/%s/metadata/%s' % (image_id, key), req_body) def get_flavor(self, flavor_id): return self.api_get('/flavors/%s' % flavor_id).body['flavor'] def get_flavors(self, detail=True): rel_url = '/flavors/detail' if detail else '/flavors' return self.api_get(rel_url).body['flavors'] def post_flavor(self, flavor): return self.api_post('/flavors', flavor).body['flavor'] def delete_flavor(self, flavor_id): return self.api_delete('/flavors/%s' % flavor_id) def get_extra_specs(self, flavor_id): return self.api_get( '/flavors/%s/os-extra_specs' % flavor_id ).body['extra_specs'] def get_extra_spec(self, flavor_id, spec_id): return self.api_get( '/flavors/%s/os-extra_specs/%s' % (flavor_id, spec_id), ).body def post_extra_spec(self, flavor_id, body, **_params): url = '/flavors/%s/os-extra_specs' % flavor_id if _params: query_string = '?%s' % parse.urlencode(list(_params.items())) url += query_string return self.api_post(url, body) def put_extra_spec(self, flavor_id, spec_id, body, **_params): url = '/flavors/%s/os-extra_specs/%s' % (flavor_id, spec_id) if _params: query_string = '?%s' % parse.urlencode(list(_params.items())) url += query_string return self.api_put(url, body) def get_volume(self, volume_id): return self.api_get('/os-volumes/%s' % volume_id).body['volume'] def get_volumes(self, detail=True): rel_url = '/os-volumes/detail' if detail else '/os-volumes' return self.api_get(rel_url).body['volumes'] def post_volume(self, volume): return self.api_post('/os-volumes', volume).body['volume'] def delete_volume(self, volume_id): 
return self.api_delete('/os-volumes/%s' % volume_id) def get_snapshot(self, snap_id): return self.api_get('/os-snapshots/%s' % snap_id).body['snapshot'] def get_snapshots(self, detail=True): rel_url = '/os-snapshots/detail' if detail else '/os-snapshots' return self.api_get(rel_url).body['snapshots'] def post_snapshot(self, snapshot): return self.api_post('/os-snapshots', snapshot).body['snapshot'] def delete_snapshot(self, snap_id): return self.api_delete('/os-snapshots/%s' % snap_id) def get_server_volume(self, server_id, attachment_id): return self.api_get('/servers/%s/os-volume_attachments/%s' % (server_id, attachment_id) ).body['volumeAttachment'] def get_server_volumes(self, server_id): return self.api_get('/servers/%s/os-volume_attachments' % (server_id)).body['volumeAttachments'] def post_server_volume(self, server_id, volume_attachment): return self.api_post('/servers/%s/os-volume_attachments' % (server_id), volume_attachment ).body['volumeAttachment'] def put_server_volume(self, server_id, attachment_id, volume_id): return self.api_put('/servers/%s/os-volume_attachments/%s' % (server_id, attachment_id), {"volumeAttachment": {"volumeId": volume_id}}) def delete_server_volume(self, server_id, attachment_id): return self.api_delete('/servers/%s/os-volume_attachments/%s' % (server_id, attachment_id)) def post_server_metadata(self, server_id, metadata): post_body = {'metadata': {}} post_body['metadata'].update(metadata) return self.api_post('/servers/%s/metadata' % server_id, post_body).body['metadata'] def delete_server_metadata(self, server_id, key): return self.api_delete('/servers/%s/metadata/%s' % (server_id, key)) def get_server_groups(self, all_projects=None): if all_projects: return self.api_get( '/os-server-groups?all_projects').body['server_groups'] else: return self.api_get('/os-server-groups').body['server_groups'] def get_server_group(self, group_id): return self.api_get('/os-server-groups/%s' % group_id).body['server_group'] def post_server_groups(self, group): response = self.api_post('/os-server-groups', {"server_group": group}) return response.body['server_group'] def delete_server_group(self, group_id): self.api_delete('/os-server-groups/%s' % group_id) def create_server_external_events(self, events): body = {'events': events} return self.api_post('/os-server-external-events', body).body['events'] def get_instance_actions(self, server_id): return self.api_get('/servers/%s/os-instance-actions' % (server_id)).body['instanceActions'] def get_instance_action_details(self, server_id, request_id): return self.api_get('/servers/%s/os-instance-actions/%s' % (server_id, request_id)).body['instanceAction'] def post_aggregate(self, aggregate): return self.api_post('/os-aggregates', aggregate).body['aggregate'] def delete_aggregate(self, aggregate_id): self.api_delete('/os-aggregates/%s' % aggregate_id) def add_host_to_aggregate(self, aggregate_id, host): return self.api_post('/os-aggregates/%s/action' % aggregate_id, {'add_host': {'host': host}}) def remove_host_from_aggregate(self, aggregate_id, host): return self.api_post('/os-aggregates/%s/action' % aggregate_id, {'remove_host': {'host': host}}) def get_limits(self): return self.api_get('/limits').body['limits'] def get_server_tags(self, server_id): """Get the tags on the given server. 
:param server_id: The server uuid :return: The list of tags from the response """ return self.api_get('/servers/%s/tags' % server_id).body['tags'] def put_server_tags(self, server_id, tags): """Put (or replace) a list of tags on the given server. Returns the list of tags from the response. """ return self.api_put('/servers/%s/tags' % server_id, {'tags': tags}).body['tags'] def get_port_interfaces(self, server_id): return self.api_get('/servers/%s/os-interface' % (server_id)).body['interfaceAttachments'] def attach_interface(self, server_id, post): return self.api_post('/servers/%s/os-interface' % server_id, post) def detach_interface(self, server_id, port_id): return self.api_delete('/servers/%s/os-interface/%s' % (server_id, port_id)) def get_services(self, binary=None, host=None): url = '/os-services?' if binary: url += 'binary=%s&' % binary if host: url += 'host=%s&' % host return self.api_get(url).body['services'] def put_service(self, service_id, req): return self.api_put( '/os-services/%s' % service_id, req).body['service'] def post_keypair(self, keypair): return self.api_post('/os-keypairs', keypair).body['keypair'] def delete_keypair(self, keypair_name): self.api_delete('/os-keypairs/%s' % keypair_name) def post_aggregate_action(self, aggregate_id, body): return self.api_post( '/os-aggregates/%s/action' % aggregate_id, body).body['aggregate'] def get_active_migrations(self, server_id): return self.api_get('/servers/%s/migrations' % server_id).body['migrations'] def get_migrations(self, user_id=None, project_id=None): url = '/os-migrations?' if user_id: url += 'user_id=%s&' % user_id if project_id: url += 'project_id=%s&' % project_id return self.api_get(url).body['migrations'] def force_complete_migration(self, server_id, migration_id): return self.api_post( '/servers/%s/migrations/%s/action' % (server_id, migration_id), {'force_complete': None}) def delete_migration(self, server_id, migration_id): return self.api_delete( '/servers/%s/migrations/%s' % (server_id, migration_id)) def put_aggregate(self, aggregate_id, body): return self.api_put( '/os-aggregates/%s' % aggregate_id, body).body['aggregate'] def get_hypervisor_stats(self): return self.api_get( '/os-hypervisors/statistics').body['hypervisor_statistics'] def get_service_id(self, binary_name): for service in self.get_services(): if service['binary'] == binary_name: return service['id'] raise OpenStackApiNotFoundException('Service cannot be found.') def put_service_force_down(self, service_id, forced_down): req = { 'forced_down': forced_down } return self.api_put('os-services/%s' % service_id, req).body['service'] def get_server_diagnostics(self, server_id): return self.api_get('/servers/%s/diagnostics' % server_id).body def get_quota_detail(self, project_id=None, user_id=None): if not project_id: project_id = self.project_id url = '/os-quota-sets/%s/detail' if user_id: url += '?user_id=%s' % user_id return self.api_get(url % project_id).body['quota_set'] def update_quota(self, quotas, project_id=None, user_id=None): if not project_id: project_id = self.project_id url = '/os-quota-sets/%s' if user_id: url += '?user_id=%s' % user_id body = {'quota_set': {}} body['quota_set'].update(quotas) return self.api_put(url % project_id, body).body['quota_set'] ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3904696 nova-21.2.4/nova/tests/functional/api/openstack/0000775000175000017500000000000000000000000021575 
5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api/openstack/__init__.py0000664000175000017500000000000000000000000023674 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_paste_fixture.py0000664000175000017500000000423300000000000023124 0ustar00zuulzuul00000000000000# Copyright 2015 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from __future__ import absolute_import import os import fixtures import nova.conf from nova.conf import paths CONF = nova.conf.CONF class ApiPasteV21Fixture(fixtures.Fixture): def _replace_line(self, target_file, line): # TODO(johnthetubaguy) should really point the tests at /v2.1 target_file.write(line.replace( "/v2: openstack_compute_api_v21_legacy_v2_compatible", "/v2: openstack_compute_api_v21")) def setUp(self): super(ApiPasteV21Fixture, self).setUp() CONF.set_default('api_paste_config', paths.state_path_def('etc/nova/api-paste.ini'), group='wsgi') tmp_api_paste_dir = self.useFixture(fixtures.TempDir()) tmp_api_paste_file_name = os.path.join(tmp_api_paste_dir.path, 'fake_api_paste.ini') with open(CONF.wsgi.api_paste_config, 'r') as orig_api_paste: with open(tmp_api_paste_file_name, 'w') as tmp_file: for line in orig_api_paste: self._replace_line(tmp_file, line) CONF.set_override('api_paste_config', tmp_api_paste_file_name, group='wsgi') class ApiPasteNoProjectId(ApiPasteV21Fixture): def _replace_line(self, target_file, line): line = line.replace( "paste.filter_factory = nova.api.openstack.auth:" "NoAuthMiddleware.factory", "paste.filter_factory = nova.api.openstack.auth:" "NoAuthMiddlewareV2_18.factory") target_file.write(line) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3984694 nova-21.2.4/nova/tests/functional/api_sample_tests/0000775000175000017500000000000000000000000022371 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/README.rst0000664000175000017500000000166400000000000024067 0ustar00zuulzuul00000000000000Api Samples =========== This part of the tree contains templates for API samples. The documentation in doc/api_samples is completely autogenerated from the tests in this directory. To add a new api sample, add tests for the common passing and failing cases in this directory for your extension, and modify test_samples.py for your tests. Then run the following command: tox -e api-samples Which will create the files on doc/api_samples. If new tests are added or the .tpl files are changed due to bug fixes, the samples must be regenerated so they are in sync with the templates, as there is an additional test which reloads the documentation and ensures that it's in sync. 
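As an illustration only (the module, class and sample names below are hypothetical, and the ``_do_get`` / ``_verify_response`` helpers come from the shared ``ApiSampleTestBase`` machinery that ``ApiSampleTestBaseV21`` builds on), a minimal sample test is typically a small subclass that points at its template directory and verifies a response against a named template::

    # test_example.py (illustrative name, placed in this directory)
    from nova.tests.functional.api_sample_tests import api_sample_base


    class ExampleSampleJsonTest(api_sample_base.ApiSampleTestBaseV21):
        # Directory holding this extension's .tpl templates (and the
        # generated .json samples) under api_samples/ and doc/api_samples/.
        sample_dir = 'example'

        def test_example_get(self):
            # GET the (hypothetical) resource and check the response body
            # against the 'example-get-resp' template/sample pair.
            response = self._do_get('example')
            self._verify_response('example-get-resp', {}, response, 200)
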
Debugging sample generation --------------------------- If a .tpl is changed, its matching .json must be removed else the samples won't be generated. If an entirely new extension is added, a directory for it must be created before its samples will be generated. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/__init__.py0000664000175000017500000000000000000000000024470 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_sample_base.py0000664000175000017500000001240200000000000026046 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import testscenarios import nova.conf from nova.tests import fixtures from nova.tests.functional import api_paste_fixture from nova.tests.functional import api_samples_test_base CONF = nova.conf.CONF # API samples heavily uses testscenarios. This allows us to use the # same tests, with slight variations in configuration to ensure our # various ways of calling the API are compatible. Testscenarios works # through the class level ``scenarios`` variable. It is an array of # tuples where the first value in each tuple is an arbitrary name for # the scenario (should be unique), and the second item is a dictionary # of attributes to change in the class for the test. # # By default we're running scenarios for 2 situations # # - Hitting the default /v2 endpoint with the v2.1 Compatibility stack # # - Hitting the default /v2.1 endpoint # # Things we need to set: # # - api_major_version - what version of the API we should be hitting # # - microversion - what API microversion should be used # # - _additional_fixtures - any additional fixtures need # # NOTE(sdague): if you want to build a test that only tests specific # microversions, then replace the ``scenarios`` class variable in that # test class with something like: # # [("v2_11", {'api_major_version': 'v2.1', 'microversion': '2.11'})] class ApiSampleTestBaseV21(testscenarios.WithScenarios, api_samples_test_base.ApiSampleTestBase): SUPPORTS_CELLS = False api_major_version = 'v2' # any additional fixtures needed for this scenario _additional_fixtures = [] sample_dir = None # Include the project ID in request URLs by default. This is overridden # for certain `scenarios` and by certain subclasses. # Note that API sample tests also use this in substitutions to validate # that URLs in responses (e.g. location of a server just created) are # correctly constructed. USE_PROJECT_ID = True # Availability zones for the API samples tests. Can be overridden by # sub-classes. If set, the AvailabilityZoneFilter is not used. 
availability_zones = ['us-west'] scenarios = [ # test v2 with the v2.1 compatibility stack ('v2', { 'api_major_version': 'v2'}), # test v2.1 base microversion ('v2_1', { 'api_major_version': 'v2.1'}), # test v2.18 code without project id ('v2_1_noproject_id', { 'api_major_version': 'v2.1', 'USE_PROJECT_ID': False, '_additional_fixtures': [ api_paste_fixture.ApiPasteNoProjectId]}) ] def setUp(self): self.flags(glance_link_prefix=self._get_glance_host(), compute_link_prefix=self._get_host(), group='api') # load any additional fixtures specified by the scenario for fix in self._additional_fixtures: self.useFixture(fix()) if not self.SUPPORTS_CELLS: # NOTE(danms): Disable base automatic DB (and cells) config self.USES_DB = False self.USES_DB_SELF = True # super class call is delayed here so that we have the right # paste and conf before loading all the services, as we can't # change these later. super(ApiSampleTestBaseV21, self).setUp() if not self.SUPPORTS_CELLS: self.useFixture(fixtures.Database()) self.useFixture(fixtures.Database(database='api')) self.useFixture(fixtures.DefaultFlavorsFixture()) self.useFixture(fixtures.SingleCellSimple()) super(ApiSampleTestBaseV21, self)._setup_services() self.useFixture(fixtures.SpawnIsSynchronousFixture()) # this is used to generate sample docs self.generate_samples = os.getenv('GENERATE_SAMPLES') is not None if self.availability_zones: self.useFixture( fixtures.AvailabilityZoneFixture(self.availability_zones)) def _setup_services(self): pass def _setup_scheduler_service(self): """Overrides _IntegratedTestBase._setup_scheduler_service to filter out the AvailabilityZoneFilter prior to starting the scheduler. """ if self.availability_zones: # The test is using fake zones so disable the # AvailabilityZoneFilter which is otherwise enabled by default. 
enabled_filters = CONF.filter_scheduler.enabled_filters if 'AvailabilityZoneFilter' in enabled_filters: enabled_filters.remove('AvailabilityZoneFilter') self.flags(enabled_filters=enabled_filters, group='filter_scheduler') return super(ApiSampleTestBaseV21, self)._setup_scheduler_service() ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3984694 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/0000775000175000017500000000000000000000000024666 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3984694 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/extension-info/0000775000175000017500000000000000000000000027633 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/extension-info/extensions-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/extension-info/extensions-get-resp.js0000664000175000017500000000040400000000000034112 0ustar00zuulzuul00000000000000{ "extension": { "alias": "os-agents", "description": "Agents support.", "links": [], "name": "Agents", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" } }././@PaxHeader0000000000000000000000000000023200000000000011452 xustar0000000000000000132 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/extension-info/extensions-list-resp-v21-compatible.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/extension-info/extensions-list-resp-v0000664000175000017500000007632200000000000034152 0ustar00zuulzuul00000000000000{ "extensions": [ { "alias": "NMN", "description": "Multiple network support.", "links": [], "name": "Multinic", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-DCF", "description": "Disk Management Extension.", "links": [], "name": "DiskConfig", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-AZ", "description": "Extended Availability Zone support.", "links": [], "name": "ExtendedAvailabilityZone", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IMG-SIZE", "description": "Adds image size to image listings.", "links": [], "name": "ImageSize", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IPS", "description": "Adds type parameter to the ip list.", "links": [], "name": "ExtendedIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IPS-MAC", "description": "Adds mac address parameter to the ip list.", "links": [], "name": "ExtendedIpsMac", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-SRV-ATTR", "description": "Extended Server Attributes support.", "links": [], "name": "ExtendedServerAttributes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-STS", "description": "Extended Status support.", "links": [], "name": "ExtendedStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", 
"updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-VIF-NET", "description": "Adds network id parameter to the virtual interface list.", "links": [], "name": "ExtendedVIFNet", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-FLV-DISABLED", "description": "Support to show the disabled status of a flavor.", "links": [], "name": "FlavorDisabled", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-FLV-EXT-DATA", "description": "Provide additional data for flavors.", "links": [], "name": "FlavorExtraData", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-SCH-HNT", "description": "Pass arbitrary key/value pairs to the scheduler.", "links": [], "name": "SchedulerHints", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-SRV-USG", "description": "Adds launched_at and terminated_at on Servers.", "links": [], "name": "ServerUsage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-access-ips", "description": "Access IPs support.", "links": [], "name": "AccessIPs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-admin-actions", "description": "Enable admin-only server actions\n\n Actions include: resetNetwork, injectNetworkInfo, os-resetState\n ", "links": [], "name": "AdminActions", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-admin-password", "description": "Admin password management support.", "links": [], "name": "AdminPassword", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-agents", "description": "Agents support.", "links": [], "name": "Agents", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-aggregates", "description": "Admin-only aggregate administration.", "links": [], "name": "Aggregates", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-assisted-volume-snapshots", "description": "Assisted volume snapshots.", "links": [], "name": "AssistedVolumeSnapshots", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-attach-interfaces", "description": "Attach interface support.", "links": [], "name": "AttachInterfaces", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-availability-zone", "description": "1. Add availability_zone to the Create Server API.\n 2. 
Add availability zones describing.\n ", "links": [], "name": "AvailabilityZone", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-baremetal-ext-status", "description": "Add extended status in Baremetal Nodes v2 API.", "links": [], "name": "BareMetalExtStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-baremetal-nodes", "description": "Admin-only bare-metal node administration.", "links": [], "name": "BareMetalNodes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-block-device-mapping", "description": "Block device mapping boot support.", "links": [], "name": "BlockDeviceMapping", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-block-device-mapping-v2-boot", "description": "Allow boot with the new BDM data format.", "links": [], "name": "BlockDeviceMappingV2Boot", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cell-capacities", "description": "Adding functionality to get cell capacities.", "links": [], "name": "CellCapacities", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cells", "description": "Enables cells-related functionality such as adding neighbor cells,\n listing neighbor cells, and getting the capabilities of the local cell.\n ", "links": [], "name": "Cells", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-certificates", "description": "Certificates support.", "links": [], "name": "Certificates", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cloudpipe", "description": "Adds actions to create cloudpipe instances.\n\n When running with the Vlan network mode, you need a mechanism to route\n from the public Internet to your vlans. This mechanism is known as a\n cloudpipe.\n\n At the time of creating this class, only OpenVPN is supported. 
Support for\n a SSH Bastion host is forthcoming.\n ", "links": [], "name": "Cloudpipe", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cloudpipe-update", "description": "Adds the ability to set the vpn ip/port for cloudpipe instances.", "links": [], "name": "CloudpipeUpdate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-config-drive", "description": "Config Drive Extension.", "links": [], "name": "ConfigDrive", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-console-auth-tokens", "description": "Console token authentication support.", "links": [], "name": "ConsoleAuthTokens", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-console-output", "description": "Console log output support, with tailing ability.", "links": [], "name": "ConsoleOutput", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-consoles", "description": "Interactive Console support.", "links": [], "name": "Consoles", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-create-backup", "description": "Create a backup of a server.", "links": [], "name": "CreateBackup", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-create-server-ext", "description": "Extended support to the Create Server v1.1 API.", "links": [], "name": "Createserverext", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-deferred-delete", "description": "Instance deferred delete.", "links": [], "name": "DeferredDelete", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-evacuate", "description": "Enables server evacuation.", "links": [], "name": "Evacuate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-evacuate-find-host", "description": "Enables server evacuation without target host. 
Scheduler will select one to target.", "links": [], "name": "ExtendedEvacuateFindHost", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-floating-ips", "description": "Adds optional fixed_address to the add floating IP command.", "links": [], "name": "ExtendedFloatingIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-hypervisors", "description": "Extended hypervisors support.", "links": [], "name": "ExtendedHypervisors", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-networks", "description": "Adds additional fields to networks.", "links": [], "name": "ExtendedNetworks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-quotas", "description": "Adds ability for admins to delete quota and optionally force the update Quota command.", "links": [], "name": "ExtendedQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-rescue-with-image", "description": "Allow the user to specify the image to use for rescue.", "links": [], "name": "ExtendedRescueWithImage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-services", "description": "Extended services support.", "links": [], "name": "ExtendedServices", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-services-delete", "description": "Extended services deletion support.", "links": [], "name": "ExtendedServicesDelete", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-status", "description": "Extended Status support.", "links": [], "name": "ExtendedStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-volumes", "description": "Extended Volumes support.", "links": [], "name": "ExtendedVolumes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-fixed-ips", "description": "Fixed IPs support.", "links": [], "name": "FixedIPs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-access", "description": "Flavor access support.", "links": [], "name": "FlavorAccess", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-extra-specs", "description": "Flavors extra specs support.", "links": [], "name": "FlavorExtraSpecs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-manage", "description": "Flavor create/delete API support.", "links": [], "name": "FlavorManage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-rxtx", "description": "Support to show the rxtx status of a flavor.", "links": [], "name": "FlavorRxtx", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-swap", "description": "Support to show the swap status of a flavor.", "links": [], "name": "FlavorSwap", 
"namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ip-dns", "description": "Floating IP DNS support.", "links": [], "name": "FloatingIpDns", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ip-pools", "description": "Floating IPs support.", "links": [], "name": "FloatingIpPools", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ips", "description": "Floating IPs support.", "links": [], "name": "FloatingIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ips-bulk", "description": "Bulk handling of Floating IPs.", "links": [], "name": "FloatingIpsBulk", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-fping", "description": "Fping Management Extension.", "links": [], "name": "Fping", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hide-server-addresses", "description": "Support hiding server addresses in certain states.", "links": [], "name": "HideServerAddresses", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hosts", "description": "Admin-only host administration.", "links": [], "name": "Hosts", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hypervisor-status", "description": "Show hypervisor status.", "links": [], "name": "HypervisorStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hypervisors", "description": "Admin-only hypervisor administration.", "links": [], "name": "Hypervisors", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-instance-actions", "description": "View a log of actions and events taken on an instance.", "links": [], "name": "InstanceActions", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-instance_usage_audit_log", "description": "Admin-only Task Log Monitoring.", "links": [], "name": "OSInstanceUsageAuditLog", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-keypairs", "description": "Keypair Support.", "links": [], "name": "Keypairs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-lock-server", "description": "Enable lock/unlock server actions.", "links": [], "name": "LockServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-migrate-server", "description": "Enable migrate and live-migrate server actions.", "links": [], "name": "MigrateServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-migrations", "description": "Provide data on migrations.", "links": [], "name": "Migrations", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-multiple-create", "description": "Allow multiple create in the Create Server v2.1 API.", "links": [], "name": "MultipleCreate", "namespace": 
"http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-networks", "description": "Admin-only Network Management Extension.", "links": [], "name": "Networks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-networks-associate", "description": "Network association support.", "links": [], "name": "NetworkAssociationSupport", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-pause-server", "description": "Enable pause/unpause server actions.", "links": [], "name": "PauseServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-personality", "description": "Personality support.", "links": [], "name": "Personality", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-preserve-ephemeral-rebuild", "description": "Allow preservation of the ephemeral partition on rebuild.", "links": [], "name": "PreserveEphemeralOnRebuild", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-quota-class-sets", "description": "Quota classes management support.", "links": [], "name": "QuotaClasses", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-quota-sets", "description": "Quotas management support.", "links": [], "name": "Quotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-rescue", "description": "Instance rescue mode.", "links": [], "name": "Rescue", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-security-group-default-rules", "description": "Default rules for security group support.", "links": [], "name": "SecurityGroupDefaultRules", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-security-groups", "description": "Security group support.", "links": [], "name": "SecurityGroups", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-diagnostics", "description": "Allow Admins to view server diagnostics through server action.", "links": [], "name": "ServerDiagnostics", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-external-events", "description": "Server External Event Triggers.", "links": [], "name": "ServerExternalEvents", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-group-quotas", "description": "Adds quota support to server groups.", "links": [], "name": "ServerGroupQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-groups", "description": "Server group support.", "links": [], "name": "ServerGroups", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-list-multi-status", "description": "Allow to filter the servers by a set of status values.", "links": [], "name": "ServerListMultiStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": 
"os-server-password", "description": "Server password support.", "links": [], "name": "ServerPassword", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-sort-keys", "description": "Add sorting support in get Server v2 API.", "links": [], "name": "ServerSortKeys", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-start-stop", "description": "Start/Stop instance compute API support.", "links": [], "name": "ServerStartStop", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-services", "description": "Services support.", "links": [], "name": "Services", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-shelve", "description": "Instance shelve mode.", "links": [], "name": "Shelve", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-simple-tenant-usage", "description": "Simple tenant usage extension.", "links": [], "name": "SimpleTenantUsage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-suspend-server", "description": "Enable suspend/resume server actions.", "links": [], "name": "SuspendServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-tenant-networks", "description": "Tenant-based Network Management Extension.", "links": [], "name": "OSTenantNetworks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-used-limits", "description": "Provide data on limited resources that are being used.", "links": [], "name": "UsedLimits", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-used-limits-for-admin", "description": "Provide data to admin on limited resources used by other tenants.", "links": [], "name": "UsedLimitsForAdmin", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-user-data", "description": "Add user_data to the Create Server API.", "links": [], "name": "UserData", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-user-quotas", "description": "Project user quota support.", "links": [], "name": "UserQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-virtual-interfaces", "description": "Virtual interface support.", "links": [], "name": "VirtualInterfaces", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-volume-attachment-update", "description": "Support for updating a volume attachment.", "links": [], "name": "VolumeAttachmentUpdate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-volumes", "description": "Volumes support.", "links": [], "name": "Volumes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" } ] } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/extension-info/extensions-list-resp.json.tpl 22 
mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/extension-info/extensions-list-resp.j0000664000175000017500000007560700000000000034144 0ustar00zuulzuul00000000000000{ "extensions": [ { "alias": "NMN", "description": "Multiple network support.", "links": [], "name": "Multinic", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-DCF", "description": "Disk Management Extension.", "links": [], "name": "DiskConfig", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-AZ", "description": "Extended Availability Zone support.", "links": [], "name": "ExtendedAvailabilityZone", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IMG-SIZE", "description": "Adds image size to image listings.", "links": [], "name": "ImageSize", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IPS", "description": "Adds type parameter to the ip list.", "links": [], "name": "ExtendedIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IPS-MAC", "description": "Adds mac address parameter to the ip list.", "links": [], "name": "ExtendedIpsMac", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-SRV-ATTR", "description": "Extended Server Attributes support.", "links": [], "name": "ExtendedServerAttributes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-STS", "description": "Extended Status support.", "links": [], "name": "ExtendedStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-FLV-DISABLED", "description": "Support to show the disabled status of a flavor.", "links": [], "name": "FlavorDisabled", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-FLV-EXT-DATA", "description": "Provide additional data for flavors.", "links": [], "name": "FlavorExtraData", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-SCH-HNT", "description": "Pass arbitrary key/value pairs to the scheduler.", "links": [], "name": "SchedulerHints", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-SRV-USG", "description": "Adds launched_at and terminated_at on Servers.", "links": [], "name": "ServerUsage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-access-ips", "description": "Access IPs support.", "links": [], "name": "AccessIPs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-admin-actions", "description": "Enable admin-only server actions\n\n Actions include: resetNetwork, injectNetworkInfo, os-resetState\n ", "links": [], "name": "AdminActions", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-admin-password", "description": "Admin password management support.", "links": [], "name": "AdminPassword", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": 
"2014-12-03T00:00:00Z" }, { "alias": "os-agents", "description": "Agents support.", "links": [], "name": "Agents", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-aggregates", "description": "Admin-only aggregate administration.", "links": [], "name": "Aggregates", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-assisted-volume-snapshots", "description": "Assisted volume snapshots.", "links": [], "name": "AssistedVolumeSnapshots", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-attach-interfaces", "description": "Attach interface support.", "links": [], "name": "AttachInterfaces", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-availability-zone", "description": "1. Add availability_zone to the Create Server API.\n 2. Add availability zones describing.\n ", "links": [], "name": "AvailabilityZone", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-baremetal-ext-status", "description": "Add extended status in Baremetal Nodes v2 API.", "links": [], "name": "BareMetalExtStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-baremetal-nodes", "description": "Admin-only bare-metal node administration.", "links": [], "name": "BareMetalNodes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-block-device-mapping", "description": "Block device mapping boot support.", "links": [], "name": "BlockDeviceMapping", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-block-device-mapping-v2-boot", "description": "Allow boot with the new BDM data format.", "links": [], "name": "BlockDeviceMappingV2Boot", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cell-capacities", "description": "Adding functionality to get cell capacities.", "links": [], "name": "CellCapacities", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cells", "description": "Enables cells-related functionality such as adding neighbor cells,\n listing neighbor cells, and getting the capabilities of the local cell.\n ", "links": [], "name": "Cells", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-certificates", "description": "Certificates support.", "links": [], "name": "Certificates", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cloudpipe", "description": "Adds actions to create cloudpipe instances.\n\n When running with the Vlan network mode, you need a mechanism to route\n from the public Internet to your vlans. This mechanism is known as a\n cloudpipe.\n\n At the time of creating this class, only OpenVPN is supported. 
Support for\n a SSH Bastion host is forthcoming.\n ", "links": [], "name": "Cloudpipe", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cloudpipe-update", "description": "Adds the ability to set the vpn ip/port for cloudpipe instances.", "links": [], "name": "CloudpipeUpdate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-config-drive", "description": "Config Drive Extension.", "links": [], "name": "ConfigDrive", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-console-auth-tokens", "description": "Console token authentication support.", "links": [], "name": "ConsoleAuthTokens", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-console-output", "description": "Console log output support, with tailing ability.", "links": [], "name": "ConsoleOutput", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-consoles", "description": "Interactive Console support.", "links": [], "name": "Consoles", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-create-backup", "description": "Create a backup of a server.", "links": [], "name": "CreateBackup", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-create-server-ext", "description": "Extended support to the Create Server v1.1 API.", "links": [], "name": "Createserverext", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-deferred-delete", "description": "Instance deferred delete.", "links": [], "name": "DeferredDelete", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-evacuate", "description": "Enables server evacuation.", "links": [], "name": "Evacuate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-evacuate-find-host", "description": "Enables server evacuation without target host. 
Scheduler will select one to target.", "links": [], "name": "ExtendedEvacuateFindHost", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-floating-ips", "description": "Adds optional fixed_address to the add floating IP command.", "links": [], "name": "ExtendedFloatingIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-hypervisors", "description": "Extended hypervisors support.", "links": [], "name": "ExtendedHypervisors", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-networks", "description": "Adds additional fields to networks.", "links": [], "name": "ExtendedNetworks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-quotas", "description": "Adds ability for admins to delete quota and optionally force the update Quota command.", "links": [], "name": "ExtendedQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-rescue-with-image", "description": "Allow the user to specify the image to use for rescue.", "links": [], "name": "ExtendedRescueWithImage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-services", "description": "Extended services support.", "links": [], "name": "ExtendedServices", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-services-delete", "description": "Extended services deletion support.", "links": [], "name": "ExtendedServicesDelete", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-status", "description": "Extended Status support.", "links": [], "name": "ExtendedStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-volumes", "description": "Extended Volumes support.", "links": [], "name": "ExtendedVolumes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-fixed-ips", "description": "Fixed IPs support.", "links": [], "name": "FixedIPs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-access", "description": "Flavor access support.", "links": [], "name": "FlavorAccess", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-extra-specs", "description": "Flavors extra specs support.", "links": [], "name": "FlavorExtraSpecs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-manage", "description": "Flavor create/delete API support.", "links": [], "name": "FlavorManage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-rxtx", "description": "Support to show the rxtx status of a flavor.", "links": [], "name": "FlavorRxtx", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-swap", "description": "Support to show the swap status of a flavor.", "links": [], "name": "FlavorSwap", 
"namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ip-dns", "description": "Floating IP DNS support.", "links": [], "name": "FloatingIpDns", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ip-pools", "description": "Floating IPs support.", "links": [], "name": "FloatingIpPools", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ips", "description": "Floating IPs support.", "links": [], "name": "FloatingIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ips-bulk", "description": "Bulk handling of Floating IPs.", "links": [], "name": "FloatingIpsBulk", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-fping", "description": "Fping Management Extension.", "links": [], "name": "Fping", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hide-server-addresses", "description": "Support hiding server addresses in certain states.", "links": [], "name": "HideServerAddresses", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hosts", "description": "Admin-only host administration.", "links": [], "name": "Hosts", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hypervisor-status", "description": "Show hypervisor status.", "links": [], "name": "HypervisorStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hypervisors", "description": "Admin-only hypervisor administration.", "links": [], "name": "Hypervisors", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-instance-actions", "description": "View a log of actions and events taken on an instance.", "links": [], "name": "InstanceActions", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-instance_usage_audit_log", "description": "Admin-only Task Log Monitoring.", "links": [], "name": "OSInstanceUsageAuditLog", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-keypairs", "description": "Keypair Support.", "links": [], "name": "Keypairs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-lock-server", "description": "Enable lock/unlock server actions.", "links": [], "name": "LockServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-migrate-server", "description": "Enable migrate and live-migrate server actions.", "links": [], "name": "MigrateServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-migrations", "description": "Provide data on migrations.", "links": [], "name": "Migrations", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-multiple-create", "description": "Allow multiple create in the Create Server v2.1 API.", "links": [], "name": "MultipleCreate", "namespace": 
"http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-networks", "description": "Admin-only Network Management Extension.", "links": [], "name": "Networks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-networks-associate", "description": "Network association support.", "links": [], "name": "NetworkAssociationSupport", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-pause-server", "description": "Enable pause/unpause server actions.", "links": [], "name": "PauseServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-personality", "description": "Personality support.", "links": [], "name": "Personality", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-preserve-ephemeral-rebuild", "description": "Allow preservation of the ephemeral partition on rebuild.", "links": [], "name": "PreserveEphemeralOnRebuild", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-quota-class-sets", "description": "Quota classes management support.", "links": [], "name": "QuotaClasses", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-quota-sets", "description": "Quotas management support.", "links": [], "name": "Quotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-rescue", "description": "Instance rescue mode.", "links": [], "name": "Rescue", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-security-group-default-rules", "description": "Default rules for security group support.", "links": [], "name": "SecurityGroupDefaultRules", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-security-groups", "description": "Security group support.", "links": [], "name": "SecurityGroups", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-diagnostics", "description": "Allow Admins to view server diagnostics through server action.", "links": [], "name": "ServerDiagnostics", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-external-events", "description": "Server External Event Triggers.", "links": [], "name": "ServerExternalEvents", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-group-quotas", "description": "Adds quota support to server groups.", "links": [], "name": "ServerGroupQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-groups", "description": "Server group support.", "links": [], "name": "ServerGroups", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-list-multi-status", "description": "Allow to filter the servers by a set of status values.", "links": [], "name": "ServerListMultiStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": 
"os-server-password", "description": "Server password support.", "links": [], "name": "ServerPassword", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-sort-keys", "description": "Add sorting support in get Server v2 API.", "links": [], "name": "ServerSortKeys", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-start-stop", "description": "Start/Stop instance compute API support.", "links": [], "name": "ServerStartStop", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-services", "description": "Services support.", "links": [], "name": "Services", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-shelve", "description": "Instance shelve mode.", "links": [], "name": "Shelve", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-simple-tenant-usage", "description": "Simple tenant usage extension.", "links": [], "name": "SimpleTenantUsage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-suspend-server", "description": "Enable suspend/resume server actions.", "links": [], "name": "SuspendServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-tenant-networks", "description": "Tenant-based Network Management Extension.", "links": [], "name": "OSTenantNetworks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-used-limits", "description": "Provide data on limited resources that are being used.", "links": [], "name": "UsedLimits", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-used-limits-for-admin", "description": "Provide data to admin on limited resources used by other tenants.", "links": [], "name": "UsedLimitsForAdmin", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-user-data", "description": "Add user_data to the Create Server API.", "links": [], "name": "UserData", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-user-quotas", "description": "Project user quota support.", "links": [], "name": "UserQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-virtual-interfaces", "description": "Virtual interface support.", "links": [], "name": "VirtualInterfaces", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-volume-attachment-update", "description": "Support for updating a volume attachment.", "links": [], "name": "VolumeAttachmentUpdate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-volumes", "description": "Volumes support.", "links": [], "name": "Volumes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" } ] } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/extensions-list-resp-v21-compatible.json.tpl 22 
mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/extensions-list-resp-v21-compatible.j0000664000175000017500000007374500000000000033723 0ustar00zuulzuul00000000000000{ "extensions": [ { "alias": "NMN", "description": "Multiple network support.", "links": [], "name": "Multinic", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-DCF", "description": "Disk Management Extension.", "links": [], "name": "DiskConfig", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-AZ", "description": "Extended Availability Zone support.", "links": [], "name": "ExtendedAvailabilityZone", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IMG-SIZE", "description": "Adds image size to image listings.", "links": [], "name": "ImageSize", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IPS", "description": "", "links": [], "name": "ExtendedIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IPS-MAC", "description": "", "links": [], "name": "ExtendedIpsMac", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-SRV-ATTR", "description": "Extended Server Attributes support.", "links": [], "name": "ExtendedServerAttributes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-STS", "description": "", "links": [], "name": "ExtendedStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-VIF-NET", "description": "", "links": [], "name": "ExtendedVIFNet", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-FLV-DISABLED", "description": "", "links": [], "name": "FlavorDisabled", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-FLV-EXT-DATA", "description": "", "links": [], "name": "FlavorExtraData", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-SCH-HNT", "description": "Pass arbitrary key/value pairs to the scheduler.", "links": [], "name": "SchedulerHints", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-SRV-USG", "description": "Adds launched_at and terminated_at on Servers.", "links": [], "name": "ServerUsage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-access-ips", "description": "Access IPs support.", "links": [], "name": "AccessIPs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-admin-actions", "description": "Enable admin-only server actions\n\n Actions include: resetNetwork, injectNetworkInfo, os-resetState\n ", "links": [], "name": "AdminActions", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-admin-password", "description": "Admin password management support.", "links": [], "name": "AdminPassword", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": 
"2014-12-03T00:00:00Z" }, { "alias": "os-agents", "description": "Agents support.", "links": [], "name": "Agents", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-aggregates", "description": "Admin-only aggregate administration.", "links": [], "name": "Aggregates", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-assisted-volume-snapshots", "description": "Assisted volume snapshots.", "links": [], "name": "AssistedVolumeSnapshots", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-attach-interfaces", "description": "Attach interface support.", "links": [], "name": "AttachInterfaces", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-availability-zone", "description": "1. Add availability_zone to the Create Server API.\n 2. Add availability zones describing.\n ", "links": [], "name": "AvailabilityZone", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-baremetal-ext-status", "description": "", "links": [], "name": "BareMetalExtStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-baremetal-nodes", "description": "Admin-only bare-metal node administration.", "links": [], "name": "BareMetalNodes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-block-device-mapping", "description": "Block device mapping boot support.", "links": [], "name": "BlockDeviceMapping", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-block-device-mapping-v2-boot", "description": "", "links": [], "name": "BlockDeviceMappingV2Boot", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cell-capacities", "description": "", "links": [], "name": "CellCapacities", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cells", "description": "Enables cells-related functionality such as adding neighbor cells,\n listing neighbor cells, and getting the capabilities of the local cell.\n ", "links": [], "name": "Cells", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-certificates", "description": "Certificates support.", "links": [], "name": "Certificates", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cloudpipe", "description": "Adds actions to create cloudpipe instances.\n\n When running with the Vlan network mode, you need a mechanism to route\n from the public Internet to your vlans. This mechanism is known as a\n cloudpipe.\n\n At the time of creating this class, only OpenVPN is supported. 
Support for\n a SSH Bastion host is forthcoming.\n ", "links": [], "name": "Cloudpipe", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cloudpipe-update", "description": "", "links": [], "name": "CloudpipeUpdate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-config-drive", "description": "Config Drive Extension.", "links": [], "name": "ConfigDrive", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-console-auth-tokens", "description": "Console token authentication support.", "links": [], "name": "ConsoleAuthTokens", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-console-output", "description": "Console log output support, with tailing ability.", "links": [], "name": "ConsoleOutput", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-consoles", "description": "Interactive Console support.", "links": [], "name": "Consoles", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-create-backup", "description": "Create a backup of a server.", "links": [], "name": "CreateBackup", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-create-server-ext", "description": "", "links": [], "name": "Createserverext", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-deferred-delete", "description": "Instance deferred delete.", "links": [], "name": "DeferredDelete", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-evacuate", "description": "Enables server evacuation.", "links": [], "name": "Evacuate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-evacuate-find-host", "description": "", "links": [], "name": "ExtendedEvacuateFindHost", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-floating-ips", "description": "", "links": [], "name": "ExtendedFloatingIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-hypervisors", "description": "", "links": [], "name": "ExtendedHypervisors", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-networks", "description": "", "links": [], "name": "ExtendedNetworks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-quotas", "description": "", "links": [], "name": "ExtendedQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-rescue-with-image", "description": "", "links": [], "name": "ExtendedRescueWithImage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-services", "description": "", "links": [], "name": "ExtendedServices", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-services-delete", 
"description": "", "links": [], "name": "ExtendedServicesDelete", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-status", "description": "Extended Status support.", "links": [], "name": "ExtendedStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-volumes", "description": "Extended Volumes support.", "links": [], "name": "ExtendedVolumes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-fixed-ips", "description": "Fixed IPs support.", "links": [], "name": "FixedIPs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-access", "description": "Flavor access support.", "links": [], "name": "FlavorAccess", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-extra-specs", "description": "Flavors extra specs support.", "links": [], "name": "FlavorExtraSpecs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-manage", "description": "Flavor create/delete API support.", "links": [], "name": "FlavorManage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-rxtx", "description": "Support to show the rxtx status of a flavor.", "links": [], "name": "FlavorRxtx", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-swap", "description": "", "links": [], "name": "FlavorSwap", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ip-dns", "description": "Floating IP DNS support.", "links": [], "name": "FloatingIpDns", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ip-pools", "description": "Floating IPs support.", "links": [], "name": "FloatingIpPools", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ips", "description": "Floating IPs support.", "links": [], "name": "FloatingIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ips-bulk", "description": "Bulk handling of Floating IPs.", "links": [], "name": "FloatingIpsBulk", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-fping", "description": "Fping Management Extension.", "links": [], "name": "Fping", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hide-server-addresses", "description": "Support hiding server addresses in certain states.", "links": [], "name": "HideServerAddresses", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hosts", "description": "Admin-only host administration.", "links": [], "name": "Hosts", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hypervisor-status", "description": "", "links": [], "name": "HypervisorStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": 
"2014-12-03T00:00:00Z" }, { "alias": "os-hypervisors", "description": "Admin-only hypervisor administration.", "links": [], "name": "Hypervisors", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-instance-actions", "description": "View a log of actions and events taken on an instance.", "links": [], "name": "InstanceActions", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-instance_usage_audit_log", "description": "Admin-only Task Log Monitoring.", "links": [], "name": "OSInstanceUsageAuditLog", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-keypairs", "description": "Keypair Support.", "links": [], "name": "Keypairs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-lock-server", "description": "Enable lock/unlock server actions.", "links": [], "name": "LockServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-migrate-server", "description": "Enable migrate and live-migrate server actions.", "links": [], "name": "MigrateServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-migrations", "description": "Provide data on migrations.", "links": [], "name": "Migrations", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-multiple-create", "description": "Allow multiple create in the Create Server v2.1 API.", "links": [], "name": "MultipleCreate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-networks", "description": "Admin-only Network Management Extension.", "links": [], "name": "Networks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-networks-associate", "description": "Network association support.", "links": [], "name": "NetworkAssociationSupport", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-pause-server", "description": "Enable pause/unpause server actions.", "links": [], "name": "PauseServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-personality", "description": "Personality support.", "links": [], "name": "Personality", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-preserve-ephemeral-rebuild", "description": "Allow preservation of the ephemeral partition on rebuild.", "links": [], "name": "PreserveEphemeralOnRebuild", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-quota-class-sets", "description": "Quota classes management support.", "links": [], "name": "QuotaClasses", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-quota-sets", "description": "Quotas management support.", "links": [], "name": "Quotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-rescue", "description": "Instance rescue mode.", "links": [], "name": "Rescue", "namespace": 
"http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-security-group-default-rules", "description": "Default rules for security group support.", "links": [], "name": "SecurityGroupDefaultRules", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-security-groups", "description": "Security group support.", "links": [], "name": "SecurityGroups", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-diagnostics", "description": "Allow Admins to view server diagnostics through server action.", "links": [], "name": "ServerDiagnostics", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-external-events", "description": "Server External Event Triggers.", "links": [], "name": "ServerExternalEvents", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-group-quotas", "description": "", "links": [], "name": "ServerGroupQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-groups", "description": "Server group support.", "links": [], "name": "ServerGroups", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-list-multi-status", "description": "", "links": [], "name": "ServerListMultiStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-password", "description": "Server password support.", "links": [], "name": "ServerPassword", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-sort-keys", "description": "", "links": [], "name": "ServerSortKeys", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-start-stop", "description": "", "links": [], "name": "ServerStartStop", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-services", "description": "Services support.", "links": [], "name": "Services", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-shelve", "description": "Instance shelve mode.", "links": [], "name": "Shelve", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-simple-tenant-usage", "description": "Simple tenant usage extension.", "links": [], "name": "SimpleTenantUsage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-suspend-server", "description": "Enable suspend/resume server actions.", "links": [], "name": "SuspendServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-tenant-networks", "description": "Tenant-based Network Management Extension.", "links": [], "name": "OSTenantNetworks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-used-limits", "description": "Provide data on limited resources that are being used.", "links": [], "name": "UsedLimits", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": 
"2014-12-03T00:00:00Z" }, { "alias": "os-used-limits-for-admin", "description": "", "links": [], "name": "UsedLimitsForAdmin", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-user-data", "description": "Add user_data to the Create Server API.", "links": [], "name": "UserData", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-user-quotas", "description": "", "links": [], "name": "UserQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-virtual-interfaces", "description": "Virtual interface support.", "links": [], "name": "VirtualInterfaces", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-volume-attachment-update", "description": "", "links": [], "name": "VolumeAttachmentUpdate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-volumes", "description": "Volumes support.", "links": [], "name": "Volumes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" } ] }././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/extensions-list-resp.json.tpl0000664000175000017500000007332200000000000032505 0ustar00zuulzuul00000000000000{ "extensions": [ { "alias": "NMN", "description": "Multiple network support.", "links": [], "name": "Multinic", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-DCF", "description": "Disk Management Extension.", "links": [], "name": "DiskConfig", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-AZ", "description": "Extended Availability Zone support.", "links": [], "name": "ExtendedAvailabilityZone", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IMG-SIZE", "description": "Adds image size to image listings.", "links": [], "name": "ImageSize", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IPS", "description": "", "links": [], "name": "ExtendedIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IPS-MAC", "description": "", "links": [], "name": "ExtendedIpsMac", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-SRV-ATTR", "description": "Extended Server Attributes support.", "links": [], "name": "ExtendedServerAttributes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-STS", "description": "", "links": [], "name": "ExtendedStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-FLV-DISABLED", "description": "", "links": [], "name": "FlavorDisabled", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-FLV-EXT-DATA", "description": "", "links": [], "name": "FlavorExtraData", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { 
"alias": "OS-SCH-HNT", "description": "Pass arbitrary key/value pairs to the scheduler.", "links": [], "name": "SchedulerHints", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-SRV-USG", "description": "Adds launched_at and terminated_at on Servers.", "links": [], "name": "ServerUsage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-access-ips", "description": "Access IPs support.", "links": [], "name": "AccessIPs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-admin-actions", "description": "Enable admin-only server actions\n\n Actions include: resetNetwork, injectNetworkInfo, os-resetState\n ", "links": [], "name": "AdminActions", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-admin-password", "description": "Admin password management support.", "links": [], "name": "AdminPassword", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-agents", "description": "Agents support.", "links": [], "name": "Agents", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-aggregates", "description": "Admin-only aggregate administration.", "links": [], "name": "Aggregates", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-assisted-volume-snapshots", "description": "Assisted volume snapshots.", "links": [], "name": "AssistedVolumeSnapshots", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-attach-interfaces", "description": "Attach interface support.", "links": [], "name": "AttachInterfaces", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-availability-zone", "description": "1. Add availability_zone to the Create Server API.\n 2. 
Add availability zones describing.\n ", "links": [], "name": "AvailabilityZone", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-baremetal-ext-status", "description": "", "links": [], "name": "BareMetalExtStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-baremetal-nodes", "description": "Admin-only bare-metal node administration.", "links": [], "name": "BareMetalNodes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-block-device-mapping", "description": "Block device mapping boot support.", "links": [], "name": "BlockDeviceMapping", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-block-device-mapping-v2-boot", "description": "", "links": [], "name": "BlockDeviceMappingV2Boot", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cell-capacities", "description": "", "links": [], "name": "CellCapacities", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cells", "description": "Enables cells-related functionality such as adding neighbor cells,\n listing neighbor cells, and getting the capabilities of the local cell.\n ", "links": [], "name": "Cells", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-certificates", "description": "Certificates support.", "links": [], "name": "Certificates", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cloudpipe", "description": "Adds actions to create cloudpipe instances.\n\n When running with the Vlan network mode, you need a mechanism to route\n from the public Internet to your vlans. This mechanism is known as a\n cloudpipe.\n\n At the time of creating this class, only OpenVPN is supported. 
Support for\n a SSH Bastion host is forthcoming.\n ", "links": [], "name": "Cloudpipe", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cloudpipe-update", "description": "", "links": [], "name": "CloudpipeUpdate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-config-drive", "description": "Config Drive Extension.", "links": [], "name": "ConfigDrive", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-console-auth-tokens", "description": "Console token authentication support.", "links": [], "name": "ConsoleAuthTokens", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-console-output", "description": "Console log output support, with tailing ability.", "links": [], "name": "ConsoleOutput", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-consoles", "description": "Interactive Console support.", "links": [], "name": "Consoles", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-create-backup", "description": "Create a backup of a server.", "links": [], "name": "CreateBackup", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-create-server-ext", "description": "", "links": [], "name": "Createserverext", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-deferred-delete", "description": "Instance deferred delete.", "links": [], "name": "DeferredDelete", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-evacuate", "description": "Enables server evacuation.", "links": [], "name": "Evacuate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-evacuate-find-host", "description": "", "links": [], "name": "ExtendedEvacuateFindHost", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-floating-ips", "description": "", "links": [], "name": "ExtendedFloatingIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-hypervisors", "description": "", "links": [], "name": "ExtendedHypervisors", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-networks", "description": "", "links": [], "name": "ExtendedNetworks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-quotas", "description": "", "links": [], "name": "ExtendedQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-rescue-with-image", "description": "", "links": [], "name": "ExtendedRescueWithImage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-services", "description": "", "links": [], "name": "ExtendedServices", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-services-delete", 
"description": "", "links": [], "name": "ExtendedServicesDelete", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-status", "description": "Extended Status support.", "links": [], "name": "ExtendedStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-volumes", "description": "Extended Volumes support.", "links": [], "name": "ExtendedVolumes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-fixed-ips", "description": "Fixed IPs support.", "links": [], "name": "FixedIPs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-access", "description": "Flavor access support.", "links": [], "name": "FlavorAccess", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-extra-specs", "description": "Flavors extra specs support.", "links": [], "name": "FlavorExtraSpecs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-manage", "description": "Flavor create/delete API support.", "links": [], "name": "FlavorManage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-rxtx", "description": "Support to show the rxtx status of a flavor.", "links": [], "name": "FlavorRxtx", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-swap", "description": "", "links": [], "name": "FlavorSwap", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ip-dns", "description": "Floating IP DNS support.", "links": [], "name": "FloatingIpDns", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ip-pools", "description": "Floating IPs support.", "links": [], "name": "FloatingIpPools", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ips", "description": "Floating IPs support.", "links": [], "name": "FloatingIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ips-bulk", "description": "Bulk handling of Floating IPs.", "links": [], "name": "FloatingIpsBulk", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-fping", "description": "Fping Management Extension.", "links": [], "name": "Fping", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hide-server-addresses", "description": "Support hiding server addresses in certain states.", "links": [], "name": "HideServerAddresses", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hosts", "description": "Admin-only host administration.", "links": [], "name": "Hosts", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hypervisor-status", "description": "", "links": [], "name": "HypervisorStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": 
"2014-12-03T00:00:00Z" }, { "alias": "os-hypervisors", "description": "Admin-only hypervisor administration.", "links": [], "name": "Hypervisors", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-instance-actions", "description": "View a log of actions and events taken on an instance.", "links": [], "name": "InstanceActions", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-instance_usage_audit_log", "description": "Admin-only Task Log Monitoring.", "links": [], "name": "OSInstanceUsageAuditLog", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-keypairs", "description": "Keypair Support.", "links": [], "name": "Keypairs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-lock-server", "description": "Enable lock/unlock server actions.", "links": [], "name": "LockServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-migrate-server", "description": "Enable migrate and live-migrate server actions.", "links": [], "name": "MigrateServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-migrations", "description": "Provide data on migrations.", "links": [], "name": "Migrations", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-multiple-create", "description": "Allow multiple create in the Create Server v2.1 API.", "links": [], "name": "MultipleCreate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-networks", "description": "Admin-only Network Management Extension.", "links": [], "name": "Networks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-networks-associate", "description": "Network association support.", "links": [], "name": "NetworkAssociationSupport", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-pause-server", "description": "Enable pause/unpause server actions.", "links": [], "name": "PauseServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-personality", "description": "Personality support.", "links": [], "name": "Personality", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-preserve-ephemeral-rebuild", "description": "Allow preservation of the ephemeral partition on rebuild.", "links": [], "name": "PreserveEphemeralOnRebuild", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-quota-class-sets", "description": "Quota classes management support.", "links": [], "name": "QuotaClasses", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-quota-sets", "description": "Quotas management support.", "links": [], "name": "Quotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-rescue", "description": "Instance rescue mode.", "links": [], "name": "Rescue", "namespace": 
"http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-security-group-default-rules", "description": "Default rules for security group support.", "links": [], "name": "SecurityGroupDefaultRules", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-security-groups", "description": "Security group support.", "links": [], "name": "SecurityGroups", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-diagnostics", "description": "Allow Admins to view server diagnostics through server action.", "links": [], "name": "ServerDiagnostics", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-external-events", "description": "Server External Event Triggers.", "links": [], "name": "ServerExternalEvents", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-group-quotas", "description": "", "links": [], "name": "ServerGroupQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-groups", "description": "Server group support.", "links": [], "name": "ServerGroups", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-list-multi-status", "description": "", "links": [], "name": "ServerListMultiStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-password", "description": "Server password support.", "links": [], "name": "ServerPassword", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-sort-keys", "description": "", "links": [], "name": "ServerSortKeys", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-start-stop", "description": "", "links": [], "name": "ServerStartStop", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-services", "description": "Services support.", "links": [], "name": "Services", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-shelve", "description": "Instance shelve mode.", "links": [], "name": "Shelve", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-simple-tenant-usage", "description": "Simple tenant usage extension.", "links": [], "name": "SimpleTenantUsage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-suspend-server", "description": "Enable suspend/resume server actions.", "links": [], "name": "SuspendServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-tenant-networks", "description": "Tenant-based Network Management Extension.", "links": [], "name": "OSTenantNetworks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-used-limits", "description": "Provide data on limited resources that are being used.", "links": [], "name": "UsedLimits", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": 
"2014-12-03T00:00:00Z" }, { "alias": "os-used-limits-for-admin", "description": "", "links": [], "name": "UsedLimitsForAdmin", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-user-data", "description": "Add user_data to the Create Server API.", "links": [], "name": "UserData", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-user-quotas", "description": "", "links": [], "name": "UserQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-virtual-interfaces", "description": "Virtual interface support.", "links": [], "name": "VirtualInterfaces", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-volume-attachment-update", "description": "", "links": [], "name": "VolumeAttachmentUpdate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-volumes", "description": "Volumes support.", "links": [], "name": "Volumes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3984694 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-access/0000775000175000017500000000000000000000000027416 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-access/flavor-access-add-tenant-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-access/flavor-access-add-tenan0000664000175000017500000000010500000000000033716 0ustar00zuulzuul00000000000000{ "addTenantAccess": { "tenant": "%(tenant_id)s" } } ././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-access/flavor-access-add-tenant-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-access/flavor-access-add-tenan0000664000175000017500000000021000000000000033713 0ustar00zuulzuul00000000000000{ "flavor_access": [ { "flavor_id": "%(flavor_id)s", "tenant_id": "%(tenant_id)s" } ] } ././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-access/flavor-access-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-access/flavor-access-list-resp0000664000175000017500000000020600000000000034007 0ustar00zuulzuul00000000000000{ "flavor_access": [ { "flavor_id": "%(flavor_id)s", "tenant_id": "fake_tenant" } ] } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-access/flavor-access-remove-tenant-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-access/flavor-access-remove-te0000664000175000017500000000011000000000000033762 0ustar00zuulzuul00000000000000{ "removeTenantAccess": { "tenant": "%(tenant_id)s" } } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 
path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-access/flavor-access-remove-tenant-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-access/flavor-access-remove-te0000664000175000017500000000003300000000000033766 0ustar00zuulzuul00000000000000{ "flavor_access": [] }././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-access/flavor-create-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-access/flavor-create-req.json.0000664000175000017500000000030500000000000033704 0ustar00zuulzuul00000000000000{ "flavor": { "name": "%(flavor_name)s", "ram": 1024, "vcpus": 2, "disk": 10, "id": "%(flavor_id)s", "os-flavor-access:is_public": false } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3984694 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-access/v2.7/0000775000175000017500000000000000000000000030112 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-access/v2.7/flavor-access-add-tenant-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-access/v2.7/flavor-access-add-0000664000175000017500000000010500000000000033364 0ustar00zuulzuul00000000000000{ "addTenantAccess": { "tenant": "%(tenant_id)s" } } ././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-access/v2.7/flavor-create-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-access/v2.7/flavor-create-req.0000664000175000017500000000030400000000000033427 0ustar00zuulzuul00000000000000{ "flavor": { "name": "%(flavor_name)s", "ram": 1024, "vcpus": 2, "disk": 10, "id": "%(flavor_id)s", "os-flavor-access:is_public": true } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.3984694 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-extra-specs/0000775000175000017500000000000000000000000030413 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000023000000000000011450 xustar0000000000000000130 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-extra-specs/flavor-extra-specs-create-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-extra-specs/flavor-extra-specs0000664000175000017500000000015400000000000034063 0ustar00zuulzuul00000000000000{ "extra_specs": { "hw:cpu_policy": "%(value1)s", "hw:numa_nodes": "%(value2)s" } } ././@PaxHeader0000000000000000000000000000023100000000000011451 xustar0000000000000000131 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-extra-specs/flavor-extra-specs-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-extra-specs/flavor-extra-specs0000664000175000017500000000015400000000000034063 0ustar00zuulzuul00000000000000{ "extra_specs": { "hw:cpu_policy": "%(value1)s", "hw:numa_nodes": "%(value2)s" } } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 
path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-extra-specs/flavor-extra-specs-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-extra-specs/flavor-extra-specs0000664000175000017500000000004600000000000034063 0ustar00zuulzuul00000000000000{ "hw:numa_nodes": "%(value1)s" } ././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-extra-specs/flavor-extra-specs-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-extra-specs/flavor-extra-specs0000664000175000017500000000015400000000000034063 0ustar00zuulzuul00000000000000{ "extra_specs": { "hw:cpu_policy": "%(value1)s", "hw:numa_nodes": "%(value2)s" } } ././@PaxHeader0000000000000000000000000000023000000000000011450 xustar0000000000000000130 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-extra-specs/flavor-extra-specs-update-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-extra-specs/flavor-extra-specs0000664000175000017500000000004600000000000034063 0ustar00zuulzuul00000000000000{ "hw:numa_nodes": "%(value1)s" } ././@PaxHeader0000000000000000000000000000023100000000000011451 xustar0000000000000000131 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-extra-specs/flavor-extra-specs-update-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-extra-specs/flavor-extra-specs0000664000175000017500000000004600000000000034063 0ustar00zuulzuul00000000000000{ "hw:numa_nodes": "%(value1)s" } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4024694 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/0000775000175000017500000000000000000000000027405 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/flavor-create-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/flavor-create-post-req.0000664000175000017500000000026400000000000033712 0ustar00zuulzuul00000000000000{ "flavor": { "name": "%(flavor_name)s", "ram": 1024, "vcpus": 2, "disk": 10, "id": "%(flavor_id)s", "rxtx_factor": 2.0 } } ././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/flavor-create-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/flavor-create-post-resp0000664000175000017500000000116400000000000034016 0ustar00zuulzuul00000000000000{ "flavor": { "disk": 10, "id": "%(flavor_id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/%(flavor_id)s", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/%(flavor_id)s", "rel": "bookmark" } ], "name": "%(flavor_name)s", "os-flavor-access:is_public": true, "ram": 1024, "vcpus": 2, "OS-FLV-DISABLED:disabled": false, "OS-FLV-EXT-DATA:ephemeral": 0, "swap": "", "rxtx_factor": 2.0 } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4024694 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.55/0000775000175000017500000000000000000000000030164 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.55/flavor-create-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.55/flavor-create-pos0000664000175000017500000000033700000000000033443 0ustar00zuulzuul00000000000000{ "flavor": { "name": "%(flavor_name)s", "ram": 1024, "vcpus": 2, "disk": 10, "id": "%(flavor_id)s", "rxtx_factor": 2.0, "description": "test description" } } ././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.55/flavor-create-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.55/flavor-create-pos0000664000175000017500000000123700000000000033443 0ustar00zuulzuul00000000000000{ "flavor": { "disk": 10, "id": "%(flavor_id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/%(flavor_id)s", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/%(flavor_id)s", "rel": "bookmark" } ], "name": "%(flavor_name)s", "os-flavor-access:is_public": true, "ram": 1024, "vcpus": 2, "OS-FLV-DISABLED:disabled": false, "OS-FLV-EXT-DATA:ephemeral": 0, "swap": "", "rxtx_factor": 2.0, "description": "test description" } } ././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.55/flavor-update-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.55/flavor-update-req0000664000175000017500000000010700000000000033443 0ustar00zuulzuul00000000000000{ "flavor": { "description": "updated description" } } ././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.55/flavor-update-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.55/flavor-update-res0000664000175000017500000000130100000000000033442 0ustar00zuulzuul00000000000000{ "flavor": { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "1", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/flavors/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny", "ram": 512, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": "updated description" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4024694 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.61/0000775000175000017500000000000000000000000030161 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.61/flavor-create-post-req.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.61/flavor-create-pos0000664000175000017500000000033700000000000033440 0ustar00zuulzuul00000000000000{ "flavor": { "name": "%(flavor_name)s", "ram": 1024, "vcpus": 2, "disk": 10, "id": "%(flavor_id)s", "rxtx_factor": 2.0, "description": "test description" } } ././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.61/flavor-create-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.61/flavor-create-pos0000664000175000017500000000127200000000000033437 0ustar00zuulzuul00000000000000{ "flavor": { "disk": 10, "id": "%(flavor_id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/%(flavor_id)s", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/%(flavor_id)s", "rel": "bookmark" } ], "name": "%(flavor_name)s", "os-flavor-access:is_public": true, "ram": 1024, "vcpus": 2, "OS-FLV-DISABLED:disabled": false, "OS-FLV-EXT-DATA:ephemeral": 0, "swap": "", "rxtx_factor": 2.0, "description": "test description", "extra_specs": {} } } ././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.61/flavor-update-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.61/flavor-update-req0000664000175000017500000000010700000000000033440 0ustar00zuulzuul00000000000000{ "flavor": { "description": "updated description" } } ././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.61/flavor-update-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.61/flavor-update-res0000664000175000017500000000133400000000000033445 0ustar00zuulzuul00000000000000{ "flavor": { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "1", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/flavors/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny", "ram": 512, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": "updated description", "extra_specs": {} } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4024694 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.75/0000775000175000017500000000000000000000000030166 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.75/flavor-create-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.75/flavor-create-pos0000664000175000017500000000033700000000000033445 0ustar00zuulzuul00000000000000{ "flavor": { "name": "%(flavor_name)s", "ram": 1024, "vcpus": 2, "disk": 10, "id": "%(flavor_id)s", "rxtx_factor": 2.0, "description": "test description" } } ././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 
path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.75/flavor-create-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.75/flavor-create-pos0000664000175000017500000000127100000000000033443 0ustar00zuulzuul00000000000000{ "flavor": { "disk": 10, "id": "%(flavor_id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/%(flavor_id)s", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/%(flavor_id)s", "rel": "bookmark" } ], "name": "%(flavor_name)s", "os-flavor-access:is_public": true, "ram": 1024, "vcpus": 2, "OS-FLV-DISABLED:disabled": false, "OS-FLV-EXT-DATA:ephemeral": 0, "swap": 0, "rxtx_factor": 2.0, "description": "test description", "extra_specs": {} } } ././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.75/flavor-update-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.75/flavor-update-req0000664000175000017500000000010700000000000033445 0ustar00zuulzuul00000000000000{ "flavor": { "description": "updated description" } } ././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.75/flavor-update-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavor-manage/v2.75/flavor-update-res0000664000175000017500000000133300000000000033451 0ustar00zuulzuul00000000000000{ "flavor": { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "os-flavor-access:is_public": true, "id": "1", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/flavors/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1, "rxtx_factor": 1.0, "description": "updated description", "extra_specs": {} } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4024694 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/0000775000175000017500000000000000000000000026342 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/flavor-get-resp.json.tpl0000664000175000017500000000110600000000000033046 0ustar00zuulzuul00000000000000{ "flavor": { "disk": 1, "id": "1", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/1", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny", "os-flavor-access:is_public": true, "ram": 512, "vcpus": 1, "OS-FLV-DISABLED:disabled": false, "OS-FLV-EXT-DATA:ephemeral": 0, "swap": "", "rxtx_factor": 1.0 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/flavors-detail-resp.json.tpl0000664000175000017500000000762600000000000033731 0ustar00zuulzuul00000000000000{ "flavors": [ { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "1", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/1", "rel": "self" }, { 
"href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny", "os-flavor-access:is_public": true, "ram": 512, "swap": "", "vcpus": 1, "rxtx_factor": 1.0 }, { "OS-FLV-DISABLED:disabled": false, "disk": 20, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "2", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/2", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/2", "rel": "bookmark" } ], "name": "m1.small", "os-flavor-access:is_public": true, "ram": 2048, "swap": "", "vcpus": 1, "rxtx_factor": 1.0 }, { "OS-FLV-DISABLED:disabled": false, "disk": 40, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "3", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/3", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/3", "rel": "bookmark" } ], "name": "m1.medium", "os-flavor-access:is_public": true, "ram": 4096, "swap": "", "vcpus": 2, "rxtx_factor": 1.0 }, { "OS-FLV-DISABLED:disabled": false, "disk": 80, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "4", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/4", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/4", "rel": "bookmark" } ], "name": "m1.large", "os-flavor-access:is_public": true, "ram": 8192, "swap": "", "vcpus": 4, "rxtx_factor": 1.0 }, { "OS-FLV-DISABLED:disabled": false, "disk": 160, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "5", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/5", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/5", "rel": "bookmark" } ], "name": "m1.xlarge", "os-flavor-access:is_public": true, "ram": 16384, "swap": "", "vcpus": 8, "rxtx_factor": 1.0 }, { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "6", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/6", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/6", "rel": "bookmark" } ], "name": "m1.tiny.specs", "os-flavor-access:is_public": true, "ram": 512, "swap": "", "vcpus": 1, "rxtx_factor": 1.0 } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/flavors-list-resp.json.tpl0000664000175000017500000000452000000000000033430 0ustar00zuulzuul00000000000000{ "flavors": [ { "id": "1", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/1", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny" }, { "id": "2", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/2", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/2", "rel": "bookmark" } ], "name": "m1.small" }, { "id": "3", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/3", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/3", "rel": "bookmark" } ], "name": "m1.medium" }, { "id": "4", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/4", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/4", "rel": "bookmark" } ], "name": "m1.large" }, { "id": "5", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/5", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/5", "rel": "bookmark" } ], "name": "m1.xlarge" }, { "id": "6", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/6", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/6", "rel": "bookmark" } ], "name": "m1.tiny.specs" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4024694 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/v2.55/0000775000175000017500000000000000000000000027121 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/v2.55/flavor-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/v2.55/flavor-get-resp.json.tp0000664000175000017500000000124100000000000033451 0ustar00zuulzuul00000000000000{ "flavor": { "OS-FLV-DISABLED:disabled": false, "disk": 20, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "%(flavorid)s", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/%(flavorid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/%(flavorid)s", "rel": "bookmark" } ], "name": "m1.small.description", "os-flavor-access:is_public": true, "ram": 2048, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": "test description" } } ././@PaxHeader0000000000000000000000000000021100000000000011447 xustar0000000000000000115 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/v2.55/flavors-detail-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/v2.55/flavors-detail-resp.jso0000664000175000017500000001151400000000000033523 0ustar00zuulzuul00000000000000{ "flavors": [ { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "1", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/1", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny", "os-flavor-access:is_public": true, "ram": 512, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": null }, { "OS-FLV-DISABLED:disabled": false, "disk": 20, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "2", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/2", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/2", "rel": "bookmark" } ], "name": "m1.small", "os-flavor-access:is_public": true, "ram": 2048, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": null }, { "OS-FLV-DISABLED:disabled": false, "disk": 40, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "3", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/3", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/3", "rel": "bookmark" } ], "name": "m1.medium", "os-flavor-access:is_public": true, "ram": 4096, "swap": "", "vcpus": 2, "rxtx_factor": 1.0, "description": null }, { "OS-FLV-DISABLED:disabled": false, "disk": 80, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "4", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/4", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/4", "rel": "bookmark" } ], "name": "m1.large", "os-flavor-access:is_public": true, "ram": 8192, "swap": "", "vcpus": 4, "rxtx_factor": 1.0, "description": null }, { "OS-FLV-DISABLED:disabled": false, "disk": 160, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "5", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/5", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/5", "rel": "bookmark" } ], "name": "m1.xlarge", "os-flavor-access:is_public": true, "ram": 16384, "swap": "", "vcpus": 8, "rxtx_factor": 1.0, "description": null }, { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "6", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/6", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/6", "rel": "bookmark" } ], 
"name": "m1.tiny.specs", "os-flavor-access:is_public": true, "ram": 512, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": null }, { "OS-FLV-DISABLED:disabled": false, "disk": 20, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "%(flavorid)s", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/%(flavorid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/%(flavorid)s", "rel": "bookmark" } ], "name": "m1.small.description", "os-flavor-access:is_public": true, "ram": 2048, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": "test description" } ] } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/v2.55/flavors-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/v2.55/flavors-list-resp.json.0000664000175000017500000000577200000000000033501 0ustar00zuulzuul00000000000000{ "flavors": [ { "id": "1", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/1", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny", "description": null }, { "id": "2", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/2", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/2", "rel": "bookmark" } ], "name": "m1.small", "description": null }, { "id": "3", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/3", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/3", "rel": "bookmark" } ], "name": "m1.medium", "description": null }, { "id": "4", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/4", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/4", "rel": "bookmark" } ], "name": "m1.large", "description": null }, { "id": "5", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/5", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/5", "rel": "bookmark" } ], "name": "m1.xlarge", "description": null }, { "id": "6", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/6", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/6", "rel": "bookmark" } ], "name": "m1.tiny.specs", "description": null }, { "id": "%(flavorid)s", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/%(flavorid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/%(flavorid)s", "rel": "bookmark" } ], "name": "m1.small.description", "description": "test description" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4024694 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/v2.61/0000775000175000017500000000000000000000000027116 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/v2.61/flavor-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/v2.61/flavor-get-resp.json.tp0000664000175000017500000000141500000000000033451 0ustar00zuulzuul00000000000000{ "flavor": { "OS-FLV-DISABLED:disabled": false, "disk": 20, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "%(flavorid)s", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/%(flavorid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/%(flavorid)s", "rel": "bookmark" } ], "name": "m1.small.description", "os-flavor-access:is_public": true, "ram": 2048, "swap": "", 
"vcpus": 1, "rxtx_factor": 1.0, "description": "test description", "extra_specs": { "hw:cpu_policy": "shared", "hw:numa_nodes": "1" } } } ././@PaxHeader0000000000000000000000000000021100000000000011447 xustar0000000000000000115 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/v2.61/flavors-detail-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/v2.61/flavors-detail-resp.jso0000664000175000017500000001226400000000000033523 0ustar00zuulzuul00000000000000{ "flavors": [ { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "1", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/1", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny", "os-flavor-access:is_public": true, "ram": 512, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": null, "extra_specs": {} }, { "OS-FLV-DISABLED:disabled": false, "disk": 20, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "2", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/2", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/2", "rel": "bookmark" } ], "name": "m1.small", "os-flavor-access:is_public": true, "ram": 2048, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": null, "extra_specs": {} }, { "OS-FLV-DISABLED:disabled": false, "disk": 40, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "3", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/3", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/3", "rel": "bookmark" } ], "name": "m1.medium", "os-flavor-access:is_public": true, "ram": 4096, "swap": "", "vcpus": 2, "rxtx_factor": 1.0, "description": null, "extra_specs": {} }, { "OS-FLV-DISABLED:disabled": false, "disk": 80, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "4", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/4", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/4", "rel": "bookmark" } ], "name": "m1.large", "os-flavor-access:is_public": true, "ram": 8192, "swap": "", "vcpus": 4, "rxtx_factor": 1.0, "description": null, "extra_specs": {} }, { "OS-FLV-DISABLED:disabled": false, "disk": 160, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "5", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/5", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/5", "rel": "bookmark" } ], "name": "m1.xlarge", "os-flavor-access:is_public": true, "ram": 16384, "swap": "", "vcpus": 8, "rxtx_factor": 1.0, "description": null, "extra_specs": {} }, { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "6", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/6", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/6", "rel": "bookmark" } ], "name": "m1.tiny.specs", "os-flavor-access:is_public": true, "ram": 512, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": null, "extra_specs": { "hw:numa_nodes": "1" } }, { "OS-FLV-DISABLED:disabled": false, "disk": 20, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "%(flavorid)s", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/%(flavorid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/%(flavorid)s", "rel": "bookmark" } ], "name": "m1.small.description", "os-flavor-access:is_public": true, "ram": 2048, "swap": "", "vcpus": 1, "rxtx_factor": 1.0, "description": "test description", "extra_specs": { "hw:cpu_policy": "shared", "hw:numa_nodes": "1" } } ] } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 
path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/v2.61/flavors-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/v2.61/flavors-list-resp.json.0000664000175000017500000000577200000000000033476 0ustar00zuulzuul00000000000000{ "flavors": [ { "id": "1", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/1", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny", "description": null }, { "id": "2", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/2", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/2", "rel": "bookmark" } ], "name": "m1.small", "description": null }, { "id": "3", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/3", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/3", "rel": "bookmark" } ], "name": "m1.medium", "description": null }, { "id": "4", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/4", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/4", "rel": "bookmark" } ], "name": "m1.large", "description": null }, { "id": "5", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/5", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/5", "rel": "bookmark" } ], "name": "m1.xlarge", "description": null }, { "id": "6", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/6", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/6", "rel": "bookmark" } ], "name": "m1.tiny.specs", "description": null }, { "id": "%(flavorid)s", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/%(flavorid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/%(flavorid)s", "rel": "bookmark" } ], "name": "m1.small.description", "description": "test description" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4024694 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/v2.75/0000775000175000017500000000000000000000000027123 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/v2.75/flavor-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/v2.75/flavor-get-resp.json.tp0000664000175000017500000000141400000000000033455 0ustar00zuulzuul00000000000000{ "flavor": { "OS-FLV-DISABLED:disabled": false, "disk": 20, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "%(flavorid)s", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/%(flavorid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/%(flavorid)s", "rel": "bookmark" } ], "name": "m1.small.description", "os-flavor-access:is_public": true, "ram": 2048, "swap": 0, "vcpus": 1, "rxtx_factor": 1.0, "description": "test description", "extra_specs": { "hw:cpu_policy": "shared", "hw:numa_nodes": "1" } } } ././@PaxHeader0000000000000000000000000000021100000000000011447 xustar0000000000000000115 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/v2.75/flavors-detail-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/v2.75/flavors-detail-resp.jso0000664000175000017500000001225500000000000033530 0ustar00zuulzuul00000000000000{ "flavors": [ { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "1", "links": [ { "href": 
"%(versioned_compute_endpoint)s/flavors/1", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny", "os-flavor-access:is_public": true, "ram": 512, "swap": 0, "vcpus": 1, "rxtx_factor": 1.0, "description": null, "extra_specs": {} }, { "OS-FLV-DISABLED:disabled": false, "disk": 20, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "2", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/2", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/2", "rel": "bookmark" } ], "name": "m1.small", "os-flavor-access:is_public": true, "ram": 2048, "swap": 0, "vcpus": 1, "rxtx_factor": 1.0, "description": null, "extra_specs": {} }, { "OS-FLV-DISABLED:disabled": false, "disk": 40, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "3", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/3", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/3", "rel": "bookmark" } ], "name": "m1.medium", "os-flavor-access:is_public": true, "ram": 4096, "swap": 0, "vcpus": 2, "rxtx_factor": 1.0, "description": null, "extra_specs": {} }, { "OS-FLV-DISABLED:disabled": false, "disk": 80, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "4", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/4", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/4", "rel": "bookmark" } ], "name": "m1.large", "os-flavor-access:is_public": true, "ram": 8192, "swap": 0, "vcpus": 4, "rxtx_factor": 1.0, "description": null, "extra_specs": {} }, { "OS-FLV-DISABLED:disabled": false, "disk": 160, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "5", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/5", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/5", "rel": "bookmark" } ], "name": "m1.xlarge", "os-flavor-access:is_public": true, "ram": 16384, "swap": 0, "vcpus": 8, "rxtx_factor": 1.0, "description": null, "extra_specs": {} }, { "OS-FLV-DISABLED:disabled": false, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "6", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/6", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/6", "rel": "bookmark" } ], "name": "m1.tiny.specs", "os-flavor-access:is_public": true, "ram": 512, "swap": 0, "vcpus": 1, "rxtx_factor": 1.0, "description": null, "extra_specs": { "hw:numa_nodes": "1" } }, { "OS-FLV-DISABLED:disabled": false, "disk": 20, "OS-FLV-EXT-DATA:ephemeral": 0, "id": "%(flavorid)s", "links": [ { "href": "%(versioned_compute_endpoint)s/flavors/%(flavorid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/flavors/%(flavorid)s", "rel": "bookmark" } ], "name": "m1.small.description", "os-flavor-access:is_public": true, "ram": 2048, "swap": 0, "vcpus": 1, "rxtx_factor": 1.0, "description": "test description", "extra_specs": { "hw:cpu_policy": "shared", "hw:numa_nodes": "1" } } ] } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/v2.75/flavors-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/flavors/v2.75/flavors-list-resp.json.0000664000175000017500000000676300000000000033504 0ustar00zuulzuul00000000000000{ "flavors": [ { "description": null, "id": "1", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/flavors/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/1", "rel": "bookmark" } ], "name": "m1.tiny" }, { "description": null, "id": "2", "links": [ { "href": 
"http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/flavors/2", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/2", "rel": "bookmark" } ], "name": "m1.small" }, { "description": null, "id": "3", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/flavors/3", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/3", "rel": "bookmark" } ], "name": "m1.medium" }, { "description": null, "id": "4", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/flavors/4", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/4", "rel": "bookmark" } ], "name": "m1.large" }, { "description": null, "id": "5", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/flavors/5", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/5", "rel": "bookmark" } ], "name": "m1.xlarge" }, { "description": null, "id": "6", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/flavors/6", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/6", "rel": "bookmark" } ], "name": "m1.tiny.specs" }, { "description": "test description", "id": "7", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/flavors/7", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/flavors/7", "rel": "bookmark" } ], "name": "m1.small.description" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4064693 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/images/0000775000175000017500000000000000000000000026133 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/images/image-get-resp.json.tpl0000664000175000017500000000215000000000000032430 0ustar00zuulzuul00000000000000{ "image": { "OS-DCF:diskConfig": "AUTO", "created": "2011-01-01T01:02:03Z", "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "%(versioned_compute_endpoint)s/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "self" }, { "href": "%(compute_endpoint)s/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "metadata": { "architecture": "x86_64", "auto_disk_config": "True", "kernel_id": "nokernel", "ramdisk_id": "nokernel" }, "minDisk": 0, "minRam": 0, "name": "fakeimage7", "OS-EXT-IMG-SIZE:size": %(int)s, "progress": 100, "status": "ACTIVE", "updated": "2011-01-01T01:02:03Z" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/images/image-meta-key-get.json.tpl0000664000175000017500000000006700000000000033200 0ustar00zuulzuul00000000000000{ "meta": { "kernel_id": "nokernel" } }././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/images/image-meta-key-put-req.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/images/image-meta-key-put-req.json.tp0000664000175000017500000000007400000000000033640 0ustar00zuulzuul00000000000000{ "meta": { "auto_disk_config": "False" } } ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/images/image-meta-key-put-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/images/image-meta-key-put-resp.json.t0000664000175000017500000000007300000000000033641 0ustar00zuulzuul00000000000000{ "meta": { "auto_disk_config": "False" } }././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/images/image-metadata-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/images/image-metadata-get-resp.json.t0000664000175000017500000000024300000000000033653 0ustar00zuulzuul00000000000000{ "metadata": { "architecture": "x86_64", "auto_disk_config": "True", "kernel_id": "nokernel", "ramdisk_id": "nokernel" } }././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/images/image-metadata-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/images/image-metadata-post-req.json.t0000664000175000017500000000013200000000000033674 0ustar00zuulzuul00000000000000{ "metadata": { "kernel_id": "False", "Label": "UpdatedImage" } } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/images/image-metadata-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/images/image-metadata-post-resp.json.0000664000175000017500000000030100000000000033670 0ustar00zuulzuul00000000000000{ "metadata": { "Label": "UpdatedImage", "architecture": "x86_64", "auto_disk_config": "True", "kernel_id": "False", "ramdisk_id": "nokernel" } }././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/images/image-metadata-put-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/images/image-metadata-put-req.json.tp0000664000175000017500000000013300000000000033700 0ustar00zuulzuul00000000000000{ "metadata": { "auto_disk_config": "True", "Label": "Changed" } } ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/images/image-metadata-put-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/images/image-metadata-put-resp.json.t0000664000175000017500000000013200000000000033701 0ustar00zuulzuul00000000000000{ "metadata": { "Label": "Changed", "auto_disk_config": "True" } }././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/images/images-details-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/images/images-details-get-resp.json.t0000664000175000017500000002010400000000000033701 0ustar00zuulzuul00000000000000{ "images": [ { 
"OS-DCF:diskConfig": "AUTO", "created": "2011-01-01T01:02:03Z", "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "%(versioned_compute_endpoint)s/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "self" }, { "href": "%(compute_endpoint)s/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "metadata": { "architecture": "x86_64", "auto_disk_config": "True", "kernel_id": "nokernel", "ramdisk_id": "nokernel" }, "minDisk": 0, "minRam": 0, "name": "fakeimage7", "OS-EXT-IMG-SIZE:size": %(int)s, "progress": 100, "status": "ACTIVE", "updated": "2011-01-01T01:02:03Z" }, { "created": "2011-01-01T01:02:03Z", "id": "155d900f-4e14-4e4c-a73d-069cbf4541e6", "links": [ { "href": "%(versioned_compute_endpoint)s/images/155d900f-4e14-4e4c-a73d-069cbf4541e6", "rel": "self" }, { "href": "%(compute_endpoint)s/images/155d900f-4e14-4e4c-a73d-069cbf4541e6", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/155d900f-4e14-4e4c-a73d-069cbf4541e6", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "metadata": { "architecture": "x86_64", "kernel_id": "nokernel", "ramdisk_id": "nokernel" }, "minDisk": 0, "minRam": 0, "name": "fakeimage123456", "OS-EXT-IMG-SIZE:size": %(int)s, "progress": 100, "status": "ACTIVE", "updated": "2011-01-01T01:02:03Z" }, { "created": "2011-01-01T01:02:03Z", "id": "a2459075-d96c-40d5-893e-577ff92e721c", "links": [ { "href": "%(versioned_compute_endpoint)s/images/a2459075-d96c-40d5-893e-577ff92e721c", "rel": "self" }, { "href": "%(compute_endpoint)s/images/a2459075-d96c-40d5-893e-577ff92e721c", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/a2459075-d96c-40d5-893e-577ff92e721c", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "metadata": { "kernel_id": "nokernel", "ramdisk_id": "nokernel" }, "minDisk": 0, "minRam": 0, "name": "fakeimage123456", "OS-EXT-IMG-SIZE:size": %(int)s, "progress": 100, "status": "ACTIVE", "updated": "2011-01-01T01:02:03Z" }, { "OS-DCF:diskConfig": "MANUAL", "created": "2011-01-01T01:02:03Z", "id": "a440c04b-79fa-479c-bed1-0b816eaec379", "links": [ { "href": "%(versioned_compute_endpoint)s/images/a440c04b-79fa-479c-bed1-0b816eaec379", "rel": "self" }, { "href": "%(compute_endpoint)s/images/a440c04b-79fa-479c-bed1-0b816eaec379", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/a440c04b-79fa-479c-bed1-0b816eaec379", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "metadata": { "architecture": "x86_64", "auto_disk_config": "False", "kernel_id": "nokernel", "ramdisk_id": "nokernel" }, "minDisk": 0, "minRam": 0, "name": "fakeimage6", "OS-EXT-IMG-SIZE:size": %(int)s, "progress": 100, "status": "ACTIVE", "updated": "2011-01-01T01:02:03Z" }, { "created": "2011-01-01T01:02:03Z", "id": "c905cedb-7281-47e4-8a62-f26bc5fc4c77", "links": [ { "href": "%(versioned_compute_endpoint)s/images/c905cedb-7281-47e4-8a62-f26bc5fc4c77", "rel": "self" }, { "href": "%(compute_endpoint)s/images/c905cedb-7281-47e4-8a62-f26bc5fc4c77", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/c905cedb-7281-47e4-8a62-f26bc5fc4c77", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "metadata": { "kernel_id": "155d900f-4e14-4e4c-a73d-069cbf4541e6", "ramdisk_id": null }, "minDisk": 0, "minRam": 0, "name": "fakeimage123456", "OS-EXT-IMG-SIZE:size": 
%(int)s, "progress": 100, "status": "ACTIVE", "updated": "2011-01-01T01:02:03Z" }, { "created": "2011-01-01T01:02:03Z", "id": "cedef40a-ed67-4d10-800e-17455edce175", "links": [ { "href": "%(versioned_compute_endpoint)s/images/cedef40a-ed67-4d10-800e-17455edce175", "rel": "self" }, { "href": "%(compute_endpoint)s/images/cedef40a-ed67-4d10-800e-17455edce175", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/cedef40a-ed67-4d10-800e-17455edce175", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "metadata": { "kernel_id": "nokernel", "ramdisk_id": "nokernel" }, "minDisk": 0, "minRam": 0, "name": "fakeimage123456", "OS-EXT-IMG-SIZE:size": %(int)s, "progress": 100, "status": "ACTIVE", "updated": "2011-01-01T01:02:03Z" }, { "created": "2011-01-01T01:02:03Z", "id": "76fa36fc-c930-4bf3-8c8a-ea2a2420deb6", "links": [ { "href": "%(versioned_compute_endpoint)s/images/76fa36fc-c930-4bf3-8c8a-ea2a2420deb6", "rel": "self" }, { "href": "%(compute_endpoint)s/images/76fa36fc-c930-4bf3-8c8a-ea2a2420deb6", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/76fa36fc-c930-4bf3-8c8a-ea2a2420deb6", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "metadata": { "architecture": "x86_64", "kernel_id": "nokernel", "ramdisk_id": "nokernel" }, "minDisk": 0, "minRam": 0, "name": "fakeimage123456", "OS-EXT-IMG-SIZE:size": %(int)s, "progress": 100, "status": "ACTIVE", "updated": "2011-01-01T01:02:03Z" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/images/images-list-get-resp.json.tpl0000664000175000017500000001223700000000000033573 0ustar00zuulzuul00000000000000{ "images": [ { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "%(versioned_compute_endpoint)s/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "self" }, { "href": "%(compute_endpoint)s/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "name": "fakeimage7" }, { "id": "155d900f-4e14-4e4c-a73d-069cbf4541e6", "links": [ { "href": "%(versioned_compute_endpoint)s/images/155d900f-4e14-4e4c-a73d-069cbf4541e6", "rel": "self" }, { "href": "%(compute_endpoint)s/images/155d900f-4e14-4e4c-a73d-069cbf4541e6", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/155d900f-4e14-4e4c-a73d-069cbf4541e6", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "name": "fakeimage123456" }, { "id": "a2459075-d96c-40d5-893e-577ff92e721c", "links": [ { "href": "%(versioned_compute_endpoint)s/images/a2459075-d96c-40d5-893e-577ff92e721c", "rel": "self" }, { "href": "%(compute_endpoint)s/images/a2459075-d96c-40d5-893e-577ff92e721c", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/a2459075-d96c-40d5-893e-577ff92e721c", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "name": "fakeimage123456" }, { "id": "a440c04b-79fa-479c-bed1-0b816eaec379", "links": [ { "href": "%(versioned_compute_endpoint)s/images/a440c04b-79fa-479c-bed1-0b816eaec379", "rel": "self" }, { "href": "%(compute_endpoint)s/images/a440c04b-79fa-479c-bed1-0b816eaec379", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/a440c04b-79fa-479c-bed1-0b816eaec379", "rel": "alternate", "type": "application/vnd.openstack.image" 
} ], "name": "fakeimage6" }, { "id": "c905cedb-7281-47e4-8a62-f26bc5fc4c77", "links": [ { "href": "%(versioned_compute_endpoint)s/images/c905cedb-7281-47e4-8a62-f26bc5fc4c77", "rel": "self" }, { "href": "%(compute_endpoint)s/images/c905cedb-7281-47e4-8a62-f26bc5fc4c77", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/c905cedb-7281-47e4-8a62-f26bc5fc4c77", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "name": "fakeimage123456" }, { "id": "cedef40a-ed67-4d10-800e-17455edce175", "links": [ { "href": "%(versioned_compute_endpoint)s/images/cedef40a-ed67-4d10-800e-17455edce175", "rel": "self" }, { "href": "%(compute_endpoint)s/images/cedef40a-ed67-4d10-800e-17455edce175", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/cedef40a-ed67-4d10-800e-17455edce175", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "name": "fakeimage123456" }, { "id": "76fa36fc-c930-4bf3-8c8a-ea2a2420deb6", "links": [ { "href": "%(versioned_compute_endpoint)s/images/76fa36fc-c930-4bf3-8c8a-ea2a2420deb6", "rel": "self" }, { "href": "%(compute_endpoint)s/images/76fa36fc-c930-4bf3-8c8a-ea2a2420deb6", "rel": "bookmark" }, { "href": "http://glance.openstack.example.com/images/76fa36fc-c930-4bf3-8c8a-ea2a2420deb6", "rel": "alternate", "type": "application/vnd.openstack.image" } ], "name": "fakeimage123456" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4064693 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/limits/0000775000175000017500000000000000000000000026167 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/limits/limit-get-resp.json.tpl0000664000175000017500000000141200000000000032520 0ustar00zuulzuul00000000000000{ "limits": { "absolute": { "maxImageMeta": 128, "maxPersonality": 5, "maxPersonalitySize": 10240, "maxServerMeta": 128, "maxTotalCores": 20, "maxTotalFloatingIps": -1, "maxTotalInstances": 10, "maxTotalKeypairs": 100, "maxTotalRAMSize": 51200, "maxSecurityGroups": -1, "maxSecurityGroupRules": -1, "maxServerGroups": 10, "maxServerGroupMembers": 10, "totalCoresUsed": 0, "totalInstancesUsed": 0, "totalRAMUsed": 0, "totalSecurityGroupsUsed": 0, "totalFloatingIpsUsed": 0, "totalServerGroupsUsed": 0 }, "rate": [] } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4064693 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/limits/v2.36/0000775000175000017500000000000000000000000026745 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/limits/v2.36/limit-get-resp.json.tpl0000664000175000017500000000110400000000000033274 0ustar00zuulzuul00000000000000{ "limits": { "absolute": { "maxImageMeta": 128, "maxPersonality": 5, "maxPersonalitySize": 10240, "maxServerMeta": 128, "maxTotalCores": 20, "maxTotalInstances": 10, "maxTotalKeypairs": 100, "maxTotalRAMSize": 51200, "maxServerGroups": 10, "maxServerGroupMembers": 10, "totalCoresUsed": 0, "totalInstancesUsed": 0, "totalRAMUsed": 0, "totalServerGroupsUsed": 0 }, "rate": [] } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4064693 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/limits/v2.39/0000775000175000017500000000000000000000000026750 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/limits/v2.39/limit-get-resp.json.tpl0000664000175000017500000000104300000000000033301 0ustar00zuulzuul00000000000000{ "limits": { "absolute": { "maxPersonality": 5, "maxPersonalitySize": 10240, "maxServerMeta": 128, "maxTotalCores": 20, "maxTotalInstances": 10, "maxTotalKeypairs": 100, "maxTotalRAMSize": 51200, "maxServerGroups": 10, "maxServerGroupMembers": 10, "totalCoresUsed": 0, "totalInstancesUsed": 0, "totalRAMUsed": 0, "totalServerGroupsUsed": 0 }, "rate": [] } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4064693 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/limits/v2.57/0000775000175000017500000000000000000000000026750 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/limits/v2.57/limit-get-resp.json.tpl0000664000175000017500000000073100000000000033304 0ustar00zuulzuul00000000000000{ "limits": { "absolute": { "maxServerMeta": 128, "maxTotalCores": 20, "maxTotalInstances": 10, "maxTotalKeypairs": 100, "maxTotalRAMSize": 51200, "maxServerGroups": 10, "maxServerGroupMembers": 10, "totalCoresUsed": 0, "totalInstancesUsed": 0, "totalRAMUsed": 0, "totalServerGroupsUsed": 0 }, "rate": [] } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4064693 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-admin-actions/0000775000175000017500000000000000000000000030033 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000023200000000000011452 xustar0000000000000000132 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-admin-actions/admin-actions-inject-network-info.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-admin-actions/admin-actions-inject0000664000175000017500000000004200000000000033752 0ustar00zuulzuul00000000000000{ "injectNetworkInfo": null } ././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-admin-actions/admin-actions-reset-network.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-admin-actions/admin-actions-reset-0000664000175000017500000000003500000000000033677 0ustar00zuulzuul00000000000000{ "resetNetwork": null } ././@PaxHeader0000000000000000000000000000023100000000000011451 xustar0000000000000000131 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-admin-actions/admin-actions-reset-server-state.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-admin-actions/admin-actions-reset-0000664000175000017500000000007300000000000033701 0ustar00zuulzuul00000000000000{ "os-resetState": { "state": "active" } } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-admin-actions/admin-actions-reset-state.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-admin-actions/admin-actions-reset-0000664000175000017500000000007300000000000033701 0ustar00zuulzuul00000000000000{ 'os-resetState': { 'state': 'active' } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4064693 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-admin-password/0000775000175000017500000000000000000000000030235 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000023000000000000011450 xustar0000000000000000130 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-admin-password/admin-password-change-password.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-admin-password/admin-password-chan0000664000175000017500000000011000000000000034007 0ustar00zuulzuul00000000000000{ "changePassword" : { "adminPass" : "%(password)s" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4064693 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-agents/0000775000175000017500000000000000000000000026566 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-agents/agent-post-req.json.tpl0000664000175000017500000000034200000000000033124 0ustar00zuulzuul00000000000000{ "agent": { "hypervisor": "%(hypervisor)s", "os": "%(os)s", "architecture": "%(architecture)s", "version": "%(version)s", "md5hash": "%(md5hash)s", "url": "%(url)s" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-agents/agent-post-resp.json.tpl0000664000175000017500000000040600000000000033307 0ustar00zuulzuul00000000000000{ "agent": { "agent_id": 1, "architecture": "x86", "hypervisor": "xen", "md5hash": "add6bb58e139be103324d04d82d8f545", "os": "os", "url": "http://example.com/path/to/resource", "version": "8.0" } } ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-agents/agent-update-put-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-agents/agent-update-put-req.json.t0000664000175000017500000000016500000000000033676 0ustar00zuulzuul00000000000000{ "para": { "url": "%(url)s", "md5hash": "%(md5hash)s", "version": "%(version)s" } } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-agents/agent-update-put-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-agents/agent-update-put-resp.json.0000664000175000017500000000027000000000000033671 0ustar00zuulzuul00000000000000{ "agent": { "agent_id": "1", "md5hash": "add6bb58e139be103324d04d82d8f545", "url": "http://example.com/path/to/resource", "version": "7.0" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-agents/agents-get-resp.json.tpl0000664000175000017500000000046700000000000033273 0ustar00zuulzuul00000000000000{ "agents": [ { "agent_id": 1, "architecture": "x86", 
"hypervisor": "xen", "md5hash": "add6bb58e139be103324d04d82d8f545", "os": "os", "url": "http://example.com/path/to/resource", "version": "8.0" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4064693 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/0000775000175000017500000000000000000000000027416 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022100000000000011450 xustar0000000000000000123 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregate-add-host-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregate-add-host-post0000664000175000017500000000007400000000000033754 0ustar00zuulzuul00000000000000{ "add_host": { "host": "%(host_name)s" } } ././@PaxHeader0000000000000000000000000000022100000000000011450 xustar0000000000000000123 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregate-metadata-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregate-metadata-post0000664000175000017500000000021300000000000034024 0ustar00zuulzuul00000000000000{ "set_metadata": { "metadata": { "key": "value" } } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregate-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregate-post-req.json0000664000175000017500000000013700000000000034010 0ustar00zuulzuul00000000000000{ "aggregate": { "name": "name", "availability_zone": "london" } } ././@PaxHeader0000000000000000000000000000021100000000000011447 xustar0000000000000000115 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregate-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregate-post-resp.jso0000664000175000017500000000036200000000000034014 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "%(strtime)s", "deleted": false, "deleted_at": null, "id": %(aggregate_id)s, "name": "name", "updated_at": null } } ././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregate-remove-host-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregate-remove-host-p0000664000175000017500000000007700000000000033776 0ustar00zuulzuul00000000000000{ "remove_host": { "host": "%(host_name)s" } } ././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregate-update-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregate-update-post-r0000664000175000017500000000014100000000000033765 0ustar00zuulzuul00000000000000{ "aggregate": { "name": "newname", "availability_zone": "nova2" } } ././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregate-update-post-resp.json.tpl 22 
mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregate-update-post-r0000664000175000017500000000051500000000000033772 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "nova2", "created_at": "%(strtime)s", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "nova2" }, "name": "newname", "updated_at": "%(strtime)s" } } ././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregates-add-host-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregates-add-host-pos0000664000175000017500000000055300000000000033755 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "%(strtime)s", "deleted": false, "deleted_at": null, "hosts": [ "%(compute_host)s" ], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null } } ././@PaxHeader0000000000000000000000000000021100000000000011447 xustar0000000000000000115 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregates-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregates-get-resp.jso0000664000175000017500000000050300000000000033766 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "%(strtime)s", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null } } ././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregates-list-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregates-list-get-res0000664000175000017500000000066500000000000033776 0ustar00zuulzuul00000000000000{ "aggregates": [ { "availability_zone": "london", "created_at": "%(strtime)s", "deleted": false, "deleted_at": null, "hosts": [ "%(compute_host)s" ], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null } ] } ././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregates-metadata-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregates-metadata-pos0000664000175000017500000000054600000000000034034 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "%(strtime)s", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "london", "key": "value" }, "name": "name", "updated_at": %(strtime)s } } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregates-remove-host-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/aggregates-remove-host-0000664000175000017500000000050300000000000033773 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "%(strtime)s", "deleted": false, "deleted_at": 
null, "hosts": [], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4104693 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/0000775000175000017500000000000000000000000030170 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregate-add-host-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregate-add-hos0000664000175000017500000000007400000000000033357 0ustar00zuulzuul00000000000000{ "add_host": { "host": "%(host_name)s" } } ././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregate-metadata-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregate-metadat0000664000175000017500000000021300000000000033452 0ustar00zuulzuul00000000000000{ "set_metadata": { "metadata": { "key": "value" } } } ././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregate-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregate-post-re0000664000175000017500000000013700000000000033431 0ustar00zuulzuul00000000000000{ "aggregate": { "name": "name", "availability_zone": "london" } } ././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregate-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregate-post-re0000664000175000017500000000041600000000000033431 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "%(strtime)s", "deleted": false, "deleted_at": null, "id": %(aggregate_id)s, "name": "name", "updated_at": null, "uuid": "%(uuid)s" } } ././@PaxHeader0000000000000000000000000000023200000000000011452 xustar0000000000000000132 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregate-remove-host-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregate-remove-0000664000175000017500000000007700000000000033415 0ustar00zuulzuul00000000000000{ "remove_host": { "host": "%(host_name)s" } } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregate-update-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregate-update-0000664000175000017500000000014100000000000033372 0ustar00zuulzuul00000000000000{ "aggregate": { "name": "newname", "availability_zone": "nova2" } } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 
path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregate-update-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregate-update-0000664000175000017500000000055100000000000033377 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "nova2", "created_at": "%(strtime)s", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "nova2" }, "name": "newname", "updated_at": "%(strtime)s", "uuid": "%(uuid)s" } } ././@PaxHeader0000000000000000000000000000023100000000000011451 xustar0000000000000000131 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregates-add-host-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregates-add-ho0000664000175000017500000000060700000000000033361 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "%(strtime)s", "deleted": false, "deleted_at": null, "hosts": [ "%(compute_host)s" ], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null, "uuid": "%(uuid)s" } } ././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregates-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregates-get-re0000664000175000017500000000053700000000000033412 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "%(strtime)s", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null, "uuid": "%(uuid)s" } } ././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregates-list-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregates-list-g0000664000175000017500000000072500000000000033425 0ustar00zuulzuul00000000000000{ "aggregates": [ { "availability_zone": "london", "created_at": "%(strtime)s", "deleted": false, "deleted_at": null, "hosts": [ "%(compute_host)s" ], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null, "uuid": "%(uuid)s" } ] } ././@PaxHeader0000000000000000000000000000023100000000000011451 xustar0000000000000000131 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregates-metadata-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregates-metada0000664000175000017500000000060200000000000033453 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "%(strtime)s", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "london", "key": "value" }, "name": "name", "updated_at": %(strtime)s, "uuid": "%(uuid)s" } } ././@PaxHeader0000000000000000000000000000023400000000000011454 xustar0000000000000000134 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregates-remove-host-post-resp.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.41/aggregates-remove0000664000175000017500000000053700000000000033524 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "%(strtime)s", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null, "uuid": "%(uuid)s" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4104693 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/0000775000175000017500000000000000000000000030174 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregate-add-host-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregate-add-hos0000664000175000017500000000007400000000000033363 0ustar00zuulzuul00000000000000{ "add_host": { "host": "%(host_name)s" } } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregate-images-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregate-images-0000664000175000017500000000007400000000000033366 0ustar00zuulzuul00000000000000{ "cache": [ {"id": "%(image_id)s"} ] } ././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregate-metadata-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregate-metadat0000664000175000017500000000021300000000000033456 0ustar00zuulzuul00000000000000{ "set_metadata": { "metadata": { "key": "value" } } } ././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregate-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregate-post-re0000664000175000017500000000013700000000000033435 0ustar00zuulzuul00000000000000{ "aggregate": { "name": "name", "availability_zone": "london" } } ././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregate-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregate-post-re0000664000175000017500000000041600000000000033435 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "%(strtime)s", "deleted": false, "deleted_at": null, "id": %(aggregate_id)s, "name": "name", "updated_at": null, "uuid": "%(uuid)s" } } ././@PaxHeader0000000000000000000000000000023200000000000011452 xustar0000000000000000132 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregate-remove-host-post-req.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregate-remove-0000664000175000017500000000007700000000000033421 0ustar00zuulzuul00000000000000{ "remove_host": { "host": "%(host_name)s" } } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregate-update-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregate-update-0000664000175000017500000000014100000000000033376 0ustar00zuulzuul00000000000000{ "aggregate": { "name": "newname", "availability_zone": "nova2" } } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregate-update-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregate-update-0000664000175000017500000000055100000000000033403 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "nova2", "created_at": "%(strtime)s", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "nova2" }, "name": "newname", "updated_at": "%(strtime)s", "uuid": "%(uuid)s" } } ././@PaxHeader0000000000000000000000000000023100000000000011451 xustar0000000000000000131 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregates-add-host-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregates-add-ho0000664000175000017500000000060700000000000033365 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "%(strtime)s", "deleted": false, "deleted_at": null, "hosts": [ "%(compute_host)s" ], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null, "uuid": "%(uuid)s" } } ././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregates-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregates-get-re0000664000175000017500000000053700000000000033416 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "%(strtime)s", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null, "uuid": "%(uuid)s" } } ././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregates-list-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregates-list-g0000664000175000017500000000072500000000000033431 0ustar00zuulzuul00000000000000{ "aggregates": [ { "availability_zone": "london", "created_at": "%(strtime)s", "deleted": false, "deleted_at": null, "hosts": [ "%(compute_host)s" ], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null, "uuid": "%(uuid)s" } ] } ././@PaxHeader0000000000000000000000000000023100000000000011451 xustar0000000000000000131 
path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregates-metadata-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregates-metada0000664000175000017500000000060200000000000033457 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "%(strtime)s", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "london", "key": "value" }, "name": "name", "updated_at": %(strtime)s, "uuid": "%(uuid)s" } } ././@PaxHeader0000000000000000000000000000023400000000000011454 xustar0000000000000000134 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregates-remove-host-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-aggregates/v2.81/aggregates-remove0000664000175000017500000000053700000000000033530 0ustar00zuulzuul00000000000000{ "aggregate": { "availability_zone": "london", "created_at": "%(strtime)s", "deleted": false, "deleted_at": null, "hosts": [], "id": 1, "metadata": { "availability_zone": "london" }, "name": "name", "updated_at": null, "uuid": "%(uuid)s" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4104693 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-assisted-volume-snapshots/0000775000175000017500000000000000000000000032451 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000024100000000000011452 xustar0000000000000000139 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-assisted-volume-snapshots/snapshot-create-assisted-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-assisted-volume-snapshots/snapshot0000664000175000017500000000042400000000000034233 0ustar00zuulzuul00000000000000{ "snapshot": { "volume_id": "%(volume_id)s", "create_info": { "snapshot_id": "%(snapshot_id)s", "type": "%(type)s", "new_file": "%(new_file)s", "id": "421752a6-acf6-4b2d-bc7a-119f9148cd8c" } } } ././@PaxHeader0000000000000000000000000000024200000000000011453 xustar0000000000000000140 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-assisted-volume-snapshots/snapshot-create-assisted-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-assisted-volume-snapshots/snapshot0000664000175000017500000000016100000000000034231 0ustar00zuulzuul00000000000000{ "snapshot": { "id": "421752a6-acf6-4b2d-bc7a-119f9148cd8c", "volumeId": "%(uuid)s" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4104693 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/0000775000175000017500000000000000000000000030672 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000024000000000000011451 xustar0000000000000000138 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/attach-interfaces-create-net_id-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/attach-interface0000664000175000017500000000031200000000000034013 0ustar00zuulzuul00000000000000{ "interfaceAttachment": { "fixed_ips": [ { "ip_address": "192.168.1.3" } ], "net_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6" } } 
././@PaxHeader0000000000000000000000000000023100000000000011451 xustar0000000000000000131 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/attach-interfaces-create-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/attach-interface0000664000175000017500000000014100000000000034013 0ustar00zuulzuul00000000000000{ "interfaceAttachment": { "port_id": "ce531f90-199f-48c0-816c-13e38010b442" } } ././@PaxHeader0000000000000000000000000000023200000000000011452 xustar0000000000000000132 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/attach-interfaces-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/attach-interface0000664000175000017500000000062200000000000034017 0ustar00zuulzuul00000000000000{ "interfaceAttachment": { "fixed_ips": [ { "ip_address": "192.168.1.3", "subnet_id": "f8a6e8f8-c2ec-497c-9f23-da9616de54ef" } ], "mac_addr": "fa:16:3e:4c:2c:30", "net_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "port_id": "ce531f90-199f-48c0-816c-13e38010b442", "port_state": "ACTIVE" } }././@PaxHeader0000000000000000000000000000023000000000000011450 xustar0000000000000000130 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/attach-interfaces-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/attach-interface0000664000175000017500000000071700000000000034024 0ustar00zuulzuul00000000000000{ "interfaceAttachments": [ { "fixed_ips": [ { "ip_address": "192.168.1.3", "subnet_id": "f8a6e8f8-c2ec-497c-9f23-da9616de54ef" } ], "mac_addr": "fa:16:3e:4c:2c:30", "net_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "port_id": "ce531f90-199f-48c0-816c-13e38010b442", "port_state": "ACTIVE" } ] }././@PaxHeader0000000000000000000000000000023000000000000011450 xustar0000000000000000130 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/attach-interfaces-show-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/attach-interface0000664000175000017500000000062200000000000034017 0ustar00zuulzuul00000000000000{ "interfaceAttachment": { "fixed_ips": [ { "ip_address": "192.168.1.3", "subnet_id": "f8a6e8f8-c2ec-497c-9f23-da9616de54ef" } ], "mac_addr": "fa:16:3e:4c:2c:30", "net_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "port_id": "ce531f90-199f-48c0-816c-13e38010b442", "port_state": "ACTIVE" } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4144692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/v2.49/0000775000175000017500000000000000000000000031454 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000023700000000000011457 xustar0000000000000000137 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/v2.49/attach-interfaces-create-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/v2.49/attach-int0000664000175000017500000000016700000000000033437 0ustar00zuulzuul00000000000000{ "interfaceAttachment": { "port_id": "ce531f90-199f-48c0-816c-13e38010b442", "tag": "foo" } } ././@PaxHeader0000000000000000000000000000024000000000000011451 xustar0000000000000000138 
path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/v2.49/attach-interfaces-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/v2.49/attach-int0000664000175000017500000000062200000000000033433 0ustar00zuulzuul00000000000000{ "interfaceAttachment": { "fixed_ips": [ { "ip_address": "192.168.1.3", "subnet_id": "f8a6e8f8-c2ec-497c-9f23-da9616de54ef" } ], "mac_addr": "fa:16:3e:4c:2c:30", "net_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "port_id": "ce531f90-199f-48c0-816c-13e38010b442", "port_state": "ACTIVE" } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4144692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/v2.70/0000775000175000017500000000000000000000000031446 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000024600000000000011457 xustar0000000000000000144 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/v2.70/attach-interfaces-create-net_id-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/v2.70/attach-int0000664000175000017500000000034400000000000033426 0ustar00zuulzuul00000000000000{ "interfaceAttachment": { "fixed_ips": [ { "ip_address": "192.168.1.3" } ], "net_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "tag": "%(tag)s" } } ././@PaxHeader0000000000000000000000000000023700000000000011457 xustar0000000000000000137 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/v2.70/attach-interfaces-create-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/v2.70/attach-int0000664000175000017500000000014200000000000033422 0ustar00zuulzuul00000000000000{ "interfaceAttachment": { "port_id": "%(port_id)s", "tag": "%(tag)s" } } ././@PaxHeader0000000000000000000000000000024000000000000011451 xustar0000000000000000138 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/v2.70/attach-interfaces-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/v2.70/attach-int0000664000175000017500000000055000000000000033425 0ustar00zuulzuul00000000000000{ "interfaceAttachment": { "fixed_ips": [ { "ip_address": "%(ip_address)s", "subnet_id": "%(subnet_id)s" } ], "mac_addr": "%(mac_addr)s", "net_id": "%(net_id)s", "port_id": "%(port_id)s", "port_state": "%(port_state)s", "tag": "%(tag)s" } }././@PaxHeader0000000000000000000000000000023600000000000011456 xustar0000000000000000136 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/v2.70/attach-interfaces-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/v2.70/attach-int0000664000175000017500000000065100000000000033427 0ustar00zuulzuul00000000000000{ "interfaceAttachments": [ { "fixed_ips": [ { "ip_address": "%(ip_address)s", "subnet_id": "%(subnet_id)s" } ], "mac_addr": "%(mac_addr)s", "net_id": "%(net_id)s", "port_id": "%(port_id)s", "port_state": "%(port_state)s", "tag": "%(tag)s" } ] }././@PaxHeader0000000000000000000000000000023600000000000011456 xustar0000000000000000136 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/v2.70/attach-interfaces-show-resp.json.tpl 22 
mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-attach-interfaces/v2.70/attach-int0000664000175000017500000000055000000000000033425 0ustar00zuulzuul00000000000000{ "interfaceAttachment": { "fixed_ips": [ { "ip_address": "%(ip_address)s", "subnet_id": "%(subnet_id)s" } ], "mac_addr": "%(mac_addr)s", "net_id": "%(net_id)s", "port_id": "%(port_id)s", "port_state": "%(port_state)s", "tag": "%(tag)s" } }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4144692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-availability-zone/0000775000175000017500000000000000000000000030730 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000023200000000000011452 xustar0000000000000000132 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-availability-zone/availability-zone-detail-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-availability-zone/availability-zon0000664000175000017500000000215000000000000034127 0ustar00zuulzuul00000000000000{ "availabilityZoneInfo": [ { "hosts": { "conductor": { "nova-conductor": { "active": true, "available": true, "updated_at": %(strtime_or_none)s } }, "scheduler": { "nova-scheduler": { "active": true, "available": true, "updated_at": %(strtime_or_none)s } } }, "zoneName": "internal", "zoneState": { "available": true } }, { "hosts": { "compute": { "nova-compute": { "active": true, "available": true, "updated_at": %(strtime_or_none)s } } }, "zoneName": "nova", "zoneState": { "available": true } } ] } ././@PaxHeader0000000000000000000000000000023000000000000011450 xustar0000000000000000130 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-availability-zone/availability-zone-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-availability-zone/availability-zon0000664000175000017500000000030200000000000034124 0ustar00zuulzuul00000000000000{ "availabilityZoneInfo": [ { "hosts": null, "zoneName": "nova", "zoneState": { "available": true } } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4144692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-baremetal-nodes/0000775000175000017500000000000000000000000030347 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-baremetal-nodes/baremetal-node-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-baremetal-nodes/baremetal-node-get0000664000175000017500000000046400000000000033732 0ustar00zuulzuul00000000000000{ "node": { "cpus": "2", "disk_gb": "10", "host": "IRONIC MANAGED", "id": "058d27fa-241b-445a-a386-08c04f96db43", "instance_uuid": "1ea4e53e-149a-4f02-9515-590c9fb2315a", "interfaces": [], "memory_mb": "1024", "task_state": "active" } }././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-baremetal-nodes/baremetal-node-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-baremetal-nodes/baremetal-node-lis0000664000175000017500000000106100000000000033734 0ustar00zuulzuul00000000000000{ "nodes": [ { "cpus": "2", "disk_gb": "10", "host": 
"IRONIC MANAGED", "id": "058d27fa-241b-445a-a386-08c04f96db43", "interfaces": [], "memory_mb": "1024", "task_state": "active" }, { "cpus": "2", "disk_gb": "10", "host": "IRONIC MANAGED", "id": "e2025409-f3ce-4d6a-9788-c565cf3b1b1c", "interfaces": [], "memory_mb": "1024", "task_state": "active" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4144692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-certificates/0000775000175000017500000000000000000000000027752 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-certificates/certificate-create-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-certificates/certificate-create-re0000664000175000017500000000000000000000000034012 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-certificates/certificate-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-certificates/certificate-create-re0000664000175000017500000000013500000000000034023 0ustar00zuulzuul00000000000000{ "certificate": { "data": "%(text)s", "private_key": "%(text)s" } } ././@PaxHeader0000000000000000000000000000022100000000000011450 xustar0000000000000000123 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-certificates/certificate-get-root-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-certificates/certificate-get-root-0000664000175000017500000000012700000000000033772 0ustar00zuulzuul00000000000000{ "certificate": { "data": "%(text)s", "private_key": null } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4144692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-console-auth-tokens/0000775000175000017500000000000000000000000031207 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000024000000000000011451 xustar0000000000000000138 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-console-auth-tokens/get-console-connect-info-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-console-auth-tokens/get-console-co0000664000175000017500000000025700000000000033754 0ustar00zuulzuul00000000000000{ "console": { "instance_uuid": "%(id)s", "host": "%(host)s", "port": %(port)s, "internal_access_path": "%(internal_access_path)s" } } ././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-console-auth-tokens/get-rdp-console-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-console-auth-tokens/get-rdp-consol0000664000175000017500000000010000000000000033756 0ustar00zuulzuul00000000000000{ "os-getRDPConsole": { "type": "rdp-html5" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4144692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-console-output/0000775000175000017500000000000000000000000030305 
5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022100000000000011450 xustar0000000000000000123 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-console-output/console-output-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-console-output/console-output-post0000664000175000017500000000007400000000000034214 0ustar00zuulzuul00000000000000{ "os-getConsoleOutput": { "length": 50 } } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-console-output/console-output-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-console-output/console-output-post0000664000175000017500000000007300000000000034213 0ustar00zuulzuul00000000000000{ "output": "FAKE CONSOLE OUTPUT\nANOTHER\nLAST LINE" }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4144692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-create-backup/0000775000175000017500000000000000000000000030013 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-create-backup/create-backup-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-create-backup/create-backup-req.js0000664000175000017500000000016200000000000033643 0ustar00zuulzuul00000000000000{ "createBackup": { "name": "Backup 1", "backup_type": "daily", "rotation": 1 } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4144692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-create-backup/v2.45/0000775000175000017500000000000000000000000030571 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-create-backup/v2.45/create-backup-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-create-backup/v2.45/create-backup-0000664000175000017500000000016300000000000033277 0ustar00zuulzuul00000000000000{ "createBackup": { "name": "Backup 1", "backup_type": "weekly", "rotation": 1 } } ././@PaxHeader0000000000000000000000000000022100000000000011450 xustar0000000000000000123 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-create-backup/v2.45/create-backup-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-create-backup/v2.45/create-backup-0000664000175000017500000000003700000000000033277 0ustar00zuulzuul00000000000000{ "image_id": "%(uuid)s" } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4144692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-deferred-delete/0000775000175000017500000000000000000000000030325 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-deferred-delete/force-delete-post-req.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-deferred-delete/force-delete-post-0000664000175000017500000000003400000000000033643 0ustar00zuulzuul00000000000000{ "forceDelete": null } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-deferred-delete/restore-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-deferred-delete/restore-post-req.j0000664000175000017500000000003000000000000033724 0ustar00zuulzuul00000000000000{ "restore": null } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4144692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/0000775000175000017500000000000000000000000027102 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022100000000000011450 xustar0000000000000000123 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/server-evacuate-find-host-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/server-evacuate-find-host0000664000175000017500000000016300000000000034017 0ustar00zuulzuul00000000000000{ "evacuate": { "adminPass": "%(adminPass)s", "onSharedStorage": "%(onSharedStorage)s" } } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/server-evacuate-find-host-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/server-evacuate-find-host0000664000175000017500000000004400000000000034015 0ustar00zuulzuul00000000000000{ "adminPass": "%(password)s" } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/server-evacuate-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/server-evacuate-req.json.0000664000175000017500000000021700000000000033741 0ustar00zuulzuul00000000000000{ "evacuate": { "host": "%(host)s", "adminPass": "%(adminPass)s", "onSharedStorage": "%(onSharedStorage)s" } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/server-evacuate-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/server-evacuate-resp.json0000664000175000017500000000004400000000000034043 0ustar00zuulzuul00000000000000{ "adminPass": "%(password)s" } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4144692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/v2.14/0000775000175000017500000000000000000000000027654 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/v2.14/server-evacuate-find-host-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/v2.14/server-evacuate-fin0000664000175000017500000000010100000000000033442 0ustar00zuulzuul00000000000000{ "evacuate": { "adminPass": "%(adminPass)s" } } 
././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/v2.14/server-evacuate-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/v2.14/server-evacuate-req0000664000175000017500000000013500000000000033464 0ustar00zuulzuul00000000000000{ "evacuate": { "host": "%(host)s", "adminPass": "%(adminPass)s" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4144692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/v2.29/0000775000175000017500000000000000000000000027662 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/v2.29/server-evacuate-find-host-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/v2.29/server-evacuate-fin0000664000175000017500000000010100000000000033450 0ustar00zuulzuul00000000000000{ "evacuate": { "adminPass": "%(adminPass)s" } } ././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/v2.29/server-evacuate-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/v2.29/server-evacuate-req0000664000175000017500000000017300000000000033474 0ustar00zuulzuul00000000000000{ "evacuate": { "host": "%(host)s", "adminPass": "%(adminPass)s", "force": "%(force)s" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4144692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/v2.68/0000775000175000017500000000000000000000000027665 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/v2.68/server-evacuate-find-host-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/v2.68/server-evacuate-fin0000664000175000017500000000010100000000000033453 0ustar00zuulzuul00000000000000{ "evacuate": { "adminPass": "%(adminPass)s" } } ././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/v2.68/server-evacuate-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-evacuate/v2.68/server-evacuate-req0000664000175000017500000000013500000000000033475 0ustar00zuulzuul00000000000000{ "evacuate": { "host": "%(host)s", "adminPass": "%(adminPass)s" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4184692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-floating-ip-pools/0000775000175000017500000000000000000000000030650 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-floating-ip-pools/floatingippools-list-resp.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-floating-ip-pools/floatingippools-0000664000175000017500000000021600000000000034060 0ustar00zuulzuul00000000000000{ "floating_ip_pools": [ { "name": "%(pool1)s" }, { "name": "%(pool2)s" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4184692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-floating-ips/0000775000175000017500000000000000000000000027701 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-floating-ips/floating-ips-create-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-floating-ips/floating-ips-create-r0000664000175000017500000000003200000000000033713 0ustar00zuulzuul00000000000000{ "pool": "%(pool)s" }././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-floating-ips/floating-ips-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-floating-ips/floating-ips-create-r0000664000175000017500000000030200000000000033713 0ustar00zuulzuul00000000000000{ "floating_ip": { "fixed_ip": null, "id": "8baeddb4-45e2-4c36-8cb7-d79439a5f67c", "instance_id": null, "ip": "172.24.4.17", "pool": "public" } } ././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-floating-ips/floating-ips-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-floating-ips/floating-ips-get-resp0000664000175000017500000000030200000000000033737 0ustar00zuulzuul00000000000000{ "floating_ip": { "fixed_ip": null, "id": "8baeddb4-45e2-4c36-8cb7-d79439a5f67c", "instance_id": null, "ip": "172.24.4.17", "pool": "public" } } ././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-floating-ips/floating-ips-list-empty-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-floating-ips/floating-ips-list-emp0000664000175000017500000000003300000000000033744 0ustar00zuulzuul00000000000000{ "floating_ips": [] } ././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-floating-ips/floating-ips-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-floating-ips/floating-ips-list-res0000664000175000017500000000067000000000000033763 0ustar00zuulzuul00000000000000{ "floating_ips": [ { "fixed_ip": null, "id": "8baeddb4-45e2-4c36-8cb7-d79439a5f67c", "instance_id": null, "ip": "172.24.4.17", "pool": "public" }, { "fixed_ip": null, "id": "05ef7490-745a-4af9-98e5-610dc97493c4", "instance_id": null, "ip": "172.24.4.78", "pool": "public" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4184692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hosts/0000775000175000017500000000000000000000000026445 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 
mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hosts/host-get-reboot.json.tpl0000664000175000017500000000007600000000000033163 0ustar00zuulzuul00000000000000{ "host": "%(host_name)s", "power_action": "reboot" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hosts/host-get-resp.json.tpl0000664000175000017500000000131600000000000032640 0ustar00zuulzuul00000000000000{ "host": [ { "resource": { "cpu": 2, "disk_gb": 1028, "host": "%(host_name)s", "memory_mb": 8192, "project": "(total)" } }, { "resource": { "cpu": 0, "disk_gb": 0, "host": "%(host_name)s", "memory_mb": 512, "project": "(used_now)" } }, { "resource": { "cpu": 0, "disk_gb": 0, "host": "%(host_name)s", "memory_mb": 0, "project": "(used_max)" } } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hosts/host-get-shutdown.json.tpl0000664000175000017500000000010000000000000033530 0ustar00zuulzuul00000000000000{ "host": "%(host_name)s", "power_action": "shutdown" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hosts/host-get-startup.json.tpl0000664000175000017500000000007700000000000033374 0ustar00zuulzuul00000000000000{ "host": "%(host_name)s", "power_action": "startup" } ././@PaxHeader0000000000000000000000000000021100000000000011447 xustar0000000000000000115 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hosts/host-put-maintenance-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hosts/host-put-maintenance-req.jso0000664000175000017500000000007600000000000034015 0ustar00zuulzuul00000000000000{ "status": "enable", "maintenance_mode": "disable" } ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hosts/host-put-maintenance-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hosts/host-put-maintenance-resp.js0000664000175000017500000000014400000000000034014 0ustar00zuulzuul00000000000000{ "host": "%(host_name)s", "maintenance_mode": "off_maintenance", "status": "enabled" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hosts/hosts-list-resp.json.tpl0000664000175000017500000000063000000000000033215 0ustar00zuulzuul00000000000000{ "hosts": [ { "host_name": "%(host_name)s", "service": "conductor", "zone": "internal" }, { "host_name": "%(host_name)s", "service": "compute", "zone": "nova" }, { "host_name": "%(host_name)s", "service": "scheduler", "zone": "internal" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4184692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/0000775000175000017500000000000000000000000027702 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/hypervisors-detail-resp.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/hypervisors-detail-res0000664000175000017500000000177500000000000034263 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "cpu_info": "{\"arch\": \"x86_64\", \"model\": \"Nehalem\", \"vendor\": \"Intel\", \"features\": [\"pge\", \"clflush\"], \"topology\": {\"cores\": 1, \"threads\": 1, \"sockets\": 4}}", "current_workload": 0, "state": "up", "status": "enabled", "disk_available_least": 0, "host_ip": "%(ip)s", "free_disk_gb": 1028, "free_ram_mb": 7680, "hypervisor_hostname": "fake-mini", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": %(hypervisor_id)s, "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "service": { "host": "%(host_name)s", "id": "%(int:service_id)s", "disabled_reason": null }, "vcpus": 2, "vcpus_used": 0 } ] } ././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/hypervisors-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/hypervisors-list-resp.0000664000175000017500000000026300000000000034221 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "state": "up", "status": "enabled", "id": 1 } ] } ././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/hypervisors-search-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/hypervisors-search-res0000664000175000017500000000026300000000000034255 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "id": 1, "state": "up", "status": "enabled" } ] } ././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/hypervisors-show-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/hypervisors-show-resp.0000664000175000017500000000161000000000000034223 0ustar00zuulzuul00000000000000{ "hypervisor": { "cpu_info": "{\"arch\": \"x86_64\", \"model\": \"Nehalem\", \"vendor\": \"Intel\", \"features\": [\"pge\", \"clflush\"], \"topology\": {\"cores\": 1, \"threads\": 1, \"sockets\": 4}}", "current_workload": 0, "disk_available_least": 0, "state": "up", "status": "enabled", "host_ip": "%(ip)s", "free_disk_gb": 1028, "free_ram_mb": 7680, "hypervisor_hostname": "fake-mini", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": %(hypervisor_id)s, "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "service": { "host": "%(host_name)s", "id": "%(int:service_id)s", "disabled_reason": null }, "vcpus": 2, "vcpus_used": 0 } } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/hypervisors-statistics-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/hypervisors-statistics0000664000175000017500000000055700000000000034421 0ustar00zuulzuul00000000000000{ "hypervisor_statistics": { "count": 1, "current_workload": 0, "disk_available_least": 0, "free_disk_gb": 1028, "free_ram_mb": 7680, 
"local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "vcpus": 2, "vcpus_used": 0 } }././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/hypervisors-uptime-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/hypervisors-uptime-res0000664000175000017500000000037200000000000034314 0ustar00zuulzuul00000000000000{ "hypervisor": { "hypervisor_hostname": "fake-mini", "id": %(hypervisor_id)s, "state": "up", "status": "enabled", "uptime": " 08:32:11 up 93 days, 18:25, 12 users, load average: 0.20, 0.12, 0.14" } } ././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/hypervisors-with-servers-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/hypervisors-with-serve0000664000175000017500000000100200000000000034306 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "id": 1, "state": "up", "status": "enabled", "servers": [ { "name": "test_server1", "uuid": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" }, { "name": "test_server2", "uuid": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb" } ] } ] } ././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/hypervisors-without-servers-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/hypervisors-without-se0000664000175000017500000000026300000000000034331 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "id": 1, "state": "up", "status": "enabled" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4184692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.28/0000775000175000017500000000000000000000000030461 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.28/hypervisors-detail-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.28/hypervisors-deta0000664000175000017500000000230700000000000033716 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "cpu_info": { "arch": "x86_64", "model": "Nehalem", "vendor": "Intel", "features": [ "pge", "clflush" ], "topology": { "cores": 1, "threads": 1, "sockets": 4 } }, "current_workload": 0, "state": "up", "status": "enabled", "disk_available_least": 0, "host_ip": "%(ip)s", "free_disk_gb": 1028, "free_ram_mb": 7680, "hypervisor_hostname": "fake-mini", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": %(hypervisor_id)s, "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "service": { "host": "%(host_name)s", "id": "%(int:service_id)s", "disabled_reason": null }, "vcpus": 2, "vcpus_used": 0 } ] } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.28/hypervisors-list-resp.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.28/hypervisors-list0000664000175000017500000000026300000000000033753 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "state": "up", "status": "enabled", "id": 1 } ] } ././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.28/hypervisors-search-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.28/hypervisors-sear0000664000175000017500000000026300000000000033732 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "id": 1, "state": "up", "status": "enabled" } ] } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.28/hypervisors-show-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.28/hypervisors-show0000664000175000017500000000203600000000000033760 0ustar00zuulzuul00000000000000{ "hypervisor": { "cpu_info": { "arch": "x86_64", "model": "Nehalem", "vendor": "Intel", "features": [ "pge", "clflush" ], "topology": { "cores": 1, "threads": 1, "sockets": 4 } }, "current_workload": 0, "disk_available_least": 0, "state": "up", "status": "enabled", "host_ip": "%(ip)s", "free_disk_gb": 1028, "free_ram_mb": 7680, "hypervisor_hostname": "fake-mini", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": %(hypervisor_id)s, "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "service": { "host": "%(host_name)s", "id": "%(int:service_id)s", "disabled_reason": null }, "vcpus": 2, "vcpus_used": 0 } } ././@PaxHeader0000000000000000000000000000023000000000000011450 xustar0000000000000000130 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.28/hypervisors-statistics-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.28/hypervisors-stat0000664000175000017500000000055700000000000033761 0ustar00zuulzuul00000000000000{ "hypervisor_statistics": { "count": 1, "current_workload": 0, "disk_available_least": 0, "free_disk_gb": 1028, "free_ram_mb": 7680, "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "vcpus": 2, "vcpus_used": 0 } }././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.28/hypervisors-uptime-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.28/hypervisors-upti0000664000175000017500000000037200000000000033762 0ustar00zuulzuul00000000000000{ "hypervisor": { "hypervisor_hostname": "fake-mini", "id": %(hypervisor_id)s, "state": "up", "status": "enabled", "uptime": " 08:32:11 up 93 days, 18:25, 12 users, load average: 0.20, 0.12, 0.14" } } ././@PaxHeader0000000000000000000000000000023200000000000011452 xustar0000000000000000132 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.28/hypervisors-with-servers-resp.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.28/hypervisors-with0000664000175000017500000000100200000000000033743 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "id": 1, "state": "up", "status": "enabled", "servers": [ { "name": "test_server1", "uuid": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" }, { "name": "test_server2", "uuid": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb" } ] } ] } ././@PaxHeader0000000000000000000000000000023500000000000011455 xustar0000000000000000135 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.28/hypervisors-without-servers-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.28/hypervisors-with0000664000175000017500000000026300000000000033753 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "id": 1, "state": "up", "status": "enabled" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4184692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.33/0000775000175000017500000000000000000000000030455 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.33/hypervisors-detail-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.33/hypervisors-deta0000664000175000017500000000262400000000000033714 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "cpu_info": { "arch": "x86_64", "model": "Nehalem", "vendor": "Intel", "features": [ "pge", "clflush" ], "topology": { "cores": 1, "threads": 1, "sockets": 4 } }, "current_workload": 0, "state": "up", "status": "enabled", "disk_available_least": 0, "host_ip": "%(ip)s", "free_disk_gb": 1028, "free_ram_mb": 7680, "hypervisor_hostname": "host1", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": %(hypervisor_id)s, "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "service": { "host": "%(host_name)s", "id": "%(int:service_id)s", "disabled_reason": null }, "vcpus": 2, "vcpus_used": 0 } ], "hypervisors_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/os-hypervisors/detail?limit=1&marker=2", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.33/hypervisors-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.33/hypervisors-list0000664000175000017500000000057100000000000033751 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "host1", "id": 2, "state": "up", "status": "enabled" } ], "hypervisors_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/os-hypervisors?limit=1&marker=2", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4224691 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/0000775000175000017500000000000000000000000030457 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 
path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-detail-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-deta0000664000175000017500000000264200000000000033716 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "cpu_info": { "arch": "x86_64", "model": "Nehalem", "vendor": "Intel", "features": [ "pge", "clflush" ], "topology": { "cores": 1, "threads": 1, "sockets": 4 } }, "current_workload": 0, "state": "up", "status": "enabled", "disk_available_least": 0, "host_ip": "%(ip)s", "free_disk_gb": 1028, "free_ram_mb": 7680, "hypervisor_hostname": "host2", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": "%(hypervisor_id)s", "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "service": { "host": "%(host_name)s", "id": "%(service_id)s", "disabled_reason": null }, "vcpus": 2, "vcpus_used": 0 } ], "hypervisors_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/os-hypervisors/detail?limit=1&marker=%(hypervisor_id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000024100000000000011452 xustar0000000000000000139 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-detail-with-servers-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-deta0000664000175000017500000000302400000000000033711 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "cpu_info": { "arch": "x86_64", "model": "Nehalem", "vendor": "Intel", "features": [ "pge", "clflush" ], "topology": { "cores": 1, "threads": 1, "sockets": 4 } }, "current_workload": 0, "state": "up", "status": "enabled", "servers": [ { "name": "test_server1", "uuid": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" }, { "name": "test_server2", "uuid": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb" } ], "disk_available_least": 0, "host_ip": "%(ip)s", "free_disk_gb": 1028, "free_ram_mb": 7680, "hypervisor_hostname": "fake-mini", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": "%(hypervisor_id)s", "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "service": { "host": "%(host_name)s", "id": "%(service_id)s", "disabled_reason": null }, "vcpus": 2, "vcpus_used": 0 } ] } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-list0000664000175000017500000000063300000000000033752 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "host2", "id": "%(hypervisor_id)s", "state": "up", "status": "enabled" } ], "hypervisors_links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/os-hypervisors?limit=1&marker=%(hypervisor_id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-search-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-sear0000664000175000017500000000030500000000000033725 
0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "id": "%(hypervisor_id)s", "state": "up", "status": "enabled" } ] } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-show-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-show0000664000175000017500000000203400000000000033754 0ustar00zuulzuul00000000000000{ "hypervisor": { "cpu_info": { "arch": "x86_64", "model": "Nehalem", "vendor": "Intel", "features": [ "pge", "clflush" ], "topology": { "cores": 1, "threads": 1, "sockets": 4 } }, "current_workload": 0, "disk_available_least": 0, "state": "up", "status": "enabled", "host_ip": "%(ip)s", "free_disk_gb": 1028, "free_ram_mb": 7680, "hypervisor_hostname": "fake-mini", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": "%(hypervisor_id)s", "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "service": { "host": "%(host_name)s", "id": "%(service_id)s", "disabled_reason": null }, "vcpus": 2, "vcpus_used": 0 } } ././@PaxHeader0000000000000000000000000000023700000000000011457 xustar0000000000000000137 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-show-with-servers-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-show0000664000175000017500000000250300000000000033755 0ustar00zuulzuul00000000000000{ "hypervisor": { "cpu_info": { "arch": "x86_64", "model": "Nehalem", "vendor": "Intel", "features": [ "pge", "clflush" ], "topology": { "cores": 1, "threads": 1, "sockets": 4 } }, "current_workload": 0, "disk_available_least": 0, "state": "up", "status": "enabled", "servers": [ { "name": "test_server1", "uuid": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" }, { "name": "test_server2", "uuid": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb" } ], "host_ip": "%(ip)s", "free_disk_gb": 1028, "free_ram_mb": 7680, "hypervisor_hostname": "fake-mini", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": "%(hypervisor_id)s", "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "service": { "host": "%(host_name)s", "id": "%(service_id)s", "disabled_reason": null }, "vcpus": 2, "vcpus_used": 0 } } ././@PaxHeader0000000000000000000000000000023000000000000011450 xustar0000000000000000130 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-statistics-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-stat0000664000175000017500000000055700000000000033757 0ustar00zuulzuul00000000000000{ "hypervisor_statistics": { "count": 1, "current_workload": 0, "disk_available_least": 0, "free_disk_gb": 1028, "free_ram_mb": 7680, "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "vcpus": 2, "vcpus_used": 0 } }././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-uptime-resp.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-upti0000664000175000017500000000037400000000000033762 0ustar00zuulzuul00000000000000{ "hypervisor": { "hypervisor_hostname": "fake-mini", "id": "%(hypervisor_id)s", "state": "up", "status": "enabled", "uptime": " 08:32:11 up 93 days, 18:25, 12 users, load average: 0.20, 0.12, 0.14" } } ././@PaxHeader0000000000000000000000000000023200000000000011452 xustar0000000000000000132 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-with-servers-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-with0000664000175000017500000000102400000000000033745 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "id": "%(hypervisor_id)s", "state": "up", "status": "enabled", "servers": [ { "name": "test_server1", "uuid": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" }, { "name": "test_server2", "uuid": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb" } ] } ] } ././@PaxHeader0000000000000000000000000000023500000000000011455 xustar0000000000000000135 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-without-servers-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-with0000664000175000017500000000030500000000000033746 0ustar00zuulzuul00000000000000{ "hypervisors": [ { "hypervisor_hostname": "fake-mini", "id": "%(hypervisor_id)s", "state": "up", "status": "enabled" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4224691 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/0000775000175000017500000000000000000000000030547 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/instance-action-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/instance-action-g0000664000175000017500000000103700000000000033776 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "message": null, "events": [ { "event": "compute_stop_instance", "start_time": "%(strtime)s", "finish_time": "%(strtime)s", "result": "Success", "traceback": null } ] } } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/instance-actions-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/instance-actions-0000664000175000017500000000114200000000000034007 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "message": null }, { "action": "create", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "message": null } ] } 
././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4224691 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.21/0000775000175000017500000000000000000000000031317 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000023200000000000011452 xustar0000000000000000132 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.21/instance-action-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.21/instance-ac0000664000175000017500000000113700000000000033431 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "message": null, "events": [ { "event": "compute_stop_instance", "start_time": "%(strtime)s", "finish_time": "%(strtime)s", "result": "Success", "traceback": null } ] } } ././@PaxHeader0000000000000000000000000000023400000000000011454 xustar0000000000000000134 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.21/instance-actions-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.21/instance-ac0000664000175000017500000000114200000000000033425 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "message": null }, { "action": "create", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "message": null } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4224691 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.51/0000775000175000017500000000000000000000000031322 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000024400000000000011455 xustar0000000000000000142 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.51/instance-action-get-non-admin-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.51/instance-ac0000664000175000017500000000077400000000000033442 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "message": null, "events": [ { "event": "compute_stop_instance", "start_time": "%(strtime)s", "finish_time": "%(strtime)s", "result": "Success" } ] } } ././@PaxHeader0000000000000000000000000000023200000000000011452 xustar0000000000000000132 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.51/instance-action-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.51/instance-ac0000664000175000017500000000103700000000000033433 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", 
"message": null, "events": [ { "event": "compute_stop_instance", "start_time": "%(strtime)s", "finish_time": "%(strtime)s", "result": "Success", "traceback": null } ] } } ././@PaxHeader0000000000000000000000000000023400000000000011454 xustar0000000000000000134 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.51/instance-actions-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.51/instance-ac0000664000175000017500000000114200000000000033430 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "message": null }, { "action": "create", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "message": null } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4224691 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.58/0000775000175000017500000000000000000000000031331 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000024400000000000011455 xustar0000000000000000142 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.58/instance-action-get-non-admin-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.58/instance-ac0000664000175000017500000000104100000000000033435 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null, "events": [ { "event": "compute_stop_instance", "start_time": "%(strtime)s", "finish_time": "%(strtime)s", "result": "Success" } ] } } ././@PaxHeader0000000000000000000000000000023200000000000011452 xustar0000000000000000132 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.58/instance-action-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.58/instance-ac0000664000175000017500000000110400000000000033435 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null, "events": [ { "event": "compute_stop_instance", "start_time": "%(strtime)s", "finish_time": "%(strtime)s", "result": "Success", "traceback": null } ] } } ././@PaxHeader0000000000000000000000000000023400000000000011454 xustar0000000000000000134 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.58/instance-actions-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.58/instance-ac0000664000175000017500000000126400000000000033444 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null }, { 
"action": "create", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null } ] } ././@PaxHeader0000000000000000000000000000025200000000000011454 xustar0000000000000000148 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.58/instance-actions-list-with-changes-since.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.58/instance-ac0000664000175000017500000000055200000000000033443 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null } ] } ././@PaxHeader0000000000000000000000000000024700000000000011460 xustar0000000000000000145 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.58/instance-actions-list-with-limit-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.58/instance-ac0000664000175000017500000000104700000000000033443 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null } ], "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s/os-instance-actions?limit=1&marker=%(request_id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000025000000000000011452 xustar0000000000000000146 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.58/instance-actions-list-with-marker-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.58/instance-ac0000664000175000017500000000055400000000000033445 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "create", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4224691 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.62/0000775000175000017500000000000000000000000031324 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000024400000000000011455 xustar0000000000000000142 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.62/instance-action-get-non-admin-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.62/instance-ac0000664000175000017500000000111700000000000033434 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null, "events": [ { "event": "compute_stop_instance", "start_time": "%(strtime)s", "finish_time": "%(strtime)s", "result": "Success", "hostId": "%(event_hostId)s" } ] } } 
././@PaxHeader0000000000000000000000000000023200000000000011452 xustar0000000000000000132 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.62/instance-action-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.62/instance-ac0000664000175000017500000000123400000000000033434 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null, "events": [ { "event": "compute_stop_instance", "start_time": "%(strtime)s", "finish_time": "%(strtime)s", "result": "Success", "traceback": null, "host": "%(event_host)s", "hostId": "%(event_hostId)s" } ] } } ././@PaxHeader0000000000000000000000000000023400000000000011454 xustar0000000000000000134 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.62/instance-actions-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.62/instance-ac0000664000175000017500000000126400000000000033437 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null }, { "action": "create", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null } ] } ././@PaxHeader0000000000000000000000000000025200000000000011454 xustar0000000000000000148 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.62/instance-actions-list-with-changes-since.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.62/instance-ac0000664000175000017500000000055200000000000033436 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null } ] } ././@PaxHeader0000000000000000000000000000024700000000000011460 xustar0000000000000000145 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.62/instance-actions-list-with-limit-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.62/instance-ac0000664000175000017500000000104700000000000033436 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null } ], "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s/os-instance-actions?limit=1&marker=%(request_id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000025000000000000011452 xustar0000000000000000146 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.62/instance-actions-list-with-marker-resp.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.62/instance-ac0000664000175000017500000000055400000000000033440 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "create", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null } ] } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.426469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.66/0000775000175000017500000000000000000000000031330 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000024400000000000011455 xustar0000000000000000142 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.66/instance-action-get-non-admin-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.66/instance-ac0000664000175000017500000000111700000000000033440 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null, "events": [ { "event": "compute_stop_instance", "start_time": "%(strtime)s", "finish_time": "%(strtime)s", "result": "Success", "hostId": "%(event_hostId)s" } ] } } ././@PaxHeader0000000000000000000000000000023200000000000011452 xustar0000000000000000132 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.66/instance-action-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.66/instance-ac0000664000175000017500000000123400000000000033440 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null, "events": [ { "event": "compute_stop_instance", "start_time": "%(strtime)s", "finish_time": "%(strtime)s", "result": "Success", "traceback": null, "host": "%(event_host)s", "hostId": "%(event_hostId)s" } ] } } ././@PaxHeader0000000000000000000000000000023400000000000011454 xustar0000000000000000134 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.66/instance-actions-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.66/instance-ac0000664000175000017500000000126400000000000033443 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null }, { "action": "create", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null } ] } ././@PaxHeader0000000000000000000000000000025300000000000011455 xustar0000000000000000149 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.66/instance-actions-list-with-changes-before.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.66/instance-ac0000664000175000017500000000126400000000000033443 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null }, { "action": "create", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null } ] } ././@PaxHeader0000000000000000000000000000025200000000000011454 xustar0000000000000000148 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.66/instance-actions-list-with-changes-since.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.66/instance-ac0000664000175000017500000000055200000000000033442 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null } ] } ././@PaxHeader0000000000000000000000000000024700000000000011460 xustar0000000000000000145 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.66/instance-actions-list-with-limit-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.66/instance-ac0000664000175000017500000000104700000000000033442 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null } ], "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s/os-instance-actions?limit=1&marker=%(request_id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000025000000000000011452 xustar0000000000000000146 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.66/instance-actions-list-with-marker-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.66/instance-ac0000664000175000017500000000055400000000000033444 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "create", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null } ] } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.426469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.84/0000775000175000017500000000000000000000000031330 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000024400000000000011455 xustar0000000000000000142 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.84/instance-action-get-non-admin-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.84/instance-ac0000664000175000017500000000111700000000000033440 0ustar00zuulzuul00000000000000{ "instanceAction": { 
"action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null, "events": [ { "event": "compute_stop_instance", "start_time": "%(strtime)s", "finish_time": "%(strtime)s", "result": "Success", "hostId": "%(event_hostId)s" } ] } } ././@PaxHeader0000000000000000000000000000023200000000000011452 xustar0000000000000000132 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.84/instance-action-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.84/instance-ac0000664000175000017500000000127500000000000033445 0ustar00zuulzuul00000000000000{ "instanceAction": { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null, "events": [ { "event": "compute_stop_instance", "start_time": "%(strtime)s", "finish_time": "%(strtime)s", "result": "Success", "traceback": null, "host": "%(event_host)s", "hostId": "%(event_hostId)s", "details": null } ] } } ././@PaxHeader0000000000000000000000000000023400000000000011454 xustar0000000000000000134 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.84/instance-actions-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.84/instance-ac0000664000175000017500000000126400000000000033443 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null }, { "action": "create", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null } ] } ././@PaxHeader0000000000000000000000000000025300000000000011455 xustar0000000000000000149 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.84/instance-actions-list-with-changes-before.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.84/instance-ac0000664000175000017500000000126400000000000033443 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null }, { "action": "create", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null } ] } ././@PaxHeader0000000000000000000000000000025200000000000011454 xustar0000000000000000148 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.84/instance-actions-list-with-changes-since.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.84/instance-ac0000664000175000017500000000055200000000000033442 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "%(uuid)s", 
"request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null } ] } ././@PaxHeader0000000000000000000000000000024700000000000011460 xustar0000000000000000145 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.84/instance-actions-list-with-limit-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.84/instance-ac0000664000175000017500000000104700000000000033442 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "stop", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null } ], "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s/os-instance-actions?limit=1&marker=%(request_id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000025000000000000011452 xustar0000000000000000146 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.84/instance-actions-list-with-marker-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-actions/v2.84/instance-ac0000664000175000017500000000055400000000000033444 0ustar00zuulzuul00000000000000{ "instanceActions": [ { "action": "create", "instance_uuid": "%(uuid)s", "request_id": "%(request_id)s", "user_id": "%(user_id)s", "project_id": "%(project_id)s", "start_time": "%(strtime)s", "updated_at": "%(strtime)s", "message": null } ] } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.426469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-usage-audit-log/0000775000175000017500000000000000000000000032076 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000024700000000000011460 xustar0000000000000000145 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-usage-audit-log/inst-usage-audit-log-index-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-usage-audit-log/inst-usag0000664000175000017500000000225300000000000033735 0ustar00zuulzuul00000000000000{ "instance_usage_audit_logs": { "hosts_not_run": [ "samplehost3" ], "log": { "samplehost0": { "errors": 1, "instances": 1, "message": "Instance usage audit ran for host samplehost0, 1 instances in 0.01 seconds.", "state": "DONE" }, "samplehost1": { "errors": 1, "instances": 2, "message": "Instance usage audit ran for host samplehost1, 2 instances in 0.01 seconds.", "state": "DONE" }, "samplehost2": { "errors": 1, "instances": 3, "message": "Instance usage audit ran for host samplehost2, 3 instances in 0.01 seconds.", "state": "DONE" } }, "num_hosts": 4, "num_hosts_done": 3, "num_hosts_not_run": 1, "num_hosts_running": 0, "overall_status": "3 of 4 hosts done. 
3 errors.", "period_beginning": "2012-06-01 00:00:00", "period_ending": "2012-07-01 00:00:00", "total_errors": 3, "total_instances": 6 } } ././@PaxHeader0000000000000000000000000000024600000000000011457 xustar0000000000000000144 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-usage-audit-log/inst-usage-audit-log-show-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-instance-usage-audit-log/inst-usag0000664000175000017500000000225200000000000033734 0ustar00zuulzuul00000000000000{ "instance_usage_audit_log": { "hosts_not_run": [ "samplehost3" ], "log": { "samplehost0": { "errors": 1, "instances": 1, "message": "Instance usage audit ran for host samplehost0, 1 instances in 0.01 seconds.", "state": "DONE" }, "samplehost1": { "errors": 1, "instances": 2, "message": "Instance usage audit ran for host samplehost1, 2 instances in 0.01 seconds.", "state": "DONE" }, "samplehost2": { "errors": 1, "instances": 3, "message": "Instance usage audit ran for host samplehost2, 3 instances in 0.01 seconds.", "state": "DONE" } }, "num_hosts": 4, "num_hosts_done": 3, "num_hosts_not_run": 1, "num_hosts_running": 0, "overall_status": "3 of 4 hosts done. 3 errors.", "period_beginning": "2012-06-01 00:00:00", "period_ending": "2012-07-01 00:00:00", "total_errors": 3, "total_instances": 6 } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.426469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/0000775000175000017500000000000000000000000027134 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/keypairs-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/keypairs-get-resp.json.tp0000664000175000017500000000046300000000000034027 0ustar00zuulzuul00000000000000{ "keypair": { "public_key": "%(public_key)s", "name": "%(keypair_name)s", "fingerprint": "%(fingerprint)s", "user_id": "fake", "deleted": false, "created_at": "%(strtime)s", "updated_at": null, "deleted_at": null, "id": 1 } } ././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/keypairs-import-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/keypairs-import-post-req.0000664000175000017500000000014600000000000034045 0ustar00zuulzuul00000000000000{ "keypair": { "name": "%(keypair_name)s", "public_key": "%(public_key)s" } } ././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/keypairs-import-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/keypairs-import-post-resp0000664000175000017500000000025300000000000034150 0ustar00zuulzuul00000000000000{ "keypair": { "fingerprint": "%(fingerprint)s", "name": "%(keypair_name)s", "public_key": "%(public_key)s", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/keypairs-list-resp.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/keypairs-list-resp.json.t0000664000175000017500000000034400000000000034041 0ustar00zuulzuul00000000000000{ "keypairs": [ { "keypair": { "fingerprint": "%(fingerprint)s", "name": "%(keypair_name)s", "public_key": "%(public_key)s" } } ] } ././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/keypairs-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/keypairs-post-req.json.tp0000664000175000017500000000007600000000000034053 0ustar00zuulzuul00000000000000{ "keypair": { "name": "%(keypair_name)s" } } ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/keypairs-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/keypairs-post-resp.json.t0000664000175000017500000000032500000000000034052 0ustar00zuulzuul00000000000000{ "keypair": { "fingerprint": "%(fingerprint)s", "name": "%(keypair_name)s", "private_key": "%(private_key)s", "public_key": "%(public_key)s", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.426469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.10/0000775000175000017500000000000000000000000027702 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.10/keypairs-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.10/keypairs-get-resp.j0000664000175000017500000000053600000000000033434 0ustar00zuulzuul00000000000000{ "keypair": { "public_key": "%(public_key)s", "name": "%(keypair_name)s", "type": "%(keypair_type)s", "fingerprint": "%(fingerprint)s", "user_id": "%(user_id)s", "deleted": false, "created_at": "%(strtime)s", "updated_at": null, "deleted_at": null, "id": 1 } } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.10/keypairs-import-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.10/keypairs-import-pos0000664000175000017500000000025400000000000033564 0ustar00zuulzuul00000000000000{ "keypair": { "name": "%(keypair_name)s", "type": "%(keypair_type)s", "public_key": "%(public_key)s", "user_id": "%(user_id)s" } } ././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.10/keypairs-import-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.10/keypairs-import-pos0000664000175000017500000000032600000000000033564 0ustar00zuulzuul00000000000000{ "keypair": { "fingerprint": "%(fingerprint)s", "name": "%(keypair_name)s", "type": "%(keypair_type)s", "public_key": "%(public_key)s", "user_id": "%(user_id)s" } } ././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 
path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.10/keypairs-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.10/keypairs-list-resp.0000664000175000017500000000042000000000000033446 0ustar00zuulzuul00000000000000{ "keypairs": [ { "keypair": { "fingerprint": "%(fingerprint)s", "name": "%(keypair_name)s", "type": "%(keypair_type)s", "public_key": "%(public_key)s" } } ] } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.10/keypairs-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.10/keypairs-post-req.j0000664000175000017500000000020400000000000033450 0ustar00zuulzuul00000000000000{ "keypair": { "name": "%(keypair_name)s", "type": "%(keypair_type)s", "user_id": "%(user_id)s" } } ././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.10/keypairs-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.10/keypairs-post-resp.0000664000175000017500000000040000000000000033456 0ustar00zuulzuul00000000000000{ "keypair": { "fingerprint": "%(fingerprint)s", "name": "%(keypair_name)s", "type": "%(keypair_type)s", "private_key": "%(private_key)s", "public_key": "%(public_key)s", "user_id": "%(user_id)s" } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.426469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.2/0000775000175000017500000000000000000000000027623 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.2/keypairs-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.2/keypairs-get-resp.js0000664000175000017500000000052700000000000033540 0ustar00zuulzuul00000000000000{ "keypair": { "public_key": "%(public_key)s", "name": "%(keypair_name)s", "type": "%(keypair_type)s", "fingerprint": "%(fingerprint)s", "user_id": "fake", "deleted": false, "created_at": "%(strtime)s", "updated_at": null, "deleted_at": null, "id": 1 } } ././@PaxHeader0000000000000000000000000000022100000000000011450 xustar0000000000000000123 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.2/keypairs-import-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.2/keypairs-import-post0000664000175000017500000000021200000000000033663 0ustar00zuulzuul00000000000000{ "keypair": { "name": "%(keypair_name)s", "type": "%(keypair_type)s", "public_key": "%(public_key)s" } } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.2/keypairs-import-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.2/keypairs-import-post0000664000175000017500000000031700000000000033671 0ustar00zuulzuul00000000000000{ "keypair": { "fingerprint": "%(fingerprint)s", "name": "%(keypair_name)s", "type": "%(keypair_type)s", 
"public_key": "%(public_key)s", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.2/keypairs-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.2/keypairs-list-resp.j0000664000175000017500000000042000000000000033541 0ustar00zuulzuul00000000000000{ "keypairs": [ { "keypair": { "fingerprint": "%(fingerprint)s", "name": "%(keypair_name)s", "type": "%(keypair_type)s", "public_key": "%(public_key)s" } } ] } ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.2/keypairs-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.2/keypairs-post-req.js0000664000175000017500000000014200000000000033555 0ustar00zuulzuul00000000000000{ "keypair": { "name": "%(keypair_name)s", "type": "%(keypair_type)s" } } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.2/keypairs-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.2/keypairs-post-resp.j0000664000175000017500000000037100000000000033560 0ustar00zuulzuul00000000000000{ "keypair": { "fingerprint": "%(fingerprint)s", "name": "%(keypair_name)s", "type": "%(keypair_type)s", "private_key": "%(private_key)s", "public_key": "%(public_key)s", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.430469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.35/0000775000175000017500000000000000000000000027711 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.35/keypairs-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.35/keypairs-list-resp.0000664000175000017500000000067700000000000033473 0ustar00zuulzuul00000000000000{ "keypairs": [ { "keypair": { "fingerprint": "%(fingerprint)s", "name": "%(keypair_name)s", "type": "%(keypair_type)s", "public_key": "%(public_key)s" } } ], "keypairs_links": [ { "href": "%(versioned_compute_endpoint)s/os-keypairs?limit=1&marker=%(keypair_name)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.35/keypairs-list-user1-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.35/keypairs-list-user10000664000175000017500000000042000000000000033465 0ustar00zuulzuul00000000000000{ "keypairs": [ { "keypair": { "fingerprint": "%(fingerprint)s", "name": "%(keypair_name)s", "type": "%(keypair_type)s", "public_key": "%(public_key)s" } } ] } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.35/keypairs-list-user2-resp.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.35/keypairs-list-user20000664000175000017500000000071500000000000033475 0ustar00zuulzuul00000000000000{ "keypairs": [ { "keypair": { "fingerprint": "%(fingerprint)s", "name": "%(keypair_name)s", "type": "%(keypair_type)s", "public_key": "%(public_key)s" } } ], "keypairs_links": [ { "href": "%(versioned_compute_endpoint)s/os-keypairs?limit=1&marker=%(keypair_name)s&user_id=user2", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.35/keypairs-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.35/keypairs-post-req.j0000664000175000017500000000020400000000000033457 0ustar00zuulzuul00000000000000{ "keypair": { "name": "%(keypair_name)s", "type": "%(keypair_type)s", "user_id": "%(user_id)s" } } ././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.35/keypairs-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-keypairs/v2.35/keypairs-post-resp.0000664000175000017500000000040000000000000033465 0ustar00zuulzuul00000000000000{ "keypair": { "fingerprint": "%(fingerprint)s", "name": "%(keypair_name)s", "type": "%(keypair_type)s", "private_key": "%(private_key)s", "public_key": "%(public_key)s", "user_id": "%(user_id)s" } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.430469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-lock-server/0000775000175000017500000000000000000000000027541 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-lock-server/lock-server.json.tpl0000664000175000017500000000002500000000000033463 0ustar00zuulzuul00000000000000{ "lock": null } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-lock-server/unlock-server.json.tpl0000664000175000017500000000002700000000000034030 0ustar00zuulzuul00000000000000{ "unlock": null } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.430469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-lock-server/v2.73/0000775000175000017500000000000000000000000030320 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-lock-server/v2.73/lock-server-with-reason.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-lock-server/v2.73/lock-server-with0000664000175000017500000000007200000000000033447 0ustar00zuulzuul00000000000000{ "lock": {"locked_reason": "I don't want to work"} } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-lock-server/v2.73/lock-server.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-lock-server/v2.73/lock-server.json0000664000175000017500000000002500000000000033444 0ustar00zuulzuul00000000000000{ "lock": null } ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-lock-server/v2.73/unlock-server.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-lock-server/v2.73/unlock-server.js0000664000175000017500000000002700000000000033454 0ustar00zuulzuul00000000000000{ "unlock": null } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.430469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrate-server/0000775000175000017500000000000000000000000030241 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrate-server/live-migrate-server.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrate-server/live-migrate-server0000664000175000017500000000020600000000000034053 0ustar00zuulzuul00000000000000{ "os-migrateLive": { "host": "%(hostname)s", "block_migration": false, "disk_over_commit": false } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrate-server/migrate-server.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrate-server/migrate-server.json0000664000175000017500000000003000000000000034061 0ustar00zuulzuul00000000000000{ "migrate": null } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.430469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrate-server/v2.25/0000775000175000017500000000000000000000000031015 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrate-server/v2.25/live-migrate-server.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrate-server/v2.25/live-migrate-0000664000175000017500000000014400000000000033401 0ustar00zuulzuul00000000000000{ "os-migrateLive": { "host": "%(hostname)s", "block_migration": "auto" } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.430469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrate-server/v2.30/0000775000175000017500000000000000000000000031011 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrate-server/v2.30/live-migrate-server.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrate-server/v2.30/live-migrate-0000664000175000017500000000020200000000000033370 0ustar00zuulzuul00000000000000{ "os-migrateLive": { "host": "%(hostname)s", "block_migration": "auto", "force": "%(force)s" } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.430469 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrate-server/v2.56/0000775000175000017500000000000000000000000031021 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrate-server/v2.56/migrate-server-null.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrate-server/v2.56/migrate-serve0000664000175000017500000000003000000000000033507 0ustar00zuulzuul00000000000000{ "migrate": null } ././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrate-server/v2.56/migrate-server.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrate-server/v2.56/migrate-serve0000664000175000017500000000007000000000000033513 0ustar00zuulzuul00000000000000{ "migrate": { "host": %(hostname)s } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.430469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrate-server/v2.68/0000775000175000017500000000000000000000000031024 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrate-server/v2.68/live-migrate-server.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrate-server/v2.68/live-migrate-0000664000175000017500000000014400000000000033410 0ustar00zuulzuul00000000000000{ "os-migrateLive": { "host": "%(hostname)s", "block_migration": "auto" } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.430469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/0000775000175000017500000000000000000000000027461 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/migrations-get.json.tpl0000664000175000017500000000206200000000000034103 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2012-10-29T13:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1234, "instance_uuid": "8600d31b-d1a1-4632-b2ff-45c2be1a70ff", "new_instance_type_id": 2, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "status": "done", "updated_at": "2012-10-29T13:42:02.000000" }, { "created_at": "2013-10-22T13:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 5678, "instance_uuid": "9128d044-7b61-403e-b766-7547076ff6c1", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "done", "updated_at": "2013-10-22T13:42:02.000000" } ] }././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.430469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.23/0000775000175000017500000000000000000000000030233 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 
path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.23/migrations-get.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.23/migrations-get.js0000664000175000017500000000506500000000000033530 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-01-29T13:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "instance_uuid": "%(instance_1)s", "links": [ { "href": "%(host)s/v2.1/6f70656e737461636b20342065766572/servers/%(instance_1)s/migrations/1", "rel": "self" }, { "href": "%(host)s/6f70656e737461636b20342065766572/servers/%(instance_1)s/migrations/1", "rel": "bookmark" } ], "new_instance_type_id": 2, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "migration_type": "live-migration", "status": "running", "updated_at": "2016-01-29T13:42:02.000000" }, { "created_at": "2016-01-29T13:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 2, "instance_uuid": "%(instance_1)s", "new_instance_type_id": 2, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "migration_type": "live-migration", "status": "error", "updated_at": "2016-01-29T13:42:02.000000" }, { "created_at": "2016-01-22T13:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 3, "instance_uuid": "%(instance_2)s", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "migration_type": "resize", "status": "error", "updated_at": "2016-01-22T13:42:02.000000" }, { "created_at": "2016-01-22T13:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 4, "instance_uuid": "%(instance_2)s", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "migration_type": "resize", "status": "migrating", "updated_at": "2016-01-22T13:42:02.000000" } ] } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.430469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.59/0000775000175000017500000000000000000000000030244 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000023500000000000011455 xustar0000000000000000135 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.59/migrations-get-with-changes-since.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.59/migrations-get-wi0000664000175000017500000000232300000000000033535 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-06-23T14:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 4, "instance_uuid": "%(instance_2)s", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "migration_type": "resize", "status": "migrating", "updated_at": "2016-06-23T14:42:02.000000", "uuid": "42341d4b-346a-40d0-83c6-5f4f6892b650" }, { "created_at": "2016-06-23T13:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 3, "instance_uuid": "%(instance_2)s", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "migration_type": "resize", "status": "error", "updated_at": 
"2016-06-23T13:42:02.000000", "uuid": "32341d4b-346a-40d0-83c6-5f4f6892b650" } ] } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.59/migrations-get-with-limit.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.59/migrations-get-wi0000664000175000017500000000146700000000000033545 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-06-23T14:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 4, "instance_uuid": "%(instance_2)s", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "migrating", "migration_type": "resize", "updated_at": "2016-06-23T14:42:02.000000", "uuid": "42341d4b-346a-40d0-83c6-5f4f6892b650" } ], "migrations_links": [{ "href": "%(host)s/v2.1/6f70656e737461636b20342065766572/os-migrations?limit=1&marker=42341d4b-346a-40d0-83c6-5f4f6892b650", "rel": "next" }] } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.59/migrations-get-with-marker.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.59/migrations-get-wi0000664000175000017500000000202200000000000033531 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-01-29T11:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "instance_uuid": "%(instance_1)s", "links": [ { "href": "%(host)s/v2.1/6f70656e737461636b20342065766572/servers/%(instance_1)s/migrations/1", "rel": "self" }, { "href": "%(host)s/6f70656e737461636b20342065766572/servers/%(instance_1)s/migrations/1", "rel": "bookmark" } ], "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "migration_type": "live-migration", "status": "running", "updated_at": "2016-01-29T11:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650" } ] } ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.59/migrations-get.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.59/migrations-get.js0000664000175000017500000000544500000000000033543 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-06-23T14:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 4, "instance_uuid": "%(instance_2)s", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "migration_type": "resize", "status": "migrating", "updated_at": "2016-06-23T14:42:02.000000", "uuid": "42341d4b-346a-40d0-83c6-5f4f6892b650" }, { "created_at": "2016-06-23T13:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 3, "instance_uuid": "%(instance_2)s", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "migration_type": "resize", "status": "error", "updated_at": "2016-06-23T13:42:02.000000", "uuid": "32341d4b-346a-40d0-83c6-5f4f6892b650" }, { "created_at": "2016-01-29T12:42:02.000000", "dest_compute": "compute2", "dest_host": 
"1.2.3.4", "dest_node": "node2", "id": 2, "instance_uuid": "%(instance_1)s", "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "migration_type": "live-migration", "status": "error", "updated_at": "2016-01-29T12:42:02.000000", "uuid": "22341d4b-346a-40d0-83c6-5f4f6892b650" }, { "created_at": "2016-01-29T11:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "instance_uuid": "%(instance_1)s", "links": [ { "href": "%(host)s/v2.1/6f70656e737461636b20342065766572/servers/%(instance_1)s/migrations/1", "rel": "self" }, { "href": "%(host)s/6f70656e737461636b20342065766572/servers/%(instance_1)s/migrations/1", "rel": "bookmark" } ], "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "migration_type": "live-migration", "status": "running", "updated_at": "2016-01-29T11:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650" } ] } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.430469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.66/0000775000175000017500000000000000000000000030242 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000023600000000000011456 xustar0000000000000000136 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.66/migrations-get-with-changes-before.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.66/migrations-get-wi0000664000175000017500000000202200000000000033527 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-01-29T11:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "instance_uuid": "%(instance_1)s", "links": [ { "href": "%(host)s/v2.1/6f70656e737461636b20342065766572/servers/%(instance_1)s/migrations/1", "rel": "self" }, { "href": "%(host)s/6f70656e737461636b20342065766572/servers/%(instance_1)s/migrations/1", "rel": "bookmark" } ], "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "migration_type": "live-migration", "status": "running", "updated_at": "2016-01-29T11:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650" } ] } ././@PaxHeader0000000000000000000000000000023500000000000011455 xustar0000000000000000135 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.66/migrations-get-with-changes-since.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.66/migrations-get-wi0000664000175000017500000000232300000000000033533 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-06-23T14:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 4, "instance_uuid": "%(instance_2)s", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "migration_type": "resize", "status": "migrating", "updated_at": "2016-06-23T14:42:02.000000", "uuid": "42341d4b-346a-40d0-83c6-5f4f6892b650" }, { "created_at": "2016-06-23T13:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 3, "instance_uuid": "%(instance_2)s", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "migration_type": "resize", 
"status": "error", "updated_at": "2016-06-23T13:42:02.000000", "uuid": "32341d4b-346a-40d0-83c6-5f4f6892b650" } ] } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.66/migrations-get-with-limit.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.66/migrations-get-wi0000664000175000017500000000146700000000000033543 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-06-23T14:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 4, "instance_uuid": "%(instance_2)s", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "migrating", "migration_type": "resize", "updated_at": "2016-06-23T14:42:02.000000", "uuid": "42341d4b-346a-40d0-83c6-5f4f6892b650" } ], "migrations_links": [{ "href": "%(host)s/v2.1/6f70656e737461636b20342065766572/os-migrations?limit=1&marker=42341d4b-346a-40d0-83c6-5f4f6892b650", "rel": "next" }] } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.66/migrations-get-with-marker.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.66/migrations-get-wi0000664000175000017500000000202200000000000033527 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-01-29T11:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "instance_uuid": "%(instance_1)s", "links": [ { "href": "%(host)s/v2.1/6f70656e737461636b20342065766572/servers/%(instance_1)s/migrations/1", "rel": "self" }, { "href": "%(host)s/6f70656e737461636b20342065766572/servers/%(instance_1)s/migrations/1", "rel": "bookmark" } ], "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "migration_type": "live-migration", "status": "running", "updated_at": "2016-01-29T11:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650" } ] } ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.66/migrations-get.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.66/migrations-get.js0000664000175000017500000000544500000000000033541 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-06-23T14:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 4, "instance_uuid": "%(instance_2)s", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "migration_type": "resize", "status": "migrating", "updated_at": "2016-06-23T14:42:02.000000", "uuid": "42341d4b-346a-40d0-83c6-5f4f6892b650" }, { "created_at": "2016-06-23T13:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 3, "instance_uuid": "%(instance_2)s", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "migration_type": "resize", "status": "error", "updated_at": "2016-06-23T13:42:02.000000", "uuid": "32341d4b-346a-40d0-83c6-5f4f6892b650" }, { "created_at": "2016-01-29T12:42:02.000000", 
"dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 2, "instance_uuid": "%(instance_1)s", "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "migration_type": "live-migration", "status": "error", "updated_at": "2016-01-29T12:42:02.000000", "uuid": "22341d4b-346a-40d0-83c6-5f4f6892b650" }, { "created_at": "2016-01-29T11:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "instance_uuid": "%(instance_1)s", "links": [ { "href": "%(host)s/v2.1/6f70656e737461636b20342065766572/servers/%(instance_1)s/migrations/1", "rel": "self" }, { "href": "%(host)s/6f70656e737461636b20342065766572/servers/%(instance_1)s/migrations/1", "rel": "bookmark" } ], "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "migration_type": "live-migration", "status": "running", "updated_at": "2016-01-29T11:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4344692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.80/0000775000175000017500000000000000000000000030236 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000023600000000000011456 xustar0000000000000000136 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.80/migrations-get-with-changes-before.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.80/migrations-get-wi0000664000175000017500000000222300000000000033526 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-01-29T11:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "instance_uuid": "%(instance_1)s", "links": [ { "href": "%(host)s/v2.1/6f70656e737461636b20342065766572/servers/%(instance_1)s/migrations/1", "rel": "self" }, { "href": "%(host)s/6f70656e737461636b20342065766572/servers/%(instance_1)s/migrations/1", "rel": "bookmark" } ], "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "migration_type": "live-migration", "status": "running", "updated_at": "2016-01-29T11:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "5c48ebaa-193f-4c5d-948a-f559cc92cd5e", "project_id": "ef92ccff-00f3-46e4-b015-811110e36ee4" } ] } ././@PaxHeader0000000000000000000000000000023500000000000011455 xustar0000000000000000135 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.80/migrations-get-with-changes-since.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.80/migrations-get-wi0000664000175000017500000000272500000000000033535 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-06-23T14:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 4, "instance_uuid": "%(instance_2)s", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "migration_type": "resize", "status": "migrating", "updated_at": "2016-06-23T14:42:02.000000", "uuid": "42341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "78348f0e-97ee-4d70-ad34-189692673ea2", "project_id": "9842f0f7-1229-4355-afe7-15ebdbb8c3d8" }, { "created_at": "2016-06-23T13:42:02.000000", 
"dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 3, "instance_uuid": "%(instance_2)s", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "migration_type": "resize", "status": "error", "updated_at": "2016-06-23T13:42:02.000000", "uuid": "32341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "78348f0e-97ee-4d70-ad34-189692673ea2", "project_id": "9842f0f7-1229-4355-afe7-15ebdbb8c3d8" } ] } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.80/migrations-get-with-limit.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.80/migrations-get-wi0000664000175000017500000000167000000000000033533 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-06-23T14:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 4, "instance_uuid": "%(instance_2)s", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "status": "migrating", "migration_type": "resize", "updated_at": "2016-06-23T14:42:02.000000", "uuid": "42341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "78348f0e-97ee-4d70-ad34-189692673ea2", "project_id": "9842f0f7-1229-4355-afe7-15ebdbb8c3d8" } ], "migrations_links": [{ "href": "%(host)s/v2.1/6f70656e737461636b20342065766572/os-migrations?limit=1&marker=42341d4b-346a-40d0-83c6-5f4f6892b650", "rel": "next" }] } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.80/migrations-get-with-marker.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.80/migrations-get-wi0000664000175000017500000000237500000000000033536 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-01-29T11:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "instance_uuid": "8600d31b-d1a1-4632-b2ff-45c2be1a70ff", "links": [ { "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": "self" }, { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/8600d31b-d1a1-4632-b2ff-45c2be1a70ff/migrations/1", "rel": "bookmark" } ], "migration_type": "live-migration", "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "status": "running", "updated_at": "2016-01-29T11:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "5c48ebaa-193f-4c5d-948a-f559cc92cd5e", "project_id": "ef92ccff-00f3-46e4-b015-811110e36ee4" } ] } ././@PaxHeader0000000000000000000000000000024200000000000011453 xustar0000000000000000140 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.80/migrations-get-with-user-or-project-id.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.80/migrations-get-wi0000664000175000017500000000356100000000000033534 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-01-29T12:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 2, "instance_uuid": "%(instance_1)s", "new_instance_type_id": 1, 
"old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "migration_type": "live-migration", "status": "error", "updated_at": "2016-01-29T12:42:02.000000", "uuid": "22341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "5c48ebaa-193f-4c5d-948a-f559cc92cd5e", "project_id": "ef92ccff-00f3-46e4-b015-811110e36ee4" }, { "created_at": "2016-01-29T11:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "instance_uuid": "%(instance_1)s", "links": [ { "href": "%(host)s/v2.1/6f70656e737461636b20342065766572/servers/%(instance_1)s/migrations/1", "rel": "self" }, { "href": "%(host)s/6f70656e737461636b20342065766572/servers/%(instance_1)s/migrations/1", "rel": "bookmark" } ], "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "migration_type": "live-migration", "status": "running", "updated_at": "2016-01-29T11:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "5c48ebaa-193f-4c5d-948a-f559cc92cd5e", "project_id": "ef92ccff-00f3-46e4-b015-811110e36ee4" } ] } ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.80/migrations-get.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-migrations/v2.80/migrations-get.js0000664000175000017500000000645100000000000033533 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-06-23T14:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 4, "instance_uuid": "%(instance_2)s", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "migration_type": "resize", "status": "migrating", "updated_at": "2016-06-23T14:42:02.000000", "uuid": "42341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "78348f0e-97ee-4d70-ad34-189692673ea2", "project_id": "9842f0f7-1229-4355-afe7-15ebdbb8c3d8" }, { "created_at": "2016-06-23T13:42:02.000000", "dest_compute": "compute20", "dest_host": "5.6.7.8", "dest_node": "node20", "id": 3, "instance_uuid": "%(instance_2)s", "new_instance_type_id": 6, "old_instance_type_id": 5, "source_compute": "compute10", "source_node": "node10", "migration_type": "resize", "status": "error", "updated_at": "2016-06-23T13:42:02.000000", "uuid": "32341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "78348f0e-97ee-4d70-ad34-189692673ea2", "project_id": "9842f0f7-1229-4355-afe7-15ebdbb8c3d8" }, { "created_at": "2016-01-29T12:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 2, "instance_uuid": "%(instance_1)s", "new_instance_type_id": 1, "old_instance_type_id": 1, "source_compute": "compute1", "source_node": "node1", "migration_type": "live-migration", "status": "error", "updated_at": "2016-01-29T12:42:02.000000", "uuid": "22341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "5c48ebaa-193f-4c5d-948a-f559cc92cd5e", "project_id": "ef92ccff-00f3-46e4-b015-811110e36ee4" }, { "created_at": "2016-01-29T11:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "instance_uuid": "%(instance_1)s", "links": [ { "href": "%(host)s/v2.1/6f70656e737461636b20342065766572/servers/%(instance_1)s/migrations/1", "rel": "self" }, { "href": "%(host)s/6f70656e737461636b20342065766572/servers/%(instance_1)s/migrations/1", "rel": "bookmark" } ], "new_instance_type_id": 1, "old_instance_type_id": 
1, "source_compute": "compute1", "source_node": "node1", "migration_type": "live-migration", "status": "running", "updated_at": "2016-01-29T11:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "5c48ebaa-193f-4c5d-948a-f559cc92cd5e", "project_id": "ef92ccff-00f3-46e4-b015-811110e36ee4" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4344692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-multinic/0000775000175000017500000000000000000000000027131 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-multinic/multinic-add-fixed-ip-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-multinic/multinic-add-fixed-ip-req0000664000175000017500000000010300000000000033710 0ustar00zuulzuul00000000000000{ "addFixedIp": { "networkId": "%(networkId)s" } } ././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-multinic/multinic-remove-fixed-ip-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-multinic/multinic-remove-fixed-ip-0000664000175000017500000000007500000000000033755 0ustar00zuulzuul00000000000000{ "removeFixedIp": { "address": "%(ip)s" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4344692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-multiple-create/0000775000175000017500000000000000000000000030401 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000023300000000000011453 xustar0000000000000000133 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-multiple-create/multiple-create-no-resv-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-multiple-create/multiple-create-no0000664000175000017500000000041700000000000034034 0ustar00zuulzuul00000000000000{ "server": { "name": "new-server-test", "imageRef": "%(image_id)s", "flavorRef": "1", "metadata": { "My Server Name": "Apache1" }, "min_count": "%(min_count)s", "max_count": "%(max_count)s" } } ././@PaxHeader0000000000000000000000000000023400000000000011454 xustar0000000000000000134 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-multiple-create/multiple-create-no-resv-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-multiple-create/multiple-create-no0000664000175000017500000000100400000000000034025 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "%(password)s", "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } } ././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-multiple-create/multiple-create-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-multiple-create/multiple-create-po0000664000175000017500000000047000000000000034035 0ustar00zuulzuul00000000000000{ "server": { "name": 
"new-server-test", "imageRef": "%(image_id)s", "flavorRef": "1", "metadata": { "My Server Name": "Apache1" }, "return_reservation_id": "True", "min_count": "%(min_count)s", "max_count": "%(max_count)s" } } ././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-multiple-create/multiple-create-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-multiple-create/multiple-create-po0000664000175000017500000000005700000000000034036 0ustar00zuulzuul00000000000000{ "reservation_id": "%(reservation_id)s" } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4344692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-networks/0000775000175000017500000000000000000000000027161 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-networks/network-show-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-networks/network-show-resp.json.tp0000664000175000017500000000201000000000000034105 0ustar00zuulzuul00000000000000{ "network": { "bridge": null, "bridge_interface": null, "broadcast": null, "cidr": null, "cidr_v6": null, "created_at": null, "deleted": null, "deleted_at": null, "dhcp_start": null, "dns1": null, "dns2": null, "gateway": null, "gateway_v6": null, "host": null, "id": "%(id)s", "injected": null, "label": "private", "multi_host": null, "netmask": null, "netmask_v6": null, "priority": null, "project_id": null, "rxtx_base": null, "updated_at": null, "vlan": null, "vpn_private_address": null, "vpn_public_address": null, "vpn_public_port": null, "mtu": null, "dhcp_server": null, "enable_dhcp": null, "share_address": null } } ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-networks/networks-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-networks/networks-list-resp.json.t0000664000175000017500000000202100000000000034105 0ustar00zuulzuul00000000000000{ "networks": [ { "bridge": null, "bridge_interface": null, "broadcast": null, "cidr": null, "cidr_v6": null, "created_at": null, "deleted": null, "deleted_at": null, "dhcp_start": null, "dns1": null, "dns2": null, "gateway": null, "gateway_v6": null, "host": null, "id": "%(id)s", "injected": null, "label": "private", "multi_host": null, "netmask": null, "netmask_v6": null, "priority": null, "project_id": null, "rxtx_base": null, "updated_at": null, "vlan": null, "vpn_private_address": null, "vpn_public_address": null, "vpn_public_port": null, "mtu": null, "dhcp_server": null, "enable_dhcp": null, "share_address": null } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4344692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-pause-server/0000775000175000017500000000000000000000000027726 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-pause-server/pause-server.json.tpl0000664000175000017500000000002600000000000034036 0ustar00zuulzuul00000000000000{ 
"pause": null } ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-pause-server/unpause-server.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-pause-server/unpause-server.json.t0000664000175000017500000000003000000000000034040 0ustar00zuulzuul00000000000000{ "unpause": null } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4344692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-preserve-ephemeral-rebuild/0000775000175000017500000000000000000000000032524 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000026300000000000011456 xustar0000000000000000157 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-preserve-ephemeral-rebuild/server-action-rebuild-preserve-ephemeral-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-preserve-ephemeral-rebuild/server-0000664000175000017500000000270500000000000034036 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "version": 4 } ] }, "adminPass": "%(password)s", "created": "%(isotime)s", "flavor": { "id": "1", "links": [ { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ] }, "hostId": "%(hostid)s", "id": "%(uuid)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "metadata": { "meta_var": "meta_val" }, "name": "%(name)s", "progress": 0, "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000025600000000000011460 xustar0000000000000000152 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-preserve-ephemeral-rebuild/server-action-rebuild-preserve-ephemeral.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-preserve-ephemeral-rebuild/server-0000664000175000017500000000036100000000000034032 0ustar00zuulzuul00000000000000{ "rebuild": { "imageRef": "%(uuid)s", "name": "%(name)s", "adminPass": "%(pass)s", "metadata": { "meta_var": "meta_val" }, "preserve_ephemeral": %(preserve_ephemeral)s } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4344692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-class-sets/0000775000175000017500000000000000000000000030515 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-class-sets/quota-classes-show-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-class-sets/quota-classes-sho0000664000175000017500000000064700000000000034022 0ustar00zuulzuul00000000000000{ "quota_class_set": { "cores": 20, "floating_ips": -1, "fixed_ips": -1, "id": "%(set_id)s", "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 10, "key_pairs": 100, 
"metadata_items": 128, "ram": 51200, "security_group_rules": -1, "security_groups": -1 } } ././@PaxHeader0000000000000000000000000000023100000000000011451 xustar0000000000000000131 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-class-sets/quota-classes-update-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-class-sets/quota-classes-upd0000664000175000017500000000061300000000000034012 0ustar00zuulzuul00000000000000{ "quota_class_set": { "instances": 50, "cores": 50, "ram": 51200, "floating_ips": -1, "fixed_ips": -1, "metadata_items": 128, "injected_files": 5, "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "security_groups": -1, "security_group_rules": -1, "key_pairs": 100 } } ././@PaxHeader0000000000000000000000000000023200000000000011452 xustar0000000000000000132 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-class-sets/quota-classes-update-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-class-sets/quota-classes-upd0000664000175000017500000000061300000000000034012 0ustar00zuulzuul00000000000000{ "quota_class_set": { "cores": 50, "floating_ips": -1, "fixed_ips": -1, "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 50, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "security_group_rules": -1, "security_groups": -1 } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4344692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-class-sets/v2.50/0000775000175000017500000000000000000000000031267 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000023500000000000011455 xustar0000000000000000135 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-class-sets/v2.50/quota-classes-show-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-class-sets/v2.50/quota-class0000664000175000017500000000056000000000000033447 0ustar00zuulzuul00000000000000{ "quota_class_set": { "cores": 20, "id": "%(set_id)s", "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000023700000000000011457 xustar0000000000000000137 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-class-sets/v2.50/quota-classes-update-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-class-sets/v2.50/quota-class0000664000175000017500000000052400000000000033447 0ustar00zuulzuul00000000000000{ "quota_class_set": { "instances": 50, "cores": 50, "ram": 51200, "metadata_items": 128, "injected_files": 5, "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "key_pairs": 100, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000024000000000000011451 xustar0000000000000000138 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-class-sets/v2.50/quota-classes-update-post-resp.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-class-sets/v2.50/quota-class0000664000175000017500000000052400000000000033447 0ustar00zuulzuul00000000000000{ "quota_class_set": { "cores": 50, "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 50, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4384692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-class-sets/v2.57/0000775000175000017500000000000000000000000031276 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000023500000000000011455 xustar0000000000000000135 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-class-sets/v2.57/quota-classes-show-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-class-sets/v2.57/quota-class0000664000175000017500000000037400000000000033461 0ustar00zuulzuul00000000000000{ "quota_class_set": { "cores": 20, "id": "%(set_id)s", "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000023700000000000011457 xustar0000000000000000137 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-class-sets/v2.57/quota-classes-update-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-class-sets/v2.57/quota-class0000664000175000017500000000034000000000000033452 0ustar00zuulzuul00000000000000{ "quota_class_set": { "instances": 50, "cores": 50, "ram": 51200, "metadata_items": 128, "key_pairs": 100, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000024000000000000011451 xustar0000000000000000138 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-class-sets/v2.57/quota-classes-update-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-class-sets/v2.57/quota-class0000664000175000017500000000034000000000000033452 0ustar00zuulzuul00000000000000{ "quota_class_set": { "cores": 50, "instances": 50, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4384692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/0000775000175000017500000000000000000000000027412 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/quotas-show-defaults-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/quotas-show-defaults-ge0000664000175000017500000000074300000000000034031 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "floating_ips": -1, "fixed_ips": -1, "id": "fake_tenant", "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "security_group_rules": -1, "security_groups": -1, "server_groups": 10, "server_group_members": 10 } } 
././@PaxHeader0000000000000000000000000000022100000000000011450 xustar0000000000000000123 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/quotas-show-detail-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/quotas-show-detail-get-0000664000175000017500000000321100000000000033716 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": { "in_use": 0, "limit": 20, "reserved": 0 }, "fixed_ips": { "in_use": 0, "limit": -1, "reserved": 0 }, "floating_ips": { "in_use": 0, "limit": -1, "reserved": 0 }, "id": "fake_tenant", "injected_file_content_bytes": { "in_use": 0, "limit": 10240, "reserved": 0 }, "injected_file_path_bytes": { "in_use": 0, "limit": 255, "reserved": 0 }, "injected_files": { "in_use": 0, "limit": 5, "reserved": 0 }, "instances": { "in_use": 0, "limit": 10, "reserved": 0 }, "key_pairs": { "in_use": 0, "limit": 100, "reserved": 0 }, "metadata_items": { "in_use": 0, "limit": 128, "reserved": 0 }, "ram": { "in_use": 0, "limit": 51200, "reserved": 0 }, "security_group_rules": { "in_use": 0, "limit": -1, "reserved": 0 }, "security_groups": { "in_use": 0, "limit": -1, "reserved": 0 }, "server_group_members": { "in_use": 0, "limit": 10, "reserved": 0 }, "server_groups": { "in_use": 0, "limit": 10, "reserved": 0 } } } ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/quotas-show-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/quotas-show-get-resp.js0000664000175000017500000000074300000000000033772 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "fixed_ips": -1, "floating_ips": -1, "id": "fake_tenant", "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "security_group_rules": -1, "security_groups": -1, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/quotas-update-force-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/quotas-update-force-pos0000664000175000017500000000011600000000000034022 0ustar00zuulzuul00000000000000{ "quota_set": { "force": "True", "instances": 45 } } ././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/quotas-update-force-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/quotas-update-force-pos0000664000175000017500000000070600000000000034027 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "fixed_ips": -1, "floating_ips": -1, "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 45, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "security_group_rules": -1, "security_groups": -1, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/quotas-update-post-req.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/quotas-update-post-req.0000664000175000017500000000006100000000000033754 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 45 } } ././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/quotas-update-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/quotas-update-post-resp0000664000175000017500000000070600000000000034066 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 45, "floating_ips": -1, "fixed_ips": -1, "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "security_group_rules": -1, "security_groups": -1, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/user-quotas-show-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/user-quotas-show-get-re0000664000175000017500000000074300000000000033770 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "floating_ips": -1, "fixed_ips": -1, "id": "fake_tenant", "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "security_group_rules": -1, "security_groups": -1, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000022100000000000011450 xustar0000000000000000123 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/user-quotas-update-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/user-quotas-update-post0000664000175000017500000000011500000000000034065 0ustar00zuulzuul00000000000000{ "quota_set": { "force": "True", "instances": 9 } } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/user-quotas-update-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/user-quotas-update-post0000664000175000017500000000070500000000000034072 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "floating_ips": -1, "fixed_ips": -1, "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 9, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "security_group_rules": -1, "security_groups": -1, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4424691 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.36/0000775000175000017500000000000000000000000030170 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000023100000000000011451 xustar0000000000000000131 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.36/quotas-show-defaults-get-resp.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.36/quotas-show-defau0000664000175000017500000000055300000000000033472 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "id": "fake_tenant", "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.36/quotas-show-detail-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.36/quotas-show-detai0000664000175000017500000000227500000000000033477 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": { "in_use": 0, "limit": 20, "reserved": 0 }, "id": "fake_tenant", "injected_file_content_bytes": { "in_use": 0, "limit": 10240, "reserved": 0 }, "injected_file_path_bytes": { "in_use": 0, "limit": 255, "reserved": 0 }, "injected_files": { "in_use": 0, "limit": 5, "reserved": 0 }, "instances": { "in_use": 0, "limit": 10, "reserved": 0 }, "key_pairs": { "in_use": 0, "limit": 100, "reserved": 0 }, "metadata_items": { "in_use": 0, "limit": 128, "reserved": 0 }, "ram": { "in_use": 0, "limit": 51200, "reserved": 0 }, "server_group_members": { "in_use": 0, "limit": 10, "reserved": 0 }, "server_groups": { "in_use": 0, "limit": 10, "reserved": 0 } } } ././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.36/quotas-show-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.36/quotas-show-get-r0000664000175000017500000000055300000000000033424 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "id": "fake_tenant", "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000023000000000000011450 xustar0000000000000000130 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.36/quotas-update-force-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.36/quotas-update-for0000664000175000017500000000011600000000000033471 0ustar00zuulzuul00000000000000{ "quota_set": { "force": "True", "instances": 45 } } ././@PaxHeader0000000000000000000000000000023100000000000011451 xustar0000000000000000131 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.36/quotas-update-force-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.36/quotas-update-for0000664000175000017500000000051600000000000033475 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 45, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 
path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.36/quotas-update-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.36/quotas-update-pos0000664000175000017500000000006500000000000033507 0ustar00zuulzuul00000000000000{ "quota_set": { "instances": 45 } } ././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.36/quotas-update-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.36/quotas-update-pos0000664000175000017500000000051600000000000033510 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 45, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.36/user-quotas-show-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.36/user-quotas-show-0000664000175000017500000000055300000000000033441 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "id": "fake_tenant", "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.36/user-quotas-update-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.36/user-quotas-updat0000664000175000017500000000011500000000000033513 0ustar00zuulzuul00000000000000{ "quota_set": { "force": "True", "instances": 9 } } ././@PaxHeader0000000000000000000000000000023000000000000011450 xustar0000000000000000130 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.36/user-quotas-update-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.36/user-quotas-updat0000664000175000017500000000051500000000000033517 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 9, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4424691 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.57/0000775000175000017500000000000000000000000030173 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000023100000000000011451 xustar0000000000000000131 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.57/quotas-show-defaults-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.57/quotas-show-defau0000664000175000017500000000036700000000000033500 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 
20, "id": "fake_tenant", "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.57/quotas-show-detail-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.57/quotas-show-detai0000664000175000017500000000151200000000000033473 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": { "in_use": 0, "limit": 20, "reserved": 0 }, "id": "fake_tenant", "instances": { "in_use": 0, "limit": 10, "reserved": 0 }, "key_pairs": { "in_use": 0, "limit": 100, "reserved": 0 }, "metadata_items": { "in_use": 0, "limit": 128, "reserved": 0 }, "ram": { "in_use": 0, "limit": 51200, "reserved": 0 }, "server_group_members": { "in_use": 0, "limit": 10, "reserved": 0 }, "server_groups": { "in_use": 0, "limit": 10, "reserved": 0 } } } ././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.57/quotas-show-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.57/quotas-show-get-r0000664000175000017500000000036700000000000033432 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "id": "fake_tenant", "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000023000000000000011450 xustar0000000000000000130 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.57/quotas-update-force-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.57/quotas-update-for0000664000175000017500000000011600000000000033474 0ustar00zuulzuul00000000000000{ "quota_set": { "force": "True", "instances": 45 } } ././@PaxHeader0000000000000000000000000000023100000000000011451 xustar0000000000000000131 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.57/quotas-update-force-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.57/quotas-update-for0000664000175000017500000000033200000000000033474 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "instances": 45, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.57/quotas-update-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.57/quotas-update-pos0000664000175000017500000000006500000000000033512 0ustar00zuulzuul00000000000000{ "quota_set": { "instances": 20 } } ././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.57/quotas-update-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.57/quotas-update-pos0000664000175000017500000000033200000000000033507 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, 
"instances": 20, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.57/user-quotas-show-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.57/user-quotas-show-0000664000175000017500000000036700000000000033447 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "id": "fake_tenant", "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.57/user-quotas-update-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.57/user-quotas-updat0000664000175000017500000000006400000000000033521 0ustar00zuulzuul00000000000000{ "quota_set": { "instances": 9 } } ././@PaxHeader0000000000000000000000000000023000000000000011450 xustar0000000000000000130 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.57/user-quotas-update-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets/v2.57/user-quotas-updat0000664000175000017500000000033100000000000033516 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": 20, "instances": 9, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "server_groups": 10, "server_group_members": 10 } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4384692 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets-noop/0000775000175000017500000000000000000000000030363 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000023000000000000011450 xustar0000000000000000130 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets-noop/quotas-show-defaults-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets-noop/quotas-show-defaul0000664000175000017500000000073300000000000034041 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": -1, "floating_ips": -1, "fixed_ips": -1, "id": "fake_tenant", "injected_file_content_bytes": -1, "injected_file_path_bytes": -1, "injected_files": -1, "instances": -1, "key_pairs": -1, "metadata_items": -1, "ram": -1, "security_group_rules": -1, "security_groups": -1, "server_groups": -1, "server_group_members": -1 } } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets-noop/quotas-show-detail-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets-noop/quotas-show-detail0000664000175000017500000000323500000000000034043 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": { "in_use": -1, "limit": -1, "reserved": -1 }, "fixed_ips": { "in_use": -1, "limit": -1, "reserved": -1 }, "floating_ips": { "in_use": -1, "limit": -1, "reserved": -1 }, "id": "fake_tenant", "injected_file_content_bytes": { "in_use": -1, "limit": -1, "reserved": -1 }, "injected_file_path_bytes": { "in_use": -1, "limit": -1, 
"reserved": -1 }, "injected_files": { "in_use": -1, "limit": -1, "reserved": -1 }, "instances": { "in_use": -1, "limit": -1, "reserved": -1 }, "key_pairs": { "in_use": -1, "limit": -1, "reserved": -1 }, "metadata_items": { "in_use": -1, "limit": -1, "reserved": -1 }, "ram": { "in_use": -1, "limit": -1, "reserved": -1 }, "security_group_rules": { "in_use": -1, "limit": -1, "reserved": -1 }, "security_groups": { "in_use": -1, "limit": -1, "reserved": -1 }, "server_group_members": { "in_use": -1, "limit": -1, "reserved": -1 }, "server_groups": { "in_use": -1, "limit": -1, "reserved": -1 } } } ././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets-noop/quotas-show-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets-noop/quotas-show-get-re0000664000175000017500000000073300000000000033764 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": -1, "fixed_ips": -1, "floating_ips": -1, "id": "fake_tenant", "injected_file_content_bytes": -1, "injected_file_path_bytes": -1, "injected_files": -1, "instances": -1, "key_pairs": -1, "metadata_items": -1, "ram": -1, "security_group_rules": -1, "security_groups": -1, "server_group_members": -1, "server_groups": -1 } } ././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets-noop/quotas-update-force-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets-noop/quotas-update-forc0000664000175000017500000000011600000000000034027 0ustar00zuulzuul00000000000000{ "quota_set": { "force": "True", "instances": 45 } } ././@PaxHeader0000000000000000000000000000023000000000000011450 xustar0000000000000000130 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets-noop/quotas-update-force-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets-noop/quotas-update-forc0000664000175000017500000000067600000000000034042 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": -1, "fixed_ips": -1, "floating_ips": -1, "injected_file_content_bytes": -1, "injected_file_path_bytes": -1, "injected_files": -1, "instances": -1, "key_pairs": -1, "metadata_items": -1, "ram": -1, "security_group_rules": -1, "security_groups": -1, "server_group_members": -1, "server_groups": -1 } } ././@PaxHeader0000000000000000000000000000022100000000000011450 xustar0000000000000000123 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets-noop/quotas-update-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets-noop/quotas-update-post0000664000175000017500000000007300000000000034065 0ustar00zuulzuul00000000000000{ "quota_set": { "security_groups": 45 } } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets-noop/quotas-update-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets-noop/quotas-update-post0000664000175000017500000000067600000000000034076 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": -1, "fixed_ips": -1, "floating_ips": -1, "injected_file_content_bytes": -1, 
"injected_file_path_bytes": -1, "injected_files": -1, "instances": -1, "key_pairs": -1, "metadata_items": -1, "ram": -1, "security_group_rules": -1, "security_groups": -1, "server_group_members": -1, "server_groups": -1 } } ././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets-noop/user-quotas-show-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets-noop/user-quotas-show-g0000664000175000017500000000073300000000000034003 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": -1, "fixed_ips": -1, "floating_ips": -1, "id": "fake_tenant", "injected_file_content_bytes": -1, "injected_file_path_bytes": -1, "injected_files": -1, "instances": -1, "key_pairs": -1, "metadata_items": -1, "ram": -1, "security_group_rules": -1, "security_groups": -1, "server_group_members": -1, "server_groups": -1 } } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets-noop/user-quotas-update-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets-noop/user-quotas-update0000664000175000017500000000011500000000000034053 0ustar00zuulzuul00000000000000{ "quota_set": { "force": "True", "instances": 9 } } ././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets-noop/user-quotas-update-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-quota-sets-noop/user-quotas-update0000664000175000017500000000067600000000000034067 0ustar00zuulzuul00000000000000{ "quota_set": { "cores": -1, "fixed_ips": -1, "floating_ips": -1, "injected_file_content_bytes": -1, "injected_file_path_bytes": -1, "injected_files": -1, "instances": -1, "key_pairs": -1, "metadata_items": -1, "ram": -1, "security_group_rules": -1, "security_groups": -1, "server_group_members": -1, "server_groups": -1 } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.446469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/0000775000175000017500000000000000000000000030423 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/get-rdp-console-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/get-rdp-console-po0000664000175000017500000000010000000000000033753 0ustar00zuulzuul00000000000000{ "os-getRDPConsole": { "type": "rdp-html5" } } ././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/get-rdp-console-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/get-rdp-console-po0000664000175000017500000000015700000000000033767 0ustar00zuulzuul00000000000000{ "console": { "type": "rdp-html5", "url": "http://127.0.0.1:6083/?token=%(uuid)s" } } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 
path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/get-serial-console-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/get-serial-console0000664000175000017500000000010000000000000034031 0ustar00zuulzuul00000000000000{ "os-getSerialConsole": { "type": "serial" } } ././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/get-serial-console-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/get-serial-console0000664000175000017500000000015200000000000034040 0ustar00zuulzuul00000000000000{ "console": { "type": "serial", "url": "ws://127.0.0.1:6083/?token=%(uuid)s" } } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/get-spice-console-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/get-spice-console-0000664000175000017500000000010400000000000033736 0ustar00zuulzuul00000000000000{ "os-getSPICEConsole": { "type": "spice-html5" } } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/get-spice-console-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/get-spice-console-0000664000175000017500000000020000000000000033733 0ustar00zuulzuul00000000000000{ "console": { "type": "spice-html5", "url": "http://127.0.0.1:6082/spice_auto.html?token=%(uuid)s" } } ././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/get-vnc-console-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/get-vnc-console-po0000664000175000017500000000007400000000000033766 0ustar00zuulzuul00000000000000{ "os-getVNCConsole": { "type": "novnc" } } ././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/get-vnc-console-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/get-vnc-console-po0000664000175000017500000000020400000000000033761 0ustar00zuulzuul00000000000000{ "console": { "type": "novnc", "url": "http://127.0.0.1:6080/vnc_auto.html?path=%%3Ftoken%%3D%(uuid)s" } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.446469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/v2.6/0000775000175000017500000000000000000000000031116 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/v2.6/create-vnc-console-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/v2.6/create-vnc-co0000664000175000017500000000012500000000000033465 0ustar00zuulzuul00000000000000{ "remote_console": 
{ "protocol": "vnc", "type": "novnc" } } ././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/v2.6/create-vnc-console-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/v2.6/create-vnc-co0000664000175000017500000000015700000000000033472 0ustar00zuulzuul00000000000000{ "remote_console": { "protocol": "vnc", "type": "novnc", "url": "%(url)s" } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.446469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/v2.8/0000775000175000017500000000000000000000000031120 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/v2.8/create-mks-console-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/v2.8/create-mks-co0000664000175000017500000000012600000000000033474 0ustar00zuulzuul00000000000000{ "remote_console": { "protocol": "mks", "type": "webmks" } } ././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/v2.8/create-mks-console-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-remote-consoles/v2.8/create-mks-co0000664000175000017500000000016000000000000033472 0ustar00zuulzuul00000000000000{ "remote_console": { "protocol": "mks", "type": "webmks", "url": "%(url)s" } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.446469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/0000775000175000017500000000000000000000000026573 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/server-get-resp-rescue.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/server-get-resp-rescue.json0000664000175000017500000000424000000000000034004 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "version": 4, "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed" } ] }, "created": "%(isotime)s", "flavor": { "id": "1", "links": [ { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ] }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(id)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "status": "%(status)s", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake", "key_name": null, "config_drive": "%(cdrive)s", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "%(compute_host)s", "OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", 
"OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-STS:power_state": 4, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "rescued", "os-extended-volumes:volumes_attached": [], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "security_groups": [ { "name": "default" } ] } } ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/server-get-resp-unrescue.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/server-get-resp-unrescue.js0000664000175000017500000000426600000000000034022 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "version": 4, "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed" } ] }, "created": "%(isotime)s", "flavor": { "id": "1", "links": [ { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ] }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(id)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "progress": 0, "status": "%(status)s", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake", "key_name": null, "config_drive": "%(cdrive)s", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "%(compute_host)s", "OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", "OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "security_groups": [ { "name": "default" } ] } } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/server-rescue-req-with-image-ref.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/server-rescue-req-with-imag0000664000175000017500000000020200000000000033753 0ustar00zuulzuul00000000000000{ "rescue": { "adminPass": "MySecretPass", "rescue_image_ref": "70a599e0-31e7-49b7-b260-868f441e862b" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/server-rescue-req.json.tpl0000664000175000017500000000007600000000000033646 0ustar00zuulzuul00000000000000{ "rescue": { "adminPass": "%(password)s" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/server-rescue.json.tpl0000664000175000017500000000004400000000000033054 0ustar00zuulzuul00000000000000{ "adminPass": "%(password)s" } ././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/server-unrescue-req.json.tpl 22 mtime=1636736320.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/server-unrescue-req.json.tp0000664000175000017500000000003000000000000034023 0ustar00zuulzuul00000000000000{ "unrescue": null }././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.446469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/v2.87/0000775000175000017500000000000000000000000027357 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/v2.87/server-get-resp-rescue.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/v2.87/server-get-resp-rescu0000664000175000017500000000536700000000000033466 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "compute", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 4, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "rescued", "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "192.168.1.30", "version": 4 } ] }, "config_drive": "", "created": "%(isotime)s", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "host_status": "UP", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "locked": false, "locked_reason": null, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "os-extended-volumes:volumes_attached": [], "security_groups": [ { "name": "default" } ], "server_groups": [], "status": "RESCUE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": null, "updated": "%(isotime)s", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/v2.87/server-get-resp-unrescue.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/v2.87/server-get-resp-unres0000664000175000017500000000541500000000000033473 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "compute", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, 
"OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "192.168.1.30", "version": 4 } ] }, "config_drive": "", "created": "%(isotime)s", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "host_status": "UP", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "locked": false, "locked_reason": null, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "os-extended-volumes:volumes_attached": [], "security_groups": [ { "name": "default" } ], "server_groups": [], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": null, "updated": "%(isotime)s", "user_id": "fake", "progress": 0 } } ././@PaxHeader0000000000000000000000000000023000000000000011450 xustar0000000000000000130 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/v2.87/server-rescue-req-with-image-ref.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/v2.87/server-rescue-req-wit0000664000175000017500000000020200000000000033454 0ustar00zuulzuul00000000000000{ "rescue": { "adminPass": "MySecretPass", "rescue_image_ref": "70a599e0-31e7-49b7-b260-868f441e862b" } } ././@PaxHeader0000000000000000000000000000021100000000000011447 xustar0000000000000000115 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/v2.87/server-rescue-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/v2.87/server-rescue-req.jso0000664000175000017500000000007600000000000033456 0ustar00zuulzuul00000000000000{ "rescue": { "adminPass": "%(password)s" } } ././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/v2.87/server-rescue.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/v2.87/server-rescue.json.tp0000664000175000017500000000004400000000000033464 0ustar00zuulzuul00000000000000{ "adminPass": "%(password)s" } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/v2.87/server-unrescue-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-rescue/v2.87/server-unrescue-req.j0000664000175000017500000000003000000000000033445 0ustar00zuulzuul00000000000000{ "unrescue": null }././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.450469 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-security-groups/0000775000175000017500000000000000000000000030471 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-security-groups/security-group-add-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-security-groups/security-group-add0000664000175000017500000000010500000000000034137 0ustar00zuulzuul00000000000000{ "addSecurityGroup": { "name": "%(group_name)s" } } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-security-groups/security-group-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-security-groups/security-group-pos0000664000175000017500000000015100000000000034211 0ustar00zuulzuul00000000000000{ "security_group": { "name": "%(group_name)s", "description": "description" } } ././@PaxHeader0000000000000000000000000000023100000000000011451 xustar0000000000000000131 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-security-groups/security-group-remove-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-security-groups/security-group-rem0000664000175000017500000000011000000000000034166 0ustar00zuulzuul00000000000000{ "removeSecurityGroup": { "name": "%(group_name)s" } } ././@PaxHeader0000000000000000000000000000023000000000000011450 xustar0000000000000000130 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-security-groups/security-group-rules-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-security-groups/security-group-rul0000664000175000017500000000032600000000000034216 0ustar00zuulzuul00000000000000{ "security_group_rule": { "parent_group_id": "21111111-1111-1111-1111-111111111112", "ip_protocol": "tcp", "from_port": 22, "to_port": 22, "cidr": "10.0.0.0/24" } } ././@PaxHeader0000000000000000000000000000023100000000000011451 xustar0000000000000000131 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-security-groups/security-group-rules-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-security-groups/security-group-rul0000664000175000017500000000050500000000000034215 0ustar00zuulzuul00000000000000{ "security_group_rule": { "from_port": 22, "group": {}, "ip_protocol": "tcp", "to_port": 22, "parent_group_id": "11111111-1111-1111-1111-111111111111", "ip_range": { "cidr": "10.0.0.0/24" }, "id": "00000000-0000-0000-0000-000000000000" } } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-security-groups/security-groups-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-security-groups/security-groups-cr0000664000175000017500000000031400000000000034200 0ustar00zuulzuul00000000000000{ "security_group": { "description": "%(description)s", "id": 1, "name": "%(group_name)s", "rules": [], "tenant_id": "6f70656e737461636b20342065766572" } } ././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 
path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-security-groups/security-groups-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-security-groups/security-groups-ge0000664000175000017500000000027500000000000034175 0ustar00zuulzuul00000000000000{ "security_group": { "description": "default", "id": 1, "name": "default", "rules": [], "tenant_id": "6f70656e737461636b20342065766572" } } ././@PaxHeader0000000000000000000000000000023000000000000011450 xustar0000000000000000130 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-security-groups/security-groups-list-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-security-groups/security-groups-li0000664000175000017500000000034600000000000034205 0ustar00zuulzuul00000000000000{ "security_groups": [ { "description": "default", "id": 1, "name": "default", "rules": [], "tenant_id": "6f70656e737461636b20342065766572" } ] } ././@PaxHeader0000000000000000000000000000023300000000000011453 xustar0000000000000000133 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-security-groups/server-security-groups-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-security-groups/server-security-gr0000664000175000017500000000034600000000000034200 0ustar00zuulzuul00000000000000{ "security_groups": [ { "description": "default", "id": 1, "name": "default", "rules": [], "tenant_id": "6f70656e737461636b20342065766572" } ] } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.450469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-diagnostics/0000775000175000017500000000000000000000000031120 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000023100000000000011451 xustar0000000000000000131 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-diagnostics/server-diagnostics-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-diagnostics/server-diagnost0000664000175000017500000000060300000000000034156 0ustar00zuulzuul00000000000000{ "cpu0_time": 17300000000, "memory": 524288, "vda_errors": -1, "vda_read": 262144, "vda_read_req": 112, "vda_write": 5778432, "vda_write_req": 488, "vnet1_rx": 2070139, "vnet1_rx_drop": 0, "vnet1_rx_errors": 0, "vnet1_rx_packets": 26701, "vnet1_tx": 140208, "vnet1_tx_drop": 0, "vnet1_tx_errors": 0, "vnet1_tx_packets": 662 } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.450469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-diagnostics/v2.48/0000775000175000017500000000000000000000000031701 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000023700000000000011457 xustar0000000000000000137 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-diagnostics/v2.48/server-diagnostics-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-diagnostics/v2.48/server-di0000664000175000017500000000201500000000000033522 0ustar00zuulzuul00000000000000{ "config_drive": true, "cpu_details": [ { "id": 0, "time": 17300000000, "utilisation": 15 } ], "disk_details": [ { "errors_count": 1, "read_bytes": 262144, "read_requests": 112, "write_bytes": 5778432, "write_requests": 488 } 
], "driver": "libvirt", "hypervisor": "kvm", "hypervisor_os": "ubuntu", "memory_details": { "maximum": 524288, "used": 0 }, "nic_details": [ { "mac_address": "01:23:45:67:89:ab", "rx_drop": 200, "rx_errors": 100, "rx_octets": 2070139, "rx_packets": 26701, "rx_rate": 300, "tx_drop": 500, "tx_errors": 400, "tx_octets": 140208, "tx_packets": 662, "tx_rate": 600 } ], "num_cpus": 1, "num_disks": 1, "num_nics": 1, "state": "running", "uptime": 46664 } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.450469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-external-events/0000775000175000017500000000000000000000000031735 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-external-events/event-create-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-external-events/event-creat0000664000175000017500000000024000000000000034071 0ustar00zuulzuul00000000000000{ "events": [ { "name": "%(name)s", "tag": "%(tag)s", "status": "%(status)s", "server_uuid": "%(uuid)s" } ] } ././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-external-events/event-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-external-events/event-creat0000664000175000017500000000031700000000000034076 0ustar00zuulzuul00000000000000{ "events": [ { "code": 200, "name": "%(name)s", "server_uuid": "%(uuid)s", "status": "%(status)s", "tag": "%(tag)s" } ] } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.450469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/0000775000175000017500000000000000000000000030130 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/server-groups-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/server-groups-get-re0000664000175000017500000000025100000000000034055 0ustar00zuulzuul00000000000000{ "server_group": { "id": "%(id)s", "name": "%(name)s", "policies": ["anti-affinity"], "members": [], "metadata": {} } } ././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/server-groups-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/server-groups-list-r0000664000175000017500000000031500000000000034105 0ustar00zuulzuul00000000000000{ "server_groups": [ { "id": "%(id)s", "name": "test", "policies": ["anti-affinity"], "members": [], "metadata": {} } ] } ././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/server-groups-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/server-groups-post-r0000664000175000017500000000014200000000000034115 0ustar00zuulzuul00000000000000{ "server_group": { 
"name": "%(name)s", "policies": ["anti-affinity"] } } ././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/server-groups-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/server-groups-post-r0000664000175000017500000000025200000000000034117 0ustar00zuulzuul00000000000000{ "server_group": { "id": "%(id)s", "name": "%(name)s", "policies": ["anti-affinity"], "members": [], "metadata": {} } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.450469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/v2.13/0000775000175000017500000000000000000000000030701 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/v2.13/server-groups-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/v2.13/server-groups-0000664000175000017500000000037600000000000033532 0ustar00zuulzuul00000000000000{ "server_group": { "id": "%(id)s", "name": "%(name)s", "policies": ["anti-affinity"], "members": [], "metadata": {}, "project_id": "6f70656e737461636b20342065766572", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/v2.13/server-groups-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/v2.13/server-groups-0000664000175000017500000000045200000000000033525 0ustar00zuulzuul00000000000000{ "server_groups": [ { "id": "%(id)s", "name": "test", "policies": ["anti-affinity"], "members": [], "metadata": {}, "project_id": "6f70656e737461636b20342065766572", "user_id": "fake" } ] } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/v2.13/server-groups-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/v2.13/server-groups-0000664000175000017500000000014200000000000033521 0ustar00zuulzuul00000000000000{ "server_group": { "name": "%(name)s", "policies": ["anti-affinity"] } } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/v2.13/server-groups-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/v2.13/server-groups-0000664000175000017500000000037600000000000033532 0ustar00zuulzuul00000000000000{ "server_group": { "id": "%(id)s", "name": "%(name)s", "policies": ["anti-affinity"], "members": [], "metadata": {}, "project_id": "6f70656e737461636b20342065766572", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.450469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/v2.64/0000775000175000017500000000000000000000000030707 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 
path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/v2.64/server-groups-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/v2.64/server-groups-0000664000175000017500000000041700000000000033534 0ustar00zuulzuul00000000000000{ "server_group": { "id": "%(id)s", "name": "%(name)s", "policy": "anti-affinity", "rules": {"max_server_per_host": 3}, "members": [], "project_id": "6f70656e737461636b20342065766572", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/v2.64/server-groups-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/v2.64/server-groups-0000664000175000017500000000047300000000000033536 0ustar00zuulzuul00000000000000{ "server_groups": [ { "id": "%(id)s", "name": "test", "policy": "anti-affinity", "rules": {"max_server_per_host": 3}, "members": [], "project_id": "6f70656e737461636b20342065766572", "user_id": "fake" } ] } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/v2.64/server-groups-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/v2.64/server-groups-0000664000175000017500000000021300000000000033526 0ustar00zuulzuul00000000000000{ "server_group": { "name": "%(name)s", "policy": "anti-affinity", "rules": {"max_server_per_host": 3} } } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/v2.64/server-groups-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-groups/v2.64/server-groups-0000664000175000017500000000041700000000000033534 0ustar00zuulzuul00000000000000{ "server_group": { "id": "%(id)s", "name": "%(name)s", "policy": "anti-affinity", "rules": {"max_server_per_host": 3}, "members": [], "project_id": "6f70656e737461636b20342065766572", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.450469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-password/0000775000175000017500000000000000000000000030453 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-password/get-password-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-password/get-password-resp.0000664000175000017500000000005500000000000034042 0ustar00zuulzuul00000000000000{ "password": "%(encrypted_password)s" } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0744722 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-tags/0000775000175000017500000000000000000000000027547 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.454469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-tags/v2.26/0000775000175000017500000000000000000000000030324 
5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-tags/v2.26/server-tags-index-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-tags/v2.26/server-tags-inde0000664000175000017500000000005100000000000033422 0ustar00zuulzuul00000000000000{ "tags": ["%(tag1)s", "%(tag2)s"] } ././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-tags/v2.26/server-tags-put-all-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-tags/v2.26/server-tags-put-0000664000175000017500000000005100000000000033370 0ustar00zuulzuul00000000000000{ "tags": ["%(tag1)s", "%(tag2)s"] } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-tags/v2.26/server-tags-put-all-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-tags/v2.26/server-tags-put-0000664000175000017500000000005100000000000033370 0ustar00zuulzuul00000000000000{ "tags": ["%(tag1)s", "%(tag2)s"] } ././@PaxHeader0000000000000000000000000000023200000000000011452 xustar0000000000000000132 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-tags/v2.26/server-tags-show-details-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-tags/v2.26/server-tags-show0000664000175000017500000000521700000000000033474 0ustar00zuulzuul00000000000000{ "server": { "tags": ["%(tag1)s", "%(tag2)s"], "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "192.168.1.30", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "%(isotime)s", "flavor": { "id": "1", "links": [ { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ] }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "progress": 0, "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "key_name": null, "user_id": "fake", "locked": false, "description": null, "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "%(compute_host)s", "OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", "OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:hostname": "%(hostname)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "%(user_data)s", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "security_groups": 
[ { "name": "default" } ], "host_status": "UP" } } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-tags/v2.26/servers-tags-details-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-tags/v2.26/servers-tags-det0000664000175000017500000000574000000000000033454 0ustar00zuulzuul00000000000000{ "servers": [ { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "%(isotime)s", "flavor": { "id": "1", "links": [ { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ] }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "progress": 0, "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake", "locked": false, "tags": ["%(tag1)s", "%(tag2)s"], "description": null, "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "%(compute_host)s", "OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", "OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:hostname": "%(hostname)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "%(user_data)s", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "security_groups": [ { "name": "default" } ], "host_status": "UP" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0744722 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-topology/0000775000175000017500000000000000000000000030465 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.454469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-topology/v2.78/0000775000175000017500000000000000000000000031251 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000023300000000000011453 xustar0000000000000000133 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-topology/v2.78/servers-topology-resp-user.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-topology/v2.78/servers-topo0000664000175000017500000000104300000000000033642 0ustar00zuulzuul00000000000000{ "nodes": [ { "memory_mb": 1024, "siblings": [ [ 0, 1 ] ], "vcpu_set": [ 0, 1 ] }, { "memory_mb": 2048, "siblings": [ [ 2, 3 ] ], "vcpu_set": [ 2, 3 ] } ], "pagesize_kb": 4 }././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 
path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-topology/v2.78/servers-topology-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-server-topology/v2.78/servers-topo0000664000175000017500000000142200000000000033643 0ustar00zuulzuul00000000000000{ "nodes": [ { "cpu_pinning": { "0": 0, "1": 5 }, "host_node": 0, "memory_mb": 1024, "siblings": [ [ 0, 1 ] ], "vcpu_set": [ 0, 1 ] }, { "cpu_pinning": { "2": 1, "3": 8 }, "host_node": 1, "memory_mb": 2048, "siblings": [ [ 2, 3 ] ], "vcpu_set": [ 2, 3 ] } ], "pagesize_kb": 4 } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.454469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/0000775000175000017500000000000000000000000027130 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/service-disable-log-put-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/service-disable-log-put-r0000664000175000017500000000014500000000000033740 0ustar00zuulzuul00000000000000{ "host": "%(host)s", "binary": "%(binary)s", "disabled_reason": "%(disabled_reason)s" } ././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/service-disable-log-put-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/service-disable-log-put-r0000664000175000017500000000022600000000000033740 0ustar00zuulzuul00000000000000{ "service": { "binary": "nova-compute", "disabled_reason": "test2", "host": "host1", "status": "disabled" } }././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/service-disable-put-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/service-disable-put-req.j0000664000175000017500000000006700000000000033742 0ustar00zuulzuul00000000000000{ "host": "%(host)s", "binary": "%(binary)s" } ././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/service-disable-put-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/service-disable-put-resp.0000664000175000017500000000016200000000000033746 0ustar00zuulzuul00000000000000{ "service": { "binary": "nova-compute", "host": "host1", "status": "disabled" } }././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/service-enable-put-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/service-enable-put-req.js0000664000175000017500000000006700000000000033750 0ustar00zuulzuul00000000000000{ "host": "%(host)s", "binary": "%(binary)s" } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/service-enable-put-resp.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/service-enable-put-resp.j0000664000175000017500000000016100000000000033742 0ustar00zuulzuul00000000000000{ "service": { "binary": "nova-compute", "host": "host1", "status": "enabled" } }././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/services-list-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/services-list-get-resp.js0000664000175000017500000000217700000000000034015 0ustar00zuulzuul00000000000000{ "services": [ { "binary": "nova-scheduler", "disabled_reason": "test1", "host": "host1", "id": 1, "state": "up", "status": "disabled", "updated_at": "%(strtime)s", "zone": "internal" }, { "binary": "nova-compute", "disabled_reason": "test2", "host": "host1", "id": 2, "state": "up", "status": "disabled", "updated_at": "%(strtime)s", "zone": "nova" }, { "binary": "nova-scheduler", "disabled_reason": null, "host": "host2", "id": 3, "state": "down", "status": "enabled", "updated_at": "%(strtime)s", "zone": "internal" }, { "binary": "nova-compute", "disabled_reason": "test4", "host": "host2", "id": 4, "state": "down", "status": "disabled", "updated_at": "%(strtime)s", "zone": "nova" } ] } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.458469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.11/0000775000175000017500000000000000000000000027677 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.11/service-disable-log-put-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.11/service-disable-log0000664000175000017500000000014500000000000033442 0ustar00zuulzuul00000000000000{ "host": "%(host)s", "binary": "%(binary)s", "disabled_reason": "%(disabled_reason)s" } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.11/service-disable-log-put-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.11/service-disable-log0000664000175000017500000000022600000000000033442 0ustar00zuulzuul00000000000000{ "service": { "binary": "nova-compute", "disabled_reason": "test2", "host": "host1", "status": "disabled" } }././@PaxHeader0000000000000000000000000000022100000000000011450 xustar0000000000000000123 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.11/service-disable-put-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.11/service-disable-put0000664000175000017500000000006700000000000033474 0ustar00zuulzuul00000000000000{ "host": "%(host)s", "binary": "%(binary)s" } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.11/service-disable-put-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.11/service-disable-put0000664000175000017500000000016200000000000033470 0ustar00zuulzuul00000000000000{ "service": { 
"binary": "nova-compute", "host": "host1", "status": "disabled" } }././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.11/service-enable-put-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.11/service-enable-put-0000664000175000017500000000006700000000000033374 0ustar00zuulzuul00000000000000{ "host": "%(host)s", "binary": "%(binary)s" } ././@PaxHeader0000000000000000000000000000022100000000000011450 xustar0000000000000000123 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.11/service-enable-put-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.11/service-enable-put-0000664000175000017500000000016100000000000033367 0ustar00zuulzuul00000000000000{ "service": { "binary": "nova-compute", "host": "host1", "status": "enabled" } }././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.11/service-force-down-put-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.11/service-force-down-0000664000175000017500000000013300000000000033375 0ustar00zuulzuul00000000000000{ "host": "%(host)s", "binary": "%(binary)s", "forced_down": %(forced_down)s } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.11/service-force-down-put-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.11/service-force-down-0000664000175000017500000000016200000000000033377 0ustar00zuulzuul00000000000000{ "service": { "binary": "nova-compute", "host": "host1", "forced_down": true } } ././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.11/services-list-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.11/services-list-get-r0000664000175000017500000000240700000000000033435 0ustar00zuulzuul00000000000000{ "services": [ { "binary": "nova-scheduler", "disabled_reason": "test1", "host": "host1", "id": 1, "state": "up", "status": "disabled", "updated_at": "%(strtime)s", "forced_down": false, "zone": "internal" }, { "binary": "nova-compute", "disabled_reason": "test2", "host": "host1", "id": 2, "state": "up", "status": "disabled", "updated_at": "%(strtime)s", "forced_down": false, "zone": "nova" }, { "binary": "nova-scheduler", "disabled_reason": null, "host": "host2", "id": 3, "state": "down", "status": "enabled", "updated_at": "%(strtime)s", "forced_down": false, "zone": "internal" }, { "binary": "nova-compute", "disabled_reason": "test4", "host": "host2", "id": 4, "state": "down", "status": "disabled", "updated_at": "%(strtime)s", "forced_down": false, "zone": "nova" } ] } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.458469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.53/0000775000175000017500000000000000000000000027705 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022500000000000011454 
xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.53/service-disable-log-put-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.53/service-disable-log0000664000175000017500000000011300000000000033443 0ustar00zuulzuul00000000000000{ "status": "disabled", "disabled_reason": "%(disabled_reason)s" } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.53/service-disable-log-put-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.53/service-disable-log0000664000175000017500000000050400000000000033447 0ustar00zuulzuul00000000000000{ "service": { "id": "e81d66a4-ddd3-4aba-8a84-171d1cb4d339", "binary": "nova-compute", "disabled_reason": "maintenance", "host": "host1", "state": "up", "status": "disabled", "updated_at": "%(strtime)s", "forced_down": false, "zone": "nova" } }././@PaxHeader0000000000000000000000000000022100000000000011450 xustar0000000000000000123 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.53/service-disable-put-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.53/service-disable-put0000664000175000017500000000003500000000000033475 0ustar00zuulzuul00000000000000{ "status": "disabled" } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.53/service-disable-put-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.53/service-disable-put0000664000175000017500000000047300000000000033503 0ustar00zuulzuul00000000000000{ "service": { "id": "e81d66a4-ddd3-4aba-8a84-171d1cb4d339", "binary": "nova-compute", "disabled_reason": null, "host": "host1", "state": "up", "status": "disabled", "updated_at": "%(strtime)s", "forced_down": false, "zone": "nova" } }././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.53/service-enable-put-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.53/service-enable-put-0000664000175000017500000000003400000000000033374 0ustar00zuulzuul00000000000000{ "status": "enabled" } ././@PaxHeader0000000000000000000000000000022100000000000011450 xustar0000000000000000123 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.53/service-enable-put-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.53/service-enable-put-0000664000175000017500000000051100000000000033374 0ustar00zuulzuul00000000000000{ "service": { "id": "e81d66a4-ddd3-4aba-8a84-171d1cb4d339", "binary": "nova-compute", "disabled_reason": null, "host": "host1", "state": "up", "status": "enabled", "updated_at": "2012-10-29T13:42:05.000000", "forced_down": false, "zone": "nova" } }././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.53/service-force-down-put-req.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.53/service-force-down-0000664000175000017500000000004700000000000033407 0ustar00zuulzuul00000000000000{ "forced_down": %(forced_down)s } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.53/service-force-down-put-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.53/service-force-down-0000664000175000017500000000050000000000000033401 0ustar00zuulzuul00000000000000{ "service": { "id": "e81d66a4-ddd3-4aba-8a84-171d1cb4d339", "binary": "nova-compute", "disabled_reason": "test2", "host": "host1", "state": "down", "status": "disabled", "updated_at": "%(strtime)s", "forced_down": true, "zone": "nova" } } ././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.53/services-list-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.53/services-list-get-r0000664000175000017500000000244300000000000033443 0ustar00zuulzuul00000000000000{ "services": [ { "binary": "nova-scheduler", "disabled_reason": "test1", "forced_down": false, "host": "host1", "id": "%(id)s", "state": "up", "status": "disabled", "updated_at": "%(strtime)s", "zone": "internal" }, { "binary": "nova-compute", "disabled_reason": "test2", "forced_down": false, "host": "host1", "id": "%(id)s", "state": "up", "status": "disabled", "updated_at": "%(strtime)s", "zone": "nova" }, { "binary": "nova-scheduler", "disabled_reason": null, "forced_down": false, "host": "host2", "id": "%(id)s", "state": "down", "status": "enabled", "updated_at": "%(strtime)s", "zone": "internal" }, { "binary": "nova-compute", "disabled_reason": "test4", "forced_down": false, "host": "host2", "id": "%(id)s", "state": "down", "status": "disabled", "updated_at": "%(strtime)s", "zone": "nova" } ] } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.458469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.69/0000775000175000017500000000000000000000000027714 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.69/services-list-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-services/v2.69/services-list-get-r0000664000175000017500000000041200000000000033444 0ustar00zuulzuul00000000000000{ "services": [ { "binary": "nova-compute", "host": "host1", "status": "UNKNOWN" }, { "binary": "nova-compute", "host": "host2", "status": "UNKNOWN" } ] }././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.458469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-shelve/0000775000175000017500000000000000000000000026573 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-shelve/os-shelve-offload.json.tpl0000664000175000017500000000003300000000000033575 0ustar00zuulzuul00000000000000{ "%(action)s": null } 
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-shelve/os-shelve.json.tpl0000664000175000017500000000003300000000000032165 0ustar00zuulzuul00000000000000{ "%(action)s": null } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-shelve/os-unshelve.json.tpl0000664000175000017500000000003300000000000032530 0ustar00zuulzuul00000000000000{ "%(action)s": null } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.458469 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-shelve/v2.77/0000775000175000017500000000000000000000000027356 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-shelve/v2.77/os-shelve.json.tpl0000664000175000017500000000003300000000000032750 0ustar00zuulzuul00000000000000{ "%(action)s": null } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-shelve/v2.77/os-unshelve-null.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-shelve/v2.77/os-unshelve-null.json0000664000175000017500000000003300000000000033465 0ustar00zuulzuul00000000000000{ "%(action)s": null } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-shelve/v2.77/os-unshelve.json.tpl0000664000175000017500000000012300000000000033313 0ustar00zuulzuul00000000000000{ "%(action)s": { "availability_zone": "%(availability_zone)s" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4624689 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-simple-tenant-usage/0000775000175000017500000000000000000000000031167 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000023500000000000011455 xustar0000000000000000135 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-simple-tenant-usage/simple-tenant-usage-get-detail.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-simple-tenant-usage/simple-tenant-0000664000175000017500000000163500000000000033754 0ustar00zuulzuul00000000000000{ "tenant_usages": [ { "start": "%(strtime)s", "stop": "%(strtime)s", "tenant_id": "6f70656e737461636b20342065766572", "total_hours": 1.0, "total_local_gb_usage": 1.0, "total_memory_mb_usage": 512.0, "total_vcpus_usage": 1.0, "server_usages": [ { "ended_at": null, "flavor": "m1.tiny", "hours": 1.0, "instance_id": "%(uuid)s", "local_gb": 1, "memory_mb": 512, "name": "new-server-test", "started_at": "%(strtime)s", "state": "active", "tenant_id": "6f70656e737461636b20342065766572", "uptime": 3600, "vcpus": 1 } ] } ] } ././@PaxHeader0000000000000000000000000000023700000000000011457 xustar0000000000000000137 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-simple-tenant-usage/simple-tenant-usage-get-specific.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-simple-tenant-usage/simple-tenant-0000664000175000017500000000145400000000000033753 0ustar00zuulzuul00000000000000{ "tenant_usage": { "server_usages": [ { "ended_at": null, "flavor": "m1.tiny", "hours": 1.0, "instance_id": "%(uuid)s", "local_gb": 1, "memory_mb": 512, "name": "new-server-test", "started_at": "%(strtime)s", "state": "active", "tenant_id": "6f70656e737461636b20342065766572", "uptime": 3600, "vcpus": 1 } ], "start": "%(strtime)s", "stop": "%(strtime)s", "tenant_id": "6f70656e737461636b20342065766572", "total_hours": 1.0, "total_local_gb_usage": 1.0, "total_memory_mb_usage": 512.0, "total_vcpus_usage": 1.0 } } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-simple-tenant-usage/simple-tenant-usage-get.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-simple-tenant-usage/simple-tenant-0000664000175000017500000000052300000000000033747 0ustar00zuulzuul00000000000000{ "tenant_usages": [ { "start": "%(strtime)s", "stop": "%(strtime)s", "tenant_id": "6f70656e737461636b20342065766572", "total_hours": 1.0, "total_local_gb_usage": 1.0, "total_memory_mb_usage": 512.0, "total_vcpus_usage": 1.0 } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4624689 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-simple-tenant-usage/v2.40/0000775000175000017500000000000000000000000031740 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000024000000000000011451 xustar0000000000000000138 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-simple-tenant-usage/v2.40/simple-tenant-usage-get-all.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-simple-tenant-usage/v2.40/simple-t0000664000175000017500000000445100000000000033421 0ustar00zuulzuul00000000000000{ "tenant_usages": [ { "server_usages": [ { "ended_at": null, "flavor": "m1.tiny", "hours": 1.0, "instance_id": "%(uuid)s", "local_gb": 1, "memory_mb": 512, "name": "instance-3", "started_at": "%(strtime)s", "state": "active", "tenant_id": "0000000e737461636b20342065000000", "uptime": 3600, "vcpus": 1 } ], "start": "%(strtime)s", "stop": "%(strtime)s", "tenant_id": "0000000e737461636b20342065000000", "total_hours": 1.0, "total_local_gb_usage": 1.0, "total_memory_mb_usage": 512.0, "total_vcpus_usage": 1.0 }, { "server_usages": [ { "ended_at": null, "flavor": "m1.tiny", "hours": 1.0, "instance_id": "%(uuid)s", "local_gb": 1, "memory_mb": 512, "name": "instance-1", "started_at": "%(strtime)s", "state": "active", "tenant_id": "6f70656e737461636b20342065766572", "uptime": 3600, "vcpus": 1 }, { "ended_at": null, "flavor": "m1.tiny", "hours": 1.0, "instance_id": "%(uuid)s", "local_gb": 1, "memory_mb": 512, "name": "instance-2", "started_at": "%(strtime)s", "state": "active", "tenant_id": "6f70656e737461636b20342065766572", "uptime": 3600, "vcpus": 1 } ], "start": "%(strtime)s", "stop": "%(strtime)s", "tenant_id": "6f70656e737461636b20342065766572", "total_hours": 2.0, "total_local_gb_usage": 2.0, "total_memory_mb_usage": 1024.0, "total_vcpus_usage": 2.0 } ] } ././@PaxHeader0000000000000000000000000000024300000000000011454 xustar0000000000000000141 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-simple-tenant-usage/v2.40/simple-tenant-usage-get-detail.json.tpl 
22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-simple-tenant-usage/v2.40/simple-t0000664000175000017500000000220400000000000033413 0ustar00zuulzuul00000000000000{ "tenant_usages": [ { "start": "%(strtime)s", "stop": "%(strtime)s", "tenant_id": "6f70656e737461636b20342065766572", "total_hours": 1.0, "total_local_gb_usage": 1.0, "total_memory_mb_usage": 512.0, "total_vcpus_usage": 1.0, "server_usages": [ { "ended_at": null, "flavor": "m1.tiny", "hours": 1.0, "instance_id": "%(uuid)s", "local_gb": 1, "memory_mb": 512, "name": "instance-2", "started_at": "%(strtime)s", "state": "active", "tenant_id": "6f70656e737461636b20342065766572", "uptime": 3600, "vcpus": 1 } ] } ], "tenant_usages_links": [ { "href": "%(versioned_compute_endpoint)s/os-simple-tenant-usage?detailed=1&end=%(strtime_url)s&limit=1&marker=%(uuid)s&start=%(strtime_url)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000024500000000000011456 xustar0000000000000000143 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-simple-tenant-usage/v2.40/simple-tenant-usage-get-specific.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-simple-tenant-usage/v2.40/simple-t0000664000175000017500000000202500000000000033414 0ustar00zuulzuul00000000000000{ "tenant_usage": { "server_usages": [ { "ended_at": null, "flavor": "m1.tiny", "hours": 1.0, "instance_id": "%(uuid)s", "local_gb": 1, "memory_mb": 512, "name": "instance-2", "started_at": "%(strtime)s", "state": "active", "tenant_id": "6f70656e737461636b20342065766572", "uptime": 3600, "vcpus": 1 } ], "start": "%(strtime)s", "stop": "%(strtime)s", "tenant_id": "6f70656e737461636b20342065766572", "total_hours": 1.0, "total_local_gb_usage": 1.0, "total_memory_mb_usage": 512.0, "total_vcpus_usage": 1.0 }, "tenant_usage_links": [ { "href": "%(versioned_compute_endpoint)s/os-simple-tenant-usage/%(tenant_id)s?end=%(strtime_url)s&limit=1&marker=%(uuid)s&start=%(strtime_url)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000023400000000000011454 xustar0000000000000000134 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-simple-tenant-usage/v2.40/simple-tenant-usage-get.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-simple-tenant-usage/v2.40/simple-t0000664000175000017500000000106400000000000033416 0ustar00zuulzuul00000000000000{ "tenant_usages": [ { "start": "%(strtime)s", "stop": "%(strtime)s", "tenant_id": "6f70656e737461636b20342065766572", "total_hours": 1.0, "total_local_gb_usage": 1.0, "total_memory_mb_usage": 512.0, "total_vcpus_usage": 1.0 } ], "tenant_usages_links": [ { "href": "%(versioned_compute_endpoint)s/os-simple-tenant-usage?end=%(strtime_url)s&limit=1&marker=%(uuid)s&start=%(strtime_url)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4624689 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-suspend-server/0000775000175000017500000000000000000000000030272 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-suspend-server/server-resume.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-suspend-server/server-resume.json.0000664000175000017500000000002700000000000034046 0ustar00zuulzuul00000000000000{ 
"resume": null } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-suspend-server/server-suspend.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-suspend-server/server-suspend.json0000664000175000017500000000003000000000000034143 0ustar00zuulzuul00000000000000{ "suspend": null } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4624689 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-tenant-networks/0000775000175000017500000000000000000000000030450 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-tenant-networks/networks-list-res.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-tenant-networks/networks-list-res.0000664000175000017500000000021100000000000034057 0ustar00zuulzuul00000000000000{ "networks": [ { "cidr": "None", "id": "%(uuid)s", "label": "private" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4624689 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/0000775000175000017500000000000000000000000026777 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/attach-volume-to-server-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/attach-volume-to-server-re0000664000175000017500000000015000000000000034017 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "%(volume_id)s", "device": "%(device)s" } } ././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/attach-volume-to-server-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/attach-volume-to-server-re0000664000175000017500000000024700000000000034026 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "device": "%(device)s", "id": "%(volume_id)s", "serverId": "%(uuid)s", "volumeId": "%(volume_id)s" } } ././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/list-volume-attachments-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/list-volume-attachments-re0000664000175000017500000000056400000000000034124 0ustar00zuulzuul00000000000000{ "volumeAttachments": [ { "device": "%(device)s", "id": "%(volume_id)s", "serverId": "%(uuid)s", "volumeId": "%(volume_id)s" }, { "device": "%(text)s", "id": "%(volume_id2)s", "serverId": "%(uuid)s", "volumeId": "%(volume_id2)s" } ] } ././@PaxHeader0000000000000000000000000000021100000000000011447 xustar0000000000000000115 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/os-volumes-detail-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/os-volumes-detail-resp.jso0000664000175000017500000000121700000000000034035 0ustar00zuulzuul00000000000000{ "volumes": [ { 
"attachments": [ { "device": "/", "id": "%(uuid)s", "serverId": "%(uuid)s", "volumeId": "%(uuid)s" } ], "availabilityZone": "zone1:host1", "createdAt": "%(strtime)s", "displayDescription": "%(volume_desc)s", "displayName": "%(volume_name)s", "id": "%(uuid)s", "metadata": {}, "size": 100, "snapshotId": null, "status": "in-use", "volumeType": "Backup" } ] } ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/os-volumes-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/os-volumes-get-resp.json.t0000664000175000017500000000106200000000000033770 0ustar00zuulzuul00000000000000{ "volume": { "attachments": [ { "device": "/", "id": "%(uuid)s", "serverId": "%(uuid)s", "volumeId": "%(uuid)s" } ], "availabilityZone": "zone1:host1", "createdAt": "%(strtime)s", "displayDescription": "%(volume_desc)s", "displayName": "%(volume_name)s", "id": "%(uuid)s", "metadata": {}, "size": 100, "snapshotId": null, "status": "in-use", "volumeType": "Backup" } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/os-volumes-index-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/os-volumes-index-resp.json0000664000175000017500000000121700000000000034060 0ustar00zuulzuul00000000000000{ "volumes": [ { "attachments": [ { "device": "/", "id": "%(uuid)s", "serverId": "%(uuid)s", "volumeId": "%(uuid)s" } ], "availabilityZone": "zone1:host1", "createdAt": "%(strtime)s", "displayDescription": "%(volume_desc)s", "displayName": "%(volume_name)s", "id": "%(uuid)s", "metadata": {}, "size": 100, "snapshotId": null, "status": "in-use", "volumeType": "Backup" } ] } ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/os-volumes-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/os-volumes-post-req.json.t0000664000175000017500000000026700000000000034022 0ustar00zuulzuul00000000000000{ "volume": { "availability_zone": "zone1:host1", "display_name": "%(volume_name)s", "display_description": "%(volume_desc)s", "size": 100 } } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/os-volumes-post-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/os-volumes-post-resp.json.0000664000175000017500000000101300000000000034006 0ustar00zuulzuul00000000000000{ "volume": { "status": "in-use", "displayDescription": "%(volume_desc)s", "availabilityZone": "zone1:host1", "displayName": "%(volume_name)s", "attachments": [ { "device": "/", "serverId": "%(uuid)s", "id": "%(uuid)s", "volumeId": "%(uuid)s" } ], "volumeType": "Backup", "snapshotId": null, "metadata": {}, "id": "%(uuid)s", "createdAt": "%(strtime)s", "size": 100 } } ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/snapshot-create-req.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/snapshot-create-req.json.t0000664000175000017500000000027100000000000034021 0ustar00zuulzuul00000000000000{ "snapshot": { "display_name": "%(snapshot_name)s", "display_description": "%(description)s", "volume_id": "%(volume_id)s", "force": false } } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/snapshot-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/snapshot-create-resp.json.0000664000175000017500000000040300000000000034014 0ustar00zuulzuul00000000000000{ "snapshot": { "createdAt": "%(strtime)s", "displayDescription": "%(description)s", "displayName": "%(snapshot_name)s", "id": 100, "size": 100, "status": "available", "volumeId": "%(uuid)s" } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/snapshots-detail-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/snapshots-detail-resp.json0000664000175000017500000000151100000000000034121 0ustar00zuulzuul00000000000000{ "snapshots": [ { "createdAt": "%(strtime)s", "displayDescription": "Default description", "displayName": "Default name", "id": 100, "size": 100, "status": "available", "volumeId": 12 }, { "createdAt": "%(strtime)s", "displayDescription": "Default description", "displayName": "Default name", "id": 101, "size": 100, "status": "available", "volumeId": 12 }, { "createdAt": "%(strtime)s", "displayDescription": "Default description", "displayName": "Default name", "id": 102, "size": 100, "status": "available", "volumeId": 12 } ] } ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/snapshots-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/snapshots-list-resp.json.t0000664000175000017500000000143400000000000034100 0ustar00zuulzuul00000000000000{ "snapshots": [ { "createdAt": "%(strtime)s", "displayDescription": "%(text)s", "displayName": "%(text)s", "id": 100, "size": 100, "status": "available", "volumeId": 12 }, { "createdAt": "%(strtime)s", "displayDescription": "%(text)s", "displayName": "%(text)s", "id": 101, "size": 100, "status": "available", "volumeId": 12 }, { "createdAt": "%(strtime)s", "displayDescription": "%(text)s", "displayName": "%(text)s", "id": 102, "size": 100, "status": "available", "volumeId": 12 } ] } ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/snapshots-show-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/snapshots-show-resp.json.t0000664000175000017500000000037500000000000034110 0ustar00zuulzuul00000000000000{ "snapshot": { "createdAt": "%(strtime)s", "displayDescription": "%(description)s", "displayName": "%(snapshot_name)s", "id": "100", "size": 100, "status": "available", "volumeId": 12 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/update-volume-req.json.tpl0000664000175000017500000000011400000000000034040 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "%(new_volume_id)s" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4624689 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.49/0000775000175000017500000000000000000000000027561 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.49/attach-volume-to-server-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.49/attach-volume-to-ser0000664000175000017500000000014200000000000033461 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "%(volume_id)s", "tag": "%(tag)s" } } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.49/attach-volume-to-server-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.49/attach-volume-to-ser0000664000175000017500000000024700000000000033467 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "device": "%(device)s", "id": "%(volume_id)s", "serverId": "%(uuid)s", "volumeId": "%(volume_id)s" } } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.49/list-volume-attachments-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.49/list-volume-attachme0000664000175000017500000000056400000000000033555 0ustar00zuulzuul00000000000000{ "volumeAttachments": [ { "device": "%(device)s", "id": "%(volume_id)s", "serverId": "%(uuid)s", "volumeId": "%(volume_id)s" }, { "device": "%(text)s", "id": "%(volume_id2)s", "serverId": "%(uuid)s", "volumeId": "%(volume_id2)s" } ] } ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.49/update-volume-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.49/update-volume-req.js0000664000175000017500000000011400000000000033467 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "%(new_volume_id)s" } } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.49/volume-attachment-detail-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.49/volume-attachment-de0000664000175000017500000000024700000000000033532 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "device": "%(device)s", "id": "%(volume_id)s", "serverId": "%(uuid)s", "volumeId": "%(volume_id)s" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4664688 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.70/0000775000175000017500000000000000000000000027553 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022400000000000011453 
xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.70/attach-volume-to-server-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.70/attach-volume-to-ser0000664000175000017500000000014200000000000033453 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "%(volume_id)s", "tag": "%(tag)s" } } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.70/attach-volume-to-server-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.70/attach-volume-to-ser0000664000175000017500000000030100000000000033450 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "device": "%(device)s", "id": "%(volume_id)s", "serverId": "%(uuid)s", "tag": "%(tag)s", "volumeId": "%(volume_id)s" } } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.70/list-volume-attachments-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.70/list-volume-attachme0000664000175000017500000000065300000000000033546 0ustar00zuulzuul00000000000000{ "volumeAttachments": [ { "device": "%(device)s", "id": "%(volume_id)s", "serverId": "%(uuid)s", "tag": "%(tag)s", "volumeId": "%(volume_id)s" }, { "device": "%(text)s", "id": "%(volume_id2)s", "serverId": "%(uuid)s", "tag": null, "volumeId": "%(volume_id2)s" } ] } ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.70/update-volume-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.70/update-volume-req.js0000664000175000017500000000011400000000000033461 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "%(new_volume_id)s" } } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.70/volume-attachment-detail-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.70/volume-attachment-de0000664000175000017500000000030100000000000033513 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "device": "%(device)s", "id": "%(volume_id)s", "serverId": "%(uuid)s", "tag": "%(tag)s", "volumeId": "%(volume_id)s" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4664688 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.79/0000775000175000017500000000000000000000000027564 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.79/attach-volume-to-server-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.79/attach-volume-to-ser0000664000175000017500000000021100000000000033461 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "%(volume_id)s", "tag": "%(tag)s", "delete_on_termination": true } } ././@PaxHeader0000000000000000000000000000022500000000000011454 
xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.79/attach-volume-to-server-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.79/attach-volume-to-ser0000664000175000017500000000035000000000000033465 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "device": "%(device)s", "id": "%(volume_id)s", "serverId": "%(uuid)s", "tag": "%(tag)s", "volumeId": "%(volume_id)s", "delete_on_termination": true } } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.79/list-volume-attachments-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.79/list-volume-attachme0000664000175000017500000000100200000000000033544 0ustar00zuulzuul00000000000000{ "volumeAttachments": [ { "device": "%(device)s", "id": "%(volume_id)s", "serverId": "%(uuid)s", "tag": "%(tag)s", "volumeId": "%(volume_id)s", "delete_on_termination": true }, { "device": "%(text)s", "id": "%(volume_id2)s", "serverId": "%(uuid)s", "tag": null, "volumeId": "%(volume_id2)s", "delete_on_termination": false } ] } ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.79/update-volume-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.79/update-volume-req.js0000664000175000017500000000011400000000000033472 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "%(new_volume_id)s" } } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.79/volume-attachment-detail-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.79/volume-attachment-de0000664000175000017500000000035000000000000033530 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "device": "%(device)s", "id": "%(volume_id)s", "serverId": "%(uuid)s", "tag": "%(tag)s", "volumeId": "%(volume_id)s", "delete_on_termination": true } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4664688 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.85/0000775000175000017500000000000000000000000027561 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022400000000000011453 xustar0000000000000000126 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.85/attach-volume-to-server-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.85/attach-volume-to-ser0000664000175000017500000000021100000000000033456 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "%(volume_id)s", "tag": "%(tag)s", "delete_on_termination": true } } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.85/attach-volume-to-server-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.85/attach-volume-to-ser0000664000175000017500000000035000000000000033462 0ustar00zuulzuul00000000000000{ 
"volumeAttachment": { "device": "%(device)s", "id": "%(volume_id)s", "serverId": "%(uuid)s", "tag": "%(tag)s", "volumeId": "%(volume_id)s", "delete_on_termination": true } } ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.85/list-volume-attachments-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.85/list-volume-attachme0000664000175000017500000000100200000000000033541 0ustar00zuulzuul00000000000000{ "volumeAttachments": [ { "device": "%(device)s", "id": "%(volume_id)s", "serverId": "%(uuid)s", "tag": "%(tag)s", "volumeId": "%(volume_id)s", "delete_on_termination": true }, { "device": "%(text)s", "id": "%(volume_id2)s", "serverId": "%(uuid)s", "tag": null, "volumeId": "%(volume_id2)s", "delete_on_termination": false } ] } ././@PaxHeader0000000000000000000000000000024100000000000011452 xustar0000000000000000139 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.85/update-volume-attachment-delete-flag-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.85/update-volume-attach0000664000175000017500000000035500000000000033540 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "%(volume_id)s", "id": "%(volume_id)s", "serverId": "%(server_id)s", "device": "%(device)s", "tag": "%(tag)s", "delete_on_termination": true } } ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.85/update-volume-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.85/update-volume-req.js0000664000175000017500000000011400000000000033467 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "volumeId": "%(new_volume_id)s" } } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.85/volume-attachment-detail-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/v2.85/volume-attachment-de0000664000175000017500000000035000000000000033525 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "device": "%(device)s", "id": "%(volume_id)s", "serverId": "%(uuid)s", "tag": "%(tag)s", "volumeId": "%(volume_id)s", "delete_on_termination": true } } ././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/volume-attachment-detail-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/os-volumes/volume-attachment-detail-r0000664000175000017500000000024700000000000034061 0ustar00zuulzuul00000000000000{ "volumeAttachment": { "device": "%(device)s", "id": "%(volume_id)s", "serverId": "%(uuid)s", "volumeId": "%(volume_id)s" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4664688 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-ips/0000775000175000017500000000000000000000000026765 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 
path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-ips/server-ips-network-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-ips/server-ips-network-resp.js0000664000175000017500000000014600000000000034061 0ustar00zuulzuul00000000000000{ "private": [ { "addr": "%(ip)s", "version": 4 } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-ips/server-ips-resp.json.tpl0000664000175000017500000000022700000000000033525 0ustar00zuulzuul00000000000000{ "addresses": { "private": [ { "addr": "%(ip)s", "version": 4 } ] } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4664688 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-metadata/0000775000175000017500000000000000000000000027752 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-metadata/server-metadata-all-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-metadata/server-metadata-all-r0000664000175000017500000000006700000000000033771 0ustar00zuulzuul00000000000000{ "metadata": { "foo": "%(value)s" } } ././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-metadata/server-metadata-all-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-metadata/server-metadata-all-r0000664000175000017500000000006700000000000033771 0ustar00zuulzuul00000000000000{ "metadata": { "foo": "%(value)s" } } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-metadata/server-metadata-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-metadata/server-metadata-req.j0000664000175000017500000000006300000000000033775 0ustar00zuulzuul00000000000000{ "meta": { "foo": "%(value)s" } } ././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-metadata/server-metadata-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-metadata/server-metadata-resp.0000664000175000017500000000006300000000000034005 0ustar00zuulzuul00000000000000{ "meta": { "foo": "%(value)s" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.0784721 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/0000775000175000017500000000000000000000000030346 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4664688 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.22/0000775000175000017500000000000000000000000031117 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.22/force_complete.json.tpl 22 
mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.22/force_complet0000664000175000017500000000003700000000000033663 0ustar00zuulzuul00000000000000{ "force_complete": null } ././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.22/live-migrate-server.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.22/live-migrate-0000664000175000017500000000020600000000000033502 0ustar00zuulzuul00000000000000{ "os-migrateLive": { "host": "%(hostname)s", "block_migration": false, "disk_over_commit": false } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4664688 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.23/0000775000175000017500000000000000000000000031120 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.23/migrations-get.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.23/migrations-ge0000664000175000017500000000116000000000000033606 0ustar00zuulzuul00000000000000{ "migration": { "created_at": "2016-01-29T13:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "server_uuid": "%(server_uuid)s", "source_compute": "compute1", "source_node": "node1", "status": "running", "memory_total_bytes": 123456, "memory_processed_bytes": 12345, "memory_remaining_bytes": 111111, "disk_total_bytes": 234567, "disk_processed_bytes": 23456, "disk_remaining_bytes": 211111, "updated_at": "2016-01-29T13:42:02.000000" } } ././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.23/migrations-index.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.23/migrations-in0000664000175000017500000000130700000000000033624 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-01-29T13:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "server_uuid": "%(server_uuid_1)s", "source_compute": "compute1", "source_node": "node1", "status": "running", "memory_total_bytes": 123456, "memory_processed_bytes": 12345, "memory_remaining_bytes": 111111, "disk_total_bytes": 234567, "disk_processed_bytes": 23456, "disk_remaining_bytes": 211111, "updated_at": "2016-01-29T13:42:02.000000" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4664688 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.24/0000775000175000017500000000000000000000000031121 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.24/live-migrate-server.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.24/live-migrate-0000664000175000017500000000020600000000000033504 0ustar00zuulzuul00000000000000{ "os-migrateLive": { "host": 
"%(hostname)s", "block_migration": false, "disk_over_commit": false } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4664688 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.59/0000775000175000017500000000000000000000000031131 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.59/migrations-get.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.59/migrations-ge0000664000175000017500000000125000000000000033617 0ustar00zuulzuul00000000000000{ "migration": { "created_at": "2016-01-29T13:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "server_uuid": "%(server_uuid)s", "source_compute": "compute1", "source_node": "node1", "status": "running", "memory_total_bytes": 123456, "memory_processed_bytes": 12345, "memory_remaining_bytes": 111111, "disk_total_bytes": 234567, "disk_processed_bytes": 23456, "disk_remaining_bytes": 211111, "updated_at": "2016-01-29T13:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650" } } ././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.59/migrations-index.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.59/migrations-in0000664000175000017500000000140300000000000033632 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-01-29T13:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "server_uuid": "%(server_uuid_1)s", "source_compute": "compute1", "source_node": "node1", "status": "running", "memory_total_bytes": 123456, "memory_processed_bytes": 12345, "memory_remaining_bytes": 111111, "disk_total_bytes": 234567, "disk_processed_bytes": 23456, "disk_remaining_bytes": 211111, "updated_at": "2016-01-29T13:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4664688 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.65/0000775000175000017500000000000000000000000031126 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.65/live-migrate-server.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.65/live-migrate-0000664000175000017500000000013200000000000033507 0ustar00zuulzuul00000000000000{ "os-migrateLive": { "host": null, "block_migration": "auto" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4664688 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.80/0000775000175000017500000000000000000000000031123 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.80/live-migrate-server.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.80/live-migrate-0000664000175000017500000000013200000000000033504 0ustar00zuulzuul00000000000000{ "os-migrateLive": { "host": null, "block_migration": "auto" } } ././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.80/migrations-get.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.80/migrations-ge0000664000175000017500000000144100000000000033613 0ustar00zuulzuul00000000000000{ "migration": { "created_at": "2016-01-29T13:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "server_uuid": "%(server_uuid)s", "source_compute": "compute1", "source_node": "node1", "status": "running", "memory_total_bytes": 123456, "memory_processed_bytes": 12345, "memory_remaining_bytes": 111111, "disk_total_bytes": 234567, "disk_processed_bytes": 23456, "disk_remaining_bytes": 211111, "updated_at": "2016-01-29T13:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "8dbaa0f0-ab95-4ffe-8cb4-9c89d2ac9d24", "project_id": "5f705771-3aa9-4f4c-8660-0d9522ffdbea" } } ././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.80/migrations-index.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/server-migrations/v2.80/migrations-in0000664000175000017500000000160400000000000033627 0ustar00zuulzuul00000000000000{ "migrations": [ { "created_at": "2016-01-29T13:42:02.000000", "dest_compute": "compute2", "dest_host": "1.2.3.4", "dest_node": "node2", "id": 1, "server_uuid": "%(server_uuid_1)s", "source_compute": "compute1", "source_node": "node1", "status": "running", "memory_total_bytes": 123456, "memory_processed_bytes": 12345, "memory_remaining_bytes": 111111, "disk_total_bytes": 234567, "disk_processed_bytes": 23456, "disk_remaining_bytes": 211111, "updated_at": "2016-01-29T13:42:02.000000", "uuid": "12341d4b-346a-40d0-83c6-5f4f6892b650", "user_id": "8dbaa0f0-ab95-4ffe-8cb4-9c89d2ac9d24", "project_id": "5f705771-3aa9-4f4c-8660-0d9522ffdbea" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4704688 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/0000775000175000017500000000000000000000000026357 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/attach-interfaces-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/attach-interfaces-list-resp.j0000664000175000017500000000071700000000000034044 0ustar00zuulzuul00000000000000{ "interfaceAttachments": [ { "fixed_ips": [ { "ip_address": "192.168.1.3", "subnet_id": "f8a6e8f8-c2ec-497c-9f23-da9616de54ef" } ], "mac_addr": "fa:16:3e:4c:2c:30", "net_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "port_id": "ce531f90-199f-48c0-816c-13e38010b442", "port_state": "ACTIVE" } ] }././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/attach-interfaces-show-resp.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/attach-interfaces-show-resp.j0000664000175000017500000000062200000000000034044 0ustar00zuulzuul00000000000000{ "interfaceAttachment": { "fixed_ips": [ { "ip_address": "192.168.1.3", "subnet_id": "f8a6e8f8-c2ec-497c-9f23-da9616de54ef" } ], "mac_addr": "fa:16:3e:4c:2c:30", "net_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "port_id": "ce531f90-199f-48c0-816c-13e38010b442", "port_state": "ACTIVE" } }././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-action-addfloatingip-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-action-addfloatingip-r0000664000175000017500000000016000000000000034122 0ustar00zuulzuul00000000000000{ "addFloatingIp" : { "address": "%(address)s", "fixed_address": "%(fixed_address)s" } }././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-action-confirm-resize.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-action-confirm-resize.0000664000175000017500000000003700000000000034073 0ustar00zuulzuul00000000000000{ "confirmResize" : null } ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-action-create-image.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-action-create-image.js0000664000175000017500000000020100000000000034010 0ustar00zuulzuul00000000000000{ "createImage" : { "name" : "%(name)s", "metadata": { "meta_var": "meta_val" } } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-action-reboot.json.tpl0000664000175000017500000000006700000000000034124 0ustar00zuulzuul00000000000000{ "reboot" : { "type" : "%(type)s" } } ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-action-rebuild-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-action-rebuild-resp.js0000664000175000017500000000270500000000000034075 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "OS-DCF:diskConfig": "AUTO", "addresses": { "private": [ { "addr": "%(ip)s", "version": 4 } ] }, "adminPass": "%(password)s", "created": "%(isotime)s", "flavor": { "id": "1", "links": [ { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ] }, "hostId": "%(hostid)s", "id": "%(uuid)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "metadata": { "meta_var": "meta_val" }, "name": "%(name)s", "progress": 0, "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000020500000000000011452 
xustar0000000000000000111 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-action-rebuild.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-action-rebuild.json.tp0000664000175000017500000000155600000000000034110 0ustar00zuulzuul00000000000000{ "rebuild" : { "accessIPv4" : "%(access_ip_v4)s", "accessIPv6" : "%(access_ip_v6)s", "OS-DCF:diskConfig": "AUTO", "imageRef" : "%(uuid)s", "name" : "%(name)s", "adminPass" : "%(pass)s", "metadata" : { "meta_var" : "meta_val" }, "personality": [ { "path": "/etc/banner.txt", "contents": "ICAgICAgDQoiQSBjbG91ZCBkb2VzIG5vdCBrbm93IHdoeSBp dCBtb3ZlcyBpbiBqdXN0IHN1Y2ggYSBkaXJlY3Rpb24gYW5k IGF0IHN1Y2ggYSBzcGVlZC4uLkl0IGZlZWxzIGFuIGltcHVs c2lvbi4uLnRoaXMgaXMgdGhlIHBsYWNlIHRvIGdvIG5vdy4g QnV0IHRoZSBza3kga25vd3MgdGhlIHJlYXNvbnMgYW5kIHRo ZSBwYXR0ZXJucyBiZWhpbmQgYWxsIGNsb3VkcywgYW5kIHlv dSB3aWxsIGtub3csIHRvbywgd2hlbiB5b3UgbGlmdCB5b3Vy c2VsZiBoaWdoIGVub3VnaCB0byBzZWUgYmV5b25kIGhvcml6 b25zLiINCg0KLVJpY2hhcmQgQmFjaA==" } ] } } ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-action-removefloatingip-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-action-removefloatingi0000664000175000017500000000010600000000000034250 0ustar00zuulzuul00000000000000{ "removeFloatingIp" : { "address": "%(address)s" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-action-resize.json.tpl0000664000175000017500000000013700000000000034131 0ustar00zuulzuul00000000000000{ "resize" : { "flavorRef" : "%(id)s", "OS-DCF:diskConfig": "AUTO" } } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-action-revert-resize.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-action-revert-resize.j0000664000175000017500000000003600000000000034116 0ustar00zuulzuul00000000000000{ "revertResize" : null } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-action-start.json.tpl0000664000175000017500000000003400000000000033761 0ustar00zuulzuul00000000000000{ "%(action)s" : null } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-action-stop.json.tpl0000664000175000017500000000003400000000000033611 0ustar00zuulzuul00000000000000{ "%(action)s" : null } ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-create-req-v237.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-create-req-v237.json.t0000664000175000017500000000222700000000000033552 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "name" : "%(name)s", "imageRef" : "%(image_id)s", "flavorRef" : "1", "availability_zone": "%(availability_zone)s", "OS-DCF:diskConfig": 
"AUTO", "metadata" : { "My Server Name" : "Apache1" }, "personality": [ { "path": "/etc/banner.txt", "contents": "ICAgICAgDQoiQSBjbG91ZCBkb2VzIG5vdCBrbm93IHdoeSBp dCBtb3ZlcyBpbiBqdXN0IHN1Y2ggYSBkaXJlY3Rpb24gYW5k IGF0IHN1Y2ggYSBzcGVlZC4uLkl0IGZlZWxzIGFuIGltcHVs c2lvbi4uLnRoaXMgaXMgdGhlIHBsYWNlIHRvIGdvIG5vdy4g QnV0IHRoZSBza3kga25vd3MgdGhlIHJlYXNvbnMgYW5kIHRo ZSBwYXR0ZXJucyBiZWhpbmQgYWxsIGNsb3VkcywgYW5kIHlv dSB3aWxsIGtub3csIHRvbywgd2hlbiB5b3UgbGlmdCB5b3Vy c2VsZiBoaWdoIGVub3VnaCB0byBzZWUgYmV5b25kIGhvcml6 b25zLiINCg0KLVJpY2hhcmQgQmFjaA==" } ], "security_groups": [ { "name": "default" } ], "user_data" : "%(user_data)s", "networks": "auto" }, "OS-SCH-HNT:scheduler_hints": { "same_host": "%(uuid)s" } } ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-create-req-v257.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-create-req-v257.json.t0000664000175000017500000000107700000000000033556 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "name" : "%(name)s", "imageRef" : "%(image_id)s", "flavorRef" : "http://openstack.example.com/flavors/1", "availability_zone": "%(availability_zone)s", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "security_groups": [ { "name": "default" } ], "user_data" : "%(user_data)s", "networks": "auto" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-create-req.json.tpl0000664000175000017500000000220200000000000033400 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "name" : "new-server-test", "imageRef" : "%(image_id)s", "flavorRef" : "1", "availability_zone": "%(availability_zone)s", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "personality": [ { "path": "/etc/banner.txt", "contents": "ICAgICAgDQoiQSBjbG91ZCBkb2VzIG5vdCBrbm93IHdoeSBp dCBtb3ZlcyBpbiBqdXN0IHN1Y2ggYSBkaXJlY3Rpb24gYW5k IGF0IHN1Y2ggYSBzcGVlZC4uLkl0IGZlZWxzIGFuIGltcHVs c2lvbi4uLnRoaXMgaXMgdGhlIHBsYWNlIHRvIGdvIG5vdy4g QnV0IHRoZSBza3kga25vd3MgdGhlIHJlYXNvbnMgYW5kIHRo ZSBwYXR0ZXJucyBiZWhpbmQgYWxsIGNsb3VkcywgYW5kIHlv dSB3aWxsIGtub3csIHRvbywgd2hlbiB5b3UgbGlmdCB5b3Vy c2VsZiBoaWdoIGVub3VnaCB0byBzZWUgYmV5b25kIGhvcml6 b25zLiINCg0KLVJpY2hhcmQgQmFjaA==" } ], "security_groups": [ { "name": "default" } ], "user_data" : "%(user_data)s" }, "OS-SCH-HNT:scheduler_hints": { "same_host": "%(uuid)s" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-create-resp.json.tpl0000664000175000017500000000100400000000000033561 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "%(password)s", "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-get-resp.json.tpl0000664000175000017500000000440200000000000033102 
0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "%(isotime)s", "flavor": { "id": "1", "links": [ { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ] }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "%(cdrive)s", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "%(compute_host)s", "OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", "OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ {"id": "volume_id1"}, {"id": "volume_id2"} ], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-update-req.json.tpl0000664000175000017500000000026600000000000033427 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "OS-DCF:diskConfig": "AUTO", "name" : "new-server-test" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/server-update-resp.json.tpl0000664000175000017500000000265400000000000033614 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "created": "%(isotime)s", "flavor": { "id": "1", "links": [ { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ] }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(id)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "progress": 0, "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/servers-details-resp.json.tpl0000664000175000017500000000534000000000000034135 0ustar00zuulzuul00000000000000{ "servers": [ { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, 
"created": "%(isotime)s", "flavor": { "id": "1", "links": [ { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ] }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "%(cdrive)s", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "%(compute_host)s", "OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", "OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ {"id": "volume_id1"}, {"id": "volume_id2"} ], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake" } ], "servers_links": [ { "href": "%(versioned_compute_endpoint)s/servers/detail?limit=1&marker=%(id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/servers-list-resp.json.tpl0000664000175000017500000000113000000000000033454 0ustar00zuulzuul00000000000000{ "servers": [ { "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(id)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "name": "new-server-test" } ], "servers_links": [ { "href": "%(versioned_compute_endpoint)s/servers?limit=1&marker=%(id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/servers-list-status-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/servers-list-status-resp.json0000664000175000017500000000115200000000000034203 0ustar00zuulzuul00000000000000{ "servers": [ { "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(id)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "name": "new-server-test" } ], "servers_links": [ { "href": "%(versioned_compute_endpoint)s/servers?limit=1&status=%(status)s&marker=%(id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4704688 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.16/0000775000175000017500000000000000000000000027133 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.16/server-get-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.16/server-get-resp.json.tp0000664000175000017500000000532500000000000033507 0ustar00zuulzuul00000000000000{ "server": { "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "%(compute_host)s", 
"OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", "OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:kernel_id": null, "OS-EXT-SRV-ATTR:ramdisk_id": null, "OS-EXT-SRV-ATTR:user_data": "%(user_data)s", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ {"id": "volume_id1", "delete_on_termination": false}, {"id": "volume_id2", "delete_on_termination": false} ], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "security_groups": [ { "name": "default" } ], "locked": false, "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "updated": "%(isotime)s", "created": "%(isotime)s", "addresses": { "private": [ { "addr": "%(ip)s", "version": 4, "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed" } ] }, "flavor": { "id": "1", "links": [ { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ] }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "progress": 0, "status": "ACTIVE", "host_status": "UP", "tenant_id": "6f70656e737461636b20342065766572", "user_id": "fake", "key_name": null } } ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.16/servers-details-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.16/servers-details-resp.js0000664000175000017500000000632000000000000033555 0ustar00zuulzuul00000000000000{ "servers": [ { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "%(isotime)s", "flavor": { "id": "1", "links": [ { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ] }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "%(compute_host)s", "OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", "OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:hostname": "%(hostname)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "%(user_data)s", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", 
"os-extended-volumes:volumes_attached": [ {"id": "volume_id1", "delete_on_termination": false}, {"id": "volume_id2", "delete_on_termination": false} ], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "host_status": "UP", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake", "locked": false } ], "servers_links": [ { "href": "%(versioned_compute_endpoint)s/servers/detail?limit=1&marker=%(id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.16/servers-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.16/servers-list-resp.json.0000664000175000017500000000113000000000000033510 0ustar00zuulzuul00000000000000{ "servers": [ { "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(id)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "name": "new-server-test" } ], "servers_links": [ { "href": "%(versioned_compute_endpoint)s/servers?limit=1&marker=%(id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4704688 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.17/0000775000175000017500000000000000000000000027134 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.17/server-action-trigger-crash-dump.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.17/server-action-trigger-c0000664000175000017500000000004300000000000033516 0ustar00zuulzuul00000000000000{ "trigger_crash_dump": null } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4744687 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.19/0000775000175000017500000000000000000000000027136 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.19/server-action-rebuild-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.19/server-action-rebuild-r0000664000175000017500000000301000000000000033517 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "version": 4 } ] }, "adminPass": "%(password)s", "created": "%(isotime)s", "flavor": { "id": "1", "links": [ { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ] }, "hostId": "%(hostid)s", "id": "%(uuid)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "locked": false, "metadata": { "meta_var": "meta_val" }, "name": "%(name)s", "description": "%(description)s", "progress": 0, "OS-DCF:diskConfig": "AUTO", "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", 
"updated": "%(isotime)s", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.19/server-action-rebuild.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.19/server-action-rebuild.j0000664000175000017500000000050200000000000033513 0ustar00zuulzuul00000000000000{ "rebuild" : { "accessIPv4" : "%(access_ip_v4)s", "accessIPv6" : "%(access_ip_v6)s", "imageRef" : "%(uuid)s", "name" : "%(name)s", "description" : "%(description)s", "adminPass" : "%(pass)s", "metadata" : { "meta_var" : "meta_val" } } } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.19/server-create-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.19/server-create-req.json.0000664000175000017500000000054000000000000033442 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "name" : "new-server-test", "description" : "new-server-description", "imageRef" : "%(image_id)s", "flavorRef" : "%(host)s/flavors/1", "metadata" : { "My Server Name" : "Apache1" } } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.19/server-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.19/server-create-resp.json0000664000175000017500000000100400000000000033542 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "%(password)s", "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } } ././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.19/server-get-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.19/server-get-resp.json.tp0000664000175000017500000000535700000000000033517 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "%(isotime)s", "flavor": { "id": "1", "links": [ { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ] }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "description": "new-server-description", "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "nova", "OS-EXT-SRV-ATTR:host": "%(compute_host)s", "OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", "OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-SRV-ATTR:reservation_id": 
"%(reservation_id)s", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:hostname": "%(hostname)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ {"id": "volume_id1", "delete_on_termination": false}, {"id": "volume_id2", "delete_on_termination": false} ], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "host_status": "UP", "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake", "locked": false } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.19/server-put-req.json.tpl0000664000175000017500000000017000000000000033526 0ustar00zuulzuul00000000000000{ "server" : { "name" : "updated-server-test", "description" : "updated-server-description" } } ././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.19/server-put-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.19/server-put-resp.json.tp0000664000175000017500000000277400000000000033550 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "OS-DCF:diskConfig": "AUTO", "addresses": { "private": [ { "addr": "%(ip)s", "version": 4 } ] }, "created": "%(isotime)s", "flavor": { "id": "1", "links": [ { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ] }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "updated-server-test", "description": "updated-server-description", "progress": 0, "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake", "locked": false } } ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.19/servers-details-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.19/servers-details-resp.js0000664000175000017500000000636500000000000033571 0ustar00zuulzuul00000000000000{ "servers": [ { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "%(isotime)s", "flavor": { "id": "1", "links": [ { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ] }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" 
} ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "description": "new-server-description", "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "nova", "OS-EXT-SRV-ATTR:host": "%(compute_host)s", "OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", "OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:hostname": "%(hostname)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ {"id": "volume_id1", "delete_on_termination": false}, {"id": "volume_id2", "delete_on_termination": false} ], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "host_status": "UP", "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake", "locked": false } ], "servers_links": [ { "href": "%(versioned_compute_endpoint)s/servers/detail?limit=1&marker=%(id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.19/servers-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.19/servers-list-resp.json.0000664000175000017500000000113000000000000033513 0ustar00zuulzuul00000000000000{ "servers": [ { "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(id)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "name": "new-server-test" } ], "servers_links": [ { "href": "%(versioned_compute_endpoint)s/servers?limit=1&marker=%(id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4744687 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.26/0000775000175000017500000000000000000000000027134 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.26/server-action-rebuild-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.26/server-action-rebuild-r0000664000175000017500000000306500000000000033527 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "version": 4 } ] }, "adminPass": "%(password)s", "created": "%(isotime)s", "flavor": { "id": "1", "links": [ { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ] }, "hostId": "%(hostid)s", "id": "%(uuid)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "metadata": { "meta_var": "meta_val" }, "name": "%(name)s", "OS-DCF:diskConfig": "%(disk_config)s", "progress": 0, "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": 
"%(isotime)s", "user_id": "fake", "locked": false, "description": "%(description)s", "tags": ["tag1", "tag2"] } } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.26/server-action-rebuild.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.26/server-action-rebuild.j0000664000175000017500000000173500000000000033522 0ustar00zuulzuul00000000000000{ "rebuild" : { "imageRef" : "%(uuid)s", "accessIPv4" : "%(access_ip_v4)s", "accessIPv6" : "%(access_ip_v6)s", "adminPass" : "%(pass)s", "metadata" : { "meta_var" : "meta_val" }, "name" : "%(name)s", "OS-DCF:diskConfig": "%(disk_config)s", "personality" : [ { "path" : "/etc/banner.txt", "contents" : "ICAgICAgDQoiQSBjbG91ZCBkb2VzIG5vdCBrbm93IHdoeSBp dCBtb3ZlcyBpbiBqdXN0IHN1Y2ggYSBkaXJlY3Rpb24gYW5k IGF0IHN1Y2ggYSBzcGVlZC4uLkl0IGZlZWxzIGFuIGltcHVs c2lvbi4uLnRoaXMgaXMgdGhlIHBsYWNlIHRvIGdvIG5vdy4g QnV0IHRoZSBza3kga25vd3MgdGhlIHJlYXNvbnMgYW5kIHRo ZSBwYXR0ZXJucyBiZWhpbmQgYWxsIGNsb3VkcywgYW5kIHlv dSB3aWxsIGtub3csIHRvbywgd2hlbiB5b3UgbGlmdCB5b3Vy c2VsZiBoaWdoIGVub3VnaCB0byBzZWUgYmV5b25kIGhvcml6 b25zLiINCg0KLVJpY2hhcmQgQmFjaA==" } ], "preserve_ephemeral": %(preserve_ephemeral)s, "description" : "%(description)s" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4744687 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.3/0000775000175000017500000000000000000000000027047 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.3/server-get-resp.json.tpl0000664000175000017500000000523000000000000033572 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "%(isotime)s", "flavor": { "id": "1", "links": [ { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ] }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "%(compute_host)s", "OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", "OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:hostname": "%(hostname)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "%(user_data)s", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ {"id": "volume_id1", "delete_on_termination": false}, {"id": "volume_id2", "delete_on_termination": false} ], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "progress": 0, 
"security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000021100000000000011447 xustar0000000000000000115 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.3/servers-details-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.3/servers-details-resp.jso0000664000175000017500000000622200000000000033651 0ustar00zuulzuul00000000000000{ "servers": [ { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "%(isotime)s", "flavor": { "id": "1", "links": [ { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ] }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "%(compute_host)s", "OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", "OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:hostname": "%(hostname)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "%(user_data)s", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ {"id": "volume_id1", "delete_on_termination": false}, {"id": "volume_id2", "delete_on_termination": false} ], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake" } ], "servers_links": [ { "href": "%(versioned_compute_endpoint)s/servers/detail?limit=1&marker=%(id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.3/servers-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.3/servers-list-resp.json.t0000664000175000017500000000113000000000000033610 0ustar00zuulzuul00000000000000{ "servers": [ { "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(id)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "name": "new-server-test" } ], "servers_links": [ { "href": "%(versioned_compute_endpoint)s/servers?limit=1&marker=%(id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4744687 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.32/0000775000175000017500000000000000000000000027131 
5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.32/server-create-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.32/server-create-req.json.0000664000175000017500000000074700000000000033446 0ustar00zuulzuul00000000000000{ "server" : { "name" : "device-tagging-server", "flavorRef" : "%(host)s/flavors/1", "networks" : [{ "uuid" : "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "tag": "nic1" }], "block_device_mapping_v2": [{ "uuid": "%(image_id)s", "source_type": "image", "destination_type": "volume", "boot_index": 0, "volume_size": "1", "tag": "disk1" }] } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.32/server-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.32/server-create-resp.json0000664000175000017500000000100400000000000033535 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "%(password)s", "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4744687 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.37/0000775000175000017500000000000000000000000027136 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.37/server-create-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.37/server-create-req.json.0000664000175000017500000000025500000000000033445 0ustar00zuulzuul00000000000000{ "server": { "name": "auto-allocate-network", "imageRef": "%(image_id)s", "flavorRef": "%(host)s/flavors/1", "networks": "auto" } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.37/server-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.37/server-create-resp.json0000664000175000017500000000100400000000000033542 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "%(password)s", "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4744687 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.42/0000775000175000017500000000000000000000000027132 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.42/server-create-req.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.42/server-create-req.json.0000664000175000017500000000074700000000000033447 0ustar00zuulzuul00000000000000{ "server" : { "name" : "device-tagging-server", "flavorRef" : "%(host)s/flavors/1", "networks" : [{ "uuid" : "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "tag": "nic1" }], "block_device_mapping_v2": [{ "uuid": "%(image_id)s", "source_type": "image", "destination_type": "volume", "boot_index": 0, "volume_size": "1", "tag": "disk1" }] } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.42/server-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.42/server-create-resp.json0000664000175000017500000000100400000000000033536 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "%(password)s", "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4784687 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.45/0000775000175000017500000000000000000000000027135 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.45/server-action-create-image-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.45/server-action-create-im0000664000175000017500000000004000000000000033477 0ustar00zuulzuul00000000000000{ "image_id": "%(uuid)s" } ././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.45/server-action-create-image.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.45/server-action-create-im0000664000175000017500000000020100000000000033476 0ustar00zuulzuul00000000000000{ "createImage" : { "name" : "%(name)s", "metadata": { "meta_var": "meta_val" } } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4784687 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.47/0000775000175000017500000000000000000000000027137 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.47/server-action-rebuild-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.47/server-action-rebuild-r0000664000175000017500000000301300000000000033523 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "version": 4 } ] }, "adminPass": "%(password)s", "created": "%(isotime)s", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "%(hostid)s", "id": "%(uuid)s", "image": { "id": "%(uuid)s", "links": [ { "href": 
"%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "locked": false, "metadata": { "meta_var": "meta_val" }, "name": "%(name)s", "description": null, "progress": 0, "OS-DCF:diskConfig": "AUTO", "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.47/server-action-rebuild.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.47/server-action-rebuild.j0000664000175000017500000000155600000000000033526 0ustar00zuulzuul00000000000000{ "rebuild" : { "accessIPv4" : "%(access_ip_v4)s", "accessIPv6" : "%(access_ip_v6)s", "OS-DCF:diskConfig": "AUTO", "imageRef" : "%(uuid)s", "name" : "%(name)s", "adminPass" : "%(pass)s", "metadata" : { "meta_var" : "meta_val" }, "personality": [ { "path": "/etc/banner.txt", "contents": "ICAgICAgDQoiQSBjbG91ZCBkb2VzIG5vdCBrbm93IHdoeSBp dCBtb3ZlcyBpbiBqdXN0IHN1Y2ggYSBkaXJlY3Rpb24gYW5k IGF0IHN1Y2ggYSBzcGVlZC4uLkl0IGZlZWxzIGFuIGltcHVs c2lvbi4uLnRoaXMgaXMgdGhlIHBsYWNlIHRvIGdvIG5vdy4g QnV0IHRoZSBza3kga25vd3MgdGhlIHJlYXNvbnMgYW5kIHRo ZSBwYXR0ZXJucyBiZWhpbmQgYWxsIGNsb3VkcywgYW5kIHlv dSB3aWxsIGtub3csIHRvbywgd2hlbiB5b3UgbGlmdCB5b3Vy c2VsZiBoaWdoIGVub3VnaCB0byBzZWUgYmV5b25kIGhvcml6 b25zLiINCg0KLVJpY2hhcmQgQmFjaA==" } ] } } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.47/server-create-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.47/server-create-req.json.0000664000175000017500000000222700000000000033447 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "name" : "%(name)s", "imageRef" : "%(image_id)s", "flavorRef" : "6", "availability_zone": "%(availability_zone)s", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "personality": [ { "path": "/etc/banner.txt", "contents": "ICAgICAgDQoiQSBjbG91ZCBkb2VzIG5vdCBrbm93IHdoeSBp dCBtb3ZlcyBpbiBqdXN0IHN1Y2ggYSBkaXJlY3Rpb24gYW5k IGF0IHN1Y2ggYSBzcGVlZC4uLkl0IGZlZWxzIGFuIGltcHVs c2lvbi4uLnRoaXMgaXMgdGhlIHBsYWNlIHRvIGdvIG5vdy4g QnV0IHRoZSBza3kga25vd3MgdGhlIHJlYXNvbnMgYW5kIHRo ZSBwYXR0ZXJucyBiZWhpbmQgYWxsIGNsb3VkcywgYW5kIHlv dSB3aWxsIGtub3csIHRvbywgd2hlbiB5b3UgbGlmdCB5b3Vy c2VsZiBoaWdoIGVub3VnaCB0byBzZWUgYmV5b25kIGhvcml6 b25zLiINCg0KLVJpY2hhcmQgQmFjaA==" } ], "security_groups": [ { "name": "default" } ], "user_data" : "%(user_data)s", "networks": "auto" }, "OS-SCH-HNT:scheduler_hints": { "same_host": "%(uuid)s" } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.47/server-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.47/server-create-resp.json0000664000175000017500000000100400000000000033543 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "%(password)s", "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": 
"%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } } ././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.47/server-get-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.47/server-get-resp.json.tp0000664000175000017500000000547500000000000033521 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "%(isotime)s", "description": null, "host_status": "UP", "locked": false, "tags": [], "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": { "hw:numa_nodes": "1" }, "original_name": "m1.tiny.specs", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "%(cdrive)s", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "%(compute_host)s", "OS-EXT-SRV-ATTR:hostname": "%(hostname)s", "OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", "OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "%(user_data)s", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ {"id": "volume_id1", "delete_on_termination": false}, {"id": "volume_id2", "delete_on_termination": false} ], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.47/server-update-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.47/server-update-req.json.0000664000175000017500000000034200000000000033462 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "OS-DCF:diskConfig": "AUTO", "name": "new-server-test", "description": "Sample description" } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.47/server-update-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.47/server-update-resp.json0000664000175000017500000000300200000000000033562 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", 
"addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "created": "%(isotime)s", "description": "Sample description", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(id)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "locked": false, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "progress": 0, "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.47/servers-details-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.47/servers-details-resp.js0000664000175000017500000000651700000000000033571 0ustar00zuulzuul00000000000000{ "servers": [ { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "%(isotime)s", "description": null, "host_status": "UP", "locked": false, "tags": [], "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": { "hw:numa_nodes": "1" }, "original_name": "m1.tiny.specs", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "%(cdrive)s", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "%(compute_host)s", "OS-EXT-SRV-ATTR:hostname": "%(hostname)s", "OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", "OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "%(user_data)s", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ {"id": "volume_id1", "delete_on_termination": false}, {"id": "volume_id2", "delete_on_termination": false} ], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake" } ], "servers_links": [ { "href": "%(versioned_compute_endpoint)s/servers/detail?limit=1&marker=%(id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.47/servers-list-resp.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.47/servers-list-resp.json.0000664000175000017500000000113000000000000033514 0ustar00zuulzuul00000000000000{ "servers": [ { "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(id)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "name": "new-server-test" } ], "servers_links": [ { "href": "%(versioned_compute_endpoint)s/servers?limit=1&marker=%(id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4784687 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.52/0000775000175000017500000000000000000000000027133 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.52/server-create-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.52/server-create-req.json.0000664000175000017500000000234500000000000033444 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "name" : "new-server-test", "imageRef" : "%(image_id)s", "flavorRef" : "http://openstack.example.com/flavors/1", "availability_zone": "%(availability_zone)s", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "personality": [ { "path": "/etc/banner.txt", "contents": "ICAgICAgDQoiQSBjbG91ZCBkb2VzIG5vdCBrbm93IHdoeSBp dCBtb3ZlcyBpbiBqdXN0IHN1Y2ggYSBkaXJlY3Rpb24gYW5k IGF0IHN1Y2ggYSBzcGVlZC4uLkl0IGZlZWxzIGFuIGltcHVs c2lvbi4uLnRoaXMgaXMgdGhlIHBsYWNlIHRvIGdvIG5vdy4g QnV0IHRoZSBza3kga25vd3MgdGhlIHJlYXNvbnMgYW5kIHRo ZSBwYXR0ZXJucyBiZWhpbmQgYWxsIGNsb3VkcywgYW5kIHlv dSB3aWxsIGtub3csIHRvbywgd2hlbiB5b3UgbGlmdCB5b3Vy c2VsZiBoaWdoIGVub3VnaCB0byBzZWUgYmV5b25kIGhvcml6 b25zLiINCg0KLVJpY2hhcmQgQmFjaA==" } ], "security_groups": [ { "name": "default" } ], "user_data" : "%(user_data)s", "networks": "auto", "tags": ["tag1", "tag2"] }, "OS-SCH-HNT:scheduler_hints": { "same_host": "%(uuid)s" } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.52/server-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.52/server-create-resp.json0000664000175000017500000000100400000000000033537 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "%(password)s", "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } } ././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.52/server-get-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.52/server-get-resp.json.tp0000664000175000017500000000542300000000000033506 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "%(isotime)s", "description": null, 
"host_status": "UP", "locked": false, "tags": ["tag1", "tag2"], "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "%(cdrive)s", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "%(compute_host)s", "OS-EXT-SRV-ATTR:hostname": "%(hostname)s", "OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", "OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "%(user_data)s", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ {"id": "volume_id1", "delete_on_termination": false}, {"id": "volume_id2", "delete_on_termination": false} ], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.52/servers-details-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.52/servers-details-resp.js0000664000175000017500000000643500000000000033564 0ustar00zuulzuul00000000000000{ "servers": [ { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "%(isotime)s", "description": null, "host_status": "UP", "locked": false, "tags": ["tag1", "tag2"], "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "%(cdrive)s", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "%(compute_host)s", "OS-EXT-SRV-ATTR:hostname": "%(hostname)s", "OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", "OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "%(user_data)s", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, 
"OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ {"id": "volume_id1", "delete_on_termination": false}, {"id": "volume_id2", "delete_on_termination": false} ], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake" } ], "servers_links": [ { "href": "%(versioned_compute_endpoint)s/servers/detail?limit=1&marker=%(id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.52/servers-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.52/servers-list-resp.json.0000664000175000017500000000113000000000000033510 0ustar00zuulzuul00000000000000{ "servers": [ { "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(id)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "name": "new-server-test" } ], "servers_links": [ { "href": "%(versioned_compute_endpoint)s/servers?limit=1&marker=%(id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4784687 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.54/0000775000175000017500000000000000000000000027135 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.54/server-action-rebuild-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.54/server-action-rebuild-r0000664000175000017500000000307400000000000033530 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "version": 4 } ] }, "adminPass": "%(password)s", "created": "%(isotime)s", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "%(hostid)s", "id": "%(uuid)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "locked": false, "metadata": { "meta_var": "meta_val" }, "name": "%(name)s", "key_name": "%(key_name)s", "description": "%(description)s", "progress": 0, "OS-DCF:diskConfig": "AUTO", "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake", "tags": [] } } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.54/server-action-rebuild.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.54/server-action-rebuild.j0000664000175000017500000000054700000000000033523 0ustar00zuulzuul00000000000000{ "rebuild" : { "accessIPv4" : "%(access_ip_v4)s", "accessIPv6" : "%(access_ip_v6)s", "imageRef" : "%(uuid)s", "name" : "%(name)s", "key_name" : "%(key_name)s", "description" : "%(description)s", "adminPass" : 
"%(pass)s", "metadata" : { "meta_var" : "meta_val" } } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4784687 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.57/0000775000175000017500000000000000000000000027140 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.57/server-action-rebuild-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.57/server-action-rebuild-r0000664000175000017500000000315500000000000033533 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "version": 4 } ] }, "adminPass": "%(password)s", "created": "%(isotime)s", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "%(hostid)s", "id": "%(uuid)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "locked": false, "metadata": { "meta_var": "meta_val" }, "name": "%(name)s", "key_name": "%(key_name)s", "description": "%(description)s", "progress": 0, "OS-DCF:diskConfig": "AUTO", "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake", "tags": [], "user_data": "ZWNobyAiaGVsbG8gd29ybGQi" } } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.57/server-action-rebuild.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.57/server-action-rebuild.j0000664000175000017500000000063000000000000033517 0ustar00zuulzuul00000000000000{ "rebuild" : { "accessIPv4" : "%(access_ip_v4)s", "accessIPv6" : "%(access_ip_v6)s", "imageRef" : "%(uuid)s", "name" : "%(name)s", "key_name" : "%(key_name)s", "description" : "%(description)s", "adminPass" : "%(pass)s", "metadata" : { "meta_var" : "meta_val" }, "user_data": "ZWNobyAiaGVsbG8gd29ybGQi" } } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.57/server-create-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.57/server-create-req.json.0000664000175000017500000000110600000000000033443 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "name" : "new-server-test", "imageRef" : "%(image_id)s", "flavorRef" : "http://openstack.example.com/flavors/1", "availability_zone": "%(availability_zone)s", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "security_groups": [ { "name": "default" } ], "user_data" : "%(user_data)s", "networks": "auto" } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.57/server-create-resp.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.57/server-create-resp.json0000664000175000017500000000100400000000000033544 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "%(password)s", "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4824686 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.63/0000775000175000017500000000000000000000000027135 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.63/server-action-rebuild-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.63/server-action-rebuild-r0000664000175000017500000000347700000000000033537 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "version": 4 } ] }, "adminPass": "%(password)s", "created": "%(isotime)s", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": { "hw:numa_nodes": "1" }, "original_name": "m1.tiny.specs", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "%(hostid)s", "id": "%(uuid)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "locked": false, "metadata": { "meta_var": "meta_val" }, "name": "%(name)s", "key_name": "%(key_name)s", "description": "%(description)s", "progress": 0, "OS-DCF:diskConfig": "AUTO", "status": "ACTIVE", "tags": [], "user_data": "ZWNobyAiaGVsbG8gd29ybGQi", "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": [ "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8", "674736e3-f25c-405c-8362-bbf991e0ce0a" ], "updated": "%(isotime)s", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.63/server-action-rebuild.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.63/server-action-rebuild.j0000664000175000017500000000112700000000000033516 0ustar00zuulzuul00000000000000{ "rebuild" : { "accessIPv4" : "%(access_ip_v4)s", "accessIPv6" : "%(access_ip_v6)s", "OS-DCF:diskConfig": "AUTO", "imageRef" : "%(uuid)s", "name" : "%(name)s", "key_name" : "%(key_name)s", "description" : "%(description)s", "adminPass" : "%(pass)s", "metadata" : { "meta_var" : "meta_val" }, "user_data": "ZWNobyAiaGVsbG8gd29ybGQi", "trusted_image_certificates": [ "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8", "674736e3-f25c-405c-8362-bbf991e0ce0a" ] } } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.63/server-create-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.63/server-create-req.json.0000664000175000017500000000136100000000000033443 
0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "name" : "%(name)s", "imageRef" : "%(image_id)s", "flavorRef" : "6", "availability_zone": "us-west", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "security_groups": [ { "name": "default" } ], "user_data" : "%(user_data)s", "networks": "auto", "trusted_image_certificates": [ "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8", "674736e3-f25c-405c-8362-bbf991e0ce0a" ] }, "OS-SCH-HNT:scheduler_hints": { "same_host": "%(uuid)s" } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.63/server-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.63/server-create-resp.json0000664000175000017500000000100400000000000033541 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "%(password)s", "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } } ././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.63/server-get-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.63/server-get-resp.json.tp0000664000175000017500000000551300000000000033510 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "%(isotime)s", "description": null, "host_status": "UP", "locked": false, "tags": [], "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": { "hw:numa_nodes": "1" }, "original_name": "m1.tiny.specs", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "%(cdrive)s", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "%(compute_host)s", "OS-EXT-SRV-ATTR:hostname": "%(hostname)s", "OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", "OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "%(user_data)s", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake", "trusted_image_certificates": [ 
"0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8", "674736e3-f25c-405c-8362-bbf991e0ce0a" ] } } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.63/server-update-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.63/server-update-req.json.0000664000175000017500000000034200000000000033460 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "OS-DCF:diskConfig": "AUTO", "name": "new-server-test", "description": "Sample description" } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.63/server-update-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.63/server-update-resp.json0000664000175000017500000000333400000000000033570 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "created": "%(isotime)s", "description": "Sample description", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": { "hw:numa_nodes": "1" }, "original_name": "m1.tiny.specs", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(id)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "locked": false, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "progress": 0, "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": [ "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8", "674736e3-f25c-405c-8362-bbf991e0ce0a" ], "updated": "%(isotime)s", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.63/servers-details-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.63/servers-details-resp.js0000664000175000017500000000654100000000000033564 0ustar00zuulzuul00000000000000{ "servers": [ { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "%(isotime)s", "description": null, "host_status": "UP", "locked": false, "tags": [], "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": { "hw:numa_nodes": "1" }, "original_name": "m1.tiny.specs", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "%(cdrive)s", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": 
"%(compute_host)s", "OS-EXT-SRV-ATTR:hostname": "%(hostname)s", "OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", "OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "%(user_data)s", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": [ "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8", "674736e3-f25c-405c-8362-bbf991e0ce0a" ], "updated": "%(isotime)s", "user_id": "fake" } ], "servers_links": [ { "href": "%(versioned_compute_endpoint)s/servers/detail?limit=1&marker=%(id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4824686 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.66/0000775000175000017500000000000000000000000027140 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.66/server-create-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.66/server-create-req.json.0000664000175000017500000000137700000000000033455 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "name" : "%(name)s", "imageRef" : "%(image_id)s", "flavorRef" : "6", "availability_zone": "%(availability_zone)s", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "security_groups": [ { "name": "default" } ], "user_data" : "%(user_data)s", "networks": "auto", "trusted_image_certificates": [ "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8", "674736e3-f25c-405c-8362-bbf991e0ce0a" ] }, "OS-SCH-HNT:scheduler_hints": { "same_host": "%(uuid)s" } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.66/server-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.66/server-create-resp.json0000664000175000017500000000100400000000000033544 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "%(password)s", "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } } ././@PaxHeader0000000000000000000000000000023100000000000011451 xustar0000000000000000131 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.66/servers-details-with-changes-before.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.66/servers-details-with-ch0000664000175000017500000000627200000000000033547 0ustar00zuulzuul00000000000000{ "servers": [ { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", 
"OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "%(isotime)s", "description": null, "host_status": "UP", "locked": false, "tags": [], "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": { "hw:numa_nodes": "1" }, "original_name": "m1.tiny.specs", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "%(cdrive)s", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "%(compute_host)s", "OS-EXT-SRV-ATTR:hostname": "%(hostname)s", "OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", "OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "%(user_data)s", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": [ "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8", "674736e3-f25c-405c-8362-bbf991e0ce0a" ], "updated": "%(isotime)s", "user_id": "fake" } ] } ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.66/servers-list-with-changes-before.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.66/servers-list-with-chang0000664000175000017500000000067000000000000033557 0ustar00zuulzuul00000000000000{ "servers": [ { "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(id)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "name": "new-server-test" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4824686 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.67/0000775000175000017500000000000000000000000027141 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.67/server-create-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.67/server-create-req.json.0000664000175000017500000000102100000000000033440 0ustar00zuulzuul00000000000000{ "server" : { "name" : "bfv-server-with-volume-type", "flavorRef" : "%(host)s/flavors/1", "networks" : [{ "uuid" : "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "tag": "nic1" }], "block_device_mapping_v2": [{ "uuid": "%(image_id)s", "source_type": "image", "destination_type": "volume", "boot_index": 0, "volume_size": "1", "tag": "disk1", "volume_type": "lvm-1" }] } } 
././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.67/server-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.67/server-create-resp.json0000664000175000017500000000100400000000000033545 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "%(password)s", "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4824686 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.69/0000775000175000017500000000000000000000000027143 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.69/server-create-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.69/server-create-req.json.0000664000175000017500000000075300000000000033455 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "name" : "new-server-test", "imageRef" : "%(image_id)s", "flavorRef" : "1", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "security_groups": [ { "name": "default" } ], "user_data" : "%(user_data)s", "networks": "auto" } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.69/server-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.69/server-create-resp.json0000664000175000017500000000100300000000000033546 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "%(password)s", "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } }././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.69/server-get-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.69/server-get-resp.json.tp0000664000175000017500000000212300000000000033510 0ustar00zuulzuul00000000000000{ "server": { "OS-EXT-AZ:availability_zone": "UNKNOWN", "OS-EXT-STS:power_state": 0, "created": "%(isotime)s", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "id": "%(id)s", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "status": "UNKNOWN", "tenant_id": "project", "user_id": "fake", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ] } 
}././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.69/servers-details-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.69/servers-details-resp.js0000664000175000017500000000103300000000000033561 0ustar00zuulzuul00000000000000{ "servers": [ { "created": "%(isotime)s", "id": "%(uuid)s", "status": "UNKNOWN", "tenant_id": "6f70656e737461636b20342065766572", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ] } ] } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.69/servers-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.69/servers-list-resp.json.0000664000175000017500000000066700000000000033536 0ustar00zuulzuul00000000000000{ "servers": [ { "id": "%(uuid)s", "status": "UNKNOWN", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ] } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4824686 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.71/0000775000175000017500000000000000000000000027134 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.71/server-action-rebuild-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.71/server-action-rebuild-r0000664000175000017500000000325100000000000033524 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "version": 4 } ] }, "adminPass": "%(password)s", "created": "%(isotime)s", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "%(hostid)s", "id": "%(uuid)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "locked": false, "metadata": { "meta_var": "meta_val" }, "server_groups": ["%(uuid)s"], "trusted_image_certificates": null, "name": "%(name)s", "description": null, "progress": 0, "OS-DCF:diskConfig": "AUTO", "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake", "user_data": "ZWNobyAiaGVsbG8gd29ybGQi" } } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.71/server-action-rebuild.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.71/server-action-rebuild.j0000664000175000017500000000055500000000000033521 0ustar00zuulzuul00000000000000{ "rebuild" : { "accessIPv4" : "%(access_ip_v4)s", "accessIPv6" : "%(access_ip_v6)s", 
"OS-DCF:diskConfig": "AUTO", "imageRef" : "%(uuid)s", "name" : "%(name)s", "adminPass" : "%(pass)s", "metadata" : { "meta_var" : "meta_val" }, "user_data": "ZWNobyAiaGVsbG8gd29ybGQi" } } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.71/server-create-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.71/server-create-req.json.0000664000175000017500000000106500000000000033443 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "name" : "new-server-test", "imageRef" : "%(image_id)s", "flavorRef" : "1", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "security_groups": [ { "name": "default" } ], "user_data" : "%(user_data)s", "networks": "auto" }, "OS-SCH-HNT:scheduler_hints": { "group": "%(sg_uuid)s" } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.71/server-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.71/server-create-resp.json0000664000175000017500000000100300000000000033537 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "%(password)s", "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } }././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.71/server-get-down-cell-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.71/server-get-down-cell-re0000664000175000017500000000217300000000000033433 0ustar00zuulzuul00000000000000{ "server": { "OS-EXT-STS:power_state": 0, "OS-EXT-AZ:availability_zone": "UNKNOWN", "created": "%(isotime)s", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "id": "%(id)s", "image": { "id": "70a599e0-31e7-49b7-b260-868f441e862b", "links": [ { "href": "http://openstack.example.com/6f70656e737461636b20342065766572/images/70a599e0-31e7-49b7-b260-868f441e862b", "rel": "bookmark" } ] }, "status": "UNKNOWN", "server_groups": ["%(uuid)s"], "tenant_id": "project", "user_id": "fake", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ] } } ././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.71/server-get-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.71/server-get-resp.json.tp0000664000175000017500000000531100000000000033503 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "%(isotime)s", "description": null, "host_status": "UP", "locked": false, "tags": [], "flavor": { 
"disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "%(cdrive)s", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "nova", "OS-EXT-SRV-ATTR:host": "%(compute_host)s", "OS-EXT-SRV-ATTR:hostname": "%(hostname)s", "OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", "OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "%(user_data)s", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "server_groups": ["%(uuid)s"], "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": null, "updated": "%(isotime)s", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.71/server-groups-post-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.71/server-groups-post-req.0000664000175000017500000000013100000000000033523 0ustar00zuulzuul00000000000000{ "server_group": { "name": "%(name)s", "policy": "affinity" } } ././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.71/server-groups-post-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.71/server-groups-post-resp0000664000175000017500000000035500000000000033637 0ustar00zuulzuul00000000000000{ "server_group": { "id": "%(id)s", "members": [], "name": "test", "policy": "affinity", "project_id": "6f70656e737461636b20342065766572", "rules": {}, "user_id": "fake" } }././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.71/server-update-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.71/server-update-req.json.0000664000175000017500000000034200000000000033457 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "OS-DCF:diskConfig": "AUTO", "name": "new-server-test", "description": "Sample description" } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.71/server-update-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.71/server-update-resp.json0000664000175000017500000000314200000000000033564 
0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "created": "%(isotime)s", "description": "Sample description", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": { }, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(id)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "locked": false, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "progress": 0, "status": "ACTIVE", "server_groups": ["%(uuid)s"], "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": null, "updated": "%(isotime)s", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4864686 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.73/0000775000175000017500000000000000000000000027136 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.73/lock-server-with-reason.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.73/lock-server-with-reason0000664000175000017500000000007200000000000033552 0ustar00zuulzuul00000000000000{ "lock": {"locked_reason": "I don't want to work"} } ././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.73/server-action-rebuild-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.73/server-action-rebuild-r0000664000175000017500000000332300000000000033526 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "adminPass": "seekr3t", "created": "%(isotime)s", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "locked": false, "locked_reason": null, "metadata": { "meta_var": "meta_val" }, "name": "foobar", "progress": 0, "server_groups": [], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": null, "updated": "%(isotime)s", "user_data": "ZWNobyAiaGVsbG8gd29ybGQi", "user_id": "fake" } }././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.73/server-action-rebuild.json.tpl 22 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.73/server-action-rebuild.j0000664000175000017500000000055500000000000033523 0ustar00zuulzuul00000000000000{ "rebuild" : { "accessIPv4" : "%(access_ip_v4)s", "accessIPv6" : "%(access_ip_v6)s", "OS-DCF:diskConfig": "AUTO", "imageRef" : "%(uuid)s", "name" : "%(name)s", "adminPass" : "%(pass)s", "metadata" : { "meta_var" : "meta_val" }, "user_data": "ZWNobyAiaGVsbG8gd29ybGQi" } } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.73/server-create-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.73/server-create-req.json.0000664000175000017500000000075300000000000033450 0ustar00zuulzuul00000000000000{ "server" : { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "name" : "new-server-test", "imageRef" : "%(image_id)s", "flavorRef" : "1", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "security_groups": [ { "name": "default" } ], "user_data" : "%(user_data)s", "networks": "auto" } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.73/server-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.73/server-create-resp.json0000664000175000017500000000100300000000000033541 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "%(password)s", "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } }././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.73/server-get-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.73/server-get-resp.json.tp0000664000175000017500000000543200000000000033511 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "nova", "OS-EXT-SRV-ATTR:host": "compute", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "192.168.1.30", "version": 4 } ] }, "config_drive": "", "created": "%(isotime)s", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "host_status": "UP", "id": "%(id)s", "image": { "id": "%(uuid)s", 
"links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "locked": true, "locked_reason": "I don't want to work", "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "os-extended-volumes:volumes_attached": [], "progress": 0, "security_groups": [ { "name": "default" } ], "server_groups": [], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": null, "updated": "%(isotime)s", "user_id": "fake" } }././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.73/server-update-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.73/server-update-req.json.0000664000175000017500000000034200000000000033461 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "OS-DCF:diskConfig": "AUTO", "name": "new-server-test", "description": "Sample description" } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.73/server-update-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.73/server-update-resp.json0000664000175000017500000000320600000000000033567 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "addr": "192.168.1.30", "version": 4 } ] }, "created": "%(isotime)s", "description": "Sample description", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "locked": false, "locked_reason": null, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "progress": 0, "server_groups": [], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": null, "updated": "%(isotime)s", "user_id": "fake" } }././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.73/servers-details-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.73/servers-details-resp.js0000664000175000017500000000613000000000000033557 0ustar00zuulzuul00000000000000{ "servers": [ { "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "nova", "OS-EXT-SRV-ATTR:host": "compute", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", 
"OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "%(ip)s", "version": 4 } ] }, "config_drive": "", "created": "%(isotime)s", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "host_status": "UP", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "locked": true, "locked_reason": "I don't want to work", "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "os-extended-volumes:volumes_attached": [], "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": null, "updated": "%(isotime)s", "user_id": "fake" } ] }././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4864686 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.74/0000775000175000017500000000000000000000000027137 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000023200000000000011452 xustar0000000000000000132 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.74/server-create-req-with-host-and-node.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.74/server-create-req-with-0000664000175000017500000000114100000000000033441 0ustar00zuulzuul00000000000000{ "server" : { "adminPass": "MySecretPass", "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "name" : "%(name)s", "imageRef" : "%(image_id)s", "flavorRef" : "6", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "security_groups": [ { "name": "default" } ], "user_data" : "%(user_data)s", "networks": "auto", "host": "openstack-node-01", "hypervisor_hostname": "openstack-node-01" } }././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.74/server-create-req-with-only-host.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.74/server-create-req-with-0000664000175000017500000000105500000000000033445 0ustar00zuulzuul00000000000000{ "server" : { "adminPass": "MySecretPass", "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "name" : "%(name)s", "imageRef" : "%(image_id)s", "flavorRef" : "6", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "security_groups": [ { "name": "default" } ], "user_data" : "%(user_data)s", "networks": "auto", "host": "openstack-node-01" } }././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 
path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.74/server-create-req-with-only-node.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.74/server-create-req-with-0000664000175000017500000000107400000000000033446 0ustar00zuulzuul00000000000000{ "server" : { "adminPass": "MySecretPass", "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "name" : "%(name)s", "imageRef" : "%(image_id)s", "flavorRef" : "6", "OS-DCF:diskConfig": "AUTO", "metadata" : { "My Server Name" : "Apache1" }, "security_groups": [ { "name": "default" } ], "user_data" : "%(user_data)s", "networks": "auto", "hypervisor_hostname": "openstack-node-01" } }././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.74/server-create-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.74/server-create-resp.json0000664000175000017500000000100400000000000033543 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "adminPass": "%(password)s", "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "security_groups": [ { "name": "default" } ] } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4864686 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.75/0000775000175000017500000000000000000000000027140 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022000000000000011447 xustar0000000000000000122 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.75/server-action-rebuild-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.75/server-action-rebuild-r0000664000175000017500000000535700000000000033541 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "compute", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "accessIPv4": "1.2.3.4", "accessIPv6": "80fe::", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "192.168.1.30", "version": 4 } ] }, "adminPass": "seekr3t", "config_drive": "", "created": "%(isotime)s", "description": null, "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "2091634baaccdc4c5a1d57069c833e402921df696b7f970791b12ec6", "host_status": "UP", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": 
"bookmark" } ], "locked": false, "locked_reason": null, "metadata": { "meta_var": "meta_val" }, "name": "foobar", "os-extended-volumes:volumes_attached": [], "progress": 0, "security_groups": [ { "name": "default" } ], "server_groups": [], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": null, "updated": "%(isotime)s", "user_data": "ZWNobyAiaGVsbG8gd29ybGQi", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.75/server-action-rebuild.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.75/server-action-rebuild.j0000664000175000017500000000055500000000000033525 0ustar00zuulzuul00000000000000{ "rebuild" : { "accessIPv4" : "%(access_ip_v4)s", "accessIPv6" : "%(access_ip_v6)s", "OS-DCF:diskConfig": "AUTO", "imageRef" : "%(uuid)s", "name" : "%(name)s", "adminPass" : "%(pass)s", "metadata" : { "meta_var" : "meta_val" }, "user_data": "ZWNobyAiaGVsbG8gd29ybGQi" } } ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.75/server-update-req.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.75/server-update-req.json.0000664000175000017500000000034200000000000033463 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "OS-DCF:diskConfig": "AUTO", "name": "new-server-test", "description": "Sample description" } } ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.75/server-update-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.75/server-update-resp.json0000664000175000017500000000540000000000000033567 0ustar00zuulzuul00000000000000{ "server": { "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "compute", "OS-EXT-SRV-ATTR:hostname": "new-server-test", "OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "addr": "192.168.1.30", "version": 4 } ] }, "config_drive": "", "created": "%(isotime)s", "description": "Sample description", "flavor": { "disk": 1, "ephemeral": 0, "extra_specs": {}, "original_name": "m1.tiny", "ram": 512, "swap": 0, "vcpus": 1 }, "hostId": "%(hostid)s", "host_status": "UP", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(id)s", "rel": "self" }, { 
"href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "locked": false, "locked_reason": null, "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "os-extended-volumes:volumes_attached": [], "progress": 0, "security_groups": [ { "name": "default" } ], "server_groups": [], "status": "ACTIVE", "tags": [], "tenant_id": "6f70656e737461636b20342065766572", "trusted_image_certificates": null, "updated": "%(isotime)s", "user_id": "fake" } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4864686 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.9/0000775000175000017500000000000000000000000027055 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.9/server-get-resp.json.tpl0000664000175000017500000000526100000000000033604 0ustar00zuulzuul00000000000000{ "server": { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "%(isotime)s", "flavor": { "id": "1", "links": [ { "href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ] }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(uuid)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "%(compute_host)s", "OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", "OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:hostname": "%(hostname)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "%(user_data)s", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ {"id": "volume_id1", "delete_on_termination": false}, {"id": "volume_id2", "delete_on_termination": false} ], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake", "locked": false } } ././@PaxHeader0000000000000000000000000000021100000000000011447 xustar0000000000000000115 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.9/servers-details-resp.json.tpl 22 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.9/servers-details-resp.jso0000664000175000017500000000625700000000000033667 0ustar00zuulzuul00000000000000{ "servers": [ { "accessIPv4": "%(access_ip_v4)s", "accessIPv6": "%(access_ip_v6)s", "addresses": { "private": [ { "addr": "%(ip)s", "OS-EXT-IPS-MAC:mac_addr": "00:0c:29:0d:11:74", "OS-EXT-IPS:type": "fixed", "version": 4 } ] }, "created": "%(isotime)s", "flavor": { "id": "1", "links": [ { 
"href": "%(compute_endpoint)s/flavors/1", "rel": "bookmark" } ] }, "hostId": "%(hostid)s", "id": "%(id)s", "image": { "id": "%(uuid)s", "links": [ { "href": "%(compute_endpoint)s/images/%(uuid)s", "rel": "bookmark" } ] }, "key_name": null, "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(uuid)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "metadata": { "My Server Name": "Apache1" }, "name": "new-server-test", "config_drive": "", "OS-DCF:diskConfig": "AUTO", "OS-EXT-AZ:availability_zone": "us-west", "OS-EXT-SRV-ATTR:host": "%(compute_host)s", "OS-EXT-SRV-ATTR:hypervisor_hostname": "%(hypervisor_hostname)s", "OS-EXT-SRV-ATTR:instance_name": "%(instance_name)s", "OS-EXT-SRV-ATTR:reservation_id": "%(reservation_id)s", "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:kernel_id": "", "OS-EXT-SRV-ATTR:ramdisk_id": "", "OS-EXT-SRV-ATTR:hostname": "%(hostname)s", "OS-EXT-SRV-ATTR:root_device_name": "/dev/sda", "OS-EXT-SRV-ATTR:user_data": "%(user_data)s", "OS-EXT-STS:power_state": 1, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "os-extended-volumes:volumes_attached": [ {"id": "volume_id1", "delete_on_termination": false}, {"id": "volume_id2", "delete_on_termination": false} ], "OS-SRV-USG:launched_at": "%(strtime)s", "OS-SRV-USG:terminated_at": null, "progress": 0, "security_groups": [ { "name": "default" } ], "status": "ACTIVE", "tenant_id": "6f70656e737461636b20342065766572", "updated": "%(isotime)s", "user_id": "fake", "locked": false } ], "servers_links": [ { "href": "%(versioned_compute_endpoint)s/servers/detail?limit=1&marker=%(id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.9/servers-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers/v2.9/servers-list-resp.json.t0000664000175000017500000000113000000000000033616 0ustar00zuulzuul00000000000000{ "servers": [ { "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(id)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "name": "new-server-test" } ], "servers_links": [ { "href": "%(versioned_compute_endpoint)s/servers?limit=1&marker=%(id)s", "rel": "next" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4704688 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers-sort/0000775000175000017500000000000000000000000027344 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers-sort/server-sort-keys-list-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/servers-sort/server-sort-keys-list-re0000664000175000017500000000067000000000000034113 0ustar00zuulzuul00000000000000{ "servers": [ { "id": "%(id)s", "links": [ { "href": "%(versioned_compute_endpoint)s/servers/%(id)s", "rel": "self" }, { "href": "%(compute_endpoint)s/servers/%(id)s", "rel": "bookmark" } ], "name": "new-server-test" } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4864686 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/versions/0000775000175000017500000000000000000000000026536 
5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/versions/v2-version-get-resp.json.tpl0000664000175000017500000000117700000000000033773 0ustar00zuulzuul00000000000000{ "version": { "id": "v2.0", "links": [ { "href": "%(host)s/v2/", "rel": "self" }, { "href": "http://docs.openstack.org/", "rel": "describedby", "type": "text/html" } ], "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.compute+json;version=2" } ], "min_version": "", "status": "SUPPORTED", "updated": "2011-01-21T11:33:21Z", "version": "" } }././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/versions/v21-version-get-resp.json.tpl 22 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/versions/v21-version-get-resp.json.tp0000664000175000017500000000123000000000000033666 0ustar00zuulzuul00000000000000{ "version": { "id": "v2.1", "links": [ { "href": "%(host)s/v2.1/", "rel": "self" }, { "href": "http://docs.openstack.org/", "rel": "describedby", "type": "text/html" } ], "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.compute+json;version=2.1" } ], "status": "CURRENT", "version": "%(max_api_version)s", "min_version": "2.1", "updated": "2013-07-23T11:33:21Z" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/api_samples/versions/versions-get-resp.json.tpl0000664000175000017500000000132500000000000033624 0ustar00zuulzuul00000000000000{ "versions": [ { "id": "v2.0", "links": [ { "href": "%(host)s/v2/", "rel": "self" } ], "status": "SUPPORTED", "version": "", "min_version": "", "updated": "2011-01-21T11:33:21Z" }, { "id": "v2.1", "links": [ { "href": "%(host)s/v2.1/", "rel": "self" } ], "status": "CURRENT", "version": "%(max_api_version)s", "min_version": "2.1", "updated": "2013-07-23T11:33:21Z" } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_admin_actions.py0000664000175000017500000000361600000000000026620 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import test_servers class AdminActionsSamplesJsonTest(test_servers.ServersSampleBase): sample_dir = "os-admin-actions" def setUp(self): """setUp Method for AdminActions api samples extension This method creates the server that will be used in each tests """ super(AdminActionsSamplesJsonTest, self).setUp() self.uuid = self._post_server() def test_post_reset_network(self): # Get api samples to reset server network request. 
response = self._do_post('servers/%s/action' % self.uuid, 'admin-actions-reset-network', {}) self.assertEqual(202, response.status_code) def test_post_inject_network_info(self): # Get api samples to inject network info request. response = self._do_post('servers/%s/action' % self.uuid, 'admin-actions-inject-network-info', {}) self.assertEqual(202, response.status_code) def test_post_reset_state(self): # get api samples to server reset state request. response = self._do_post('servers/%s/action' % self.uuid, 'admin-actions-reset-server-state', {}) self.assertEqual(202, response.status_code) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_admin_password.py0000664000175000017500000000221600000000000027015 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import test_servers class AdminPasswordJsonTest(test_servers.ServersSampleBase): sample_dir = 'os-admin-password' def test_server_password(self): uuid = self._post_server() subs = {"password": "foo"} response = self._do_post('servers/%s/action' % uuid, 'admin-password-change-password', subs) self.assertEqual(202, response.status_code) self.assertEqual("", response.text) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_agents.py0000664000175000017500000000750400000000000025271 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova.db.sqlalchemy import models from nova.tests.functional.api_sample_tests import api_sample_base class AgentsJsonTest(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True sample_dir = "os-agents" def setUp(self): super(AgentsJsonTest, self).setUp() fake_agents_list = [{'url': 'http://example.com/path/to/resource', 'hypervisor': 'xen', 'architecture': 'x86', 'os': 'os', 'version': '8.0', 'md5hash': 'add6bb58e139be103324d04d82d8f545', 'id': 1}] def fake_agent_build_create(context, values): values['id'] = 1 agent_build_ref = models.AgentBuild() agent_build_ref.update(values) return agent_build_ref def fake_agent_build_get_all(context, hypervisor): agent_build_all = [] for agent in fake_agents_list: if hypervisor and hypervisor != agent['hypervisor']: continue agent_build_ref = models.AgentBuild() agent_build_ref.update(agent) agent_build_all.append(agent_build_ref) return agent_build_all def fake_agent_build_update(context, agent_build_id, values): pass def fake_agent_build_destroy(context, agent_update_id): pass self.stub_out("nova.db.api.agent_build_create", fake_agent_build_create) self.stub_out("nova.db.api.agent_build_get_all", fake_agent_build_get_all) self.stub_out("nova.db.api.agent_build_update", fake_agent_build_update) self.stub_out("nova.db.api.agent_build_destroy", fake_agent_build_destroy) def test_agent_create(self): # Creates a new agent build. project = {'url': 'http://example.com/path/to/resource', 'hypervisor': 'xen', 'architecture': 'x86', 'os': 'os', 'version': '8.0', 'md5hash': 'add6bb58e139be103324d04d82d8f545' } response = self._do_post('os-agents', 'agent-post-req', project) self._verify_response('agent-post-resp', project, response, 200) def test_agent_list(self): # Return a list of all agent builds. response = self._do_get('os-agents') self._verify_response('agents-get-resp', {}, response, 200) def test_agent_update(self): # Update an existing agent build. agent_id = 1 subs = {'version': '7.0', 'url': 'http://example.com/path/to/resource', 'md5hash': 'add6bb58e139be103324d04d82d8f545'} response = self._do_put('os-agents/%s' % agent_id, 'agent-update-put-req', subs) self._verify_response('agent-update-put-resp', subs, response, 200) def test_agent_delete(self): # Deletes an existing agent build. agent_id = 1 response = self._do_delete('os-agents/%s' % agent_id) self.assertEqual(200, response.status_code) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_aggregates.py0000664000175000017500000001270100000000000026114 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_serialization import jsonutils from nova.tests.functional.api_sample_tests import api_sample_base from nova.tests.unit.image import fake as fake_image class AggregatesSampleJsonTest(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True sample_dir = "os-aggregates" # extra_subs is a noop in the base v2.1 test class; it's used to sub in # additional details for response verification of actions performed on an # existing aggregate. extra_subs = {} def _test_aggregate_create(self): subs = { "aggregate_id": r'(?P\d+)' } response = self._do_post('os-aggregates', 'aggregate-post-req', subs) return self._verify_response('aggregate-post-resp', subs, response, 200) def test_aggregate_create(self): self._test_aggregate_create() def _test_add_host(self, aggregate_id, host): subs = { "host_name": host } response = self._do_post('os-aggregates/%s/action' % aggregate_id, 'aggregate-add-host-post-req', subs) subs.update(self.extra_subs) self._verify_response('aggregates-add-host-post-resp', subs, response, 200) def test_list_aggregates(self): aggregate_id = self._test_aggregate_create() self._test_add_host(aggregate_id, self.compute.host) response = self._do_get('os-aggregates') self._verify_response('aggregates-list-get-resp', {}, response, 200) def test_aggregate_get(self): agg_id = self._test_aggregate_create() response = self._do_get('os-aggregates/%s' % agg_id) self._verify_response('aggregates-get-resp', self.extra_subs, response, 200) def test_add_metadata(self): agg_id = self._test_aggregate_create() response = self._do_post('os-aggregates/%s/action' % agg_id, 'aggregate-metadata-post-req', {'action': 'set_metadata'}) self._verify_response('aggregates-metadata-post-resp', self.extra_subs, response, 200) def test_add_host(self): aggregate_id = self._test_aggregate_create() self._test_add_host(aggregate_id, self.compute.host) def test_remove_host(self): self.test_add_host() subs = { "host_name": self.compute.host, } response = self._do_post('os-aggregates/1/action', 'aggregate-remove-host-post-req', subs) subs.update(self.extra_subs) self._verify_response('aggregates-remove-host-post-resp', subs, response, 200) def test_update_aggregate(self): aggregate_id = self._test_aggregate_create() response = self._do_put('os-aggregates/%s' % aggregate_id, 'aggregate-update-post-req', {}) self._verify_response('aggregate-update-post-resp', self.extra_subs, response, 200) class AggregatesV2_41_SampleJsonTest(AggregatesSampleJsonTest): microversion = '2.41' scenarios = [ ( "v2_41", { 'api_major_version': 'v2.1', }, ) ] def _test_aggregate_create(self): subs = { "aggregate_id": r'(?P\d+)', } response = self._do_post('os-aggregates', 'aggregate-post-req', subs) # This feels like cheating since we're getting the uuid from the # response before we even validate that it exists in the response based # on the sample, but we'll fail with a KeyError if it doesn't which is # maybe good enough. Alternatively we have to mock out the DB API # to return a fake aggregate with a hard-coded uuid that matches the # API sample which isn't fun either. 
subs['uuid'] = jsonutils.loads(response.content)['aggregate']['uuid'] # save off the uuid for subs validation on other actions performed # on this aggregate self.extra_subs['uuid'] = subs['uuid'] return self._verify_response('aggregate-post-resp', subs, response, 200) class AggregatesV2_81_SampleJsonTest(AggregatesV2_41_SampleJsonTest): microversion = '2.81' scenarios = [ ( "v2_81", { 'api_major_version': 'v2.1', }, ) ] def test_images(self): agg_id = self._test_aggregate_create() image = fake_image.get_valid_image_id() response = self._do_post('os-aggregates/%s/images' % agg_id, 'aggregate-images-post-req', {'image_id': image}) # No response body, so just check the status self.assertEqual(202, response.status_code) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_assisted_volume_snapshots.py0000664000175000017500000000401300000000000031310 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import test_servers from nova.tests.unit.api.openstack import fakes class AssistedVolumeSnapshotsJsonTests(test_servers.ServersSampleBase): sample_dir = "os-assisted-volume-snapshots" def test_create(self): """Create a volume snapshots.""" self.stub_out('nova.compute.api.API.volume_snapshot_create', fakes.stub_compute_volume_snapshot_create) subs = { 'volume_id': '521752a6-acf6-4b2d-bc7a-119f9148cd8c', 'snapshot_id': '421752a6-acf6-4b2d-bc7a-119f9148cd8c', 'type': 'qcow2', 'new_file': 'new_file_name' } response = self._do_post("os-assisted-volume-snapshots", "snapshot-create-assisted-req", subs) self._verify_response("snapshot-create-assisted-resp", subs, response, 200) def test_snapshots_delete_assisted(self): self.stub_out('nova.compute.api.API.volume_snapshot_delete', fakes.stub_compute_volume_snapshot_delete) snapshot_id = '100' response = self._do_delete( 'os-assisted-volume-snapshots/%s?delete_info=' '{"volume_id":"521752a6-acf6-4b2d-bc7a-119f9148cd8c"}' % snapshot_id) self.assertEqual(204, response.status_code) self.assertEqual('', response.text) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_attach_interfaces.py0000664000175000017500000002315100000000000027453 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova import exception from nova import objects from nova.tests.functional.api_sample_tests import test_servers from nova.tests.unit import fake_network_cache_model class AttachInterfacesSampleJsonTest(test_servers.ServersSampleBase): sample_dir = 'os-attach-interfaces' def setUp(self): super(AttachInterfacesSampleJsonTest, self).setUp() def fake_list_ports(self, *args, **kwargs): uuid = kwargs.get('device_id', None) if not uuid: raise exception.InstanceNotFound(instance_id=None) port_data = { "id": "ce531f90-199f-48c0-816c-13e38010b442", "network_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "admin_state_up": True, "status": "ACTIVE", "mac_address": "fa:16:3e:4c:2c:30", "fixed_ips": [ { "ip_address": "192.168.1.3", "subnet_id": "f8a6e8f8-c2ec-497c-9f23-da9616de54ef" } ], "device_id": uuid, } ports = {'ports': [port_data]} return ports def fake_attach_interface(self, context, instance, network_id, port_id, requested_ip='192.168.1.3', tag=None): if not network_id: network_id = "fake_net_uuid" if not port_id: port_id = "fake_port_uuid" vif = fake_network_cache_model.new_vif() vif['id'] = port_id vif['network']['id'] = network_id vif['network']['subnets'][0]['ips'][0] = requested_ip return vif def fake_detach_interface(self, context, instance, port_id): pass self.stub_out('nova.network.neutron.API.list_ports', fake_list_ports) self.stub_out('nova.compute.api.API.attach_interface', fake_attach_interface) self.stub_out('nova.compute.api.API.detach_interface', fake_detach_interface) self.flags(timeout=30, group='neutron') def generalize_subs(self, subs, vanilla_regexes): subs['subnet_id'] = vanilla_regexes['uuid'] subs['net_id'] = vanilla_regexes['uuid'] subs['port_id'] = vanilla_regexes['uuid'] subs['mac_addr'] = '(?:[a-f0-9]{2}:){5}[a-f0-9]{2}' subs['ip_address'] = vanilla_regexes['ip'] return subs def _get_subs(self): """Allows sub-classes to override the subs dict used for verification. 
""" return { 'ip_address': '192.168.1.3', 'subnet_id': 'f8a6e8f8-c2ec-497c-9f23-da9616de54ef', 'mac_addr': 'fa:16:3e:4c:2c:30', 'net_id': '3cb9bc59-5699-4588-a4b1-b87f96708bc6', 'port_id': 'ce531f90-199f-48c0-816c-13e38010b442', 'port_state': 'ACTIVE' } def test_list_interfaces(self): instance_uuid = self._post_server() response = self._do_get('servers/%s/os-interface' % instance_uuid) subs = self._get_subs() self._verify_response('attach-interfaces-list-resp', subs, response, 200) def _stub_show_for_instance(self, instance_uuid, port_id): show_port = self.neutron.show_port(port_id) show_port['port']['device_id'] = instance_uuid self.stub_out('nova.network.neutron.API.show_port', lambda *a, **k: show_port) def test_show_interfaces(self): instance_uuid = self._post_server() # NOTE(stephenfin): This ID is taken from the NeutronFixture port_id = 'ce531f90-199f-48c0-816c-13e38010b442' self._stub_show_for_instance(instance_uuid, port_id) response = self._do_get('servers/%s/os-interface/%s' % (instance_uuid, port_id)) subs = self._get_subs() self._verify_response('attach-interfaces-show-resp', subs, response, 200) def test_create_interfaces(self, instance_uuid=None): if instance_uuid is None: instance_uuid = self._post_server() subs = self._get_subs() self._stub_show_for_instance(instance_uuid, subs['port_id']) response = self._do_post('servers/%s/os-interface' % instance_uuid, 'attach-interfaces-create-req', subs) self._verify_response('attach-interfaces-create-resp', subs, response, 200) def test_create_interfaces_with_net_id_and_fixed_ips(self, instance_uuid=None): if instance_uuid is None: instance_uuid = self._post_server() subs = self._get_subs() self._stub_show_for_instance(instance_uuid, subs['port_id']) response = self._do_post('servers/%s/os-interface' % instance_uuid, 'attach-interfaces-create-net_id-req', subs) self._verify_response('attach-interfaces-create-resp', subs, response, 200) def test_delete_interfaces(self): instance_uuid = self._post_server() # NOTE(stephenfin): This ID is taken from the NeutronFixture port_id = 'ce531f90-199f-48c0-816c-13e38010b442' response = self._do_delete('servers/%s/os-interface/%s' % (instance_uuid, port_id)) self.assertEqual(202, response.status_code) self.assertEqual('', response.text) class AttachInterfacesSampleV249JsonTest(test_servers.ServersSampleBase): sample_dir = 'os-attach-interfaces' microversion = '2.49' scenarios = [('v2_49', {'api_major_version': 'v2.1'})] def setUp(self): super(AttachInterfacesSampleV249JsonTest, self).setUp() def fake_attach_interface(self, context, instance, network_id, port_id, requested_ip='192.168.1.3', tag=None): if not network_id: network_id = "fake_net_uuid" if not port_id: port_id = "fake_port_uuid" vif = fake_network_cache_model.new_vif() vif['id'] = port_id vif['network']['id'] = network_id vif['network']['subnets'][0]['ips'][0] = requested_ip vif['tag'] = tag return vif self.stub_out('nova.compute.api.API.attach_interface', fake_attach_interface) def _stub_show_for_instance(self, instance_uuid, port_id): show_port = self.neutron.show_port(port_id) show_port['port']['device_id'] = instance_uuid self.stub_out('nova.network.neutron.API.show_port', lambda *a, **k: show_port) def test_create_interfaces(self, instance_uuid=None): if instance_uuid is None: instance_uuid = self._post_server() subs = { 'net_id': '3cb9bc59-5699-4588-a4b1-b87f96708bc6', 'port_id': 'ce531f90-199f-48c0-816c-13e38010b442', 'subnet_id': 'f8a6e8f8-c2ec-497c-9f23-da9616de54ef', 'ip_address': '192.168.1.3', 'port_state': 'ACTIVE', 
'mac_addr': 'fa:16:3e:4c:2c:30', } self._stub_show_for_instance(instance_uuid, subs['port_id']) response = self._do_post('servers/%s/os-interface' % instance_uuid, 'attach-interfaces-create-req', subs) self._verify_response('attach-interfaces-create-resp', subs, response, 200) class AttachInterfacesSampleV270JsonTest(AttachInterfacesSampleJsonTest): """Tests for the 2.70 microversion in the os-interface API which returns the 'tag' field in response bodies to GET and POST methods. """ microversion = '2.70' scenarios = [('v2_70', {'api_major_version': 'v2.1'})] def setUp(self): super(AttachInterfacesSampleV270JsonTest, self).setUp() port_id = 'ce531f90-199f-48c0-816c-13e38010b442' def fake_virtual_interface_list_by_instance_uuid(*args, **kwargs): return objects.VirtualInterfaceList(objects=[ objects.VirtualInterface( # All these tests care about is the uuid and tag. uuid=port_id, tag='public')]) def fake_virtual_interface_get_by_uuid(*args, **kwargs): return objects.VirtualInterface(uuid=port_id, tag='public') self.stub_out('nova.objects.VirtualInterface.get_by_uuid', fake_virtual_interface_get_by_uuid) self.stub_out('nova.objects.VirtualInterfaceList.get_by_instance_uuid', fake_virtual_interface_list_by_instance_uuid) def _get_subs(self): subs = super(AttachInterfacesSampleV270JsonTest, self)._get_subs() # 2.70 adds the tag parameter to the request and response. subs['tag'] = 'public' return subs ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_availability_zone.py0000664000175000017500000000260600000000000027513 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import test_servers class AvailabilityZoneJsonTest(test_servers.ServersSampleBase): ADMIN_API = True sample_dir = "os-availability-zone" # Do not use the AvailabilityZoneFixture in the base class. # TODO(mriedem): Make this more realistic by creating a "us-west" zone # and putting the "compute" service host in it. availability_zones = [] def test_availability_zone_list(self): response = self._do_get('os-availability-zone') self._verify_response('availability-zone-list-resp', {}, response, 200) def test_availability_zone_detail(self): response = self._do_get('os-availability-zone/detail') self._verify_response('availability-zone-detail-resp', {}, response, 200) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_baremetal_nodes.py0000664000175000017500000000425400000000000027133 0ustar00zuulzuul00000000000000# Copyright 2015 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.tests.functional.api_sample_tests import api_sample_base class FakeNode(object): def __init__(self, uuid='058d27fa-241b-445a-a386-08c04f96db43'): self.uuid = uuid self.provision_state = 'active' self.properties = {'cpus': '2', 'memory_mb': '1024', 'local_gb': '10'} self.instance_uuid = '1ea4e53e-149a-4f02-9515-590c9fb2315a' class NodeManager(object): def list(self, detail=False): return [FakeNode(), FakeNode('e2025409-f3ce-4d6a-9788-c565cf3b1b1c')] def get(self, id): return FakeNode(id) def list_ports(self, id): return [] class fake_client(object): node = NodeManager() class BareMetalNodesSampleJsonTest(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True sample_dir = "os-baremetal-nodes" @mock.patch("nova.api.openstack.compute.baremetal_nodes" "._get_ironic_client") def test_baremetal_nodes_list(self, mock_get_irc): mock_get_irc.return_value = fake_client() response = self._do_get('os-baremetal-nodes') self._verify_response('baremetal-node-list-resp', {}, response, 200) @mock.patch("nova.api.openstack.compute.baremetal_nodes" "._get_ironic_client") def test_baremetal_nodes_get(self, mock_get_irc): mock_get_irc.return_value = fake_client() response = self._do_get('os-baremetal-nodes/' '058d27fa-241b-445a-a386-08c04f96db43') self._verify_response('baremetal-node-get-resp', {}, response, 200) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_block_device_mapping_boot.py0000664000175000017500000000234300000000000031153 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import test_servers from nova.tests.unit.api.openstack import fakes class BlockDeviceMappingV1BootJsonTest(test_servers.ServersSampleBase): sample_dir = "os-block-device-mapping-v1" def test_servers_post_with_bdm(self): self.stub_out('nova.volume.cinder.API.get', fakes.stub_volume_get) self.stub_out('nova.volume.cinder.API.check_attach', fakes.stub_volume_check_attach) return self._post_server() class BlockDeviceMappingV2BootJsonTest(BlockDeviceMappingV1BootJsonTest): sample_dir = "os-block-device-mapping" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_cells.py0000664000175000017500000000412500000000000025106 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # Copyright 2019 Red Hat, Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import api_sample_base class CellsTest(api_sample_base.ApiSampleTestBaseV21): def test_cells_list(self): self.api.api_get('os-cells', check_response_status=[410]) def test_cells_capacity(self): self.api.api_get('os-cells/capacities', check_response_status=[410]) def test_cells_detail(self): self.api.api_get('os-cells/detail', check_response_status=[410]) def test_cells_info(self): self.api.api_get('os-cells/info', check_response_status=[410]) def test_cells_sync_instances(self): self.api.api_post('os-cells/sync_instances', {}, check_response_status=[410]) def test_cell_create(self): self.api.api_post('os-cells', {}, check_response_status=[410]) def test_cell_show(self): self.api.api_get('os-cells/cell3', check_response_status=[410]) def test_cell_update(self): self.api.api_put('os-cells/cell3', {}, check_response_status=[410]) def test_cell_delete(self): self.api.api_delete('os-cells/cell3', check_response_status=[410]) def test_cell_capacity(self): self.api.api_get('os-cells/cell3/capacities', check_response_status=[410]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_compare_result.py0000664000175000017500000003757500000000000027047 0ustar00zuulzuul00000000000000# Copyright 2015 HPE, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mock import testtools from nova import test from nova.tests.functional import api_samples_test_base class TestCompareResult(test.NoDBTestCase): """Provide test coverage for result comparison logic in functional tests. _compare_result two types of comparisons, template data and sample data. Template data means the response is checked against a regex that is referenced by the template name. The template name is specified in the format %(name) Sample data is a normal value comparison. 
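    For example, the template u'%(id)s' is satisfied by any response value
    matching the 'id' regex (such as a UUID), whereas a literal sample value
    such as u'foo' must be equal to the response value.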
""" def getApiSampleTestBaseHelper(self): """Build an instance without running any unwanted test methods""" # NOTE(auggy): TestCase takes a "test" method name to run in __init__ # calling this way prevents additional test methods from running ast_instance = api_samples_test_base.ApiSampleTestBase('setUp') # required by ApiSampleTestBase ast_instance.api_major_version = 'v2' ast_instance.USE_PROJECT_ID = 'True' # automagically create magic methods usually handled by test classes ast_instance.compute = mock.MagicMock() ast_instance.subs = ast_instance._get_regexes() return ast_instance def setUp(self): super(TestCompareResult, self).setUp() self.ast = self.getApiSampleTestBaseHelper() def test_bare_strings_match(self): """compare 2 bare strings that match""" sample_data = u'foo' response_data = u'foo' result = self.ast._compare_result( expected=sample_data, result=response_data, result_str="Test") # NOTE(auggy): _compare_result will not return a matched value in the # case of bare strings. If they don't match it will throw an exception, # otherwise it returns "None". self.assertEqual( expected=None, observed=result, message='Check _compare_result of 2 bare strings') def test_bare_strings_no_match(self): """check 2 bare strings that don't match""" sample_data = u'foo' response_data = u'bar' with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=sample_data, result=response_data, result_str="Test") def test_template_strings_match(self): """compare 2 template strings (contain %) that match""" template_data = u'%(id)s' response_data = u'858f295a-8543-45fa-804a-08f8356d616d' result = self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") self.assertEqual( expected=response_data, observed=result, message='Check _compare_result of 2 template strings') def test_template_strings_no_match(self): """check 2 template strings (contain %) that don't match""" template_data = u'%(id)s' response_data = u'$58f295a-8543-45fa-804a-08f8356d616d' with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") # TODO(auggy): _compare_result needs a consistent return value # In some cases it returns the value if it matched, in others it returns # None. In all cases, it throws an exception if there's no match. 
def test_bare_int_match(self): """check 2 bare ints that match""" sample_data = 42 response_data = 42 result = self.ast._compare_result( expected=sample_data, result=response_data, result_str="Test") self.assertEqual( expected=None, observed=result, message='Check _compare_result of 2 bare ints') def test_bare_int_no_match(self): """check 2 bare ints that don't match""" sample_data = 42 response_data = 43 with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=sample_data, result=response_data, result_str="Test") # TODO(auggy): _compare_result needs a consistent return value def test_template_int_match(self): """check template int against string containing digits""" template_data = u'%(int)s' response_data = u'42' result = self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") self.assertEqual( expected=None, observed=result, message='Check _compare_result of template ints') def test_template_int_no_match(self): """check template int against a string containing no digits""" template_data = u'%(int)s' response_data = u'foo' with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") def test_template_int_value(self): """check an int value of a template int throws exception""" # template_data = u'%(int_test)' # response_data = 42 # use an int instead of a string as the subs value local_subs = copy.deepcopy(self.ast.subs) local_subs.update({'int_test': 42}) with testtools.ExpectedException(TypeError): self.ast.subs = local_subs # TODO(auggy): _compare_result needs a consistent return value def test_dict_match(self): """check 2 matching dictionaries""" template_data = { u'server': { u'id': u'%(id)s', u'adminPass': u'%(password)s' } } response_data = { u'server': { u'id': u'858f295a-8543-45fa-804a-08f8356d616d', u'adminPass': u'4ZQ3bb6WYbC2'} } result = self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") self.assertEqual( expected=u'858f295a-8543-45fa-804a-08f8356d616d', observed=result, message='Check _compare_result of 2 dictionaries') def test_dict_no_match_value(self): """check 2 dictionaries where one has a different value""" sample_data = { u'server': { u'id': u'858f295a-8543-45fa-804a-08f8356d616d', u'adminPass': u'foo' } } response_data = { u'server': { u'id': u'858f295a-8543-45fa-804a-08f8356d616d', u'adminPass': u'4ZQ3bb6WYbC2'} } with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=sample_data, result=response_data, result_str="Test") def test_dict_no_match_extra_key(self): """check 2 dictionaries where one has an extra key""" template_data = { u'server': { u'id': u'%(id)s', u'adminPass': u'%(password)s', u'foo': u'foo' } } response_data = { u'server': { u'id': u'858f295a-8543-45fa-804a-08f8356d616d', u'adminPass': u'4ZQ3bb6WYbC2'} } with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") def test_dict_result_type_mismatch(self): """check expected is a dictionary and result is not a dictionary""" template_data = { u'server': { u'id': u'%(id)s', u'adminPass': u'%(password)s', } } response_data = u'foo' with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") # TODO(auggy): _compare_result needs a consistent return value def 
test_list_match(self): """check 2 matching lists""" template_data = { u'links': [ { u'href': u'%(versioned_compute_endpoint)s/server/%(uuid)s', u'rel': u'self' }, { u'href': u'%(compute_endpoint)s/servers/%(uuid)s', u'rel': u'bookmark' } ] } response_data = { u'links': [ { u'href': (u'http://openstack.example.com/v2/%s/server/' '858f295a-8543-45fa-804a-08f8356d616d' % api_samples_test_base.PROJECT_ID ), u'rel': u'self' }, { u'href': (u'http://openstack.example.com/%s/servers/' '858f295a-8543-45fa-804a-08f8356d616d' % api_samples_test_base.PROJECT_ID ), u'rel': u'bookmark' } ] } result = self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") self.assertEqual( expected=None, observed=result, message='Check _compare_result of 2 lists') def test_list_match_extra_item_result(self): """check extra list items in result """ template_data = { u'links': [ { u'href': u'%(versioned_compute_endpoint)s/server/%(uuid)s', u'rel': u'self' }, { u'href': u'%(compute_endpoint)s/servers/%(uuid)s', u'rel': u'bookmark' } ] } response_data = { u'links': [ { u'href': (u'http://openstack.example.com/v2/openstack/server/' '858f295a-8543-45fa-804a-08f8356d616d'), u'rel': u'self' }, { u'href': (u'http://openstack.example.com/openstack/servers/' '858f295a-8543-45fa-804a-08f8356d616d'), u'rel': u'bookmark' }, u'foo' ] } with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") def test_list_match_extra_item_template(self): """check extra list items in template """ template_data = { u'links': [ { u'href': u'%(versioned_compute_endpoint)s/server/%(uuid)s', u'rel': u'self' }, { u'href': u'%(compute_endpoint)s/servers/%(uuid)s', u'rel': u'bookmark' }, u'foo' # extra field ] } response_data = { u'links': [ { u'href': (u'http://openstack.example.com/v2/openstack/server/' '858f295a-8543-45fa-804a-08f8356d616d'), u'rel': u'self' }, { u'href': (u'http://openstack.example.com/openstack/servers/' '858f295a-8543-45fa-804a-08f8356d616d'), u'rel': u'bookmark' } ] } with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") def test_list_no_match(self): """check 2 matching lists""" template_data = { u'things': [ { u'foo': u'bar', u'baz': 0 }, { u'foo': u'zod', u'baz': 1 } ] } response_data = { u'things': [ { u'foo': u'bar', u'baz': u'0' }, { u'foo': u'zod', u'baz': 1 } ] } # TODO(auggy): This error returns "extra list items" # it should show the item/s in the list that didn't match with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") def test_none_match(self): """check that None matches""" sample_data = None response_data = None result = self.ast._compare_result( expected=sample_data, result=response_data, result_str="Test") # NOTE(auggy): _compare_result will not return a matched value in the # case of bare strings. If they don't match it will throw an exception, # otherwise it returns "None". 
self.assertEqual( expected=None, observed=result, message='Check _compare_result of None') def test_none_no_match(self): """check expected none and non-None response don't match""" sample_data = None response_data = u'bar' with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=sample_data, result=response_data, result_str="Test") def test_none_result_no_match(self): """check result none and expected non-None response don't match""" sample_data = u'foo' response_data = None with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=sample_data, result=response_data, result_str="Test") def test_template_no_subs_key(self): """check an int value of a template int throws exception""" template_data = u'%(foo)' response_data = 'bar' with testtools.ExpectedException(KeyError): self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_console_auth_tokens.py0000664000175000017500000000344500000000000030056 0ustar00zuulzuul00000000000000# Copyright 2013 Cloudbase Solutions Srl # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import re from oslo_serialization import jsonutils from nova.tests.functional.api_sample_tests import test_servers class ConsoleAuthTokensSampleJsonTests(test_servers.ServersSampleBase): ADMIN_API = True sample_dir = "os-console-auth-tokens" def _get_console_url(self, data): return jsonutils.loads(data)["console"]["url"] def _get_console_token(self, uuid): response = self._do_post('servers/%s/action' % uuid, 'get-rdp-console-post-req', {'action': 'os-getRDPConsole'}) url = self._get_console_url(response.content) return re.match('.+?token=([^&]+)', url).groups()[0] def test_get_console_connect_info(self): self.flags(enabled=True, group='rdp') uuid = self._post_server() token = self._get_console_token(uuid) response = self._do_get('os-console-auth-tokens/%s' % token) subs = {} subs["uuid"] = uuid subs["host"] = r"[\w\.\-]+" subs["port"] = "[0-9]+" subs["internal_access_path"] = ".*" self._verify_response('get-console-connect-info-get-resp', subs, response, 200) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_console_output.py0000664000175000017500000000206700000000000027071 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import test_servers class ConsoleOutputSampleJsonTest(test_servers.ServersSampleBase): sample_dir = "os-console-output" def test_get_console_output(self): uuid = self._post_server() response = self._do_post('servers/%s/action' % uuid, 'console-output-post-req', {}) self._verify_response('console-output-post-resp', {}, response, 200) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_consoles.py0000664000175000017500000000262200000000000025631 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import test_servers FAKE_UUID = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' class ConsolesSamplesJsonTest(test_servers.ServersSampleBase): def test_create_consoles(self): self.api.api_post('servers/%s/consoles' % FAKE_UUID, {}, check_response_status=[410]) def test_list_consoles(self): self.api.api_get('servers/%s/consoles' % FAKE_UUID, check_response_status=[410]) def test_console_get(self): self.api.api_get('servers/%s/consoles/1' % FAKE_UUID, check_response_status=[410]) def test_console_delete(self): self.api.api_delete('servers/%s/consoles/1' % FAKE_UUID, check_response_status=[410]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_create_backup.py0000664000175000017500000000440200000000000026572 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.tests.functional.api_sample_tests import test_servers from nova.tests.unit.image import fake class CreateBackupSamplesJsonTest(test_servers.ServersSampleBase): sample_dir = "os-create-backup" def setUp(self): """setUp Method for PauseServer api samples extension This method creates the server that will be used in each tests """ super(CreateBackupSamplesJsonTest, self).setUp() self.uuid = self._post_server() @mock.patch.object(fake._FakeImageService, 'detail', return_value=[]) def test_post_backup_server(self, mock_method): # Get api samples to backup server request. 
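        # The 'create-backup-req' sample is rendered and POSTed as a server
        # action. Before microversion 2.45 the API replies with 202, a
        # Location header referencing the created backup image and an empty
        # body, which is what the assertions below check.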
response = self._do_post('servers/%s/action' % self.uuid, 'create-backup-req', {}) self.assertEqual(202, response.status_code) # we should have gotten a location header back self.assertIn('location', response.headers) # we should not have gotten a body back self.assertEqual(0, len(response.content)) class CreateBackupSamplesJsonTestv2_45(CreateBackupSamplesJsonTest): """Tests the createBackup server action API with microversion 2.45.""" microversion = '2.45' scenarios = [('v2_45', {'api_major_version': 'v2.1'})] def test_post_backup_server(self): # Get api samples to backup server request. response = self._do_post('servers/%s/action' % self.uuid, 'create-backup-req', {}) self._verify_response('create-backup-resp', {}, response, 202) # assert that no location header was returned self.assertNotIn('location', response.headers) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_deferred_delete.py0000664000175000017500000000307000000000000027104 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import test_servers class DeferredDeleteSampleJsonTests(test_servers.ServersSampleBase): sample_dir = "os-deferred-delete" def setUp(self): super(DeferredDeleteSampleJsonTests, self).setUp() self.flags(reclaim_instance_interval=1) def test_restore(self): uuid = self._post_server() self._do_delete('servers/%s' % uuid) response = self._do_post('servers/%s/action' % uuid, 'restore-post-req', {}) self.assertEqual(202, response.status_code) self.assertEqual('', response.text) def test_force_delete(self): uuid = self._post_server() self._do_delete('servers/%s' % uuid) response = self._do_post('servers/%s/action' % uuid, 'force-delete-post-req', {}) self.assertEqual(202, response.status_code) self.assertEqual('', response.text) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_evacuate.py0000664000175000017500000002262200000000000025603 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
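# The evacuate sample tests below stub out the service-group and host APIs so
# that the instance's host appears to be down and the requested target host
# looks valid, then assert that the conductor's rebuild_instance() call
# receives the expected arguments (host, new_pass, on_shared_storage) for
# each microversion of the evacuate action.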
import mock from nova import objects from nova.tests.functional.api_sample_tests import test_servers class EvacuateJsonTest(test_servers.ServersSampleBase): ADMIN_API = True sample_dir = "os-evacuate" def _test_evacuate(self, req_subs, server_req, server_resp, expected_resp_code): self.uuid = self._post_server() def fake_service_is_up(self, service): """Simulate validation of instance host is down.""" return False def fake_service_get_by_compute_host(self, context, host): """Simulate that given host is a valid host.""" return { 'host_name': host, 'service': 'compute', 'zone': 'nova' } def fake_check_instance_exists(self, context, instance): """Simulate validation of instance does not exist.""" return False self.stub_out( 'nova.servicegroup.api.API.service_is_up', fake_service_is_up) self.stub_out( 'nova.compute.api.HostAPI.service_get_by_compute_host', fake_service_get_by_compute_host) self.stub_out( 'nova.compute.manager.ComputeManager._check_instance_exists', fake_check_instance_exists) response = self._do_post('servers/%s/action' % self.uuid, server_req, req_subs) if server_resp: self._verify_response(server_resp, {}, response, expected_resp_code) else: # NOTE(gibi): no server_resp means we expect empty body as # a response self.assertEqual(expected_resp_code, response.status_code) self.assertEqual('', response.text) @mock.patch('nova.conductor.manager.ComputeTaskManager.rebuild_instance') def test_server_evacuate(self, rebuild_mock): # Note (wingwj): The host can't be the same one req_subs = { 'host': 'testHost', "adminPass": "MySecretPass", "onSharedStorage": 'False' } self._test_evacuate(req_subs, 'server-evacuate-req', 'server-evacuate-resp', 200) rebuild_mock.assert_called_once_with(mock.ANY, instance=mock.ANY, orig_image_ref=mock.ANY, image_ref=mock.ANY, injected_files=mock.ANY, new_pass="MySecretPass", orig_sys_metadata=mock.ANY, bdms=mock.ANY, recreate=mock.ANY, on_shared_storage=False, preserve_ephemeral=mock.ANY, host='testHost', request_spec=mock.ANY) @mock.patch('nova.conductor.manager.ComputeTaskManager.rebuild_instance') def test_server_evacuate_find_host(self, rebuild_mock): req_subs = { "adminPass": "MySecretPass", "onSharedStorage": 'False' } self._test_evacuate(req_subs, 'server-evacuate-find-host-req', 'server-evacuate-find-host-resp', 200) rebuild_mock.assert_called_once_with(mock.ANY, instance=mock.ANY, orig_image_ref=mock.ANY, image_ref=mock.ANY, injected_files=mock.ANY, new_pass="MySecretPass", orig_sys_metadata=mock.ANY, bdms=mock.ANY, recreate=mock.ANY, on_shared_storage=False, preserve_ephemeral=mock.ANY, host=None, request_spec=mock.ANY) class EvacuateJsonTestV214(EvacuateJsonTest): microversion = '2.14' scenarios = [('v2_14', {'api_major_version': 'v2.1'})] @mock.patch('nova.conductor.manager.ComputeTaskManager.rebuild_instance') def test_server_evacuate(self, rebuild_mock): # Note (wingwj): The host can't be the same one req_subs = { 'host': 'testHost', "adminPass": "MySecretPass", } self._test_evacuate(req_subs, 'server-evacuate-req', server_resp=None, expected_resp_code=200) rebuild_mock.assert_called_once_with(mock.ANY, instance=mock.ANY, orig_image_ref=mock.ANY, image_ref=mock.ANY, injected_files=mock.ANY, new_pass="MySecretPass", orig_sys_metadata=mock.ANY, bdms=mock.ANY, recreate=mock.ANY, on_shared_storage=None, preserve_ephemeral=mock.ANY, host='testHost', request_spec=mock.ANY) @mock.patch('nova.conductor.manager.ComputeTaskManager.rebuild_instance') def test_server_evacuate_find_host(self, rebuild_mock): req_subs = { "adminPass": "MySecretPass", } 
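        # No 'host' is supplied in the request body, so the scheduler is left
        # to pick a destination and the conductor call below is expected with
        # host=None.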
self._test_evacuate(req_subs, 'server-evacuate-find-host-req', server_resp=None, expected_resp_code=200) rebuild_mock.assert_called_once_with(mock.ANY, instance=mock.ANY, orig_image_ref=mock.ANY, image_ref=mock.ANY, injected_files=mock.ANY, new_pass="MySecretPass", orig_sys_metadata=mock.ANY, bdms=mock.ANY, recreate=mock.ANY, on_shared_storage=None, preserve_ephemeral=mock.ANY, host=None, request_spec=mock.ANY) class EvacuateJsonTestV229(EvacuateJsonTestV214): microversion = '2.29' scenarios = [('v2_29', {'api_major_version': 'v2.1'})] @mock.patch('nova.conductor.manager.ComputeTaskManager.rebuild_instance') @mock.patch('nova.objects.ComputeNodeList.get_all_by_host') def test_server_evacuate(self, compute_node_get_all_by_host, rebuild_mock): # Note (wingwj): The host can't be the same one req_subs = { 'host': 'testHost', "adminPass": "MySecretPass", "force": "false", } fake_computes = objects.ComputeNodeList( objects=[objects.ComputeNode(host='testHost', hypervisor_hostname='host')]) compute_node_get_all_by_host.return_value = fake_computes self._test_evacuate(req_subs, 'server-evacuate-req', server_resp=None, expected_resp_code=200) rebuild_mock.assert_called_once_with(mock.ANY, instance=mock.ANY, orig_image_ref=mock.ANY, image_ref=mock.ANY, injected_files=mock.ANY, new_pass="MySecretPass", orig_sys_metadata=mock.ANY, bdms=mock.ANY, recreate=mock.ANY, on_shared_storage=None, preserve_ephemeral=mock.ANY, host=None, request_spec=mock.ANY) @mock.patch('nova.conductor.manager.ComputeTaskManager.rebuild_instance') @mock.patch('nova.objects.ComputeNodeList.get_all_by_host') def test_server_evacuate_with_force(self, compute_node_get_all_by_host, rebuild_mock): # Note (wingwj): The host can't be the same one req_subs = { 'host': 'testHost', "adminPass": "MySecretPass", "force": "True", } self._test_evacuate(req_subs, 'server-evacuate-req', server_resp=None, expected_resp_code=200) self.assertEqual(0, compute_node_get_all_by_host.call_count) rebuild_mock.assert_called_once_with(mock.ANY, instance=mock.ANY, orig_image_ref=mock.ANY, image_ref=mock.ANY, injected_files=mock.ANY, new_pass="MySecretPass", orig_sys_metadata=mock.ANY, bdms=mock.ANY, recreate=mock.ANY, on_shared_storage=None, preserve_ephemeral=mock.ANY, host='testHost', request_spec=mock.ANY) class EvacuateJsonTestV268(EvacuateJsonTestV229): microversion = '2.68' scenarios = [('v2_68', {'api_major_version': 'v2.1'})] @mock.patch('nova.conductor.manager.ComputeTaskManager.rebuild_instance') @mock.patch('nova.objects.ComputeNodeList.get_all_by_host') def test_server_evacuate(self, compute_node_get_all_by_host, rebuild_mock): # Note (wingwj): The host can't be the same one req_subs = { 'host': 'testHost', "adminPass": "MySecretPass", } fake_computes = objects.ComputeNodeList( objects=[objects.ComputeNode(host='testHost', hypervisor_hostname='host')]) compute_node_get_all_by_host.return_value = fake_computes self._test_evacuate(req_subs, 'server-evacuate-req', server_resp=None, expected_resp_code=200) rebuild_mock.assert_called_once_with(mock.ANY, instance=mock.ANY, orig_image_ref=mock.ANY, image_ref=mock.ANY, injected_files=mock.ANY, new_pass="MySecretPass", orig_sys_metadata=mock.ANY, bdms=mock.ANY, recreate=mock.ANY, on_shared_storage=None, preserve_ephemeral=mock.ANY, host=None, request_spec=mock.ANY) def test_server_evacuate_with_force(self): # doesn't apply to v2.68+, which removed the ability to force migrate pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/nova/tests/functional/api_sample_tests/test_extension_info.py0000664000175000017500000000303000000000000027025 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import api_sample_base class ExtensionInfoAllSamplesJsonTest(api_sample_base.ApiSampleTestBaseV21): sample_dir = "extension-info" def test_list_extensions(self): response = self._do_get('extensions') # The full extension list is one of the places that things are # different between the API versions and the legacy vs. new # stack. We default to the v2.1 case. template = 'extensions-list-resp' if self.api_major_version == 'v2': template = 'extensions-list-resp-v21-compatible' self._verify_response(template, {}, response, 200) class ExtensionInfoSamplesJsonTest(api_sample_base.ApiSampleTestBaseV21): sample_dir = "extension-info" def test_get_extensions(self): response = self._do_get('extensions/os-agents') self._verify_response('extensions-get-resp', {}, response, 200) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_fixed_ips.py0000664000175000017500000000252600000000000025761 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api import client as api_client from nova.tests.functional import api_samples_test_base class FixedIpTest(api_samples_test_base.ApiSampleTestBase): api_major_version = 'v2' def test_fixed_ip_reserve(self): ex = self.assertRaises(api_client.OpenStackApiException, self.api.api_post, '/os-fixed-ips/192.168.1.1/action', {"reserve": None}) self.assertEqual(410, ex.response.status_code) def test_get_fixed_ip(self): ex = self.assertRaises(api_client.OpenStackApiException, self.api.api_get, '/os-fixed-ips/192.168.1.1') self.assertEqual(410, ex.response.status_code) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_flavor_access.py0000664000175000017500000000633000000000000026616 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import api_sample_base class FlavorAccessTestsBase(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True sample_dir = 'flavor-access' def _add_tenant(self): subs = { 'tenant_id': 'fake_tenant', 'flavor_id': '10', } response = self._do_post('flavors/10/action', 'flavor-access-add-tenant-req', subs) self._verify_response('flavor-access-add-tenant-resp', subs, response, 200) def _create_flavor(self): subs = { 'flavor_id': '10', 'flavor_name': 'test_flavor' } self._do_post("flavors", "flavor-create-req", subs) class FlavorAccessSampleJsonTests(FlavorAccessTestsBase): def test_flavor_access_list(self): self._create_flavor() self._add_tenant() flavor_id = '10' response = self._do_get('flavors/%s/os-flavor-access' % flavor_id) subs = { 'flavor_id': flavor_id, 'tenant_id': 'fake_tenant', } self._verify_response('flavor-access-list-resp', subs, response, 200) def test_flavor_access_add_tenant(self): self._create_flavor() self._add_tenant() def test_flavor_access_remove_tenant(self): self._create_flavor() self._add_tenant() subs = { 'tenant_id': 'fake_tenant', } response = self._do_post('flavors/10/action', "flavor-access-remove-tenant-req", subs) exp_subs = { "tenant_id": self.api.project_id, "flavor_id": "10" } self._verify_response('flavor-access-remove-tenant-resp', exp_subs, response, 200) class FlavorAccessV27SampleJsonTests(FlavorAccessTestsBase): microversion = '2.7' scenarios = [('v2_7', {'api_major_version': 'v2.1'})] def setUp(self): super(FlavorAccessV27SampleJsonTests, self).setUp() self.api.microversion = self.microversion def test_add_tenant_access_to_public_flavor(self): self._create_flavor() subs = { 'flavor_id': '10', 'tenant_id': 'fake_tenant' } # Version 2.7+ will return HTTPConflict (409) # if the flavor is public response = self._do_post('flavors/10/action', 'flavor-access-add-tenant-req', subs) self.assertEqual(409, response.status_code) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_flavor_extraspecs.py0000664000175000017500000000512000000000000027532 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
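# The flavor extra-specs samples below walk the full CRUD cycle: the create
# request/response templates are filled from a substitution dict (value1,
# value2), individual specs are read back via GET, updated via PUT, and the
# DELETE is expected to return 200 with an empty body.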
from nova.tests.functional.api_sample_tests import api_sample_base class FlavorExtraSpecsSampleJsonTests(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True sample_dir = 'flavor-extra-specs' def _flavor_extra_specs_create(self): subs = { 'value1': 'shared', 'value2': '1', } response = self._do_post('flavors/1/os-extra_specs', 'flavor-extra-specs-create-req', subs) self._verify_response('flavor-extra-specs-create-resp', subs, response, 200) def test_flavor_extra_specs_get(self): subs = { 'value1': '1', } self._flavor_extra_specs_create() response = self._do_get('flavors/1/os-extra_specs/hw:numa_nodes') self._verify_response('flavor-extra-specs-get-resp', subs, response, 200) def test_flavor_extra_specs_list(self): subs = { 'value1': 'shared', 'value2': '1', } self._flavor_extra_specs_create() response = self._do_get('flavors/1/os-extra_specs') self._verify_response('flavor-extra-specs-list-resp', subs, response, 200) def test_flavor_extra_specs_create(self): self._flavor_extra_specs_create() def test_flavor_extra_specs_update(self): subs = { 'value1': '2', } self._flavor_extra_specs_create() response = self._do_put('flavors/1/os-extra_specs/hw:numa_nodes', 'flavor-extra-specs-update-req', subs) self._verify_response('flavor-extra-specs-update-resp', subs, response, 200) def test_flavor_extra_specs_delete(self): self._flavor_extra_specs_create() response = self._do_delete('flavors/1/os-extra_specs/hw:numa_nodes') self.assertEqual(200, response.status_code) self.assertEqual('', response.text) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_flavor_manage.py0000664000175000017500000000370000000000000026603 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import api_sample_base class FlavorManageSampleJsonTests(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True sample_dir = 'flavor-manage' def _create_flavor(self): """Create a flavor.""" subs = { 'flavor_id': '10', 'flavor_name': "test_flavor" } response = self._do_post("flavors", "flavor-create-post-req", subs) self._verify_response("flavor-create-post-resp", subs, response, 200) def test_create_delete_flavor(self): # Get api sample to create and delete a flavor. 
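        # _create_flavor() POSTs the 'flavor-create-post-req' sample and
        # verifies the create response; the DELETE below should then return
        # 202 with an empty body.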
self._create_flavor() response = self._do_delete("flavors/10") self.assertEqual(202, response.status_code) self.assertEqual('', response.text) class FlavorManageSampleJsonTests2_55(FlavorManageSampleJsonTests): microversion = '2.55' scenarios = [('v2_55', {'api_major_version': 'v2.1'})] def test_update_flavor_description(self): response = self._do_put("flavors/1", "flavor-update-req", {}) self._verify_response("flavor-update-resp", {}, response, 200) class FlavorManageSampleJsonTests2_75(FlavorManageSampleJsonTests2_55): microversion = '2.75' scenarios = [('v2_75', {'api_major_version': 'v2.1'})] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_flavors.py0000664000175000017500000001330600000000000025461 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import context as nova_context from nova import objects from nova.tests.functional.api_sample_tests import api_sample_base class FlavorsSampleJsonTest(api_sample_base.ApiSampleTestBaseV21): sample_dir = 'flavors' flavor_show_id = '1' subs = {} sort_keys = ['created_at', 'description', 'disabled', 'ephemeral_gb', 'flavorid', 'id', 'is_public', 'memory_mb', 'name', 'root_gb', 'rxtx_factor', 'swap', 'updated_at', 'vcpu_weight', 'vcpus'] sort_dirs = ['asc', 'desc'] def test_flavors_get(self): response = self._do_get('flavors/%s' % self.flavor_show_id) self._verify_response('flavor-get-resp', self.subs, response, 200) def test_flavors_list(self): response = self._do_get('flavors') self._verify_response('flavors-list-resp', self.subs, response, 200) def test_flavors_list_with_sort_key(self): for sort_key in self.sort_keys: response = self._do_get('flavors?sort_key=%s' % sort_key) self._verify_response('flavors-list-resp', self.subs, response, 200) def test_flavors_list_with_invalid_sort_key(self): response = self._do_get('flavors?sort_key=invalid') self.assertEqual(400, response.status_code) def test_flavors_list_with_sort_dir(self): for sort_dir in self.sort_dirs: response = self._do_get('flavors?sort_dir=%s' % sort_dir) self._verify_response('flavors-list-resp', self.subs, response, 200) def test_flavors_list_with_invalid_sort_dir(self): response = self._do_get('flavors?sort_dir=invalid') self.assertEqual(400, response.status_code) def test_flavors_detail(self): response = self._do_get('flavors/detail') self._verify_response('flavors-detail-resp', self.subs, response, 200) def test_flavors_detail_with_sort_key(self): for sort_key in self.sort_keys: response = self._do_get('flavors/detail?sort_key=%s' % sort_key) self._verify_response('flavors-detail-resp', self.subs, response, 200) def test_flavors_detail_with_invalid_sort_key(self): response = self._do_get('flavors/detail?sort_key=invalid') self.assertEqual(400, response.status_code) def test_flavors_detail_with_sort_dir(self): for sort_dir in self.sort_dirs: response = 
self._do_get('flavors/detail?sort_dir=%s' % sort_dir) self._verify_response('flavors-detail-resp', self.subs, response, 200) def test_flavors_detail_with_invalid_sort_dir(self): response = self._do_get('flavors/detail?sort_dir=invalid') self.assertEqual(400, response.status_code) class FlavorsSampleJsonTest2_55(FlavorsSampleJsonTest): microversion = '2.55' scenarios = [('v2_55', {'api_major_version': 'v2.1'})] def setUp(self): super(FlavorsSampleJsonTest2_55, self).setUp() # Get the existing flavors created by DefaultFlavorsFixture. ctxt = nova_context.get_admin_context() flavors = objects.FlavorList.get_all(ctxt) # Flavors are sorted by flavorid in ascending order by default, so # get the last flavor in the list and create a new flavor with an # incremental flavorid so we have a predictable sort order for the # sample response. new_flavor_id = int(flavors[-1].flavorid) + 1 new_flavor = objects.Flavor( ctxt, memory_mb=2048, vcpus=1, root_gb=20, flavorid=new_flavor_id, name='m1.small.description', description='test description') new_flavor.create() self.flavor_show_id = new_flavor_id self.subs = {'flavorid': new_flavor_id} class FlavorsSampleJsonTest2_61(FlavorsSampleJsonTest): microversion = '2.61' scenarios = [('v2_61', {'api_major_version': 'v2.1'})] def setUp(self): super(FlavorsSampleJsonTest2_61, self).setUp() # Get the existing flavors created by DefaultFlavorsFixture. ctxt = nova_context.get_admin_context() flavors = objects.FlavorList.get_all(ctxt) # Flavors are sorted by flavorid in ascending order by default, so # get the last flavor in the list and create a new flavor with an # incremental flavorid so we have a predictable sort order for the # sample response. new_flavor_id = int(flavors[-1].flavorid) + 1 new_flavor = objects.Flavor( ctxt, memory_mb=2048, vcpus=1, root_gb=20, flavorid=new_flavor_id, name='m1.small.description', description='test description', extra_specs={ 'hw:numa_nodes': '1', 'hw:cpu_policy': 'shared', }) new_flavor.create() self.flavor_show_id = new_flavor_id self.subs = {'flavorid': new_flavor_id} class FlavorsSampleJsonTest2_75(FlavorsSampleJsonTest2_61): microversion = '2.75' scenarios = [('v2_75', {'api_major_version': 'v2.1'})] def test_flavors_list(self): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_floating_ip_dns.py0000664000175000017500000000617300000000000027150 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
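# The os-floating-ip-dns API is one of the removed nova-network proxy APIs,
# so every request below is expected to fail with 410 (Gone) instead of being
# verified against a sample template.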
from nova.tests.functional.api import client as api_client from nova.tests.functional.api_sample_tests import api_sample_base class FloatingIpDNSTest(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True def test_floating_ip_dns_list(self): ex = self.assertRaises(api_client.OpenStackApiException, self.api.api_get, 'os-floating-ip-dns') self.assertEqual(410, ex.response.status_code) def test_floating_ip_dns_create_or_update(self): ex = self.assertRaises(api_client.OpenStackApiException, self.api.api_put, 'os-floating-ip-dns/domain1.example.org', {'project': 'project1', 'scope': 'public'}) self.assertEqual(410, ex.response.status_code) def test_floating_ip_dns_delete(self): ex = self.assertRaises(api_client.OpenStackApiException, self.api.api_delete, 'os-floating-ip-dns/domain1.example.org') self.assertEqual(410, ex.response.status_code) def test_floating_ip_dns_create_or_update_entry(self): url = 'os-floating-ip-dns/domain1.example.org/entries/instance1' ex = self.assertRaises(api_client.OpenStackApiException, self.api.api_put, url, {'ip': '192.168.1.1', 'dns_type': 'A'}) self.assertEqual(410, ex.response.status_code) def test_floating_ip_dns_entry_get(self): url = 'os-floating-ip-dns/domain1.example.org/entries/instance1' ex = self.assertRaises(api_client.OpenStackApiException, self.api.api_get, url) self.assertEqual(410, ex.response.status_code) def test_floating_ip_dns_entry_delete(self): url = 'os-floating-ip-dns/domain1.example.org/entries/instance1' ex = self.assertRaises(api_client.OpenStackApiException, self.api.api_delete, url) self.assertEqual(410, ex.response.status_code) def test_floating_ip_dns_entry_list(self): url = 'os-floating-ip-dns/domain1.example.org/entries/192.168.1.1' ex = self.assertRaises(api_client.OpenStackApiException, self.api.api_get, url) self.assertEqual(410, ex.response.status_code) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_floating_ip_pools.py0000664000175000017500000000254000000000000027512 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
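# The floating IP pools sample test stubs nova.network.neutron.API's
# get_floating_ip_pools() to return two fake pools and substitutes their
# names into the 'floatingippools-list-resp' template.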
from nova.tests.functional.api_sample_tests import api_sample_base class FloatingIPPoolsSampleTests(api_sample_base.ApiSampleTestBaseV21): sample_dir = "os-floating-ip-pools" def test_list_floatingippools(self): pool_list = [ {'name': 'pool1'}, {'name': 'pool2'}, ] def fake_get_floating_ip_pools(self, context): return pool_list self.stub_out('nova.network.neutron.API.get_floating_ip_pools', fake_get_floating_ip_pools) response = self._do_get('os-floating-ip-pools') subs = { 'pool1': pool_list[0]['name'], 'pool2': pool_list[1]['name'], } self._verify_response('floatingippools-list-resp', subs, response, 200) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_floating_ips.py0000664000175000017500000001772000000000000026467 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import nova.conf from nova.network import constants from nova.tests import fixtures from nova.tests.functional.api_sample_tests import api_sample_base CONF = nova.conf.CONF # TODO(stephenfin): Merge this back into the main class. We have to be careful # with how we do this since if we register two networks we'll have ambiguous # networks breaking auto-allocation class NeutronFixture(fixtures.NeutronFixture): network_1 = { 'id': fixtures.NeutronFixture.network_1['id'], 'name': 'public', 'description': '', 'status': 'ACTIVE', 'subnets': [ # NOTE(stephenfin): We set this below ], 'admin_state_up': True, 'tenant_id': fixtures.NeutronFixture.tenant_id, 'project_id': fixtures.NeutronFixture.tenant_id, 'shared': False, 'mtu': 1500, 'router:external': True, 'availability_zone_hints': [], 'availability_zones': [ 'nova' ], 'port_security_enabled': True, 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'is_default': True, } subnet_1 = { 'id': '6b0b19d2-22b8-45f9-bf32-06d0b89f2d47', 'name': 'public-subnet', 'description': '', 'ip_version': 4, 'ipv6_address_mode': None, 'ipv6_ra_mode': None, 'enable_dhcp': False, 'network_id': network_1['id'], 'tenant_id': fixtures.NeutronFixture.tenant_id, 'project_id': fixtures.NeutronFixture.tenant_id, 'dns_nameservers': [], 'gateway_ip': '172.24.4.1', 'allocation_pools': [ { 'start': '172.24.4.2', 'end': '172.24.4.254' } ], 'host_routes': [], 'cidr': '172.24.4.0/24', } subnet_2 = { 'id': '503505c5-0dd2-4756-b355-4056aa0b6338', 'name': 'ipv6-public-subnet', 'description': '', 'ip_version': 6, 'ipv6_address_mode': None, 'ipv6_ra_mode': None, 'enable_dhcp': False, 'network_id': network_1['id'], 'tenant_id': fixtures.NeutronFixture.tenant_id, 'project_id': fixtures.NeutronFixture.tenant_id, 'dns_nameservers': [], 'gateway_ip': '2001:db8::2', 'allocation_pools': [ {'start': '2001:db8::1', 'end': '2001:db8::1'}, {'start': '2001:db8::3', 'end': '2001:db8::ffff:ffff:ffff:ffff'}, ], 'host_routes': [], 'cidr': '2001:db8::/64', } network_1['subnets'] = [subnet_1['id'], subnet_2['id']] floatingip_1 = { 'id': 
'8baeddb4-45e2-4c36-8cb7-d79439a5f67c', 'description': '', 'status': 'DOWN', 'floating_ip_address': '172.24.4.17', 'fixed_ip_address': None, 'router_id': None, 'tenant_id': fixtures.NeutronFixture.tenant_id, 'project_id': fixtures.NeutronFixture.tenant_id, 'floating_network_id': network_1['id'], 'port_details': None, 'port_id': None, } floatingip_2 = { 'id': '05ef7490-745a-4af9-98e5-610dc97493c4', 'description': '', 'status': 'DOWN', 'floating_ip_address': '172.24.4.78', 'fixed_ip_address': None, 'router_id': None, 'tenant_id': fixtures.NeutronFixture.tenant_id, 'project_id': fixtures.NeutronFixture.tenant_id, 'floating_network_id': network_1['id'], 'port_details': None, 'port_id': None, } def __init__(self, test): super(NeutronFixture, self).__init__(test) self._ports = {} self._networks = { self.network_1['id']: copy.deepcopy(self.network_1), } self._floatingips = {} self._subnets = { self.subnet_1['id']: copy.deepcopy(self.subnet_1), self.subnet_2['id']: copy.deepcopy(self.subnet_2), } def create_floatingip(self, body=None): for floatingip in [self.floatingip_1, self.floatingip_2]: if floatingip['id'] not in self._floatingips: self._floatingips[floatingip['id']] = copy.deepcopy(floatingip) break else: # we can extend this later, if necessary raise Exception('We only support adding a max of two floating IPs') return {'floatingip': floatingip} def delete_floatingip(self, floatingip): if floatingip not in self._floatingips: raise Exception('This floating IP has not been added yet') del self._floatingips[floatingip] def show_floatingip(self, floatingip, **_params): if floatingip not in self._floatingips: raise Exception('This floating IP has not been added yet') return {'floatingip': copy.deepcopy(self._floatingips[floatingip])} def list_floatingips(self, retrieve_all=True, **_params): return {'floatingips': copy.deepcopy(list(self._floatingips.values()))} def list_extensions(self, *args, **kwargs): extensions = super().list_extensions(*args, **kwargs) extensions['extensions'].append( { # Copied from neutron-lib fip_port_details.py 'updated': '2018-04-09T10:00:00-00:00', 'name': constants.FIP_PORT_DETAILS, 'links': [], 'alias': 'fip-port-details', 'description': 'Add port_details attribute to Floating IP ' 'resource', }, ) return extensions class FloatingIpsTest(api_sample_base.ApiSampleTestBaseV21): sample_dir = "os-floating-ips" def setUp(self): super(FloatingIpsTest, self).setUp() # we use a custom NeutronFixture that mocks out floating IP stuff self.neutron = self.useFixture(NeutronFixture(self)) # we also use a more useful default floating pool value self.flags(default_floating_pool='public', group='neutron') def test_floating_ips_list_empty(self): response = self._do_get('os-floating-ips') self._verify_response('floating-ips-list-empty-resp', {}, response, 200) def test_floating_ips_list(self): self._do_post('os-floating-ips') self._do_post('os-floating-ips') response = self._do_get('os-floating-ips') self._verify_response('floating-ips-list-resp', {}, response, 200) def test_floating_ips_create_nopool(self): response = self._do_post('os-floating-ips') self._verify_response('floating-ips-create-resp', {}, response, 200) def test_floating_ips_create(self): response = self._do_post('os-floating-ips', 'floating-ips-create-req', {'pool': 'public'}) self._verify_response('floating-ips-create-resp', {}, response, 200) return response def test_floating_ips_get(self): floatingip = self.test_floating_ips_create().json()['floating_ip'] response = self._do_get('os-floating-ips/%s' % 
                                 floatingip['id'])
        self._verify_response('floating-ips-get-resp', {}, response, 200)

    def test_floating_ips_delete(self):
        floatingip = self.test_floating_ips_create().json()['floating_ip']
        response = self._do_delete('os-floating-ips/%s' % floatingip['id'])
        self.assertEqual(202, response.status_code)
        self.assertEqual("", response.text)

nova-21.2.4/nova/tests/functional/api_sample_tests/test_floating_ips_bulk.py

# Copyright 2014 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import nova.conf
from nova.tests.functional.api import client as api_client
from nova.tests.functional.api_sample_tests import api_sample_base

CONF = nova.conf.CONF


class FloatingIpsBulkTest(api_sample_base.ApiSampleTestBaseV21):
    ADMIN_API = True

    def test_floating_ips_bulk_list(self):
        ex = self.assertRaises(api_client.OpenStackApiException,
                               self.api.api_get, 'os-floating-ips-bulk')
        self.assertEqual(410, ex.response.status_code)

    def test_floating_ips_bulk_list_by_host(self):
        ex = self.assertRaises(api_client.OpenStackApiException,
                               self.api.api_get,
                               'os-floating-ips-bulk/testHost')
        self.assertEqual(410, ex.response.status_code)

    def test_floating_ips_bulk_create(self):
        ex = self.assertRaises(api_client.OpenStackApiException,
                               self.api.api_post, '/os-floating-ips-bulk',
                               {'ip_range': '192.168.1.0/24',
                                'pool': 'nova',
                                'interface': 'eth0'})
        self.assertEqual(410, ex.response.status_code)

    def test_floating_ips_bulk_delete(self):
        ex = self.assertRaises(api_client.OpenStackApiException,
                               self.api.api_put, 'os-floating-ips-bulk/delete',
                               {"ip_range": "192.168.1.0/24"})
        self.assertEqual(410, ex.response.status_code)

nova-21.2.4/nova/tests/functional/api_sample_tests/test_fping.py

# Copyright 2012 Nebula, Inc.
# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_utils import uuidutils

from nova.tests.functional.api import client as api_client
from nova.tests.functional import api_samples_test_base


class FpingSampleJsonTests(api_samples_test_base.ApiSampleTestBase):
    api_major_version = 'v2'

    def test_get_fping(self):
        ex = self.assertRaises(api_client.OpenStackApiException,
                               self.api.api_get, '/os-fping')
        self.assertEqual(410, ex.response.status_code)

    def test_get_fping_details(self):
        ex = self.assertRaises(api_client.OpenStackApiException,
                               self.api.api_get,
                               '/os-fping/%s' % uuidutils.generate_uuid())
        self.assertEqual(410, ex.response.status_code)

nova-21.2.4/nova/tests/functional/api_sample_tests/test_hosts.py

# Copyright 2012 Nebula, Inc.
# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from nova.tests.functional.api_sample_tests import api_sample_base


class HostsSampleJsonTest(api_sample_base.ApiSampleTestBaseV21):
    ADMIN_API = True
    sample_dir = "os-hosts"

    def test_host_startup(self):
        response = self._do_get('os-hosts/%s/startup' % self.compute.host)
        self._verify_response('host-get-startup', {}, response, 200)

    def test_host_reboot(self):
        response = self._do_get('os-hosts/%s/reboot' % self.compute.host)
        self._verify_response('host-get-reboot', {}, response, 200)

    def test_host_shutdown(self):
        response = self._do_get('os-hosts/%s/shutdown' % self.compute.host)
        self._verify_response('host-get-shutdown', {}, response, 200)

    def test_host_maintenance(self):
        response = self._do_put('os-hosts/%s' % self.compute.host,
                                'host-put-maintenance-req', {})
        self._verify_response('host-put-maintenance-resp', {}, response, 200)

    def test_host_get(self):
        response = self._do_get('os-hosts/%s' % self.compute.host)
        self._verify_response('host-get-resp', {}, response, 200)

    def test_hosts_list(self):
        response = self._do_get('os-hosts')
        self._verify_response('hosts-list-resp', {}, response, 200)

nova-21.2.4/nova/tests/functional/api_sample_tests/test_hypervisors.py

# Copyright 2012 Nebula, Inc.
# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock from nova.tests.functional.api_sample_tests import api_sample_base class HypervisorsSampleJsonTests(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True sample_dir = "os-hypervisors" def test_hypervisors_list(self): response = self._do_get('os-hypervisors') self._verify_response('hypervisors-list-resp', {}, response, 200) def test_hypervisors_search(self): response = self._do_get('os-hypervisors/fake/search') self._verify_response('hypervisors-search-resp', {}, response, 200) def test_hypervisors_without_servers(self): response = self._do_get('os-hypervisors/fake/servers') self._verify_response('hypervisors-without-servers-resp', {}, response, 200) @mock.patch("nova.compute.api.HostAPI.instance_get_all_by_host") def test_hypervisors_with_servers(self, mock_instance_get): instance = [ { "deleted": None, "name": "test_server1", "uuid": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" }, { "deleted": None, "name": "test_server2", "uuid": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb" }] mock_instance_get.return_value = instance response = self._do_get('os-hypervisors/fake/servers') self._verify_response('hypervisors-with-servers-resp', {}, response, 200) def test_hypervisors_detail(self): hypervisor_id = '1' subs = { 'hypervisor_id': hypervisor_id, 'service_id': '[0-9]+', } response = self._do_get('os-hypervisors/detail') self._verify_response('hypervisors-detail-resp', subs, response, 200) def test_hypervisors_show(self): hypervisor_id = '1' subs = { 'hypervisor_id': hypervisor_id, 'service_id': '[0-9]+', } response = self._do_get('os-hypervisors/%s' % hypervisor_id) self._verify_response('hypervisors-show-resp', subs, response, 200) def test_hypervisors_statistics(self): response = self._do_get('os-hypervisors/statistics') self._verify_response('hypervisors-statistics-resp', {}, response, 200) def test_hypervisors_uptime(self): def fake_get_host_uptime(self, context, hyp): return (" 08:32:11 up 93 days, 18:25, 12 users, load average:" " 0.20, 0.12, 0.14") self.stub_out('nova.compute.api.HostAPI.get_host_uptime', fake_get_host_uptime) hypervisor_id = '1' response = self._do_get('os-hypervisors/%s/uptime' % hypervisor_id) subs = { 'hypervisor_id': hypervisor_id, } self._verify_response('hypervisors-uptime-resp', subs, response, 200) class HypervisorsSampleJson228Tests(HypervisorsSampleJsonTests): microversion = '2.28' scenarios = [('v2_28', {'api_major_version': 'v2.1'})] def setUp(self): super(HypervisorsSampleJson228Tests, self).setUp() self.api.microversion = self.microversion class HypervisorsSampleJson233Tests(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True sample_dir = "os-hypervisors" microversion = '2.33' scenarios = [('v2_33', {'api_major_version': 'v2.1'})] def setUp(self): super(HypervisorsSampleJson233Tests, self).setUp() self.api.microversion = self.microversion # Start a new compute service to fake a record with hypervisor id=2 # for pagination test. 
host = 'host1' self.start_service('compute', host=host) def test_hypervisors_list(self): response = self._do_get('os-hypervisors?limit=1&marker=1') self._verify_response('hypervisors-list-resp', {}, response, 200) def test_hypervisors_detail(self): subs = { 'hypervisor_id': '2', 'host': 'host1', 'host_name': 'host1', 'service_id': '[0-9]+', } response = self._do_get('os-hypervisors/detail?limit=1&marker=1') self._verify_response('hypervisors-detail-resp', subs, response, 200) class HypervisorsSampleJson253Tests(HypervisorsSampleJson228Tests): microversion = '2.53' scenarios = [('v2_53', {'api_major_version': 'v2.1'})] def setUp(self): super(HypervisorsSampleJson253Tests, self).setUp() self.compute_node_1 = self.compute.service_ref.compute_node def generalize_subs(self, subs, vanilla_regexes): """Give the test a chance to modify subs after the server response was verified, and before the on-disk doc/api_samples file is checked. """ # When comparing the template to the sample we just care that the # hypervisor id and service id are UUIDs. subs['hypervisor_id'] = vanilla_regexes['uuid'] subs['service_id'] = vanilla_regexes['uuid'] return subs def test_hypervisors_list(self): # Start another compute service to get a 2nd compute for paging tests. compute_node_2 = self.start_service( 'compute', host='host2').service_ref.compute_node marker = self.compute_node_1.uuid response = self._do_get('os-hypervisors?limit=1&marker=%s' % marker) subs = {'hypervisor_id': compute_node_2.uuid} self._verify_response('hypervisors-list-resp', subs, response, 200) def test_hypervisors_detail(self): # Start another compute service to get a 2nd compute for paging tests. host = 'host2' service_2 = self.start_service('compute', host=host).service_ref compute_node_2 = service_2.compute_node marker = self.compute_node_1.uuid subs = { 'hypervisor_id': compute_node_2.uuid, 'service_id': service_2.uuid } response = self._do_get('os-hypervisors/detail?limit=1&marker=%s' % marker) self._verify_response('hypervisors-detail-resp', subs, response, 200) @mock.patch("nova.compute.api.HostAPI.instance_get_all_by_host") def test_hypervisors_detail_with_servers(self, instance_get_all_by_host): """List hypervisors with details and with hosted servers.""" instances = [ { "name": "test_server1", "uuid": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" }, { "name": "test_server2", "uuid": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb" }] instance_get_all_by_host.return_value = instances response = self._do_get('os-hypervisors/detail?with_servers=1') subs = { 'hypervisor_id': self.compute_node_1.uuid, 'service_id': self.compute.service_ref.uuid, } self._verify_response('hypervisors-detail-with-servers-resp', subs, response, 200) def test_hypervisors_search(self): """The search route is deprecated in 2.53 and is now a query parameter on the GET /os-hypervisors API. """ response = self._do_get( 'os-hypervisors?hypervisor_hostname_pattern=fake') subs = {'hypervisor_id': self.compute_node_1.uuid} self._verify_response('hypervisors-search-resp', subs, response, 200) @mock.patch("nova.compute.api.HostAPI.instance_get_all_by_host") def test_hypervisors_with_servers(self, instance_get_all_by_host): """The servers route is deprecated in 2.53 and is now a query parameter on the GET /os-hypervisors API. 
""" instances = [ { "name": "test_server1", "uuid": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" }, { "name": "test_server2", "uuid": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb" }] instance_get_all_by_host.return_value = instances response = self._do_get('os-hypervisors?with_servers=true') subs = {'hypervisor_id': self.compute_node_1.uuid} self._verify_response('hypervisors-with-servers-resp', subs, response, 200) def test_hypervisors_without_servers(self): # This is the same as GET /os-hypervisors in 2.53 which is covered by # test_hypervisors_list already. pass def test_hypervisors_uptime(self): def fake_get_host_uptime(self, context, hyp): return (" 08:32:11 up 93 days, 18:25, 12 users, load average:" " 0.20, 0.12, 0.14") self.stub_out('nova.compute.api.HostAPI.get_host_uptime', fake_get_host_uptime) hypervisor_id = self.compute_node_1.uuid response = self._do_get('os-hypervisors/%s/uptime' % hypervisor_id) subs = { 'hypervisor_id': hypervisor_id, } self._verify_response('hypervisors-uptime-resp', subs, response, 200) def test_hypervisors_show(self): hypervisor_id = self.compute_node_1.uuid subs = { 'hypervisor_id': hypervisor_id, 'service_id': self.compute.service_ref.uuid, } response = self._do_get('os-hypervisors/%s' % hypervisor_id) self._verify_response('hypervisors-show-resp', subs, response, 200) @mock.patch("nova.compute.api.HostAPI.instance_get_all_by_host") def test_hypervisors_show_with_servers(self, instance_get_all_by_host): """Tests getting details for a specific hypervisor and including the hosted servers in the response. """ instances = [ { "name": "test_server1", "uuid": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" }, { "name": "test_server2", "uuid": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb" }] instance_get_all_by_host.return_value = instances hypervisor_id = self.compute_node_1.uuid subs = { 'hypervisor_id': hypervisor_id, 'service_id': self.compute.service_ref.uuid, } response = self._do_get('os-hypervisors/%s?with_servers=1' % hypervisor_id) self._verify_response('hypervisors-show-with-servers-resp', subs, response, 200) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_images.py0000664000175000017500000000633400000000000025255 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import api_sample_base from nova.tests.unit.image import fake class ImagesSampleJsonTest(api_sample_base.ApiSampleTestBaseV21): sample_dir = 'images' def test_images_list(self): # Get api sample of images get list request. response = self._do_get('images') self._verify_response('images-list-get-resp', {}, response, 200) def test_image_get(self): # Get api sample of one single image details request. 
image_id = fake.get_valid_image_id() response = self._do_get('images/%s' % image_id) subs = {'image_id': image_id} self._verify_response('image-get-resp', subs, response, 200) def test_images_details(self): # Get api sample of all images details request. response = self._do_get('images/detail') self._verify_response('images-details-get-resp', {}, response, 200) def test_image_metadata_get(self): # Get api sample of an image metadata request. image_id = fake.get_valid_image_id() response = self._do_get('images/%s/metadata' % image_id) subs = {'image_id': image_id} self._verify_response('image-metadata-get-resp', subs, response, 200) def test_image_metadata_post(self): # Get api sample to update metadata of an image metadata request. image_id = fake.get_valid_image_id() response = self._do_post( 'images/%s/metadata' % image_id, 'image-metadata-post-req', {}) self._verify_response('image-metadata-post-resp', {}, response, 200) def test_image_metadata_put(self): # Get api sample of image metadata put request. image_id = fake.get_valid_image_id() response = self._do_put('images/%s/metadata' % (image_id), 'image-metadata-put-req', {}) self._verify_response('image-metadata-put-resp', {}, response, 200) def test_image_meta_key_get(self): # Get api sample of an image metadata key request. image_id = fake.get_valid_image_id() key = "kernel_id" response = self._do_get('images/%s/metadata/%s' % (image_id, key)) self._verify_response('image-meta-key-get', {}, response, 200) def test_image_meta_key_put(self): # Get api sample of image metadata key put request. image_id = fake.get_valid_image_id() key = "auto_disk_config" response = self._do_put('images/%s/metadata/%s' % (image_id, key), 'image-meta-key-put-req', {}) self._verify_response('image-meta-key-put-resp', {}, response, 200) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_instance_actions.py0000664000175000017500000001424500000000000027334 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova.tests.functional.api_sample_tests import test_servers from nova.tests.functional import api_samples_test_base from nova.tests.unit import policy_fixture class ServerActionsSampleJsonTest(test_servers.ServersSampleBase): microversion = None ADMIN_API = True sample_dir = 'os-instance-actions' def setUp(self): super(ServerActionsSampleJsonTest, self).setUp() # Create and stop a server self.uuid = self._post_server() self._get_response('servers/%s/action' % self.uuid, 'POST', '{"os-stop": null}') response = self._do_get('servers/%s/os-instance-actions' % self.uuid) response_data = api_samples_test_base.pretty_data(response.content) actions = api_samples_test_base.objectify(response_data) self.action_stop = actions['instanceActions'][0] self._wait_for_state_change({'id': self.uuid}, 'SHUTOFF') self.policy = self.useFixture(policy_fixture.RealPolicyFixture()) def _get_subs(self): return { 'uuid': self.uuid, 'project_id': self.action_stop['project_id'] } def test_instance_action_get(self): req_id = self.action_stop['request_id'] response = self._do_get('servers/%s/os-instance-actions/%s' % (self.uuid, req_id)) # Non-admins can see event details except for the "traceback" field # starting in the 2.51 microversion. if self.ADMIN_API: name = 'instance-action-get-resp' else: name = 'instance-action-get-non-admin-resp' self._verify_response(name, self._get_subs(), response, 200) def test_instance_actions_list(self): response = self._do_get('servers/%s/os-instance-actions' % self.uuid) self._verify_response('instance-actions-list-resp', self._get_subs(), response, 200) class ServerActionsV221SampleJsonTest(ServerActionsSampleJsonTest): microversion = '2.21' scenarios = [('v2_21', {'api_major_version': 'v2.1'})] class ServerActionsV251AdminSampleJsonTest(ServerActionsSampleJsonTest): """Tests the 2.51 microversion for the os-instance-actions API. The 2.51 microversion allows non-admins to see instance action event details *except* for the traceback field. The tests in this class are run as an admin user so all fields will be displayed. """ microversion = '2.51' scenarios = [('v2_51', {'api_major_version': 'v2.1'})] class ServerActionsV251NonAdminSampleJsonTest(ServerActionsSampleJsonTest): """Tests the 2.51 microversion for the os-instance-actions API. The 2.51 microversion allows non-admins to see instance action event details *except* for the traceback field. The tests in this class are run as a non-admin user so all fields except for the ``traceback`` field will be displayed. 
""" ADMIN_API = False microversion = '2.51' scenarios = [('v2_51', {'api_major_version': 'v2.1'})] class ServerActionsV258SampleJsonTest(ServerActionsV251AdminSampleJsonTest): microversion = '2.58' scenarios = [('v2_58', {'api_major_version': 'v2.1'})] def test_instance_actions_list_with_limit(self): response = self._do_get('servers/%s/os-instance-actions' '?limit=1' % self.uuid) self._verify_response('instance-actions-list-with-limit-resp', self._get_subs(), response, 200) def test_instance_actions_list_with_marker(self): marker = self.action_stop['request_id'] response = self._do_get('servers/%s/os-instance-actions' '?marker=%s' % (self.uuid, marker)) self._verify_response('instance-actions-list-with-marker-resp', self._get_subs(), response, 200) def test_instance_actions_with_changes_since(self): stop_action_time = self.action_stop['start_time'] response = self._do_get( 'servers/%s/os-instance-actions' '?changes-since=%s' % (self.uuid, stop_action_time)) self._verify_response( 'instance-actions-list-with-changes-since', self._get_subs(), response, 200) class ServerActionsV258NonAdminSampleJsonTest(ServerActionsV258SampleJsonTest): ADMIN_API = False class ServerActionsV262SampleJsonTest(ServerActionsV258SampleJsonTest): microversion = '2.62' scenarios = [('v2_62', {'api_major_version': 'v2.1'})] def _get_subs(self): return { 'uuid': self.uuid, 'project_id': self.action_stop['project_id'], 'event_host': r'\w+', 'event_hostId': '[a-f0-9]+' } class ServerActionsV262NonAdminSampleJsonTest(ServerActionsV262SampleJsonTest): ADMIN_API = False class ServerActionsV266SampleJsonTest(ServerActionsV262SampleJsonTest): microversion = '2.66' scenarios = [('v2_66', {'api_major_version': 'v2.1'})] def test_instance_actions_with_changes_before(self): stop_action_time = self.action_stop['updated_at'] response = self._do_get( 'servers/%s/os-instance-actions' '?changes-before=%s' % (self.uuid, stop_action_time)) self._verify_response( 'instance-actions-list-with-changes-before', self._get_subs(), response, 200) class ServerActionsV284SampleJsonTest(ServerActionsV266SampleJsonTest): microversion = '2.84' scenarios = [('2.84', {'api_major_version': 'v2.1'})] class ServerActionsV284NonAdminSampleJsonTest(ServerActionsV284SampleJsonTest): ADMIN_API = False ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_instance_usage_audit_log.py0000664000175000017500000000571600000000000031032 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from datetime import datetime from six.moves import urllib from nova import context from nova import objects from nova.tests.functional.api_sample_tests import api_sample_base class InstanceUsageAuditLogJsonTest(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True sample_dir = "os-instance-usage-audit-log" def setUp(self): super(InstanceUsageAuditLogJsonTest, self).setUp() def fake_service_get_all(self, context, filters=None, set_zones=False): services = [objects.Service(host='samplehost0'), objects.Service(host='samplehost1'), objects.Service(host='samplehost2'), objects.Service(host='samplehost3')] return services def fake_utcnow(with_timezone=False): # It is not UTC time, but no effect for testing return datetime(2012, 7, 3, 19, 36, 5, 0) self.stub_out('oslo_utils.timeutils.utcnow', fake_utcnow) self.stub_out('nova.compute.api.HostAPI.service_get_all', fake_service_get_all) for i in range(0, 3): self._create_task_log('samplehost%d' % i, i + 1) def _create_task_log(self, host, num_instances): task_log = objects.TaskLog(context.get_admin_context()) task_log.task_name = 'instance_usage_audit' task_log.period_beginning = '2012-06-01 00:00:00' task_log.period_ending = '2012-07-01 00:00:00' task_log.host = host task_log.task_items = num_instances task_log.message = ( 'Instance usage audit ran for host %s, %s ' 'instances in 0.01 seconds.' % (host, num_instances)) task_log.begin_task() task_log.errors = 1 task_log.end_task() def test_show_instance_usage_audit_log(self): response = self._do_get('os-instance_usage_audit_log/%s' % urllib.parse.quote('2012-07-05 10:00:00')) self._verify_response('inst-usage-audit-log-show-get-resp', {}, response, 200) def test_index_instance_usage_audit_log(self): response = self._do_get('os-instance_usage_audit_log') self._verify_response('inst-usage-audit-log-index-get-resp', {}, response, 200) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_keypairs.py0000664000175000017500000003137500000000000025642 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import uuidutils from nova.objects import keypair as keypair_obj from nova.tests.functional.api_sample_tests import api_sample_base from nova.tests.unit import fake_crypto class KeyPairsSampleJsonTest(api_sample_base.ApiSampleTestBaseV21): microversion = None sample_dir = 'os-keypairs' expected_delete_status_code = 202 expected_post_status_code = 200 def setUp(self): super(KeyPairsSampleJsonTest, self).setUp() self.api.microversion = self.microversion # TODO(sdague): this is only needed because we randomly choose the # uuid each time. 
def generalize_subs(self, subs, vanilla_regexes): subs['keypair_name'] = 'keypair-[0-9a-f-]+' return subs def test_keypairs_post(self): return self._check_keypairs_post() def _check_keypairs_post(self, **kwargs): """Get api sample of key pairs post request.""" key_name = kwargs.pop('kp_name', None) if not key_name: key_name = 'keypair-' + uuids.fake subs = dict(keypair_name=key_name, **kwargs) response = self._do_post('os-keypairs', 'keypairs-post-req', subs) subs = {'keypair_name': key_name} self._verify_response('keypairs-post-resp', subs, response, self.expected_post_status_code) # NOTE(maurosr): return the key_name is necessary cause the # verification returns the label of the last compared information in # the response, not necessarily the key name. return key_name def test_keypairs_import_key_post(self): public_key = fake_crypto.get_ssh_public_key() self._check_keypairs_import_key_post(public_key) def _check_keypairs_import_key_post(self, public_key, **kwargs): # Get api sample of key pairs post to import user's key. key_name = 'keypair-' + uuids.fake subs = { 'keypair_name': key_name, } params = subs.copy() params['public_key'] = public_key params.update(**kwargs) response = self._do_post('os-keypairs', 'keypairs-import-post-req', params) self._verify_response('keypairs-import-post-resp', subs, response, self.expected_post_status_code) def test_keypairs_list(self): # Get api sample of key pairs list request. key_name = self.test_keypairs_post() response = self._do_get('os-keypairs') subs = {'keypair_name': key_name} self._verify_response('keypairs-list-resp', subs, response, 200) def test_keypairs_get(self): # Get api sample of key pairs get request. key_name = self.test_keypairs_post() response = self._do_get('os-keypairs/%s' % key_name) subs = {'keypair_name': key_name} self._verify_response('keypairs-get-resp', subs, response, 200) def test_keypairs_delete(self): # Get api sample of key pairs delete request. key_name = self.test_keypairs_post() response = self._do_delete('os-keypairs/%s' % key_name) self.assertEqual(self.expected_delete_status_code, response.status_code) class KeyPairsV22SampleJsonTest(KeyPairsSampleJsonTest): microversion = '2.2' expected_post_status_code = 201 expected_delete_status_code = 204 # NOTE(gmann): microversion tests do not need to run for v2 API # so defining scenarios only for v2.2 which will run the original tests # by appending '(v2_2)' in test_id. scenarios = [('v2_2', {'api_major_version': 'v2.1'})] def test_keypairs_post(self): # NOTE(claudiub): overrides the method with the same name in # KeypairsSampleJsonTest, as it is used by other tests. return self._check_keypairs_post( keypair_type=keypair_obj.KEYPAIR_TYPE_SSH) def test_keypairs_post_x509(self): return self._check_keypairs_post( keypair_type=keypair_obj.KEYPAIR_TYPE_X509) def test_keypairs_post_invalid(self): key_name = 'keypair-' + uuids.fake subs = dict(keypair_name=key_name, keypair_type='fakey_type') response = self._do_post('os-keypairs', 'keypairs-post-req', subs) self.assertEqual(400, response.status_code) def test_keypairs_import_key_post(self): # NOTE(claudiub): overrides the method with the same name in # KeypairsSampleJsonTest, since the API sample expects a keypair_type. 
public_key = fake_crypto.get_ssh_public_key() self._check_keypairs_import_key_post( public_key, keypair_type=keypair_obj.KEYPAIR_TYPE_SSH) def test_keypairs_import_key_post_x509(self): public_key = fake_crypto.get_x509_cert_and_fingerprint()[0] public_key = public_key.replace('\n', '\\n') self._check_keypairs_import_key_post( public_key, keypair_type=keypair_obj.KEYPAIR_TYPE_X509) def _check_keypairs_import_key_post_invalid(self, keypair_type): key_name = 'keypair-' + uuids.fake subs = { 'keypair_name': key_name, 'keypair_type': keypair_type, 'public_key': fake_crypto.get_ssh_public_key() } response = self._do_post('os-keypairs', 'keypairs-import-post-req', subs) self.assertEqual(400, response.status_code) def test_keypairs_import_key_post_invalid_type(self): self._check_keypairs_import_key_post_invalid( keypair_type='fakey_type') def test_keypairs_import_key_post_invalid_combination(self): self._check_keypairs_import_key_post_invalid( keypair_type=keypair_obj.KEYPAIR_TYPE_X509) class KeyPairsV210SampleJsonTest(KeyPairsSampleJsonTest): ADMIN_API = True microversion = '2.10' expected_post_status_code = 201 expected_delete_status_code = 204 scenarios = [('v2_10', {'api_major_version': 'v2.1'})] def test_keypair_create_for_user(self): subs = { 'keypair_type': keypair_obj.KEYPAIR_TYPE_SSH, 'public_key': fake_crypto.get_ssh_public_key(), 'user_id': "fake" } self._check_keypairs_post(**subs) def test_keypairs_post(self): return self._check_keypairs_post( keypair_type=keypair_obj.KEYPAIR_TYPE_SSH, user_id="admin") def test_keypairs_import_key_post(self): # NOTE(claudiub): overrides the method with the same name in # KeypairsSampleJsonTest, since the API sample expects a keypair_type. public_key = fake_crypto.get_ssh_public_key() self._check_keypairs_import_key_post( public_key, keypair_type=keypair_obj.KEYPAIR_TYPE_SSH, user_id="fake") def test_keypairs_delete_for_user(self): # Delete a keypair on behalf of a user subs = { 'keypair_type': keypair_obj.KEYPAIR_TYPE_SSH, 'public_key': fake_crypto.get_ssh_public_key(), 'user_id': "fake" } key_name = self._check_keypairs_post(**subs) response = self._do_delete('os-keypairs/%s?user_id=fake' % key_name) self.assertEqual(self.expected_delete_status_code, response.status_code) def test_keypairs_list_for_different_users(self): # Get api sample of key pairs list request. 
# create common kp_name for two users kp_name = 'keypair-' + uuids.fake keypair_user1 = self._check_keypairs_post( keypair_type=keypair_obj.KEYPAIR_TYPE_SSH, user_id="user1", kp_name=kp_name) keypair_user2 = self._check_keypairs_post( keypair_type=keypair_obj.KEYPAIR_TYPE_SSH, user_id="user2", kp_name=kp_name) # get all keypairs for user1 (only one) response = self._do_get('os-keypairs?user_id=user1') subs = {'keypair_name': keypair_user1} self._verify_response('keypairs-list-resp', subs, response, 200) # get all keypairs for user2 (only one) response = self._do_get('os-keypairs?user_id=user2') subs = {'keypair_name': keypair_user2} self._verify_response('keypairs-list-resp', subs, response, 200) class KeyPairsV210SampleJsonTestNotAdmin(KeyPairsV210SampleJsonTest): ADMIN_API = False def test_keypairs_post(self): return self._check_keypairs_post( keypair_type=keypair_obj.KEYPAIR_TYPE_SSH, user_id="fake") def test_keypairs_post_for_other_user(self): rules = {'os_compute_api:os-keypairs:create': 'rule:admin_api or user_id:%(user_id)s'} self.policy.set_rules(rules, overwrite=False) key_name = 'keypair-' + uuids.fake subs = dict(keypair_name=key_name, keypair_type=keypair_obj.KEYPAIR_TYPE_SSH, user_id='fake1') response = self._do_post('os-keypairs', 'keypairs-post-req', subs) self.assertEqual(403, response.status_code) def test_keypairs_list_for_different_users(self): # get and post for other users is forbidden for non admin rules = {'os_compute_api:os-keypairs:index': 'rule:admin_api or user_id:%(user_id)s'} self.policy.set_rules(rules, overwrite=False) response = self._do_get('os-keypairs?user_id=fake1') self.assertEqual(403, response.status_code) class KeyPairsV235SampleJsonTest(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True sample_dir = 'os-keypairs' microversion = '2.35' expected_post_status_code = 201 scenarios = [('v2_35', {'api_major_version': 'v2.1'})] def setUp(self): super(KeyPairsV235SampleJsonTest, self).setUp() self.api.microversion = self.microversion # TODO(pkholkin): this is only needed because we randomly choose the # uuid each time. def generalize_subs(self, subs, vanilla_regexes): subs['keypair_name'] = 'keypair-[0-9a-f-]+' return subs def test_keypairs_post(self, user="admin", kp_name=None): return self._check_keypairs_post( keypair_type=keypair_obj.KEYPAIR_TYPE_SSH, user_id=user, kp_name=kp_name) def _check_keypairs_post(self, **kwargs): """Get api sample of key pairs post request.""" key_name = kwargs.pop('kp_name', None) if not key_name: key_name = 'keypair-' + uuidutils.generate_uuid() subs = dict(keypair_name=key_name, **kwargs) response = self._do_post('os-keypairs', 'keypairs-post-req', subs) subs = {'keypair_name': key_name} self._verify_response('keypairs-post-resp', subs, response, self.expected_post_status_code) return key_name def test_keypairs_list(self): # Get api sample of key pairs list request. # sort key_pairs by name before paging keypairs = sorted([self.test_keypairs_post() for i in range(3)]) response = self._do_get('os-keypairs?marker=%s&limit=1' % keypairs[1]) subs = {'keypair_name': keypairs[2]} self._verify_response('keypairs-list-resp', subs, response, 200) def test_keypairs_list_for_different_users(self): # Get api sample of key pairs list request. 
# create common kp_names for two users kp_names = ['keypair-' + uuidutils.generate_uuid() for i in range(3)] # sort key_pairs by name before paging keypairs_user1 = sorted([self.test_keypairs_post( user="user1", kp_name=kp_name) for kp_name in kp_names]) keypairs_user2 = sorted([self.test_keypairs_post( user="user2", kp_name=kp_name) for kp_name in kp_names]) # get all keypairs after the second for user1 response = self._do_get('os-keypairs?user_id=user1&marker=%s' % keypairs_user1[1]) subs = {'keypair_name': keypairs_user1[2]} self._verify_response('keypairs-list-user1-resp', subs, response, 200) # get only one keypair after the second for user2 response = self._do_get('os-keypairs?user_id=user2&marker=%s&limit=1' % keypairs_user2[1]) subs = {'keypair_name': keypairs_user2[2]} self._verify_response('keypairs-list-user2-resp', subs, response, 200) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_limits.py0000664000175000017500000000547700000000000025320 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import api_sample_base class LimitsSampleJsonTest(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True sample_dir = "limits" def setUp(self): super(LimitsSampleJsonTest, self).setUp() # NOTE(gmann): We have to separate the template files between V2 # and V2.1 as the response are different. self.template = 'limit-get-resp' def test_limits_get(self): response = self._do_get('limits') self._verify_response(self.template, {}, response, 200) class LimitsV236Test(api_sample_base.ApiSampleTestBaseV21): """Test limits don't return network resources after 2.36. We dropped the network API in 2.36, which also means that we shouldn't be returning any limits related to network resources either. This tests a different limits template after that point which does not have these. """ sample_dir = "limits" microversion = '2.36' scenarios = [('v2_36', {'api_major_version': 'v2.1'})] def test_limits_get(self): self.api.microversion = self.microversion response = self._do_get('limits') self._verify_response('limit-get-resp', {}, response, 200) class LimitsV239Test(api_sample_base.ApiSampleTestBaseV21): """Test limits don't return 'maxImageMeta' field after 2.39. We dropped the image-metadata proxy API in 2.39, which also means that we shouldn't be returning 'maxImageMeta' field in 'os-limits' response. 
""" sample_dir = "limits" microversion = '2.39' scenarios = [('v2_39', {'api_major_version': 'v2.1'})] def test_limits_get(self): self.api.microversion = self.microversion response = self._do_get('limits') self._verify_response('limit-get-resp', {}, response, 200) class LimitsV257Test(api_sample_base.ApiSampleTestBaseV21): """Test limits don't return maxPersonality* fields after 2.57.""" sample_dir = "limits" microversion = '2.57' scenarios = [('v2_57', {'api_major_version': 'v2.1'})] def test_limits_get(self): self.api.microversion = self.microversion response = self._do_get('limits') self._verify_response('limit-get-resp', {}, response, 200) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_lock_server.py0000664000175000017500000000600200000000000026316 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import test_servers class LockServerSamplesJsonTest(test_servers.ServersSampleBase): sample_dir = "os-lock-server" def setUp(self): """setUp Method for LockServer api samples extension This method creates the server that will be used in each tests """ super(LockServerSamplesJsonTest, self).setUp() self.uuid = self._post_server() def test_post_lock_server(self): # Get api samples to lock server request. response = self._do_post('servers/%s/action' % self.uuid, 'lock-server', {}) self.assertEqual(202, response.status_code) def test_post_unlock_server(self): # Get api samples to unlock server request. self.test_post_lock_server() response = self._do_post('servers/%s/action' % self.uuid, 'unlock-server', {}) self.assertEqual(202, response.status_code) class LockServerSamplesJsonTestV273(test_servers.ServersSampleBase): sample_dir = "os-lock-server" microversion = '2.73' scenarios = [('v2_73', {'api_major_version': 'v2.1'})] def setUp(self): """setUp Method for LockServer api samples extension This method creates the server that will be used in each test """ super(LockServerSamplesJsonTestV273, self).setUp() self.uuid = self._post_server() def test_post_lock_server(self): # backwards compatibility. response = self._do_post('servers/%s/action' % self.uuid, name='lock-server', subs={}) self.assertEqual(202, response.status_code) def test_post_lock_server_with_reason(self): # Get api samples to lock server request. response = self._do_post('servers/%s/action' % self.uuid, name='lock-server-with-reason', subs={}) self.assertEqual(202, response.status_code) def test_post_unlock_server(self): # Get api samples to unlock server request. # We first call the previous test to lock the server with reason # and then unlock it to post a response for unlock. 
self.test_post_lock_server_with_reason() response = self._do_post('servers/%s/action' % self.uuid, name='unlock-server', subs={}) self.assertEqual(202, response.status_code) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_migrate_server.py0000664000175000017500000002122600000000000027023 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils import versionutils from nova import exception from nova import objects from nova.tests.functional.api_sample_tests import test_servers def fake_get_compute(context, host): # TODO(stephenfin): It's gross that we even need this in a functional test # where we can control the running compute services. Stop doing it. service = dict(host=host, binary='nova-compute', topic='compute', report_count=1, updated_at='foo', hypervisor_type='bar', hypervisor_version=( versionutils.convert_version_to_int('1.0')), disabled=False) return {'compute_node': [service]} class MigrateServerSamplesJsonTest(test_servers.ServersSampleBase): sample_dir = "os-migrate-server" def setUp(self): """setUp Method for MigrateServer api samples extension This method creates the server that will be used in each tests """ super(MigrateServerSamplesJsonTest, self).setUp() self.uuid = self._post_server() self.host_attended = self.compute.host @mock.patch('nova.conductor.manager.ComputeTaskManager._cold_migrate') def test_post_migrate(self, mock_cold_migrate): # Get api samples to migrate server request. response = self._do_post('servers/%s/action' % self.uuid, 'migrate-server', {}) self.assertEqual(202, response.status_code) def _check_post_live_migrate_server(self, req_subs=None): if not req_subs: req_subs = {'hostname': self.compute.host} def fake_live_migrate(_self, context, instance, scheduler_hint, block_migration, disk_over_commit, request_spec): self.assertEqual(self.uuid, instance.uuid) host = scheduler_hint["host"] self.assertEqual(self.host_attended, host) self.stub_out( 'nova.conductor.manager.ComputeTaskManager._live_migrate', fake_live_migrate) self.stub_out( 'nova.db.api.service_get_by_compute_host', fake_get_compute) response = self._do_post('servers/%s/action' % self.uuid, 'live-migrate-server', req_subs) self.assertEqual(202, response.status_code) def test_post_live_migrate_server(self): # Get api samples to server live migrate request. self._check_post_live_migrate_server() def test_live_migrate_compute_host_not_found(self): hostname = 'dummy-host' def fake_execute(_self): raise exception.ComputeHostNotFound(host=hostname) self.stub_out('nova.conductor.tasks.live_migrate.' 
'LiveMigrationTask._execute', fake_execute) response = self._do_post('servers/%s/action' % self.uuid, 'live-migrate-server', {'hostname': hostname}) self.assertEqual(400, response.status_code) class MigrateServerSamplesJsonTestV225(MigrateServerSamplesJsonTest): sample_dir = "os-migrate-server" microversion = '2.25' scenarios = [('v2_25', {'api_major_version': 'v2.1'})] def test_post_migrate(self): # no changes for migrate-server pass class MigrateServerSamplesJsonTestV230(MigrateServerSamplesJsonTest): sample_dir = "os-migrate-server" microversion = '2.30' scenarios = [('v2_30', {'api_major_version': 'v2.1'})] def test_post_migrate(self): # no changes for migrate-server pass @mock.patch('nova.objects.ComputeNodeList.get_all_by_host') def test_post_live_migrate_server(self, compute_node_get_all_by_host): # Get api samples to server live migrate request. fake_computes = objects.ComputeNodeList( objects=[objects.ComputeNode(host='testHost', hypervisor_hostname='host')]) compute_node_get_all_by_host.return_value = fake_computes self.host_attended = None self._check_post_live_migrate_server( req_subs={'hostname': self.compute.host, 'force': 'False'}) def test_post_live_migrate_server_with_force(self): self.host_attended = self.compute.host self._check_post_live_migrate_server( req_subs={'hostname': self.compute.host, 'force': 'True'}) def test_live_migrate_compute_host_not_found(self): hostname = 'dummy-host' def fake_execute(_self): raise exception.ComputeHostNotFound(host=hostname) self.stub_out('nova.conductor.tasks.live_migrate.' 'LiveMigrationTask._execute', fake_execute) response = self._do_post('servers/%s/action' % self.uuid, 'live-migrate-server', {'hostname': hostname, 'force': 'False'}) self.assertEqual(400, response.status_code) class MigrateServerSamplesJsonTestV256(test_servers.ServersSampleBase): sample_dir = "os-migrate-server" microversion = '2.56' scenarios = [('v2_56', {'api_major_version': 'v2.1'})] def setUp(self): """setUp Method for MigrateServer api samples extension This method creates the server that will be used in each tests """ super(MigrateServerSamplesJsonTestV256, self).setUp() self.uuid = self._post_server() @mock.patch('nova.conductor.manager.ComputeTaskManager._cold_migrate') def test_post_migrate(self, mock_cold_migrate): response = self._do_post('servers/%s/action' % self.uuid, 'migrate-server', {'hostname': 'null'}) self.assertEqual(202, response.status_code) @mock.patch('nova.objects.ComputeNodeList.get_all_by_host', return_value=[objects.ComputeNode( host='target-host', hypervisor_hostname='target-node')]) @mock.patch('nova.conductor.manager.ComputeTaskManager._cold_migrate') def test_post_migrate_with_host(self, mock_cold_migrate, mock_get_all_by_host): response = self._do_post('servers/%s/action' % self.uuid, 'migrate-server', {'hostname': '"target-host"'}) self.assertEqual(202, response.status_code) @mock.patch('nova.conductor.manager.ComputeTaskManager._cold_migrate') def test_post_migrate_null(self, mock_cold_migrate): # Check backward compatibility. 
response = self._do_post('servers/%s/action' % self.uuid, 'migrate-server-null', {}) self.assertEqual(202, response.status_code) class MigrateServerSamplesJsonTestV268(test_servers.ServersSampleBase): sample_dir = "os-migrate-server" microversion = '2.68' scenarios = [('v2_68', {'api_major_version': 'v2.1'})] def setUp(self): """setUp Method for MigrateServer api samples extension This method creates the server that will be used in each tests """ super(MigrateServerSamplesJsonTestV268, self).setUp() self.uuid = self._post_server() def test_post_live_migrate_server(self): # Get api samples to server live migrate request. req_subs = {'hostname': self.compute.host} def fake_live_migrate(_self, context, instance, scheduler_hint, block_migration, disk_over_commit, request_spec): self.assertEqual(self.uuid, instance.uuid) self.stub_out( 'nova.conductor.manager.ComputeTaskManager._live_migrate', fake_live_migrate) self.stub_out( 'nova.db.api.service_get_by_compute_host', fake_get_compute) response = self._do_post('servers/%s/action' % self.uuid, 'live-migrate-server', req_subs) self.assertEqual(202, response.status_code) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_migrations.py0000664000175000017500000004041000000000000026155 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime from oslo_utils.fixture import uuidsentinel as uuids from nova import context from nova import objects from nova.tests.functional.api_sample_tests import api_sample_base # NOTE(ShaoHe Feng) here I can not use uuidsentinel, it generate a random # UUID. The uuid in doc/api_samples files is fixed. 
INSTANCE_UUID_1 = "8600d31b-d1a1-4632-b2ff-45c2be1a70ff" INSTANCE_UUID_2 = "9128d044-7b61-403e-b766-7547076ff6c1" def _stub_migrations(stub_self, context, filters): fake_migrations = [ { 'id': 1234, 'source_node': 'node1', 'dest_node': 'node2', 'source_compute': 'compute1', 'dest_compute': 'compute2', 'dest_host': '1.2.3.4', 'status': 'done', 'instance_uuid': INSTANCE_UUID_1, 'old_instance_type_id': 1, 'new_instance_type_id': 2, 'migration_type': 'resize', 'hidden': False, 'created_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'deleted_at': None, 'deleted': False, 'uuid': uuids.migration1, 'cross_cell_move': False, }, { 'id': 5678, 'source_node': 'node10', 'dest_node': 'node20', 'source_compute': 'compute10', 'dest_compute': 'compute20', 'dest_host': '5.6.7.8', 'status': 'done', 'instance_uuid': INSTANCE_UUID_2, 'old_instance_type_id': 5, 'new_instance_type_id': 6, 'migration_type': 'resize', 'hidden': False, 'created_at': datetime.datetime(2013, 10, 22, 13, 42, 2), 'updated_at': datetime.datetime(2013, 10, 22, 13, 42, 2), 'deleted_at': None, 'deleted': False, 'uuid': uuids.migration2, 'cross_cell_move': False, } ] return fake_migrations class MigrationsSamplesJsonTest(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True sample_dir = "os-migrations" def setUp(self): super(MigrationsSamplesJsonTest, self).setUp() self.stub_out('nova.compute.api.API.get_migrations', _stub_migrations) def test_get_migrations(self): response = self._do_get('os-migrations') self.assertEqual(200, response.status_code) self._verify_response('migrations-get', {}, response, 200) class MigrationsSamplesJsonTestV2_23(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True sample_dir = "os-migrations" microversion = '2.23' scenarios = [('v2_23', {'api_major_version': 'v2.1'})] fake_migrations = [ # in-progress live-migration. { 'source_node': 'node1', 'dest_node': 'node2', 'source_compute': 'compute1', 'dest_compute': 'compute2', 'dest_host': '1.2.3.4', 'status': 'running', 'instance_uuid': INSTANCE_UUID_1, 'old_instance_type_id': 1, 'new_instance_type_id': 2, 'migration_type': 'live-migration', 'hidden': False, 'created_at': datetime.datetime(2016, 0o1, 29, 13, 42, 2), 'updated_at': datetime.datetime(2016, 0o1, 29, 13, 42, 2), 'deleted_at': None, 'deleted': False, 'cross_cell_move': False, }, # non in-progress live-migration. { 'source_node': 'node1', 'dest_node': 'node2', 'source_compute': 'compute1', 'dest_compute': 'compute2', 'dest_host': '1.2.3.4', 'status': 'error', 'instance_uuid': INSTANCE_UUID_1, 'old_instance_type_id': 1, 'new_instance_type_id': 2, 'migration_type': 'live-migration', 'hidden': False, 'created_at': datetime.datetime(2016, 0o1, 29, 13, 42, 2), 'updated_at': datetime.datetime(2016, 0o1, 29, 13, 42, 2), 'deleted_at': None, 'deleted': False, 'cross_cell_move': False, }, # non in-progress resize. { 'source_node': 'node10', 'dest_node': 'node20', 'source_compute': 'compute10', 'dest_compute': 'compute20', 'dest_host': '5.6.7.8', 'status': 'error', 'instance_uuid': INSTANCE_UUID_2, 'old_instance_type_id': 5, 'new_instance_type_id': 6, 'migration_type': 'resize', 'hidden': False, 'created_at': datetime.datetime(2016, 0o1, 22, 13, 42, 2), 'updated_at': datetime.datetime(2016, 0o1, 22, 13, 42, 2), 'deleted_at': None, 'deleted': False, 'cross_cell_move': False, }, # in-progress resize. 
{ 'source_node': 'node10', 'dest_node': 'node20', 'source_compute': 'compute10', 'dest_compute': 'compute20', 'dest_host': '5.6.7.8', 'status': 'migrating', 'instance_uuid': INSTANCE_UUID_2, 'old_instance_type_id': 5, 'new_instance_type_id': 6, 'migration_type': 'resize', 'hidden': False, 'created_at': datetime.datetime(2016, 0o1, 22, 13, 42, 2), 'updated_at': datetime.datetime(2016, 0o1, 22, 13, 42, 2), 'deleted_at': None, 'deleted': False, 'cross_cell_move': False, } ] def setUp(self): super(MigrationsSamplesJsonTestV2_23, self).setUp() self.api.microversion = self.microversion fake_context = context.RequestContext('fake', 'fake') for mig in self.fake_migrations: mig_obj = objects.Migration(context=fake_context, **mig) mig_obj.create() def test_get_migrations_v2_23(self): response = self._do_get('os-migrations') self.assertEqual(200, response.status_code) self._verify_response( 'migrations-get', {"instance_1": INSTANCE_UUID_1, "instance_2": INSTANCE_UUID_2}, response, 200) class MigrationsSamplesJsonTestV2_59(MigrationsSamplesJsonTestV2_23): microversion = '2.59' scenarios = [('v2_59', {'api_major_version': 'v2.1'})] fake_migrations = [ # in-progress live-migration. { 'source_node': 'node1', 'dest_node': 'node2', 'source_compute': 'compute1', 'dest_compute': 'compute2', 'dest_host': '1.2.3.4', 'status': 'running', 'instance_uuid': INSTANCE_UUID_1, 'old_instance_type_id': 1, 'new_instance_type_id': 1, 'migration_type': 'live-migration', 'hidden': False, 'created_at': datetime.datetime(2016, 0o1, 29, 11, 42, 2), 'updated_at': datetime.datetime(2016, 0o1, 29, 11, 42, 2), 'deleted_at': None, 'deleted': False, 'uuid': '12341d4b-346a-40d0-83c6-5f4f6892b650', 'cross_cell_move': False, }, # non in-progress live-migration. { 'source_node': 'node1', 'dest_node': 'node2', 'source_compute': 'compute1', 'dest_compute': 'compute2', 'dest_host': '1.2.3.4', 'status': 'error', 'instance_uuid': INSTANCE_UUID_1, 'old_instance_type_id': 1, 'new_instance_type_id': 1, 'migration_type': 'live-migration', 'hidden': False, 'created_at': datetime.datetime(2016, 0o1, 29, 12, 42, 2), 'updated_at': datetime.datetime(2016, 0o1, 29, 12, 42, 2), 'deleted_at': None, 'deleted': False, 'uuid': '22341d4b-346a-40d0-83c6-5f4f6892b650', 'cross_cell_move': False, }, # non in-progress resize. { 'source_node': 'node10', 'dest_node': 'node20', 'source_compute': 'compute10', 'dest_compute': 'compute20', 'dest_host': '5.6.7.8', 'status': 'error', 'instance_uuid': INSTANCE_UUID_2, 'old_instance_type_id': 5, 'new_instance_type_id': 6, 'migration_type': 'resize', 'hidden': False, 'created_at': datetime.datetime(2016, 0o6, 23, 13, 42, 2), 'updated_at': datetime.datetime(2016, 0o6, 23, 13, 42, 2), 'deleted_at': None, 'deleted': False, 'uuid': '32341d4b-346a-40d0-83c6-5f4f6892b650', 'cross_cell_move': False, }, # in-progress resize. 
{ 'source_node': 'node10', 'dest_node': 'node20', 'source_compute': 'compute10', 'dest_compute': 'compute20', 'dest_host': '5.6.7.8', 'status': 'migrating', 'instance_uuid': INSTANCE_UUID_2, 'old_instance_type_id': 5, 'new_instance_type_id': 6, 'migration_type': 'resize', 'hidden': False, 'created_at': datetime.datetime(2016, 0o6, 23, 14, 42, 2), 'updated_at': datetime.datetime(2016, 0o6, 23, 14, 42, 2), 'deleted_at': None, 'deleted': False, 'uuid': '42341d4b-346a-40d0-83c6-5f4f6892b650', 'cross_cell_move': False, } ] def test_get_migrations_with_limit(self): response = self._do_get('os-migrations?limit=1') self.assertEqual(200, response.status_code) self._verify_response( 'migrations-get-with-limit', {"instance_2": INSTANCE_UUID_2}, response, 200) def test_get_migrations_with_marker(self): response = self._do_get( 'os-migrations?marker=22341d4b-346a-40d0-83c6-5f4f6892b650') self.assertEqual(200, response.status_code) self._verify_response( 'migrations-get-with-marker', {"instance_1": INSTANCE_UUID_1, "instance_2": INSTANCE_UUID_2}, response, 200) def test_get_migrations_with_changes_since(self): response = self._do_get( 'os-migrations?changes-since=2016-06-23T13:42:01.000000') self.assertEqual(200, response.status_code) self._verify_response( 'migrations-get-with-changes-since', {"instance_2": INSTANCE_UUID_2}, response, 200) class MigrationsSamplesJsonTestV2_66(MigrationsSamplesJsonTestV2_59): microversion = '2.66' scenarios = [('v2_66', {'api_major_version': 'v2.1'})] def test_get_migrations_with_changes_before(self): response = self._do_get( 'os-migrations?changes-before=2016-01-29T11:42:02.000000') self.assertEqual(200, response.status_code) self._verify_response( 'migrations-get-with-changes-before', {"instance_1": INSTANCE_UUID_1}, response, 200) class MigrationsSamplesJsonTestV2_80(MigrationsSamplesJsonTestV2_66): microversion = '2.80' scenarios = [('v2_80', {'api_major_version': 'v2.1'})] USER_ID1 = "5c48ebaa-193f-4c5d-948a-f559cc92cd5e" PROJECT_ID1 = "ef92ccff-00f3-46e4-b015-811110e36ee4" USER_ID2 = "78348f0e-97ee-4d70-ad34-189692673ea2" PROJECT_ID2 = "9842f0f7-1229-4355-afe7-15ebdbb8c3d8" fake_migrations = [ # in-progress live-migration. { 'source_node': 'node1', 'dest_node': 'node2', 'source_compute': 'compute1', 'dest_compute': 'compute2', 'dest_host': '1.2.3.4', 'status': 'running', 'instance_uuid': INSTANCE_UUID_1, 'old_instance_type_id': 1, 'new_instance_type_id': 1, 'migration_type': 'live-migration', 'hidden': False, 'created_at': datetime.datetime(2016, 0o1, 29, 11, 42, 2), 'updated_at': datetime.datetime(2016, 0o1, 29, 11, 42, 2), 'deleted_at': None, 'deleted': False, 'uuid': '12341d4b-346a-40d0-83c6-5f4f6892b650', 'cross_cell_move': False, 'user_id': USER_ID1, 'project_id': PROJECT_ID1 }, # non in-progress live-migration. { 'source_node': 'node1', 'dest_node': 'node2', 'source_compute': 'compute1', 'dest_compute': 'compute2', 'dest_host': '1.2.3.4', 'status': 'error', 'instance_uuid': INSTANCE_UUID_1, 'old_instance_type_id': 1, 'new_instance_type_id': 1, 'migration_type': 'live-migration', 'hidden': False, 'created_at': datetime.datetime(2016, 0o1, 29, 12, 42, 2), 'updated_at': datetime.datetime(2016, 0o1, 29, 12, 42, 2), 'deleted_at': None, 'deleted': False, 'uuid': '22341d4b-346a-40d0-83c6-5f4f6892b650', 'cross_cell_move': False, 'user_id': USER_ID1, 'project_id': PROJECT_ID1 }, # non in-progress resize. 
{ 'source_node': 'node10', 'dest_node': 'node20', 'source_compute': 'compute10', 'dest_compute': 'compute20', 'dest_host': '5.6.7.8', 'status': 'error', 'instance_uuid': INSTANCE_UUID_2, 'old_instance_type_id': 5, 'new_instance_type_id': 6, 'migration_type': 'resize', 'hidden': False, 'created_at': datetime.datetime(2016, 0o6, 23, 13, 42, 2), 'updated_at': datetime.datetime(2016, 0o6, 23, 13, 42, 2), 'deleted_at': None, 'deleted': False, 'uuid': '32341d4b-346a-40d0-83c6-5f4f6892b650', 'cross_cell_move': False, 'user_id': USER_ID2, 'project_id': PROJECT_ID2 }, # in-progress resize. { 'source_node': 'node10', 'dest_node': 'node20', 'source_compute': 'compute10', 'dest_compute': 'compute20', 'dest_host': '5.6.7.8', 'status': 'migrating', 'instance_uuid': INSTANCE_UUID_2, 'old_instance_type_id': 5, 'new_instance_type_id': 6, 'migration_type': 'resize', 'hidden': False, 'created_at': datetime.datetime(2016, 0o6, 23, 14, 42, 2), 'updated_at': datetime.datetime(2016, 0o6, 23, 14, 42, 2), 'deleted_at': None, 'deleted': False, 'uuid': '42341d4b-346a-40d0-83c6-5f4f6892b650', 'cross_cell_move': False, 'user_id': USER_ID2, 'project_id': PROJECT_ID2 } ] def test_get_migrations_with_user_id(self): response = self._do_get('os-migrations?user_id=%s' % self.USER_ID1) self.assertEqual(200, response.status_code) self._verify_response('migrations-get-with-user-or-project-id', {"instance_1": INSTANCE_UUID_1}, response, 200) def test_get_migrations_with_project_id(self): response = self._do_get('os-migrations?project_id=%s' % self.PROJECT_ID1) self.assertEqual(200, response.status_code) self._verify_response('migrations-get-with-user-or-project-id', {"instance_1": INSTANCE_UUID_1}, response, 200) def test_get_migrations_with_user_and_project_id(self): response = self._do_get('os-migrations?user_id=%s&project_id=%s' % (self.USER_ID1, self.PROJECT_ID1)) self.assertEqual(200, response.status_code) self._verify_response( 'migrations-get-with-user-or-project-id', {"instance_1": INSTANCE_UUID_1}, response, 200) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_multinic.py0000664000175000017500000000423300000000000025630 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova.tests import fixtures from nova.tests.functional.api_sample_tests import api_sample_base class MultinicSampleJsonTest(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True sample_dir = "os-multinic" def setUp(self): super(MultinicSampleJsonTest, self).setUp() server = self._boot_a_server( extra_params={'networks': [ {'port': fixtures.NeutronFixture.port_1['id']}]}) self.uuid = server['id'] def _boot_a_server(self, expected_status='ACTIVE', extra_params=None): server = self._build_server() if extra_params: server.update(extra_params) created_server = self.api.post_server({'server': server}) # Wait for it to finish being created found_server = self._wait_for_state_change(created_server, expected_status) return found_server def _add_fixed_ip(self): subs = {"networkId": fixtures.NeutronFixture.network_1['id']} response = self._do_post('servers/%s/action' % (self.uuid), 'multinic-add-fixed-ip-req', subs) self.assertEqual(202, response.status_code) def test_add_fixed_ip(self): self._add_fixed_ip() def test_remove_fixed_ip(self): self._add_fixed_ip() subs = {"ip": "10.0.0.4"} response = self._do_post('servers/%s/action' % (self.uuid), 'multinic-remove-fixed-ip-req', subs) self.assertEqual(202, response.status_code) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_multiple_create.py0000664000175000017500000000330000000000000027154 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import test_servers from nova.tests.unit.image import fake class MultipleCreateJsonTest(test_servers.ServersSampleBase): sample_dir = "os-multiple-create" def test_multiple_create(self): subs = { 'image_id': fake.get_valid_image_id(), 'compute_endpoint': self._get_compute_endpoint(), 'min_count': "2", 'max_count': "3" } response = self._do_post('servers', 'multiple-create-post-req', subs) self._verify_response('multiple-create-post-resp', subs, response, 202) def test_multiple_create_without_reservation_id(self): subs = { 'image_id': fake.get_valid_image_id(), 'compute_endpoint': self._get_compute_endpoint(), 'min_count': "2", 'max_count': "3" } response = self._do_post('servers', 'multiple-create-no-resv-post-req', subs) self._verify_response('multiple-create-no-resv-post-resp', subs, response, 202) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_networks.py0000664000175000017500000000375400000000000025667 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova import exception from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api_sample_tests import api_sample_base class NetworksJsonTests(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True sample_dir = 'os-networks' def test_network_list(self): response = self._do_get('os-networks') self._verify_response('networks-list-resp', {}, response, 200) def test_network_show(self): uuid = nova_fixtures.NeutronFixture.network_1['id'] response = self._do_get('os-networks/%s' % uuid) self._verify_response('network-show-resp', {}, response, 200) @mock.patch('nova.network.neutron.API.get', side_effect=exception.Unauthorized) def test_network_show_token_expired(self, mock_get): uuid = nova_fixtures.NeutronFixture.network_1['id'] response = self._do_get('os-networks/%s' % uuid) self.assertEqual(401, response.status_code) def test_network_create(self): self.api.api_post('os-networks', {}, check_response_status=[410]) def test_network_add(self): self.api.api_post('os-networks/add', {}, check_response_status=[410]) def test_network_delete(self): self.api.api_delete('os-networks/always-delete', check_response_status=[410]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_networks_associate.py0000664000175000017500000000305500000000000027714 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import api_sample_base class NetworksAssociateJsonTests(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True def test_disassociate(self): self.api.api_post('os-networks/1/action', {'disassociate': None}, check_response_status=[410]) def test_disassociate_host(self): self.api.api_post('os-networks/1/action', {'disassociate_host': None}, check_response_status=[410]) def test_disassociate_project(self): self.api.api_post('os-networks/1/action', {'disassociate_project': None}, check_response_status=[410]) def test_associate_host(self): self.api.api_post('os-networks/1/action', {'associate_host': 'foo'}, check_response_status=[410]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_pause_server.py0000664000175000017500000000310700000000000026506 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. 
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from nova.tests.functional.api_sample_tests import test_servers


class PauseServerSamplesJsonTest(test_servers.ServersSampleBase):
    sample_dir = "os-pause-server"

    def setUp(self):
        """setUp Method for PauseServer api samples extension

        This method creates the server that will be used in each test
        """
        super(PauseServerSamplesJsonTest, self).setUp()
        self.uuid = self._post_server()

    def test_post_pause(self):
        # Get api samples to pause server request.
        response = self._do_post('servers/%s/action' % self.uuid,
                                 'pause-server', {})
        self.assertEqual(202, response.status_code)

    def test_post_unpause(self):
        # Get api samples to unpause server request.
        self.test_post_pause()
        response = self._do_post('servers/%s/action' % self.uuid,
                                 'unpause-server', {})
        self.assertEqual(202, response.status_code)
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_preserve_ephemeral_rebuild.py0000664000175000017500000000516200000000000031371 0ustar00zuulzuul00000000000000
# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.compute import api as compute_api from nova.tests.functional.api_sample_tests import test_servers from nova.tests.unit.image import fake class PreserveEphemeralOnRebuildJsonTest(test_servers.ServersSampleBase): sample_dir = 'os-preserve-ephemeral-rebuild' def _test_server_rebuild_preserve_ephemeral(self, value, resp_tpl=None): uuid = self._post_server() image = fake.get_valid_image_id() subs = {'host': self._get_host(), 'uuid': image, 'name': 'foobar', 'pass': 'seekr3t', 'hostid': '[a-f0-9]+', 'preserve_ephemeral': str(value).lower(), 'action': 'rebuild', 'glance_host': self._get_glance_host(), 'access_ip_v4': '1.2.3.4', 'access_ip_v6': '80fe::' } old_rebuild = compute_api.API.rebuild def fake_rebuild(self_, context, instance, image_href, admin_password, files_to_inject=None, **kwargs): self.assertEqual(kwargs['preserve_ephemeral'], value) if resp_tpl: return old_rebuild(self_, context, instance, image_href, admin_password, files_to_inject=None, **kwargs) self.stub_out('nova.compute.api.API.rebuild', fake_rebuild) response = self._do_post('servers/%s/action' % uuid, 'server-action-rebuild-preserve-ephemeral', subs) if resp_tpl: del subs['uuid'] self._verify_response(resp_tpl, subs, response, 202) else: self.assertEqual(202, response.status_code) def test_server_rebuild_preserve_ephemeral_true(self): self._test_server_rebuild_preserve_ephemeral(True) def test_server_rebuild_preserve_ephemeral_false(self): self._test_server_rebuild_preserve_ephemeral(False, resp_tpl='server-action-rebuild-preserve-ephemeral-resp') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_quota_classes.py0000664000175000017500000000350300000000000026651 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import api_sample_base class QuotaClassesSampleJsonTests(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True sample_dir = "os-quota-class-sets" set_id = 'test_class' def test_show_quota_classes(self): # Get api sample to show quota classes. response = self._do_get('os-quota-class-sets/%s' % self.set_id) subs = {'set_id': self.set_id} self._verify_response('quota-classes-show-get-resp', subs, response, 200) def test_update_quota_classes(self): # Get api sample to update quota classes. 
response = self._do_put('os-quota-class-sets/%s' % self.set_id, 'quota-classes-update-post-req', {}) self._verify_response('quota-classes-update-post-resp', {}, response, 200) class QuotaClassesV250SampleJsonTests(QuotaClassesSampleJsonTests): microversion = '2.50' scenarios = [('v2_50', {'api_major_version': 'v2.1'})] class QuotaClassesV257SampleJsonTests(QuotaClassesSampleJsonTests): microversion = '2.57' scenarios = [('v2_57', {'api_major_version': 'v2.1'})] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_quota_sets.py0000664000175000017500000001022000000000000026164 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api_sample_tests import api_sample_base class QuotaSetsSampleJsonTests(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True sample_dir = "os-quota-sets" def test_show_quotas(self): # Get api sample to show quotas. response = self._do_get('os-quota-sets/fake_tenant') self._verify_response('quotas-show-get-resp', {}, response, 200) def test_show_quotas_defaults(self): # Get api sample to show quotas defaults. response = self._do_get('os-quota-sets/fake_tenant/defaults') self._verify_response('quotas-show-defaults-get-resp', {}, response, 200) def test_show_quotas_detail(self): # Get api sample to show quotas detail. response = self._do_get('os-quota-sets/fake_tenant/detail') self._verify_response('quotas-show-detail-get-resp', {}, response, 200) def test_update_quotas(self): # Get api sample to update quotas. response = self._do_put('os-quota-sets/fake_tenant', 'quotas-update-post-req', {}) self._verify_response('quotas-update-post-resp', {}, response, 200) def test_delete_quotas(self): # Get api sample to delete quota. response = self._do_delete('os-quota-sets/fake_tenant') self.assertEqual(202, response.status_code) self.assertEqual('', response.text) def test_update_quotas_force(self): # Get api sample to update quotas. response = self._do_put('os-quota-sets/fake_tenant', 'quotas-update-force-post-req', {}) return self._verify_response('quotas-update-force-post-resp', {}, response, 200) def test_show_quotas_for_user(self): # Get api sample to show quotas for user. response = self._do_get('os-quota-sets/fake_tenant?user_id=1') self._verify_response('user-quotas-show-get-resp', {}, response, 200) def test_delete_quotas_for_user(self): response = self._do_delete('os-quota-sets/fake_tenant?user_id=1') self.assertEqual(202, response.status_code) self.assertEqual('', response.text) def test_update_quotas_for_user(self): # Get api sample to update quotas for user. 
response = self._do_put('os-quota-sets/fake_tenant?user_id=1', 'user-quotas-update-post-req', {}) return self._verify_response('user-quotas-update-post-resp', {}, response, 200) class QuotaSetsSampleJsonTests2_36(QuotaSetsSampleJsonTests): microversion = '2.36' scenarios = [('v2_36', {'api_major_version': 'v2.1'})] class QuotaSetsSampleJsonTestsV2_57(QuotaSetsSampleJsonTests): """Tests that injected_file* quotas are not in request or response values. starting with microversion 2.57. """ microversion = '2.57' scenarios = [('v2_57', {'api_major_version': 'v2.1'})] class NoopQuotaSetsSampleJsonTests(QuotaSetsSampleJsonTests): sample_dir = "os-quota-sets-noop" def setUp(self): super(NoopQuotaSetsSampleJsonTests, self).setUp() # NOTE(melwitt): We can't simply set self.flags to the NoopQuotaDriver # here to use the driver because the QuotaEngine is global. See the # fixture for details. self.useFixture(nova_fixtures.NoopQuotaDriverFixture()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_remote_consoles.py0000664000175000017500000001014200000000000027200 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova.tests.functional.api_sample_tests import test_servers HTTP_RE = r'(https?://)([\w\d:#@%/;$()~_?\+-=\\.&](#!)?)*' class ConsolesSampleJsonTests(test_servers.ServersSampleBase): microversion = None sample_dir = "os-remote-consoles" def setUp(self): super(ConsolesSampleJsonTests, self).setUp() self.api.microversion = self.microversion self.flags(enabled=True, group='vnc') self.flags(enabled=True, group='spice') self.flags(enabled=True, group='rdp') self.flags(enabled=True, group='serial_console') def test_get_vnc_console(self): uuid = self._post_server() response = self._do_post('servers/%s/action' % uuid, 'get-vnc-console-post-req', {'action': 'os-getVNCConsole'}) self._verify_response('get-vnc-console-post-resp', {'url': HTTP_RE}, response, 200) def test_get_spice_console(self): uuid = self._post_server() response = self._do_post('servers/%s/action' % uuid, 'get-spice-console-post-req', {'action': 'os-getSPICEConsole'}) self._verify_response('get-spice-console-post-resp', {'url': HTTP_RE}, response, 200) def test_get_rdp_console(self): uuid = self._post_server() response = self._do_post('servers/%s/action' % uuid, 'get-rdp-console-post-req', {'action': 'os-getRDPConsole'}) self._verify_response('get-rdp-console-post-resp', {'url': HTTP_RE}, response, 200) def test_get_serial_console(self): uuid = self._post_server() response = self._do_post('servers/%s/action' % uuid, 'get-serial-console-post-req', {'action': 'os-getSerialConsole'}) self._verify_response('get-serial-console-post-resp', {'url': HTTP_RE}, response, 200) class ConsolesV26SampleJsonTests(test_servers.ServersSampleBase): microversion = '2.6' sample_dir = "os-remote-consoles" # NOTE(gmann): microversion tests do not need to run for v2 API # so defining scenarios only for v2.6 which will run the original tests # by appending '(v2_6)' in test_id. scenarios = [('v2_6', {'api_major_version': 'v2.1'})] def test_create_console(self): uuid = self._post_server() body = {'protocol': 'vnc', 'type': 'novnc'} response = self._do_post('servers/%s/remote-consoles' % uuid, 'create-vnc-console-req', body) self._verify_response('create-vnc-console-resp', {'url': HTTP_RE}, response, 200) class ConsolesV28SampleJsonTests(test_servers.ServersSampleBase): sample_dir = "os-remote-consoles" microversion = '2.8' scenarios = [('v2_8', {'api_major_version': 'v2.1'})] def setUp(self): super(ConsolesV28SampleJsonTests, self).setUp() self.flags(enabled=True, group='mks') def test_create_mks_console(self): uuid = self._post_server() body = {'protocol': 'mks', 'type': 'webmks'} response = self._do_post('servers/%s/remote-consoles' % uuid, 'create-mks-console-req', body) self._verify_response('create-mks-console-resp', {'url': HTTP_RE}, response, 200) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_rescue.py0000664000175000017500000000717500000000000025302 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import test_servers class RescueJsonTest(test_servers.ServersSampleBase): sample_dir = "os-rescue" def _rescue(self, uuid): req_subs = { 'password': 'MySecretPass' } response = self._do_post('servers/%s/action' % uuid, 'server-rescue-req', req_subs) self._verify_response('server-rescue', req_subs, response, 200) def _unrescue(self, uuid): response = self._do_post('servers/%s/action' % uuid, 'server-unrescue-req', {}) self.assertEqual(202, response.status_code) def test_server_rescue(self): uuid = self._post_server() self._rescue(uuid) # Do a server get to make sure that the 'RESCUE' state is set response = self._do_get('servers/%s' % uuid) subs = {} subs['hostid'] = '[a-f0-9]+' subs['id'] = uuid subs['status'] = 'RESCUE' subs['access_ip_v4'] = '1.2.3.4' subs['access_ip_v6'] = '80fe::' subs['instance_name'] = r'instance-\d{8}' subs['hypervisor_hostname'] = r'[\w\.\-]+' subs['cdrive'] = '.*' self._verify_response('server-get-resp-rescue', subs, response, 200) def test_server_rescue_with_image_ref_specified(self): uuid = self._post_server() req_subs = { 'password': 'MySecretPass', 'image_ref': '2341-Abc' } response = self._do_post('servers/%s/action' % uuid, 'server-rescue-req-with-image-ref', req_subs) self._verify_response('server-rescue', req_subs, response, 200) # Do a server get to make sure that the 'RESCUE' state is set response = self._do_get('servers/%s' % uuid) subs = {} subs['hostid'] = '[a-f0-9]+' subs['id'] = uuid subs['status'] = 'RESCUE' subs['access_ip_v4'] = '1.2.3.4' subs['access_ip_v6'] = '80fe::' subs['instance_name'] = r'instance-\d{8}' subs['hypervisor_hostname'] = r'[\w\.\-]+' subs['cdrive'] = '.*' self._verify_response('server-get-resp-rescue', subs, response, 200) def test_server_unrescue(self): uuid = self._post_server() self._rescue(uuid) self._unrescue(uuid) # Do a server get to make sure that the 'ACTIVE' state is back response = self._do_get('servers/%s' % uuid) subs = {} subs['hostid'] = '[a-f0-9]+' subs['id'] = uuid subs['status'] = 'ACTIVE' subs['access_ip_v4'] = '1.2.3.4' subs['access_ip_v6'] = '80fe::' subs['instance_name'] = r'instance-\d{8}' subs['hypervisor_hostname'] = r'[\w\.\-]+' subs['cdrive'] = '.*' self._verify_response('server-get-resp-unrescue', subs, response, 200) class Rescuev287JsonTest(RescueJsonTest): """2.87 adds support for rescuing boot from volume instances""" microversion = '2.87' scenarios = [('v2_87', {'api_major_version': 'v2.1'})] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_security_group_default_rules.py0000664000175000017500000000240200000000000032001 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova.tests.functional.api_sample_tests import api_sample_base class SecurityGroupDefaultRulesSampleJsonTest( api_sample_base.ApiSampleTestBaseV21): def test_security_group_default_rules_create(self): self.api.api_post('os-security-group-default-rules', {}, check_response_status=[410]) def test_security_group_default_rules_list(self): self.api.api_get('os-security-group-default-rules', check_response_status=[410]) def test_security_group_default_rules_show(self): self.api.api_get('os-security-group-default-rules/1', check_response_status=[410]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_security_groups.py0000664000175000017500000001425400000000000027256 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import test_servers import nova.tests.functional.api_samples_test_base as astb def fake_get(*args, **kwargs): nova_group = {} nova_group['id'] = 1 nova_group['description'] = 'default' nova_group['name'] = 'default' nova_group['project_id'] = astb.PROJECT_ID nova_group['rules'] = [] return nova_group def fake_get_instances_security_groups_bindings(context, servers, detailed=False): result = {} for s in servers: result[s.get('id')] = [{'name': 'test'}] return result def fake_add_to_instance(context, instance, security_group_name): pass def fake_remove_from_instance(context, instance, security_group_name): pass def fake_list(context, names=None, ids=None, project=None, search_opts=None): return [fake_get()] def fake_get_instance_security_groups(context, instance_uuid, detailed=False): return [fake_get()] def fake_create_security_group(context, name, description): return fake_get() def fake_create_security_group_rule(context, security_group, new_rule): return { 'from_port': 22, 'to_port': 22, 'cidr': '10.0.0.0/24', 'id': '00000000-0000-0000-0000-000000000000', 'parent_group_id': '11111111-1111-1111-1111-111111111111', 'protocol': 'tcp', 'group_id': None } def fake_remove_rules(context, security_group, rule_ids): pass def fake_get_rule(context, id): return { 'id': id, 'parent_group_id': '11111111-1111-1111-1111-111111111111' } class SecurityGroupsJsonTest(test_servers.ServersSampleBase): sample_dir = 'os-security-groups' def setUp(self): super(SecurityGroupsJsonTest, self).setUp() path = 'nova.network.security_group_api.' 
self.stub_out(path + 'get', fake_get) self.stub_out(path + 'get_instances_security_groups_bindings', fake_get_instances_security_groups_bindings) self.stub_out(path + 'add_to_instance', fake_add_to_instance) self.stub_out(path + 'remove_from_instance', fake_remove_from_instance) self.stub_out(path + 'list', fake_list) self.stub_out(path + 'get_instance_security_groups', fake_get_instance_security_groups) self.stub_out(path + 'create_security_group', fake_create_security_group) self.stub_out(path + 'create_security_group_rule', fake_create_security_group_rule) self.stub_out(path + 'remove_rules', fake_remove_rules) self.stub_out(path + 'get_rule', fake_get_rule) def _get_create_subs(self): return { 'group_name': 'default', "description": "default", } def _create_security_group(self): subs = self._get_create_subs() return self._do_post('os-security-groups', 'security-group-post-req', subs) def _add_group(self, uuid): subs = { 'group_name': 'test' } return self._do_post('servers/%s/action' % uuid, 'security-group-add-post-req', subs) def test_security_group_create(self): response = self._create_security_group() subs = self._get_create_subs() self._verify_response('security-groups-create-resp', subs, response, 200) def test_security_groups_list(self): # Get api sample of security groups get list request. response = self._do_get('os-security-groups') self._verify_response('security-groups-list-get-resp', {}, response, 200) def test_security_groups_get(self): # Get api sample of security groups get request. security_group_id = '11111111-1111-1111-1111-111111111111' response = self._do_get('os-security-groups/%s' % security_group_id) self._verify_response('security-groups-get-resp', {}, response, 200) def test_security_groups_list_server(self): # Get api sample of security groups for a specific server. uuid = self._post_server() response = self._do_get('servers/%s/os-security-groups' % uuid) self._verify_response('server-security-groups-list-resp', {}, response, 200) def test_security_groups_add(self): self._create_security_group() uuid = self._post_server() response = self._add_group(uuid) self.assertEqual(202, response.status_code) self.assertEqual('', response.text) def test_security_groups_remove(self): self._create_security_group() uuid = self._post_server() self._add_group(uuid) subs = { 'group_name': 'test' } response = self._do_post('servers/%s/action' % uuid, 'security-group-remove-post-req', subs) self.assertEqual(202, response.status_code) self.assertEqual('', response.text) def test_security_group_rules_create(self): response = self._do_post('os-security-group-rules', 'security-group-rules-post-req', {}) self._verify_response('security-group-rules-post-resp', {}, response, 200) def test_security_group_rules_remove(self): response = self._do_delete( 'os-security-group-rules/00000000-0000-0000-0000-000000000000') self.assertEqual(202, response.status_code) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_server_diagnostics.py0000664000175000017500000000231600000000000027701 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.

from nova.tests.functional.api_sample_tests import test_servers


class ServerDiagnosticsSamplesJsonTest(test_servers.ServersSampleBase):
    sample_dir = "os-server-diagnostics"

    def test_server_diagnostics_get(self):
        uuid = self._post_server()
        response = self._do_get('servers/%s/diagnostics' % uuid)
        self._verify_response('server-diagnostics-get-resp', {},
                              response, 200)


class ServerDiagnosticsSamplesJsonTestV248(ServerDiagnosticsSamplesJsonTest):
    microversion = '2.48'
    scenarios = [('v2_48', {'api_major_version': 'v2.1'})]
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_server_external_events.py0000664000175000017500000000274500000000000030602 0ustar00zuulzuul00000000000000
# Copyright 2014 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from nova.tests.functional.api_sample_tests import test_servers


class ServerExternalEventsSamplesJsonTest(test_servers.ServersSampleBase):
    ADMIN_API = True
    sample_dir = "os-server-external-events"

    def setUp(self):
        """setUp Method for AdminActions api samples extension

        This method creates the server that will be used in each test
        """
        super(ServerExternalEventsSamplesJsonTest, self).setUp()
        self.uuid = self._post_server()

    def test_create_event(self):
        subs = {
            'uuid': self.uuid,
            'name': 'network-changed',
            'status': 'completed',
            'tag': 'foo',
        }
        response = self._do_post('os-server-external-events',
                                 'event-create-req', subs)
        self._verify_response('event-create-resp', subs, response, 200)
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_server_groups.py0000664000175000017500000000546400000000000026710 0ustar00zuulzuul00000000000000
# Copyright 2012 Nebula, Inc.
# Copyright 2014 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.tests.functional.api_sample_tests import api_sample_base class ServerGroupsSampleJsonTest(api_sample_base.ApiSampleTestBaseV21): sample_dir = "os-server-groups" def _get_create_subs(self): return {'name': 'test'} def _post_server_group(self): """Verify the response status and returns the UUID of the newly created server group. """ subs = self._get_create_subs() response = self._do_post('os-server-groups', 'server-groups-post-req', subs) subs = {} subs['name'] = 'test' return self._verify_response('server-groups-post-resp', subs, response, 200) def test_server_groups_post(self): return self._post_server_group() def test_server_groups_list(self): subs = self._get_create_subs() uuid = self._post_server_group() response = self._do_get('os-server-groups') subs['id'] = uuid self._verify_response('server-groups-list-resp', subs, response, 200) def test_server_groups_get(self): # Get api sample of server groups get request. subs = {'name': 'test'} uuid = self._post_server_group() subs['id'] = uuid response = self._do_get('os-server-groups/%s' % uuid) self._verify_response('server-groups-get-resp', subs, response, 200) def test_server_groups_delete(self): uuid = self._post_server_group() response = self._do_delete('os-server-groups/%s' % uuid) self.assertEqual(204, response.status_code) class ServerGroupsV213SampleJsonTest(ServerGroupsSampleJsonTest): scenarios = [ ("v2_13", {'api_major_version': 'v2.1', 'microversion': '2.13'}) ] def setUp(self): super(ServerGroupsV213SampleJsonTest, self).setUp() self.api.microversion = self.microversion class ServerGroupsV264SampleJsonTest(ServerGroupsV213SampleJsonTest): scenarios = [ ("v2_64", {'api_major_version': 'v2.1', 'microversion': '2.64'}) ] def setUp(self): super(ServerGroupsV264SampleJsonTest, self).setUp() self.api.microversion = self.microversion ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_server_metadata.py0000664000175000017500000000631700000000000027157 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import test_servers class ServersMetadataJsonTest(test_servers.ServersSampleBase): sample_dir = 'server-metadata' def _create_and_set(self, subs): uuid = self._post_server() response = self._do_put('/servers/%s/metadata' % uuid, 'server-metadata-all-req', subs) self._verify_response('server-metadata-all-resp', subs, response, 200) return uuid def generalize_subs(self, subs, vanilla_regexes): subs['value'] = '(Foo|Bar) Value' return subs def test_metadata_put_all(self): # Test setting all metadata for a server. subs = {'value': 'Foo Value'} self._create_and_set(subs) def test_metadata_post_all(self): # Test updating all metadata for a server. 
subs = {'value': 'Foo Value'} uuid = self._create_and_set(subs) subs['value'] = 'Bar Value' response = self._do_post('servers/%s/metadata' % uuid, 'server-metadata-all-req', subs) self._verify_response('server-metadata-all-resp', subs, response, 200) def test_metadata_get_all(self): # Test getting all metadata for a server. subs = {'value': 'Foo Value'} uuid = self._create_and_set(subs) response = self._do_get('servers/%s/metadata' % uuid) self._verify_response('server-metadata-all-resp', subs, response, 200) def test_metadata_put(self): # Test putting an individual metadata item for a server. subs = {'value': 'Foo Value'} uuid = self._create_and_set(subs) subs['value'] = 'Bar Value' response = self._do_put('servers/%s/metadata/foo' % uuid, 'server-metadata-req', subs) self._verify_response('server-metadata-resp', subs, response, 200) def test_metadata_get(self): # Test getting an individual metadata item for a server. subs = {'value': 'Foo Value'} uuid = self._create_and_set(subs) response = self._do_get('servers/%s/metadata/foo' % uuid) self._verify_response('server-metadata-resp', subs, response, 200) def test_metadata_delete(self): # Test deleting an individual metadata item for a server. subs = {'value': 'Foo Value'} uuid = self._create_and_set(subs) response = self._do_delete('servers/%s/metadata/foo' % uuid) self.assertEqual(204, response.status_code) self.assertEqual('', response.text) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_server_migrations.py0000664000175000017500000002737500000000000027562 0ustar00zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import futurist import mock from nova.conductor import manager as conductor_manager from nova import context from nova.db import api as db from nova import objects from nova.tests.functional.api_sample_tests import test_servers from nova.tests.unit import fake_instance class ServerMigrationsSampleJsonTest(test_servers.ServersSampleBase): sample_dir = 'server-migrations' scenarios = [('v2_22', {'api_major_version': 'v2.1'})] microversion = '2.22' def setUp(self): """setUp method for server usage.""" super(ServerMigrationsSampleJsonTest, self).setUp() self.uuid = self._post_server() @mock.patch.object(conductor_manager.ComputeTaskManager, '_live_migrate') @mock.patch.object(db, 'service_get_by_compute_host') @mock.patch.object(objects.Migration, 'get_by_id_and_instance') @mock.patch('nova.compute.manager.ComputeManager.' 
'live_migration_force_complete') def test_live_migrate_force_complete(self, live_migration_pause_instance, get_by_id_and_instance, service_get_by_compute_host, _live_migrate): migration = objects.Migration() migration.id = 1 migration.status = 'running' migration.source_compute = self.compute.host get_by_id_and_instance.return_value = migration self._do_post('servers/%s/action' % self.uuid, 'live-migrate-server', {'hostname': self.compute.host}) response = self._do_post('servers/%s/migrations/%s/action' % (self.uuid, '3'), 'force_complete', {}) self.assertEqual(202, response.status_code) def test_get_migration(self): response = self._do_get('servers/fake_id/migrations/1234') self.assertEqual(404, response.status_code) def test_list_migrations(self): response = self._do_get('servers/fake_id/migrations') self.assertEqual(404, response.status_code) class ServerMigrationsSamplesJsonTestV2_23(test_servers.ServersSampleBase): ADMIN_API = True sample_dir = "server-migrations" microversion = '2.23' scenarios = [('v2_23', {'api_major_version': 'v2.1'})] UUID_1 = '4cfba335-03d8-49b2-8c52-e69043d1e8fe' UUID_2 = '058fc419-a8a8-4e08-b62c-a9841ef9cd3f' fake_migrations = [ { 'source_node': 'node1', 'dest_node': 'node2', 'source_compute': 'compute1', 'dest_compute': 'compute2', 'dest_host': '1.2.3.4', 'status': 'running', 'instance_uuid': UUID_1, 'migration_type': 'live-migration', 'hidden': False, 'memory_total': 123456, 'memory_processed': 12345, 'memory_remaining': 111111, 'disk_total': 234567, 'disk_processed': 23456, 'disk_remaining': 211111, 'created_at': datetime.datetime(2016, 0o1, 29, 13, 42, 2), 'updated_at': datetime.datetime(2016, 0o1, 29, 13, 42, 2), 'deleted_at': None, 'deleted': False }, { 'source_node': 'node10', 'dest_node': 'node20', 'source_compute': 'compute10', 'dest_compute': 'compute20', 'dest_host': '5.6.7.8', 'status': 'migrating', 'instance_uuid': UUID_2, 'migration_type': 'resize', 'hidden': False, 'memory_total': 456789, 'memory_processed': 56789, 'memory_remaining': 400000, 'disk_total': 96789, 'disk_processed': 6789, 'disk_remaining': 90000, 'created_at': datetime.datetime(2016, 0o1, 22, 13, 42, 2), 'updated_at': datetime.datetime(2016, 0o1, 22, 13, 42, 2), 'deleted_at': None, 'deleted': False } ] def setUp(self): super(ServerMigrationsSamplesJsonTestV2_23, self).setUp() fake_context = context.RequestContext('fake', 'fake') self.mig1 = objects.Migration( context=fake_context, **self.fake_migrations[0]) self.mig1.create() self.mig2 = objects.Migration( context=fake_context, **self.fake_migrations[1]) self.mig2.create() fake_ins = fake_instance.fake_db_instance(uuid=self.UUID_1) fake_ins.pop("pci_devices") fake_ins.pop("security_groups") fake_ins.pop("services") fake_ins.pop("tags") fake_ins.pop("info_cache") fake_ins.pop("id") self.instance = objects.Instance( context=fake_context, **fake_ins) self.instance.create() def test_get_migration(self): response = self._do_get('servers/%s/migrations/%s' % (self.fake_migrations[0]["instance_uuid"], self.mig1.id)) self.assertEqual(200, response.status_code) self._verify_response('migrations-get', {"server_uuid": self.UUID_1}, response, 200) def test_list_migrations(self): response = self._do_get('servers/%s/migrations' % self.fake_migrations[0]["instance_uuid"]) self.assertEqual(200, response.status_code) self._verify_response('migrations-index', {"server_uuid_1": self.UUID_1}, response, 200) class ServerMigrationsSampleJsonTestV2_24(test_servers.ServersSampleBase): ADMIN_API = True microversion = '2.24' sample_dir = "server-migrations" 
scenarios = [('v2_24', {'api_major_version': 'v2.1'})] def setUp(self): """setUp method for server usage.""" super(ServerMigrationsSampleJsonTestV2_24, self).setUp() self.uuid = self._post_server() self.context = context.RequestContext('fake', 'fake') fake_migration = { 'source_node': self.compute.host, 'dest_node': 'node10', 'source_compute': 'compute1', 'dest_compute': 'compute12', 'migration_type': 'live-migration', 'instance_uuid': self.uuid, 'status': 'running'} self.migration = objects.Migration(context=self.context, **fake_migration) self.migration.create() @mock.patch.object(conductor_manager.ComputeTaskManager, '_live_migrate') def test_live_migrate_abort(self, _live_migrate): self._do_post('servers/%s/action' % self.uuid, 'live-migrate-server', {'hostname': self.compute.host}) uri = 'servers/%s/migrations/%s' % (self.uuid, self.migration.id) response = self._do_delete(uri) self.assertEqual(202, response.status_code) @mock.patch.object(conductor_manager.ComputeTaskManager, '_live_migrate') def test_live_migrate_abort_migration_not_found(self, _live_migrate): self._do_post('servers/%s/action' % self.uuid, 'live-migrate-server', {'hostname': self.compute.host}) uri = 'servers/%s/migrations/%s' % (self.uuid, '45') response = self._do_delete(uri) self.assertEqual(404, response.status_code) @mock.patch.object(conductor_manager.ComputeTaskManager, '_live_migrate') def test_live_migrate_abort_migration_not_running(self, _live_migrate): self.migration.status = 'completed' self.migration.save() self._do_post('servers/%s/action' % self.uuid, 'live-migrate-server', {'hostname': self.compute.host}) uri = 'servers/%s/migrations/%s' % (self.uuid, self.migration.id) response = self._do_delete(uri) self.assertEqual(400, response.status_code) class ServerMigrationsSamplesJsonTestV2_59( ServerMigrationsSamplesJsonTestV2_23 ): ADMIN_API = True microversion = '2.59' scenarios = [('v2_59', {'api_major_version': 'v2.1'})] def setUp(self): # Add UUIDs to the fake migrations used in the tests. 
self.fake_migrations[0][ 'uuid'] = '12341d4b-346a-40d0-83c6-5f4f6892b650' self.fake_migrations[1][ 'uuid'] = '22341d4b-346a-40d0-83c6-5f4f6892b650' super(ServerMigrationsSamplesJsonTestV2_59, self).setUp() class ServerMigrationsSampleJsonTestV2_65(ServerMigrationsSampleJsonTestV2_24): ADMIN_API = True microversion = '2.65' scenarios = [('v2_65', {'api_major_version': 'v2.1'})] @mock.patch.object(conductor_manager.ComputeTaskManager, '_live_migrate') def test_live_migrate_abort_migration_queued(self, _live_migrate): self.migration.status = 'queued' self.migration.save() self._do_post('servers/%s/action' % self.uuid, 'live-migrate-server', {'hostname': self.compute.host}) self.compute._waiting_live_migrations[self.uuid] = ( self.migration, futurist.Future()) uri = 'servers/%s/migrations/%s' % (self.uuid, self.migration.id) response = self._do_delete(uri) self.assertEqual(202, response.status_code) class ServerMigrationsSampleJsonTestV2_80( ServerMigrationsSampleJsonTestV2_65): microversion = '2.80' scenarios = [('v2_80', {'api_major_version': 'v2.1'})] UUID_1 = '4cfba335-03d8-49b2-8c52-e69043d1e8fe' UUID_2 = '058fc419-a8a8-4e08-b62c-a9841ef9cd3f' USER_ID = '8dbaa0f0-ab95-4ffe-8cb4-9c89d2ac9d24' PROJECT_ID = '5f705771-3aa9-4f4c-8660-0d9522ffdbea' fake_migrations = [ { 'source_node': 'node1', 'dest_node': 'node2', 'source_compute': 'compute1', 'dest_compute': 'compute2', 'dest_host': '1.2.3.4', 'status': 'running', 'instance_uuid': UUID_1, 'migration_type': 'live-migration', 'hidden': False, 'memory_total': 123456, 'memory_processed': 12345, 'memory_remaining': 111111, 'disk_total': 234567, 'disk_processed': 23456, 'disk_remaining': 211111, 'created_at': datetime.datetime(2016, 0o1, 29, 13, 42, 2), 'updated_at': datetime.datetime(2016, 0o1, 29, 13, 42, 2), 'deleted_at': None, 'deleted': False, 'user_id': USER_ID, 'project_id': PROJECT_ID }, { 'source_node': 'node10', 'dest_node': 'node20', 'source_compute': 'compute10', 'dest_compute': 'compute20', 'dest_host': '5.6.7.8', 'status': 'migrating', 'instance_uuid': UUID_2, 'migration_type': 'resize', 'hidden': False, 'memory_total': 456789, 'memory_processed': 56789, 'memory_remaining': 400000, 'disk_total': 96789, 'disk_processed': 6789, 'disk_remaining': 90000, 'created_at': datetime.datetime(2016, 0o1, 22, 13, 42, 2), 'updated_at': datetime.datetime(2016, 0o1, 22, 13, 42, 2), 'deleted_at': None, 'deleted': False, 'user_id': USER_ID, 'project_id': PROJECT_ID } ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_server_password.py0000664000175000017500000000367100000000000027241 0ustar00zuulzuul00000000000000# Copyright 2015 NEC Corporation. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 

import mock

from nova.tests.functional.api_sample_tests import test_servers


class ServerPasswordSampleJsonTests(test_servers.ServersSampleBase):
    sample_dir = "os-server-password"

    @mock.patch("nova.api.metadata.password.extract_password")
    def test_get_password(self, mock_extract_password):
        password = ("xlozO3wLCBRWAa2yDjCCVx8vwNPypxnypmRYDa/zErlQ+EzPe1S/"
                    "Gz6nfmC52mOlOSCRuUOmG7kqqgejPof6M7bOezS387zjq4LSvvwp"
                    "28zUknzy4YzfFGhnHAdai3TxUJ26pfQCYrq8UTzmKF2Bq8ioSEtV"
                    "VzM0A96pDh8W2i7BOz6MdoiVyiev/I1K2LsuipfxSJR7Wdke4zNX"
                    "JjHHP2RfYsVbZ/k9ANu+Nz4iIH8/7Cacud/pphH7EjrY6a4RZNrj"
                    "QskrhKYed0YERpotyjYk1eDtRe72GrSiXteqCM4biaQ5w3ruS+Ac"
                    "X//PXk3uJ5kC7d67fPXaVz4WaQRYMg==")
        # Mock password since there is no api to set it
        mock_extract_password.return_value = password
        uuid = self._post_server()
        response = self._do_get('servers/%s/os-server-password' % uuid)
        subs = {'encrypted_password': password.replace('+', '\\+')}
        self._verify_response('get-password-resp', subs, response, 200)

    def test_reset_password(self):
        uuid = self._post_server()
        response = self._do_delete('servers/%s/os-server-password' % uuid)
        self.assertEqual(204, response.status_code)
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0
nova-21.2.4/nova/tests/functional/api_sample_tests/test_server_tags.py0000664000175000017500000001013200000000000026323 0ustar00zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import six

from nova.db.sqlalchemy import models
from nova.tests.functional.api_sample_tests import test_servers

TAG1 = 'tag1'
TAG2 = 'tag2'


class ServerTagsJsonTest(test_servers.ServersSampleBase):
    sample_dir = 'os-server-tags'
    microversion = '2.26'
    scenarios = [('v2_26', {'api_major_version': 'v2.1'})]

    def _get_create_subs(self):
        return {'tag1': TAG1, 'tag2': TAG2}

    def _get_show_subs(self):
        subs = self._get_regexes()
        subs['hostid'] = '[a-f0-9]+'
        subs['tag1'] = '[0-9a-zA-Z]+'
        subs['tag2'] = '[0-9a-zA-Z]+'
        subs['access_ip_v4'] = '1.2.3.4'
        subs['access_ip_v6'] = '80fe::'
        subs['hostname'] = r'[\w\.\-]+'
        subs['instance_name'] = r'instance-\d{8}'
        subs['hypervisor_hostname'] = r'[\w\.\-]+'
        subs['cdrive'] = '.*'
        subs['user_data'] = (self.user_data if six.PY2
                             else self.user_data.decode('utf-8'))
        return subs

    def _put_server_tags(self):
        """Verify the response status and returns the UUID of the
        newly created server with tags.
        """
        uuid = self._post_server()
        subs = self._get_create_subs()
        response = self._do_put('servers/%s/tags' % uuid,
                                'server-tags-put-all-req', subs)
        self._verify_response('server-tags-put-all-resp', subs,
                              response, 200)
        return uuid

    def test_server_tags_update_all(self):
        self._put_server_tags()

    def test_server_tags_show(self):
        uuid = self._put_server_tags()
        response = self._do_get('servers/%s/tags/%s' % (uuid, TAG1))
        self.assertEqual(204, response.status_code)

    def test_server_tags_show_with_details_information(self):
        uuid = self._put_server_tags()
        response = self._do_get('servers/%s' % uuid)
        subs = self._get_show_subs()
        self._verify_response('server-tags-show-details-resp', subs,
                              response, 200)

    def test_server_tags_list_with_details_information(self):
        self._put_server_tags()
        subs = self._get_show_subs()
        response = self._do_get('servers/detail')
        self._verify_response('servers-tags-details-resp', subs,
                              response, 200)

    def test_server_tags_index(self):
        uuid = self._put_server_tags()
        response = self._do_get('servers/%s/tags' % uuid)
        subs = self._get_regexes()
        subs['tag1'] = '[0-9a-zA-Z]+'
        subs['tag2'] = '[0-9a-zA-Z]+'
        self._verify_response('server-tags-index-resp', subs, response, 200)

    def test_server_tags_update(self):
        uuid = self._put_server_tags()
        tag = models.Tag()
        tag.resource_id = uuid
        tag.tag = 'OtherTag'
        response = self._do_put('servers/%s/tags/%s' % (uuid, tag.tag))
        self.assertEqual(201, response.status_code)
        expected_location = "%s/servers/%s/tags/%s" % (
            self._get_vers_compute_endpoint(), uuid, tag.tag)
        self.assertEqual(expected_location, response.headers['Location'])
        self.assertEqual('', response.text)

    def test_server_tags_delete(self):
        uuid = self._put_server_tags()
        response = self._do_delete('servers/%s/tags/%s' % (uuid, TAG1))
        self.assertEqual(204, response.status_code)
        self.assertEqual('', response.text)

    def test_server_tags_delete_all(self):
        uuid = self._put_server_tags()
        response = self._do_delete('servers/%s/tags' % uuid)
        self.assertEqual(204, response.status_code)
        self.assertEqual('', response.text)
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0
nova-21.2.4/nova/tests/functional/api_sample_tests/test_server_topology.py0000664000175000017500000000460500000000000027251 0ustar00zuulzuul00000000000000
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from nova.objects import instance_numa as numa
from nova.objects import virt_cpu_topology as cpu_topo
from nova.tests.functional.api_sample_tests import test_servers


def fake_get_numa():
    cpu_info = {'sockets': 2, 'cores': 1, 'threads': 2}
    cpu_topology = cpu_topo.VirtCPUTopology.from_dict(cpu_info)

    cell_0 = numa.InstanceNUMACell(node=0, memory=1024, pagesize=4, id=0,
                                   cpu_topology=cpu_topology,
                                   cpu_pinning={0: 0, 1: 5},
                                   cpuset=set([0, 1]))
    cell_1 = numa.InstanceNUMACell(node=1, memory=2048, pagesize=4, id=1,
                                   cpu_topology=cpu_topology,
                                   cpu_pinning={2: 1, 3: 8},
                                   cpuset=set([2, 3]))

    return numa.InstanceNUMATopology(cells=[cell_0, cell_1])


class ServerTopologySamplesJson(test_servers.ServersSampleBase):
    microversion = '2.78'
    scenarios = [('v2_78', {'api_major_version': 'v2.1'})]
    sample_dir = "os-server-topology"

    def setUp(self):
        super(ServerTopologySamplesJson, self).setUp()

        def _load_numa(self, *args, **argv):
            self.numa_topology = fake_get_numa()

        self.stub_out('nova.objects.instance.Instance._load_numa_topology',
                      _load_numa)


class ServerTopologySamplesJsonTestV278_Admin(ServerTopologySamplesJson):
    ADMIN_API = True

    def test_get_servers_topology_admin(self):
        uuid = self._post_server()
        response = self._do_get('servers/%s/topology' % uuid)
        self._verify_response(
            'servers-topology-resp', {}, response, 200)


class ServerTopologySamplesJsonTestV278(ServerTopologySamplesJson):
    ADMIN_API = False

    def test_get_servers_topology_user(self):
        uuid = self._post_server()
        response = self._do_get('servers/%s/topology' % uuid)
        self._verify_response(
            'servers-topology-resp-user', {}, response, 200)
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0
nova-21.2.4/nova/tests/functional/api_sample_tests/test_servers.py0000664000175000017500000010700700000000000025500 0ustar00zuulzuul00000000000000
# Copyright 2012 Nebula, Inc.
# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import base64 import time from oslo_utils import fixture as utils_fixture from oslo_utils import timeutils import six from nova.api.openstack import api_version_request as avr import nova.conf from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api_sample_tests import api_sample_base from nova.tests.unit.api.openstack import fakes from nova.tests.unit.image import fake CONF = nova.conf.CONF class ServersSampleBase(api_sample_base.ApiSampleTestBaseV21): microversion = None sample_dir = 'servers' user_data_contents = six.b('#!/bin/bash\n/bin/su\necho "I am in you!"\n') user_data = base64.b64encode(user_data_contents) common_req_names = [ (None, '2.36', 'server-create-req'), ('2.37', '2.56', 'server-create-req-v237'), ('2.57', None, 'server-create-req-v257') ] def _get_request_name(self, use_common, sample_name=None): if not use_common: return sample_name or 'server-create-req' api_version = self.microversion or '2.1' for min, max, name in self.common_req_names: if avr.APIVersionRequest(api_version).matches( avr.APIVersionRequest(min), avr.APIVersionRequest(max)): return name def _post_server(self, use_common_server_api_samples=True, name=None, extra_subs=None, sample_name=None): # param use_common_server_api_samples: Boolean to set whether tests use # common sample files for server post request and response. # Default is True which means _get_sample_path method will fetch the # common server sample files from 'servers' directory. # Set False if tests need to use extension specific sample files subs = { 'image_id': fake.get_valid_image_id(), 'host': self._get_host(), 'compute_endpoint': self._get_compute_endpoint(), 'versioned_compute_endpoint': self._get_vers_compute_endpoint(), 'glance_host': self._get_glance_host(), 'access_ip_v4': '1.2.3.4', 'access_ip_v6': '80fe::', 'user_data': (self.user_data if six.PY2 else self.user_data.decode('utf-8')), 'uuid': '[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}' '-[0-9a-f]{4}-[0-9a-f]{12}', 'name': 'new-server-test' if name is None else name, } # If the template is requesting an explicit availability zone and # the test is setup to have AZs, use the first one in the list which # should default to "us-west". if self.availability_zones: subs['availability_zone'] = self.availability_zones[0] if extra_subs: subs.update(extra_subs) orig_value = self.__class__._use_common_server_api_samples try: self.__class__._use_common_server_api_samples = ( use_common_server_api_samples) # If using common samples, we could only put samples under # api_samples/servers. We will put a lot of samples when we # have more and more microversions. # Callers can specify the sample_name param so that we can add # samples into api_samples/servers/v2.xx. response = self._do_post('servers', self._get_request_name( use_common_server_api_samples, sample_name), subs) status = self._verify_response('server-create-resp', subs, response, 202) return status finally: self.__class__._use_common_server_api_samples = orig_value def setUp(self): super(ServersSampleBase, self).setUp() self.api.microversion = self.microversion class ServersSampleJsonTest(ServersSampleBase): # This controls whether or not we use the common server API sample # for server post req/resp. 
use_common_server_post = True microversion = None def test_servers_post(self): return self._post_server( use_common_server_api_samples=self.use_common_server_post) def test_servers_get(self): self.stub_out( 'nova.db.api.block_device_mapping_get_all_by_instance_uuids', fakes.stub_bdm_get_all_by_instance_uuids) uuid = self.test_servers_post() response = self._do_get('servers/%s' % uuid) subs = {} subs['hostid'] = '[a-f0-9]+' subs['id'] = uuid subs['instance_name'] = r'instance-\d{8}' subs['hypervisor_hostname'] = r'[\w\.\-]+' subs['hostname'] = r'[\w\.\-]+' subs['mac_addr'] = '(?:[a-f0-9]{2}:){5}[a-f0-9]{2}' subs['access_ip_v4'] = '1.2.3.4' subs['access_ip_v6'] = '80fe::' subs['user_data'] = (self.user_data if six.PY2 else self.user_data.decode('utf-8')) # config drive can be a string for True or empty value for False subs['cdrive'] = '.*' self._verify_response('server-get-resp', subs, response, 200) def test_servers_list(self): uuid = self._post_server() response = self._do_get('servers?limit=1') subs = {'id': uuid} self._verify_response('servers-list-resp', subs, response, 200) def test_servers_details(self): self.stub_out( 'nova.db.api.block_device_mapping_get_all_by_instance_uuids', fakes.stub_bdm_get_all_by_instance_uuids) uuid = self.test_servers_post() response = self._do_get('servers/detail?limit=1') subs = {} subs['hostid'] = '[a-f0-9]+' subs['id'] = uuid subs['instance_name'] = r'instance-\d{8}' subs['hypervisor_hostname'] = r'[\w\.\-]+' subs['hostname'] = r'[\w\.\-]+' subs['mac_addr'] = '(?:[a-f0-9]{2}:){5}[a-f0-9]{2}' subs['access_ip_v4'] = '1.2.3.4' subs['access_ip_v6'] = '80fe::' subs['user_data'] = (self.user_data if six.PY2 else self.user_data.decode('utf-8')) # config drive can be a string for True or empty value for False subs['cdrive'] = '.*' self._verify_response('servers-details-resp', subs, response, 200) class ServersSampleJson23Test(ServersSampleJsonTest): microversion = '2.3' scenarios = [('v2_3', {'api_major_version': 'v2.1'})] class ServersSampleJson29Test(ServersSampleJsonTest): microversion = '2.9' # NOTE(gmann): microversion tests do not need to run for v2 API # so defining scenarios only for v2.9 which will run the original tests # by appending '(v2_9)' in test_id. 
scenarios = [('v2_9', {'api_major_version': 'v2.1'})] class ServersSampleJson216Test(ServersSampleJsonTest): microversion = '2.16' scenarios = [('v2_16', {'api_major_version': 'v2.1'})] class ServersSampleJson219Test(ServersSampleJsonTest): microversion = '2.19' scenarios = [('v2_19', {'api_major_version': 'v2.1'})] def test_servers_post(self): return self._post_server(False) def test_servers_put(self): uuid = self.test_servers_post() response = self._do_put('servers/%s' % uuid, 'server-put-req', {}) subs = { 'image_id': fake.get_valid_image_id(), 'hostid': '[a-f0-9]+', 'glance_host': self._get_glance_host(), 'access_ip_v4': '1.2.3.4', 'access_ip_v6': '80fe::' } self._verify_response('server-put-resp', subs, response, 200) class ServersSampleJson232Test(ServersSampleBase): microversion = '2.32' sample_dir = 'servers' scenarios = [('v2_32', {'api_major_version': 'v2.1'})] def test_servers_post(self): self._post_server(use_common_server_api_samples=False) class ServersSampleJson237Test(ServersSampleBase): microversion = '2.37' sample_dir = 'servers' scenarios = [('v2_37', {'api_major_version': 'v2.1'})] def test_servers_post(self): self._post_server(use_common_server_api_samples=False) class ServersSampleJson242Test(ServersSampleBase): microversion = '2.42' sample_dir = 'servers' scenarios = [('v2_42', {'api_major_version': 'v2.1'})] def test_servers_post(self): self._post_server(use_common_server_api_samples=False) class ServersSampleJson247Test(ServersSampleJsonTest): microversion = '2.47' scenarios = [('v2_47', {'api_major_version': 'v2.1'})] use_common_server_post = False def test_server_rebuild(self): uuid = self._post_server() image = fake.get_valid_image_id() params = { 'uuid': image, 'name': 'foobar', 'pass': 'seekr3t', 'hostid': '[a-f0-9]+', 'access_ip_v4': '1.2.3.4', 'access_ip_v6': '80fe::', } resp = self._do_post('servers/%s/action' % uuid, 'server-action-rebuild', params) subs = params.copy() del subs['uuid'] self._verify_response('server-action-rebuild-resp', subs, resp, 202) class ServersSampleJson252Test(ServersSampleJsonTest): microversion = '2.52' scenarios = [('v2_52', {'api_major_version': 'v2.1'})] use_common_server_post = False class ServersSampleJson263Test(ServersSampleBase): microversion = '2.63' scenarios = [('v2_63', {'api_major_version': 'v2.1'})] def setUp(self): super(ServersSampleJson263Test, self).setUp() self.common_subs = { 'hostid': '[a-f0-9]+', 'instance_name': r'instance-\d{8}', 'hypervisor_hostname': r'[\w\.\-]+', 'hostname': r'[\w\.\-]+', 'access_ip_v4': '1.2.3.4', 'access_ip_v6': '80fe::', 'user_data': (self.user_data if six.PY2 else self.user_data.decode('utf-8')), 'cdrive': '.*', } def test_servers_post(self): self._post_server(use_common_server_api_samples=False) def test_server_rebuild(self): uuid = self._post_server(use_common_server_api_samples=False) fakes.stub_out_key_pair_funcs(self) image = fake.get_valid_image_id() params = { 'uuid': image, 'name': 'foobar', 'key_name': 'new-key', 'description': 'description of foobar', 'pass': 'seekr3t', 'access_ip_v4': '1.2.3.4', 'access_ip_v6': '80fe::', } resp = self._do_post('servers/%s/action' % uuid, 'server-action-rebuild', params) exp_resp = params.copy() del exp_resp['uuid'] exp_resp['hostid'] = '[a-f0-9]+' self._verify_response('server-action-rebuild-resp', exp_resp, resp, 202) def test_servers_details(self): uuid = self._post_server(use_common_server_api_samples=False) response = self._do_get('servers/detail?limit=1') subs = self.common_subs.copy() subs['id'] = uuid 
self._verify_response('servers-details-resp', subs, response, 200) def test_server_get(self): uuid = self._post_server(use_common_server_api_samples=False) response = self._do_get('servers/%s' % uuid) subs = self.common_subs.copy() subs['id'] = uuid self._verify_response('server-get-resp', subs, response, 200) def test_server_update(self): uuid = self._post_server(use_common_server_api_samples=False) subs = self.common_subs.copy() subs['id'] = uuid response = self._do_put('servers/%s' % uuid, 'server-update-req', subs) self._verify_response('server-update-resp', subs, response, 200) class ServersSampleJson266Test(ServersSampleBase): microversion = '2.66' scenarios = [('v2_66', {'api_major_version': 'v2.1'})] def setUp(self): super(ServersSampleJson266Test, self).setUp() self.common_subs = { 'hostid': '[a-f0-9]+', 'instance_name': r'instance-\d{8}', 'hypervisor_hostname': r'[\w\.\-]+', 'hostname': r'[\w\.\-]+', 'access_ip_v4': '1.2.3.4', 'access_ip_v6': '80fe::', 'user_data': (self.user_data if six.PY2 else self.user_data.decode('utf-8')), 'cdrive': '.*', } def test_get_servers_list_with_changes_before(self): uuid = self._post_server(use_common_server_api_samples=False) current_time = timeutils.parse_isotime(timeutils.utcnow().isoformat()) response = self._do_get( 'servers?changes-before=%s' % timeutils.normalize_time( current_time)) subs = self.common_subs.copy() subs['id'] = uuid self._verify_response( 'servers-list-with-changes-before', subs, response, 200) def test_get_servers_detail_with_changes_before(self): uuid = self._post_server(use_common_server_api_samples=False) current_time = timeutils.parse_isotime(timeutils.utcnow().isoformat()) response = self._do_get( 'servers/detail?changes-before=%s' % timeutils.normalize_time( current_time)) subs = self.common_subs.copy() subs['id'] = uuid self._verify_response( 'servers-details-with-changes-before', subs, response, 200) class ServersSampleJson267Test(ServersSampleBase): microversion = '2.67' scenarios = [('v2_67', {'api_major_version': 'v2.1'})] def setUp(self): super(ServersSampleJson267Test, self).setUp() self.useFixture(nova_fixtures.CinderFixture(self)) def test_servers_post(self): return self._post_server(use_common_server_api_samples=False) class ServersSampleJson269Test(ServersSampleBase): microversion = '2.69' scenarios = [('v2_69', {'api_major_version': 'v2.1'})] def setUp(self): super(ServersSampleJson269Test, self).setUp() def _fake_instancemapping_get_by_cell_and_project(*args, **kwargs): # global cell based on which rest of the functions are stubbed out cell_fixture = nova_fixtures.SingleCellSimple() return [{ 'id': 1, 'updated_at': None, 'created_at': None, 'instance_uuid': utils_fixture.uuidsentinel.inst, 'cell_id': 1, 'project_id': "6f70656e737461636b20342065766572", 'cell_mapping': cell_fixture._fake_cell_list()[0], 'queued_for_delete': False }] self.stub_out('nova.objects.InstanceMappingList.' 
'_get_not_deleted_by_cell_and_project_from_db', _fake_instancemapping_get_by_cell_and_project) def test_servers_list_from_down_cells(self): uuid = self._post_server(use_common_server_api_samples=False) with nova_fixtures.DownCellFixture(): response = self._do_get('servers') subs = {'id': uuid} self._verify_response('servers-list-resp', subs, response, 200) def test_servers_details_from_down_cells(self): uuid = self._post_server(use_common_server_api_samples=False) with nova_fixtures.DownCellFixture(): response = self._do_get('servers/detail') subs = {'id': uuid} self._verify_response('servers-details-resp', subs, response, 200) def test_server_get_from_down_cells(self): uuid = self._post_server(use_common_server_api_samples=False) with nova_fixtures.DownCellFixture(): response = self._do_get('servers/%s' % uuid) subs = {'id': uuid} self._verify_response('server-get-resp', subs, response, 200) class ServersSampleJson271Test(ServersSampleBase): microversion = '2.71' scenarios = [('v2_71', {'api_major_version': 'v2.1'})] def setUp(self): super(ServersSampleJson271Test, self).setUp() self.common_subs = { 'hostid': '[a-f0-9]+', 'instance_name': r'instance-\d{8}', 'hypervisor_hostname': r'[\w\.\-]+', 'hostname': r'[\w\.\-]+', 'access_ip_v4': '1.2.3.4', 'access_ip_v6': '80fe::', 'user_data': (self.user_data if six.PY2 else self.user_data.decode('utf-8')), 'cdrive': '.*', } # create server group subs = {'name': 'test'} response = self._do_post('os-server-groups', 'server-groups-post-req', subs) self.sg_uuid = self._verify_response('server-groups-post-resp', subs, response, 200) def _test_servers_post(self): return self._post_server( use_common_server_api_samples=False, extra_subs={'sg_uuid': self.sg_uuid}) def test_servers_get_with_server_group(self): uuid = self._test_servers_post() response = self._do_get('servers/%s' % uuid) subs = self.common_subs.copy() subs['id'] = uuid self._verify_response('server-get-resp', subs, response, 200) def test_servers_update_with_server_groups(self): uuid = self._test_servers_post() subs = self.common_subs.copy() subs['id'] = uuid response = self._do_put('servers/%s' % uuid, 'server-update-req', subs) self._verify_response('server-update-resp', subs, response, 200) def test_servers_rebuild_with_server_groups(self): uuid = self._test_servers_post() fakes.stub_out_key_pair_funcs(self) image = fake.get_valid_image_id() params = { 'uuid': image, 'name': 'foobar', 'key_name': 'new-key', 'description': 'description of foobar', 'pass': 'seekr3t', 'access_ip_v4': '1.2.3.4', 'access_ip_v6': '80fe::', } resp = self._do_post('servers/%s/action' % uuid, 'server-action-rebuild', params) subs = self.common_subs.copy() subs.update(params) subs['id'] = uuid del subs['uuid'] self._verify_response('server-action-rebuild-resp', subs, resp, 202) def test_server_get_from_down_cells(self): def _fake_instancemapping_get_by_cell_and_project(*args, **kwargs): # global cell based on which rest of the functions are stubbed out cell_fixture = nova_fixtures.SingleCellSimple() return [{ 'id': 1, 'updated_at': None, 'created_at': None, 'instance_uuid': utils_fixture.uuidsentinel.inst, 'cell_id': 1, 'project_id': "6f70656e737461636b20342065766572", 'cell_mapping': cell_fixture._fake_cell_list()[0], 'queued_for_delete': False }] self.stub_out('nova.objects.InstanceMappingList.' 
'_get_not_deleted_by_cell_and_project_from_db', _fake_instancemapping_get_by_cell_and_project) uuid = self._test_servers_post() with nova_fixtures.DownCellFixture(): response = self._do_get('servers/%s' % uuid) subs = {'id': uuid} self._verify_response('server-get-down-cell-resp', subs, response, 200) class ServersSampleJson273Test(ServersSampleBase): microversion = '2.73' scenarios = [('v2_73', {'api_major_version': 'v2.1'})] def _post_server_and_lock(self): uuid = self._post_server(use_common_server_api_samples=False) reason = "I don't want to work" self._do_post('servers/%s/action' % uuid, 'lock-server-with-reason', {"locked_reason": reason}) return uuid def test_servers_details_with_locked_reason(self): uuid = self._post_server_and_lock() response = self._do_get('servers/detail') subs = {'id': uuid} self._verify_response('servers-details-resp', subs, response, 200) def test_server_get_with_locked_reason(self): uuid = self._post_server_and_lock() response = self._do_get('servers/%s' % uuid) subs = {'id': uuid} self._verify_response('server-get-resp', subs, response, 200) def test_server_rebuild_with_empty_locked_reason(self): uuid = self._post_server(use_common_server_api_samples=False) image = fake.get_valid_image_id() params = { 'uuid': image, 'name': 'foobar', 'pass': 'seekr3t', 'hostid': '[a-f0-9]+', 'access_ip_v4': '1.2.3.4', 'access_ip_v6': '80fe::', } resp = self._do_post('servers/%s/action' % uuid, 'server-action-rebuild', params) subs = params.copy() del subs['uuid'] self._verify_response('server-action-rebuild-resp', subs, resp, 202) def test_update_server_with_empty_locked_reason(self): uuid = self._post_server(use_common_server_api_samples=False) subs = {} subs['hostid'] = '[a-f0-9]+' subs['access_ip_v4'] = '1.2.3.4' subs['access_ip_v6'] = '80fe::' response = self._do_put('servers/%s' % uuid, 'server-update-req', subs) self._verify_response('server-update-resp', subs, response, 200) class ServersSampleJson274Test(ServersSampleBase): """Supporting host and/or hypervisor_hostname is an admin API to create servers. """ ADMIN_API = True SUPPORTS_CELLS = True microversion = '2.74' scenarios = [('v2_74', {'api_major_version': 'v2.1'})] # Do not put an availability_zone in the API sample request since it would # be confusing with the requested host/hypervisor_hostname and forced # host/node zone:host:node case. 
availability_zones = [] def _setup_compute_service(self): return self.start_service('compute', host='openstack-node-01') def setUp(self): super(ServersSampleJson274Test, self).setUp() def test_servers_post_with_only_host(self): self._post_server(use_common_server_api_samples=False, sample_name='server-create-req-with-only-host') def test_servers_post_with_only_node(self): self._post_server(use_common_server_api_samples=False, sample_name='server-create-req-with-only-node') def test_servers_post_with_host_and_node(self): self._post_server(use_common_server_api_samples=False, sample_name='server-create-req-with-host-and-node') class ServersUpdateSampleJsonTest(ServersSampleBase): def test_update_server(self): uuid = self._post_server() subs = {} subs['hostid'] = '[a-f0-9]+' subs['access_ip_v4'] = '1.2.3.4' subs['access_ip_v6'] = '80fe::' response = self._do_put('servers/%s' % uuid, 'server-update-req', subs) self._verify_response('server-update-resp', subs, response, 200) class ServersUpdateSampleJson247Test(ServersUpdateSampleJsonTest): microversion = '2.47' scenarios = [('v2_47', {'api_major_version': 'v2.1'})] class ServersSampleJson275Test(ServersUpdateSampleJsonTest): microversion = '2.75' scenarios = [('v2_75', {'api_major_version': 'v2.1'})] def test_server_rebuild(self): uuid = self._post_server() image = fake.get_valid_image_id() params = { 'uuid': image, 'name': 'foobar', 'pass': 'seekr3t', 'hostid': '[a-f0-9]+', 'access_ip_v4': '1.2.3.4', 'access_ip_v6': '80fe::', } resp = self._do_post('servers/%s/action' % uuid, 'server-action-rebuild', params) subs = params.copy() del subs['uuid'] self._verify_response('server-action-rebuild-resp', subs, resp, 202) class ServerSortKeysJsonTests(ServersSampleBase): sample_dir = 'servers-sort' def test_servers_list(self): self._post_server() response = self._do_get('servers?sort_key=display_name&sort_dir=asc') self._verify_response('server-sort-keys-list-resp', {}, response, 200) class _ServersActionsJsonTestMixin(object): def _test_server_action(self, uuid, action, req_tpl, subs=None, resp_tpl=None, code=202): subs = subs or {} subs.update({'action': action, 'glance_host': self._get_glance_host()}) response = self._do_post('servers/%s/action' % uuid, req_tpl, subs) if resp_tpl: self._verify_response(resp_tpl, subs, response, code) else: self.assertEqual(code, response.status_code) self.assertEqual("", response.text) return response class ServersActionsJsonTest(ServersSampleBase, _ServersActionsJsonTestMixin): SUPPORTS_CELLS = True def test_server_reboot_hard(self): uuid = self._post_server() self._test_server_action(uuid, "reboot", 'server-action-reboot', {"type": "HARD"}) def test_server_reboot_soft(self): uuid = self._post_server() self._test_server_action(uuid, "reboot", 'server-action-reboot', {"type": "SOFT"}) def test_server_rebuild(self): uuid = self._post_server() image = fake.get_valid_image_id() params = { 'uuid': image, 'name': 'foobar', 'pass': 'seekr3t', 'hostid': '[a-f0-9]+', 'access_ip_v4': '1.2.3.4', 'access_ip_v6': '80fe::', } resp = self._do_post('servers/%s/action' % uuid, 'server-action-rebuild', params) subs = params.copy() del subs['uuid'] self._verify_response('server-action-rebuild-resp', subs, resp, 202) def test_server_resize(self): self.flags(allow_resize_to_same_host=True) uuid = self._post_server() self._test_server_action(uuid, "resize", 'server-action-resize', {"id": '2', "host": self._get_host()}) return uuid def test_server_revert_resize(self): uuid = self.test_server_resize() self._test_server_action(uuid, 
"revertResize", 'server-action-revert-resize') def test_server_confirm_resize(self): uuid = self.test_server_resize() self._test_server_action(uuid, "confirmResize", 'server-action-confirm-resize', code=204) def _wait_for_active_server(self, uuid): """Wait 10 seconds for the server to be ACTIVE, else fail. :param uuid: The server id. :returns: The ACTIVE server. """ server = self._do_get('servers/%s' % uuid, return_json_body=True)['server'] count = 0 while server['status'] != 'ACTIVE' and count < 10: time.sleep(1) server = self._do_get('servers/%s' % uuid, return_json_body=True)['server'] count += 1 if server['status'] != 'ACTIVE': self.fail('Timed out waiting for server %s to be ACTIVE.' % uuid) return server def test_server_add_floating_ip(self): uuid = self._post_server() # Get the server details so we can find a fixed IP to use in the # addFloatingIp request. server = self._wait_for_active_server(uuid) addresses = server['addresses'] # Find a fixed IP. fixed_address = None for network, ips in addresses.items(): for ip in ips: if ip['OS-EXT-IPS:type'] == 'fixed': fixed_address = ip['addr'] break if fixed_address: break if fixed_address is None: self.fail('Failed to find a fixed IP for server %s in addresses: ' '%s' % (uuid, addresses)) subs = { "address": "10.10.10.10", "fixed_address": fixed_address } # This is gross, but we need to stub out the associate_floating_ip # call in the FloatingIPActionController since we don't have a real # networking service backing this up, just the fake neutron stubs. self.stub_out('nova.network.neutron.API.associate_floating_ip', lambda *a, **k: None) self._test_server_action(uuid, 'addFloatingIp', 'server-action-addfloatingip-req', subs) def test_server_remove_floating_ip(self): server_uuid = self._post_server() self._wait_for_active_server(server_uuid) subs = { "address": "172.16.10.7" } self.stub_out( 'nova.network.neutron.API.get_floating_ip_by_address', lambda *a, **k: { 'port_id': 'a0c566f0-faab-406f-b77f-2b286dc6dd7e'}) self.stub_out( 'nova.network.neutron.API.' 
'get_instance_id_by_floating_address', lambda *a, **k: server_uuid) self.stub_out( 'nova.network.neutron.API.disassociate_floating_ip', lambda *a, **k: None) self._test_server_action(server_uuid, 'removeFloatingIp', 'server-action-removefloatingip-req', subs) class ServersActionsJson219Test(ServersSampleBase): microversion = '2.19' scenarios = [('v2_19', {'api_major_version': 'v2.1'})] def test_server_rebuild(self): uuid = self._post_server() image = fake.get_valid_image_id() params = { 'uuid': image, 'name': 'foobar', 'description': 'description of foobar', 'pass': 'seekr3t', 'hostid': '[a-f0-9]+', 'access_ip_v4': '1.2.3.4', 'access_ip_v6': '80fe::', } resp = self._do_post('servers/%s/action' % uuid, 'server-action-rebuild', params) subs = params.copy() del subs['uuid'] self._verify_response('server-action-rebuild-resp', subs, resp, 202) class ServersActionsJson226Test(ServersSampleBase): microversion = '2.26' scenarios = [('v2_26', {'api_major_version': 'v2.1'})] def test_server_rebuild(self): uuid = self._post_server() image = fake.get_valid_image_id() params = { 'uuid': image, 'access_ip_v4': '1.2.3.4', 'access_ip_v6': '80fe::', 'disk_config': 'AUTO', 'hostid': '[a-f0-9]+', 'name': 'foobar', 'pass': 'seekr3t', 'preserve_ephemeral': 'false', 'description': 'description of foobar' } # Add 'tag1' and 'tag2' tags self._do_put('servers/%s/tags/tag1' % uuid) self._do_put('servers/%s/tags/tag2' % uuid) # Rebuild Action resp = self._do_post('servers/%s/action' % uuid, 'server-action-rebuild', params) subs = params.copy() del subs['uuid'] self._verify_response('server-action-rebuild-resp', subs, resp, 202) class ServersActionsJson254Test(ServersSampleBase): microversion = '2.54' sample_dir = 'servers' scenarios = [('v2_54', {'api_major_version': 'v2.1'})] def _create_server(self): return self._post_server() def test_server_rebuild(self): fakes.stub_out_key_pair_funcs(self) uuid = self._create_server() image = fake.get_valid_image_id() params = { 'uuid': image, 'name': 'foobar', 'key_name': 'new-key', 'description': 'description of foobar', 'pass': 'seekr3t', 'hostid': '[a-f0-9]+', 'access_ip_v4': '1.2.3.4', 'access_ip_v6': '80fe::', } resp = self._do_post('servers/%s/action' % uuid, 'server-action-rebuild', params) subs = params.copy() del subs['uuid'] self._verify_response('server-action-rebuild-resp', subs, resp, 202) class ServersActionsJson257Test(ServersActionsJson254Test): """Tests rebuilding a server with new user_data.""" microversion = '2.57' scenarios = [('v2_57', {'api_major_version': 'v2.1'})] def _create_server(self): return self._post_server(use_common_server_api_samples=False) class ServersCreateImageJsonTest(ServersSampleBase, _ServersActionsJsonTestMixin): """Tests the createImage server action API against 2.1.""" def test_server_create_image(self): uuid = self._post_server() resp = self._test_server_action(uuid, 'createImage', 'server-action-create-image', {'name': 'foo-image'}) # we should have gotten a location header back self.assertIn('location', resp.headers) # we should not have gotten a body back self.assertEqual(0, len(resp.content)) class ServersCreateImageJsonTestv2_45(ServersCreateImageJsonTest): """Tests the createImage server action API against 2.45.""" microversion = '2.45' scenarios = [('v2_45', {'api_major_version': 'v2.1'})] def test_server_create_image(self): uuid = self._post_server() resp = self._test_server_action( uuid, 'createImage', 'server-action-create-image', {'name': 'foo-image'}, 'server-action-create-image-resp') # assert that no location header 
was returned self.assertNotIn('location', resp.headers) class ServerStartStopJsonTest(ServersSampleBase): def _test_server_action(self, uuid, action, req_tpl): response = self._do_post('servers/%s/action' % uuid, req_tpl, {'action': action}) self.assertEqual(202, response.status_code) self.assertEqual("", response.text) def test_server_start(self): uuid = self._post_server() self._test_server_action(uuid, 'os-stop', 'server-action-stop') self._test_server_action(uuid, 'os-start', 'server-action-start') def test_server_stop(self): uuid = self._post_server() self._test_server_action(uuid, 'os-stop', 'server-action-stop') class ServersSampleMultiStatusJsonTest(ServersSampleBase): def test_servers_list(self): uuid = self._post_server() response = self._do_get('servers?limit=1&status=active&status=error') subs = {'id': uuid, 'status': 'error'} self._verify_response('servers-list-status-resp', subs, response, 200) class ServerTriggerCrashDumpJsonTest(ServersSampleBase): microversion = '2.17' scenarios = [('v2_17', {'api_major_version': 'v2.1'})] def test_trigger_crash_dump(self): uuid = self._post_server() response = self._do_post('servers/%s/action' % uuid, 'server-action-trigger-crash-dump', {}) self.assertEqual(response.status_code, 202) self.assertEqual(response.text, "") ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_servers_ips.py0000664000175000017500000000266000000000000026352 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api_sample_tests import test_servers class ServersIpsJsonTest(test_servers.ServersSampleBase): sample_dir = 'server-ips' def test_get(self): # Test getting a server's IP information. uuid = self._post_server() response = self._do_get('servers/%s/ips' % uuid) self._verify_response('server-ips-resp', {}, response, 200) def test_get_by_network(self): # Test getting a server's IP information by network id. server_uuid = self._post_server() network_label = nova_fixtures.NeutronFixture.network_1['name'] response = self._do_get('servers/%s/ips/%s' % ( server_uuid, network_label)) self._verify_response('server-ips-network-resp', {}, response, 200) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_services.py0000664000175000017500000002245000000000000025630 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import fixture as utils_fixture from nova import exception from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api_sample_tests import api_sample_base from nova.tests.unit.api.openstack.compute import test_services from nova.tests.unit.objects import test_compute_node from nova.tests.unit.objects import test_host_mapping class ServicesJsonTest(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True sample_dir = "os-services" microversion = None def setUp(self): super(ServicesJsonTest, self).setUp() self.api.microversion = self.microversion self.stub_out("nova.db.api.service_get_all", test_services.fake_db_api_service_get_all) self.stub_out("nova.db.api.service_get_by_host_and_binary", test_services.fake_service_get_by_host_binary) self.stub_out("nova.db.api.service_update", test_services.fake_service_update) # If we are not using real services, we need to stub out # HostAPI._update_compute_provider_status so we don't actually # try to call a fake service over RPC. self.stub_out('nova.compute.api.HostAPI.' '_update_compute_provider_status', lambda *args, **kwargs: None) self.useFixture(utils_fixture.TimeFixture(test_services.fake_utcnow())) def test_services_list(self): """Return a list of all agent builds.""" response = self._do_get('os-services') subs = {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'status': 'disabled', 'state': 'up'} self._verify_response('services-list-get-resp', subs, response, 200) def test_service_enable(self): """Enable an existing agent build.""" subs = {"host": "host1", 'binary': 'nova-compute'} response = self._do_put('os-services/enable', 'service-enable-put-req', subs) self._verify_response('service-enable-put-resp', subs, response, 200) def test_service_disable(self): """Disable an existing agent build.""" subs = {"host": "host1", 'binary': 'nova-compute'} response = self._do_put('os-services/disable', 'service-disable-put-req', subs) self._verify_response('service-disable-put-resp', subs, response, 200) def test_service_disable_log_reason(self): """Disable an existing service and log the reason.""" subs = {"host": "host1", 'binary': 'nova-compute', 'disabled_reason': 'test2'} response = self._do_put('os-services/disable-log-reason', 'service-disable-log-put-req', subs) self._verify_response('service-disable-log-put-resp', subs, response, 200) def test_service_delete(self): """Delete an existing service.""" response = self._do_delete('os-services/1') self.assertEqual(204, response.status_code) self.assertEqual("", response.text) class ServicesV211JsonTest(ServicesJsonTest): microversion = '2.11' # NOTE(gryf): There is no need to run those tests on v2 API. Only # scenarios for v2_11 will be run. 
scenarios = [('v2_11', {'api_major_version': 'v2.1'})] def test_services_list(self): """Return a list of all agent builds.""" response = self._do_get('os-services') subs = {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'forced_down': 'false', 'status': 'disabled', 'state': 'up'} self._verify_response('services-list-get-resp', subs, response, 200) def test_force_down(self): """Set forced_down flag""" subs = {"host": 'host1', 'binary': 'nova-compute', 'forced_down': 'true'} response = self._do_put('os-services/force-down', 'service-force-down-put-req', subs) self._verify_response('service-force-down-put-resp', subs, response, 200) class ServicesV253JsonTest(ServicesV211JsonTest): microversion = '2.53' scenarios = [('v2_53', {'api_major_version': 'v2.1'})] def setUp(self): super(ServicesV253JsonTest, self).setUp() def db_service_get_by_uuid(ctxt, service_uuid): for svc in test_services.fake_services_list: if svc['uuid'] == service_uuid: return svc raise exception.ServiceNotFound(service_id=service_uuid) def fake_cn_get_all_by_host(context, host): cn = test_compute_node.fake_compute_node cn['uuid'] = test_services.FAKE_UUID_COMPUTE_HOST1 cn['host'] = host return [cn] def fake_hm_get_by_host(context, host): hm = test_host_mapping.get_db_mapping() hm['host'] = host return hm def fake_hm_destroy(context, host): return 1 self.stub_out('nova.db.api.service_get_by_uuid', db_service_get_by_uuid) self.stub_out('nova.db.api.compute_node_get_all_by_host', fake_cn_get_all_by_host) self.stub_out( 'nova.objects.host_mapping.HostMapping._get_by_host_from_db', fake_hm_get_by_host) self.stub_out('nova.objects.host_mapping.HostMapping._destroy_in_db', fake_hm_destroy) def test_service_enable(self): """Enable an existing service.""" response = self._do_put( 'os-services/%s' % test_services.FAKE_UUID_COMPUTE_HOST1, 'service-enable-put-req', subs={}) self._verify_response('service-enable-put-resp', {}, response, 200) def test_service_disable(self): """Disable an existing service.""" response = self._do_put( 'os-services/%s' % test_services.FAKE_UUID_COMPUTE_HOST1, 'service-disable-put-req', subs={}) self._verify_response('service-disable-put-resp', {}, response, 200) def test_service_disable_log_reason(self): """Disable an existing service and log the reason.""" subs = {'disabled_reason': 'maintenance'} response = self._do_put( 'os-services/%s' % test_services.FAKE_UUID_COMPUTE_HOST1, 'service-disable-log-put-req', subs) self._verify_response('service-disable-log-put-resp', subs, response, 200) def test_service_delete(self): """Delete an existing service.""" response = self._do_delete( 'os-services/%s' % test_services.FAKE_UUID_COMPUTE_HOST1) self.assertEqual(204, response.status_code) self.assertEqual("", response.text) def test_force_down(self): """Set forced_down flag""" subs = {'forced_down': 'true'} response = self._do_put( 'os-services/%s' % test_services.FAKE_UUID_COMPUTE_HOST1, 'service-force-down-put-req', subs) self._verify_response('service-force-down-put-resp', subs, response, 200) class ServicesV269JsonTest(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True sample_dir = "os-services" microversion = '2.69' scenarios = [('v2_69', {'api_major_version': 'v2.1'})] def setUp(self): super(ServicesV269JsonTest, self).setUp() def _fake_cell_list(*args, **kwargs): return [{'id': 1, 'updated_at': None, 'created_at': None, 'uuid': utils_fixture.uuidsentinel.cell1, 'name': 'onlycell', 'transport_url': 'fake://nowhere/', 'database_connection': 'sqlite:///', 'disabled': False}] def 
fake_hostmappinglist_get(*args, **kwargs): cm = _fake_cell_list()[0] return [{'id': 1, 'updated_at': None, 'created_at': None, 'host': 'host1', 'cell_mapping': cm}, {'id': 2, 'updated_at': None, 'created_at': None, 'host': 'host2', 'cell_mapping': cm}] self.stub_out('nova.objects.HostMappingList._get_from_db', fake_hostmappinglist_get) def test_get_services_from_down_cells(self): subs = {} with nova_fixtures.DownCellFixture(): response = self._do_get('os-services') self._verify_response('services-list-get-resp', subs, response, 200) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_shelve.py0000664000175000017500000000537400000000000025301 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import nova.conf from nova.tests.functional.api_sample_tests import test_servers CONF = nova.conf.CONF class ShelveJsonTest(test_servers.ServersSampleBase): sample_dir = "os-shelve" def setUp(self): super(ShelveJsonTest, self).setUp() # Don't offload instance, so we can test the offload call. CONF.set_override('shelved_offload_time', -1) def _test_server_action(self, uuid, template, action): response = self._do_post('servers/%s/action' % uuid, template, {'action': action}) self.assertEqual(202, response.status_code) self.assertEqual("", response.text) def test_shelve(self): uuid = self._post_server() self._test_server_action(uuid, 'os-shelve', 'shelve') def test_shelve_offload(self): uuid = self._post_server() self._test_server_action(uuid, 'os-shelve', 'shelve') self._test_server_action(uuid, 'os-shelve-offload', 'shelveOffload') def test_unshelve(self): uuid = self._post_server() self._test_server_action(uuid, 'os-shelve', 'shelve') self._test_server_action(uuid, 'os-unshelve', 'unshelve') class UnshelveJson277Test(test_servers.ServersSampleBase): sample_dir = "os-shelve" microversion = '2.77' scenarios = [('v2_77', {'api_major_version': 'v2.1'})] def _test_server_action(self, uuid, template, action, subs=None): subs = subs or {} subs.update({'action': action}) response = self._do_post('servers/%s/action' % uuid, template, subs) self.assertEqual(202, response.status_code) self.assertEqual("", response.text) def test_unshelve_with_az(self): uuid = self._post_server() self._test_server_action(uuid, 'os-shelve', 'shelve') self._test_server_action(uuid, 'os-unshelve', 'unshelve', subs={"availability_zone": "us-west"}) def test_unshelve_no_az(self): uuid = self._post_server() self._test_server_action(uuid, 'os-shelve', 'shelve') self._test_server_action(uuid, 'os-unshelve-null', 'unshelve') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_simple_tenant_usage.py0000664000175000017500000001365300000000000030040 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import mock from oslo_utils import timeutils from six.moves.urllib import parse from nova.tests.functional.api_sample_tests import test_servers import nova.tests.functional.api_samples_test_base as astb class SimpleTenantUsageSampleJsonTest(test_servers.ServersSampleBase): sample_dir = "os-simple-tenant-usage" def setUp(self): """setUp method for simple tenant usage.""" super(SimpleTenantUsageSampleJsonTest, self).setUp() started = timeutils.utcnow() now = started + datetime.timedelta(hours=1) timeutils.set_time_override(started) self._post_server() timeutils.set_time_override(now) self.query = { 'start': str(started), 'end': str(now) } def tearDown(self): """tearDown method for simple tenant usage.""" super(SimpleTenantUsageSampleJsonTest, self).tearDown() timeutils.clear_time_override() def test_get_tenants_usage(self): # Get api sample to get all tenants usage request. response = self._do_get('os-simple-tenant-usage?%s' % ( parse.urlencode(self.query))) self._verify_response('simple-tenant-usage-get', {}, response, 200) def test_get_tenants_usage_with_detail(self): # Get all tenants usage information with detail. query = self.query.copy() query.update({'detailed': 1}) response = self._do_get('os-simple-tenant-usage?%s' % ( parse.urlencode(query))) self._verify_response('simple-tenant-usage-get-detail', {}, response, 200) def test_get_tenant_usage_details(self): # Get api sample to get specific tenant usage request. 
tenant_id = astb.PROJECT_ID response = self._do_get('os-simple-tenant-usage/%s?%s' % (tenant_id, parse.urlencode(self.query))) self._verify_response('simple-tenant-usage-get-specific', {}, response, 200) class SimpleTenantUsageV240Test(test_servers.ServersSampleBase): USE_PROJECT_ID = False sample_dir = 'os-simple-tenant-usage' microversion = '2.40' scenarios = [('v2_40', {'api_major_version': 'v2.1'})] def setUp(self): super(SimpleTenantUsageV240Test, self).setUp() self.api.microversion = self.microversion self.project_id_0 = astb.PROJECT_ID self.project_id_1 = '0000000e737461636b20342065000000' started = timeutils.utcnow() now = started + datetime.timedelta(hours=1) timeutils.set_time_override(started) with mock.patch('oslo_utils.uuidutils.generate_uuid') as mock_uuids: # make uuids incrementing, so that sort order is deterministic uuid_format = '1f1deceb-17b5-4c04-84c7-e0d4499c8f%02d' mock_uuids.side_effect = [uuid_format % x for x in range(100)] self.project_id = self.project_id_0 self.instance1_uuid = self._post_server(name='instance-1') self.instance2_uuid = self._post_server(name='instance-2') self.project_id = self.project_id_1 self.instance3_uuid = self._post_server(name='instance-3') timeutils.set_time_override(now) self.query = { 'start': str(started), 'end': str(now), 'limit': '1', 'marker': self.instance1_uuid, } def tearDown(self): super(SimpleTenantUsageV240Test, self).tearDown() timeutils.clear_time_override() def test_get_tenants_usage(self): url = 'os-simple-tenant-usage?%s' response = self._do_get(url % (parse.urlencode(self.query))) template_name = 'simple-tenant-usage-get' self._verify_response(template_name, {}, response, 200) def test_get_tenants_usage_with_detail(self): query = self.query.copy() query.update({'detailed': 1}) url = 'os-simple-tenant-usage?%s' response = self._do_get(url % (parse.urlencode(query))) template_name = 'simple-tenant-usage-get-detail' self._verify_response(template_name, {}, response, 200) def test_get_tenant_usage_details(self): tenant_id = self.project_id_0 url = 'os-simple-tenant-usage/{tenant}?%s'.format(tenant=tenant_id) response = self._do_get(url % (parse.urlencode(self.query))) template_name = 'simple-tenant-usage-get-specific' subs = {'tenant_id': self.project_id_0} self._verify_response(template_name, subs, response, 200) def test_get_tenants_usage_end_marker(self): # When using the last server retrieved as a marker, # the subsequent usages list should be empty (bug #1796689). url = 'os-simple-tenant-usage?%s' query = dict(detailed=1, start=self.query['start'], end=self.query['end']) response = self._do_get(url % (parse.urlencode(query))) template_name = 'simple-tenant-usage-get-all' self._verify_response(template_name, {}, response, 200) last_server = response.json()['tenant_usages'][-1]['server_usages'][-1] query['marker'] = last_server['instance_id'] response = self._do_get(url % (parse.urlencode(query))) self.assertEqual(0, len(response.json()['tenant_usages'])) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_suspend_server.py0000664000175000017500000000306700000000000027057 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api_sample_tests import test_servers class SuspendServerSamplesJsonTest(test_servers.ServersSampleBase): sample_dir = "os-suspend-server" def setUp(self): """setUp Method for SuspendServer api samples extension This method creates the server that will be used in each tests """ super(SuspendServerSamplesJsonTest, self).setUp() self.uuid = self._post_server() def test_post_suspend(self): # Get api samples to suspend server request. response = self._do_post('servers/%s/action' % self.uuid, 'server-suspend', {}) self.assertEqual(202, response.status_code) def test_post_resume(self): # Get api samples to server resume request. self.test_post_suspend() response = self._do_post('servers/%s/action' % self.uuid, 'server-resume', {}) self.assertEqual(202, response.status_code) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_tenant_networks.py0000664000175000017500000000244400000000000027233 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import nova.conf from nova.tests.functional.api_sample_tests import api_sample_base CONF = nova.conf.CONF class TenantNetworksJsonTests(api_sample_base.ApiSampleTestBaseV21): ADMIN_API = True sample_dir = "os-tenant-networks" def test_list_networks(self): response = self._do_get('os-tenant-networks') self._verify_response('networks-list-res', {}, response, 200) def test_create_network(self): self.api.api_post('os-tenant-networks', {}, check_response_status=[410]) def test_delete_network(self): self.api.api_delete('os-tenant-networks/1', check_response_status=[410]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_versions.py0000664000175000017500000000555100000000000025660 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import ddt import fixtures import webob from nova.api.openstack import api_version_request as avr from nova.tests.functional.api_sample_tests import api_sample_base @ddt.ddt class VersionsSampleJsonTest(api_sample_base.ApiSampleTestBaseV21): """Validate that proper version documents can be fetched without auth.""" # Here we want to avoid stubbing keystone middleware. That will cause # "real" keystone middleware to run (and fail) if it's in the pipeline. # (The point of this test is to prove we do version discovery through # pipelines that *don't* authenticate.) STUB_KEYSTONE = False USE_PROJECT_ID = False sample_dir = 'versions' # NOTE(gmann): Setting empty scenario for 'version' API testing # as those does not send request on particular endpoint and running # its tests alone is enough. scenarios = [] max_api_version = {'max_api_version': avr.max_api_version().get_string()} def setUp(self): super(VersionsSampleJsonTest, self).setUp() # Version documents are supposed to be available without auth, so make # the auth middleware "fail" authentication. self.useFixture(fixtures.MockPatch( # [api]auth_strategy is set to noauth2 by the ConfFixture 'nova.api.openstack.auth.NoAuthMiddlewareBase.base_call', return_value=webob.Response(status=401))) def _get(self, url): return self._do_get( url, # Since we're explicitly getting discovery endpoints, strip the # automatic /v2[.1] added by the fixture. strip_version=True) @ddt.data('', '/') def test_versions_get_base(self, url): response = self._get(url) self._verify_response('versions-get-resp', self.max_api_version, response, 200, update_links=False) @ddt.data(('/v2', 'v2-version-get-resp', {}), ('/v2/', 'v2-version-get-resp', {}), ('/v2.1', 'v21-version-get-resp', max_api_version), ('/v2.1/', 'v21-version-get-resp', max_api_version)) @ddt.unpack def test_versions_get_versioned(self, url, tplname, subs): response = self._get(url) self._verify_response(tplname, subs, response, 200, update_links=False) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_virtual_interfaces.py0000664000175000017500000000227500000000000027701 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_utils.fixture import uuidsentinel as uuids from nova.tests.functional.api import client as api_client from nova.tests.functional import api_samples_test_base class VirtualInterfacesJsonTest(api_samples_test_base.ApiSampleTestBase): api_major_version = 'v2' def test_vifs_list(self): uuid = uuids.instance_1 ex = self.assertRaises(api_client.OpenStackApiException, self.api.api_get, '/servers/%s/os-virtual-interfaces' % uuid) self.assertEqual(410, ex.response.status_code) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_sample_tests/test_volumes.py0000664000175000017500000002753700000000000025512 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime from nova.tests import fixtures from nova.tests.functional.api_sample_tests import api_sample_base from nova.tests.functional.api_sample_tests import test_servers from nova.tests.unit.api.openstack import fakes class SnapshotsSampleJsonTests(api_sample_base.ApiSampleTestBaseV21): sample_dir = "os-volumes" create_subs = { 'snapshot_name': 'snap-001', 'description': 'Daily backup', 'volume_id': '521752a6-acf6-4b2d-bc7a-119f9148cd8c' } def setUp(self): super(SnapshotsSampleJsonTests, self).setUp() self.stub_out("nova.volume.cinder.API.get_all_snapshots", fakes.stub_snapshot_get_all) self.stub_out("nova.volume.cinder.API.get_snapshot", fakes.stub_snapshot_get) def _create_snapshot(self): self.stub_out("nova.volume.cinder.API.create_snapshot", fakes.stub_snapshot_create) response = self._do_post("os-snapshots", "snapshot-create-req", self.create_subs) return response def test_snapshots_create(self): response = self._create_snapshot() self._verify_response("snapshot-create-resp", self.create_subs, response, 200) def test_snapshots_delete(self): self.stub_out("nova.volume.cinder.API.delete_snapshot", fakes.stub_snapshot_delete) self._create_snapshot() response = self._do_delete('os-snapshots/100') self.assertEqual(202, response.status_code) self.assertEqual('', response.text) def test_snapshots_detail(self): response = self._do_get('os-snapshots/detail') self._verify_response('snapshots-detail-resp', {}, response, 200) def test_snapshots_list(self): response = self._do_get('os-snapshots') self._verify_response('snapshots-list-resp', {}, response, 200) def test_snapshots_show(self): response = self._do_get('os-snapshots/100') subs = { 'snapshot_name': 'Default name', 'description': 'Default description' } self._verify_response('snapshots-show-resp', subs, response, 200) def _get_volume_id(): return 'a26887c6-c47b-4654-abb5-dfadf7d3f803' def _stub_volume(id, displayname="Volume Name", displaydesc="Volume Description", size=100): volume = { 'id': id, 'size': size, 'availability_zone': 'zone1:host1', 'status': 'in-use', 'attach_status': 'attached', 'name': 'vol name', 'display_name': displayname, 'display_description': displaydesc, 'created_at': datetime.datetime(2008, 12, 
1, 11, 1, 55), 'snapshot_id': None, 'volume_type_id': 'fakevoltype', 'volume_metadata': [], 'volume_type': {'name': 'Backup'}, 'multiattach': False, 'attachments': {'3912f2b4-c5ba-4aec-9165-872876fe202e': {'mountpoint': '/', 'attachment_id': 'a26887c6-c47b-4654-abb5-dfadf7d3f803' } } } return volume def _stub_volume_get(stub_self, context, volume_id): return _stub_volume(volume_id) def _stub_volume_delete(stub_self, context, *args, **param): pass def _stub_volume_get_all(stub_self, context, search_opts=None): id = _get_volume_id() return [_stub_volume(id)] def _stub_volume_create(stub_self, context, size, name, description, snapshot, **param): id = _get_volume_id() return _stub_volume(id) class VolumesSampleJsonTest(test_servers.ServersSampleBase): sample_dir = "os-volumes" def setUp(self): super(VolumesSampleJsonTest, self).setUp() fakes.stub_out_networking(self) self.stub_out("nova.volume.cinder.API.delete", _stub_volume_delete) self.stub_out("nova.volume.cinder.API.get", _stub_volume_get) self.stub_out("nova.volume.cinder.API.get_all", _stub_volume_get_all) def _post_volume(self): subs_req = { 'volume_name': "Volume Name", 'volume_desc': "Volume Description", } self.stub_out("nova.volume.cinder.API.create", _stub_volume_create) response = self._do_post('os-volumes', 'os-volumes-post-req', subs_req) self._verify_response('os-volumes-post-resp', subs_req, response, 200) def test_volumes_show(self): subs = { 'volume_name': "Volume Name", 'volume_desc': "Volume Description", } vol_id = _get_volume_id() response = self._do_get('os-volumes/%s' % vol_id) self._verify_response('os-volumes-get-resp', subs, response, 200) def test_volumes_index(self): subs = { 'volume_name': "Volume Name", 'volume_desc': "Volume Description", } response = self._do_get('os-volumes') self._verify_response('os-volumes-index-resp', subs, response, 200) def test_volumes_detail(self): # For now, index and detail are the same. # See the volumes api subs = { 'volume_name': "Volume Name", 'volume_desc': "Volume Description", } response = self._do_get('os-volumes/detail') self._verify_response('os-volumes-detail-resp', subs, response, 200) def test_volumes_create(self): self._post_volume() def test_volumes_delete(self): self._post_volume() vol_id = _get_volume_id() response = self._do_delete('os-volumes/%s' % vol_id) self.assertEqual(202, response.status_code) self.assertEqual('', response.text) class VolumeAttachmentsSample(test_servers.ServersSampleBase): sample_dir = "os-volumes" OLD_VOLUME_ID = fixtures.CinderFixture.SWAP_OLD_VOL NEW_VOLUME_ID = fixtures.CinderFixture.SWAP_NEW_VOL def setUp(self): super(VolumeAttachmentsSample, self).setUp() self.useFixture(fixtures.CinderFixture(self)) self.server_id = self._post_server() def _get_vol_attachment_subs(self, subs): """Allows subclasses to override/supplement request/response subs""" return subs def test_attach_volume_to_server(self): subs = { 'volume_id': self.OLD_VOLUME_ID, 'device': '/dev/sdb' } subs = self._get_vol_attachment_subs(subs) response = self._do_post('servers/%s/os-volume_attachments' % self.server_id, 'attach-volume-to-server-req', subs) self._verify_response('attach-volume-to-server-resp', subs, response, 200) return subs def test_list_volume_attachments(self): subs = self.test_attach_volume_to_server() # Attach another volume to the server so the response has multiple # which is more interesting since it's a list of dicts. 
body = { 'volumeAttachment': { 'volumeId': self.NEW_VOLUME_ID } } self.api.post_server_volume(self.server_id, body) response = self._do_get('servers/%s/os-volume_attachments' % self.server_id) subs['volume_id2'] = self.NEW_VOLUME_ID self._verify_response('list-volume-attachments-resp', subs, response, 200) def test_volume_attachment_detail(self): subs = self.test_attach_volume_to_server() response = self._do_get('servers/%s/os-volume_attachments/%s' % (self.server_id, subs['volume_id'])) self._verify_response('volume-attachment-detail-resp', subs, response, 200) def test_volume_attachment_delete(self): subs = self.test_attach_volume_to_server() response = self._do_delete('servers/%s/os-volume_attachments/%s' % (self.server_id, subs['volume_id'])) self.assertEqual(202, response.status_code) self.assertEqual('', response.text) def test_volume_attachment_update(self): subs = self.test_attach_volume_to_server() subs['new_volume_id'] = self.NEW_VOLUME_ID response = self._do_put('servers/%s/os-volume_attachments/%s' % (self.server_id, subs['volume_id']), 'update-volume-req', subs) self.assertEqual(202, response.status_code) self.assertEqual('', response.text) class VolumeAttachmentsSampleV249(VolumeAttachmentsSample): sample_dir = "os-volumes" microversion = '2.49' scenarios = [('v2_49', {'api_major_version': 'v2.1'})] def setUp(self): super(VolumeAttachmentsSampleV249, self).setUp() # Stub out ComputeManager._delete_disk_metadata since the fake virt # driver does not actually update the instance.device_metadata.devices # list with the tagged bdm disk device metadata. self.stub_out('nova.compute.manager.ComputeManager.' '_delete_disk_metadata', lambda *a, **kw: None) def _get_vol_attachment_subs(self, subs): return dict(subs, tag='foo') class VolumeAttachmentsSampleV270(VolumeAttachmentsSampleV249): """2.70 adds the "tag" parameter to the response body""" microversion = '2.70' scenarios = [('v2_70', {'api_major_version': 'v2.1'})] class VolumeAttachmentsSampleV279(VolumeAttachmentsSampleV270): """Microversion 2.79 adds the "delete_on_termination" parameter to the request and response body. """ microversion = '2.79' scenarios = [('v2_79', {'api_major_version': 'v2.1'})] class UpdateVolumeAttachmentsSampleV285(VolumeAttachmentsSampleV279): """Microversion 2.85 adds the ``PUT /servers/{server_id}/os-volume_attachments/{volume_id}`` support for specifying ``delete_on_termination`` field in the request body to re-config the attached volume whether to delete when the instance is deleted. 
""" microversion = '2.85' scenarios = [('v2_85', {'api_major_version': 'v2.1'})] def test_volume_attachment_update(self): subs = self.test_attach_volume_to_server() attached_volume_id = subs['volume_id'] subs['server_id'] = self.server_id response = self._do_put('servers/%s/os-volume_attachments/%s' % (self.server_id, attached_volume_id), 'update-volume-attachment-delete-flag-req', subs) self.assertEqual(202, response.status_code) self.assertEqual('', response.text) # Make sure the attached volume was changed attachments = self.api.api_get( '/servers/%s/os-volume_attachments' % self.server_id).body[ 'volumeAttachments'] self.assertEqual(1, len(attachments)) self.assertEqual(self.server_id, attachments[0]['serverId']) self.assertTrue(attachments[0]['delete_on_termination']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/api_samples_test_base.py0000664000175000017500000005436200000000000023747 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import pprint import re from oslo_serialization import jsonutils import six from nova import test from nova.tests.functional import integrated_helpers PROJECT_ID = "6f70656e737461636b20342065766572" # for pretty printing errors pp = pprint.PrettyPrinter(indent=4) class NoMatch(test.TestingException): pass def pretty_data(data): data = jsonutils.dumps(jsonutils.loads(data), sort_keys=True, indent=4) return '\n'.join(line.rstrip() for line in data.split('\n')).strip() def objectify(data): if not data: return {} # NOTE(sdague): templates will contain values like %(foo)s # throughout them. If these are inside of double quoted # strings, life is good, and we can treat it just like valid # json to load it to python. # # However we've got some fields which are ints, like # aggregate_id. This means we've got a snippet in the sample # that looks like: # # "id": %(aggregate_id)s, # # which is not valid json, and will explode. We do a quick and # dirty transform of this to: # # "id": "%(int:aggregate_id)s", # # That makes it valid data to convert to json, but keeps # around the information that we need to drop those strings # later. The regex anchors from the ': ', as all of these will # be top rooted keys. 
data = re.sub(r'(\: )%\((.+)\)s([^"])', r'\1"%(int:\2)s"\3', data) return jsonutils.loads(data) class ApiSampleTestBase(integrated_helpers._IntegratedTestBase): sample_dir = None microversion = None _use_common_server_api_samples = False def __init__(self, *args, **kwargs): super(ApiSampleTestBase, self).__init__(*args, **kwargs) self.subs = {} # TODO(auggy): subs should really be a class @property def subs(self): return self._subs @subs.setter def subs(self, value): non_strings = \ {k: v for k, v in value.items() if (not k == 'compute_host') and (not isinstance(v, six.string_types))} if len(non_strings) > 0: raise TypeError("subs can't contain non-string values:" "\n%(non_strings)s" % {'non_strings': non_strings}) else: self._subs = value @classmethod def _get_sample_path(cls, name, dirname, suffix='', api_version=None): parts = [dirname] parts.append('api_samples') # Note(gmann): if _use_common_server_api_samples is set to True # then common server sample files present in 'servers' directory # will be used. As of now it is being used for server POST request # to avoid duplicate copy of server req and resp sample files. # Example - ServersSampleBase's _post_server method. if cls._use_common_server_api_samples: parts.append('servers') else: parts.append(cls.sample_dir) if api_version: parts.append('v' + api_version) parts.append(name + ".json" + suffix) return os.path.join(*parts) @classmethod def _get_sample(cls, name, api_version=None): dirname = os.path.dirname(os.path.abspath(__file__)) dirname = os.path.normpath(os.path.join(dirname, "../../../doc")) return cls._get_sample_path(name, dirname, api_version=api_version) @classmethod def _get_template(cls, name, api_version=None): dirname = os.path.dirname(os.path.abspath(__file__)) dirname = os.path.normpath(os.path.join(dirname, "./api_sample_tests")) return cls._get_sample_path(name, dirname, suffix='.tpl', api_version=api_version) def _read_template(self, name): template = self._get_template(name, self.microversion) with open(template) as inf: return inf.read().strip() def _write_template(self, name, data): with open(self._get_template(name, self.microversion), 'w') as outf: outf.write(data) def _write_sample(self, name, data): with open(self._get_sample( name, self.microversion), 'w') as outf: outf.write(data) def _compare_result(self, expected, result, result_str): matched_value = None # None if expected is None: if result is None: pass elif result == u'': pass # TODO(auggy): known issue Bug#1544720 else: raise NoMatch('%(result_str)s: Expected None, got %(result)s.' % {'result_str': result_str, 'result': result}) # dictionary elif isinstance(expected, dict): if not isinstance(result, dict): raise NoMatch('%(result_str)s: %(result)s is not a dict.' 
% {'result_str': result_str, 'result': result}) ex_keys = sorted(expected.keys()) res_keys = sorted(result.keys()) if ex_keys != res_keys: ex_delta = [] res_delta = [] for key in ex_keys: if key not in res_keys: ex_delta.append(key) for key in res_keys: if key not in ex_keys: res_delta.append(key) raise NoMatch( 'Dictionary key mismatch:\n' 'Extra key(s) in template:\n%(ex_delta)s\n' 'Extra key(s) in %(result_str)s:\n%(res_delta)s\n' % {'ex_delta': ex_delta, 'result_str': result_str, 'res_delta': res_delta}) for key in ex_keys: # TODO(auggy): pass key name along as well for error reporting res = self._compare_result(expected[key], result[key], result_str) matched_value = res or matched_value # list elif isinstance(expected, list): if not isinstance(result, list): raise NoMatch( '%(result_str)s: %(result)s is not a list.' % {'result_str': result_str, 'result': result}) expected = expected[:] extra = [] # if it's a list of 1, do the simple compare which gives a # better error message. if len(result) == len(expected) == 1: return self._compare_result(expected[0], result[0], result_str) # This is clever enough to need some explanation. What we # are doing here is looping the result list, and trying to # compare it to every item in the expected list. If there # is more than one, we're going to get fails. We ignore # those. But every time we match an expected we drop it, # and break to the next iteration. Every time we hit the # end of the iteration, we add our results into a bucket # of non matched. # # This results in poor error messages because we don't # really know why the elements failed to match each # other. A more complicated diff might be nice. for res_obj in result: for i, ex_obj in enumerate(expected): try: matched_value = self._compare_result(ex_obj, res_obj, result_str) del expected[i] break except NoMatch: pass else: extra.append(res_obj) error = [] if expected: error.append('Extra list items in template:') error.extend([repr(o) for o in expected]) if extra: error.append('Extra list items in %(result_str)s:' % {'result_str': result_str}) error.extend([repr(o) for o in extra]) if error: raise NoMatch('\n'.join(error)) # template string elif isinstance(expected, six.string_types) and '%' in expected: # NOTE(vish): escape stuff for regex for char in '[]<>?': expected = expected.replace(char, '\\%s' % char) # NOTE(vish): special handling of subs that are not quoted. We are # expecting an int but we had to pass in a string # so the json would parse properly. 
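            # Illustrative walk-through (editorial comment; the values are
            # made up for the example): a template value of
            # '%(int:aggregate_id)s' with subs = {'aggregate_id': '[0-9]+'}
            # and an integer result of 1 takes the branch below as
            #   result   -> '1'
            #   expected -> '%(aggregate_id)s' -> '[0-9]+' -> '^[0-9]+$'
            # so the final check is the regex match of '^[0-9]+$' against '1'.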
if expected.startswith("%(int:"): result = str(result) expected = expected.replace('int:', '') expected = expected % self.subs expected = '^%s$' % expected try: match = re.match(expected, result) except TypeError as e: raise NoMatch( 'Values do not match:\n' 'Template: %(expected)s\n%(result_str)s: %(result)s\n' 'Error: %(error)s' % {'expected': expected, 'result_str': result_str, 'result': result, 'error': e}) if not match: raise NoMatch( 'Values do not match:\n' 'Template: %(expected)s\n%(result_str)s: %(result)s' % {'expected': expected, 'result_str': result_str, 'result': result}) try: matched_value = match.group('id') except IndexError: if match.groups(): matched_value = match.groups()[0] # string elif isinstance(expected, six.string_types): # NOTE(danms): Ignore whitespace in this comparison expected = expected.strip() if isinstance(result, six.string_types): result = result.strip() if expected != result: # NOTE(tdurakov):this attempt to parse string as JSON # is needed for correct comparison of hypervisor.cpu_info, # which is stringified JSON object # # TODO(tdurakov): remove this check as soon as # hypervisor.cpu_info become common JSON object in REST API. try: expected = objectify(expected) result = objectify(result) return self._compare_result(expected, result, result_str) except ValueError: pass raise NoMatch( 'Values do not match:\n' 'Template: %(expected)s\n%(result_str)s: ' '%(result)s' % {'expected': expected, 'result_str': result_str, 'result': result}) # int elif isinstance(expected, (six.integer_types, float)): if expected != result: raise NoMatch( 'Values do not match:\n' 'Template: %(expected)s\n%(result_str)s: ' '%(result)s' % {'expected': expected, 'result_str': result_str, 'result': result}) else: raise ValueError( 'Unexpected type %(expected_type)s' % {'expected_type': type(expected)}) return matched_value @property def project_id(self): # We'll allow test cases to override the default project id. This is # useful when using multiple tenants. project_id = None try: project_id = self.api.project_id except AttributeError: pass return project_id or PROJECT_ID @project_id.setter def project_id(self, project_id): self.api.project_id = project_id # Reset cached credentials self.api.auth_result = None def generalize_subs(self, subs, vanilla_regexes): """Give the test a chance to modify subs after the server response was verified, and before the on-disk doc/api_samples file is checked. This may be needed by some tests to convert exact matches expected from the server into pattern matches to verify what is in the sample file. If there are no changes to be made, subs is returned unharmed. """ return subs def _update_links(self, sample_data): """Process sample data and update version specific links.""" # replace version urls project_id_exp = '(%s|%s)' % (PROJECT_ID, self.project_id) url_re = self._get_host() + r"/v(2|2\.1)/" + project_id_exp new_url = self._get_host() + "/" + self.api_major_version if self.USE_PROJECT_ID: new_url += "/" + self.project_id updated_data = re.sub(url_re, new_url, sample_data) # replace unversioned urls url_re = self._get_host() + "/" + project_id_exp new_url = self._get_host() if self.USE_PROJECT_ID: new_url += "/" + self.project_id updated_data = re.sub(url_re, new_url, updated_data) return updated_data def _verify_response(self, name, subs, response, exp_code, update_links=True): # Always also include the laundry list of base regular # expressions for possible key values in our templates. 
Test # specific patterns (the value of ``subs``) can override # these. regexes = self._get_regexes() regexes.update(subs) subs = regexes self.subs = subs message = response.text if response.status_code >= 400 else None self.assertEqual(exp_code, response.status_code, message) response_data = response.content response_data = pretty_data(response_data) if not os.path.exists(self._get_template(name, self.microversion)): self._write_template(name, response_data) template_data = response_data else: template_data = self._read_template(name) if (self.generate_samples and not os.path.exists(self._get_sample( name, self.microversion))): self._write_sample(name, response_data) sample_data = response_data else: with open(self._get_sample(name, self.microversion)) as sample: sample_data = sample.read() if update_links: sample_data = self._update_links(sample_data) try: template_data = objectify(template_data) response_data = objectify(response_data) response_result = self._compare_result(template_data, response_data, "Response") except NoMatch as e: raise NoMatch("\nFailed to match Template to Response: \n%s\n" "Template: %s\n\n" "Response: %s\n\n" % (e, pp.pformat(template_data), pp.pformat(response_data))) try: # NOTE(danms): replace some of the subs with patterns for the # doc/api_samples check, which won't have things like the # correct compute host name. Also let the test do some of its # own generalization, if necessary vanilla_regexes = self._get_regexes() subs['compute_host'] = vanilla_regexes['host_name'] subs['id'] = vanilla_regexes['id'] subs['uuid'] = vanilla_regexes['uuid'] subs['image_id'] = vanilla_regexes['uuid'] subs = self.generalize_subs(subs, vanilla_regexes) self.subs = subs sample_data = objectify(sample_data) self._compare_result(template_data, sample_data, "Sample") return response_result except NoMatch as e: raise NoMatch("\nFailed to match Template to Sample: \n%s\n" "Template: %s\n\n" "Sample: %s\n\n" "Hint: does your test need to override " "ApiSampleTestBase.generalize_subs()?" % (e, pp.pformat(template_data), pp.pformat(sample_data))) def _get_host(self): return 'http://openstack.example.com' def _get_glance_host(self): return 'http://glance.openstack.example.com' def _get_regexes(self): text = r'(\\"|[^"])*' isotime_re = r'\d{4}-[0,1]\d-[0-3]\dT\d{2}:\d{2}:\d{2}Z' strtime_re = r'\d{4}-[0,1]\d-[0-3]\dT\d{2}:\d{2}:\d{2}\.\d{6}' strtime_url_re = (r'\d{4}-[0,1]\d-[0-3]\d' r'\+\d{2}\%3A\d{2}\%3A\d{2}\.\d{6}') xmltime_re = (r'\d{4}-[0,1]\d-[0-3]\d ' r'\d{2}:\d{2}:\d{2}' r'(\.\d{6})?(\+00:00)?') # NOTE(claudiub): the x509 keypairs are different from the # ssh keypairs. For example, the x509 fingerprint has 40 bytes. 
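        # Editorial note (illustration only): the patterns returned below are
        # the default substitutions that _verify_response() merges into
        # ``subs``, so a template value such as "%(uuid)s" or "%(isotime)s"
        # is expanded into the matching regex here and then compared against
        # the real response value by _compare_result().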
return { 'isotime': isotime_re, 'strtime': strtime_re, 'strtime_url': strtime_url_re, 'strtime_or_none': r'None|%s' % strtime_re, 'xmltime': xmltime_re, 'password': '[0-9a-zA-Z]{1,12}', 'ip': '[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}', 'ip6': '([0-9a-zA-Z]{1,4}:){1,7}:?[0-9a-zA-Z]{1,4}', 'id': '(?P[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}' '-[0-9a-f]{4}-[0-9a-f]{12})', 'uuid': '[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}' '-[0-9a-f]{4}-[0-9a-f]{12}', 'request_id': 'req-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}' '-[0-9a-f]{4}-[0-9a-f]{12}', 'reservation_id': 'r-[0-9a-zA-Z]{8}', 'private_key': '(-----BEGIN RSA PRIVATE KEY-----|)' '[a-zA-Z0-9\n/+=]*' '(-----END RSA PRIVATE KEY-----|)', 'public_key': '(ssh-rsa|-----BEGIN CERTIFICATE-----)' '[ a-zA-Z0-9\n/+=]*' '(Generated-by-Nova|-----END CERTIFICATE-----)', 'fingerprint': '(([0-9a-f]{2}:){19}|([0-9a-f]{2}:){15})' '[0-9a-f]{2}', 'keypair_type': 'ssh|x509', 'host': self._get_host(), 'host_name': r'\w+', 'glance_host': self._get_glance_host(), 'compute_host': self.compute.host, 'text': text, 'int': '[0-9]+', 'user_id': text, 'api_vers': self.api_major_version, 'compute_endpoint': self._get_compute_endpoint(), 'versioned_compute_endpoint': self._get_vers_compute_endpoint(), } def _get_compute_endpoint(self): # NOTE(sdague): "openstack" is stand in for project_id, it # should be more generic in future. if self.USE_PROJECT_ID: return '%s/%s' % (self._get_host(), self.project_id) else: return self._get_host() def _get_vers_compute_endpoint(self): # NOTE(sdague): "openstack" is stand in for project_id, it # should be more generic in future. if self.USE_PROJECT_ID: return '%s/%s/%s' % (self._get_host(), self.api_major_version, self.project_id) else: return '%s/%s' % (self._get_host(), self.api_major_version) def _get_response(self, url, method, body=None, strip_version=False, headers=None): headers = headers or {} headers['Content-Type'] = 'application/json' headers['Accept'] = 'application/json' return self.api.api_request(url, body=body, method=method, headers=headers, strip_version=strip_version) def _do_options(self, url, strip_version=False, headers=None): return self._get_response(url, 'OPTIONS', strip_version=strip_version, headers=headers) def _do_get(self, url, strip_version=False, headers=None, return_json_body=False): response = self._get_response(url, 'GET', strip_version=strip_version, headers=headers) if return_json_body and hasattr(response, 'content'): return jsonutils.loads(response.content) return response def _do_post(self, url, name=None, subs=None, method='POST', headers=None): self.subs = {} if subs is None else subs body = None if name: body = self._read_template(name) % self.subs sample = self._get_sample(name, self.microversion) if self.generate_samples and not os.path.exists(sample): self._write_sample(name, body) return self._get_response(url, method, body, headers=headers) def _do_put(self, url, name=None, subs=None, headers=None): # name indicates that we have a body document. While the HTTP # spec implies that PUT is supposed to have one, we have some # APIs which don't. 
if name: return self._do_post( url, name, subs, method='PUT', headers=headers) else: return self._get_response(url, 'PUT', headers=headers) def _do_delete(self, url, headers=None): return self._get_response(url, 'DELETE', headers=headers) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4864686 nova-21.2.4/nova/tests/functional/compute/0000775000175000017500000000000000000000000020511 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/compute/__init__.py0000664000175000017500000000000000000000000022610 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/compute/test_cache_image.py0000664000175000017500000001035700000000000024335 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils.fixture import uuidsentinel as uuids from nova import context from nova import objects from nova import test from nova.tests.unit import fake_notifier class ImageCacheTest(test.TestCase): NUMBER_OF_CELLS = 2 def setUp(self): super(ImageCacheTest, self).setUp() self.flags(compute_driver='fake.FakeDriverWithCaching') fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) self.context = context.get_admin_context() self.conductor = self.start_service('conductor') self.compute1 = self.start_service('compute', host='compute1') self.compute2 = self.start_service('compute', host='compute2') self.compute3 = self.start_service('compute', host='compute3', cell_name='cell2') self.compute4 = self.start_service('compute', host='compute4', cell_name='cell2') self.compute5 = self.start_service('compute', host='compute5', cell_name='cell2') cell2 = self.cell_mappings['cell2'] with context.target_cell(self.context, cell2) as cctxt: srv = objects.Service.get_by_compute_host(cctxt, 'compute5') srv.forced_down = True srv.save() def test_cache_image(self): """Test caching images by injecting the request directly to the conductor service and making sure it fans out and calls the expected nodes. """ aggregate = objects.Aggregate(name='test', uuid=uuids.aggregate, id=1, hosts=['compute1', 'compute3', 'compute4', 'compute5']) self.conductor.compute_task_mgr.cache_images( self.context, aggregate, ['an-image']) # NOTE(danms): We expect only three image cache attempts because # compute5 is marked as forced-down and compute2 is not in the # requested aggregate. 
for host in ['compute1', 'compute3', 'compute4']: mgr = getattr(self, host) self.assertEqual(set(['an-image']), mgr.driver.cached_images) for host in ['compute2', 'compute5']: mgr = getattr(self, host) self.assertEqual(set(), mgr.driver.cached_images) fake_notifier.wait_for_versioned_notifications( 'aggregate.cache_images.start') progress = fake_notifier.wait_for_versioned_notifications( 'aggregate.cache_images.progress', n_events=4) self.assertEqual(4, len(progress), progress) for notification in progress: payload = notification['payload']['nova_object.data'] if payload['host'] == 'compute5': self.assertEqual(['an-image'], payload['images_failed']) self.assertEqual([], payload['images_cached']) else: self.assertEqual(['an-image'], payload['images_cached']) self.assertEqual([], payload['images_failed']) self.assertLessEqual(payload['index'], 4) self.assertGreater(payload['index'], 0) self.assertEqual(4, payload['total']) self.assertIn('conductor', notification['publisher_id']) fake_notifier.wait_for_versioned_notifications( 'aggregate.cache_images.end') logtext = self.stdlog.logger.output self.assertIn( '3 cached, 0 existing, 0 errors, 0 unsupported, 1 skipped', logtext) self.assertNotIn( 'Image pre-cache operation for image an-image failed', logtext) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/compute/test_host_api.py0000664000175000017500000001300100000000000023723 0ustar00zuulzuul00000000000000# Copyright 2017 Huawei Technologies Co.,LTD. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils.fixture import uuidsentinel as uuids from nova.compute import api as compute_api from nova import context from nova import exception from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures class ComputeHostAPIMultiCellTestCase(test.NoDBTestCase): """Tests for the HostAPI with multiple cells.""" USES_DB_SELF = True def setUp(self): super(ComputeHostAPIMultiCellTestCase, self).setUp() self.host_api = compute_api.HostAPI() self.useFixture(nova_fixtures.Database(database='api')) celldbs = nova_fixtures.CellDatabases() celldbs.add_cell_database(objects.CellMapping.CELL0_UUID) celldbs.add_cell_database(uuids.cell1, default=True) celldbs.add_cell_database(uuids.cell2) self.useFixture(celldbs) self.ctxt = context.get_admin_context() cell0 = objects.CellMapping( context=self.ctxt, uuid=objects.CellMapping.CELL0_UUID, database_connection=objects.CellMapping.CELL0_UUID, transport_url='none:///') cell0.create() cell1 = objects.CellMapping( context=self.ctxt, uuid=uuids.cell1, database_connection=uuids.cell1, transport_url='none:///') cell1.create() cell2 = objects.CellMapping( context=self.ctxt, uuid=uuids.cell2, database_connection=uuids.cell2, transport_url='none:///') cell2.create() self.cell_mappings = (cell0, cell1, cell2) def test_compute_node_get_all_uuid_marker(self): """Tests paging over multiple cells with a uuid marker. 
This test is going to setup three compute nodes in two cells for a total of six compute nodes. Then it will page over them with a limit of two so there should be three pages total. """ # create the compute nodes in the non-cell0 cells count = 0 for cell in self.cell_mappings[1:]: for x in range(3): compute_node_uuid = getattr(uuids, 'node_%s' % count) with context.target_cell(self.ctxt, cell) as cctxt: node = objects.ComputeNode( cctxt, uuid=compute_node_uuid, host=compute_node_uuid, vcpus=2, memory_mb=2048, local_gb=128, vcpus_used=0, memory_mb_used=0, local_gb_used=0, cpu_info='{}', hypervisor_type='fake', hypervisor_version=10) node.create() count += 1 # create a host mapping for the compute to link it to the cell host_mapping = objects.HostMapping( self.ctxt, host=compute_node_uuid, cell_mapping=cell) host_mapping.create() # now start paging with a limit of two per page; the first page starts # with no marker compute_nodes = self.host_api.compute_node_get_all(self.ctxt, limit=2) # assert that we got two compute nodes from cell1 self.assertEqual(2, len(compute_nodes)) for compute_node in compute_nodes: host_mapping = objects.HostMapping.get_by_host( self.ctxt, compute_node.host) self.assertEqual(uuids.cell1, host_mapping.cell_mapping.uuid) # now our marker is the last item in the first page marker = compute_nodes[-1].uuid compute_nodes = self.host_api.compute_node_get_all( self.ctxt, limit=2, marker=marker) # assert that we got the last compute node from cell1 and the first # compute node from cell2 self.assertEqual(2, len(compute_nodes)) host_mapping = objects.HostMapping.get_by_host( self.ctxt, compute_nodes[0].host) self.assertEqual(uuids.cell1, host_mapping.cell_mapping.uuid) host_mapping = objects.HostMapping.get_by_host( self.ctxt, compute_nodes[1].host) self.assertEqual(uuids.cell2, host_mapping.cell_mapping.uuid) # now our marker is the last item in the second page; make the limit=3 # so we make sure we've exhausted the pages marker = compute_nodes[-1].uuid compute_nodes = self.host_api.compute_node_get_all( self.ctxt, limit=3, marker=marker) # assert that we got two compute nodes from cell2 self.assertEqual(2, len(compute_nodes)) for compute_node in compute_nodes: host_mapping = objects.HostMapping.get_by_host( self.ctxt, compute_node.host) self.assertEqual(uuids.cell2, host_mapping.cell_mapping.uuid) def test_compute_node_get_all_uuid_marker_not_found(self): """Simple test to make sure we get MarkerNotFound raised up if we try paging with a uuid marker that is not found in any cell. """ self.assertRaises(exception.MarkerNotFound, self.host_api.compute_node_get_all, self.ctxt, limit=10, marker=uuids.not_found) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/compute/test_init_host.py0000664000175000017500000002125000000000000024122 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
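# NOTE: editor-added illustrative sketch, not part of the original test
# module; nothing below calls it. It shows how the limit/marker contract
# exercised by ComputeHostAPIMultiCellTestCase above is meant to be consumed
# by a caller: keep feeding the uuid of the last returned compute node back in
# as the marker until an empty page comes back.
def _example_iterate_compute_nodes(host_api, ctxt, page_size=2):
    """Yield every compute node by paging compute_node_get_all()."""
    marker = None
    while True:
        page = host_api.compute_node_get_all(
            ctxt, limit=page_size, marker=marker)
        if not page:
            return
        for node in page:
            yield node
        # Resume the next page after the last node we saw.
        marker = page[-1].uuid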
import mock import time from nova import context as nova_context from nova import objects from nova.tests.functional import integrated_helpers from nova.tests.unit import fake_notifier class ComputeManagerInitHostTestCase( integrated_helpers.ProviderUsageBaseTestCase): """Tests various actions performed when the nova-compute service starts.""" compute_driver = 'fake.MediumFakeDriver' def test_migrate_disk_and_power_off_crash_finish_revert_migration(self): """Tests the scenario that the compute service crashes while the driver's migrate_disk_and_power_off method is running (we could be slow transferring disks or something when it crashed) and on restart of the compute service the driver's finish_revert_migration method is called to cleanup the source host and reset the instance task_state. """ # Start two compute service so we migrate across hosts. for x in range(2): self._start_compute('host%d' % x) # Create a server, it does not matter on which host it lands. server = self._build_server(networks='auto') server = self.api.post_server({'server': server}) server = self._wait_for_state_change(server, 'ACTIVE') # Save the source hostname for assertions later. source_host = server['OS-EXT-SRV-ATTR:host'] def fake_migrate_disk_and_power_off(*args, **kwargs): # Simulate the source compute service crashing by restarting it. self.restart_compute_service(self.computes[source_host]) # We have to keep the method from returning before asserting the # _init_instance restart behavior otherwise resize_instance will # fail and set the instance to ERROR status, revert allocations, # etc which is not realistic if the service actually crashed while # migrate_disk_and_power_off was running. # The sleep value needs to be large to avoid this waking up and # interfering with other tests running on the same worker. time.sleep(1000000) source_driver = self.computes[source_host].manager.driver with mock.patch.object(source_driver, 'migrate_disk_and_power_off', side_effect=fake_migrate_disk_and_power_off): # Initiate a cold migration from the source host. self.admin_api.post_server_action(server['id'], {'migrate': None}) # Now wait for the task_state to be reset to None during # _init_instance. server = self._wait_for_server_parameter(server, { 'status': 'ACTIVE', 'OS-EXT-STS:task_state': None, 'OS-EXT-SRV-ATTR:host': source_host } ) # Assert we went through the _init_instance processing we expect. log_out = self.stdlog.logger.output self.assertIn('Instance found in migrating state during startup. ' 'Resetting task_state', log_out) # Assert that driver.finish_revert_migration did not raise an error. self.assertNotIn('Failed to revert crashed migration', log_out) # The migration status should be "error" rather than stuck as # "migrating". context = nova_context.get_admin_context() # FIXME(mriedem): This is bug 1836369 because we would normally expect # Migration.get_by_instance_and_status to raise # MigrationNotFoundByStatus since the status should be "error". objects.Migration.get_by_instance_and_status( context, server['id'], 'migrating') # Assert things related to the resize get cleaned up: # - things set on the instance during prep_resize like: # - migration_context # - new_flavor # - stashed old_vm_state in system_metadata # - migration-based allocations from conductor/scheduler, i.e. 
that the # allocations created by the scheduler for the instance and dest host # are gone and the source host allocations are back on the instance # rather than the migration record instance = objects.Instance.get_by_uuid( context, server['id'], expected_attrs=[ 'migration_context', 'flavor', 'system_metadata' ]) # FIXME(mriedem): Leaving these fields set on the instance is # bug 1836369. self.assertIsNotNone(instance.migration_context) self.assertIsNotNone(instance.new_flavor) self.assertEqual('active', instance.system_metadata['old_vm_state']) dest_host = 'host0' if source_host == 'host1' else 'host1' dest_rp_uuid = self._get_provider_uuid_by_host(dest_host) dest_allocations = self._get_allocations_by_provider_uuid(dest_rp_uuid) # FIXME(mriedem): This is bug 1836369 because we orphaned the # allocations created by the scheduler for the server on the dest host. self.assertIn(server['id'], dest_allocations) source_rp_uuid = self._get_provider_uuid_by_host(source_host) source_allocations = self._get_allocations_by_provider_uuid( source_rp_uuid) # FIXME(mriedem): This is bug 1836369 because the server is running on # the source host but is not tracking allocations against the source # host. self.assertNotIn(server['id'], source_allocations) class TestComputeRestartInstanceStuckInBuild( integrated_helpers.ProviderUsageBaseTestCase): compute_driver = 'fake.SmallFakeDriver' def setUp(self): super(TestComputeRestartInstanceStuckInBuild, self).setUp() self.compute1 = self._start_compute(host='host1') def test_restart_compute_while_instance_waiting_for_resource_claim(self): """Test for bug 1833581 where an instance is stuck in BUILD state forever due to compute service is restarted before the resource claim finished. """ # To reproduce the problem we need to stop / kill the compute service # when an instance build request has already reached the service but # the instance_claim() has not finished. One way that this # happens in practice is when multiple builds are waiting for the # 'nova-compute-resource' semaphore. So one way to reproduce this in # the test would be to grab that semaphore, boot an instance, wait for # it to reach the compute then stop the compute. # Unfortunately when we release the semaphore after the simulated # compute restart the original instance_claim execution continues as # the stopped compute is not 100% stopped in the func test env. Also # we cannot really keep the semaphore forever as this named semaphore # is shared between the old and new compute service. # There is another way to trigger the issue. We can inject a sleep into # instance_claim() to stop it. This is less realistic but it works in # the test env. server_req = self._build_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', networks='none') def sleep_forever(*args, **kwargs): time.sleep(1000000) with mock.patch('nova.compute.resource_tracker.ResourceTracker.' 'instance_claim') as mock_instance_claim: mock_instance_claim.side_effect = sleep_forever server = self.api.post_server({'server': server_req}) self._wait_for_state_change(server, 'BUILD') # the instance.create.start is the closest thing to the # instance_claim call we can wait for in the test fake_notifier.wait_for_versioned_notifications( 'instance.create.start') with mock.patch('nova.compute.manager.LOG.debug') as mock_log: self.restart_compute_service(self.compute1) # We expect that the instance is pushed to ERROR state during the # compute restart. 
self._wait_for_state_change(server, 'ERROR') mock_log.assert_called_with( 'Instance spawn was interrupted before instance_claim, setting ' 'instance to ERROR state', instance=mock.ANY) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/compute/test_instance_list.py0000664000175000017500000005732700000000000024777 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime from oslo_utils.fixture import uuidsentinel as uuids from nova.compute import instance_list from nova import context from nova.db import api as db from nova import exception from nova import objects from nova import test class InstanceListTestCase(test.TestCase): NUMBER_OF_CELLS = 3 def setUp(self): super(InstanceListTestCase, self).setUp() self.context = context.RequestContext('fake', 'fake') self.num_instances = 3 self.instances = [] start = datetime.datetime(1985, 10, 25, 1, 21, 0) dt = start spread = datetime.timedelta(minutes=10) self.cells = objects.CellMappingList.get_all(self.context) # Create three instances in each of the real cells. Leave the # first cell empty to make sure we don't break with an empty # one. for cell in self.cells[1:]: for i in range(0, self.num_instances): with context.target_cell(self.context, cell) as cctx: inst = objects.Instance( context=cctx, project_id=self.context.project_id, user_id=self.context.user_id, created_at=start, launched_at=dt, instance_type_id=i, hostname='%s-inst%i' % (cell.name, i)) inst.create() if i % 2 == 0: # Make some faults for this instance for n in range(0, i + 1): msg = 'fault%i-%s' % (n, inst.hostname) f = objects.InstanceFault(context=cctx, instance_uuid=inst.uuid, code=i, message=msg, details='fake', host='fakehost') f.create() self.instances.append(inst) im = objects.InstanceMapping(context=self.context, project_id=inst.project_id, user_id=inst.user_id, instance_uuid=inst.uuid, cell_mapping=cell) im.create() dt += spread def test_get_sorted(self): filters = {} limit = None marker = None columns = [] sort_keys = ['uuid'] sort_dirs = ['asc'] obj, insts = instance_list.get_instances_sorted(self.context, filters, limit, marker, columns, sort_keys, sort_dirs) uuids = [inst['uuid'] for inst in insts] self.assertEqual(sorted(uuids), uuids) self.assertEqual(len(self.instances), len(uuids)) def test_get_sorted_descending(self): filters = {} limit = None marker = None columns = [] sort_keys = ['uuid'] sort_dirs = ['desc'] obj, insts = instance_list.get_instances_sorted(self.context, filters, limit, marker, columns, sort_keys, sort_dirs) uuids = [inst['uuid'] for inst in insts] self.assertEqual(list(reversed(sorted(uuids))), uuids) self.assertEqual(len(self.instances), len(uuids)) def test_get_sorted_with_filter(self): filters = {'instance_type_id': 1} limit = None marker = None columns = [] sort_keys = ['uuid'] sort_dirs = ['asc'] obj, insts = instance_list.get_instances_sorted(self.context, filters, limit, marker, columns, sort_keys, sort_dirs) uuids = 
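        # Editorial clarification of the fixture this loop builds: each
        # populated cell gets three instances; within a cell the 0th instance
        # gets one fault and the 2nd gets three, while the 1st gets none.
        # That layout is what the fault-counting assertions below
        # (NUMBER_OF_CELLS * 2 faulted instances) rely on.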
[inst['uuid'] for inst in insts] expected = [inst['uuid'] for inst in self.instances if inst['instance_type_id'] == 1] self.assertEqual(list(sorted(expected)), uuids) def test_get_sorted_by_defaults(self): filters = {} limit = None marker = None columns = [] sort_keys = None sort_dirs = None obj, insts = instance_list.get_instances_sorted(self.context, filters, limit, marker, columns, sort_keys, sort_dirs) uuids = set([inst['uuid'] for inst in insts]) expected = set([inst['uuid'] for inst in self.instances]) self.assertEqual(expected, uuids) def test_get_sorted_with_limit(self): obj, insts = instance_list.get_instances_sorted(self.context, {}, 5, None, [], ['uuid'], ['asc']) uuids = [inst['uuid'] for inst in insts] had_uuids = [inst.uuid for inst in self.instances] self.assertEqual(sorted(had_uuids)[:5], uuids) self.assertEqual(5, len(uuids)) def test_get_sorted_with_large_limit(self): obj, insts = instance_list.get_instances_sorted(self.context, {}, 5000, None, [], ['uuid'], ['asc']) uuids = [inst['uuid'] for inst in insts] self.assertEqual(sorted(uuids), uuids) self.assertEqual(len(self.instances), len(uuids)) def test_get_sorted_with_large_limit_batched(self): obj, insts = instance_list.get_instances_sorted(self.context, {}, 5000, None, [], ['uuid'], ['asc'], batch_size=2) uuids = [inst['uuid'] for inst in insts] self.assertEqual(sorted(uuids), uuids) self.assertEqual(len(self.instances), len(uuids)) def _test_get_sorted_with_limit_marker(self, sort_by, pages=2, pagesize=2, sort_dir='asc'): """Get multiple pages by a sort key and validate the results. This requests $pages of $pagesize, followed by a final page with no limit, and a final-final page which should be empty. It validates that we got a consistent set of results no patter where the page boundary is, that we got all the results after the unlimited query, and that the final page comes back empty when we use the last instance as a marker. """ insts = [] page = 0 while True: if page >= pages: # We've requested the specified number of limited (by pagesize) # pages, so request a penultimate page with no limit which # should always finish out the result. limit = None else: # Request a limited-size page for the first $pages pages. limit = pagesize if insts: # If we're not on the first page, use the last instance we # received as the marker marker = insts[-1]['uuid'] else: # No marker for the first page marker = None batch = list( instance_list.get_instances_sorted(self.context, {}, limit, marker, [], [sort_by], [sort_dir])[1]) if not batch: # This should only happen when we've pulled the last empty # page because we used the marker of the last instance. If # we end up with a non-deterministic ordering, we'd loop # forever. break insts.extend(batch) page += 1 if page > len(self.instances) * 2: # Do this sanity check in case we introduce (or find) another # repeating page bug like #1721791. Without this we loop # until timeout, which is less obvious. 
raise Exception('Infinite paging loop') # We should have requested exactly (or one more unlimited) pages self.assertIn(page, (pages, pages + 1)) # Make sure the full set matches what we know to be true found = [x[sort_by] for x in insts] had = [x[sort_by] for x in self.instances] if sort_by in ('launched_at', 'created_at'): # We're comparing objects and database entries, so we need to # squash the tzinfo of the object ones so we can compare had = [x.replace(tzinfo=None) for x in had] self.assertEqual(len(had), len(found)) if sort_dir == 'asc': self.assertEqual(sorted(had), found) else: self.assertEqual(list(reversed(sorted(had))), found) def test_get_sorted_with_limit_marker_stable(self): """Test sorted by hostname. This will be a stable sort that won't change on each run. """ self._test_get_sorted_with_limit_marker(sort_by='hostname') def test_get_sorted_with_limit_marker_stable_reverse(self): """Test sorted by hostname. This will be a stable sort that won't change on each run. """ self._test_get_sorted_with_limit_marker(sort_by='hostname', sort_dir='desc') def test_get_sorted_with_limit_marker_stable_different_pages(self): """Test sorted by hostname with different page sizes. Just do the above with page seams in different places. """ self._test_get_sorted_with_limit_marker(sort_by='hostname', pages=3, pagesize=1) def test_get_sorted_with_limit_marker_stable_different_pages_reverse(self): """Test sorted by hostname with different page sizes. Just do the above with page seams in different places. """ self._test_get_sorted_with_limit_marker(sort_by='hostname', pages=3, pagesize=1, sort_dir='desc') def test_get_sorted_with_limit_marker_random(self): """Test sorted by uuid. This will not be stable and the actual ordering will depend on uuid generation and thus be different on each run. Do this in addition to the stable sort above to keep us honest. """ self._test_get_sorted_with_limit_marker(sort_by='uuid') def test_get_sorted_with_limit_marker_random_different_pages(self): """Test sorted by uuid with different page sizes. Just do the above with page seams in different places. """ self._test_get_sorted_with_limit_marker(sort_by='uuid', pages=3, pagesize=2) def test_get_sorted_with_limit_marker_datetime(self): """Test sorted by launched_at. This tests that we can do all of this, but with datetime fields. """ self._test_get_sorted_with_limit_marker(sort_by='launched_at') def test_get_sorted_with_limit_marker_datetime_same(self): """Test sorted by created_at. This tests that we can do all of this, but with datetime fields that are identical. 
""" self._test_get_sorted_with_limit_marker(sort_by='created_at') def test_get_sorted_with_deleted_marker(self): marker = self.instances[1]['uuid'] before = list( instance_list.get_instances_sorted(self.context, {}, None, marker, [], None, None)[1]) db.instance_destroy(self.context, marker) after = list( instance_list.get_instances_sorted(self.context, {}, None, marker, [], None, None)[1]) self.assertEqual(before, after) def test_get_sorted_with_invalid_marker(self): self.assertRaises(exception.MarkerNotFound, list, instance_list.get_instances_sorted( self.context, {}, None, 'not-a-marker', [], None, None)[1]) def test_get_sorted_with_purged_instance(self): """Test that we handle a mapped but purged instance.""" im = objects.InstanceMapping(self.context, instance_uuid=uuids.missing, project_id=self.context.project_id, user_id=self.context.user_id, cell=self.cells[0]) im.create() self.assertRaises(exception.MarkerNotFound, list, instance_list.get_instances_sorted( self.context, {}, None, uuids.missing, [], None, None)[1]) def _test_get_paginated_with_filter(self, filters): found_uuids = [] marker = None while True: # Query for those instances, sorted by a different key in # pages of one until we've consumed them all batch = list( instance_list.get_instances_sorted(self.context, filters, 1, marker, [], ['hostname'], ['asc'])[1]) if not batch: break found_uuids.extend([x['uuid'] for x in batch]) marker = found_uuids[-1] return found_uuids def test_get_paginated_with_uuid_filter(self): """Test getting pages with uuid filters. This runs through the results of a uuid-filtered query in pages of length one to ensure that we land on markers that are filtered out of the query and are not accidentally returned. """ # Pick a set of the instances by uuid, when sorted by uuid all_uuids = [x['uuid'] for x in self.instances] filters = {'uuid': sorted(all_uuids)[:7]} found_uuids = self._test_get_paginated_with_filter(filters) # Make sure we found all (and only) the instances we asked for self.assertEqual(set(found_uuids), set(filters['uuid'])) self.assertEqual(7, len(found_uuids)) def test_get_paginated_with_other_filter(self): """Test getting pages with another filter. This runs through the results of a filtered query in pages of length one to ensure we land on markers that are filtered out of the query and are not accidentally returned. """ expected = [inst['uuid'] for inst in self.instances if inst['instance_type_id'] == 1] filters = {'instance_type_id': 1} found_uuids = self._test_get_paginated_with_filter(filters) self.assertEqual(set(expected), set(found_uuids)) def test_get_paginated_with_uuid_and_other_filter(self): """Test getting pages with a uuid and other type of filter. We do this to make sure that we still find (but exclude) the marker even if one of the other filters would have included it. 
""" # Pick a set of the instances by uuid, when sorted by uuid all_uuids = [x['uuid'] for x in self.instances] filters = {'uuid': sorted(all_uuids)[:7], 'user_id': 'fake'} found_uuids = self._test_get_paginated_with_filter(filters) # Make sure we found all (and only) the instances we asked for self.assertEqual(set(found_uuids), set(filters['uuid'])) self.assertEqual(7, len(found_uuids)) def test_get_sorted_with_faults(self): """Make sure we get faults when we ask for them.""" insts = list( instance_list.get_instances_sorted(self.context, {}, None, None, ['fault'], ['hostname'], ['asc'])[1]) # Two of the instances in each cell have faults (0th and 2nd) expected_faults = self.NUMBER_OF_CELLS * 2 expected_no_fault = len(self.instances) - expected_faults faults = [inst['fault'] for inst in insts] self.assertEqual(expected_no_fault, faults.count(None)) def test_get_sorted_paginated_with_faults(self): """Get pages of one with faults. Do this specifically so we make sure we land on faulted marker instances to ensure we don't omit theirs. """ insts = [] while True: if insts: marker = insts[-1]['uuid'] else: marker = None batch = list( instance_list.get_instances_sorted(self.context, {}, 1, marker, ['fault'], ['hostname'], ['asc'])[1]) if not batch: break insts.extend(batch) self.assertEqual(len(self.instances), len(insts)) # Two of the instances in each cell have faults (0th and 2nd) expected_faults = self.NUMBER_OF_CELLS * 2 expected_no_fault = len(self.instances) - expected_faults faults = [inst['fault'] for inst in insts] self.assertEqual(expected_no_fault, faults.count(None)) def test_instance_list_minimal_cells(self): """Get a list of instances with a subset of cell mappings.""" last_cell = self.cells[-1] with context.target_cell(self.context, last_cell) as cctxt: last_cell_instances = db.instance_get_all(cctxt) last_cell_uuids = [inst['uuid'] for inst in last_cell_instances] instances = list( instance_list.get_instances_sorted(self.context, {}, None, None, [], ['uuid'], ['asc'], cell_mappings=self.cells[:-1]) [1]) found_uuids = [inst['hostname'] for inst in instances] had_uuids = [inst['hostname'] for inst in self.instances if inst['uuid'] not in last_cell_uuids] self.assertEqual(sorted(had_uuids), sorted(found_uuids)) class TestInstanceListObjects(test.TestCase): def setUp(self): super(TestInstanceListObjects, self).setUp() self.context = context.RequestContext('fake', 'fake') self.num_instances = 3 self.instances = [] start = datetime.datetime(1985, 10, 25, 1, 21, 0) dt = start spread = datetime.timedelta(minutes=10) cells = objects.CellMappingList.get_all(self.context) # Create three instances in each of the real cells. 
Leave the # first cell empty to make sure we don't break with an empty # one for cell in cells[1:]: for i in range(0, self.num_instances): with context.target_cell(self.context, cell) as cctx: inst = objects.Instance( context=cctx, project_id=self.context.project_id, user_id=self.context.user_id, created_at=start, launched_at=dt, instance_type_id=i, hostname='%s-inst%i' % (cell.name, i)) inst.create() if i % 2 == 0: # Make some faults for this instance for n in range(0, i + 1): msg = 'fault%i-%s' % (n, inst.hostname) f = objects.InstanceFault(context=cctx, instance_uuid=inst.uuid, code=i, message=msg, details='fake', host='fakehost') f.create() self.instances.append(inst) im = objects.InstanceMapping(context=self.context, project_id=inst.project_id, user_id=inst.user_id, instance_uuid=inst.uuid, cell_mapping=cell) im.create() dt += spread def test_get_instance_objects_sorted(self): filters = {} limit = None marker = None expected_attrs = [] sort_keys = ['uuid'] sort_dirs = ['asc'] insts, down_cell_uuids = instance_list.get_instance_objects_sorted( self.context, filters, limit, marker, expected_attrs, sort_keys, sort_dirs) found_uuids = [x.uuid for x in insts] had_uuids = sorted([x['uuid'] for x in self.instances]) self.assertEqual(had_uuids, found_uuids) # Make sure none of the instances have fault set self.assertEqual(0, len([inst for inst in insts if 'fault' in inst])) def test_get_instance_objects_sorted_with_fault(self): filters = {} limit = None marker = None expected_attrs = ['fault'] sort_keys = ['uuid'] sort_dirs = ['asc'] insts, down_cell_uuids = instance_list.get_instance_objects_sorted( self.context, filters, limit, marker, expected_attrs, sort_keys, sort_dirs) found_uuids = [x.uuid for x in insts] had_uuids = sorted([x['uuid'] for x in self.instances]) self.assertEqual(had_uuids, found_uuids) # They should all have fault set, but only some have # actual faults self.assertEqual(2, len([inst for inst in insts if inst.fault])) def test_get_instance_objects_sorted_paged(self): """Query a full first page and ensure an empty second one. This uses created_at which is enforced to be the same across each instance by setUp(). This will help make sure we still have a stable ordering, even when we only claim to care about created_at. """ instp1, down_cell_uuids = instance_list.get_instance_objects_sorted( self.context, {}, None, None, [], ['created_at'], ['asc']) self.assertEqual(len(self.instances), len(instp1)) instp2, down_cell_uuids = instance_list.get_instance_objects_sorted( self.context, {}, None, instp1[-1]['uuid'], [], ['created_at'], ['asc']) self.assertEqual(0, len(instp2)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/compute/test_live_migration.py0000664000175000017500000002171500000000000025140 0ustar00zuulzuul00000000000000# Copyright 2018 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
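# NOTE: the functional tests in this module drive live migration through
# the compute API's os-migrateLive server action. For orientation, the
# request used further below looks roughly like this (illustrative sketch
# only; ``dest`` is whichever compute host is not the source):
#
#   post = {
#       'os-migrateLive': {
#           'host': dest,
#           'block_migration': False,
#       }
#   }
#   self.api.post_server_action(server['id'], post)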
import mock from oslo_utils.fixture import uuidsentinel as uuids from nova import exception from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit import fake_notifier class FakeCinderError(object): """Poor man's Mock because we're stubbing out and not mock.patching. Stubs out attachment_delete. We keep a raise and call count to simulate a single volume error while being able to assert that we still got called for all of an instance's volumes. """ def __init__(self): self.raise_count = 0 self.call_count = 0 def __call__(self, *args, **kwargs): self.call_count += 1 if self.raise_count == 0: self.raise_count += 1 raise exception.CinderConnectionFailed(reason='Fake Cinder error') class LiveMigrationCinderFailure(integrated_helpers._IntegratedTestBase): api_major_version = 'v2.1' microversion = 'latest' def setUp(self): super(LiveMigrationCinderFailure, self).setUp() fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) # Start a second compute node (the first one was started for us by # _IntegratedTestBase. set_nodes() is needed to avoid duplicate # nodenames. See comments in test_bug_1702454.py. self.compute2 = self.start_service('compute', host='host2') def test_live_migrate_attachment_delete_fails(self): self.useFixture(nova_fixtures.CinderFixture(self)) server = self.api.post_server({ 'server': { 'flavorRef': 1, 'imageRef': '155d900f-4e14-4e4c-a73d-069cbf4541e6', 'name': 'live-migrate-attachment-delete-fail-test', 'networks': 'none', 'block_device_mapping_v2': [ {'boot_index': 0, 'uuid': uuids.broken_volume, 'source_type': 'volume', 'destination_type': 'volume'}, {'boot_index': 1, 'uuid': uuids.working_volume, 'source_type': 'volume', 'destination_type': 'volume'}]}}) server = self._wait_for_state_change(server, 'ACTIVE') source = server['OS-EXT-SRV-ATTR:host'] if source == self.compute.host: dest = self.compute2.host else: dest = self.compute.host post = { 'os-migrateLive': { 'host': dest, 'block_migration': False, } } stub_attachment_delete = FakeCinderError() self.stub_out('nova.volume.cinder.API.attachment_delete', stub_attachment_delete) self.api.post_server_action(server['id'], post) self._wait_for_server_parameter(server, {'OS-EXT-SRV-ATTR:host': dest, 'status': 'ACTIVE'}) self.assertEqual(2, stub_attachment_delete.call_count) self.assertEqual(1, stub_attachment_delete.raise_count) class TestVolAttachmentsDuringLiveMigration( integrated_helpers._IntegratedTestBase ): """Assert the lifecycle of volume attachments during LM rollbacks """ # Default self.api to the self.admin_api as live migration is admin only ADMIN_API = True microversion = 'latest' def setUp(self): super().setUp() self.cinder = self.useFixture(nova_fixtures.CinderFixture(self)) def _setup_compute_service(self): self._start_compute('src') self._start_compute('dest') @mock.patch('nova.virt.fake.FakeDriver.live_migration') def test_vol_attachments_during_driver_live_mig_failure(self, mock_lm): """Assert volume attachments during live migration rollback * Mock live_migration to always rollback and raise a failure within the fake virt driver * Launch a boot from volume instance * Assert that the volume is attached correctly to the instance * Live migrate the instance to another host invoking the mocked live_migration method * Assert that the instance is still on the source host * Assert that the original source host volume attachment remains """ # Mock out driver.live_migration so that we always rollback def 
_fake_live_migration_with_rollback( context, instance, dest, post_method, recover_method, block_migration=False, migrate_data=None): # Just call the recover_method to simulate a rollback recover_method(context, instance, dest, migrate_data) # raise test.TestingException here to imitate a virt driver raise test.TestingException() mock_lm.side_effect = _fake_live_migration_with_rollback volume_id = nova_fixtures.CinderFixture.IMAGE_BACKED_VOL server = self._build_server( name='test_bfv_live_migration_failure', image_uuid='', networks='none' ) server['block_device_mapping_v2'] = [{ 'source_type': 'volume', 'destination_type': 'volume', 'boot_index': 0, 'uuid': volume_id }] server = self.api.post_server({'server': server}) self._wait_for_state_change(server, 'ACTIVE') # Fetch the source host for use later server = self.api.get_server(server['id']) src_host = server['OS-EXT-SRV-ATTR:host'] # Assert that the volume is connected to the instance self.assertIn( volume_id, self.cinder.volume_ids_for_instance(server['id'])) # Assert that we have an active attachment in the fixture attachments = self.cinder.volume_to_attachment.get(volume_id) self.assertEqual(1, len(attachments)) # Fetch the attachment_id for use later once we have migrated src_attachment_id = list(attachments.keys())[0] # Migrate the instance and wait until the migration errors out thanks # to our mocked version of live_migration raising TestingException self._live_migrate(server, 'error', server_expected_state='ERROR') # Assert that we called the fake live_migration method mock_lm.assert_called_once() # Assert that the instance is on the source server = self.api.get_server(server['id']) self.assertEqual(src_host, server['OS-EXT-SRV-ATTR:host']) # Assert that the src attachment is still present attachments = self.cinder.volume_to_attachment.get(volume_id) self.assertIn(src_attachment_id, attachments.keys()) self.assertEqual(1, len(attachments)) class LiveMigrationNeutronInteractionsTest( integrated_helpers._IntegratedTestBase): # NOTE(artom) We need the admin API to force the host when booting the test # server. ADMIN_API = True microversion = 'latest' def _setup_compute_service(self): self._start_compute('src') self._start_compute('dest') def test_live_migrate_vifs_from_info_cache(self): """Test that bug 1879787 can no longer manifest itself because we get the network_info from the instance info cache, and not Neutron. """ def stub_notify(context, instance, event_suffix, network_info=None, extra_usage_info=None, fault=None): vif = network_info[0] # Make sure we have the correct VIF (the NeutronFixture # deterministically uses port_2 for networks=auto) and that the # profile does not contain `migrating_to`, indicating that we did # not obtain it from the Neutron API. self.assertEqual(self.neutron.port_2['id'], vif['id']) self.assertNotIn('migrating_to', vif['profile']) server = self._create_server(networks='auto', host=self.computes['src'].host) with mock.patch.object(self.computes['src'].manager, '_notify_about_instance_usage', side_effect=stub_notify) as mock_notify: self._live_migrate(server, 'completed') server = self.api.get_server(server['id']) self.assertEqual('dest', server['OS-EXT-SRV-ATTR:host']) # We don't care about call arguments here, we just want to be sure # our stub actually got called. 
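# (stub_notify above already performs the detailed VIF assertions, so a
# bare assert_called() on the mock is enough at this point.)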
mock_notify.assert_called() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/compute/test_migration_list.py0000664000175000017500000000777700000000000025170 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime from oslo_utils.fixture import uuidsentinel from nova.compute import migration_list from nova import context from nova import exception from nova import objects from nova import test class TestMigrationListObjects(test.TestCase): NUMBER_OF_CELLS = 3 def setUp(self): super(TestMigrationListObjects, self).setUp() self.context = context.RequestContext('fake', 'fake') self.num_migrations = 3 self.migrations = [] start = datetime.datetime(1985, 10, 25, 1, 21, 0) self.cells = objects.CellMappingList.get_all(self.context) # Create three migrations in each of the real cells. Leave the # first cell empty to make sure we don't break with an empty # one. for cell in self.cells[1:]: for i in range(0, self.num_migrations): with context.target_cell(self.context, cell) as cctx: mig = objects.Migration(cctx, uuid=getattr( uuidsentinel, '%s_mig%i' % (cell.name, i) ), created_at=start, migration_type='resize', instance_uuid=getattr( uuidsentinel, 'inst%i' % i) ) mig.create() self.migrations.append(mig) def test_get_instance_objects_sorted(self): filters = {} limit = None marker = None sort_keys = ['uuid'] sort_dirs = ['asc'] migs = migration_list.get_migration_objects_sorted( self.context, filters, limit, marker, sort_keys, sort_dirs) found_uuids = [x.uuid for x in migs] had_uuids = sorted([x['uuid'] for x in self.migrations]) self.assertEqual(had_uuids, found_uuids) def test_get_instance_objects_sorted_paged(self): """Query a full first page and ensure an empty second one. This uses created_at which is enforced to be the same across each migration by setUp(). This will help make sure we still have a stable ordering, even when we only claim to care about created_at. 
""" migp1 = migration_list.get_migration_objects_sorted( self.context, {}, None, None, ['created_at'], ['asc']) self.assertEqual(len(self.migrations), len(migp1)) migp2 = migration_list.get_migration_objects_sorted( self.context, {}, None, migp1[-1]['uuid'], ['created_at'], ['asc']) self.assertEqual(0, len(migp2)) def test_get_marker_record_not_found(self): marker = uuidsentinel.not_found self.assertRaises(exception.MarkerNotFound, migration_list.get_migration_objects_sorted, self.context, {}, None, marker, None, None) def test_get_sorted_with_limit(self): migs = migration_list.get_migration_objects_sorted( self.context, {}, 2, None, ['uuid'], ['asc']) uuids = [mig['uuid'] for mig in migs] had_uuids = [mig.uuid for mig in self.migrations] self.assertEqual(sorted(had_uuids)[:2], uuids) self.assertEqual(2, len(uuids)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/compute/test_resource_tracker.py0000664000175000017500000004621000000000000025467 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import os_resource_classes as orc from oslo_utils.fixture import uuidsentinel as uuids from nova.compute import power_state from nova.compute import resource_tracker from nova.compute import task_states from nova.compute import utils as compute_utils from nova.compute import vm_states from nova import conf from nova import context from nova import objects from nova import test from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.virt import driver as virt_driver CONF = conf.CONF VCPU = orc.VCPU MEMORY_MB = orc.MEMORY_MB DISK_GB = orc.DISK_GB COMPUTE_HOST = 'compute-host' class IronicResourceTrackerTest(test.TestCase): """Tests the behaviour of the resource tracker with regards to the transitional period between adding support for custom resource classes in the placement API and integrating inventory and allocation records for Ironic baremetal nodes with those custom resource classes. 
""" FLAVOR_FIXTURES = { 'CUSTOM_SMALL_IRON': objects.Flavor( name='CUSTOM_SMALL_IRON', flavorid=42, vcpus=4, memory_mb=4096, root_gb=1024, swap=0, ephemeral_gb=0, extra_specs={}, ), 'CUSTOM_BIG_IRON': objects.Flavor( name='CUSTOM_BIG_IRON', flavorid=43, vcpus=16, memory_mb=65536, root_gb=1024, swap=0, ephemeral_gb=0, extra_specs={}, ), } COMPUTE_NODE_FIXTURES = { uuids.cn1: objects.ComputeNode( uuid=uuids.cn1, hypervisor_hostname='cn1', hypervisor_type='ironic', hypervisor_version=0, cpu_info="", host=COMPUTE_HOST, vcpus=4, vcpus_used=0, cpu_allocation_ratio=1.0, memory_mb=4096, memory_mb_used=0, ram_allocation_ratio=1.0, local_gb=1024, local_gb_used=0, disk_allocation_ratio=1.0, ), uuids.cn2: objects.ComputeNode( uuid=uuids.cn2, hypervisor_hostname='cn2', hypervisor_type='ironic', hypervisor_version=0, cpu_info="", host=COMPUTE_HOST, vcpus=4, vcpus_used=0, cpu_allocation_ratio=1.0, memory_mb=4096, memory_mb_used=0, ram_allocation_ratio=1.0, local_gb=1024, local_gb_used=0, disk_allocation_ratio=1.0, ), uuids.cn3: objects.ComputeNode( uuid=uuids.cn3, hypervisor_hostname='cn3', hypervisor_type='ironic', hypervisor_version=0, cpu_info="", host=COMPUTE_HOST, vcpus=16, vcpus_used=0, cpu_allocation_ratio=1.0, memory_mb=65536, memory_mb_used=0, ram_allocation_ratio=1.0, local_gb=2048, local_gb_used=0, disk_allocation_ratio=1.0, ), } INSTANCE_FIXTURES = { uuids.instance1: objects.Instance( uuid=uuids.instance1, flavor=FLAVOR_FIXTURES['CUSTOM_SMALL_IRON'], vm_state=vm_states.BUILDING, task_state=task_states.SPAWNING, power_state=power_state.RUNNING, project_id='project', user_id=uuids.user, ), } def _set_client(self, client): """Set up embedded report clients to use the direct one from the interceptor. """ self.report_client = client self.rt.reportclient = client def setUp(self): super(IronicResourceTrackerTest, self).setUp() self.flags( reserved_host_memory_mb=0, cpu_allocation_ratio=1.0, ram_allocation_ratio=1.0, disk_allocation_ratio=1.0, ) self.ctx = context.RequestContext('user', 'project') driver = mock.MagicMock(autospec=virt_driver.ComputeDriver) driver.node_is_available.return_value = True def fake_upt(provider_tree, nodename, allocations=None): inventory = { 'CUSTOM_SMALL_IRON': { 'total': 1, 'reserved': 0, 'min_unit': 1, 'max_unit': 1, 'step_size': 1, 'allocation_ratio': 1.0, }, } provider_tree.update_inventory(nodename, inventory) driver.update_provider_tree.side_effect = fake_upt self.driver_mock = driver self.rt = resource_tracker.ResourceTracker(COMPUTE_HOST, driver) self.instances = self.create_fixtures() def create_fixtures(self): for flavor in self.FLAVOR_FIXTURES.values(): # Clone the object so the class variable isn't # modified by reference. flavor = flavor.obj_clone() flavor._context = self.ctx flavor.obj_set_defaults() flavor.create() # We create some compute node records in the Nova cell DB to simulate # data before adding integration for Ironic baremetal nodes with the # placement API... for cn in self.COMPUTE_NODE_FIXTURES.values(): # Clone the object so the class variable isn't # modified by reference. cn = cn.obj_clone() cn._context = self.ctx cn.obj_set_defaults() cn.create() instances = {} for instance in self.INSTANCE_FIXTURES.values(): # Clone the object so the class variable isn't # modified by reference. 
instance = instance.obj_clone() instance._context = self.ctx instance.obj_set_defaults() instance.create() instances[instance.uuid] = instance return instances @mock.patch('nova.compute.utils.is_volume_backed_instance', new=mock.Mock(return_value=False)) @mock.patch('nova.objects.compute_node.ComputeNode.save', new=mock.Mock()) def test_node_stats_isolation(self): """Regression test for bug 1784705 introduced in Ocata. The ResourceTracker.stats field is meant to track per-node stats so this test registers three compute nodes with a single RT where each node has unique stats, and then makes sure that after updating usage for an instance, the nodes still have their unique stats and nothing is leaked from node to node. """ self.useFixture(func_fixtures.PlacementFixture()) # Before the resource tracker is "initialized", we shouldn't have # any compute nodes or stats in the RT's cache... self.assertEqual(0, len(self.rt.compute_nodes)) self.assertEqual(0, len(self.rt.stats)) # Now "initialize" the resource tracker. This is what # nova.compute.manager.ComputeManager does when "initializing" the # nova-compute service. Do this in a predictable order so cn1 is # first and cn3 is last. for cn in sorted(self.COMPUTE_NODE_FIXTURES.values(), key=lambda _cn: _cn.hypervisor_hostname): nodename = cn.hypervisor_hostname # Fake that each compute node has unique extra specs stats and # the RT makes sure those are unique per node. stats = {'node:%s' % nodename: nodename} self.driver_mock.get_available_resource.return_value = { 'hypervisor_hostname': nodename, 'hypervisor_type': 'ironic', 'hypervisor_version': 0, 'vcpus': cn.vcpus, 'vcpus_used': cn.vcpus_used, 'memory_mb': cn.memory_mb, 'memory_mb_used': cn.memory_mb_used, 'local_gb': cn.local_gb, 'local_gb_used': cn.local_gb_used, 'numa_topology': None, 'resource_class': None, # Act like admin hasn't set yet... 'stats': stats, } self.rt.update_available_resource(self.ctx, nodename) self.assertEqual(3, len(self.rt.compute_nodes)) self.assertEqual(3, len(self.rt.stats)) def _assert_stats(): # Make sure each compute node has a unique set of stats and # they don't accumulate across nodes. for _cn in self.rt.compute_nodes.values(): node_stats_key = 'node:%s' % _cn.hypervisor_hostname self.assertIn(node_stats_key, _cn.stats) node_stat_count = 0 for stat in _cn.stats: if stat.startswith('node:'): node_stat_count += 1 self.assertEqual(1, node_stat_count, _cn.stats) _assert_stats() # Now "spawn" an instance to the first compute node by calling the # RT's instance_claim(). 
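# instance_claim() is used as a context manager below so the per-node
# stats can be re-asserted while the claim is still held against cn1.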
cn1_obj = self.COMPUTE_NODE_FIXTURES[uuids.cn1] cn1_nodename = cn1_obj.hypervisor_hostname inst = self.instances[uuids.instance1] with self.rt.instance_claim(self.ctx, inst, cn1_nodename, {}): _assert_stats() class TestUpdateComputeNodeReservedAndAllocationRatio( integrated_helpers.ProviderUsageBaseTestCase): """Tests reflecting reserved and allocation ratio inventory from nova-compute to placement """ compute_driver = 'fake.FakeDriver' @staticmethod def _get_reserved_host_values_from_config(): return { 'VCPU': CONF.reserved_host_cpus, 'MEMORY_MB': CONF.reserved_host_memory_mb, 'DISK_GB': compute_utils.convert_mb_to_ceil_gb( CONF.reserved_host_disk_mb) } def _assert_reserved_inventory(self, inventories): reserved = self._get_reserved_host_values_from_config() for rc, res in reserved.items(): self.assertIn('reserved', inventories[rc]) self.assertEqual(res, inventories[rc]['reserved'], 'Unexpected resource provider inventory ' 'reserved value for %s' % rc) def test_update_inventory_reserved_and_allocation_ratio_from_conf(self): # Start a compute service which should create a corresponding resource # provider in the placement service. compute_service = self._start_compute('fake-host') # Assert the compute node resource provider exists in placement with # the default reserved and allocation ratio values from config. rp_uuid = self._get_provider_uuid_by_host('fake-host') inventories = self._get_provider_inventory(rp_uuid) # The default allocation ratio config values are all 0.0 and get # defaulted to real values in the ComputeNode object, so we need to # check our defaults against what is in the ComputeNode object. ctxt = context.get_admin_context() # Note that the CellDatabases fixture usage means we don't need to # target the context to cell1 even though the compute_nodes table is # in the cell1 database. cn = objects.ComputeNode.get_by_uuid(ctxt, rp_uuid) ratios = { 'VCPU': cn.cpu_allocation_ratio, 'MEMORY_MB': cn.ram_allocation_ratio, 'DISK_GB': cn.disk_allocation_ratio } for rc, ratio in ratios.items(): self.assertIn(rc, inventories) self.assertIn('allocation_ratio', inventories[rc]) self.assertEqual(ratio, inventories[rc]['allocation_ratio'], 'Unexpected allocation ratio for %s' % rc) self._assert_reserved_inventory(inventories) # Now change the configuration values, restart the compute service, # and ensure the changes are reflected in the resource provider # inventory records. We use 2.0 since disk_allocation_ratio defaults # to 1.0. self.flags(cpu_allocation_ratio=2.0) self.flags(ram_allocation_ratio=2.0) self.flags(disk_allocation_ratio=2.0) self.flags(reserved_host_cpus=2) self.flags(reserved_host_memory_mb=1024) self.flags(reserved_host_disk_mb=8192) self.restart_compute_service(compute_service) # The ratios should now come from config overrides rather than the # defaults in the ComputeNode object. ratios = { 'VCPU': CONF.cpu_allocation_ratio, 'MEMORY_MB': CONF.ram_allocation_ratio, 'DISK_GB': CONF.disk_allocation_ratio } attr_map = { 'VCPU': 'cpu', 'MEMORY_MB': 'ram', 'DISK_GB': 'disk', } cn = objects.ComputeNode.get_by_uuid(ctxt, rp_uuid) inventories = self._get_provider_inventory(rp_uuid) for rc, ratio in ratios.items(): # Make sure the config is what we expect. self.assertEqual(2.0, ratio, 'Unexpected config allocation ratio for %s' % rc) # Make sure the values in the DB are updated. self.assertEqual( ratio, getattr(cn, '%s_allocation_ratio' % attr_map[rc]), 'Unexpected ComputeNode allocation ratio for %s' % rc) # Make sure the values in placement are updated. 
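# (Both cn and inventories were re-read above, after the compute service
# restart, so these assertions reflect the post-restart state.)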
self.assertEqual(ratio, inventories[rc]['allocation_ratio'], 'Unexpected resource provider inventory ' 'allocation ratio for %s' % rc) # The reserved host values should also come from config. self._assert_reserved_inventory(inventories) def test_allocation_ratio_create_with_initial_allocation_ratio(self): # The xxx_allocation_ratio is set to None by default, and we use # 16.1/1.6/1.1 since disk_allocation_ratio defaults to 16.0/1.5/1.0. self.flags(initial_cpu_allocation_ratio=16.1) self.flags(initial_ram_allocation_ratio=1.6) self.flags(initial_disk_allocation_ratio=1.1) # Start a compute service which should create a corresponding resource # provider in the placement service. self._start_compute('fake-host') # Assert the compute node resource provider exists in placement with # the default reserved and allocation ratio values from config. rp_uuid = self._get_provider_uuid_by_host('fake-host') inventories = self._get_provider_inventory(rp_uuid) ctxt = context.get_admin_context() # Note that the CellDatabases fixture usage means we don't need to # target the context to cell1 even though the compute_nodes table is # in the cell1 database. cn = objects.ComputeNode.get_by_uuid(ctxt, rp_uuid) ratios = { 'VCPU': cn.cpu_allocation_ratio, 'MEMORY_MB': cn.ram_allocation_ratio, 'DISK_GB': cn.disk_allocation_ratio } initial_ratio_conf = { 'VCPU': CONF.initial_cpu_allocation_ratio, 'MEMORY_MB': CONF.initial_ram_allocation_ratio, 'DISK_GB': CONF.initial_disk_allocation_ratio } for rc, ratio in ratios.items(): self.assertIn(rc, inventories) self.assertIn('allocation_ratio', inventories[rc]) # Check the allocation_ratio values come from the new # CONF.initial_xxx_allocation_ratio self.assertEqual(initial_ratio_conf[rc], ratio, 'Unexpected allocation ratio for %s' % rc) # Check the initial allocation ratio is updated to inventories self.assertEqual(ratio, inventories[rc]['allocation_ratio'], 'Unexpected allocation ratio for %s' % rc) def test_allocation_ratio_overwritten_from_config(self): # NOTE(yikun): This test case includes below step: # 1. Overwrite the allocation_ratio via the placement API directly - # run the RT.update_available_resource periodic and assert the # allocation ratios are not overwritten from config. # # 2. Set the CONF.*_allocation_ratio, run the periodic, and assert # that the config overwrites what was set via the placement API. 
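# As a rough sketch of step 1, overwriting a ratio straight through the
# placement API amounts to re-PUTting the provider's inventory, e.g.
# (values here are examples only; the test body below uses its own):
#
#   inv = self.placement_api.get(
#       '/resource_providers/%s/inventories' % rp_uuid).body
#   inv['inventories']['VCPU']['allocation_ratio'] = 16.1
#   self._update_inventory(rp_uuid, inv)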
compute_service = self._start_compute('fake-host') rp_uuid = self._get_provider_uuid_by_host('fake-host') ctxt = context.get_admin_context() rt = compute_service.manager.rt inv = self.placement_api.get( '/resource_providers/%s/inventories' % rp_uuid).body ratios = {'VCPU': 16.1, 'MEMORY_MB': 1.6, 'DISK_GB': 1.1} for rc, ratio in ratios.items(): inv['inventories'][rc]['allocation_ratio'] = ratio # Overwrite the allocation_ratio via the placement API directly self._update_inventory(rp_uuid, inv) inv = self._get_provider_inventory(rp_uuid) # Check inventories is updated to ratios for rc, ratio in ratios.items(): self.assertIn(rc, inv) self.assertIn('allocation_ratio', inv[rc]) self.assertEqual(ratio, inv[rc]['allocation_ratio'], 'Unexpected allocation ratio for %s' % rc) # Make sure xxx_allocation_ratio is None by default self.assertIsNone(CONF.cpu_allocation_ratio) self.assertIsNone(CONF.ram_allocation_ratio) self.assertIsNone(CONF.disk_allocation_ratio) # run the RT.update_available_resource periodic rt.update_available_resource(ctxt, 'fake-host') # assert the allocation ratios are not overwritten from config inv = self._get_provider_inventory(rp_uuid) for rc, ratio in ratios.items(): self.assertIn(rc, inv) self.assertIn('allocation_ratio', inv[rc]) self.assertEqual(ratio, inv[rc]['allocation_ratio'], 'Unexpected allocation ratio for %s' % rc) # set the CONF.*_allocation_ratio self.flags(cpu_allocation_ratio=15.9) self.flags(ram_allocation_ratio=1.4) self.flags(disk_allocation_ratio=0.9) # run the RT.update_available_resource periodic rt.update_available_resource(ctxt, 'fake-host') inv = self._get_provider_inventory(rp_uuid) ratios = { 'VCPU': CONF.cpu_allocation_ratio, 'MEMORY_MB': CONF.ram_allocation_ratio, 'DISK_GB': CONF.disk_allocation_ratio } # assert that the config overwrites what was set via the placement API. for rc, ratio in ratios.items(): self.assertIn(rc, inv) self.assertIn('allocation_ratio', inv[rc]) self.assertEqual(ratio, inv[rc]['allocation_ratio'], 'Unexpected allocation ratio for %s' % rc) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4904687 nova-21.2.4/nova/tests/functional/db/0000775000175000017500000000000000000000000017422 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/db/__init__.py0000664000175000017500000000000000000000000021521 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4904687 nova-21.2.4/nova/tests/functional/db/api/0000775000175000017500000000000000000000000020173 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/db/api/__init__.py0000664000175000017500000000000000000000000022272 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/db/api/test_migrations.py0000664000175000017500000007666500000000000024004 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests for database migrations. There are "opportunistic" tests which allows testing against all 3 databases (sqlite in memory, mysql, pg) in a properly configured unit test environment. For the opportunistic testing you need to set up db's named 'openstack_citest' with user 'openstack_citest' and password 'openstack_citest' on localhost. The test will then use that db and u/p combo to run the tests. For postgres on Ubuntu this can be done with the following commands:: | sudo -u postgres psql | postgres=# create user openstack_citest with createdb login password | 'openstack_citest'; | postgres=# create database openstack_citest with owner openstack_citest; """ import os from migrate.versioning import repository import mock from oslo_db.sqlalchemy import enginefacade from oslo_db.sqlalchemy import test_fixtures from oslo_db.sqlalchemy import test_migrations from oslo_db.sqlalchemy import utils as db_utils from oslo_serialization import jsonutils import sqlalchemy from sqlalchemy.engine import reflection import testtools from nova.db import migration from nova.db.sqlalchemy.api_migrations import migrate_repo from nova.db.sqlalchemy import api_models from nova.db.sqlalchemy import migration as sa_migration from nova import test from nova.test import uuids from nova.tests import fixtures as nova_fixtures class NovaAPIModelsSync(test_migrations.ModelsMigrationsSync): """Test that the models match the database after migrations are run.""" def setUp(self): super(NovaAPIModelsSync, self).setUp() self.engine = enginefacade.writer.get_engine() def db_sync(self, engine): with mock.patch.object(sa_migration, 'get_engine', return_value=engine): sa_migration.db_sync(database='api') @property def migrate_engine(self): return self.engine def get_engine(self, context=None): return self.migrate_engine def get_metadata(self): return api_models.API_BASE.metadata def include_object(self, object_, name, type_, reflected, compare_to): if type_ == 'table': # migrate_version is a sqlalchemy-migrate control table and # isn't included in the model. if name == 'migrate_version': return False return True def filter_metadata_diff(self, diff): # Filter out diffs that shouldn't cause a sync failure. new_diff = [] # Define a whitelist of ForeignKeys that exist on the model but not in # the database. They will be removed from the model at a later time. fkey_whitelist = {'build_requests': ['request_spec_id']} # Define a whitelist of columns that will be removed from the # DB at a later release and aren't on a model anymore. column_whitelist = { 'build_requests': ['vm_state', 'instance_metadata', 'display_name', 'access_ip_v6', 'access_ip_v4', 'key_name', 'locked_by', 'image_ref', 'progress', 'request_spec_id', 'info_cache', 'user_id', 'task_state', 'security_groups', 'config_drive'], 'resource_providers': ['can_host'], } for element in diff: if isinstance(element, list): # modify_nullable is a list new_diff.append(element) else: # tuple with action as first element. Different actions have # different tuple structures. 
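# For example, an 'add_fk' element carries the ForeignKey object at
# index 1, while a 'remove_column' element carries the table name at
# index 2 and the Column object at index 3, as handled below.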
if element[0] == 'add_fk': fkey = element[1] tablename = fkey.table.name column_keys = fkey.column_keys if (tablename in fkey_whitelist and column_keys == fkey_whitelist[tablename]): continue elif element[0] == 'remove_column': tablename = element[2] column = element[3] if (tablename in column_whitelist and column.name in column_whitelist[tablename]): continue new_diff.append(element) return new_diff class TestNovaAPIMigrationsSQLite(NovaAPIModelsSync, test_fixtures.OpportunisticDBTestMixin, testtools.TestCase): pass class TestNovaAPIMigrationsMySQL(NovaAPIModelsSync, test_fixtures.OpportunisticDBTestMixin, testtools.TestCase): FIXTURE = test_fixtures.MySQLOpportunisticFixture class TestNovaAPIMigrationsPostgreSQL(NovaAPIModelsSync, test_fixtures.OpportunisticDBTestMixin, testtools.TestCase): FIXTURE = test_fixtures.PostgresqlOpportunisticFixture class NovaAPIMigrationsWalk(test_migrations.WalkVersionsMixin): def setUp(self): # NOTE(sdague): the oslo_db base test case completely # invalidates our logging setup, we actually have to do that # before it is called to keep this from vomiting all over our # test output. self.useFixture(nova_fixtures.StandardLogging()) super(NovaAPIMigrationsWalk, self).setUp() self.engine = enginefacade.writer.get_engine() @property def INIT_VERSION(self): return migration.db_initial_version('api') @property def REPOSITORY(self): return repository.Repository( os.path.abspath(os.path.dirname(migrate_repo.__file__))) @property def migration_api(self): return sa_migration.versioning_api @property def migrate_engine(self): return self.engine def _skippable_migrations(self): mitaka_placeholders = list(range(8, 13)) newton_placeholders = list(range(21, 26)) ocata_placeholders = list(range(31, 41)) pike_placeholders = list(range(45, 50)) queens_placeholders = list(range(53, 58)) # We forgot to add the rocky placeholders stein_placeholders = list(range(63, 68)) train_placeholders = list(range(68, 73)) special_cases = [ 30, # Enforcement migration, no changes to test ] return (mitaka_placeholders + newton_placeholders + ocata_placeholders + pike_placeholders + queens_placeholders + stein_placeholders + train_placeholders + special_cases) def migrate_up(self, version, with_data=False): if with_data: check = getattr(self, '_check_%03d' % version, None) if version not in self._skippable_migrations(): self.assertIsNotNone(check, ('API DB Migration %i does not have a ' 'test. 
Please add one!') % version) super(NovaAPIMigrationsWalk, self).migrate_up(version, with_data) def test_walk_versions(self): self.walk_versions(snake_walk=False, downgrade=False) def assertColumnExists(self, engine, table_name, column): self.assertTrue(db_utils.column_exists(engine, table_name, column), 'Column %s.%s does not exist' % (table_name, column)) def assertIndexExists(self, engine, table_name, index): self.assertTrue(db_utils.index_exists(engine, table_name, index), 'Index %s on table %s does not exist' % (index, table_name)) def assertUniqueConstraintExists(self, engine, table_name, columns): inspector = reflection.Inspector.from_engine(engine) constrs = inspector.get_unique_constraints(table_name) constr_columns = [constr['column_names'] for constr in constrs] self.assertIn(columns, constr_columns) def assertTableNotExists(self, engine, table_name): self.assertRaises(sqlalchemy.exc.NoSuchTableError, db_utils.get_table, engine, table_name) def _check_001(self, engine, data): for column in ['created_at', 'updated_at', 'id', 'uuid', 'name', 'transport_url', 'database_connection']: self.assertColumnExists(engine, 'cell_mappings', column) self.assertIndexExists(engine, 'cell_mappings', 'uuid_idx') self.assertUniqueConstraintExists(engine, 'cell_mappings', ['uuid']) def _check_002(self, engine, data): for column in ['created_at', 'updated_at', 'id', 'instance_uuid', 'cell_id', 'project_id']: self.assertColumnExists(engine, 'instance_mappings', column) for index in ['instance_uuid_idx', 'project_id_idx']: self.assertIndexExists(engine, 'instance_mappings', index) self.assertUniqueConstraintExists(engine, 'instance_mappings', ['instance_uuid']) inspector = reflection.Inspector.from_engine(engine) # There should only be one foreign key here fk = inspector.get_foreign_keys('instance_mappings')[0] self.assertEqual('cell_mappings', fk['referred_table']) self.assertEqual(['id'], fk['referred_columns']) self.assertEqual(['cell_id'], fk['constrained_columns']) def _check_003(self, engine, data): for column in ['created_at', 'updated_at', 'id', 'cell_id', 'host']: self.assertColumnExists(engine, 'host_mappings', column) self.assertIndexExists(engine, 'host_mappings', 'host_idx') self.assertUniqueConstraintExists(engine, 'host_mappings', ['host']) inspector = reflection.Inspector.from_engine(engine) # There should only be one foreign key here fk = inspector.get_foreign_keys('host_mappings')[0] self.assertEqual('cell_mappings', fk['referred_table']) self.assertEqual(['id'], fk['referred_columns']) self.assertEqual(['cell_id'], fk['constrained_columns']) def _check_004(self, engine, data): columns = ['created_at', 'updated_at', 'id', 'instance_uuid', 'spec'] for column in columns: self.assertColumnExists(engine, 'request_specs', column) self.assertUniqueConstraintExists(engine, 'request_specs', ['instance_uuid']) self.assertIndexExists(engine, 'request_specs', 'request_spec_instance_uuid_idx') def _check_005(self, engine, data): # flavors for column in ['created_at', 'updated_at', 'name', 'id', 'memory_mb', 'vcpus', 'swap', 'vcpu_weight', 'flavorid', 'rxtx_factor', 'root_gb', 'ephemeral_gb', 'disabled', 'is_public']: self.assertColumnExists(engine, 'flavors', column) self.assertUniqueConstraintExists(engine, 'flavors', ['flavorid']) self.assertUniqueConstraintExists(engine, 'flavors', ['name']) # flavor_extra_specs for column in ['created_at', 'updated_at', 'id', 'flavor_id', 'key', 'value']: self.assertColumnExists(engine, 'flavor_extra_specs', column) self.assertIndexExists(engine, 
'flavor_extra_specs', 'flavor_extra_specs_flavor_id_key_idx') self.assertUniqueConstraintExists(engine, 'flavor_extra_specs', ['flavor_id', 'key']) inspector = reflection.Inspector.from_engine(engine) # There should only be one foreign key here fk = inspector.get_foreign_keys('flavor_extra_specs')[0] self.assertEqual('flavors', fk['referred_table']) self.assertEqual(['id'], fk['referred_columns']) self.assertEqual(['flavor_id'], fk['constrained_columns']) # flavor_projects for column in ['created_at', 'updated_at', 'id', 'flavor_id', 'project_id']: self.assertColumnExists(engine, 'flavor_projects', column) self.assertUniqueConstraintExists(engine, 'flavor_projects', ['flavor_id', 'project_id']) inspector = reflection.Inspector.from_engine(engine) # There should only be one foreign key here fk = inspector.get_foreign_keys('flavor_projects')[0] self.assertEqual('flavors', fk['referred_table']) self.assertEqual(['id'], fk['referred_columns']) self.assertEqual(['flavor_id'], fk['constrained_columns']) def _check_006(self, engine, data): for column in ['id', 'request_spec_id', 'project_id', 'user_id', 'display_name', 'instance_metadata', 'progress', 'vm_state', 'image_ref', 'access_ip_v4', 'access_ip_v6', 'info_cache', 'security_groups', 'config_drive', 'key_name', 'locked_by']: self.assertColumnExists(engine, 'build_requests', column) self.assertIndexExists(engine, 'build_requests', 'build_requests_project_id_idx') self.assertUniqueConstraintExists(engine, 'build_requests', ['request_spec_id']) inspector = reflection.Inspector.from_engine(engine) # There should only be one foreign key here fk = inspector.get_foreign_keys('build_requests')[0] self.assertEqual('request_specs', fk['referred_table']) self.assertEqual(['id'], fk['referred_columns']) self.assertEqual(['request_spec_id'], fk['constrained_columns']) def _check_007(self, engine, data): map_table = db_utils.get_table(engine, 'instance_mappings') self.assertTrue(map_table.columns['cell_id'].nullable) # Ensure the foreign key still exists inspector = reflection.Inspector.from_engine(engine) # There should only be one foreign key here fk = inspector.get_foreign_keys('instance_mappings')[0] self.assertEqual('cell_mappings', fk['referred_table']) self.assertEqual(['id'], fk['referred_columns']) self.assertEqual(['cell_id'], fk['constrained_columns']) def _check_013(self, engine, data): for column in ['instance_uuid', 'instance']: self.assertColumnExists(engine, 'build_requests', column) self.assertIndexExists(engine, 'build_requests', 'build_requests_instance_uuid_idx') self.assertUniqueConstraintExists(engine, 'build_requests', ['instance_uuid']) def _check_014(self, engine, data): for column in ['name', 'public_key']: self.assertColumnExists(engine, 'key_pairs', column) self.assertUniqueConstraintExists(engine, 'key_pairs', ['user_id', 'name']) def _check_015(self, engine, data): build_requests_table = db_utils.get_table(engine, 'build_requests') for column in ['request_spec_id', 'user_id', 'security_groups', 'config_drive']: self.assertTrue(build_requests_table.columns[column].nullable) inspector = reflection.Inspector.from_engine(engine) constrs = inspector.get_unique_constraints('build_requests') constr_columns = [constr['column_names'] for constr in constrs] self.assertNotIn(['request_spec_id'], constr_columns) def _check_016(self, engine, data): self.assertColumnExists(engine, 'resource_providers', 'id') self.assertIndexExists(engine, 'resource_providers', 'resource_providers_name_idx') self.assertIndexExists(engine, 
'resource_providers', 'resource_providers_uuid_idx') self.assertColumnExists(engine, 'inventories', 'id') self.assertIndexExists(engine, 'inventories', 'inventories_resource_class_id_idx') self.assertColumnExists(engine, 'allocations', 'id') self.assertColumnExists(engine, 'resource_provider_aggregates', 'aggregate_id') def _check_017(self, engine, data): # aggregate_metadata for column in ['created_at', 'updated_at', 'id', 'aggregate_id', 'key', 'value']: self.assertColumnExists(engine, 'aggregate_metadata', column) self.assertUniqueConstraintExists(engine, 'aggregate_metadata', ['aggregate_id', 'key']) self.assertIndexExists(engine, 'aggregate_metadata', 'aggregate_metadata_key_idx') # aggregate_hosts for column in ['created_at', 'updated_at', 'id', 'host', 'aggregate_id']: self.assertColumnExists(engine, 'aggregate_hosts', column) self.assertUniqueConstraintExists(engine, 'aggregate_hosts', ['host', 'aggregate_id']) # aggregates for column in ['created_at', 'updated_at', 'id', 'name']: self.assertColumnExists(engine, 'aggregates', column) self.assertIndexExists(engine, 'aggregates', 'aggregate_uuid_idx') self.assertUniqueConstraintExists(engine, 'aggregates', ['name']) def _check_018(self, engine, data): # instance_groups for column in ['created_at', 'updated_at', 'id', 'user_id', 'project_id', 'uuid', 'name']: self.assertColumnExists(engine, 'instance_groups', column) self.assertUniqueConstraintExists(engine, 'instance_groups', ['uuid']) # instance_group_policy for column in ['created_at', 'updated_at', 'id', 'policy', 'group_id']: self.assertColumnExists(engine, 'instance_group_policy', column) self.assertIndexExists(engine, 'instance_group_policy', 'instance_group_policy_policy_idx') # Ensure the foreign key still exists inspector = reflection.Inspector.from_engine(engine) # There should only be one foreign key here fk = inspector.get_foreign_keys('instance_group_policy')[0] self.assertEqual('instance_groups', fk['referred_table']) self.assertEqual(['id'], fk['referred_columns']) # instance_group_member for column in ['created_at', 'updated_at', 'id', 'instance_uuid', 'group_id']: self.assertColumnExists(engine, 'instance_group_member', column) self.assertIndexExists(engine, 'instance_group_member', 'instance_group_member_instance_idx') def _check_019(self, engine, data): self.assertColumnExists(engine, 'build_requests', 'block_device_mappings') def _pre_upgrade_020(self, engine): build_requests = db_utils.get_table(engine, 'build_requests') fake_build_req = {'id': 2020, 'project_id': 'fake_proj_id', 'block_device_mappings': 'fake_BDM'} build_requests.insert().execute(fake_build_req) def _check_020(self, engine, data): build_requests = db_utils.get_table(engine, 'build_requests') if engine.name == 'mysql': self.assertIsInstance(build_requests.c.block_device_mappings.type, sqlalchemy.dialects.mysql.MEDIUMTEXT) fake_build_req = build_requests.select( build_requests.c.id == 2020).execute().first() self.assertEqual('fake_BDM', fake_build_req.block_device_mappings) def _check_026(self, engine, data): self.assertColumnExists(engine, 'resource_classes', 'id') self.assertColumnExists(engine, 'resource_classes', 'name') def _check_027(self, engine, data): # quota_classes for column in ['created_at', 'updated_at', 'id', 'class_name', 'resource', 'hard_limit']: self.assertColumnExists(engine, 'quota_classes', column) self.assertIndexExists(engine, 'quota_classes', 'quota_classes_class_name_idx') # quota_usages for column in ['created_at', 'updated_at', 'id', 'project_id', 'resource', 'in_use', 
'reserved', 'until_refresh', 'user_id']: self.assertColumnExists(engine, 'quota_usages', column) self.assertIndexExists(engine, 'quota_usages', 'quota_usages_project_id_idx') self.assertIndexExists(engine, 'quota_usages', 'quota_usages_user_id_idx') # quotas for column in ['created_at', 'updated_at', 'id', 'project_id', 'resource', 'hard_limit']: self.assertColumnExists(engine, 'quotas', column) self.assertUniqueConstraintExists(engine, 'quotas', ['project_id', 'resource']) # project_user_quotas for column in ['created_at', 'updated_at', 'id', 'user_id', 'project_id', 'resource', 'hard_limit']: self.assertColumnExists(engine, 'project_user_quotas', column) self.assertUniqueConstraintExists(engine, 'project_user_quotas', ['user_id', 'project_id', 'resource']) self.assertIndexExists(engine, 'project_user_quotas', 'project_user_quotas_project_id_idx') self.assertIndexExists(engine, 'project_user_quotas', 'project_user_quotas_user_id_idx') # reservations for column in ['created_at', 'updated_at', 'id', 'uuid', 'usage_id', 'project_id', 'resource', 'delta', 'expire', 'user_id']: self.assertColumnExists(engine, 'reservations', column) self.assertIndexExists(engine, 'reservations', 'reservations_project_id_idx') self.assertIndexExists(engine, 'reservations', 'reservations_uuid_idx') self.assertIndexExists(engine, 'reservations', 'reservations_expire_idx') self.assertIndexExists(engine, 'reservations', 'reservations_user_id_idx') # Ensure the foreign key still exists inspector = reflection.Inspector.from_engine(engine) # There should only be one foreign key here fk = inspector.get_foreign_keys('reservations')[0] self.assertEqual('quota_usages', fk['referred_table']) self.assertEqual(['id'], fk['referred_columns']) def _pre_upgrade_028(self, engine): build_requests = db_utils.get_table(engine, 'build_requests') fake_build_req = {'id': 2021, 'project_id': 'fake_proj_id', 'instance': '{"uuid": "foo", "name": "bar"}'} build_requests.insert().execute(fake_build_req) def _check_028(self, engine, data): build_requests = db_utils.get_table(engine, 'build_requests') if engine.name == 'mysql': self.assertIsInstance(build_requests.c.block_device_mappings.type, sqlalchemy.dialects.mysql.MEDIUMTEXT) fake_build_req = build_requests.select( build_requests.c.id == 2021).execute().first() self.assertEqual('{"uuid": "foo", "name": "bar"}', fake_build_req.instance) def _check_029(self, engine, data): for column in ['created_at', 'updated_at', 'id', 'uuid']: self.assertColumnExists(engine, 'placement_aggregates', column) def _check_041(self, engine, data): self.assertColumnExists(engine, 'traits', 'id') self.assertUniqueConstraintExists(engine, 'traits', ['name']) self.assertColumnExists(engine, 'resource_provider_traits', 'trait_id') self.assertColumnExists(engine, 'resource_provider_traits', 'resource_provider_id') self.assertIndexExists( engine, 'resource_provider_traits', 'resource_provider_traits_resource_provider_trait_idx') inspector = reflection.Inspector.from_engine(engine) self.assertEqual( 2, len(inspector.get_foreign_keys('resource_provider_traits'))) for fk in inspector.get_foreign_keys('resource_provider_traits'): if 'traits' == fk['referred_table']: self.assertEqual(['id'], fk['referred_columns']) self.assertEqual(['trait_id'], fk['constrained_columns']) elif 'resource_providers' == fk['referred_table']: self.assertEqual(['id'], fk['referred_columns']) self.assertEqual(['resource_provider_id'], fk['constrained_columns']) def _check_042(self, engine, data): self.assertColumnExists(engine, 
'build_requests', 'tags') def _check_043(self, engine, data): for column in ['created_at', 'updated_at', 'id', 'uuid', 'project_id', 'user_id']: self.assertColumnExists(engine, 'consumers', column) self.assertIndexExists(engine, 'consumers', 'consumers_project_id_uuid_idx') self.assertIndexExists(engine, 'consumers', 'consumers_project_id_user_id_uuid_idx') self.assertUniqueConstraintExists(engine, 'consumers', ['uuid']) def _check_044(self, engine, data): for column in ['created_at', 'updated_at', 'id', 'external_id']: self.assertColumnExists(engine, 'projects', column) self.assertColumnExists(engine, 'users', column) self.assertUniqueConstraintExists(engine, 'projects', ['external_id']) self.assertUniqueConstraintExists(engine, 'users', ['external_id']) # We needed to drop and recreate columns and indexes on consumers, so # check that worked out properly self.assertColumnExists(engine, 'consumers', 'project_id') self.assertColumnExists(engine, 'consumers', 'user_id') self.assertIndexExists( engine, 'consumers', 'consumers_project_id_uuid_idx', ) self.assertIndexExists( engine, 'consumers', 'consumers_project_id_user_id_uuid_idx', ) def _check_050(self, engine, data): self.assertColumnExists(engine, 'flavors', 'description') def _check_051(self, engine, data): for column in ['root_provider_id', 'parent_provider_id']: self.assertColumnExists(engine, 'resource_providers', column) self.assertIndexExists(engine, 'resource_providers', 'resource_providers_root_provider_id_idx') self.assertIndexExists(engine, 'resource_providers', 'resource_providers_parent_provider_id_idx') def _pre_upgrade_052(self, engine): request_specs = db_utils.get_table(engine, 'request_specs') # The spec value is a serialized json blob. spec = jsonutils.dumps( {"instance_group": {"id": 42, "members": ["uuid1", "uuid2", "uuid3"]}}) fake_request_spec = { 'id': 42, 'spec': spec, 'instance_uuid': uuids.instance} request_specs.insert().execute(fake_request_spec) def _check_052(self, engine, data): request_specs = db_utils.get_table(engine, 'request_specs') if engine.name == 'mysql': self.assertIsInstance(request_specs.c.spec.type, sqlalchemy.dialects.mysql.MEDIUMTEXT) expected_spec = {"instance_group": {"id": 42, "members": ["uuid1", "uuid2", "uuid3"]}} from_db_request_spec = request_specs.select( request_specs.c.id == 42).execute().first() self.assertEqual(uuids.instance, from_db_request_spec['instance_uuid']) db_spec = jsonutils.loads(from_db_request_spec['spec']) self.assertDictEqual(expected_spec, db_spec) def _check_058(self, engine, data): self.assertColumnExists(engine, 'cell_mappings', 'disabled') def _pre_upgrade_059(self, engine): # Add a fake consumers table record to verify that generation is # added with a default value of 0. projects = db_utils.get_table(engine, 'projects') project_id = projects.insert().execute( dict(external_id=uuids.project_external_id) ).inserted_primary_key[0] users = db_utils.get_table(engine, 'users') user_id = users.insert().execute( dict(external_id=uuids.user_external_id) ).inserted_primary_key[0] consumers = db_utils.get_table(engine, 'consumers') fake_consumer = dict( uuid=uuids.consumer_uuid, project_id=project_id, user_id=user_id) consumers.insert().execute(fake_consumer) def _check_059(self, engine, data): self.assertColumnExists(engine, "consumers", "generation") # Assert we have one existing entry and it's generation value is 0. 
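# (The consumer row was inserted in _pre_upgrade_059 without a
# generation, so a value of 0 here shows the new column's default was
# applied to pre-existing records.)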
consumers = db_utils.get_table(engine, 'consumers') result = consumers.select().execute().fetchall() self.assertEqual(1, len(result)) self.assertEqual(0, result[0]['generation']) def _check_060(self, engine, data): self.assertColumnExists(engine, 'instance_group_policy', 'rules') def _check_061(self, engine, data): self.assertColumnExists(engine, 'instance_mappings', 'queued_for_delete') def _check_062(self, engine, data): self.assertColumnExists(engine, 'instance_mappings', 'user_id') self.assertIndexExists(engine, 'instance_mappings', 'instance_mappings_user_id_project_id_idx') class TestNovaAPIMigrationsWalkSQLite(NovaAPIMigrationsWalk, test_fixtures.OpportunisticDBTestMixin, test.NoDBTestCase): pass class TestNovaAPIMigrationsWalkMySQL(NovaAPIMigrationsWalk, test_fixtures.OpportunisticDBTestMixin, test.NoDBTestCase): FIXTURE = test_fixtures.MySQLOpportunisticFixture class TestNovaAPIMigrationsWalkPostgreSQL(NovaAPIMigrationsWalk, test_fixtures.OpportunisticDBTestMixin, test.NoDBTestCase): FIXTURE = test_fixtures.PostgresqlOpportunisticFixture ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/db/test_aggregate.py0000664000175000017500000007432500000000000022774 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
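# NOTE: the helpers defined below create aggregate rows directly through
# the API database context managers. A typical use in the tests looks
# roughly like the following (illustrative only):
#
#   agg = _create_aggregate_with_hosts(
#       self.context,
#       values={'name': 'fake_aggregate_1'},
#       hosts=['host.1.openstack.org'])
#   result = aggregate_obj._aggregate_get_from_db(self.context, agg['id'])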
from copy import deepcopy import mock from oslo_db import exception as db_exc from oslo_utils.fixture import uuidsentinel from oslo_utils import timeutils from nova import context from nova.db.sqlalchemy import api as db_api from nova.db.sqlalchemy import api_models from nova import exception import nova.objects.aggregate as aggregate_obj from nova import test from nova.tests.unit import matchers from nova.tests.unit.objects.test_objects import compare_obj as base_compare SUBS = {'metadata': 'metadetails'} NOW = timeutils.utcnow().replace(microsecond=0) def _get_fake_aggregate(db_id, in_api=True, result=True): agg_map = { 'created_at': NOW, 'updated_at': None, 'deleted_at': None, 'id': db_id, 'uuid': getattr(uuidsentinel, str(db_id)), 'name': 'name_' + str(db_id), } if not in_api: agg_map['deleted'] = False if result: agg_map['hosts'] = _get_fake_hosts(db_id) agg_map['metadetails'] = _get_fake_metadata(db_id) return agg_map def _get_fake_hosts(db_id): return ['constant_host', 'unique_host_' + str(db_id)] def _get_fake_metadata(db_id): return {'constant_key': 'constant_value', 'unique_key': 'unique_value_' + str(db_id)} @db_api.api_context_manager.writer def _create_aggregate(context, values=_get_fake_aggregate(1, result=False), metadata=_get_fake_metadata(1)): aggregate = api_models.Aggregate() aggregate.update(values) aggregate.save(context.session) if metadata: for key, value in metadata.items(): aggregate_metadata = api_models.AggregateMetadata() aggregate_metadata.update({'key': key, 'value': value, 'aggregate_id': aggregate['id']}) aggregate_metadata.save(context.session) return aggregate @db_api.api_context_manager.writer def _create_aggregate_with_hosts(context, values=_get_fake_aggregate(1, result=False), metadata=_get_fake_metadata(1), hosts=_get_fake_hosts(1)): aggregate = _create_aggregate(context, values, metadata) for host in hosts: host_model = api_models.AggregateHost() host_model.update({'host': host, 'aggregate_id': aggregate.id}) host_model.save(context.session) return aggregate @db_api.api_context_manager.reader def _aggregate_host_get_all(context, aggregate_id): return context.session.query(api_models.AggregateHost).\ filter_by(aggregate_id=aggregate_id).all() @db_api.api_context_manager.reader def _aggregate_metadata_get_all(context, aggregate_id): results = context.session.query(api_models.AggregateMetadata).\ filter_by(aggregate_id=aggregate_id).all() metadata = {} for r in results: metadata[r['key']] = r['value'] return metadata class AggregateObjectDbTestCase(test.TestCase): def setUp(self): super(AggregateObjectDbTestCase, self).setUp() self.context = context.RequestContext('fake-user', 'fake-project') def test_aggregate_get_from_db(self): result = _create_aggregate_with_hosts(self.context) expected = aggregate_obj._aggregate_get_from_db(self.context, result['id']) self.assertEqual(_get_fake_hosts(1), expected.hosts) self.assertEqual(_get_fake_metadata(1), expected['metadetails']) def test_aggregate_get_from_db_by_uuid(self): result = _create_aggregate_with_hosts(self.context) expected = aggregate_obj._aggregate_get_from_db_by_uuid( self.context, result['uuid']) self.assertEqual(result.uuid, expected.uuid) self.assertEqual(_get_fake_hosts(1), expected.hosts) self.assertEqual(_get_fake_metadata(1), expected['metadetails']) def test_aggregate_get_from_db_raise_not_found(self): aggregate_id = 5 self.assertRaises(exception.AggregateNotFound, aggregate_obj._aggregate_get_from_db, self.context, aggregate_id) def test_aggregate_get_all_from_db(self): for c in range(3): 
_create_aggregate(self.context, values={'name': 'fake_aggregate_%d' % c}) results = aggregate_obj._get_all_from_db(self.context) self.assertEqual(len(results), 3) def test_aggregate_get_by_host_from_db(self): _create_aggregate_with_hosts(self.context, values={'name': 'fake_aggregate_1'}, hosts=['host.1.openstack.org']) _create_aggregate_with_hosts(self.context, values={'name': 'fake_aggregate_2'}, hosts=['host.1.openstack.org']) _create_aggregate(self.context, values={'name': 'no_host_aggregate'}) rh1 = aggregate_obj._get_all_from_db(self.context) rh2 = aggregate_obj._get_by_host_from_db(self.context, 'host.1.openstack.org') self.assertEqual(3, len(rh1)) self.assertEqual(2, len(rh2)) def test_aggregate_get_by_host_with_key_from_db(self): ah1 = _create_aggregate_with_hosts(self.context, values={'name': 'fake_aggregate_1'}, metadata={'goodkey': 'good'}, hosts=['host.1.openstack.org']) _create_aggregate_with_hosts(self.context, values={'name': 'fake_aggregate_2'}, hosts=['host.1.openstack.org']) rh1 = aggregate_obj._get_by_host_from_db(self.context, 'host.1.openstack.org', key='goodkey') self.assertEqual(1, len(rh1)) self.assertEqual(ah1['id'], rh1[0]['id']) def test_aggregate_get_by_metadata_key_from_db(self): _create_aggregate(self.context, values={'name': 'aggregate_1'}, metadata={'goodkey': 'good'}) _create_aggregate(self.context, values={'name': 'aggregate_2'}, metadata={'goodkey': 'bad'}) _create_aggregate(self.context, values={'name': 'aggregate_3'}, metadata={'badkey': 'good'}) rl1 = aggregate_obj._get_by_metadata_from_db(self.context, key='goodkey') self.assertEqual(2, len(rl1)) def test_aggregate_create_in_db(self): fake_create_aggregate = { 'name': 'fake-aggregate', } agg = aggregate_obj._aggregate_create_in_db(self.context, fake_create_aggregate) result = aggregate_obj._aggregate_get_from_db(self.context, agg.id) self.assertEqual(result.name, fake_create_aggregate['name']) def test_aggregate_create_in_db_with_metadata(self): fake_create_aggregate = { 'name': 'fake-aggregate', } agg = aggregate_obj._aggregate_create_in_db(self.context, fake_create_aggregate, metadata={'goodkey': 'good'}) result = aggregate_obj._aggregate_get_from_db(self.context, agg.id) md = aggregate_obj._get_by_metadata_from_db(self.context, key='goodkey') self.assertEqual(len(md), 1) self.assertEqual(md[0]['id'], agg.id) self.assertEqual(result.name, fake_create_aggregate['name']) def test_aggregate_create_raise_exist_exc(self): fake_create_aggregate = { 'name': 'fake-aggregate', } aggregate_obj._aggregate_create_in_db(self.context, fake_create_aggregate) self.assertRaises(exception.AggregateNameExists, aggregate_obj._aggregate_create_in_db, self.context, fake_create_aggregate, metadata=None) def test_aggregate_delete(self): result = _create_aggregate(self.context, metadata=None) aggregate_obj._aggregate_delete_from_db(self.context, result['id']) self.assertRaises(exception.AggregateNotFound, aggregate_obj._aggregate_get_from_db, self.context, result['id']) def test_aggregate_delete_raise_not_found(self): # this does not exist! 
aggregate_id = 45 self.assertRaises(exception.AggregateNotFound, aggregate_obj._aggregate_delete_from_db, self.context, aggregate_id) def test_aggregate_delete_with_metadata(self): result = _create_aggregate(self.context, metadata={'availability_zone': 'fake_avail_zone'}) aggregate_obj._aggregate_delete_from_db(self.context, result['id']) self.assertRaises(exception.AggregateNotFound, aggregate_obj._aggregate_get_from_db, self.context, result['id']) def test_aggregate_update(self): created = _create_aggregate(self.context, metadata={'availability_zone': 'fake_avail_zone'}) result = aggregate_obj._aggregate_get_from_db(self.context, created['id']) self.assertEqual('fake_avail_zone', result['availability_zone']) new_values = deepcopy(_get_fake_aggregate(1, result=False)) new_values['availability_zone'] = 'different_avail_zone' updated = aggregate_obj._aggregate_update_to_db(self.context, result['id'], new_values) self.assertEqual('different_avail_zone', updated['availability_zone']) def test_aggregate_update_with_metadata(self): result = _create_aggregate(self.context, metadata=None) values = deepcopy(_get_fake_aggregate(1, result=False)) values['metadata'] = deepcopy(_get_fake_metadata(1)) values['availability_zone'] = 'different_avail_zone' expected_metadata = deepcopy(values['metadata']) expected_metadata['availability_zone'] = values['availability_zone'] aggregate_obj._aggregate_update_to_db(self.context, result['id'], values) metadata = _aggregate_metadata_get_all(self.context, result['id']) updated = aggregate_obj._aggregate_get_from_db(self.context, result['id']) self.assertThat(metadata, matchers.DictMatches(expected_metadata)) self.assertEqual('different_avail_zone', updated['availability_zone']) def test_aggregate_update_with_existing_metadata(self): result = _create_aggregate(self.context) values = deepcopy(_get_fake_aggregate(1, result=False)) values['metadata'] = deepcopy(_get_fake_metadata(1)) values['metadata']['fake_key1'] = 'foo' expected_metadata = deepcopy(values['metadata']) aggregate_obj._aggregate_update_to_db(self.context, result['id'], values) metadata = _aggregate_metadata_get_all(self.context, result['id']) self.assertThat(metadata, matchers.DictMatches(expected_metadata)) def test_aggregate_update_zone_with_existing_metadata(self): result = _create_aggregate(self.context) new_zone = {'availability_zone': 'fake_avail_zone_2'} metadata = deepcopy(_get_fake_metadata(1)) metadata.update(new_zone) aggregate_obj._aggregate_update_to_db(self.context, result['id'], new_zone) expected = _aggregate_metadata_get_all(self.context, result['id']) self.assertThat(metadata, matchers.DictMatches(expected)) def test_aggregate_update_raise_not_found(self): # this does not exist! 
aggregate_id = 2 new_values = deepcopy(_get_fake_aggregate(1, result=False)) self.assertRaises(exception.AggregateNotFound, aggregate_obj._aggregate_update_to_db, self.context, aggregate_id, new_values) def test_aggregate_update_raise_name_exist(self): _create_aggregate(self.context, values={'name': 'test1'}, metadata={'availability_zone': 'fake_avail_zone'}) _create_aggregate(self.context, values={'name': 'test2'}, metadata={'availability_zone': 'fake_avail_zone'}) aggregate_id = 1 new_values = {'name': 'test2'} self.assertRaises(exception.AggregateNameExists, aggregate_obj._aggregate_update_to_db, self.context, aggregate_id, new_values) def test_aggregate_host_add_to_db(self): result = _create_aggregate(self.context, metadata=None) host = _get_fake_hosts(1)[0] aggregate_obj._host_add_to_db(self.context, result['id'], host) expected = aggregate_obj._aggregate_get_from_db(self.context, result['id']) self.assertEqual([_get_fake_hosts(1)[0]], expected.hosts) def test_aggregate_host_re_add_to_db(self): result = _create_aggregate_with_hosts(self.context, metadata=None) host = _get_fake_hosts(1)[0] aggregate_obj._host_delete_from_db(self.context, result['id'], host) aggregate_obj._host_add_to_db(self.context, result['id'], host) expected = _aggregate_host_get_all(self.context, result['id']) self.assertEqual(len(expected), 2) def test_aggregate_host_add_to_db_duplicate_works(self): r1 = _create_aggregate_with_hosts(self.context, metadata=None) r2 = _create_aggregate_with_hosts(self.context, values={'name': 'fake_aggregate2'}, metadata={'availability_zone': 'fake_avail_zone2'}) h1 = _aggregate_host_get_all(self.context, r1['id']) self.assertEqual(len(h1), 2) self.assertEqual(r1['id'], h1[0]['aggregate_id']) h2 = _aggregate_host_get_all(self.context, r2['id']) self.assertEqual(len(h2), 2) self.assertEqual(r2['id'], h2[0]['aggregate_id']) def test_aggregate_host_add_to_db_duplicate_raise_exist_exc(self): result = _create_aggregate_with_hosts(self.context, metadata=None) self.assertRaises(exception.AggregateHostExists, aggregate_obj._host_add_to_db, self.context, result['id'], _get_fake_hosts(1)[0]) def test_aggregate_host_add_to_db_raise_not_found(self): # this does not exist! 
aggregate_id = 1 host = _get_fake_hosts(1)[0] self.assertRaises(exception.AggregateNotFound, aggregate_obj._host_add_to_db, self.context, aggregate_id, host) def test_aggregate_host_delete_from_db(self): result = _create_aggregate_with_hosts(self.context, metadata=None) aggregate_obj._host_delete_from_db(self.context, result['id'], _get_fake_hosts(1)[0]) expected = _aggregate_host_get_all(self.context, result['id']) self.assertEqual(len(expected), 1) def test_aggregate_host_delete_from_db_raise_not_found(self): result = _create_aggregate(self.context) self.assertRaises(exception.AggregateHostNotFound, aggregate_obj._host_delete_from_db, self.context, result['id'], _get_fake_hosts(1)[0]) def test_aggregate_metadata_add(self): result = _create_aggregate(self.context, metadata=None) metadata = deepcopy(_get_fake_metadata(1)) aggregate_obj._metadata_add_to_db(self.context, result['id'], metadata) expected = _aggregate_metadata_get_all(self.context, result['id']) self.assertThat(metadata, matchers.DictMatches(expected)) def test_aggregate_metadata_add_empty_metadata(self): result = _create_aggregate(self.context, metadata=None) metadata = {} aggregate_obj._metadata_add_to_db(self.context, result['id'], metadata) expected = _aggregate_metadata_get_all(self.context, result['id']) self.assertThat(metadata, matchers.DictMatches(expected)) def test_aggregate_metadata_add_and_update(self): result = _create_aggregate(self.context) metadata = deepcopy(_get_fake_metadata(1)) key = list(metadata.keys())[0] new_metadata = {key: 'foo', 'fake_new_key': 'fake_new_value'} metadata.update(new_metadata) aggregate_obj._metadata_add_to_db(self.context, result['id'], new_metadata) expected = _aggregate_metadata_get_all(self.context, result['id']) self.assertThat(metadata, matchers.DictMatches(expected)) def test_aggregate_metadata_add_retry(self): result = _create_aggregate(self.context, metadata=None) with mock.patch('nova.db.sqlalchemy.api_models.' 
'AggregateMetadata.__table__.insert') as insert_mock: insert_mock.side_effect = db_exc.DBDuplicateEntry self.assertRaises(db_exc.DBDuplicateEntry, aggregate_obj._metadata_add_to_db, self.context, result['id'], {'fake_key2': 'fake_value2'}, max_retries=5) def test_aggregate_metadata_update(self): result = _create_aggregate(self.context) metadata = deepcopy(_get_fake_metadata(1)) key = list(metadata.keys())[0] aggregate_obj._metadata_delete_from_db(self.context, result['id'], key) new_metadata = {key: 'foo'} aggregate_obj._metadata_add_to_db(self.context, result['id'], new_metadata) expected = _aggregate_metadata_get_all(self.context, result['id']) metadata[key] = 'foo' self.assertThat(metadata, matchers.DictMatches(expected)) def test_aggregate_metadata_delete(self): result = _create_aggregate(self.context, metadata=None) metadata = deepcopy(_get_fake_metadata(1)) aggregate_obj._metadata_add_to_db(self.context, result['id'], metadata) aggregate_obj._metadata_delete_from_db(self.context, result['id'], list(metadata.keys())[0]) expected = _aggregate_metadata_get_all(self.context, result['id']) del metadata[list(metadata.keys())[0]] self.assertThat(metadata, matchers.DictMatches(expected)) def test_aggregate_remove_availability_zone(self): result = _create_aggregate(self.context, metadata={'availability_zone': 'fake_avail_zone'}) aggregate_obj._metadata_delete_from_db(self.context, result['id'], 'availability_zone') aggr = aggregate_obj._aggregate_get_from_db(self.context, result['id']) self.assertIsNone(aggr['availability_zone']) def test_aggregate_metadata_delete_raise_not_found(self): result = _create_aggregate(self.context) self.assertRaises(exception.AggregateMetadataNotFound, aggregate_obj._metadata_delete_from_db, self.context, result['id'], 'foo_key') def create_aggregate(context, db_id): fake_aggregate = _get_fake_aggregate(db_id, in_api=False, result=False) aggregate_obj._aggregate_create_in_db(context, fake_aggregate, metadata=_get_fake_metadata(db_id)) for host in _get_fake_hosts(db_id): aggregate_obj._host_add_to_db(context, fake_aggregate['id'], host) def compare_obj(test, result, source): source['deleted'] = False def updated_at_comparator(result, source): return True return base_compare(test, result, source, subs=SUBS, comparators={'updated_at': updated_at_comparator}) class AggregateObjectTestCase(test.TestCase): def setUp(self): super(AggregateObjectTestCase, self).setUp() self.context = context.RequestContext('fake-user', 'fake-project') self._seed_data() def _seed_data(self): for i in range(1, 10): create_aggregate(self.context, i) def test_create(self): new_agg = aggregate_obj.Aggregate(self.context) new_agg.name = 'new-aggregate' new_agg.create() result = aggregate_obj.Aggregate.get_by_id(self.context, new_agg.id) self.assertEqual(new_agg.name, result.name) def test_get_by_id(self): for i in range(1, 10): agg = aggregate_obj.Aggregate.get_by_id(self.context, i) compare_obj(self, agg, _get_fake_aggregate(i)) def test_save(self): for i in range(1, 10): agg = aggregate_obj.Aggregate.get_by_id(self.context, i) fake_agg = _get_fake_aggregate(i) fake_agg['name'] = 'new-name' + str(i) agg.name = 'new-name' + str(i) agg.save() result = aggregate_obj.Aggregate.get_by_id(self.context, i) compare_obj(self, agg, fake_agg) compare_obj(self, result, fake_agg) def test_update_metadata(self): for i in range(1, 10): agg = aggregate_obj.Aggregate.get_by_id(self.context, i) fake_agg = _get_fake_aggregate(i) fake_agg['metadetails'] = {'constant_key': 'constant_value'} 
agg.update_metadata({'unique_key': None}) agg.save() result = aggregate_obj.Aggregate.get_by_id(self.context, i) compare_obj(self, agg, fake_agg) compare_obj(self, result, fake_agg) def test_destroy(self): for i in range(1, 10): agg = aggregate_obj.Aggregate.get_by_id(self.context, i) agg.destroy() aggs = aggregate_obj.AggregateList.get_all(self.context) self.assertEqual(len(aggs), 0) def test_add_host(self): for i in range(1, 10): agg = aggregate_obj.Aggregate.get_by_id(self.context, i) fake_agg = _get_fake_aggregate(i) fake_agg['hosts'].append('barbar') agg.add_host('barbar') agg.save() result = aggregate_obj.Aggregate.get_by_id(self.context, i) compare_obj(self, agg, fake_agg) compare_obj(self, result, fake_agg) def test_delete_host(self): for i in range(1, 10): agg = aggregate_obj.Aggregate.get_by_id(self.context, i) fake_agg = _get_fake_aggregate(i) fake_agg['hosts'].remove('constant_host') agg.delete_host('constant_host') result = aggregate_obj.Aggregate.get_by_id(self.context, i) compare_obj(self, agg, fake_agg) compare_obj(self, result, fake_agg) def test_get_by_metadata(self): agg = aggregate_obj.Aggregate.get_by_id(self.context, 1) agg.update_metadata({'foo': 'bar'}) agg = aggregate_obj.Aggregate.get_by_id(self.context, 2) agg.update_metadata({'foo': 'baz', 'fu': 'bar'}) aggs = aggregate_obj.AggregateList.get_by_metadata( self.context, key='foo', value='bar') self.assertEqual(1, len(aggs)) self.assertEqual(1, aggs[0].id) aggs = aggregate_obj.AggregateList.get_by_metadata( self.context, value='bar') self.assertEqual(2, len(aggs)) self.assertEqual(set([1, 2]), set([a.id for a in aggs])) def test_get_by_metadata_from_db_assertion(self): self.assertRaises(AssertionError, aggregate_obj._get_by_metadata_from_db, self.context) def test_get_non_matching_by_metadata_keys(self): """Test aggregates that are not matching with metadata.""" agg = aggregate_obj.Aggregate.get_by_id(self.context, 1) agg.update_metadata({'trait:HW_CPU_X86_MMX': 'required'}) agg = aggregate_obj.Aggregate.get_by_id(self.context, 2) agg.update_metadata({'trait:HW_CPU_X86_SGX': 'required'}) agg = aggregate_obj.Aggregate.get_by_id(self.context, 3) agg.update_metadata({'trait:HW_CPU_X86_MMX': 'required', 'trait:HW_CPU_X86_SGX': 'required'}) agg = aggregate_obj.Aggregate.get_by_id(self.context, 4) agg.update_metadata({'trait:HW_CPU_X86_MMX': 'required', 'trait:HW_CPU_X86_SGX': 'just_for_marking'}) agg = aggregate_obj.Aggregate.get_by_id(self.context, 5) agg.update_metadata({'trait:HW_CPU_X86_MMX': 'just_for_marking'}) aggs = aggregate_obj.AggregateList.get_non_matching_by_metadata_keys( self.context, ['trait:HW_CPU_X86_MMX'], 'trait:', value='required') self.assertEqual(2, len(aggs)) self.assertItemsEqual([2, 3], [a.id for a in aggs]) def test_matching_aggregates_multiple_keys(self): """All matching aggregates for multiple keys.""" agg = aggregate_obj.Aggregate.get_by_id(self.context, 1) agg.update_metadata({'trait:HW_CPU_X86_MMX': 'required'}) agg = aggregate_obj.Aggregate.get_by_id(self.context, 2) agg.update_metadata({'trait:HW_CPU_X86_SGX': 'required'}) agg = aggregate_obj.Aggregate.get_by_id(self.context, 3) agg.update_metadata({'trait:HW_CPU_X86_MMX': 'required', 'trait:HW_CPU_X86_SGX': 'required'}) agg = aggregate_obj.Aggregate.get_by_id(self.context, 4) agg.update_metadata({'trait:HW_CPU_X86_MMX': 'required', 'trait:HW_CPU_X86_SGX': 'just_for_marking'}) aggs = aggregate_obj.AggregateList.get_non_matching_by_metadata_keys( self.context, ['trait:HW_CPU_X86_MMX', 'trait:HW_CPU_X86_SGX'], 'trait:', value='required') 
self.assertEqual(0, len(aggs)) def test_get_non_matching_aggregates_multiple_keys(self): """Return non matching aggregates for multiple keys.""" agg = aggregate_obj.Aggregate.get_by_id(self.context, 1) agg.update_metadata({'trait:HW_CPU_X86_MMX': 'required'}) agg = aggregate_obj.Aggregate.get_by_id(self.context, 2) agg.update_metadata({'trait:HW_CPU_X86_MMX': 'required', 'trait:HW_CPU_X86_SGX': 'required', 'trait:HW_CPU_API_DXVA': 'required'}) agg = aggregate_obj.Aggregate.get_by_id(self.context, 3) agg.update_metadata({'trait:HW_CPU_X86_MMX': 'required', 'trait:HW_CPU_X86_SGX': 'required'}) agg = aggregate_obj.Aggregate.get_by_id(self.context, 4) agg.update_metadata({'trait:HW_CPU_X86_MMX': 'required', 'trait:HW_CPU_X86_SGX': 'just_for_marking'}) agg = aggregate_obj.Aggregate.get_by_id(self.context, 5) agg.update_metadata({'trait:HW_CPU_X86_MMX': 'required', 'trait:HW_CPU_X86_SGX': 'just_for_marking', 'trait:HW_CPU_X86_SSE': 'required'}) aggs = aggregate_obj.AggregateList.get_non_matching_by_metadata_keys( self.context, ['trait:HW_CPU_X86_MMX', 'trait:HW_CPU_X86_SGX'], 'trait:', value='required') self.assertEqual(2, len(aggs)) self.assertItemsEqual([2, 5], [a.id for a in aggs]) def test_get_non_matching_by_metadata_keys_empty_keys(self): """Test aggregates non matching by metadata with empty keys.""" agg = aggregate_obj.Aggregate.get_by_id(self.context, 1) agg.update_metadata({'trait:HW_CPU_X86_MMX': 'required'}) agg = aggregate_obj.Aggregate.get_by_id(self.context, 2) agg.update_metadata({'trait:HW_CPU_X86_SGX': 'required'}) agg = aggregate_obj.Aggregate.get_by_id(self.context, 3) agg.update_metadata({'trait:HW_CPU_X86_MMX': 'required', 'trait:HW_CPU_X86_SGX': 'required'}) agg = aggregate_obj.Aggregate.get_by_id(self.context, 4) agg.update_metadata({'trait:HW_CPU_X86_MMX': 'required', 'trait:HW_CPU_X86_SGX': 'just_for_marking'}) agg = aggregate_obj.Aggregate.get_by_id(self.context, 5) agg.update_metadata({'trait:HW_CPU_X86_MMX': 'required', 'trait:HW_CPU_X86_SGX': 'just_for_marking', 'trait:HW_CPU_X86_SSE': 'required'}) aggs = aggregate_obj.AggregateList.get_non_matching_by_metadata_keys( self.context, [], 'trait:', value='required') self.assertEqual(5, len(aggs)) self.assertItemsEqual([1, 2, 3, 4, 5], [a.id for a in aggs]) def test_get_non_matching_by_metadata_keys_empty_key_prefix(self): """Test aggregates non matching by metadata with empty key_prefix.""" self.assertRaises( ValueError, aggregate_obj.AggregateList.get_non_matching_by_metadata_keys, self.context, ['trait:HW_CPU_X86_MMX'], '', value='required') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/db/test_aggregate_model.py0000664000175000017500000000434600000000000024150 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
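# This module asserts column parity between the API database models and the
# main (cell) database models for the aggregate-related tables. The core idea,
# sketched loosely here (the real check lives in _compare_models below), is to
# compare column-name lists after discounting the soft-delete columns that
# only the cell-side tables carry:
#
#     api_cols = [c.key for c in api_models.Aggregate().__table__.columns]
#     cell_cols = [c.key for c in models.Aggregate().__table__.columns]
#     # cell_cols minus ('deleted', 'deleted_at') should match api_cols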
from nova.db.sqlalchemy import api_models from nova.db.sqlalchemy import models from nova import test class AggregateTablesCompareTestCase(test.NoDBTestCase): def _get_column_list(self, model): column_list = [m.key for m in model.__table__.columns] return column_list def _check_column_list(self, columns_new, columns_old, added=None, removed=None): for c in added or []: columns_new.remove(c) for c in removed or []: columns_old.remove(c) intersect = set(columns_new).intersection(set(columns_old)) if intersect != set(columns_new) or intersect != set(columns_old): return False return True def _compare_models(self, m_a, m_b, added=None, removed=None): added = added or [] removed = removed or ['deleted_at', 'deleted'] c_a = self._get_column_list(m_a) c_b = self._get_column_list(m_b) self.assertTrue(self._check_column_list(c_a, c_b, added=added, removed=removed)) def test_tables_aggregate_hosts(self): self._compare_models(api_models.AggregateHost(), models.AggregateHost()) def test_tables_aggregate_metadata(self): self._compare_models(api_models.AggregateMetadata(), models.AggregateMetadata()) def test_tables_aggregates(self): self._compare_models(api_models.Aggregate(), models.Aggregate()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/db/test_archive.py0000664000175000017500000002743400000000000022466 0ustar00zuulzuul00000000000000# Copyright 2015 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import re from dateutil import parser as dateutil_parser from oslo_utils import timeutils from sqlalchemy import func from sqlalchemy import MetaData from sqlalchemy import select from nova import context from nova.db import api as db from nova.db.sqlalchemy import api as sqlalchemy_api from nova.tests.functional import test_servers class TestDatabaseArchive(test_servers.ServersTestBase): """Tests DB API for archiving (soft) deleted records""" def setUp(self): super(TestDatabaseArchive, self).setUp() self.enforce_fk_constraints() def test_archive_deleted_rows(self): # Boots a server, deletes it, and then tries to archive it. server = self._create_server() server_id = server['id'] # Assert that there are instance_actions. instance_actions are # interesting since we don't soft delete them but they have a foreign # key back to the instances table. actions = self.api.get_instance_actions(server_id) self.assertTrue(len(actions), 'No instance actions for server: %s' % server_id) self._delete_server(server) # Verify we have the soft deleted instance in the database. admin_context = context.get_admin_context(read_deleted='yes') # This will raise InstanceNotFound if it's not found. instance = db.instance_get_by_uuid(admin_context, server_id) # Make sure it's soft deleted. self.assertNotEqual(0, instance.deleted) # Verify we have some system_metadata since we'll check that later. 
self.assertTrue(len(instance.system_metadata), 'No system_metadata for instance: %s' % server_id) # Now try and archive the soft deleted records. results, deleted_instance_uuids, archived = \ db.archive_deleted_rows(max_rows=100) # verify system_metadata was dropped self.assertIn('instance_system_metadata', results) self.assertEqual(len(instance.system_metadata), results['instance_system_metadata']) # Verify that instances rows are dropped self.assertIn('instances', results) # Verify that instance_actions and actions_event are dropped # by the archive self.assertIn('instance_actions', results) self.assertIn('instance_actions_events', results) self.assertEqual(sum(results.values()), archived) def test_archive_deleted_rows_with_undeleted_residue(self): # Boots a server, deletes it, and then tries to archive it. server = self._create_server() server_id = server['id'] # Assert that there are instance_actions. instance_actions are # interesting since we don't soft delete them but they have a foreign # key back to the instances table. actions = self.api.get_instance_actions(server_id) self.assertTrue(len(actions), 'No instance actions for server: %s' % server_id) self._delete_server(server) # Verify we have the soft deleted instance in the database. admin_context = context.get_admin_context(read_deleted='yes') # This will raise InstanceNotFound if it's not found. instance = db.instance_get_by_uuid(admin_context, server_id) # Make sure it's soft deleted. self.assertNotEqual(0, instance.deleted) # Undelete the instance_extra record to make sure we delete it anyway extra = db.instance_extra_get_by_instance_uuid(admin_context, instance.uuid) self.assertNotEqual(0, extra.deleted) db.instance_extra_update_by_uuid(admin_context, instance.uuid, {'deleted': 0}) extra = db.instance_extra_get_by_instance_uuid(admin_context, instance.uuid) self.assertEqual(0, extra.deleted) # Verify we have some system_metadata since we'll check that later. self.assertTrue(len(instance.system_metadata), 'No system_metadata for instance: %s' % server_id) # Create a pci_devices record to simulate an instance that had a PCI # device allocated at the time it was deleted. There is a window of # time between deletion of the instance record and freeing of the PCI # device in nova-compute's _complete_deletion method during RT update. db.pci_device_update(admin_context, 1, 'fake-address', {'compute_node_id': 1, 'address': 'fake-address', 'vendor_id': 'fake', 'product_id': 'fake', 'dev_type': 'fake', 'label': 'fake', 'status': 'allocated', 'instance_uuid': instance.uuid}) # Now try and archive the soft deleted records. results, deleted_instance_uuids, archived = \ db.archive_deleted_rows(max_rows=100) # verify system_metadata was dropped self.assertIn('instance_system_metadata', results) self.assertEqual(len(instance.system_metadata), results['instance_system_metadata']) # Verify that instances rows are dropped self.assertIn('instances', results) # Verify that instance_actions and actions_event are dropped # by the archive self.assertIn('instance_actions', results) self.assertIn('instance_actions_events', results) self.assertEqual(sum(results.values()), archived) # Verify that the pci_devices record has not been dropped self.assertNotIn('pci_devices', results) def test_archive_deleted_rows_incomplete(self): """This tests a scenario where archive_deleted_rows is run with --max_rows and does not run to completion. That is, the archive is stopped before all archivable records have been archived. 
Specifically, the problematic state is when a single instance becomes partially archived (example: 'instance_extra' record for one instance has been archived while its 'instances' record remains). Any access of the instance (example: listing deleted instances) that triggers the retrieval of a dependent record that has been archived away, results in undefined behavior that may raise an error. We will force the system into a state where a single deleted instance is partially archived. We want to verify that we can, for example, successfully do a GET /servers/detail at any point between partial archive_deleted_rows runs without errors. """ # Boots a server, deletes it, and then tries to archive it. server = self._create_server() server_id = server['id'] # Assert that there are instance_actions. instance_actions are # interesting since we don't soft delete them but they have a foreign # key back to the instances table. actions = self.api.get_instance_actions(server_id) self.assertTrue(len(actions), 'No instance actions for server: %s' % server_id) self._delete_server(server) # Archive deleted records iteratively, 1 row at a time, and try to do a # GET /servers/detail between each run. All should succeed. exceptions = [] while True: _, _, archived = db.archive_deleted_rows(max_rows=1) try: # Need to use the admin API to list deleted servers. self.admin_api.get_servers(search_opts={'deleted': True}) except Exception as ex: exceptions.append(ex) if archived == 0: break self.assertFalse(exceptions) def _get_table_counts(self): engine = sqlalchemy_api.get_engine() conn = engine.connect() meta = MetaData(engine) meta.reflect() shadow_tables = sqlalchemy_api._purgeable_tables(meta) results = {} for table in shadow_tables: r = conn.execute( select([func.count()]).select_from(table)).fetchone() results[table.name] = r[0] return results def test_archive_then_purge_all(self): server = self._create_server() server_id = server['id'] self._delete_server(server) results, deleted_ids, archived = db.archive_deleted_rows(max_rows=1000) self.assertEqual([server_id], deleted_ids) lines = [] def status(msg): lines.append(msg) admin_context = context.get_admin_context() deleted = sqlalchemy_api.purge_shadow_tables(admin_context, None, status_fn=status) self.assertNotEqual(0, deleted) self.assertNotEqual(0, len(lines)) self.assertEqual(sum(results.values()), archived) for line in lines: self.assertIsNotNone(re.match(r'Deleted [1-9][0-9]* rows from .*', line)) results = self._get_table_counts() # No table should have any rows self.assertFalse(any(results.values())) def test_archive_then_purge_by_date(self): server = self._create_server() server_id = server['id'] self._delete_server(server) results, deleted_ids, archived = db.archive_deleted_rows(max_rows=1000) self.assertEqual([server_id], deleted_ids) self.assertEqual(sum(results.values()), archived) pre_purge_results = self._get_table_counts() past = timeutils.utcnow() - datetime.timedelta(hours=1) admin_context = context.get_admin_context() deleted = sqlalchemy_api.purge_shadow_tables(admin_context, past) # Make sure we didn't delete anything if the marker is before # we started self.assertEqual(0, deleted) results = self._get_table_counts() # Nothing should be changed if we didn't purge anything self.assertEqual(pre_purge_results, results) future = timeutils.utcnow() + datetime.timedelta(hours=1) deleted = sqlalchemy_api.purge_shadow_tables(admin_context, future) # Make sure we deleted things when the marker is after # we started self.assertNotEqual(0, deleted) 
results = self._get_table_counts() # There should be no rows in any table if we purged everything self.assertFalse(any(results.values())) def test_purge_with_real_date(self): """Make sure the result of dateutil's parser works with the query we're making to sqlalchemy. """ server = self._create_server() server_id = server['id'] self._delete_server(server) results, deleted_ids, archived = db.archive_deleted_rows(max_rows=1000) self.assertEqual([server_id], deleted_ids) date = dateutil_parser.parse('oct 21 2015', fuzzy=True) admin_context = context.get_admin_context() deleted = sqlalchemy_api.purge_shadow_tables(admin_context, date) self.assertEqual(0, deleted) self.assertEqual(sum(results.values()), archived) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/db/test_build_request.py0000664000175000017500000006076700000000000023722 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils from oslo_utils import uuidutils from nova import context from nova import exception from nova import objects from nova.objects import build_request from nova import test from nova.tests import fixtures from nova.tests.unit import fake_build_request from nova.tests.unit import fake_instance class BuildRequestTestCase(test.NoDBTestCase): USES_DB_SELF = True def setUp(self): super(BuildRequestTestCase, self).setUp() # NOTE: This means that we're using a database for this test suite # despite inheriting from NoDBTestCase self.useFixture(fixtures.Database(database='api')) self.context = context.RequestContext('fake-user', 'fake-project') self.build_req_obj = build_request.BuildRequest() self.instance_uuid = uuidutils.generate_uuid() self.project_id = 'fake-project' def _create_req(self): args = fake_build_request.fake_db_req() args.pop('id', None) args['instance_uuid'] = self.instance_uuid args['project_id'] = self.project_id return build_request.BuildRequest._from_db_object(self.context, self.build_req_obj, self.build_req_obj._create_in_db(self.context, args)) def test_get_by_instance_uuid_not_found(self): self.assertRaises(exception.BuildRequestNotFound, self.build_req_obj._get_by_instance_uuid_from_db, self.context, self.instance_uuid) def test_get_by_uuid(self): expected_req = self._create_req() req_obj = self.build_req_obj.get_by_instance_uuid(self.context, self.instance_uuid) for key in self.build_req_obj.fields.keys(): expected = getattr(expected_req, key) db_value = getattr(req_obj, key) if key == 'instance': self.assertTrue(objects.base.obj_equal_prims(expected, db_value)) continue elif key in ('block_device_mappings', 'tags'): self.assertEqual(1, len(db_value)) # Can't compare list objects directly, just compare the single # item they contain. 
self.assertTrue(objects.base.obj_equal_prims(expected[0], db_value[0])) continue self.assertEqual(expected, db_value) def test_destroy(self): self._create_req() db_req = self.build_req_obj.get_by_instance_uuid(self.context, self.instance_uuid) db_req.destroy() self.assertRaises(exception.BuildRequestNotFound, self.build_req_obj._get_by_instance_uuid_from_db, self.context, self.instance_uuid) def test_destroy_twice_raises(self): self._create_req() db_req = self.build_req_obj.get_by_instance_uuid(self.context, self.instance_uuid) db_req.destroy() self.assertRaises(exception.BuildRequestNotFound, db_req.destroy) def test_save(self): self._create_req() db_req = self.build_req_obj.get_by_instance_uuid(self.context, self.instance_uuid) db_req.project_id = 'foobar' db_req.save() updated_req = self.build_req_obj.get_by_instance_uuid( self.context, self.instance_uuid) self.assertEqual('foobar', updated_req.project_id) def test_save_not_found(self): self._create_req() db_req = self.build_req_obj.get_by_instance_uuid(self.context, self.instance_uuid) db_req.project_id = 'foobar' db_req.destroy() self.assertRaises(exception.BuildRequestNotFound, db_req.save) class BuildRequestListTestCase(test.NoDBTestCase): USES_DB_SELF = True def setUp(self): super(BuildRequestListTestCase, self).setUp() # NOTE: This means that we're using a database for this test suite # despite inheriting from NoDBTestCase self.useFixture(fixtures.Database(database='api')) self.project_id = 'fake-project' self.context = context.RequestContext('fake-user', self.project_id) def _create_req(self, project_id=None, instance=None): kwargs = {} if instance: kwargs['instance'] = jsonutils.dumps(instance.obj_to_primitive()) args = fake_build_request.fake_db_req(**kwargs) args.pop('id', None) args['instance_uuid'] = uuidutils.generate_uuid() args['project_id'] = self.project_id if not project_id else project_id return build_request.BuildRequest._from_db_object(self.context, build_request.BuildRequest(), build_request.BuildRequest._create_in_db(self.context, args)) def test_get_all_empty(self): req_objs = build_request.BuildRequestList.get_all(self.context) self.assertEqual([], req_objs.objects) def test_get_all(self): reqs = [self._create_req(), self._create_req()] req_list = build_request.BuildRequestList.get_all(self.context) self.assertEqual(2, len(req_list)) for i in range(len(req_list)): self.assertEqual(reqs[i].instance_uuid, req_list[i].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(reqs[i].instance, req_list[i].instance)) def test_get_all_filter_by_project_id(self): reqs = [self._create_req(), self._create_req(project_id='filter')] req_list = build_request.BuildRequestList.get_all(self.context) self.assertEqual(1, len(req_list)) self.assertEqual(reqs[0].project_id, req_list[0].project_id) self.assertEqual(reqs[0].instance_uuid, req_list[0].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(reqs[0].instance, req_list[0].instance)) def test_get_all_bypass_project_id_filter_as_admin(self): reqs = [self._create_req(), self._create_req(project_id='filter')] req_list = build_request.BuildRequestList.get_all( self.context.elevated()) self.assertEqual(2, len(req_list)) for i in range(len(req_list)): self.assertEqual(reqs[i].project_id, req_list[i].project_id) self.assertEqual(reqs[i].instance_uuid, req_list[i].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(reqs[i].instance, req_list[i].instance)) def test_get_by_filters(self): reqs = [self._create_req(), self._create_req()] req_list = 
build_request.BuildRequestList.get_by_filters( self.context, {}, sort_keys=['id'], sort_dirs=['asc']) self.assertIsInstance(req_list, objects.BuildRequestList) self.assertEqual(2, len(req_list)) for i in range(len(req_list)): self.assertEqual(reqs[i].instance_uuid, req_list[i].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(reqs[i].instance, req_list[i].instance)) def test_get_by_filters_limit_0(self): self._create_req() req_list = build_request.BuildRequestList.get_by_filters( self.context, {}, limit=0) self.assertEqual([], req_list.objects) def test_get_by_filters_deleted(self): self._create_req() req_list = build_request.BuildRequestList.get_by_filters( self.context, {'deleted': True}) self.assertEqual([], req_list.objects) def test_get_by_filters_cleaned(self): self._create_req() req_list = build_request.BuildRequestList.get_by_filters( self.context, {'cleaned': True}) self.assertEqual([], req_list.objects) def test_get_by_filters_exact_match(self): instance_find = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None, image_ref='findme') instance_filter = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None, image_ref='filterme') reqs = [self._create_req(instance=instance_filter), self._create_req(instance=instance_find)] req_list = build_request.BuildRequestList.get_by_filters( self.context, {'image_ref': 'findme'}) self.assertIsInstance(req_list, objects.BuildRequestList) self.assertEqual(1, len(req_list)) self.assertEqual(reqs[1].instance_uuid, req_list[0].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(reqs[1].instance, req_list[0].instance)) def test_get_by_filters_exact_match_list(self): instance_find = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None, image_ref='findme') instance_filter = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None, image_ref='filterme') reqs = [self._create_req(instance=instance_filter), self._create_req(instance=instance_find)] req_list = build_request.BuildRequestList.get_by_filters( self.context, {'image_ref': ['findme', 'fake']}) self.assertIsInstance(req_list, objects.BuildRequestList) self.assertEqual(1, len(req_list)) self.assertEqual(reqs[1].instance_uuid, req_list[0].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(reqs[1].instance, req_list[0].instance)) def test_get_by_filters_exact_match_metadata(self): instance_find = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None, metadata={'foo': 'bar'}, expected_attrs='metadata') instance_filter = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None, metadata={'bar': 'baz'}, expected_attrs='metadata') reqs = [self._create_req(instance=instance_filter), self._create_req(instance=instance_find)] req_list = build_request.BuildRequestList.get_by_filters( self.context, {'metadata': {'foo': 'bar'}}) self.assertIsInstance(req_list, objects.BuildRequestList) self.assertEqual(1, len(req_list)) self.assertEqual(reqs[1].instance_uuid, req_list[0].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(reqs[1].instance, req_list[0].instance)) def test_get_by_filters_exact_match_metadata_list(self): instance_find = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None, metadata={'foo': 'bar', 
'cat': 'meow'}, expected_attrs='metadata') instance_filter = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None, metadata={'bar': 'baz', 'cat': 'meow'}, expected_attrs='metadata') reqs = [self._create_req(instance=instance_filter), self._create_req(instance=instance_find)] req_list = build_request.BuildRequestList.get_by_filters( self.context, {'metadata': [{'foo': 'bar'}, {'cat': 'meow'}]}) self.assertIsInstance(req_list, objects.BuildRequestList) self.assertEqual(1, len(req_list)) self.assertEqual(reqs[1].instance_uuid, req_list[0].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(reqs[1].instance, req_list[0].instance)) def test_get_by_filters_regex_match_one(self): instance_find = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None, display_name='find this one') instance_filter = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None, display_name='filter this one') reqs = [self._create_req(instance=instance_filter), self._create_req(instance=instance_find)] req_list = build_request.BuildRequestList.get_by_filters( self.context, {'display_name': 'find'}) self.assertIsInstance(req_list, objects.BuildRequestList) self.assertEqual(1, len(req_list)) self.assertEqual(reqs[1].instance_uuid, req_list[0].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(reqs[1].instance, req_list[0].instance)) def test_get_by_filters_regex_match_both(self): instance_find = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None, display_name='find this one') instance_filter = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None, display_name='filter this one') reqs = [self._create_req(instance=instance_filter), self._create_req(instance=instance_find)] req_list = build_request.BuildRequestList.get_by_filters( self.context, {'display_name': 'this'}, sort_keys=['id'], sort_dirs=['asc']) self.assertIsInstance(req_list, objects.BuildRequestList) self.assertEqual(2, len(req_list)) for i in range(len(req_list)): self.assertEqual(reqs[i].instance_uuid, req_list[i].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(reqs[i].instance, req_list[i].instance)) def test_get_by_filters_sort_asc(self): instance_1024 = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None, root_gb=1024) instance_512 = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None, root_gb=512) req_second = self._create_req(instance=instance_1024) req_first = self._create_req(instance=instance_512) req_list = build_request.BuildRequestList.get_by_filters( self.context, {}, sort_keys=['root_gb'], sort_dirs=['asc']) self.assertIsInstance(req_list, objects.BuildRequestList) self.assertEqual(2, len(req_list)) self.assertEqual(req_first.instance_uuid, req_list[0].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(req_first.instance, req_list[0].instance)) self.assertEqual(req_second.instance_uuid, req_list[1].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(req_second.instance, req_list[1].instance)) def test_get_by_filters_sort_desc(self): instance_1024 = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None, root_gb=1024) instance_512 = fake_instance.fake_instance_obj( self.context, 
objects.Instance, uuid=uuidutils.generate_uuid(), host=None, root_gb=512) req_second = self._create_req(instance=instance_512) req_first = self._create_req(instance=instance_1024) req_list = build_request.BuildRequestList.get_by_filters( self.context, {}, sort_keys=['root_gb'], sort_dirs=['desc']) self.assertIsInstance(req_list, objects.BuildRequestList) self.assertEqual(2, len(req_list)) self.assertEqual(req_first.instance_uuid, req_list[0].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(req_first.instance, req_list[0].instance)) self.assertEqual(req_second.instance_uuid, req_list[1].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(req_second.instance, req_list[1].instance)) def test_get_by_filters_sort_build_req_id(self): # Create instance objects this way so that there is no 'id' set. # The 'id' will not be populated on a BuildRequest.instance so this # checks that sorting by 'id' uses the BuildRequest.id. instance_1 = objects.Instance(self.context, host=None, uuid=uuidutils.generate_uuid()) instance_2 = objects.Instance(self.context, host=None, uuid=uuidutils.generate_uuid()) req_first = self._create_req(instance=instance_2) req_second = self._create_req(instance=instance_1) req_list = build_request.BuildRequestList.get_by_filters( self.context, {}, sort_keys=['id'], sort_dirs=['asc']) self.assertIsInstance(req_list, objects.BuildRequestList) self.assertEqual(2, len(req_list)) self.assertEqual(req_first.instance_uuid, req_list[0].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(req_first.instance, req_list[0].instance)) self.assertEqual(req_second.instance_uuid, req_list[1].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(req_second.instance, req_list[1].instance)) def test_get_by_filters_multiple_sort_keys(self): instance_first = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None, root_gb=512, image_ref='ccc') instance_second = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None, root_gb=512, image_ref='bbb') instance_third = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None, root_gb=1024, image_ref='aaa') req_first = self._create_req(instance=instance_first) req_third = self._create_req(instance=instance_third) req_second = self._create_req(instance=instance_second) req_list = build_request.BuildRequestList.get_by_filters( self.context, {}, sort_keys=['root_gb', 'image_ref'], sort_dirs=['asc', 'desc']) self.assertIsInstance(req_list, objects.BuildRequestList) self.assertEqual(3, len(req_list)) self.assertEqual(req_first.instance_uuid, req_list[0].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(req_first.instance, req_list[0].instance)) self.assertEqual(req_second.instance_uuid, req_list[1].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(req_second.instance, req_list[1].instance)) self.assertEqual(req_third.instance_uuid, req_list[2].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(req_third.instance, req_list[2].instance)) def test_get_by_filters_marker(self): instance = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None) reqs = [self._create_req(), self._create_req(instance=instance), self._create_req()] req_list = build_request.BuildRequestList.get_by_filters( self.context, {}, marker=instance.uuid, sort_keys=['id'], sort_dirs=['asc']) self.assertIsInstance(req_list, 
objects.BuildRequestList) self.assertEqual(1, len(req_list)) req = req_list[0] expected_req = reqs[2] # The returned build request should be the last one in the reqs list # since the marker is the 2nd item in the list (of 3). self.assertEqual(expected_req.instance_uuid, req.instance_uuid) self.assertTrue(objects.base.obj_equal_prims(expected_req.instance, req.instance)) def test_get_by_filters_marker_not_found(self): self._create_req() self.assertRaises(exception.MarkerNotFound, build_request.BuildRequestList.get_by_filters, self.context, {}, marker=uuidutils.generate_uuid(), sort_keys=['id'], sort_dirs=['asc']) def test_get_by_filters_limit(self): reqs = [self._create_req(), self._create_req(), self._create_req()] req_list = build_request.BuildRequestList.get_by_filters( self.context, {}, limit=2, sort_keys=['id'], sort_dirs=['asc']) self.assertIsInstance(req_list, objects.BuildRequestList) self.assertEqual(2, len(req_list)) for i, req in enumerate(reqs[:2]): self.assertEqual(req.instance_uuid, req_list[i].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(req.instance, req_list[i].instance)) def test_get_by_filters_marker_limit(self): instance = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None) reqs = [self._create_req(), self._create_req(instance=instance), self._create_req(), self._create_req()] req_list = build_request.BuildRequestList.get_by_filters( self.context, {}, marker=instance.uuid, limit=2, sort_keys=['id'], sort_dirs=['asc']) self.assertIsInstance(req_list, objects.BuildRequestList) self.assertEqual(2, len(req_list)) for i, req in enumerate(reqs[2:]): self.assertEqual(req.instance_uuid, req_list[i].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(req.instance, req_list[i].instance)) def test_get_by_filters_marker_overlimit(self): instance = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None) reqs = [self._create_req(), self._create_req(instance=instance), self._create_req(), self._create_req()] req_list = build_request.BuildRequestList.get_by_filters( self.context, {}, marker=instance.uuid, limit=4, sort_keys=['id'], sort_dirs=['asc']) self.assertIsInstance(req_list, objects.BuildRequestList) self.assertEqual(2, len(req_list)) for i, req in enumerate(reqs[2:]): self.assertEqual(req.instance_uuid, req_list[i].instance_uuid) self.assertTrue(objects.base.obj_equal_prims(req.instance, req_list[i].instance)) def test_get_by_filters_bails_on_empty_list_check(self): instance1 = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None, image_ref='') instance2 = fake_instance.fake_instance_obj( self.context, objects.Instance, uuid=uuidutils.generate_uuid(), host=None, image_ref='') self._create_req(instance=instance1) self._create_req(instance=instance2) req_list = build_request.BuildRequestList.get_by_filters( self.context, {'image_ref': []}) self.assertIsInstance(req_list, objects.BuildRequestList) self.assertEqual(0, len(req_list)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/db/test_cell_mapping.py0000664000175000017500000001536600000000000023500 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import uuidutils from nova import context from nova import exception from nova import objects from nova.objects import cell_mapping from nova import test from nova.tests import fixtures SAMPLE_MAPPING = {'uuid': '', 'name': 'fake-cell', 'transport_url': 'rabbit:///', 'database_connection': 'mysql+pymysql:///'} def create_mapping(**kwargs): args = SAMPLE_MAPPING.copy() if 'uuid' not in kwargs: args['uuid'] = uuidutils.generate_uuid() args.update(kwargs) ctxt = context.RequestContext() return cell_mapping.CellMapping._create_in_db(ctxt, args) class CellMappingTestCase(test.NoDBTestCase): USES_DB_SELF = True def setUp(self): super(CellMappingTestCase, self).setUp() self.useFixture(fixtures.Database(database='api')) self.context = context.RequestContext('fake-user', 'fake-project') self.mapping_obj = cell_mapping.CellMapping() def test_get_by_uuid(self): mapping = create_mapping() db_mapping = self.mapping_obj._get_by_uuid_from_db(self.context, mapping['uuid']) for key in self.mapping_obj.fields.keys(): self.assertEqual(db_mapping[key], mapping[key]) def test_get_by_uuid_not_found(self): self.assertRaises(exception.CellMappingNotFound, self.mapping_obj._get_by_uuid_from_db, self.context, uuidutils.generate_uuid()) def test_save_in_db(self): mapping = create_mapping() self.mapping_obj._save_in_db(self.context, mapping['uuid'], {'name': 'meow'}) db_mapping = self.mapping_obj._get_by_uuid_from_db(self.context, mapping['uuid']) self.assertNotEqual(db_mapping['name'], mapping['name']) for key in [key for key in self.mapping_obj.fields.keys() if key not in ['name', 'updated_at']]: self.assertEqual(db_mapping[key], mapping[key]) def test_destroy_in_db(self): mapping = create_mapping() self.mapping_obj._get_by_uuid_from_db(self.context, mapping['uuid']) self.mapping_obj._destroy_in_db(self.context, mapping['uuid']) self.assertRaises(exception.CellMappingNotFound, self.mapping_obj._get_by_uuid_from_db, self.context, mapping['uuid']) def test_destroy_in_db_not_found(self): self.assertRaises(exception.CellMappingNotFound, self.mapping_obj._destroy_in_db, self.context, uuidutils.generate_uuid()) class CellMappingListTestCase(test.NoDBTestCase): USES_DB_SELF = True def setUp(self): super(CellMappingListTestCase, self).setUp() self.useFixture(fixtures.Database(database='api')) def test_get_all(self): mappings = {} mapping = create_mapping() mappings[mapping['uuid']] = mapping mapping = create_mapping() mappings[mapping['uuid']] = mapping ctxt = context.RequestContext() db_mappings = cell_mapping.CellMappingList._get_all_from_db(ctxt) for db_mapping in db_mappings: mapping = mappings[db_mapping.uuid] for key in cell_mapping.CellMapping.fields.keys(): self.assertEqual(db_mapping[key], mapping[key]) def test_get_by_disabled(self): enabled_mapping = create_mapping(disabled=False) disabled_mapping = create_mapping(disabled=True) ctxt = context.RequestContext() mappings = cell_mapping.CellMappingList.get_all(ctxt) self.assertEqual(2, len(mappings)) self.assertEqual(enabled_mapping['uuid'], mappings[0].uuid) self.assertEqual(disabled_mapping['uuid'], mappings[1].uuid) mappings = 
cell_mapping.CellMappingList.get_by_disabled(ctxt, disabled=False) self.assertEqual(1, len(mappings)) self.assertEqual(enabled_mapping['uuid'], mappings[0].uuid) mappings = cell_mapping.CellMappingList.get_by_disabled(ctxt, disabled=True) self.assertEqual(1, len(mappings)) self.assertEqual(disabled_mapping['uuid'], mappings[0].uuid) def test_get_by_project_id(self): ctxt = context.RequestContext() cell1 = objects.CellMapping.get_by_uuid(ctxt, create_mapping().uuid) cell2 = objects.CellMapping.get_by_uuid(ctxt, create_mapping().uuid) cell3 = objects.CellMapping.get_by_uuid(ctxt, create_mapping().uuid) cells = [cell1, cell2, cell3] # Proj1 is all in one cell for i in range(0, 5): uuid = uuidutils.generate_uuid() im = objects.InstanceMapping(context=ctxt, instance_uuid=uuid, cell_mapping=cell1, project_id='proj1') im.create() # Proj2 is in the first two cells for i in range(0, 5): uuid = uuidutils.generate_uuid() cell = cells[i % 2] im = objects.InstanceMapping(context=ctxt, instance_uuid=uuid, cell_mapping=cell, project_id='proj2') im.create() # One mapping has no cell. This helps ensure that our query # filters out any mappings that aren't tied to a cell. im = objects.InstanceMapping(context=ctxt, instance_uuid=uuidutils.generate_uuid(), cell_mapping=None, project_id='proj2') im.create() # Proj1 should only be in cell1 and we should only get back # a single mapping for it cells = objects.CellMappingList.get_by_project_id(ctxt, 'proj1') self.assertEqual(1, len(cells)) self.assertEqual(cell1.uuid, cells[0].uuid) cells = objects.CellMappingList.get_by_project_id(ctxt, 'proj2') self.assertEqual(2, len(cells)) self.assertEqual(sorted([cell1.uuid, cell2.uuid]), sorted([cm.uuid for cm in cells])) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/db/test_compute_api.py0000664000175000017500000000471500000000000023347 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils.fixture import uuidsentinel as uuids from nova.compute import api as compute_api from nova import context as nova_context from nova import exception from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures class ComputeAPITestCase(test.NoDBTestCase): USES_DB_SELF = True def setUp(self): super(ComputeAPITestCase, self).setUp() self.useFixture(nova_fixtures.Database(database='api')) @mock.patch('nova.objects.instance_mapping.InstanceMapping.create') def test_reqspec_buildreq_instmapping_single_transaction(self, mock_create): # Simulate a DBError during an INSERT by raising an exception from the # InstanceMapping.create method. 
mock_create.side_effect = test.TestingException('oops') ctxt = nova_context.RequestContext('fake-user', 'fake-project') rs = objects.RequestSpec(context=ctxt, instance_uuid=uuids.inst) # project_id and instance cannot be None br = objects.BuildRequest(context=ctxt, instance_uuid=uuids.inst, project_id=ctxt.project_id, instance=objects.Instance()) im = objects.InstanceMapping(context=ctxt, instance_uuid=uuids.inst) self.assertRaises( test.TestingException, compute_api.API._create_reqspec_buildreq_instmapping, ctxt, rs, br, im) # Since the instance mapping failed to INSERT, we should not have # written a request spec record or a build request record. self.assertRaises( exception.RequestSpecNotFound, objects.RequestSpec.get_by_instance_uuid, ctxt, uuids.inst) self.assertRaises( exception.BuildRequestNotFound, objects.BuildRequest.get_by_instance_uuid, ctxt, uuids.inst) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/db/test_compute_node.py0000664000175000017500000002344700000000000023526 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils.fixture import uuidsentinel import nova.conf from nova import context from nova.db import api as db from nova import objects from nova.objects import compute_node from nova.objects import fields as obj_fields from nova import test CONF = nova.conf.CONF _HOSTNAME = 'fake-host' _NODENAME = 'fake-node' _VIRT_DRIVER_AVAIL_RESOURCES = { 'vcpus': 4, 'memory_mb': 512, 'local_gb': 6, 'vcpus_used': 0, 'memory_mb_used': 0, 'local_gb_used': 0, 'hypervisor_type': 'fake', 'hypervisor_version': 0, 'hypervisor_hostname': _NODENAME, 'cpu_info': '', 'numa_topology': None, } fake_compute_obj = objects.ComputeNode( host=_HOSTNAME, vcpus=_VIRT_DRIVER_AVAIL_RESOURCES['vcpus'], memory_mb=_VIRT_DRIVER_AVAIL_RESOURCES['memory_mb'], local_gb=_VIRT_DRIVER_AVAIL_RESOURCES['local_gb'], vcpus_used=_VIRT_DRIVER_AVAIL_RESOURCES['vcpus_used'], memory_mb_used=_VIRT_DRIVER_AVAIL_RESOURCES['memory_mb_used'], local_gb_used=_VIRT_DRIVER_AVAIL_RESOURCES['local_gb_used'], hypervisor_type='fake', hypervisor_version=0, hypervisor_hostname=_HOSTNAME, free_ram_mb=(_VIRT_DRIVER_AVAIL_RESOURCES['memory_mb'] - _VIRT_DRIVER_AVAIL_RESOURCES['memory_mb_used']), free_disk_gb=(_VIRT_DRIVER_AVAIL_RESOURCES['local_gb'] - _VIRT_DRIVER_AVAIL_RESOURCES['local_gb_used']), current_workload=0, running_vms=0, cpu_info='{}', disk_available_least=0, host_ip='1.1.1.1', supported_hv_specs=[ objects.HVSpec.from_list([ obj_fields.Architecture.I686, obj_fields.HVType.KVM, obj_fields.VMMode.HVM]) ], metrics=None, pci_device_pools=None, extra_resources=None, stats={}, numa_topology=None, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0, ) class ComputeNodeTestCase(test.TestCase): def setUp(self): super(ComputeNodeTestCase, self).setUp() self.context = context.RequestContext('fake-user', 'fake-project') def _create_zero_and_none_cn(self): cn1 = fake_compute_obj.obj_clone() 
cn1._context = self.context cn1.create() db.compute_node_update(self.context, cn1.id, {'cpu_allocation_ratio': 0.0, 'disk_allocation_ratio': 0.0, 'ram_allocation_ratio': 0.0}) cn1_db = db.compute_node_get(self.context, cn1.id) for x in ['cpu', 'disk', 'ram']: self.assertEqual(0.0, cn1_db['%s_allocation_ratio' % x]) cn2 = fake_compute_obj.obj_clone() cn2._context = self.context cn2.host += '-alt' cn2.create() # We can't set a cn_obj.xxx_allocation_ratio to None, # so we set ratio to None in db directly db.compute_node_update(self.context, cn2.id, {'cpu_allocation_ratio': None, 'disk_allocation_ratio': None, 'ram_allocation_ratio': None}) cn2_db = db.compute_node_get(self.context, cn2.id) for x in ['cpu', 'disk', 'ram']: self.assertIsNone(cn2_db['%s_allocation_ratio' % x]) def test_get_all_by_uuids(self): cn1 = fake_compute_obj.obj_clone() cn1._context = self.context cn1.create() cn2 = fake_compute_obj.obj_clone() cn2._context = self.context # Two compute nodes can't have the same tuple (host, node, deleted) cn2.host = _HOSTNAME + '2' cn2.create() # A deleted compute node cn3 = fake_compute_obj.obj_clone() cn3._context = self.context cn3.host = _HOSTNAME + '3' cn3.create() cn3.destroy() cns = objects.ComputeNodeList.get_all_by_uuids(self.context, []) self.assertEqual(0, len(cns)) # Ensure that asking for one compute node when there are multiple only # returns the one we want. cns = objects.ComputeNodeList.get_all_by_uuids(self.context, [cn1.uuid]) self.assertEqual(1, len(cns)) cns = objects.ComputeNodeList.get_all_by_uuids(self.context, [cn1.uuid, cn2.uuid]) self.assertEqual(2, len(cns)) # Ensure that asking for a non-existing UUID along with # existing UUIDs doesn't limit the return of the existing # compute nodes... cns = objects.ComputeNodeList.get_all_by_uuids(self.context, [cn1.uuid, cn2.uuid, uuidsentinel.noexists]) self.assertEqual(2, len(cns)) # Ensure we don't get the deleted one, even if we ask for it cns = objects.ComputeNodeList.get_all_by_uuids(self.context, [cn1.uuid, cn2.uuid, cn3.uuid]) self.assertEqual(2, len(cns)) def test_get_by_hypervisor_type(self): cn1 = fake_compute_obj.obj_clone() cn1._context = self.context cn1.hypervisor_type = 'ironic' cn1.create() cn2 = fake_compute_obj.obj_clone() cn2._context = self.context cn2.hypervisor_type = 'libvirt' cn2.host += '-alt' cn2.create() cns = objects.ComputeNodeList.get_by_hypervisor_type(self.context, 'ironic') self.assertEqual(1, len(cns)) self.assertEqual(cn1.uuid, cns[0].uuid) def test_ratio_online_migration_when_load(self): # set cpu and disk, and leave ram unset(None) self.flags(cpu_allocation_ratio=1.0) self.flags(disk_allocation_ratio=2.0) self._create_zero_and_none_cn() # trigger online migration objects.ComputeNodeList.get_all(self.context) cns = db.compute_node_get_all(self.context) for cn in cns: # the cpu/disk ratio is refreshed to CONF.xxx_allocation_ratio self.assertEqual(CONF.cpu_allocation_ratio, cn['cpu_allocation_ratio']) self.assertEqual(CONF.disk_allocation_ratio, cn['disk_allocation_ratio']) # the ram ratio is refreshed to CONF.initial_xxx_allocation_ratio self.assertEqual(CONF.initial_ram_allocation_ratio, cn['ram_allocation_ratio']) def test_migrate_empty_ratio(self): # we have 5 records to process, the last of which is deleted for i in range(5): cn = fake_compute_obj.obj_clone() cn._context = self.context cn.host += '-alt-%s' % i cn.create() db.compute_node_update(self.context, cn.id, {'cpu_allocation_ratio': 0.0}) if i == 4: cn.destroy() # first only process 2 res = 
compute_node.migrate_empty_ratio(self.context, 2)
        self.assertEqual(res, (2, 2))
        # then process others - there should only be 2 found since one
        # of the remaining compute nodes is deleted and gets filtered out
        res = compute_node.migrate_empty_ratio(self.context, 999)
        self.assertEqual(res, (2, 2))

    def test_migrate_none_or_zero_ratio_with_none_ratio_conf(self):
        cn1 = fake_compute_obj.obj_clone()
        cn1._context = self.context
        cn1.create()
        db.compute_node_update(self.context, cn1.id,
                               {'cpu_allocation_ratio': 0.0,
                                'disk_allocation_ratio': 0.0,
                                'ram_allocation_ratio': 0.0})

        self.flags(initial_cpu_allocation_ratio=32.0)
        self.flags(initial_ram_allocation_ratio=8.0)
        self.flags(initial_disk_allocation_ratio=2.0)

        res = compute_node.migrate_empty_ratio(self.context, 1)
        self.assertEqual(res, (1, 1))
        # the ratios are refreshed to CONF.initial_xxx_allocation_ratio
        # because CONF.xxx_allocation_ratio is None
        cns = db.compute_node_get_all(self.context)
        for cn in cns:
            for x in ['cpu', 'disk', 'ram']:
                conf_key = 'initial_%s_allocation_ratio' % x
                key = '%s_allocation_ratio' % x
                self.assertEqual(getattr(CONF, conf_key), cn[key])

    def test_migrate_none_or_zero_ratio_with_not_empty_ratio(self):
        cn1 = fake_compute_obj.obj_clone()
        cn1._context = self.context
        cn1.create()
        db.compute_node_update(self.context, cn1.id,
                               {'cpu_allocation_ratio': 32.0,
                                'ram_allocation_ratio': 4.0,
                                'disk_allocation_ratio': 3.0})

        res = compute_node.migrate_empty_ratio(self.context, 1)
        # the non-empty ratio will not be refreshed
        self.assertEqual(res, (0, 0))
        cns = db.compute_node_get_all(self.context)
        for cn in cns:
            self.assertEqual(32.0, cn['cpu_allocation_ratio'])
            self.assertEqual(4.0, cn['ram_allocation_ratio'])
            self.assertEqual(3.0, cn['disk_allocation_ratio'])

nova-21.2.4/nova/tests/functional/db/test_connection_switch.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import uuidutils from nova import context from nova import exception from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures class ConnectionSwitchTestCase(test.NoDBTestCase): USES_DB_SELF = True test_filename = 'foo.db' fake_conn = 'sqlite:///' + test_filename def setUp(self): super(ConnectionSwitchTestCase, self).setUp() self.addCleanup(self.cleanup) self.useFixture(nova_fixtures.Database(database='api')) self.useFixture(nova_fixtures.Database(database='main')) # Use a file-based sqlite database so data will persist across new # connections # The 'main' database connection will stay open, so in-memory is fine self.useFixture(nova_fixtures.Database(connection=self.fake_conn)) def cleanup(self): try: os.remove(self.test_filename) except OSError: pass def test_connection_switch(self): ctxt = context.RequestContext('fake-user', 'fake-project') # Make a request context with a cell mapping mapping = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection=self.fake_conn, transport_url='none:///') mapping.create() # Create an instance in the cell database uuid = uuidutils.generate_uuid() with context.target_cell(ctxt, mapping) as cctxt: # Must set project_id because instance get specifies # project_only=True to model_query, which means non-admin # users can only read instances for their project instance = objects.Instance(context=cctxt, uuid=uuid, project_id='fake-project') instance.create() # Verify the instance is found in the cell database inst = objects.Instance.get_by_uuid(cctxt, uuid) self.assertEqual(uuid, inst.uuid) # Verify the instance isn't found in the main database self.assertRaises(exception.InstanceNotFound, objects.Instance.get_by_uuid, ctxt, uuid) class CellDatabasesTestCase(test.NoDBTestCase): USES_DB_SELF = True def setUp(self): super(CellDatabasesTestCase, self).setUp() self.useFixture(nova_fixtures.Database(database='api')) fix = nova_fixtures.CellDatabases() fix.add_cell_database('cell0') fix.add_cell_database('cell1') fix.add_cell_database('cell2') self.useFixture(fix) self.context = context.RequestContext('fake-user', 'fake-project') def _create_cell_mappings(self): cell0_uuid = objects.CellMapping.CELL0_UUID self.mapping0 = objects.CellMapping(context=self.context, uuid=cell0_uuid, database_connection='cell0', transport_url='none:///') self.mapping1 = objects.CellMapping(context=self.context, uuid=uuidutils.generate_uuid(), database_connection='cell1', transport_url='none:///') self.mapping2 = objects.CellMapping(context=self.context, uuid=uuidutils.generate_uuid(), database_connection='cell2', transport_url='none:///') self.mapping0.create() self.mapping1.create() self.mapping2.create() def test_cell_dbs(self): self._create_cell_mappings() # Create an instance and read it from cell1 uuid = uuidutils.generate_uuid() with context.target_cell(self.context, self.mapping1) as cctxt: instance = objects.Instance(context=cctxt, uuid=uuid, project_id='fake-project') instance.create() inst = objects.Instance.get_by_uuid(cctxt, uuid) self.assertEqual(uuid, inst.uuid) # Make sure it can't be read from cell2 with context.target_cell(self.context, self.mapping2) as cctxt: self.assertRaises(exception.InstanceNotFound, objects.Instance.get_by_uuid, cctxt, uuid) # Make sure it can still be read from cell1 with context.target_cell(self.context, self.mapping1) as cctxt: inst = objects.Instance.get_by_uuid(cctxt, uuid) self.assertEqual(uuid, 
inst.uuid) # Create an instance and read it from cell2 uuid = uuidutils.generate_uuid() with context.target_cell(self.context, self.mapping2) as cctxt: instance = objects.Instance(context=cctxt, uuid=uuid, project_id='fake-project') instance.create() inst = objects.Instance.get_by_uuid(cctxt, uuid) self.assertEqual(uuid, inst.uuid) # Make sure it can't be read from cell1 with context.target_cell(self.context, self.mapping1) as cctxt: self.assertRaises(exception.InstanceNotFound, objects.Instance.get_by_uuid, cctxt, uuid) def test_scatter_gather_cells(self): self._create_cell_mappings() # Create an instance in cell0 with context.target_cell(self.context, self.mapping0) as cctxt: instance = objects.Instance(context=cctxt, uuid=uuids.instance0, project_id='fake-project') instance.create() # Create an instance in first cell with context.target_cell(self.context, self.mapping1) as cctxt: instance = objects.Instance(context=cctxt, uuid=uuids.instance1, project_id='fake-project') instance.create() # Create an instance in second cell with context.target_cell(self.context, self.mapping2) as cctxt: instance = objects.Instance(context=cctxt, uuid=uuids.instance2, project_id='fake-project') instance.create() filters = {'deleted': False, 'project_id': 'fake-project'} results = context.scatter_gather_all_cells( self.context, objects.InstanceList.get_by_filters, filters, sort_dir='asc') instances = objects.InstanceList() for result in results.values(): instances = instances + result # Should have 3 instances across cells self.assertEqual(3, len(instances)) # Verify we skip cell0 when specified results = context.scatter_gather_skip_cell0( self.context, objects.InstanceList.get_by_filters, filters) instances = objects.InstanceList() for result in results.values(): instances = instances + result # Should have gotten only the instances from the last two cells self.assertEqual(2, len(instances)) self.assertIn(self.mapping1.uuid, results) self.assertIn(self.mapping2.uuid, results) instance_uuids = [inst.uuid for inst in instances] self.assertIn(uuids.instance1, instance_uuids) self.assertIn(uuids.instance2, instance_uuids) # Try passing one cell results = context.scatter_gather_cells( self.context, [self.mapping1], 60, objects.InstanceList.get_by_filters, filters) instances = objects.InstanceList() for result in results.values(): instances = instances + result # Should have gotten only one instance from cell1 self.assertEqual(1, len(instances)) self.assertIn(self.mapping1.uuid, results) self.assertEqual(uuids.instance1, instances[0].uuid) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/db/test_console_auth_token.py0000664000175000017500000000430400000000000024717 0ustar00zuulzuul00000000000000# Copyright 2016 Hewlett Packard Enterprise Development Company LP # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_utils.fixture import uuidsentinel
from oslo_versionedobjects import fixture as ovo_fixture

from nova import context
from nova import exception
from nova import objects
from nova import test


class ConsoleAuthTokenTestCase(test.TestCase):

    def setUp(self):
        super(ConsoleAuthTokenTestCase, self).setUp()
        self.context = context.RequestContext('fake-user', 'fake-project')
        instance = objects.Instance(
            context=self.context,
            project_id=self.context.project_id,
            uuid=uuidsentinel.fake_instance)
        instance.create()
        self.console = objects.ConsoleAuthToken(
            context=self.context,
            instance_uuid=uuidsentinel.fake_instance,
            console_type='fake-type',
            host='fake-host',
            port=1000,
            internal_access_path='fake-internal_access_path',
            access_url_base='fake-external_access_path'
        )
        self.token = self.console.authorize(100)

    def test_validate(self):
        connection_info = objects.ConsoleAuthToken.validate(
            self.context, self.token)
        expected = self.console.obj_to_primitive()['nova_object.data']
        del expected['created_at']
        ovo_fixture.compare_obj(self, connection_info, expected,
                                allow_missing=['created_at'])

    def test_validate_invalid(self):
        unauthorized_token = uuidsentinel.token
        self.assertRaises(
            exception.InvalidToken,
            objects.ConsoleAuthToken.validate,
            self.context, unauthorized_token)

nova-21.2.4/nova/tests/functional/db/test_flavor.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova import context from nova.db.sqlalchemy import api as db_api from nova.db.sqlalchemy import api_models from nova import exception from nova import objects from nova import test from nova.tests import fixtures fake_api_flavor = { 'created_at': None, 'updated_at': None, 'name': 'm1.foo', 'memory_mb': 1024, 'vcpus': 4, 'root_gb': 20, 'ephemeral_gb': 0, 'flavorid': 'm1.foo', 'swap': 0, 'rxtx_factor': 1.0, 'vcpu_weight': 1, 'disabled': False, 'is_public': True, 'extra_specs': {'foo': 'bar'}, 'projects': ['project1', 'project2'], 'description': None } class FlavorObjectTestCase(test.NoDBTestCase): USES_DB_SELF = True def setUp(self): super(FlavorObjectTestCase, self).setUp() self.useFixture(fixtures.Database()) self.useFixture(fixtures.Database(database='api')) self.context = context.RequestContext('fake-user', 'fake-project') def test_create(self): flavor = objects.Flavor(context=self.context, **fake_api_flavor) flavor.create() self.assertIn('id', flavor) # Make sure we find this in the API database flavor2 = objects.Flavor._flavor_get_from_db(self.context, flavor.id) self.assertEqual(flavor.id, flavor2['id']) def test_get_with_no_projects(self): fields = dict(fake_api_flavor, projects=[]) flavor = objects.Flavor(context=self.context, **fields) flavor.create() flavor = objects.Flavor.get_by_flavor_id(self.context, flavor.flavorid) self.assertEqual([], flavor.projects) def test_get_with_projects_and_specs(self): flavor = objects.Flavor(context=self.context, **fake_api_flavor) flavor.create() flavor = objects.Flavor.get_by_id(self.context, flavor.id) self.assertEqual(fake_api_flavor['projects'], flavor.projects) self.assertEqual(fake_api_flavor['extra_specs'], flavor.extra_specs) def _test_query(self, flavor): flavor2 = objects.Flavor.get_by_id(self.context, flavor.id) self.assertEqual(flavor.id, flavor2.id) flavor2 = objects.Flavor.get_by_flavor_id(self.context, flavor.flavorid) self.assertEqual(flavor.id, flavor2.id) flavor2 = objects.Flavor.get_by_name(self.context, flavor.name) self.assertEqual(flavor.id, flavor2.id) def test_query_api(self): flavor = objects.Flavor(context=self.context, **fake_api_flavor) flavor.create() self._test_query(flavor) def test_save(self): flavor = objects.Flavor(context=self.context, **fake_api_flavor) flavor.create() flavor.extra_specs['marty'] = 'mcfly' flavor.extra_specs['foo'] = 'bart' projects = list(flavor.projects) flavor.projects.append('project3') flavor.save() flavor2 = objects.Flavor.get_by_flavor_id(self.context, flavor.flavorid) self.assertEqual({'marty': 'mcfly', 'foo': 'bart'}, flavor2.extra_specs) self.assertEqual(set(projects + ['project3']), set(flavor.projects)) del flavor.extra_specs['foo'] del flavor.projects[-1] flavor.save() flavor2 = objects.Flavor.get_by_flavor_id(self.context, flavor.flavorid) self.assertEqual({'marty': 'mcfly'}, flavor2.extra_specs) self.assertEqual(set(projects), set(flavor2.projects)) @staticmethod @db_api.api_context_manager.reader def _collect_flavor_residue_api(context, flavor): flavors = context.session.query(api_models.Flavors).\ filter_by(id=flavor.id).all() specs = context.session.query(api_models.FlavorExtraSpecs).\ filter_by(flavor_id=flavor.id).all() projects = context.session.query(api_models.FlavorProjects).\ filter_by(flavor_id=flavor.id).all() return len(flavors) + len(specs) + len(projects) def _test_destroy(self, flavor): flavor.destroy() self.assertRaises(exception.FlavorNotFound, objects.Flavor.get_by_name, self.context, flavor.name) def test_destroy_api(self): flavor = 
objects.Flavor(context=self.context, **fake_api_flavor) flavor.create() self._test_destroy(flavor) self.assertEqual( 0, self._collect_flavor_residue_api(self.context, flavor)) def test_destroy_missing_flavor_by_flavorid(self): flavor = objects.Flavor(context=self.context, flavorid='foo') self.assertRaises(exception.FlavorNotFound, flavor.destroy) def test_destroy_missing_flavor_by_id(self): flavor = objects.Flavor(context=self.context, flavorid='foo', id=1234) self.assertRaises(exception.FlavorNotFound, flavor.destroy) def _test_get_all(self, expect_len, marker=None, limit=None): flavors = objects.FlavorList.get_all(self.context, marker=marker, limit=limit) self.assertEqual(expect_len, len(flavors)) return flavors def test_get_all_with_all_api_flavors(self): flavor = objects.Flavor(context=self.context, **fake_api_flavor) flavor.create() self._test_get_all(1) def test_get_all_with_marker_in_api(self): flavor = objects.Flavor(context=self.context, **fake_api_flavor) flavor.create() fake_flavor2 = dict(fake_api_flavor, name='m1.zoo', flavorid='m1.zoo') flavor = objects.Flavor(context=self.context, **fake_flavor2) flavor.create() result = self._test_get_all(1, marker='m1.foo', limit=1) result_flavorids = [x.flavorid for x in result] self.assertEqual(['m1.zoo'], result_flavorids) def test_get_all_with_marker_not_found(self): flavor = objects.Flavor(context=self.context, **fake_api_flavor) flavor.create() fake_flavor2 = dict(fake_api_flavor, name='m1.zoo', flavorid='m1.zoo') flavor = objects.Flavor(context=self.context, **fake_flavor2) flavor.create() self.assertRaises(exception.MarkerNotFound, self._test_get_all, 2, marker='noflavoratall') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/db/test_flavor_model.py0000664000175000017500000000620100000000000023503 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.db.sqlalchemy import api_models from nova.db.sqlalchemy import models from nova import test class FlavorTablesCompareTestCase(test.NoDBTestCase): def _get_columns_list(self, model): columns_list = [m.key for m in model.__table__.columns] return columns_list def _check_column_list(self, columns_new, columns_old): columns_old.remove('deleted_at') columns_old.remove('deleted') intersect = set(columns_new).intersection(set(columns_old)) if intersect != set(columns_new) or intersect != set(columns_old): return False return True def test_tables_flavors_instance_types(self): flavors = api_models.Flavors() instance_types = models.InstanceTypes() columns_flavors = self._get_columns_list(flavors) # The description column is only in the API database so we have to # exclude it from this check. 
        columns_flavors.remove('description')
        columns_instance_types = self._get_columns_list(instance_types)
        self.assertTrue(self._check_column_list(columns_flavors,
                                                columns_instance_types))

    def test_tables_flavor_instance_type_extra_specs(self):
        flavor_extra_specs = api_models.FlavorExtraSpecs()
        instance_type_extra_specs = models.InstanceTypeExtraSpecs()
        columns_flavor_extra_specs = self._get_columns_list(flavor_extra_specs)
        columns_instance_type_extra_specs = self._get_columns_list(
            instance_type_extra_specs)
        columns_flavor_extra_specs.remove('flavor_id')
        columns_instance_type_extra_specs.remove('instance_type_id')
        self.assertTrue(self._check_column_list(
            columns_flavor_extra_specs, columns_instance_type_extra_specs))

    def test_tables_flavor_instance_type_projects(self):
        flavor_projects = api_models.FlavorProjects()
        instance_types_projects = models.InstanceTypeProjects()
        columns_flavor_projects = self._get_columns_list(flavor_projects)
        columns_instance_type_projects = self._get_columns_list(
            instance_types_projects)
        columns_flavor_projects.remove('flavor_id')
        columns_instance_type_projects.remove('instance_type_id')
        self.assertTrue(self._check_column_list(
            columns_flavor_projects, columns_instance_type_projects))

nova-21.2.4/nova/tests/functional/db/test_host_mapping.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import uuidutils from nova import context from nova import exception from nova import objects from nova.objects import cell_mapping from nova.objects import host_mapping from nova import test from nova.tests import fixtures sample_mapping = {'host': 'fake-host', 'cell_mapping': None} sample_cell_mapping = {'id': 1, 'uuid': '', 'name': 'fake-cell', 'transport_url': 'rabbit:///', 'database_connection': 'mysql:///'} def create_cell_mapping(**kwargs): args = sample_cell_mapping.copy() if 'uuid' not in kwargs: args['uuid'] = uuidutils.generate_uuid() args.update(kwargs) ctxt = context.RequestContext('fake-user', 'fake-project') return cell_mapping.CellMapping._create_in_db(ctxt, args) def create_mapping(**kwargs): args = sample_mapping.copy() args.update(kwargs) if args["cell_mapping"] is None: args["cell_mapping"] = create_cell_mapping() args["cell_id"] = args.pop("cell_mapping", {}).get("id") ctxt = context.RequestContext('fake-user', 'fake-project') return host_mapping.HostMapping._create_in_db(ctxt, args) def create_mapping_obj(context, **kwargs): mapping = create_mapping(**kwargs) return host_mapping.HostMapping._from_db_object( context, host_mapping.HostMapping(), mapping) class HostMappingTestCase(test.NoDBTestCase): USES_DB_SELF = True def setUp(self): super(HostMappingTestCase, self).setUp() self.useFixture(fixtures.Database(database='api')) self.context = context.RequestContext('fake-user', 'fake-project') self.mapping_obj = host_mapping.HostMapping() self.cell_mapping_obj = cell_mapping.CellMapping() def _compare_cell_obj_to_mapping(self, obj, mapping): for key in [key for key in self.cell_mapping_obj.fields.keys() if key not in ("created_at", "updated_at")]: self.assertEqual(getattr(obj, key), mapping[key]) def test_get_by_host(self): mapping = create_mapping() db_mapping = self.mapping_obj._get_by_host_from_db( self.context, mapping['host']) for key in self.mapping_obj.fields.keys(): if key == "cell_mapping": key = "cell_id" self.assertEqual(db_mapping[key], mapping[key]) def test_get_by_host_not_found(self): self.assertRaises(exception.HostMappingNotFound, self.mapping_obj._get_by_host_from_db, self.context, 'fake-host2') def test_update_cell_mapping(self): db_hm = create_mapping() db_cell = create_cell_mapping(id=42) cell = cell_mapping.CellMapping.get_by_uuid( self.context, db_cell['uuid']) hm = host_mapping.HostMapping(self.context) hm.id = db_hm['id'] hm.cell_mapping = cell hm.save() self.assertNotEqual(db_hm['cell_id'], hm.cell_mapping.id) for key in hm.fields.keys(): if key in ('updated_at', 'cell_mapping'): continue model_field = getattr(hm, key) if key == 'created_at': model_field = model_field.replace(tzinfo=None) self.assertEqual(db_hm[key], model_field, 'field %s' % key) db_hm_new = host_mapping.HostMapping._get_by_host_from_db( self.context, db_hm['host']) self.assertNotEqual(db_hm['cell_id'], db_hm_new['cell_id']) def test_destroy_in_db(self): mapping = create_mapping() self.mapping_obj._get_by_host_from_db(self.context, mapping['host']) self.mapping_obj._destroy_in_db(self.context, mapping['host']) self.assertRaises(exception.HostMappingNotFound, self.mapping_obj._get_by_host_from_db, self.context, mapping['host']) def test_load_cell_mapping(self): cell = create_cell_mapping(id=42) mapping_obj = create_mapping_obj(self.context, cell_mapping=cell) cell_map_obj = mapping_obj.cell_mapping self._compare_cell_obj_to_mapping(cell_map_obj, cell) def test_host_mapping_list_get_by_cell_id(self): 
"""Tests getting all of the HostMappings for a given CellMapping id. """ # we shouldn't have any host mappings yet self.assertEqual(0, len(host_mapping.HostMappingList.get_by_cell_id( self.context, sample_cell_mapping['id']))) # now create a host mapping db_host_mapping = create_mapping() # now we should list out one host mapping for the cell host_mapping_list = host_mapping.HostMappingList.get_by_cell_id( self.context, db_host_mapping['cell_id']) self.assertEqual(1, len(host_mapping_list)) self.assertEqual(db_host_mapping['id'], host_mapping_list[0].id) class HostMappingDiscoveryTest(test.TestCase): def _setup_cells(self): ctxt = context.get_admin_context() self.celldbs = fixtures.CellDatabases() cells = [] for uuid in (uuids.cell1, uuids.cell2, uuids.cell3): cm = objects.CellMapping(context=ctxt, uuid=uuid, database_connection=uuid, transport_url='fake://') cm.create() cells.append(cm) self.celldbs.add_cell_database(uuid) self.useFixture(self.celldbs) for cell in cells: for i in (1, 2, 3): # Make one host in each cell unmapped mapped = 0 if i == 2 else 1 host = 'host-%s-%i' % (cell.uuid, i) if mapped: hm = objects.HostMapping(context=ctxt, cell_mapping=cell, host=host) hm.create() with context.target_cell(ctxt, cell): cn = objects.ComputeNode( context=ctxt, vcpus=1, memory_mb=1, local_gb=1, vcpus_used=0, memory_mb_used=0, local_gb_used=0, hypervisor_type='danvm', hypervisor_version='1', cpu_info='foo', cpu_allocation_ratio=1.0, ram_allocation_ratio=1.0, disk_allocation_ratio=1.0, mapped=mapped, host=host) cn.create() def test_discover_hosts(self): status = lambda m: None ctxt = context.get_admin_context() # NOTE(danms): Three cells, one unmapped host per cell mappings = host_mapping.discover_hosts(ctxt, status_fn=status) self.assertEqual(3, len(mappings)) # NOTE(danms): All hosts should be mapped now, so we should do # no lookups for them with mock.patch('nova.objects.HostMapping.get_by_host') as mock_gbh: mappings = host_mapping.discover_hosts(ctxt, status_fn=status) self.assertFalse(mock_gbh.called) self.assertEqual(0, len(mappings)) def test_discover_hosts_one_cell(self): status = lambda m: None ctxt = context.get_admin_context() cells = objects.CellMappingList.get_all(ctxt) # NOTE(danms): One cell, one unmapped host per cell mappings = host_mapping.discover_hosts(ctxt, cells[1].uuid, status_fn=status) self.assertEqual(1, len(mappings)) # NOTE(danms): Three cells, two with one more unmapped host mappings = host_mapping.discover_hosts(ctxt, status_fn=status) self.assertEqual(2, len(mappings)) # NOTE(danms): All hosts should be mapped now, so we should do # no lookups for them with mock.patch('nova.objects.HostMapping.get_by_host') as mock_gbh: mappings = host_mapping.discover_hosts(ctxt, status_fn=status) self.assertFalse(mock_gbh.called) self.assertEqual(0, len(mappings)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/db/test_instance.py0000664000175000017500000001571000000000000022643 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import uuidutils from nova.compute import vm_states from nova import context from nova.db import api as db from nova import objects from nova import test class InstanceObjectTestCase(test.TestCase): def setUp(self): super(InstanceObjectTestCase, self).setUp() self.context = context.RequestContext('fake-user', 'fake-project') def _create_instance(self, **values): inst = objects.Instance(context=self.context, project_id=self.context.project_id, user_id=self.context.user_id) inst.update(values) inst.create() return inst def test_get_count_by_vm_state(self): # _create_instance() creates an instance with project_id and user_id # from self.context by default self._create_instance() self._create_instance(vm_state=vm_states.ACTIVE) self._create_instance(vm_state=vm_states.ACTIVE, project_id='foo') self._create_instance(vm_state=vm_states.ACTIVE, user_id='bar') count = objects.InstanceList.get_count_by_vm_state( self.context, self.context.project_id, self.context.user_id, vm_states.ACTIVE) self.assertEqual(1, count) def test_embedded_instance_flavor_description_is_not_persisted(self): """The instance.flavor.description field will not be exposed out of the REST API when showing server details, so we want to make sure the embedded instance.flavor.description is not persisted with the instance_extra.flavor information. """ # Create a flavor with a description. flavorid = uuidutils.generate_uuid() flavor = objects.Flavor(context.get_admin_context(), name=flavorid, flavorid=flavorid, memory_mb=2048, vcpus=2, description='do not persist me in an instance') flavor.create() # Now create the instance with that flavor. instance = self._create_instance(flavor=flavor) # Make sure the embedded flavor.description is nulled out. self.assertIsNone(instance.flavor.description) # Now set the flavor on the instance again to make sure save() does # not persist the flavor.description value. instance.flavor = flavor self.assertIn('flavor', list(instance.obj_what_changed())) instance.save() # Get the instance from the database since our old version is dirty. instance = objects.Instance.get_by_uuid( self.context, instance.uuid, expected_attrs=['flavor']) self.assertIsNone(instance.flavor.description) def test_populate_missing_availability_zones(self): # create two instances once with avz set and other not set. inst1 = self._create_instance(host="fake-host1") uuid1 = inst1.uuid inst2 = self._create_instance(availability_zone="fake", host="fake-host2") # ... and one without a host (simulating failed spawn) self._create_instance(host=None) self.assertIsNone(inst1.availability_zone) self.assertEqual("fake", inst2.availability_zone) count_all, count_hit = (objects.instance. populate_missing_availability_zones(self.context, 10)) # we get only the instance whose avz was None and where host is set self.assertEqual(1, count_all) self.assertEqual(1, count_hit) # since instance has no avz, avz is set by get_host_availability_zone # to CONF.default_availability_zone i.e 'nova' which is the default # zone for compute services. inst1 = objects.Instance.get_by_uuid(self.context, uuid1) self.assertEqual('nova', inst1.availability_zone) # create an instance with avz as None on a host that has avz. 
host = 'fake-host' agg_meta = {'name': 'az_agg', 'uuid': uuidutils.generate_uuid(), 'metadata': {'availability_zone': 'nova-test'}} agg = objects.Aggregate(self.context, **agg_meta) agg.create() agg = objects.Aggregate.get_by_id(self.context, agg.id) values = { 'binary': 'nova-compute', 'host': host, 'topic': 'compute', 'disabled': False, } service = db.service_create(self.context, values) agg.add_host(service['host']) inst3 = self._create_instance(host=host) uuid3 = inst3.uuid self.assertIsNone(inst3.availability_zone) count_all, count_hit = (objects.instance. populate_missing_availability_zones(self.context, 10)) # we get only the instance whose avz was None i.e inst3. self.assertEqual(1, count_all) self.assertEqual(1, count_hit) inst3 = objects.Instance.get_by_uuid(self.context, uuid3) self.assertEqual('nova-test', inst3.availability_zone) def test_get_count_by_hosts(self): self._create_instance(host='fake_host1') self._create_instance(host='fake_host1') self._create_instance(host='fake_host2') count = objects.InstanceList.get_count_by_hosts( self.context, hosts=['fake_host1']) self.assertEqual(2, count) count = objects.InstanceList.get_count_by_hosts( self.context, hosts=['fake_host2']) self.assertEqual(1, count) count = objects.InstanceList.get_count_by_hosts( self.context, hosts=['fake_host1', 'fake_host2']) self.assertEqual(3, count) def test_hidden_instance_not_counted(self): """Tests that a hidden instance is not counted against quota usage.""" # Create an instance that is not hidden and count usage. instance = self._create_instance(vcpus=1, memory_mb=2048) counts = objects.InstanceList.get_counts( self.context, instance.project_id)['project'] self.assertEqual(1, counts['instances']) self.assertEqual(instance.vcpus, counts['cores']) self.assertEqual(instance.memory_mb, counts['ram']) # Now hide the instance and count usage again, everything should be 0. instance.hidden = True instance.save() counts = objects.InstanceList.get_counts( self.context, instance.project_id)['project'] self.assertEqual(0, counts['instances']) self.assertEqual(0, counts['cores']) self.assertEqual(0, counts['ram']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/db/test_instance_group.py0000664000175000017500000001672200000000000024063 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_versionedobjects import fixture as ovo_fixture from nova import context from nova.db.sqlalchemy import api_models from nova import exception from nova import objects from nova.objects import base from nova import test class InstanceGroupObjectTestCase(test.TestCase): def setUp(self): super(InstanceGroupObjectTestCase, self).setUp() self.context = context.RequestContext('fake-user', 'fake-project') def _api_group(self, **values): group = objects.InstanceGroup(context=self.context, user_id=self.context.user_id, project_id=self.context.project_id, name='foogroup', policy='anti-affinity', rules={'max_server_per_host': 1}, members=['memberfoo']) group.update(values) group.create() return group def test_create(self): create_group = self._api_group() db_group = create_group._get_from_db_by_uuid(self.context, create_group.uuid) self.assertIsInstance(db_group.policy, api_models.InstanceGroupPolicy) self.assertEqual(create_group.policies[0], db_group.policy.policy) self.assertEqual(create_group.id, db_group.policy.group_id) ovo_fixture.compare_obj( self, create_group, db_group, comparators={'policy': lambda a, b: b == a.policy}, allow_missing=('deleted', 'deleted_at', 'policies', '_rules')) self.assertEqual({'max_server_per_host': 1}, create_group.rules) def test_destroy(self): create_group = self._api_group() create_group.destroy() self.assertRaises(exception.InstanceGroupNotFound, create_group._get_from_db_by_uuid, self.context, create_group.uuid) @mock.patch('nova.compute.utils.notify_about_server_group_update') def test_save(self, _mock_notify): create_group = self._api_group() create_group.members = ['memberbar1', 'memberbar2'] create_group.name = 'anewname' create_group.save() db_group = create_group._get_from_db_by_uuid(self.context, create_group.uuid) ovo_fixture.compare_obj( self, create_group, db_group, comparators={'policy': lambda a, b: b == a.policy}, allow_missing=('deleted', 'deleted_at', 'policies', '_rules')) self.assertEqual({'max_server_per_host': 1}, create_group.rules) def test_add_members(self): create_group = self._api_group() new_member = ['memberbar'] objects.InstanceGroup.add_members(self.context, create_group.uuid, new_member) db_group = create_group._get_from_db_by_uuid(self.context, create_group.uuid) self.assertEqual(create_group.members + new_member, db_group.members) def test_add_members_to_group_with_no_members(self): create_group = self._api_group(members=[]) new_member = ['memberbar'] objects.InstanceGroup.add_members(self.context, create_group.uuid, new_member) db_group = create_group._get_from_db_by_uuid(self.context, create_group.uuid) self.assertEqual(new_member, db_group.members) def test_remove_members(self): create_group = self._api_group(members=[]) # Add new members. new_members = [uuids.instance1, uuids.instance2, uuids.instance3] objects.InstanceGroup.add_members(self.context, create_group.uuid, new_members) # We already have tests for adding members, so we don't have to # verify they were added. # Remove the first two members we added. objects.InstanceGroup._remove_members_in_db(self.context, create_group.id, new_members[:2]) # Refresh the group from the database. db_group = create_group._get_from_db_by_uuid(self.context, create_group.uuid) # We should have one new member left. 
self.assertEqual([uuids.instance3], db_group.members) def test_get_by_uuid(self): create_group = self._api_group() get_group = objects.InstanceGroup.get_by_uuid(self.context, create_group.uuid) self.assertTrue(base.obj_equal_prims(create_group, get_group)) def test_get_by_name(self): create_group = self._api_group() get_group = objects.InstanceGroup.get_by_name(self.context, create_group.name) self.assertTrue(base.obj_equal_prims(create_group, get_group)) def test_get_by_instance_uuid(self): create_group = self._api_group(members=[uuids.instance]) get_group = objects.InstanceGroup.get_by_instance_uuid(self.context, uuids.instance) self.assertTrue(base.obj_equal_prims(create_group, get_group)) def test_get_by_project_id(self): create_group = self._api_group() get_groups = objects.InstanceGroupList.get_by_project_id( self.context, self.context.project_id) self.assertEqual(1, len(get_groups)) self.assertTrue(base.obj_equal_prims(create_group, get_groups[0])) ovo_fixture.compare_obj(self, get_groups[0], create_group) def test_get_all(self): create_group = self._api_group() get_groups = objects.InstanceGroupList.get_all(self.context) self.assertEqual(1, len(get_groups)) self.assertTrue(base.obj_equal_prims(create_group, get_groups[0])) ovo_fixture.compare_obj(self, get_groups[0], create_group) def test_get_counts(self): # _api_group() creates a group with project_id and user_id from # self.context by default self._api_group() self._api_group(project_id='foo') self._api_group(user_id='bar') # Count only across a project counts = objects.InstanceGroupList.get_counts(self.context, 'foo') self.assertEqual(1, counts['project']['server_groups']) self.assertNotIn('user', counts) # Count across a project and a user counts = objects.InstanceGroupList.get_counts( self.context, self.context.project_id, user_id=self.context.user_id) self.assertEqual(2, counts['project']['server_groups']) self.assertEqual(1, counts['user']['server_groups']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/db/test_instance_mapping.py0000664000175000017500000006646400000000000024372 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock from oslo_utils.fixture import uuidsentinel from oslo_utils import uuidutils from nova.compute import vm_states from nova import context from nova import exception from nova.objects import cell_mapping from nova.objects import instance from nova.objects import instance_mapping from nova.objects import virtual_interface from nova import test from nova.tests import fixtures sample_mapping = {'instance_uuid': '', 'cell_id': 3, 'project_id': 'fake-project', 'user_id': 'fake-user'} sample_cell_mapping = {'id': 3, 'uuid': '', 'name': 'fake-cell', 'transport_url': 'rabbit:///', 'database_connection': 'mysql:///'} def create_cell_mapping(**kwargs): args = sample_cell_mapping.copy() if 'uuid' not in kwargs: args['uuid'] = uuidutils.generate_uuid() args.update(kwargs) ctxt = context.RequestContext('fake-user', 'fake-project') return cell_mapping.CellMapping._create_in_db(ctxt, args) def create_mapping(**kwargs): args = sample_mapping.copy() if 'instance_uuid' not in kwargs: args['instance_uuid'] = uuidutils.generate_uuid() args.update(kwargs) ctxt = context.RequestContext('fake-user', 'fake-project') return instance_mapping.InstanceMapping._create_in_db(ctxt, args) class InstanceMappingTestCase(test.NoDBTestCase): USES_DB_SELF = True def setUp(self): super(InstanceMappingTestCase, self).setUp() self.useFixture(fixtures.Database(database='api')) self.context = context.RequestContext('fake-user', 'fake-project') self.mapping_obj = instance_mapping.InstanceMapping() def test_get_by_instance_uuid(self): cell_mapping = create_cell_mapping() mapping = create_mapping() db_mapping = self.mapping_obj._get_by_instance_uuid_from_db( self.context, mapping['instance_uuid']) for key in [key for key in self.mapping_obj.fields.keys() if key != 'cell_mapping']: self.assertEqual(db_mapping[key], mapping[key]) self.assertEqual(db_mapping['cell_mapping']['id'], cell_mapping['id']) def test_get_by_instance_uuid_not_found(self): self.assertRaises(exception.InstanceMappingNotFound, self.mapping_obj._get_by_instance_uuid_from_db, self.context, uuidutils.generate_uuid()) def test_save_in_db(self): mapping = create_mapping() cell_mapping = create_cell_mapping() self.mapping_obj._save_in_db(self.context, mapping['instance_uuid'], {'cell_id': cell_mapping['id']}) db_mapping = self.mapping_obj._get_by_instance_uuid_from_db( self.context, mapping['instance_uuid']) for key in [key for key in self.mapping_obj.fields.keys() if key not in ['cell_id', 'cell_mapping', 'updated_at']]: self.assertEqual(db_mapping[key], mapping[key]) self.assertEqual(db_mapping['cell_id'], cell_mapping['id']) def test_destroy_in_db(self): mapping = create_mapping() self.mapping_obj._get_by_instance_uuid_from_db(self.context, mapping['instance_uuid']) self.mapping_obj._destroy_in_db(self.context, mapping['instance_uuid']) self.assertRaises(exception.InstanceMappingNotFound, self.mapping_obj._get_by_instance_uuid_from_db, self.context, mapping['instance_uuid']) def test_cell_id_nullable(self): # Just ensure this doesn't raise create_mapping(cell_id=None) def test_modify_cell_mapping(self): inst_mapping = instance_mapping.InstanceMapping(context=self.context) inst_mapping.instance_uuid = uuidutils.generate_uuid() inst_mapping.project_id = self.context.project_id inst_mapping.cell_mapping = None inst_mapping.create() c_mapping = cell_mapping.CellMapping( self.context, uuid=uuidutils.generate_uuid(), name="cell0", transport_url="none:///", database_connection="fake:///") c_mapping.create() inst_mapping.cell_mapping = c_mapping inst_mapping.save() 
result_mapping = instance_mapping.InstanceMapping.get_by_instance_uuid( self.context, inst_mapping.instance_uuid) self.assertEqual(result_mapping.cell_mapping.id, c_mapping.id) def test_populate_queued_for_delete(self): cells = [] celldbs = fixtures.CellDatabases() # Create two cell databases and map them for uuid in (uuidsentinel.cell1, uuidsentinel.cell2): cm = cell_mapping.CellMapping(context=self.context, uuid=uuid, database_connection=uuid, transport_url='fake://') cm.create() cells.append(cm) celldbs.add_cell_database(uuid) self.useFixture(celldbs) # Create 5 instances per cell, two deleted, one with matching # queued_for_delete in the instance mapping for cell in cells: for i in range(0, 5): # Instance 4 should be SOFT_DELETED vm_state = (vm_states.SOFT_DELETED if i == 4 else vm_states.ACTIVE) # Instance 2 should already be marked as queued_for_delete qfd = True if i == 2 else None with context.target_cell(self.context, cell) as cctxt: inst = instance.Instance( cctxt, vm_state=vm_state, project_id=self.context.project_id, user_id=self.context.user_id) inst.create() if i in (2, 3): # Instances 2 and 3 are hard-deleted inst.destroy() instance_mapping.InstanceMapping._create_in_db( self.context, {'project_id': self.context.project_id, 'cell_id': cell.id, 'queued_for_delete': qfd, 'instance_uuid': inst.uuid}) done, total = instance_mapping.populate_queued_for_delete(self.context, 2) # First two needed fixing, and honored the limit self.assertEqual(2, done) self.assertEqual(2, total) done, total = instance_mapping.populate_queued_for_delete(self.context, 1000) # Last six included two that were already done, and spanned to the # next cell self.assertEqual(6, done) self.assertEqual(6, total) mappings = instance_mapping.InstanceMappingList.get_by_project_id( self.context, self.context.project_id) # Check that we have only the expected number of records with # True/False (which implies no NULL records). # Six deleted instances self.assertEqual(6, len( [im for im in mappings if im.queued_for_delete is True])) # Four non-deleted instances self.assertEqual(4, len( [im for im in mappings if im.queued_for_delete is False])) # Run it again to make sure we don't query the cell database for # instances if we didn't get any un-migrated mappings. with mock.patch('nova.objects.InstanceList.get_by_filters', new_callable=mock.NonCallableMock): done, total = instance_mapping.populate_queued_for_delete( self.context, 1000) self.assertEqual(0, done) self.assertEqual(0, total) def test_user_id_not_set_if_null_from_db(self): # Create an instance mapping with user_id=None. db_mapping = create_mapping(user_id=None) self.assertIsNone(db_mapping['user_id']) # Get the mapping to run convert from db object to versioned object. im = instance_mapping.InstanceMapping.get_by_instance_uuid( self.context, db_mapping['instance_uuid']) # Verify the user_id is not set. 
self.assertNotIn('user_id', im) @mock.patch('nova.objects.instance_mapping.LOG.warning') def test_populate_user_id(self, mock_log_warning): cells = [] celldbs = fixtures.CellDatabases() # Create two cell databases and map them for uuid in (uuidsentinel.cell1, uuidsentinel.cell2): cm = cell_mapping.CellMapping(context=self.context, uuid=uuid, database_connection=uuid, transport_url='fake://') cm.create() cells.append(cm) celldbs.add_cell_database(uuid) self.useFixture(celldbs) # Create 5 instances per cell for cell in cells: for i in range(0, 5): with context.target_cell(self.context, cell) as cctxt: inst = instance.Instance( cctxt, project_id=self.context.project_id, user_id=self.context.user_id) inst.create() # Make every other mapping have a NULL user_id # Will be a total of four mappings with NULL user_id user_id = self.context.user_id if i % 2 == 0 else None create_mapping(project_id=self.context.project_id, user_id=user_id, cell_id=cell.id, instance_uuid=inst.uuid) # Create a SOFT_DELETED instance with a user_id=None instance mapping. # This should get migrated. with context.target_cell(self.context, cells[0]) as cctxt: inst = instance.Instance( cctxt, project_id=self.context.project_id, user_id=self.context.user_id, vm_state=vm_states.SOFT_DELETED) inst.create() create_mapping(project_id=self.context.project_id, user_id=None, cell_id=cells[0].id, instance_uuid=inst.uuid, queued_for_delete=True) # Create a deleted instance with a user_id=None instance mapping. # This should get migrated. with context.target_cell(self.context, cells[1]) as cctxt: inst = instance.Instance( cctxt, project_id=self.context.project_id, user_id=self.context.user_id) inst.create() inst.destroy() create_mapping(project_id=self.context.project_id, user_id=None, cell_id=cells[1].id, instance_uuid=inst.uuid, queued_for_delete=True) # Create an instance mapping for an instance not yet scheduled. It # should not get migrated because we won't know what user_id to use. unscheduled = create_mapping(project_id=self.context.project_id, user_id=None, cell_id=None) # Create two instance mappings for instances that no longer exist. # Example: residue from a manual cleanup or after a periodic compute # purge and before a database archive. This record should not get # migrated. nonexistent = [] for i in range(2): nonexistent.append( create_mapping(project_id=self.context.project_id, user_id=None, cell_id=cells[i].id, instance_uuid=uuidutils.generate_uuid())) # Create an instance mapping simulating a virtual interface migration # marker instance which has had map_instances run on it. # This should not be found by the migration. create_mapping(project_id=virtual_interface.FAKE_UUID, user_id=None) found, done = instance_mapping.populate_user_id(self.context, 2) # Two needed fixing, and honored the limit. self.assertEqual(2, found) self.assertEqual(2, done) found, done = instance_mapping.populate_user_id(self.context, 1000) # Only four left were fixable. The fifth instance found has no # cell and cannot be migrated yet. The 6th and 7th instances found have # no corresponding instance records and cannot be migrated. self.assertEqual(7, found) self.assertEqual(4, done) # Verify the orphaned instance mappings warning log message was only # emitted once. mock_log_warning.assert_called_once() # Check that we have only the expected number of records with # user_id set. 
We created 10 instances (5 per cell with 2 per cell # with NULL user_id), 1 SOFT_DELETED instance with NULL user_id, # 1 deleted instance with NULL user_id, and 1 not-yet-scheduled # instance with NULL user_id. # We expect 12 of them to have user_id set after migration (15 total, # with the not-yet-scheduled instance and the orphaned instance # mappings ignored). ims = instance_mapping.InstanceMappingList.get_by_project_id( self.context, self.context.project_id) self.assertEqual(12, len( [im for im in ims if 'user_id' in im])) # Check that one instance mapping record (not yet scheduled) has not # been migrated by this script. # Check that two other instance mapping records (no longer existing # instances) have not been migrated by this script. self.assertEqual(15, len(ims)) # Set the cell and create the instance for the mapping without a cell, # then run the migration again. unscheduled = instance_mapping.InstanceMapping.get_by_instance_uuid( self.context, unscheduled['instance_uuid']) unscheduled.cell_mapping = cells[0] unscheduled.save() with context.target_cell(self.context, cells[0]) as cctxt: inst = instance.Instance( cctxt, uuid=unscheduled.instance_uuid, project_id=self.context.project_id, user_id=self.context.user_id) inst.create() found, done = instance_mapping.populate_user_id(self.context, 1000) # Should have found the not-yet-scheduled instance and the orphaned # instance mappings. self.assertEqual(3, found) # Should have only migrated the not-yet-schedule instance. self.assertEqual(1, done) # Delete the orphaned instance mapping (simulate manual cleanup by an # operator). for db_im in nonexistent: nonexist = instance_mapping.InstanceMapping.get_by_instance_uuid( self.context, db_im['instance_uuid']) nonexist.destroy() # Run the script one last time to make sure it finds nothing left to # migrate. found, done = instance_mapping.populate_user_id(self.context, 1000) self.assertEqual(0, found) self.assertEqual(0, done) @mock.patch('nova.objects.InstanceList.get_by_filters') def test_populate_user_id_instance_get_fail(self, mock_inst_get): cells = [] celldbs = fixtures.CellDatabases() # Create two cell databases and map them for uuid in (uuidsentinel.cell1, uuidsentinel.cell2): cm = cell_mapping.CellMapping(context=self.context, uuid=uuid, database_connection=uuid, transport_url='fake://') cm.create() cells.append(cm) celldbs.add_cell_database(uuid) self.useFixture(celldbs) # Create one instance per cell for cell in cells: with context.target_cell(self.context, cell) as cctxt: inst = instance.Instance( cctxt, project_id=self.context.project_id, user_id=self.context.user_id) inst.create() create_mapping(project_id=self.context.project_id, user_id=None, cell_id=cell.id, instance_uuid=inst.uuid) # Simulate the first cell is down/has some error mock_inst_get.side_effect = [test.TestingException(), instance.InstanceList(objects=[inst])] found, done = instance_mapping.populate_user_id(self.context, 1000) # Verify we continue to the next cell when a down/error cell is # encountered. 
self.assertEqual(2, found) self.assertEqual(1, done) class InstanceMappingListTestCase(test.NoDBTestCase): USES_DB_SELF = True def setUp(self): super(InstanceMappingListTestCase, self).setUp() self.useFixture(fixtures.Database(database='api')) self.context = context.RequestContext('fake-user', 'fake-project') self.list_obj = instance_mapping.InstanceMappingList() def test_get_by_project_id_from_db(self): project_id = 'fake-project' mappings = {} mapping = create_mapping(project_id=project_id) mappings[mapping['instance_uuid']] = mapping mapping = create_mapping(project_id=project_id) mappings[mapping['instance_uuid']] = mapping db_mappings = self.list_obj._get_by_project_id_from_db( self.context, project_id) for db_mapping in db_mappings: mapping = mappings[db_mapping.instance_uuid] for key in instance_mapping.InstanceMapping.fields.keys(): self.assertEqual(db_mapping[key], mapping[key]) def test_instance_mapping_list_get_by_cell_id(self): """Tests getting all of the InstanceMappings for a given CellMapping id """ # we shouldn't have any instance mappings yet inst_mapping_list = ( instance_mapping.InstanceMappingList.get_by_cell_id( self.context, sample_cell_mapping['id']) ) self.assertEqual(0, len(inst_mapping_list)) # now create an instance mapping in a cell db_inst_mapping1 = create_mapping() # let's also create an instance mapping that's not in a cell to make # sure our filtering is working db_inst_mapping2 = create_mapping(cell_id=None) self.assertIsNone(db_inst_mapping2['cell_id']) # now we should list out one instance mapping for the cell inst_mapping_list = ( instance_mapping.InstanceMappingList.get_by_cell_id( self.context, db_inst_mapping1['cell_id']) ) self.assertEqual(1, len(inst_mapping_list)) self.assertEqual(db_inst_mapping1['id'], inst_mapping_list[0].id) def test_instance_mapping_get_by_instance_uuids(self): db_inst_mapping1 = create_mapping() db_inst_mapping2 = create_mapping(cell_id=None) # Create a third that we won't include create_mapping() uuids = [db_inst_mapping1.instance_uuid, db_inst_mapping2.instance_uuid] mappings = instance_mapping.InstanceMappingList.get_by_instance_uuids( self.context, uuids + [uuidsentinel.deleted_instance]) self.assertEqual(sorted(uuids), sorted([m.instance_uuid for m in mappings])) def test_get_not_deleted_by_cell_and_project(self): cells = [] # Create two cells for uuid in (uuidsentinel.cell1, uuidsentinel.cell2): cm = cell_mapping.CellMapping(context=self.context, uuid=uuid, database_connection="fake:///", transport_url='fake://') cm.create() cells.append(cm) uuids = {cells[0]: [uuidsentinel.c1i1, uuidsentinel.c1i2], cells[1]: [uuidsentinel.c2i1, uuidsentinel.c2i2]} project_ids = ['fake-project-1', 'fake-project-2'] # Create five instance_mappings such that: for cell, uuid in uuids.items(): # Both the cells contain a mapping belonging to fake-project-1 im1 = instance_mapping.InstanceMapping(context=self.context, project_id=project_ids[0], cell_mapping=cell, instance_uuid=uuid[0], queued_for_delete=False) im1.create() # Both the cells contain a mapping belonging to fake-project-2 im2 = instance_mapping.InstanceMapping(context=self.context, project_id=project_ids[1], cell_mapping=cell, instance_uuid=uuid[1], queued_for_delete=False) im2.create() # The second cell has a third mapping that is queued for deletion # which belongs to fake-project-1. 
if cell.uuid == uuidsentinel.cell2: im3 = instance_mapping.InstanceMapping(context=self.context, project_id=project_ids[0], cell_mapping=cell, instance_uuid=uuidsentinel.qfd, queued_for_delete=True) im3.create() # Get not queued for deletion mappings from cell1 belonging to # fake-project-2. ims = (instance_mapping.InstanceMappingList. get_not_deleted_by_cell_and_project( self.context, cells[0].uuid, 'fake-project-2')) # This will give us one mapping from cell1 self.assertEqual([uuidsentinel.c1i2], sorted([m.instance_uuid for m in ims])) self.assertIn('cell_mapping', ims[0]) # Get not queued for deletion mappings from cell2 belonging to # fake-project-1. ims = (instance_mapping.InstanceMappingList. get_not_deleted_by_cell_and_project( self.context, cells[1].uuid, 'fake-project-1')) # This will give us one mapping from cell2. Note that even if # there are two mappings belonging to fake-project-1 inside cell2, # only the one not queued for deletion is returned. self.assertEqual([uuidsentinel.c2i1], sorted([m.instance_uuid for m in ims])) # Try getting a mapping belonging to a non-existing project_id. ims = (instance_mapping.InstanceMappingList. get_not_deleted_by_cell_and_project( self.context, cells[0].uuid, 'fake-project-3')) # Since no mappings belong to fake-project-3, nothing is returned. self.assertEqual([], sorted([m.instance_uuid for m in ims])) def test_get_not_deleted_by_cell_and_project_limit(self): cm = cell_mapping.CellMapping(context=self.context, uuid=uuidsentinel.cell, database_connection='fake:///', transport_url='fake://') cm.create() pid = self.context.project_id for uuid in (uuidsentinel.uuid2, uuidsentinel.inst2): im = instance_mapping.InstanceMapping(context=self.context, project_id=pid, cell_mapping=cm, instance_uuid=uuid, queued_for_delete=False) im.create() ims = (instance_mapping.InstanceMappingList. get_not_deleted_by_cell_and_project(self.context, cm.uuid, pid)) self.assertEqual(2, len(ims)) ims = (instance_mapping.InstanceMappingList. get_not_deleted_by_cell_and_project(self.context, cm.uuid, pid, limit=10)) self.assertEqual(2, len(ims)) ims = (instance_mapping.InstanceMappingList. get_not_deleted_by_cell_and_project(self.context, cm.uuid, pid, limit=1)) self.assertEqual(1, len(ims)) def test_get_not_deleted_by_cell_and_project_None(self): cm = cell_mapping.CellMapping(context=self.context, uuid=uuidsentinel.cell, database_connection='fake:///', transport_url='fake://') cm.create() im1 = instance_mapping.InstanceMapping(context=self.context, project_id='fake-project-1', cell_mapping=cm, instance_uuid=uuidsentinel.uid1, queued_for_delete=False) im1.create() im2 = instance_mapping.InstanceMapping(context=self.context, project_id='fake-project-2', cell_mapping=cm, instance_uuid=uuidsentinel.uid2, queued_for_delete=None) im2.create() # testing if it accepts None project_id in the query and # catches None queued for delete records. ims = (instance_mapping.InstanceMappingList. 
get_not_deleted_by_cell_and_project(self.context, cm.uuid, None)) self.assertEqual(2, len(ims)) def test_get_counts(self): create_mapping(project_id='fake-project', user_id='fake-user', queued_for_delete=False) # mapping with another user create_mapping(project_id='fake-project', user_id='other-user', queued_for_delete=False) # mapping in another project create_mapping(project_id='other-project', user_id='fake-user', queued_for_delete=False) # queued_for_delete=True, should not be counted create_mapping(project_id='fake-project', user_id='fake-user', queued_for_delete=True) # queued_for_delete=None (not yet migrated), should not be counted create_mapping(project_id='fake-project', user_id='fake-user', queued_for_delete=None) # Count only across a project counts = instance_mapping.InstanceMappingList.get_counts( self.context, 'fake-project') self.assertEqual(2, counts['project']['instances']) self.assertNotIn('user', counts) # Count across a project and a user counts = instance_mapping.InstanceMappingList.get_counts( self.context, 'fake-project', user_id='fake-user') self.assertEqual(2, counts['project']['instances']) self.assertEqual(1, counts['user']['instances']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/db/test_keypair.py0000664000175000017500000002347000000000000022505 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
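# Summary of the tests below: the KeyPair object is exercised against both
# the API database and the legacy main (cell) database - creation and
# duplicate detection in the API DB, read fallback to the main DB for
# unmigrated keypairs, and limit/marker pagination across the two stores.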
from nova import context from nova.db.sqlalchemy import api as db_api from nova import exception from nova import objects from nova.objects import keypair from nova import test class KeyPairObjectTestCase(test.TestCase): def setUp(self): super(KeyPairObjectTestCase, self).setUp() self.context = context.RequestContext('fake-user', 'fake-project') def _api_kp(self, **values): kp = objects.KeyPair(context=self.context, user_id=self.context.user_id, name='fookey', fingerprint='fp', public_key='keydata', type='ssh') kp.update(values) kp.create() return kp def _main_kp(self, **values): vals = { 'user_id': self.context.user_id, 'name': 'fookey', 'fingerprint': 'fp', 'public_key': 'keydata', 'type': 'ssh', } vals.update(values) return db_api.key_pair_create(self.context, vals) def test_create_in_api(self): kp = self._api_kp() keypair.KeyPair._get_from_db(self.context, kp.user_id, kp.name) self.assertRaises(exception.KeypairNotFound, db_api.key_pair_get, self.context, kp.user_id, kp.name) def test_create_in_api_duplicate(self): self._api_kp() self.assertRaises(exception.KeyPairExists, self._api_kp) def test_create_in_api_duplicate_in_main(self): self._main_kp() self.assertRaises(exception.KeyPairExists, self._api_kp) def test_get_from_api(self): self._api_kp(name='apikey') self._main_kp(name='mainkey') kp = objects.KeyPair.get_by_name(self.context, self.context.user_id, 'apikey') self.assertEqual('apikey', kp.name) def test_get_from_main(self): self._api_kp(name='apikey') self._main_kp(name='mainkey') kp = objects.KeyPair.get_by_name(self.context, self.context.user_id, 'mainkey') self.assertEqual('mainkey', kp.name) def test_get_not_found(self): self._api_kp(name='apikey') self._main_kp(name='mainkey') self.assertRaises(exception.KeypairNotFound, objects.KeyPair.get_by_name, self.context, self.context.user_id, 'nokey') def test_destroy_in_api(self): kp = self._api_kp(name='apikey') self._main_kp(name='mainkey') kp.destroy() self.assertRaises(exception.KeypairNotFound, objects.KeyPair.get_by_name, self.context, self.context.user_id, 'apikey') def test_destroy_by_name_in_api(self): self._api_kp(name='apikey') self._main_kp(name='mainkey') objects.KeyPair.destroy_by_name(self.context, self.context.user_id, 'apikey') self.assertRaises(exception.KeypairNotFound, objects.KeyPair.get_by_name, self.context, self.context.user_id, 'apikey') def test_destroy_in_main(self): self._api_kp(name='apikey') self._main_kp(name='mainkey') kp = objects.KeyPair.get_by_name(self.context, self.context.user_id, 'mainkey') kp.destroy() self.assertRaises(exception.KeypairNotFound, objects.KeyPair.get_by_name, self.context, self.context.user_id, 'mainkey') def test_destroy_by_name_in_main(self): self._api_kp(name='apikey') self._main_kp(name='mainkey') objects.KeyPair.destroy_by_name(self.context, self.context.user_id, 'mainkey') def test_get_by_user(self): self._api_kp(name='apikey') self._main_kp(name='mainkey') kpl = objects.KeyPairList.get_by_user(self.context, self.context.user_id) self.assertEqual(2, len(kpl)) self.assertEqual(set(['apikey', 'mainkey']), set([x.name for x in kpl])) def test_get_count_by_user(self): self._api_kp(name='apikey') self._main_kp(name='mainkey') count = objects.KeyPairList.get_count_by_user(self.context, self.context.user_id) self.assertEqual(2, count) def test_get_by_user_limit_and_marker(self): self._api_kp(name='apikey1') self._api_kp(name='apikey2') self._main_kp(name='mainkey1') self._main_kp(name='mainkey2') # check all 4 keypairs (2 api and 2 main) kpl = 
objects.KeyPairList.get_by_user(self.context, self.context.user_id) self.assertEqual(4, len(kpl)) self.assertEqual(set(['apikey1', 'apikey2', 'mainkey1', 'mainkey2']), set([x.name for x in kpl])) # check only 1 keypair (1 api) kpl = objects.KeyPairList.get_by_user(self.context, self.context.user_id, limit=1) self.assertEqual(1, len(kpl)) self.assertEqual(set(['apikey1']), set([x.name for x in kpl])) # check only 3 keypairs (2 api and 1 main) kpl = objects.KeyPairList.get_by_user(self.context, self.context.user_id, limit=3) self.assertEqual(3, len(kpl)) self.assertEqual(set(['apikey1', 'apikey2', 'mainkey1']), set([x.name for x in kpl])) # check keypairs after 'apikey1' (1 api and 2 main) kpl = objects.KeyPairList.get_by_user(self.context, self.context.user_id, marker='apikey1') self.assertEqual(3, len(kpl)) self.assertEqual(set(['apikey2', 'mainkey1', 'mainkey2']), set([x.name for x in kpl])) # check keypairs after 'mainkey2' (no keypairs) kpl = objects.KeyPairList.get_by_user(self.context, self.context.user_id, marker='mainkey2') self.assertEqual(0, len(kpl)) # check only 2 keypairs after 'apikey1' (1 api and 1 main) kpl = objects.KeyPairList.get_by_user(self.context, self.context.user_id, limit=2, marker='apikey1') self.assertEqual(2, len(kpl)) self.assertEqual(set(['apikey2', 'mainkey1']), set([x.name for x in kpl])) # check non-existing keypair self.assertRaises(exception.MarkerNotFound, objects.KeyPairList.get_by_user, self.context, self.context.user_id, limit=2, marker='unknown_kp') def test_get_by_user_different_users(self): # create keypairs for two users self._api_kp(name='apikey', user_id='user1') self._api_kp(name='apikey', user_id='user2') self._main_kp(name='mainkey', user_id='user1') self._main_kp(name='mainkey', user_id='user2') # check all 2 keypairs for user1 (1 api and 1 main) kpl = objects.KeyPairList.get_by_user(self.context, 'user1') self.assertEqual(2, len(kpl)) self.assertEqual(set(['apikey', 'mainkey']), set([x.name for x in kpl])) # check all 2 keypairs for user2 (1 api and 1 main) kpl = objects.KeyPairList.get_by_user(self.context, 'user2') self.assertEqual(2, len(kpl)) self.assertEqual(set(['apikey', 'mainkey']), set([x.name for x in kpl])) # check only 1 keypair for user1 (1 api) kpl = objects.KeyPairList.get_by_user(self.context, 'user1', limit=1) self.assertEqual(1, len(kpl)) self.assertEqual(set(['apikey']), set([x.name for x in kpl])) # check keypairs after 'apikey' for user2 (1 main) kpl = objects.KeyPairList.get_by_user(self.context, 'user2', marker='apikey') self.assertEqual(1, len(kpl)) self.assertEqual(set(['mainkey']), set([x.name for x in kpl])) # check only 2 keypairs after 'apikey' for user1 (1 main) kpl = objects.KeyPairList.get_by_user(self.context, 'user1', limit=2, marker='apikey') self.assertEqual(1, len(kpl)) self.assertEqual(set(['mainkey']), set([x.name for x in kpl])) # check non-existing keypair for user2 self.assertRaises(exception.MarkerNotFound, objects.KeyPairList.get_by_user, self.context, 'user2', limit=2, marker='unknown_kp') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/db/test_quota.py0000664000175000017500000003165600000000000022177 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import ddt import mock from oslo_utils import uuidutils from nova import context from nova import objects from nova import quota from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional.db import test_instance_mapping @ddt.ddt class QuotaTestCase(test.NoDBTestCase): USES_DB_SELF = True def setUp(self): super(QuotaTestCase, self).setUp() self.useFixture(nova_fixtures.SpawnIsSynchronousFixture()) self.useFixture(nova_fixtures.Database(database='api')) fix = nova_fixtures.CellDatabases() fix.add_cell_database('cell1') fix.add_cell_database('cell2') self.useFixture(fix) @ddt.data(True, False) @mock.patch('nova.quota.LOG.warning') @mock.patch('nova.quota._user_id_queued_for_delete_populated') def test_server_group_members_count_by_user(self, uid_qfd_populated, mock_uid_qfd_populated, mock_warn_log): mock_uid_qfd_populated.return_value = uid_qfd_populated ctxt = context.RequestContext('fake-user', 'fake-project') mapping1 = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection='cell1', transport_url='none:///') mapping2 = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection='cell2', transport_url='none:///') mapping1.create() mapping2.create() # Create a server group the instances will use. group = objects.InstanceGroup(context=ctxt) group.project_id = ctxt.project_id group.user_id = ctxt.user_id group.create() instance_uuids = [] # Create an instance in cell1 with context.target_cell(ctxt, mapping1) as cctxt: instance = objects.Instance(context=cctxt, project_id='fake-project', user_id='fake-user') instance.create() instance_uuids.append(instance.uuid) im = objects.InstanceMapping(context=ctxt, instance_uuid=instance.uuid, project_id='fake-project', user_id='fake-user', cell_id=mapping1.id) im.create() # Create an instance in cell2 with context.target_cell(ctxt, mapping2) as cctxt: instance = objects.Instance(context=cctxt, project_id='fake-project', user_id='fake-user') instance.create() instance_uuids.append(instance.uuid) im = objects.InstanceMapping(context=ctxt, instance_uuid=instance.uuid, project_id='fake-project', user_id='fake-user', cell_id=mapping2.id) im.create() # Create an instance that is queued for delete in cell2. It should not # be counted with context.target_cell(ctxt, mapping2) as cctxt: instance = objects.Instance(context=cctxt, project_id='fake-project', user_id='fake-user') instance.create() instance.destroy() instance_uuids.append(instance.uuid) im = objects.InstanceMapping(context=ctxt, instance_uuid=instance.uuid, project_id='fake-project', user_id='fake-user', cell_id=mapping2.id, queued_for_delete=True) im.create() # Add the uuids to the group objects.InstanceGroup.add_members(ctxt, group.uuid, instance_uuids) # add_members() doesn't add the members to the object field group.members.extend(instance_uuids) # Count server group members from instance mappings or cell databases, # depending on whether the user_id/queued_for_delete data migration has # been completed. 
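        # Three members were added to the group, but the queued-for-delete
        # instance in cell2 must be excluded, so only two should be counted.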
count = quota._server_group_count_members_by_user(ctxt, group, 'fake-user') self.assertEqual(2, count['user']['server_group_members']) if uid_qfd_populated: # Did not log a warning about falling back to legacy count. mock_warn_log.assert_not_called() else: # Logged a warning about falling back to legacy count. mock_warn_log.assert_called_once() # Create a duplicate of the cell1 instance in cell2 except hidden. with context.target_cell(ctxt, mapping2) as cctxt: instance = objects.Instance(context=cctxt, project_id='fake-project', user_id='fake-user', uuid=instance_uuids[0], hidden=True) instance.create() # The duplicate hidden instance should not be counted. count = quota._server_group_count_members_by_user( ctxt, group, instance.user_id) self.assertEqual(2, count['user']['server_group_members']) @mock.patch('nova.objects.CellMappingList.get_by_project_id', wraps=objects.CellMappingList.get_by_project_id) def test_instances_cores_ram_count(self, mock_get_project_cell_mappings): ctxt = context.RequestContext('fake-user', 'fake-project') mapping1 = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection='cell1', transport_url='none:///') mapping2 = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection='cell2', transport_url='none:///') mapping1.create() mapping2.create() # Create an instance in cell1 with context.target_cell(ctxt, mapping1) as cctxt: instance = objects.Instance(context=cctxt, project_id='fake-project', user_id='fake-user', vcpus=2, memory_mb=512) instance.create() # create mapping for the instance since we query only those cells # in which the project has instances based on the instance_mappings im = objects.InstanceMapping(context=ctxt, instance_uuid=instance.uuid, cell_mapping=mapping1, project_id='fake-project') im.create() # Create an instance in cell2 with context.target_cell(ctxt, mapping2) as cctxt: instance = objects.Instance(context=cctxt, project_id='fake-project', user_id='fake-user', vcpus=4, memory_mb=1024) instance.create() # create mapping for the instance since we query only those cells # in which the project has instances based on the instance_mappings im = objects.InstanceMapping(context=ctxt, instance_uuid=instance.uuid, cell_mapping=mapping2, project_id='fake-project') im.create() # Create an instance in cell2 for a different user with context.target_cell(ctxt, mapping2) as cctxt: instance = objects.Instance(context=cctxt, project_id='fake-project', user_id='other-fake-user', vcpus=4, memory_mb=1024) instance.create() # create mapping for the instance since we query only those cells # in which the project has instances based on the instance_mappings im = objects.InstanceMapping(context=ctxt, instance_uuid=instance.uuid, cell_mapping=mapping2, project_id='fake-project') im.create() # Count instances, cores, and ram across cells (all cells) count = quota._instances_cores_ram_count(ctxt, 'fake-project', user_id='fake-user') mock_get_project_cell_mappings.assert_not_called() self.assertEqual(3, count['project']['instances']) self.assertEqual(10, count['project']['cores']) self.assertEqual(2560, count['project']['ram']) self.assertEqual(2, count['user']['instances']) self.assertEqual(6, count['user']['cores']) self.assertEqual(1536, count['user']['ram']) # Count instances, cores, and ram across cells (query cell subset) self.flags(instance_list_per_project_cells=True, group='api') count = quota._instances_cores_ram_count(ctxt, 'fake-project', user_id='fake-user') 
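        # With [api]instance_list_per_project_cells enabled, only cells that
        # have instance mappings for the project are looked up, hence the
        # CellMappingList.get_by_project_id call asserted below; the totals
        # are unchanged (cores: 2 + 4 + 4 = 10, ram: 512 + 1024 + 1024 = 2560).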
mock_get_project_cell_mappings.assert_called_with(ctxt, 'fake-project') self.assertEqual(3, count['project']['instances']) self.assertEqual(10, count['project']['cores']) self.assertEqual(2560, count['project']['ram']) self.assertEqual(2, count['user']['instances']) self.assertEqual(6, count['user']['cores']) self.assertEqual(1536, count['user']['ram']) def test_user_id_queued_for_delete_populated(self): ctxt = context.RequestContext( test_instance_mapping.sample_mapping['user_id'], test_instance_mapping.sample_mapping['project_id']) # One deleted or SOFT_DELETED instance with user_id=None, should not be # considered by the check. test_instance_mapping.create_mapping(user_id=None, queued_for_delete=True) # Should be True because deleted instances are not considered. self.assertTrue(quota._user_id_queued_for_delete_populated(ctxt)) # A non-deleted instance with user_id=None, should be considered in the # check. test_instance_mapping.create_mapping(user_id=None, queued_for_delete=False) # Should be False because it's not deleted and user_id is unmigrated. self.assertFalse(quota._user_id_queued_for_delete_populated(ctxt)) # A non-deleted instance in a different project, should be considered # in the check (if project_id is not passed). test_instance_mapping.create_mapping(queued_for_delete=False, project_id='other-project') # Should be False since only instance 3 has user_id set and we're not # filtering on project. self.assertFalse(quota._user_id_queued_for_delete_populated(ctxt)) # Should be True because only instance 3 will be considered when we # filter on project. self.assertTrue( quota._user_id_queued_for_delete_populated( ctxt, project_id='other-project')) # Add a mapping for an instance that has not yet migrated # queued_for_delete. test_instance_mapping.create_mapping(queued_for_delete=None) # Should be False because an unmigrated queued_for_delete was found. self.assertFalse( quota._user_id_queued_for_delete_populated(ctxt)) # Check again filtering on project. Should be True because the # unmigrated queued_for_delete record is part of a different project. self.assertTrue( quota._user_id_queued_for_delete_populated( ctxt, project_id='other-project')) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/db/test_quota_model.py0000664000175000017500000000475300000000000023355 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
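# Summary of the tests below: each quota-related table in the API database
# must expose the same columns as its main-database counterpart, ignoring the
# soft-delete columns ('deleted', 'deleted_at') that only the main DB carries.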
from nova.db.sqlalchemy import api_models
from nova.db.sqlalchemy import models
from nova import test


class QuotaTablesCompareTestCase(test.NoDBTestCase):
    def _get_column_list(self, model):
        column_list = [m.key for m in model.__table__.columns]
        return column_list

    def _check_column_list(self, columns_new, columns_old,
                           added=None, removed=None):
        for c in added or []:
            columns_new.remove(c)
        for c in removed or []:
            columns_old.remove(c)
        intersect = set(columns_new).intersection(set(columns_old))
        if intersect != set(columns_new) or intersect != set(columns_old):
            return False
        return True

    def _compare_models(self, m_a, m_b, added=None, removed=None):
        added = added or []
        removed = removed or ['deleted_at', 'deleted']
        c_a = self._get_column_list(m_a)
        c_b = self._get_column_list(m_b)
        self.assertTrue(self._check_column_list(c_a, c_b, added=added,
                                                removed=removed))

    def test_tables_quota(self):
        self._compare_models(api_models.Quota(), models.Quota())

    def test_tables_project_user_quota(self):
        self._compare_models(api_models.ProjectUserQuota(),
                             models.ProjectUserQuota())

    def test_tables_quota_class(self):
        self._compare_models(api_models.QuotaClass(), models.QuotaClass())

    def test_tables_quota_usage(self):
        self._compare_models(api_models.QuotaUsage(), models.QuotaUsage())

    def test_tables_reservation(self):
        self._compare_models(api_models.Reservation(), models.Reservation())
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0
nova-21.2.4/nova/tests/functional/db/test_quotas.py0000664000175000017500000004026200000000000022353 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
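# Summary of the tests below: quota limit and quota class CRUD against the
# API database, plus the migrate_quota_limits_to_api_db and
# migrate_quota_classes_to_api_db batch migrations that move unmigrated
# records out of the main database without overwriting existing API records.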
from nova import context from nova.db.sqlalchemy import api as db_api from nova import exception from nova.objects import quotas from nova import test from nova.tests.unit.db import test_db_api class QuotasObjectTestCase(test.TestCase, test_db_api.ModelsObjectComparatorMixin): def setUp(self): super(QuotasObjectTestCase, self).setUp() self.context = context.RequestContext('fake-user', 'fake-project') def test_create_class(self): created = quotas.Quotas._create_class_in_db(self.context, 'foo', 'cores', 10) db_class = quotas.Quotas._get_class_from_db(self.context, 'foo', 'cores') self._assertEqualObjects(created, db_class) def test_create_class_exists(self): quotas.Quotas._create_class_in_db(self.context, 'foo', 'cores', 10) self.assertRaises(exception.QuotaClassExists, quotas.Quotas._create_class_in_db, self.context, 'foo', 'cores', 10) def test_update_class(self): created = quotas.Quotas._create_class_in_db(self.context, 'foo', 'cores', 10) quotas.Quotas._update_class_in_db(self.context, 'foo', 'cores', 20) db_class = quotas.Quotas._get_class_from_db(self.context, 'foo', 'cores') # Should have a limit of 20 now created['hard_limit'] = 20 self._assertEqualObjects(created, db_class, ignored_keys='updated_at') def test_update_class_not_found(self): self.assertRaises(exception.QuotaClassNotFound, quotas.Quotas._update_class_in_db, self.context, 'foo', 'cores', 20) def test_create_per_project_limit(self): created = quotas.Quotas._create_limit_in_db(self.context, 'fake-project', 'fixed_ips', 10) db_limit = quotas.Quotas._get_from_db(self.context, 'fake-project', 'fixed_ips') self._assertEqualObjects(created, db_limit) def test_create_per_user_limit(self): created = quotas.Quotas._create_limit_in_db(self.context, 'fake-project', 'cores', 10, user_id='fake-user') db_limit = quotas.Quotas._get_from_db(self.context, 'fake-project', 'cores', user_id='fake-user') self._assertEqualObjects(created, db_limit) def test_create_limit_duplicate(self): quotas.Quotas._create_limit_in_db(self.context, 'fake-project', 'cores', 10) self.assertRaises(exception.QuotaExists, quotas.Quotas._create_limit_in_db, self.context, 'fake-project', 'cores', 20) def test_update_per_project_limit(self): created = quotas.Quotas._create_limit_in_db(self.context, 'fake-project', 'fixed_ips', 10) quotas.Quotas._update_limit_in_db(self.context, 'fake-project', 'fixed_ips', 20) db_limit = quotas.Quotas._get_from_db(self.context, 'fake-project', 'fixed_ips') # Should have a limit of 20 now created['hard_limit'] = 20 self._assertEqualObjects(created, db_limit, ignored_keys='updated_at') def test_update_per_project_limit_not_found(self): self.assertRaises(exception.ProjectQuotaNotFound, quotas.Quotas._update_limit_in_db, self.context, 'fake-project', 'fixed_ips', 20) def test_update_per_user_limit(self): created = quotas.Quotas._create_limit_in_db(self.context, 'fake-project', 'cores', 10, user_id='fake-user') quotas.Quotas._update_limit_in_db(self.context, 'fake-project', 'cores', 20, user_id='fake-user') db_limit = quotas.Quotas._get_from_db(self.context, 'fake-project', 'cores', user_id='fake-user') # Should have a limit of 20 now created['hard_limit'] = 20 self._assertEqualObjects(created, db_limit, ignored_keys='updated_at') def test_update_per_user_limit_not_found(self): self.assertRaises(exception.ProjectUserQuotaNotFound, quotas.Quotas._update_limit_in_db, self.context, 'fake-project', 'cores', 20, user_id='fake-user') def test_get_per_project_limit_not_found(self): self.assertRaises(exception.ProjectQuotaNotFound, 
quotas.Quotas._get_from_db, self.context, 'fake-project', 'fixed_ips') def test_get_per_user_limit_not_found(self): self.assertRaises(exception.ProjectUserQuotaNotFound, quotas.Quotas._get_from_db, self.context, 'fake-project', 'cores', user_id='fake-user') def test_get_all_per_user_limits(self): created = [] created.append(quotas.Quotas._create_limit_in_db(self.context, 'fake-project', 'cores', 10, user_id='fake-user')) created.append(quotas.Quotas._create_limit_in_db(self.context, 'fake-project', 'ram', 8192, user_id='fake-user')) db_limits = quotas.Quotas._get_all_from_db(self.context, 'fake-project') for i, db_limit in enumerate(db_limits): self._assertEqualObjects(created[i], db_limit) def test_get_all_per_project_limits_by_project(self): quotas.Quotas._create_limit_in_db(self.context, 'fake-project', 'fixed_ips', 20) quotas.Quotas._create_limit_in_db(self.context, 'fake-project', 'floating_ips', 10) limits_dict = quotas.Quotas._get_all_from_db_by_project(self.context, 'fake-project') self.assertEqual('fake-project', limits_dict['project_id']) self.assertEqual(20, limits_dict['fixed_ips']) self.assertEqual(10, limits_dict['floating_ips']) def test_get_all_per_user_limits_by_project_and_user(self): quotas.Quotas._create_limit_in_db(self.context, 'fake-project', 'instances', 5, user_id='fake-user') quotas.Quotas._create_limit_in_db(self.context, 'fake-project', 'cores', 10, user_id='fake-user') limits_dict = quotas.Quotas._get_all_from_db_by_project_and_user( self.context, 'fake-project', 'fake-user') self.assertEqual('fake-project', limits_dict['project_id']) self.assertEqual('fake-user', limits_dict['user_id']) self.assertEqual(5, limits_dict['instances']) self.assertEqual(10, limits_dict['cores']) def test_destroy_per_project_and_per_user_limits(self): # per user limit quotas.Quotas._create_limit_in_db(self.context, 'fake-project', 'instances', 5, user_id='fake-user') # per project limit quotas.Quotas._create_limit_in_db(self.context, 'fake-project', 'fixed_ips', 10) quotas.Quotas._destroy_all_in_db_by_project(self.context, 'fake-project') self.assertRaises(exception.ProjectUserQuotaNotFound, quotas.Quotas._get_from_db, self.context, 'fake-project', 'instances', user_id='fake-user') self.assertRaises(exception.ProjectQuotaNotFound, quotas.Quotas._get_from_db, self.context, 'fake-project', 'fixed_ips') def test_destroy_per_project_and_per_user_limits_not_found(self): self.assertRaises(exception.ProjectQuotaNotFound, quotas.Quotas._destroy_all_in_db_by_project, self.context, 'fake-project') def test_destroy_per_user_limits(self): quotas.Quotas._create_limit_in_db(self.context, 'fake-project', 'instances', 5, user_id='fake-user') quotas.Quotas._destroy_all_in_db_by_project_and_user(self.context, 'fake-project', 'fake-user') self.assertRaises(exception.ProjectUserQuotaNotFound, quotas.Quotas._get_from_db, self.context, 'fake-project', 'instances', user_id='fake-user') def test_destroy_per_user_limits_not_found(self): self.assertRaises( exception.ProjectUserQuotaNotFound, quotas.Quotas._destroy_all_in_db_by_project_and_user, self.context, 'fake-project', 'fake-user') def test_get_class_not_found(self): self.assertRaises(exception.QuotaClassNotFound, quotas.Quotas._get_class_from_db, self.context, 'foo', 'cores') def test_get_all_class_by_name(self): quotas.Quotas._create_class_in_db(self.context, 'foo', 'instances', 5) quotas.Quotas._create_class_in_db(self.context, 'foo', 'cores', 10) limits_dict = quotas.Quotas._get_all_class_from_db_by_name( self.context, 'foo') self.assertEqual('foo', 
limits_dict['class_name']) self.assertEqual(5, limits_dict['instances']) self.assertEqual(10, limits_dict['cores']) def test_migrate_quota_limits(self): # Create a limit in api db quotas.Quotas._create_limit_in_db(self.context, 'fake-project', 'instances', 5, user_id='fake-user') # Create 4 limits in main db db_api.quota_create(self.context, 'fake-project', 'cores', 10, user_id='fake-user') db_api.quota_create(self.context, 'fake-project', 'ram', 8192, user_id='fake-user') db_api.quota_create(self.context, 'fake-project', 'fixed_ips', 10) db_api.quota_create(self.context, 'fake-project', 'floating_ips', 10) # Migrate with a count/limit of 3 total, done = quotas.migrate_quota_limits_to_api_db(self.context, 3) self.assertEqual(3, total) self.assertEqual(3, done) # This only fetches from the api db. There should now be 4 limits. api_user_limits = quotas.Quotas._get_all_from_db(self.context, 'fake-project') api_proj_limits_dict = quotas.Quotas._get_all_from_db_by_project( self.context, 'fake-project') api_proj_limits_dict.pop('project_id', None) self.assertEqual(4, len(api_user_limits) + len(api_proj_limits_dict)) # This only fetches from the main db. There should be one left. main_user_limits = db_api.quota_get_all(self.context, 'fake-project') main_proj_limits_dict = db_api.quota_get_all_by_project(self.context, 'fake-project') main_proj_limits_dict.pop('project_id', None) self.assertEqual(1, len(main_user_limits) + len(main_proj_limits_dict)) self.assertEqual((1, 1), quotas.migrate_quota_limits_to_api_db( self.context, 100)) self.assertEqual((0, 0), quotas.migrate_quota_limits_to_api_db( self.context, 100)) def test_migrate_quota_limits_skips_existing(self): quotas.Quotas._create_limit_in_db(self.context, 'fake-project', 'instances', 5, user_id='fake-user') db_api.quota_create(self.context, 'fake-project', 'instances', 5, user_id='fake-user') total, done = quotas.migrate_quota_limits_to_api_db( self.context, 100) self.assertEqual(1, total) self.assertEqual(1, done) total, done = quotas.migrate_quota_limits_to_api_db( self.context, 100) self.assertEqual(0, total) self.assertEqual(0, done) self.assertEqual(1, len(quotas.Quotas._get_all_from_db( self.context, 'fake-project'))) def test_migrate_quota_classes(self): # Create a class in api db quotas.Quotas._create_class_in_db(self.context, 'foo', 'instances', 5) # Create 3 classes in main db db_api.quota_class_create(self.context, 'foo', 'cores', 10) db_api.quota_class_create(self.context, db_api._DEFAULT_QUOTA_NAME, 'instances', 10) db_api.quota_class_create(self.context, 'foo', 'ram', 8192) total, done = quotas.migrate_quota_classes_to_api_db(self.context, 2) self.assertEqual(2, total) self.assertEqual(2, done) # This only fetches from the api db api_foo_dict = quotas.Quotas._get_all_class_from_db_by_name( self.context, 'foo') api_foo_dict.pop('class_name', None) api_default_dict = quotas.Quotas._get_all_class_from_db_by_name( self.context, db_api._DEFAULT_QUOTA_NAME) api_default_dict.pop('class_name', None) self.assertEqual(3, len(api_foo_dict) + len(api_default_dict)) # This only fetches from the main db main_foo_dict = db_api.quota_class_get_all_by_name(self.context, 'foo') main_foo_dict.pop('class_name', None) main_default_dict = db_api.quota_class_get_default(self.context) main_default_dict.pop('class_name', None) self.assertEqual(1, len(main_foo_dict) + len(main_default_dict)) self.assertEqual((1, 1), quotas.migrate_quota_classes_to_api_db( self.context, 100)) self.assertEqual((0, 0), quotas.migrate_quota_classes_to_api_db( self.context, 
100)) def test_migrate_quota_classes_skips_existing(self): quotas.Quotas._create_class_in_db(self.context, 'foo-class', 'instances', 5) db_api.quota_class_create(self.context, 'foo-class', 'instances', 7) total, done = quotas.migrate_quota_classes_to_api_db( self.context, 100) self.assertEqual(1, total) self.assertEqual(1, done) total, done = quotas.migrate_quota_classes_to_api_db( self.context, 100) self.assertEqual(0, total) self.assertEqual(0, done) # Existing class should not be overwritten in the result db_class = quotas.Quotas._get_all_class_from_db_by_name( self.context, 'foo-class') self.assertEqual(5, db_class['instances']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/db/test_request_spec.py0000664000175000017500000000603000000000000023534 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import context from nova import exception from nova.objects import base as obj_base from nova.objects import request_spec from nova import test from nova.tests import fixtures from nova.tests.unit import fake_request_spec class RequestSpecTestCase(test.NoDBTestCase): USES_DB_SELF = True def setUp(self): super(RequestSpecTestCase, self).setUp() self.useFixture(fixtures.Database(database='api')) # NOTE(danms): Only needed for the fallback legacy main db loading # code in InstanceGroup. 
self.useFixture(fixtures.Database(database='main')) self.context = context.RequestContext('fake-user', 'fake-project') self.spec_obj = request_spec.RequestSpec() self.instance_uuid = None def _create_spec(self): args = fake_request_spec.fake_db_spec() args.pop('id', None) self.instance_uuid = args['instance_uuid'] request_spec.RequestSpec._from_db_object(self.context, self.spec_obj, self.spec_obj._create_in_db(self.context, args)) return self.spec_obj def test_get_by_instance_uuid_not_found(self): self.assertRaises(exception.RequestSpecNotFound, self.spec_obj._get_by_instance_uuid_from_db, self.context, self.instance_uuid) def test_get_by_uuid(self): spec = self._create_spec() db_spec = self.spec_obj.get_by_instance_uuid(self.context, self.instance_uuid) self.assertTrue(obj_base.obj_equal_prims(spec, db_spec)) def test_save_in_db(self): spec = self._create_spec() old_az = spec.availability_zone spec.availability_zone = '%s-new' % old_az spec.save() db_spec = self.spec_obj.get_by_instance_uuid(self.context, spec.instance_uuid) self.assertTrue(obj_base.obj_equal_prims(spec, db_spec)) self.assertNotEqual(old_az, db_spec.availability_zone) def test_double_create(self): spec = self._create_spec() self.assertRaises(exception.ObjectActionError, spec.create) def test_destroy(self): spec = self._create_spec() spec.destroy() self.assertRaises( exception.RequestSpecNotFound, self.spec_obj._get_by_instance_uuid_from_db, self.context, self.instance_uuid) def test_destroy_not_found(self): spec = self._create_spec() spec.destroy() self.assertRaises(exception.RequestSpecNotFound, spec.destroy) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/db/test_security_group.py0000664000175000017500000000401600000000000024117 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
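# Summary of the tests below: SecurityGroupList.get_counts() returns
# per-project and, when a user_id is supplied, per-user security group counts
# for the given context.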
from nova import context from nova.db.sqlalchemy import api as db_api from nova import objects from nova import test class SecurityGroupObjectTestCase(test.TestCase): def setUp(self): super(SecurityGroupObjectTestCase, self).setUp() self.context = context.RequestContext('fake-user', 'fake-project') def _create_group(self, **values): defaults = {'project_id': self.context.project_id, 'user_id': self.context.user_id, 'name': 'foogroup', 'description': 'foodescription'} defaults.update(values) db_api.security_group_create(self.context, defaults) def test_get_counts(self): # _create_group() creates a group with project_id and user_id from # self.context by default self._create_group(name='a') self._create_group(name='b', project_id='foo') self._create_group(name='c', user_id='bar') # Count only across a project counts = objects.SecurityGroupList.get_counts(self.context, 'foo') self.assertEqual(1, counts['project']['security_groups']) self.assertNotIn('user', counts) # Count across a project and a user counts = objects.SecurityGroupList.get_counts( self.context, self.context.project_id, user_id=self.context.user_id) self.assertEqual(2, counts['project']['security_groups']) self.assertEqual(1, counts['user']['security_groups']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/db/test_virtual_interface.py0000664000175000017500000003363600000000000024554 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import mock from oslo_config import cfg from oslo_utils import timeutils from nova import context from nova import exception from nova.network import model as network_model from nova import objects from nova.objects import virtual_interface from nova.tests.functional import integrated_helpers from nova.tests.unit import fake_network CONF = cfg.CONF FAKE_UUID = '00000000-0000-0000-0000-000000000000' def _delete_vif_list(context, instance_uuid): vif_list = objects.VirtualInterfaceList.\ get_by_instance_uuid(context, instance_uuid) # Set old VirtualInterfaces as deleted. 
for vif in vif_list: vif.destroy() def _verify_list_fulfillment(context, instance_uuid): try: info_cache = objects.InstanceInfoCache.\ get_by_instance_uuid(context, instance_uuid) except exception.InstanceInfoCacheNotFound: info_cache = [] vif_list = objects.VirtualInterfaceList.\ get_by_instance_uuid(context, instance_uuid) vif_list = filter(lambda x: not x.deleted, vif_list) cached_vif_ids = [vif['id'] for vif in info_cache.network_info] db_vif_ids = [vif.uuid for vif in vif_list] return cached_vif_ids == db_vif_ids class VirtualInterfaceListMigrationTestCase( integrated_helpers._IntegratedTestBase): ADMIN_API = True api_major_version = 'v2.1' _image_ref_parameter = 'imageRef' _flavor_ref_parameter = 'flavorRef' def setUp(self): super(VirtualInterfaceListMigrationTestCase, self).setUp() self.context = context.get_admin_context() fake_network.set_stub_network_methods(self) self.cells = objects.CellMappingList.get_all(self.context) self._start_compute('compute2') self.instances = [] def _create_instances(self, pre_newton=2, deleted=0, total=5, target_cell=None): if not target_cell: target_cell = self.cells[1] instances = [] with context.target_cell(self.context, target_cell) as cctxt: flav_dict = objects.Flavor._flavor_get_from_db(cctxt, 1) flavor = objects.Flavor(**flav_dict) for i in range(0, total): inst = objects.Instance( context=cctxt, project_id=self.api.project_id, user_id=FAKE_UUID, vm_state='active', flavor=flavor, created_at=datetime.datetime(1985, 10, 25, 1, 21, 0), launched_at=datetime.datetime(1985, 10, 25, 1, 22, 0), host=self.computes['compute2'].host, hostname='%s-inst%i' % (target_cell.name, i)) inst.create() info_cache = objects.InstanceInfoCache(context=cctxt) info_cache.updated_at = timeutils.utcnow() info_cache.network_info = network_model.NetworkInfo() info_cache.instance_uuid = inst.uuid info_cache.save() instances.append(inst) im = objects.InstanceMapping(context=cctxt, project_id=inst.project_id, user_id=inst.user_id, instance_uuid=inst.uuid, cell_mapping=target_cell) im.create() # Attach fake interfaces to instances network_id = list(self.neutron._networks.keys())[0] for i in range(0, len(instances)): for k in range(0, 4): self.api.attach_interface(instances[i].uuid, {"interfaceAttachment": {"net_id": network_id}}) with context.target_cell(self.context, target_cell) as cctxt: # Fake the pre-newton behaviour by removing the # VirtualInterfacesList objects. if pre_newton: for i in range(0, pre_newton): _delete_vif_list(cctxt, instances[i].uuid) if deleted: # Delete from the end of active instances list for i in range(total - deleted, total): instances[i].destroy() self.instances += instances def test_migration_nothing_to_migrate(self): """This test when there already populated VirtualInterfaceList objects for created instances. """ self._create_instances(pre_newton=0, total=5) match, done = virtual_interface.fill_virtual_interface_list( self.context, 5) self.assertEqual(5, match) self.assertEqual(0, done) def test_migration_verify_max_count(self): """This verifies if max_count is respected to avoid migration of bigger set of data, than user specified. """ self._create_instances(pre_newton=0, total=3) match, done = virtual_interface.fill_virtual_interface_list( self.context, 2) self.assertEqual(2, match) self.assertEqual(0, done) def test_migration_do_not_step_to_next_cell(self): """This verifies if script doesn't step into next cell when max_count is reached. 
""" # Create 2 instances in cell0 self._create_instances( pre_newton=0, total=2, target_cell=self.cells[0]) # Create 2 instances in cell1 self._create_instances( pre_newton=0, total=2, target_cell=self.cells[1]) with mock.patch('nova.objects.InstanceList.get_by_filters', side_effect=[self.instances[0:2], self.instances[2:]]) \ as mock_get: match, done = virtual_interface.fill_virtual_interface_list( self.context, 2) self.assertEqual(2, match) self.assertEqual(0, done) mock_get.assert_called_once() def test_migration_pre_newton_instances(self): """This test when there is an instance created in release older than Newton. For those instances the VirtualInterfaceList needs to be re-created from cache. """ # Lets spawn 3 pre-newton instances and 2 new ones self._create_instances(pre_newton=3, total=5) match, done = virtual_interface.fill_virtual_interface_list( self.context, 5) self.assertEqual(5, match) self.assertEqual(3, done) # Make sure we ran over all the instances - verify if marker works match, done = virtual_interface.fill_virtual_interface_list( self.context, 50) self.assertEqual(0, match) self.assertEqual(0, done) for i in range(0, 5): _verify_list_fulfillment(self.context, self.instances[i].uuid) def test_migration_pre_newton_instance_new_vifs(self): """This test when instance was created before Newton but in meantime new interfaces where attached and VirtualInterfaceList is not populated. """ self._create_instances(pre_newton=0, total=1) vif_list = objects.VirtualInterfaceList.get_by_instance_uuid( self.context, self.instances[0].uuid) # Drop first vif from list to pretend old instance vif_list[0].destroy() match, done = virtual_interface.fill_virtual_interface_list( self.context, 5) # The whole VirtualInterfaceList should be rewritten and base # on cache. self.assertEqual(1, match) self.assertEqual(1, done) _verify_list_fulfillment(self.context, self.instances[0].uuid) def test_migration_attach_in_progress(self): """This test when number of vifs (db) is bigger than number taken from network cache. Potential port-attach is taking place. """ self._create_instances(pre_newton=0, total=1) instance_info_cache = objects.InstanceInfoCache.get_by_instance_uuid( self.context, self.instances[0].uuid) # Delete last interface to pretend that's still in progress instance_info_cache.network_info.pop() instance_info_cache.updated_at = datetime.datetime(2015, 1, 1) instance_info_cache.save() match, done = virtual_interface.fill_virtual_interface_list( self.context, 5) # I don't know whats going on so instance VirtualInterfaceList # should stay untouched. self.assertEqual(1, match) self.assertEqual(0, done) def test_migration_empty_network_info(self): """This test if migration is not executed while NetworkInfo is empty, like instance without interfaces attached. """ self._create_instances(pre_newton=0, total=1) instance_info_cache = objects.InstanceInfoCache.get_by_instance_uuid( self.context, self.instances[0].uuid) # Clean NetworkInfo. Pretend instance without interfaces. instance_info_cache.network_info = None instance_info_cache.save() match, done = virtual_interface.fill_virtual_interface_list( self.context, 5) self.assertEqual(0, match) self.assertEqual(0, done) def test_migration_inconsistent_data(self): """This test when vif (db) are in completely different comparing to network cache and we don't know how to deal with it. It's the corner-case. 
""" self._create_instances(pre_newton=0, total=1) instance_info_cache = objects.InstanceInfoCache.get_by_instance_uuid( self.context, self.instances[0].uuid) # Change order of interfaces in NetworkInfo to fake # inconsistency between cache and db. nwinfo = instance_info_cache.network_info interface = nwinfo.pop() nwinfo.insert(0, interface) instance_info_cache.updated_at = datetime.datetime(2015, 1, 1) instance_info_cache.network_info = nwinfo # Update the cache instance_info_cache.save() match, done = virtual_interface.fill_virtual_interface_list( self.context, 5) # Cache is corrupted, so must be rewritten self.assertEqual(1, match) self.assertEqual(1, done) def test_migration_dont_touch_deleted_objects(self): """This test if deleted instances are skipped during migration. """ self._create_instances( pre_newton=1, deleted=1, total=3) match, done = virtual_interface.fill_virtual_interface_list( self.context, 4) self.assertEqual(2, match) self.assertEqual(1, done) def test_migration_multiple_cells(self): """This test if marker and max_rows limit works properly while running in multi-cell environment. """ # Create 2 instances in cell0 self._create_instances( pre_newton=1, total=2, target_cell=self.cells[0]) # Create 4 instances in cell1 self._create_instances( pre_newton=3, total=5, target_cell=self.cells[1]) # Fill vif list limiting to 4 instances - it should # touch cell0 and cell1 instances (migrate 3 due 1 is post newton). match, done = virtual_interface.fill_virtual_interface_list( self.context, 4) self.assertEqual(4, match) self.assertEqual(3, done) # Verify that the marker instance has project_id/user_id set properly. with context.target_cell(self.context, self.cells[1]) as cctxt: # The marker record is destroyed right after it's created, since # only the presence of the row is needed to satisfy the fkey # constraint. cctxt = cctxt.elevated(read_deleted='yes') marker_instance = objects.Instance.get_by_uuid(cctxt, FAKE_UUID) self.assertEqual(FAKE_UUID, marker_instance.project_id) self.assertEqual(FAKE_UUID, marker_instance.user_id) # Try again - should fill 3 left instances from cell1 match, done = virtual_interface.fill_virtual_interface_list( self.context, 4) self.assertEqual(3, match) self.assertEqual(1, done) # Try again - should be nothing to migrate match, done = virtual_interface.fill_virtual_interface_list( self.context, 4) self.assertEqual(0, match) self.assertEqual(0, done) def test_migration_multiple_cells_new_instances_in_meantime(self): """This test if marker is created per-cell and we're able to verify instanced that were added in meantime. """ # Create 2 instances in cell0 self._create_instances( pre_newton=1, total=2, target_cell=self.cells[0]) # Create 2 instances in cell1 self._create_instances( pre_newton=1, total=2, target_cell=self.cells[1]) # Migrate instances in both cells. 
match, done = virtual_interface.fill_virtual_interface_list( self.context, 4) self.assertEqual(4, match) self.assertEqual(2, done) # Add new instances to cell1 self._create_instances( pre_newton=0, total=2, target_cell=self.cells[1]) # Try again, should find instances in cell1 match, done = virtual_interface.fill_virtual_interface_list( self.context, 4) self.assertEqual(2, match) self.assertEqual(0, done) # Try again - should be nothing to migrate match, done = virtual_interface.fill_virtual_interface_list( self.context, 4) self.assertEqual(0, match) self.assertEqual(0, done) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/fixtures.py0000664000175000017500000001473000000000000021265 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Fixtures solely for functional tests.""" from __future__ import absolute_import import fixtures from keystoneauth1 import adapter as ka from keystoneauth1 import session as ks from placement.tests.functional.fixtures import placement as placement_fixtures from requests import adapters from nova.tests.functional.api import client class PlacementApiClient(object): def __init__(self, placement_fixture): self.fixture = placement_fixture def get(self, url, **kwargs): return client.APIResponse( self.fixture._fake_get(None, url, **kwargs)) def put(self, url, body, **kwargs): return client.APIResponse( self.fixture._fake_put(None, url, body, **kwargs)) def post(self, url, body, **kwargs): return client.APIResponse( self.fixture._fake_post(None, url, body, **kwargs)) def delete(self, url, **kwargs): return client.APIResponse( self.fixture._fake_delete(None, url, **kwargs)) class PlacementFixture(placement_fixtures.PlacementFixture): """A fixture to placement operations. Runs a local WSGI server bound on a free port and having the Placement application with NoAuth middleware. This fixture also prevents calling the ServiceCatalog for getting the endpoint. It's possible to ask for a specific token when running the fixtures so all calls would be passing this token. Most of the time users of this fixture will also want the placement database fixture to be called first, so that is done automatically. If that is not desired pass ``db=False`` when initializing the fixture and establish the database yourself with: self.useFixture(placement_fixtures.Database(set_config=True)) """ def setUp(self): super(PlacementFixture, self).setUp() # Turn off manipulation of socket_options in TCPKeepAliveAdapter # to keep wsgi-intercept happy. Replace it with the method # from its superclass. 
self.useFixture(fixtures.MonkeyPatch( 'keystoneauth1.session.TCPKeepAliveAdapter.init_poolmanager', adapters.HTTPAdapter.init_poolmanager)) self._client = ka.Adapter(ks.Session(auth=None), raise_exc=False) # NOTE(sbauza): We need to mock the scheduler report client because # we need to fake Keystone by directly calling the endpoint instead # of looking up the service catalog, like we did for the OSAPIFixture. self.useFixture(fixtures.MonkeyPatch( 'nova.scheduler.client.report.SchedulerReportClient.get', self._fake_get)) self.useFixture(fixtures.MonkeyPatch( 'nova.scheduler.client.report.SchedulerReportClient.post', self._fake_post)) self.useFixture(fixtures.MonkeyPatch( 'nova.scheduler.client.report.SchedulerReportClient.put', self._fake_put)) self.useFixture(fixtures.MonkeyPatch( 'nova.scheduler.client.report.SchedulerReportClient.delete', self._fake_delete)) self.api = PlacementApiClient(self) @staticmethod def _update_headers_with_version(headers, version): if version is not None: # TODO(mriedem): Perform some version discovery at some point. headers.update({ 'OpenStack-API-Version': 'placement %s' % version }) def _fake_get(self, client, url, version=None, global_request_id=None): # TODO(sbauza): The current placement NoAuthMiddleware returns a 401 # in case a token is not provided. We should change that by creating # a fake token so we could remove adding the header below. headers = {'x-auth-token': self.token} self._update_headers_with_version(headers, version) return self._client.get( url, endpoint_override=self.endpoint, headers=headers) def _fake_post( self, client, url, data, version=None, global_request_id=None ): # NOTE(sdague): using json= instead of data= sets the # media type to application/json for us. Placement API is # more sensitive to this than other APIs in the OpenStack # ecosystem. # TODO(sbauza): The current placement NoAuthMiddleware returns a 401 # in case a token is not provided. We should change that by creating # a fake token so we could remove adding the header below. headers = {'x-auth-token': self.token} self._update_headers_with_version(headers, version) return self._client.post( url, json=data, endpoint_override=self.endpoint, headers=headers) def _fake_put( self, client, url, data, version=None, global_request_id=None ): # NOTE(sdague): using json= instead of data= sets the # media type to application/json for us. Placement API is # more sensitive to this than other APIs in the OpenStack # ecosystem. # TODO(sbauza): The current placement NoAuthMiddleware returns a 401 # in case a token is not provided. We should change that by creating # a fake token so we could remove adding the header below. headers = {'x-auth-token': self.token} self._update_headers_with_version(headers, version) return self._client.put( url, json=data, endpoint_override=self.endpoint, headers=headers) def _fake_delete( self, client, url, version=None, global_request_id=None ): # TODO(sbauza): The current placement NoAuthMiddleware returns a 401 # in case a token is not provided. We should change that by creating # a fake token so we could remove adding the header below. 
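# Illustrative sketch (assumed token value, standalone): the staticmethod
# above only decorates the headers when a microversion is explicitly
# requested, so an unversioned call keeps just the bare auth token header.
def _example_versioned_headers():
    headers = {'x-auth-token': 'admin'}
    PlacementFixture._update_headers_with_version(headers, '1.14')
    # headers == {'x-auth-token': 'admin',
    #             'OpenStack-API-Version': 'placement 1.14'}
    return headers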
headers = {'x-auth-token': self.token} self._update_headers_with_version(headers, version) return self._client.delete( url, endpoint_override=self.endpoint, headers=headers) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/integrated_helpers.py0000664000175000017500000013354100000000000023266 0ustar00zuulzuul00000000000000# Copyright 2011 Justin Santa Barbara # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Provides common functionality for integrated unit tests """ import collections import random import six import string import time import os_traits from oslo_log import log as logging from nova.compute import instance_actions from nova.compute import utils as compute_utils import nova.conf from nova import context from nova.db import api as db import nova.image.glance from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api import client as api_client from nova.tests.functional import fixtures as func_fixtures from nova.tests.unit import cast_as_call from nova.tests.unit import fake_notifier import nova.tests.unit.image.fake from nova.tests.unit import policy_fixture from nova import utils CONF = nova.conf.CONF LOG = logging.getLogger(__name__) def generate_random_alphanumeric(length): """Creates a random alphanumeric string of specified length.""" return ''.join(random.choice(string.ascii_uppercase + string.digits) for _x in range(length)) def generate_random_numeric(length): """Creates a random numeric string of specified length.""" return ''.join(random.choice(string.digits) for _x in range(length)) def generate_new_element(items, prefix, numeric=False): """Creates a random string with prefix, that is not in 'items' list.""" while True: if numeric: candidate = prefix + generate_random_numeric(8) else: candidate = prefix + generate_random_alphanumeric(8) if candidate not in items: return candidate LOG.debug("Random collision on %s", candidate) class InstanceHelperMixin(object): def _wait_for_server_parameter( self, server, expected_params, max_retries=10, api=None): api = api or getattr(self, 'admin_api', self.api) retry_count = 0 while True: server = api.get_server(server['id']) if all([server[attr] == expected_params[attr] for attr in expected_params]): break retry_count += 1 if retry_count == max_retries: self.fail('Wait for state change failed, ' 'expected_params=%s, server=%s' % ( expected_params, server)) time.sleep(0.5) return server def _wait_for_state_change(self, server, expected_status, max_retries=10): return self._wait_for_server_parameter( server, {'status': expected_status}, max_retries) def _wait_until_deleted(self, server): initially_in_error = server.get('status') == 'ERROR' try: for i in range(40): server = self.api.get_server(server['id']) if not initially_in_error and server['status'] == 'ERROR': self.fail('Server went to error state instead of' 'disappearing.') time.sleep(0.5) self.fail('Server 
failed to delete.') except api_client.OpenStackApiNotFoundException: return def _wait_for_action_fail_completion( self, server, expected_action, event_name): """Polls instance action events for the given instance, action and action event name until it finds the action event with an error result. """ return self._wait_for_instance_action_event( server, expected_action, event_name, event_result='error') def _wait_for_instance_action_event( self, server, action_name, event_name, event_result): """Polls the instance action events for the given instance, action, event, and event result until it finds the event. """ api = getattr(self, 'admin_api', self.api) actions = [] events = [] for attempt in range(10): actions = api.get_instance_actions(server['id']) # The API returns the newest event first for action in actions: if action['action'] != action_name: continue events = api.get_instance_action_details(server['id'], action['request_id'])['events'] # Look for the action event being in error state. for event in events: result = event['result'] if (event['event'] == event_name and result is not None and result.lower() == event_result.lower()): return event # We didn't find the completion event yet, so wait a bit. time.sleep(0.5) self.fail( 'Timed out waiting for %s instance action event. Current instance ' 'actions: %s. Events in the last matching action: %s' % (event_name, actions, events)) def _assert_resize_migrate_action_fail(self, server, action, error_in_tb): """Waits for the conductor_migrate_server action event to fail for the given action and asserts the error is in the event traceback. :param server: API response dict of the server being resized/migrated :param action: Either "resize" or "migrate" instance action. :param error_in_tb: Some expected part of the error event traceback. :returns: The instance action event dict from the API response """ event = self._wait_for_action_fail_completion( server, action, 'conductor_migrate_server') self.assertIn(error_in_tb, event['traceback']) return event def _wait_for_migration_status(self, server, expected_statuses): """Waits for a migration record with the given statuses to be found for the given server, else the test fails. The migration record, if found, is returned. """ api = getattr(self, 'admin_api', self.api) statuses = [status.lower() for status in expected_statuses] actual_status = None for attempt in range(10): migrations = api.api_get('/os-migrations').body['migrations'] for migration in migrations: if migration['instance_uuid'] == server['id']: actual_status = migration['status'] if migration['status'].lower() in statuses: return migration time.sleep(0.5) self.fail( 'Timed out waiting for migration with status for instance %s ' '(expected "%s", got "%s")' % ( server['id'], expected_statuses, actual_status, )) def _wait_for_log(self, log_line): for i in range(10): if log_line in self.stdlog.logger.output: return time.sleep(0.5) self.fail('The line "%(log_line)s" did not appear in the log') def _wait_for_assert(self, assert_func, max_retries=10, sleep=0.5): """Waits and retries the assert_func either until it does not raise AssertionError any more or until the max_retries run out. 
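# Illustrative sketch (hypothetical test method, status values are example
# choices): the polling helpers above are usually chained when a test
# exercises a move operation.
def _example_wait_for_cold_migrate(self):
    server = self._create_server(networks='none')
    self.api.post_server_action(server['id'], {'migrate': None})
    # Wait for the migration record to reach a terminal status ...
    self._wait_for_migration_status(server, ['finished'])
    # ... and for the server itself to land in VERIFY_RESIZE.
    return self._wait_for_state_change(server, 'VERIFY_RESIZE')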
""" last_error = None for i in range(max_retries): try: return assert_func() except AssertionError as e: last_error = e time.sleep(sleep) raise last_error def _create_aggregate(self, name, availability_zone=None): """Creates a host aggregate with the given name and optional AZ :param name: The name of the host aggregate :param availability_zone: Optional availability zone that the aggregate represents :returns: The id value of the created aggregate """ api = getattr(self, 'admin_api', self.api) body = { 'aggregate': { 'name': name, 'availability_zone': availability_zone } } return api.post_aggregate(body)['id'] def _build_flavor(self, id=None, name=None, memory_mb=2048, vcpu=2, disk=10, ephemeral=10, swap=0, rxtx_factor=1.0, is_public=True): """Build a request for the flavor create API. :param id: An ID for the flavor. :param name: A name for the flavor. :param memory_mb: The flavor memory. :param vcpu: The flavor vcpus. :param disk: The flavor disk. :param ephemeral: The flavor ephemeral. :param swap: The flavor swap. :param rxtx_factor: (DEPRECATED) The flavor RX-TX factor. :param is_public: Whether the flavor is public or not. :returns: The generated request body. """ if not name: name = ''.join( random.choice(string.ascii_lowercase) for i in range(20)) return { "flavor": { "id": id, "name": name, "ram": memory_mb, "vcpus": vcpu, "disk": disk, "OS-FLV-EXT-DATA:ephemeral": ephemeral, "swap": swap, "rxtx_factor": rxtx_factor, "os-flavor-access:is_public": is_public, } } def _create_flavor(self, id=None, name=None, memory_mb=2048, vcpu=2, disk=10, ephemeral=10, swap=0, rxtx_factor=1.0, is_public=True, extra_spec=None): """Build and submit a request to the flavor create API. :param id: An ID for the flavor. :param name: A name for the flavor. :param memory_mb: The flavor memory. :param vcpu: The flavor vcpus. :param disk: The flavor disk. :param ephemeral: The flavor ephemeral. :param swap: The flavor swap. :param rxtx_factor: (DEPRECATED) The flavor RX-TX factor. :param is_public: Whether the flavor is public or not. :returns: The ID of the created flavor. """ body = self._build_flavor( id, name, memory_mb, vcpu, disk, ephemeral, swap, rxtx_factor, is_public) flavor = self.api_fixture.admin_api.post_flavor(body) if extra_spec is not None: spec = {"extra_specs": extra_spec} self.api_fixture.admin_api.post_extra_spec(flavor['id'], spec) return flavor['id'] def _build_server(self, name=None, image_uuid=None, flavor_id=None, networks=None, az=None, host=None): """Build a request for the server create API. :param name: A name for the server. :param image_uuid: The ID of an existing image. :param flavor_id: The ID of an existing flavor. :param networks: A dict of networks to attach or a string of 'none' or 'auto'. :param az: The name of the availability zone the instance should request. :param host: The host to boot the instance on. Requires API microversion 2.74 or greater. :returns: The generated request body. """ if not name: name = ''.join( random.choice(string.ascii_lowercase) for i in range(20)) if image_uuid is None: # we need to handle '' # NOTE(takashin): In API version 2.36, image APIs were deprecated. # In API version 2.36 or greater, self.api.get_images() returns # a 404 error. In that case, 'image_uuid' should be specified. 
with utils.temporary_mutation(self.api, microversion='2.35'): image_uuid = self.api.get_images()[0]['id'] if not flavor_id: # Set a valid flavorId flavor_id = self.api.get_flavors()[0]['id'] server = { 'name': name, 'imageRef': image_uuid, 'flavorRef': 'http://fake.server/%s' % flavor_id, } if networks is not None: server['networks'] = networks if az is not None: server['availability_zone'] = az # This requires at least microversion 2.74 to work if host is not None: server['host'] = host return server def _create_server(self, name=None, image_uuid=None, flavor_id=None, networks=None, az=None, host=None, expected_state='ACTIVE', api=None): """Build and submit a request to the server create API. :param name: A name for the server. :param image_uuid: The ID of an existing image. :param flavor_id: The ID of an existing flavor. :param networks: A dict of networks to attach or a string of 'none' or 'auto'. :param az: The name of the availability zone the instance should request. :param host: The host to boot the instance on. Requires API microversion 2.74 or greater. :param expected_state: The expected end state. :param api: An API client to create the server with; defaults to 'self.api' :returns: The response from the API containing the created server. """ # if forcing the server onto a host, we have to use the admin API if not api: api = self.api if not az else getattr(self, 'admin_api', self.api) body = self._build_server( name, image_uuid, flavor_id, networks, az, host) server = api.post_server({'server': body}) return self._wait_for_state_change(server, expected_state) def _delete_server(self, server): """Delete a server.""" self.api.delete_server(server['id']) self._wait_until_deleted(server) def _confirm_resize(self, server): self.api.post_server_action(server['id'], {'confirmResize': None}) server = self._wait_for_state_change(server, 'ACTIVE') self._wait_for_instance_action_event( server, instance_actions.CONFIRM_RESIZE, 'compute_confirm_resize', 'success') return server def _revert_resize(self, server): # NOTE(sbauza): This method requires the caller to setup a fake # notifier by stubbing it. self.api.post_server_action(server['id'], {'revertResize': None}) server = self._wait_for_state_change(server, 'ACTIVE') self._wait_for_migration_status(server, ['reverted']) # Note that the migration status is changed to "reverted" in the # dest host revert_resize method but the allocations are cleaned up # in the source host finish_revert_resize method so we need to wait # for the finish_revert_resize method to complete. 
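# Illustrative sketch (hypothetical test method, flavor ids are example
# values): the helpers above compose into the usual create -> resize ->
# confirm -> delete flow.
def _example_resize_and_confirm(self):
    server = self._create_server(flavor_id='1', networks='none')
    self._resize_server(server, new_flavor='2')
    # _confirm_resize() waits for the server to return to ACTIVE and for
    # the compute_confirm_resize action event to succeed.
    server = self._confirm_resize(server)
    self._delete_server(server)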
fake_notifier.wait_for_versioned_notifications( 'instance.resize_revert.end') return server def _migrate_or_resize(self, server, request): if not ('resize' in request or 'migrate' in request): raise Exception('_migrate_or_resize only supports resize or ' 'migrate requests.') self.api.post_server_action(server['id'], request) self._wait_for_state_change(server, 'VERIFY_RESIZE') def _resize_server(self, server, new_flavor): resize_req = { 'resize': { 'flavorRef': new_flavor } } self._migrate_or_resize(server, resize_req) def _live_migrate(self, server, migration_expected_state, server_expected_state='ACTIVE'): self.api.post_server_action( server['id'], {'os-migrateLive': {'host': None, 'block_migration': 'auto'}}) self._wait_for_state_change(server, server_expected_state) self._wait_for_migration_status(server, [migration_expected_state]) class _IntegratedTestBase(test.TestCase, InstanceHelperMixin): #: Whether the test requires global external locking being configured for #: them. New tests should set this to False. REQUIRES_LOCKING = True #: Whether to use admin credentials for all nova API requests. ADMIN_API = False # TODO(stephenfin): Rename to API_MAJOR_VERSION #: The default API major version to use for all nova API requests. api_major_version = 'v2.1' # TODO(stephenfin): Rename to API_MICRO_VERSION #: The default microversion to use for all nova API requests; requires API #: major version 2.1 microversion = None #: Whether to include the project ID in the URL for API requests through #: OSAPIFixture. USE_PROJECT_ID = False #: Whether to stub keystonemiddleware and NovaKeystoneContext; override to #: making those middlewares behave as they would in real life, i.e. try to #: do real authentication. STUB_KEYSTONE = True def setUp(self): super(_IntegratedTestBase, self).setUp() self.fake_image_service =\ nova.tests.unit.image.fake.stub_out_image_service(self) self.useFixture(cast_as_call.CastAsCall(self)) placement = self.useFixture(func_fixtures.PlacementFixture()) self.placement_api = placement.api self.neutron = self.useFixture(nova_fixtures.NeutronFixture(self)) fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) self._setup_services() self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset) def _setup_compute_service(self): return self._start_compute('compute') def _setup_scheduler_service(self): return self.start_service('scheduler') def _setup_services(self): # NOTE(danms): Set the global MQ connection to that of our first cell # for any cells-ignorant code. Normally this is defaulted in the tests # which will result in us not doing the right thing. if 'cell1' in self.cell_mappings: self.flags(transport_url=self.cell_mappings['cell1'].transport_url) self.conductor = self.start_service('conductor') self.scheduler = self._setup_scheduler_service() self.compute = self._setup_compute_service() self.api_fixture = self.useFixture( nova_fixtures.OSAPIFixture( api_version=self.api_major_version, use_project_id_in_urls=self.USE_PROJECT_ID, stub_keystone=self.STUB_KEYSTONE)) # if the class needs to run as admin, make the api endpoint # the admin, otherwise it's safer to run as non admin user. 
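# Illustrative sketch (hypothetical subclass): _IntegratedTestBase wires up
# the placement, neutron and glance stubs plus the conductor, scheduler and
# compute services, so a concrete test class normally only selects an API
# version and writes test methods.
class _ExampleServersTest(_IntegratedTestBase):
    """Hypothetical functional test built on the base class above."""

    api_major_version = 'v2.1'
    microversion = '2.74'
    ADMIN_API = True

    def test_create_and_delete_server(self):
        server = self._create_server(networks='none')
        self._delete_server(server)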
if self.ADMIN_API: self.api = self.api_fixture.admin_api else: self.api = self.api_fixture.api self.admin_api = self.api_fixture.admin_api if self.microversion: self.api.microversion = self.microversion if not self.ADMIN_API: self.admin_api.microversion = self.microversion def _check_api_endpoint(self, endpoint, expected_middleware): app = self.api_fixture.app().get((None, '/v2')) while getattr(app, 'application', False): for middleware in expected_middleware: if isinstance(app.application, middleware): expected_middleware.remove(middleware) break app = app.application self.assertEqual([], expected_middleware, ("The expected wsgi middlewares %s are not " "existed") % expected_middleware) # TODO(sbauza): Drop this method once test classes inherit from a mixin def _get_provider_uuid_by_name(self, name): return self.placement_api.get( '/resource_providers?name=%s' % name).body[ 'resource_providers'][0]['uuid'] # TODO(sbauza): Drop this method once test classes inherit from a mixin def _get_all_rp_uuids_in_a_tree(self, in_tree_rp_uuid): rps = self.placement_api.get( '/resource_providers?in_tree=%s' % in_tree_rp_uuid, version='1.20').body['resource_providers'] return [rp['uuid'] for rp in rps] # TODO(sbauza): Drop this method once test classes inherit from a mixin def _get_provider_inventory(self, rp_uuid): return self.placement_api.get( '/resource_providers/%s/inventories' % rp_uuid).body['inventories'] # TODO(sbauza): Drop this method once test classes inherit from a mixin def _get_provider_usages(self, provider_uuid): return self.placement_api.get( '/resource_providers/%s/usages' % provider_uuid).body['usages'] # TODO(sbauza): Drop this method once test classes inherit from a mixin def _create_trait(self, trait): return self.placement_api.put('/traits/%s' % trait, {}, version='1.6') # TODO(sbauza): Drop this method once test classes inherit from a mixin def _set_provider_traits(self, rp_uuid, traits): """This will overwrite any existing traits. :param rp_uuid: UUID of the resource provider to update :param traits: list of trait strings to set on the provider :returns: APIResponse object with the results """ provider = self.placement_api.get( '/resource_providers/%s' % rp_uuid).body put_traits_req = { 'resource_provider_generation': provider['generation'], 'traits': traits } return self.placement_api.put( '/resource_providers/%s/traits' % rp_uuid, put_traits_req, version='1.6') # FIXME(sbauza): There is little value to have this be a whole base testclass # instead of a mixin only providing methods for accessing Placement endpoint. class ProviderUsageBaseTestCase(test.TestCase, InstanceHelperMixin): """Base test class for functional tests that check provider usage and consumer allocations in Placement during various operations. Subclasses must define a **compute_driver** attribute for the virt driver to use. This class sets up standard fixtures and controller services but does not start any compute services, that is left to the subclass. 
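# Illustrative sketch (hypothetical trait name): the placement helpers above
# are typically used together -- register a custom trait once, then
# overwrite the provider's trait list so the scheduler can match on it.
def _example_tag_provider(self, rp_uuid):
    self._create_trait('CUSTOM_EXAMPLE_TRAIT')
    # _set_provider_traits() replaces the full trait list on the provider.
    return self._set_provider_traits(rp_uuid, ['CUSTOM_EXAMPLE_TRAIT'])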
""" microversion = 'latest' # These must match the capabilities in # nova.virt.libvirt.driver.LibvirtDriver.capabilities expected_libvirt_driver_capability_traits = set([ six.u(trait) for trait in [ os_traits.COMPUTE_ACCELERATORS, os_traits.COMPUTE_DEVICE_TAGGING, os_traits.COMPUTE_NET_ATTACH_INTERFACE, os_traits.COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG, os_traits.COMPUTE_VOLUME_ATTACH_WITH_TAG, os_traits.COMPUTE_VOLUME_EXTEND, os_traits.COMPUTE_TRUSTED_CERTS, os_traits.COMPUTE_IMAGE_TYPE_AKI, os_traits.COMPUTE_IMAGE_TYPE_AMI, os_traits.COMPUTE_IMAGE_TYPE_ARI, os_traits.COMPUTE_IMAGE_TYPE_ISO, os_traits.COMPUTE_IMAGE_TYPE_QCOW2, os_traits.COMPUTE_IMAGE_TYPE_RAW, os_traits.COMPUTE_RESCUE_BFV, ] ]) # These must match the capabilities in # nova.virt.fake.FakeDriver.capabilities expected_fake_driver_capability_traits = set([ six.u(trait) for trait in [ os_traits.COMPUTE_ACCELERATORS, os_traits.COMPUTE_IMAGE_TYPE_RAW, os_traits.COMPUTE_DEVICE_TAGGING, os_traits.COMPUTE_NET_ATTACH_INTERFACE, os_traits.COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG, os_traits.COMPUTE_VOLUME_ATTACH_WITH_TAG, os_traits.COMPUTE_VOLUME_EXTEND, os_traits.COMPUTE_VOLUME_MULTI_ATTACH, os_traits.COMPUTE_TRUSTED_CERTS, ] ]) def setUp(self): self.flags(compute_driver=self.compute_driver) super(ProviderUsageBaseTestCase, self).setUp() self.policy = self.useFixture(policy_fixture.RealPolicyFixture()) self.neutron = self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(nova_fixtures.AllServicesCurrent()) fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) placement = self.useFixture(func_fixtures.PlacementFixture()) self.placement_api = placement.api self.api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.admin_api = self.api_fixture.admin_api self.admin_api.microversion = self.microversion self.api = self.admin_api # the image fake backend needed for image discovery self.image_service = ( nova.tests.unit.image.fake.stub_out_image_service(self)) self.start_service('conductor') self.scheduler_service = self.start_service('scheduler') self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset) def _get_provider_uuid_by_host(self, host): # NOTE(gibi): the compute node id is the same as the compute node # provider uuid on that compute resp = self.admin_api.api_get( 'os-hypervisors?hypervisor_hostname_pattern=%s' % host).body return resp['hypervisors'][0]['id'] def _get_provider_usages(self, provider_uuid): return self.placement_api.get( '/resource_providers/%s/usages' % provider_uuid).body['usages'] def _get_allocations_by_server_uuid(self, server_uuid): return self.placement_api.get( '/allocations/%s' % server_uuid).body['allocations'] def _wait_for_server_allocations(self, consumer_id, max_retries=20): retry_count = 0 while True: alloc = self._get_allocations_by_server_uuid(consumer_id) if alloc: break retry_count += 1 if retry_count == max_retries: self.fail('Wait for server allocations failed, ' 'server=%s' % (consumer_id)) time.sleep(0.5) return alloc def _get_allocations_by_provider_uuid(self, rp_uuid): return self.placement_api.get( '/resource_providers/%s/allocations' % rp_uuid).body['allocations'] def _get_all_providers(self): return self.placement_api.get( '/resource_providers', version='1.14').body['resource_providers'] def _create_trait(self, trait): return self.placement_api.put('/traits/%s' % trait, {}, version='1.6') def _delete_trait(self, trait): return self.placement_api.delete('/traits/%s' % trait, version='1.6') def _get_provider_traits(self, 
provider_uuid): return self.placement_api.get( '/resource_providers/%s/traits' % provider_uuid, version='1.6').body['traits'] def _set_provider_traits(self, rp_uuid, traits): """This will overwrite any existing traits. :param rp_uuid: UUID of the resource provider to update :param traits: list of trait strings to set on the provider :returns: APIResponse object with the results """ provider = self.placement_api.get( '/resource_providers/%s' % rp_uuid).body put_traits_req = { 'resource_provider_generation': provider['generation'], 'traits': traits } return self.placement_api.put( '/resource_providers/%s/traits' % rp_uuid, put_traits_req, version='1.6') def _get_all_resource_classes(self): dicts = self.placement_api.get( '/resource_classes', version='1.2').body['resource_classes'] return [d['name'] for d in dicts] def _get_all_traits(self): return self.placement_api.get('/traits', version='1.6').body['traits'] def _get_provider_inventory(self, rp_uuid): return self.placement_api.get( '/resource_providers/%s/inventories' % rp_uuid).body['inventories'] def _get_provider_aggregates(self, rp_uuid): return self.placement_api.get( '/resource_providers/%s/aggregates' % rp_uuid, version='1.1').body['aggregates'] def _post_resource_provider(self, rp_name): return self.placement_api.post( url='/resource_providers', version='1.20', body={'name': rp_name}).body def _set_inventory(self, rp_uuid, inv_body): """This will set the inventory for a given resource provider. :param rp_uuid: UUID of the resource provider to update :param inv_body: inventory to set on the provider :returns: APIResponse object with the results """ return self.placement_api.post( url= ('/resource_providers/%s/inventories' % rp_uuid), version='1.15', body=inv_body).body def _update_inventory(self, rp_uuid, inv_body): """This will update the inventory for a given resource provider. :param rp_uuid: UUID of the resource provider to update :param inv_body: inventory to set on the provider :returns: APIResponse object with the results """ return self.placement_api.put( url= ('/resource_providers/%s/inventories' % rp_uuid), body=inv_body).body def _get_resource_provider_by_uuid(self, rp_uuid): return self.placement_api.get( '/resource_providers/%s' % rp_uuid, version='1.15').body def _set_aggregate(self, rp_uuid, agg_id): provider = self.placement_api.get( '/resource_providers/%s' % rp_uuid).body post_agg_req = {"aggregates": [agg_id], "resource_provider_generation": provider['generation']} return self.placement_api.put( '/resource_providers/%s/aggregates' % rp_uuid, version='1.19', body=post_agg_req).body def _get_all_rp_uuids_in_a_tree(self, in_tree_rp_uuid): rps = self.placement_api.get( '/resource_providers?in_tree=%s' % in_tree_rp_uuid, version='1.20').body['resource_providers'] return [rp['uuid'] for rp in rps] def assertRequestMatchesUsage(self, requested_resources, root_rp_uuid): # It matches the usages of the whole tree against the request rp_uuids = self._get_all_rp_uuids_in_a_tree(root_rp_uuid) # NOTE(gibi): flattening the placement usages means we cannot # verify the structure here. However I don't see any way to define this # function for nested and non-nested trees in a generic way. total_usage = collections.defaultdict(int) for rp in rp_uuids: usage = self._get_provider_usages(rp) for rc, amount in usage.items(): total_usage[rc] += amount # Cannot simply do an assertEqual(expected, actual) as usages always # contain every RC even if the usage is 0 and the flavor could also # contain explicit 0 request for some resources. 
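# Illustrative sketch (hypothetical provider name, standard DISK_GB
# inventory): the placement helpers above let a test model an external
# resource provider -- create it, then give it some inventory to offer.
def _example_create_provider_with_inventory(self):
    rp = self._post_resource_provider('example-external-rp')
    # POST /resource_providers/{uuid}/inventories adds a single inventory
    # record for one resource class.
    inventory = {'resource_class': 'DISK_GB', 'total': 100, 'max_unit': 100}
    return self._set_inventory(rp['uuid'], inventory)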
# So if the flavor contains an explicit 0 resource request (e.g. in # case of ironic resources:VCPU=0) then this code needs to assert that # such resource has 0 usage in the tree. In the other hand if the usage # contains 0 value for some resources that the flavor does not request # then that is totally fine. for rc, value in requested_resources.items(): self.assertIn( rc, total_usage, 'The requested resource class not found in the total_usage of ' 'the RP tree') self.assertEqual( value, total_usage[rc], 'The requested resource amount does not match with the total ' 'resource usage of the RP tree') for rc, value in total_usage.items(): if value != 0: self.assertEqual( requested_resources[rc], value, 'The requested resource amount does not match with the ' 'total resource usage of the RP tree') def assertFlavorMatchesUsage(self, root_rp_uuid, *flavors): resources = collections.defaultdict(int) for flavor in flavors: res = self._resources_from_flavor(flavor) for rc, value in res.items(): resources[rc] += value self.assertRequestMatchesUsage(resources, root_rp_uuid) def _resources_from_flavor(self, flavor): resources = collections.defaultdict(int) resources['VCPU'] = flavor['vcpus'] resources['MEMORY_MB'] = flavor['ram'] resources['DISK_GB'] = flavor['disk'] for key, value in flavor['extra_specs'].items(): if key.startswith('resources'): resources[key.split(':')[1]] += value return resources def assertFlavorMatchesAllocation(self, flavor, consumer_uuid, root_rp_uuid): # NOTE(gibi): This function does not handle sharing RPs today. expected_rps = self._get_all_rp_uuids_in_a_tree(root_rp_uuid) allocations = self._get_allocations_by_server_uuid(consumer_uuid) # NOTE(gibi): flattening the placement allocation means we cannot # verify the structure here. However I don't see any way to define this # function for nested and non-nested trees in a generic way. total_allocation = collections.defaultdict(int) for rp, alloc in allocations.items(): self.assertIn(rp, expected_rps, 'Unexpected, out of tree RP in the' ' allocation') for rc, value in alloc['resources'].items(): total_allocation[rc] += value self.assertEqual( self._resources_from_flavor(flavor), total_allocation, 'The resources requested in the flavor does not match with total ' 'allocation in the RP tree') def get_migration_uuid_for_instance(self, instance_uuid): # NOTE(danms): This is too much introspection for a test like this, but # we can't see the migration uuid from the API, so we just encapsulate # the peek behind the curtains here to keep it out of the tests. # TODO(danms): Get the migration uuid from the API once it is exposed ctxt = context.get_admin_context() migrations = db.migration_get_all_by_filters( ctxt, {'instance_uuid': instance_uuid}) self.assertEqual(1, len(migrations), 'Test expected a single migration, ' 'but found %i' % len(migrations)) return migrations[0].uuid def _boot_and_check_allocations( self, flavor, source_hostname, networks='none'): """Boot an instance and check that the resource allocation is correct After booting an instance on the given host with a given flavor it asserts that both the providers usages and resource allocations match with the resources requested in the flavor. It also asserts that running the periodic update_available_resource call does not change the resource state. 
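# Illustrative worked example (hypothetical flavor dict, no extra specs):
# given flavor = {'vcpus': 2, 'ram': 2048, 'disk': 10, 'extra_specs': {}},
# _resources_from_flavor(flavor) returns
# {'VCPU': 2, 'MEMORY_MB': 2048, 'DISK_GB': 10}, which is exactly the shape
# that assertRequestMatchesUsage() compares against the summed usages of
# the resource provider tree.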
:param flavor: the flavor the instance will be booted with :param source_hostname: the name of the host the instance will be booted on :param networks: list of network dicts passed to the server create API or "none" or "auto" :return: the API representation of the booted instance """ server_req = self._build_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', flavor_id=flavor['id'], networks=networks) server_req['availability_zone'] = 'nova:%s' % source_hostname LOG.info('booting on %s', source_hostname) created_server = self.api.post_server({'server': server_req}) server = self._wait_for_state_change(created_server, 'ACTIVE') # Verify that our source host is what the server ended up on self.assertEqual(source_hostname, server['OS-EXT-SRV-ATTR:host']) source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) # Before we run periodics, make sure that we have allocations/usages # only on the source host self.assertFlavorMatchesUsage(source_rp_uuid, flavor) # Check that the other providers has no usage for rp_uuid in [self._get_provider_uuid_by_host(hostname) for hostname in self.computes.keys() if hostname != source_hostname]: self.assertRequestMatchesUsage({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, rp_uuid) # Check that the server only allocates resource from the host it is # booted on self.assertFlavorMatchesAllocation(flavor, server['id'], source_rp_uuid) self._run_periodics() # After running the periodics but before we start any other operation, # we should have exactly the same allocation/usage information as # before running the periodics # Check usages on the selected host after boot self.assertFlavorMatchesUsage(source_rp_uuid, flavor) # Check that the server only allocates resource from the host it is # booted on self.assertFlavorMatchesAllocation(flavor, server['id'], source_rp_uuid) # Check that the other providers has no usage for rp_uuid in [self._get_provider_uuid_by_host(hostname) for hostname in self.computes.keys() if hostname != source_hostname]: self.assertRequestMatchesUsage({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, rp_uuid) return server def _delete_and_check_allocations(self, server): """Delete the instance and asserts that the allocations are cleaned If the server was moved (resized or live migrated), also checks that migration-based allocations are also cleaned up. :param server: The API representation of the instance to be deleted :returns: The uuid of the migration record associated with the resize or cold migrate operation """ # First check to see if there is a related migration record so we can # assert its allocations (if any) are not leaked. with utils.temporary_mutation(self.admin_api, microversion='2.59'): migrations = self.admin_api.api_get( '/os-migrations?instance_uuid=%s' % server['id']).body['migrations'] if migrations: # If there is more than one migration, they are sorted by # created_at in descending order so we'll get the last one # which is probably what we'd always want anyway. migration_uuid = migrations[0]['uuid'] else: migration_uuid = None self._delete_server(server) # NOTE(gibi): The resource allocation is deleted after the instance is # destroyed in the db so wait_until_deleted might return before the # the resource are deleted in placement. So we need to wait for the # instance.delete.end notification as that is emitted after the # resources are freed. 
fake_notifier.wait_for_versioned_notifications('instance.delete.end') for rp_uuid in [self._get_provider_uuid_by_host(hostname) for hostname in self.computes.keys()]: self.assertRequestMatchesUsage({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, rp_uuid) # and no allocations for the deleted server allocations = self._get_allocations_by_server_uuid(server['id']) self.assertEqual(0, len(allocations)) if migration_uuid: # and no allocations for the delete migration allocations = self._get_allocations_by_server_uuid(migration_uuid) self.assertEqual(0, len(allocations)) return migration_uuid def _move_and_check_allocations(self, server, request, old_flavor, new_flavor, source_rp_uuid, dest_rp_uuid): self._migrate_or_resize(server, request) def _check_allocation(): self.assertFlavorMatchesUsage(source_rp_uuid, old_flavor) self.assertFlavorMatchesUsage(dest_rp_uuid, new_flavor) # The instance should own the new_flavor allocation against the # destination host created by the scheduler self.assertFlavorMatchesAllocation(new_flavor, server['id'], dest_rp_uuid) # The migration should own the old_flavor allocation against the # source host created by conductor migration_uuid = self.get_migration_uuid_for_instance(server['id']) self.assertFlavorMatchesAllocation(old_flavor, migration_uuid, source_rp_uuid) # OK, so the move operation has run, but we have not yet confirmed or # reverted the move operation. Before we run periodics, make sure # that we have allocations/usages on BOTH the source and the # destination hosts. _check_allocation() self._run_periodics() _check_allocation() # Make sure the RequestSpec.flavor matches the new_flavor. ctxt = context.get_admin_context() reqspec = objects.RequestSpec.get_by_instance_uuid(ctxt, server['id']) self.assertEqual(new_flavor['id'], reqspec.flavor.flavorid) def _migrate_and_check_allocations(self, server, flavor, source_rp_uuid, dest_rp_uuid): request = { 'migrate': None } self._move_and_check_allocations( server, request=request, old_flavor=flavor, new_flavor=flavor, source_rp_uuid=source_rp_uuid, dest_rp_uuid=dest_rp_uuid) def _resize_and_check_allocations(self, server, old_flavor, new_flavor, source_rp_uuid, dest_rp_uuid): request = { 'resize': { 'flavorRef': new_flavor['id'] } } self._move_and_check_allocations( server, request=request, old_flavor=old_flavor, new_flavor=new_flavor, source_rp_uuid=source_rp_uuid, dest_rp_uuid=dest_rp_uuid) def _resize_to_same_host_and_check_allocations(self, server, old_flavor, new_flavor, rp_uuid): # Resize the server to the same host and check usages in VERIFY_RESIZE # state self.flags(allow_resize_to_same_host=True) self._resize_server(server, new_flavor['id']) self.assertFlavorMatchesUsage(rp_uuid, old_flavor, new_flavor) # The instance should hold a new_flavor allocation self.assertFlavorMatchesAllocation(new_flavor, server['id'], rp_uuid) # The migration should hold an old_flavor allocation migration_uuid = self.get_migration_uuid_for_instance(server['id']) self.assertFlavorMatchesAllocation(old_flavor, migration_uuid, rp_uuid) # We've resized to the same host and have doubled allocations for both # the old and new flavor on the same host. Run the periodic on the # compute to see if it tramples on what the scheduler did. self._run_periodics() # In terms of usage, it's still double on the host because the instance # and the migration each hold an allocation for the new and old # flavors respectively. 
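# Illustrative sketch (hypothetical test method and host names): a typical
# cold-migration test boots on one host, migrates, and lets the helper
# above verify both the instance and the migration allocations.
def _example_cold_migrate_and_verify(self):
    flavor = self.api.get_flavors()[0]
    server = self._boot_and_check_allocations(flavor, 'host1')
    source_rp_uuid = self._get_provider_uuid_by_host('host1')
    dest_rp_uuid = self._get_provider_uuid_by_host('host2')
    self._migrate_and_check_allocations(
        server, flavor, source_rp_uuid, dest_rp_uuid)
    # Confirming the resize afterwards releases the migration's allocation
    # held against the source host.
    return self._confirm_resize(server)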
self.assertFlavorMatchesUsage(rp_uuid, old_flavor, new_flavor) # The instance should hold a new_flavor allocation self.assertFlavorMatchesAllocation(new_flavor, server['id'], rp_uuid) # The migration should hold an old_flavor allocation self.assertFlavorMatchesAllocation(old_flavor, migration_uuid, rp_uuid) def _check_allocation_during_evacuate( self, flavor, server_uuid, source_root_rp_uuid, dest_root_rp_uuid): allocations = self._get_allocations_by_server_uuid(server_uuid) self.assertEqual(2, len(allocations)) self.assertFlavorMatchesUsage(source_root_rp_uuid, flavor) self.assertFlavorMatchesUsage(dest_root_rp_uuid, flavor) def assert_hypervisor_usage(self, compute_node_uuid, flavor, volume_backed): """Asserts the given hypervisor's resource usage matches the given flavor (assumes a single instance on the hypervisor). :param compute_node_uuid: UUID of the ComputeNode to check. :param flavor: "flavor" entry dict from from GET /flavors/{flavor_id} :param volume_backed: True if the flavor is used with a volume-backed server, False otherwise. """ # GET /os-hypervisors/{uuid} requires at least 2.53 with utils.temporary_mutation(self.admin_api, microversion='2.53'): hypervisor = self.admin_api.api_get( '/os-hypervisors/%s' % compute_node_uuid).body['hypervisor'] if volume_backed: expected_disk_usage = 0 else: expected_disk_usage = flavor['disk'] # Account for reserved_host_disk_mb. expected_disk_usage += compute_utils.convert_mb_to_ceil_gb( CONF.reserved_host_disk_mb) self.assertEqual(expected_disk_usage, hypervisor['local_gb_used']) # Account for reserved_host_memory_mb. expected_ram_usage = CONF.reserved_host_memory_mb + flavor['ram'] self.assertEqual(expected_ram_usage, hypervisor['memory_mb_used']) # Account for reserved_host_cpus. expected_vcpu_usage = CONF.reserved_host_cpus + flavor['vcpus'] self.assertEqual(expected_vcpu_usage, hypervisor['vcpus_used']) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4944687 nova-21.2.4/nova/tests/functional/libvirt/0000775000175000017500000000000000000000000020510 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/libvirt/__init__.py0000664000175000017500000000000000000000000022607 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/libvirt/base.py0000664000175000017500000003352200000000000022001 0ustar00zuulzuul00000000000000# Copyright (C) 2018 Red Hat, Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
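# Illustrative worked example (assumed config values): with
# reserved_host_disk_mb=0, reserved_host_memory_mb=512 and
# reserved_host_cpus=0, a non-volume-backed server using a flavor of
# 10 GB disk / 2048 MB RAM / 2 vCPUs makes assert_hypervisor_usage() above
# expect local_gb_used=10, memory_mb_used=2560 and vcpus_used=2.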
import copy import io import fixtures import mock from nova import conf from nova.objects import fields as obj_fields from nova.tests import fixtures as nova_fixtures from nova.tests.functional import test_servers as base from nova.tests.unit.virt.libvirt import fake_imagebackend from nova.tests.unit.virt.libvirt import fakelibvirt CONF = conf.CONF class ServersTestBase(base.ServersTestBase): ADDITIONAL_FILTERS = [] def setUp(self): super(ServersTestBase, self).setUp() # Replace libvirt with fakelibvirt self.useFixture(fake_imagebackend.ImageBackendFixture()) self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.driver.libvirt', fakelibvirt)) self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.host.libvirt', fakelibvirt)) self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.guest.libvirt', fakelibvirt)) self.useFixture(fakelibvirt.FakeLibvirtFixture()) self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.LibvirtDriver._create_image', return_value=(False, False))) self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.LibvirtDriver._get_local_gb_info', return_value={'total': 128, 'used': 44, 'free': 84})) self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.driver.libvirt_utils.is_valid_hostname', return_value=True)) self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.driver.libvirt_utils.file_open', side_effect=lambda *a, **k: io.BytesIO(b''))) self.useFixture(fixtures.MockPatch( 'nova.privsep.utils.supports_direct_io', return_value=True)) self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.host.Host.get_online_cpus', return_value=set(range(16)))) # Mock the 'get_connection' function, as we're going to need to provide # custom capabilities for each test _p = mock.patch('nova.virt.libvirt.host.Host.get_connection') self.mock_conn = _p.start() self.addCleanup(_p.stop) # Mock the 'get_arch' function as we may need to provide different host # architectures during some tests. We default to x86_64 _a = mock.patch('nova.virt.libvirt.utils.get_arch') self.mock_arch = _a.start() self.mock_arch.return_value = obj_fields.Architecture.X86_64 self.addCleanup(_a.stop) def _setup_compute_service(self): # NOTE(stephenfin): We don't start the compute service here as we wish # to configure the host capabilities first. We instead start the # service in the test self.flags(compute_driver='libvirt.LibvirtDriver') def _setup_scheduler_service(self): enabled_filters = CONF.filter_scheduler.enabled_filters enabled_filters += self.ADDITIONAL_FILTERS self.flags(driver='filter_scheduler', group='scheduler') self.flags(enabled_filters=enabled_filters, group='filter_scheduler') return self.start_service('scheduler') def _get_connection(self, host_info, pci_info=None, libvirt_version=fakelibvirt.FAKE_LIBVIRT_VERSION, mdev_info=None, hostname=None): # sanity check self.assertGreater(16, host_info.cpus, "Host.get_online_cpus is only accounting for 16 CPUs but you're " "requesting %d; change the mock or your test" % host_info.cpus) fake_connection = fakelibvirt.Connection( 'qemu:///system', version=libvirt_version, hv_version=fakelibvirt.FAKE_QEMU_VERSION, host_info=host_info, pci_info=pci_info, mdev_info=mdev_info, hostname=hostname) return fake_connection def start_computes(self, host_info_dict=None, save_rp_uuids=False): """Start compute services. The started services will be saved in self.computes, keyed by hostname. :param host_info_dict: A hostname -> fakelibvirt.HostInfo object dictionary representing the libvirt HostInfo of each compute host. 
If None, the default is to start 2 computes, named test_compute0 and test_compute1, with 2 NUMA nodes, 2 cores per node, 2 threads per core, and 16GB of RAM. :param save_rp_uuids: If True, save the resource provider UUID of each started compute in self.compute_rp_uuids, keyed by hostname. """ if host_info_dict is None: host_info = fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=15740000) host_info_dict = {'test_compute0': host_info, 'test_compute1': host_info} def start_compute(host, host_info): fake_connection = self._get_connection(host_info=host_info, hostname=host) # This is fun. Firstly we need to do a global'ish mock so we can # actually start the service. with mock.patch('nova.virt.libvirt.host.Host.get_connection', return_value=fake_connection): compute = self.start_service('compute', host=host) # Once that's done, we need to tweak the compute "service" to # make sure it returns unique objects. We do this inside the # mock context to avoid a small window between the end of the # context and the tweaking where get_connection would revert to # being an autospec mock. compute.driver._host.get_connection = lambda: fake_connection return compute self.computes = {} self.compute_rp_uuids = {} for host, host_info in host_info_dict.items(): # NOTE(artom) A lambda: foo construct returns the value of foo at # call-time, so if the value of foo changes with every iteration of # a loop, every call to the lambda will return a different value of # foo. Because that's not what we want in our lambda further up, # we can't put it directly in the for loop, and need to introduce # the start_compute function to create a scope in which host and # host_info do not change with every iteration of the for loop. self.computes[host] = start_compute(host, host_info) if save_rp_uuids: self.compute_rp_uuids[host] = self.placement_api.get( '/resource_providers?name=%s' % host).body[ 'resource_providers'][0]['uuid'] class LibvirtNeutronFixture(nova_fixtures.NeutronFixture): """A custom variant of the stock neutron fixture with more networks. There are three networks available: two l2 networks (one flat and one VLAN) and one l3 network (VXLAN). 
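# Illustrative sketch (hypothetical host names): a libvirt functional test
# usually describes each compute's topology with a fakelibvirt.HostInfo and
# hands the mapping to start_computes(), optionally recording the provider
# UUIDs for later placement assertions.
def _example_start_two_numa_computes(self):
    host_info = fakelibvirt.HostInfo(
        cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2,
        kB_mem=15740000)
    self.start_computes(
        {'numa-host-a': host_info, 'numa-host-b': host_info},
        save_rp_uuids=True)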
""" network_1 = { 'id': '3cb9bc59-5699-4588-a4b1-b87f96708bc6', 'status': 'ACTIVE', 'subnets': [], 'name': 'physical-network-foo', 'admin_state_up': True, 'tenant_id': nova_fixtures.NeutronFixture.tenant_id, 'provider:physical_network': 'foo', 'provider:network_type': 'flat', 'provider:segmentation_id': None, } network_2 = network_1.copy() network_2.update({ 'id': 'a252b8cd-2d99-4e82-9a97-ec1217c496f5', 'name': 'physical-network-bar', 'provider:physical_network': 'bar', 'provider:network_type': 'vlan', 'provider:segmentation_id': 123, }) network_3 = network_1.copy() network_3.update({ 'id': '877a79cc-295b-4b80-9606-092bf132931e', 'name': 'tunneled-network', 'provider:physical_network': None, 'provider:network_type': 'vxlan', 'provider:segmentation_id': 69, }) network_4 = network_1.copy() network_4.update({ 'id': '1b70879f-fd00-411e-8ea9-143e7820e61d', 'name': 'private-network', 'shared': False, 'provider:physical_network': 'physnet4', "provider:network_type": "vlan", 'provider:segmentation_id': 42, }) subnet_1 = nova_fixtures.NeutronFixture.subnet_1.copy() subnet_1.update({ 'name': 'physical-subnet-foo', }) subnet_2 = nova_fixtures.NeutronFixture.subnet_1.copy() subnet_2.update({ 'id': 'b4c13749-c002-47ed-bf42-8b1d44fa9ff2', 'name': 'physical-subnet-bar', 'network_id': network_2['id'], }) subnet_3 = nova_fixtures.NeutronFixture.subnet_1.copy() subnet_3.update({ 'id': '4dacb20b-917f-4275-aa75-825894553442', 'name': 'tunneled-subnet', 'network_id': network_3['id'], }) subnet_4 = nova_fixtures.NeutronFixture.subnet_1.copy() subnet_4.update({ 'id': '7cb343ec-6637-494c-89a1-8890eab7788e', 'name': 'physical-subnet-bar', 'network_id': network_4['id'], }) network_1['subnets'] = [subnet_1] network_2['subnets'] = [subnet_2] network_3['subnets'] = [subnet_3] network_4['subnets'] = [subnet_4] network_1_port_2 = { 'id': 'f32582b5-8694-4be8-9a52-c5732f601c9d', 'network_id': network_1['id'], 'status': 'ACTIVE', 'mac_address': '71:ce:c7:8b:cd:dc', 'fixed_ips': [ { 'ip_address': '192.168.1.10', 'subnet_id': subnet_1['id'] } ], 'binding:vif_type': 'ovs', 'binding:vnic_type': 'normal', } network_1_port_3 = { 'id': '9c7580a0-8b01-41f3-ba07-a114709a4b74', 'network_id': network_1['id'], 'status': 'ACTIVE', 'mac_address': '71:ce:c7:2b:cd:dc', 'fixed_ips': [ { 'ip_address': '192.168.1.11', 'subnet_id': subnet_1['id'] } ], 'binding:vif_type': 'ovs', 'binding:vnic_type': 'normal', } network_2_port_1 = { 'id': '67d36444-6353-40f5-9e92-59346cf0dfda', 'network_id': network_2['id'], 'status': 'ACTIVE', 'mac_address': 'd2:0b:fd:d7:89:9b', 'fixed_ips': [ { 'ip_address': '192.168.1.6', 'subnet_id': subnet_2['id'] } ], 'binding:vif_type': 'ovs', 'binding:vnic_type': 'normal', } network_3_port_1 = { 'id': '4bfa1dc4-4354-4840-b0b4-f06196fa1344', 'network_id': network_3['id'], 'status': 'ACTIVE', 'mac_address': 'd2:0b:fd:99:89:9b', 'fixed_ips': [ { 'ip_address': '192.168.2.6', 'subnet_id': subnet_3['id'] } ], 'binding:vif_type': 'ovs', 'binding:vnic_type': 'normal', } network_4_port_1 = { 'id': 'b4cd0b93-2ac8-40a7-9fa4-2cd680ccdf3e', 'network_id': network_4['id'], 'status': 'ACTIVE', 'mac_address': 'b5:bc:2e:e7:51:ee', 'fixed_ips': [ { 'ip_address': '192.168.4.6', 'subnet_id': subnet_4['id'] } ], 'binding:vif_type': 'hw_veb', 'binding:vnic_type': 'direct', 'binding:vif_details': {'vlan': 42}, 'binding:profile': {'pci_vendor_info': '1377:0047', 'pci_slot': '0000:81:00.1', 'physical_network': 'physnet4'}, } def __init__(self, test): super(LibvirtNeutronFixture, self).__init__(test) self._networks = { self.network_1['id']: 
self.network_1, self.network_2['id']: self.network_2, self.network_3['id']: self.network_3, self.network_4['id']: self.network_4, } self._net1_ports = [self.network_1_port_2, self.network_1_port_3] def create_port(self, body=None): network_id = body['port']['network_id'] assert network_id in self._networks, ('Network %s not in fixture' % network_id) if network_id == self.network_1['id']: port = self._net1_ports.pop(0) elif network_id == self.network_2['id']: port = self.network_2_port_1 elif network_id == self.network_3['id']: port = self.network_3_port_1 elif network_id == self.network_4['id']: port = self.network_4_port_1 # this copy is here to avoid modifying class variables like # network_2_port_1 below at the update call port = copy.deepcopy(port) port.update(body['port']) # the tenant ID is normally extracted from credentials in the request # and is not present in the body if 'tenant_id' not in port: port['tenant_id'] = nova_fixtures.NeutronFixture.tenant_id # similarly, these attributes are set by neutron itself port['admin_state_up'] = True self._ports[port['id']] = port # this copy is here as nova sometimes modifies the returned port # locally and we want to avoid that nova modifies the fixture internals return {'port': copy.deepcopy(port)} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/libvirt/integrated_helpers.py0000664000175000017500000000321200000000000024730 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures from nova import conf from nova.tests.functional import integrated_helpers from nova.tests.unit.virt.libvirt import fakelibvirt CONF = conf.CONF class LibvirtProviderUsageBaseTestCase( integrated_helpers.ProviderUsageBaseTestCase): """Base test class for functional tests that check provider allocations and usage using the libvirt driver. """ compute_driver = 'libvirt.LibvirtDriver' STUB_INIT_HOST = True def setUp(self): super(LibvirtProviderUsageBaseTestCase, self).setUp() self.useFixture(fakelibvirt.FakeLibvirtFixture(stub_os_vif=False)) if self.STUB_INIT_HOST: self.useFixture( fixtures.MockPatch( 'nova.virt.libvirt.driver.LibvirtDriver.init_host')) self.useFixture( fixtures.MockPatch( 'nova.virt.libvirt.driver.LibvirtDriver.spawn')) def start_compute(self): self.compute = self._start_compute(CONF.host) nodename = self.compute.manager._get_nodename(None) self.host_uuid = self._get_provider_uuid_by_host(nodename) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/libvirt/test_evacuate.py0000664000175000017500000006134100000000000023723 0ustar00zuulzuul00000000000000# Copyright 2020 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import fixtures import mock import os.path from oslo_utils import fileutils from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import units from nova import conf from nova import context from nova import exception from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit import fake_network from nova.tests.unit import fake_notifier import nova.tests.unit.image.fake as fake_image from nova.tests.unit.virt.libvirt import fakelibvirt from nova.virt.libvirt import config as libvirt_config CONF = conf.CONF FLAVOR_FIXTURES = [ {'flavorid': 'root_only', 'name': 'root_only', 'vcpus': 1, 'memory_mb': 512, 'root_gb': 1, 'ephemeral_gb': 0, 'swap': 0}, {'flavorid': 'with_ephemeral', 'name': 'with_ephemeral', 'vcpus': 1, 'memory_mb': 512, 'root_gb': 1, 'ephemeral_gb': 1, 'swap': 0}, {'flavorid': 'with_swap', 'name': 'with_swap', 'vcpus': 1, 'memory_mb': 512, 'root_gb': 1, 'ephemeral_gb': 0, 'swap': 1}, ] # Choice of image id is arbitrary, but fixed for consistency. IMAGE_ID = fake_image.AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID # NOTE(mdbooth): Change I76448196 tests for creation of any local disk, and # short-circuits as soon as it sees one created. Disks are created in order: # root disk, ephemeral disks, swap disk. Therefore to test correct handling of # ephemeral disks we must ensure there is no root disk, and to test swap disks # we must ensure there is no root or ephemeral disks. Each of the following # fixtures intentionally has only a single local disk (or none for bfv), # ensuring we cover all local disks. 
SERVER_FIXTURES = [ # Local root disk only {'name': 'local_root', 'imageRef': IMAGE_ID, 'flavorRef': 'root_only', }, # No local disks {'name': 'bfv', 'flavorRef': 'root_only', 'block_device_mapping_v2': [{ 'boot_index': 0, 'uuid': uuids.vol1, 'source_type': 'volume', 'destination_type': 'volume', }], }, # Local eph disk only {'name': 'bfv_with_eph', 'flavorRef': 'with_ephemeral', 'block_device_mapping_v2': [{ 'boot_index': 0, 'uuid': uuids.vol2, 'source_type': 'volume', 'destination_type': 'volume', }], }, # Local swap disk only {'name': 'bfv_with_swap', 'flavorRef': 'with_swap', 'block_device_mapping_v2': [{ 'boot_index': 0, 'uuid': uuids.vol3, 'source_type': 'volume', 'destination_type': 'volume', }], }, ] SERVER_DISKS = { 'local_root': 'disk', 'bfv': None, 'bfv_with_eph': 'disk.eph0', 'bfv_with_swap': 'disk.swap', } class _FileTest(object): """A base class for the _FlatTest and _Qcow2Test mixin test classes""" def setUp(self): super(_FileTest, self).setUp() def assert_disks_nonshared_instancedir(self, server): name = server['name'] disk = SERVER_DISKS[name] if not disk: return source_root_disk = os.path.join(self.source_instance_path(server), disk) dest_root_disk = os.path.join(self.dest_instance_path(server), disk) self.assertTrue(os.path.exists(source_root_disk), "Source root disk %s for server %s does not exist" % (source_root_disk, name)) self.assertFalse(os.path.exists(dest_root_disk), "Destination root disk %s for server %s exists" % (dest_root_disk, name)) def assert_disks_shared_instancedir(self, server): name = server['name'] disk = SERVER_DISKS[name] if not disk: return source_root_disk = os.path.join( self.source_instance_path(server), disk) self.assertTrue(os.path.exists(source_root_disk), "Source root disk %s for server %s does not exist" % (source_root_disk, name)) class _FlatTest(_FileTest): """A mixin which configures the flat imagebackend, and provides assertions for the expected state of the flat imagebackend after an evacuation. We mock create_image to touch a file so we can assert its existence/removal in tests. """ def setUp(self): super(_FlatTest, self).setUp() self.flags(group='libvirt', images_type='flat') def fake_create_image(_self, *args, **kwargs): # Simply ensure the file exists open(_self.path, 'a').close() self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.imagebackend.Flat.create_image', fake_create_image)) # Mocked to reduce runtime self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.imagebackend.Flat.correct_format')) class _Qcow2Test(_FileTest): """A mixin which configures the qcow2 imagebackend, and provides assertions for the expected state of the flat imagebackend after an evacuation. We mock create_image to touch a file so we can assert its existence/removal in tests. """ def setUp(self): super(_Qcow2Test, self).setUp() self.flags(group='libvirt', images_type='qcow2') def fake_create_image(_self, *args, **kwargs): # Simply ensure the file exists open(_self.path, 'a').close() self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.imagebackend.Qcow2.create_image', fake_create_image)) class _RbdTest(object): """A mixin which configures the rbd imagebackend, and provides assertions for the expected state of the rbd imagebackend after an evacuation. We mock RBDDriver so we don't need an actual ceph cluster. We mock create_image to store which rbd volumes would have been created, and exists to reference that store. 
""" def setUp(self): super(_RbdTest, self).setUp() self.flags(group='libvirt', images_type='rbd') self.flags(group='libvirt', rbd_secret_uuid='1234') self.created = set() def fake_create_image(_self, *args, **kwargs): self.created.add(_self.rbd_name) def fake_exists(_self): return _self.rbd_name in self.created self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.imagebackend.Rbd.create_image', fake_create_image)) self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.imagebackend.Rbd.exists', fake_exists)) # We never want to actually touch rbd self.mock_rbd_driver = self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.storage.rbd_utils.RBDDriver')).mock.return_value self.mock_rbd_driver.get_mon_addrs.return_value = ([], []) self.mock_rbd_driver.size.return_value = 10 * units.Gi self.mock_rbd_driver.rbd_user = 'rbd' def _assert_disks(self, server): name = server['name'] disk = SERVER_DISKS[name] if not disk: return # Check that we created a root disk and haven't called _cleanup_rbd at # all self.assertIn("%s_%s" % (server['id'], disk), self.created) self.mock_rbd_driver.cleanup_volumes.assert_not_called() # We never want to cleanup rbd disks during evacuate, regardless of # instance shared storage assert_disks_nonshared_instancedir = _assert_disks assert_disks_shared_instancedir = _assert_disks class FakeLVM(object): def __init__(self): self.volumes = set() def _exists(self, vg, lv): return any([v for v in self.volumes if v[0] == vg and v[1] == lv]) def _vg_exists(self, vg): return any([v for v in self.volumes if v[0] == vg]) def _find_vol_from_path(self, path): info = self.volume_info(path) for vol in self.volumes: if vol[0] == info['VG'] and vol[1] == info['LV']: return vol return None def create_volume(self, vg, lv, size, sparse=False): self.volumes.add((vg, lv, size)) def list_volumes(self, vg_path): _, vg = os.path.split(vg_path) return [vol[1] for vol in self.volumes if vol[0] == vg] def volume_info(self, path): path, lv = os.path.split(path) path, vg = os.path.split(path) return {'VG': vg, 'LV': lv} def get_volume_size(self, path): vol = self._find_vol_from_path(path) if vol is not None: return vol[2] raise exception.VolumeBDMPathNotFound(path=path) def remove_volumes(self, paths): for path in paths: vol = self._find_vol_from_path(path) if vol is not None: self.volumes.remove(vol) class _LVMTest(object): """A mixin which configures the LVM imagebackend, and provides assertions for the expected state of the LVM imagebackend after an evacuation. We need to track logical volumes on each compute separately, which we do by mocking the nova.virt.libvirt.storage.lvm module immediately before starting a new compute. 
""" def setUp(self): super(_LVMTest, self).setUp() self.flags(group='libvirt', images_type='lvm', images_volume_group='fake_vg') # A map of compute service name: fake libvirt module self.fake_lvms = collections.defaultdict(FakeLVM) # The fake libvirt module in use by the compute service which is # running currently self.fake_lvm = None def fake_create_image(_self, prepare_template, base, size, *args, **kwargs): self.fake_lvm.create_volume(_self.vg, _self.lv, size) def fake_exists(_self, *args, **kwargs): return self.fake_lvm._exists(_self.vg, _self.lv) self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.imagebackend.Lvm.create_image', fake_create_image)) self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.imagebackend.Lvm.exists', fake_exists)) orig_path_exists = os.path.exists def fake_path_exists(path): if path.startswith('/dev/'): paths = path.split(os.sep)[2:] if len(paths) == 0: # For completeness: /dev exists return True if len(paths) == 1: return self.fake_lvm._vg_exists(*paths) if len(paths) == 2: return self.fake_lvm._exists(*paths) return False else: return orig_path_exists(path) self.useFixture(fixtures.MonkeyPatch( 'os.path.exists', fake_path_exists)) def _start_compute(self, name): compute = super(_LVMTest, self)._start_compute(name) # We need each compute to have its own fake LVM. These mocks replace # the previous mocks, globally. This only works because in this test we # only ever run one of the computes at a time. self.fake_lvm = self.fake_lvms[name] self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.driver.lvm', self.fake_lvm)) self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.imagebackend.lvm', self.fake_lvm)) return compute def _assert_disks(self, server): name = server['name'] disk = SERVER_DISKS[name] if not disk: return vg = CONF.libvirt.images_volume_group lv = '{uuid}_{disk}'.format(uuid=server['id'], disk=disk) compute0 = self.fake_lvms['compute0'] compute1 = self.fake_lvms['compute1'] self.assertTrue(compute0._exists(vg, lv), 'Disk "{disk}" of server {server} does not exist on ' 'source'.format(disk=disk, server=name)) self.assertFalse(compute1._exists(vg, lv), 'Disk "{disk}" of server {server} still exists on ' 'destination'.format(disk=disk, server=name)) # We always want to cleanup LVM disks on failure, regardless of shared # instance directory assert_disks_nonshared_instancedir = _assert_disks assert_disks_shared_instancedir = _assert_disks class _LibvirtEvacuateTest(integrated_helpers.InstanceHelperMixin): """The main libvirt evacuate test. This configures a set of stub services with 2 computes and defines 2 tests, both of which create a server on compute0 and then evacuate it to compute1. test_evacuate_failure_nonshared_instancedir does this with a non-shared instance directory, and test_evacuate_failure_shared_instancedir does this with a shared instance directory. This class requires one of the mixins _FlatTest, _RbdTest, _LVMTest, or _Qcow2Test to execute. These configure an imagebackend, and define the assertions assert_disks_nonshared_instancedir and assert_disks_shared_instancedir to assert the expected state of that imagebackend after an evacuation. By combining shared and non-shared instance directory tests in this class with these mixins we get test coverage of all combinations of shared/nonshared instanace directories and block storage. """ def _start_compute(self, name): # NOTE(mdbooth): fakelibvirt's getHostname currently returns a # hardcoded 'compute1', which is undesirable if we want multiple fake # computes. 
There's no good way to pre-initialise get_connection() to # return a fake libvirt with a custom return for getHostname. # # Here we mock the class during service creation to return our custom # hostname, but we can't leave this in place because then both computes # will still get the same value from their libvirt Connection. Once the # service has started, we poke a custom getHostname into the # instantiated object to do the same thing, but only for that object. with mock.patch.object(fakelibvirt.Connection, 'getHostname', return_value=name): compute = self.start_service('compute', host=name) compute.driver._host.get_connection().getHostname = lambda: name return compute def setUp(self): super(_LibvirtEvacuateTest, self).setUp() self.useFixture(nova_fixtures.CinderFixture(self)) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) fake_network.set_stub_network_methods(self) api_fixture = self.useFixture( nova_fixtures.OSAPIFixture(api_version='v2.1')) self.api = api_fixture.admin_api # force_down and evacuate without onSharedStorage self.api.microversion = '2.14' fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) self.useFixture(fakelibvirt.FakeLibvirtFixture()) # Fake out all the details of volume connection self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.driver.LibvirtDriver.get_volume_connector')) self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.driver.LibvirtDriver._connect_volume')) # For cleanup self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.driver.LibvirtDriver._disconnect_volume')) volume_config = libvirt_config.LibvirtConfigGuestDisk() volume_config.driver_name = 'fake-volume-driver' volume_config.source_path = 'fake-source-path' volume_config.target_dev = 'fake-target-dev' volume_config.target_bus = 'fake-target-bus' get_volume_config = self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.driver.LibvirtDriver._get_volume_config')).mock get_volume_config.return_value = volume_config # Ensure our computes report lots of available disk, vcpu, and ram lots = 10000000 get_local_gb_info = self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.driver.LibvirtDriver._get_local_gb_info')).mock get_local_gb_info.return_value = { 'total': lots, 'free': lots, 'used': 1} get_vcpu_available = self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.driver.LibvirtDriver._get_vcpu_available')).mock get_vcpu_available.return_value = set(cpu for cpu in range(24)) get_memory_mb_total = self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.host.Host.get_memory_mb_total')).mock get_memory_mb_total.return_value = lots # Mock out adding rng devices self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.driver.LibvirtDriver._add_rng_device')).mock self.start_service('conductor') self.start_service('scheduler') self.flags(compute_driver='libvirt.LibvirtDriver') ctxt = context.get_admin_context() for flavor in FLAVOR_FIXTURES: objects.Flavor(context=ctxt, **flavor).create() @staticmethod def source_instance_path(server): return os.path.join(CONF.instances_path, server['id']) @staticmethod def dest_instance_path(server): return os.path.join(CONF.instances_path, 'dest', server['id']) def _create_servers(self): def _create_server(server): # NOTE(mdbooth): We could do all the server creations concurrently # to improve throughput, but we have seen this result in timeouts # on a loaded CI worker. 
server = self.api.post_server({'server': server}) # Wait for server to become ACTIVE, and return its updated state # NOTE(mdbooth): Increase max_retries from default to reduce # chances of timeout. return self._wait_for_state_change(server, 'ACTIVE', max_retries=30) return [_create_server(server) for server in SERVER_FIXTURES] def _swap_computes(self, compute0): # Force compute0 down compute0.stop() self.api.force_down_service('compute0', 'nova-compute', True) # Start compute1 return self._start_compute('compute1') def _evacuate_with_failure(self, server, compute1): # Perform an evacuation during which we experience a failure on the # destination host instance_uuid = server['id'] with mock.patch.object(compute1.driver, 'plug_vifs') as plug_vifs: plug_vifs.side_effect = test.TestingException self.api.post_server_action(instance_uuid, {'evacuate': {'host': 'compute1'}}) # Wait for the rebuild to start, then complete fake_notifier.wait_for_versioned_notifications( 'instance.rebuild.start') self._wait_for_migration_status(server, ['failed']) server = self._wait_for_server_parameter( server, {'OS-EXT-STS:task_state': None}) # Meta-test plug_vifs.assert_called() plug_vifs.reset_mock() # Return fresh server state after evacuate return server def test_evacuate_failure_nonshared_instancedir(self): """Assert the failure cleanup behaviour of non-shared instance storage If we fail during evacuate and the instance directory didn't previously exist on the destination, we should delete it """ # Use a unique instances_path per test to allow cleanup self.flags(instances_path=self.useFixture(fixtures.TempDir()).path) # Create instances on compute0 compute0 = self._start_compute('compute0') servers = self._create_servers() compute1 = self._swap_computes(compute0) # Create a 'pass-through' mock for ensure_tree so we can log its calls orig_ensure_tree = fileutils.ensure_tree mock_ensure_tree = self.useFixture(fixtures.MockPatch( 'oslo_utils.fileutils.ensure_tree', side_effect=orig_ensure_tree)).mock for server in servers: name = server['name'] source_instance_path = self.source_instance_path(server) dest_instance_path = self.dest_instance_path(server) # Check that we've got an instance directory on the source and not # on the dest self.assertTrue(os.path.exists(source_instance_path), "Source instance directory %s for server %s does " "not exist" % (source_instance_path, name)) self.assertFalse(os.path.exists(dest_instance_path), "Destination instance directory %s for server %s " "exists" % (dest_instance_path, name)) # By default our 2 compute hosts share the same instance directory # on the test runner. Force a different directory while running # evacuate on compute1 so we don't have shared storage. 
def dest_get_instance_path(instance, relative=False): if relative: return instance.uuid return dest_instance_path with mock.patch('nova.virt.libvirt.utils.get_instance_path') \ as get_instance_path: get_instance_path.side_effect = dest_get_instance_path server = self._evacuate_with_failure(server, compute1) # Check that we've got an instance directory on the source and not # on the dest, but that the dest was created self.assertTrue(os.path.exists(source_instance_path), "Source instance directory %s for server %s does " "not exist" % (source_instance_path, name)) self.assertFalse(os.path.exists(dest_instance_path), "Destination instance directory %s for server %s " "exists" % (dest_instance_path, name)) mock_ensure_tree.assert_called_with(dest_instance_path) self.assert_disks_nonshared_instancedir(server) # Check we're still on the failed source host self.assertEqual('compute0', server['OS-EXT-SRV-ATTR:host']) def test_evacuate_failure_shared_instancedir(self): """Assert the failure cleanup behaviour of shared instance storage If we fail during evacuate and the instance directory was already present on the destination, we should leave it there By default our 2 compute hosts share the same instance directory on the test runner. """ # Use a unique instances_path per test to allow cleanup self.flags(instances_path=self.useFixture(fixtures.TempDir()).path) # Create test instances on compute0 compute0 = self._start_compute('compute0') servers = self._create_servers() compute1 = self._swap_computes(compute0) for server in servers: name = server['name'] shared_instance_path = self.source_instance_path(server) # Check that we've got an instance directory on the source self.assertTrue(os.path.exists(shared_instance_path), "Shared instance directory %s for server %s does " "not exist" % (shared_instance_path, name)) server = self._evacuate_with_failure(server, compute1) # Check that the instance directory still exists self.assertTrue(os.path.exists(shared_instance_path), "Shared instance directory %s for server %s " "does not exist" % (shared_instance_path, name)) self.assert_disks_shared_instancedir(server) # Check we're still on the failed source host self.assertEqual('compute0', server['OS-EXT-SRV-ATTR:host']) class LibvirtFlatEvacuateTest(_FlatTest, _LibvirtEvacuateTest, test.TestCase): pass class LibvirtQcowEvacuateTest(_Qcow2Test, _LibvirtEvacuateTest, test.TestCase): pass class LibvirtRbdEvacuateTest(_RbdTest, _LibvirtEvacuateTest, test.TestCase): pass class LibvirtLVMEvacuateTest(_LVMTest, _LibvirtEvacuateTest, test.TestCase): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/libvirt/test_numa_servers.py0000664000175000017500000015707000000000000024644 0ustar00zuulzuul00000000000000# Copyright (C) 2015 Red Hat, Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock import six import testtools from oslo_config import cfg from oslo_log import log as logging import nova from nova.conf import neutron as neutron_conf from nova import context as nova_context from nova import objects from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api import client from nova.tests.functional.libvirt import base from nova.tests.unit import fake_notifier from nova.tests.unit.virt.libvirt import fakelibvirt CONF = cfg.CONF LOG = logging.getLogger(__name__) class NUMAServersTestBase(base.ServersTestBase): ADDITIONAL_FILTERS = ['NUMATopologyFilter'] def setUp(self): super(NUMAServersTestBase, self).setUp() self.ctxt = nova_context.get_admin_context() # Mock the 'NUMATopologyFilter' filter, as most tests need to inspect # this host_manager = self.scheduler.manager.driver.host_manager numa_filter_class = host_manager.filter_cls_map['NUMATopologyFilter'] host_pass_mock = mock.Mock(wraps=numa_filter_class().host_passes) _p = mock.patch('nova.scheduler.filters' '.numa_topology_filter.NUMATopologyFilter.host_passes', side_effect=host_pass_mock) self.mock_filter = _p.start() self.addCleanup(_p.stop) class NUMAServersTest(NUMAServersTestBase): def _run_build_test(self, flavor_id, end_status='ACTIVE', filter_called_on_error=True, expected_usage=None): # NOTE(bhagyashris): Always use host as 'compute1' so that it's # possible to get resource provider information for verifying # compute usages. This host name 'compute1' is hard coded in # Connection class in fakelibvirt.py. # TODO(stephenfin): Remove the hardcoded limit, possibly overridding # 'start_service' to make sure there isn't a mismatch self.compute = self.start_service('compute', host='compute1') compute_rp_uuid = self.placement_api.get( '/resource_providers?name=compute1').body[ 'resource_providers'][0]['uuid'] # Create server created_server = self._create_server( flavor_id=flavor_id, expected_state=end_status) # Validate the quota usage if filter_called_on_error and end_status == 'ACTIVE': quota_details = self.api.get_quota_detail() expected_core_usages = expected_usage.get( 'VCPU', expected_usage.get('PCPU', 0)) self.assertEqual(expected_core_usages, quota_details['cores']['in_use']) # Validate that NUMATopologyFilter has been called or not called, # depending on whether this is expected to make it past placement or # not (hint: if it's a lack of VCPU/PCPU resources, it won't) if filter_called_on_error: self.assertTrue(self.mock_filter.called) else: self.assertFalse(self.mock_filter.called) if expected_usage: compute_usage = self.placement_api.get( '/resource_providers/%s/usages' % compute_rp_uuid).body[ 'usages'] self.assertEqual(expected_usage, compute_usage) self.addCleanup(self._delete_server, created_server) return created_server def test_create_server_with_numa_topology(self): """Create a server with two NUMA nodes. This should pass and result in a guest NUMA topology with two NUMA nodes. 
""" host_info = fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=15740000) fake_connection = self._get_connection(host_info=host_info) self.mock_conn.return_value = fake_connection extra_spec = {'hw:numa_nodes': '2'} flavor_id = self._create_flavor(vcpu=2, extra_spec=extra_spec) expected_usage = {'DISK_GB': 20, 'MEMORY_MB': 2048, 'VCPU': 2} server = self._run_build_test(flavor_id, expected_usage=expected_usage) ctx = nova_context.get_admin_context() inst = objects.Instance.get_by_uuid(ctx, server['id']) self.assertEqual(2, len(inst.numa_topology.cells)) self.assertNotIn('cpu_topology', inst.numa_topology.cells[0]) self.assertNotIn('cpu_topology', inst.numa_topology.cells[1]) def test_create_server_with_numa_fails(self): """Create a two NUMA node instance on a host with only one node. This should fail because each guest NUMA node must be placed on a separate host NUMA node. """ host_info = fakelibvirt.HostInfo(cpu_nodes=1, cpu_sockets=1, cpu_cores=2, kB_mem=15740000) fake_connection = self._get_connection(host_info=host_info) self.mock_conn.return_value = fake_connection extra_spec = {'hw:numa_nodes': '2'} flavor_id = self._create_flavor(extra_spec=extra_spec) self._run_build_test(flavor_id, end_status='ERROR') def test_create_server_with_hugepages(self): """Create a server with huge pages. Configuring huge pages against a server also necessitates configuring a NUMA topology. """ host_info = fakelibvirt.HostInfo(cpu_nodes=1, cpu_sockets=2, cpu_cores=2, cpu_threads=2, kB_mem=(1024 * 1024 * 16)) # GB self.mock_conn.return_value = self._get_connection(host_info=host_info) # create 1024 * 2 MB huge pages, and allocate the rest of the 16 GB as # small pages for cell in host_info.numa_topology.cells: huge_pages = 1024 small_pages = (host_info.kB_mem - (2048 * huge_pages)) // 4 cell.mempages = fakelibvirt.create_mempages([ (4, small_pages), (2048, huge_pages), ]) extra_spec = {'hw:mem_page_size': 'large'} flavor_id = self._create_flavor(memory_mb=2048, extra_spec=extra_spec) expected_usage = {'DISK_GB': 20, 'MEMORY_MB': 2048, 'VCPU': 2} server = self._run_build_test(flavor_id, expected_usage=expected_usage) ctx = nova_context.get_admin_context() inst = objects.Instance.get_by_uuid(ctx, server['id']) self.assertEqual(1, len(inst.numa_topology.cells)) self.assertEqual(2048, inst.numa_topology.cells[0].pagesize) # kB self.assertEqual(2048, inst.numa_topology.cells[0].memory) # MB def test_create_server_with_hugepages_fails(self): """Create a server with huge pages on a host that doesn't support them. This should fail because there are hugepages but not enough of them. """ host_info = fakelibvirt.HostInfo(cpu_nodes=1, cpu_sockets=2, cpu_cores=2, cpu_threads=2, kB_mem=(1024 * 1024 * 16)) # GB self.mock_conn.return_value = self._get_connection(host_info=host_info) # create 512 * 2 MB huge pages, and allocate the rest of the 16 GB as # small pages for cell in host_info.numa_topology.cells: huge_pages = 512 small_pages = (host_info.kB_mem - (2048 * huge_pages)) // 4 cell.mempages = fakelibvirt.create_mempages([ (4, small_pages), (2048, huge_pages), ]) extra_spec = {'hw:mem_page_size': 'large'} flavor_id = self._create_flavor(memory_mb=2048, extra_spec=extra_spec) self._run_build_test(flavor_id, end_status='ERROR') def test_create_server_with_legacy_pinning_policy(self): """Create a server using the legacy 'hw:cpu_policy' extra spec. This should pass and result in a guest NUMA topology with pinned CPUs. 
""" self.flags(cpu_dedicated_set='0-9', cpu_shared_set=None, group='compute') self.flags(vcpu_pin_set=None) host_info = fakelibvirt.HostInfo(cpu_nodes=1, cpu_sockets=1, cpu_cores=5, cpu_threads=2, kB_mem=15740000) fake_connection = self._get_connection(host_info=host_info) self.mock_conn.return_value = fake_connection extra_spec = { 'hw:cpu_policy': 'dedicated', 'hw:cpu_thread_policy': 'prefer', } flavor_id = self._create_flavor(vcpu=5, extra_spec=extra_spec) expected_usage = {'DISK_GB': 20, 'MEMORY_MB': 2048, 'PCPU': 5} server = self._run_build_test(flavor_id, expected_usage=expected_usage) inst = objects.Instance.get_by_uuid(self.ctxt, server['id']) self.assertEqual(1, len(inst.numa_topology.cells)) self.assertEqual(5, inst.numa_topology.cells[0].cpu_topology.cores) def test_create_server_with_legacy_pinning_policy_old_configuration(self): """Create a server using the legacy extra spec and configuration. This should pass and result in a guest NUMA topology with pinned CPUs, though we'll still be consuming VCPUs (which would in theory be fixed during a later reshape). """ self.flags(cpu_dedicated_set=None, cpu_shared_set=None, group='compute') self.flags(vcpu_pin_set='0-7') host_info = fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=15740000) fake_connection = self._get_connection(host_info=host_info) self.mock_conn.return_value = fake_connection extra_spec = { 'hw:cpu_policy': 'dedicated', 'hw:cpu_thread_policy': 'prefer', } flavor_id = self._create_flavor(extra_spec=extra_spec) expected_usage = {'DISK_GB': 20, 'MEMORY_MB': 2048, 'VCPU': 2} self._run_build_test(flavor_id, expected_usage=expected_usage) def test_create_server_with_isolate_thread_policy_old_configuration(self): """Create a server with the legacy 'hw:cpu_thread_policy=isolate' extra spec and configuration. This should pass and result in an instance consuming $flavor.vcpu host cores plus the thread sibling(s) of each of these cores. We also be consuming VCPUs since we're on legacy configuration here, though that would in theory be fixed during a later reshape. """ self.flags( cpu_dedicated_set=None, cpu_shared_set=None, group='compute') self.flags(vcpu_pin_set='0-3') # host has hyperthreads, which means we're going to end up consuming # $flavor.vcpu hosts cores plus the thread sibling(s) for each core host_info = fakelibvirt.HostInfo( cpu_nodes=1, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=(1024 * 1024 * 16), # GB ) fake_connection = self._get_connection(host_info=host_info) self.mock_conn.return_value = fake_connection extra_spec = { 'hw:cpu_policy': 'dedicated', 'hw:cpu_thread_policy': 'isolate', } flavor_id = self._create_flavor(vcpu=2, extra_spec=extra_spec) expected_usage = {'DISK_GB': 20, 'MEMORY_MB': 2048, 'VCPU': 2} self._run_build_test(flavor_id, expected_usage=expected_usage) # verify that we have consumed two cores plus the thread sibling of # each core, totalling four cores since the HostInfo indicates each # core should have two threads ctxt = nova_context.get_admin_context() host_numa = objects.NUMATopology.obj_from_db_obj( objects.ComputeNode.get_by_nodename(ctxt, 'compute1').numa_topology ) self.assertEqual({0, 1, 2, 3}, host_numa.cells[0].pinned_cpus) def test_create_server_with_legacy_pinning_policy_fails(self): """Create a pinned instance on a host with no PCPUs. This should fail because we're translating the extra spec and the host isn't reporting the PCPUs we need. 
""" self.flags(cpu_shared_set='0-9', cpu_dedicated_set=None, group='compute') self.flags(vcpu_pin_set=None) host_info = fakelibvirt.HostInfo(cpu_nodes=1, cpu_sockets=1, cpu_cores=5, cpu_threads=2, kB_mem=15740000) fake_connection = self._get_connection(host_info=host_info) self.mock_conn.return_value = fake_connection extra_spec = { 'hw:cpu_policy': 'dedicated', 'hw:cpu_thread_policy': 'prefer', } flavor_id = self._create_flavor(vcpu=5, extra_spec=extra_spec) self._run_build_test(flavor_id, end_status='ERROR') def test_create_server_with_legacy_pinning_policy_quota_fails(self): """Create a pinned instance on a host with PCPUs but not enough quota. This should fail because the quota request should fail. """ self.flags(cpu_dedicated_set='0-7', cpu_shared_set=None, group='compute') self.flags(vcpu_pin_set=None) host_info = fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=15740000) fake_connection = self._get_connection(host_info=host_info) self.mock_conn.return_value = fake_connection extra_spec = { 'hw:cpu_policy': 'dedicated', 'hw:cpu_thread_policy': 'prefer', } flavor_id = self._create_flavor(vcpu=2, extra_spec=extra_spec) # Update the core quota less than we requested self.api.update_quota({'cores': 1}) # NOTE(bhagyashris): Always use host as 'compute1' so that it's # possible to get resource provider information for verifying # compute usages. This host name 'compute1' is hard coded in # Connection class in fakelibvirt.py. # TODO(stephenfin): Remove the hardcoded limit, possibly overridding # 'start_service' to make sure there isn't a mismatch self.compute = self.start_service('compute', host='compute1') post = {'server': self._build_server(flavor_id=flavor_id)} ex = self.assertRaises(client.OpenStackApiException, self.api.post_server, post) self.assertEqual(403, ex.response.status_code) def test_create_server_with_isolate_thread_policy_fails(self): """Create a server with the legacy 'hw:cpu_thread_policy=isolate' extra spec. This should fail on a host with hyperthreading. """ self.flags( cpu_dedicated_set='0-3', cpu_shared_set='4-7', group='compute') self.flags(vcpu_pin_set=None) # host has hyperthreads, which means it should be rejected host_info = fakelibvirt.HostInfo( cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=(1024 * 1024 * 16), # GB ) fake_connection = self._get_connection(host_info=host_info) self.mock_conn.return_value = fake_connection extra_spec = { 'hw:cpu_policy': 'dedicated', 'hw:cpu_thread_policy': 'isolate', } flavor_id = self._create_flavor(vcpu=2, extra_spec=extra_spec) self._run_build_test(flavor_id, end_status='ERROR') def test_create_server_with_pcpu(self): """Create a server using an explicit 'resources:PCPU' request. This should pass and result in a guest NUMA topology with pinned CPUs. 
""" self.flags(cpu_dedicated_set='0-7', cpu_shared_set=None, group='compute') self.flags(vcpu_pin_set=None) host_info = fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=15740000) fake_connection = self._get_connection(host_info=host_info) self.mock_conn.return_value = fake_connection extra_spec = {'resources:PCPU': '2'} flavor_id = self._create_flavor(vcpu=2, extra_spec=extra_spec) expected_usage = {'DISK_GB': 20, 'MEMORY_MB': 2048, 'PCPU': 2} server = self._run_build_test(flavor_id, expected_usage=expected_usage) ctx = nova_context.get_admin_context() inst = objects.Instance.get_by_uuid(ctx, server['id']) self.assertEqual(1, len(inst.numa_topology.cells)) self.assertEqual(1, inst.numa_topology.cells[0].cpu_topology.cores) self.assertEqual(2, inst.numa_topology.cells[0].cpu_topology.threads) def test_create_server_with_pcpu_fails(self): """Create a pinned instance on a host with no PCPUs. This should fail because we're explicitly requesting PCPUs and the host isn't reporting them. """ self.flags(cpu_shared_set='0-9', cpu_dedicated_set=None, group='compute') self.flags(vcpu_pin_set=None) host_info = fakelibvirt.HostInfo(cpu_nodes=1, cpu_sockets=1, cpu_cores=5, cpu_threads=2, kB_mem=15740000) fake_connection = self._get_connection(host_info=host_info) self.mock_conn.return_value = fake_connection extra_spec = {'resources:PCPU': 2} flavor_id = self._create_flavor(vcpu=2, extra_spec=extra_spec) self._run_build_test(flavor_id, end_status='ERROR', filter_called_on_error=False) def test_create_server_with_pcpu_quota_fails(self): """Create a pinned instance on a host with PCPUs but not enough quota. This should fail because the quota request should fail. """ self.flags(cpu_dedicated_set='0-7', cpu_shared_set=None, group='compute') self.flags(vcpu_pin_set=None) host_info = fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=15740000) fake_connection = self._get_connection(host_info=host_info) self.mock_conn.return_value = fake_connection extra_spec = {'resources:PCPU': '2'} flavor_id = self._create_flavor(vcpu=2, extra_spec=extra_spec) # Update the core quota less than we requested self.api.update_quota({'cores': 1}) # NOTE(bhagyashris): Always use host as 'compute1' so that it's # possible to get resource provider information for verifying # compute usages. This host name 'compute1' is hard coded in # Connection class in fakelibvirt.py. 
# TODO(stephenfin): Remove the hardcoded limit, possibly overridding # 'start_service' to make sure there isn't a mismatch self.compute = self.start_service('compute', host='compute1') post = {'server': self._build_server(flavor_id=flavor_id)} ex = self.assertRaises(client.OpenStackApiException, self.api.post_server, post) self.assertEqual(403, ex.response.status_code) def _inspect_filter_numa_topology(self, cell_count): """Helper function used by test_resize_server_with_numa* tests.""" args, kwargs = self.mock_filter.call_args_list[0] self.assertEqual(2, len(args)) self.assertEqual({}, kwargs) numa_topology = args[1].numa_topology self.assertEqual(cell_count, len(numa_topology.cells), args) # We always reset mock_filter because we don't want these result # fudging later tests self.mock_filter.reset_mock() self.assertEqual(0, len(self.mock_filter.call_args_list)) def _inspect_request_spec(self, server, cell_count): """Helper function used by test_resize_server_with_numa* tests.""" req_spec = objects.RequestSpec.get_by_instance_uuid( self.ctxt, server['id']) self.assertEqual(cell_count, len(req_spec.numa_topology.cells)) def test_resize_revert_server_with_numa(self): """Create a single-node instance and resize it to a flavor with two nodes, then revert to the old flavor rather than confirm. Nothing too complicated going on here. We create an instance with a one NUMA node guest topology and then attempt to resize this to use a topology with two nodes. Once done, we revert this resize to ensure the instance reverts to using the old NUMA topology as expected. """ # don't bother waiting for neutron events since we don't actually have # neutron self.flags(vif_plugging_timeout=0) host_info = fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=15740000) # Start services self.computes = {} for host in ['test_compute0', 'test_compute1']: fake_connection = self._get_connection( host_info=host_info, hostname=host) # This is fun. Firstly we need to do a global'ish mock so we can # actually start the service. 
with mock.patch('nova.virt.libvirt.host.Host.get_connection', return_value=fake_connection): compute = self.start_service('compute', host=host) # Once that's done, we need to do some tweaks to each individual # compute "service" to make sure they return unique objects compute.driver._host.get_connection = lambda: fake_connection self.computes[host] = compute # STEP ONE # Create server extra_spec = {'hw:numa_nodes': '1'} flavor_a_id = self._create_flavor(vcpu=4, extra_spec=extra_spec) server = self._create_server(flavor_id=flavor_a_id) # Ensure the filter saw the 'numa_topology' field and the request spec # is as expected self._inspect_filter_numa_topology(cell_count=1) self._inspect_request_spec(server, cell_count=1) # STEP TWO # Create a new flavor with a different but still valid number of NUMA # nodes extra_spec = {'hw:numa_nodes': '2'} flavor_b_id = self._create_flavor(vcpu=4, extra_spec=extra_spec) # TODO(stephenfin): The mock of 'migrate_disk_and_power_off' should # probably be less...dumb with mock.patch('nova.virt.libvirt.driver.LibvirtDriver' '.migrate_disk_and_power_off', return_value='{}'): post = {'resize': {'flavorRef': flavor_b_id}} self.api.post_server_action(server['id'], post) server = self._wait_for_state_change(server, 'VERIFY_RESIZE') # Ensure the filter saw 'hw:numa_nodes=2' from flavor_b and the request # spec has been updated self._inspect_filter_numa_topology(cell_count=2) self._inspect_request_spec(server, cell_count=2) # STEP THREE # Revert the instance rather than confirming it, and ensure we see the # old NUMA topology # TODO(stephenfin): The mock of 'migrate_disk_and_power_off' should # probably be less...dumb with mock.patch('nova.virt.libvirt.driver.LibvirtDriver' '.migrate_disk_and_power_off', return_value='{}'): post = {'revertResize': {}} self.api.post_server_action(server['id'], post) server = self._wait_for_state_change(server, 'ACTIVE') # We don't have a filter call to check, but we can check that the # request spec changes were reverted self._inspect_request_spec(server, cell_count=1) def test_resize_vcpu_to_pcpu(self): """Create an unpinned instance and resize it to a flavor with pinning. This should pass and result in a guest NUMA topology with pinned CPUs. 
""" self.flags(cpu_dedicated_set='0-3', cpu_shared_set='4-7', group='compute') self.flags(vcpu_pin_set=None) # Start services self.start_computes(save_rp_uuids=True) # Create server flavor_a_id = self._create_flavor(extra_spec={}) server = self._create_server(flavor_id=flavor_a_id) original_host = server['OS-EXT-SRV-ATTR:host'] for host, compute_rp_uuid in self.compute_rp_uuids.items(): if host == original_host: # the host with the instance expected_usage = {'VCPU': 2, 'PCPU': 0, 'DISK_GB': 20, 'MEMORY_MB': 2048} else: # the other host expected_usage = {'VCPU': 0, 'PCPU': 0, 'DISK_GB': 0, 'MEMORY_MB': 0} compute_usage = self.placement_api.get( '/resource_providers/%s/usages' % compute_rp_uuid).body[ 'usages'] self.assertEqual(expected_usage, compute_usage) # We reset mock_filter because we want to ensure it's called as part of # the *migration* self.mock_filter.reset_mock() self.assertEqual(0, len(self.mock_filter.call_args_list)) extra_spec = {'hw:cpu_policy': 'dedicated'} flavor_b_id = self._create_flavor(extra_spec=extra_spec) # TODO(stephenfin): The mock of 'migrate_disk_and_power_off' should # probably be less...dumb with mock.patch('nova.virt.libvirt.driver.LibvirtDriver' '.migrate_disk_and_power_off', return_value='{}'): post = {'resize': {'flavorRef': flavor_b_id}} self.api.post_server_action(server['id'], post) server = self._wait_for_state_change(server, 'VERIFY_RESIZE') new_host = server['OS-EXT-SRV-ATTR:host'] self.assertNotEqual(original_host, new_host) # We don't confirm the resize yet as we expect this to have landed and # all we want to know is whether the filter was correct and the # resource usage has been updated for host, compute_rp_uuid in self.compute_rp_uuids.items(): if host == original_host: # the host that had the instance should still have allocations # since the resize hasn't been confirmed expected_usage = {'VCPU': 2, 'PCPU': 0, 'DISK_GB': 20, 'MEMORY_MB': 2048} else: # the other host should have the new allocations replete with # PCPUs expected_usage = {'VCPU': 0, 'PCPU': 2, 'DISK_GB': 20, 'MEMORY_MB': 2048} compute_usage = self.placement_api.get( '/resource_providers/%s/usages' % compute_rp_uuid).body[ 'usages'] self.assertEqual(expected_usage, compute_usage) self.assertEqual(1, len(self.mock_filter.call_args_list)) args, kwargs = self.mock_filter.call_args_list[0] self.assertEqual(2, len(args)) self.assertEqual({}, kwargs) # Now confirm the resize and ensure our inventories update post = {'confirmResize': None} self.api.post_server_action(server['id'], post) server = self._wait_for_state_change(server, 'ACTIVE') for host, compute_rp_uuid in self.compute_rp_uuids.items(): if host == original_host: # the host that had the instance should no longer have # alocations since the resize has been confirmed expected_usage = {'VCPU': 0, 'PCPU': 0, 'DISK_GB': 0, 'MEMORY_MB': 0} else: # the other host should still have the new allocations replete # with PCPUs expected_usage = {'VCPU': 0, 'PCPU': 2, 'DISK_GB': 20, 'MEMORY_MB': 2048} compute_usage = self.placement_api.get( '/resource_providers/%s/usages' % compute_rp_uuid).body[ 'usages'] self.assertEqual(expected_usage, compute_usage) def test_resize_bug_1879878(self): """Resize a instance with a NUMA topology when confirm takes time. Bug 1879878 describes a race between the periodic tasks of the resource tracker and the libvirt virt driver. The virt driver expects to be the one doing the unpinning of instances, however, the resource tracker is stepping on the virt driver's toes. 
""" self.flags( cpu_dedicated_set='0-3', cpu_shared_set='4-7', group='compute') self.flags(vcpu_pin_set=None) orig_confirm = nova.virt.libvirt.driver.LibvirtDriver.confirm_migration def fake_confirm_migration(*args, **kwargs): # run periodics before finally running the confirm_resize routine, # simulating a race between the resource tracker and the virt # driver self._run_periodics() # then inspect the ComputeNode objects for our two hosts src_numa_topology = objects.NUMATopology.obj_from_db_obj( objects.ComputeNode.get_by_nodename( self.ctxt, src_host, ).numa_topology, ) dst_numa_topology = objects.NUMATopology.obj_from_db_obj( objects.ComputeNode.get_by_nodename( self.ctxt, dst_host, ).numa_topology, ) self.assertEqual(2, len(src_numa_topology.cells[0].pinned_cpus)) self.assertEqual(2, len(dst_numa_topology.cells[0].pinned_cpus)) # before continuing with the actualy confirm process return orig_confirm(*args, **kwargs) self.stub_out( 'nova.virt.libvirt.driver.LibvirtDriver.confirm_migration', fake_confirm_migration, ) # start services self.start_computes(save_rp_uuids=True) # create server flavor_a_id = self._create_flavor( vcpu=2, extra_spec={'hw:cpu_policy': 'dedicated'}) server = self._create_server(flavor_id=flavor_a_id) src_host = server['OS-EXT-SRV-ATTR:host'] # we don't really care what the new flavor is, so long as the old # flavor is using pinning. We use a similar flavor for simplicity. flavor_b_id = self._create_flavor( vcpu=2, extra_spec={'hw:cpu_policy': 'dedicated'}) # TODO(stephenfin): The mock of 'migrate_disk_and_power_off' should # probably be less...dumb with mock.patch( 'nova.virt.libvirt.driver.LibvirtDriver' '.migrate_disk_and_power_off', return_value='{}', ): # TODO(stephenfin): Replace with a helper post = {'resize': {'flavorRef': flavor_b_id}} self.api.post_server_action(server['id'], post) server = self._wait_for_state_change(server, 'VERIFY_RESIZE') dst_host = server['OS-EXT-SRV-ATTR:host'] # Now confirm the resize post = {'confirmResize': None} self.api.post_server_action(server['id'], post) server = self._wait_for_state_change(server, 'ACTIVE') class NUMAServerTestWithCountingQuotaFromPlacement(NUMAServersTest): def setUp(self): self.flags(count_usage_from_placement=True, group='quota') super(NUMAServersTest, self).setUp() class ReshapeForPCPUsTest(NUMAServersTestBase): api_major_version = 'v2.1' # TODO(stephenfin): We're using this because we want to be able to force # the host during scheduling. We should instead look at overriding policy ADMIN_API = True def test_vcpu_to_pcpu_reshape(self): """Verify that VCPU to PCPU reshape works. This rather complex test checks that everything is wired up properly by the reshape operation. 
1) create two pinned servers with an old tree where the compute provider is reporting VCPUs and the servers are consuming the same 2) start a migration of one of these servers to another host but don't confirm it 3) trigger a reshape 4) check that the allocations of both the servers and the migration record on the host are updated 5) create another server now against the new tree """ # we need to use the 'host' parameter when creating servers self.api.microversion = '2.74' # we need to configure the legacy 'vcpu_pin_set' config option, rather # than the new ones, to ensure the reshape doesn't happen yet self.flags(cpu_dedicated_set=None, cpu_shared_set=None, group='compute') self.flags(vcpu_pin_set='0-7') host_info = fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=15740000) # Start services self.start_computes(save_rp_uuids=True) # ensure there is no PCPU inventory being reported for host, compute_rp_uuid in self.compute_rp_uuids.items(): compute_inventory = self.placement_api.get( '/resource_providers/%s/inventories' % compute_rp_uuid).body[ 'inventories'] self.assertEqual(8, compute_inventory['VCPU']['total']) self.assertNotIn('PCPU', compute_inventory) # now we boot two servers with pinning, which should boot even without # PCPUs since we're not doing the translation yet extra_spec = {'hw:cpu_policy': 'dedicated'} flavor_id = self._create_flavor(extra_spec=extra_spec) server_req = self._build_server(flavor_id=flavor_id) server_req['host'] = 'test_compute0' server_req['networks'] = 'auto' created_server1 = self.api.post_server({'server': server_req}) server1 = self._wait_for_state_change(created_server1, 'ACTIVE') created_server2 = self.api.post_server({'server': server_req}) server2 = self._wait_for_state_change(created_server2, 'ACTIVE') # sanity check usages compute_rp_uuid = self.compute_rp_uuids['test_compute0'] compute_inventory = self.placement_api.get( '/resource_providers/%s/inventories' % compute_rp_uuid).body[ 'inventories'] compute_usages = self.placement_api.get( '/resource_providers/%s/usages' % compute_rp_uuid).body[ 'usages'] self.assertEqual(4, compute_usages['VCPU']) compute_rp_uuid = self.compute_rp_uuids['test_compute1'] compute_inventory = self.placement_api.get( '/resource_providers/%s/inventories' % compute_rp_uuid).body[ 'inventories'] compute_usages = self.placement_api.get( '/resource_providers/%s/usages' % compute_rp_uuid).body[ 'usages'] self.assertEqual(0, compute_usages['VCPU']) # now initiate the migration process for one of the servers with mock.patch('nova.virt.libvirt.driver.LibvirtDriver' '.migrate_disk_and_power_off', return_value='{}'): post = {'migrate': None} self.api.post_server_action(server2['id'], post) server2 = self._wait_for_state_change(server2, 'VERIFY_RESIZE') # verify that the inventory, usages and allocation are correct before # the reshape. 
Note that the value of 8 VCPUs is derived from # fakelibvirt.HostInfo with our overridden values # first, check 'test_compute0', which should have the allocations for # server1 (the one that hasn't been migrated) and for the migration # record of server2 (the one that has been migrated) compute_rp_uuid = self.compute_rp_uuids['test_compute0'] compute_inventory = self.placement_api.get( '/resource_providers/%s/inventories' % compute_rp_uuid).body[ 'inventories'] self.assertEqual(8, compute_inventory['VCPU']['total']) self.assertNotIn('PCPU', compute_inventory) compute_usages = self.placement_api.get( '/resource_providers/%s/usages' % compute_rp_uuid).body[ 'usages'] self.assertEqual(4, compute_usages['VCPU']) self.assertNotIn('PCPU', compute_usages) allocations = self.placement_api.get( '/allocations/%s' % server1['id']).body['allocations'] # the flavor has disk=10 and ephemeral=10 self.assertEqual( {'DISK_GB': 20, 'MEMORY_MB': 2048, 'VCPU': 2}, allocations[compute_rp_uuid]['resources']) # then check 'test_compute1', which should have the allocations for # server2 (the one that has been migrated) compute_rp_uuid = self.compute_rp_uuids['test_compute1'] compute_inventory = self.placement_api.get( '/resource_providers/%s/inventories' % compute_rp_uuid).body[ 'inventories'] self.assertEqual(8, compute_inventory['VCPU']['total']) self.assertNotIn('PCPU', compute_inventory) compute_usages = self.placement_api.get( '/resource_providers/%s/usages' % compute_rp_uuid).body[ 'usages'] self.assertEqual(2, compute_usages['VCPU']) self.assertNotIn('PCPU', compute_usages) allocations = self.placement_api.get( '/allocations/%s' % server2['id']).body['allocations'] # the flavor has disk=10 and ephemeral=10 self.assertEqual( {'DISK_GB': 20, 'MEMORY_MB': 2048, 'VCPU': 2}, allocations[compute_rp_uuid]['resources']) # set the new config options on the compute services and restart them, # meaning the compute services will now report PCPUs and reshape # existing inventory to use them self.flags(cpu_dedicated_set='0-7', group='compute') self.flags(vcpu_pin_set=None) for host in ['test_compute0', 'test_compute1']: self.computes[host].stop() fake_connection = self._get_connection( host_info=host_info, hostname=host) # This is fun. Firstly we need to do a global'ish mock so we can # actually start the service. 
with mock.patch('nova.virt.libvirt.host.Host.get_connection', return_value=fake_connection): compute = self.start_service('compute', host=host) # Once that's done, we need to do some tweaks to each individual # compute "service" to make sure they return unique objects compute.driver._host.get_connection = lambda: fake_connection self.computes[host] = compute # verify that the inventory, usages and allocation are correct after # the reshape # first, check 'test_compute0', which should have the allocations for # server1 (the one that hasn't been migrated) and for the migration # record of server2 (the one that has been migrated) compute_rp_uuid = self.compute_rp_uuids['test_compute0'] compute_inventory = self.placement_api.get( '/resource_providers/%s/inventories' % compute_rp_uuid).body[ 'inventories'] self.assertEqual(8, compute_inventory['PCPU']['total']) self.assertNotIn('VCPU', compute_inventory) compute_usages = self.placement_api.get( '/resource_providers/%s/usages' % compute_rp_uuid).body[ 'usages'] self.assertEqual(4, compute_usages['PCPU']) self.assertNotIn('VCPU', compute_usages) allocations = self.placement_api.get( '/allocations/%s' % server1['id']).body['allocations'] # the flavor has disk=10 and ephemeral=10 self.assertEqual( {'DISK_GB': 20, 'MEMORY_MB': 2048, 'PCPU': 2}, allocations[compute_rp_uuid]['resources']) # then check 'test_compute1', which should have the allocations for # server2 (the one that has been migrated) compute_rp_uuid = self.compute_rp_uuids['test_compute1'] compute_inventory = self.placement_api.get( '/resource_providers/%s/inventories' % compute_rp_uuid).body[ 'inventories'] self.assertEqual(8, compute_inventory['PCPU']['total']) self.assertNotIn('VCPU', compute_inventory) compute_usages = self.placement_api.get( '/resource_providers/%s/usages' % compute_rp_uuid).body[ 'usages'] self.assertEqual(2, compute_usages['PCPU']) self.assertNotIn('VCPU', compute_usages) allocations = self.placement_api.get( '/allocations/%s' % server2['id']).body['allocations'] # the flavor has disk=10 and ephemeral=10 self.assertEqual( {'DISK_GB': 20, 'MEMORY_MB': 2048, 'PCPU': 2}, allocations[compute_rp_uuid]['resources']) # now create one more instance with pinned instances against the # reshaped tree which should result in PCPU allocations created_server = self.api.post_server({'server': server_req}) server3 = self._wait_for_state_change(created_server, 'ACTIVE') compute_rp_uuid = self.compute_rp_uuids['test_compute0'] compute_inventory = self.placement_api.get( '/resource_providers/%s/inventories' % compute_rp_uuid).body[ 'inventories'] self.assertEqual(8, compute_inventory['PCPU']['total']) self.assertNotIn('VCPU', compute_inventory) compute_usages = self.placement_api.get( '/resource_providers/%s/usages' % compute_rp_uuid).body[ 'usages'] self.assertEqual(6, compute_usages['PCPU']) self.assertNotIn('VCPU', compute_usages) # check the allocations for this server specifically allocations = self.placement_api.get( '/allocations/%s' % server3['id']).body[ 'allocations'] self.assertEqual( {'DISK_GB': 20, 'MEMORY_MB': 2048, 'PCPU': 2}, allocations[compute_rp_uuid]['resources']) self._delete_server(server1) self._delete_server(server2) self._delete_server(server3) class NUMAServersWithNetworksTest(NUMAServersTestBase): def setUp(self): # We need to enable neutron in this one self.flags(physnets=['foo', 'bar'], group='neutron') neutron_conf.register_dynamic_opts(CONF) self.flags(numa_nodes=[1], group='neutron_physnet_foo') self.flags(numa_nodes=[0], group='neutron_physnet_bar') 
self.flags(numa_nodes=[0, 1], group='neutron_tunnel') super(NUMAServersWithNetworksTest, self).setUp() # The ultimate base class _IntegratedTestBase uses NeutronFixture but # we need a bit more intelligent neutron for these tests. Applying the # new fixture here means that we re-stub what the previous neutron # fixture already stubbed. self.neutron = self.useFixture(base.LibvirtNeutronFixture(self)) def _test_create_server_with_networks(self, flavor_id, networks, end_status='ACTIVE'): host_info = fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=15740000) fake_connection = self._get_connection(host_info=host_info) self.mock_conn.return_value = fake_connection self.compute = self.start_service('compute', host='test_compute0') # Create server return self._create_server( flavor_id=flavor_id, networks=networks, expected_state=end_status) def test_create_server_with_single_physnet(self): extra_spec = {'hw:numa_nodes': '1'} flavor_id = self._create_flavor(extra_spec=extra_spec) networks = [ {'uuid': base.LibvirtNeutronFixture.network_1['id']}, ] self._test_create_server_with_networks(flavor_id, networks) self.assertTrue(self.mock_filter.called) def test_create_server_with_multiple_physnets(self): """Test multiple networks split across host NUMA nodes. This should pass because the networks requested are split across multiple host NUMA nodes but the guest explicitly allows multiple NUMA nodes. """ extra_spec = {'hw:numa_nodes': '2'} flavor_id = self._create_flavor(extra_spec=extra_spec) networks = [ {'uuid': base.LibvirtNeutronFixture.network_1['id']}, {'uuid': base.LibvirtNeutronFixture.network_2['id']}, ] self._test_create_server_with_networks(flavor_id, networks) self.assertTrue(self.mock_filter.called) def test_create_server_with_multiple_physnets_fail(self): """Test multiple networks split across host NUMA nodes. This should fail because we've requested a single-node instance but the networks requested are split across multiple host NUMA nodes. """ extra_spec = {'hw:numa_nodes': '1'} flavor_id = self._create_flavor(extra_spec=extra_spec) networks = [ {'uuid': base.LibvirtNeutronFixture.network_1['id']}, {'uuid': base.LibvirtNeutronFixture.network_2['id']}, ] self._test_create_server_with_networks(flavor_id, networks, end_status='ERROR') self.assertTrue(self.mock_filter.called) def test_create_server_with_physnet_and_tunneled_net(self): """Test combination of physnet and tunneled network. This should pass because we've requested a single-node instance and the requested networks share at least one NUMA node. """ extra_spec = {'hw:numa_nodes': '1'} flavor_id = self._create_flavor(extra_spec=extra_spec) networks = [ {'uuid': base.LibvirtNeutronFixture.network_1['id']}, {'uuid': base.LibvirtNeutronFixture.network_3['id']}, ] self._test_create_server_with_networks(flavor_id, networks) self.assertTrue(self.mock_filter.called) # FIXME(sean-k-mooney): The logic of this test is incorrect. # The test was written to assert that we failed to rebuild # because the NUMA constraints were violated due to the attachment # of an interface from a second host NUMA node to an instance with # a NUMA topology of 1 that is affined to a different NUMA node. # Nova should reject the interface attachment if the NUMA constraints # would be violated and it should fail at that point not when the # instance is rebuilt. This is a latent bug which will be addressed # in a separate patch. 
@testtools.skip("bug 1855332") def test_attach_interface_with_network_affinity_violation(self): extra_spec = {'hw:numa_nodes': '1'} flavor_id = self._create_flavor(extra_spec=extra_spec) networks = [ {'uuid': base.LibvirtNeutronFixture.network_1['id']}, ] server = self._test_create_server_with_networks(flavor_id, networks) # attach an interface from the **same** network post = { 'interfaceAttachment': { 'net_id': base.LibvirtNeutronFixture.network_1['id'], } } self.api.attach_interface(server['id'], post) post = {'rebuild': { 'imageRef': '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6', }} # This should succeed since we haven't changed the NUMA affinity # requirements self.api.post_server_action(server['id'], post) self._wait_for_state_change(server, 'ACTIVE') # attach an interface from a **different** network post = { 'interfaceAttachment': { 'net_id': base.LibvirtNeutronFixture.network_2['id'], } } # FIXME(sean-k-mooney): This should raise an exception as this # interface attachment would violate the NUMA constraints. self.api.attach_interface(server['id'], post) post = {'rebuild': { 'imageRef': 'a2459075-d96c-40d5-893e-577ff92e721c', }} # NOTE(sean-k-mooney): the rest of the test is incorrect but # is left to show the currently broken behavior. # Now this should fail because we've violated the NUMA requirements # with the latest attachment ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_action, server['id'], post) # NOTE(danms): This wouldn't happen in a real deployment since rebuild # is a cast, but since we are using CastAsCall this will bubble to the # API. self.assertEqual(500, ex.response.status_code) self.assertIn('NoValidHost', six.text_type(ex)) def test_cold_migrate_with_physnet(self): # Start services self.start_computes(save_rp_uuids=True) # Create server extra_spec = {'hw:numa_nodes': '1'} flavor_id = self._create_flavor(extra_spec=extra_spec) networks = [ {'uuid': base.LibvirtNeutronFixture.network_1['id']}, ] server = self._create_server( flavor_id=flavor_id, networks=networks) original_host = server['OS-EXT-SRV-ATTR:host'] # We reset mock_filter because we want to ensure it's called as part of # the *migration* self.mock_filter.reset_mock() self.assertEqual(0, len(self.mock_filter.call_args_list)) # TODO(stephenfin): The mock of 'migrate_disk_and_power_off' should # probably be less...dumb with mock.patch('nova.virt.libvirt.driver.LibvirtDriver' '.migrate_disk_and_power_off', return_value='{}'): self.api.post_server_action(server['id'], {'migrate': None}) server = self._wait_for_state_change(server, 'VERIFY_RESIZE') # We don't bother confirming the resize as we expect this to have # landed and all we want to know is whether the filter was correct self.assertNotEqual(original_host, server['OS-EXT-SRV-ATTR:host']) self.assertEqual(1, len(self.mock_filter.call_args_list)) args, kwargs = self.mock_filter.call_args_list[0] self.assertEqual(2, len(args)) self.assertEqual({}, kwargs) network_metadata = args[1].network_metadata self.assertIsNotNone(network_metadata) self.assertEqual(set(['foo']), network_metadata.physnets) class NUMAServersRebuildTests(NUMAServersTestBase): def setUp(self): super().setUp() images = self.api.get_images() # save references to first two images for server create and rebuild self.image_ref_0 = images[0]['id'] self.image_ref_1 = images[1]['id'] fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) def _create_active_server(self, server_args=None): basic_server = { 'flavorRef': 1, 'name': 'numa_server', 'networks': [{ 
'uuid': nova_fixtures.NeutronFixture.network_1['id'] }], 'imageRef': self.image_ref_0 } if server_args: basic_server.update(server_args) server = self.api.post_server({'server': basic_server}) return self._wait_for_state_change(server, 'ACTIVE') def _rebuild_server(self, active_server, image_ref): args = {"rebuild": {"imageRef": image_ref}} self.api.api_post( 'servers/%s/action' % active_server['id'], args) fake_notifier.wait_for_versioned_notifications('instance.rebuild.end') return self._wait_for_state_change(active_server, 'ACTIVE') def test_rebuild_server_with_numa(self): """Create a NUMA instance and ensure it can be rebuilt. """ # Create a flavor consuming 2 pinned cpus with an implicit # numa topology of 1 virtual numa node. extra_spec = {'hw:cpu_policy': 'dedicated'} flavor_id = self._create_flavor(extra_spec=extra_spec) # Create a host with 4 physical cpus to allow rebuild leveraging # the free space to ensure the numa topology filter does not # eliminate the host. host_info = fakelibvirt.HostInfo(cpu_nodes=1, cpu_sockets=1, cpu_cores=4, kB_mem=15740000) fake_connection = self._get_connection(host_info=host_info) self.mock_conn.return_value = fake_connection self.compute = self.start_service('compute', host='compute1') server = self._create_active_server( server_args={"flavorRef": flavor_id}) # this should succeed as the NUMA topology has not changed # and we have enough resources on the host. We rebuild with # a different image to force the rebuild to query the scheduler # to validate the host. self._rebuild_server(server, self.image_ref_1) def test_rebuild_server_with_numa_inplace_fails(self): """Create a NUMA instance and ensure in place rebuild fails. """ # Create a flavor consuming 2 pinned cpus with an implicit # numa topology of 1 virtual numa node. extra_spec = {'hw:cpu_policy': 'dedicated'} flavor_id = self._create_flavor(extra_spec=extra_spec) # cpu_cores is set to 2 to ensure that we have enough space # to boot the vm but not enough space to rebuild # by doubling the resource use during scheduling. host_info = fakelibvirt.HostInfo( cpu_nodes=1, cpu_sockets=1, cpu_cores=2, kB_mem=15740000) fake_connection = self._get_connection(host_info=host_info) self.mock_conn.return_value = fake_connection self.compute = self.start_service('compute', host='compute1') server = self._create_active_server( server_args={"flavorRef": flavor_id}) # This should succeed as the numa constraints do not change. self._rebuild_server(server, self.image_ref_1) def test_rebuild_server_with_different_numa_topology_fails(self): """Create a NUMA instance and ensure inplace rebuild fails. """ # Create a flavor consuming 2 pinned cpus with an implicit # numa topology of 1 virtual numa node. extra_spec = {'hw:cpu_policy': 'dedicated'} flavor_id = self._create_flavor(extra_spec=extra_spec) host_info = fakelibvirt.HostInfo( cpu_nodes=2, cpu_sockets=1, cpu_cores=4, kB_mem=15740000) fake_connection = self._get_connection(host_info=host_info) self.mock_conn.return_value = fake_connection self.compute = self.start_service('compute', host='compute1') server = self._create_active_server( server_args={"flavorRef": flavor_id}) # The original vm had an implicit numa topology of 1 virtual numa node # so we alter the requested numa topology in image_ref_1 to request # 2 virtual numa nodes. 
ctx = nova_context.get_admin_context() image_meta = {'properties': {'hw_numa_nodes': 2}} self.fake_image_service.update(ctx, self.image_ref_1, image_meta) # NOTE(sean-k-mooney): this should fail because rebuild uses noop # claims therefore it is not allowed for the NUMA topology or resource # usage to change during a rebuild. ex = self.assertRaises( client.OpenStackApiException, self._rebuild_server, server, self.image_ref_1) self.assertEqual(400, ex.response.status_code) self.assertIn("An instance's NUMA topology cannot be changed", six.text_type(ex)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/libvirt/test_pci_sriov_servers.py0000664000175000017500000005764300000000000025706 0ustar00zuulzuul00000000000000# Copyright (C) 2016 Red Hat, Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import ddt import fixtures import mock from oslo_log import log as logging from oslo_serialization import jsonutils from nova import context from nova import objects from nova.objects import fields from nova.tests.functional.libvirt import base from nova.tests.unit.virt.libvirt import fakelibvirt LOG = logging.getLogger(__name__) class _PCIServersTestBase(base.ServersTestBase): ADDITIONAL_FILTERS = ['NUMATopologyFilter', 'PciPassthroughFilter'] def setUp(self): self.flags(passthrough_whitelist=self.PCI_PASSTHROUGH_WHITELIST, alias=self.PCI_ALIAS, group='pci') super(_PCIServersTestBase, self).setUp() self.compute_started = False # Mock the 'PciPassthroughFilter' filter, as most tests need to inspect # this host_manager = self.scheduler.manager.driver.host_manager pci_filter_class = host_manager.filter_cls_map['PciPassthroughFilter'] host_pass_mock = mock.Mock(wraps=pci_filter_class().host_passes) self.mock_filter = self.useFixture(fixtures.MockPatch( 'nova.scheduler.filters.pci_passthrough_filter' '.PciPassthroughFilter.host_passes', side_effect=host_pass_mock)).mock def _run_build_test(self, flavor_id, end_status='ACTIVE'): if not self.compute_started: self.compute = self.start_service('compute', host='test_compute0') self.compute_started = True # Create server good_server = self._build_server(flavor_id=flavor_id) post = {'server': good_server} created_server = self.api.post_server(post) LOG.debug("created_server: %s", created_server) self.assertTrue(created_server['id']) created_server_id = created_server['id'] # Validate that the server has been created found_server = self.api.get_server(created_server_id) self.assertEqual(created_server_id, found_server['id']) # It should also be in the all-servers list servers = self.api.get_servers() server_ids = [s['id'] for s in servers] self.assertIn(created_server_id, server_ids) # Validate that PciPassthroughFilter has been called self.assertTrue(self.mock_filter.called) found_server = self._wait_for_state_change(found_server, end_status) self.addCleanup(self._delete_server, found_server) return created_server class 
SRIOVServersTest(_PCIServersTestBase): VFS_ALIAS_NAME = 'vfs' PFS_ALIAS_NAME = 'pfs' PCI_PASSTHROUGH_WHITELIST = [jsonutils.dumps(x) for x in ( { 'vendor_id': fakelibvirt.PCI_VEND_ID, 'product_id': fakelibvirt.PF_PROD_ID, 'physical_network': 'physnet4', }, { 'vendor_id': fakelibvirt.PCI_VEND_ID, 'product_id': fakelibvirt.VF_PROD_ID, 'physical_network': 'physnet4', }, )] # PFs will be removed from pools unless they are specifically # requested, so we explicitly request them with the 'device_type' # attribute PCI_ALIAS = [jsonutils.dumps(x) for x in ( { 'vendor_id': fakelibvirt.PCI_VEND_ID, 'product_id': fakelibvirt.PF_PROD_ID, 'device_type': fields.PciDeviceType.SRIOV_PF, 'name': PFS_ALIAS_NAME, }, { 'vendor_id': fakelibvirt.PCI_VEND_ID, 'product_id': fakelibvirt.VF_PROD_ID, 'name': VFS_ALIAS_NAME, }, )] def setUp(self): super().setUp() # The ultimate base class _IntegratedTestBase uses NeutronFixture but # we need a bit more intelligent neutron for these tests. Applying the # new fixture here means that we re-stub what the previous neutron # fixture already stubbed. self.neutron = self.useFixture(base.LibvirtNeutronFixture(self)) def _disable_sriov_in_pf(self, pci_info): # Check for PF and change the capability from virt_functions # Delete all the VFs vfs_to_delete = [] for device_name, device in pci_info.devices.items(): if 'virt_functions' in device.pci_device: device.generate_xml(skip_capability=True) elif 'phys_function' in device.pci_device: vfs_to_delete.append(device_name) for device in vfs_to_delete: del pci_info.devices[device] def test_create_server_with_VF(self): host_info = fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=15740000) pci_info = fakelibvirt.HostPCIDevicesInfo() fake_connection = self._get_connection(host_info, pci_info) self.mock_conn.return_value = fake_connection # Create a flavor extra_spec = {"pci_passthrough:alias": "%s:1" % self.VFS_ALIAS_NAME} flavor_id = self._create_flavor(extra_spec=extra_spec) self._run_build_test(flavor_id) def test_create_server_with_PF(self): host_info = fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=15740000) pci_info = fakelibvirt.HostPCIDevicesInfo() fake_connection = self._get_connection(host_info, pci_info) self.mock_conn.return_value = fake_connection # Create a flavor extra_spec = {"pci_passthrough:alias": "%s:1" % self.PFS_ALIAS_NAME} flavor_id = self._create_flavor(extra_spec=extra_spec) self._run_build_test(flavor_id) def test_create_server_with_PF_no_VF(self): host_info = fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=15740000) pci_info = fakelibvirt.HostPCIDevicesInfo(num_pfs=1, num_vfs=4) fake_connection = self._get_connection(host_info, pci_info) self.mock_conn.return_value = fake_connection # Create a flavor extra_spec_pfs = {"pci_passthrough:alias": "%s:1" % self.PFS_ALIAS_NAME} extra_spec_vfs = {"pci_passthrough:alias": "%s:1" % self.VFS_ALIAS_NAME} flavor_id_pfs = self._create_flavor(extra_spec=extra_spec_pfs) flavor_id_vfs = self._create_flavor(extra_spec=extra_spec_vfs) self._run_build_test(flavor_id_pfs) self._run_build_test(flavor_id_vfs, end_status='ERROR') def test_create_server_with_VF_no_PF(self): host_info = fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=15740000) pci_info = fakelibvirt.HostPCIDevicesInfo(num_pfs=1, num_vfs=4) fake_connection = self._get_connection(host_info, pci_info) self.mock_conn.return_value = fake_connection # Create a flavor extra_spec_pfs 
= {"pci_passthrough:alias": "%s:1" % self.PFS_ALIAS_NAME} extra_spec_vfs = {"pci_passthrough:alias": "%s:1" % self.VFS_ALIAS_NAME} flavor_id_pfs = self._create_flavor(extra_spec=extra_spec_pfs) flavor_id_vfs = self._create_flavor(extra_spec=extra_spec_vfs) self._run_build_test(flavor_id_vfs) self._run_build_test(flavor_id_pfs, end_status='ERROR') def test_create_server_after_change_in_nonsriov_pf_to_sriov_pf(self): # Starts a compute with PF not configured with SRIOV capabilities # Updates the PF with SRIOV capability and restart the compute service # Then starts a VM with the sriov port. The VM should be in active # state with sriov port attached. # To emulate the device type changing, we first create a # HostPCIDevicesInfo object with PFs and VFs. Then we make a copy # and remove the VFs and the virt_function capability. This is # done to ensure the physical function product id is same in both # the versions. host_info = fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=15740000) pci_info = fakelibvirt.HostPCIDevicesInfo(num_pfs=1, num_vfs=1) pci_info_no_sriov = copy.deepcopy(pci_info) # Disable SRIOV capabilties in PF and delete the VFs self._disable_sriov_in_pf(pci_info_no_sriov) fake_connection = self._get_connection(host_info, pci_info=pci_info_no_sriov, hostname='test_compute0') self.mock_conn.return_value = fake_connection self.compute = self.start_service('compute', host='test_compute0') ctxt = context.get_admin_context() pci_devices = objects.PciDeviceList.get_by_compute_node( ctxt, objects.ComputeNode.get_by_nodename( ctxt, 'test_compute0', ).id, ) self.assertEqual(1, len(pci_devices)) self.assertEqual('type-PCI', pci_devices[0].dev_type) # Update connection with original pci info with sriov PFs fake_connection = self._get_connection(host_info, pci_info=pci_info, hostname='test_compute0') self.mock_conn.return_value = fake_connection # Restart the compute service self.restart_compute_service(self.compute) # Verify if PCI devices are of type type-PF or type-VF pci_devices = objects.PciDeviceList.get_by_compute_node( ctxt, objects.ComputeNode.get_by_nodename( ctxt, 'test_compute0', ).id, ) for pci_device in pci_devices: self.assertIn(pci_device.dev_type, ['type-PF', 'type-VF']) # create the port self.neutron.create_port({'port': self.neutron.network_4_port_1}) # create a server using the VF via neutron flavor_id = self._create_flavor() self._create_server( flavor_id=flavor_id, networks=[ {'port': base.LibvirtNeutronFixture.network_4_port_1['id']}, ], ) class GetServerDiagnosticsServerWithVfTestV21(_PCIServersTestBase): api_major_version = 'v2.1' microversion = '2.48' image_ref_parameter = 'imageRef' VFS_ALIAS_NAME = 'vfs' PCI_PASSTHROUGH_WHITELIST = [jsonutils.dumps(x) for x in ( { 'vendor_id': fakelibvirt.PCI_VEND_ID, 'product_id': fakelibvirt.VF_PROD_ID, }, )] PCI_ALIAS = [jsonutils.dumps(x) for x in ( { 'vendor_id': fakelibvirt.PCI_VEND_ID, 'product_id': fakelibvirt.VF_PROD_ID, 'name': VFS_ALIAS_NAME, }, )] def setUp(self): super(GetServerDiagnosticsServerWithVfTestV21, self).setUp() self.api.microversion = self.microversion # The ultimate base class _IntegratedTestBase uses NeutronFixture but # we need a bit more intelligent neutron for these tests. Applying the # new fixture here means that we re-stub what the previous neutron # fixture already stubbed. 
self.neutron = self.useFixture(base.LibvirtNeutronFixture(self)) def test_get_server_diagnostics_server_with_VF(self): host_info = fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=15740000) pci_info = fakelibvirt.HostPCIDevicesInfo() fake_connection = self._get_connection(host_info, pci_info) self.mock_conn.return_value = fake_connection # Create a flavor extra_spec = {"pci_passthrough:alias": "%s:1" % self.VFS_ALIAS_NAME} flavor_id = self._create_flavor(extra_spec=extra_spec) if not self.compute_started: self.compute = self.start_service('compute', host='test_compute0') self.compute_started = True # Create server good_server = self._build_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', flavor_id=flavor_id) good_server['networks'] = [ {'uuid': base.LibvirtNeutronFixture.network_1['id']}, {'uuid': base.LibvirtNeutronFixture.network_4['id']}, ] post = {'server': good_server} created_server = self.api.post_server(post) self._wait_for_state_change(created_server, 'ACTIVE') diagnostics = self.api.get_server_diagnostics(created_server['id']) self.assertEqual(base.LibvirtNeutronFixture. network_1_port_2['mac_address'], diagnostics['nic_details'][0]['mac_address']) self.assertEqual(base.LibvirtNeutronFixture. network_4_port_1['mac_address'], diagnostics['nic_details'][1]['mac_address']) self.assertIsNotNone(diagnostics['nic_details'][0]['tx_packets']) self.assertIsNone(diagnostics['nic_details'][1]['tx_packets']) class PCIServersTest(_PCIServersTestBase): ALIAS_NAME = 'a1' PCI_PASSTHROUGH_WHITELIST = [jsonutils.dumps( { 'vendor_id': fakelibvirt.PCI_VEND_ID, 'product_id': fakelibvirt.PCI_PROD_ID, } )] PCI_ALIAS = [jsonutils.dumps( { 'vendor_id': fakelibvirt.PCI_VEND_ID, 'product_id': fakelibvirt.PCI_PROD_ID, 'name': ALIAS_NAME, } )] def test_create_server_with_pci_dev_and_numa(self): """Verifies that an instance can be booted with cpu pinning and with an assigned pci device. """ self.flags(cpu_dedicated_set='0-7', group='compute') host_info = fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=15740000) pci_info = fakelibvirt.HostPCIDevicesInfo(num_pci=1, numa_node=1) fake_connection = self._get_connection(host_info, pci_info) self.mock_conn.return_value = fake_connection # create a flavor extra_spec = { 'hw:cpu_policy': 'dedicated', 'pci_passthrough:alias': '%s:1' % self.ALIAS_NAME, } flavor_id = self._create_flavor(extra_spec=extra_spec) self._run_build_test(flavor_id) def test_create_server_with_pci_dev_and_numa_fails(self): """This test ensures that it is not possible to allocated CPU and memory resources from one NUMA node and a PCI device from another. 
""" self.flags(cpu_dedicated_set='0-7', group='compute') host_info = fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=15740000) pci_info = fakelibvirt.HostPCIDevicesInfo(num_pci=1, numa_node=0) fake_connection = self._get_connection(host_info, pci_info) self.mock_conn.return_value = fake_connection # boot one instance with no PCI device to "fill up" NUMA node 0 extra_spec = { 'hw:cpu_policy': 'dedicated', } flavor_id = self._create_flavor(vcpu=4, extra_spec=extra_spec) self._run_build_test(flavor_id) # now boot one with a PCI device, which should fail to boot extra_spec['pci_passthrough:alias'] = '%s:1' % self.ALIAS_NAME flavor_id = self._create_flavor(extra_spec=extra_spec) self._run_build_test(flavor_id, end_status='ERROR') class PCIServersWithPreferredNUMATest(_PCIServersTestBase): ALIAS_NAME = 'a1' PCI_PASSTHROUGH_WHITELIST = [jsonutils.dumps( { 'vendor_id': fakelibvirt.PCI_VEND_ID, 'product_id': fakelibvirt.PCI_PROD_ID, } )] PCI_ALIAS = [jsonutils.dumps( { 'vendor_id': fakelibvirt.PCI_VEND_ID, 'product_id': fakelibvirt.PCI_PROD_ID, 'name': ALIAS_NAME, 'device_type': fields.PciDeviceType.STANDARD, 'numa_policy': fields.PCINUMAAffinityPolicy.PREFERRED, } )] end_status = 'ACTIVE' def test_create_server_with_pci_dev_and_numa(self): """Validate behavior of 'preferred' PCI NUMA policy. This test ensures that it *is* possible to allocate CPU and memory resources from one NUMA node and a PCI device from another *if* PCI NUMA policies are in use. """ self.flags(cpu_dedicated_set='0-7', group='compute') host_info = fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=15740000) pci_info = fakelibvirt.HostPCIDevicesInfo(num_pci=1, numa_node=0) fake_connection = self._get_connection(host_info, pci_info) self.mock_conn.return_value = fake_connection # boot one instance with no PCI device to "fill up" NUMA node 0 extra_spec = { 'hw:cpu_policy': 'dedicated', } flavor_id = self._create_flavor(vcpu=4, extra_spec=extra_spec) self._run_build_test(flavor_id) # now boot one with a PCI device, which should succeed thanks to the # use of the PCI policy extra_spec['pci_passthrough:alias'] = '%s:1' % self.ALIAS_NAME flavor_id = self._create_flavor(extra_spec=extra_spec) self._run_build_test(flavor_id, end_status=self.end_status) class PCIServersWithRequiredNUMATest(PCIServersWithPreferredNUMATest): ALIAS_NAME = 'a1' PCI_ALIAS = [jsonutils.dumps( { 'vendor_id': fakelibvirt.PCI_VEND_ID, 'product_id': fakelibvirt.PCI_PROD_ID, 'name': ALIAS_NAME, 'device_type': fields.PciDeviceType.STANDARD, 'numa_policy': fields.PCINUMAAffinityPolicy.REQUIRED, } )] end_status = 'ERROR' @ddt.ddt class PCIServersWithSRIOVAffinityPoliciesTest(_PCIServersTestBase): # The order of the filters is required to make the assertion that the # PciPassthroughFilter is invoked in _run_build_test pass in the # numa affinity tests otherwise the NUMATopologyFilter will eliminate # all hosts before we execute the PciPassthroughFilter. ADDITIONAL_FILTERS = ['PciPassthroughFilter', 'NUMATopologyFilter'] ALIAS_NAME = 'a1' PCI_PASSTHROUGH_WHITELIST = [jsonutils.dumps( { 'vendor_id': fakelibvirt.PCI_VEND_ID, 'product_id': fakelibvirt.PCI_PROD_ID, } )] # we set the numa_affinity policy to required to ensure strict affinity # between pci devices and the guest cpu and memory will be enforced. 
PCI_ALIAS = [jsonutils.dumps( { 'vendor_id': fakelibvirt.PCI_VEND_ID, 'product_id': fakelibvirt.PCI_PROD_ID, 'name': ALIAS_NAME, 'device_type': fields.PciDeviceType.STANDARD, 'numa_policy': fields.PCINUMAAffinityPolicy.REQUIRED, } )] # NOTE(sean-k-mooney): i could just apply the ddt decorators # to this function for the most part but i have chosen to # keep one top level function per policy to make documenting # the test cases simpler. def _test_policy(self, pci_numa_node, status, policy): host_info = fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=15740000) pci_info = fakelibvirt.HostPCIDevicesInfo( num_pci=1, numa_node=pci_numa_node) fake_connection = self._get_connection(host_info, pci_info) self.mock_conn.return_value = fake_connection # only allow cpus on numa node 1 to be used for pinning self.flags(cpu_dedicated_set='4-7', group='compute') # request cpu pinning to create a numa toplogy and allow the test to # force which numa node the vm would have to be pinned too. extra_spec = { 'hw:cpu_policy': 'dedicated', 'pci_passthrough:alias': '%s:1' % self.ALIAS_NAME, 'hw:pci_numa_affinity_policy': policy } flavor_id = self._create_flavor(extra_spec=extra_spec) self._run_build_test(flavor_id, end_status=status) @ddt.unpack # unpacks each sub-tuple e.g. *(pci_numa_node, status) # the preferred policy should always pass regardless of numa affinity @ddt.data((-1, 'ACTIVE'), (0, 'ACTIVE'), (1, 'ACTIVE')) def test_create_server_with_sriov_numa_affinity_policy_preferred( self, pci_numa_node, status): """Validate behavior of 'preferred' PCI NUMA affinity policy. This test ensures that it *is* possible to allocate CPU and memory resources from one NUMA node and a PCI device from another *if* the SR-IOV NUMA affinity policy is set to preferred. """ self._test_policy(pci_numa_node, status, 'preferred') @ddt.unpack # unpacks each sub-tuple e.g. *(pci_numa_node, status) # the legacy policy allow a PCI device to be used if it has NUMA # affinity or if no NUMA info is available so we set the NUMA # node for this device to -1 which is the sentinel value use by the # Linux kernel for a device with no NUMA affinity. @ddt.data((-1, 'ACTIVE'), (0, 'ERROR'), (1, 'ACTIVE')) def test_create_server_with_sriov_numa_affinity_policy_legacy( self, pci_numa_node, status): """Validate behavior of 'legacy' PCI NUMA affinity policy. This test ensures that it *is* possible to allocate CPU and memory resources from one NUMA node and a PCI device from another *if* the SR-IOV NUMA affinity policy is set to legacy and the device does not report NUMA information. """ self._test_policy(pci_numa_node, status, 'legacy') @ddt.unpack # unpacks each sub-tuple e.g. *(pci_numa_node, status) # The required policy requires a PCI device to both report a NUMA # and for the guest cpus and ram to be affinitized to the same # NUMA node so we create 1 pci device in the first NUMA node. @ddt.data((-1, 'ERROR'), (0, 'ERROR'), (1, 'ACTIVE')) def test_create_server_with_sriov_numa_affinity_policy_required( self, pci_numa_node, status): """Validate behavior of 'required' PCI NUMA affinity policy. This test ensures that it *is not* possible to allocate CPU and memory resources from one NUMA node and a PCI device from another *if* the SR-IOV NUMA affinity policy is set to required and the device does reports NUMA information. """ # we set the numa_affinity policy to preferred to allow the PCI device # to be selected from any numa node so we can prove the flavor # overrides the alias. 
alias = [jsonutils.dumps( { 'vendor_id': fakelibvirt.PCI_VEND_ID, 'product_id': fakelibvirt.PCI_PROD_ID, 'name': self.ALIAS_NAME, 'device_type': fields.PciDeviceType.STANDARD, 'numa_policy': fields.PCINUMAAffinityPolicy.PREFERRED, } )] self.flags(passthrough_whitelist=self.PCI_PASSTHROUGH_WHITELIST, alias=alias, group='pci') self._test_policy(pci_numa_node, status, 'required') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/libvirt/test_report_cpu_traits.py0000664000175000017500000002253400000000000025677 0ustar00zuulzuul00000000000000# Copyright (c) 2018 Intel, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import os_resource_classes as orc import os_traits as ost from nova import conf from nova.db import constants as db_const from nova import test from nova.tests.functional.libvirt import integrated_helpers from nova.tests.unit.virt.libvirt import fakelibvirt from nova.virt.libvirt.host import SEV_KERNEL_PARAM_FILE CONF = conf.CONF class LibvirtReportTraitsTestBase( integrated_helpers.LibvirtProviderUsageBaseTestCase): pass def assertMemEncryptionSlotsEqual(self, slots): inventory = self._get_provider_inventory(self.host_uuid) if slots == 0: self.assertNotIn(orc.MEM_ENCRYPTION_CONTEXT, inventory) else: self.assertEqual( inventory[orc.MEM_ENCRYPTION_CONTEXT], { 'total': slots, 'min_unit': 1, 'max_unit': 1, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 0, } ) class LibvirtReportTraitsTests(LibvirtReportTraitsTestBase): def test_report_cpu_traits(self): self.assertEqual([], self._get_all_providers()) self.start_compute() # Test CPU traits reported on initial node startup, these specific # trait values are coming from fakelibvirt's baselineCPU result. # COMPUTE_NODE is always set on the compute node provider. traits = self._get_provider_traits(self.host_uuid) for trait in ('HW_CPU_X86_VMX', 'HW_CPU_X86_AESNI', 'COMPUTE_NODE'): self.assertIn(trait, traits) self._create_trait('CUSTOM_TRAITS') new_traits = ['CUSTOM_TRAITS', 'HW_CPU_X86_AVX'] self._set_provider_traits(self.host_uuid, new_traits) # The above is an out-of-band placement operation, as if the operator # used the CLI. So now we have to "SIGHUP the compute process" to clear # the report client cache so the subsequent update picks up the change. self.compute.manager.reset() self._run_periodics() # HW_CPU_X86_AVX is filtered out because nova-compute owns CPU traits # and it's not in the baseline for the host. traits = set(self._get_provider_traits(self.host_uuid)) expected_traits = self.expected_libvirt_driver_capability_traits.union( [u'HW_CPU_X86_VMX', u'HW_CPU_X86_AESNI', u'CUSTOM_TRAITS', # The periodic restored the COMPUTE_NODE trait. 
u'COMPUTE_NODE'] ) for trait in expected_traits: self.assertIn(trait, traits) class LibvirtReportNoSevTraitsTests(LibvirtReportTraitsTestBase): STUB_INIT_HOST = False @test.patch_exists(SEV_KERNEL_PARAM_FILE, False) def setUp(self): super(LibvirtReportNoSevTraitsTests, self).setUp() self.start_compute() def test_sev_trait_off_on(self): """Test that the compute service reports the SEV trait in the list of global traits, but doesn't immediately register it on the compute host resource provider in the placement API, due to the kvm-amd kernel module's sev parameter file being (mocked as) absent. Then test that if the SEV capability appears (again via mocking), after a restart of the compute service, the trait gets registered on the compute host. Also test that on both occasions, the inventory of the MEM_ENCRYPTION_CONTEXT resource class on the compute host corresponds to the absence or presence of the SEV capability. """ self.assertFalse(self.compute.driver._host.supports_amd_sev) sev_trait = ost.HW_CPU_X86_AMD_SEV global_traits = self._get_all_traits() self.assertIn(sev_trait, global_traits) traits = self._get_provider_traits(self.host_uuid) self.assertNotIn(sev_trait, traits) self.assertMemEncryptionSlotsEqual(0) # Now simulate the host gaining SEV functionality. Here we # simulate a kernel update or reconfiguration which causes the # kvm-amd kernel module's "sev" parameter to become available # and set to 1, however it could also happen via a libvirt # upgrade, for instance. sev_features = \ fakelibvirt.virConnect._domain_capability_features_with_SEV with test.nested( self.patch_exists(SEV_KERNEL_PARAM_FILE, True), self.patch_open(SEV_KERNEL_PARAM_FILE, "1\n"), mock.patch.object(fakelibvirt.virConnect, '_domain_capability_features', new=sev_features) ) as (mock_exists, mock_open, mock_features): # Retrigger the detection code. In the real world this # would be a restart of the compute service. # As we are changing the domain caps we need to clear the # cache in the host object. self.compute.driver._host._domain_caps = None self.compute.driver._host._set_amd_sev_support() self.assertTrue(self.compute.driver._host.supports_amd_sev) mock_exists.assert_has_calls([mock.call(SEV_KERNEL_PARAM_FILE)]) mock_open.assert_has_calls([mock.call(SEV_KERNEL_PARAM_FILE)]) # However it won't disappear in the provider tree and get synced # back to placement until we force a reinventory: self.compute.manager.reset() # reset cached traits so they are recalculated. self.compute.driver._static_traits = None self._run_periodics() traits = self._get_provider_traits(self.host_uuid) self.assertIn(sev_trait, traits) # Sanity check that we've still got the trait globally. self.assertIn(sev_trait, self._get_all_traits()) self.assertMemEncryptionSlotsEqual(db_const.MAX_INT) class LibvirtReportSevTraitsTests(LibvirtReportTraitsTestBase): STUB_INIT_HOST = False @test.patch_exists(SEV_KERNEL_PARAM_FILE, True) @test.patch_open(SEV_KERNEL_PARAM_FILE, "1\n") @mock.patch.object( fakelibvirt.virConnect, '_domain_capability_features', new=fakelibvirt.virConnect._domain_capability_features_with_SEV) def setUp(self): super(LibvirtReportSevTraitsTests, self).setUp() self.flags(num_memory_encrypted_guests=16, group='libvirt') self.start_compute() def test_sev_trait_on_off(self): """Test that the compute service reports the SEV trait in the list of global traits, and immediately registers it on the compute host resource provider in the placement API, due to the SEV capability being (mocked as) present. 
Then test that if the SEV capability disappears (again via mocking), after a restart of the compute service, the trait gets removed from the compute host. Also test that on both occasions, the inventory of the MEM_ENCRYPTION_CONTEXT resource class on the compute host corresponds to the absence or presence of the SEV capability. """ self.assertTrue(self.compute.driver._host.supports_amd_sev) sev_trait = ost.HW_CPU_X86_AMD_SEV global_traits = self._get_all_traits() self.assertIn(sev_trait, global_traits) traits = self._get_provider_traits(self.host_uuid) self.assertIn(sev_trait, traits) self.assertMemEncryptionSlotsEqual(16) # Now simulate the host losing SEV functionality. Here we # simulate a kernel downgrade or reconfiguration which causes # the kvm-amd kernel module's "sev" parameter to become # unavailable, however it could also happen via a libvirt # downgrade, for instance. with self.patch_exists(SEV_KERNEL_PARAM_FILE, False) as mock_exists: # Retrigger the detection code. In the real world this # would be a restart of the compute service. self.compute.driver._host._domain_caps = None self.compute.driver._host._set_amd_sev_support() self.assertFalse(self.compute.driver._host.supports_amd_sev) mock_exists.assert_has_calls([mock.call(SEV_KERNEL_PARAM_FILE)]) # However it won't disappear in the provider tree and get synced # back to placement until we force a reinventory: self.compute.manager.reset() # reset cached traits so they are recalculated. self.compute.driver._static_traits = None self._run_periodics() traits = self._get_provider_traits(self.host_uuid) self.assertNotIn(sev_trait, traits) # Sanity check that we've still got the trait globally. self.assertIn(sev_trait, self._get_all_traits()) self.assertMemEncryptionSlotsEqual(0) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/libvirt/test_reshape.py0000664000175000017500000002621600000000000023557 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
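# A "reshape" is the operation where the libvirt driver moves existing
# inventories and allocations around in the placement provider tree; here,
# VGPU resources that used to live on the root compute node provider are
# moved to child providers, one per physical GPU. The test below builds an
# old-style tree by hand and then verifies that the servers' allocations
# survive the reshape triggered when the compute service is restarted.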
import io import mock from oslo_config import cfg from oslo_log import log as logging from nova import context from nova import objects from nova.tests.functional.libvirt import base from nova.tests.unit.virt.libvirt import fakelibvirt from nova.virt.libvirt import utils CONF = cfg.CONF LOG = logging.getLogger(__name__) class VGPUReshapeTests(base.ServersTestBase): @mock.patch('nova.virt.libvirt.LibvirtDriver._get_local_gb_info', return_value={'total': 128, 'used': 44, 'free': 84}) @mock.patch('nova.virt.libvirt.driver.libvirt_utils.is_valid_hostname', return_value=True) @mock.patch('nova.virt.libvirt.driver.libvirt_utils.file_open', side_effect=[io.BytesIO(b''), io.BytesIO(b''), io.BytesIO(b'')]) def test_create_servers_with_vgpu( self, mock_file_open, mock_valid_hostname, mock_get_fs_info): """Verify that vgpu reshape works with libvirt driver 1) create two servers with an old tree where the VGPU resource is on the compute provider 2) trigger a reshape 3) check that the allocations of the servers are still valid 4) create another server now against the new tree """ # NOTE(gibi): We cannot simply ask the virt driver to create an old # RP tree with vgpu on the root RP as that code path does not exist # any more. So we have to hack a "bit". We will create a compute # service without vgpu support to have the compute RP ready then we # manually add the VGPU resources to that RP in placement. Also we make # sure that during the instance claim the virt driver does not detect # the old tree as that would be a bad time for reshape. Later when the # compute service is restarted the driver will do the reshape. mdevs = { 'mdev_4b20d080_1b54_4048_85b3_a6a62d165c01': fakelibvirt.FakeMdevDevice( dev_name='mdev_4b20d080_1b54_4048_85b3_a6a62d165c01', type_id=fakelibvirt.NVIDIA_11_VGPU_TYPE, parent=fakelibvirt.PGPU1_PCI_ADDR), 'mdev_4b20d080_1b54_4048_85b3_a6a62d165c02': fakelibvirt.FakeMdevDevice( dev_name='mdev_4b20d080_1b54_4048_85b3_a6a62d165c02', type_id=fakelibvirt.NVIDIA_11_VGPU_TYPE, parent=fakelibvirt.PGPU2_PCI_ADDR), 'mdev_4b20d080_1b54_4048_85b3_a6a62d165c03': fakelibvirt.FakeMdevDevice( dev_name='mdev_4b20d080_1b54_4048_85b3_a6a62d165c03', type_id=fakelibvirt.NVIDIA_11_VGPU_TYPE, parent=fakelibvirt.PGPU3_PCI_ADDR), } fake_connection = self._get_connection( # We need more RAM or the 3rd server won't be created host_info=fakelibvirt.HostInfo(kB_mem=8192), mdev_info=fakelibvirt.HostMdevDevicesInfo(devices=mdevs)) self.mock_conn.return_value = fake_connection # start a compute with vgpu support disabled so the driver will # ignore the content of the above HostMdevDeviceInfo self.flags(enabled_vgpu_types='', group='devices') self.compute = self.start_service('compute', host='compute1') # create the VGPU resource in placement manually compute_rp_uuid = self.placement_api.get( '/resource_providers?name=compute1').body[ 'resource_providers'][0]['uuid'] inventories = self.placement_api.get( '/resource_providers/%s/inventories' % compute_rp_uuid).body inventories['inventories']['VGPU'] = { 'allocation_ratio': 1.0, 'max_unit': 3, 'min_unit': 1, 'reserved': 0, 'step_size': 1, 'total': 3} self.placement_api.put( '/resource_providers/%s/inventories' % compute_rp_uuid, inventories) # enabled vgpu support self.flags( enabled_vgpu_types=fakelibvirt.NVIDIA_11_VGPU_TYPE, group='devices') # We don't want to restart the compute service or it would call for # a reshape but we still want to accept some vGPU types so we call # directly the needed method self.compute.driver.supported_vgpu_types = ( 
self.compute.driver._get_supported_vgpu_types()) # now we boot two servers with vgpu extra_spec = {"resources:VGPU": 1} flavor_id = self._create_flavor(extra_spec=extra_spec) server_req = self._build_server(flavor_id=flavor_id) # NOTE(gibi): during instance_claim() there is a # driver.update_provider_tree() call that would detect the old tree and # would fail as this is not a good time to reshape. To avoid that we # temporarily mock update_provider_tree here. with mock.patch('nova.virt.libvirt.driver.LibvirtDriver.' 'update_provider_tree'): created_server1 = self.api.post_server({'server': server_req}) server1 = self._wait_for_state_change(created_server1, 'ACTIVE') created_server2 = self.api.post_server({'server': server_req}) server2 = self._wait_for_state_change(created_server2, 'ACTIVE') # Determine which device is associated with which instance # { inst.uuid: pgpu_name } inst_to_pgpu = {} ctx = context.get_admin_context() for server in (server1, server2): inst = objects.Instance.get_by_uuid(ctx, server['id']) mdevs = list( self.compute.driver._get_all_assigned_mediated_devices(inst)) self.assertEqual(1, len(mdevs)) mdev_uuid = mdevs[0] mdev_info = self.compute.driver._get_mediated_device_information( utils.mdev_uuid2name(mdev_uuid)) inst_to_pgpu[inst.uuid] = mdev_info['parent'] # The VGPUs should have come from different pGPUs self.assertNotEqual(*list(inst_to_pgpu.values())) # verify that the inventory, usages and allocation are correct before # the reshape compute_inventory = self.placement_api.get( '/resource_providers/%s/inventories' % compute_rp_uuid).body[ 'inventories'] self.assertEqual(3, compute_inventory['VGPU']['total']) compute_usages = self.placement_api.get( '/resource_providers/%s/usages' % compute_rp_uuid).body[ 'usages'] self.assertEqual(2, compute_usages['VGPU']) for server in (server1, server2): allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] # the flavor has disk=10 and ephemeral=10 self.assertEqual( {'DISK_GB': 20, 'MEMORY_MB': 2048, 'VCPU': 2, 'VGPU': 1}, allocations[compute_rp_uuid]['resources']) # restart compute which will trigger a reshape self.compute = self.restart_compute_service(self.compute) # verify that the inventory, usages and allocation are correct after # the reshape compute_inventory = self.placement_api.get( '/resource_providers/%s/inventories' % compute_rp_uuid).body[ 'inventories'] self.assertNotIn('VGPU', compute_inventory) # NOTE(sbauza): The two instances will use two different pGPUs # That said, we need to check all the pGPU inventories for knowing # which ones are used. 
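        # After the reshape the provider tree is expected to look roughly
        # like this (an illustrative sketch; the exact PCI addresses come
        # from fakelibvirt's PGPU*_PCI_ADDR constants used below):
        #
        #   compute1 (root: DISK_GB, MEMORY_MB, VCPU inventories)
        #   +-- compute1_<PGPU1_PCI_ADDR> (VGPU: 1)
        #   +-- compute1_<PGPU2_PCI_ADDR> (VGPU: 1)
        #   +-- compute1_<PGPU3_PCI_ADDR> (VGPU: 1)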
usages = {} pgpu_uuid_to_name = {} for pci_device in [fakelibvirt.PGPU1_PCI_ADDR, fakelibvirt.PGPU2_PCI_ADDR, fakelibvirt.PGPU3_PCI_ADDR]: gpu_rp_uuid = self.placement_api.get( '/resource_providers?name=compute1_%s' % pci_device).body[ 'resource_providers'][0]['uuid'] pgpu_uuid_to_name[gpu_rp_uuid] = pci_device gpu_inventory = self.placement_api.get( '/resource_providers/%s/inventories' % gpu_rp_uuid).body[ 'inventories'] self.assertEqual(1, gpu_inventory['VGPU']['total']) gpu_usages = self.placement_api.get( '/resource_providers/%s/usages' % gpu_rp_uuid).body[ 'usages'] usages[pci_device] = gpu_usages['VGPU'] # Make sure that both instances are using different pGPUs used_devices = [dev for dev, usage in usages.items() if usage == 1] avail_devices = list(set(usages.keys()) - set(used_devices)) self.assertEqual(2, len(used_devices)) # Make sure that both instances are using the correct pGPUs for server in [server1, server2]: allocations = self.placement_api.get( '/allocations/%s' % server['id']).body[ 'allocations'] self.assertEqual( {'DISK_GB': 20, 'MEMORY_MB': 2048, 'VCPU': 2}, allocations[compute_rp_uuid]['resources']) rp_uuids = list(allocations.keys()) # We only have two RPs, the compute RP (the root) and the child # pGPU RP gpu_rp_uuid = (rp_uuids[1] if rp_uuids[0] == compute_rp_uuid else rp_uuids[0]) self.assertEqual( {'VGPU': 1}, allocations[gpu_rp_uuid]['resources']) # The pGPU's RP name contains the pGPU name self.assertIn(inst_to_pgpu[server['id']], pgpu_uuid_to_name[gpu_rp_uuid]) # now create one more instance with vgpu against the reshaped tree created_server = self.api.post_server({'server': server_req}) server3 = self._wait_for_state_change(created_server, 'ACTIVE') # find the pGPU that wasn't used before we created the third instance # It should have taken the previously available pGPU device = avail_devices[0] gpu_rp_uuid = self.placement_api.get( '/resource_providers?name=compute1_%s' % device).body[ 'resource_providers'][0]['uuid'] gpu_usages = self.placement_api.get( '/resource_providers/%s/usages' % gpu_rp_uuid).body[ 'usages'] self.assertEqual(1, gpu_usages['VGPU']) allocations = self.placement_api.get( '/allocations/%s' % server3['id']).body[ 'allocations'] self.assertEqual( {'DISK_GB': 20, 'MEMORY_MB': 2048, 'VCPU': 2}, allocations[compute_rp_uuid]['resources']) self.assertEqual( {'VGPU': 1}, allocations[gpu_rp_uuid]['resources']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/libvirt/test_rt_servers.py0000664000175000017500000000475500000000000024332 0ustar00zuulzuul00000000000000# Copyright (C) 2015 Red Hat, Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
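# A real-time guest needs all three of these flavor extra specs to be
# accepted, as exercised by the tests below (illustrative summary):
#
#   hw:cpu_policy = dedicated      # pinned vCPUs are a prerequisite
#   hw:cpu_realtime = yes          # request the realtime scheduling policy
#   hw:cpu_realtime_mask = ^1      # every pinned vCPU except vCPU 1 is RT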
from nova.tests.functional.api import client from nova.tests.functional.libvirt import base from nova.tests.unit.virt.libvirt import fakelibvirt class RealTimeServersTest(base.ServersTestBase): ADDITIONAL_FILTERS = ['NUMATopologyFilter'] def setUp(self): super(RealTimeServersTest, self).setUp() self.flags(sysinfo_serial='none', group='libvirt') def test_no_dedicated_cpu(self): flavor_id = self._create_flavor(extra_spec={'hw:cpu_realtime': 'yes'}) server = self._build_server(flavor_id=flavor_id) # Cannot set realtime policy in a non dedicated cpu pinning policy self.assertRaises( client.OpenStackApiException, self.api.post_server, {'server': server}) def test_no_realtime_mask(self): flavor_id = self._create_flavor(extra_spec={ 'hw:cpu_realtime': 'yes', 'hw:cpu_policy': 'dedicated'}) server = self._build_server(flavor_id=flavor_id) # Cannot set realtime policy if not vcpus mask defined self.assertRaises( client.OpenStackApiException, self.api.post_server, {'server': server}) def test_success(self): self.flags(cpu_dedicated_set='0-7', group='compute') host_info = fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=15740000) fake_connection = self._get_connection(host_info=host_info) self.mock_conn.return_value = fake_connection self.compute = self.start_service('compute', host='test_compute0') flavor_id = self._create_flavor(extra_spec={ 'hw:cpu_realtime': 'yes', 'hw:cpu_policy': 'dedicated', 'hw:cpu_realtime_mask': '^1'}) server = self._create_server(flavor_id=flavor_id) self._delete_server(server) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/libvirt/test_shared_resource_provider.py0000664000175000017500000002113100000000000027206 0ustar00zuulzuul00000000000000# Copyright (C) 2018 NTT DATA, Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils.fixture import uuidsentinel as uuids import unittest from nova.compute import instance_actions from nova import conf from nova.tests.functional.libvirt import integrated_helpers import nova.tests.unit.image.fake CONF = conf.CONF class SharedStorageProviderUsageTestCase( integrated_helpers.LibvirtProviderUsageBaseTestCase): def setUp(self): super(SharedStorageProviderUsageTestCase, self).setUp() self.start_compute() # TODO(efried): Bug #1784020 @unittest.expectedFailure def test_shared_storage_rp_configuration_with_cn_rp(self): """Test to check whether compute node and shared storage resource provider inventory is configured properly or not. 
""" # shared storage resource provider shared_RP = self._post_resource_provider( rp_name='shared_resource_provider') # created inventory for shared storage RP inv = {"resource_class": "DISK_GB", "total": 78, "reserved": 0, "min_unit": 1, "max_unit": 78, "step_size": 1, "allocation_ratio": 1.0} self._set_inventory(shared_RP['uuid'], inv) # Added traits to shared storage resource provider self._set_provider_traits(shared_RP['uuid'], ['MISC_SHARES_VIA_AGGREGATE']) # add both cn_rp and shared_rp under one aggregate self._set_aggregate(shared_RP['uuid'], uuids.shr_disk_agg) self._set_aggregate(self.host_uuid, uuids.shr_disk_agg) self.assertIn("DISK_GB", self._get_provider_inventory(self.host_uuid)) # run update_available_resource periodic task after configuring shared # resource provider to update compute node resources self._run_periodics() # we expect that the virt driver stops reporting DISK_GB on the compute # RP as soon as a shared RP with DISK_GB is created in the compute tree self.assertNotIn("DISK_GB", self._get_provider_inventory(self.host_uuid)) # create server self._create_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', flavor_id=1, networks='none', ) # get shared_rp and cn_rp usages shared_rp_usages = self._get_provider_usages(shared_RP['uuid']) cn_rp_usages = self._get_provider_usages(self.host_uuid) # Check if DISK_GB resource is allocated from shared_RP and the # remaining resources are allocated from host_uuid. self.assertEqual({'DISK_GB': 1}, shared_rp_usages) self.assertEqual({'MEMORY_MB': 512, 'VCPU': 1}, cn_rp_usages) def create_shared_storage_rp(self): # shared storage resource provider shared_RP = self._post_resource_provider( rp_name='shared_resource_provider1') # created inventory for shared storage RP inv = {"resource_class": "DISK_GB", "total": 78, "reserved": 0, "min_unit": 1, "max_unit": 78, "step_size": 1, "allocation_ratio": 1.0} self._set_inventory(shared_RP['uuid'], inv) # Added traits to shared storage resource provider self._set_provider_traits(shared_RP['uuid'], ['MISC_SHARES_VIA_AGGREGATE', 'STORAGE_DISK_SSD']) return shared_RP['uuid'] # TODO(efried): Bug #1784020 @unittest.expectedFailure def test_rebuild_instance_with_image_traits_on_shared_rp(self): shared_rp_uuid = self.create_shared_storage_rp() # add both cn_rp and shared_rp under one aggregate self._set_aggregate(shared_rp_uuid, uuids.shr_disk_agg) self._set_aggregate(self.host_uuid, uuids.shr_disk_agg) self.assertIn("DISK_GB", self._get_provider_inventory(self.host_uuid)) # run update_available_resource periodic task after configuring shared # resource provider to update compute node resources self._run_periodics() # we expect that the virt driver stops reporting DISK_GB on the compute # RP as soon as a shared RP with DISK_GB is created in the compute tree self.assertNotIn("DISK_GB", self._get_provider_inventory(self.host_uuid)) server = self._create_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', flavor_id=1, networks='none' ) rebuild_image_ref = ( nova.tests.unit.image.fake.AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID) with nova.utils.temporary_mutation(self.api, microversion='2.35'): self.api.api_put('/images/%s/metadata' % rebuild_image_ref, {'metadata': { 'trait:STORAGE_DISK_SSD': 'required'}}) rebuild_req_body = { 'rebuild': { 'imageRef': rebuild_image_ref } } self.api.api_post('/servers/%s/action' % server['id'], rebuild_req_body) self._wait_for_server_parameter( server, {'OS-EXT-STS:task_state': None}) # get shared_rp and cn_rp usages shared_rp_usages = 
self._get_provider_usages(shared_rp_uuid) cn_rp_usages = self._get_provider_usages(self.host_uuid) # Check if DISK_GB resource is allocated from shared_RP and the # remaining resources are allocated from host_uuid. self.assertEqual({'DISK_GB': 1}, shared_rp_usages) self.assertEqual({'MEMORY_MB': 512, 'VCPU': 1}, cn_rp_usages) allocs = self._get_allocations_by_server_uuid(server['id']) self.assertIn(self.host_uuid, allocs) server = self.api.get_server(server['id']) self.assertEqual(rebuild_image_ref, server['image']['id']) # TODO(efried): Bug #1784020 @unittest.expectedFailure def test_rebuild_instance_with_image_traits_on_shared_rp_no_valid_host( self): shared_rp_uuid = self.create_shared_storage_rp() # add both cn_rp and shared_rp under one aggregate self._set_aggregate(shared_rp_uuid, uuids.shr_disk_agg) self._set_aggregate(self.host_uuid, uuids.shr_disk_agg) self.assertIn("DISK_GB", self._get_provider_inventory(self.host_uuid)) # run update_available_resource periodic task after configuring shared # resource provider to update compute node resources self._run_periodics() # we expect that the virt driver stops reporting DISK_GB on the compute # RP as soon as a shared RP with DISK_GB is created in the compute tree self.assertNotIn("DISK_GB", self._get_provider_inventory(self.host_uuid)) # create server org_image_id = '155d900f-4e14-4e4c-a73d-069cbf4541e6' server = self._create_server( image_uuid=org_image_id, flavor_id=1, networks='none', ) rebuild_image_ref = ( nova.tests.unit.image.fake.AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID) with nova.utils.temporary_mutation(self.api, microversion='2.35'): self.api.api_put('/images/%s/metadata' % rebuild_image_ref, {'metadata': { 'trait:CUSTOM_FOO': 'required'}}) rebuild_req_body = { 'rebuild': { 'imageRef': rebuild_image_ref } } self.api.api_post('/servers/%s/action' % server['id'], rebuild_req_body) # Look for the failed rebuild action. self._wait_for_action_fail_completion( server, instance_actions.REBUILD, 'rebuild_server') # Assert the server image_ref was rolled back on failure. server = self.api.get_server(server['id']) self.assertEqual(org_image_id, server['image']['id']) # The server should be in ERROR state self.assertEqual('ERROR', server['status']) self.assertIn('No valid host', server['fault']['message']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/libvirt/test_vgpu.py0000664000175000017500000003601200000000000023104 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
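# In these tests a virtual GPU (VGPU) is modelled as a libvirt mediated
# device (mdev) carved out of a physical GPU. Each physical GPU shows up in
# placement as a child resource provider of the compute node, named
# "<hostname>_<libvirt pci address>", and carries the VGPU inventory that
# the flavors below consume via a "resources:VGPU=1" extra spec.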
import fixtures import re import mock import os_resource_classes as orc from oslo_config import cfg from oslo_log import log as logging from oslo_utils import uuidutils import nova.conf from nova import context from nova import objects from nova.tests.functional.libvirt import base from nova.tests.unit.virt.libvirt import fakelibvirt from nova.virt.libvirt import driver as libvirt_driver from nova.virt.libvirt import utils as libvirt_utils CONF = cfg.CONF LOG = logging.getLogger(__name__) _DEFAULT_HOST = 'host1' class VGPUTestBase(base.ServersTestBase): FAKE_LIBVIRT_VERSION = 5000000 FAKE_QEMU_VERSION = 3001000 # Since we run all computes by a single process, we need to identify which # current compute service we use at the moment. _current_host = _DEFAULT_HOST def setUp(self): super(VGPUTestBase, self).setUp() self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.LibvirtDriver._get_local_gb_info', return_value={'total': 128, 'used': 44, 'free': 84})) self.useFixture(fixtures.MockPatch( 'nova.privsep.libvirt.create_mdev', side_effect=self._create_mdev)) # NOTE(sbauza): Since the fake create_mdev doesn't know which compute # was called, we need to look at a value that can be provided just # before the driver calls create_mdev. That's why we fake the below # method for having the LibvirtDriver instance so we could modify # the self.current_host value. orig_get_vgpu_type_per_pgpu = ( libvirt_driver.LibvirtDriver._get_vgpu_type_per_pgpu) def fake_get_vgpu_type_per_pgpu(_self, *args): # See, here we look at the hostname from the virt driver... self._current_host = _self._host.get_hostname() # ... and then we call the original method return orig_get_vgpu_type_per_pgpu(_self, *args) self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.LibvirtDriver._get_vgpu_type_per_pgpu', new=fake_get_vgpu_type_per_pgpu)) self.context = context.get_admin_context() def pci2libvirt_address(self, address): return "pci_{}_{}_{}_{}".format(*re.split("[.:]", address)) def libvirt2pci_address(self, dev_name): return "{}:{}:{}.{}".format(*dev_name[4:].split('_')) def _create_mdev(self, physical_device, mdev_type, uuid=None): # We need to fake the newly created sysfs object by adding a new # FakeMdevDevice in the existing persisted Connection object so # when asking to get the existing mdevs, we would see it. 
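        # For example, a generated uuid of
        # '4b20d080-1b54-4048-85b3-a6a62d165c01' maps (via mdev_uuid2name)
        # to the sysfs-style device name
        # 'mdev_4b20d080_1b54_4048_85b3_a6a62d165c01', which is the form
        # fakelibvirt.FakeMdevDevice expects below.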
if not uuid: uuid = uuidutils.generate_uuid() mdev_name = libvirt_utils.mdev_uuid2name(uuid) libvirt_parent = self.pci2libvirt_address(physical_device) # Here, we get the right compute thanks by the self.current_host that # was modified just before connection = self.computes[ self._current_host].driver._host.get_connection() connection.mdev_info.devices.update( {mdev_name: fakelibvirt.FakeMdevDevice(dev_name=mdev_name, type_id=mdev_type, parent=libvirt_parent)}) return uuid def _start_compute_service(self, hostname): fake_connection = self._get_connection( host_info=fakelibvirt.HostInfo(cpu_nodes=2, kB_mem=8192), # We want to create two pGPUs but no other PCI devices pci_info=fakelibvirt.HostPCIDevicesInfo(num_pci=0, num_pfs=0, num_vfs=0, num_mdevcap=2), hostname=hostname) with mock.patch('nova.virt.libvirt.host.Host.get_connection', return_value=fake_connection): # this method will update a self.computes dict keyed by hostname compute = self._start_compute(hostname) compute.driver._host.get_connection = lambda: fake_connection rp_uuid = self._get_provider_uuid_by_name(hostname) rp_uuids = self._get_all_rp_uuids_in_a_tree(rp_uuid) for rp in rp_uuids: inventory = self._get_provider_inventory(rp) if orc.VGPU in inventory: usage = self._get_provider_usages(rp) self.assertEqual(16, inventory[orc.VGPU]['total']) self.assertEqual(0, usage[orc.VGPU]) # Since we haven't created any mdevs yet, we shouldn't find them self.assertEqual([], compute.driver._get_mediated_devices()) return compute class VGPUTests(VGPUTestBase): # We want to target some hosts for some created instances api_major_version = 'v2.1' ADMIN_API = True microversion = 'latest' def setUp(self): super(VGPUTests, self).setUp() extra_spec = {"resources:VGPU": "1"} self.flavor = self._create_flavor(extra_spec=extra_spec) # Start compute1 supporting only nvidia-11 self.flags( enabled_vgpu_types=fakelibvirt.NVIDIA_11_VGPU_TYPE, group='devices') # for the sake of resizing, we need to patch the two methods below self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.LibvirtDriver._get_instance_disk_info', return_value=[])) self.useFixture(fixtures.MockPatch('os.rename')) self.compute1 = self._start_compute_service(_DEFAULT_HOST) def assert_vgpu_usage_for_compute(self, compute, expected): total_usage = 0 # We only want to get mdevs that are assigned to instances mdevs = compute.driver._get_all_assigned_mediated_devices() for mdev in mdevs: mdev_name = libvirt_utils.mdev_uuid2name(mdev) mdev_info = compute.driver._get_mediated_device_information( mdev_name) parent_name = mdev_info['parent'] parent_rp_name = compute.host + '_' + parent_name parent_rp_uuid = self._get_provider_uuid_by_name(parent_rp_name) parent_usage = self._get_provider_usages(parent_rp_uuid) if orc.VGPU in parent_usage: total_usage += parent_usage[orc.VGPU] self.assertEqual(expected, len(mdevs)) self.assertEqual(expected, total_usage) def test_create_servers_with_vgpu(self): self._create_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', flavor_id=self.flavor, host=self.compute1.host, networks='auto', expected_state='ACTIVE') self.assert_vgpu_usage_for_compute(self.compute1, expected=1) def _confirm_resize(self, server, host='host1'): # NOTE(sbauza): Unfortunately, _cleanup_resize() in libvirt checks the # host option to know the source hostname but given we have a global # CONF, the value will be the hostname of the last compute service that # was created, so we need to change it here. 
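# --- illustrative sketch added by the editor (hypothetical helper) ---
# The workaround described above (temporarily pointing CONF.host at the
# source host) could also be expressed as a small context manager so the
# original value is always restored, even if the wrapped call raises:
import contextlib

@contextlib.contextmanager
def temporary_host(test_case, hostname):
    """Temporarily override CONF.host for the duration of the block."""
    original = CONF.host
    test_case.flags(host=hostname)
    try:
        yield
    finally:
        test_case.flags(host=original)

# usage (hedged): with temporary_host(self, 'host1'): super()._confirm_resize(server)
# --- end sketch ---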
# TODO(sbauza): Remove the below once we stop using CONF.host in # libvirt and rather looking at the compute host value. orig_host = CONF.host self.flags(host=host) super(VGPUTests, self)._confirm_resize(server) self.flags(host=orig_host) self._wait_for_state_change(server, 'ACTIVE') def test_resize_servers_with_vgpu(self): # Add another compute for the sake of resizing self.compute2 = self._start_compute_service('host2') server = self._create_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', flavor_id=self.flavor, host=self.compute1.host, networks='auto', expected_state='ACTIVE') # Make sure we only have 1 vGPU for compute1 self.assert_vgpu_usage_for_compute(self.compute1, expected=1) self.assert_vgpu_usage_for_compute(self.compute2, expected=0) extra_spec = {"resources:VGPU": "1"} new_flavor = self._create_flavor(memory_mb=4096, extra_spec=extra_spec) # First, resize and then revert. self._resize_server(server, new_flavor) # After resizing, we then have two vGPUs, both for each compute self.assert_vgpu_usage_for_compute(self.compute1, expected=1) self.assert_vgpu_usage_for_compute(self.compute2, expected=1) self._revert_resize(server) # We're back to the original resources usage self.assert_vgpu_usage_for_compute(self.compute1, expected=1) self.assert_vgpu_usage_for_compute(self.compute2, expected=0) # Now resize and then confirm it. self._resize_server(server, new_flavor) self.assert_vgpu_usage_for_compute(self.compute1, expected=1) self.assert_vgpu_usage_for_compute(self.compute2, expected=1) self._confirm_resize(server) # In the last case, the source guest disappeared so we only have 1 vGPU self.assert_vgpu_usage_for_compute(self.compute1, expected=0) self.assert_vgpu_usage_for_compute(self.compute2, expected=1) class VGPUMultipleTypesTests(VGPUTestBase): def setUp(self): super(VGPUMultipleTypesTests, self).setUp() extra_spec = {"resources:VGPU": "1"} self.flavor = self._create_flavor(extra_spec=extra_spec) self.flags( enabled_vgpu_types=[fakelibvirt.NVIDIA_11_VGPU_TYPE, fakelibvirt.NVIDIA_12_VGPU_TYPE], group='devices') # we need to call the below again to ensure the updated # 'device_addresses' value is read and the new groups created nova.conf.devices.register_dynamic_opts(CONF) # host1 will have 2 physical GPUs : # - 0000:81:00.0 will only support nvidia-11 # - 0000:81:01.0 will only support nvidia-12 pgpu1_pci_addr = self.libvirt2pci_address(fakelibvirt.PGPU1_PCI_ADDR) pgpu2_pci_addr = self.libvirt2pci_address(fakelibvirt.PGPU2_PCI_ADDR) self.flags(device_addresses=[pgpu1_pci_addr], group='vgpu_nvidia-11') self.flags(device_addresses=[pgpu2_pci_addr], group='vgpu_nvidia-12') # Prepare traits for later on self._create_trait('CUSTOM_NVIDIA_11') self._create_trait('CUSTOM_NVIDIA_12') self.compute1 = self._start_compute_service('host1') def test_create_servers_with_vgpu(self): self._create_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', flavor_id=self.flavor, host=self.compute1.host, expected_state='ACTIVE') mdevs = self.compute1.driver._get_mediated_devices() self.assertEqual(1, len(mdevs)) # We can be deterministic : since 0000:81:01.0 is asked to only support # nvidia-12 *BUT* doesn't actually have this type as a PCI capability, # we are sure that only 0000:81:00.0 is used. 
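# --- illustrative sketch added by the editor (hypothetical helper) ---
# To recap the configuration built in setUp() above (addresses come from the
# fakelibvirt fixture):
#   - 0000:81:00.0 (pci_0000_81_00_0) is listed under [vgpu_nvidia-11]/device_addresses
#   - 0000:81:01.0 (pci_0000_81_01_0) is listed under [vgpu_nvidia-12]/device_addresses
EXPECTED_VGPU_TYPE_BY_PGPU = {
    '0000:81:00.0': 'nvidia-11',
    '0000:81:01.0': 'nvidia-12',
}

def expected_vgpu_type_for(pci_address):
    """Return the vGPU type a pGPU is configured to expose, if any."""
    return EXPECTED_VGPU_TYPE_BY_PGPU.get(pci_address)
# --- end sketch ---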
parent_name = mdevs[0]['parent'] self.assertEqual(fakelibvirt.PGPU1_PCI_ADDR, parent_name) # We are also sure that there is no RP for 0000:81:01.0 since there # is no inventory for nvidia-12 root_rp_uuid = self._get_provider_uuid_by_name(self.compute1.host) rp_uuids = self._get_all_rp_uuids_in_a_tree(root_rp_uuid) # We only have 2 RPs : the root RP and only the pGPU1 RP... self.assertEqual(2, len(rp_uuids)) # ... but we double-check by asking the RP by its expected name expected_pgpu2_rp_name = (self.compute1.host + '_' + fakelibvirt.PGPU2_PCI_ADDR) pgpu2_rp = self.placement_api.get( '/resource_providers?name=' + expected_pgpu2_rp_name).body[ 'resource_providers'] # See, Placement API returned no RP for this name as it doesn't exist. self.assertEqual([], pgpu2_rp) def test_create_servers_with_specific_type(self): # Regenerate the PCI addresses so both pGPUs now support nvidia-12 connection = self.computes[ self.compute1.host].driver._host.get_connection() connection.pci_info = fakelibvirt.HostPCIDevicesInfo( num_pci=0, num_pfs=0, num_vfs=0, num_mdevcap=2, multiple_gpu_types=True) # Make a restart to update the Resource Providers self.compute1 = self.restart_compute_service(self.compute1) pgpu1_rp_uuid = self._get_provider_uuid_by_name( self.compute1.host + '_' + fakelibvirt.PGPU1_PCI_ADDR) pgpu2_rp_uuid = self._get_provider_uuid_by_name( self.compute1.host + '_' + fakelibvirt.PGPU2_PCI_ADDR) pgpu1_inventory = self._get_provider_inventory(pgpu1_rp_uuid) self.assertEqual(16, pgpu1_inventory[orc.VGPU]['total']) pgpu2_inventory = self._get_provider_inventory(pgpu2_rp_uuid) self.assertEqual(8, pgpu2_inventory[orc.VGPU]['total']) # Attach traits to the pGPU RPs self._set_provider_traits(pgpu1_rp_uuid, ['CUSTOM_NVIDIA_11']) self._set_provider_traits(pgpu2_rp_uuid, ['CUSTOM_NVIDIA_12']) expected = {'CUSTOM_NVIDIA_11': fakelibvirt.PGPU1_PCI_ADDR, 'CUSTOM_NVIDIA_12': fakelibvirt.PGPU2_PCI_ADDR} for trait in expected.keys(): # Add a trait to the flavor extra_spec = {"resources:VGPU": "1", "trait:%s" % trait: "required"} flavor = self._create_flavor(extra_spec=extra_spec) # Use the new flavor for booting server = self._create_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', flavor_id=flavor, host=self.compute1.host, expected_state='ACTIVE') # Get the instance we just created inst = objects.Instance.get_by_uuid(self.context, server['id']) # Get the mdevs that were allocated for this instance, we should # only have one mdevs = self.compute1.driver._get_all_assigned_mediated_devices( inst) self.assertEqual(1, len(mdevs)) # It's a dict of mdev_uuid/instance_uuid pairs, we only care about # the keys mdevs = list(mdevs.keys()) # Now get the detailed information about this single mdev mdev_info = self.compute1.driver._get_mediated_device_information( libvirt_utils.mdev_uuid2name(mdevs[0])) # We can be deterministic : since we asked for a specific type, # we know which pGPU we landed. self.assertEqual(expected[trait], mdev_info['parent']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/libvirt/test_vpmem.py0000664000175000017500000002765200000000000023261 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures from oslo_config import cfg from oslo_log import log as logging from nova import objects from nova.tests.functional.libvirt import integrated_helpers from nova.tests.unit.virt.libvirt import fake_imagebackend from nova.tests.unit.virt.libvirt import fakelibvirt CONF = cfg.CONF LOG = logging.getLogger(__name__) class VPMEMTestBase(integrated_helpers.LibvirtProviderUsageBaseTestCase): FAKE_LIBVIRT_VERSION = 5000000 FAKE_QEMU_VERSION = 3001000 def setUp(self): super(VPMEMTestBase, self).setUp() self.flags(pmem_namespaces="4GB:ns_0,SMALL:ns_1|ns_2", group='libvirt') self.fake_pmem_namespaces = ''' [{"dev":"namespace0.0", "mode":"devdax", "map":"mem", "size":4292870144, "uuid":"24ffd5e4-2b39-4f28-88b3-d6dc1ec44863", "daxregion":{"id": 0, "size": 4292870144,"align": 2097152, "devices":[{"chardev":"dax0.0", "size":4292870144}]}, "name":"ns_0", "numa_node":0}, {"dev":"namespace0.1", "mode":"devdax", "map":"mem", "size":4292870144, "uuid":"ac64fe52-de38-465b-b32b-947a6773ac66", "daxregion":{"id": 0, "size": 4292870144,"align": 2097152, "devices":[{"chardev":"dax0.1", "size":4292870144}]}, "name":"ns_1", "numa_node":0}, {"dev":"namespace0.2", "mode":"devdax", "map":"mem", "size":4292870144, "uuid":"2ff41eba-db9c-4bb9-a959-31d992568a3e", "raw_uuid":"0b61823b-5668-4856-842d-c644dae83410", "daxregion":{"id":0, "size":4292870144, "align":2097152, "devices":[{"chardev":"dax0.2", "size":4292870144}]}, "name":"ns_2", "numa_node":0}]''' self.useFixture(fixtures.MockPatch( 'nova.privsep.libvirt.cleanup_vpmem')) self.useFixture(fixtures.MockPatch( 'nova.privsep.libvirt.get_pmem_namespaces', return_value=self.fake_pmem_namespaces)) self.useFixture(fake_imagebackend.ImageBackendFixture()) self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.LibvirtDriver._get_local_gb_info', return_value={'total': 128, 'used': 44, 'free': 84})) self.mock_conn = self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.host.Host._get_new_connection')).mock def _get_connection(self, host_info, hostname=None): fake_connection = fakelibvirt.Connection( 'qemu:///system', version=self.FAKE_LIBVIRT_VERSION, hv_version=self.FAKE_QEMU_VERSION, host_info=host_info, hostname=hostname) return fake_connection def _start_compute_service(self, hostname): fake_connection = self._get_connection( # Need a host to support creating more servers with vpmems host_info=fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2, kB_mem=15740000), hostname=hostname) self.mock_conn.return_value = fake_connection compute = self._start_compute(host=hostname) # Ensure populating the existing pmems correctly. 
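# --- illustrative sketch added by the editor (hypothetical parser) ---
# The [libvirt]/pmem_namespaces value set above, "4GB:ns_0,SMALL:ns_1|ns_2",
# maps a label to one or more namespace names.  The real driver parses this
# internally; a minimal sketch of that format for illustration only:
def parse_pmem_namespaces(conf_value):
    """Return {label: [namespace names]} for a pmem_namespaces string."""
    mapping = {}
    for entry in conf_value.split(','):
        label, names = entry.split(':', 1)
        mapping[label] = names.split('|')
    return mapping

# parse_pmem_namespaces("4GB:ns_0,SMALL:ns_1|ns_2")
# -> {'4GB': ['ns_0'], 'SMALL': ['ns_1', 'ns_2']}
# --- end sketch ---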
vpmems = compute.driver._vpmems_by_name expected_vpmems = { 'ns_0': objects.LibvirtVPMEMDevice( label='4GB', name='ns_0', devpath='/dev/dax0.0', size=4292870144, align=2097152), 'ns_1': objects.LibvirtVPMEMDevice( label='SMALL', name='ns_1', devpath='/dev/dax0.1', size=4292870144, align=2097152), 'ns_2': objects.LibvirtVPMEMDevice( label='SMALL', name='ns_2', devpath='/dev/dax0.2', size=4292870144, align=2097152)} self.assertDictEqual(expected_vpmems, vpmems) # Ensure reporting vpmems resources correctly rp_uuid = self._get_provider_uuid_by_host(compute.host) inventory = self._get_provider_inventory(rp_uuid) self.assertEqual(1, inventory['CUSTOM_PMEM_NAMESPACE_4GB']['total']) self.assertEqual(2, inventory['CUSTOM_PMEM_NAMESPACE_SMALL']['total']) return compute def _create_server(self, flavor_id, hostname, expected_state): return super(VPMEMTestBase, self)._create_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', flavor_id=flavor_id, networks='none', az='nova:%s' % hostname, expected_state=expected_state) def _delete_server(self, server): self.api.delete_server(server['id']) def _check_vpmem_allocations(self, vpmem_allocs, server_id, cn_uuid): cn_allocs = self._get_allocations_by_server_uuid( server_id)[cn_uuid]['resources'] for rc, amount in vpmem_allocs.items(): self.assertEqual(amount, cn_allocs[rc]) class VPMEMTests(VPMEMTestBase): def setUp(self): super(VPMEMTests, self).setUp() extra_spec = {"hw:pmem": "SMALL"} self.flavor = self._create_flavor(extra_spec=extra_spec) def test_create_servers_with_vpmem(self): # Start one compute service self.compute1 = self._start_compute_service('host1') cn1_uuid = self._get_provider_uuid_by_host(self.compute1.host) # Boot two servers with pmem server1 = self._create_server(self.flavor, self.compute1.host, expected_state='ACTIVE') self._check_vpmem_allocations({'CUSTOM_PMEM_NAMESPACE_SMALL': 1}, server1['id'], cn1_uuid) server2 = self._create_server(self.flavor, self.compute1.host, expected_state='ACTIVE') self._check_vpmem_allocations({'CUSTOM_PMEM_NAMESPACE_SMALL': 1}, server2['id'], cn1_uuid) # 'SMALL' VPMEM resource has used up server3 = self._create_server(self.flavor, self.compute1.host, expected_state='ERROR') # Delete server2, one 'SMALL' VPMEM will be released self._delete_server(server2) self._wait_until_deleted(server2) server3 = self._create_server(self.flavor, self.compute1.host, expected_state='ACTIVE') self._check_vpmem_allocations({'CUSTOM_PMEM_NAMESPACE_SMALL': 1}, server3['id'], cn1_uuid) class VPMEMResizeTests(VPMEMTestBase): def setUp(self): super(VPMEMResizeTests, self).setUp() self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.LibvirtDriver._get_instance_disk_info', return_value=[])) self.useFixture(fixtures.MockPatch('os.rename')) extra_spec = {"hw:pmem": "SMALL"} self.flavor1 = self._create_flavor(extra_spec=extra_spec) extra_spec = {"hw:pmem": "4GB,SMALL"} self.flavor2 = self._create_flavor(extra_spec=extra_spec) def _resize_server(self, server, flavor): resize_req = { 'resize': { 'flavorRef': flavor } } self.api.api_post('/servers/%s/action' % server['id'], resize_req) def _confirm_resize(self, server): confirm_resize_req = {'confirmResize': None} self.api.api_post('/servers/%s/action' % server['id'], confirm_resize_req) def _revert_resize(self, server): revert_resize_req = {'revertResize': None} self.api.api_post('/servers/%s/action' % server['id'], revert_resize_req) def test_resize(self): self.flags(allow_resize_to_same_host=False) # Start two compute nodes self.compute1 = self._start_compute_service('host1') 
self.compute2 = self._start_compute_service('host2') cn1_uuid = self._get_provider_uuid_by_host(self.compute1.host) cn2_uuid = self._get_provider_uuid_by_host(self.compute2.host) # Boot one server with pmem, then resize the server server = self._create_server(self.flavor1, self.compute1.host, expected_state='ACTIVE') self._check_vpmem_allocations({'CUSTOM_PMEM_NAMESPACE_SMALL': 1}, server['id'], cn1_uuid) # Revert resize self._resize_server(server, self.flavor2) self._wait_for_state_change(server, 'VERIFY_RESIZE') self._check_vpmem_allocations({'CUSTOM_PMEM_NAMESPACE_4GB': 1, 'CUSTOM_PMEM_NAMESPACE_SMALL': 1}, server['id'], cn2_uuid) self._revert_resize(server) self._wait_for_state_change(server, 'ACTIVE') self._check_vpmem_allocations({'CUSTOM_PMEM_NAMESPACE_SMALL': 1}, server['id'], cn1_uuid) # Confirm resize self._resize_server(server, self.flavor2) self._wait_for_state_change(server, 'VERIFY_RESIZE') self._check_vpmem_allocations({'CUSTOM_PMEM_NAMESPACE_4GB': 1, 'CUSTOM_PMEM_NAMESPACE_SMALL': 1}, server['id'], cn2_uuid) self._confirm_resize(server) self._wait_for_state_change(server, 'ACTIVE') self._check_vpmem_allocations({'CUSTOM_PMEM_NAMESPACE_4GB': 1, 'CUSTOM_PMEM_NAMESPACE_SMALL': 1}, server['id'], cn2_uuid) def test_resize_same_host(self): self.flags(allow_resize_to_same_host=True) # Start one compute nodes self.compute1 = self._start_compute_service('host1') cn1_uuid = self._get_provider_uuid_by_host(self.compute1.host) # Boot one server with pmem, then resize the server server = self._create_server(self.flavor1, self.compute1.host, expected_state='ACTIVE') self._check_vpmem_allocations({'CUSTOM_PMEM_NAMESPACE_SMALL': 1}, server['id'], cn1_uuid) # Revert resize self._resize_server(server, self.flavor2) self._wait_for_state_change(server, 'VERIFY_RESIZE') self._check_vpmem_allocations({'CUSTOM_PMEM_NAMESPACE_4GB': 1, 'CUSTOM_PMEM_NAMESPACE_SMALL': 1}, server['id'], cn1_uuid) self._revert_resize(server) self._wait_for_state_change(server, 'ACTIVE') self._check_vpmem_allocations({'CUSTOM_PMEM_NAMESPACE_SMALL': 1}, server['id'], cn1_uuid) # Confirm resize self._resize_server(server, self.flavor2) self._wait_for_state_change(server, 'VERIFY_RESIZE') self._check_vpmem_allocations({'CUSTOM_PMEM_NAMESPACE_4GB': 1, 'CUSTOM_PMEM_NAMESPACE_SMALL': 1}, server['id'], cn1_uuid) self._confirm_resize(server) self._wait_for_state_change(server, 'ACTIVE') self._check_vpmem_allocations({'CUSTOM_PMEM_NAMESPACE_4GB': 1, 'CUSTOM_PMEM_NAMESPACE_SMALL': 1}, server['id'], cn1_uuid) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.4944687 nova-21.2.4/nova/tests/functional/notification_sample_tests/0000775000175000017500000000000000000000000024306 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/notification_sample_tests/__init__.py0000664000175000017500000000000000000000000026405 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/notification_sample_tests/notification_sample_base.py0000664000175000017500000003152700000000000031711 0ustar00zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import os import time from oslo_config import cfg from oslo_serialization import jsonutils from oslo_utils import fixture as utils_fixture from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests import json_ref from nova.tests.unit.api.openstack.compute import test_services from nova.tests.unit import fake_crypto from nova.tests.unit import fake_notifier import nova.tests.unit.image.fake CONF = cfg.CONF class NotificationSampleTestBase(test.TestCase, integrated_helpers.InstanceHelperMixin): """Base class for notification sample testing. To add tests for a versioned notification you have to store a sample file under the doc/notification_samples directory. In the test method in the subclass, trigger a change in the system that is expected to generate the notification, then use the _verify_notification function to assert that the stored sample matches the generated one. If the notification has different payload content depending on the state change you triggered, then the replacements parameter of the _verify_notification function can be used to override values coming from the sample file. Check nova.functional.notification_sample_tests.test_service_update as an example. """ ANY = object() REQUIRES_LOCKING = True # NOTE(gibi): Notification payloads always reflect the data needed # for every supported API microversion so it is safe to use the latest # API version in the tests. This also helps the tests use new API # features. This can be overridden by subclasses that need to cap # at a specific microversion for older APIs.
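# --- illustrative sketch added by the editor ---
# For example, a subclass that needs to cap the negotiated microversion
# (as TestFlavorNotificationSamplev2_55 does further below) would simply
# override the attribute:
#
#     class TestSomethingNotificationSample(NotificationSampleTestBase):
#         MAX_MICROVERSION = '2.55'
#
# The default declared next keeps the base class on the latest microversion.
# --- end sketch ---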
MAX_MICROVERSION = 'latest' def setUp(self): super(NotificationSampleTestBase, self).setUp() api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api self.admin_api = api_fixture.admin_api max_version = self.MAX_MICROVERSION self.api.microversion = max_version self.admin_api.microversion = max_version fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) self.useFixture(utils_fixture.TimeFixture(test_services.fake_utcnow())) # the image fake backend needed for image discovery nova.tests.unit.image.fake.stub_out_image_service(self) self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset) self.useFixture(func_fixtures.PlacementFixture()) context_patcher = self.mock_gen_request_id = mock.patch( 'oslo_context.context.generate_request_id', return_value='req-5b6c791d-5709-4f36-8fbe-c3e02869e35d') self.mock_gen_request_id = context_patcher.start() self.addCleanup(context_patcher.stop) self.start_service('conductor') self.start_service('scheduler') self.compute = self.start_service('compute') # Reset the service create notifications fake_notifier.reset() def _get_notification_sample(self, sample): sample_dir = os.path.dirname(os.path.abspath(__file__)) sample_dir = os.path.normpath(os.path.join( sample_dir, "../../../../doc/notification_samples")) return sample_dir + '/' + sample + '.json' def _apply_replacements(self, replacements, sample_obj, notification): replacements = replacements or {} for key, value in replacements.items(): obj = sample_obj['payload'] n_obj = notification['payload'] for sub_key in key.split('.')[:-1]: obj = obj['nova_object.data'][sub_key] n_obj = n_obj['nova_object.data'][sub_key] if value == NotificationSampleTestBase.ANY: del obj['nova_object.data'][key.split('.')[-1]] del n_obj['nova_object.data'][key.split('.')[-1]] else: obj['nova_object.data'][key.split('.')[-1]] = value def _verify_notification(self, sample_file_name, replacements=None, actual=None): """Assert if the generated notification matches with the stored sample :param sample_file_name: The name of the sample file to match relative to doc/notification_samples :param replacements: A dict of key value pairs that is used to update the payload field of the sample data before it is matched against the generated notification. The 'x.y':'new-value' key-value pair selects the ["payload"]["nova_object.data"]["x"] ["nova_object.data"]["y"] value from the sample data and overrides it with 'new-value'. There is a special value ANY that can be used to indicate that the actual field value shall be ignored during matching. :param actual: Defines the actual notification to compare with. If None then it defaults to the first versioned notification emitted during the test. 
""" if not actual: self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) notification = fake_notifier.VERSIONED_NOTIFICATIONS[0] else: notification = actual sample_file = self._get_notification_sample(sample_file_name) with open(sample_file) as sample: sample_data = sample.read() sample_obj = jsonutils.loads(sample_data) sample_base_dir = os.path.dirname(sample_file) sample_obj = json_ref.resolve_refs( sample_obj, base_path=sample_base_dir) self._apply_replacements(replacements, sample_obj, notification) self.assertJsonEqual(sample_obj, notification) def _pop_and_verify_dest_select_notification(self, server_id, replacements=None): replacements = replacements or {} replacements['instance_uuid'] = server_id replacements['pci_requests.instance_uuid'] = server_id replacements['flavor.extra_specs'] = self.ANY replacements['numa_topology'] = self.ANY scheduler_expected_notifications = [ 'scheduler-select_destinations-start', 'scheduler-select_destinations-end'] self.assertLessEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS)) for notification in scheduler_expected_notifications: self._verify_notification( notification, replacements=replacements, actual=fake_notifier.VERSIONED_NOTIFICATIONS.pop(0)) def _boot_a_server(self, expected_status='ACTIVE', extra_params=None, scheduler_hints=None, additional_extra_specs=None): # We have to depend on a specific image and flavor to fix the content # of the notification that will be emitted flavor_body = {'flavor': {'name': 'test_flavor', 'ram': 512, 'vcpus': 1, 'disk': 1, 'id': 'a22d5517-147c-4147-a0d1-e698df5cd4e3' }} flavor_id = self.api.post_flavor(flavor_body)['id'] extra_specs = { "extra_specs": { "hw:watchdog_action": "disabled"}} if additional_extra_specs: extra_specs['extra_specs'].update(additional_extra_specs) self.admin_api.post_extra_spec(flavor_id, extra_specs) # Ignore the create flavor notification fake_notifier.reset() keypair_req = { "keypair": { "name": "my-key", "public_key": fake_crypto.get_ssh_public_key() }} self.api.post_keypair(keypair_req) keypair_expected_notifications = [ 'keypair-import-start', 'keypair-import-end' ] self.assertLessEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS)) for notification in keypair_expected_notifications: self._verify_notification( notification, actual=fake_notifier.VERSIONED_NOTIFICATIONS.pop(0)) server = self._build_server( name='some-server', image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', flavor_id=flavor_id) # NOTE(gibi): from microversion 2.19 the description is not set to the # instance name automatically but can be provided at boot. 
server['description'] = 'some-server' if extra_params: extra_params['return_reservation_id'] = True extra_params['key_name'] = 'my-key' server.update(extra_params) post = {'server': server} if scheduler_hints: post.update({"os:scheduler_hints": scheduler_hints}) created_server = self.api.post_server(post) reservation_id = created_server['reservation_id'] created_server = self.api.get_servers( detail=False, search_opts={'reservation_id': reservation_id})[0] self.assertTrue(created_server['id']) # Wait for it to finish being created found_server = self._wait_for_state_change(created_server, expected_status) found_server['reservation_id'] = reservation_id # Note(elod.illes): let's just pop and verify the dest_select # notifications if we don't have a special case if scheduler_hints is None and expected_status != 'ERROR': self._pop_and_verify_dest_select_notification(found_server['id']) if found_server['status'] == 'ACTIVE': self.api.put_server_tags(found_server['id'], ['tag1']) return found_server def _get_notifications(self, event_type): return [notification for notification in fake_notifier.VERSIONED_NOTIFICATIONS if notification['event_type'] == event_type] def _wait_for_notification(self, event_type, timeout=10.0): # NOTE(mdbooth): wait_for_versioned_notifications raises an exception # if it times out since change I017d1a31. Consider removing this # method. fake_notifier.wait_for_versioned_notifications( event_type, timeout=timeout) def _wait_for_notifications(self, event_type, expected_count, timeout=10.0): notifications = fake_notifier.wait_for_versioned_notifications( event_type, n_events=expected_count, timeout=timeout) msg = ''.join('\n%s' % notif for notif in notifications) self.assertEqual(expected_count, len(notifications), 'Unexpected number of %s notifications ' 'within the given timeout. ' 'Expected %d, got %d: %s' % (event_type, expected_count, len(notifications), msg)) return notifications def _attach_volume_to_server(self, server, volume_id): self.api.post_server_volume( server['id'], {"volumeAttachment": {"volumeId": volume_id}}) self._wait_for_notification('instance.volume_attach.end') def _wait_and_get_migrations(self, server, max_retries=20): """Simple method to wait for the migrations Here we wait for the moment where active migration is in progress so we can get them and use them in the migration-related tests. :param server: server we'd like to use :param max_retries: maximum number of retries :returns: the migrations """ retries = 0 while retries < max_retries: retries += 1 migrations = self.admin_api.get_active_migrations(server['id']) if (len(migrations) > 0 and migrations[0]['status'] not in ['queued', 'preparing']): return migrations if retries == max_retries: self.fail('The migration table left empty.') time.sleep(0.5) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/notification_sample_tests/test_aggregate.py0000664000175000017500000002010400000000000027642 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.notification_sample_tests \ import notification_sample_base from nova.tests.unit import fake_notifier class TestAggregateNotificationSample( notification_sample_base.NotificationSampleTestBase): def setUp(self): self.flags(compute_driver='fake.FakeDriverWithCaching') super(TestAggregateNotificationSample, self).setUp() def test_aggregate_create_delete(self): aggregate_req = { "aggregate": { "name": "my-aggregate", "availability_zone": "nova"}} aggregate = self.admin_api.post_aggregate(aggregate_req) self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self._verify_notification( 'aggregate-create-start', replacements={ 'uuid': aggregate['uuid']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'aggregate-create-end', replacements={ 'uuid': aggregate['uuid'], 'id': aggregate['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) self.admin_api.delete_aggregate(aggregate['id']) self.assertEqual(4, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self._verify_notification( 'aggregate-delete-start', replacements={ 'uuid': aggregate['uuid'], 'id': aggregate['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[2]) self._verify_notification( 'aggregate-delete-end', replacements={ 'uuid': aggregate['uuid'], 'id': aggregate['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[3]) def test_aggregate_add_remove_host(self): aggregate_req = { "aggregate": { "name": "my-aggregate", "availability_zone": "nova"}} aggregate = self.admin_api.post_aggregate(aggregate_req) fake_notifier.reset() add_host_req = { "add_host": { "host": "compute" } } self.admin_api.post_aggregate_action(aggregate['id'], add_host_req) self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self._verify_notification( 'aggregate-add_host-start', replacements={ 'uuid': aggregate['uuid'], 'id': aggregate['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'aggregate-add_host-end', replacements={ 'uuid': aggregate['uuid'], 'id': aggregate['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) remove_host_req = { "remove_host": { "host": "compute" } } self.admin_api.post_aggregate_action(aggregate['id'], remove_host_req) self.assertEqual(4, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self._verify_notification( 'aggregate-remove_host-start', replacements={ 'uuid': aggregate['uuid'], 'id': aggregate['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[2]) self._verify_notification( 'aggregate-remove_host-end', replacements={ 'uuid': aggregate['uuid'], 'id': aggregate['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[3]) self.admin_api.delete_aggregate(aggregate['id']) def test_aggregate_update_metadata(self): aggregate_req = { "aggregate": { "name": "my-aggregate", "availability_zone": "nova"}} aggregate = self.admin_api.post_aggregate(aggregate_req) set_metadata_req = { "set_metadata": { "metadata": { "availability_zone": "AZ-1" } } } fake_notifier.reset() self.admin_api.post_aggregate_action(aggregate['id'], set_metadata_req) self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self._verify_notification( 'aggregate-update_metadata-start', replacements={ 'uuid': aggregate['uuid'], 'id': aggregate['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'aggregate-update_metadata-end', replacements={ 'uuid': aggregate['uuid'], 'id': aggregate['id']}, 
actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) def test_aggregate_updateprops(self): aggregate_req = { "aggregate": { "name": "my-aggregate", "availability_zone": "nova"}} aggregate = self.admin_api.post_aggregate(aggregate_req) update_req = { "aggregate": { "name": "my-new-aggregate"}} self.admin_api.put_aggregate(aggregate['id'], update_req) # 0. aggregate-create-start # 1. aggregate-create-end # 2. aggregate-update_prop-start # 3. aggregate-update_prop-end self.assertEqual(4, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self._verify_notification( 'aggregate-update_prop-start', replacements={ 'uuid': aggregate['uuid'], 'id': aggregate['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[2]) self._verify_notification( 'aggregate-update_prop-end', replacements={ 'uuid': aggregate['uuid'], 'id': aggregate['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[3]) def test_aggregate_cache_images(self): aggregate_req = { "aggregate": { "name": "my-aggregate", "availability_zone": "nova"}} aggregate = self.admin_api.post_aggregate(aggregate_req) add_host_req = { "add_host": { "host": "compute" } } self.admin_api.post_aggregate_action(aggregate['id'], add_host_req) fake_notifier.reset() cache_images_req = { 'cache': [ {'id': '155d900f-4e14-4e4c-a73d-069cbf4541e6'} ] } self.admin_api.api_post('/os-aggregates/%s/images' % aggregate['id'], cache_images_req) # Since the operation is asynchronous we have to wait for the end # notification. fake_notifier.wait_for_versioned_notifications( 'aggregate.cache_images.end') self.assertEqual(3, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'aggregate-cache_images-start', replacements={ 'uuid': aggregate['uuid'], 'id': aggregate['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'aggregate-cache_images-progress', replacements={ 'uuid': aggregate['uuid'], 'id': aggregate['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) self._verify_notification( 'aggregate-cache_images-end', replacements={ 'uuid': aggregate['uuid'], 'id': aggregate['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[2]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/notification_sample_tests/test_compute_task.py0000664000175000017500000001356400000000000030426 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
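# --- illustrative sketch added by the editor (hypothetical helper) ---
# The three error-notification tests below all pass the same replacement keys
# to _verify_notification; the non-deterministic "reason" fields are matched
# with the ANY sentinel.  A hedged sketch of factoring that out:
def _error_replacements(test, server_id):
    """Common replacements for compute_task.*.error notification samples."""
    return {
        'instance_uuid': server_id,
        'request_spec.instance_uuid': server_id,
        'request_spec.security_groups': [],
        'request_spec.numa_topology.instance_uuid': server_id,
        'request_spec.pci_requests.instance_uuid': server_id,
        'reason.function_name': test.ANY,
        'reason.module_name': test.ANY,
        'reason.traceback': test.ANY,
    }
# --- end sketch ---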
from nova.tests import fixtures from nova.tests.functional.notification_sample_tests \ import notification_sample_base from nova.tests.unit import fake_notifier class TestComputeTaskNotificationSample( notification_sample_base.NotificationSampleTestBase): def setUp(self): super(TestComputeTaskNotificationSample, self).setUp() self.neutron = fixtures.NeutronFixture(self) self.useFixture(self.neutron) def test_build_instances_fault(self): # Force down the compute node service_id = self.api.get_service_id('nova-compute') self.admin_api.put_service_force_down(service_id, True) server = self._boot_a_server( expected_status='ERROR', extra_params={'networks': [{'port': self.neutron.port_1['id']}]}, additional_extra_specs={'hw:numa_nodes': 1, 'hw:numa_cpus.0': '0', 'hw:numa_mem.0': 512}) self._wait_for_notification('compute_task.build_instances.error') # 0. scheduler.select_destinations.start # 1. compute_task.rebuild_server.error self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'compute_task-build_instances-error', replacements={ 'instance_uuid': server['id'], 'request_spec.instance_uuid': server['id'], 'request_spec.security_groups': [], 'request_spec.numa_topology.instance_uuid': server['id'], 'request_spec.pci_requests.instance_uuid': server['id'], 'reason.function_name': self.ANY, 'reason.module_name': self.ANY, 'reason.traceback': self.ANY }, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) def test_rebuild_fault(self): server = self._boot_a_server( extra_params={'networks': [{'port': self.neutron.port_1['id']}]}, additional_extra_specs={'hw:numa_nodes': 1, 'hw:numa_cpus.0': '0', 'hw:numa_mem.0': 512}) self._wait_for_notification('instance.create.end') # Force down the compute node service_id = self.api.get_service_id('nova-compute') self.admin_api.put_service_force_down(service_id, True) fake_notifier.reset() # NOTE(takashin): The rebuild action and the evacuate action shares # same code path. So the 'evacuate' action is used for this test. post = {'evacuate': {}} self.admin_api.post_server_action(server['id'], post) self._wait_for_notification('compute_task.rebuild_server.error') # 0. instance.evacuate # 1. scheduler.select_destinations.start # 2. compute_task.rebuild_server.error self.assertEqual(3, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'compute_task-rebuild_server-error', replacements={ 'instance_uuid': server['id'], 'request_spec.instance_uuid': server['id'], 'request_spec.security_groups': [], 'request_spec.numa_topology.instance_uuid': server['id'], 'request_spec.pci_requests.instance_uuid': server['id'], 'reason.function_name': self.ANY, 'reason.module_name': self.ANY, 'reason.traceback': self.ANY }, actual=fake_notifier.VERSIONED_NOTIFICATIONS[2]) def test_migrate_fault(self): server = self._boot_a_server( extra_params={'networks': [{'port': self.neutron.port_1['id']}]}, additional_extra_specs={'hw:numa_nodes': 1, 'hw:numa_cpus.0': '0', 'hw:numa_mem.0': 512}) self._wait_for_notification('instance.create.end') # Disable the compute node service_id = self.api.get_service_id('nova-compute') self.admin_api.put_service(service_id, {'status': 'disabled'}) fake_notifier.reset() # Note that the operation will return a 202 response but fail with # NoValidHost asynchronously. self.admin_api.post_server_action(server['id'], {'migrate': None}) self._wait_for_notification('compute_task.migrate_server.error') # 0. 
scheduler.select_destinations.start # 1. compute_task.migrate_server.error self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'compute_task-migrate_server-error', replacements={ 'instance_uuid': server['id'], 'request_spec.instance_uuid': server['id'], 'request_spec.security_groups': [], 'request_spec.numa_topology.instance_uuid': server['id'], 'request_spec.pci_requests.instance_uuid': server['id'], 'reason.function_name': self.ANY, 'reason.module_name': self.ANY, 'reason.traceback': self.ANY }, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/notification_sample_tests/test_exception_notification.py0000664000175000017500000000341300000000000032464 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional.api import client as api_client from nova.tests.functional.notification_sample_tests \ import notification_sample_base from nova.tests.unit import fake_notifier class TestExceptionNotificationSample( notification_sample_base.NotificationSampleTestBase): def test_versioned_exception_notification_with_correct_params( self): post = { "aggregate": { "name": "versioned_exc_aggregate", "availability_zone": "nova" } } self.admin_api.api_post('os-aggregates', post) # recreating the aggregate raises exception self.assertRaises(api_client.OpenStackApiException, self.admin_api.api_post, 'os-aggregates', post) self.assertEqual(4, len(fake_notifier.VERSIONED_NOTIFICATIONS)) traceback = fake_notifier.VERSIONED_NOTIFICATIONS[3][ 'payload']['nova_object.data']['traceback'] self.assertIn('AggregateNameExists', traceback) self._verify_notification( 'compute-exception', replacements={ 'traceback': self.ANY}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[3]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/notification_sample_tests/test_flavor.py0000664000175000017500000001074500000000000027217 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova.tests.functional.notification_sample_tests \ import notification_sample_base from nova.tests.unit import fake_notifier class TestFlavorNotificationSample( notification_sample_base.NotificationSampleTestBase): def test_flavor_create(self): body = { "flavor": { "name": "test_flavor", "ram": 1024, "vcpus": 2, "disk": 10, "id": "a22d5517-147c-4147-a0d1-e698df5cd4e3", "rxtx_factor": 2.0 } } self.admin_api.api_post('flavors', body) self._verify_notification('flavor-create') def test_flavor_destroy(self): body = { "flavor": { "name": "test_flavor", "ram": 1024, "vcpus": 2, "disk": 10, "id": "a22d5517-147c-4147-a0d1-e698df5cd4e3", "rxtx_factor": 2.0 } } # Create a flavor. self.admin_api.api_post('flavors', body) self.admin_api.api_delete( 'flavors/a22d5517-147c-4147-a0d1-e698df5cd4e3') self._verify_notification( 'flavor-delete', actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) def test_flavor_update(self): body = { "flavor": { "name": "test_flavor", "ram": 1024, "vcpus": 2, "disk": 10, "id": "a22d5517-147c-4147-a0d1-e698df5cd4e3", "os-flavor-access:is_public": False, "rxtx_factor": 2.0 } } # Create a flavor. self.admin_api.api_post('flavors', body) body = { "extra_specs": { "hw:numa_nodes": "2", } } self.admin_api.api_post( 'flavors/a22d5517-147c-4147-a0d1-e698df5cd4e3/os-extra_specs', body) body = { "addTenantAccess": { "tenant": "fake_tenant" } } self.admin_api.api_post( 'flavors/a22d5517-147c-4147-a0d1-e698df5cd4e3/action', body) self._verify_notification( 'flavor-update', actual=fake_notifier.VERSIONED_NOTIFICATIONS[2]) class TestFlavorNotificationSamplev2_55( notification_sample_base.NotificationSampleTestBase): """Tests PUT /flavors/{flavor_id} with a description.""" MAX_MICROVERSION = '2.55' def test_flavor_udpate_with_description(self): # First create a flavor without a description. body = { "flavor": { "name": "test_flavor", "ram": 1024, "vcpus": 2, "disk": 10, "id": "a22d5517-147c-4147-a0d1-e698df5cd4e3", "os-flavor-access:is_public": False, "rxtx_factor": 2.0 } } # Create a flavor. flavor = self.admin_api.api_post('flavors', body).body['flavor'] # Check the notification; should be the same as the sample where there # is no description set. self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self._verify_notification( 'flavor-create', replacements={'is_public': False}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) # Update and set the flavor description. self.admin_api.api_put( 'flavors/%s' % flavor['id'], {'flavor': {'description': 'test description'}}).body['flavor'] # Assert the notifications, one for create and one for update. self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self._verify_notification( 'flavor-update', replacements={'description': 'test description', 'extra_specs': {}, 'projects': []}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/notification_sample_tests/test_instance.py0000664000175000017500000025017000000000000027530 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import time import mock from nova import context from nova import exception from nova.tests import fixtures from nova.tests.functional.api import client from nova.tests.functional.notification_sample_tests \ import notification_sample_base from nova.tests.unit import fake_notifier class TestInstanceNotificationSampleWithMultipleCompute( notification_sample_base.NotificationSampleTestBase): def setUp(self): self.flags(compute_driver='fake.FakeLiveMigrateDriver') self.flags(bdms_in_notifications='True', group='notifications') super(TestInstanceNotificationSampleWithMultipleCompute, self).setUp() self.neutron = fixtures.NeutronFixture(self) self.useFixture(self.neutron) self.cinder = fixtures.CinderFixture(self) self.useFixture(self.cinder) self.useFixture(fixtures.AllServicesCurrent()) def test_multiple_compute_actions(self): # There are not going to be real network-vif-plugged events coming # so don't wait for them. self.flags(live_migration_wait_for_vif_plug=False, group='compute') server = self._boot_a_server( extra_params={'networks': [{'port': self.neutron.port_1['id']}]}) self._wait_for_notification('instance.create.end') self._attach_volume_to_server(server, self.cinder.SWAP_OLD_VOL) # server will boot on the 'compute' host self.compute2 = self.start_service('compute', host='host2') actions = [ self._test_live_migration_rollback, self._test_live_migration_abort, self._test_live_migration_success, self._test_evacuate_server, self._test_live_migration_force_complete ] for action in actions: fake_notifier.reset() action(server) # Ensure that instance is in active state after an action self._wait_for_state_change(server, 'ACTIVE') @mock.patch('nova.compute.manager.ComputeManager.' '_live_migration_cleanup_flags', return_value=[True, False]) @mock.patch('nova.compute.rpcapi.ComputeAPI.pre_live_migration', side_effect=exception.DestinationDiskExists(path='path')) def _test_live_migration_rollback(self, server, mock_migration, mock_flags): post = { 'os-migrateLive': { 'host': 'host2', 'block_migration': True, } } self.admin_api.post_server_action(server['id'], post) self._wait_for_notification( 'instance.live_migration_rollback_dest.end') # 0. scheduler.select_destinations.start # 1. scheduler.select_destinations.end # 2. instance.live_migration_rollback.start # 3. instance.live_migration_rollback.end # 4. instance.live_migration_rollback_dest.start # 5. 
instance.live_migration_rollback_dest.end self.assertEqual(6, len(fake_notifier.VERSIONED_NOTIFICATIONS), [x['event_type'] for x in fake_notifier.VERSIONED_NOTIFICATIONS]) self._verify_notification( 'instance-live_migration_rollback-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[2]) self._verify_notification( 'instance-live_migration_rollback-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[3]) self._verify_notification( 'instance-live_migration_rollback_dest-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[4]) self._verify_notification( 'instance-live_migration_rollback_dest-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[5]) def _test_live_migration_success(self, server): post = { 'os-migrateLive': { 'host': 'host2', 'block_migration': True, } } self.admin_api.post_server_action(server['id'], post) self._wait_for_notification('instance.live_migration_pre.end') # 0. scheduler.select_destinations.start # 1. scheduler.select_destinations.end # 2. instance.live_migration_pre.start # 3. instance.live_migration_pre.end self.assertEqual(4, len(fake_notifier.VERSIONED_NOTIFICATIONS), [x['event_type'] for x in fake_notifier.VERSIONED_NOTIFICATIONS]) self._verify_notification( 'instance-live_migration_pre-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[2]) self._verify_notification( 'instance-live_migration_pre-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[3]) migrations = self.admin_api.get_active_migrations(server['id']) self.assertEqual(1, len(migrations)) self._wait_for_notification('instance.live_migration_post.end') # 0. scheduler.select_destinations.start # 1. scheduler.select_destinations.end # 2. instance.live_migration_pre.start # 3. instance.live_migration_pre.end # 4. instance.live_migration_post.start # 5. instance.live_migration_post_dest.start # 6. instance.live_migration_post_dest.end # 7. 
instance.live_migration_post.end self.assertEqual(8, len(fake_notifier.VERSIONED_NOTIFICATIONS), [x['event_type'] for x in fake_notifier.VERSIONED_NOTIFICATIONS]) self._verify_notification( 'instance-live_migration_post-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[4]) self._verify_notification( 'instance-live_migration_post_dest-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[5]) self._verify_notification( 'instance-live_migration_post_dest-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[6]) self._verify_notification( 'instance-live_migration_post-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[7]) def _test_live_migration_abort(self, server): post = { "os-migrateLive": { "host": "host2", "block_migration": False, } } self.admin_api.post_server_action(server['id'], post) self._wait_for_state_change(server, 'MIGRATING') migrations = self._wait_and_get_migrations(server) self.admin_api.delete_migration(server['id'], migrations[0]['id']) self._wait_for_notification('instance.live_migration_abort.start') self._wait_for_state_change(server, 'ACTIVE') # NOTE(gibi): the intance.live_migration_rollback notification emitted # after the instance.live_migration_abort notification so we have to # wait for the rollback to ensure we can assert both notifications # below self._wait_for_notification('instance.live_migration_rollback.end') # 0. scheduler.select_destinations.start # 1. scheduler.select_destinations.end # 2. instance.live_migration_pre.start # 3. instance.live_migration_pre.end # 4. instance.live_migration_abort.start # 5. instance.live_migration_abort.end # 6. instance.live_migration_rollback.start # 7. 
instance.live_migration_rollback.end self.assertEqual(8, len(fake_notifier.VERSIONED_NOTIFICATIONS), [x['event_type'] for x in fake_notifier.VERSIONED_NOTIFICATIONS]) self._verify_notification( 'instance-live_migration_pre-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[2]) self._verify_notification( 'instance-live_migration_pre-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[3]) self._verify_notification( 'instance-live_migration_abort-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[4]) self._verify_notification( 'instance-live_migration_abort-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[5]) self._verify_notification( 'instance-live_migration_rollback-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[6]) self._verify_notification( 'instance-live_migration_rollback-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[7]) def _test_evacuate_server(self, server): services = self.admin_api.get_services(host='host2', binary='nova-compute') service_id = services[0]['id'] self.admin_api.put_service(service_id, {'forced_down': True}) evacuate = { 'evacuate': { 'host': 'compute', } } self.admin_api.post_server_action(server['id'], evacuate) self._wait_for_state_change(server, expected_status='REBUILD') self._wait_for_state_change(server, expected_status='ACTIVE') notifications = self._get_notifications('instance.evacuate') self.assertEqual(1, len(notifications), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-evacuate', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=notifications[0]) self.admin_api.put_service(service_id, {'forced_down': False}) def _test_live_migration_force_complete(self, server): post = { 'os-migrateLive': { 'host': 'host2', 'block_migration': True, } } self.admin_api.post_server_action(server['id'], post) self._wait_for_state_change(server, 'MIGRATING') migrations = self._wait_and_get_migrations(server) migration_id = migrations[0]['id'] self.admin_api.force_complete_migration(server['id'], migration_id) # Note that we wait for instance.live_migration_force_complete.end but # by the time we check versioned notifications received we could have # entered ComputeManager._post_live_migration which could emit up to # four other notifications: # - instance.live_migration_post.start # - instance.live_migration_post_dest.start # - instance.live_migration_post_dest.end # - instance.live_migration_post.end # We are not concerned about those in this test so that's why we stop # once we get instance.live_migration_force_complete.end and assert # we got at least 6 notifications. self._wait_for_notification( 'instance.live_migration_force_complete.end') # 0. scheduler.select_destinations.start # 1. scheduler.select_destinations.end # 2. instance.live_migration_pre.start # 3. instance.live_migration_pre.end # 4. instance.live_migration_force_complete.start # 5. 
instance.live_migration_force_complete.end self.assertGreaterEqual(len(fake_notifier.VERSIONED_NOTIFICATIONS), 6, fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-live_migration_force_complete-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[4]) self._verify_notification( 'instance-live_migration_force_complete-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[5]) class TestInstanceNotificationSample( notification_sample_base.NotificationSampleTestBase): def setUp(self): self.flags(bdms_in_notifications='True', group='notifications') super(TestInstanceNotificationSample, self).setUp() self.neutron = fixtures.NeutronFixture(self) self.useFixture(self.neutron) self.cinder = fixtures.CinderFixture(self) self.useFixture(self.cinder) def _wait_until_swap_volume(self, server, volume_id): for i in range(50): volume_attachments = self.api.get_server_volumes(server['id']) if len(volume_attachments) > 0: for volume_attachment in volume_attachments: if volume_attachment['volumeId'] == volume_id: return time.sleep(0.5) self.fail('Volume swap operation failed.') def test_instance_action(self): # A single test case is used to test most of the instance action # notifications to avoid booting up an instance for every action # separately. # Every instance action test function shall make sure that after the # function the instance is in active state and usable by other actions. # Therefore some action especially delete cannot be used here as # recovering from that action would mean to recreate the instance and # that would go against the whole purpose of this optimization server = self._boot_a_server( extra_params={'networks': [{'port': self.neutron.port_1['id']}]}) self._attach_volume_to_server(server, self.cinder.SWAP_OLD_VOL) actions = [ self._test_power_off_on_server, self._test_restore_server, self._test_suspend_resume_server, self._test_pause_unpause_server, self._test_shelve_and_shelve_offload_server, self._test_unshelve_server, self._test_resize_and_revert_server, self._test_snapshot_server, self._test_reboot_server, self._test_reboot_server_error, self._test_trigger_crash_dump, self._test_volume_detach_attach_server, self._test_rescue_unrescue_server, self._test_soft_delete_server, self._test_attach_volume_error, self._test_interface_attach_and_detach, self._test_interface_attach_error, self._test_lock_unlock_instance, self._test_lock_unlock_instance_with_reason, ] for action in actions: fake_notifier.reset() action(server) # Ensure that instance is in active state after an action self._wait_for_state_change(server, 'ACTIVE') # if the test step did not raised then we consider the step as # succeeded. We drop the logs to avoid causing subunit parser # errors due to logging too much at the end of the test case. self.stdlog.delete_stored_logs() def test_create_delete_server(self): fake_trusted_certs = ['cert-id-1', 'cert-id-2'] server = self._boot_a_server( extra_params={'networks': [{'port': self.neutron.port_1['id']}], 'tags': ['tag'], 'trusted_image_certificates': fake_trusted_certs}) self._attach_volume_to_server(server, self.cinder.SWAP_OLD_VOL) self._delete_server(server) # NOTE(gibi): The wait_unit_deleted() call polls the REST API to see if # the instance is disappeared however the _delete_instance() in # compute/manager destroys the instance first then send the # instance.delete.end notification. 
So to avoid race condition the test # needs to wait for the notification as well here. self._wait_for_notification('instance.delete.end') self.assertEqual(9, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) # This list needs to be in order. expected_notifications = [ 'instance-create-start', 'instance-create-end', 'instance-update-tags-action', 'instance-volume_attach-start', 'instance-volume_attach-end', 'instance-delete-start', 'instance-shutdown-start', 'instance-shutdown-end', 'instance-delete-end' ] for idx, notification in enumerate(expected_notifications): self._verify_notification( notification, replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[idx]) @mock.patch('nova.compute.manager.ComputeManager._build_resources') def test_create_server_error(self, mock_build): def _build_resources(*args, **kwargs): raise exception.FlavorDiskTooSmall() mock_build.side_effect = _build_resources fake_trusted_certs = ['cert-id-1', 'cert-id-2'] server = self._boot_a_server( expected_status='ERROR', extra_params={'networks': [{'port': self.neutron.port_1['id']}], 'tags': ['tag'], 'trusted_image_certificates': fake_trusted_certs}) # 0. scheduler.select_destinations.start # 1. scheduler.select_destinations.end # 2. instance-create-start # 3. instance-create-error self.assertEqual(4, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) tb = fake_notifier.VERSIONED_NOTIFICATIONS[3]['payload'][ 'nova_object.data']['fault']['nova_object.data']['traceback'] self.assertIn('raise exception.FlavorDiskTooSmall()', tb) self._verify_notification( 'instance-create-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[2]) self._verify_notification( 'instance-create-error', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id'], 'fault.traceback': self.ANY}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[3]) fake_notifier.reset() self._delete_server(server) self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-delete-start_not_scheduled', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-delete-end_not_scheduled', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) def test_instance_exists_usage_audit(self): # TODO(xavvior): Should create a functional test for the # "instance_usage_audit" periodic task. We didn't find usable # solution for this problem, however we tried to test it in # several ways. pass def test_instance_exists(self): server = self._boot_a_server( extra_params={'networks': [{'port': self.neutron.port_1['id']}]}) self._attach_volume_to_server(server, self.cinder.SWAP_OLD_VOL) # Let's generate some bandwidth usage data. 
# Just call the periodic task directly for simplicity self.compute.manager._poll_bandwidth_usage(context.get_admin_context()) fake_notifier.reset() post = { 'rebuild': { 'imageRef': 'a2459075-d96c-40d5-893e-577ff92e721c', 'metadata': {} } } self.api.post_server_action(server['id'], post) self._wait_for_state_change(server, expected_status='REBUILD') self._wait_for_state_change(server, expected_status='ACTIVE') notifications = self._get_notifications('instance.exists') self._verify_notification( 'instance-exists', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id'] }, actual=notifications[0]) def test_delete_server_while_compute_is_down(self): server = self._boot_a_server( expected_status='ACTIVE', extra_params={'networks': [{'port': self.neutron.port_1['id']}]}) self._attach_volume_to_server(server, self.cinder.SWAP_OLD_VOL) service_id = self.api.get_service_id('nova-compute') self.admin_api.put_service_force_down(service_id, True) fake_notifier.reset() self._delete_server(server) self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-delete-start_compute_down', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-delete-end_compute_down', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) self.admin_api.put_service_force_down(service_id, False) def _verify_instance_update_steps(self, steps, notifications, initial=None): replacements = {} if initial: replacements = initial for i, step in enumerate(steps): replacements.update(step) self._verify_notification( 'instance-update', replacements=replacements, actual=notifications[i]) return replacements def test_create_delete_server_with_instance_update(self): # This makes server network creation synchronous which is necessary # for notification samples that expect instance.info_cache.network_info # to be set. self.useFixture(fixtures.SpawnIsSynchronousFixture()) self.flags(notify_on_state_change='vm_and_task_state', group='notifications') server = self._boot_a_server( extra_params={'networks': [{'port': self.neutron.port_1['id']}]}) self._attach_volume_to_server(server, self.cinder.SWAP_OLD_VOL) instance_updates = self._wait_for_notifications('instance.update', 8) # The first notification comes from the nova-conductor, the # eighth notification comes from nova-api the # rest is from the nova-compute. 
To keep the test simpler # assert this fact and then modify the publisher_id of the # first and eighth notification to match the template self.assertEqual('nova-conductor:fake-mini', instance_updates[0]['publisher_id']) self.assertEqual('nova-api:fake-mini', instance_updates[7]['publisher_id']) instance_updates[0]['publisher_id'] = 'nova-compute:fake-mini' instance_updates[7]['publisher_id'] = 'nova-compute:fake-mini' create_steps = [ # nothing -> scheduling {'reservation_id': server['reservation_id'], 'uuid': server['id'], 'host': None, 'node': None, 'state_update.new_task_state': 'scheduling', 'state_update.old_task_state': 'scheduling', 'state_update.state': 'building', 'state_update.old_state': 'building', 'state': 'building'}, # scheduling -> building { 'state_update.new_task_state': None, 'state_update.old_task_state': 'scheduling', 'task_state': None}, # scheduled {'host': 'compute', 'node': 'fake-mini', 'state_update.old_task_state': None, 'updated_at': '2012-10-29T13:42:11Z'}, # building -> networking {'state_update.new_task_state': 'networking', 'state_update.old_task_state': 'networking', 'task_state': 'networking'}, # networking -> block_device_mapping {'state_update.new_task_state': 'block_device_mapping', 'state_update.old_task_state': 'networking', 'task_state': 'block_device_mapping', 'ip_addresses': [{ "nova_object.name": "IpPayload", "nova_object.namespace": "nova", "nova_object.version": "1.0", "nova_object.data": { "mac": "fa:16:3e:4c:2c:30", "address": "192.168.1.3", "port_uuid": "ce531f90-199f-48c0-816c-13e38010b442", "meta": {}, "version": 4, "label": "private", "device_name": "tapce531f90-19" }}] }, # block_device_mapping -> spawning {'state_update.new_task_state': 'spawning', 'state_update.old_task_state': 'block_device_mapping', 'task_state': 'spawning', }, # spawning -> active {'state_update.new_task_state': None, 'state_update.old_task_state': 'spawning', 'state_update.state': 'active', 'launched_at': '2012-10-29T13:42:11Z', 'state': 'active', 'task_state': None, 'power_state': 'running'}, # tag added {'state_update.old_task_state': None, 'state_update.old_state': 'active', 'tags': ['tag1']}, ] replacements = self._verify_instance_update_steps( create_steps, instance_updates) fake_notifier.reset() # Let's generate some bandwidth usage data. 
# Just call the periodic task directly for simplicity self.compute.manager._poll_bandwidth_usage(context.get_admin_context()) self._delete_server(server) instance_updates = self._get_notifications('instance.update') self.assertEqual(2, len(instance_updates), fake_notifier.VERSIONED_NOTIFICATIONS) delete_steps = [ # active -> deleting {'state_update.new_task_state': 'deleting', 'state_update.old_task_state': 'deleting', 'state_update.old_state': 'active', 'state': 'active', 'task_state': 'deleting', 'bandwidth': [ {'nova_object.namespace': 'nova', 'nova_object.name': 'BandwidthPayload', 'nova_object.data': {'network_name': 'private', 'out_bytes': 0, 'in_bytes': 0}, 'nova_object.version': '1.0'}], 'tags': ["tag1"], 'block_devices': [{ "nova_object.data": { "boot_index": None, "delete_on_termination": False, "device_name": "/dev/sdb", "tag": None, "volume_id": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113" }, "nova_object.name": "BlockDevicePayload", "nova_object.namespace": "nova", "nova_object.version": "1.0" }] }, # deleting -> deleted {'state_update.new_task_state': None, 'state_update.old_task_state': 'deleting', 'state_update.old_state': 'active', 'state_update.state': 'deleted', 'state': 'deleted', 'task_state': None, 'terminated_at': '2012-10-29T13:42:11Z', 'ip_addresses': [], 'power_state': 'pending', 'bandwidth': [], 'tags': ["tag1"], 'block_devices': [{ "nova_object.data": { "boot_index": None, "delete_on_termination": False, "device_name": "/dev/sdb", "tag": None, "volume_id": "a07f71dc-8151-4e7d-a0cc-cd24a3f11113" }, "nova_object.name": "BlockDevicePayload", "nova_object.namespace": "nova", "nova_object.version": "1.0" }] }, ] self._verify_instance_update_steps(delete_steps, instance_updates, initial=replacements) def _test_power_off_on_server(self, server): self.api.post_server_action(server['id'], {'os-stop': {}}) self._wait_for_state_change(server, expected_status='SHUTOFF') self.api.post_server_action(server['id'], {'os-start': {}}) self._wait_for_state_change(server, expected_status='ACTIVE') self.assertEqual(4, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-power_off-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-power_off-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) self._verify_notification( 'instance-power_on-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[2]) self._verify_notification( 'instance-power_on-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[3]) def _test_shelve_and_shelve_offload_server(self, server): self.flags(shelved_offload_time=-1) self.api.post_server_action(server['id'], {'shelve': {}}) self._wait_for_state_change(server, expected_status='SHELVED') self.assertEqual(3, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-shelve-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) self._verify_notification( 'instance-shelve-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, 
actual=fake_notifier.VERSIONED_NOTIFICATIONS[2]) fake_notifier.reset() self.api.post_server_action(server['id'], {'shelveOffload': {}}) # we need to wait for the instance.host to become None as well before # we can unshelve to make sure that the unshelve.start notification # payload is stable as the compute manager first sets the instance # state then a bit later sets the instance.host to None. self._wait_for_server_parameter(server, {'status': 'SHELVED_OFFLOADED', 'OS-EXT-SRV-ATTR:host': None}) self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-shelve_offload-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-shelve_offload-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) self.api.post_server_action(server['id'], {'unshelve': None}) self._wait_for_state_change(server, 'ACTIVE') self._wait_for_notification('instance.unshelve.end') def _test_unshelve_server(self, server): # setting the shelved_offload_time to 0 should set the # instance status to 'SHELVED_OFFLOADED' self.flags(shelved_offload_time = 0) self.api.post_server_action(server['id'], {'shelve': {}}) # we need to wait for the instance.host to become None as well before # we can unshelve to make sure that the unshelve.start notification # payload is stable as the compute manager first sets the instance # state then a bit later sets the instance.host to None. self._wait_for_server_parameter(server, {'status': 'SHELVED_OFFLOADED', 'OS-EXT-SRV-ATTR:host': None}) post = {'unshelve': None} self.api.post_server_action(server['id'], post) self._wait_for_state_change(server, 'ACTIVE') self._wait_for_notification('instance.unshelve.end') self.assertEqual(9, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-unshelve-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[7]) self._verify_notification( 'instance-unshelve-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[8]) def _test_suspend_resume_server(self, server): post = {'suspend': {}} self.api.post_server_action(server['id'], post) self._wait_for_state_change(server, 'SUSPENDED') post = {'resume': None} self.api.post_server_action(server['id'], post) self._wait_for_state_change(server, 'ACTIVE') # Four versioned notification are generated. # 0. instance-suspend-start # 1. instance-suspend-end # 2. instance-resume-start # 3. 
instance-resume-end self.assertEqual(4, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-suspend-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-suspend-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) self._verify_notification( 'instance-resume-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[2]) self._verify_notification( 'instance-resume-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[3]) self.flags(reclaim_instance_interval=0) def _test_pause_unpause_server(self, server): self.api.post_server_action(server['id'], {'pause': {}}) self._wait_for_state_change(server, 'PAUSED') self.api.post_server_action(server['id'], {'unpause': {}}) self._wait_for_state_change(server, 'ACTIVE') # Four versioned notifications are generated # 0. instance-pause-start # 1. instance-pause-end # 2. instance-unpause-start # 3. instance-unpause-end self.assertEqual(4, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-pause-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-pause-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) self._verify_notification( 'instance-unpause-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[2]) self._verify_notification( 'instance-unpause-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[3]) def _build_destination_payload(self): cell1 = self.cell_mappings.get('cell1') return { 'nova_object.version': '1.0', 'nova_object.namespace': 'nova', 'nova_object.name': 'DestinationPayload', 'nova_object.data': { 'aggregates': None, 'cell': { 'nova_object.version': '2.0', 'nova_object.namespace': 'nova', 'nova_object.name': 'CellMappingPayload', 'nova_object.data': { 'disabled': False, 'name': u'cell1', 'uuid': cell1.uuid } } } } def _test_resize_and_revert_server(self, server): self.flags(allow_resize_to_same_host=True) other_flavor_body = { 'flavor': { 'name': 'other_flavor', 'ram': 256, 'vcpus': 1, 'disk': 1, 'id': 'd5a8bb54-365a-45ae-abdb-38d249df7845' } } other_flavor_id = self.api.post_flavor(other_flavor_body)['id'] extra_specs = { "extra_specs": { "hw:watchdog_action": "reset"}} self.admin_api.post_extra_spec(other_flavor_id, extra_specs) # Ignore the create flavor notification fake_notifier.reset() post = { 'resize': { 'flavorRef': other_flavor_id } } self.api.post_server_action(server['id'], post) self._wait_for_state_change(server, 'VERIFY_RESIZE') self._pop_and_verify_dest_select_notification(server['id'], replacements={ 'ignore_hosts': [], 'flavor.memory_mb': other_flavor_body['flavor']['ram'], 'flavor.name': other_flavor_body['flavor']['name'], 'flavor.flavorid': other_flavor_id, 'flavor.extra_specs': extra_specs['extra_specs'], 'requested_destination': self._build_destination_payload()}) 
self.assertEqual(7, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) # ignore instance.exists fake_notifier.VERSIONED_NOTIFICATIONS.pop(0) # This list needs to be in order. expected_notifications = [ 'instance-resize_prep-start', 'instance-resize_prep-end', 'instance-resize-start', 'instance-resize-end', 'instance-resize_finish-start', 'instance-resize_finish-end' ] for idx, notification in enumerate(expected_notifications): self._verify_notification( notification, replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[idx]) fake_notifier.reset() # the following is the revert server request post = {'revertResize': None} self.api.post_server_action(server['id'], post) self._wait_for_state_change(server, 'ACTIVE') self.assertEqual(3, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) # ignore instance.exists fake_notifier.VERSIONED_NOTIFICATIONS.pop(0) self._verify_notification( 'instance-resize_revert-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-resize_revert-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) @mock.patch('nova.compute.manager.ComputeManager._prep_resize') def test_resize_server_error_but_reschedule_was_success( self, mock_prep_resize): """Test it, when the prep_resize method raise an exception, but the reschedule_resize_or_reraise was successful and scheduled the resize. In this case we get a notification about the exception, which caused the prep_resize error. """ def _build_resources(*args, **kwargs): raise exception.FlavorDiskTooSmall() server = self._boot_a_server( extra_params={'networks': [{'port': self.neutron.port_1['id']}]}) self.flags(allow_resize_to_same_host=True) other_flavor_body = { 'flavor': { 'name': 'other_flavor_error', 'ram': 512, 'vcpus': 1, 'disk': 1, 'id': 'a22d5517-147c-4147-a0d1-e698df5cd4e9' } } other_flavor_id = self.api.post_flavor(other_flavor_body)['id'] post = { 'resize': { 'flavorRef': other_flavor_id } } fake_notifier.reset() mock_prep_resize.side_effect = _build_resources # NOTE(gibi): the first resize_instance call (from the API) should be # unaffected so that we can reach _prep_resize at all. But the # subsequent resize_instance call (from _reschedule_resize_or_reraise) # needs to be mocked as there is no alternative host to resize to. 
patcher = mock.patch.object(self.compute.manager.compute_task_api, 'resize_instance') self.addCleanup(patcher.stop) patcher.start() self.api.post_server_action(server['id'], post) self._wait_for_notification('instance.resize.error') self._pop_and_verify_dest_select_notification(server['id'], replacements={ 'ignore_hosts': [], 'flavor.name': other_flavor_body['flavor']['name'], 'flavor.flavorid': other_flavor_id, 'flavor.extra_specs': {}, 'requested_destination': self._build_destination_payload()}) # 0: instance-exists # 1: instance-resize_prep-start # 2: instance-resize-error # 3: instance-resize_prep-end self.assertLessEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS), 'Unexpected number of notifications: %s' % fake_notifier.VERSIONED_NOTIFICATIONS) # Note(gibi): There is also an instance.exists notification emitted # during the rescheduling tb = fake_notifier.VERSIONED_NOTIFICATIONS[2]['payload'][ 'nova_object.data']['fault']['nova_object.data']['traceback'] self.assertIn("raise exception.FlavorDiskTooSmall()", tb) self._verify_notification('instance-resize-error', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id'], 'fault.traceback': self.ANY }, actual=fake_notifier.VERSIONED_NOTIFICATIONS[2]) @mock.patch('nova.compute.manager.ComputeManager._prep_resize') def test_resize_server_error_and_reschedule_was_failed( self, mock_prep_resize): """Test it, when the prep_resize method raise an exception, after trying again with the reschedule_resize_or_reraise method call, but the rescheduled also was unsuccessful. In this case called the exception block. In the exception block send a notification about error. At end called the six.reraise(*exc_info), which not send another error. """ def _build_resources(*args, **kwargs): raise exception.FlavorDiskTooSmall() server = self._boot_a_server( extra_params={'networks': [{'port': self.neutron.port_1['id']}]}) self.flags(allow_resize_to_same_host=True) other_flavor_body = { 'flavor': { 'name': 'other_flavor_error', 'ram': 512, 'vcpus': 1, 'disk': 1, 'id': 'a22d5517-147c-4147-a0d1-e698df5cd4e9' } } other_flavor_id = self.api.post_flavor(other_flavor_body)['id'] post = { 'resize': { 'flavorRef': other_flavor_id } } fake_notifier.reset() mock_prep_resize.side_effect = _build_resources # NOTE(gibi): the first resize_instance call (from the API) should be # unaffected so that we can reach _prep_resize at all. But the # subsequent resize_instance call (from _reschedule_resize_or_reraise) # needs to fail. It isn't realistic that resize_instance would raise # FlavorDiskTooSmall, but it's needed for the notification sample # to work. patcher = mock.patch.object(self.compute.manager.compute_task_api, 'resize_instance', side_effect=_build_resources) self.addCleanup(patcher.stop) patcher.start() self.api.post_server_action(server['id'], post) self._wait_for_state_change(server, expected_status='ERROR') self._wait_for_notification('compute.exception') # There should be the following notifications after scheduler's # select_destination notifications: # 0: instance-exists # 1: instance-resize_prep-start # 2: instance-resize-error # 3: instance-resize_prep-end # 4: compute.exception # (via the wrap_exception decorator on # the ComputeManager.prep_resize method.) 
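# A hedged illustration, not taken from the nova source: the
# wrap_exception decorator referenced in the comment above is why a
# separate compute.exception notification appears at the end of the
# expected list. The general shape of such a decorator is sketched
# below; the helper names are invented for illustration only.
#
#     import functools
#
#     def wrap_exception_sketch(emit_exception_notification):
#         def decorator(func):
#             @functools.wraps(func)
#             def wrapper(*args, **kwargs):
#                 try:
#                     return func(*args, **kwargs)
#                 except Exception:
#                     # emit a versioned compute.exception payload, then
#                     # let the original error propagate to the caller
#                     emit_exception_notification()
#                     raise
#             return wrapper
#         return decorator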
self._pop_and_verify_dest_select_notification(server['id'], replacements={ 'ignore_hosts': [], 'flavor.name': other_flavor_body['flavor']['name'], 'flavor.flavorid': other_flavor_id, 'flavor.extra_specs': {}, 'requested_destination': self._build_destination_payload()}) self.assertEqual(5, len(fake_notifier.VERSIONED_NOTIFICATIONS), 'Unexpected number of notifications: %s' % fake_notifier.VERSIONED_NOTIFICATIONS) tb = fake_notifier.VERSIONED_NOTIFICATIONS[2]['payload'][ 'nova_object.data']['fault']['nova_object.data']['traceback'] self.assertIn("raise exception.FlavorDiskTooSmall()", tb) self._verify_notification('instance-resize-error', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id'], 'fault.traceback': self.ANY }, actual=fake_notifier.VERSIONED_NOTIFICATIONS[2]) def _test_snapshot_server(self, server): post = {'createImage': {'name': 'test-snap'}} response = self.api.post_server_action(server['id'], post) self._wait_for_notification('instance.snapshot.end') self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-snapshot-start', replacements={ 'snapshot_image_id': response['image_id'], 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-snapshot-end', replacements={ 'snapshot_image_id': response['image_id'], 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) def test_rebuild_server(self): # NOTE(gabor_antal): Rebuild changes the image used by the instance, # therefore the actions tested in test_instance_action had to be in # specific order. To avoid this problem, rebuild was moved from # test_instance_action to its own method. server = self._boot_a_server( extra_params={'networks': [{'port': self.neutron.port_1['id']}]}) self._attach_volume_to_server(server, self.cinder.SWAP_OLD_VOL) fake_notifier.reset() image_ref = 'a2459075-d96c-40d5-893e-577ff92e721c' post = { 'rebuild': { 'imageRef': image_ref, 'metadata': {} } } self.api.post_server_action(server['id'], post) # Before going back to ACTIVE state # server state need to be changed to REBUILD state self._wait_for_state_change(server, expected_status='REBUILD') self._wait_for_state_change(server, expected_status='ACTIVE') self._pop_and_verify_dest_select_notification(server['id'], replacements={ 'image.container_format': 'ami', 'image.disk_format': 'ami', 'image.id': image_ref, 'image.properties': { 'nova_object.data': {}, 'nova_object.name': 'ImageMetaPropsPayload', 'nova_object.namespace': 'nova', 'nova_object.version': u'1.3'}, 'image.size': 58145823, 'image.tags': [], 'scheduler_hints': {'_nova_check_type': ['rebuild']}, 'force_hosts': 'compute', 'force_nodes': 'fake-mini', 'requested_destination': self._build_destination_payload()}) # 0. instance.rebuild_scheduled # 1. instance.exists # 2. instance.rebuild.start # 3. instance.detach.start # 4. instance.detach.end # 5. 
instance.rebuild.end # The compute/manager will detach every volume during rebuild self.assertEqual(6, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-rebuild_scheduled', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id'], 'trusted_image_certificates': None}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-rebuild-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id'], 'trusted_image_certificates': None}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[2]) self._verify_notification( 'instance-volume_detach-start', replacements={ 'reservation_id': server['reservation_id'], 'task_state': 'rebuilding', 'architecture': None, 'image_uuid': 'a2459075-d96c-40d5-893e-577ff92e721c', 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[3]) self._verify_notification( 'instance-volume_detach-end', replacements={ 'reservation_id': server['reservation_id'], 'task_state': 'rebuilding', 'architecture': None, 'image_uuid': 'a2459075-d96c-40d5-893e-577ff92e721c', 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[4]) self._verify_notification( 'instance-rebuild-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id'], 'trusted_image_certificates': None}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[5]) def test_rebuild_server_with_trusted_cert(self): # NOTE(gabor_antal): Rebuild changes the image used by the instance, # therefore the actions tested in test_instance_action had to be in # specific order. To avoid this problem, rebuild was moved from # test_instance_action to its own method. create_trusted_certs = ['cert-id-1', 'cert-id-2'] server = self._boot_a_server( extra_params={'networks': [{'port': self.neutron.port_1['id']}], 'trusted_image_certificates': create_trusted_certs}) self._attach_volume_to_server(server, self.cinder.SWAP_OLD_VOL) fake_notifier.reset() image_ref = 'a2459075-d96c-40d5-893e-577ff92e721c' rebuild_trusted_certs = ['rebuild-cert-id-1', 'rebuild-cert-id-2'] post = { 'rebuild': { 'imageRef': image_ref, 'metadata': {}, 'trusted_image_certificates': rebuild_trusted_certs, } } self.api.post_server_action(server['id'], post) # Before going back to ACTIVE state # server state need to be changed to REBUILD state self._wait_for_state_change(server, expected_status='REBUILD') self._wait_for_state_change(server, expected_status='ACTIVE') self._pop_and_verify_dest_select_notification(server['id'], replacements={ 'image.container_format': 'ami', 'image.disk_format': 'ami', 'image.id': image_ref, 'image.properties': { 'nova_object.data': {}, 'nova_object.name': 'ImageMetaPropsPayload', 'nova_object.namespace': 'nova', 'nova_object.version': u'1.3'}, 'image.size': 58145823, 'image.tags': [], 'scheduler_hints': {'_nova_check_type': ['rebuild']}, 'force_hosts': 'compute', 'force_nodes': 'fake-mini', 'requested_destination': self._build_destination_payload()}) # 0. instance.rebuild_scheduled # 1. instance.exists # 2. instance.rebuild.start # 3. instance.detach.start # 4. instance.detach.end # 5. 
instance.rebuild.end # The compute/manager will detach every volume during rebuild self.assertEqual(6, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-rebuild_scheduled', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-rebuild-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[2]) self._verify_notification( 'instance-volume_detach-start', replacements={ 'reservation_id': server['reservation_id'], 'task_state': 'rebuilding', 'architecture': None, 'image_uuid': 'a2459075-d96c-40d5-893e-577ff92e721c', 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[3]) self._verify_notification( 'instance-volume_detach-end', replacements={ 'reservation_id': server['reservation_id'], 'task_state': 'rebuilding', 'architecture': None, 'image_uuid': 'a2459075-d96c-40d5-893e-577ff92e721c', 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[4]) self._verify_notification( 'instance-rebuild-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[5]) @mock.patch('nova.compute.manager.ComputeManager.' '_do_rebuild_instance_with_claim') def test_rebuild_server_exc(self, mock_rebuild): def _virtual_interface_create_failed(*args, **kwargs): # A real error that could come out of driver.spawn() during rebuild raise exception.VirtualInterfaceCreateException() server = self._boot_a_server( extra_params={'networks': [{'port': self.neutron.port_1['id']}]}) self._attach_volume_to_server(server, self.cinder.SWAP_OLD_VOL) fake_notifier.reset() post = { 'rebuild': { 'imageRef': 'a2459075-d96c-40d5-893e-577ff92e721c', 'metadata': {} } } self.api.post_server_action(server['id'], post) mock_rebuild.side_effect = _virtual_interface_create_failed self._wait_for_state_change(server, expected_status='ERROR') notification = self._get_notifications('instance.rebuild.error') self.assertEqual(1, len(notification), fake_notifier.VERSIONED_NOTIFICATIONS) tb = notification[0]['payload']['nova_object.data']['fault'][ 'nova_object.data']['traceback'] self.assertIn('raise exception.VirtualInterfaceCreateException()', tb) self._verify_notification( 'instance-rebuild-error', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id'], 'trusted_image_certificates': None, 'fault.traceback': self.ANY}, actual=notification[0]) def _test_restore_server(self, server): self.flags(reclaim_instance_interval=30) self.api.delete_server(server['id']) self._wait_for_state_change(server, 'SOFT_DELETED') # we don't want to test soft_delete here fake_notifier.reset() self.api.post_server_action(server['id'], {'restore': {}}) self._wait_for_state_change(server, 'ACTIVE') self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-restore-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-restore-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) def _test_reboot_server(self, server): post = {'reboot': {'type': 'HARD'}} self.api.post_server_action(server['id'], post) 
self._wait_for_notification('instance.reboot.start') self._wait_for_notification('instance.reboot.end') self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-reboot-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-reboot-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) @mock.patch('nova.virt.fake.SmallFakeDriver.reboot') def _test_reboot_server_error(self, server, mock_reboot): def _hard_reboot(*args, **kwargs): raise exception.UnsupportedVirtType(virt="FakeVirt") mock_reboot.side_effect = _hard_reboot post = {'reboot': {'type': 'HARD'}} self.api.post_server_action(server['id'], post) self._wait_for_notification('instance.reboot.start') self._wait_for_notification('instance.reboot.error') self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) tb = fake_notifier.VERSIONED_NOTIFICATIONS[1]['payload'][ 'nova_object.data']['fault']['nova_object.data']['traceback'] self.assertIn("raise exception.UnsupportedVirtType", tb) self._verify_notification( 'instance-reboot-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-reboot-error', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id'], 'fault.traceback': self.ANY}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) def _detach_volume_from_server(self, server, volume_id): self.api.delete_server_volume(server['id'], volume_id) self._wait_for_notification('instance.volume_detach.end') def _volume_swap_server(self, server, attachement_id, volume_id): self.api.put_server_volume(server['id'], attachement_id, volume_id) def test_volume_swap_server(self): server = self._boot_a_server( extra_params={'networks': [{'port': self.neutron.port_1['id']}]}) self._attach_volume_to_server(server, self.cinder.SWAP_OLD_VOL) self.cinder.swap_volume_instance_uuid = server['id'] self._volume_swap_server(server, self.cinder.SWAP_OLD_VOL, self.cinder.SWAP_NEW_VOL) self._wait_until_swap_volume(server, self.cinder.SWAP_NEW_VOL) # NOTE(gibi): the new volume id can appear on the API earlier than the # volume_swap.end notification emitted. So to make the test stable # we have to wait for the volume_swap.end notification directly. self._wait_for_notification('instance.volume_swap.end') self.assertEqual(7, len(fake_notifier.VERSIONED_NOTIFICATIONS), 'Unexpected number of versioned notifications. 
' 'Got: %s' % fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-volume_swap-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[5]) self._verify_notification( 'instance-volume_swap-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[6]) def _do_setup_server_and_error_flag(self): server = self._boot_a_server( extra_params={'networks': [{'port': self.neutron.port_1['id']}]}) self._attach_volume_to_server(server, self.cinder.SWAP_ERR_OLD_VOL) self.cinder.attachment_error_id = self.cinder.SWAP_ERR_ATTACH_ID return server def test_volume_swap_server_with_error(self): server = self._do_setup_server_and_error_flag() self._volume_swap_server(server, self.cinder.SWAP_ERR_OLD_VOL, self.cinder.SWAP_ERR_NEW_VOL) self._wait_for_notification('compute.exception') # Eight versioned notifications are generated. # 0. instance-create-start # 1. instance-create-end # 2. instance-update # 3. instance-volume_attach-start # 4. instance-volume_attach-end # 5. instance-volume_swap-start # 6. instance-volume_swap-error # 7. compute.exception self.assertLessEqual(7, len(fake_notifier.VERSIONED_NOTIFICATIONS), 'Unexpected number of versioned notifications. ' 'Got: %s' % fake_notifier.VERSIONED_NOTIFICATIONS) block_devices = [{ "nova_object.data": { "boot_index": None, "delete_on_termination": False, "device_name": "/dev/sdb", "tag": None, "volume_id": self.cinder.SWAP_ERR_OLD_VOL }, "nova_object.name": "BlockDevicePayload", "nova_object.namespace": "nova", "nova_object.version": "1.0" }] self._verify_notification( 'instance-volume_swap-start', replacements={ 'new_volume_id': self.cinder.SWAP_ERR_NEW_VOL, 'old_volume_id': self.cinder.SWAP_ERR_OLD_VOL, 'block_devices': block_devices, 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[5]) tb1 = fake_notifier.VERSIONED_NOTIFICATIONS[6]['payload'][ 'nova_object.data']['fault']['nova_object.data']['traceback'] self.assertIn("_swap_volume", tb1) tb2 = fake_notifier.VERSIONED_NOTIFICATIONS[7]['payload'][ 'nova_object.data']['traceback'] self.assertIn("_swap_volume", tb2) self._verify_notification( 'instance-volume_swap-error', replacements={ 'reservation_id': server['reservation_id'], 'block_devices': block_devices, 'uuid': server['id'], 'fault.traceback': self.ANY}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[6]) def test_resize_confirm_server(self): server = self._boot_a_server( extra_params={'networks': [{'port': self.neutron.port_1['id']}]}) self._attach_volume_to_server(server, self.cinder.SWAP_OLD_VOL) self.admin_api.post_extra_spec( '2', {"extra_specs": {"hw:watchdog_action": "disabled"}}) self.flags(allow_resize_to_same_host=True) post = {'resize': {'flavorRef': '2'}} self.api.post_server_action(server['id'], post) self._wait_for_state_change(server, 'VERIFY_RESIZE') fake_notifier.reset() post = {'confirmResize': None} self.api.post_server_action(server['id'], post) self._wait_for_state_change(server, 'ACTIVE') self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-resize_confirm-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-resize_confirm-end', replacements={ 'reservation_id': server['reservation_id'], 
'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) def _test_trigger_crash_dump(self, server): post = {'trigger_crash_dump': None} self.api.post_server_action(server['id'], post) self._wait_for_notification('instance.trigger_crash_dump.end') self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-trigger_crash_dump-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-trigger_crash_dump-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) def _test_volume_detach_attach_server(self, server): self._detach_volume_from_server(server, self.cinder.SWAP_OLD_VOL) # 0. volume_detach-start # 1. volume_detach-end self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-volume_detach-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-volume_detach-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) fake_notifier.reset() self._attach_volume_to_server(server, self.cinder.SWAP_OLD_VOL) # 0. volume_attach-start # 1. volume_attach-end self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-volume_attach-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-volume_attach-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) def _test_rescue_unrescue_server(self, server): # Both "rescue" and "unrescue" notification asserts are made here # rescue notification asserts post = { "rescue": { "rescue_image_ref": 'a2459075-d96c-40d5-893e-577ff92e721c' } } self.api.post_server_action(server['id'], post) self._wait_for_state_change(server, 'RESCUE') # 0. instance.rescue.start # 1. instance.exists # 2. 
instance.rescue.end self.assertEqual(3, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-rescue-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-rescue-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[2]) fake_notifier.reset() # unrescue notification asserts post = { 'unrescue': None } self.api.post_server_action(server['id'], post) self._wait_for_state_change(server, 'ACTIVE') self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-unrescue-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-unrescue-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) def _test_soft_delete_server(self, server): self.flags(reclaim_instance_interval=30) self.api.delete_server(server['id']) self._wait_for_state_change(server, 'SOFT_DELETED') self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-soft_delete-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-soft_delete-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) self.flags(reclaim_instance_interval=0) # Leave instance in normal, active state self.api.post_server_action(server['id'], {'restore': {}}) def _do_test_attach_volume_error(self, server, mock_attach): def attach_volume(*args, **kwargs): raise exception.CinderConnectionFailed( reason="Connection timed out") mock_attach.side_effect = attach_volume post = {"volumeAttachment": {"volumeId": self.cinder.SWAP_NEW_VOL}} self.api.post_server_volume(server['id'], post) self._wait_for_notification('instance.volume_attach.error') block_devices = [ # Add by default at boot {'nova_object.data': {'boot_index': None, 'delete_on_termination': False, 'tag': None, 'device_name': '/dev/sdb', 'volume_id': self.cinder.SWAP_OLD_VOL}, 'nova_object.name': 'BlockDevicePayload', 'nova_object.namespace': 'nova', 'nova_object.version': '1.0'}, # Attaching it right now {'nova_object.data': {'boot_index': None, 'delete_on_termination': False, 'tag': None, 'device_name': '/dev/sdc', 'volume_id': self.cinder.SWAP_NEW_VOL}, 'nova_object.name': 'BlockDevicePayload', 'nova_object.namespace': 'nova', 'nova_object.version': '1.0'}] # 0. volume_attach-start # 1. volume_attach-error # 2. compute.exception # We only rely on the first 2 notifications, in this case we don't # care about the exception notification. 
self.assertLessEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self._verify_notification( 'instance-volume_attach-start', replacements={ 'reservation_id': server['reservation_id'], 'block_devices': block_devices, 'volume_id': self.cinder.SWAP_NEW_VOL, 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) tb = fake_notifier.VERSIONED_NOTIFICATIONS[1]['payload'][ 'nova_object.data']['fault']['nova_object.data']['traceback'] self.assertIn("CinderConnectionFailed:", tb) self._verify_notification( 'instance-volume_attach-error', replacements={ 'reservation_id': server['reservation_id'], 'block_devices': block_devices, 'volume_id': self.cinder.SWAP_NEW_VOL, 'uuid': server['id'], 'fault.traceback': self.ANY}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) @mock.patch('nova.volume.cinder.API.attachment_update') def _test_attach_volume_error(self, server, mock_attach): self._do_test_attach_volume_error(server, mock_attach) def _test_interface_attach_and_detach(self, server): post = { 'interfaceAttachment': { 'net_id': fixtures.NeutronFixture.network_1['id'] } } self.api.attach_interface(server['id'], post) self._wait_for_notification('instance.interface_attach.end') self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-interface_attach-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-interface_attach-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) fake_notifier.reset() self.assertEqual(0, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self.api.detach_interface( server['id'], fixtures.NeutronFixture.port_2['id']) self._wait_for_notification('instance.interface_detach.end') self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-interface_detach-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-interface_detach-end', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) @mock.patch('nova.virt.fake.SmallFakeDriver.attach_interface') def _test_interface_attach_error(self, server, mock_driver): def _unsuccessful_attach_interface(*args, **kwargs): raise exception.InterfaceAttachFailed("dummy") mock_driver.side_effect = _unsuccessful_attach_interface post = { 'interfaceAttachment': { 'net_id': fixtures.NeutronFixture.network_1['id'] } } self.assertRaises( client.OpenStackApiException, self.api.attach_interface, server['id'], post) self._wait_for_notification('instance.interface_attach.error') # 0. instance.interface_attach.start # 1. instance.interface_attach.error # 2. 
compute.exception self.assertLessEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self._verify_notification( 'instance-interface_attach-start', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) tb = fake_notifier.VERSIONED_NOTIFICATIONS[1]['payload'][ 'nova_object.data']['fault']['nova_object.data']['traceback'] self.assertIn("raise exception.InterfaceAttachFailed", tb) self._verify_notification( 'instance-interface_attach-error', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id'], 'fault.traceback': self.ANY}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) def _test_lock_unlock_instance(self, server): self.api.post_server_action(server['id'], {'lock': {}}) self._wait_for_server_parameter(server, {'locked': True}) self.api.post_server_action(server['id'], {'unlock': {}}) self._wait_for_server_parameter(server, {'locked': False}) # Two versioned notifications are generated # 0. instance-lock # 1. instance-unlock self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-lock', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-unlock', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) def _test_lock_unlock_instance_with_reason(self, server): self.api.post_server_action( server['id'], {'lock': {"locked_reason": "global warming"}}) self._wait_for_server_parameter(server, {'locked': True}) self.api.post_server_action(server['id'], {'unlock': {}}) self._wait_for_server_parameter(server, {'locked': False}) # Two versioned notifications are generated # 0. instance-lock # 1. instance-unlock self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS), fake_notifier.VERSIONED_NOTIFICATIONS) self._verify_notification( 'instance-lock-with-reason', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'instance-unlock', replacements={ 'reservation_id': server['reservation_id'], 'uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/notification_sample_tests/test_keypair.py0000664000175000017500000000620100000000000027362 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova.tests.functional.notification_sample_tests \ import notification_sample_base from nova.tests.unit import fake_notifier class TestKeypairNotificationSample( notification_sample_base.NotificationSampleTestBase): def test_keypair_create_delete(self): keypair_req = { "keypair": { "name": "my-key", "user_id": "fake", "type": "ssh" }} keypair = self.api.post_keypair(keypair_req) self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self._verify_notification( 'keypair-create-start', replacements={}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'keypair-create-end', replacements={ "fingerprint": keypair['fingerprint'], "public_key": keypair['public_key'] }, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) self.api.delete_keypair(keypair['name']) self.assertEqual(4, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self._verify_notification( 'keypair-delete-start', replacements={ "fingerprint": keypair['fingerprint'], "public_key": keypair['public_key'] }, actual=fake_notifier.VERSIONED_NOTIFICATIONS[2]) self._verify_notification( 'keypair-delete-end', replacements={ "fingerprint": keypair['fingerprint'], "public_key": keypair['public_key'] }, actual=fake_notifier.VERSIONED_NOTIFICATIONS[3]) def test_keypair_import(self): pub_key = ('ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDx8nkQv/zgGg' 'B4rMYmIf+6A4l6Rr+o/6lHBQdW5aYd44bd8JttDCE/F/pNRr0l' 'RE+PiqSPO8nDPHw0010JeMH9gYgnnFlyY3/OcJ02RhIPyyxYpv' '9FhY+2YiUkpwFOcLImyrxEsYXpD/0d3ac30bNH6Sw9JD9UZHYc' 'pSxsIbECHw== Generated-by-Nova') keypair_req = { "keypair": { "name": "my-key", "user_id": "fake", "public_key": pub_key, "type": "ssh"}} self.api.post_keypair(keypair_req) self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self._verify_notification( 'keypair-import-start', actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) self._verify_notification( 'keypair-import-end', actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/notification_sample_tests/test_libvirt.py0000664000175000017500000000415100000000000027373 0ustar00zuulzuul00000000000000# Copyright 2018 NTT Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import fixtures import mock import nova.conf from nova import exception from nova.tests.functional.notification_sample_tests \ import notification_sample_base from nova.tests.unit import fake_notifier from nova.tests.unit.virt.libvirt import fakelibvirt from nova.virt.libvirt import host CONF = nova.conf.CONF class TestLibvirtErrorNotificationSample( notification_sample_base.NotificationSampleTestBase): def setUp(self): self.flags(compute_driver='libvirt.LibvirtDriver') self.useFixture(fakelibvirt.FakeLibvirtFixture()) self.useFixture(fixtures.MockPatchObject(host.Host, 'initialize')) super(TestLibvirtErrorNotificationSample, self).setUp() @mock.patch('nova.virt.libvirt.host.Host._get_connection') def test_libvirt_connect_error(self, mock_get_conn): mock_get_conn.side_effect = fakelibvirt.libvirtError( 'Sample exception for versioned notification test.') # restart the compute service self.assertRaises(exception.HypervisorUnavailable, self.restart_compute_service, self.compute) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self._verify_notification( 'libvirt-connect-error', replacements={ 'ip': CONF.my_ip, 'reason.function_name': self.ANY, 'reason.module_name': self.ANY, 'reason.traceback': self.ANY }, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/notification_sample_tests/test_metrics.py0000664000175000017500000000302200000000000027362 0ustar00zuulzuul00000000000000# Copyright 2018 NTT Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import nova.conf from nova import context from nova.tests.functional.notification_sample_tests \ import notification_sample_base from nova.tests.unit import fake_notifier CONF = nova.conf.CONF class TestMetricsNotificationSample( notification_sample_base.NotificationSampleTestBase): def setUp(self): self.flags(compute_monitors=['cpu.virt_driver']) super(TestMetricsNotificationSample, self).setUp() # Reset the cpu stats of the 'cpu.virt_driver' monitor self.compute.manager.rt.monitors[0]._cpu_stats = {} def test_metrics_update(self): self.compute.manager.update_available_resource( context.get_admin_context()) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self._verify_notification( 'metrics-update', replacements={'host_ip': CONF.my_ip}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/notification_sample_tests/test_server_group.py0000664000175000017500000000565700000000000030456 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests import fixtures from nova.tests.functional.notification_sample_tests \ import notification_sample_base from nova.tests.unit import fake_notifier class TestServerGroupNotificationSample( notification_sample_base.NotificationSampleTestBase): def setUp(self): super(TestServerGroupNotificationSample, self).setUp() self.neutron = fixtures.NeutronFixture(self) self.useFixture(self.neutron) def test_server_group_create_delete(self): group_req = { "name": "test-server-group", "policy": "anti-affinity", "rules": {"max_server_per_host": 3} } group = self.api.post_server_groups(group_req) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self._verify_notification( 'server_group-create', replacements={'uuid': group['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) fake_notifier.reset() self.api.delete_server_group(group['id']) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self._verify_notification( 'server_group-delete', replacements={'uuid': group['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) def test_server_group_add_member(self): group_req = { "name": "test-server-group", "policy": "anti-affinity", "rules": {"max_server_per_host": 3} } group = self.api.post_server_groups(group_req) fake_notifier.reset() server = self._boot_a_server( extra_params={'networks': [{'port': self.neutron.port_1['id']}]}, scheduler_hints={"group": group['id']}) self._wait_for_notification('instance.update') # 0: server_group.add_member # 1: scheduler-select_destinations-start # 2: scheduler-select_destinations-end # 3: instance.create.start # 4: instance.create.end # 5: instance.update # (Due to adding server tags in the '_boot_a_server' method.) self.assertEqual(6, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self._verify_notification( 'server_group-add_member', replacements={'uuid': group['id'], 'members': [server['id']]}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/notification_sample_tests/test_service.py0000664000175000017500000001723400000000000027366 0ustar00zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
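# The classes below verify the service.update, service.create and
# service.delete versioned notifications. Two request formats are covered:
# the pre-2.53 PUT /os-services/<action> form, which identifies the service
# by host and binary in the body, and the 2.53+ PUT /os-services/{uuid} form,
# which uses status/forced_down fields. For reference (illustrative only,
# mirroring the tests below):
#
#     # microversion <= 2.52
#     self.admin_api.api_put('os-services/disable',
#                            {'host': 'host1', 'binary': 'nova-compute'})
#     # microversion >= 2.53
#     self.admin_api.api_put('os-services/%s' % service_uuid,
#                            {'status': 'disabled'})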
from oslo_utils import fixture as utils_fixture from nova import exception from nova.objects import service from nova.tests import fixtures from nova.tests.functional.notification_sample_tests \ import notification_sample_base from nova.tests.unit.api.openstack.compute import test_services from nova.tests.unit import fake_notifier class TestServiceNotificationBase( notification_sample_base.NotificationSampleTestBase): def _verify_notification(self, sample_file_name, replacements=None, actual=None): # This just extends the generic _verify_notification to default the # service version to the current service version to avoid sample update # after every service version bump. if 'version' not in replacements: replacements['version'] = service.SERVICE_VERSION base = super(TestServiceNotificationBase, self) base._verify_notification(sample_file_name, replacements, actual) class TestServiceUpdateNotificationSamplev2_52(TestServiceNotificationBase): # These tests have to be capped at 2.52 since the PUT format changes in # the 2.53 microversion. MAX_MICROVERSION = '2.52' def setUp(self): super(TestServiceUpdateNotificationSamplev2_52, self).setUp() self.stub_out("nova.db.api.service_get_by_host_and_binary", test_services.fake_service_get_by_host_binary) self.stub_out("nova.db.api.service_update", test_services.fake_service_update) # NOTE(gibi): enable / disable a compute service tries to call # the compute service via RPC to update placement. However in these # tests the compute services are faked. So stub out the RPC call to # avoid waiting for the RPC timeout. The notifications are generated # regardless of the result of the RPC call anyhow. self.stub_out("nova.compute.rpcapi.ComputeAPI.set_host_enabled", lambda *args, **kwargs: None) self.useFixture(utils_fixture.TimeFixture(test_services.fake_utcnow())) self.useFixture(fixtures.SingleCellSimple()) self.service_uuid = test_services.fake_service_get_by_host_binary( None, 'host1', 'nova-compute')['uuid'] def test_service_enable(self): body = {'host': 'host1', 'binary': 'nova-compute'} self.admin_api.api_put('os-services/enable', body) self._verify_notification('service-update', replacements={'uuid': self.service_uuid}) def test_service_disabled(self): body = {'host': 'host1', 'binary': 'nova-compute'} self.admin_api.api_put('os-services/disable', body) self._verify_notification('service-update', replacements={'disabled': True, 'uuid': self.service_uuid}) def test_service_disabled_log_reason(self): body = {'host': 'host1', 'binary': 'nova-compute', 'disabled_reason': 'test2'} self.admin_api.api_put('os-services/disable-log-reason', body) self._verify_notification('service-update', replacements={'disabled': True, 'disabled_reason': 'test2', 'uuid': self.service_uuid}) def test_service_force_down(self): body = {'host': 'host1', 'binary': 'nova-compute', 'forced_down': True} self.admin_api.api_put('os-services/force-down', body) self._verify_notification('service-update', replacements={'forced_down': True, 'disabled': True, 'disabled_reason': 'test2', 'uuid': self.service_uuid}) class TestServiceUpdateNotificationSampleLatest( TestServiceUpdateNotificationSamplev2_52): """Tests the PUT /os-services/{service_id} API notifications.""" MAX_MICROVERSION = 'latest' def setUp(self): super(TestServiceUpdateNotificationSampleLatest, self).setUp() def db_service_get_by_uuid(ctxt, service_uuid): for svc in test_services.fake_services_list: if svc['uuid'] == service_uuid: return svc raise exception.ServiceNotFound(service_id=service_uuid) 
self.stub_out('nova.db.api.service_get_by_uuid', db_service_get_by_uuid) def test_service_enable(self): body = {'status': 'enabled'} self.admin_api.api_put('os-services/%s' % self.service_uuid, body) self._verify_notification('service-update', replacements={'uuid': self.service_uuid}) def test_service_disabled(self): body = {'status': 'disabled'} self.admin_api.api_put('os-services/%s' % self.service_uuid, body) self._verify_notification('service-update', replacements={'disabled': True, 'uuid': self.service_uuid}) def test_service_disabled_log_reason(self): body = {'status': 'disabled', 'disabled_reason': 'test2'} self.admin_api.api_put('os-services/%s' % self.service_uuid, body) self._verify_notification('service-update', replacements={'disabled': True, 'disabled_reason': 'test2', 'uuid': self.service_uuid}) def test_service_force_down(self): body = {'forced_down': True} self.admin_api.api_put('os-services/%s' % self.service_uuid, body) self._verify_notification('service-update', replacements={'forced_down': True, 'disabled': True, 'disabled_reason': 'test2', 'uuid': self.service_uuid}) class TestServiceNotificationSample(TestServiceNotificationBase): def test_service_create(self): self.compute2 = self.start_service('compute', host='host2') self._verify_notification( 'service-create', replacements={ 'uuid': notification_sample_base.NotificationSampleTestBase.ANY}) def test_service_destroy(self): self.compute2 = self.start_service('compute', host='host2') # This test runs with the latest microversion by default so we get the # service uuid back from the REST API. compute2_service_id = self.admin_api.get_services( host=self.compute2.host, binary='nova-compute')[0]['id'] self.admin_api.api_delete('os-services/%s' % compute2_service_id) self._verify_notification( 'service-delete', replacements={'uuid': compute2_service_id}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/notification_sample_tests/test_volume.py0000664000175000017500000000504200000000000027227 0ustar00zuulzuul00000000000000# Copyright 2018 NTT Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
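# The tests below verify the 'volume-usage' versioned notification sample in
# two ways: as part of detaching a volume and via the compute manager's
# periodic volume usage poller. Usage data is only collected when the poll
# interval is non-zero, which is why setUp() configures (as shown in the
# code below):
#
#     self.flags(volume_usage_poll_interval=60)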
from nova import context from nova.tests import fixtures from nova.tests.functional.notification_sample_tests \ import notification_sample_base from nova.tests.unit import fake_notifier class TestVolumeUsageNotificationSample( notification_sample_base.NotificationSampleTestBase): def setUp(self): self.flags(volume_usage_poll_interval=60) super(TestVolumeUsageNotificationSample, self).setUp() self.neutron = fixtures.NeutronFixture(self) self.useFixture(self.neutron) self.cinder = fixtures.CinderFixture(self) self.useFixture(self.cinder) def _setup_server_with_volume_attached(self): server = self._boot_a_server( extra_params={'networks': [{'port': self.neutron.port_1['id']}]}) self._attach_volume_to_server(server, self.cinder.SWAP_OLD_VOL) fake_notifier.reset() return server def test_volume_usage_with_detaching_volume(self): server = self._setup_server_with_volume_attached() self.api.delete_server_volume(server['id'], self.cinder.SWAP_OLD_VOL) self._wait_for_notification('instance.volume_detach.end') # 0. volume_detach-start # 1. volume.usage # 2. volume_detach-end self.assertEqual(3, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self._verify_notification( 'volume-usage', replacements={'instance_uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[1]) def test_instance_poll_volume_usage(self): server = self._setup_server_with_volume_attached() self.compute.manager._poll_volume_usage(context.get_admin_context()) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self._verify_notification( 'volume-usage', replacements={'instance_uuid': server['id']}, actual=fake_notifier.VERSIONED_NOTIFICATIONS[0]) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5064685 nova-21.2.4/nova/tests/functional/regressions/0000775000175000017500000000000000000000000021400 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/regressions/README.rst0000664000175000017500000000146700000000000023077 0ustar00zuulzuul00000000000000================================ Tests for Specific Regressions ================================ When we have a bug reported by end users that we can write a full stack reproduce on, we should. And we should keep a regression test for that bug in our tree. It can be deleted at some future date if needed, but largely should not be changed. Writing Regression Tests ======================== - These should be full stack tests which inherit from nova.test.TestCase directly. (This is to prevent coupling with other tests). - They should setup a full stack cloud in their setUp via fixtures - They should each live in a file which is named test_bug_######.py Writing Tests Before the Bug is Fixed ===================================== TODO describe writing and landing tests before the bug is fixed as a reproduce. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/regressions/__init__.py0000664000175000017500000000000000000000000023477 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1404867.py0000664000175000017500000000620500000000000024506 0ustar00zuulzuul00000000000000# Copyright 2018 VEXXHOST, Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests import fixtures as nova_fixtures from nova.tests.functional import integrated_helpers class DeleteWithReservedVolumes(integrated_helpers._IntegratedTestBase): """Test deleting of an instance in error state that has a reserved volume. This test boots a server from volume which will fail to be scheduled, ending up in ERROR state with no host assigned and then deletes the server. Since the server failed to be scheduled, a local delete should run which will make sure that reserved volumes at the API layer are properly cleaned up. The regression is that Nova would not clean up the reserved volumes and the volume would be stuck in 'attaching' state. """ api_major_version = 'v2.1' microversion = 'latest' def _setup_compute_service(self): # Override `_setup_compute_service` to make sure that we do not start # up the compute service, making sure that the instance will end up # failing to find a valid host. pass def _create_error_server(self, volume_id): server = self.api.post_server({ 'server': { 'flavorRef': '1', 'name': 'bfv-delete-server-in-error-status', 'networks': 'none', 'block_device_mapping_v2': [ { 'boot_index': 0, 'uuid': volume_id, 'source_type': 'volume', 'destination_type': 'volume' }, ] } }) return self._wait_for_state_change(server, 'ERROR') def test_delete_with_reserved_volumes_new(self): self.cinder = self.useFixture( nova_fixtures.CinderFixture(self)) # Create a server which should go to ERROR state because we don't # have any active computes. volume_id = nova_fixtures.CinderFixture.IMAGE_BACKED_VOL server = self._create_error_server(volume_id) server_id = server['id'] # There should now exist an attachment to the volume as it was created # by Nova. self.assertIn(volume_id, self.cinder.volume_ids_for_instance(server_id)) # Delete this server, which should delete BDMs and remove the # reservation on the instances. self.api.delete_server(server['id']) # The volume should no longer have any attachments as instance delete # should have removed them. self.assertNotIn(volume_id, self.cinder.volume_ids_for_instance(server_id)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1522536.py0000664000175000017500000000464300000000000024504 0ustar00zuulzuul00000000000000# Copyright 2016 HPE, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
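# Regression coverage for bug #1522536: requesting a server by a bare numeric
# id (e.g. GET /servers/1) used to fail with a 500 because the value was
# treated as a database index; it must instead return a 404. The check below
# boils down to (illustrative only):
#
#     self.assertRaises(client.OpenStackApiNotFoundException,
#                       self.api.get_server, 1)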
import nova.scheduler.utils import nova.servicegroup from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api import client from nova.tests.unit import cast_as_call import nova.tests.unit.image.fake from nova.tests.unit import policy_fixture class TestServerGet(test.TestCase): REQUIRES_LOCKING = True def setUp(self): super(TestServerGet, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api # the image fake backend needed for image discovery nova.tests.unit.image.fake.stub_out_image_service(self) self.start_service('conductor') self.start_service('scheduler') self.compute = self.start_service('compute') self.useFixture(cast_as_call.CastAsCall(self)) self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset) self.image_id = self.api.get_images()[0]['id'] self.flavor_id = self.api.get_flavors()[0]['id'] def test_id_overlap(self): """Regression test for bug #1522536. Before fixing this bug, getting a numeric id caused a 500 error because it treated the numeric value as the db index, fetched the server, but then processing of extensions blew up. Since we have fixed this bug it returns a 404, which is expected. In future a 400 might be more appropriate. """ server = dict(name='server1', imageRef=self.image_id, flavorRef=self.flavor_id) self.api.post_server({'server': server}) self.assertRaises(client.OpenStackApiNotFoundException, self.api.get_server, 1) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1541691.py0000664000175000017500000000411600000000000024502 0ustar00zuulzuul00000000000000# Copyright 2016 HPE, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import nova.scheduler.utils import nova.servicegroup from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api import client import nova.tests.unit.image.fake from nova.tests.unit import policy_fixture class TestServerValidation(test.TestCase): REQUIRES_LOCKING = True microversion = None def setUp(self): super(TestServerValidation, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) # the image fake backend needed for image discovery nova.tests.unit.image.fake.stub_out_image_service(self) self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset) self.api = api_fixture.api self.image_id = self.api.get_images()[0]['id'] self.flavor_id = self.api.get_flavors()[0]['id'] def test_name_validation(self): """Regression test for bug #1541691. The current jsonschema validation spits a giant wall of regex at you (about 500k characters). This is not useful to determine why your request actually failed. Ensure that once we fix this it doesn't regress. 
""" server = dict(name='server1 ', imageRef=self.image_id, flavorRef=self.flavor_id) server_args = {'server': server} self.assertRaises(client.OpenStackApiException, self.api.post_server, server_args) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1548980.py0000664000175000017500000000644600000000000024522 0ustar00zuulzuul00000000000000# Copyright 2016 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import time import nova.scheduler.utils import nova.servicegroup from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api import client from nova.tests.unit import cast_as_call import nova.tests.unit.image.fake from nova.tests.unit import policy_fixture class TestServerGet(test.TestCase): REQUIRES_LOCKING = True def setUp(self): super(TestServerGet, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) # The non-admin API client is fine to stay at 2.1 since it just creates # and deletes the server. self.api = api_fixture.api self.admin_api = api_fixture.admin_api # The admin API client needs to be at microversion 2.16 to exhibit the # regression. self.admin_api.microversion = '2.16' # the image fake backend needed for image discovery nova.tests.unit.image.fake.stub_out_image_service(self) self.start_service('conductor') self.start_service('scheduler') self.compute = self.start_service('compute') self.useFixture(cast_as_call.CastAsCall(self)) self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset) self.image_id = self.api.get_images()[0]['id'] self.flavor_id = self.api.get_flavors()[0]['id'] def test_list_deleted_instances(self): """Regression test for bug #1548980. Before fixing this bug, listing deleted instances returned a 404 because lazy-loading services from a deleted instance failed. Now we should be able to list the deleted instance and the host_state attribute should be "". """ server = dict(name='server1', imageRef=self.image_id, flavorRef=self.flavor_id) server = self.api.post_server({'server': server}) self.api.delete_server(server['id']) # Wait 30 seconds for it to be gone. 
for x in range(30): try: self.api.get_server(server['id']) time.sleep(1) except client.OpenStackApiNotFoundException: break else: self.fail('Timed out waiting to delete server: %s' % server['id']) servers = self.admin_api.get_servers(search_opts={'deleted': 1}) self.assertEqual(1, len(servers)) self.assertEqual(server['id'], servers[0]['id']) # host_status is returned in the 2.16 microversion and since the server # is deleted it should be the empty string self.assertEqual(0, len(servers[0]['host_status'])) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1552888.py0000664000175000017500000000275200000000000024520 0ustar00zuulzuul00000000000000# Copyright 2016 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.unit import policy_fixture class TestAggregateCreation(test.TestCase): def setUp(self): super(TestAggregateCreation, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.admin_api = api_fixture.admin_api def test_name_validation(self): """Regression test for bug #1552888. The current aggregate accepts a null param for availability zone, change to the validation might affect some command like 'nova aggregate create foo' This test ensure those kind of change won't affect validation """ body = {"aggregate": {"name": "foo", "availability_zone": None}} # This should success self.admin_api.api_post('/os-aggregates', body) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1554631.py0000664000175000017500000001107000000000000024475 0ustar00zuulzuul00000000000000# Copyright 2016 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
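# Regression coverage for bugs #1554631 and #1680457: errors raised by the
# Cinder client (Forbidden, OverLimit) while creating volumes or snapshots
# through the Nova API must surface as a 403 response rather than a 500. The
# failure is injected by mocking the client, e.g. (illustrative only):
#
#     cinder_client.volumes.create.side_effect = \
#         cinder_exceptions.Forbidden(403)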
from cinderclient import exceptions as cinder_exceptions import mock from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api import client from nova.tests.unit import policy_fixture class TestCinderForbidden(test.TestCase): def setUp(self): super(TestCinderForbidden, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api @mock.patch('nova.volume.cinder.cinderclient') def test_forbidden_cinder_operation_returns_403(self, mock_cinder): """Regression test for bug #1554631. When the Cinder client returns a 403 Forbidden on any operation, the Nova API should forward on the 403 instead of returning 500. """ cinder_client = mock.Mock() mock_cinder.return_value = cinder_client exc = cinder_exceptions.Forbidden(403) cinder_client.volumes.create.side_effect = exc volume = {'display_name': 'vol1', 'size': 3} e = self.assertRaises(client.OpenStackApiException, self.api.post_volume, {'volume': volume}) self.assertEqual(403, e.response.status_code) class TestCinderOverLimit(test.TestCase): def setUp(self): super(TestCinderOverLimit, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api @mock.patch('nova.volume.cinder.cinderclient') def test_over_limit_volumes_with_message(self, mock_cinder): """Regression test for bug #1680457. When the Cinder client returns OverLimit when trying to create a volume, an OverQuota exception should be raised. """ cinder_client = mock.Mock() mock_cinder.return_value = cinder_client msg = ("VolumeSizeExceedsLimit: Requested volume size XG is larger" " than maximum allowed limit YG.") exc = cinder_exceptions.OverLimit(413, message=msg) cinder_client.volumes.create.side_effect = exc volume = {'display_name': 'vol1', 'size': 3} e = self.assertRaises(client.OpenStackApiException, self.api.post_volume, {'volume': volume}) self.assertEqual(403, e.response.status_code) # Make sure we went over on volumes self.assertIn('VolumeSizeExceedsLimit', e.response.text) @mock.patch('nova.volume.cinder.cinderclient') def test_over_limit_snapshots(self, mock_cinder): """Regression test for bug #1554631. When the Cinder client returns OverLimit when trying to create a snapshot, an OverQuota exception should be raised with the value being snapshots. """ self._do_snapshot_over_test(mock_cinder) @mock.patch('nova.volume.cinder.cinderclient') def test_over_limit_snapshots_force(self, mock_cinder): """Regression test for bug #1554631. When the Cinder client returns OverLimit when trying to create a snapshot, an OverQuota exception should be raised with the value being snapshots. 
(create_snapshot_force version) """ self._do_snapshot_over_test(mock_cinder, force=True) def _do_snapshot_over_test(self, mock_cinder, force=False): cinder_client = mock.Mock() mock_cinder.return_value = cinder_client exc = cinder_exceptions.OverLimit(413) cinder_client.volume_snapshots.create.side_effect = exc snap = {'display_name': 'snap1', 'volume_id': '521752a6-acf6-4b2d-bc7a-119f9148cd8c', 'force': force} e = self.assertRaises(client.OpenStackApiException, self.api.post_snapshot, {'snapshot': snap}) self.assertEqual(403, e.response.status_code) # Make sure we went over on snapshots self.assertIn('snapshots', e.response.text) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1558866.py0000664000175000017500000000600400000000000024514 0ustar00zuulzuul00000000000000# Copyright 2016 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api import client as api_client from nova.tests.unit.image import fake as fake_image from nova.tests.unit import policy_fixture class TestServerGet(test.TestCase): def setUp(self): super(TestServerGet, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api # the image fake backend needed for image discovery image_service = fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) # NOTE(mriedem): This image has an invalid architecture metadata value # and is used for negative testing in the functional stack. timestamp = datetime.datetime(2011, 1, 1, 1, 2, 3) image = {'id': 'c456eb30-91d7-4f43-8f46-2efd9eccd744', 'name': 'fake-image-invalid-arch', 'created_at': timestamp, 'updated_at': timestamp, 'deleted_at': None, 'deleted': False, 'status': 'active', 'is_public': False, 'container_format': 'raw', 'disk_format': 'raw', 'size': '25165824', 'properties': {'kernel_id': 'nokernel', 'ramdisk_id': 'nokernel', 'architecture': 'x64'}} self.image_id = image_service.create(None, image)['id'] self.flavor_id = self.api.get_flavors()[0]['id'] def test_boot_server_with_invalid_image_meta(self): """Regression test for bug #1558866. Glance allows you to provide any architecture value for image meta properties but nova validates the image metadata against the nova.compute.arch.ALL values during the conversion to the ImageMeta object. This test ensures we get a 400 back in that case rather than a 500. 
""" server = dict(name='server1', imageRef=self.image_id, flavorRef=self.flavor_id) ex = self.assertRaises(api_client.OpenStackApiException, self.api.post_server, {'server': server}) self.assertEqual(400, ex.response.status_code) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1568208.py0000664000175000017500000000227500000000000024511 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_reports import guru_meditation_report as gmr from nova import test from nova import version class TestServerGet(test.TestCase): def test_guru_meditation_report_generation(self): """Regression test for bug #1568208. This test ensures a text guru meditation report can be generated successfully and generation does not fail e.g. due to incorrectly registered config options. """ # NOTE(rpodolyaka): we are only interested in success of run() call # here, it's up to oslo_reports tests to check the generated report generator = gmr.TextGuruMeditation(version) generator.run() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1595962.py0000664000175000017500000001605000000000000024514 0ustar00zuulzuul00000000000000# Copyright 2016 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import time import fixtures import io import mock import nova from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.unit import cast_as_call from nova.tests.unit import policy_fixture from nova.tests.unit.virt.libvirt import fakelibvirt from nova.virt.libvirt import guest as libvirt_guest class TestSerialConsoleLiveMigrate(test.TestCase): REQUIRES_LOCKING = True def setUp(self): super(TestSerialConsoleLiveMigrate, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) # Replace libvirt with fakelibvirt self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.driver.libvirt', fakelibvirt)) self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.host.libvirt', fakelibvirt)) self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.guest.libvirt', fakelibvirt)) self.useFixture(fakelibvirt.FakeLibvirtFixture()) self.admin_api = api_fixture.admin_api self.api = api_fixture.api # the image fake backend needed for image discovery nova.tests.unit.image.fake.stub_out_image_service(self) nova.tests.unit.fake_network.set_stub_network_methods(self) self.flags(compute_driver='libvirt.LibvirtDriver') self.flags(enabled=True, group="serial_console") self.flags(enabled=False, group="vnc") self.flags(enabled=False, group="spice") self.flags(use_usb_tablet=False, group="libvirt") self.start_service('conductor') self.start_service('scheduler') self.compute = self.start_service('compute', host='test_compute1') self.useFixture(cast_as_call.CastAsCall(self)) self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset) self.image_id = self.api.get_images()[0]['id'] self.flavor_id = self.api.get_flavors()[0]['id'] @mock.patch.object(fakelibvirt.Domain, 'undefine') @mock.patch('nova.virt.libvirt.LibvirtDriver.get_volume_connector') @mock.patch('nova.virt.libvirt.guest.Guest.get_job_info') @mock.patch.object(fakelibvirt.Domain, 'migrateToURI3') @mock.patch('nova.virt.libvirt.host.Host.get_connection') @mock.patch('nova.virt.disk.api.get_disk_size', return_value=1024) @mock.patch('os.path.getsize', return_value=1024) @mock.patch('nova.conductor.tasks.live_migrate.LiveMigrationTask.' '_check_destination_is_not_source', return_value=False) @mock.patch('nova.virt.libvirt.LibvirtDriver._create_image', return_value=(False, False)) @mock.patch('nova.virt.libvirt.LibvirtDriver._get_local_gb_info', return_value={'total': 128, 'used': 44, 'free': 84}) @mock.patch('nova.virt.libvirt.driver.libvirt_utils.is_valid_hostname', return_value=True) @mock.patch('nova.virt.libvirt.driver.libvirt_utils.file_open', side_effect=[io.BytesIO(b''), io.BytesIO(b'')]) def test_serial_console_live_migrate(self, mock_file_open, mock_valid_hostname, mock_get_fs_info, mock_create_image, mock_conductor_source_check, mock_path_get_size, mock_get_disk_size, mock_host_get_connection, mock_migrate_to_uri, mock_get_job_info, mock_get_volume_connector, mock_undefine): """Regression test for bug #1595962. If the graphical consoles VNC and SPICE are disabled, the live-migration of an instance will result in an ERROR state. VNC and SPICE are usually disabled on IBM z systems platforms where graphical consoles are not available. The serial console is then enabled and VNC + SPICE are disabled. 
The error will be raised at https://opendev.org/openstack/nova/src/commit/ 4f33047d07f5a11b208c344fe206aba01cd8e6fe/ nova/virt/libvirt/driver.py#L5842-L5852 """ mock_get_job_info.return_value = libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_COMPLETED) fake_connection = fakelibvirt.Connection('qemu:///system', version=fakelibvirt.FAKE_LIBVIRT_VERSION, hv_version=fakelibvirt.FAKE_QEMU_VERSION) mock_host_get_connection.return_value = fake_connection # We invoke cleanup on source host first which will call undefine # method currently. Since in functional test we make all compute # services linked to the same connection, we need to mock the undefine # method to avoid triggering 'Domain not found' error in subsequent # rpc call post_live_migration_at_destination. mock_undefine.return_value = True server_attr = dict(name='server1', imageRef=self.image_id, flavorRef=self.flavor_id) server = self.api.post_server({'server': server_attr}) server_id = server['id'] self.wait_till_active_or_timeout(server_id) post = {"os-migrateLive": { "block_migration": False, "disk_over_commit": False, "host": "test_compute1" }} try: # This should succeed self.admin_api.post_server_action(server_id, post) self.wait_till_active_or_timeout(server_id) except Exception as ex: self.fail(ex.response.content) def wait_till_active_or_timeout(self, server_id): timeout = 0.0 server = self.api.get_server(server_id) while server['status'] != "ACTIVE" and timeout < 10.0: time.sleep(.1) timeout += .1 server = self.api.get_server(server_id) if server['status'] != "ACTIVE": self.fail("The server is not active after the timeout.") ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1620248.py0000664000175000017500000000417000000000000024476 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
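# Regression coverage for bug #1620248: updating a server's name must work
# even before the instance has been scheduled to a host. The conductor is
# replaced with a no-op fixture so the build never progresses, and the test
# then issues (illustrative only):
#
#     self.api.api_put('/servers/%s' % server_id,
#                      {'server': {'name': 'server-renamed'}})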
from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.unit import cast_as_call import nova.tests.unit.image.fake from nova.tests.unit import policy_fixture class TestServerUpdate(test.TestCase): REQUIRES_LOCKING = True def setUp(self): super(TestServerUpdate, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) # Simulate requests coming in before the instance is scheduled by # using a no-op for conductor build_instances self.useFixture(nova_fixtures.NoopConductorFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api # the image fake backend needed for image discovery nova.tests.unit.image.fake.stub_out_image_service(self) self.useFixture(cast_as_call.CastAsCall(self)) self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset) self.image_id = self.api.get_images()[0]['id'] self.flavor_id = self.api.get_flavors()[0]['id'] def test_update_name_before_scheduled(self): server = dict(name='server0', imageRef=self.image_id, flavorRef=self.flavor_id) server_id = self.api.post_server({'server': server})['id'] server = {'server': {'name': 'server-renamed'}} self.api.api_put('/servers/%s' % server_id, server) server_name = self.api.get_server(server_id)['name'] self.assertEqual('server-renamed', server_name) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1669054.py0000664000175000017500000000705000000000000024506 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import context from nova import objects from nova.tests.functional import integrated_helpers class ResizeEvacuateTestCase(integrated_helpers._IntegratedTestBase): """Regression test for bug 1669054 introduced in Newton. When resizing a server, if CONF.allow_resize_to_same_host is False, the API will set RequestSpec.ignore_hosts = [instance.host] and then later in conductor the RequestSpec changes are saved to persist the new flavor. This inadvertently saves the ignore_hosts value. Later if you try to migrate, evacuate or unshelve the server, that original source host will be ignored. If the deployment has a small number of computes, like two in an edge node, then evacuate will fail because the only other available host is ignored. This test recreates the scenario. """ # Set variables used in the parent class. REQUIRES_LOCKING = False ADMIN_API = True _image_ref_parameter = 'imageRef' _flavor_ref_parameter = 'flavorRef' api_major_version = 'v2.1' microversion = '2.11' # Need at least 2.11 for the force-down API def test_resize_then_evacuate(self): # Create a server. At this point there is only one compute service. 
flavors = self.api.get_flavors() flavor1 = flavors[0]['id'] server = self._build_server(flavor_id=flavor1) server = self.api.post_server({'server': server}) self._wait_for_state_change(server, 'ACTIVE') # Start up another compute service so we can resize. host2 = self.start_service('compute', host='host2') # Now resize the server to move it to host2. flavor2 = flavors[1]['id'] req = {'resize': {'flavorRef': flavor2}} self.api.post_server_action(server['id'], req) server = self._wait_for_state_change(server, 'VERIFY_RESIZE') self.assertEqual('host2', server['OS-EXT-SRV-ATTR:host']) self.api.post_server_action(server['id'], {'confirmResize': None}) server = self._wait_for_state_change(server, 'ACTIVE') # Disable the host on which the server is now running (host2). host2.stop() self.api.force_down_service('host2', 'nova-compute', forced_down=True) # Now try to evacuate the server back to the original source compute. req = {'evacuate': {'onSharedStorage': False}} self.api.post_server_action(server['id'], req) server = self._wait_for_state_change(server, 'ACTIVE') # The evacuate flow in the compute manager is annoying in that it # sets the instance status to ACTIVE before updating the host, so we # have to wait for the migration record to be 'done' to avoid a race. self._wait_for_migration_status(server, ['done']) self.assertEqual(self.compute.host, server['OS-EXT-SRV-ATTR:host']) # Assert the RequestSpec.ignore_hosts field is not populated. reqspec = objects.RequestSpec.get_by_instance_uuid( context.get_admin_context(), server['id']) self.assertIsNone(reqspec.ignore_hosts) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1670627.py0000664000175000017500000001325300000000000024506 0ustar00zuulzuul00000000000000# Copyright 2017 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import time from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api import client from nova.tests.unit import cast_as_call import nova.tests.unit.image.fake from nova.tests.unit import policy_fixture class TestDeleteFromCell0CheckQuota(test.TestCase): """This tests a regression introduced in the Ocata release. In Ocata we started building instances in conductor for cells v2. If we can't schedule the instance, it gets put into ERROR state, the instance record is created in the cell0 database and the BuildRequest is deleted. In the API: 1) quota.reserve creates a reservation record and updates the resource usage record to increment 'reserved' which is counted as part of usage. 2) quota.commit deletes the reservation record and updates the resource usage record to decrement 'reserved' and increment 'in_use' which is counted as part of usage When the user deletes the instance, the API code sees that the instance is living in cell0 and deletes it from cell0. The original quota reservation was made in the cell (nova) database and usage was not decremented. 
So a user that has several failed instance builds eventually runs out of quota even though they successfully deleted their servers. """ def setUp(self): super(TestDeleteFromCell0CheckQuota, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api # the image fake backend needed for image discovery nova.tests.unit.image.fake.stub_out_image_service(self) self.start_service('conductor') self.start_service('scheduler') # We don't actually start a compute service; this way we don't have any # compute hosts to schedule the instance to and will go into error and # be put into cell0. self.useFixture(cast_as_call.CastAsCall(self)) self.image_id = self.api.get_images()[0]['id'] self.flavor_id = self.api.get_flavors()[0]['id'] def _wait_for_instance_status(self, server_id, status): timeout = 0.0 server = self.api.get_server(server_id) while server['status'] != status and timeout < 10.0: time.sleep(.1) timeout += .1 server = self.api.get_server(server_id) if server['status'] != status: self.fail('Timed out waiting for server %s to have status: %s.' % (server_id, status)) return server def _wait_for_instance_delete(self, server_id): timeout = 0.0 while timeout < 10.0: try: server = self.api.get_server(server_id) except client.OpenStackApiNotFoundException: # the instance is gone so we're happy return else: time.sleep(.1) timeout += .1 self.fail('Timed out waiting for server %s to be deleted. ' 'Current vm_state: %s. Current task_state: %s' % (server_id, server['OS-EXT-STS:vm_state'], server['OS-EXT-STS:task_state'])) def _delete_server(self, server): try: self.api.delete_server(server['id']) except client.OpenStackApiNotFoundException: pass def test_delete_error_instance_in_cell0_and_check_quota(self): """Tests deleting the error instance in cell0 and quota. This test will create the server which will fail to schedule because there is no compute to send it to. This will trigger the conductor manager to put the instance into ERROR state and create it in cell0. The test asserts that quota was decremented. Then it deletes the instance and will check quota again after the instance is gone. """ # Get the current quota usage starting_usage = self.api.get_limits() # Create the server which we expect to go into ERROR state. server = dict( name='cell0-quota-test', imageRef=self.image_id, flavorRef=self.flavor_id) server = self.api.post_server({'server': server}) self.addCleanup(self._delete_server, server) self._wait_for_instance_status(server['id'], 'ERROR') # Check quota to see we've incremented usage by 1. current_usage = self.api.get_limits() self.assertEqual(starting_usage['absolute']['totalInstancesUsed'] + 1, current_usage['absolute']['totalInstancesUsed']) # Now delete the server and wait for it to be gone. self._delete_server(server) self._wait_for_instance_delete(server['id']) # Now check the quota again. Since the bug is fixed, ending usage # should be current usage - 1. ending_usage = self.api.get_limits() self.assertEqual(current_usage['absolute']['totalInstancesUsed'] - 1, ending_usage['absolute']['totalInstancesUsed']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1671648.py0000664000175000017500000001456600000000000024522 0ustar00zuulzuul00000000000000# Copyright 2017 Huawei Technologies Co.,LTD. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import time import nova.compute.resource_tracker from nova import exception from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.unit import cast_as_call from nova.tests.unit import fake_network import nova.tests.unit.image.fake from nova.tests.unit import policy_fixture class TestRetryBetweenComputeNodeBuilds(test.TestCase): """This tests a regression introduced in the Ocata release. In Ocata we started building instances in conductor for cells v2. That uses a new "schedule_and_build_instances" in the ConductorManager rather than the old "build_instances" method and duplicates a lot of the same logic, but it missed populating the "retry" value in the scheduler filter properties. As a result, failures to build an instance on a compute node which would normally result in a retry of the build on another compute node are not actually happening. """ def setUp(self): super(TestRetryBetweenComputeNodeBuilds, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) # The NeutronFixture is needed to stub out validate_networks in API. self.useFixture(nova_fixtures.NeutronFixture(self)) # This stubs out the network allocation in compute. fake_network.set_stub_network_methods(self) # We need the computes reporting into placement for the filter # scheduler to pick a host. self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) # The admin API is used to get the server details to verify the # host on which the server was built. self.admin_api = api_fixture.admin_api # the image fake backend needed for image discovery nova.tests.unit.image.fake.stub_out_image_service(self) self.start_service('conductor') # We start two compute services because we're going to fake one # of them to fail the build so we can trigger the retry code. self.start_service('compute', host='host1') self.start_service('compute', host='host2') self.scheduler_service = self.start_service('scheduler') self.useFixture(cast_as_call.CastAsCall(self)) self.image_id = self.admin_api.get_images()[0]['id'] self.flavor_id = self.admin_api.get_flavors()[0]['id'] # This is our flag that we set when we hit the first host and # made it fail. self.failed_host = None self.attempts = 0 # We can't stub nova.compute.claims.Claims.__init__ because there is # a race where nova.compute.claims.NopClaim will be used instead, # see for details: # https://opendev.org/openstack/nova/src/commit/ # bb02d1110a9529217a5e9b1e1fe8ca25873cac59/ # nova/compute/resource_tracker.py#L121-L130 real_instance_claim =\ nova.compute.resource_tracker.ResourceTracker.instance_claim def fake_instance_claim(_self, *args, **kwargs): self.attempts += 1 if self.failed_host is None: # Set the failed_host value to the ResourceTracker.host value. 
self.failed_host = _self.host raise exception.ComputeResourcesUnavailable( reason='failure on host %s' % _self.host) return real_instance_claim(_self, *args, **kwargs) self.stub_out( 'nova.compute.resource_tracker.ResourceTracker.instance_claim', fake_instance_claim) def _wait_for_instance_status(self, server_id, status): timeout = 0.0 server = self.admin_api.get_server(server_id) while server['status'] != status and timeout < 10.0: time.sleep(.1) timeout += .1 server = self.admin_api.get_server(server_id) if server['status'] != status: self.fail('Timed out waiting for server %s to have status: %s. ' 'Current status: %s. Build attempts: %s' % (server_id, status, server['status'], self.attempts)) return server def test_retry_build_on_compute_error(self): """Tests the retry operation between compute nodes when one fails. This tests the scenario that we have two compute services and we try to build a single server. The test is setup such that the scheduler picks the first host which we mock out to fail the claim. This should then trigger a retry to the second host. """ # Now that the bug is fixed, we should assert that the server goes to # ACTIVE status and is on the second host after the retry operation. server = dict( name='retry-test', imageRef=self.image_id, flavorRef=self.flavor_id) server = self.admin_api.post_server({'server': server}) self.addCleanup(self.admin_api.delete_server, server['id']) server = self._wait_for_instance_status(server['id'], 'ACTIVE') # Assert that the host is not the failed host. self.assertNotEqual(self.failed_host, server['OS-EXT-SRV-ATTR:host']) # Assert that we retried. self.assertEqual(2, self.attempts) class TestRetryBetweenComputeNodeBuildsNoAllocations( TestRetryBetweenComputeNodeBuilds): """Tests the reschedule scenario using a scheduler driver which does not use Placement. """ def setUp(self): super(TestRetryBetweenComputeNodeBuildsNoAllocations, self).setUp() # We need to mock the FilterScheduler to not use Placement so that # allocations won't be created during scheduling. self.scheduler_service.manager.driver.USES_ALLOCATION_CANDIDATES = \ False ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1675570.py0000664000175000017500000001611500000000000024510 0ustar00zuulzuul00000000000000# Copyright 2017 Huawei Technologies Co.,LTD. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import time from oslo_log import log as logging from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api import client from nova.tests.functional import fixtures as func_fixtures from nova.tests.unit import cast_as_call import nova.tests.unit.image.fake from nova.tests.unit import policy_fixture LOG = logging.getLogger(__name__) class TestLocalDeleteAttachedVolumes(test.TestCase): """Test local delete in the API of a server with a volume attached. 
This test creates a server, then shelve-offloads it, attaches a volume, and then deletes the server. Since the server is shelved-offloaded it does not have instance.host set which should trigger a local delete flow in the API. During local delete we should try to cleanup any attached volumes. This test asserts that on a local delete we also detach any volumes and destroy the related BlockDeviceMappings. """ def setUp(self): super(TestLocalDeleteAttachedVolumes, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) # We need the CinderFixture to stub out the volume API. self.cinder = self.useFixture( nova_fixtures.CinderFixture(self)) # The NeutronFixture is needed to stub out validate_networks in API. self.useFixture(nova_fixtures.NeutronFixture(self)) # Use the PlacementFixture to avoid annoying warnings in the logs. self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api # We want to use 2.37 for passing networks='none' on server create. # We also need this since you can only attach a volume to a # shelved-offloaded server in microversion 2.20+. self.api.microversion = 'latest' # the image fake backend needed for image discovery nova.tests.unit.image.fake.stub_out_image_service(self) self.start_service('conductor') self.start_service('scheduler') self.start_service('compute') self.useFixture(cast_as_call.CastAsCall(self)) self.flavor_id = self.api.get_flavors()[0]['id'] def _wait_for_instance_status(self, server_id, status): timeout = 0.0 server = self.api.get_server(server_id) while server['status'] != status and timeout < 10.0: time.sleep(.1) timeout += .1 server = self.api.get_server(server_id) if server['status'] != status: self.fail('Timed out waiting for server %s to have status: %s. ' 'Current status: %s' % (server_id, status, server['status'])) return server def _wait_for_instance_delete(self, server_id): timeout = 0.0 while timeout < 10.0: try: server = self.api.get_server(server_id) except client.OpenStackApiNotFoundException: # the instance is gone so we're happy return else: time.sleep(.1) timeout += .1 self.fail('Timed out waiting for server %s to be deleted. ' 'Current vm_state: %s. Current task_state: %s' % (server_id, server['OS-EXT-STS:vm_state'], server['OS-EXT-STS:task_state'])) def _delete_server(self, server): try: self.api.delete_server(server['id']) except client.OpenStackApiNotFoundException: pass def _wait_for_volume_attach(self, server_id, volume_id): timeout = 0.0 server = self.api.get_server(server_id) attached_vols = [vol['id'] for vol in server['os-extended-volumes:volumes_attached']] while volume_id not in attached_vols and timeout < 10.0: time.sleep(.1) timeout += .1 server = self.api.get_server(server_id) attached_vols = [vol['id'] for vol in server['os-extended-volumes:volumes_attached']] if volume_id not in attached_vols: self.fail('Timed out waiting for volume %s to be attached to ' 'server %s. Currently attached volumes: %s' % (volume_id, server_id, attached_vols)) def test_local_delete_with_volume_attached(self, mock_version_get=None): LOG.info('Creating server and waiting for it to be ACTIVE.') server = dict( name='local-delete-volume-attach-test', # The image ID comes from nova.tests.unit.image.fake. imageRef='76fa36fc-c930-4bf3-8c8a-ea2a2420deb6', flavorRef=self.flavor_id, # Bypass network setup on the compute. 
networks='none') server = self.api.post_server({'server': server}) server_id = server['id'] self.addCleanup(self._delete_server, server) self._wait_for_instance_status(server_id, 'ACTIVE') LOG.info('Shelve-offloading server %s', server_id) self.api.post_server_action(server_id, {'shelve': None}) # Wait for the server to be offloaded. self._wait_for_instance_status(server_id, 'SHELVED_OFFLOADED') volume_id = '9a695496-44aa-4404-b2cc-ccab2501f87e' LOG.info('Attaching volume %s to server %s', volume_id, server_id) self.api.post_server_volume( server_id, dict(volumeAttachment=dict(volumeId=volume_id))) # Wait for the volume to show up in the server's list of attached # volumes. LOG.info('Validating that volume %s is attached to server %s.', volume_id, server_id) self._wait_for_volume_attach(server_id, volume_id) # Check to see that the fixture is tracking the server and volume # attachment. self.assertIn(volume_id, self.cinder.volume_ids_for_instance(server_id)) # At this point the instance.host is no longer set, so deleting # the server will take the local delete path in the API. LOG.info('Deleting shelved-offloaded server %s.', server_id) self._delete_server(server) # Now wait for the server to be gone. self._wait_for_instance_delete(server_id) LOG.info('Validating that volume %s was detached from server %s.', volume_id, server_id) # Now that the bug is fixed, assert the volume was detached. self.assertNotIn(volume_id, self.cinder.volume_ids_for_instance(server_id)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1678326.py0000664000175000017500000000721300000000000024511 0ustar00zuulzuul00000000000000# Copyright 2017 Huawei Technologies Co.,LTD. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import exception from nova import objects from nova.tests.functional.regressions import test_bug_1670627 class TestDeleteFromCell0CheckQuotaRollback( test_bug_1670627.TestDeleteFromCell0CheckQuota): """This tests a regression introduced in the Ocata release. The _delete_while_booting method in the compute API was added in Ocata and will commit quota reservations for a usage decrement *before* attempting to destroy the instance. This is wrong since we should only adjust quota usage until *after* the instance is destroyed. If we fail to destroy the instance, then we rollback the quota reservation for the usage decrement. Because of how the Quotas object's commit and rollback methods work, they mask the fact that if you have already committed the reservation, a later attempt to rollback is a noop, which hid this regression. This test class extends the same regression test class for bug 1670627 because the same regression from _delete_while_booting was copied into _delete which was added as part of fixing bug 1670627. 
For the regression, we can recreate it by attempting to delete an instance but stub out instance.destroy() to make it fail and then check quota usage which should be unchanged from before the delete request when the bug is fixed. """ def test_delete_error_instance_in_cell0_and_check_quota(self): # Get the current quota usage starting_usage = self.api.get_limits() # Create the server which we expect to go into ERROR state. server = dict( name='cell0-quota-test', imageRef=self.image_id, flavorRef=self.flavor_id) server = self.api.post_server({'server': server}) self.addCleanup(self._delete_server, server) self._wait_for_instance_status(server['id'], 'ERROR') # Check quota to see we've incremented usage by 1. current_usage = self.api.get_limits() self.assertEqual(starting_usage['absolute']['totalInstancesUsed'] + 1, current_usage['absolute']['totalInstancesUsed']) # Now we stub out instance.destroy to fail with InstanceNotFound # which triggers the quotas.rollback call. original_instance_destroy = objects.Instance.destroy def fake_instance_destroy(*args, **kwargs): raise exception.InstanceNotFound(instance_id=server['id']) self.stub_out('nova.objects.instance.Instance.destroy', fake_instance_destroy) # Now delete the server and wait for it to be gone. self._delete_server(server) # Reset the stub so we can actually delete the instance on tearDown. self.stub_out('nova.objects.instance.Instance.destroy', original_instance_destroy) # Now check the quota again. We should be back to the pre-delete usage. ending_usage = self.api.get_limits() self.assertEqual(current_usage['absolute']['totalInstancesUsed'], ending_usage['absolute']['totalInstancesUsed']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1679750.py0000664000175000017500000001553000000000000024514 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from oslo_config import fixture as config_fixture from placement import conf as placement_conf from placement.tests import fixtures as placement_db from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers import nova.tests.unit.image.fake from nova.tests.unit import policy_fixture class TestLocalDeleteAllocations(test.TestCase, integrated_helpers.InstanceHelperMixin): def setUp(self): super(TestLocalDeleteAllocations, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) # The NeutronFixture is needed to show security groups for a server. 
self.useFixture(nova_fixtures.NeutronFixture(self)) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api self.admin_api = api_fixture.admin_api # We need the latest microversion to force-down the compute service self.admin_api.microversion = 'latest' # the image fake backend needed for image discovery nova.tests.unit.image.fake.stub_out_image_service(self) self.start_service('conductor') self.start_service('scheduler') @staticmethod def _get_usages(placement_api, rp_uuid): fmt = '/resource_providers/%(uuid)s/usages' resp = placement_api.get(fmt % {'uuid': rp_uuid}) return resp.body['usages'] def test_local_delete_removes_allocations_after_compute_restart(self): """Tests that allocations are removed after a local delete. This tests the scenario where a server is local deleted (because the compute host is down) and we want to make sure that its allocations have been cleaned up once the nova-compute service restarts. In this scenario we conditionally use the PlacementFixture to simulate the case that nova-api isn't configured to talk to placement, thus we need to manage the placement database independently. """ config = cfg.ConfigOpts() placement_config = self.useFixture(config_fixture.Config(config)) placement_conf.register_opts(config) self.useFixture(placement_db.Database(placement_config, set_config=True)) # Get allocations, make sure they are 0. with func_fixtures.PlacementFixture( conf_fixture=placement_config, db=False, register_opts=False) as placement: compute = self.start_service('compute') placement_api = placement.api resp = placement_api.get('/resource_providers') rp_uuid = resp.body['resource_providers'][0]['uuid'] usages_before = self._get_usages(placement_api, rp_uuid) for usage in usages_before.values(): self.assertEqual(0, usage) # Create a server. server = self._build_server(networks='none') server = self.admin_api.post_server({'server': server}) server = self._wait_for_state_change(server, 'ACTIVE') # Assert usages are non zero now. usages_during = self._get_usages(placement_api, rp_uuid) for usage in usages_during.values(): self.assertNotEqual(0, usage) # Force-down compute to trigger local delete. compute.stop() compute_service_id = self.admin_api.get_services( host=compute.host, binary='nova-compute')[0]['id'] self.admin_api.put_service(compute_service_id, {'forced_down': True}) # Delete the server (will be a local delete because compute is down). self._delete_server(server) with func_fixtures.PlacementFixture( conf_fixture=placement_config, db=False, register_opts=False) as placement: placement_api = placement.api # Assert usages are still non-zero. usages_during = self._get_usages(placement_api, rp_uuid) for usage in usages_during.values(): self.assertNotEqual(0, usage) # Start the compute service again. Before it comes up, it will # call the update_available_resource code in the ResourceTracker # which is what "heals" the allocations for the deleted instance. compute.start() # Get the allocations again to check against the original. usages_after = self._get_usages(placement_api, rp_uuid) # They should match. self.assertEqual(usages_before, usages_after) def test_local_delete_removes_allocations_from_api(self): """Tests that the compute API deletes allocations when the compute service on which the instance was running is down. """ placement_api = self.useFixture(func_fixtures.PlacementFixture()).api compute = self.start_service('compute') # Get allocations, make sure they are 0. 
resp = placement_api.get('/resource_providers') rp_uuid = resp.body['resource_providers'][0]['uuid'] usages_before = self._get_usages(placement_api, rp_uuid) for usage in usages_before.values(): self.assertEqual(0, usage) # Create a server. server = self._build_server(networks='none') server = self.admin_api.post_server({'server': server}) server = self._wait_for_state_change(server, 'ACTIVE') # Assert usages are non zero now. usages_during = self._get_usages(placement_api, rp_uuid) for usage in usages_during.values(): self.assertNotEqual(0, usage) # Force-down compute to trigger local delete. compute.stop() compute_service_id = self.admin_api.get_services( host=compute.host, binary='nova-compute')[0]['id'] self.admin_api.put_service(compute_service_id, {'forced_down': True}) # Delete the server (will be a local delete because compute is down). self._delete_server(server) # Get the allocations again to make sure they were deleted. usages_after = self._get_usages(placement_api, rp_uuid) # They should match. self.assertEqual(usages_before, usages_after) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1682693.py0000664000175000017500000000725100000000000024515 0ustar00zuulzuul00000000000000# Copyright 2017 Huawei Technologies Co.,LTD. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as image_fake from nova.tests.unit import policy_fixture class ServerTagsFilteringTest(test.TestCase, integrated_helpers.InstanceHelperMixin): """Simple tests to create servers with tags and then list servers using the various tag filters. This is a regression test for bug 1682693 introduced in Newton when we started pulling instances from cell0 and the main cell. """ def setUp(self): super(ServerTagsFilteringTest, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) # The NeutronFixture is needed to stub out validate_networks in API. self.useFixture(nova_fixtures.NeutronFixture(self)) # Use the PlacementFixture to avoid annoying warnings in the logs. self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api # the image fake backend needed for image discovery image_fake.stub_out_image_service(self) self.addCleanup(image_fake.FakeImageService_reset) # Use the latest microversion available to make sure something does # not regress in new microversions; cap as necessary. 
self.api.microversion = 'latest' self.start_service('conductor') self.start_service('scheduler') self.start_service('compute') # create two test servers self.servers = [] for x in range(2): server = self._build_server(networks='none') server = self.api.post_server({'server': server}) self.addCleanup(self.api.delete_server, server['id']) server = self._wait_for_state_change(server, 'ACTIVE') self.servers.append(server) # now apply two tags to the first server self.two_tag_server = self.servers[0] self.api.put_server_tags(self.two_tag_server['id'], ['foo', 'bar']) # apply one tag to the second server which intersects with one tag # from the first server self.one_tag_server = self.servers[1] self.api.put_server_tags(self.one_tag_server['id'], ['foo']) def test_list_servers_filter_by_tags(self): """Tests listing servers and filtering by the 'tags' query parameter which uses AND logic. """ servers = self.api.get_servers(search_opts=dict(tags='foo,bar')) # we should get back our server that has both tags self.assertEqual(1, len(servers)) server = servers[0] self.assertEqual(self.two_tag_server['id'], server['id']) self.assertEqual(2, len(server['tags'])) self.assertEqual(['bar', 'foo'], sorted(server['tags'])) # query for the shared tag and we should get two servers back servers = self.api.get_servers(search_opts=dict(tags='foo')) self.assertEqual(2, len(servers)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1689692.py0000664000175000017500000000704400000000000024523 0ustar00zuulzuul00000000000000# Copyright 2017 Huawei Technologies Co.,LTD. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit import cast_as_call from nova.tests.unit.image import fake as image_fake from nova.tests.unit import policy_fixture class ServerListLimitMarkerCell0Test(test.TestCase, integrated_helpers.InstanceHelperMixin): """Regression test for bug 1689692 introduced in Ocata. The user specifies a limit which is greater than the number of instances left in the page and the marker starts in the cell0 database. What happens is we don't null out the marker but we still have more limit so we continue to page in the cell database(s) but the marker isn't found in any of those, since it's already found in cell0, so it eventually raises a MarkerNotFound error. """ def setUp(self): super(ServerListLimitMarkerCell0Test, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) # The NeutronFixture is needed to stub out validate_networks in API. 
self.useFixture(nova_fixtures.NeutronFixture(self)) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api # the image fake backend needed for image discovery image_fake.stub_out_image_service(self) self.addCleanup(image_fake.FakeImageService_reset) # Use the latest microversion available to make sure something does # not regress in new microversions; cap as necessary. self.api.microversion = 'latest' self.start_service('conductor') self.start_service('scheduler') # We don't start the compute service because we want NoValidHost so # all of the instances go into ERROR state and get put into cell0. self.useFixture(cast_as_call.CastAsCall(self)) def test_list_servers_marker_in_cell0_more_limit(self): """Creates three servers, then lists them with a marker on the first and a limit of 3 which is more than what's left to page on (2) but it shouldn't fail, it should just give the other two back. """ # create three test servers for x in range(3): server_req = self._build_server(networks='none') server = self.api.post_server({'server': server_req}) self.addCleanup(self.api.delete_server, server['id']) self._wait_for_state_change(server, 'ERROR') servers = self.api.get_servers() self.assertEqual(3, len(servers)) # Take the first server and user that as our marker. marker = servers[0]['id'] # Since we're paging after the first server as our marker, there are # only two left so specifying three should just return two. servers = self.api.get_servers(search_opts=dict(marker=marker, limit=3)) self.assertEqual(2, len(servers)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1702454.py0000664000175000017500000001275000000000000024501 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit import cast_as_call from nova.tests.unit.image import fake as image_fake from nova.tests.unit import policy_fixture class SchedulerOnlyChecksTargetTest(test.TestCase, integrated_helpers.InstanceHelperMixin): """Regression test for bug 1702454 introduced in Newton. That test is for verifying that if we evacuate by providing a target, the scheduler only checks the related host. If the host is not able to accepting the instance, it would return a NoValidHost to the user instead of passing the instance to another host. Unfortunately, when we wrote the feature for that in Newton, we forgot to transform the new RequestSpec field called `requested_destination` into an item for the legacy filter_properties dictionary so the scheduler wasn't getting it. 
That test will use 3 hosts: - host1 which will be the source host for the instance - host2 which will be the requested target when evacuating from host1 - host3 which could potentially be the evacuation target if the scheduler doesn't correctly get host2 as the requested destination from the user. """ def setUp(self): super(SchedulerOnlyChecksTargetTest, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) # The NeutronFixture is needed to stub out validate_networks in API. self.useFixture(nova_fixtures.NeutronFixture(self)) # We need the computes reporting into placement for the filter # scheduler to pick a host. self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) # The admin API is used to get the server details to verify the # host on which the server was built. self.admin_api = api_fixture.admin_api self.api = api_fixture.api # the image fake backend needed for image discovery image_fake.stub_out_image_service(self) self.addCleanup(image_fake.FakeImageService_reset) self.start_service('conductor') # Use the latest microversion available to make sure something does # not regress in new microversions; cap as necessary. self.admin_api.microversion = 'latest' self.api.microversion = 'latest' # Define a very basic scheduler that only verifies if host is down. self.flags(enabled_filters=['ComputeFilter'], group='filter_scheduler') # NOTE(sbauza): Use the HostNameWeigherFixture so we are sure that # we prefer first host1 for the boot request and forget about any # other weigher. # Host2 should only be preferred over host3 if and only if that's the # only host we verify (as requested_destination does). self.useFixture(nova_fixtures.HostNameWeigherFixture( weights={'host1': 100, 'host2': 1, 'host3': 50})) self.start_service('scheduler') # Let's now start three compute nodes as we said above. self.start_service('compute', host='host1') self.start_service('compute', host='host2') self.start_service('compute', host='host3') self.useFixture(cast_as_call.CastAsCall(self)) def test_evacuate_server(self): # We first create the instance server = self._build_server(networks='none') server = self.admin_api.post_server({'server': server}) server_id = server['id'] self.addCleanup(self.api.delete_server, server_id) self._wait_for_state_change(server, 'ACTIVE') # We need to get instance details for knowing its host server = self.admin_api.get_server(server_id) host = server['OS-EXT-SRV-ATTR:host'] # As weigher prefers host1, we are sure we find it here. self.assertEqual('host1', host) # Now, force host1 to be down. As we use ComputeFilter, it won't ever # be a possible destination for the scheduler now. self.admin_api.microversion = '2.11' # Cap for the force-down call. self.admin_api.force_down_service(host, 'nova-compute', True) self.admin_api.microversion = 'latest' # It's time to evacuate by asking host2 as a target. Remember, the # only possibility the instance can end up on it is because the # scheduler should only verify the requested destination as host2 # is weighed lower than host3. evacuate = { 'evacuate': { 'host': 'host2' } } self.admin_api.post_server_action(server['id'], evacuate) self._wait_for_state_change(server, 'ACTIVE') server = self.admin_api.get_server(server_id) # Yeepee, that works! 
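        # The server should now be on host2, the explicitly requested
        # evacuation target, which shows the scheduler only verified the
        # requested destination rather than weighing all enabled hosts.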
self.assertEqual('host2', server['OS-EXT-SRV-ATTR:host']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1713783.py0000664000175000017500000001047300000000000024510 0ustar00zuulzuul00000000000000# Copyright 2017 Ericsson # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import time from oslo_log import log as logging from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit import fake_network from nova.tests.unit import fake_notifier import nova.tests.unit.image.fake from nova.tests.unit import policy_fixture LOG = logging.getLogger(__name__) class FailedEvacuateStateTests(test.TestCase, integrated_helpers.InstanceHelperMixin): """Regression Tests for bug #1713783 When evacuation fails with NoValidHost, the migration status remains 'accepted' instead of 'error'. This causes problem in case the compute service starts up again and looks for migrations with status 'accepted', as it then removes the local instances for those migrations even though the instance never actually migrated to another host. """ microversion = 'latest' def setUp(self): super(FailedEvacuateStateTests, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.admin_api self.api.microversion = self.microversion nova.tests.unit.image.fake.stub_out_image_service(self) self.start_service('conductor') self.start_service('scheduler') self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset) self.hostname = 'host1' self.compute1 = self.start_service('compute', host=self.hostname) fake_network.set_stub_network_methods(self) def _wait_for_notification_event_type(self, event_type, max_retries=10): retry_counter = 0 while True: if len(fake_notifier.NOTIFICATIONS) > 0: for notification in fake_notifier.NOTIFICATIONS: if notification.event_type == event_type: return if retry_counter == max_retries: self.fail('Wait for notification event type (%s) failed' % event_type) retry_counter += 1 time.sleep(0.5) def _boot_a_server(self): server_req = self._build_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', networks='none') LOG.info('booting on %s', self.hostname) created_server = self.api.post_server({'server': server_req}) return self._wait_for_state_change(created_server, 'ACTIVE') def test_evacuate_no_valid_host(self): # Boot a server server = self._boot_a_server() # Force source compute down compute_id = self.api.get_services( host=self.hostname, binary='nova-compute')[0]['id'] self.api.put_service(compute_id, {'forced_down': 'true'}) fake_notifier.stub_notifier(self) fake_notifier.reset() # Initiate evacuation post = {'evacuate': {}} 
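        # The only compute service was just forced down and there is no other
        # host to evacuate to, so the scheduler raises NoValidHost; the
        # migration record created for this evacuation should end up with an
        # 'error' status (see the assertions below).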
self.api.post_server_action(server['id'], post) self._wait_for_notification_event_type('compute_task.rebuild_server') server = self._wait_for_state_change(server, 'ERROR') self.assertEqual(self.hostname, server['OS-EXT-SRV-ATTR:host']) # Check migrations migrations = self.api.get_migrations() self.assertEqual(1, len(migrations)) self.assertEqual('evacuation', migrations[0]['migration_type']) self.assertEqual(server['id'], migrations[0]['instance_uuid']) self.assertEqual(self.hostname, migrations[0]['source_compute']) self.assertEqual('error', migrations[0]['status']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1718455.py0000664000175000017500000001322300000000000024505 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import time from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit import fake_network import nova.tests.unit.image.fake from nova.tests.unit import policy_fixture class TestLiveMigrateOneOfConcurrentlyCreatedInstances( test.TestCase, integrated_helpers.InstanceHelperMixin): """Regression tests for bug #1718455 When creating multiple instances at the same time, the RequestSpec record is persisting the number of concurrent instances. When moving one of those instances, the scheduler should not include num_instances > 1 in the request spec. It was partially fixed by bug #1708961 but we forgot to amend some place in the scheduler so that the right number of hosts was returned to the scheduler method calling both the Placement API and filters/weighers but we were verifying that returned size of hosts against a wrong number, which is the number of instances created concurrently. That test will create 2 concurrent instances and verify that when live-migrating one of them, we end up with a correct move operation. 
""" microversion = 'latest' def setUp(self): super(TestLiveMigrateOneOfConcurrentlyCreatedInstances, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.admin_api self.api.microversion = self.microversion nova.tests.unit.image.fake.stub_out_image_service(self) self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset) self.start_service('conductor') self.start_service('scheduler') self.start_service('compute', host='host1') self.start_service('compute', host='host2') fake_network.set_stub_network_methods(self) def _boot_servers(self, num_servers=1): server_req = self._build_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', networks='none') server_req.update({'min_count': str(num_servers), 'return_reservation_id': 'True'}) response = self.api.post_server({'server': server_req}) reservation_id = response['reservation_id'] # lookup servers created by the multi-create request. servers = self.api.get_servers(detail=True, search_opts={'reservation_id': reservation_id}) for idx, server in enumerate(servers): servers[idx] = self._wait_for_state_change(server, 'ACTIVE') return servers def _wait_for_migration_status(self, server, expected_status): """Waits for a migration record with the given status to be found for the given server, else the test fails. The migration record, if found, is returned. """ for attempt in range(10): migrations = self.api.get_migrations() for migration in migrations: if (migration['instance_uuid'] == server['id'] and migration['status'].lower() == expected_status.lower()): return migration time.sleep(0.5) self.fail('Timed out waiting for migration with status "%s" for ' 'instance: %s. Current instance migrations: %s' % (expected_status, server['id'], migrations)) def test_live_migrate_one_multi_created_instance(self): # Boot two servers in a multi-create request servers = self._boot_servers(num_servers=2) # Take the first instance and verify which host the instance is there server = servers[0] original_host = server['OS-EXT-SRV-ATTR:host'] target_host = 'host1' if original_host == 'host2' else 'host2' # Initiate live migration for that instance by targeting the other host post = {'os-migrateLive': {'block_migration': 'auto', 'host': target_host}} # NOTE(sbauza): Since API version 2.34, live-migration pre-checks are # now done asynchronously so even if we hit a NoValidHost exception by # the conductor, the API call will always succeed with a HTTP202 # response code. In order to verify whether the migration succeeded, # we need to lookup the migrations API. self.api.post_server_action(server['id'], post) # Poll the migration until it is done. migration = self._wait_for_migration_status(server, 'completed') self.assertEqual('live-migration', migration['migration_type']) self.assertEqual(original_host, migration['source_compute']) # Verify that the migration succeeded as the instance is now on the # destination node. 
server = self.api.get_server(server['id']) self.assertEqual(target_host, server['OS-EXT-SRV-ATTR:host']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1718512.py0000664000175000017500000001451600000000000024505 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.compute import manager as compute_manager from nova import context as nova_context from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as image_fake from nova.tests.unit import policy_fixture class TestRequestSpecRetryReschedule(test.TestCase, integrated_helpers.InstanceHelperMixin): """Regression test for bug 1718512 introduced in Newton. Contains a test for a regression where an instance builds on one host, then is resized. During the resize, the first attempted host fails and the resize is rescheduled to another host which passes. The failed host is persisted in the RequestSpec.retry field by mistake. Then later when trying to live migrate the instance to the same host that failed during resize, it is rejected by the RetryFilter because it's already in the RequestSpec.retry field. """ def setUp(self): super(TestRequestSpecRetryReschedule, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) # The NeutronFixture is needed to stub out validate_networks in API. self.useFixture(nova_fixtures.NeutronFixture(self)) # We need the computes reporting into placement for the filter # scheduler to pick a host. self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) # The admin API is used to get the server details to verify the # host on which the server was built. self.admin_api = api_fixture.admin_api self.api = api_fixture.api # the image fake backend needed for image discovery image_fake.stub_out_image_service(self) self.addCleanup(image_fake.FakeImageService_reset) self.start_service('conductor') # We have to get the image before we use 2.latest otherwise we'll get # a 404 on the /images proxy API because of 2.36. self.image_id = self.api.get_images()[0]['id'] # Use the latest microversion available to make sure something does # not regress in new microversions; cap as necessary. self.admin_api.microversion = 'latest' self.api.microversion = 'latest' # Use custom weigher to make sure that we have a predictable # scheduling sort order. self.useFixture(nova_fixtures.HostNameWeigherFixture()) self.start_service('scheduler') # Let's now start three compute nodes as we said above. 
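        # The HostNameWeigherFixture set up above gives a predictable sort
        # order: the boot request lands on host1, and when the stubbed resize
        # failure on host2 triggers a reschedule the server ends up on host3
        # (see the test docstring below).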
for host in ['host1', 'host2', 'host3']: self.start_service('compute', host=host) def _stub_resize_failure(self, failed_host): actual_prep_resize = compute_manager.ComputeManager._prep_resize def fake_prep_resize(_self, *args, **kwargs): if _self.host == failed_host: raise Exception('%s:fake_prep_resize' % failed_host) actual_prep_resize(_self, *args, **kwargs) self.stub_out('nova.compute.manager.ComputeManager._prep_resize', fake_prep_resize) def test_resize_with_reschedule_then_live_migrate(self): """Tests the following scenario: - Server is created on host1 successfully. - Server is resized; host2 is tried and fails, and rescheduled to host3. - Then try to live migrate the instance to host2 which should work. """ flavors = self.api.get_flavors() flavor1 = flavors[0] flavor2 = flavors[1] if flavor1["disk"] > flavor2["disk"]: # Make sure that flavor1 is smaller flavor1, flavor2 = flavor2, flavor1 # create the instance which should go to host1 server = self._build_server( image_uuid=self.image_id, flavor_id=flavor1['id'], networks='none') server = self.admin_api.post_server({'server': server}) server = self._wait_for_state_change(server, 'ACTIVE') self.assertEqual('host1', server['OS-EXT-SRV-ATTR:host']) # Stub out the resize to fail on host2, which will trigger a reschedule # to host3. self._stub_resize_failure('host2') # Resize the server to flavor2, which should make it ultimately end up # on host3. data = {'resize': {'flavorRef': flavor2['id']}} self.api.post_server_action(server['id'], data) server = self._wait_for_state_change(server, 'VERIFY_RESIZE') self.assertEqual('host3', server['OS-EXT-SRV-ATTR:host']) self.api.post_server_action(server['id'], {'confirmResize': None}) server = self._wait_for_state_change(server, 'ACTIVE') # Now live migrate the server to host2 specifically, which previously # failed the resize attempt but here it should pass. data = {'os-migrateLive': {'host': 'host2', 'block_migration': 'auto'}} self.admin_api.post_server_action(server['id'], data) server = self._wait_for_state_change(server, 'ACTIVE') self.assertEqual('host2', server['OS-EXT-SRV-ATTR:host']) # NOTE(mriedem): The instance status effectively goes to ACTIVE before # the migration status is changed to "completed" since # post_live_migration_at_destination changes the instance status # and _post_live_migration changes the migration status later. So we # need to poll the migration record until it's complete or we timeout. self._wait_for_migration_status(server, ['completed']) reqspec = objects.RequestSpec.get_by_instance_uuid( nova_context.get_admin_context(), server['id']) self.assertIsNone(reqspec.retry) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1719730.py0000664000175000017500000001203000000000000024475 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from nova import exception from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit import fake_network import nova.tests.unit.image.fake from nova.tests.unit import policy_fixture class TestRescheduleWithServerGroup(test.TestCase, integrated_helpers.InstanceHelperMixin): """This tests a regression introduced in the Pike release. In Pike we converted the affinity filter code to use the RequestSpec object instead of legacy dicts. The filter used to populate server group info in the filter_properties and the conversion removed that. However, in the conductor, we are still converting RequestSpec back and forth between object and primitive, and there is a mismatch between the keys being set/get in filter_properties. So during a reschedule with a server group, we hit an exception "'NoneType' object is not iterable" in the RequestSpec.from_primitives method and the reschedule fails. """ def setUp(self): super(TestRescheduleWithServerGroup, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) # The NeutronFixture is needed to stub out validate_networks in API. self.useFixture(nova_fixtures.NeutronFixture(self)) # This stubs out the network allocation in compute. fake_network.set_stub_network_methods(self) # We need the computes reporting into placement for the filter # scheduler to pick a host. self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api # The admin API is used to get the server details to verify the # host on which the server was built. self.admin_api = api_fixture.admin_api # the image fake backend needed for image discovery nova.tests.unit.image.fake.stub_out_image_service(self) self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset) self.start_service('conductor') self.start_service('scheduler') # We start two compute services because we're going to fake one raising # RescheduledException to trigger a retry to the other compute host. self.start_service('compute', host='host1') self.start_service('compute', host='host2') self.image_id = self.api.get_images()[0]['id'] self.flavor_id = self.api.get_flavors()[0]['id'] # This is our flag that we set when we hit the first host and # made it fail. self.failed_host = None self.attempts = 0 def fake_validate_instance_group_policy(_self, *args, **kwargs): self.attempts += 1 if self.failed_host is None: # Set the failed_host value to the ComputeManager.host value. self.failed_host = _self.host raise exception.RescheduledException(instance_uuid='fake', reason='Policy violated') self.stub_out('nova.compute.manager.ComputeManager.' '_validate_instance_group_policy', fake_validate_instance_group_policy) def test_reschedule_with_server_group(self): """Tests the reschedule with server group when one compute host fails. This tests the scenario where we have two compute services and try to build a single server. The test is setup such that the scheduler picks the first host which we mock out to fail the late affinity check. This should then trigger a retry on the second host. 
""" group = {'name': 'a-name', 'policies': ['affinity']} created_group = self.api.post_server_groups(group) server = {'name': 'retry-with-server-group', 'imageRef': self.image_id, 'flavorRef': self.flavor_id} hints = {'group': created_group['id']} created_server = self.api.post_server({'server': server, 'os:scheduler_hints': hints}) found_server = self._wait_for_state_change(created_server, 'ACTIVE') # Assert that the host is not the failed host. self.assertNotEqual(self.failed_host, found_server['OS-EXT-SRV-ATTR:host']) # Assert that we retried. self.assertEqual(2, self.attempts) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1732947.py0000664000175000017500000000741200000000000024512 0ustar00zuulzuul00000000000000# Copyright 2017 Huawei Technologies Co.,LTD. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import nova.conf from nova.tests import fixtures as nova_fixtures from nova.tests.functional import integrated_helpers CONF = nova.conf.CONF class RebuildVolumeBackedSameImage(integrated_helpers._IntegratedTestBase): """Tests the regression in bug 1732947 where rebuilding a volume-backed instance with the original image still results in conductor calling the scheduler to validate the image. This is because the instance.image_ref is not set for a volume-backed instance, so the conditional check in the API to see if the provided image_ref for rebuild is different than the original image. """ api_major_version = 'v2.1' microversion = 'latest' def setUp(self): super(RebuildVolumeBackedSameImage, self).setUp() # We are creating a volume-backed server so we need the CinderFixture. self.useFixture(nova_fixtures.CinderFixture(self)) def _setup_scheduler_service(self): # Add the IsolatedHostsFilter to the list of enabled filters since it # is not enabled by default. enabled_filters = CONF.filter_scheduler.enabled_filters enabled_filters.append('IsolatedHostsFilter') self.flags(enabled_filters=enabled_filters, group='filter_scheduler') return self.start_service('scheduler') def test_volume_backed_rebuild_same_image(self): # First create our server as normal. volume_id = nova_fixtures.CinderFixture.IMAGE_BACKED_VOL server_req_body = { # There is no imageRef because this is boot from volume. 'server': { 'flavorRef': '1', # m1.tiny from DefaultFlavorsFixture, 'name': 'test_volume_backed_rebuild_same_image', # We don't care about networking for this test. This requires # microversion >= 2.37. 'networks': 'none', 'block_device_mapping_v2': [{ 'boot_index': 0, 'uuid': volume_id, 'source_type': 'volume', 'destination_type': 'volume' }] } } server = self.api.post_server(server_req_body) server = self._wait_for_state_change(server, 'ACTIVE') # For a volume-backed server, the image ref will be an empty string # in the server response. 
self.assertEqual('', server['image']) # Now we mark the host that the instance is running on as isolated # but we won't mark the image as isolated, meaning the rebuild # will fail for that image on that host. self.flags(isolated_hosts=[self.compute.host], group='filter_scheduler') # Now rebuild the server with the same image that was used to create # our fake volume. rebuild_req_body = { 'rebuild': { 'imageRef': '155d900f-4e14-4e4c-a73d-069cbf4541e6' } } server = self.api.api_post('/servers/%s/action' % server['id'], rebuild_req_body).body['server'] # The server image ref should still be blank for a volume-backed server # after the rebuild. self.assertEqual('', server['image']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1735407.py0000664000175000017500000001674300000000000024513 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit import fake_network from nova.tests.unit import fake_notifier import nova.tests.unit.image.fake from nova.tests.unit import policy_fixture class TestParallelEvacuationWithServerGroup( test.TestCase, integrated_helpers.InstanceHelperMixin): """Verifies that the server group policy is not violated during parallel evacuation. """ def setUp(self): super(TestParallelEvacuationWithServerGroup, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) # The NeutronFixture is needed to stub out validate_networks in API. self.useFixture(nova_fixtures.NeutronFixture(self)) # This stubs out the network allocation in compute. fake_network.set_stub_network_methods(self) # We need the computes reporting into placement for the filter # scheduler to pick a host. 
self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.admin_api # 2.11 is needed for force_down # 2.14 is needed for evacuate without onSharedStorage flag self.api.microversion = '2.14' fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) # the image fake backend needed for image discovery nova.tests.unit.image.fake.stub_out_image_service(self) self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset) self.start_service('conductor') self.start_service('scheduler') # We start two compute services because we need two instances with # anti-affinity server group policy to be booted self.compute1 = self.start_service('compute', host='host1') self.compute2 = self.start_service('compute', host='host2') self.image_id = self.api.get_images()[0]['id'] self.flavor_id = self.api.get_flavors()[0]['id'] manager_class = nova.compute.manager.ComputeManager original_rebuild = manager_class._do_rebuild_instance def fake_rebuild(self_, context, instance, *args, **kwargs): # Simulate that the rebuild request of one of the instances # reaches the target compute manager significantly later so the # rebuild of the other instance can finish before the late # validation of the first rebuild. # We cannot simply delay the virt driver's rebuild or the # manager's _rebuild_default_impl as those run after the late # validation if instance.host == 'host1': # wait for the other instance rebuild to start fake_notifier.wait_for_versioned_notifications( 'instance.rebuild.start', n_events=1) original_rebuild(self_, context, instance, *args, **kwargs) self.stub_out('nova.compute.manager.ComputeManager.' '_do_rebuild_instance', fake_rebuild) def test_parallel_evacuate_with_server_group(self): self.skipTest('Skipped until bug 1763181 is fixed') group_req = {'name': 'a-name', 'policies': ['anti-affinity']} group = self.api.post_server_groups(group_req) # boot two instances with anti-affinity server = {'name': 'server', 'imageRef': self.image_id, 'flavorRef': self.flavor_id} hints = {'group': group['id']} created_server1 = self.api.post_server({'server': server, 'os:scheduler_hints': hints}) server1 = self._wait_for_state_change(created_server1, 'ACTIVE') created_server2 = self.api.post_server({'server': server, 'os:scheduler_hints': hints}) server2 = self._wait_for_state_change(created_server2, 'ACTIVE') # assert that the anti-affinity policy is enforced during the boot self.assertNotEqual(server1['OS-EXT-SRV-ATTR:host'], server2['OS-EXT-SRV-ATTR:host']) # simulate compute failure on both compute host to allow evacuation self.compute1.stop() # force it down to avoid waiting for the service group to time out self.api.force_down_service('host1', 'nova-compute', True) self.compute2.stop() self.api.force_down_service('host2', 'nova-compute', True) # start a third compute to have place for one of the instances self.compute3 = self.start_service('compute', host='host3') # evacuate both instances post = {'evacuate': {}} self.api.post_server_action(server1['id'], post) self.api.post_server_action(server2['id'], post) # make sure that the rebuild is started and then finished # NOTE(mdbooth): We only get 1 rebuild.start notification here because # we validate server group policy (and therefore fail) before emitting # rebuild.start. 
fake_notifier.wait_for_versioned_notifications( 'instance.rebuild.start', n_events=1) server1 = self._wait_for_server_parameter( server1, {'OS-EXT-STS:task_state': None}) server2 = self._wait_for_server_parameter( server2, {'OS-EXT-STS:task_state': None}) # NOTE(gibi): The instance.host set _after_ the instance state and # tast_state is set back to normal so it is not enough to wait for # that. The only thing that happens after the instance.host is set to # the target host is the migration status setting to done. So we have # to wait for that to avoid asserting the wrong host below. self._wait_for_migration_status(server1, ['done', 'failed']) self._wait_for_migration_status(server2, ['done', 'failed']) # get the servers again to have the latest information about their # hosts server1 = self.api.get_server(server1['id']) server2 = self.api.get_server(server2['id']) # assert that the anti-affinity policy is enforced during the # evacuation self.assertNotEqual(server1['OS-EXT-SRV-ATTR:host'], server2['OS-EXT-SRV-ATTR:host']) # assert that one of the evacuation was successful and that server is # moved to another host and the evacuation of the other server is # failed if server1['status'] == 'ERROR': failed_server = server1 evacuated_server = server2 else: failed_server = server2 evacuated_server = server1 self.assertEqual('ERROR', failed_server['status']) self.assertNotEqual('host3', failed_server['OS-EXT-SRV-ATTR:host']) self.assertEqual('ACTIVE', evacuated_server['status']) self.assertEqual('host3', evacuated_server['OS-EXT-SRV-ATTR:host']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1741125.py0000664000175000017500000000743600000000000024504 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from nova.compute import manager as compute_manager from nova.tests.functional import integrated_helpers class TestServerResizeReschedule(integrated_helpers.ProviderUsageBaseTestCase): """Regression test for bug #1741125 During testing in the alternate host series, it was found that retries when resizing an instance would always fail. This turned out to be true even before alternate hosts for resize was introduced. Further investigation showed that there was a race in call to retry the resize and the revert of the original attempt. This adds a functional regression test to show the failure. A follow up patch with the fix will modify the test to show it passing again. 
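    A minimal sketch of the retry pattern this test exercises, using
    hypothetical names (a ``host_list`` of pre-selected hosts and a
    ``prep_resize_on_host`` callable) rather than the real
    conductor/compute interfaces:

        def retry_resize_from_host_list(host_list, prep_resize_on_host):
            # Try each pre-computed alternate in order instead of asking
            # the scheduler again after every failed attempt.
            while host_list:
                selection = host_list.pop(0)
                try:
                    return prep_resize_on_host(selection)
                except Exception:
                    # Fall through to the next alternate host.
                    continue
            raise RuntimeError('no more alternate hosts to retry from')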
""" compute_driver = 'fake.SmallFakeDriver' def setUp(self): super(TestServerResizeReschedule, self).setUp() self.compute1 = self._start_compute(host='host1') self.compute2 = self._start_compute(host='host2') self.compute3 = self._start_compute(host='host3') self.compute4 = self._start_compute(host='host4') flavors = self.api.get_flavors() self.flavor1 = flavors[0] self.flavor2 = flavors[1] if self.flavor1["disk"] > self.flavor2["disk"]: # Make sure that flavor1 is smaller self.flavor1, self.flavor2 = self.flavor2, self.flavor1 def test_resize_reschedule_uses_host_lists(self): """Test that when a resize attempt fails, the retry comes from the supplied host_list, and does not call the scheduler. """ server_req = self._build_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', flavor_id=self.flavor1['id'], networks='none') self.first_attempt = True created_server = self.api.post_server({'server': server_req}) server = self._wait_for_state_change(created_server, 'ACTIVE') actual_prep_resize = compute_manager.ComputeManager._prep_resize def fake_prep_resize(*args, **kwargs): if self.first_attempt: # Only fail the first time self.first_attempt = False raise Exception('fake_prep_resize') actual_prep_resize(*args, **kwargs) # Yes this isn't great in a functional test, but it's simple. self.stub_out('nova.compute.manager.ComputeManager._prep_resize', fake_prep_resize) server_uuid = server["id"] data = {"resize": {"flavorRef": self.flavor2['id']}} self.api.post_server_action(server_uuid, data) server = self._wait_for_state_change(created_server, 'VERIFY_RESIZE') self.assertEqual(self.flavor2['name'], server['flavor']['original_name']) class TestServerResizeRescheduleWithNoAllocations( TestServerResizeReschedule): """Tests the reschedule scenario using a scheduler driver which does not use Placement. """ def setUp(self): super(TestServerResizeRescheduleWithNoAllocations, self).setUp() # We need to mock the FilterScheduler to not use Placement so that # allocations won't be created during scheduling. self.scheduler_service.manager.driver.USES_ALLOCATION_CANDIDATES = \ False ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1741307.py0000664000175000017500000001061500000000000024477 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers import nova.tests.unit.image.fake from nova.tests.unit import policy_fixture class TestResizeWithNoAllocationScheduler( test.TestCase, integrated_helpers.InstanceHelperMixin): """Regression tests for bug #1741307 Some scheduler drivers, like the old CachingScheduler driver, do not use Placement to make claims (allocations) against compute nodes during scheduling like the FilterScheduler does. 
During a cold migrate / resize, the FilterScheduler will "double up" the instance allocations so the instance has resource allocations made against both the source node and the chosen destination node. Conductor will then attempt to "swap" the source node allocation to the migration record. If using a non-Placement driver, there are no allocations for the instance on the source node and conductor fails. Note that if the compute running the instance was running Ocata code or older, then the compute itself would create the allocations in Placement via the ResourceTracker, but once all computes are upgraded to Pike or newer, the compute no longer creates allocations in Placement because it assumes the scheduler is doing that, which is not the case with these outlier scheduler drivers. This is a regression test to show the failure before it's fixed and then can be used to confirm the fix. """ microversion = 'latest' def setUp(self): super(TestResizeWithNoAllocationScheduler, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.admin_api self.api.microversion = self.microversion nova.tests.unit.image.fake.stub_out_image_service(self) self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset) self.start_service('conductor') # Create two compute nodes/services. for host in ('host1', 'host2'): self.start_service('compute', host=host) scheduler_service = self.start_service('scheduler') # We need to mock the FilterScheduler to not use Placement so that # allocations won't be created during scheduling. scheduler_service.manager.driver.USES_ALLOCATION_CANDIDATES = False flavors = self.api.get_flavors() self.old_flavor = flavors[0] self.new_flavor = flavors[1] def test_resize(self): # Create our server without networking just to keep things simple. server_req = self._build_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', flavor_id=self.old_flavor['id'], networks='none') server = self.api.post_server({'server': server_req}) server = self._wait_for_state_change(server, 'ACTIVE') original_host = server['OS-EXT-SRV-ATTR:host'] target_host = 'host1' if original_host == 'host2' else 'host2' # Issue the resize request. post = { 'resize': { 'flavorRef': self.new_flavor['id'] } } self.api.post_server_action(server['id'], post) # Poll the server until the resize is done. server = self._wait_for_state_change(server, 'VERIFY_RESIZE') # Assert that the server was migrated to the other host. self.assertEqual(target_host, server['OS-EXT-SRV-ATTR:host']) # Confirm the resize. post = {'confirmResize': None} self.api.post_server_action(server['id'], post, check_response_status=[204]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1746483.py0000664000175000017500000001045100000000000024507 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. from nova import config from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as image_fakes from nova.tests.unit import policy_fixture from nova import utils CONF = config.CONF class TestBootFromVolumeIsolatedHostsFilter( test.TestCase, integrated_helpers.InstanceHelperMixin): """Regression test for bug #1746483 The IsolatedHostsFilter checks for images restricted to certain hosts via config options. When creating a server from a root volume, the image is in the volume (and it's related metadata from Cinder). When creating a volume-backed server, the imageRef is not required. The regression is that the RequestSpec.image.id field is not set and the IsolatedHostsFilter blows up trying to load the image id. """ def setUp(self): super(TestBootFromVolumeIsolatedHostsFilter, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(nova_fixtures.CinderFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.admin_api image_fakes.stub_out_image_service(self) self.addCleanup(image_fakes.FakeImageService_reset) self.start_service('conductor') # Add the IsolatedHostsFilter to the list of enabled filters since it # is not enabled by default. enabled_filters = CONF.filter_scheduler.enabled_filters enabled_filters.append('IsolatedHostsFilter') self.flags( enabled_filters=enabled_filters, isolated_images=[image_fakes.AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID], isolated_hosts=['host1'], restrict_isolated_hosts_to_isolated_images=True, group='filter_scheduler') self.start_service('scheduler') # Create two compute nodes/services so we can restrict the image # we'll use to one of the hosts. for host in ('host1', 'host2'): self.start_service('compute', host=host) def test_boot_from_volume_with_isolated_image(self): # Create our server without networking just to keep things simple. image_id = nova_fixtures.CinderFixture.IMAGE_BACKED_VOL server_req_body = { # There is no imageRef because this is boot from volume. 'server': { 'flavorRef': '1', # m1.tiny from DefaultFlavorsFixture, 'name': 'test_boot_from_volume_with_isolated_image', 'networks': 'none', 'block_device_mapping_v2': [{ 'boot_index': 0, 'uuid': image_id, 'source_type': 'volume', 'destination_type': 'volume' }] } } # Note that we're using v2.1 by default but need v2.37 to use # networks='none'. with utils.temporary_mutation(self.api, microversion='2.37'): server = self.api.post_server(server_req_body) server = self._wait_for_state_change(server, 'ACTIVE') # NOTE(mriedem): The instance is successfully scheduled but since # the image_id from the volume_image_metadata isn't stored in the # RequestSpec.image.id, and restrict_isolated_hosts_to_isolated_images # is True, the isolated host (host1) is filtered out because the # filter doesn't have enough information to know if the image within # the volume can be used on that host. 
self.assertEqual('host2', server['OS-EXT-SRV-ATTR:host']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1764556.py0000664000175000017500000001521000000000000024506 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import context as nova_context from nova.db import api as db from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as fake_image from nova.tests.unit import policy_fixture from nova import utils class InstanceListWithDeletedServicesTestCase( test.TestCase, integrated_helpers.InstanceHelperMixin): """Contains regression testing for bug 1764556 which is similar to bug 1746509 but different in that it deals with a deleted service and compute node which was not upgraded to the point of having a UUID and that causes problems later when an instance which was running on that node is migrated back to an upgraded service with the same hostname. This is because the service uuid migration routine gets a ServiceNotFound error when loading up a deleted service by hostname. """ def setUp(self): super(InstanceListWithDeletedServicesTestCase, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) # The NeutronFixture is needed to stub out validate_networks in API. self.useFixture(nova_fixtures.NeutronFixture(self)) # We need the computes reporting into placement for the filter # scheduler to pick a host. self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api self.admin_api = api_fixture.admin_api self.admin_api.microversion = 'latest' # the image fake backend needed for image discovery fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) self.api.microversion = 'latest' self.start_service('conductor') self.start_service('scheduler') def _migrate_server(self, server, target_host): self.admin_api.api_post('/servers/%s/action' % server['id'], {'migrate': None}) server = self._wait_for_state_change(server, 'VERIFY_RESIZE') self.assertEqual(target_host, server['OS-EXT-SRV-ATTR:host']) self.admin_api.api_post('/servers/%s/action' % server['id'], {'confirmResize': None}, check_response_status=[204]) server = self._wait_for_state_change(server, 'ACTIVE') return server def test_instance_list_deleted_service_with_no_uuid(self): """This test covers the following scenario: 1. create an instance on a host which we'll simulate to be old by not having a uuid set 2. migrate the instance to the "newer" host that has a service uuid 3. delete the old service/compute node 4. start a new service with the old hostname (still host1); this will also create a new compute_nodes table record for that host/node 5. migrate the instance back to the host1 service 6. 
list instances which will try to online migrate the old service uuid """ host1 = self.start_service('compute', host='host1') # Create an instance which will be on host1 since it's the only host. server_req = self._build_server(networks='none') server = self.api.post_server({'server': server_req}) self._wait_for_state_change(server, 'ACTIVE') # Now we start a 2nd compute which is "upgraded" (has a uuid) and # we'll migrate the instance to that host. host2 = self.start_service('compute', host='host2') self.assertIsNotNone(host2.service_ref.uuid) server = self._migrate_server(server, 'host2') # Delete the host1 service (which implicitly deletes the host1 compute # node record). host1.stop() self.admin_api.api_delete( '/os-services/%s' % host1.service_ref.uuid) # We should now only have 1 compute service (host2). compute_services = self.admin_api.api_get( '/os-services?binary=nova-compute').body['services'] self.assertEqual(1, len(compute_services)) # Make sure the compute node is also gone. self.admin_api.api_get( '/os-hypervisors?hypervisor_hostname_pattern=host1', check_response_status=[404]) # Now recreate the host1 service and compute node by restarting the # service. self.restart_compute_service(host1) # At this point, host1's service should have a uuid. self.assertIsNotNone(host1.service_ref.uuid) # Sanity check that there are 3 services in the database, but only 1 # is deleted. ctxt = nova_context.get_admin_context() with utils.temporary_mutation(ctxt, read_deleted='yes'): services = db.service_get_all_by_binary(ctxt, 'nova-compute') self.assertEqual(3, len(services)) deleted_services = [svc for svc in services if svc['deleted']] self.assertEqual(1, len(deleted_services)) deleted_service = deleted_services[0] self.assertEqual('host1', deleted_service['host']) # Now migrate the instance back to host1. self._migrate_server(server, 'host1') # Now null out the service uuid to simulate that the deleted host1 # is old. We have to do this through the DB API directly since the # Service object won't allow a null uuid field. We also have to do # this *after* deleting the service via the REST API and migrating the # server because otherwise that will set a uuid when looking up the # service. with utils.temporary_mutation(ctxt, read_deleted='yes'): service_ref = db.service_update( ctxt, deleted_service['id'], {'uuid': None}) self.assertIsNone(service_ref['uuid']) # Finally, list servers as an admin so it joins on services to get host # information. servers = self.admin_api.get_servers(detail=True) for server in servers: self.assertEqual('UP', server['host_status']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1764883.py0000664000175000017500000001155500000000000024521 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit import fake_network from nova.tests.unit import fake_notifier import nova.tests.unit.image.fake from nova.tests.unit import policy_fixture class TestEvacuationWithSourceReturningDuringRebuild( test.TestCase, integrated_helpers.InstanceHelperMixin): """Assert the behaviour of evacuating instances when the src returns early. This test asserts that evacuating instances end up in an ACTIVE state on the destination even when the source host comes back online during an evacuation while the migration record is in a pre-migrating state. """ def setUp(self): super(TestEvacuationWithSourceReturningDuringRebuild, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) # The NeutronFixture is needed to stub out validate_networks in API. self.useFixture(nova_fixtures.NeutronFixture(self)) # This stubs out the network allocation in compute. fake_network.set_stub_network_methods(self) # We need the computes reporting into placement for the filter # scheduler to pick a host. self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.admin_api # 2.11 is needed for force_down # 2.14 is needed for evacuate without onSharedStorage flag self.api.microversion = '2.14' # the image fake backend needed for image discovery nova.tests.unit.image.fake.stub_out_image_service(self) self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset) self.start_service('conductor') self.start_service('scheduler') # Start two computes self._start_compute('host1') self._start_compute('host2') self.image_id = self.api.get_images()[0]['id'] self.flavor_id = self.api.get_flavors()[0]['id'] self.addCleanup(fake_notifier.reset) # Stub out rebuild with a slower method allowing the src compute to be # restarted once the migration hits pre-migrating after claiming # resources on the dest. manager_class = nova.compute.manager.ComputeManager original_rebuild = manager_class._do_rebuild_instance def start_src_rebuild(self_, context, instance, *args, **kwargs): server = self.api.get_server(instance.uuid) # Start the src compute once the migration is pre-migrating. self._wait_for_migration_status(server, ['pre-migrating']) self.computes.get(self.source_compute).start() original_rebuild(self_, context, instance, *args, **kwargs) self.stub_out('nova.compute.manager.ComputeManager.' 
'_do_rebuild_instance', start_src_rebuild) def test_evacuation_with_source_compute_returning_during_rebuild(self): # Launch an instance server_request = {'name': 'server', 'imageRef': self.image_id, 'flavorRef': self.flavor_id} server_response = self.api.post_server({'server': server_request}) server = self._wait_for_state_change(server_response, 'ACTIVE') # Record where the instance is running before forcing the service down self.source_compute = server['OS-EXT-SRV-ATTR:host'] self.computes.get(self.source_compute).stop() self.api.force_down_service(self.source_compute, 'nova-compute', True) # Start evacuating the instance from the source_host self.api.post_server_action(server['id'], {'evacuate': {}}) # Wait for the instance to go into an ACTIVE state self._wait_for_state_change(server, 'ACTIVE') server = self.api.get_server(server['id']) host = server['OS-EXT-SRV-ATTR:host'] migrations = self.api.get_migrations() # Assert that we have a single `done` migration record after the evac self.assertEqual(1, len(migrations)) self.assertEqual('done', migrations[0]['status']) # Assert that the instance is now on the dest self.assertNotEqual(self.source_compute, host) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1778305.py0000664000175000017500000000513400000000000024507 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import nova.context from nova.db import api as db from nova import objects from nova import test class InstanceListWithOldDeletedServiceTestCase(test.TestCase): def setUp(self): super(InstanceListWithOldDeletedServiceTestCase, self).setUp() self.context = nova.context.RequestContext('fake-user', 'fake-project') def test_instance_list_old_deleted_service_with_no_uuid(self): # Create a nova-compute service record with a host that will match the # instance's host, with no uuid. We can't do this through the # Service object because it will automatically generate a uuid. # Use service version 9, which is too old compared to the minimum # version in the rest of the deployment. service = db.service_create(self.context, {'host': 'fake-host', 'binary': 'nova-compute', 'version': 9}) self.assertIsNone(service['uuid']) # Now delete it. db.service_destroy(self.context, service['id']) # Create a new service with the same host name that has a UUID and a # current version. new_service = objects.Service(context=self.context, host='fake-host', binary='nova-compute') new_service.create() # Create an instance whose host will match both services, including the # deleted one. 
inst = objects.Instance(context=self.context, project_id=self.context.project_id, host='fake-host') inst.create() insts = objects.InstanceList.get_by_filters( self.context, {}, expected_attrs=['services']) self.assertEqual(1, len(insts)) self.assertEqual(2, len(insts[0].services)) # Deleted service should not have a UUID for service in insts[0].services: if service.deleted: self.assertNotIn('uuid', service) else: self.assertIsNotNone(service.uuid) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1780373.py0000664000175000017500000001222600000000000024505 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as fake_image from nova.tests.unit import policy_fixture class TestMultiCreateServerGroupMemberOverQuota( test.TestCase, integrated_helpers.InstanceHelperMixin): """This tests a regression introduced in the Pike release. Starting in the Pike release, quotas are no longer tracked using usages and reservations tables but instead perform a resource counting operation at the point of resource creation. When creating multiple servers in the same request that belong in the same server group, the [quota]/server_group_members config option is checked to determine if those servers can belong in the same group based on quota. However, the quota check for server_group_members only counts existing group members based on live instances in the cell database(s). But the actual instance record isn't created in the cell database until *after* the server_group_members quota check happens. Because of this, it is possible to bypass the server_group_members quota check when creating multiple servers in the same request. """ def setUp(self): super(TestMultiCreateServerGroupMemberOverQuota, self).setUp() self.flags(server_group_members=2, group='quota') self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api self.api.microversion = '2.37' # so we can specify networks='none' fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) group = {'name': 'test group', 'policies': ['soft-anti-affinity']} self.created_group = self.api.post_server_groups(group) def test_multi_create_server_group_members_over_quota(self): """Recreate scenario for the bug where we create an anti-affinity server group and then create 3 servers in the group using a multi-create POST /servers request. 
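        The quota check that gets bypassed reduces to a simple count
        comparison (hypothetical helper; the real check counts group
        members via instance records which the in-flight servers do not
        yet have):

            def group_members_over_quota(live_member_count, limit=2):
                # The servers created by this request are not yet stored
                # in any cell database, so they are missing from
                # live_member_count and the check can pass even though
                # adding them would exceed the server_group_members
                # limit configured in setUp().
                return live_member_count > limit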
""" server_req = self._build_server( image_uuid=fake_image.AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID, networks='none') server_req['min_count'] = 3 server_req['return_reservation_id'] = True hints = {'group': self.created_group['id']} # We should get a 403 response due to going over quota on server # group members in a single request. self.api.api_post( '/servers', {'server': server_req, 'os:scheduler_hints': hints}, check_response_status=[403]) group = self.api.api_get( '/os-server-groups/%s' % self.created_group['id']).body['server_group'] self.assertEqual(0, len(group['members'])) def test_concurrent_request_server_group_members_over_quota(self): """Recreate scenario for the bug where we create 3 servers in the same group but in separate requests. The NoopConductorFixture is used to ensure the instances are not created in the nova cell database which means the quota check will have to rely on counting group members using build requests from the API DB. """ # These aren't really concurrent requests, but we can simulate that # by using NoopConductorFixture. self.useFixture(nova_fixtures.NoopConductorFixture()) for x in range(3): server_req = self._build_server( image_uuid=fake_image.AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID, networks='none') hints = {'group': self.created_group['id']} # This should result in a 403 response on the 3rd server. if x == 2: self.api.api_post( '/servers', {'server': server_req, 'os:scheduler_hints': hints}, check_response_status=[403]) else: self.api.post_server( {'server': server_req, 'os:scheduler_hints': hints}) # There should only be two servers created which are both members of # the same group. servers = self.api.get_servers(detail=False) self.assertEqual(2, len(servers)) group = self.api.api_get( '/os-server-groups/%s' % self.created_group['id']).body['server_group'] self.assertEqual(2, len(group['members'])) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1781286.py0000664000175000017500000002111000000000000024501 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_db import exception as oslo_db_exc from nova.compute import manager as compute_manager from nova import exception from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit import fake_notifier from nova.tests.unit.image import fake as fake_image from nova.tests.unit import policy_fixture class RescheduleBuildAvailabilityZoneUpCall( test.TestCase, integrated_helpers.InstanceHelperMixin): """This is a regression test for bug 1781286 which was introduced with a change in Pike to set the instance availability_zone in conductor once a host is selected from the scheduler. 
The regression in the initial server build case is when a reschedule is triggered, and the cell conductor does not have access to the API DB, it fails with a CantStartEngineError trying to connect to the API DB to get availability zone (aggregate) info about the alternate host selection. """ def setUp(self): super(RescheduleBuildAvailabilityZoneUpCall, self).setUp() # Use the standard fixtures. self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) # Start controller services. self.api = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')).admin_api self.start_service('conductor') self.start_service('scheduler') # Start two computes with the fake reschedule driver. self.flags(compute_driver='fake.FakeRescheduleDriver') self.start_service('compute', host='host1') self.start_service('compute', host='host2') # Listen for notifications. fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) def test_server_create_reschedule_blocked_az_up_call(self): self.flags(default_availability_zone='us-central') # We need to stub out the call to get_host_availability_zone to blow # up once we have gone to the compute service. With the way our # RPC/DB fixtures are setup it's non-trivial to try and separate a # superconductor from a cell conductor so we can configure the cell # conductor from not having access to the API DB but that would be a # a nice thing to have at some point. original_bari = compute_manager.ComputeManager.build_and_run_instance def wrap_bari(*args, **kwargs): # Poison the AZ query to blow up as if the cell conductor does not # have access to the API DB. self.useFixture( fixtures.MockPatch( 'nova.objects.AggregateList.get_by_host', side_effect=oslo_db_exc.CantStartEngineError)) return original_bari(*args, **kwargs) self.stub_out('nova.compute.manager.ComputeManager.' 'build_and_run_instance', wrap_bari) server = self._build_server() server = self.api.post_server({'server': server}) # Because we poisoned AggregateList.get_by_host after hitting the # compute service we have to wait for the notification that the build # is complete and then stop the mock so we can use the API again. fake_notifier.wait_for_versioned_notifications('instance.create.end') # Note that we use stopall here because we actually called # build_and_run_instance twice so we have more than one instance of # the mock that needs to be stopped. mock.patch.stopall() server = self._wait_for_state_change(server, 'ACTIVE') # We should have rescheduled and the instance AZ should be set from the # Selection object. Since neither compute host is in an AZ, the server # is in the default AZ from config. self.assertEqual('us-central', server['OS-EXT-AZ:availability_zone']) class RescheduleMigrateAvailabilityZoneUpCall( test.TestCase, integrated_helpers.InstanceHelperMixin): """This is a regression test for the resize/cold migrate aspect of bug 1781286 where the cell conductor does not have access to the API DB. """ def setUp(self): super(RescheduleMigrateAvailabilityZoneUpCall, self).setUp() # Use the standard fixtures. self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) # Start controller services. 
self.api = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')).admin_api self.start_service('conductor') self.start_service('scheduler') # We need three hosts for this test, one is the initial host on which # the server is built, and the others are for the migration where the # first will fail and the second is an alternate. self.start_service('compute', host='host1') self.start_service('compute', host='host2') self.start_service('compute', host='host3') # Listen for notifications. fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) def test_migrate_reschedule_blocked_az_up_call(self): self.flags(default_availability_zone='us-central') # We need to stub out the call to get_host_availability_zone to blow # up once we have gone to the compute service. original_prep_resize = compute_manager.ComputeManager._prep_resize self.rescheduled = None def wrap_prep_resize(_self, *args, **kwargs): # Poison the AZ query to blow up as if the cell conductor does not # have access to the API DB. self.agg_mock = self.useFixture( fixtures.MockPatch( 'nova.objects.AggregateList.get_by_host', side_effect=oslo_db_exc.CantStartEngineError)).mock if self.rescheduled is None: # Track the first host that we rescheduled from. self.rescheduled = _self.host # Trigger a reschedule. raise exception.ComputeResourcesUnavailable( reason='test_migrate_reschedule_blocked_az_up_call') return original_prep_resize(_self, *args, **kwargs) self.stub_out('nova.compute.manager.ComputeManager._prep_resize', wrap_prep_resize) server = self._build_server() server = self.api.post_server({'server': server}) server = self._wait_for_state_change(server, 'ACTIVE') original_host = server['OS-EXT-SRV-ATTR:host'] # Now cold migrate the server to the other host. self.api.post_server_action(server['id'], {'migrate': None}) # Because we poisoned AggregateList.get_by_host after hitting the # compute service we have to wait for the notification that the resize # is complete and then stop the mock so we can use the API again. fake_notifier.wait_for_versioned_notifications( 'instance.resize_finish.end') # Note that we use stopall here because we actually called _prep_resize # twice so we have more than one instance of the mock that needs to be # stopped. mock.patch.stopall() server = self._wait_for_state_change(server, 'VERIFY_RESIZE') final_host = server['OS-EXT-SRV-ATTR:host'] self.assertNotIn(final_host, [original_host, self.rescheduled]) # We should have rescheduled and the instance AZ should be set from the # Selection object. Since neither compute host is in an AZ, the server # is in the default AZ from config. self.assertEqual('us-central', server['OS-EXT-AZ:availability_zone']) self.agg_mock.assert_not_called() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1781710.py0000664000175000017500000001311600000000000024500 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova.scheduler import filter_scheduler from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as image_fake from nova.tests.unit import policy_fixture class AntiAffinityMultiCreateRequest(test.TestCase, integrated_helpers.InstanceHelperMixin): """Regression test for bug 1781710 introduced in Rocky. The ServerGroupAntiAffinityFilter changed in Rocky to support the "max_server_per_host" rule in the group's anti-affinity policy which allows having more than one server from the same anti-affinity group on the same host. As a result, the scheduler filter logic changed and a regression was introduced because of how the FilterScheduler is tracking which hosts are selected for each instance in a multi-create request. This test uses a custom weigher to ensure that when creating two servers in a single request that are in the same anti-affinity group with the default "max_server_per_host" setting (1), the servers are split across the two hosts even though normally one host would be weighed higher than the other. """ def setUp(self): super(AntiAffinityMultiCreateRequest, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) # The admin API is used to get the server details to verify the # host on which the server was built. self.admin_api = api_fixture.admin_api self.api = api_fixture.api image_fake.stub_out_image_service(self) self.addCleanup(image_fake.FakeImageService_reset) self.start_service('conductor') # Use the latest microversion available to make sure something does # not regress in new microversions; cap as necessary. self.admin_api.microversion = 'latest' self.api.microversion = 'latest' self.useFixture(nova_fixtures.HostNameWeigherFixture()) # disable late check on compute node to mimic devstack. self.flags(disable_group_policy_check_upcall=True, group='workarounds') self.start_service('scheduler') self.start_service('compute', host='host1') self.start_service('compute', host='host2') def test_anti_affinity_multi_create(self): # Create the anti-affinity server group in which we'll create our # two servers. group = self.api.post_server_groups( {'name': 'test group', 'policy': 'anti-affinity'}) # Stub out FilterScheduler._get_alternate_hosts so we can assert what # is coming back for alternate hosts is what we'd expect after the # initial hosts are selected for each instance. original_get_alternate_hosts = ( filter_scheduler.FilterScheduler._get_alternate_hosts) def stub_get_alternate_hosts(*a, **kw): # Intercept the result so we can assert there are no alternates. selections_to_return = original_get_alternate_hosts(*a, **kw) # Since we only have two hosts and each host is selected for each # server, and alternates should not include selected hosts, we # should get back a list with two entries (one per server) and each # entry should be a list of length 1 for the selected host per # server with no alternates. 
self.assertEqual(2, len(selections_to_return), 'There should be one host per server in the ' 'anti-affinity group.') hosts = set([]) for selection_list in selections_to_return: self.assertEqual(1, len(selection_list), selection_list) hosts.add(selection_list[0].service_host) self.assertEqual(2, len(hosts), hosts) return selections_to_return self.stub_out('nova.scheduler.filter_scheduler.FilterScheduler.' '_get_alternate_hosts', stub_get_alternate_hosts) # Now create two servers in that group. server_req = self._build_server( image_uuid=image_fake.AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID, networks='none') server_req['min_count'] = 2 self.api.api_post( '/servers', {'server': server_req, 'os:scheduler_hints': {'group': group['id']}}) selected_hosts = set([]) # Now wait for both servers to be ACTIVE and get the host on which # each server was built. for server in self.api.get_servers(detail=False): server = self._wait_for_state_change(server, 'ACTIVE') selected_hosts.add(server['OS-EXT-SRV-ATTR:host']) # Assert that each server is on a separate host. self.assertEqual(2, len(selected_hosts)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1784353.py0000664000175000017500000000677200000000000024520 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit import fake_network import nova.tests.unit.image.fake from nova.tests.unit import policy_fixture class TestRescheduleWithVolumesAttached( test.TestCase, integrated_helpers.InstanceHelperMixin): """Regression test for bug 1784353 introduced in Queens. This regression test asserts that volume backed instances fail to start when rescheduled due to their volume attachments being deleted by cleanup code within the compute layer after an initial failure to spawn. 
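    The failing sequence can be sketched as cleanup dropping the
    attachments that the retried build still expects (hypothetical
    helpers, not the real compute-manager code paths):

        def fail_spawn_then_reschedule(bdms, delete_attachment, reschedule):
            # Cleanup after the failed spawn deletes the Cinder
            # attachments recorded on the BDMs ...
            for bdm in bdms:
                delete_attachment(bdm['attachment_id'])
            # ... but the rescheduled build on the next host still
            # references those now-missing attachment ids, so it fails.
            return reschedule(bdms)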
""" def setUp(self): super(TestRescheduleWithVolumesAttached, self).setUp() # Use the new attach flow fixture for cinder cinder_fixture = nova_fixtures.CinderFixture(self) self.cinder = self.useFixture(cinder_fixture) self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) fake_network.set_stub_network_methods(self) self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.admin_api nova.tests.unit.image.fake.stub_out_image_service(self) self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset) self.flags(compute_driver='fake.FakeRescheduleDriver') self.start_service('conductor') self.start_service('scheduler') # Start two computes to allow the instance to be rescheduled self.host1 = self.start_service('compute', host='host1') self.host2 = self.start_service('compute', host='host2') self.image_id = self.api.get_images()[0]['id'] self.flavor_id = self.api.get_flavors()[0]['id'] def test_reschedule_with_volume_attached(self): # Boot a volume backed instance volume_id = nova_fixtures.CinderFixture.IMAGE_BACKED_VOL server_request = { 'name': 'server', 'flavorRef': self.flavor_id, 'block_device_mapping_v2': [{ 'boot_index': 0, 'uuid': volume_id, 'source_type': 'volume', 'destination_type': 'volume'}], } server_response = self.api.post_server({'server': server_request}) server_id = server_response['id'] self._wait_for_state_change(server_response, 'ACTIVE') attached_volume_ids = self.cinder.volume_ids_for_instance(server_id) self.assertIn(volume_id, attached_volume_ids) self.assertEqual(1, len(self.cinder.volume_to_attachment)) # There should only be one attachment record for the volume and # instance because the original would have been deleted before # rescheduling off the first host. self.assertEqual(1, len(self.cinder.volume_to_attachment[volume_id])) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1790204.py0000664000175000017500000000642300000000000024501 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils.fixture import uuidsentinel as uuids from nova.compute import instance_actions from nova.tests.functional import integrated_helpers class ResizeSameHostDoubledAllocations( integrated_helpers.ProviderUsageBaseTestCase): """Regression test for bug 1790204 introduced in Pike. Since Pike, the FilterScheduler uses Placement to "claim" resources via allocations during scheduling. During a move operation, the scheduler will claim resources on the selected destination resource provider (compute node). For a same-host resize, this means claiming resources from the *new_flavor* on the destination while holding allocations for the *old_flavor* on the source, which is the same provider. 
This can lead to NoValidHost failures during scheduling a resize to the same host since the allocations are "doubled up" on the same provider even though the *new_flavor* allocations could fit. This test reproduces the regression. """ # Use the SmallFakeDriver which has little inventory: # vcpus = 2, memory_mb = 8192, local_gb = 1028 compute_driver = 'fake.SmallFakeDriver' def setUp(self): super(ResizeSameHostDoubledAllocations, self).setUp() self._start_compute(uuids.host) def test_resize_same_host_full_allocations(self): self.flags(allow_resize_to_same_host=True) # Create two flavors so we can control what the test uses. flavor = { 'flavor': { 'name': 'v1r2d1', 'ram': 2048, 'vcpus': 1, 'disk': 1028, } } flavor1 = self.admin_api.post_flavor(flavor) # Tweak the flavor to increase the number of VCPUs. flavor['flavor'].update({'name': 'v2r2d1', 'vcpus': 2}) flavor2 = self.admin_api.post_flavor(flavor) # Create a server from flavor1. server = self._boot_and_check_allocations(flavor1, uuids.host) # Now try to resize to flavor2. resize_req = { 'resize': { 'flavorRef': flavor2['id'], } } # FIXME(mriedem): This is bug 1790204 where scheduling fails with # NoValidHost because the scheduler fails to claim "doubled up" # allocations on the same compute node resource provider. Scheduling # fails because DISK_GB inventory allocation ratio is 1.0 and when # resizing to the same host we'll be trying to claim 2048 but the # provider only has a total of 1024. self.api.post_server_action(server['id'], resize_req, check_response_status=[202]) # The instance action should have failed with details. self._assert_resize_migrate_action_fail( server, instance_actions.RESIZE, 'NoValidHost') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1794996.py0000664000175000017500000000670500000000000024532 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional import integrated_helpers class TestEvacuateDeleteServerRestartOriginalCompute( integrated_helpers.ProviderUsageBaseTestCase): compute_driver = 'fake.SmallFakeDriver' def setUp(self): super(TestEvacuateDeleteServerRestartOriginalCompute, self).setUp() self.compute1 = self._start_compute(host='host1') self.compute2 = self._start_compute(host='host2') flavors = self.api.get_flavors() self.flavor1 = flavors[0] def test_evacuate_delete_server_restart_original_compute(self): """Regression test for bug 1794996 where a server is successfully evacuated from a down host and then deleted. Then the source compute host is brought back online and attempts to cleanup the guest from the hypervisor and allocations for the evacuated (and now deleted) instance. Before the bug is fixed, the original compute fails to start because lazy-loading the instance.flavor on the deleted instance, which is needed to cleanup allocations from the source host, raises InstanceNotFound. 
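        The pre-fix failure can be sketched as the cleanup loop needing
        the flavor of an instance that no longer exists (hypothetical
        helpers, not the actual init-host code):

            def cleanup_evacuated_allocations(evacuated, load_flavor,
                                              remove_allocation):
                for instance in evacuated:
                    # load_flavor() stands in for lazy-loading
                    # instance.flavor; for an instance deleted after the
                    # evacuation it raises InstanceNotFound and aborts
                    # the service startup.
                    flavor = load_flavor(instance)
                    remove_allocation(instance['uuid'], flavor)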
After the bug is fixed, the original source host compute service starts up. """ source_hostname = self.compute1.host dest_hostname = self.compute2.host server = self._boot_and_check_allocations( self.flavor1, source_hostname) source_compute_id = self.admin_api.get_services( host=source_hostname, binary='nova-compute')[0]['id'] self.compute1.stop() # force it down to avoid waiting for the service group to time out self.admin_api.put_service( source_compute_id, {'forced_down': 'true'}) # evacuate the server post = {'evacuate': {}} self.api.post_server_action( server['id'], post) expected_params = {'OS-EXT-SRV-ATTR:host': dest_hostname, 'status': 'ACTIVE'} server = self._wait_for_server_parameter(server, expected_params) # Expect to have allocation and usages on both computes as the # source compute is still down source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor1) allocations = self._get_allocations_by_server_uuid(server['id']) self.assertEqual(2, len(allocations)) self._check_allocation_during_evacuate( self.flavor1, server['id'], source_rp_uuid, dest_rp_uuid) # Delete the evacuated server. The allocations should be gone from # both the original evacuated-from host and the evacuated-to host. self._delete_and_check_allocations(server) # restart the source compute self.compute1 = self.restart_compute_service(self.compute1) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1797580.py0000664000175000017500000001057400000000000024521 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as image_fake from nova.tests.unit import policy_fixture class ColdMigrateTargetHostThenLiveMigrateTest( test.TestCase, integrated_helpers.InstanceHelperMixin): """Regression test for bug 1797580 introduced in Queens. Microversion 2.56 allows cold migrating to a specified target host. The compute API sets the requested destination on the request spec with the specified target host and then conductor sends that request spec to the scheduler to validate the host. Conductor later persists the changes to the request spec because it's the resize flow and the flavor could change (even though in this case it won't since it's a cold migrate). After confirming the resize, if the server is live migrated it will fail during scheduling because of the persisted RequestSpec.requested_destination from the cold migration, and you can't live migrate to the same host on which the instance is currently running. This test reproduces the regression and will validate the fix. 
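    The stale field can be illustrated with a plain dict standing in
    for the RequestSpec (names and the skip logic are illustrative, not
    the actual fix):

        def destination_for_live_migration(request_spec, current_host):
            # A requested_destination persisted by the earlier cold
            # migration points at the host the server now runs on, so
            # honouring it would ask the scheduler to live migrate the
            # server onto itself; ignoring the stale value avoids that.
            dest = request_spec.get('requested_destination')
            if dest is not None and dest == current_host:
                return None
            return dest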
""" def setUp(self): super(ColdMigrateTargetHostThenLiveMigrateTest, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) # The admin API is used to get the server details to verify the # host on which the server was built and cold/live migrate it. self.admin_api = api_fixture.admin_api self.api = api_fixture.api # Use the latest microversion available to make sure something does # not regress in new microversions; cap as necessary. self.admin_api.microversion = 'latest' self.api.microversion = 'latest' image_fake.stub_out_image_service(self) self.addCleanup(image_fake.FakeImageService_reset) self.start_service('conductor') self.start_service('scheduler') for host in ('host1', 'host2'): self.start_service('compute', host=host) def test_cold_migrate_target_host_then_live_migrate(self): # Create a server, it doesn't matter on which host it builds. server = self._build_server( image_uuid=image_fake.AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID, networks='none') server = self.api.post_server({'server': server}) server = self._wait_for_state_change(server, 'ACTIVE') original_host = server['OS-EXT-SRV-ATTR:host'] target_host = 'host1' if original_host == 'host2' else 'host2' # Cold migrate the server to the specific target host. migrate_req = {'migrate': {'host': target_host}} self.admin_api.post_server_action(server['id'], migrate_req) server = self._wait_for_state_change(server, 'VERIFY_RESIZE') # Confirm the resize so the server stays on the target host. confim_req = {'confirmResize': None} self.admin_api.post_server_action(server['id'], confim_req) server = self._wait_for_state_change(server, 'ACTIVE') # Attempt to live migrate the server but don't specify a host so the # scheduler has to pick one. live_migrate_req = { 'os-migrateLive': {'host': None, 'block_migration': 'auto'}} self.admin_api.post_server_action(server['id'], live_migrate_req) server = self._wait_for_state_change(server, 'ACTIVE') # The live migration should have been successful and the server is now # back on the original host. self.assertEqual(original_host, server['OS-EXT-SRV-ATTR:host']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1806064.py0000664000175000017500000001525400000000000024505 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova.compute import utils as compute_utils from nova import context as nova_context from nova import exception from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit import policy_fixture class BootFromVolumeOverQuotaRaceDeleteTest( test.TestCase, integrated_helpers.InstanceHelperMixin): """Test for regression bug 1806064 introduced in Pike. This is very similar to regression bug 1404867 where reserved/attached volumes during a boot from volume request are not detached while deleting a server that failed to schedule. In this case, scheduling is successful but the late quota check in ComputeTaskManager.schedule_and_build_instances fails. In the case of a scheduling failure, the instance record along with the associated BDMs are created in the cell0 database and that is where the "local delete" code in the API finds them to detach the related volumes. In the case of the quota fail race, the instance has already been created in a selected cell but the BDMs records have not been and are thus not "seen" during the API local delete and the volumes are left attached to a deleted server. An additional issue, covered in the test here, is that tags provided when creating the server are not retrievable from the API after the late quota check fails. """ def setUp(self): super(BootFromVolumeOverQuotaRaceDeleteTest, self).setUp() # We need the cinder fixture for boot from volume testing. self.cinder_fixture = self.useFixture( nova_fixtures.CinderFixture(self)) # Use the standard fixtures. self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) self.api = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')).api # Use microversion 2.52 which allows creating a server with tags. self.api.microversion = '2.52' self.start_service('conductor') self.start_service('scheduler') self.start_service('compute') def test_bfv_quota_race_local_delete(self): # Setup a boot-from-volume request where the API will create a # volume attachment record for the given pre-existing volume. # We also tag the server since tags, like BDMs, should be created in # the cell database along with the instance. volume_id = nova_fixtures.CinderFixture.IMAGE_BACKED_VOL server = { 'server': { 'name': 'test_bfv_quota_race_local_delete', 'flavorRef': self.api.get_flavors()[0]['id'], 'imageRef': '', 'block_device_mapping_v2': [{ 'boot_index': 0, 'source_type': 'volume', 'destination_type': 'volume', 'uuid': volume_id }], 'networks': 'auto', 'tags': ['bfv'] } } # Now we need to stub out the quota check routine so that we can # simulate the race where the initial quota check in the API passes # but fails in conductor once the instance has been created in cell1. original_quota_check = compute_utils.check_num_instances_quota def stub_check_num_instances_quota(_self, context, instance_type, min_count, *args, **kwargs): # Determine where we are in the flow based on whether or not the # min_count is 0 (API will pass 1, conductor will pass 0). if min_count == 0: raise exception.TooManyInstances( 'test_bfv_quota_race_local_delete') # We're checking from the API so perform the original quota check. 
return original_quota_check( _self, context, instance_type, min_count, *args, **kwargs) self.stub_out('nova.compute.utils.check_num_instances_quota', stub_check_num_instances_quota) server = self.api.post_server(server) server = self._wait_for_state_change(server, 'ERROR') # At this point, the build request should be gone and the instance # should have been created in cell1. context = nova_context.get_admin_context() self.assertRaises(exception.BuildRequestNotFound, objects.BuildRequest.get_by_instance_uuid, context, server['id']) # The default cell in the functional tests is cell1 but we want to # specifically target cell1 to make sure the instance exists there # and we're not just getting lucky somehow due to the fixture. cell1 = self.cell_mappings[test.CELL1_NAME] with nova_context.target_cell(context, cell1) as cctxt: # This would raise InstanceNotFound if the instance isn't in cell1. instance = objects.Instance.get_by_uuid(cctxt, server['id']) self.assertIsNone(instance.host, 'instance.host should not be set') # Make sure the BDMs and tags also exist in cell1. bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( cctxt, instance.uuid) self.assertEqual(1, len(bdms), 'BDMs were not created in cell1') tags = objects.TagList.get_by_resource_id(cctxt, instance.uuid) self.assertEqual(1, len(tags), 'Tags were not created in cell1') # Make sure we can still view the tags on the server before it is # deleted. self.assertEqual(['bfv'], server['tags']) # Now delete the server which, since it does not have a host, will be # deleted "locally" from the API. self._delete_server(server) # The volume should have been detached by the API. attached_volumes = self.cinder_fixture.volume_ids_for_instance( server['id']) # volume_ids_for_instance is a generator so listify self.assertEqual(0, len(list(attached_volumes))) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1806515.py0000664000175000017500000000470700000000000024507 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit import fake_notifier import nova.tests.unit.image.fake class ShowErrorServerWithTags(test.TestCase, integrated_helpers.InstanceHelperMixin): """Test list of an instance in error state that has tags. This test boots a server with tag which will fail to be scheduled, ending up in ERROR state with no host assigned and then show the server. 
""" def setUp(self): super(ShowErrorServerWithTags, self).setUp() api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.admin_api self.useFixture(nova_fixtures.NeutronFixture(self)) self.start_service('conductor') self.start_service('scheduler') # the image fake backend needed for image discovery nova.tests.unit.image.fake.stub_out_image_service(self) self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset) self.image_id = self.api.get_images()[0]['id'] self.api.microversion = 'latest' self.addCleanup(fake_notifier.reset) def _create_error_server(self): server = self.api.post_server({ 'server': { 'flavorRef': '1', 'name': 'show-server-with-tag-in-error-status', 'networks': 'none', 'tags': ['tag1'], 'imageRef': self.image_id } }) return self._wait_for_state_change(server, 'ERROR') def test_show_server_tag_in_error(self): # Create a server which should go to ERROR state because we don't # have any active computes. server = self._create_error_server() server_id = server['id'] tags = self.api.get_server_tags(server_id) self.assertIn('tag1', tags) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1815153.py0000664000175000017500000001465200000000000024505 0ustar00zuulzuul00000000000000# Copyright 2019 NTT Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.compute import instance_actions from nova import context from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as image_fake from nova.tests.unit import policy_fixture class NonPersistentFieldNotResetTest( test.TestCase, integrated_helpers.InstanceHelperMixin): """Test for regression bug 1815153 The bug is that the 'requested_destination' field in the RequestSpec object is reset when saving the object in the 'heal_reqspec_is_bfv' method in the case that a server created before Rocky which does not have is_bfv field. Tests the following two cases here. * Cold migration with a target host without a force flag * Evacuate with a target host without a force flag The following two cases are not tested here because 'requested_destination' is not set when the 'heal_reqspec_is_bfv' method is called. * Live migration without a destination host. * Unshelve a server """ def setUp(self): super(NonPersistentFieldNotResetTest, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.admin_api # Use the latest microversion available to make sure something does # not regress in new microversions; cap as necessary. 
self.api.microversion = 'latest' image_fake.stub_out_image_service(self) self.addCleanup(image_fake.FakeImageService_reset) self.start_service('conductor') self.start_service('scheduler') self.compute = {} for host in ('host1', 'host2', 'host3'): compute_service = self.start_service('compute', host=host) self.compute.update({host: compute_service}) self.ctxt = context.get_admin_context() @staticmethod def _get_target_host(host): target_host = {'host1': 'host2', 'host2': 'host3', 'host3': 'host1'} return target_host[host] def _remove_is_bfv_in_request_spec(self, server_id): # Now let's hack the RequestSpec.is_bfv field to mimic migrating an # old instance created before RequestSpec.is_bfv was set in the API, reqspec = objects.RequestSpec.get_by_instance_uuid(self.ctxt, server_id) del reqspec.is_bfv reqspec.save() reqspec = objects.RequestSpec.get_by_instance_uuid(self.ctxt, server_id) # Make sure 'is_bfv' is not set. self.assertNotIn('is_bfv', reqspec) def test_cold_migrate(self): server = self._create_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', networks='none') original_host = server['OS-EXT-SRV-ATTR:host'] target_host = self._get_target_host(original_host) self._remove_is_bfv_in_request_spec(server['id']) # Force a target host down source_compute_id = self.api.get_services( host=target_host, binary='nova-compute')[0]['id'] self.compute[target_host].stop() self.api.put_service( source_compute_id, {'forced_down': 'true'}) # Cold migrate a server with a target host. # The response status code is 202 even though the operation will # fail because the requested target host is down which will result # in a NoValidHost error. self.api.post_server_action( server['id'], {'migrate': {'host': target_host}}, check_response_status=[202]) # The instance action should have failed with details. self._assert_resize_migrate_action_fail( server, instance_actions.MIGRATE, 'NoValidHost') # Make sure 'is_bfv' is set. reqspec = objects.RequestSpec.get_by_instance_uuid(self.ctxt, server['id']) self.assertIn('is_bfv', reqspec) self.assertIs(reqspec.is_bfv, False) def test_evacuate(self): server = self._create_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', networks='none') original_host = server['OS-EXT-SRV-ATTR:host'] target_host = self._get_target_host(original_host) self._remove_is_bfv_in_request_spec(server['id']) # Force source and target hosts down for host in (original_host, target_host): source_compute_id = self.api.get_services( host=host, binary='nova-compute')[0]['id'] self.compute[host].stop() self.api.put_service( source_compute_id, {'forced_down': 'true'}) # Evacuate a server with a target host. # If requested_destination is reset, the server is moved to a host # that is not the target host. # Its status becomes 'ACTIVE'. # If requested_destination is not reset, a status of the server # becomes 'ERROR' because the target host is down. self.api.post_server_action( server['id'], {'evacuate': {'host': target_host}}) expected_params = {'OS-EXT-SRV-ATTR:host': original_host, 'status': 'ERROR'} server = self._wait_for_server_parameter(server, expected_params) # Make sure 'is_bfv' is set. 
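# With the fix, heal_reqspec_is_bfv should have repopulated is_bfv
# (False here, since the server is image-backed) while leaving
# requested_destination untouched.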
reqspec = objects.RequestSpec.get_by_instance_uuid(self.ctxt, server['id']) self.assertIn('is_bfv', reqspec) self.assertIs(reqspec.is_bfv, False) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1823370.py0000664000175000017500000000635300000000000024504 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests import fixtures as nova_fixtures from nova.tests.functional import integrated_helpers class MultiCellEvacuateTestCase(integrated_helpers._IntegratedTestBase): """Recreate test for bug 1823370 which was introduced in Pike. When evacuating a server, the request to the scheduler should be restricted to the cell in which the instance is already running since cross-cell evacuate is not supported, at least not yet. This test creates two cells with two compute hosts in one cell and another compute host in the other cell. A server is created in the cell with two hosts and then evacuated which should land it on the other host in that cell, not the other cell with only one host. The scheduling behavior is controlled via the custom HostNameWeigher. """ # Set variables used in the parent class. NUMBER_OF_CELLS = 2 REQUIRES_LOCKING = False ADMIN_API = True _image_ref_parameter = 'imageRef' _flavor_ref_parameter = 'flavorRef' api_major_version = 'v2.1' microversion = '2.11' # Need at least 2.11 for the force-down API def setUp(self): # Register a custom weigher for predictable scheduling results. self.useFixture(nova_fixtures.HostNameWeigherFixture()) super(MultiCellEvacuateTestCase, self).setUp() def _setup_compute_service(self): """Start three compute services, two in cell1 and one in cell2. host1: cell1 (highest weight) host2: cell2 (medium weight) host3: cell1 (lowest weight) """ host_to_cell = {'host1': 'cell1', 'host2': 'cell2', 'host3': 'cell1'} for host, cell in host_to_cell.items(): self._start_compute(host, cell_name=cell) def test_evacuate_multi_cell(self): # Create a server which should land on host1 since it has the highest # weight. server = self._build_server() server = self.api.post_server({'server': server}) server = self._wait_for_state_change(server, 'ACTIVE') self.assertEqual('host1', server['OS-EXT-SRV-ATTR:host']) # Disable the host on which the server is now running. self.computes['host1'].stop() self.api.force_down_service('host1', 'nova-compute', forced_down=True) # Now evacuate the server which should send it to host3 since it is # in the same cell as host1, even though host2 in cell2 is weighed # higher than host3. 
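# This class caps the API at microversion 2.11 (set above for the
# force-down API), so the evacuate body still carries the legacy
# onSharedStorage parameter; that parameter was dropped from the
# evacuate action in microversion 2.14.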
req = {'evacuate': {'onSharedStorage': False}} self.api.post_server_action(server['id'], req) self._wait_for_migration_status(server, ['done']) server = self._wait_for_state_change(server, 'ACTIVE') self.assertEqual('host3', server['OS-EXT-SRV-ATTR:host']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1825020.py0000664000175000017500000000660000000000000024471 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as fake_image class VolumeBackedResizeDiskDown(test.TestCase, integrated_helpers.InstanceHelperMixin): """Regression test for bug 1825020 introduced in Stein. This tests a resize scenario where a volume-backed server is resized to a flavor with a smaller disk than the flavor used to create the server. Since the server is volume-backed, the disk in the new flavor should not matter since it won't be used for the guest. """ def setUp(self): super(VolumeBackedResizeDiskDown, self).setUp() self.flags(allow_resize_to_same_host=True) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.admin_api self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(nova_fixtures.CinderFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) self.start_service('conductor') self.start_service('scheduler') self.start_service('compute') def test_volume_backed_resize_disk_down(self): # First determine the flavors we're going to use by picking two flavors # with different size disks and create a volume-backed server using the # flavor with the bigger disk size. flavors = self.api.get_flavors() flavor1 = flavors[0] flavor2 = flavors[1] self.assertGreater(flavor2['disk'], flavor1['disk']) vol_id = nova_fixtures.CinderFixture.IMAGE_BACKED_VOL server = { 'name': 'test_volume_backed_resize_disk_down', 'imageRef': '', 'flavorRef': flavor2['id'], 'block_device_mapping_v2': [{ 'source_type': 'volume', 'destination_type': 'volume', 'boot_index': 0, 'uuid': vol_id }] } server = self.api.post_server({'server': server}) self._wait_for_state_change(server, 'ACTIVE') # Now try to resize the server with the flavor that has smaller disk. # This should be allowed since the server is volume-backed and the # disk size in the flavor shouldn't matter. data = {'resize': {'flavorRef': flavor1['id']}} self.api.post_server_action(server['id'], data) self._wait_for_state_change(server, 'VERIFY_RESIZE') # Now confirm the resize just to complete the operation. 
self.api.post_server_action(server['id'], {'confirmResize': None}) self._wait_for_state_change(server, 'ACTIVE') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1825034.py0000664000175000017500000000732300000000000024501 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import context as nova_context from nova.db import api as db_api from nova.objects import virtual_interface from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as fake_image class FillVirtualInterfaceListMigration( test.TestCase, integrated_helpers.InstanceHelperMixin): """Regression test for a bug 1825034 introduced in Stein. The fill_virtual_interface_list online data migration creates a mostly empty marker instance record and immediately (soft) deletes it just to satisfy a foreign key constraint with the virtual_interfaces table. The problem is since the fake instance marker record is mostly empty, it can fail to load fields in the API when listing deleted servers. """ def setUp(self): super(FillVirtualInterfaceListMigration, self).setUp() api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.admin_api self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) self.start_service('conductor') self.start_service('scheduler') self.start_service('compute') # the image fake backend needed for image discovery fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) def test_fill_vifs_migration(self): # Create a test server. self._create_server( flavor_id=1, networks=[{ 'uuid': nova_fixtures.NeutronFixture.network_1['id'], }], ) # Run the online data migration which will create a (soft-deleted) # marker record. ctxt = nova_context.get_admin_context() virtual_interface.fill_virtual_interface_list(ctxt, max_count=50) # Now archive the deleted instance record. # The following (archive stuff) is used to prove that the migration # created a "fake instance". It is not necessary to trigger the bug. table_to_rows_archived, deleted_instance_uuids, total_rows_archived = ( db_api.archive_deleted_rows(max_rows=1000)) self.assertIn('instances', table_to_rows_archived) self.assertEqual(1, table_to_rows_archived['instances']) self.assertEqual(1, len(deleted_instance_uuids)) self.assertEqual(virtual_interface.FAKE_UUID, deleted_instance_uuids[0]) # Since the above (archive stuff) removed the fake instance, do the # migration again to recreate it so we can exercise the code path. virtual_interface.fill_virtual_interface_list(ctxt, max_count=50) # Now list deleted servers. The fake marker instance should be excluded # from the API results. 
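# Check both the detailed (GET /servers/detail) and the plain
# (GET /servers) listings; the soft-deleted marker instance must not
# appear in either response.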
for detail in (True, False): servers = self.api.get_servers(detail=detail, search_opts={'all_tenants': 1, 'deleted': 1}) self.assertEqual(0, len(servers)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1825537.py0000664000175000017500000000674600000000000024521 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.functional import integrated_helpers class FinishResizeErrorAllocationCleanupTestCase( integrated_helpers.ProviderUsageBaseTestCase): """Test for bug 1825537 introduced in Rocky and backported down to Pike. Tests a scenario where finish_resize fails on the dest compute during a resize and ensures resource provider allocations are properly cleaned up in placement. """ compute_driver = 'fake.FakeFinishMigrationFailDriver' def setUp(self): super(FinishResizeErrorAllocationCleanupTestCase, self).setUp() # Get the flavors we're going to use. flavors = self.api.get_flavors() self.flavor1 = flavors[0] self.flavor2 = flavors[1] def _resize_and_assert_error(self, server, dest_host): # Now resize the server and wait for it to go to ERROR status because # the finish_migration virt driver method in host2 should fail. req = {'resize': {'flavorRef': self.flavor2['id']}} self.api.post_server_action(server['id'], req) # The instance is set to ERROR status before the fault is recorded so # to avoid a race we need to wait for the migration status to change # to 'error' which happens after the fault is recorded. self._wait_for_migration_status(server, ['error']) server = self._wait_for_state_change(server, 'ERROR') # The server should be pointing at $dest_host because resize_instance # will have updated the host/node value on the instance before casting # to the finish_resize method on the dest compute. self.assertEqual(dest_host, server['OS-EXT-SRV-ATTR:host']) # In this case the FakeFinishMigrationFailDriver.finish_migration # method raises VirtualInterfaceCreateException. self.assertIn('Virtual Interface creation failed', server['fault']['message']) def test_finish_resize_fails_allocation_cleanup(self): # Start two computes so we can resize across hosts. self._start_compute('host1') self._start_compute('host2') # Create a server on host1. server = self._boot_and_check_allocations(self.flavor1, 'host1') # Resize to host2 which should fail. self._resize_and_assert_error(server, 'host2') # Check the resource provider allocations. Since the server is pointed # at the dest host in the DB now, the dest node resource provider # allocations should still exist with the new flavor. source_rp_uuid = self._get_provider_uuid_by_host('host1') dest_rp_uuid = self._get_provider_uuid_by_host('host2') self.assertFlavorMatchesAllocation( self.flavor2, server['id'], dest_rp_uuid) # And the source node provider should not have any usage. 
source_rp_usages = self._get_provider_usages(source_rp_uuid) no_usage = {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0} self.assertEqual(no_usage, source_rp_usages) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1830747.py0000664000175000017500000001501200000000000024502 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.conductor import api as conductor_api from nova import context as nova_context from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as fake_image class MissingReqSpecInstanceGroupUUIDTestCase( test.TestCase, integrated_helpers.InstanceHelperMixin): """Regression recreate test for bug 1830747 Before change I4244f7dd8fe74565180f73684678027067b4506e in Stein, when a cold migration would reschedule to conductor it would not send the RequestSpec, only the filter_properties. The filter_properties contain a primitive version of the instance group information from the RequestSpec for things like the group members, hosts and policies, but not the uuid. When conductor is trying to reschedule the cold migration without a RequestSpec, it builds a RequestSpec from the components it has, like the filter_properties. This results in a RequestSpec with an instance_group field set but with no uuid field in the RequestSpec.instance_group. That RequestSpec gets persisted and then because of change Ie70c77db753711e1449e99534d3b83669871943f, later attempts to load the RequestSpec from the database will fail because of the missing RequestSpec.instance_group.uuid. This test recreates the regression scenario by cold migrating a server to a host which fails and triggers a reschedule but without the RequestSpec so a RequestSpec is created/updated for the instance without the instance_group.uuid set which will lead to a failure loading the RequestSpec from the DB later. """ def setUp(self): super(MissingReqSpecInstanceGroupUUIDTestCase, self).setUp() # Stub out external dependencies. self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) # Configure the API to allow resizing to the same host so we can keep # the number of computes down to two in the test. self.flags(allow_resize_to_same_host=True) # Start nova controller services. api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.admin_api self.start_service('conductor') # Use a custom weigher to make sure that we have a predictable # scheduling sort order. 
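# The HostNameWeigherFixture (by default) weighs host1 above host2
# and host2 above host3, so the initial build below should land on
# host1 and the scheduling order stays deterministic.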
self.useFixture(nova_fixtures.HostNameWeigherFixture()) self.start_service('scheduler') # Start two computes, one where the server will be created and another # where we'll cold migrate it. self._start_compute('host1') self._start_compute('host2') def test_cold_migrate_reschedule(self): # Create an anti-affinity group for the server. body = { 'server_group': { 'name': 'test-group', 'policies': ['anti-affinity'] } } group_id = self.api.api_post( '/os-server-groups', body).body['server_group']['id'] # Create a server in the group which should land on host1 due to our # custom weigher. body = {'server': self._build_server()} body['os:scheduler_hints'] = {'group': group_id} server = self.api.post_server(body) server = self._wait_for_state_change(server, 'ACTIVE') self.assertEqual('host1', server['OS-EXT-SRV-ATTR:host']) # Verify the group uuid is set in the request spec. ctxt = nova_context.get_admin_context() reqspec = objects.RequestSpec.get_by_instance_uuid(ctxt, server['id']) self.assertEqual(group_id, reqspec.instance_group.uuid) # Stub out the ComputeTaskAPI.resize_instance method to not pass the # request spec from compute to conductor during a reschedule. # This simulates the pre-Stein reschedule behavior. original_resize_instance = conductor_api.ComputeTaskAPI.resize_instance def stub_resize_instance(_self, context, instance, scheduler_hint, flavor, *args, **kwargs): # Only remove the request spec if we know we're rescheduling # which we can determine from the filter_properties retry dict. filter_properties = scheduler_hint['filter_properties'] if filter_properties.get('retry', {}).get('exc'): # Assert the group_uuid is passed through the filter properties self.assertIn('group_uuid', filter_properties) self.assertEqual(group_id, filter_properties['group_uuid']) kwargs.pop('request_spec', None) return original_resize_instance( _self, context, instance, scheduler_hint, flavor, *args, **kwargs) self.stub_out('nova.conductor.api.ComputeTaskAPI.resize_instance', stub_resize_instance) # Now cold migrate the server. Because of allow_resize_to_same_host and # the weigher, the scheduler will pick host1 first. The FakeDriver # actually allows migrating to the same host so we need to stub that # out so the compute will raise UnableToMigrateToSelf like when using # the libvirt driver. host1_driver = self.computes['host1'].driver with mock.patch.dict(host1_driver.capabilities, supports_migrate_to_same_host=False): self.api.post_server_action(server['id'], {'migrate': None}) server = self._wait_for_state_change(server, 'VERIFY_RESIZE') self.assertEqual('host2', server['OS-EXT-SRV-ATTR:host']) # The RequestSpec.instance_group.uuid should still be set. reqspec = objects.RequestSpec.get_by_instance_uuid(ctxt, server['id']) self.assertEqual(group_id, reqspec.instance_group.uuid) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1831771.py0000664000175000017500000000761000000000000024505 0ustar00zuulzuul00000000000000# Copyright 2019, Red Hat, Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import mock from nova.compute import task_states from nova.compute import vm_states from nova import objects from nova import test from nova.tests.functional import integrated_helpers class TestDelete(integrated_helpers.ProviderUsageBaseTestCase): compute_driver = 'fake.MediumFakeDriver' def test_delete_during_create(self): compute = self._start_compute('compute1') def delete_race(instance): self.api.delete_server(instance.uuid) self._wait_for_server_parameter( {'id': instance.uuid}, {'OS-EXT-STS:task_state': task_states.DELETING}, ) orig_save = objects.Instance.save # an in-memory record of the current instance task state as persisted # to the db. db_task_states = collections.defaultdict(str) active_after_deleting_error = [False] # A wrapper round instance.save() which allows us to inject a race # under specific conditions before calling the original instance.save() def wrap_save(instance, *wrap_args, **wrap_kwargs): # We're looking to inject the race before: # instance.save(expected_task_state=task_states.SPAWNING) # towards the end of _build_and_run_instance. # # At this point the driver has finished creating the instance, but # we're still on the compute host and still holding the compute # host instance lock. # # This is just a convenient interception point. In order to race # the delete could have happened at any point prior to this since # the previous instance.save() expected_task_state = wrap_kwargs.get('expected_task_state') if ( expected_task_state == task_states.SPAWNING ): delete_race(instance) orig_save(instance, *wrap_args, **wrap_kwargs) if ( db_task_states[instance.uuid] == task_states.DELETING and instance.vm_state == vm_states.ACTIVE and instance.task_state is None ): # the instance was in the DELETING task state in the db, and we # overwrote that to set it to ACTIVE with no task state. # Bug 1848666. active_after_deleting_error[0] = True db_task_states[instance.uuid] = instance.task_state with test.nested( mock.patch('nova.objects.Instance.save', wrap_save), mock.patch.object(compute.driver, 'spawn'), mock.patch.object(compute.driver, 'unplug_vifs'), ) as (_, mock_spawn, mock_unplug_vifs): # the compute manager doesn't set the ERROR state in cleanup since # it might race with delete, therefore we'll be left in BUILDING server_req = self._build_server(networks='none') created_server = self.api.post_server({'server': server_req}) self._wait_until_deleted(created_server) # assert that we spawned the instance, and unplugged vifs on # cleanup mock_spawn.assert_called() mock_unplug_vifs.assert_called() # FIXME(mdbooth): Bug 1848666 self.assertTrue(active_after_deleting_error[0]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1835822.py0000664000175000017500000002125700000000000024511 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit import fake_notifier from nova.tests.unit.image import fake as fake_image from nova.tests.unit import policy_fixture class RegressionTest1835822( test.TestCase, integrated_helpers.InstanceHelperMixin): # ---------------------------- setup ---------------------------- def setUp(self): super(RegressionTest1835822, self).setUp() # Use the standard fixtures. self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) self.api = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')).api self.start_service('conductor') self.start_service('scheduler') self.start_service('compute') # the image fake backend needed for image discovery fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) images = self.api.get_images() self.image_ref_0 = images[0]['id'] self.image_ref_1 = images[1]['id'] fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) # ---------------------------- helpers ---------------------------- def _create_active_server(self, server_args=None): basic_server = { 'flavorRef': 1, 'name': 'RegressionTest1835822', 'networks': [{ 'uuid': nova_fixtures.NeutronFixture.network_1['id'] }], 'imageRef': self.image_ref_0 } if server_args: basic_server.update(server_args) server = self.api.post_server({'server': basic_server}) return self._wait_for_state_change(server, 'ACTIVE') def _hard_reboot_server(self, active_server): args = {"reboot": {"type": "HARD"}} self.api.api_post('servers/%s/action' % active_server['id'], args) fake_notifier.wait_for_versioned_notifications('instance.reboot.end') return self._wait_for_state_change(active_server, 'ACTIVE') def _rebuild_server(self, active_server): args = {"rebuild": {"imageRef": self.image_ref_1}} self.api.api_post('servers/%s/action' % active_server['id'], args) fake_notifier.wait_for_versioned_notifications('instance.rebuild.end') return self._wait_for_state_change(active_server, 'ACTIVE') def _shelve_server(self, active_server): self.api.post_server_action(active_server['id'], {'shelve': {}}) return self._wait_for_state_change(active_server, 'SHELVED_OFFLOADED') def _unshelve_server(self, shelved_server): self.api.post_server_action(shelved_server['id'], {'unshelve': {}}) return self._wait_for_state_change(shelved_server, 'ACTIVE') # ---------------------------- tests ---------------------------- def test_create_server_with_config_drive(self): """Verify that we can create a server with a config drive. """ active_server = self._create_active_server( server_args={'config_drive': True}) self.assertTrue(active_server['config_drive']) def test_create_server_without_config_drive(self): """Verify that we can create a server without a config drive. 
""" self.flags(force_config_drive=False) active_server = self._create_active_server() self.assertEqual('', active_server['config_drive']) def test_create_server_with_forced_config_drive(self): """Verify that we can create a server with a forced config drive. """ self.flags(force_config_drive=True) active_server = self._create_active_server() self.assertTrue(active_server['config_drive']) def test_create_server_with_forced_config_drive_reboot(self): """Verify that we can create a server with a forced config drive and it survives reboot. """ self.flags(force_config_drive=True) active_server = self._create_active_server() self.assertTrue(active_server['config_drive']) active_server = self._hard_reboot_server(active_server) self.assertTrue(active_server['config_drive']) def test_create_server_config_drive_reboot_after_conf_change(self): """Verify that we can create a server with or without a forced config drive it does not change across a reboot. """ # NOTE(sean-k-mooney): we do not need to restart the compute # service because of the way self.flags overrides the config # values. self.flags(force_config_drive=True) with_config_drive = self._create_active_server() self.assertTrue(with_config_drive['config_drive']) self.flags(force_config_drive=False) without_config_drive = self._create_active_server() self.assertEqual('', without_config_drive['config_drive']) # this server was created with force_config_drive=true # so assert now that force_config_drive is false it does # not override the value it was booted with. with_config_drive = self._hard_reboot_server(with_config_drive) self.assertTrue(with_config_drive['config_drive']) # this server was booted with force_config_drive=False so # assert that it's config drive setting is not overridden self.flags(force_config_drive=True) without_config_drive = self._hard_reboot_server(without_config_drive) self.assertEqual('', without_config_drive['config_drive']) def test_create_server_config_drive_rebuild_after_conf_change(self): """Verify that we can create a server with or without a forced config drive it does not change across a rebuild. """ self.flags(force_config_drive=True) with_config_drive = self._create_active_server() self.assertTrue(with_config_drive['config_drive']) self.flags(force_config_drive=False) without_config_drive = self._create_active_server() self.assertEqual('', without_config_drive['config_drive']) # this server was created with force_config_drive=true # so assert now that force_config_drive is false it does # not override the value it was booted with. with_config_drive = self._rebuild_server(with_config_drive) self.assertTrue(with_config_drive['config_drive']) # this server was booted with force_config_drive=False so # assert that it's config drive setting is not overridden self.flags(force_config_drive=True) without_config_drive = self._rebuild_server(without_config_drive) self.assertEqual('', without_config_drive['config_drive']) def test_create_server_config_drive_shelve_unshelve_conf_change(self): """Verify that we can create a server with or without a forced config drive it does not change across a shelve and unshelve. 
""" self.flags(force_config_drive=True) with_config_drive = self._create_active_server() self.assertTrue(with_config_drive['config_drive']) self.flags(force_config_drive=False) without_config_drive = self._create_active_server() self.assertEqual('', without_config_drive['config_drive']) # this server was created with force_config_drive=true # so assert now that force_config_drive is false it does # not override the value it was booted with. with_config_drive = self._shelve_server(with_config_drive) with_config_drive = self._unshelve_server(with_config_drive) self.assertTrue(with_config_drive['config_drive']) # this server was booted with force_config_drive=False so # assert that it's config drive setting is not overridden self.flags(force_config_drive=True) without_config_drive = self._shelve_server(without_config_drive) without_config_drive = self._unshelve_server(without_config_drive) self.assertEqual('', without_config_drive['config_drive']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1837955.py0000664000175000017500000001063100000000000024514 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import time from nova import exception from nova.tests.functional import integrated_helpers from nova.tests.unit import fake_notifier class BuildRescheduleClaimFailsTestCase( integrated_helpers.ProviderUsageBaseTestCase): """Regression test case for bug 1837955 where a server build fails on the primary host and then attempting to allocate resources on the alternate host, the alternate host is full and the allocations claim in placement fails, resulting in the build failing due to MaxRetriesExceeded and the server going to ERROR status. """ compute_driver = 'fake.SmallFakeDriver' def _wait_for_unversioned_notification(self, event_type): for x in range(20): # wait up to 10 seconds for notification in fake_notifier.NOTIFICATIONS: if notification.event_type == event_type: return notification time.sleep(.5) self.fail('Timed out waiting for unversioned notification %s. Got: %s' % (event_type, fake_notifier.NOTIFICATIONS)) def test_build_reschedule_alt_host_alloc_fails(self): # Start two compute services so we have one alternate host. # Set cpu_allocation_ratio=1.0 to make placement inventory # and allocations for VCPU easier to manage. self.flags(cpu_allocation_ratio=1.0) for x in range(2): self._start_compute('host%i' % x) def fake_instance_claim(_self, _context, _inst, nodename, *a, **kw): # Before triggering the reschedule to the other host, max out the # capacity on the alternate host. alt_nodename = 'host0' if nodename == 'host1' else 'host1' rp_uuid = self._get_provider_uuid_by_host(alt_nodename) inventories = self._get_provider_inventory(rp_uuid) # Fake some other consumer taking all of the VCPU on the alt host. # Since we set cpu_allocation_ratio=1.0 the total is the total # capacity for VCPU on the host. 
total_vcpu = inventories['VCPU']['total'] alt_consumer = '7d32d0bc-af16-44b2-8019-a24925d76152' allocs = { 'allocations': { rp_uuid: { 'resources': { 'VCPU': total_vcpu } } }, 'project_id': self.api.project_id, 'user_id': self.api.project_id } resp = self.placement_api.put( '/allocations/%s' % alt_consumer, allocs, version='1.12') self.assertEqual(204, resp.status, resp.content) raise exception.ComputeResourcesUnavailable(reason='overhead!') # Stub out the instance claim (regardless of which host the scheduler # picks as the primary) to trigger a reschedule. self.stub_out('nova.compute.manager.resource_tracker.ResourceTracker.' 'instance_claim', fake_instance_claim) # Now that our stub is in place, try to create a server and wait for it # to go to ERROR status. server_req = self._build_server( networks=[{'port': self.neutron.port_1['id']}]) server = self.api.post_server({'server': server_req}) server = self._wait_for_state_change(server, 'ERROR') # Wait for the MaxRetriesExceeded fault to be recorded. # set_vm_state_and_notify sets the vm_state to ERROR before the fault # is recorded but after the notification is sent. So wait for the # unversioned notification to show up and then get the fault. self._wait_for_unversioned_notification( 'compute_task.build_instances') server = self.api.get_server(server['id']) self.assertIn('fault', server) self.assertIn('Exceeded maximum number of retries', server['fault']['message']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1839560.py0000664000175000017500000001364000000000000024511 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from nova import context from nova.db import api as db_api from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova import utils LOG = logging.getLogger(__name__) class PeriodicNodeRecreateTestCase(test.TestCase, integrated_helpers.InstanceHelperMixin): """Regression test for bug 1839560 introduced in Rocky. When an ironic node is undergoing maintenance the driver will not report it as an available node to the ComputeManager.update_available_resource periodic task. The ComputeManager will then (soft) delete a ComputeNode record for that no-longer-available node. If/when the ironic node is available again and the driver reports it, the ResourceTracker will attempt to create a ComputeNode record for the ironic node. The regression with change Ia69fabce8e7fd7de101e291fe133c6f5f5f7056a is that the ironic node uuid is used as the ComputeNode.uuid and there is a unique constraint on the ComputeNode.uuid value in the database. 
So trying to create a ComputeNode with the same uuid (after the ironic node comes back from being unavailable) fails with a DuplicateEntry error since there is a (soft) deleted version of the ComputeNode with the same uuid in the database. """ def setUp(self): super(PeriodicNodeRecreateTestCase, self).setUp() # We need the PlacementFixture for the compute nodes to report in but # otherwise don't care about placement for this test. self.useFixture(func_fixtures.PlacementFixture()) # Start up the API so we can query the os-hypervisors API. self.api = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')).admin_api # Make sure we're using the fake driver that has predictable uuids # for each node. self.flags(compute_driver='fake.PredictableNodeUUIDDriver') def test_update_available_resource_node_recreate(self): # First we create a compute service to manage a couple of fake nodes. compute = self.start_service('compute', 'node1') # When start_service runs, it will create the node1 ComputeNode. compute.manager.driver._set_nodes(['node1', 'node2']) # Run the update_available_resource periodic to register node2. ctxt = context.get_admin_context() compute.manager.update_available_resource(ctxt) # Make sure no compute nodes were orphaned or deleted. self.assertNotIn('Deleting orphan compute node', self.stdlog.logger.output) # Now we should have two compute nodes, make sure the hypervisors API # shows them. hypervisors = self.api.api_get('/os-hypervisors').body['hypervisors'] self.assertEqual(2, len(hypervisors), hypervisors) self.assertEqual({'node1', 'node2'}, set([hyp['hypervisor_hostname'] for hyp in hypervisors])) # Now stub the driver to only report node1. This is making it look like # node2 is no longer available when update_available_resource runs. compute.manager.driver._set_nodes(['node1']) compute.manager.update_available_resource(ctxt) # node2 should have been deleted, check the logs and API. log = self.stdlog.logger.output self.assertIn('Deleting orphan compute node', log) self.assertIn('hypervisor host is node2', log) hypervisors = self.api.api_get('/os-hypervisors').body['hypervisors'] self.assertEqual(1, len(hypervisors), hypervisors) self.assertEqual('node1', hypervisors[0]['hypervisor_hostname']) # But the node2 ComputeNode is still in the database with deleted!=0. with utils.temporary_mutation(ctxt, read_deleted='yes'): cn = objects.ComputeNode.get_by_host_and_nodename( ctxt, 'node1', 'node2') self.assertTrue(cn.deleted) # Now stub the driver again to report node2 as being back and run # the periodic task. compute.manager.driver._set_nodes(['node1', 'node2']) LOG.info('Running update_available_resource which should bring back ' 'node2.') compute.manager.update_available_resource(ctxt) # The DBDuplicateEntry error should have been handled and resulted in # updating the (soft) deleted record to no longer be deleted. log = self.stdlog.logger.output self.assertNotIn('DBDuplicateEntry', log) # Should have two reported hypervisors again. hypervisors = self.api.api_get('/os-hypervisors').body['hypervisors'] self.assertEqual(2, len(hypervisors), hypervisors) # Now that the node2 record was un-soft-deleted, archiving should not # remove any compute_nodes. 
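# archive_deleted_rows returns a tuple whose first element maps table
# names to the number of rows archived, which is why only element [0]
# is checked below for a 'compute_nodes' key.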
LOG.info('Archiving the database.') archived = db_api.archive_deleted_rows(max_rows=1000)[0] self.assertNotIn('compute_nodes', archived) cn2 = objects.ComputeNode.get_by_host_and_nodename( ctxt, 'node1', 'node2') self.assertFalse(cn2.deleted) self.assertIsNone(cn2.deleted_at) # The node2 id and uuid should not have changed in the DB. self.assertEqual(cn.id, cn2.id) self.assertEqual(cn.uuid, cn2.uuid) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1843090.py0000664000175000017500000000661200000000000024503 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import nova.compute from nova import exception from nova.tests import fixtures as nova_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit import fake_notifier class PinnedComputeRpcTests(integrated_helpers.ProviderUsageBaseTestCase): compute_driver = 'fake.MediumFakeDriver' def setUp(self): # Use a custom weigher to make sure that we have a predictable host # selection order during scheduling self.useFixture(nova_fixtures.HostNameWeigherFixture()) super(PinnedComputeRpcTests, self).setUp() fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) self.compute1 = self._start_compute(host='host1') self.compute2 = self._start_compute(host='host2') self.compute3 = self._start_compute(host='host3') def _test_reschedule_migration_with_compute_rpc_pin(self, version_cap): self.flags(compute=version_cap, group='upgrade_levels') server_req = self._build_server(networks='none') server = self.api.post_server({'server': server_req}) server = self._wait_for_state_change(server, 'ACTIVE') orig_claim = nova.compute.resource_tracker.ResourceTracker.resize_claim claim_calls = [] def fake_orig_claim( _self, context, instance, instance_type, nodename, *args, **kwargs): if not claim_calls: claim_calls.append(nodename) raise exception.ComputeResourcesUnavailable( reason='Simulated claim failure') else: claim_calls.append(nodename) return orig_claim( _self, context, instance, instance_type, nodename, *args, **kwargs) with mock.patch( 'nova.compute.resource_tracker.ResourceTracker.resize_claim', new=fake_orig_claim): # Now migrate the server which is going to fail on the first # destination but then will be rescheduled. self.api.post_server_action(server['id'], {'migrate': None}) # We expect that the instance is on host3 as the scheduler # selected host2 due to our weigher and the cold migrate failed # there and re-scheduled to host3 were it succeeded. 
self._wait_for_server_parameter(server, { 'OS-EXT-SRV-ATTR:host': 'host3', 'OS-EXT-STS:task_state': None, 'status': 'VERIFY_RESIZE'}) # we ensure that there was a failed and then a successful claim call self.assertEqual(['host2', 'host3'], claim_calls) def test_reschedule_migration_5_1(self): self._test_reschedule_migration_with_compute_rpc_pin('5.1') def test_reschedule_migration_5_0(self): self._test_reschedule_migration_with_compute_rpc_pin('5.0') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1843708.py0000664000175000017500000000500500000000000024504 0ustar00zuulzuul00000000000000# Copyright 2019 NTT Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import context from nova import objects from nova.tests.functional import integrated_helpers from nova.tests.unit import fake_notifier from nova.tests.unit.image import fake as fake_image class RebuildWithKeypairTestCase(integrated_helpers._IntegratedTestBase): """Regression test for bug 1843708. This tests a rebuild scenario with new key pairs. """ api_major_version = 'v2.1' microversion = 'latest' def test_rebuild_with_keypair(self): keypair_req = { 'keypair': { 'name': 'test-key1', 'type': 'ssh', }, } keypair1 = self.api.post_keypair(keypair_req) keypair_req['keypair']['name'] = 'test-key2' keypair2 = self.api.post_keypair(keypair_req) server = self._build_server(networks='none') server.update({'key_name': 'test-key1'}) # Create a server with keypair 'test-key1' server = self.api.post_server({'server': server}) self._wait_for_state_change(server, 'ACTIVE') # Check keypairs ctxt = context.get_admin_context() instance = objects.Instance.get_by_uuid( ctxt, server['id'], expected_attrs=['keypairs']) self.assertEqual( keypair1['public_key'], instance.keypairs[0].public_key) # Rebuild a server with keypair 'test-key2' body = { 'rebuild': { 'imageRef': fake_image.get_valid_image_id(), 'key_name': 'test-key2', }, } self.api.api_post('servers/%s/action' % server['id'], body) fake_notifier.wait_for_versioned_notifications('instance.rebuild.end') self._wait_for_state_change(server, 'ACTIVE') # Check keypairs changed instance = objects.Instance.get_by_uuid( ctxt, server['id'], expected_attrs=['keypairs']) self.assertEqual( keypair2['public_key'], instance.keypairs[0].public_key) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1845291.py0000664000175000017500000000575700000000000024521 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import nova from nova import exception from nova.tests.functional import integrated_helpers class ForcedHostMissingReScheduleTestCase( integrated_helpers.ProviderUsageBaseTestCase): compute_driver = 'fake.SmallFakeDriver' def setUp(self): super(ForcedHostMissingReScheduleTestCase, self).setUp() self._start_compute(host="host1") self._start_compute(host="host2") self._start_compute(host="host3") flavors = self.api.get_flavors() self.flavor1 = flavors[0] def test_boot_with_az_and_host_then_migrate_re_schedules(self): """Ensure that re-schedule is possible on migration even if the server is originally booted with forced host. The _boot_and_check_allocations() will start a server forced to host1. Then a migration is triggered. Both host2 and host3 are valid targets. But the test mocks resize_claim to make the first dest host fail and expects that nova will try the alternative host. """ server = self._boot_and_check_allocations( self.flavor1, 'host1') orig_claim = nova.compute.resource_tracker.ResourceTracker.resize_claim claim_calls = [] def fake_orig_claim( _self, context, instance, instance_type, nodename, *args, **kwargs): if not claim_calls: claim_calls.append(nodename) raise exception.ComputeResourcesUnavailable( reason='Simulated claim failure') else: claim_calls.append(nodename) return orig_claim( _self, context, instance, instance_type, nodename, *args, **kwargs) with mock.patch( 'nova.compute.resource_tracker.ResourceTracker.resize_claim', new=fake_orig_claim): # Now migrate the server which is going to fail on the first # destination but then will expect to be rescheduled. self.api.post_server_action(server['id'], {'migrate': None}) # We expect that the instance re-scheduled but successfully ended # up on the second destination host. self._wait_for_server_parameter(server, { 'OS-EXT-STS:task_state': None, 'status': 'VERIFY_RESIZE'}) # we ensure that there was a failed and then a successful claim call self.assertEqual(2, len(claim_calls)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1848343.py0000664000175000017500000001610400000000000024506 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova.compute import instance_actions from nova.compute import manager as compute_manager from nova.scheduler.client import query as query_client from nova.tests.functional import integrated_helpers class DeletedServerAllocationRevertTest( integrated_helpers.ProviderUsageBaseTestCase): """Tests for bug 1848343 introduced in Queens where reverting a migration-based allocation can re-create and leak allocations for a deleted server if the server is deleted during a migration (resize, cold or live). """ compute_driver = 'fake.MediumFakeDriver' def setUp(self): super(DeletedServerAllocationRevertTest, self).setUp() # Start two computes so we can migrate between them. self._start_compute('host1') self._start_compute('host2') def _create_server(self): """Creates and return a server along with a source host and target host. """ server = super()._create_server(networks='none') source_host = server['OS-EXT-SRV-ATTR:host'] target_host = 'host2' if source_host == 'host1' else 'host1' return server, source_host, target_host def _assert_no_allocations(self, server): # There should be no allocations on either host1 or host2. providers = self._get_all_providers() for rp in providers: allocations = self._get_allocations_by_provider_uuid(rp['uuid']) # FIXME(mriedem): This is bug 1848343 where rollback # reverts the allocations and moves the source host allocations # held by the migration consumer back to the now-deleted instance # consumer. if rp['name'] == server['OS-EXT-SRV-ATTR:host']: self.assertFlavorMatchesAllocation( server['flavor'], server['id'], rp['uuid']) else: self.assertEqual({}, allocations, 'Leaked allocations on provider: %s (%s)' % (rp['uuid'], rp['name'])) def _disable_target_host(self, target_host): # Disable the target compute service to trigger a NoValidHost from # the scheduler which happens after conductor has moved the source # node allocations to the migration record. target_service = self.computes[target_host].service_ref self.api.put_service(target_service.uuid, {'status': 'disabled'}) def _stub_delete_server_during_scheduling(self, server): # Wrap the select_destinations call so we can delete the server # concurrently while scheduling. original_select_dests = \ query_client.SchedulerQueryClient.select_destinations def wrap_select_dests(*args, **kwargs): # Simulate concurrently deleting the server while scheduling. self._delete_server(server) return original_select_dests(*args, **kwargs) self.stub_out('nova.scheduler.client.query.SchedulerQueryClient.' 'select_destinations', wrap_select_dests) def test_migration_task_rollback(self): """Tests a scenario where the MigrationTask swaps the allocations for a cold migrate (or resize, it does not matter) and then fails and rolls back allocations before RPC casting to prep_resize on the dest host. """ server, source_host, target_host = self._create_server() self._disable_target_host(target_host) self._stub_delete_server_during_scheduling(server) # Now start the cold migration which will fail due to NoValidHost. self.api.post_server_action(server['id'], {'migrate': None}, check_response_status=[202]) # We cannot monitor the migration from the API since it is deleted # when the instance is deleted so just wait for the failed instance # action event after the task rollback happens. # Note that we get InstanceNotFound rather than NoValidHost because # the NoValidHost handler in ComputeTaskManager._cold_migrate calls # _set_vm_state_and_notify which raises InstanceNotFound and masks # the NoValidHost error. 
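        # Note that _assert_no_allocations (defined above) currently
        # tolerates the allocation leaked back onto the instance's host
        # because of the FIXME for this bug; once bug 1848343 is fixed the
        # helper can simply expect empty allocations on every provider.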
self._assert_resize_migrate_action_fail( server, instance_actions.MIGRATE, 'InstanceNotFound') self._assert_no_allocations(server) def test_live_migration_task_rollback(self): """Tests a scenario where the LiveMigrationTask swaps the allocations for a live migration and then fails and rolls back allocations before RPC casting to live_migration on the source host. """ server, source_host, target_host = self._create_server() self._disable_target_host(target_host) self._stub_delete_server_during_scheduling(server) # Now start the live migration which will fail due to NoValidHost. body = {'os-migrateLive': {'host': None, 'block_migration': 'auto'}} self.api.post_server_action(server['id'], body) # We cannot monitor the migration from the API since it is deleted # when the instance is deleted so just wait for the failed instance # action event after the task rollback happens. self._wait_for_action_fail_completion( server, instance_actions.LIVE_MIGRATION, 'conductor_live_migrate_instance') self._assert_no_allocations(server) def test_migrate_on_compute_fail(self): """Tests a scenario where during the _prep_resize on the dest host the instance is gone which triggers a failure and revert of the migration-based allocations created in conductor. """ server, source_host, target_host = self._create_server() # Wrap _prep_resize so we can concurrently delete the server. original_prep_resize = compute_manager.ComputeManager._prep_resize def wrap_prep_resize(*args, **kwargs): self._delete_server(server) return original_prep_resize(*args, **kwargs) self.stub_out('nova.compute.manager.ComputeManager._prep_resize', wrap_prep_resize) # Now start the cold migration which will fail in the dest compute. self.api.post_server_action(server['id'], {'migrate': None}) # We cannot monitor the migration from the API since it is deleted # when the instance is deleted so just wait for the failed instance # action event after the allocation revert happens. self._wait_for_action_fail_completion( server, instance_actions.MIGRATE, 'compute_prep_resize') self._assert_no_allocations(server) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1849165.py0000664000175000017500000000463700000000000024521 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import nova from nova.tests.functional import integrated_helpers class UpdateResourceMigrationRaceTest( integrated_helpers.ProviderUsageBaseTestCase): compute_driver = 'fake.SmallFakeDriver' def setUp(self): super(UpdateResourceMigrationRaceTest, self).setUp() self._start_compute(host="host1") self._start_compute(host="host2") flavors = self.api.get_flavors() self.flavor1 = flavors[0] def test_update_dest_between_mig_start_and_claim(self): """Validate update_available_resource when an instance exists that has started migrating but not yet been claimed on the destination. 
""" server = self._boot_and_check_allocations( self.flavor1, 'host1') orig_prep = nova.compute.manager.ComputeManager.pre_live_migration def fake_pre(*args, **kwargs): # Trigger update_available_resource on the destination (this runs # it on the source as well, but that's okay). self._run_periodics() return orig_prep(*args, **kwargs) with mock.patch( 'nova.compute.manager.ComputeManager.pre_live_migration', new=fake_pre): # Migrate the server. self.api.post_server_action( server['id'], {'os-migrateLive': {'host': None, 'block_migration': 'auto'}}) self._wait_for_server_parameter(server, { 'OS-EXT-STS:task_state': None, 'status': 'ACTIVE'}) # NOTE(efried): This was bug 1849165 where # _populate_assigned_resources raised a TypeError because it tried # to access the instance's migration_context before that existed. self.assertNotIn( "TypeError: argument of type 'NoneType' is not iterable", self.stdlog.logger.output) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1849409.py0000664000175000017500000000537200000000000024517 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as fake_image class ListDeletedServersWithMarker(test.TestCase, integrated_helpers.InstanceHelperMixin): """Regression test for bug 1849409 introduced in Queens where listing deleted servers with a marker returns the wrong results because the marker is nulled out if BuildRequestList.get_by_filters does not raise MarkerNotFound, but that does not mean the marker was found in the build request list. """ def setUp(self): super(ListDeletedServersWithMarker, self).setUp() # Start standard fixtures. self.useFixture(func_fixtures.PlacementFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) # Start nova services. self.api = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')).admin_api self.start_service('conductor') self.start_service('scheduler') self.start_service('compute') def test_list_deleted_servers_with_marker(self): # Create a server. server = self._build_server() server = self.api.post_server({'server': server}) server = self._wait_for_state_change(server, 'ACTIVE') # Now delete the server and wait for it to be gone. self._delete_server(server) # List deleted servers, we should get the one back. servers = self.api.get_servers(detail=False, search_opts={'deleted': True}) self.assertEqual(1, len(servers), servers) self.assertEqual(server['id'], servers[0]['id']) # Now list deleted servers with a marker which should not return the # marker instance. 
servers = self.api.get_servers(detail=False, search_opts={'deleted': True, 'marker': server['id']}) self.assertEqual(0, len(servers), servers) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1852458.py0000664000175000017500000000737300000000000024520 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.compute import instance_actions from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as fake_image from nova.tests.unit import policy_fixture from nova import utils class TestInstanceActionBuryInCell0(test.TestCase, integrated_helpers.InstanceHelperMixin): """Regression test for bug 1852458 where the "create" instance action event was not being created for instances buried in cell0 starting in Ocata. """ def setUp(self): super(TestInstanceActionBuryInCell0, self).setUp() # Setup common fixtures. fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) self.useFixture(func_fixtures.PlacementFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) policy = self.useFixture(policy_fixture.RealPolicyFixture()) # Allow non-admins to see instance action events. policy.set_rules({ 'os_compute_api:os-instance-actions:events': 'rule:admin_or_owner' }, overwrite=False) # Setup controller services. self.start_service('conductor') self.start_service('scheduler') self.api = self.useFixture( nova_fixtures.OSAPIFixture(api_version='v2.1')).api def test_bury_in_cell0_instance_create_action(self): """Tests creating a server which will fail scheduling because there is no compute service and result in the instance being created (buried) in cell0. """ server = self._build_server(networks='none') # Use microversion 2.37 to create a server without any networking. with utils.temporary_mutation(self.api, microversion='2.37'): server = self.api.post_server({'server': server}) # The server should go to ERROR status and have a NoValidHost fault. server = self._wait_for_state_change(server, 'ERROR') self.assertIn('fault', server) self.assertIn('No valid host', server['fault']['message']) self.assertEqual('', server['hostId']) # Assert the "create" instance action exists and is failed. actions = self.api.get_instance_actions(server['id']) self.assertEqual(1, len(actions), actions) action = actions[0] self.assertEqual(instance_actions.CREATE, action['action']) self.assertEqual('Error', action['message']) # Get the events. There should be one with an Error result. 
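        # The response the assertions below walk through looks roughly like
        # this (abridged; values are illustrative, derived from the checks
        # in this test):
        #   {'instanceAction': {
        #       'action': 'create',
        #       'message': 'Error',
        #       'events': [{'event': 'conductor_schedule_and_build_instances',
        #                   'result': 'Error',
        #                   'traceback': '... select_destinations ...'}]}}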
action = self.api.api_get( '/servers/%s/os-instance-actions/%s' % (server['id'], action['request_id'])).body['instanceAction'] events = action['events'] self.assertEqual(1, len(events), events) event = events[0] self.assertEqual('conductor_schedule_and_build_instances', event['event']) self.assertEqual('Error', event['result']) # Normally non-admins cannot see the event traceback but we enabled # that via policy in setUp so assert something was recorded. self.assertIn('select_destinations', event['traceback']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1862633.py0000664000175000017500000000735600000000000024515 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from neutronclient.common import exceptions as neutron_exception from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit import fake_notifier from nova.tests.unit.image import fake as fake_image class UnshelveNeutronErrorTest( test.TestCase, integrated_helpers.InstanceHelperMixin): def setUp(self): super(UnshelveNeutronErrorTest, self).setUp() # Start standard fixtures. placement = func_fixtures.PlacementFixture() self.useFixture(placement) self.placement_api = placement.api self.neutron = nova_fixtures.NeutronFixture(self) self.useFixture(self.neutron) fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) # Start nova services. self.api = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')).admin_api self.api.microversion = 'latest' fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) self.start_service('conductor') self.start_service('scheduler') self.start_service('compute', host='host1') self.start_service('compute', host='host2') def test_unshelve_offloaded_fails_due_to_neutron(self): server = self._create_server( networks=[{'port': self.neutron.port_1['id']}], az='nova:host1') # with default config shelve means immediate offload as well req = { 'shelve': {} } self.api.post_server_action(server['id'], req) self._wait_for_server_parameter( server, {'status': 'SHELVED_OFFLOADED', 'OS-EXT-SRV-ATTR:host': None}) allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] self.assertEqual(0, len(allocations)) # disable the original host of the instance to force a port update # during unshelve source_service_id = self.api.get_services( host='host1', binary='nova-compute')[0]['id'] self.api.put_service(source_service_id, {"status": "disabled"}) # Simulate that port update fails during unshelve due to neutron is # unavailable with mock.patch( 'nova.tests.fixtures.NeutronFixture.' 
'update_port') as mock_update_port: mock_update_port.side_effect = neutron_exception.ConnectionFailed( reason='test') req = {'unshelve': None} self.api.post_server_action(server['id'], req) fake_notifier.wait_for_versioned_notifications( 'instance.unshelve.start') self._wait_for_server_parameter( server, {'status': 'SHELVED_OFFLOADED', 'OS-EXT-STS:task_state': None, 'OS-EXT-SRV-ATTR:host': None}) # As the instance went back to offloaded state we expect no allocation allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] self.assertEqual(0, len(allocations)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1879878.py0000664000175000017500000001313200000000000024525 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import ddt from nova.compute import resource_tracker as rt from nova import context as nova_context from nova import objects from nova.tests.functional import integrated_helpers @ddt.ddt class TestColdMigrationUsage(integrated_helpers._IntegratedTestBase): """Reproducer for bug #1879878. Demonstrate the possibility of races caused by running the resource tracker's periodic task between marking a migration as confirmed or reverted and dropping the claim for that migration on the source or destination host, respectively. """ compute_driver = 'fake.MediumFakeDriver' microversion = 'latest' def setUp(self): self.flags(compute_driver=self.compute_driver) super().setUp() self.ctxt = nova_context.get_admin_context() def _setup_compute_service(self): # Start two computes so we can migrate between them. self._start_compute('host1') self._start_compute('host2') def _create_server(self): """Creates and return a server along with a source host and target host. 
""" server = super()._create_server(networks='none') self.addCleanup(self._delete_server, server) source_host = server['OS-EXT-SRV-ATTR:host'] target_host = 'host2' if source_host == 'host1' else 'host1' return server, source_host, target_host def assertUsage(self, hostname, usage): """Assert VCPU usage for the given host.""" cn = objects.ComputeNode.get_by_nodename(self.ctxt, hostname) # we could test anything, but vcpu is easy to grok self.assertEqual(cn.vcpus_used, usage) @ddt.data(True, False) def test_migrate_confirm(self, drop_race): server, src_host, dst_host = self._create_server() # only one instance created, so usage on its host self.assertUsage(src_host, 1) self.assertUsage(dst_host, 0) orig_drop_claim = rt.ResourceTracker.drop_move_claim_at_source def fake_drop_move_claim(*args, **kwargs): # run periodics after marking the migration confirmed, simulating a # race between the doing this and actually dropping the claim # check the usage, which should show usage on both hosts self.assertUsage(src_host, 1) self.assertUsage(dst_host, 1) if drop_race: self._run_periodics() self.assertUsage(src_host, 1) self.assertUsage(dst_host, 1) return orig_drop_claim(*args, **kwargs) self.stub_out( 'nova.compute.resource_tracker.ResourceTracker.' 'drop_move_claim_at_source', fake_drop_move_claim, ) # TODO(stephenfin): Use a helper self.api.post_server_action(server['id'], {'migrate': None}) self._wait_for_state_change(server, 'VERIFY_RESIZE') # migration isn't complete so we should have usage on both hosts self.assertUsage(src_host, 1) self.assertUsage(dst_host, 1) self._confirm_resize(server) # migration is now confirmed so we should once again only have usage on # one host self.assertUsage(src_host, 0) self.assertUsage(dst_host, 1) # running periodics shouldn't change things self._run_periodics() self.assertUsage(src_host, 0) self.assertUsage(dst_host, 1) @ddt.data(True, False) def test_migrate_revert(self, drop_race): server, src_host, dst_host = self._create_server() # only one instance created, so usage on its host self.assertUsage(src_host, 1) self.assertUsage(dst_host, 0) orig_drop_claim = rt.ResourceTracker.drop_move_claim_at_dest def fake_drop_move_claim(*args, **kwargs): # run periodics after marking the migration reverted, simulating a # race between the doing this and actually dropping the claim # check the usage, which should show usage on both hosts self.assertUsage(src_host, 1) self.assertUsage(dst_host, 1) if drop_race: self._run_periodics() self.assertUsage(src_host, 1) self.assertUsage(dst_host, 1) return orig_drop_claim(*args, **kwargs) self.stub_out( 'nova.compute.resource_tracker.ResourceTracker.' 
'drop_move_claim_at_dest', fake_drop_move_claim, ) # TODO(stephenfin): Use a helper self.api.post_server_action(server['id'], {'migrate': None}) self._wait_for_state_change(server, 'VERIFY_RESIZE') # migration isn't complete so we should have usage on both hosts self.assertUsage(src_host, 1) self.assertUsage(dst_host, 1) self._revert_resize(server) # migration is now reverted so we should once again only have usage on # one host self.assertUsage(src_host, 1) self.assertUsage(dst_host, 0) # running periodics shouldn't change things self._run_periodics() self.assertUsage(src_host, 1) self.assertUsage(dst_host, 0) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1888395.py0000664000175000017500000001202600000000000024520 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures from lxml import etree from urllib import parse as urlparse from nova import context from nova.network import constants as neutron_constants from nova.network import neutron from nova.tests.functional.libvirt import base as libvirt_base from nova.tests.unit.virt.libvirt import fake_os_brick_connector from nova.tests.unit.virt.libvirt import fakelibvirt class TestLiveMigrationWithoutMultiplePortBindings( libvirt_base.ServersTestBase): """Regression test for bug 1888395. This regression test asserts that Live migration works when neutron does not support the binding-extended api extension and the legacy single port binding workflow is used. """ ADMIN_API = True api_major_version = 'v2.1' microversion = 'latest' def list_extensions(self, *args, **kwargs): return { 'extensions': [ { # Copied from neutron-lib portbindings.py "updated": "2014-02-03T10:00:00-00:00", "name": neutron_constants.PORT_BINDING, "links": [], "alias": "binding", "description": "Expose port bindings of a virtual port to " "external application" } ] } def setUp(self): self.flags(instances_path=self.useFixture(fixtures.TempDir()).path) super().setUp() self.neutron.list_extensions = self.list_extensions self.neutron_api = neutron.API() # TODO(sean-k-mooney): remove after # I275509eb0e0eb9eaf26fe607b7d9a67e1edc71f8 # has merged. 
self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.driver.connector', fake_os_brick_connector)) self.start_computes({ 'start_host': fakelibvirt.HostInfo( cpu_nodes=1, cpu_sockets=1, cpu_cores=4, cpu_threads=2, kB_mem=10740000), 'end_host': fakelibvirt.HostInfo( cpu_nodes=1, cpu_sockets=1, cpu_cores=4, cpu_threads=2, kB_mem=10740000)}) self.ctxt = context.get_admin_context() # TODO(sean-k-mooney): remove this when it is part of ServersTestBase self.useFixture(fixtures.MonkeyPatch( 'nova.tests.unit.virt.libvirt.fakelibvirt.Domain.migrateToURI3', self._migrate_stub)) def _migrate_stub(self, domain, destination, params, flags): """Stub out migrateToURI3.""" src_hostname = domain._connection.hostname dst_hostname = urlparse.urlparse(destination).netloc # In a real live migration, libvirt and QEMU on the source and # destination talk it out, resulting in the instance starting to exist # on the destination. Fakelibvirt cannot do that, so we have to # manually create the "incoming" instance on the destination # fakelibvirt. dst = self.computes[dst_hostname] dst.driver._host.get_connection().createXML( params['destination_xml'], 'fake-createXML-doesnt-care-about-flags') src = self.computes[src_hostname] conn = src.driver._host.get_connection() # because migrateToURI3 is spawned in a background thread, this method # does not block the upper nova layers. Because we don't want nova to # think the live migration has finished until this method is done, the # last thing we do is make fakelibvirt's Domain.jobStats() return # VIR_DOMAIN_JOB_COMPLETED. server = etree.fromstring( params['destination_xml'] ).find('./uuid').text dom = conn.lookupByUUIDString(server) dom.complete_job() def test_live_migrate(self): server = self._create_server( host='start_host', networks=[{'port': self.neutron.port_1['id']}]) self.assertFalse( self.neutron_api.supports_port_binding_extension(self.ctxt)) # TODO(sean-k-mooney): extend _live_migrate to support passing a host self.api.post_server_action( server['id'], { 'os-migrateLive': { 'host': 'end_host', 'block_migration': 'auto' } } ) self._wait_for_server_parameter( server, {'OS-EXT-SRV-ATTR:host': 'end_host', 'status': 'ACTIVE'}) msg = "NotImplementedError: Cannot load 'vif_type' in the base class" self.assertNotIn(msg, self.stdlog.logger.output) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1889108.py0000664000175000017500000000765100000000000024521 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import integrated_helpers class TestVolAttachmentsDuringPreLiveMigration( integrated_helpers._IntegratedTestBase ): """Regression test for bug 1889108. This regression test asserts that the original source volume attachments are not removed during the rollback from pre_live_migration failures on the destination. 
""" # Default self.api to the self.admin_api as live migration is admin only ADMIN_API = True microversion = 'latest' def setUp(self): super().setUp() self.cinder = self.useFixture(nova_fixtures.CinderFixture(self)) def _setup_compute_service(self): self._start_compute('src') self._start_compute('dest') @mock.patch('nova.virt.fake.FakeDriver.pre_live_migration', side_effect=test.TestingException) def test_vol_attachments_during_driver_pre_live_mig_failure( self, mock_plm): """Assert that the src attachment is incorrectly removed * Mock pre_live_migration to always fail within the virt driver * Launch a boot from volume instance * Assert that the volume is attached correctly to the instance. * Live migrate the instance to another host invoking the mocked pre_live_migration * Assert that the instance is still on the source host * Assert that both the original source host volume attachment and new destination volume attachment have been removed """ volume_id = nova_fixtures.CinderFixture.IMAGE_BACKED_VOL server = self._build_server( name='test_bfv_pre_live_migration_failure', image_uuid='', networks='none' ) server['block_device_mapping_v2'] = [{ 'source_type': 'volume', 'destination_type': 'volume', 'boot_index': 0, 'uuid': volume_id }] server = self.api.post_server({'server': server}) self._wait_for_state_change(server, 'ACTIVE') # Fetch the source host for use later server = self.api.get_server(server['id']) src_host = server['OS-EXT-SRV-ATTR:host'] # Assert that the volume is connected to the instance self.assertIn( volume_id, self.cinder.volume_ids_for_instance(server['id'])) # Assert that we have an active attachment in the fixture attachments = self.cinder.volume_to_attachment.get(volume_id) self.assertEqual(1, len(attachments)) # Fetch the attachment_id for use later once we have migrated src_attachment_id = list(attachments.keys())[0] # Migrate the instance and wait until the migration errors out thanks # to our mocked version of pre_live_migration raising # test.TestingException self._live_migrate(server, 'error') # Assert that we called the fake pre_live_migration method mock_plm.assert_called_once() # Assert that the instance is listed on the source server = self.api.get_server(server['id']) self.assertEqual(src_host, server['OS-EXT-SRV-ATTR:host']) # Assert that the src attachment is still present attachments = self.cinder.volume_to_attachment.get(volume_id) self.assertIn(src_attachment_id, attachments.keys()) self.assertEqual(1, len(attachments)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1893284.py0000664000175000017500000001065200000000000024514 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova import test
from nova.tests import fixtures as nova_fixtures
from nova.tests.functional.api import client as api_client
from nova.tests.functional import fixtures as func_fixtures
from nova.tests.functional import integrated_helpers
from nova.tests.unit.image import fake as fake_image
from nova.tests.unit import policy_fixture


class TestServersPerUserQuota(test.TestCase,
                              integrated_helpers.InstanceHelperMixin):
    """This tests a regression introduced in the Pike release.

    In Pike we started counting resources for quota limit checking instead
    of tracking usages in a separate database table. As part of that change,
    per-user quota functionality was broken for server creates.

    When multiple users in the same project have per-user quota, they are
    meant to be allowed to create resources as long as they exceed neither
    their per-user quota nor their project quota.

    If a project has an 'instances' quota of 10 and user A has a quota of 1
    and user B has a quota of 1, both users should each be able to create 1
    server.

    Because of the bug, in this scenario user A will succeed in creating a
    server but user B will fail to create a server with a 403 "quota
    exceeded" error because the 'instances' resource count isn't being
    correctly scoped per-user.
    """

    def setUp(self):
        super(TestServersPerUserQuota, self).setUp()
        self.useFixture(policy_fixture.RealPolicyFixture())
        self.useFixture(nova_fixtures.NeutronFixture(self))
        self.useFixture(func_fixtures.PlacementFixture())
        api_fixture = self.useFixture(nova_fixtures.OSAPIFixture(
            api_version='v2.1'))
        self.api = api_fixture.api
        self.admin_api = api_fixture.admin_api
        self.api.microversion = '2.37'  # so we can specify networks='none'
        self.admin_api.microversion = '2.37'
        fake_image.stub_out_image_service(self)
        self.addCleanup(fake_image.FakeImageService_reset)
        self.start_service('conductor')
        self.start_service('scheduler')
        self.start_service('compute')

    def test_create_server_with_per_user_quota(self):
        # Set per-user quota for the non-admin user to allow 1 instance.
        # The default quota for the project is 10 instances.
        quotas = {'instances': 1}
        self.admin_api.update_quota(
            quotas, project_id=self.api.project_id,
            user_id=self.api.auth_user)
        # Verify that the non-admin user has a quota limit of 1 instance.
        quotas = self.api.get_quota_detail(user_id=self.api.auth_user)
        self.assertEqual(1, quotas['instances']['limit'])
        # Verify that the admin user has a quota limit of 10 instances.
        quotas = self.api.get_quota_detail(user_id=self.admin_api.auth_user)
        self.assertEqual(10, quotas['instances']['limit'])
        # Boot one instance into the default project as the admin user.
        # This results in usage of 1 instance for the project and 1 instance
        # for the admin user.
        self._create_server(
            image_uuid=fake_image.AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID,
            networks='none', api=self.admin_api)
        # Now try to boot an instance as the non-admin user.
        # This should succeed because the non-admin user has 0 instances and
        # the project limit allows 10 instances.
        server_req = self._build_server(
            image_uuid=fake_image.AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID,
            networks='none')
        server = self.api.post_server({'server': server_req})
        self._wait_for_state_change(server, 'ACTIVE')
        # A request to boot a second instance should fail because the
        # non-admin has already booted 1 allowed instance.
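        # Both limits are checked against counted usage: the project would
        # only go to 3 of its 10 instances, but this user would go to 2
        # against a per-user limit of 1, so the request is rejected with a
        # 403.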
ex = self.assertRaises( api_client.OpenStackApiException, self.api.post_server, {'server': server_req}) self.assertEqual(403, ex.response.status_code) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1894966.py0000664000175000017500000000272000000000000024521 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api import client from nova.tests.functional import integrated_helpers class TestCreateServerGroupWithEmptyPolicies( test.TestCase, integrated_helpers.InstanceHelperMixin, ): """Demonstrate bug #1894966. Attempt to create a server group with an invalid 'policies' field. It should fail cleanly. """ def setUp(self): super().setUp() api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api self.api.microversion = '2.63' # the last version with the bug def test_create_with_empty_policies(self): exc = self.assertRaises( client.OpenStackApiException, self.api.post_server_groups, {'name': 'test group', 'policies': []}) self.assertEqual(400, exc.response.status_code) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1895696.py0000664000175000017500000001337100000000000024526 0ustar00zuulzuul00000000000000# Copyright 2020, Red Hat, Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime from oslo_utils.fixture import uuidsentinel as uuids from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api import client from nova.tests.functional import integrated_helpers class TestNonBootableImageMeta(integrated_helpers._IntegratedTestBase): """Regression test for bug 1895696 This regression test asserts the behaviour of server creation requests when using an image with nonbootable properties either directly in the request or to create a volume that is then booted from. 
""" microversion = 'latest' def setUp(self): super().setUp() # Add an image to the Glance fixture with cinder_encryption_key set timestamp = datetime.datetime(2011, 1, 1, 1, 2, 3) cinder_encrypted_image = { 'id': uuids.cinder_encrypted_image_uuid, 'name': 'cinder_encryption_key_image', 'created_at': timestamp, 'updated_at': timestamp, 'deleted_at': None, 'deleted': False, 'status': 'active', 'is_public': False, 'container_format': 'ova', 'disk_format': 'vhd', 'size': '74185822', 'min_ram': 0, 'min_disk': 0, 'protected': False, 'visibility': 'public', 'tags': [], 'properties': { 'cinder_encryption_key_id': uuids.cinder_encryption_key_id, } } self.fake_image_service.create(None, cinder_encrypted_image) self.cinder = self.useFixture(nova_fixtures.CinderFixture(self)) # Mock out nova.volume.cinder.API.{create,get} so that when n-api # requests that c-api create a volume from the above image that the # response includes cinder_encryption_key_id in the # volume_image_metadata cinder_encrypted_volume = { 'status': 'available', 'display_name': 'cinder_encrypted_volume', 'attach_status': 'detached', 'id': uuids.cinder_encrypted_volume_uuid, 'multiattach': False, 'size': 1, 'encrypted': True, 'volume_image_metadata': { 'cinder_encryption_key_id': uuids.cinder_encryption_key_id } } def fake_cinder_create(self_api, context, size, name, description, snapshot=None, image_id=None, volume_type=None, metadata=None, availability_zone=None): if image_id == uuids.cinder_encrypted_image_uuid: return cinder_encrypted_volume self.stub_out( 'nova.volume.cinder.API.create', fake_cinder_create) def fake_cinder_get(self_api, context, volume_id, microversion=None): return cinder_encrypted_volume self.stub_out( 'nova.volume.cinder.API.get', fake_cinder_get) def test_nonbootable_metadata_image_metadata(self): """Assert behaviour when booting from an encrypted image """ server = self._build_server( name='test_nonbootable_metadata_bfv_image_metadata', image_uuid=uuids.cinder_encrypted_image_uuid, networks='none' ) # NOTE(lyarwood): This should always fail as Nova will attempt to boot # directly from this encrypted image. 
ex = self.assertRaises( client.OpenStackApiException, self.api.post_server, {'server': server}) self.assertEqual(400, ex.response.status_code) self.assertIn( "Direct booting of an image uploaded from an encrypted volume is " "unsupported", str(ex)) def test_nonbootable_metadata_bfv_image_metadata(self): """Assert behaviour when n-api creates volume using an encrypted image """ server = self._build_server( name='test_nonbootable_metadata_bfv_image_metadata', image_uuid='', networks='none' ) # TODO(lyarwood): Merge this into _build_server server['block_device_mapping_v2'] = [{ 'source_type': 'image', 'destination_type': 'volume', 'boot_index': 0, 'uuid': uuids.cinder_encrypted_image_uuid, 'volume_size': 1, }] # Assert that this request is accepted and the server moves to ACTIVE server = self.api.post_server({'server': server}) self._wait_for_state_change(server, 'ACTIVE') def test_nonbootable_metadata_bfv_volume_image_metadata(self): """Assert behaviour when c-api has created volume using encrypted image """ server = self._build_server( name='test_nonbootable_metadata_bfv_volume_image_metadata', image_uuid='', networks='none' ) # TODO(lyarwood): Merge this into _build_server server['block_device_mapping_v2'] = [{ 'source_type': 'volume', 'destination_type': 'volume', 'boot_index': 0, 'uuid': uuids.cinder_encrypted_volume_uuid, }] # Assert that this request is accepted and the server moves to ACTIVE server = self.api.post_server({'server': server}) self._wait_for_state_change(server, 'ACTIVE') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1896463.py0000664000175000017500000002167000000000000024520 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy import fixtures import time from oslo_config import cfg from nova import context from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as fake_image from nova import utils from nova.virt import fake CONF = cfg.CONF class TestEvacuateResourceTrackerRace( test.TestCase, integrated_helpers.InstanceHelperMixin, ): """Demonstrate bug #1896463. Trigger a race condition between an almost finished evacuation that is dropping the migration context, and the _update_available_resource() periodic task that already loaded the instance list but haven't loaded the migration list yet. The result is that the PCI allocation made by the evacuation is deleted by the overlapping periodic task run and the instance will not have PCI allocation after the evacuation. 
""" def setUp(self): super().setUp() self.neutron = self.useFixture(nova_fixtures.NeutronFixture(self)) fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) self.placement = self.useFixture(func_fixtures.PlacementFixture()).api self.api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.admin_api = self.api_fixture.admin_api self.admin_api.microversion = 'latest' self.api = self.admin_api self.start_service('conductor') self.start_service('scheduler') self.flags(compute_driver='fake.FakeDriverWithPciResources') self.useFixture( fake.FakeDriverWithPciResources. FakeDriverWithPciResourcesConfigFixture()) self.compute1 = self._start_compute('host1') self.compute1_id = self._get_compute_node_id_by_host('host1') self.compute1_service_id = self.admin_api.get_services( host='host1', binary='nova-compute')[0]['id'] self.compute2 = self._start_compute('host2') self.compute2_id = self._get_compute_node_id_by_host('host2') self.compute2_service_id = self.admin_api.get_services( host='host2', binary='nova-compute')[0]['id'] # add extra ports and the related network to the neutron fixture # specifically for these tests. It cannot be added globally in the # fixture init as it adds a second network that makes auto allocation # based test to fail due to ambiguous networks. self.neutron._ports[self.neutron.sriov_port['id']] = \ copy.deepcopy(self.neutron.sriov_port) self.neutron._networks[ self.neutron.network_2['id']] = self.neutron.network_2 self.neutron._subnets[ self.neutron.subnet_2['id']] = self.neutron.subnet_2 self.ctxt = context.get_admin_context() def _get_compute_node_id_by_host(self, host): # we specifically need the integer id of the node not the UUID so we # need to use the old microversion with utils.temporary_mutation(self.admin_api, microversion='2.52'): hypers = self.admin_api.api_get( 'os-hypervisors').body['hypervisors'] for hyper in hypers: if hyper['hypervisor_hostname'] == host: return hyper['id'] self.fail('Hypervisor with hostname=%s not found' % host) def _assert_pci_device_allocated( self, instance_uuid, compute_node_id, num=1): """Assert that a given number of PCI devices are allocated to the instance on the given host. """ devices = objects.PciDeviceList.get_by_instance_uuid( self.ctxt, instance_uuid) devices_on_host = [dev for dev in devices if dev.compute_node_id == compute_node_id] self.assertEqual(num, len(devices_on_host)) def test_evacuate_races_with_update_available_resource(self): # Create a server with a direct port to have PCI allocation server = self._create_server( name='test-server-for-bug-1896463', networks=[{'port': self.neutron.sriov_port['id']}], host='host1' ) self._assert_pci_device_allocated(server['id'], self.compute1_id) self._assert_pci_device_allocated( server['id'], self.compute2_id, num=0) # stop and force down the compute the instance is on to allow # evacuation self.compute1.stop() self.admin_api.put_service( self.compute1_service_id, {'forced_down': 'true'}) # Inject some sleeps both in the Instance.drop_migration_context and # the MigrationList.get_in_progress_by_host_and_node code to make them # overlap. # We want to create the following execution scenario: # 1) The evacuation makes a move claim on the dest including the PCI # claim. This means there is a migration context. But the evacuation # is not complete yet so the instance.host does not point to the # dest host. 
# 2) The dest resource tracker starts an _update_available_resource() # periodic task and this task loads the list of instances on its # host from the DB. Our instance is not in this list due to #1. # 3) The evacuation finishes, the instance.host is set to the dest host # and the migration context is deleted. # 4) The periodic task now loads the list of in-progress migration from # the DB to check for incoming our outgoing migrations. However due # to #3 our instance is not in this list either. # 5) The periodic task cleans up every lingering PCI claim that is not # connected to any instance collected above from the instance list # and from the migration list. As our instance is not in either of # the lists, the resource tracker cleans up the PCI allocation for # the already finished evacuation of our instance. # # Unfortunately we cannot reproduce the above situation without sleeps. # We need that the evac starts first then the periodic starts, but not # finishes, then evac finishes, then periodic finishes. If I trigger # and run the whole periodic in a wrapper of drop_migration_context # then I could not reproduce the situation described at #4). In general # it is not # # evac # | # | # | periodic # | | # | | # | x # | # | # x # # but # # evac # | # | # | periodic # | | # | | # | | # x | # | # x # # what is needed need. # # Starting the periodic from the test in a separate thread at # drop_migration_context() might work but that is an extra complexity # in the test code. Also it might need a sleep still to make the # reproduction stable but only one sleep instead of two. orig_drop = objects.Instance.drop_migration_context def slow_drop(*args, **kwargs): time.sleep(1) return orig_drop(*args, **kwargs) self.useFixture( fixtures.MockPatch( 'nova.objects.instance.Instance.drop_migration_context', new=slow_drop)) orig_get_mig = objects.MigrationList.get_in_progress_by_host_and_node def slow_get_mig(*args, **kwargs): time.sleep(2) return orig_get_mig(*args, **kwargs) self.useFixture( fixtures.MockPatch( 'nova.objects.migration.MigrationList.' 'get_in_progress_by_host_and_node', new=slow_get_mig)) self.admin_api.post_server_action(server['id'], {'evacuate': {}}) # we trigger the _update_available_resource periodic to overlap with # the already started evacuation self._run_periodics() self._wait_for_server_parameter( server, {'OS-EXT-SRV-ATTR:host': 'host2', 'status': 'ACTIVE'}) self._assert_pci_device_allocated(server['id'], self.compute1_id) self._assert_pci_device_allocated(server['id'], self.compute2_id) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/regressions/test_bug_1914777.py0000664000175000017500000001637400000000000024524 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock from nova import context as nova_context from nova import exception from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import integrated_helpers import nova.tests.unit.image.fake from nova.tests.unit import policy_fixture class TestDeleteWhileBooting(test.TestCase, integrated_helpers.InstanceHelperMixin): """This tests race scenarios where an instance is deleted while booting. In these scenarios, the nova-api service is racing with nova-conductor service; nova-conductor is in the middle of booting the instance when nova-api begins fulfillment of a delete request. As the two services delete records out from under each other, both services need to handle it properly such that a delete request will always be fulfilled. Another scenario where two requests can race and delete things out from under each other is if two or more delete requests are racing while the instance is booting. In order to force things into states where bugs have occurred, we must mock some object retrievals from the database to simulate the different points at which a delete request races with a create request or another delete request. We aim to mock only the bare minimum necessary to recreate the bug scenarios. """ def setUp(self): super(TestDeleteWhileBooting, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) nova.tests.unit.image.fake.stub_out_image_service(self) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api self.ctxt = nova_context.get_context() # We intentionally do not start a conductor or scheduler service, since # our goal is to simulate an instance that has not been scheduled yet. # Kick off a server create request and move on once it's in the BUILD # state. Since we have no conductor or scheduler service running, the # server will "hang" in an unscheduled state for testing. self.server = self._create_server(expected_state='BUILD') # Simulate that a different request has deleted the build request # record after this delete request has begun processing. (The first # lookup of the build request occurs in the servers API to get the # instance object in order to delete it). # We need to get the build request now before we mock the method. self.br = objects.BuildRequest.get_by_instance_uuid( self.ctxt, self.server['id']) @mock.patch('nova.objects.build_request.BuildRequest.get_by_instance_uuid') def test_build_request_and_instance_not_found(self, mock_get_br): """This tests a scenario where another request has deleted the build request record and the instance record ahead of us. """ # The first lookup at the beginning of the delete request in the # ServersController succeeds and the second lookup to handle "delete # while booting" in compute/api fails after a different request has # deleted it. br_not_found = exception.BuildRequestNotFound(uuid=self.server['id']) mock_get_br.side_effect = [self.br, br_not_found, br_not_found] self._delete_server(self.server) @mock.patch('nova.objects.build_request.BuildRequest.get_by_instance_uuid') @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid') @mock.patch('nova.objects.instance.Instance.get_by_uuid') def test_deleting_instance_at_the_same_time(self, mock_get_i, mock_get_im, mock_get_br): """This tests the scenario where another request is trying to delete the instance record at the same time we are, while the instance is booting. 
An example of this: while the create and delete are running at the same time, the delete request deletes the build request, the create request finds the build request already deleted when it tries to delete it. The create request deletes the instance record and then delete request tries to lookup the instance after it deletes the build request. Its attempt to lookup the instance fails because the create request already deleted it. """ # First lookup at the beginning of the delete request in the # ServersController succeeds, second lookup to handle "delete while # booting" in compute/api fails after the conductor has deleted it. br_not_found = exception.BuildRequestNotFound(uuid=self.server['id']) mock_get_br.side_effect = [self.br, br_not_found] # Simulate the instance transitioning from having no cell assigned to # having a cell assigned while the delete request is being processed. # First lookup of the instance mapping has the instance unmapped (no # cell) and subsequent lookups have the instance mapped to cell1. no_cell_im = objects.InstanceMapping( context=self.ctxt, instance_uuid=self.server['id'], cell_mapping=None) has_cell_im = objects.InstanceMapping( context=self.ctxt, instance_uuid=self.server['id'], cell_mapping=self.cell_mappings['cell1']) mock_get_im.side_effect = [ no_cell_im, has_cell_im, has_cell_im, has_cell_im, has_cell_im] # Simulate that the instance object has been created by the conductor # in the create path while the delete request is being processed. # First lookups are before the instance has been deleted and the last # lookup is after the conductor has deleted the instance. Use the build # request to make an instance object for testing. i = self.br.get_new_instance(self.ctxt) i_not_found = exception.InstanceNotFound(instance_id=self.server['id']) mock_get_i.side_effect = [i, i, i, i_not_found, i_not_found] # Simulate that the conductor is running instance_destroy at the same # time as we are. def fake_instance_destroy(*args, **kwargs): # NOTE(melwitt): This is a misleading exception, as it is not only # raised when a constraint on 'host' is not met, but also when two # instance_destroy calls are racing. In this test, the soft delete # returns 0 rows affected because another request soft deleted the # record first. raise exception.ObjectActionError( action='destroy', reason='host changed') self.stub_out( 'nova.objects.instance.Instance.destroy', fake_instance_destroy) self._delete_server(self.server) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_aggregates.py0000664000175000017500000012525000000000000022564 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
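# The delete-while-booting tests above rely on mock's side_effect sequencing:
# successive calls to a patched lookup return earlier values first and raise
# "not found" afterwards, emulating another request deleting the record
# between the two lookups. The standalone sketch below shows the same pattern
# with the standard library mock; RecordGone, lookup and the test class are
# illustrative names only, not Nova code.
import unittest
from unittest import mock


class RecordGone(Exception):
    """Stand-in for lookup errors such as BuildRequestNotFound."""


def lookup(record_id):
    """Placeholder for a DB lookup that a concurrent delete can race with."""
    return {'id': record_id}


class SideEffectSequencingExample(unittest.TestCase):
    def test_second_lookup_loses_the_race(self):
        target = __name__ + '.lookup'
        with mock.patch(target,
                        side_effect=[{'id': 'x'}, RecordGone()]) as mocked:
            # The first lookup happens before the concurrent delete and
            # returns the record.
            self.assertEqual({'id': 'x'}, lookup('x'))
            # The second lookup happens after the delete and raises.
            self.assertRaises(RecordGone, lookup, 'x')
            self.assertEqual(2, mocked.call_count)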
from oslo_utils.fixture import uuidsentinel as uuids import nova.conf from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api import client from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers import nova.tests.unit.image.fake from nova.tests.unit import policy_fixture from nova.tests.unit import utils as test_utils from nova import utils CONF = nova.conf.CONF class AggregatesTest(integrated_helpers._IntegratedTestBase): api_major_version = 'v2' ADMIN_API = True def _add_hosts_to_aggregate(self): """List all compute services and add them all to an aggregate.""" compute_services = [s for s in self.api.get_services() if s['binary'] == 'nova-compute'] agg = {'aggregate': {'name': 'test-aggregate'}} agg = self.api.post_aggregate(agg) for service in compute_services: self.api.add_host_to_aggregate(agg['id'], service['host']) self._test_aggregate = agg return len(compute_services) def test_add_hosts(self): # Default case with one compute, mapped for us self.assertEqual(1, self._add_hosts_to_aggregate()) def test_add_unmapped_host(self): """Ensure that hosts without mappings are still found and added""" # Add another compute, but nuke its HostMapping self.start_service('compute', host='compute2') self.host_mappings['compute2'].destroy() self.assertEqual(2, self._add_hosts_to_aggregate()) class AggregatesV281Test(AggregatesTest): api_major_version = 'v2.1' microversion = '2.81' def setUp(self): self.flags(compute_driver='fake.FakeDriverWithCaching') super(AggregatesV281Test, self).setUp() def test_cache_images_on_aggregate(self): self._add_hosts_to_aggregate() agg = self._test_aggregate img = '155d900f-4e14-4e4c-a73d-069cbf4541e6' self.assertEqual(set(), self.compute.driver.cached_images) body = {'cache': [ {'id': img}, ]} self.api.api_post('/os-aggregates/%s/images' % agg['id'], body, check_response_status=[202]) self.assertEqual(set([img]), self.compute.driver.cached_images) def test_cache_images_on_aggregate_missing_image(self): agg = {'aggregate': {'name': 'test-aggregate'}} agg = self.api.post_aggregate(agg) # NOTE(danms): This image-id does not exist img = '155d900f-4e14-4e4c-a73d-069cbf4541e0' body = {'cache': [ {'id': img}, ]} self.api.api_post('/os-aggregates/%s/images' % agg['id'], body, check_response_status=[400]) def test_cache_images_on_missing_aggregate(self): img = '155d900f-4e14-4e4c-a73d-069cbf4541e6' body = {'cache': [ {'id': img}, ]} self.api.api_post('/os-aggregates/123/images', body, check_response_status=[404]) def test_cache_images_with_duplicates(self): agg = {'aggregate': {'name': 'test-aggregate'}} agg = self.api.post_aggregate(agg) img = '155d900f-4e14-4e4c-a73d-069cbf4541e6' body = {'cache': [ {'id': img}, {'id': img}, ]} self.api.api_post('/os-aggregates/%i/images' % agg['id'], body, check_response_status=[400]) def test_cache_images_with_no_images(self): agg = {'aggregate': {'name': 'test-aggregate'}} agg = self.api.post_aggregate(agg) body = {'cache': []} self.api.api_post('/os-aggregates/%i/images' % agg['id'], body, check_response_status=[400]) def test_cache_images_with_additional_in_image(self): agg = {'aggregate': {'name': 'test-aggregate'}} agg = self.api.post_aggregate(agg) img = '155d900f-4e14-4e4c-a73d-069cbf4541e6' body = {'cache': [ {'id': img, 'power': '1.21 gigawatts'}, ]} self.api.api_post('/os-aggregates/%i/images' % agg['id'], body, check_response_status=[400]) def test_cache_images_with_missing_image_id(self): agg = {'aggregate': {'name': 
'test-aggregate'}} agg = self.api.post_aggregate(agg) body = {'cache': [ {'power': '1.21 gigawatts'}, ]} self.api.api_post('/os-aggregates/%i/images' % agg['id'], body, check_response_status=[400]) def test_cache_images_with_missing_cache(self): agg = {'aggregate': {'name': 'test-aggregate'}} agg = self.api.post_aggregate(agg) body = {} self.api.api_post('/os-aggregates/%i/images' % agg['id'], body, check_response_status=[400]) def test_cache_images_with_additional_in_cache(self): agg = {'aggregate': {'name': 'test-aggregate'}} agg = self.api.post_aggregate(agg) img = '155d900f-4e14-4e4c-a73d-069cbf4541e6' body = {'cache': [{'id': img}], 'power': '1.21 gigawatts', } self.api.api_post('/os-aggregates/%i/images' % agg['id'], body, check_response_status=[400]) class AggregateRequestFiltersTest( integrated_helpers.ProviderUsageBaseTestCase): microversion = 'latest' compute_driver = 'fake.MediumFakeDriver' def setUp(self): super(AggregateRequestFiltersTest, self).setUp() self.aggregates = {} self._start_compute('host1') self._start_compute('host2') self.flavors = self.api.get_flavors() # Aggregate with only host1 self._create_aggregate('only-host1') self._add_host_to_aggregate('only-host1', 'host1') # Aggregate with only host2 self._create_aggregate('only-host2') self._add_host_to_aggregate('only-host2', 'host2') # Aggregate with neither host self._create_aggregate('no-hosts') def _create_aggregate(self, name): agg = self.admin_api.post_aggregate({'aggregate': {'name': name}}) self.aggregates[name] = agg def _get_provider_uuid_by_host(self, host): """Return the compute node uuid for a named compute host.""" # NOTE(gibi): the compute node id is the same as the compute node # provider uuid on that compute resp = self.admin_api.api_get( 'os-hypervisors?hypervisor_hostname_pattern=%s' % host).body return resp['hypervisors'][0]['id'] def _add_host_to_aggregate(self, agg, host): """Add a compute host to both nova and placement aggregates. :param agg: Name of the nova aggregate :param host: Name of the compute host """ agg = self.aggregates[agg] self.admin_api.add_host_to_aggregate(agg['id'], host) def _boot_server(self, az=None, flavor_id=None, image_id=None, end_status='ACTIVE'): flavor_id = flavor_id or self.flavors[0]['id'] image_uuid = image_id or '155d900f-4e14-4e4c-a73d-069cbf4541e6' server_req = self._build_server( image_uuid=image_uuid, flavor_id=flavor_id, networks='none', az=az) created_server = self.api.post_server({'server': server_req}) server = self._wait_for_state_change(created_server, end_status) return server def _get_instance_host(self, server): srv = self.admin_api.get_server(server['id']) return srv['OS-EXT-SRV-ATTR:host'] def _set_az_aggregate(self, agg, az): """Set the availability_zone of an aggregate :param agg: Name of the nova aggregate :param az: Availability zone name """ agg = self.aggregates[agg] action = { 'set_metadata': { 'metadata': { 'availability_zone': az, } }, } self.admin_api.post_aggregate_action(agg['id'], action) def _set_metadata(self, agg, metadata): """POST /os-aggregates/{aggregate_id}/action (set_metadata) :param agg: Name of the nova aggregate :param metadata: dict of aggregate metadata key/value pairs to add, update, or remove if value=None (note "availability_zone" cannot be nulled out once set). """ agg = self.aggregates[agg] action = { 'set_metadata': { 'metadata': metadata }, } self.admin_api.post_aggregate_action(agg['id'], action) def _grant_tenant_aggregate(self, agg, tenants): """Grant a set of tenants access to use an aggregate. 
:param agg: Name of the nova aggregate :param tenants: A list of all tenant ids that will be allowed access """ agg = self.aggregates[agg] action = { 'set_metadata': { 'metadata': { 'filter_tenant_id%i' % i: tenant for i, tenant in enumerate(tenants) } }, } self.admin_api.post_aggregate_action(agg['id'], action) def _set_traits_on_aggregate(self, agg, traits): """Set traits to aggregate. :param agg: Name of the nova aggregate :param traits: List of traits to be assigned to the aggregate """ action = { 'set_metadata': { 'metadata': { 'trait:' + trait: 'required' for trait in traits } } } self.admin_api.post_aggregate_action( self.aggregates[agg]['id'], action) class AggregatePostTest(AggregateRequestFiltersTest): def test_set_az_for_aggreate_no_instances(self): """Should be possible to update AZ for an empty aggregate. Check you can change the AZ name of an aggregate when it does not contain any servers. """ self._set_az_aggregate('only-host1', 'fake-az') def test_rename_to_same_az(self): """AZ rename should pass successfully if AZ name is not changed""" az = 'fake-az' self._set_az_aggregate('only-host1', az) self._boot_server(az=az) self._set_az_aggregate('only-host1', az) def test_fail_set_az(self): """Check it is not possible to update a non-empty aggregate. Check you cannot change the AZ name of an aggregate when it contains any servers. """ az = 'fake-az' self._set_az_aggregate('only-host1', az) server = self._boot_server(az=az) self.assertRaisesRegex( client.OpenStackApiException, 'One or more hosts contain instances in this zone.', self._set_az_aggregate, 'only-host1', 'new' + az) # Configure for the SOFT_DELETED scenario. self.flags(reclaim_instance_interval=300) self.api.delete_server(server['id']) server = self._wait_for_state_change(server, 'SOFT_DELETED') self.assertRaisesRegex( client.OpenStackApiException, 'One or more hosts contain instances in this zone.', self._set_az_aggregate, 'only-host1', 'new' + az) # Force delete the SOFT_DELETED server. self.api.api_post( '/servers/%s/action' % server['id'], {'forceDelete': None}) # Wait for it to be deleted since forceDelete is asynchronous. self._wait_until_deleted(server) # Now we can rename the AZ since the server is gone. self._set_az_aggregate('only-host1', 'new' + az) def test_cannot_delete_az(self): az = 'fake-az' # Assign the AZ to the aggregate. self._set_az_aggregate('only-host1', az) # Set some metadata on the aggregate; note the "availability_zone" # metadata key is not specified. self._set_metadata('only-host1', {'foo': 'bar'}) # Verify the AZ was retained. 
agg = self.admin_api.api_get( '/os-aggregates/%s' % self.aggregates['only-host1']['id']).body['aggregate'] self.assertEqual(az, agg['availability_zone']) # NOTE: this test case has the same test methods as AggregatePostTest # but for the AZ update it uses PUT /os-aggregates/{aggregate_id} method class AggregatePutTest(AggregatePostTest): def _set_az_aggregate(self, agg, az): """Set the availability_zone of an aggregate via PUT :param agg: Name of the nova aggregate :param az: Availability zone name """ agg = self.aggregates[agg] body = { 'aggregate': { 'availability_zone': az, }, } self.admin_api.put_aggregate(agg['id'], body) class TenantAggregateFilterTest(AggregateRequestFiltersTest): def setUp(self): super(TenantAggregateFilterTest, self).setUp() # Default to enabling the filter and making it mandatory self.flags(limit_tenants_to_placement_aggregate=True, group='scheduler') self.flags(placement_aggregate_required_for_tenants=True, group='scheduler') def test_tenant_id_required_fails_if_no_aggregate(self): # Without granting our tenant permission to an aggregate, instance # creates should fail since aggregates are required self._boot_server(end_status='ERROR') def test_tenant_id_not_required_succeeds_if_no_aggregate(self): self.flags(placement_aggregate_required_for_tenants=False, group='scheduler') # Without granting our tenant permission to an aggregate, instance # creates should still succeed since aggregates are not required self._boot_server(end_status='ACTIVE') def test_filter_honors_tenant_id(self): tenant = self.api.project_id # Grant our tenant access to the aggregate with only host1 in it # and boot some servers. They should all stack up on host1. self._grant_tenant_aggregate('only-host1', ['foo', tenant, 'bar']) server1 = self._boot_server(end_status='ACTIVE') server2 = self._boot_server(end_status='ACTIVE') # Grant our tenant access to the aggregate with only host2 in it # and boot some servers. They should all stack up on host2. self._grant_tenant_aggregate('only-host1', ['foo', 'bar']) self._grant_tenant_aggregate('only-host2', ['foo', tenant, 'bar']) server3 = self._boot_server(end_status='ACTIVE') server4 = self._boot_server(end_status='ACTIVE') # Make sure the servers landed on the hosts we had access to at # the time we booted them. hosts = [self._get_instance_host(s) for s in (server1, server2, server3, server4)] expected_hosts = ['host1', 'host1', 'host2', 'host2'] self.assertEqual(expected_hosts, hosts) def test_filter_with_empty_aggregate(self): tenant = self.api.project_id # Grant our tenant access to the aggregate with no hosts in it self._grant_tenant_aggregate('no-hosts', ['foo', tenant, 'bar']) self._boot_server(end_status='ERROR') def test_filter_with_multiple_aggregates_for_tenant(self): tenant = self.api.project_id # Grant our tenant access to the aggregate with no hosts in it, # and one with a host. self._grant_tenant_aggregate('no-hosts', ['foo', tenant, 'bar']) self._grant_tenant_aggregate('only-host2', ['foo', tenant, 'bar']) # Boot several servers and make sure they all land on the # only host we have access to. for i in range(0, 4): server = self._boot_server(end_status='ACTIVE') self.assertEqual('host2', self._get_instance_host(server)) class AvailabilityZoneFilterTest(AggregateRequestFiltersTest): def setUp(self): # Default to enabling the filter self.flags(query_placement_for_availability_zone=True, group='scheduler') # Use custom weigher to make sure that we have a predictable # scheduling sort order. 
self.useFixture(nova_fixtures.HostNameWeigherFixture()) # NOTE(danms): Do this before calling setUp() so that # the scheduler service that is started sees the new value filters = CONF.filter_scheduler.enabled_filters filters.remove('AvailabilityZoneFilter') self.flags(enabled_filters=filters, group='filter_scheduler') super(AvailabilityZoneFilterTest, self).setUp() def test_filter_with_az(self): self._set_az_aggregate('only-host2', 'myaz') server1 = self._boot_server(az='myaz') server2 = self._boot_server(az='myaz') hosts = [self._get_instance_host(s) for s in (server1, server2)] self.assertEqual(['host2', 'host2'], hosts) class IsolateAggregateFilterTest(AggregateRequestFiltersTest): def setUp(self): # Default to enabling the filter self.flags(enable_isolated_aggregate_filtering=True, group='scheduler') # Use a custom weigher that would prefer host1 if the isolate # aggregate filter were not in place otherwise it's not deterministic # whether we're landing on host2 because of the filter or just by # chance. self.useFixture(nova_fixtures.HostNameWeigherFixture()) super(IsolateAggregateFilterTest, self).setUp() self.image_service = nova.tests.unit.image.fake.FakeImageService() # setting traits to flavors flavor_body = {'flavor': {'name': 'test_flavor', 'ram': 512, 'vcpus': 1, 'disk': 1 }} self.flavor_with_trait_dxva = self.api.post_flavor(flavor_body) self.admin_api.post_extra_spec( self.flavor_with_trait_dxva['id'], {'extra_specs': {'trait:HW_GPU_API_DXVA': 'required'}}) flavor_body['flavor']['name'] = 'test_flavor1' self.flavor_with_trait_sgx = self.api.post_flavor(flavor_body) self.admin_api.post_extra_spec( self.flavor_with_trait_sgx['id'], {'extra_specs': {'trait:HW_CPU_X86_SGX': 'required'}}) self.flavor_without_trait = self.flavors[0] with nova.utils.temporary_mutation(self.api, microversion='2.35'): images = self.api.get_images() self.image_id_without_trait = images[0]['id'] def test_filter_with_no_valid_host(self): """Test 'isolate_aggregates' filter with no valid hosts. No required traits set in image/flavor, so all aggregates with required traits set should be ignored. """ rp_uuid1 = self._get_provider_uuid_by_host('host1') self._set_provider_traits( rp_uuid1, ['HW_CPU_X86_VMX', 'HW_CPU_X86_SGX']) self._set_traits_on_aggregate( 'only-host1', ['HW_CPU_X86_VMX', 'HW_CPU_X86_SGX']) rp_uuid2 = self._get_provider_uuid_by_host('host2') self._set_provider_traits( rp_uuid2, ['HW_CPU_X86_VMX', 'HW_CPU_X86_SGX']) self._set_traits_on_aggregate( 'only-host2', ['HW_CPU_X86_VMX', 'HW_CPU_X86_SGX']) server = self._boot_server( flavor_id=self.flavor_without_trait['id'], image_id=self.image_id_without_trait, end_status='ERROR') self.assertIsNone(self._get_instance_host(server)) self.assertIn('No valid host', server['fault']['message']) def test_filter_without_trait(self): """Test 'isolate_aggregates' filter with valid hosts. No required traits set in image/flavor so instance should be booted on host from an aggregate with no required traits set. """ rp_uuid1 = self._get_provider_uuid_by_host('host1') self._set_provider_traits( rp_uuid1, ['HW_CPU_X86_VMX', 'HW_CPU_X86_SGX']) self._set_traits_on_aggregate( 'only-host1', ['HW_CPU_X86_VMX', 'HW_CPU_X86_SGX']) server = self._boot_server( flavor_id=self.flavor_without_trait['id'], image_id=self.image_id_without_trait) self.assertEqual('host2', self._get_instance_host(server)) def test_filter_with_trait_on_flavor(self): """Test filter with matching required traits set only in one aggregate. 
Required trait (HW_GPU_API_DXVA) set in flavor so instance should be booted on host with matching required traits set on aggregates. """ rp_uuid2 = self._get_provider_uuid_by_host('host2') self._set_provider_traits(rp_uuid2, ['HW_GPU_API_DXVA']) rp_uuid1 = self._get_provider_uuid_by_host('host1') self._set_provider_traits( rp_uuid1, ['HW_CPU_X86_VMX', 'HW_CPU_X86_SGX']) self._set_traits_on_aggregate('only-host2', ['HW_GPU_API_DXVA']) self._set_traits_on_aggregate( 'only-host1', ['HW_CPU_X86_VMX', 'HW_CPU_X86_SGX']) server = self._boot_server( flavor_id=self.flavor_with_trait_dxva['id'], image_id=self.image_id_without_trait) self.assertEqual('host2', self._get_instance_host(server)) def test_filter_with_common_trait_on_aggregates(self): """Test filter with common required traits set to aggregates. Required trait (HW_CPU_X86_SGX) set in flavor so instance should be booted on host with exact matching required traits set on aggregates. """ rp_uuid2 = self._get_provider_uuid_by_host('host2') self._set_provider_traits(rp_uuid2, ['HW_CPU_X86_SGX']) rp_uuid1 = self._get_provider_uuid_by_host('host1') self._set_provider_traits( rp_uuid1, ['HW_CPU_X86_VMX', 'HW_CPU_X86_SGX']) self._set_traits_on_aggregate('only-host2', ['HW_CPU_X86_SGX']) self._set_traits_on_aggregate( 'only-host1', ['HW_CPU_X86_VMX', 'HW_CPU_X86_SGX']) server = self._boot_server( flavor_id=self.flavor_with_trait_sgx['id'], image_id=self.image_id_without_trait) self.assertEqual('host2', self._get_instance_host(server)) def test_filter_with_traits_on_image_and_flavor(self): """Test filter with common traits set to image/flavor and aggregates. Required trait (HW_CPU_X86_SGX) set in flavor and required trait (HW_CPU_X86_VMX) set in image, so instance should be booted on host with exact matching required traits set on aggregates. """ rp_uuid2 = self._get_provider_uuid_by_host('host2') self._set_provider_traits( rp_uuid2, ['HW_CPU_X86_VMX', 'HW_CPU_X86_SGX']) rp_uuid1 = self._get_provider_uuid_by_host('host1') self._set_provider_traits(rp_uuid1, ['HW_GPU_API_DXVA']) self._set_traits_on_aggregate('only-host1', ['HW_GPU_API_DXVA']) self._set_traits_on_aggregate( 'only-host2', ['HW_CPU_X86_VMX', 'HW_CPU_X86_SGX']) # Creating a new image and setting traits on it. with nova.utils.temporary_mutation(self.api, microversion='2.35'): self.ctxt = test_utils.get_test_admin_context() img_ref = self.image_service.create(self.ctxt, {'name': 'image10'}) image_id_with_trait = img_ref['id'] self.addCleanup( self.image_service.delete, self.ctxt, image_id_with_trait) self.api.api_put('/images/%s/metadata' % image_id_with_trait, {'metadata': { 'trait:HW_CPU_X86_VMX': 'required'}}) server = self._boot_server( flavor_id=self.flavor_with_trait_sgx['id'], image_id=image_id_with_trait) self.assertEqual('host2', self._get_instance_host(server)) def test_filter_with_traits_image_flavor_subset_of_aggregates(self): """Test filter with image/flavor required traits subset of aggregates. Image and flavor has a nonempty set of required traits that's subset set of the traits on the aggregates. """ rp_uuid2 = self._get_provider_uuid_by_host('host2') self._set_provider_traits( rp_uuid2, ['HW_CPU_X86_VMX', 'HW_GPU_API_DXVA', 'HW_CPU_X86_SGX']) self._set_traits_on_aggregate( 'only-host2', ['HW_CPU_X86_VMX', 'HW_GPU_API_DXVA', 'HW_CPU_X86_SGX']) # Creating a new image and setting traits on it. 
with nova.utils.temporary_mutation(self.api, microversion='2.35'): self.ctxt = test_utils.get_test_admin_context() img_ref = self.image_service.create(self.ctxt, {'name': 'image10'}) image_id_with_trait = img_ref['id'] self.addCleanup( self.image_service.delete, self.ctxt, image_id_with_trait) self.api.api_put('/images/%s/metadata' % image_id_with_trait, {'metadata': { 'trait:HW_CPU_X86_VMX': 'required'}}) server = self._boot_server( flavor_id=self.flavor_with_trait_sgx['id'], image_id=image_id_with_trait, end_status='ERROR') self.assertIsNone(self._get_instance_host(server)) self.assertIn('No valid host', server['fault']['message']) def test_filter_with_traits_image_flavor_disjoint_of_aggregates(self): """Test filter with image/flav required traits disjoint of aggregates. Image and flavor has a nonempty set of required traits that's disjoint set of the traits on the aggregates. """ rp_uuid2 = self._get_provider_uuid_by_host('host2') self._set_provider_traits(rp_uuid2, ['HW_CPU_X86_VMX']) rp_uuid1 = self._get_provider_uuid_by_host('host1') self._set_provider_traits(rp_uuid1, ['HW_GPU_API_DXVA']) self._set_traits_on_aggregate('only-host1', ['HW_GPU_API_DXVA']) self._set_traits_on_aggregate('only-host2', ['HW_CPU_X86_VMX']) # Creating a new image and setting traits on it. with nova.utils.temporary_mutation(self.api, microversion='2.35'): self.ctxt = test_utils.get_test_admin_context() img_ref = self.image_service.create(self.ctxt, {'name': 'image10'}) image_id_with_trait = img_ref['id'] self.addCleanup( self.image_service.delete, self.ctxt, image_id_with_trait) self.api.api_put('/images/%s/metadata' % image_id_with_trait, {'metadata': { 'trait:HW_CPU_X86_VMX': 'required'}}) server = self._boot_server( flavor_id=self.flavor_with_trait_sgx['id'], image_id=image_id_with_trait, end_status='ERROR') self.assertIsNone(self._get_instance_host(server)) self.assertIn('No valid host', server['fault']['message']) class IsolateAggregateFilterTestWithConcernFilters(IsolateAggregateFilterTest): def setUp(self): filters = CONF.filter_scheduler.enabled_filters # NOTE(shilpasd): To test `isolate_aggregates` request filter, along # with following filters which also filters hosts based on aggregate # metadata. if 'AggregateImagePropertiesIsolation' not in filters: filters.append('AggregateImagePropertiesIsolation') if 'AggregateInstanceExtraSpecsFilter' not in filters: filters.append('AggregateInstanceExtraSpecsFilter') self.flags(enabled_filters=filters, group='filter_scheduler') super(IsolateAggregateFilterTestWithConcernFilters, self).setUp() class IsolateAggregateFilterTestWOConcernFilters(IsolateAggregateFilterTest): def setUp(self): filters = CONF.filter_scheduler.enabled_filters # NOTE(shilpasd): To test `isolate_aggregates` request filter, removed # following filters which also filters hosts based on aggregate # metadata. if 'AggregateImagePropertiesIsolation' in filters: filters.remove('AggregateImagePropertiesIsolation') if 'AggregateInstanceExtraSpecsFilter' in filters: filters.remove('AggregateInstanceExtraSpecsFilter') self.flags(enabled_filters=filters, group='filter_scheduler') super(IsolateAggregateFilterTestWOConcernFilters, self).setUp() class TestAggregateFiltersTogether(AggregateRequestFiltersTest): def setUp(self): # Use a custom weigher that would prefer host1 if the forbidden # aggregate filter were not in place otherwise it's not deterministic # whether we're landing on host2 because of the filter or just by # chance. 
self.useFixture(nova_fixtures.HostNameWeigherFixture()) # NOTE(danms): Do this before calling setUp() so that # the scheduler service that is started sees the new value filters = CONF.filter_scheduler.enabled_filters filters.remove('AvailabilityZoneFilter') # NOTE(shilpasd): To test `isolate_aggregates` request filter, removed # following filters which also filters hosts based on aggregate # metadata. if 'AggregateImagePropertiesIsolation' in filters: filters.remove('AggregateImagePropertiesIsolation') if 'AggregateInstanceExtraSpecsFilter' in filters: filters.remove('AggregateInstanceExtraSpecsFilter') self.flags(enabled_filters=filters, group='filter_scheduler') super(TestAggregateFiltersTogether, self).setUp() # Default to enabling all filters self.flags(limit_tenants_to_placement_aggregate=True, group='scheduler') self.flags(placement_aggregate_required_for_tenants=True, group='scheduler') self.flags(query_placement_for_availability_zone=True, group='scheduler') self.flags(enable_isolated_aggregate_filtering=True, group='scheduler') # setting traits to flavors flavor_body = {'flavor': {'name': 'test_flavor', 'ram': 512, 'vcpus': 1, 'disk': 1 }} self.flavor_with_trait_dxva = self.api.post_flavor(flavor_body) self.admin_api.post_extra_spec( self.flavor_with_trait_dxva['id'], {'extra_specs': {'trait:HW_GPU_API_DXVA': 'required'}}) def test_tenant_with_az_match(self): # Grant our tenant access to the aggregate with # host1 self._grant_tenant_aggregate('only-host1', [self.api.project_id]) # Set an az on only-host1 self._set_az_aggregate('only-host1', 'myaz') # Boot the server into that az and make sure we land server = self._boot_server(az='myaz') self.assertEqual('host1', self._get_instance_host(server)) def test_tenant_with_az_mismatch(self): # Grant our tenant access to the aggregate with # host1 self._grant_tenant_aggregate('only-host1', [self.api.project_id]) # Set an az on only-host2 self._set_az_aggregate('only-host2', 'myaz') # Boot the server into that az and make sure we fail server = self._boot_server(az='myaz', end_status='ERROR') self.assertIsNone(self._get_instance_host(server)) def test_tenant_with_az_and_traits_match(self): # Grant our tenant access to the aggregate with host2 self._grant_tenant_aggregate('only-host2', [self.api.project_id]) # Set an az on only-host2 self._set_az_aggregate('only-host2', 'myaz') # Set trait on host2 rp_uuid2 = self._get_provider_uuid_by_host('host2') self._set_provider_traits(rp_uuid2, ['HW_GPU_API_DXVA']) # Set trait on aggregate only-host2 self._set_traits_on_aggregate('only-host2', ['HW_GPU_API_DXVA']) # Boot the server into that az and make sure we land server = self._boot_server( flavor_id=self.flavor_with_trait_dxva['id'], az='myaz') self.assertEqual('host2', self._get_instance_host(server)) def test_tenant_with_az_and_traits_mismatch(self): # Grant our tenant access to the aggregate with host2 self._grant_tenant_aggregate('only-host2', [self.api.project_id]) # Set an az on only-host1 self._set_az_aggregate('only-host2', 'myaz') # Set trait on host2 rp_uuid2 = self._get_provider_uuid_by_host('host2') self._set_provider_traits(rp_uuid2, ['HW_CPU_X86_VMX']) # Set trait on aggregate only-host2 self._set_traits_on_aggregate('only-host2', ['HW_CPU_X86_VMX']) # Boot the server into that az and make sure we fail server = self._boot_server( flavor_id=self.flavor_with_trait_dxva['id'], az='myaz', end_status='ERROR') self.assertIsNone(self._get_instance_host(server)) self.assertIn('No valid host', server['fault']['message']) class 
TestAggregateMultiTenancyIsolationFilter( test.TestCase, integrated_helpers.InstanceHelperMixin): def setUp(self): super(TestAggregateMultiTenancyIsolationFilter, self).setUp() # Stub out glance, placement and neutron. nova.tests.unit.image.fake.stub_out_image_service(self) self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset) self.useFixture(func_fixtures.PlacementFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) # Start nova services. self.start_service('conductor') self.admin_api = self.useFixture( nova_fixtures.OSAPIFixture(api_version='v2.1')).admin_api self.api = self.useFixture( nova_fixtures.OSAPIFixture(api_version='v2.1', project_id=uuids.non_admin)).api # Add the AggregateMultiTenancyIsolation to the list of enabled # filters since it is not enabled by default. enabled_filters = CONF.filter_scheduler.enabled_filters enabled_filters.append('AggregateMultiTenancyIsolation') self.flags(enabled_filters=enabled_filters, group='filter_scheduler') self.start_service('scheduler') for host in ('host1', 'host2'): self.start_service('compute', host=host) def test_aggregate_multitenancy_isolation_filter(self): """Tests common scenarios with the AggregateMultiTenancyIsolation filter: * hosts in a tenant-isolated aggregate are only accepted for that tenant * hosts not in a tenant-isolated aggregate are acceptable for all tenants, including tenants with access to the isolated-tenant aggregate """ # Create a tenant-isolated aggregate for the non-admin user. agg_id = self.admin_api.post_aggregate( {'aggregate': {'name': 'non_admin_agg'}})['id'] meta_req = {'set_metadata': { 'metadata': {'filter_tenant_id': uuids.non_admin}}} self.admin_api.api_post('/os-aggregates/%s/action' % agg_id, meta_req) # Add host2 to the aggregate; we'll restrict host2 to the non-admin # tenant. host_req = {'add_host': {'host': 'host2'}} self.admin_api.api_post('/os-aggregates/%s/action' % agg_id, host_req) # Stub out select_destinations to assert how many host candidates were # available per tenant-specific request. original_filtered_hosts = ( nova.scheduler.host_manager.HostManager.get_filtered_hosts) def spy_get_filtered_hosts(*args, **kwargs): self.filtered_hosts = original_filtered_hosts(*args, **kwargs) return self.filtered_hosts self.stub_out( 'nova.scheduler.host_manager.HostManager.get_filtered_hosts', spy_get_filtered_hosts) # Create a server for the admin - should only have one host candidate. server_req = {'server': self._build_server(networks='none')} with utils.temporary_mutation(self.admin_api, microversion='2.37'): server = self.admin_api.post_server(server_req) server = self._wait_for_state_change(server, 'ACTIVE') # Assert it's not on host2 which is isolated to the non-admin tenant. self.assertNotEqual('host2', server['OS-EXT-SRV-ATTR:host']) self.assertEqual(1, len(self.filtered_hosts)) # Now create a server for the non-admin tenant to which host2 is # isolated via the aggregate, but the other compute host is a # candidate. We don't assert that the non-admin tenant server shows # up on host2 because the other host, which is not isolated to the # aggregate, is still a candidate. 
server_req = {'server': self._build_server(networks='none')} with utils.temporary_mutation(self.api, microversion='2.37'): server = self.api.post_server(server_req) self._wait_for_state_change(server, 'ACTIVE') self.assertEqual(2, len(self.filtered_hosts)) class AggregateMultiTenancyIsolationColdMigrateTest( test.TestCase, integrated_helpers.InstanceHelperMixin): @staticmethod def _create_aggregate(admin_api, name): return admin_api.api_post( '/os-aggregates', {'aggregate': {'name': name}}).body['aggregate'] @staticmethod def _add_host_to_aggregate(admin_api, aggregate, host): add_host_req_body = { "add_host": { "host": host } } admin_api.api_post( '/os-aggregates/%s/action' % aggregate['id'], add_host_req_body) @staticmethod def _isolate_aggregate(admin_api, aggregate, tenant_id): set_meta_req_body = { "set_metadata": { "metadata": { "filter_tenant_id": tenant_id } } } admin_api.api_post( '/os-aggregates/%s/action' % aggregate['id'], set_meta_req_body) def setUp(self): super(AggregateMultiTenancyIsolationColdMigrateTest, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) # Intentionally keep these separate since we want to create the # server with the non-admin user in a different project. admin_api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1', project_id=uuids.admin_project)) self.admin_api = admin_api_fixture.admin_api self.admin_api.microversion = 'latest' user_api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1', project_id=uuids.user_project)) self.api = user_api_fixture.api self.api.microversion = 'latest' # the image fake backend needed for image discovery nova.tests.unit.image.fake.stub_out_image_service(self) self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset) self.start_service('conductor') # Enable the AggregateMultiTenancyIsolation filter before starting the # scheduler service. enabled_filters = CONF.filter_scheduler.enabled_filters if 'AggregateMultiTenancyIsolation' not in enabled_filters: enabled_filters.append('AggregateMultiTenancyIsolation') self.flags( enabled_filters=enabled_filters, group='filter_scheduler') # Add a custom weigher which will weigh host1, which will be in the # admin project aggregate, higher than the other hosts which are in # the non-admin project aggregate. self.useFixture(nova_fixtures.HostNameWeigherFixture()) self.start_service('scheduler') for host in ('host1', 'host2', 'host3'): self.start_service('compute', host=host) # Create an admin-only aggregate for the admin project. This is needed # because if host1 is not in an aggregate with the filter_tenant_id # metadata key, the filter will accept that host even for the non-admin # project. admin_aggregate = self._create_aggregate( self.admin_api, 'admin-aggregate') self._add_host_to_aggregate(self.admin_api, admin_aggregate, 'host1') # Restrict the admin project to the admin aggregate. self._isolate_aggregate( self.admin_api, admin_aggregate, uuids.admin_project) # Create the tenant aggregate for the non-admin project. tenant_aggregate = self._create_aggregate( self.admin_api, 'tenant-aggregate') # Add two compute hosts to the tenant aggregate. We exclude host1 # since that is weighed higher due to HostNameWeigherFixture and we # want to ensure the scheduler properly filters out host1 before we # even get to weighing the selected hosts. 
for host in ('host2', 'host3'): self._add_host_to_aggregate(self.admin_api, tenant_aggregate, host) # Restrict the non-admin project to the tenant aggregate. self._isolate_aggregate( self.admin_api, tenant_aggregate, uuids.user_project) def test_cold_migrate_server(self): """Creates a server using the non-admin project, then cold migrates the server and asserts the server goes to the other host in the isolated host aggregate via the AggregateMultiTenancyIsolation filter. """ img = nova.tests.unit.image.fake.AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID server_req_body = self._build_server( image_uuid=img, networks='none') server = self.api.post_server({'server': server_req_body}) server = self._wait_for_state_change(server, 'ACTIVE') # Ensure the server ended up in host2 or host3 original_host = server['OS-EXT-SRV-ATTR:host'] self.assertNotEqual('host1', original_host) # Now cold migrate the server and it should end up in the other host # in the same tenant-isolated aggregate. self.admin_api.api_post( '/servers/%s/action' % server['id'], {'migrate': None}) server = self._wait_for_state_change(server, 'VERIFY_RESIZE') # Ensure the server is on the other host in the same aggregate. expected_host = 'host3' if original_host == 'host2' else 'host2' self.assertEqual(expected_host, server['OS-EXT-SRV-ATTR:host']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_availability_zones.py0000664000175000017500000001605100000000000024341 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from nova import context from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as fake_image from nova.tests.unit import policy_fixture class TestAvailabilityZoneScheduling( test.TestCase, integrated_helpers.InstanceHelperMixin): def setUp(self): super(TestAvailabilityZoneScheduling, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.admin_api self.api.microversion = 'latest' fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) self.start_service('conductor') self.start_service('scheduler') # Start two compute services in separate zones. self._start_host_in_zone('host1', 'zone1') self._start_host_in_zone('host2', 'zone2') flavors = self.api.get_flavors() self.flavor1 = flavors[0]['id'] self.flavor2 = flavors[1]['id'] def _start_host_in_zone(self, host, zone): # Start the nova-compute service. self.start_service('compute', host=host) # Create a host aggregate with a zone in which to put this host. 
aggregate_body = { "aggregate": { "name": zone, "availability_zone": zone } } aggregate = self.api.api_post( '/os-aggregates', aggregate_body).body['aggregate'] # Now add the compute host to the aggregate. add_host_body = { "add_host": { "host": host } } self.api.api_post( '/os-aggregates/%s/action' % aggregate['id'], add_host_body) def _create_server(self, name): # Create a server, it doesn't matter which host it ends up in. server = super(TestAvailabilityZoneScheduling, self)._create_server( flavor_id=self.flavor1, networks='none',) original_host = server['OS-EXT-SRV-ATTR:host'] # Assert the server has the AZ set (not None or 'nova'). expected_zone = 'zone1' if original_host == 'host1' else 'zone2' self.assertEqual(expected_zone, server['OS-EXT-AZ:availability_zone']) return server def _assert_instance_az(self, server, expected_zone): # Check the API. self.assertEqual(expected_zone, server['OS-EXT-AZ:availability_zone']) # Check the DB. ctxt = context.get_admin_context() with context.target_cell( ctxt, self.cell_mappings[test.CELL1_NAME]) as cctxt: instance = objects.Instance.get_by_uuid(cctxt, server['id']) self.assertEqual(expected_zone, instance.availability_zone) def test_live_migrate_implicit_az(self): """Tests live migration of an instance with an implicit AZ. Before Pike, a server created without an explicit availability zone was assigned a default AZ based on the "default_schedule_zone" config option which defaults to None, which allows the instance to move freely between availability zones. With change I8d426f2635232ffc4b510548a905794ca88d7f99 in Pike, if the user does not request an availability zone, the instance.availability_zone field is set based on the host chosen by the scheduler. The default AZ for all nova-compute services is determined by the "default_availability_zone" config option which defaults to "nova". This test creates two nova-compute services in separate zones, creates a server without specifying an explicit zone, and then tries to live migrate the instance to the other compute which should succeed because the request spec does not include an explicit AZ, so the instance is still not restricted to its current zone even if it says it is in one. """ server = self._create_server('test_live_migrate_implicit_az') original_host = server['OS-EXT-SRV-ATTR:host'] # Attempt to live migrate the instance; again, we don't specify a host # because there are only two hosts so the scheduler would only be able # to pick the second host which is in a different zone. live_migrate_req = { 'os-migrateLive': { 'block_migration': 'auto', 'host': None } } self.api.post_server_action(server['id'], live_migrate_req) # Poll the migration until it is done. migration = self._wait_for_migration_status(server, ['completed']) self.assertEqual('live-migration', migration['migration_type']) # Assert that the server did move. Note that we check both the API and # the database because the API will return the AZ from the host # aggregate if instance.host is not None. server = self.api.get_server(server['id']) expected_zone = 'zone2' if original_host == 'host1' else 'zone1' self._assert_instance_az(server, expected_zone) def test_resize_revert_across_azs(self): """Creates two compute service hosts in separate AZs. Creates a server without an explicit AZ so it lands in one AZ, and then resizes the server which moves it to the other AZ. Then the resize is reverted and asserts the server is shown as back in the original source host AZ. 
""" server = self._create_server('test_resize_revert_across_azs') original_host = server['OS-EXT-SRV-ATTR:host'] original_az = 'zone1' if original_host == 'host1' else 'zone2' # Resize the server which should move it to the other zone. self.api.post_server_action( server['id'], {'resize': {'flavorRef': self.flavor2}}) server = self._wait_for_state_change(server, 'VERIFY_RESIZE') # Now the server should be in the other AZ. new_zone = 'zone2' if original_host == 'host1' else 'zone1' self._assert_instance_az(server, new_zone) # Revert the resize and the server should be back in the original AZ. self.api.post_server_action(server['id'], {'revertResize': None}) server = self._wait_for_state_change(server, 'ACTIVE') self._assert_instance_az(server, original_az) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_boot_from_volume.py0000664000175000017500000002455500000000000024036 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import six from nova import context from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api import client as api_client from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.functional import test_servers from nova.tests.unit.image import fake as fake_image from nova.tests.unit import policy_fixture class BootFromVolumeTest(test_servers.ServersTestBase): def _get_hypervisor_stats(self): response = self.admin_api.api_get('/os-hypervisors/statistics') return response.body['hypervisor_statistics'] def _verify_zero_local_gb_used(self): stats = self._get_hypervisor_stats() self.assertEqual(0, stats['local_gb_used']) def _verify_instance_flavor_not_zero(self, instance_uuid): # We are trying to avoid saving instance records with root_gb=0 ctxt = context.RequestContext('fake', self.api.project_id) instance = objects.Instance.get_by_uuid(ctxt, instance_uuid) self.assertNotEqual(0, instance.root_gb) self.assertNotEqual(0, instance.flavor.root_gb) def _verify_request_spec_flavor_not_zero(self, instance_uuid): # We are trying to avoid saving request spec records with root_gb=0 ctxt = context.RequestContext('fake', self.api.project_id) rspec = objects.RequestSpec.get_by_instance_uuid(ctxt, instance_uuid) self.assertNotEqual(0, rspec.flavor.root_gb) def setUp(self): # These need to be set up before services are started, else they # won't be reflected in the running service. 
self.flags(allow_resize_to_same_host=True) super(BootFromVolumeTest, self).setUp() self.admin_api = self.api_fixture.admin_api self.useFixture(nova_fixtures.CinderFixture(self)) def test_boot_from_volume_larger_than_local_gb(self): # Verify no local disk is being used currently self._verify_zero_local_gb_used() # Create flavors with disk larger than available host local disk flavor_id = self._create_flavor(memory_mb=64, vcpu=1, disk=8192, ephemeral=0) flavor_id_alt = self._create_flavor(memory_mb=64, vcpu=1, disk=16384, ephemeral=0) # Boot a server with a flavor disk larger than the available local # disk. It should succeed for boot from volume. server = self._build_server(image_uuid='', flavor_id=flavor_id) volume_uuid = nova_fixtures.CinderFixture.IMAGE_BACKED_VOL bdm = {'boot_index': 0, 'uuid': volume_uuid, 'source_type': 'volume', 'destination_type': 'volume'} server['block_device_mapping_v2'] = [bdm] created_server = self.api.post_server({"server": server}) server_id = created_server['id'] self._wait_for_state_change(created_server, 'ACTIVE') # Check that hypervisor local disk reporting is still 0 self._verify_zero_local_gb_used() # Check that instance has not been saved with 0 root_gb self._verify_instance_flavor_not_zero(server_id) # Check that request spec has not been saved with 0 root_gb self._verify_request_spec_flavor_not_zero(server_id) # Do actions that could change local disk reporting and verify they # don't change local disk reporting. # Resize post_data = {'resize': {'flavorRef': flavor_id_alt}} self.api.post_server_action(server_id, post_data) self._wait_for_state_change(created_server, 'VERIFY_RESIZE') # Check that hypervisor local disk reporting is still 0 self._verify_zero_local_gb_used() # Check that instance has not been saved with 0 root_gb self._verify_instance_flavor_not_zero(server_id) # Check that request spec has not been saved with 0 root_gb self._verify_request_spec_flavor_not_zero(server_id) # Confirm the resize post_data = {'confirmResize': None} self.api.post_server_action(server_id, post_data) self._wait_for_state_change(created_server, 'ACTIVE') # Check that hypervisor local disk reporting is still 0 self._verify_zero_local_gb_used() # Check that instance has not been saved with 0 root_gb self._verify_instance_flavor_not_zero(server_id) # Check that request spec has not been saved with 0 root_gb self._verify_request_spec_flavor_not_zero(server_id) # Shelve post_data = {'shelve': None} self.api.post_server_action(server_id, post_data) self._wait_for_state_change(created_server, 'SHELVED_OFFLOADED') # Check that hypervisor local disk reporting is still 0 self._verify_zero_local_gb_used() # Check that instance has not been saved with 0 root_gb self._verify_instance_flavor_not_zero(server_id) # Check that request spec has not been saved with 0 root_gb self._verify_request_spec_flavor_not_zero(server_id) # Unshelve post_data = {'unshelve': None} self.api.post_server_action(server_id, post_data) self._wait_for_state_change(created_server, 'ACTIVE') # Check that hypervisor local disk reporting is still 0 self._verify_zero_local_gb_used() # Check that instance has not been saved with 0 root_gb self._verify_instance_flavor_not_zero(server_id) # Check that request spec has not been saved with 0 root_gb self._verify_request_spec_flavor_not_zero(server_id) # Rebuild # The image_uuid is from CinderFixture for the # volume representing IMAGE_BACKED_VOL. 
image_uuid = '155d900f-4e14-4e4c-a73d-069cbf4541e6' post_data = {'rebuild': {'imageRef': image_uuid}} self.api.post_server_action(server_id, post_data) self._wait_for_state_change(created_server, 'ACTIVE') # Check that hypervisor local disk reporting is still 0 self._verify_zero_local_gb_used() # Check that instance has not been saved with 0 root_gb self._verify_instance_flavor_not_zero(server_id) # Check that request spec has not been saved with 0 root_gb self._verify_request_spec_flavor_not_zero(server_id) def test_max_local_block_devices_0_force_bfv(self): """Tests that when the API is configured with max_local_block_devices=0 a user cannot boot from image, they must boot from volume. """ self.flags(max_local_block_devices=0) server = self._build_server() ex = self.assertRaises(api_client.OpenStackApiException, self.admin_api.post_server, {'server': server}) self.assertEqual(400, ex.response.status_code) self.assertIn('You specified more local devices than the limit allows', six.text_type(ex)) class BootFromVolumeLargeRequestTest(test.TestCase, integrated_helpers.InstanceHelperMixin): def setUp(self): super(BootFromVolumeLargeRequestTest, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(nova_fixtures.CinderFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) self.image_service = fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) self.api = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')).admin_api # The test cases will handle starting compute/conductor/scheduler # services if they need them. def test_boot_from_volume_10_servers_255_volumes_2_images(self): """Create 10 servers with 255 BDMs each using the same image for 200 of the BDMs and another image for 55 other BDMs. This is a bit silly but it just shows that it's possible and there is no rate limiting involved in this type of very heavy request. """ # We only care about API performance in this test case so stub out # conductor to not do anything. self.useFixture(nova_fixtures.NoopConductorFixture()) # NOTE(gibi): Do not use 'c905cedb-7281-47e4-8a62-f26bc5fc4c77' image # as that is defined with a separate kernel image, leading to one extra # call to nova.image.glance.API.get from compute.api # _handle_kernel_and_ramdisk() image1 = 'a2459075-d96c-40d5-893e-577ff92e721c' image2 = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' server = self._build_server() server.pop('imageRef') server['min_count'] = 10 bdms = [] # Create 200 BDMs using image1 and 55 using image2. for boot_index in range(255): image_uuid = image2 if boot_index >= 200 else image1 bdms.append({ 'source_type': 'image', 'destination_type': 'volume', 'delete_on_termination': True, 'volume_size': 1, 'boot_index': boot_index, 'uuid': image_uuid }) server['block_device_mapping_v2'] = bdms # Wrap the image service get method to check how many times it was # called. with mock.patch('nova.image.glance.API.get', wraps=self.image_service.show) as mock_image_get: self.api.post_server({'server': server}) # Assert that there was caching of the GET /v2/images/{image_id} # calls. The expected total in this case is 3: one for validating # the root BDM image, one for image1 and one for image2 during bdm # validation - only the latter two cases use a cache. 
self.assertEqual(3, mock_image_get.call_count) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_cold_migrate.py0000664000175000017500000001316400000000000023104 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.compute import api as compute_api from nova import context as nova_context from nova import objects from nova import test from nova.tests.functional import integrated_helpers from nova.tests.unit import fake_notifier class ColdMigrationDisallowSameHost( integrated_helpers.ProviderUsageBaseTestCase): """Tests cold migrate where the source host does not have the COMPUTE_SAME_HOST_COLD_MIGRATE trait. """ compute_driver = 'fake.MediumFakeDriver' def setUp(self): super(ColdMigrationDisallowSameHost, self).setUp() # Start one compute service which will use the fake virt driver # which disallows cold migration to the same host. self._start_compute('host1') def _wait_for_migrate_no_valid_host(self, error='NoValidHost'): event = fake_notifier.wait_for_versioned_notifications( 'compute_task.migrate_server.error')[0] self.assertEqual(error, event['payload']['nova_object.data']['reason'][ 'nova_object.data']['exception']) def test_cold_migrate_same_host_not_supported(self): """Simple test to show that you cannot cold-migrate to the same host when the resource provider does not expose the COMPUTE_SAME_HOST_COLD_MIGRATE trait. """ server = self._create_server(networks='none') # The fake driver does not report COMPUTE_SAME_HOST_COLD_MIGRATE # so cold migration should fail since we only have one host. self.api.post_server_action(server['id'], {'migrate': None}) self._wait_for_migrate_no_valid_host() def test_cold_migrate_same_host_old_compute_disallow(self): """Upgrade compat test where the resource provider does not report the COMPUTE_SAME_HOST_COLD_MIGRATE trait but the compute service is old so the API falls back to the allow_resize_to_same_host config which defaults to False. """ server = self._create_server(networks='none') # Stub the compute service version check to make the compute service # appear old. fake_service = objects.Service() fake_service.version = ( compute_api.MIN_COMPUTE_SAME_HOST_COLD_MIGRATE - 1) with mock.patch('nova.objects.Service.get_by_compute_host', return_value=fake_service) as mock_get_service: self.api.post_server_action(server['id'], {'migrate': None}) mock_get_service.assert_called_once_with( test.MatchType(nova_context.RequestContext), 'host1') # Since allow_resize_to_same_host defaults to False scheduling failed # since there are no other hosts. self._wait_for_migrate_no_valid_host() def test_cold_migrate_same_host_old_compute_allow(self): """Upgrade compat test where the resource provider does not report the COMPUTE_SAME_HOST_COLD_MIGRATE trait but the compute service is old so the API falls back to the allow_resize_to_same_host config which in this test is set to True. 
""" self.flags(allow_resize_to_same_host=True) server = self._create_server(networks='none') # Stub the compute service version check to make the compute service # appear old. fake_service = objects.Service() fake_service.version = ( compute_api.MIN_COMPUTE_SAME_HOST_COLD_MIGRATE - 1) with mock.patch('nova.objects.Service.get_by_compute_host', return_value=fake_service) as mock_get_service: self.api.post_server_action(server['id'], {'migrate': None}) mock_get_service.assert_called_once_with( test.MatchType(nova_context.RequestContext), 'host1') # In this case the compute is old so the API falls back to checking # allow_resize_to_same_host which says same-host cold migrate is # allowed so the scheduler sends the request to the only compute # available but the virt driver says same-host cold migrate is not # supported and raises UnableToMigrateToSelf. A reschedule is sent # to conductor which results in MaxRetriesExceeded since there are no # alternative hosts. self._wait_for_migrate_no_valid_host(error='MaxRetriesExceeded') class ColdMigrationAllowSameHost( integrated_helpers.ProviderUsageBaseTestCase): """Tests cold migrate where the source host has the COMPUTE_SAME_HOST_COLD_MIGRATE trait. """ compute_driver = 'fake.SameHostColdMigrateDriver' def setUp(self): super(ColdMigrationAllowSameHost, self).setUp() self._start_compute('host1') def test_cold_migrate_same_host_supported(self): """Simple test to show that you can cold-migrate to the same host when the resource provider exposes the COMPUTE_SAME_HOST_COLD_MIGRATE trait. """ server = self._create_server(networks='none') self.api.post_server_action(server['id'], {'migrate': None}) server = self._wait_for_state_change(server, 'VERIFY_RESIZE') self.assertEqual('host1', server['OS-EXT-SRV-ATTR:host']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_compute_mgr.py0000664000175000017500000001007500000000000022772 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from __future__ import absolute_import import fixtures import mock from nova import context from nova.network import model as network_model from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.unit import cast_as_call from nova.tests.unit import fake_network from nova.tests.unit import fake_server_actions class ComputeManagerTestCase(test.TestCase): def setUp(self): super(ComputeManagerTestCase, self).setUp() self.useFixture(nova_fixtures.SpawnIsSynchronousFixture()) self.useFixture(cast_as_call.CastAsCall(self)) self.conductor = self.start_service('conductor') self.start_service('scheduler') self.compute = self.start_service('compute') self.context = context.RequestContext('fake', 'fake') fake_server_actions.stub_out_action_events(self) fake_network.set_stub_network_methods(self) self.useFixture(fixtures.MockPatch( 'nova.network.neutron.API.get_instance_nw_info', return_value=network_model.NetworkInfo(), )) def test_instance_fault_message_no_traceback_with_retry(self): """This test simulates a spawn failure on the last retry attempt. If driver spawn raises an exception on the last retry attempt, the instance fault message should not contain a traceback for the last exception. The fault message field is limited in size and a long message with a traceback displaces the original error message. """ self.flags(max_attempts=3, group='scheduler') flavor = objects.Flavor( id=1, name='flavor1', memory_mb=256, vcpus=1, root_gb=1, ephemeral_gb=1, flavorid='1', swap=0, rxtx_factor=1.0, vcpu_weight=1, disabled=False, is_public=True, extra_specs={}, projects=[]) instance = objects.Instance(self.context, flavor=flavor, vcpus=1, memory_mb=256, root_gb=0, ephemeral_gb=0, project_id='fake') instance.create() # Amongst others, mock the resource tracker, otherwise it will # not have been sufficiently initialized and will raise a KeyError # on the self.compute_nodes dict after the TestingException. @mock.patch.object(self.conductor.manager.compute_task_mgr, '_cleanup_allocated_networks') @mock.patch.object(self.compute.manager.network_api, 'cleanup_instance_network_on_host') @mock.patch('nova.compute.utils.notify_about_instance_usage') @mock.patch.object(self.compute.manager, 'rt') @mock.patch.object(self.compute.manager.driver, 'spawn') def _test(mock_spawn, mock_grt, mock_notify, mock_cinoh, mock_can): mock_spawn.side_effect = test.TestingException('Preserve this') # Simulate that we're on the last retry attempt filter_properties = {'retry': {'num_attempts': 3}} request_spec = objects.RequestSpec.from_primitives( self.context, {'instance_properties': {'uuid': instance.uuid}, 'instance_type': flavor, 'image': None}, filter_properties) self.compute.manager.build_and_run_instance( self.context, instance, {}, request_spec, filter_properties, block_device_mapping=[]) _test() self.assertIn('Preserve this', instance.fault.message) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_conf_max_attach_disk_devices.py0000664000175000017500000001270000000000000026300 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import time import six from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api import client from nova.tests.functional import test_servers class ConfigurableMaxDiskDevicesTest(test_servers.ServersTestBase): def setUp(self): super(ConfigurableMaxDiskDevicesTest, self).setUp() self.cinder = self.useFixture( nova_fixtures.CinderFixture(self)) def _wait_for_volume_attach(self, server_id, volume_id): for i in range(0, 100): server = self.api.get_server(server_id) attached_vols = [vol['id'] for vol in server['os-extended-volumes:volumes_attached']] if volume_id in attached_vols: return time.sleep(.1) self.fail('Timed out waiting for volume %s to be attached to ' 'server %s. Currently attached volumes: %s' % (volume_id, server_id, attached_vols)) def test_boot_from_volume(self): # Set the maximum to 1 and boot from 1 volume. This should pass. self.flags(max_disk_devices_to_attach=1, group='compute') server = self._build_server(flavor_id='1') server['imageRef'] = '' volume_uuid = nova_fixtures.CinderFixture.IMAGE_BACKED_VOL bdm = {'boot_index': 0, 'uuid': volume_uuid, 'source_type': 'volume', 'destination_type': 'volume'} server['block_device_mapping_v2'] = [bdm] created_server = self.api.post_server({"server": server}) self._wait_for_state_change(created_server, 'ACTIVE') def test_boot_from_volume_plus_attach_max_exceeded(self): # Set the maximum to 1, boot from 1 volume, and attach one volume. # This should fail because it's trying to attach 2 disk devices. self.flags(max_disk_devices_to_attach=1, group='compute') server = self._build_server(flavor_id='1') server['imageRef'] = '' vol_img_id = nova_fixtures.CinderFixture.IMAGE_BACKED_VOL boot_vol = {'boot_index': 0, 'uuid': vol_img_id, 'source_type': 'volume', 'destination_type': 'volume'} vol_id = '9a695496-44aa-4404-b2cc-ccab2501f87e' other_vol = {'uuid': vol_id, 'source_type': 'volume', 'destination_type': 'volume'} server['block_device_mapping_v2'] = [boot_vol, other_vol] created_server = self.api.post_server({"server": server}) server_id = created_server['id'] # Server should go into ERROR state self._wait_for_state_change(created_server, 'ERROR') # Verify the instance fault server = self.api.get_server(server_id) # If anything fails during _prep_block_device, a 500 internal server # error is raised. self.assertEqual(500, server['fault']['code']) expected = ('Build of instance %s aborted: The maximum allowed number ' 'of disk devices (1) to attach to a single instance has ' 'been exceeded.' % server_id) self.assertIn(expected, server['fault']['message']) # Verify no volumes are attached (this is a generator) attached_vols = list(self.cinder.volume_ids_for_instance(server_id)) self.assertNotIn(vol_img_id, attached_vols) self.assertNotIn(vol_id, attached_vols) def test_attach(self): # Set the maximum to 2. This will allow one disk device for the local # disk of the server and also one volume to attach. A second volume # attach request will fail. 
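# Deployment-side sketch of the same setting (assumed equivalent of the
# self.flags() call that follows; the option name and group are taken
# directly from that call):
#
#     [compute]
#     max_disk_devices_to_attach = 2
#
# With a limit of 2, the server's local root disk uses one device slot and
# the first volume attach uses the other, so a second attach is rejected.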
self.flags(max_disk_devices_to_attach=2, group='compute') server = self._build_server(flavor_id='1') created_server = self.api.post_server({"server": server}) server_id = created_server['id'] self._wait_for_state_change(created_server, 'ACTIVE') # Attach one volume, should pass. vol_id = '9a695496-44aa-4404-b2cc-ccab2501f87e' self.api.post_server_volume( server_id, dict(volumeAttachment=dict(volumeId=vol_id))) self._wait_for_volume_attach(server_id, vol_id) # Attach a second volume, should fail fast in the API. other_vol_id = 'f2063123-0f88-463c-ac9d-74127faebcbe' ex = self.assertRaises( client.OpenStackApiException, self.api.post_server_volume, server_id, dict(volumeAttachment=dict(volumeId=other_vol_id))) self.assertEqual(403, ex.response.status_code) expected = ('The maximum allowed number of disk devices (2) to attach ' 'to a single instance has been exceeded.') self.assertIn(expected, six.text_type(ex)) # Verify only one volume is attached (this is a generator) attached_vols = list(self.cinder.volume_ids_for_instance(server_id)) self.assertIn(vol_id, attached_vols) self.assertNotIn(other_vol_id, attached_vols) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_cross_az_attach.py0000664000175000017500000001606500000000000023625 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api import client as api_client from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as fake_image from nova.tests.unit import policy_fixture class CrossAZAttachTestCase(test.TestCase, integrated_helpers.InstanceHelperMixin): """Contains various scenarios for the [cinder]/cross_az_attach option and how it affects interactions between nova and cinder in the API and compute service. """ az = 'us-central-1' def setUp(self): super(CrossAZAttachTestCase, self).setUp() # Use the standard fixtures. self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.CinderFixture(self, az=self.az)) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) # Start nova controller services. self.api = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')).admin_api self.start_service('conductor') self.start_service('scheduler') # Start one compute service and add it to the AZ. This allows us to # get past the AvailabilityZoneFilter and build a server. 
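# Rough operator-side equivalent of the aggregate/AZ wiring done through
# the API below (a sketch assuming the standard openstack CLI; the names
# come from this test's az and host):
#
#     openstack aggregate create --zone us-central-1 us-central-1
#     openstack aggregate add host us-central-1 host1
#
# In nova an availability zone is just metadata on a host aggregate, which
# is why adding host1 to the aggregate is enough to satisfy the
# AvailabilityZoneFilter.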
self.start_service('compute', host='host1') agg_id = self.api.post_aggregate({'aggregate': { 'name': self.az, 'availability_zone': self.az}})['id'] self.api.api_post('/os-aggregates/%s/action' % agg_id, {'add_host': {'host': 'host1'}}) def test_cross_az_attach_false_boot_from_volume_no_az_specified(self): """Tests the scenario where [cinder]/cross_az_attach=False and the server is created with a pre-existing volume but the server create request does not specify an AZ nor is [DEFAULT]/default_schedule_zone set. In this case the server is created in the zone specified by the volume. """ self.flags(cross_az_attach=False, group='cinder') # Do not need imageRef for boot from volume. server = self._build_server(image_uuid='') server['block_device_mapping_v2'] = [{ 'source_type': 'volume', 'destination_type': 'volume', 'boot_index': 0, 'uuid': nova_fixtures.CinderFixture.IMAGE_BACKED_VOL }] server = self.api.post_server({'server': server}) server = self._wait_for_state_change(server, 'ACTIVE') self.assertEqual(self.az, server['OS-EXT-AZ:availability_zone']) def test_cross_az_attach_false_data_volume_no_az_specified(self): """Tests the scenario where [cinder]/cross_az_attach=False and the server is created with a pre-existing volume as a non-boot data volume but the server create request does not specify an AZ nor is [DEFAULT]/default_schedule_zone set. In this case the server is created in the zone specified by the non-root data volume. """ self.flags(cross_az_attach=False, group='cinder') server = self._build_server() # Note that we use the legacy block_device_mapping parameter rather # than block_device_mapping_v2 because that will create an implicit # source_type=image, destination_type=local, boot_index=0, # uuid=$imageRef which is used as the root BDM and allows our # non-boot data volume to be attached during server create. Otherwise # we get InvalidBDMBootSequence. server['block_device_mapping'] = [{ # This is a non-bootable volume in the CinderFixture. 'volume_id': nova_fixtures.CinderFixture.SWAP_OLD_VOL }] server = self.api.post_server({'server': server}) server = self._wait_for_state_change(server, 'ACTIVE') self.assertEqual(self.az, server['OS-EXT-AZ:availability_zone']) def test_cross_az_attach_false_boot_from_volume_default_zone_match(self): """Tests the scenario where [cinder]/cross_az_attach=False and the server is created with a pre-existing volume and the [DEFAULT]/default_schedule_zone matches the volume's AZ. """ self.flags(cross_az_attach=False, group='cinder') self.flags(default_schedule_zone=self.az) # Do not need imageRef for boot from volume. server = self._build_server(image_uuid='') server['block_device_mapping_v2'] = [{ 'source_type': 'volume', 'destination_type': 'volume', 'boot_index': 0, 'uuid': nova_fixtures.CinderFixture.IMAGE_BACKED_VOL }] server = self.api.post_server({'server': server}) server = self._wait_for_state_change(server, 'ACTIVE') self.assertEqual(self.az, server['OS-EXT-AZ:availability_zone']) def test_cross_az_attach_false_bfv_az_specified_mismatch(self): """Negative test where the server is being created in a specific AZ that does not match the volume being attached which results in a 400 error response. """ self.flags(cross_az_attach=False, group='cinder') # Do not need imageRef for boot from volume. 
server = self._build_server(image_uuid='', az='london') server['block_device_mapping_v2'] = [{ 'source_type': 'volume', 'destination_type': 'volume', 'boot_index': 0, 'uuid': nova_fixtures.CinderFixture.IMAGE_BACKED_VOL }] # Use the AZ fixture to fake out the london AZ. with nova_fixtures.AvailabilityZoneFixture(zones=['london', self.az]): ex = self.assertRaises(api_client.OpenStackApiException, self.api.post_server, {'server': server}) self.assertEqual(400, ex.response.status_code) self.assertIn('Server and volumes are not in the same availability ' 'zone. Server is in: london. Volumes are in: %s' % self.az, six.text_type(ex)) def test_cross_az_attach_false_no_volumes(self): """A simple test to make sure cross_az_attach=False API validation is a noop if there are no volumes in the server create request. """ self.flags(cross_az_attach=False, group='cinder') server = self._create_server(az=self.az) self.assertEqual(self.az, server['OS-EXT-AZ:availability_zone']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_cross_cell_migrate.py0000664000175000017500000016702300000000000024317 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import mock from oslo_db import exception as oslo_db_exc from oslo_utils import fixture as osloutils_fixture from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.compute import instance_actions from nova import conf from nova import context as nova_context from nova.db import api as db_api from nova import exception from nova import objects from nova.policies import base as base_policies from nova.policies import servers as servers_policies from nova.scheduler import utils as scheduler_utils from nova.scheduler import weights from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit import cast_as_call from nova.tests.unit import fake_notifier from nova import utils CONF = conf.CONF class HostNameWeigher(weights.BaseHostWeigher): # TestMultiCellMigrate creates host1 in cell1 and host2 in cell2. # Something about migrating from host1 to host2 teases out failures # which probably has to do with cell1 being the default cell DB in # our base test class setup, so prefer host1 to make the tests # deterministic. _weights = {'host1': 100, 'host2': 50} def _weigh_object(self, host_state, weight_properties): # Any undefined host gets no weight. return self._weights.get(host_state.host, 0) class TestMultiCellMigrate(integrated_helpers.ProviderUsageBaseTestCase): """Tests for cross-cell cold migration (resize)""" NUMBER_OF_CELLS = 2 compute_driver = 'fake.MediumFakeDriver' def setUp(self): # Use our custom weigher defined above to make sure that we have # a predictable scheduling sort order during server create. 
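# The flags() call below is the in-test form of the scheduler configuration
# an operator would set (a sketch; the option and group names are the ones
# the test itself uses, and <this module> stands for this test module's
# import path):
#
#     [filter_scheduler]
#     weight_classes = <this module>.HostNameWeigher,nova.scheduler.weights.cross_cell.CrossCellWeigher
#
# A custom weigher only needs to subclass weights.BaseHostWeigher and
# implement _weigh_object(), as HostNameWeigher above does.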
weight_classes = [ __name__ + '.HostNameWeigher', 'nova.scheduler.weights.cross_cell.CrossCellWeigher' ] self.flags(weight_classes=weight_classes, group='filter_scheduler') super(TestMultiCellMigrate, self).setUp() self.cinder = self.useFixture(nova_fixtures.CinderFixture(self)) self._enable_cross_cell_resize() self.created_images = [] # list of image IDs created during resize # Adjust the polling interval and timeout for long RPC calls. self.flags(rpc_response_timeout=1) self.flags(long_rpc_timeout=60) # Set up 2 compute services in different cells self.host_to_cell_mappings = { 'host1': 'cell1', 'host2': 'cell2'} self.cell_to_aggregate = {} for host in sorted(self.host_to_cell_mappings): cell_name = self.host_to_cell_mappings[host] # Start the compute service on the given host in the given cell. self._start_compute(host, cell_name=cell_name) # Create an aggregate where the AZ name is the cell name. agg_id = self._create_aggregate( cell_name, availability_zone=cell_name) # Add the host to the aggregate. body = {'add_host': {'host': host}} self.admin_api.post_aggregate_action(agg_id, body) self.cell_to_aggregate[cell_name] = agg_id def _enable_cross_cell_resize(self): # Enable cross-cell resize policy since it defaults to not allow # anyone to perform that type of operation. For these tests we'll # just allow admins to perform cross-cell resize. self.policy.set_rules({ servers_policies.CROSS_CELL_RESIZE: base_policies.RULE_ADMIN_API}, overwrite=False) def assertFlavorMatchesAllocation(self, flavor, allocation, volume_backed=False): self.assertEqual(flavor['vcpus'], allocation['VCPU']) self.assertEqual(flavor['ram'], allocation['MEMORY_MB']) # Volume-backed instances won't have DISK_GB allocations. if volume_backed: self.assertNotIn('DISK_GB', allocation) else: self.assertEqual(flavor['disk'], allocation['DISK_GB']) def assert_instance_fields_match_flavor(self, instance, flavor): self.assertEqual(instance.memory_mb, flavor['ram']) self.assertEqual(instance.vcpus, flavor['vcpus']) self.assertEqual(instance.root_gb, flavor['disk']) self.assertEqual( instance.ephemeral_gb, flavor['OS-FLV-EXT-DATA:ephemeral']) def _count_volume_attachments(self, server_id): attachment_ids = self.cinder.attachment_ids_for_instance(server_id) return len(attachment_ids) def assert_quota_usage(self, expected_num_instances): limits = self.api.get_limits()['absolute'] self.assertEqual(expected_num_instances, limits['totalInstancesUsed']) def _create_server(self, flavor, volume_backed=False, group_id=None, no_networking=False): """Creates a server and waits for it to be ACTIVE :param flavor: dict form of the flavor to use :param volume_backed: True if the server should be volume-backed :param group_id: UUID of a server group in which to create the server :param no_networking: True if the server should be creating without networking, otherwise it will be created with a specific port and VIF tag :returns: server dict response from the GET /servers/{server_id} API """ if no_networking: networks = 'none' else: # Provide a VIF tag for the pre-existing port. Since VIF tags are # stored in the virtual_interfaces table in the cell DB, we want to # make sure those survive the resize to another cell. networks = [{ 'port': self.neutron.port_1['id'], 'tag': 'private' }] server = self._build_server( flavor_id=flavor['id'], networks=networks) # Put a tag on the server to make sure that survives the resize. 
server['tags'] = ['test'] if volume_backed: bdms = [{ 'boot_index': 0, 'uuid': nova_fixtures.CinderFixture.IMAGE_BACKED_VOL, 'source_type': 'volume', 'destination_type': 'volume', 'tag': 'root' }] server['block_device_mapping_v2'] = bdms # We don't need the imageRef for volume-backed servers. server.pop('imageRef', None) req = dict(server=server) if group_id: req['os:scheduler_hints'] = {'group': group_id} server = self.api.post_server(req) server = self._wait_for_state_change(server, 'ACTIVE') # For volume-backed make sure there is one attachment to start. if volume_backed: self.assertEqual(1, self._count_volume_attachments(server['id']), self.cinder.volume_to_attachment) return server def stub_image_create(self): """Stubs the _FakeImageService.create method to track created images""" original_create = self.image_service.create def image_create_snooper(*args, **kwargs): image = original_create(*args, **kwargs) self.created_images.append(image['id']) return image _p = mock.patch.object( self.image_service, 'create', side_effect=image_create_snooper) _p.start() self.addCleanup(_p.stop) def _resize_and_validate(self, volume_backed=False, stopped=False, target_host=None, server=None): """Creates (if a server is not provided) and resizes the server to another cell. Validates various aspects of the server and its related records (allocations, migrations, actions, VIF tags, etc). :param volume_backed: True if the server should be volume-backed, False if image-backed. :param stopped: True if the server should be stopped prior to resize, False if the server should be ACTIVE :param target_host: If not None, triggers a cold migration to the specified host. :param server: A pre-existing server to resize. If None this method creates the server. :returns: tuple of: - server response object - source compute node resource provider uuid - target compute node resource provider uuid - old flavor - new flavor """ flavors = self.api.get_flavors() if server is None: # Create the server. old_flavor = flavors[0] server = self._create_server( old_flavor, volume_backed=volume_backed) else: for flavor in flavors: if flavor['name'] == server['flavor']['original_name']: old_flavor = flavor break else: self.fail('Unable to find old flavor with name %s. Flavors: ' '%s', server['flavor']['original_name'], flavors) original_host = server['OS-EXT-SRV-ATTR:host'] image_uuid = None if volume_backed else server['image']['id'] # Our HostNameWeigher ensures the server starts in cell1, so we expect # the server AZ to be cell1 as well. self.assertEqual('cell1', server['OS-EXT-AZ:availability_zone']) if stopped: # Stop the server before resizing it. self.api.post_server_action(server['id'], {'os-stop': None}) self._wait_for_state_change(server, 'SHUTOFF') # Before resizing make sure quota usage is only 1 for total instances. self.assert_quota_usage(expected_num_instances=1) if target_host: # Cold migrate the server to the target host. new_flavor = old_flavor # flavor does not change for cold migrate body = {'migrate': {'host': target_host}} expected_host = target_host else: # Resize it which should migrate the server to the host in the # other cell. new_flavor = flavors[1] body = {'resize': {'flavorRef': new_flavor['id']}} expected_host = 'host1' if original_host == 'host2' else 'host2' self.stub_image_create() self.api.post_server_action(server['id'], body) # Wait for the server to be resized and then verify the host has # changed to be the host in the other cell. 
server = self._wait_for_state_change(server, 'VERIFY_RESIZE') self.assertEqual(expected_host, server['OS-EXT-SRV-ATTR:host']) # Assert that the instance is only listed one time from the API (to # make sure it's not listed out of both cells). # Note that we only get one because the DB API excludes hidden # instances by default (see instance_get_all_by_filters_sort). servers = self.api.get_servers() self.assertEqual(1, len(servers), 'Unexpected number of servers: %s' % servers) self.assertEqual(expected_host, servers[0]['OS-EXT-SRV-ATTR:host']) # And that there is only one migration record. migrations = self.api.api_get( '/os-migrations?instance_uuid=%s' % server['id'] ).body['migrations'] self.assertEqual(1, len(migrations), 'Unexpected number of migrations records: %s' % migrations) migration = migrations[0] self.assertEqual('finished', migration['status']) # There should be at least two actions, one for create and one for the # resize. There will be a third action if the server was stopped. Use # assertGreaterEqual in case a test performed some actions on a # pre-created server before resizing it, like attaching a volume. actions = self.api.get_instance_actions(server['id']) expected_num_of_actions = 3 if stopped else 2 self.assertGreaterEqual(len(actions), expected_num_of_actions, actions) # Each action should have events (make sure these were copied from # the source cell to the target cell). for action in actions: detail = self.api.api_get( '/servers/%s/os-instance-actions/%s' % ( server['id'], action['request_id'])).body['instanceAction'] self.assertNotEqual(0, len(detail['events']), detail) # The tag should still be present on the server. self.assertEqual(1, len(server['tags']), 'Server tags not found in target cell.') self.assertEqual('test', server['tags'][0]) # Confirm the source node has allocations for the old flavor and the # target node has allocations for the new flavor. source_rp_uuid = self._get_provider_uuid_by_host(original_host) # The source node allocations should be on the migration record. source_allocations = self._get_allocations_by_provider_uuid( source_rp_uuid)[migration['uuid']]['resources'] self.assertFlavorMatchesAllocation( old_flavor, source_allocations, volume_backed=volume_backed) target_rp_uuid = self._get_provider_uuid_by_host(expected_host) # The target node allocations should be on the instance record. target_allocations = self._get_allocations_by_provider_uuid( target_rp_uuid)[server['id']]['resources'] self.assertFlavorMatchesAllocation( new_flavor, target_allocations, volume_backed=volume_backed) # The instance, in the target cell DB, should have the old and new # flavor stored with it with the values we expect at this point. 
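# Sketch of the cell-targeting pattern used just below (no new names
# assumed): to read an instance from a specific cell database, the admin
# context is temporarily pointed at that cell's DB connection:
#
#     with nova_context.target_cell(admin_context, target_cell) as cctxt:
#         inst = objects.Instance.get_by_uuid(cctxt, server['id'])
#
# so the same context can be aimed at either cell in turn.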
target_cell_name = self.host_to_cell_mappings[expected_host] self.assertEqual( target_cell_name, server['OS-EXT-AZ:availability_zone']) target_cell = self.cell_mappings[target_cell_name] admin_context = nova_context.get_admin_context() with nova_context.target_cell(admin_context, target_cell) as cctxt: inst = objects.Instance.get_by_uuid( cctxt, server['id'], expected_attrs=['flavor']) self.assertIsNotNone( inst.old_flavor, 'instance.old_flavor not saved in target cell') self.assertIsNotNone( inst.new_flavor, 'instance.new_flavor not saved in target cell') self.assertEqual(inst.flavor.flavorid, inst.new_flavor.flavorid) if target_host: # cold migrate so flavor does not change self.assertEqual( inst.flavor.flavorid, inst.old_flavor.flavorid) else: self.assertNotEqual( inst.flavor.flavorid, inst.old_flavor.flavorid) self.assertEqual(old_flavor['id'], inst.old_flavor.flavorid) self.assertEqual(new_flavor['id'], inst.new_flavor.flavorid) # Assert the ComputeManager._set_instance_info fields # are correct after the resize. self.assert_instance_fields_match_flavor(inst, new_flavor) # The availability_zone field in the DB should also be updated. self.assertEqual(target_cell_name, inst.availability_zone) # A pre-created server might not have any ports attached. if server['addresses']: # Assert the VIF tag was carried through to the target cell DB. interface_attachments = self.api.get_port_interfaces(server['id']) self.assertEqual(1, len(interface_attachments)) self.assertEqual('private', interface_attachments[0]['tag']) if volume_backed: # Assert the BDM tag was carried through to the target cell DB. volume_attachments = self.api.get_server_volumes(server['id']) self.assertEqual(1, len(volume_attachments)) self.assertEqual('root', volume_attachments[0]['tag']) # Make sure the guest is no longer tracked on the source node. source_guest_uuids = ( self.computes[original_host].manager.driver.list_instance_uuids()) self.assertNotIn(server['id'], source_guest_uuids) # And the guest is on the target node hypervisor. target_guest_uuids = ( self.computes[expected_host].manager.driver.list_instance_uuids()) self.assertIn(server['id'], target_guest_uuids) # The source hypervisor continues to report usage in the hypervisors # API because even though the guest was destroyed there, the instance # resources are still claimed on that node in case the user reverts. self.assert_hypervisor_usage(source_rp_uuid, old_flavor, volume_backed) # The new flavor should show up with resource usage on the target host. self.assert_hypervisor_usage(target_rp_uuid, new_flavor, volume_backed) # While we have a copy of the instance in each cell database make sure # that quota usage is only reporting 1 (because one is hidden). self.assert_quota_usage(expected_num_instances=1) # For a volume-backed server, at this point there should be two volume # attachments for the instance: one tracked in the source cell and # one in the target cell. if volume_backed: self.assertEqual(2, self._count_volume_attachments(server['id']), self.cinder.volume_to_attachment) # Assert the expected power state. expected_power_state = 4 if stopped else 1 self.assertEqual( expected_power_state, server['OS-EXT-STS:power_state'], "Unexpected power state after resize.") # For an image-backed server, a snapshot image should have been created # and then deleted during the resize. 
if volume_backed: self.assertEqual('', server['image']) self.assertEqual( 0, len(self.created_images), "Unexpected image create during volume-backed resize") else: # The original image for the server shown in the API should not # have changed even if a snapshot was used to create the guest # on the dest host. self.assertEqual(image_uuid, server['image']['id']) self.assertEqual( 1, len(self.created_images), "Unexpected number of images created for image-backed resize") # Make sure the temporary snapshot image was deleted; we use the # compute images proxy API here which is deprecated so we force the # microversion to 2.1. with utils.temporary_mutation(self.api, microversion='2.1'): self.api.api_get('/images/%s' % self.created_images[0], check_response_status=[404]) return server, source_rp_uuid, target_rp_uuid, old_flavor, new_flavor def _attach_volume_to_server(self, server_id, volume_id): """Attaches the volume to the server and waits for the "instance.volume_attach.end" versioned notification. """ body = {'volumeAttachment': {'volumeId': volume_id}} self.api.api_post( '/servers/%s/os-volume_attachments' % server_id, body) fake_notifier.wait_for_versioned_notifications( 'instance.volume_attach.end') def _detach_volume_from_server(self, server_id, volume_id): """Detaches the volume from the server and waits for the "instance.volume_detach.end" versioned notification. """ self.api.api_delete( '/servers/%s/os-volume_attachments/%s' % (server_id, volume_id)) fake_notifier.wait_for_versioned_notifications( 'instance.volume_detach.end') def assert_volume_is_attached(self, server_id, volume_id): """Asserts the volume is attached to the server.""" server = self.api.get_server(server_id) attachments = server['os-extended-volumes:volumes_attached'] attached_vol_ids = [attachment['id'] for attachment in attachments] self.assertIn(volume_id, attached_vol_ids, 'Attached volumes: %s' % attachments) def assert_volume_is_detached(self, server_id, volume_id): """Asserts the volume is detached from the server.""" server = self.api.get_server(server_id) attachments = server['os-extended-volumes:volumes_attached'] attached_vol_ids = [attachment['id'] for attachment in attachments] self.assertNotIn(volume_id, attached_vol_ids, 'Attached volumes: %s' % attachments) def assert_resize_confirm_notifications(self): # We should have gotten only two notifications: # 1. instance.resize_confirm.start # 2. instance.resize_confirm.end self.assertEqual(2, len(fake_notifier.VERSIONED_NOTIFICATIONS), 'Unexpected number of versioned notifications for ' 'cross-cell resize confirm: %s' % fake_notifier.VERSIONED_NOTIFICATIONS) start = fake_notifier.VERSIONED_NOTIFICATIONS[0]['event_type'] self.assertEqual('instance.resize_confirm.start', start) end = fake_notifier.VERSIONED_NOTIFICATIONS[1]['event_type'] self.assertEqual('instance.resize_confirm.end', end) def delete_server_and_assert_cleanup(self, server, assert_confirmed_migration=False): """Deletes the server and makes various cleanup checks. - makes sure allocations from placement are gone - makes sure the instance record is gone from both cells - makes sure there are no leaked volume attachments :param server: dict of the server resource to delete :param assert_confirmed_migration: If True, asserts that the Migration record for the server has status "confirmed". This is useful when testing that deleting a resized server automatically confirms the resize. 
""" # Determine which cell the instance was in when the server was deleted # in the API so we can check hard vs soft delete in the DB. current_cell = self.host_to_cell_mappings[ server['OS-EXT-SRV-ATTR:host']] # Delete the server and check that the allocations are gone from # the placement service. mig_uuid = self._delete_and_check_allocations(server) ctxt = nova_context.get_admin_context() if assert_confirmed_migration: # Get the Migration object from the last cell the instance was in # and assert its status is "confirmed". cell = self.cell_mappings[current_cell] with nova_context.target_cell(ctxt, cell) as cctxt: migration = objects.Migration.get_by_uuid(cctxt, mig_uuid) self.assertEqual('confirmed', migration.status) # Make sure the instance record is gone from both cell databases. for cell_name in self.host_to_cell_mappings.values(): cell = self.cell_mappings[cell_name] with nova_context.target_cell(ctxt, cell) as cctxt: # If this is the current cell the instance was in when it was # deleted it should be soft-deleted (instance.deleted!=0), # otherwise it should be hard-deleted and getting it with a # read_deleted='yes' context should still raise. read_deleted = 'no' if current_cell == cell_name else 'yes' with utils.temporary_mutation( cctxt, read_deleted=read_deleted): self.assertRaises(exception.InstanceNotFound, objects.Instance.get_by_uuid, cctxt, server['id']) # Make sure there are no leaked volume attachments. attachment_count = self._count_volume_attachments(server['id']) self.assertEqual(0, attachment_count, 'Leaked volume attachments: %s' % self.cinder.volume_to_attachment) def assert_resize_confirm_actions(self, server): actions = self.api.get_instance_actions(server['id']) actions_by_action = {action['action']: action for action in actions} self.assertIn(instance_actions.CONFIRM_RESIZE, actions_by_action) confirm_action = actions_by_action[instance_actions.CONFIRM_RESIZE] detail = self.api.api_get( '/servers/%s/os-instance-actions/%s' % ( server['id'], confirm_action['request_id']) ).body['instanceAction'] events_by_name = {event['event']: event for event in detail['events']} self.assertEqual(2, len(detail['events']), detail) for event_name in ('conductor_confirm_snapshot_based_resize', 'compute_confirm_snapshot_based_resize_at_source'): self.assertIn(event_name, events_by_name) self.assertEqual('Success', events_by_name[event_name]['result']) self.assertEqual('Success', detail['events'][0]['result']) def test_resize_confirm_image_backed(self): """Creates an image-backed server in one cell and resizes it to the host in the other cell. The resize is confirmed. """ server, source_rp_uuid, target_rp_uuid, _, new_flavor = ( self._resize_and_validate()) # Attach a fake volume to the server to make sure it survives confirm. self._attach_volume_to_server(server['id'], uuids.fake_volume_id) # Reset the fake notifier so we only check confirmation notifications. fake_notifier.reset() # Confirm the resize and check all the things. The instance and its # related records should be gone from the source cell database; the # migration should be confirmed; the allocations, held by the migration # record on the source compute node resource provider, should now be # gone; there should be a confirmResize instance action record with # a successful event. self.api.post_server_action(server['id'], {'confirmResize': None}) self._wait_for_state_change(server, 'ACTIVE') self._assert_confirm( server, source_rp_uuid, target_rp_uuid, new_flavor) # Make sure the fake volume is still attached. 
self.assert_volume_is_attached(server['id'], uuids.fake_volume_id) # Explicitly delete the server and make sure it's gone from all cells. self.delete_server_and_assert_cleanup(server) # Run the DB archive code in all cells to make sure we did not mess # up some referential constraint. self._archive_cell_dbs() def _assert_confirm(self, server, source_rp_uuid, target_rp_uuid, new_flavor): target_host = server['OS-EXT-SRV-ATTR:host'] source_host = 'host1' if target_host == 'host2' else 'host2' # The migration should be confirmed. migrations = self.api.api_get( '/os-migrations?instance_uuid=%s' % server['id'] ).body['migrations'] self.assertEqual(1, len(migrations), migrations) migration = migrations[0] self.assertEqual('confirmed', migration['status'], migration) # The resource allocations held against the source node by the # migration record should be gone and the target node provider should # have allocations held by the instance. source_allocations = self._get_allocations_by_provider_uuid( source_rp_uuid) self.assertEqual({}, source_allocations) target_allocations = self._get_allocations_by_provider_uuid( target_rp_uuid) self.assertIn(server['id'], target_allocations) self.assertFlavorMatchesAllocation( new_flavor, target_allocations[server['id']]['resources']) self.assert_resize_confirm_actions(server) # Make sure the guest is on the target node hypervisor and not on the # source node hypervisor. source_guest_uuids = ( self.computes[source_host].manager.driver.list_instance_uuids()) self.assertNotIn(server['id'], source_guest_uuids, 'Guest is still running on the source hypervisor.') target_guest_uuids = ( self.computes[target_host].manager.driver.list_instance_uuids()) self.assertIn(server['id'], target_guest_uuids, 'Guest is not running on the target hypervisor.') # Assert the source host hypervisor usage is back to 0 and the target # is using the new flavor. self.assert_hypervisor_usage( target_rp_uuid, new_flavor, volume_backed=False) no_usage = {'vcpus': 0, 'disk': 0, 'ram': 0} self.assert_hypervisor_usage( source_rp_uuid, no_usage, volume_backed=False) # Run periodics and make sure the usage is still as expected. self._run_periodics() self.assert_hypervisor_usage( target_rp_uuid, new_flavor, volume_backed=False) self.assert_hypervisor_usage( source_rp_uuid, no_usage, volume_backed=False) # Make sure we got the expected notifications for the confirm action. self.assert_resize_confirm_notifications() def _archive_cell_dbs(self): ctxt = nova_context.get_admin_context() archived_instances_count = 0 for cell in self.cell_mappings.values(): with nova_context.target_cell(ctxt, cell) as cctxt: results = db_api.archive_deleted_rows( context=cctxt, max_rows=1000)[0] archived_instances_count += results.get('instances', 0) # We expect to have archived at least one instance. self.assertGreaterEqual(archived_instances_count, 1, 'No instances were archived from any cell.') def assert_resize_revert_notifications(self): # We should have gotten three notifications: # 1. instance.resize_revert.start (from target compute host) # 2. instance.exists (from target compute host) # 3. 
instance.resize_revert.end (from source compute host) self.assertEqual(3, len(fake_notifier.VERSIONED_NOTIFICATIONS), 'Unexpected number of versioned notifications for ' 'cross-cell resize revert: %s' % fake_notifier.VERSIONED_NOTIFICATIONS) start = fake_notifier.VERSIONED_NOTIFICATIONS[0]['event_type'] self.assertEqual('instance.resize_revert.start', start) exists = fake_notifier.VERSIONED_NOTIFICATIONS[1]['event_type'] self.assertEqual('instance.exists', exists) end = fake_notifier.VERSIONED_NOTIFICATIONS[2]['event_type'] self.assertEqual('instance.resize_revert.end', end) def assert_resize_revert_actions(self, server, source_host, dest_host): # There should not be any InstanceActionNotFound errors in the logs # since ComputeTaskManager.revert_snapshot_based_resize passes # graceful_exit=True to wrap_instance_event. self.assertNotIn('InstanceActionNotFound', self.stdlog.logger.output) actions = self.api.get_instance_actions(server['id']) # The revert instance action should have been copied from the target # cell to the source cell and "completed" there, i.e. an event # should show up under that revert action. actions_by_action = {action['action']: action for action in actions} self.assertIn(instance_actions.REVERT_RESIZE, actions_by_action) confirm_action = actions_by_action[instance_actions.REVERT_RESIZE] detail = self.api.api_get( '/servers/%s/os-instance-actions/%s' % ( server['id'], confirm_action['request_id']) ).body['instanceAction'] events_by_name = {event['event']: event for event in detail['events']} # There are two events: # - conductor_revert_snapshot_based_resize which is copied from the # target cell database record in conductor # - compute_revert_snapshot_based_resize_at_dest # - compute_finish_revert_snapshot_based_resize_at_source which is from # the source compute service method self.assertEqual(3, len(events_by_name), detail) self.assertIn('conductor_revert_snapshot_based_resize', events_by_name) conductor_event = events_by_name[ 'conductor_revert_snapshot_based_resize'] # The RevertResizeTask explicitly finishes this event in the source # cell DB. self.assertEqual('Success', conductor_event['result']) self.assertIn('compute_revert_snapshot_based_resize_at_dest', events_by_name) finish_revert_at_dest_event = events_by_name[ 'compute_revert_snapshot_based_resize_at_dest'] self.assertEqual(dest_host, finish_revert_at_dest_event['host']) self.assertEqual('Success', finish_revert_at_dest_event['result']) self.assertIn('compute_finish_revert_snapshot_based_resize_at_source', events_by_name) finish_revert_at_source_event = events_by_name[ 'compute_finish_revert_snapshot_based_resize_at_source'] self.assertEqual(source_host, finish_revert_at_source_event['host']) self.assertEqual('Success', finish_revert_at_source_event['result']) def test_resize_revert_volume_backed(self): """Tests a volume-backed resize to another cell where the resize is reverted back to the original source cell. """ server, source_rp_uuid, target_rp_uuid, old_flavor, new_flavor = ( self._resize_and_validate(volume_backed=True)) target_host = server['OS-EXT-SRV-ATTR:host'] # Attach a fake volume to the server to make sure it survives revert. self._attach_volume_to_server(server['id'], uuids.fake_volume_id) # Reset the fake notifier so we only check revert notifications. fake_notifier.reset() # Revert the resize. The server should be re-spawned in the source # cell and removed from the target cell. 
The allocations # should be gone from the target compute node resource provider, the # migration record should be reverted and there should be a revert # action. self.api.post_server_action(server['id'], {'revertResize': None}) server = self._wait_for_state_change(server, 'ACTIVE') source_host = server['OS-EXT-SRV-ATTR:host'] # The migration should be reverted. Wait for the # instance.resize_revert.end notification because the migration.status # is changed to "reverted" *after* the instance status is changed to # ACTIVE. fake_notifier.wait_for_versioned_notifications( 'instance.resize_revert.end') migrations = self.api.api_get( '/os-migrations?instance_uuid=%s' % server['id'] ).body['migrations'] self.assertEqual(1, len(migrations), migrations) migration = migrations[0] self.assertEqual('reverted', migration['status'], migration) # The target allocations should be gone. target_allocations = self._get_allocations_by_provider_uuid( target_rp_uuid) self.assertEqual({}, target_allocations) # The source allocations should just be on the server and for the old # flavor. source_allocations = self._get_allocations_by_provider_uuid( source_rp_uuid) self.assertNotIn(migration['uuid'], source_allocations) self.assertIn(server['id'], source_allocations) source_allocations = source_allocations[server['id']]['resources'] self.assertFlavorMatchesAllocation( old_flavor, source_allocations, volume_backed=True) self.assert_resize_revert_actions(server, source_host, target_host) # Make sure the guest is on the source node hypervisor and not on the # target node hypervisor. source_guest_uuids = ( self.computes[source_host].manager.driver.list_instance_uuids()) self.assertIn(server['id'], source_guest_uuids, 'Guest is not running on the source hypervisor.') target_guest_uuids = ( self.computes[target_host].manager.driver.list_instance_uuids()) self.assertNotIn(server['id'], target_guest_uuids, 'Guest is still running on the target hypervisor.') # Assert the target host hypervisor usage is back to 0 and the source # is back to using the old flavor. self.assert_hypervisor_usage( source_rp_uuid, old_flavor, volume_backed=True) no_usage = {'vcpus': 0, 'disk': 0, 'ram': 0} self.assert_hypervisor_usage( target_rp_uuid, no_usage, volume_backed=True) # Run periodics and make sure the usage is still as expected. self._run_periodics() self.assert_hypervisor_usage( source_rp_uuid, old_flavor, volume_backed=True) self.assert_hypervisor_usage( target_rp_uuid, no_usage, volume_backed=True) # Make sure the fake volume is still attached. self.assert_volume_is_attached(server['id'], uuids.fake_volume_id) # Make sure we got the expected notifications for the revert action. self.assert_resize_revert_notifications() # Explicitly delete the server and make sure it's gone from all cells. self.delete_server_and_assert_cleanup(server) def test_resize_revert_detach_volume_while_resized(self): """Test for resize revert where a volume is attached to the server before resize, then it is detached while resized, and then we revert and make sure it is still detached. """ # Create the server up-front. server = self._create_server(self.api.get_flavors()[0]) # Attach a random fake volume to the server. self._attach_volume_to_server(server['id'], uuids.fake_volume_id) # Resize the server. self._resize_and_validate(server=server) # Ensure the volume is still attached to the server in the target cell. 
self.assert_volume_is_attached(server['id'], uuids.fake_volume_id) # Detach the volume from the server in the target cell while the # server is in VERIFY_RESIZE status. self._detach_volume_from_server(server['id'], uuids.fake_volume_id) # Revert the resize and assert the volume is still detached from the # server after it has gone back to the source cell. self.api.post_server_action(server['id'], {'revertResize': None}) server = self._wait_for_state_change(server, 'ACTIVE') self._wait_for_migration_status(server, ['reverted']) self.assert_volume_is_detached(server['id'], uuids.fake_volume_id) # Delete the server and make sure we did not leak anything. self.delete_server_and_assert_cleanup(server) def test_delete_while_in_verify_resize_status(self): """Tests that when deleting a server in VERIFY_RESIZE status, the data is cleaned from both the source and target cell and the resize is automatically confirmed. """ server = self._resize_and_validate()[0] self.delete_server_and_assert_cleanup(server, assert_confirmed_migration=True) def test_cold_migrate_target_host_in_other_cell(self): """Tests cold migrating to a target host in another cell. This is mostly just to ensure the API does not restrict the target host to the source cell when cross-cell resize is allowed by policy. """ # _resize_and_validate creates the server on host1 which is in cell1. # To make things interesting, start a third host but in cell1 so we can # be sure the requested host from cell2 is honored. self._start_compute( 'host3', cell_name=self.host_to_cell_mappings['host1']) self._resize_and_validate(target_host='host2') def test_cold_migrate_cross_cell_weigher_stays_in_source_cell(self): """Tests cross-cell cold migrate where the source cell has two hosts so the CrossCellWeigher picks the other host in the source cell and we do a traditional resize. Note that in this case, HostNameWeigher will actually weigh host2 (in cell2) higher than host3 (in cell1) but the CrossCellWeigher will weigh host2 much lower than host3 since host3 is in the same cell as the source host (host1). """ # Create the server first (should go in host1). server = self._create_server(self.api.get_flavors()[0]) # Start another compute host service in cell1. self._start_compute( 'host3', cell_name=self.host_to_cell_mappings['host1']) # Cold migrate the server which should move the server to host3. self.admin_api.post_server_action(server['id'], {'migrate': None}) server = self._wait_for_state_change(server, 'VERIFY_RESIZE') self.assertEqual('host3', server['OS-EXT-SRV-ATTR:host']) def test_resize_cross_cell_weigher_filtered_to_target_cell_by_spec(self): """Variant of test_cold_migrate_cross_cell_weigher_stays_in_source_cell but in this case the flavor used for the resize is restricted via aggregate metadata to host2 in cell2 so even though normally host3 in cell1 would be weigher higher the CrossCellWeigher is a no-op since host3 is filtered out. """ # Create the server first (should go in host1). old_flavor = self.api.get_flavors()[0] server = self._create_server(old_flavor) # Start another compute host service in cell1. self._start_compute( 'host3', cell_name=self.host_to_cell_mappings['host1']) # Set foo=bar metadata on the cell2 aggregate. self.admin_api.post_aggregate_action( self.cell_to_aggregate['cell2'], {'set_metadata': {'metadata': {'foo': 'bar'}}}) # Create a flavor to use for the resize which has the foo=bar spec. 
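# Operator-side sketch of the same restriction (assumes the standard
# openstack CLI; the test performs the equivalent steps through the admin
# API just above and below, and <cell2 aggregate> is a placeholder):
#
#     openstack aggregate set --property foo=bar <cell2 aggregate>
#     openstack flavor set \
#         --property aggregate_instance_extra_specs:foo=bar cell2-foo-bar-flavor
#
# AggregateInstanceExtraSpecsFilter only passes hosts whose aggregate
# metadata matches the flavor's scoped extra spec, which is what pins this
# resize to host2 in cell2.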
new_flavor = { 'id': uuids.new_flavor, 'name': 'cell2-foo-bar-flavor', 'vcpus': old_flavor['vcpus'], 'ram': old_flavor['ram'], 'disk': old_flavor['disk'] } self.admin_api.post_flavor({'flavor': new_flavor}) # TODO(stephenfin): What do I do with this??? self.admin_api.post_extra_spec( new_flavor['id'], {'extra_specs': {'aggregate_instance_extra_specs:foo': 'bar'}} ) # Enable AggregateInstanceExtraSpecsFilter and restart the scheduler. enabled_filters = CONF.filter_scheduler.enabled_filters if 'AggregateInstanceExtraSpecsFilter' not in enabled_filters: enabled_filters.append('AggregateInstanceExtraSpecsFilter') self.flags(enabled_filters=enabled_filters, group='filter_scheduler') self.scheduler_service.stop() self.scheduler_service = self.start_service('scheduler') # Now resize to the new flavor and it should go to host2 in cell2. self.admin_api.post_server_action( server['id'], {'resize': {'flavorRef': new_flavor['id']}}) server = self._wait_for_state_change(server, 'VERIFY_RESIZE') self.assertEqual('host2', server['OS-EXT-SRV-ATTR:host']) # TODO(mriedem): Test a bunch of rollback scenarios. # TODO(mriedem): Test re-scheduling when the first host fails the # resize_claim and a subsequent alternative host works, and also the # case that all hosts fail the resize_claim. def test_anti_affinity_group(self): """Tests an anti-affinity group scenario where a server is moved across cells and then trying to move the other from the same group to the same host in the target cell should be rejected by the scheduler. """ # Create an anti-affinity server group for our servers. body = { 'server_group': { 'name': 'test_anti_affinity_group', 'policy': 'anti-affinity' } } group_id = self.api.api_post( '/os-server-groups', body).body['server_group']['id'] # Create a server in the group in cell1 (should land on host1 due to # HostNameWeigher). flavor = self.api.get_flavors()[0] server1 = self._create_server( flavor, group_id=group_id, no_networking=True) # Start another compute host service in cell1. self._start_compute( 'host3', cell_name=self.host_to_cell_mappings['host1']) # Create another server but we want it on host3 in cell1. We cannot # use the az forced host parameter because then we will not be able to # move the server across cells later. The HostNameWeigher will prefer # host2 in cell2 so we need to temporarily force host2 down. host2_service_uuid = self.computes['host2'].service_ref.uuid self.admin_api.put_service_force_down( host2_service_uuid, forced_down=True) server2 = self._create_server( flavor, group_id=group_id, no_networking=True) self.assertEqual('host3', server2['OS-EXT-SRV-ATTR:host']) # Remove the forced-down status of the host2 compute service so we can # migrate there. self.admin_api.put_service_force_down( host2_service_uuid, forced_down=False) # Now migrate server1 which should move it to host2 in cell2 otherwise # it would violate the anti-affinity policy since server2 is on host3 # in cell1. self.admin_api.post_server_action(server1['id'], {'migrate': None}) server1 = self._wait_for_state_change(server1, 'VERIFY_RESIZE') self.assertEqual('host2', server1['OS-EXT-SRV-ATTR:host']) self.admin_api.post_server_action( server1['id'], {'confirmResize': None}) self._wait_for_state_change(server1, 'ACTIVE') # At this point we have: # server1: host2 in cell2 # server2: host3 in cell1 # The server group hosts should reflect that. 
ctxt = nova_context.get_admin_context() group = objects.InstanceGroup.get_by_uuid(ctxt, group_id) group_hosts = scheduler_utils._get_instance_group_hosts_all_cells( ctxt, group) self.assertEqual(['host2', 'host3'], sorted(group_hosts)) # Try to migrate server2 to host2 in cell2 which should fail scheduling # because it violates the anti-affinity policy. Note that without # change I4b67ec9dd4ce846a704d0f75ad64c41e693de0fb in # ServerGroupAntiAffinityFilter this would fail because the scheduler # utils setup_instance_group only looks at the group hosts in the # source cell. self.admin_api.post_server_action( server2['id'], {'migrate': {'host': 'host2'}}) self._wait_for_migration_status(server2, ['error']) @mock.patch('nova.compute.api.API._get_source_compute_service', new=mock.Mock()) @mock.patch('nova.servicegroup.api.API.service_is_up', new=mock.Mock(return_value=True)) def test_poll_unconfirmed_resizes_with_upcall(self): """Tests the _poll_unconfirmed_resizes periodic task with a cross-cell resize once the instance is in VERIFY_RESIZE status on the dest host. In this case _poll_unconfirmed_resizes works because an up-call is possible to the API DB. """ server, source_rp_uuid, target_rp_uuid, _, new_flavor = ( self._resize_and_validate()) # At this point the server is in VERIFY_RESIZE status so enable the # _poll_unconfirmed_resizes periodic task and run it on the target # compute service. # Reset the fake notifier so we only check confirmation notifications. fake_notifier.reset() self.flags(resize_confirm_window=1) # Stub timeutils so the DB API query finds the unconfirmed migration. future = timeutils.utcnow() + datetime.timedelta(hours=1) ctxt = nova_context.get_admin_context() # This works because the test environment is configured with the API DB # connection globally. If the compute service was running with a conf # that did not have access to the API DB this would fail. target_host = server['OS-EXT-SRV-ATTR:host'] cell = self.cell_mappings[self.host_to_cell_mappings[target_host]] with nova_context.target_cell(ctxt, cell) as cctxt: with osloutils_fixture.TimeFixture(future): self.computes[target_host].manager._poll_unconfirmed_resizes( cctxt) self._wait_for_state_change(server, 'ACTIVE') self._assert_confirm( server, source_rp_uuid, target_rp_uuid, new_flavor) @mock.patch('nova.compute.api.API._get_source_compute_service', new=mock.Mock()) @mock.patch('nova.servicegroup.api.API.service_is_up', new=mock.Mock(return_value=True)) def test_poll_unconfirmed_resizes_with_no_upcall(self): """Tests the _poll_unconfirmed_resizes periodic task with a cross-cell resize once the instance is in VERIFY_RESIZE status on the dest host. In this case _poll_unconfirmed_resizes fails because an up-call is not possible to the API DB. """ server, source_rp_uuid, target_rp_uuid, _, new_flavor = ( self._resize_and_validate()) # At this point the server is in VERIFY_RESIZE status so enable the # _poll_unconfirmed_resizes periodic task and run it on the target # compute service. self.flags(resize_confirm_window=1) # Stub timeutils so the DB API query finds the unconfirmed migration. future = timeutils.utcnow() + datetime.timedelta(hours=1) ctxt = nova_context.get_admin_context() target_host = server['OS-EXT-SRV-ATTR:host'] cell = self.cell_mappings[self.host_to_cell_mappings[target_host]] nova_context.set_target_cell(ctxt, cell) # Simulate not being able to query the API DB by poisoning calls to # the instance_mappings table. 
Use the CastAsCall fixture so we can # trap and log errors for assertions in the test. with test.nested( osloutils_fixture.TimeFixture(future), cast_as_call.CastAsCall(self), mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid', side_effect=oslo_db_exc.CantStartEngineError) ) as ( _, _, get_im ): self.computes[target_host].manager._poll_unconfirmed_resizes(ctxt) get_im.assert_called() log_output = self.stdlog.logger.output self.assertIn('Error auto-confirming resize', log_output) self.assertIn('CantStartEngineError', log_output) # TODO(mriedem): Perform a resize with at-capacity computes, meaning that # when we revert we can only fit the instance with the old flavor back # onto the source host in the source cell. def test_resize_confirm_from_stopped(self): """Tests resizing and confirming a volume-backed server that was initially stopped so it should remain stopped through the resize. """ server = self._resize_and_validate(volume_backed=True, stopped=True)[0] # Confirm the resize and assert the guest remains off. self.api.post_server_action(server['id'], {'confirmResize': None}) server = self._wait_for_state_change(server, 'SHUTOFF') self.assertEqual(4, server['OS-EXT-STS:power_state'], "Unexpected power state after confirmResize.") self._wait_for_migration_status(server, ['confirmed']) # Now try cold-migrating back to cell1 to make sure there is no # duplicate entry error in the DB. self.api.post_server_action(server['id'], {'migrate': None}) server = self._wait_for_state_change(server, 'VERIFY_RESIZE') # Should be back on host1 in cell1. self.assertEqual('host1', server['OS-EXT-SRV-ATTR:host']) def test_resize_revert_from_stopped(self): """Tests resizing and reverting an image-backed server that was initially stopped so it should remain stopped through the revert. """ server = self._resize_and_validate(stopped=True)[0] # Revert the resize and assert the guest remains off. self.api.post_server_action(server['id'], {'revertResize': None}) server = self._wait_for_state_change(server, 'SHUTOFF') self.assertEqual(4, server['OS-EXT-STS:power_state'], "Unexpected power state after revertResize.") self._wait_for_migration_status(server, ['reverted']) # Now try cold-migrating to cell2 to make sure there is no # duplicate entry error in the DB. self.api.post_server_action(server['id'], {'migrate': None}) server = self._wait_for_state_change(server, 'VERIFY_RESIZE') # Should be on host2 in cell2. self.assertEqual('host2', server['OS-EXT-SRV-ATTR:host']) def test_finish_snapshot_based_resize_at_dest_spawn_fails(self): """Negative test where the driver spawn fails on the dest host during finish_snapshot_based_resize_at_dest which triggers a rollback of the instance data in the target cell. Furthermore, the test will hard reboot the server in the source cell to recover it from ERROR status. """ # Create a volume-backed server. This is more interesting for rollback # testing to make sure the volume attachments in the target cell were # cleaned up on failure. flavors = self.api.get_flavors() server = self._create_server(flavors[0], volume_backed=True) # Now mock out the spawn method on the destination host to fail # during _finish_snapshot_based_resize_at_dest_spawn and then resize # the server. 
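        # [Editor's aside, hedged] For orientation: HypervisorUnavailable
        # renders roughly as "Connection to the hypervisor is broken on host:
        # <host>", which is what the fault-message assertions in these
        # negative tests key on, e.g. (illustrative, not executed):
        #
        #     error = exception.HypervisorUnavailable(host='host2')
        #     # str(error) -> 'Connection to the hypervisor is broken on host: host2'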
error = exception.HypervisorUnavailable(host='host2') with mock.patch.object(self.computes['host2'].driver, 'spawn', side_effect=error): flavor2 = flavors[1]['id'] body = {'resize': {'flavorRef': flavor2}} self.api.post_server_action(server['id'], body) # The server should go to ERROR state with a fault record and # the API should still be showing the server from the source cell # because the instance mapping was not updated. server = self._wait_for_server_parameter(server, {'status': 'ERROR', 'OS-EXT-STS:task_state': None}) # The migration should be in 'error' status. self._wait_for_migration_status(server, ['error']) # Assert a fault was recorded. self.assertIn('fault', server) self.assertIn('Connection to the hypervisor is broken', server['fault']['message']) # The instance in the target cell DB should have been hard-deleted. self._assert_instance_not_in_cell('cell2', server['id']) # Assert that there is only one volume attachment for the server, i.e. # the one in the target cell was deleted. self.assertEqual(1, self._count_volume_attachments(server['id']), self.cinder.volume_to_attachment) # Assert that migration-based allocations were properly reverted. self._assert_allocation_revert_on_fail(server) # Now hard reboot the server in the source cell and it should go back # to ACTIVE. self.api.post_server_action(server['id'], {'reboot': {'type': 'HARD'}}) self._wait_for_state_change(server, 'ACTIVE') # Now retry the resize without the fault in the target host to make # sure things are OK (no duplicate entry errors in the target DB). self.api.post_server_action(server['id'], body) self._wait_for_state_change(server, 'VERIFY_RESIZE') def _assert_instance_not_in_cell(self, cell_name, server_id): cell = self.cell_mappings[cell_name] ctxt = nova_context.get_admin_context(read_deleted='yes') with nova_context.target_cell(ctxt, cell) as cctxt: self.assertRaises( exception.InstanceNotFound, objects.Instance.get_by_uuid, cctxt, server_id) def _assert_allocation_revert_on_fail(self, server): # Since this happens in MigrationTask.rollback in conductor, we need # to wait for something which happens after that, which is the # ComputeTaskManager._cold_migrate method sending the # compute_task.migrate_server.error event. fake_notifier.wait_for_versioned_notifications( 'compute_task.migrate_server.error') mig_uuid = self.get_migration_uuid_for_instance(server['id']) mig_allocs = self._get_allocations_by_server_uuid(mig_uuid) self.assertEqual({}, mig_allocs) source_rp_uuid = self._get_provider_uuid_by_host( server['OS-EXT-SRV-ATTR:host']) server_allocs = self._get_allocations_by_server_uuid(server['id']) volume_backed = False if server['image'] else True self.assertFlavorMatchesAllocation( server['flavor'], server_allocs[source_rp_uuid]['resources'], volume_backed=volume_backed) def test_prep_snapshot_based_resize_at_source_destroy_fails(self): """Negative test where prep_snapshot_based_resize_at_source fails destroying the guest for the non-volume backed server and asserts resources are rolled back. """ # Create a non-volume backed server for the snapshot flow. flavors = self.api.get_flavors() flavor1 = flavors[0] server = self._create_server(flavor1) # Now mock out the snapshot method on the source host to fail # during _prep_snapshot_based_resize_at_source and then resize # the server. 
source_host = server['OS-EXT-SRV-ATTR:host'] error = exception.HypervisorUnavailable(host=source_host) with mock.patch.object(self.computes[source_host].driver, 'destroy', side_effect=error): flavor2 = flavors[1]['id'] body = {'resize': {'flavorRef': flavor2}} self.api.post_server_action(server['id'], body) # The server should go to ERROR state with a fault record and # the API should still be showing the server from the source cell # because the instance mapping was not updated. server = self._wait_for_server_parameter(server, {'status': 'ERROR', 'OS-EXT-STS:task_state': None}) # The migration should be in 'error' status. self._wait_for_migration_status(server, ['error']) # Assert a fault was recorded. self.assertIn('fault', server) self.assertIn('Connection to the hypervisor is broken', server['fault']['message']) # The instance in the target cell DB should have been hard-deleted. self._assert_instance_not_in_cell('cell2', server['id']) # Assert that migration-based allocations were properly reverted. self._assert_allocation_revert_on_fail(server) # Now hard reboot the server in the source cell and it should go back # to ACTIVE. self.api.post_server_action(server['id'], {'reboot': {'type': 'HARD'}}) self._wait_for_state_change(server, 'ACTIVE') # Now retry the resize without the fault in the target host to make # sure things are OK (no duplicate entry errors in the target DB). self.api.post_server_action(server['id'], body) self._wait_for_state_change(server, 'VERIFY_RESIZE') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_external_networks.py0000664000175000017500000000727200000000000024234 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from oslo_utils.fixture import uuidsentinel as uuids from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as fake_image from nova.tests.unit import policy_fixture class TestNeutronExternalNetworks(test.TestCase, integrated_helpers.InstanceHelperMixin): """Tests for creating a server on a neutron network with router:external=True. """ def setUp(self): super(TestNeutronExternalNetworks, self).setUp() # Use the standard fixtures. self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(func_fixtures.PlacementFixture()) fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) neutron = self.useFixture(nova_fixtures.NeutronFixture(self)) self._setup_external_network(neutron) # Start nova controller services. 
api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api self.start_service('conductor') self.start_service('scheduler') self.start_service('compute', host='host1') @staticmethod def _setup_external_network(neutron): # Add related network to the neutron fixture specifically for these # tests. It cannot be added globally in the fixture init as it adds a # second network that makes auto allocation based tests fail due to # ambiguous networks. external_network = copy.deepcopy(neutron.network_2) # has shared=False external_network['name'] = 'external' external_network['router:external'] = True external_network['id'] = uuids.external_network neutron._networks[external_network['id']] = external_network def test_non_admin_create_server_on_external_network(self): """By default policy non-admin users are not allowed to create servers on external networks that are not shared. Doing so should result in the server failing to build with an ExternalNetworkAttachForbidden error. Note that we do not use SpawnIsSynchronousFixture to make _allocate_network_async synchronous in this test since that would change the behavior of the ComputeManager._build_resources method and abort the build which is not how ExternalNetworkAttachForbidden is really handled in reality. """ server = self._build_server( networks=[{'uuid': uuids.external_network}]) server = self.api.post_server({'server': server}) server = self._wait_for_state_change(server, 'ERROR') # The BuildAbortException should be in the server fault message. self.assertIn('aborted: Failed to allocate the network(s), not ' 'rescheduling.', server['fault']['message']) # And ExternalNetworkAttachForbidden should be in the logs. self.assertIn('ExternalNetworkAttachForbidden', self.stdlog.logger.output) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/test_flavor_extraspecs.py0000664000175000017500000001247400000000000024210 0ustar00zuulzuul00000000000000# Copyright 2020, Red Hat, Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for os-extra_specs API.""" from nova.tests.functional.api import client as api_client from nova.tests.functional import integrated_helpers class FlavorExtraSpecsTest(integrated_helpers._IntegratedTestBase): api_major_version = 'v2' def setUp(self): super(FlavorExtraSpecsTest, self).setUp() self.flavor_id = self._create_flavor() def test_create(self): """Test creating flavor extra specs with valid specs.""" body = { 'extra_specs': {'hw:numa_nodes': '1', 'hw:cpu_policy': 'shared'}, } self.admin_api.post_extra_spec(self.flavor_id, body) self.assertEqual( body['extra_specs'], self.admin_api.get_extra_specs(self.flavor_id) ) def test_create_invalid_spec(self): """Test creating flavor extra specs with invalid specs. This should pass because validation is not enabled in this API microversion. 
""" body = {'extra_specs': {'hw:numa_nodes': 'foo', 'foo': 'bar'}} self.admin_api.post_extra_spec(self.flavor_id, body) self.assertEqual( body['extra_specs'], self.admin_api.get_extra_specs(self.flavor_id) ) def test_update(self): """Test updating extra specs with valid specs.""" spec_id = 'hw:numa_nodes' body = {'hw:numa_nodes': '1'} self.admin_api.put_extra_spec(self.flavor_id, spec_id, body) self.assertEqual( body, self.admin_api.get_extra_spec(self.flavor_id, spec_id) ) def test_update_invalid_spec(self): """Test updating extra specs with invalid specs. This should pass because validation is not enabled in this API microversion. """ spec_id = 'hw:foo' body = {'hw:foo': 'bar'} self.admin_api.put_extra_spec(self.flavor_id, spec_id, body) self.assertEqual( body, self.admin_api.get_extra_spec(self.flavor_id, spec_id) ) class FlavorExtraSpecsV286Test(FlavorExtraSpecsTest): api_major_version = 'v2.1' microversion = '2.86' def test_create_invalid_spec(self): """Test creating extra specs with invalid specs.""" body = {'extra_specs': {'hw:numa_nodes': 'foo', 'foo': 'bar'}} # this should fail because 'foo' is not a suitable value for # 'hw:numa_nodes' exc = self.assertRaises( api_client.OpenStackApiException, self.admin_api.post_extra_spec, self.flavor_id, body, ) self.assertEqual(400, exc.response.status_code) # ...and the extra specs should not be saved self.assertEqual({}, self.admin_api.get_extra_specs(self.flavor_id)) def test_create_unknown_spec(self): """Test creating extra specs with unknown specs.""" body = {'extra_specs': {'hw:numa_nodes': '1', 'foo': 'bar'}} # this should pass because we don't recognize the extra spec but it's # not in a namespace we care about self.admin_api.post_extra_spec(self.flavor_id, body) body = {'extra_specs': {'hw:numa_nodes': '1', 'hw:foo': 'bar'}} # ...but this should fail because we do recognize the namespace exc = self.assertRaises( api_client.OpenStackApiException, self.admin_api.post_extra_spec, self.flavor_id, body, ) self.assertEqual(400, exc.response.status_code) def test_update_invalid_spec(self): """Test updating extra specs with invalid specs.""" spec_id = 'hw:foo' body = {'hw:foo': 'bar'} # this should fail because we don't recognize the extra spec exc = self.assertRaises( api_client.OpenStackApiException, self.admin_api.put_extra_spec, self.flavor_id, spec_id, body, ) self.assertEqual(400, exc.response.status_code) spec_id = 'hw:numa_nodes' body = {'hw:numa_nodes': 'foo'} # ...while this should fail because the value is not valid exc = self.assertRaises( api_client.OpenStackApiException, self.admin_api.put_extra_spec, self.flavor_id, spec_id, body, ) self.assertEqual(400, exc.response.status_code) # ...and neither extra spec should be saved self.assertEqual({}, self.admin_api.get_extra_specs(self.flavor_id)) def test_update_unknown_spec(self): """Test updating extra specs with unknown specs.""" spec_id = 'foo:bar' body = {'foo:bar': 'baz'} # this should pass because we don't recognize the extra spec but it's # not in a namespace we care about self.admin_api.put_extra_spec(self.flavor_id, spec_id, body) self.assertEqual(body, self.admin_api.get_extra_specs(self.flavor_id)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_images.py0000664000175000017500000001065100000000000021716 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils.fixture import uuidsentinel as uuids from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api import client from nova.tests.functional import test_servers class ImagesTest(test_servers.ServersTestBase): def test_create_images_negative_invalid_state(self): # Create server server = self._build_server() created_server = self.api.post_server({"server": server}) server_id = created_server['id'] found_server = self._wait_for_state_change(created_server, 'ACTIVE') # Create image name = 'Snapshot 1' self.api.post_server_action( server_id, {'createImage': {'name': name}}) # Confirm that the image was created images = self.api.get_images(detail=False) image_map = {image['name']: image for image in images} found_image = image_map.get(name) self.assertTrue(found_image) # Change server status from ACTIVE to SHELVED for negative test self.flags(shelved_offload_time = -1) self.api.post_server_action(server_id, {'shelve': {}}) found_server = self._wait_for_state_change(found_server, 'SHELVED') # Create image in SHELVED (not ACTIVE, etc.) name = 'Snapshot 2' ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_action, server_id, {'createImage': {'name': name}}) self.assertEqual(409, ex.response.status_code) # Confirm that the image was not created images = self.api.get_images(detail=False) image_map = {image['name']: image for image in images} found_image = image_map.get(name) self.assertFalse(found_image) # Cleanup self._delete_server(found_server) def test_admin_snapshot_user_image_access_member(self): """Tests a scenario where a user in project A creates a server and an admin in project B creates a snapshot of the server. The user in project A should have member access to the image even though the admin project B is the owner of the image. """ # Create a server using the tenant user project. server = self._build_server() server = self.api.post_server({"server": server}) server = self._wait_for_state_change(server, 'ACTIVE') # Create an admin API fixture with a unique project ID. admin_api = self.useFixture( nova_fixtures.OSAPIFixture( project_id=uuids.admin_project)).admin_api # Create a snapshot of the server using the admin project. name = 'admin-created-snapshot' admin_api.post_server_action( server['id'], {'createImage': {'name': name}}) # Confirm that the image was created. images = admin_api.get_images() for image in images: if image['name'] == name: break else: self.fail('Expected snapshot image %s not found in images list %s' % (name, images)) # Assert the owner is the admin project since the admin created the # snapshot. Note that the images API proxy puts stuff it does not know # about in the 'metadata' dict so that is where we will find owner. metadata = image['metadata'] self.assertIn('owner', metadata) self.assertEqual(uuids.admin_project, metadata['owner']) # Assert the non-admin tenant user project is a member. 
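        # [Editor's aside, illustrative only] The proxied image record being
        # inspected here looks roughly like this (hypothetical IDs; the test
        # asserts the individual keys rather than this exact payload):
        #
        #     {
        #         'name': 'admin-created-snapshot',
        #         'metadata': {
        #             'owner': '<admin project id>',
        #             'instance_owner': '<tenant user project id>',
        #             ...
        #         },
        #     }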
self.assertIn('instance_owner', metadata) self.assertEqual( self.api_fixture.project_id, metadata['instance_owner']) # Be sure we did not get a false positive by making sure the admin and # tenant user API fixtures are not using the same project_id. self.assertNotEqual(uuids.admin_project, self.api_fixture.project_id) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_instance_actions.py0000664000175000017500000003220700000000000023776 0ustar00zuulzuul00000000000000# Copyright 2016 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_policy import policy as oslo_policy from nova import exception from nova import policy from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api import client from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.functional import test_servers from nova.tests.unit.image import fake as fake_image from nova.tests.unit import policy_fixture class InstanceActionsTestV2(test_servers.ServersTestBase): """Tests Instance Actions API""" def test_get_instance_actions(self): server = self._create_server() actions = self.api.get_instance_actions(server['id']) self.assertEqual('create', actions[0]['action']) def test_get_instance_actions_deleted(self): server = self._create_server() self._delete_server(server) self.assertRaises(client.OpenStackApiNotFoundException, self.api.get_instance_actions, server['id']) class InstanceActionsTestV21(InstanceActionsTestV2): api_major_version = 'v2.1' class InstanceActionsTestV221(InstanceActionsTestV21): microversion = '2.21' def setUp(self): super(InstanceActionsTestV221, self).setUp() self.api.microversion = self.microversion def test_get_instance_actions_deleted(self): server = self._create_server() self._delete_server(server) actions = self.api.get_instance_actions(server['id']) self.assertEqual('delete', actions[0]['action']) self.assertEqual('create', actions[1]['action']) class HypervisorError(Exception): """This is just used to make sure the exception type is in the events.""" pass class InstanceActionEventFaultsTestCase( test.TestCase, integrated_helpers.InstanceHelperMixin): """Tests for the instance action event details reporting from the API""" def setUp(self): super(InstanceActionEventFaultsTestCase, self).setUp() # Setup the standard fixtures. fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) self.useFixture(policy_fixture.RealPolicyFixture()) # Start the compute services. 
self.start_service('conductor') self.start_service('scheduler') self.compute = self.start_service('compute') api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api self.admin_api = api_fixture.admin_api def _set_policy_rules(self, overwrite=True): rules = {'os_compute_api:os-instance-actions:show': '', 'os_compute_api:os-instance-actions:events:details': 'project_id:%(project_id)s'} policy.set_rules(oslo_policy.Rules.from_dict(rules), overwrite=overwrite) def test_instance_action_event_details_non_nova_exception(self): """Creates a server using the non-admin user, then reboot it which will generate a non-NovaException fault and put the instance into ERROR status. Then checks that fault details are visible. """ # Create the server with the non-admin user. server = self._build_server( networks=[{'port': nova_fixtures.NeutronFixture.port_1['id']}]) server = self.api.post_server({'server': server}) server = self._wait_for_state_change(server, 'ACTIVE') # Stop the server before rebooting it so that after the driver.reboot # method raises an exception, the fake driver does not report the # instance power state as running - that will make the compute manager # set the instance vm_state to error. self.api.post_server_action(server['id'], {'os-stop': None}) server = self._wait_for_state_change(server, 'SHUTOFF') # Stub out the compute driver reboot method to raise a non-nova # exception to simulate some error from the underlying hypervisor # which in this case we are going to say has sensitive content. error_msg = 'sensitive info' with mock.patch.object( self.compute.manager.driver, 'reboot', side_effect=HypervisorError(error_msg)) as mock_reboot: reboot_request = {'reboot': {'type': 'HARD'}} self.api.post_server_action(server['id'], reboot_request) # In this case we wait for the status to change to ERROR using # the non-admin user so we can assert the fault details. We also # wait for the task_state to be None since the wrap_instance_fault # decorator runs before the reverts_task_state decorator so we will # be sure the fault is set on the server. server = self._wait_for_server_parameter( server, {'status': 'ERROR', 'OS-EXT-STS:task_state': None}, api=self.api) mock_reboot.assert_called_once() self._set_policy_rules(overwrite=False) server_id = server['id'] # Calls GET on the server actions and verifies that the reboot # action expected in the response. response = self.api.api_get('/servers/%s/os-instance-actions' % server_id) server_actions = response.body['instanceActions'] for actions in server_actions: if actions['action'] == 'reboot': reboot_request_id = actions['request_id'] # non admin shows instance actions details and verifies the 'details' # in the action events via 'request_id', since microversion 2.51 that # we can show events, but in microversion 2.84 that we can show # 'details' for non-admin. self.api.microversion = '2.84' action_events_response = self.api.api_get( '/servers/%s/os-instance-actions/%s' % (server_id, reboot_request_id)) reboot_action = action_events_response.body['instanceAction'] # Since reboot action failed, the 'message' property in reboot action # should be 'Error', otherwise it's None. self.assertEqual('Error', reboot_action['message']) reboot_action_events = reboot_action['events'] # The instance action events from the non-admin user API response # should not have 'traceback' in it. 
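        # [Editor's aside, hedged] A single event in the 2.84 instanceAction
        # response looks roughly like the following (illustrative values; the
        # assertions below check individual keys rather than this exact
        # payload):
        #
        #     {
        #         'event': 'compute_reboot_instance',
        #         'result': 'Error',
        #         'details': 'HypervisorError',  # class name only, no message
        #         # no 'traceback' key for non-admin users
        #     }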
self.assertNotIn('traceback', reboot_action_events[0]) # And the sensitive details from the non-nova exception should not be # in the details. self.assertIn('details', reboot_action_events[0]) self.assertNotIn(error_msg, reboot_action_events[0]['details']) # The exception type class name should be in the details. self.assertIn('HypervisorError', reboot_action_events[0]['details']) # Get the server fault details for the admin user. self.admin_api.microversion = '2.84' action_events_response = self.admin_api.api_get( '/servers/%s/os-instance-actions/%s' % (server_id, reboot_request_id)) reboot_action = action_events_response.body['instanceAction'] self.assertEqual('Error', reboot_action['message']) reboot_action_events = reboot_action['events'] # The admin can see the fault details which includes the traceback, # and make sure the traceback is there by looking for part of it. self.assertIn('traceback', reboot_action_events[0]) self.assertIn('in reboot_instance', reboot_action_events[0]['traceback']) # The exception type class name should be in the details for the admin # user as well since the fault handling code cannot distinguish who # is going to see the message so it only sets class name. self.assertIn('HypervisorError', reboot_action_events[0]['details']) def test_instance_action_event_details_with_nova_exception(self): """Creates a server using the non-admin user, then reboot it which will generate a nova exception fault and put the instance into ERROR status. Then checks that fault details are visible. """ # Create the server with the non-admin user. server = self._build_server( networks=[{'port': nova_fixtures.NeutronFixture.port_1['id']}]) server = self.api.post_server({'server': server}) server = self._wait_for_state_change(server, 'ACTIVE') # Stop the server before rebooting it so that after the driver.reboot # method raises an exception, the fake driver does not report the # instance power state as running - that will make the compute manager # set the instance vm_state to error. self.api.post_server_action(server['id'], {'os-stop': None}) server = self._wait_for_state_change(server, 'SHUTOFF') # Stub out the compute driver reboot method to raise a nova # exception 'InstanceRebootFailure' to simulate some error. exc_reason = 'reboot failure' with mock.patch.object( self.compute.manager.driver, 'reboot', side_effect=exception.InstanceRebootFailure(reason=exc_reason) ) as mock_reboot: reboot_request = {'reboot': {'type': 'HARD'}} self.api.post_server_action(server['id'], reboot_request) # In this case we wait for the status to change to ERROR using # the non-admin user so we can assert the fault details. We also # wait for the task_state to be None since the wrap_instance_fault # decorator runs before the reverts_task_state decorator so we will # be sure the fault is set on the server. server = self._wait_for_server_parameter( server, {'status': 'ERROR', 'OS-EXT-STS:task_state': None}, api=self.api) mock_reboot.assert_called_once() self._set_policy_rules(overwrite=False) server_id = server['id'] # Calls GET on the server actions and verifies that the reboot # action expected in the response. 
response = self.api.api_get('/servers/%s/os-instance-actions' % server_id) server_actions = response.body['instanceActions'] for actions in server_actions: if actions['action'] == 'reboot': reboot_request_id = actions['request_id'] # non admin shows instance actions details and verifies the 'details' # in the action events via 'request_id', since microversion 2.51 that # we can show events, but in microversion 2.84 that we can show # 'details' for non-admin. self.api.microversion = '2.84' action_events_response = self.api.api_get( '/servers/%s/os-instance-actions/%s' % (server_id, reboot_request_id)) reboot_action = action_events_response.body['instanceAction'] # Since reboot action failed, the 'message' property in reboot action # should be 'Error', otherwise it's None. self.assertEqual('Error', reboot_action['message']) reboot_action_events = reboot_action['events'] # The instance action events from the non-admin user API response # should not have 'traceback' in it. self.assertNotIn('traceback', reboot_action_events[0]) # The nova exception format message should be in the details. self.assertIn('details', reboot_action_events[0]) self.assertIn(exc_reason, reboot_action_events[0]['details']) # Get the server fault details for the admin user. self.admin_api.microversion = '2.84' action_events_response = self.admin_api.api_get( '/servers/%s/os-instance-actions/%s' % (server_id, reboot_request_id)) reboot_action = action_events_response.body['instanceAction'] self.assertEqual('Error', reboot_action['message']) reboot_action_events = reboot_action['events'] # The admin can see the fault details which includes the traceback, # and make sure the traceback is there by looking for part of it. self.assertIn('traceback', reboot_action_events[0]) self.assertIn('in reboot_instance', reboot_action_events[0]['traceback']) # The nova exception format message should be in the details. self.assertIn(exc_reason, reboot_action_events[0]['details']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/test_json_filter.py0000664000175000017500000000604200000000000022766 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils from nova import conf from nova.tests import fixtures as nova_fixtures from nova.tests.functional import integrated_helpers CONF = conf.CONF class JsonFilterTestCase(integrated_helpers.ProviderUsageBaseTestCase): """Functional tests for the JsonFilter scheduler filter.""" microversion = '2.1' compute_driver = 'fake.SmallFakeDriver' def setUp(self): # Need to enable the JsonFilter before starting the scheduler service # in the parent class. 
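        # [Editor's aside, not used by this test case] Besides the single
        # equality query exercised in the test below, the JsonFilter accepts
        # compound expressions, for example (hypothetical values):
        #
        #     query = jsonutils.dumps(
        #         ['and',
        #          ['>=', '$free_ram_mb', 1024],
        #          ['=', '$hypervisor_hostname', 'host2']])
        #     request = {'server': server, 'os:scheduler_hints': {'query': query}}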
enabled_filters = CONF.filter_scheduler.enabled_filters if 'JsonFilter' not in enabled_filters: enabled_filters.append('JsonFilter') self.flags(enabled_filters=enabled_filters, group='filter_scheduler') # Use our custom weigher defined above to make sure that we have # a predictable scheduling sort order during server create. self.useFixture(nova_fixtures.HostNameWeigherFixture()) super(JsonFilterTestCase, self).setUp() # Now create two compute services which will have unique host and # node names. self._start_compute('host1') self._start_compute('host2') def test_filter_on_hypervisor_hostname(self): """Tests a commonly used scenario for people trying to build a baremetal server on a specific ironic node. Note that although an ironic deployment would normally have a 1:M host:node topology the test is setup with a 1:1 host:node but we can still test using that by filtering on hypervisor_hostname. Also note that an admin could force a server to build on a specific host by passing availability_zone=:: but that means no filters get run which might be undesirable. """ # Create a server passing the hypervisor_hostname query scheduler hint # for host2 to make sure the filter works. If not, because of the # custom HostNameWeigher, host1 would be chosen. query = jsonutils.dumps(['=', '$hypervisor_hostname', 'host2']) server = self._build_server() request = {'server': server, 'os:scheduler_hints': {'query': query}} server = self.api.post_server(request) server = self._wait_for_state_change(server, 'ACTIVE') # Since we request host2 the server should be there despite host1 being # weighed higher. self.assertEqual( 'host2', server['OS-EXT-SRV-ATTR:hypervisor_hostname']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_legacy_v2_compatible_wrapper.py0000664000175000017500000000574400000000000026272 0ustar00zuulzuul00000000000000# Copyright 2015 Intel Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova.api import openstack from nova.api.openstack import wsgi from nova.tests.functional.api import client from nova.tests.functional import test_servers class LegacyV2CompatibleTestBase(test_servers.ServersTestBase): api_major_version = 'v2' def setUp(self): super(LegacyV2CompatibleTestBase, self).setUp() self._check_api_endpoint('/v2', [openstack.LegacyV2CompatibleWrapper]) def test_request_with_microversion_headers(self): self.api.microversion = '2.100' response = self.api.api_post('os-keypairs', {"keypair": {"name": "test"}}) self.assertNotIn(wsgi.API_VERSION_REQUEST_HEADER, response.headers) self.assertNotIn(wsgi.LEGACY_API_VERSION_REQUEST_HEADER, response.headers) self.assertNotIn('Vary', response.headers) self.assertNotIn('type', response.body["keypair"]) def test_request_without_additional_properties_check(self): self.api.microversion = '2.100' response = self.api.api_post('os-keypairs', {"keypair": {"name": "test", "foooooo": "barrrrrr"}}) self.assertNotIn(wsgi.API_VERSION_REQUEST_HEADER, response.headers) self.assertNotIn(wsgi.LEGACY_API_VERSION_REQUEST_HEADER, response.headers) self.assertNotIn('Vary', response.headers) self.assertNotIn('type', response.body["keypair"]) def test_request_with_pattern_properties_check(self): server = self._build_server() post = {'server': server} created_server = self.api.post_server(post) self._wait_for_state_change(created_server, 'ACTIVE') response = self.api.post_server_metadata(created_server['id'], {'a': 'b'}) self.assertEqual(response, {'a': 'b'}) def test_request_with_pattern_properties_with_avoid_metadata(self): server = self._build_server() post = {'server': server} created_server = self.api.post_server(post) exc = self.assertRaises(client.OpenStackApiException, self.api.post_server_metadata, created_server['id'], {'a': 'b', 'x' * 300: 'y', 'h' * 300: 'i'}) self.assertEqual(exc.response.status_code, 400) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_list_servers_ip_filter.py0000664000175000017500000001202000000000000025222 0ustar00zuulzuul00000000000000# Copyright 2017 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
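# [Editor's note] Context for the tests in this module (hedged summary, not
# part of the original file): the ip filter is passed as a query parameter on
# the list call, for example
#
#     GET /servers/detail?ip=10.1.1.1
#
# and the compute API treats the value as a regular expression matched against
# each server's fixed IPs, which is why 10.1.1.1 also matches 10.1.1.10.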
import time import nova.scheduler.utils import nova.servicegroup from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.unit import cast_as_call import nova.tests.unit.image.fake from nova.tests.unit import policy_fixture class TestListServersIpFilter(test.TestCase): def setUp(self): super(TestListServersIpFilter, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) self.neutron = self.useFixture( nova_fixtures.NeutronFixture(self)) # Add a 2nd port to the neutron fixture to have multiple ports self.neutron.create_port({'port': self.neutron.port_2}) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api # the image fake backend needed for image discovery nova.tests.unit.image.fake.stub_out_image_service(self) self.useFixture(func_fixtures.PlacementFixture()) self.start_service('conductor') self.flags(enabled_filters=['ComputeFilter'], group='filter_scheduler') self.start_service('scheduler') self.start_service('compute') self.useFixture(cast_as_call.CastAsCall(self)) self.image_id = self.api.get_images()[0]['id'] self.flavor_id = self.api.get_flavors()[0]['id'] def wait_until_active_or_timeout(self, server_id): timeout = 0.0 server = self.api.get_server(server_id) while server['status'] != "ACTIVE" and timeout < 10.0: time.sleep(.1) timeout += .1 server = self.api.get_server(server_id) if server['status'] != "ACTIVE": self.fail( 'Timed out waiting for server %s to be ACTIVE.' % server_id) return server def test_list_servers_with_ip_filters_regex(self): """Tests listing servers with IP filter regex. The compute API will perform a regex match on the ip filter and include all servers that have fixed IPs which match the filter. For example, consider we have two servers. The first server has IP 10.1.1.1 and the second server has IP 10.1.1.10. If we list servers with filter ip=10.1.1.1 we should get back both servers because 10.1.1.1 is a prefix of 10.1.1.10. If we list servers with filter ip=10.1.1.10 then we should only get back the second server. """ # We're going to create two servers with unique ports, but the IPs on # the ports are close enough that one matches the regex for the other. # The ports used in this test are defined in the NeutronFixture. for port_id in (self.neutron.port_1['id'], self.neutron.port_2['id']): server = dict( name=port_id, imageRef=self.image_id, flavorRef=self.flavor_id, networks=[{'port': port_id}]) server = self.api.post_server({'server': server}) self.addCleanup(self.api.delete_server, server['id']) self.wait_until_active_or_timeout(server['id']) # Now list servers and filter on the IP of the first server. servers = self.api.get_servers( search_opts={ 'ip': self.neutron.port_1['fixed_ips'][0]['ip_address']}) # We should get both servers back because the IP on the first server is # a prefix of the IP on the second server. self.assertEqual(2, len(servers), 'Unexpected number of servers returned when ' 'filtering by ip=%s: %s' % ( self.neutron.port_1['fixed_ips'][0]['ip_address'], servers)) # Now list servers and filter on the IP of the second server. servers = self.api.get_servers( search_opts={ 'ip': self.neutron.port_2['fixed_ips'][0]['ip_address']}) # We should get one server back because the IP on the second server is # unique between both servers. 
self.assertEqual(1, len(servers), 'Unexpected number of servers returned when ' 'filtering by ip=%s: %s' % ( self.neutron.port_2['fixed_ips'][0]['ip_address'], servers)) self.assertEqual(self.neutron.port_2['fixed_ips'][0]['ip_address'], servers[0]['addresses']['private'][0]['addr']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/test_login.py0000664000175000017500000000215000000000000021554 0ustar00zuulzuul00000000000000# Copyright 2011 Justin Santa Barbara # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from nova.tests.functional import integrated_helpers LOG = logging.getLogger(__name__) class LoginTest(integrated_helpers._IntegratedTestBase): api_major_version = 'v2' def test_login(self): # Simple check - we list flavors - so we know we're logged in. flavors = self.api.get_flavors() for flavor in flavors: LOG.debug("flavor: %s", flavor) class LoginTestV21(LoginTest): api_major_version = 'v2.1' ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_metadata.py0000664000175000017500000001630000000000000022226 0ustar00zuulzuul00000000000000# Copyright 2016 Rackspace Australia # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
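# [Editor's note] Context for the vendordata tests below (hedged summary, not
# part of the original file): the dynamic targets set via self.flags() follow
# a "<name>@<url>" convention, roughly equivalent to a nova.conf of
#
#     [api]
#     vendordata_providers = StaticJSON,DynamicJSON
#     vendordata_dynamic_targets = testing@http://127.0.0.1:123
#
# and each target's JSON is exposed under its <name> key in
# openstack/2016-10-06/vendor_data2.json.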
from __future__ import absolute_import import fixtures import jsonschema import os import requests from oslo_serialization import jsonutils from oslo_utils import uuidutils from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as fake_image class fake_result(object): def __init__(self, result): self.status_code = 200 self.text = jsonutils.dumps(result) real_request = requests.request def fake_request(obj, url, method, **kwargs): if url.startswith('http://127.0.0.1:123'): return fake_result({'a': 1, 'b': 'foo'}) if url.startswith('http://127.0.0.1:124'): return fake_result({'c': 3}) if url.startswith('http://127.0.0.1:125'): return fake_result(jsonutils.loads(kwargs.get('data', '{}'))) return real_request(method, url, **kwargs) class MetadataTest(test.TestCase, integrated_helpers.InstanceHelperMixin): def setUp(self): super(MetadataTest, self).setUp() fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) self.start_service('conductor') self.start_service('scheduler') self.api = self.useFixture( nova_fixtures.OSAPIFixture(api_version='v2.1')).api self.start_service('compute') # create a server for the tests server = self._build_server(name='test') server = self.api.post_server({'server': server}) self.server = self._wait_for_state_change(server, 'ACTIVE') self.api_fixture = self.useFixture(nova_fixtures.OSMetadataServer()) self.md_url = self.api_fixture.md_url # make sure that the metadata service returns information about the # server we created above def fake_get_fixed_ip_by_address(self, ctxt, address): return {'instance_uuid': server['id']} self.useFixture( fixtures.MonkeyPatch( 'nova.network.neutron.API.get_fixed_ip_by_address', fake_get_fixed_ip_by_address)) def test_lookup_metadata_root_url(self): res = requests.request('GET', self.md_url, timeout=5) self.assertEqual(200, res.status_code) def test_lookup_metadata_openstack_url(self): url = '%sopenstack' % self.md_url res = requests.request('GET', url, timeout=5, headers={'X-Forwarded-For': '127.0.0.2'}) self.assertEqual(200, res.status_code) def test_lookup_metadata_data_url(self): url = '%sopenstack/latest/meta_data.json' % self.md_url res = requests.request('GET', url, timeout=5) self.assertEqual(200, res.status_code) j = jsonutils.loads(res.text) self.assertIn('hostname', j) self.assertEqual('test.novalocal', j['hostname']) def test_lookup_external_service(self): self.flags( vendordata_providers=['StaticJSON', 'DynamicJSON'], vendordata_dynamic_targets=[ 'testing@http://127.0.0.1:123', 'hamster@http://127.0.0.1:123' ], group='api' ) self.useFixture(fixtures.MonkeyPatch( 'keystoneauth1.session.Session.request', fake_request)) url = '%sopenstack/2016-10-06/vendor_data2.json' % self.md_url res = requests.request('GET', url, timeout=5) self.assertEqual(200, res.status_code) j = jsonutils.loads(res.text) self.assertEqual({}, j['static']) self.assertEqual(1, j['testing']['a']) self.assertEqual('foo', j['testing']['b']) self.assertEqual(1, j['hamster']['a']) self.assertEqual('foo', j['hamster']['b']) def test_lookup_external_service_no_overwrite(self): self.flags( vendordata_providers=['DynamicJSON'], vendordata_dynamic_targets=[ 'testing@http://127.0.0.1:123', 'testing@http://127.0.0.1:124' ], group='api' ) 
self.useFixture(fixtures.MonkeyPatch( 'keystoneauth1.session.Session.request', fake_request)) url = '%sopenstack/2016-10-06/vendor_data2.json' % self.md_url res = requests.request('GET', url, timeout=5) self.assertEqual(200, res.status_code) j = jsonutils.loads(res.text) self.assertNotIn('static', j) self.assertEqual(1, j['testing']['a']) self.assertEqual('foo', j['testing']['b']) self.assertNotIn('c', j['testing']) def test_lookup_external_service_passes_data(self): # Much of the data we pass to the REST service is missing because of # the way we've created the fake instance, but we should at least try # and ensure we're passing _some_ data through to the external REST # service. self.flags( vendordata_providers=['DynamicJSON'], vendordata_dynamic_targets=[ 'testing@http://127.0.0.1:125' ], group='api' ) self.useFixture(fixtures.MonkeyPatch( 'keystoneauth1.session.Session.request', fake_request)) url = '%sopenstack/2016-10-06/vendor_data2.json' % self.md_url res = requests.request('GET', url, timeout=5) self.assertEqual(200, res.status_code) j = jsonutils.loads(res.text) self.assertIn('instance-id', j['testing']) self.assertTrue(uuidutils.is_uuid_like(j['testing']['instance-id'])) self.assertIn('hostname', j['testing']) self.assertEqual(self.server['tenant_id'], j['testing']['project-id']) self.assertIn('metadata', j['testing']) self.assertIn('image-id', j['testing']) self.assertIn('user-data', j['testing']) def test_network_data_matches_schema(self): self.useFixture(fixtures.MonkeyPatch( 'keystoneauth1.session.Session.request', fake_request)) url = '%sopenstack/latest/network_data.json' % self.md_url res = requests.request('GET', url, timeout=5) self.assertEqual(200, res.status_code) # load the jsonschema for network_data schema_file = os.path.normpath(os.path.join( os.path.dirname(os.path.abspath(__file__)), "../../../doc/api_schemas/network_data.json")) with open(schema_file, 'rb') as f: schema = jsonutils.load(f) jsonschema.validate(res.json(), schema) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_middleware.py0000664000175000017500000001275000000000000022570 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests to assert that various incorporated middleware works as expected. """ from oslo_config import cfg from nova.tests.functional.api_sample_tests import api_sample_base class TestCORSMiddleware(api_sample_base.ApiSampleTestBaseV21): '''Provide a basic smoke test to ensure CORS middleware is active. The tests below provide minimal confirmation that the CORS middleware is active, and may be configured. For comprehensive tests, please consult the test suite in oslo_middleware. ''' def setUp(self): # Here we monkeypatch GroupAttr.__getattr__, necessary because the # paste.ini method of initializing this middleware creates its own # ConfigOpts instance, bypassing the regular config fixture. 
# Mocking also does not work, as accessing an attribute on a mock # object will return a MagicMock instance, which will fail # configuration type checks. def _mock_getattr(instance, key): if key != 'allowed_origin': return self._original_call_method(instance, key) return "http://valid.example.com" self._original_call_method = cfg.ConfigOpts.GroupAttr.__getattr__ cfg.ConfigOpts.GroupAttr.__getattr__ = _mock_getattr # With the project_id in the URL, we get the 300 'multiple choices' # response from nova.api.openstack.compute.versions.Versions. self.exp_version_status = 300 if self.USE_PROJECT_ID else 200 # Initialize the application after all the config overrides are in # place. super(TestCORSMiddleware, self).setUp() def tearDown(self): super(TestCORSMiddleware, self).tearDown() # Reset the configuration overrides. cfg.ConfigOpts.GroupAttr.__getattr__ = self._original_call_method def test_valid_cors_options_request(self): response = self._do_options('servers', headers={ 'Origin': 'http://valid.example.com', 'Access-Control-Request-Method': 'GET' }) self.assertEqual(response.status_code, 200) self.assertIn('Access-Control-Allow-Origin', response.headers) self.assertEqual('http://valid.example.com', response.headers['Access-Control-Allow-Origin']) def test_invalid_cors_options_request(self): response = self._do_options('servers', headers={ 'Origin': 'http://invalid.example.com', 'Access-Control-Request-Method': 'GET' }) self.assertEqual(response.status_code, 200) self.assertNotIn('Access-Control-Allow-Origin', response.headers) def test_valid_cors_get_request(self): response = self._do_get('servers', headers={ 'Origin': 'http://valid.example.com', 'Access-Control-Request-Method': 'GET' }) self.assertEqual(response.status_code, 200) self.assertIn('Access-Control-Allow-Origin', response.headers) self.assertEqual('http://valid.example.com', response.headers['Access-Control-Allow-Origin']) def test_invalid_cors_get_request(self): response = self._do_get('servers', headers={ 'Origin': 'http://invalid.example.com', 'Access-Control-Request-Method': 'GET' }) self.assertEqual(response.status_code, 200) self.assertNotIn('Access-Control-Allow-Origin', response.headers) def test_valid_cors_get_versions_request(self): response = self._do_get('', strip_version=True, headers={ 'Origin': 'http://valid.example.com', 'Access-Control-Request-Method': 'GET' }) self.assertEqual(response.status_code, self.exp_version_status) self.assertIn('Access-Control-Allow-Origin', response.headers) self.assertEqual('http://valid.example.com', response.headers['Access-Control-Allow-Origin']) def test_invalid_cors_get_versions_request(self): response = self._do_get('', strip_version=True, headers={ 'Origin': 'http://invalid.example.com', 'Access-Control-Request-Method': 'GET' }) self.assertEqual(response.status_code, self.exp_version_status) self.assertNotIn('Access-Control-Allow-Origin', response.headers) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/test_monkey_patch.py0000664000175000017500000000352200000000000023131 0ustar00zuulzuul00000000000000# Copyright 2020 Red Hat, Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # NOTE(artom) This file exists to test eventlet monkeypatching. How and what # eventlet monkeypatches can be controlled by environment variables that # are processed by eventlet at import-time (for example, EVENTLET_NO_GREENDNS). # Nova manages all of this in nova.monkey_patch. Therefore, nova.monkey_patch # must be the first thing to import eventlet. As nova.tests.functional.__init__ # imports nova.monkey_patch, we're OK here. import socket import traceback from nova import test class TestMonkeyPatch(test.TestCase): def test_greendns_is_disabled(self): """Try to resolve a fake fqdn. If we see greendns mentioned in the traceback of the raised exception, it means we've not actually disabled greendns. See the TODO and NOTE in nova.monkey_patch to understand why greendns needs to be disabled. """ raised = False try: socket.gethostbyname('goat.fake') except Exception: tb = traceback.format_exc() # NOTE(artom) If we've correctly disabled greendns, we expect the # traceback to not contain any reference to it. self.assertNotIn('greendns.py', tb) raised = True self.assertTrue(raised) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_multiattach.py0000664000175000017500000000652300000000000022773 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests import fixtures as nova_fixtures from nova.tests.functional import integrated_helpers class TestMultiattachVolumes(integrated_helpers._IntegratedTestBase): """Functional tests for creating a server from a multiattach volume and attaching a multiattach volume to a server. Uses the CinderFixture fixture with a specific volume ID to represent a multiattach volume. """ # These are all used in _IntegratedTestBase. api_major_version = 'v2.1' microversion = '2.60' _image_ref_parameter = 'imageRef' _flavor_ref_parameter = 'flavorRef' def setUp(self): # Everything has been upgraded to the latest code to support # multiattach. self.useFixture(nova_fixtures.AllServicesCurrent()) super(TestMultiattachVolumes, self).setUp() self.useFixture(nova_fixtures.CinderFixture(self)) def test_boot_from_volume_and_attach_to_second_server(self): """This scenario creates a server from the multiattach volume, waits for it to be ACTIVE, and then attaches the volume to another server.
""" volume_id = nova_fixtures.CinderFixture.MULTIATTACH_VOL create_req = self._build_server( image_uuid='', flavor_id='1', networks='none') create_req['block_device_mapping_v2'] = [{ 'uuid': volume_id, 'source_type': 'volume', 'destination_type': 'volume', 'delete_on_termination': False, 'boot_index': 0 }] server = self.api.post_server({'server': create_req}) self._wait_for_state_change(server, 'ACTIVE') # Make sure the volume is attached to the first server. attachments = self.api.api_get( '/servers/%s/os-volume_attachments' % server['id']).body[ 'volumeAttachments'] self.assertEqual(1, len(attachments)) self.assertEqual(server['id'], attachments[0]['serverId']) self.assertEqual(volume_id, attachments[0]['volumeId']) # Now create a second server and attach the same volume to that. server2 = self._create_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', flavor_id='1', networks='none') # Attach the volume to the second server. self.api.api_post('/servers/%s/os-volume_attachments' % server2['id'], {'volumeAttachment': {'volumeId': volume_id}}) # Make sure the volume is attached to the second server. attachments = self.api.api_get( '/servers/%s/os-volume_attachments' % server2['id']).body[ 'volumeAttachments'] self.assertEqual(1, len(attachments)) self.assertEqual(server2['id'], attachments[0]['serverId']) self.assertEqual(volume_id, attachments[0]['volumeId']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_nova_manage.py0000664000175000017500000023223300000000000022726 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from __future__ import absolute_import import collections import mock import fixtures from neutronclient.common import exceptions as neutron_client_exc import os_resource_classes as orc from oslo_utils.fixture import uuidsentinel from six.moves import StringIO from nova.cmd import manage from nova import config from nova import context from nova import exception from nova.network import constants from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.functional import test_servers from nova.tests.unit.image import fake as image_fake CONF = config.CONF INCOMPLETE_CONSUMER_ID = '00000000-0000-0000-0000-000000000000' class NovaManageDBIronicTest(test.TestCase): def setUp(self): super(NovaManageDBIronicTest, self).setUp() self.commands = manage.DbCommands() self.context = context.RequestContext('fake-user', 'fake-project') self.service1 = objects.Service(context=self.context, host='fake-host1', binary='nova-compute', topic='fake-host1', report_count=1, disabled=False, disabled_reason=None, availability_zone='nova', forced_down=False) self.service1.create() self.service2 = objects.Service(context=self.context, host='fake-host2', binary='nova-compute', topic='fake-host2', report_count=1, disabled=False, disabled_reason=None, availability_zone='nova', forced_down=False) self.service2.create() self.service3 = objects.Service(context=self.context, host='fake-host3', binary='nova-compute', topic='fake-host3', report_count=1, disabled=False, disabled_reason=None, availability_zone='nova', forced_down=False) self.service3.create() self.cn1 = objects.ComputeNode(context=self.context, service_id=self.service1.id, host='fake-host1', hypervisor_type='ironic', vcpus=1, memory_mb=1024, local_gb=10, vcpus_used=1, memory_mb_used=1024, local_gb_used=10, hypervisor_version=0, hypervisor_hostname='fake-node1', cpu_info='{}') self.cn1.create() self.cn2 = objects.ComputeNode(context=self.context, service_id=self.service1.id, host='fake-host1', hypervisor_type='ironic', vcpus=1, memory_mb=1024, local_gb=10, vcpus_used=1, memory_mb_used=1024, local_gb_used=10, hypervisor_version=0, hypervisor_hostname='fake-node2', cpu_info='{}') self.cn2.create() self.cn3 = objects.ComputeNode(context=self.context, service_id=self.service2.id, host='fake-host2', hypervisor_type='ironic', vcpus=1, memory_mb=1024, local_gb=10, vcpus_used=1, memory_mb_used=1024, local_gb_used=10, hypervisor_version=0, hypervisor_hostname='fake-node3', cpu_info='{}') self.cn3.create() self.cn4 = objects.ComputeNode(context=self.context, service_id=self.service3.id, host='fake-host3', hypervisor_type='libvirt', vcpus=1, memory_mb=1024, local_gb=10, vcpus_used=1, memory_mb_used=1024, local_gb_used=10, hypervisor_version=0, hypervisor_hostname='fake-node4', cpu_info='{}') self.cn4.create() self.cn5 = objects.ComputeNode(context=self.context, service_id=self.service2.id, host='fake-host2', hypervisor_type='ironic', vcpus=1, memory_mb=1024, local_gb=10, vcpus_used=1, memory_mb_used=1024, local_gb_used=10, hypervisor_version=0, hypervisor_hostname='fake-node5', cpu_info='{}') self.cn5.create() self.insts = [] for cn in (self.cn1, self.cn2, self.cn3, self.cn4, self.cn4, self.cn5): flavor = objects.Flavor(extra_specs={}) inst = objects.Instance(context=self.context, user_id=self.context.user_id, project_id=self.context.project_id, flavor=flavor, node=cn.hypervisor_hostname) inst.create() 
self.insts.append(inst) self.ironic_insts = [i for i in self.insts if i.node != self.cn4.hypervisor_hostname] self.virt_insts = [i for i in self.insts if i.node == self.cn4.hypervisor_hostname] def test_ironic_flavor_migration_by_host_and_node(self): ret = self.commands.ironic_flavor_migration('test', 'fake-host1', 'fake-node2', False, False) self.assertEqual(0, ret) k = 'resources:CUSTOM_TEST' for inst in self.ironic_insts: inst.refresh() if inst.node == 'fake-node2': self.assertIn(k, inst.flavor.extra_specs) self.assertEqual('1', inst.flavor.extra_specs[k]) else: self.assertNotIn(k, inst.flavor.extra_specs) for inst in self.virt_insts: inst.refresh() self.assertNotIn(k, inst.flavor.extra_specs) def test_ironic_flavor_migration_by_host(self): ret = self.commands.ironic_flavor_migration('test', 'fake-host1', None, False, False) self.assertEqual(0, ret) k = 'resources:CUSTOM_TEST' for inst in self.ironic_insts: inst.refresh() if inst.node in ('fake-node1', 'fake-node2'): self.assertIn(k, inst.flavor.extra_specs) self.assertEqual('1', inst.flavor.extra_specs[k]) else: self.assertNotIn(k, inst.flavor.extra_specs) for inst in self.virt_insts: inst.refresh() self.assertNotIn(k, inst.flavor.extra_specs) def test_ironic_flavor_migration_by_host_not_ironic(self): ret = self.commands.ironic_flavor_migration('test', 'fake-host3', None, False, False) self.assertEqual(1, ret) k = 'resources:CUSTOM_TEST' for inst in self.ironic_insts: inst.refresh() self.assertNotIn(k, inst.flavor.extra_specs) for inst in self.virt_insts: inst.refresh() self.assertNotIn(k, inst.flavor.extra_specs) def test_ironic_flavor_migration_all_hosts(self): ret = self.commands.ironic_flavor_migration('test', None, None, True, False) self.assertEqual(0, ret) k = 'resources:CUSTOM_TEST' for inst in self.ironic_insts: inst.refresh() self.assertIn(k, inst.flavor.extra_specs) self.assertEqual('1', inst.flavor.extra_specs[k]) for inst in self.virt_insts: inst.refresh() self.assertNotIn(k, inst.flavor.extra_specs) def test_ironic_flavor_migration_invalid(self): # No host or node and not "all" ret = self.commands.ironic_flavor_migration('test', None, None, False, False) self.assertEqual(3, ret) # No host, only node ret = self.commands.ironic_flavor_migration('test', None, 'fake-node', False, False) self.assertEqual(3, ret) # Asked for all but provided a node ret = self.commands.ironic_flavor_migration('test', None, 'fake-node', True, False) self.assertEqual(3, ret) # Asked for all but provided a host ret = self.commands.ironic_flavor_migration('test', 'fake-host', None, True, False) self.assertEqual(3, ret) # Asked for all but provided a host and node ret = self.commands.ironic_flavor_migration('test', 'fake-host', 'fake-node', True, False) self.assertEqual(3, ret) # Did not provide a resource_class ret = self.commands.ironic_flavor_migration(None, 'fake-host', 'fake-node', False, False) self.assertEqual(3, ret) def test_ironic_flavor_migration_no_match(self): ret = self.commands.ironic_flavor_migration('test', 'fake-nonexist', None, False, False) self.assertEqual(1, ret) ret = self.commands.ironic_flavor_migration('test', 'fake-nonexist', 'fake-node', False, False) self.assertEqual(1, ret) def test_ironic_two_instances(self): # NOTE(danms): This shouldn't be possible, but simulate it like # someone hacked the database, which should also cover any other # way this could happen. # Since we created two instances on cn4 in setUp() we can convert that # to an ironic host and cause the two-instances-on-one-ironic paradox # to happen. 
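# Flipping cn4's hypervisor_type to 'ironic' below makes it look like a single
# ironic node hosting two instances, which the command treats as an
# inconsistent database state and signals through its return code (asserted
# as 2 below).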
self.cn4.hypervisor_type = 'ironic' self.cn4.save() ret = self.commands.ironic_flavor_migration('test', 'fake-host3', 'fake-node4', False, False) self.assertEqual(2, ret) class NovaManageCellV2Test(test.TestCase): def setUp(self): super(NovaManageCellV2Test, self).setUp() self.commands = manage.CellV2Commands() self.context = context.RequestContext('fake-user', 'fake-project') self.service1 = objects.Service(context=self.context, host='fake-host1', binary='nova-compute', topic='fake-host1', report_count=1, disabled=False, disabled_reason=None, availability_zone='nova', forced_down=False) self.service1.create() self.cn1 = objects.ComputeNode(context=self.context, service_id=self.service1.id, host='fake-host1', hypervisor_type='ironic', vcpus=1, memory_mb=1024, local_gb=10, vcpus_used=1, memory_mb_used=1024, local_gb_used=10, hypervisor_version=0, hypervisor_hostname='fake-node1', cpu_info='{}') self.cn1.create() def test_delete_host(self): cells = objects.CellMappingList.get_all(self.context) self.commands.discover_hosts() # We should have one mapped node cns = objects.ComputeNodeList.get_all(self.context) self.assertEqual(1, len(cns)) self.assertEqual(1, cns[0].mapped) for cell in cells: r = self.commands.delete_host(cell.uuid, 'fake-host1') if r == 0: break # Our node should now be unmapped cns = objects.ComputeNodeList.get_all(self.context) self.assertEqual(1, len(cns)) self.assertEqual(0, cns[0].mapped) def test_delete_cell_force_unmaps_computes(self): cells = objects.CellMappingList.get_all(self.context) self.commands.discover_hosts() # We should have one host mapping hms = objects.HostMappingList.get_all(self.context) self.assertEqual(1, len(hms)) # We should have one mapped node cns = objects.ComputeNodeList.get_all(self.context) self.assertEqual(1, len(cns)) self.assertEqual(1, cns[0].mapped) for cell in cells: res = self.commands.delete_cell(cell.uuid, force=True) self.assertEqual(0, res) # The host mapping should be deleted since the force option is used hms = objects.HostMappingList.get_all(self.context) self.assertEqual(0, len(hms)) # All our cells should be deleted cells = objects.CellMappingList.get_all(self.context) self.assertEqual(0, len(cells)) # Our node should now be unmapped cns = objects.ComputeNodeList.get_all(self.context) self.assertEqual(1, len(cns)) self.assertEqual(0, cns[0].mapped) class TestNovaManagePlacementHealAllocations( integrated_helpers.ProviderUsageBaseTestCase): """Functional tests for nova-manage placement heal_allocations""" # This is required by the parent class. compute_driver = 'fake.SmallFakeDriver' # We want to test iterating across multiple cells. NUMBER_OF_CELLS = 2 def setUp(self): super(TestNovaManagePlacementHealAllocations, self).setUp() self.useFixture(nova_fixtures.CinderFixture(self)) self.cli = manage.PlacementCommands() # We need to start a compute in each non-cell0 cell. for cell_name, cell_mapping in self.cell_mappings.items(): if cell_mapping.uuid == objects.CellMapping.CELL0_UUID: continue self._start_compute(cell_name, cell_name=cell_name) # Make sure we have two hypervisors reported in the API. hypervisors = self.admin_api.api_get( '/os-hypervisors').body['hypervisors'] self.assertEqual(2, len(hypervisors)) self.flavor = self.api.get_flavors()[0] self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) # We need to mock the FilterScheduler to not use Placement so that # allocations won't be created during scheduling and then we can heal # them in the CLI. 
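# For reference, self.cli drives the same code as the operator-facing command
# "nova-manage placement heal_allocations" (quoted with --max-count in the
# docstrings below); the keyword arguments used in these tests (max_count,
# verbose, dry_run, instance_uuid, cell_uuid) map to the corresponding CLI
# options, though the exact option spellings may vary between releases.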
self.scheduler_service.manager.driver.USES_ALLOCATION_CANDIDATES = \ False def _boot_and_assert_no_allocations(self, flavor, hostname, volume_backed=False): """Creates a server on the given host and asserts neither has usage :param flavor: the flavor used to create the server :param hostname: the host on which to create the server :param volume_backed: True if the server should be volume-backed and as a result not have any DISK_GB allocation :returns: two-item tuple of the server and the compute node resource provider uuid """ server_req = self._build_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', flavor_id=flavor['id'], networks='none') server_req['availability_zone'] = 'nova:%s' % hostname if volume_backed: vol_id = nova_fixtures.CinderFixture.IMAGE_BACKED_VOL server_req['block_device_mapping_v2'] = [{ 'source_type': 'volume', 'destination_type': 'volume', 'boot_index': 0, 'uuid': vol_id }] server_req['imageRef'] = '' created_server = self.api.post_server({'server': server_req}) server = self._wait_for_state_change(created_server, 'ACTIVE') # Verify that our source host is what the server ended up on self.assertEqual(hostname, server['OS-EXT-SRV-ATTR:host']) # Check that the compute node resource provider has no allocations. rp_uuid = self._get_provider_uuid_by_host(hostname) provider_usages = self._get_provider_usages(rp_uuid) for resource_class, usage in provider_usages.items(): self.assertEqual( 0, usage, 'Compute node resource provider %s should not have %s ' 'usage; something must be wrong in test setup.' % (hostname, resource_class)) # Check that the server has no allocations. allocations = self._get_allocations_by_server_uuid(server['id']) self.assertEqual({}, allocations, 'Server should not have allocations; something must ' 'be wrong in test setup.') return server, rp_uuid def _assert_healed(self, server, rp_uuid): allocations = self._get_allocations_by_server_uuid(server['id']) self.assertIn(rp_uuid, allocations, 'Allocations not found for server %s and compute node ' 'resource provider. %s\nOutput:%s' % (server['id'], rp_uuid, self.output.getvalue())) self.assertFlavorMatchesAllocation(self.flavor, server['id'], rp_uuid) def test_heal_allocations_paging(self): """This test runs the following scenario: * Schedule server1 to cell1 and assert it doesn't have allocations. * Schedule server2 to cell2 and assert it doesn't have allocations. * Run "nova-manage placement heal_allocations --max-count 1" to make sure we stop with just one instance and the return code is 1. * Run "nova-manage placement heal_allocations" and assert both instances now have allocations against their respective compute node resource providers. """ server1, rp_uuid1 = self._boot_and_assert_no_allocations( self.flavor, 'cell1') server2, rp_uuid2 = self._boot_and_assert_no_allocations( self.flavor, 'cell2') # heal server1 and server2 in separate calls for x in range(2): result = self.cli.heal_allocations(max_count=1, verbose=True) self.assertEqual(1, result, self.output.getvalue()) output = self.output.getvalue() self.assertIn('Max count reached. Processed 1 instances.', output) # If this is the 2nd call, we'll have skipped the first instance.
if x == 0: self.assertNotIn('is up-to-date', output) else: self.assertIn('is up-to-date', output) self._assert_healed(server1, rp_uuid1) self._assert_healed(server2, rp_uuid2) # run it again to make sure nothing was processed result = self.cli.heal_allocations(verbose=True) self.assertEqual(4, result, self.output.getvalue()) self.assertIn('is up-to-date', self.output.getvalue()) def test_heal_allocations_paging_max_count_more_than_num_instances(self): """Sets up 2 instances in cell1 and 1 instance in cell2. Then specify --max-count=10, processes 3 instances, rc is 0 """ servers = [] # This is really a list of 2-item tuples. for x in range(2): servers.append( self._boot_and_assert_no_allocations(self.flavor, 'cell1')) servers.append( self._boot_and_assert_no_allocations(self.flavor, 'cell2')) result = self.cli.heal_allocations(max_count=10, verbose=True) self.assertEqual(0, result, self.output.getvalue()) self.assertIn('Processed 3 instances.', self.output.getvalue()) for server, rp_uuid in servers: self._assert_healed(server, rp_uuid) def test_heal_allocations_paging_more_instances_remain(self): """Tests that there is one instance in cell1 and two instances in cell2, with a --max-count=2. This tests that we stop in cell2 once max_count is reached. """ servers = [] # This is really a list of 2-item tuples. servers.append( self._boot_and_assert_no_allocations(self.flavor, 'cell1')) for x in range(2): servers.append( self._boot_and_assert_no_allocations(self.flavor, 'cell2')) result = self.cli.heal_allocations(max_count=2, verbose=True) self.assertEqual(1, result, self.output.getvalue()) self.assertIn('Max count reached. Processed 2 instances.', self.output.getvalue()) # Assert that allocations were healed on the instances we expect. Order # works here because cell mappings are retrieved by id in ascending # order so oldest to newest, and instances are also retrieved from each # cell by created_at in ascending order, which matches the order we put # created servers in our list. for x in range(2): self._assert_healed(*servers[x]) # And assert the remaining instance does not have allocations. allocations = self._get_allocations_by_server_uuid( servers[2][0]['id']) self.assertEqual({}, allocations) def test_heal_allocations_unlimited(self): """Sets up 2 instances in cell1 and 1 instance in cell2. Then don't specify --max-count, processes 3 instances, rc is 0. """ servers = [] # This is really a list of 2-item tuples. for x in range(2): servers.append( self._boot_and_assert_no_allocations(self.flavor, 'cell1')) servers.append( self._boot_and_assert_no_allocations(self.flavor, 'cell2')) result = self.cli.heal_allocations(verbose=True) self.assertEqual(0, result, self.output.getvalue()) self.assertIn('Processed 3 instances.', self.output.getvalue()) for server, rp_uuid in servers: self._assert_healed(server, rp_uuid) def test_heal_allocations_shelved(self): """Tests the scenario that an instance with no allocations is shelved so heal_allocations skips it (since the instance is not on a host). """ server, rp_uuid = self._boot_and_assert_no_allocations( self.flavor, 'cell1') self.api.post_server_action(server['id'], {'shelve': None}) # The server status goes to SHELVED_OFFLOADED before the host/node # is nulled out in the compute service, so we also have to wait for # that so we don't race when we run heal_allocations. 
server = self._wait_for_server_parameter(server, {'OS-EXT-SRV-ATTR:host': None, 'status': 'SHELVED_OFFLOADED'}) result = self.cli.heal_allocations(verbose=True) self.assertEqual(4, result, self.output.getvalue()) self.assertIn('Instance %s is not on a host.' % server['id'], self.output.getvalue()) # Check that the server has no allocations. allocations = self._get_allocations_by_server_uuid(server['id']) self.assertEqual({}, allocations, 'Shelved-offloaded server should not have ' 'allocations.') def test_heal_allocations_task_in_progress(self): """Tests the case that heal_allocations skips over an instance which is undergoing a task state transition (in this case pausing). """ server, rp_uuid = self._boot_and_assert_no_allocations( self.flavor, 'cell1') def fake_pause_instance(_self, ctxt, instance, *a, **kw): self.assertEqual('pausing', instance.task_state) # We have to stub out pause_instance so that the instance is stuck with # task_state != None. self.stub_out('nova.compute.manager.ComputeManager.pause_instance', fake_pause_instance) self.api.post_server_action(server['id'], {'pause': None}) result = self.cli.heal_allocations(verbose=True) self.assertEqual(4, result, self.output.getvalue()) # Check that the server has no allocations. allocations = self._get_allocations_by_server_uuid(server['id']) self.assertEqual({}, allocations, 'Server undergoing task state transition should ' 'not have allocations.') # Assert something was logged for this instance when it was skipped. self.assertIn('Instance %s is undergoing a task state transition: ' 'pausing' % server['id'], self.output.getvalue()) def test_heal_allocations_ignore_deleted_server(self): """Creates two servers, deletes one, and then runs heal_allocations to make sure deleted servers are filtered out. """ # Create a server that we'll leave alive self._boot_and_assert_no_allocations(self.flavor, 'cell1') # and another that we'll delete server, _ = self._boot_and_assert_no_allocations(self.flavor, 'cell1') self._delete_server(server) result = self.cli.heal_allocations(verbose=True) self.assertEqual(0, result, self.output.getvalue()) self.assertIn('Processed 1 instances.', self.output.getvalue()) def test_heal_allocations_update_sentinel_consumer(self): """Tests the scenario that allocations were created before microversion 1.8 when consumer (project_id and user_id) were not required so the consumer information is using sentinel values from config. Since the hacked scheduler used in this test class won't actually create allocations during scheduling, we have to create the allocations out-of-band and then run our heal routine to see they get updated with the instance project and user information. """ server, rp_uuid = self._boot_and_assert_no_allocations( self.flavor, 'cell1') # Now we'll create allocations using microversion < 1.8 to so that # placement creates the consumer record with the config-based project # and user values. alloc_body = { "allocations": [ { "resource_provider": { "uuid": rp_uuid }, "resources": { "MEMORY_MB": self.flavor['ram'], "VCPU": self.flavor['vcpus'], "DISK_GB": self.flavor['disk'] } } ] } self.placement_api.put('/allocations/%s' % server['id'], alloc_body) # Make sure we did that correctly. Use version 1.12 so we can assert # the project_id and user_id are based on the sentinel values. 
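# Reading the allocations back at placement microversion 1.12 returns the
# consumer's project_id and user_id alongside the allocations themselves,
# which is what allows the sentinel-value assertions below.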
allocations = self.placement_api.get( '/allocations/%s' % server['id'], version='1.12').body self.assertEqual(INCOMPLETE_CONSUMER_ID, allocations['project_id']) self.assertEqual(INCOMPLETE_CONSUMER_ID, allocations['user_id']) allocations = allocations['allocations'] self.assertIn(rp_uuid, allocations) self.assertFlavorMatchesAllocation(self.flavor, server['id'], rp_uuid) # First do a dry run. result = self.cli.heal_allocations(verbose=True, dry_run=True) # Nothing changed so the return code should be 4. self.assertEqual(4, result, self.output.getvalue()) output = self.output.getvalue() self.assertIn('Processed 0 instances.', output) self.assertIn('[dry-run] Update allocations for instance %s' % server['id'], output) # Now run heal_allocations which should update the consumer info. result = self.cli.heal_allocations(verbose=True) self.assertEqual(0, result, self.output.getvalue()) output = self.output.getvalue() self.assertIn( 'Successfully updated allocations for', output) self.assertIn('Processed 1 instances.', output) # Now assert that the consumer was actually updated. allocations = self.placement_api.get( '/allocations/%s' % server['id'], version='1.12').body self.assertEqual(server['tenant_id'], allocations['project_id']) self.assertEqual(server['user_id'], allocations['user_id']) def test_heal_allocations_dry_run(self): """Tests to make sure the --dry-run option does not commit changes.""" # Create a server with no allocations. server, rp_uuid = self._boot_and_assert_no_allocations( self.flavor, 'cell1') result = self.cli.heal_allocations(verbose=True, dry_run=True) # Nothing changed so the return code should be 4. self.assertEqual(4, result, self.output.getvalue()) output = self.output.getvalue() self.assertIn('Processed 0 instances.', output) self.assertIn('[dry-run] Create allocations for instance ' '%s' % server['id'], output) self.assertIn(rp_uuid, output) def test_heal_allocations_specific_instance(self): """Tests the case that a specific instance is processed and only that instance even though there are two which require processing. """ # Create one that we won't process. self._boot_and_assert_no_allocations( self.flavor, 'cell1') # Create another that we will process specifically. server, rp_uuid = self._boot_and_assert_no_allocations( self.flavor, 'cell1', volume_backed=True) # First do a dry run to make sure two instances need processing. result = self.cli.heal_allocations( max_count=2, verbose=True, dry_run=True) # Nothing changed so the return code should be 4. self.assertEqual(4, result, self.output.getvalue()) output = self.output.getvalue() self.assertIn('Found 2 candidate instances', output) # Now run with our specific instance and it should be the only one # processed. Also run with max_count specified to show it's ignored. result = self.cli.heal_allocations( max_count=10, verbose=True, instance_uuid=server['id']) output = self.output.getvalue() self.assertEqual(0, result, self.output.getvalue()) self.assertIn('Found 1 candidate instances', output) self.assertIn('Processed 1 instances.', output) # There shouldn't be any messages about running in batches. self.assertNotIn('Running batches', output) # There shouldn't be any message about max count reached. self.assertNotIn('Max count reached.', output) # Make sure there is no DISK_GB allocation for the volume-backed # instance but there is a VCPU allocation based on the flavor. 
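# A volume-backed server gets its root disk from Cinder, so healing must not
# create a DISK_GB allocation for it; only the flavor's VCPU and MEMORY_MB
# resources should end up allocated against the compute node provider.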
allocs = self._get_allocations_by_server_uuid( server['id'])[rp_uuid]['resources'] self.assertNotIn('DISK_GB', allocs) self.assertEqual(self.flavor['vcpus'], allocs['VCPU']) # Now run it again on the specific instance and it should be done. result = self.cli.heal_allocations( verbose=True, instance_uuid=server['id']) output = self.output.getvalue() self.assertEqual(4, result, self.output.getvalue()) self.assertIn('Found 1 candidate instances', output) self.assertIn('Processed 0 instances.', output) # There shouldn't be any message about max count reached. self.assertNotIn('Max count reached.', output) # Delete the instance mapping and make sure that results in an error # when we run the command. ctxt = context.get_admin_context() im = objects.InstanceMapping.get_by_instance_uuid(ctxt, server['id']) im.destroy() result = self.cli.heal_allocations( verbose=True, instance_uuid=server['id']) output = self.output.getvalue() self.assertEqual(127, result, self.output.getvalue()) self.assertIn('Unable to find cell for instance %s, is it mapped?' % server['id'], output) def test_heal_allocations_specific_cell(self): """Tests the case that a specific cell is processed and only that cell even though there are two which require processing. """ # Create one that we won't process. server1, rp_uuid1 = self._boot_and_assert_no_allocations( self.flavor, 'cell1') # Create another that we will process specifically. server2, rp_uuid2 = self._boot_and_assert_no_allocations( self.flavor, 'cell2') # Get Cell_id of cell2 cell2_id = self.cell_mappings['cell2'].uuid # First do a dry run to make sure two instances need processing. result = self.cli.heal_allocations( max_count=2, verbose=True, dry_run=True) # Nothing changed so the return code should be 4. self.assertEqual(4, result, self.output.getvalue()) output = self.output.getvalue() self.assertIn('Found 1 candidate instances', output) # Now run with our specific cell and it should be the only one # processed. result = self.cli.heal_allocations(verbose=True, cell_uuid=cell2_id) output = self.output.getvalue() self.assertEqual(0, result, self.output.getvalue()) self.assertIn('Found 1 candidate instances', output) self.assertIn('Processed 1 instances.', output) # Now run it again on the specific cell and it should be done. result = self.cli.heal_allocations( verbose=True, cell_uuid=cell2_id) output = self.output.getvalue() self.assertEqual(4, result, self.output.getvalue()) self.assertIn('Found 1 candidate instances', output) self.assertIn('Processed 0 instances.', output) class TestNovaManagePlacementHealPortAllocations( test_servers.PortResourceRequestBasedSchedulingTestBase): def setUp(self): super(TestNovaManagePlacementHealPortAllocations, self).setUp() self.cli = manage.PlacementCommands() self.flavor = self.api.get_flavors()[0] self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) # Make it easier to debug failed test cases def print_stdout_on_fail(*args, **kwargs): import sys sys.stderr.write(self.output.getvalue()) self.addOnException(print_stdout_on_fail) def _add_resource_request_to_a_bound_port(self, port_id, resource_request): # NOTE(gibi): self.neutron._ports contains a copy of each neutron port # defined on class level in the fixture. So modifying what is in the # _ports list is safe as it is re-created for each Neutron fixture # instance therefore for each individual test using that fixture. 
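# Adding a resource_request to a port that is already bound simulates a server
# that was created before its port carried a bandwidth request, i.e. a port
# whose placement allocation was never made. That missing piece is what the
# heal_allocations tests below expect the command to fill in.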
bound_port = self.neutron._ports[port_id] bound_port[constants.RESOURCE_REQUEST] = resource_request def _create_server_with_missing_port_alloc( self, ports, resource_request=None): if not resource_request: resource_request = { "resources": { orc.NET_BW_IGR_KILOBIT_PER_SEC: 1000, orc.NET_BW_EGR_KILOBIT_PER_SEC: 1000}, "required": ["CUSTOM_PHYSNET2", "CUSTOM_VNIC_TYPE_NORMAL"] } server = self._create_server( flavor=self.flavor, networks=[{'port': port['id']} for port in ports]) server = self._wait_for_state_change(server, 'ACTIVE') # This is a hack to simulate that we have a server that is missing # allocation for its port for port in ports: self._add_resource_request_to_a_bound_port( port['id'], resource_request) updated_ports = [ self.neutron.show_port(port['id'])['port'] for port in ports] return server, updated_ports def _assert_placement_updated(self, server, ports): rsp = self.placement_api.get( '/allocations/%s' % server['id'], version=1.28).body allocations = rsp['allocations'] # we expect one allocation for the compute resources and one for the # networking resources self.assertEqual(2, len(allocations)) self.assertEqual( self._resources_from_flavor(self.flavor), allocations[self.compute1_rp_uuid]['resources']) self.assertEqual(server['tenant_id'], rsp['project_id']) self.assertEqual(server['user_id'], rsp['user_id']) network_allocations = allocations[ self.ovs_bridge_rp_per_host[self.compute1_rp_uuid]]['resources'] # this code assumes that every port is allocated from the same OVS # bridge RP total_request = collections.defaultdict(int) for port in ports: port_request = port[constants.RESOURCE_REQUEST]['resources'] for rc, amount in port_request.items(): total_request[rc] += amount self.assertEqual(total_request, network_allocations) def _assert_port_updated(self, port_uuid): updated_port = self.neutron.show_port(port_uuid)['port'] binding_profile = updated_port.get('binding:profile', {}) self.assertEqual( self.ovs_bridge_rp_per_host[self.compute1_rp_uuid], binding_profile['allocation']) def _assert_ports_updated(self, ports): for port in ports: self._assert_port_updated(port['id']) def _assert_placement_not_updated(self, server): allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] self.assertEqual(1, len(allocations)) self.assertIn(self.compute1_rp_uuid, allocations) def _assert_port_not_updated(self, port_uuid): updated_port = self.neutron.show_port(port_uuid)['port'] binding_profile = updated_port.get('binding:profile', {}) self.assertNotIn('allocation', binding_profile) def _assert_ports_not_updated(self, ports): for port in ports: self._assert_port_not_updated(port['id']) def test_heal_port_allocation_only(self): """Test that only port allocation needs to be healed for an instance. 
* boot with a neutron port that does not have resource request * hack in a resource request for the bound port * heal the allocation * check if the port allocation is created in placement and the port is updated in neutron """ server, ports = self._create_server_with_missing_port_alloc( [self.neutron.port_1]) # let's trigger a heal result = self.cli.heal_allocations(verbose=True, max_count=2) self._assert_placement_updated(server, ports) self._assert_ports_updated(ports) self.assertIn( 'Successfully updated allocations', self.output.getvalue()) self.assertEqual(0, result) def test_heal_port_allocation_dry_run(self): server, ports = self._create_server_with_missing_port_alloc( [self.neutron.port_1]) # let's trigger a heal result = self.cli.heal_allocations( verbose=True, max_count=2, dry_run=True) self._assert_placement_not_updated(server) self._assert_ports_not_updated(ports) self.assertIn( '[dry-run] Update allocations for instance', self.output.getvalue()) # Note that we had a issues by printing defaultdicts directly to the # user in the past. So let's assert it does not happen any more. self.assertNotIn('defaultdict', self.output.getvalue()) self.assertEqual(4, result) def test_no_healing_is_needed(self): """Test that the instance has a port that has allocations so nothing to be healed. """ server, ports = self._create_server_with_missing_port_alloc( [self.neutron.port_1]) # heal it once result = self.cli.heal_allocations(verbose=True, max_count=2) self._assert_placement_updated(server, ports) self._assert_ports_updated(ports) self.assertIn( 'Successfully updated allocations', self.output.getvalue()) self.assertEqual(0, result) # try to heal it again result = self.cli.heal_allocations(verbose=True, max_count=2) # nothing is removed self._assert_placement_updated(server, ports) self._assert_ports_updated(ports) # healing was not needed self.assertIn( 'Nothing to be healed.', self.output.getvalue()) self.assertEqual(4, result) def test_skip_heal_port_allocation(self): """Test that only port allocation needs to be healed for an instance but port healing is skipped on the cli. """ server, ports = self._create_server_with_missing_port_alloc( [self.neutron.port_1]) # let's trigger a heal result = self.cli.heal_allocations( verbose=True, max_count=2, skip_port_allocations=True) self._assert_placement_not_updated(server) self._assert_ports_not_updated(ports) output = self.output.getvalue() self.assertNotIn('Updating port', output) self.assertIn('Nothing to be healed', output) self.assertEqual(4, result) def test_skip_heal_port_allocation_but_heal_the_rest(self): """Test that the instance doesn't have allocation at all, needs allocation for ports as well, but only heal the non port related allocation. 
""" server, ports = self._create_server_with_missing_port_alloc( [self.neutron.port_1]) # delete the server allocation in placement to simulate that it needs # to be healed # NOTE(gibi): putting empty allocation will delete the consumer in # placement allocations = self.placement_api.get( '/allocations/%s' % server['id'], version=1.28).body allocations['allocations'] = {} self.placement_api.put( '/allocations/%s' % server['id'], allocations, version=1.28) # let's trigger a heal result = self.cli.heal_allocations( verbose=True, max_count=2, skip_port_allocations=True) # this actually checks that the server has its non port related # allocation in placement self._assert_placement_not_updated(server) self._assert_ports_not_updated(ports) output = self.output.getvalue() self.assertIn( 'Successfully created allocations for instance', output) self.assertEqual(0, result) def test_heal_port_allocation_and_project_id(self): """Test that not just port allocation needs to be healed but also the missing project_id and user_id. """ server, ports = self._create_server_with_missing_port_alloc( [self.neutron.port_1]) # override allocation with placement microversion <1.8 to simulate # missing project_id and user_id alloc_body = { "allocations": [ { "resource_provider": { "uuid": self.compute1_rp_uuid }, "resources": { "MEMORY_MB": self.flavor['ram'], "VCPU": self.flavor['vcpus'], "DISK_GB": self.flavor['disk'] } } ] } self.placement_api.put('/allocations/%s' % server['id'], alloc_body) # let's trigger a heal result = self.cli.heal_allocations(verbose=True, max_count=2) self._assert_placement_updated(server, ports) self._assert_ports_updated(ports) output = self.output.getvalue() self.assertIn( 'Successfully updated allocations for instance', output) self.assertIn('Processed 1 instances.', output) self.assertEqual(0, result) def test_heal_allocation_create_allocation_with_port_allocation(self): """Test that the instance doesn't have allocation at all but needs allocation for the ports as well. """ server, ports = self._create_server_with_missing_port_alloc( [self.neutron.port_1]) # delete the server allocation in placement to simulate that it needs # to be healed # NOTE(gibi): putting empty allocation will delete the consumer in # placement allocations = self.placement_api.get( '/allocations/%s' % server['id'], version=1.28).body allocations['allocations'] = {} self.placement_api.put( '/allocations/%s' % server['id'], allocations, version=1.28) # let's trigger a heal result = self.cli.heal_allocations(verbose=True, max_count=2) self._assert_placement_updated(server, ports) self._assert_ports_updated(ports) output = self.output.getvalue() self.assertIn( 'Successfully created allocations for instance', output) self.assertEqual(0, result) def test_heal_port_allocation_not_enough_resources_for_port(self): """Test that a port needs allocation but not enough inventory available. """ # The port will request too much NET_BW_IGR_KILOBIT_PER_SEC so there is # no RP on the host that can provide it. 
resource_request = { "resources": { orc.NET_BW_IGR_KILOBIT_PER_SEC: 100000000000, orc.NET_BW_EGR_KILOBIT_PER_SEC: 1000}, "required": ["CUSTOM_PHYSNET2", "CUSTOM_VNIC_TYPE_NORMAL"] } server, ports = self._create_server_with_missing_port_alloc( [self.neutron.port_1], resource_request) # let's trigger a heal result = self.cli.heal_allocations(verbose=True, max_count=2) self._assert_placement_not_updated(server) # Actually the ports were updated but the update is rolled back when # the placement update failed self._assert_ports_not_updated(ports) output = self.output.getvalue() self.assertIn( 'Rolling back port update', output) self.assertIn( 'Failed to update allocations for consumer', output) self.assertEqual(3, result) def test_heal_port_allocation_no_rp_providing_required_traits(self): """Test that a port needs allocation but no rp is providing the required traits. """ # The port will request a trait, CUSTOM_PHYSNET_NONEXISTENT that will # not be provided by any RP on this host resource_request = { "resources": { orc.NET_BW_IGR_KILOBIT_PER_SEC: 1000, orc.NET_BW_EGR_KILOBIT_PER_SEC: 1000}, "required": ["CUSTOM_PHYSNET_NONEXISTENT", "CUSTOM_VNIC_TYPE_NORMAL"] } server, ports = self._create_server_with_missing_port_alloc( [self.neutron.port_1], resource_request) # let's trigger a heal result = self.cli.heal_allocations(verbose=True, max_count=2) self._assert_placement_not_updated(server) self._assert_ports_not_updated(ports) self.assertIn( 'No matching resource provider is available for healing the port ' 'allocation', self.output.getvalue()) self.assertEqual(3, result) def test_heal_port_allocation_ambiguous_rps(self): """Test that there are more than one matching RPs are available on the compute. """ # The port will request CUSTOM_VNIC_TYPE_DIRECT trait and there are # two RPs that supports such trait. resource_request = { "resources": { orc.NET_BW_IGR_KILOBIT_PER_SEC: 1000, orc.NET_BW_EGR_KILOBIT_PER_SEC: 1000}, "required": ["CUSTOM_PHYSNET2", "CUSTOM_VNIC_TYPE_DIRECT"] } server, ports = self._create_server_with_missing_port_alloc( [self.neutron.port_1], resource_request) # let's trigger a heal result = self.cli.heal_allocations(verbose=True, max_count=2) self._assert_placement_not_updated(server) self._assert_ports_not_updated(ports) self.assertIn( 'More than one matching resource provider', self.output.getvalue()) self.assertEqual(3, result) def test_heal_port_allocation_neutron_unavailable_during_port_query(self): """Test that Neutron is not available when querying ports. """ server, ports = self._create_server_with_missing_port_alloc( [self.neutron.port_1]) with mock.patch.object( self.neutron, "list_ports", side_effect=neutron_client_exc.Unauthorized()): # let's trigger a heal result = self.cli.heal_allocations(verbose=True, max_count=2) self._assert_placement_not_updated(server) self._assert_ports_not_updated(ports) self.assertIn( 'Unable to query ports for instance', self.output.getvalue()) self.assertEqual(5, result) def test_heal_port_allocation_neutron_unavailable(self): """Test that the port cannot be updated in Neutron with RP uuid as Neutron is unavailable. 
""" server, ports = self._create_server_with_missing_port_alloc( [self.neutron.port_1]) with mock.patch.object( self.neutron, "update_port", side_effect=neutron_client_exc.Forbidden()): # let's trigger a heal result = self.cli.heal_allocations(verbose=True, max_count=2) self._assert_placement_not_updated(server) self._assert_ports_not_updated(ports) self.assertIn( 'Unable to update ports with allocations', self.output.getvalue()) self.assertEqual(6, result) def test_heal_multiple_port_allocations_rollback_success(self): """Test neutron port update rollback happy case. Try to heal two ports and make the second port update to fail in neutron. Assert that the first port update rolled back successfully. """ port2 = self.neutron.create_port()['port'] server, ports = self._create_server_with_missing_port_alloc( [self.neutron.port_1, port2]) orig_update_port = self.neutron.update_port update = [] def fake_update_port(*args, **kwargs): if len(update) == 0 or len(update) > 1: update.append(True) return orig_update_port(*args, **kwargs) if len(update) == 1: update.append(True) raise neutron_client_exc.Forbidden() with mock.patch.object( self.neutron, "update_port", side_effect=fake_update_port): # let's trigger a heal result = self.cli.heal_allocations(verbose=True, max_count=2) self._assert_placement_not_updated(server) # Actually one of the ports were updated but the update is rolled # back when the second neutron port update failed self._assert_ports_not_updated(ports) output = self.output.getvalue() self.assertIn( 'Rolling back port update', output) self.assertIn( 'Unable to update ports with allocations', output) self.assertEqual(6, result) def test_heal_multiple_port_allocations_rollback_fails(self): """Test neutron port update rollback error case. Try to heal three ports and make the last port update to fail in neutron. Also make the rollback of the second port update to fail. 
""" port2 = self.neutron.create_port()['port'] port3 = self.neutron.create_port()['port'] server, _ = self._create_server_with_missing_port_alloc( [self.neutron.port_1, port2, port3]) orig_update_port = self.neutron.update_port port_updates = [] def fake_update_port(port_id, *args, **kwargs): # 0, 1: the first two update operation succeeds # 4: the last rollback operation succeeds if len(port_updates) in [0, 1, 4]: port_updates.append(port_id) return orig_update_port(port_id, *args, **kwargs) # 2 : last update operation fails # 3 : the first rollback operation also fails if len(port_updates) in [2, 3]: port_updates.append(port_id) raise neutron_client_exc.Forbidden() with mock.patch.object( self.neutron, "update_port", side_effect=fake_update_port) as mock_update_port: # let's trigger a heal result = self.cli.heal_allocations(verbose=True, max_count=2) self.assertEqual(5, mock_update_port.call_count) self._assert_placement_not_updated(server) # the order of the ports is random due to usage of dicts so we # need the info from the fake_update_port that which port update # failed # the first port update was successful, this will be the first port to # rollback too and the rollback will fail self._assert_port_updated(port_updates[0]) # the second port update was successful, this will be the second port # to rollback which will succeed self._assert_port_not_updated(port_updates[1]) # the third port was never updated successfully self._assert_port_not_updated(port_updates[2]) output = self.output.getvalue() self.assertIn( 'Rolling back port update', output) self.assertIn( 'Failed to update neutron ports with allocation keys and the ' 'automatic rollback of the previously successful port updates ' 'also failed', output) # as we failed to roll back the first port update we instruct the user # to clean it up manually self.assertIn( "Make sure that the binding:profile.allocation key of the " "affected ports ['%s'] are manually cleaned in neutron" % port_updates[0], output) self.assertEqual(7, result) def _test_heal_port_allocation_placement_unavailable( self, server, ports, error): with mock.patch('nova.cmd.manage.PlacementCommands.' '_get_rps_in_tree_with_required_traits', side_effect=error): result = self.cli.heal_allocations(verbose=True, max_count=2) self._assert_placement_not_updated(server) self._assert_ports_not_updated(ports) self.assertEqual(3, result) def test_heal_port_allocation_placement_unavailable(self): server, ports = self._create_server_with_missing_port_alloc( [self.neutron.port_1]) for error in [ exception.PlacementAPIConnectFailure(), exception.ResourceProviderRetrievalFailed(uuid=uuidsentinel.rp1), exception.ResourceProviderTraitRetrievalFailed( uuid=uuidsentinel.rp1)]: self._test_heal_port_allocation_placement_unavailable( server, ports, error) class TestNovaManagePlacementSyncAggregates( integrated_helpers.ProviderUsageBaseTestCase): """Functional tests for nova-manage placement sync_aggregates""" # This is required by the parent class. compute_driver = 'fake.SmallFakeDriver' def setUp(self): super(TestNovaManagePlacementSyncAggregates, self).setUp() self.cli = manage.PlacementCommands() # Start two computes. At least two computes are useful for testing # to make sure removing one from an aggregate doesn't remove the other. self._start_compute('host1') self._start_compute('host2') # Make sure we have two hypervisors reported in the API. 
hypervisors = self.admin_api.api_get( '/os-hypervisors').body['hypervisors'] self.assertEqual(2, len(hypervisors)) self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) def _create_aggregate(self, name): return self.admin_api.post_aggregate({'aggregate': {'name': name}}) def test_sync_aggregates(self): """This is a simple test which does the following: - add each host to a unique aggregate - add both hosts to a shared aggregate - run sync_aggregates and assert both providers are in two aggregates - run sync_aggregates again and make sure nothing changed """ # create three aggregates, one per host and one shared host1_agg = self._create_aggregate('host1') host2_agg = self._create_aggregate('host2') shared_agg = self._create_aggregate('shared') # Add the hosts to the aggregates. We have to temporarily mock out the # scheduler report client to *not* mirror the add host changes so that # sync_aggregates will do the job. with mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_add_host'): self.admin_api.add_host_to_aggregate(host1_agg['id'], 'host1') self.admin_api.add_host_to_aggregate(host2_agg['id'], 'host2') self.admin_api.add_host_to_aggregate(shared_agg['id'], 'host1') self.admin_api.add_host_to_aggregate(shared_agg['id'], 'host2') # Run sync_aggregates and assert both providers are in two aggregates. result = self.cli.sync_aggregates(verbose=True) self.assertEqual(0, result, self.output.getvalue()) host_to_rp_uuid = {} for host in ('host1', 'host2'): rp_uuid = self._get_provider_uuid_by_host(host) host_to_rp_uuid[host] = rp_uuid rp_aggregates = self._get_provider_aggregates(rp_uuid) self.assertEqual(2, len(rp_aggregates), '%s should be in two provider aggregates' % host) self.assertIn( 'Successfully added host (%s) and provider (%s) to aggregate ' '(%s)' % (host, rp_uuid, shared_agg['uuid']), self.output.getvalue()) # Remove host1 from the shared aggregate. Again, we have to temporarily # mock out the call from the aggregates API to placement to mirror the # change. with mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_remove_host'): self.admin_api.remove_host_from_aggregate( shared_agg['id'], 'host1') # Run sync_aggregates and assert the provider for host1 is still in two # aggregates and host2's provider is still in two aggregates. # TODO(mriedem): When we add an option to remove providers from # placement aggregates when the corresponding host isn't in a compute # aggregate, we can test that the host1 provider is only left in one # aggregate. result = self.cli.sync_aggregates(verbose=True) self.assertEqual(0, result, self.output.getvalue()) for host in ('host1', 'host2'): rp_uuid = host_to_rp_uuid[host] rp_aggregates = self._get_provider_aggregates(rp_uuid) self.assertEqual(2, len(rp_aggregates), '%s should be in two provider aggregates' % host) class TestNovaManagePlacementAudit( integrated_helpers.ProviderUsageBaseTestCase): """Functional tests for nova-manage placement audit""" # Let's just use a simple fake driver compute_driver = 'fake.SmallFakeDriver' def setUp(self): super(TestNovaManagePlacementAudit, self).setUp() self.cli = manage.PlacementCommands() # Make sure we have two computes for migrations self.compute1 = self._start_compute('host1') self.compute2 = self._start_compute('host2') # Make sure we have two hypervisors reported in the API. 
hypervisors = self.admin_api.api_get( '/os-hypervisors').body['hypervisors'] self.assertEqual(2, len(hypervisors)) self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) self.flavor = self.api.get_flavors()[0] def test_audit_orphaned_allocation_from_instance_delete(self): """Creates a server and deletes it by retaining its allocations so the audit command can find it. """ target_hostname = self.compute1.host rp_uuid = self._get_provider_uuid_by_host(target_hostname) server = self._boot_and_check_allocations(self.flavor, target_hostname) # let's mock the allocation delete call to placement with mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'delete_allocation_for_instance'): self.api.delete_server(server['id']) self._wait_until_deleted(server) # make sure the allocation is still around self.assertFlavorMatchesUsage(rp_uuid, self.flavor) # Don't ask to delete the orphaned allocations, just audit them ret = self.cli.audit(verbose=True) # The allocation should still exist self.assertFlavorMatchesUsage(rp_uuid, self.flavor) output = self.output.getvalue() self.assertIn( 'Allocations for consumer UUID %(consumer_uuid)s on ' 'Resource Provider %(rp_uuid)s can be deleted' % {'consumer_uuid': server['id'], 'rp_uuid': rp_uuid}, output) self.assertIn('Processed 1 allocation.', output) # Here we don't want to delete the found allocations self.assertNotIn( 'Deleted allocations for consumer UUID %s' % server['id'], output) self.assertEqual(3, ret) # Now ask the audit command to delete the rogue allocations. ret = self.cli.audit(delete=True, verbose=True) # The allocations are now deleted self.assertRequestMatchesUsage( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, rp_uuid) output = self.output.getvalue() self.assertIn( 'Deleted allocations for consumer UUID %s' % server['id'], output) self.assertIn('Processed 1 allocation.', output) self.assertEqual(4, ret) def test_audit_orphaned_allocations_from_confirmed_resize(self): """Resize a server but when confirming it, leave the migration allocation there so the audit command can find it. """ source_hostname = self.compute1.host dest_hostname = self.compute2.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) old_flavor = self.flavor new_flavor = self.api.get_flavors()[1] # we want to make sure we resize to compute2 self.flags(allow_resize_to_same_host=False) server = self._boot_and_check_allocations(self.flavor, source_hostname) # Do a resize post = { 'resize': { 'flavorRef': new_flavor['id'] } } self._move_and_check_allocations( server, request=post, old_flavor=old_flavor, new_flavor=new_flavor, source_rp_uuid=source_rp_uuid, dest_rp_uuid=dest_rp_uuid) # Retain the migration UUID record for later usage migration_uuid = self.get_migration_uuid_for_instance(server['id']) # Confirm the resize so it should in theory delete the source # allocations but mock out the allocation delete for the source post = {'confirmResize': None} with mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'delete_allocation_for_instance'): self.api.post_server_action( server['id'], post, check_response_status=[204]) self._wait_for_state_change(server, 'ACTIVE') # The target host usage should be according to the new flavor... 
self.assertFlavorMatchesUsage(dest_rp_uuid, new_flavor) # ...but we should still see allocations for the source compute self.assertFlavorMatchesUsage(source_rp_uuid, old_flavor) # Now, run the audit command that will find this orphaned allocation ret = self.cli.audit(verbose=True) output = self.output.getvalue() self.assertIn( 'Allocations for consumer UUID %(consumer_uuid)s on ' 'Resource Provider %(rp_uuid)s can be deleted' % {'consumer_uuid': migration_uuid, 'rp_uuid': source_rp_uuid}, output) self.assertIn('Processed 1 allocation.', output) self.assertEqual(3, ret) # Now we want to delete the orphaned allocation that is duplicate ret = self.cli.audit(delete=True, verbose=True) # There should be no longer usage for the source host since the # allocation disappeared self.assertRequestMatchesUsage({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, source_rp_uuid) output = self.output.getvalue() self.assertIn( 'Deleted allocations for consumer UUID %(consumer_uuid)s on ' 'Resource Provider %(rp_uuid)s' % {'consumer_uuid': migration_uuid, 'rp_uuid': source_rp_uuid}, output) self.assertIn('Processed 1 allocation.', output) self.assertEqual(4, ret) # TODO(sbauza): Remove this test once bug #1829479 is fixed def test_audit_orphaned_allocations_from_deleted_compute_evacuate(self): """Evacuate a server and the delete the source node so that it will leave a source allocation that the audit command will find. """ source_hostname = self.compute1.host dest_hostname = self.compute2.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) server = self._boot_and_check_allocations(self.flavor, source_hostname) # Stop the service and fake it down self.compute1.stop() source_service_id = self.admin_api.get_services( host=source_hostname, binary='nova-compute')[0]['id'] self.admin_api.put_service(source_service_id, {'forced_down': 'true'}) # evacuate the instance to the target post = {'evacuate': {"host": dest_hostname}} self.admin_api.post_server_action(server['id'], post) self._wait_for_server_parameter(server, {'OS-EXT-SRV-ATTR:host': dest_hostname, 'status': 'ACTIVE'}) # Now the instance is gone, we can delete the compute service self.admin_api.api_delete('/os-services/%s' % source_service_id) # Since the compute is deleted, we should have in theory a single # allocation against the destination resource provider, but evacuated # instances are not having their allocations deleted. See bug #1829479. 
# We have two allocations for the same consumer, source and destination self._check_allocation_during_evacuate( self.flavor, server['id'], source_rp_uuid, dest_rp_uuid) # Now, run the audit command that will find this orphaned allocation ret = self.cli.audit(verbose=True) output = self.output.getvalue() self.assertIn( 'Allocations for consumer UUID %(consumer_uuid)s on ' 'Resource Provider %(rp_uuid)s can be deleted' % {'consumer_uuid': server['id'], 'rp_uuid': source_rp_uuid}, output) self.assertIn('Processed 1 allocation.', output) self.assertEqual(3, ret) # Now we want to delete the orphaned allocation that is duplicate ret = self.cli.audit(delete=True, verbose=True) # We finally should only have the target allocations self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor) self.assertRequestMatchesUsage({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, source_rp_uuid) output = self.output.getvalue() self.assertIn( 'Deleted allocations for consumer UUID %(consumer_uuid)s on ' 'Resource Provider %(rp_uuid)s' % {'consumer_uuid': server['id'], 'rp_uuid': source_rp_uuid}, output) self.assertIn('Processed 1 allocation.', output) self.assertEqual(4, ret) class TestDBArchiveDeletedRows(integrated_helpers._IntegratedTestBase): """Functional tests for the "nova-manage db archive_deleted_rows" CLI.""" api_major_version = 'v2.1' _image_ref_parameter = 'imageRef' _flavor_ref_parameter = 'flavorRef' def setUp(self): super(TestDBArchiveDeletedRows, self).setUp() self.enforce_fk_constraints() self.cli = manage.DbCommands() self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) def test_archive_instance_group_members(self): """Tests that instance_group_member records in the API DB are deleted when a server group member instance is archived. """ # Create a server group. group = self.api.post_server_groups( {'name': 'test_archive_instance_group_members', 'policies': ['affinity']}) # Create two servers in the group. server = self._build_server() server['min_count'] = 2 server_req = { 'server': server, 'os:scheduler_hints': {'group': group['id']}} # Since we don't pass return_reservation_id=True we get the first # server back in the response. We're also using the CastAsCall fixture # (from the base class) fixture so we don't have to worry about the # server being ACTIVE. server = self.api.post_server(server_req) # Assert we have two group members. self.assertEqual( 2, len(self.api.get_server_group(group['id'])['members'])) # Now delete one server and then we can archive. server = self.api.get_server(server['id']) self._delete_server(server) # Now archive. self.cli.archive_deleted_rows(verbose=True) # Assert only one instance_group_member record was deleted. self.assertRegex(self.output.getvalue(), r".*instance_group_member.*\| 1.*") # And that we still have one remaining group member. 
self.assertEqual( 1, len(self.api.get_server_group(group['id'])['members'])) class TestDBArchiveDeletedRowsMultiCell(integrated_helpers.InstanceHelperMixin, test.TestCase): NUMBER_OF_CELLS = 2 def setUp(self): super(TestDBArchiveDeletedRowsMultiCell, self).setUp() self.enforce_fk_constraints() self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) # We need the admin api to forced_host for server create self.api = api_fixture.admin_api image_fake.stub_out_image_service(self) self.addCleanup(image_fake.FakeImageService_reset) self.start_service('conductor') self.start_service('scheduler') self.context = context.RequestContext('fake-user', 'fake-project') self.cli = manage.DbCommands() self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) # Start two compute services, one per cell self.compute1 = self.start_service('compute', host='host1', cell_name='cell1') self.compute2 = self.start_service('compute', host='host2', cell_name='cell2') def test_archive_deleted_rows(self): admin_context = context.get_admin_context(read_deleted='yes') # Boot a server to cell1 server_ids = {} server = self._build_server(az='nova:host1') created_server = self.api.post_server({'server': server}) self._wait_for_state_change(created_server, 'ACTIVE') server_ids['cell1'] = created_server['id'] # Boot a server to cell2 server = self._build_server(az='nova:host2') created_server = self.api.post_server({'server': server}) self._wait_for_state_change(created_server, 'ACTIVE') server_ids['cell2'] = created_server['id'] # Boot a server to cell0 (cause ERROR state prior to schedule) server = self._build_server() # Flavor m1.xlarge cannot be fulfilled server['flavorRef'] = 'http://fake.server/5' created_server = self.api.post_server({'server': server}) self._wait_for_state_change(created_server, 'ERROR') server_ids['cell0'] = created_server['id'] # Verify all the servers are in the databases for cell_name, server_id in server_ids.items(): with context.target_cell(admin_context, self.cell_mappings[cell_name]) as cctxt: objects.Instance.get_by_uuid(cctxt, server_id) # Delete the servers for cell_name in server_ids.keys(): self.api.delete_server(server_ids[cell_name]) # Verify all the servers are in the databases still (as soft deleted) for cell_name, server_id in server_ids.items(): with context.target_cell(admin_context, self.cell_mappings[cell_name]) as cctxt: objects.Instance.get_by_uuid(cctxt, server_id) # Archive the deleted rows self.cli.archive_deleted_rows(verbose=True, all_cells=True) # Three instances should have been archived (cell0, cell1, cell2) self.assertRegex(self.output.getvalue(), r"| cell0\.instances.*\| 1.*") self.assertRegex(self.output.getvalue(), r"| cell1\.instances.*\| 1.*") self.assertRegex(self.output.getvalue(), r"| cell2\.instances.*\| 1.*") self.assertRegex(self.output.getvalue(), r"| API_DB\.instance_mappings.*\| 3.*") self.assertRegex(self.output.getvalue(), r"| API_DB\.request_specs.*\| 3.*") # Verify all the servers are gone from the cell databases for cell_name, server_id in server_ids.items(): with context.target_cell(admin_context, self.cell_mappings[cell_name]) as cctxt: self.assertRaises(exception.InstanceNotFound, objects.Instance.get_by_uuid, cctxt, server_id) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 
nova-21.2.4/nova/tests/functional/test_policy.py0000664000175000017500000002031500000000000021746 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime from oslo_utils import timeutils import nova.policies.base from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as fake_image from nova.tests.unit import policy_fixture from nova import utils class HostStatusPolicyTestCase(test.TestCase, integrated_helpers.InstanceHelperMixin): """Tests host_status policies behavior in the API.""" host_status_rule = 'os_compute_api:servers:show:host_status' host_status_unknown_only_rule = ( 'os_compute_api:servers:show:host_status:unknown-only') image_uuid = '155d900f-4e14-4e4c-a73d-069cbf4541e6' def setUp(self): super(HostStatusPolicyTestCase, self).setUp() # Setup the standard fixtures. fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) self.useFixture(policy_fixture.RealPolicyFixture()) # Start the services. self.start_service('conductor') self.start_service('scheduler') self.compute = self.start_service('compute') api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api self.admin_api = api_fixture.admin_api # The host_status field is returned starting in microversion 2.16. self.api.microversion = '2.16' self.admin_api.microversion = '2.16' def _setup_host_status_unknown_only_test(self, networks=None): # Set policy such that admin are allowed to see any/all host status and # all users are allowed to see UNKNOWN host status only. self.policy.set_rules({ self.host_status_rule: 'rule:admin_api', self.host_status_unknown_only_rule: nova.policies.base.RULE_ANY}, # This is needed to avoid nulling out the rest of default policy. overwrite=False) # Create a server as a normal non-admin user. # In microversion 2.36 the /images proxy API was deprecated, so # specifiy the image_uuid directly. kwargs = {'image_uuid': self.image_uuid} if networks: # Starting with microversion 2.37 the networks field is required. kwargs['networks'] = networks return self._create_server(**kwargs) @staticmethod def _get_server(resp): # Get a server whether it's a single server or a list of one server. server = resp if not isinstance(resp, list) else resp[0] # The PUT /servers/{server_id} response has a 'server' attribute. if 'server' in server: server = server['server'] return server def _set_server_state_active(self, server): # Needed for being able to issue multiple rebuild requests while the # compute service is down. 
reset_state = {'os-resetState': {'state': 'active'}} self.admin_api.post_server_action(server['id'], reset_state) def _test_host_status_unknown_only(self, func_name, *args): admin_func = getattr(self.admin_api, func_name) func = getattr(self.api, func_name) # Run the operation as admin and extract the server from the response. server = self._get_server(admin_func(*args)) # We need to wait for ACTIVE if this was a post rebuild server action, # else a subsequent rebuild request will fail with a 409 in the API. self._wait_for_state_change(server, 'ACTIVE') # Verify admin can see the host status UP. self.assertEqual('UP', server['host_status']) # Get server as normal non-admin user. server = self._get_server(func(*args)) self._wait_for_state_change(server, 'ACTIVE') # Verify non-admin do not receive the host_status field because it is # not UNKNOWN. self.assertNotIn('host_status', server) # Stop the compute service to trigger UNKNOWN host_status. self.compute.stop() # Advance time by 30 minutes so nova considers service as down. minutes_from_now = timeutils.utcnow() + datetime.timedelta(minutes=30) timeutils.set_time_override(override_time=minutes_from_now) self.addCleanup(timeutils.clear_time_override) # Run the operation as admin and extract the server from the response. server = self._get_server(admin_func(*args)) # Verify admin can see the host status UNKNOWN. self.assertEqual('UNKNOWN', server['host_status']) # Now that the compute service is down, the rebuild will not ever # complete. But we're only interested in what would be returned from # the API post rebuild action, so reset the state to ACTIVE to allow # the next rebuild request to go through without a 409 error. self._set_server_state_active(server) # Run the operation as a normal non-admin user and extract the server # from the response. server = self._get_server(func(*args)) # Verify non-admin can see the host status UNKNOWN too. self.assertEqual('UNKNOWN', server['host_status']) self._set_server_state_active(server) # Now, adjust the policy to make it so only admin are allowed to see # UNKNOWN host status only. self.policy.set_rules({ self.host_status_unknown_only_rule: 'rule:admin_api'}, overwrite=False) # Run the operation as a normal non-admin user and extract the server # from the response. server = self._get_server(func(*args)) # Verify non-admin do not receive the host_status field. self.assertNotIn('host_status', server) self._set_server_state_active(server) # Verify that admin will not receive ths host_status field if the # API microversion < 2.16. with utils.temporary_mutation(self.admin_api, microversion='2.15'): server = self._get_server(admin_func(*args)) self.assertNotIn('host_status', server) def test_get_server_host_status_unknown_only(self): server = self._setup_host_status_unknown_only_test() # GET /servers/{server_id} self._test_host_status_unknown_only('get_server', server['id']) def test_get_servers_detail_host_status_unknown_only(self): self._setup_host_status_unknown_only_test() # GET /servers/detail self._test_host_status_unknown_only('get_servers') def test_put_server_host_status_unknown_only(self): # The host_status field is returned from PUT /servers/{server_id} # starting in microversion 2.75. 
self.api.microversion = '2.75' self.admin_api.microversion = '2.75' server = self._setup_host_status_unknown_only_test(networks='none') # PUT /servers/{server_id} update = {'server': {'name': 'host-status-unknown-only'}} self._test_host_status_unknown_only('put_server', server['id'], update) def test_post_server_rebuild_host_status_unknown_only(self): # The host_status field is returned from POST # /servers/{server_id}/action (rebuild) starting in microversion 2.75. self.api.microversion = '2.75' self.admin_api.microversion = '2.75' server = self._setup_host_status_unknown_only_test(networks='none') # POST /servers/{server_id}/action (rebuild) rebuild = {'rebuild': {'imageRef': self.image_uuid}} self._test_host_status_unknown_only('post_server_action', server['id'], rebuild) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_report_client.py0000664000175000017500000020146600000000000023330 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import ddt from keystoneauth1 import exceptions as kse import mock import os_resource_classes as orc import os_traits as ot from oslo_utils.fixture import uuidsentinel as uuids import pkg_resources from nova.cmd import status from nova.compute import provider_tree from nova.compute import utils as compute_utils from nova import conf from nova import context # TODO(cdent): This points to the nova, not placement, exception for # InvalidResourceClass. This test should probably move out of the # placement hierarchy since it expects a "standard" placement server # and is not testing the placement service itself. from nova import exception from nova import objects from nova.scheduler.client import report from nova.scheduler import utils from nova import test from nova.tests.functional import fixtures as func_fixtures CONF = conf.CONF CMD_STATUS_MIN_MICROVERSION = pkg_resources.parse_version( status.MIN_PLACEMENT_MICROVERSION) class VersionCheckingReportClient(report.SchedulerReportClient): """This wrapper around SchedulerReportClient checks microversions for get/put/post/delete calls to validate that the minimum requirement enforced in nova.cmd.status has been bumped appropriately when the report client uses a new version. This of course relies on there being a test in this module that hits the code path using that microversion. (This mechanism can be copied into other func test suites where we hit the report client.) """ @staticmethod def _check_microversion(kwargs): microversion = kwargs.get('version') if not microversion: return seen_microversion = pkg_resources.parse_version(microversion) if seen_microversion > CMD_STATUS_MIN_MICROVERSION: raise ValueError( "Report client is using microversion %s, but nova.cmd.status " "is only requiring %s. See " "I4369f7fb1453e896864222fa407437982be8f6b5 for an example of " "how to bump the minimum requirement." 
% (microversion, status.MIN_PLACEMENT_MICROVERSION)) def get(self, *args, **kwargs): self._check_microversion(kwargs) return super(VersionCheckingReportClient, self).get(*args, **kwargs) def put(self, *args, **kwargs): self._check_microversion(kwargs) return super(VersionCheckingReportClient, self).put(*args, **kwargs) def post(self, *args, **kwargs): self._check_microversion(kwargs) return super(VersionCheckingReportClient, self).post(*args, **kwargs) def delete(self, *args, **kwargs): self._check_microversion(kwargs) return super(VersionCheckingReportClient, self).delete(*args, **kwargs) @ddt.ddt @mock.patch('nova.compute.utils.is_volume_backed_instance', new=mock.Mock(return_value=False)) @mock.patch('nova.objects.compute_node.ComputeNode.save', new=mock.Mock()) class SchedulerReportClientTests(test.TestCase): def setUp(self): super(SchedulerReportClientTests, self).setUp() self.compute_uuid = uuids.compute_node self.compute_name = 'computehost' self.compute_node = objects.ComputeNode( uuid=self.compute_uuid, hypervisor_hostname=self.compute_name, vcpus=2, cpu_allocation_ratio=16.0, memory_mb=2048, ram_allocation_ratio=1.5, local_gb=1024, disk_allocation_ratio=1.0) self.instance_uuid = uuids.inst self.instance = objects.Instance( uuid=self.instance_uuid, project_id = uuids.project, user_id = uuids.user, flavor=objects.Flavor(root_gb=10, swap=1, ephemeral_gb=100, memory_mb=1024, vcpus=2, extra_specs={})) self.context = context.get_admin_context() # The ksa adapter used by the PlacementFixture, for mocking purposes. self.placement_client = self.useFixture( func_fixtures.PlacementFixture())._client self.client = VersionCheckingReportClient() def compute_node_to_inventory_dict(self): result = {} if self.compute_node.vcpus > 0: result[orc.VCPU] = { 'total': self.compute_node.vcpus, 'reserved': CONF.reserved_host_cpus, 'min_unit': 1, 'max_unit': self.compute_node.vcpus, 'step_size': 1, 'allocation_ratio': self.compute_node.cpu_allocation_ratio, } if self.compute_node.memory_mb > 0: result[orc.MEMORY_MB] = { 'total': self.compute_node.memory_mb, 'reserved': CONF.reserved_host_memory_mb, 'min_unit': 1, 'max_unit': self.compute_node.memory_mb, 'step_size': 1, 'allocation_ratio': self.compute_node.ram_allocation_ratio, } if self.compute_node.local_gb > 0: reserved_disk_gb = compute_utils.convert_mb_to_ceil_gb( CONF.reserved_host_disk_mb) result[orc.DISK_GB] = { 'total': self.compute_node.local_gb, 'reserved': reserved_disk_gb, 'min_unit': 1, 'max_unit': self.compute_node.local_gb, 'step_size': 1, 'allocation_ratio': self.compute_node.disk_allocation_ratio, } return result def test_client_report_smoke(self): """Check things go as expected when doing the right things.""" # TODO(cdent): We should probably also have a test that # tests that when allocation or inventory errors happen, we # are resilient. res_class = orc.VCPU # When we start out there are no resource providers. rp = self.client._get_resource_provider(self.context, self.compute_uuid) self.assertIsNone(rp) rps = self.client.get_providers_in_tree(self.context, self.compute_uuid) self.assertEqual([], rps) # But get_provider_tree_and_ensure_root creates one (via # _ensure_resource_provider) ptree = self.client.get_provider_tree_and_ensure_root( self.context, self.compute_uuid) self.assertEqual([self.compute_uuid], ptree.get_provider_uuids()) # Now let's update status for our compute node. 
self.client._ensure_resource_provider( self.context, self.compute_uuid, name=self.compute_name) self.client.set_inventory_for_provider( self.context, self.compute_uuid, self.compute_node_to_inventory_dict()) # So now we have a resource provider rp = self.client._get_resource_provider(self.context, self.compute_uuid) self.assertIsNotNone(rp) rps = self.client.get_providers_in_tree(self.context, self.compute_uuid) self.assertEqual(1, len(rps)) # We should also have empty sets of aggregate and trait # associations self.assertEqual( [], self.client._get_sharing_providers(self.context, [uuids.agg])) self.assertFalse( self.client._provider_tree.have_aggregates_changed( self.compute_uuid, [])) self.assertFalse( self.client._provider_tree.have_traits_changed( self.compute_uuid, [])) # TODO(cdent): change this to use the methods built in # to the report client to retrieve inventory? inventory_url = ('/resource_providers/%s/inventories' % self.compute_uuid) resp = self.client.get(inventory_url) inventory_data = resp.json()['inventories'] self.assertEqual(self.compute_node.vcpus, inventory_data[res_class]['total']) # Providers and inventory show up nicely in the provider tree ptree = self.client.get_provider_tree_and_ensure_root( self.context, self.compute_uuid) self.assertEqual([self.compute_uuid], ptree.get_provider_uuids()) self.assertTrue(ptree.has_inventory(self.compute_uuid)) # Update allocations with our instance alloc_dict = utils.resources_from_flavor(self.instance, self.instance.flavor) payload = { "allocations": { self.compute_uuid: {"resources": alloc_dict} }, "project_id": self.instance.project_id, "user_id": self.instance.user_id, "consumer_generation": None } self.client.put_allocations( self.context, self.instance_uuid, payload) # Check that allocations were made resp = self.client.get('/allocations/%s' % self.instance_uuid) alloc_data = resp.json()['allocations'] vcpu_data = alloc_data[self.compute_uuid]['resources'][res_class] self.assertEqual(2, vcpu_data) # Check that usages are up to date resp = self.client.get('/resource_providers/%s/usages' % self.compute_uuid) usage_data = resp.json()['usages'] vcpu_data = usage_data[res_class] self.assertEqual(2, vcpu_data) # Delete allocations with our instance self.client.delete_allocation_for_instance(self.context, self.instance.uuid) # No usage resp = self.client.get('/resource_providers/%s/usages' % self.compute_uuid) usage_data = resp.json()['usages'] vcpu_data = usage_data[res_class] self.assertEqual(0, vcpu_data) # Allocation bumped the generation, so refresh to get the latest self.client._refresh_and_get_inventory(self.context, self.compute_uuid) # Trigger the reporting client deleting all inventory by setting # the compute node's CPU, RAM and disk amounts to 0. self.compute_node.vcpus = 0 self.compute_node.memory_mb = 0 self.compute_node.local_gb = 0 self.client.set_inventory_for_provider( self.context, self.compute_uuid, self.compute_node_to_inventory_dict()) # Check there's no more inventory records resp = self.client.get(inventory_url) inventory_data = resp.json()['inventories'] self.assertEqual({}, inventory_data) # Build the provider tree afresh. 
ptree = self.client.get_provider_tree_and_ensure_root( self.context, self.compute_uuid) # The compute node is still there self.assertEqual([self.compute_uuid], ptree.get_provider_uuids()) # But the inventory is gone self.assertFalse(ptree.has_inventory(self.compute_uuid)) def test_global_request_id(self): global_request_id = 'req-%s' % uuids.global_request_id def fake_request(*args, **kwargs): self.assertEqual(global_request_id, kwargs['headers']['X-OpenStack-Request-ID']) with mock.patch.object(self.client._client, 'request', side_effect=fake_request): self.client._delete_provider(self.compute_uuid, global_request_id=global_request_id) payload = { 'name': 'test-resource-provider' } self.client.post('/resource_providers', payload, global_request_id=global_request_id) self.client.put('/resource_providers/%s' % self.compute_uuid, payload, global_request_id=global_request_id) self.client.get('/resource_providers/%s' % self.compute_uuid, global_request_id=global_request_id) def test_get_provider_tree_with_nested_and_aggregates(self): r"""A more in-depth test of get_provider_tree_and_ensure_root with nested and sharing resource providers. ss1(DISK) ss2(DISK) ss3(DISK) agg_disk_1 \ / agg_disk_2 | agg_disk_3 cn(VCPU,MEM,DISK) x / \ pf1(VF,BW) pf2(VF,BW) sbw(BW) agg_ip \ / agg_ip | agg_bw sip(IP) x """ # Register the compute node and its inventory self.client._ensure_resource_provider( self.context, self.compute_uuid, name=self.compute_name) self.client.set_inventory_for_provider( self.context, self.compute_uuid, self.compute_node_to_inventory_dict()) # The compute node is associated with two of the shared storages self.client.set_aggregates_for_provider( self.context, self.compute_uuid, set([uuids.agg_disk_1, uuids.agg_disk_2])) # Register two SR-IOV PFs with VF and bandwidth inventory for x in (1, 2): name = 'pf%d' % x uuid = getattr(uuids, name) self.client._ensure_resource_provider( self.context, uuid, name=name, parent_provider_uuid=self.compute_uuid) self.client.set_inventory_for_provider( self.context, uuid, { orc.SRIOV_NET_VF: { 'total': 24 * x, 'reserved': x, 'min_unit': 1, 'max_unit': 24 * x, 'step_size': 1, 'allocation_ratio': 1.0, }, 'CUSTOM_BANDWIDTH': { 'total': 125000 * x, 'reserved': 1000 * x, 'min_unit': 5000, 'max_unit': 25000 * x, 'step_size': 5000, 'allocation_ratio': 1.0, }, }) # They're associated with an IP address aggregate self.client.set_aggregates_for_provider(self.context, uuid, [uuids.agg_ip]) # Set some traits on 'em self.client.set_traits_for_provider( self.context, uuid, ['CUSTOM_PHYSNET_%d' % x]) # Register three shared storage pools with disk inventory for x in (1, 2, 3): name = 'ss%d' % x uuid = getattr(uuids, name) self.client._ensure_resource_provider(self.context, uuid, name=name) self.client.set_inventory_for_provider( self.context, uuid, { orc.DISK_GB: { 'total': 100 * x, 'reserved': x, 'min_unit': 1, 'max_unit': 10 * x, 'step_size': 2, 'allocation_ratio': 10.0, }, }) # Mark as a sharing provider self.client.set_traits_for_provider( self.context, uuid, ['MISC_SHARES_VIA_AGGREGATE']) # Associate each with its own aggregate. The compute node is # associated with the first two (agg_disk_1 and agg_disk_2). 
agg = getattr(uuids, 'agg_disk_%d' % x) self.client.set_aggregates_for_provider(self.context, uuid, [agg]) # Register a shared IP address provider with IP address inventory self.client._ensure_resource_provider(self.context, uuids.sip, name='sip') self.client.set_inventory_for_provider( self.context, uuids.sip, { orc.IPV4_ADDRESS: { 'total': 128, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 1.0, }, }) # Mark as a sharing provider, and add another trait self.client.set_traits_for_provider( self.context, uuids.sip, set(['MISC_SHARES_VIA_AGGREGATE', 'CUSTOM_FOO'])) # It's associated with the same aggregate as both PFs self.client.set_aggregates_for_provider(self.context, uuids.sip, [uuids.agg_ip]) # Register a shared network bandwidth provider self.client._ensure_resource_provider(self.context, uuids.sbw, name='sbw') self.client.set_inventory_for_provider( self.context, uuids.sbw, { 'CUSTOM_BANDWIDTH': { 'total': 1250000, 'reserved': 10000, 'min_unit': 5000, 'max_unit': 250000, 'step_size': 5000, 'allocation_ratio': 8.0, }, }) # Mark as a sharing provider self.client.set_traits_for_provider( self.context, uuids.sbw, ['MISC_SHARES_VIA_AGGREGATE']) # It's associated with some other aggregate. self.client.set_aggregates_for_provider(self.context, uuids.sbw, [uuids.agg_bw]) # Setup is done. Grab the ProviderTree prov_tree = self.client.get_provider_tree_and_ensure_root( self.context, self.compute_uuid) # All providers show up because we used _ensure_resource_provider self.assertEqual(set([self.compute_uuid, uuids.ss1, uuids.ss2, uuids.pf1, uuids.pf2, uuids.sip, uuids.ss3, uuids.sbw]), set(prov_tree.get_provider_uuids())) # Narrow the field to just our compute subtree. self.assertEqual( set([self.compute_uuid, uuids.pf1, uuids.pf2]), set(prov_tree.get_provider_uuids(self.compute_uuid))) # Validate traits for a couple of providers self.assertFalse(prov_tree.have_traits_changed( uuids.pf2, ['CUSTOM_PHYSNET_2'])) self.assertFalse(prov_tree.have_traits_changed( uuids.sip, ['MISC_SHARES_VIA_AGGREGATE', 'CUSTOM_FOO'])) # Validate aggregates for a couple of providers self.assertFalse(prov_tree.have_aggregates_changed( uuids.sbw, [uuids.agg_bw])) self.assertFalse(prov_tree.have_aggregates_changed( self.compute_uuid, [uuids.agg_disk_1, uuids.agg_disk_2])) def test__set_inventory_reserved_eq_total(self): # Create the provider self.client._ensure_resource_provider(self.context, uuids.cn) # Make sure we can set reserved value equal to total inv = { orc.SRIOV_NET_VF: { 'total': 24, 'reserved': 24, 'min_unit': 1, 'max_unit': 24, 'step_size': 1, 'allocation_ratio': 1.0, }, } self.client.set_inventory_for_provider( self.context, uuids.cn, inv) self.assertEqual( inv, self.client._get_inventory( self.context, uuids.cn)['inventories']) def test_set_inventory_for_provider(self): """Tests for SchedulerReportClient.set_inventory_for_provider.""" inv = { orc.SRIOV_NET_VF: { 'total': 24, 'reserved': 1, 'min_unit': 1, 'max_unit': 24, 'step_size': 1, 'allocation_ratio': 1.0, }, } # Provider doesn't exist in our cache self.assertRaises( ValueError, self.client.set_inventory_for_provider, self.context, uuids.cn, inv) self.assertIsNone(self.client._get_inventory( self.context, uuids.cn)) # Create the provider self.client._ensure_resource_provider(self.context, uuids.cn) # Still no inventory, but now we don't get a 404 self.assertEqual( {}, self.client._get_inventory( self.context, uuids.cn)['inventories']) # Now set the inventory self.client.set_inventory_for_provider( self.context, 
uuids.cn, inv) self.assertEqual( inv, self.client._get_inventory( self.context, uuids.cn)['inventories']) # Make sure we can change it inv = { orc.SRIOV_NET_VF: { 'total': 24, 'reserved': 1, 'min_unit': 1, 'max_unit': 24, 'step_size': 1, 'allocation_ratio': 1.0, }, orc.IPV4_ADDRESS: { 'total': 128, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 1.0, }, } self.client.set_inventory_for_provider( self.context, uuids.cn, inv) self.assertEqual( inv, self.client._get_inventory( self.context, uuids.cn)['inventories']) # Create custom resource classes on the fly self.assertFalse( self.client.get('/resource_classes/CUSTOM_BANDWIDTH', version='1.2')) inv = { orc.SRIOV_NET_VF: { 'total': 24, 'reserved': 1, 'min_unit': 1, 'max_unit': 24, 'step_size': 1, 'allocation_ratio': 1.0, }, orc.IPV4_ADDRESS: { 'total': 128, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 1.0, }, 'CUSTOM_BANDWIDTH': { 'total': 1250000, 'reserved': 10000, 'min_unit': 5000, 'max_unit': 250000, 'step_size': 5000, 'allocation_ratio': 8.0, }, } self.client.set_inventory_for_provider( self.context, uuids.cn, inv) self.assertEqual( inv, self.client._get_inventory( self.context, uuids.cn)['inventories']) # The custom resource class got created. self.assertTrue( self.client.get('/resource_classes/CUSTOM_BANDWIDTH', version='1.2')) # Creating a bogus resource class raises the appropriate exception. bogus_inv = dict(inv) bogus_inv['CUSTOM_BOGU$$'] = { 'total': 1, 'reserved': 1, 'min_unit': 1, 'max_unit': 1, 'step_size': 1, 'allocation_ratio': 1.0, } self.assertRaises( exception.InvalidResourceClass, self.client.set_inventory_for_provider, self.context, uuids.cn, bogus_inv) self.assertFalse( self.client.get('/resource_classes/BOGUS')) self.assertEqual( inv, self.client._get_inventory( self.context, uuids.cn)['inventories']) # Create a generation conflict by doing an "out of band" update oob_inv = { orc.IPV4_ADDRESS: { 'total': 128, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 1.0, }, } gen = self.client._provider_tree.data(uuids.cn).generation self.assertTrue( self.client.put( '/resource_providers/%s/inventories' % uuids.cn, {'resource_provider_generation': gen, 'inventories': oob_inv})) self.assertEqual( oob_inv, self.client._get_inventory( self.context, uuids.cn)['inventories']) # Now try to update again. inv = { orc.SRIOV_NET_VF: { 'total': 24, 'reserved': 1, 'min_unit': 1, 'max_unit': 24, 'step_size': 1, 'allocation_ratio': 1.0, }, 'CUSTOM_BANDWIDTH': { 'total': 1250000, 'reserved': 10000, 'min_unit': 5000, 'max_unit': 250000, 'step_size': 5000, 'allocation_ratio': 8.0, }, } # Cached generation is off, so this will bounce with a conflict. self.assertRaises( exception.ResourceProviderUpdateConflict, self.client.set_inventory_for_provider, self.context, uuids.cn, inv) # Inventory still corresponds to the out-of-band update self.assertEqual( oob_inv, self.client._get_inventory( self.context, uuids.cn)['inventories']) # Force refresh to get the latest generation self.client._refresh_and_get_inventory(self.context, uuids.cn) # Now the update should work self.client.set_inventory_for_provider( self.context, uuids.cn, inv) self.assertEqual( inv, self.client._get_inventory( self.context, uuids.cn)['inventories']) payload = { "allocations": { uuids.cn: {"resources": {orc.SRIOV_NET_VF: 1}} }, "project_id": uuids.proj, "user_id": uuids.user, "consumer_generation": None } # Now set up an InventoryInUse case by creating a VF allocation... 
self.assertTrue( self.client.put_allocations( self.context, uuids.consumer, payload)) # ...and trying to delete the provider's VF inventory bad_inv = { 'CUSTOM_BANDWIDTH': { 'total': 1250000, 'reserved': 10000, 'min_unit': 5000, 'max_unit': 250000, 'step_size': 5000, 'allocation_ratio': 8.0, }, } # Allocation bumped the generation, so refresh to get the latest self.client._refresh_and_get_inventory(self.context, uuids.cn) msgre = (".*update conflict: Inventory for 'SRIOV_NET_VF' on " "resource provider '%s' in use..*" % uuids.cn) with self.assertRaisesRegex(exception.InventoryInUse, msgre): self.client.set_inventory_for_provider(self.context, uuids.cn, bad_inv) self.assertEqual( inv, self.client._get_inventory( self.context, uuids.cn)['inventories']) # Same result if we try to clear all the inventory bad_inv = {} with self.assertRaisesRegex(exception.InventoryInUse, msgre): self.client.set_inventory_for_provider(self.context, uuids.cn, bad_inv) self.assertEqual( inv, self.client._get_inventory( self.context, uuids.cn)['inventories']) # Remove the allocation to make it work self.client.delete('/allocations/' + uuids.consumer) # Force refresh to get the latest generation self.client._refresh_and_get_inventory(self.context, uuids.cn) inv = {} self.client.set_inventory_for_provider( self.context, uuids.cn, inv) self.assertEqual( inv, self.client._get_inventory( self.context, uuids.cn)['inventories']) def test_update_from_provider_tree(self): """A "realistic" walk through the lifecycle of a compute node provider tree. """ # NOTE(efried): We can use the same ProviderTree throughout, since # update_from_provider_tree doesn't change it. new_tree = provider_tree.ProviderTree() def assert_ptrees_equal(): uuids = set(self.client._provider_tree.get_provider_uuids()) self.assertEqual(uuids, set(new_tree.get_provider_uuids())) for uuid in uuids: cdata = self.client._provider_tree.data(uuid) ndata = new_tree.data(uuid) self.assertEqual(ndata.name, cdata.name) self.assertEqual(ndata.parent_uuid, cdata.parent_uuid) self.assertFalse( new_tree.has_inventory_changed(uuid, cdata.inventory)) self.assertFalse( new_tree.have_traits_changed(uuid, cdata.traits)) self.assertFalse( new_tree.have_aggregates_changed(uuid, cdata.aggregates)) # Do these with a failing request method to prove no API calls are made with mock.patch.object(self.placement_client, 'request', mock.NonCallableMock()): # To begin with, the cache should be empty self.assertEqual( [], self.client._provider_tree.get_provider_uuids()) # When new_tree is empty, it's a no-op. 
self.client.update_from_provider_tree(self.context, new_tree) assert_ptrees_equal() # Populate with a provider with no inventories, aggregates, traits new_tree.new_root('root', uuids.root) self.client.update_from_provider_tree(self.context, new_tree) assert_ptrees_equal() # Throw in some more providers, in various spots in the tree, with # some sub-properties new_tree.new_child('child1', uuids.root, uuid=uuids.child1) new_tree.update_aggregates('child1', [uuids.agg1, uuids.agg2]) new_tree.new_child('grandchild1_1', uuids.child1, uuid=uuids.gc1_1) new_tree.update_traits(uuids.gc1_1, ['CUSTOM_PHYSNET_2']) new_tree.new_root('ssp', uuids.ssp) new_tree.update_inventory('ssp', { orc.DISK_GB: { 'total': 100, 'reserved': 1, 'min_unit': 1, 'max_unit': 10, 'step_size': 2, 'allocation_ratio': 10.0, }, }) self.client.update_from_provider_tree(self.context, new_tree) assert_ptrees_equal() # Swizzle properties # Give the root some everything new_tree.update_inventory(uuids.root, { orc.VCPU: { 'total': 10, 'reserved': 0, 'min_unit': 1, 'max_unit': 2, 'step_size': 1, 'allocation_ratio': 10.0, }, orc.MEMORY_MB: { 'total': 1048576, 'reserved': 2048, 'min_unit': 1024, 'max_unit': 131072, 'step_size': 1024, 'allocation_ratio': 1.0, }, }) new_tree.update_aggregates(uuids.root, [uuids.agg1]) new_tree.update_traits(uuids.root, ['HW_CPU_X86_AVX', 'HW_CPU_X86_AVX2']) # Take away the child's aggregates new_tree.update_aggregates(uuids.child1, []) # Grandchild gets some inventory ipv4_inv = { orc.IPV4_ADDRESS: { 'total': 128, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 1.0, }, } new_tree.update_inventory('grandchild1_1', ipv4_inv) # Shared storage provider gets traits new_tree.update_traits('ssp', set(['MISC_SHARES_VIA_AGGREGATE', 'STORAGE_DISK_SSD'])) self.client.update_from_provider_tree(self.context, new_tree) assert_ptrees_equal() # Let's go for some error scenarios. # Add inventory in an invalid resource class new_tree.update_inventory( 'grandchild1_1', dict(ipv4_inv, MOTSUC_BANDWIDTH={ 'total': 1250000, 'reserved': 10000, 'min_unit': 5000, 'max_unit': 250000, 'step_size': 5000, 'allocation_ratio': 8.0, })) self.assertRaises( exception.ResourceProviderSyncFailed, self.client.update_from_provider_tree, self.context, new_tree) # The inventory update didn't get synced. 
self.assertIsNone(self.client._get_inventory( self.context, uuids.grandchild1_1)) # We invalidated the cache for the entire tree around grandchild1_1 # but did not invalidate the other root (the SSP) self.assertEqual([uuids.ssp], self.client._provider_tree.get_provider_uuids()) # This is a little under-the-hood-looking, but make sure we cleared # the association refresh timers for everything in the grandchild's # tree self.assertEqual(set([uuids.ssp]), set(self.client._association_refresh_time)) # Fix that problem so we can try the next one new_tree.update_inventory( 'grandchild1_1', dict(ipv4_inv, CUSTOM_BANDWIDTH={ 'total': 1250000, 'reserved': 10000, 'min_unit': 5000, 'max_unit': 250000, 'step_size': 5000, 'allocation_ratio': 8.0, })) # Add a bogus trait new_tree.update_traits(uuids.root, ['HW_CPU_X86_AVX', 'HW_CPU_X86_AVX2', 'MOTSUC_FOO']) self.assertRaises( exception.ResourceProviderSyncFailed, self.client.update_from_provider_tree, self.context, new_tree) # Placement didn't get updated self.assertEqual(set(['HW_CPU_X86_AVX', 'HW_CPU_X86_AVX2']), self.client.get_provider_traits( self.context, uuids.root).traits) # ...and the root was removed from the cache self.assertFalse(self.client._provider_tree.exists(uuids.root)) # Fix that problem new_tree.update_traits(uuids.root, ['HW_CPU_X86_AVX', 'HW_CPU_X86_AVX2', 'CUSTOM_FOO']) # Now the sync should work self.client.update_from_provider_tree(self.context, new_tree) assert_ptrees_equal() # Let's cause a conflict error by doing an "out-of-band" update gen = self.client._provider_tree.data(uuids.ssp).generation self.assertTrue(self.client.put( '/resource_providers/%s/traits' % uuids.ssp, {'resource_provider_generation': gen, 'traits': ['MISC_SHARES_VIA_AGGREGATE', 'STORAGE_DISK_HDD']}, version='1.6')) # Now if we try to modify the traits, we should fail and invalidate # the cache... new_tree.update_traits(uuids.ssp, ['MISC_SHARES_VIA_AGGREGATE', 'STORAGE_DISK_SSD', 'CUSTOM_FAST']) self.assertRaises( exception.ResourceProviderUpdateConflict, self.client.update_from_provider_tree, self.context, new_tree) # ...but the next iteration will refresh the cache with the latest # generation and so the next attempt should succeed. self.client.update_from_provider_tree(self.context, new_tree) # The out-of-band change is blown away, as it should be. 
assert_ptrees_equal() # Let's delete some stuff new_tree.remove(uuids.ssp) self.assertFalse(new_tree.exists('ssp')) # Verify that placement communication failure raises through with mock.patch.object(self.client, '_delete_provider', side_effect=kse.EndpointNotFound): self.assertRaises( kse.ClientException, self.client.update_from_provider_tree, self.context, new_tree) # The provider didn't get deleted (this doesn't raise # ResourceProviderNotFound) self.client.get_provider_by_name(self.context, 'ssp') # Continue removing stuff new_tree.remove('child1') self.assertFalse(new_tree.exists('child1')) # Removing a node removes its descendants too self.assertFalse(new_tree.exists('grandchild1_1')) self.client.update_from_provider_tree(self.context, new_tree) assert_ptrees_equal() # Remove the last provider new_tree.remove(uuids.root) self.assertEqual([], new_tree.get_provider_uuids()) self.client.update_from_provider_tree(self.context, new_tree) assert_ptrees_equal() # Having removed the providers this way, they ought to be gone # from placement for uuid in (uuids.root, uuids.child1, uuids.grandchild1_1, uuids.ssp): resp = self.client.get('/resource_providers/%s' % uuid) self.assertEqual(404, resp.status_code) def test_non_tree_aggregate_membership(self): """There are some methods of the reportclient that interact with the reportclient's provider_tree cache of information on a best-effort basis. These methods are called to add and remove members from a nova host aggregate and ensure that the placement API has a mirrored record of the resource provider's aggregate associations. We want to simulate this use case by invoking these methods with an empty cache and making sure it never gets populated (and we don't raise ValueError). """ agg_uuid = uuids.agg # get_provider_tree_and_ensure_root creates a resource provider # record for us ptree = self.client.get_provider_tree_and_ensure_root( self.context, self.compute_uuid, name=self.compute_name) self.assertEqual([self.compute_uuid], ptree.get_provider_uuids()) # Now blow away the cache so we can ensure the use_cache=False # behavior of aggregate_{add|remove}_host correctly ignores and/or # doesn't attempt to populate/update it. self.client._provider_tree.remove(self.compute_uuid) self.assertEqual( [], self.client._provider_tree.get_provider_uuids()) # Use the reportclient's _get_provider_aggregates() private method # to verify no aggregates are yet associated with this provider aggs = self.client._get_provider_aggregates( self.context, self.compute_uuid).aggregates self.assertEqual(set(), aggs) # Now associate the compute **host name** with an aggregate and # ensure the aggregate association is saved properly self.client.aggregate_add_host( self.context, agg_uuid, host_name=self.compute_name) # Check that the ProviderTree cache hasn't been modified (since # the aggregate_add_host() method is only called from nova-api and # we don't want to have a ProviderTree cache at that layer. 
self.assertEqual( [], self.client._provider_tree.get_provider_uuids()) aggs = self.client._get_provider_aggregates( self.context, self.compute_uuid).aggregates self.assertEqual(set([agg_uuid]), aggs) # Finally, remove the association and verify it's removed in # placement self.client.aggregate_remove_host( self.context, agg_uuid, self.compute_name) self.assertEqual( [], self.client._provider_tree.get_provider_uuids()) aggs = self.client._get_provider_aggregates( self.context, self.compute_uuid).aggregates self.assertEqual(set(), aggs) # Try removing the same host and verify no error self.client.aggregate_remove_host( self.context, agg_uuid, self.compute_name) self.assertEqual( [], self.client._provider_tree.get_provider_uuids()) def test_alloc_cands_smoke(self): """Simple call to get_allocation_candidates for version checking.""" flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0) req_spec = objects.RequestSpec(flavor=flavor, is_bfv=False) self.client.get_allocation_candidates( self.context, utils.ResourceRequest(req_spec)) def _set_up_provider_tree(self): r"""Create two compute nodes in placement ("this" one, and another one) and a storage provider sharing with both. +-----------------------+ +------------------------+ |uuid: self.compute_uuid| |uuid: uuids.ssp | |name: self.compute_name| |name: 'ssp' | |inv: MEMORY_MB=2048 |......|inv: DISK_GB=500 |... | SRIOV_NET_VF=2 | agg1 |traits: [MISC_SHARES...]| . |aggs: [uuids.agg1] | |aggs: [uuids.agg1] | . agg1 +-----------------------+ +------------------------+ . / \ . +-------------------+ +-------------------+ +-------------------+ |uuid: uuids.numa1 | |uuid: uuids.numa2 | |uuid: uuids.othercn| |name: 'numa1' | |name: 'numa2' | |name: 'othercn' | |inv: VCPU=8 | |inv: VCPU=8 | |inv: VCPU=8 | | CUSTOM_PCPU=8 | | CUSTOM_PCPU=8 | | MEMORY_MB=1024| | SRIOV_NET_VF=4| | SRIOV_NET_VF=4| |aggs: [uuids.agg1] | +-------------------+ +-------------------+ +-------------------+ Returns a dict, keyed by provider UUID, of the expected shape of the provider tree, as expected by the expected_dict param of assertProviderTree. """ ret = {} # get_provider_tree_and_ensure_root creates a resource provider # record for us ptree = self.client.get_provider_tree_and_ensure_root( self.context, self.compute_uuid, name=self.compute_name) inv = dict(MEMORY_MB={'total': 2048}, SRIOV_NET_VF={'total': 2}) ptree.update_inventory(self.compute_uuid, inv) ptree.update_aggregates(self.compute_uuid, [uuids.agg1]) ret[self.compute_uuid] = dict( name=self.compute_name, parent_uuid=None, inventory=inv, aggregates=set([uuids.agg1]), traits=set() ) # These are part of the compute node's tree ptree.new_child('numa1', self.compute_uuid, uuid=uuids.numa1) inv = dict(VCPU={'total': 8}, CUSTOM_PCPU={'total': 8}, SRIOV_NET_VF={'total': 4}) ptree.update_inventory('numa1', inv) ret[uuids.numa1] = dict( name='numa1', parent_uuid=self.compute_uuid, inventory=inv, aggregates=set(), traits=set(), ) ptree.new_child('numa2', self.compute_uuid, uuid=uuids.numa2) ptree.update_inventory('numa2', inv) ret[uuids.numa2] = dict( name='numa2', parent_uuid=self.compute_uuid, inventory=inv, aggregates=set(), traits=set(), ) # A sharing provider that's not part of the compute node's tree. 
ptree.new_root('ssp', uuids.ssp) inv = dict(DISK_GB={'total': 500}) ptree.update_inventory(uuids.ssp, inv) # Part of the shared storage aggregate ptree.update_aggregates(uuids.ssp, [uuids.agg1]) ptree.update_traits(uuids.ssp, ['MISC_SHARES_VIA_AGGREGATE']) ret[uuids.ssp] = dict( name='ssp', parent_uuid=None, inventory=inv, aggregates=set([uuids.agg1]), traits=set(['MISC_SHARES_VIA_AGGREGATE']) ) self.client.update_from_provider_tree(self.context, ptree) # Another unrelated compute node. We don't use the report client's # convenience methods because we don't want this guy in the cache. resp = self.client.post( '/resource_providers', {'uuid': uuids.othercn, 'name': 'othercn'}, version='1.20') resp = self.client.put( '/resource_providers/%s/inventories' % uuids.othercn, {'inventories': {'VCPU': {'total': 8}, 'MEMORY_MB': {'total': 1024}}, 'resource_provider_generation': resp.json()['generation']}) # Part of the shared storage aggregate self.client.put( '/resource_providers/%s/aggregates' % uuids.othercn, {'aggregates': [uuids.agg1], 'resource_provider_generation': resp.json()['resource_provider_generation']}, version='1.19') return ret def assertProviderTree(self, expected_dict, actual_tree): # expected_dict is of the form: # { rp_uuid: { # 'parent_uuid': ..., # 'inventory': {...}, # 'aggregates': set(...), # 'traits': set(...), # } # } # actual_tree is a ProviderTree # Same UUIDs self.assertEqual(set(expected_dict), set(actual_tree.get_provider_uuids())) for uuid, pdict in expected_dict.items(): actual_data = actual_tree.data(uuid) # Fields existing on the `expected` object are the only ones we # care to check. for k, expected in pdict.items(): # For inventories, we're only validating totals if k == 'inventory': self.assertEqual( set(expected), set(actual_data.inventory), "Mismatched inventory keys for provider %s" % uuid) for rc, totaldict in expected.items(): self.assertEqual( totaldict['total'], actual_data.inventory[rc]['total'], "Mismatched inventory totals for provider %s" % uuid) else: self.assertEqual(expected, getattr(actual_data, k), "Mismatched %s for provider %s" % (k, uuid)) def _set_up_provider_tree_allocs(self): """Create some allocations on our compute (with sharing).""" ret = { uuids.cn_inst1: { 'allocations': { self.compute_uuid: {'resources': {'MEMORY_MB': 512, 'SRIOV_NET_VF': 1}}, uuids.numa1: {'resources': {'VCPU': 2, 'CUSTOM_PCPU': 2}}, uuids.ssp: {'resources': {'DISK_GB': 100}} }, 'consumer_generation': None, 'project_id': uuids.proj, 'user_id': uuids.user, }, uuids.cn_inst2: { 'allocations': { self.compute_uuid: {'resources': {'MEMORY_MB': 256}}, uuids.numa2: {'resources': {'CUSTOM_PCPU': 1, 'SRIOV_NET_VF': 1}}, uuids.ssp: {'resources': {'DISK_GB': 50}} }, 'consumer_generation': None, 'project_id': uuids.proj, 'user_id': uuids.user, }, } self.client.put('/allocations/' + uuids.cn_inst1, ret[uuids.cn_inst1], version='1.28') self.client.put('/allocations/' + uuids.cn_inst2, ret[uuids.cn_inst2], version='1.28') # And on the other compute (with sharing) self.client.put( '/allocations/' + uuids.othercn_inst, {'allocations': { uuids.othercn: {'resources': {'VCPU': 2, 'MEMORY_MB': 64}}, uuids.ssp: {'resources': {'DISK_GB': 30}} }, 'consumer_generation': None, 'project_id': uuids.proj, 'user_id': uuids.user, }, version='1.28') return ret def assertAllocations(self, expected, actual): """Compare the parts we care about in two dicts, keyed by consumer UUID, of allocation information. 
We don't care about comparing generations """ # Same consumers self.assertEqual(set(expected), set(actual)) # We're going to mess with these, to make life easier, so copy them expected = copy.deepcopy(expected) actual = copy.deepcopy(actual) for allocs in list(expected.values()) + list(actual.values()): del allocs['consumer_generation'] for alloc in allocs['allocations'].values(): if 'generation' in alloc: del alloc['generation'] self.assertEqual(expected, actual) def test_allocation_candidates_mappings(self): """Do a complex GET /allocation_candidates query and make sure the response contains the ``mappings`` keys we expect at Placement 1.34. """ flavor = objects.Flavor( vcpus=0, memory_mb=0, root_gb=0, ephemeral_gb=0, swap=0, extra_specs={ 'group_policy': 'none', 'resources_CPU:VCPU': 1, 'resources_MEM:MEMORY_MB': 1024, 'resources_DISK:DISK_GB': 10 }) req_spec = objects.RequestSpec(flavor=flavor, is_bfv=False) self._set_up_provider_tree() acs = self.client.get_allocation_candidates( self.context, utils.ResourceRequest(req_spec))[0] # We're not going to validate all the allocations - Placement has # tests for that - just make sure they're there. self.assertEqual(3, len(acs)) # We're not going to validate all the mappings - Placement has # tests for that - just make sure they're there. for ac in acs: self.assertIn('allocations', ac) self.assertEqual({'_CPU', '_MEM', '_DISK'}, set(ac['mappings'])) # One data element is: # root_required: set of traits for root_required # root_forbidden: set of traits for root_forbidden # expected_acs: integer expected number of allocation candidates @ddt.data( # With no root_required qparam, we get two candidates involving the # primary compute and one involving `othercn`. (This is covered # elsewhere too, but included here as a baseline). {'root_required': {}, 'root_forbidden': {}, 'expected_acs': 3}, # Requiring traits that are on our primary compute culls out the result # involving `othercn`. {'root_required': {ot.COMPUTE_VOLUME_EXTEND, 'CUSTOM_FOO'}, 'root_forbidden': {}, 'expected_acs': 2}, # Forbidding a trait that's absent doesn't matter {'root_required': {ot.COMPUTE_VOLUME_EXTEND, 'CUSTOM_FOO'}, 'root_forbidden': {ot.COMPUTE_VOLUME_MULTI_ATTACH}, 'expected_acs': 2}, # But forbidding COMPUTE_STATUS_DISABLED kills it {'root_required': {ot.COMPUTE_VOLUME_EXTEND, 'CUSTOM_FOO'}, 'root_forbidden': {ot.COMPUTE_VOLUME_MULTI_ATTACH, ot.COMPUTE_STATUS_DISABLED}, 'expected_acs': 0}, # Removing the required traits brings back the candidate involving # `othercn`. But the primary compute is still filtered out by the # root_forbidden. {'root_required': {}, 'root_forbidden': {ot.COMPUTE_VOLUME_MULTI_ATTACH, ot.COMPUTE_STATUS_DISABLED}, 'expected_acs': 1}, ) def test_allocation_candidates_root_traits(self, data): """Smoke test to ensure root_{required|forbidden} percolates from the RequestSpec through to the GET /allocation_candidates response. We're not trying to prove that root_required works -- Placement tests for that -- we're just throwing out enough permutations to prove that we must be building the querystring correctly. 
""" flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0) req_spec = objects.RequestSpec(flavor=flavor, is_bfv=False) req_spec.root_required.update(data['root_required']) req_spec.root_forbidden.update(data['root_forbidden']) self._set_up_provider_tree() self.client.set_traits_for_provider( self.context, self.compute_uuid, (ot.COMPUTE_STATUS_DISABLED, ot.COMPUTE_VOLUME_EXTEND, 'CUSTOM_FOO')) acs, _, ver = self.client.get_allocation_candidates( self.context, utils.ResourceRequest(req_spec)) self.assertEqual('1.35', ver) # This prints which ddt permutation we're using if it fails. self.assertEqual(data['expected_acs'], len(acs), data) def test_get_allocations_for_provider_tree(self): # When the provider tree cache is empty (or we otherwise supply a # bogus node name), we get ValueError. self.assertRaises(ValueError, self.client.get_allocations_for_provider_tree, self.context, 'bogus') self._set_up_provider_tree() # At this point, there are no allocations self.assertEqual({}, self.client.get_allocations_for_provider_tree( self.context, self.compute_name)) expected = self._set_up_provider_tree_allocs() # And now we should get all the right allocations. Note that we see # nothing from othercn_inst. actual = self.client.get_allocations_for_provider_tree( self.context, self.compute_name) self.assertAllocations(expected, actual) def test_reshape(self): """Smoke test the report client shim for the reshaper API.""" # Simulate placement API communication failure with mock.patch.object( self.client, 'post', side_effect=kse.MissingAuthPlugin): self.assertRaises(kse.ClientException, self.client._reshape, self.context, {}, {}) # Invalid payload (empty inventories) results in a 409, which the # report client converts to ReshapeFailed try: self.client._reshape(self.context, {}, {}) except exception.ReshapeFailed as e: self.assertIn('JSON does not validate: {} does not have ' 'enough properties', e.kwargs['error']) # Okay, do some real stuffs. We're just smoke-testing that we can # hit a good path to the API here; real testing of the API happens # in gabbits and via update_from_provider_tree. self._set_up_provider_tree() self._set_up_provider_tree_allocs() # Updating allocations bumps generations for affected providers. # In real life, the subsequent update_from_provider_tree will # bounce 409, the cache will be cleared, and the operation will be # retried. We don't care about any of that retry logic in the scope # of this test case, so just clear the cache so # get_provider_tree_and_ensure_root repopulates it and we avoid the # conflict exception. 
self.client.clear_provider_cache() ptree = self.client.get_provider_tree_and_ensure_root( self.context, self.compute_uuid) inventories = {} for rp_uuid in ptree.get_provider_uuids(): data = ptree.data(rp_uuid) # Add a new resource class to the inventories inventories[rp_uuid] = { "inventories": dict(data.inventory, CUSTOM_FOO={'total': 10}), "resource_provider_generation": data.generation } allocs = self.client.get_allocations_for_provider_tree( self.context, self.compute_name) for alloc in allocs.values(): for res in alloc['allocations'].values(): res['resources']['CUSTOM_FOO'] = 1 resp = self.client._reshape(self.context, inventories, allocs) self.assertEqual(204, resp.status_code) def test_update_from_provider_tree_reshape(self): """Run update_from_provider_tree with reshaping.""" exp_ptree = self._set_up_provider_tree() # Save a copy of this for later orig_exp_ptree = copy.deepcopy(exp_ptree) # A null reshape: no inv changes, empty allocs ptree = self.client.get_provider_tree_and_ensure_root( self.context, self.compute_uuid) allocs = self.client.get_allocations_for_provider_tree( self.context, self.compute_name) self.assertProviderTree(exp_ptree, ptree) self.assertAllocations({}, allocs) self.client.update_from_provider_tree(self.context, ptree, allocations=allocs) exp_allocs = self._set_up_provider_tree_allocs() # Save a copy of this for later orig_exp_allocs = copy.deepcopy(exp_allocs) # Updating allocations bumps generations for affected providers. # In real life, the subsequent update_from_provider_tree will # bounce 409, the cache will be cleared, and the operation will be # retried. We don't care about any of that retry logic in the scope # of this test case, so just clear the cache so # get_provider_tree_and_ensure_root repopulates it and we avoid the # conflict exception. self.client.clear_provider_cache() # Another null reshape: no inv changes, no alloc changes ptree = self.client.get_provider_tree_and_ensure_root( self.context, self.compute_uuid) allocs = self.client.get_allocations_for_provider_tree( self.context, self.compute_name) self.assertProviderTree(exp_ptree, ptree) self.assertAllocations(exp_allocs, allocs) self.client.update_from_provider_tree(self.context, ptree, allocations=allocs) # Now a reshape that adds an inventory item to all the providers in # the provider tree (i.e. the "local" ones and the shared one, but # not the othercn); and an allocation of that resource only for the # local instances, and only on providers that already have # allocations (i.e. the compute node and sharing provider for both # cn_inst*, and numa1 for cn_inst1 and numa2 for cn_inst2). 
ptree = self.client.get_provider_tree_and_ensure_root( self.context, self.compute_uuid) allocs = self.client.get_allocations_for_provider_tree( self.context, self.compute_name) self.assertProviderTree(exp_ptree, ptree) self.assertAllocations(exp_allocs, allocs) for rp_uuid in ptree.get_provider_uuids(): # Add a new resource class to the inventories ptree.update_inventory( rp_uuid, dict(ptree.data(rp_uuid).inventory, CUSTOM_FOO={'total': 10})) exp_ptree[rp_uuid]['inventory']['CUSTOM_FOO'] = { 'total': 10} for c_uuid, alloc in allocs.items(): for rp_uuid, res in alloc['allocations'].items(): res['resources']['CUSTOM_FOO'] = 1 exp_allocs[c_uuid]['allocations'][rp_uuid][ 'resources']['CUSTOM_FOO'] = 1 self.client.update_from_provider_tree(self.context, ptree, allocations=allocs) # Let's do a big transform that stuffs everything back onto the # compute node ptree = self.client.get_provider_tree_and_ensure_root( self.context, self.compute_uuid) allocs = self.client.get_allocations_for_provider_tree( self.context, self.compute_name) self.assertProviderTree(exp_ptree, ptree) self.assertAllocations(exp_allocs, allocs) cum_inv = {} for rp_uuid in ptree.get_provider_uuids(): # Accumulate all the inventory amounts for each RC for rc, inv in ptree.data(rp_uuid).inventory.items(): if rc not in cum_inv: cum_inv[rc] = {'total': 0} cum_inv[rc]['total'] += inv['total'] # Remove all the providers except the compute node and the # shared storage provider, which still has (and shall # retain) allocations from the "other" compute node. # TODO(efried): But is that right? I should be able to # remove the SSP from *this* tree and have it continue to # exist in the world. But how would ufpt distinguish? if rp_uuid not in (self.compute_uuid, uuids.ssp): ptree.remove(rp_uuid) # Put the accumulated inventory onto the compute RP ptree.update_inventory(self.compute_uuid, cum_inv) # Cause trait and aggregate transformations too. ptree.update_aggregates(self.compute_uuid, set()) ptree.update_traits(self.compute_uuid, ['CUSTOM_ALL_IN_ONE']) exp_ptree = { self.compute_uuid: dict( parent_uuid = None, inventory = cum_inv, aggregates=set(), traits = set(['CUSTOM_ALL_IN_ONE']), ), uuids.ssp: dict( # Don't really care about the details parent_uuid=None, ), } # Let's inject an error path test here: attempting to reshape # inventories without having moved their allocations should fail. 
ex = self.assertRaises( exception.ReshapeFailed, self.client.update_from_provider_tree, self.context, ptree, allocations=allocs) self.assertIn('placement.inventory.inuse', ex.format_message()) # Move all the allocations off their existing providers and # onto the compute node for c_uuid, alloc in allocs.items(): cum_allocs = {} for rp_uuid, resources in alloc['allocations'].items(): # Accumulate all the allocations for each RC for rc, amount in resources['resources'].items(): if rc not in cum_allocs: cum_allocs[rc] = 0 cum_allocs[rc] += amount alloc['allocations'] = { # Put the accumulated allocations on the compute RP self.compute_uuid: {'resources': cum_allocs}} exp_allocs = copy.deepcopy(allocs) self.client.update_from_provider_tree(self.context, ptree, allocations=allocs) # Okay, let's transform back now ptree = self.client.get_provider_tree_and_ensure_root( self.context, self.compute_uuid) allocs = self.client.get_allocations_for_provider_tree( self.context, self.compute_name) self.assertProviderTree(exp_ptree, ptree) self.assertAllocations(exp_allocs, allocs) for rp_uuid, data in orig_exp_ptree.items(): if not ptree.exists(rp_uuid): # This should only happen for children, because the CN # and SSP are already there. ptree.new_child(data['name'], data['parent_uuid'], uuid=rp_uuid) ptree.update_inventory(rp_uuid, data['inventory']) ptree.update_traits(rp_uuid, data['traits']) ptree.update_aggregates(rp_uuid, data['aggregates']) for c_uuid, orig_allocs in orig_exp_allocs.items(): allocs[c_uuid]['allocations'] = orig_allocs['allocations'] self.client.update_from_provider_tree(self.context, ptree, allocations=allocs) ptree = self.client.get_provider_tree_and_ensure_root( self.context, self.compute_uuid) allocs = self.client.get_allocations_for_provider_tree( self.context, self.compute_name) self.assertProviderTree(orig_exp_ptree, ptree) self.assertAllocations(orig_exp_allocs, allocs) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_scheduler.py0000664000175000017500000000750200000000000022430 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova.compute import instance_actions from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit import fake_network import nova.tests.unit.image.fake as fake_image from nova.tests.unit import policy_fixture CELL1_NAME = 'cell1' CELL2_NAME = 'cell2' class MultiCellSchedulerTestCase(test.TestCase, integrated_helpers.InstanceHelperMixin): NUMBER_OF_CELLS = 2 def setUp(self): super(MultiCellSchedulerTestCase, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(nova_fixtures.AllServicesCurrent()) self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api self.admin_api = api_fixture.admin_api fake_network.set_stub_network_methods(self) self.flags(allow_resize_to_same_host=False) self.flags(enabled_filters=['AllHostsFilter'], group='filter_scheduler') self.start_service('conductor') self.start_service('scheduler') fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) def _test_create_and_migrate(self, expected_status, az=None): server = self._create_server(az=az) return self.admin_api.api_post( '/servers/%s/action' % server['id'], {'migrate': None}, check_response_status=[expected_status]), server def test_migrate_between_cells(self): """Verify that migrating between cells is not allowed. Right now, we can't migrate between cells. So, create two computes in different cells and make sure that migration fails with NoValidHost. """ # Hosts in different cells self.start_service('compute', host='compute1', cell_name=CELL1_NAME) self.start_service('compute', host='compute2', cell_name=CELL2_NAME) _, server = self._test_create_and_migrate(expected_status=202) # The instance action should have failed with details. self._assert_resize_migrate_action_fail( server, instance_actions.MIGRATE, 'NoValidHost') def test_migrate_within_cell(self): """Verify that migrating within cells is allowed. Create two computes in the same cell and validate that the same migration is allowed. """ # Hosts in the same cell self.start_service('compute', host='compute1', cell_name=CELL1_NAME) self.start_service('compute', host='compute2', cell_name=CELL1_NAME) # Create another host just so it looks like we have hosts in # both cells self.start_service('compute', host='compute3', cell_name=CELL2_NAME) # Force the server onto compute1 in cell1 so we do not accidentally # land on compute3 in cell2 and fail to migrate. _, server = self._test_create_and_migrate(expected_status=202, az='nova:compute1') self._wait_for_state_change(server, 'VERIFY_RESIZE') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_server_external_events.py0000664000175000017500000001134600000000000025247 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from nova.compute import instance_actions from nova.compute import power_state from nova.compute import vm_states from nova.tests.functional import integrated_helpers from nova.tests.unit import fake_notifier class ServerExternalEventsTestV276( integrated_helpers.ProviderUsageBaseTestCase): microversion = '2.76' compute_driver = 'fake.PowerUpdateFakeDriver' def setUp(self): super(ServerExternalEventsTestV276, self).setUp() self.compute = self.start_service('compute', host='compute') server_req = self._build_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', networks='none') created_server = self.api.post_server({'server': server_req}) self.server = self._wait_for_state_change(created_server, 'ACTIVE') self.power_off = {'name': 'power-update', 'tag': 'POWER_OFF', 'server_uuid': self.server["id"]} self.power_on = {'name': 'power-update', 'tag': 'POWER_ON', 'server_uuid': self.server["id"]} def test_server_power_update(self): # This test checks the functionality of handling the "power-update" # external events. self.assertEqual( power_state.RUNNING, self.server['OS-EXT-STS:power_state']) self.api.create_server_external_events(events=[self.power_off]) expected_params = {'OS-EXT-STS:task_state': None, 'OS-EXT-STS:vm_state': vm_states.STOPPED, 'OS-EXT-STS:power_state': power_state.SHUTDOWN} server = self._wait_for_server_parameter(self.server, expected_params) msg = ' with target power state POWER_OFF.' self.assertIn(msg, self.stdlog.logger.output) # Test if this is logged in the instance action list. actions = self.api.get_instance_actions(server['id']) self.assertEqual(2, len(actions)) acts = {action['action']: action for action in actions} self.assertEqual(['create', 'stop'], sorted(acts)) stop_action = acts[instance_actions.STOP] detail = self.api.get_instance_action_details(server['id'], stop_action['request_id']) events_by_name = {event['event']: event for event in detail['events']} self.assertEqual(1, len(detail['events']), detail) self.assertIn('compute_power_update', events_by_name) self.assertEqual('Success', detail['events'][0]['result']) # Test if notifications were emitted. fake_notifier.wait_for_versioned_notifications( 'instance.power_off.start') fake_notifier.wait_for_versioned_notifications( 'instance.power_off.end') # Checking POWER_ON self.api.create_server_external_events(events=[self.power_on]) expected_params = {'OS-EXT-STS:task_state': None, 'OS-EXT-STS:vm_state': vm_states.ACTIVE, 'OS-EXT-STS:power_state': power_state.RUNNING} server = self._wait_for_server_parameter(self.server, expected_params) msg = ' with target power state POWER_ON.' self.assertIn(msg, self.stdlog.logger.output) # Test if this is logged in the instance action list. actions = self.api.get_instance_actions(server['id']) self.assertEqual(3, len(actions)) acts = {action['action']: action for action in actions} self.assertEqual(['create', 'start', 'stop'], sorted(acts)) start_action = acts[instance_actions.START] detail = self.api.get_instance_action_details(server['id'], start_action['request_id']) events_by_name = {event['event']: event for event in detail['events']} self.assertEqual(1, len(detail['events']), detail) self.assertIn('compute_power_update', events_by_name) self.assertEqual('Success', detail['events'][0]['result']) # Test if notifications were emitted. 
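# A short note on the helper used below (hedged, based on the usual
# fake_notifier behaviour): wait_for_versioned_notifications() waits, up
# to a small timeout, for a captured versioned notification whose
# event_type matches the given string (e.g. 'instance.power_off.start'),
# so these calls double as checks that the notifications were sent.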
fake_notifier.wait_for_versioned_notifications( 'instance.power_on.start') fake_notifier.wait_for_versioned_notifications( 'instance.power_on.end') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_server_faults.py0000664000175000017500000001240200000000000023331 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as fake_image from nova.tests.unit import policy_fixture class HypervisorError(Exception): """This is just used to make sure the exception type is in the fault.""" pass class ServerFaultTestCase(test.TestCase, integrated_helpers.InstanceHelperMixin): """Tests for the server faults reporting from the API.""" def setUp(self): super(ServerFaultTestCase, self).setUp() # Setup the standard fixtures. fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) self.useFixture(policy_fixture.RealPolicyFixture()) # Start the compute services. self.start_service('conductor') self.start_service('scheduler') self.compute = self.start_service('compute') api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api self.admin_api = api_fixture.admin_api def test_server_fault_non_nova_exception(self): """Creates a server using the non-admin user, then reboots it which will generate a non-NovaException fault and put the instance into ERROR status. Then checks that fault details are only visible to the admin user. """ # Create the server with the non-admin user. server = self._build_server( networks=[{'port': nova_fixtures.NeutronFixture.port_1['id']}]) server = self.api.post_server({'server': server}) server = self._wait_for_state_change(server, 'ACTIVE') # Stop the server before rebooting it so that after the driver.reboot # method raises an exception, the fake driver does not report the # instance power state as running - that will make the compute manager # set the instance vm_state to error. self.api.post_server_action(server['id'], {'os-stop': None}) server = self._wait_for_state_change(server, 'SHUTOFF') # Stub out the compute driver reboot method to raise a non-nova # exception to simulate some error from the underlying hypervisor # which in this case we are going to say has sensitive content. error_msg = 'sensitive info' with mock.patch.object( self.compute.manager.driver, 'reboot', side_effect=HypervisorError(error_msg)) as mock_reboot: reboot_request = {'reboot': {'type': 'HARD'}} self.api.post_server_action(server['id'], reboot_request) # In this case we wait for the status to change to ERROR using # the non-admin user so we can assert the fault details. 
We also # wait for the task_state to be None since the wrap_instance_fault # decorator runs before the reverts_task_state decorator so we will # be sure the fault is set on the server. server = self._wait_for_server_parameter( server, {'status': 'ERROR', 'OS-EXT-STS:task_state': None}, api=self.api) mock_reboot.assert_called_once() # The server fault from the non-admin user API response should not # have details in it. self.assertIn('fault', server) fault = server['fault'] self.assertNotIn('details', fault) # And the sensitive details from the non-nova exception should not be # in the message. self.assertIn('message', fault) self.assertNotIn(error_msg, fault['message']) # The exception type class name should be in the message. self.assertIn('HypervisorError', fault['message']) # Get the server fault details for the admin user. server = self.admin_api.get_server(server['id']) fault = server['fault'] # The admin can see the fault details which includes the traceback. self.assertIn('details', fault) # The details also contain the exception message (which is not in the # fault message). self.assertIn(error_msg, fault['details']) # Make sure the traceback is there by looking for part of it. self.assertIn('in reboot_instance', fault['details']) # The exception type class name should be in the message for the admin # user as well since the fault handling code cannot distinguish who # is going to see the message so it only sets class name. self.assertIn('HypervisorError', fault['message']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_server_group.py0000664000175000017500000013270400000000000023177 0ustar00zuulzuul00000000000000# Copyright 2015 Ericsson AB # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock from oslo_config import cfg import six from nova.compute import instance_actions from nova import context from nova.db import api as db from nova.db.sqlalchemy import api as db_api from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api import client from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit import policy_fixture from nova import utils from nova.virt import fake import nova.scheduler.utils import nova.servicegroup import nova.tests.unit.image.fake # An alternate project id PROJECT_ID_ALT = "616c6c796f7572626173656172656f73" CONF = cfg.CONF class ServerGroupTestBase(test.TestCase, integrated_helpers.InstanceHelperMixin): REQUIRES_LOCKING = True api_major_version = 'v2.1' microversion = None _enabled_filters = (CONF.filter_scheduler.enabled_filters + ['ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter']) anti_affinity = {'name': 'fake-name-1', 'policies': ['anti-affinity']} affinity = {'name': 'fake-name-2', 'policies': ['affinity']} def _get_weight_classes(self): return [] def setUp(self): super(ServerGroupTestBase, self).setUp() self.flags(enabled_filters=self._enabled_filters, group='filter_scheduler') # NOTE(sbauza): Don't verify VCPUS and disks given the current nodes. self.flags(cpu_allocation_ratio=9999.0) self.flags(disk_allocation_ratio=9999.0) self.flags(weight_classes=self._get_weight_classes(), group='filter_scheduler') self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api self.api.microversion = self.microversion self.admin_api = api_fixture.admin_api self.admin_api.microversion = self.microversion # the image fake backend needed for image discovery nova.tests.unit.image.fake.stub_out_image_service(self) self.start_service('conductor') self.start_service('scheduler') self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset) def _boot_a_server_to_group(self, group, expected_status='ACTIVE', flavor=None, az=None): server = self._build_server( image_uuid='a2459075-d96c-40d5-893e-577ff92e721c', flavor_id=flavor['id'] if flavor else None, networks=[], az=az) post = {'server': server, 'os:scheduler_hints': {'group': group['id']}} created_server = self.api.post_server(post) self.assertTrue(created_server['id']) # Wait for it to finish being created found_server = self._wait_for_state_change( created_server, expected_status) return found_server class ServerGroupFakeDriver(fake.SmallFakeDriver): """A specific fake driver for our tests. Here, we only want to be RAM-bound. """ vcpus = 1000 memory_mb = 8192 local_gb = 100000 # A fake way to change the FakeDriver given we don't have a possibility yet to # modify the resources for the FakeDriver def _fake_load_compute_driver(virtapi, compute_driver=None): return ServerGroupFakeDriver(virtapi) class ServerGroupTestV21(ServerGroupTestBase): def setUp(self): super(ServerGroupTestV21, self).setUp() # TODO(sbauza): Remove that once there is a way to have a custom # FakeDriver supporting different resources. Note that we can't also # simply change the config option for choosing our custom fake driver # as the mocked method only accepts to load drivers in the nova.virt # tree. 
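# A sketch of what the stub below amounts to (assuming nova's usual
# test.TestCase.stub_out implementation on top of a monkeypatch
# fixture), roughly:
#   self.useFixture(fixtures.MonkeyPatch(
#       'nova.virt.driver.load_compute_driver', _fake_load_compute_driver))
# With that in place every compute service started by this test loads
# ServerGroupFakeDriver (1000 vCPUs, 8192 MB RAM) instead of the default
# SmallFakeDriver, so the tests are RAM-bound as intended.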
self.stub_out('nova.virt.driver.load_compute_driver', _fake_load_compute_driver) self.compute = self.start_service('compute', host='compute') # NOTE(gibi): start a second compute host to be able to test affinity self.compute2 = self.start_service('compute', host='host2') def test_get_no_groups(self): groups = self.api.get_server_groups() self.assertEqual([], groups) def test_create_and_delete_groups(self): groups = [self.anti_affinity, self.affinity] created_groups = [] for group in groups: created_group = self.api.post_server_groups(group) created_groups.append(created_group) self.assertEqual(group['name'], created_group['name']) self.assertEqual(group['policies'], created_group['policies']) self.assertEqual([], created_group['members']) self.assertEqual({}, created_group['metadata']) self.assertIn('id', created_group) group_details = self.api.get_server_group(created_group['id']) self.assertEqual(created_group, group_details) existing_groups = self.api.get_server_groups() self.assertIn(created_group, existing_groups) existing_groups = self.api.get_server_groups() self.assertEqual(len(groups), len(existing_groups)) for group in created_groups: self.api.delete_server_group(group['id']) existing_groups = self.api.get_server_groups() self.assertNotIn(group, existing_groups) def test_create_wrong_policy(self): ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_groups, {'name': 'fake-name-1', 'policies': ['wrong-policy']}) self.assertEqual(400, ex.response.status_code) self.assertIn('Invalid input', ex.response.text) self.assertIn('wrong-policy', ex.response.text) def test_get_groups_all_projects(self): # This test requires APIs using two projects. # Create an API using project 'openstack1'. # This is a non-admin API. # # NOTE(sdague): this is actually very much *not* how this # fixture should be used. This actually spawns a whole # additional API server. Should be addressed in the future. api_openstack1 = self.useFixture(nova_fixtures.OSAPIFixture( api_version=self.api_major_version, project_id=PROJECT_ID_ALT)).api api_openstack1.microversion = self.microversion # Create a server group in project 'openstack' # Project 'openstack' is used by self.api group1 = self.anti_affinity openstack_group = self.api.post_server_groups(group1) # Create a server group in project 'openstack1' group2 = self.affinity openstack1_group = api_openstack1.post_server_groups(group2) # The admin should be able to get server groups in all projects. all_projects_admin = self.admin_api.get_server_groups( all_projects=True) self.assertIn(openstack_group, all_projects_admin) self.assertIn(openstack1_group, all_projects_admin) # The non-admin should only be able to get server groups # in his project. # The all_projects parameter is ignored for non-admin clients. 
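# Roughly what the call below maps to, assuming the standard
# os-server-groups API:
#   GET /os-server-groups?all_projects=True
# issued with non-admin credentials; policy only honours all_projects
# for admins, so the result is still scoped to the caller's project.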
all_projects_non_admin = api_openstack1.get_server_groups( all_projects=True) self.assertNotIn(openstack_group, all_projects_non_admin) self.assertIn(openstack1_group, all_projects_non_admin) def test_create_duplicated_policy(self): ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_groups, {"name": "fake-name-1", "policies": ["affinity", "affinity"]}) self.assertEqual(400, ex.response.status_code) self.assertIn('Invalid input', ex.response.text) def test_create_multiple_policies(self): ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_groups, {"name": "fake-name-1", "policies": ["anti-affinity", "affinity"]}) self.assertEqual(400, ex.response.status_code) def _boot_servers_to_group(self, group, flavor=None): servers = [] for _ in range(0, 2): server = self._boot_a_server_to_group(group, flavor=flavor) servers.append(server) return servers def test_boot_servers_with_affinity(self): created_group = self.api.post_server_groups(self.affinity) servers = self._boot_servers_to_group(created_group) members = self.api.get_server_group(created_group['id'])['members'] host = servers[0]['OS-EXT-SRV-ATTR:host'] for server in servers: self.assertIn(server['id'], members) self.assertEqual(host, server['OS-EXT-SRV-ATTR:host']) def test_boot_servers_with_affinity_overquota(self): # Tests that we check server group member quotas and cleanup created # resources when we fail with OverQuota. self.flags(server_group_members=1, group='quota') # make sure we start with 0 servers servers = self.api.get_servers(detail=False) self.assertEqual(0, len(servers)) created_group = self.api.post_server_groups(self.affinity) ex = self.assertRaises(client.OpenStackApiException, self._boot_servers_to_group, created_group) self.assertEqual(403, ex.response.status_code) # _boot_servers_to_group creates 2 instances in the group in order, not # multiple servers in a single request. Since our quota is 1, the first # server create would pass, the second should fail, and we should be # left with 1 server and it's 1 block device mapping. servers = self.api.get_servers(detail=False) self.assertEqual(1, len(servers)) ctxt = context.get_admin_context() servers = db.instance_get_all(ctxt) self.assertEqual(1, len(servers)) ctxt_mgr = db_api.get_context_manager(ctxt) with ctxt_mgr.reader.using(ctxt): bdms = db_api._block_device_mapping_get_query(ctxt).all() self.assertEqual(1, len(bdms)) self.assertEqual(servers[0]['uuid'], bdms[0]['instance_uuid']) def test_boot_servers_with_affinity_no_valid_host(self): created_group = self.api.post_server_groups(self.affinity) # Using big enough flavor to use up the resources on the host flavor = self.api.get_flavors()[2] self._boot_servers_to_group(created_group, flavor=flavor) # The third server cannot be booted as there is not enough resource # on the host where the first two server was booted failed_server = self._boot_a_server_to_group(created_group, flavor=flavor, expected_status='ERROR') self.assertEqual('No valid host was found. 
' 'There are not enough hosts available.', failed_server['fault']['message']) def test_boot_servers_with_anti_affinity(self): created_group = self.api.post_server_groups(self.anti_affinity) servers = self._boot_servers_to_group(created_group) members = self.api.get_server_group(created_group['id'])['members'] self.assertNotEqual(servers[0]['OS-EXT-SRV-ATTR:host'], servers[1]['OS-EXT-SRV-ATTR:host']) for server in servers: self.assertIn(server['id'], members) def test_boot_server_with_anti_affinity_no_valid_host(self): created_group = self.api.post_server_groups(self.anti_affinity) self._boot_servers_to_group(created_group) # We have 2 computes so the third server won't fit into the same group failed_server = self._boot_a_server_to_group(created_group, expected_status='ERROR') self.assertEqual('No valid host was found. ' 'There are not enough hosts available.', failed_server['fault']['message']) def _rebuild_with_group(self, group): created_group = self.api.post_server_groups(group) servers = self._boot_servers_to_group(created_group) post = {'rebuild': {'imageRef': '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'}} self.api.post_server_action(servers[1]['id'], post) rebuilt_server = self._wait_for_state_change(servers[1], 'ACTIVE') self.assertEqual(post['rebuild']['imageRef'], rebuilt_server.get('image')['id']) return [servers[0], rebuilt_server] def test_rebuild_with_affinity(self): untouched_server, rebuilt_server = self._rebuild_with_group( self.affinity) self.assertEqual(untouched_server['OS-EXT-SRV-ATTR:host'], rebuilt_server['OS-EXT-SRV-ATTR:host']) def test_rebuild_with_anti_affinity(self): untouched_server, rebuilt_server = self._rebuild_with_group( self.anti_affinity) self.assertNotEqual(untouched_server['OS-EXT-SRV-ATTR:host'], rebuilt_server['OS-EXT-SRV-ATTR:host']) def _migrate_with_group_no_valid_host(self, group): created_group = self.api.post_server_groups(group) servers = self._boot_servers_to_group(created_group) post = {'migrate': {}} # NoValidHost errors are handled in conductor after the API has cast # off and returned a 202 response to the user. self.admin_api.post_server_action(servers[1]['id'], post, check_response_status=[202]) # The instance action should have failed with details. 
self._assert_resize_migrate_action_fail( servers[1], instance_actions.MIGRATE, 'NoValidHost') def test_migrate_with_group_no_valid_host(self): for group in [self.affinity, self.anti_affinity]: self._migrate_with_group_no_valid_host(group) def test_migrate_with_anti_affinity(self): # Start additional host to test migration with anti-affinity self.start_service('compute', host='host3') created_group = self.api.post_server_groups(self.anti_affinity) servers = self._boot_servers_to_group(created_group) post = {'migrate': {}} self.admin_api.post_server_action(servers[1]['id'], post) migrated_server = self._wait_for_state_change( servers[1], 'VERIFY_RESIZE') self.assertNotEqual(servers[0]['OS-EXT-SRV-ATTR:host'], migrated_server['OS-EXT-SRV-ATTR:host']) def test_migrate_with_anti_affinity_confirm_updates_scheduler(self): # Start additional host to test migration with anti-affinity compute3 = self.start_service('compute', host='host3') # make sure that compute syncing instance info to scheduler # this tells the scheduler that it can expect such updates periodically # and don't have to look into the db for it at the start of each # scheduling for compute in [self.compute, self.compute2, compute3]: compute.manager._sync_scheduler_instance_info( context.get_admin_context()) created_group = self.api.post_server_groups(self.anti_affinity) servers = self._boot_servers_to_group(created_group) post = {'migrate': {}} self.admin_api.post_server_action(servers[1]['id'], post) migrated_server = self._wait_for_state_change( servers[1], 'VERIFY_RESIZE') self.assertNotEqual(servers[0]['OS-EXT-SRV-ATTR:host'], migrated_server['OS-EXT-SRV-ATTR:host']) # We have 3 hosts, so after the move is confirmed one of the hosts # should be considered empty so we could boot a 3rd server on that host post = {'confirmResize': {}} self.admin_api.post_server_action(servers[1]['id'], post) self._wait_for_state_change(servers[1], 'ACTIVE') server3 = self._boot_a_server_to_group(created_group) # we have 3 servers that should occupy 3 different hosts hosts = {server['OS-EXT-SRV-ATTR:host'] for server in [servers[0], migrated_server, server3]} self.assertEqual(3, len(hosts)) def test_resize_to_same_host_with_anti_affinity(self): self.flags(allow_resize_to_same_host=True) created_group = self.api.post_server_groups(self.anti_affinity) servers = self._boot_servers_to_group(created_group, flavor=self.api.get_flavors()[0]) post = {'resize': {'flavorRef': '2'}} server1_old_host = servers[1]['OS-EXT-SRV-ATTR:host'] self.admin_api.post_server_action(servers[1]['id'], post) migrated_server = self._wait_for_state_change( servers[1], 'VERIFY_RESIZE') self.assertEqual(server1_old_host, migrated_server['OS-EXT-SRV-ATTR:host']) def _get_compute_service_by_host_name(self, host_name): host = None if self.compute.host == host_name: host = self.compute elif self.compute2.host == host_name: host = self.compute2 else: raise AssertionError('host = %s does not found in ' 'existing hosts %s' % (host_name, str([self.compute.host, self.compute2.host]))) return host def _set_forced_down(self, service, forced_down): # Use microversion 2.53 for PUT /os-services/{service_id} force down. 
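# A sketch of the underlying request, assuming the 2.53 os-services
# semantics: put_service_force_down(service_uuid, True) amounts to
#   PUT /os-services/{service_uuid}
#   X-OpenStack-Nova-API-Version: 2.53
#   {"forced_down": true}
# which marks the compute service as down so the scheduler and the
# evacuate path treat its host as unavailable.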
with utils.temporary_mutation(self.admin_api, microversion='2.53'): self.admin_api.put_service_force_down(service.service_ref.uuid, forced_down) def test_evacuate_with_anti_affinity(self): created_group = self.api.post_server_groups(self.anti_affinity) servers = self._boot_servers_to_group(created_group) host = self._get_compute_service_by_host_name( servers[1]['OS-EXT-SRV-ATTR:host']) # Set forced_down on the host to ensure nova considers the host down. self._set_forced_down(host, True) # Start additional host to test evacuation self.start_service('compute', host='host3') post = {'evacuate': {'onSharedStorage': False}} self.admin_api.post_server_action(servers[1]['id'], post) self._wait_for_migration_status(servers[1], ['done']) evacuated_server = self._wait_for_state_change(servers[1], 'ACTIVE') # check that the server is evacuated to another host self.assertNotEqual(evacuated_server['OS-EXT-SRV-ATTR:host'], servers[1]['OS-EXT-SRV-ATTR:host']) # check that anti-affinity policy is kept during evacuation self.assertNotEqual(evacuated_server['OS-EXT-SRV-ATTR:host'], servers[0]['OS-EXT-SRV-ATTR:host']) def test_evacuate_with_anti_affinity_no_valid_host(self): created_group = self.api.post_server_groups(self.anti_affinity) servers = self._boot_servers_to_group(created_group) host = self._get_compute_service_by_host_name( servers[1]['OS-EXT-SRV-ATTR:host']) # Set forced_down on the host to ensure nova considers the host down. self._set_forced_down(host, True) post = {'evacuate': {'onSharedStorage': False}} self.admin_api.post_server_action(servers[1]['id'], post) self._wait_for_migration_status(servers[1], ['error']) server_after_failed_evac = self._wait_for_state_change( servers[1], 'ERROR') # assert that after a failed evac the server active on the same host # as before self.assertEqual(server_after_failed_evac['OS-EXT-SRV-ATTR:host'], servers[1]['OS-EXT-SRV-ATTR:host']) def test_evacuate_with_affinity_no_valid_host(self): created_group = self.api.post_server_groups(self.affinity) servers = self._boot_servers_to_group(created_group) host = self._get_compute_service_by_host_name( servers[1]['OS-EXT-SRV-ATTR:host']) # Set forced_down on the host to ensure nova considers the host down. 
self._set_forced_down(host, True) post = {'evacuate': {'onSharedStorage': False}} self.admin_api.post_server_action(servers[1]['id'], post) self._wait_for_migration_status(servers[1], ['error']) server_after_failed_evac = self._wait_for_state_change( servers[1], 'ERROR') # assert that after a failed evac the server active on the same host # as before self.assertEqual(server_after_failed_evac['OS-EXT-SRV-ATTR:host'], servers[1]['OS-EXT-SRV-ATTR:host']) def test_soft_affinity_not_supported(self): ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_groups, {'name': 'fake-name-1', 'policies': ['soft-affinity']}) self.assertEqual(400, ex.response.status_code) self.assertIn('Invalid input', ex.response.text) self.assertIn('soft-affinity', ex.response.text) class ServerGroupAffinityConfTest(ServerGroupTestBase): api_major_version = 'v2.1' # Load only anti-affinity filter so affinity will be missing _enabled_filters = ['ServerGroupAntiAffinityFilter'] @mock.patch('nova.scheduler.utils._SUPPORTS_AFFINITY', None) def test_affinity_no_filter(self): created_group = self.api.post_server_groups(self.affinity) failed_server = self._boot_a_server_to_group(created_group, expected_status='ERROR') self.assertEqual('ServerGroup policy is not supported: ' 'ServerGroupAffinityFilter not configured', failed_server['fault']['message']) self.assertEqual(400, failed_server['fault']['code']) class ServerGroupAntiAffinityConfTest(ServerGroupTestBase): api_major_version = 'v2.1' # Load only affinity filter so anti-affinity will be missing _enabled_filters = ['ServerGroupAffinityFilter'] @mock.patch('nova.scheduler.utils._SUPPORTS_ANTI_AFFINITY', None) def test_anti_affinity_no_filter(self): created_group = self.api.post_server_groups(self.anti_affinity) failed_server = self._boot_a_server_to_group(created_group, expected_status='ERROR') self.assertEqual('ServerGroup policy is not supported: ' 'ServerGroupAntiAffinityFilter not configured', failed_server['fault']['message']) self.assertEqual(400, failed_server['fault']['code']) class ServerGroupSoftAffinityConfTest(ServerGroupTestBase): api_major_version = 'v2.1' microversion = '2.15' soft_affinity = {'name': 'fake-name-4', 'policies': ['soft-affinity']} def _get_weight_classes(self): # Load only soft-anti-affinity weigher so affinity will be missing return ['nova.scheduler.weights.affinity.' 'ServerGroupSoftAntiAffinityWeigher'] @mock.patch('nova.scheduler.utils._SUPPORTS_SOFT_AFFINITY', None) def test_soft_affinity_no_filter(self): created_group = self.api.post_server_groups(self.soft_affinity) failed_server = self._boot_a_server_to_group(created_group, expected_status='ERROR') self.assertEqual('ServerGroup policy is not supported: ' 'ServerGroupSoftAffinityWeigher not configured', failed_server['fault']['message']) self.assertEqual(400, failed_server['fault']['code']) class ServerGroupSoftAntiAffinityConfTest(ServerGroupTestBase): api_major_version = 'v2.1' microversion = '2.15' soft_anti_affinity = {'name': 'fake-name-3', 'policies': ['soft-anti-affinity']} def _get_weight_classes(self): # Load only soft affinity filter so anti-affinity will be missing return ['nova.scheduler.weights.affinity.' 
'ServerGroupSoftAffinityWeigher'] @mock.patch('nova.scheduler.utils._SUPPORTS_SOFT_ANTI_AFFINITY', None) def test_soft_anti_affinity_no_filter(self): created_group = self.api.post_server_groups(self.soft_anti_affinity) failed_server = self._boot_a_server_to_group(created_group, expected_status='ERROR') self.assertEqual('ServerGroup policy is not supported: ' 'ServerGroupSoftAntiAffinityWeigher not configured', failed_server['fault']['message']) self.assertEqual(400, failed_server['fault']['code']) class ServerGroupTestV215(ServerGroupTestV21): api_major_version = 'v2.1' microversion = '2.15' soft_anti_affinity = {'name': 'fake-name-3', 'policies': ['soft-anti-affinity']} soft_affinity = {'name': 'fake-name-4', 'policies': ['soft-affinity']} def setUp(self): super(ServerGroupTestV215, self).setUp() soft_affinity_patcher = mock.patch( 'nova.scheduler.utils._SUPPORTS_SOFT_AFFINITY') soft_anti_affinity_patcher = mock.patch( 'nova.scheduler.utils._SUPPORTS_SOFT_ANTI_AFFINITY') self.addCleanup(soft_affinity_patcher.stop) self.addCleanup(soft_anti_affinity_patcher.stop) self.mock_soft_affinity = soft_affinity_patcher.start() self.mock_soft_anti_affinity = soft_anti_affinity_patcher.start() self.mock_soft_affinity.return_value = None self.mock_soft_anti_affinity.return_value = None def _get_weight_classes(self): return ['nova.scheduler.weights.affinity.' 'ServerGroupSoftAffinityWeigher', 'nova.scheduler.weights.affinity.' 'ServerGroupSoftAntiAffinityWeigher'] def test_evacuate_with_anti_affinity(self): created_group = self.api.post_server_groups(self.anti_affinity) servers = self._boot_servers_to_group(created_group) host = self._get_compute_service_by_host_name( servers[1]['OS-EXT-SRV-ATTR:host']) # Set forced_down on the host to ensure nova considers the host down. self._set_forced_down(host, True) # Start additional host to test evacuation compute3 = self.start_service('compute', host='host3') post = {'evacuate': {}} self.admin_api.post_server_action(servers[1]['id'], post) self._wait_for_migration_status(servers[1], ['done']) evacuated_server = self._wait_for_state_change(servers[1], 'ACTIVE') # check that the server is evacuated self.assertNotEqual(evacuated_server['OS-EXT-SRV-ATTR:host'], servers[1]['OS-EXT-SRV-ATTR:host']) # check that policy is kept self.assertNotEqual(evacuated_server['OS-EXT-SRV-ATTR:host'], servers[0]['OS-EXT-SRV-ATTR:host']) compute3.kill() def test_evacuate_with_anti_affinity_no_valid_host(self): created_group = self.api.post_server_groups(self.anti_affinity) servers = self._boot_servers_to_group(created_group) host = self._get_compute_service_by_host_name( servers[1]['OS-EXT-SRV-ATTR:host']) # Set forced_down on the host to ensure nova considers the host down. self._set_forced_down(host, True) post = {'evacuate': {}} self.admin_api.post_server_action(servers[1]['id'], post) self._wait_for_migration_status(servers[1], ['error']) server_after_failed_evac = self._wait_for_state_change( servers[1], 'ERROR') # assert that after a failed evac the server active on the same host # as before self.assertEqual(server_after_failed_evac['OS-EXT-SRV-ATTR:host'], servers[1]['OS-EXT-SRV-ATTR:host']) def test_evacuate_with_affinity_no_valid_host(self): created_group = self.api.post_server_groups(self.affinity) servers = self._boot_servers_to_group(created_group) host = self._get_compute_service_by_host_name( servers[1]['OS-EXT-SRV-ATTR:host']) # Set forced_down on the host to ensure nova considers the host down. 
self._set_forced_down(host, True) post = {'evacuate': {}} self.admin_api.post_server_action(servers[1]['id'], post) self._wait_for_migration_status(servers[1], ['error']) server_after_failed_evac = self._wait_for_state_change( servers[1], 'ERROR') # assert that after a failed evac the server active on the same host # as before self.assertEqual(server_after_failed_evac['OS-EXT-SRV-ATTR:host'], servers[1]['OS-EXT-SRV-ATTR:host']) def _check_group_format(self, group, created_group): self.assertEqual(group['policies'], created_group['policies']) self.assertEqual({}, created_group['metadata']) self.assertNotIn('rules', created_group) def test_create_and_delete_groups(self): groups = [self.anti_affinity, self.affinity, self.soft_affinity, self.soft_anti_affinity] created_groups = [] for group in groups: created_group = self.api.post_server_groups(group) created_groups.append(created_group) self.assertEqual(group['name'], created_group['name']) self._check_group_format(group, created_group) self.assertEqual([], created_group['members']) self.assertIn('id', created_group) group_details = self.api.get_server_group(created_group['id']) self.assertEqual(created_group, group_details) existing_groups = self.api.get_server_groups() self.assertIn(created_group, existing_groups) existing_groups = self.api.get_server_groups() self.assertEqual(len(groups), len(existing_groups)) for group in created_groups: self.api.delete_server_group(group['id']) existing_groups = self.api.get_server_groups() self.assertNotIn(group, existing_groups) def test_boot_servers_with_soft_affinity(self): created_group = self.api.post_server_groups(self.soft_affinity) servers = self._boot_servers_to_group(created_group) members = self.api.get_server_group(created_group['id'])['members'] self.assertEqual(2, len(servers)) self.assertIn(servers[0]['id'], members) self.assertIn(servers[1]['id'], members) self.assertEqual(servers[0]['OS-EXT-SRV-ATTR:host'], servers[1]['OS-EXT-SRV-ATTR:host']) def test_boot_servers_with_soft_affinity_no_resource_on_first_host(self): created_group = self.api.post_server_groups(self.soft_affinity) # Using big enough flavor to use up the resources on the first host flavor = self.api.get_flavors()[2] servers = self._boot_servers_to_group(created_group, flavor) # The third server cannot be booted on the first host as there # is not enough resource there, but as opposed to the affinity policy # it will be booted on the other host, which has enough resources. 
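# A short note on why this works: soft-affinity is implemented as the
# ServerGroupSoftAffinityWeigher rather than a filter, so co-location is
# only preferred, never required; when the preferred host is out of
# capacity the weigher just ranks the remaining hosts and the boot
# succeeds elsewhere, which is exactly what the assertions below rely on.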
third_server = self._boot_a_server_to_group(created_group, flavor=flavor) members = self.api.get_server_group(created_group['id'])['members'] hosts = [] for server in servers: hosts.append(server['OS-EXT-SRV-ATTR:host']) self.assertIn(third_server['id'], members) self.assertNotIn(third_server['OS-EXT-SRV-ATTR:host'], hosts) def test_boot_servers_with_soft_anti_affinity(self): created_group = self.api.post_server_groups(self.soft_anti_affinity) servers = self._boot_servers_to_group(created_group) members = self.api.get_server_group(created_group['id'])['members'] self.assertEqual(2, len(servers)) self.assertIn(servers[0]['id'], members) self.assertIn(servers[1]['id'], members) self.assertNotEqual(servers[0]['OS-EXT-SRV-ATTR:host'], servers[1]['OS-EXT-SRV-ATTR:host']) def test_boot_servers_with_soft_anti_affinity_one_available_host(self): self.compute2.kill() created_group = self.api.post_server_groups(self.soft_anti_affinity) servers = self._boot_servers_to_group(created_group) members = self.api.get_server_group(created_group['id'])['members'] host = servers[0]['OS-EXT-SRV-ATTR:host'] for server in servers: self.assertIn(server['id'], members) self.assertEqual(host, server['OS-EXT-SRV-ATTR:host']) def test_rebuild_with_soft_affinity(self): untouched_server, rebuilt_server = self._rebuild_with_group( self.soft_affinity) self.assertEqual(untouched_server['OS-EXT-SRV-ATTR:host'], rebuilt_server['OS-EXT-SRV-ATTR:host']) def test_rebuild_with_soft_anti_affinity(self): untouched_server, rebuilt_server = self._rebuild_with_group( self.soft_anti_affinity) self.assertNotEqual(untouched_server['OS-EXT-SRV-ATTR:host'], rebuilt_server['OS-EXT-SRV-ATTR:host']) def _migrate_with_soft_affinity_policies(self, group): created_group = self.api.post_server_groups(group) servers = self._boot_servers_to_group(created_group) post = {'migrate': {}} self.admin_api.post_server_action(servers[1]['id'], post) migrated_server = self._wait_for_state_change( servers[1], 'VERIFY_RESIZE') return [migrated_server['OS-EXT-SRV-ATTR:host'], servers[0]['OS-EXT-SRV-ATTR:host']] def test_migrate_with_soft_affinity(self): migrated_server, other_server = ( self._migrate_with_soft_affinity_policies(self.soft_affinity)) self.assertNotEqual(migrated_server, other_server) def test_migrate_with_soft_anti_affinity(self): migrated_server, other_server = ( self._migrate_with_soft_affinity_policies(self.soft_anti_affinity)) self.assertEqual(migrated_server, other_server) def _evacuate_with_soft_anti_affinity_policies(self, group): created_group = self.api.post_server_groups(group) servers = self._boot_servers_to_group(created_group) host = self._get_compute_service_by_host_name( servers[1]['OS-EXT-SRV-ATTR:host']) # Set forced_down on the host to ensure nova considers the host down. 
self._set_forced_down(host, True) post = {'evacuate': {}} self.admin_api.post_server_action(servers[1]['id'], post) self._wait_for_migration_status(servers[1], ['done']) evacuated_server = self._wait_for_state_change(servers[1], 'ACTIVE') # Note(gibi): need to get the server again as the state of the instance # goes to ACTIVE first then the host of the instance changes to the # new host later evacuated_server = self.admin_api.get_server(evacuated_server['id']) return [evacuated_server['OS-EXT-SRV-ATTR:host'], servers[0]['OS-EXT-SRV-ATTR:host']] def test_evacuate_with_soft_affinity(self): evacuated_server, other_server = ( self._evacuate_with_soft_anti_affinity_policies( self.soft_affinity)) self.assertNotEqual(evacuated_server, other_server) def test_evacuate_with_soft_anti_affinity(self): evacuated_server, other_server = ( self._evacuate_with_soft_anti_affinity_policies( self.soft_anti_affinity)) self.assertEqual(evacuated_server, other_server) def test_soft_affinity_not_supported(self): pass class ServerGroupTestV264(ServerGroupTestV215): api_major_version = 'v2.1' microversion = '2.64' anti_affinity = {'name': 'fake-name-1', 'policy': 'anti-affinity'} affinity = {'name': 'fake-name-2', 'policy': 'affinity'} soft_anti_affinity = {'name': 'fake-name-3', 'policy': 'soft-anti-affinity'} soft_affinity = {'name': 'fake-name-4', 'policy': 'soft-affinity'} def _check_group_format(self, group, created_group): self.assertEqual(group['policy'], created_group['policy']) self.assertEqual(group.get('rules', {}), created_group['rules']) self.assertNotIn('metadata', created_group) self.assertNotIn('policies', created_group) def test_boot_server_with_anti_affinity_rules(self): anti_affinity_max_2 = { 'name': 'fake-name-1', 'policy': 'anti-affinity', 'rules': {'max_server_per_host': 2} } created_group = self.api.post_server_groups(anti_affinity_max_2) servers1st = self._boot_servers_to_group(created_group) servers2nd = self._boot_servers_to_group(created_group) # We have 2 computes so the fifth server won't fit into the same group failed_server = self._boot_a_server_to_group(created_group, expected_status='ERROR') self.assertEqual('No valid host was found. ' 'There are not enough hosts available.', failed_server['fault']['message']) hosts = map(lambda x: x['OS-EXT-SRV-ATTR:host'], servers1st + servers2nd) hosts = [h for h in hosts] # 4 servers self.assertEqual(4, len(hosts)) # schedule to 2 host self.assertEqual(2, len(set(hosts))) # each host has 2 servers for host in set(hosts): self.assertEqual(2, hosts.count(host)) class ServerGroupTestMultiCell(ServerGroupTestBase): NUMBER_OF_CELLS = 2 def setUp(self): super(ServerGroupTestMultiCell, self).setUp() # Start two compute services, one per cell self.compute1 = self.start_service('compute', host='host1', cell_name='cell1') self.compute2 = self.start_service('compute', host='host2', cell_name='cell2') # This is needed to find a server that is still booting with multiple # cells, while waiting for the state change to ACTIVE. See the # _get_instance method in the compute/api for details. self.useFixture(nova_fixtures.AllServicesCurrent()) self.aggregates = {} def _create_aggregate(self, name): agg = self.admin_api.post_aggregate({'aggregate': {'name': name}}) self.aggregates[name] = agg def _add_host_to_aggregate(self, agg, host): """Add a compute host to nova aggregates. 
:param agg: Name of the nova aggregate :param host: Name of the compute host """ agg = self.aggregates[agg] self.admin_api.add_host_to_aggregate(agg['id'], host) def _set_az_aggregate(self, agg, az): """Set the availability_zone of an aggregate :param agg: Name of the nova aggregate :param az: Availability zone name """ agg = self.aggregates[agg] action = { 'set_metadata': { 'metadata': { 'availability_zone': az, } }, } self.admin_api.post_aggregate_action(agg['id'], action) def test_boot_servers_with_affinity(self): # Create a server group for affinity # As of microversion 2.64, a single policy must be specified when # creating a server group. created_group = self.api.post_server_groups(self.affinity) # Create aggregates for cell1 and cell2 self._create_aggregate('agg1_cell1') self._create_aggregate('agg2_cell2') # Add each cell to a separate aggregate self._add_host_to_aggregate('agg1_cell1', 'host1') self._add_host_to_aggregate('agg2_cell2', 'host2') # Set each cell to a separate availability zone self._set_az_aggregate('agg1_cell1', 'cell1') self._set_az_aggregate('agg2_cell2', 'cell2') # Boot a server to cell2 with the affinity policy. Order matters here # because the CellDatabases fixture defaults the local cell database to # cell1. So boot the server to cell2 where the group member cannot be # found as a result of the default setting. self._boot_a_server_to_group(created_group, az='cell2') # Boot a server to cell1 with the affinity policy. This should fail # because group members found in cell2 should violate the policy. self._boot_a_server_to_group(created_group, az='cell1', expected_status='ERROR') class TestAntiAffinityLiveMigration(test.TestCase, integrated_helpers.InstanceHelperMixin): def setUp(self): super(TestAntiAffinityLiveMigration, self).setUp() # Setup common fixtures. self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) # Setup API. api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api self.admin_api = api_fixture.admin_api # Fake out glance. nova.tests.unit.image.fake.stub_out_image_service(self) self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset) # Start conductor, scheduler and two computes. self.start_service('conductor') self.start_service('scheduler') for host in ('host1', 'host2'): self.start_service('compute', host=host) def test_serial_no_valid_host_then_pass_with_third_host(self): """Creates 2 servers in order (not a multi-create request) in an anti-affinity group so there will be 1 server on each host. Then attempts to live migrate the first server which will fail because the only other available host will be full. Then starts up a 3rd compute service and retries the live migration which should then pass. """ # Create the anti-affinity group used for the servers. group = self.api.post_server_groups( {'name': 'test_serial_no_valid_host_then_pass_with_third_host', 'policies': ['anti-affinity']}) servers = [] for _ in range(2): server = self._build_server(networks='none') # Add the group hint so the server is created in our group. server_req = { 'server': server, 'os:scheduler_hints': {'group': group['id']} } # Use microversion 2.37 for passing networks='none'. with utils.temporary_mutation(self.api, microversion='2.37'): server = self.api.post_server(server_req) servers.append( self._wait_for_state_change(server, 'ACTIVE')) # Make sure each server is on a unique host. 
hosts = set([svr['OS-EXT-SRV-ATTR:host'] for svr in servers]) self.assertEqual(2, len(hosts)) # And make sure the group has 2 members. members = self.api.get_server_group(group['id'])['members'] self.assertEqual(2, len(members)) # Now attempt to live migrate one of the servers which should fail # because we don't have a free host. Since we're using microversion 2.1 # the scheduling will be synchronous and we should get back a 400 # response for the NoValidHost error. body = { 'os-migrateLive': { 'host': None, 'block_migration': False, 'disk_over_commit': False } } # Specifically use the first server since that was the first member # added to the group. server = servers[0] ex = self.assertRaises(client.OpenStackApiException, self.admin_api.post_server_action, server['id'], body) self.assertEqual(400, ex.response.status_code) self.assertIn('No valid host', six.text_type(ex)) # Now start up a 3rd compute service and retry the live migration which # should work this time. self.start_service('compute', host='host3') self.admin_api.post_server_action(server['id'], body) server = self._wait_for_state_change(server, 'ACTIVE') # Now the server should be on host3 since that was the only available # host for the live migration. self.assertEqual('host3', server['OS-EXT-SRV-ATTR:host']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/test_server_rescue.py0000664000175000017500000001011500000000000023320 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api import client from nova.tests.functional import integrated_helpers class BFVRescue(integrated_helpers.ProviderUsageBaseTestCase): """Base class for various boot from volume rescue tests.""" def setUp(self): super(BFVRescue, self).setUp() self.useFixture(nova_fixtures.CinderFixture(self)) self._start_compute(host='host1') def _create_bfv_server(self): server_request = self._build_server(networks=[]) server_request.pop('imageRef') server_request['block_device_mapping_v2'] = [{ 'boot_index': 0, 'uuid': nova_fixtures.CinderFixture.IMAGE_BACKED_VOL, 'source_type': 'volume', 'destination_type': 'volume'}] server = self.api.post_server({'server': server_request}) self._wait_for_state_change(server, 'ACTIVE') return server class DisallowBFVRescuev286(BFVRescue): """Asserts that BFV rescue requests fail prior to microversion 2.87. 
""" compute_driver = 'fake.MediumFakeDriver' microversion = '2.86' def test_bfv_rescue_not_supported(self): server = self._create_bfv_server() ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_action, server['id'], {'rescue': { 'rescue_image_ref': '155d900f-4e14-4e4c-a73d-069cbf4541e6'}}) self.assertEqual(400, ex.response.status_code) self.assertIn('Cannot rescue a volume-backed instance', ex.response.text) class DisallowBFVRescuev286WithTrait(BFVRescue): """Asserts that BFV rescue requests fail prior to microversion 2.87 even when the required COMPUTE_RESCUE_BFV trait is reported by the compute. """ compute_driver = 'fake.RescueBFVDriver' microversion = '2.86' def test_bfv_rescue_not_supported(self): server = self._create_bfv_server() ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_action, server['id'], {'rescue': { 'rescue_image_ref': '155d900f-4e14-4e4c-a73d-069cbf4541e6'}}) self.assertEqual(400, ex.response.status_code) self.assertIn('Cannot rescue a volume-backed instance', ex.response.text) class DisallowBFVRescuev287WithoutTrait(BFVRescue): """Asserts that BFV rescue requests fail with microversion 2.87 (or later) when the required COMPUTE_RESCUE_BFV trait is not reported by the compute. """ compute_driver = 'fake.MediumFakeDriver' microversion = '2.87' def test_bfv_rescue_not_supported(self): server = self._create_bfv_server() ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_action, server['id'], {'rescue': { 'rescue_image_ref': '155d900f-4e14-4e4c-a73d-069cbf4541e6'}}) self.assertEqual(400, ex.response.status_code) self.assertIn('Host unable to rescue a volume-backed instance', ex.response.text) class AllowBFVRescuev287WithTrait(BFVRescue): """Asserts that BFV rescue requests pass with microversion 2.87 (or later) when the required COMPUTE_RESCUE_BFV trait is reported by the compute. """ compute_driver = 'fake.RescueBFVDriver' microversion = '2.87' def test_bfv_rescue_supported(self): server = self._create_bfv_server() self.api.post_server_action(server['id'], {'rescue': { 'rescue_image_ref': '155d900f-4e14-4e4c-a73d-069cbf4541e6'}}) self._wait_for_state_change(server, 'RESCUE') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_servers.py0000664000175000017500000131374400000000000022154 0ustar00zuulzuul00000000000000# Copyright 2011 Justin Santa Barbara # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from __future__ import absolute_import import collections import copy import datetime import time import zlib from keystoneauth1 import adapter import mock from neutronclient.common import exceptions as neutron_exception import os_resource_classes as orc from oslo_config import cfg from oslo_log import log as logging from oslo_serialization import base64 from oslo_serialization import jsonutils from oslo_utils import fixture as osloutils_fixture from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils import six from nova.compute import api as compute_api from nova.compute import instance_actions from nova.compute import manager as compute_manager from nova import context from nova import exception from nova.network import constants from nova.network import neutron as neutronapi from nova import objects from nova.objects import block_device as block_device_obj from nova.policies import base as base_policies from nova.policies import servers as servers_policies from nova.scheduler import utils from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api import client from nova.tests.functional import integrated_helpers from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_block_device from nova.tests.unit import fake_notifier from nova.tests.unit import fake_requests from nova.tests.unit.image import fake as fake_image from nova.tests.unit.objects import test_instance_info_cache from nova import utils as nova_utils from nova.virt import fake from nova import volume CONF = cfg.CONF LOG = logging.getLogger(__name__) class ServersTestBase(integrated_helpers._IntegratedTestBase): api_major_version = 'v2' _force_delete_parameter = 'forceDelete' _image_ref_parameter = 'imageRef' _flavor_ref_parameter = 'flavorRef' _access_ipv4_parameter = 'accessIPv4' _access_ipv6_parameter = 'accessIPv6' _return_resv_id_parameter = 'return_reservation_id' _min_count_parameter = 'min_count' def setUp(self): super(ServersTestBase, self).setUp() def _get_access_ips_params(self): return {self._access_ipv4_parameter: "172.19.0.2", self._access_ipv6_parameter: "fe80::2"} def _verify_access_ips(self, server): self.assertEqual('172.19.0.2', server[self._access_ipv4_parameter]) self.assertEqual('fe80::2', server[self._access_ipv6_parameter]) class ServersTest(ServersTestBase): def test_get_servers(self): # Simple check that listing servers works. servers = self.api.get_servers() for server in servers: LOG.debug("server: %s", server) def _get_node_build_failures(self): ctxt = context.get_admin_context() computes = objects.ComputeNodeList.get_all(ctxt) return { node.hypervisor_hostname: int(node.stats.get('failed_builds', 0)) for node in computes} def test_create_server_with_error(self): # Create a server which will enter error state. 
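        # Stub the fake virt driver's spawn() so the build fails
        # deterministically; the instance should end up in ERROR and the
        # compute node's failed_builds stat should be incremented.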
def throw_error(*args, **kwargs): raise exception.BuildAbortException(reason='', instance_uuid='fake') self.stub_out('nova.virt.fake.FakeDriver.spawn', throw_error) server = self._build_server() created_server = self.api.post_server({"server": server}) created_server_id = created_server['id'] found_server = self.api.get_server(created_server_id) self.assertEqual(created_server_id, found_server['id']) found_server = self._wait_for_state_change(found_server, 'ERROR') # Delete the server self._delete_server(created_server) # We should have no (persisted) build failures until we update # resources, after which we should have one self.assertEqual([0], list(self._get_node_build_failures().values())) # BuildAbortException will not trigger a reschedule and the build # failure update is the last step in the compute manager after # instance state setting, fault recording and notification sending. So # we have no other way than simply wait to ensure the node build # failure counter updated before we assert it. def failed_counter_updated(): self._run_periodics() self.assertEqual( [1], list(self._get_node_build_failures().values())) self._wait_for_assert(failed_counter_updated) def test_create_server_with_image_type_filter(self): self.flags(query_placement_for_image_type_support=True, group='scheduler') raw_image = '155d900f-4e14-4e4c-a73d-069cbf4541e6' vhd_image = 'a440c04b-79fa-479c-bed1-0b816eaec379' server = self._build_server(image_uuid=vhd_image) server = self.api.post_server({'server': server}) server = self.api.get_server(server['id']) errored_server = self._wait_for_state_change(server, 'ERROR') self.assertIn('No valid host', errored_server['fault']['message']) server = self._build_server(image_uuid=raw_image) server = self.api.post_server({'server': server}) server = self.api.get_server(server['id']) created_server = self._wait_for_state_change(server, 'ACTIVE') # Delete the server self._delete_server(created_server) def _test_create_server_with_error_with_retries(self): # Create a server which will enter error state. self._start_compute('host2') fails = [] def throw_error(*args, **kwargs): fails.append('one') raise test.TestingException('Please retry me') self.stub_out('nova.virt.fake.FakeDriver.spawn', throw_error) server = self._build_server() created_server = self.api.post_server({"server": server}) created_server_id = created_server['id'] found_server = self.api.get_server(created_server_id) self.assertEqual(created_server_id, found_server['id']) found_server = self._wait_for_state_change(found_server, 'ERROR') # Delete the server self._delete_server(created_server) return len(fails) def test_create_server_with_error_with_retries(self): self.flags(max_attempts=2, group='scheduler') fails = self._test_create_server_with_error_with_retries() self.assertEqual(2, fails) self._run_periodics() self.assertEqual( [1, 1], list(self._get_node_build_failures().values())) def test_create_server_with_error_with_no_retries(self): self.flags(max_attempts=1, group='scheduler') fails = self._test_create_server_with_error_with_retries() self.assertEqual(1, fails) # The build failure update is the last step in build_and_run_instance # in the compute manager after instance state setting, fault # recording and notification sending. So we have no other way than # simply wait to ensure the node build failure counter updated # before we assert it. 
def failed_counter_updated(): self._run_periodics() self.assertEqual( [0, 1], list(sorted(self._get_node_build_failures().values()))) self._wait_for_assert(failed_counter_updated) def test_create_and_delete_server(self): # Creates and deletes a server. # Create server # Build the server data gradually, checking errors along the way server = {} good_server = self._build_server() post = {'server': server} # Without an imageRef, this throws 500. # TODO(justinsb): Check whatever the spec says should be thrown here self.assertRaises(client.OpenStackApiException, self.api.post_server, post) # With an invalid imageRef, this throws 500. server[self._image_ref_parameter] = uuids.fake # TODO(justinsb): Check whatever the spec says should be thrown here self.assertRaises(client.OpenStackApiException, self.api.post_server, post) # Add a valid imageRef server[self._image_ref_parameter] = good_server.get( self._image_ref_parameter) # Without flavorRef, this throws 500 # TODO(justinsb): Check whatever the spec says should be thrown here self.assertRaises(client.OpenStackApiException, self.api.post_server, post) server[self._flavor_ref_parameter] = good_server.get( self._flavor_ref_parameter) # Without a name, this throws 500 # TODO(justinsb): Check whatever the spec says should be thrown here self.assertRaises(client.OpenStackApiException, self.api.post_server, post) # Set a valid server name server['name'] = good_server['name'] created_server = self.api.post_server(post) LOG.debug("created_server: %s", created_server) self.assertTrue(created_server['id']) created_server_id = created_server['id'] # Check it's there found_server = self.api.get_server(created_server_id) self.assertEqual(created_server_id, found_server['id']) # It should also be in the all-servers list servers = self.api.get_servers() server_ids = [s['id'] for s in servers] self.assertIn(created_server_id, server_ids) found_server = self._wait_for_state_change(found_server, 'ACTIVE') servers = self.api.get_servers(detail=True) for server in servers: self.assertIn("image", server) self.assertIn("flavor", server) # Delete the server self._delete_server(found_server) def _force_reclaim(self): # Make sure that compute manager thinks the instance is # old enough to be expired the_past = timeutils.utcnow() + datetime.timedelta(hours=1) timeutils.set_time_override(override_time=the_past) self.addCleanup(timeutils.clear_time_override) ctxt = context.get_admin_context() self.compute._reclaim_queued_deletes(ctxt) def test_deferred_delete(self): # Creates, deletes and waits for server to be reclaimed. self.flags(reclaim_instance_interval=1) # Create server server = self._build_server() created_server = self.api.post_server({'server': server}) LOG.debug("created_server: %s", created_server) self.assertTrue(created_server['id']) created_server_id = created_server['id'] # Wait for it to finish being created found_server = self._wait_for_state_change(created_server, 'ACTIVE') # Cannot restore unless instance is deleted self.assertRaises(client.OpenStackApiException, self.api.post_server_action, created_server_id, {'restore': {}}) # Delete the server self.api.delete_server(created_server_id) # Wait for queued deletion found_server = self._wait_for_state_change(found_server, 'SOFT_DELETED') self._force_reclaim() # Wait for real deletion self._wait_until_deleted(found_server) def test_deferred_delete_restore(self): # Creates, deletes and restores a server. 
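        # A long reclaim interval keeps the deleted instance in SOFT_DELETED
        # instead of reclaiming it right away, so it can still be restored.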
self.flags(reclaim_instance_interval=3600) # Create server server = self._build_server() created_server = self.api.post_server({'server': server}) LOG.debug("created_server: %s", created_server) self.assertTrue(created_server['id']) created_server_id = created_server['id'] # Wait for it to finish being created found_server = self._wait_for_state_change(created_server, 'ACTIVE') # Delete the server self.api.delete_server(created_server_id) # Wait for queued deletion found_server = self._wait_for_state_change(found_server, 'SOFT_DELETED') # Restore server self.api.post_server_action(created_server_id, {'restore': {}}) # Wait for server to become active again found_server = self._wait_for_state_change(found_server, 'ACTIVE') def test_deferred_delete_restore_overquota(self): # Test that a restore that would put the user over quota fails self.flags(instances=1, group='quota') # Creates, deletes and restores a server. self.flags(reclaim_instance_interval=3600) # Create server server = self._build_server() created_server1 = self.api.post_server({'server': server}) LOG.debug("created_server: %s", created_server1) self.assertTrue(created_server1['id']) created_server_id1 = created_server1['id'] # Wait for it to finish being created found_server1 = self._wait_for_state_change(created_server1, 'ACTIVE') # Delete the server self.api.delete_server(created_server_id1) # Wait for queued deletion found_server1 = self._wait_for_state_change(found_server1, 'SOFT_DELETED') # Create a second server server = self._build_server() created_server2 = self.api.post_server({'server': server}) LOG.debug("created_server: %s", created_server2) self.assertTrue(created_server2['id']) # Wait for it to finish being created self._wait_for_state_change(created_server2, 'ACTIVE') # Try to restore the first server, it should fail ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_action, created_server_id1, {'restore': {}}) self.assertEqual(403, ex.response.status_code) self.assertEqual('SOFT_DELETED', found_server1['status']) def test_deferred_delete_force(self): # Creates, deletes and force deletes a server. self.flags(reclaim_instance_interval=3600) # Create server server = self._build_server() created_server = self.api.post_server({'server': server}) LOG.debug("created_server: %s", created_server) self.assertTrue(created_server['id']) created_server_id = created_server['id'] # Wait for it to finish being created found_server = self._wait_for_state_change(created_server, 'ACTIVE') # Delete the server self.api.delete_server(created_server_id) # Wait for queued deletion found_server = self._wait_for_state_change(found_server, 'SOFT_DELETED') # Force delete server self.api.post_server_action(created_server_id, {self._force_delete_parameter: {}}) # Wait for real deletion self._wait_until_deleted(found_server) def test_create_server_with_metadata(self): # Creates a server with metadata. 
# Build the server data gradually, checking errors along the way server = self._build_server() metadata = {} for i in range(30): metadata['key_%s' % i] = 'value_%s' % i server['metadata'] = metadata post = {'server': server} created_server = self.api.post_server(post) LOG.debug("created_server: %s", created_server) self.assertTrue(created_server['id']) created_server_id = created_server['id'] found_server = self.api.get_server(created_server_id) self.assertEqual(created_server_id, found_server['id']) self.assertEqual(metadata, found_server.get('metadata')) # The server should also be in the all-servers details list servers = self.api.get_servers(detail=True) server_map = {server['id']: server for server in servers} found_server = server_map.get(created_server_id) self.assertTrue(found_server) # Details do include metadata self.assertEqual(metadata, found_server.get('metadata')) # The server should also be in the all-servers summary list servers = self.api.get_servers(detail=False) server_map = {server['id']: server for server in servers} found_server = server_map.get(created_server_id) self.assertTrue(found_server) # Summary should not include metadata self.assertFalse(found_server.get('metadata')) # Cleanup self._delete_server(found_server) def test_server_metadata_actions_negative_invalid_state(self): # Create server with metadata server = self._build_server() metadata = {'key_1': 'value_1'} server['metadata'] = metadata post = {'server': server} created_server = self.api.post_server(post) found_server = self._wait_for_state_change(created_server, 'ACTIVE') self.assertEqual(metadata, found_server.get('metadata')) server_id = found_server['id'] # Change status from ACTIVE to SHELVED for negative test self.flags(shelved_offload_time = -1) self.api.post_server_action(server_id, {'shelve': {}}) found_server = self._wait_for_state_change(found_server, 'SHELVED') metadata = {'key_2': 'value_2'} # Update Metadata item in SHELVED (not ACTIVE, etc.) ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_metadata, server_id, metadata) self.assertEqual(409, ex.response.status_code) self.assertEqual('SHELVED', found_server['status']) # Delete Metadata item in SHELVED (not ACTIVE, etc.) ex = self.assertRaises(client.OpenStackApiException, self.api.delete_server_metadata, server_id, 'key_1') self.assertEqual(409, ex.response.status_code) self.assertEqual('SHELVED', found_server['status']) # Cleanup self._delete_server(found_server) def test_create_and_rebuild_server(self): # Rebuild a server with metadata. 
# create a server with initially has no metadata server = self._build_server() server_post = {'server': server} metadata = {} for i in range(30): metadata['key_%s' % i] = 'value_%s' % i server_post['server']['metadata'] = metadata created_server = self.api.post_server(server_post) LOG.debug("created_server: %s", created_server) self.assertTrue(created_server['id']) created_server_id = created_server['id'] created_server = self._wait_for_state_change(created_server, 'ACTIVE') # rebuild the server with metadata and other server attributes post = {} post['rebuild'] = { self._image_ref_parameter: "76fa36fc-c930-4bf3-8c8a-ea2a2420deb6", "name": "blah", self._access_ipv4_parameter: "172.19.0.2", self._access_ipv6_parameter: "fe80::2", "metadata": {'some': 'thing'}, } post['rebuild'].update(self._get_access_ips_params()) self.api.post_server_action(created_server_id, post) LOG.debug("rebuilt server: %s", created_server) self.assertTrue(created_server['id']) found_server = self.api.get_server(created_server_id) self.assertEqual(created_server_id, found_server['id']) self.assertEqual({'some': 'thing'}, found_server.get('metadata')) self.assertEqual('blah', found_server.get('name')) self.assertEqual(post['rebuild'][self._image_ref_parameter], found_server.get('image')['id']) self._verify_access_ips(found_server) # rebuild the server with empty metadata and nothing else post = {} post['rebuild'] = { self._image_ref_parameter: "76fa36fc-c930-4bf3-8c8a-ea2a2420deb6", "metadata": {}, } self.api.post_server_action(created_server_id, post) LOG.debug("rebuilt server: %s", created_server) self.assertTrue(created_server['id']) found_server = self.api.get_server(created_server_id) self.assertEqual(created_server_id, found_server['id']) self.assertEqual({}, found_server.get('metadata')) self.assertEqual('blah', found_server.get('name')) self.assertEqual(post['rebuild'][self._image_ref_parameter], found_server.get('image')['id']) self._verify_access_ips(found_server) # Cleanup self._delete_server(found_server) def test_rename_server(self): # Test building and renaming a server. # Create a server server = self._build_server() created_server = self.api.post_server({'server': server}) LOG.debug("created_server: %s", created_server) server_id = created_server['id'] self.assertTrue(server_id) # Rename the server to 'new-name' self.api.put_server(server_id, {'server': {'name': 'new-name'}}) # Check the name of the server created_server = self.api.get_server(server_id) self.assertEqual(created_server['name'], 'new-name') # Cleanup self._delete_server(created_server) def test_create_multiple_servers(self): # Creates multiple servers and checks for reservation_id. # Create 2 servers, setting 'return_reservation_id, which should # return a reservation_id server = self._build_server() server[self._min_count_parameter] = 2 server[self._return_resv_id_parameter] = True post = {'server': server} response = self.api.post_server(post) self.assertIn('reservation_id', response) reservation_id = response['reservation_id'] self.assertNotIn(reservation_id, ['', None]) # Assert that the reservation_id itself has the expected format self.assertRegex(reservation_id, 'r-[0-9a-zA-Z]{8}') # Create 1 more server, which should not return a reservation_id server = self._build_server() post = {'server': server} created_server = self.api.post_server(post) self.assertTrue(created_server['id']) created_server_id = created_server['id'] # lookup servers created by the first request. 
servers = self.api.get_servers(detail=True, search_opts={'reservation_id': reservation_id}) server_map = {server['id']: server for server in servers} found_server = server_map.get(created_server_id) # The server from the 2nd request should not be there. self.assertIsNone(found_server) # Should have found 2 servers. self.assertEqual(len(server_map), 2) # Cleanup self._delete_server(created_server) for server in server_map.values(): self._delete_server(server) def test_create_server_with_injected_files(self): # Creates a server with injected_files. personality = [] # Inject a text file data = 'Hello, World!' personality.append({ 'path': '/helloworld.txt', 'contents': base64.encode_as_bytes(data), }) # Inject a binary file data = zlib.compress(b'Hello, World!') personality.append({ 'path': '/helloworld.zip', 'contents': base64.encode_as_bytes(data), }) # Create server server = self._build_server() server['personality'] = personality post = {'server': server} created_server = self.api.post_server(post) LOG.debug("created_server: %s", created_server) self.assertTrue(created_server['id']) created_server_id = created_server['id'] # Check it's there found_server = self.api.get_server(created_server_id) self.assertEqual(created_server_id, found_server['id']) found_server = self._wait_for_state_change(found_server, 'ACTIVE') # Cleanup self._delete_server(found_server) def test_stop_start_servers_negative_invalid_state(self): # Create server server = self._build_server() created_server = self.api.post_server({"server": server}) created_server_id = created_server['id'] found_server = self._wait_for_state_change(created_server, 'ACTIVE') # Start server in ACTIVE # NOTE(mkoshiya): When os-start API runs, the server status # must be SHUTOFF. # By returning 409, I want to confirm that the ACTIVE server does not # cause unexpected behavior. post = {'os-start': {}} ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_action, created_server_id, post) self.assertEqual(409, ex.response.status_code) self.assertEqual('ACTIVE', found_server['status']) # Stop server post = {'os-stop': {}} self.api.post_server_action(created_server_id, post) found_server = self._wait_for_state_change(found_server, 'SHUTOFF') # Stop server in SHUTOFF # NOTE(mkoshiya): When os-stop API runs, the server status # must be ACTIVE or ERROR. # By returning 409, I want to confirm that the SHUTOFF server does not # cause unexpected behavior. post = {'os-stop': {}} ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_action, created_server_id, post) self.assertEqual(409, ex.response.status_code) self.assertEqual('SHUTOFF', found_server['status']) # Cleanup self._delete_server(found_server) def test_revert_resized_server_negative_invalid_state(self): # Create server server = self._build_server() created_server = self.api.post_server({"server": server}) created_server_id = created_server['id'] found_server = self._wait_for_state_change(created_server, 'ACTIVE') # Revert resized server in ACTIVE # NOTE(yatsumi): When revert resized server API runs, # the server status must be VERIFY_RESIZE. # By returning 409, I want to confirm that the ACTIVE server does not # cause unexpected behavior. 
post = {'revertResize': {}} ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_action, created_server_id, post) self.assertEqual(409, ex.response.status_code) self.assertEqual('ACTIVE', found_server['status']) # Cleanup self._delete_server(found_server) def test_resize_server_negative_invalid_state(self): # Avoid migration self.flags(allow_resize_to_same_host=True) # Create server server = self._build_server() created_server = self.api.post_server({"server": server}) created_server_id = created_server['id'] found_server = self._wait_for_state_change(created_server, 'ACTIVE') # Resize server(flavorRef: 1 -> 2) post = {'resize': {"flavorRef": "2", "OS-DCF:diskConfig": "AUTO"}} self.api.post_server_action(created_server_id, post) found_server = self._wait_for_state_change(found_server, 'VERIFY_RESIZE') # Resize server in VERIFY_RESIZE(flavorRef: 2 -> 1) # NOTE(yatsumi): When resize API runs, the server status # must be ACTIVE or SHUTOFF. # By returning 409, I want to confirm that the VERIFY_RESIZE server # does not cause unexpected behavior. post = {'resize': {"flavorRef": "1", "OS-DCF:diskConfig": "AUTO"}} ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_action, created_server_id, post) self.assertEqual(409, ex.response.status_code) self.assertEqual('VERIFY_RESIZE', found_server['status']) # Cleanup self._delete_server(found_server) def test_confirm_resized_server_negative_invalid_state(self): # Create server server = self._build_server() created_server = self.api.post_server({"server": server}) created_server_id = created_server['id'] found_server = self._wait_for_state_change(created_server, 'ACTIVE') # Confirm resized server in ACTIVE # NOTE(yatsumi): When confirm resized server API runs, # the server status must be VERIFY_RESIZE. # By returning 409, I want to confirm that the ACTIVE server does not # cause unexpected behavior. post = {'confirmResize': {}} ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_action, created_server_id, post) self.assertEqual(409, ex.response.status_code) self.assertEqual('ACTIVE', found_server['status']) # Cleanup self._delete_server(found_server) def test_resize_server_overquota(self): self.flags(cores=1, group='quota') self.flags(ram=512, group='quota') # Create server with default flavor, 1 core, 512 ram server = self._build_server() created_server = self.api.post_server({"server": server}) created_server_id = created_server['id'] self._wait_for_state_change(created_server, 'ACTIVE') # Try to resize to flavorid 2, 1 core, 2048 ram post = {'resize': {'flavorRef': '2'}} ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_action, created_server_id, post) self.assertEqual(403, ex.response.status_code) def test_attach_vol_maximum_disk_devices_exceeded(self): self.useFixture(nova_fixtures.CinderFixture(self)) server = self._build_server() created_server = self.api.post_server({"server": server}) server_id = created_server['id'] self._wait_for_state_change(created_server, 'ACTIVE') volume_id = '9a695496-44aa-4404-b2cc-ccab2501f87e' LOG.info('Attaching volume %s to server %s', volume_id, server_id) # The fake driver doesn't implement get_device_name_for_instance, so # we'll just raise the exception directly here, instead of simuluating # an instance with 26 disk devices already attached. 
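        # Raising TooManyDiskDevices straight from the driver exercises the
        # translation of that error into the 403 response asserted below,
        # without having to attach 26 devices first.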
with mock.patch.object(self.compute.driver, 'get_device_name_for_instance') as mock_get: mock_get.side_effect = exception.TooManyDiskDevices(maximum=26) ex = self.assertRaises( client.OpenStackApiException, self.api.post_server_volume, server_id, dict(volumeAttachment=dict(volumeId=volume_id))) expected = ('The maximum allowed number of disk devices (26) to ' 'attach to a single instance has been exceeded.') self.assertEqual(403, ex.response.status_code) self.assertIn(expected, six.text_type(ex)) class ServersTestV21(ServersTest): api_major_version = 'v2.1' class ServersTestV219(ServersTestBase): api_major_version = 'v2.1' def _create_server(self, set_desc = True, desc = None): server = self._build_server() if set_desc: server['description'] = desc post = {'server': server} response = self.api.api_post('/servers', post).body return (server, response['server']) def _update_server(self, server_id, set_desc = True, desc = None): new_name = integrated_helpers.generate_random_alphanumeric(8) server = {'server': {'name': new_name}} if set_desc: server['server']['description'] = desc self.api.api_put('/servers/%s' % server_id, server) def _rebuild_server(self, server_id, set_desc = True, desc = None): new_name = integrated_helpers.generate_random_alphanumeric(8) post = {} post['rebuild'] = { "name": new_name, self._image_ref_parameter: "76fa36fc-c930-4bf3-8c8a-ea2a2420deb6", self._access_ipv4_parameter: "172.19.0.2", self._access_ipv6_parameter: "fe80::2", "metadata": {'some': 'thing'}, } post['rebuild'].update(self._get_access_ips_params()) if set_desc: post['rebuild']['description'] = desc self.api.api_post('/servers/%s/action' % server_id, post) def _create_server_and_verify(self, set_desc = True, expected_desc = None): # Creates a server with a description and verifies it is # in the GET responses. created_server = self._create_server(set_desc, expected_desc)[1] self._verify_server_description(created_server['id'], expected_desc) self._delete_server(created_server) def _update_server_and_verify(self, server_id, set_desc = True, expected_desc = None): # Updates a server with a description and verifies it is # in the GET responses. self._update_server(server_id, set_desc, expected_desc) self._verify_server_description(server_id, expected_desc) def _rebuild_server_and_verify(self, server_id, set_desc = True, expected_desc = None): # Rebuilds a server with a description and verifies it is # in the GET responses. self._rebuild_server(server_id, set_desc, expected_desc) self._verify_server_description(server_id, expected_desc) def _verify_server_description(self, server_id, expected_desc = None, desc_in_resp = True): # Calls GET on the servers and verifies that the description # is set as expected in the response, or not set at all. response = self.api.api_get('/servers/%s' % server_id) found_server = response.body['server'] self.assertEqual(server_id, found_server['id']) if desc_in_resp: # Verify the description is set as expected (can be None) self.assertEqual(expected_desc, found_server.get('description')) else: # Verify the description is not included in the response. 
self.assertNotIn('description', found_server) servers = self.api.api_get('/servers/detail').body['servers'] server_map = {server['id']: server for server in servers} found_server = server_map.get(server_id) self.assertTrue(found_server) if desc_in_resp: # Verify the description is set as expected (can be None) self.assertEqual(expected_desc, found_server.get('description')) else: # Verify the description is not included in the response. self.assertNotIn('description', found_server) def _create_assertRaisesRegex(self, desc): # Verifies that a 400 error is thrown on create server with self.assertRaisesRegex(client.OpenStackApiException, ".*Unexpected status code.*") as cm: self._create_server(True, desc) self.assertEqual(400, cm.exception.response.status_code) def _update_assertRaisesRegex(self, server_id, desc): # Verifies that a 400 error is thrown on update server with self.assertRaisesRegex(client.OpenStackApiException, ".*Unexpected status code.*") as cm: self._update_server(server_id, True, desc) self.assertEqual(400, cm.exception.response.status_code) def _rebuild_assertRaisesRegex(self, server_id, desc): # Verifies that a 400 error is thrown on rebuild server with self.assertRaisesRegex(client.OpenStackApiException, ".*Unexpected status code.*") as cm: self._rebuild_server(server_id, True, desc) self.assertEqual(400, cm.exception.response.status_code) def test_create_server_with_description(self): self.api.microversion = '2.19' # Create and get a server with a description self._create_server_and_verify(True, 'test description') # Create and get a server with an empty description self._create_server_and_verify(True, '') # Create and get a server with description set to None self._create_server_and_verify() # Create and get a server without setting the description self._create_server_and_verify(False) def test_update_server_with_description(self): self.api.microversion = '2.19' # Create a server with an initial description server = self._create_server(True, 'test desc 1')[1] server_id = server['id'] # Update and get the server with a description self._update_server_and_verify(server_id, True, 'updated desc') # Update and get the server name without changing the description self._update_server_and_verify(server_id, False, 'updated desc') # Update and get the server with an empty description self._update_server_and_verify(server_id, True, '') # Update and get the server by removing the description (set to None) self._update_server_and_verify(server_id) # Update and get the server with a 2nd new description self._update_server_and_verify(server_id, True, 'updated desc2') # Cleanup self._delete_server(server) def test_rebuild_server_with_description(self): self.api.microversion = '2.19' # Create a server with an initial description server = self._create_server(True, 'test desc 1')[1] server_id = server['id'] self._wait_for_state_change(server, 'ACTIVE') # Rebuild and get the server with a description self._rebuild_server_and_verify(server_id, True, 'updated desc') # Rebuild and get the server name without changing the description self._rebuild_server_and_verify(server_id, False, 'updated desc') # Rebuild and get the server with an empty description self._rebuild_server_and_verify(server_id, True, '') # Rebuild and get the server by removing the description (set to None) self._rebuild_server_and_verify(server_id) # Rebuild and get the server with a 2nd new description self._rebuild_server_and_verify(server_id, True, 'updated desc2') # Cleanup self._delete_server(server) def 
test_version_compatibility(self): # Create a server with microversion v2.19 and a description. self.api.microversion = '2.19' server = self._create_server(True, 'test desc 1')[1] server_id = server['id'] # Verify that the description is not included on V2.18 GETs self.api.microversion = '2.18' self._verify_server_description(server_id, desc_in_resp = False) # Verify that updating the server with description on V2.18 # results in a 400 error self._update_assertRaisesRegex(server_id, 'test update 2.18') # Verify that rebuilding the server with description on V2.18 # results in a 400 error self._rebuild_assertRaisesRegex(server_id, 'test rebuild 2.18') # Cleanup self._delete_server(server) # Create a server on V2.18 and verify that the description # defaults to the name on a V2.19 GET server_req, server = self._create_server(False) server_id = server['id'] self.api.microversion = '2.19' self._verify_server_description(server_id, server_req['name']) # Cleanup self._delete_server(server) # Verify that creating a server with description on V2.18 # results in a 400 error self.api.microversion = '2.18' self._create_assertRaisesRegex('test create 2.18') def test_description_errors(self): self.api.microversion = '2.19' # Create servers with invalid descriptions. These throw 400. # Invalid unicode with non-printable control char self._create_assertRaisesRegex(u'invalid\0dstring') # Description is longer than 255 chars self._create_assertRaisesRegex('x' * 256) # Update and rebuild servers with invalid descriptions. # These throw 400. server_id = self._create_server(True, "desc")[1]['id'] # Invalid unicode with non-printable control char self._update_assertRaisesRegex(server_id, u'invalid\u0604string') self._rebuild_assertRaisesRegex(server_id, u'invalid\u0604string') # Description is longer than 255 chars self._update_assertRaisesRegex(server_id, 'x' * 256) self._rebuild_assertRaisesRegex(server_id, 'x' * 256) class ServerTestV220(ServersTestBase): api_major_version = 'v2.1' def setUp(self): super(ServerTestV220, self).setUp() self.api.microversion = '2.20' self.ctxt = context.get_admin_context() def _create_server(self): server = self._build_server() post = {'server': server} response = self.api.api_post('/servers', post).body return (server, response['server']) def _shelve_server(self): server = self._create_server()[1] server_id = server['id'] self._wait_for_state_change(server, 'ACTIVE') self.api.post_server_action(server_id, {'shelve': None}) return self._wait_for_state_change(server, 'SHELVED_OFFLOADED') def _get_fake_bdms(self, ctxt): return block_device_obj.block_device_make_list(self.ctxt, [fake_block_device.FakeDbBlockDeviceDict( {'device_name': '/dev/vda', 'source_type': 'volume', 'destination_type': 'volume', 'volume_id': '5d721593-f033-4f6d-ab6f-b5b067e61bc4'})]) def test_attach_detach_vol_to_shelved_offloaded_server_new_flow(self): self.flags(shelved_offload_time=0) found_server = self._shelve_server() server_id = found_server['id'] fake_bdms = self._get_fake_bdms(self.ctxt) # Test attach volume self.stub_out('nova.volume.cinder.API.get', fakes.stub_volume_get) with test.nested(mock.patch.object(compute_api.API, '_check_volume_already_attached_to_instance'), mock.patch.object(volume.cinder.API, 'check_availability_zone'), mock.patch.object(volume.cinder.API, 'attachment_create'), mock.patch.object(volume.cinder.API, 'attachment_complete') ) as (mock_check_vol_attached, mock_check_av_zone, mock_attach_create, mock_attachment_complete): mock_attach_create.return_value = {'id': 
uuids.volume} volume_attachment = {"volumeAttachment": {"volumeId": "5d721593-f033-4f6d-ab6f-b5b067e61bc4"}} attach_response = self.api.api_post( '/servers/%s/os-volume_attachments' % (server_id), volume_attachment).body['volumeAttachment'] self.assertTrue(mock_attach_create.called) mock_attachment_complete.assert_called_once_with( mock.ANY, uuids.volume) self.assertIsNone(attach_response['device']) # Test detach volume with test.nested(mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid'), mock.patch.object(compute_api.API, '_local_cleanup_bdm_volumes') ) as (mock_get_bdms, mock_clean_vols): mock_get_bdms.return_value = fake_bdms attachment_id = mock_get_bdms.return_value[0]['volume_id'] self.api.api_delete('/servers/%s/os-volume_attachments/%s' % (server_id, attachment_id)) self.assertTrue(mock_clean_vols.called) self._delete_server(found_server) class ServerTestV269(ServersTestBase): api_major_version = 'v2.1' NUMBER_OF_CELLS = 3 def setUp(self): super(ServerTestV269, self).setUp() self.api.microversion = '2.69' self.ctxt = context.get_admin_context() self.project_id = self.api.project_id self.cells = objects.CellMappingList.get_all(self.ctxt) self.down_cell_insts = [] self.up_cell_insts = [] self.down_cell_mappings = objects.CellMappingList() flavor = objects.Flavor(id=1, name='flavor1', memory_mb=256, vcpus=1, root_gb=1, ephemeral_gb=1, flavorid='1', swap=0, rxtx_factor=1.0, vcpu_weight=1, disabled=False, is_public=True, extra_specs={}, projects=[]) _info_cache = objects.InstanceInfoCache(context) objects.InstanceInfoCache._from_db_object(context, _info_cache, test_instance_info_cache.fake_info_cache) # cell1 and cell2 will be the down cells while # cell0 and cell3 will be the up cells. down_cell_names = ['cell1', 'cell2'] for cell in self.cells: # create 2 instances and their mappings in all the 4 cells for i in range(2): with context.target_cell(self.ctxt, cell) as cctxt: inst = objects.Instance( context=cctxt, project_id=self.project_id, user_id=self.project_id, instance_type_id=flavor.id, hostname='%s-inst%i' % (cell.name, i), flavor=flavor, info_cache=_info_cache, display_name='server-test') inst.create() im = objects.InstanceMapping(context=self.ctxt, instance_uuid=inst.uuid, cell_mapping=cell, project_id=self.project_id, queued_for_delete=False) im.create() if cell.name in down_cell_names: self.down_cell_insts.append(inst.uuid) else: self.up_cell_insts.append(inst.uuid) # In cell1 and cell3 add a third instance in a different project # to show the --all-tenants case. if cell.name == 'cell1' or cell.name == 'cell3': with context.target_cell(self.ctxt, cell) as cctxt: inst = objects.Instance( context=cctxt, project_id='faker', user_id='faker', instance_type_id=flavor.id, hostname='%s-inst%i' % (cell.name, 3), flavor=flavor, info_cache=_info_cache, display_name='server-test') inst.create() im = objects.InstanceMapping(context=self.ctxt, instance_uuid=inst.uuid, cell_mapping=cell, project_id='faker', queued_for_delete=False) im.create() if cell.name in down_cell_names: self.down_cell_mappings.objects.append(cell) self.useFixture(nova_fixtures.DownCellFixture(self.down_cell_mappings)) def test_get_servers_with_down_cells(self): servers = self.api.get_servers(detail=False) # 4 servers from the up cells and 4 servers from the down cells self.assertEqual(8, len(servers)) for server in servers: if 'name' not in server: # server is in the down cell. 
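                # With microversion 2.69, instances in unreachable cells are
                # returned as minimal constructs with status UNKNOWN rather
                # than failing the whole listing.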
self.assertEqual('UNKNOWN', server['status']) self.assertIn(server['id'], self.down_cell_insts) self.assertIn('links', server) # the partial construct will have only the above 3 keys self.assertEqual(3, len(server)) else: # server in up cell self.assertIn(server['id'], self.up_cell_insts) # has all the keys self.assertEqual(server['name'], 'server-test') self.assertIn('links', server) def test_get_servers_detail_with_down_cells(self): servers = self.api.get_servers() # 4 servers from the up cells and 4 servers from the down cells self.assertEqual(8, len(servers)) for server in servers: if 'user_id' not in server: # server is in the down cell. self.assertEqual('UNKNOWN', server['status']) self.assertIn(server['id'], self.down_cell_insts) # the partial construct will only have 5 keys: created, # tenant_id, status, id and links. security_groups should be # present too but isn't since we haven't created a network # interface self.assertEqual(5, len(server)) else: # server in up cell self.assertIn(server['id'], self.up_cell_insts) # has all the keys self.assertEqual(server['user_id'], self.project_id) self.assertIn('image', server) def test_get_servers_detail_limits_with_down_cells(self): servers = self.api.get_servers(search_opts={'limit': 5}) # 4 servers from the up cells since we skip down cell # results by default for paging. self.assertEqual(4, len(servers), servers) for server in servers: # server in up cell self.assertIn(server['id'], self.up_cell_insts) # has all the keys self.assertEqual(server['user_id'], self.project_id) self.assertIn('image', server) def test_get_servers_detail_limits_with_down_cells_the_500_gift(self): self.flags(list_records_by_skipping_down_cells=False, group='api') # We get an API error with a 500 response code since the # list_records_by_skipping_down_cells config option is False. exp = self.assertRaises(client.OpenStackApiException, self.api.get_servers, search_opts={'limit': 5}) self.assertEqual(500, exp.response.status_code) self.assertIn('NovaException', six.text_type(exp)) def test_get_servers_detail_marker_in_down_cells(self): marker = self.down_cell_insts[2] # It will fail with a 500 if the marker is in the down cell. exp = self.assertRaises(client.OpenStackApiException, self.api.get_servers, search_opts={'marker': marker}) self.assertEqual(500, exp.response.status_code) self.assertIn('oslo_db.exception.DBError', six.text_type(exp)) def test_get_servers_detail_marker_sorting(self): marker = self.up_cell_insts[1] # It will give the results from the up cell if # list_records_by_skipping_down_cells config option is True. servers = self.api.get_servers(search_opts={'marker': marker, 'sort_key': "created_at", 'sort_dir': "asc"}) # since there are 4 servers from the up cells, when giving the # second instance as marker, sorted by creation time in ascending # third and fourth instances will be returned. self.assertEqual(2, len(servers)) for server in servers: self.assertIn( server['id'], [self.up_cell_insts[2], self.up_cell_insts[3]]) def test_get_servers_detail_non_admin_with_deleted_flag(self): # if list_records_by_skipping_down_cells config option is True # this deleted option should be ignored and the rest of the instances # from the up cells and the partial results from the down cells should # be returned. # Set the policy so we don't have permission to allow # all filters but are able to get server details. 
servers_rule = 'os_compute_api:servers:detail' extraspec_rule = 'os_compute_api:servers:allow_all_filters' self.policy.set_rules({ extraspec_rule: 'rule:admin_api', servers_rule: '@'}) servers = self.api.get_servers(search_opts={'deleted': True}) # gets 4 results from up cells and 4 from down cells. self.assertEqual(8, len(servers)) for server in servers: if "image" not in server: self.assertIn(server['id'], self.down_cell_insts) else: self.assertIn(server['id'], self.up_cell_insts) def test_get_servers_detail_filters(self): # We get the results only from the up cells, this ignoring the down # cells if list_records_by_skipping_down_cells config option is True. api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.admin_api = api_fixture.admin_api self.admin_api.microversion = '2.69' servers = self.admin_api.get_servers( search_opts={'hostname': "cell3-inst0"}) self.assertEqual(1, len(servers)) self.assertEqual(self.up_cell_insts[2], servers[0]['id']) def test_get_servers_detail_all_tenants_with_down_cells(self): api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.admin_api = api_fixture.admin_api self.admin_api.microversion = '2.69' servers = self.admin_api.get_servers(search_opts={'all_tenants': True}) # 4 servers from the up cells and 4 servers from the down cells # plus the 2 instances from cell1 and cell3 which are in a different # project. self.assertEqual(10, len(servers)) for server in servers: if 'user_id' not in server: # server is in the down cell. self.assertEqual('UNKNOWN', server['status']) if server['tenant_id'] != 'faker': self.assertIn(server['id'], self.down_cell_insts) # the partial construct will only have 5 keys: created, # tenant_id, status, id and links. security_groups should be # present too but isn't since we haven't created a network # interface self.assertEqual(5, len(server)) else: # server in up cell if server['tenant_id'] != 'faker': self.assertIn(server['id'], self.up_cell_insts) self.assertEqual(server['user_id'], self.project_id) self.assertIn('image', server) class ServerRebuildTestCase(integrated_helpers._IntegratedTestBase): api_major_version = 'v2.1' # We have to cap the microversion at 2.38 because that's the max we # can use to update image metadata via our compute images proxy API. microversion = '2.38' def _disable_compute_for(self, server): # Refresh to get its host server = self.api.get_server(server['id']) host = server['OS-EXT-SRV-ATTR:host'] # Disable the service it is on self.api_fixture.admin_api.put_service('disable', {'host': host, 'binary': 'nova-compute'}) def test_rebuild_with_image_novalidhost(self): """Creates a server with an image that is valid for the single compute that we have. Then rebuilds the server, passing in an image with metadata that does not fit the single compute which should result in a NoValidHost error. The ImagePropertiesFilter filter is enabled by default so that should filter out the host based on the image meta. """ self.compute2 = self.start_service('compute', host='host2') # We hard-code from a fake image since we can't get images # via the compute /images proxy API with microversion > 2.35. original_image_ref = '155d900f-4e14-4e4c-a73d-069cbf4541e6' server_req_body = { 'server': { 'imageRef': original_image_ref, 'flavorRef': '1', # m1.tiny from DefaultFlavorsFixture, 'name': 'test_rebuild_with_image_novalidhost', # We don't care about networking for this test. This requires # microversion >= 2.37. 
'networks': 'none' } } server = self.api.post_server(server_req_body) self._wait_for_state_change(server, 'ACTIVE') # Disable the host we're on so ComputeFilter would have ruled it out # normally self._disable_compute_for(server) # Now update the image metadata to be something that won't work with # the fake compute driver we're using since the fake driver has an # "x86_64" architecture. rebuild_image_ref = fake_image.AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID self.api.put_image_meta_key( rebuild_image_ref, 'hw_architecture', 'unicore32') # Now rebuild the server with that updated image and it should result # in a NoValidHost failure from the scheduler. rebuild_req_body = { 'rebuild': { 'imageRef': rebuild_image_ref } } # Since we're using the CastAsCall fixture, the NoValidHost error # should actually come back to the API and result in a 500 error. # Normally the user would get a 202 response because nova-api RPC casts # to nova-conductor which RPC calls the scheduler which raises the # NoValidHost. We can mimic the end user way to figure out the failure # by looking for the failed 'rebuild' instance action event. self.api.api_post('/servers/%s/action' % server['id'], rebuild_req_body, check_response_status=[500]) # Look for the failed rebuild action. self._wait_for_action_fail_completion( server, instance_actions.REBUILD, 'rebuild_server') # Assert the server image_ref was rolled back on failure. server = self.api.get_server(server['id']) self.assertEqual(original_image_ref, server['image']['id']) # The server should be in ERROR state self.assertEqual('ERROR', server['status']) self.assertIn('No valid host', server['fault']['message']) # Rebuild it again with the same bad image to make sure it's rejected # again. Since we're using CastAsCall here, there is no 202 from the # API, and the exception from conductor gets passed back through the # API. ex = self.assertRaises( client.OpenStackApiException, self.api.api_post, '/servers/%s/action' % server['id'], rebuild_req_body) self.assertIn('NoValidHost', six.text_type(ex)) # A rebuild to the same host should never attempt a rebuild claim. @mock.patch('nova.compute.resource_tracker.ResourceTracker.rebuild_claim', new_callable=mock.NonCallableMock) def test_rebuild_with_new_image(self, mock_rebuild_claim): """Rebuilds a server with a different image which will run it through the scheduler to validate the image is still OK with the compute host that the instance is running on. Validates that additional resources are not allocated against the instance.host in Placement due to the rebuild on same host. """ admin_api = self.api_fixture.admin_api admin_api.microversion = '2.53' def _get_provider_uuid_by_host(host): resp = admin_api.api_get( 'os-hypervisors?hypervisor_hostname_pattern=%s' % host).body return resp['hypervisors'][0]['id'] def _get_provider_usages(provider_uuid): return self.placement_api.get( '/resource_providers/%s/usages' % provider_uuid).body['usages'] def _get_allocations_by_server_uuid(server_uuid): return self.placement_api.get( '/allocations/%s' % server_uuid).body['allocations'] def _set_provider_inventory(rp_uuid, resource_class, inventory): # Get the resource provider generation for the inventory update. 
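            # Placement uses the provider generation for optimistic
            # concurrency control, so read the current generation right
            # before the inventory PUT to avoid a conflict.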
rp = self.placement_api.get( '/resource_providers/%s' % rp_uuid).body inventory['resource_provider_generation'] = rp['generation'] return self.placement_api.put( '/resource_providers/%s/inventories/%s' % (rp_uuid, resource_class), inventory).body def assertFlavorMatchesAllocation(flavor, allocation): self.assertEqual(flavor['vcpus'], allocation['VCPU']) self.assertEqual(flavor['ram'], allocation['MEMORY_MB']) self.assertEqual(flavor['disk'], allocation['DISK_GB']) nodename = self.compute.manager._get_nodename(None) rp_uuid = _get_provider_uuid_by_host(nodename) # make sure we start with no usage on the compute node rp_usages = _get_provider_usages(rp_uuid) self.assertEqual({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, rp_usages) server_req_body = { 'server': { # We hard-code from a fake image since we can't get images # via the compute /images proxy API with microversion > 2.35. 'imageRef': '155d900f-4e14-4e4c-a73d-069cbf4541e6', 'flavorRef': '1', # m1.tiny from DefaultFlavorsFixture, 'name': 'test_rebuild_with_new_image', # We don't care about networking for this test. This requires # microversion >= 2.37. 'networks': 'none' } } server = self.api.post_server(server_req_body) self._wait_for_state_change(server, 'ACTIVE') flavor = self.api.api_get('/flavors/1').body['flavor'] # make the compute node full and ensure rebuild still succeed _set_provider_inventory(rp_uuid, "VCPU", {"total": 1}) # There should be usage for the server on the compute node now. rp_usages = _get_provider_usages(rp_uuid) assertFlavorMatchesAllocation(flavor, rp_usages) allocs = _get_allocations_by_server_uuid(server['id']) self.assertIn(rp_uuid, allocs) allocs = allocs[rp_uuid]['resources'] assertFlavorMatchesAllocation(flavor, allocs) rebuild_image_ref = fake_image.AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID # Now rebuild the server with a different image. rebuild_req_body = { 'rebuild': { 'imageRef': rebuild_image_ref } } self.api.api_post('/servers/%s/action' % server['id'], rebuild_req_body) self._wait_for_server_parameter( server, {'OS-EXT-STS:task_state': None}) # The usage and allocations should not have changed. rp_usages = _get_provider_usages(rp_uuid) assertFlavorMatchesAllocation(flavor, rp_usages) allocs = _get_allocations_by_server_uuid(server['id']) self.assertIn(rp_uuid, allocs) allocs = allocs[rp_uuid]['resources'] assertFlavorMatchesAllocation(flavor, allocs) def test_volume_backed_rebuild_different_image(self): """Tests that trying to rebuild a volume-backed instance with a different image than what is in the root disk of the root volume will result in a 400 BadRequest error. """ self.useFixture(nova_fixtures.CinderFixture(self)) # First create our server as normal. server_req_body = { # There is no imageRef because this is boot from volume. 'server': { 'flavorRef': '1', # m1.tiny from DefaultFlavorsFixture, 'name': 'test_volume_backed_rebuild_different_image', # We don't care about networking for this test. This requires # microversion >= 2.37. 'networks': 'none', 'block_device_mapping_v2': [{ 'boot_index': 0, 'uuid': nova_fixtures.CinderFixture.IMAGE_BACKED_VOL, 'source_type': 'volume', 'destination_type': 'volume' }] } } server = self.api.post_server(server_req_body) server = self._wait_for_state_change(server, 'ACTIVE') # For a volume-backed server, the image ref will be an empty string # in the server response. self.assertEqual('', server['image']) # Now rebuild the server with a different image than was used to create # our fake volume. 
rebuild_image_ref = fake_image.AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID rebuild_req_body = { 'rebuild': { 'imageRef': rebuild_image_ref } } resp = self.api.api_post('/servers/%s/action' % server['id'], rebuild_req_body, check_response_status=[400]) # Assert that we failed because of the image change and not something # else. self.assertIn('Unable to rebuild with a different image for a ' 'volume-backed server', six.text_type(resp)) class ServersTestV280(ServersTestBase): api_major_version = 'v2.1' def setUp(self): super(ServersTestV280, self).setUp() api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api self.admin_api = api_fixture.admin_api self.api.microversion = '2.80' self.admin_api.microversion = '2.80' def test_get_migrations_after_cold_migrate_server_in_same_project( self): # Create a server by non-admin server = self.api.post_server({ 'server': { 'flavorRef': 1, 'imageRef': '155d900f-4e14-4e4c-a73d-069cbf4541e6', 'name': 'migrate-server-test', 'networks': 'none' }}) server_id = server['id'] # Check it's there found_server = self.api.get_server(server_id) self.assertEqual(server_id, found_server['id']) self.start_service('compute', host='host2') post = {'migrate': {}} self.admin_api.post_server_action(server_id, post) # Get the migration records by admin migrations = self.admin_api.get_migrations( user_id=self.admin_api.auth_user) self.assertEqual(1, len(migrations)) self.assertEqual(server_id, migrations[0]['instance_uuid']) # Get the migration records by non-admin migrations = self.admin_api.get_migrations( user_id=self.api.auth_user) self.assertEqual([], migrations) def test_get_migrations_after_live_migrate_server_in_different_project( self): # Create a server by non-admin server = self.api.post_server({ 'server': { 'flavorRef': 1, 'imageRef': '155d900f-4e14-4e4c-a73d-069cbf4541e6', 'name': 'migrate-server-test', 'networks': 'none' }}) server_id = server['id'] # Check it's there found_server = self.api.get_server(server_id) self.assertEqual(server_id, found_server['id']) server = self._wait_for_state_change(found_server, 'BUILD') self.start_service('compute', host='host2') project_id_1 = '4906260553374bf0a5d566543b320516' project_id_2 = 'c850298c1b6b4796a8f197ac310b2469' new_api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version=self.api_major_version, project_id=project_id_1)) new_admin_api = new_api_fixture.admin_api new_admin_api.microversion = '2.80' post = { 'os-migrateLive': { 'host': 'host2', 'block_migration': True } } new_admin_api.post_server_action(server_id, post) # Get the migration records migrations = new_admin_api.get_migrations(project_id=project_id_1) self.assertEqual(1, len(migrations)) self.assertEqual(server_id, migrations[0]['instance_uuid']) # Get the migration records by not exist project_id migrations = new_admin_api.get_migrations(project_id=project_id_2) self.assertEqual([], migrations) class ServerMovingTests(integrated_helpers.ProviderUsageBaseTestCase): """Tests moving servers while checking the resource allocations and usages These tests use two compute hosts. Boot a server on one of them then try to move the server to the other. At every step resource allocation of the server and the resource usages of the computes are queried from placement API and asserted. """ REQUIRES_LOCKING = True # NOTE(danms): The test defaults to using SmallFakeDriver, # which only has one vcpu, which can't take the doubled allocation # we're now giving it. So, use the bigger MediumFakeDriver here. 
compute_driver = 'fake.MediumFakeDriver' def setUp(self): super(ServerMovingTests, self).setUp() fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) self.compute1 = self._start_compute(host='host1') self.compute2 = self._start_compute(host='host2') flavors = self.api.get_flavors() self.flavor1 = flavors[0] self.flavor2 = flavors[1] # create flavor3 which has less MEMORY_MB but more DISK_GB than flavor2 flavor_body = {'flavor': {'name': 'test_flavor3', 'ram': int(self.flavor2['ram'] / 2), 'vcpus': 1, 'disk': self.flavor2['disk'] * 2, 'id': 'a22d5517-147c-4147-a0d1-e698df5cd4e3' }} self.flavor3 = self.api.post_flavor(flavor_body) def _other_hostname(self, host): other_host = {'host1': 'host2', 'host2': 'host1'} return other_host[host] def _run_periodics(self): # NOTE(jaypipes): We always run periodics in the same order: first on # compute1, then on compute2. However, we want to test scenarios when # the periodics run at different times during mover operations. This is # why we have the "reverse" tests which simply switch the source and # dest host while keeping the order in which we run the # periodics. This effectively allows us to test the matrix of timing # scenarios during move operations. ctx = context.get_admin_context() LOG.info('Running periodic for compute1 (%s)', self.compute1.manager.host) self.compute1.manager.update_available_resource(ctx) LOG.info('Running periodic for compute2 (%s)', self.compute2.manager.host) self.compute2.manager.update_available_resource(ctx) LOG.info('Finished with periodics') def test_resize_revert(self): self._test_resize_revert(dest_hostname='host1') def test_resize_revert_reverse(self): self._test_resize_revert(dest_hostname='host2') def test_resize_confirm(self): self._test_resize_confirm(dest_hostname='host1') def test_resize_confirm_reverse(self): self._test_resize_confirm(dest_hostname='host2') def test_migration_confirm_resize_error(self): source_hostname = self.compute1.host dest_hostname = self.compute2.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) server = self._boot_and_check_allocations(self.flavor1, source_hostname) self._move_and_check_allocations( server, request={'migrate': None}, old_flavor=self.flavor1, new_flavor=self.flavor1, source_rp_uuid=source_rp_uuid, dest_rp_uuid=dest_rp_uuid) # Mock failure def fake_confirm_migration(context, migration, instance, network_info): raise exception.MigrationPreCheckError( reason='test_migration_confirm_resize_error') with mock.patch('nova.virt.fake.FakeDriver.' 
'confirm_migration', side_effect=fake_confirm_migration): # Confirm the migration/resize and check the usages post = {'confirmResize': None} self.api.post_server_action( server['id'], post, check_response_status=[204]) server = self._wait_for_state_change(server, 'ERROR') # After confirming and error, we should have an allocation only on the # destination host self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor1) self.assertRequestMatchesUsage({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, source_rp_uuid) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], dest_rp_uuid) self._run_periodics() # Check we're still accurate after running the periodics self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor1) self.assertRequestMatchesUsage({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, source_rp_uuid) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], dest_rp_uuid) self._delete_and_check_allocations(server) def _test_resize_revert(self, dest_hostname): source_hostname = self._other_hostname(dest_hostname) source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) server = self._boot_and_check_allocations(self.flavor1, source_hostname) self._resize_and_check_allocations(server, self.flavor1, self.flavor2, source_rp_uuid, dest_rp_uuid) # Revert the resize and check the usages post = {'revertResize': None} self.api.post_server_action(server['id'], post) self._wait_for_state_change(server, 'ACTIVE') # Make sure the RequestSpec.flavor matches the original flavor. ctxt = context.get_admin_context() reqspec = objects.RequestSpec.get_by_instance_uuid(ctxt, server['id']) self.assertEqual(self.flavor1['id'], reqspec.flavor.flavorid) self._run_periodics() # the original host expected to have the old resource allocation self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) self.assertRequestMatchesUsage({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, dest_rp_uuid) # Check that the server only allocates resource from the original host self.assertFlavorMatchesAllocation(self.flavor1, server['id'], source_rp_uuid) self._delete_and_check_allocations(server) def _test_resize_confirm(self, dest_hostname): source_hostname = self._other_hostname(dest_hostname) source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) server = self._boot_and_check_allocations(self.flavor1, source_hostname) self._resize_and_check_allocations(server, self.flavor1, self.flavor2, source_rp_uuid, dest_rp_uuid) # Confirm the resize and check the usages self._confirm_resize(server) # After confirming, we should have an allocation only on the # destination host # The target host usage should be according to the new flavor self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor2) self.assertRequestMatchesUsage({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, source_rp_uuid) # and the target host allocation should be according to the new flavor self.assertFlavorMatchesAllocation(self.flavor2, server['id'], dest_rp_uuid) self._run_periodics() # Check we're still accurate after running the periodics # and the target host usage should be according to the new flavor self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor2) self.assertRequestMatchesUsage({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, source_rp_uuid) # and the server allocates only from the target host self.assertFlavorMatchesAllocation(self.flavor2, server['id'], dest_rp_uuid) self._delete_and_check_allocations(server) def 
test_resize_revert_same_host(self): # make sure that the test only uses a single host compute2_service_id = self.admin_api.get_services( host=self.compute2.host, binary='nova-compute')[0]['id'] self.admin_api.put_service(compute2_service_id, {'status': 'disabled'}) hostname = self.compute1.manager.host rp_uuid = self._get_provider_uuid_by_host(hostname) server = self._boot_and_check_allocations(self.flavor2, hostname) self._resize_to_same_host_and_check_allocations( server, self.flavor2, self.flavor3, rp_uuid) # Revert the resize and check the usages post = {'revertResize': None} self.api.post_server_action(server['id'], post) self._wait_for_state_change(server, 'ACTIVE') self._run_periodics() # after revert only allocations due to the old flavor should remain self.assertFlavorMatchesUsage(rp_uuid, self.flavor2) self.assertFlavorMatchesAllocation(self.flavor2, server['id'], rp_uuid) self._delete_and_check_allocations(server) def test_resize_confirm_same_host(self): # make sure that the test only uses a single host compute2_service_id = self.admin_api.get_services( host=self.compute2.host, binary='nova-compute')[0]['id'] self.admin_api.put_service(compute2_service_id, {'status': 'disabled'}) hostname = self.compute1.manager.host rp_uuid = self._get_provider_uuid_by_host(hostname) server = self._boot_and_check_allocations(self.flavor2, hostname) self._resize_to_same_host_and_check_allocations( server, self.flavor2, self.flavor3, rp_uuid) # Confirm the resize and check the usages self._confirm_resize(server) self._run_periodics() # after confirm only allocations due to the new flavor should remain self.assertFlavorMatchesUsage(rp_uuid, self.flavor3) self.assertFlavorMatchesAllocation(self.flavor3, server['id'], rp_uuid) self._delete_and_check_allocations(server) def test_resize_not_enough_resource(self): # Try to resize to a flavor that requests more VCPU than what the # compute hosts has available and expect the resize to fail flavor_body = {'flavor': {'name': 'test_too_big_flavor', 'ram': 1024, 'vcpus': fake.MediumFakeDriver.vcpus + 1, 'disk': 20, }} big_flavor = self.api.post_flavor(flavor_body) dest_hostname = self.compute2.host source_hostname = self._other_hostname(dest_hostname) source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) server = self._boot_and_check_allocations( self.flavor1, source_hostname) self.flags(allow_resize_to_same_host=False) resize_req = { 'resize': { 'flavorRef': big_flavor['id'] } } self.api.post_server_action( server['id'], resize_req, check_response_status=[202]) event = self._assert_resize_migrate_action_fail( server, instance_actions.RESIZE, 'NoValidHost') self.assertIn('details', event) # This test case works in microversion 2.84. self.assertIn('No valid host was found', event['details']) server = self.admin_api.get_server(server['id']) self.assertEqual(source_hostname, server['OS-EXT-SRV-ATTR:host']) # The server is still ACTIVE and thus there is no fault message. 
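        # A failed cold migrate / resize that never leaves the source host is
        # surfaced through the instance action event (checked above via
        # event['details'] with microversion >= 2.84) rather than through a
        # server fault, which is why the assertions below expect ACTIVE and
        # no 'fault' key.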
self.assertEqual('ACTIVE', server['status']) self.assertNotIn('fault', server) # only the source host shall have usages after the failed resize self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) # Check that the other provider has no usage self.assertRequestMatchesUsage( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, dest_rp_uuid) # Check that the server only allocates resource from the host it is # booted on self.assertFlavorMatchesAllocation(self.flavor1, server['id'], source_rp_uuid) self._delete_and_check_allocations(server) def test_resize_delete_while_verify(self): """Test scenario where the server is deleted while in the VERIFY_RESIZE state and ensures the allocations are properly cleaned up from the source and target compute node resource providers. The _confirm_resize_on_deleting() method in the API is actually responsible for making sure the migration-based allocations get cleaned up by confirming the resize on the source host before deleting the server from the target host. """ dest_hostname = 'host2' source_hostname = self._other_hostname(dest_hostname) source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) server = self._boot_and_check_allocations(self.flavor1, source_hostname) self._resize_and_check_allocations(server, self.flavor1, self.flavor2, source_rp_uuid, dest_rp_uuid) self._delete_and_check_allocations(server) def test_resize_confirm_assert_hypervisor_usage_no_periodics(self): """Resize confirm test for bug 1818914 to make sure the tracked resource usage in the os-hypervisors API (not placement) is as expected during a confirmed resize. This intentionally does not use _test_resize_confirm in order to avoid running periodics. """ # There should be no usage from a server on either hypervisor. source_rp_uuid = self._get_provider_uuid_by_host('host1') dest_rp_uuid = self._get_provider_uuid_by_host('host2') no_usage = {'vcpus': 0, 'disk': 0, 'ram': 0} for rp_uuid in (source_rp_uuid, dest_rp_uuid): self.assert_hypervisor_usage( rp_uuid, no_usage, volume_backed=False) # Create the server and wait for it to be ACTIVE. server = self._boot_and_check_allocations(self.flavor1, 'host1') # There should be resource usage for flavor1 on the source host. self.assert_hypervisor_usage( source_rp_uuid, self.flavor1, volume_backed=False) # And still no usage on the dest host. self.assert_hypervisor_usage( dest_rp_uuid, no_usage, volume_backed=False) # Resize the server to flavor2 and wait for VERIFY_RESIZE. self.flags(allow_resize_to_same_host=False) resize_req = { 'resize': { 'flavorRef': self.flavor2['id'] } } self.api.post_server_action(server['id'], resize_req) self._wait_for_state_change(server, 'VERIFY_RESIZE') # There should be resource usage for flavor1 on the source host. self.assert_hypervisor_usage( source_rp_uuid, self.flavor1, volume_backed=False) # And resource usage for flavor2 on the target host. self.assert_hypervisor_usage( dest_rp_uuid, self.flavor2, volume_backed=False) # Now confirm the resize and check hypervisor usage again. self._confirm_resize(server) # There should no resource usage for flavor1 on the source host. self.assert_hypervisor_usage( source_rp_uuid, no_usage, volume_backed=False) # And resource usage for flavor2 should still be on the target host. self.assert_hypervisor_usage( dest_rp_uuid, self.flavor2, volume_backed=False) # Run periodics and make sure usage is still as expected. 
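        # _run_periodics() (defined on this test class) drives
        # update_available_resource() on each compute manager, which makes
        # the resource tracker re-audit inventory and allocations against
        # placement; re-asserting afterwards guards against the periodic
        # "healing" or corrupting the usage we just checked.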
self._run_periodics() self.assert_hypervisor_usage( source_rp_uuid, no_usage, volume_backed=False) self.assert_hypervisor_usage( dest_rp_uuid, self.flavor2, volume_backed=False) def _wait_for_notification_event_type(self, event_type, max_retries=50): retry_counter = 0 while True: if len(fake_notifier.NOTIFICATIONS) > 0: for notification in fake_notifier.NOTIFICATIONS: if notification.event_type == event_type: return if retry_counter == max_retries: self.fail('Wait for notification event type (%s) failed' % event_type) retry_counter += 1 time.sleep(0.1) def test_evacuate_with_no_compute(self): source_hostname = self.compute1.host dest_hostname = self.compute2.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) # Disable compute service on destination host compute2_service_id = self.admin_api.get_services( host=dest_hostname, binary='nova-compute')[0]['id'] self.admin_api.put_service(compute2_service_id, {'status': 'disabled'}) server = self._boot_and_check_allocations( self.flavor1, source_hostname) # Force source compute down source_compute_id = self.admin_api.get_services( host=source_hostname, binary='nova-compute')[0]['id'] self.compute1.stop() self.admin_api.put_service( source_compute_id, {'forced_down': 'true'}) # Initialize fake_notifier fake_notifier.stub_notifier(self) fake_notifier.reset() # Initiate evacuation post = {'evacuate': {}} self.api.post_server_action(server['id'], post) # NOTE(elod.illes): Should be changed to non-polling solution when # patch https://review.opendev.org/#/c/482629/ gets merged: # fake_notifier.wait_for_versioned_notifications( # 'compute_task.rebuild_server') self._wait_for_notification_event_type('compute_task.rebuild_server') self._run_periodics() # There is no other host to evacuate to so the rebuild should put the # VM to ERROR state, but it should remain on source compute expected_params = {'OS-EXT-SRV-ATTR:host': source_hostname, 'status': 'ERROR'} server = self._wait_for_server_parameter(server, expected_params) # Check migrations migrations = self.api.get_migrations() self.assertEqual(1, len(migrations)) self.assertEqual('evacuation', migrations[0]['migration_type']) self.assertEqual(server['id'], migrations[0]['instance_uuid']) self.assertEqual(source_hostname, migrations[0]['source_compute']) self.assertEqual('error', migrations[0]['status']) # Restart source host self.admin_api.put_service( source_compute_id, {'forced_down': 'false'}) self.compute1.start() self._run_periodics() # Check allocation and usages: should only use resources on source host self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], source_rp_uuid) zero_usage = {'VCPU': 0, 'DISK_GB': 0, 'MEMORY_MB': 0} self.assertRequestMatchesUsage(zero_usage, dest_rp_uuid) self._delete_and_check_allocations(server) def test_migrate_no_valid_host(self): source_hostname = self.compute1.host dest_hostname = self.compute2.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) server = self._boot_and_check_allocations( self.flavor1, source_hostname) dest_compute_id = self.admin_api.get_services( host=dest_hostname, binary='nova-compute')[0]['id'] self.compute2.stop() # force it down to avoid waiting for the service group to time out self.admin_api.put_service( dest_compute_id, {'forced_down': 'true'}) # migrate the server post = {'migrate': None} 
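        # The cold-migrate action body is empty here, so the scheduler picks
        # the destination; because the only other compute was stopped and
        # forced down above, scheduling is expected to fail with NoValidHost.
        # (Hedged illustration only, not used by this test: newer
        # microversions also accept a target host in the body, e.g.
        # post = {'migrate': {'host': 'host2'}}.)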
self.api.post_server_action(server['id'], post) self._assert_resize_migrate_action_fail( server, instance_actions.MIGRATE, 'NoValidHost') expected_params = {'OS-EXT-SRV-ATTR:host': source_hostname, 'status': 'ACTIVE'} self._wait_for_server_parameter(server, expected_params) self._run_periodics() # Expect to have allocation only on source_host self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) zero_usage = {'VCPU': 0, 'DISK_GB': 0, 'MEMORY_MB': 0} self.assertRequestMatchesUsage(zero_usage, dest_rp_uuid) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], source_rp_uuid) self._delete_and_check_allocations(server) def _test_evacuate(self, keep_hypervisor_state): source_hostname = self.compute1.host dest_hostname = self.compute2.host server = self._boot_and_check_allocations( self.flavor1, source_hostname) source_compute_id = self.admin_api.get_services( host=source_hostname, binary='nova-compute')[0]['id'] self.compute1.stop() # force it down to avoid waiting for the service group to time out self.admin_api.put_service( source_compute_id, {'forced_down': 'true'}) # evacuate the server post = {'evacuate': {}} self.api.post_server_action( server['id'], post) expected_params = {'OS-EXT-SRV-ATTR:host': dest_hostname, 'status': 'ACTIVE'} server = self._wait_for_server_parameter(server, expected_params) # Expect to have allocation and usages on both computes as the # source compute is still down source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor1) self._check_allocation_during_evacuate( self.flavor1, server['id'], source_rp_uuid, dest_rp_uuid) # restart the source compute self.compute1 = self.restart_compute_service( self.compute1, keep_hypervisor_state=keep_hypervisor_state) self.admin_api.put_service( source_compute_id, {'forced_down': 'false'}) source_usages = self._get_provider_usages(source_rp_uuid) self.assertEqual({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, source_usages) self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor1) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], dest_rp_uuid) self._delete_and_check_allocations(server) def test_evacuate_instance_kept_on_the_hypervisor(self): self._test_evacuate(keep_hypervisor_state=True) def test_evacuate_clean_hypervisor(self): self._test_evacuate(keep_hypervisor_state=False) def _test_evacuate_forced_host(self, keep_hypervisor_state): """Evacuating a server with a forced host bypasses the scheduler which means conductor has to create the allocations against the destination node. This test recreates the scenarios and asserts the allocations on the source and destination nodes are as expected. 
""" source_hostname = self.compute1.host dest_hostname = self.compute2.host # the ability to force evacuate a server is removed entirely in 2.68 self.api.microversion = '2.67' server = self._boot_and_check_allocations( self.flavor1, source_hostname) source_compute_id = self.admin_api.get_services( host=source_hostname, binary='nova-compute')[0]['id'] self.compute1.stop() # force it down to avoid waiting for the service group to time out self.admin_api.put_service( source_compute_id, {'forced_down': 'true'}) # evacuate the server and force the destination host which bypasses # the scheduler post = { 'evacuate': { 'host': dest_hostname, 'force': True } } self.api.post_server_action(server['id'], post) expected_params = {'OS-EXT-SRV-ATTR:host': dest_hostname, 'status': 'ACTIVE'} server = self._wait_for_server_parameter(server, expected_params) # Run the periodics to show those don't modify allocations. self._run_periodics() # Expect to have allocation and usages on both computes as the # source compute is still down source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor1) self._check_allocation_during_evacuate( self.flavor1, server['id'], source_rp_uuid, dest_rp_uuid) # restart the source compute self.compute1 = self.restart_compute_service( self.compute1, keep_hypervisor_state=keep_hypervisor_state) self.admin_api.put_service( source_compute_id, {'forced_down': 'false'}) # Run the periodics again to show they don't change anything. self._run_periodics() # When the source node starts up, the instance has moved so the # ResourceTracker should cleanup allocations for the source node. source_usages = self._get_provider_usages(source_rp_uuid) self.assertEqual( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, source_usages) # The usages/allocations should still exist on the destination node # after the source node starts back up. self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor1) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], dest_rp_uuid) self._delete_and_check_allocations(server) def test_evacuate_forced_host_instance_kept_on_the_hypervisor(self): self._test_evacuate_forced_host(keep_hypervisor_state=True) def test_evacuate_forced_host_clean_hypervisor(self): self._test_evacuate_forced_host(keep_hypervisor_state=False) def test_evacuate_forced_host_v268(self): """Evacuating a server with a forced host was removed in API microversion 2.68. This test ensures that the request is rejected. """ source_hostname = self.compute1.host dest_hostname = self.compute2.host server = self._boot_and_check_allocations( self.flavor1, source_hostname) # evacuate the server and force the destination host which bypasses # the scheduler post = { 'evacuate': { 'host': dest_hostname, 'force': True } } ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_action, server['id'], post) self.assertIn("'force' was unexpected", six.text_type(ex)) # NOTE(gibi): there is a similar test in SchedulerOnlyChecksTargetTest but # we want this test here as well because ServerMovingTest is a parent class # of multiple test classes that run this test case with different compute # node setups. def test_evacuate_host_specified_but_not_forced(self): """Evacuating a server with a host but using the scheduler to create the allocations against the destination node. 
This test recreates the scenarios and asserts the allocations on the source and destination nodes are as expected. """ source_hostname = self.compute1.host dest_hostname = self.compute2.host server = self._boot_and_check_allocations( self.flavor1, source_hostname) source_compute_id = self.admin_api.get_services( host=source_hostname, binary='nova-compute')[0]['id'] self.compute1.stop() # force it down to avoid waiting for the service group to time out self.admin_api.put_service( source_compute_id, {'forced_down': 'true'}) # evacuate the server specify the target but do not force the # destination host to use the scheduler to validate the target host post = { 'evacuate': { 'host': dest_hostname, } } self.api.post_server_action(server['id'], post) expected_params = {'OS-EXT-SRV-ATTR:host': dest_hostname, 'status': 'ACTIVE'} server = self._wait_for_server_parameter(server, expected_params) # Run the periodics to show those don't modify allocations. self._run_periodics() # Expect to have allocation and usages on both computes as the # source compute is still down source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor1) self._check_allocation_during_evacuate( self.flavor1, server['id'], source_rp_uuid, dest_rp_uuid) # restart the source compute self.compute1 = self.restart_compute_service(self.compute1) self.admin_api.put_service( source_compute_id, {'forced_down': 'false'}) # Run the periodics again to show they don't change anything. self._run_periodics() # When the source node starts up, the instance has moved so the # ResourceTracker should cleanup allocations for the source node. source_usages = self._get_provider_usages(source_rp_uuid) self.assertEqual( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, source_usages) # The usages/allocations should still exist on the destination node # after the source node starts back up. self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor1) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], dest_rp_uuid) self._delete_and_check_allocations(server) def _test_evacuate_claim_on_dest_fails(self, keep_hypervisor_state): """Tests that the allocations on the destination node are cleaned up when the rebuild move claim fails due to insufficient resources. """ source_hostname = self.compute1.host dest_hostname = self.compute2.host dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) server = self._boot_and_check_allocations( self.flavor1, source_hostname) source_compute_id = self.admin_api.get_services( host=source_hostname, binary='nova-compute')[0]['id'] self.compute1.stop() # force it down to avoid waiting for the service group to time out self.admin_api.put_service( source_compute_id, {'forced_down': 'true'}) # NOTE(mriedem): This isn't great, and I'd like to fake out the driver # to make the claim fail, by doing something like returning a too high # memory_mb overhead, but the limits dict passed to the claim is empty # so the claim test is considering it as unlimited and never actually # performs a claim test. def fake_move_claim(*args, **kwargs): # Assert the destination node allocation exists. 
self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor1) raise exception.ComputeResourcesUnavailable( reason='test_evacuate_claim_on_dest_fails') with mock.patch('nova.compute.claims.MoveClaim', fake_move_claim): # evacuate the server self.api.post_server_action(server['id'], {'evacuate': {}}) # the migration will fail on the dest node and the instance will # stay ACTIVE and task_state will be set to None. server = self._wait_for_server_parameter( server, {'status': 'ACTIVE', 'OS-EXT-STS:task_state': None}) # Run the periodics to show those don't modify allocations. self._run_periodics() # The allocation should still exist on the source node since it's # still down, and the allocation on the destination node should be # cleaned up. source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) self.assertRequestMatchesUsage( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, dest_rp_uuid) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], source_rp_uuid) # restart the source compute self.compute1 = self.restart_compute_service( self.compute1, keep_hypervisor_state=keep_hypervisor_state) self.admin_api.put_service( source_compute_id, {'forced_down': 'false'}) # Run the periodics again to show they don't change anything. self._run_periodics() # The source compute shouldn't have cleaned up the allocation for # itself since the instance didn't move. self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], source_rp_uuid) def test_evacuate_claim_on_dest_fails_instance_kept_on_the_hypervisor( self): self._test_evacuate_claim_on_dest_fails(keep_hypervisor_state=True) def test_evacuate_claim_on_dest_fails_clean_hypervisor(self): self._test_evacuate_claim_on_dest_fails(keep_hypervisor_state=False) def _test_evacuate_rebuild_on_dest_fails(self, keep_hypervisor_state): """Tests that the allocations on the destination node are cleaned up automatically when the claim is made but the actual rebuild via the driver fails. """ source_hostname = self.compute1.host dest_hostname = self.compute2.host dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) server = self._boot_and_check_allocations( self.flavor1, source_hostname) source_compute_id = self.admin_api.get_services( host=source_hostname, binary='nova-compute')[0]['id'] self.compute1.stop() # force it down to avoid waiting for the service group to time out self.admin_api.put_service( source_compute_id, {'forced_down': 'true'}) def fake_rebuild(*args, **kwargs): # Assert the destination node allocation exists. self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor1) raise test.TestingException('test_evacuate_rebuild_on_dest_fails') with mock.patch.object( self.compute2.driver, 'rebuild', fake_rebuild): # evacuate the server self.api.post_server_action(server['id'], {'evacuate': {}}) # the migration will fail on the dest node and the instance will # go into error state server = self._wait_for_state_change(server, 'ERROR') # Run the periodics to show those don't modify allocations. self._run_periodics() # The allocation should still exist on the source node since it's # still down, and the allocation on the destination node should be # cleaned up. 
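        # In placement terms the expectation is roughly (actual values depend
        # on flavor1; shown only as a hypothetical illustration):
        #   source RP usage -> flavor1's VCPU / MEMORY_MB / DISK_GB
        #   dest RP usage   -> {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}
        # with the server as the only consumer against the source provider.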
source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) self.assertRequestMatchesUsage( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, dest_rp_uuid) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], source_rp_uuid) # restart the source compute self.compute1 = self.restart_compute_service( self.compute1, keep_hypervisor_state=keep_hypervisor_state) self.admin_api.put_service( source_compute_id, {'forced_down': 'false'}) # Run the periodics again to show they don't change anything. self._run_periodics() # The source compute shouldn't have cleaned up the allocation for # itself since the instance didn't move. self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], source_rp_uuid) def test_evacuate_rebuild_on_dest_fails_instance_kept_on_the_hypervisor( self): self._test_evacuate_rebuild_on_dest_fails(keep_hypervisor_state=True) def test_evacuate_rebuild_on_dest_fails_clean_hypervisor(self): self._test_evacuate_rebuild_on_dest_fails(keep_hypervisor_state=False) def _boot_then_shelve_and_check_allocations(self, hostname, rp_uuid): # avoid automatic shelve offloading self.flags(shelved_offload_time=-1) server = self._boot_and_check_allocations( self.flavor1, hostname) req = { 'shelve': {} } self.api.post_server_action(server['id'], req) self._wait_for_state_change(server, 'SHELVED') # the host should maintain the existing allocation for this instance # while the instance is shelved self.assertFlavorMatchesUsage(rp_uuid, self.flavor1) # Check that the server only allocates resource from the host it is # booted on self.assertFlavorMatchesAllocation(self.flavor1, server['id'], rp_uuid) return server def test_shelve_unshelve(self): source_hostname = self.compute1.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) server = self._boot_then_shelve_and_check_allocations( source_hostname, source_rp_uuid) req = { 'unshelve': None } self.api.post_server_action(server['id'], req) self._wait_for_state_change(server, 'ACTIVE') # the host should have resource usage as the instance is ACTIVE self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) # Check that the server only allocates resource from the host it is # booted on self.assertFlavorMatchesAllocation(self.flavor1, server['id'], source_rp_uuid) self._delete_and_check_allocations(server) def _shelve_offload_and_check_allocations(self, server, source_rp_uuid): req = { 'shelveOffload': {} } self.api.post_server_action(server['id'], req) self._wait_for_server_parameter(server, {'status': 'SHELVED_OFFLOADED', 'OS-EXT-SRV-ATTR:host': None, 'OS-EXT-AZ:availability_zone': ''}) source_usages = self._get_provider_usages(source_rp_uuid) self.assertEqual({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, source_usages) allocations = self._get_allocations_by_server_uuid(server['id']) self.assertEqual(0, len(allocations)) def test_shelve_offload_unshelve_diff_host(self): source_hostname = self.compute1.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) server = self._boot_then_shelve_and_check_allocations( source_hostname, source_rp_uuid) self._shelve_offload_and_check_allocations(server, source_rp_uuid) # unshelve after shelve offload will do scheduling. this test case # wants to test the scenario when the scheduler select a different host # to ushelve the instance. So we disable the original host. 
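        # Disabling the service (status: 'disabled') keeps the compute
        # running but removes it from scheduling, which is enough to steer
        # the unshelve; forcing it down (forced_down: true), as other tests
        # in this module do, additionally marks the service as dead.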
source_service_id = self.admin_api.get_services( host=source_hostname, binary='nova-compute')[0]['id'] self.admin_api.put_service(source_service_id, {'status': 'disabled'}) req = { 'unshelve': None } self.api.post_server_action(server['id'], req) server = self._wait_for_state_change(server, 'ACTIVE') # unshelving an offloaded instance will call the scheduler so the # instance might end up on a different host current_hostname = server['OS-EXT-SRV-ATTR:host'] self.assertEqual(current_hostname, self._other_hostname( source_hostname)) # the host running the instance should have resource usage current_rp_uuid = self._get_provider_uuid_by_host(current_hostname) self.assertFlavorMatchesUsage(current_rp_uuid, self.flavor1) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], current_rp_uuid) self._delete_and_check_allocations(server) def test_shelve_offload_unshelve_same_host(self): source_hostname = self.compute1.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) server = self._boot_then_shelve_and_check_allocations( source_hostname, source_rp_uuid) self._shelve_offload_and_check_allocations(server, source_rp_uuid) # unshelve after shelve offload will do scheduling. this test case # wants to test the scenario when the scheduler select the same host # to ushelve the instance. So we disable the other host. source_service_id = self.admin_api.get_services( host=self._other_hostname(source_hostname), binary='nova-compute')[0]['id'] self.admin_api.put_service(source_service_id, {'status': 'disabled'}) req = { 'unshelve': None } self.api.post_server_action(server['id'], req) server = self._wait_for_state_change(server, 'ACTIVE') # unshelving an offloaded instance will call the scheduler so the # instance might end up on a different host current_hostname = server['OS-EXT-SRV-ATTR:host'] self.assertEqual(current_hostname, source_hostname) # the host running the instance should have resource usage current_rp_uuid = self._get_provider_uuid_by_host(current_hostname) self.assertFlavorMatchesUsage(current_rp_uuid, self.flavor1) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], current_rp_uuid) self._delete_and_check_allocations(server) def test_live_migrate_force(self): source_hostname = self.compute1.host dest_hostname = self.compute2.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) # the ability to force live migrate a server is removed entirely in # 2.68 self.api.microversion = '2.67' server = self._boot_and_check_allocations( self.flavor1, source_hostname) # live migrate the server and force the destination host which bypasses # the scheduler post = { 'os-migrateLive': { 'host': dest_hostname, 'block_migration': True, 'force': True, } } self.api.post_server_action(server['id'], post) self._wait_for_server_parameter(server, {'OS-EXT-SRV-ATTR:host': dest_hostname, 'status': 'ACTIVE'}) self._run_periodics() # NOTE(danms): There should be no usage for the source self.assertRequestMatchesUsage( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, source_rp_uuid) self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor1) # the server has an allocation on only the dest node self.assertFlavorMatchesAllocation(self.flavor1, server['id'], dest_rp_uuid) self._delete_and_check_allocations(server) def test_live_migrate_forced_v268(self): """Live migrating a server with a forced host was removed in API microversion 2.68. This test ensures that the request is rejected. 
""" source_hostname = self.compute1.host dest_hostname = self.compute2.host server = self._boot_and_check_allocations( self.flavor1, source_hostname) # live migrate the server and force the destination host which bypasses # the scheduler post = { 'os-migrateLive': { 'host': dest_hostname, 'block_migration': True, 'force': True, } } ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_action, server['id'], post) self.assertIn("'force' was unexpected", six.text_type(ex)) def test_live_migrate(self): source_hostname = self.compute1.host dest_hostname = self.compute2.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) server = self._boot_and_check_allocations( self.flavor1, source_hostname, networks=[{'port': self.neutron.port_1['id']}]) post = { 'os-migrateLive': { 'host': dest_hostname, 'block_migration': True, } } self.api.post_server_action(server['id'], post) self._wait_for_server_parameter(server, {'OS-EXT-SRV-ATTR:host': dest_hostname, 'status': 'ACTIVE'}) self._run_periodics() # NOTE(danms): There should be no usage for the source self.assertRequestMatchesUsage( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, source_rp_uuid) self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor1) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], dest_rp_uuid) self._delete_and_check_allocations(server) def test_live_migrate_pre_check_fails(self): """Tests the case that the LiveMigrationTask in conductor has called the scheduler which picked a host and created allocations against it in Placement, but then when the conductor task calls check_can_live_migrate_destination on the destination compute it fails. The allocations on the destination compute node should be cleaned up before the conductor task asks the scheduler for another host to try the live migration. """ self.failed_hostname = None source_hostname = self.compute1.host dest_hostname = self.compute2.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) server = self._boot_and_check_allocations( self.flavor1, source_hostname) def fake_check_can_live_migrate_destination( context, instance, src_compute_info, dst_compute_info, block_migration=False, disk_over_commit=False): self.failed_hostname = dst_compute_info['host'] raise exception.MigrationPreCheckError( reason='test_live_migrate_pre_check_fails') with mock.patch('nova.virt.fake.FakeDriver.' 'check_can_live_migrate_destination', side_effect=fake_check_can_live_migrate_destination): post = { 'os-migrateLive': { 'host': dest_hostname, 'block_migration': True, } } self.api.post_server_action(server['id'], post) # As there are only two computes and we failed to live migrate to # the only other destination host, the LiveMigrationTask raises # MaxRetriesExceeded back to the conductor manager which handles it # generically and sets the instance back to ACTIVE status and # clears the task_state. The migration record status is set to # 'error', so that's what we need to look for to know when this # is done. migration = self._wait_for_migration_status(server, ['error']) # The source_compute should be set on the migration record, but the # destination shouldn't be as we never made it to one. self.assertEqual(source_hostname, migration['source_compute']) self.assertIsNone(migration['dest_compute']) # Make sure the destination host (the only other host) is the failed # host. 
self.assertEqual(dest_hostname, self.failed_hostname) # Since the instance didn't move, assert the allocations are still # on the source node. self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) # Assert the allocations, created by the scheduler, are cleaned up # after the migration pre-check error happens. self.assertRequestMatchesUsage( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, dest_rp_uuid) # There should only be 1 allocation for the instance on the source node self.assertFlavorMatchesAllocation( self.flavor1, server['id'], source_rp_uuid) self._delete_and_check_allocations(server) @mock.patch('nova.virt.fake.FakeDriver.pre_live_migration') def test_live_migrate_rollback_cleans_dest_node_allocations( self, mock_pre_live_migration, force=False): """Tests the case that when live migration fails, either during the call to pre_live_migration on the destination, or during the actual live migration in the virt driver, the allocations on the destination node are rolled back since the instance is still on the source node. """ source_hostname = self.compute1.host dest_hostname = self.compute2.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) # the ability to force live migrate a server is removed entirely in # 2.68 self.api.microversion = '2.67' server = self._boot_and_check_allocations( self.flavor1, source_hostname) def stub_pre_live_migration(context, instance, block_device_info, network_info, disk_info, migrate_data): # Make sure the source node allocations are against the migration # record and the dest node allocations are against the instance. self.assertFlavorMatchesAllocation( self.flavor1, migrate_data.migration.uuid, source_rp_uuid) self.assertFlavorMatchesAllocation( self.flavor1, server['id'], dest_rp_uuid) # The actual type of exception here doesn't matter. The point # is that the virt driver raised an exception from the # pre_live_migration method on the destination host. raise test.TestingException( 'test_live_migrate_rollback_cleans_dest_node_allocations') mock_pre_live_migration.side_effect = stub_pre_live_migration post = { 'os-migrateLive': { 'host': dest_hostname, 'block_migration': True, 'force': force } } self.api.post_server_action(server['id'], post) # The compute manager will put the migration record into error status # when pre_live_migration fails, so wait for that to happen. migration = self._wait_for_migration_status(server, ['error']) # The _rollback_live_migration method in the compute manager will reset # the task_state on the instance, so wait for that to happen. server = self._wait_for_server_parameter( server, {'OS-EXT-STS:task_state': None}) self.assertEqual(source_hostname, migration['source_compute']) self.assertEqual(dest_hostname, migration['dest_compute']) # Since the instance didn't move, assert the allocations are still # on the source node. self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) # Assert the allocations, created by the scheduler, are cleaned up # after the rollback happens. 
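        # During a live migration the source-node allocation is held by the
        # migration record's UUID as a separate consumer while the instance
        # holds the destination allocation (this is exactly what
        # stub_pre_live_migration asserted above). Rolling back drops the
        # destination allocation and moves the source allocation back to the
        # instance, which is what the next two assertions verify.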
self.assertRequestMatchesUsage( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, dest_rp_uuid) # There should only be 1 allocation for the instance on the source node self.assertFlavorMatchesAllocation( self.flavor1, server['id'], source_rp_uuid) self._delete_and_check_allocations(server) def test_live_migrate_rollback_cleans_dest_node_allocations_forced(self): """Tests the case that when a forced host live migration fails, either during the call to pre_live_migration on the destination, or during the actual live migration in the virt driver, the allocations on the destination node are rolled back since the instance is still on the source node. """ self.test_live_migrate_rollback_cleans_dest_node_allocations( force=True) def test_rescheduling_when_migrating_instance(self): """Tests that allocations are removed from the destination node by the compute service when a cold migrate / resize fails and a reschedule request is sent back to conductor. """ source_hostname = self.compute1.manager.host server = self._boot_and_check_allocations( self.flavor1, source_hostname) def fake_prep_resize(*args, **kwargs): dest_hostname = self._other_hostname(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor1) allocations = self._get_allocations_by_server_uuid(server['id']) self.assertIn(dest_rp_uuid, allocations) source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) migration_uuid = self.get_migration_uuid_for_instance(server['id']) allocations = self._get_allocations_by_server_uuid(migration_uuid) self.assertIn(source_rp_uuid, allocations) raise test.TestingException('Simulated _prep_resize failure.') # Yes this isn't great in a functional test, but it's simple. self.stub_out('nova.compute.manager.ComputeManager._prep_resize', fake_prep_resize) # Now migrate the server which is going to fail on the destination. self.api.post_server_action(server['id'], {'migrate': None}) self._wait_for_action_fail_completion( server, instance_actions.MIGRATE, 'compute_prep_resize') dest_hostname = self._other_hostname(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) # Expects no allocation records on the failed host. self.assertRequestMatchesUsage( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, dest_rp_uuid) # Ensure the allocation records still exist on the source host. source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) allocations = self._get_allocations_by_server_uuid(server['id']) self.assertIn(source_rp_uuid, allocations) def _test_resize_to_same_host_instance_fails(self, failing_method, event_name): """Tests that when we resize to the same host and resize fails in the given method, we cleanup the allocations before rescheduling. """ # make sure that the test only uses a single host compute2_service_id = self.admin_api.get_services( host=self.compute2.host, binary='nova-compute')[0]['id'] self.admin_api.put_service(compute2_service_id, {'status': 'disabled'}) hostname = self.compute1.manager.host rp_uuid = self._get_provider_uuid_by_host(hostname) server = self._boot_and_check_allocations(self.flavor1, hostname) def fake_resize_method(*args, **kwargs): # Ensure the allocations are doubled now before we fail. 
self.assertFlavorMatchesUsage(rp_uuid, self.flavor1, self.flavor2) raise test.TestingException('Simulated resize failure.') # Yes this isn't great in a functional test, but it's simple. self.stub_out( 'nova.compute.manager.ComputeManager.%s' % failing_method, fake_resize_method) self.flags(allow_resize_to_same_host=True) resize_req = { 'resize': { 'flavorRef': self.flavor2['id'] } } self.api.post_server_action(server['id'], resize_req) self._wait_for_action_fail_completion( server, instance_actions.RESIZE, event_name) # Ensure the allocation records still exist on the host. source_rp_uuid = self._get_provider_uuid_by_host(hostname) if failing_method == '_finish_resize': # finish_resize will drop the old flavor allocations. self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor2) else: # The new_flavor should have been subtracted from the doubled # allocation which just leaves us with the original flavor. self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) def test_resize_to_same_host_prep_resize_fails(self): self._test_resize_to_same_host_instance_fails( '_prep_resize', 'compute_prep_resize') def test_resize_instance_fails_allocation_cleanup(self): self._test_resize_to_same_host_instance_fails( '_resize_instance', 'compute_resize_instance') def test_finish_resize_fails_allocation_cleanup(self): self._test_resize_to_same_host_instance_fails( '_finish_resize', 'compute_finish_resize') def _server_created_with_host(self): hostname = self.compute1.host server_req = self._build_server( image_uuid="155d900f-4e14-4e4c-a73d-069cbf4541e6", flavor_id=self.flavor1["id"], networks='none') server_req['host'] = hostname created_server = self.api.post_server({"server": server_req}) server = self._wait_for_state_change(created_server, "ACTIVE") return server def test_live_migration_after_server_created_with_host(self): """Test after creating server with requested host, and then do live-migration for the server. The requested host will not effect the new moving operation. """ dest_hostname = self.compute2.host created_server = self._server_created_with_host() post = { 'os-migrateLive': { 'host': None, 'block_migration': 'auto' } } self.api.post_server_action(created_server['id'], post) new_server = self._wait_for_server_parameter( created_server, {'status': 'ACTIVE'}) inst_dest_host = new_server["OS-EXT-SRV-ATTR:host"] self.assertEqual(dest_hostname, inst_dest_host) def test_evacuate_after_server_created_with_host(self): """Test after creating server with requested host, and then do evacuation for the server. The requested host will not effect the new moving operation. """ dest_hostname = self.compute2.host created_server = self._server_created_with_host() source_compute_id = self.admin_api.get_services( host=created_server["OS-EXT-SRV-ATTR:host"], binary='nova-compute')[0]['id'] self.compute1.stop() # force it down to avoid waiting for the service group to time out self.admin_api.put_service( source_compute_id, {'forced_down': 'true'}) post = { 'evacuate': {} } self.api.post_server_action(created_server['id'], post) expected_params = {'OS-EXT-SRV-ATTR:host': dest_hostname, 'status': 'ACTIVE'} new_server = self._wait_for_server_parameter(created_server, expected_params) inst_dest_host = new_server["OS-EXT-SRV-ATTR:host"] self.assertEqual(dest_hostname, inst_dest_host) def test_resize_and_confirm_after_server_created_with_host(self): """Test after creating server with requested host, and then do resize for the server. The requested host will not effect the new moving operation. 
""" dest_hostname = self.compute2.host created_server = self._server_created_with_host() # resize server self.flags(allow_resize_to_same_host=False) resize_req = { 'resize': { 'flavorRef': self.flavor2['id'] } } self.api.post_server_action(created_server['id'], resize_req) self._wait_for_state_change(created_server, 'VERIFY_RESIZE') # Confirm the resize new_server = self._confirm_resize(created_server) inst_dest_host = new_server["OS-EXT-SRV-ATTR:host"] self.assertEqual(dest_hostname, inst_dest_host) def test_shelve_unshelve_after_server_created_with_host(self): """Test after creating server with requested host, and then do shelve and unshelve for the server. The requested host will not effect the new moving operation. """ dest_hostname = self.compute2.host created_server = self._server_created_with_host() self.flags(shelved_offload_time=-1) req = {'shelve': {}} self.api.post_server_action(created_server['id'], req) self._wait_for_state_change(created_server, 'SHELVED') req = {'shelveOffload': {}} self.api.post_server_action(created_server['id'], req) self._wait_for_server_parameter(created_server, { 'status': 'SHELVED_OFFLOADED', 'OS-EXT-SRV-ATTR:host': None, 'OS-EXT-AZ:availability_zone': ''}) # unshelve after shelve offload will do scheduling. this test case # wants to test the scenario when the scheduler select a different host # to ushelve the instance. So we disable the original host. source_service_id = self.admin_api.get_services( host=created_server["OS-EXT-SRV-ATTR:host"], binary='nova-compute')[0]['id'] self.admin_api.put_service(source_service_id, {'status': 'disabled'}) req = {'unshelve': None} self.api.post_server_action(created_server['id'], req) new_server = self._wait_for_state_change(created_server, 'ACTIVE') inst_dest_host = new_server["OS-EXT-SRV-ATTR:host"] self.assertEqual(dest_hostname, inst_dest_host) @mock.patch.object(utils, 'fill_provider_mapping', wraps=utils.fill_provider_mapping) def _test_resize_reschedule_uses_host_lists(self, mock_fill_provider_map, fails, num_alts=None): """Test that when a resize attempt fails, the retry comes from the supplied host_list, and does not call the scheduler. """ server_req = self._build_server( image_uuid="155d900f-4e14-4e4c-a73d-069cbf4541e6", flavor_id=self.flavor1["id"], networks='none') created_server = self.api.post_server({"server": server_req}) server = self._wait_for_state_change(created_server, "ACTIVE") inst_host = server["OS-EXT-SRV-ATTR:host"] uuid_orig = self._get_provider_uuid_by_host(inst_host) # We will need four new compute nodes to test the resize, representing # the host selected by select_destinations(), along with 3 alternates. 
self._start_compute(host="selection") self._start_compute(host="alt_host1") self._start_compute(host="alt_host2") self._start_compute(host="alt_host3") uuid_sel = self._get_provider_uuid_by_host("selection") uuid_alt1 = self._get_provider_uuid_by_host("alt_host1") uuid_alt2 = self._get_provider_uuid_by_host("alt_host2") uuid_alt3 = self._get_provider_uuid_by_host("alt_host3") hosts = [{"name": "selection", "uuid": uuid_sel}, {"name": "alt_host1", "uuid": uuid_alt1}, {"name": "alt_host2", "uuid": uuid_alt2}, {"name": "alt_host3", "uuid": uuid_alt3}, ] self.useFixture(nova_fixtures.HostNameWeigherFixture( weights={ "selection": 999, "alt_host1": 888, "alt_host2": 777, "alt_host3": 666, "host1": 0, "host2": 0})) self.scheduler_service.stop() self.scheduler_service = self.start_service('scheduler') def fake_prep_resize(*args, **kwargs): if self.num_fails < fails: self.num_fails += 1 raise Exception("fake_prep_resize") actual_prep_resize(*args, **kwargs) # Yes this isn't great in a functional test, but it's simple. actual_prep_resize = compute_manager.ComputeManager._prep_resize self.stub_out("nova.compute.manager.ComputeManager._prep_resize", fake_prep_resize) self.num_fails = 0 num_alts = 4 if num_alts is None else num_alts # Make sure we have enough retries available for the number of # requested fails. attempts = min(fails + 2, num_alts) self.flags(max_attempts=attempts, group='scheduler') server_uuid = server["id"] data = {"resize": {"flavorRef": self.flavor2["id"]}} self.api.post_server_action(server_uuid, data) if num_alts < fails: # We will run out of alternates before populate_retry will # raise a MaxRetriesExceeded exception, so the migration will # fail and the server should be in status "ERROR" server = self._wait_for_state_change(created_server, "ERROR") # The usage should be unchanged from the original flavor self.assertFlavorMatchesUsage(uuid_orig, self.flavor1) # There should be no usages on any of the hosts target_uuids = (uuid_sel, uuid_alt1, uuid_alt2, uuid_alt3) empty_usage = {"VCPU": 0, "MEMORY_MB": 0, "DISK_GB": 0} for target_uuid in target_uuids: usage = self._get_provider_usages(target_uuid) self.assertEqual(empty_usage, usage) else: server = self._wait_for_state_change(created_server, "VERIFY_RESIZE") # Verify that the selected host failed, and was rescheduled to # an alternate host. new_server_host = server.get("OS-EXT-SRV-ATTR:host") expected_host = hosts[fails]["name"] self.assertEqual(expected_host, new_server_host) uuid_dest = hosts[fails]["uuid"] # The usage should match the resized flavor self.assertFlavorMatchesUsage(uuid_dest, self.flavor2) # Verify that the other host have no allocations target_uuids = (uuid_sel, uuid_alt1, uuid_alt2, uuid_alt3) empty_usage = {"VCPU": 0, "MEMORY_MB": 0, "DISK_GB": 0} for target_uuid in target_uuids: if target_uuid == uuid_dest: continue usage = self._get_provider_usages(target_uuid) self.assertEqual(empty_usage, usage) # Verify that there is only one migration record for the instance. ctxt = context.get_admin_context() filters = {"instance_uuid": server["id"]} migrations = objects.MigrationList.get_by_filters(ctxt, filters) self.assertEqual(1, len(migrations.objects)) # fill_provider_mapping should have been called once for the initial # build, once for the resize scheduling to the primary host and then # once per reschedule. 
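        # Worked example under those rules: with fails=3 and the default
        # num_alts=4, the code below expects at least
        # 2 (build + primary resize) + (3 - 1) = 4 calls; the test only
        # asserts a lower bound via assertGreaterEqual.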
expected_fill_count = 2 if num_alts > 1: expected_fill_count += self.num_fails - 1 self.assertGreaterEqual(mock_fill_provider_map.call_count, expected_fill_count) def test_resize_reschedule_uses_host_lists_1_fail(self): self._test_resize_reschedule_uses_host_lists(fails=1) def test_resize_reschedule_uses_host_lists_3_fails(self): self._test_resize_reschedule_uses_host_lists(fails=3) def test_resize_reschedule_uses_host_lists_not_enough_alts(self): self._test_resize_reschedule_uses_host_lists(fails=3, num_alts=1) def test_migrate_confirm(self): source_hostname = self.compute1.host dest_hostname = self.compute2.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) server = self._boot_and_check_allocations( self.flavor1, source_hostname) self._migrate_and_check_allocations( server, self.flavor1, source_rp_uuid, dest_rp_uuid) # Confirm the move and check the usages self._confirm_resize(server) def _check_allocation(): # the target host usage should be according to the flavor self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor1) # the source host has no usage self.assertRequestMatchesUsage({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, source_rp_uuid) # and the target host allocation should be according to the flavor self.assertFlavorMatchesAllocation(self.flavor1, server['id'], dest_rp_uuid) # After confirming, we should have an allocation only on the # destination host _check_allocation() self._run_periodics() # Check we're still accurate after running the periodics _check_allocation() self._delete_and_check_allocations(server) def test_migrate_revert(self): source_hostname = self.compute1.host dest_hostname = self.compute2.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) server = self._boot_and_check_allocations( self.flavor1, source_hostname) self._migrate_and_check_allocations( server, self.flavor1, source_rp_uuid, dest_rp_uuid) # Revert the move and check the usages post = {'revertResize': None} self.api.post_server_action(server['id'], post) self._wait_for_state_change(server, 'ACTIVE') def _check_allocation(): self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) self.assertRequestMatchesUsage({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, dest_rp_uuid) # Check that the server only allocates resource from the original # host self.assertFlavorMatchesAllocation(self.flavor1, server['id'], source_rp_uuid) # the original host expected to have the old resource allocation _check_allocation() self._run_periodics() _check_allocation() self._delete_and_check_allocations(server) class PollUnconfirmedResizesTest(integrated_helpers.ProviderUsageBaseTestCase): """Tests for the _poll_unconfirmed_resizes periodic task.""" compute_driver = 'fake.MediumFakeDriver' def setUp(self): super(PollUnconfirmedResizesTest, self).setUp() # Start two computes for the resize. self._start_compute('host1') self._start_compute('host2') def test_source_host_down_during_confirm(self): """Tests the scenario that between the time that the server goes to VERIFY_RESIZE status and the _poll_unconfirmed_resizes periodic task runs the source compute service goes down so the confirm task fails. """ server = self._build_server(networks='none', host='host1') server = self.api.post_server({'server': server}) server = self._wait_for_state_change(server, 'ACTIVE') # Cold migrate the server to the other host. 
        self.api.post_server_action(server['id'], {'migrate': None})
        server = self._wait_for_state_change(server, 'VERIFY_RESIZE')
        self.assertEqual('host2', server['OS-EXT-SRV-ATTR:host'])
        # Make sure the migration is finished.
        self._wait_for_migration_status(server, ['finished'])
        # Stop and force down the source compute service.
        self.computes['host1'].stop()
        source_service = self.api.get_services(
            binary='nova-compute', host='host1')[0]
        self.api.put_service(
            source_service['id'], {'status': 'disabled', 'forced_down': True})
        # Now configure auto-confirm and call the method on the target compute
        # so we do not have to wait for the periodic to run.
        self.flags(resize_confirm_window=1)
        # Stub timeutils so the DB API query finds the unconfirmed migration.
        future = timeutils.utcnow() + datetime.timedelta(hours=1)
        ctxt = context.get_admin_context()
        with osloutils_fixture.TimeFixture(future):
            self.computes['host2'].manager._poll_unconfirmed_resizes(ctxt)
        self.assertIn('Error auto-confirming resize',
                      self.stdlog.logger.output)
        self.assertIn('Service is unavailable at this time',
                      self.stdlog.logger.output)
        # The source compute service check should have been done before the
        # migration status was updated so it should still be "finished".
        self._wait_for_migration_status(server, ['finished'])
        # Try to confirm in the API while the source compute service is still
        # down to assert the 409 (rather than a 500) error.
        ex = self.assertRaises(client.OpenStackApiException,
                               self.api.post_server_action,
                               server['id'], {'confirmResize': None})
        self.assertEqual(409, ex.response.status_code)
        self.assertIn('Service is unavailable at this time',
                      six.text_type(ex))
        # Bring the source compute back up and try to confirm the resize which
        # should work since the migration status is still "finished".
        self.restart_compute_service(self.computes['host1'])
        self.api.put_service(
            source_service['id'], {'status': 'enabled', 'forced_down': False})
        # Use the API to confirm the resize because _poll_unconfirmed_resizes
        # requires mucking with the current time which causes problems with
        # the service_is_up check in the API.
        self.api.post_server_action(server['id'], {'confirmResize': None})
        self._wait_for_state_change(server, 'ACTIVE')


class ServerLiveMigrateForceAndAbort(
        integrated_helpers.ProviderUsageBaseTestCase):
    """Test server live migrations which either delete the migration or
    force_complete it, and check the allocations after those operations.

    These tests use the fake driver to handle the force_complete and
    deletion of the live migration.
""" compute_driver = 'fake.FakeLiveMigrateDriver' def setUp(self): super(ServerLiveMigrateForceAndAbort, self).setUp() self.compute1 = self._start_compute(host='host1') self.compute2 = self._start_compute(host='host2') flavors = self.api.get_flavors() self.flavor1 = flavors[0] def test_live_migrate_force_complete(self): source_hostname = self.compute1.host dest_hostname = self.compute2.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) server = self._boot_and_check_allocations( self.flavor1, source_hostname) post = { 'os-migrateLive': { 'host': dest_hostname, 'block_migration': True, } } self.api.post_server_action(server['id'], post) migration = self._wait_for_migration_status(server, ['running']) self.api.force_complete_migration(server['id'], migration['id']) self._wait_for_server_parameter(server, {'OS-EXT-SRV-ATTR:host': dest_hostname, 'status': 'ACTIVE'}) self._run_periodics() self.assertRequestMatchesUsage( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, source_rp_uuid) self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor1) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], dest_rp_uuid) self._delete_and_check_allocations(server) def test_live_migrate_delete(self): source_hostname = self.compute1.host dest_hostname = self.compute2.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) server = self._boot_and_check_allocations( self.flavor1, source_hostname) post = { 'os-migrateLive': { 'host': dest_hostname, 'block_migration': True, } } self.api.post_server_action(server['id'], post) migration = self._wait_for_migration_status(server, ['running']) self.api.delete_migration(server['id'], migration['id']) self._wait_for_server_parameter(server, {'OS-EXT-SRV-ATTR:host': source_hostname, 'status': 'ACTIVE'}) self._run_periodics() allocations = self._get_allocations_by_server_uuid(server['id']) self.assertNotIn(dest_rp_uuid, allocations) self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], source_rp_uuid) self.assertRequestMatchesUsage({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, dest_rp_uuid) self._delete_and_check_allocations(server) class ServerLiveMigrateForceAndAbortWithNestedResourcesRequest( ServerLiveMigrateForceAndAbort): compute_driver = 'fake.FakeLiveMigrateDriverWithNestedCustomResources' def setUp(self): super(ServerLiveMigrateForceAndAbortWithNestedResourcesRequest, self).setUp() # modify the flavor used in the test base class to require one piece of # CUSTOM_MAGIC resource as well. self.api.post_extra_spec( self.flavor1['id'], {'extra_specs': {'resources:CUSTOM_MAGIC': 1}}) # save the extra_specs in the flavor stored in the test case as # well self.flavor1['extra_specs'] = {'resources:CUSTOM_MAGIC': 1} class ServerRescheduleTests(integrated_helpers.ProviderUsageBaseTestCase): """Tests server create scenarios which trigger a reschedule during a server build and validates that allocations in Placement are properly cleaned up. Uses a fake virt driver that fails the build on the first attempt. 
""" compute_driver = 'fake.FakeRescheduleDriver' def setUp(self): super(ServerRescheduleTests, self).setUp() self.compute1 = self._start_compute(host='host1') self.compute2 = self._start_compute(host='host2') flavors = self.api.get_flavors() self.flavor1 = flavors[0] def _other_hostname(self, host): other_host = {'host1': 'host2', 'host2': 'host1'} return other_host[host] def test_rescheduling_when_booting_instance(self): """Tests that allocations, created by the scheduler, are cleaned from the source node when the build fails on that node and is rescheduled to another node. """ server_req = self._build_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', flavor_id=self.flavor1['id'], networks='none') created_server = self.api.post_server({'server': server_req}) server = self._wait_for_state_change(created_server, 'ACTIVE') dest_hostname = server['OS-EXT-SRV-ATTR:host'] failed_hostname = self._other_hostname(dest_hostname) LOG.info('failed on %s', failed_hostname) LOG.info('booting on %s', dest_hostname) failed_rp_uuid = self._get_provider_uuid_by_host(failed_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) # Expects no allocation records on the failed host. self.assertRequestMatchesUsage( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, failed_rp_uuid) # Ensure the allocation records on the destination host. self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor1) def test_allocation_fails_during_reschedule(self): """Verify that if nova fails to allocate resources during re-schedule then the server is put into ERROR state properly. """ server_req = self._build_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', flavor_id=self.flavor1['id'], networks='none') orig_claim = utils.claim_resources # First call is during boot, we want that to succeed normally. Then the # fake virt driver triggers a re-schedule. During that re-schedule we # simulate that the placement call fails. with mock.patch('nova.scheduler.utils.claim_resources', side_effect=[ orig_claim, exception.AllocationUpdateFailed( consumer_uuid=uuids.inst1, error='testing')]): server = self.api.post_server({'server': server_req}) server = self._wait_for_state_change(server, 'ERROR') self._delete_and_check_allocations(server) class ServerRescheduleTestsWithNestedResourcesRequest(ServerRescheduleTests): compute_driver = 'fake.FakeRescheduleDriverWithNestedCustomResources' def setUp(self): super(ServerRescheduleTestsWithNestedResourcesRequest, self).setUp() # modify the flavor used in the test base class to require one piece of # CUSTOM_MAGIC resource as well. self.api.post_extra_spec( self.flavor1['id'], {'extra_specs': {'resources:CUSTOM_MAGIC': 1}}) # save the extra_specs in the flavor stored in the test case as # well self.flavor1['extra_specs'] = {'resources:CUSTOM_MAGIC': 1} class ServerBuildAbortTests(integrated_helpers.ProviderUsageBaseTestCase): """Tests server create scenarios which trigger a build abort during a server build and validates that allocations in Placement are properly cleaned up. Uses a fake virt driver that aborts the build on the first attempt. """ compute_driver = 'fake.FakeBuildAbortDriver' def setUp(self): super(ServerBuildAbortTests, self).setUp() # We only need one compute service/host/node for these tests. 
self.compute1 = self._start_compute(host='host1') flavors = self.api.get_flavors() self.flavor1 = flavors[0] def test_abort_when_booting_instance(self): """Tests that allocations, created by the scheduler, are cleaned from the source node when the build is aborted on that node. """ server_req = self._build_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', flavor_id=self.flavor1['id'], networks='none') created_server = self.api.post_server({'server': server_req}) self._wait_for_state_change(created_server, 'ERROR') failed_hostname = self.compute1.manager.host # BuildAbortException coming from the FakeBuildAbortDriver will not # trigger a reschedule and the placement cleanup is the last step in # the compute manager after instance state setting, fault recording # and notification sending. So we have no other way than to simply wait # to ensure the placement cleanup happens before we assert it. def placement_cleanup(): failed_rp_uuid = self._get_provider_uuid_by_host(failed_hostname) # Expects no allocation records on the failed host. self.assertRequestMatchesUsage({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, failed_rp_uuid) self._wait_for_assert(placement_cleanup) class ServerDeleteBuildTests(integrated_helpers.ProviderUsageBaseTestCase): """Tests server delete while the instance is still building and validates that allocations in Placement are properly cleaned up. """ compute_driver = 'fake.SmallFakeDriver' def setUp(self): super(ServerDeleteBuildTests, self).setUp() self.compute1 = self._start_compute(host='host1') flavors = self.api.get_flavors() self.flavor1 = flavors[0] def test_delete_stuck_build_instance_after_claim(self): """Test for bug 1859496 where an instance allocation can leak after deletion if the build process has been interrupted after the resource claim """ # To reproduce the issue we need to interrupt the instance spawn # when the build request has already reached the scheduler service, # so that the instance resources get claimed. # A real-world case is typically a conductor restart during # the instance claim. # To emulate a conductor restart we raise an exception in # filter_scheduler after the instance is claimed and mock # _bury_in_cell0 so that the conductor thread returns in that case. # Then we delete the server after ensuring the allocation is made and # check that there is no leak. # Note that because deletion occurs early, conductor did not populate # instance DB entries in cells, preventing the compute # update_available_resource periodic task from healing leaked allocations. server_req = self._build_server( 'interrupted-server', flavor_id=self.flavor1['id'], image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', networks='none') with test.nested( mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_ensure_sufficient_hosts'), mock.patch('nova.conductor.manager.ComputeTaskManager.' '_bury_in_cell0') ) as (mock_suff_hosts, mock_bury): mock_suff_hosts.side_effect = test.TestingException('oops') server = self.api.post_server({'server': server_req}) self._wait_for_server_allocations(server['id']) self.api.api_delete('/servers/%s' % server['id']) allocations = self._get_allocations_by_server_uuid(server['id']) self.assertEqual({}, allocations) class ServerBuildAbortTestsWithNestedResourceRequest(ServerBuildAbortTests): compute_driver = 'fake.FakeBuildAbortDriverWithNestedCustomResources' def setUp(self): super(ServerBuildAbortTestsWithNestedResourceRequest, self).setUp() # modify the flavor used in the test base class to require one piece of # CUSTOM_MAGIC resource as well.
self.api.post_extra_spec( self.flavor1['id'], {'extra_specs': {'resources:CUSTOM_MAGIC': 1}}) # save the extra_specs in the flavor stored in the test case as # well self.flavor1['extra_specs'] = {'resources:CUSTOM_MAGIC': 1} class ServerUnshelveSpawnFailTests( integrated_helpers.ProviderUsageBaseTestCase): """Tests server unshelve scenarios which trigger a VirtualInterfaceCreateException during driver.spawn() and validates that allocations in Placement are properly cleaned up. """ compute_driver = 'fake.FakeUnshelveSpawnFailDriver' def setUp(self): super(ServerUnshelveSpawnFailTests, self).setUp() # We only need one compute service/host/node for these tests. self.compute1 = self._start_compute('host1') flavors = self.api.get_flavors() self.flavor1 = flavors[0] def test_driver_spawn_fail_when_unshelving_instance(self): """Tests that allocations, created by the scheduler, are cleaned from the target node when the unshelve driver.spawn fails on that node. """ hostname = self.compute1.manager.host rp_uuid = self._get_provider_uuid_by_host(hostname) # We start with no usages on the host. self.assertRequestMatchesUsage( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, rp_uuid) server_req = self._build_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', flavor_id=self.flavor1['id'], networks='none') server = self.api.post_server({'server': server_req}) self._wait_for_state_change(server, 'ACTIVE') # assert allocations exist for the host self.assertFlavorMatchesUsage(rp_uuid, self.flavor1) # shelve offload the server self.flags(shelved_offload_time=0) self.api.post_server_action(server['id'], {'shelve': None}) self._wait_for_server_parameter(server, {'status': 'SHELVED_OFFLOADED', 'OS-EXT-SRV-ATTR:host': None}) # assert allocations were removed from the host self.assertRequestMatchesUsage( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, rp_uuid) # unshelve the server, which should fail self.api.post_server_action(server['id'], {'unshelve': None}) self._wait_for_action_fail_completion( server, instance_actions.UNSHELVE, 'compute_unshelve_instance') # assert allocations were removed from the host self.assertRequestMatchesUsage( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, rp_uuid) class ServerUnshelveSpawnFailTestsWithNestedResourceRequest( ServerUnshelveSpawnFailTests): compute_driver = ('fake.' 'FakeUnshelveSpawnFailDriverWithNestedCustomResources') def setUp(self): super(ServerUnshelveSpawnFailTestsWithNestedResourceRequest, self).setUp() # modify the flavor used in the test base class to require one piece of # CUSTOM_MAGIC resource as well. self.api.post_extra_spec( self.flavor1['id'], {'extra_specs': {'resources:CUSTOM_MAGIC': 1}}) # save the extra_specs in the flavor stored in the test case as # well self.flavor1['extra_specs'] = {'resources:CUSTOM_MAGIC': 1} class ServerSoftDeleteTests(integrated_helpers.ProviderUsageBaseTestCase): compute_driver = 'fake.SmallFakeDriver' def setUp(self): super(ServerSoftDeleteTests, self).setUp() # We only need one compute service/host/node for these tests. 
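# Soft delete and restore never move the instance, so a single host is
# sufficient for these scenarios.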
self.compute1 = self._start_compute('host1') flavors = self.api.get_flavors() self.flavor1 = flavors[0] def _soft_delete_and_check_allocation(self, server, hostname): self.api.delete_server(server['id']) server = self._wait_for_state_change(server, 'SOFT_DELETED') self._run_periodics() # in soft delete state nova should keep the resource allocation as # the instance can be restored rp_uuid = self._get_provider_uuid_by_host(hostname) self.assertFlavorMatchesUsage(rp_uuid, self.flavor1) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], rp_uuid) # run the periodic reclaim but as time isn't advanced it should not # reclaim the instance ctxt = context.get_admin_context() self.compute1._reclaim_queued_deletes(ctxt) self._run_periodics() self.assertFlavorMatchesUsage(rp_uuid, self.flavor1) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], rp_uuid) def test_soft_delete_then_reclaim(self): """Asserts that the automatic reclaim of soft deleted instance cleans up the allocations in placement. """ # make sure that instance will go to SOFT_DELETED state instead of # deleted immediately self.flags(reclaim_instance_interval=30) hostname = self.compute1.host rp_uuid = self._get_provider_uuid_by_host(hostname) server = self._boot_and_check_allocations(self.flavor1, hostname) self._soft_delete_and_check_allocation(server, hostname) # advance the time and run periodic reclaim, instance should be deleted # and resources should be freed the_past = timeutils.utcnow() + datetime.timedelta(hours=1) timeutils.set_time_override(override_time=the_past) self.addCleanup(timeutils.clear_time_override) ctxt = context.get_admin_context() self.compute1._reclaim_queued_deletes(ctxt) # Wait for real deletion self._wait_until_deleted(server) usages = self._get_provider_usages(rp_uuid) self.assertEqual({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, usages) allocations = self._get_allocations_by_server_uuid(server['id']) self.assertEqual(0, len(allocations)) def test_soft_delete_then_restore(self): """Asserts that restoring a soft deleted instance keeps the proper allocation in placement. """ # make sure that instance will go to SOFT_DELETED state instead of # deleted immediately self.flags(reclaim_instance_interval=30) hostname = self.compute1.host rp_uuid = self._get_provider_uuid_by_host(hostname) server = self._boot_and_check_allocations( self.flavor1, hostname) self._soft_delete_and_check_allocation(server, hostname) post = {'restore': {}} self.api.post_server_action(server['id'], post) server = self._wait_for_state_change(server, 'ACTIVE') # after restore the allocations should be kept self.assertFlavorMatchesUsage(rp_uuid, self.flavor1) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], rp_uuid) # Now we want a real delete self.flags(reclaim_instance_interval=0) self._delete_and_check_allocations(server) class ServerSoftDeleteTestsWithNestedResourceRequest(ServerSoftDeleteTests): compute_driver = 'fake.MediumFakeDriverWithNestedCustomResources' def setUp(self): super(ServerSoftDeleteTestsWithNestedResourceRequest, self).setUp() # modify the flavor used in the test base class to require one piece of # CUSTOM_MAGIC resource as well. 
self.api.post_extra_spec( self.flavor1['id'], {'extra_specs': {'resources:CUSTOM_MAGIC': 1}}) # save the extra_specs in the flavor stored in the test case as # well self.flavor1['extra_specs'] = {'resources:CUSTOM_MAGIC': 1} class VolumeBackedServerTest(integrated_helpers.ProviderUsageBaseTestCase): """Tests for volume-backed servers.""" compute_driver = 'fake.SmallFakeDriver' def setUp(self): super(VolumeBackedServerTest, self).setUp() self.compute1 = self._start_compute('host1') self.flavor_id = self._create_flavor( disk=10, ephemeral=20, swap=5 * 1024) def _create_server(self): server_req = self._build_server( flavor_id=self.flavor_id, networks='none') server = self.api.post_server({'server': server_req}) server = self._wait_for_state_change(server, 'ACTIVE') return server def _create_volume_backed_server(self): self.useFixture(nova_fixtures.CinderFixture(self)) volume_id = nova_fixtures.CinderFixture.IMAGE_BACKED_VOL server_req_body = { # There is no imageRef because this is boot from volume. 'server': { 'flavorRef': self.flavor_id, 'name': 'test_volume_backed', # We don't care about networking for this test. This # requires microversion >= 2.37. 'networks': 'none', 'block_device_mapping_v2': [{ 'boot_index': 0, 'uuid': volume_id, 'source_type': 'volume', 'destination_type': 'volume' }] } } server = self.api.post_server(server_req_body) server = self._wait_for_state_change(server, 'ACTIVE') return server def test_ephemeral_has_disk_allocation(self): server = self._create_server() allocs = self._get_allocations_by_server_uuid(server['id']) resources = list(allocs.values())[0]['resources'] self.assertIn('MEMORY_MB', resources) # 10gb root, 20gb ephemeral, 5gb swap expected_usage = 35 self.assertEqual(expected_usage, resources['DISK_GB']) # Ensure the compute node is reporting the correct disk usage self.assertEqual( expected_usage, self.admin_api.get_hypervisor_stats()['local_gb_used']) def test_volume_backed_image_type_filter(self): # Enable the image type support filter and ensure that a # non-image-having volume-backed server can still boot self.flags(query_placement_for_image_type_support=True, group='scheduler') server = self._create_volume_backed_server() created_server = self.api.get_server(server['id']) self.assertEqual('ACTIVE', created_server['status']) def test_volume_backed_no_disk_allocation(self): server = self._create_volume_backed_server() allocs = self._get_allocations_by_server_uuid(server['id']) resources = list(allocs.values())[0]['resources'] self.assertIn('MEMORY_MB', resources) # 0gb root, 20gb ephemeral, 5gb swap expected_usage = 25 self.assertEqual(expected_usage, resources['DISK_GB']) # Ensure the compute node is reporting the correct disk usage self.assertEqual( expected_usage, self.admin_api.get_hypervisor_stats()['local_gb_used']) # Now let's hack the RequestSpec.is_bfv field to mimic migrating an # old instance created before RequestSpec.is_bfv was set in the API, # move the instance and verify that the RequestSpec.is_bfv is set # and the instance still reports the same DISK_GB allocations as during # the initial create. ctxt = context.get_admin_context() reqspec = objects.RequestSpec.get_by_instance_uuid(ctxt, server['id']) # Make sure it's set. self.assertTrue(reqspec.is_bfv) del reqspec.is_bfv reqspec.save() reqspec = objects.RequestSpec.get_by_instance_uuid(ctxt, server['id']) # Make sure it's not set. self.assertNotIn('is_bfv', reqspec) # Now migrate the instance to another host and check the request spec # and allocations after the migration. 
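# The cold migration below is expected to re-populate is_bfv on the
# request spec; the assertions after the resize is confirmed check both
# the flag and that the root disk still does not count toward DISK_GB.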
self._start_compute('host2') self.admin_api.post_server_action(server['id'], {'migrate': None}) # Wait for the server to complete the cold migration. server = self._wait_for_state_change(server, 'VERIFY_RESIZE') self.assertEqual('host2', server['OS-EXT-SRV-ATTR:host']) # Confirm the cold migration and check usage and the request spec. self._confirm_resize(server) reqspec = objects.RequestSpec.get_by_instance_uuid(ctxt, server['id']) # Make sure it's set. self.assertTrue(reqspec.is_bfv) allocs = self._get_allocations_by_server_uuid(server['id']) resources = list(allocs.values())[0]['resources'] self.assertEqual(expected_usage, resources['DISK_GB']) # Now shelve and unshelve the server to make sure root_gb DISK_GB # isn't reported for allocations after we unshelve the server. fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) self.api.post_server_action(server['id'], {'shelve': None}) self._wait_for_state_change(server, 'SHELVED_OFFLOADED') fake_notifier.wait_for_versioned_notifications( 'instance.shelve_offload.end') # The server should not have any allocations since it's not currently # hosted on any compute service. allocs = self._get_allocations_by_server_uuid(server['id']) self.assertDictEqual({}, allocs) # Now unshelve the server and make sure there are still no DISK_GB # allocations for the root disk. self.api.post_server_action(server['id'], {'unshelve': None}) self._wait_for_state_change(server, 'ACTIVE') allocs = self._get_allocations_by_server_uuid(server['id']) resources = list(allocs.values())[0]['resources'] self.assertEqual(expected_usage, resources['DISK_GB']) class TraitsBasedSchedulingTest(integrated_helpers.ProviderUsageBaseTestCase): """Tests for requesting a server with required traits in Placement""" compute_driver = 'fake.SmallFakeDriver' def setUp(self): super(TraitsBasedSchedulingTest, self).setUp() self.compute1 = self._start_compute('host1') self.compute2 = self._start_compute('host2') # Using a standard trait from the os-traits library, set a required # trait extra spec on the flavor. flavors = self.api.get_flavors() self.flavor_with_trait = flavors[0] self.admin_api.post_extra_spec( self.flavor_with_trait['id'], {'extra_specs': {'trait:HW_CPU_X86_VMX': 'required'}}) self.flavor_without_trait = flavors[1] self.flavor_with_forbidden_trait = flavors[2] self.admin_api.post_extra_spec( self.flavor_with_forbidden_trait['id'], {'extra_specs': {'trait:HW_CPU_X86_SGX': 'forbidden'}}) # Note that we're using v2.35 explicitly as the api returns 404 # starting with 2.36 with nova_utils.temporary_mutation(self.api, microversion='2.35'): images = self.api.get_images() self.image_id_with_trait = images[0]['id'] self.api.api_put('/images/%s/metadata' % self.image_id_with_trait, {'metadata': { 'trait:HW_CPU_X86_SGX': 'required'}}) self.image_id_without_trait = images[1]['id'] def _create_server_with_traits(self, flavor_id, image_id): """Create a server with given flavor and image id's :param flavor_id: the flavor id :param image_id: the image id :return: create server response """ server_req = self._build_server( image_uuid=image_id, flavor_id=flavor_id, networks='none') return self.api.post_server({'server': server_req}) def _create_volume_backed_server_with_traits(self, flavor_id, volume_id): """Create a server with block device mapping(volume) with the given flavor and volume id's. 
Either the flavor or the image backing the volume is expected to have the traits :param flavor_id: the flavor id :param volume_id: the volume id :return: create server response """ server_req_body = { # There is no imageRef because this is boot from volume. 'server': { 'flavorRef': flavor_id, 'name': 'test_image_trait_on_volume_backed', # We don't care about networking for this test. This # requires microversion >= 2.37. 'networks': 'none', 'block_device_mapping_v2': [{ 'boot_index': 0, 'uuid': volume_id, 'source_type': 'volume', 'destination_type': 'volume' }] } } server = self.api.post_server(server_req_body) return server def test_flavor_traits_based_scheduling(self): """Tests that a server create request using a required trait in the flavor ends up on the single compute node resource provider that also has that trait in Placement. That test will however pass half of the times even if the trait is not taken into consideration, so we are also disabling the compute node that has the required trait and try again, which should result in a no valid host error. """ # Decorate compute1 resource provider with the required trait. rp_uuid = self._get_provider_uuid_by_host(self.compute1.host) self._set_provider_traits(rp_uuid, ['HW_CPU_X86_VMX']) # Create server using flavor with required trait server = self._create_server_with_traits(self.flavor_with_trait['id'], self.image_id_without_trait) server = self._wait_for_state_change(server, 'ACTIVE') # Assert the server ended up on the expected compute host that has # the required trait. self.assertEqual(self.compute1.host, server['OS-EXT-SRV-ATTR:host']) # Disable the compute node that has the required trait compute1_service_id = self.admin_api.get_services( host=self.compute1.host, binary='nova-compute')[0]['id'] self.admin_api.put_service(compute1_service_id, {'status': 'disabled'}) # Create server using flavor with required trait server = self._create_server_with_traits(self.flavor_with_trait['id'], self.image_id_without_trait) # The server should go to ERROR state because there is no valid host. server = self._wait_for_state_change(server, 'ERROR') self.assertIsNone(server['OS-EXT-SRV-ATTR:host']) # Make sure the failure was due to NoValidHost by checking the fault. self.assertIn('fault', server) self.assertIn('No valid host', server['fault']['message']) def test_flavor_forbidden_traits_based_scheduling(self): """Tests that a server create request using a forbidden trait in the flavor ends up on the single compute host that doesn't have that trait in Placement. That test will however pass half of the times even if the trait is not taken into consideration, so we are also disabling the compute node that doesn't have the forbidden trait and try again, which should result in a no valid host error. """ # Decorate compute1 resource provider with forbidden trait rp_uuid = self._get_provider_uuid_by_host(self.compute1.host) self._set_provider_traits(rp_uuid, ['HW_CPU_X86_SGX']) # Create server using flavor with forbidden trait server = self._create_server_with_traits( self.flavor_with_forbidden_trait['id'], self.image_id_without_trait ) server = self._wait_for_state_change(server, 'ACTIVE') # Assert the server ended up on the expected compute host that doesn't # have the forbidden trait. 
self.assertEqual(self.compute2.host, server['OS-EXT-SRV-ATTR:host']) # Disable the compute node that doesn't have the forbidden trait compute2_service_id = self.admin_api.get_services( host=self.compute2.host, binary='nova-compute')[0]['id'] self.admin_api.put_service(compute2_service_id, {'status': 'disabled'}) # Create server using flavor with forbidden trait server = self._create_server_with_traits( self.flavor_with_forbidden_trait['id'], self.image_id_without_trait ) # The server should go to ERROR state because there is no valid host. server = self._wait_for_state_change(server, 'ERROR') self.assertIsNone(server['OS-EXT-SRV-ATTR:host']) # Make sure the failure was due to NoValidHost by checking the fault. self.assertIn('fault', server) self.assertIn('No valid host', server['fault']['message']) def test_image_traits_based_scheduling(self): """Tests that a server create request using a required trait on image ends up on the single compute node resource provider that also has that trait in Placement. """ # Decorate compute2 resource provider with image trait. rp_uuid = self._get_provider_uuid_by_host(self.compute2.host) self._set_provider_traits(rp_uuid, ['HW_CPU_X86_SGX']) # Create server using only image trait server = self._create_server_with_traits( self.flavor_without_trait['id'], self.image_id_with_trait) server = self._wait_for_state_change(server, 'ACTIVE') # Assert the server ended up on the expected compute host that has # the required trait. self.assertEqual(self.compute2.host, server['OS-EXT-SRV-ATTR:host']) def test_flavor_image_traits_based_scheduling(self): """Tests that a server create request using a required trait on flavor AND a required trait on the image ends up on the single compute node resource provider that also has that trait in Placement. """ # Decorate compute2 resource provider with both flavor and image trait. rp_uuid = self._get_provider_uuid_by_host(self.compute2.host) self._set_provider_traits(rp_uuid, ['HW_CPU_X86_VMX', 'HW_CPU_X86_SGX']) # Create server using flavor and image trait server = self._create_server_with_traits( self.flavor_with_trait['id'], self.image_id_with_trait) server = self._wait_for_state_change(server, 'ACTIVE') # Assert the server ended up on the expected compute host that has # the required trait. self.assertEqual(self.compute2.host, server['OS-EXT-SRV-ATTR:host']) def test_image_trait_on_volume_backed_instance(self): """Tests that when trying to launch a volume-backed instance with a required trait on the image metadata contained within the volume, the instance ends up on the single compute node resource provider that also has that trait in Placement. """ # Decorate compute2 resource provider with volume image metadata trait. rp_uuid = self._get_provider_uuid_by_host(self.compute2.host) self._set_provider_traits(rp_uuid, ['HW_CPU_X86_SGX']) self.useFixture(nova_fixtures.CinderFixture(self)) # Create our server with a volume containing the image meta data with a # required trait server = self._create_volume_backed_server_with_traits( self.flavor_without_trait['id'], nova_fixtures.CinderFixture. IMAGE_WITH_TRAITS_BACKED_VOL) server = self._wait_for_state_change(server, 'ACTIVE') # Assert the server ended up on the expected compute host that has # the required trait. 
self.assertEqual(self.compute2.host, server['OS-EXT-SRV-ATTR:host']) def test_flavor_image_trait_on_volume_backed_instance(self): """Tests that when trying to launch a volume-backed instance with a required trait on flavor AND a required trait on the image metadata contained within the volume, the instance ends up on the single compute node resource provider that also has those traits in Placement. """ # Decorate compute2 resource provider with volume image metadata trait. rp_uuid = self._get_provider_uuid_by_host(self.compute2.host) self._set_provider_traits(rp_uuid, ['HW_CPU_X86_VMX', 'HW_CPU_X86_SGX']) self.useFixture(nova_fixtures.CinderFixture(self)) # Create our server with a flavor trait and a volume containing the # image meta data with a required trait server = self._create_volume_backed_server_with_traits( self.flavor_with_trait['id'], nova_fixtures.CinderFixture. IMAGE_WITH_TRAITS_BACKED_VOL) server = self._wait_for_state_change(server, 'ACTIVE') # Assert the server ended up on the expected compute host that has # the required trait. self.assertEqual(self.compute2.host, server['OS-EXT-SRV-ATTR:host']) def test_flavor_traits_based_scheduling_no_valid_host(self): """Tests that a server create request using a required trait expressed in flavor fails to find a valid host since no compute node resource providers have the trait. """ # Decorate compute1 resource provider with the image trait. rp_uuid = self._get_provider_uuid_by_host(self.compute1.host) self._set_provider_traits(rp_uuid, ['HW_CPU_X86_SGX']) server = self._create_server_with_traits(self.flavor_with_trait['id'], self.image_id_without_trait) # The server should go to ERROR state because there is no valid host. server = self._wait_for_state_change(server, 'ERROR') self.assertIsNone(server['OS-EXT-SRV-ATTR:host']) # Make sure the failure was due to NoValidHost by checking the fault. self.assertIn('fault', server) self.assertIn('No valid host', server['fault']['message']) def test_image_traits_based_scheduling_no_valid_host(self): """Tests that a server create request using a required trait expressed in image fails to find a valid host since no compute node resource providers have the trait. """ # Decorate compute1 resource provider with that flavor trait. rp_uuid = self._get_provider_uuid_by_host(self.compute1.host) self._set_provider_traits(rp_uuid, ['HW_CPU_X86_VMX']) server = self._create_server_with_traits( self.flavor_without_trait['id'], self.image_id_with_trait) # The server should go to ERROR state because there is no valid host. server = self._wait_for_state_change(server, 'ERROR') self.assertIsNone(server['OS-EXT-SRV-ATTR:host']) # Make sure the failure was due to NoValidHost by checking the fault. self.assertIn('fault', server) self.assertIn('No valid host', server['fault']['message']) def test_flavor_image_traits_based_scheduling_no_valid_host(self): """Tests that a server create request using a required trait expressed in flavor AND a required trait expressed in the image fails to find a valid host since no compute node resource providers have the trait. """ server = self._create_server_with_traits( self.flavor_with_trait['id'], self.image_id_with_trait) # The server should go to ERROR state because there is no valid host. server = self._wait_for_state_change(server, 'ERROR') self.assertIsNone(server['OS-EXT-SRV-ATTR:host']) # Make sure the failure was due to NoValidHost by checking the fault. 
self.assertIn('fault', server) self.assertIn('No valid host', server['fault']['message']) def test_image_trait_on_volume_backed_instance_no_valid_host(self): """Tests that trying to launch a volume-backed instance with a required trait on the image metadata contained within the volume fails to find a valid host since no compute node resource providers have the trait. """ self.useFixture(nova_fixtures.CinderFixture(self)) # Create our server with a volume server = self._create_volume_backed_server_with_traits( self.flavor_without_trait['id'], nova_fixtures.CinderFixture. IMAGE_WITH_TRAITS_BACKED_VOL) # The server should go to ERROR state because there is no valid host. server = self._wait_for_state_change(server, 'ERROR') self.assertIsNone(server['OS-EXT-SRV-ATTR:host']) # Make sure the failure was due to NoValidHost by checking the fault. self.assertIn('fault', server) self.assertIn('No valid host', server['fault']['message']) def test_rebuild_instance_with_image_traits(self): """Rebuilds a server with a different image which has traits associated with it and which will run it through the scheduler to validate the image is still OK with the compute host that the instance is running on. """ # Decorate compute2 resource provider with both flavor and image trait. rp_uuid = self._get_provider_uuid_by_host(self.compute2.host) self._set_provider_traits(rp_uuid, ['HW_CPU_X86_VMX', 'HW_CPU_X86_SGX']) # make sure we start with no usage on the compute node rp_usages = self._get_provider_usages(rp_uuid) self.assertEqual({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, rp_usages) # create a server without traits on image and with traits on flavour server = self._create_server_with_traits( self.flavor_with_trait['id'], self.image_id_without_trait) server = self._wait_for_state_change(server, 'ACTIVE') # make the compute node full and ensure the rebuild still succeeds inv = {"resource_class": "VCPU", "total": 1} self._set_inventory(rp_uuid, inv) # Now rebuild the server with a different image with traits rebuild_req_body = { 'rebuild': { 'imageRef': self.image_id_with_trait } } self.api.api_post('/servers/%s/action' % server['id'], rebuild_req_body) self._wait_for_server_parameter( server, {'OS-EXT-STS:task_state': None}) allocs = self._get_allocations_by_server_uuid(server['id']) self.assertIn(rp_uuid, allocs) # Assert the server ended up on the expected compute host that has # the required trait. self.assertEqual(self.compute2.host, server['OS-EXT-SRV-ATTR:host']) def test_rebuild_instance_with_image_traits_no_host(self): """Rebuilding a server with a different image which has required traits on the image fails to validate the host that this server is currently running on, because the compute host resource provider is not associated with a similar trait.
""" # Decorate compute2 resource provider with traits on flavor rp_uuid = self._get_provider_uuid_by_host(self.compute2.host) self._set_provider_traits(rp_uuid, ['HW_CPU_X86_VMX']) # make sure we start with no usage on the compute node rp_usages = self._get_provider_usages(rp_uuid) self.assertEqual({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, rp_usages) # create a server without traits on image and with traits on flavour server = self._create_server_with_traits( self.flavor_with_trait['id'], self.image_id_without_trait) server = self._wait_for_state_change(server, 'ACTIVE') # Now rebuild the server with a different image with traits rebuild_req_body = { 'rebuild': { 'imageRef': self.image_id_with_trait } } self.api.api_post('/servers/%s/action' % server['id'], rebuild_req_body) # Look for the failed rebuild action. self._wait_for_action_fail_completion( server, instance_actions.REBUILD, 'rebuild_server') # Assert the server image_ref was rolled back on failure. server = self.api.get_server(server['id']) self.assertEqual(self.image_id_without_trait, server['image']['id']) # The server should be in ERROR state self.assertEqual('ERROR', server['status']) self.assertEqual("No valid host was found. Image traits cannot be " "satisfied by the current resource providers. " "Either specify a different image during rebuild " "or create a new server with the specified image.", server['fault']['message']) def test_rebuild_instance_with_image_traits_no_image_change(self): """Rebuilds a server with a same image which has traits associated with it and which will run it through the scheduler to validate the image is still OK with the compute host that the instance is running on. """ # Decorate compute2 resource provider with both flavor and image trait. rp_uuid = self._get_provider_uuid_by_host(self.compute2.host) self._set_provider_traits(rp_uuid, ['HW_CPU_X86_VMX', 'HW_CPU_X86_SGX']) # make sure we start with no usage on the compute node rp_usages = self._get_provider_usages(rp_uuid) self.assertEqual({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, rp_usages) # create a server with traits in both image and flavour server = self._create_server_with_traits( self.flavor_with_trait['id'], self.image_id_with_trait) server = self._wait_for_state_change(server, 'ACTIVE') # Now rebuild the server with a different image with traits rebuild_req_body = { 'rebuild': { 'imageRef': self.image_id_with_trait } } self.api.api_post('/servers/%s/action' % server['id'], rebuild_req_body) self._wait_for_server_parameter( server, {'OS-EXT-STS:task_state': None}) allocs = self._get_allocations_by_server_uuid(server['id']) self.assertIn(rp_uuid, allocs) # Assert the server ended up on the expected compute host that has # the required trait. 
self.assertEqual(self.compute2.host, server['OS-EXT-SRV-ATTR:host']) def test_rebuild_instance_with_image_traits_and_forbidden_flavor_traits( self): """Rebuilding a server with a different image which has required traits on the image fails to validate the image traits because the flavor associated with the current instance has the same trait marked as forbidden """ # Decorate compute2 resource provider with traits on flavor rp_uuid = self._get_provider_uuid_by_host(self.compute2.host) self._set_provider_traits(rp_uuid, ['HW_CPU_X86_VMX']) # make sure we start with no usage on the compute node rp_usages = self._get_provider_usages(rp_uuid) self.assertEqual({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, rp_usages) # create a server with forbidden traits on flavor and no traits on # image server = self._create_server_with_traits( self.flavor_with_forbidden_trait['id'], self.image_id_without_trait) server = self._wait_for_state_change(server, 'ACTIVE') # Now rebuild the server with a different image with traits rebuild_req_body = { 'rebuild': { 'imageRef': self.image_id_with_trait } } self.api.api_post('/servers/%s/action' % server['id'], rebuild_req_body) # Look for the failed rebuild action. self._wait_for_action_fail_completion( server, instance_actions.REBUILD, 'rebuild_server') # Assert the server image_ref was rolled back on failure. server = self.api.get_server(server['id']) self.assertEqual(self.image_id_without_trait, server['image']['id']) # The server should be in ERROR state self.assertEqual('ERROR', server['status']) self.assertEqual("No valid host was found. Image traits are part of " "forbidden traits in flavor associated with the " "server. Either specify a different image during " "rebuild or create a new server with the specified " "image and a compatible flavor.", server['fault']['message']) class ServerTestV256Common(ServersTestBase): api_major_version = 'v2.1' microversion = '2.56' ADMIN_API = True def _setup_compute_service(self): # Set up 3 compute services in the same cell for host in ('host1', 'host2', 'host3'): self.start_service('compute', host=host) def _create_server(self, target_host=None): server = self._build_server( image_uuid='a2459075-d96c-40d5-893e-577ff92e721c') server.update({'networks': 'auto'}) if target_host is not None: server['availability_zone'] = 'nova:%s' % target_host post = {'server': server} response = self.api.api_post('/servers', post).body return response['server'] @staticmethod def _get_target_and_other_hosts(host): target_other_hosts = {'host1': ['host2', 'host3'], 'host2': ['host3', 'host1'], 'host3': ['host1', 'host2']} return target_other_hosts[host] class ServerTestV256MultiCellTestCase(ServerTestV256Common): """Negative test to ensure we fail with ComputeHostNotFound if we try to target a host in another cell from where the instance lives. """ NUMBER_OF_CELLS = 2 def _setup_compute_service(self): # Set up 2 compute services in different cells host_to_cell_mappings = { 'host1': 'cell1', 'host2': 'cell2'} for host in sorted(host_to_cell_mappings): self.start_service('compute', host=host, cell_name=host_to_cell_mappings[host]) def test_migrate_server_to_host_in_different_cell(self): # We target host1 specifically so that we have a predictable target for # the cold migration in cell2.
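# _create_server() above turns target_host into an availability_zone of
# 'nova:host1', which pins the build to host1 in cell1 so the migrate
# request below has to cross a cell boundary.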
server = self._create_server(target_host='host1') server = self._wait_for_state_change(server, 'ACTIVE') self.assertEqual('host1', server['OS-EXT-SRV-ATTR:host']) ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_action, server['id'], {'migrate': {'host': 'host2'}}) # When the API pulls the instance out of cell1, the context is targeted # to cell1, so when the compute API resize() method attempts to lookup # the target host in cell1, it will result in a ComputeHostNotFound # error. self.assertEqual(400, ex.response.status_code) self.assertIn('Compute host host2 could not be found', six.text_type(ex)) class ServerTestV256SingleCellMultiHostTestCase(ServerTestV256Common): """Happy path test where we create a server on one host, migrate it to another host of our choosing and ensure it lands there. """ def test_migrate_server_to_host_in_same_cell(self): server = self._create_server() server = self._wait_for_state_change(server, 'ACTIVE') source_host = server['OS-EXT-SRV-ATTR:host'] target_host = self._get_target_and_other_hosts(source_host)[0] self.api.post_server_action(server['id'], {'migrate': {'host': target_host}}) # Assert the server is now on the target host. server = self.api.get_server(server['id']) self.assertEqual(target_host, server['OS-EXT-SRV-ATTR:host']) class ServerTestV256RescheduleTestCase(ServerTestV256Common): @mock.patch.object(compute_manager.ComputeManager, '_prep_resize', side_effect=exception.MigrationError( reason='Test Exception')) def test_migrate_server_not_reschedule(self, mock_prep_resize): server = self._create_server() found_server = self._wait_for_state_change(server, 'ACTIVE') target_host, other_host = self._get_target_and_other_hosts( found_server['OS-EXT-SRV-ATTR:host']) self.assertRaises(client.OpenStackApiException, self.api.post_server_action, server['id'], {'migrate': {'host': target_host}}) self.assertEqual(1, mock_prep_resize.call_count) found_server = self.api.get_server(server['id']) # Check that rescheduling is not occurred. self.assertNotEqual(other_host, found_server['OS-EXT-SRV-ATTR:host']) class ConsumerGenerationConflictTest( integrated_helpers.ProviderUsageBaseTestCase): # we need the medium driver to be able to allocate resource not just for # a single instance compute_driver = 'fake.MediumFakeDriver' def setUp(self): super(ConsumerGenerationConflictTest, self).setUp() flavors = self.api.get_flavors() self.flavor = flavors[0] self.other_flavor = flavors[1] self.compute1 = self._start_compute('compute1') self.compute2 = self._start_compute('compute2') def test_create_server_fails_as_placement_reports_consumer_conflict(self): server_req = self._build_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', flavor_id=self.flavor['id'], networks='none') # We cannot pre-create a consumer with the uuid of the instance created # below as that uuid is generated. Instead we have to simulate that # Placement returns 409, consumer generation conflict for the PUT # /allocation request the scheduler does for the instance. with mock.patch('keystoneauth1.adapter.Adapter.put') as mock_put: rsp = fake_requests.FakeResponse( 409, jsonutils.dumps( {'errors': [ {'code': 'placement.concurrent_update', 'detail': 'consumer generation conflict'}]})) mock_put.return_value = rsp created_server = self.api.post_server({'server': server_req}) server = self._wait_for_state_change(created_server, 'ERROR') # This is not a conflict that the API user can ever resolve. 
It is a # serious inconsistency in our database or a bug in the scheduler code # doing the claim. self.assertEqual(500, server['fault']['code']) self.assertIn('Failed to update allocations for consumer', server['fault']['message']) allocations = self._get_allocations_by_server_uuid(server['id']) self.assertEqual(0, len(allocations)) self._delete_and_check_allocations(server) def test_migrate_claim_on_dest_fails(self): source_hostname = self.compute1.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) server = self._boot_and_check_allocations(self.flavor, source_hostname) # We have to simulate that Placement returns 409, consumer generation # conflict for the PUT /allocation request the scheduler does on the # destination host for the instance. with mock.patch('keystoneauth1.adapter.Adapter.put') as mock_put: rsp = fake_requests.FakeResponse( 409, jsonutils.dumps( {'errors': [ {'code': 'placement.concurrent_update', 'detail': 'consumer generation conflict'}]})) mock_put.return_value = rsp request = {'migrate': None} self.api.post_server_action(server['id'], request, check_response_status=[202]) self._wait_for_server_parameter(server, {'OS-EXT-STS:task_state': None}) # The instance action should have failed with details. # save_and_reraise_exception gets different results between py2 and py3 # for the traceback but we want to use the more specific # "claim_resources" for py3. We can remove this when we drop support # for py2. error_in_tb = 'claim_resources' if six.PY3 else 'select_destinations' self._assert_resize_migrate_action_fail( server, instance_actions.MIGRATE, error_in_tb) # The migration is aborted so the instance is ACTIVE on the source # host instead of being in VERIFY_RESIZE state. server = self.api.get_server(server['id']) self.assertEqual('ACTIVE', server['status']) self.assertEqual(source_hostname, server['OS-EXT-SRV-ATTR:host']) self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor) self.assertFlavorMatchesAllocation(self.flavor, server['id'], source_rp_uuid) self._delete_and_check_allocations(server) def test_migrate_move_allocation_fails_due_to_conflict(self): source_hostname = self.compute1.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) server = self._boot_and_check_allocations(self.flavor, source_hostname) rsp = fake_requests.FakeResponse( 409, jsonutils.dumps( {'errors': [ {'code': 'placement.concurrent_update', 'detail': 'consumer generation conflict'}]})) with mock.patch('keystoneauth1.adapter.Adapter.post', autospec=True) as mock_post: mock_post.return_value = rsp request = {'migrate': None} self.api.post_server_action(server['id'], request, check_response_status=[202]) self._wait_for_server_parameter(server, {'OS-EXT-STS:task_state': None}) self._assert_resize_migrate_action_fail( server, instance_actions.MIGRATE, 'move_allocations') self.assertEqual(1, mock_post.call_count) migrations = self.api.get_migrations() self.assertEqual(1, len(migrations)) self.assertEqual('migration', migrations[0]['migration_type']) self.assertEqual(server['id'], migrations[0]['instance_uuid']) self.assertEqual(source_hostname, migrations[0]['source_compute']) self.assertEqual('error', migrations[0]['status']) # The migration is aborted so the instance is ACTIVE on the source # host instead of being in VERIFY_RESIZE state. 
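# The usage checks below verify that the instance still holds its
# allocation against the source provider, i.e. nothing was moved or
# leaked for the instance itself.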
server = self.api.get_server(server['id']) self.assertEqual('ACTIVE', server['status']) self.assertEqual(source_hostname, server['OS-EXT-SRV-ATTR:host']) self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor) self.assertFlavorMatchesAllocation(self.flavor, server['id'], source_rp_uuid) self._delete_and_check_allocations(server) def test_confirm_migrate_delete_alloc_on_source_fails(self): source_hostname = self.compute1.host dest_hostname = self.compute2.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) server = self._boot_and_check_allocations(self.flavor, source_hostname) self._migrate_and_check_allocations( server, self.flavor, source_rp_uuid, dest_rp_uuid) rsp = fake_requests.FakeResponse( 409, jsonutils.dumps( {'errors': [ {'code': 'placement.concurrent_update', 'detail': 'consumer generation conflict'}]})) with mock.patch('keystoneauth1.adapter.Adapter.put', autospec=True) as mock_put: mock_put.return_value = rsp post = {'confirmResize': None} self.api.post_server_action( server['id'], post, check_response_status=[204]) server = self._wait_for_state_change(server, 'ERROR') self.assertIn('Failed to delete allocations', server['fault']['message']) self.assertEqual(1, mock_put.call_count) migrations = self.api.get_migrations() self.assertEqual(1, len(migrations)) self.assertEqual('migration', migrations[0]['migration_type']) self.assertEqual(server['id'], migrations[0]['instance_uuid']) self.assertEqual(source_hostname, migrations[0]['source_compute']) self.assertEqual('error', migrations[0]['status']) # NOTE(gibi): Nova leaks the allocation held by the migration_uuid even # after the instance is deleted. At least nova logs a fat ERROR. self.assertIn('Deleting allocation in placement for migration %s ' 'failed. The instance %s will be put to ERROR state but ' 'the allocation held by the migration is leaked.' % (migrations[0]['uuid'], server['id']), self.stdlog.logger.output) self._delete_server(server) fake_notifier.wait_for_versioned_notifications('instance.delete.end') allocations = self._get_allocations_by_server_uuid( migrations[0]['uuid']) self.assertEqual(1, len(allocations)) def test_revert_migrate_delete_dest_allocation_fails_due_to_conflict(self): source_hostname = self.compute1.host dest_hostname = self.compute2.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) server = self._boot_and_check_allocations(self.flavor, source_hostname) self._migrate_and_check_allocations( server, self.flavor, source_rp_uuid, dest_rp_uuid) rsp = fake_requests.FakeResponse( 409, jsonutils.dumps( {'errors': [ {'code': 'placement.concurrent_update', 'detail': 'consumer generation conflict'}]})) with mock.patch('keystoneauth1.adapter.Adapter.post', autospec=True) as mock_post: mock_post.return_value = rsp post = {'revertResize': None} self.api.post_server_action(server['id'], post) server = self._wait_for_state_change(server, 'ERROR') self.assertEqual(1, mock_post.call_count) migrations = self.api.get_migrations() self.assertEqual(1, len(migrations)) self.assertEqual('migration', migrations[0]['migration_type']) self.assertEqual(server['id'], migrations[0]['instance_uuid']) self.assertEqual(source_hostname, migrations[0]['source_compute']) self.assertEqual('error', migrations[0]['status']) # NOTE(gibi): Nova leaks the allocation held by the migration_uuid even # after the instance is deleted. At least nova logs a fat ERROR. 
self.assertIn('Reverting allocation in placement for migration %s ' 'failed. The instance %s will be put into ERROR state ' 'but the allocation held by the migration is leaked.' % (migrations[0]['uuid'], server['id']), self.stdlog.logger.output) self._delete_server(server) fake_notifier.wait_for_versioned_notifications('instance.delete.end') allocations = self._get_allocations_by_server_uuid( migrations[0]['uuid']) self.assertEqual(1, len(allocations)) def test_revert_resize_same_host_delete_dest_fails_due_to_conflict(self): # make sure that the test only uses a single host compute2_service_id = self.admin_api.get_services( host=self.compute2.host, binary='nova-compute')[0]['id'] self.admin_api.put_service(compute2_service_id, {'status': 'disabled'}) hostname = self.compute1.manager.host rp_uuid = self._get_provider_uuid_by_host(hostname) server = self._boot_and_check_allocations(self.flavor, hostname) self._resize_to_same_host_and_check_allocations( server, self.flavor, self.other_flavor, rp_uuid) rsp = fake_requests.FakeResponse( 409, jsonutils.dumps( {'errors': [ {'code': 'placement.concurrent_update', 'detail': 'consumer generation conflict'}]})) with mock.patch('keystoneauth1.adapter.Adapter.post', autospec=True) as mock_post: mock_post.return_value = rsp post = {'revertResize': None} self.api.post_server_action(server['id'], post) server = self._wait_for_state_change(server, 'ERROR',) self.assertEqual(1, mock_post.call_count) migrations = self.api.get_migrations() self.assertEqual(1, len(migrations)) self.assertEqual('resize', migrations[0]['migration_type']) self.assertEqual(server['id'], migrations[0]['instance_uuid']) self.assertEqual(hostname, migrations[0]['source_compute']) self.assertEqual('error', migrations[0]['status']) # NOTE(gibi): Nova leaks the allocation held by the migration_uuid even # after the instance is deleted. At least nova logs a fat ERROR. self.assertIn('Reverting allocation in placement for migration %s ' 'failed. The instance %s will be put into ERROR state ' 'but the allocation held by the migration is leaked.' % (migrations[0]['uuid'], server['id']), self.stdlog.logger.output) self._delete_server(server) fake_notifier.wait_for_versioned_notifications('instance.delete.end') allocations = self._get_allocations_by_server_uuid( migrations[0]['uuid']) self.assertEqual(1, len(allocations)) def test_force_live_migrate_claim_on_dest_fails(self): # Normal live migrate moves source allocation from instance to # migration like a normal migrate tested above. # Normal live migrate claims on dest like a normal boot tested above. source_hostname = self.compute1.host dest_hostname = self.compute2.host # the ability to force live migrate a server is removed entirely in # 2.68 self.api.microversion = '2.67' server = self._boot_and_check_allocations( self.flavor, source_hostname) rsp = fake_requests.FakeResponse( 409, jsonutils.dumps( {'errors': [ {'code': 'placement.concurrent_update', 'detail': 'consumer generation conflict'}]})) with mock.patch('keystoneauth1.adapter.Adapter.put', autospec=True) as mock_put: mock_put.return_value = rsp post = { 'os-migrateLive': { 'host': dest_hostname, 'block_migration': True, 'force': True, } } self.api.post_server_action(server['id'], post) server = self._wait_for_state_change(server, 'ERROR') self.assertEqual(1, mock_put.call_count) # This is not a conflict that the API user can ever resolve. It is a # serious inconsistency in our database or a bug in the scheduler code # doing the claim. 
self.assertEqual(500, server['fault']['code']) # The instance is in ERROR state so the allocations are in limbo but # at least we expect that when the instance is deleted the allocations # are cleaned up properly. self._delete_and_check_allocations(server) def test_live_migrate_drop_allocation_on_source_fails(self): source_hostname = self.compute1.host dest_hostname = self.compute2.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) # the ability to force live migrate a server is removed entirely in # 2.68 self.api.microversion = '2.67' server = self._boot_and_check_allocations( self.flavor, source_hostname) fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) orig_put = adapter.Adapter.put rsp = fake_requests.FakeResponse( 409, jsonutils.dumps( {'errors': [ {'code': 'placement.concurrent_update', 'detail': 'consumer generation conflict'}]})) self.adapter_put_call_count = 0 def fake_put(_self, url, **kwargs): self.adapter_put_call_count += 1 migration_uuid = self.get_migration_uuid_for_instance(server['id']) if url == '/allocations/%s' % migration_uuid: return rsp else: return orig_put(_self, url, **kwargs) with mock.patch('keystoneauth1.adapter.Adapter.put', new=fake_put): post = { 'os-migrateLive': { 'host': dest_hostname, 'block_migration': True, 'force': True, } } self.api.post_server_action(server['id'], post) # nova does the source host cleanup _after_ setting the migration # to completed and sending end notifications so we have to wait # here a bit. time.sleep(1) # Nova failed to clean up on the source host. This right now puts # the instance to ERROR state and fails the migration. server = self._wait_for_server_parameter(server, {'OS-EXT-SRV-ATTR:host': dest_hostname, 'status': 'ERROR'}) self._wait_for_migration_status(server, ['error']) fake_notifier.wait_for_versioned_notifications( 'instance.live_migration_post.end') # 1 claim on destination, 1 normal delete on dest that fails, self.assertEqual(2, self.adapter_put_call_count) # As the cleanup on the source host failed Nova leaks the allocation # held by the migration. self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor) migration_uuid = self.get_migration_uuid_for_instance(server['id']) self.assertFlavorMatchesAllocation(self.flavor, migration_uuid, source_rp_uuid) self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor) self.assertFlavorMatchesAllocation(self.flavor, server['id'], dest_rp_uuid) # NOTE(gibi): Nova leaks the allocation held by the migration_uuid even # after the instance is deleted. At least nova logs a fat ERROR. self.assertIn('Deleting allocation in placement for migration %s ' 'failed. The instance %s will be put to ERROR state but ' 'the allocation held by the migration is leaked.' 
% (migration_uuid, server['id']), self.stdlog.logger.output) self._delete_server(server) fake_notifier.wait_for_versioned_notifications('instance.delete.end') self.assertFlavorMatchesAllocation(self.flavor, migration_uuid, source_rp_uuid) def _test_evacuate_fails_allocating_on_dest_host(self, force): source_hostname = self.compute1.host dest_hostname = self.compute2.host # the ability to force evacuate a server is removed entirely in 2.68 self.api.microversion = '2.67' server = self._boot_and_check_allocations( self.flavor, source_hostname) source_compute_id = self.admin_api.get_services( host=source_hostname, binary='nova-compute')[0]['id'] self.compute1.stop() # force it down to avoid waiting for the service group to time out self.admin_api.put_service( source_compute_id, {'forced_down': 'true'}) rsp = fake_requests.FakeResponse( 409, jsonutils.dumps( {'errors': [ {'code': 'placement.concurrent_update', 'detail': 'consumer generation conflict'}]})) with mock.patch('keystoneauth1.adapter.Adapter.put', autospec=True) as mock_put: mock_put.return_value = rsp post = { 'evacuate': { 'force': force } } if force: post['evacuate']['host'] = dest_hostname self.api.post_server_action(server['id'], post) server = self._wait_for_state_change(server, 'ERROR') self.assertEqual(1, mock_put.call_count) # As nova failed to allocate on the dest host we only expect allocation # on the source source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor) self.assertRequestMatchesUsage({'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, dest_rp_uuid) self.assertFlavorMatchesAllocation(self.flavor, server['id'], source_rp_uuid) self._delete_and_check_allocations(server) def test_force_evacuate_fails_allocating_on_dest_host(self): self._test_evacuate_fails_allocating_on_dest_host(force=True) def test_evacuate_fails_allocating_on_dest_host(self): self._test_evacuate_fails_allocating_on_dest_host(force=False) def test_server_delete_fails_due_to_conflict(self): source_hostname = self.compute1.host server = self._boot_and_check_allocations(self.flavor, source_hostname) rsp = fake_requests.FakeResponse( 409, jsonutils.dumps({'text': 'consumer generation conflict'})) with mock.patch('keystoneauth1.adapter.Adapter.put', autospec=True) as mock_put: mock_put.return_value = rsp self.api.delete_server(server['id']) server = self._wait_for_state_change(server, 'ERROR') self.assertEqual(1, mock_put.call_count) # We still have the allocations as deletion failed source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor) self.assertFlavorMatchesAllocation(self.flavor, server['id'], source_rp_uuid) # retry the delete to make sure that allocations are removed this time self._delete_and_check_allocations(server) def test_server_local_delete_fails_due_to_conflict(self): source_hostname = self.compute1.host server = self._boot_and_check_allocations(self.flavor, source_hostname) source_compute_id = self.admin_api.get_services( host=self.compute1.host, binary='nova-compute')[0]['id'] self.compute1.stop() self.admin_api.put_service( source_compute_id, {'forced_down': 'true'}) rsp = fake_requests.FakeResponse( 409, jsonutils.dumps({'text': 'consumer generation conflict'})) with mock.patch('keystoneauth1.adapter.Adapter.put', autospec=True) as mock_put: mock_put.return_value = rsp ex = self.assertRaises(client.OpenStackApiException, self.api.delete_server, 
server['id']) self.assertEqual(409, ex.response.status_code) self.assertIn('Failed to delete allocations for consumer', jsonutils.loads(ex.response.content)[ 'conflictingRequest']['message']) self.assertEqual(1, mock_put.call_count) # We still have the allocations as deletion failed source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor) self.assertFlavorMatchesAllocation(self.flavor, server['id'], source_rp_uuid) # retry the delete to make sure that allocations are removed this time self._delete_and_check_allocations(server) class ServerMovingTestsWithNestedComputes(ServerMovingTests): """Runs all the server moving tests while the computes have nested trees. The servers still do not request resources from any child provider though. """ compute_driver = 'fake.MediumFakeDriverWithNestedCustomResources' class ServerMovingTestsWithNestedResourceRequests( ServerMovingTestsWithNestedComputes): """Runs all the server moving tests while the computes have nested trees. The servers also request resources from child providers. """ def setUp(self): super(ServerMovingTestsWithNestedResourceRequests, self).setUp() # modify the flavors used in the ServerMoving test base class to # require one piece of CUSTOM_MAGIC resource as well. for flavor in [self.flavor1, self.flavor2, self.flavor3]: self.api.post_extra_spec( flavor['id'], {'extra_specs': {'resources:CUSTOM_MAGIC': 1}}) # save the extra_specs in the flavor stored in the test case as # well flavor['extra_specs'] = {'resources:CUSTOM_MAGIC': 1} def _check_allocation_during_evacuate( self, flavor, server_uuid, source_root_rp_uuid, dest_root_rp_uuid): # NOTE(gibi): evacuate is the only case when the same consumer has # allocation from two different RP trees so we need a special check # here. allocations = self._get_allocations_by_server_uuid(server_uuid) source_rps = self._get_all_rp_uuids_in_a_tree(source_root_rp_uuid) dest_rps = self._get_all_rp_uuids_in_a_tree(dest_root_rp_uuid) self.assertEqual(set(source_rps + dest_rps), set(allocations)) total_source_allocation = collections.defaultdict(int) total_dest_allocation = collections.defaultdict(int) for rp, alloc in allocations.items(): for rc, value in alloc['resources'].items(): if rp in source_rps: total_source_allocation[rc] += value else: total_dest_allocation[rc] += value self.assertEqual( self._resources_from_flavor(flavor), total_source_allocation) self.assertEqual( self._resources_from_flavor(flavor), total_dest_allocation) def test_live_migrate_force(self): # Nova intentionally does not support force live-migrating server # with nested allocations. source_hostname = self.compute1.host dest_hostname = self.compute2.host source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) # the ability to force live migrate a server is removed entirely in # 2.68 self.api.microversion = '2.67' server = self._boot_and_check_allocations( self.flavor1, source_hostname) post = { 'os-migrateLive': { 'host': dest_hostname, 'block_migration': True, 'force': True, } } self.api.post_server_action(server['id'], post) self._wait_for_migration_status(server, ['error']) self._wait_for_server_parameter(server, {'OS-EXT-SRV-ATTR:host': source_hostname, 'status': 'ACTIVE'}) self.assertIn('Unable to move instance %s to host host2. The instance ' 'has complex allocations on the source host so move ' 'cannot be forced.' 
% server['id'], self.stdlog.logger.output) self._run_periodics() # NOTE(danms): There should be no usage for the dest self.assertRequestMatchesUsage( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, dest_rp_uuid) self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) # the server has an allocation on only the source node self.assertFlavorMatchesAllocation(self.flavor1, server['id'], source_rp_uuid) self._delete_and_check_allocations(server) def _test_evacuate_forced_host(self, keep_hypervisor_state): # Nova intentionally does not support force evacuating server # with nested allocations. source_hostname = self.compute1.host dest_hostname = self.compute2.host # the ability to force evacuate a server is removed entirely in 2.68 self.api.microversion = '2.67' server = self._boot_and_check_allocations( self.flavor1, source_hostname) source_compute_id = self.admin_api.get_services( host=source_hostname, binary='nova-compute')[0]['id'] self.compute1.stop() # force it down to avoid waiting for the service group to time out self.admin_api.put_service( source_compute_id, {'forced_down': 'true'}) # evacuate the server and force the destination host which bypasses # the scheduler post = { 'evacuate': { 'host': dest_hostname, 'force': True } } self.api.post_server_action(server['id'], post) self._wait_for_migration_status(server, ['error']) expected_params = {'OS-EXT-SRV-ATTR:host': source_hostname, 'status': 'ACTIVE'} server = self._wait_for_server_parameter(server, expected_params) self.assertIn('Unable to move instance %s to host host2. The instance ' 'has complex allocations on the source host so move ' 'cannot be forced.' % server['id'], self.stdlog.logger.output) # Run the periodics to show those don't modify allocations. self._run_periodics() source_rp_uuid = self._get_provider_uuid_by_host(source_hostname) dest_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) self.assertRequestMatchesUsage( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, dest_rp_uuid) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], source_rp_uuid) # restart the source compute self.compute1 = self.restart_compute_service( self.compute1, keep_hypervisor_state=keep_hypervisor_state) self.admin_api.put_service( source_compute_id, {'forced_down': 'false'}) # Run the periodics again to show they don't change anything. self._run_periodics() # When the source node starts up nothing should change as the # evacuation failed self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) self.assertRequestMatchesUsage( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, dest_rp_uuid) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], source_rp_uuid) self._delete_and_check_allocations(server) # NOTE(gibi): There is another case NestedToFlat but that leads to the same # code path that NestedToNested as in both cases the instance will have # complex allocation on the source host which is already covered in # ServerMovingTestsWithNestedResourceRequests class ServerMovingTestsFromFlatToNested( integrated_helpers.ProviderUsageBaseTestCase): """Tests trying to move servers from a compute with a flat RP tree to a compute with a nested RP tree and assert that the blind allocation copy fails cleanly. 
""" REQUIRES_LOCKING = True compute_driver = 'fake.MediumFakeDriver' def setUp(self): super(ServerMovingTestsFromFlatToNested, self).setUp() flavors = self.api.get_flavors() self.flavor1 = flavors[0] self.api.post_extra_spec( self.flavor1['id'], {'extra_specs': {'resources:CUSTOM_MAGIC': 1}}) self.flavor1['extra_specs'] = {'resources:CUSTOM_MAGIC': 1} def test_force_live_migrate_from_flat_to_nested(self): # first compute will start with the flat RP tree but we add # CUSTOM_MAGIC inventory to the root compute RP orig_update_provider_tree = fake.MediumFakeDriver.update_provider_tree # the ability to force live migrate a server is removed entirely in # 2.68 self.api.microversion = '2.67' def stub_update_provider_tree(self, provider_tree, nodename, allocations=None): # do the regular inventory update orig_update_provider_tree( self, provider_tree, nodename, allocations) if nodename == 'host1': # add the extra resource inv = provider_tree.data(nodename).inventory inv['CUSTOM_MAGIC'] = { 'total': 10, 'reserved': 0, 'min_unit': 1, 'max_unit': 10, 'step_size': 1, 'allocation_ratio': 1, } provider_tree.update_inventory(nodename, inv) self.stub_out('nova.virt.fake.FakeDriver.update_provider_tree', stub_update_provider_tree) self.compute1 = self._start_compute(host='host1') source_rp_uuid = self._get_provider_uuid_by_host('host1') server = self._boot_and_check_allocations(self.flavor1, 'host1') # start the second compute with nested RP tree self.flags( compute_driver='fake.MediumFakeDriverWithNestedCustomResources') self.compute2 = self._start_compute(host='host2') # try to force live migrate from flat to nested. post = { 'os-migrateLive': { 'host': 'host2', 'block_migration': True, 'force': True, } } self.api.post_server_action(server['id'], post) # We expect that the migration will fail as force migrate tries to # blindly copy the source allocation to the destination but on the # destination there is no inventory of CUSTOM_MAGIC on the compute node # provider as that resource is reported on a child provider. self._wait_for_server_parameter(server, {'OS-EXT-SRV-ATTR:host': 'host1', 'status': 'ACTIVE'}) migration = self._wait_for_migration_status(server, ['error']) self.assertEqual('host1', migration['source_compute']) self.assertEqual('host2', migration['dest_compute']) # Nova fails the migration because it ties to allocation CUSTOM_MAGIC # from the dest node root RP and placement rejects the that allocation. self.assertIn("Unable to allocate inventory: Inventory for " "'CUSTOM_MAGIC'", self.stdlog.logger.output) self.assertIn('No valid host was found. Unable to move instance %s to ' 'host host2. There is not enough capacity on the host ' 'for the instance.' 
% server['id'], self.stdlog.logger.output) dest_rp_uuid = self._get_provider_uuid_by_host('host2') # There should be no usage for the dest self.assertRequestMatchesUsage( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, dest_rp_uuid) # and everything stays at the source self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], source_rp_uuid) self._delete_and_check_allocations(server) def test_force_evacuate_from_flat_to_nested(self): # first compute will start with the flat RP tree but we add # CUSTOM_MAGIC inventory to the root compute RP orig_update_provider_tree = fake.MediumFakeDriver.update_provider_tree # the ability to force evacuate a server is removed entirely in 2.68 self.api.microversion = '2.67' def stub_update_provider_tree(self, provider_tree, nodename, allocations=None): # do the regular inventory update orig_update_provider_tree( self, provider_tree, nodename, allocations) if nodename == 'host1': # add the extra resource inv = provider_tree.data(nodename).inventory inv['CUSTOM_MAGIC'] = { 'total': 10, 'reserved': 0, 'min_unit': 1, 'max_unit': 10, 'step_size': 1, 'allocation_ratio': 1, } provider_tree.update_inventory(nodename, inv) self.stub_out('nova.virt.fake.FakeDriver.update_provider_tree', stub_update_provider_tree) self.compute1 = self._start_compute(host='host1') source_rp_uuid = self._get_provider_uuid_by_host('host1') server = self._boot_and_check_allocations(self.flavor1, 'host1') # start the second compute with nested RP tree self.flags( compute_driver='fake.MediumFakeDriverWithNestedCustomResources') self.compute2 = self._start_compute(host='host2') source_compute_id = self.admin_api.get_services( host='host1', binary='nova-compute')[0]['id'] self.compute1.stop() # force it down to avoid waiting for the service group to time out self.admin_api.put_service( source_compute_id, {'forced_down': 'true'}) # try to force evacuate from flat to nested. post = { 'evacuate': { 'host': 'host2', 'force': True, } } self.api.post_server_action(server['id'], post) # We expect that the evacuation will fail as force evacuate tries to # blindly copy the source allocation to the destination but on the # destination there is no inventory of CUSTOM_MAGIC on the compute node # provider as that resource is reported on a child provider. self._wait_for_server_parameter(server, {'OS-EXT-SRV-ATTR:host': 'host1', 'status': 'ACTIVE'}) migration = self._wait_for_migration_status(server, ['error']) self.assertEqual('host1', migration['source_compute']) self.assertEqual('host2', migration['dest_compute']) # Nova fails the migration because it ties to allocation CUSTOM_MAGIC # from the dest node root RP and placement rejects the that allocation. self.assertIn("Unable to allocate inventory: Inventory for " "'CUSTOM_MAGIC'", self.stdlog.logger.output) self.assertIn('No valid host was found. Unable to move instance %s to ' 'host host2. There is not enough capacity on the host ' 'for the instance.' 
% server['id'], self.stdlog.logger.output) dest_rp_uuid = self._get_provider_uuid_by_host('host2') # There should be no usage for the dest self.assertRequestMatchesUsage( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, dest_rp_uuid) # and everything stays at the source self.assertFlavorMatchesUsage(source_rp_uuid, self.flavor1) self.assertFlavorMatchesAllocation(self.flavor1, server['id'], source_rp_uuid) self._delete_and_check_allocations(server) class PortResourceRequestBasedSchedulingTestBase( integrated_helpers.ProviderUsageBaseTestCase): compute_driver = 'fake.FakeDriverWithPciResources' CUSTOM_VNIC_TYPE_NORMAL = 'CUSTOM_VNIC_TYPE_NORMAL' CUSTOM_VNIC_TYPE_DIRECT = 'CUSTOM_VNIC_TYPE_DIRECT' CUSTOM_VNIC_TYPE_MACVTAP = 'CUSTOM_VNIC_TYPE_MACVTAP' CUSTOM_PHYSNET1 = 'CUSTOM_PHYSNET1' CUSTOM_PHYSNET2 = 'CUSTOM_PHYSNET2' CUSTOM_PHYSNET3 = 'CUSTOM_PHYSNET3' PF1 = 'pf1' PF2 = 'pf2' PF3 = 'pf3' def setUp(self): # enable PciPassthroughFilter to support SRIOV before the base class # starts the scheduler if 'PciPassthroughFilter' not in CONF.filter_scheduler.enabled_filters: self.flags( enabled_filters=CONF.filter_scheduler.enabled_filters + ['PciPassthroughFilter'], group='filter_scheduler') self.useFixture( fake.FakeDriverWithPciResources. FakeDriverWithPciResourcesConfigFixture()) super(PortResourceRequestBasedSchedulingTestBase, self).setUp() # Make ComputeManager._allocate_network_async synchronous to detect # errors in tests that involve rescheduling. self.useFixture(nova_fixtures.SpawnIsSynchronousFixture()) self.compute1 = self._start_compute('host1') self.compute1_rp_uuid = self._get_provider_uuid_by_host('host1') self.compute1_service_id = self.admin_api.get_services( host='host1', binary='nova-compute')[0]['id'] self.ovs_bridge_rp_per_host = {} self.sriov_dev_rp_per_host = {} self.flavor = self.api.get_flavors()[0] self.flavor_with_group_policy = self.api.get_flavors()[1] # Setting group policy for placement. This is mandatory when more than # one request group is included in the allocation candidate request and # we have tests with two ports both having resource request modelled as # two separate request groups. self.admin_api.post_extra_spec( self.flavor_with_group_policy['id'], {'extra_specs': {'group_policy': 'isolate'}}) self._create_networking_rp_tree('host1', self.compute1_rp_uuid) # add extra ports and the related network to the neutron fixture # specifically for these tests. It cannot be added globally in the # fixture init as it adds a second network that makes auto allocation # based test to fail due to ambiguous networks. 
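# The ports are deep copied below so each test works on its own copy of the
# fixture's class level port definitions; a test that later mutates a port
# (e.g. by injecting a resource_request) therefore cannot affect other
# tests. The pattern is simply:
#   self.neutron._ports[port['id']] = copy.deepcopy(port)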
self.neutron._ports[ self.neutron.port_with_sriov_resource_request['id']] = \ copy.deepcopy(self.neutron.port_with_sriov_resource_request) self.neutron._ports[self.neutron.sriov_port['id']] = \ copy.deepcopy(self.neutron.sriov_port) self.neutron._networks[ self.neutron.network_2['id']] = self.neutron.network_2 self.neutron._subnets[ self.neutron.subnet_2['id']] = self.neutron.subnet_2 macvtap = self.neutron.port_macvtap_with_resource_request self.neutron._ports[macvtap['id']] = copy.deepcopy(macvtap) def assertComputeAllocationMatchesFlavor( self, allocations, compute_rp_uuid, flavor): compute_allocations = allocations[compute_rp_uuid]['resources'] self.assertEqual( self._resources_from_flavor(flavor), compute_allocations) def _create_server(self, flavor, networks, host=None): server_req = self._build_server( image_uuid='76fa36fc-c930-4bf3-8c8a-ea2a2420deb6', flavor_id=flavor['id'], networks=networks, host=host) return self.api.post_server({'server': server_req}) def _set_provider_inventories(self, rp_uuid, inventories): rp = self.placement_api.get( '/resource_providers/%s' % rp_uuid).body inventories['resource_provider_generation'] = rp['generation'] return self._update_inventory(rp_uuid, inventories) def _create_ovs_networking_rp_tree(self, compute_rp_uuid): # we need uuid sentinel for the test to make pep8 happy but we need a # unique one per compute so here is some ugliness ovs_agent_rp_uuid = getattr(uuids, compute_rp_uuid + 'ovs agent') agent_rp_req = { "name": ovs_agent_rp_uuid, "uuid": ovs_agent_rp_uuid, "parent_provider_uuid": compute_rp_uuid } self.placement_api.post('/resource_providers', body=agent_rp_req, version='1.20') ovs_bridge_rp_uuid = getattr(uuids, ovs_agent_rp_uuid + 'ovs br') ovs_bridge_req = { "name": ovs_bridge_rp_uuid, "uuid": ovs_bridge_rp_uuid, "parent_provider_uuid": ovs_agent_rp_uuid } self.placement_api.post('/resource_providers', body=ovs_bridge_req, version='1.20') self.ovs_bridge_rp_per_host[compute_rp_uuid] = ovs_bridge_rp_uuid self._set_provider_inventories( ovs_bridge_rp_uuid, {"inventories": { orc.NET_BW_IGR_KILOBIT_PER_SEC: {"total": 10000}, orc.NET_BW_EGR_KILOBIT_PER_SEC: {"total": 10000}, }}) self._create_trait(self.CUSTOM_VNIC_TYPE_NORMAL) self._create_trait(self.CUSTOM_PHYSNET2) self._set_provider_traits( ovs_bridge_rp_uuid, [self.CUSTOM_VNIC_TYPE_NORMAL, self.CUSTOM_PHYSNET2]) def _create_pf_device_rp( self, device_rp_uuid, parent_rp_uuid, inventories, traits, device_rp_name=None): """Create a RP in placement for a physical function network device with traits and inventories. """ if not device_rp_name: device_rp_name = device_rp_uuid sriov_pf_req = { "name": device_rp_name, "uuid": device_rp_uuid, "parent_provider_uuid": parent_rp_uuid } self.placement_api.post('/resource_providers', body=sriov_pf_req, version='1.20') self._set_provider_inventories( device_rp_uuid, {"inventories": inventories}) for trait in traits: self._create_trait(trait) self._set_provider_traits( device_rp_uuid, traits) def _create_sriov_networking_rp_tree(self, hostname, compute_rp_uuid): # Create a matching RP tree in placement for the PCI devices added to # the passthrough_whitelist config during setUp() and PCI devices # present in the FakeDriverWithPciResources virt driver. # # * PF1 represents the PCI device 0000:01:00, it will be mapped to # physnet1 and it will have bandwidth inventory. # * PF2 represents the PCI device 0000:02:00, it will be mapped to # physnet2 it will have bandwidth inventory. 
# * PF3 represents the PCI device 0000:03:00 and, it will be mapped to # physnet2 but it will not have bandwidth inventory. self.sriov_dev_rp_per_host[compute_rp_uuid] = {} sriov_agent_rp_uuid = getattr(uuids, compute_rp_uuid + 'sriov agent') agent_rp_req = { "name": "%s:NIC Switch agent" % hostname, "uuid": sriov_agent_rp_uuid, "parent_provider_uuid": compute_rp_uuid } self.placement_api.post('/resource_providers', body=agent_rp_req, version='1.20') dev_rp_name_prefix = ("%s:NIC Switch agent:" % hostname) sriov_pf1_rp_uuid = getattr(uuids, sriov_agent_rp_uuid + 'PF1') self.sriov_dev_rp_per_host[ compute_rp_uuid][self.PF1] = sriov_pf1_rp_uuid inventories = { orc.NET_BW_IGR_KILOBIT_PER_SEC: {"total": 100000}, orc.NET_BW_EGR_KILOBIT_PER_SEC: {"total": 100000}, } traits = [self.CUSTOM_VNIC_TYPE_DIRECT, self.CUSTOM_PHYSNET1] self._create_pf_device_rp( sriov_pf1_rp_uuid, sriov_agent_rp_uuid, inventories, traits, device_rp_name=dev_rp_name_prefix + "%s-ens1" % hostname) sriov_pf2_rp_uuid = getattr(uuids, sriov_agent_rp_uuid + 'PF2') self.sriov_dev_rp_per_host[ compute_rp_uuid][self.PF2] = sriov_pf2_rp_uuid inventories = { orc.NET_BW_IGR_KILOBIT_PER_SEC: {"total": 100000}, orc.NET_BW_EGR_KILOBIT_PER_SEC: {"total": 100000}, } traits = [self.CUSTOM_VNIC_TYPE_DIRECT, self.CUSTOM_VNIC_TYPE_MACVTAP, self.CUSTOM_PHYSNET2] self._create_pf_device_rp( sriov_pf2_rp_uuid, sriov_agent_rp_uuid, inventories, traits, device_rp_name=dev_rp_name_prefix + "%s-ens2" % hostname) sriov_pf3_rp_uuid = getattr(uuids, sriov_agent_rp_uuid + 'PF3') self.sriov_dev_rp_per_host[ compute_rp_uuid][self.PF3] = sriov_pf3_rp_uuid inventories = {} traits = [self.CUSTOM_VNIC_TYPE_DIRECT, self.CUSTOM_PHYSNET2] self._create_pf_device_rp( sriov_pf3_rp_uuid, sriov_agent_rp_uuid, inventories, traits, device_rp_name=dev_rp_name_prefix + "%s-ens3" % hostname) def _create_networking_rp_tree(self, hostname, compute_rp_uuid): # let's simulate what the neutron would do self._create_ovs_networking_rp_tree(compute_rp_uuid) self._create_sriov_networking_rp_tree(hostname, compute_rp_uuid) def assertPortMatchesAllocation(self, port, allocations): port_request = port[constants.RESOURCE_REQUEST]['resources'] for rc, amount in allocations.items(): self.assertEqual(port_request[rc], amount, 'port %s requested %d %s ' 'resources but got allocation %d' % (port['id'], port_request[rc], rc, amount)) def _create_server_with_ports(self, *ports): server = self._create_server( flavor=self.flavor_with_group_policy, networks=[{'port': port['id']} for port in ports], host='host1') return self._wait_for_state_change(server, 'ACTIVE') def _check_allocation( self, server, compute_rp_uuid, non_qos_port, qos_port, qos_sriov_port, flavor, migration_uuid=None, source_compute_rp_uuid=None, new_flavor=None): updated_non_qos_port = self.neutron.show_port( non_qos_port['id'])['port'] updated_qos_port = self.neutron.show_port(qos_port['id'])['port'] updated_qos_sriov_port = self.neutron.show_port( qos_sriov_port['id'])['port'] allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] # if there is new_flavor then we either have an in progress resize or # a confirmed resize. 
In both cases the instance allocation should be # according to the new_flavor current_flavor = (new_flavor if new_flavor else flavor) # We expect one set of allocations for the compute resources on the # compute rp and two sets for the networking resources one on the ovs # bridge rp due to the qos_port resource request and one one the # sriov pf2 due to qos_sriov_port resource request self.assertEqual(3, len(allocations)) self.assertComputeAllocationMatchesFlavor( allocations, compute_rp_uuid, current_flavor) ovs_allocations = allocations[ self.ovs_bridge_rp_per_host[compute_rp_uuid]]['resources'] self.assertPortMatchesAllocation(qos_port, ovs_allocations) sriov_allocations = allocations[ self.sriov_dev_rp_per_host[compute_rp_uuid][self.PF2]]['resources'] self.assertPortMatchesAllocation(qos_sriov_port, sriov_allocations) # We expect that only the RP uuid of the networking RP having the port # allocation is sent in the port binding for the port having resource # request qos_binding_profile = updated_qos_port['binding:profile'] self.assertEqual(self.ovs_bridge_rp_per_host[compute_rp_uuid], qos_binding_profile['allocation']) qos_sriov_binding_profile = updated_qos_sriov_port['binding:profile'] self.assertEqual(self.sriov_dev_rp_per_host[compute_rp_uuid][self.PF2], qos_sriov_binding_profile['allocation']) # And we expect not to have any allocation set in the port binding for # the port that doesn't have resource request self.assertEqual({}, updated_non_qos_port['binding:profile']) if migration_uuid: migration_allocations = self.placement_api.get( '/allocations/%s' % migration_uuid).body['allocations'] # We expect one set of allocations for the compute resources on the # compute rp and two sets for the networking resources one on the # ovs bridge rp due to the qos_port resource request and one one # the sriov pf2 due to qos_sriov_port resource request self.assertEqual(3, len(migration_allocations)) self.assertComputeAllocationMatchesFlavor( migration_allocations, source_compute_rp_uuid, flavor) ovs_allocations = migration_allocations[ self.ovs_bridge_rp_per_host[ source_compute_rp_uuid]]['resources'] self.assertPortMatchesAllocation(qos_port, ovs_allocations) sriov_allocations = migration_allocations[ self.sriov_dev_rp_per_host[ source_compute_rp_uuid][self.PF2]]['resources'] self.assertPortMatchesAllocation(qos_sriov_port, sriov_allocations) def _delete_server_and_check_allocations( self, server, qos_port, qos_sriov_port): self._delete_and_check_allocations(server) # assert that unbind removes the allocation from the binding of the # ports that got allocation during the bind updated_qos_port = self.neutron.show_port(qos_port['id'])['port'] binding_profile = updated_qos_port['binding:profile'] self.assertNotIn('allocation', binding_profile) updated_qos_sriov_port = self.neutron.show_port( qos_sriov_port['id'])['port'] binding_profile = updated_qos_sriov_port['binding:profile'] self.assertNotIn('allocation', binding_profile) def _create_server_with_ports_and_check_allocation( self, non_qos_normal_port, qos_normal_port, qos_sriov_port): server = self._create_server_with_ports( non_qos_normal_port, qos_normal_port, qos_sriov_port) # check that the server allocates from the current host properly self._check_allocation( server, self.compute1_rp_uuid, non_qos_normal_port, qos_normal_port, qos_sriov_port, self.flavor_with_group_policy) return server def _assert_pci_request_pf_device_name(self, server, device_name): ctxt = context.get_admin_context() pci_requests = 
objects.InstancePCIRequests.get_by_instance_uuid( ctxt, server['id']) self.assertEqual(1, len(pci_requests.requests)) self.assertEqual(1, len(pci_requests.requests[0].spec)) self.assertEqual( device_name, pci_requests.requests[0].spec[0]['parent_ifname']) class UnsupportedPortResourceRequestBasedSchedulingTest( PortResourceRequestBasedSchedulingTestBase): """Tests for handling servers with ports having resource requests """ def _add_resource_request_to_a_bound_port(self, port_id): # NOTE(gibi): self.neutron._ports contains a copy of each neutron port # defined on class level in the fixture. So modifying what is in the # _ports list is safe as it is re-created for each Neutron fixture # instance therefore for each individual test using that fixture. bound_port = self.neutron._ports[port_id] bound_port[constants.RESOURCE_REQUEST] = ( self.neutron.port_with_resource_request[ constants.RESOURCE_REQUEST]) def test_interface_attach_with_port_resource_request(self): # create a server server = self._create_server( flavor=self.flavor, networks=[{'port': self.neutron.port_1['id']}]) self._wait_for_state_change(server, 'ACTIVE') # try to add a port with resource request post = { 'interfaceAttachment': { 'port_id': self.neutron.port_with_resource_request['id'] }} ex = self.assertRaises(client.OpenStackApiException, self.api.attach_interface, server['id'], post) self.assertEqual(400, ex.response.status_code) self.assertIn('Attaching interfaces with QoS policy is ' 'not supported for instance', six.text_type(ex)) @mock.patch('nova.tests.fixtures.NeutronFixture.create_port') def test_interface_attach_with_network_create_port_has_resource_request( self, mock_neutron_create_port): # create a server server = self._create_server( flavor=self.flavor, networks=[{'port': self.neutron.port_1['id']}]) self._wait_for_state_change(server, 'ACTIVE') # the interfaceAttach operation below will result in a new port being # created in the network that is attached. Make sure that neutron # returns a port that has resource request. mock_neutron_create_port.return_value = ( {'port': copy.deepcopy(self.neutron.port_with_resource_request)}) # try to attach a network post = { 'interfaceAttachment': { 'net_id': self.neutron.network_1['id'] }} ex = self.assertRaises(client.OpenStackApiException, self.api.attach_interface, server['id'], post) self.assertEqual(400, ex.response.status_code) self.assertIn('Using networks with QoS policy is not supported for ' 'instance', six.text_type(ex)) @mock.patch('nova.tests.fixtures.NeutronFixture.create_port') def test_create_server_with_network_create_port_has_resource_request( self, mock_neutron_create_port): # the server create operation below will result in a new port being # created in the network. Make sure that neutron returns a port that # has resource request. 
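# Stubbing NeutronFixture.create_port this way simulates a network whose
# automatically created ports carry a QoS minimum bandwidth resource
# request. Booting with such a network is unsupported, so the server below
# is expected to end up in ERROR with a 500 fault after the network
# allocation fails.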
mock_neutron_create_port.return_value = ( {'port': copy.deepcopy(self.neutron.port_with_resource_request)}) server = self._create_server( flavor=self.flavor, networks=[{'uuid': self.neutron.network_1['id']}]) server = self._wait_for_state_change(server, 'ERROR') self.assertEqual(500, server['fault']['code']) self.assertIn('Failed to allocate the network', server['fault']['message']) def test_create_server_with_port_resource_request_old_microversion(self): # NOTE(gibi): 2.71 is the last microversion where nova does not support # this kind of create server self.api.microversion = '2.71' ex = self.assertRaises( client.OpenStackApiException, self._create_server, flavor=self.flavor, networks=[{'port': self.neutron.port_with_resource_request['id']}]) self.assertEqual(400, ex.response.status_code) self.assertIn( "Creating servers with ports having resource requests, like a " "port with a QoS minimum bandwidth policy, is not supported " "until microversion 2.72.", six.text_type(ex)) def test_live_migrate_server_with_port_resource_request_old_version( self): server = self._create_server( flavor=self.flavor, networks=[{'port': self.neutron.port_1['id']}]) self._wait_for_state_change(server, 'ACTIVE') # We need to simulate that the above server has a port that has # resource request; we cannot boot with such a port but legacy servers # can exist with such a port. self._add_resource_request_to_a_bound_port(self.neutron.port_1['id']) post = { 'os-migrateLive': { 'host': None, 'block_migration': False, } } with mock.patch( "nova.objects.service.get_minimum_version_all_cells", return_value=48, ): ex = self.assertRaises( client.OpenStackApiException, self.api.post_server_action, server['id'], post) self.assertEqual(400, ex.response.status_code) self.assertIn( "The os-migrateLive action on a server with ports having resource " "requests, like a port with a QoS minimum bandwidth policy, is " "not supported by this cluster right now", six.text_type(ex)) self.assertIn( "The os-migrateLive action on a server with ports having resource " "requests, like a port with a QoS minimum bandwidth policy, is " "not supported until every nova-compute is upgraded to Ussuri", self.stdlog.logger.output) def test_evacuate_server_with_port_resource_request_old_version( self): server = self._create_server( flavor=self.flavor, networks=[{'port': self.neutron.port_1['id']}]) self._wait_for_state_change(server, 'ACTIVE') # We need to simulate that the above server has a port that has # resource request; we cannot boot with such a port but legacy servers # can exist with such a port. 
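# The helper below injects a resource_request into the already bound port_1
# in the fixture, while get_minimum_version_all_cells is mocked to return 48,
# an old compute service version from before the Ussuri support was added.
# The API is then expected to reject the evacuate with a 400 until every
# nova-compute is upgraded, which is what the assertions that follow verify.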
self._add_resource_request_to_a_bound_port(self.neutron.port_1['id']) with mock.patch( "nova.objects.service.get_minimum_version_all_cells", return_value=48, ): ex = self.assertRaises( client.OpenStackApiException, self.api.post_server_action, server['id'], {'evacuate': {}}) self.assertEqual(400, ex.response.status_code) self.assertIn( "The evacuate action on a server with ports having resource " "requests, like a port with a QoS minimum bandwidth policy, is " "not supported by this cluster right now", six.text_type(ex)) self.assertIn( "The evacuate action on a server with ports having resource " "requests, like a port with a QoS minimum bandwidth policy, is " "not supported until every nova-compute is upgraded to Ussuri", self.stdlog.logger.output) def test_unshelve_offloaded_server_with_port_resource_request_old_version( self): server = self._create_server( flavor=self.flavor, networks=[{'port': self.neutron.port_1['id']}]) self._wait_for_state_change(server, 'ACTIVE') # with default config shelve means immediate offload as well req = { 'shelve': {} } self.api.post_server_action(server['id'], req) self._wait_for_server_parameter( server, {'status': 'SHELVED_OFFLOADED'}) # We need to simulate that the above server has a port that has # resource request; we cannot boot with such a port but legacy servers # can exist with such a port. self._add_resource_request_to_a_bound_port(self.neutron.port_1['id']) with mock.patch( "nova.objects.service.get_minimum_version_all_cells", return_value=48, ): ex = self.assertRaises( client.OpenStackApiException, self.api.post_server_action, server['id'], {'unshelve': None}) self.assertEqual(400, ex.response.status_code) self.assertIn( "The unshelve action on a server with ports having resource " "requests, like a port with a QoS minimum bandwidth policy, is " "not supported by this cluster right now", six.text_type(ex)) self.assertIn( "The unshelve action on a server with ports having resource " "requests, like a port with a QoS minimum bandwidth policy, is " "not supported until every nova-compute is upgraded to Ussuri", self.stdlog.logger.output) def test_unshelve_not_offloaded_server_with_port_resource_request( self): """If the server is not offloaded then unshelving does not cause a new resource allocation therefore having port resource request is irrelevant. This test asserts that such unshelve request is not rejected. """ server = self._create_server( flavor=self.flavor, networks=[{'port': self.neutron.port_1['id']}]) self._wait_for_state_change(server, 'ACTIVE') # avoid automatic shelve offloading self.flags(shelved_offload_time=-1) req = { 'shelve': {} } self.api.post_server_action(server['id'], req) self._wait_for_server_parameter(server, {'status': 'SHELVED'}) # We need to simulate that the above server has a port that has # resource request; we cannot boot with such a port but legacy servers # can exist with such a port. 
self._add_resource_request_to_a_bound_port(self.neutron.port_1['id']) self.api.post_server_action(server['id'], {'unshelve': None}) self._wait_for_state_change(server, 'ACTIVE') class NonAdminUnsupportedPortResourceRequestBasedSchedulingTest( UnsupportedPortResourceRequestBasedSchedulingTest): def setUp(self): super( NonAdminUnsupportedPortResourceRequestBasedSchedulingTest, self).setUp() # switch to non admin api self.api = self.api_fixture.api self.api.microversion = self.microversion # allow non-admin to call the operations self.policy.set_rules({ 'os_compute_api:servers:create': '@', 'os_compute_api:servers:create:attach_network': '@', 'os_compute_api:servers:show': '@', 'os_compute_api:os-attach-interfaces': '@', 'os_compute_api:os-attach-interfaces:create': '@', 'os_compute_api:os-shelve:shelve': '@', 'os_compute_api:os-shelve:unshelve': '@', 'os_compute_api:os-migrate-server:migrate_live': '@', 'os_compute_api:os-evacuate': '@', }) class PortResourceRequestBasedSchedulingTest( PortResourceRequestBasedSchedulingTestBase): """Tests creating a server with a pre-existing port that has a resource request for a QoS minimum bandwidth policy. """ def test_boot_server_with_two_ports_one_having_resource_request(self): non_qos_port = self.neutron.port_1 qos_port = self.neutron.port_with_resource_request server = self._create_server( flavor=self.flavor, networks=[{'port': non_qos_port['id']}, {'port': qos_port['id']}]) server = self._wait_for_state_change(server, 'ACTIVE') updated_non_qos_port = self.neutron.show_port( non_qos_port['id'])['port'] updated_qos_port = self.neutron.show_port(qos_port['id'])['port'] allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] # We expect one set of allocations for the compute resources on the # compute rp and one set for the networking resources on the ovs bridge # rp due to the qos_port resource request self.assertEqual(2, len(allocations)) self.assertComputeAllocationMatchesFlavor( allocations, self.compute1_rp_uuid, self.flavor) network_allocations = allocations[ self.ovs_bridge_rp_per_host[self.compute1_rp_uuid]]['resources'] self.assertPortMatchesAllocation(qos_port, network_allocations) # We expect that only the RP uuid of the networking RP having the port # allocation is sent in the port binding for the port having resource # request qos_binding_profile = updated_qos_port['binding:profile'] self.assertEqual(self.ovs_bridge_rp_per_host[self.compute1_rp_uuid], qos_binding_profile['allocation']) # And we expect not to have any allocation set in the port binding for # the port that doesn't have resource request self.assertEqual({}, updated_non_qos_port['binding:profile']) self._delete_and_check_allocations(server) # assert that unbind removes the allocation from the binding of the # port that got allocation during the bind updated_qos_port = self.neutron.show_port(qos_port['id'])['port'] binding_profile = updated_qos_port['binding:profile'] self.assertNotIn('allocation', binding_profile) def test_one_ovs_one_sriov_port(self): ovs_port = self.neutron.port_with_resource_request sriov_port = self.neutron.port_with_sriov_resource_request server = self._create_server(flavor=self.flavor_with_group_policy, networks=[{'port': ovs_port['id']}, {'port': sriov_port['id']}]) server = self._wait_for_state_change(server, 'ACTIVE') ovs_port = self.neutron.show_port(ovs_port['id'])['port'] sriov_port = self.neutron.show_port(sriov_port['id'])['port'] allocations = self.placement_api.get( '/allocations/%s' % 
server['id']).body['allocations'] # We expect one set of allocations for the compute resources on the # compute rp and one set for the networking resources on the ovs bridge # rp and on the sriov PF rp. self.assertEqual(3, len(allocations)) self.assertComputeAllocationMatchesFlavor( allocations, self.compute1_rp_uuid, self.flavor_with_group_policy) ovs_allocations = allocations[ self.ovs_bridge_rp_per_host[self.compute1_rp_uuid]]['resources'] sriov_allocations = allocations[ self.sriov_dev_rp_per_host[ self.compute1_rp_uuid][self.PF2]]['resources'] self.assertPortMatchesAllocation(ovs_port, ovs_allocations) self.assertPortMatchesAllocation(sriov_port, sriov_allocations) # We expect that only the RP uuid of the networking RP having the port # allocation is sent in the port binding for the port having resource # request ovs_binding = ovs_port['binding:profile'] self.assertEqual(self.ovs_bridge_rp_per_host[self.compute1_rp_uuid], ovs_binding['allocation']) sriov_binding = sriov_port['binding:profile'] self.assertEqual( self.sriov_dev_rp_per_host[self.compute1_rp_uuid][self.PF2], sriov_binding['allocation']) def test_interface_detach_with_port_with_bandwidth_request(self): port = self.neutron.port_with_resource_request # create a server server = self._create_server( flavor=self.flavor, networks=[{'port': port['id']}]) self._wait_for_state_change(server, 'ACTIVE') allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] # We expect one set of allocations for the compute resources on the # compute rp and one set for the networking resources on the ovs bridge # rp due to the port resource request self.assertEqual(2, len(allocations)) self.assertComputeAllocationMatchesFlavor( allocations, self.compute1_rp_uuid, self.flavor) network_allocations = allocations[ self.ovs_bridge_rp_per_host[self.compute1_rp_uuid]]['resources'] self.assertPortMatchesAllocation(port, network_allocations) # We expect that only the RP uuid of the networking RP having the port # allocation is sent in the port binding for the port having resource # request updated_port = self.neutron.show_port(port['id'])['port'] binding_profile = updated_port['binding:profile'] self.assertEqual(self.ovs_bridge_rp_per_host[self.compute1_rp_uuid], binding_profile['allocation']) self.api.detach_interface( server['id'], self.neutron.port_with_resource_request['id']) fake_notifier.wait_for_versioned_notifications( 'instance.interface_detach.end') updated_port = self.neutron.show_port( self.neutron.port_with_resource_request['id'])['port'] allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] # We expect that the port related resource allocations are removed self.assertEqual(1, len(allocations)) self.assertComputeAllocationMatchesFlavor( allocations, self.compute1_rp_uuid, self.flavor) # We expect that the allocation is removed from the port too binding_profile = updated_port['binding:profile'] self.assertNotIn('allocation', binding_profile) def test_delete_bound_port_in_neutron_with_resource_request(self): """Neutron sends a network-vif-deleted os-server-external-events notification to nova when a bound port is deleted. Nova detaches the vif from the server. If the port had a resource allocation then that allocation is leaked. This test makes sure that 1) an ERROR is logged when the leak happens. 2) the leaked resource is reclaimed when the server is deleted. 
""" port = self.neutron.port_with_resource_request # create a server server = self._create_server( flavor=self.flavor, networks=[{'port': port['id']}]) server = self._wait_for_state_change(server, 'ACTIVE') allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] # We expect one set of allocations for the compute resources on the # compute rp and one set for the networking resources on the ovs bridge # rp due to the port resource request self.assertEqual(2, len(allocations)) compute_allocations = allocations[self.compute1_rp_uuid]['resources'] network_allocations = allocations[ self.ovs_bridge_rp_per_host[self.compute1_rp_uuid]]['resources'] self.assertEqual(self._resources_from_flavor(self.flavor), compute_allocations) self.assertPortMatchesAllocation(port, network_allocations) # We expect that only the RP uuid of the networking RP having the port # allocation is sent in the port binding for the port having resource # request updated_port = self.neutron.show_port(port['id'])['port'] binding_profile = updated_port['binding:profile'] self.assertEqual(self.ovs_bridge_rp_per_host[self.compute1_rp_uuid], binding_profile['allocation']) # neutron is faked in the functional test so this test just sends in # a os-server-external-events notification to trigger the # detach + ERROR log. events = { "events": [ { "name": "network-vif-deleted", "server_uuid": server['id'], "tag": port['id'], } ] } response = self.api.api_post('/os-server-external-events', events).body self.assertEqual(200, response['events'][0]['code']) port_rp_uuid = self.ovs_bridge_rp_per_host[self.compute1_rp_uuid] # 1) Nova logs an ERROR about the leak self._wait_for_log( 'ERROR [nova.compute.manager] The bound port %(port_id)s is ' 'deleted in Neutron but the resource allocation on the resource ' 'provider %(rp_uuid)s is leaked until the server %(server_uuid)s ' 'is deleted.' % {'port_id': port['id'], 'rp_uuid': port_rp_uuid, 'server_uuid': server['id']}) allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] # Nova leaks the port allocation so the server still has the same # allocation before the port delete. self.assertEqual(2, len(allocations)) compute_allocations = allocations[self.compute1_rp_uuid]['resources'] network_allocations = allocations[port_rp_uuid]['resources'] self.assertEqual(self._resources_from_flavor(self.flavor), compute_allocations) self.assertPortMatchesAllocation(port, network_allocations) # 2) Also nova will reclaim the leaked resource during the server # delete self._delete_and_check_allocations(server) def test_two_sriov_ports_one_with_request_two_available_pfs(self): """Verify that the port's bandwidth allocated from the same PF as the allocated VF. One compute host: * PF1 (0000:01:00) is configured for physnet1 * PF2 (0000:02:00) is configured for physnet2, with 1 VF and bandwidth inventory * PF3 (0000:03:00) is configured for physnet2, with 1 VF but without bandwidth inventory One instance will be booted with two neutron ports, both ports requested to be connected to physnet2. One port has resource request the other does not have resource request. The port having the resource request cannot be allocated to PF3 and PF1 while the other port that does not have resource request can be allocated to PF2 or PF3. For the detailed compute host config see the FakeDriverWithPciResources class. For the necessary passthrough_whitelist config see the setUp of the PortResourceRequestBasedSchedulingTestBase class. 
""" sriov_port = self.neutron.sriov_port sriov_port_with_res_req = self.neutron.port_with_sriov_resource_request server = self._create_server( flavor=self.flavor_with_group_policy, networks=[ {'port': sriov_port_with_res_req['id']}, {'port': sriov_port['id']}]) server = self._wait_for_state_change(server, 'ACTIVE') sriov_port = self.neutron.show_port(sriov_port['id'])['port'] sriov_port_with_res_req = self.neutron.show_port( sriov_port_with_res_req['id'])['port'] allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] # We expect one set of allocations for the compute resources on the # compute rp and one set for the networking resources on the sriov PF2 # rp. self.assertEqual(2, len(allocations)) self.assertComputeAllocationMatchesFlavor( allocations, self.compute1_rp_uuid, self.flavor_with_group_policy) sriov_allocations = allocations[ self.sriov_dev_rp_per_host[ self.compute1_rp_uuid][self.PF2]]['resources'] self.assertPortMatchesAllocation( sriov_port_with_res_req, sriov_allocations) # We expect that only the RP uuid of the networking RP having the port # allocation is sent in the port binding for the port having resource # request sriov_with_req_binding = sriov_port_with_res_req['binding:profile'] self.assertEqual( self.sriov_dev_rp_per_host[self.compute1_rp_uuid][self.PF2], sriov_with_req_binding['allocation']) # and the port without resource request does not have allocation sriov_binding = sriov_port['binding:profile'] self.assertNotIn('allocation', sriov_binding) # We expect that the selected PCI device matches with the RP from # where the bandwidth is allocated from. The bandwidth is allocated # from 0000:02:00 (PF2) so the PCI device should be a VF of that PF self.assertEqual( fake.FakeDriverWithPciResources.PCI_ADDR_PF2_VF1, sriov_with_req_binding['pci_slot']) # But also the port that has no resource request still gets a pci slot # allocated. The 0000:02:00 has no more VF available but 0000:03:00 has # one VF available and that PF is also on physnet2 self.assertEqual( fake.FakeDriverWithPciResources.PCI_ADDR_PF3_VF1, sriov_binding['pci_slot']) def test_one_sriov_port_no_vf_and_bandwidth_available_on_the_same_pf(self): """Verify that if there is no PF that both provides bandwidth and VFs then the boot will fail. """ # boot a server with a single sriov port that has no resource request sriov_port = self.neutron.sriov_port server = self._create_server( flavor=self.flavor_with_group_policy, networks=[{'port': sriov_port['id']}]) self._wait_for_state_change(server, 'ACTIVE') sriov_port = self.neutron.show_port(sriov_port['id'])['port'] sriov_binding = sriov_port['binding:profile'] # We expect that this consume the last available VF from the PF2 self.assertEqual( fake.FakeDriverWithPciResources.PCI_ADDR_PF2_VF1, sriov_binding['pci_slot']) # Now boot a second server with a port that has resource request # At this point PF2 has available bandwidth but no available VF # and PF3 has available VF but no available bandwidth so we expect # the boot to fail. sriov_port_with_res_req = self.neutron.port_with_sriov_resource_request server = self._create_server( flavor=self.flavor_with_group_policy, networks=[{'port': sriov_port_with_res_req['id']}]) # NOTE(gibi): It should be NoValidHost in an ideal world but that would # require the scheduler to detect the situation instead of the pci # claim. However that is pretty hard as the scheduler does not know # anything about allocation candidates (e.g. 
that the only candidate # for the port in this case is PF2) it see the whole host as a # candidate and in our host there is available VF for the request even # if that is on the wrong PF. server = self._wait_for_state_change(server, 'ERROR') self.assertIn( 'Exceeded maximum number of retries. Exhausted all hosts ' 'available for retrying build failures for instance', server['fault']['message']) def test_sriov_macvtap_port_with_resource_request(self): """Verify that vnic type macvtap is also supported""" port = self.neutron.port_macvtap_with_resource_request server = self._create_server( flavor=self.flavor, networks=[{'port': port['id']}]) server = self._wait_for_state_change(server, 'ACTIVE') port = self.neutron.show_port(port['id'])['port'] allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] # We expect one set of allocations for the compute resources on the # compute rp and one set for the networking resources on the sriov PF2 # rp. self.assertEqual(2, len(allocations)) self.assertComputeAllocationMatchesFlavor( allocations, self.compute1_rp_uuid, self.flavor) sriov_allocations = allocations[self.sriov_dev_rp_per_host[ self.compute1_rp_uuid][self.PF2]]['resources'] self.assertPortMatchesAllocation( port, sriov_allocations) # We expect that only the RP uuid of the networking RP having the port # allocation is sent in the port binding for the port having resource # request port_binding = port['binding:profile'] self.assertEqual( self.sriov_dev_rp_per_host[self.compute1_rp_uuid][self.PF2], port_binding['allocation']) # We expect that the selected PCI device matches with the RP from # where the bandwidth is allocated from. The bandwidth is allocated # from 0000:02:00 (PF2) so the PCI device should be a VF of that PF self.assertEqual( fake.FakeDriverWithPciResources.PCI_ADDR_PF2_VF1, port_binding['pci_slot']) class ServerMoveWithPortResourceRequestTest( PortResourceRequestBasedSchedulingTestBase): def setUp(self): # Use our custom weigher defined above to make sure that we have # a predictable host order in the alternate list returned by the # scheduler for migration. self.useFixture(nova_fixtures.HostNameWeigherFixture()) super(ServerMoveWithPortResourceRequestTest, self).setUp() self.compute2 = self._start_compute('host2') self.compute2_rp_uuid = self._get_provider_uuid_by_host('host2') self._create_networking_rp_tree('host2', self.compute2_rp_uuid) self.compute2_service_id = self.admin_api.get_services( host='host2', binary='nova-compute')[0]['id'] # create a bigger flavor to use in resize test self.flavor_with_group_policy_bigger = self.admin_api.post_flavor( {'flavor': { 'ram': self.flavor_with_group_policy['ram'], 'vcpus': self.flavor_with_group_policy['vcpus'], 'name': self.flavor_with_group_policy['name'] + '+', 'disk': self.flavor_with_group_policy['disk'] + 1, }}) self.admin_api.post_extra_spec( self.flavor_with_group_policy_bigger['id'], {'extra_specs': {'group_policy': 'isolate'}}) def test_migrate_server_with_qos_port_old_dest_compute_no_alternate(self): """Create a situation where the only migration target host returned by the scheduler is too old and therefore the migration fails. 
""" non_qos_normal_port = self.neutron.port_1 qos_normal_port = self.neutron.port_with_resource_request qos_sriov_port = self.neutron.port_with_sriov_resource_request server = self._create_server_with_ports_and_check_allocation( non_qos_normal_port, qos_normal_port, qos_sriov_port) orig_get_service = objects.Service.get_by_host_and_binary def fake_get_service(context, host, binary): # host2 is the only migration target, let's make it too old so the # migration will fail if host == 'host2': service = orig_get_service(context, host, binary) service.version = 38 return service else: return orig_get_service(context, host, binary) with mock.patch( 'nova.objects.Service.get_by_host_and_binary', side_effect=fake_get_service): self.api.post_server_action(server['id'], {'migrate': None}, check_response_status=[202]) self._wait_for_server_parameter(server, {'OS-EXT-STS:task_state': None}) self._assert_resize_migrate_action_fail( server, instance_actions.MIGRATE, 'NoValidHost') # check that the server still allocates from the original host self._check_allocation( server, self.compute1_rp_uuid, non_qos_normal_port, qos_normal_port, qos_sriov_port, self.flavor_with_group_policy) # but the migration allocation is gone migration_uuid = self.get_migration_uuid_for_instance(server['id']) migration_allocations = self.placement_api.get( '/allocations/%s' % migration_uuid).body['allocations'] self.assertEqual({}, migration_allocations) self._delete_server_and_check_allocations( server, qos_normal_port, qos_sriov_port) def test_migrate_server_with_qos_port_old_dest_compute_alternate(self): """Create a situation where the first migration target host returned by the scheduler is too old and therefore the second host is selected by the MigrationTask. """ self._start_compute('host3') compute3_rp_uuid = self._get_provider_uuid_by_host('host3') self._create_networking_rp_tree('host3', compute3_rp_uuid) non_qos_normal_port = self.neutron.port_1 qos_normal_port = self.neutron.port_with_resource_request qos_sriov_port = self.neutron.port_with_sriov_resource_request server = self._create_server_with_ports_and_check_allocation( non_qos_normal_port, qos_normal_port, qos_sriov_port) orig_get_service = objects.Service.get_by_host_and_binary def fake_get_service(context, host, binary): # host2 is the first migration target, let's make it too old so the # migration will skip this host if host == 'host2': service = orig_get_service(context, host, binary) service.version = 38 return service # host3 is the second migration target, let's make it new enough so # the migration task will choose this host elif host == 'host3': service = orig_get_service(context, host, binary) service.version = 39 return service else: return orig_get_service(context, host, binary) with mock.patch( 'nova.objects.Service.get_by_host_and_binary', side_effect=fake_get_service): self.api.post_server_action(server['id'], {'migrate': None}) self._wait_for_state_change(server, 'VERIFY_RESIZE') migration_uuid = self.get_migration_uuid_for_instance(server['id']) # check that server allocates from host3 and the migration allocates # from host1 self._check_allocation( server, compute3_rp_uuid, non_qos_normal_port, qos_normal_port, qos_sriov_port, self.flavor_with_group_policy, migration_uuid, source_compute_rp_uuid=self.compute1_rp_uuid) self._confirm_resize(server) # check that allocation is still OK self._check_allocation( server, compute3_rp_uuid, non_qos_normal_port, qos_normal_port, qos_sriov_port, self.flavor_with_group_policy) # but the migration 
allocation is gone migration_allocations = self.placement_api.get( '/allocations/%s' % migration_uuid).body['allocations'] self.assertEqual({}, migration_allocations) self._delete_server_and_check_allocations( server, qos_normal_port, qos_sriov_port) def _test_resize_or_migrate_server_with_qos_ports(self, new_flavor=None): non_qos_normal_port = self.neutron.port_1 qos_normal_port = self.neutron.port_with_resource_request qos_sriov_port = self.neutron.port_with_sriov_resource_request server = self._create_server_with_ports_and_check_allocation( non_qos_normal_port, qos_normal_port, qos_sriov_port) if new_flavor: self.api_fixture.api.post_server_action( server['id'], {'resize': {"flavorRef": new_flavor['id']}}) else: self.api.post_server_action(server['id'], {'migrate': None}) self._wait_for_state_change(server, 'VERIFY_RESIZE') migration_uuid = self.get_migration_uuid_for_instance(server['id']) # check that server allocates from the new host properly self._check_allocation( server, self.compute2_rp_uuid, non_qos_normal_port, qos_normal_port, qos_sriov_port, self.flavor_with_group_policy, migration_uuid, source_compute_rp_uuid=self.compute1_rp_uuid, new_flavor=new_flavor) self._assert_pci_request_pf_device_name(server, 'host2-ens2') self._confirm_resize(server) # check that allocation is still OK self._check_allocation( server, self.compute2_rp_uuid, non_qos_normal_port, qos_normal_port, qos_sriov_port, self.flavor_with_group_policy, new_flavor=new_flavor) migration_allocations = self.placement_api.get( '/allocations/%s' % migration_uuid).body['allocations'] self.assertEqual({}, migration_allocations) self._delete_server_and_check_allocations( server, qos_normal_port, qos_sriov_port) def test_migrate_server_with_qos_ports(self): self._test_resize_or_migrate_server_with_qos_ports() def test_resize_server_with_qos_ports(self): self._test_resize_or_migrate_server_with_qos_ports( new_flavor=self.flavor_with_group_policy_bigger) def _test_resize_or_migrate_revert_with_qos_ports(self, new_flavor=None): non_qos_port = self.neutron.port_1 qos_port = self.neutron.port_with_resource_request qos_sriov_port = self.neutron.port_with_sriov_resource_request server = self._create_server_with_ports_and_check_allocation( non_qos_port, qos_port, qos_sriov_port) if new_flavor: self.api_fixture.api.post_server_action( server['id'], {'resize': {"flavorRef": new_flavor['id']}}) else: self.api.post_server_action(server['id'], {'migrate': None}) self._wait_for_state_change(server, 'VERIFY_RESIZE') migration_uuid = self.get_migration_uuid_for_instance(server['id']) # check that server allocates from the new host properly self._check_allocation( server, self.compute2_rp_uuid, non_qos_port, qos_port, qos_sriov_port, self.flavor_with_group_policy, migration_uuid, source_compute_rp_uuid=self.compute1_rp_uuid, new_flavor=new_flavor) self.api.post_server_action(server['id'], {'revertResize': None}) self._wait_for_state_change(server, 'ACTIVE') # check that allocation is moved back to the source host self._check_allocation( server, self.compute1_rp_uuid, non_qos_port, qos_port, qos_sriov_port, self.flavor_with_group_policy) # check that the target host allocation is cleaned up. 
self.assertRequestMatchesUsage( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0, 'NET_BW_IGR_KILOBIT_PER_SEC': 0, 'NET_BW_EGR_KILOBIT_PER_SEC': 0}, self.compute2_rp_uuid) migration_allocations = self.placement_api.get( '/allocations/%s' % migration_uuid).body['allocations'] self.assertEqual({}, migration_allocations) self._delete_server_and_check_allocations( server, qos_port, qos_sriov_port) def test_migrate_revert_with_qos_ports(self): self._test_resize_or_migrate_revert_with_qos_ports() def test_resize_revert_with_qos_ports(self): self._test_resize_or_migrate_revert_with_qos_ports( new_flavor=self.flavor_with_group_policy_bigger) def _test_resize_or_migrate_server_with_qos_port_reschedule_success( self, new_flavor=None): self._start_compute('host3') compute3_rp_uuid = self._get_provider_uuid_by_host('host3') self._create_networking_rp_tree('host3', compute3_rp_uuid) non_qos_port = self.neutron.port_1 qos_port = self.neutron.port_with_resource_request qos_sriov_port = self.neutron.port_with_sriov_resource_request server = self._create_server_with_ports_and_check_allocation( non_qos_port, qos_port, qos_sriov_port) # Yes, this isn't great in a functional test, but it's simple. original_prep_resize = compute_manager.ComputeManager._prep_resize prep_resize_calls = [] def fake_prep_resize(_self, *args, **kwargs): # Make the first prep_resize fail and let the rest pass through to # the original _prep_resize call if not prep_resize_calls: prep_resize_calls.append(_self.host) raise test.TestingException('Simulated prep_resize failure.') prep_resize_calls.append(_self.host) original_prep_resize(_self, *args, **kwargs) # The patched compute manager will raise from _prep_resize on the # first host of the migration. Then the migration # is rescheduled to the other host where it will succeed. with mock.patch.object( compute_manager.ComputeManager, '_prep_resize', new=fake_prep_resize): if new_flavor: self.api_fixture.api.post_server_action( server['id'], {'resize': {"flavorRef": new_flavor['id']}}) else: self.api.post_server_action(server['id'], {'migrate': None}) self._wait_for_state_change(server, 'VERIFY_RESIZE') # ensure that the resize is tried on two hosts, i.e. there was a re-schedule self.assertEqual(['host2', 'host3'], prep_resize_calls) migration_uuid = self.get_migration_uuid_for_instance(server['id']) # check that the server allocates from the final host properly while # the migration holds the allocation on the source host self._check_allocation( server, compute3_rp_uuid, non_qos_port, qos_port, qos_sriov_port, self.flavor_with_group_policy, migration_uuid, source_compute_rp_uuid=self.compute1_rp_uuid, new_flavor=new_flavor) self._assert_pci_request_pf_device_name(server, 'host3-ens2') self._confirm_resize(server) # check that the allocation is still OK self._check_allocation( server, compute3_rp_uuid, non_qos_port, qos_port, qos_sriov_port, self.flavor_with_group_policy, new_flavor=new_flavor) migration_allocations = self.placement_api.get( '/allocations/%s' % migration_uuid).body['allocations'] self.assertEqual({}, migration_allocations) self._delete_server_and_check_allocations( server, qos_port, qos_sriov_port) def test_migrate_server_with_qos_port_reschedule_success(self): self._test_resize_or_migrate_server_with_qos_port_reschedule_success() def test_resize_server_with_qos_port_reschedule_success(self): self._test_resize_or_migrate_server_with_qos_port_reschedule_success( new_flavor=self.flavor_with_group_policy_bigger) def _test_resize_or_migrate_server_with_qos_port_reschedule_failure( self, new_flavor=None):
non_qos_port = self.neutron.port_1 qos_port = self.neutron.port_with_resource_request qos_sriov_port = self.neutron.port_with_sriov_resource_request server = self._create_server_with_ports_and_check_allocation( non_qos_port, qos_port, qos_sriov_port) # The patched compute manager on host2 will raise from _prep_resize. # Then the migration is rescheduled but there is no other host to # choose from. with mock.patch.object( compute_manager.ComputeManager, '_prep_resize', side_effect=test.TestingException( 'Simulated prep_resize failure.')): if new_flavor: self.api_fixture.api.post_server_action( server['id'], {'resize': {"flavorRef": new_flavor['id']}}) else: self.api.post_server_action(server['id'], {'migrate': None}) self._wait_for_server_parameter(server, {'OS-EXT-SRV-ATTR:host': 'host1', 'status': 'ERROR'}) self._wait_for_migration_status(server, ['error']) migration_uuid = self.get_migration_uuid_for_instance(server['id']) # as the migration failed we expect that the migration allocation # is deleted migration_allocations = self.placement_api.get( '/allocations/%s' % migration_uuid).body['allocations'] self.assertEqual({}, migration_allocations) # and the instance allocates from the source host self._check_allocation( server, self.compute1_rp_uuid, non_qos_port, qos_port, qos_sriov_port, self.flavor_with_group_policy) def test_migrate_server_with_qos_port_reschedule_failure(self): self._test_resize_or_migrate_server_with_qos_port_reschedule_failure() def test_resize_server_with_qos_port_reschedule_failure(self): self._test_resize_or_migrate_server_with_qos_port_reschedule_failure( new_flavor=self.flavor_with_group_policy_bigger) def test_migrate_server_with_qos_port_pci_update_fail_not_reschedule(self): # Update the name of the network device RP of PF2 on host2 to something # unexpected. This will cause # update_pci_request_spec_with_allocated_interface_name() to raise # when the instance is migrated to host2. rsp = self.placement_api.put( '/resource_providers/%s' % self.sriov_dev_rp_per_host[self.compute2_rp_uuid][self.PF2], {"name": "invalid-device-rp-name"}) self.assertEqual(200, rsp.status) self._start_compute('host3') compute3_rp_uuid = self._get_provider_uuid_by_host('host3') self._create_networking_rp_tree('host3', compute3_rp_uuid) non_qos_port = self.neutron.port_1 qos_port = self.neutron.port_with_resource_request qos_sriov_port = self.neutron.port_with_sriov_resource_request server = self._create_server_with_ports_and_check_allocation( non_qos_port, qos_port, qos_sriov_port) # The compute manager on host2 will raise from # update_pci_request_spec_with_allocated_interface_name which will # intentionally not trigger a re-schedule even if there is host3 as an # alternate. self.api.post_server_action(server['id'], {'migrate': None}) server = self._wait_for_server_parameter(server, {'OS-EXT-SRV-ATTR:host': 'host1', # Note that we have to wait for the task_state to be reverted # to None since that happens after the fault is recorded.
'OS-EXT-STS:task_state': None, 'status': 'ERROR'}) self._wait_for_migration_status(server, ['error']) self.assertIn( 'Build of instance %s aborted' % server['id'], server['fault']['message']) self._wait_for_action_fail_completion( server, instance_actions.MIGRATE, 'compute_prep_resize') fake_notifier.wait_for_versioned_notifications( 'instance.resize_prep.end') fake_notifier.wait_for_versioned_notifications( 'compute.exception') migration_uuid = self.get_migration_uuid_for_instance(server['id']) # as the migration failed we expect that the migration allocation # is deleted migration_allocations = self.placement_api.get( '/allocations/%s' % migration_uuid).body['allocations'] self.assertEqual({}, migration_allocations) # and the instance allocates from the source host self._check_allocation( server, self.compute1_rp_uuid, non_qos_port, qos_port, qos_sriov_port, self.flavor_with_group_policy) def test_migrate_server_with_qos_port_pinned_compute_rpc(self): # Pin the compute rpc version to 5.1 to test what happens if # the resize RPC is called without a RequestSpec. # It is OK to set this after the nova services have started in setUp() # as no compute rpc call is made so far. self.flags(compute='5.1', group='upgrade_levels') non_qos_normal_port = self.neutron.port_1 qos_normal_port = self.neutron.port_with_resource_request server = self._create_server_with_ports( non_qos_normal_port, qos_normal_port) # This migration is expected to fail as the old RPC does not provide # enough information to do a proper port binding on the target host. # The MigrationTask in the conductor checks that the RPC is new enough # for this request for each possible destination provided by the # scheduler and skips the old hosts. The actual response will be a 202 # so we have to wait for the failed instance action event.
self.api.post_server_action(server['id'], {'migrate': None}) self._assert_resize_migrate_action_fail( server, instance_actions.MIGRATE, 'NoValidHost') # The migration is put into error self._wait_for_migration_status(server, ['error']) # The migration is rejected so the instance remains on the source host server = self.api.get_server(server['id']) self.assertEqual('ACTIVE', server['status']) self.assertEqual('host1', server['OS-EXT-SRV-ATTR:host']) migration_uuid = self.get_migration_uuid_for_instance(server['id']) # The migration allocation is deleted migration_allocations = self.placement_api.get( '/allocations/%s' % migration_uuid).body['allocations'] self.assertEqual({}, migration_allocations) # The instance is still allocated from the source host updated_non_qos_port = self.neutron.show_port( non_qos_normal_port['id'])['port'] updated_qos_port = self.neutron.show_port( qos_normal_port['id'])['port'] allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] # We expect one set of allocations for the compute resources on the # compute rp and one set for the networking resources on the ovs # bridge rp due to the qos_port resource request self.assertEqual(2, len(allocations)) self.assertComputeAllocationMatchesFlavor( allocations, self.compute1_rp_uuid, self.flavor_with_group_policy) ovs_allocations = allocations[ self.ovs_bridge_rp_per_host[self.compute1_rp_uuid]]['resources'] self.assertPortMatchesAllocation(qos_normal_port, ovs_allocations) # binding:profile still points to the networking RP on the source host qos_binding_profile = updated_qos_port['binding:profile'] self.assertEqual(self.ovs_bridge_rp_per_host[self.compute1_rp_uuid], qos_binding_profile['allocation']) # And we expect not to have any allocation set in the port binding for # the port that doesn't have resource request self.assertEqual({}, updated_non_qos_port['binding:profile']) def _check_allocation_during_evacuate( self, server, flavor, source_compute_rp_uuid, dest_compute_rp_uuid, non_qos_port, qos_port, qos_sriov_port): # evacuate is the only case when the same consumer has allocation from # two different RP trees so we need special checks updated_non_qos_port = self.neutron.show_port( non_qos_port['id'])['port'] updated_qos_port = self.neutron.show_port(qos_port['id'])['port'] updated_qos_sriov_port = self.neutron.show_port( qos_sriov_port['id'])['port'] allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] # We expect two sets of allocations. One set for the source compute # and one set for the dest compute. Each set we expect 3 allocations # one for the compute RP according to the flavor, one for the ovs port # and one for the SRIOV port. self.assertEqual(6, len(allocations), allocations) # 1. source compute allocation compute_allocations = allocations[source_compute_rp_uuid]['resources'] self.assertEqual( self._resources_from_flavor(flavor), compute_allocations) # 2. source ovs allocation ovs_allocations = allocations[ self.ovs_bridge_rp_per_host[source_compute_rp_uuid]]['resources'] self.assertPortMatchesAllocation(qos_port, ovs_allocations) # 3. source sriov allocation sriov_allocations = allocations[ self.sriov_dev_rp_per_host[ source_compute_rp_uuid][self.PF2]]['resources'] self.assertPortMatchesAllocation(qos_sriov_port, sriov_allocations) # 4. dest compute allocation compute_allocations = allocations[dest_compute_rp_uuid]['resources'] self.assertEqual( self._resources_from_flavor(flavor), compute_allocations) # 5. 
dest ovs allocation ovs_allocations = allocations[ self.ovs_bridge_rp_per_host[dest_compute_rp_uuid]]['resources'] self.assertPortMatchesAllocation(qos_port, ovs_allocations) # 6. dest SRIOV allocation sriov_allocations = allocations[ self.sriov_dev_rp_per_host[ dest_compute_rp_uuid][self.PF2]]['resources'] self.assertPortMatchesAllocation(qos_sriov_port, sriov_allocations) # the qos ports should have their binding pointing to the RPs in the # dest compute RP tree qos_binding_profile = updated_qos_port['binding:profile'] self.assertEqual( self.ovs_bridge_rp_per_host[dest_compute_rp_uuid], qos_binding_profile['allocation']) qos_sriov_binding_profile = updated_qos_sriov_port['binding:profile'] self.assertEqual( self.sriov_dev_rp_per_host[dest_compute_rp_uuid][self.PF2], qos_sriov_binding_profile['allocation']) # And we expect not to have any allocation set in the port binding for # the port that doesn't have resource request self.assertEqual({}, updated_non_qos_port['binding:profile']) def _check_allocation_after_evacuation_source_recovered( self, server, flavor, dest_compute_rp_uuid, non_qos_port, qos_port, qos_sriov_port): # check that source allocation is cleaned up and the dest allocation # and the port bindings are not touched. updated_non_qos_port = self.neutron.show_port( non_qos_port['id'])['port'] updated_qos_port = self.neutron.show_port(qos_port['id'])['port'] updated_qos_sriov_port = self.neutron.show_port( qos_sriov_port['id'])['port'] allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] self.assertEqual(3, len(allocations), allocations) # 1. dest compute allocation compute_allocations = allocations[dest_compute_rp_uuid]['resources'] self.assertEqual( self._resources_from_flavor(flavor), compute_allocations) # 2. dest ovs allocation ovs_allocations = allocations[ self.ovs_bridge_rp_per_host[dest_compute_rp_uuid]]['resources'] self.assertPortMatchesAllocation(qos_port, ovs_allocations) # 3. 
dest SRIOV allocation sriov_allocations = allocations[ self.sriov_dev_rp_per_host[ dest_compute_rp_uuid][self.PF2]]['resources'] self.assertPortMatchesAllocation(qos_sriov_port, sriov_allocations) # the qos ports should have their binding pointing to the RPs in the # dest compute RP tree qos_binding_profile = updated_qos_port['binding:profile'] self.assertEqual( self.ovs_bridge_rp_per_host[dest_compute_rp_uuid], qos_binding_profile['allocation']) qos_sriov_binding_profile = updated_qos_sriov_port['binding:profile'] self.assertEqual( self.sriov_dev_rp_per_host[dest_compute_rp_uuid][self.PF2], qos_sriov_binding_profile['allocation']) # And we expect not to have any allocation set in the port binding for # the port that doesn't have resource request self.assertEqual({}, updated_non_qos_port['binding:profile']) def test_evacuate_with_qos_port(self, host=None): non_qos_normal_port = self.neutron.port_1 qos_normal_port = self.neutron.port_with_resource_request qos_sriov_port = self.neutron.port_with_sriov_resource_request server = self._create_server_with_ports_and_check_allocation( non_qos_normal_port, qos_normal_port, qos_sriov_port) # force source compute down self.compute1.stop() self.admin_api.put_service( self.compute1_service_id, {'forced_down': 'true'}) req = {'evacuate': {}} if host: req['evacuate']['host'] = host self.api.post_server_action(server['id'], req) self._wait_for_server_parameter(server, {'OS-EXT-SRV-ATTR:host': 'host2', 'status': 'ACTIVE'}) self._check_allocation_during_evacuate( server, self.flavor_with_group_policy, self.compute1_rp_uuid, self.compute2_rp_uuid, non_qos_normal_port, qos_normal_port, qos_sriov_port) self._assert_pci_request_pf_device_name(server, 'host2-ens2') # recover source compute self.admin_api.put_service( self.compute1_service_id, {'forced_down': 'false'}) self.compute1 = self.restart_compute_service(self.compute1) # check that source allocation is cleaned up and the dest allocation # and the port bindings are not touched. 
self._check_allocation_after_evacuation_source_recovered( server, self.flavor_with_group_policy, self.compute2_rp_uuid, non_qos_normal_port, qos_normal_port, qos_sriov_port) self._delete_server_and_check_allocations( server, qos_normal_port, qos_sriov_port) def test_evacuate_with_target_host_with_qos_port(self): self.test_evacuate_with_qos_port(host='host2') def test_evacuate_with_qos_port_fails_recover_source_compute(self): non_qos_normal_port = self.neutron.port_1 qos_normal_port = self.neutron.port_with_resource_request qos_sriov_port = self.neutron.port_with_sriov_resource_request server = self._create_server_with_ports_and_check_allocation( non_qos_normal_port, qos_normal_port, qos_sriov_port) # force source compute down self.compute1.stop() self.admin_api.put_service( self.compute1_service_id, {'forced_down': 'true'}) with mock.patch( 'nova.compute.resource_tracker.ResourceTracker.rebuild_claim', side_effect=exception.ComputeResourcesUnavailable( reason='test evacuate failure')): req = {'evacuate': {}} self.api.post_server_action(server['id'], req) # Evacuate does not have reschedule loop so evacuate expected to # simply fail and the server remains on the source host server = self._wait_for_server_parameter(server, {'OS-EXT-SRV-ATTR:host': 'host1', 'status': 'ACTIVE', 'OS-EXT-STS:task_state': None}) self._wait_for_migration_status(server, ['failed']) # As evacuation failed the resource allocation should be untouched self._check_allocation( server, self.compute1_rp_uuid, non_qos_normal_port, qos_normal_port, qos_sriov_port, self.flavor_with_group_policy) # recover source compute self.admin_api.put_service( self.compute1_service_id, {'forced_down': 'false'}) self.compute1 = self.restart_compute_service(self.compute1) # check again that even after source host recovery the source # allocation is intact self._check_allocation( server, self.compute1_rp_uuid, non_qos_normal_port, qos_normal_port, qos_sriov_port, self.flavor_with_group_policy) self._delete_server_and_check_allocations( server, qos_normal_port, qos_sriov_port) def test_evacuate_with_qos_port_pci_update_fail(self): # Update the name of the network device RP of PF2 on host2 to something # unexpected. This will cause # update_pci_request_spec_with_allocated_interface_name() to raise # when the instance is evacuated to the host2. 
rsp = self.placement_api.put( '/resource_providers/%s' % self.sriov_dev_rp_per_host[self.compute2_rp_uuid][self.PF2], {"name": "invalid-device-rp-name"}) self.assertEqual(200, rsp.status) non_qos_port = self.neutron.port_1 qos_port = self.neutron.port_with_resource_request qos_sriov_port = self.neutron.port_with_sriov_resource_request server = self._create_server_with_ports_and_check_allocation( non_qos_port, qos_port, qos_sriov_port) # force source compute down self.compute1.stop() self.admin_api.put_service( self.compute1_service_id, {'forced_down': 'true'}) # The compute manager on host2 will raise from # update_pci_request_spec_with_allocated_interface_name self.api.post_server_action(server['id'], {'evacuate': {}}) server = self._wait_for_server_parameter(server, {'OS-EXT-SRV-ATTR:host': 'host1', 'OS-EXT-STS:task_state': None, 'status': 'ERROR'}) self._wait_for_migration_status(server, ['failed']) self.assertIn( 'does not have a properly formatted name', server['fault']['message']) self._wait_for_action_fail_completion( server, instance_actions.EVACUATE, 'compute_rebuild_instance') fake_notifier.wait_for_versioned_notifications( 'instance.rebuild.error') fake_notifier.wait_for_versioned_notifications( 'compute.exception') # and the instance allocates from the source host self._check_allocation( server, self.compute1_rp_uuid, non_qos_port, qos_port, qos_sriov_port, self.flavor_with_group_policy) def test_live_migrate_with_qos_port(self, host=None): non_qos_normal_port = self.neutron.port_1 qos_normal_port = self.neutron.port_with_resource_request qos_sriov_port = self.neutron.port_with_sriov_resource_request server = self._create_server_with_ports_and_check_allocation( non_qos_normal_port, qos_normal_port, qos_sriov_port) self.api.post_server_action( server['id'], { 'os-migrateLive': { 'host': host, 'block_migration': 'auto' } } ) self._wait_for_server_parameter( server, {'OS-EXT-SRV-ATTR:host': 'host2', 'status': 'ACTIVE'}) self._check_allocation( server, self.compute2_rp_uuid, non_qos_normal_port, qos_normal_port, qos_sriov_port, self.flavor_with_group_policy) self._assert_pci_request_pf_device_name(server, 'host2-ens2') self._delete_server_and_check_allocations( server, qos_normal_port, qos_sriov_port) def test_live_migrate_with_qos_port_with_target_host(self): self.test_live_migrate_with_qos_port(host='host2') def test_live_migrate_with_qos_port_reschedule_success(self): self._start_compute('host3') compute3_rp_uuid = self._get_provider_uuid_by_host('host3') self._create_networking_rp_tree('host3', compute3_rp_uuid) non_qos_normal_port = self.neutron.port_1 qos_normal_port = self.neutron.port_with_resource_request qos_sriov_port = self.neutron.port_with_sriov_resource_request server = self._create_server_with_ports_and_check_allocation( non_qos_normal_port, qos_normal_port, qos_sriov_port) orig_check = fake.FakeDriver.check_can_live_migrate_destination def fake_check_can_live_migrate_destination( context, instance, src_compute_info, dst_compute_info, block_migration=False, disk_over_commit=False): if dst_compute_info['host'] == 'host2': raise exception.MigrationPreCheckError( reason='test_live_migrate_pre_check_fails') else: return orig_check( context, instance, src_compute_info, dst_compute_info, block_migration, disk_over_commit) with mock.patch('nova.virt.fake.FakeDriver.' 
'check_can_live_migrate_destination', side_effect=fake_check_can_live_migrate_destination): self.api.post_server_action( server['id'], { 'os-migrateLive': { 'host': None, 'block_migration': 'auto' } } ) # The first migration attempt was to host2. So we expect that the # instance lands on host3. self._wait_for_server_parameter( server, {'OS-EXT-SRV-ATTR:host': 'host3', 'status': 'ACTIVE'}) self._check_allocation( server, compute3_rp_uuid, non_qos_normal_port, qos_normal_port, qos_sriov_port, self.flavor_with_group_policy) self._assert_pci_request_pf_device_name(server, 'host3-ens2') self._delete_server_and_check_allocations( server, qos_normal_port, qos_sriov_port) def test_live_migrate_with_qos_port_reschedule_fails(self): non_qos_normal_port = self.neutron.port_1 qos_normal_port = self.neutron.port_with_resource_request qos_sriov_port = self.neutron.port_with_sriov_resource_request server = self._create_server_with_ports_and_check_allocation( non_qos_normal_port, qos_normal_port, qos_sriov_port) with mock.patch( 'nova.virt.fake.FakeDriver.check_can_live_migrate_destination', side_effect=exception.MigrationPreCheckError( reason='test_live_migrate_pre_check_fails')): self.api.post_server_action( server['id'], { 'os-migrateLive': { 'host': None, 'block_migration': 'auto' } } ) # Every migration target host will fail the pre-check so # the conductor will run out of target hosts and the migration will # fail self._wait_for_migration_status(server, ['error']) # the server will remain on host1 self._wait_for_server_parameter( server, {'OS-EXT-SRV-ATTR:host': 'host1', 'status': 'ACTIVE'}) self._check_allocation( server, self.compute1_rp_uuid, non_qos_normal_port, qos_normal_port, qos_sriov_port, self.flavor_with_group_policy) # Assert that the InstancePCIRequests also rolled back to point to # host1 self._assert_pci_request_pf_device_name(server, 'host1-ens2') self._delete_server_and_check_allocations( server, qos_normal_port, qos_sriov_port) def test_live_migrate_with_qos_port_pci_update_fails(self): # Update the name of the network device RP of PF2 on host2 to something # unexpected. This will cause # update_pci_request_spec_with_allocated_interface_name() to raise # when the instance is live migrated to host2.
rsp = self.placement_api.put( '/resource_providers/%s' % self.sriov_dev_rp_per_host[self.compute2_rp_uuid][self.PF2], {"name": "invalid-device-rp-name"}) self.assertEqual(200, rsp.status) non_qos_normal_port = self.neutron.port_1 qos_normal_port = self.neutron.port_with_resource_request qos_sriov_port = self.neutron.port_with_sriov_resource_request server = self._create_server_with_ports_and_check_allocation( non_qos_normal_port, qos_normal_port, qos_sriov_port) self.api.post_server_action( server['id'], { 'os-migrateLive': { 'host': None, 'block_migration': 'auto' } } ) # pci update will fail after scheduling to host2 self._wait_for_migration_status(server, ['error']) server = self._wait_for_server_parameter( server, {'OS-EXT-SRV-ATTR:host': 'host1', 'status': 'ERROR'}) self.assertIn( 'does not have a properly formatted name', server['fault']['message']) self._check_allocation( server, self.compute1_rp_uuid, non_qos_normal_port, qos_normal_port, qos_sriov_port, self.flavor_with_group_policy) # Assert that the InstancePCIRequests still point to host1 self._assert_pci_request_pf_device_name(server, 'host1-ens2') self._delete_server_and_check_allocations( server, qos_normal_port, qos_sriov_port) def test_unshelve_offloaded_server_with_qos_port(self): non_qos_normal_port = self.neutron.port_1 qos_normal_port = self.neutron.port_with_resource_request qos_sriov_port = self.neutron.port_with_sriov_resource_request server = self._create_server_with_ports_and_check_allocation( non_qos_normal_port, qos_normal_port, qos_sriov_port) # with default config shelve means immediate offload as well req = { 'shelve': {} } self.api.post_server_action(server['id'], req) self._wait_for_server_parameter( server, {'status': 'SHELVED_OFFLOADED'}) allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] self.assertEqual(0, len(allocations)) self.api.post_server_action(server['id'], {'unshelve': None}) self._wait_for_server_parameter( server, {'OS-EXT-SRV-ATTR:host': 'host1', 'status': 'ACTIVE'}) self._check_allocation( server, self.compute1_rp_uuid, non_qos_normal_port, qos_normal_port, qos_sriov_port, self.flavor_with_group_policy) self._assert_pci_request_pf_device_name(server, 'host1-ens2') # shelve offload again and then make host1 unusable so the subsequent # unshelve needs to select host2 req = { 'shelve': {} } self.api.post_server_action(server['id'], req) self._wait_for_server_parameter( server, {'status': 'SHELVED_OFFLOADED'}) allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] self.assertEqual(0, len(allocations)) self.admin_api.put_service( self.compute1_service_id, {"status": "disabled"}) self.api.post_server_action(server['id'], {'unshelve': None}) self._wait_for_server_parameter( server, {'OS-EXT-SRV-ATTR:host': 'host2', 'status': 'ACTIVE'}) self._check_allocation( server, self.compute2_rp_uuid, non_qos_normal_port, qos_normal_port, qos_sriov_port, self.flavor_with_group_policy) self._assert_pci_request_pf_device_name(server, 'host2-ens2') self._delete_server_and_check_allocations( server, qos_normal_port, qos_sriov_port) def test_unshelve_offloaded_server_with_qos_port_pci_update_fails(self): # Update the name of the network device RP of PF2 on host2 to something # unexpected. This will cause # update_pci_request_spec_with_allocated_interface_name() to raise # when the instance is unshelved to the host2. 
rsp = self.placement_api.put( '/resource_providers/%s' % self.sriov_dev_rp_per_host[self.compute2_rp_uuid][self.PF2], {"name": "invalid-device-rp-name"}) self.assertEqual(200, rsp.status) non_qos_normal_port = self.neutron.port_1 qos_normal_port = self.neutron.port_with_resource_request qos_sriov_port = self.neutron.port_with_sriov_resource_request server = self._create_server_with_ports_and_check_allocation( non_qos_normal_port, qos_normal_port, qos_sriov_port) # with default config shelve means immediate offload as well req = { 'shelve': {} } self.api.post_server_action(server['id'], req) self._wait_for_server_parameter( server, {'status': 'SHELVED_OFFLOADED'}) allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] self.assertEqual(0, len(allocations)) # make host1 unusable so the subsequent unshelve needs to select host2 self.admin_api.put_service( self.compute1_service_id, {"status": "disabled"}) self.api.post_server_action(server['id'], {'unshelve': None}) # Unshelve fails on host2 due to # update_pci_request_spec_with_allocated_interface_name fails so the # instance goes back to shelve offloaded state fake_notifier.wait_for_versioned_notifications( 'instance.unshelve.start') error_notification = fake_notifier.wait_for_versioned_notifications( 'compute.exception')[0] self.assertEqual( 'UnexpectedResourceProviderNameForPCIRequest', error_notification['payload']['nova_object.data']['exception']) server = self._wait_for_server_parameter( server, {'OS-EXT-STS:task_state': None, 'status': 'SHELVED_OFFLOADED'}) allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] self.assertEqual(0, len(allocations)) self._delete_server_and_check_allocations( server, qos_normal_port, qos_sriov_port) def test_unshelve_offloaded_server_with_qos_port_fails_due_to_neutron( self): non_qos_normal_port = self.neutron.port_1 qos_normal_port = self.neutron.port_with_resource_request qos_sriov_port = self.neutron.port_with_sriov_resource_request server = self._create_server_with_ports_and_check_allocation( non_qos_normal_port, qos_normal_port, qos_sriov_port) # with default config shelve means immediate offload as well req = { 'shelve': {} } self.api.post_server_action(server['id'], req) self._wait_for_server_parameter( server, {'status': 'SHELVED_OFFLOADED'}) allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] self.assertEqual(0, len(allocations)) # Simulate that port update fails during unshelve due to neutron is # unavailable with mock.patch( 'nova.tests.fixtures.NeutronFixture.' 
'update_port') as mock_update_port: mock_update_port.side_effect = neutron_exception.ConnectionFailed( reason='test') req = {'unshelve': None} self.api.post_server_action(server['id'], req) fake_notifier.wait_for_versioned_notifications( 'instance.unshelve.start') self._wait_for_server_parameter( server, {'status': 'SHELVED_OFFLOADED', 'OS-EXT-STS:task_state': None}) # As the instance went back to offloaded state we expect no allocation allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] self.assertEqual(0, len(allocations)) self._delete_server_and_check_allocations( server, qos_normal_port, qos_sriov_port) class LiveMigrateAbortWithPortResourceRequestTest( PortResourceRequestBasedSchedulingTestBase): compute_driver = "fake.FakeLiveMigrateDriverWithPciResources" def setUp(self): # Use a custom weigher to make sure that we have a predictable host # order in the alternate list returned by the scheduler for migration. self.useFixture(nova_fixtures.HostNameWeigherFixture()) super(LiveMigrateAbortWithPortResourceRequestTest, self).setUp() self.compute2 = self._start_compute('host2') self.compute2_rp_uuid = self._get_provider_uuid_by_host('host2') self._create_networking_rp_tree('host2', self.compute2_rp_uuid) self.compute2_service_id = self.admin_api.get_services( host='host2', binary='nova-compute')[0]['id'] def test_live_migrate_with_qos_port_abort_migration(self): non_qos_normal_port = self.neutron.port_1 qos_normal_port = self.neutron.port_with_resource_request qos_sriov_port = self.neutron.port_with_sriov_resource_request server = self._create_server_with_ports_and_check_allocation( non_qos_normal_port, qos_normal_port, qos_sriov_port) # The special virt driver will keep the live migration running until it # is aborted. self.api.post_server_action( server['id'], { 'os-migrateLive': { 'host': None, 'block_migration': 'auto' } } ) # wait for the migration to start migration = self._wait_for_migration_status(server, ['running']) # delete the migration to abort it self.api.delete_migration(server['id'], migration['id']) self._wait_for_migration_status(server, ['cancelled']) self._wait_for_server_parameter( server, {'OS-EXT-SRV-ATTR:host': 'host1', 'status': 'ACTIVE'}) self._check_allocation( server, self.compute1_rp_uuid, non_qos_normal_port, qos_normal_port, qos_sriov_port, self.flavor_with_group_policy) # Assert that the InstancePCIRequests rolled back to point to host1 self._assert_pci_request_pf_device_name(server, 'host1-ens2') self._delete_server_and_check_allocations( server, qos_normal_port, qos_sriov_port) class PortResourceRequestReSchedulingTest( PortResourceRequestBasedSchedulingTestBase): """Similar to PortResourceRequestBasedSchedulingTest except this test uses FakeRescheduleDriver which will test reschedules during server create work as expected, i.e. that the resource request allocations are moved from the initially selected compute to the alternative compute. 
""" compute_driver = 'fake.FakeRescheduleDriver' def setUp(self): super(PortResourceRequestReSchedulingTest, self).setUp() self.compute2 = self._start_compute('host2') self.compute2_rp_uuid = self._get_provider_uuid_by_host('host2') self._create_networking_rp_tree('host2', self.compute2_rp_uuid) def _create_networking_rp_tree(self, hostname, compute_rp_uuid): # let's simulate what the neutron would do self._create_ovs_networking_rp_tree(compute_rp_uuid) def test_boot_reschedule_success(self): port = self.neutron.port_with_resource_request server = self._create_server( flavor=self.flavor, networks=[{'port': port['id']}]) server = self._wait_for_state_change(server, 'ACTIVE') updated_port = self.neutron.show_port(port['id'])['port'] dest_hostname = server['OS-EXT-SRV-ATTR:host'] dest_compute_rp_uuid = self._get_provider_uuid_by_host(dest_hostname) failed_compute_rp = (self.compute1_rp_uuid if dest_compute_rp_uuid == self.compute2_rp_uuid else self.compute2_rp_uuid) allocations = self.placement_api.get( '/allocations/%s' % server['id']).body['allocations'] # We expect one set of allocations for the compute resources on the # compute rp and one set for the networking resources on the ovs bridge # rp self.assertEqual(2, len(allocations)) self.assertComputeAllocationMatchesFlavor( allocations, dest_compute_rp_uuid, self.flavor) network_allocations = allocations[ self.ovs_bridge_rp_per_host[dest_compute_rp_uuid]]['resources'] self.assertPortMatchesAllocation(port, network_allocations) # assert that the allocations against the host where the spawn # failed are cleaned up properly self.assertEqual( {'VCPU': 0, 'MEMORY_MB': 0, 'DISK_GB': 0}, self._get_provider_usages(failed_compute_rp)) self.assertEqual( {'NET_BW_EGR_KILOBIT_PER_SEC': 0, 'NET_BW_IGR_KILOBIT_PER_SEC': 0}, self._get_provider_usages( self.ovs_bridge_rp_per_host[failed_compute_rp])) # We expect that only the RP uuid of the networking RP having the port # allocation is sent in the port binding binding_profile = updated_port['binding:profile'] self.assertEqual(self.ovs_bridge_rp_per_host[dest_compute_rp_uuid], binding_profile['allocation']) self._delete_and_check_allocations(server) # assert that unbind removes the allocation from the binding updated_port = self.neutron.show_port(port['id'])['port'] binding_profile = updated_port['binding:profile'] self.assertNotIn('allocation', binding_profile) def test_boot_reschedule_fill_provider_mapping_raises(self): """Verify that if the _fill_provider_mapping raises during re-schedule then the instance is properly put into ERROR state. """ port = self.neutron.port_with_resource_request # First call is during boot, we want that to succeed normally. Then the # fake virt driver triggers a re-schedule. During that re-schedule the # fill is called again, and we simulate that call raises. 
original_fill = utils.fill_provider_mapping def stub_fill_provider_mapping(*args, **kwargs): if not mock_fill.called: return original_fill(*args, **kwargs) raise exception.ResourceProviderTraitRetrievalFailed( uuid=uuids.rp1) with mock.patch( 'nova.scheduler.utils.fill_provider_mapping', side_effect=stub_fill_provider_mapping) as mock_fill: server = self._create_server( flavor=self.flavor, networks=[{'port': port['id']}]) server = self._wait_for_state_change(server, 'ERROR') self.assertIn( 'Failed to get traits for resource provider', server['fault']['message']) self._delete_and_check_allocations(server) # assert that unbind removes the allocation from the binding updated_port = self.neutron.show_port(port['id'])['port'] binding_profile = neutronapi.get_binding_profile(updated_port) self.assertNotIn('allocation', binding_profile) class AcceleratorServerBase(integrated_helpers.ProviderUsageBaseTestCase): compute_driver = 'fake.SmallFakeDriver' def setUp(self): super(AcceleratorServerBase, self).setUp() self.cyborg = self.useFixture(nova_fixtures.CyborgFixture()) # dict of form {$compute_rp_uuid: $device_rp_uuid} self.device_rp_map = {} # self.NUM_HOSTS should be set up by derived classes if not hasattr(self, 'NUM_HOSTS'): self.NUM_HOSTS = 1 self._setup_compute_nodes_and_device_rps() def _setup_compute_nodes_and_device_rps(self): self.compute_services = [] for i in range(self.NUM_HOSTS): svc = self._start_compute(host='accel_host' + str(i)) self.compute_services.append(svc) self.compute_rp_uuids = [ rp['uuid'] for rp in self._get_all_providers() if rp['uuid'] == rp['root_provider_uuid']] for index, uuid in enumerate(self.compute_rp_uuids): device_rp_uuid = self._create_device_rp(index, uuid) self.device_rp_map[uuid] = device_rp_uuid self.device_rp_uuids = list(self.device_rp_map.values()) def _create_device_rp(self, index, compute_rp_uuid, resource='FPGA', res_amt=2): """Created nested RP for a device. There is one per host. :param index: Number of the device rp uuid for this setup :param compute_rp_uuid: Resource provider UUID of the host. :param resource: Placement resource name. Assumed to be a standard resource class. :param res_amt: Amount of the resource. 
:returns: Device RP UUID """ resp = self._post_nested_resource_provider( 'FakeDevice' + str(index), parent_rp_uuid=compute_rp_uuid) device_rp_uuid = resp['uuid'] inventory = { 'resource_provider_generation': 0, 'inventories': { resource: { 'total': res_amt, 'allocation_ratio': 1.0, 'max_unit': res_amt, 'min_unit': 1, 'reserved': 0, 'step_size': 1, } }, } self._update_inventory(device_rp_uuid, inventory) self._create_trait(self.cyborg.trait) self._set_provider_traits(device_rp_uuid, [self.cyborg.trait]) return device_rp_uuid def _post_nested_resource_provider(self, rp_name, parent_rp_uuid): body = {'name': rp_name, 'parent_provider_uuid': parent_rp_uuid} return self.placement_api.post( url='/resource_providers', version='1.20', body=body).body def _create_acc_flavor(self): extra_specs = {'accel:device_profile': self.cyborg.dp_name} flavor_id = self._create_flavor(name='acc.tiny', extra_spec=extra_specs) return flavor_id def _check_allocations_usage(self, server, check_other_host_alloc=True): # Check allocations on host where instance is running server_uuid = server['id'] hostname = server['OS-EXT-SRV-ATTR:host'] server_host_rp_uuid = self._get_provider_uuid_by_host(hostname) expected_host_alloc = { 'resources': {'VCPU': 2, 'MEMORY_MB': 2048, 'DISK_GB': 20}, } expected_device_alloc = {'resources': {'FPGA': 1}} for i in range(self.NUM_HOSTS): compute_uuid = self.compute_rp_uuids[i] device_uuid = self.device_rp_map[compute_uuid] host_alloc = self._get_allocations_by_provider_uuid(compute_uuid) device_alloc = self._get_allocations_by_provider_uuid(device_uuid) if compute_uuid == server_host_rp_uuid: self.assertEqual(expected_host_alloc, host_alloc[server_uuid]) self.assertEqual(expected_device_alloc, device_alloc[server_uuid]) else: if check_other_host_alloc: self.assertEqual({}, host_alloc) self.assertEqual({}, device_alloc) # NOTE(Sundar): ARQs for an instance could come from different # devices in the same host, in general. But, in this test case, # there is only one device in the host. So, we check for that. device_rp_uuid = self.device_rp_map[server_host_rp_uuid] expected_arq_bind_info = set([('Bound', hostname, device_rp_uuid, server_uuid)]) arqs = nova_fixtures.CyborgFixture.fake_get_arqs_for_instance( server_uuid) # The state is hardcoded but other fields come from the test case. arq_bind_info = {(arq['state'], arq['hostname'], arq['device_rp_uuid'], arq['instance_uuid']) for arq in arqs} self.assertSetEqual(expected_arq_bind_info, arq_bind_info) def _check_no_allocs_usage(self, server_uuid): allocs = self._get_allocations_by_server_uuid(server_uuid) self.assertEqual({}, allocs) for i in range(self.NUM_HOSTS): host_alloc = self._get_allocations_by_provider_uuid( self.compute_rp_uuids[i]) self.assertEqual({}, host_alloc) device_alloc = self._get_allocations_by_provider_uuid( self.device_rp_uuids[i]) self.assertEqual({}, device_alloc) usage = self._get_provider_usages( self.device_rp_uuids[i]).get('FPGA') self.assertEqual(0, usage) class AcceleratorServerTest(AcceleratorServerBase): def setUp(self): self.NUM_HOSTS = 1 super(AcceleratorServerTest, self).setUp() def _get_server(self, expected_state='ACTIVE'): flavor_id = self._create_acc_flavor() server_name = 'accel_server1' server = self._create_server( server_name, flavor_id=flavor_id, image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', networks='none', expected_state=expected_state) return server def test_create_delete_server_ok(self): server = self._get_server() # Verify that the host name and the device rp UUID are set properly. 
# Other fields in the ARQ are hardcoded data from the fixture. arqs = self.cyborg.fake_get_arqs_for_instance(server['id']) self.assertEqual(self.device_rp_uuids[0], arqs[0]['device_rp_uuid']) self.assertEqual(server['OS-EXT-SRV-ATTR:host'], arqs[0]['hostname']) # Check allocations and usage self._check_allocations_usage(server) # Delete server and check that ARQs got deleted self.api.delete_server(server['id']) self._wait_until_deleted(server) self.cyborg.mock_del_arqs.assert_called_once_with(server['id']) # Check that resources are freed self._check_no_allocs_usage(server['id']) def test_create_server_with_error(self): def throw_error(*args, **kwargs): raise exception.BuildAbortException(reason='', instance_uuid='fake') self.stub_out('nova.virt.fake.FakeDriver.spawn', throw_error) server = self._get_server(expected_state='ERROR') server_uuid = server['id'] # Check that Cyborg was called to delete ARQs self.cyborg.mock_del_arqs.assert_called_once_with(server_uuid) # BuildAbortException will not trigger a reschedule and the placement # cleanup is the last step in the compute manager after instance state # setting, fault recording and notification sending. So we have no # other way than simply wait to ensure the placement cleanup happens # before we assert it. def placement_cleanup(): # An instance in error state should consume no resources self._check_no_allocs_usage(server_uuid) self._wait_for_assert(placement_cleanup) self.api.delete_server(server_uuid) self._wait_until_deleted(server) # Verify that there is one more call to delete ARQs self.cyborg.mock_del_arqs.assert_has_calls( [mock.call(server_uuid), mock.call(server_uuid)]) # Verify that no allocations/usages remain after deletion self._check_no_allocs_usage(server_uuid) def test_create_server_with_local_delete(self): """Delete the server when compute service is down.""" server = self._get_server() server_uuid = server['id'] # Stop the server. self.api.post_server_action(server_uuid, {'os-stop': {}}) self._wait_for_state_change(server, 'SHUTOFF') self._check_allocations_usage(server) # Stop and force down the compute service. compute_id = self.admin_api.get_services( host='accel_host0', binary='nova-compute')[0]['id'] self.compute_services[0].stop() self.admin_api.put_service(compute_id, {'forced_down': 'true'}) # Delete the server with compute service down. self.api.delete_server(server_uuid) self.cyborg.mock_del_arqs.assert_called_once_with(server_uuid) self._check_no_allocs_usage(server_uuid) # Restart the compute service to see if anything fails. 
self.admin_api.put_service(compute_id, {'forced_down': 'false'}) self.compute_services[0].start() class AcceleratorServerReschedTest(AcceleratorServerBase): def setUp(self): self.NUM_HOSTS = 2 super(AcceleratorServerReschedTest, self).setUp() def test_resched(self): orig_spawn = fake.FakeDriver.spawn def fake_spawn(*args, **kwargs): fake_spawn.count += 1 if fake_spawn.count == 1: raise exception.ComputeResourcesUnavailable( reason='First host fake fail.', instance_uuid='fake') else: orig_spawn(*args, **kwargs) fake_spawn.count = 0 with mock.patch('nova.virt.fake.FakeDriver.spawn', new=fake_spawn): flavor_id = self._create_acc_flavor() server_name = 'accel_server1' server = self._create_server( server_name, flavor_id=flavor_id, image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', networks='none', expected_state='ACTIVE') self.assertEqual(2, fake_spawn.count) self._check_allocations_usage(server) self.cyborg.mock_del_arqs.assert_called_once_with(server['id']) def test_resched_fails(self): def throw_error(*args, **kwargs): raise exception.ComputeResourcesUnavailable(reason='', instance_uuid='fake') self.stub_out('nova.virt.fake.FakeDriver.spawn', throw_error) flavor_id = self._create_acc_flavor() server_name = 'accel_server1' server = self._create_server( server_name, flavor_id=flavor_id, image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', networks='none', expected_state='ERROR') server_uuid = server['id'] self._check_no_allocs_usage(server_uuid) self.cyborg.mock_del_arqs.assert_has_calls( [mock.call(server_uuid), mock.call(server_uuid), mock.call(server_uuid)]) class AcceleratorServerOpsTest(AcceleratorServerBase): def setUp(self): self.NUM_HOSTS = 2 # 2nd host needed for evacuate super(AcceleratorServerOpsTest, self).setUp() flavor_id = self._create_acc_flavor() server_name = 'accel_server1' self.server = self._create_server( server_name, flavor_id=flavor_id, image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', networks='none', expected_state='ACTIVE') def test_soft_reboot_ok(self): params = {'reboot': {'type': 'SOFT'}} self.api.post_server_action(self.server['id'], params) self._wait_for_state_change(self.server, 'ACTIVE') self._check_allocations_usage(self.server) def test_hard_reboot_ok(self): params = {'reboot': {'type': 'HARD'}} self.api.post_server_action(self.server['id'], params) self._wait_for_state_change(self.server, 'HARD_REBOOT') self._wait_for_state_change(self.server, 'ACTIVE') self._check_allocations_usage(self.server) def test_pause_unpause_ok(self): # Pause and unpause should work with accelerators. # This is not a general test of un/pause functionality. self.api.post_server_action(self.server['id'], {'pause': {}}) self._wait_for_state_change(self.server, 'PAUSED') self._check_allocations_usage(self.server) # ARQs didn't get deleted (and so didn't have to be re-created). self.cyborg.mock_del_arqs.assert_not_called() self.api.post_server_action(self.server['id'], {'unpause': {}}) self._wait_for_state_change(self.server, 'ACTIVE') self._check_allocations_usage(self.server) def test_stop_start_ok(self): # Stop and start should work with accelerators. # This is not a general test of start/stop functionality. self.api.post_server_action(self.server['id'], {'os-stop': {}}) self._wait_for_state_change(self.server, 'SHUTOFF') self._check_allocations_usage(self.server) # ARQs didn't get deleted (and so didn't have to be re-created). 
self.cyborg.mock_del_arqs.assert_not_called() self.api.post_server_action(self.server['id'], {'os-start': {}}) self._wait_for_state_change(self.server, 'ACTIVE') self._check_allocations_usage(self.server) def test_lock_unlock_ok(self): # Lock/unlock are no-ops for accelerators. self.api.post_server_action(self.server['id'], {'lock': {}}) server = self.api.get_server(self.server['id']) self.assertTrue(server['locked']) self._check_allocations_usage(self.server) self.api.post_server_action(self.server['id'], {'unlock': {}}) server = self.api.get_server(self.server['id']) self.assertTrue(not server['locked']) self._check_allocations_usage(self.server) def test_backup_ok(self): self.api.post_server_action(self.server['id'], {'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': 1}}) self._check_allocations_usage(self.server) def test_create_image_ok(self): # snapshot self.api.post_server_action(self.server['id'], {'createImage': { 'name': 'foo-image', 'metadata': {'meta_var': 'meta_val'}}}) self._check_allocations_usage(self.server) def test_rescue_unrescue_ok(self): self.api.post_server_action(self.server['id'], {'rescue': { 'adminPass': 'MySecretPass', 'rescue_image_ref': '70a599e0-31e7-49b7-b260-868f441e862b'}}) self._check_allocations_usage(self.server) # ARQs didn't get deleted (and so didn't have to be re-created). self.cyborg.mock_del_arqs.assert_not_called() self._wait_for_state_change(self.server, 'RESCUE') self.api.post_server_action(self.server['id'], {'unrescue': {}}) self._check_allocations_usage(self.server) def test_resize_fails(self): ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_action, self.server['id'], {'resize': {'flavorRef': '2', 'OS-DCF:diskConfig': 'AUTO'}}) self.assertEqual(403, ex.response.status_code) self._check_allocations_usage(self.server) def test_suspend_fails(self): ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_action, self.server['id'], {'suspend': {}}) self.assertEqual(403, ex.response.status_code) self._check_allocations_usage(self.server) def test_migrate_fails(self): ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_action, self.server['id'], {'migrate': {}}) self.assertEqual(403, ex.response.status_code) self._check_allocations_usage(self.server) def test_live_migrate_fails(self): ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_action, self.server['id'], {'migrate': {'host': 'accel_host1'}}) self.assertEqual(403, ex.response.status_code) self._check_allocations_usage(self.server) def test_evacuate_fails(self): server_hostname = self.server['OS-EXT-SRV-ATTR:host'] for i in range(self.NUM_HOSTS): hostname = 'accel_host' + str(i) if hostname != server_hostname: other_hostname = hostname if self.compute_services[i].host == server_hostname: compute_to_stop = self.compute_services[i] # Stop and force down the compute service. 
compute_id = self.admin_api.get_services( host=server_hostname, binary='nova-compute')[0]['id'] compute_to_stop.stop() self.admin_api.put_service(compute_id, {'forced_down': 'true'}) ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_action, self.server['id'], {'evacuate': { 'host': other_hostname, 'adminPass': 'MySecretPass'}}) self.assertEqual(403, ex.response.status_code) self._check_allocations_usage(self.server) def test_rebuild_fails(self): rebuild_image_ref = fake_image.AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID ex = self.assertRaises(client.OpenStackApiException, self.api.post_server_action, self.server['id'], {'rebuild': { 'imageRef': rebuild_image_ref, 'OS-DCF:diskConfig': 'AUTO'}}) self.assertEqual(403, ex.response.status_code) self._check_allocations_usage(self.server) class CrossCellResizeWithQoSPort(PortResourceRequestBasedSchedulingTestBase): NUMBER_OF_CELLS = 2 def setUp(self): # Use our custom weigher defined above to make sure that we have # a predictable host order in the alternate list returned by the # scheduler for migration. self.useFixture(nova_fixtures.HostNameWeigherFixture()) super(CrossCellResizeWithQoSPort, self).setUp() # start compute2 in cell2, compute1 is started in cell1 by default self.compute2 = self._start_compute('host2', cell_name='cell2') self.compute2_rp_uuid = self._get_provider_uuid_by_host('host2') self._create_networking_rp_tree('host2', self.compute2_rp_uuid) self.compute2_service_id = self.admin_api.get_services( host='host2', binary='nova-compute')[0]['id'] # Enable cross-cell resize policy since it defaults to not allow # anyone to perform that type of operation. For these tests we'll # just allow admins to perform cross-cell resize. self.policy.set_rules({ servers_policies.CROSS_CELL_RESIZE: base_policies.RULE_ADMIN_API}, overwrite=False) def test_cross_cell_migrate_server_with_qos_ports(self): """Test that cross cell migration is not supported with qos ports and nova therefore falls back to do a same cell migration instead. To test this properly we first make sure that there is no valid host in the same cell but there is valid host in another cell and observe that the migration fails with NoValidHost. Then we start a new compute in the same cell the instance is in and retry the migration that is now expected to pass. """ non_qos_normal_port = self.neutron.port_1 qos_normal_port = self.neutron.port_with_resource_request qos_sriov_port = self.neutron.port_with_sriov_resource_request server = self._create_server_with_ports_and_check_allocation( non_qos_normal_port, qos_normal_port, qos_sriov_port) orig_create_binding = neutronapi.API._create_port_binding hosts = { 'host1': self.compute1_rp_uuid, 'host2': self.compute2_rp_uuid} # Add an extra check to our neutron fixture. This check makes sure that # the RP sent in the binding corresponds to host of the binding. In a # real deployment this is checked by the Neutron server. As bug # 1907522 showed we fail this check for cross cell migration with qos # ports in a real deployment. So to reproduce that bug we need to have # the same check in our test env too. 
def spy_on_create_binding(context, client, port_id, data): host_rp_uuid = hosts[data['binding']['host']] device_rp_uuid = data['binding']['profile'].get('allocation') if port_id == qos_normal_port['id']: if device_rp_uuid != self.ovs_bridge_rp_per_host[host_rp_uuid]: raise exception.PortBindingFailed(port_id=port_id) elif port_id == qos_sriov_port['id']: if (device_rp_uuid not in self.sriov_dev_rp_per_host[host_rp_uuid].values()): raise exception.PortBindingFailed(port_id=port_id) return orig_create_binding(context, client, port_id, data) with mock.patch( 'nova.network.neutron.API._create_port_binding', side_effect=spy_on_create_binding, autospec=True ): # We expect the migration to fail as the only available target # host is in a different cell and while cross cell migration is # enabled it is not supported for neutron ports with resource # request. self.api.post_server_action(server['id'], {'migrate': None}) self._wait_for_migration_status(server, ['error']) self._wait_for_server_parameter( server, {'status': 'ACTIVE', 'OS-EXT-SRV-ATTR:host': 'host1'}) event = self._wait_for_action_fail_completion( server, 'migrate', 'conductor_migrate_server') self.assertIn( 'exception.NoValidHost', event['traceback']) log = self.stdlog.logger.output self.assertIn( 'Request is allowed by policy to perform cross-cell resize ' 'but the instance has ports with resource request and ' 'cross-cell resize is not supported with such ports.', log) self.assertNotIn( 'nova.exception.PortBindingFailed: Binding failed for port', log) self.assertNotIn( "AttributeError: 'NoneType' object has no attribute 'version'", log) # Now start a new compute in the same cell as the instance and retry # the migration. self._start_compute('host3', cell_name='cell1') self.compute3_rp_uuid = self._get_provider_uuid_by_host('host3') self._create_networking_rp_tree('host3', self.compute3_rp_uuid) with mock.patch( 'nova.network.neutron.API._create_port_binding', side_effect=spy_on_create_binding, autospec=True ): self.api.post_server_action(server['id'], {'migrate': None}) server = self._wait_for_state_change(server, 'VERIFY_RESIZE') self.assertEqual('host3', server['OS-EXT-SRV-ATTR:host']) self._delete_server_and_check_allocations( server, qos_normal_port, qos_sriov_port) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_servers_provider_tree.py0000664000175000017500000007753400000000000025110 0ustar00zuulzuul00000000000000# Copyright 2011 Justin Santa Barbara # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock
import os_traits
from oslo_log import log as logging
from oslo_utils.fixture import uuidsentinel as uuids

from nova import context
from nova import exception
from nova.tests.functional import integrated_helpers
from nova.virt import fake

LOG = logging.getLogger(__name__)


class ProviderTreeTests(integrated_helpers.ProviderUsageBaseTestCase):
    compute_driver = 'fake.MediumFakeDriver'

    def setUp(self):
        super(ProviderTreeTests, self).setUp()
        # Before starting compute, placement has no providers registered
        self.assertEqual([], self._get_all_providers())

        # Start compute without mocking update_provider_tree. The fake driver
        # doesn't implement the method, so this will cause us to start with
        # the legacy get_available_resource()-based inventory discovery and
        # bootstrapping of placement data.
        self.compute = self._start_compute(host='host1')

        # Mock out update_provider_tree *after* starting compute with the
        # (unmocked, default, unimplemented) version from the fake driver.
        _p = mock.patch.object(fake.MediumFakeDriver, 'update_provider_tree')
        self.addCleanup(_p.stop)
        self.mock_upt = _p.start()

        # The compute host should have been created in placement with
        # appropriate inventory and no traits
        rps = self._get_all_providers()
        self.assertEqual(1, len(rps))
        self.assertEqual(self.compute.host, rps[0]['name'])
        self.host_uuid = self._get_provider_uuid_by_host(self.compute.host)
        self.assertEqual({
            'DISK_GB': {
                'total': 1028,
                'allocation_ratio': 1.0,
                'max_unit': 1028,
                'min_unit': 1,
                'reserved': 0,
                'step_size': 1,
            },
            'MEMORY_MB': {
                'total': 8192,
                'allocation_ratio': 1.5,
                'max_unit': 8192,
                'min_unit': 1,
                'reserved': 512,
                'step_size': 1,
            },
            'VCPU': {
                'total': 10,
                'allocation_ratio': 16.0,
                'max_unit': 10,
                'min_unit': 1,
                'reserved': 0,
                'step_size': 1,
            },
        }, self._get_provider_inventory(self.host_uuid))

        self.expected_compute_node_traits = (
            self.expected_fake_driver_capability_traits.union(
                # The COMPUTE_NODE trait is always added
                [os_traits.COMPUTE_NODE]))
        self.assertItemsEqual(self.expected_compute_node_traits,
                              self._get_provider_traits(self.host_uuid))

    def _run_update_available_resource(self, startup):
        self.compute.rt.update_available_resource(
            context.get_admin_context(), self.compute.host, startup=startup)

    def _run_update_available_resource_and_assert_raises(
            self, exc=exception.ResourceProviderSyncFailed, startup=False):
        """Invoke ResourceTracker.update_available_resource and assert that it
        results in ResourceProviderSyncFailed.

        _run_periodics is a little too high up in the call stack to be useful
        for this, because ResourceTracker.update_available_resource_for_node
        swallows all exceptions.
        """
        self.assertRaises(exc, self._run_update_available_resource, startup)

    def test_update_provider_tree_associated_info(self):
        """Inventory in some standard and custom resource classes. Standard
        and custom traits. Aggregates. Custom resource class and trait get
        created; inventory, traits, and aggregates get set properly.
""" inv = { 'VCPU': { 'total': 10, 'reserved': 0, 'min_unit': 1, 'max_unit': 2, 'step_size': 1, 'allocation_ratio': 10.0, }, 'MEMORY_MB': { 'total': 1048576, 'reserved': 2048, 'min_unit': 1024, 'max_unit': 131072, 'step_size': 1024, 'allocation_ratio': 1.0, }, 'CUSTOM_BANDWIDTH': { 'total': 1250000, 'reserved': 10000, 'min_unit': 5000, 'max_unit': 250000, 'step_size': 5000, 'allocation_ratio': 8.0, }, } traits = set(['HW_CPU_X86_AVX', 'HW_CPU_X86_AVX2', 'CUSTOM_GOLD']) aggs = set([uuids.agg1, uuids.agg2]) def update_provider_tree(prov_tree, nodename): prov_tree.update_inventory(self.compute.host, inv) prov_tree.update_traits(self.compute.host, traits) prov_tree.update_aggregates(self.compute.host, aggs) self.mock_upt.side_effect = update_provider_tree self.assertNotIn('CUSTOM_BANDWIDTH', self._get_all_resource_classes()) self.assertNotIn('CUSTOM_GOLD', self._get_all_traits()) self._run_periodics() self.assertIn('CUSTOM_BANDWIDTH', self._get_all_resource_classes()) self.assertIn('CUSTOM_GOLD', self._get_all_traits()) self.assertEqual(inv, self._get_provider_inventory(self.host_uuid)) self.assertItemsEqual( traits.union(self.expected_compute_node_traits), self._get_provider_traits(self.host_uuid) ) self.assertEqual(aggs, set(self._get_provider_aggregates(self.host_uuid))) def _update_provider_tree_multiple_providers(self, startup=False, do_reshape=False): r"""Make update_provider_tree create multiple providers, including an additional root as a sharing provider; and some descendants in the compute node's tree. +---------------------------+ +--------------------------+ |uuid: self.host_uuid | |uuid: uuids.ssp | |name: self.compute.host | |name: 'ssp' | |inv: (per MediumFakeDriver)| |inv: DISK_GB=500 | | VCPU=10 |...|traits: [MISC_SHARES_..., | | MEMORY_MB=8192 | | STORAGE_DISK_SSD]| | DISK_GB=1028 | |aggs: [uuids.agg] | |aggs: [uuids.agg] | +--------------------------+ +---------------------------+ / \ +-----------------+ +-----------------+ |uuid: uuids.numa1| |uuid: uuids.numa2| |name: 'numa1' | |name: 'numa2' | |inv: VCPU=10 | |inv: VCPU=20 | | MEMORY_MB=1G| | MEMORY_MB=2G| +-----------------+ +-----------------+ / \ / \ +------------+ +------------+ +------------+ +------------+ |uuid: | |uuid: | |uuid: | |uuid: | | uuids.pf1_1| | uuids.pf1_2| | uuids.pf2_1| | uuids.pf2_2| |name: | |name: | |name: | |name: | | 'pf1_1' | | 'pf1_2' | | 'pf2_1' | | 'pf2_2' | |inv: | |inv: | |inv: | |inv: | | ..NET_VF: 2| | ..NET_VF: 3| | ..NET_VF: 3| | ..NET_VF: 4| |traits: | |traits: | |traits: | |traits: | | ..PHYSNET_0| | ..PHYSNET_1| | ..PHYSNET_0| | ..PHYSNET_1| +------------+ +------------+ +------------+ +------------+ """ def update_provider_tree(prov_tree, nodename, allocations=None): if do_reshape and allocations is None: raise exception.ReshapeNeeded() # Create a shared storage provider as a root prov_tree.new_root('ssp', uuids.ssp) prov_tree.update_traits( 'ssp', ['MISC_SHARES_VIA_AGGREGATE', 'STORAGE_DISK_SSD']) prov_tree.update_aggregates('ssp', [uuids.agg]) prov_tree.update_inventory('ssp', {'DISK_GB': {'total': 500}}) # Compute node is in the same aggregate prov_tree.update_aggregates(self.compute.host, [uuids.agg]) # Create two NUMA nodes as children prov_tree.new_child('numa1', self.host_uuid, uuid=uuids.numa1) prov_tree.new_child('numa2', self.host_uuid, uuid=uuids.numa2) # Give the NUMA nodes the proc/mem inventory. NUMA 2 has twice as # much as NUMA 1 (so we can validate later that everything is where # it should be). 
for n in (1, 2): inv = { 'VCPU': { 'total': 10 * n, 'reserved': 0, 'min_unit': 1, 'max_unit': 2, 'step_size': 1, 'allocation_ratio': 10.0, }, 'MEMORY_MB': { 'total': 1048576 * n, 'reserved': 2048, 'min_unit': 512, 'max_unit': 131072, 'step_size': 512, 'allocation_ratio': 1.0, }, } prov_tree.update_inventory('numa%d' % n, inv) # Each NUMA node has two PFs providing VF inventory on one of two # networks for n in (1, 2): for p in (1, 2): name = 'pf%d_%d' % (n, p) prov_tree.new_child( name, getattr(uuids, 'numa%d' % n), uuid=getattr(uuids, name)) trait = 'CUSTOM_PHYSNET_%d' % ((n + p) % 2) prov_tree.update_traits(name, [trait]) inv = { 'SRIOV_NET_VF': { 'total': n + p, 'reserved': 0, 'min_unit': 1, 'max_unit': 1, 'step_size': 1, 'allocation_ratio': 1.0, }, } prov_tree.update_inventory(name, inv) if do_reshape: # Clear out the compute node's inventory. Its VCPU and # MEMORY_MB "moved" to the NUMA RPs and its DISK_GB "moved" to # the shared storage provider. prov_tree.update_inventory(self.host_uuid, {}) # Move all the allocations for consumer_uuid, alloc_info in allocations.items(): allocs = alloc_info['allocations'] # All allocations should belong to the compute node. self.assertEqual([self.host_uuid], list(allocs)) new_allocs = {} for rc, amt in allocs[self.host_uuid]['resources'].items(): # Move VCPU to NUMA1 and MEMORY_MB to NUMA2. Bogus, but # lets us prove stuff ends up where we tell it to go. if rc == 'VCPU': rp_uuid = uuids.numa1 elif rc == 'MEMORY_MB': rp_uuid = uuids.numa2 elif rc == 'DISK_GB': rp_uuid = uuids.ssp else: self.fail("Unexpected resource on compute node: " "%s=%d" % (rc, amt)) new_allocs[rp_uuid] = { 'resources': {rc: amt}, } # Add a VF for the heck of it. Again bogus, but see above. new_allocs[uuids.pf1_1] = { 'resources': {'SRIOV_NET_VF': 1} } # Now replace just the allocations, leaving the other stuff # (proj/user ID and consumer generation) alone alloc_info['allocations'] = new_allocs self.mock_upt.side_effect = update_provider_tree if startup: self.restart_compute_service(self.compute) else: self._run_update_available_resource(False) # Create a dict, keyed by provider UUID, of all the providers rps_by_uuid = {} for rp_dict in self._get_all_providers(): rps_by_uuid[rp_dict['uuid']] = rp_dict # All and only the expected providers got created. 
all_uuids = set([self.host_uuid, uuids.ssp, uuids.numa1, uuids.numa2, uuids.pf1_1, uuids.pf1_2, uuids.pf2_1, uuids.pf2_2]) self.assertEqual(all_uuids, set(rps_by_uuid)) # Validate tree roots tree_uuids = [self.host_uuid, uuids.numa1, uuids.numa2, uuids.pf1_1, uuids.pf1_2, uuids.pf2_1, uuids.pf2_2] for tree_uuid in tree_uuids: self.assertEqual(self.host_uuid, rps_by_uuid[tree_uuid]['root_provider_uuid']) self.assertEqual(uuids.ssp, rps_by_uuid[uuids.ssp]['root_provider_uuid']) # SSP has the right traits self.assertEqual( set(['MISC_SHARES_VIA_AGGREGATE', 'STORAGE_DISK_SSD']), set(self._get_provider_traits(uuids.ssp))) # SSP has the right inventory self.assertEqual( 500, self._get_provider_inventory(uuids.ssp)['DISK_GB']['total']) # SSP and compute are in the same aggregate agg_uuids = set([self.host_uuid, uuids.ssp]) for uuid in agg_uuids: self.assertEqual(set([uuids.agg]), set(self._get_provider_aggregates(uuid))) # The rest aren't in aggregates for uuid in (all_uuids - agg_uuids): self.assertEqual(set(), set(self._get_provider_aggregates(uuid))) # NUMAs have the right inventory and parentage for n in (1, 2): numa_uuid = getattr(uuids, 'numa%d' % n) self.assertEqual(self.host_uuid, rps_by_uuid[numa_uuid]['parent_provider_uuid']) inv = self._get_provider_inventory(numa_uuid) self.assertEqual(10 * n, inv['VCPU']['total']) self.assertEqual(1048576 * n, inv['MEMORY_MB']['total']) # PFs have the right inventory, physnet, and parentage self.assertEqual(uuids.numa1, rps_by_uuid[uuids.pf1_1]['parent_provider_uuid']) self.assertEqual(['CUSTOM_PHYSNET_0'], self._get_provider_traits(uuids.pf1_1)) self.assertEqual( 2, self._get_provider_inventory(uuids.pf1_1)['SRIOV_NET_VF']['total']) self.assertEqual(uuids.numa1, rps_by_uuid[uuids.pf1_2]['parent_provider_uuid']) self.assertEqual(['CUSTOM_PHYSNET_1'], self._get_provider_traits(uuids.pf1_2)) self.assertEqual( 3, self._get_provider_inventory(uuids.pf1_2)['SRIOV_NET_VF']['total']) self.assertEqual(uuids.numa2, rps_by_uuid[uuids.pf2_1]['parent_provider_uuid']) self.assertEqual(['CUSTOM_PHYSNET_1'], self._get_provider_traits(uuids.pf2_1)) self.assertEqual( 3, self._get_provider_inventory(uuids.pf2_1)['SRIOV_NET_VF']['total']) self.assertEqual(uuids.numa2, rps_by_uuid[uuids.pf2_2]['parent_provider_uuid']) self.assertEqual(['CUSTOM_PHYSNET_0'], self._get_provider_traits(uuids.pf2_2)) self.assertEqual( 4, self._get_provider_inventory(uuids.pf2_2)['SRIOV_NET_VF']['total']) # Compute don't have any extra traits self.assertItemsEqual(self.expected_compute_node_traits, self._get_provider_traits(self.host_uuid)) # NUMAs don't have any traits for uuid in (uuids.numa1, uuids.numa2): self.assertEqual([], self._get_provider_traits(uuid)) def test_update_provider_tree_multiple_providers(self): self._update_provider_tree_multiple_providers() def test_update_provider_tree_multiple_providers_startup(self): """The above works the same for startup when no reshape requested.""" self._update_provider_tree_multiple_providers(startup=True) def test_update_provider_tree_bogus_resource_class(self): def update_provider_tree(prov_tree, nodename): prov_tree.update_inventory(self.compute.host, {'FOO': {}}) self.mock_upt.side_effect = update_provider_tree rcs = self._get_all_resource_classes() self.assertIn('VCPU', rcs) self.assertNotIn('FOO', rcs) self._run_update_available_resource_and_assert_raises() rcs = self._get_all_resource_classes() self.assertIn('VCPU', rcs) self.assertNotIn('FOO', rcs) def test_update_provider_tree_bogus_trait(self): def update_provider_tree(prov_tree, 
nodename): prov_tree.update_traits(self.compute.host, ['FOO']) self.mock_upt.side_effect = update_provider_tree traits = self._get_all_traits() self.assertIn('HW_CPU_X86_AVX', traits) self.assertNotIn('FOO', traits) self._run_update_available_resource_and_assert_raises() traits = self._get_all_traits() self.assertIn('HW_CPU_X86_AVX', traits) self.assertNotIn('FOO', traits) def _create_instance(self, flavor): return self._create_server( image_uuid='155d900f-4e14-4e4c-a73d-069cbf4541e6', flavor_id=flavor['id'], networks='none', az='nova:host1') def test_reshape(self): """On startup, virt driver signals it needs to reshape, then does so. This test creates a couple of instances so there are allocations to be moved by the reshape operation. Then we do the reshape and make sure the inventories and allocations end up where they should. """ # First let's create some instances so we have allocations to move. flavors = self.api.get_flavors() inst1 = self._create_instance(flavors[0]) inst2 = self._create_instance(flavors[1]) # Instance create calls RT._update, which calls # driver.update_provider_tree, which is currently mocked to a no-op. self.assertEqual(2, self.mock_upt.call_count) self.mock_upt.reset_mock() # Hit the reshape. self._update_provider_tree_multiple_providers(startup=True, do_reshape=True) # Check the final allocations # The compute node provider should have *no* allocations. self.assertEqual( {}, self._get_allocations_by_provider_uuid(self.host_uuid)) # And no inventory self.assertEqual({}, self._get_provider_inventory(self.host_uuid)) # NUMA1 got all the VCPU self.assertEqual( {inst1['id']: {'resources': {'VCPU': 1}}, inst2['id']: {'resources': {'VCPU': 1}}}, self._get_allocations_by_provider_uuid(uuids.numa1)) # NUMA2 got all the memory self.assertEqual( {inst1['id']: {'resources': {'MEMORY_MB': 512}}, inst2['id']: {'resources': {'MEMORY_MB': 2048}}}, self._get_allocations_by_provider_uuid(uuids.numa2)) # Disk resource ended up on the shared storage provider self.assertEqual( {inst1['id']: {'resources': {'DISK_GB': 1}}, inst2['id']: {'resources': {'DISK_GB': 20}}}, self._get_allocations_by_provider_uuid(uuids.ssp)) # We put VFs on the first PF in NUMA1 self.assertEqual( {inst1['id']: {'resources': {'SRIOV_NET_VF': 1}}, inst2['id']: {'resources': {'SRIOV_NET_VF': 1}}}, self._get_allocations_by_provider_uuid(uuids.pf1_1)) self.assertEqual( {}, self._get_allocations_by_provider_uuid(uuids.pf1_2)) self.assertEqual( {}, self._get_allocations_by_provider_uuid(uuids.pf2_1)) self.assertEqual( {}, self._get_allocations_by_provider_uuid(uuids.pf2_2)) # This is *almost* redundant - but it makes sure the instances don't # have extra allocations from some other provider. self.assertEqual( { uuids.numa1: { 'resources': {'VCPU': 1}, # Don't care about the generations - rely on placement db # tests to validate that those behave properly. 
'generation': mock.ANY, }, uuids.numa2: { 'resources': {'MEMORY_MB': 512}, 'generation': mock.ANY, }, uuids.ssp: { 'resources': {'DISK_GB': 1}, 'generation': mock.ANY, }, uuids.pf1_1: { 'resources': {'SRIOV_NET_VF': 1}, 'generation': mock.ANY, }, }, self._get_allocations_by_server_uuid(inst1['id'])) self.assertEqual( { uuids.numa1: { 'resources': {'VCPU': 1}, 'generation': mock.ANY, }, uuids.numa2: { 'resources': {'MEMORY_MB': 2048}, 'generation': mock.ANY, }, uuids.ssp: { 'resources': {'DISK_GB': 20}, 'generation': mock.ANY, }, uuids.pf1_1: { 'resources': {'SRIOV_NET_VF': 1}, 'generation': mock.ANY, }, }, self._get_allocations_by_server_uuid(inst2['id'])) # The first call raises ReshapeNeeded, resulting in the second. self.assertEqual(2, self.mock_upt.call_count) # The expected value of the allocations kwarg to update_provider_tree # for that second call: exp_allocs = { inst1['id']: { 'allocations': { uuids.numa1: {'resources': {'VCPU': 1}}, uuids.numa2: {'resources': {'MEMORY_MB': 512}}, uuids.ssp: {'resources': {'DISK_GB': 1}}, uuids.pf1_1: {'resources': {'SRIOV_NET_VF': 1}}, }, 'consumer_generation': mock.ANY, 'project_id': mock.ANY, 'user_id': mock.ANY, }, inst2['id']: { 'allocations': { uuids.numa1: {'resources': {'VCPU': 1}}, uuids.numa2: {'resources': {'MEMORY_MB': 2048}}, uuids.ssp: {'resources': {'DISK_GB': 20}}, uuids.pf1_1: {'resources': {'SRIOV_NET_VF': 1}}, }, 'consumer_generation': mock.ANY, 'project_id': mock.ANY, 'user_id': mock.ANY, }, } self.mock_upt.assert_has_calls([ mock.call(mock.ANY, 'host1'), mock.call(mock.ANY, 'host1', allocations=exp_allocs), ]) class TraitsTrackingTests(integrated_helpers.ProviderUsageBaseTestCase): compute_driver = 'fake.SmallFakeDriver' fake_caps = { 'supports_attach_interface': True, 'supports_device_tagging': False, } def _mock_upt(self, traits_to_add, traits_to_remove): """Set up the compute driver with a fake update_provider_tree() which injects the given traits into the provider tree """ original_upt = fake.SmallFakeDriver.update_provider_tree def fake_upt(self2, ptree, nodename, allocations=None): original_upt(self2, ptree, nodename, allocations) LOG.debug("injecting traits via fake update_provider_tree(): %s", traits_to_add) ptree.add_traits(nodename, *traits_to_add) LOG.debug("removing traits via fake update_provider_tree(): %s", traits_to_remove) ptree.remove_traits(nodename, *traits_to_remove) self.stub_out('nova.virt.fake.FakeDriver.update_provider_tree', fake_upt) @mock.patch.dict(fake.SmallFakeDriver.capabilities, clear=True, values=fake_caps) def test_resource_provider_traits(self): """Test that the compute service reports traits via driver capabilities and registers them on the compute host resource provider in the placement API. 
""" custom_trait = 'CUSTOM_FOO' ptree_traits = [custom_trait, 'HW_CPU_X86_VMX'] global_traits = self._get_all_traits() self.assertNotIn(custom_trait, global_traits) self.assertIn(os_traits.COMPUTE_NET_ATTACH_INTERFACE, global_traits) self.assertIn(os_traits.COMPUTE_DEVICE_TAGGING, global_traits) self.assertIn(os_traits.COMPUTE_NODE, global_traits) self.assertEqual([], self._get_all_providers()) self._mock_upt(ptree_traits, []) self.compute = self._start_compute(host='host1') rp_uuid = self._get_provider_uuid_by_host('host1') expected_traits = set( ptree_traits + [os_traits.COMPUTE_NET_ATTACH_INTERFACE, os_traits.COMPUTE_NODE] ) self.assertItemsEqual(expected_traits, self._get_provider_traits(rp_uuid)) global_traits = self._get_all_traits() # CUSTOM_FOO is now a registered trait because the virt driver # reported it. self.assertIn(custom_trait, global_traits) # Now simulate user deletion of driver-provided traits from # the compute node provider. expected_traits.remove(custom_trait) expected_traits.remove(os_traits.COMPUTE_NET_ATTACH_INTERFACE) self._set_provider_traits(rp_uuid, list(expected_traits)) self.assertItemsEqual(expected_traits, self._get_provider_traits(rp_uuid)) # The above trait deletions are simulations of an out-of-band # placement operation, as if the operator used the CLI. So # now we have to "SIGHUP the compute process" to clear the # report client cache so the subsequent update picks up the # changes. self.compute.manager.reset() # Add the traits back so that the mock update_provider_tree() # can reinject them. expected_traits.update( [custom_trait, os_traits.COMPUTE_NET_ATTACH_INTERFACE]) # Now when we run the periodic update task, the trait should # reappear in the provider tree and get synced back to # placement. self._run_periodics() self.assertItemsEqual(expected_traits, self._get_provider_traits(rp_uuid)) global_traits = self._get_all_traits() self.assertIn(custom_trait, global_traits) self.assertIn(os_traits.COMPUTE_NET_ATTACH_INTERFACE, global_traits) @mock.patch.dict(fake.SmallFakeDriver.capabilities, clear=True, values=fake_caps) def test_admin_traits_preserved(self): """Test that if admin externally sets traits on the resource provider then the compute periodic doesn't remove them from placement. """ admin_trait = 'CUSTOM_TRAIT_FROM_ADMIN' self._create_trait(admin_trait) global_traits = self._get_all_traits() self.assertIn(admin_trait, global_traits) self.compute = self._start_compute(host='host1') rp_uuid = self._get_provider_uuid_by_host('host1') traits = self._get_provider_traits(rp_uuid) traits.append(admin_trait) self._set_provider_traits(rp_uuid, traits) self.assertIn(admin_trait, self._get_provider_traits(rp_uuid)) # SIGHUP the compute process to clear the report client # cache, so the subsequent periodic update recalculates everything. self.compute.manager.reset() self._run_periodics() self.assertIn(admin_trait, self._get_provider_traits(rp_uuid)) @mock.patch.dict(fake.SmallFakeDriver.capabilities, clear=True, values=fake_caps) def test_driver_removing_support_for_trait_via_capability(self): """Test that if a driver initially reports a trait via a supported capability, then at the next periodic update doesn't report support for it again, it gets removed from the provider in the placement service. 
""" self.compute = self._start_compute(host='host1') rp_uuid = self._get_provider_uuid_by_host('host1') trait = os_traits.COMPUTE_NET_ATTACH_INTERFACE self.assertIn(trait, self._get_provider_traits(rp_uuid)) new_caps = dict(fake.SmallFakeDriver.capabilities, **{'supports_attach_interface': False}) with mock.patch.dict(fake.SmallFakeDriver.capabilities, new_caps): self._run_periodics() self.assertNotIn(trait, self._get_provider_traits(rp_uuid)) def test_driver_removing_trait_via_upt(self): """Test that if a driver reports a trait via update_provider_tree() initially, but at the next periodic update doesn't report it again, that it gets removed from placement. """ custom_trait = "CUSTOM_TRAIT_FROM_DRIVER" standard_trait = os_traits.HW_CPU_X86_SGX self._mock_upt([custom_trait, standard_trait], []) self.compute = self._start_compute(host='host1') rp_uuid = self._get_provider_uuid_by_host('host1') self.assertIn(custom_trait, self._get_provider_traits(rp_uuid)) self.assertIn(standard_trait, self._get_provider_traits(rp_uuid)) # Now change the fake update_provider_tree() from injecting the # traits to removing them, and run the periodic update. self._mock_upt([], [custom_trait, standard_trait]) self._run_periodics() self.assertNotIn(custom_trait, self._get_provider_traits(rp_uuid)) self.assertNotIn(standard_trait, self._get_provider_traits(rp_uuid)) @mock.patch.dict(fake.SmallFakeDriver.capabilities, clear=True, values=fake_caps) def test_driver_removes_unsupported_trait_from_admin(self): """Test that if an admin adds a trait corresponding to a capability which is unsupported, then if the provider cache is reset, the driver will remove it during the next update. """ self.compute = self._start_compute(host='host1') rp_uuid = self._get_provider_uuid_by_host('host1') traits = self._get_provider_traits(rp_uuid) trait = os_traits.COMPUTE_DEVICE_TAGGING self.assertNotIn(trait, traits) # Simulate an admin associating the trait with the host via # the placement API. traits.append(trait) self._set_provider_traits(rp_uuid, traits) # Check that worked. traits = self._get_provider_traits(rp_uuid) self.assertIn(trait, traits) # SIGHUP the compute process to clear the report client # cache, so the subsequent periodic update recalculates everything. self.compute.manager.reset() self._run_periodics() self.assertNotIn(trait, self._get_provider_traits(rp_uuid)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/test_service.py0000664000175000017500000001447400000000000022120 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from unittest import mock from nova import context as nova_context from nova.objects import service from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as fake_image from nova.tests.unit import policy_fixture class ServiceTestCase(test.TestCase, integrated_helpers.InstanceHelperMixin): """Contains scenarios for testing services. """ def setUp(self): super(ServiceTestCase, self).setUp() # Use the standard fixtures. self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) # Start nova controller services. self.api = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')).api self.start_service('conductor') self.scheduler = self.start_service('scheduler') # Our OSAPIFixture does not use a WSGIService, so just use the metadata # server fixture (which uses WSGIService) for testing. self.metadata = self.useFixture( nova_fixtures.OSMetadataServer()).metadata # Start one compute service. self.start_service('compute') def test_service_reset_resets_cell_cache(self): """Tests that the cell cache for database transaction context managers is cleared after a service reset (example scenario: SIGHUP). """ server_req = self._build_server() server = self.api.post_server({'server': server_req}) self._wait_for_state_change(server, 'ACTIVE') # Cell cache should be populated after creating a server. self.assertTrue(nova_context.CELL_CACHE) self.scheduler.reset() # Cell cache should be empty after the service reset. self.assertEqual({}, nova_context.CELL_CACHE) # Now test the WSGI service. server = self.api.post_server({'server': server_req}) self._wait_for_state_change(server, 'ACTIVE') # Cell cache should be populated after creating a server. self.assertTrue(nova_context.CELL_CACHE) self.metadata.reset() # Cell cache should be empty after the service reset. self.assertEqual({}, nova_context.CELL_CACHE) def test_service_start_resets_cell_cache(self): """Tests that the cell cache for database transaction context managers is cleared upon a service start (example scenario: service start after a SIGTERM and the parent process forks child process workers). """ server_req = self._build_server() server = self.api.post_server({'server': server_req}) self._wait_for_state_change(server, 'ACTIVE') # Cell cache should be populated after creating a server. self.assertTrue(nova_context.CELL_CACHE) self.scheduler.stop() # NOTE(melwitt): Simulate a service starting after being stopped. The # scenario we want to handle is one where during start, the parent # process forks child process workers while one or more of its cached # database transaction context managers is inside a locked code # section. If the child processes are forked while the lock is locked, # the child processes begin with an already locked lock that can never # be acquired again. The result is that requests gets stuck and fail # with a CellTimeout error. self.scheduler.start() # Cell cache should be empty after the service start. self.assertEqual({}, nova_context.CELL_CACHE) # Now test the WSGI service. server = self.api.post_server({'server': server_req}) self._wait_for_state_change(server, 'ACTIVE') # Cell cache should be populated after creating a server. 
self.assertTrue(nova_context.CELL_CACHE) self.metadata.stop() self.metadata.start() # Cell cache should be empty after the service reset. self.assertEqual({}, nova_context.CELL_CACHE) class TestOldComputeCheck( test.TestCase, integrated_helpers.InstanceHelperMixin): def test_conductor_warns_if_old_compute(self): old_version = service.SERVICE_VERSION_ALIASES[ service.OLDEST_SUPPORTED_SERVICE_VERSION] - 1 with mock.patch( "nova.objects.service.get_minimum_version_all_cells", return_value=old_version): self.start_service('conductor') self.assertIn( 'Current Nova version does not support computes older than', self.stdlog.logger.output) def test_api_warns_if_old_compute(self): old_version = service.SERVICE_VERSION_ALIASES[ service.OLDEST_SUPPORTED_SERVICE_VERSION] - 1 with mock.patch( "nova.objects.service.get_minimum_version_all_cells", return_value=old_version): self.useFixture(nova_fixtures.OSAPIFixture(api_version='v2.1')) self.assertIn( 'Current Nova version does not support computes older than', self.stdlog.logger.output) def test_compute_warns_if_old_compute(self): old_version = service.SERVICE_VERSION_ALIASES[ service.OLDEST_SUPPORTED_SERVICE_VERSION] - 1 with mock.patch( "nova.objects.service.get_minimum_version_all_cells", return_value=old_version): self._start_compute('host1') self.assertIn( 'Current Nova version does not support computes older than', self.stdlog.logger.output) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5064685 nova-21.2.4/nova/tests/functional/wsgi/0000775000175000017500000000000000000000000020006 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/functional/wsgi/__init__.py0000664000175000017500000000000000000000000022105 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/wsgi/test_flavor_manage.py0000664000175000017500000002201500000000000024220 0ustar00zuulzuul00000000000000# Copyright 2015 Hewlett-Packard Development Company, L.P. # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testscenarios from nova import context from nova import exception as ex from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import integrated_helpers as helper from nova.tests.unit import policy_fixture def rand_flavor(**kwargs): flav = { 'name': 'name-%s' % helper.generate_random_alphanumeric(10), 'id': helper.generate_random_alphanumeric(10), 'ram': int(helper.generate_random_numeric(2)) + 1, 'disk': int(helper.generate_random_numeric(3)), 'vcpus': int(helper.generate_random_numeric(1)) + 1, } flav.update(kwargs) return flav class FlavorManageFullstack(testscenarios.WithScenarios, test.TestCase): """Tests for flavors manage administrative command. 
Extension: os-flavors-manage os-flavors-manage adds a set of admin functions to the flavors resource for the creation and deletion of flavors. POST /v2/flavors: :: { 'name': NAME, # string, required unique 'id': ID, # string, required unique 'ram': RAM, # in MB, required 'vcpus': VCPUS, # int value, required 'disk': DISK, # in GB, required 'OS-FLV-EXT-DATA:ephemeral', # in GB, ephemeral disk size 'is_public': IS_PUBLIC, # boolean 'swap': SWAP, # in GB? 'rxtx_factor': RXTX, # ??? } Returns Flavor DELETE /v2/flavors/ID Functional Test Scope: This test starts the wsgi stack for the nova api services, uses an in memory database to ensure the path through the wsgi layer to the database. """ _additional_fixtures = [] scenarios = [ # test v2.1 base microversion ('v2_1', { 'api_major_version': 'v2.1'}), ] def setUp(self): super(FlavorManageFullstack, self).setUp() # load any additional fixtures specified by the scenario for fix in self._additional_fixtures: self.useFixture(fix()) self.useFixture(policy_fixture.RealPolicyFixture()) api_fixture = self.useFixture( nova_fixtures.OSAPIFixture( api_version=self.api_major_version)) # NOTE(sdague): because this test is primarily an admin API # test default self.api to the admin api. self.api = api_fixture.admin_api self.user_api = api_fixture.api def assertFlavorDbEqual(self, flav, flavdb): # a mapping of the REST params to the db fields mapping = { 'name': 'name', 'disk': 'root_gb', 'ram': 'memory_mb', 'vcpus': 'vcpus', 'id': 'flavorid', 'swap': 'swap' } for k, v in mapping.items(): if k in flav: self.assertEqual(flav[k], flavdb[v], "%s != %s" % (flav, flavdb)) def assertFlavorAPIEqual(self, flav, flavapi): # for all keys in the flavor, ensure they are correctly set in # flavapi response. for k in flav: if k in flavapi: self.assertEqual(flav[k], flavapi[k], "%s != %s" % (flav, flavapi)) else: self.fail("Missing key: %s in flavor: %s" % (k, flavapi)) def assertFlavorInList(self, flav, flavlist): for item in flavlist['flavors']: if flav['id'] == item['id']: self.assertEqual(flav['name'], item['name']) return self.fail("%s not found in %s" % (flav, flavlist)) def assertFlavorNotInList(self, flav, flavlist): for item in flavlist['flavors']: if flav['id'] == item['id']: self.fail("%s found in %s" % (flav, flavlist)) def test_flavor_manage_func_negative(self): """Test flavor manage edge conditions. - Bogus body is a 400 - Unknown flavor is a 404 - Deleting unknown flavor is a 404 """ # Test for various API failure conditions # bad body is 400 resp = self.api.api_post('flavors', '', check_response_status=False) self.assertEqual(400, resp.status) # get unknown flavor is 404 resp = self.api.api_delete('flavors/foo', check_response_status=False) self.assertEqual(404, resp.status) # delete unknown flavor is 404 resp = self.api.api_delete('flavors/foo', check_response_status=False) self.assertEqual(404, resp.status) ctx = context.get_admin_context() # bounds conditions - invalid vcpus flav = {'flavor': rand_flavor(vcpus=0)} resp = self.api.api_post('flavors', flav, check_response_status=False) self.assertEqual(400, resp.status, resp) # ... and ensure that we didn't leak it into the db self.assertRaises(ex.FlavorNotFound, objects.Flavor.get_by_flavor_id, ctx, flav['flavor']['id']) # bounds conditions - invalid ram flav = {'flavor': rand_flavor(ram=0)} resp = self.api.api_post('flavors', flav, check_response_status=False) self.assertEqual(400, resp.status) # ... 
and ensure that we didn't leak it into the db self.assertRaises(ex.FlavorNotFound, objects.Flavor.get_by_flavor_id, ctx, flav['flavor']['id']) # NOTE(sdague): if there are other bounds conditions that # should be checked, stack them up here. def test_flavor_manage_deleted(self): """Ensure the behavior around a deleted flavor is stable. - Fetching a deleted flavor works, and returns the flavor info. - Listings should not contain deleted flavors """ # create a deleted flavor new_flav = {'flavor': rand_flavor()} self.api.api_post('flavors', new_flav) self.api.api_delete('flavors/%s' % new_flav['flavor']['id']) # deleted flavor should not show up in a list resp = self.api.api_get('flavors') self.assertFlavorNotInList(new_flav['flavor'], resp.body) def test_flavor_manage_func(self): """Basic flavor creation lifecycle testing. - Creating a flavor - Ensure it's in the database - Ensure it's in the listing - Delete it - Ensure it's hidden in the database """ ctx = context.get_admin_context() flav1 = { 'flavor': rand_flavor(), } # Create flavor and ensure it made it to the database self.api.api_post('flavors', flav1) flav1db = objects.Flavor.get_by_flavor_id(ctx, flav1['flavor']['id']) self.assertFlavorDbEqual(flav1['flavor'], flav1db) # Ensure new flavor is seen in the listing resp = self.api.api_get('flavors') self.assertFlavorInList(flav1['flavor'], resp.body) # Delete flavor and ensure it was removed from the database self.api.api_delete('flavors/%s' % flav1['flavor']['id']) self.assertRaises(ex.FlavorNotFound, objects.Flavor.get_by_flavor_id, ctx, flav1['flavor']['id']) resp = self.api.api_delete('flavors/%s' % flav1['flavor']['id'], check_response_status=False) self.assertEqual(404, resp.status) def test_flavor_manage_permissions(self): """Ensure that regular users can't create or delete flavors. """ ctx = context.get_admin_context() flav1 = {'flavor': rand_flavor()} # Ensure user can't create flavor resp = self.user_api.api_post('flavors', flav1, check_response_status=False) self.assertEqual(403, resp.status) # ... and that it didn't leak through self.assertRaises(ex.FlavorNotFound, objects.Flavor.get_by_flavor_id, ctx, flav1['flavor']['id']) # Create the flavor as the admin user self.api.api_post('flavors', flav1) # Ensure user can't delete flavors from our cloud resp = self.user_api.api_delete('flavors/%s' % flav1['flavor']['id'], check_response_status=False) self.assertEqual(403, resp.status) # ... and ensure that we didn't actually delete the flavor, # this will throw an exception if we did. objects.Flavor.get_by_flavor_id(ctx, flav1['flavor']['id']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/wsgi/test_interfaces.py0000664000175000017500000000430200000000000023541 0ustar00zuulzuul00000000000000# Copyright 2017 NTT Corporation. # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
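# ---------------------------------------------------------------------------
# Editor's sketch (illustrative only, not part of the upstream module): the
# flavor-manage tests above (assertFlavorDbEqual) compare REST flavor fields
# against the corresponding Flavor object fields.  The helper below restates
# that same field mapping as a standalone translation function; the function
# name is an editor's invention, but the mapping is the one used by the test.
# ---------------------------------------------------------------------------


def _example_rest_flavor_to_db_fields(flav):
    """Translate a REST flavor payload into Flavor object field names."""
    mapping = {
        'name': 'name',
        'disk': 'root_gb',
        'ram': 'memory_mb',
        'vcpus': 'vcpus',
        'id': 'flavorid',
        'swap': 'swap',
    }
    return {db_field: flav[rest_field]
            for rest_field, db_field in mapping.items()
            if rest_field in flav}


# Example:
# _example_rest_flavor_to_db_fields({'name': 'm1.tiny', 'ram': 512, 'disk': 1})
# -> {'name': 'm1.tiny', 'root_gb': 1, 'memory_mb': 512}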
from nova.tests.functional.api import client from nova.tests.functional import test_servers class InterfaceFullstack(test_servers.ServersTestBase): """Tests for port interfaces command. Functional Test Scope: os-interface API specifies a port ID created by Neutron. """ api_major_version = 'v2.1' def test_detach_interface_negative_invalid_state(self): # Create server with network created_server = self._create_server( networks=[{'uuid': '3cb9bc59-5699-4588-a4b1-b87f96708bc6'}]) created_server_id = created_server['id'] found_server = self._wait_for_state_change(created_server, 'ACTIVE') post = { 'interfaceAttachment': { 'net_id': "3cb9bc59-5699-4588-a4b1-b87f96708bc6" } } self.api.attach_interface(created_server_id, post) response = self.api.get_port_interfaces(created_server_id)[0] port_id = response['port_id'] # Change status from ACTIVE to SUSPENDED for negative test post = {'suspend': {}} self.api.post_server_action(created_server_id, post) found_server = self._wait_for_state_change(found_server, 'SUSPENDED') # Detach port interface in SUSPENDED (not ACTIVE, etc.) ex = self.assertRaises(client.OpenStackApiException, self.api.detach_interface, created_server_id, port_id) self.assertEqual(409, ex.response.status_code) self.assertEqual('SUSPENDED', found_server['status']) # Cleanup self._delete_server(found_server) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/wsgi/test_secgroup.py0000664000175000017500000000653500000000000023257 0ustar00zuulzuul00000000000000# Copyright 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging import testscenarios from nova import test from nova.tests import fixtures as nova_fixtures import nova.tests.unit.image.fake from nova.tests.unit import policy_fixture LOG = logging.getLogger(__name__) # TODO(stephenfin): Add InstanceHelperMixin class SecgroupsFullstack(testscenarios.WithScenarios, test.TestCase): """Tests for security groups TODO: describe security group API TODO: define scope """ REQUIRES_LOCKING = True _image_ref_parameter = 'imageRef' _flavor_ref_parameter = 'flavorRef' # This test uses ``testscenarios`` which matrix multiplies the # test across the scenarios listed below setting the attributes # in the dictionary on ``self`` for each scenario. 
scenarios = [ ('v2', { 'api_major_version': 'v2'}), # test v2.1 base microversion ('v2_1', { 'api_major_version': 'v2.1'}), ] def setUp(self): super(SecgroupsFullstack, self).setUp() self.useFixture(policy_fixture.RealPolicyFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture()) self.api = api_fixture.api # the image fake backend needed for image discovery nova.tests.unit.image.fake.stub_out_image_service(self) # TODO(sdague): refactor this method into the API client, we're # going to use it a lot def _build_minimal_create_server_request(self, name): server = {} image = self.api.get_images()[0] LOG.info("Image: %s", image) if self._image_ref_parameter in image: image_href = image[self._image_ref_parameter] else: image_href = image['id'] image_href = 'http://fake.server/%s' % image_href # We now have a valid imageId server[self._image_ref_parameter] = image_href # Set a valid flavorId flavor = self.api.get_flavors()[1] server[self._flavor_ref_parameter] = ('http://fake.server/%s' % flavor['id']) server['name'] = name return server def test_security_group_fuzz(self): """Test security group doesn't explode with a 500 on bad input. Originally reported with bug https://bugs.launchpad.net/nova/+bug/1239723 """ server = self._build_minimal_create_server_request("sg-fuzz") # security groups must be passed as a list, this is an invalid # format. The jsonschema in v2.1 caught it automatically, but # in v2 we used to throw a 500. server['security_groups'] = {"name": "sec"} resp = self.api.api_post('/servers', {'server': server}, check_response_status=False) self.assertEqual(400, resp.status) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/wsgi/test_servers.py0000664000175000017500000004661600000000000023125 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from nova.policies import base as base_policies from nova.policies import servers as servers_policies from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional.api import client as api_client from nova.tests.functional import fixtures as func_fixtures from nova.tests.functional import integrated_helpers from nova.tests.unit.image import fake as fake_image from nova.tests.unit import policy_fixture class ServersPreSchedulingTestCase(test.TestCase, integrated_helpers.InstanceHelperMixin): """Tests for the servers API with unscheduled instances. With cellsv2 an instance is not written to an instance table in the cell database until it has been scheduled to a cell. This means we need to be careful to ensure the instance can still be represented before that point. NOTE(alaski): The above is the desired future state, this test class is here to confirm that the behavior does not change as the transition is made. This test class starts the wsgi stack for the nova api service, and uses an in memory database for persistence. It does not allow requests to get past scheduling. 
""" api_major_version = 'v2.1' def setUp(self): super(ServersPreSchedulingTestCase, self).setUp() fake_image.stub_out_image_service(self) self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(nova_fixtures.NoopConductorFixture()) self.useFixture(nova_fixtures.NeutronFixture(self)) self.useFixture(func_fixtures.PlacementFixture()) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api self.api.microversion = 'latest' self.useFixture(nova_fixtures.SingleCellSimple( instances_created=False)) def test_instance_from_buildrequest(self): self.useFixture(nova_fixtures.AllServicesCurrent()) image_ref = fake_image.get_valid_image_id() body = { 'server': { 'name': 'foo', 'imageRef': image_ref, 'flavorRef': '1', 'networks': 'none', } } create_resp = self.api.api_post('servers', body) get_resp = self.api.api_get('servers/%s' % create_resp.body['server']['id']) flavor_get_resp = self.api.api_get('flavors/%s' % body['server']['flavorRef']) server = get_resp.body['server'] # Validate a few things self.assertEqual('foo', server['name']) self.assertEqual(image_ref, server['image']['id']) self.assertEqual(flavor_get_resp.body['flavor']['name'], server['flavor']['original_name']) self.assertEqual('', server['hostId']) self.assertIsNone(server['OS-SRV-USG:launched_at']) self.assertIsNone(server['OS-SRV-USG:terminated_at']) self.assertFalse(server['locked']) self.assertEqual([], server['tags']) self.assertEqual('scheduling', server['OS-EXT-STS:task_state']) self.assertEqual('building', server['OS-EXT-STS:vm_state']) self.assertEqual('BUILD', server['status']) def test_instance_from_buildrequest_old_service(self): image_ref = fake_image.get_valid_image_id() body = { 'server': { 'name': 'foo', 'imageRef': image_ref, 'flavorRef': '1', 'networks': 'none', } } create_resp = self.api.api_post('servers', body) get_resp = self.api.api_get('servers/%s' % create_resp.body['server']['id']) flavor_get_resp = self.api.api_get('flavors/%s' % body['server']['flavorRef']) server = get_resp.body['server'] # Just validate some basics self.assertEqual('foo', server['name']) self.assertEqual(image_ref, server['image']['id']) self.assertEqual(flavor_get_resp.body['flavor']['name'], server['flavor']['original_name']) self.assertEqual('', server['hostId']) self.assertIsNone(server['OS-SRV-USG:launched_at']) self.assertIsNone(server['OS-SRV-USG:terminated_at']) self.assertFalse(server['locked']) self.assertEqual([], server['tags']) self.assertEqual('scheduling', server['OS-EXT-STS:task_state']) self.assertEqual('building', server['OS-EXT-STS:vm_state']) self.assertEqual('BUILD', server['status']) def test_delete_instance_from_buildrequest(self): self.useFixture(nova_fixtures.AllServicesCurrent()) image_ref = fake_image.get_valid_image_id() body = { 'server': { 'name': 'foo', 'imageRef': image_ref, 'flavorRef': '1', 'networks': 'none', } } create_resp = self.api.api_post('servers', body) self.api.api_delete('servers/%s' % create_resp.body['server']['id']) get_resp = self.api.api_get('servers/%s' % create_resp.body['server']['id'], check_response_status=False) self.assertEqual(404, get_resp.status) def test_delete_instance_from_buildrequest_old_service(self): image_ref = fake_image.get_valid_image_id() body = { 'server': { 'name': 'foo', 'imageRef': image_ref, 'flavorRef': '1', 'networks': 'none', } } create_resp = self.api.api_post('servers', body) self.api.api_delete('servers/%s' % create_resp.body['server']['id']) get_resp = self.api.api_get('servers/%s' % 
create_resp.body['server']['id'], check_response_status=False) self.assertEqual(404, get_resp.status) def _test_instance_list_from_buildrequests(self): image_ref = fake_image.get_valid_image_id() body = { 'server': { 'name': 'foo', 'imageRef': image_ref, 'flavorRef': '1', 'networks': 'none', } } inst1 = self.api.api_post('servers', body) body['server']['name'] = 'bar' inst2 = self.api.api_post('servers', body) list_resp = self.api.get_servers() # Default sort is created_at desc, so last created is first self.assertEqual(2, len(list_resp)) self.assertEqual(inst2.body['server']['id'], list_resp[0]['id']) self.assertEqual('bar', list_resp[0]['name']) self.assertEqual(inst1.body['server']['id'], list_resp[1]['id']) self.assertEqual('foo', list_resp[1]['name']) # Change the sort order list_resp = self.api.api_get( 'servers/detail?sort_key=created_at&sort_dir=asc') list_resp = list_resp.body['servers'] self.assertEqual(2, len(list_resp)) self.assertEqual(inst1.body['server']['id'], list_resp[0]['id']) self.assertEqual('foo', list_resp[0]['name']) self.assertEqual(inst2.body['server']['id'], list_resp[1]['id']) self.assertEqual('bar', list_resp[1]['name']) def test_instance_list_from_buildrequests(self): self.useFixture(nova_fixtures.AllServicesCurrent()) self._test_instance_list_from_buildrequests() def test_instance_list_from_buildrequests_old_service(self): self._test_instance_list_from_buildrequests() def test_instance_list_from_buildrequests_with_tags(self): """Creates two servers with two tags each, where the 2nd tag (tag2) is the only intersection between the tags in both servers. This is used to test the various tags filters working in the BuildRequestList. """ self.useFixture(nova_fixtures.AllServicesCurrent()) image_ref = fake_image.get_valid_image_id() body = { 'server': { 'name': 'foo', 'imageRef': image_ref, 'flavorRef': '1', 'networks': 'none', 'tags': ['tag1', 'tag2'] } } inst1 = self.api.api_post('servers', body) body['server']['name'] = 'bar' body['server']['tags'] = ['tag2', 'tag3'] inst2 = self.api.api_post('servers', body) # list servers using tags=tag1,tag2 list_resp = self.api.api_get( 'servers/detail?tags=tag1,tag2') list_resp = list_resp.body['servers'] self.assertEqual(1, len(list_resp)) self.assertEqual(inst1.body['server']['id'], list_resp[0]['id']) self.assertEqual('foo', list_resp[0]['name']) # list servers using tags-any=tag1,tag3 list_resp = self.api.api_get( 'servers/detail?tags-any=tag1,tag3') list_resp = list_resp.body['servers'] self.assertEqual(2, len(list_resp)) # Default sort is created_at desc, so last created is first self.assertEqual(inst2.body['server']['id'], list_resp[0]['id']) self.assertEqual('bar', list_resp[0]['name']) self.assertEqual(inst1.body['server']['id'], list_resp[1]['id']) self.assertEqual('foo', list_resp[1]['name']) # list servers using not-tags=tag1,tag2 list_resp = self.api.api_get( 'servers/detail?not-tags=tag1,tag2') list_resp = list_resp.body['servers'] self.assertEqual(1, len(list_resp)) self.assertEqual(inst2.body['server']['id'], list_resp[0]['id']) self.assertEqual('bar', list_resp[0]['name']) # list servers using not-tags-any=tag1,tag3 list_resp = self.api.api_get( 'servers/detail?not-tags-any=tag1,tag3') list_resp = list_resp.body['servers'] self.assertEqual(0, len(list_resp)) def test_bfv_delete_build_request_pre_scheduling(self): cinder = self.useFixture( nova_fixtures.CinderFixture(self)) # This makes the get_minimum_version_all_cells check say we're running # the latest of everything. 
self.useFixture(nova_fixtures.AllServicesCurrent()) volume_id = nova_fixtures.CinderFixture.IMAGE_BACKED_VOL server = self.api.post_server({ 'server': { 'flavorRef': '1', 'name': 'test_bfv_delete_build_request_pre_scheduling', 'networks': 'none', 'block_device_mapping_v2': [ { 'boot_index': 0, 'uuid': volume_id, 'source_type': 'volume', 'destination_type': 'volume' }, ] } }) # Since _IntegratedTestBase uses the CastAsCall fixture, when we # get the server back we know all of the volume stuff should be done. self.assertIn(volume_id, cinder.volume_ids_for_instance(server['id'])) # Now delete the server, which should go through the "local delete" # code in the API, find the build request and delete it along with # detaching the volume from the instance. self.api.delete_server(server['id']) # The volume should no longer have any attachments as instance delete # should have removed them. self.assertNotIn(volume_id, cinder.volume_ids_for_instance(server['id'])) def test_instance_list_build_request_marker_ip_filter(self): """Tests listing instances with a marker that is in the build_requests table and also filtering by ip, in which case the ip filter can't possibly find anything because instances that are not yet scheduled can't have ips, but the point is to find the marker in the build requests table. """ self.useFixture(nova_fixtures.AllServicesCurrent()) # Create the server. body = { 'server': { 'name': 'test_instance_list_build_request_marker_ip_filter', 'imageRef': fake_image.get_valid_image_id(), 'flavorRef': '1', 'networks': 'none' } } server = self.api.post_server(body) # Now list servers using the one we just created as the marker and # include the ip filter (see bug 1764685). search_opts = { 'marker': server['id'], 'ip': '192.168.159.150' } servers = self.api.get_servers(search_opts=search_opts) # We'll get 0 servers back because there are none with the specified # ip filter. self.assertEqual(0, len(servers)) class EnforceVolumeBackedForZeroDiskFlavorTestCase( test.TestCase, integrated_helpers.InstanceHelperMixin): """Tests for the os_compute_api:servers:create:zero_disk_flavor policy rule These tests explicitly rely on microversion 2.1. """ def setUp(self): super(EnforceVolumeBackedForZeroDiskFlavorTestCase, self).setUp() fake_image.stub_out_image_service(self) self.addCleanup(fake_image.FakeImageService_reset) self.useFixture(nova_fixtures.NeutronFixture(self)) self.policy_fixture = ( self.useFixture(policy_fixture.RealPolicyFixture())) api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( api_version='v2.1')) self.api = api_fixture.api self.admin_api = api_fixture.admin_api # We need a zero disk flavor for the tests in this class. flavor_req = { "flavor": { "name": "zero-disk-flavor", "ram": 1024, "vcpus": 2, "disk": 0 } } self.zero_disk_flavor = self.admin_api.post_flavor(flavor_req) def test_create_image_backed_server_with_zero_disk_fails(self): """Tests that a non-admin trying to create an image-backed server using a flavor with 0 disk will result in a 403 error when rule os_compute_api:servers:create:zero_disk_flavor is set to admin-only. 
""" self.policy_fixture.set_rules({ servers_policies.ZERO_DISK_FLAVOR: base_policies.RULE_ADMIN_API}, overwrite=False) server_req = self._build_server( image_uuid=fake_image.AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID, flavor_id=self.zero_disk_flavor['id']) ex = self.assertRaises(api_client.OpenStackApiException, self.api.post_server, {'server': server_req}) self.assertIn('Only volume-backed servers are allowed for flavors ' 'with zero disk.', six.text_type(ex)) self.assertEqual(403, ex.response.status_code) def test_create_volume_backed_server_with_zero_disk_allowed(self): """Tests that creating a volume-backed server with a zero-root disk flavor will be allowed for admins. """ # For this test, we want to start conductor and the scheduler but # we don't start compute so that scheduling fails; we don't really # care about successfully building an active server here. self.useFixture(func_fixtures.PlacementFixture()) self.useFixture(nova_fixtures.CinderFixture(self)) self.start_service('conductor') self.start_service('scheduler') server_req = self._build_server( flavor_id=self.zero_disk_flavor['id']) server_req.pop('imageRef', None) server_req['block_device_mapping_v2'] = [{ 'uuid': nova_fixtures.CinderFixture.IMAGE_BACKED_VOL, 'source_type': 'volume', 'destination_type': 'volume', 'boot_index': 0 }] server = self.admin_api.post_server({'server': server_req}) server = self._wait_for_state_change(server, 'ERROR') self.assertIn('No valid host', server['fault']['message']) class ResizeCheckInstanceHostTestCase( integrated_helpers.ProviderUsageBaseTestCase): """Tests for the check_instance_host decorator used during resize/migrate. """ compute_driver = 'fake.MediumFakeDriver' def test_resize_source_compute_validation(self, resize=True): flavors = self.api.get_flavors() # Start up a compute on which to create a server. self._start_compute('host1') # Create a server on host1. server = self._build_server( flavor_id=flavors[0]['id'], networks='none') server = self.api.post_server({'server': server}) server = self._wait_for_state_change(server, 'ACTIVE') # Check if we're cold migrating. if resize: req = {'resize': {'flavorRef': flavors[1]['id']}} else: req = {'migrate': None} # Start up a destination compute. self._start_compute('host2') # First, force down the source compute service. source_service = self.api.get_services( binary='nova-compute', host='host1')[0] self.api.put_service(source_service['id'], {'forced_down': True}) # Try the operation and it should fail with a 409 response. ex = self.assertRaises(api_client.OpenStackApiException, self.api.post_server_action, server['id'], req) self.assertEqual(409, ex.response.status_code) self.assertIn('Service is unavailable at this time', six.text_type(ex)) # Now bring the source compute service up but disable it. The operation # should be allowed in this case since the service is up. self.api.put_service(source_service['id'], {'forced_down': False, 'status': 'disabled'}) self.api.post_server_action(server['id'], req) server = self._wait_for_state_change(server, 'VERIFY_RESIZE') # Revert the resize to get the server back to host1. self.api.post_server_action(server['id'], {'revertResize': None}) server = self._wait_for_state_change(server, 'ACTIVE') # Now shelve offload the server so it does not have a host. self.api.post_server_action(server['id'], {'shelve': None}) self._wait_for_server_parameter(server, {'status': 'SHELVED_OFFLOADED', 'OS-EXT-SRV-ATTR:host': None}) # Now try the operation again and it should fail with a different # 409 response. 
ex = self.assertRaises(api_client.OpenStackApiException, self.api.post_server_action, server['id'], req) self.assertEqual(409, ex.response.status_code) # This error comes from check_instance_state which is processed before # check_instance_host. self.assertIn('while it is in vm_state shelved_offloaded', six.text_type(ex)) def test_cold_migrate_source_compute_validation(self): self.test_resize_source_compute_validation(resize=False) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/functional/wsgi/test_services.py0000664000175000017500000004634300000000000023254 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os_resource_classes as orc import os_traits import six from nova import context as nova_context from nova import exception from nova import objects from nova.tests.functional.api import client as api_client from nova.tests.functional import integrated_helpers from nova import utils class TestServicesAPI(integrated_helpers.ProviderUsageBaseTestCase): compute_driver = 'fake.SmallFakeDriver' def test_compute_service_delete_ensure_related_cleanup(self): """Tests deleting a compute service and the related cleanup associated with that like the compute_nodes table entry, removing the host from any aggregates, the host mapping in the API DB and the associated resource provider in Placement. """ compute = self._start_compute('host1') # Make sure our compute host is represented as expected. services = self.admin_api.get_services(binary='nova-compute') self.assertEqual(1, len(services)) service = services[0] # Now create a host aggregate and add our host to it. aggregate = self.admin_api.post_aggregate( {'aggregate': {'name': 'agg1'}}) self.admin_api.add_host_to_aggregate(aggregate['id'], service['host']) # Make sure the host is in the aggregate. aggregate = self.admin_api.api_get( '/os-aggregates/%s' % aggregate['id']).body['aggregate'] self.assertEqual([service['host']], aggregate['hosts']) rp_uuid = self._get_provider_uuid_by_host(service['host']) # We'll know there is a host mapping implicitly if os-hypervisors # returned something in _get_provider_uuid_by_host, but let's also # make sure the host mapping is there like we expect. ctxt = nova_context.get_admin_context() objects.HostMapping.get_by_host(ctxt, service['host']) # Make sure there is a resource provider for that compute node based # on the uuid. resp = self.placement_api.get('/resource_providers/%s' % rp_uuid) self.assertEqual(200, resp.status) # Make sure the resource provider has inventory. inventories = self._get_provider_inventory(rp_uuid) # Expect a minimal set of inventory for the fake virt driver. for resource_class in [orc.VCPU, orc.MEMORY_MB, orc.DISK_GB]: self.assertIn(resource_class, inventories) # Now create a server so that the resource provider has some allocation # records. 
flavor = self.api.get_flavors()[0] server = self._boot_and_check_allocations(flavor, service['host']) # Now the fun part, delete the compute service and make sure related # resources are cleaned up, like the compute node, host mapping, and # resource provider. We have to first stop the compute service so # it doesn't recreate the compute node during the # update_available_resource periodic task. self.admin_api.put_service(service['id'], {'forced_down': True}) compute.stop() # The first attempt should fail since there is an instance on the # compute host. ex = self.assertRaises(api_client.OpenStackApiException, self.admin_api.api_delete, '/os-services/%s' % service['id']) self.assertIn('Unable to delete compute service that is hosting ' 'instances.', six.text_type(ex)) self.assertEqual(409, ex.response.status_code) # Now delete the instance and wait for it to be gone. self._delete_and_check_allocations(server) # Now we can delete the service. self.admin_api.api_delete('/os-services/%s' % service['id']) # Make sure the service is deleted. services = self.admin_api.get_services(binary='nova-compute') self.assertEqual(0, len(services)) # Make sure the host was removed from the aggregate. aggregate = self.admin_api.api_get( '/os-aggregates/%s' % aggregate['id']).body['aggregate'] self.assertEqual([], aggregate['hosts']) # Trying to get the hypervisor should result in a 404. self.admin_api.api_get( 'os-hypervisors?hypervisor_hostname_pattern=%s' % service['host'], check_response_status=[404]) # The host mapping should also be gone. self.assertRaises(exception.HostMappingNotFound, objects.HostMapping.get_by_host, ctxt, service['host']) # And finally, the resource provider should also be gone. The API # will perform a cascading delete of the resource provider inventory # and allocation information. resp = self.placement_api.get('/resource_providers/%s' % rp_uuid) self.assertEqual(404, resp.status) def test_evacuate_then_delete_compute_service(self): """Tests a scenario where a server is created on a host, the host goes down, the server is evacuated to another host, and then the source host compute service is deleted. After that the deleted compute service is restarted. Related placement resources are checked throughout. """ # Create our source host that we will evacuate *from* later. host1 = self._start_compute('host1') # Create a server which will go on host1 since it is the only host. flavor = self.api.get_flavors()[0] server = self._boot_and_check_allocations(flavor, 'host1') # Get the compute service record for host1 so we can manage it. service = self.admin_api.get_services( binary='nova-compute', host='host1')[0] # Get the corresponding resource provider uuid for host1. rp_uuid = self._get_provider_uuid_by_host(service['host']) # Make sure there is a resource provider for that compute node based # on the uuid. resp = self.placement_api.get('/resource_providers/%s' % rp_uuid) self.assertEqual(200, resp.status) # Down the compute service for host1 so we can evacuate from it. self.admin_api.put_service(service['id'], {'forced_down': True}) host1.stop() # Start another host and trigger the server evacuate to that host. self._start_compute('host2') self.admin_api.post_server_action(server['id'], {'evacuate': {}}) # The host does not change until after the status is changed to ACTIVE # so wait for both parameters. 
self._wait_for_server_parameter(server, { 'status': 'ACTIVE', 'OS-EXT-SRV-ATTR:host': 'host2'}) # Delete the compute service for host1 and check the related # placement resources for that host. self.admin_api.api_delete('/os-services/%s' % service['id']) # Make sure the service is gone. services = self.admin_api.get_services( binary='nova-compute', host='host1') self.assertEqual(0, len(services), services) # FIXME(mriedem): This is bug 1829479 where the compute service is # deleted but the resource provider is not because there are still # allocations against the provider from the evacuated server. resp = self.placement_api.get('/resource_providers/%s' % rp_uuid) self.assertEqual(200, resp.status) self.assertFlavorMatchesUsage(rp_uuid, flavor) # Try to restart the host1 compute service to create a new resource # provider. self.restart_compute_service(host1) # FIXME(mriedem): This is bug 1817833 where restarting the now-deleted # compute service attempts to create a new resource provider with a # new uuid but the same name which results in a conflict. The service # does not die, however, because _update_available_resource_for_node # catches and logs but does not re-raise the error. log_output = self.stdlog.logger.output self.assertIn('Error updating resources for node host1.', log_output) self.assertIn('Failed to create resource provider host1', log_output) def test_migrate_confirm_after_deleted_source_compute(self): """Tests a scenario where a server is cold migrated and while in VERIFY_RESIZE status the admin attempts to delete the source compute and then the user tries to confirm the resize. """ # Start a compute service and create a server there. self._start_compute('host1') host1_rp_uuid = self._get_provider_uuid_by_host('host1') flavor = self.api.get_flavors()[0] server = self._boot_and_check_allocations(flavor, 'host1') # Start a second compute service so we can cold migrate there. self._start_compute('host2') host2_rp_uuid = self._get_provider_uuid_by_host('host2') # Cold migrate the server to host2. self._migrate_and_check_allocations( server, flavor, host1_rp_uuid, host2_rp_uuid) # Delete the source compute service. service = self.admin_api.get_services( binary='nova-compute', host='host1')[0] # We expect the delete request to fail with a 409 error because of the # instance in VERIFY_RESIZE status even though that instance is marked # as being on host2 now. ex = self.assertRaises(api_client.OpenStackApiException, self.admin_api.api_delete, '/os-services/%s' % service['id']) self.assertEqual(409, ex.response.status_code) self.assertIn('Unable to delete compute service that has in-progress ' 'migrations', six.text_type(ex)) self.assertIn('There are 1 in-progress migrations involving the host', self.stdlog.logger.output) # The provider is still around because we did not delete the service. resp = self.placement_api.get('/resource_providers/%s' % host1_rp_uuid) self.assertEqual(200, resp.status) self.assertFlavorMatchesUsage(host1_rp_uuid, flavor) # Now try to confirm the migration. self._confirm_resize(server) # Delete the host1 service since the migration is confirmed and the # server is on host2. self.admin_api.api_delete('/os-services/%s' % service['id']) # The host1 resource provider should be gone. 
resp = self.placement_api.get('/resource_providers/%s' % host1_rp_uuid) self.assertEqual(404, resp.status) def test_resize_revert_after_deleted_source_compute(self): """Tests a scenario where a server is resized and while in VERIFY_RESIZE status the admin attempts to delete the source compute and then the user tries to revert the resize. """ # Start a compute service and create a server there. self._start_compute('host1') host1_rp_uuid = self._get_provider_uuid_by_host('host1') flavors = self.api.get_flavors() flavor1 = flavors[0] flavor2 = flavors[1] server = self._boot_and_check_allocations(flavor1, 'host1') # Start a second compute service so we can resize there. self._start_compute('host2') host2_rp_uuid = self._get_provider_uuid_by_host('host2') # Resize the server to host2. self._resize_and_check_allocations( server, flavor1, flavor2, host1_rp_uuid, host2_rp_uuid) # Delete the source compute service. service = self.admin_api.get_services( binary='nova-compute', host='host1')[0] # We expect the delete request to fail with a 409 error because of the # instance in VERIFY_RESIZE status even though that instance is marked # as being on host2 now. ex = self.assertRaises(api_client.OpenStackApiException, self.admin_api.api_delete, '/os-services/%s' % service['id']) self.assertEqual(409, ex.response.status_code) self.assertIn('Unable to delete compute service that has in-progress ' 'migrations', six.text_type(ex)) self.assertIn('There are 1 in-progress migrations involving the host', self.stdlog.logger.output) # The provider is still around because we did not delete the service. resp = self.placement_api.get('/resource_providers/%s' % host1_rp_uuid) self.assertEqual(200, resp.status) self.assertFlavorMatchesUsage(host1_rp_uuid, flavor1) # Now revert the resize. self._revert_resize(server) self.assertFlavorMatchesUsage(host1_rp_uuid, flavor1) zero_flavor = {'vcpus': 0, 'ram': 0, 'disk': 0, 'extra_specs': {}} self.assertFlavorMatchesUsage(host2_rp_uuid, zero_flavor) # Delete the host2 service since the migration is reverted and the # server is on host1 again. service2 = self.admin_api.get_services( binary='nova-compute', host='host2')[0] self.admin_api.api_delete('/os-services/%s' % service2['id']) # The host2 resource provider should be gone. resp = self.placement_api.get('/resource_providers/%s' % host2_rp_uuid) self.assertEqual(404, resp.status) class ComputeStatusFilterTest(integrated_helpers.ProviderUsageBaseTestCase): """Tests the API, compute service and Placement interaction with the COMPUTE_STATUS_DISABLED trait when a compute service is enable/disabled. This version of the test uses the 2.latest microversion for testing the 2.53+ behavior of the PUT /os-services/{service_id} API. """ compute_driver = 'fake.SmallFakeDriver' def _update_service(self, service, disabled, forced_down=None): """Update the service using the 2.53 request schema. :param service: dict representing the service resource in the API :param disabled: True if the service should be disabled, False if the service should be enabled :param forced_down: Optionally change the forced_down value. """ status = 'disabled' if disabled else 'enabled' req = {'status': status} if forced_down is not None: req['forced_down'] = forced_down self.admin_api.put_service(service['id'], req) def test_compute_status_filter(self): """Tests the compute_status_filter placement request filter""" # Start a compute service so a compute node and resource provider is # created. 
compute = self._start_compute('host1') # Get the UUID of the resource provider that was created. rp_uuid = self._get_provider_uuid_by_host('host1') # Get the service from the compute API. services = self.admin_api.get_services(binary='nova-compute', host='host1') self.assertEqual(1, len(services)) service = services[0] # At this point, the service should be enabled and the # COMPUTE_STATUS_DISABLED trait should not be set on the # resource provider in placement. self.assertEqual('enabled', service['status']) rp_traits = self._get_provider_traits(rp_uuid) trait = os_traits.COMPUTE_STATUS_DISABLED self.assertNotIn(trait, rp_traits) # Now disable the compute service via the API. self._update_service(service, disabled=True) # The update to placement should be synchronous so check the provider # traits and COMPUTE_STATUS_DISABLED should be set. rp_traits = self._get_provider_traits(rp_uuid) self.assertIn(trait, rp_traits) # Try creating a server which should fail because nothing is available. networks = [{'port': self.neutron.port_1['id']}] server_req = self._build_server(networks=networks) server = self.api.post_server({'server': server_req}) server = self._wait_for_state_change(server, 'ERROR') # There should be a NoValidHost fault recorded. self.assertIn('fault', server) self.assertIn('No valid host', server['fault']['message']) # Now enable the service and the trait should be gone. self._update_service(service, disabled=False) rp_traits = self._get_provider_traits(rp_uuid) self.assertNotIn(trait, rp_traits) # Try creating another server and it should be OK. server = self.api.post_server({'server': server_req}) self._wait_for_state_change(server, 'ACTIVE') # Stop, force-down and disable the service so the API cannot call # the compute service to sync the trait. compute.stop() self._update_service(service, disabled=True, forced_down=True) # The API should have logged a message about the service being down. self.assertIn('Compute service on host host1 is down. The ' 'COMPUTE_STATUS_DISABLED trait will be synchronized ' 'when the service is restarted.', self.stdlog.logger.output) # The trait should not be on the provider even though the node is # disabled. rp_traits = self._get_provider_traits(rp_uuid) self.assertNotIn(trait, rp_traits) # Restart the compute service which should sync and set the trait on # the provider in placement. self.restart_compute_service(compute) rp_traits = self._get_provider_traits(rp_uuid) self.assertIn(trait, rp_traits) class ComputeStatusFilterTest211(ComputeStatusFilterTest): """Extends ComputeStatusFilterTest and uses the 2.11 API for the legacy os-services disable/enable/force-down API behavior """ microversion = '2.11' def _update_service(self, service, disabled, forced_down=None): """Update the service using the 2.11 request schema. :param service: dict representing the service resource in the API :param disabled: True if the service should be disabled, False if the service should be enabled :param forced_down: Optionally change the forced_down value. """ # Before 2.53 the service is uniquely identified by host and binary. body = { 'host': service['host'], 'binary': service['binary'] } # Handle forced_down first if provided since the enable/disable # behavior in the API depends on it. 
if forced_down is not None: body['forced_down'] = forced_down self.admin_api.api_put('/os-services/force-down', body) if disabled: self.admin_api.api_put('/os-services/disable', body) else: self.admin_api.api_put('/os-services/enable', body) def _get_provider_uuid_by_host(self, host): # We have to temporarily mutate to 2.53 to get the hypervisor UUID. with utils.temporary_mutation(self.admin_api, microversion='2.53'): return super(ComputeStatusFilterTest211, self)._get_provider_uuid_by_host(host) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/json_ref.py0000664000175000017500000000453500000000000017061 0ustar00zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from oslo_serialization import jsonutils def _resolve_ref(ref, base_path): file_path, _, json_path = ref.partition('#') if json_path: raise NotImplementedError('JSON refs with JSON path after the "#" is ' 'not yet supported') path = os.path.join(base_path, file_path) # binary mode is needed due to bug/1515231 with open(path, 'r+b') as f: ref_value = jsonutils.load(f) base_path = os.path.dirname(path) res = resolve_refs(ref_value, base_path) return res def resolve_refs(obj_with_refs, base_path): if isinstance(obj_with_refs, list): for i, item in enumerate(obj_with_refs): obj_with_refs[i] = resolve_refs(item, base_path) elif isinstance(obj_with_refs, dict): if '$ref' in obj_with_refs.keys(): ref = obj_with_refs.pop('$ref') resolved_ref = _resolve_ref(ref, base_path) # the rest of the ref dict contains overrides for the ref. Resolve # refs in the overrides then apply those overrides recursively # here. resolved_overrides = resolve_refs(obj_with_refs, base_path) _update_dict_recursively(resolved_ref, resolved_overrides) return resolved_ref else: for key, value in obj_with_refs.items(): obj_with_refs[key] = resolve_refs(value, base_path) else: # scalar, nothing to do pass return obj_with_refs def _update_dict_recursively(d, update): """Update dict d recursively with data from dict update""" for k, v in update.items(): if k in d and isinstance(d[k], dict) and isinstance(v, dict): _update_dict_recursively(d[k], v) else: d[k] = v ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5144684 nova-21.2.4/nova/tests/unit/0000775000175000017500000000000000000000000015652 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/__init__.py0000664000175000017500000000213300000000000017762 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ :mod:`nova.tests.unit` -- Nova Unittests ===================================================== .. automodule:: nova.tests.unit :platform: Unix """ from nova import objects # NOTE(comstud): Make sure we have all of the objects loaded. We do this # at module import time, because we may be using mock decorators in our # tests that run at import time. objects.register_all() ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5144684 nova-21.2.4/nova/tests/unit/accelerator/0000775000175000017500000000000000000000000020136 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/accelerator/__init__.py0000664000175000017500000000000000000000000022235 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/accelerator/test_cyborg.py0000664000175000017500000003706200000000000023044 0ustar00zuulzuul00000000000000# Copyright 2019 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import itertools import mock from keystoneauth1 import exceptions as ks_exc from requests.models import Response from oslo_serialization import jsonutils from nova.accelerator import cyborg from nova import context from nova import exception from nova.objects import request_spec from nova import test from nova.tests.unit import fake_requests class CyborgTestCase(test.NoDBTestCase): def setUp(self): super(CyborgTestCase, self).setUp() self.context = context.get_admin_context() self.client = cyborg.get_client(self.context) def test_get_client(self): # Set up some ksa conf options region = 'MyRegion' endpoint = 'http://example.com:1234' self.flags(group='cyborg', region_name=region, endpoint_override=endpoint) ctxt = context.get_admin_context() client = cyborg.get_client(ctxt) # Dig into the ksa adapter a bit to ensure the conf options got through # We don't bother with a thorough test of get_ksa_adapter - that's done # elsewhere - this is just sanity-checking that we spelled things right # in the conf setup. 
self.assertEqual('accelerator', client._client.service_type) self.assertEqual(region, client._client.region_name) self.assertEqual(endpoint, client._client.endpoint_override) @mock.patch('keystoneauth1.adapter.Adapter.get') def test_call_cyborg(self, mock_ksa_get): mock_ksa_get.return_value = 1 # dummy value resp, err_msg = self.client._call_cyborg( self.client._client.get, self.client.DEVICE_PROFILE_URL) self.assertEqual(resp, 1) self.assertIsNone(err_msg) @mock.patch('keystoneauth1.adapter.Adapter.get') def test_call_cyborg_keystone_error(self, mock_ksa_get): mock_ksa_get.side_effect = ks_exc.ClientException resp, err_msg = self.client._call_cyborg( self.client._client.get, self.client.DEVICE_PROFILE_URL) self.assertIsNone(resp) expected_err = 'Could not communicate with Cyborg.' self.assertIn(expected_err, err_msg) @mock.patch('keystoneauth1.adapter.Adapter.get') def test_call_cyborg_bad_response(self, mock_ksa_get): mock_ksa_get.return_value = None resp, err_msg = self.client._call_cyborg( self.client._client.get, self.client.DEVICE_PROFILE_URL) self.assertIsNone(resp) expected_err = 'Invalid response from Cyborg:' self.assertIn(expected_err, err_msg) @mock.patch('nova.accelerator.cyborg._CyborgClient._call_cyborg') @mock.patch.object(Response, 'json') def test_get_device_profile_list(self, mock_resp_json, mock_call_cyborg): mock_call_cyborg.return_value = Response(), None mock_resp_json.return_value = {'device_profiles': 1} # dummy value ret = self.client._get_device_profile_list(dp_name='mydp') self.assertEqual(ret, 1) @mock.patch('nova.accelerator.cyborg._CyborgClient._call_cyborg') def test_get_device_profile_list_bad_response(self, mock_call_cyborg): "If Cyborg cannot be reached or returns bad response, raise exception." mock_call_cyborg.return_value = (None, 'Some error') self.assertRaises(exception.DeviceProfileError, self.client._get_device_profile_list, dp_name='mydp') @mock.patch('nova.accelerator.cyborg._CyborgClient.' '_get_device_profile_list') def test_get_device_profile_groups(self, mock_get_dp_list): mock_get_dp_list.return_value = [{ "groups": [{ "resources:FPGA": "1", "trait:CUSTOM_FPGA_CARD": "required" }], "name": "mydp", "uuid": "307076c2-5aed-4f72-81e8-1b42f9aa2ec6" }] rg = request_spec.RequestGroup(requester_id='device_profile_0') rg.add_resource(rclass='FPGA', amount='1') rg.add_trait(trait_name='CUSTOM_FPGA_CARD', trait_type='required') expected_groups = [rg] actual_groups = self.client.get_device_profile_groups('mydp') self.assertEqual(len(expected_groups), len(actual_groups)) self.assertEqual(expected_groups[0].__dict__, actual_groups[0].__dict__) @mock.patch('nova.accelerator.cyborg._CyborgClient.' '_get_device_profile_list') def test_get_device_profile_groups_no_dp(self, mock_get_dp_list): # If the return value has no device profiles, raise exception mock_get_dp_list.return_value = None self.assertRaises(exception.DeviceProfileError, self.client.get_device_profile_groups, dp_name='mydp') @mock.patch('nova.accelerator.cyborg._CyborgClient.' '_get_device_profile_list') def test_get_device_profile_groups_many_dp(self, mock_get_dp_list): # If the returned list has more than one dp, raise exception mock_get_dp_list.return_value = [1, 2] self.assertRaises(exception.DeviceProfileError, self.client.get_device_profile_groups, dp_name='mydp') def _get_arqs_and_request_groups(self): arq_common = { # All ARQs for an instance have same device profile name. 
"device_profile_name": "noprog-dp", "device_rp_uuid": "", "hostname": "", "instance_uuid": "", "state": "Initial", } arq_variants = [ {"device_profile_group_id": 0, "uuid": "edbba496-3cc8-4256-94ca-dfe3413348eb"}, {"device_profile_group_id": 1, "uuid": "20125bcb-9f55-4e13-8e8c-3fee30e54cca"}, ] arqs = [dict(arq_common, **variant) for variant in arq_variants] rg_rp_map = { 'device_profile_0': ['c532cf11-02ed-4b03-9dd8-3e9a454131dc'], 'device_profile_1': ['2c332d7b-daaf-4726-a80d-ecf5212da4b8'], } return arqs, rg_rp_map def _get_bound_arqs(self): arqs, rg_rp_map = self._get_arqs_and_request_groups() common = { 'host_name': 'myhost', 'instance_uuid': '15d3acf8-df76-400b-bfc9-484a5208daa1', } bindings = { arqs[0]['uuid']: dict( common, device_rp_uuid=rg_rp_map['device_profile_0'][0]), arqs[1]['uuid']: dict( common, device_rp_uuid=rg_rp_map['device_profile_1'][0]), } bound_arq_common = { "attach_handle_info": { "bus": "01", "device": "00", "domain": "0000", "function": "0" # will vary function ID later }, "attach_handle_type": "PCI", "state": "Bound", # Devic eprofile name is common to all bound ARQs "device_profile_name": arqs[0]["device_profile_name"], **common } bound_arqs = [ {'uuid': arq['uuid'], 'device_profile_group_id': arq['device_profile_group_id'], 'device_rp_uuid': bindings[arq['uuid']]['device_rp_uuid'], **bound_arq_common} for arq in arqs] for index, bound_arq in enumerate(bound_arqs): bound_arq['attach_handle_info']['function'] = index # fix func ID return bindings, bound_arqs @mock.patch('keystoneauth1.adapter.Adapter.post') def test_create_arqs_failure(self, mock_cyborg_post): # If Cyborg returns invalid response, raise exception. mock_cyborg_post.return_value = None self.assertRaises(exception.AcceleratorRequestOpFailed, self.client._create_arqs, dp_name='mydp') @mock.patch('nova.accelerator.cyborg._CyborgClient.' '_create_arqs') def test_create_arq_and_match_rps(self, mock_create_arqs): # Happy path arqs, rg_rp_map = self._get_arqs_and_request_groups() dp_name = arqs[0]["device_profile_name"] mock_create_arqs.return_value = arqs ret_arqs = self.client.create_arqs_and_match_resource_providers( dp_name, rg_rp_map) # Each value in rg_rp_map is a list. We merge them into a single list. expected_rp_uuids = sorted(list( itertools.chain.from_iterable(rg_rp_map.values()))) ret_rp_uuids = sorted([arq['device_rp_uuid'] for arq in ret_arqs]) self.assertEqual(expected_rp_uuids, ret_rp_uuids) @mock.patch('nova.accelerator.cyborg._CyborgClient.' 
'_create_arqs') def test_create_arq_and_match_rps_exception(self, mock_create_arqs): # If Cyborg response does not contain ARQs, raise arqs, rg_rp_map = self._get_arqs_and_request_groups() dp_name = arqs[0]["device_profile_name"] mock_create_arqs.return_value = None self.assertRaises( exception.AcceleratorRequestOpFailed, self.client.create_arqs_and_match_resource_providers, dp_name, rg_rp_map) @mock.patch('keystoneauth1.adapter.Adapter.patch') def test_bind_arqs(self, mock_cyborg_patch): bindings, bound_arqs = self._get_bound_arqs() arq_uuid = bound_arqs[0]['uuid'] patch_list = {} for arq_uuid, binding in bindings.items(): patch = [{"path": "/" + field, "op": "add", "value": value } for field, value in binding.items()] patch_list[arq_uuid] = patch self.client.bind_arqs(bindings) mock_cyborg_patch.assert_called_once_with( self.client.ARQ_URL, json=mock.ANY) called_params = mock_cyborg_patch.call_args.kwargs['json'] self.assertEqual(sorted(called_params), sorted(patch_list)) @mock.patch('nova.accelerator.cyborg._CyborgClient._call_cyborg') def test_bind_arqs_exception(self, mock_call_cyborg): # If Cyborg returns invalid response, raise exception. bindings, _ = self._get_bound_arqs() mock_call_cyborg.return_value = None, 'Some error' self.assertRaises(exception.AcceleratorRequestOpFailed, self.client.bind_arqs, bindings=bindings) @mock.patch('keystoneauth1.adapter.Adapter.get') def test_get_arqs_for_instance(self, mock_cyborg_get): # Happy path, without only_resolved=True _, bound_arqs = self._get_bound_arqs() instance_uuid = bound_arqs[0]['instance_uuid'] query = {"instance": instance_uuid} content = jsonutils.dumps({'arqs': bound_arqs}) resp = fake_requests.FakeResponse(200, content) mock_cyborg_get.return_value = resp ret_arqs = self.client.get_arqs_for_instance(instance_uuid) mock_cyborg_get.assert_called_once_with( self.client.ARQ_URL, params=query) bound_arqs.sort(key=lambda x: x['uuid']) ret_arqs.sort(key=lambda x: x['uuid']) for ret_arq, bound_arq in zip(ret_arqs, bound_arqs): self.assertDictEqual(ret_arq, bound_arq) @mock.patch('keystoneauth1.adapter.Adapter.get') def test_get_arqs_for_instance_exception(self, mock_cyborg_get): # If Cyborg returns an error code, raise exception _, bound_arqs = self._get_bound_arqs() instance_uuid = bound_arqs[0]['instance_uuid'] resp = fake_requests.FakeResponse(404, content='') mock_cyborg_get.return_value = resp self.assertRaises( exception.AcceleratorRequestOpFailed, self.client.get_arqs_for_instance, instance_uuid) @mock.patch('keystoneauth1.adapter.Adapter.get') def test_get_arqs_for_instance_exception_no_resp(self, mock_cyborg_get): # If Cyborg returns an error code, raise exception _, bound_arqs = self._get_bound_arqs() instance_uuid = bound_arqs[0]['instance_uuid'] content = jsonutils.dumps({'noarqs': 'oops'}) resp = fake_requests.FakeResponse(200, content) mock_cyborg_get.return_value = resp self.assertRaisesRegex( exception.AcceleratorRequestOpFailed, 'Cyborg returned no accelerator requests for ', self.client.get_arqs_for_instance, instance_uuid) @mock.patch('keystoneauth1.adapter.Adapter.get') def test_get_arqs_for_instance_all_resolved(self, mock_cyborg_get): # If all ARQs are resolved, return full list _, bound_arqs = self._get_bound_arqs() instance_uuid = bound_arqs[0]['instance_uuid'] query = {"instance": instance_uuid} content = jsonutils.dumps({'arqs': bound_arqs}) resp = fake_requests.FakeResponse(200, content) mock_cyborg_get.return_value = resp ret_arqs = self.client.get_arqs_for_instance( instance_uuid, only_resolved=True) 
mock_cyborg_get.assert_called_once_with( self.client.ARQ_URL, params=query) bound_arqs.sort(key=lambda x: x['uuid']) ret_arqs.sort(key=lambda x: x['uuid']) for ret_arq, bound_arq in zip(ret_arqs, bound_arqs): self.assertDictEqual(ret_arq, bound_arq) @mock.patch('keystoneauth1.adapter.Adapter.get') def test_get_arqs_for_instance_some_resolved(self, mock_cyborg_get): # If only some ARQs are resolved, return just the resolved ones unbound_arqs, _ = self._get_arqs_and_request_groups() _, bound_arqs = self._get_bound_arqs() # Create a mixture of unbound and bound ARQs arqs = [unbound_arqs[0], bound_arqs[0]] instance_uuid = bound_arqs[0]['instance_uuid'] query = {"instance": instance_uuid} content = jsonutils.dumps({'arqs': arqs}) resp = fake_requests.FakeResponse(200, content) mock_cyborg_get.return_value = resp ret_arqs = self.client.get_arqs_for_instance( instance_uuid, only_resolved=True) mock_cyborg_get.assert_called_once_with( self.client.ARQ_URL, params=query) self.assertEqual(ret_arqs, [bound_arqs[0]]) @mock.patch('nova.accelerator.cyborg._CyborgClient._call_cyborg') def test_delete_arqs_for_instance(self, mock_call_cyborg): # Happy path mock_call_cyborg.return_value = ('Some Value', None) instance_uuid = 'edbba496-3cc8-4256-94ca-dfe3413348eb' self.client.delete_arqs_for_instance(instance_uuid) mock_call_cyborg.assert_called_once_with(mock.ANY, self.client.ARQ_URL, params={'instance': instance_uuid}) @mock.patch('nova.accelerator.cyborg._CyborgClient._call_cyborg') def test_delete_arqs_for_instance_exception(self, mock_call_cyborg): # If Cyborg returns invalid response, raise exception. err_msg = 'Some error' mock_call_cyborg.return_value = (None, err_msg) instance_uuid = 'edbba496-3cc8-4256-94ca-dfe3413348eb' exc = self.assertRaises(exception.AcceleratorRequestOpFailed, self.client.delete_arqs_for_instance, instance_uuid) expected_msg = ('Failed to delete accelerator requests: ' + err_msg + ' Instance ' + instance_uuid) self.assertEqual(expected_msg, exc.format_message()) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5144684 nova-21.2.4/nova/tests/unit/api/0000775000175000017500000000000000000000000016423 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/__init__.py0000664000175000017500000000000000000000000020522 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5184684 nova-21.2.4/nova/tests/unit/api/openstack/0000775000175000017500000000000000000000000020412 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/__init__.py0000664000175000017500000000000000000000000022511 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/common.py0000664000175000017500000000323500000000000022257 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils import webob def webob_factory(url): """Factory for removing duplicate webob code from tests.""" base_url = url def web_request(url, method=None, body=None): req = webob.Request.blank("%s%s" % (base_url, url)) if method: req.content_type = "application/json" req.method = method if body: req.body = jsonutils.dump_as_bytes(body) return req return web_request def compare_links(actual, expected): """Compare xml atom links.""" return compare_tree_to_dict(actual, expected, ('rel', 'href', 'type')) def compare_media_types(actual, expected): """Compare xml media types.""" return compare_tree_to_dict(actual, expected, ('base', 'type')) def compare_tree_to_dict(actual, expected, keys): """Compare parts of lxml.etree objects to dicts.""" for elem, data in zip(actual, expected): for key in keys: if elem.get(key) != data.get(key): return False return True ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5304682 nova-21.2.4/nova/tests/unit/api/openstack/compute/0000775000175000017500000000000000000000000022066 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/__init__.py0000664000175000017500000000000000000000000024165 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/admin_only_action_common.py0000664000175000017500000003052100000000000027477 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
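# ---------------------------------------------------------------------------
# NOTE: illustrative sketch only, not part of the upstream file. The
# CommonMixin / CommonTests helpers defined below are meant to be mixed into
# concrete controller test classes; AdminActionsTestV21 (further down in this
# archive) wires them up exactly this way. A minimal, hypothetical consumer
# would look roughly like the commented-out example here: the admin_actions
# module, AdminActionsController class and the '_reset_network' action are
# real names taken from the surrounding tests, while ExampleResetNetworkTest
# itself is made up for the example.
#
#     from nova.api.openstack.compute import admin_actions
#     from nova import test
#
#     class ExampleResetNetworkTest(CommonMixin, test.NoDBTestCase):
#         _api_version = '2.1'
#
#         def setUp(self):
#             super(ExampleResetNetworkTest, self).setUp()
#             self.controller = admin_actions.AdminActionsController()
#             self.compute_api = self.controller.compute_api
#
#         def test_reset_network(self):
#             # _test_action() patches compute_api.reset_network, invokes the
#             # controller's '_reset_network' method and asserts a 202 result
#             # plus the expected compute API call.
#             self._test_action('_reset_network', method='reset_network')
# ---------------------------------------------------------------------------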
import fixtures import mock from oslo_utils import timeutils from oslo_utils import uuidutils import webob from nova.compute import vm_states from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance class CommonMixin(object): def setUp(self): super(CommonMixin, self).setUp() self.compute_api = None self.req = fakes.HTTPRequest.blank('') self.context = self.req.environ['nova.context'] self.mock_get = self.useFixture( fixtures.MockPatch('nova.compute.api.API.get')).mock def _stub_instance_get(self, uuid=None): if uuid is None: uuid = uuidutils.generate_uuid() instance = fake_instance.fake_instance_obj(self.context, id=1, uuid=uuid, vm_state=vm_states.ACTIVE, task_state=None, launched_at=timeutils.utcnow()) self.mock_get.return_value = instance return instance def _test_non_existing_instance(self, action, body_map=None): # Reset the mock. self.mock_get.reset_mock() expected_attrs = None if action == '_migrate_live': expected_attrs = ['numa_topology'] elif action == '_migrate': expected_attrs = ['flavor', 'services'] uuid = uuidutils.generate_uuid() self.mock_get.side_effect = exception.InstanceNotFound( instance_id=uuid) controller_function = getattr(self.controller, action) self.assertRaises(webob.exc.HTTPNotFound, controller_function, self.req, uuid, body=body_map) self.mock_get.assert_called_once_with(self.context, uuid, expected_attrs=expected_attrs, cell_down_support=False) def _test_action(self, action, body=None, method=None, compute_api_args_map=None): # Reset the mock. self.mock_get.reset_mock() expected_attrs = None if action == '_migrate_live': expected_attrs = ['numa_topology'] elif action == '_migrate': expected_attrs = ['flavor', 'services'] if method is None: method = action.replace('_', '') compute_api_args_map = compute_api_args_map or {} instance = self._stub_instance_get() args, kwargs = compute_api_args_map.get(action, ((), {})) with mock.patch.object(self.compute_api, method) as mock_method: controller_function = getattr(self.controller, action) res = controller_function(self.req, instance.uuid, body=body) # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. if self._api_version == '2.1': status_int = controller_function.wsgi_code else: status_int = res.status_int self.assertEqual(202, status_int) mock_method.assert_called_once_with(self.context, instance, *args, **kwargs) self.mock_get.assert_called_once_with(self.context, instance.uuid, expected_attrs=expected_attrs, cell_down_support=False) def _test_not_implemented_state(self, action, method=None): # Reset the mock. 
self.mock_get.reset_mock() expected_attrs = None if action == '_migrate_live': expected_attrs = ['numa_topology'] if method is None: method = action.replace('_', '') instance = self._stub_instance_get() body = {} compute_api_args_map = {} args, kwargs = compute_api_args_map.get(action, ((), {})) with mock.patch.object( self.compute_api, method, side_effect=NotImplementedError()) as mock_method: controller_function = getattr(self.controller, action) self.assertRaises(webob.exc.HTTPNotImplemented, controller_function, self.req, instance.uuid, body=body) mock_method.assert_called_once_with(self.context, instance, *args, **kwargs) self.mock_get.assert_called_once_with(self.context, instance.uuid, expected_attrs=expected_attrs, cell_down_support=False) def _test_invalid_state(self, action, method=None, body_map=None, compute_api_args_map=None, exception_arg=None): # Reset the mock. self.mock_get.reset_mock() expected_attrs = None if action == '_migrate_live': expected_attrs = ['numa_topology'] elif action == '_migrate': expected_attrs = ['flavor', 'services'] if method is None: method = action.replace('_', '') if body_map is None: body_map = {} if compute_api_args_map is None: compute_api_args_map = {} instance = self._stub_instance_get() args, kwargs = compute_api_args_map.get(action, ((), {})) with mock.patch.object( self.compute_api, method, side_effect=exception.InstanceInvalidState( attr='vm_state', instance_uuid=instance.uuid, state='foo', method=method)) as mock_method: controller_function = getattr(self.controller, action) ex = self.assertRaises(webob.exc.HTTPConflict, controller_function, self.req, instance.uuid, body=body_map) self.assertIn("Cannot \'%(action)s\' instance %(id)s" % {'action': exception_arg or method, 'id': instance.uuid}, ex.explanation) mock_method.assert_called_once_with(self.context, instance, *args, **kwargs) self.mock_get.assert_called_once_with(self.context, instance.uuid, expected_attrs=expected_attrs, cell_down_support=False) def _test_locked_instance(self, action, method=None, body=None, compute_api_args_map=None): # Reset the mock. self.mock_get.reset_mock() expected_attrs = None if action == '_migrate_live': expected_attrs = ['numa_topology'] elif action == '_migrate': expected_attrs = ['flavor', 'services'] if method is None: method = action.replace('_', '') compute_api_args_map = compute_api_args_map or {} instance = self._stub_instance_get() args, kwargs = compute_api_args_map.get(action, ((), {})) with mock.patch.object( self.compute_api, method, side_effect=exception.InstanceIsLocked( instance_uuid=instance.uuid)) as mock_method: controller_function = getattr(self.controller, action) self.assertRaises(webob.exc.HTTPConflict, controller_function, self.req, instance.uuid, body=body) mock_method.assert_called_once_with(self.context, instance, *args, **kwargs) self.mock_get.assert_called_once_with(self.context, instance.uuid, expected_attrs=expected_attrs, cell_down_support=False) def _test_instance_not_found_in_compute_api(self, action, method=None, body=None, compute_api_args_map=None): # Reset the mock. 
self.mock_get.reset_mock() expected_attrs = None if action == '_migrate_live': expected_attrs = ['numa_topology'] if method is None: method = action.replace('_', '') compute_api_args_map = compute_api_args_map or {} instance = self._stub_instance_get() args, kwargs = compute_api_args_map.get(action, ((), {})) with mock.patch.object( self.compute_api, method, side_effect=exception.InstanceNotFound( instance_id=instance.uuid)) as mock_method: controller_function = getattr(self.controller, action) self.assertRaises(webob.exc.HTTPNotFound, controller_function, self.req, instance.uuid, body=body) mock_method.assert_called_once_with(self.context, instance, *args, **kwargs) self.mock_get.assert_called_once_with(self.context, instance.uuid, expected_attrs=expected_attrs, cell_down_support=False) class CommonTests(CommonMixin, test.NoDBTestCase): def _test_actions(self, actions, method_translations=None, body_map=None, args_map=None): method_translations = method_translations or {} body_map = body_map or {} args_map = args_map or {} for action in actions: method = method_translations.get(action) body = body_map.get(action) self._test_action(action, method=method, body=body, compute_api_args_map=args_map) def _test_actions_instance_not_found_in_compute_api(self, actions, method_translations=None, body_map=None, args_map=None): method_translations = method_translations or {} body_map = body_map or {} args_map = args_map or {} for action in actions: method = method_translations.get(action) body = body_map.get(action) self._test_instance_not_found_in_compute_api( action, method=method, body=body, compute_api_args_map=args_map) def _test_actions_with_non_existed_instance(self, actions, body_map=None): body_map = body_map or {} for action in actions: self._test_non_existing_instance(action, body_map=body_map.get(action)) def _test_actions_raise_conflict_on_invalid_state( self, actions, method_translations=None, body_map=None, args_map=None, exception_args=None): method_translations = method_translations or {} body_map = body_map or {} args_map = args_map or {} exception_args = exception_args or {} for action in actions: method = method_translations.get(action) exception_arg = exception_args.get(action) self._test_invalid_state(action, method=method, body_map=body_map.get(action), compute_api_args_map=args_map, exception_arg=exception_arg) def _test_actions_with_locked_instance(self, actions, method_translations=None, body_map=None, args_map=None): method_translations = method_translations or {} body_map = body_map or {} args_map = args_map or {} for action in actions: method = method_translations.get(action) body = body_map.get(action) self._test_locked_instance(action, method=method, body=body, compute_api_args_map=args_map) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/dummy_schema.py0000664000175000017500000000237400000000000025121 0ustar00zuulzuul00000000000000# Copyright 2014 IBM # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from nova.api.validation import parameter_types dummy = { 'type': 'object', 'properties': { 'dummy': { 'type': 'object', 'properties': { 'val': parameter_types.name, }, 'additionalProperties': False, }, }, 'required': ['dummy'], 'additionalProperties': False, } dummy2 = { 'type': 'object', 'properties': { 'dummy': { 'type': 'object', 'properties': { 'val2': parameter_types.name, }, 'additionalProperties': False, }, }, 'required': ['dummy'], 'additionalProperties': False, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/microversions.py0000664000175000017500000000771400000000000025353 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Microversions Test Extension""" import functools import webob from nova.api.openstack.compute import routes from nova.api.openstack import wsgi from nova.api import validation from nova.tests.unit.api.openstack.compute import dummy_schema class MicroversionsController(wsgi.Controller): @wsgi.Controller.api_version("2.1") def index(self, req): data = {'param': 'val'} return data @wsgi.Controller.api_version("2.2") # noqa def index(self, req): data = {'param': 'val2'} return data @wsgi.Controller.api_version("3.0") # noqa def index(self, req): raise webob.exc.HTTPBadRequest() # We have a second example controller here to help check # for accidental dependencies between API controllers # due to base class changes class MicroversionsController2(wsgi.Controller): @wsgi.Controller.api_version("2.2", "2.5") def index(self, req): data = {'param': 'controller2_val1'} return data @wsgi.Controller.api_version("2.5", "3.1") # noqa @wsgi.response(202) def index(self, req): data = {'param': 'controller2_val2'} return data class MicroversionsController3(wsgi.Controller): @wsgi.Controller.api_version("2.1") @validation.schema(dummy_schema.dummy) def create(self, req, body): data = {'param': 'create_val1'} return data @wsgi.Controller.api_version("2.1") @validation.schema(dummy_schema.dummy, "2.3", "2.8") @validation.schema(dummy_schema.dummy2, "2.9") def update(self, req, id, body): data = {'param': 'update_val1'} return data @wsgi.Controller.api_version("2.1", "2.2") @wsgi.response(202) @wsgi.action('foo') def _foo(self, req, id, body): data = {'foo': 'bar'} return data class MicroversionsController4(wsgi.Controller): @wsgi.Controller.api_version("2.1") def _create(self, req): data = {'param': 'controller4_val1'} return data @wsgi.Controller.api_version("2.2") # noqa def _create(self, req): data = {'param': 'controller4_val2'} return data def create(self, req, body): return self._create(req) class MicroversionsExtendsBaseController(wsgi.Controller): @wsgi.Controller.api_version("2.1") def show(self, req, id): return {'base_param': 'base_val'} mv_controller = functools.partial(routes._create_controller, 
MicroversionsController, []) mv2_controller = functools.partial(routes._create_controller, MicroversionsController2, []) mv3_controller = functools.partial(routes._create_controller, MicroversionsController3, []) mv4_controller = functools.partial(routes._create_controller, MicroversionsController4, []) mv5_controller = functools.partial(routes._create_controller, MicroversionsExtendsBaseController, []) ROUTES = ( ('/microversions', { 'GET': [mv_controller, 'index'] }), ('/microversions2', { 'GET': [mv2_controller, 'index'] }), ('/microversions3', { 'POST': [mv3_controller, 'create'] }), ('/microversions3/{id}', { 'PUT': [mv3_controller, 'update'] }), ('/microversions3/{id}/action', { 'POST': [mv3_controller, 'action'] }), ('/microversions4', { 'POST': [mv4_controller, 'create'] }), ('/microversions5/{id}', { 'GET': [mv5_controller, 'show'] }), ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_access_ips.py0000664000175000017500000001250600000000000025617 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils from nova.api.openstack.compute import servers as servers_v21 from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit.image import fake v4_key = "accessIPv4" v6_key = "accessIPv6" class AccessIPsAPIValidationTestV21(test.TestCase): validation_error = exception.ValidationError def setUp(self): super(AccessIPsAPIValidationTestV21, self).setUp() def fake_save(context, **kwargs): pass def fake_rebuild(*args, **kwargs): pass fakes.stub_out_nw_api(self) self._set_up_controller() fake.stub_out_image_service(self) self.stub_out('nova.compute.api.API.get', # This project_id matches fakes.HTTPRequest.blank. 
fakes.fake_compute_get(project_id=fakes.FAKE_PROJECT_ID)) self.stub_out('nova.objects.instance.Instance.save', fake_save) self.stub_out('nova.compute.api.API.rebuild', fake_rebuild) def _set_up_controller(self): self.controller = servers_v21.ServersController() def _verify_update_access_ip(self, res_dict, params): for key, value in params.items(): value = value or '' self.assertEqual(res_dict['server'][key], value) def _test_create(self, params): body = { 'server': { 'name': 'server_test', 'imageRef': '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6', 'flavorRef': 'http://localhost/123/flavors/3', }, } body['server'].update(params) req = fakes.HTTPRequest.blank('') req.method = 'POST' req.body = jsonutils.dump_as_bytes(body) req.headers['content-type'] = 'application/json' res_dict = self.controller.create(req, body=body).obj return res_dict def _test_update(self, params): body = { 'server': { }, } body['server'].update(params) req = fakes.HTTPRequest.blank('') res_dict = self.controller.update(req, fakes.FAKE_UUID, body=body) self._verify_update_access_ip(res_dict, params) def _test_rebuild(self, params): body = { 'rebuild': { 'imageRef': '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6', }, } body['rebuild'].update(params) req = fakes.HTTPRequest.blank('') self.controller._action_rebuild(req, fakes.FAKE_UUID, body=body) def test_create_server_with_access_ipv4(self): params = {v4_key: '192.168.0.10'} self._test_create(params) def test_create_server_with_access_ip_pass_disabled(self): # test with admin passwords disabled See lp bug 921814 self.flags(enable_instance_password=False, group='api') params = {v4_key: '192.168.0.10', v6_key: '2001:db8::9abc'} res = self._test_create(params) server = res['server'] self.assertNotIn("admin_password", server) def test_create_server_with_invalid_access_ipv4(self): params = {v4_key: '1.1.1.1.1.1'} self.assertRaises(self.validation_error, self._test_create, params) def test_create_server_with_access_ipv6(self): params = {v6_key: '2001:db8::9abc'} self._test_create(params) def test_create_server_with_invalid_access_ipv6(self): params = {v6_key: 'fe80:::::::'} self.assertRaises(self.validation_error, self._test_create, params) def test_update_server_with_access_ipv4(self): params = {v4_key: '192.168.0.10'} self._test_update(params) def test_update_server_with_invalid_access_ipv4(self): params = {v4_key: '1.1.1.1.1.1'} self.assertRaises(self.validation_error, self._test_update, params) def test_update_server_with_access_ipv6(self): params = {v6_key: '2001:db8::9abc'} self._test_update(params) def test_update_server_with_invalid_access_ipv6(self): params = {v6_key: 'fe80:::::::'} self.assertRaises(self.validation_error, self._test_update, params) def test_rebuild_server_with_access_ipv4(self): params = {v4_key: '192.168.0.10'} self._test_rebuild(params) def test_rebuild_server_with_invalid_access_ipv4(self): params = {v4_key: '1.1.1.1.1.1'} self.assertRaises(self.validation_error, self._test_rebuild, params) def test_rebuild_server_with_access_ipv6(self): params = {v6_key: '2001:db8::9abc'} self._test_rebuild(params) def test_rebuild_server_with_invalid_access_ipv6(self): params = {v6_key: 'fe80:::::::'} self.assertRaises(self.validation_error, self._test_rebuild, params) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_admin_actions.py0000664000175000017500000000404500000000000026312 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # # Licensed 
under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.openstack.compute import admin_actions as admin_actions_v21 from nova.tests.unit.api.openstack.compute import admin_only_action_common class AdminActionsTestV21(admin_only_action_common.CommonTests): admin_actions = admin_actions_v21 _api_version = '2.1' def setUp(self): super(AdminActionsTestV21, self).setUp() self.controller = self.admin_actions.AdminActionsController() self.compute_api = self.controller.compute_api self.stub_out('nova.api.openstack.compute.admin_actions.' 'AdminActionsController', lambda *a, **k: self.controller) def test_actions(self): actions = ['_reset_network', '_inject_network_info'] method_translations = {'_reset_network': 'reset_network', '_inject_network_info': 'inject_network_info'} self._test_actions(actions, method_translations) def test_actions_with_non_existed_instance(self): actions = ['_reset_network', '_inject_network_info'] self._test_actions_with_non_existed_instance(actions) def test_actions_with_locked_instance(self): actions = ['_reset_network', '_inject_network_info'] method_translations = {'_reset_network': 'reset_network', '_inject_network_info': 'inject_network_info'} self._test_actions_with_locked_instance(actions, method_translations=method_translations) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_admin_password.py0000664000175000017500000001613500000000000026517 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2013 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
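The access-IP tests above repeatedly assert that strings such as '1.1.1.1.1.1' and 'fe80:::::::' fail validation while '192.168.0.10' and '2001:db8::9abc' pass. A minimal standalone sketch of that distinction, using only the stdlib ipaddress module; the API itself enforces this through its request schemas, and looks_like_access_ip is an invented helper, not Nova code.

# Standalone illustration (not part of the test file that follows):
# why the "invalid" addresses used in these tests are rejected.
import ipaddress

def looks_like_access_ip(value, version):
    """Return True if value parses as an IP address of the given version."""
    try:
        addr = ipaddress.ip_address(value)
    except ValueError:
        return False
    return addr.version == version

assert looks_like_access_ip('192.168.0.10', 4)
assert not looks_like_access_ip('1.1.1.1.1.1', 4)   # the invalid IPv4 used above
assert looks_like_access_ip('2001:db8::9abc', 6)
assert not looks_like_access_ip('fe80:::::::', 6)   # the invalid IPv6 used above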
import mock import six import webob from nova.api.openstack.compute import admin_password as admin_password_v21 from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance def fake_get(self, context, id, expected_attrs=None, cell_down_support=False): return fake_instance.fake_instance_obj( context, uuid=id, project_id=context.project_id, user_id=context.user_id, expected_attrs=expected_attrs) def fake_set_admin_password(self, context, instance, password=None): pass class AdminPasswordTestV21(test.NoDBTestCase): validation_error = exception.ValidationError def setUp(self): super(AdminPasswordTestV21, self).setUp() self.stub_out('nova.compute.api.API.set_admin_password', fake_set_admin_password) self.stub_out('nova.compute.api.API.get', fake_get) self.fake_req = fakes.HTTPRequest.blank('') def _get_action(self): return admin_password_v21.AdminPasswordController().change_password def _check_status(self, expected_status, res, controller_method): self.assertEqual(expected_status, controller_method.wsgi_code) def test_change_password(self): body = {'changePassword': {'adminPass': 'test'}} res = self._get_action()(self.fake_req, fakes.FAKE_UUID, body=body) self._check_status(202, res, self._get_action()) def test_change_password_empty_string(self): body = {'changePassword': {'adminPass': ''}} res = self._get_action()(self.fake_req, fakes.FAKE_UUID, body=body) self._check_status(202, res, self._get_action()) @mock.patch('nova.compute.api.API.set_admin_password', side_effect=NotImplementedError()) def test_change_password_with_non_implement(self, mock_set_admin_password): body = {'changePassword': {'adminPass': 'test'}} self.assertRaises(webob.exc.HTTPNotImplemented, self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) @mock.patch('nova.compute.api.API.get', side_effect=exception.InstanceNotFound( instance_id=fakes.FAKE_UUID)) def test_change_password_with_non_existed_instance(self, mock_get): body = {'changePassword': {'adminPass': 'test'}} self.assertRaises(webob.exc.HTTPNotFound, self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) def test_change_password_with_non_string_password(self): body = {'changePassword': {'adminPass': 1234}} self.assertRaises(self.validation_error, self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) @mock.patch('nova.compute.api.API.set_admin_password', side_effect=exception.InstancePasswordSetFailed(instance="1", reason='')) def test_change_password_failed(self, mock_set_admin_password): body = {'changePassword': {'adminPass': 'test'}} self.assertRaises(webob.exc.HTTPConflict, self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) @mock.patch('nova.compute.api.API.set_admin_password', side_effect=exception.SetAdminPasswdNotSupported(instance="1", reason='')) def test_change_password_not_supported(self, mock_set_admin_password): body = {'changePassword': {'adminPass': 'test'}} self.assertRaises(webob.exc.HTTPConflict, self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) @mock.patch('nova.compute.api.API.set_admin_password', side_effect=exception.InstanceAgentNotEnabled(instance="1", reason='')) def test_change_password_guest_agent_disabled(self, mock_set_admin_password): body = {'changePassword': {'adminPass': 'test'}} self.assertRaises(webob.exc.HTTPConflict, self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) def test_change_password_without_admin_password(self): body = {'changPassword': {}} self.assertRaises(self.validation_error, 
self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) def test_change_password_none(self): body = {'changePassword': {'adminPass': None}} ex = self.assertRaises(self.validation_error, self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) self.assertIn('adminPass. Value: None. None is not of type', six.text_type(ex)) def test_change_password_adminpass_none(self): body = {'changePassword': None} self.assertRaises(self.validation_error, self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) def test_change_password_bad_request(self): body = {'changePassword': {'pass': '12345'}} self.assertRaises(self.validation_error, self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) def test_server_change_password_pass_disabled(self): # run with enable_instance_password disabled to verify adminPass # is missing from response. See lp bug 921814 self.flags(enable_instance_password=False, group='api') body = {'changePassword': {'adminPass': '1234pass'}} res = self._get_action()(self.fake_req, fakes.FAKE_UUID, body=body) self._check_status(202, res, self._get_action()) @mock.patch('nova.compute.api.API.set_admin_password', side_effect=exception.InstanceInvalidState( instance_uuid='fake', attr='vm_state', state='stopped', method='set_admin_password')) def test_change_password_invalid_state(self, mock_set_admin_password): body = {'changePassword': {'adminPass': 'test'}} self.assertRaises(webob.exc.HTTPConflict, self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_agents.py0000664000175000017500000004400100000000000024757 0ustar00zuulzuul00000000000000# Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
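The admin-password tests above stub nova.compute.api.API.set_admin_password and check that backend failures such as InstancePasswordSetFailed surface as HTTP 409 responses. A minimal standalone sketch of that stub-and-translate pattern, assuming invented stand-in names (change_password, FakePasswordSetFailed, FakeConflict); it is not Nova's controller code.

# Standalone sketch of the pattern exercised above: a controller-style
# wrapper translates a backend exception into an HTTP-conflict-style error.
from unittest import mock


class FakePasswordSetFailed(Exception):
    """Stand-in for exception.InstancePasswordSetFailed in this sketch."""


class FakeConflict(Exception):
    """Stand-in for webob.exc.HTTPConflict in this sketch."""


def change_password(compute_api, server_id, password):
    try:
        compute_api.set_admin_password(server_id, password)
    except FakePasswordSetFailed as exc:
        raise FakeConflict(str(exc))


api = mock.Mock()
api.set_admin_password.side_effect = FakePasswordSetFailed('agent busy')
try:
    change_password(api, 'fake-uuid', 'test')
except FakeConflict:
    pass  # the backend failure surfaced as a 409-style error, as the tests expect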
import mock import webob.exc from nova.api.openstack.compute import agents as agents_v21 from nova.db import api as db from nova.db.sqlalchemy import models from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes fake_agents_list = [{'hypervisor': 'kvm', 'os': 'win', 'architecture': 'x86', 'version': '7.0', 'url': 'http://example.com/path/to/resource', 'md5hash': 'add6bb58e139be103324d04d82d8f545', 'id': 1}, {'hypervisor': 'kvm', 'os': 'linux', 'architecture': 'x86', 'version': '16.0', 'url': 'http://example.com/path/to/resource1', 'md5hash': 'add6bb58e139be103324d04d82d8f546', 'id': 2}, {'hypervisor': 'xen', 'os': 'linux', 'architecture': 'x86', 'version': '16.0', 'url': 'http://example.com/path/to/resource2', 'md5hash': 'add6bb58e139be103324d04d82d8f547', 'id': 3}, {'hypervisor': 'xen', 'os': 'win', 'architecture': 'power', 'version': '7.0', 'url': 'http://example.com/path/to/resource3', 'md5hash': 'add6bb58e139be103324d04d82d8f548', 'id': 4}, ] def fake_agent_build_get_all(context, hypervisor): agent_build_all = [] for agent in fake_agents_list: if hypervisor and hypervisor != agent['hypervisor']: continue agent_build_ref = models.AgentBuild() agent_build_ref.update(agent) agent_build_all.append(agent_build_ref) return agent_build_all def fake_agent_build_update(context, agent_build_id, values): pass def fake_agent_build_destroy(context, agent_update_id): pass def fake_agent_build_create(context, values): values['id'] = 1 agent_build_ref = models.AgentBuild() agent_build_ref.update(values) return agent_build_ref class AgentsTestV21(test.NoDBTestCase): controller = agents_v21.AgentController() validation_error = exception.ValidationError microversion = '2.1' def setUp(self): super(AgentsTestV21, self).setUp() self.stub_out("nova.db.api.agent_build_get_all", fake_agent_build_get_all) self.stub_out("nova.db.api.agent_build_update", fake_agent_build_update) self.stub_out("nova.db.api.agent_build_destroy", fake_agent_build_destroy) self.stub_out("nova.db.api.agent_build_create", fake_agent_build_create) self.req = self._get_http_request() def _get_http_request(self): return fakes.HTTPRequest.blank('', version=self.microversion) def test_agents_create(self): body = {'agent': {'hypervisor': 'kvm', 'os': 'win', 'architecture': 'x86', 'version': '7.0', 'url': 'http://example.com/path/to/resource', 'md5hash': 'add6bb58e139be103324d04d82d8f545'}} response = {'agent': {'hypervisor': 'kvm', 'os': 'win', 'architecture': 'x86', 'version': '7.0', 'url': 'http://example.com/path/to/resource', 'md5hash': 'add6bb58e139be103324d04d82d8f545', 'agent_id': 1}} res_dict = self.controller.create(self.req, body=body) self.assertEqual(res_dict, response) def _test_agents_create_key_error(self, key): body = {'agent': {'hypervisor': 'kvm', 'os': 'win', 'architecture': 'x86', 'version': '7.0', 'url': 'xxx://xxxx/xxx/xxx', 'md5hash': 'add6bb58e139be103324d04d82d8f545'}} body['agent'].pop(key) self.assertRaises(self.validation_error, self.controller.create, self.req, body=body) def test_agents_create_without_hypervisor(self): self._test_agents_create_key_error('hypervisor') def test_agents_create_without_os(self): self._test_agents_create_key_error('os') def test_agents_create_without_architecture(self): self._test_agents_create_key_error('architecture') def test_agents_create_without_version(self): self._test_agents_create_key_error('version') def test_agents_create_without_url(self): self._test_agents_create_key_error('url') def test_agents_create_without_md5hash(self): 
self._test_agents_create_key_error('md5hash') def test_agents_create_with_wrong_type(self): body = {'agent': None} self.assertRaises(self.validation_error, self.controller.create, self.req, body=body) def test_agents_create_with_empty_type(self): body = {} self.assertRaises(self.validation_error, self.controller.create, self.req, body=body) def test_agents_create_with_existed_agent(self): def fake_agent_build_create_with_exited_agent(context, values): raise exception.AgentBuildExists(**values) self.stub_out('nova.db.api.agent_build_create', fake_agent_build_create_with_exited_agent) body = {'agent': {'hypervisor': 'kvm', 'os': 'win', 'architecture': 'x86', 'version': '7.0', 'url': 'xxx://xxxx/xxx/xxx', 'md5hash': 'add6bb58e139be103324d04d82d8f545'}} self.assertRaises(webob.exc.HTTPConflict, self.controller.create, self.req, body=body) def _test_agents_create_with_invalid_length(self, key): body = {'agent': {'hypervisor': 'kvm', 'os': 'win', 'architecture': 'x86', 'version': '7.0', 'url': 'http://example.com/path/to/resource', 'md5hash': 'add6bb58e139be103324d04d82d8f545'}} body['agent'][key] = 'x' * 256 self.assertRaises(self.validation_error, self.controller.create, self.req, body=body) def test_agents_create_with_invalid_length_hypervisor(self): self._test_agents_create_with_invalid_length('hypervisor') def test_agents_create_with_invalid_length_os(self): self._test_agents_create_with_invalid_length('os') def test_agents_create_with_invalid_length_architecture(self): self._test_agents_create_with_invalid_length('architecture') def test_agents_create_with_invalid_length_version(self): self._test_agents_create_with_invalid_length('version') def test_agents_create_with_invalid_length_url(self): self._test_agents_create_with_invalid_length('url') def test_agents_create_with_invalid_length_md5hash(self): self._test_agents_create_with_invalid_length('md5hash') def test_agents_delete(self): self.controller.delete(self.req, 1) def test_agents_delete_with_id_not_found(self): with mock.patch.object(db, 'agent_build_destroy', side_effect=exception.AgentBuildNotFound(id=1)): self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, 1) def test_agents_delete_string_id(self): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, self.req, 'string_id') def _test_agents_list(self, query_string=None): req = fakes.HTTPRequest.blank('', use_admin_context=True, query_string=query_string, version=self.microversion) res_dict = self.controller.index(req) agents_list = [{'hypervisor': 'kvm', 'os': 'win', 'architecture': 'x86', 'version': '7.0', 'url': 'http://example.com/path/to/resource', 'md5hash': 'add6bb58e139be103324d04d82d8f545', 'agent_id': 1}, {'hypervisor': 'kvm', 'os': 'linux', 'architecture': 'x86', 'version': '16.0', 'url': 'http://example.com/path/to/resource1', 'md5hash': 'add6bb58e139be103324d04d82d8f546', 'agent_id': 2}, {'hypervisor': 'xen', 'os': 'linux', 'architecture': 'x86', 'version': '16.0', 'url': 'http://example.com/path/to/resource2', 'md5hash': 'add6bb58e139be103324d04d82d8f547', 'agent_id': 3}, {'hypervisor': 'xen', 'os': 'win', 'architecture': 'power', 'version': '7.0', 'url': 'http://example.com/path/to/resource3', 'md5hash': 'add6bb58e139be103324d04d82d8f548', 'agent_id': 4}, ] self.assertEqual(res_dict, {'agents': agents_list}) def test_agents_list(self): self._test_agents_list() def test_agents_list_with_hypervisor(self): req = fakes.HTTPRequest.blank('', use_admin_context=True, query_string='hypervisor=kvm', version=self.microversion) res_dict = 
self.controller.index(req) response = [{'hypervisor': 'kvm', 'os': 'win', 'architecture': 'x86', 'version': '7.0', 'url': 'http://example.com/path/to/resource', 'md5hash': 'add6bb58e139be103324d04d82d8f545', 'agent_id': 1}, {'hypervisor': 'kvm', 'os': 'linux', 'architecture': 'x86', 'version': '16.0', 'url': 'http://example.com/path/to/resource1', 'md5hash': 'add6bb58e139be103324d04d82d8f546', 'agent_id': 2}, ] self.assertEqual(res_dict, {'agents': response}) def test_agents_list_with_multi_hypervisor_filter(self): query_string = 'hypervisor=xen&hypervisor=kvm' req = fakes.HTTPRequest.blank('', use_admin_context=True, query_string=query_string, version=self.microversion) res_dict = self.controller.index(req) response = [{'hypervisor': 'kvm', 'os': 'win', 'architecture': 'x86', 'version': '7.0', 'url': 'http://example.com/path/to/resource', 'md5hash': 'add6bb58e139be103324d04d82d8f545', 'agent_id': 1}, {'hypervisor': 'kvm', 'os': 'linux', 'architecture': 'x86', 'version': '16.0', 'url': 'http://example.com/path/to/resource1', 'md5hash': 'add6bb58e139be103324d04d82d8f546', 'agent_id': 2}, ] self.assertEqual(res_dict, {'agents': response}) def test_agents_list_query_allow_negative_int_as_string(self): req = fakes.HTTPRequest.blank('', use_admin_context=True, query_string='hypervisor=-1', version=self.microversion) res_dict = self.controller.index(req) self.assertEqual(res_dict, {'agents': []}) def test_agents_list_query_allow_int_as_string(self): req = fakes.HTTPRequest.blank('', use_admin_context=True, query_string='hypervisor=1', version=self.microversion) res_dict = self.controller.index(req) self.assertEqual(res_dict, {'agents': []}) def test_agents_list_with_unknown_filter(self): query_string = 'unknown_filter=abc' self._test_agents_list(query_string=query_string) def test_agents_list_with_hypervisor_and_additional_filter(self): req = fakes.HTTPRequest.blank( '', use_admin_context=True, query_string='hypervisor=kvm&additional_filter=abc', version=self.microversion) res_dict = self.controller.index(req) response = [{'hypervisor': 'kvm', 'os': 'win', 'architecture': 'x86', 'version': '7.0', 'url': 'http://example.com/path/to/resource', 'md5hash': 'add6bb58e139be103324d04d82d8f545', 'agent_id': 1}, {'hypervisor': 'kvm', 'os': 'linux', 'architecture': 'x86', 'version': '16.0', 'url': 'http://example.com/path/to/resource1', 'md5hash': 'add6bb58e139be103324d04d82d8f546', 'agent_id': 2}, ] self.assertEqual(res_dict, {'agents': response}) def test_agents_update(self): body = {'para': {'version': '7.0', 'url': 'http://example.com/path/to/resource', 'md5hash': 'add6bb58e139be103324d04d82d8f545'}} response = {'agent': {'agent_id': 1, 'version': '7.0', 'url': 'http://example.com/path/to/resource', 'md5hash': 'add6bb58e139be103324d04d82d8f545'}} res_dict = self.controller.update(self.req, 1, body=body) self.assertEqual(res_dict, response) def _test_agents_update_key_error(self, key): body = {'para': {'version': '7.0', 'url': 'xxx://xxxx/xxx/xxx', 'md5hash': 'add6bb58e139be103324d04d82d8f545'}} body['para'].pop(key) self.assertRaises(self.validation_error, self.controller.update, self.req, 1, body=body) def test_agents_update_without_version(self): self._test_agents_update_key_error('version') def test_agents_update_without_url(self): self._test_agents_update_key_error('url') def test_agents_update_without_md5hash(self): self._test_agents_update_key_error('md5hash') def test_agents_update_with_wrong_type(self): body = {'agent': None} self.assertRaises(self.validation_error, self.controller.update, 
self.req, 1, body=body) def test_agents_update_with_empty(self): body = {} self.assertRaises(self.validation_error, self.controller.update, self.req, 1, body=body) def test_agents_update_value_error(self): body = {'para': {'version': '7.0', 'url': 1111, 'md5hash': 'add6bb58e139be103324d04d82d8f545'}} self.assertRaises(self.validation_error, self.controller.update, self.req, 1, body=body) def test_agents_update_with_string_id(self): body = {'para': {'version': '7.0', 'url': 'http://example.com/path/to/resource', 'md5hash': 'add6bb58e139be103324d04d82d8f545'}} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.req, 'string_id', body=body) def _test_agents_update_with_invalid_length(self, key): body = {'para': {'version': '7.0', 'url': 'http://example.com/path/to/resource', 'md5hash': 'add6bb58e139be103324d04d82d8f545'}} body['para'][key] = 'x' * 256 self.assertRaises(self.validation_error, self.controller.update, self.req, 1, body=body) def test_agents_update_with_invalid_length_version(self): self._test_agents_update_with_invalid_length('version') def test_agents_update_with_invalid_length_url(self): self._test_agents_update_with_invalid_length('url') def test_agents_update_with_invalid_length_md5hash(self): self._test_agents_update_with_invalid_length('md5hash') def test_agents_update_with_id_not_found(self): with mock.patch.object(db, 'agent_build_update', side_effect=exception.AgentBuildNotFound(id=1)): body = {'para': {'version': '7.0', 'url': 'http://example.com/path/to/resource', 'md5hash': 'add6bb58e139be103324d04d82d8f545'}} self.assertRaises(webob.exc.HTTPNotFound, self.controller.update, self.req, 1, body=body) class AgentsTestV275(AgentsTestV21): microversion = '2.75' def test_agents_list_additional_filter_old_version(self): req = fakes.HTTPRequest.blank( '', use_admin_context=True, query_string='additional_filter=abc', version='2.74') self.controller.index(req) def test_agents_list_with_unknown_filter(self): req = fakes.HTTPRequest.blank( '', use_admin_context=True, query_string='unknown_filter=abc', version=self.microversion) self.assertRaises(exception.ValidationError, self.controller.index, req) def test_agents_list_with_hypervisor_and_additional_filter(self): req = fakes.HTTPRequest.blank( '', use_admin_context=True, query_string='hypervisor=kvm&additional_filter=abc', version=self.microversion) self.assertRaises(exception.ValidationError, self.controller.index, req) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_aggregates.py0000664000175000017500000010557400000000000025624 0ustar00zuulzuul00000000000000# Copyright (c) 2012 Citrix Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
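The agents tests above drive their expectations through fake_agent_build_get_all, which returns only the agent builds whose hypervisor matches the query (all of them when no hypervisor is given). A minimal standalone sketch of that filtering, with invented names (filter_agents) and trimmed-down dicts; it is not the database API itself.

# Standalone sketch of the hypervisor filtering emulated by
# fake_agent_build_get_all in the tests above.
agents = [
    {'id': 1, 'hypervisor': 'kvm'},
    {'id': 2, 'hypervisor': 'kvm'},
    {'id': 3, 'hypervisor': 'xen'},
]

def filter_agents(agent_builds, hypervisor=None):
    if hypervisor is None:
        return list(agent_builds)
    return [a for a in agent_builds if a['hypervisor'] == hypervisor]

assert [a['id'] for a in filter_agents(agents, 'kvm')] == [1, 2]
assert len(filter_agents(agents)) == 3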
"""Tests for the aggregates admin api.""" import mock from oslo_utils.fixture import uuidsentinel from webob import exc from nova.api.openstack import api_version_request from nova.api.openstack.compute import aggregates as aggregates_v21 from nova.compute import api as compute_api from nova import context from nova import exception from nova import objects from nova.objects import base as obj_base from nova import test from nova.tests.unit.api.openstack import fakes def _make_agg_obj(agg_dict): return objects.Aggregate(**agg_dict) def _make_agg_list(agg_list): return objects.AggregateList(objects=[_make_agg_obj(a) for a in agg_list]) def _transform_aggregate_az(agg_dict): # the Aggregate object looks for availability_zone within metadata, # so if availability_zone is in the top-level dict, move it down into # metadata. We also have to delete the key from the top-level dict because # availability_zone is a read-only property on the Aggregate object md = agg_dict.get('metadata', {}) if 'availability_zone' in agg_dict: md['availability_zone'] = agg_dict['availability_zone'] del agg_dict['availability_zone'] agg_dict['metadata'] = md return agg_dict def _transform_aggregate_list_azs(agg_list): for agg_dict in agg_list: yield _transform_aggregate_az(agg_dict) AGGREGATE_LIST = [ {"name": "aggregate1", "id": "1", "metadata": {"availability_zone": "nova1"}}, {"name": "aggregate2", "id": "2", "metadata": {"availability_zone": "nova1"}}, {"name": "aggregate3", "id": "3", "metadata": {"availability_zone": "nova2"}}, {"name": "aggregate1", "id": "4", "metadata": {"availability_zone": "nova1"}}] AGGREGATE_LIST = _make_agg_list(AGGREGATE_LIST) AGGREGATE = {"name": "aggregate1", "id": "1", "metadata": {"foo": "bar", "availability_zone": "nova1"}, "hosts": ["host1", "host2"]} AGGREGATE = _make_agg_obj(AGGREGATE) FORMATTED_AGGREGATE = {"name": "aggregate1", "id": "1", "metadata": {"availability_zone": "nova1"}} FORMATTED_AGGREGATE = _make_agg_obj(FORMATTED_AGGREGATE) class FakeRequest(object): environ = {"nova.context": context.get_admin_context()} class AggregateTestCaseV21(test.NoDBTestCase): """Test Case for aggregates admin api.""" add_host = 'self.controller._add_host' remove_host = 'self.controller._remove_host' set_metadata = 'self.controller._set_metadata' bad_request = exception.ValidationError def _set_up(self): self.controller = aggregates_v21.AggregateController() self.req = fakes.HTTPRequest.blank('/v2/os-aggregates', use_admin_context=True) self.user_req = fakes.HTTPRequest.blank('/v2/os-aggregates') self.context = self.req.environ['nova.context'] def setUp(self): super(AggregateTestCaseV21, self).setUp() self._set_up() def test_index(self): def _list_aggregates(context): if context is None: raise Exception() return AGGREGATE_LIST with mock.patch.object(self.controller.api, 'get_aggregate_list', side_effect=_list_aggregates) as mock_list: result = self.controller.index(self.req) result = _transform_aggregate_list_azs(result['aggregates']) self._assert_agg_data(AGGREGATE_LIST, _make_agg_list(result)) self.assertTrue(mock_list.called) def test_create(self): with mock.patch.object(self.controller.api, 'create_aggregate', return_value=AGGREGATE) as mock_create: result = self.controller.create(self.req, body={"aggregate": {"name": "test", "availability_zone": "nova1"}}) result = _transform_aggregate_az(result['aggregate']) self._assert_agg_data(FORMATTED_AGGREGATE, _make_agg_obj(result)) mock_create.assert_called_once_with(self.context, 'test', 'nova1') def 
test_create_with_duplicate_aggregate_name(self): side_effect = exception.AggregateNameExists(aggregate_name="test") with mock.patch.object(self.controller.api, 'create_aggregate', side_effect=side_effect) as mock_create: self.assertRaises(exc.HTTPConflict, self.controller.create, self.req, body={"aggregate": {"name": "test", "availability_zone": "nova1"}}) mock_create.assert_called_once_with(self.context, 'test', 'nova1') @mock.patch.object(compute_api.AggregateAPI, 'create_aggregate') def test_create_with_unmigrated_aggregates(self, mock_create_aggregate): mock_create_aggregate.side_effect = \ exception.ObjectActionError(action='create', reason='main database still contains aggregates') self.assertRaises(exc.HTTPConflict, self.controller.create, self.req, body={"aggregate": {"name": "test", "availability_zone": "nova1"}}) def test_create_with_incorrect_availability_zone(self): side_effect = exception.InvalidAggregateAction( action='create_aggregate', aggregate_id="'N/A'", reason='invalid zone') with mock.patch.object(self.controller.api, 'create_aggregate', side_effect=side_effect) as mock_create: self.assertRaises(exc.HTTPBadRequest, self.controller.create, self.req, body={"aggregate": {"name": "test", "availability_zone": "nova_bad"}}) mock_create.assert_called_once_with(self.context, 'test', 'nova_bad') def test_create_with_no_aggregate(self): self.assertRaises(self.bad_request, self.controller.create, self.req, body={"foo": {"name": "test", "availability_zone": "nova1"}}) def test_create_with_no_name(self): self.assertRaises(self.bad_request, self.controller.create, self.req, body={"aggregate": {"foo": "test", "availability_zone": "nova1"}}) def test_create_name_with_leading_trailing_spaces(self): self.assertRaises(self.bad_request, self.controller.create, self.req, body={"aggregate": {"name": " test ", "availability_zone": "nova1"}}) def test_create_name_with_leading_trailing_spaces_compat_mode(self): def fake_mock_aggs(context, name, az): self.assertEqual('test', name) return AGGREGATE with mock.patch.object(compute_api.AggregateAPI, 'create_aggregate') as mock_aggs: mock_aggs.side_effect = fake_mock_aggs self.req.set_legacy_v2() self.controller.create(self.req, body={"aggregate": {"name": " test ", "availability_zone": "nova1"}}) def test_create_with_no_availability_zone(self): with mock.patch.object(self.controller.api, 'create_aggregate', return_value=AGGREGATE) as mock_create: result = self.controller.create(self.req, body={"aggregate": {"name": "test"}}) result = _transform_aggregate_az(result['aggregate']) self._assert_agg_data(FORMATTED_AGGREGATE, _make_agg_obj(result)) mock_create.assert_called_once_with(self.context, 'test', None) def test_create_with_null_name(self): self.assertRaises(self.bad_request, self.controller.create, self.req, body={"aggregate": {"name": "", "availability_zone": "nova1"}}) def test_create_with_name_too_long(self): self.assertRaises(self.bad_request, self.controller.create, self.req, body={"aggregate": {"name": "x" * 256, "availability_zone": "nova1"}}) def test_create_with_availability_zone_too_long(self): self.assertRaises(self.bad_request, self.controller.create, self.req, body={"aggregate": {"name": "test", "availability_zone": "x" * 256}}) def test_create_with_availability_zone_invalid(self): self.assertRaises(self.bad_request, self.controller.create, self.req, body={"aggregate": {"name": "test", "availability_zone": "bad:az"}}) def test_create_availability_zone_with_leading_trailing_spaces(self): self.assertRaises(self.bad_request, 
self.controller.create, self.req, body={"aggregate": {"name": "test", "availability_zone": " nova1 "}}) def test_create_availability_zone_with_leading_trailing_spaces_compat_mode( self): def fake_mock_aggs(context, name, az): self.assertEqual('nova1', az) return AGGREGATE with mock.patch.object(compute_api.AggregateAPI, 'create_aggregate') as mock_aggs: mock_aggs.side_effect = fake_mock_aggs self.req.set_legacy_v2() self.controller.create(self.req, body={"aggregate": {"name": "test", "availability_zone": " nova1 "}}) def test_create_with_empty_availability_zone(self): self.assertRaises(self.bad_request, self.controller.create, self.req, body={"aggregate": {"name": "test", "availability_zone": ""}}) @mock.patch('nova.compute.api.AggregateAPI.create_aggregate') def test_create_with_none_availability_zone(self, mock_create_agg): mock_create_agg.return_value = objects.Aggregate( self.context, name='test', uuid=uuidsentinel.aggregate, hosts=[], metadata={}) body = {"aggregate": {"name": "test", "availability_zone": None}} result = self.controller.create(self.req, body=body) mock_create_agg.assert_called_once_with(self.context, 'test', None) self.assertEqual(result['aggregate']['name'], 'test') def test_create_with_extra_invalid_arg(self): self.assertRaises(self.bad_request, self.controller.create, self.req, body={"name": "test", "availability_zone": "nova1", "foo": 'bar'}) def test_show(self): with mock.patch.object(self.controller.api, 'get_aggregate', return_value=AGGREGATE) as mock_get: aggregate = self.controller.show(self.req, "1") aggregate = _transform_aggregate_az(aggregate['aggregate']) self._assert_agg_data(AGGREGATE, _make_agg_obj(aggregate)) mock_get.assert_called_once_with(self.context, '1') def test_show_with_bad_aggregate(self): side_effect = exception.AggregateNotFound(aggregate_id='2') with mock.patch.object(self.controller.api, 'get_aggregate', side_effect=side_effect) as mock_get: self.assertRaises(exc.HTTPNotFound, self.controller.show, self.req, "2") mock_get.assert_called_once_with(self.context, '2') def test_show_with_invalid_id(self): self.assertRaises(exc.HTTPBadRequest, self.controller.show, self.req, 'foo') def test_update(self): body = {"aggregate": {"name": "new_name", "availability_zone": "nova1"}} with mock.patch.object(self.controller.api, 'update_aggregate', return_value=AGGREGATE) as mock_update: result = self.controller.update(self.req, "1", body=body) result = _transform_aggregate_az(result['aggregate']) self._assert_agg_data(AGGREGATE, _make_agg_obj(result)) mock_update.assert_called_once_with(self.context, '1', body["aggregate"]) def test_update_with_only_name(self): body = {"aggregate": {"name": "new_name"}} with mock.patch.object(self.controller.api, 'update_aggregate', return_value=AGGREGATE) as mock_update: result = self.controller.update(self.req, "1", body=body) result = _transform_aggregate_az(result['aggregate']) self._assert_agg_data(AGGREGATE, _make_agg_obj(result)) mock_update.assert_called_once_with(self.context, '1', body["aggregate"]) def test_update_with_only_availability_zone(self): body = {"aggregate": {"availability_zone": "nova1"}} with mock.patch.object(self.controller.api, 'update_aggregate', return_value=AGGREGATE) as mock_update: result = self.controller.update(self.req, "1", body=body) result = _transform_aggregate_az(result['aggregate']) self._assert_agg_data(AGGREGATE, _make_agg_obj(result)) mock_update.assert_called_once_with(self.context, '1', body["aggregate"]) def test_update_with_no_updates(self): test_metadata = {"aggregate": 
{}} self.assertRaises(self.bad_request, self.controller.update, self.req, "2", body=test_metadata) def test_update_with_no_update_key(self): test_metadata = {"asdf": {}} self.assertRaises(self.bad_request, self.controller.update, self.req, "2", body=test_metadata) def test_update_with_wrong_updates(self): test_metadata = {"aggregate": {"status": "disable", "foo": "bar"}} self.assertRaises(self.bad_request, self.controller.update, self.req, "2", body=test_metadata) def test_update_with_null_name(self): test_metadata = {"aggregate": {"name": ""}} self.assertRaises(self.bad_request, self.controller.update, self.req, "2", body=test_metadata) def test_update_with_name_too_long(self): test_metadata = {"aggregate": {"name": "x" * 256}} self.assertRaises(self.bad_request, self.controller.update, self.req, "2", body=test_metadata) def test_update_with_availability_zone_too_long(self): test_metadata = {"aggregate": {"availability_zone": "x" * 256}} self.assertRaises(self.bad_request, self.controller.update, self.req, "2", body=test_metadata) def test_update_with_availability_zone_invalid(self): test_metadata = {"aggregate": {"availability_zone": "bad:az"}} self.assertRaises(self.bad_request, self.controller.update, self.req, "2", body=test_metadata) def test_update_with_empty_availability_zone(self): test_metadata = {"aggregate": {"availability_zone": ""}} self.assertRaises(self.bad_request, self.controller.update, self.req, "2", body=test_metadata) @mock.patch('nova.compute.api.AggregateAPI.update_aggregate') def test_update_with_none_availability_zone(self, mock_update_agg): agg_id = 173 mock_update_agg.return_value = objects.Aggregate(self.context, name='test', uuid=uuidsentinel.agg, id=agg_id, hosts=[], metadata={}) body = {"aggregate": {"name": "test", "availability_zone": None}} result = self.controller.update(self.req, agg_id, body=body) mock_update_agg.assert_called_once_with(self.context, agg_id, body['aggregate']) self.assertEqual(result['aggregate']['name'], 'test') def test_update_with_bad_aggregate(self): body = {"aggregate": {"name": "test_name"}} side_effect = exception.AggregateNotFound(aggregate_id=2) with mock.patch.object(self.controller.api, 'update_aggregate', side_effect=side_effect) as mock_update: self.assertRaises(exc.HTTPNotFound, self.controller.update, self.req, "2", body=body) mock_update.assert_called_once_with(self.context, '2', body["aggregate"]) def test_update_with_invalid_id(self): body = {"aggregate": {"name": "test_name"}} self.assertRaises(exc.HTTPBadRequest, self.controller.update, self.req, 'foo', body=body) def test_update_with_duplicated_name(self): body = {"aggregate": {"name": "test_name"}} side_effect = exception.AggregateNameExists(aggregate_name="test_name") with mock.patch.object(self.controller.api, 'update_aggregate', side_effect=side_effect) as mock_update: self.assertRaises(exc.HTTPConflict, self.controller.update, self.req, "2", body=body) mock_update.assert_called_once_with(self.context, '2', body["aggregate"]) def test_invalid_action(self): body = {"append_host": {"host": "host1"}} self.assertRaises(self.bad_request, eval(self.add_host), self.req, "1", body=body) def test_update_with_invalid_action(self): with mock.patch.object(self.controller.api, "update_aggregate", side_effect=exception.InvalidAggregateAction( action='invalid', aggregate_id='1', reason= "not empty")): body = {"aggregate": {"availability_zone": "nova"}} self.assertRaises(exc.HTTPBadRequest, self.controller.update, self.req, "1", body=body) def test_add_host(self): with 
mock.patch.object(self.controller.api, 'add_host_to_aggregate', return_value=AGGREGATE) as mock_add: aggregate = eval(self.add_host)(self.req, "1", body={"add_host": {"host": "host1"}}) aggregate = _transform_aggregate_az(aggregate['aggregate']) self._assert_agg_data(AGGREGATE, _make_agg_obj(aggregate)) mock_add.assert_called_once_with(self.context, "1", "host1") def test_add_host_with_already_added_host(self): side_effect = exception.AggregateHostExists(aggregate_id="1", host="host1") with mock.patch.object(self.controller.api, 'add_host_to_aggregate', side_effect=side_effect) as mock_add: self.assertRaises(exc.HTTPConflict, eval(self.add_host), self.req, "1", body={"add_host": {"host": "host1"}}) mock_add.assert_called_once_with(self.context, "1", "host1") def test_add_host_with_bad_aggregate(self): side_effect = exception.AggregateNotFound( aggregate_id="2") with mock.patch.object(self.controller.api, 'add_host_to_aggregate', side_effect=side_effect) as mock_add: self.assertRaises(exc.HTTPNotFound, eval(self.add_host), self.req, "2", body={"add_host": {"host": "host1"}}) mock_add.assert_called_once_with(self.context, "2", "host1") def test_add_host_with_invalid_id(self): body = {"add_host": {"host": "host1"}} self.assertRaises(exc.HTTPBadRequest, eval(self.add_host), self.req, 'foo', body=body) def test_add_host_with_bad_host(self): side_effect = exception.ComputeHostNotFound(host="bogus_host") with mock.patch.object(self.controller.api, 'add_host_to_aggregate', side_effect=side_effect) as mock_add: self.assertRaises(exc.HTTPNotFound, eval(self.add_host), self.req, "1", body={"add_host": {"host": "bogus_host"}}) mock_add.assert_called_once_with(self.context, "1", "bogus_host") def test_add_host_with_missing_host(self): self.assertRaises(self.bad_request, eval(self.add_host), self.req, "1", body={"add_host": {"asdf": "asdf"}}) def test_add_host_with_invalid_format_host(self): self.assertRaises(self.bad_request, eval(self.add_host), self.req, "1", body={"add_host": {"host": "a" * 300}}) def test_add_host_with_invalid_name_host(self): self.assertRaises(self.bad_request, eval(self.add_host), self.req, "1", body={"add_host": {"host": "bad:host"}}) def test_add_host_with_multiple_hosts(self): self.assertRaises(self.bad_request, eval(self.add_host), self.req, "1", body={"add_host": {"host": ["host1", "host2"]}}) def test_add_host_raises_key_error(self): with mock.patch.object(self.controller.api, 'add_host_to_aggregate', side_effect=KeyError) as mock_add: self.assertRaises(exc.HTTPInternalServerError, eval(self.add_host), self.req, "1", body={"add_host": {"host": "host1"}}) mock_add.assert_called_once_with(self.context, "1", "host1") def test_add_host_with_invalid_request(self): self.assertRaises(self.bad_request, eval(self.add_host), self.req, "1", body={"add_host": "1"}) def test_add_host_with_non_string(self): self.assertRaises(self.bad_request, eval(self.add_host), self.req, "1", body={"add_host": {"host": 1}}) def test_remove_host(self): return_value = _make_agg_obj({'metadata': {}}) with mock.patch.object(self.controller.api, 'remove_host_from_aggregate', return_value=return_value) as mock_rem: eval(self.remove_host)(self.req, "1", body={"remove_host": {"host": "host1"}}) mock_rem.assert_called_once_with(self.context, "1", "host1") def test_remove_host_with_bad_aggregate(self): side_effect = exception.AggregateNotFound( aggregate_id="2") with mock.patch.object(self.controller.api, 'remove_host_from_aggregate', side_effect=side_effect) as mock_rem: self.assertRaises(exc.HTTPNotFound, 
eval(self.remove_host), self.req, "2", body={"remove_host": {"host": "host1"}}) mock_rem.assert_called_once_with(self.context, "2", "host1") def test_remove_host_with_invalid_id(self): body = {"remove_host": {"host": "host1"}} self.assertRaises(exc.HTTPBadRequest, eval(self.remove_host), self.req, 'foo', body=body) def test_remove_host_with_host_not_in_aggregate(self): side_effect = exception.AggregateHostNotFound(aggregate_id="1", host="host1") with mock.patch.object(self.controller.api, 'remove_host_from_aggregate', side_effect=side_effect) as mock_rem: self.assertRaises(exc.HTTPNotFound, eval(self.remove_host), self.req, "1", body={"remove_host": {"host": "host1"}}) mock_rem.assert_called_once_with(self.context, "1", "host1") def test_remove_host_with_bad_host(self): side_effect = exception.ComputeHostNotFound(host="bogushost") with mock.patch.object(self.controller.api, 'remove_host_from_aggregate', side_effect=side_effect) as mock_rem: self.assertRaises(exc.HTTPNotFound, eval(self.remove_host), self.req, "1", body={"remove_host": {"host": "bogushost"}}) mock_rem.assert_called_once_with(self.context, "1", "bogushost") self.assertIn('Failed to remove host bogushost from aggregate 1. ' 'Error: Compute host bogushost could not be found.', self.stdlog.logger.output) def test_remove_host_with_missing_host(self): self.assertRaises(self.bad_request, eval(self.remove_host), self.req, "1", body={"asdf": "asdf"}) def test_remove_host_with_multiple_hosts(self): self.assertRaises(self.bad_request, eval(self.remove_host), self.req, "1", body={"remove_host": {"host": ["host1", "host2"]}}) def test_remove_host_with_extra_param(self): self.assertRaises(self.bad_request, eval(self.remove_host), self.req, "1", body={"remove_host": {"asdf": "asdf", "host": "asdf"}}) def test_remove_host_with_invalid_request(self): self.assertRaises(self.bad_request, eval(self.remove_host), self.req, "1", body={"remove_host": "1"}) def test_remove_host_with_missing_host_empty(self): self.assertRaises(self.bad_request, eval(self.remove_host), self.req, "1", body={"remove_host": {}}) def test_remove_host_resource_provider_update_conflict(self): """Tests that ResourceProviderUpdateConflict is handled as a 409.""" side_effect = exception.ResourceProviderUpdateConflict( uuid=uuidsentinel.provider_id, generation=1, error='try again!') with mock.patch.object(self.controller.api, 'remove_host_from_aggregate', side_effect=side_effect) as mock_rem: self.assertRaises(exc.HTTPConflict, eval(self.remove_host), self.req, "1", body={"remove_host": {"host": "bogushost"}}) mock_rem.assert_called_once_with(self.context, "1", "bogushost") self.assertIn('Failed to remove host bogushost from aggregate 1. 
' 'Error: A conflict was encountered attempting to update ' 'resource provider', self.stdlog.logger.output) def test_set_metadata(self): body = {"set_metadata": {"metadata": {"foo": "bar"}}} with mock.patch.object(self.controller.api, 'update_aggregate_metadata', return_value=AGGREGATE) as mock_update: result = eval(self.set_metadata)(self.req, "1", body=body) result = _transform_aggregate_az(result['aggregate']) self._assert_agg_data(AGGREGATE, _make_agg_obj(result)) mock_update.assert_called_once_with(self.context, "1", body["set_metadata"]['metadata']) def test_set_metadata_delete(self): body = {"set_metadata": {"metadata": {"foo": None}}} with mock.patch.object(self.controller.api, 'update_aggregate_metadata') as mocked: mocked.return_value = AGGREGATE result = eval(self.set_metadata)(self.req, "1", body=body) result = _transform_aggregate_az(result['aggregate']) self._assert_agg_data(AGGREGATE, _make_agg_obj(result)) mocked.assert_called_once_with(self.context, "1", body["set_metadata"]["metadata"]) def test_set_metadata_with_bad_aggregate(self): body = {"set_metadata": {"metadata": {"foo": "bar"}}} side_effect = exception.AggregateNotFound(aggregate_id="2") with mock.patch.object(self.controller.api, 'update_aggregate_metadata', side_effect=side_effect) as mock_update: self.assertRaises(exc.HTTPNotFound, eval(self.set_metadata), self.req, "2", body=body) mock_update.assert_called_once_with(self.context, "2", body["set_metadata"]['metadata']) def test_set_metadata_with_invalid_id(self): body = {"set_metadata": {"metadata": {"foo": "bar"}}} self.assertRaises(exc.HTTPBadRequest, eval(self.set_metadata), self.req, 'foo', body=body) def test_set_metadata_with_missing_metadata(self): body = {"asdf": {"foo": "bar"}} self.assertRaises(self.bad_request, eval(self.set_metadata), self.req, "1", body=body) def test_set_metadata_with_extra_params(self): body = {"metadata": {"foo": "bar"}, "asdf": {"foo": "bar"}} self.assertRaises(self.bad_request, eval(self.set_metadata), self.req, "1", body=body) def test_set_metadata_without_dict(self): body = {"set_metadata": {'metadata': 1}} self.assertRaises(self.bad_request, eval(self.set_metadata), self.req, "1", body=body) def test_set_metadata_with_empty_key(self): body = {"set_metadata": {"metadata": {"": "value"}}} self.assertRaises(self.bad_request, eval(self.set_metadata), self.req, "1", body=body) def test_set_metadata_with_key_too_long(self): body = {"set_metadata": {"metadata": {"x" * 256: "value"}}} self.assertRaises(self.bad_request, eval(self.set_metadata), self.req, "1", body=body) def test_set_metadata_with_value_too_long(self): body = {"set_metadata": {"metadata": {"key": "x" * 256}}} self.assertRaises(self.bad_request, eval(self.set_metadata), self.req, "1", body=body) def test_set_metadata_with_string(self): body = {"set_metadata": {"metadata": "test"}} self.assertRaises(self.bad_request, eval(self.set_metadata), self.req, "1", body=body) def test_delete_aggregate(self): with mock.patch.object(self.controller.api, 'delete_aggregate') as mock_del: self.controller.delete(self.req, "1") mock_del.assert_called_once_with(self.context, "1") def test_delete_aggregate_with_bad_aggregate(self): side_effect = exception.AggregateNotFound( aggregate_id="2") with mock.patch.object(self.controller.api, 'delete_aggregate', side_effect=side_effect) as mock_del: self.assertRaises(exc.HTTPNotFound, self.controller.delete, self.req, "2") mock_del.assert_called_once_with(self.context, "2") def test_delete_with_invalid_id(self): 
self.assertRaises(exc.HTTPBadRequest, self.controller.delete, self.req, 'foo') def test_delete_aggregate_with_host(self): with mock.patch.object(self.controller.api, "delete_aggregate", side_effect=exception.InvalidAggregateAction( action="delete", aggregate_id="2", reason="not empty")): self.assertRaises(exc.HTTPBadRequest, self.controller.delete, self.req, "2") def test_marshall_aggregate(self): # _marshall_aggregate() just basically turns the aggregate returned # from the AggregateAPI into a dict, so this tests that transform. # We would expect the dictionary that comes out is the same one # that we pump into the aggregate object in the first place agg = {'name': 'aggregate1', 'id': 1, 'uuid': uuidsentinel.aggregate, 'metadata': {'foo': 'bar', 'availability_zone': 'nova'}, 'hosts': ['host1', 'host2']} agg_obj = _make_agg_obj(agg) # _marshall_aggregate() puts all fields and obj_extra_fields in the # top-level dict, so we need to put availability_zone at the top also agg['availability_zone'] = 'nova' avr_v240 = api_version_request.APIVersionRequest("2.40") avr_v241 = api_version_request.APIVersionRequest("2.41") req = mock.MagicMock(api_version_request=avr_v241) marshalled_agg = self.controller._marshall_aggregate(req, agg_obj) self.assertEqual(agg, marshalled_agg['aggregate']) req = mock.MagicMock(api_version_request=avr_v240) marshalled_agg = self.controller._marshall_aggregate(req, agg_obj) # UUID isn't in microversion 2.40 and before del agg['uuid'] self.assertEqual(agg, marshalled_agg['aggregate']) def _assert_agg_data(self, expected, actual): self.assertTrue(obj_base.obj_equal_prims(expected, actual), "The aggregate objects were not equal") def test_images_with_invalid_id(self): body = {'cache': [{'id': uuidsentinel.cache}]} req = fakes.HTTPRequest.blank('/v2/os-aggregates', use_admin_context=True, version='2.81') self.assertRaises(exc.HTTPBadRequest, self.controller.images, req, 'foo', body=body) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_api.py0000664000175000017500000001427600000000000024262 0ustar00zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
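The aggregates tests above normalise API results with _transform_aggregate_az before comparing them: availability_zone is a read-only property derived from metadata on the Aggregate object, so the helper relocates a top-level availability_zone under 'metadata'. A standalone sketch of that transform on plain dicts; move_az_into_metadata is an invented name for illustration.

# Standalone sketch of what _transform_aggregate_az does above, using
# plain dicts instead of Aggregate objects.
def move_az_into_metadata(agg):
    md = agg.get('metadata', {})
    if 'availability_zone' in agg:
        md['availability_zone'] = agg.pop('availability_zone')
    agg['metadata'] = md
    return agg

agg = {'name': 'aggregate1', 'id': '1', 'availability_zone': 'nova1'}
assert move_az_into_metadata(agg) == {
    'name': 'aggregate1', 'id': '1',
    'metadata': {'availability_zone': 'nova1'},
}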
from oslo_serialization import jsonutils from oslo_utils import encodeutils import webob.dec import webob.exc from nova.api import openstack as openstack_api from nova.api.openstack import wsgi from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes class APITest(test.NoDBTestCase): @property def wsgi_app(self): return fakes.wsgi_app_v21() def _wsgi_app(self, inner_app): return openstack_api.FaultWrapper(inner_app) def test_malformed_json(self): req = fakes.HTTPRequest.blank('/') req.method = 'POST' req.body = b'{' req.headers["content-type"] = "application/json" res = req.get_response(self.wsgi_app) self.assertEqual(res.status_int, 400) def test_malformed_xml(self): req = fakes.HTTPRequest.blank('/') req.method = 'POST' req.body = b'' req.headers["content-type"] = "application/xml" res = req.get_response(self.wsgi_app) self.assertEqual(res.status_int, 415) def test_vendor_content_type_json(self): ctype = 'application/vnd.openstack.compute+json' req = fakes.HTTPRequest.blank('/') req.headers['Accept'] = ctype res = req.get_response(self.wsgi_app) self.assertEqual(res.status_int, 200) # NOTE(scottynomad): Webob's Response assumes that header values are # strings so the `res.content_type` property is broken in python3. # # Consider changing `api.openstack.wsgi.Resource._process_stack` # to encode header values in ASCII rather than UTF-8. # https://tools.ietf.org/html/rfc7230#section-3.2.4 content_type = res.headers.get('Content-Type') self.assertEqual(ctype, encodeutils.safe_decode(content_type)) jsonutils.loads(res.body) def test_exceptions_are_converted_to_faults_webob_exc(self): @webob.dec.wsgify def raise_webob_exc(req): raise webob.exc.HTTPNotFound(explanation='Raised a webob.exc') # api.application = raise_webob_exc api = self._wsgi_app(raise_webob_exc) resp = fakes.HTTPRequest.blank('/').get_response(api) self.assertEqual(resp.status_int, 404, resp.text) def test_exceptions_are_converted_to_faults_api_fault(self): @webob.dec.wsgify def raise_api_fault(req): exc = webob.exc.HTTPNotFound(explanation='Raised a webob.exc') return wsgi.Fault(exc) # api.application = raise_api_fault api = self._wsgi_app(raise_api_fault) resp = fakes.HTTPRequest.blank('/').get_response(api) self.assertIn('itemNotFound', resp.text) self.assertEqual(resp.status_int, 404, resp.text) def test_exceptions_are_converted_to_faults_exception(self): @webob.dec.wsgify def fail(req): raise Exception("Threw an exception") # api.application = fail api = self._wsgi_app(fail) resp = fakes.HTTPRequest.blank('/').get_response(api) self.assertIn('{"computeFault', resp.text) self.assertEqual(resp.status_int, 500, resp.text) def _do_test_exception_safety_reflected_in_faults(self, expose): class ExceptionWithSafety(exception.NovaException): safe = expose @webob.dec.wsgify def fail(req): raise ExceptionWithSafety('some explanation') api = self._wsgi_app(fail) resp = fakes.HTTPRequest.blank('/').get_response(api) self.assertIn('{"computeFault', resp.text) expected = ('ExceptionWithSafety: some explanation' if expose else 'The server has either erred or is incapable ' 'of performing the requested operation.') self.assertIn(expected, resp.text) self.assertEqual(resp.status_int, 500, resp.text) def test_safe_exceptions_are_described_in_faults(self): self._do_test_exception_safety_reflected_in_faults(True) def test_unsafe_exceptions_are_not_described_in_faults(self): self._do_test_exception_safety_reflected_in_faults(False) def _do_test_exception_mapping(self, exception_type, msg): 
@webob.dec.wsgify def fail(req): raise exception_type(msg) api = self._wsgi_app(fail) resp = fakes.HTTPRequest.blank('/').get_response(api) self.assertIn(msg, resp.text) self.assertEqual(resp.status_int, exception_type.code, resp.text) if hasattr(exception_type, 'headers'): for (key, value) in exception_type.headers.items(): self.assertIn(key, resp.headers) self.assertEqual(resp.headers[key], str(value)) def test_quota_error_mapping(self): self._do_test_exception_mapping(exception.QuotaError, 'too many used') def test_non_nova_notfound_exception_mapping(self): class ExceptionWithCode(Exception): code = 404 self._do_test_exception_mapping(ExceptionWithCode, 'NotFound') def test_non_nova_exception_mapping(self): class ExceptionWithCode(Exception): code = 417 self._do_test_exception_mapping(ExceptionWithCode, 'Expectation failed') def test_exception_with_none_code_throws_500(self): class ExceptionWithNoneCode(Exception): code = None @webob.dec.wsgify def fail(req): raise ExceptionWithNoneCode() api = self._wsgi_app(fail) resp = fakes.HTTPRequest.blank('/').get_response(api) self.assertEqual(500, resp.status_int) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_attach_interfaces.py0000664000175000017500000006270200000000000027155 0ustar00zuulzuul00000000000000# Copyright 2012 SINA Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
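The API fault tests above check that an exception carrying an integer code attribute is reported with that HTTP status, and that a missing or None code falls back to 500. A minimal standalone sketch of that lookup, assuming the invented helper status_for_exception; it is an illustration of the behaviour under test, not Nova's FaultWrapper implementation.

# Standalone sketch of the status selection exercised by
# _do_test_exception_mapping and test_exception_with_none_code_throws_500.
def status_for_exception(exc):
    code = getattr(exc, 'code', None)
    return code if isinstance(code, int) else 500

class NotFoundish(Exception):
    code = 404

class NoneCode(Exception):
    code = None

assert status_for_exception(NotFoundish()) == 404
assert status_for_exception(NoneCode()) == 500
assert status_for_exception(RuntimeError('boom')) == 500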
import mock import six from webob import exc from nova.api.openstack import common from nova.api.openstack.compute import attach_interfaces \ as attach_interfaces_v21 from nova.compute import api as compute_api from nova import exception from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit import fake_network_cache_model FAKE_UUID1 = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' FAKE_UUID2 = 'bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb' FAKE_PORT_ID1 = '11111111-1111-1111-1111-111111111111' FAKE_PORT_ID2 = '22222222-2222-2222-2222-222222222222' FAKE_PORT_ID3 = '33333333-3333-3333-3333-333333333333' FAKE_NOT_FOUND_PORT_ID = '00000000-0000-0000-0000-000000000000' FAKE_NET_ID1 = '44444444-4444-4444-4444-444444444444' FAKE_NET_ID2 = '55555555-5555-5555-5555-555555555555' FAKE_NET_ID3 = '66666666-6666-6666-6666-666666666666' FAKE_BAD_NET_ID = '00000000-0000-0000-0000-000000000000' port_data1 = { "id": FAKE_PORT_ID1, "network_id": FAKE_NET_ID1, "admin_state_up": True, "status": "ACTIVE", "mac_address": "aa:aa:aa:aa:aa:aa", "fixed_ips": ["10.0.1.2"], "device_id": FAKE_UUID1, } port_data2 = { "id": FAKE_PORT_ID2, "network_id": FAKE_NET_ID2, "admin_state_up": True, "status": "ACTIVE", "mac_address": "bb:bb:bb:bb:bb:bb", "fixed_ips": ["10.0.2.2"], "device_id": FAKE_UUID1, } port_data3 = { "id": FAKE_PORT_ID3, "network_id": FAKE_NET_ID3, "admin_state_up": True, "status": "ACTIVE", "mac_address": "bb:bb:bb:bb:bb:bb", "fixed_ips": ["10.0.2.2"], "device_id": '', } fake_networks = [FAKE_NET_ID1, FAKE_NET_ID2] ports = [port_data1, port_data2, port_data3] def fake_show_port(context, port_id, **kwargs): for port in ports: if port['id'] == port_id: return {'port': port} else: raise exception.PortNotFound(port_id=port_id) def fake_attach_interface(self, context, instance, network_id, port_id, requested_ip='192.168.1.3', tag=None): if not network_id: # if no network_id is given when add a port to an instance, use the # first default network. 
network_id = fake_networks[0] if network_id == FAKE_BAD_NET_ID: raise exception.NetworkNotFound(network_id=network_id) if not port_id: port_id = ports[fake_networks.index(network_id)]['id'] if port_id == FAKE_NOT_FOUND_PORT_ID: raise exception.PortNotFound(port_id=port_id) vif = fake_network_cache_model.new_vif() vif['id'] = port_id vif['network']['id'] = network_id vif['network']['subnets'][0]['ips'][0]['address'] = requested_ip return vif def fake_detach_interface(self, context, instance, port_id): for port in ports: if port['id'] == port_id: return raise exception.PortNotFound(port_id=port_id) def fake_get_instance(self, context, instance_id, expected_attrs=None, cell_down_support=False): return fake_instance.fake_instance_obj( context, id=1, uuid=instance_id, project_id=context.project_id) class InterfaceAttachTestsV21(test.NoDBTestCase): controller_cls = attach_interfaces_v21.InterfaceAttachmentController validate_exc = exception.ValidationError in_use_exc = exc.HTTPConflict not_found_exc = exc.HTTPNotFound not_usable_exc = exc.HTTPBadRequest def setUp(self): super(InterfaceAttachTestsV21, self).setUp() self.flags(timeout=30, group='neutron') self.stub_out('nova.compute.api.API.get', fake_get_instance) self.expected_show = {'interfaceAttachment': {'net_id': FAKE_NET_ID1, 'port_id': FAKE_PORT_ID1, 'mac_addr': port_data1['mac_address'], 'port_state': port_data1['status'], 'fixed_ips': port_data1['fixed_ips'], }} self.attachments = self.controller_cls() show_port_patch = mock.patch.object(self.attachments.network_api, 'show_port', fake_show_port) show_port_patch.start() self.addCleanup(show_port_patch.stop) self.req = fakes.HTTPRequest.blank('') @mock.patch.object(compute_api.API, 'get', side_effect=exception.InstanceNotFound(instance_id='')) def _test_instance_not_found(self, func, args, mock_get, kwargs=None): if not kwargs: kwargs = {} self.assertRaises(exc.HTTPNotFound, func, self.req, *args, **kwargs) def test_show_instance_not_found(self): self._test_instance_not_found(self.attachments.show, ('fake', 'fake')) def test_index_instance_not_found(self): self._test_instance_not_found(self.attachments.index, ('fake', )) def test_detach_interface_instance_not_found(self): self._test_instance_not_found(self.attachments.delete, ('fake', 'fake')) def test_attach_interface_instance_not_found(self): self._test_instance_not_found(self.attachments.create, ('fake', ), kwargs={'body': {'interfaceAttachment': {}}}) def test_show(self): result = self.attachments.show(self.req, FAKE_UUID1, FAKE_PORT_ID1) self.assertEqual(self.expected_show, result) def test_show_with_port_not_found(self): self.assertRaises(exc.HTTPNotFound, self.attachments.show, self.req, FAKE_UUID2, FAKE_PORT_ID1) def test_show_forbidden(self): with mock.patch.object(self.attachments.network_api, 'show_port', side_effect=exception.Forbidden): self.assertRaises(exc.HTTPForbidden, self.attachments.show, self.req, FAKE_UUID1, FAKE_PORT_ID1) def test_delete(self): self.stub_out('nova.compute.api.API.detach_interface', fake_detach_interface) req_context = self.req.environ['nova.context'] inst = objects.Instance(uuid=FAKE_UUID1, project_id=req_context.project_id) with mock.patch.object(common, 'get_instance', return_value=inst) as mock_get_instance: result = self.attachments.delete(self.req, FAKE_UUID1, FAKE_PORT_ID1) # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. 
if isinstance(self.attachments, attach_interfaces_v21.InterfaceAttachmentController): status_int = self.attachments.delete.wsgi_code else: status_int = result.status_int self.assertEqual(202, status_int) ctxt = self.req.environ['nova.context'] mock_get_instance.assert_called_with( self.attachments.compute_api, ctxt, FAKE_UUID1, expected_attrs=['device_metadata']) def test_detach_interface_instance_locked(self): def fake_detach_interface_from_locked_server(self, context, instance, port_id): raise exception.InstanceIsLocked(instance_uuid=FAKE_UUID1) self.stub_out('nova.compute.api.API.detach_interface', fake_detach_interface_from_locked_server) self.assertRaises(exc.HTTPConflict, self.attachments.delete, self.req, FAKE_UUID1, FAKE_PORT_ID1) def test_delete_interface_not_found(self): self.stub_out('nova.compute.api.API.detach_interface', fake_detach_interface) self.assertRaises(exc.HTTPNotFound, self.attachments.delete, self.req, FAKE_UUID1, 'invalid-port-id') def test_attach_interface_instance_locked(self): def fake_attach_interface_to_locked_server(self, context, instance, network_id, port_id, requested_ip, tag=None): raise exception.InstanceIsLocked(instance_uuid=FAKE_UUID1) self.stub_out('nova.compute.api.API.attach_interface', fake_attach_interface_to_locked_server) body = {} self.assertRaises(exc.HTTPConflict, self.attachments.create, self.req, FAKE_UUID1, body=body) def test_attach_interface_without_network_id(self): self.stub_out('nova.compute.api.API.attach_interface', fake_attach_interface) body = {} result = self.attachments.create(self.req, FAKE_UUID1, body=body) self.assertEqual(result['interfaceAttachment']['net_id'], FAKE_NET_ID1) @mock.patch.object( compute_api.API, 'attach_interface', side_effect=exception.NetworkInterfaceTaggedAttachNotSupported()) def test_interface_tagged_attach_not_supported(self, mock_attach): body = {'interfaceAttachment': {'net_id': FAKE_NET_ID2}} self.assertRaises(exc.HTTPBadRequest, self.attachments.create, self.req, FAKE_UUID1, body=body) def test_attach_interface_with_network_id(self): self.stub_out('nova.compute.api.API.attach_interface', fake_attach_interface) body = {'interfaceAttachment': {'net_id': FAKE_NET_ID2}} result = self.attachments.create(self.req, FAKE_UUID1, body=body) self.assertEqual(result['interfaceAttachment']['net_id'], FAKE_NET_ID2) def _attach_interface_bad_request_case(self, body): self.stub_out('nova.compute.api.API.attach_interface', fake_attach_interface) self.assertRaises(exc.HTTPBadRequest, self.attachments.create, self.req, FAKE_UUID1, body=body) def _attach_interface_not_found_case(self, body): self.stub_out('nova.compute.api.API.attach_interface', fake_attach_interface) self.assertRaises(self.not_found_exc, self.attachments.create, self.req, FAKE_UUID1, body=body) def test_attach_interface_with_port_and_network_id(self): body = { 'interfaceAttachment': { 'port_id': FAKE_PORT_ID1, 'net_id': FAKE_NET_ID2 } } self._attach_interface_bad_request_case(body) def test_attach_interface_with_not_found_network_id(self): body = { 'interfaceAttachment': { 'net_id': FAKE_BAD_NET_ID } } self._attach_interface_not_found_case(body) def test_attach_interface_with_not_found_port_id(self): body = { 'interfaceAttachment': { 'port_id': FAKE_NOT_FOUND_PORT_ID } } self._attach_interface_not_found_case(body) def test_attach_interface_with_invalid_state(self): def fake_attach_interface_invalid_state(*args, **kwargs): raise exception.InstanceInvalidState( instance_uuid='', attr='', state='', method='attach_interface') 
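        # The controller is expected to translate InstanceInvalidState from
        # the compute API into an HTTP 409 (Conflict) response.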
self.stub_out('nova.compute.api.API.attach_interface', fake_attach_interface_invalid_state) body = {'interfaceAttachment': {'net_id': FAKE_NET_ID1}} self.assertRaises(exc.HTTPConflict, self.attachments.create, self.req, FAKE_UUID1, body=body) def test_attach_interface_port_limit_exceeded(self): """Tests the scenario where nova-compute attempts to create a port to attach but the tenant port quota is exceeded and PortLimitExceeded is raised from the neutron API code which results in a 403 response. """ with mock.patch.object(self.attachments.compute_api, 'attach_interface', side_effect=exception.PortLimitExceeded): body = {'interfaceAttachment': {}} ex = self.assertRaises( exc.HTTPForbidden, self.attachments.create, self.req, FAKE_UUID1, body=body) self.assertIn('Maximum number of ports exceeded', six.text_type(ex)) def test_detach_interface_with_invalid_state(self): def fake_detach_interface_invalid_state(*args, **kwargs): raise exception.InstanceInvalidState( instance_uuid='', attr='', state='', method='detach_interface') self.stub_out('nova.compute.api.API.detach_interface', fake_detach_interface_invalid_state) self.assertRaises(exc.HTTPConflict, self.attachments.delete, self.req, FAKE_UUID1, FAKE_NET_ID1) @mock.patch.object(compute_api.API, 'detach_interface', side_effect=NotImplementedError()) def test_detach_interface_with_not_implemented(self, _mock): self.assertRaises(exc.HTTPNotImplemented, self.attachments.delete, self.req, FAKE_UUID1, FAKE_NET_ID1) def test_attach_interface_invalid_fixed_ip(self): body = { 'interfaceAttachment': { 'net_id': FAKE_NET_ID1, 'fixed_ips': [{'ip_address': 'invalid_ip'}] } } self.assertRaises(self.validate_exc, self.attachments.create, self.req, FAKE_UUID1, body=body) @mock.patch.object(compute_api.API, 'get') @mock.patch.object(compute_api.API, 'attach_interface') def test_attach_interface_fixed_ip_already_in_use(self, attach_mock, get_mock): req_context = self.req.environ['nova.context'] fake_instance = objects.Instance(uuid=FAKE_UUID1, project_id=req_context.project_id) get_mock.return_value = fake_instance attach_mock.side_effect = exception.FixedIpAlreadyInUse( address='10.0.2.2', instance_uuid=FAKE_UUID1) body = {} self.assertRaises(self.in_use_exc, self.attachments.create, self.req, FAKE_UUID1, body=body) ctxt = self.req.environ['nova.context'] attach_mock.assert_called_once_with(ctxt, fake_instance, None, None, None, tag=None) get_mock.assert_called_once_with(ctxt, FAKE_UUID1, expected_attrs=None, cell_down_support=False) @mock.patch.object(compute_api.API, 'get') @mock.patch.object(compute_api.API, 'attach_interface') def test_attach_interface_port_in_use(self, attach_mock, get_mock): req_context = self.req.environ['nova.context'] fake_instance = objects.Instance(uuid=FAKE_UUID1, project_id=req_context.project_id) get_mock.return_value = fake_instance attach_mock.side_effect = exception.PortInUse( port_id=FAKE_PORT_ID1) body = {} self.assertRaises(self.in_use_exc, self.attachments.create, self.req, FAKE_UUID1, body=body) ctxt = self.req.environ['nova.context'] attach_mock.assert_called_once_with(ctxt, fake_instance, None, None, None, tag=None) get_mock.assert_called_once_with(ctxt, FAKE_UUID1, expected_attrs=None, cell_down_support=False) @mock.patch.object(compute_api.API, 'get') @mock.patch.object(compute_api.API, 'attach_interface') def test_attach_interface_port_not_usable(self, attach_mock, get_mock): req_context = self.req.environ['nova.context'] fake_instance = objects.Instance(uuid=FAKE_UUID1, project_id=req_context.project_id) 
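        # PortNotUsable raised by the compute API is expected to map to
        # HTTP 400 (Bad Request) via self.not_usable_exc.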
get_mock.return_value = fake_instance attach_mock.side_effect = exception.PortNotUsable( port_id=FAKE_PORT_ID1, instance=fake_instance.uuid) body = {} self.assertRaises(self.not_usable_exc, self.attachments.create, self.req, FAKE_UUID1, body=body) ctxt = self.req.environ['nova.context'] attach_mock.assert_called_once_with(ctxt, fake_instance, None, None, None, tag=None) get_mock.assert_called_once_with(ctxt, FAKE_UUID1, expected_attrs=None, cell_down_support=False) @mock.patch.object(compute_api.API, 'get') @mock.patch.object(compute_api.API, 'attach_interface') def test_attach_interface_failed_no_network(self, attach_mock, get_mock): req_context = self.req.environ['nova.context'] fake_instance = objects.Instance(uuid=FAKE_UUID1, project_id=req_context.project_id) get_mock.return_value = fake_instance attach_mock.side_effect = ( exception.InterfaceAttachFailedNoNetwork(project_id=FAKE_UUID2)) self.assertRaises(exc.HTTPBadRequest, self.attachments.create, self.req, FAKE_UUID1, body={}) ctxt = self.req.environ['nova.context'] attach_mock.assert_called_once_with(ctxt, fake_instance, None, None, None, tag=None) get_mock.assert_called_once_with(ctxt, FAKE_UUID1, expected_attrs=None, cell_down_support=False) @mock.patch.object(compute_api.API, 'get') @mock.patch.object(compute_api.API, 'attach_interface') def test_attach_interface_no_more_fixed_ips(self, attach_mock, get_mock): req_context = self.req.environ['nova.context'] fake_instance = objects.Instance(uuid=FAKE_UUID1, project_id=req_context.project_id) get_mock.return_value = fake_instance attach_mock.side_effect = exception.NoMoreFixedIps( net=FAKE_NET_ID1) body = {} self.assertRaises(exc.HTTPBadRequest, self.attachments.create, self.req, FAKE_UUID1, body=body) ctxt = self.req.environ['nova.context'] attach_mock.assert_called_once_with(ctxt, fake_instance, None, None, None, tag=None) get_mock.assert_called_once_with(ctxt, FAKE_UUID1, expected_attrs=None, cell_down_support=False) @mock.patch.object(compute_api.API, 'get') @mock.patch.object(compute_api.API, 'attach_interface') def test_attach_interface_failed_securitygroup_cannot_be_applied( self, attach_mock, get_mock): req_context = self.req.environ['nova.context'] fake_instance = objects.Instance(uuid=FAKE_UUID1, project_id=req_context.project_id) get_mock.return_value = fake_instance attach_mock.side_effect = ( exception.SecurityGroupCannotBeApplied()) self.assertRaises(exc.HTTPBadRequest, self.attachments.create, self.req, FAKE_UUID1, body={}) ctxt = self.req.environ['nova.context'] attach_mock.assert_called_once_with(ctxt, fake_instance, None, None, None, tag=None) get_mock.assert_called_once_with(ctxt, FAKE_UUID1, expected_attrs=None, cell_down_support=False) def _test_attach_interface_with_invalid_parameter(self, param): self.stub_out('nova.compute.api.API.attach_interface', fake_attach_interface) body = {'interface_attachment': param} self.assertRaises(exception.ValidationError, self.attachments.create, self.req, FAKE_UUID1, body=body) def test_attach_interface_instance_with_non_uuid_net_id(self): param = {'net_id': 'non_uuid'} self._test_attach_interface_with_invalid_parameter(param) def test_attach_interface_instance_with_non_uuid_port_id(self): param = {'port_id': 'non_uuid'} self._test_attach_interface_with_invalid_parameter(param) def test_attach_interface_instance_with_non_array_fixed_ips(self): param = {'fixed_ips': 'non_array'} self._test_attach_interface_with_invalid_parameter(param) class InterfaceAttachTestsV249(test.NoDBTestCase): controller_cls = 
attach_interfaces_v21.InterfaceAttachmentController def setUp(self): super(InterfaceAttachTestsV249, self).setUp() self.attachments = self.controller_cls() self.req = fakes.HTTPRequest.blank('', version='2.49') def test_tagged_interface_attach_invalid_tag_comma(self): body = {'interfaceAttachment': {'net_id': FAKE_NET_ID2, 'tag': ','}} self.assertRaises(exception.ValidationError, self.attachments.create, self.req, FAKE_UUID1, body=body) def test_tagged_interface_attach_invalid_tag_slash(self): body = {'interfaceAttachment': {'net_id': FAKE_NET_ID2, 'tag': '/'}} self.assertRaises(exception.ValidationError, self.attachments.create, self.req, FAKE_UUID1, body=body) def test_tagged_interface_attach_invalid_tag_too_long(self): tag = ''.join(map(str, range(10, 41))) body = {'interfaceAttachment': {'net_id': FAKE_NET_ID2, 'tag': tag}} self.assertRaises(exception.ValidationError, self.attachments.create, self.req, FAKE_UUID1, body=body) @mock.patch('nova.compute.api.API.attach_interface') @mock.patch('nova.compute.api.API.get', fake_get_instance) def test_tagged_interface_attach_valid_tag(self, _): body = {'interfaceAttachment': {'net_id': FAKE_NET_ID2, 'tag': 'foo'}} with mock.patch.object(self.attachments, 'show'): self.attachments.create(self.req, FAKE_UUID1, body=body) class InterfaceAttachTestsV270(test.NoDBTestCase): """os-interface API tests for microversion 2.70""" def setUp(self): super(InterfaceAttachTestsV270, self).setUp() self.attachments = ( attach_interfaces_v21.InterfaceAttachmentController()) self.req = fakes.HTTPRequest.blank('', version='2.70') self.stub_out('nova.compute.api.API.get', fake_get_instance) @mock.patch('nova.objects.VirtualInterface.get_by_uuid', return_value=None) def test_show_interface_no_vif(self, mock_get_by_uuid): """Tests GET /servers/{server_id}/os-interface/{id} where there is no corresponding VirtualInterface database record for the attached port. """ with mock.patch.object(self.attachments.network_api, 'show_port', fake_show_port): attachment = self.attachments.show( self.req, FAKE_UUID1, FAKE_PORT_ID1)['interfaceAttachment'] self.assertIn('tag', attachment) self.assertIsNone(attachment['tag']) ctxt = self.req.environ['nova.context'] mock_get_by_uuid.assert_called_once_with(ctxt, FAKE_PORT_ID1) @mock.patch('nova.objects.VirtualInterfaceList.get_by_instance_uuid', return_value=objects.VirtualInterfaceList()) def test_list_interfaces_no_vifs(self, mock_get_by_instance_uuid): """Tests GET /servers/{server_id}/os-interface where there is no corresponding VirtualInterface database record for the attached ports. """ with mock.patch.object(self.attachments.network_api, 'list_ports', return_value={'ports': ports}) as list_ports: attachments = self.attachments.index( self.req, FAKE_UUID1)['interfaceAttachments'] for attachment in attachments: self.assertIn('tag', attachment) self.assertIsNone(attachment['tag']) ctxt = self.req.environ['nova.context'] list_ports.assert_called_once_with(ctxt, device_id=FAKE_UUID1) mock_get_by_instance_uuid.assert_called_once_with( self.req.environ['nova.context'], FAKE_UUID1) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_auth.py0000664000175000017500000000621300000000000024442 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # Copyright 2010 OpenStack Foundation # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testscenarios from nova.api import openstack as openstack_api from nova.api.openstack import auth from nova.api.openstack import compute from nova.api.openstack import urlmap from nova import test from nova.tests.unit.api.openstack import fakes class TestNoAuthMiddleware(testscenarios.WithScenarios, test.NoDBTestCase): scenarios = [ ('project_id', { 'expected_url': 'http://localhost/v2.1/user1_project', 'auth_middleware': auth.NoAuthMiddleware}), ('no_project_id', { 'expected_url': 'http://localhost/v2.1', 'auth_middleware': auth.NoAuthMiddlewareV2_18}), ] def setUp(self): super(TestNoAuthMiddleware, self).setUp() fakes.stub_out_networking(self) api_v21 = openstack_api.FaultWrapper( self.auth_middleware( compute.APIRouterV21() ) ) self.wsgi_app = urlmap.URLMap() self.wsgi_app['/v2.1'] = api_v21 self.req_url = '/v2.1' def test_authorize_user(self): req = fakes.HTTPRequest.blank(self.req_url, base_url='') req.headers['X-Auth-User'] = 'user1' req.headers['X-Auth-Key'] = 'user1_key' req.headers['X-Auth-Project-Id'] = 'user1_project' result = req.get_response(self.wsgi_app) self.assertEqual(result.status, '204 No Content') self.assertEqual(result.headers['X-Server-Management-Url'], self.expected_url) def test_authorize_user_trailing_slash(self): # make sure it works with trailing slash on the request self.req_url = self.req_url + '/' req = fakes.HTTPRequest.blank(self.req_url, base_url='') req.headers['X-Auth-User'] = 'user1' req.headers['X-Auth-Key'] = 'user1_key' req.headers['X-Auth-Project-Id'] = 'user1_project' result = req.get_response(self.wsgi_app) self.assertEqual(result.status, '204 No Content') self.assertEqual(result.headers['X-Server-Management-Url'], self.expected_url) def test_auth_token_no_empty_headers(self): req = fakes.HTTPRequest.blank(self.req_url, base_url='') req.headers['X-Auth-User'] = 'user1' req.headers['X-Auth-Key'] = 'user1_key' req.headers['X-Auth-Project-Id'] = 'user1_project' result = req.get_response(self.wsgi_app) self.assertEqual(result.status, '204 No Content') self.assertNotIn('X-CDN-Management-Url', result.headers) self.assertNotIn('X-Storage-Url', result.headers) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_availability_zone.py0000664000175000017500000003023300000000000027205 0ustar00zuulzuul00000000000000# Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
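# Tests for the os-availability-zone API: listing and detailing availability
# zones built from a faked service list, and creating servers with an
# explicit availability_zone.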
import datetime import iso8601 import mock from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel from nova.api.openstack.compute import availability_zone as az_v21 from nova.api.openstack.compute import servers as servers_v21 from nova import availability_zones from nova.compute import api as compute_api from nova import context from nova.db import api as db from nova import exception from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit.image import fake from nova.tests.unit import matchers from nova.tests.unit.objects import test_service FAKE_UUID = fakes.FAKE_UUID def fake_service_get_all(context, filters=None, **kwargs): def __fake_service(binary, availability_zone, created_at, updated_at, host, disabled): db_s = dict(test_service.fake_service, binary=binary, availability_zone=availability_zone, available_zones=availability_zone, created_at=created_at, updated_at=updated_at, host=host, disabled=disabled) # The version field is immutable so remove that before creating the obj db_s.pop('version', None) return objects.Service(context, **db_s) svcs = [__fake_service("nova-compute", "zone-2", datetime.datetime(2012, 11, 14, 9, 53, 25, 0), datetime.datetime(2012, 12, 26, 14, 45, 25, 0), "fake_host-1", True), __fake_service("nova-scheduler", "internal", datetime.datetime(2012, 11, 14, 9, 57, 3, 0), datetime.datetime(2012, 12, 26, 14, 45, 25, 0), "fake_host-1", True), __fake_service("nova-compute", "zone-1", datetime.datetime(2012, 11, 14, 9, 53, 25, 0), datetime.datetime(2012, 12, 26, 14, 45, 25, 0), "fake_host-1", False), __fake_service("nova-sched", "internal", datetime.datetime(2012, 11, 14, 9, 57, 3, 0), datetime.datetime(2012, 12, 26, 14, 45, 25, 0), "fake_host-1", False), # nova-conductor is in the same zone and host as nova-sched # and is here to make sure /detail filters out duplicates. 
__fake_service("nova-conductor", "internal", datetime.datetime(2012, 11, 14, 9, 57, 3, 0), datetime.datetime(2012, 12, 26, 14, 45, 25, 0), "fake_host-1", False)] if filters and 'disabled' in filters: svcs = [svc for svc in svcs if svc.disabled == filters['disabled']] return objects.ServiceList(objects=svcs) class AvailabilityZoneApiTestV21(test.NoDBTestCase): availability_zone = az_v21 def setUp(self): super(AvailabilityZoneApiTestV21, self).setUp() availability_zones.reset_cache() fakes.stub_out_nw_api(self) self.stub_out('nova.availability_zones.set_availability_zones', lambda c, services: services) self.stub_out('nova.servicegroup.API.service_is_up', lambda s, service: True) self.controller = self.availability_zone.AvailabilityZoneController() self.mock_service_get_all = mock.patch.object( self.controller.host_api, 'service_get_all', side_effect=fake_service_get_all).start() self.addCleanup(self.mock_service_get_all.stop) def test_filtered_availability_zones(self): zones = ['zone1', 'internal'] expected = [{'zoneName': 'zone1', 'zoneState': {'available': True}, "hosts": None}] result = self.controller._get_filtered_availability_zones(zones, True) self.assertEqual(result, expected) expected = [{'zoneName': 'zone1', 'zoneState': {'available': False}, "hosts": None}] result = self.controller._get_filtered_availability_zones(zones, False) self.assertEqual(result, expected) def test_availability_zone_index(self): req = fakes.HTTPRequest.blank('') resp_dict = self.controller.index(req) self.assertIn('availabilityZoneInfo', resp_dict) zones = resp_dict['availabilityZoneInfo'] self.assertEqual(len(zones), 2) self.assertEqual(zones[0]['zoneName'], u'zone-1') self.assertTrue(zones[0]['zoneState']['available']) self.assertIsNone(zones[0]['hosts']) self.assertEqual(zones[1]['zoneName'], u'zone-2') self.assertFalse(zones[1]['zoneState']['available']) self.assertIsNone(zones[1]['hosts']) def test_availability_zone_detail(self): req = fakes.HTTPRequest.blank('') resp_dict = self.controller.detail(req) self.assertIn('availabilityZoneInfo', resp_dict) zones = resp_dict['availabilityZoneInfo'] self.assertEqual(len(zones), 3) timestamp = iso8601.parse_date("2012-12-26T14:45:25Z") expected = [ { 'zoneName': 'internal', 'zoneState': {'available': True}, 'hosts': { 'fake_host-1': { 'nova-sched': { 'active': True, 'available': True, 'updated_at': timestamp }, 'nova-conductor': { 'active': True, 'available': True, 'updated_at': timestamp } } } }, { 'zoneName': 'zone-1', 'zoneState': {'available': True}, 'hosts': { 'fake_host-1': { 'nova-compute': { 'active': True, 'available': True, 'updated_at': timestamp } } } }, { 'zoneName': 'zone-2', 'zoneState': {'available': False}, 'hosts': None } ] self.assertEqual(expected, zones) self.assertEqual(1, self.mock_service_get_all.call_count, self.mock_service_get_all.call_args_list) @mock.patch.object(availability_zones, 'get_availability_zones', return_value=[['nova'], []]) def test_availability_zone_detail_no_services(self, mock_get_az): expected_response = {'availabilityZoneInfo': [{'zoneState': {'available': True}, 'hosts': {}, 'zoneName': 'nova'}]} req = fakes.HTTPRequest.blank('') resp_dict = self.controller.detail(req) self.assertThat(resp_dict, matchers.DictMatches(expected_response)) class ServersControllerCreateTestV21(test.TestCase): base_url = '/v2/%s/' % fakes.FAKE_PROJECT_ID def setUp(self): """Shared implementation for tests below that create instance.""" super(ServersControllerCreateTestV21, self).setUp() self.instance_cache_num = 0 
fakes.stub_out_nw_api(self) self._set_up_controller() def create_db_entry_for_new_instance(*args, **kwargs): instance = args[4] instance.uuid = FAKE_UUID return instance fake.stub_out_image_service(self) self.stub_out('nova.compute.api.API.create_db_entry_for_new_instance', create_db_entry_for_new_instance) def _set_up_controller(self): self.controller = servers_v21.ServersController() def _create_instance_with_availability_zone(self, zone_name): def create(*args, **kwargs): self.assertIn('availability_zone', kwargs) self.assertEqual('nova', kwargs['availability_zone']) return old_create(*args, **kwargs) old_create = compute_api.API.create self.stub_out('nova.compute.api.API.create', create) image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = ('http://localhost' + self.base_url + 'flavors/3') body = { 'server': { 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, 'metadata': { 'hello': 'world', 'open': 'stack', }, 'availability_zone': zone_name, }, } req = fakes.HTTPRequest.blank('') req.method = 'POST' req.body = jsonutils.dump_as_bytes(body) req.headers['content-type'] = 'application/json' admin_context = context.get_admin_context() db.service_create(admin_context, {'host': 'host1_zones', 'binary': "nova-compute", 'topic': 'compute', 'report_count': 0}) agg = objects.Aggregate(admin_context, name='agg1', uuid=uuidsentinel.agg_uuid, metadata={'availability_zone': 'nova'}) agg.create() agg.add_host('host1_zones') return req, body def test_create_instance_with_availability_zone(self): zone_name = 'nova' req, body = self._create_instance_with_availability_zone(zone_name) res = self.controller.create(req, body=body).obj server = res['server'] self.assertEqual(fakes.FAKE_UUID, server['id']) def test_create_instance_with_invalid_availability_zone_too_long(self): zone_name = 'a' * 256 req, body = self._create_instance_with_availability_zone(zone_name) self.assertRaises(exception.ValidationError, self.controller.create, req, body=body) def test_create_instance_with_invalid_availability_zone_too_short(self): zone_name = '' req, body = self._create_instance_with_availability_zone(zone_name) self.assertRaises(exception.ValidationError, self.controller.create, req, body=body) def test_create_instance_with_invalid_availability_zone_not_str(self): zone_name = 111 req, body = self._create_instance_with_availability_zone(zone_name) self.assertRaises(exception.ValidationError, self.controller.create, req, body=body) def test_create_instance_without_availability_zone(self): image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = ('http://localhost' + self.base_url + 'flavors/3') body = { 'server': { 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, 'metadata': { 'hello': 'world', 'open': 'stack', }, }, } req = fakes.HTTPRequest.blank('') req.method = 'POST' req.body = jsonutils.dump_as_bytes(body) req.headers['content-type'] = 'application/json' res = self.controller.create(req, body=body).obj server = res['server'] self.assertEqual(fakes.FAKE_UUID, server['id']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_baremetal_nodes.py0000664000175000017500000002301400000000000026623 0ustar00zuulzuul00000000000000# Copyright (c) 2013 NTT DOCOMO, INC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from ironicclient import exc as ironic_exc import mock import six from webob import exc from nova.api.openstack.compute import baremetal_nodes \ as b_nodes_v21 from nova import context from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit.virt.ironic import utils as ironic_utils def fake_node(**updates): node = { 'id': 1, 'service_host': "host", 'cpus': 8, 'memory_mb': 8192, 'local_gb': 128, 'pm_address': "10.1.2.3", 'pm_user': "pm_user", 'pm_password': "pm_pass", 'terminal_port': 8000, 'interfaces': [], 'instance_uuid': 'fake-instance-uuid', } if updates: node.update(updates) return node def fake_node_ext_status(**updates): node = fake_node(uuid='fake-uuid', task_state='fake-task-state', updated_at='fake-updated-at', pxe_config_path='fake-pxe-config-path') if updates: node.update(updates) return node FAKE_IRONIC_CLIENT = ironic_utils.FakeClient() @mock.patch.object(b_nodes_v21, '_get_ironic_client', lambda *_: FAKE_IRONIC_CLIENT) class BareMetalNodesTestV21(test.NoDBTestCase): mod = b_nodes_v21 def setUp(self): super(BareMetalNodesTestV21, self).setUp() self._setup() self.context = context.get_admin_context() self.request = fakes.HTTPRequest.blank('', use_admin_context=True) def _setup(self): self.controller = b_nodes_v21.BareMetalNodeController() @mock.patch.object(FAKE_IRONIC_CLIENT.node, 'list') def test_index_ironic(self, mock_list): properties = {'cpus': 2, 'memory_mb': 1024, 'local_gb': 20} node = ironic_utils.get_test_node(properties=properties) mock_list.return_value = [node] res_dict = self.controller.index(self.request) expected_output = {'nodes': [{'memory_mb': properties['memory_mb'], 'host': 'IRONIC MANAGED', 'disk_gb': properties['local_gb'], 'interfaces': [], 'task_state': None, 'id': node.uuid, 'cpus': properties['cpus']}]} self.assertEqual(expected_output, res_dict) mock_list.assert_called_once_with(detail=True) @mock.patch.object(FAKE_IRONIC_CLIENT.node, 'list') def test_index_ironic_missing_properties(self, mock_list): properties = {'cpus': 2} node = ironic_utils.get_test_node(properties=properties) mock_list.return_value = [node] res_dict = self.controller.index(self.request) expected_output = {'nodes': [{'memory_mb': 0, 'host': 'IRONIC MANAGED', 'disk_gb': 0, 'interfaces': [], 'task_state': None, 'id': node.uuid, 'cpus': properties['cpus']}]} self.assertEqual(expected_output, res_dict) mock_list.assert_called_once_with(detail=True) def test_index_ironic_not_implemented(self): with mock.patch.object(self.mod, 'ironic_client', None): self.assertRaises(exc.HTTPNotImplemented, self.controller.index, self.request) @mock.patch.object(FAKE_IRONIC_CLIENT.node, 'list_ports') @mock.patch.object(FAKE_IRONIC_CLIENT.node, 'get') def test_show_ironic(self, mock_get, mock_list_ports): properties = {'cpus': 1, 'memory_mb': 512, 'local_gb': 10} node = ironic_utils.get_test_node(properties=properties) port = ironic_utils.get_test_port() mock_get.return_value = node mock_list_ports.return_value = [port] res_dict = self.controller.show(self.request, node.uuid) expected_output = {'node': {'memory_mb': properties['memory_mb'], 
'instance_uuid': None, 'host': 'IRONIC MANAGED', 'disk_gb': properties['local_gb'], 'interfaces': [{'address': port.address}], 'task_state': None, 'id': node.uuid, 'cpus': properties['cpus']}} self.assertEqual(expected_output, res_dict) mock_get.assert_called_once_with(node.uuid) mock_list_ports.assert_called_once_with(node.uuid) @mock.patch.object(FAKE_IRONIC_CLIENT.node, 'list_ports') @mock.patch.object(FAKE_IRONIC_CLIENT.node, 'get') def test_show_ironic_no_properties(self, mock_get, mock_list_ports): properties = {} node = ironic_utils.get_test_node(properties=properties) port = ironic_utils.get_test_port() mock_get.return_value = node mock_list_ports.return_value = [port] res_dict = self.controller.show(self.request, node.uuid) expected_output = {'node': {'memory_mb': 0, 'instance_uuid': None, 'host': 'IRONIC MANAGED', 'disk_gb': 0, 'interfaces': [{'address': port.address}], 'task_state': None, 'id': node.uuid, 'cpus': 0}} self.assertEqual(expected_output, res_dict) mock_get.assert_called_once_with(node.uuid) mock_list_ports.assert_called_once_with(node.uuid) @mock.patch.object(FAKE_IRONIC_CLIENT.node, 'list_ports') @mock.patch.object(FAKE_IRONIC_CLIENT.node, 'get') def test_show_ironic_no_interfaces(self, mock_get, mock_list_ports): properties = {'cpus': 1, 'memory_mb': 512, 'local_gb': 10} node = ironic_utils.get_test_node(properties=properties) mock_get.return_value = node mock_list_ports.return_value = [] res_dict = self.controller.show(self.request, node.uuid) self.assertEqual([], res_dict['node']['interfaces']) mock_get.assert_called_once_with(node.uuid) mock_list_ports.assert_called_once_with(node.uuid) @mock.patch.object(FAKE_IRONIC_CLIENT.node, 'get', side_effect=ironic_exc.NotFound()) def test_show_ironic_node_not_found(self, mock_get): error = self.assertRaises(exc.HTTPNotFound, self.controller.show, self.request, 'fake-uuid') self.assertIn('fake-uuid', six.text_type(error)) def test_show_ironic_not_implemented(self): with mock.patch.object(self.mod, 'ironic_client', None): properties = {'cpus': 1, 'memory_mb': 512, 'local_gb': 10} node = ironic_utils.get_test_node(properties=properties) self.assertRaises(exc.HTTPNotImplemented, self.controller.show, self.request, node.uuid) def test_create_ironic_not_supported(self): self.assertRaises(exc.HTTPBadRequest, self.controller.create, self.request, {'node': object()}) def test_delete_ironic_not_supported(self): self.assertRaises(exc.HTTPBadRequest, self.controller.delete, self.request, 'fake-id') def test_add_interface_ironic_not_supported(self): self.assertRaises(exc.HTTPBadRequest, self.controller._add_interface, self.request, 'fake-id', 'fake-body') def test_remove_interface_ironic_not_supported(self): self.assertRaises(exc.HTTPBadRequest, self.controller._remove_interface, self.request, 'fake-id', 'fake-body') class BareMetalNodesTestDeprecation(test.NoDBTestCase): def setUp(self): super(BareMetalNodesTestDeprecation, self).setUp() self.controller = b_nodes_v21.BareMetalNodeController() self.req = fakes.HTTPRequest.blank('', version='2.36') def test_all_apis_return_not_found(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.show, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.index, self.req) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.create, self.req, {}) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.delete, self.req, fakes.FAKE_UUID) 
self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller._add_interface, self.req, fakes.FAKE_UUID, {}) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller._remove_interface, self.req, fakes.FAKE_UUID, {}) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_certificates.py0000664000175000017500000000326600000000000026153 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # Copyright 2013 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import webob.exc from nova.api.openstack.compute import certificates \ as certificates_v21 from nova import context from nova import test from nova.tests.unit.api.openstack import fakes class CertificatesTestV21(test.NoDBTestCase): certificates = certificates_v21 url = '/v2/%s/os-certificates' % fakes.FAKE_PROJECT_ID certificate_show_extension = 'os_compute_api:os-certificates:show' certificate_create_extension = \ 'os_compute_api:os-certificates:create' def setUp(self): super(CertificatesTestV21, self).setUp() self.context = context.RequestContext('fake', fakes.FAKE_PROJECT_ID) self.controller = self.certificates.CertificatesController() self.req = fakes.HTTPRequest.blank('') def test_certificates_show_root(self): self.assertRaises(webob.exc.HTTPGone, self.controller.show, self.req, 'root') def test_certificates_create_certificate(self): self.assertRaises(webob.exc.HTTPGone, self.controller.create, self.req) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_cloudpipe.py0000664000175000017500000000334200000000000025465 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
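# The os-cloudpipe API is no longer supported; every controller method is
# expected to raise HTTP 410 (Gone).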
from oslo_utils import uuidutils from webob import exc from nova.api.openstack.compute import cloudpipe as cloudpipe_v21 from nova import test from nova.tests.unit.api.openstack import fakes project_id = uuidutils.generate_uuid(dashed=False) class CloudpipeTestV21(test.NoDBTestCase): cloudpipe = cloudpipe_v21 url = '/v2/%s/os-cloudpipe' % fakes.FAKE_PROJECT_ID def setUp(self): super(CloudpipeTestV21, self).setUp() self.controller = self.cloudpipe.CloudpipeController() self.req = fakes.HTTPRequest.blank('') def test_cloudpipe_list(self): self.assertRaises(exc.HTTPGone, self.controller.index, self.req) def test_cloudpipe_create(self): body = {'cloudpipe': {'project_id': project_id}} self.assertRaises(exc.HTTPGone, self.controller.create, self.req, body=body) def test_cloudpipe_configure_project(self): body = {"configure_project": {"vpn_ip": "1.2.3.4", "vpn_port": 222}} self.assertRaises(exc.HTTPGone, self.controller.update, self.req, 'configure-project', body=body) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_cloudpipe_update.py0000664000175000017500000000254600000000000027034 0ustar00zuulzuul00000000000000# Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import webob from nova.api.openstack.compute import cloudpipe as clup_v21 from nova import test from nova.tests.unit.api.openstack import fakes class CloudpipeUpdateTestV21(test.NoDBTestCase): def setUp(self): super(CloudpipeUpdateTestV21, self).setUp() self.controller = clup_v21.CloudpipeController() self.req = fakes.HTTPRequest.blank('') def _check_status(self, expected_status, res, controller_method): self.assertEqual(expected_status, controller_method.wsgi_code) def test_cloudpipe_configure_project(self): body = {"configure_project": {"vpn_ip": "1.2.3.4", "vpn_port": 222}} self.assertRaises(webob.exc.HTTPGone, self.controller.update, self.req, 'configure-project', body=body) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_console_auth_tokens.py0000664000175000017500000001063700000000000027554 0ustar00zuulzuul00000000000000# Copyright 2013 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
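# Tests for GET /os-console-auth-tokens/{console_token}: the base
# microversion only exposes RDP console connect info, while 2.31 also
# allows other console types such as webmks.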
import copy import mock import webob from nova.api.openstack import api_version_request from nova.api.openstack.compute import console_auth_tokens \ as console_auth_tokens_v21 from nova import exception from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes class ConsoleAuthTokensExtensionTestV21(test.NoDBTestCase): controller_class = console_auth_tokens_v21 _EXPECTED_OUTPUT = {'console': {'instance_uuid': fakes.FAKE_UUID, 'host': 'fake_host', 'port': '1234', 'internal_access_path': 'fake_access_path'}} # The database backend returns a ConsoleAuthToken.to_dict() and o.vo # StringField are unicode. And the port is an IntegerField. _EXPECTED_OUTPUT_DB = copy.deepcopy(_EXPECTED_OUTPUT) _EXPECTED_OUTPUT_DB['console'].update( {'host': u'fake_host', 'port': 1234, 'internal_access_path': u'fake_access_path'}) def setUp(self): super(ConsoleAuthTokensExtensionTestV21, self).setUp() self.controller = self.controller_class.ConsoleAuthTokensController() self.req = fakes.HTTPRequest.blank('', use_admin_context=True) self.context = self.req.environ['nova.context'] @mock.patch('nova.objects.ConsoleAuthToken.validate', return_value=objects.ConsoleAuthToken( instance_uuid=fakes.FAKE_UUID, host='fake_host', port='1234', internal_access_path='fake_access_path', console_type='rdp-html5', token=fakes.FAKE_UUID)) def test_get_console_connect_info(self, mock_validate): output = self.controller.show(self.req, fakes.FAKE_UUID) self.assertEqual(self._EXPECTED_OUTPUT_DB, output) mock_validate.assert_called_once_with(self.context, fakes.FAKE_UUID) @mock.patch('nova.objects.ConsoleAuthToken.validate', side_effect=exception.InvalidToken(token='***')) def test_get_console_connect_info_token_not_found(self, mock_validate): self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, fakes.FAKE_UUID) mock_validate.assert_called_once_with(self.context, fakes.FAKE_UUID) @mock.patch('nova.objects.ConsoleAuthToken.validate', return_value=objects.ConsoleAuthToken( instance_uuid=fakes.FAKE_UUID, host='fake_host', port='1234', internal_access_path='fake_access_path', console_type='unauthorized_console_type', token=fakes.FAKE_UUID)) def test_get_console_connect_info_nonrdp_console_type(self, mock_validate): self.assertRaises(webob.exc.HTTPUnauthorized, self.controller.show, self.req, fakes.FAKE_UUID) mock_validate.assert_called_once_with(self.context, fakes.FAKE_UUID) class ConsoleAuthTokensExtensionTestV231(ConsoleAuthTokensExtensionTestV21): def setUp(self): super(ConsoleAuthTokensExtensionTestV231, self).setUp() self.req.api_version_request = api_version_request.APIVersionRequest( '2.31') @mock.patch('nova.objects.ConsoleAuthToken.validate', return_value = objects.ConsoleAuthToken( instance_uuid=fakes.FAKE_UUID, host='fake_host', port='1234', internal_access_path='fake_access_path', console_type='webmks', token=fakes.FAKE_UUID)) def test_get_console_connect_info_nonrdp_console_type(self, mock_validate): output = self.controller.show(self.req, fakes.FAKE_UUID) self.assertEqual(self._EXPECTED_OUTPUT_DB, output) mock_validate.assert_called_once_with(self.context, fakes.FAKE_UUID) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_console_output.py0000664000175000017500000001367400000000000026574 0ustar00zuulzuul00000000000000# Copyright 2011 Eldar Nugaev # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import string import mock import webob from nova.api.openstack.compute import console_output \ as console_output_v21 from nova.compute import api as compute_api from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance def fake_get_console_output(self, _context, _instance, tail_length): fixture = [str(i) for i in range(5)] if tail_length is None: pass elif tail_length == 0: fixture = [] else: fixture = fixture[-int(tail_length):] return '\n'.join(fixture) def fake_get_console_output_not_ready(self, _context, _instance, tail_length): raise exception.InstanceNotReady(instance_id=_instance["uuid"]) def fake_get_console_output_all_characters(self, _ctx, _instance, _tail_len): return string.printable def fake_get(self, context, instance_uuid, expected_attrs=None, cell_down_support=False): return fake_instance.fake_instance_obj(context, **{'uuid': instance_uuid}) def fake_get_not_found(*args, **kwargs): raise exception.InstanceNotFound(instance_id='fake') class ConsoleOutputExtensionTestV21(test.NoDBTestCase): controller_class = console_output_v21 validation_error = exception.ValidationError def setUp(self): super(ConsoleOutputExtensionTestV21, self).setUp() self.stub_out('nova.compute.api.API.get_console_output', fake_get_console_output) self.stub_out('nova.compute.api.API.get', fake_get) self.controller = self.controller_class.ConsoleOutputController() self.req = fakes.HTTPRequest.blank('') def _get_console_output(self, length_dict=None): length_dict = length_dict or {} body = {'os-getConsoleOutput': length_dict} return self.controller.get_console_output(self.req, fakes.FAKE_UUID, body=body) def _check_console_output_failure(self, exception, body): self.assertRaises(exception, self.controller.get_console_output, self.req, fakes.FAKE_UUID, body=body) def test_get_text_console_instance_action(self): output = self._get_console_output() self.assertEqual({'output': '0\n1\n2\n3\n4'}, output) def test_get_console_output_with_tail(self): output = self._get_console_output(length_dict={'length': 3}) self.assertEqual({'output': '2\n3\n4'}, output) def test_get_console_output_with_none_length(self): output = self._get_console_output(length_dict={'length': None}) self.assertEqual({'output': '0\n1\n2\n3\n4'}, output) def test_get_console_output_with_length_as_str(self): output = self._get_console_output(length_dict={'length': '3'}) self.assertEqual({'output': '2\n3\n4'}, output) def test_get_console_output_filtered_characters(self): self.stub_out('nova.compute.api.API.get_console_output', fake_get_console_output_all_characters) output = self._get_console_output() expect = (string.digits + string.ascii_letters + string.punctuation + ' \t\n') self.assertEqual({'output': expect}, output) def test_get_text_console_no_instance(self): self.stub_out('nova.compute.api.API.get', fake_get_not_found) body = {'os-getConsoleOutput': {}} self._check_console_output_failure(webob.exc.HTTPNotFound, body) def 
test_get_text_console_no_instance_on_get_output(self): self.stub_out('nova.compute.api.API.get_console_output', fake_get_not_found) body = {'os-getConsoleOutput': {}} self._check_console_output_failure(webob.exc.HTTPNotFound, body) def test_get_console_output_with_non_integer_length(self): body = {'os-getConsoleOutput': {'length': 'NaN'}} self._check_console_output_failure(self.validation_error, body) def test_get_text_console_bad_body(self): body = {} self._check_console_output_failure(self.validation_error, body) def test_get_console_output_with_length_as_float(self): body = {'os-getConsoleOutput': {'length': 2.5}} self._check_console_output_failure(self.validation_error, body) def test_get_console_output_not_ready(self): self.stub_out('nova.compute.api.API.get_console_output', fake_get_console_output_not_ready) body = {'os-getConsoleOutput': {}} self._check_console_output_failure(webob.exc.HTTPConflict, body) def test_not_implemented(self): self.stub_out('nova.compute.api.API.get_console_output', fakes.fake_not_implemented) body = {'os-getConsoleOutput': {}} self._check_console_output_failure(webob.exc.HTTPNotImplemented, body) def test_get_console_output_with_boolean_length(self): body = {'os-getConsoleOutput': {'length': True}} self._check_console_output_failure(self.validation_error, body) @mock.patch.object(compute_api.API, 'get_console_output', side_effect=exception.ConsoleNotAvailable( instance_uuid='fake_uuid')) def test_get_console_output_not_available(self, mock_get_console_output): body = {'os-getConsoleOutput': {}} self._check_console_output_failure(webob.exc.HTTPNotFound, body) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_create_backup.py0000664000175000017500000004112700000000000026274 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
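# Tests for the createBackup server action: request schema validation
# (name, backup_type, rotation, metadata), error-to-HTTP mapping, and the
# behaviour changes introduced in microversions 2.39 and 2.45.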
import mock from oslo_utils import timeutils import six import webob from nova.api.openstack import common from nova.api.openstack.compute import create_backup \ as create_backup_v21 from nova.compute import api from nova.compute import utils as compute_utils from nova import exception from nova import test from nova.tests.unit.api.openstack.compute import admin_only_action_common from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance class CreateBackupTestsV21(admin_only_action_common.CommonMixin, test.NoDBTestCase): create_backup = create_backup_v21 controller_name = 'CreateBackupController' validation_error = exception.ValidationError def setUp(self): super(CreateBackupTestsV21, self).setUp() self.controller = getattr(self.create_backup, self.controller_name)() self.compute_api = self.controller.compute_api patch_get = mock.patch.object(self.compute_api, 'get') self.mock_get = patch_get.start() self.addCleanup(patch_get.stop) @mock.patch.object(common, 'check_img_metadata_properties_quota') @mock.patch.object(api.API, 'backup') def test_create_backup_with_metadata(self, mock_backup, mock_check_image): metadata = {'123': 'asdf'} body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': 1, 'metadata': metadata, }, } image = dict(id='fake-image-id', status='ACTIVE', name='Backup 1', properties=metadata) instance = fake_instance.fake_instance_obj(self.context) self.mock_get.return_value = instance mock_backup.return_value = image res = self.controller._create_backup(self.req, instance.uuid, body=body) mock_check_image.assert_called_once_with(self.context, metadata) mock_backup.assert_called_once_with(self.context, instance, 'Backup 1', 'daily', 1, extra_properties=metadata) self.assertEqual(202, res.status_int) self.assertIn('fake-image-id', res.headers['Location']) def test_create_backup_no_name(self): # Name is required for backups. body = { 'createBackup': { 'backup_type': 'daily', 'rotation': 1, }, } self.assertRaises(self.validation_error, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) def test_create_backup_name_with_leading_trailing_spaces(self): body = { 'createBackup': { 'name': ' test ', 'backup_type': 'daily', 'rotation': 1, }, } self.assertRaises(self.validation_error, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) @mock.patch.object(common, 'check_img_metadata_properties_quota') @mock.patch.object(api.API, 'backup') def test_create_backup_name_with_leading_trailing_spaces_compat_mode( self, mock_backup, mock_check_image): body = { 'createBackup': { 'name': ' test ', 'backup_type': 'daily', 'rotation': 1, }, } image = dict(id='fake-image-id', status='ACTIVE', name='Backup 1', properties={}) instance = fake_instance.fake_instance_obj(self.context) self.mock_get.return_value = instance mock_backup.return_value = image self.req.set_legacy_v2() self.controller._create_backup(self.req, instance.uuid, body=body) mock_check_image.assert_called_once_with(self.context, {}) mock_backup.assert_called_once_with(self.context, instance, 'test', 'daily', 1, extra_properties={}) def test_create_backup_no_rotation(self): # Rotation is required for backup requests. 
body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', }, } self.assertRaises(self.validation_error, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) def test_create_backup_negative_rotation(self): """Rotation must be greater than or equal to zero for backup requests """ body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': -1, }, } self.assertRaises(self.validation_error, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) def test_create_backup_negative_rotation_with_string_number(self): body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': '-1', }, } self.assertRaises(self.validation_error, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) def test_create_backup_rotation_with_empty_string(self): body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': '', }, } self.assertRaises(self.validation_error, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) def test_create_backup_no_backup_type(self): # Backup Type (daily or weekly) is required for backup requests. body = { 'createBackup': { 'name': 'Backup 1', 'rotation': 1, }, } self.assertRaises(self.validation_error, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) def test_create_backup_non_dict_metadata(self): body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': 1, 'metadata': 'non_dict', }, } self.assertRaises(self.validation_error, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) def test_create_backup_bad_entity(self): body = {'createBackup': 'go'} self.assertRaises(self.validation_error, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) @mock.patch.object(common, 'check_img_metadata_properties_quota') @mock.patch.object(api.API, 'backup') def test_create_backup_rotation_is_zero(self, mock_backup, mock_check_image): # The happy path for creating backups if rotation is zero. body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': 0, }, } image = dict(id='fake-image-id', status='ACTIVE', name='Backup 1', properties={}) instance = fake_instance.fake_instance_obj(self.context) self.mock_get.return_value = instance mock_backup.return_value = image res = self.controller._create_backup(self.req, instance.uuid, body=body) mock_check_image.assert_called_once_with(self.context, {}) mock_backup.assert_called_once_with(self.context, instance, 'Backup 1', 'daily', 0, extra_properties={}) self.assertEqual(202, res.status_int) self.assertNotIn('Location', res.headers) @mock.patch.object(common, 'check_img_metadata_properties_quota') @mock.patch.object(api.API, 'backup') def test_create_backup_rotation_is_positive(self, mock_backup, mock_check_image): # The happy path for creating backups if rotation is positive. 
body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': 1, }, } image = dict(id='fake-image-id', status='ACTIVE', name='Backup 1', properties={}) instance = fake_instance.fake_instance_obj(self.context) self.mock_get.return_value = instance mock_backup.return_value = image res = self.controller._create_backup(self.req, instance.uuid, body=body) mock_check_image.assert_called_once_with(self.context, {}) mock_backup.assert_called_once_with(self.context, instance, 'Backup 1', 'daily', 1, extra_properties={}) self.assertEqual(202, res.status_int) self.assertIn('fake-image-id', res.headers['Location']) @mock.patch.object(common, 'check_img_metadata_properties_quota') @mock.patch.object(api.API, 'backup') def test_create_backup_rotation_is_string_number( self, mock_backup, mock_check_image): body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': '1', }, } image = dict(id='fake-image-id', status='ACTIVE', name='Backup 1', properties={}) instance = fake_instance.fake_instance_obj(self.context) self.mock_get.return_value = instance mock_backup.return_value = image res = self.controller._create_backup(self.req, instance['uuid'], body=body) mock_check_image.assert_called_once_with(self.context, {}) mock_backup.assert_called_once_with(self.context, instance, 'Backup 1', 'daily', 1, extra_properties={}) self.assertEqual(202, res.status_int) self.assertIn('fake-image-id', res.headers['Location']) @mock.patch.object(common, 'check_img_metadata_properties_quota') @mock.patch.object(api.API, 'backup', return_value=dict( id='fake-image-id', status='ACTIVE', name='Backup 1', properties={})) def test_create_backup_v2_45(self, mock_backup, mock_check_image): """Tests the 2.45 microversion to ensure the Location header is not in the response. 
""" body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': '1', }, } instance = fake_instance.fake_instance_obj(self.context) self.mock_get.return_value = instance req = fakes.HTTPRequest.blank('', version='2.45') res = self.controller._create_backup(req, instance['uuid'], body=body) self.assertIsInstance(res, dict) self.assertEqual('fake-image-id', res['image_id']) @mock.patch.object(common, 'check_img_metadata_properties_quota') @mock.patch.object(api.API, 'backup') def test_create_backup_raises_conflict_on_invalid_state(self, mock_backup, mock_check_image): body_map = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': 1, }, } instance = fake_instance.fake_instance_obj(self.context) self.mock_get.return_value = instance mock_backup.side_effect = exception.InstanceInvalidState( attr='vm_state', instance_uuid=instance.uuid, state='foo', method='backup') ex = self.assertRaises(webob.exc.HTTPConflict, self.controller._create_backup, self.req, instance.uuid, body=body_map) self.assertIn("Cannot 'createBackup' instance %(id)s" % {'id': instance.uuid}, ex.explanation) def test_create_backup_with_non_existed_instance(self): body_map = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': 1, }, } uuid = fakes.FAKE_UUID self.mock_get.side_effect = exception.InstanceNotFound( instance_id=uuid) self.assertRaises(webob.exc.HTTPNotFound, self.controller._create_backup, self.req, uuid, body=body_map) def test_create_backup_with_invalid_create_backup(self): body = { 'createBackupup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': 1, }, } self.assertRaises(self.validation_error, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) @mock.patch.object(common, 'check_img_metadata_properties_quota') @mock.patch.object(compute_utils, 'is_volume_backed_instance', return_value=True) def test_backup_volume_backed_instance(self, mock_is_volume_backed, mock_check_image): body = { 'createBackup': { 'name': 'BackupMe', 'backup_type': 'daily', 'rotation': 3 }, } updates = {'vm_state': 'active', 'task_state': None, 'launched_at': timeutils.utcnow()} instance = fake_instance.fake_instance_obj(self.context, **updates) instance.image_ref = None self.mock_get.return_value = instance ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller._create_backup, self.req, instance['uuid'], body=body) mock_check_image.assert_called_once_with(self.context, {}) mock_is_volume_backed.assert_called_once_with(self.context, instance) self.assertIn('Backup is not supported for volume-backed instances', six.text_type(ex)) class CreateBackupTestsV239(test.NoDBTestCase): def setUp(self): super(CreateBackupTestsV239, self).setUp() self.controller = create_backup_v21.CreateBackupController() self.req = fakes.HTTPRequest.blank('', version='2.39') @mock.patch.object(common, 'check_img_metadata_properties_quota') @mock.patch.object(common, 'get_instance') def test_create_backup_no_quota_checks(self, mock_get_instance, mock_check_quotas): # 'mock_get_instance' helps to skip the whole logic of the action, # but to make the test mock_get_instance.side_effect = webob.exc.HTTPNotFound metadata = {'123': 'asdf'} body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': 1, 'metadata': metadata, }, } self.assertRaises(webob.exc.HTTPNotFound, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) # starting from version 2.39 no quota checks on Nova side are performed # for 'createBackup' action after 
removing 'image-metadata' proxy API mock_check_quotas.assert_not_called() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_deferred_delete.py0000664000175000017500000001654100000000000026610 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import webob from nova.api.openstack.compute import deferred_delete as dd_v21 from nova.compute import api as compute_api from nova import context from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance class FakeRequest(object): def __init__(self, context): self.environ = {'nova.context': context} class DeferredDeleteExtensionTestV21(test.NoDBTestCase): ext_ver = dd_v21.DeferredDeleteController def setUp(self): super(DeferredDeleteExtensionTestV21, self).setUp() self.fake_input_dict = {} self.fake_uuid = 'fake_uuid' self.fake_context = context.RequestContext( 'fake', fakes.FAKE_PROJECT_ID) self.fake_req = FakeRequest(self.fake_context) self.extension = self.ext_ver() @mock.patch.object(compute_api.API, 'get') @mock.patch.object(compute_api.API, 'force_delete') def test_force_delete(self, mock_force_delete, mock_get): instance = fake_instance.fake_instance_obj( self.fake_req.environ['nova.context']) mock_get.return_value = instance res = self.extension._force_delete(self.fake_req, self.fake_uuid, self.fake_input_dict) # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. 
        if isinstance(self.extension, dd_v21.DeferredDeleteController):
            status_int = self.extension._force_delete.wsgi_code
        else:
            status_int = res.status_int
        self.assertEqual(202, status_int)
        mock_get.assert_called_once_with(self.fake_context, self.fake_uuid,
                                         expected_attrs=None,
                                         cell_down_support=False)
        mock_force_delete.assert_called_once_with(self.fake_context, instance)

    @mock.patch.object(compute_api.API, 'get')
    def test_force_delete_instance_not_found(self, mock_get):
        mock_get.side_effect = exception.InstanceNotFound(
            instance_id='instance-0000')

        self.assertRaises(webob.exc.HTTPNotFound,
                          self.extension._force_delete,
                          self.fake_req, self.fake_uuid,
                          self.fake_input_dict)
        mock_get.assert_called_once_with(self.fake_context, self.fake_uuid,
                                         expected_attrs=None,
                                         cell_down_support=False)

    @mock.patch.object(compute_api.API, 'get')
    @mock.patch.object(compute_api.API, 'force_delete',
                       side_effect=exception.InstanceIsLocked(
                           instance_uuid='fake_uuid'))
    def test_force_delete_instance_locked(self, mock_force_delete, mock_get):
        req = fakes.HTTPRequest.blank('/v2/%s/servers/fake_uuid/action' %
                                      fakes.FAKE_PROJECT_ID)
        ex = self.assertRaises(webob.exc.HTTPConflict,
                               self.extension._force_delete,
                               req, 'fake_uuid', '')
        self.assertIn('Instance fake_uuid is locked', ex.explanation)

    @mock.patch.object(compute_api.API, 'get')
    @mock.patch.object(compute_api.API, 'force_delete',
                       side_effect=exception.InstanceNotFound(
                           instance_id='fake_uuid'))
    def test_force_delete_instance_notfound(self, mock_force_delete,
                                            mock_get):
        req = fakes.HTTPRequest.blank('/v2/%s/servers/fake_uuid/action' %
                                      fakes.FAKE_PROJECT_ID)
        ex = self.assertRaises(webob.exc.HTTPNotFound,
                               self.extension._force_delete,
                               req, 'fake_uuid', '')
        self.assertIn('Instance fake_uuid could not be found',
                      ex.explanation)

    @mock.patch.object(compute_api.API, 'get')
    @mock.patch.object(compute_api.API, 'restore')
    def test_restore(self, mock_restore, mock_get):
        instance = fake_instance.fake_instance_obj(
            self.fake_req.environ['nova.context'])
        mock_get.return_value = instance
        res = self.extension._restore(self.fake_req, self.fake_uuid,
                                      self.fake_input_dict)
        # NOTE: on v2.1, http status code is set as wsgi_code of API
        # method instead of status_int in a response object.
if isinstance(self.extension, dd_v21.DeferredDeleteController): status_int = self.extension._restore.wsgi_code else: status_int = res.status_int self.assertEqual(202, status_int) mock_get.assert_called_once_with(self.fake_context, self.fake_uuid, expected_attrs=None, cell_down_support=False) mock_restore.assert_called_once_with(self.fake_context, instance) @mock.patch.object(compute_api.API, 'get') def test_restore_instance_not_found(self, mock_get): mock_get.side_effect = exception.InstanceNotFound( instance_id='instance-0000') self.assertRaises(webob.exc.HTTPNotFound, self.extension._restore, self.fake_req, self.fake_uuid, self.fake_input_dict) mock_get.assert_called_once_with(self.fake_context, self.fake_uuid, expected_attrs=None, cell_down_support=False) @mock.patch.object(compute_api.API, 'get') @mock.patch.object(compute_api.API, 'restore') def test_restore_raises_conflict_on_invalid_state(self, mock_restore, mock_get): instance = fake_instance.fake_instance_obj( self.fake_req.environ['nova.context']) mock_get.return_value = instance mock_restore.side_effect = exception.InstanceInvalidState( attr='fake_attr', state='fake_state', method='fake_method', instance_uuid='fake') self.assertRaises(webob.exc.HTTPConflict, self.extension._restore, self.fake_req, self.fake_uuid, self.fake_input_dict) mock_get.assert_called_once_with(self.fake_context, self.fake_uuid, expected_attrs=None, cell_down_support=False) mock_restore.assert_called_once_with(self.fake_context, instance) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_disk_config.py0000664000175000017500000004300100000000000025754 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import mock from oslo_serialization import jsonutils import six from nova.api.openstack import compute from nova.compute import api as compute_api from nova import context as nova_context from nova import exception from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance import nova.tests.unit.image.fake MANUAL_INSTANCE_UUID = fakes.FAKE_UUID AUTO_INSTANCE_UUID = fakes.FAKE_UUID.replace('a', 'b') API_DISK_CONFIG = 'OS-DCF:diskConfig' class DiskConfigTestCaseV21(test.TestCase): project_id = fakes.FAKE_PROJECT_ID def setUp(self): super(DiskConfigTestCaseV21, self).setUp() fakes.stub_out_nw_api(self) fakes.stub_out_secgroup_api(self) self._set_up_app() self._setup_fake_image_service() ctxt = nova_context.RequestContext( # These values match what is used in fakes.HTTPRequest.blank. 
user_id='fake_user', project_id=self.project_id) FAKE_INSTANCES = [ fakes.stub_instance_obj(ctxt, uuid=MANUAL_INSTANCE_UUID, auto_disk_config=False), fakes.stub_instance_obj(ctxt, uuid=AUTO_INSTANCE_UUID, auto_disk_config=True) ] def fake_instance_get(_self, context, server_id, *args, **kwargs): for instance in FAKE_INSTANCES: if server_id == instance.uuid: return instance raise exception.InstanceNotFound(instance_id=server_id) self.stub_out('nova.compute.api.API.get', fake_instance_get) self.stub_out('nova.objects.Instance.save', lambda *args, **kwargs: None) def fake_rebuild(*args, **kwargs): pass self.stub_out('nova.compute.api.API.rebuild', fake_rebuild) def fake_instance_create(context, inst_, session=None): inst = fake_instance.fake_db_instance(**{ 'id': 1, 'uuid': AUTO_INSTANCE_UUID, 'created_at': datetime.datetime(2010, 10, 10, 12, 0, 0), 'updated_at': datetime.datetime(2010, 10, 10, 12, 0, 0), 'progress': 0, 'name': 'instance-1', # this is a property 'task_state': '', 'vm_state': '', 'auto_disk_config': inst_['auto_disk_config'], 'security_groups': inst_['security_groups'], 'instance_type': objects.Flavor.get_by_name(context, 'm1.small'), }) return inst self.stub_out('nova.db.api.instance_create', fake_instance_create) def _set_up_app(self): self.app = compute.APIRouterV21() def _get_expected_msg_for_invalid_disk_config(self): if six.PY3: return ('{{"badRequest": {{"message": "Invalid input for' ' field/attribute {0}. Value: {1}. \'{1}\' is' ' not one of [\'AUTO\', \'MANUAL\']", "code": 400}}}}') else: return ('{{"badRequest": {{"message": "Invalid input for' ' field/attribute {0}. Value: {1}. u\'{1}\' is' ' not one of [\'AUTO\', \'MANUAL\']", "code": 400}}}}') def _setup_fake_image_service(self): self.image_service = nova.tests.unit.image.fake.stub_out_image_service( self) timestamp = datetime.datetime(2011, 1, 1, 1, 2, 3) image = {'id': '88580842-f50a-11e2-8d3a-f23c91aec05e', 'name': 'fakeimage7', 'created_at': timestamp, 'updated_at': timestamp, 'deleted_at': None, 'deleted': False, 'status': 'active', 'is_public': False, 'container_format': 'ova', 'disk_format': 'vhd', 'size': '74185822', 'properties': {'auto_disk_config': 'Disabled'}} self.image_service.create(None, image) def tearDown(self): super(DiskConfigTestCaseV21, self).tearDown() nova.tests.unit.image.fake.FakeImageService_reset() def assertDiskConfig(self, dict_, value): self.assertIn(API_DISK_CONFIG, dict_) self.assertEqual(dict_[API_DISK_CONFIG], value) def test_show_server(self): req = fakes.HTTPRequest.blank( '/%s/servers/%s' % (self.project_id, MANUAL_INSTANCE_UUID)) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, 'MANUAL') req = fakes.HTTPRequest.blank( '/%s/servers/%s' % (self.project_id, AUTO_INSTANCE_UUID)) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, 'AUTO') def test_detail_servers(self): req = fakes.HTTPRequest.blank('/%s/servers/detail' % self.project_id) res = req.get_response(self.app) server_dicts = jsonutils.loads(res.body)['servers'] expectations = ['MANUAL', 'AUTO'] for server_dict, expected in zip(server_dicts, expectations): self.assertDiskConfig(server_dict, expected) def test_show_image(self): self.flags(group='glance', api_servers=['http://localhost:9292']) req = fakes.HTTPRequest.blank( '/%s/images/a440c04b-79fa-479c-bed1-0b816eaec379' % self.project_id) res = req.get_response(self.app) image_dict = jsonutils.loads(res.body)['image'] 
self.assertDiskConfig(image_dict, 'MANUAL') req = fakes.HTTPRequest.blank( '/%s/images/70a599e0-31e7-49b7-b260-868f441e862b' % self.project_id) res = req.get_response(self.app) image_dict = jsonutils.loads(res.body)['image'] self.assertDiskConfig(image_dict, 'AUTO') def test_detail_image(self): req = fakes.HTTPRequest.blank('/%s/images/detail' % self.project_id) res = req.get_response(self.app) image_dicts = jsonutils.loads(res.body)['images'] expectations = ['MANUAL', 'AUTO'] for image_dict, expected in zip(image_dicts, expectations): # NOTE(sirp): image fixtures 6 and 7 are setup for # auto_disk_config testing if image_dict['id'] in (6, 7): self.assertDiskConfig(image_dict, expected) def test_create_server_override_auto(self): req = fakes.HTTPRequest.blank('/%s/servers' % self.project_id) req.method = 'POST' req.content_type = 'application/json' body = {'server': { 'name': 'server_test', 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175', 'flavorRef': '1', API_DISK_CONFIG: 'AUTO' }} req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, 'AUTO') def test_create_server_override_manual(self): req = fakes.HTTPRequest.blank('/%s/servers' % self.project_id) req.method = 'POST' req.content_type = 'application/json' body = {'server': { 'name': 'server_test', 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175', 'flavorRef': '1', API_DISK_CONFIG: 'MANUAL' }} req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, 'MANUAL') def test_create_server_detect_from_image(self): """If user doesn't pass in diskConfig for server, use image metadata to specify AUTO or MANUAL. 
""" req = fakes.HTTPRequest.blank('/%s/servers' % self.project_id) req.method = 'POST' req.content_type = 'application/json' body = {'server': { 'name': 'server_test', 'imageRef': 'a440c04b-79fa-479c-bed1-0b816eaec379', 'flavorRef': '1', }} req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, 'MANUAL') req = fakes.HTTPRequest.blank('/%s/servers' % self.project_id) req.method = 'POST' req.content_type = 'application/json' body = {'server': { 'name': 'server_test', 'imageRef': '70a599e0-31e7-49b7-b260-868f441e862b', 'flavorRef': '1', }} req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, 'AUTO') def test_create_server_detect_from_image_disabled_goes_to_manual(self): req = fakes.HTTPRequest.blank('/%s/servers' % self.project_id) req.method = 'POST' req.content_type = 'application/json' body = {'server': { 'name': 'server_test', 'imageRef': '88580842-f50a-11e2-8d3a-f23c91aec05e', 'flavorRef': '1', }} req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, 'MANUAL') def test_create_server_errors_when_disabled_and_auto(self): req = fakes.HTTPRequest.blank('/%s/servers' % self.project_id) req.method = 'POST' req.content_type = 'application/json' body = {'server': { 'name': 'server_test', 'imageRef': '88580842-f50a-11e2-8d3a-f23c91aec05e', 'flavorRef': '1', API_DISK_CONFIG: 'AUTO' }} req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) self.assertEqual(res.status_int, 400) def test_create_server_when_disabled_and_manual(self): req = fakes.HTTPRequest.blank('/%s/servers' % self.project_id) req.method = 'POST' req.content_type = 'application/json' body = {'server': { 'name': 'server_test', 'imageRef': '88580842-f50a-11e2-8d3a-f23c91aec05e', 'flavorRef': '1', API_DISK_CONFIG: 'MANUAL' }} req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, 'MANUAL') @mock.patch('nova.api.openstack.common.get_instance') def _test_update_server_disk_config(self, uuid, disk_config, get_instance_mock): req = fakes.HTTPRequest.blank( '/%s/servers/%s' % (self.project_id, uuid)) req.method = 'PUT' req.content_type = 'application/json' body = {'server': {API_DISK_CONFIG: disk_config}} req.body = jsonutils.dump_as_bytes(body) auto_disk_config = (disk_config == 'AUTO') instance = fakes.stub_instance_obj( req.environ['nova.context'], project_id=req.environ['nova.context'].project_id, user_id=req.environ['nova.context'].user_id, auto_disk_config=auto_disk_config) get_instance_mock.return_value = instance res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, disk_config) def test_update_server_override_auto(self): self._test_update_server_disk_config(AUTO_INSTANCE_UUID, 'AUTO') def test_update_server_override_manual(self): self._test_update_server_disk_config(MANUAL_INSTANCE_UUID, 'MANUAL') def test_update_server_invalid_disk_config(self): # Return BadRequest if user passes an invalid diskConfig value. 
req = fakes.HTTPRequest.blank( '/%s/servers/%s' % (self.project_id, MANUAL_INSTANCE_UUID)) req.method = 'PUT' req.content_type = 'application/json' body = {'server': {API_DISK_CONFIG: 'server_test'}} req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) self.assertEqual(res.status_int, 400) expected_msg = self._get_expected_msg_for_invalid_disk_config() expected_msg = expected_msg.format(API_DISK_CONFIG, 'server_test') self.assertJsonEqual(jsonutils.loads(expected_msg), jsonutils.loads(res.body)) @mock.patch('nova.api.openstack.common.get_instance') def _test_rebuild_server_disk_config(self, uuid, disk_config, get_instance_mock): req = fakes.HTTPRequest.blank( '/%s/servers/%s/action' % (self.project_id, uuid)) req.method = 'POST' req.content_type = 'application/json' auto_disk_config = (disk_config == 'AUTO') instance = fakes.stub_instance_obj( req.environ['nova.context'], project_id=req.environ['nova.context'].project_id, user_id=req.environ['nova.context'].user_id, auto_disk_config=auto_disk_config) get_instance_mock.return_value = instance body = {"rebuild": { 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175', API_DISK_CONFIG: disk_config }} req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, disk_config) def test_rebuild_server_override_auto(self): self._test_rebuild_server_disk_config(AUTO_INSTANCE_UUID, 'AUTO') def test_rebuild_server_override_manual(self): self._test_rebuild_server_disk_config(MANUAL_INSTANCE_UUID, 'MANUAL') def test_create_server_with_auto_disk_config(self): req = fakes.HTTPRequest.blank('/%s/servers' % self.project_id) req.method = 'POST' req.content_type = 'application/json' body = {'server': { 'name': 'server_test', 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175', 'flavorRef': '1', API_DISK_CONFIG: 'AUTO' }} old_create = compute_api.API.create def create(*args, **kwargs): self.assertIn('auto_disk_config', kwargs) self.assertTrue(kwargs['auto_disk_config']) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, 'AUTO') @mock.patch('nova.api.openstack.common.get_instance') def test_rebuild_server_with_auto_disk_config(self, get_instance_mock): req = fakes.HTTPRequest.blank( '/%s/servers/%s/action' % (self.project_id, AUTO_INSTANCE_UUID)) req.method = 'POST' req.content_type = 'application/json' instance = fakes.stub_instance_obj( req.environ['nova.context'], project_id=req.environ['nova.context'].project_id, user_id=req.environ['nova.context'].user_id, auto_disk_config=True) get_instance_mock.return_value = instance body = {"rebuild": { 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175', API_DISK_CONFIG: 'AUTO' }} def rebuild(*args, **kwargs): self.assertIn('auto_disk_config', kwargs) self.assertTrue(kwargs['auto_disk_config']) self.stub_out('nova.compute.api.API.rebuild', rebuild) req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, 'AUTO') def test_resize_server_with_auto_disk_config(self): req = fakes.HTTPRequest.blank( '/%s/servers/%s/action' % (self.project_id, AUTO_INSTANCE_UUID)) req.method = 'POST' req.content_type = 'application/json' body = {"resize": { "flavorRef": "3", API_DISK_CONFIG: 'AUTO' }} def resize(*args, **kwargs): 
self.assertIn('auto_disk_config', kwargs) self.assertTrue(kwargs['auto_disk_config']) self.stub_out('nova.compute.api.API.resize', resize) req.body = jsonutils.dump_as_bytes(body) req.get_response(self.app) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_evacuate.py0000664000175000017500000004064000000000000025300 0ustar00zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids import six import testtools import webob from nova.api.openstack.compute import evacuate as evacuate_v21 from nova.compute import api as compute_api from nova.compute import vm_states import nova.conf from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance CONF = nova.conf.CONF def fake_compute_api(*args, **kwargs): return True def fake_compute_api_get(self, context, instance_id, **kwargs): # BAD_UUID is something that does not exist if instance_id == 'BAD_UUID': raise exception.InstanceNotFound(instance_id=instance_id) else: return fake_instance.fake_instance_obj(context, id=1, uuid=instance_id, task_state=None, host='host1', vm_state=vm_states.ACTIVE) def fake_service_get_by_compute_host(self, context, host): if host == 'bad-host': raise exception.ComputeHostNotFound(host=host) elif host == 'unmapped-host': raise exception.HostMappingNotFound(name=host) else: return { 'host_name': host, 'service': 'compute', 'zone': 'nova' } class EvacuateTestV21(test.NoDBTestCase): validation_error = exception.ValidationError _methods = ('resize', 'evacuate') def setUp(self): super(EvacuateTestV21, self).setUp() self.stub_out('nova.compute.api.API.get', fake_compute_api_get) self.stub_out('nova.compute.api.HostAPI.service_get_by_compute_host', fake_service_get_by_compute_host) self.mock_list_port = self.useFixture( fixtures.MockPatch('nova.network.neutron.API.list_ports')).mock self.mock_list_port.return_value = {'ports': []} self.UUID = uuids.fake for _method in self._methods: self.stub_out('nova.compute.api.API.%s' % _method, fake_compute_api) self._set_up_controller() self.admin_req = fakes.HTTPRequest.blank('', use_admin_context=True) self.req = fakes.HTTPRequest.blank('') def _set_up_controller(self): self.controller = evacuate_v21.EvacuateController() self.controller_no_ext = self.controller def _get_evacuate_response(self, json_load, uuid=None): base_json_load = {'evacuate': json_load} response = self.controller._evacuate(self.admin_req, uuid or self.UUID, body=base_json_load) return response def _check_evacuate_failure(self, exception, body, uuid=None, controller=None): controller = controller or self.controller body = {'evacuate': body} return self.assertRaises(exception, controller._evacuate, self.admin_req, uuid or self.UUID, body=body) def test_evacuate_with_valid_instance(self): admin_pass = 'MyNewPass' res = 
self._get_evacuate_response({'host': 'my-host', 'onSharedStorage': 'False', 'adminPass': admin_pass}) self.assertEqual(admin_pass, res['adminPass']) def test_evacuate_with_invalid_instance(self): self._check_evacuate_failure(webob.exc.HTTPNotFound, {'host': 'my-host', 'onSharedStorage': 'False', 'adminPass': 'MyNewPass'}, uuid='BAD_UUID') def test_evacuate_with_active_service(self): def fake_evacuate(*args, **kwargs): raise exception.ComputeServiceInUse("Service still in use") self.stub_out('nova.compute.api.API.evacuate', fake_evacuate) self._check_evacuate_failure(webob.exc.HTTPBadRequest, {'host': 'my-host', 'onSharedStorage': 'False', 'adminPass': 'MyNewPass'}) def test_evacuate_instance_with_no_target(self): admin_pass = 'MyNewPass' res = self._get_evacuate_response({'onSharedStorage': 'False', 'adminPass': admin_pass}) self.assertEqual(admin_pass, res['adminPass']) def test_evacuate_instance_without_on_shared_storage(self): self._check_evacuate_failure(self.validation_error, {'host': 'my-host', 'adminPass': 'MyNewPass'}) def test_evacuate_instance_with_invalid_characters_host(self): host = 'abc!#' self._check_evacuate_failure(self.validation_error, {'host': host, 'onSharedStorage': 'False', 'adminPass': 'MyNewPass'}) def test_evacuate_instance_with_too_long_host(self): host = 'a' * 256 self._check_evacuate_failure(self.validation_error, {'host': host, 'onSharedStorage': 'False', 'adminPass': 'MyNewPass'}) def test_evacuate_instance_with_invalid_on_shared_storage(self): self._check_evacuate_failure(self.validation_error, {'host': 'my-host', 'onSharedStorage': 'foo', 'adminPass': 'MyNewPass'}) def test_evacuate_instance_with_bad_target(self): self._check_evacuate_failure(webob.exc.HTTPNotFound, {'host': 'bad-host', 'onSharedStorage': 'False', 'adminPass': 'MyNewPass'}) def test_evacuate_instance_with_unmapped_target(self): self._check_evacuate_failure(webob.exc.HTTPNotFound, {'host': 'unmapped-host', 'onSharedStorage': 'False', 'adminPass': 'MyNewPass'}) def test_evacuate_instance_with_target(self): admin_pass = 'MyNewPass' res = self._get_evacuate_response({'host': 'my-host', 'onSharedStorage': 'False', 'adminPass': admin_pass}) self.assertEqual(admin_pass, res['adminPass']) @mock.patch('nova.objects.Instance.save') def test_evacuate_shared_and_pass(self, mock_save): self._check_evacuate_failure(webob.exc.HTTPBadRequest, {'host': 'bad-host', 'onSharedStorage': 'True', 'adminPass': 'MyNewPass'}) @mock.patch('nova.objects.Instance.save') def test_evacuate_not_shared_pass_generated(self, mock_save): res = self._get_evacuate_response({'host': 'my-host', 'onSharedStorage': 'False'}) self.assertEqual(CONF.password_length, len(res['adminPass'])) @mock.patch('nova.objects.Instance.save') def test_evacuate_shared(self, mock_save): self._get_evacuate_response({'host': 'my-host', 'onSharedStorage': 'True'}) def test_evacuate_to_same_host(self): self._check_evacuate_failure(webob.exc.HTTPBadRequest, {'host': 'host1', 'onSharedStorage': 'False', 'adminPass': 'MyNewPass'}) def test_evacuate_instance_with_empty_host(self): self._check_evacuate_failure(self.validation_error, {'host': '', 'onSharedStorage': 'False', 'adminPass': 'MyNewPass'}, controller=self.controller_no_ext) @mock.patch('nova.objects.Instance.save') def test_evacuate_instance_with_underscore_in_hostname(self, mock_save): admin_pass = 'MyNewPass' # NOTE: The hostname grammar in RFC952 does not allow for # underscores in hostnames. However, we should test that it # is supported because it sometimes occurs in real systems. 
res = self._get_evacuate_response({'host': 'underscore_hostname', 'onSharedStorage': 'False', 'adminPass': admin_pass}) self.assertEqual(admin_pass, res['adminPass']) def test_evacuate_disable_password_return(self): self._test_evacuate_enable_instance_password_conf(enable_pass=False) def test_evacuate_enable_password_return(self): self._test_evacuate_enable_instance_password_conf(enable_pass=True) @mock.patch('nova.objects.Instance.save') def _test_evacuate_enable_instance_password_conf(self, mock_save, enable_pass): self.flags(enable_instance_password=enable_pass, group='api') res = self._get_evacuate_response({'host': 'underscore_hostname', 'onSharedStorage': 'False'}) if enable_pass: self.assertIn('adminPass', res) else: self.assertIsNone(res) class EvacuateTestV214(EvacuateTestV21): def setUp(self): super(EvacuateTestV214, self).setUp() self.admin_req = fakes.HTTPRequest.blank('', use_admin_context=True, version='2.14') self.req = fakes.HTTPRequest.blank('', version='2.14') def _get_evacuate_response(self, json_load, uuid=None): json_load.pop('onSharedStorage', None) base_json_load = {'evacuate': json_load} response = self.controller._evacuate(self.admin_req, uuid or self.UUID, body=base_json_load) return response def _check_evacuate_failure(self, exception, body, uuid=None, controller=None): controller = controller or self.controller body.pop('onSharedStorage', None) body = {'evacuate': body} return self.assertRaises(exception, controller._evacuate, self.admin_req, uuid or self.UUID, body=body) @mock.patch.object(compute_api.API, 'evacuate') def test_evacuate_instance(self, mock_evacuate): self._get_evacuate_response({}) admin_pass = mock_evacuate.call_args_list[0][0][4] on_shared_storage = mock_evacuate.call_args_list[0][0][3] self.assertEqual(CONF.password_length, len(admin_pass)) self.assertIsNone(on_shared_storage) def test_evacuate_with_valid_instance(self): admin_pass = 'MyNewPass' res = self._get_evacuate_response({'host': 'my-host', 'adminPass': admin_pass}) self.assertIsNone(res) @testtools.skip('Password is not returned from Microversion 2.14') def test_evacuate_disable_password_return(self): pass @testtools.skip('Password is not returned from Microversion 2.14') def test_evacuate_enable_password_return(self): pass @testtools.skip('onSharedStorage was removed from Microversion 2.14') def test_evacuate_instance_with_invalid_on_shared_storage(self): pass @testtools.skip('onSharedStorage was removed from Microversion 2.14') @mock.patch('nova.objects.Instance.save') def test_evacuate_not_shared_pass_generated(self, mock_save): pass @mock.patch.object(compute_api.API, 'evacuate') @mock.patch('nova.objects.Instance.save') def test_evacuate_pass_generated(self, mock_save, mock_evacuate): self._get_evacuate_response({'host': 'my-host'}) self.assertEqual(CONF.password_length, len(mock_evacuate.call_args_list[0][0][4])) def test_evacuate_instance_without_on_shared_storage(self): self._get_evacuate_response({'host': 'my-host', 'adminPass': 'MyNewPass'}) def test_evacuate_instance_with_no_target(self): admin_pass = 'MyNewPass' with mock.patch.object(compute_api.API, 'evacuate') as mock_evacuate: self._get_evacuate_response({'adminPass': admin_pass}) self.assertEqual(admin_pass, mock_evacuate.call_args_list[0][0][4]) @testtools.skip('onSharedStorage was removed from Microversion 2.14') @mock.patch('nova.objects.Instance.save') def test_evacuate_shared_and_pass(self, mock_save): pass @testtools.skip('from Microversion 2.14 it is covered with ' 'test_evacuate_pass_generated') def 
test_evacuate_instance_with_target(self): pass @mock.patch('nova.objects.Instance.save') def test_evacuate_instance_with_underscore_in_hostname(self, mock_save): # NOTE: The hostname grammar in RFC952 does not allow for # underscores in hostnames. However, we should test that it # is supported because it sometimes occurs in real systems. self._get_evacuate_response({'host': 'underscore_hostname'}) class EvacuateTestV229(EvacuateTestV214): def setUp(self): super(EvacuateTestV229, self).setUp() self.admin_req = fakes.HTTPRequest.blank('', use_admin_context=True, version='2.29') self.req = fakes.HTTPRequest.blank('', version='2.29') @mock.patch.object(compute_api.API, 'evacuate') def test_evacuate_instance(self, mock_evacuate): self._get_evacuate_response({}) admin_pass = mock_evacuate.call_args_list[0][0][4] on_shared_storage = mock_evacuate.call_args_list[0][0][3] force = mock_evacuate.call_args_list[0][0][5] self.assertEqual(CONF.password_length, len(admin_pass)) self.assertIsNone(on_shared_storage) self.assertFalse(force) def test_evacuate_with_valid_instance(self): admin_pass = 'MyNewPass' res = self._get_evacuate_response({'host': 'my-host', 'adminPass': admin_pass, 'force': 'false'}) self.assertIsNone(res) @mock.patch.object(compute_api.API, 'evacuate') def test_evacuate_instance_with_forced_host(self, mock_evacuate): self._get_evacuate_response({'host': 'my-host', 'force': 'true'}) force = mock_evacuate.call_args_list[0][0][5] self.assertTrue(force) def test_forced_evacuate_with_no_host_provided(self): self._check_evacuate_failure(webob.exc.HTTPBadRequest, {'force': 'true'}) class EvacuateTestV268(EvacuateTestV229): def setUp(self): super(EvacuateTestV268, self).setUp() self.admin_req = fakes.HTTPRequest.blank('', use_admin_context=True, version='2.68') self.req = fakes.HTTPRequest.blank('', version='2.68') def test_evacuate_with_valid_instance(self): admin_pass = 'MyNewPass' res = self._get_evacuate_response({'host': 'my-host', 'adminPass': admin_pass}) self.assertIsNone(res) @mock.patch.object(compute_api.API, 'evacuate') def test_evacuate_instance_with_forced_host(self, mock_evacuate): ex = self._check_evacuate_failure(self.validation_error, {'host': 'my-host', 'force': 'true'}) self.assertIn('force', six.text_type(ex)) def test_forced_evacuate_with_no_host_provided(self): # not applicable for v2.68, which removed the 'force' parameter pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_extended_hypervisors.py0000664000175000017500000001156400000000000027763 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
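# NOTE: orientation comment (not part of the upstream module): the tests
# below exercise the "extended hypervisors" view of the os-hypervisors API.
# The expected detail/show responses drop the bare 'service_id', 'host' and
# 'uuid' keys from the fixture data and instead carry 'state', 'status' and
# a nested 'service' dict for each hypervisor.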
import copy

import mock

from nova.api.openstack.compute import hypervisors \
        as hypervisors_v21
from nova import exception
from nova import objects
from nova import test
from nova.tests.unit.api.openstack.compute import test_hypervisors
from nova.tests.unit.api.openstack import fakes


def fake_compute_node_get(context, compute_id):
    for hyper in test_hypervisors.TEST_HYPERS_OBJ:
        if hyper.id == int(compute_id):
            return hyper
    raise exception.ComputeHostNotFound(host=compute_id)


def fake_compute_node_get_all(context, limit=None, marker=None):
    return test_hypervisors.TEST_HYPERS_OBJ


def fake_service_get_by_compute_host(context, host):
    for service in test_hypervisors.TEST_SERVICES:
        if service.host == host:
            return service


class ExtendedHypervisorsTestV21(test.NoDBTestCase):
    DETAIL_HYPERS_DICTS = copy.deepcopy(test_hypervisors.TEST_HYPERS)
    del DETAIL_HYPERS_DICTS[0]['service_id']
    del DETAIL_HYPERS_DICTS[1]['service_id']
    del DETAIL_HYPERS_DICTS[0]['host']
    del DETAIL_HYPERS_DICTS[1]['host']
    del DETAIL_HYPERS_DICTS[0]['uuid']
    del DETAIL_HYPERS_DICTS[1]['uuid']
    DETAIL_HYPERS_DICTS[0].update({'state': 'up',
                                   'status': 'enabled',
                                   'service': dict(id=1, host='compute1',
                                                   disabled_reason=None)})
    DETAIL_HYPERS_DICTS[1].update({'state': 'up',
                                   'status': 'enabled',
                                   'service': dict(id=2, host='compute2',
                                                   disabled_reason=None)})

    def _set_up_controller(self):
        self.controller = hypervisors_v21.HypervisorsController()

    def _get_request(self):
        return fakes.HTTPRequest.blank(
            '/v2/%s/os-hypervisors/detail' % fakes.FAKE_PROJECT_ID,
            use_admin_context=True)

    def setUp(self):
        super(ExtendedHypervisorsTestV21, self).setUp()
        self._set_up_controller()

    def test_view_hypervisor_detail_noservers(self):
        with mock.patch.object(self.controller.servicegroup_api,
                               'service_is_up',
                               return_value=True) as mock_service_is_up:
            req = self._get_request()
            result = self.controller._view_hypervisor(
                test_hypervisors.TEST_HYPERS_OBJ[0],
                test_hypervisors.TEST_SERVICES[0], True, req)

            self.assertEqual(self.DETAIL_HYPERS_DICTS[0], result)
            self.assertTrue(mock_service_is_up.called)

    @mock.patch.object(objects.Service, 'get_by_compute_host',
                       side_effect=fake_service_get_by_compute_host)
    def test_detail(self, mock_get_by_host):
        with test.nested(
            mock.patch.object(self.controller.host_api,
                              'compute_node_get_all',
                              side_effect=fake_compute_node_get_all),
            mock.patch.object(self.controller.servicegroup_api,
                              'service_is_up', return_value=True),
        ) as (mock_node_get_all, mock_service_is_up):
            req = self._get_request()
            result = self.controller.detail(req)

            self.assertEqual(dict(hypervisors=self.DETAIL_HYPERS_DICTS),
                             result)
            self.assertTrue(mock_service_is_up.called)
            self.assertTrue(mock_get_by_host.called)
            self.assertTrue(mock_node_get_all.called)

    @mock.patch.object(objects.Service, 'get_by_compute_host',
                       side_effect=fake_service_get_by_compute_host)
    def test_show_withid(self, mock_get_by_host):
        with test.nested(
            mock.patch.object(self.controller.host_api, 'compute_node_get',
                              side_effect=fake_compute_node_get),
            mock.patch.object(self.controller.servicegroup_api,
                              'service_is_up', return_value=True),
        ) as (mock_node_get, mock_service_is_up):
            req = self._get_request()
            result = self.controller.show(req, '1')

            self.assertEqual(dict(hypervisor=self.DETAIL_HYPERS_DICTS[0]),
                             result)
            self.assertTrue(mock_service_is_up.called)
            self.assertTrue(mock_get_by_host.called)
            self.assertTrue(mock_node_get.called)
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0
nova-21.2.4/nova/tests/unit/api/openstack/compute/test_extended_ips.py0000664000175000017500000001117400000000000026156 0ustar00zuulzuul00000000000000# Copyright 2013 Nebula, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils import six from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes UUID1 = '00000000-0000-0000-0000-000000000001' UUID2 = '00000000-0000-0000-0000-000000000002' UUID3 = '00000000-0000-0000-0000-000000000003' NW_CACHE = [ { 'address': 'aa:aa:aa:aa:aa:aa', 'id': 1, 'network': { 'bridge': 'br0', 'id': 1, 'label': 'private', 'subnets': [ { 'cidr': '192.168.1.0/24', 'ips': [ { 'address': '192.168.1.100', 'type': 'fixed', 'floating_ips': [ {'address': '5.0.0.1', 'type': 'floating'}, ], }, ], }, ] } }, { 'address': 'bb:bb:bb:bb:bb:bb', 'id': 2, 'network': { 'bridge': 'br1', 'id': 2, 'label': 'public', 'subnets': [ { 'cidr': '10.0.0.0/24', 'ips': [ { 'address': '10.0.0.100', 'type': 'fixed', 'floating_ips': [ {'address': '5.0.0.2', 'type': 'floating'}, ], } ], }, ] } } ] ALL_IPS = [] for cache in NW_CACHE: for subnet in cache['network']['subnets']: for fixed in subnet['ips']: sanitized = dict(fixed) sanitized.pop('floating_ips') ALL_IPS.append(sanitized) for floating in fixed['floating_ips']: ALL_IPS.append(floating) ALL_IPS.sort(key=lambda x: str(x)) def fake_compute_get(*args, **kwargs): inst = fakes.stub_instance_obj(None, 1, uuid=UUID3, nw_cache=NW_CACHE) return inst def fake_compute_get_all(*args, **kwargs): inst_list = [ fakes.stub_instance_obj(None, 1, uuid=UUID1, nw_cache=NW_CACHE), fakes.stub_instance_obj(None, 2, uuid=UUID2, nw_cache=NW_CACHE), ] return objects.InstanceList(objects=inst_list) class ExtendedIpsTestV21(test.TestCase): content_type = 'application/json' prefix = 'OS-EXT-IPS:' def setUp(self): super(ExtendedIpsTestV21, self).setUp() fakes.stub_out_nw_api(self) fakes.stub_out_secgroup_api(self) self.stub_out('nova.compute.api.API.get', fake_compute_get) self.stub_out('nova.compute.api.API.get_all', fake_compute_get_all) def _make_request(self, url): req = fakes.HTTPRequest.blank(url) req.headers['Accept'] = self.content_type res = req.get_response(fakes.wsgi_app_v21()) return res def _get_server(self, body): return jsonutils.loads(body).get('server') def _get_servers(self, body): return jsonutils.loads(body).get('servers') def _get_ips(self, server): for network in six.itervalues(server['addresses']): for ip in network: yield ip def assertServerStates(self, server): results = [] for ip in self._get_ips(server): results.append({'address': ip.get('addr'), 'type': ip.get('%stype' % self.prefix)}) self.assertJsonEqual(ALL_IPS, results) def test_show(self): url = '/v2/%s/servers/%s' % (fakes.FAKE_PROJECT_ID, UUID3) res = self._make_request(url) self.assertEqual(res.status_int, 200) self.assertServerStates(self._get_server(res.body)) def test_detail(self): url = '/v2/%s/servers/detail' % fakes.FAKE_PROJECT_ID res = self._make_request(url) self.assertEqual(res.status_int, 
200) for i, server in enumerate(self._get_servers(res.body)): self.assertServerStates(server) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_extended_ips_mac.py0000664000175000017500000001163300000000000026776 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils import six from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes UUID1 = '00000000-0000-0000-0000-000000000001' UUID2 = '00000000-0000-0000-0000-000000000002' UUID3 = '00000000-0000-0000-0000-000000000003' NW_CACHE = [ { 'address': 'aa:aa:aa:aa:aa:aa', 'id': 1, 'network': { 'bridge': 'br0', 'id': 1, 'label': 'private', 'subnets': [ { 'cidr': '192.168.1.0/24', 'ips': [ { 'address': '192.168.1.100', 'type': 'fixed', 'floating_ips': [ {'address': '5.0.0.1', 'type': 'floating'}, ], }, ], }, ] } }, { 'address': 'bb:bb:bb:bb:bb:bb', 'id': 2, 'network': { 'bridge': 'br1', 'id': 2, 'label': 'public', 'subnets': [ { 'cidr': '10.0.0.0/24', 'ips': [ { 'address': '10.0.0.100', 'type': 'fixed', 'floating_ips': [ {'address': '5.0.0.2', 'type': 'floating'}, ], } ], }, ] } } ] ALL_IPS = [] for cache in NW_CACHE: for subnet in cache['network']['subnets']: for fixed in subnet['ips']: sanitized = dict(fixed) sanitized['mac_address'] = cache['address'] sanitized.pop('floating_ips') sanitized.pop('type') ALL_IPS.append(sanitized) for floating in fixed['floating_ips']: sanitized = dict(floating) sanitized['mac_address'] = cache['address'] sanitized.pop('type') ALL_IPS.append(sanitized) ALL_IPS.sort(key=lambda x: '%s-%s' % (x['address'], x['mac_address'])) def fake_compute_get(*args, **kwargs): inst = fakes.stub_instance_obj(None, 1, uuid=UUID3, nw_cache=NW_CACHE) return inst def fake_compute_get_all(*args, **kwargs): inst_list = [ fakes.stub_instance_obj(None, 1, uuid=UUID1, nw_cache=NW_CACHE), fakes.stub_instance_obj(None, 2, uuid=UUID2, nw_cache=NW_CACHE), ] return objects.InstanceList(objects=inst_list) class ExtendedIpsMacTestV21(test.TestCase): content_type = 'application/json' prefix = 'OS-EXT-IPS-MAC:' def setUp(self): super(ExtendedIpsMacTestV21, self).setUp() fakes.stub_out_nw_api(self) fakes.stub_out_secgroup_api(self) self.stub_out('nova.compute.api.API.get', fake_compute_get) self.stub_out('nova.compute.api.API.get_all', fake_compute_get_all) def _make_request(self, url): req = fakes.HTTPRequest.blank(url) req.headers['Accept'] = self.content_type res = req.get_response(fakes.wsgi_app_v21()) return res def _get_server(self, body): return jsonutils.loads(body).get('server') def _get_servers(self, body): return jsonutils.loads(body).get('servers') def _get_ips(self, server): for network in six.itervalues(server['addresses']): for ip in network: yield ip def assertServerStates(self, server): results = [] for ip in self._get_ips(server): results.append({'address': ip.get('addr'), 
'mac_address': ip.get('%smac_addr' % self.prefix)}) self.assertJsonEqual(ALL_IPS, results) def test_show(self): url = '/v2/%s/servers/%s' % (fakes.FAKE_PROJECT_ID, UUID3) res = self._make_request(url) self.assertEqual(200, res.status_int) self.assertServerStates(self._get_server(res.body)) def test_detail(self): url = '/v2/%s/servers/detail' % fakes.FAKE_PROJECT_ID res = self._make_request(url) self.assertEqual(200, res.status_int) for _i, server in enumerate(self._get_servers(res.body)): self.assertServerStates(server) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_extension_info.py0000664000175000017500000000434000000000000026527 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import webob from nova.api.openstack.compute import extension_info from nova import exception from nova import policy from nova import test from nova.tests.unit.api.openstack import fakes class ExtensionInfoV21Test(test.NoDBTestCase): def setUp(self): super(ExtensionInfoV21Test, self).setUp() self.controller = extension_info.ExtensionInfoController() patcher = mock.patch.object(policy, 'authorize', return_value=True) patcher.start() self.addCleanup(patcher.stop) def test_extension_info_show_servers_not_present(self): req = fakes.HTTPRequest.blank('/extensions/servers') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, 'servers') class ExtensionInfoPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(ExtensionInfoPolicyEnforcementV21, self).setUp() self.controller = extension_info.ExtensionInfoController() self.req = fakes.HTTPRequest.blank('') def _test_extension_policy_failed(self, action, *args): rule_name = "os_compute_api:extensions" self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, getattr(self.controller, action), self.req, *args) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_extension_index_policy_failed(self): self._test_extension_policy_failed('index') def test_extension_show_policy_failed(self): self._test_extension_policy_failed('show', 1) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_flavor_access.py0000664000175000017500000003754200000000000026324 0ustar00zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import datetime import mock from webob import exc from nova.api.openstack import api_version_request as api_version from nova.api.openstack.compute import flavor_access \ as flavor_access_v21 from nova.api.openstack.compute import flavors as flavors_api from nova import context from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes def generate_flavor(flavorid, ispublic): return { 'id': flavorid, 'flavorid': str(flavorid), 'root_gb': 1, 'ephemeral_gb': 1, 'name': u'test', 'created_at': datetime.datetime(2012, 1, 1, 1, 1, 1, 1), 'updated_at': None, 'memory_mb': 512, 'vcpus': 1, 'swap': 512, 'rxtx_factor': 1.0, 'disabled': False, 'extra_specs': {}, 'vcpu_weight': None, 'is_public': bool(ispublic), 'description': None } INSTANCE_TYPES = { '0': generate_flavor(0, True), '1': generate_flavor(1, True), '2': generate_flavor(2, False), '3': generate_flavor(3, False)} ACCESS_LIST = [{'flavor_id': '2', 'project_id': 'proj2'}, {'flavor_id': '2', 'project_id': 'proj3'}, {'flavor_id': '3', 'project_id': 'proj3'}] def fake_get_flavor_access_by_flavor_id(context, flavorid): res = [] for access in ACCESS_LIST: if access['flavor_id'] == flavorid: res.append(access['project_id']) return res def fake_get_flavor_by_flavor_id(context, flavorid): return INSTANCE_TYPES[flavorid] def _has_flavor_access(flavorid, projectid): for access in ACCESS_LIST: if (access['flavor_id'] == flavorid and access['project_id'] == projectid): return True return False def fake_get_all_flavors_sorted_list(context, inactive=False, filters=None, sort_key='flavorid', sort_dir='asc', limit=None, marker=None): if filters is None or filters['is_public'] is None: return sorted(INSTANCE_TYPES.values(), key=lambda item: item[sort_key]) res = {} for k, v in INSTANCE_TYPES.items(): if filters['is_public'] and _has_flavor_access(k, context.project_id): res.update({k: v}) continue if v['is_public'] == filters['is_public']: res.update({k: v}) res = sorted(res.values(), key=lambda item: item[sort_key]) return res class FakeRequest(object): environ = {"nova.context": context.get_admin_context()} api_version_request = api_version.APIVersionRequest("2.1") def is_legacy_v2(self): return False class FakeResponse(object): obj = {'flavor': {'id': '0'}, 'flavors': [ {'id': '0'}, {'id': '2'}] } def attach(self, **kwargs): pass def fake_get_flavor_projects_from_db(context, flavorid): raise exception.FlavorNotFound(flavor_id=flavorid) class FlavorAccessTestV21(test.NoDBTestCase): api_version = "2.1" FlavorAccessController = flavor_access_v21.FlavorAccessController FlavorActionController = flavor_access_v21.FlavorActionController _prefix = "/v2/%s" % fakes.FAKE_PROJECT_ID validation_ex = exception.ValidationError def setUp(self): super(FlavorAccessTestV21, self).setUp() self.flavor_controller = flavors_api.FlavorsController() # We need to stub out verify_project_id so that it doesn't # generate an EndpointNotFound exception and result in a # server error. 
self.stub_out('nova.api.openstack.identity.verify_project_id', lambda ctx, project_id: True) self.req = FakeRequest() self.req.environ = {"nova.context": context.RequestContext( 'fake_user', fakes.FAKE_PROJECT_ID)} self.stub_out('nova.objects.Flavor._flavor_get_by_flavor_id_from_db', fake_get_flavor_by_flavor_id) self.stub_out('nova.objects.flavor._flavor_get_all_from_db', fake_get_all_flavors_sorted_list) self.stub_out('nova.objects.flavor._get_projects_from_db', fake_get_flavor_access_by_flavor_id) self.flavor_access_controller = self.FlavorAccessController() self.flavor_action_controller = self.FlavorActionController() def _verify_flavor_list(self, result, expected): # result already sorted by flavor_id self.assertEqual(len(result), len(expected)) for d1, d2 in zip(result, expected): self.assertEqual(d1['id'], d2['id']) @mock.patch('nova.objects.Flavor._flavor_get_by_flavor_id_from_db', side_effect=exception.FlavorNotFound(flavor_id='foo')) def test_list_flavor_access_public(self, mock_api_get): # query os-flavor-access on public flavor should return 404 self.assertRaises(exc.HTTPNotFound, self.flavor_access_controller.index, self.req, '1') def test_list_flavor_access_private(self): expected = {'flavor_access': [ {'flavor_id': '2', 'tenant_id': 'proj2'}, {'flavor_id': '2', 'tenant_id': 'proj3'}]} result = self.flavor_access_controller.index(self.req, '2') self.assertEqual(result, expected) def test_list_flavor_with_admin_default_proj1(self): expected = {'flavors': [{'id': '0'}, {'id': '1'}]} req = fakes.HTTPRequest.blank(self._prefix + '/flavors', use_admin_context=True) req.environ['nova.context'].project_id = 'proj1' result = self.flavor_controller.index(req) self._verify_flavor_list(result['flavors'], expected['flavors']) def test_list_flavor_with_admin_default_proj2(self): expected = {'flavors': [{'id': '0'}, {'id': '1'}, {'id': '2'}]} req = fakes.HTTPRequest.blank(self._prefix + '/flavors', use_admin_context=True) req.environ['nova.context'].project_id = 'proj2' result = self.flavor_controller.index(req) self._verify_flavor_list(result['flavors'], expected['flavors']) def test_list_flavor_with_admin_ispublic_true(self): expected = {'flavors': [{'id': '0'}, {'id': '1'}]} url = self._prefix + '/flavors?is_public=true' req = fakes.HTTPRequest.blank(url, use_admin_context=True) result = self.flavor_controller.index(req) self._verify_flavor_list(result['flavors'], expected['flavors']) def test_list_flavor_with_admin_ispublic_false(self): expected = {'flavors': [{'id': '2'}, {'id': '3'}]} url = self._prefix + '/flavors?is_public=false' req = fakes.HTTPRequest.blank(url, use_admin_context=True) result = self.flavor_controller.index(req) self._verify_flavor_list(result['flavors'], expected['flavors']) def test_list_flavor_with_admin_ispublic_false_proj2(self): expected = {'flavors': [{'id': '2'}, {'id': '3'}]} url = self._prefix + '/flavors?is_public=false' req = fakes.HTTPRequest.blank(url, use_admin_context=True) req.environ['nova.context'].project_id = 'proj2' result = self.flavor_controller.index(req) self._verify_flavor_list(result['flavors'], expected['flavors']) def test_list_flavor_with_admin_ispublic_none(self): expected = {'flavors': [{'id': '0'}, {'id': '1'}, {'id': '2'}, {'id': '3'}]} url = self._prefix + '/flavors?is_public=none' req = fakes.HTTPRequest.blank(url, use_admin_context=True) result = self.flavor_controller.index(req) self._verify_flavor_list(result['flavors'], expected['flavors']) def test_list_flavor_with_no_admin_default(self): expected = {'flavors': [{'id': 
'0'}, {'id': '1'}]} req = fakes.HTTPRequest.blank(self._prefix + '/flavors', use_admin_context=False) result = self.flavor_controller.index(req) self._verify_flavor_list(result['flavors'], expected['flavors']) def test_list_flavor_with_no_admin_ispublic_true(self): expected = {'flavors': [{'id': '0'}, {'id': '1'}]} url = self._prefix + '/flavors?is_public=true' req = fakes.HTTPRequest.blank(url, use_admin_context=False) result = self.flavor_controller.index(req) self._verify_flavor_list(result['flavors'], expected['flavors']) def test_list_flavor_with_no_admin_ispublic_false(self): expected = {'flavors': [{'id': '0'}, {'id': '1'}]} url = self._prefix + '/flavors?is_public=false' req = fakes.HTTPRequest.blank(url, use_admin_context=False) result = self.flavor_controller.index(req) self._verify_flavor_list(result['flavors'], expected['flavors']) def test_list_flavor_with_no_admin_ispublic_none(self): expected = {'flavors': [{'id': '0'}, {'id': '1'}]} url = self._prefix + '/flavors?is_public=none' req = fakes.HTTPRequest.blank(url, use_admin_context=False) result = self.flavor_controller.index(req) self._verify_flavor_list(result['flavors'], expected['flavors']) def test_add_tenant_access(self): def stub_add_flavor_access(context, flavor_id, projectid): self.assertEqual(3, flavor_id, "flavor_id") self.assertEqual("proj2", projectid, "projectid") self.stub_out('nova.objects.Flavor._flavor_add_project', stub_add_flavor_access) expected = {'flavor_access': [{'flavor_id': '3', 'tenant_id': 'proj3'}]} body = {'addTenantAccess': {'tenant': 'proj2'}} req = fakes.HTTPRequest.blank(self._prefix + '/flavors/2/action', use_admin_context=True) result = self.flavor_action_controller._add_tenant_access( req, '3', body=body) self.assertEqual(result, expected) @mock.patch('nova.objects.Flavor.get_by_flavor_id', side_effect=exception.FlavorNotFound(flavor_id='1')) def test_add_tenant_access_with_flavor_not_found(self, mock_get): body = {'addTenantAccess': {'tenant': 'proj2'}} req = fakes.HTTPRequest.blank(self._prefix + '/flavors/2/action', use_admin_context=True) self.assertRaises(exc.HTTPNotFound, self.flavor_action_controller._add_tenant_access, req, '2', body=body) def test_add_tenant_access_with_no_tenant(self): req = fakes.HTTPRequest.blank(self._prefix + '/flavors/2/action', use_admin_context=True) body = {'addTenantAccess': {'foo': 'proj2'}} self.assertRaises(self.validation_ex, self.flavor_action_controller._add_tenant_access, req, '2', body=body) body = {'addTenantAccess': {'tenant': ''}} self.assertRaises(self.validation_ex, self.flavor_action_controller._add_tenant_access, req, '2', body=body) def test_add_tenant_access_with_already_added_access(self): def stub_add_flavor_access(context, flavorid, projectid): raise exception.FlavorAccessExists(flavor_id=flavorid, project_id=projectid) self.stub_out('nova.objects.Flavor._flavor_add_project', stub_add_flavor_access) body = {'addTenantAccess': {'tenant': 'proj2'}} self.assertRaises(exc.HTTPConflict, self.flavor_action_controller._add_tenant_access, self.req, '3', body=body) def test_remove_tenant_access_with_bad_access(self): def stub_remove_flavor_access(context, flavorid, projectid): raise exception.FlavorAccessNotFound(flavor_id=flavorid, project_id=projectid) self.stub_out('nova.objects.Flavor._flavor_del_project', stub_remove_flavor_access) body = {'removeTenantAccess': {'tenant': 'proj2'}} self.assertRaises(exc.HTTPNotFound, self.flavor_action_controller._remove_tenant_access, self.req, '3', body=body) def 
test_add_tenant_access_is_public(self): body = {'addTenantAccess': {'tenant': 'proj2'}} req = fakes.HTTPRequest.blank(self._prefix + '/flavors/2/action', use_admin_context=True) req.api_version_request = api_version.APIVersionRequest('2.7') self.assertRaises(exc.HTTPConflict, self.flavor_action_controller._add_tenant_access, req, '1', body=body) @mock.patch('nova.objects.Flavor._flavor_get_by_flavor_id_from_db', side_effect=exception.FlavorNotFound(flavor_id='foo')) def test_delete_tenant_access_with_no_tenant(self, mock_api_get): req = fakes.HTTPRequest.blank(self._prefix + '/flavors/2/action', use_admin_context=True) body = {'removeTenantAccess': {'foo': 'proj2'}} self.assertRaises(self.validation_ex, self.flavor_action_controller._remove_tenant_access, req, '2', body=body) body = {'removeTenantAccess': {'tenant': ''}} self.assertRaises(self.validation_ex, self.flavor_action_controller._remove_tenant_access, req, '2', body=body) @mock.patch('nova.api.openstack.identity.verify_project_id', side_effect=exc.HTTPBadRequest( explanation="Project ID proj2 is not a valid project.")) def test_add_tenant_access_with_invalid_tenant(self, mock_verify): """Tests the case that the tenant does not exist in Keystone.""" req = fakes.HTTPRequest.blank(self._prefix + '/flavors/2/action', use_admin_context=True) body = {'addTenantAccess': {'tenant': 'proj2'}} self.assertRaises(exc.HTTPBadRequest, self.flavor_action_controller._add_tenant_access, req, '2', body=body) mock_verify.assert_called_once_with( req.environ['nova.context'], 'proj2') @mock.patch('nova.api.openstack.identity.verify_project_id', side_effect=exc.HTTPBadRequest( explanation="Project ID proj2 is not a valid project.")) def test_remove_tenant_access_with_invalid_tenant(self, mock_verify): """Tests the case that the tenant does not exist in Keystone.""" req = fakes.HTTPRequest.blank(self._prefix + '/flavors/2/action', use_admin_context=True) body = {'removeTenantAccess': {'tenant': 'proj2'}} self.assertRaises(exc.HTTPBadRequest, self.flavor_action_controller._remove_tenant_access, req, '2', body=body) mock_verify.assert_called_once_with( req.environ['nova.context'], 'proj2') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_flavor_disabled.py0000664000175000017500000000425500000000000026625 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
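# NOTE: The tests below exercise the OS-FLV-DISABLED extension: both the
# flavor show and detail responses are expected to carry an
# "OS-FLV-DISABLED:disabled" field mirroring the fake flavors' disabled flag.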
from oslo_serialization import jsonutils from nova import test from nova.tests.unit.api.openstack import fakes class FlavorDisabledTestV21(test.NoDBTestCase): base_url = '/v2/%s/flavors' % fakes.FAKE_PROJECT_ID content_type = 'application/json' prefix = "OS-FLV-DISABLED:" def setUp(self): super(FlavorDisabledTestV21, self).setUp() fakes.stub_out_nw_api(self) fakes.stub_out_flavor_get_all(self) fakes.stub_out_flavor_get_by_flavor_id(self) def _make_request(self, url): req = fakes.HTTPRequest.blank(url) req.headers['Accept'] = self.content_type res = req.get_response(fakes.wsgi_app_v21()) return res def _get_flavor(self, body): return jsonutils.loads(body).get('flavor') def _get_flavors(self, body): return jsonutils.loads(body).get('flavors') def assertFlavorDisabled(self, flavor, disabled): self.assertEqual(flavor.get('%sdisabled' % self.prefix), disabled) def test_show(self): url = self.base_url + '/1' res = self._make_request(url) self.assertEqual(res.status_int, 200) self.assertFlavorDisabled(self._get_flavor(res.body), fakes.FLAVORS['1'].disabled) def test_detail(self): url = self.base_url + '/detail' res = self._make_request(url) self.assertEqual(res.status_int, 200) flavors = self._get_flavors(res.body) self.assertFlavorDisabled(flavors[0], fakes.FLAVORS['1'].disabled) self.assertFlavorDisabled(flavors[1], fakes.FLAVORS['2'].disabled) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_flavor_manage.py0000664000175000017500000006047200000000000026311 0ustar00zuulzuul00000000000000# Copyright 2011 Andrew Bogott for the Wikimedia Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
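# NOTE: These tests cover the admin flavor-manage API: creating and deleting
# flavors, and updating a flavor's description (available from microversion
# 2.55). The fake_create and fake_create_without_swap helpers below replace
# nova.objects.Flavor.create in the tests that stub it out, so no database
# writes are needed.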
import copy import mock from oslo_serialization import jsonutils import six import webob from nova.api.openstack.compute import flavor_access as flavor_access_v21 from nova.api.openstack.compute import flavor_manage as flavormanage_v21 from nova.compute import flavors from nova.db import api as db from nova import exception from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes def fake_create(newflavor): newflavor['flavorid'] = 1234 newflavor["name"] = 'test' newflavor["memory_mb"] = 512 newflavor["vcpus"] = 2 newflavor["root_gb"] = 1 newflavor["ephemeral_gb"] = 1 newflavor["swap"] = 512 newflavor["rxtx_factor"] = 1.0 newflavor["is_public"] = True newflavor["disabled"] = False def fake_create_without_swap(newflavor): newflavor['flavorid'] = 1234 newflavor["name"] = 'test' newflavor["memory_mb"] = 512 newflavor["vcpus"] = 2 newflavor["root_gb"] = 1 newflavor["ephemeral_gb"] = 1 newflavor["swap"] = 0 newflavor["rxtx_factor"] = 1.0 newflavor["is_public"] = True newflavor["disabled"] = False newflavor["extra_specs"] = {"key1": "value1"} class FlavorManageTestV21(test.NoDBTestCase): controller = flavormanage_v21.FlavorManageController() validation_error = exception.ValidationError base_url = '/v2/%s/flavors' % fakes.FAKE_PROJECT_ID microversion = '2.1' def setUp(self): super(FlavorManageTestV21, self).setUp() self.stub_out("nova.objects.Flavor.create", fake_create) self.request_body = { "flavor": { "name": "test", "ram": 512, "vcpus": 2, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 1, "id": six.text_type('1234'), "swap": 512, "rxtx_factor": 1, "os-flavor-access:is_public": True, } } self.expected_flavor = self.request_body def _get_http_request(self, url=''): return fakes.HTTPRequest.blank(url, version=self.microversion, use_admin_context=True) @property def app(self): return fakes.wsgi_app_v21() @mock.patch('nova.objects.Flavor.destroy') def test_delete(self, mock_destroy): res = self.controller._delete(self._get_http_request(), 1234) # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. if isinstance(self.controller, flavormanage_v21.FlavorManageController): status_int = self.controller._delete.wsgi_code else: status_int = res.status_int self.assertEqual(202, status_int) # subsequent delete should fail mock_destroy.side_effect = exception.FlavorNotFound(flavor_id=1234) self.assertRaises(webob.exc.HTTPNotFound, self.controller._delete, self._get_http_request(), 1234) def _test_create_missing_parameter(self, parameter): body = { "flavor": { "name": "azAZ09. 
-_", "ram": 512, "vcpus": 2, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 1, "id": six.text_type('1234'), "swap": 512, "rxtx_factor": 1, "os-flavor-access:is_public": True, } } del body['flavor'][parameter] self.assertRaises(self.validation_error, self.controller._create, self._get_http_request(), body=body) def test_create_missing_name(self): self._test_create_missing_parameter('name') def test_create_missing_ram(self): self._test_create_missing_parameter('ram') def test_create_missing_vcpus(self): self._test_create_missing_parameter('vcpus') def test_create_missing_disk(self): self._test_create_missing_parameter('disk') def _create_flavor_success_case(self, body, req=None, version=None): req = req if req else self._get_http_request(url=self.base_url) req.headers['Content-Type'] = 'application/json' req.headers['X-OpenStack-Nova-API-Version'] = ( version or self.microversion) req.method = 'POST' req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) self.assertEqual(200, res.status_code) return jsonutils.loads(res.body) def test_create(self): body = self._create_flavor_success_case(self.request_body) for key in self.expected_flavor["flavor"]: self.assertEqual(body["flavor"][key], self.expected_flavor["flavor"][key]) def test_create_public_default(self): del self.request_body['flavor']['os-flavor-access:is_public'] body = self._create_flavor_success_case(self.request_body) for key in self.expected_flavor["flavor"]: self.assertEqual(body["flavor"][key], self.expected_flavor["flavor"][key]) def test_create_without_flavorid(self): del self.request_body['flavor']['id'] body = self._create_flavor_success_case(self.request_body) for key in self.expected_flavor["flavor"]: self.assertEqual(body["flavor"][key], self.expected_flavor["flavor"][key]) def _create_flavor_bad_request_case(self, body): self.assertRaises(self.validation_error, self.controller._create, self._get_http_request(), body=body) def test_create_invalid_name(self): self.request_body['flavor']['name'] = 'bad !@#!$%\x00 name' self._create_flavor_bad_request_case(self.request_body) def test_create_flavor_name_is_whitespace(self): self.request_body['flavor']['name'] = ' ' self._create_flavor_bad_request_case(self.request_body) def test_create_with_name_too_long(self): self.request_body['flavor']['name'] = 'a' * 256 self._create_flavor_bad_request_case(self.request_body) def test_create_with_short_name(self): self.request_body['flavor']['name'] = '' self._create_flavor_bad_request_case(self.request_body) def test_create_with_name_leading_trailing_spaces(self): self.request_body['flavor']['name'] = ' test ' self._create_flavor_bad_request_case(self.request_body) def test_create_with_name_leading_trailing_spaces_compat_mode(self): req = self._get_http_request(url=self.base_url) req.set_legacy_v2() self.request_body['flavor']['name'] = ' test ' body = self._create_flavor_success_case(self.request_body, req) self.assertEqual('test', body['flavor']['name']) def test_create_without_flavorname(self): del self.request_body['flavor']['name'] self._create_flavor_bad_request_case(self.request_body) def test_create_empty_body(self): body = { "flavor": {} } self._create_flavor_bad_request_case(body) def test_create_no_body(self): body = {} self._create_flavor_bad_request_case(body) def test_create_invalid_format_body(self): body = { "flavor": [] } self._create_flavor_bad_request_case(body) def test_create_invalid_flavorid(self): self.request_body['flavor']['id'] = "!@#!$#!$^#&^$&" self._create_flavor_bad_request_case(self.request_body) 
def test_create_check_flavor_id_length(self): MAX_LENGTH = 255 self.request_body['flavor']['id'] = "a" * (MAX_LENGTH + 1) self._create_flavor_bad_request_case(self.request_body) def test_create_with_leading_trailing_whitespaces_in_flavor_id(self): self.request_body['flavor']['id'] = " bad_id " self._create_flavor_bad_request_case(self.request_body) def test_create_without_ram(self): del self.request_body['flavor']['ram'] self._create_flavor_bad_request_case(self.request_body) def test_create_with_0_ram(self): self.request_body['flavor']['ram'] = 0 self._create_flavor_bad_request_case(self.request_body) def test_create_with_ram_exceed_max_limit(self): self.request_body['flavor']['ram'] = db.MAX_INT + 1 self._create_flavor_bad_request_case(self.request_body) def test_create_without_vcpus(self): del self.request_body['flavor']['vcpus'] self._create_flavor_bad_request_case(self.request_body) def test_create_with_0_vcpus(self): self.request_body['flavor']['vcpus'] = 0 self._create_flavor_bad_request_case(self.request_body) def test_create_with_vcpus_exceed_max_limit(self): self.request_body['flavor']['vcpus'] = db.MAX_INT + 1 self._create_flavor_bad_request_case(self.request_body) def test_create_without_disk(self): del self.request_body['flavor']['disk'] self._create_flavor_bad_request_case(self.request_body) def test_create_with_minus_disk(self): self.request_body['flavor']['disk'] = -1 self._create_flavor_bad_request_case(self.request_body) def test_create_with_disk_exceed_max_limit(self): self.request_body['flavor']['disk'] = db.MAX_INT + 1 self._create_flavor_bad_request_case(self.request_body) def test_create_with_minus_ephemeral(self): self.request_body['flavor']['OS-FLV-EXT-DATA:ephemeral'] = -1 self._create_flavor_bad_request_case(self.request_body) def test_create_with_ephemeral_exceed_max_limit(self): self.request_body['flavor'][ 'OS-FLV-EXT-DATA:ephemeral'] = db.MAX_INT + 1 self._create_flavor_bad_request_case(self.request_body) def test_create_with_minus_swap(self): self.request_body['flavor']['swap'] = -1 self._create_flavor_bad_request_case(self.request_body) def test_create_with_swap_exceed_max_limit(self): self.request_body['flavor']['swap'] = db.MAX_INT + 1 self._create_flavor_bad_request_case(self.request_body) def test_create_with_minus_rxtx_factor(self): self.request_body['flavor']['rxtx_factor'] = -1 self._create_flavor_bad_request_case(self.request_body) def test_create_with_rxtx_factor_exceed_max_limit(self): self.request_body['flavor']['rxtx_factor'] = db.SQL_SP_FLOAT_MAX * 2 self._create_flavor_bad_request_case(self.request_body) def test_create_with_non_boolean_is_public(self): self.request_body['flavor']['os-flavor-access:is_public'] = 123 self._create_flavor_bad_request_case(self.request_body) def test_flavor_exists_exception_returns_409(self): expected = { "flavor": { "name": "test", "ram": 512, "vcpus": 2, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 1, "id": 1235, "swap": 512, "rxtx_factor": 1, "os-flavor-access:is_public": True, } } def fake_create(name, memory_mb, vcpus, root_gb, ephemeral_gb, flavorid, swap, rxtx_factor, is_public, description): raise exception.FlavorExists(name=name) self.stub_out('nova.compute.flavors.create', fake_create) self.assertRaises(webob.exc.HTTPConflict, self.controller._create, self._get_http_request(), body=expected) def test_invalid_memory_mb(self): """Check negative and decimal number can't be accepted.""" self.assertRaises(exception.InvalidInput, flavors.create, "abc", -512, 2, 1, 1, 1234, 512, 1, True) 
self.assertRaises(exception.InvalidInput, flavors.create, "abcd", 512.2, 2, 1, 1, 1234, 512, 1, True) self.assertRaises(exception.InvalidInput, flavors.create, "abcde", None, 2, 1, 1, 1234, 512, 1, True) self.assertRaises(exception.InvalidInput, flavors.create, "abcdef", 512, 2, None, 1, 1234, 512, 1, True) self.assertRaises(exception.InvalidInput, flavors.create, "abcdef", "test_memory_mb", 2, None, 1, 1234, 512, 1, True) def test_create_with_description(self): """With microversion <2.55 this should return a failure.""" self.request_body['flavor']['description'] = 'invalid' ex = self.assertRaises( self.validation_error, self.controller._create, self._get_http_request(), body=self.request_body) self.assertIn('description', six.text_type(ex)) def test_flavor_update_description(self): """With microversion <2.55 this should return a failure.""" flavor = self._create_flavor_success_case(self.request_body)['flavor'] self.assertRaises( exception.VersionNotFoundForAPIMethod, self.controller._update, self._get_http_request(), flavor['id'], body={'flavor': {'description': 'nope'}}) class FlavorManageTestV2_55(FlavorManageTestV21): microversion = '2.55' def get_flavor(self, flavor, **kwargs): return objects.Flavor( flavorid=flavor['id'], name=flavor['name'], memory_mb=flavor['ram'], vcpus=flavor['vcpus'], root_gb=flavor['disk'], swap=flavor['swap'], ephemeral_gb=flavor['OS-FLV-EXT-DATA:ephemeral'], disabled=flavor['OS-FLV-DISABLED:disabled'], is_public=flavor['os-flavor-access:is_public'], rxtx_factor=flavor['rxtx_factor'], description=flavor['description'], **kwargs) def setUp(self): super(FlavorManageTestV2_55, self).setUp() # Send a description in POST /flavors requests. self.request_body['flavor']['description'] = 'test description' def test_create_with_description(self): # test_create already tests this. pass @mock.patch('nova.objects.Flavor.get_by_flavor_id') @mock.patch('nova.objects.Flavor.save') def test_flavor_update_description(self, mock_flavor_save, mock_get): """Tests updating a flavor description.""" # First create a flavor. flavor = self._create_flavor_success_case(self.request_body)['flavor'] self.assertEqual('test description', flavor['description']) mock_get.return_value = self.get_flavor(flavor) # Now null out the flavor description. flavor = self.controller._update( self._get_http_request(), flavor['id'], body={'flavor': {'description': None}})['flavor'] self.assertIsNone(flavor['description']) mock_get.assert_called_once_with( test.MatchType(fakes.FakeRequestContext), flavor['id']) mock_flavor_save.assert_called_once_with() @mock.patch('nova.objects.Flavor.get_by_flavor_id', side_effect=exception.FlavorNotFound(flavor_id='notfound')) def test_flavor_update_not_found(self, mock_get): """Tests that a 404 is returned if the flavor is not found.""" self.assertRaises(webob.exc.HTTPNotFound, self.controller._update, self._get_http_request(), 'notfound', body={'flavor': {'description': None}}) def test_flavor_update_missing_description(self): """Tests that a schema validation error is raised if no description is provided in the update request body. """ self.assertRaises(self.validation_error, self.controller._update, self._get_http_request(), 'invalid', body={'flavor': {}}) def test_create_with_invalid_description(self): # NOTE(mriedem): Intentionally not using ddt for this since ddt will # create a test name that has 65536 'a's in the name which blows up # the console output. 
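        # The loop below checks two invalid descriptions: one containing a
        # non-printable character and one of 65536 characters, which is past
        # the schema's maxLength.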
for description in ('bad !@#!$%\x00 description', # printable chars 'a' * 65536): # maxLength self.request_body['flavor']['description'] = description self.assertRaises(self.validation_error, self.controller._create, self._get_http_request(), body=self.request_body) @mock.patch('nova.objects.Flavor.get_by_flavor_id') @mock.patch('nova.objects.Flavor.save') def test_update_with_invalid_description(self, mock_flavor_save, mock_get): # First create a flavor. flavor = self._create_flavor_success_case(self.request_body)['flavor'] self.assertEqual('test description', flavor['description']) mock_get.return_value = objects.Flavor( flavorid=flavor['id'], name=flavor['name'], memory_mb=flavor['ram'], vcpus=flavor['vcpus'], root_gb=flavor['disk'], swap=flavor['swap'], ephemeral_gb=flavor['OS-FLV-EXT-DATA:ephemeral'], disabled=flavor['OS-FLV-DISABLED:disabled'], is_public=flavor['os-flavor-access:is_public'], description=flavor['description']) # NOTE(mriedem): Intentionally not using ddt for this since ddt will # create a test name that has 65536 'a's in the name which blows up # the console output. for description in ('bad !@#!$%\x00 description', # printable chars 'a' * 65536): # maxLength self.request_body['flavor']['description'] = description self.assertRaises(self.validation_error, self.controller._update, self._get_http_request(), flavor['id'], body={'flavor': {'description': description}}) class FlavorManageTestV2_61(FlavorManageTestV2_55): """Run the same tests as we would for v2.55 but with a extra_specs.""" microversion = '2.61' def get_flavor(self, flavor): return super(FlavorManageTestV2_61, self).get_flavor( flavor, extra_specs={"key1": "value1"}) def setUp(self): super(FlavorManageTestV2_61, self).setUp() self.expected_flavor = copy.deepcopy(self.request_body) self.expected_flavor['flavor']['extra_specs'] = {} @mock.patch('nova.objects.Flavor.get_by_flavor_id') @mock.patch('nova.objects.Flavor.save') def test_flavor_update_extra_spec(self, mock_flavor_save, mock_get): # First create a flavor. 
flavor = self._create_flavor_success_case(self.request_body)['flavor'] mock_get.return_value = self.get_flavor(flavor) flavor = self.controller._update( self._get_http_request(), flavor['id'], body={'flavor': {'description': None}})['flavor'] self.assertEqual({"key1": "value1"}, flavor['extra_specs']) class FlavorManageTestV2_75(FlavorManageTestV2_61): microversion = '2.75' FLAVOR_WITH_NO_SWAP = objects.Flavor( name='test', memory_mb=512, vcpus=2, root_gb=1, ephemeral_gb=1, flavorid=1234, rxtx_factor=1.0, disabled=False, is_public=True, swap=0, extra_specs={"key1": "value1"} ) def test_create_flavor_default_swap_value_old_version(self): self.stub_out("nova.objects.Flavor.create", fake_create_without_swap) del self.request_body['flavor']['swap'] resp = self._create_flavor_success_case(self.request_body, version='2.74') self.assertEqual(resp['flavor']['swap'], "") @mock.patch('nova.objects.Flavor.get_by_flavor_id') @mock.patch('nova.objects.Flavor.save') def test_update_flavor_default_swap_value_old_version(self, mock_save, mock_get): self.stub_out("nova.objects.Flavor.create", fake_create_without_swap) del self.request_body['flavor']['swap'] flavor = self._create_flavor_success_case(self.request_body, version='2.74')['flavor'] mock_get.return_value = self.FLAVOR_WITH_NO_SWAP req = fakes.HTTPRequest.blank('/%s/flavors' % fakes.FAKE_PROJECT_ID, version='2.74') req.method = 'PUT' response = self.controller._update( req, flavor['id'], body={'flavor': {'description': None}})['flavor'] self.assertEqual(response['swap'], '') @mock.patch('nova.objects.FlavorList.get_all') def test_create_flavor_default_swap_value(self, mock_get): self.stub_out("nova.objects.Flavor.create", fake_create_without_swap) del self.request_body['flavor']['swap'] resp = self._create_flavor_success_case(self.request_body) self.assertEqual(resp['flavor']['swap'], 0) @mock.patch('nova.objects.Flavor.get_by_flavor_id') @mock.patch('nova.objects.Flavor.save') def test_update_flavor_default_swap_value(self, mock_save, mock_get): self.stub_out("nova.objects.Flavor.create", fake_create_without_swap) del self.request_body['flavor']['swap'] mock_get.return_value = self.FLAVOR_WITH_NO_SWAP flavor = self._create_flavor_success_case(self.request_body)['flavor'] req = fakes.HTTPRequest.blank('/%s/flavors' % fakes.FAKE_PROJECT_ID, version=self.microversion) response = self.controller._update( req, flavor['id'], body={'flavor': {'description': None}})['flavor'] self.assertEqual(response['swap'], 0) class PrivateFlavorManageTestV21(test.TestCase): controller = flavormanage_v21.FlavorManageController() base_url = '/v2/%s/flavors' % fakes.FAKE_PROJECT_ID def setUp(self): super(PrivateFlavorManageTestV21, self).setUp() self.flavor_access_controller = (flavor_access_v21. FlavorAccessController()) self.expected = { "flavor": { "name": "test", "ram": 512, "vcpus": 2, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 1, "swap": 512, "rxtx_factor": 1 } } @property def app(self): return fakes.wsgi_app_v21(fake_auth_context=self._get_http_request(). 
environ['nova.context']) def _get_http_request(self, url=''): return fakes.HTTPRequest.blank(url) def _get_response(self): req = self._get_http_request(self.base_url) req.headers['Content-Type'] = 'application/json' req.method = 'POST' req.body = jsonutils.dump_as_bytes(self.expected) res = req.get_response(self.app) return jsonutils.loads(res.body) def test_create_private_flavor_should_not_grant_flavor_access(self): self.expected["flavor"]["os-flavor-access:is_public"] = False body = self._get_response() for key in self.expected["flavor"]: self.assertEqual(body["flavor"][key], self.expected["flavor"][key]) # Because for normal user can't access the non-public flavor without # access. So it need admin context at here. flavor_access_body = self.flavor_access_controller.index( fakes.HTTPRequest.blank('', use_admin_context=True), body["flavor"]["id"]) expected_flavor_access_body = { "tenant_id": fakes.FAKE_PROJECT_ID, "flavor_id": "%s" % body["flavor"]["id"] } self.assertNotIn(expected_flavor_access_body, flavor_access_body["flavor_access"]) def test_create_public_flavor_should_not_create_flavor_access(self): self.expected["flavor"]["os-flavor-access:is_public"] = True body = self._get_response() for key in self.expected["flavor"]: self.assertEqual(body["flavor"][key], self.expected["flavor"][key]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_flavorextradata.py0000664000175000017500000000645700000000000026702 0ustar00zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
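# NOTE: The tests below verify the OS-FLV-EXT-DATA extension, i.e. that
# flavor show and detail responses include an "OS-FLV-EXT-DATA:ephemeral"
# value taken from the fake flavors' ephemeral_gb.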
from oslo_serialization import jsonutils from nova import test from nova.tests.unit.api.openstack import fakes class FlavorExtraDataTestV21(test.NoDBTestCase): base_url = '/v2/%s/flavors' % fakes.FAKE_PROJECT_ID def setUp(self): super(FlavorExtraDataTestV21, self).setUp() fakes.stub_out_flavor_get_all(self) fakes.stub_out_flavor_get_by_flavor_id(self) @property def app(self): return fakes.wsgi_app_v21() def _verify_flavor_response(self, flavor, expected): for key in expected: self.assertEqual(flavor[key], expected[key]) def test_show(self): expected = { 'flavor': { 'id': fakes.FLAVORS['1'].flavorid, 'name': fakes.FLAVORS['1'].name, 'ram': fakes.FLAVORS['1'].memory_mb, 'vcpus': fakes.FLAVORS['1'].vcpus, 'disk': fakes.FLAVORS['1'].root_gb, 'OS-FLV-EXT-DATA:ephemeral': fakes.FLAVORS['1'].ephemeral_gb, } } url = self.base_url + '/1' req = fakes.HTTPRequest.blank(url) req.headers['Content-Type'] = 'application/json' res = req.get_response(self.app) body = jsonutils.loads(res.body) self._verify_flavor_response(body['flavor'], expected['flavor']) def test_detail(self): expected = [ { 'id': fakes.FLAVORS['1'].flavorid, 'name': fakes.FLAVORS['1'].name, 'ram': fakes.FLAVORS['1'].memory_mb, 'vcpus': fakes.FLAVORS['1'].vcpus, 'disk': fakes.FLAVORS['1'].root_gb, 'OS-FLV-EXT-DATA:ephemeral': fakes.FLAVORS['1'].ephemeral_gb, 'rxtx_factor': fakes.FLAVORS['1'].rxtx_factor or u'', 'os-flavor-access:is_public': fakes.FLAVORS['1'].is_public, }, { 'id': fakes.FLAVORS['2'].flavorid, 'name': fakes.FLAVORS['2'].name, 'ram': fakes.FLAVORS['2'].memory_mb, 'vcpus': fakes.FLAVORS['2'].vcpus, 'disk': fakes.FLAVORS['2'].root_gb, 'OS-FLV-EXT-DATA:ephemeral': fakes.FLAVORS['2'].ephemeral_gb, 'rxtx_factor': fakes.FLAVORS['2'].rxtx_factor or u'', 'os-flavor-access:is_public': fakes.FLAVORS['2'].is_public, }, ] url = self.base_url + '/detail' req = fakes.HTTPRequest.blank(url) req.headers['Content-Type'] = 'application/json' res = req.get_response(self.app) body = jsonutils.loads(res.body) for i, flavor in enumerate(body['flavors']): self._verify_flavor_response(flavor, expected[i]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_flavors.py0000664000175000017500000011653300000000000025164 0ustar00zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
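# NOTE: This module tests the core flavors API controller: show, index and
# detail responses, pagination links (limit/marker), the minRam/minDisk and
# is_public filters, and the 2.55/2.61/2.75 subclasses which additionally
# expect a description, extra_specs, and a swap value of 0 for flavors
# defined without swap.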
import mock import six.moves.urllib.parse as urlparse import webob from nova.api.openstack import common from nova.api.openstack.compute import flavors as flavors_v21 from nova import context from nova import exception from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import matchers NS = "{http://docs.openstack.org/compute/api/v1.1}" ATOMNS = "{http://www.w3.org/2005/Atom}" def fake_get_limit_and_marker(request, max_limit=1): params = common.get_pagination_params(request) limit = params.get('limit', max_limit) limit = min(max_limit, limit) marker = params.get('marker') return limit, marker def return_flavor_not_found(context, flavor_id, read_deleted=None): raise exception.FlavorNotFound(flavor_id=flavor_id) class FlavorsTestV21(test.TestCase): _prefix = "/v2/%s" % fakes.FAKE_PROJECT_ID Controller = flavors_v21.FlavorsController fake_request = fakes.HTTPRequestV21 _rspv = "v2/%s" % fakes.FAKE_PROJECT_ID _fake = "/%s" % fakes.FAKE_PROJECT_ID microversion = '2.1' # Flag to tell the test if a description should be expected in a response. expect_description = False # Flag to tell the test if a extra_specs should be expected in a response. expect_extra_specs = False def setUp(self): super(FlavorsTestV21, self).setUp() fakes.stub_out_networking(self) fakes.stub_out_flavor_get_all(self) fakes.stub_out_flavor_get_by_flavor_id(self) self.controller = self.Controller() def _build_request(self, url): return self.fake_request.blank( self._prefix + url, version=self.microversion) def _set_expected_body(self, expected, flavor): expected['OS-FLV-EXT-DATA:ephemeral'] = flavor.ephemeral_gb expected['OS-FLV-DISABLED:disabled'] = flavor.disabled expected['swap'] = flavor.swap if self.expect_description: expected['description'] = flavor.description if self.expect_extra_specs: expected['extra_specs'] = flavor.extra_specs @mock.patch('nova.objects.Flavor.get_by_flavor_id', side_effect=return_flavor_not_found) def test_get_flavor_by_invalid_id(self, mock_get): req = self._build_request('/flavors/asdf') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, 'asdf') def test_get_flavor_by_id(self): req = self._build_request('/flavors/1') flavor = self.controller.show(req, '1') expected = { "flavor": { "id": fakes.FLAVORS['1'].flavorid, "name": fakes.FLAVORS['1'].name, "ram": fakes.FLAVORS['1'].memory_mb, "disk": fakes.FLAVORS['1'].root_gb, "vcpus": fakes.FLAVORS['1'].vcpus, "os-flavor-access:is_public": True, "rxtx_factor": 1.0, "links": [ { "rel": "self", "href": "http://localhost/" + self._rspv + "/flavors/1", }, { "rel": "bookmark", "href": "http://localhost" + self._fake + "/flavors/1", }, ], }, } self._set_expected_body(expected['flavor'], fakes.FLAVORS['1']) self.assertEqual(flavor, expected) def test_get_flavor_with_custom_link_prefix(self): self.flags(compute_link_prefix='http://zoo.com:42', glance_link_prefix='http://circus.com:34', group='api') req = self._build_request('/flavors/1') flavor = self.controller.show(req, '1') expected = { "flavor": { "id": fakes.FLAVORS['1'].flavorid, "name": fakes.FLAVORS['1'].name, "ram": fakes.FLAVORS['1'].memory_mb, "disk": fakes.FLAVORS['1'].root_gb, "vcpus": fakes.FLAVORS['1'].vcpus, "os-flavor-access:is_public": True, "rxtx_factor": 1.0, "links": [ { "rel": "self", "href": "http://zoo.com:42/" + self._rspv + "/flavors/1", }, { "rel": "bookmark", "href": "http://zoo.com:42" + self._fake + "/flavors/1", }, ], }, } self._set_expected_body(expected['flavor'], fakes.FLAVORS['1']) 
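        # With compute_link_prefix overridden, both the self and bookmark
        # links above are expected to point at http://zoo.com:42 rather than
        # the default http://localhost.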
self.assertEqual(expected, flavor) def test_get_flavor_list(self): req = self._build_request('/flavors') flavor = self.controller.index(req) expected = { "flavors": [ { "id": fakes.FLAVORS['1'].flavorid, "name": fakes.FLAVORS['1'].name, "links": [ { "rel": "self", "href": "http://localhost/" + self._rspv + "/flavors/1", }, { "rel": "bookmark", "href": "http://localhost" + self._fake + "/flavors/1", }, ], }, { "id": fakes.FLAVORS['2'].flavorid, "name": fakes.FLAVORS['2'].name, "links": [ { "rel": "self", "href": "http://localhost/" + self._rspv + "/flavors/2", }, { "rel": "bookmark", "href": "http://localhost" + self._fake + "/flavors/2", }, ], }, ], } if self.expect_description: for idx, _flavor in enumerate(expected['flavors']): expected['flavors'][idx]['description'] = ( fakes.FLAVORS[_flavor['id']].description) self.assertEqual(flavor, expected) def test_get_flavor_list_with_marker(self): self.maxDiff = None url = '/flavors?limit=1&marker=1' req = self._build_request(url) flavor = self.controller.index(req) expected = { "flavors": [ { "id": fakes.FLAVORS['2'].flavorid, "name": fakes.FLAVORS['2'].name, "links": [ { "rel": "self", "href": "http://localhost/" + self._rspv + "/flavors/2", }, { "rel": "bookmark", "href": "http://localhost" + self._fake + "/flavors/2", }, ], }, ], 'flavors_links': [ {'href': 'http://localhost/' + self._rspv + '/flavors?limit=1&marker=2', 'rel': 'next'} ] } if self.expect_description: expected['flavors'][0]['description'] = ( fakes.FLAVORS['2'].description) self.assertThat(flavor, matchers.DictMatches(expected)) def test_get_flavor_list_with_invalid_marker(self): req = self._build_request('/flavors?marker=99999') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req) def test_get_flavor_detail_with_limit(self): url = '/flavors/detail?limit=1' req = self._build_request(url) response = self.controller.detail(req) response_list = response["flavors"] response_links = response["flavors_links"] expected_flavors = [ { "id": fakes.FLAVORS['1'].flavorid, "name": fakes.FLAVORS['1'].name, "ram": fakes.FLAVORS['1'].memory_mb, "disk": fakes.FLAVORS['1'].root_gb, "vcpus": fakes.FLAVORS['1'].vcpus, "os-flavor-access:is_public": True, "rxtx_factor": 1.0, "links": [ { "rel": "self", "href": "http://localhost/" + self._rspv + "/flavors/1", }, { "rel": "bookmark", "href": "http://localhost" + self._fake + "/flavors/1", }, ], }, ] self._set_expected_body(expected_flavors[0], fakes.FLAVORS['1']) self.assertEqual(response_list, expected_flavors) self.assertEqual(response_links[0]['rel'], 'next') href_parts = urlparse.urlparse(response_links[0]['href']) self.assertEqual('/' + self._rspv + '/flavors/detail', href_parts.path) params = urlparse.parse_qs(href_parts.query) self.assertThat({'limit': ['1'], 'marker': ['1']}, matchers.DictMatches(params)) def test_get_flavor_with_limit(self): req = self._build_request('/flavors?limit=2') response = self.controller.index(req) response_list = response["flavors"] response_links = response["flavors_links"] expected_flavors = [ { "id": fakes.FLAVORS['1'].flavorid, "name": fakes.FLAVORS['1'].name, "links": [ { "rel": "self", "href": "http://localhost/" + self._rspv + "/flavors/1", }, { "rel": "bookmark", "href": "http://localhost" + self._fake + "/flavors/1", }, ], }, { "id": fakes.FLAVORS['2'].flavorid, "name": fakes.FLAVORS['2'].name, "links": [ { "rel": "self", "href": "http://localhost/" + self._rspv + "/flavors/2", }, { "rel": "bookmark", "href": "http://localhost" + self._fake + "/flavors/2", }, ], } ] if 
self.expect_description: for idx, _flavor in enumerate(expected_flavors): expected_flavors[idx]['description'] = ( fakes.FLAVORS[_flavor['id']].description) self.assertEqual(response_list, expected_flavors) self.assertEqual(response_links[0]['rel'], 'next') href_parts = urlparse.urlparse(response_links[0]['href']) self.assertEqual('/' + self._rspv + '/flavors', href_parts.path) params = urlparse.parse_qs(href_parts.query) self.assertThat({'limit': ['2'], 'marker': ['2']}, matchers.DictMatches(params)) def test_get_flavor_with_default_limit(self): self.stub_out('nova.api.openstack.common.get_limit_and_marker', fake_get_limit_and_marker) self.flags(max_limit=1, group='api') req = self._build_request('/flavors?limit=2') response = self.controller.index(req) response_list = response["flavors"] response_links = response["flavors_links"] expected_flavors = [ { "id": fakes.FLAVORS['1'].flavorid, "name": fakes.FLAVORS['1'].name, "links": [ { "rel": "self", "href": ("http://localhost/v2/%s/flavors/1" % fakes.FAKE_PROJECT_ID), }, { "rel": "bookmark", "href": ("http://localhost/%s/flavors/1" % fakes.FAKE_PROJECT_ID), } ] } ] if self.expect_description: expected_flavors[0]['description'] = ( fakes.FLAVORS['1'].description) self.assertEqual(response_list, expected_flavors) self.assertEqual(response_links[0]['rel'], 'next') href_parts = urlparse.urlparse(response_links[0]['href']) self.assertEqual('/v2/%s/flavors' % fakes.FAKE_PROJECT_ID, href_parts.path) params = urlparse.parse_qs(href_parts.query) self.assertThat({'limit': ['2'], 'marker': ['1']}, matchers.DictMatches(params)) def test_get_flavor_list_detail(self): req = self._build_request('/flavors/detail') flavor = self.controller.detail(req) expected = { "flavors": [ { "id": fakes.FLAVORS['1'].flavorid, "name": fakes.FLAVORS['1'].name, "ram": fakes.FLAVORS['1'].memory_mb, "disk": fakes.FLAVORS['1'].root_gb, "vcpus": fakes.FLAVORS['1'].vcpus, "os-flavor-access:is_public": True, "rxtx_factor": 1.0, "links": [ { "rel": "self", "href": "http://localhost/" + self._rspv + "/flavors/1", }, { "rel": "bookmark", "href": "http://localhost" + self._fake + "/flavors/1", }, ], }, { "id": fakes.FLAVORS['2'].flavorid, "name": fakes.FLAVORS['2'].name, "ram": fakes.FLAVORS['2'].memory_mb, "disk": fakes.FLAVORS['2'].root_gb, "vcpus": fakes.FLAVORS['2'].vcpus, "os-flavor-access:is_public": True, "rxtx_factor": '', "links": [ { "rel": "self", "href": "http://localhost/" + self._rspv + "/flavors/2", }, { "rel": "bookmark", "href": "http://localhost" + self._fake + "/flavors/2", }, ], }, ], } self._set_expected_body(expected['flavors'][0], fakes.FLAVORS['1']) self._set_expected_body(expected['flavors'][1], fakes.FLAVORS['2']) self.assertEqual(expected, flavor) @mock.patch('nova.objects.FlavorList.get_all', return_value=objects.FlavorList()) def test_get_empty_flavor_list(self, mock_get): req = self._build_request('/flavors') flavors = self.controller.index(req) expected = {'flavors': []} self.assertEqual(flavors, expected) def test_get_flavor_list_filter_min_ram(self): # Flavor lists may be filtered by minRam. 
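        # Of the fake flavors only the second satisfies minRam=512, so it is
        # the only one expected in the response built below.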
req = self._build_request('/flavors?minRam=512') flavor = self.controller.index(req) expected = { "flavors": [ { "id": fakes.FLAVORS['2'].flavorid, "name": fakes.FLAVORS['2'].name, "links": [ { "rel": "self", "href": "http://localhost/" + self._rspv + "/flavors/2", }, { "rel": "bookmark", "href": "http://localhost" + self._fake + "/flavors/2", }, ], }, ], } if self.expect_description: expected['flavors'][0]['description'] = ( fakes.FLAVORS['2'].description) self.assertEqual(flavor, expected) def test_get_flavor_list_filter_invalid_min_ram(self): # Ensure you cannot list flavors with invalid minRam param. req = self._build_request('/flavors?minRam=NaN') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req) def test_get_flavor_list_filter_min_disk(self): # Flavor lists may be filtered by minDisk. req = self._build_request('/flavors?minDisk=20') flavor = self.controller.index(req) expected = { "flavors": [ { "id": fakes.FLAVORS['2'].flavorid, "name": fakes.FLAVORS['2'].name, "links": [ { "rel": "self", "href": "http://localhost/" + self._rspv + "/flavors/2", }, { "rel": "bookmark", "href": "http://localhost" + self._fake + "/flavors/2", }, ], }, ], } if self.expect_description: expected['flavors'][0]['description'] = ( fakes.FLAVORS['2'].description) self.assertEqual(flavor, expected) def test_get_flavor_list_filter_invalid_min_disk(self): # Ensure you cannot list flavors with invalid minDisk param. req = self._build_request('/flavors?minDisk=NaN') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req) def test_get_flavor_list_detail_min_ram_and_min_disk(self): """Tests that filtering work on flavor details and that minRam and minDisk filters can be combined """ req = self._build_request('/flavors/detail?minRam=256&minDisk=20') flavor = self.controller.detail(req) expected = { "flavors": [ { "id": fakes.FLAVORS['2'].flavorid, "name": fakes.FLAVORS['2'].name, "ram": fakes.FLAVORS['2'].memory_mb, "disk": fakes.FLAVORS['2'].root_gb, "vcpus": fakes.FLAVORS['2'].vcpus, "os-flavor-access:is_public": True, "rxtx_factor": '', "links": [ { "rel": "self", "href": "http://localhost/" + self._rspv + "/flavors/2", }, { "rel": "bookmark", "href": "http://localhost" + self._fake + "/flavors/2", }, ], }, ], } self._set_expected_body(expected['flavors'][0], fakes.FLAVORS['2']) self.assertEqual(expected, flavor) def _test_list_flavors_with_invalid_filter( self, url, expected_exception=exception.ValidationError): controller_list = self.controller.index if 'detail' in url: controller_list = self.controller.detail req = self._build_request(url) self.assertRaises(expected_exception, controller_list, req) def test_list_flavors_with_invalid_non_int_limit(self): self._test_list_flavors_with_invalid_filter('/flavors?limit=-9') def test_list_detail_flavors_with_invalid_non_int_limit(self): self._test_list_flavors_with_invalid_filter('/flavors/detail?limit=-9') def test_list_flavors_with_invalid_string_limit(self): self._test_list_flavors_with_invalid_filter('/flavors?limit=abc') def test_list_detail_flavors_with_invalid_string_limit(self): self._test_list_flavors_with_invalid_filter( '/flavors/detail?limit=abc') def test_list_duplicate_query_with_invalid_string_limit(self): self._test_list_flavors_with_invalid_filter( '/flavors?limit=1&limit=abc') def test_list_detail_duplicate_query_with_invalid_string_limit(self): self._test_list_flavors_with_invalid_filter( '/flavors/detail?limit=1&limit=abc') def _test_list_flavors_duplicate_query_parameters_validation( self, url, expected=None): 
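        # Helper: repeat each supported query parameter (limit, marker,
        # is_public, minRam, minDisk, sort_key, sort_dir) twice on the URL
        # and check that listing still succeeds and returns only flavor 2.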
controller_list = self.controller.index if 'detail' in url: controller_list = self.controller.detail expected_resp = [{ "id": fakes.FLAVORS['2'].flavorid, "name": fakes.FLAVORS['2'].name, "links": [ { "rel": "self", "href": "http://localhost/" + self._rspv + "/flavors/2", }, { "rel": "bookmark", "href": "http://localhost" + self._fake + "/flavors/2", }, ], }] if expected: expected_resp[0].update(expected) if self.expect_description: expected_resp[0]['description'] = ( fakes.FLAVORS['2'].description) if 'detail' in url and self.expect_extra_specs: expected_resp[0]['extra_specs'] = ( fakes.FLAVORS['2'].extra_specs) params = { 'limit': 1, 'marker': 1, 'is_public': 't', 'minRam': 2, 'minDisk': 2, 'sort_key': 'id', 'sort_dir': 'asc' } for param, value in params.items(): req = self._build_request( url + '?marker=1&%s=%s&%s=%s' % (param, value, param, value)) result = controller_list(req) self.assertEqual(expected_resp, result['flavors']) def test_list_duplicate_query_parameters_validation(self): self._test_list_flavors_duplicate_query_parameters_validation( '/flavors') def test_list_detail_duplicate_query_parameters_validation(self): expected = { "ram": fakes.FLAVORS['2'].memory_mb, "disk": fakes.FLAVORS['2'].root_gb, "vcpus": fakes.FLAVORS['2'].vcpus, "os-flavor-access:is_public": True, "rxtx_factor": '', "OS-FLV-EXT-DATA:ephemeral": fakes.FLAVORS['2'].ephemeral_gb, "OS-FLV-DISABLED:disabled": fakes.FLAVORS['2'].disabled, "swap": fakes.FLAVORS['2'].swap } self._test_list_flavors_duplicate_query_parameters_validation( '/flavors/detail', expected) def _test_list_flavors_with_allowed_filter( self, url, expected=None, req=None): controller_list = self.controller.index if 'detail' in url: controller_list = self.controller.detail expected_resp = [{ "id": fakes.FLAVORS['2'].flavorid, "name": fakes.FLAVORS['2'].name, "links": [ { "rel": "self", "href": "http://localhost/" + self._rspv + "/flavors/2", }, { "rel": "bookmark", "href": "http://localhost" + self._fake + "/flavors/2", }, ], }] if expected: expected_resp[0].update(expected) if self.expect_description: expected_resp[0]['description'] = ( fakes.FLAVORS['2'].description) if 'detail' in url and self.expect_extra_specs: expected_resp[0]['extra_specs'] = ( fakes.FLAVORS['2'].extra_specs) req = req or self._build_request(url + '&limit=1&marker=1') result = controller_list(req) self.assertEqual(expected_resp, result['flavors']) def test_list_flavors_with_additional_filter(self): self._test_list_flavors_with_allowed_filter( '/flavors?limit=1&marker=1&additional=something') def test_list_detail_flavors_with_additional_filter(self): expected = { "ram": fakes.FLAVORS['2'].memory_mb, "disk": fakes.FLAVORS['2'].root_gb, "vcpus": fakes.FLAVORS['2'].vcpus, "os-flavor-access:is_public": True, "rxtx_factor": '', "OS-FLV-EXT-DATA:ephemeral": fakes.FLAVORS['2'].ephemeral_gb, "OS-FLV-DISABLED:disabled": fakes.FLAVORS['2'].disabled, "swap": fakes.FLAVORS['2'].swap } self._test_list_flavors_with_allowed_filter( '/flavors/detail?limit=1&marker=1&additional=something', expected) def test_list_flavors_with_min_ram_filter_as_negative_int(self): self._test_list_flavors_with_allowed_filter( '/flavors?minRam=-2') def test_list_detail_flavors_with_min_ram_filter_as_negative_int(self): expected = { "ram": fakes.FLAVORS['2'].memory_mb, "disk": fakes.FLAVORS['2'].root_gb, "vcpus": fakes.FLAVORS['2'].vcpus, "os-flavor-access:is_public": True, "rxtx_factor": '', "OS-FLV-EXT-DATA:ephemeral": fakes.FLAVORS['2'].ephemeral_gb, "OS-FLV-DISABLED:disabled": 
fakes.FLAVORS['2'].disabled, "swap": fakes.FLAVORS['2'].swap } self._test_list_flavors_with_allowed_filter( '/flavors/detail?minRam=-2', expected) def test_list_flavors_with_min_ram_filter_as_float(self): self._test_list_flavors_with_invalid_filter( '/flavors?minRam=1.2', expected_exception=webob.exc.HTTPBadRequest) def test_list_detail_flavors_with_min_ram_filter_as_float(self): self._test_list_flavors_with_invalid_filter( '/flavors/detail?minRam=1.2', expected_exception=webob.exc.HTTPBadRequest) def test_list_flavors_with_min_disk_filter_as_negative_int(self): self._test_list_flavors_with_allowed_filter('/flavors?minDisk=-2') def test_list_detail_flavors_with_min_disk_filter_as_negative_int(self): expected = { "ram": fakes.FLAVORS['2'].memory_mb, "disk": fakes.FLAVORS['2'].root_gb, "vcpus": fakes.FLAVORS['2'].vcpus, "os-flavor-access:is_public": True, "rxtx_factor": '', "OS-FLV-EXT-DATA:ephemeral": fakes.FLAVORS['2'].ephemeral_gb, "OS-FLV-DISABLED:disabled": fakes.FLAVORS['2'].disabled, "swap": fakes.FLAVORS['2'].swap } self._test_list_flavors_with_allowed_filter( '/flavors/detail?minDisk=-2', expected) def test_list_flavors_with_min_disk_filter_as_float(self): self._test_list_flavors_with_invalid_filter( '/flavors?minDisk=1.2', expected_exception=webob.exc.HTTPBadRequest) def test_list_detail_flavors_with_min_disk_filter_as_float(self): self._test_list_flavors_with_invalid_filter( '/flavors/detail?minDisk=1.2', expected_exception=webob.exc.HTTPBadRequest) def test_list_flavors_with_is_public_filter_as_string_none(self): self._test_list_flavors_with_allowed_filter( '/flavors?is_public=none') def test_list_detail_flavors_with_is_public_filter_as_string_none(self): expected = { "ram": fakes.FLAVORS['2'].memory_mb, "disk": fakes.FLAVORS['2'].root_gb, "vcpus": fakes.FLAVORS['2'].vcpus, "os-flavor-access:is_public": True, "rxtx_factor": '', "OS-FLV-EXT-DATA:ephemeral": fakes.FLAVORS['2'].ephemeral_gb, "OS-FLV-DISABLED:disabled": fakes.FLAVORS['2'].disabled, "swap": fakes.FLAVORS['2'].swap } self._test_list_flavors_with_allowed_filter( '/flavors/detail?is_public=none', expected) def test_list_flavors_with_is_public_filter_as_valid_bool(self): self._test_list_flavors_with_allowed_filter( '/flavors?is_public=false') def test_list_detail_flavors_with_is_public_filter_as_valid_bool(self): expected = { "ram": fakes.FLAVORS['2'].memory_mb, "disk": fakes.FLAVORS['2'].root_gb, "vcpus": fakes.FLAVORS['2'].vcpus, "OS-FLV-EXT-DATA:ephemeral": fakes.FLAVORS['2'].ephemeral_gb, "os-flavor-access:is_public": True, "rxtx_factor": '', "OS-FLV-DISABLED:disabled": fakes.FLAVORS['2'].disabled, "swap": fakes.FLAVORS['2'].swap } self._test_list_flavors_with_allowed_filter( '/flavors/detail?is_public=false', expected) def test_list_flavors_with_is_public_filter_as_invalid_string(self): self._test_list_flavors_with_allowed_filter( '/flavors?is_public=invalid') def test_list_detail_flavors_with_is_public_filter_as_invalid_string(self): expected = { "ram": fakes.FLAVORS['2'].memory_mb, "disk": fakes.FLAVORS['2'].root_gb, "vcpus": fakes.FLAVORS['2'].vcpus, "os-flavor-access:is_public": True, "rxtx_factor": '', "OS-FLV-EXT-DATA:ephemeral": fakes.FLAVORS['2'].ephemeral_gb, "OS-FLV-DISABLED:disabled": fakes.FLAVORS['2'].disabled, "swap": fakes.FLAVORS['2'].swap } self._test_list_flavors_with_allowed_filter( '/flavors/detail?is_public=invalid', expected) class FlavorsTestV2_55(FlavorsTestV21): """Run the same tests as we would for v2.1 but with a description.""" microversion = '2.55' expect_description = True class 
FlavorsTestV2_61(FlavorsTestV2_55): """Run the same tests as we would for v2.55 but with a extra_specs.""" microversion = '2.61' expect_extra_specs = True class FlavorsTestV2_75(FlavorsTestV2_61): microversion = '2.75' FLAVOR_WITH_NO_SWAP = objects.Flavor( id=1, name='flavor 1', memory_mb=256, vcpus=1, root_gb=10, ephemeral_gb=20, flavorid='1', rxtx_factor=1.0, vcpu_weight=None, disabled=False, is_public=True, swap=0, description=None, extra_specs={"key1": "value1", "key2": "value2"} ) def test_list_flavors_with_additional_filter_old_version(self): req = self.fake_request.blank( '/%s/flavors?limit=1&marker=1&additional=something' % fakes.FAKE_PROJECT_ID, version='2.74') self._test_list_flavors_with_allowed_filter( '/%s/flavors?limit=1&marker=1&additional=something' % fakes.FAKE_PROJECT_ID, req=req) def test_list_detail_flavors_with_additional_filter_old_version(self): expected = { "ram": fakes.FLAVORS['2'].memory_mb, "disk": fakes.FLAVORS['2'].root_gb, "vcpus": fakes.FLAVORS['2'].vcpus, "os-flavor-access:is_public": True, "rxtx_factor": '', "OS-FLV-EXT-DATA:ephemeral": fakes.FLAVORS['2'].ephemeral_gb, "OS-FLV-DISABLED:disabled": fakes.FLAVORS['2'].disabled, "swap": fakes.FLAVORS['2'].swap } req = self.fake_request.blank( '/%s/flavors?limit=1&marker=1&additional=something' % fakes.FAKE_PROJECT_ID, version='2.74') self._test_list_flavors_with_allowed_filter( '/%s/flavors/detail?limit=1&marker=1&additional=something' % fakes.FAKE_PROJECT_ID, expected, req=req) def _test_list_flavors_with_additional_filter(self, url): controller_list = self.controller.index if 'detail' in url: controller_list = self.controller.detail req = self._build_request(url) self.assertRaises(exception.ValidationError, controller_list, req) def test_list_flavors_with_additional_filter(self): self._test_list_flavors_with_additional_filter( '/flavors?limit=1&marker=1&additional=something') def test_list_detail_flavors_with_additional_filter(self): self._test_list_flavors_with_additional_filter( '/flavors/detail?limit=1&marker=1&additional=something') @mock.patch('nova.objects.FlavorList.get_all') def test_list_flavor_detail_default_swap_value_old_version(self, mock_get): mock_get.return_value = objects.FlavorList( objects=[self.FLAVOR_WITH_NO_SWAP]) req = self.fake_request.blank( '/%s/flavors/detail?limit=1' % fakes.FAKE_PROJECT_ID, version='2.74') response = self.controller.detail(req) response_list = response["flavors"] self.assertEqual(response_list[0]['swap'], "") @mock.patch('nova.objects.Flavor.get_by_flavor_id') def test_show_flavor_default_swap_value_old_version(self, mock_get): mock_get.return_value = self.FLAVOR_WITH_NO_SWAP req = self.fake_request.blank( '/%s/flavors/detail?limit=1' % fakes.FAKE_PROJECT_ID, version='2.74') response = self.controller.show(req, 1) response_list = response["flavor"] self.assertEqual(response_list['swap'], "") @mock.patch('nova.objects.FlavorList.get_all') def test_list_flavor_detail_default_swap_value(self, mock_get): mock_get.return_value = objects.FlavorList( objects=[self.FLAVOR_WITH_NO_SWAP]) req = self.fake_request.blank( '/%s/flavors/detail?limit=1' % fakes.FAKE_PROJECT_ID, version=self.microversion) response = self.controller.detail(req) response_list = response["flavors"] self.assertEqual(response_list[0]['swap'], 0) @mock.patch('nova.objects.Flavor.get_by_flavor_id') def test_show_flavor_default_swap_value(self, mock_get): mock_get.return_value = self.FLAVOR_WITH_NO_SWAP req = self.fake_request.blank( '/%s/flavors/detail?limit=1' % fakes.FAKE_PROJECT_ID, 
version=self.microversion) response = self.controller.show(req, 1) response_list = response["flavor"] self.assertEqual(response_list['swap'], 0) class DisabledFlavorsWithRealDBTestV21(test.TestCase): """Tests that disabled flavors should not be shown nor listed.""" Controller = flavors_v21.FlavorsController _prefix = "/v2" fake_request = fakes.HTTPRequestV21 def setUp(self): super(DisabledFlavorsWithRealDBTestV21, self).setUp() # Add a new disabled type to the list of flavors self.req = self.fake_request.blank(self._prefix + '/flavors') self.context = self.req.environ['nova.context'] self.admin_context = context.get_admin_context() self.disabled_type = self._create_disabled_instance_type() self.addCleanup(self.disabled_type.destroy) self.inst_types = objects.FlavorList.get_all(self.admin_context) self.controller = self.Controller() def _create_disabled_instance_type(self): flavor = objects.Flavor(context=self.admin_context, name='foo.disabled', flavorid='10.disabled', memory_mb=512, vcpus=2, root_gb=1, ephemeral_gb=0, swap=0, rxtx_factor=1.0, vcpu_weight=1, disabled=True, is_public=True, extra_specs={}, projects=[]) flavor.create() return flavor def test_index_should_not_list_disabled_flavors_to_user(self): self.context.is_admin = False flavor_list = self.controller.index(self.req)['flavors'] api_flavorids = set(f['id'] for f in flavor_list) db_flavorids = set(i['flavorid'] for i in self.inst_types) disabled_flavorid = str(self.disabled_type['flavorid']) self.assertIn(disabled_flavorid, db_flavorids) self.assertEqual(db_flavorids - set([disabled_flavorid]), api_flavorids) def test_index_should_list_disabled_flavors_to_admin(self): self.context.is_admin = True flavor_list = self.controller.index(self.req)['flavors'] api_flavorids = set(f['id'] for f in flavor_list) db_flavorids = set(i['flavorid'] for i in self.inst_types) disabled_flavorid = str(self.disabled_type['flavorid']) self.assertIn(disabled_flavorid, db_flavorids) self.assertEqual(db_flavorids, api_flavorids) def test_show_should_include_disabled_flavor_for_user(self): """Counterintuitively we should show disabled flavors to all users and not just admins. The reason is that, when a user performs a server-show request, we want to be able to display the pretty flavor name ('512 MB Instance') and not just the flavor-id even if the flavor id has been marked disabled. 
""" self.context.is_admin = False flavor = self.controller.show( self.req, self.disabled_type['flavorid'])['flavor'] self.assertEqual(flavor['name'], self.disabled_type['name']) def test_show_should_include_disabled_flavor_for_admin(self): self.context.is_admin = True flavor = self.controller.show( self.req, self.disabled_type['flavorid'])['flavor'] self.assertEqual(flavor['name'], self.disabled_type['name']) class ParseIsPublicTestV21(test.TestCase): Controller = flavors_v21.FlavorsController def setUp(self): super(ParseIsPublicTestV21, self).setUp() self.controller = self.Controller() def assertPublic(self, expected, is_public): self.assertIs(expected, self.controller._parse_is_public(is_public), '%s did not return %s' % (is_public, expected)) def test_None(self): self.assertPublic(True, None) def test_truthy(self): self.assertPublic(True, True) self.assertPublic(True, 't') self.assertPublic(True, 'true') self.assertPublic(True, 'yes') self.assertPublic(True, '1') def test_falsey(self): self.assertPublic(False, False) self.assertPublic(False, 'f') self.assertPublic(False, 'false') self.assertPublic(False, 'no') self.assertPublic(False, '0') def test_string_none(self): self.assertPublic(None, 'none') self.assertPublic(None, 'None') def test_other(self): self.assertRaises( webob.exc.HTTPBadRequest, self.assertPublic, None, 'other') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_flavors_extra_specs.py0000664000175000017500000004327400000000000027565 0ustar00zuulzuul00000000000000# Copyright 2011 University of Southern California # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock import testtools import webob from nova.api.openstack.compute import flavors_extraspecs \ as flavorextraspecs_v21 from nova import exception from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit.objects import test_flavor def return_create_flavor_extra_specs(context, flavor_id, extra_specs, *args, **kwargs): return stub_flavor_extra_specs() def return_flavor_extra_specs(context, flavor_id): return stub_flavor_extra_specs() def return_flavor_extra_specs_item(context, flavor_id, key): return {key: stub_flavor_extra_specs()[key]} def return_empty_flavor_extra_specs(context, flavor_id): return {} def delete_flavor_extra_specs(context, flavor_id, key): pass def stub_flavor_extra_specs(): specs = { 'hw:cpu_policy': 'shared', 'hw:numa_nodes': '1', } return specs class FlavorsExtraSpecsTestV21(test.TestCase): bad_request = exception.ValidationError flavorextraspecs = flavorextraspecs_v21 def _get_request(self, url, use_admin_context=False, version=None): kwargs = {} if version: kwargs['version'] = version req_url = '/v2/%s/flavors/%s' % (fakes.FAKE_PROJECT_ID, url) return fakes.HTTPRequest.blank( req_url, use_admin_context=use_admin_context, **kwargs, ) def setUp(self): super(FlavorsExtraSpecsTestV21, self).setUp() fakes.stub_out_key_pair_funcs(self) self.controller = self.flavorextraspecs.FlavorExtraSpecsController() def test_index(self): flavor = dict(test_flavor.fake_flavor, extra_specs={'hw:numa_nodes': '1'}) req = self._get_request('1/os-extra_specs') with mock.patch( 'nova.objects.Flavor._flavor_get_by_flavor_id_from_db' ) as mock_get: mock_get.return_value = flavor res_dict = self.controller.index(req, 1) self.assertEqual('1', res_dict['extra_specs']['hw:numa_nodes']) @mock.patch('nova.objects.Flavor.get_by_flavor_id') def test_index_no_data(self, mock_get): flavor = objects.Flavor(flavorid='1', extra_specs={}) mock_get.return_value = flavor req = self._get_request('1/os-extra_specs') res_dict = self.controller.index(req, 1) self.assertEqual(0, len(res_dict['extra_specs'])) @mock.patch('nova.objects.Flavor.get_by_flavor_id') def test_index_flavor_not_found(self, mock_get): req = self._get_request('1/os-extra_specs', use_admin_context=True) mock_get.side_effect = exception.FlavorNotFound(flavor_id='1') self.assertRaises(webob.exc.HTTPNotFound, self.controller.index, req, 1) def test_show(self): flavor = objects.Flavor( flavorid='1', extra_specs={'hw:numa_nodes': '1'} ) req = self._get_request('1/os-extra_specs/hw:numa_nodes') with mock.patch('nova.objects.Flavor.get_by_flavor_id') as mock_get: mock_get.return_value = flavor res_dict = self.controller.show(req, 1, 'hw:numa_nodes') self.assertEqual('1', res_dict['hw:numa_nodes']) @mock.patch('nova.objects.Flavor.get_by_flavor_id') def test_show_spec_not_found(self, mock_get): mock_get.return_value = objects.Flavor(extra_specs={}) req = self._get_request('1/os-extra_specs/hw:cpu_thread_policy') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, 1, 'hw:cpu_thread_policy') def test_not_found_because_flavor(self): req = self._get_request('1/os-extra_specs/hw:numa_nodes', use_admin_context=True) with mock.patch('nova.objects.Flavor.get_by_flavor_id') as mock_get: mock_get.side_effect = exception.FlavorNotFound(flavor_id='1') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, 1, 'hw:numa_nodes') self.assertRaises(webob.exc.HTTPNotFound, self.controller.update, req, 1, 'hw:numa_nodes', body={'hw:numa_nodes': '1'}) 
self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, req, 1, 'hw:numa_nodes') req = self._get_request('1/os-extra_specs', use_admin_context=True) with mock.patch('nova.objects.Flavor.get_by_flavor_id') as mock_get: mock_get.side_effect = exception.FlavorNotFound(flavor_id='1') self.assertRaises(webob.exc.HTTPNotFound, self.controller.create, req, 1, body={'extra_specs': { 'hw:numa_nodes': '1'}}) @mock.patch('nova.objects.Flavor._flavor_get_by_flavor_id_from_db') def test_delete(self, mock_get): flavor = dict(test_flavor.fake_flavor, extra_specs={'hw:numa_nodes': '1'}) req = self._get_request('1/os-extra_specs/hw:numa_nodes', use_admin_context=True) mock_get.return_value = flavor with mock.patch('nova.objects.Flavor.save'): self.controller.delete(req, 1, 'hw:numa_nodes') def test_delete_spec_not_found(self): req = self._get_request('1/os-extra_specs/key6', use_admin_context=True) self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, req, 1, 'key6') def test_create(self): body = { 'extra_specs': { 'hw:cpu_policy': 'shared', 'hw:numa_nodes': '1', } } req = self._get_request('1/os-extra_specs', use_admin_context=True) res_dict = self.controller.create(req, 1, body=body) self.assertEqual('shared', res_dict['extra_specs']['hw:cpu_policy']) self.assertEqual('1', res_dict['extra_specs']['hw:numa_nodes']) def test_create_flavor_not_found(self): body = {'extra_specs': {'hw:numa_nodes': '1'}} req = self._get_request('1/os-extra_specs', use_admin_context=True) with mock.patch('nova.objects.Flavor.save', side_effect=exception.FlavorNotFound(flavor_id='')): self.assertRaises(webob.exc.HTTPNotFound, self.controller.create, req, 1, body=body) def test_create_flavor_db_duplicate(self): body = {'extra_specs': {'hw:numa_nodes': '1'}} req = self._get_request('1/os-extra_specs', use_admin_context=True) with mock.patch( 'nova.objects.Flavor.save', side_effect=exception.FlavorExtraSpecUpdateCreateFailed( id='', retries=10)): self.assertRaises(webob.exc.HTTPConflict, self.controller.create, req, 1, body=body) def _test_create_bad_request(self, body): self.stub_out('nova.objects.flavor._flavor_extra_specs_add', return_create_flavor_extra_specs) req = self._get_request('1/os-extra_specs', use_admin_context=True) self.assertRaises(self.bad_request, self.controller.create, req, 1, body=body) def test_create_empty_body(self): self._test_create_bad_request('') def test_create_non_dict_extra_specs(self): self._test_create_bad_request({"extra_specs": "non_dict"}) def test_create_non_string_key(self): self._test_create_bad_request({"extra_specs": {None: "value1"}}) def test_create_non_string_value(self): self._test_create_bad_request({"extra_specs": {"hw:numa_nodes": None}}) def test_create_zero_length_key(self): self._test_create_bad_request({"extra_specs": {"": "value1"}}) def test_create_long_key(self): key = "a" * 256 self._test_create_bad_request({"extra_specs": {key: "value1"}}) def test_create_long_value(self): value = "a" * 256 self._test_create_bad_request( {"extra_specs": {"hw_numa_nodes": value}} ) def test_create_really_long_integer_value(self): value = 10 ** 1000 req = self._get_request('1/os-extra_specs', use_admin_context=True) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, 1, body={"extra_specs": {"key1": value}}) def test_create_invalid_specs(self): """Test generic invalid specs. These are invalid regardless of the validation scheme, if any, in use. 
""" invalid_specs = { 'key1/': 'value1', '': 'value1', '$$akey$': 'value1', '!akey': 'value1', '': 'value1', } for key, value in invalid_specs.items(): body = {"extra_specs": {key: value}} req = self._get_request('1/os-extra_specs', use_admin_context=True) self.assertRaises(self.bad_request, self.controller.create, req, 1, body=body) def test_create_invalid_known_namespace(self): """Test behavior of validator with specs from known namespace.""" invalid_specs = { 'hw:numa_nodes': 'foo', 'hw:cpu_policy': 'sharrred', 'hw:cpu_policyyyyyyy': 'shared', 'hw:foo': 'bar', 'resources:VCPU': 'N', 'resources_foo:VCPU': 'N', 'resources:VVCPU': '1', 'resources_foo:VVCPU': '1', 'trait:STORAGE_DISK_SSD': 'forbiden', 'trait_foo:HW_CPU_X86_AVX2': 'foo', 'trait:bar': 'required', 'trait_foo:bar': 'required', 'trait:CUSTOM_foo': 'required', 'trait:CUSTOM_FOO': 'bar', 'trait_foo:CUSTOM_BAR': 'foo', } for key, value in invalid_specs.items(): body = {'extra_specs': {key: value}} req = self._get_request( '1/os-extra_specs', use_admin_context=True, version='2.86', ) with testtools.ExpectedException( self.bad_request, 'Validation failed; .*' ): self.controller.create(req, 1, body=body) def test_create_invalid_unknown_namespace(self): """Test behavior of validator with specs from unknown namespace.""" unknown_specs = { 'foo': 'bar', 'foo:bar': 'baz', 'hww:cpu_policy': 'sharrred', } for key, value in unknown_specs.items(): body = {'extra_specs': {key: value}} req = self._get_request( '1/os-extra_specs', use_admin_context=True, version='2.86', ) self.controller.create(req, 1, body=body) @mock.patch('nova.objects.flavor._flavor_extra_specs_add') def test_create_valid_specs(self, mock_flavor_extra_specs): valid_specs = { 'hide_hypervisor_id': 'true', 'hw:hide_hypervisor_id': 'true', 'hw:numa_nodes': '1', 'hw:numa_cpus.0': '0-3,8-9,11,10', 'resources:VCPU': '4', 'resources_foo:VCPU': '4', 'resources:CUSTOM_FOO': '1', 'resources_foo:CUSTOM_BAR': '2', 'trait:STORAGE_DISK_SSD': 'forbidden', 'trait_foo:HW_CPU_X86_AVX2': 'required', 'trait:CUSTOM_FOO': 'forbidden', 'trait_foo:CUSTOM_BAR': 'required', } mock_flavor_extra_specs.side_effect = return_create_flavor_extra_specs for key, value in valid_specs.items(): body = {"extra_specs": {key: value}} req = self._get_request( '1/os-extra_specs', use_admin_context=True, version='2.86', ) res_dict = self.controller.create(req, 1, body=body) self.assertEqual(value, res_dict['extra_specs'][key]) @mock.patch('nova.objects.flavor._flavor_extra_specs_add') def test_update_item(self, mock_add): mock_add.side_effect = return_create_flavor_extra_specs body = {'hw:cpu_policy': 'shared'} req = self._get_request('1/os-extra_specs/hw:cpu_policy', use_admin_context=True) res_dict = self.controller.update(req, 1, 'hw:cpu_policy', body=body) self.assertEqual('shared', res_dict['hw:cpu_policy']) def _test_update_item_bad_request(self, body): req = self._get_request('1/os-extra_specs/hw:cpu_policy', use_admin_context=True) self.assertRaises(self.bad_request, self.controller.update, req, 1, 'hw:cpu_policy', body=body) def test_update_item_empty_body(self): self._test_update_item_bad_request('') def test_update_item_too_many_keys(self): body = {"hw:cpu_policy": "dedicated", "hw:numa_nodes": "2"} self._test_update_item_bad_request(body) def test_update_item_non_dict_extra_specs(self): self._test_update_item_bad_request("non_dict") def test_update_item_non_string_key(self): self._test_update_item_bad_request({None: "value1"}) def test_update_item_non_string_value(self): 
self._test_update_item_bad_request({"hw:cpu_policy": None}) def test_update_item_zero_length_key(self): self._test_update_item_bad_request({"": "value1"}) def test_update_item_long_key(self): key = "a" * 256 self._test_update_item_bad_request({key: "value1"}) def test_update_item_long_value(self): value = "a" * 256 self._test_update_item_bad_request({"key1": value}) def test_update_item_body_uri_mismatch(self): body = {'hw:cpu_policy': 'shared'} req = self._get_request('1/os-extra_specs/bad', use_admin_context=True) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, 1, 'bad', body=body) def test_update_flavor_not_found(self): body = {'hw:cpu_policy': 'shared'} req = self._get_request('1/os-extra_specs/hw:cpu_policy', use_admin_context=True) with mock.patch('nova.objects.Flavor.save', side_effect=exception.FlavorNotFound(flavor_id='')): self.assertRaises(webob.exc.HTTPNotFound, self.controller.update, req, 1, 'hw:cpu_policy', body=body) def test_update_flavor_db_duplicate(self): body = {'hw:cpu_policy': 'shared'} req = self._get_request('1/os-extra_specs/hw:cpu_policy', use_admin_context=True) with mock.patch( 'nova.objects.Flavor.save', side_effect=exception.FlavorExtraSpecUpdateCreateFailed( id=1, retries=5)): self.assertRaises(webob.exc.HTTPConflict, self.controller.update, req, 1, 'hw:cpu_policy', body=body) def test_update_really_long_integer_value(self): body = {'hw:numa_nodes': 10 ** 1000} req = self._get_request('1/os-extra_specs/hw:numa_nodes', use_admin_context=True) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, 1, 'hw:numa_nodes', body=body) def test_update_invalid_specs_known_namespace(self): """Test behavior of validator with specs from known namespace.""" invalid_specs = { 'hw:numa_nodes': 'foo', 'hw:cpu_policy': 'sharrred', 'hw:cpu_policyyyyyyy': 'shared', 'hw:foo': 'bar', } for key, value in invalid_specs.items(): body = {key: value} req = self._get_request( '1/os-extra_specs/{key}', use_admin_context=True, version='2.86', ) with testtools.ExpectedException( self.bad_request, 'Validation failed; .*' ): self.controller.update(req, 1, key, body=body) def test_update_invalid_specs_unknown_namespace(self): """Test behavior of validator with specs from unknown namespace.""" unknown_specs = { 'foo': 'bar', 'foo:bar': 'baz', 'hww:cpu_policy': 'sharrred', } for key, value in unknown_specs.items(): body = {key: value} req = self._get_request( f'1/os-extra_specs/{key}', use_admin_context=True, version='2.86', ) self.controller.update(req, 1, key, body=body) @mock.patch('nova.objects.flavor._flavor_extra_specs_add') def test_update_valid_specs(self, mock_flavor_extra_specs): valid_specs = { 'hide_hypervisor_id': 'true', 'hw:numa_nodes': '1', 'hw:numa_cpus.0': '0-3,8-9,11,10', } mock_flavor_extra_specs.side_effect = return_create_flavor_extra_specs for key, value in valid_specs.items(): body = {key: value} req = self._get_request( f'1/os-extra_specs/{key}', use_admin_context=True, version='2.86', ) res_dict = self.controller.update(req, 1, key, body=body) self.assertEqual(value, res_dict[key]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_floating_ip_pools.py0000664000175000017500000000653200000000000027214 0ustar00zuulzuul00000000000000# Copyright (c) 2011 X.commerce, a business unit of eBay Inc. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.api.openstack.compute import floating_ip_pools \ as fipp_v21 from nova import context from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes def fake_get_floating_ip_pools(*args, **kwargs): return [ {'name': 'nova'}, {'name': 'other'}, ] class FloatingIpPoolTestV21(test.NoDBTestCase): floating_ip_pools = fipp_v21 def setUp(self): super(FloatingIpPoolTestV21, self).setUp() self.context = context.RequestContext('fake', fakes.FAKE_PROJECT_ID) self.controller = self.floating_ip_pools.FloatingIPPoolsController() self.req = fakes.HTTPRequest.blank('') def test_translate_floating_ip_pools_view(self): pools = fake_get_floating_ip_pools(None, self.context) view = self.floating_ip_pools._translate_floating_ip_pools_view(pools) self.assertIn('floating_ip_pools', view) self.assertEqual(view['floating_ip_pools'][0]['name'], pools[0]['name']) self.assertEqual(view['floating_ip_pools'][1]['name'], pools[1]['name']) def test_floating_ips_pools_list(self): with mock.patch.object(self.controller.network_api, 'get_floating_ip_pools', fake_get_floating_ip_pools): res_dict = self.controller.index(self.req) pools = fake_get_floating_ip_pools(None, self.context) response = { 'floating_ip_pools': [{'name': pool['name']} for pool in pools], } self.assertEqual(response, res_dict) class FloatingIPPoolsPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(FloatingIPPoolsPolicyEnforcementV21, self).setUp() self.controller = fipp_v21.FloatingIPPoolsController() self.req = fakes.HTTPRequest.blank('') def test_change_password_policy_failed(self): rule_name = "os_compute_api:os-floating-ip-pools" rule = {rule_name: "project:non_fake"} self.policy.set_rules(rule) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.index, self.req) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) class FloatingIpPoolDeprecationTest(test.NoDBTestCase): def setUp(self): super(FloatingIpPoolDeprecationTest, self).setUp() self.controller = fipp_v21.FloatingIPPoolsController() self.req = fakes.HTTPRequest.blank('', version='2.36') def test_not_found_for_fip_pool_api(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.index, self.req) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_floating_ips.py0000664000175000017500000002103700000000000026160 0ustar00zuulzuul00000000000000# Copyright (c) 2011 X.commerce, a business unit of eBay Inc. # Copyright 2011 Eldar Nugaev # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils.fixture import uuidsentinel as uuids import webob from nova.api.openstack.compute import floating_ips as fips_v21 from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes class FloatingIpTestV21(test.NoDBTestCase): floating_ips = fips_v21 def setUp(self): super(FloatingIpTestV21, self).setUp() self.controller = self.floating_ips.FloatingIPController() def test_floatingip_delete(self): req = fakes.HTTPRequest.blank('') fip_val = { 'floating_ip_address': '1.1.1.1', 'fixed_ip_address': '192.168.1.2', } with test.nested( mock.patch.object(self.controller.network_api, 'disassociate_floating_ip'), mock.patch.object(self.controller.network_api, 'disassociate_and_release_floating_ip'), mock.patch.object(self.controller.network_api, 'release_floating_ip'), mock.patch.object(self.controller.network_api, 'get_instance_id_by_floating_address', return_value=None), mock.patch.object(self.controller.network_api, 'get_floating_ip', return_value=fip_val)) as ( disoc_fip, dis_and_del, rel_fip, _, _): self.controller.delete(req, 1) self.assertFalse(disoc_fip.called) self.assertFalse(rel_fip.called) # Only disassociate_and_release_floating_ip is # called if using neutron self.assertTrue(dis_and_del.called) def _test_floatingip_delete_not_found(self, ex, expect_ex=webob.exc.HTTPNotFound): req = fakes.HTTPRequest.blank('') with mock.patch.object(self.controller.network_api, 'get_floating_ip', side_effect=ex): self.assertRaises(expect_ex, self.controller.delete, req, 1) def test_floatingip_delete_not_found_ip(self): ex = exception.FloatingIpNotFound(id=1) self._test_floatingip_delete_not_found(ex) def test_floatingip_delete_not_found(self): ex = exception.NotFound self._test_floatingip_delete_not_found(ex) def test_floatingip_delete_invalid_id(self): ex = exception.InvalidID(id=1) self._test_floatingip_delete_not_found(ex, webob.exc.HTTPBadRequest) def _test_floatingip_delete_error_disassociate(self, raised_exc, expected_exc): """Ensure that various exceptions are correctly transformed. Handle the myriad exceptions that could be raised from the 'disassociate_and_release_floating_ip' call. 
""" req = fakes.HTTPRequest.blank('') with mock.patch.object(self.controller.network_api, 'get_floating_ip', return_value={'floating_ip_address': 'foo'}), \ mock.patch.object(self.controller.network_api, 'get_instance_id_by_floating_address', return_value=None), \ mock.patch.object(self.controller.network_api, 'disassociate_and_release_floating_ip', side_effect=raised_exc): self.assertRaises(expected_exc, self.controller.delete, req, 1) def test_floatingip_delete_error_disassociate_1(self): raised_exc = exception.Forbidden expected_exc = webob.exc.HTTPForbidden self._test_floatingip_delete_error_disassociate(raised_exc, expected_exc) def test_floatingip_delete_error_disassociate_2(self): raised_exc = exception.FloatingIpNotFoundForAddress(address='1.1.1.1') expected_exc = webob.exc.HTTPNotFound self._test_floatingip_delete_error_disassociate(raised_exc, expected_exc) class FloatingIPPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(FloatingIPPolicyEnforcementV21, self).setUp() self.controller = fips_v21.FloatingIPController() self.req = fakes.HTTPRequest.blank('') def _common_policy_check(self, func, *arg, **kwarg): rule_name = "os_compute_api:os-floating-ips" rule = {rule_name: "project:non_fake"} self.policy.set_rules(rule) exc = self.assertRaises( exception.PolicyNotAuthorized, func, *arg, **kwarg) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_index_policy_failed(self): self._common_policy_check(self.controller.index, self.req) def test_show_policy_failed(self): self._common_policy_check(self.controller.show, self.req, uuids.fake) def test_create_policy_failed(self): self._common_policy_check(self.controller.create, self.req) def test_delete_policy_failed(self): self._common_policy_check(self.controller.delete, self.req, uuids.fake) class FloatingIPActionPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(FloatingIPActionPolicyEnforcementV21, self).setUp() self.controller = fips_v21.FloatingIPActionController() self.req = fakes.HTTPRequest.blank('') def _common_policy_check(self, func, *arg, **kwarg): rule_name = "os_compute_api:os-floating-ips" rule = {rule_name: "project:non_fake"} self.policy.set_rules(rule) exc = self.assertRaises( exception.PolicyNotAuthorized, func, *arg, **kwarg) self.assertEqual( "Policy doesn't allow %s to be performed." 
% rule_name, exc.format_message()) def test_add_policy_failed(self): body = dict(addFloatingIp=dict(address='10.10.10.11')) self._common_policy_check( self.controller._add_floating_ip, self.req, uuids.fake, body=body) def test_remove_policy_failed(self): body = dict(removeFloatingIp=dict(address='10.10.10.10')) self._common_policy_check( self.controller._remove_floating_ip, self.req, uuids.fake, body=body) class FloatingIpsDeprecationTest(test.NoDBTestCase): def setUp(self): super(FloatingIpsDeprecationTest, self).setUp() self.req = fakes.HTTPRequest.blank('', version='2.36') self.controller = fips_v21.FloatingIPController() def test_all_apis_return_not_found(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.show, self.req, uuids.fake) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.index, self.req) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.create, self.req, {}) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.delete, self.req, uuids.fake) class FloatingIpActionDeprecationTest(test.NoDBTestCase): def setUp(self): super(FloatingIpActionDeprecationTest, self).setUp() self.req = fakes.HTTPRequest.blank('', version='2.44') self.controller = fips_v21.FloatingIPActionController() def test_add_floating_ip_not_found(self): body = dict(addFloatingIp=dict(address='10.10.10.11')) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller._add_floating_ip, self.req, uuids.fake, body=body) def test_remove_floating_ip_not_found(self): body = dict(removeFloatingIp=dict(address='10.10.10.10')) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller._remove_floating_ip, self.req, uuids.fake, body=body) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_hosts.py0000664000175000017500000004761400000000000024653 0ustar00zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils.fixture import uuidsentinel import testtools import webob.exc from nova.api.openstack.compute import hosts as os_hosts_v21 from nova.compute import power_state from nova.compute import vm_states from nova import context as context_maker from nova.db import api as db from nova import exception from nova import test from nova.tests import fixtures from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_hosts def stub_service_get_all(context, disabled=None): return fake_hosts.SERVICES_LIST def stub_service_get_by_host_and_binary(context, host_name, binary): for service in stub_service_get_all(context): if service['host'] == host_name and service['binary'] == binary: return service def stub_set_host_enabled(self, context, host_name, enabled): """Simulates three possible behaviours for VM drivers or compute drivers when enabling or disabling a host. 
'enabled' means new instances can go to this host 'disabled' means they can't """ results = {True: "enabled", False: "disabled"} if host_name == "notimplemented": # The vm driver for this host doesn't support this feature raise NotImplementedError() elif host_name == "dummydest": # The host does not exist raise exception.ComputeHostNotFound(host=host_name) elif host_name == "service_not_available": # The service is not available raise exception.ComputeServiceUnavailable(host=host_name) elif host_name == "host_c2": # Simulate a failure return results[not enabled] else: # Do the right thing return results[enabled] def stub_set_host_maintenance(self, context, host_name, mode): # We'll simulate success and failure by assuming # that 'host_c1' always succeeds, and 'host_c2' # always fails results = {True: "on_maintenance", False: "off_maintenance"} if host_name == "notimplemented": # The vm driver for this host doesn't support this feature raise NotImplementedError() elif host_name == "dummydest": # The host does not exist raise exception.ComputeHostNotFound(host=host_name) elif host_name == "service_not_available": # The service is not available raise exception.ComputeServiceUnavailable(host=host_name) elif host_name == "host_c2": # Simulate a failure return results[not mode] else: # Do the right thing return results[mode] def stub_host_power_action(self, context, host_name, action): if host_name == "notimplemented": raise NotImplementedError() elif host_name == "dummydest": # The host does not exist raise exception.ComputeHostNotFound(host=host_name) elif host_name == "service_not_available": # The service is not available raise exception.ComputeServiceUnavailable(host=host_name) return action def _create_instance(**kwargs): """Create a test instance.""" ctxt = context_maker.get_admin_context() return db.instance_create(ctxt, _create_instance_dict(**kwargs)) def _create_instance_dict(**kwargs): """Create a dictionary for a test instance.""" inst = {} inst['image_ref'] = 'cedef40a-ed67-4d10-800e-17455edce175' inst['reservation_id'] = 'r-fakeres' inst['user_id'] = kwargs.get('user_id', 'admin') inst['project_id'] = kwargs.get('project_id', fakes.FAKE_PROJECT_ID) inst['instance_type_id'] = '1' if 'host' in kwargs: inst['host'] = kwargs.get('host') inst['vcpus'] = kwargs.get('vcpus', 1) inst['memory_mb'] = kwargs.get('memory_mb', 20) inst['root_gb'] = kwargs.get('root_gb', 30) inst['ephemeral_gb'] = kwargs.get('ephemeral_gb', 30) inst['vm_state'] = kwargs.get('vm_state', vm_states.ACTIVE) inst['power_state'] = kwargs.get('power_state', power_state.RUNNING) inst['task_state'] = kwargs.get('task_state', None) inst['availability_zone'] = kwargs.get('availability_zone', None) inst['ami_launch_index'] = 0 inst['launched_on'] = kwargs.get('launched_on', 'dummy') return inst class HostTestCaseV21(test.TestCase): """Test Case for hosts.""" validation_ex = exception.ValidationError Controller = os_hosts_v21.HostController policy_ex = exception.PolicyNotAuthorized def _setup_stubs(self): # Pretend we have fake_hosts.HOST_LIST in the DB self.stub_out('nova.db.api.service_get_all', stub_service_get_all) # Only hosts in our fake DB exist self.stub_out('nova.db.api.service_get_by_host_and_binary', stub_service_get_by_host_and_binary) # 'host_c1' always succeeds, and 'host_c2' self.stub_out('nova.compute.api.HostAPI.set_host_enabled', stub_set_host_enabled) # 'host_c1' always succeeds, and 'host_c2' self.stub_out('nova.compute.api.HostAPI.set_host_maintenance', stub_set_host_maintenance) 
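        # The power-action stub below follows the same convention as the two
        # stubs above: 'notimplemented' raises NotImplementedError,
        # 'dummydest' raises ComputeHostNotFound, 'service_not_available'
        # raises ComputeServiceUnavailable, and any other host succeeds.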
self.stub_out('nova.compute.api.HostAPI.host_power_action', stub_host_power_action) def setUp(self): super(HostTestCaseV21, self).setUp() self.controller = self.Controller() self.hosts_api = self.controller.api self.req = fakes.HTTPRequest.blank('', use_admin_context=True) self.useFixture(fixtures.SingleCellSimple()) self._setup_stubs() def _test_host_update(self, host, key, val, expected_value): body = {key: val} result = self.controller.update(self.req, host, body=body) self.assertEqual(result[key], expected_value) def test_list_hosts(self): """Verify that the compute hosts are returned.""" result = self.controller.index(self.req) self.assertIn('hosts', result) hosts = result['hosts'] self.assertEqual(fake_hosts.HOST_LIST, hosts) def test_list_host_with_multi_filter(self): query_string = 'zone=nova1&zone=nova' req = fakes.HTTPRequest.blank('', use_admin_context=True, query_string=query_string) result = self.controller.index(req) self.assertIn('hosts', result) hosts = result['hosts'] self.assertEqual(fake_hosts.HOST_LIST_NOVA_ZONE, hosts) def test_list_host_query_allow_negative_int_as_string(self): req = fakes.HTTPRequest.blank('', use_admin_context=True, query_string='zone=-1') result = self.controller.index(req) self.assertIn('hosts', result) hosts = result['hosts'] self.assertEqual([], hosts) def test_list_host_query_allow_int_as_string(self): req = fakes.HTTPRequest.blank('', use_admin_context=True, query_string='zone=123') result = self.controller.index(req) self.assertIn('hosts', result) hosts = result['hosts'] self.assertEqual([], hosts) def test_list_host_with_unknown_filter(self): query_string = 'unknown_filter=abc' req = fakes.HTTPRequest.blank('', use_admin_context=True, query_string=query_string) result = self.controller.index(req) self.assertIn('hosts', result) hosts = result['hosts'] self.assertEqual(fake_hosts.HOST_LIST, hosts) def test_list_host_with_hypervisor_and_additional_filter(self): query_string = 'zone=nova&additional_filter=nova2' req = fakes.HTTPRequest.blank('', use_admin_context=True, query_string=query_string) result = self.controller.index(req) self.assertIn('hosts', result) hosts = result['hosts'] self.assertEqual(fake_hosts.HOST_LIST_NOVA_ZONE, hosts) def test_disable_host(self): self._test_host_update('host_c1', 'status', 'disable', 'disabled') self._test_host_update('host_c2', 'status', 'disable', 'enabled') def test_enable_host(self): self._test_host_update('host_c1', 'status', 'enable', 'enabled') self._test_host_update('host_c2', 'status', 'enable', 'disabled') def test_enable_maintenance(self): self._test_host_update('host_c1', 'maintenance_mode', 'enable', 'on_maintenance') def test_disable_maintenance(self): self._test_host_update('host_c1', 'maintenance_mode', 'disable', 'off_maintenance') def _test_host_update_notimpl(self, key, val): def stub_service_get_all_notimpl(self, req): return [{'host': 'notimplemented', 'topic': None, 'availability_zone': None}] self.stub_out('nova.db.api.service_get_all', stub_service_get_all_notimpl) body = {key: val} self.assertRaises(webob.exc.HTTPNotImplemented, self.controller.update, self.req, 'notimplemented', body=body) def test_disable_host_notimpl(self): self._test_host_update_notimpl('status', 'disable') def test_enable_maintenance_notimpl(self): self._test_host_update_notimpl('maintenance_mode', 'enable') def test_host_startup(self): result = self.controller.startup(self.req, "host_c1") self.assertEqual(result["power_action"], "startup") def test_host_shutdown(self): result = 
self.controller.shutdown(self.req, "host_c1") self.assertEqual(result["power_action"], "shutdown") def test_host_reboot(self): result = self.controller.reboot(self.req, "host_c1") self.assertEqual(result["power_action"], "reboot") def _test_host_power_action_notimpl(self, method): self.assertRaises(webob.exc.HTTPNotImplemented, method, self.req, "notimplemented") def test_host_startup_notimpl(self): self._test_host_power_action_notimpl(self.controller.startup) def test_host_shutdown_notimpl(self): self._test_host_power_action_notimpl(self.controller.shutdown) def test_host_reboot_notimpl(self): self._test_host_power_action_notimpl(self.controller.reboot) def test_host_status_bad_host(self): # A host given as an argument does not exist. self.req.environ["nova.context"].is_admin = True dest = 'dummydest' with testtools.ExpectedException(webob.exc.HTTPNotFound, ".*%s.*" % dest): self.controller.update(self.req, dest, body={'status': 'enable'}) def test_host_maintenance_bad_host(self): # A host given as an argument does not exist. self.req.environ["nova.context"].is_admin = True dest = 'dummydest' with testtools.ExpectedException(webob.exc.HTTPNotFound, ".*%s.*" % dest): self.controller.update(self.req, dest, body={'maintenance_mode': 'enable'}) def test_host_power_action_bad_host(self): # A host given as an argument does not exist. self.req.environ["nova.context"].is_admin = True dest = 'dummydest' with testtools.ExpectedException(webob.exc.HTTPNotFound, ".*%s.*" % dest): self.controller.reboot(self.req, dest) def test_host_status_bad_status(self): # A host given as an argument does not exist. self.req.environ["nova.context"].is_admin = True dest = 'service_not_available' with testtools.ExpectedException(webob.exc.HTTPBadRequest, ".*%s.*" % dest): self.controller.update(self.req, dest, body={'status': 'enable'}) def test_host_maintenance_bad_status(self): # A host given as an argument does not exist. self.req.environ["nova.context"].is_admin = True dest = 'service_not_available' with testtools.ExpectedException(webob.exc.HTTPBadRequest, ".*%s.*" % dest): self.controller.update(self.req, dest, body={'maintenance_mode': 'enable'}) def test_host_power_action_bad_status(self): # A host given as an argument does not exist. self.req.environ["nova.context"].is_admin = True dest = 'service_not_available' with testtools.ExpectedException(webob.exc.HTTPBadRequest, ".*%s.*" % dest): self.controller.reboot(self.req, dest) def test_bad_status_value(self): bad_body = {"status": "bad"} self.assertRaises(self.validation_ex, self.controller.update, self.req, "host_c1", body=bad_body) bad_body2 = {"status": "disablabc"} self.assertRaises(self.validation_ex, self.controller.update, self.req, "host_c1", body=bad_body2) def test_bad_update_key(self): bad_body = {"crazy": "bad"} self.assertRaises(self.validation_ex, self.controller.update, self.req, "host_c1", body=bad_body) def test_bad_update_key_and_correct_update_key(self): bad_body = {"status": "disable", "crazy": "bad"} self.assertRaises(self.validation_ex, self.controller.update, self.req, "host_c1", body=bad_body) def test_good_update_keys(self): body = {"status": "disable", "maintenance_mode": "enable"} result = self.controller.update(self.req, 'host_c1', body=body) self.assertEqual(result["host"], "host_c1") self.assertEqual(result["status"], "disabled") self.assertEqual(result["maintenance_mode"], "on_maintenance") def test_show_host_not_exist(self): # A host given as an argument does not exist. 
self.req.environ["nova.context"].is_admin = True dest = 'dummydest' with testtools.ExpectedException(webob.exc.HTTPNotFound, ".*%s.*" % dest): self.controller.show(self.req, dest) def _create_compute_service(self): """Create compute-manager(ComputeNode and Service record).""" ctxt = self.req.environ["nova.context"] dic = {'host': 'dummy', 'binary': 'nova-compute', 'topic': 'compute', 'report_count': 0} s_ref = db.service_create(ctxt, dic) dic = {'service_id': s_ref['id'], 'host': s_ref['host'], 'uuid': uuidsentinel.compute_node, 'vcpus': 16, 'memory_mb': 32, 'local_gb': 100, 'vcpus_used': 16, 'memory_mb_used': 32, 'local_gb_used': 10, 'hypervisor_type': 'qemu', 'hypervisor_version': 12003, 'cpu_info': '', 'stats': ''} db.compute_node_create(ctxt, dic) return db.service_get(ctxt, s_ref['id']) def test_show_no_project(self): """No instances are running on the given host.""" ctxt = context_maker.get_admin_context() s_ref = self._create_compute_service() result = self.controller.show(self.req, s_ref['host']) proj = ['(total)', '(used_now)', '(used_max)'] column = ['host', 'project', 'cpu', 'memory_mb', 'disk_gb'] self.assertEqual(len(result['host']), 3) for resource in result['host']: self.assertIn(resource['resource']['project'], proj) self.assertEqual(len(resource['resource']), 5) self.assertEqual(set(column), set(resource['resource'].keys())) db.service_destroy(ctxt, s_ref['id']) def test_show_works_correctly(self): """show() works correctly as expected.""" ctxt = context_maker.get_admin_context() s_ref = self._create_compute_service() i_ref1 = _create_instance(project_id='p-01', host=s_ref['host']) i_ref2 = _create_instance(project_id='p-02', vcpus=3, host=s_ref['host']) result = self.controller.show(self.req, s_ref['host']) proj = ['(total)', '(used_now)', '(used_max)', 'p-01', 'p-02'] column = ['host', 'project', 'cpu', 'memory_mb', 'disk_gb'] self.assertEqual(len(result['host']), 5) for resource in result['host']: self.assertIn(resource['resource']['project'], proj) self.assertEqual(len(resource['resource']), 5) self.assertEqual(set(column), set(resource['resource'].keys())) db.service_destroy(ctxt, s_ref['id']) db.instance_destroy(ctxt, i_ref1['uuid']) db.instance_destroy(ctxt, i_ref2['uuid']) def test_show_late_host_mapping_gone(self): s_ref = self._create_compute_service() with mock.patch.object(self.controller.api, 'instance_get_all_by_host') as m: m.side_effect = exception.HostMappingNotFound(name='something') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, s_ref['host']) def test_list_hosts_with_zone(self): query_string = 'zone=nova' req = fakes.HTTPRequest.blank('', use_admin_context=True, query_string=query_string) result = self.controller.index(req) self.assertIn('hosts', result) hosts = result['hosts'] self.assertEqual(fake_hosts.HOST_LIST_NOVA_ZONE, hosts) class HostsPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(HostsPolicyEnforcementV21, self).setUp() self.controller = os_hosts_v21.HostController() self.req = fakes.HTTPRequest.blank('') def test_index_policy_failed(self): rule_name = "os_compute_api:os-hosts" self.policy.set_rules({rule_name: "project_id:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.index, self.req) self.assertEqual( "Policy doesn't allow %s to be performed." 
% rule_name, exc.format_message()) def test_show_policy_failed(self): rule_name = "os_compute_api:os-hosts" self.policy.set_rules({rule_name: "project_id:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.show, self.req, 1) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) class HostControllerDeprecationTest(test.NoDBTestCase): def setUp(self): super(HostControllerDeprecationTest, self).setUp() self.controller = os_hosts_v21.HostController() self.req = fakes.HTTPRequest.blank('', version='2.43') def test_not_found_for_all_host_api(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.show, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.startup, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.shutdown, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.reboot, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.index, self.req) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.update, self.req, fakes.FAKE_UUID, body={}) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_hypervisor_status.py0000664000175000017500000000703000000000000027314 0ustar00zuulzuul00000000000000# Copyright 2014 Intel Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
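# The status/state mapping these tests assert, in rough terms (inferred from
# the assertions below rather than quoted from the controller itself):
#
#     status = 'disabled' if service.disabled else 'enabled'
#     state = 'up' if servicegroup_api.service_is_up(service) else 'down'
#
# When the service is disabled, the detail view is also expected to carry the
# service's disabled_reason.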
import copy import mock from nova.api.openstack.compute import hypervisors \ as hypervisors_v21 from nova import objects from nova import test from nova.tests.unit.api.openstack.compute import test_hypervisors from nova.tests.unit.api.openstack import fakes TEST_HYPER = test_hypervisors.TEST_HYPERS_OBJ[0].obj_clone() TEST_SERVICE = objects.Service(id=1, host="compute1", binary="nova-compute", topic="compute_topic", report_count=5, disabled=False, disabled_reason=None, availability_zone="nova") class HypervisorStatusTestV21(test.NoDBTestCase): def _prepare_extension(self): self.controller = hypervisors_v21.HypervisorsController() self.controller.servicegroup_api.service_is_up = mock.MagicMock( return_value=True) def _get_request(self): return fakes.HTTPRequest.blank( '/v2/%s/os-hypervisors/detail' % fakes.FAKE_PROJECT_ID, use_admin_context=True) def test_view_hypervisor_service_status(self): self._prepare_extension() req = self._get_request() result = self.controller._view_hypervisor( TEST_HYPER, TEST_SERVICE, False, req) self.assertEqual('enabled', result['status']) self.assertEqual('up', result['state']) self.assertEqual('enabled', result['status']) self.controller.servicegroup_api.service_is_up.return_value = False result = self.controller._view_hypervisor( TEST_HYPER, TEST_SERVICE, False, req) self.assertEqual('down', result['state']) hyper = copy.deepcopy(TEST_HYPER) service = copy.deepcopy(TEST_SERVICE) service.disabled = True result = self.controller._view_hypervisor(hyper, service, False, req) self.assertEqual('disabled', result['status']) def test_view_hypervisor_detail_status(self): self._prepare_extension() req = self._get_request() result = self.controller._view_hypervisor( TEST_HYPER, TEST_SERVICE, True, req) self.assertEqual('enabled', result['status']) self.assertEqual('up', result['state']) self.assertIsNone(result['service']['disabled_reason']) self.controller.servicegroup_api.service_is_up.return_value = False result = self.controller._view_hypervisor( TEST_HYPER, TEST_SERVICE, True, req) self.assertEqual('down', result['state']) hyper = copy.deepcopy(TEST_HYPER) service = copy.deepcopy(TEST_SERVICE) service.disabled = True service.disabled_reason = "fake" result = self.controller._view_hypervisor(hyper, service, True, req) self.assertEqual('disabled', result['status'],) self.assertEqual('fake', result['service']['disabled_reason']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_hypervisors.py0000664000175000017500000016060300000000000026102 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
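# A sketch of how the pagination fake defined below is expected to be driven
# (example values only):
#
#     fake_compute_node_get_all(ctxt, limit=1, marker='1')           # pre-2.53: integer id markers
#     fake_compute_node_get_all(ctxt, limit=1, marker=uuids.hyper1)  # 2.53+: compute node uuid markers
#
# Both calls return the nodes that follow the marker, up to 'limit'; an
# unknown marker ('99999' or uuids.invalid_marker) raises MarkerNotFound.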
import copy import mock import netaddr from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids import six from webob import exc from nova.api.openstack.compute import hypervisors \ as hypervisors_v21 from nova import exception from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance CPU_INFO = """ {"arch": "x86_64", "vendor": "fake", "topology": {"cores": 1, "threads": 1, "sockets": 1}, "features": [], "model": ""}""" TEST_HYPERS = [ dict(id=1, uuid=uuids.hyper1, service_id=1, host="compute1", vcpus=4, memory_mb=10 * 1024, local_gb=250, vcpus_used=2, memory_mb_used=5 * 1024, local_gb_used=125, hypervisor_type="xen", hypervisor_version=3, hypervisor_hostname="hyper1", free_ram_mb=5 * 1024, free_disk_gb=125, current_workload=2, running_vms=2, cpu_info=CPU_INFO, disk_available_least=100, host_ip=netaddr.IPAddress('1.1.1.1')), dict(id=2, uuid=uuids.hyper2, service_id=2, host="compute2", vcpus=4, memory_mb=10 * 1024, local_gb=250, vcpus_used=2, memory_mb_used=5 * 1024, local_gb_used=125, hypervisor_type="xen", hypervisor_version=3, hypervisor_hostname="hyper2", free_ram_mb=5 * 1024, free_disk_gb=125, current_workload=2, running_vms=2, cpu_info=CPU_INFO, disk_available_least=100, host_ip=netaddr.IPAddress('2.2.2.2'))] TEST_SERVICES = [ objects.Service(id=1, uuid=uuids.service1, host="compute1", binary="nova-compute", topic="compute_topic", report_count=5, disabled=False, disabled_reason=None, availability_zone="nova"), objects.Service(id=2, uuid=uuids.service2, host="compute2", binary="nova-compute", topic="compute_topic", report_count=5, disabled=False, disabled_reason=None, availability_zone="nova"), ] TEST_HYPERS_OBJ = [objects.ComputeNode(**hyper_dct) for hyper_dct in TEST_HYPERS] TEST_HYPERS[0].update({'service': TEST_SERVICES[0]}) TEST_HYPERS[1].update({'service': TEST_SERVICES[1]}) TEST_SERVERS = [dict(name="inst1", uuid=uuids.instance_1, host="compute1"), dict(name="inst2", uuid=uuids.instance_2, host="compute2"), dict(name="inst3", uuid=uuids.instance_3, host="compute1"), dict(name="inst4", uuid=uuids.instance_4, host="compute2")] def fake_compute_node_get_all(context, limit=None, marker=None): if marker in ['99999', uuids.invalid_marker]: raise exception.MarkerNotFound(marker) marker_found = True if marker is None else False output = [] for hyper in TEST_HYPERS_OBJ: # Starting with the 2.53 microversion, the marker is a uuid. 
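        # Match the marker against either the integer primary key (as a
        # string) or the node uuid, then return the nodes that follow it,
        # stopping once 'limit' entries have been collected.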
if not marker_found and marker in (str(hyper.id), hyper.uuid): marker_found = True elif marker_found: if limit is None or len(output) < int(limit): output.append(hyper) return output def fake_compute_node_search_by_hypervisor(context, hypervisor_re): return TEST_HYPERS_OBJ def fake_compute_node_get(context, compute_id): for hyper in TEST_HYPERS_OBJ: if hyper.uuid == compute_id or hyper.id == int(compute_id): return hyper raise exception.ComputeHostNotFound(host=compute_id) def fake_service_get_by_compute_host(context, host): for service in TEST_SERVICES: if service.host == host: return service def fake_compute_node_statistics(context): result = dict( count=0, vcpus=0, memory_mb=0, local_gb=0, vcpus_used=0, memory_mb_used=0, local_gb_used=0, free_ram_mb=0, free_disk_gb=0, current_workload=0, running_vms=0, disk_available_least=0, ) for hyper in TEST_HYPERS_OBJ: for key in result: if key == 'count': result[key] += 1 else: result[key] += getattr(hyper, key) return result def fake_instance_get_all_by_host(context, host): results = [] for inst in TEST_SERVERS: if inst['host'] == host: inst_obj = fake_instance.fake_instance_obj(context, **inst) results.append(inst_obj) return results class HypervisorsTestV21(test.NoDBTestCase): api_version = '2.1' # Allow subclasses to override if the id value in the response is the # compute node primary key integer id or the uuid. expect_uuid_for_id = False # TODO(stephenfin): These should just be defined here TEST_HYPERS_OBJ = copy.deepcopy(TEST_HYPERS_OBJ) TEST_SERVICES = copy.deepcopy(TEST_SERVICES) TEST_SERVERS = copy.deepcopy(TEST_SERVERS) DETAIL_HYPERS_DICTS = copy.deepcopy(TEST_HYPERS) del DETAIL_HYPERS_DICTS[0]['service_id'] del DETAIL_HYPERS_DICTS[1]['service_id'] del DETAIL_HYPERS_DICTS[0]['host'] del DETAIL_HYPERS_DICTS[1]['host'] del DETAIL_HYPERS_DICTS[0]['uuid'] del DETAIL_HYPERS_DICTS[1]['uuid'] DETAIL_HYPERS_DICTS[0].update({'state': 'up', 'status': 'enabled', 'service': dict(id=1, host='compute1', disabled_reason=None)}) DETAIL_HYPERS_DICTS[1].update({'state': 'up', 'status': 'enabled', 'service': dict(id=2, host='compute2', disabled_reason=None)}) INDEX_HYPER_DICTS = [ dict(id=1, hypervisor_hostname="hyper1", state='up', status='enabled'), dict(id=2, hypervisor_hostname="hyper2", state='up', status='enabled')] DETAIL_NULL_CPUINFO_DICT = {'': '', None: None} def _get_request(self, use_admin_context, url=''): return fakes.HTTPRequest.blank(url, use_admin_context=use_admin_context, version=self.api_version) def _set_up_controller(self): self.controller = hypervisors_v21.HypervisorsController() self.controller.servicegroup_api.service_is_up = mock.MagicMock( return_value=True) def _get_hyper_id(self): """Helper function to get the proper hypervisor id for a request :returns: The first hypervisor's uuid for microversions that expect a uuid for the id, otherwise the hypervisor's id primary key """ return (self.TEST_HYPERS_OBJ[0].uuid if self.expect_uuid_for_id else self.TEST_HYPERS_OBJ[0].id) def setUp(self): super(HypervisorsTestV21, self).setUp() self._set_up_controller() host_api = self.controller.host_api host_api.compute_node_get_all = mock.MagicMock( side_effect=fake_compute_node_get_all) host_api.service_get_by_compute_host = mock.MagicMock( side_effect=fake_service_get_by_compute_host) host_api.compute_node_search_by_hypervisor = mock.MagicMock( side_effect=fake_compute_node_search_by_hypervisor) host_api.compute_node_get = mock.MagicMock( side_effect=fake_compute_node_get) self.stub_out('nova.db.api.compute_node_statistics', 
fake_compute_node_statistics) def test_view_hypervisor_nodetail_noservers(self): req = self._get_request(True) result = self.controller._view_hypervisor( self.TEST_HYPERS_OBJ[0], self.TEST_SERVICES[0], False, req) self.assertEqual(self.INDEX_HYPER_DICTS[0], result) def test_view_hypervisor_detail_noservers(self): req = self._get_request(True) result = self.controller._view_hypervisor( self.TEST_HYPERS_OBJ[0], self.TEST_SERVICES[0], True, req) self.assertEqual(self.DETAIL_HYPERS_DICTS[0], result) def test_view_hypervisor_servers(self): req = self._get_request(True) result = self.controller._view_hypervisor(self.TEST_HYPERS_OBJ[0], self.TEST_SERVICES[0], False, req, self.TEST_SERVERS) expected_dict = copy.deepcopy(self.INDEX_HYPER_DICTS[0]) expected_dict.update({'servers': [ dict(name="inst1", uuid=uuids.instance_1), dict(name="inst2", uuid=uuids.instance_2), dict(name="inst3", uuid=uuids.instance_3), dict(name="inst4", uuid=uuids.instance_4)]}) self.assertEqual(expected_dict, result) def _test_view_hypervisor_detail_cpuinfo_null(self, cpu_info): req = self._get_request(True) test_hypervisor_obj = copy.deepcopy(self.TEST_HYPERS_OBJ[0]) test_hypervisor_obj.cpu_info = cpu_info result = self.controller._view_hypervisor(test_hypervisor_obj, self.TEST_SERVICES[0], True, req) expected_dict = copy.deepcopy(self.DETAIL_HYPERS_DICTS[0]) expected_dict.update({'cpu_info': self.DETAIL_NULL_CPUINFO_DICT[cpu_info]}) self.assertEqual(result, expected_dict) def test_view_hypervisor_detail_cpuinfo_empty_string(self): self._test_view_hypervisor_detail_cpuinfo_null('') def test_view_hypervisor_detail_cpuinfo_none(self): self._test_view_hypervisor_detail_cpuinfo_null(None) def test_index(self): req = self._get_request(True) result = self.controller.index(req) self.assertEqual(dict(hypervisors=self.INDEX_HYPER_DICTS), result) def test_index_compute_host_not_found(self): """Tests that if a service is deleted but the compute node is not we don't fail when listing hypervisors. 
""" # two computes, a matching service only exists for the first one compute_nodes = objects.ComputeNodeList(objects=[ objects.ComputeNode(**TEST_HYPERS[0]), objects.ComputeNode(**TEST_HYPERS[1]) ]) def fake_service_get_by_compute_host(context, host): if host == TEST_HYPERS[0]['host']: return TEST_SERVICES[0] raise exception.ComputeHostNotFound(host=host) @mock.patch.object(self.controller.host_api, 'compute_node_get_all', return_value=compute_nodes) @mock.patch.object(self.controller.host_api, 'service_get_by_compute_host', fake_service_get_by_compute_host) def _test(self, compute_node_get_all): req = self._get_request(True) result = self.controller.index(req) self.assertEqual(1, len(result['hypervisors'])) expected = { 'id': compute_nodes[0].uuid if self.expect_uuid_for_id else compute_nodes[0].id, 'hypervisor_hostname': compute_nodes[0].hypervisor_hostname, 'state': 'up', 'status': 'enabled', } self.assertDictEqual(expected, result['hypervisors'][0]) _test(self) def test_index_compute_host_not_mapped(self): """Tests that we don't fail index if a host is not mapped.""" # two computes, a matching service only exists for the first one compute_nodes = objects.ComputeNodeList(objects=[ objects.ComputeNode(**TEST_HYPERS[0]), objects.ComputeNode(**TEST_HYPERS[1]) ]) def fake_service_get_by_compute_host(context, host): if host == TEST_HYPERS[0]['host']: return TEST_SERVICES[0] raise exception.HostMappingNotFound(name=host) @mock.patch.object(self.controller.host_api, 'compute_node_get_all', return_value=compute_nodes) @mock.patch.object(self.controller.host_api, 'service_get_by_compute_host', fake_service_get_by_compute_host) def _test(self, compute_node_get_all): req = self._get_request(True) result = self.controller.index(req) self.assertEqual(1, len(result['hypervisors'])) expected = { 'id': compute_nodes[0].uuid if self.expect_uuid_for_id else compute_nodes[0].id, 'hypervisor_hostname': compute_nodes[0].hypervisor_hostname, 'state': 'up', 'status': 'enabled', } self.assertDictEqual(expected, result['hypervisors'][0]) _test(self) def test_detail(self): req = self._get_request(True) result = self.controller.detail(req) self.assertEqual(dict(hypervisors=self.DETAIL_HYPERS_DICTS), result) def test_detail_compute_host_not_found(self): """Tests that if a service is deleted but the compute node is not we don't fail when listing hypervisors. 
""" # two computes, a matching service only exists for the first one compute_nodes = objects.ComputeNodeList(objects=[ objects.ComputeNode(**TEST_HYPERS[0]), objects.ComputeNode(**TEST_HYPERS[1]) ]) def fake_service_get_by_compute_host(context, host): if host == TEST_HYPERS[0]['host']: return TEST_SERVICES[0] raise exception.ComputeHostNotFound(host=host) @mock.patch.object(self.controller.host_api, 'compute_node_get_all', return_value=compute_nodes) @mock.patch.object(self.controller.host_api, 'service_get_by_compute_host', fake_service_get_by_compute_host) def _test(self, compute_node_get_all): req = self._get_request(True) result = self.controller.detail(req) self.assertEqual(1, len(result['hypervisors'])) expected = { 'id': compute_nodes[0].id, 'hypervisor_hostname': compute_nodes[0].hypervisor_hostname, 'state': 'up', 'status': 'enabled', } # we don't care about all of the details, just make sure we get # the subset we care about and there are more keys than what index # would return hypervisor = result['hypervisors'][0] self.assertTrue( set(expected.keys()).issubset(set(hypervisor.keys()))) self.assertGreater(len(hypervisor.keys()), len(expected.keys())) self.assertEqual(compute_nodes[0].hypervisor_hostname, hypervisor['hypervisor_hostname']) _test(self) def test_detail_compute_host_not_mapped(self): """Tests that if a service is deleted but the compute node is not we don't fail when listing hypervisors. """ # two computes, a matching service only exists for the first one compute_nodes = objects.ComputeNodeList(objects=[ objects.ComputeNode(**TEST_HYPERS[0]), objects.ComputeNode(**TEST_HYPERS[1]) ]) def fake_service_get_by_compute_host(context, host): if host == TEST_HYPERS[0]['host']: return TEST_SERVICES[0] raise exception.HostMappingNotFound(name=host) @mock.patch.object(self.controller.host_api, 'compute_node_get_all', return_value=compute_nodes) @mock.patch.object(self.controller.host_api, 'service_get_by_compute_host', fake_service_get_by_compute_host) def _test(self, compute_node_get_all): req = self._get_request(True) result = self.controller.detail(req) self.assertEqual(1, len(result['hypervisors'])) expected = { 'id': compute_nodes[0].id, 'hypervisor_hostname': compute_nodes[0].hypervisor_hostname, 'state': 'up', 'status': 'enabled', } # we don't care about all of the details, just make sure we get # the subset we care about and there are more keys than what index # would return hypervisor = result['hypervisors'][0] self.assertTrue( set(expected.keys()).issubset(set(hypervisor.keys()))) self.assertGreater(len(hypervisor.keys()), len(expected.keys())) self.assertEqual(compute_nodes[0].hypervisor_hostname, hypervisor['hypervisor_hostname']) _test(self) def test_show_compute_host_not_mapped(self): """Tests that if a service is deleted but the compute node is not we don't fail when listing hypervisors. 
""" @mock.patch.object(self.controller.host_api, 'compute_node_get', return_value=self.TEST_HYPERS_OBJ[0]) @mock.patch.object(self.controller.host_api, 'service_get_by_compute_host') def _test(self, mock_service, mock_compute_node_get): req = self._get_request(True) mock_service.side_effect = exception.HostMappingNotFound( name='foo') hyper_id = self._get_hyper_id() self.assertRaises(exc.HTTPNotFound, self.controller.show, req, hyper_id) self.assertTrue(mock_service.called) mock_compute_node_get.assert_called_once_with(mock.ANY, hyper_id) _test(self) def test_show_noid(self): req = self._get_request(True) hyperid = uuids.hyper3 if self.expect_uuid_for_id else '3' self.assertRaises(exc.HTTPNotFound, self.controller.show, req, hyperid) def test_show_non_integer_id(self): req = self._get_request(True) self.assertRaises(exc.HTTPNotFound, self.controller.show, req, 'abc') def test_show_withid(self): req = self._get_request(True) hyper_id = self._get_hyper_id() result = self.controller.show(req, hyper_id) self.assertEqual(dict(hypervisor=self.DETAIL_HYPERS_DICTS[0]), result) def test_uptime_noid(self): req = self._get_request(True) hyper_id = uuids.hyper3 if self.expect_uuid_for_id else '3' self.assertRaises(exc.HTTPNotFound, self.controller.uptime, req, hyper_id) def test_uptime_notimplemented(self): with mock.patch.object(self.controller.host_api, 'get_host_uptime', side_effect=exc.HTTPNotImplemented() ) as mock_get_uptime: req = self._get_request(True) hyper_id = self._get_hyper_id() self.assertRaises(exc.HTTPNotImplemented, self.controller.uptime, req, hyper_id) self.assertEqual(1, mock_get_uptime.call_count) def test_uptime_implemented(self): with mock.patch.object(self.controller.host_api, 'get_host_uptime', return_value="fake uptime" ) as mock_get_uptime: req = self._get_request(True) hyper_id = self._get_hyper_id() result = self.controller.uptime(req, hyper_id) expected_dict = copy.deepcopy(self.INDEX_HYPER_DICTS[0]) expected_dict.update({'uptime': "fake uptime"}) self.assertEqual(dict(hypervisor=expected_dict), result) self.assertEqual(1, mock_get_uptime.call_count) def test_uptime_non_integer_id(self): req = self._get_request(True) self.assertRaises(exc.HTTPNotFound, self.controller.uptime, req, 'abc') def test_uptime_hypervisor_down(self): with mock.patch.object(self.controller.host_api, 'get_host_uptime', side_effect=exception.ComputeServiceUnavailable(host='dummy') ) as mock_get_uptime: req = self._get_request(True) hyper_id = self._get_hyper_id() self.assertRaises(exc.HTTPBadRequest, self.controller.uptime, req, hyper_id) mock_get_uptime.assert_called_once_with( mock.ANY, self.TEST_HYPERS_OBJ[0].host) def test_uptime_hypervisor_not_mapped_service_get(self): @mock.patch.object(self.controller.host_api, 'compute_node_get') @mock.patch.object(self.controller.host_api, 'get_host_uptime') @mock.patch.object(self.controller.host_api, 'service_get_by_compute_host', side_effect=exception.HostMappingNotFound( name='dummy')) def _test(mock_get, _, __): req = self._get_request(True) hyper_id = self._get_hyper_id() self.assertRaises(exc.HTTPNotFound, self.controller.uptime, req, hyper_id) self.assertTrue(mock_get.called) _test() def test_uptime_hypervisor_not_mapped(self): with mock.patch.object(self.controller.host_api, 'get_host_uptime', side_effect=exception.HostMappingNotFound(name='dummy') ) as mock_get_uptime: req = self._get_request(True) hyper_id = self._get_hyper_id() self.assertRaises(exc.HTTPNotFound, self.controller.uptime, req, hyper_id) mock_get_uptime.assert_called_once_with( 
mock.ANY, self.TEST_HYPERS_OBJ[0].host) def test_search(self): req = self._get_request(True) result = self.controller.search(req, 'hyper') self.assertEqual(dict(hypervisors=self.INDEX_HYPER_DICTS), result) def test_search_non_exist(self): with mock.patch.object(self.controller.host_api, 'compute_node_search_by_hypervisor', return_value=[]) as mock_node_search: req = self._get_request(True) self.assertRaises(exc.HTTPNotFound, self.controller.search, req, 'a') self.assertEqual(1, mock_node_search.call_count) def test_search_unmapped(self): @mock.patch.object(self.controller.host_api, 'compute_node_search_by_hypervisor') @mock.patch.object(self.controller.host_api, 'service_get_by_compute_host') def _test(mock_service, mock_search): mock_search.return_value = [mock.MagicMock()] mock_service.side_effect = exception.HostMappingNotFound( name='foo') req = self._get_request(True) self.assertRaises(exc.HTTPNotFound, self.controller.search, req, 'a') self.assertTrue(mock_service.called) _test() @mock.patch.object(objects.InstanceList, 'get_by_host', side_effect=fake_instance_get_all_by_host) def test_servers(self, mock_get): req = self._get_request(True) result = self.controller.servers(req, 'hyper') expected_dict = copy.deepcopy(self.INDEX_HYPER_DICTS) expected_dict[0].update({'servers': [ dict(uuid=uuids.instance_1), dict(uuid=uuids.instance_3)]}) expected_dict[1].update({'servers': [ dict(uuid=uuids.instance_2), dict(uuid=uuids.instance_4)]}) for output in result['hypervisors']: servers = output['servers'] for server in servers: del server['name'] self.assertEqual(dict(hypervisors=expected_dict), result) def test_servers_not_mapped(self): req = self._get_request(True) with mock.patch.object(self.controller.host_api, 'instance_get_all_by_host') as m: m.side_effect = exception.HostMappingNotFound(name='something') self.assertRaises(exc.HTTPNotFound, self.controller.servers, req, 'hyper') def test_servers_non_id(self): with mock.patch.object(self.controller.host_api, 'compute_node_search_by_hypervisor', return_value=[]) as mock_node_search: req = self._get_request(True) self.assertRaises(exc.HTTPNotFound, self.controller.servers, req, '115') self.assertEqual(1, mock_node_search.call_count) def test_servers_with_non_integer_hypervisor_id(self): with mock.patch.object(self.controller.host_api, 'compute_node_search_by_hypervisor', return_value=[]) as mock_node_search: req = self._get_request(True) self.assertRaises(exc.HTTPNotFound, self.controller.servers, req, 'abc') self.assertEqual(1, mock_node_search.call_count) def test_servers_with_no_server(self): with mock.patch.object(self.controller.host_api, 'instance_get_all_by_host', return_value=[]) as mock_inst_get_all: req = self._get_request(True) result = self.controller.servers(req, self.TEST_HYPERS_OBJ[0].id) self.assertEqual(dict(hypervisors=self.INDEX_HYPER_DICTS), result) self.assertTrue(mock_inst_get_all.called) def test_statistics(self): req = self._get_request(True) result = self.controller.statistics(req) self.assertEqual(dict(hypervisor_statistics=dict( count=2, vcpus=8, memory_mb=20 * 1024, local_gb=500, vcpus_used=4, memory_mb_used=10 * 1024, local_gb_used=250, free_ram_mb=10 * 1024, free_disk_gb=250, current_workload=4, running_vms=4, disk_available_least=200)), result) class HypervisorsTestV228(HypervisorsTestV21): api_version = '2.28' DETAIL_HYPERS_DICTS = copy.deepcopy(HypervisorsTestV21.DETAIL_HYPERS_DICTS) DETAIL_HYPERS_DICTS[0]['cpu_info'] = jsonutils.loads(CPU_INFO) DETAIL_HYPERS_DICTS[1]['cpu_info'] = jsonutils.loads(CPU_INFO) 
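# ---------------------------------------------------------------------------
# NOTE (illustrative sketch, not part of the original test module): the V2.28
# expectations above return ``cpu_info`` as a dict from microversion 2.28
# onward, and DETAIL_NULL_CPUINFO_DICT below shows that both '' and None are
# rendered as an empty dict. A minimal sketch of that normalization, assuming
# only the module's existing jsonutils import (helper name is hypothetical):
def _example_normalize_cpu_info(cpu_info):
    """Return cpu_info as a dict, mapping ''/None to {} (sketch only)."""
    if not cpu_info:
        return {}
    return jsonutils.loads(cpu_info)
# e.g. _example_normalize_cpu_info(CPU_INFO) == jsonutils.loads(CPU_INFO),
# while _example_normalize_cpu_info('') and _example_normalize_cpu_info(None)
# both give {}, matching _test_view_hypervisor_detail_cpuinfo_null above.
# ---------------------------------------------------------------------------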
DETAIL_NULL_CPUINFO_DICT = {'': {}, None: {}} class HypervisorsTestV233(HypervisorsTestV228): api_version = '2.33' def test_index_pagination(self): req = self._get_request(True, '/v2/1234/os-hypervisors?limit=1&marker=1') result = self.controller.index(req) expected = { 'hypervisors': [ {'hypervisor_hostname': 'hyper2', 'id': 2, 'state': 'up', 'status': 'enabled'} ], 'hypervisors_links': [ {'href': 'http://localhost/v2/os-hypervisors?limit=1&marker=2', 'rel': 'next'} ] } self.assertEqual(expected, result) def test_index_pagination_with_invalid_marker(self): req = self._get_request(True, '/v2/1234/os-hypervisors?marker=99999') self.assertRaises(exc.HTTPBadRequest, self.controller.index, req) def test_index_pagination_with_invalid_non_int_limit(self): req = self._get_request(True, '/v2/1234/os-hypervisors?limit=-9') self.assertRaises(exception.ValidationError, self.controller.index, req) def test_index_pagination_with_invalid_string_limit(self): req = self._get_request(True, '/v2/1234/os-hypervisors?limit=abc') self.assertRaises(exception.ValidationError, self.controller.index, req) def test_index_duplicate_query_parameters_with_invalid_string_limit(self): req = self._get_request( True, '/v2/1234/os-hypervisors/?limit=1&limit=abc') self.assertRaises(exception.ValidationError, self.controller.index, req) def test_index_duplicate_query_parameters_validation(self): expected = [{ 'hypervisor_hostname': 'hyper2', 'id': 2, 'state': 'up', 'status': 'enabled'} ] params = { 'limit': 1, 'marker': 1, } for param, value in params.items(): req = self._get_request( use_admin_context=True, url='/os-hypervisors?marker=1&%s=%s&%s=%s' % (param, value, param, value)) result = self.controller.index(req) self.assertEqual(expected, result['hypervisors']) def test_index_pagination_with_additional_filter(self): expected = { 'hypervisors': [ {'hypervisor_hostname': 'hyper2', 'id': 2, 'state': 'up', 'status': 'enabled'} ], 'hypervisors_links': [ {'href': 'http://localhost/v2/os-hypervisors?limit=1&marker=2', 'rel': 'next'} ] } req = self._get_request( True, '/v2/1234/os-hypervisors?limit=1&marker=1&additional=3') result = self.controller.index(req) self.assertEqual(expected, result) def test_detail_pagination(self): req = self._get_request( True, '/v2/1234/os-hypervisors/detail?limit=1&marker=1') result = self.controller.detail(req) link = 'http://localhost/v2/os-hypervisors/detail?limit=1&marker=2' expected = { 'hypervisors': [ {'cpu_info': {'arch': 'x86_64', 'features': [], 'model': '', 'topology': {'cores': 1, 'sockets': 1, 'threads': 1}, 'vendor': 'fake'}, 'current_workload': 2, 'disk_available_least': 100, 'free_disk_gb': 125, 'free_ram_mb': 5120, 'host_ip': netaddr.IPAddress('2.2.2.2'), 'hypervisor_hostname': 'hyper2', 'hypervisor_type': 'xen', 'hypervisor_version': 3, 'id': 2, 'local_gb': 250, 'local_gb_used': 125, 'memory_mb': 10240, 'memory_mb_used': 5120, 'running_vms': 2, 'service': {'disabled_reason': None, 'host': 'compute2', 'id': 2}, 'state': 'up', 'status': 'enabled', 'vcpus': 4, 'vcpus_used': 2} ], 'hypervisors_links': [{'href': link, 'rel': 'next'}] } self.assertEqual(expected, result) def test_detail_pagination_with_invalid_marker(self): req = self._get_request(True, '/v2/1234/os-hypervisors/detail?marker=99999') self.assertRaises(exc.HTTPBadRequest, self.controller.detail, req) def test_detail_pagination_with_invalid_string_limit(self): req = self._get_request(True, '/v2/1234/os-hypervisors/detail?limit=abc') self.assertRaises(exception.ValidationError, self.controller.detail, req) def 
test_detail_duplicate_query_parameters_with_invalid_string_limit(self): req = self._get_request( True, '/v2/1234/os-hypervisors/detail?limit=1&limit=abc') self.assertRaises(exception.ValidationError, self.controller.detail, req) def test_detail_duplicate_query_parameters_validation(self): expected = [ {'cpu_info': {'arch': 'x86_64', 'features': [], 'model': '', 'topology': {'cores': 1, 'sockets': 1, 'threads': 1}, 'vendor': 'fake'}, 'current_workload': 2, 'disk_available_least': 100, 'free_disk_gb': 125, 'free_ram_mb': 5120, 'host_ip': netaddr.IPAddress('2.2.2.2'), 'hypervisor_hostname': 'hyper2', 'hypervisor_type': 'xen', 'hypervisor_version': 3, 'id': 2, 'local_gb': 250, 'local_gb_used': 125, 'memory_mb': 10240, 'memory_mb_used': 5120, 'running_vms': 2, 'service': {'disabled_reason': None, 'host': 'compute2', 'id': 2}, 'state': 'up', 'status': 'enabled', 'vcpus': 4, 'vcpus_used': 2} ] params = { 'limit': 1, 'marker': 1, } for param, value in params.items(): req = self._get_request( use_admin_context=True, url='/os-hypervisors/detail?marker=1&%s=%s&%s=%s' % (param, value, param, value)) result = self.controller.detail(req) self.assertEqual(expected, result['hypervisors']) def test_detail_pagination_with_additional_filter(self): link = 'http://localhost/v2/os-hypervisors/detail?limit=1&marker=2' expected = { 'hypervisors': [ {'cpu_info': {'arch': 'x86_64', 'features': [], 'model': '', 'topology': {'cores': 1, 'sockets': 1, 'threads': 1}, 'vendor': 'fake'}, 'current_workload': 2, 'disk_available_least': 100, 'free_disk_gb': 125, 'free_ram_mb': 5120, 'host_ip': netaddr.IPAddress('2.2.2.2'), 'hypervisor_hostname': 'hyper2', 'hypervisor_type': 'xen', 'hypervisor_version': 3, 'id': 2, 'local_gb': 250, 'local_gb_used': 125, 'memory_mb': 10240, 'memory_mb_used': 5120, 'running_vms': 2, 'service': {'disabled_reason': None, 'host': 'compute2', 'id': 2}, 'state': 'up', 'status': 'enabled', 'vcpus': 4, 'vcpus_used': 2} ], 'hypervisors_links': [{ 'href': link, 'rel': 'next'}] } req = self._get_request( True, '/v2/1234/os-hypervisors/detail?limit=1&marker=1&unknown=2') result = self.controller.detail(req) self.assertEqual(expected, result) class HypervisorsTestV252(HypervisorsTestV233): """This is a boundary test to make sure 2.52 works like 2.33.""" api_version = '2.52' class HypervisorsTestV253(HypervisorsTestV252): api_version = hypervisors_v21.UUID_FOR_ID_MIN_VERSION expect_uuid_for_id = True # This is an expected response for index(). INDEX_HYPER_DICTS = [ dict(id=uuids.hyper1, hypervisor_hostname="hyper1", state='up', status='enabled'), dict(id=uuids.hyper2, hypervisor_hostname="hyper2", state='up', status='enabled')] def setUp(self): super(HypervisorsTestV253, self).setUp() # This is an expected response for detail(). for index, detail_hyper_dict in enumerate(self.DETAIL_HYPERS_DICTS): detail_hyper_dict['id'] = TEST_HYPERS[index]['uuid'] detail_hyper_dict['service']['id'] = TEST_SERVICES[index].uuid def test_servers(self): """Asserts that calling the servers route after 2.52 fails.""" self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.servers, self._get_request(True), 'hyper') def test_servers_with_no_server(self): """Tests GET /os-hypervisors?with_servers=1 when there are no instances on the given host. 
""" with mock.patch.object(self.controller.host_api, 'instance_get_all_by_host', return_value=[]) as mock_inst_get_all: req = self._get_request(use_admin_context=True, url='/os-hypervisors?with_servers=1') result = self.controller.index(req) self.assertEqual(dict(hypervisors=self.INDEX_HYPER_DICTS), result) # instance_get_all_by_host is called for each hypervisor self.assertEqual(2, mock_inst_get_all.call_count) mock_inst_get_all.assert_has_calls(( mock.call(req.environ['nova.context'], TEST_HYPERS_OBJ[0].host), mock.call(req.environ['nova.context'], TEST_HYPERS_OBJ[1].host))) def test_servers_not_mapped(self): """Tests that instance_get_all_by_host fails with HostMappingNotFound. """ req = self._get_request(use_admin_context=True, url='/os-hypervisors?with_servers=1') with mock.patch.object( self.controller.host_api, 'instance_get_all_by_host', side_effect=exception.HostMappingNotFound(name='something')): result = self.controller.index(req) self.assertEqual(dict(hypervisors=[]), result) def test_list_with_servers(self): """Tests GET /os-hypervisors?with_servers=True""" instances = [ objects.InstanceList(objects=[objects.Instance( id=1, uuid=uuids.hyper1_instance1)]), objects.InstanceList(objects=[objects.Instance( id=2, uuid=uuids.hyper2_instance1)])] with mock.patch.object(self.controller.host_api, 'instance_get_all_by_host', side_effect=instances) as mock_inst_get_all: req = self._get_request(use_admin_context=True, url='/os-hypervisors?with_servers=True') result = self.controller.index(req) index_with_servers = copy.deepcopy(self.INDEX_HYPER_DICTS) index_with_servers[0]['servers'] = [ {'name': 'instance-00000001', 'uuid': uuids.hyper1_instance1}] index_with_servers[1]['servers'] = [ {'name': 'instance-00000002', 'uuid': uuids.hyper2_instance1}] self.assertEqual(dict(hypervisors=index_with_servers), result) # instance_get_all_by_host is called for each hypervisor self.assertEqual(2, mock_inst_get_all.call_count) mock_inst_get_all.assert_has_calls(( mock.call(req.environ['nova.context'], TEST_HYPERS_OBJ[0].host), mock.call(req.environ['nova.context'], TEST_HYPERS_OBJ[1].host))) def test_list_with_servers_invalid_parameter(self): """Tests using an invalid with_servers query parameter.""" req = self._get_request(use_admin_context=True, url='/os-hypervisors?with_servers=invalid') self.assertRaises( exception.ValidationError, self.controller.index, req) def test_list_with_hostname_pattern_and_paging_parameters(self): """This is a negative test to validate that trying to list hypervisors with a hostname pattern and paging parameters results in a 400 error. """ req = self._get_request( use_admin_context=True, url='/os-hypervisors?hypervisor_hostname_pattern=foo&' 'limit=1&marker=%s' % uuids.marker) ex = self.assertRaises(exc.HTTPBadRequest, self.controller.index, req) self.assertIn('Paging over hypervisors with the ' 'hypervisor_hostname_pattern query parameter is not ' 'supported.', six.text_type(ex)) def test_servers_with_non_integer_hypervisor_id(self): """This is a poorly named test, it's really checking the 404 case where there is no match for the hostname pattern. 
""" req = self._get_request( use_admin_context=True, url='/os-hypervisors?with_servers=yes&' 'hypervisor_hostname_pattern=shenzhen') with mock.patch.object(self.controller.host_api, 'compute_node_search_by_hypervisor', return_value=objects.ComputeNodeList()) as s: self.assertRaises(exc.HTTPNotFound, self.controller.index, req) s.assert_called_once_with(req.environ['nova.context'], 'shenzhen') def test_servers_non_id(self): """There is no reason to test this for 2.53 since the /os-hypervisors/servers route is deprecated. """ pass def test_search_old_route(self): """Asserts that calling the search route after 2.52 fails.""" self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.search, self._get_request(True), 'hyper') def test_search(self): """Test listing hypervisors with details and using the hypervisor_hostname_pattern query string. """ req = self._get_request( use_admin_context=True, url='/os-hypervisors?hypervisor_hostname_pattern=shenzhen') with mock.patch.object(self.controller.host_api, 'compute_node_search_by_hypervisor', return_value=objects.ComputeNodeList( objects=[TEST_HYPERS_OBJ[0]])) as s: result = self.controller.detail(req) s.assert_called_once_with(req.environ['nova.context'], 'shenzhen') expected = { 'hypervisors': [ {'cpu_info': {'arch': 'x86_64', 'features': [], 'model': '', 'topology': {'cores': 1, 'sockets': 1, 'threads': 1}, 'vendor': 'fake'}, 'current_workload': 2, 'disk_available_least': 100, 'free_disk_gb': 125, 'free_ram_mb': 5120, 'host_ip': netaddr.IPAddress('1.1.1.1'), 'hypervisor_hostname': 'hyper1', 'hypervisor_type': 'xen', 'hypervisor_version': 3, 'id': TEST_HYPERS_OBJ[0].uuid, 'local_gb': 250, 'local_gb_used': 125, 'memory_mb': 10240, 'memory_mb_used': 5120, 'running_vms': 2, 'service': {'disabled_reason': None, 'host': 'compute1', 'id': TEST_SERVICES[0].uuid}, 'state': 'up', 'status': 'enabled', 'vcpus': 4, 'vcpus_used': 2} ] } # There are no links when using the hypervisor_hostname_pattern # query string since we can't page using a pattern matcher. self.assertNotIn('hypervisors_links', result) self.assertDictEqual(expected, result) def test_search_invalid_hostname_pattern_parameter(self): """Tests passing an invalid hypervisor_hostname_pattern query parameter. """ req = self._get_request( use_admin_context=True, url='/os-hypervisors?hypervisor_hostname_pattern=invalid~host') self.assertRaises( exception.ValidationError, self.controller.detail, req) def test_search_non_exist(self): """This is a duplicate of test_servers_with_non_integer_hypervisor_id. """ pass def test_search_unmapped(self): """This is already tested with test_index_compute_host_not_mapped.""" pass def test_show_non_integer_id(self): """There is no reason to test this for 2.53 since 2.53 requires a non-integer id (requires a uuid). """ pass def test_show_integer_id(self): """Tests that we get a 400 if passed a hypervisor integer id to show(). """ req = self._get_request(True) ex = self.assertRaises(exc.HTTPBadRequest, self.controller.show, req, '1') self.assertIn('Invalid uuid 1', six.text_type(ex)) def test_show_with_servers_invalid_parameter(self): """Tests passing an invalid value for the with_servers query parameter to the show() method to make sure the query parameter is validated. 
""" hyper_id = self._get_hyper_id() req = self._get_request( use_admin_context=True, url='/os-hypervisors/%s?with_servers=invalid' % hyper_id) ex = self.assertRaises( exception.ValidationError, self.controller.show, req, hyper_id) self.assertIn('with_servers', six.text_type(ex)) def test_show_with_servers_host_mapping_not_found(self): """Tests that a 404 is returned if instance_get_all_by_host raises HostMappingNotFound. """ hyper_id = self._get_hyper_id() req = self._get_request( use_admin_context=True, url='/os-hypervisors/%s?with_servers=true' % hyper_id) with mock.patch.object( self.controller.host_api, 'instance_get_all_by_host', side_effect=exception.HostMappingNotFound(name=hyper_id)): self.assertRaises(exc.HTTPNotFound, self.controller.show, req, hyper_id) def test_show_with_servers(self): """Tests the show() result when servers are included in the output.""" instances = objects.InstanceList(objects=[objects.Instance( id=1, uuid=uuids.hyper1_instance1)]) hyper_id = self._get_hyper_id() req = self._get_request( use_admin_context=True, url='/os-hypervisors/%s?with_servers=on' % hyper_id) with mock.patch.object(self.controller.host_api, 'instance_get_all_by_host', return_value=instances) as mock_inst_get_all: result = self.controller.show(req, hyper_id) show_with_servers = copy.deepcopy(self.DETAIL_HYPERS_DICTS[0]) show_with_servers['servers'] = [ {'name': 'instance-00000001', 'uuid': uuids.hyper1_instance1}] self.assertDictEqual(dict(hypervisor=show_with_servers), result) # instance_get_all_by_host is called mock_inst_get_all.assert_called_once_with( req.environ['nova.context'], TEST_HYPERS_OBJ[0].host) def test_uptime_non_integer_id(self): """There is no reason to test this for 2.53 since 2.53 requires a non-integer id (requires a uuid). """ pass def test_uptime_integer_id(self): """Tests that we get a 400 if passed a hypervisor integer id to uptime(). 
""" req = self._get_request(True) ex = self.assertRaises(exc.HTTPBadRequest, self.controller.uptime, req, '1') self.assertIn('Invalid uuid 1', six.text_type(ex)) def test_detail_pagination(self): """Tests details paging with uuid markers.""" req = self._get_request( use_admin_context=True, url='/os-hypervisors/detail?limit=1&marker=%s' % TEST_HYPERS_OBJ[0].uuid) result = self.controller.detail(req) link = ('http://localhost/v2/os-hypervisors/detail?limit=1&marker=%s' % TEST_HYPERS_OBJ[1].uuid) expected = { 'hypervisors': [ {'cpu_info': {'arch': 'x86_64', 'features': [], 'model': '', 'topology': {'cores': 1, 'sockets': 1, 'threads': 1}, 'vendor': 'fake'}, 'current_workload': 2, 'disk_available_least': 100, 'free_disk_gb': 125, 'free_ram_mb': 5120, 'host_ip': netaddr.IPAddress('2.2.2.2'), 'hypervisor_hostname': 'hyper2', 'hypervisor_type': 'xen', 'hypervisor_version': 3, 'id': TEST_HYPERS_OBJ[1].uuid, 'local_gb': 250, 'local_gb_used': 125, 'memory_mb': 10240, 'memory_mb_used': 5120, 'running_vms': 2, 'service': {'disabled_reason': None, 'host': 'compute2', 'id': TEST_SERVICES[1].uuid}, 'state': 'up', 'status': 'enabled', 'vcpus': 4, 'vcpus_used': 2} ], 'hypervisors_links': [{'href': link, 'rel': 'next'}] } self.assertEqual(expected, result) def test_detail_pagination_with_invalid_marker(self): """Tests detail paging with an invalid marker (not found).""" req = self._get_request( use_admin_context=True, url='/os-hypervisors/detail?marker=%s' % uuids.invalid_marker) self.assertRaises(exc.HTTPBadRequest, self.controller.detail, req) def test_detail_pagination_with_additional_filter(self): req = self._get_request( True, '/v2/1234/os-hypervisors/detail?limit=1&marker=9&unknown=2') self.assertRaises(exception.ValidationError, self.controller.detail, req) def test_detail_duplicate_query_parameters_validation(self): """Tests that the list Detail query parameter schema enforces only a single entry for any query parameter. """ params = { 'limit': 1, 'marker': uuids.marker, 'hypervisor_hostname_pattern': 'foo', 'with_servers': 'true' } for param, value in params.items(): req = self._get_request( use_admin_context=True, url='/os-hypervisors/detail?%s=%s&%s=%s' % (param, value, param, value)) self.assertRaises(exception.ValidationError, self.controller.detail, req) def test_index_pagination(self): """Tests index paging with uuid markers.""" req = self._get_request( use_admin_context=True, url='/os-hypervisors?limit=1&marker=%s' % TEST_HYPERS_OBJ[0].uuid) result = self.controller.index(req) link = ('http://localhost/v2/os-hypervisors?limit=1&marker=%s' % TEST_HYPERS_OBJ[1].uuid) expected = { 'hypervisors': [{ 'hypervisor_hostname': 'hyper2', 'id': TEST_HYPERS_OBJ[1].uuid, 'state': 'up', 'status': 'enabled' }], 'hypervisors_links': [{'href': link, 'rel': 'next'}] } self.assertEqual(expected, result) def test_index_pagination_with_invalid_marker(self): """Tests index paging with an invalid marker (not found).""" req = self._get_request( use_admin_context=True, url='/os-hypervisors?marker=%s' % uuids.invalid_marker) self.assertRaises(exc.HTTPBadRequest, self.controller.index, req) def test_index_pagination_with_additional_filter(self): req = self._get_request( True, '/v2/1234/os-hypervisors/?limit=1&marker=9&unknown=2') self.assertRaises(exception.ValidationError, self.controller.index, req) def test_index_duplicate_query_parameters_validation(self): """Tests that the list query parameter schema enforces only a single entry for any query parameter. 
""" params = { 'limit': 1, 'marker': uuids.marker, 'hypervisor_hostname_pattern': 'foo', 'with_servers': 'true' } for param, value in params.items(): req = self._get_request( use_admin_context=True, url='/os-hypervisors?%s=%s&%s=%s' % (param, value, param, value)) self.assertRaises(exception.ValidationError, self.controller.index, req) def test_show_duplicate_query_parameters_validation(self): """Tests that the show query parameter schema enforces only a single entry for any query parameter. """ req = self._get_request( use_admin_context=True, url='/os-hypervisors/%s?with_servers=1&with_servers=1' % uuids.hyper1) self.assertRaises(exception.ValidationError, self.controller.show, req, uuids.hyper1) class HypervisorsTestV275(HypervisorsTestV253): api_version = '2.75' def _test_servers_with_no_server(self, func, version=None, **kwargs): """Tests GET APIs return 'servers' field in response even no servers on hypervisors. """ with mock.patch.object(self.controller.host_api, 'instance_get_all_by_host', return_value=[]): req = fakes.HTTPRequest.blank('/os-hypervisors?with_servers=1', use_admin_context=True, version=version or self.api_version) result = func(req, **kwargs) return result def test_list_servers_with_no_server_old_version(self): result = self._test_servers_with_no_server(self.controller.index, version='2.74') for hyper in result['hypervisors']: self.assertNotIn('servers', hyper) def test_list_detail_servers_with_no_server_old_version(self): result = self._test_servers_with_no_server(self.controller.detail, version='2.74') for hyper in result['hypervisors']: self.assertNotIn('servers', hyper) def test_show_servers_with_no_server_old_version(self): result = self._test_servers_with_no_server(self.controller.show, version='2.74', id=uuids.hyper1) self.assertNotIn('servers', result['hypervisor']) def test_servers_with_no_server(self): result = self._test_servers_with_no_server(self.controller.index) for hyper in result['hypervisors']: self.assertEqual(0, len(hyper['servers'])) def test_list_detail_servers_with_empty_server_list(self): result = self._test_servers_with_no_server(self.controller.detail) for hyper in result['hypervisors']: self.assertEqual(0, len(hyper['servers'])) def test_show_servers_with_empty_server_list(self): result = self._test_servers_with_no_server(self.controller.show, id=uuids.hyper1) self.assertEqual(0, len(result['hypervisor']['servers'])) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_image_metadata.py0000664000175000017500000004145700000000000026434 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import copy import mock from oslo_serialization import jsonutils import webob from nova.api.openstack.compute import image_metadata as image_metadata_v21 from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import image_fixtures IMAGE_FIXTURES = image_fixtures.get_image_fixtures() CHK_QUOTA_STR = 'nova.api.openstack.common.check_img_metadata_properties_quota' def get_image_123(): return copy.deepcopy(IMAGE_FIXTURES)[0] class ImageMetaDataTestV21(test.NoDBTestCase): controller_class = image_metadata_v21.ImageMetadataController invalid_request = exception.ValidationError base_path = '/v2/%s/images/' % fakes.FAKE_PROJECT_ID def setUp(self): super(ImageMetaDataTestV21, self).setUp() self.controller = self.controller_class() @mock.patch('nova.image.glance.API.get', return_value=get_image_123()) def test_index(self, get_all_mocked): req = fakes.HTTPRequest.blank(self.base_path + '123/metadata') res_dict = self.controller.index(req, '123') expected = {'metadata': {'key1': 'value1'}} self.assertEqual(res_dict, expected) get_all_mocked.assert_called_once_with(mock.ANY, '123') @mock.patch('nova.image.glance.API.get', return_value=get_image_123()) def test_show(self, get_mocked): req = fakes.HTTPRequest.blank(self.base_path + '123/metadata/key1') res_dict = self.controller.show(req, '123', 'key1') self.assertIn('meta', res_dict) self.assertEqual(len(res_dict['meta']), 1) self.assertEqual('value1', res_dict['meta']['key1']) get_mocked.assert_called_once_with(mock.ANY, '123') @mock.patch('nova.image.glance.API.get', return_value=get_image_123()) def test_show_not_found(self, _get_mocked): req = fakes.HTTPRequest.blank(self.base_path + '123/metadata/key9') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, '123', 'key9') @mock.patch('nova.image.glance.API.get', side_effect=exception.ImageNotFound(image_id='100')) def test_show_image_not_found(self, _get_mocked): req = fakes.HTTPRequest.blank(self.base_path + '100/metadata/key1') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, '100', 'key9') @mock.patch(CHK_QUOTA_STR) @mock.patch('nova.image.glance.API.update') @mock.patch('nova.image.glance.API.get', return_value=get_image_123()) def test_create(self, get_mocked, update_mocked, quota_mocked): mock_result = copy.deepcopy(get_image_123()) mock_result['properties']['key7'] = 'value7' update_mocked.return_value = mock_result req = fakes.HTTPRequest.blank(self.base_path + '123/metadata') req.method = 'POST' body = {"metadata": {"key7": "value7"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" res = self.controller.create(req, '123', body=body) get_mocked.assert_called_once_with(mock.ANY, '123') expected = copy.deepcopy(get_image_123()) expected['properties'] = { 'key1': 'value1', # existing meta 'key7': 'value7' # new meta } quota_mocked.assert_called_once_with(mock.ANY, expected["properties"]) update_mocked.assert_called_once_with(mock.ANY, '123', expected, data=None, purge_props=True) expected_output = {'metadata': {'key1': 'value1', 'key7': 'value7'}} self.assertEqual(expected_output, res) @mock.patch(CHK_QUOTA_STR) @mock.patch('nova.image.glance.API.update') @mock.patch('nova.image.glance.API.get', side_effect=exception.ImageNotFound(image_id='100')) def test_create_image_not_found(self, _get_mocked, update_mocked, quota_mocked): req = fakes.HTTPRequest.blank(self.base_path + '100/metadata') req.method = 'POST' body = {"metadata": {"key7": "value7"}} req.body 
= jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPNotFound, self.controller.create, req, '100', body=body) self.assertFalse(quota_mocked.called) self.assertFalse(update_mocked.called) @mock.patch(CHK_QUOTA_STR) @mock.patch('nova.image.glance.API.update') @mock.patch('nova.image.glance.API.get', return_value=get_image_123()) def test_update_all(self, get_mocked, update_mocked, quota_mocked): req = fakes.HTTPRequest.blank(self.base_path + '123/metadata') req.method = 'PUT' body = {"metadata": {"key9": "value9"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" res = self.controller.update_all(req, '123', body=body) get_mocked.assert_called_once_with(mock.ANY, '123') expected = copy.deepcopy(get_image_123()) expected['properties'] = { 'key9': 'value9' # replace meta } quota_mocked.assert_called_once_with(mock.ANY, expected["properties"]) update_mocked.assert_called_once_with(mock.ANY, '123', expected, data=None, purge_props=True) expected_output = {'metadata': {'key9': 'value9'}} self.assertEqual(expected_output, res) @mock.patch(CHK_QUOTA_STR) @mock.patch('nova.image.glance.API.get', side_effect=exception.ImageNotFound(image_id='100')) def test_update_all_image_not_found(self, _get_mocked, quota_mocked): req = fakes.HTTPRequest.blank(self.base_path + '100/metadata') req.method = 'PUT' body = {"metadata": {"key9": "value9"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPNotFound, self.controller.update_all, req, '100', body=body) self.assertFalse(quota_mocked.called) @mock.patch(CHK_QUOTA_STR) @mock.patch('nova.image.glance.API.update') @mock.patch('nova.image.glance.API.get', return_value=get_image_123()) def test_update_item(self, _get_mocked, update_mocked, quota_mocked): req = fakes.HTTPRequest.blank(self.base_path + '123/metadata/key1') req.method = 'PUT' body = {"meta": {"key1": "zz"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" res = self.controller.update(req, '123', 'key1', body=body) expected = copy.deepcopy(get_image_123()) expected['properties'] = { 'key1': 'zz' # changed meta } quota_mocked.assert_called_once_with(mock.ANY, expected["properties"]) update_mocked.assert_called_once_with(mock.ANY, '123', expected, data=None, purge_props=True) expected_output = {'meta': {'key1': 'zz'}} self.assertEqual(res, expected_output) @mock.patch(CHK_QUOTA_STR) @mock.patch('nova.image.glance.API.get', side_effect=exception.ImageNotFound(image_id='100')) def test_update_item_image_not_found(self, _get_mocked, quota_mocked): req = fakes.HTTPRequest.blank(self.base_path + '100/metadata/key1') req.method = 'PUT' body = {"meta": {"key1": "zz"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPNotFound, self.controller.update, req, '100', 'key1', body=body) self.assertFalse(quota_mocked.called) @mock.patch(CHK_QUOTA_STR) @mock.patch('nova.image.glance.API.update') @mock.patch('nova.image.glance.API.get') def test_update_item_bad_body(self, get_mocked, update_mocked, quota_mocked): req = fakes.HTTPRequest.blank(self.base_path + '123/metadata/key1') req.method = 'PUT' body = {"key1": "zz"} req.body = b'' req.headers["content-type"] = "application/json" self.assertRaises(self.invalid_request, self.controller.update, req, '123', 'key1', body=body) self.assertFalse(get_mocked.called) 
self.assertFalse(quota_mocked.called) self.assertFalse(update_mocked.called) @mock.patch(CHK_QUOTA_STR, side_effect=webob.exc.HTTPBadRequest()) @mock.patch('nova.image.glance.API.update') @mock.patch('nova.image.glance.API.get') def test_update_item_too_many_keys(self, get_mocked, update_mocked, _quota_mocked): req = fakes.HTTPRequest.blank(self.base_path + '123/metadata/key1') req.method = 'PUT' body = {"meta": {"foo": "bar"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, '123', 'key1', body=body) self.assertFalse(get_mocked.called) self.assertFalse(update_mocked.called) @mock.patch(CHK_QUOTA_STR) @mock.patch('nova.image.glance.API.update') @mock.patch('nova.image.glance.API.get', return_value=get_image_123()) def test_update_item_body_uri_mismatch(self, _get_mocked, update_mocked, quota_mocked): req = fakes.HTTPRequest.blank(self.base_path + '123/metadata/bad') req.method = 'PUT' body = {"meta": {"key1": "value1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, '123', 'bad', body=body) self.assertFalse(quota_mocked.called) self.assertFalse(update_mocked.called) @mock.patch('nova.image.glance.API.update') @mock.patch('nova.image.glance.API.get', return_value=get_image_123()) def test_delete(self, _get_mocked, update_mocked): req = fakes.HTTPRequest.blank(self.base_path + '123/metadata/key1') req.method = 'DELETE' res = self.controller.delete(req, '123', 'key1') expected = copy.deepcopy(get_image_123()) expected['properties'] = {} update_mocked.assert_called_once_with(mock.ANY, '123', expected, data=None, purge_props=True) self.assertIsNone(res) @mock.patch('nova.image.glance.API.get', return_value=get_image_123()) def test_delete_not_found(self, _get_mocked): req = fakes.HTTPRequest.blank(self.base_path + '123/metadata/blah') req.method = 'DELETE' self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, req, '123', 'blah') @mock.patch('nova.image.glance.API.get', side_effect=exception.ImageNotFound(image_id='100')) def test_delete_image_not_found(self, _get_mocked): req = fakes.HTTPRequest.blank(self.base_path + '100/metadata/key1') req.method = 'DELETE' self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, req, '100', 'key1') @mock.patch(CHK_QUOTA_STR, side_effect=webob.exc.HTTPForbidden(explanation='')) @mock.patch('nova.image.glance.API.update') @mock.patch('nova.image.glance.API.get', return_value=get_image_123()) def test_too_many_metadata_items_on_create(self, _get_mocked, update_mocked, _quota_mocked): body = {"metadata": {"foo": "bar"}} req = fakes.HTTPRequest.blank(self.base_path + '123/metadata') req.method = 'POST' req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, req, '123', body=body) self.assertFalse(update_mocked.called) @mock.patch(CHK_QUOTA_STR, side_effect=webob.exc.HTTPForbidden(explanation='')) @mock.patch('nova.image.glance.API.update') @mock.patch('nova.image.glance.API.get', return_value=get_image_123()) def test_too_many_metadata_items_on_put(self, _get_mocked, update_mocked, _quota_mocked): req = fakes.HTTPRequest.blank(self.base_path + '123/metadata/blah') req.method = 'PUT' body = {"meta": {"blah": "blah", "blah1": "blah1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = 
"application/json" self.assertRaises(self.invalid_request, self.controller.update, req, '123', 'blah', body=body) self.assertFalse(update_mocked.called) @mock.patch('nova.image.glance.API.get', side_effect=exception.ImageNotAuthorized(image_id='123')) def test_image_not_authorized_update(self, _get_mocked): req = fakes.HTTPRequest.blank(self.base_path + '123/metadata/key1') req.method = 'PUT' body = {"meta": {"key1": "value1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPForbidden, self.controller.update, req, '123', 'key1', body=body) @mock.patch('nova.image.glance.API.get', side_effect=exception.ImageNotAuthorized(image_id='123')) def test_image_not_authorized_update_all(self, _get_mocked): image_id = 131 # see nova.tests.unit.api.openstack.fakes:_make_image_fixtures req = fakes.HTTPRequest.blank(self.base_path + '%s/metadata/key1' % image_id) req.method = 'PUT' body = {"metadata": {"key1": "value1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPForbidden, self.controller.update_all, req, image_id, body=body) @mock.patch('nova.image.glance.API.get', side_effect=exception.ImageNotAuthorized(image_id='123')) def test_image_not_authorized_create(self, _get_mocked): image_id = 131 # see nova.tests.unit.api.openstack.fakes:_make_image_fixtures req = fakes.HTTPRequest.blank(self.base_path + '%s/metadata/key1' % image_id) req.method = 'POST' body = {"metadata": {"key1": "value1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, req, image_id, body=body) class ImageMetadataControllerV239(test.NoDBTestCase): def setUp(self): super(ImageMetadataControllerV239, self).setUp() self.controller = image_metadata_v21.ImageMetadataController() self.req = fakes.HTTPRequest.blank('', version='2.39') def test_not_found_for_all_image_metadata_api(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.index, self.req) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.show, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.create, self.req, fakes.FAKE_UUID, {'metadata': {}}) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.update, self.req, fakes.FAKE_UUID, 'id', {'metadata': {}}) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.update_all, self.req, fakes.FAKE_UUID, {'metadata': {}}) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.delete, self.req, fakes.FAKE_UUID) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_images.py0000664000175000017500000004652000000000000024753 0ustar00zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """ Tests of the new image services, both as a service layer, and as a WSGI layer """ import copy import mock import six.moves.urllib.parse as urlparse import webob from nova.api.openstack.compute import images as images_v21 from nova.api.openstack.compute.views import images as images_view from nova import exception from nova.image import glance from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import image_fixtures from nova.tests.unit import matchers NS = "{http://docs.openstack.org/compute/api/v1.1}" ATOMNS = "{http://www.w3.org/2005/Atom}" NOW_API_FORMAT = "2010-10-11T10:30:22Z" IMAGE_FIXTURES = image_fixtures.get_image_fixtures() class ImagesControllerTestV21(test.NoDBTestCase): """Test of the OpenStack API /images application controller w/Glance. """ image_controller_class = images_v21.ImagesController url_base = '/v2/%s' % fakes.FAKE_PROJECT_ID bookmark_base = '/%s' % fakes.FAKE_PROJECT_ID http_request = fakes.HTTPRequestV21 def setUp(self): """Run before each test.""" super(ImagesControllerTestV21, self).setUp() self.flags(api_servers=['http://localhost:9292'], group='glance') fakes.stub_out_networking(self) fakes.stub_out_key_pair_funcs(self) fakes.stub_out_compute_api_snapshot(self) fakes.stub_out_compute_api_backup(self) self.controller = self.image_controller_class() self.url_prefix = "http://localhost%s/images" % self.url_base self.bookmark_prefix = "http://localhost%s/images" % self.bookmark_base self.uuid = 'fa95aaf5-ab3b-4cd8-88c0-2be7dd051aaf' self.server_uuid = "aa640691-d1a7-4a67-9d3c-d35ee6b3cc74" self.server_href = ( "http://localhost%s/servers/%s" % (self.url_base, self.server_uuid)) self.server_bookmark = ( "http://localhost%s/servers/%s" % (self.bookmark_base, self.server_uuid)) self.alternate = "%s/images/%s" self.expected_image_123 = { "image": {'id': '123', 'name': 'public image', 'metadata': {'key1': 'value1'}, 'updated': NOW_API_FORMAT, 'created': NOW_API_FORMAT, 'status': 'ACTIVE', 'minDisk': 10, 'progress': 100, 'minRam': 128, 'OS-EXT-IMG-SIZE:size': 25165824, "links": [{ "rel": "self", "href": "%s/123" % self.url_prefix }, { "rel": "bookmark", "href": "%s/123" % self.bookmark_prefix }, { "rel": "alternate", "type": "application/vnd.openstack.image", "href": self.alternate % (glance.generate_glance_url('ctx'), 123), }], }, } self.expected_image_124 = { "image": {'id': '124', 'name': 'queued snapshot', 'metadata': { u'instance_uuid': self.server_uuid, u'user_id': u'fake', }, 'updated': NOW_API_FORMAT, 'created': NOW_API_FORMAT, 'status': 'SAVING', 'progress': 25, 'minDisk': 0, 'minRam': 0, 'OS-EXT-IMG-SIZE:size': 25165824, 'server': { 'id': self.server_uuid, "links": [{ "rel": "self", "href": self.server_href, }, { "rel": "bookmark", "href": self.server_bookmark, }], }, "links": [{ "rel": "self", "href": "%s/124" % self.url_prefix }, { "rel": "bookmark", "href": "%s/124" % self.bookmark_prefix }, { "rel": "alternate", "type": "application/vnd.openstack.image", "href": self.alternate % (glance.generate_glance_url('ctx'), 124), }], }, } @mock.patch('nova.image.glance.API.get', return_value=IMAGE_FIXTURES[0]) def test_get_image(self, get_mocked): request = self.http_request.blank(self.url_base + 'images/123') actual_image = self.controller.show(request, '123') self.assertThat(actual_image, matchers.DictMatches(self.expected_image_123)) get_mocked.assert_called_once_with(mock.ANY, '123') 
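# ---------------------------------------------------------------------------
# NOTE (illustrative sketch, not part of the original test module): the
# expected fixtures built in setUp() above pair each image with three links:
# a versioned "self" link under url_prefix, an unversioned "bookmark" link
# under bookmark_prefix, and an "alternate" link pointing at the Glance URL.
# A minimal sketch of how such a trio could be composed (function name and
# signature are hypothetical; the real builder is the ViewBuilder in
# nova.api.openstack.compute.views.images imported above):
def _example_image_links(url_prefix, bookmark_prefix, glance_url, image_id):
    return [
        {'rel': 'self', 'href': '%s/%s' % (url_prefix, image_id)},
        {'rel': 'bookmark', 'href': '%s/%s' % (bookmark_prefix, image_id)},
        {'rel': 'alternate',
         'type': 'application/vnd.openstack.image',
         'href': '%s/images/%s' % (glance_url, image_id)},
    ]
# test_get_image_with_custom_prefix below exercises the same link shape with
# the api.compute_link_prefix / api.glance_link_prefix options overridden.
# ---------------------------------------------------------------------------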
@mock.patch('nova.image.glance.API.get', return_value=IMAGE_FIXTURES[1]) def test_get_image_with_custom_prefix(self, _get_mocked): self.flags(compute_link_prefix='https://zoo.com:42', glance_link_prefix='http://circus.com:34', group='api') fake_req = self.http_request.blank(self.url_base + 'images/124') actual_image = self.controller.show(fake_req, '124') expected_image = self.expected_image_124 expected_image["image"]["links"][0]["href"] = ( "https://zoo.com:42%s/images/124" % self.url_base) expected_image["image"]["links"][1]["href"] = ( "https://zoo.com:42%s/images/124" % self.bookmark_base) expected_image["image"]["links"][2]["href"] = ( "http://circus.com:34/images/124") expected_image["image"]["server"]["links"][0]["href"] = ( "https://zoo.com:42%s/servers/%s" % (self.url_base, self.server_uuid)) expected_image["image"]["server"]["links"][1]["href"] = ( "https://zoo.com:42%s/servers/%s" % (self.bookmark_base, self.server_uuid)) self.assertThat(actual_image, matchers.DictMatches(expected_image)) @mock.patch('nova.image.glance.API.get', side_effect=exception.ImageNotFound(image_id='')) def test_get_image_404(self, _get_mocked): fake_req = self.http_request.blank(self.url_base + 'images/unknown') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, fake_req, 'unknown') @mock.patch('nova.image.glance.API.get_all', return_value=IMAGE_FIXTURES) def test_get_image_details(self, get_all_mocked): request = self.http_request.blank(self.url_base + 'images/detail') response = self.controller.detail(request) get_all_mocked.assert_called_once_with(mock.ANY, filters={}) response_list = response["images"] image_125 = copy.deepcopy(self.expected_image_124["image"]) image_125['id'] = '125' image_125['name'] = 'saving snapshot' image_125['progress'] = 50 image_125["links"][0]["href"] = "%s/125" % self.url_prefix image_125["links"][1]["href"] = "%s/125" % self.bookmark_prefix image_125["links"][2]["href"] = ( "%s/images/125" % glance.generate_glance_url('ctx')) image_126 = copy.deepcopy(self.expected_image_124["image"]) image_126['id'] = '126' image_126['name'] = 'active snapshot' image_126['status'] = 'ACTIVE' image_126['progress'] = 100 image_126["links"][0]["href"] = "%s/126" % self.url_prefix image_126["links"][1]["href"] = "%s/126" % self.bookmark_prefix image_126["links"][2]["href"] = ( "%s/images/126" % glance.generate_glance_url('ctx')) image_127 = copy.deepcopy(self.expected_image_124["image"]) image_127['id'] = '127' image_127['name'] = 'killed snapshot' image_127['status'] = 'ERROR' image_127['progress'] = 0 image_127["links"][0]["href"] = "%s/127" % self.url_prefix image_127["links"][1]["href"] = "%s/127" % self.bookmark_prefix image_127["links"][2]["href"] = ( "%s/images/127" % glance.generate_glance_url('ctx')) image_128 = copy.deepcopy(self.expected_image_124["image"]) image_128['id'] = '128' image_128['name'] = 'deleted snapshot' image_128['status'] = 'DELETED' image_128['progress'] = 0 image_128["links"][0]["href"] = "%s/128" % self.url_prefix image_128["links"][1]["href"] = "%s/128" % self.bookmark_prefix image_128["links"][2]["href"] = ( "%s/images/128" % glance.generate_glance_url('ctx')) image_129 = copy.deepcopy(self.expected_image_124["image"]) image_129['id'] = '129' image_129['name'] = 'pending_delete snapshot' image_129['status'] = 'DELETED' image_129['progress'] = 0 image_129["links"][0]["href"] = "%s/129" % self.url_prefix image_129["links"][1]["href"] = "%s/129" % self.bookmark_prefix image_129["links"][2]["href"] = ( "%s/images/129" % 
glance.generate_glance_url('ctx')) image_130 = copy.deepcopy(self.expected_image_123["image"]) image_130['id'] = '130' image_130['name'] = None image_130['metadata'] = {} image_130['minDisk'] = 0 image_130['minRam'] = 0 image_130["links"][0]["href"] = "%s/130" % self.url_prefix image_130["links"][1]["href"] = "%s/130" % self.bookmark_prefix image_130["links"][2]["href"] = ( "%s/images/130" % glance.generate_glance_url('ctx')) image_131 = copy.deepcopy(self.expected_image_123["image"]) image_131['id'] = '131' image_131['name'] = None image_131['metadata'] = {} image_131['minDisk'] = 0 image_131['minRam'] = 0 image_131["links"][0]["href"] = "%s/131" % self.url_prefix image_131["links"][1]["href"] = "%s/131" % self.bookmark_prefix image_131["links"][2]["href"] = ( "%s/images/131" % glance.generate_glance_url('ctx')) expected = [self.expected_image_123["image"], self.expected_image_124["image"], image_125, image_126, image_127, image_128, image_129, image_130, image_131] self.assertThat(expected, matchers.DictListMatches(response_list)) @mock.patch('nova.image.glance.API.get_all') def test_get_image_details_with_limit(self, get_all_mocked): request = self.http_request.blank(self.url_base + 'images/detail?limit=2') self.controller.detail(request) get_all_mocked.assert_called_once_with(mock.ANY, limit=2, filters={}) @mock.patch('nova.image.glance.API.get_all') def test_get_image_details_with_limit_and_page_size(self, get_all_mocked): request = self.http_request.blank( self.url_base + 'images/detail?limit=2&page_size=1') self.controller.detail(request) get_all_mocked.assert_called_once_with(mock.ANY, limit=2, filters={}, page_size=1) @mock.patch('nova.image.glance.API.get_all') def _detail_request(self, filters, request, get_all_mocked): self.controller.detail(request) get_all_mocked.assert_called_once_with(mock.ANY, filters=filters) def test_image_detail_filter_with_name(self): filters = {'name': 'testname'} request = self.http_request.blank(self.url_base + 'images/detail' '?name=testname') self._detail_request(filters, request) def test_image_detail_filter_with_status(self): filters = {'status': 'active'} request = self.http_request.blank(self.url_base + 'images/detail' '?status=ACTIVE') self._detail_request(filters, request) def test_image_detail_filter_with_property(self): filters = {'property-test': '3'} request = self.http_request.blank(self.url_base + 'images/detail' '?property-test=3') self._detail_request(filters, request) def test_image_detail_filter_server_href(self): filters = {'property-instance_uuid': self.uuid} request = self.http_request.blank( self.url_base + 'images/detail?server=' + self.uuid) self._detail_request(filters, request) def test_image_detail_filter_server_uuid(self): filters = {'property-instance_uuid': self.uuid} request = self.http_request.blank( self.url_base + 'images/detail?server=' + self.uuid) self._detail_request(filters, request) def test_image_detail_filter_changes_since(self): filters = {'changes-since': '2011-01-24T17:08Z'} request = self.http_request.blank(self.url_base + 'images/detail' '?changes-since=2011-01-24T17:08Z') self._detail_request(filters, request) def test_image_detail_filter_with_type(self): filters = {'property-image_type': 'BASE'} request = self.http_request.blank( self.url_base + 'images/detail?type=BASE') self._detail_request(filters, request) def test_image_detail_filter_not_supported(self): filters = {'status': 'active'} request = self.http_request.blank( self.url_base + 'images/detail?status=' 'ACTIVE&UNSUPPORTEDFILTER=testname') 
self._detail_request(filters, request) def test_image_detail_no_filters(self): filters = {} request = self.http_request.blank(self.url_base + 'images/detail') self._detail_request(filters, request) @mock.patch('nova.image.glance.API.get_all', side_effect=exception.Invalid) def test_image_detail_invalid_marker(self, _get_all_mocked): request = self.http_request.blank(self.url_base + '?marker=invalid') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.detail, request) def test_generate_alternate_link(self): view = images_view.ViewBuilder() request = self.http_request.blank(self.url_base + 'images/1') generated_url = view._get_alternate_link(request, 1) actual_url = "%s/images/1" % glance.generate_glance_url('ctx') self.assertEqual(generated_url, actual_url) def _check_response(self, controller_method, response, expected_code): self.assertEqual(expected_code, controller_method.wsgi_code) @mock.patch('nova.image.glance.API.delete') def test_delete_image(self, delete_mocked): request = self.http_request.blank(self.url_base + 'images/124') request.method = 'DELETE' delete_method = self.controller.delete response = delete_method(request, '124') self._check_response(delete_method, response, 204) delete_mocked.assert_called_once_with(mock.ANY, '124') @mock.patch('nova.image.glance.API.delete', side_effect=exception.ImageNotAuthorized(image_id='123')) def test_delete_deleted_image(self, _delete_mocked): # If you try to delete a deleted image, you get back 403 Forbidden. request = self.http_request.blank(self.url_base + 'images/123') request.method = 'DELETE' self.assertRaises(webob.exc.HTTPForbidden, self.controller.delete, request, '123') @mock.patch('nova.image.glance.API.delete', side_effect=exception.ImageNotFound(image_id='123')) def test_delete_image_not_found(self, _delete_mocked): request = self.http_request.blank(self.url_base + 'images/300') request.method = 'DELETE' self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, request, '300') @mock.patch('nova.image.glance.API.get_all', return_value=[IMAGE_FIXTURES[0]]) def test_get_image_next_link(self, get_all_mocked): request = self.http_request.blank( self.url_base + 'imagesl?limit=1') response = self.controller.index(request) response_links = response['images_links'] href_parts = urlparse.urlparse(response_links[0]['href']) self.assertEqual(self.url_base + '/images', href_parts.path) params = urlparse.parse_qs(href_parts.query) self.assertThat({'limit': ['1'], 'marker': [IMAGE_FIXTURES[0]['id']]}, matchers.DictMatches(params)) @mock.patch('nova.image.glance.API.get_all', return_value=[IMAGE_FIXTURES[0]]) def test_get_image_details_next_link(self, get_all_mocked): request = self.http_request.blank( self.url_base + 'images/detail?limit=1') response = self.controller.detail(request) response_links = response['images_links'] href_parts = urlparse.urlparse(response_links[0]['href']) self.assertEqual(self.url_base + '/images/detail', href_parts.path) params = urlparse.parse_qs(href_parts.query) self.assertThat({'limit': ['1'], 'marker': [IMAGE_FIXTURES[0]['id']]}, matchers.DictMatches(params)) class ImagesControllerDeprecationTest(test.NoDBTestCase): def setUp(self): super(ImagesControllerDeprecationTest, self).setUp() self.controller = images_v21.ImagesController() self.req = fakes.HTTPRequest.blank('', version='2.36') def test_not_found_for_all_images_api(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.show, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, 
self.controller.delete, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.index, self.req) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.detail, self.req) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_instance_actions.py0000664000175000017500000005064200000000000027032 0ustar00zuulzuul00000000000000# Copyright 2013 Rackspace Hosting # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import datetime import iso8601 import mock from oslo_policy import policy as oslo_policy from oslo_utils.fixture import uuidsentinel as uuids import six from webob import exc from nova.api.openstack.compute import instance_actions as instance_actions_v21 from nova.api.openstack import wsgi as os_wsgi from nova.compute import api as compute_api from nova.db.sqlalchemy import models from nova import exception from nova import objects from nova import policy from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit import fake_server_actions from nova import utils FAKE_UUID = fake_server_actions.FAKE_UUID FAKE_REQUEST_ID = fake_server_actions.FAKE_REQUEST_ID1 FAKE_EVENT_ID = fake_server_actions.FAKE_ACTION_ID1 FAKE_REQUEST_NOTFOUND_ID = 'req-' + uuids.req_not_found def format_action(action, expect_traceback=True, expect_host=False, expect_hostId=False): '''Remove keys that aren't serialized.''' to_delete = ('id', 'finish_time', 'created_at', 'updated_at', 'deleted_at', 'deleted') for key in to_delete: if key in action: del(action[key]) if 'start_time' in action: # NOTE(danms): Without WSGI above us, these will be just stringified action['start_time'] = str(action['start_time'].replace(tzinfo=None)) for event in action.get('events', []): format_event(event, action.get('project_id'), expect_traceback=expect_traceback, expect_host=expect_host, expect_hostId=expect_hostId) return action def format_event(event, project_id, expect_traceback=True, expect_host=False, expect_hostId=False): '''Remove keys that aren't serialized.''' to_delete = ['id', 'created_at', 'updated_at', 'deleted_at', 'deleted', 'action_id', 'details'] if not expect_traceback: to_delete.append('traceback') if not expect_host: to_delete.append('host') if not expect_hostId: to_delete.append('hostId') for key in to_delete: if key in event: del(event[key]) if 'start_time' in event: # NOTE(danms): Without WSGI above us, these will be just stringified event['start_time'] = str(event['start_time'].replace(tzinfo=None)) if 'finish_time' in event: # NOTE(danms): Without WSGI above us, these will be just stringified event['finish_time'] = str(event['finish_time'].replace(tzinfo=None)) return event class InstanceActionsTestV21(test.NoDBTestCase): instance_actions = instance_actions_v21 wsgi_api_version = os_wsgi.DEFAULT_API_VERSION expect_events_non_admin = False 
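    # Descriptive note (added): these class-level flags describe what the API
    # is expected to expose at this test's microversion; the subclasses below
    # flip them (2.51 shows events to non-admins, 2.62 adds 'host'/'hostId'
    # to event output).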
expect_event_hostId = False expect_event_host = False def fake_get(self, context, instance_uuid, expected_attrs=None, cell_down_support=False): return objects.Instance( context, id=1, uuid=instance_uuid, project_id=context.project_id) def setUp(self): super(InstanceActionsTestV21, self).setUp() self.controller = self.instance_actions.InstanceActionsController() self.fake_actions = copy.deepcopy(fake_server_actions.FAKE_ACTIONS) self.fake_events = copy.deepcopy(fake_server_actions.FAKE_EVENTS) get_patcher = mock.patch.object(compute_api.API, 'get', side_effect=self.fake_get) self.addCleanup(get_patcher.stop) self.mock_get = get_patcher.start() def _get_http_req(self, action, use_admin_context=False): fake_url = '/%s/servers/12/%s' % (fakes.FAKE_PROJECT_ID, action) return fakes.HTTPRequest.blank(fake_url, use_admin_context=use_admin_context, version=self.wsgi_api_version) def _get_http_req_with_version(self, action, use_admin_context=False, version="2.21"): fake_url = '/%s/servers/12/%s' % (fakes.FAKE_PROJECT_ID, action) return fakes.HTTPRequest.blank(fake_url, use_admin_context=use_admin_context, version=version) def _set_policy_rules(self): rules = {'compute:get': '', 'os_compute_api:os-instance-actions:show': '', 'os_compute_api:os-instance-actions:events': 'is_admin:True'} policy.set_rules(oslo_policy.Rules.from_dict(rules)) def test_list_actions(self): def fake_get_actions(context, uuid, limit=None, marker=None, filters=None): actions = [] for act in six.itervalues(self.fake_actions[uuid]): action = models.InstanceAction() action.update(act) actions.append(action) return actions self.stub_out('nova.db.api.actions_get', fake_get_actions) req = self._get_http_req('os-instance-actions') res_dict = self.controller.index(req, FAKE_UUID) for res in res_dict['instanceActions']: fake_action = self.fake_actions[FAKE_UUID][res['request_id']] self.assertEqual(format_action(fake_action), format_action(res)) def test_get_action_with_events_allowed(self): def fake_get_action(context, uuid, request_id): action = models.InstanceAction() action.update(self.fake_actions[uuid][request_id]) return action def fake_get_events(context, action_id): events = [] for evt in self.fake_events[action_id]: event = models.InstanceActionEvent() event.update(evt) events.append(event) return events self.stub_out('nova.db.api.action_get_by_request_id', fake_get_action) self.stub_out('nova.db.api.action_events_get', fake_get_events) req = self._get_http_req('os-instance-actions/1', use_admin_context=True) res_dict = self.controller.show(req, FAKE_UUID, FAKE_REQUEST_ID) fake_action = self.fake_actions[FAKE_UUID][FAKE_REQUEST_ID] fake_events = self.fake_events[fake_action['id']] fake_action['events'] = fake_events self.assertEqual(format_action(fake_action, expect_host=self.expect_event_host, expect_hostId=self.expect_event_hostId), format_action(res_dict['instanceAction'], expect_host=self.expect_event_host, expect_hostId=self.expect_event_hostId)) def test_get_action_with_events_not_allowed(self): def fake_get_action(context, uuid, request_id): return self.fake_actions[uuid][request_id] def fake_get_events(context, action_id): return self.fake_events[action_id] self.stub_out('nova.db.api.action_get_by_request_id', fake_get_action) self.stub_out('nova.db.api.action_events_get', fake_get_events) self._set_policy_rules() req = self._get_http_req('os-instance-actions/1') res_dict = self.controller.show(req, FAKE_UUID, FAKE_REQUEST_ID) fake_action = self.fake_actions[FAKE_UUID][FAKE_REQUEST_ID] if self.expect_events_non_admin: 
fake_event = fake_server_actions.FAKE_EVENTS[FAKE_EVENT_ID] fake_action['events'] = copy.deepcopy(fake_event) # By default, non-admins are not allowed to see traceback details # and event host. self.assertEqual(format_action(fake_action, expect_traceback=False, expect_host=False, expect_hostId=self.expect_event_hostId), format_action(res_dict['instanceAction'], expect_traceback=False, expect_host=False, expect_hostId=self.expect_event_hostId)) def test_action_not_found(self): def fake_no_action(context, uuid, action_id): return None self.stub_out('nova.db.api.action_get_by_request_id', fake_no_action) req = self._get_http_req('os-instance-actions/1') self.assertRaises(exc.HTTPNotFound, self.controller.show, req, FAKE_UUID, FAKE_REQUEST_ID) def test_index_instance_not_found(self): self.mock_get.side_effect = exception.InstanceNotFound( instance_id=FAKE_UUID) req = self._get_http_req('os-instance-actions') self.assertRaises(exc.HTTPNotFound, self.controller.index, req, FAKE_UUID) self.mock_get.assert_called_once_with(req.environ['nova.context'], FAKE_UUID, expected_attrs=None, cell_down_support=False) def test_show_instance_not_found(self): self.mock_get.side_effect = exception.InstanceNotFound( instance_id=FAKE_UUID) req = self._get_http_req('os-instance-actions/fake') self.assertRaises(exc.HTTPNotFound, self.controller.show, req, FAKE_UUID, 'fake') self.mock_get.assert_called_once_with(req.environ['nova.context'], FAKE_UUID, expected_attrs=None, cell_down_support=False) class InstanceActionsTestV221(InstanceActionsTestV21): wsgi_api_version = "2.21" def fake_get(self, context, instance_uuid, expected_attrs=None, cell_down_support=False): self.assertEqual('yes', context.read_deleted) return objects.Instance( context, id=1, uuid=instance_uuid, project_id=context.project_id) class InstanceActionsTestV251(InstanceActionsTestV221): wsgi_api_version = "2.51" expect_events_non_admin = True class InstanceActionsTestV258(InstanceActionsTestV251): wsgi_api_version = "2.58" @mock.patch('nova.objects.InstanceActionList.get_by_instance_uuid') def test_get_action_with_invalid_marker(self, mock_actions_get): """Tests detail paging with an invalid marker (not found).""" mock_actions_get.side_effect = exception.MarkerNotFound( marker=FAKE_REQUEST_NOTFOUND_ID) req = self._get_http_req('os-instance-actions?' 'marker=%s' % FAKE_REQUEST_NOTFOUND_ID) self.assertRaises(exc.HTTPBadRequest, self.controller.index, req, FAKE_UUID) def test_get_action_with_invalid_limit(self): """Tests get paging with an invalid limit.""" req = self._get_http_req('os-instance-actions?limit=x') self.assertRaises(exception.ValidationError, self.controller.index, req) req = self._get_http_req('os-instance-actions?limit=-1') self.assertRaises(exception.ValidationError, self.controller.index, req) def test_get_action_with_invalid_change_since(self): """Tests get paging with a invalid change_since.""" req = self._get_http_req('os-instance-actions?' 'changes-since=wrong_time') ex = self.assertRaises(exception.ValidationError, self.controller.index, req) self.assertIn('Invalid input for query parameters changes-since', six.text_type(ex)) def test_get_action_with_invalid_params(self): """Tests get paging with a invalid change_since.""" req = self._get_http_req('os-instance-actions?' 
'wrong_params=xxx') ex = self.assertRaises(exception.ValidationError, self.controller.index, req) self.assertIn('Additional properties are not allowed', six.text_type(ex)) def test_get_action_with_multi_params(self): """Tests get paging with multi markers.""" req = self._get_http_req('os-instance-actions?marker=A&marker=B') ex = self.assertRaises(exception.ValidationError, self.controller.index, req) self.assertIn('Invalid input for query parameters marker', six.text_type(ex)) class InstanceActionsTestV262(InstanceActionsTestV258): wsgi_api_version = "2.62" expect_event_hostId = True expect_event_host = True instance_project_id = '26cde4489f6749a08834741678df3c4a' def fake_get(self, context, instance_uuid, expected_attrs=None, cell_down_support=False): return objects.Instance(uuid=instance_uuid, project_id=self.instance_project_id) @mock.patch.object(compute_api.InstanceActionAPI, 'action_events_get') @mock.patch.object(compute_api.InstanceActionAPI, 'action_get_by_request_id') def test_get_action_with_events_project_id_none(self, mock_action_get, mock_action_events): fake_request_id = 'req-%s' % uuids.req1 mock_action_get.return_value = objects.InstanceAction( id=789, action='stop', instance_uuid=uuids.instance, request_id=fake_request_id, user_id=None, project_id=None, start_time=datetime.datetime(2019, 2, 28, 14, 28, 0, 0), finish_time=None, message='', created_at=None, updated_at=None, deleted_at=None, deleted=False) mock_action_events.return_value = [ objects.InstanceActionEvent( id=5, action_id=789, event='compute_stop_instance', start_time=datetime.datetime(2019, 2, 28, 14, 28, 0, 0), finish_time=datetime.datetime(2019, 2, 28, 14, 30, 0, 0), result='Success', traceback='', created_at=None, updated_at=None, deleted_at=None, deleted=False, host='host2')] req = self._get_http_req('os-instance-actions/1', use_admin_context=True) res_dict = self.controller.show(req, uuids.instance, fake_request_id) # Assert that 'project_id' is null (None) in the response self.assertIsNone(res_dict['instanceAction']['project_id']) self.assertEqual('host2', res_dict['instanceAction']['events'][0]['host']) # Assert that the 'hostId' is based on 'host' and the project ID # of the server self.assertEqual(utils.generate_hostid( res_dict['instanceAction']['events'][0]['host'], self.instance_project_id), res_dict['instanceAction']['events'][0]['hostId']) class InstanceActionsTestV266(InstanceActionsTestV258): wsgi_api_version = "2.66" def test_get_action_with_invalid_changes_before(self): """Tests get paging with a invalid changes-before.""" req = self._get_http_req('os-instance-actions?' 
'changes-before=wrong_time') ex = self.assertRaises(exception.ValidationError, self.controller.index, req) self.assertIn('Invalid input for query parameters changes-before', six.text_type(ex)) @mock.patch('nova.compute.api.InstanceActionAPI.actions_get') @mock.patch('nova.api.openstack.common.get_instance') def test_get_action_with_changes_since_and_changes_before( self, mock_get_instance, mock_action_get): param = 'changes-since=2012-12-05T00:00:00Z&' \ 'changes-before=2012-12-05T01:00:00Z' req = self._get_http_req_with_version('os-instance-actions?%s' % param, use_admin_context=True, version=self.wsgi_api_version) instance = fake_instance.fake_instance_obj(req.environ['nova.context']) mock_get_instance.return_value = instance self.controller.index(req, FAKE_UUID) filters = {'changes-since': datetime.datetime( 2012, 12, 5, 0, 0, tzinfo=iso8601.iso8601.UTC), 'changes-before': datetime.datetime( 2012, 12, 5, 1, 0, tzinfo=iso8601.iso8601.UTC)} mock_action_get.assert_called_once_with(req.environ['nova.context'], instance, limit=1000, marker=None, filters=filters) def test_instance_actions_filters_with_distinct_changes_time_bad_request( self): changes_since = '2018-09-04T05:45:27Z' changes_before = '2018-09-03T05:45:27Z' req = self._get_http_req('os-instance-actions?' 'changes-since=%s&changes-before=%s' % (changes_since, changes_before)) ex = self.assertRaises(exc.HTTPBadRequest, self.controller.index, req, FAKE_UUID) self.assertIn('The value of changes-since must be less than ' 'or equal to changes-before', six.text_type(ex)) def test_get_action_with_changes_before_old_microversion(self): """Tests that the changes-before query parameter is an error before microversion 2.66. """ param = 'changes-before=2018-09-13T15:13:03Z' req = self._get_http_req_with_version('os-instance-actions?%s' % param, use_admin_context=True, version="2.65") ex = self.assertRaises(exception.ValidationError, self.controller.index, req) detail = 'Additional properties are not allowed' self.assertIn(detail, six.text_type(ex)) class InstanceActionsTestV284(InstanceActionsTestV266): wsgi_api_version = "2.84" def _set_policy_rules(self, overwrite=True): rules = {'os_compute_api:os-instance-actions:show': '', 'os_compute_api:os-instance-actions:events:details': 'project_id:%(project_id)s'} policy.set_rules(oslo_policy.Rules.from_dict(rules), overwrite=overwrite) def test_show_action_with_details(self): def fake_get_action(context, uuid, request_id): return self.fake_actions[uuid][request_id] def fake_get_events(context, action_id): return self.fake_events[action_id] self.stub_out('nova.db.api.action_get_by_request_id', fake_get_action) self.stub_out('nova.db.api.action_events_get', fake_get_events) self._set_policy_rules(overwrite=False) req = self._get_http_req('os-instance-actions/1') res_dict = self.controller.show(req, FAKE_UUID, FAKE_REQUEST_ID) for event in res_dict['instanceAction']['events']: self.assertIn('details', event) def test_show_action_with_details_old_microversion(self): """Before microversion 2.84, we cannot get the details in events.""" def fake_get_action(context, uuid, request_id): return self.fake_actions[uuid][request_id] def fake_get_events(context, action_id): return self.fake_events[action_id] self.stub_out('nova.db.api.action_get_by_request_id', fake_get_action) self.stub_out('nova.db.api.action_events_get', fake_get_events) req = self._get_http_req_with_version('os-instance-actions/1', version="2.83") res_dict = self.controller.show(req, FAKE_UUID, FAKE_REQUEST_ID) for event in 
res_dict['instanceAction']['events']: self.assertNotIn('details', event) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_instance_usage_audit_log.py0000664000175000017500000001704000000000000030520 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime from oslo_utils import fixture as utils_fixture from nova.api.openstack.compute import instance_usage_audit_log as v21_ial from nova import context from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit.objects import test_service service_base = test_service.fake_service TEST_COMPUTE_SERVICES = [dict(service_base, host='foo', topic='compute'), dict(service_base, host='bar', topic='compute'), dict(service_base, host='baz', topic='compute'), dict(service_base, host='plonk', topic='compute'), dict(service_base, host='wibble', topic='bogus'), ] begin1 = datetime.datetime(2012, 7, 4, 6, 0, 0) begin2 = end1 = datetime.datetime(2012, 7, 5, 6, 0, 0) begin3 = end2 = datetime.datetime(2012, 7, 6, 6, 0, 0) end3 = datetime.datetime(2012, 7, 7, 6, 0, 0) # test data TEST_LOGS1 = [ # all services done, no errors. dict(host="plonk", period_beginning=begin1, period_ending=end1, state="DONE", errors=0, task_items=23, message="test1"), dict(host="baz", period_beginning=begin1, period_ending=end1, state="DONE", errors=0, task_items=17, message="test2"), dict(host="bar", period_beginning=begin1, period_ending=end1, state="DONE", errors=0, task_items=10, message="test3"), dict(host="foo", period_beginning=begin1, period_ending=end1, state="DONE", errors=0, task_items=7, message="test4"), ] TEST_LOGS2 = [ # some still running... dict(host="plonk", period_beginning=begin2, period_ending=end2, state="DONE", errors=0, task_items=23, message="test5"), dict(host="baz", period_beginning=begin2, period_ending=end2, state="DONE", errors=0, task_items=17, message="test6"), dict(host="bar", period_beginning=begin2, period_ending=end2, state="RUNNING", errors=0, task_items=10, message="test7"), dict(host="foo", period_beginning=begin2, period_ending=end2, state="DONE", errors=0, task_items=7, message="test8"), ] TEST_LOGS3 = [ # some errors.. 
dict(host="plonk", period_beginning=begin3, period_ending=end3, state="DONE", errors=0, task_items=23, message="test9"), dict(host="baz", period_beginning=begin3, period_ending=end3, state="DONE", errors=2, task_items=17, message="test10"), dict(host="bar", period_beginning=begin3, period_ending=end3, state="DONE", errors=0, task_items=10, message="test11"), dict(host="foo", period_beginning=begin3, period_ending=end3, state="DONE", errors=1, task_items=7, message="test12"), ] def fake_task_log_get_all(context, task_name, begin, end, host=None, state=None): assert task_name == "instance_usage_audit" if begin == begin1 and end == end1: return TEST_LOGS1 if begin == begin2 and end == end2: return TEST_LOGS2 if begin == begin3 and end == end3: return TEST_LOGS3 raise AssertionError("Invalid date %s to %s" % (begin, end)) def fake_last_completed_audit_period(unit=None, before=None): audit_periods = [(begin3, end3), (begin2, end2), (begin1, end1)] if before is not None: for begin, end in audit_periods: if before > end: return begin, end raise AssertionError("Invalid before date %s" % (before)) return begin1, end1 class InstanceUsageAuditLogTestV21(test.NoDBTestCase): def setUp(self): super(InstanceUsageAuditLogTestV21, self).setUp() self.context = context.get_admin_context() self.useFixture( utils_fixture.TimeFixture(datetime.datetime(2012, 7, 5, 10, 0, 0))) self._set_up_controller() self.host_api = self.controller.host_api def fake_service_get_all(context, disabled): self.assertIsNone(disabled) return TEST_COMPUTE_SERVICES self.stub_out('nova.utils.last_completed_audit_period', fake_last_completed_audit_period) self.stub_out('nova.db.api.service_get_all', fake_service_get_all) self.stub_out('nova.db.api.task_log_get_all', fake_task_log_get_all) self.req = fakes.HTTPRequest.blank('') def _set_up_controller(self): self.controller = v21_ial.InstanceUsageAuditLogController() def test_index(self): result = self.controller.index(self.req) self.assertIn('instance_usage_audit_logs', result) logs = result['instance_usage_audit_logs'] self.assertEqual(57, logs['total_instances']) self.assertEqual(0, logs['total_errors']) self.assertEqual(4, len(logs['log'])) self.assertEqual(4, logs['num_hosts']) self.assertEqual(4, logs['num_hosts_done']) self.assertEqual(0, logs['num_hosts_running']) self.assertEqual(0, logs['num_hosts_not_run']) self.assertEqual("ALL hosts done. 0 errors.", logs['overall_status']) def test_show(self): result = self.controller.show(self.req, '2012-07-05 10:00:00') self.assertIn('instance_usage_audit_log', result) logs = result['instance_usage_audit_log'] self.assertEqual(57, logs['total_instances']) self.assertEqual(0, logs['total_errors']) self.assertEqual(4, len(logs['log'])) self.assertEqual(4, logs['num_hosts']) self.assertEqual(4, logs['num_hosts_done']) self.assertEqual(0, logs['num_hosts_running']) self.assertEqual(0, logs['num_hosts_not_run']) self.assertEqual("ALL hosts done. 0 errors.", logs['overall_status']) def test_show_with_running(self): result = self.controller.show(self.req, '2012-07-06 10:00:00') self.assertIn('instance_usage_audit_log', result) logs = result['instance_usage_audit_log'] self.assertEqual(57, logs['total_instances']) self.assertEqual(0, logs['total_errors']) self.assertEqual(4, len(logs['log'])) self.assertEqual(4, logs['num_hosts']) self.assertEqual(3, logs['num_hosts_done']) self.assertEqual(1, logs['num_hosts_running']) self.assertEqual(0, logs['num_hosts_not_run']) self.assertEqual("3 of 4 hosts done. 
0 errors.", logs['overall_status']) def test_show_with_errors(self): result = self.controller.show(self.req, '2012-07-07 10:00:00') self.assertIn('instance_usage_audit_log', result) logs = result['instance_usage_audit_log'] self.assertEqual(57, logs['total_instances']) self.assertEqual(3, logs['total_errors']) self.assertEqual(4, len(logs['log'])) self.assertEqual(4, logs['num_hosts']) self.assertEqual(4, logs['num_hosts_done']) self.assertEqual(0, logs['num_hosts_running']) self.assertEqual(0, logs['num_hosts_not_run']) self.assertEqual("ALL hosts done. 3 errors.", logs['overall_status']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_keypairs.py0000664000175000017500000005156400000000000025341 0ustar00zuulzuul00000000000000# Copyright 2011 Eldar Nugaev # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import webob from nova.api.openstack.compute import keypairs as keypairs_v21 from nova.api.openstack import wsgi as os_wsgi from nova import context as nova_context from nova import exception from nova import objects from nova import quota from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit.objects import test_keypair QUOTAS = quota.QUOTAS keypair_data = { 'public_key': 'FAKE_KEY', 'fingerprint': 'FAKE_FINGERPRINT', } FAKE_UUID = 'b48316c5-71e8-45e4-9884-6c78055b9b13' def fake_keypair(name): return dict(test_keypair.fake_keypair, name=name, **keypair_data) def db_key_pair_get_all_by_user(self, user_id, limit, marker): return [fake_keypair('FAKE')] def db_key_pair_create(self, keypair): return fake_keypair(name=keypair['name']) def db_key_pair_destroy(context, user_id, name): if not (user_id and name): raise Exception() def db_key_pair_create_duplicate(context): raise exception.KeyPairExists(key_name='create_duplicate') class KeypairsTestV21(test.TestCase): base_url = '/v2/%s' % fakes.FAKE_PROJECT_ID validation_error = exception.ValidationError wsgi_api_version = os_wsgi.DEFAULT_API_VERSION def _setup_app_and_controller(self): self.app_server = fakes.wsgi_app_v21() self.controller = keypairs_v21.KeypairController() def setUp(self): super(KeypairsTestV21, self).setUp() fakes.stub_out_networking(self) fakes.stub_out_secgroup_api(self) self.stub_out("nova.db.api.key_pair_get_all_by_user", db_key_pair_get_all_by_user) self.stub_out("nova.db.api.key_pair_create", db_key_pair_create) self.stub_out("nova.db.api.key_pair_destroy", db_key_pair_destroy) self._setup_app_and_controller() self.req = fakes.HTTPRequest.blank('', version=self.wsgi_api_version) def test_keypair_list(self): res_dict = self.controller.index(self.req) response = {'keypairs': [{'keypair': dict(keypair_data, name='FAKE')}]} self.assertEqual(res_dict, response) def test_keypair_create(self): body = {'keypair': {'name': 'create_test'}} res_dict = self.controller.create(self.req, body=body) self.assertGreater(len(res_dict['keypair']['fingerprint']), 0) 
self.assertGreater(len(res_dict['keypair']['private_key']), 0) self._assert_keypair_type(res_dict) def _test_keypair_create_bad_request_case(self, body, exception): self.assertRaises(exception, self.controller.create, self.req, body=body) def test_keypair_create_with_empty_name(self): body = {'keypair': {'name': ''}} self._test_keypair_create_bad_request_case(body, self.validation_error) def test_keypair_create_with_name_too_long(self): body = { 'keypair': { 'name': 'a' * 256 } } self._test_keypair_create_bad_request_case(body, self.validation_error) def test_keypair_create_with_name_leading_trailing_spaces(self): body = { 'keypair': { 'name': ' test ' } } self._test_keypair_create_bad_request_case(body, self.validation_error) def test_keypair_create_with_name_leading_trailing_spaces_compat_mode( self): body = {'keypair': {'name': ' test '}} self.req.set_legacy_v2() res_dict = self.controller.create(self.req, body=body) self.assertEqual('test', res_dict['keypair']['name']) def test_keypair_create_with_non_alphanumeric_name(self): body = { 'keypair': { 'name': 'test/keypair' } } self._test_keypair_create_bad_request_case(body, webob.exc.HTTPBadRequest) def test_keypair_import_bad_key(self): body = { 'keypair': { 'name': 'create_test', 'public_key': 'ssh-what negative', }, } self._test_keypair_create_bad_request_case(body, webob.exc.HTTPBadRequest) def test_keypair_create_with_invalid_keypair_body(self): body = {'alpha': {'name': 'create_test'}} self._test_keypair_create_bad_request_case(body, self.validation_error) def test_keypair_import(self): body = { 'keypair': { 'name': 'create_test', 'public_key': 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDBYIznA' 'x9D7118Q1VKGpXy2HDiKyUTM8XcUuhQpo0srqb9rboUp4' 'a9NmCwpWpeElDLuva707GOUnfaBAvHBwsRXyxHJjRaI6Y' 'Qj2oLJwqvaSaWUbyT1vtryRqy6J3TecN0WINY71f4uymi' 'MZP0wby4bKBcYnac8KiCIlvkEl0ETjkOGUq8OyWRmn7lj' 'j5SESEUdBP0JnuTFKddWTU/wD6wydeJaUhBTqOlHn0kX1' 'GyqoNTE1UEhcM5ZRWgfUZfTjVyDF2kGj3vJLCJtJ8LoGc' 'j7YaN4uPg1rBle+izwE/tLonRrds+cev8p6krSSrxWOwB' 'bHkXa6OciiJDvkRzJXzf', }, } res_dict = self.controller.create(self.req, body=body) # FIXME(ja): Should we check that public_key was sent to create? 
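        # Descriptive note (added): unlike a generated keypair, an imported
        # keypair should report a fingerprint but never return 'private_key'
        # in the response, which the assertions below verify.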
self.assertGreater(len(res_dict['keypair']['fingerprint']), 0) self.assertNotIn('private_key', res_dict['keypair']) self._assert_keypair_type(res_dict) @mock.patch('nova.objects.Quotas.check_deltas') def test_keypair_import_quota_limit(self, mock_check): mock_check.side_effect = exception.OverQuota(overs='key_pairs', usages={'key_pairs': 100}) body = { 'keypair': { 'name': 'create_test', 'public_key': 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDBYIznA' 'x9D7118Q1VKGpXy2HDiKyUTM8XcUuhQpo0srqb9rboUp4' 'a9NmCwpWpeElDLuva707GOUnfaBAvHBwsRXyxHJjRaI6Y' 'Qj2oLJwqvaSaWUbyT1vtryRqy6J3TecN0WINY71f4uymi' 'MZP0wby4bKBcYnac8KiCIlvkEl0ETjkOGUq8OyWRmn7lj' 'j5SESEUdBP0JnuTFKddWTU/wD6wydeJaUhBTqOlHn0kX1' 'GyqoNTE1UEhcM5ZRWgfUZfTjVyDF2kGj3vJLCJtJ8LoGc' 'j7YaN4uPg1rBle+izwE/tLonRrds+cev8p6krSSrxWOwB' 'bHkXa6OciiJDvkRzJXzf', }, } ex = self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, self.req, body=body) self.assertIn('Quota exceeded, too many key pairs.', ex.explanation) @mock.patch('nova.objects.Quotas.check_deltas') def test_keypair_create_quota_limit(self, mock_check): mock_check.side_effect = exception.OverQuota(overs='key_pairs', usages={'key_pairs': 100}) body = { 'keypair': { 'name': 'create_test', }, } ex = self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, self.req, body=body) self.assertIn('Quota exceeded, too many key pairs.', ex.explanation) @mock.patch('nova.objects.Quotas.check_deltas') def test_keypair_create_over_quota_during_recheck(self, mock_check): # Simulate a race where the first check passes and the recheck fails. # First check occurs in compute/api. exc = exception.OverQuota(overs='key_pairs', usages={'key_pairs': 100}) mock_check.side_effect = [None, exc] body = { 'keypair': { 'name': 'create_test', }, } self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, self.req, body=body) ctxt = self.req.environ['nova.context'] self.assertEqual(2, mock_check.call_count) call1 = mock.call(ctxt, {'key_pairs': 1}, ctxt.user_id) call2 = mock.call(ctxt, {'key_pairs': 0}, ctxt.user_id) mock_check.assert_has_calls([call1, call2]) # Verify we removed the key pair that was added after the first # quota check passed. key_pairs = objects.KeyPairList.get_by_user(ctxt, ctxt.user_id) names = [key_pair.name for key_pair in key_pairs] self.assertNotIn('create_test', names) @mock.patch('nova.objects.Quotas.check_deltas') def test_keypair_create_no_quota_recheck(self, mock_check): # Disable recheck_quota. self.flags(recheck_quota=False, group='quota') body = { 'keypair': { 'name': 'create_test', }, } self.controller.create(self.req, body=body) ctxt = self.req.environ['nova.context'] # check_deltas should have been called only once. 
mock_check.assert_called_once_with(ctxt, {'key_pairs': 1}, ctxt.user_id) def test_keypair_create_duplicate(self): self.stub_out("nova.objects.KeyPair.create", db_key_pair_create_duplicate) body = {'keypair': {'name': 'create_duplicate'}} ex = self.assertRaises(webob.exc.HTTPConflict, self.controller.create, self.req, body=body) self.assertIn("Key pair 'create_duplicate' already exists.", ex.explanation) @mock.patch('nova.objects.KeyPair.get_by_name') def test_keypair_delete(self, mock_get_by_name): mock_get_by_name.return_value = objects.KeyPair( nova_context.get_admin_context(), **fake_keypair('FAKE')) self.controller.delete(self.req, 'FAKE') def test_keypair_get_keypair_not_found(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, 'DOESNOTEXIST') def test_keypair_delete_not_found(self): def db_key_pair_get_not_found(context, user_id, name): raise exception.KeypairNotFound(user_id=user_id, name=name) self.stub_out("nova.db.api.key_pair_destroy", db_key_pair_get_not_found) self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, 'FAKE') def test_keypair_show(self): def _db_key_pair_get(context, user_id, name): return dict(test_keypair.fake_keypair, name='foo', public_key='XXX', fingerprint='YYY', type='ssh') self.stub_out("nova.db.api.key_pair_get", _db_key_pair_get) res_dict = self.controller.show(self.req, 'FAKE') self.assertEqual('foo', res_dict['keypair']['name']) self.assertEqual('XXX', res_dict['keypair']['public_key']) self.assertEqual('YYY', res_dict['keypair']['fingerprint']) self._assert_keypair_type(res_dict) def test_keypair_show_not_found(self): def _db_key_pair_get(context, user_id, name): raise exception.KeypairNotFound(user_id=user_id, name=name) self.stub_out("nova.db.api.key_pair_get", _db_key_pair_get) self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, 'FAKE') def _assert_keypair_type(self, res_dict): self.assertNotIn('type', res_dict['keypair']) class KeypairsTestV22(KeypairsTestV21): wsgi_api_version = '2.2' def test_keypair_list(self): res_dict = self.controller.index(self.req) expected = {'keypairs': [{'keypair': dict(keypair_data, name='FAKE', type='ssh')}]} self.assertEqual(expected, res_dict) def _assert_keypair_type(self, res_dict): self.assertEqual('ssh', res_dict['keypair']['type']) def test_keypair_create_with_name_leading_trailing_spaces_compat_mode( self): pass def test_create_server_keypair_name_with_leading_trailing_compat_mode( self): pass class KeypairsTestV210(KeypairsTestV22): wsgi_api_version = '2.10' def test_keypair_create_with_name_leading_trailing_spaces_compat_mode( self): pass def test_create_server_keypair_name_with_leading_trailing_compat_mode( self): pass def test_keypair_list_other_user(self): req = fakes.HTTPRequest.blank(self.base_url + '/os-keypairs?user_id=foo', version=self.wsgi_api_version, use_admin_context=True) with mock.patch.object(self.controller.api, 'get_key_pairs') as mock_g: self.controller.index(req) userid = mock_g.call_args_list[0][0][1] self.assertEqual('foo', userid) def test_keypair_show_other_user(self): req = fakes.HTTPRequest.blank(self.base_url + '/os-keypairs/FAKE?user_id=foo', version=self.wsgi_api_version, use_admin_context=True) with mock.patch.object(self.controller.api, 'get_key_pair') as mock_g: self.controller.show(req, 'FAKE') userid = mock_g.call_args_list[0][0][1] self.assertEqual('foo', userid) def test_keypair_delete_other_user(self): req = fakes.HTTPRequest.blank(self.base_url + '/os-keypairs/FAKE?user_id=foo', 
version=self.wsgi_api_version, use_admin_context=True) with mock.patch.object(self.controller.api, 'delete_key_pair') as mock_g: self.controller.delete(req, 'FAKE') userid = mock_g.call_args_list[0][0][1] self.assertEqual('foo', userid) def test_keypair_create_other_user(self): req = fakes.HTTPRequest.blank(self.base_url + '/os-keypairs', version=self.wsgi_api_version, use_admin_context=True) body = {'keypair': {'name': 'create_test', 'user_id': '8861f37f-034e-4ca8-8abe-6d13c074574a'}} with mock.patch.object(self.controller.api, 'create_key_pair', return_value=(mock.MagicMock(), 1)) as mock_g: res = self.controller.create(req, body=body) userid = mock_g.call_args_list[0][0][1] self.assertEqual('8861f37f-034e-4ca8-8abe-6d13c074574a', userid) self.assertIn('keypair', res) def test_keypair_import_other_user(self): req = fakes.HTTPRequest.blank(self.base_url + '/os-keypairs', version=self.wsgi_api_version, use_admin_context=True) body = {'keypair': {'name': 'create_test', 'user_id': '8861f37f-034e-4ca8-8abe-6d13c074574a', 'public_key': 'public_key'}} with mock.patch.object(self.controller.api, 'import_key_pair') as mock_g: res = self.controller.create(req, body=body) userid = mock_g.call_args_list[0][0][1] self.assertEqual('8861f37f-034e-4ca8-8abe-6d13c074574a', userid) self.assertIn('keypair', res) def test_keypair_list_other_user_invalid_in_old_microversion(self): req = fakes.HTTPRequest.blank(self.base_url + '/os-keypairs?user_id=foo', version="2.9", use_admin_context=True) with mock.patch.object(self.controller.api, 'get_key_pairs') as mock_g: self.controller.index(req) userid = mock_g.call_args_list[0][0][1] self.assertEqual('fake_user', userid) class KeypairsTestV235(test.TestCase): base_url = '/v2/%s' % fakes.FAKE_PROJECT_ID wsgi_api_version = '2.35' def _setup_app_and_controller(self): self.app_server = fakes.wsgi_app_v21() self.controller = keypairs_v21.KeypairController() def setUp(self): super(KeypairsTestV235, self).setUp() self._setup_app_and_controller() @mock.patch("nova.db.api.key_pair_get_all_by_user") def test_keypair_list_limit_and_marker(self, mock_kp_get): mock_kp_get.side_effect = db_key_pair_get_all_by_user req = fakes.HTTPRequest.blank( self.base_url + '/os-keypairs?limit=3&marker=fake_marker', version=self.wsgi_api_version, use_admin_context=True) res_dict = self.controller.index(req) mock_kp_get.assert_called_once_with( req.environ['nova.context'], 'fake_user', limit=3, marker='fake_marker') response = {'keypairs': [{'keypair': dict(keypair_data, name='FAKE', type='ssh')}]} self.assertEqual(res_dict, response) @mock.patch('nova.compute.api.KeypairAPI.get_key_pairs') def test_keypair_list_limit_and_marker_invalid_marker(self, mock_kp_get): mock_kp_get.side_effect = exception.MarkerNotFound(marker='unknown_kp') req = fakes.HTTPRequest.blank( self.base_url + '/os-keypairs?limit=3&marker=unknown_kp', version=self.wsgi_api_version, use_admin_context=True) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req) def test_keypair_list_limit_and_marker_invalid_limit(self): req = fakes.HTTPRequest.blank( self.base_url + '/os-keypairs?limit=abc&marker=fake_marker', version=self.wsgi_api_version, use_admin_context=True) self.assertRaises(exception.ValidationError, self.controller.index, req) @mock.patch("nova.db.api.key_pair_get_all_by_user") def test_keypair_list_limit_and_marker_invalid_in_old_microversion( self, mock_kp_get): mock_kp_get.side_effect = db_key_pair_get_all_by_user req = fakes.HTTPRequest.blank( self.base_url + 
'/os-keypairs?limit=3&marker=fake_marker', version="2.30", use_admin_context=True) self.controller.index(req) mock_kp_get.assert_called_once_with( req.environ['nova.context'], 'fake_user', limit=None, marker=None) class KeypairsTestV275(test.TestCase): def setUp(self): super(KeypairsTestV275, self).setUp() self.controller = keypairs_v21.KeypairController() @mock.patch("nova.db.api.key_pair_get_all_by_user") @mock.patch('nova.objects.KeyPair.get_by_name') def test_keypair_list_additional_param_old_version(self, mock_get_by_name, mock_kp_get): req = fakes.HTTPRequest.blank( '/os-keypairs?unknown=3', version='2.74', use_admin_context=True) self.controller.index(req) self.controller.show(req, 1) with mock.patch.object(self.controller.api, 'delete_key_pair'): self.controller.delete(req, 1) def test_keypair_list_additional_param(self): req = fakes.HTTPRequest.blank( '/os-keypairs?unknown=3', version='2.75', use_admin_context=True) self.assertRaises(exception.ValidationError, self.controller.index, req) def test_keypair_show_additional_param(self): req = fakes.HTTPRequest.blank( '/os-keypairs?unknown=3', version='2.75', use_admin_context=True) self.assertRaises(exception.ValidationError, self.controller.show, req, 1) def test_keypair_delete_additional_param(self): req = fakes.HTTPRequest.blank( '/os-keypairs?unknown=3', version='2.75', use_admin_context=True) self.assertRaises(exception.ValidationError, self.controller.delete, req, 1) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_limits.py0000664000175000017500000004340700000000000025010 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests dealing with HTTP rate-limiting. 
""" import mock from oslo_serialization import jsonutils from oslo_utils import encodeutils from six.moves import http_client as httplib from six.moves import StringIO from nova.api.openstack.compute import limits as limits_v21 from nova.api.openstack.compute import views from nova.api.openstack import wsgi import nova.context from nova import exception from nova.policies import limits as l_policies from nova import quota from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import matchers class BaseLimitTestSuite(test.NoDBTestCase): """Base test suite which provides relevant stubs and time abstraction.""" def setUp(self): super(BaseLimitTestSuite, self).setUp() self.time = 0.0 self.absolute_limits = {} def stub_get_project_quotas(context, project_id, usages=True): return {k: dict(limit=v, in_use=v // 2) for k, v in self.absolute_limits.items()} mock_get_project_quotas = mock.patch.object( nova.quota.QUOTAS, "get_project_quotas", side_effect = stub_get_project_quotas) mock_get_project_quotas.start() self.addCleanup(mock_get_project_quotas.stop) patcher = self.mock_can = mock.patch('nova.context.RequestContext.can') self.mock_can = patcher.start() self.addCleanup(patcher.stop) def _get_time(self): """Return the "time" according to this test suite.""" return self.time class LimitsControllerTestV21(BaseLimitTestSuite): """Tests for `limits.LimitsController` class.""" limits_controller = limits_v21.LimitsController def setUp(self): """Run before each test.""" super(LimitsControllerTestV21, self).setUp() self.controller = wsgi.Resource(self.limits_controller()) self.ctrler = self.limits_controller() def _get_index_request(self, accept_header="application/json", tenant_id=None, user_id='testuser', project_id='testproject'): """Helper to set routing arguments.""" request = fakes.HTTPRequest.blank('', version='2.1') if tenant_id: request = fakes.HTTPRequest.blank('/?tenant_id=%s' % tenant_id, version='2.1') request.accept = accept_header request.environ["wsgiorg.routing_args"] = (None, { "action": "index", "controller": "", }) context = nova.context.RequestContext(user_id, project_id) request.environ["nova.context"] = context return request def test_empty_index_json(self): # Test getting empty limit details in JSON. request = self._get_index_request() response = request.get_response(self.controller) expected = { "limits": { "rate": [], "absolute": {}, }, } body = jsonutils.loads(response.body) self.assertEqual(expected, body) def test_index_json(self): self._test_index_json() def test_index_json_by_tenant(self): self._test_index_json('faketenant') def _test_index_json(self, tenant_id=None): # Test getting limit details in JSON. 
request = self._get_index_request(tenant_id=tenant_id) context = request.environ["nova.context"] if tenant_id is None: tenant_id = context.project_id self.absolute_limits = { 'ram': 512, 'instances': 5, 'cores': 21, 'key_pairs': 10, 'floating_ips': 10, 'security_groups': 10, 'security_group_rules': 20, } expected = { "limits": { "rate": [], "absolute": { "maxTotalRAMSize": 512, "maxTotalInstances": 5, "maxTotalCores": 21, "maxTotalKeypairs": 10, "maxTotalFloatingIps": 10, "maxSecurityGroups": 10, "maxSecurityGroupRules": 20, "totalRAMUsed": 256, "totalCoresUsed": 10, "totalInstancesUsed": 2, "totalFloatingIpsUsed": 5, "totalSecurityGroupsUsed": 5, }, }, } def _get_project_quotas(context, project_id, usages=True): return {k: dict(limit=v, in_use=v // 2) for k, v in self.absolute_limits.items()} with mock.patch('nova.quota.QUOTAS.get_project_quotas') as \ get_project_quotas: get_project_quotas.side_effect = _get_project_quotas response = request.get_response(self.controller) body = jsonutils.loads(response.body) self.assertEqual(expected, body) get_project_quotas.assert_called_once_with(context, tenant_id, usages=True) def _do_test_used_limits(self, reserved): request = self._get_index_request(tenant_id=None) quota_map = { 'totalRAMUsed': 'ram', 'totalCoresUsed': 'cores', 'totalInstancesUsed': 'instances', 'totalFloatingIpsUsed': 'floating_ips', 'totalSecurityGroupsUsed': 'security_groups', 'totalServerGroupsUsed': 'server_groups', } limits = {} expected_abs_limits = [] for display_name, q in quota_map.items(): limits[q] = {'limit': len(display_name), 'in_use': len(display_name) // 2, 'reserved': 0} expected_abs_limits.append(display_name) def stub_get_project_quotas(context, project_id, usages=True): return limits self.stub_out('nova.quota.QUOTAS.get_project_quotas', stub_get_project_quotas) res = request.get_response(self.controller) body = jsonutils.loads(res.body) abs_limits = body['limits']['absolute'] for limit in expected_abs_limits: value = abs_limits[limit] r = limits[quota_map[limit]]['reserved'] if reserved else 0 self.assertEqual(limits[quota_map[limit]]['in_use'] + r, value) def test_used_limits_basic(self): self._do_test_used_limits(False) def test_used_limits_with_reserved(self): self._do_test_used_limits(True) def test_admin_can_fetch_limits_for_a_given_tenant_id(self): project_id = "123456" user_id = "A1234" tenant_id = 'abcd' fake_req = self._get_index_request(tenant_id=tenant_id, user_id=user_id, project_id=project_id) context = fake_req.environ["nova.context"] with mock.patch.object(quota.QUOTAS, 'get_project_quotas', return_value={}) as mock_get_quotas: fake_req.get_response(self.controller) self.assertEqual(2, self.mock_can.call_count) self.mock_can.assert_called_with( l_policies.OTHER_PROJECT_LIMIT_POLICY_NAME, target={"project_id": tenant_id}) mock_get_quotas.assert_called_once_with(context, tenant_id, usages=True) def _test_admin_can_fetch_used_limits_for_own_project(self, req_get): project_id = "123456" if 'tenant_id' in req_get: project_id = req_get['tenant_id'] user_id = "A1234" fake_req = self._get_index_request(user_id=user_id, project_id=project_id) context = fake_req.environ["nova.context"] with mock.patch.object(quota.QUOTAS, 'get_project_quotas', return_value={}) as mock_get_quotas: fake_req.get_response(self.controller) mock_get_quotas.assert_called_once_with(context, project_id, usages=True) def test_admin_can_fetch_used_limits_for_own_project(self): req_get = {} self._test_admin_can_fetch_used_limits_for_own_project(req_get) def 
test_admin_can_fetch_used_limits_for_dummy_only(self): # for back compatible we allow additional param to be send to req.GET # it can be removed when we add restrictions to query param later req_get = {'dummy': 'dummy'} self._test_admin_can_fetch_used_limits_for_own_project(req_get) def test_admin_can_fetch_used_limits_with_positive_int(self): req_get = {'tenant_id': 123} self._test_admin_can_fetch_used_limits_for_own_project(req_get) def test_admin_can_fetch_used_limits_with_negative_int(self): req_get = {'tenant_id': -1} self._test_admin_can_fetch_used_limits_for_own_project(req_get) def test_admin_can_fetch_used_limits_with_unkown_param(self): req_get = {'tenant_id': '123', 'unknown': 'unknown'} self._test_admin_can_fetch_used_limits_for_own_project(req_get) def test_used_limits_fetched_for_context_project_id(self): project_id = "123456" fake_req = self._get_index_request(project_id=project_id) context = fake_req.environ["nova.context"] with mock.patch.object(quota.QUOTAS, 'get_project_quotas', return_value={}) as mock_get_quotas: fake_req.get_response(self.controller) mock_get_quotas.assert_called_once_with(context, project_id, usages=True) def test_used_ram_added(self): fake_req = self._get_index_request() def stub_get_project_quotas(context, project_id, usages=True): return {'ram': {'limit': 512, 'in_use': 256}} with mock.patch.object(quota.QUOTAS, 'get_project_quotas', side_effect=stub_get_project_quotas ) as mock_get_quotas: res = fake_req.get_response(self.controller) body = jsonutils.loads(res.body) abs_limits = body['limits']['absolute'] self.assertIn('totalRAMUsed', abs_limits) self.assertEqual(256, abs_limits['totalRAMUsed']) self.assertEqual(1, mock_get_quotas.call_count) def test_no_ram_quota(self): fake_req = self._get_index_request() with mock.patch.object(quota.QUOTAS, 'get_project_quotas', return_value={}) as mock_get_quotas: res = fake_req.get_response(self.controller) body = jsonutils.loads(res.body) abs_limits = body['limits']['absolute'] self.assertNotIn('totalRAMUsed', abs_limits) self.assertEqual(1, mock_get_quotas.call_count) class FakeHttplibSocket(object): """Fake `httplib.HTTPResponse` replacement.""" def __init__(self, response_string): """Initialize new `FakeHttplibSocket`.""" self._buffer = StringIO(response_string) def makefile(self, _mode, _other): """Returns the socket's internal buffer.""" return self._buffer class FakeHttplibConnection(object): """Fake `httplib.HTTPConnection`.""" def __init__(self, app, host): """Initialize `FakeHttplibConnection`.""" self.app = app self.host = host def request(self, method, path, body="", headers=None): """Requests made via this connection actually get translated and routed into our WSGI app, we then wait for the response and turn it back into an `httplib.HTTPResponse`. 
""" if not headers: headers = {} req = fakes.HTTPRequest.blank(path) req.method = method req.headers = headers req.host = self.host req.body = encodeutils.safe_encode(body) resp = str(req.get_response(self.app)) resp = "HTTP/1.0 %s" % resp sock = FakeHttplibSocket(resp) self.http_response = httplib.HTTPResponse(sock) self.http_response.begin() def getresponse(self): """Return our generated response from the request.""" return self.http_response class LimitsViewBuilderTest(test.NoDBTestCase): def setUp(self): super(LimitsViewBuilderTest, self).setUp() self.view_builder = views.limits.ViewBuilder() self.req = fakes.HTTPRequest.blank('/?tenant_id=None') self.rate_limits = [] self.absolute_limits = {"metadata_items": {'limit': 1, 'in_use': 1}, "injected_files": {'limit': 5, 'in_use': 1}, "injected_file_content_bytes": {'limit': 5, 'in_use': 1}} def test_build_limits(self): expected_limits = {"limits": { "rate": [], "absolute": {"maxServerMeta": 1, "maxImageMeta": 1, "maxPersonality": 5, "maxPersonalitySize": 5}}} output = self.view_builder.build(self.req, self.absolute_limits) self.assertThat(output, matchers.DictMatches(expected_limits)) def test_build_limits_empty_limits(self): expected_limits = {"limits": {"rate": [], "absolute": {}}} quotas = {} output = self.view_builder.build(self.req, quotas) self.assertThat(output, matchers.DictMatches(expected_limits)) class LimitsControllerTestV236(BaseLimitTestSuite): def setUp(self): super(LimitsControllerTestV236, self).setUp() self.controller = limits_v21.LimitsController() self.req = fakes.HTTPRequest.blank("/?tenant_id=faketenant", version='2.36') def test_index_filtered(self): absolute_limits = { 'ram': 512, 'instances': 5, 'cores': 21, 'key_pairs': 10, 'floating_ips': 10, 'security_groups': 10, 'security_group_rules': 20, } def _get_project_quotas(context, project_id, usages=True): return {k: dict(limit=v, in_use=v // 2) for k, v in absolute_limits.items()} with mock.patch('nova.quota.QUOTAS.get_project_quotas') as \ get_project_quotas: get_project_quotas.side_effect = _get_project_quotas response = self.controller.index(self.req) expected_response = { "limits": { "rate": [], "absolute": { "maxTotalRAMSize": 512, "maxTotalInstances": 5, "maxTotalCores": 21, "maxTotalKeypairs": 10, "totalRAMUsed": 256, "totalCoresUsed": 10, "totalInstancesUsed": 2, }, }, } self.assertEqual(expected_response, response) class LimitsControllerTestV239(BaseLimitTestSuite): def setUp(self): super(LimitsControllerTestV239, self).setUp() self.controller = limits_v21.LimitsController() self.req = fakes.HTTPRequest.blank("/?tenant_id=faketenant", version='2.39') def test_index_filtered_no_max_image_meta(self): absolute_limits = { "metadata_items": 1, } def _get_project_quotas(context, project_id, usages=True): return {k: dict(limit=v, in_use=v // 2) for k, v in absolute_limits.items()} with mock.patch('nova.quota.QUOTAS.get_project_quotas') as \ get_project_quotas: get_project_quotas.side_effect = _get_project_quotas response = self.controller.index(self.req) # staring from version 2.39 there is no 'maxImageMeta' field # in response after removing 'image-metadata' proxy API expected_response = { "limits": { "rate": [], "absolute": { "maxServerMeta": 1, }, }, } self.assertEqual(expected_response, response) class LimitsControllerTestV275(BaseLimitTestSuite): def setUp(self): super(LimitsControllerTestV275, self).setUp() self.controller = limits_v21.LimitsController() def test_index_additional_query_param_old_version(self): absolute_limits = { "metadata_items": 1, } req = 
fakes.HTTPRequest.blank("/?unkown=fake", version='2.74') def _get_project_quotas(context, project_id, usages=True): return {k: dict(limit=v, in_use=v // 2) for k, v in absolute_limits.items()} with mock.patch('nova.quota.QUOTAS.get_project_quotas') as \ get_project_quotas: get_project_quotas.side_effect = _get_project_quotas self.controller.index(req) def test_index_additional_query_param(self): req = fakes.HTTPRequest.blank("/?unkown=fake", version='2.75') self.assertRaises( exception.ValidationError, self.controller.index, req=req) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_lock_server.py0000664000175000017500000001235000000000000026016 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import six from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute import lock_server as lock_server_v21 from nova import exception from nova.tests.unit.api.openstack.compute import admin_only_action_common from nova.tests.unit import fake_instance class LockServerTestsV21(admin_only_action_common.CommonTests): lock_server = lock_server_v21 controller_name = 'LockServerController' _api_version = '2.1' def setUp(self): super(LockServerTestsV21, self).setUp() self.controller = getattr(self.lock_server, self.controller_name)() self.compute_api = self.controller.compute_api self.stub_out('nova.api.openstack.compute.lock_server.' 'LockServerController', lambda *a, **kw: self.controller) def test_lock_unlock(self): args_map = {'_lock': ((), {"reason": None})} body_map = {'_lock': {"lock": None}} self._test_actions(['_lock', '_unlock'], args_map=args_map, body_map=body_map) def test_lock_unlock_with_non_existed_instance(self): body_map = {'_lock': {"lock": None}} self._test_actions_with_non_existed_instance(['_lock', '_unlock'], body_map=body_map) @mock.patch.object(common, 'get_instance') def test_unlock_with_any_body(self, get_instance_mock): instance = fake_instance.fake_instance_obj( self.req.environ['nova.context']) get_instance_mock.return_value = instance # This will pass since there is no schema validation. body = {'unlock': {'blah': 'blah'}} with mock.patch.object(self.compute_api, 'unlock') as mock_lock: self.controller._unlock(self.req, instance.uuid, body=body) mock_lock.assert_called_once_with( self.req.environ['nova.context'], instance) @mock.patch.object(common, 'get_instance') def test_lock_with_empty_dict_body_is_valid(self, get_instance_mock): # Empty dict with no key in the body is allowed. 
instance = fake_instance.fake_instance_obj( self.req.environ['nova.context']) get_instance_mock.return_value = instance body = {'lock': {}} with mock.patch.object(self.compute_api, 'lock') as mock_lock: self.controller._lock(self.req, instance.uuid, body=body) mock_lock.assert_called_once_with( self.req.environ['nova.context'], instance, reason=None) class LockServerTestsV273(LockServerTestsV21): def setUp(self): super(LockServerTestsV273, self).setUp() self.req.api_version_request = api_version_request.APIVersionRequest( '2.73') @mock.patch.object(common, 'get_instance') def test_lock_with_reason_V273(self, get_instance_mock): instance = fake_instance.fake_instance_obj( self.req.environ['nova.context']) get_instance_mock.return_value = instance reason = "I don't want to work" body = {'lock': {"locked_reason": reason}} with mock.patch.object(self.compute_api, 'lock') as mock_lock: self.controller._lock(self.req, instance.uuid, body=body) mock_lock.assert_called_once_with( self.req.environ['nova.context'], instance, reason=reason) def test_lock_with_reason_exceeding_255_chars(self): instance = fake_instance.fake_instance_obj( self.req.environ['nova.context']) reason = 's' * 256 body = {'lock': {"locked_reason": reason}} exp = self.assertRaises(exception.ValidationError, self.controller._lock, self.req, instance.uuid, body=body) self.assertIn('is too long', six.text_type(exp)) def test_lock_with_reason_in_invalid_format(self): instance = fake_instance.fake_instance_obj( self.req.environ['nova.context']) reason = 256 body = {'lock': {"locked_reason": reason}} exp = self.assertRaises(exception.ValidationError, self.controller._lock, self.req, instance.uuid, body=body) self.assertIn("256 is not of type 'string'", six.text_type(exp)) def test_lock_with_invalid_paramater(self): # This will fail from 2.73 since we have a schema check that allows # only locked_reason instance = fake_instance.fake_instance_obj( self.req.environ['nova.context']) body = {'lock': {'blah': 'blah'}} exp = self.assertRaises(exception.ValidationError, self.controller._lock, self.req, instance.uuid, body=body) self.assertIn("('blah' was unexpected)", six.text_type(exp)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_microversions.py0000664000175000017500000003254700000000000026414 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
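# Descriptive note (added): these tests drive requests through the stub
# microversioned controllers in
# nova.tests.unit.api.openstack.compute.microversions and verify the version
# negotiated via the X-OpenStack-Nova-API-Version request header and echoed
# back in the response headers.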
import mock from oslo_serialization import jsonutils from nova.api.openstack import api_version_request as api_version from nova import test from nova.tests.unit.api.openstack.compute import microversions from nova.tests.unit.api.openstack import fakes class LegacyMicroversionsTest(test.NoDBTestCase): header_name = 'X-OpenStack-Nova-API-Version' def setUp(self): super(LegacyMicroversionsTest, self).setUp() self.app = fakes.wsgi_app_v21(custom_routes=microversions.ROUTES) def _test_microversions(self, app, req, ret_code, ret_header=None): req.environ['CONTENT_TYPE'] = "application/json" res = req.get_response(app) self.assertEqual(ret_code, res.status_int) if ret_header: if 'nova' not in self.header_name.lower(): ret_header = 'compute %s' % ret_header self.assertEqual(ret_header, res.headers[self.header_name]) return res def _make_header(self, req_header): if 'nova' in self.header_name.lower(): headers = {self.header_name: req_header} else: headers = {self.header_name: 'compute %s' % req_header} return headers def test_microversions_no_header(self): req = fakes.HTTPRequest.blank( '/v2/%s/microversions' % fakes.FAKE_PROJECT_ID, method='GET') res = req.get_response(self.app) self.assertEqual(200, res.status_int) resp_json = jsonutils.loads(res.body) self.assertEqual('val', resp_json['param']) def test_microversions_return_header(self): req = fakes.HTTPRequest.blank( '/v2/%s/microversions' % fakes.FAKE_PROJECT_ID) res = req.get_response(self.app) self.assertEqual(200, res.status_int) resp_json = jsonutils.loads(res.body) self.assertEqual('val', resp_json['param']) if 'nova' in self.header_name.lower(): self.assertEqual("2.1", res.headers[self.header_name]) else: self.assertEqual("compute 2.1", res.headers[self.header_name]) self.assertIn(self.header_name, res.headers.getall('Vary')) @mock.patch("nova.api.openstack.api_version_request.max_api_version") def test_microversions_return_header_non_default(self, mock_maxver): mock_maxver.return_value = api_version.APIVersionRequest("2.3") req = fakes.HTTPRequest.blank( '/v2/%s/microversions' % fakes.FAKE_PROJECT_ID) req.headers = self._make_header('2.3') res = req.get_response(self.app) self.assertEqual(200, res.status_int) resp_json = jsonutils.loads(res.body) self.assertEqual('val2', resp_json['param']) if 'nova' in self.header_name.lower(): self.assertEqual("2.3", res.headers[self.header_name]) else: self.assertEqual("compute 2.3", res.headers[self.header_name]) self.assertIn(self.header_name, res.headers.getall('Vary')) @mock.patch("nova.api.openstack.api_version_request.max_api_version") def test_microversions_return_header_fault(self, mock_maxver): mock_maxver.return_value = api_version.APIVersionRequest("3.0") req = fakes.HTTPRequest.blank( '/v2/%s/microversions' % fakes.FAKE_PROJECT_ID) req.headers = self._make_header('3.0') res = req.get_response(self.app) self.assertEqual(400, res.status_int) if 'nova' in self.header_name.lower(): self.assertEqual("3.0", res.headers[self.header_name]) else: self.assertEqual("compute 3.0", res.headers[self.header_name]) self.assertIn(self.header_name, res.headers.getall('Vary')) @mock.patch("nova.api.openstack.api_version_request.max_api_version") def _check_microversion_response(self, url, req_version, resp_param, mock_maxver): mock_maxver.return_value = api_version.APIVersionRequest('2.3') req = fakes.HTTPRequest.blank(url) req.headers = self._make_header(req_version) res = req.get_response(self.app) self.assertEqual(200, res.status_int) resp_json = jsonutils.loads(res.body) self.assertEqual(resp_param, 
resp_json['param']) def test_microversions_with_header(self): self._check_microversion_response( '/v2/%s/microversions' % fakes.FAKE_PROJECT_ID, '2.3', 'val2') def test_microversions_with_header_exact_match(self): self._check_microversion_response( '/v2/%s/microversions' % fakes.FAKE_PROJECT_ID, '2.2', 'val2') def test_microversions2_no_2_1_version(self): self._check_microversion_response( '/v2/%s/microversions2' % fakes.FAKE_PROJECT_ID, '2.3', 'controller2_val1') @mock.patch("nova.api.openstack.api_version_request.max_api_version") def test_microversions2_later_version(self, mock_maxver): mock_maxver.return_value = api_version.APIVersionRequest("3.1") req = fakes.HTTPRequest.blank( '/v2/%s/microversions2' % fakes.FAKE_PROJECT_ID) req.headers = self._make_header('3.0') res = req.get_response(self.app) self.assertEqual(202, res.status_int) resp_json = jsonutils.loads(res.body) self.assertEqual('controller2_val2', resp_json['param']) @mock.patch("nova.api.openstack.api_version_request.max_api_version") def test_microversions2_version_too_high(self, mock_maxver): mock_maxver.return_value = api_version.APIVersionRequest("3.5") req = fakes.HTTPRequest.blank( '/v2/%s/microversions2' % fakes.FAKE_PROJECT_ID) req.headers = {self.header_name: '3.2'} res = req.get_response(self.app) self.assertEqual(404, res.status_int) def test_microversions2_version_too_low(self): req = fakes.HTTPRequest.blank( '/v2/%s/microversions2' % fakes.FAKE_PROJECT_ID) req.headers = {self.header_name: '2.1'} res = req.get_response(self.app) self.assertEqual(404, res.status_int) @mock.patch("nova.api.openstack.api_version_request.max_api_version") def test_microversions_global_version_too_high(self, mock_maxver): mock_maxver.return_value = api_version.APIVersionRequest("3.5") req = fakes.HTTPRequest.blank( '/v2/%s/microversions2' % fakes.FAKE_PROJECT_ID) req.headers = self._make_header('3.7') res = req.get_response(self.app) self.assertEqual(406, res.status_int) res_json = jsonutils.loads(res.body) self.assertEqual("Version 3.7 is not supported by the API. 
" "Minimum is 2.1 and maximum is 3.5.", res_json['computeFault']['message']) @mock.patch("nova.api.openstack.api_version_request.max_api_version") def test_microversions_schema(self, mock_maxver): mock_maxver.return_value = api_version.APIVersionRequest("3.3") req = fakes.HTTPRequest.blank( '/v2/%s/microversions3' % fakes.FAKE_PROJECT_ID) req.method = 'POST' req.headers = self._make_header('2.2') req.environ['CONTENT_TYPE'] = "application/json" req.body = jsonutils.dump_as_bytes({'dummy': {'val': 'foo'}}) res = req.get_response(self.app) self.assertEqual(200, res.status_int) resp_json = jsonutils.loads(res.body) self.assertEqual('create_val1', resp_json['param']) if 'nova' in self.header_name.lower(): self.assertEqual("2.2", res.headers[self.header_name]) else: self.assertEqual("compute 2.2", res.headers[self.header_name]) self.assertIn(self.header_name, res.headers.getall('Vary')) @mock.patch("nova.api.openstack.api_version_request.max_api_version") def test_microversions_schema_fail(self, mock_maxver): mock_maxver.return_value = api_version.APIVersionRequest("3.3") req = fakes.HTTPRequest.blank( '/v2/%s/microversions3' % fakes.FAKE_PROJECT_ID) req.method = 'POST' req.headers = {self.header_name: '2.2'} req.environ['CONTENT_TYPE'] = "application/json" req.body = jsonutils.dump_as_bytes({'dummy': {'invalid_param': 'foo'}}) res = req.get_response(self.app) self.assertEqual(400, res.status_int) resp_json = jsonutils.loads(res.body) self.assertTrue(resp_json['badRequest']['message'].startswith( "Invalid input for field/attribute dummy.")) @mock.patch("nova.api.openstack.api_version_request.max_api_version") def test_microversions_schema_out_of_version_check(self, mock_maxver): mock_maxver.return_value = api_version.APIVersionRequest("3.3") req = fakes.HTTPRequest.blank( '/v2/%s/microversions3/1' % fakes.FAKE_PROJECT_ID) req.method = 'PUT' req.headers = self._make_header('2.2') req.body = jsonutils.dump_as_bytes({'dummy': {'inv_val': 'foo'}}) req.environ['CONTENT_TYPE'] = "application/json" res = req.get_response(self.app) self.assertEqual(200, res.status_int) resp_json = jsonutils.loads(res.body) self.assertEqual('update_val1', resp_json['param']) if 'nova' in self.header_name.lower(): self.assertEqual("2.2", res.headers[self.header_name]) else: self.assertEqual("compute 2.2", res.headers[self.header_name]) @mock.patch("nova.api.openstack.api_version_request.max_api_version") def test_microversions_schema_second_version(self, mock_maxver): mock_maxver.return_value = api_version.APIVersionRequest("3.3") req = fakes.HTTPRequest.blank( '/v2/%s/microversions3/1' % fakes.FAKE_PROJECT_ID) req.headers = self._make_header('2.10') req.environ['CONTENT_TYPE'] = "application/json" req.method = 'PUT' req.body = jsonutils.dump_as_bytes({'dummy': {'val2': 'foo'}}) res = req.get_response(self.app) self.assertEqual(200, res.status_int) resp_json = jsonutils.loads(res.body) self.assertEqual('update_val1', resp_json['param']) if 'nova' in self.header_name.lower(): self.assertEqual("2.10", res.headers[self.header_name]) else: self.assertEqual("compute 2.10", res.headers[self.header_name]) @mock.patch("nova.api.openstack.api_version_request.max_api_version") def _test_microversions_inner_function(self, version, expected_resp, mock_maxver): mock_maxver.return_value = api_version.APIVersionRequest("2.2") req = fakes.HTTPRequest.blank( '/v2/%s/microversions4' % fakes.FAKE_PROJECT_ID) req.headers = self._make_header(version) req.environ['CONTENT_TYPE'] = "application/json" req.method = 'POST' req.body = b'' res = 
req.get_response(self.app) self.assertEqual(200, res.status_int) resp_json = jsonutils.loads(res.body) self.assertEqual(expected_resp, resp_json['param']) if 'nova' not in self.header_name.lower(): version = 'compute %s' % version self.assertEqual(version, res.headers[self.header_name]) def test_microversions_inner_function_v22(self): self._test_microversions_inner_function('2.2', 'controller4_val2') def test_microversions_inner_function_v21(self): self._test_microversions_inner_function('2.1', 'controller4_val1') @mock.patch("nova.api.openstack.api_version_request.max_api_version") def _test_microversions_actions(self, ret_code, ret_header, req_header, mock_maxver): mock_maxver.return_value = api_version.APIVersionRequest("2.3") req = fakes.HTTPRequest.blank( '/v2/%s/microversions3/1/action' % fakes.FAKE_PROJECT_ID) if req_header: req.headers = self._make_header(req_header) req.method = 'POST' req.body = jsonutils.dump_as_bytes({'foo': None}) res = self._test_microversions(self.app, req, ret_code, ret_header=ret_header) if ret_code == 202: resp_json = jsonutils.loads(res.body) self.assertEqual({'foo': 'bar'}, resp_json) def test_microversions_actions(self): self._test_microversions_actions(202, "2.1", "2.1") def test_microversions_actions_too_high(self): self._test_microversions_actions(404, "2.3", "2.3") def test_microversions_actions_no_header(self): self._test_microversions_actions(202, "2.1", None) class MicroversionsTest(LegacyMicroversionsTest): header_name = 'OpenStack-API-Version' ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_migrate_server.py0000664000175000017500000006500500000000000026523 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import uuidutils import six import webob from nova.api.openstack import api_version_request from nova.api.openstack.compute import migrate_server as \ migrate_server_v21 from nova import exception from nova import objects from nova.tests.unit.api.openstack.compute import admin_only_action_common from nova.tests.unit.api.openstack import fakes class MigrateServerTestsV21(admin_only_action_common.CommonTests): migrate_server = migrate_server_v21 controller_name = 'MigrateServerController' validation_error = exception.ValidationError _api_version = '2.1' disk_over_commit = False force = None async_ = False host_name = None def setUp(self): super(MigrateServerTestsV21, self).setUp() self.controller = getattr(self.migrate_server, self.controller_name)() self.compute_api = self.controller.compute_api self.stub_out('nova.api.openstack.compute.migrate_server.' 
'MigrateServerController', lambda *a, **kw: self.controller) self.mock_list_port = self.useFixture( fixtures.MockPatch('nova.network.neutron.API.list_ports')).mock self.mock_list_port.return_value = {'ports': []} def _get_migration_body(self, **kwargs): return {'os-migrateLive': self._get_params(**kwargs)} def _get_params(self, **kwargs): return {'host': kwargs.get('host'), 'block_migration': kwargs.get('block_migration') or False, 'disk_over_commit': self.disk_over_commit} def test_migrate(self): method_translations = {'_migrate': 'resize', '_migrate_live': 'live_migrate'} body_map = {'_migrate_live': self._get_migration_body(host='hostname')} args_map = {'_migrate_live': ((False, self.disk_over_commit, 'hostname', self.force, self.async_), {}), '_migrate': ((), {'host_name': self.host_name})} self._test_actions(['_migrate', '_migrate_live'], body_map=body_map, method_translations=method_translations, args_map=args_map) def test_migrate_none_hostname(self): method_translations = {'_migrate': 'resize', '_migrate_live': 'live_migrate'} body_map = {'_migrate_live': self._get_migration_body(host=None)} args_map = {'_migrate_live': ((False, self.disk_over_commit, None, self.force, self.async_), {}), '_migrate': ((), {'host_name': None})} self._test_actions(['_migrate', '_migrate_live'], body_map=body_map, method_translations=method_translations, args_map=args_map) def test_migrate_with_non_existed_instance(self): body_map = {'_migrate_live': self._get_migration_body(host='hostname')} self._test_actions_with_non_existed_instance( ['_migrate', '_migrate_live'], body_map=body_map) def test_migrate_raise_conflict_on_invalid_state(self): method_translations = {'_migrate': 'resize', '_migrate_live': 'live_migrate'} body_map = {'_migrate_live': self._get_migration_body(host='hostname')} args_map = {'_migrate_live': ((False, self.disk_over_commit, 'hostname', self.force, self.async_), {}), '_migrate': ((), {'host_name': self.host_name})} exception_arg = {'_migrate': 'migrate', '_migrate_live': 'os-migrateLive'} self._test_actions_raise_conflict_on_invalid_state( ['_migrate', '_migrate_live'], body_map=body_map, args_map=args_map, method_translations=method_translations, exception_args=exception_arg) def test_actions_with_locked_instance(self): method_translations = {'_migrate': 'resize', '_migrate_live': 'live_migrate'} body_map = {'_migrate_live': self._get_migration_body(host='hostname')} args_map = {'_migrate_live': ((False, self.disk_over_commit, 'hostname', self.force, self.async_), {}), '_migrate': ((), {'host_name': self.host_name})} self._test_actions_with_locked_instance( ['_migrate', '_migrate_live'], body_map=body_map, args_map=args_map, method_translations=method_translations) def _test_migrate_exception(self, exc_info, expected_result): instance = self._stub_instance_get() with mock.patch.object(self.compute_api, 'resize', side_effect=exc_info) as mock_resize: self.assertRaises(expected_result, self.controller._migrate, self.req, instance['uuid'], body={'migrate': None}) mock_resize.assert_called_once_with( self.context, instance, host_name=self.host_name) self.mock_get.assert_called_once_with( self.context, instance.uuid, expected_attrs=['flavor', 'services'], cell_down_support=False) def test_migrate_too_many_instances(self): exc_info = exception.TooManyInstances(overs='', req='', used=0, allowed=0) self._test_migrate_exception(exc_info, webob.exc.HTTPForbidden) def _test_migrate_live_succeeded(self, param): instance = self._stub_instance_get() live_migrate_method = 
self.controller._migrate_live with mock.patch.object(self.compute_api, 'live_migrate') as mock_live_migrate: live_migrate_method(self.req, instance.uuid, body={'os-migrateLive': param}) self.assertEqual(202, live_migrate_method.wsgi_code) mock_live_migrate.assert_called_once_with( self.context, instance, False, self.disk_over_commit, 'hostname', self.force, self.async_) self.mock_get.assert_called_once_with(self.context, instance.uuid, expected_attrs=['numa_topology'], cell_down_support=False) def test_migrate_live_enabled(self): param = self._get_params(host='hostname') self._test_migrate_live_succeeded(param) def test_migrate_live_enabled_with_string_param(self): param = {'host': 'hostname', 'block_migration': "False", 'disk_over_commit': "False"} self._test_migrate_live_succeeded(param) def test_migrate_live_without_host(self): body = self._get_migration_body() del body['os-migrateLive']['host'] self.assertRaises(self.validation_error, self.controller._migrate_live, self.req, fakes.FAKE_UUID, body=body) def test_migrate_live_without_block_migration(self): body = self._get_migration_body() del body['os-migrateLive']['block_migration'] self.assertRaises(self.validation_error, self.controller._migrate_live, self.req, fakes.FAKE_UUID, body=body) def test_migrate_live_without_disk_over_commit(self): body = {'os-migrateLive': {'host': 'hostname', 'block_migration': False}} self.assertRaises(self.validation_error, self.controller._migrate_live, self.req, fakes.FAKE_UUID, body=body) def test_migrate_live_with_invalid_block_migration(self): body = self._get_migration_body(block_migration='foo') self.assertRaises(self.validation_error, self.controller._migrate_live, self.req, fakes.FAKE_UUID, body=body) def test_migrate_live_with_invalid_disk_over_commit(self): body = {'os-migrateLive': {'host': 'hostname', 'block_migration': False, 'disk_over_commit': "foo"}} self.assertRaises(self.validation_error, self.controller._migrate_live, self.req, fakes.FAKE_UUID, body=body) def test_migrate_live_missing_dict_param(self): body = self._get_migration_body(host='hostname') del body['os-migrateLive']['host'] body['os-migrateLive']['dummy'] = 'hostname' self.assertRaises(self.validation_error, self.controller._migrate_live, self.req, fakes.FAKE_UUID, body=body) def _test_migrate_live_failed_with_exception( self, fake_exc, uuid=None, expected_exc=webob.exc.HTTPBadRequest, check_response=True): instance = self._stub_instance_get(uuid=uuid) body = self._get_migration_body(host='hostname') with mock.patch.object( self.compute_api, 'live_migrate', side_effect=fake_exc) as mock_live_migrate: ex = self.assertRaises(expected_exc, self.controller._migrate_live, self.req, instance.uuid, body=body) if check_response: self.assertIn(six.text_type(fake_exc), ex.explanation) mock_live_migrate.assert_called_once_with( self.context, instance, False, self.disk_over_commit, 'hostname', self.force, self.async_) self.mock_get.assert_called_once_with(self.context, instance.uuid, expected_attrs=['numa_topology'], cell_down_support=False) def test_migrate_live_compute_service_unavailable(self): self._test_migrate_live_failed_with_exception( exception.ComputeServiceUnavailable(host='host')) def test_migrate_live_compute_service_not_found(self): self._test_migrate_live_failed_with_exception( exception.ComputeHostNotFound(host='host')) def test_migrate_live_invalid_hypervisor_type(self): self._test_migrate_live_failed_with_exception( exception.InvalidHypervisorType()) def test_migrate_live_invalid_cpu_info(self): 
self._test_migrate_live_failed_with_exception( exception.InvalidCPUInfo(reason="")) def test_migrate_live_unable_to_migrate_to_self(self): uuid = uuidutils.generate_uuid() self._test_migrate_live_failed_with_exception( exception.UnableToMigrateToSelf(instance_id=uuid, host='host'), uuid=uuid) def test_migrate_live_destination_hypervisor_too_old(self): self._test_migrate_live_failed_with_exception( exception.DestinationHypervisorTooOld()) def test_migrate_live_no_valid_host(self): self._test_migrate_live_failed_with_exception( exception.NoValidHost(reason='')) def test_migrate_live_invalid_local_storage(self): self._test_migrate_live_failed_with_exception( exception.InvalidLocalStorage(path='', reason='')) def test_migrate_live_invalid_shared_storage(self): self._test_migrate_live_failed_with_exception( exception.InvalidSharedStorage(path='', reason='')) def test_migrate_live_hypervisor_unavailable(self): self._test_migrate_live_failed_with_exception( exception.HypervisorUnavailable(host="")) def test_migrate_live_instance_not_active(self): self._test_migrate_live_failed_with_exception( exception.InstanceInvalidState( instance_uuid='', state='', attr='', method=''), expected_exc=webob.exc.HTTPConflict, check_response=False) def test_migrate_live_pre_check_error(self): self._test_migrate_live_failed_with_exception( exception.MigrationPreCheckError(reason='')) def test_migrate_live_migration_with_unexpected_error(self): self._test_migrate_live_failed_with_exception( exception.MigrationError(reason=''), expected_exc=webob.exc.HTTPInternalServerError, check_response=False) @mock.patch('nova.objects.Service.get_by_host_and_binary') @mock.patch('nova.api.openstack.common.' 'instance_has_port_with_resource_request', return_value=True) def test_migrate_with_bandwidth_from_old_compute_not_supported( self, mock_has_res_req, mock_get_service): instance = self._stub_instance_get() mock_get_service.return_value = objects.Service(host=instance['host']) mock_get_service.return_value.version = 38 self.assertRaises( webob.exc.HTTPConflict, self.controller._migrate, self.req, instance['uuid'], body={'migrate': None}) mock_has_res_req.assert_called_once_with( instance['uuid'], self.controller.network_api) mock_get_service.assert_called_once_with( self.req.environ['nova.context'], instance['host'], 'nova-compute') class MigrateServerTestsV225(MigrateServerTestsV21): # We don't have disk_over_commit in v2.25 disk_over_commit = None def setUp(self): super(MigrateServerTestsV225, self).setUp() self.req.api_version_request = api_version_request.APIVersionRequest( '2.25') def _get_params(self, **kwargs): return {'host': kwargs.get('host'), 'block_migration': kwargs.get('block_migration') or False} def test_migrate_live_enabled_with_string_param(self): param = {'host': 'hostname', 'block_migration': "False"} self._test_migrate_live_succeeded(param) def test_migrate_live_without_disk_over_commit(self): pass def test_migrate_live_with_invalid_disk_over_commit(self): pass def test_live_migrate_block_migration_auto(self): method_translations = {'_migrate_live': 'live_migrate'} body_map = {'_migrate_live': {'os-migrateLive': {'host': 'hostname', 'block_migration': 'auto'}}} args_map = {'_migrate_live': ((None, None, 'hostname', self.force, self.async_), {})} self._test_actions(['_migrate_live'], body_map=body_map, method_translations=method_translations, args_map=args_map) def test_migrate_live_with_disk_over_commit_raise(self): body = {'os-migrateLive': {'host': 'hostname', 'block_migration': 'auto', 'disk_over_commit': 
False}} self.assertRaises(self.validation_error, self.controller._migrate_live, self.req, fakes.FAKE_UUID, body=body) class MigrateServerTestsV230(MigrateServerTestsV225): force = False def setUp(self): super(MigrateServerTestsV230, self).setUp() self.req.api_version_request = api_version_request.APIVersionRequest( '2.30') def _test_live_migrate(self, force=False): if force is True: literal_force = 'true' else: literal_force = 'false' method_translations = {'_migrate_live': 'live_migrate'} body_map = {'_migrate_live': {'os-migrateLive': {'host': 'hostname', 'block_migration': 'auto', 'force': literal_force}}} args_map = {'_migrate_live': ((None, None, 'hostname', force, self.async_), {})} self._test_actions(['_migrate_live'], body_map=body_map, method_translations=method_translations, args_map=args_map) def test_live_migrate(self): self._test_live_migrate() def test_live_migrate_with_forced_host(self): self._test_live_migrate(force=True) def test_forced_live_migrate_with_no_provided_host(self): body = {'os-migrateLive': {'force': 'true'}} self.assertRaises(self.validation_error, self.controller._migrate_live, self.req, fakes.FAKE_UUID, body=body) class MigrateServerTestsV234(MigrateServerTestsV230): async_ = True def setUp(self): super(MigrateServerTestsV234, self).setUp() self.req.api_version_request = api_version_request.APIVersionRequest( '2.34') # NOTE(tdurakov): for REST API version 2.34 and above, tests below are not # valid, as they are made in background. def test_migrate_live_compute_service_unavailable(self): pass def test_migrate_live_compute_service_not_found(self): pass def test_migrate_live_invalid_hypervisor_type(self): pass def test_migrate_live_invalid_cpu_info(self): pass def test_migrate_live_unable_to_migrate_to_self(self): pass def test_migrate_live_destination_hypervisor_too_old(self): pass def test_migrate_live_no_valid_host(self): pass def test_migrate_live_invalid_local_storage(self): pass def test_migrate_live_invalid_shared_storage(self): pass def test_migrate_live_hypervisor_unavailable(self): pass def test_migrate_live_instance_not_active(self): pass def test_migrate_live_pre_check_error(self): pass def test_migrate_live_migration_precheck_client_exception(self): pass def test_migrate_live_migration_with_unexpected_error(self): pass def test_migrate_live_migration_with_old_nova_not_supported(self): pass def test_migrate_live_compute_host_not_found(self): body = {'os-migrateLive': {'host': 'hostname', 'block_migration': 'auto'}} exc = exception.ComputeHostNotFound( reason="Compute host %(host)s could not be found.", host='hostname') instance = self._stub_instance_get() with mock.patch.object(self.compute_api, 'live_migrate', side_effect=exc) as mock_live_migrate: self.assertRaises(webob.exc.HTTPBadRequest, self.controller._migrate_live, self.req, instance.uuid, body=body) mock_live_migrate.assert_called_once_with( self.context, instance, None, self.disk_over_commit, 'hostname', self.force, self.async_) self.mock_get.assert_called_once_with(self.context, instance.uuid, expected_attrs=['numa_topology'], cell_down_support=False) def test_migrate_live_unexpected_error(self): body = {'os-migrateLive': {'host': 'hostname', 'block_migration': 'auto'}} exc = exception.InvalidHypervisorType( reason="The supplied hypervisor type of is invalid.") instance = self._stub_instance_get() with mock.patch.object(self.compute_api, 'live_migrate', side_effect=exc) as mock_live_migrate: self.assertRaises(webob.exc.HTTPInternalServerError, self.controller._migrate_live, self.req, 
instance.uuid, body=body) mock_live_migrate.assert_called_once_with( self.context, instance, None, self.disk_over_commit, 'hostname', self.force, self.async_) self.mock_get.assert_called_once_with(self.context, instance.uuid, expected_attrs=['numa_topology'], cell_down_support=False) class MigrateServerTestsV256(MigrateServerTestsV234): host_name = 'fake-host' method_translations = {'_migrate': 'resize'} body_map = {'_migrate': {'migrate': {'host': host_name}}} args_map = {'_migrate': ((), {'host_name': host_name})} def setUp(self): super(MigrateServerTestsV256, self).setUp() self.req.api_version_request = api_version_request.APIVersionRequest( '2.56') def _test_migrate_validation_error(self, body): self.assertRaises(self.validation_error, self.controller._migrate, self.req, fakes.FAKE_UUID, body=body) def _test_migrate_exception(self, exc_info, expected_result): @mock.patch.object(self.compute_api, 'get') @mock.patch.object(self.compute_api, 'resize', side_effect=exc_info) def _test(mock_resize, mock_get): instance = objects.Instance(uuid=uuids.instance) self.assertRaises(expected_result, self.controller._migrate, self.req, instance['uuid'], body={'migrate': {'host': self.host_name}}) _test() def test_migrate(self): self._test_actions(['_migrate'], body_map=self.body_map, method_translations=self.method_translations, args_map=self.args_map) def test_migrate_without_host(self): # The request body is: '{"migrate": null}' body_map = {'_migrate': {'migrate': None}} args_map = {'_migrate': ((), {'host_name': None})} self._test_actions(['_migrate'], body_map=body_map, method_translations=self.method_translations, args_map=args_map) def test_migrate_none_hostname(self): # The request body is: '{"migrate": {"host": null}}' body_map = {'_migrate': {'migrate': {'host': None}}} args_map = {'_migrate': ((), {'host_name': None})} self._test_actions(['_migrate'], body_map=body_map, method_translations=self.method_translations, args_map=args_map) def test_migrate_with_non_existed_instance(self): self._test_actions_with_non_existed_instance( ['_migrate'], body_map=self.body_map) def test_migrate_raise_conflict_on_invalid_state(self): exception_arg = {'_migrate': 'migrate'} self._test_actions_raise_conflict_on_invalid_state( ['_migrate'], body_map=self.body_map, args_map=self.args_map, method_translations=self.method_translations, exception_args=exception_arg) def test_actions_with_locked_instance(self): self._test_actions_with_locked_instance( ['_migrate'], body_map=self.body_map, args_map=self.args_map, method_translations=self.method_translations) def test_migrate_without_migrate_object(self): self._test_migrate_validation_error({}) def test_migrate_invalid_migrate_object(self): self._test_migrate_validation_error({'migrate': 'fake-host'}) def test_migrate_with_additional_property(self): self._test_migrate_validation_error( {'migrate': {'host': self.host_name, 'additional': 'foo'}}) def test_migrate_with_host_length_more_than_255(self): self._test_migrate_validation_error( {'migrate': {'host': 'a' * 256}}) def test_migrate_nonexistent_host(self): exc_info = exception.ComputeHostNotFound(host='nonexistent_host') self._test_migrate_exception(exc_info, webob.exc.HTTPBadRequest) def test_migrate_to_same_host(self): exc_info = exception.CannotMigrateToSameHost() self._test_migrate_exception(exc_info, webob.exc.HTTPBadRequest) class MigrateServerTestsV268(MigrateServerTestsV256): force = False def setUp(self): super(MigrateServerTestsV268, self).setUp() self.req.api_version_request = 
api_version_request.APIVersionRequest( '2.68') def test_live_migrate(self): method_translations = {'_migrate_live': 'live_migrate'} body_map = {'_migrate_live': {'os-migrateLive': {'host': 'hostname', 'block_migration': 'auto'}}} args_map = {'_migrate_live': ((None, None, 'hostname', False, self.async_), {})} self._test_actions(['_migrate_live'], body_map=body_map, method_translations=method_translations, args_map=args_map) @mock.patch('nova.virt.hardware.get_mem_encryption_constraint', new=mock.Mock(return_value=True)) @mock.patch.object(objects.instance.Instance, 'image_meta') def test_live_migrate_sev_rejected(self, mock_image): instance = self._stub_instance_get() body = {'os-migrateLive': {'host': 'hostname', 'block_migration': 'auto'}} ex = self.assertRaises(webob.exc.HTTPConflict, self.controller._migrate_live, self.req, fakes.FAKE_UUID, body=body) self.assertIn("Operation 'live-migration' not supported for " "SEV-enabled instance (%s)" % instance.uuid, six.text_type(ex)) def test_live_migrate_with_forced_host(self): body = {'os-migrateLive': {'host': 'hostname', 'block_migration': 'auto', 'force': 'true'}} ex = self.assertRaises(self.validation_error, self.controller._migrate_live, self.req, fakes.FAKE_UUID, body=body) self.assertIn('force', six.text_type(ex)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_migrations.py0000664000175000017500000005027500000000000025664 0ustar00zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
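# The tests in this module walk the os-migrations API through its
# microversion history using the canned fake_migrations records defined
# below (an in-progress and a failed live migration, plus an in-progress and
# a failed resize). In rough terms:
#
#   * base (2.1): plain listing; unknown filters such as cell_name are
#     accepted but stripped before the compute API is called
#   * 2.23: adds migration_type and a links entry for in-progress live
#     migrations
#   * 2.59: adds uuid to the output plus paging and filtering (marker,
#     limit, changes-since) with strict query parameter validation
#   * 2.66: adds the changes-before filter
#   * 2.80: adds user_id/project_id to the output and as filters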
import datetime import iso8601 import mock from oslo_utils.fixture import uuidsentinel as uuids import six from webob import exc from nova.api.openstack.compute import migrations as migrations_v21 from nova import context from nova import exception from nova import objects from nova.objects import base from nova import test from nova.tests.unit.api.openstack import fakes fake_migrations = [ # in-progress live migration { 'id': 1, 'source_node': 'node1', 'dest_node': 'node2', 'source_compute': 'compute1', 'dest_compute': 'compute2', 'dest_host': '1.2.3.4', 'status': 'running', 'instance_uuid': uuids.instance1, 'old_instance_type_id': 1, 'new_instance_type_id': 2, 'migration_type': 'live-migration', 'hidden': False, 'memory_total': 123456, 'memory_processed': 12345, 'memory_remaining': 111111, 'disk_total': 234567, 'disk_processed': 23456, 'disk_remaining': 211111, 'created_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'deleted_at': None, 'deleted': False, 'uuid': uuids.migration1, 'cross_cell_move': False, 'user_id': None, 'project_id': None }, # non in-progress live migration { 'id': 2, 'source_node': 'node1', 'dest_node': 'node2', 'source_compute': 'compute1', 'dest_compute': 'compute2', 'dest_host': '1.2.3.4', 'status': 'error', 'instance_uuid': uuids.instance1, 'old_instance_type_id': 1, 'new_instance_type_id': 2, 'migration_type': 'live-migration', 'hidden': False, 'memory_total': 123456, 'memory_processed': 12345, 'memory_remaining': 111111, 'disk_total': 234567, 'disk_processed': 23456, 'disk_remaining': 211111, 'created_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'deleted_at': None, 'deleted': False, 'uuid': uuids.migration2, 'cross_cell_move': False, 'user_id': None, 'project_id': None }, # in-progress resize { 'id': 4, 'source_node': 'node10', 'dest_node': 'node20', 'source_compute': 'compute10', 'dest_compute': 'compute20', 'dest_host': '5.6.7.8', 'status': 'migrating', 'instance_uuid': uuids.instance2, 'old_instance_type_id': 5, 'new_instance_type_id': 6, 'migration_type': 'resize', 'hidden': False, 'memory_total': 456789, 'memory_processed': 56789, 'memory_remaining': 45000, 'disk_total': 96789, 'disk_processed': 6789, 'disk_remaining': 96000, 'created_at': datetime.datetime(2013, 10, 22, 13, 42, 2), 'updated_at': datetime.datetime(2013, 10, 22, 13, 42, 2), 'deleted_at': None, 'deleted': False, 'uuid': uuids.migration3, 'cross_cell_move': False, 'user_id': None, 'project_id': None }, # non in-progress resize { 'id': 5, 'source_node': 'node10', 'dest_node': 'node20', 'source_compute': 'compute10', 'dest_compute': 'compute20', 'dest_host': '5.6.7.8', 'status': 'error', 'instance_uuid': uuids.instance2, 'old_instance_type_id': 5, 'new_instance_type_id': 6, 'migration_type': 'resize', 'hidden': False, 'memory_total': 456789, 'memory_processed': 56789, 'memory_remaining': 45000, 'disk_total': 96789, 'disk_processed': 6789, 'disk_remaining': 96000, 'created_at': datetime.datetime(2013, 10, 22, 13, 42, 2), 'updated_at': datetime.datetime(2013, 10, 22, 13, 42, 2), 'deleted_at': None, 'deleted': False, 'uuid': uuids.migration4, 'cross_cell_move': False, 'user_id': None, 'project_id': None } ] migrations_obj = base.obj_make_list( 'fake-context', objects.MigrationList(), objects.Migration, fake_migrations) class FakeRequest(object): environ = {"nova.context": context.RequestContext('fake_user', fakes.FAKE_PROJECT_ID, is_admin=True)} GET = {} class 
MigrationsTestCaseV21(test.NoDBTestCase): migrations = migrations_v21 def _migrations_output(self): return self.controller._output(self.req, migrations_obj) def setUp(self): """Run before each test.""" super(MigrationsTestCaseV21, self).setUp() self.controller = self.migrations.MigrationsController() self.req = fakes.HTTPRequest.blank('', use_admin_context=True) self.context = self.req.environ['nova.context'] def test_index(self): migrations_in_progress = {'migrations': self._migrations_output()} for mig in migrations_in_progress['migrations']: self.assertIn('id', mig) self.assertNotIn('deleted', mig) self.assertNotIn('deleted_at', mig) self.assertNotIn('links', mig) filters = {'host': 'host1', 'status': 'migrating', 'instance_uuid': uuids.instance1, 'source_compute': 'host1', 'hidden': '0', 'migration_type': 'resize'} # python-novaclient actually supports sending this even though it's # not used in the DB API layer and is totally useless. This lets us, # however, test that additionalProperties=True allows it. unknown_filter = {'cell_name': 'ChildCell'} self.req.GET.update(filters) self.req.GET.update(unknown_filter) with mock.patch.object(self.controller.compute_api, 'get_migrations', return_value=migrations_obj) as ( mock_get_migrations ): response = self.controller.index(self.req) self.assertEqual(migrations_in_progress, response) # Only with the filters, and the unknown filter is stripped mock_get_migrations.assert_called_once_with(self.context, filters) def test_index_query_allow_negative_int_as_string(self): migrations = {'migrations': self._migrations_output()} filters = ['host', 'status', 'cell_name', 'instance_uuid', 'source_compute', 'hidden', 'migration_type'] with mock.patch.object(self.controller.compute_api, 'get_migrations', return_value=migrations_obj): for fl in filters: req = fakes.HTTPRequest.blank('/os-migrations', use_admin_context=True, query_string='%s=-1' % fl) response = self.controller.index(req) self.assertEqual(migrations, response) def test_index_query_duplicate_query_parameters(self): migrations = {'migrations': self._migrations_output()} params = {'host': 'host1', 'status': 'migrating', 'cell_name': 'ChildCell', 'instance_uuid': uuids.instance1, 'source_compute': 'host1', 'hidden': '0', 'migration_type': 'resize'} with mock.patch.object(self.controller.compute_api, 'get_migrations', return_value=migrations_obj): for k, v in params.items(): req = fakes.HTTPRequest.blank( '/os-migrations', use_admin_context=True, query_string='%s=%s&%s=%s' % (k, v, k, v)) response = self.controller.index(req) self.assertEqual(migrations, response) class MigrationsTestCaseV223(MigrationsTestCaseV21): wsgi_api_version = '2.23' def setUp(self): """Run before each test.""" super(MigrationsTestCaseV223, self).setUp() self.req = fakes.HTTPRequest.blank( '', version=self.wsgi_api_version, use_admin_context=True) def test_index(self): migrations = {'migrations': self.controller._output( self.req, migrations_obj, True)} for i, mig in enumerate(migrations['migrations']): # first item is in-progress live migration if i == 0: self.assertIn('links', mig) else: self.assertNotIn('links', mig) self.assertIn('migration_type', mig) self.assertIn('id', mig) self.assertNotIn('deleted', mig) self.assertNotIn('deleted_at', mig) with mock.patch.object(self.controller.compute_api, 'get_migrations') as m_get: m_get.return_value = migrations_obj response = self.controller.index(self.req) self.assertEqual(migrations, response) self.assertIn('links', response['migrations'][0]) 
self.assertIn('migration_type', response['migrations'][0]) class MigrationsTestCaseV259(MigrationsTestCaseV223): wsgi_api_version = '2.59' def test_index(self): migrations = {'migrations': self.controller._output( self.req, migrations_obj, True, True)} for i, mig in enumerate(migrations['migrations']): # first item is in-progress live migration if i == 0: self.assertIn('links', mig) else: self.assertNotIn('links', mig) self.assertIn('migration_type', mig) self.assertIn('id', mig) self.assertIn('uuid', mig) self.assertNotIn('deleted', mig) self.assertNotIn('deleted_at', mig) with mock.patch.object(self.controller.compute_api, 'get_migrations_sorted') as m_get: m_get.return_value = migrations_obj response = self.controller.index(self.req) self.assertEqual(migrations, response) self.assertIn('links', response['migrations'][0]) self.assertIn('migration_type', response['migrations'][0]) @mock.patch('nova.compute.api.API.get_migrations_sorted') def test_index_with_invalid_marker(self, mock_migrations_get): """Tests detail paging with an invalid marker (not found).""" mock_migrations_get.side_effect = exception.MarkerNotFound( marker=uuids.invalid_marker) req = fakes.HTTPRequest.blank( '/os-migrations?marker=%s' % uuids.invalid_marker, version=self.wsgi_api_version, use_admin_context=True) e = self.assertRaises(exc.HTTPBadRequest, self.controller.index, req) self.assertEqual( "Marker %s could not be found." % uuids.invalid_marker, six.text_type(e)) def test_index_with_invalid_limit(self): """Tests detail paging with an invalid limit.""" req = fakes.HTTPRequest.blank( '/os-migrations?limit=x', version=self.wsgi_api_version, use_admin_context=True) self.assertRaises(exception.ValidationError, self.controller.index, req) req = fakes.HTTPRequest.blank( '/os-migrations?limit=-1', version=self.wsgi_api_version, use_admin_context=True) self.assertRaises(exception.ValidationError, self.controller.index, req) def test_index_with_invalid_changes_since(self): """Tests detail paging with an invalid changes-since value.""" req = fakes.HTTPRequest.blank( '/os-migrations?changes-since=wrong_time', version=self.wsgi_api_version, use_admin_context=True) self.assertRaises(exception.ValidationError, self.controller.index, req) def test_index_with_unknown_query_param(self): """Tests detail paging with an unknown query parameter.""" req = fakes.HTTPRequest.blank( '/os-migrations?foo=bar', version=self.wsgi_api_version, use_admin_context=True) ex = self.assertRaises(exception.ValidationError, self.controller.index, req) self.assertIn('Additional properties are not allowed', six.text_type(ex)) @mock.patch('nova.compute.api.API.get_migrations', return_value=objects.MigrationList()) def test_index_with_changes_since_old_microversion(self, get_migrations): """Tests that the changes-since query parameter is ignored before microversion 2.59. """ # Also use a valid filter (instance_uuid) to make sure only # changes-since is removed. 
req = fakes.HTTPRequest.blank( '/os-migrations?changes-since=2018-01-10T16:59:24.138939&' 'instance_uuid=%s' % uuids.instance_uuid, version='2.58', use_admin_context=True) result = self.controller.index(req) self.assertEqual({'migrations': []}, result) get_migrations.assert_called_once_with( req.environ['nova.context'], {'instance_uuid': uuids.instance_uuid}) class MigrationTestCaseV266(MigrationsTestCaseV259): wsgi_api_version = '2.66' def test_index_with_invalid_changes_before(self): """Tests detail paging with an invalid changes-before value.""" req = fakes.HTTPRequest.blank( '/os-migrations?changes-before=wrong_time', version=self.wsgi_api_version, use_admin_context=True) self.assertRaises(exception.ValidationError, self.controller.index, req) @mock.patch('nova.compute.api.API.get_migrations_sorted', return_value=objects.MigrationList()) def test_index_with_changes_since_and_changes_before( self, get_migrations_sorted): changes_since = '2013-10-22T13:42:02Z' changes_before = '2013-10-22T13:42:03Z' req = fakes.HTTPRequest.blank( '/os-migrations?changes-since=%s&changes-before=%s&' 'instance_uuid=%s' % (changes_since, changes_before, uuids.instance_uuid), version=self.wsgi_api_version, use_admin_context=True) self.controller.index(req) search_opts = {'instance_uuid': uuids.instance_uuid, 'changes-before': datetime.datetime(2013, 10, 22, 13, 42, 3, tzinfo=iso8601.iso8601.UTC), 'changes-since': datetime.datetime(2013, 10, 22, 13, 42, 2, tzinfo=iso8601.iso8601.UTC)} get_migrations_sorted.assert_called_once_with( req.environ['nova.context'], search_opts, sort_dirs=mock.ANY, sort_keys=mock.ANY, limit=1000, marker=None) def test_get_migrations_filters_with_distinct_changes_time_bad_request( self): changes_since = '2018-09-04T05:45:27Z' changes_before = '2018-09-03T05:45:27Z' req = fakes.HTTPRequest.blank('/os-migrations?' 'changes-since=%s&changes-before=%s' % (changes_since, changes_before), version=self.wsgi_api_version, use_admin_context=True) ex = self.assertRaises(exc.HTTPBadRequest, self.controller.index, req) self.assertIn('The value of changes-since must be less than ' 'or equal to changes-before', six.text_type(ex)) def test_index_with_changes_before_old_microversion_failed(self): """Tests that the changes-before query parameter is an error before microversion 2.66. """ # Also use a valid filter (instance_uuid) to make sure # changes-before is an additional property. req = fakes.HTTPRequest.blank( '/os-migrations?changes-before=2018-01-10T16:59:24.138939&' 'instance_uuid=%s' % uuids.instance_uuid, version='2.65', use_admin_context=True) ex = self.assertRaises(exception.ValidationError, self.controller.index, req) self.assertIn('Additional properties are not allowed', six.text_type(ex)) @mock.patch('nova.compute.api.API.get_migrations', return_value=objects.MigrationList()) def test_index_with_changes_before_old_microversion(self, get_migrations): """Tests that the changes-before query parameter is ignored before microversion 2.59. """ # Also use a valid filter (instance_uuid) to make sure only # changes-before is removed. 
req = fakes.HTTPRequest.blank( '/os-migrations?changes-before=2018-01-10T16:59:24.138939&' 'instance_uuid=%s' % uuids.instance_uuid, version='2.58', use_admin_context=True) result = self.controller.index(req) self.assertEqual({'migrations': []}, result) get_migrations.assert_called_once_with( req.environ['nova.context'], {'instance_uuid': uuids.instance_uuid}) class MigrationsTestCaseV280(MigrationTestCaseV266): wsgi_api_version = '2.80' def test_index(self): migrations = {'migrations': self.controller._output( self.req, migrations_obj, add_link=True, add_uuid=True, add_user_project=True)} for i, mig in enumerate(migrations['migrations']): # first item is in-progress live migration if i == 0: self.assertIn('links', mig) else: self.assertNotIn('links', mig) self.assertIn('migration_type', mig) self.assertIn('id', mig) self.assertIn('uuid', mig) self.assertIn('user_id', mig) self.assertIn('project_id', mig) self.assertNotIn('deleted', mig) self.assertNotIn('deleted_at', mig) with mock.patch.object(self.controller.compute_api, 'get_migrations_sorted') as m_get: m_get.return_value = migrations_obj response = self.controller.index(self.req) self.assertEqual(migrations, response) self.assertIn('links', response['migrations'][0]) self.assertIn('migration_type', response['migrations'][0]) def test_index_filter_by_user_id_pre_v280(self): """Tests that the migrations by user_id query parameter is not allowed before microversion 2.80. """ req = fakes.HTTPRequest.blank( '/os-migrations?user_id=%s' % uuids.user_id, version='2.79', use_admin_context=True) ex = self.assertRaises(exception.ValidationError, self.controller.index, req) self.assertIn('Additional properties are not allowed', six.text_type(ex)) def test_index_filter_by_project_id_pre_v280(self): """Tests that the migrations by project_id query parameter is not allowed before microversion 2.80. """ req = fakes.HTTPRequest.blank( '/os-migrations?project_id=%s' % uuids.project_id, version='2.79', use_admin_context=True) ex = self.assertRaises(exception.ValidationError, self.controller.index, req) self.assertIn('Additional properties are not allowed', six.text_type(ex)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_multinic.py0000664000175000017500000001631400000000000025330 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
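# The tests in this module cover the os-multinic server actions
# (addFixedIp / removeFixedIp). The compute API methods are stubbed with the
# module-level helpers below, which record their last call in the
# last_add_fixed_ip / last_remove_fixed_ip globals so the tests can assert
# exactly what the controller passed through. The remaining classes check
# JSON schema validation of the request bodies, policy enforcement for the
# os_compute_api:os-multinic rule, and that requests at microversion 2.44
# are rejected with VersionNotFoundForAPIMethod.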
import mock import webob from nova.api.openstack.compute import multinic as multinic_v21 from nova import compute from nova import exception from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes UUID = '70f6db34-de8d-4fbd-aafb-4065bdfa6114' last_add_fixed_ip = (None, None) last_remove_fixed_ip = (None, None) def compute_api_add_fixed_ip(self, context, instance, network_id): global last_add_fixed_ip last_add_fixed_ip = (instance['uuid'], network_id) def compute_api_remove_fixed_ip(self, context, instance, address): global last_remove_fixed_ip last_remove_fixed_ip = (instance['uuid'], address) def compute_api_get(self, context, instance_id, expected_attrs=None, cell_down_support=False): instance = objects.Instance() instance.uuid = instance_id instance.id = 1 instance.vm_state = 'fake' instance.task_state = 'fake' instance.obj_reset_changes() return instance class FixedIpTestV21(test.NoDBTestCase): controller_class = multinic_v21 validation_error = exception.ValidationError def setUp(self): super(FixedIpTestV21, self).setUp() fakes.stub_out_networking(self) self.stub_out('nova.compute.api.API.add_fixed_ip', compute_api_add_fixed_ip) self.stub_out('nova.compute.api.API.remove_fixed_ip', compute_api_remove_fixed_ip) self.stub_out('nova.compute.api.API.get', compute_api_get) self.controller = self.controller_class.MultinicController() self.fake_req = fakes.HTTPRequest.blank('') def test_add_fixed_ip(self): global last_add_fixed_ip last_add_fixed_ip = (None, None) body = dict(addFixedIp=dict(networkId='test_net')) self.controller._add_fixed_ip(self.fake_req, UUID, body=body) self.assertEqual(last_add_fixed_ip, (UUID, 'test_net')) def _test_add_fixed_ip_bad_request(self, body): self.assertRaises(self.validation_error, self.controller._add_fixed_ip, self.fake_req, UUID, body=body) def test_add_fixed_ip_empty_network_id(self): body = {'addFixedIp': {'network_id': ''}} self._test_add_fixed_ip_bad_request(body) def test_add_fixed_ip_network_id_bigger_than_36(self): body = {'addFixedIp': {'network_id': 'a' * 37}} self._test_add_fixed_ip_bad_request(body) def test_add_fixed_ip_no_network(self): global last_add_fixed_ip last_add_fixed_ip = (None, None) body = dict(addFixedIp=dict()) self._test_add_fixed_ip_bad_request(body) self.assertEqual(last_add_fixed_ip, (None, None)) @mock.patch.object(compute.api.API, 'add_fixed_ip') def test_add_fixed_ip_no_more_ips_available(self, mock_add_fixed_ip): mock_add_fixed_ip.side_effect = exception.NoMoreFixedIps(net='netid') body = dict(addFixedIp=dict(networkId='test_net')) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._add_fixed_ip, self.fake_req, UUID, body=body) def test_remove_fixed_ip(self): global last_remove_fixed_ip last_remove_fixed_ip = (None, None) body = dict(removeFixedIp=dict(address='10.10.10.1')) self.controller._remove_fixed_ip(self.fake_req, UUID, body=body) self.assertEqual(last_remove_fixed_ip, (UUID, '10.10.10.1')) def test_remove_fixed_ip_no_address(self): global last_remove_fixed_ip last_remove_fixed_ip = (None, None) body = dict(removeFixedIp=dict()) self.assertRaises(self.validation_error, self.controller._remove_fixed_ip, self.fake_req, UUID, body=body) self.assertEqual(last_remove_fixed_ip, (None, None)) def test_remove_fixed_ip_invalid_address(self): body = {'removeFixedIp': {'address': ''}} self.assertRaises(self.validation_error, self.controller._remove_fixed_ip, self.fake_req, UUID, body=body) @mock.patch.object(compute.api.API, 'remove_fixed_ip', 
side_effect=exception.FixedIpNotFoundForInstance( instance_uuid=UUID, ip='10.10.10.1')) def test_remove_fixed_ip_not_found(self, _remove_fixed_ip): body = {'removeFixedIp': {'address': '10.10.10.1'}} self.assertRaises(webob.exc.HTTPBadRequest, self.controller._remove_fixed_ip, self.fake_req, UUID, body=body) class MultinicPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(MultinicPolicyEnforcementV21, self).setUp() self.controller = multinic_v21.MultinicController() self.req = fakes.HTTPRequest.blank('') def test_add_fixed_ip_policy_failed(self): rule_name = "os_compute_api:os-multinic" self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._add_fixed_ip, self.req, fakes.FAKE_UUID, body={'addFixedIp': {'networkId': fakes.FAKE_UUID}}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_remove_fixed_ip_policy_failed(self): rule_name = "os_compute_api:os-multinic" self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._remove_fixed_ip, self.req, fakes.FAKE_UUID, body={'removeFixedIp': {'address': "10.0.0.1"}}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) class MultinicAPIDeprecationTest(test.NoDBTestCase): def setUp(self): super(MultinicAPIDeprecationTest, self).setUp() self.controller = multinic_v21.MultinicController() self.req = fakes.HTTPRequest.blank('', version='2.44') def test_add_fixed_ip_not_found(self): body = dict(addFixedIp=dict(networkId='test_net')) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller._add_fixed_ip, self.req, UUID, body=body) def test_remove_fixed_ip__not_found(self): body = dict(removeFixedIp=dict(address='10.10.10.1')) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller._remove_fixed_ip, self.req, UUID, body=body) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_networks.py0000664000175000017500000001476400000000000025367 0ustar00zuulzuul00000000000000# Copyright 2011 Grid Dynamics # Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
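# The tests in this module cover the read-only os-networks API (index and
# show only here). FakeNetworkAPI below serves two canned networks, and the
# USER_RESPONSE_TEMPLATE / ADMIN_RESPONSE_TEMPLATE dicts describe which
# (mostly null) fields are expected for non-admin and admin callers.
# NetworksTestV21 checks listing and show for both contexts plus the 404
# path, NetworksEnforcementV21 checks the os_compute_api:os-networks:view
# policy rule, and NetworksDeprecationTest checks that the API returns
# VersionNotFoundForAPIMethod at microversion 2.36.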
import copy from oslo_utils.fixture import uuidsentinel as uuids import webob from nova.api.openstack.compute import networks as networks_v21 from nova import exception from nova.network import neutron from nova import test from nova.tests.unit.api.openstack import fakes # NOTE(stephenfin): obviously these aren't complete reponses, but this is all # we care about FAKE_NETWORKS = [ { 'id': uuids.private, 'name': 'private', }, { 'id': uuids.public, 'name': 'public', }, ] USER_RESPONSE_TEMPLATE = { field: None for field in ( 'id', 'cidr', 'netmask', 'gateway', 'broadcast', 'dns1', 'dns2', 'cidr_v6', 'gateway_v6', 'label', 'netmask_v6') } ADMIN_RESPONSE_TEMPLATE = { field: None for field in ( 'id', 'cidr', 'netmask', 'gateway', 'broadcast', 'dns1', 'dns2', 'cidr_v6', 'gateway_v6', 'label', 'netmask_v6', 'created_at', 'updated_at', 'deleted_at', 'deleted', 'injected', 'bridge', 'vlan', 'vpn_public_address', 'vpn_public_port', 'vpn_private_address', 'dhcp_start', 'project_id', 'host', 'bridge_interface', 'multi_host', 'priority', 'rxtx_base', 'mtu', 'dhcp_server', 'enable_dhcp', 'share_address') } class FakeNetworkAPI(object): def __init__(self): self.networks = copy.deepcopy(FAKE_NETWORKS) def get_all(self, context): return self.networks def get(self, context, network_id): for network in self.networks: if network['id'] == network_id: return network raise exception.NetworkNotFound(network_id=network_id) class NetworksTestV21(test.NoDBTestCase): validation_error = exception.ValidationError def setUp(self): super(NetworksTestV21, self).setUp() # TODO(stephenfin): Consider using then NeutronFixture here self.fake_network_api = FakeNetworkAPI() self._setup() fakes.stub_out_networking(self) self.non_admin_req = fakes.HTTPRequest.blank( '', project_id=fakes.FAKE_PROJECT_ID) self.admin_req = fakes.HTTPRequest.blank( '', project_id=fakes.FAKE_PROJECT_ID, use_admin_context=True) def _setup(self): self.controller = networks_v21.NetworkController( self.fake_network_api) self.neutron_ctrl = networks_v21.NetworkController( neutron.API()) self.req = fakes.HTTPRequest.blank('', project_id=fakes.FAKE_PROJECT_ID) def _check_status(self, res, method, code): self.assertEqual(method.wsgi_code, code) def test_network_list_all_as_user(self): res_dict = self.controller.index(self.non_admin_req) expected = [] for fake_network in FAKE_NETWORKS: expected_ = copy.deepcopy(USER_RESPONSE_TEMPLATE) expected_['id'] = fake_network['id'] expected_['label'] = fake_network['name'] expected.append(expected_) self.assertEqual({'networks': expected}, res_dict) def test_network_list_all_as_admin(self): res_dict = self.controller.index(self.admin_req) expected = [] for fake_network in FAKE_NETWORKS: expected_ = copy.deepcopy(ADMIN_RESPONSE_TEMPLATE) expected_['id'] = fake_network['id'] expected_['label'] = fake_network['name'] expected.append(expected_) self.assertEqual({'networks': expected}, res_dict) def test_network_get_as_user(self): uuid = FAKE_NETWORKS[0]['id'] res_dict = self.controller.show(self.non_admin_req, uuid) expected = copy.deepcopy(USER_RESPONSE_TEMPLATE) expected['id'] = FAKE_NETWORKS[0]['id'] expected['label'] = FAKE_NETWORKS[0]['name'] self.assertEqual({'network': expected}, res_dict) def test_network_get_as_admin(self): uuid = FAKE_NETWORKS[0]['id'] res_dict = self.controller.show(self.admin_req, uuid) expected = copy.deepcopy(ADMIN_RESPONSE_TEMPLATE) expected['id'] = FAKE_NETWORKS[0]['id'] expected['label'] = FAKE_NETWORKS[0]['name'] self.assertEqual({'network': expected}, res_dict) def 
test_network_get_not_found(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, uuids.invalid) class NetworksEnforcementV21(test.NoDBTestCase): def setUp(self): super(NetworksEnforcementV21, self).setUp() self.controller = networks_v21.NetworkController() self.req = fakes.HTTPRequest.blank('') def test_show_policy_failed(self): rule_name = 'os_compute_api:os-networks:view' self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.show, self.req, fakes.FAKE_UUID) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_index_policy_failed(self): rule_name = 'os_compute_api:os-networks:view' self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.index, self.req) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) class NetworksDeprecationTest(test.NoDBTestCase): def setUp(self): super(NetworksDeprecationTest, self).setUp() self.controller = networks_v21.NetworkController() self.req = fakes.HTTPRequest.blank('', version='2.36') def test_all_api_return_not_found(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.show, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.index, self.req) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_pause_server.py0000664000175000017500000000433600000000000026210 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.openstack.compute import pause_server as \ pause_server_v21 from nova.tests.unit.api.openstack.compute import admin_only_action_common class PauseServerTestsV21(admin_only_action_common.CommonTests): pause_server = pause_server_v21 controller_name = 'PauseServerController' _api_version = '2.1' def setUp(self): super(PauseServerTestsV21, self).setUp() self.controller = getattr(self.pause_server, self.controller_name)() self.compute_api = self.controller.compute_api self.stub_out('nova.api.openstack.compute.pause_server.' 
'PauseServerController', lambda *a, **kw: self.controller) def test_pause_unpause(self): self._test_actions(['_pause', '_unpause']) def test_actions_raise_on_not_implemented(self): for action in ['_pause', '_unpause']: self._test_not_implemented_state(action) def test_pause_unpause_with_non_existed_instance(self): self._test_actions_with_non_existed_instance(['_pause', '_unpause']) def test_pause_unpause_with_non_existed_instance_in_compute_api(self): self._test_actions_instance_not_found_in_compute_api(['_pause', '_unpause']) def test_pause_unpause_raise_conflict_on_invalid_state(self): self._test_actions_raise_conflict_on_invalid_state(['_pause', '_unpause']) def test_actions_with_locked_instance(self): self._test_actions_with_locked_instance(['_pause', '_unpause']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_quota_classes.py0000664000175000017500000001522000000000000026345 0ustar00zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import webob from nova.api.openstack.compute import quota_classes \ as quota_classes_v21 from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes class QuotaClassSetsTestV21(test.TestCase): validation_error = exception.ValidationError api_version = '2.1' quota_resources = {'metadata_items': 128, 'ram': 51200, 'floating_ips': -1, 'fixed_ips': -1, 'instances': 10, 'injected_files': 5, 'cores': 20, 'injected_file_content_bytes': 10240, 'security_groups': -1, 'security_group_rules': -1, 'key_pairs': 100, 'injected_file_path_bytes': 255} filtered_quotas = None def quota_set(self, class_name): quotas = copy.deepcopy(self.quota_resources) quotas['id'] = class_name return {'quota_class_set': quotas} def setUp(self): super(QuotaClassSetsTestV21, self).setUp() self.req = fakes.HTTPRequest.blank('', version=self.api_version) self._setup() def _setup(self): self.controller = quota_classes_v21.QuotaClassSetsController() def _check_filtered_extended_quota(self, quota_set): self.assertNotIn('server_groups', quota_set) self.assertNotIn('server_group_members', quota_set) self.assertEqual(-1, quota_set['floating_ips']) self.assertEqual(-1, quota_set['fixed_ips']) self.assertEqual(-1, quota_set['security_groups']) self.assertEqual(-1, quota_set['security_group_rules']) def test_format_quota_set(self): quota_set = self.controller._format_quota_set('test_class', self.quota_resources, self.filtered_quotas) qs = quota_set['quota_class_set'] self.assertEqual(qs['id'], 'test_class') for resource, value in self.quota_resources.items(): self.assertEqual(value, qs[resource]) if self.filtered_quotas: for resource in self.filtered_quotas: self.assertNotIn(resource, qs) self._check_filtered_extended_quota(qs) def test_quotas_show(self): res_dict = self.controller.show(self.req, 'test_class') self.assertEqual(res_dict, self.quota_set('test_class')) def 
test_quotas_update(self): expected_body = {'quota_class_set': self.quota_resources} request_quota_resources = copy.deepcopy(self.quota_resources) request_quota_resources['server_groups'] = 10 request_quota_resources['server_group_members'] = 10 request_body = {'quota_class_set': request_quota_resources} res_dict = self.controller.update(self.req, 'test_class', body=request_body) self.assertEqual(res_dict, expected_body) def test_quotas_update_with_empty_body(self): body = {} self.assertRaises(self.validation_error, self.controller.update, self.req, 'test_class', body=body) def test_quotas_update_with_invalid_integer(self): body = {'quota_class_set': {'instances': 2 ** 31 + 1}} self.assertRaises(self.validation_error, self.controller.update, self.req, 'test_class', body=body) def test_quotas_update_with_long_quota_class_name(self): name = 'a' * 256 body = {'quota_class_set': {'instances': 10}} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.req, name, body=body) def test_quotas_update_with_non_integer(self): body = {'quota_class_set': {'instances': "abc"}} self.assertRaises(self.validation_error, self.controller.update, self.req, 'test_class', body=body) body = {'quota_class_set': {'instances': 50.5}} self.assertRaises(self.validation_error, self.controller.update, self.req, 'test_class', body=body) body = {'quota_class_set': { 'instances': u'\u30aa\u30fc\u30d7\u30f3'}} self.assertRaises(self.validation_error, self.controller.update, self.req, 'test_class', body=body) def test_quotas_update_with_unsupported_quota_class(self): body = {'quota_class_set': {'instances': 50, 'cores': 50, 'ram': 51200, 'unsupported': 12}} self.assertRaises(self.validation_error, self.controller.update, self.req, 'test_class', body=body) class QuotaClassSetsTestV250(QuotaClassSetsTestV21): api_version = '2.50' quota_resources = {'metadata_items': 128, 'ram': 51200, 'instances': 10, 'injected_files': 5, 'cores': 20, 'injected_file_content_bytes': 10240, 'key_pairs': 100, 'injected_file_path_bytes': 255, 'server_groups': 10, 'server_group_members': 10} filtered_quotas = quota_classes_v21.FILTERED_QUOTAS_2_50 def _check_filtered_extended_quota(self, quota_set): self.assertEqual(10, quota_set['server_groups']) self.assertEqual(10, quota_set['server_group_members']) for resource in self.filtered_quotas: self.assertNotIn(resource, quota_set) def test_quotas_update_with_filtered_quota(self): for resource in self.filtered_quotas: body = {'quota_class_set': {resource: 10}} self.assertRaises(self.validation_error, self.controller.update, self.req, 'test_class', body=body) class QuotaClassSetsTestV257(QuotaClassSetsTestV250): api_version = '2.57' def setUp(self): super(QuotaClassSetsTestV257, self).setUp() for resource in quota_classes_v21.FILTERED_QUOTAS_2_57: self.quota_resources.pop(resource, None) self.filtered_quotas.extend(quota_classes_v21.FILTERED_QUOTAS_2_57) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_quotas.py0000664000175000017500000006731000000000000025022 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2013 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import webob from nova.api.openstack.compute import quota_sets as quotas_v21 from nova.db import api as db from nova import exception from nova import quota from nova import test from nova.tests.unit.api.openstack import fakes def quota_set(id, include_server_group_quotas=True): res = {'quota_set': {'id': id, 'metadata_items': 128, 'ram': 51200, 'floating_ips': -1, 'fixed_ips': -1, 'instances': 10, 'injected_files': 5, 'cores': 20, 'injected_file_content_bytes': 10240, 'security_groups': -1, 'security_group_rules': -1, 'key_pairs': 100, 'injected_file_path_bytes': 255}} if include_server_group_quotas: res['quota_set']['server_groups'] = 10 res['quota_set']['server_group_members'] = 10 return res class BaseQuotaSetsTest(test.TestCase): def setUp(self): super(BaseQuotaSetsTest, self).setUp() # We need to stub out verify_project_id so that it doesn't # generate an EndpointNotFound exception and result in a # server error. self.stub_out('nova.api.openstack.identity.verify_project_id', lambda ctx, project_id: True) def get_delete_status_int(self, res): # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. return self.controller.delete.wsgi_code class QuotaSetsTestV21(BaseQuotaSetsTest): plugin = quotas_v21 validation_error = exception.ValidationError include_server_group_quotas = True def setUp(self): super(QuotaSetsTestV21, self).setUp() self._setup_controller() self.default_quotas = { 'instances': 10, 'cores': 20, 'ram': 51200, 'floating_ips': -1, 'fixed_ips': -1, 'metadata_items': 128, 'injected_files': 5, 'injected_file_path_bytes': 255, 'injected_file_content_bytes': 10240, 'security_groups': -1, 'security_group_rules': -1, 'key_pairs': 100, } if self.include_server_group_quotas: self.default_quotas['server_groups'] = 10 self.default_quotas['server_group_members'] = 10 def _setup_controller(self): self.controller = self.plugin.QuotaSetsController() def _get_http_request(self, url=''): return fakes.HTTPRequest.blank(url) def test_format_quota_set(self): quota_set = self.controller._format_quota_set('1234', self.default_quotas, []) qs = quota_set['quota_set'] self.assertEqual(qs['id'], '1234') self.assertEqual(qs['instances'], 10) self.assertEqual(qs['cores'], 20) self.assertEqual(qs['ram'], 51200) self.assertEqual(qs['floating_ips'], -1) self.assertEqual(qs['fixed_ips'], -1) self.assertEqual(qs['metadata_items'], 128) self.assertEqual(qs['injected_files'], 5) self.assertEqual(qs['injected_file_path_bytes'], 255) self.assertEqual(qs['injected_file_content_bytes'], 10240) self.assertEqual(qs['security_groups'], -1) self.assertEqual(qs['security_group_rules'], -1) self.assertEqual(qs['key_pairs'], 100) if self.include_server_group_quotas: self.assertEqual(qs['server_groups'], 10) self.assertEqual(qs['server_group_members'], 10) def test_validate_quota_limit(self): resource = 'fake' # Valid - finite values self.assertIsNone(self.controller._validate_quota_limit(resource, 50, 10, 100)) # Valid - finite limit and infinite maximum self.assertIsNone(self.controller._validate_quota_limit(resource, 50, 10, -1)) # Valid - infinite 
limit and infinite maximum self.assertIsNone(self.controller._validate_quota_limit(resource, -1, 10, -1)) # Valid - all infinite self.assertIsNone(self.controller._validate_quota_limit(resource, -1, -1, -1)) # Invalid - limit is less than -1 self.assertRaises(webob.exc.HTTPBadRequest, self.controller._validate_quota_limit, resource, -2, 10, 100) # Invalid - limit is less than minimum self.assertRaises(webob.exc.HTTPBadRequest, self.controller._validate_quota_limit, resource, 5, 10, 100) # Invalid - limit is greater than maximum self.assertRaises(webob.exc.HTTPBadRequest, self.controller._validate_quota_limit, resource, 200, 10, 100) # Invalid - infinite limit is greater than maximum self.assertRaises(webob.exc.HTTPBadRequest, self.controller._validate_quota_limit, resource, -1, 10, 100) # Invalid - limit is less than infinite minimum self.assertRaises(webob.exc.HTTPBadRequest, self.controller._validate_quota_limit, resource, 50, -1, -1) # Invalid - limit is larger than 0x7FFFFFFF self.assertRaises(webob.exc.HTTPBadRequest, self.controller._validate_quota_limit, resource, db.MAX_INT + 1, -1, -1) def test_quotas_defaults(self): uri = '/v2/%s/os-quota-sets/%s/defaults' % ( fakes.FAKE_PROJECT_ID, fakes.FAKE_PROJECT_ID) req = fakes.HTTPRequest.blank(uri) res_dict = self.controller.defaults(req, fakes.FAKE_PROJECT_ID) self.default_quotas.update({'id': fakes.FAKE_PROJECT_ID}) expected = {'quota_set': self.default_quotas} self.assertEqual(res_dict, expected) def test_quotas_show(self): req = self._get_http_request() res_dict = self.controller.show(req, 1234) ref_quota_set = quota_set('1234', self.include_server_group_quotas) self.assertEqual(res_dict, ref_quota_set) def test_quotas_update(self): self.default_quotas.update({ 'instances': 50, 'cores': 50 }) body = {'quota_set': self.default_quotas} req = self._get_http_request() res_dict = self.controller.update(req, 'update_me', body=body) self.assertEqual(body, res_dict) @mock.patch('nova.objects.Quotas.create_limit') def test_quotas_update_with_good_data(self, mock_createlimit): self.default_quotas.update({}) body = {'quota_set': self.default_quotas} req = self._get_http_request() self.controller.update(req, 'update_me', body=body) self.assertEqual(len(self.default_quotas), len(mock_createlimit.mock_calls)) @mock.patch('nova.api.validation.validators._SchemaValidator.validate') @mock.patch('nova.objects.Quotas.create_limit') def test_quotas_update_with_bad_data(self, mock_createlimit, mock_validate): self.default_quotas.update({ 'instances': 50, 'cores': -50 }) body = {'quota_set': self.default_quotas} req = self._get_http_request() self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, 'update_me', body=body) self.assertEqual(0, len(mock_createlimit.mock_calls)) def test_quotas_update_zero_value(self): body = {'quota_set': {'instances': 0, 'cores': 0, 'ram': 0, 'floating_ips': -1, 'metadata_items': 0, 'injected_files': 0, 'injected_file_content_bytes': 0, 'injected_file_path_bytes': 0, 'security_groups': -1, 'security_group_rules': -1, 'key_pairs': 100, 'fixed_ips': -1}} if self.include_server_group_quotas: body['quota_set']['server_groups'] = 10 body['quota_set']['server_group_members'] = 10 req = self._get_http_request() res_dict = self.controller.update(req, 'update_me', body=body) self.assertEqual(body, res_dict) def _quotas_update_bad_request_case(self, body): req = self._get_http_request() self.assertRaises(self.validation_error, self.controller.update, req, 'update_me', body=body) def 
test_quotas_update_invalid_key(self): body = {'quota_set': {'instances2': -2, 'cores': -2, 'ram': -2, 'floating_ips': -2, 'metadata_items': -2, 'injected_files': -2, 'injected_file_content_bytes': -2}} self._quotas_update_bad_request_case(body) def test_quotas_update_invalid_limit(self): body = {'quota_set': {'instances': -2, 'cores': -2, 'ram': -2, 'floating_ips': -2, 'fixed_ips': -2, 'metadata_items': -2, 'injected_files': -2, 'injected_file_content_bytes': -2}} self._quotas_update_bad_request_case(body) def test_quotas_update_empty_body(self): body = {} self._quotas_update_bad_request_case(body) def test_quotas_update_invalid_value_non_int(self): # when PUT non integer value self.default_quotas.update({ 'instances': 'test' }) body = {'quota_set': self.default_quotas} self._quotas_update_bad_request_case(body) def test_quotas_update_invalid_value_with_float(self): # when PUT non integer value self.default_quotas.update({ 'instances': 50.5 }) body = {'quota_set': self.default_quotas} self._quotas_update_bad_request_case(body) def test_quotas_update_invalid_value_with_unicode(self): # when PUT non integer value self.default_quotas.update({ 'instances': u'\u30aa\u30fc\u30d7\u30f3' }) body = {'quota_set': self.default_quotas} self._quotas_update_bad_request_case(body) @mock.patch('nova.objects.Quotas.destroy_all_by_project') def test_quotas_delete(self, mock_destroy_all_by_project): req = self._get_http_request() res = self.controller.delete(req, 1234) self.assertEqual(202, self.get_delete_status_int(res)) mock_destroy_all_by_project.assert_called_once_with( req.environ['nova.context'], 1234) def test_duplicate_quota_filter(self): query_string = 'user_id=1&user_id=2' req = fakes.HTTPRequest.blank('', query_string=query_string) self.controller.show(req, 1234) self.controller.update(req, 1234, body={'quota_set': {}}) self.controller.detail(req, 1234) self.controller.delete(req, 1234) def test_quota_filter_negative_int_as_string(self): req = fakes.HTTPRequest.blank('', query_string='user_id=-1') self.controller.show(req, 1234) self.controller.update(req, 1234, body={'quota_set': {}}) self.controller.detail(req, 1234) self.controller.delete(req, 1234) def test_quota_filter_int_as_string(self): req = fakes.HTTPRequest.blank('', query_string='user_id=123') self.controller.show(req, 1234) self.controller.update(req, 1234, body={'quota_set': {}}) self.controller.detail(req, 1234) self.controller.delete(req, 1234) def test_unknown_quota_filter(self): query_string = 'unknown_filter=abc' req = fakes.HTTPRequest.blank('', query_string=query_string) self.controller.show(req, 1234) self.controller.update(req, 1234, body={'quota_set': {}}) self.controller.detail(req, 1234) self.controller.delete(req, 1234) def test_quota_additional_filter(self): query_string = 'user_id=1&additional_filter=2' req = fakes.HTTPRequest.blank('', query_string=query_string) self.controller.show(req, 1234) self.controller.update(req, 1234, body={'quota_set': {}}) self.controller.detail(req, 1234) self.controller.delete(req, 1234) class ExtendedQuotasTestV21(BaseQuotaSetsTest): plugin = quotas_v21 def setUp(self): super(ExtendedQuotasTestV21, self).setUp() self._setup_controller() fake_quotas = {'ram': {'limit': 51200, 'in_use': 12800, 'reserved': 12800}, 'cores': {'limit': 20, 'in_use': 10, 'reserved': 5}, 'instances': {'limit': 100, 'in_use': 0, 'reserved': 0}} def _setup_controller(self): self.controller = self.plugin.QuotaSetsController() def fake_get_quotas(self, context, id, user_id=None, usages=False): if usages: return 
self.fake_quotas else: return {k: v['limit'] for k, v in self.fake_quotas.items()} def fake_get_settable_quotas(self, context, project_id, user_id=None): return { 'ram': {'minimum': self.fake_quotas['ram']['in_use'] + self.fake_quotas['ram']['reserved'], 'maximum': -1}, 'cores': {'minimum': self.fake_quotas['cores']['in_use'] + self.fake_quotas['cores']['reserved'], 'maximum': -1}, 'instances': {'minimum': self.fake_quotas['instances']['in_use'] + self.fake_quotas['instances']['reserved'], 'maximum': -1}, } def _get_http_request(self, url=''): return fakes.HTTPRequest.blank(url) @mock.patch.object(quota.QUOTAS, 'get_settable_quotas') def test_quotas_update_exceed_in_used(self, get_settable_quotas): body = {'quota_set': {'cores': 10}} get_settable_quotas.side_effect = self.fake_get_settable_quotas req = self._get_http_request() self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, 'update_me', body=body) @mock.patch.object(quota.QUOTAS, 'get_settable_quotas') def test_quotas_force_update_exceed_in_used(self, get_settable_quotas): with mock.patch.object(self.plugin.QuotaSetsController, '_get_quotas') as _get_quotas: body = {'quota_set': {'cores': 10, 'force': 'True'}} get_settable_quotas.side_effect = self.fake_get_settable_quotas _get_quotas.side_effect = self.fake_get_quotas req = self._get_http_request() self.controller.update(req, 'update_me', body=body) @mock.patch('nova.objects.Quotas.create_limit') def test_quotas_update_good_data(self, mock_createlimit): body = {'quota_set': {'cores': 1, 'instances': 1}} req = fakes.HTTPRequest.blank( '/v2/%s/os-quota-sets/update_me' % fakes.FAKE_PROJECT_ID, use_admin_context=True) self.controller.update(req, 'update_me', body=body) self.assertEqual(2, len(mock_createlimit.mock_calls)) @mock.patch('nova.objects.Quotas.create_limit') @mock.patch.object(quota.QUOTAS, 'get_settable_quotas') def test_quotas_update_bad_data(self, mock_gsq, mock_createlimit): body = {'quota_set': {'cores': 10, 'instances': 1}} mock_gsq.side_effect = self.fake_get_settable_quotas req = fakes.HTTPRequest.blank( '/v2/%s/os-quota-sets/update_me' % fakes.FAKE_PROJECT_ID, use_admin_context=True) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, 'update_me', body=body) self.assertEqual(0, len(mock_createlimit.mock_calls)) class UserQuotasTestV21(BaseQuotaSetsTest): plugin = quotas_v21 include_server_group_quotas = True def setUp(self): super(UserQuotasTestV21, self).setUp() self._setup_controller() def _get_http_request(self, url=''): return fakes.HTTPRequest.blank(url) def _setup_controller(self): self.controller = self.plugin.QuotaSetsController() def test_user_quotas_show(self): req = self._get_http_request( '/v2/%s/os-quota-sets/%s?user_id=1' % (fakes.FAKE_PROJECT_ID, fakes.FAKE_PROJECT_ID)) res_dict = self.controller.show(req, fakes.FAKE_PROJECT_ID) ref_quota_set = quota_set(fakes.FAKE_PROJECT_ID, self.include_server_group_quotas) self.assertEqual(res_dict, ref_quota_set) def test_user_quotas_update(self): body = {'quota_set': {'instances': 10, 'cores': 20, 'ram': 51200, 'floating_ips': -1, 'fixed_ips': -1, 'metadata_items': 128, 'injected_files': 5, 'injected_file_content_bytes': 10240, 'injected_file_path_bytes': 255, 'security_groups': -1, 'security_group_rules': -1, 'key_pairs': 100}} if self.include_server_group_quotas: body['quota_set']['server_groups'] = 10 body['quota_set']['server_group_members'] = 10 url = ('/v2/%s/os-quota-sets/update_me?user_id=1' % fakes.FAKE_PROJECT_ID) req = self._get_http_request(url) res_dict = 
self.controller.update(req, 'update_me', body=body) self.assertEqual(body, res_dict) def test_user_quotas_update_exceed_project(self): body = {'quota_set': {'instances': 20}} url = ('/v2/%s/os-quota-sets/update_me?user_id=1' % fakes.FAKE_PROJECT_ID) req = self._get_http_request(url) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, 'update_me', body=body) @mock.patch('nova.objects.Quotas.destroy_all_by_project_and_user') def test_user_quotas_delete(self, mock_destroy_all_by_project_and_user): url = '/v2/%s/os-quota-sets/%s?user_id=1' % (fakes.FAKE_PROJECT_ID, fakes.FAKE_PROJECT_ID) req = self._get_http_request(url) res = self.controller.delete(req, fakes.FAKE_PROJECT_ID) self.assertEqual(202, self.get_delete_status_int(res)) mock_destroy_all_by_project_and_user.assert_called_once_with( req.environ['nova.context'], fakes.FAKE_PROJECT_ID, '1' ) @mock.patch('nova.objects.Quotas.create_limit') def test_user_quotas_update_good_data(self, mock_createlimit): body = {'quota_set': {'instances': 1, 'cores': 1}} url = ('/v2/%s/os-quota-sets/update_me?user_id=1' % fakes.FAKE_PROJECT_ID) req = fakes.HTTPRequest.blank(url, use_admin_context=True) self.controller.update(req, 'update_me', body=body) self.assertEqual(2, len(mock_createlimit.mock_calls)) @mock.patch('nova.objects.Quotas.create_limit') def test_user_quotas_update_bad_data(self, mock_createlimit): body = {'quota_set': {'instances': 20, 'cores': 1}} url = ('/v2/%s/os-quota-sets/update_me?user_id=1' % fakes.FAKE_PROJECT_ID) req = fakes.HTTPRequest.blank(url, use_admin_context=True) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, 'update_me', body=body) self.assertEqual(0, len(mock_createlimit.mock_calls)) class QuotaSetsTestV236(test.NoDBTestCase): microversion = '2.36' def setUp(self): super(QuotaSetsTestV236, self).setUp() # We need to stub out verify_project_id so that it doesn't # generate an EndpointNotFound exception and result in a # server error. 
self.stub_out('nova.api.openstack.identity.verify_project_id', lambda ctx, project_id: True) self.old_req = fakes.HTTPRequest.blank('', version='2.1') self.filtered_quotas = ['fixed_ips', 'floating_ips', 'security_group_rules', 'security_groups'] self.quotas = { 'cores': {'limit': 20}, 'fixed_ips': {'limit': -1}, 'floating_ips': {'limit': -1}, 'injected_file_content_bytes': {'limit': 10240}, 'injected_file_path_bytes': {'limit': 255}, 'injected_files': {'limit': 5}, 'instances': {'limit': 10}, 'key_pairs': {'limit': 100}, 'metadata_items': {'limit': 128}, 'ram': {'limit': 51200}, 'security_group_rules': {'limit': -1}, 'security_groups': {'limit': -1}, 'server_group_members': {'limit': 10}, 'server_groups': {'limit': 10} } self.defaults = { 'cores': 20, 'fixed_ips': -1, 'floating_ips': -1, 'injected_file_content_bytes': 10240, 'injected_file_path_bytes': 255, 'injected_files': 5, 'instances': 10, 'key_pairs': 100, 'metadata_items': 128, 'ram': 51200, 'security_group_rules': -1, 'security_groups': -1, 'server_group_members': 10, 'server_groups': 10 } self.controller = quotas_v21.QuotaSetsController() self.req = fakes.HTTPRequest.blank('', version=self.microversion) def _ensure_filtered_quotas_existed_in_old_api(self): res_dict = self.controller.show(self.old_req, 1234) for filtered in self.filtered_quotas: self.assertIn(filtered, res_dict['quota_set']) @mock.patch('nova.quota.QUOTAS.get_project_quotas') def test_quotas_show_filtered(self, mock_quotas): mock_quotas.return_value = self.quotas self._ensure_filtered_quotas_existed_in_old_api() res_dict = self.controller.show(self.req, 1234) for filtered in self.filtered_quotas: self.assertNotIn(filtered, res_dict['quota_set']) @mock.patch('nova.quota.QUOTAS.get_defaults') @mock.patch('nova.quota.QUOTAS.get_project_quotas') def test_quotas_default_filtered(self, mock_quotas, mock_defaults): mock_quotas.return_value = self.quotas self._ensure_filtered_quotas_existed_in_old_api() res_dict = self.controller.defaults(self.req, 1234) for filtered in self.filtered_quotas: self.assertNotIn(filtered, res_dict['quota_set']) @mock.patch('nova.quota.QUOTAS.get_project_quotas') def test_quotas_detail_filtered(self, mock_quotas): mock_quotas.return_value = self.quotas self._ensure_filtered_quotas_existed_in_old_api() res_dict = self.controller.detail(self.req, 1234) for filtered in self.filtered_quotas: self.assertNotIn(filtered, res_dict['quota_set']) @mock.patch('nova.quota.QUOTAS.get_project_quotas') def test_quotas_update_input_filtered(self, mock_quotas): mock_quotas.return_value = self.quotas self._ensure_filtered_quotas_existed_in_old_api() for filtered in self.filtered_quotas: self.assertRaises(exception.ValidationError, self.controller.update, self.req, 1234, body={'quota_set': {filtered: 100}}) @mock.patch('nova.objects.Quotas.create_limit') @mock.patch('nova.quota.QUOTAS.get_settable_quotas') @mock.patch('nova.quota.QUOTAS.get_project_quotas') def test_quotas_update_output_filtered(self, mock_quotas, mock_settable, mock_create_limit): mock_quotas.return_value = self.quotas mock_settable.return_value = {'cores': {'maximum': -1, 'minimum': 0}} self._ensure_filtered_quotas_existed_in_old_api() res_dict = self.controller.update(self.req, 1234, body={'quota_set': {'cores': 100}}) for filtered in self.filtered_quotas: self.assertNotIn(filtered, res_dict['quota_set']) class QuotaSetsTestV257(QuotaSetsTestV236): microversion = '2.57' def setUp(self): super(QuotaSetsTestV257, self).setUp() self.filtered_quotas.extend(quotas_v21.FILTERED_QUOTAS_2_57) class 
QuotaSetsTestV275(QuotaSetsTestV257): microversion = '2.75' @mock.patch('nova.objects.Quotas.destroy_all_by_project') @mock.patch('nova.objects.Quotas.create_limit') @mock.patch('nova.quota.QUOTAS.get_settable_quotas') @mock.patch('nova.quota.QUOTAS.get_project_quotas') def test_quota_additional_filter_older_version(self, mock_quotas, mock_settable, mock_create_limit, mock_destroy): mock_quotas.return_value = self.quotas mock_settable.return_value = {'cores': {'maximum': -1, 'minimum': 0}} query_string = 'additional_filter=2' req = fakes.HTTPRequest.blank('', version='2.74', query_string=query_string) self.controller.show(req, 1234) self.controller.update(req, 1234, body={'quota_set': {}}) self.controller.detail(req, 1234) self.controller.delete(req, 1234) def test_quota_update_additional_filter(self): query_string = 'user_id=1&additional_filter=2' req = fakes.HTTPRequest.blank('', version=self.microversion, query_string=query_string) self.assertRaises(exception.ValidationError, self.controller.update, req, 'update_me', body={'quota_set': {}}) def test_quota_show_additional_filter(self): query_string = 'user_id=1&additional_filter=2' req = fakes.HTTPRequest.blank('', version=self.microversion, query_string=query_string) self.assertRaises(exception.ValidationError, self.controller.show, req, 1234) def test_quota_detail_additional_filter(self): query_string = 'user_id=1&additional_filter=2' req = fakes.HTTPRequest.blank('', version=self.microversion, query_string=query_string) self.assertRaises(exception.ValidationError, self.controller.detail, req, 1234) def test_quota_delete_additional_filter(self): query_string = 'user_id=1&additional_filter=2' req = fakes.HTTPRequest.blank('', version=self.microversion, query_string=query_string) self.assertRaises(exception.ValidationError, self.controller.delete, req, 1234) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_remote_consoles.py0000664000175000017500000005373200000000000026711 0ustar00zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
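# ---------------------------------------------------------------------------
# Editorial aside (illustrative sketch, not part of the original module): the
# QuotaSetsTestV275 cases above exercise the change where, starting with
# microversion 2.75, unknown query parameters such as ``additional_filter``
# are rejected instead of silently ignored.  The helper below sketches that
# strict/lenient switch with plain Python; ``check_query_params`` is a
# placeholder, not the nova schema-validation code.
def check_query_params(params, allowed=('user_id',), strict=False):
    """Return the filtered params, or raise ValueError when strict."""
    unknown = sorted(set(params) - set(allowed))
    if unknown and strict:
        raise ValueError('Invalid query parameters: %s' % ', '.join(unknown))
    return {k: v for k, v in params.items() if k in allowed}


# Pre-2.75 behaviour: unknown filters are simply dropped.
assert check_query_params({'user_id': '1', 'additional_filter': '2'}) == \
    {'user_id': '1'}
# 2.75 behaviour: unknown filters cause a validation error.
try:
    check_query_params({'additional_filter': '2'}, strict=True)
except ValueError:
    pass
# ---------------------------------------------------------------------------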
import mock import webob from nova.api.openstack import api_version_request from nova.api.openstack.compute import remote_consoles \ as console_v21 from nova.compute import api as compute_api from nova import exception from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance class ConsolesExtensionTestV21(test.NoDBTestCase): controller_class = console_v21.RemoteConsolesController validation_error = exception.ValidationError def setUp(self): super(ConsolesExtensionTestV21, self).setUp() self.instance = objects.Instance(uuid=fakes.FAKE_UUID) self.stub_out('nova.compute.api.API.get', lambda *a, **kw: self.instance) self.controller = self.controller_class() def _check_console_failure(self, func, expected_exception, body, mocked_method=None, raised_exception=None): req = fakes.HTTPRequest.blank('') if mocked_method: @mock.patch.object(compute_api.API, mocked_method, side_effect=raised_exception) def _do_test(mock_method): self.assertRaises(expected_exception, func, req, fakes.FAKE_UUID, body=body) self.assertTrue(mock_method.called) _do_test() else: self.assertRaises(expected_exception, func, req, fakes.FAKE_UUID, body=body) @mock.patch.object(compute_api.API, 'get_vnc_console', return_value={'url': 'http://fake'}) def test_get_vnc_console(self, mock_get_vnc_console): body = {'os-getVNCConsole': {'type': 'novnc'}} req = fakes.HTTPRequest.blank('') output = self.controller.get_vnc_console(req, fakes.FAKE_UUID, body=body) self.assertEqual(output, {u'console': {u'url': u'http://fake', u'type': u'novnc'}}) mock_get_vnc_console.assert_called_once_with( req.environ['nova.context'], self.instance, 'novnc') def test_get_vnc_console_not_ready(self): body = {'os-getVNCConsole': {'type': 'novnc'}} self._check_console_failure( self.controller.get_vnc_console, webob.exc.HTTPConflict, body, 'get_vnc_console', exception.InstanceNotReady(instance_id=fakes.FAKE_UUID)) def test_get_vnc_console_no_type(self): body = {'os-getVNCConsole': {}} self._check_console_failure( self.controller.get_vnc_console, self.validation_error, body) def test_get_vnc_console_no_instance(self): body = {'os-getVNCConsole': {'type': 'novnc'}} self._check_console_failure( self.controller.get_vnc_console, webob.exc.HTTPNotFound, body, 'get', exception.InstanceNotFound(instance_id=fakes.FAKE_UUID)) def test_get_vnc_console_no_instance_on_console_get(self): body = {'os-getVNCConsole': {'type': 'novnc'}} self._check_console_failure( self.controller.get_vnc_console, webob.exc.HTTPNotFound, body, 'get_vnc_console', exception.InstanceNotFound(instance_id=fakes.FAKE_UUID)) def test_get_vnc_console_invalid_type(self): body = {'os-getVNCConsole': {'type': 'invalid'}} self._check_console_failure( self.controller.get_vnc_console, self.validation_error, body) def test_get_vnc_console_type_unavailable(self): body = {'os-getVNCConsole': {'type': 'unavailable'}} self._check_console_failure( self.controller.get_vnc_console, self.validation_error, body) def test_get_vnc_console_not_implemented(self): body = {'os-getVNCConsole': {'type': 'novnc'}} self._check_console_failure( self.controller.get_vnc_console, webob.exc.HTTPNotImplemented, body, 'get_vnc_console', NotImplementedError()) @mock.patch.object(compute_api.API, 'get_spice_console', return_value={'url': 'http://fake'}) def test_get_spice_console(self, mock_get_spice_console): body = {'os-getSPICEConsole': {'type': 'spice-html5'}} req = fakes.HTTPRequest.blank('') output = self.controller.get_spice_console(req, 
fakes.FAKE_UUID, body=body) self.assertEqual(output, {u'console': {u'url': u'http://fake', u'type': u'spice-html5'}}) mock_get_spice_console.assert_called_once_with( req.environ['nova.context'], self.instance, 'spice-html5') def test_get_spice_console_not_ready(self): body = {'os-getSPICEConsole': {'type': 'spice-html5'}} self._check_console_failure( self.controller.get_spice_console, webob.exc.HTTPConflict, body, 'get_spice_console', exception.InstanceNotReady(instance_id=fakes.FAKE_UUID)) def test_get_spice_console_no_type(self): body = {'os-getSPICEConsole': {}} self._check_console_failure( self.controller.get_spice_console, self.validation_error, body) def test_get_spice_console_no_instance(self): body = {'os-getSPICEConsole': {'type': 'spice-html5'}} self._check_console_failure( self.controller.get_spice_console, webob.exc.HTTPNotFound, body, 'get', exception.InstanceNotFound(instance_id=fakes.FAKE_UUID)) def test_get_spice_console_no_instance_on_console_get(self): body = {'os-getSPICEConsole': {'type': 'spice-html5'}} self._check_console_failure( self.controller.get_spice_console, webob.exc.HTTPNotFound, body, 'get_spice_console', exception.InstanceNotFound(instance_id=fakes.FAKE_UUID)) def test_get_spice_console_invalid_type(self): body = {'os-getSPICEConsole': {'type': 'invalid'}} self._check_console_failure( self.controller.get_spice_console, self.validation_error, body) def test_get_spice_console_not_implemented(self): body = {'os-getSPICEConsole': {'type': 'spice-html5'}} self._check_console_failure( self.controller.get_spice_console, webob.exc.HTTPNotImplemented, body, 'get_spice_console', NotImplementedError()) def test_get_spice_console_type_unavailable(self): body = {'os-getSPICEConsole': {'type': 'unavailable'}} self._check_console_failure( self.controller.get_spice_console, self.validation_error, body) @mock.patch.object(compute_api.API, 'get_rdp_console', return_value={'url': 'http://fake'}) def test_get_rdp_console(self, mock_get_rdp_console): body = {'os-getRDPConsole': {'type': 'rdp-html5'}} req = fakes.HTTPRequest.blank('') output = self.controller.get_rdp_console(req, fakes.FAKE_UUID, body=body) self.assertEqual(output, {u'console': {u'url': u'http://fake', u'type': u'rdp-html5'}}) mock_get_rdp_console.assert_called_once_with( req.environ['nova.context'], self.instance, 'rdp-html5') def test_get_rdp_console_not_ready(self): body = {'os-getRDPConsole': {'type': 'rdp-html5'}} self._check_console_failure( self.controller.get_rdp_console, webob.exc.HTTPConflict, body, 'get_rdp_console', exception.InstanceNotReady(instance_id=fakes.FAKE_UUID)) def test_get_rdp_console_no_type(self): body = {'os-getRDPConsole': {}} self._check_console_failure( self.controller.get_rdp_console, self.validation_error, body) def test_get_rdp_console_no_instance(self): body = {'os-getRDPConsole': {'type': 'rdp-html5'}} self._check_console_failure( self.controller.get_rdp_console, webob.exc.HTTPNotFound, body, 'get', exception.InstanceNotFound(instance_id=fakes.FAKE_UUID)) def test_get_rdp_console_no_instance_on_console_get(self): body = {'os-getRDPConsole': {'type': 'rdp-html5'}} self._check_console_failure( self.controller.get_rdp_console, webob.exc.HTTPNotFound, body, 'get_rdp_console', exception.InstanceNotFound(instance_id=fakes.FAKE_UUID)) def test_get_rdp_console_invalid_type(self): body = {'os-getRDPConsole': {'type': 'invalid'}} self._check_console_failure( self.controller.get_rdp_console, self.validation_error, body) def test_get_rdp_console_type_unavailable(self): body = 
{'os-getRDPConsole': {'type': 'unavailable'}} self._check_console_failure( self.controller.get_rdp_console, self.validation_error, body) def test_get_vnc_console_with_undefined_param(self): body = {'os-getVNCConsole': {'type': 'novnc', 'undefined': 'foo'}} self._check_console_failure( self.controller.get_vnc_console, self.validation_error, body) def test_get_spice_console_with_undefined_param(self): body = {'os-getSPICEConsole': {'type': 'spice-html5', 'undefined': 'foo'}} self._check_console_failure( self.controller.get_spice_console, self.validation_error, body) def test_get_rdp_console_with_undefined_param(self): body = {'os-getRDPConsole': {'type': 'rdp-html5', 'undefined': 'foo'}} self._check_console_failure( self.controller.get_rdp_console, self.validation_error, body) @mock.patch.object(compute_api.API, 'get_serial_console', return_value={'url': 'ws://fake'}) def test_get_serial_console(self, mock_get_serial_console): body = {'os-getSerialConsole': {'type': 'serial'}} req = fakes.HTTPRequest.blank('') output = self.controller.get_serial_console(req, fakes.FAKE_UUID, body=body) self.assertEqual({u'console': {u'url': u'ws://fake', u'type': u'serial'}}, output) mock_get_serial_console.assert_called_once_with( req.environ['nova.context'], self.instance, 'serial') def test_get_serial_console_not_enable(self): body = {'os-getSerialConsole': {'type': 'serial'}} self._check_console_failure( self.controller.get_serial_console, webob.exc.HTTPBadRequest, body, 'get_serial_console', exception.ConsoleTypeUnavailable(console_type="serial")) def test_get_serial_console_invalid_type(self): body = {'os-getSerialConsole': {'type': 'invalid'}} self._check_console_failure( self.controller.get_serial_console, self.validation_error, body) def test_get_serial_console_no_type(self): body = {'os-getSerialConsole': {}} self._check_console_failure( self.controller.get_serial_console, self.validation_error, body) def test_get_serial_console_no_instance(self): body = {'os-getSerialConsole': {'type': 'serial'}} self._check_console_failure( self.controller.get_serial_console, webob.exc.HTTPNotFound, body, 'get_serial_console', exception.InstanceNotFound(instance_id=fakes.FAKE_UUID)) def test_get_serial_console_instance_not_ready(self): body = {'os-getSerialConsole': {'type': 'serial'}} self._check_console_failure( self.controller.get_serial_console, webob.exc.HTTPConflict, body, 'get_serial_console', exception.InstanceNotReady(instance_id=fakes.FAKE_UUID)) def test_get_serial_console_socket_exhausted(self): body = {'os-getSerialConsole': {'type': 'serial'}} self._check_console_failure( self.controller.get_serial_console, webob.exc.HTTPBadRequest, body, 'get_serial_console', exception.SocketPortRangeExhaustedException(host='127.0.0.1')) def test_get_serial_console_image_nport_invalid(self): body = {'os-getSerialConsole': {'type': 'serial'}} self._check_console_failure( self.controller.get_serial_console, webob.exc.HTTPBadRequest, body, 'get_serial_console', exception.ImageSerialPortNumberInvalid( num_ports='x', property="hw_serial_port_count")) def test_get_serial_console_image_nport_exceed(self): body = {'os-getSerialConsole': {'type': 'serial'}} self._check_console_failure( self.controller.get_serial_console, webob.exc.HTTPBadRequest, body, 'get_serial_console', exception.ImageSerialPortNumberExceedFlavorValue()) class ConsolesExtensionTestV26(test.NoDBTestCase): def setUp(self): super(ConsolesExtensionTestV26, self).setUp() self.req = fakes.HTTPRequest.blank('') self.context = self.req.environ['nova.context'] 
self.req.api_version_request = api_version_request.APIVersionRequest( '2.6') self.instance = fake_instance.fake_instance_obj(self.context) self.stub_out('nova.compute.api.API.get', lambda *a, **kw: self.instance) self.controller = console_v21.RemoteConsolesController() def test_create_vnc_console(self): mock_handler = mock.MagicMock() mock_handler.return_value = {'url': "http://fake"} self.controller.handlers['vnc'] = mock_handler body = {'remote_console': {'protocol': 'vnc', 'type': 'novnc'}} output = self.controller.create(self.req, fakes.FAKE_UUID, body=body) self.assertEqual({'remote_console': {'protocol': 'vnc', 'type': 'novnc', 'url': 'http://fake'}}, output) mock_handler.assert_called_once_with(self.context, self.instance, 'novnc') def test_create_spice_console(self): mock_handler = mock.MagicMock() mock_handler.return_value = {'url': "http://fake"} self.controller.handlers['spice'] = mock_handler body = {'remote_console': {'protocol': 'spice', 'type': 'spice-html5'}} output = self.controller.create(self.req, fakes.FAKE_UUID, body=body) self.assertEqual({'remote_console': {'protocol': 'spice', 'type': 'spice-html5', 'url': 'http://fake'}}, output) mock_handler.assert_called_once_with(self.context, self.instance, 'spice-html5') def test_create_rdp_console(self): mock_handler = mock.MagicMock() mock_handler.return_value = {'url': "http://fake"} self.controller.handlers['rdp'] = mock_handler body = {'remote_console': {'protocol': 'rdp', 'type': 'rdp-html5'}} output = self.controller.create(self.req, fakes.FAKE_UUID, body=body) self.assertEqual({'remote_console': {'protocol': 'rdp', 'type': 'rdp-html5', 'url': 'http://fake'}}, output) mock_handler.assert_called_once_with(self.context, self.instance, 'rdp-html5') def test_create_serial_console(self): mock_handler = mock.MagicMock() mock_handler.return_value = {'url': "ws://fake"} self.controller.handlers['serial'] = mock_handler body = {'remote_console': {'protocol': 'serial', 'type': 'serial'}} output = self.controller.create(self.req, fakes.FAKE_UUID, body=body) self.assertEqual({'remote_console': {'protocol': 'serial', 'type': 'serial', 'url': 'ws://fake'}}, output) mock_handler.assert_called_once_with(self.context, self.instance, 'serial') def test_create_console_instance_not_ready(self): mock_handler = mock.MagicMock() mock_handler.side_effect = exception.InstanceNotReady( instance_id='xxx') self.controller.handlers['vnc'] = mock_handler body = {'remote_console': {'protocol': 'vnc', 'type': 'novnc'}} self.assertRaises(webob.exc.HTTPConflict, self.controller.create, self.req, fakes.FAKE_UUID, body=body) def test_create_console_unavailable(self): mock_handler = mock.MagicMock() mock_handler.side_effect = exception.ConsoleTypeUnavailable( console_type='vnc') self.controller.handlers['vnc'] = mock_handler body = {'remote_console': {'protocol': 'vnc', 'type': 'novnc'}} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, fakes.FAKE_UUID, body=body) self.assertTrue(mock_handler.called) def test_create_console_not_found(self,): mock_handler = mock.MagicMock() mock_handler.side_effect = exception.InstanceNotFound( instance_id='xxx') self.controller.handlers['vnc'] = mock_handler body = {'remote_console': {'protocol': 'vnc', 'type': 'novnc'}} self.assertRaises(webob.exc.HTTPNotFound, self.controller.create, self.req, fakes.FAKE_UUID, body=body) def test_create_console_not_implemented(self): mock_handler = mock.MagicMock() mock_handler.side_effect = NotImplementedError() self.controller.handlers['vnc'] = mock_handler 
body = {'remote_console': {'protocol': 'vnc', 'type': 'novnc'}} self.assertRaises(webob.exc.HTTPNotImplemented, self.controller.create, self.req, fakes.FAKE_UUID, body=body) def test_create_console_nport_invalid(self): mock_handler = mock.MagicMock() mock_handler.side_effect = exception.ImageSerialPortNumberInvalid( num_ports='x', property="hw_serial_port_count") self.controller.handlers['serial'] = mock_handler body = {'remote_console': {'protocol': 'serial', 'type': 'serial'}} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, fakes.FAKE_UUID, body=body) def test_create_console_nport_exceed(self): mock_handler = mock.MagicMock() mock_handler.side_effect = ( exception.ImageSerialPortNumberExceedFlavorValue()) self.controller.handlers['serial'] = mock_handler body = {'remote_console': {'protocol': 'serial', 'type': 'serial'}} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, fakes.FAKE_UUID, body=body) def test_create_console_socket_exhausted(self): mock_handler = mock.MagicMock() mock_handler.side_effect = ( exception.SocketPortRangeExhaustedException(host='127.0.0.1')) self.controller.handlers['serial'] = mock_handler body = {'remote_console': {'protocol': 'serial', 'type': 'serial'}} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, fakes.FAKE_UUID, body=body) def test_create_console_invalid_type(self): mock_handler = mock.MagicMock() mock_handler.side_effect = ( exception.ConsoleTypeInvalid(console_type='invalid_type')) self.controller.handlers['serial'] = mock_handler body = {'remote_console': {'protocol': 'serial', 'type': 'xvpvnc'}} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, fakes.FAKE_UUID, body=body) class ConsolesExtensionTestV28(ConsolesExtensionTestV26): def setUp(self): super(ConsolesExtensionTestV28, self).setUp() self.req = fakes.HTTPRequest.blank('') self.context = self.req.environ['nova.context'] self.req.api_version_request = api_version_request.APIVersionRequest( '2.8') self.controller = console_v21.RemoteConsolesController() def test_create_mks_console(self): mock_handler = mock.MagicMock() mock_handler.return_value = {'url': "http://fake"} self.controller.handlers['mks'] = mock_handler body = {'remote_console': {'protocol': 'mks', 'type': 'webmks'}} output = self.controller.create(self.req, fakes.FAKE_UUID, body=body) self.assertEqual({'remote_console': {'protocol': 'mks', 'type': 'webmks', 'url': 'http://fake'}}, output) mock_handler.assert_called_once_with(self.context, self.instance, 'webmks') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_rescue.py0000664000175000017500000002413400000000000024771 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
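# ---------------------------------------------------------------------------
# Editorial aside (illustrative sketch, not part of the original module): the
# ConsolesExtensionTestV26/V28 cases above swap entries in
# ``controller.handlers`` so each console protocol can be tested in
# isolation.  The toy dispatcher below shows the same protocol-to-handler
# lookup idea with unittest.mock; ``ConsoleDispatcher`` is a placeholder
# class with a simplified signature, not the nova RemoteConsolesController.
from unittest import mock


class ConsoleDispatcher(object):
    def __init__(self):
        self.handlers = {}

    def create(self, protocol, console_type):
        handler = self.handlers.get(protocol)
        if handler is None:
            raise NotImplementedError(protocol)
        url = handler(console_type)['url']
        return {'protocol': protocol, 'type': console_type, 'url': url}


dispatcher = ConsoleDispatcher()
dispatcher.handlers['vnc'] = mock.MagicMock(
    return_value={'url': 'http://fake'})
result = dispatcher.create('vnc', 'novnc')
assert result == {'protocol': 'vnc', 'type': 'novnc', 'url': 'http://fake'}
dispatcher.handlers['vnc'].assert_called_once_with('novnc')
# ---------------------------------------------------------------------------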
import mock import webob from oslo_utils.fixture import uuidsentinel as uuids from nova.api.openstack import api_version_request from nova.api.openstack.compute import rescue as rescue_v21 from nova import compute import nova.conf from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance CONF = nova.conf.CONF UUID = '70f6db34-de8d-4fbd-aafb-4065bdfa6114' def rescue(self, context, instance, rescue_password=None, rescue_image_ref=None, allow_bfv_rescue=False): pass def unrescue(self, context, instance): pass def fake_compute_get(*args, **kwargs): return fake_instance.fake_instance_obj(args[1], id=1, uuid=UUID, **kwargs) class RescueTestV21(test.NoDBTestCase): image_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' def setUp(self): super(RescueTestV21, self).setUp() self.stub_out("nova.compute.api.API.get", fake_compute_get) self.stub_out("nova.compute.api.API.rescue", rescue) self.stub_out("nova.compute.api.API.unrescue", unrescue) self.controller = self._set_up_controller() self.fake_req = fakes.HTTPRequest.blank('') def _set_up_controller(self): return rescue_v21.RescueController() def _allow_bfv_rescue(self): return api_version_request.is_supported(self.fake_req, '2.87') @mock.patch.object(compute.api.API, "rescue") def test_rescue_from_locked_server(self, mock_rescue): mock_rescue.side_effect = exception.InstanceIsLocked( instance_uuid=UUID) body = {"rescue": {"adminPass": "AABBCC112233"}} self.assertRaises(webob.exc.HTTPConflict, self.controller._rescue, self.fake_req, UUID, body=body) self.assertTrue(mock_rescue.called) def test_rescue_with_preset_password(self): body = {"rescue": {"adminPass": "AABBCC112233"}} resp = self.controller._rescue(self.fake_req, UUID, body=body) self.assertEqual("AABBCC112233", resp['adminPass']) def test_rescue_generates_password(self): body = dict(rescue=None) resp = self.controller._rescue(self.fake_req, UUID, body=body) self.assertEqual(CONF.password_length, len(resp['adminPass'])) @mock.patch.object(compute.api.API, "rescue") def test_rescue_of_rescued_instance(self, mock_rescue): mock_rescue.side_effect = exception.InstanceInvalidState( 'fake message') body = dict(rescue=None) self.assertRaises(webob.exc.HTTPConflict, self.controller._rescue, self.fake_req, UUID, body=body) self.assertTrue(mock_rescue.called) def test_unrescue(self): body = dict(unrescue=None) resp = self.controller._unrescue(self.fake_req, UUID, body=body) # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. 
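# Editorial note (descriptive comment, hedged): the ``wsgi_code`` attribute
# read here is, as far as this editor can tell, attached to v2.1 controller
# methods by the ``@wsgi.response(<code>)`` decorator rather than being part
# of a webob response, which is why the test inspects the bound method
# instead of ``resp.status_int`` for the v2.1 controller.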
if isinstance(self.controller, rescue_v21.RescueController): status_int = self.controller._unrescue.wsgi_code else: status_int = resp.status_int self.assertEqual(202, status_int) @mock.patch.object(compute.api.API, "unrescue") def test_unrescue_from_locked_server(self, mock_unrescue): mock_unrescue.side_effect = exception.InstanceIsLocked( instance_uuid=UUID) body = dict(unrescue=None) self.assertRaises(webob.exc.HTTPConflict, self.controller._unrescue, self.fake_req, UUID, body=body) self.assertTrue(mock_unrescue.called) @mock.patch.object(compute.api.API, "unrescue") def test_unrescue_of_active_instance(self, mock_unrescue): mock_unrescue.side_effect = exception.InstanceInvalidState( 'fake message') body = dict(unrescue=None) self.assertRaises(webob.exc.HTTPConflict, self.controller._unrescue, self.fake_req, UUID, body=body) self.assertTrue(mock_unrescue.called) @mock.patch.object(compute.api.API, "rescue") def test_rescue_raises_unrescuable(self, mock_rescue): mock_rescue.side_effect = exception.InstanceNotRescuable( 'fake message') body = dict(rescue=None) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._rescue, self.fake_req, UUID, body=body) self.assertTrue(mock_rescue.called) @mock.patch.object(compute.api.API, "rescue", side_effect=exception.UnsupportedRescueImage(image='fake')) def test_rescue_raises_unsupported_image(self, mock_rescue): body = dict(rescue=None) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._rescue, self.fake_req, UUID, body=body) self.assertTrue(mock_rescue.called) def test_rescue_with_bad_image_specified(self): body = {"rescue": {"adminPass": "ABC123", "rescue_image_ref": "img-id"}} self.assertRaises(exception.ValidationError, self.controller._rescue, self.fake_req, UUID, body=body) def test_rescue_with_imageRef_as_full_url(self): image_href = ('http://localhost/v2/fake/images/' '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6') body = {"rescue": {"adminPass": "ABC123", "rescue_image_ref": image_href}} self.assertRaises(exception.ValidationError, self.controller._rescue, self.fake_req, UUID, body=body) def test_rescue_with_imageRef_as_empty_string(self): body = {"rescue": {"adminPass": "ABC123", "rescue_image_ref": ''}} self.assertRaises(exception.ValidationError, self.controller._rescue, self.fake_req, UUID, body=body) @mock.patch('nova.compute.api.API.rescue') @mock.patch('nova.api.openstack.common.get_instance') def test_rescue_with_image_specified( self, get_instance_mock, mock_compute_api_rescue): instance = fake_instance.fake_instance_obj( self.fake_req.environ['nova.context']) get_instance_mock.return_value = instance body = {"rescue": {"adminPass": "ABC123", "rescue_image_ref": self.image_uuid}} resp_json = self.controller._rescue(self.fake_req, UUID, body=body) self.assertEqual("ABC123", resp_json['adminPass']) mock_compute_api_rescue.assert_called_with( mock.ANY, instance, rescue_password=u'ABC123', rescue_image_ref=self.image_uuid, allow_bfv_rescue=self._allow_bfv_rescue()) @mock.patch('nova.compute.api.API.rescue') @mock.patch('nova.api.openstack.common.get_instance') def test_rescue_without_image_specified( self, get_instance_mock, mock_compute_api_rescue): instance = fake_instance.fake_instance_obj( self.fake_req.environ['nova.context']) get_instance_mock.return_value = instance body = {"rescue": {"adminPass": "ABC123"}} resp_json = self.controller._rescue(self.fake_req, UUID, body=body) self.assertEqual("ABC123", resp_json['adminPass']) mock_compute_api_rescue.assert_called_with( mock.ANY, instance, rescue_password=u'ABC123', 
rescue_image_ref=None, allow_bfv_rescue=self._allow_bfv_rescue()) def test_rescue_with_none(self): body = dict(rescue=None) resp = self.controller._rescue(self.fake_req, UUID, body=body) self.assertEqual(CONF.password_length, len(resp['adminPass'])) def test_rescue_with_empty_dict(self): body = dict(rescue=dict()) resp = self.controller._rescue(self.fake_req, UUID, body=body) self.assertEqual(CONF.password_length, len(resp['adminPass'])) def test_rescue_disable_password(self): self.flags(enable_instance_password=False, group='api') body = dict(rescue=None) resp_json = self.controller._rescue(self.fake_req, UUID, body=body) self.assertNotIn('adminPass', resp_json) def test_rescue_with_invalid_property(self): body = {"rescue": {"test": "test"}} self.assertRaises(exception.ValidationError, self.controller._rescue, self.fake_req, UUID, body=body) class RescueTestV287(RescueTestV21): def setUp(self): super(RescueTestV287, self).setUp() v287_req = api_version_request.APIVersionRequest('2.87') self.fake_req.api_version_request = v287_req @mock.patch('nova.compute.api.API.rescue') @mock.patch('nova.api.openstack.common.get_instance') def test_allow_bfv_rescue(self, mock_get_instance, mock_compute_rescue): instance = fake_instance.fake_instance_obj( self.fake_req.environ['nova.context']) mock_get_instance.return_value = instance body = {"rescue": {"adminPass": "ABC123"}} self.controller._rescue(self.fake_req, uuids.instance, body=body) # Assert that allow_bfv_rescue is True for this 2.87 request mock_get_instance.assert_called_once_with( mock.ANY, mock.ANY, uuids.instance) mock_compute_rescue.assert_called_with( mock.ANY, instance, rescue_image_ref=None, rescue_password=u'ABC123', allow_bfv_rescue=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_security_groups.py0000664000175000017500000020717000000000000026754 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2012 Justin Santa Barbara # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
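# ---------------------------------------------------------------------------
# Editorial aside (illustrative sketch, not part of the original module): the
# RescueTestV287 cases above verify that ``allow_bfv_rescue`` is only passed
# as True for requests at microversion 2.87 or later.  The snippet below
# sketches that kind of version gate with a plain tuple comparison;
# ``parse_version`` and ``rescue_kwargs`` are placeholders, not nova helpers.
def parse_version(version):
    return tuple(int(part) for part in version.split('.'))


def rescue_kwargs(request_version, password, image_ref=None):
    kwargs = {'rescue_password': password, 'rescue_image_ref': image_ref}
    # Boot-from-volume rescue is only allowed once the client opts in to
    # the newer microversion.
    kwargs['allow_bfv_rescue'] = parse_version(request_version) >= (2, 87)
    return kwargs


assert rescue_kwargs('2.87', 'ABC123')['allow_bfv_rescue'] is True
assert rescue_kwargs('2.1', 'ABC123')['allow_bfv_rescue'] is False
# ---------------------------------------------------------------------------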
import mock from neutronclient.common import exceptions as n_exc from oslo_config import cfg from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import uuidutils import six import webob from nova.api.openstack.compute import security_groups as secgroups_v21 from nova import context as context_maker import nova.db.api from nova import exception from nova.network import model from nova.network import neutron as neutron_api from nova.network import security_group_api from nova import objects from nova.objects import instance as instance_obj from nova import test from nova.tests.unit.api.openstack import fakes CONF = cfg.CONF FAKE_UUID1 = 'a47ae74e-ab08-447f-8eee-ffd43fc46c16' FAKE_UUID2 = 'c6e6430a-6563-4efa-9542-5e93c9e97d18' UUID_SERVER = uuids.server class AttrDict(dict): def __getattr__(self, k): return self[k] def security_group_request_template(**kwargs): sg = kwargs.copy() sg.setdefault('name', 'test') sg.setdefault('description', 'test-description') return sg def security_group_template(**kwargs): sg = kwargs.copy() sg.setdefault('tenant_id', '123') sg.setdefault('name', 'test') sg.setdefault('description', 'test-description') return sg def security_group_db(security_group, id=None): attrs = security_group.copy() if 'tenant_id' in attrs: attrs['project_id'] = attrs.pop('tenant_id') if id is not None: attrs['id'] = id attrs.setdefault('rules', []) attrs.setdefault('instances', []) return AttrDict(attrs) def security_group_rule_template(**kwargs): rule = kwargs.copy() rule.setdefault('ip_protocol', 'tcp') rule.setdefault('from_port', 22) rule.setdefault('to_port', 22) rule.setdefault('parent_group_id', 2) return rule def security_group_rule_db(rule, id=None): attrs = rule.copy() if 'ip_protocol' in attrs: attrs['protocol'] = attrs.pop('ip_protocol') return AttrDict(attrs) def return_security_group_by_name(context, project_id, group_name, columns_to_join=None): return {'id': 1, 'name': group_name, "instances": [{'id': 1, 'uuid': UUID_SERVER}]} class MockClient(object): # Needs to be global to survive multiple calls to get_client. _fake_security_groups = {} _fake_ports = {} _fake_networks = {} _fake_subnets = {} _fake_security_group_rules = {} def __init__(self): # add default security group if not len(self._fake_security_groups): ret = {'name': 'default', 'description': 'default', 'tenant_id': fakes.FAKE_PROJECT_ID, 'security_group_rules': [], 'id': uuidutils.generate_uuid()} self._fake_security_groups[ret['id']] = ret def _reset(self): self._fake_security_groups.clear() self._fake_ports.clear() self._fake_networks.clear() self._fake_subnets.clear() self._fake_security_group_rules.clear() def create_security_group(self, body=None): s = body.get('security_group') if not isinstance(s.get('name', ''), six.string_types): msg = ('BadRequest: Invalid input for name. Reason: ' 'None is not a valid string.') raise n_exc.BadRequest(message=msg) if not isinstance(s.get('description.', ''), six.string_types): msg = ('BadRequest: Invalid input for description. 
Reason: ' 'None is not a valid string.') raise n_exc.BadRequest(message=msg) if len(s.get('name')) > 255 or len(s.get('description')) > 255: msg = 'Security Group name great than 255' raise n_exc.NeutronClientException(message=msg, status_code=401) ret = {'name': s.get('name'), 'description': s.get('description'), 'tenant_id': fakes.FAKE_PROJECT_ID, 'security_group_rules': [], 'id': uuidutils.generate_uuid()} self._fake_security_groups[ret['id']] = ret return {'security_group': ret} def create_network(self, body): n = body.get('network') ret = {'status': 'ACTIVE', 'subnets': [], 'name': n.get('name'), 'admin_state_up': n.get('admin_state_up', True), 'tenant_id': fakes.FAKE_PROJECT_ID, 'id': uuidutils.generate_uuid()} if 'port_security_enabled' in n: ret['port_security_enabled'] = n['port_security_enabled'] self._fake_networks[ret['id']] = ret return {'network': ret} def create_subnet(self, body): s = body.get('subnet') try: net = self._fake_networks[s.get('network_id')] except KeyError: msg = 'Network %s not found' % s.get('network_id') raise n_exc.NeutronClientException(message=msg, status_code=404) ret = {'name': s.get('name'), 'network_id': s.get('network_id'), 'tenant_id': fakes.FAKE_PROJECT_ID, 'cidr': s.get('cidr'), 'id': uuidutils.generate_uuid(), 'gateway_ip': '10.0.0.1'} net['subnets'].append(ret['id']) self._fake_networks[net['id']] = net self._fake_subnets[ret['id']] = ret return {'subnet': ret} def create_port(self, body): p = body.get('port') ret = {'status': 'ACTIVE', 'id': uuidutils.generate_uuid(), 'mac_address': p.get('mac_address', 'fa:16:3e:b8:f5:fb'), 'device_id': p.get('device_id', uuidutils.generate_uuid()), 'admin_state_up': p.get('admin_state_up', True), 'security_groups': p.get('security_groups', []), 'network_id': p.get('network_id'), 'ip_allocation': p.get('ip_allocation'), 'binding:vnic_type': p.get('binding:vnic_type') or model.VNIC_TYPE_NORMAL} network = self._fake_networks[p['network_id']] if 'port_security_enabled' in p: ret['port_security_enabled'] = p['port_security_enabled'] elif 'port_security_enabled' in network: ret['port_security_enabled'] = network['port_security_enabled'] port_security = ret.get('port_security_enabled', True) # port_security must be True if security groups are present if not port_security and ret['security_groups']: raise exception.SecurityGroupCannotBeApplied() if network['subnets'] and p.get('ip_allocation') != 'deferred': ret['fixed_ips'] = [{'subnet_id': network['subnets'][0], 'ip_address': '10.0.0.1'}] if not ret['security_groups'] and (port_security is None or port_security is True): for security_group in self._fake_security_groups.values(): if security_group['name'] == 'default': ret['security_groups'] = [security_group['id']] break self._fake_ports[ret['id']] = ret return {'port': ret} def create_security_group_rule(self, body): # does not handle bulk case so just picks rule[0] r = body.get('security_group_rules')[0] fields = ['direction', 'protocol', 'port_range_min', 'port_range_max', 'ethertype', 'remote_ip_prefix', 'tenant_id', 'security_group_id', 'remote_group_id'] ret = {} for field in fields: ret[field] = r.get(field) ret['id'] = uuidutils.generate_uuid() self._fake_security_group_rules[ret['id']] = ret return {'security_group_rules': [ret]} def show_security_group(self, security_group, **_params): try: sg = self._fake_security_groups[security_group] except KeyError: msg = 'Security Group %s not found' % security_group raise n_exc.NeutronClientException(message=msg, status_code=404) for security_group_rule in 
self._fake_security_group_rules.values(): if security_group_rule['security_group_id'] == sg['id']: sg['security_group_rules'].append(security_group_rule) return {'security_group': sg} def show_security_group_rule(self, security_group_rule, **_params): try: return {'security_group_rule': self._fake_security_group_rules[security_group_rule]} except KeyError: msg = 'Security Group rule %s not found' % security_group_rule raise n_exc.NeutronClientException(message=msg, status_code=404) def show_network(self, network, **_params): try: return {'network': self._fake_networks[network]} except KeyError: msg = 'Network %s not found' % network raise n_exc.NeutronClientException(message=msg, status_code=404) def show_port(self, port, **_params): try: return {'port': self._fake_ports[port]} except KeyError: msg = 'Port %s not found' % port raise n_exc.NeutronClientException(message=msg, status_code=404) def show_subnet(self, subnet, **_params): try: return {'subnet': self._fake_subnets[subnet]} except KeyError: msg = 'Port %s not found' % subnet raise n_exc.NeutronClientException(message=msg, status_code=404) def list_security_groups(self, **_params): ret = [] for security_group in self._fake_security_groups.values(): names = _params.get('name') if names: if not isinstance(names, list): names = [names] for name in names: if security_group.get('name') == name: ret.append(security_group) ids = _params.get('id') if ids: if not isinstance(ids, list): ids = [ids] for id in ids: if security_group.get('id') == id: ret.append(security_group) elif not (names or ids): ret.append(security_group) return {'security_groups': ret} def list_networks(self, **_params): # network/api.py _get_available_networks calls this assuming # search_opts filter "shared" is implemented and not ignored shared = _params.get("shared", None) if shared: return {'networks': []} else: return {'networks': [network for network in self._fake_networks.values()]} def list_ports(self, **_params): ret = [] device_id = _params.get('device_id') for port in self._fake_ports.values(): if device_id: if port['device_id'] in device_id: ret.append(port) else: ret.append(port) return {'ports': ret} def list_subnets(self, **_params): return {'subnets': [subnet for subnet in self._fake_subnets.values()]} def list_floatingips(self, **_params): return {'floatingips': []} def delete_security_group(self, security_group): self.show_security_group(security_group) ports = self.list_ports() for port in ports.get('ports'): for sg_port in port['security_groups']: if sg_port == security_group: msg = ('Unable to delete Security group %s in use' % security_group) raise n_exc.NeutronClientException(message=msg, status_code=409) del self._fake_security_groups[security_group] def delete_security_group_rule(self, security_group_rule): self.show_security_group_rule(security_group_rule) del self._fake_security_group_rules[security_group_rule] def delete_network(self, network): self.show_network(network) self._check_ports_on_network(network) for subnet in self._fake_subnets.values(): if subnet['network_id'] == network: del self._fake_subnets[subnet['id']] del self._fake_networks[network] def delete_port(self, port): self.show_port(port) del self._fake_ports[port] def update_port(self, port, body=None): self.show_port(port) self._fake_ports[port].update(body['port']) return {'port': self._fake_ports[port]} def list_extensions(self, **_parms): return {'extensions': []} def _check_ports_on_network(self, network): ports = self.list_ports() for port in ports: if port['network_id'] 
== network: msg = ('Unable to complete operation on network %s. There is ' 'one or more ports still in use on the network' % network) raise n_exc.NeutronClientException(message=msg, status_code=409) def find_resource(self, resource, name_or_id, project_id=None, cmd_resource=None, parent_id=None, fields=None): if resource == 'security_group': # lookup first by unique id sg = self._fake_security_groups.get(name_or_id) if sg: return sg # lookup by name, raise an exception on duplicates res = None for sg in self._fake_security_groups.values(): if sg['name'] == name_or_id: if res: raise n_exc.NeutronClientNoUniqueMatch( resource=resource, name=name_or_id) res = sg if res: return res raise n_exc.NotFound("Fake %s '%s' not found." % (resource, name_or_id)) def get_client(context=None, admin=False): return MockClient() class TestSecurityGroupsV21(test.TestCase): secgrp_ctl_cls = secgroups_v21.SecurityGroupController server_secgrp_ctl_cls = secgroups_v21.ServerSecurityGroupController secgrp_act_ctl_cls = secgroups_v21.SecurityGroupActionController def setUp(self): super(TestSecurityGroupsV21, self).setUp() self.controller = self.secgrp_ctl_cls() self.server_controller = self.server_secgrp_ctl_cls() self.manager = self.secgrp_act_ctl_cls() # This needs to be done here to set fake_id because the derived # class needs to be called first if it wants to set # 'security_group_api' and this setUp method needs to be called. self.fake_id = '11111111-1111-1111-1111-111111111111' self.req = fakes.HTTPRequest.blank('') project_id = self.req.environ['nova.context'].project_id self.admin_req = fakes.HTTPRequest.blank('', use_admin_context=True) self.stub_out('nova.compute.api.API.get', fakes.fake_compute_get( **{'power_state': 0x01, 'host': "localhost", 'uuid': UUID_SERVER, 'name': 'asdf', 'project_id': project_id})) self.original_client = neutron_api.get_client neutron_api.get_client = get_client def tearDown(self): neutron_api.get_client = self.original_client get_client()._reset() super(TestSecurityGroupsV21, self).tearDown() def _create_sg_template(self, **kwargs): sg = security_group_request_template(**kwargs) return self.controller.create(self.req, body={'security_group': sg}) def _create_network(self): body = {'network': {'name': 'net1'}} neutron = get_client() net = neutron.create_network(body) body = {'subnet': {'network_id': net['network']['id'], 'cidr': '10.0.0.0/24'}} neutron.create_subnet(body) return net def _create_port(self, **kwargs): body = {'port': {'binding:vnic_type': model.VNIC_TYPE_NORMAL}} fields = ['security_groups', 'device_id', 'network_id', 'port_security_enabled', 'ip_allocation'] for field in fields: if field in kwargs: body['port'][field] = kwargs[field] neutron = get_client() return neutron.create_port(body) def _create_security_group(self, **kwargs): body = {'security_group': {}} fields = ['name', 'description'] for field in fields: if field in kwargs: body['security_group'][field] = kwargs[field] neutron = get_client() return neutron.create_security_group(body) def _assert_security_groups_in_use(self, project_id, user_id, in_use): context = context_maker.get_admin_context() count = objects.Quotas.count_as_dict(context, 'security_groups', project_id, user_id) self.assertEqual(in_use, count['project']['security_groups']) self.assertEqual(in_use, count['user']['security_groups']) def test_create_security_group(self): sg = security_group_request_template() res_dict = self.controller.create(self.req, {'security_group': sg}) self.assertEqual(res_dict['security_group']['name'], 'test') 
self.assertEqual(res_dict['security_group']['description'], 'test-description') def test_create_security_group_with_no_name(self): sg = security_group_request_template() del sg['name'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group': sg}) def test_create_security_group_with_no_body(self): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, None) def test_create_security_group_with_no_security_group(self): body = {'no-securityGroup': None} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body) def test_create_security_group_above_255_characters_name(self): sg = security_group_request_template(name='1234567890' * 26) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group': sg}) def test_create_security_group_above_255_characters_description(self): sg = security_group_request_template(description='1234567890' * 26) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group': sg}) def test_get_security_group_list(self): self._create_sg_template().get('security_group') req = fakes.HTTPRequest.blank( '/v2/%s/os-security-groups' % fakes.FAKE_PROJECT_ID) list_dict = self.controller.index(req) self.assertEqual(len(list_dict['security_groups']), 2) def test_get_security_group_list_offset_and_limit(self): path = ('/v2/%s/os-security-groups?offset=1&limit=1' % fakes.FAKE_PROJECT_ID) self._create_sg_template().get('security_group') req = fakes.HTTPRequest.blank(path) list_dict = self.controller.index(req) self.assertEqual(len(list_dict['security_groups']), 1) def test_get_security_group_list_missing_group_id_rule(self): groups = [] rule1 = security_group_rule_template(cidr='10.2.3.124/24', parent_group_id=1, group_id={}, id=88, protocol='TCP') rule2 = security_group_rule_template(cidr='10.2.3.125/24', parent_group_id=1, id=99, protocol=88, group_id='HAS_BEEN_DELETED') sg = security_group_template(id=1, name='test', description='test-desc', rules=[rule1, rule2]) groups.append(sg) # An expected rule here needs to be created as the api returns # different attributes on the rule for a response than what was # passed in. For example: # "cidr": "0.0.0.0/0" ->"ip_range": {"cidr": "0.0.0.0/0"} expected_rule = security_group_rule_template( ip_range={'cidr': '10.2.3.124/24'}, parent_group_id=1, group={}, id=88, ip_protocol='TCP') expected = security_group_template(id=1, name='test', description='test-desc', rules=[expected_rule]) expected = {'security_groups': [expected]} with mock.patch( 'nova.network.security_group_api.list', return_value=[ security_group_db( secgroup) for secgroup in groups]) as mock_list: res_dict = self.controller.index(self.req) self.assertEqual(res_dict, expected) mock_list.assert_called_once_with(self.req.environ['nova.context'], project=fakes.FAKE_PROJECT_ID, search_opts={}) def test_get_security_group_by_instance(self): sg = self._create_sg_template().get('security_group') net = self._create_network() self._create_port( network_id=net['network']['id'], security_groups=[sg['id']], device_id=UUID_SERVER) expected = [{'rules': [], 'tenant_id': fakes.FAKE_PROJECT_ID, 'id': sg['id'], 'name': 'test', 'description': 'test-description'}] req = fakes.HTTPRequest.blank( '/v2/%s/servers/%s/os-security-groups' % (fakes.FAKE_PROJECT_ID, UUID_SERVER)) res_dict = self.server_controller.index( req, UUID_SERVER)['security_groups'] self.assertEqual(expected, res_dict) @mock.patch('nova.network.security_group_api.' 
'get_instances_security_groups_bindings') def test_get_security_group_empty_for_instance(self, neutron_sg_bind_mock): servers = [{'id': FAKE_UUID1}] neutron_sg_bind_mock.return_value = {} ctx = context_maker.get_admin_context() sgs = security_group_api.get_instance_security_groups(ctx, instance_obj.Instance(uuid=FAKE_UUID1)) neutron_sg_bind_mock.assert_called_once_with(ctx, servers, False) self.assertEqual([], sgs) def test_get_security_group_by_instance_non_existing(self): with mock.patch('nova.compute.api.API.get', side_effect=exception.InstanceNotFound( instance_id='1')): self.assertRaises(webob.exc.HTTPNotFound, self.server_controller.index, self.req, '1') def test_get_security_group_by_id(self): sg = self._create_sg_template().get('security_group') req = fakes.HTTPRequest.blank( '/v2/%s/os-security-groups/%s' % (fakes.FAKE_PROJECT_ID, sg['id'])) res_dict = self.controller.show(req, sg['id']) expected = {'security_group': sg} self.assertEqual(res_dict, expected) def test_get_security_group_by_invalid_id(self): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, self.req, 'invalid') def test_get_security_group_by_non_existing_id(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, self.fake_id) def test_update_default_security_group_fail(self): sg = security_group_template() self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.req, '1', {'security_group': sg}) def test_delete_security_group_by_id(self): sg = self._create_sg_template().get('security_group') req = fakes.HTTPRequest.blank( '/v2/%s/os-security-groups/%s' % (fakes.FAKE_PROJECT_ID, sg['id'])) self.controller.delete(req, sg['id']) def test_delete_security_group_by_admin(self): sg = self._create_sg_template().get('security_group') req = fakes.HTTPRequest.blank( '/v2/%s/os-security-groups/%s' % (fakes.FAKE_PROJECT_ID, sg['id']), use_admin_context=True) self.controller.delete(req, sg['id']) def test_delete_security_group_by_invalid_id(self): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, self.req, 'invalid') def test_delete_security_group_by_non_existing_id(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, self.fake_id) @mock.patch('nova.compute.utils.refresh_info_cache_for_instance') def test_delete_security_group_in_use(self, refresh_info_cache_mock): sg = self._create_sg_template().get('security_group') self._create_network() db_inst = fakes.stub_instance(id=1, nw_cache=[], security_groups=[]) _context = context_maker.get_admin_context() instance = instance_obj.Instance._from_db_object( _context, instance_obj.Instance(), db_inst, expected_attrs=instance_obj.INSTANCE_DEFAULT_FIELDS) neutron = neutron_api.API() with mock.patch.object(nova.db.api, 'instance_get_by_uuid', return_value=db_inst): neutron.allocate_for_instance(_context, instance, False, None, security_groups=[sg['id']]) req = fakes.HTTPRequest.blank( '/v2/%s/os-security-groups/%s' % (fakes.FAKE_PROJECT_ID, sg['id'])) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, req, sg['id']) def _test_list_with_invalid_filter( self, url, expected_exception=exception.ValidationError): prefix = '/os-security-groups' req = fakes.HTTPRequest.blank(prefix + url) self.assertRaises(expected_exception, self.controller.index, req) def test_list_with_invalid_non_int_limit(self): self._test_list_with_invalid_filter('?limit=-9') def test_list_with_invalid_string_limit(self): self._test_list_with_invalid_filter('?limit=abc') def 
test_list_duplicate_query_with_invalid_string_limit(self): self._test_list_with_invalid_filter( '?limit=1&limit=abc') def test_list_with_invalid_non_int_offset(self): self._test_list_with_invalid_filter('?offset=-9') def test_list_with_invalid_string_offset(self): self._test_list_with_invalid_filter('?offset=abc') def test_list_duplicate_query_with_invalid_string_offset(self): self._test_list_with_invalid_filter( '?offset=1&offset=abc') def test_list_duplicate_query_parameters_validation(self): params = { 'limit': 1, 'offset': 1, 'all_tenants': 1 } for param, value in params.items(): req = fakes.HTTPRequest.blank( '/os-security-groups' + '?%s=%s&%s=%s' % (param, value, param, value)) self.controller.index(req) def test_list_with_additional_filter(self): req = fakes.HTTPRequest.blank( '/os-security-groups?limit=1&offset=1&additional=something') self.controller.index(req) def test_list_all_tenants_filter_as_string(self): req = fakes.HTTPRequest.blank( '/os-security-groups?all_tenants=abc') self.controller.index(req) def test_list_all_tenants_filter_as_positive_int(self): req = fakes.HTTPRequest.blank( '/os-security-groups?all_tenants=1') self.controller.index(req) def test_list_all_tenants_filter_as_negative_int(self): req = fakes.HTTPRequest.blank( '/os-security-groups?all_tenants=-1') self.controller.index(req) def test_associate_by_non_existing_security_group_name(self): body = dict(addSecurityGroup=dict(name='non-existing')) self.assertRaises(webob.exc.HTTPNotFound, self.manager._addSecurityGroup, self.req, '1', body) def test_associate_without_body(self): body = dict(addSecurityGroup=None) self.assertRaises(webob.exc.HTTPBadRequest, self.manager._addSecurityGroup, self.req, '1', body) def test_associate_no_security_group_name(self): body = dict(addSecurityGroup=dict()) self.assertRaises(webob.exc.HTTPBadRequest, self.manager._addSecurityGroup, self.req, '1', body) def test_associate_security_group_name_with_whitespaces(self): body = dict(addSecurityGroup=dict(name=" ")) self.assertRaises(webob.exc.HTTPBadRequest, self.manager._addSecurityGroup, self.req, '1', body) def test_associate_non_existing_instance(self): body = dict(addSecurityGroup=dict(name="test")) with mock.patch('nova.compute.api.API.get', side_effect=exception.InstanceNotFound( instance_id='1')): self.assertRaises(webob.exc.HTTPNotFound, self.manager._addSecurityGroup, self.req, '1', body) def test_associate_duplicate_names(self): sg1 = self._create_security_group(name='sg1', description='sg1')['security_group'] self._create_security_group(name='sg1', description='sg1')['security_group'] net = self._create_network() self._create_port( network_id=net['network']['id'], security_groups=[sg1['id']], device_id=UUID_SERVER) body = dict(addSecurityGroup=dict(name="sg1")) req = fakes.HTTPRequest.blank( '/v2/%s/servers/%s/action' % (fakes.FAKE_PROJECT_ID, UUID_SERVER)) self.assertRaises(webob.exc.HTTPConflict, self.manager._addSecurityGroup, req, UUID_SERVER, body) def test_associate_port_security_enabled_true(self): sg = self._create_sg_template().get('security_group') net = self._create_network() self._create_port( network_id=net['network']['id'], security_groups=[sg['id']], port_security_enabled=True, device_id=UUID_SERVER) body = dict(addSecurityGroup=dict(name="test")) req = fakes.HTTPRequest.blank( '/v2/%s/servers/%s/action' % (fakes.FAKE_PROJECT_ID, UUID_SERVER)) self.manager._addSecurityGroup(req, UUID_SERVER, body) def test_associate_port_security_enabled_false(self): self._create_sg_template().get('security_group') net = 
self._create_network() self._create_port( network_id=net['network']['id'], port_security_enabled=False, device_id=UUID_SERVER) body = dict(addSecurityGroup=dict(name="test")) req = fakes.HTTPRequest.blank( '/v2/%s/servers/%s/action' % (fakes.FAKE_PROJECT_ID, UUID_SERVER)) self.assertRaises(webob.exc.HTTPBadRequest, self.manager._addSecurityGroup, req, UUID_SERVER, body) def test_associate_deferred_ip_port(self): sg = self._create_sg_template().get('security_group') net = self._create_network() self._create_port( network_id=net['network']['id'], security_groups=[sg['id']], port_security_enabled=True, ip_allocation='deferred', device_id=UUID_SERVER) body = dict(addSecurityGroup=dict(name="test")) req = fakes.HTTPRequest.blank( '/v2/%s/servers/%s/action' % (fakes.FAKE_PROJECT_ID, UUID_SERVER)) self.manager._addSecurityGroup(req, UUID_SERVER, body) def test_associate(self): sg = self._create_sg_template().get('security_group') net = self._create_network() self._create_port( network_id=net['network']['id'], security_groups=[sg['id']], device_id=UUID_SERVER) body = dict(addSecurityGroup=dict(name="test")) req = fakes.HTTPRequest.blank( '/v2/%s/servers/%s/action' % (fakes.FAKE_PROJECT_ID, UUID_SERVER)) self.manager._addSecurityGroup(req, UUID_SERVER, body) def test_disassociate_by_non_existing_security_group_name(self): body = dict(removeSecurityGroup=dict(name='non-existing')) req = fakes.HTTPRequest.blank( '/v2/%s/servers/%s/action' % (fakes.FAKE_PROJECT_ID, UUID_SERVER)) self.assertRaises(webob.exc.HTTPNotFound, self.manager._removeSecurityGroup, req, UUID_SERVER, body) def test_disassociate_without_body(self): body = dict(removeSecurityGroup=None) self.assertRaises(webob.exc.HTTPBadRequest, self.manager._removeSecurityGroup, self.req, '1', body) def test_disassociate_no_security_group_name(self): body = dict(removeSecurityGroup=dict()) self.assertRaises(webob.exc.HTTPBadRequest, self.manager._removeSecurityGroup, self.req, '1', body) def test_disassociate_security_group_name_with_whitespaces(self): body = dict(removeSecurityGroup=dict(name=" ")) self.assertRaises(webob.exc.HTTPBadRequest, self.manager._removeSecurityGroup, self.req, '1', body) def test_disassociate_non_existing_instance(self): body = dict(removeSecurityGroup=dict(name="test")) with mock.patch('nova.compute.api.API.get', side_effect=exception.InstanceNotFound( instance_id='1')): self.assertRaises(webob.exc.HTTPNotFound, self.manager._removeSecurityGroup, self.req, '1', body) def test_disassociate(self): sg = self._create_sg_template().get('security_group') net = self._create_network() self._create_port( network_id=net['network']['id'], security_groups=[sg['id']], device_id=UUID_SERVER) body = dict(removeSecurityGroup=dict(name="test")) req = fakes.HTTPRequest.blank( '/v2/%s/servers/%s/action' % (fakes.FAKE_PROJECT_ID, UUID_SERVER)) self.manager._removeSecurityGroup(req, UUID_SERVER, body) def test_get_instances_security_groups_bindings(self): servers = [{'id': FAKE_UUID1}, {'id': FAKE_UUID2}] sg1 = self._create_sg_template(name='test1').get('security_group') sg2 = self._create_sg_template(name='test2').get('security_group') # test name='' is replaced with id sg3 = self._create_sg_template(name='').get('security_group') net = self._create_network() self._create_port( network_id=net['network']['id'], security_groups=[sg1['id'], sg2['id']], device_id=FAKE_UUID1) self._create_port( network_id=net['network']['id'], security_groups=[sg2['id'], sg3['id']], device_id=FAKE_UUID2) expected = {FAKE_UUID1: [{'name': sg1['name']}, {'name': 
sg2['name']}], FAKE_UUID2: [{'name': sg2['name']}, {'name': sg3['id']}]} bindings = security_group_api.get_instances_security_groups_bindings( context_maker.get_admin_context(), servers) self.assertEqual(bindings, expected) def test_get_instance_security_groups(self): sg1 = self._create_sg_template(name='test1').get('security_group') sg2 = self._create_sg_template(name='test2').get('security_group') # test name='' is replaced with id sg3 = self._create_sg_template(name='').get('security_group') net = self._create_network() self._create_port( network_id=net['network']['id'], security_groups=[sg1['id'], sg2['id'], sg3['id']], device_id=FAKE_UUID1) expected = [{'name': sg1['name']}, {'name': sg2['name']}, {'name': sg3['id']}] sgs = security_group_api.get_instance_security_groups( context_maker.get_admin_context(), instance_obj.Instance(uuid=FAKE_UUID1)) self.assertEqual(sgs, expected) def test_create_port_with_sg_and_port_security_enabled_true(self): sg1 = self._create_sg_template(name='test1').get('security_group') net = self._create_network() self._create_port( network_id=net['network']['id'], security_groups=[sg1['id']], port_security_enabled=True, device_id=FAKE_UUID1) sgs = security_group_api.get_instance_security_groups( context_maker.get_admin_context(), instance_obj.Instance(uuid=FAKE_UUID1)) self.assertEqual(sgs, [{'name': 'test1'}]) def test_create_port_with_sg_and_port_security_enabled_false(self): sg1 = self._create_sg_template(name='test1').get('security_group') net = self._create_network() self.assertRaises(exception.SecurityGroupCannotBeApplied, self._create_port, network_id=net['network']['id'], security_groups=[sg1['id']], port_security_enabled=False, device_id=FAKE_UUID1) class TestSecurityGroupRulesV21(test.TestCase): secgrp_ctl_cls = secgroups_v21.SecurityGroupRulesController def setUp(self): super(TestSecurityGroupRulesV21, self).setUp() self.controller = self.secgrp_ctl_cls() self.controller_sg = secgroups_v21.SecurityGroupController() id1 = '11111111-1111-1111-1111-111111111111' id2 = '22222222-2222-2222-2222-222222222222' self.invalid_id = '33333333-3333-3333-3333-333333333333' self.sg1 = security_group_template( id=id1, security_group_rules=[]) self.sg2 = security_group_template( id=id2, security_group_rules=[], name='authorize_revoke', description='authorize-revoke testing') self.parent_security_group = security_group_db(self.sg2) self.req = fakes.HTTPRequest.blank('') self.original_client = neutron_api.get_client neutron_api.get_client = get_client neutron = get_client() neutron._fake_security_groups[id1] = self.sg1 neutron._fake_security_groups[id2] = self.sg2 def tearDown(self): neutron_api.get_client = self.original_client get_client()._reset() super(TestSecurityGroupRulesV21, self).tearDown() def test_create_by_cidr(self): rule = security_group_rule_template(cidr='10.2.3.124/24', parent_group_id=self.sg2['id']) res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] self.assertNotEqual(security_group_rule['id'], 0) self.assertEqual(security_group_rule['parent_group_id'], self.sg2['id']) self.assertEqual(security_group_rule['ip_range']['cidr'], "10.2.3.124/24") def test_create_by_group_id(self): rule = security_group_rule_template(group_id=self.sg1['id'], parent_group_id=self.sg2['id']) res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] self.assertNotEqual(security_group_rule['id'], 0) 
self.assertEqual(security_group_rule['parent_group_id'], self.sg2['id']) def test_create_by_same_group_id(self): rule1 = security_group_rule_template(group_id=self.sg1['id'], from_port=80, to_port=80, parent_group_id=self.sg2['id']) self.parent_security_group['rules'] = [security_group_rule_db(rule1)] rule2 = security_group_rule_template(group_id=self.sg1['id'], from_port=81, to_port=81, parent_group_id=self.sg2['id']) res_dict = self.controller.create(self.req, {'security_group_rule': rule2}) security_group_rule = res_dict['security_group_rule'] self.assertNotEqual(security_group_rule['id'], 0) self.assertEqual(security_group_rule['parent_group_id'], self.sg2['id']) self.assertEqual(security_group_rule['from_port'], 81) self.assertEqual(security_group_rule['to_port'], 81) def test_create_none_value_from_to_port(self): rule = {'parent_group_id': self.sg1['id'], 'group_id': self.sg1['id']} res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] self.assertIsNone(security_group_rule['from_port']) self.assertIsNone(security_group_rule['to_port']) self.assertEqual(security_group_rule['group']['name'], 'test') self.assertEqual(security_group_rule['parent_group_id'], self.sg1['id']) def test_create_none_value_from_to_port_icmp(self): rule = {'parent_group_id': self.sg1['id'], 'group_id': self.sg1['id'], 'ip_protocol': 'ICMP'} res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] self.assertEqual(security_group_rule['ip_protocol'], 'ICMP') self.assertEqual(security_group_rule['from_port'], -1) self.assertEqual(security_group_rule['to_port'], -1) self.assertEqual(security_group_rule['group']['name'], 'test') self.assertEqual(security_group_rule['parent_group_id'], self.sg1['id']) def test_create_none_value_from_to_port_tcp(self): rule = {'parent_group_id': self.sg1['id'], 'group_id': self.sg1['id'], 'ip_protocol': 'TCP'} res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] self.assertEqual(security_group_rule['ip_protocol'], 'TCP') self.assertEqual(security_group_rule['from_port'], 1) self.assertEqual(security_group_rule['to_port'], 65535) self.assertEqual(security_group_rule['group']['name'], 'test') self.assertEqual(security_group_rule['parent_group_id'], self.sg1['id']) def test_create_by_invalid_cidr_json(self): rule = security_group_rule_template( ip_protocol="tcp", from_port=22, to_port=22, parent_group_id=self.sg2['id'], cidr="10.2.3.124/2433") self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_by_invalid_tcp_port_json(self): rule = security_group_rule_template( ip_protocol="tcp", from_port=75534, to_port=22, parent_group_id=self.sg2['id'], cidr="10.2.3.124/24") self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_by_invalid_icmp_port_json(self): rule = security_group_rule_template( ip_protocol="icmp", from_port=1, to_port=256, parent_group_id=self.sg2['id'], cidr="10.2.3.124/24") self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_add_existing_rules_by_cidr(self): sg = security_group_template() req = fakes.HTTPRequest.blank( '/v2/%s/os-security-groups' % fakes.FAKE_PROJECT_ID) self.controller_sg.create(req, {'security_group': sg}) rule = 
security_group_rule_template( cidr='15.0.0.0/8', parent_group_id=self.sg2['id']) req = fakes.HTTPRequest.blank( '/v2/%s/os-security-group-rules' % fakes.FAKE_PROJECT_ID) self.controller.create(req, {'security_group_rule': rule}) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, {'security_group_rule': rule}) def test_create_add_existing_rules_by_group_id(self): sg = security_group_template() req = fakes.HTTPRequest.blank( '/v2/%s/os-security-groups' % fakes.FAKE_PROJECT_ID) self.controller_sg.create(req, {'security_group': sg}) rule = security_group_rule_template( group=self.sg1['id'], parent_group_id=self.sg2['id']) req = fakes.HTTPRequest.blank( '/v2/%s/os-security-group-rules' % fakes.FAKE_PROJECT_ID) self.controller.create(req, {'security_group_rule': rule}) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, {'security_group_rule': rule}) def test_create_with_no_body(self): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, None) def test_create_with_no_security_group_rule_in_body(self): rules = {'test': 'test'} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, rules) def test_create_with_invalid_parent_group_id(self): rule = security_group_rule_template(parent_group_id='invalid') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_non_existing_parent_group_id(self): rule = security_group_rule_template(group_id=None, parent_group_id=self.invalid_id) self.assertRaises(webob.exc.HTTPNotFound, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_non_existing_group_id(self): rule = security_group_rule_template(group_id='invalid', parent_group_id=self.sg2['id']) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_invalid_protocol(self): rule = security_group_rule_template(ip_protocol='invalid-protocol', cidr='10.2.2.0/24', parent_group_id=self.sg2['id']) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_no_protocol(self): rule = security_group_rule_template(cidr='10.2.2.0/24', parent_group_id=self.sg2['id']) del rule['ip_protocol'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_invalid_from_port(self): rule = security_group_rule_template(from_port='666666', cidr='10.2.2.0/24', parent_group_id=self.sg2['id']) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_invalid_to_port(self): rule = security_group_rule_template(to_port='666666', cidr='10.2.2.0/24', parent_group_id=self.sg2['id']) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_non_numerical_from_port(self): rule = security_group_rule_template(from_port='invalid', cidr='10.2.2.0/24', parent_group_id=self.sg2['id']) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_non_numerical_to_port(self): rule = security_group_rule_template(to_port='invalid', cidr='10.2.2.0/24', parent_group_id=self.sg2['id']) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_no_from_port(self): rule = 
security_group_rule_template(cidr='10.2.2.0/24', parent_group_id=self.sg2['id']) del rule['from_port'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_no_to_port(self): rule = security_group_rule_template(cidr='10.2.2.0/24', parent_group_id=self.sg2['id']) del rule['to_port'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_invalid_cidr(self): rule = security_group_rule_template(cidr='10.2.2222.0/24', parent_group_id=self.sg2['id']) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_no_cidr_group(self): rule = security_group_rule_template(parent_group_id=self.sg2['id']) res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] self.assertNotEqual(security_group_rule['id'], 0) self.assertEqual(security_group_rule['parent_group_id'], self.parent_security_group['id']) self.assertEqual(security_group_rule['ip_range']['cidr'], "0.0.0.0/0") def test_create_with_invalid_group_id(self): rule = security_group_rule_template(group_id='invalid', parent_group_id=self.sg2['id']) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_empty_group_id(self): rule = security_group_rule_template(group_id='', parent_group_id=self.sg2['id']) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_nonexist_group_id(self): rule = security_group_rule_template(group_id=self.invalid_id, parent_group_id=self.sg2['id']) self.assertRaises(webob.exc.HTTPNotFound, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_same_group_parent_id_and_group_id(self): rule = security_group_rule_template(group_id=self.sg1['id'], parent_group_id=self.sg1['id']) res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] self.assertNotEqual(security_group_rule['id'], 0) self.assertEqual(security_group_rule['parent_group_id'], self.sg1['id']) self.assertEqual(security_group_rule['group']['name'], self.sg1['name']) def _test_create_with_no_ports_and_no_group(self, proto): rule = {'ip_protocol': proto, 'parent_group_id': self.sg2['id']} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def _test_create_with_no_ports(self, proto): rule = {'ip_protocol': proto, 'parent_group_id': self.sg2['id'], 'group_id': self.sg1['id']} res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] expected_rule = { 'from_port': 1, 'group': {'tenant_id': '123', 'name': 'test'}, 'ip_protocol': proto, 'to_port': 65535, 'parent_group_id': self.sg2['id'], 'ip_range': {}, 'id': security_group_rule['id'] } if proto == 'icmp': expected_rule['to_port'] = -1 expected_rule['from_port'] = -1 self.assertEqual(expected_rule, security_group_rule) def test_create_with_no_ports_icmp(self): self._test_create_with_no_ports_and_no_group('icmp') self._test_create_with_no_ports('icmp') def test_create_with_no_ports_tcp(self): self._test_create_with_no_ports_and_no_group('tcp') self._test_create_with_no_ports('tcp') def test_create_with_no_ports_udp(self): 
self._test_create_with_no_ports_and_no_group('udp') self._test_create_with_no_ports('udp') def _test_create_with_ports(self, proto, from_port, to_port): rule = { 'ip_protocol': proto, 'from_port': from_port, 'to_port': to_port, 'parent_group_id': self.sg2['id'], 'group_id': self.sg1['id'] } res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] expected_rule = { 'from_port': from_port, 'group': {'tenant_id': '123', 'name': 'test'}, 'ip_protocol': proto, 'to_port': to_port, 'parent_group_id': self.sg2['id'], 'ip_range': {}, 'id': security_group_rule['id'] } self.assertEqual(proto, security_group_rule['ip_protocol']) self.assertEqual(from_port, security_group_rule['from_port']) self.assertEqual(to_port, security_group_rule['to_port']) self.assertEqual(expected_rule, security_group_rule) def test_create_with_ports_icmp(self): self._test_create_with_ports('icmp', 0, 1) self._test_create_with_ports('icmp', 0, 0) self._test_create_with_ports('icmp', 1, 0) def test_create_with_ports_tcp(self): self._test_create_with_ports('tcp', 1, 1) self._test_create_with_ports('tcp', 1, 65535) self._test_create_with_ports('tcp', 65535, 65535) def test_create_with_ports_udp(self): self._test_create_with_ports('udp', 1, 1) self._test_create_with_ports('udp', 1, 65535) self._test_create_with_ports('udp', 65535, 65535) def test_delete(self): rule = security_group_rule_template(parent_group_id=self.sg2['id']) req = fakes.HTTPRequest.blank( '/v2/%s/os-security-group-rules' % fakes.FAKE_PROJECT_ID) res_dict = self.controller.create(req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] req = fakes.HTTPRequest.blank('/v2/%s/os-security-group-rules/%s' % ( fakes.FAKE_PROJECT_ID, security_group_rule['id'])) self.controller.delete(req, security_group_rule['id']) def test_delete_invalid_rule_id(self): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, self.req, 'invalid') def test_delete_non_existing_rule_id(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, self.invalid_id) def test_create_rule_cidr_allow_all(self): rule = security_group_rule_template(cidr='0.0.0.0/0', parent_group_id=self.sg2['id']) res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] self.assertNotEqual(security_group_rule['id'], 0) self.assertEqual(security_group_rule['parent_group_id'], self.parent_security_group['id']) self.assertEqual(security_group_rule['ip_range']['cidr'], "0.0.0.0/0") def test_create_rule_cidr_ipv6_allow_all(self): rule = security_group_rule_template(cidr='::/0', parent_group_id=self.sg2['id']) res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] self.assertNotEqual(security_group_rule['id'], 0) self.assertEqual(security_group_rule['parent_group_id'], self.parent_security_group['id']) self.assertEqual(security_group_rule['ip_range']['cidr'], "::/0") def test_create_rule_cidr_allow_some(self): rule = security_group_rule_template(cidr='15.0.0.0/8', parent_group_id=self.sg2['id']) res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] self.assertNotEqual(security_group_rule['id'], 0) self.assertEqual(security_group_rule['parent_group_id'], self.parent_security_group['id']) self.assertEqual(security_group_rule['ip_range']['cidr'], "15.0.0.0/8") def 
test_create_rule_cidr_bad_netmask(self): rule = security_group_rule_template(cidr='15.0.0.0/0') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) UUID1 = '00000000-0000-0000-0000-000000000001' UUID2 = '00000000-0000-0000-0000-000000000002' UUID3 = '00000000-0000-0000-0000-000000000003' def fake_compute_get_all(*args, **kwargs): base = {'id': 1, 'description': 'foo', 'user_id': 'bar', 'project_id': 'baz', 'deleted': False, 'deleted_at': None, 'updated_at': None, 'created_at': None} inst_list = [ fakes.stub_instance_obj( None, 1, uuid=UUID1, security_groups=[dict(base, **{'name': 'fake-0-0'}), dict(base, **{'name': 'fake-0-1'})]), fakes.stub_instance_obj( None, 2, uuid=UUID2, security_groups=[dict(base, **{'name': 'fake-1-0'}), dict(base, **{'name': 'fake-1-1'})]) ] return objects.InstanceList(objects=inst_list) def fake_compute_get(*args, **kwargs): secgroups = objects.SecurityGroupList() secgroups.objects = [ objects.SecurityGroup(name='fake-2-0'), objects.SecurityGroup(name='fake-2-1'), ] inst = fakes.stub_instance_obj(None, 1, uuid=UUID3) inst.security_groups = secgroups return inst def fake_compute_create(*args, **kwargs): return ([fake_compute_get(*args, **kwargs)], '') class SecurityGroupsOutputTest(test.TestCase): content_type = 'application/json' def setUp(self): super(SecurityGroupsOutputTest, self).setUp() self.controller = secgroups_v21.SecurityGroupController() self.original_client = neutron_api.get_client neutron_api.get_client = get_client fakes.stub_out_nw_api(self) self.stub_out('nova.compute.api.API.get', fake_compute_get) self.stub_out('nova.compute.api.API.get_all', fake_compute_get_all) self.stub_out('nova.compute.api.API.create', fake_compute_create) self.stub_out( 'nova.network.security_group_api.' 'get_instances_security_groups_bindings', self._fake_get_instances_security_groups_bindings) def tearDown(self): neutron_api.get_client = self.original_client get_client()._reset() super(SecurityGroupsOutputTest, self).tearDown() def _fake_get_instances_security_groups_bindings(inst, context, servers): groups = { '00000000-0000-0000-0000-000000000001': [{'name': 'fake-0-0'}, {'name': 'fake-0-1'}], '00000000-0000-0000-0000-000000000002': [{'name': 'fake-1-0'}, {'name': 'fake-1-1'}], '00000000-0000-0000-0000-000000000003': [{'name': 'fake-2-0'}, {'name': 'fake-2-1'}]} result = {} for server in servers: result[server['id']] = groups.get(server['id']) return result def _make_request(self, url, body=None): req = fakes.HTTPRequest.blank(url) if body: req.method = 'POST' req.body = encodeutils.safe_encode(self._encode_body(body)) req.content_type = self.content_type req.headers['Accept'] = self.content_type # NOTE: This 'os-security-groups' is for enabling security_groups # attribute on response body. 
res = req.get_response(fakes.wsgi_app_v21()) return res def _encode_body(self, body): return jsonutils.dumps(body) def _get_server(self, body): return jsonutils.loads(body).get('server') def _get_servers(self, body): return jsonutils.loads(body).get('servers') def _get_groups(self, server): return server.get('security_groups') def test_create(self): url = '/v2/%s/servers' % fakes.FAKE_PROJECT_ID image_uuid = 'c905cedb-7281-47e4-8a62-f26bc5fc4c77' req = fakes.HTTPRequest.blank( '/v2/%s/os-security-groups' % fakes.FAKE_PROJECT_ID) security_groups = [{'name': 'fake-2-0'}, {'name': 'fake-2-1'}] for security_group in security_groups: sg = security_group_template(name=security_group['name']) self.controller.create(req, {'security_group': sg}) server = dict(name='server_test', imageRef=image_uuid, flavorRef=2, security_groups=security_groups) res = self._make_request(url, {'server': server}) self.assertEqual(res.status_int, 202) server = self._get_server(res.body) for i, group in enumerate(self._get_groups(server)): name = 'fake-2-%s' % i self.assertEqual(group.get('name'), name) def test_create_server_get_default_security_group(self): url = '/v2/%s/servers' % fakes.FAKE_PROJECT_ID image_uuid = 'c905cedb-7281-47e4-8a62-f26bc5fc4c77' server = dict(name='server_test', imageRef=image_uuid, flavorRef=2) res = self._make_request(url, {'server': server}) self.assertEqual(res.status_int, 202) server = self._get_server(res.body) group = self._get_groups(server)[0] self.assertEqual(group.get('name'), 'default') def test_show(self): self.stub_out( 'nova.network.security_group_api.get_instance_security_groups', lambda inst, context, id: [ {'name': 'fake-2-0'}, {'name': 'fake-2-1'}]) url = '/v2/%s/servers' % fakes.FAKE_PROJECT_ID image_uuid = 'c905cedb-7281-47e4-8a62-f26bc5fc4c77' req = fakes.HTTPRequest.blank( '/v2/%s/os-security-groups' % fakes.FAKE_PROJECT_ID) security_groups = [{'name': 'fake-2-0'}, {'name': 'fake-2-1'}] for security_group in security_groups: sg = security_group_template(name=security_group['name']) self.controller.create(req, {'security_group': sg}) server = dict(name='server_test', imageRef=image_uuid, flavorRef=2, security_groups=security_groups) res = self._make_request(url, {'server': server}) self.assertEqual(res.status_int, 202) server = self._get_server(res.body) for i, group in enumerate(self._get_groups(server)): name = 'fake-2-%s' % i self.assertEqual(group.get('name'), name) # Test that show (GET) returns the same information as create (POST) url = '/v2/%s/servers/%s' % (fakes.FAKE_PROJECT_ID, UUID3) res = self._make_request(url) self.assertEqual(res.status_int, 200) server = self._get_server(res.body) for i, group in enumerate(self._get_groups(server)): name = 'fake-2-%s' % i self.assertEqual(group.get('name'), name) def test_detail(self): url = '/v2/%s/servers/detail' % fakes.FAKE_PROJECT_ID res = self._make_request(url) self.assertEqual(res.status_int, 200) for i, server in enumerate(self._get_servers(res.body)): for j, group in enumerate(self._get_groups(server)): name = 'fake-%s-%s' % (i, j) self.assertEqual(group.get('name'), name) @mock.patch('nova.compute.api.API.get', side_effect=exception.InstanceNotFound(instance_id='fake')) def test_no_instance_passthrough_404(self, mock_get): url = ('/v2/%s/servers/70f6db34-de8d-4fbd-aafb-4065bdfa6115' % fakes.FAKE_PROJECT_ID) res = self._make_request(url) self.assertEqual(res.status_int, 404) class TestSecurityGroupsDeprecation(test.NoDBTestCase): def setUp(self): super(TestSecurityGroupsDeprecation, self).setUp() 
        self.controller = secgroups_v21.SecurityGroupController()
        self.req = fakes.HTTPRequest.blank('', version='2.36')

    def test_all_apis_return_not_found(self):
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
                          self.controller.show, self.req, fakes.FAKE_UUID)
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
                          self.controller.delete, self.req, fakes.FAKE_UUID)
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
                          self.controller.index, self.req)
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
                          self.controller.update, self.req,
                          fakes.FAKE_UUID, {})
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
                          self.controller.create, self.req, {})


class TestSecurityGroupRulesDeprecation(test.NoDBTestCase):

    def setUp(self):
        super(TestSecurityGroupRulesDeprecation, self).setUp()
        self.controller = secgroups_v21.SecurityGroupRulesController()
        self.req = fakes.HTTPRequest.blank('', version='2.36')

    def test_all_apis_return_not_found(self):
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
                          self.controller.create, self.req, {})
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
                          self.controller.delete, self.req, fakes.FAKE_UUID)

nova-21.2.4/nova/tests/unit/api/openstack/compute/test_server_actions.py

# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
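# NOTE: The unit tests below cover the server "actions" sub-resource of the
# v2.1 servers API (reboot, rebuild, resize and related actions). Anything
# deeper than the API layer - the compute RPC API, the conductor task API and
# RequestSpec lookups - is mocked out in setUp(), for example:
#
#     mock_rpcapi = mock.patch.object(self.compute_api, 'compute_rpcapi')
#     mock_rpcapi.start()
#     self.addCleanup(mock_rpcapi.stop)
#
# The snippet above is copied from ServerActionsControllerTestV21.setUp()
# further down in this file.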
import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import uuidutils import webob from nova.api.openstack.compute import servers as servers_v21 from nova.compute import api as compute_api from nova.compute import task_states from nova.compute import vm_states import nova.conf from nova import exception from nova.image import glance from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_block_device from nova.tests.unit import fake_instance from nova.tests.unit.image import fake CONF = nova.conf.CONF FAKE_UUID = fakes.FAKE_UUID class MockSetAdminPassword(object): def __init__(self): self.instance_id = None self.password = None def __call__(self, context, instance, password): self.instance_id = instance['uuid'] self.password = password class ServerActionsControllerTestV21(test.TestCase): image_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' image_base_url = 'http://localhost:9292/images/' image_href = image_base_url + '/' + image_uuid servers = servers_v21 validation_error = exception.ValidationError request_too_large_error = exception.ValidationError image_url = None def setUp(self): super(ServerActionsControllerTestV21, self).setUp() self.flags(group='glance', api_servers=['http://localhost:9292']) self.stub_out('nova.compute.api.API.get', fakes.fake_compute_get(vm_state=vm_states.ACTIVE, project_id=fakes.FAKE_PROJECT_ID, host='fake_host')) self.stub_out('nova.objects.Instance.save', lambda *a, **kw: None) fakes.stub_out_compute_api_snapshot(self) fake.stub_out_image_service(self) self.flags(enable_instance_password=True, group='api') self._image_href = '155d900f-4e14-4e4c-a73d-069cbf4541e6' self.controller = self._get_controller() self.compute_api = self.controller.compute_api # We don't care about anything getting as far as hitting the compute # RPC API so we just mock it out here. mock_rpcapi = mock.patch.object(self.compute_api, 'compute_rpcapi') mock_rpcapi.start() self.addCleanup(mock_rpcapi.stop) # The project_id here matches what is used by default in # fake_compute_get which need to match for policy checks. self.req = fakes.HTTPRequest.blank('', project_id=fakes.FAKE_PROJECT_ID) self.context = self.req.environ['nova.context'] self.image_api = glance.API() # Assume that anything that hits the compute API and looks for a # RequestSpec doesn't care about it, since testing logic that deep # should be done in nova.tests.unit.compute.test_compute_api. mock_reqspec = mock.patch('nova.objects.RequestSpec') mock_reqspec.start() self.addCleanup(mock_reqspec.stop) # Similarly we shouldn't care about anything hitting conductor from # these tests. mock_conductor = mock.patch.object( self.controller.compute_api, 'compute_task_api') mock_conductor.start() self.addCleanup(mock_conductor.stop) # Assume that none of the tests are using ports with resource requests. 
self.mock_list_port = self.useFixture( fixtures.MockPatch('nova.network.neutron.API.list_ports')).mock self.mock_list_port.return_value = {'ports': []} def _get_controller(self): return self.servers.ServersController() def _test_locked_instance(self, action, method=None, body_map=None, compute_api_args_map=None): if body_map is None: body_map = {} if compute_api_args_map is None: compute_api_args_map = {} args, kwargs = compute_api_args_map.get(action, ((), {})) uuid = uuidutils.generate_uuid() context = self.req.environ['nova.context'] instance = fake_instance.fake_db_instance( id=1, uuid=uuid, vm_state=vm_states.ACTIVE, task_state=None, project_id=context.project_id, user_id=context.user_id) instance = objects.Instance._from_db_object( self.context, objects.Instance(), instance) with test.nested( mock.patch.object(compute_api.API, 'get', return_value=instance), mock.patch.object(compute_api.API, method, side_effect=exception.InstanceIsLocked( instance_uuid=instance['uuid'])), ) as (mock_get, mock_method): controller_function = 'self.controller.' + action self.assertRaises(webob.exc.HTTPConflict, eval(controller_function), self.req, instance['uuid'], body=body_map.get(action)) expected_attrs = ['flavor', 'numa_topology'] if method == 'resize': expected_attrs.append('services') mock_get.assert_called_once_with(self.context, uuid, expected_attrs=expected_attrs, cell_down_support=False) mock_method.assert_called_once_with(self.context, instance, *args, **kwargs) def test_actions_with_locked_instance(self): actions = ['_action_resize', '_action_confirm_resize', '_action_revert_resize', '_action_reboot', '_action_rebuild'] method_translations = {'_action_resize': 'resize', '_action_confirm_resize': 'confirm_resize', '_action_revert_resize': 'revert_resize', '_action_reboot': 'reboot', '_action_rebuild': 'rebuild'} body_map = {'_action_resize': {'resize': {'flavorRef': '2'}}, '_action_reboot': {'reboot': {'type': 'HARD'}}, '_action_rebuild': {'rebuild': { 'imageRef': self.image_uuid, 'adminPass': 'TNc53Dr8s7vw'}}} args_map = {'_action_resize': (('2'), {'auto_disk_config': None}), '_action_confirm_resize': ((), {}), '_action_reboot': (('HARD',), {}), '_action_rebuild': ((self.image_uuid, 'TNc53Dr8s7vw'), {})} for action in actions: method = method_translations.get(action) self._test_locked_instance(action, method=method, body_map=body_map, compute_api_args_map=args_map) def test_reboot_hard(self): body = dict(reboot=dict(type="HARD")) self.controller._action_reboot(self.req, FAKE_UUID, body=body) def test_reboot_soft(self): body = dict(reboot=dict(type="SOFT")) self.controller._action_reboot(self.req, FAKE_UUID, body=body) def test_reboot_incorrect_type(self): body = dict(reboot=dict(type="NOT_A_TYPE")) self.assertRaises(self.validation_error, self.controller._action_reboot, self.req, FAKE_UUID, body=body) def test_reboot_missing_type(self): body = dict(reboot=dict()) self.assertRaises(self.validation_error, self.controller._action_reboot, self.req, FAKE_UUID, body=body) def test_reboot_none(self): body = dict(reboot=dict(type=None)) self.assertRaises(self.validation_error, self.controller._action_reboot, self.req, FAKE_UUID, body=body) def test_reboot_not_found(self): body = dict(reboot=dict(type="HARD")) with mock.patch('nova.compute.api.API.get', side_effect=exception.InstanceNotFound( instance_id=uuids.fake)): self.assertRaises(webob.exc.HTTPNotFound, self.controller._action_reboot, self.req, uuids.fake, body=body) def test_reboot_raises_conflict_on_invalid_state(self): body = 
dict(reboot=dict(type="HARD")) def fake_reboot(*args, **kwargs): raise exception.InstanceInvalidState(attr='fake_attr', state='fake_state', method='fake_method', instance_uuid='fake') self.stub_out('nova.compute.api.API.reboot', fake_reboot) self.assertRaises(webob.exc.HTTPConflict, self.controller._action_reboot, self.req, FAKE_UUID, body=body) def test_reboot_soft_with_soft_in_progress_raises_conflict(self): body = dict(reboot=dict(type="SOFT")) self.stub_out('nova.compute.api.API.get', fakes.fake_compute_get(project_id=fakes.FAKE_PROJECT_ID, vm_state=vm_states.ACTIVE, task_state=task_states.REBOOTING)) self.assertRaises(webob.exc.HTTPConflict, self.controller._action_reboot, self.req, FAKE_UUID, body=body) def test_reboot_hard_with_soft_in_progress_does_not_raise(self): body = dict(reboot=dict(type="HARD")) self.stub_out('nova.compute.api.API.get', fakes.fake_compute_get(project_id=fakes.FAKE_PROJECT_ID, vm_state=vm_states.ACTIVE, task_state=task_states.REBOOTING)) self.controller._action_reboot(self.req, FAKE_UUID, body=body) def test_reboot_hard_with_hard_in_progress(self): body = dict(reboot=dict(type="HARD")) self.stub_out('nova.compute.api.API.get', fakes.fake_compute_get( project_id=fakes.FAKE_PROJECT_ID, vm_state=vm_states.ACTIVE, task_state=task_states.REBOOTING_HARD)) self.controller._action_reboot(self.req, FAKE_UUID, body=body) def test_reboot_soft_with_hard_in_progress_raises_conflict(self): body = dict(reboot=dict(type="SOFT")) self.stub_out('nova.compute.api.API.get', fakes.fake_compute_get( project_id=fakes.FAKE_PROJECT_ID, vm_state=vm_states.ACTIVE, task_state=task_states.REBOOTING_HARD)) self.assertRaises(webob.exc.HTTPConflict, self.controller._action_reboot, self.req, FAKE_UUID, body=body) def _test_rebuild_preserve_ephemeral(self, value=None): return_server = fakes.fake_compute_get( project_id=fakes.FAKE_PROJECT_ID, image_ref='2', vm_state=vm_states.ACTIVE, host='fake_host') self.stub_out('nova.compute.api.API.get', return_server) body = { "rebuild": { "imageRef": self._image_href, }, } if value is not None: body['rebuild']['preserve_ephemeral'] = value with mock.patch.object(compute_api.API, 'rebuild') as mock_rebuild: self.controller._action_rebuild(self.req, FAKE_UUID, body=body) if value is not None: mock_rebuild.assert_called_once_with(self.context, mock.ANY, self._image_href, mock.ANY, preserve_ephemeral=value) else: mock_rebuild.assert_called_once_with(self.context, mock.ANY, self._image_href, mock.ANY) def test_rebuild_preserve_ephemeral_true(self): self._test_rebuild_preserve_ephemeral(True) def test_rebuild_preserve_ephemeral_false(self): self._test_rebuild_preserve_ephemeral(False) def test_rebuild_preserve_ephemeral_default(self): self._test_rebuild_preserve_ephemeral() def test_rebuild_accepted_minimum(self): return_server = fakes.fake_compute_get( project_id=fakes.FAKE_PROJECT_ID, image_ref='2', vm_state=vm_states.ACTIVE, host='fake_host') self.stub_out('nova.compute.api.API.get', return_server) self_href = 'http://localhost/v2/servers/%s' % FAKE_UUID body = { "rebuild": { "imageRef": self._image_href, }, } robj = self.controller._action_rebuild(self.req, FAKE_UUID, body=body) body = robj.obj self.assertEqual(body['server']['image']['id'], '2') self.assertEqual(len(body['server']['adminPass']), CONF.password_length) self.assertEqual(robj['location'], self_href) # pep3333 requires applications produces headers which are str self.assertEqual(str, type(robj['location'])) def test_rebuild_instance_with_image_uuid(self): info = dict(image_href_in_call=None) 
def rebuild(self2, context, instance, image_href, *args, **kwargs): info['image_href_in_call'] = image_href self.stub_out('nova.compute.api.API.rebuild', rebuild) # proper local hrefs must start with 'http://localhost/v2/' body = { 'rebuild': { 'imageRef': self.image_uuid, }, } self.controller._action_rebuild(self.req, FAKE_UUID, body=body) self.assertEqual(info['image_href_in_call'], self.image_uuid) def test_rebuild_instance_with_image_href_uses_uuid(self): # proper local hrefs must start with 'http://localhost/v2/' body = { 'rebuild': { 'imageRef': self.image_href, }, } self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_rebuild_accepted_minimum_pass_disabled(self): # run with enable_instance_password disabled to verify adminPass # is missing from response. See lp bug 921814 self.flags(enable_instance_password=False, group='api') return_server = fakes.fake_compute_get( project_id=fakes.FAKE_PROJECT_ID, image_ref='2', vm_state=vm_states.ACTIVE, host='fake_host') self.stub_out('nova.compute.api.API.get', return_server) self_href = 'http://localhost/v2/servers/%s' % FAKE_UUID body = { "rebuild": { "imageRef": self._image_href, }, } robj = self.controller._action_rebuild(self.req, FAKE_UUID, body=body) body = robj.obj self.assertEqual(body['server']['image']['id'], '2') self.assertNotIn("adminPass", body['server']) self.assertEqual(robj['location'], self_href) # pep3333 requires applications produces headers which are str self.assertEqual(str, type(robj['location'])) def test_rebuild_raises_conflict_on_invalid_state(self): body = { "rebuild": { "imageRef": self._image_href, }, } def fake_rebuild(*args, **kwargs): raise exception.InstanceInvalidState(attr='fake_attr', state='fake_state', method='fake_method', instance_uuid='fake') self.stub_out('nova.compute.api.API.rebuild', fake_rebuild) self.assertRaises(webob.exc.HTTPConflict, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_rebuild_accepted_with_metadata(self): metadata = {'new': 'metadata'} return_server = fakes.fake_compute_get( project_id=fakes.FAKE_PROJECT_ID, metadata=metadata, vm_state=vm_states.ACTIVE, host='fake_host') self.stub_out('nova.compute.api.API.get', return_server) body = { "rebuild": { "imageRef": self._image_href, "metadata": metadata, }, } body = self.controller._action_rebuild(self.req, FAKE_UUID, body=body).obj self.assertEqual(body['server']['metadata'], metadata) def test_rebuild_accepted_with_bad_metadata(self): body = { "rebuild": { "imageRef": self._image_href, "metadata": "stack", }, } self.assertRaises(self.validation_error, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_rebuild_with_too_large_metadata(self): body = { "rebuild": { "imageRef": self._image_href, "metadata": { 256 * "k": "value" } } } self.assertRaises(self.request_too_large_error, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_rebuild_bad_entity(self): body = { "rebuild": { "imageId": self._image_href, }, } self.assertRaises(self.validation_error, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_rebuild_admin_pass(self): return_server = fakes.fake_compute_get( project_id=fakes.FAKE_PROJECT_ID, image_ref='2', vm_state=vm_states.ACTIVE, host='fake_host') self.stub_out('nova.compute.api.API.get', return_server) body = { "rebuild": { "imageRef": self._image_href, "adminPass": "asdf", }, } body = self.controller._action_rebuild(self.req, FAKE_UUID, body=body).obj 
self.assertEqual(body['server']['image']['id'], '2') self.assertEqual(body['server']['adminPass'], 'asdf') def test_rebuild_admin_pass_pass_disabled(self): # run with enable_instance_password disabled to verify adminPass # is missing from response. See lp bug 921814 self.flags(enable_instance_password=False, group='api') return_server = fakes.fake_compute_get( project_id=fakes.FAKE_PROJECT_ID, image_ref='2', vm_state=vm_states.ACTIVE, host='fake_host') self.stub_out('nova.compute.api.API.get', return_server) body = { "rebuild": { "imageRef": self._image_href, "adminPass": "asdf", }, } body = self.controller._action_rebuild(self.req, FAKE_UUID, body=body).obj self.assertEqual(body['server']['image']['id'], '2') self.assertNotIn('adminPass', body['server']) def test_rebuild_server_not_found(self): body = { "rebuild": { "imageRef": self._image_href, }, } with mock.patch('nova.compute.api.API.get', side_effect=exception.InstanceNotFound( instance_id=FAKE_UUID)): self.assertRaises(webob.exc.HTTPNotFound, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_rebuild_with_bad_image(self): body = { "rebuild": { "imageRef": "foo", }, } self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_rebuild_accessIP(self): attributes = { 'access_ip_v4': '172.19.0.1', 'access_ip_v6': 'fe80::1', } body = { "rebuild": { "imageRef": self._image_href, "accessIPv4": "172.19.0.1", "accessIPv6": "fe80::1", }, } data = {'changes': {}} orig_get = compute_api.API.get def wrap_get(*args, **kwargs): data['instance'] = orig_get(*args, **kwargs) return data['instance'] def fake_save(context, **kwargs): data['changes'].update(data['instance'].obj_get_changes()) self.stub_out('nova.compute.api.API.get', wrap_get) self.stub_out('nova.objects.Instance.save', fake_save) self.controller._action_rebuild(self.req, FAKE_UUID, body=body) self.assertEqual(self._image_href, data['changes']['image_ref']) self.assertEqual("", data['changes']['kernel_id']) self.assertEqual("", data['changes']['ramdisk_id']) self.assertEqual(task_states.REBUILDING, data['changes']['task_state']) self.assertEqual(0, data['changes']['progress']) for attr, value in attributes.items(): self.assertEqual(value, str(data['changes'][attr])) def test_rebuild_when_kernel_not_exists(self): def return_image_meta(*args, **kwargs): image_meta_table = { '2': {'id': uuids.image_id, 'status': 'active', 'container_format': 'ari'}, '155d900f-4e14-4e4c-a73d-069cbf4541e6': {'id': uuids.image_id, 'status': 'active', 'container_format': 'raw', 'properties': {'kernel_id': 1, 'ramdisk_id': 2}}, } image_id = args[2] try: image_meta = image_meta_table[str(image_id)] except KeyError: raise exception.ImageNotFound(image_id=image_id) return image_meta self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', return_image_meta) body = { "rebuild": { "imageRef": "155d900f-4e14-4e4c-a73d-069cbf4541e6", }, } self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_rebuild_proper_kernel_ram(self): instance_meta = {'kernel_id': None, 'ramdisk_id': None} orig_get = compute_api.API.get def wrap_get(*args, **kwargs): inst = orig_get(*args, **kwargs) instance_meta['instance'] = inst return inst def fake_save(context, **kwargs): instance = instance_meta['instance'] for key in instance_meta.keys(): if key in instance.obj_what_changed(): instance_meta[key] = instance[key] def return_image_meta(*args, **kwargs): image_meta_table = { 
uuids.kernel_image_id: { 'id': uuids.kernel_image_id, 'status': 'active', 'container_format': 'aki'}, uuids.ramdisk_image_id: { 'id': uuids.ramdisk_image_id, 'status': 'active', 'container_format': 'ari'}, '155d900f-4e14-4e4c-a73d-069cbf4541e6': {'id': '155d900f-4e14-4e4c-a73d-069cbf4541e6', 'status': 'active', 'container_format': 'raw', 'properties': {'kernel_id': uuids.kernel_image_id, 'ramdisk_id': uuids.ramdisk_image_id}}, } image_id = args[2] try: image_meta = image_meta_table[str(image_id)] except KeyError: raise exception.ImageNotFound(image_id=image_id) return image_meta self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', return_image_meta) self.stub_out('nova.compute.api.API.get', wrap_get) self.stub_out('nova.objects.Instance.save', fake_save) body = { "rebuild": { "imageRef": "155d900f-4e14-4e4c-a73d-069cbf4541e6", }, } self.controller._action_rebuild(self.req, FAKE_UUID, body=body).obj self.assertEqual(instance_meta['kernel_id'], uuids.kernel_image_id) self.assertEqual(instance_meta['ramdisk_id'], uuids.ramdisk_image_id) @mock.patch.object(compute_api.API, 'rebuild') def test_rebuild_instance_raise_auto_disk_config_exc(self, mock_rebuild): body = { "rebuild": { "imageRef": self._image_href, }, } mock_rebuild.side_effect = exception.AutoDiskConfigDisabledByImage( image='dummy') self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) @mock.patch.object(compute_api.API, 'rebuild') def test_rebuild_raise_invalid_architecture_exc(self, mock_rebuild): body = { "rebuild": { "imageRef": self._image_href, }, } mock_rebuild.side_effect = exception.InvalidArchitectureName('arm64') self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) @mock.patch.object(compute_api.API, 'rebuild') def test_rebuild_raise_invalid_volume_exc(self, mock_rebuild): """Make sure that we can't rebuild with an InvalidVolume exception.""" body = { "rebuild": { "imageRef": self._image_href, }, } mock_rebuild.side_effect = exception.InvalidVolume('error') self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_resize_server(self): body = dict(resize=dict(flavorRef="http://localhost/3")) self.resize_called = False def resize_mock(*args, **kwargs): self.resize_called = True self.stub_out('nova.compute.api.API.resize', resize_mock) self.controller._action_resize(self.req, FAKE_UUID, body=body) self.assertTrue(self.resize_called) def test_resize_server_no_flavor(self): body = dict(resize=dict()) self.assertRaises(self.validation_error, self.controller._action_resize, self.req, FAKE_UUID, body=body) def test_resize_server_no_flavor_ref(self): body = dict(resize=dict(flavorRef=None)) self.assertRaises(self.validation_error, self.controller._action_resize, self.req, FAKE_UUID, body=body) def test_resize_server_with_extra_arg(self): body = dict(resize=dict(favorRef="http://localhost/3", extra_arg="extra_arg")) self.assertRaises(self.validation_error, self.controller._action_resize, self.req, FAKE_UUID, body=body) def test_resize_server_invalid_flavor_ref(self): body = dict(resize=dict(flavorRef=1.2)) self.assertRaises(self.validation_error, self.controller._action_resize, self.req, FAKE_UUID, body=body) def test_resize_with_server_not_found(self): body = dict(resize=dict(flavorRef="http://localhost/3")) with mock.patch('nova.compute.api.API.get', side_effect=exception.InstanceNotFound( instance_id=FAKE_UUID)): 
self.assertRaises(webob.exc.HTTPNotFound, self.controller._action_resize, self.req, FAKE_UUID, body=body) def test_resize_with_image_exceptions(self): body = dict(resize=dict(flavorRef="http://localhost/3")) self.resize_called = 0 image_id = 'fake_image_id' exceptions = [ (exception.ImageNotAuthorized(image_id=image_id), webob.exc.HTTPUnauthorized), (exception.ImageNotFound(image_id=image_id), webob.exc.HTTPBadRequest), (exception.Invalid, webob.exc.HTTPBadRequest), (exception.AutoDiskConfigDisabledByImage(image=image_id), webob.exc.HTTPBadRequest), ] raised, expected = map(iter, zip(*exceptions)) def _fake_resize(obj, context, instance, flavor_id, auto_disk_config=None): self.resize_called += 1 raise next(raised) self.stub_out('nova.compute.api.API.resize', _fake_resize) for call_no in range(len(exceptions)): next_exception = next(expected) actual = self.assertRaises(next_exception, self.controller._action_resize, self.req, FAKE_UUID, body=body) if (isinstance(exceptions[call_no][0], exception.NoValidHost)): self.assertEqual(actual.explanation, 'No valid host was found. Bad host') elif (isinstance(exceptions[call_no][0], exception.AutoDiskConfigDisabledByImage)): self.assertEqual(actual.explanation, 'Requested image fake_image_id has automatic' ' disk resize disabled.') self.assertEqual(self.resize_called, call_no + 1) @mock.patch('nova.compute.api.API.resize', side_effect=exception.CannotResizeDisk(reason='')) def test_resize_raises_cannot_resize_disk(self, mock_resize): body = dict(resize=dict(flavorRef="http://localhost/3")) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_resize, self.req, FAKE_UUID, body=body) @mock.patch('nova.compute.api.API.resize', side_effect=exception.FlavorNotFound(reason='', flavor_id='fake_id')) def test_resize_raises_flavor_not_found(self, mock_resize): body = dict(resize=dict(flavorRef="http://localhost/3")) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_resize, self.req, FAKE_UUID, body=body) def test_resize_with_too_many_instances(self): body = dict(resize=dict(flavorRef="http://localhost/3")) def fake_resize(*args, **kwargs): raise exception.TooManyInstances(message="TooManyInstance") self.stub_out('nova.compute.api.API.resize', fake_resize) self.assertRaises(webob.exc.HTTPForbidden, self.controller._action_resize, self.req, FAKE_UUID, body=body) def test_resize_raises_conflict_on_invalid_state(self): body = dict(resize=dict(flavorRef="http://localhost/3")) def fake_resize(*args, **kwargs): raise exception.InstanceInvalidState(attr='fake_attr', state='fake_state', method='fake_method', instance_uuid='fake') self.stub_out('nova.compute.api.API.resize', fake_resize) self.assertRaises(webob.exc.HTTPConflict, self.controller._action_resize, self.req, FAKE_UUID, body=body) @mock.patch.object(compute_api.API, 'resize') def test_resize_instance_raise_auto_disk_config_exc(self, mock_resize): mock_resize.side_effect = exception.AutoDiskConfigDisabledByImage( image='dummy') body = dict(resize=dict(flavorRef="http://localhost/3")) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_resize, self.req, FAKE_UUID, body=body) @mock.patch('nova.compute.api.API.resize', side_effect=exception.PciRequestAliasNotDefined( alias='fake_name')) def test_resize_pci_alias_not_defined(self, mock_resize): # Tests that PciRequestAliasNotDefined is translated to a 400 error. 
body = dict(resize=dict(flavorRef="http://localhost/3")) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_resize, self.req, FAKE_UUID, body=body) def test_confirm_resize_server(self): body = dict(confirmResize=None) self.confirm_resize_called = False def cr_mock(*args): self.confirm_resize_called = True self.stub_out('nova.compute.api.API.confirm_resize', cr_mock) self.controller._action_confirm_resize(self.req, FAKE_UUID, body=body) self.assertTrue(self.confirm_resize_called) def test_confirm_resize_migration_not_found(self): body = dict(confirmResize=None) def confirm_resize_mock(*args): raise exception.MigrationNotFoundByStatus(instance_id=1, status='finished') self.stub_out('nova.compute.api.API.confirm_resize', confirm_resize_mock) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_confirm_resize, self.req, FAKE_UUID, body=body) def test_confirm_resize_raises_conflict_on_invalid_state(self): body = dict(confirmResize=None) def fake_confirm_resize(*args, **kwargs): raise exception.InstanceInvalidState(attr='fake_attr', state='fake_state', method='fake_method', instance_uuid='fake') self.stub_out('nova.compute.api.API.confirm_resize', fake_confirm_resize) self.assertRaises(webob.exc.HTTPConflict, self.controller._action_confirm_resize, self.req, FAKE_UUID, body=body) def test_revert_resize_migration_not_found(self): body = dict(revertResize=None) def revert_resize_mock(*args): raise exception.MigrationNotFoundByStatus(instance_id=1, status='finished') self.stub_out('nova.compute.api.API.revert_resize', revert_resize_mock) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_revert_resize, self.req, FAKE_UUID, body=body) def test_revert_resize_server_not_found(self): body = dict(revertResize=None) with mock.patch('nova.compute.api.API.get', side_effect=exception.InstanceNotFound( instance_id='bad_server_id')): self.assertRaises(webob. exc.HTTPNotFound, self.controller._action_revert_resize, self.req, "bad_server_id", body=body) def test_revert_resize_server(self): body = dict(revertResize=None) self.revert_resize_called = False def revert_mock(*args): self.revert_resize_called = True self.stub_out('nova.compute.api.API.revert_resize', revert_mock) body = self.controller._action_revert_resize(self.req, FAKE_UUID, body=body) self.assertTrue(self.revert_resize_called) def test_revert_resize_raises_conflict_on_invalid_state(self): body = dict(revertResize=None) def fake_revert_resize(*args, **kwargs): raise exception.InstanceInvalidState(attr='fake_attr', state='fake_state', method='fake_method', instance_uuid='fake') self.stub_out('nova.compute.api.API.revert_resize', fake_revert_resize) self.assertRaises(webob.exc.HTTPConflict, self.controller._action_revert_resize, self.req, FAKE_UUID, body=body) def test_create_image(self): body = { 'createImage': { 'name': 'Snapshot 1', }, } response = self.controller._action_create_image(self.req, FAKE_UUID, body=body) location = response.headers['Location'] self.assertEqual(self.image_url + '123' if self.image_url else self.image_api.generate_image_url('123', self.context), location) def test_create_image_v2_45(self): """Tests the createImage server action API with the 2.45 microversion where there is a response body but no Location header. 
""" body = { 'createImage': { 'name': 'Snapshot 1', }, } req = fakes.HTTPRequest.blank('', version='2.45') response = self.controller._action_create_image(req, FAKE_UUID, body=body) self.assertIsInstance(response, dict) self.assertEqual('123', response['image_id']) def test_create_image_name_too_long(self): long_name = 'a' * 260 body = { 'createImage': { 'name': long_name, }, } self.assertRaises(self.validation_error, self.controller._action_create_image, self.req, FAKE_UUID, body=body) def _do_test_create_volume_backed_image( self, extra_properties, mock_vol_create_side_effect=None): def _fake_id(x): return '%s-%s-%s-%s' % (x * 8, x * 4, x * 4, x * 12) body = dict(createImage=dict(name='snapshot_of_volume_backed')) if extra_properties: body['createImage']['metadata'] = extra_properties image_service = glance.get_default_image_service() bdm = [dict(volume_id=_fake_id('a'), volume_size=1, device_name='vda', delete_on_termination=False)] def fake_block_device_mapping_get_all_by_instance(context, inst_id, use_slave=False): return [fake_block_device.FakeDbBlockDeviceDict( {'volume_id': _fake_id('a'), 'source_type': 'snapshot', 'destination_type': 'volume', 'volume_size': 1, 'device_name': 'vda', 'snapshot_id': 1, 'boot_index': 0, 'delete_on_termination': False, 'no_device': None})] self.stub_out('nova.db.api.block_device_mapping_get_all_by_instance', fake_block_device_mapping_get_all_by_instance) system_metadata = dict(image_kernel_id=_fake_id('b'), image_ramdisk_id=_fake_id('c'), image_root_device_name='/dev/vda', image_block_device_mapping=str(bdm), image_container_format='ami') instance = fakes.fake_compute_get(project_id=fakes.FAKE_PROJECT_ID, image_ref=uuids.fake, vm_state=vm_states.ACTIVE, root_device_name='/dev/vda', system_metadata=system_metadata) self.stub_out('nova.compute.api.API.get', instance) volume = dict(id=_fake_id('a'), size=1, host='fake', display_description='fake') snapshot = dict(id=_fake_id('d')) with test.nested( mock.patch.object( self.controller.compute_api.volume_api, 'get_absolute_limits', return_value={'totalSnapshotsUsed': 0, 'maxTotalSnapshots': 10}), mock.patch.object(self.controller.compute_api.compute_rpcapi, 'quiesce_instance', side_effect=exception.InstanceQuiesceNotSupported( instance_id='fake', reason='test')), mock.patch.object(self.controller.compute_api.volume_api, 'get', return_value=volume), mock.patch.object(self.controller.compute_api.volume_api, 'create_snapshot_force', return_value=snapshot), ) as (mock_get_limits, mock_quiesce, mock_vol_get, mock_vol_create): if mock_vol_create_side_effect: mock_vol_create.side_effect = mock_vol_create_side_effect response = self.controller._action_create_image(self.req, FAKE_UUID, body=body) location = response.headers['Location'] image_id = location.replace(self.image_url or self.image_api.generate_image_url('', self.context), '') image = image_service.show(None, image_id) self.assertEqual(image['name'], 'snapshot_of_volume_backed') properties = image['properties'] self.assertEqual(properties['kernel_id'], _fake_id('b')) self.assertEqual(properties['ramdisk_id'], _fake_id('c')) self.assertEqual(properties['root_device_name'], '/dev/vda') self.assertTrue(properties['bdm_v2']) bdms = properties['block_device_mapping'] self.assertEqual(len(bdms), 1) self.assertEqual(bdms[0]['boot_index'], 0) self.assertEqual(bdms[0]['source_type'], 'snapshot') self.assertEqual(bdms[0]['destination_type'], 'volume') self.assertEqual(bdms[0]['snapshot_id'], snapshot['id']) self.assertEqual('/dev/vda', bdms[0]['device_name']) for 
fld in ('connection_info', 'id', 'instance_uuid'): self.assertNotIn(fld, bdms[0]) for k in extra_properties.keys(): self.assertEqual(properties[k], extra_properties[k]) mock_quiesce.assert_called_once_with(mock.ANY, mock.ANY) mock_vol_get.assert_called_once_with(mock.ANY, volume['id']) mock_vol_create.assert_called_once_with(mock.ANY, volume['id'], mock.ANY, mock.ANY) def test_create_volume_backed_image_no_metadata(self): self._do_test_create_volume_backed_image({}) def test_create_volume_backed_image_with_metadata(self): self._do_test_create_volume_backed_image(dict(ImageType='Gold', ImageVersion='2.0')) def test_create_volume_backed_image_cinder_over_quota(self): self.assertRaises( webob.exc.HTTPForbidden, self._do_test_create_volume_backed_image, {}, mock_vol_create_side_effect=exception.OverQuota( overs='snapshot')) def _test_create_volume_backed_image_with_metadata_from_volume( self, extra_metadata=None): def _fake_id(x): return '%s-%s-%s-%s' % (x * 8, x * 4, x * 4, x * 12) body = dict(createImage=dict(name='snapshot_of_volume_backed')) if extra_metadata: body['createImage']['metadata'] = extra_metadata image_service = glance.get_default_image_service() def fake_block_device_mapping_get_all_by_instance(context, inst_id, use_slave=False): return [fake_block_device.FakeDbBlockDeviceDict( {'volume_id': _fake_id('a'), 'source_type': 'snapshot', 'destination_type': 'volume', 'volume_size': 1, 'device_name': 'vda', 'snapshot_id': 1, 'boot_index': 0, 'delete_on_termination': False, 'no_device': None})] self.stub_out('nova.db.api.block_device_mapping_get_all_by_instance', fake_block_device_mapping_get_all_by_instance) instance = fakes.fake_compute_get( project_id=fakes.FAKE_PROJECT_ID, image_ref='', vm_state=vm_states.ACTIVE, root_device_name='/dev/vda', system_metadata={'image_test_key1': 'test_value1', 'image_test_key2': 'test_value2'}) self.stub_out('nova.compute.api.API.get', instance) volume = dict(id=_fake_id('a'), size=1, host='fake', display_description='fake') snapshot = dict(id=_fake_id('d')) with test.nested( mock.patch.object( self.controller.compute_api.volume_api, 'get_absolute_limits', return_value={'totalSnapshotsUsed': 0, 'maxTotalSnapshots': 10}), mock.patch.object(self.controller.compute_api.compute_rpcapi, 'quiesce_instance', side_effect=exception.InstanceQuiesceNotSupported( instance_id='fake', reason='test')), mock.patch.object(self.controller.compute_api.volume_api, 'get', return_value=volume), mock.patch.object(self.controller.compute_api.volume_api, 'create_snapshot_force', return_value=snapshot), ) as (mock_get_limits, mock_quiesce, mock_vol_get, mock_vol_create): response = self.controller._action_create_image(self.req, FAKE_UUID, body=body) location = response.headers['Location'] image_id = location.replace(self.image_base_url, '') image = image_service.show(None, image_id) properties = image['properties'] self.assertEqual(properties['test_key1'], 'test_value1') self.assertEqual(properties['test_key2'], 'test_value2') if extra_metadata: for key, val in extra_metadata.items(): self.assertEqual(properties[key], val) mock_quiesce.assert_called_once_with(mock.ANY, mock.ANY) mock_vol_get.assert_called_once_with(mock.ANY, volume['id']) mock_vol_create.assert_called_once_with(mock.ANY, volume['id'], mock.ANY, mock.ANY) def test_create_vol_backed_img_with_meta_from_vol_without_extra_meta(self): self._test_create_volume_backed_image_with_metadata_from_volume() def test_create_vol_backed_img_with_meta_from_vol_with_extra_meta(self): 
self._test_create_volume_backed_image_with_metadata_from_volume( extra_metadata={'a': 'b'}) def test_create_image_with_metadata(self): body = { 'createImage': { 'name': 'Snapshot 1', 'metadata': {'key': 'asdf'}, }, } response = self.controller._action_create_image(self.req, FAKE_UUID, body=body) location = response.headers['Location'] self.assertEqual(self.image_url + '123' if self.image_url else self.image_api.generate_image_url('123', self.context), location) def test_create_image_with_too_much_metadata(self): body = { 'createImage': { 'name': 'Snapshot 1', 'metadata': {}, }, } for num in range(CONF.quota.metadata_items + 1): body['createImage']['metadata']['foo%i' % num] = "bar" self.assertRaises(webob.exc.HTTPForbidden, self.controller._action_create_image, self.req, FAKE_UUID, body=body) def test_create_image_no_name(self): body = { 'createImage': {}, } self.assertRaises(self.validation_error, self.controller._action_create_image, self.req, FAKE_UUID, body=body) def test_create_image_blank_name(self): body = { 'createImage': { 'name': '', } } self.assertRaises(self.validation_error, self.controller._action_create_image, self.req, FAKE_UUID, body=body) def test_create_image_bad_metadata(self): body = { 'createImage': { 'name': 'geoff', 'metadata': 'henry', }, } self.assertRaises(self.validation_error, self.controller._action_create_image, self.req, FAKE_UUID, body=body) def test_create_image_raises_conflict_on_invalid_state(self): def snapshot(*args, **kwargs): raise exception.InstanceInvalidState(attr='fake_attr', state='fake_state', method='fake_method', instance_uuid='fake') self.stub_out('nova.compute.api.API.snapshot', snapshot) body = { "createImage": { "name": "test_snapshot", }, } self.assertRaises(webob.exc.HTTPConflict, self.controller._action_create_image, self.req, FAKE_UUID, body=body) @mock.patch('nova.objects.Service.get_by_host_and_binary') @mock.patch('nova.api.openstack.common.' 'instance_has_port_with_resource_request', return_value=True) def test_resize_with_bandwidth_from_old_compute_not_supported( self, mock_has_res_req, mock_get_service): body = dict(resize=dict(flavorRef="http://localhost/3")) mock_get_service.return_value = objects.Service() mock_get_service.return_value.version = 38 self.assertRaises(webob.exc.HTTPConflict, self.controller._action_resize, self.req, FAKE_UUID, body=body) mock_has_res_req.assert_called_once_with( FAKE_UUID, self.controller.network_api) mock_get_service.assert_called_once_with( self.req.environ['nova.context'], 'fake_host', 'nova-compute') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_server_diagnostics.py0000664000175000017500000001560600000000000027404 0ustar00zuulzuul00000000000000# Copyright 2011 Eldar Nugaev # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
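# The v2.48 tests in this module check that every field of the versioned
# Diagnostics object appears in the JSON response, with unset fields rendered
# as None.  Below is a minimal sketch of that "object to response dict"
# convention, using a plain stand-in class rather than nova's real
# objects.Diagnostics; the class and field list are illustrative only.


class SketchDiagnostics(object):
    # a handful of fields for illustration only
    fields = ('state', 'driver', 'uptime', 'hypervisor_os')

    def __init__(self, **values):
        self._values = values

    def as_response(self):
        # every declared field shows up in the payload; unset ones become None
        return {name: self._values.get(name) for name in self.fields}


# SketchDiagnostics(state='running', driver='libvirt').as_response()
# -> {'state': 'running', 'driver': 'libvirt', 'uptime': None,
#     'hypervisor_os': None}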
import mock from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from nova.api.openstack import wsgi as os_wsgi from nova.compute import api as compute_api from nova import exception from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes UUID = uuids.abc def fake_instance_get(self, _context, instance_uuid, expected_attrs=None, cell_down_support=False): if instance_uuid != UUID: raise Exception("Invalid UUID") return objects.Instance(uuid=instance_uuid, project_id=_context.project_id, host='123') class ServerDiagnosticsTestV21(test.NoDBTestCase): mock_diagnostics_method = 'get_diagnostics' api_version = '2.1' def _setup_router(self): self.router = fakes.wsgi_app_v21() def _get_request(self): return fakes.HTTPRequest.blank( '/v2/%s/servers/%s/diagnostics' % (fakes.FAKE_PROJECT_ID, UUID), version=self.api_version, headers = {os_wsgi.API_VERSION_REQUEST_HEADER: 'compute %s' % self.api_version}) def setUp(self): super(ServerDiagnosticsTestV21, self).setUp() self._setup_router() @mock.patch.object(compute_api.API, 'get', fake_instance_get) def _test_get_diagnostics(self, expected, return_value): req = self._get_request() with mock.patch.object(compute_api.API, self.mock_diagnostics_method, return_value=return_value): res = req.get_response(self.router) output = jsonutils.loads(res.body) self.assertEqual(expected, output) def test_get_diagnostics(self): diagnostics = {'data': 'Some diagnostics info'} self._test_get_diagnostics(diagnostics, diagnostics) @mock.patch.object(compute_api.API, 'get', side_effect=exception.InstanceNotFound(instance_id=UUID)) def test_get_diagnostics_with_non_existed_instance(self, mock_get): req = self._get_request() res = req.get_response(self.router) self.assertEqual(res.status_int, 404) @mock.patch.object(compute_api.API, 'get', fake_instance_get) def test_get_diagnostics_raise_conflict_on_invalid_state(self): req = self._get_request() with mock.patch.object(compute_api.API, self.mock_diagnostics_method, side_effect=exception.InstanceInvalidState('fake message')): res = req.get_response(self.router) self.assertEqual(409, res.status_int) @mock.patch.object(compute_api.API, 'get', fake_instance_get) def test_get_diagnostics_raise_instance_not_ready(self): req = self._get_request() with mock.patch.object(compute_api.API, self.mock_diagnostics_method, side_effect=exception.InstanceNotReady('fake message')): res = req.get_response(self.router) self.assertEqual(409, res.status_int) @mock.patch.object(compute_api.API, 'get', fake_instance_get) def test_get_diagnostics_raise_no_notimplementederror(self): req = self._get_request() with mock.patch.object(compute_api.API, self.mock_diagnostics_method, side_effect=NotImplementedError): res = req.get_response(self.router) self.assertEqual(501, res.status_int) class ServerDiagnosticsTestV248(ServerDiagnosticsTestV21): mock_diagnostics_method = 'get_instance_diagnostics' api_version = '2.48' def test_get_diagnostics(self): return_value = objects.Diagnostics( config_drive=False, state='running', driver='libvirt', uptime=5, hypervisor='hypervisor', # hypervisor_os is unset cpu_details=[ objects.CpuDiagnostics(id=0, time=1111, utilisation=11), objects.CpuDiagnostics(id=1, time=None, utilisation=22), objects.CpuDiagnostics(id=2, time=3333, utilisation=None), objects.CpuDiagnostics(id=None, time=4444, utilisation=44)], nic_details=[objects.NicDiagnostics( mac_address='de:ad:be:ef:00:01', rx_drop=1, rx_errors=2, rx_octets=3, rx_packets=4, rx_rate=5, 
tx_drop=6, tx_errors=7, tx_octets=8, # tx_packets is unset tx_rate=None)], disk_details=[objects.DiskDiagnostics( errors_count=1, read_bytes=2, read_requests=3, # write_bytes is unset write_requests=None)], num_cpus=4, num_disks=1, num_nics=1, memory_details=objects.MemoryDiagnostics(maximum=8192, used=3072)) expected = { 'config_drive': False, 'state': 'running', 'driver': 'libvirt', 'uptime': 5, 'hypervisor': 'hypervisor', 'hypervisor_os': None, 'cpu_details': [{'id': 0, 'time': 1111, 'utilisation': 11}, {'id': 1, 'time': None, 'utilisation': 22}, {'id': 2, 'time': 3333, 'utilisation': None}, {'id': None, 'time': 4444, 'utilisation': 44}], 'nic_details': [{'mac_address': 'de:ad:be:ef:00:01', 'rx_drop': 1, 'rx_errors': 2, 'rx_octets': 3, 'rx_packets': 4, 'rx_rate': 5, 'tx_drop': 6, 'tx_errors': 7, 'tx_octets': 8, 'tx_packets': None, 'tx_rate': None}], 'disk_details': [{'errors_count': 1, 'read_bytes': 2, 'read_requests': 3, 'write_bytes': None, 'write_requests': None}], 'num_cpus': 4, 'num_disks': 1, 'num_nics': 1, 'memory_details': {'maximum': 8192, 'used': 3072}} self._test_get_diagnostics(expected, return_value) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_server_external_events.py0000664000175000017500000003046700000000000030305 0ustar00zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures as fx import mock from oslo_utils.fixture import uuidsentinel as uuids import six from nova.api.openstack.compute import server_external_events \ as server_external_events_v21 from nova import exception from nova import objects from nova.objects import instance as instance_obj from nova import test from nova.tests import fixtures from nova.tests.unit.api.openstack import fakes fake_instances = { '00000000-0000-0000-0000-000000000001': objects.Instance(id=1, uuid='00000000-0000-0000-0000-000000000001', host='host1'), '00000000-0000-0000-0000-000000000002': objects.Instance(id=2, uuid='00000000-0000-0000-0000-000000000002', host='host1'), '00000000-0000-0000-0000-000000000003': objects.Instance(id=3, uuid='00000000-0000-0000-0000-000000000003', host='host2'), '00000000-0000-0000-0000-000000000004': objects.Instance(id=4, uuid='00000000-0000-0000-0000-000000000004', host=None), } fake_instance_uuids = sorted(fake_instances.keys()) MISSING_UUID = '00000000-0000-0000-0000-000000000005' fake_cells = [objects.CellMapping(uuid=uuids.cell1, database_connection="db1"), objects.CellMapping(uuid=uuids.cell2, database_connection="db2")] fake_instance_mappings = [ objects.InstanceMapping(cell_mapping=fake_cells[instance.id % 2], instance_uuid=instance.uuid) for instance in fake_instances.values()] @classmethod def fake_get_by_filters(cls, context, filters, expected_attrs=None): if expected_attrs: # This is a regression check for bug 1645479. 
expected_attrs_set = set(expected_attrs) full_expected_attrs_set = set(instance_obj.INSTANCE_OPTIONAL_ATTRS) assert expected_attrs_set.issubset(full_expected_attrs_set), \ ('%s is not a subset of %s' % (expected_attrs_set, full_expected_attrs_set)) return objects.InstanceList(objects=[ inst for inst in fake_instances.values() if inst.uuid in filters['uuid']]) @classmethod def fake_get_by_instance_uuids(cls, context, uuids): mappings = [im for im in fake_instance_mappings if im.instance_uuid in uuids] return objects.InstanceMappingList(objects=mappings) @mock.patch('nova.objects.InstanceMappingList.get_by_instance_uuids', fake_get_by_instance_uuids) @mock.patch('nova.objects.InstanceList.get_by_filters', fake_get_by_filters) class ServerExternalEventsTestV21(test.NoDBTestCase): server_external_events = server_external_events_v21 invalid_error = exception.ValidationError wsgi_api_version = '2.1' def setUp(self): super(ServerExternalEventsTestV21, self).setUp() self.api = \ self.server_external_events.ServerExternalEventsController() self.event_1 = {'name': 'network-vif-plugged', 'tag': 'foo', 'server_uuid': fake_instance_uuids[0], 'status': 'completed'} self.event_2 = {'name': 'network-changed', 'server_uuid': fake_instance_uuids[1]} self.default_body = {'events': [self.event_1, self.event_2]} self.resp_event_1 = dict(self.event_1) self.resp_event_1['code'] = 200 self.resp_event_2 = dict(self.event_2) self.resp_event_2['code'] = 200 self.resp_event_2['status'] = 'completed' self.default_resp_body = {'events': [self.resp_event_1, self.resp_event_2]} self.req = fakes.HTTPRequest.blank('', use_admin_context=True, version=self.wsgi_api_version) def _assert_call(self, body, expected_uuids, expected_events): with mock.patch.object(self.api.compute_api, 'external_instance_event') as api_method: response = self.api.create(self.req, body=body) result = response.obj code = response._code if expected_events: self.assertEqual(1, api_method.call_count) call = api_method.call_args_list[0] args = call[0] call_instances = args[1] call_events = args[2] else: self.assertEqual(0, api_method.call_count) call_instances = [] call_events = [] self.assertEqual(set(expected_uuids), set([instance.uuid for instance in call_instances])) self.assertEqual(len(expected_uuids), len(call_instances)) self.assertEqual(set(expected_events), set([event.name for event in call_events])) self.assertEqual(len(expected_events), len(call_events)) return result, code def test_create(self): result, code = self._assert_call(self.default_body, fake_instance_uuids[:2], ['network-vif-plugged', 'network-changed']) self.assertEqual(self.default_resp_body, result) self.assertEqual(200, code) def test_create_one_bad_instance(self): body = self.default_body body['events'][1]['server_uuid'] = MISSING_UUID result, code = self._assert_call(body, [fake_instance_uuids[0]], ['network-vif-plugged']) self.assertEqual('failed', result['events'][1]['status']) self.assertEqual(200, result['events'][0]['code']) self.assertEqual(404, result['events'][1]['code']) self.assertEqual(207, code) def test_create_event_instance_has_no_host(self): body = self.default_body body['events'][0]['server_uuid'] = fake_instance_uuids[-1] # the instance without host should not be passed to the compute layer result, code = self._assert_call(body, [fake_instance_uuids[1]], ['network-changed']) self.assertEqual(422, result['events'][0]['code']) self.assertEqual('failed', result['events'][0]['status']) self.assertEqual(200, result['events'][1]['code']) self.assertEqual(207, 
code) def test_create_no_good_instances(self): """Always 207 with granular codes even if all fail; see bug 1855752.""" body = self.default_body body['events'][0]['server_uuid'] = MISSING_UUID body['events'][1]['server_uuid'] = fake_instance_uuids[-1] result, code = self._assert_call(body, [], []) self.assertEqual(404, result['events'][0]['code']) self.assertEqual(422, result['events'][1]['code']) self.assertEqual(207, code) def test_create_bad_status(self): body = self.default_body body['events'][1]['status'] = 'foo' self.assertRaises(self.invalid_error, self.api.create, self.req, body=body) def test_create_extra_gorp(self): body = self.default_body body['events'][0]['foobar'] = 'bad stuff' self.assertRaises(self.invalid_error, self.api.create, self.req, body=body) def test_create_bad_events(self): body = {'events': 'foo'} self.assertRaises(self.invalid_error, self.api.create, self.req, body=body) def test_create_bad_body(self): body = {'foo': 'bar'} self.assertRaises(self.invalid_error, self.api.create, self.req, body=body) def test_create_unknown_events(self): self.event_1['name'] = 'unkown_event' body = {'events': self.event_1} self.assertRaises(self.invalid_error, self.api.create, self.req, body=body) @mock.patch('nova.objects.InstanceMappingList.get_by_instance_uuids', fake_get_by_instance_uuids) @mock.patch('nova.objects.InstanceList.get_by_filters', fake_get_by_filters) class ServerExternalEventsTestV251(ServerExternalEventsTestV21): wsgi_api_version = '2.51' def test_create_with_missing_tag(self): body = self.default_body body['events'][1]['name'] = 'volume-extended' result, code = self._assert_call(body, [fake_instance_uuids[0]], ['network-vif-plugged']) self.assertEqual(200, result['events'][0]['code']) self.assertEqual('completed', result['events'][0]['status']) self.assertEqual(400, result['events'][1]['code']) self.assertEqual('failed', result['events'][1]['status']) self.assertEqual(207, code) @mock.patch('nova.objects.InstanceMappingList.get_by_instance_uuids', fake_get_by_instance_uuids) @mock.patch('nova.objects.InstanceList.get_by_filters', fake_get_by_filters) class ServerExternalEventsTestV276(ServerExternalEventsTestV21): wsgi_api_version = '2.76' def setUp(self): super(ServerExternalEventsTestV276, self).setUp() self.useFixture(fx.EnvironmentVariable('OS_DEBUG', '1')) self.stdlog = self.useFixture(fixtures.StandardLogging()) def test_create_with_missing_tag(self): body = self.default_body body['events'][0]['name'] = 'power-update' body['events'][1]['name'] = 'power-update' result, code = self._assert_call(body, [fake_instance_uuids[0]], ['power-update']) msg = "Event tag is missing for instance" self.assertIn(msg, self.stdlog.logger.output) self.assertEqual(200, result['events'][0]['code']) self.assertEqual('completed', result['events'][0]['status']) self.assertEqual(400, result['events'][1]['code']) self.assertEqual('failed', result['events'][1]['status']) self.assertEqual(207, code) def test_create_event_auth_pre_2_76_fails(self): # Negative test to make sure you can't create 'power-update' # before 2.76. 
body = self.default_body body['events'][0]['name'] = 'power-update' body['events'][1]['name'] = 'power-update' req = fakes.HTTPRequestV21.blank( '/os-server-external-events', version='2.75') exp = self.assertRaises( exception.ValidationError, self.api.create, req, body=body) self.assertIn('Invalid input for field/attribute name.', six.text_type(exp)) @mock.patch('nova.objects.InstanceMappingList.get_by_instance_uuids', fake_get_by_instance_uuids) @mock.patch('nova.objects.InstanceList.get_by_filters', fake_get_by_filters) class ServerExternalEventsTestV282(ServerExternalEventsTestV21): wsgi_api_version = '2.82' def setUp(self): super(ServerExternalEventsTestV282, self).setUp() self.useFixture(fx.EnvironmentVariable('OS_DEBUG', '1')) self.stdlog = self.useFixture(fixtures.StandardLogging()) def test_accelerator_request_bound_event(self): body = self.default_body event_name = 'accelerator-request-bound' body['events'][0]['name'] = event_name # event 0 has a tag body['events'][1]['name'] = event_name # event 1 has no tag result, code = self._assert_call( body, [fake_instance_uuids[0]], [event_name]) self.assertEqual(200, result['events'][0]['code']) self.assertEqual('completed', result['events'][0]['status']) msg = "Event tag is missing for instance" self.assertIn(msg, self.stdlog.logger.output) self.assertEqual(400, result['events'][1]['code']) self.assertEqual('failed', result['events'][1]['status']) self.assertEqual(207, code) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_server_group_quotas.py0000664000175000017500000001423700000000000027624 0ustar00zuulzuul00000000000000# Copyright 2014 Hewlett-Packard Development Company, L.P # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
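# The quota tests in this module create server groups in a loop until the
# configured limit (CONF.quota.server_groups) is reached and expect the next
# create to fail with HTTP 403.  Below is a minimal sketch of that limit
# check; the QuotaExceeded class and the in-memory store are illustrative
# only and are not nova's real quota engine.


class QuotaExceeded(Exception):
    pass


def create_group_with_quota(store, project_id, limit):
    # count what the project already owns before allowing another create
    groups = store.setdefault(project_id, [])
    if len(groups) >= limit:
        raise QuotaExceeded('server_groups quota (%d) reached' % limit)
    groups.append(object())


# Calling create_group_with_quota(store, 'p1', 10) an eleventh time raises
# QuotaExceeded, which the API layer reports as webob.exc.HTTPForbidden.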
import mock from oslo_config import cfg from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import uuidutils import webob from nova.api.openstack.compute import server_groups as sg_v21 from nova import context from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes CONF = cfg.CONF class AttrDict(dict): def __getattr__(self, k): return self[k] def server_group_template(**kwargs): sgroup = kwargs.copy() sgroup.setdefault('name', 'test') return sgroup def server_group_db(sg): attrs = sg.copy() if 'id' in attrs: attrs['uuid'] = attrs.pop('id') if 'policies' in attrs: policies = attrs.pop('policies') attrs['policies'] = policies else: attrs['policies'] = [] if 'members' in attrs: members = attrs.pop('members') attrs['members'] = members else: attrs['members'] = [] if 'metadata' in attrs: attrs['metadetails'] = attrs.pop('metadata') else: attrs['metadetails'] = {} attrs['deleted'] = 0 attrs['deleted_at'] = None attrs['created_at'] = None attrs['updated_at'] = None if 'user_id' not in attrs: attrs['user_id'] = fakes.FAKE_USER_ID if 'project_id' not in attrs: attrs['project_id'] = fakes.FAKE_PROJECT_ID attrs['id'] = 7 return AttrDict(attrs) class ServerGroupQuotasTestV21(test.TestCase): def setUp(self): super(ServerGroupQuotasTestV21, self).setUp() self._setup_controller() self.req = fakes.HTTPRequest.blank('') def _setup_controller(self): self.controller = sg_v21.ServerGroupController() def _setup_quotas(self): pass def _assert_server_groups_in_use(self, project_id, user_id, in_use): ctxt = context.get_admin_context() counts = objects.InstanceGroupList.get_counts(ctxt, project_id, user_id=user_id) self.assertEqual(in_use, counts['project']['server_groups']) self.assertEqual(in_use, counts['user']['server_groups']) def test_create_server_group_normal(self): self._setup_quotas() sgroup = server_group_template() policies = ['anti-affinity'] sgroup['policies'] = policies res_dict = self.controller.create(self.req, body={'server_group': sgroup}) self.assertEqual(res_dict['server_group']['name'], 'test') self.assertTrue(uuidutils.is_uuid_like(res_dict['server_group']['id'])) self.assertEqual(res_dict['server_group']['policies'], policies) def test_create_server_group_quota_limit(self): self._setup_quotas() sgroup = server_group_template() policies = ['anti-affinity'] sgroup['policies'] = policies # Start by creating as many server groups as we're allowed to. for i in range(CONF.quota.server_groups): self.controller.create(self.req, body={'server_group': sgroup}) # Then, creating a server group should fail. 
self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, self.req, body={'server_group': sgroup}) @mock.patch('nova.objects.Quotas.check_deltas') def test_create_server_group_recheck_disabled(self, mock_check): self.flags(recheck_quota=False, group='quota') self._setup_quotas() sgroup = server_group_template() policies = ['anti-affinity'] sgroup['policies'] = policies self.controller.create(self.req, body={'server_group': sgroup}) ctxt = self.req.environ['nova.context'] mock_check.assert_called_once_with(ctxt, {'server_groups': 1}, ctxt.project_id, ctxt.user_id) def test_delete_server_group_by_admin(self): self._setup_quotas() sgroup = server_group_template() policies = ['anti-affinity'] sgroup['policies'] = policies res = self.controller.create(self.req, body={'server_group': sgroup}) sg_id = res['server_group']['id'] context = self.req.environ['nova.context'] self._assert_server_groups_in_use(context.project_id, context.user_id, 1) # Delete the server group we've just created. req = fakes.HTTPRequest.blank('', use_admin_context=True) self.controller.delete(req, sg_id) # Make sure the quota in use has been released. self._assert_server_groups_in_use(context.project_id, context.user_id, 0) @mock.patch('nova.objects.InstanceGroup.destroy') def test_delete_server_group_by_id(self, mock_destroy): self._setup_quotas() sg = server_group_template(id=uuids.sg1_id) def return_server_group(_cls, context, group_id): self.assertEqual(sg['id'], group_id) return objects.InstanceGroup(**server_group_db(sg)) self.stub_out('nova.objects.InstanceGroup.get_by_uuid', return_server_group) resp = self.controller.delete(self.req, uuids.sg1_id) mock_destroy.assert_called_once_with() # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. if isinstance(self.controller, sg_v21.ServerGroupController): status_int = self.controller.delete.wsgi_code else: status_int = resp.status_int self.assertEqual(204, status_int) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_server_groups.py0000664000175000017500000011212000000000000026401 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Cisco Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
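# Several tests in this module cover the 2.64 microversion change in the
# create request schema: before 2.64 the body carries a "policies" list, from
# 2.64 on it carries a single "policy" string plus an optional "rules" dict
# such as {'max_server_per_host': 3}.  Below is a hedged sketch of building
# the body for either side of that boundary; this helper is illustrative only
# and is not part of nova.


def sketch_server_group_body(name, policy, rules=None, microversion=(2, 64)):
    if microversion < (2, 64):
        # pre-2.64 payloads use a one-element "policies" list
        return {'server_group': {'name': name, 'policies': [policy]}}
    group = {'name': name, 'policy': policy}
    if rules:
        group['rules'] = rules
    return {'server_group': group}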
import copy import mock from oslo_utils.fixture import uuidsentinel from oslo_utils import uuidutils import six import webob from nova.api.openstack import api_version_request as avr from nova.api.openstack.compute import server_groups as sg_v21 from nova import context from nova import exception from nova import objects from nova.policies import server_groups as sg_policies from nova import test from nova.tests import fixtures from nova.tests.unit.api.openstack import fakes from nova.tests.unit import policy_fixture class AttrDict(dict): def __getattr__(self, k): return self[k] def server_group_template(**kwargs): sgroup = kwargs.copy() sgroup.setdefault('name', 'test') return sgroup def server_group_resp_template(**kwargs): sgroup = kwargs.copy() sgroup.setdefault('name', 'test') if 'policy' not in kwargs: sgroup.setdefault('policies', []) sgroup.setdefault('members', []) return sgroup def server_group_db(sg): attrs = copy.deepcopy(sg) if 'id' in attrs: attrs['uuid'] = attrs.pop('id') if 'policies' in attrs: policies = attrs.pop('policies') attrs['policies'] = policies else: attrs['policies'] = [] if 'policy' in attrs: del attrs['policies'] if 'members' in attrs: members = attrs.pop('members') attrs['members'] = members else: attrs['members'] = [] attrs['deleted'] = 0 attrs['deleted_at'] = None attrs['created_at'] = None attrs['updated_at'] = None if 'user_id' not in attrs: attrs['user_id'] = fakes.FAKE_USER_ID if 'project_id' not in attrs: attrs['project_id'] = fakes.FAKE_PROJECT_ID attrs['id'] = 7 return AttrDict(attrs) class ServerGroupTestV21(test.NoDBTestCase): USES_DB_SELF = True validation_error = exception.ValidationError wsgi_api_version = '2.1' def setUp(self): super(ServerGroupTestV21, self).setUp() self._setup_controller() self.req = fakes.HTTPRequest.blank('') self.admin_req = fakes.HTTPRequest.blank('', use_admin_context=True) self.foo_req = fakes.HTTPRequest.blank('', project_id='foo') self.policy = self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(fixtures.Database(database='api')) cells = fixtures.CellDatabases() cells.add_cell_database(uuidsentinel.cell1) cells.add_cell_database(uuidsentinel.cell2) self.useFixture(cells) ctxt = context.get_admin_context() self.cells = {} for uuid in (uuidsentinel.cell1, uuidsentinel.cell2): cm = objects.CellMapping(context=ctxt, uuid=uuid, database_connection=uuid, transport_url=uuid) cm.create() self.cells[cm.uuid] = cm def _setup_controller(self): self.controller = sg_v21.ServerGroupController() def test_create_server_group_with_no_policies(self): sgroup = server_group_template() self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) def _create_server_group_normal(self, policies=None, policy=None, rules=None): sgroup = server_group_template() sgroup['policies'] = policies res_dict = self.controller.create(self.req, body={'server_group': sgroup}) self.assertEqual(res_dict['server_group']['name'], 'test') self.assertTrue(uuidutils.is_uuid_like(res_dict['server_group']['id'])) self.assertEqual(res_dict['server_group']['policies'], policies) def test_create_server_group_with_new_policy_before_264(self): req = fakes.HTTPRequest.blank('', version='2.63') policy = 'anti-affinity' rules = {'max_server_per_host': 3} # 'policy' isn't an acceptable request key before 2.64 sgroup = server_group_template(policy=policy) result = self.assertRaises( self.validation_error, self.controller.create, req, body={'server_group': sgroup}) self.assertIn( "Invalid input for field/attribute 
server_group", six.text_type(result) ) # 'rules' isn't an acceptable request key before 2.64 sgroup = server_group_template(rules=rules) result = self.assertRaises( self.validation_error, self.controller.create, req, body={'server_group': sgroup}) self.assertIn( "Invalid input for field/attribute server_group", six.text_type(result) ) def test_create_server_group(self): policies = ['affinity', 'anti-affinity'] for policy in policies: self._create_server_group_normal(policies=[policy]) def test_create_server_group_rbac_default(self): sgroup = server_group_template() sgroup['policies'] = ['affinity'] # test as admin self.controller.create(self.admin_req, body={'server_group': sgroup}) # test as non-admin self.controller.create(self.req, body={'server_group': sgroup}) def test_create_server_group_rbac_admin_only(self): sgroup = server_group_template() sgroup['policies'] = ['affinity'] # override policy to restrict to admin rule_name = sg_policies.POLICY_ROOT % 'create' rules = {rule_name: 'is_admin:True'} self.policy.set_rules(rules, overwrite=False) # check for success as admin self.controller.create(self.admin_req, body={'server_group': sgroup}) # check for failure as non-admin exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller.create, self.req, body={'server_group': sgroup}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def _create_instance(self, ctx, cell): with context.target_cell(ctx, cell) as cctx: instance = objects.Instance(context=cctx, image_ref=uuidsentinel.fake_image_ref, node='node1', reservation_id='a', host='host1', project_id=fakes.FAKE_PROJECT_ID, vm_state='fake', system_metadata={'key': 'value'}) instance.create() im = objects.InstanceMapping(context=ctx, project_id=ctx.project_id, user_id=ctx.user_id, cell_mapping=cell, instance_uuid=instance.uuid) im.create() return instance def _create_instance_group(self, context, members): ig = objects.InstanceGroup(context=context, name='fake_name', user_id='fake_user', project_id=fakes.FAKE_PROJECT_ID, members=members) ig.create() return ig.uuid def _create_groups_and_instances(self, ctx): cell1 = self.cells[uuidsentinel.cell1] cell2 = self.cells[uuidsentinel.cell2] instances = [self._create_instance(ctx, cell=cell1), self._create_instance(ctx, cell=cell2), self._create_instance(ctx, cell=None)] members = [instance.uuid for instance in instances] ig_uuid = self._create_instance_group(ctx, members) return (ig_uuid, instances, members) def _test_list_server_group_all(self, api_version='2.1'): self._test_list_server_group(api_version=api_version, limited='', path='/os-server-groups?all_projects=True') def _test_list_server_group_offset_and_limit(self, api_version='2.1'): self._test_list_server_group(api_version=api_version, limited='&offset=1&limit=1', path='/os-server-groups?all_projects=True') @mock.patch('nova.objects.InstanceGroupList.get_by_project_id') @mock.patch('nova.objects.InstanceGroupList.get_all') def _test_list_server_group(self, mock_get_all, mock_get_by_project, path, api_version='2.1', limited=None): policies = ['anti-affinity'] policy = "anti-affinity" members = [] metadata = {} # always empty names = ['default-x', 'test'] p_id = fakes.FAKE_PROJECT_ID u_id = fakes.FAKE_USER_ID ver = avr.APIVersionRequest(api_version) if ver >= avr.APIVersionRequest("2.64"): sg1 = server_group_resp_template(id=uuidsentinel.sg1_id, name=names[0], policy=policy, rules={}, members=members, project_id=p_id, user_id=u_id) sg2 = 
server_group_resp_template(id=uuidsentinel.sg2_id, name=names[1], policy=policy, rules={}, members=members, project_id=p_id, user_id=u_id) elif ver >= avr.APIVersionRequest("2.13"): sg1 = server_group_resp_template(id=uuidsentinel.sg1_id, name=names[0], policies=policies, members=members, metadata=metadata, project_id=p_id, user_id=u_id) sg2 = server_group_resp_template(id=uuidsentinel.sg2_id, name=names[1], policies=policies, members=members, metadata=metadata, project_id=p_id, user_id=u_id) else: sg1 = server_group_resp_template(id=uuidsentinel.sg1_id, name=names[0], policies=policies, members=members, metadata=metadata) sg2 = server_group_resp_template(id=uuidsentinel.sg2_id, name=names[1], policies=policies, members=members, metadata=metadata) tenant_groups = [sg2] all_groups = [sg1, sg2] if limited: all = {'server_groups': [sg2]} tenant_specific = {'server_groups': []} else: all = {'server_groups': all_groups} tenant_specific = {'server_groups': tenant_groups} def return_all_server_groups(): return objects.InstanceGroupList( objects=[objects.InstanceGroup( **server_group_db(sg)) for sg in all_groups]) mock_get_all.return_value = return_all_server_groups() def return_tenant_server_groups(): return objects.InstanceGroupList( objects=[objects.InstanceGroup( **server_group_db(sg)) for sg in tenant_groups]) mock_get_by_project.return_value = return_tenant_server_groups() path = path or '/os-server-groups?all_projects=True' if limited: path += limited req = fakes.HTTPRequest.blank(path, version=api_version) admin_req = fakes.HTTPRequest.blank(path, use_admin_context=True, version=api_version) # test as admin res_dict = self.controller.index(admin_req) self.assertEqual(all, res_dict) # test as non-admin res_dict = self.controller.index(req) self.assertEqual(tenant_specific, res_dict) @mock.patch('nova.objects.InstanceGroupList.get_by_project_id') def _test_list_server_group_by_tenant(self, mock_get_by_project, api_version='2.1'): policies = ['anti-affinity'] members = [] metadata = {} # always empty names = ['default-x', 'test'] p_id = fakes.FAKE_PROJECT_ID u_id = fakes.FAKE_USER_ID if api_version >= '2.13': sg1 = server_group_resp_template(id=uuidsentinel.sg1_id, name=names[0], policies=policies, members=members, metadata=metadata, project_id=p_id, user_id=u_id) sg2 = server_group_resp_template(id=uuidsentinel.sg2_id, name=names[1], policies=policies, members=members, metadata=metadata, project_id=p_id, user_id=u_id) else: sg1 = server_group_resp_template(id=uuidsentinel.sg1_id, name=names[0], policies=policies, members=members, metadata=metadata) sg2 = server_group_resp_template(id=uuidsentinel.sg2_id, name=names[1], policies=policies, members=members, metadata=metadata) groups = [sg1, sg2] expected = {'server_groups': groups} def return_server_groups(): return objects.InstanceGroupList( objects=[objects.InstanceGroup( **server_group_db(sg)) for sg in groups]) return_get_by_project = return_server_groups() mock_get_by_project.return_value = return_get_by_project path = '/os-server-groups' req = fakes.HTTPRequest.blank(path, version=api_version) res_dict = self.controller.index(req) self.assertEqual(expected, res_dict) def test_display_members(self): ctx = context.RequestContext('fake_user', fakes.FAKE_PROJECT_ID) (ig_uuid, instances, members) = self._create_groups_and_instances(ctx) res_dict = self.controller.show(self.req, ig_uuid) result_members = res_dict['server_group']['members'] self.assertEqual(3, len(result_members)) for member in members: self.assertIn(member, result_members) def 
test_display_members_with_nonexistent_group(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, uuidsentinel.group) def test_display_active_members_only(self): ctx = context.RequestContext('fake_user', fakes.FAKE_PROJECT_ID) (ig_uuid, instances, members) = self._create_groups_and_instances(ctx) # delete an instance im = objects.InstanceMapping.get_by_instance_uuid(ctx, instances[1].uuid) with context.target_cell(ctx, im.cell_mapping) as cctxt: instances[1]._context = cctxt instances[1].destroy() # check that the instance does not exist self.assertRaises(exception.InstanceNotFound, objects.Instance.get_by_uuid, ctx, instances[1].uuid) res_dict = self.controller.show(self.req, ig_uuid) result_members = res_dict['server_group']['members'] # check that only the active instance is displayed self.assertEqual(2, len(result_members)) self.assertIn(instances[0].uuid, result_members) def test_display_members_rbac_default(self): ctx = context.RequestContext('fake_user', fakes.FAKE_PROJECT_ID) ig_uuid = self._create_groups_and_instances(ctx)[0] # test as admin self.controller.show(self.admin_req, ig_uuid) # test as non-admin, same project self.controller.show(self.req, ig_uuid) # test as non-admin, different project self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.foo_req, ig_uuid) def test_display_members_rbac_admin_only(self): ctx = context.RequestContext('fake_user', fakes.FAKE_PROJECT_ID) ig_uuid = self._create_groups_and_instances(ctx)[0] # override policy to restrict to admin rule_name = sg_policies.POLICY_ROOT % 'show' rules = {rule_name: 'is_admin:True'} self.policy.set_rules(rules, overwrite=False) # check for success as admin self.controller.show(self.admin_req, ig_uuid) # check for failure as non-admin exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller.show, self.req, ig_uuid) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_create_server_group_with_non_alphanumeric_in_name(self): # The fix for bug #1434335 expanded the allowable character set # for server group names to include non-alphanumeric characters # if they are printable. 
sgroup = server_group_template(name='good* $%name', policies=['affinity']) res_dict = self.controller.create(self.req, body={'server_group': sgroup}) self.assertEqual(res_dict['server_group']['name'], 'good* $%name') def test_create_server_group_with_illegal_name(self): # blank name sgroup = server_group_template(name='', policies=['test_policy']) self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) # name with length 256 sgroup = server_group_template(name='1234567890' * 26, policies=['test_policy']) self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) # non-string name sgroup = server_group_template(name=12, policies=['test_policy']) self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) # name with leading spaces sgroup = server_group_template(name=' leading spaces', policies=['test_policy']) self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) # name with trailing spaces sgroup = server_group_template(name='trailing space ', policies=['test_policy']) self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) # name with all spaces sgroup = server_group_template(name=' ', policies=['test_policy']) self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) # name with unprintable character sgroup = server_group_template(name='bad\x00name', policies=['test_policy']) self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) # name with out of range char U0001F4A9 sgroup = server_group_template(name=u"\U0001F4A9", policies=['affinity']) self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) def test_create_server_group_with_illegal_policies(self): # blank policy sgroup = server_group_template(name='fake-name', policies='') self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) # policy as integer sgroup = server_group_template(name='fake-name', policies=7) self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) # policy as string sgroup = server_group_template(name='fake-name', policies='invalid') self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) # policy as None sgroup = server_group_template(name='fake-name', policies=None) self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) def test_create_server_group_conflicting_policies(self): sgroup = server_group_template() policies = ['anti-affinity', 'affinity'] sgroup['policies'] = policies self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) def test_create_server_group_with_duplicate_policies(self): sgroup = server_group_template() policies = ['affinity', 'affinity'] sgroup['policies'] = policies self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) def test_create_server_group_not_supported(self): sgroup = server_group_template() policies = ['storage-affinity', 'anti-affinity', 'rack-affinity'] sgroup['policies'] = policies self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) def 
test_create_server_group_with_no_body(self): self.assertRaises(self.validation_error, self.controller.create, self.req, body=None) def test_create_server_group_with_no_server_group(self): body = {'no-instanceGroup': None} self.assertRaises(self.validation_error, self.controller.create, self.req, body=body) def test_list_server_group_by_tenant(self): self._test_list_server_group_by_tenant( api_version=self.wsgi_api_version) def test_list_server_group_all_v20(self): self._test_list_server_group_all(api_version='2.0') def test_list_server_group_all(self): self._test_list_server_group_all( api_version=self.wsgi_api_version) def test_list_server_group_offset_and_limit(self): self._test_list_server_group_offset_and_limit( api_version=self.wsgi_api_version) def test_list_server_groups_rbac_default(self): # test as admin self.controller.index(self.admin_req) # test as non-admin self.controller.index(self.req) def test_list_server_group_multiple_param(self): self._test_list_server_group(api_version=self.wsgi_api_version, limited='&offset=2&limit=2&limit=1&offset=1', path='/os-server-groups?all_projects=False&all_projects=True') def test_list_server_group_additional_param(self): self._test_list_server_group(api_version=self.wsgi_api_version, limited='&offset=1&limit=1', path='/os-server-groups?dummy=False&all_projects=True') def test_list_server_group_param_as_int(self): self._test_list_server_group(api_version=self.wsgi_api_version, limited='&offset=1&limit=1', path='/os-server-groups?all_projects=1') def test_list_server_group_negative_int_as_offset(self): self.assertRaises(exception.ValidationError, self._test_list_server_group, api_version=self.wsgi_api_version, limited='&offset=-1', path='/os-server-groups?all_projects=1') def test_list_server_group_string_int_as_offset(self): self.assertRaises(exception.ValidationError, self._test_list_server_group, api_version=self.wsgi_api_version, limited='&offset=dummy', path='/os-server-groups?all_projects=1') def test_list_server_group_multiparam_string_as_offset(self): self.assertRaises(exception.ValidationError, self._test_list_server_group, api_version=self.wsgi_api_version, limited='&offset=dummy&offset=1', path='/os-server-groups?all_projects=1') def test_list_server_group_negative_int_as_limit(self): self.assertRaises(exception.ValidationError, self._test_list_server_group, api_version=self.wsgi_api_version, limited='&limit=-1', path='/os-server-groups?all_projects=1') def test_list_server_group_string_int_as_limit(self): self.assertRaises(exception.ValidationError, self._test_list_server_group, api_version=self.wsgi_api_version, limited='&limit=dummy', path='/os-server-groups?all_projects=1') def test_list_server_group_multiparam_string_as_limit(self): self.assertRaises(exception.ValidationError, self._test_list_server_group, api_version=self.wsgi_api_version, limited='&limit=dummy&limit=1', path='/os-server-groups?all_projects=1') def test_list_server_groups_rbac_admin_only(self): # override policy to restrict to admin rule_name = sg_policies.POLICY_ROOT % 'index' rules = {rule_name: 'is_admin:True'} self.policy.set_rules(rules, overwrite=False) # check for success as admin self.controller.index(self.admin_req) # check for failure as non-admin exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller.index, self.req) self.assertEqual( "Policy doesn't allow %s to be performed." 
% rule_name, exc.format_message()) @mock.patch('nova.objects.InstanceGroup.destroy') def test_delete_server_group_by_id(self, mock_destroy): sg = server_group_template(id=uuidsentinel.sg1_id) def return_server_group(_cls, context, group_id): self.assertEqual(sg['id'], group_id) return objects.InstanceGroup(**server_group_db(sg)) self.stub_out('nova.objects.InstanceGroup.get_by_uuid', return_server_group) resp = self.controller.delete(self.req, uuidsentinel.sg1_id) mock_destroy.assert_called_once_with() # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. if isinstance(self.controller, sg_v21.ServerGroupController): status_int = self.controller.delete.wsgi_code else: status_int = resp.status_int self.assertEqual(204, status_int) def test_delete_non_existing_server_group(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, 'invalid') def test_delete_server_group_rbac_default(self): ctx = context.RequestContext('fake_user', fakes.FAKE_PROJECT_ID) # test as admin ig_uuid = self._create_groups_and_instances(ctx)[0] self.controller.delete(self.admin_req, ig_uuid) # test as non-admin ig_uuid = self._create_groups_and_instances(ctx)[0] self.controller.delete(self.req, ig_uuid) def test_delete_server_group_rbac_admin_only(self): ctx = context.RequestContext('fake_user', fakes.FAKE_PROJECT_ID) # override policy to restrict to admin rule_name = sg_policies.POLICY_ROOT % 'delete' rules = {rule_name: 'is_admin:True'} self.policy.set_rules(rules, overwrite=False) # check for success as admin ig_uuid = self._create_groups_and_instances(ctx)[0] self.controller.delete(self.admin_req, ig_uuid) # check for failure as non-admin ig_uuid = self._create_groups_and_instances(ctx)[0] exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller.delete, self.req, ig_uuid) self.assertEqual( "Policy doesn't allow %s to be performed." 
% rule_name, exc.format_message()) class ServerGroupTestV213(ServerGroupTestV21): wsgi_api_version = '2.13' def _setup_controller(self): self.controller = sg_v21.ServerGroupController() def test_list_server_group_all(self): self._test_list_server_group_all(api_version='2.13') def test_list_server_group_offset_and_limit(self): self._test_list_server_group_offset_and_limit(api_version='2.13') def test_list_server_group_by_tenant(self): self._test_list_server_group_by_tenant(api_version='2.13') class ServerGroupTestV264(ServerGroupTestV213): wsgi_api_version = '2.64' def _setup_controller(self): self.controller = sg_v21.ServerGroupController() def _create_server_group_normal(self, policies=None, policy=None, rules=None): req = fakes.HTTPRequest.blank('', version=self.wsgi_api_version) sgroup = server_group_template() sgroup['rules'] = rules or {} sgroup['policy'] = policy res_dict = self.controller.create(req, body={'server_group': sgroup}) self.assertEqual(res_dict['server_group']['name'], 'test') self.assertTrue(uuidutils.is_uuid_like(res_dict['server_group']['id'])) self.assertEqual(res_dict['server_group']['policy'], policy) self.assertEqual(res_dict['server_group']['rules'], rules or {}) return res_dict['server_group']['id'] def test_list_server_group_all(self): self._test_list_server_group_all(api_version=self.wsgi_api_version) def test_create_and_show_server_group(self): policies = ['affinity', 'anti-affinity'] for policy in policies: g_uuid = self._create_server_group_normal( policy=policy) res_dict = self._display_server_group(g_uuid) self.assertEqual(res_dict['server_group']['policy'], policy) self.assertEqual(res_dict['server_group']['rules'], {}) def _display_server_group(self, uuid): req = fakes.HTTPRequest.blank('', version=self.wsgi_api_version) group = self.controller.show(req, uuid) return group @mock.patch('nova.objects.service.get_minimum_version_all_cells', return_value=33) def test_create_and_show_server_group_with_rules(self, mock_get_v): policy = 'anti-affinity' rules = {'max_server_per_host': 3} g_uuid = self._create_server_group_normal( policy=policy, rules=rules) res_dict = self._display_server_group(g_uuid) self.assertEqual(res_dict['server_group']['policy'], policy) self.assertEqual(res_dict['server_group']['rules'], rules) def test_create_affinity_server_group_with_invalid_policy(self): req = fakes.HTTPRequest.blank('', version=self.wsgi_api_version) sgroup = server_group_template(policy='affinity', rules={'max_server_per_host': 3}) result = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, body={'server_group': sgroup}) self.assertIn("Only anti-affinity policy supports rules", six.text_type(result)) def test_create_anti_affinity_server_group_with_invalid_rules(self): req = fakes.HTTPRequest.blank('', version=self.wsgi_api_version) # A negative test for key is unknown, the value is not positive # and not integer invalid_rules = [{'unknown_key': '3'}, {'max_server_per_host': 0}, {'max_server_per_host': 'foo'}] for r in invalid_rules: sgroup = server_group_template(policy='anti-affinity', rules=r) result = self.assertRaises( self.validation_error, self.controller.create, req, body={'server_group': sgroup}) self.assertIn( "Invalid input for field/attribute", six.text_type(result) ) @mock.patch('nova.objects.service.get_minimum_version_all_cells', return_value=32) def test_create_server_group_with_low_version_compute_service(self, mock_get_v): req = fakes.HTTPRequest.blank('', version=self.wsgi_api_version) sgroup = 
server_group_template(policy='anti-affinity', rules={'max_server_per_host': 3}) result = self.assertRaises( webob.exc.HTTPConflict, self.controller.create, req, body={'server_group': sgroup}) self.assertIn("Creating an anti-affinity group with rule " "max_server_per_host > 1 is not yet supported.", six.text_type(result)) def test_create_server_group(self): policies = ['affinity', 'anti-affinity'] for policy in policies: self._create_server_group_normal(policy=policy) def test_policies_since_264(self): req = fakes.HTTPRequest.blank('', version=self.wsgi_api_version) # 'policies' isn't allowed in request >= 2.64 sgroup = server_group_template(policies=['anti-affinity']) self.assertRaises( self.validation_error, self.controller.create, req, body={'server_group': sgroup}) def test_create_server_group_without_policy(self): req = fakes.HTTPRequest.blank('', version=self.wsgi_api_version) # 'policy' is required request key in request >= 2.64 sgroup = server_group_template() self.assertRaises(self.validation_error, self.controller.create, req, body={'server_group': sgroup}) def test_create_server_group_with_illegal_policies(self): req = fakes.HTTPRequest.blank('', version=self.wsgi_api_version) # blank policy sgroup = server_group_template(policy='') self.assertRaises(self.validation_error, self.controller.create, req, body={'server_group': sgroup}) # policy as integer sgroup = server_group_template(policy=7) self.assertRaises(self.validation_error, self.controller.create, req, body={'server_group': sgroup}) # policy as string sgroup = server_group_template(policy='invalid') self.assertRaises(self.validation_error, self.controller.create, req, body={'server_group': sgroup}) # policy as None sgroup = server_group_template(policy=None) self.assertRaises(self.validation_error, self.controller.create, req, body={'server_group': sgroup}) def test_additional_params(self): req = fakes.HTTPRequest.blank('', version=self.wsgi_api_version) sgroup = server_group_template(unknown='unknown') self.assertRaises(self.validation_error, self.controller.create, req, body={'server_group': sgroup}) class ServerGroupTestV275(ServerGroupTestV264): wsgi_api_version = '2.75' def test_list_server_group_additional_param_old_version(self): self._test_list_server_group(api_version='2.74', limited='&offset=1&limit=1', path='/os-server-groups?dummy=False&all_projects=True') def test_list_server_group_additional_param(self): req = fakes.HTTPRequest.blank('/os-server-groups?dummy=False', version=self.wsgi_api_version) self.assertRaises(self.validation_error, self.controller.index, req) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_server_metadata.py0000664000175000017500000006673200000000000026663 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock from oslo_config import cfg from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils import six import webob from nova.api.openstack.compute import server_metadata \ as server_metadata_v21 from nova.compute import vm_states import nova.db.api from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes CONF = cfg.CONF def return_create_instance_metadata_max(context, server_id, metadata, delete): return stub_max_server_metadata() def return_create_instance_metadata(context, server_id, metadata, delete): return stub_server_metadata() def fake_instance_save(inst, **kwargs): inst.metadata = stub_server_metadata() inst.obj_reset_changes() def return_server_metadata(context, server_id): if not isinstance(server_id, six.string_types) or not len(server_id) == 36: msg = 'id %s must be a uuid in return server metadata' % server_id raise Exception(msg) return stub_server_metadata() def return_empty_server_metadata(context, server_id): return {} def delete_server_metadata(context, server_id, key): pass def stub_server_metadata(): metadata = { "key1": "value1", "key2": "value2", "key3": "value3", } return metadata def stub_max_server_metadata(): metadata = {"metadata": {}} for num in range(CONF.quota.metadata_items): metadata['metadata']['key%i' % num] = "blah" return metadata def return_server_nonexistent(context, server_id, columns_to_join=None, use_slave=False): raise exception.InstanceNotFound(instance_id=server_id) def fake_change_instance_metadata(self, context, instance, diff): pass class ServerMetaDataTestV21(test.TestCase): validation_ex = exception.ValidationError validation_ex_large = validation_ex def setUp(self): super(ServerMetaDataTestV21, self).setUp() metadata = stub_server_metadata() self.stub_out('nova.compute.api.API.get', fakes.fake_compute_get( **{'uuid': '0cc3346e-9fef-4445-abe6-5d2b2690ec64', 'name': 'fake', 'launched_at': timeutils.utcnow(), 'vm_state': vm_states.ACTIVE, 'metadata': metadata})) self.stub_out('nova.db.api.instance_metadata_get', return_server_metadata) self.stub_out( 'nova.compute.rpcapi.ComputeAPI.change_instance_metadata', fake_change_instance_metadata) self._set_up_resources() def _set_up_resources(self): self.controller = server_metadata_v21.ServerMetadataController() self.uuid = uuids.fake self.url = '/%s/servers/%s/metadata' % (fakes.FAKE_PROJECT_ID, self.uuid) def _get_request(self, param_url=''): return fakes.HTTPRequestV21.blank(self.url + param_url) def test_index(self): req = self._get_request() res_dict = self.controller.index(req, self.uuid) expected = { 'metadata': { 'key1': 'value1', 'key2': 'value2', 'key3': 'value3', }, } self.assertEqual(expected, res_dict) def test_index_nonexistent_server(self): self.stub_out('nova.db.api.instance_metadata_get', return_server_nonexistent) req = self._get_request() self.assertRaises(webob.exc.HTTPNotFound, self.controller.index, req, self.url) def test_index_no_data(self): self.stub_out('nova.db.api.instance_metadata_get', return_empty_server_metadata) req = self._get_request() res_dict = self.controller.index(req, self.uuid) expected = {'metadata': {}} self.assertEqual(expected, res_dict) def test_show(self): req = self._get_request('/key2') res_dict = self.controller.show(req, self.uuid, 'key2') expected = {"meta": {'key2': 'value2'}} self.assertEqual(expected, res_dict) def test_show_nonexistent_server(self): self.stub_out('nova.db.api.instance_metadata_get', return_server_nonexistent) req = 
self._get_request('/key2') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, self.uuid, 'key2') def test_show_meta_not_found(self): self.stub_out('nova.db.api.instance_metadata_get', return_empty_server_metadata) req = self._get_request('/key6') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, self.uuid, 'key6') def test_delete(self): self.stub_out('nova.db.api.instance_metadata_get', return_server_metadata) self.stub_out('nova.db.api.instance_metadata_delete', delete_server_metadata) req = self._get_request('/key2') req.method = 'DELETE' res = self.controller.delete(req, self.uuid, 'key2') self.assertIsNone(res) def test_delete_nonexistent_server(self): req = self._get_request('/key1') req.method = 'DELETE' with mock.patch('nova.compute.api.API.get', side_effect=exception.InstanceNotFound( instance_id=self.uuid)): self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, req, self.uuid, 'key1') def test_delete_meta_not_found(self): self.stub_out('nova.db.api.instance_metadata_get', return_empty_server_metadata) req = self._get_request('/key6') req.method = 'DELETE' self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, req, self.uuid, 'key6') def test_create(self): self.stub_out('nova.objects.Instance.save', fake_instance_save) req = self._get_request() req.method = 'POST' req.content_type = "application/json" body = {"metadata": {"key9": "value9"}} req.body = jsonutils.dump_as_bytes(body) res_dict = self.controller.create(req, self.uuid, body=body) body['metadata'].update({ "key1": "value1", "key2": "value2", "key3": "value3", }) self.assertEqual(body, res_dict) def test_create_empty_body(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = self._get_request() req.method = 'POST' req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.create, req, self.uuid, body=None) def test_create_item_empty_key(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = self._get_request('/key1') req.method = 'PUT' body = {"metadata": {"": "value1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.create, req, self.uuid, body=body) def test_create_item_non_dict(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = self._get_request('/key1') req.method = 'PUT' body = {"metadata": None} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.create, req, self.uuid, body=body) def test_create_item_key_too_long(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = self._get_request('/key1') req.method = 'PUT' body = {"metadata": {("a" * 260): "value1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex_large, self.controller.create, req, self.uuid, body=body) def test_create_malformed_container(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = fakes.HTTPRequest.blank(self.url + '/key1') req.method = 'PUT' body = {"meta": {}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.create, req, self.uuid, body=body) def 
test_create_malformed_data(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = fakes.HTTPRequest.blank(self.url + '/key1') req.method = 'PUT' body = {"metadata": ['asdf']} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.create, req, self.uuid, body=body) def test_create_nonexistent_server(self): req = self._get_request() req.method = 'POST' body = {"metadata": {"key1": "value1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" with mock.patch('nova.compute.api.API.get', side_effect=exception.InstanceNotFound( instance_id=self.uuid)): self.assertRaises(webob.exc.HTTPNotFound, self.controller.create, req, self.uuid, body=body) def test_update_metadata(self): self.stub_out('nova.objects.Instance.save', fake_instance_save) req = self._get_request() req.method = 'POST' req.content_type = 'application/json' expected = { 'metadata': { 'key1': 'updatedvalue', 'key29': 'newkey', } } req.body = jsonutils.dump_as_bytes(expected) response = self.controller.update_all(req, self.uuid, body=expected) self.assertEqual(expected, response) def test_update_all(self): self.stub_out('nova.objects.Instance.save', fake_instance_save) req = self._get_request() req.method = 'PUT' req.content_type = "application/json" expected = { 'metadata': { 'key10': 'value10', 'key99': 'value99', }, } req.body = jsonutils.dump_as_bytes(expected) res_dict = self.controller.update_all(req, self.uuid, body=expected) self.assertEqual(expected, res_dict) def test_update_all_empty_container(self): self.stub_out('nova.objects.Instance.save', fake_instance_save) req = self._get_request() req.method = 'PUT' req.content_type = "application/json" expected = {'metadata': {}} req.body = jsonutils.dump_as_bytes(expected) res_dict = self.controller.update_all(req, self.uuid, body=expected) self.assertEqual(expected, res_dict) def test_update_all_empty_body_item(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = fakes.HTTPRequest.blank(self.url + '/key1') req.method = 'PUT' req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.update_all, req, self.uuid, body=None) def test_update_all_with_non_dict_item(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = fakes.HTTPRequest.blank(self.url + '/bad') req.method = 'PUT' body = {"metadata": None} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.update_all, req, self.uuid, body=body) def test_update_all_malformed_container(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = self._get_request() req.method = 'PUT' req.content_type = "application/json" expected = {'meta': {}} req.body = jsonutils.dump_as_bytes(expected) self.assertRaises(self.validation_ex, self.controller.update_all, req, self.uuid, body=expected) def test_update_all_malformed_data(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = self._get_request() req.method = 'PUT' req.content_type = "application/json" expected = {'metadata': ['asdf']} req.body = jsonutils.dump_as_bytes(expected) self.assertRaises(self.validation_ex, self.controller.update_all, req, self.uuid, body=expected) def 
test_update_all_nonexistent_server(self): req = self._get_request() req.method = 'PUT' req.content_type = "application/json" body = {'metadata': {'key10': 'value10'}} req.body = jsonutils.dump_as_bytes(body) with mock.patch('nova.compute.api.API.get', side_effect=exception.InstanceNotFound( instance_id=self.uuid)): self.assertRaises(webob.exc.HTTPNotFound, self.controller.update_all, req, self.uuid, body=body) def test_update_all_non_dict(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = self._get_request() req.method = 'PUT' body = {"metadata": None} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.update_all, req, self.uuid, body=body) def test_update_item(self): self.stub_out('nova.objects.Instance.save', fake_instance_save) req = self._get_request('/key1') req.method = 'PUT' body = {"meta": {"key1": "value1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" res_dict = self.controller.update(req, self.uuid, 'key1', body=body) expected = {"meta": {'key1': 'value1'}} self.assertEqual(expected, res_dict) def test_update_item_nonexistent_server(self): req = self._get_request('/key1') req.method = 'PUT' body = {"meta": {"key1": "value1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" with mock.patch('nova.compute.api.API.get', side_effect=exception.InstanceNotFound( instance_id=self.uuid)): self.assertRaises(webob.exc.HTTPNotFound, self.controller.update, req, self.uuid, 'key1', body=body) def test_update_item_empty_body(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = self._get_request('/key1') req.method = 'PUT' req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.update, req, self.uuid, 'key1', body=None) def test_update_malformed_container(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = fakes.HTTPRequest.blank(self.url) req.method = 'PUT' expected = {'meta': {}} req.body = jsonutils.dump_as_bytes(expected) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.update, req, self.uuid, 'key1', body=expected) def test_update_malformed_data(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = fakes.HTTPRequest.blank(self.url) req.method = 'PUT' expected = {'metadata': ['asdf']} req.body = jsonutils.dump_as_bytes(expected) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.update, req, self.uuid, 'key1', body=expected) def test_update_item_empty_key(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = self._get_request('/key1') req.method = 'PUT' body = {"meta": {"": "value1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.update, req, self.uuid, '', body=body) def test_update_item_key_too_long(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = self._get_request('/key1') req.method = 'PUT' body = {"meta": {("a" * 260): "value1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex_large, 
self.controller.update, req, self.uuid, ("a" * 260), body=body) def test_update_item_value_too_long(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = self._get_request('/key1') req.method = 'PUT' body = {"meta": {"key1": ("a" * 260)}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex_large, self.controller.update, req, self.uuid, "key1", body=body) def test_update_item_too_many_keys(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = self._get_request('/key1') req.method = 'PUT' body = {"meta": {"key1": "value1", "key2": "value2"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.update, req, self.uuid, 'key1', body=body) def test_update_item_body_uri_mismatch(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = self._get_request('/bad') req.method = 'PUT' body = {"meta": {"key1": "value1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, self.uuid, 'bad', body=body) def test_update_item_non_dict(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = self._get_request('/bad') req.method = 'PUT' body = {"meta": None} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.update, req, self.uuid, 'bad', body=body) def test_update_empty_container(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = fakes.HTTPRequest.blank(self.url) req.method = 'PUT' expected = {'metadata': {}} req.body = jsonutils.dump_as_bytes(expected) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.update, req, self.uuid, 'bad', body=expected) def test_too_many_metadata_items_on_create(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) data = {"metadata": {}} for num in range(CONF.quota.metadata_items + 1): data['metadata']['key%i' % num] = "blah" req = self._get_request() req.method = 'POST' req.body = jsonutils.dump_as_bytes(data) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, req, self.uuid, body=data) def test_invalid_metadata_items_on_create(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = self._get_request() req.method = 'POST' req.headers["content-type"] = "application/json" # test for long key data = {"metadata": {"a" * 260: "value1"}} req.body = jsonutils.dump_as_bytes(data) self.assertRaises(self.validation_ex_large, self.controller.create, req, self.uuid, body=data) # test for long value data = {"metadata": {"key": "v" * 260}} req.body = jsonutils.dump_as_bytes(data) self.assertRaises(self.validation_ex_large, self.controller.create, req, self.uuid, body=data) # test for empty key. 
data = {"metadata": {"": "value1"}} req.body = jsonutils.dump_as_bytes(data) self.assertRaises(self.validation_ex, self.controller.create, req, self.uuid, body=data) def test_too_many_metadata_items_on_update_item(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) data = {"metadata": {}} for num in range(CONF.quota.metadata_items + 1): data['metadata']['key%i' % num] = "blah" req = self._get_request() req.method = 'PUT' req.body = jsonutils.dump_as_bytes(data) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPForbidden, self.controller.update_all, req, self.uuid, body=data) def test_invalid_metadata_items_on_update_item(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) data = {"metadata": {}} for num in range(CONF.quota.metadata_items + 1): data['metadata']['key%i' % num] = "blah" req = self._get_request() req.method = 'PUT' req.body = jsonutils.dump_as_bytes(data) req.headers["content-type"] = "application/json" # test for long key data = {"metadata": {"a" * 260: "value1"}} req.body = jsonutils.dump_as_bytes(data) self.assertRaises(self.validation_ex_large, self.controller.update_all, req, self.uuid, body=data) # test for long value data = {"metadata": {"key": "v" * 260}} req.body = jsonutils.dump_as_bytes(data) self.assertRaises(self.validation_ex_large, self.controller.update_all, req, self.uuid, body=data) # test for empty key. data = {"metadata": {"": "value1"}} req.body = jsonutils.dump_as_bytes(data) self.assertRaises(self.validation_ex, self.controller.update_all, req, self.uuid, body=data) class BadStateServerMetaDataTestV21(test.TestCase): def setUp(self): super(BadStateServerMetaDataTestV21, self).setUp() self.stub_out('nova.db.api.instance_metadata_get', return_server_metadata) self.stub_out( 'nova.compute.rpcapi.ComputeAPI.change_instance_metadata', fake_change_instance_metadata) self.stub_out('nova.compute.api.API.get', fakes.fake_compute_get( **{'uuid': '0cc3346e-9fef-4445-abe6-5d2b2690ec64', 'name': 'fake', 'vm_state': vm_states.BUILDING})) self.stub_out('nova.db.api.instance_metadata_delete', delete_server_metadata) self._set_up_resources() def _set_up_resources(self): self.controller = server_metadata_v21.ServerMetadataController() self.uuid = uuids.fake self.url = '/%s/servers/%s/metadata' % (fakes.FAKE_PROJECT_ID, self.uuid) def _get_request(self, param_url=''): return fakes.HTTPRequestV21.blank(self.url + param_url) def test_invalid_state_on_delete(self): req = self._get_request('/key2') req.method = 'DELETE' self.assertRaises(webob.exc.HTTPConflict, self.controller.delete, req, self.uuid, 'key2') def test_invalid_state_on_update_metadata(self): self.stub_out('nova.db.api.instance_metadata_update', return_create_instance_metadata) req = self._get_request() req.method = 'POST' req.content_type = 'application/json' expected = { 'metadata': { 'key1': 'updatedvalue', 'key29': 'newkey', } } req.body = jsonutils.dump_as_bytes(expected) self.assertRaises(webob.exc.HTTPConflict, self.controller.update_all, req, self.uuid, body=expected) @mock.patch.object(nova.compute.api.API, 'update_instance_metadata', side_effect=exception.InstanceIsLocked(instance_uuid=0)) def test_instance_lock_update_metadata(self, mock_update): req = self._get_request() req.method = 'POST' req.content_type = 'application/json' expected = { 'metadata': { 'keydummy': 'newkey', } } req.body = jsonutils.dump_as_bytes(expected) self.assertRaises(webob.exc.HTTPConflict, 
self.controller.update_all, req, self.uuid, body=expected) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_server_migrations.py0000664000175000017500000003512300000000000027245 0ustar00zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import datetime import mock from oslo_utils.fixture import uuidsentinel as uuids import webob from nova.api.openstack.compute import server_migrations from nova import exception from nova import objects from nova.objects import base from nova import test from nova.tests.unit.api.openstack import fakes SERVER_UUID = uuids.server_uuid fake_migrations = [ { 'id': 1234, 'source_node': 'node1', 'dest_node': 'node2', 'source_compute': 'compute1', 'dest_compute': 'compute2', 'dest_host': '1.2.3.4', 'status': 'running', 'instance_uuid': SERVER_UUID, 'old_instance_type_id': 1, 'new_instance_type_id': 2, 'migration_type': 'live-migration', 'hidden': False, 'memory_total': 123456, 'memory_processed': 12345, 'memory_remaining': 111111, 'disk_total': 234567, 'disk_processed': 23456, 'disk_remaining': 211111, 'created_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'deleted_at': None, 'deleted': False, 'uuid': uuids.migration1, 'cross_cell_move': False, 'user_id': None, 'project_id': None }, { 'id': 5678, 'source_node': 'node10', 'dest_node': 'node20', 'source_compute': 'compute10', 'dest_compute': 'compute20', 'dest_host': '5.6.7.8', 'status': 'running', 'instance_uuid': SERVER_UUID, 'old_instance_type_id': 5, 'new_instance_type_id': 6, 'migration_type': 'live-migration', 'hidden': False, 'memory_total': 456789, 'memory_processed': 56789, 'memory_remaining': 400000, 'disk_total': 96789, 'disk_processed': 6789, 'disk_remaining': 90000, 'created_at': datetime.datetime(2013, 10, 22, 13, 42, 2), 'updated_at': datetime.datetime(2013, 10, 22, 13, 42, 2), 'deleted_at': None, 'deleted': False, 'uuid': uuids.migration2, 'cross_cell_move': False, 'user_id': None, 'project_id': None } ] migrations_obj = base.obj_make_list( 'fake-context', objects.MigrationList(), objects.Migration, fake_migrations) class ServerMigrationsTestsV21(test.NoDBTestCase): wsgi_api_version = '2.22' def setUp(self): super(ServerMigrationsTestsV21, self).setUp() self.req = fakes.HTTPRequest.blank('', version=self.wsgi_api_version) self.context = self.req.environ['nova.context'] self.controller = server_migrations.ServerMigrationsController() self.compute_api = self.controller.compute_api def test_force_complete_succeeded(self): @mock.patch.object(self.compute_api, 'live_migrate_force_complete') @mock.patch.object(self.compute_api, 'get') def _do_test(compute_api_get, live_migrate_force_complete): self.controller._force_complete(self.req, '1', '1', body={'force_complete': None}) live_migrate_force_complete.assert_called_once_with( self.context, 
compute_api_get.return_value, '1') _do_test() def _test_force_complete_failed_with_exception(self, fake_exc, expected_exc): @mock.patch.object(self.compute_api, 'live_migrate_force_complete', side_effect=fake_exc) @mock.patch.object(self.compute_api, 'get') def _do_test(compute_api_get, live_migrate_force_complete): self.assertRaises(expected_exc, self.controller._force_complete, self.req, '1', '1', body={'force_complete': None}) _do_test() def test_force_complete_instance_not_migrating(self): self._test_force_complete_failed_with_exception( exception.InstanceInvalidState(instance_uuid='', state='', attr='', method=''), webob.exc.HTTPConflict) def test_force_complete_migration_not_found(self): self._test_force_complete_failed_with_exception( exception.MigrationNotFoundByStatus(instance_id='', status=''), webob.exc.HTTPBadRequest) def test_force_complete_instance_is_locked(self): self._test_force_complete_failed_with_exception( exception.InstanceIsLocked(instance_uuid=''), webob.exc.HTTPConflict) def test_force_complete_invalid_migration_state(self): self._test_force_complete_failed_with_exception( exception.InvalidMigrationState(migration_id='', instance_uuid='', state='', method=''), webob.exc.HTTPBadRequest) def test_force_complete_instance_not_found(self): self._test_force_complete_failed_with_exception( exception.InstanceNotFound(instance_id=''), webob.exc.HTTPNotFound) def test_force_complete_unexpected_error(self): self._test_force_complete_failed_with_exception( exception.NovaException(), webob.exc.HTTPInternalServerError) class ServerMigrationsTestsV223(ServerMigrationsTestsV21): wsgi_api_version = '2.23' def setUp(self): super(ServerMigrationsTestsV223, self).setUp() self.req = fakes.HTTPRequest.blank('', version=self.wsgi_api_version, use_admin_context=True) self.context = self.req.environ['nova.context'] @mock.patch('nova.compute.api.API.get_migrations_in_progress_by_instance') @mock.patch('nova.compute.api.API.get') def test_index(self, m_get_instance, m_get_mig): migrations = [server_migrations.output(mig) for mig in migrations_obj] migrations_in_progress = {'migrations': migrations} for mig in migrations_in_progress['migrations']: self.assertIn('id', mig) self.assertNotIn('deleted', mig) self.assertNotIn('deleted_at', mig) m_get_mig.return_value = migrations_obj response = self.controller.index(self.req, SERVER_UUID) self.assertEqual(migrations_in_progress, response) m_get_instance.assert_called_once_with(self.context, SERVER_UUID, expected_attrs=None, cell_down_support=False) @mock.patch('nova.compute.api.API.get') def test_index_invalid_instance(self, m_get_instance): m_get_instance.side_effect = exception.InstanceNotFound(instance_id=1) self.assertRaises(webob.exc.HTTPNotFound, self.controller.index, self.req, SERVER_UUID) m_get_instance.assert_called_once_with(self.context, SERVER_UUID, expected_attrs=None, cell_down_support=False) @mock.patch('nova.compute.api.API.get_migration_by_id_and_instance') @mock.patch('nova.compute.api.API.get') def test_show(self, m_get_instance, m_get_mig): migrations = [server_migrations.output(mig) for mig in migrations_obj] m_get_mig.return_value = migrations_obj[0] response = self.controller.show(self.req, SERVER_UUID, migrations_obj[0].id) self.assertEqual(migrations[0], response['migration']) m_get_instance.assert_called_once_with(self.context, SERVER_UUID, expected_attrs=None, cell_down_support=False) @mock.patch('nova.compute.api.API.get_migration_by_id_and_instance') @mock.patch('nova.compute.api.API.get') def 
test_show_migration_non_progress(self, m_get_instance, m_get_mig): non_progress_mig = copy.deepcopy(migrations_obj[0]) non_progress_mig.status = "reverted" m_get_mig.return_value = non_progress_mig self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, SERVER_UUID, non_progress_mig.id) m_get_instance.assert_called_once_with(self.context, SERVER_UUID, expected_attrs=None, cell_down_support=False) @mock.patch('nova.compute.api.API.get_migration_by_id_and_instance') @mock.patch('nova.compute.api.API.get') def test_show_migration_not_live_migration(self, m_get_instance, m_get_mig): non_progress_mig = copy.deepcopy(migrations_obj[0]) non_progress_mig.migration_type = "resize" m_get_mig.return_value = non_progress_mig self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, SERVER_UUID, non_progress_mig.id) m_get_instance.assert_called_once_with(self.context, SERVER_UUID, expected_attrs=None, cell_down_support=False) @mock.patch('nova.compute.api.API.get_migration_by_id_and_instance') @mock.patch('nova.compute.api.API.get') def test_show_migration_not_exist(self, m_get_instance, m_get_mig): m_get_mig.side_effect = exception.MigrationNotFoundForInstance( migration_id=migrations_obj[0].id, instance_id=SERVER_UUID) self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, SERVER_UUID, migrations_obj[0].id) m_get_instance.assert_called_once_with(self.context, SERVER_UUID, expected_attrs=None, cell_down_support=False) @mock.patch('nova.compute.api.API.get') def test_show_migration_invalid_instance(self, m_get_instance): m_get_instance.side_effect = exception.InstanceNotFound(instance_id=1) self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, SERVER_UUID, migrations_obj[0].id) m_get_instance.assert_called_once_with(self.context, SERVER_UUID, expected_attrs=None, cell_down_support=False) class ServerMigrationsTestsV224(ServerMigrationsTestsV21): wsgi_api_version = '2.24' def setUp(self): super(ServerMigrationsTestsV224, self).setUp() self.req = fakes.HTTPRequest.blank('', version=self.wsgi_api_version, use_admin_context=True) self.context = self.req.environ['nova.context'] def test_cancel_live_migration_succeeded(self): @mock.patch.object(self.compute_api, 'live_migrate_abort') @mock.patch.object(self.compute_api, 'get') def _do_test(mock_get, mock_abort): self.controller.delete(self.req, 'server-id', 'migration-id') mock_abort.assert_called_once_with(self.context, mock_get.return_value, 'migration-id', support_abort_in_queue=False) _do_test() def _test_cancel_live_migration_failed(self, fake_exc, expected_exc): @mock.patch.object(self.compute_api, 'live_migrate_abort', side_effect=fake_exc) @mock.patch.object(self.compute_api, 'get') def _do_test(mock_get, mock_abort): self.assertRaises(expected_exc, self.controller.delete, self.req, 'server-id', 'migration-id') _do_test() def test_cancel_live_migration_invalid_state(self): self._test_cancel_live_migration_failed( exception.InstanceInvalidState(instance_uuid='', state='', attr='', method=''), webob.exc.HTTPConflict) def test_cancel_live_migration_migration_not_found(self): self._test_cancel_live_migration_failed( exception.MigrationNotFoundForInstance(migration_id='', instance_id=''), webob.exc.HTTPNotFound) def test_cancel_live_migration_invalid_migration_state(self): self._test_cancel_live_migration_failed( exception.InvalidMigrationState(migration_id='', instance_uuid='', state='', method=''), webob.exc.HTTPBadRequest) def test_cancel_live_migration_instance_not_found(self): 
self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, 'server-id', 'migration-id') class ServerMigrationsTestsV265(ServerMigrationsTestsV224): wsgi_api_version = '2.65' def test_cancel_live_migration_succeeded(self): @mock.patch.object(self.compute_api, 'live_migrate_abort') @mock.patch.object(self.compute_api, 'get') def _do_test(mock_get, mock_abort): self.controller.delete(self.req, 'server-id', 1) mock_abort.assert_called_once_with(self.context, mock_get.return_value, 1, support_abort_in_queue=True) _do_test() class ServerMigrationsTestsV280(ServerMigrationsTestsV265): wsgi_api_version = '2.80' ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_server_password.py0000664000175000017500000000453700000000000026740 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.api.openstack.compute import server_password \ as server_password_v21 from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance class ServerPasswordTestV21(test.NoDBTestCase): content_type = 'application/json' server_password = server_password_v21 delete_call = 'self.controller.clear' def setUp(self): super(ServerPasswordTestV21, self).setUp() fakes.stub_out_nw_api(self) self.stub_out('nova.compute.api.API.get', lambda self, ctxt, *a, **kw: fake_instance.fake_instance_obj( ctxt, system_metadata={}, expected_attrs=['system_metadata'])) self.password = 'fakepass' self.controller = self.server_password.ServerPasswordController() self.fake_req = fakes.HTTPRequest.blank('') def fake_convert_password(context, password): self.password = password return {} self.stub_out('nova.api.metadata.password.extract_password', lambda i: self.password) self.stub_out('nova.api.metadata.password.convert_password', fake_convert_password) def test_get_password(self): res = self.controller.index(self.fake_req, 'fake') self.assertEqual(res['password'], 'fakepass') def test_reset_password(self): with mock.patch('nova.objects.Instance._save_flavor'): eval(self.delete_call)(self.fake_req, 'fake') self.assertEqual(eval(self.delete_call).wsgi_code, 204) res = self.controller.index(self.fake_req, 'fake') self.assertEqual(res['password'], '') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_server_reset_state.py0000664000175000017500000001240500000000000027411 0ustar00zuulzuul00000000000000# Copyright 2015 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils import uuidutils import webob from nova.api.openstack.compute import admin_actions \ as admin_actions_v21 from nova.compute import vm_states from nova import exception from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes class ResetStateTestsV21(test.NoDBTestCase): admin_act = admin_actions_v21 bad_request = exception.ValidationError def setUp(self): super(ResetStateTestsV21, self).setUp() self.uuid = uuidutils.generate_uuid() self.admin_api = self.admin_act.AdminActionsController() self.compute_api = self.admin_api.compute_api self.request = self._get_request() self.context = self.request.environ['nova.context'] self.instance = self._create_instance() def _create_instance(self): instance = objects.Instance() instance.uuid = self.uuid instance.vm_state = 'fake' instance.task_state = 'fake' instance.project_id = self.context.project_id instance.obj_reset_changes() return instance def _check_instance_state(self, expected): self.assertEqual(set(expected.keys()), self.instance.obj_what_changed()) for k, v in expected.items(): self.assertEqual(v, getattr(self.instance, k), "Instance.%s doesn't match" % k) self.instance.obj_reset_changes() def _get_request(self): return fakes.HTTPRequest.blank('') def test_no_state(self): self.assertRaises(self.bad_request, self.admin_api._reset_state, self.request, self.uuid, body={"os-resetState": None}) def test_bad_state(self): self.assertRaises(self.bad_request, self.admin_api._reset_state, self.request, self.uuid, body={"os-resetState": {"state": "spam"}}) def test_no_instance(self): self.compute_api.get = mock.MagicMock( side_effect=exception.InstanceNotFound(instance_id='inst_ud')) self.assertRaises(webob.exc.HTTPNotFound, self.admin_api._reset_state, self.request, self.uuid, body={"os-resetState": {"state": "active"}}) self.compute_api.get.assert_called_once_with( self.context, self.uuid, expected_attrs=None, cell_down_support=False) def test_reset_active(self): expected = dict(vm_state=vm_states.ACTIVE, task_state=None) self.instance.save = mock.MagicMock( side_effect=lambda **kw: self._check_instance_state(expected)) self.compute_api.get = mock.MagicMock(return_value=self.instance) body = {"os-resetState": {"state": "active"}} result = self.admin_api._reset_state(self.request, self.uuid, body=body) # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. 
if isinstance(self.admin_api, admin_actions_v21.AdminActionsController): status_int = self.admin_api._reset_state.wsgi_code else: status_int = result.status_int self.assertEqual(202, status_int) self.compute_api.get.assert_called_once_with( self.context, self.instance.uuid, expected_attrs=None, cell_down_support=False) self.instance.save.assert_called_once_with(admin_state_reset=True) def test_reset_error(self): expected = dict(vm_state=vm_states.ERROR, task_state=None) self.instance.save = mock.MagicMock( side_effect=lambda **kw: self._check_instance_state(expected)) self.compute_api.get = mock.MagicMock(return_value=self.instance) body = {"os-resetState": {"state": "error"}} result = self.admin_api._reset_state(self.request, self.uuid, body=body) # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. if isinstance(self.admin_api, admin_actions_v21.AdminActionsController): status_int = self.admin_api._reset_state.wsgi_code else: status_int = result.status_int self.assertEqual(202, status_int) self.compute_api.get.assert_called_once_with( self.context, self.instance.uuid, expected_attrs=None, cell_down_support=False) self.instance.save.assert_called_once_with(admin_state_reset=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_server_start_stop.py0000664000175000017500000002225200000000000027272 0ustar00zuulzuul00000000000000# Copyright (c) 2012 Midokura Japan K.K. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
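#
# The tests below check that compute API errors from the server start/stop
# actions surface as HTTP errors: a missing instance becomes 404, while
# "not ready" and "locked" instances become 409 Conflict.  A minimal,
# illustrative sketch of that mapping (an assumption for orientation only,
# not the actual ServersController code) looks like:
#
#     try:
#         self.compute_api.start(context, instance)
#     except (exception.InstanceNotReady, exception.InstanceIsLocked) as e:
#         raise webob.exc.HTTPConflict(explanation=e.format_message())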
import mock from oslo_policy import policy as oslo_policy from oslo_utils.fixture import uuidsentinel as uuids import six import webob from nova.api.openstack.compute import servers \ as server_v21 from nova.compute import api as compute_api from nova.db import api as db from nova import exception from nova import policy from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.unit.api.openstack import fakes class ServerStartStopTestV21(test.TestCase): def setUp(self): super(ServerStartStopTestV21, self).setUp() self._setup_controller() self.req = fakes.HTTPRequest.blank('') self.useFixture(nova_fixtures.SingleCellSimple()) self.stub_out('nova.db.api.instance_get_by_uuid', fakes.fake_instance_get( project_id=fakes.FAKE_PROJECT_ID)) def _setup_controller(self): self.controller = server_v21.ServersController() @mock.patch.object(compute_api.API, 'start') def test_start(self, start_mock): body = dict(start="") self.controller._start_server(self.req, uuids.instance, body) start_mock.assert_called_once_with(mock.ANY, mock.ANY) @mock.patch.object(compute_api.API, 'start', side_effect=exception.InstanceNotReady( instance_id=uuids.instance)) def test_start_not_ready(self, start_mock): body = dict(start="") self.assertRaises(webob.exc.HTTPConflict, self.controller._start_server, self.req, uuids.instance, body) @mock.patch.object(compute_api.API, 'start', side_effect=exception.InstanceIsLocked( instance_uuid=uuids.instance)) def test_start_locked_server(self, start_mock): body = dict(start="") self.assertRaises(webob.exc.HTTPConflict, self.controller._start_server, self.req, uuids.instance, body) @mock.patch.object(compute_api.API, 'start', side_effect=exception.InstanceIsLocked( instance_uuid=uuids.instance)) def test_start_invalid_state(self, start_mock): body = dict(start="") ex = self.assertRaises(webob.exc.HTTPConflict, self.controller._start_server, self.req, uuids.instance, body) self.assertIn('is locked', six.text_type(ex)) @mock.patch.object(compute_api.API, 'stop') def test_stop(self, stop_mock): body = dict(stop="") self.controller._stop_server(self.req, uuids.instance, body) stop_mock.assert_called_once_with(mock.ANY, mock.ANY) @mock.patch.object(compute_api.API, 'stop', side_effect=exception.InstanceNotReady( instance_id=uuids.instance)) def test_stop_not_ready(self, stop_mock): body = dict(stop="") self.assertRaises(webob.exc.HTTPConflict, self.controller._stop_server, self.req, uuids.instance, body) @mock.patch.object(compute_api.API, 'stop', side_effect=exception.InstanceIsLocked( instance_uuid=uuids.instance)) def test_stop_locked_server(self, stop_mock): body = dict(stop="") ex = self.assertRaises(webob.exc.HTTPConflict, self.controller._stop_server, self.req, uuids.instance, body) self.assertIn('is locked', six.text_type(ex)) @mock.patch.object(compute_api.API, 'stop', side_effect=exception.InstanceIsLocked( instance_uuid=uuids.instance)) def test_stop_invalid_state(self, stop_mock): body = dict(start="") self.assertRaises(webob.exc.HTTPConflict, self.controller._stop_server, self.req, uuids.instance, body) @mock.patch.object(db, 'instance_get_by_uuid', side_effect=exception.InstanceNotFound( instance_id=uuids.instance)) def test_start_with_bogus_id(self, get_mock): body = dict(start="") self.assertRaises(webob.exc.HTTPNotFound, self.controller._start_server, self.req, uuids.instance, body) @mock.patch.object(db, 'instance_get_by_uuid', side_effect=exception.InstanceNotFound( instance_id=uuids.instance)) def test_stop_with_bogus_id(self, get_mock): 
body = dict(stop="") self.assertRaises(webob.exc.HTTPNotFound, self.controller._stop_server, self.req, uuids.instance, body) class ServerStartStopPolicyEnforcementV21(test.TestCase): start_policy = "os_compute_api:servers:start" stop_policy = "os_compute_api:servers:stop" def setUp(self): super(ServerStartStopPolicyEnforcementV21, self).setUp() self.controller = server_v21.ServersController() self.req = fakes.HTTPRequest.blank('') self.useFixture(nova_fixtures.SingleCellSimple()) self.stub_out( 'nova.db.api.instance_get_by_uuid', fakes.fake_instance_get( project_id=self.req.environ['nova.context'].project_id)) def test_start_policy_failed(self): rules = { self.start_policy: "project_id:non_fake" } policy.set_rules(oslo_policy.Rules.from_dict(rules)) body = dict(start="") exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller._start_server, self.req, uuids.instance, body) self.assertIn(self.start_policy, exc.format_message()) def test_start_overridden_policy_failed_with_other_user_in_same_project( self): rules = { self.start_policy: "user_id:%(user_id)s" } policy.set_rules(oslo_policy.Rules.from_dict(rules)) # Change the user_id in request context. self.req.environ['nova.context'].user_id = 'other-user' body = dict(start="") exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller._start_server, self.req, uuids.instance, body) self.assertIn(self.start_policy, exc.format_message()) @mock.patch('nova.compute.api.API.start') def test_start_overridden_policy_pass_with_same_user(self, start_mock): rules = { self.start_policy: "user_id:%(user_id)s" } policy.set_rules(oslo_policy.Rules.from_dict(rules)) body = dict(start="") self.controller._start_server(self.req, uuids.instance, body) start_mock.assert_called_once_with(mock.ANY, mock.ANY) def test_stop_policy_failed_with_other_project(self): rules = { self.stop_policy: "project_id:%(project_id)s" } policy.set_rules(oslo_policy.Rules.from_dict(rules)) body = dict(stop="") # Change the project_id in request context. self.req.environ['nova.context'].project_id = 'other-project' exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller._stop_server, self.req, uuids.instance, body) self.assertIn(self.stop_policy, exc.format_message()) @mock.patch('nova.compute.api.API.stop') def test_stop_overridden_policy_pass_with_same_project(self, stop_mock): rules = { self.stop_policy: "project_id:%(project_id)s" } policy.set_rules(oslo_policy.Rules.from_dict(rules)) body = dict(stop="") self.controller._stop_server(self.req, uuids.instance, body) stop_mock.assert_called_once_with(mock.ANY, mock.ANY) def test_stop_overridden_policy_failed_with_other_user_in_same_project( self): rules = { self.stop_policy: "user_id:%(user_id)s" } policy.set_rules(oslo_policy.Rules.from_dict(rules)) # Change the user_id in request context. 
self.req.environ['nova.context'].user_id = 'other-user' body = dict(stop="") exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller._stop_server, self.req, uuids.instance, body) self.assertIn(self.stop_policy, exc.format_message()) @mock.patch('nova.compute.api.API.stop') def test_stop_overridden_policy_pass_with_same_user(self, stop_mock): rules = { self.stop_policy: "user_id:%(user_id)s" } policy.set_rules(oslo_policy.Rules.from_dict(rules)) body = dict(stop="") self.controller._stop_server(self.req, uuids.instance, body) stop_mock.assert_called_once_with(mock.ANY, mock.ANY) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_server_tags.py0000664000175000017500000004035000000000000026025 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from webob import exc from nova.api.openstack.compute import server_tags from nova.api.openstack.compute import servers from nova.compute import vm_states from nova import context from nova.db.sqlalchemy import models from nova import exception from nova import objects from nova.objects import instance from nova.objects import tag as tag_obj from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance UUID = 'b48316c5-71e8-45e4-9884-6c78055b9b13' TAG1 = 'tag1' TAG2 = 'tag2' TAG3 = 'tag3' TAGS = [TAG1, TAG2, TAG3] NON_EXISTING_UUID = '123' def return_server(compute_api, context, instance_id, expected_attrs=None): return fake_instance.fake_instance_obj(context, vm_state=vm_states.ACTIVE) def return_invalid_server(compute_api, context, instance_id, expected_attrs=None): return fake_instance.fake_instance_obj(context, vm_state=vm_states.BUILDING) class ServerTagsTest(test.TestCase): api_version = '2.26' def setUp(self): super(ServerTagsTest, self).setUp() self.controller = server_tags.ServerTagsController() inst_map = objects.InstanceMapping( project_id=fakes.FAKE_PROJECT_ID, cell_mapping=objects.CellMappingList.get_all( context.get_admin_context())[1]) self.stub_out('nova.objects.InstanceMapping.get_by_instance_uuid', lambda s, c, u: inst_map) def _get_tag(self, tag_name): tag = models.Tag() tag.tag = tag_name tag.resource_id = UUID return tag def _get_request(self, url, method): request = fakes.HTTPRequest.blank(url, version=self.api_version) request.method = method return request @mock.patch('nova.db.api.instance_tag_exists') def test_show(self, mock_exists): mock_exists.return_value = True req = self._get_request( '/v2/%s/servers/%s/tags/%s' % ( fakes.FAKE_PROJECT_ID, UUID, TAG1), 'GET') self.controller.show(req, UUID, TAG1) mock_exists.assert_called_once_with(mock.ANY, UUID, TAG1) @mock.patch('nova.db.api.instance_tag_get_by_instance_uuid') def test_index(self, mock_db_get_inst_tags): fake_tags = [self._get_tag(tag) for tag in TAGS] mock_db_get_inst_tags.return_value = fake_tags req = self._get_request('/v2/%s/servers/%s/tags' % ( fakes.FAKE_PROJECT_ID, 
UUID), 'GET') res = self.controller.index(req, UUID) self.assertEqual(TAGS, res.get('tags')) mock_db_get_inst_tags.assert_called_once_with(mock.ANY, UUID) @mock.patch('nova.notifications.base.send_instance_update_notification') @mock.patch('nova.db.api.instance_tag_set') def test_update_all(self, mock_db_set_inst_tags, mock_notify): self.stub_out('nova.api.openstack.common.get_instance', return_server) fake_tags = [self._get_tag(tag) for tag in TAGS] mock_db_set_inst_tags.return_value = fake_tags req = self._get_request( '/v2/%s/servers/%s/tags' % (fakes.FAKE_PROJECT_ID, UUID), 'PUT') res = self.controller.update_all(req, UUID, body={'tags': TAGS}) self.assertEqual(TAGS, res['tags']) mock_db_set_inst_tags.assert_called_once_with(mock.ANY, UUID, TAGS) self.assertEqual(1, mock_notify.call_count) def test_update_all_too_many_tags(self): self.stub_out('nova.api.openstack.common.get_instance', return_server) fake_tags = {'tags': [str(i) for i in range( instance.MAX_TAG_COUNT + 1)]} req = self._get_request( '/v2/%s/servers/%s/tags' % (fakes.FAKE_PROJECT_ID, UUID), 'PUT') self.assertRaises(exception.ValidationError, self.controller.update_all, req, UUID, body=fake_tags) def test_update_all_forbidden_characters(self): self.stub_out('nova.api.openstack.common.get_instance', return_server) req = self._get_request('/v2/%s/servers/%s/tags' % ( fakes.FAKE_PROJECT_ID, UUID), 'PUT') for tag in ['tag,1', 'tag/1']: self.assertRaises(exception.ValidationError, self.controller.update_all, req, UUID, body={'tags': [tag, 'tag2']}) def test_update_all_invalid_tag_type(self): req = self._get_request('/v2/%s/servers/%s/tags' % ( fakes.FAKE_PROJECT_ID, UUID), 'PUT') self.assertRaises(exception.ValidationError, self.controller.update_all, req, UUID, body={'tags': [1]}) def test_update_all_tags_with_one_tag_empty_string(self): req = self._get_request('/v2/%s/servers/%s/tags' % ( fakes.FAKE_PROJECT_ID, UUID), 'PUT') self.assertRaises(exception.ValidationError, self.controller.update_all, req, UUID, body={'tags': ['tag1', '']}) def test_update_all_too_long_tag(self): self.stub_out('nova.api.openstack.common.get_instance', return_server) req = self._get_request('/v2/%s/servers/%s/tags' % ( fakes.FAKE_PROJECT_ID, UUID), 'PUT') tag = "a" * (tag_obj.MAX_TAG_LENGTH + 1) self.assertRaises(exception.ValidationError, self.controller.update_all, req, UUID, body={'tags': [tag]}) def test_update_all_invalid_tag_list_type(self): req = self._get_request('/v2/%s/servers/%s/tags' % ( fakes.FAKE_PROJECT_ID, UUID), 'PUT') self.assertRaises(exception.ValidationError, self.controller.update_all, req, UUID, body={'tags': {'tag': 'tag'}}) def test_update_all_invalid_instance_state(self): self.stub_out('nova.api.openstack.common.get_instance', return_invalid_server) req = self._get_request('/v2/%s/servers/%s/tags' % ( fakes.FAKE_PROJECT_ID, UUID), 'PUT') self.assertRaises(exc.HTTPConflict, self.controller.update_all, req, UUID, body={'tags': TAGS}) @mock.patch('nova.db.api.instance_tag_exists') def test_show_non_existing_tag(self, mock_exists): mock_exists.return_value = False req = self._get_request( '/v2/%s/servers/%s/tags/%s' % ( fakes.FAKE_PROJECT_ID, UUID, TAG1), 'GET') self.assertRaises(exc.HTTPNotFound, self.controller.show, req, UUID, TAG1) @mock.patch('nova.notifications.base.send_instance_update_notification') @mock.patch('nova.db.api.instance_tag_add') @mock.patch('nova.db.api.instance_tag_get_by_instance_uuid') def test_update(self, mock_db_get_inst_tags, mock_db_add_inst_tags, mock_notify): 
self.stub_out('nova.api.openstack.common.get_instance', return_server) mock_db_get_inst_tags.return_value = [self._get_tag(TAG1)] mock_db_add_inst_tags.return_value = self._get_tag(TAG2) url = '/v2/%s/servers/%s/tags/%s' % (fakes.FAKE_PROJECT_ID, UUID, TAG2) location = 'http://localhost' + url req = self._get_request(url, 'PUT') res = self.controller.update(req, UUID, TAG2, body=None) self.assertEqual(201, res.status_int) self.assertEqual(0, len(res.body)) self.assertEqual(location, res.headers['Location']) mock_db_add_inst_tags.assert_called_once_with(mock.ANY, UUID, TAG2) self.assertEqual(2, mock_db_get_inst_tags.call_count) self.assertEqual(1, mock_notify.call_count) @mock.patch('nova.db.api.instance_tag_get_by_instance_uuid') def test_update_existing_tag(self, mock_db_get_inst_tags): self.stub_out('nova.api.openstack.common.get_instance', return_server) mock_db_get_inst_tags.return_value = [self._get_tag(TAG1)] req = self._get_request( '/v2/%s/servers/%s/tags/%s' % ( fakes.FAKE_PROJECT_ID, UUID, TAG1), 'PUT') res = self.controller.update(req, UUID, TAG1, body=None) self.assertEqual(204, res.status_int) self.assertEqual(0, len(res.body)) mock_db_get_inst_tags.assert_called_once_with(mock.ANY, UUID) @mock.patch('nova.db.api.instance_tag_get_by_instance_uuid') def test_update_tag_limit_exceed(self, mock_db_get_inst_tags): self.stub_out('nova.api.openstack.common.get_instance', return_server) fake_tags = [self._get_tag(str(i)) for i in range(instance.MAX_TAG_COUNT)] mock_db_get_inst_tags.return_value = fake_tags req = self._get_request( '/v2/%s/servers/%s/tags/%s' % ( fakes.FAKE_PROJECT_ID, UUID, TAG2), 'PUT') self.assertRaises(exc.HTTPBadRequest, self.controller.update, req, UUID, TAG2, body=None) @mock.patch('nova.db.api.instance_tag_get_by_instance_uuid') def test_update_too_long_tag(self, mock_db_get_inst_tags): self.stub_out('nova.api.openstack.common.get_instance', return_server) mock_db_get_inst_tags.return_value = [] tag = "a" * (tag_obj.MAX_TAG_LENGTH + 1) req = self._get_request( '/v2/%s/servers/%s/tags/%s' % ( fakes.FAKE_PROJECT_ID, UUID, tag), 'PUT') self.assertRaises(exc.HTTPBadRequest, self.controller.update, req, UUID, tag, body=None) @mock.patch('nova.db.api.instance_tag_get_by_instance_uuid') def test_update_forbidden_characters(self, mock_db_get_inst_tags): self.stub_out('nova.api.openstack.common.get_instance', return_server) mock_db_get_inst_tags.return_value = [] for tag in ['tag,1', 'tag/1']: req = self._get_request( '/v2/%s/servers/%s/tags/%s' % ( fakes.FAKE_PROJECT_ID, UUID, tag), 'PUT') self.assertRaises(exc.HTTPBadRequest, self.controller.update, req, UUID, tag, body=None) def test_update_invalid_instance_state(self): self.stub_out('nova.api.openstack.common.get_instance', return_invalid_server) req = self._get_request( '/v2/%s/servers/%s/tags/%s' % ( fakes.FAKE_PROJECT_ID, UUID, TAG1), 'PUT') self.assertRaises(exc.HTTPConflict, self.controller.update, req, UUID, TAG1, body=None) @mock.patch('nova.db.api.instance_tag_get_by_instance_uuid') @mock.patch('nova.notifications.base.send_instance_update_notification') @mock.patch('nova.db.api.instance_tag_delete') def test_delete(self, mock_db_delete_inst_tags, mock_notify, mock_db_get_inst_tags): self.stub_out('nova.api.openstack.common.get_instance', return_server) req = self._get_request( '/v2/%s/servers/%s/tags/%s' % ( fakes.FAKE_PROJECT_ID, UUID, TAG2), 'DELETE') self.controller.delete(req, UUID, TAG2) mock_db_delete_inst_tags.assert_called_once_with(mock.ANY, UUID, TAG2) 
mock_db_get_inst_tags.assert_called_once_with(mock.ANY, UUID) self.assertEqual(1, mock_notify.call_count) @mock.patch('nova.db.api.instance_tag_delete') def test_delete_non_existing_tag(self, mock_db_delete_inst_tags): self.stub_out('nova.api.openstack.common.get_instance', return_server) def fake_db_delete_tag(context, instance_uuid, tag): self.assertEqual(UUID, instance_uuid) self.assertEqual(TAG1, tag) raise exception.InstanceTagNotFound(instance_id=instance_uuid, tag=tag) mock_db_delete_inst_tags.side_effect = fake_db_delete_tag req = self._get_request( '/v2/%s/servers/%s/tags/%s' % ( fakes.FAKE_PROJECT_ID, UUID, TAG1), 'DELETE') self.assertRaises(exc.HTTPNotFound, self.controller.delete, req, UUID, TAG1) def test_delete_invalid_instance_state(self): self.stub_out('nova.api.openstack.common.get_instance', return_invalid_server) req = self._get_request( '/v2/%s/servers/%s/tags/%s' % ( fakes.FAKE_PROJECT_ID, UUID, TAG2), 'DELETE') self.assertRaises(exc.HTTPConflict, self.controller.delete, req, UUID, TAG1) @mock.patch('nova.notifications.base.send_instance_update_notification') @mock.patch('nova.db.api.instance_tag_delete_all') def test_delete_all(self, mock_db_delete_inst_tags, mock_notify): self.stub_out('nova.api.openstack.common.get_instance', return_server) req = self._get_request('/v2/%s/servers/%s/tags' % ( fakes.FAKE_PROJECT_ID, UUID), 'DELETE') self.controller.delete_all(req, UUID) mock_db_delete_inst_tags.assert_called_once_with(mock.ANY, UUID) self.assertEqual(1, mock_notify.call_count) def test_delete_all_invalid_instance_state(self): self.stub_out('nova.api.openstack.common.get_instance', return_invalid_server) req = self._get_request('/v2/%s/servers/%s/tags' % ( fakes.FAKE_PROJECT_ID, UUID), 'DELETE') self.assertRaises(exc.HTTPConflict, self.controller.delete_all, req, UUID) def test_show_non_existing_instance(self): req = self._get_request( '/v2/%s/servers/%s/tags/%s' % ( fakes.FAKE_PROJECT_ID, NON_EXISTING_UUID, TAG1), 'GET') self.assertRaises(exc.HTTPNotFound, self.controller.show, req, NON_EXISTING_UUID, TAG1) def test_show_with_details_information_non_existing_instance(self): req = self._get_request( '/v2/%s/servers/%s' % ( fakes.FAKE_PROJECT_ID, NON_EXISTING_UUID), 'GET') servers_controller = servers.ServersController() self.assertRaises(exc.HTTPNotFound, servers_controller.show, req, NON_EXISTING_UUID) def test_index_non_existing_instance(self): req = self._get_request( 'v2/%s/servers/%s/tags' % ( fakes.FAKE_PROJECT_ID, NON_EXISTING_UUID), 'GET') self.assertRaises(exc.HTTPNotFound, self.controller.index, req, NON_EXISTING_UUID) def test_update_non_existing_instance(self): req = self._get_request( '/v2/%s/servers/%s/tags/%s' % ( fakes.FAKE_PROJECT_ID, NON_EXISTING_UUID, TAG1), 'PUT') self.assertRaises(exc.HTTPNotFound, self.controller.update, req, NON_EXISTING_UUID, TAG1, body=None) def test_update_all_non_existing_instance(self): req = self._get_request( '/v2/%s/servers/%s/tags' % ( fakes.FAKE_PROJECT_ID, NON_EXISTING_UUID), 'PUT') self.assertRaises(exc.HTTPNotFound, self.controller.update_all, req, NON_EXISTING_UUID, body={'tags': TAGS}) def test_delete_non_existing_instance(self): req = self._get_request( '/v2/%s/servers/%s/tags/%s' % ( fakes.FAKE_PROJECT_ID, NON_EXISTING_UUID, TAG1), 'DELETE') self.assertRaises(exc.HTTPNotFound, self.controller.delete, req, NON_EXISTING_UUID, TAG1) def test_delete_all_non_existing_instance(self): req = self._get_request( '/v2/%s/servers/%s/tags' % ( fakes.FAKE_PROJECT_ID, NON_EXISTING_UUID), 'DELETE') 
        self.assertRaises(exc.HTTPNotFound, self.controller.delete_all,
                          req, NON_EXISTING_UUID)
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0
nova-21.2.4/nova/tests/unit/api/openstack/compute/test_server_topology.py0000664000175000017500000000742600000000000026752 0ustar00zuulzuul00000000000000
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
from oslo_utils.fixture import uuidsentinel as uuids
from webob import exc

from nova.api.openstack import common
from nova.api.openstack.compute import server_topology
from nova import exception
from nova import objects
from nova.objects import instance_numa as numa
from nova import test
from nova.tests.unit.api.openstack import fakes


class ServerTopologyTestV278(test.NoDBTestCase):
    mock_method = 'get_by_instance_uuid'
    api_version = '2.78'

    def setUp(self):
        super(ServerTopologyTestV278, self).setUp()
        self.uuid = uuids.instance
        self.req = fakes.HTTPRequest.blank(
            '/v2/%s/servers/%s/topology' % (fakes.FAKE_PROJECT_ID, self.uuid),
            version=self.api_version, use_admin_context=True)
        self.controller = server_topology.ServerTopologyController()
        self.context = self.req.environ['nova.context']

    def _fake_numa(self, cpu_pinning=None):
        ce0 = numa.InstanceNUMACell(node=0, memory=1024, pagesize=4, id=0,
                                    cpu_topology=None,
                                    cpu_pinning=cpu_pinning,
                                    cpuset=set([0, 1]))
        return numa.InstanceNUMATopology(cells=[ce0])

    @mock.patch.object(common, 'get_instance',
                       side_effect=exc.HTTPNotFound('Fake'))
    def test_get_topology_with_invalid_instance(self, mock_get):
        excep = self.assertRaises(exc.HTTPNotFound,
                                  self.controller.index,
                                  self.req, self.uuid)
        self.assertEqual("Fake", str(excep))

    @mock.patch.object(common, 'get_instance')
    def test_get_topology_with_no_topology(self, fake_get):
        expect = {'nodes': [], 'pagesize_kb': None}
        inst = objects.instance.Instance(uuid=self.uuid, host='123',
                                         project_id=self.context.project_id)
        inst.numa_topology = None
        fake_get.return_value = inst
        output = self.controller.index(self.req, self.uuid)
        self.assertEqual(expect, output)

    @mock.patch.object(common, 'get_instance')
    def test_get_topology_cpu_pinning_with_none(self, fake_get):
        expect = {'nodes': [{'memory_mb': 1024, 'siblings': [],
                             'vcpu_set': set([0, 1]), 'host_node': 0,
                             'cpu_pinning': {}}],
                  'pagesize_kb': 4}
        inst = objects.instance.Instance(uuid=self.uuid, host='123',
                                         project_id=self.context.project_id)
        inst.numa_topology = self._fake_numa(cpu_pinning=None)
        fake_get.return_value = inst
        output = self.controller.index(self.req, self.uuid)
        self.assertEqual(expect, output)

        inst.numa_topology = self._fake_numa(cpu_pinning={})
        fake_get.return_value = inst
        output = self.controller.index(self.req, self.uuid)
        self.assertEqual(expect, output)

    def test_hit_topology_before278(self):
        req = fakes.HTTPRequest.blank(
            '/v2/%s/servers/%s/topology' % (fakes.FAKE_PROJECT_ID, self.uuid),
            version='2.77')
        excep = self.assertRaises(exception.VersionNotFoundForAPIMethod,
                                  self.controller.index,
                                  req, self.uuid)
        self.assertEqual(400, excep.code)
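The ServerTopologyTestV278 cases above hinge on microversion gating: the topology sub-resource only exists from 2.78 onward, and an older request fails with VersionNotFoundForAPIMethod (HTTP 400). A minimal sketch of that gating pattern, assuming the usual nova wsgi api_version decorator (illustrative only, not the actual server_topology controller body):

from nova.api.openstack import wsgi


class ExampleTopologyController(wsgi.Controller):
    # Hypothetical controller, used only to illustrate microversion gating.

    @wsgi.Controller.api_version('2.78')
    def index(self, req, server_id):
        # A request negotiated below 2.78 never reaches this body; the
        # versioned-method lookup raises VersionNotFoundForAPIMethod, which
        # the API maps to a 400 response (what test_hit_topology_before278
        # asserts).
        return {'nodes': [], 'pagesize_kb': None}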
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_serversV21.py0000664000175000017500000127410600000000000025474 0ustar00zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation # Copyright 2011 Piston Cloud Computing, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import copy import datetime import ddt import functools import fixtures import iso8601 import mock from oslo_policy import policy as oslo_policy from oslo_serialization import base64 from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import fixture as utils_fixture from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from oslo_utils import uuidutils import six from six.moves import range import six.moves.urllib.parse as urlparse import testtools import webob from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack import compute from nova.api.openstack.compute import ips from nova.api.openstack.compute.schemas import servers as servers_schema from nova.api.openstack.compute import servers from nova.api.openstack.compute import views from nova.api.openstack import wsgi as os_wsgi from nova import availability_zones from nova import block_device from nova.compute import api as compute_api from nova.compute import flavors from nova.compute import task_states from nova.compute import vm_states import nova.conf from nova import context from nova.db import api as db from nova.db.sqlalchemy import api as db_api from nova.db.sqlalchemy import models from nova import exception from nova.image import glance from nova import objects from nova.objects import instance as instance_obj from nova.objects.instance_group import InstanceGroup from nova.objects import tag from nova.policies import servers as server_policies from nova import policy from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_block_device from nova.tests.unit import fake_flavor from nova.tests.unit import fake_instance from nova.tests.unit.image import fake from nova.tests.unit import matchers from nova import utils as nova_utils CONF = nova.conf.CONF FAKE_UUID = fakes.FAKE_UUID UUID1 = '00000000-0000-0000-0000-000000000001' UUID2 = '00000000-0000-0000-0000-000000000002' INSTANCE_IDS = {FAKE_UUID: 1} FIELDS = instance_obj.INSTANCE_DEFAULT_FIELDS GET_ONLY_FIELDS = ['OS-EXT-AZ:availability_zone', 'config_drive', 'OS-EXT-SRV-ATTR:host', 'OS-EXT-SRV-ATTR:hypervisor_hostname', 'OS-EXT-SRV-ATTR:instance_name', 'OS-EXT-SRV-ATTR:hostname', 'OS-EXT-SRV-ATTR:kernel_id', 'OS-EXT-SRV-ATTR:launch_index', 'OS-EXT-SRV-ATTR:ramdisk_id', 'OS-EXT-SRV-ATTR:reservation_id', 'OS-EXT-SRV-ATTR:root_device_name', 'OS-EXT-SRV-ATTR:user_data', 'host_status', 'key_name', 'OS-SRV-USG:launched_at', 'OS-SRV-USG:terminated_at', 
'OS-EXT-STS:task_state', 'OS-EXT-STS:vm_state', 'OS-EXT-STS:power_state', 'security_groups', 'os-extended-volumes:volumes_attached'] def instance_update_and_get_original(context, instance_uuid, values, columns_to_join=None, ): inst = fakes.stub_instance(INSTANCE_IDS.get(instance_uuid), name=values.get('display_name')) inst = dict(inst, **values) return (inst, inst) def instance_update(context, instance_uuid, values): inst = fakes.stub_instance(INSTANCE_IDS.get(instance_uuid), name=values.get('display_name')) inst = dict(inst, **values) return inst def fake_compute_api(cls, req, id): return True def fake_start_stop_not_ready(self, context, instance): raise exception.InstanceNotReady(instance_id=instance["uuid"]) def fake_start_stop_invalid_state(self, context, instance): raise exception.InstanceInvalidState( instance_uuid=instance['uuid'], attr='fake_attr', method='fake_method', state='fake_state') def fake_instance_get_by_uuid_not_found(context, uuid, columns_to_join, use_slave=False): raise exception.InstanceNotFound(instance_id=uuid) def fake_instance_get_all_with_locked(context, list_locked, **kwargs): obj_list = [] s_id = 0 for locked in list_locked: uuid = fakes.get_fake_uuid(locked) s_id = s_id + 1 kwargs['locked_by'] = None if locked == 'not_locked' else locked server = fakes.stub_instance_obj(context, id=s_id, uuid=uuid, **kwargs) obj_list.append(server) return objects.InstanceList(objects=obj_list) def fake_instance_get_all_with_description(context, list_desc, **kwargs): obj_list = [] s_id = 0 for desc in list_desc: uuid = fakes.get_fake_uuid(desc) s_id = s_id + 1 kwargs['display_description'] = desc server = fakes.stub_instance_obj(context, id=s_id, uuid=uuid, **kwargs) obj_list.append(server) return objects.InstanceList(objects=obj_list) def fake_compute_get_empty_az(*args, **kwargs): inst = fakes.stub_instance(vm_state=vm_states.ACTIVE, availability_zone='') return fake_instance.fake_instance_obj(args[1], **inst) def fake_bdms_get_all_by_instance_uuids(*args, **kwargs): return [ fake_block_device.FakeDbBlockDeviceDict({ 'id': 1, 'volume_id': 'some_volume_1', 'instance_uuid': FAKE_UUID, 'source_type': 'volume', 'destination_type': 'volume', 'delete_on_termination': True, }), fake_block_device.FakeDbBlockDeviceDict({ 'id': 2, 'volume_id': 'some_volume_2', 'instance_uuid': FAKE_UUID, 'source_type': 'volume', 'destination_type': 'volume', 'delete_on_termination': False, }), ] def fake_get_inst_mappings_by_instance_uuids_from_db(*args, **kwargs): return [{ 'id': 1, 'instance_uuid': UUID1, 'cell_mapping': { 'id': 1, 'uuid': uuids.cell1, 'name': 'fake', 'transport_url': 'fake://nowhere/', 'updated_at': None, 'database_connection': uuids.cell1, 'created_at': None, 'disabled': False}, 'project_id': 'fake-project' }] class MockSetAdminPassword(object): def __init__(self): self.instance_id = None self.password = None def __call__(self, context, instance_id, password): self.instance_id = instance_id self.password = password class ControllerTest(test.TestCase): project_id = fakes.FAKE_PROJECT_ID path = '/%s/servers' % project_id path_v2 = '/v2' + path path_with_id = path + '/%s' path_with_id_v2 = path_v2 + '/%s' path_with_query = path + '?%s' path_detail = path + '/detail' path_detail_v2 = path_v2 + '/detail' path_detail_with_query = path_detail + '?%s' path_action = path + '/%s/action' def setUp(self): super(ControllerTest, self).setUp() fakes.stub_out_nw_api(self) fakes.stub_out_key_pair_funcs(self) fake.stub_out_image_service(self) fakes.stub_out_secgroup_api( self, 
security_groups=[{'name': 'default'}]) return_server = fakes.fake_compute_get(id=2, availability_zone='nova', launched_at=None, terminated_at=None, task_state=None, vm_state=vm_states.ACTIVE, power_state=1) return_servers = fakes.fake_compute_get_all() # Server sort keys extension is enabled in v21 so sort data is passed # to the instance API and the sorted DB API is invoked self.mock_get_all = self.useFixture(fixtures.MockPatchObject( compute_api.API, 'get_all', side_effect=return_servers)).mock self.mock_get = self.useFixture(fixtures.MockPatchObject( compute_api.API, 'get', side_effect=return_server)).mock self.stub_out('nova.db.api.instance_update_and_get_original', instance_update_and_get_original) self.stub_out('nova.db.api.' 'block_device_mapping_get_all_by_instance_uuids', fake_bdms_get_all_by_instance_uuids) self.stub_out('nova.objects.InstanceMappingList.' '_get_by_instance_uuids_from_db', fake_get_inst_mappings_by_instance_uuids_from_db) self.flags(group='glance', api_servers=['http://localhost:9292']) self.controller = servers.ServersController() self.ips_controller = ips.IPsController() policy.reset() policy.init() self.addCleanup(policy.reset) # Assume that anything that hits the compute API and looks for a # RequestSpec doesn't care about it, since testing logic that deep # should be done in nova.tests.unit.compute.test_compute_api. mock_reqspec = mock.patch('nova.objects.RequestSpec') mock_reqspec.start() self.addCleanup(mock_reqspec.stop) # Similarly we shouldn't care about anything hitting conductor from # these tests. mock_conductor = mock.patch.object( self.controller.compute_api, 'compute_task_api') mock_conductor.start() self.addCleanup(mock_conductor.stop) class ServersControllerTest(ControllerTest): wsgi_api_version = os_wsgi.DEFAULT_API_VERSION def setUp(self): super(ServersControllerTest, self).setUp() self.request = fakes.HTTPRequest.blank( self.path_with_id_v2 % FAKE_UUID, use_admin_context=False, version=self.wsgi_api_version) return_server = fakes.fake_compute_get( id=2, availability_zone='nova', launched_at=None, terminated_at=None, task_state=None, vm_state=vm_states.ACTIVE, power_state=1, project_id=self.request.environ['nova.context'].project_id) self.mock_get.side_effect = return_server def req(self, url, use_admin_context=False): return fakes.HTTPRequest.blank(url, use_admin_context=use_admin_context, version=self.wsgi_api_version) @mock.patch('nova.objects.Instance.get_by_uuid') @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid') def test_instance_lookup_targets(self, mock_get_im, mock_get_inst): ctxt = context.RequestContext('fake', self.project_id) mock_get_im.return_value.cell_mapping.database_connection = uuids.cell1 self.controller._get_instance(ctxt, 'foo') mock_get_im.assert_called_once_with(ctxt, 'foo') self.assertIsNotNone(ctxt.db_connection) def test_requested_networks_prefix(self): """Tests that we no longer support the legacy br- format for a network id. 
""" uuid = 'br-00000000-0000-0000-0000-000000000000' requested_networks = [{'uuid': uuid}] ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller._get_requested_networks, requested_networks) self.assertIn('Bad networks format: network uuid is not in proper ' 'format', six.text_type(ex)) def test_requested_networks_enabled_with_port(self): port = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee' requested_networks = [{'port': port}] res = self.controller._get_requested_networks(requested_networks) self.assertEqual([(None, None, port, None)], res.as_tuples()) def test_requested_networks_enabled_with_network(self): network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' requested_networks = [{'uuid': network}] res = self.controller._get_requested_networks(requested_networks) self.assertEqual([(network, None, None, None)], res.as_tuples()) def test_requested_networks_enabled_with_network_and_port(self): network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' port = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee' requested_networks = [{'uuid': network, 'port': port}] res = self.controller._get_requested_networks(requested_networks) self.assertEqual([(None, None, port, None)], res.as_tuples()) def test_requested_networks_with_and_duplicate_networks(self): network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' requested_networks = [{'uuid': network}, {'uuid': network}] res = self.controller._get_requested_networks(requested_networks) self.assertEqual([(network, None, None, None), (network, None, None, None)], res.as_tuples()) def test_requested_networks_enabled_conflict_on_fixed_ip(self): network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' port = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee' addr = '10.0.0.1' requested_networks = [{'uuid': network, 'fixed_ip': addr, 'port': port}] self.assertRaises( webob.exc.HTTPBadRequest, self.controller._get_requested_networks, requested_networks) def test_requested_networks_api_enabled_with_v2_subclass(self): network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' port = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee' requested_networks = [{'uuid': network, 'port': port}] res = self.controller._get_requested_networks(requested_networks) self.assertEqual([(None, None, port, None)], res.as_tuples()) def test_get_server_by_uuid(self): res_dict = self.controller.show(self.request, FAKE_UUID) self.assertEqual(res_dict['server']['id'], FAKE_UUID) def test_get_server_joins(self): def fake_get(*args, **kwargs): expected_attrs = kwargs['expected_attrs'] self.assertEqual(['flavor', 'info_cache', 'metadata', 'numa_topology'], expected_attrs) ctxt = context.RequestContext('fake', self.project_id) return fake_instance.fake_instance_obj( ctxt, expected_attrs=expected_attrs, project_id=self.request.environ['nova.context'].project_id) self.mock_get.side_effect = fake_get self.controller.show(self.request, FAKE_UUID) def test_unique_host_id(self): """Create two servers with the same host and different project_ids and check that the host_id's are unique. 
""" def return_instance_with_host(context, *args, **kwargs): project_id = context.project_id return fakes.stub_instance_obj(context, id=1, uuid=FAKE_UUID, project_id=project_id, host='fake_host') req1 = self.req(self.path_with_id % FAKE_UUID) project_id = uuidutils.generate_uuid() req2 = fakes.HTTPRequest.blank(self.path_with_id % FAKE_UUID, version=self.wsgi_api_version, project_id=project_id) self.mock_get.side_effect = return_instance_with_host server1 = self.controller.show(req1, FAKE_UUID) server2 = self.controller.show(req2, FAKE_UUID) self.assertNotEqual(server1['server']['hostId'], server2['server']['hostId']) def _get_server_data_dict(self, uuid, image_bookmark, flavor_bookmark, status="ACTIVE", progress=100): return { "server": { "id": uuid, "user_id": "fake_user", "created": "2010-10-10T12:00:00Z", "updated": "2010-11-11T11:00:00Z", "progress": progress, "name": "server2", "status": status, "hostId": '', "image": { "id": "10", "links": [ { "rel": "bookmark", "href": image_bookmark, }, ], }, "flavor": { "id": "2", "links": [ { "rel": "bookmark", "href": flavor_bookmark, }, ], }, "addresses": { 'test1': [ {'version': 4, 'addr': '192.168.1.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 6, 'addr': '2001:db8:0:1::1', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'} ] }, "metadata": { "seq": "2", }, "links": [ { "rel": "self", "href": "http://localhost%s/%s" % (self.path_v2, uuid), }, { "rel": "bookmark", "href": "http://localhost%s/%s" % (self.path, uuid), }, ], "OS-DCF:diskConfig": "MANUAL", "accessIPv4": '', "accessIPv6": '', "OS-EXT-AZ:availability_zone": "nova", "config_drive": None, "OS-EXT-SRV-ATTR:host": None, "OS-EXT-SRV-ATTR:hypervisor_hostname": None, "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "key_name": '', "OS-SRV-USG:launched_at": None, "OS-SRV-USG:terminated_at": None, "security_groups": [{'name': 'default'}], "OS-EXT-STS:task_state": None, "OS-EXT-STS:vm_state": vm_states.ACTIVE, "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": [ {'id': 'some_volume_1'}, {'id': 'some_volume_2'}, ], "tenant_id": self.request.environ['nova.context'].project_id } } def test_get_server_by_id(self): image_bookmark = "http://localhost/%s/images/10" % self.project_id flavor_bookmark = "http://localhost/%s/flavors/2" % self.project_id uuid = FAKE_UUID res_dict = self.controller.show(self.request, uuid) expected_server = self._get_server_data_dict(uuid, image_bookmark, flavor_bookmark, progress=0) expected_server['server']['tenant_id'] = self.request.environ[ 'nova.context'].project_id self.assertThat(res_dict, matchers.DictMatches(expected_server)) def test_get_server_empty_az(self): uuid = FAKE_UUID req = self.req(self.path_with_id_v2 % uuid) self.mock_get.side_effect = fakes.fake_compute_get( availability_zone='', project_id=req.environ['nova.context'].project_id) res_dict = self.controller.show(req, uuid) self.assertEqual(res_dict['server']['OS-EXT-AZ:availability_zone'], '') def test_get_server_with_active_status_by_id(self): image_bookmark = "http://localhost/%s/images/10" % self.project_id flavor_bookmark = "http://localhost/%s/flavors/2" % self.project_id res_dict = self.controller.show(self.request, FAKE_UUID) expected_server = self._get_server_data_dict(FAKE_UUID, image_bookmark, flavor_bookmark, progress=0) expected_server['server']['tenant_id'] = self.request.environ[ 'nova.context'].project_id self.assertThat(res_dict, matchers.DictMatches(expected_server)) 
self.mock_get.assert_called_once_with( self.request.environ['nova.context'], FAKE_UUID, expected_attrs=['flavor', 'info_cache', 'metadata', 'numa_topology'], cell_down_support=False) def test_get_server_with_id_image_ref_by_id(self): image_bookmark = "http://localhost/%s/images/10" % self.project_id flavor_bookmark = "http://localhost/%s/flavors/2" % self.project_id res_dict = self.controller.show(self.request, FAKE_UUID) expected_server = self._get_server_data_dict(FAKE_UUID, image_bookmark, flavor_bookmark, progress=0) expected_server['server']['tenant_id'] = self.request.environ[ 'nova.context'].project_id self.assertThat(res_dict, matchers.DictMatches(expected_server)) self.mock_get.assert_called_once_with( self.request.environ['nova.context'], FAKE_UUID, expected_attrs=['flavor', 'info_cache', 'metadata', 'numa_topology'], cell_down_support=False) def _generate_nw_cache_info(self): pub0 = ('172.19.0.1', '172.19.0.2',) pub1 = ('1.2.3.4',) pub2 = ('b33f::fdee:ddff:fecc:bbaa',) priv0 = ('192.168.0.3', '192.168.0.4',) def _ip(ip): return {'address': ip, 'type': 'fixed'} nw_cache = [ {'address': 'aa:aa:aa:aa:aa:aa', 'id': 1, 'network': {'bridge': 'br0', 'id': 1, 'label': 'public', 'subnets': [{'cidr': '172.19.0.0/24', 'ips': [_ip(ip) for ip in pub0]}, {'cidr': '1.2.3.0/16', 'ips': [_ip(ip) for ip in pub1]}, {'cidr': 'b33f::/64', 'ips': [_ip(ip) for ip in pub2]}]}}, {'address': 'bb:bb:bb:bb:bb:bb', 'id': 2, 'network': {'bridge': 'br1', 'id': 2, 'label': 'private', 'subnets': [{'cidr': '192.168.0.0/24', 'ips': [_ip(ip) for ip in priv0]}]}}] return nw_cache def test_get_server_addresses_from_cache(self): nw_cache = self._generate_nw_cache_info() self.mock_get.side_effect = fakes.fake_compute_get(nw_cache=nw_cache, availability_zone='nova') req = self.req((self.path_with_id % FAKE_UUID) + '/ips') res_dict = self.ips_controller.index(req, FAKE_UUID) expected = { 'addresses': { 'private': [ {'version': 4, 'addr': '192.168.0.3'}, {'version': 4, 'addr': '192.168.0.4'}, ], 'public': [ {'version': 4, 'addr': '172.19.0.1'}, {'version': 4, 'addr': '172.19.0.2'}, {'version': 4, 'addr': '1.2.3.4'}, {'version': 6, 'addr': 'b33f::fdee:ddff:fecc:bbaa'}, ], }, } self.assertThat(res_dict, matchers.DictMatches(expected)) self.mock_get.assert_called_once_with( req.environ['nova.context'], FAKE_UUID, expected_attrs=None, cell_down_support=False) # Make sure we kept the addresses in order self.assertIsInstance(res_dict['addresses'], collections.OrderedDict) labels = [vif['network']['label'] for vif in nw_cache] for index, label in enumerate(res_dict['addresses'].keys()): self.assertEqual(label, labels[index]) def test_get_server_addresses_nonexistent_network(self): url = ((self.path_with_id_v2 % FAKE_UUID) + '/ips/network_0') req = self.req(url) self.assertRaises(webob.exc.HTTPNotFound, self.ips_controller.show, req, FAKE_UUID, 'network_0') def test_get_server_addresses_nonexistent_server(self): self.mock_get.side_effect = exception.InstanceNotFound( instance_id='fake') req = self.req((self.path_with_id % uuids.fake) + '/ips') self.assertRaises(webob.exc.HTTPNotFound, self.ips_controller.index, req, uuids.fake) self.mock_get.assert_called_once_with( req.environ['nova.context'], uuids.fake, expected_attrs=None, cell_down_support=False) def test_show_server_hide_addresses_in_building(self): uuid = FAKE_UUID req = self.req(self.path_with_id_v2 % uuid) self.mock_get.side_effect = fakes.fake_compute_get( uuid=uuid, vm_state=vm_states.BUILDING, project_id=req.environ['nova.context'].project_id) res_dict = 
self.controller.show(req, uuid) self.assertEqual({}, res_dict['server']['addresses']) def test_show_server_addresses_in_non_building(self): uuid = FAKE_UUID nw_cache = self._generate_nw_cache_info() expected = { 'addresses': { 'private': [ {'version': 4, 'addr': '192.168.0.3', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'bb:bb:bb:bb:bb:bb'}, {'version': 4, 'addr': '192.168.0.4', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'bb:bb:bb:bb:bb:bb'}, ], 'public': [ {'version': 4, 'addr': '172.19.0.1', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 4, 'addr': '172.19.0.2', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 4, 'addr': '1.2.3.4', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 6, 'addr': 'b33f::fdee:ddff:fecc:bbaa', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, ], }, } req = self.req(self.path_with_id_v2 % uuid) self.mock_get.side_effect = fakes.fake_compute_get( nw_cache=nw_cache, uuid=uuid, vm_state=vm_states.ACTIVE, project_id=req.environ['nova.context'].project_id) res_dict = self.controller.show(req, uuid) self.assertThat(res_dict['server']['addresses'], matchers.DictMatches(expected['addresses'])) def test_detail_server_hide_addresses(self): nw_cache = self._generate_nw_cache_info() expected = { 'addresses': { 'private': [ {'version': 4, 'addr': '192.168.0.3', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'bb:bb:bb:bb:bb:bb'}, {'version': 4, 'addr': '192.168.0.4', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'bb:bb:bb:bb:bb:bb'}, ], 'public': [ {'version': 4, 'addr': '172.19.0.1', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 4, 'addr': '172.19.0.2', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 4, 'addr': '1.2.3.4', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 6, 'addr': 'b33f::fdee:ddff:fecc:bbaa', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, ], }, } def fake_get_all(context, **kwargs): return objects.InstanceList( objects=[fakes.stub_instance_obj(1, vm_state=vm_states.BUILDING, uuid=uuids.fake, nw_cache=nw_cache), fakes.stub_instance_obj(2, vm_state=vm_states.ACTIVE, uuid=uuids.fake2, nw_cache=nw_cache)]) self.mock_get_all.side_effect = fake_get_all req = self.req(self.path_with_query % 'deleted=true', use_admin_context=True) servers = self.controller.detail(req)['servers'] for server in servers: if server['OS-EXT-STS:vm_state'] == 'building': self.assertEqual({}, server['addresses']) else: self.assertThat(server['addresses'], matchers.DictMatches(expected['addresses'])) def test_get_server_list_empty(self): self.mock_get_all.side_effect = None self.mock_get_all.return_value = objects.InstanceList(objects=[]) req = self.req(self.path) res_dict = self.controller.index(req) self.assertEqual(0, len(res_dict['servers'])) self.mock_get_all.assert_called_once_with( req.environ['nova.context'], expected_attrs=[], limit=1000, marker=None, search_opts={'deleted': False, 'project_id': self.project_id}, sort_dirs=['desc'], sort_keys=['created_at'], cell_down_support=False, all_tenants=False) def test_get_server_list_with_reservation_id(self): req = self.req(self.path_with_query % 'reservation_id=foo') res_dict = self.controller.index(req) i = 0 for s in res_dict['servers']: self.assertEqual(s.get('name'), 'server%d' % (i + 1)) i += 1 
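    # The paging tests further below rely on the API's pagination contract:
    # when a 'limit' query is supplied and more results may exist, the
    # response carries a "servers_links" entry whose "next" href repeats the
    # request with 'marker' set to the id of the last server returned.
    # Roughly (an illustrative sketch, not the exact view-builder code):
    #
    #     next_href = 'http://localhost/v2/%s/servers?limit=3&marker=%s' % (
    #         project_id, servers[-1]['id'])
    #
    # which is the href that test_get_servers_with_limit decodes with
    # urlparse below.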
def test_get_server_list_with_reservation_id_empty(self): req = self.req(self.path_detail_with_query % 'reservation_id=foo') res_dict = self.controller.detail(req) i = 0 for s in res_dict['servers']: self.assertEqual(s.get('name'), 'server%d' % (i + 1)) i += 1 def test_get_server_list_with_reservation_id_details(self): req = self.req(self.path_detail_with_query % 'reservation_id=foo') res_dict = self.controller.detail(req) i = 0 for s in res_dict['servers']: self.assertEqual(s.get('name'), 'server%d' % (i + 1)) i += 1 def test_get_server_list(self): req = self.req(self.path) res_dict = self.controller.index(req) self.assertEqual(len(res_dict['servers']), 5) for i, s in enumerate(res_dict['servers']): self.assertEqual(s['id'], fakes.get_fake_uuid(i)) self.assertEqual(s['name'], 'server%d' % (i + 1)) self.assertIsNone(s.get('image', None)) expected_links = [ { "rel": "self", "href": "http://localhost" + ( self.path_with_id_v2 % s['id']), }, { "rel": "bookmark", "href": "http://localhost" + ( self.path_with_id % s['id']), }, ] self.assertEqual(s['links'], expected_links) def test_get_servers_with_limit(self): req = self.req(self.path_with_query % 'limit=3') res_dict = self.controller.index(req) servers = res_dict['servers'] self.assertEqual([s['id'] for s in servers], [fakes.get_fake_uuid(i) for i in range(len(servers))]) servers_links = res_dict['servers_links'] self.assertEqual(servers_links[0]['rel'], 'next') href_parts = urlparse.urlparse(servers_links[0]['href']) self.assertEqual('/v2' + self.path, href_parts.path) params = urlparse.parse_qs(href_parts.query) expected_params = {'limit': ['3'], 'marker': [fakes.get_fake_uuid(2)]} self.assertThat(params, matchers.DictMatches(expected_params)) def test_get_servers_with_limit_bad_value(self): req = self.req(self.path_with_query % 'limit=aaa') self.assertRaises(exception.ValidationError, self.controller.index, req) def test_get_server_details_empty(self): self.mock_get_all.side_effect = None self.mock_get_all.return_value = objects.InstanceList(objects=[]) req = self.req(self.path_detail) expected_attrs = ['flavor', 'info_cache', 'metadata'] if api_version_request.is_supported(req, '2.16'): expected_attrs.append('services') res_dict = self.controller.detail(req) self.assertEqual(0, len(res_dict['servers'])) self.mock_get_all.assert_called_once_with( req.environ['nova.context'], expected_attrs=sorted(expected_attrs), limit=1000, marker=None, search_opts={'deleted': False, 'project_id': self.project_id}, sort_dirs=['desc'], sort_keys=['created_at'], cell_down_support=False, all_tenants=False) def test_get_server_details_with_bad_name(self): req = self.req(self.path_detail_with_query % 'name=%2Binstance') self.assertRaises(exception.ValidationError, self.controller.index, req) def test_get_server_details_with_limit(self): req = self.req(self.path_detail_with_query % 'limit=3') res = self.controller.detail(req) servers = res['servers'] self.assertEqual([s['id'] for s in servers], [fakes.get_fake_uuid(i) for i in range(len(servers))]) servers_links = res['servers_links'] self.assertEqual(servers_links[0]['rel'], 'next') href_parts = urlparse.urlparse(servers_links[0]['href']) self.assertEqual(self.path_detail_v2, href_parts.path) params = urlparse.parse_qs(href_parts.query) expected = {'limit': ['3'], 'marker': [fakes.get_fake_uuid(2)]} self.assertThat(params, matchers.DictMatches(expected)) def test_get_server_details_with_limit_bad_value(self): req = self.req(self.path_detail_with_query % 'limit=aaa') self.assertRaises(exception.ValidationError, 
self.controller.detail, req) def test_get_server_details_with_limit_and_other_params(self): req = self.req(self.path_detail_with_query % 'limit=3&blah=2:t&sort_key=uuid&sort_dir=asc') res = self.controller.detail(req) servers = res['servers'] self.assertEqual([s['id'] for s in servers], [fakes.get_fake_uuid(i) for i in range(len(servers))]) servers_links = res['servers_links'] self.assertEqual(servers_links[0]['rel'], 'next') href_parts = urlparse.urlparse(servers_links[0]['href']) self.assertEqual(self.path_detail_v2, href_parts.path) params = urlparse.parse_qs(href_parts.query) expected = {'limit': ['3'], 'sort_key': ['uuid'], 'sort_dir': ['asc'], 'marker': [fakes.get_fake_uuid(2)]} self.assertThat(params, matchers.DictMatches(expected)) def test_get_servers_with_too_big_limit(self): req = self.req(self.path_with_query % 'limit=30') res_dict = self.controller.index(req) self.assertNotIn('servers_links', res_dict) def test_get_servers_with_bad_limit(self): req = self.req(self.path_with_query % 'limit=asdf') self.assertRaises(exception.ValidationError, self.controller.index, req) def test_get_servers_with_marker(self): url = '%s?marker=%s' % (self.path_v2, fakes.get_fake_uuid(2)) req = self.req(url) servers = self.controller.index(req)['servers'] self.assertEqual([s['name'] for s in servers], ["server4", "server5"]) def test_get_servers_with_limit_and_marker(self): url = '%s?limit=2&marker=%s' % (self.path_v2, fakes.get_fake_uuid(1)) req = self.req(url) servers = self.controller.index(req)['servers'] self.assertEqual([s['name'] for s in servers], ['server3', 'server4']) def test_get_servers_with_bad_marker(self): req = self.req(self.path_with_query % 'limit=2&marker=asdf') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req) def test_get_servers_with_invalid_filter_param(self): req = self.req(self.path_with_query % 'info_cache=asdf', use_admin_context=True) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req) req = self.req(self.path_with_query % '__foo__=asdf', use_admin_context=True) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req) def test_get_servers_with_invalid_regex_filter_param(self): req = self.req(self.path_with_query % 'flavor=[[[', use_admin_context=True) self.assertRaises(exception.ValidationError, self.controller.index, req) def test_get_servers_with_empty_regex_filter_param(self): req = self.req(self.path_with_query % 'flavor=', use_admin_context=True) self.assertRaises(exception.ValidationError, self.controller.index, req) def test_get_servers_detail_with_empty_regex_filter_param(self): req = self.req(self.path_detail_with_query % 'flavor=', use_admin_context=True) self.assertRaises(exception.ValidationError, self.controller.detail, req) def test_get_servers_invalid_sort_key(self): # "hidden" is a real field for instances but not exposed in the API. req = self.req(self.path_with_query % 'sort_key=hidden&sort_dir=desc') self.assertRaises(exception.ValidationError, self.controller.index, req) def test_get_servers_ignore_sort_key(self): req = self.req(self.path_with_query % 'sort_key=vcpus&sort_dir=asc') self.controller.index(req) self.mock_get_all.assert_called_once_with( mock.ANY, search_opts=mock.ANY, limit=mock.ANY, marker=mock.ANY, expected_attrs=mock.ANY, sort_keys=[], sort_dirs=[], cell_down_support=False, all_tenants=False) def test_get_servers_ignore_locked_sort_key(self): # Prior to microversion 2.73 locked sort key is ignored. 
req = self.req(self.path_with_query % 'sort_key=locked&sort_dir=asc') self.controller.detail(req) self.mock_get_all.assert_called_once_with( mock.ANY, search_opts=mock.ANY, limit=mock.ANY, marker=mock.ANY, expected_attrs=mock.ANY, sort_keys=[], sort_dirs=[], cell_down_support=False, all_tenants=False) def test_get_servers_ignore_sort_key_only_one_dir(self): req = self.req(self.path_with_query % 'sort_key=user_id&sort_key=vcpus&sort_dir=asc') self.controller.index(req) self.mock_get_all.assert_called_once_with( mock.ANY, search_opts=mock.ANY, limit=mock.ANY, marker=mock.ANY, expected_attrs=mock.ANY, sort_keys=['user_id'], sort_dirs=['asc'], cell_down_support=False, all_tenants=False) def test_get_servers_ignore_sort_key_with_no_sort_dir(self): req = self.req(self.path_with_query % 'sort_key=vcpus&sort_key=user_id') self.controller.index(req) self.mock_get_all.assert_called_once_with( mock.ANY, search_opts=mock.ANY, limit=mock.ANY, marker=mock.ANY, expected_attrs=mock.ANY, sort_keys=['user_id'], sort_dirs=[], cell_down_support=False, all_tenants=False) def test_get_servers_ignore_sort_key_with_bad_sort_dir(self): req = self.req(self.path_with_query % 'sort_key=vcpus&sort_dir=bad_dir') self.controller.index(req) self.mock_get_all.assert_called_once_with( mock.ANY, search_opts=mock.ANY, limit=mock.ANY, marker=mock.ANY, expected_attrs=mock.ANY, sort_keys=[], sort_dirs=[], cell_down_support=False, all_tenants=False) def test_get_servers_non_admin_with_admin_only_sort_key(self): req = self.req(self.path_with_query % 'sort_key=host&sort_dir=desc') self.assertRaises(webob.exc.HTTPForbidden, self.controller.index, req) def test_get_servers_admin_with_admin_only_sort_key(self): req = self.req(self.path_with_query % 'sort_key=node&sort_dir=desc', use_admin_context=True) self.controller.detail(req) self.mock_get_all.assert_called_once_with( mock.ANY, search_opts=mock.ANY, limit=mock.ANY, marker=mock.ANY, expected_attrs=mock.ANY, sort_keys=['node'], sort_dirs=['desc'], cell_down_support=False, all_tenants=False) def test_get_servers_with_bad_option(self): def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): db_list = [fakes.stub_instance(100, uuid=uuids.fake)] return instance_obj._make_instance_list( context, objects.InstanceList(), db_list, FIELDS) self.mock_get_all.side_effect = fake_get_all req = self.req(self.path_with_query % 'unknownoption=whee') servers = self.controller.index(req)['servers'] self.assertEqual(1, len(servers)) self.assertEqual(uuids.fake, servers[0]['id']) self.mock_get_all.assert_called_once_with( req.environ['nova.context'], expected_attrs=[], limit=1000, marker=None, search_opts={'deleted': False, 'project_id': self.project_id}, sort_dirs=['desc'], sort_keys=['created_at'], cell_down_support=False, all_tenants=False) def test_get_servers_with_locked_filter(self): # Prior to microversion 2.73 locked filter parameter is ignored. 
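        # The 2.73 counterpart, where locked=true is translated into a
        # 'locked' search option, is covered by
        # ServersControllerTestV273.test_get_servers_with_locked_filter below.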
        def fake_get_all(context, search_opts=None, limit=None, marker=None,
                         expected_attrs=None, sort_keys=None, sort_dirs=None,
                         cell_down_support=False, all_tenants=False):
            db_list = [fakes.stub_instance(100, uuid=uuids.fake)]
            return instance_obj._make_instance_list(
                context, objects.InstanceList(), db_list, FIELDS)

        self.mock_get_all.side_effect = fake_get_all

        req = self.req(self.path_with_query % 'locked=true')
        servers = self.controller.index(req)['servers']

        self.assertEqual(1, len(servers))
        self.assertEqual(uuids.fake, servers[0]['id'])
        self.mock_get_all.assert_called_once_with(
            req.environ['nova.context'], expected_attrs=[], limit=1000,
            marker=None,
            search_opts={'deleted': False, 'project_id': self.project_id},
            sort_dirs=['desc'], sort_keys=['created_at'],
            cell_down_support=False, all_tenants=False)

    def test_get_servers_allows_image(self):
        def fake_get_all(context, search_opts=None, limit=None, marker=None,
                         expected_attrs=None, sort_keys=None, sort_dirs=None,
                         cell_down_support=False, all_tenants=False):
            self.assertIsNotNone(search_opts)
            self.assertIn('image', search_opts)
            self.assertEqual(search_opts['image'], '12345')
            db_list = [fakes.stub_instance(100, uuid=uuids.fake)]
            return instance_obj._make_instance_list(
                context, objects.InstanceList(), db_list, FIELDS)

        self.mock_get_all.side_effect = fake_get_all

        req = self.req(self.path_with_query % 'image=12345')
        servers = self.controller.index(req)['servers']

        self.assertEqual(1, len(servers))
        self.assertEqual(uuids.fake, servers[0]['id'])

    def test_tenant_id_filter_no_admin_context(self):
        def fake_get_all(context, search_opts=None, **kwargs):
            self.assertIsNotNone(search_opts)
            self.assertNotIn('tenant_id', search_opts)
            self.assertEqual(self.project_id, search_opts['project_id'])
            return [fakes.stub_instance_obj(100)]

        req = self.req(self.path_with_query % 'tenant_id=newfake')
        self.mock_get_all.side_effect = fake_get_all
        servers = self.controller.index(req)['servers']
        self.assertEqual(len(servers), 1)

    def test_tenant_id_filter_admin_context(self):
        """Test tenant_id search opt is dropped if all_tenants is not set."""
        def fake_get_all(context, search_opts=None, **kwargs):
            self.assertIsNotNone(search_opts)
            self.assertNotIn('tenant_id', search_opts)
            self.assertEqual(self.project_id, search_opts['project_id'])
            return [fakes.stub_instance_obj(100)]

        req = self.req(self.path_with_query % 'tenant_id=newfake',
                       use_admin_context=True)
        self.mock_get_all.side_effect = fake_get_all
        servers = self.controller.index(req)['servers']
        self.assertEqual(len(servers), 1)

    def test_all_tenants_param_normal(self):
        def fake_get_all(context, search_opts=None, **kwargs):
            self.assertNotIn('project_id', search_opts)
            return [fakes.stub_instance_obj(100)]

        req = self.req(self.path_with_query % 'all_tenants',
                       use_admin_context=True)
        self.mock_get_all.side_effect = fake_get_all
        servers = self.controller.index(req)['servers']
        self.assertEqual(len(servers), 1)

    def test_all_tenants_param_one(self):
        def fake_get_all(context, search_opts=None, **kwargs):
            self.assertNotIn('project_id', search_opts)
            return [fakes.stub_instance_obj(100)]

        self.mock_get_all.side_effect = fake_get_all
        req = self.req(self.path_with_query % 'all_tenants=1',
                       use_admin_context=True)
        servers = self.controller.index(req)['servers']
        self.assertEqual(1, len(servers))

    def test_all_tenants_param_zero(self):
        def fake_get_all(context, search_opts=None, **kwargs):
            self.assertNotIn('all_tenants', search_opts)
            return [fakes.stub_instance_obj(100)]

        self.mock_get_all.side_effect = fake_get_all
        req = self.req(self.path_with_query %
'all_tenants=0', use_admin_context=True) servers = self.controller.index(req)['servers'] self.assertEqual(1, len(servers)) def test_all_tenants_param_false(self): def fake_get_all(context, search_opts=None, **kwargs): self.assertNotIn('all_tenants', search_opts) return [fakes.stub_instance_obj(100)] self.mock_get_all.side_effect = fake_get_all req = self.req(self.path_with_query % 'all_tenants=false', use_admin_context=True) servers = self.controller.index(req)['servers'] self.assertEqual(1, len(servers)) def test_all_tenants_param_invalid(self): def fake_get_all(context, search_opts=None, **kwargs): self.assertNotIn('all_tenants', search_opts) return [fakes.stub_instance_obj(100)] self.mock_get_all.side_effect = fake_get_all req = self.req(self.path_with_query % 'all_tenants=xxx', use_admin_context=True) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req) def test_admin_restricted_tenant(self): def fake_get_all(context, search_opts=None, **kwargs): self.assertIsNotNone(search_opts) self.assertEqual(search_opts['project_id'], self.project_id) return [fakes.stub_instance_obj(100)] self.mock_get_all.side_effect = fake_get_all req = self.req(self.path, use_admin_context=True) servers = self.controller.index(req)['servers'] self.assertEqual(1, len(servers)) def test_all_tenants_pass_policy(self): def fake_get_all(context, search_opts=None, **kwargs): self.assertIsNotNone(search_opts) self.assertNotIn('project_id', search_opts) self.assertTrue(context.is_admin) return [fakes.stub_instance_obj(100)] self.mock_get_all.side_effect = fake_get_all rules = { "os_compute_api:servers:index": "project_id:%s" % self.project_id, "os_compute_api:servers:index:get_all_tenants": "project_id:%s" % self.project_id } policy.set_rules(oslo_policy.Rules.from_dict(rules)) req = self.req(self.path_with_query % 'all_tenants=1') servers = self.controller.index(req)['servers'] self.assertEqual(1, len(servers)) def test_all_tenants_fail_policy(self): def fake_get_all(context, search_opts=None, **kwargs): self.assertIsNotNone(search_opts) return [fakes.stub_instance_obj(100)] rules = { "os_compute_api:servers:index:get_all_tenants": "project_id:non_fake", "os_compute_api:servers:get_all": "project_id:%s" % self.project_id, } policy.set_rules(oslo_policy.Rules.from_dict(rules)) self.mock_get_all.side_effect = fake_get_all req = self.req(self.path_with_query % 'all_tenants=1') self.assertRaises(exception.PolicyNotAuthorized, self.controller.index, req) def test_get_servers_allows_flavor(self): def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): self.assertIsNotNone(search_opts) self.assertIn('flavor', search_opts) # flavor is an integer ID self.assertEqual(search_opts['flavor'], '12345') return objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=uuids.fake)]) self.mock_get_all.side_effect = fake_get_all req = self.req(self.path_with_query % 'flavor=12345') servers = self.controller.index(req)['servers'] self.assertEqual(1, len(servers)) self.assertEqual(uuids.fake, servers[0]['id']) def test_get_servers_with_bad_flavor(self): req = self.req(self.path_with_query % 'flavor=abcde') self.mock_get_all.side_effect = None self.mock_get_all.return_value = objects.InstanceList(objects=[]) servers = self.controller.index(req)['servers'] self.assertEqual(len(servers), 0) def test_get_server_details_with_bad_flavor(self): req = self.req(self.path_with_query % 'flavor=abcde') 
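        # As with the index case above, a flavor value that matches no
        # instances simply produces an empty result set rather than an error.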
        self.mock_get_all.side_effect = None
        self.mock_get_all.return_value = objects.InstanceList(objects=[])
        servers = self.controller.detail(req)['servers']
        self.assertThat(servers, testtools.matchers.HasLength(0))

    def test_get_servers_allows_status(self):
        def fake_get_all(context, search_opts=None, limit=None, marker=None,
                         expected_attrs=None, sort_keys=None, sort_dirs=None,
                         cell_down_support=False, all_tenants=False):
            self.assertIsNotNone(search_opts)
            self.assertIn('vm_state', search_opts)
            self.assertEqual(search_opts['vm_state'], [vm_states.ACTIVE])
            return objects.InstanceList(
                objects=[fakes.stub_instance_obj(100, uuid=uuids.fake)])

        self.mock_get_all.side_effect = fake_get_all

        req = self.req(self.path_with_query % 'status=active')
        servers = self.controller.index(req)['servers']

        self.assertEqual(1, len(servers))
        self.assertEqual(uuids.fake, servers[0]['id'])

    def test_get_servers_allows_task_status(self):
        def fake_get_all(context, search_opts=None, limit=None, marker=None,
                         expected_attrs=None, sort_keys=None, sort_dirs=None,
                         cell_down_support=False, all_tenants=False):
            self.assertIsNotNone(search_opts)
            self.assertIn('task_state', search_opts)
            self.assertEqual([task_states.REBOOT_PENDING,
                              task_states.REBOOT_STARTED,
                              task_states.REBOOTING],
                             search_opts['task_state'])
            return objects.InstanceList(
                objects=[fakes.stub_instance_obj(
                    100, uuid=uuids.fake,
                    task_state=task_states.REBOOTING)])

        self.mock_get_all.side_effect = fake_get_all

        req = self.req(self.path_with_query % 'status=reboot')
        servers = self.controller.index(req)['servers']

        self.assertEqual(1, len(servers))
        self.assertEqual(uuids.fake, servers[0]['id'])

    def test_get_servers_resize_status(self):
        # When filtering by the 'resize' status, it maps to a list of
        # vm states.
        def fake_get_all(context, search_opts=None, limit=None, marker=None,
                         expected_attrs=None, sort_keys=None, sort_dirs=None,
                         cell_down_support=False, all_tenants=False):
            self.assertIn('vm_state', search_opts)
            self.assertEqual(search_opts['vm_state'],
                             [vm_states.ACTIVE, vm_states.STOPPED])
            return objects.InstanceList(
                objects=[fakes.stub_instance_obj(100, uuid=uuids.fake)])

        self.mock_get_all.side_effect = fake_get_all

        req = self.req(self.path_with_query % 'status=resize')

        servers = self.controller.detail(req)['servers']
        self.assertEqual(1, len(servers))
        self.assertEqual(servers[0]['id'], uuids.fake)

    def test_get_servers_invalid_status(self):
        # Test getting servers by invalid status.
req = self.req(self.path_with_query % 'status=baloney', use_admin_context=False) servers = self.controller.index(req)['servers'] self.assertEqual(len(servers), 0) def test_get_servers_deleted_status_as_user(self): req = self.req(self.path_with_query % 'status=deleted', use_admin_context=False) self.assertRaises(webob.exc.HTTPForbidden, self.controller.detail, req) def test_get_servers_deleted_status_as_admin(self): def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): self.assertIn('vm_state', search_opts) self.assertEqual(search_opts['vm_state'], ['deleted']) return objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=uuids.fake)]) self.mock_get_all.side_effect = fake_get_all req = self.req(self.path_with_query % 'status=deleted', use_admin_context=True) servers = self.controller.detail(req)['servers'] self.assertEqual(1, len(servers)) self.assertEqual(uuids.fake, servers[0]['id']) def test_get_servers_deleted_filter_str_to_bool(self): db_list = objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=uuids.fake, vm_state='deleted')]) self.mock_get_all.side_effect = None self.mock_get_all.return_value = db_list req = self.req(self.path_with_query % 'deleted=true', use_admin_context=True) servers = self.controller.detail(req)['servers'] self.assertEqual(1, len(servers)) self.assertEqual(uuids.fake, servers[0]['id']) # Assert that 'deleted' filter value is converted to boolean # while calling get_all() method. expected_search_opts = {'deleted': True, 'project_id': self.project_id} self.assertEqual(expected_search_opts, self.mock_get_all.call_args[1]['search_opts']) def test_get_servers_deleted_filter_invalid_str(self): db_list = objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=uuids.fake)]) self.mock_get_all.side_effect = None self.mock_get_all.return_value = db_list req = fakes.HTTPRequest.blank(self.path_with_query % 'deleted=abc', use_admin_context=True) servers = self.controller.detail(req)['servers'] self.assertEqual(1, len(servers)) self.assertEqual(uuids.fake, servers[0]['id']) # Assert that invalid 'deleted' filter value is converted to boolean # False while calling get_all() method. 
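        # (i.e. an unrecognised boolean string is treated as False instead of
        # being rejected, so the query still succeeds.)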
expected_search_opts = {'deleted': False, 'project_id': self.project_id} self.assertEqual(expected_search_opts, self.mock_get_all.call_args[1]['search_opts']) def test_get_servers_allows_name(self): def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): self.assertIsNotNone(search_opts) self.assertIn('name', search_opts) self.assertEqual(search_opts['name'], 'whee.*') self.assertEqual([], expected_attrs) return objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=uuids.fake)]) self.mock_get_all.side_effect = fake_get_all req = self.req(self.path_with_query % 'name=whee.*') servers = self.controller.index(req)['servers'] self.assertEqual(1, len(servers)) self.assertEqual(uuids.fake, servers[0]['id']) def test_get_servers_flavor_not_found(self): self.mock_get_all.side_effect = exception.FlavorNotFound(flavor_id=1) req = fakes.HTTPRequest.blank( self.path_with_query % 'status=active&flavor=abc') servers = self.controller.index(req)['servers'] self.assertEqual(0, len(servers)) def test_get_servers_allows_changes_since(self): def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): self.assertIsNotNone(search_opts) self.assertIn('changes-since', search_opts) changes_since = datetime.datetime(2011, 1, 24, 17, 8, 1, tzinfo=iso8601.iso8601.UTC) self.assertEqual(search_opts['changes-since'], changes_since) self.assertNotIn('deleted', search_opts) return objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=uuids.fake)]) self.mock_get_all.side_effect = fake_get_all params = 'changes-since=2011-01-24T17:08:01Z' req = self.req(self.path_with_query % params) servers = self.controller.index(req)['servers'] self.assertEqual(1, len(servers)) self.assertEqual(uuids.fake, servers[0]['id']) def test_get_servers_allows_changes_since_bad_value(self): params = 'changes-since=asdf' req = self.req(self.path_with_query % params) self.assertRaises(exception.ValidationError, self.controller.index, req) def test_get_servers_allows_changes_since_bad_value_on_compat_mode(self): params = 'changes-since=asdf' req = self.req(self.path_with_query % params) req.set_legacy_v2() self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req) def test_get_servers_admin_filters_as_user(self): """Test getting servers by admin-only or unknown options when context is not admin. 
Make sure the admin and unknown options are stripped before they get to compute_api.get_all() """ def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): self.assertIsNotNone(search_opts) # Allowed by user self.assertIn('name', search_opts) self.assertIn('ip', search_opts) # OSAPI converts status to vm_state self.assertIn('vm_state', search_opts) # Allowed only by admins with admin API on self.assertNotIn('unknown_option', search_opts) return objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=uuids.fake)]) self.mock_get_all.side_effect = fake_get_all query_str = "name=foo&ip=10.*&status=active&unknown_option=meow" req = fakes.HTTPRequest.blank(self.path_with_query % query_str) res = self.controller.index(req) servers = res['servers'] self.assertEqual(1, len(servers)) self.assertEqual(uuids.fake, servers[0]['id']) def test_get_servers_admin_options_as_admin(self): """Test getting servers by admin-only or unknown options when context is admin. All options should be passed """ def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): self.assertIsNotNone(search_opts) # Allowed by user self.assertIn('name', search_opts) self.assertIn('terminated_at', search_opts) # OSAPI converts status to vm_state self.assertIn('vm_state', search_opts) # Allowed only by admins with admin API on self.assertIn('ip', search_opts) self.assertNotIn('unknown_option', search_opts) return objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=uuids.fake)]) self.mock_get_all.side_effect = fake_get_all query_str = ("name=foo&ip=10.*&status=active&unknown_option=meow&" "terminated_at=^2016-02-01.*") req = self.req(self.path_with_query % query_str, use_admin_context=True) servers = self.controller.index(req)['servers'] self.assertEqual(1, len(servers)) self.assertEqual(uuids.fake, servers[0]['id']) def test_get_servers_admin_filters_as_user_with_policy_override(self): """Test getting servers by admin-only or unknown options when context is not admin but policy allows. 
""" server_uuid = uuids.fake def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): self.assertIsNotNone(search_opts) # Allowed by user self.assertIn('name', search_opts) self.assertIn('terminated_at', search_opts) # OSAPI converts status to vm_state self.assertIn('vm_state', search_opts) # Allowed only by admins with admin API on self.assertIn('ip', search_opts) self.assertNotIn('unknown_option', search_opts) # "hidden" is ignored as a filter parameter since it is only used # internally self.assertNotIn('hidden', search_opts) return objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=server_uuid)]) rules = { "os_compute_api:servers:index": "project_id:%s" % self.project_id, "os_compute_api:servers:allow_all_filters": "project_id:%s" % self.project_id, } policy.set_rules(oslo_policy.Rules.from_dict(rules)) self.mock_get_all.side_effect = fake_get_all query_str = ("name=foo&ip=10.*&status=active&unknown_option=meow&" "terminated_at=^2016-02-01.*&hidden=true") req = self.req(self.path_with_query % query_str) servers = self.controller.index(req)['servers'] self.assertEqual(len(servers), 1) self.assertEqual(servers[0]['id'], server_uuid) def test_get_servers_allows_ip(self): """Test getting servers by ip.""" def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): self.assertIsNotNone(search_opts) self.assertIn('ip', search_opts) self.assertEqual(search_opts['ip'], r'10\..*') return objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=uuids.fake)]) self.mock_get_all.side_effect = fake_get_all req = self.req(self.path_with_query % r'ip=10\..*') servers = self.controller.index(req)['servers'] self.assertEqual(1, len(servers)) self.assertEqual(uuids.fake, servers[0]['id']) def test_get_servers_admin_allows_ip6(self): """Test getting servers by ip6 with admin_api enabled and admin context """ def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): self.assertIsNotNone(search_opts) self.assertIn('ip6', search_opts) self.assertEqual(search_opts['ip6'], 'ffff.*') return objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=uuids.fake)]) self.mock_get_all.side_effect = fake_get_all req = self.req(self.path_with_query % 'ip6=ffff.*', use_admin_context=True) servers = self.controller.index(req)['servers'] self.assertEqual(1, len(servers)) self.assertEqual(uuids.fake, servers[0]['id']) def test_get_servers_allows_ip6_with_new_version(self): """Test getting servers by ip6 with new version requested and no admin context """ def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): self.assertIsNotNone(search_opts) self.assertIn('ip6', search_opts) self.assertEqual(search_opts['ip6'], 'ffff.*') return objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=uuids.fake)]) self.mock_get_all.side_effect = fake_get_all req = self.req(self.path_with_query % 'ip6=ffff.*') req.api_version_request = api_version_request.APIVersionRequest('2.5') servers = self.controller.index(req)['servers'] self.assertEqual(1, len(servers)) self.assertEqual(uuids.fake, servers[0]['id']) def test_get_servers_admin_allows_access_ip_v4(self): """Test getting 
servers by access_ip_v4 with admin_api enabled and admin context """ def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): self.assertIsNotNone(search_opts) self.assertIn('access_ip_v4', search_opts) self.assertEqual(search_opts['access_ip_v4'], 'ffff.*') return objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=uuids.fake)]) self.mock_get_all.side_effect = fake_get_all req = self.req(self.path_with_query % 'access_ip_v4=ffff.*', use_admin_context=True) servers = self.controller.index(req)['servers'] self.assertEqual(1, len(servers)) self.assertEqual(uuids.fake, servers[0]['id']) def test_get_servers_admin_allows_access_ip_v6(self): """Test getting servers by access_ip_v6 with admin_api enabled and admin context """ def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): self.assertIsNotNone(search_opts) self.assertIn('access_ip_v6', search_opts) self.assertEqual(search_opts['access_ip_v6'], 'ffff.*') return objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=uuids.fake)]) self.mock_get_all.side_effect = fake_get_all req = self.req(self.path_with_query % 'access_ip_v6=ffff.*', use_admin_context=True) servers = self.controller.index(req)['servers'] self.assertEqual(1, len(servers)) self.assertEqual(uuids.fake, servers[0]['id']) def _assertServerUsage(self, server, launched_at, terminated_at): resp_launched_at = timeutils.parse_isotime( server.get('OS-SRV-USG:launched_at')) self.assertEqual(timeutils.normalize_time(resp_launched_at), launched_at) resp_terminated_at = timeutils.parse_isotime( server.get('OS-SRV-USG:terminated_at')) self.assertEqual(timeutils.normalize_time(resp_terminated_at), terminated_at) def test_show_server_usage(self): DATE1 = datetime.datetime(year=2013, month=4, day=5, hour=12) DATE2 = datetime.datetime(year=2013, month=4, day=5, hour=13) req = self.req(self.path_with_id % FAKE_UUID) req.accept = 'application/json' req.method = 'GET' self.mock_get.side_effect = fakes.fake_compute_get( id=1, uuid=FAKE_UUID, launched_at=DATE1, terminated_at=DATE2, project_id=req.environ['nova.context'].project_id) res = req.get_response(compute.APIRouterV21()) self.assertEqual(res.status_int, 200) self.useFixture(utils_fixture.TimeFixture()) self._assertServerUsage(jsonutils.loads(res.body).get('server'), launched_at=DATE1, terminated_at=DATE2) def test_detail_server_usage(self): DATE1 = datetime.datetime(year=2013, month=4, day=5, hour=12) DATE2 = datetime.datetime(year=2013, month=4, day=5, hour=13) DATE3 = datetime.datetime(year=2013, month=4, day=5, hour=14) def fake_compute_get_all(*args, **kwargs): db_list = [ fakes.stub_instance_obj(context, id=2, uuid=FAKE_UUID, launched_at=DATE2, terminated_at=DATE3), fakes.stub_instance_obj(context, id=3, uuid=FAKE_UUID, launched_at=DATE1, terminated_at=DATE3), ] return objects.InstanceList(objects=db_list) self.mock_get_all.side_effect = fake_compute_get_all req = self.req(self.path_detail) req.accept = 'application/json' servers = req.get_response(compute.APIRouterV21()) self.assertEqual(servers.status_int, 200) self._assertServerUsage(jsonutils.loads( servers.body).get('servers')[0], launched_at=DATE2, terminated_at=DATE3) self._assertServerUsage(jsonutils.loads( servers.body).get('servers')[1], launched_at=DATE1, terminated_at=DATE3) def test_get_all_server_details(self): expected_flavor = { 
"id": "2", "links": [ { "rel": "bookmark", "href": ('http://localhost/%s/flavors/2' % self.project_id), }, ], } expected_image = { "id": "10", "links": [ { "rel": "bookmark", "href": ('http://localhost/%s/images/10' % self.project_id), }, ], } req = self.req(self.path_detail) res_dict = self.controller.detail(req) for i, s in enumerate(res_dict['servers']): self.assertEqual(s['id'], fakes.get_fake_uuid(i)) self.assertEqual(s['hostId'], '') self.assertEqual(s['name'], 'server%d' % (i + 1)) self.assertEqual(s['image'], expected_image) self.assertEqual(s['flavor'], expected_flavor) self.assertEqual(s['status'], 'ACTIVE') self.assertEqual(s['metadata']['seq'], str(i + 1)) def test_get_all_server_details_with_host(self): """We want to make sure that if two instances are on the same host, then they return the same hostId. If two instances are on different hosts, they should return different hostIds. In this test, there are 5 instances - 2 on one host and 3 on another. """ def return_servers_with_host(*args, **kwargs): return objects.InstanceList( objects=[fakes.stub_instance_obj(None, id=i + 1, user_id='fake', project_id='fake', host=i % 2, uuid=fakes.get_fake_uuid(i)) for i in range(5)]) self.mock_get_all.side_effect = return_servers_with_host req = self.req(self.path_detail) res_dict = self.controller.detail(req) server_list = res_dict['servers'] host_ids = [server_list[0]['hostId'], server_list[1]['hostId']] self.assertTrue(host_ids[0] and host_ids[1]) self.assertNotEqual(host_ids[0], host_ids[1]) for i, s in enumerate(server_list): self.assertEqual(s['id'], fakes.get_fake_uuid(i)) self.assertEqual(s['hostId'], host_ids[i % 2]) self.assertEqual(s['name'], 'server%d' % (i + 1)) def test_get_servers_joins_services(self): def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): cur = api_version_request.APIVersionRequest(self.wsgi_api_version) v216 = api_version_request.APIVersionRequest('2.16') if cur >= v216: self.assertIn('services', expected_attrs) else: self.assertNotIn('services', expected_attrs) return objects.InstanceList() self.mock_get_all.side_effect = fake_get_all req = self.req(self.path_detail, use_admin_context=True) self.assertIn('servers', self.controller.detail(req)) req = fakes.HTTPRequest.blank(self.path_detail, use_admin_context=True, version=self.wsgi_api_version) self.assertIn('servers', self.controller.detail(req)) class ServersControllerTestV23(ServersControllerTest): wsgi_api_version = '2.3' def setUp(self): super(ServersControllerTestV23, self).setUp() self.request = self.req(self.path_with_id % FAKE_UUID) self.project_id = self.request.environ['nova.context'].project_id self.mock_get.side_effect = fakes.fake_compute_get( id=2, uuid=FAKE_UUID, node="node-fake", reservation_id="r-1", launch_index=0, kernel_id=UUID1, ramdisk_id=UUID2, display_name="server2", root_device_name="/dev/vda", user_data="userdata", metadata={"seq": "2"}, availability_zone='nova', launched_at=None, terminated_at=None, task_state=None, vm_state=vm_states.ACTIVE, power_state=1, project_id=self.project_id) def _get_server_data_dict(self, uuid, image_bookmark, flavor_bookmark, status="ACTIVE", progress=100): server_dict = super(ServersControllerTestV23, self)._get_server_data_dict(uuid, image_bookmark, flavor_bookmark, status, progress) server_dict['server']["OS-EXT-SRV-ATTR:hostname"] = "server2" server_dict['server'][ "OS-EXT-SRV-ATTR:hypervisor_hostname"] = "node-fake" 
server_dict['server']["OS-EXT-SRV-ATTR:kernel_id"] = UUID1 server_dict['server']["OS-EXT-SRV-ATTR:launch_index"] = 0 server_dict['server']["OS-EXT-SRV-ATTR:ramdisk_id"] = UUID2 server_dict['server']["OS-EXT-SRV-ATTR:reservation_id"] = "r-1" server_dict['server']["OS-EXT-SRV-ATTR:root_device_name"] = "/dev/vda" server_dict['server']["OS-EXT-SRV-ATTR:user_data"] = "userdata" server_dict['server']["OS-EXT-STS:task_state"] = None server_dict['server']["OS-EXT-STS:vm_state"] = vm_states.ACTIVE server_dict['server']["OS-EXT-STS:power_state"] = 1 server_dict['server']["os-extended-volumes:volumes_attached"] = [ {'id': 'some_volume_1', 'delete_on_termination': True}, {'id': 'some_volume_2', 'delete_on_termination': False}] server_dict['server']["tenant_id"] = self.project_id return server_dict def test_show(self): image_bookmark = "http://localhost/%s/images/10" % self.project_id flavor_bookmark = "http://localhost/%s/flavors/2" % self.project_id res_dict = self.controller.show(self.request, FAKE_UUID) expected_server = self._get_server_data_dict(FAKE_UUID, image_bookmark, flavor_bookmark, progress=0) self.assertThat(res_dict, matchers.DictMatches(expected_server)) def test_detail(self): def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None): obj_list = [] for i in range(2): server = fakes.stub_instance_obj(context, id=2, uuid=FAKE_UUID, node="node-fake", reservation_id="r-1", launch_index=0, kernel_id=UUID1, ramdisk_id=UUID2, display_name="server2", root_device_name="/dev/vda", user_data="userdata", metadata={"seq": "2"}, availability_zone='nova', launched_at=None, terminated_at=None, task_state=None, vm_state=vm_states.ACTIVE, power_state=1, project_id=context.project_id) obj_list.append(server) return objects.InstanceList(objects=obj_list) self.mock_get_all.side_effect = None req = self.req(self.path_detail) self.mock_get_all.return_value = fake_get_all( req.environ['nova.context']) servers_list = self.controller.detail(req) image_bookmark = "http://localhost/%s/images/10" % self.project_id flavor_bookmark = "http://localhost/%s/flavors/2" % self.project_id expected_server = self._get_server_data_dict(FAKE_UUID, image_bookmark, flavor_bookmark, progress=0) self.assertIn(expected_server['server'], servers_list['servers']) class ServersControllerTestV29(ServersControllerTest): wsgi_api_version = '2.9' def setUp(self): super(ServersControllerTestV29, self).setUp() self.mock_get.side_effect = fakes.fake_compute_get( id=2, uuid=FAKE_UUID, node="node-fake", reservation_id="r-1", launch_index=0, kernel_id=UUID1, ramdisk_id=UUID2, display_name="server2", root_device_name="/dev/vda", user_data="userdata", metadata={"seq": "2"}, availability_zone='nova', launched_at=None, terminated_at=None, task_state=None, vm_state=vm_states.ACTIVE, power_state=1, project_id=self.request.environ['nova.context'].project_id) def _get_server_data_dict(self, uuid, image_bookmark, flavor_bookmark, status="ACTIVE", progress=100): server_dict = super(ServersControllerTestV29, self)._get_server_data_dict(uuid, image_bookmark, flavor_bookmark, status, progress) server_dict['server']['locked'] = False server_dict['server']["OS-EXT-SRV-ATTR:hostname"] = "server2" server_dict['server'][ "OS-EXT-SRV-ATTR:hypervisor_hostname"] = "node-fake" server_dict['server']["OS-EXT-SRV-ATTR:kernel_id"] = UUID1 server_dict['server']["OS-EXT-SRV-ATTR:launch_index"] = 0 server_dict['server']["OS-EXT-SRV-ATTR:ramdisk_id"] = UUID2 
server_dict['server']["OS-EXT-SRV-ATTR:reservation_id"] = "r-1" server_dict['server']["OS-EXT-SRV-ATTR:root_device_name"] = "/dev/vda" server_dict['server']["OS-EXT-SRV-ATTR:user_data"] = "userdata" server_dict['server']["OS-EXT-STS:task_state"] = None server_dict['server']["OS-EXT-STS:vm_state"] = vm_states.ACTIVE server_dict['server']["OS-EXT-STS:power_state"] = 1 server_dict['server']["os-extended-volumes:volumes_attached"] = [ {'id': 'some_volume_1', 'delete_on_termination': True}, {'id': 'some_volume_2', 'delete_on_termination': False}] server_dict['server']["tenant_id"] = self.request.environ[ 'nova.context'].project_id return server_dict def _test_get_server_with_lock(self, locked_by): image_bookmark = "http://localhost/%s/images/10" % self.project_id flavor_bookmark = "http://localhost/%s/flavors/2" % self.project_id req = self.req(self.path_with_id % FAKE_UUID) project_id = req.environ['nova.context'].project_id self.mock_get.side_effect = fakes.fake_compute_get( id=2, locked_by=locked_by, uuid=FAKE_UUID, node="node-fake", reservation_id="r-1", launch_index=0, kernel_id=UUID1, ramdisk_id=UUID2, display_name="server2", root_device_name="/dev/vda", user_data="userdata", metadata={"seq": "2"}, availability_zone='nova', launched_at=None, terminated_at=None, task_state=None, vm_state=vm_states.ACTIVE, power_state=1, project_id=project_id) res_dict = self.controller.show(req, FAKE_UUID) expected_server = self._get_server_data_dict(FAKE_UUID, image_bookmark, flavor_bookmark, progress=0) expected_server['server']['locked'] = True if locked_by else False expected_server['server']['tenant_id'] = project_id self.assertThat(res_dict, matchers.DictMatches(expected_server)) return res_dict def test_get_server_with_locked_by_admin(self): res_dict = self._test_get_server_with_lock('admin') self.assertTrue(res_dict['server']['locked']) def test_get_server_with_locked_by_owner(self): res_dict = self._test_get_server_with_lock('owner') self.assertTrue(res_dict['server']['locked']) def test_get_server_not_locked(self): res_dict = self._test_get_server_with_lock(None) self.assertFalse(res_dict['server']['locked']) def _test_list_server_detail_with_lock(self, s1_locked, s2_locked): self.mock_get_all.side_effect = None self.mock_get_all.return_value = fake_instance_get_all_with_locked( context, [s1_locked, s2_locked], node="node-fake", reservation_id="r-1", launch_index=0, kernel_id=UUID1, ramdisk_id=UUID2, display_name="server2", root_device_name="/dev/vda", user_data="userdata", metadata={"seq": "2"}, availability_zone='nova', launched_at=None, terminated_at=None, task_state=None, vm_state=vm_states.ACTIVE, power_state=1) req = self.req(self.path_detail) servers_list = self.controller.detail(req) # Check that each returned server has the same 'locked' value # and 'id' as they were created. 
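        # fake_instance_get_all_with_locked derives each fake server's uuid
        # from its locked_by label, which is why fakes.get_fake_uuid(locked)
        # can be used to look each server up again here.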
for locked in [s1_locked, s2_locked]: server = next(server for server in servers_list['servers'] if (server['id'] == fakes.get_fake_uuid(locked))) expected = False if locked == 'not_locked' else True self.assertEqual(expected, server['locked']) def test_list_server_detail_with_locked_s1_admin_s2_owner(self): self._test_list_server_detail_with_lock('admin', 'owner') def test_list_server_detail_with_locked_s1_owner_s2_admin(self): self._test_list_server_detail_with_lock('owner', 'admin') def test_list_server_detail_with_locked_s1_admin_s2_admin(self): self._test_list_server_detail_with_lock('admin', 'admin') def test_list_server_detail_with_locked_s1_admin_s2_not_locked(self): self._test_list_server_detail_with_lock('admin', 'not_locked') def test_list_server_detail_with_locked_s1_s2_not_locked(self): self._test_list_server_detail_with_lock('not_locked', 'not_locked') def test_get_servers_remove_non_search_options(self): self.mock_get_all.side_effect = None req = fakes.HTTPRequestV21.blank('/servers' '?sort_key=uuid&sort_dir=asc' '&sort_key=user_id&sort_dir=desc' '&limit=1&marker=123', use_admin_context=True) self.controller.index(req) kwargs = self.mock_get_all.call_args[1] search_opts = kwargs['search_opts'] for key in ('sort_key', 'sort_dir', 'limit', 'marker'): self.assertNotIn(key, search_opts) class ServersControllerTestV216(ServersControllerTest): wsgi_api_version = '2.16' def setUp(self): super(ServersControllerTestV216, self).setUp() self.mock_get.side_effect = fakes.fake_compute_get( id=2, uuid=FAKE_UUID, host="node-fake", node="node-fake", reservation_id="r-1", launch_index=0, kernel_id=UUID1, ramdisk_id=UUID2, display_name="server2", root_device_name="/dev/vda", user_data="userdata", metadata={"seq": "2"}, availability_zone='nova', launched_at=None, terminated_at=None, task_state=None, vm_state=vm_states.ACTIVE, power_state=1, project_id=self.request.environ['nova.context'].project_id) self.mock_get_instance_host_status = self.useFixture( fixtures.MockPatchObject( compute_api.API, 'get_instance_host_status', return_value='UP')).mock def _get_server_data_dict(self, uuid, image_bookmark, flavor_bookmark, status="ACTIVE", progress=100): server_dict = super(ServersControllerTestV216, self)._get_server_data_dict(uuid, image_bookmark, flavor_bookmark, status, progress) server_dict['server']['locked'] = False server_dict['server']["host_status"] = "UP" server_dict['server']["OS-EXT-SRV-ATTR:hostname"] = "server2" server_dict['server']['hostId'] = nova_utils.generate_hostid( 'node-fake', server_dict['server']['tenant_id']) server_dict['server']["OS-EXT-SRV-ATTR:host"] = "node-fake" server_dict['server'][ "OS-EXT-SRV-ATTR:hypervisor_hostname"] = "node-fake" server_dict['server']["OS-EXT-SRV-ATTR:kernel_id"] = UUID1 server_dict['server']["OS-EXT-SRV-ATTR:launch_index"] = 0 server_dict['server']["OS-EXT-SRV-ATTR:ramdisk_id"] = UUID2 server_dict['server']["OS-EXT-SRV-ATTR:reservation_id"] = "r-1" server_dict['server']["OS-EXT-SRV-ATTR:root_device_name"] = "/dev/vda" server_dict['server']["OS-EXT-SRV-ATTR:user_data"] = "userdata" server_dict['server']["OS-EXT-STS:task_state"] = None server_dict['server']["OS-EXT-STS:vm_state"] = vm_states.ACTIVE server_dict['server']["OS-EXT-STS:power_state"] = 1 server_dict['server']["os-extended-volumes:volumes_attached"] = [ {'id': 'some_volume_1', 'delete_on_termination': True}, {'id': 'some_volume_2', 'delete_on_termination': False}] server_dict['server']['tenant_id'] = self.request.environ[ 'nova.context'].project_id return server_dict 
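    # Microversion 2.16 adds the policy-controlled host_status field; the
    # helper below verifies that the extra compute RPC call is skipped when
    # policy denies both host_status rules.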
@mock.patch('nova.compute.api.API.get_instance_host_status') def _verify_host_status_policy_behavior(self, func, mock_get_host_status): # Set policy to disallow both host_status cases and verify we don't # call the get_instance_host_status compute RPC API. rules = { 'os_compute_api:servers:show:host_status': '!', 'os_compute_api:servers:show:host_status:unknown-only': '!', } orig_rules = policy.get_rules() policy.set_rules(oslo_policy.Rules.from_dict(rules), overwrite=False) func() mock_get_host_status.assert_not_called() # Restore the original rules. policy.set_rules(orig_rules) def test_show(self): image_bookmark = "http://localhost/%s/images/10" % self.project_id flavor_bookmark = "http://localhost/%s/flavors/2" % self.project_id res_dict = self.controller.show(self.request, FAKE_UUID) expected_server = self._get_server_data_dict(FAKE_UUID, image_bookmark, flavor_bookmark, progress=0) self.assertThat(res_dict, matchers.DictMatches(expected_server)) func = functools.partial(self.controller.show, self.request, FAKE_UUID) self._verify_host_status_policy_behavior(func) def test_detail(self): def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None): obj_list = [] for i in range(2): server = fakes.stub_instance_obj(context, id=2, uuid=FAKE_UUID, host="node-fake", node="node-fake", reservation_id="r-1", launch_index=0, kernel_id=UUID1, ramdisk_id=UUID2, display_name="server2", root_device_name="/dev/vda", user_data="userdata", metadata={"seq": "2"}, availability_zone='nova', launched_at=None, terminated_at=None, task_state=None, vm_state=vm_states.ACTIVE, power_state=1, project_id=context.project_id) obj_list.append(server) return objects.InstanceList(objects=obj_list) self.mock_get_all.side_effect = None req = self.req(self.path_detail) self.mock_get_all.return_value = fake_get_all( req.environ['nova.context']) servers_list = self.controller.detail(req) self.assertEqual(2, len(servers_list['servers'])) image_bookmark = "http://localhost/%s/images/10" % self.project_id flavor_bookmark = "http://localhost/%s/flavors/2" % self.project_id expected_server = self._get_server_data_dict(FAKE_UUID, image_bookmark, flavor_bookmark, progress=0) self.assertIn(expected_server['server'], servers_list['servers']) # We should have only gotten the host status once per host (and the # 2 servers in the response are using the same host). 
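        # Both fake servers report the same host ('node-fake'), so a single
        # host status lookup is expected to serve both entries in the
        # response.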
self.mock_get_instance_host_status.assert_called_once() func = functools.partial(self.controller.detail, req) self._verify_host_status_policy_behavior(func) class ServersControllerTestV219(ServersControllerTest): wsgi_api_version = '2.19' def setUp(self): super(ServersControllerTestV219, self).setUp() self.mock_get.side_effect = fakes.fake_compute_get( id=2, uuid=FAKE_UUID, node="node-fake", reservation_id="r-1", launch_index=0, kernel_id=UUID1, ramdisk_id=UUID2, display_name="server2", root_device_name="/dev/vda", user_data="userdata", metadata={"seq": "2"}, availability_zone='nova', launched_at=None, terminated_at=None, task_state=None, vm_state=vm_states.ACTIVE, power_state=1, project_id=self.request.environ['nova.context'].project_id) self.useFixture(fixtures.MockPatchObject( compute_api.API, 'get_instance_host_status', return_value='UP')).mock def _get_server_data_dict(self, uuid, image_bookmark, flavor_bookmark, status="ACTIVE", progress=100, description=None): server_dict = super(ServersControllerTestV219, self)._get_server_data_dict(uuid, image_bookmark, flavor_bookmark, status, progress) server_dict['server']['locked'] = False server_dict['server']['description'] = description server_dict['server']["host_status"] = "UP" server_dict['server']["OS-EXT-SRV-ATTR:hostname"] = "server2" server_dict['server'][ "OS-EXT-SRV-ATTR:hypervisor_hostname"] = "node-fake" server_dict['server']["OS-EXT-SRV-ATTR:kernel_id"] = UUID1 server_dict['server']["OS-EXT-SRV-ATTR:launch_index"] = 0 server_dict['server']["OS-EXT-SRV-ATTR:ramdisk_id"] = UUID2 server_dict['server']["OS-EXT-SRV-ATTR:reservation_id"] = "r-1" server_dict['server']["OS-EXT-SRV-ATTR:root_device_name"] = "/dev/vda" server_dict['server']["OS-EXT-SRV-ATTR:user_data"] = "userdata" server_dict['server']["OS-EXT-STS:task_state"] = None server_dict['server']["OS-EXT-STS:vm_state"] = vm_states.ACTIVE server_dict['server']["OS-EXT-STS:power_state"] = 1 server_dict['server']["os-extended-volumes:volumes_attached"] = [ {'id': 'some_volume_1', 'delete_on_termination': True}, {'id': 'some_volume_2', 'delete_on_termination': False}] return server_dict def _test_get_server_with_description(self, description): image_bookmark = "http://localhost/%s/images/10" % self.project_id flavor_bookmark = "http://localhost/%s/flavors/2" % self.project_id req = self.req(self.path_with_id % FAKE_UUID) project_id = req.environ['nova.context'].project_id self.mock_get.side_effect = fakes.fake_compute_get( id=2, display_description=description, uuid=FAKE_UUID, node="node-fake", reservation_id="r-1", launch_index=0, kernel_id=UUID1, ramdisk_id=UUID2, display_name="server2", root_device_name="/dev/vda", user_data="userdata", metadata={"seq": "2"}, availability_zone='nova', launched_at=None, terminated_at=None, task_state=None, vm_state=vm_states.ACTIVE, power_state=1, project_id=project_id) res_dict = self.controller.show(req, FAKE_UUID) expected_server = self._get_server_data_dict(FAKE_UUID, image_bookmark, flavor_bookmark, progress=0, description=description) expected_server['server']['tenant_id'] = project_id self.assertThat(res_dict, matchers.DictMatches(expected_server)) return res_dict def _test_list_server_detail_with_descriptions(self, s1_desc, s2_desc): self.mock_get_all.side_effect = None self.mock_get_all.return_value = ( fake_instance_get_all_with_description(context, [s1_desc, s2_desc], launched_at=None, terminated_at=None)) req = self.req(self.path_detail) servers_list = self.controller.detail(req) # Check that each returned server has the same 
'description' value # and 'id' as they were created. for desc in [s1_desc, s2_desc]: server = next(server for server in servers_list['servers'] if (server['id'] == fakes.get_fake_uuid(desc))) expected = desc self.assertEqual(expected, server['description']) def test_get_server_with_description(self): self._test_get_server_with_description('test desc') def test_list_server_detail_with_descriptions(self): self._test_list_server_detail_with_descriptions('desc1', 'desc2') class ServersControllerTestV226(ControllerTest): wsgi_api_version = '2.26' def test_get_server_with_tags_by_id(self): req = fakes.HTTPRequest.blank(self.path_with_id % FAKE_UUID, version=self.wsgi_api_version) ctxt = req.environ['nova.context'] tags = ['tag1', 'tag2'] def fake_get(*args, **kwargs): self.assertIn('tags', kwargs['expected_attrs']) fake_server = fakes.stub_instance_obj( ctxt, id=2, vm_state=vm_states.ACTIVE, progress=100, project_id=ctxt.project_id) tag_list = objects.TagList(objects=[ objects.Tag(resource_id=FAKE_UUID, tag=tag) for tag in tags]) fake_server.tags = tag_list return fake_server self.mock_get.side_effect = fake_get res_dict = self.controller.show(req, FAKE_UUID) self.assertIn('tags', res_dict['server']) self.assertEqual(tags, res_dict['server']['tags']) def _test_get_servers_allows_tag_filters(self, filter_name): query_string = '%s=t1,t2' % filter_name req = fakes.HTTPRequest.blank(self.path_with_query % query_string, version=self.wsgi_api_version) def fake_get_all(*a, **kw): self.assertIsNotNone(kw['search_opts']) self.assertIn(filter_name, kw['search_opts']) self.assertEqual(kw['search_opts'][filter_name], ['t1', 't2']) return objects.InstanceList( objects=[fakes.stub_instance_obj(req.environ['nova.context'], uuid=uuids.fake)]) self.mock_get_all.side_effect = fake_get_all servers = self.controller.index(req)['servers'] self.assertEqual(1, len(servers)) self.assertEqual(uuids.fake, servers[0]['id']) def test_get_servers_allows_tags_filter(self): self._test_get_servers_allows_tag_filters('tags') def test_get_servers_allows_tags_any_filter(self): self._test_get_servers_allows_tag_filters('tags-any') def test_get_servers_allows_not_tags_filter(self): self._test_get_servers_allows_tag_filters('not-tags') def test_get_servers_allows_not_tags_any_filter(self): self._test_get_servers_allows_tag_filters('not-tags-any') class ServerControllerTestV238(ControllerTest): wsgi_api_version = '2.38' def _test_invalid_status(self, is_admin): req = fakes.HTTPRequest.blank( self.path_detail_with_query % 'status=invalid', version=self.wsgi_api_version, use_admin_context=is_admin) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.detail, req) def test_list_servers_detail_invalid_status_for_admin(self): self._test_invalid_status(True) def test_list_servers_detail_invalid_status_for_non_admin(self): self._test_invalid_status(False) class ServerControllerTestV247(ControllerTest): """Server controller test for microversion 2.47 The intent here is simply to verify that when showing server details after microversion 2.47 that the flavor is shown as a dict of flavor information rather than as dict of id/links. The existence of the 'extra_specs' key is controlled by policy. 
""" wsgi_api_version = '2.47' @mock.patch.object(objects.TagList, 'get_by_resource_id') def test_get_all_server_details(self, mock_get_by_resource_id): # Fake out tags on the instances mock_get_by_resource_id.return_value = objects.TagList() expected_flavor = { 'disk': 20, 'ephemeral': 0, 'extra_specs': {}, 'original_name': u'm1.small', 'ram': 2048, 'swap': 0, 'vcpus': 1} req = fakes.HTTPRequest.blank(self.path_detail, version=self.wsgi_api_version) hits = [] real_auth = policy.authorize # Wrapper for authorize to count the number of times # we authorize for extra-specs def fake_auth(context, action, target): if 'extra-specs' in action: hits.append(1) return real_auth(context, action, target) with mock.patch('nova.policy.authorize') as mock_auth: mock_auth.side_effect = fake_auth res_dict = self.controller.detail(req) # We should have found more than one servers, but only hit the # policy check once self.assertGreater(len(res_dict['servers']), 1) self.assertEqual(1, len(hits)) for i, s in enumerate(res_dict['servers']): self.assertEqual(s['flavor'], expected_flavor) @mock.patch.object(objects.TagList, 'get_by_resource_id') def test_get_all_server_details_no_extra_spec(self, mock_get_by_resource_id): # Fake out tags on the instances mock_get_by_resource_id.return_value = objects.TagList() # Set the policy so we don't have permission to index # flavor extra-specs but are able to get server details. servers_rule = 'os_compute_api:servers:detail' extraspec_rule = 'os_compute_api:os-flavor-extra-specs:index' self.policy.set_rules({ extraspec_rule: 'rule:admin_api', servers_rule: '@'}) expected_flavor = { 'disk': 20, 'ephemeral': 0, 'original_name': u'm1.small', 'ram': 2048, 'swap': 0, 'vcpus': 1} req = fakes.HTTPRequest.blank(self.path_detail, version=self.wsgi_api_version) res_dict = self.controller.detail(req) for i, s in enumerate(res_dict['servers']): self.assertEqual(s['flavor'], expected_flavor) class ServerControllerTestV266(ControllerTest): """Server controller test for microversion 2.66 Add changes-before parameter to get servers or servers details of 2.66 microversion. Filters the response by a date and time stamp when the server last changed. Those changed before the specified date and time stamp are returned. 
""" wsgi_api_version = '2.66' def req(self, url, use_admin_context=False): return fakes.HTTPRequest.blank(url, use_admin_context=use_admin_context, version=self.wsgi_api_version) def test_get_servers_allows_changes_before(self): def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): self.assertIsNotNone(search_opts) self.assertIn('changes-before', search_opts) changes_before = datetime.datetime(2011, 1, 24, 17, 8, 1, tzinfo=iso8601.iso8601.UTC) self.assertEqual(search_opts['changes-before'], changes_before) self.assertNotIn('deleted', search_opts) return objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=uuids.fake)]) self.mock_get_all.side_effect = fake_get_all params = 'changes-before=2011-01-24T17:08:01Z' req = self.req(self.path_with_query % params) req.api_version_request = api_version_request.APIVersionRequest('2.66') servers = self.controller.index(req)['servers'] self.assertEqual(1, len(servers)) self.assertEqual(uuids.fake, servers[0]['id']) def test_get_servers_allows_changes_before_bad_value(self): params = 'changes-before=asdf' req = self.req(self.path_with_query % params) req.api_version_request = api_version_request.APIVersionRequest('2.66') self.assertRaises(exception.ValidationError, self.controller.index, req) def test_get_servers_allows_changes_before_bad_value_on_compat_mode(self): params = 'changes-before=asdf' req = self.req(self.path_with_query % params) req.api_version_request = api_version_request.APIVersionRequest('2.66') req.set_legacy_v2() self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req) def test_get_servers_allows_changes_since_and_changes_before(self): def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): self.assertIsNotNone(search_opts) self.assertIn('changes-since', search_opts) changes_since = datetime.datetime(2011, 1, 23, 17, 8, 1, tzinfo=iso8601.iso8601.UTC) self.assertIn('changes-before', search_opts) changes_before = datetime.datetime(2011, 1, 24, 17, 8, 1, tzinfo=iso8601.iso8601.UTC) self.assertEqual(search_opts['changes-since'], changes_since) self.assertEqual(search_opts['changes-before'], changes_before) self.assertNotIn('deleted', search_opts) return objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=uuids.fake)]) self.mock_get_all.side_effect = fake_get_all params = 'changes-since=2011-01-23T17:08:01Z&' \ 'changes-before=2011-01-24T17:08:01Z' req = self.req(self.path_with_query % params) req.api_version_request = api_version_request.APIVersionRequest('2.66') servers = self.controller.index(req)['servers'] self.assertEqual(1, len(servers)) self.assertEqual(uuids.fake, servers[0]['id']) def test_get_servers_filters_with_distinct_changes_time_bad_request(self): changes_since = '2018-09-04T05:45:27Z' changes_before = '2018-09-03T05:45:27Z' query_string = ('changes-since=%s&changes-before=%s' % (changes_since, changes_before)) req = self.req(self.path_with_query % query_string) req.api_version_request = api_version_request.APIVersionRequest('2.66') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req) class ServersControllerTestV271(ControllerTest): wsgi_api_version = '2.71' def req(self, url, use_admin_context=False): return fakes.HTTPRequest.blank(url, use_admin_context=use_admin_context, version=self.wsgi_api_version) def 
test_show_server_group_not_exist(self): req = self.req(self.path_with_id % FAKE_UUID) return_server = fakes.fake_compute_get( id=2, vm_state=vm_states.ACTIVE, project_id=req.environ['nova.context'].project_id) self.mock_get.side_effect = return_server servers = self.controller.show(req, FAKE_UUID) expect_sg = [] self.assertEqual(expect_sg, servers['server']['server_groups']) class ServersControllerTestV273(ControllerTest): """Server Controller test for microversion 2.73 The intent here is simply to verify that when showing server details after microversion 2.73 the response will also have the locked_reason key for the servers. """ wsgi_api_version = '2.73' def req(self, url, use_admin_context=False): return fakes.HTTPRequest.blank(url, use_admin_context=use_admin_context, version=self.wsgi_api_version) def test_get_servers_with_locked_filter(self): def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): db_list = [fakes.stub_instance( 100, uuid=uuids.fake, locked_by='fake')] return instance_obj._make_instance_list( context, objects.InstanceList(), db_list, FIELDS) self.mock_get_all.side_effect = fake_get_all req = self.req(self.path_with_query % 'locked=true') servers = self.controller.index(req)['servers'] self.assertEqual(1, len(servers)) self.assertEqual(uuids.fake, servers[0]['id']) search = {'deleted': False, 'project_id': self.project_id, 'locked': True} self.mock_get_all.assert_called_once_with( req.environ['nova.context'], expected_attrs=[], limit=1000, marker=None, search_opts=search, sort_dirs=['desc'], sort_keys=['created_at'], cell_down_support=False, all_tenants=False) def test_get_servers_with_locked_filter_invalid_value(self): def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): db_list = [fakes.stub_instance( 100, uuid=uuids.fake, locked_by='fake')] return instance_obj._make_instance_list( context, objects.InstanceList(), db_list, FIELDS) self.mock_get_all.side_effect = fake_get_all req = self.req(self.path_with_query % 'locked=price') exp = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req) self.assertIn("Unrecognized value 'price'", six.text_type(exp)) def test_get_servers_with_locked_filter_empty_value(self): def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): db_list = [fakes.stub_instance( 100, uuid=uuids.fake, locked_by='fake')] return instance_obj._make_instance_list( context, objects.InstanceList(), db_list, FIELDS) self.mock_get_all.side_effect = fake_get_all req = self.req(self.path_with_query % 'locked=') exp = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req) self.assertIn("Unrecognized value ''", six.text_type(exp)) def test_get_servers_with_locked_sort_key(self): def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): db_list = [fakes.stub_instance( 100, uuid=uuids.fake, locked_by='fake')] return instance_obj._make_instance_list( context, objects.InstanceList(), db_list, FIELDS) self.mock_get_all.side_effect = fake_get_all req = self.req(self.path_with_query % 'sort_dir=desc&sort_key=locked') servers = self.controller.index(req)['servers'] self.assertEqual(1, len(servers)) 
self.assertEqual(uuids.fake, servers[0]['id']) self.mock_get_all.assert_called_once_with( req.environ['nova.context'], expected_attrs=[], limit=1000, marker=None, search_opts={'deleted': False, 'project_id': self.project_id}, sort_dirs=['desc'], sort_keys=['locked'], cell_down_support=False, all_tenants=False) class ServersControllerTestV275(ControllerTest): wsgi_api_version = '2.75' image_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' @mock.patch('nova.compute.api.API.get_all') def test_get_servers_additional_query_param_old_version(self, mock_get): req = fakes.HTTPRequest.blank(self.path_with_query % 'unknown=1', use_admin_context=True, version='2.74') self.controller.index(req) @mock.patch('nova.compute.api.API.get_all') def test_get_servers_ignore_sort_key_old_version(self, mock_get): req = fakes.HTTPRequest.blank( self.path_with_query % 'sort_key=deleted', use_admin_context=True, version='2.74') self.controller.index(req) def test_get_servers_additional_query_param(self): req = fakes.HTTPRequest.blank(self.path_with_query % 'unknown=1', use_admin_context=True, version=self.wsgi_api_version) self.assertRaises(exception.ValidationError, self.controller.index, req) def test_get_servers_previously_ignored_sort_key(self): for s_ignore in servers_schema.SERVER_LIST_IGNORE_SORT_KEY_V273: req = fakes.HTTPRequest.blank( self.path_with_query % 'sort_key=%s' % s_ignore, use_admin_context=True, version=self.wsgi_api_version) self.assertRaises(exception.ValidationError, self.controller.index, req) def test_get_servers_additional_sort_key(self): req = fakes.HTTPRequest.blank( self.path_with_query % 'sort_key=unknown', use_admin_context=True, version=self.wsgi_api_version) self.assertRaises(exception.ValidationError, self.controller.index, req) def test_update_response_no_show_server_only_attributes_old_version(self): # There are some old server attributes which were added only for # GET server APIs not for PUT. GET server and PUT server share the # same view builder method SHOW() to build the response, So make sure # attributes which are not supposed to be included for PUT # response are not present. body = {'server': {'name': 'server_test'}} req = fakes.HTTPRequest.blank(self.path_with_query % 'unknown=1', use_admin_context=True, version='2.74') res_dict = self.controller.update(req, FAKE_UUID, body=body) for field in GET_ONLY_FIELDS: self.assertNotIn(field, res_dict['server']) for items in res_dict['server']['addresses'].values(): for item in items: self.assertNotIn('OS-EXT-IPS:type', item) self.assertNotIn('OS-EXT-IPS-MAC:mac_addr', item) def test_update_response_has_show_server_all_attributes(self): body = {'server': {'name': 'server_test'}} req = fakes.HTTPRequest.blank(self.path_with_query % 'unknown=1', use_admin_context=True, version=self.wsgi_api_version) res_dict = self.controller.update(req, FAKE_UUID, body=body) for field in GET_ONLY_FIELDS: self.assertIn(field, res_dict['server']) for items in res_dict['server']['addresses'].values(): for item in items: self.assertIn('OS-EXT-IPS:type', item) self.assertIn('OS-EXT-IPS-MAC:mac_addr', item) def test_rebuild_response_no_show_server_only_attributes_old_version(self): # There are some old server attributes which were added only for # GET server APIs not for Rebuild. GET server and Rebuild server share # same view builder method SHOW() to build the response, So make sure # the attributes which are not supposed to be included for Rebuild # response are not present. 
body = {'rebuild': {"imageRef": self.image_uuid}} req = fakes.HTTPRequest.blank(self.path_with_query % 'unknown=1', use_admin_context=True, version='2.74') fake_get = fakes.fake_compute_get( vm_state=vm_states.ACTIVE, project_id=req.environ['nova.context'].project_id, user_id=req.environ['nova.context'].user_id) self.mock_get.side_effect = fake_get res_dict = self.controller._action_rebuild(req, FAKE_UUID, body=body).obj get_only_fields_Rebuild = copy.deepcopy(GET_ONLY_FIELDS) get_only_fields_Rebuild.remove('key_name') for field in get_only_fields_Rebuild: self.assertNotIn(field, res_dict['server']) for items in res_dict['server']['addresses'].values(): for item in items: self.assertNotIn('OS-EXT-IPS:type', item) self.assertNotIn('OS-EXT-IPS-MAC:mac_addr', item) def test_rebuild_response_has_show_server_all_attributes(self): body = {'rebuild': {"imageRef": self.image_uuid}} req = fakes.HTTPRequest.blank(self.path_with_query % 'unknown=1', use_admin_context=True, version=self.wsgi_api_version) fake_get = fakes.fake_compute_get( vm_state=vm_states.ACTIVE, project_id=req.environ['nova.context'].project_id, user_id=req.environ['nova.context'].user_id) self.mock_get.side_effect = fake_get res_dict = self.controller._action_rebuild(req, FAKE_UUID, body=body).obj for field in GET_ONLY_FIELDS: if field == 'OS-EXT-SRV-ATTR:user_data': self.assertNotIn(field, res_dict['server']) field = 'user_data' self.assertIn(field, res_dict['server']) for items in res_dict['server']['addresses'].values(): for item in items: self.assertIn('OS-EXT-IPS:type', item) self.assertIn('OS-EXT-IPS-MAC:mac_addr', item) class ServersControllerTestV283(ControllerTest): filters = ['availability_zone', 'config_drive', 'key_name', 'created_at', 'launched_at', 'terminated_at', 'power_state', 'task_state', 'vm_state', 'progress', 'user_id'] def test_get_servers_by_new_filter_for_non_admin(self): def fake_get_all(context, search_opts=None, **kwargs): self.assertIsNotNone(search_opts) for f in self.filters: self.assertIn(f, search_opts) return objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=uuids.fake)]) self.mock_get_all.side_effect = fake_get_all query_str = '&'.join('%s=test_value' % f for f in self.filters) req = fakes.HTTPRequest.blank(self.path_with_query % query_str, version='2.83') servers = self.controller.index(req)['servers'] self.assertEqual(1, len(servers)) self.assertEqual(uuids.fake, servers[0]['id']) def test_get_servers_new_filters_for_non_admin_old_version(self): def fake_get_all(context, search_opts=None, **kwargs): self.assertIsNotNone(search_opts) for f in self.filters: self.assertNotIn(f, search_opts) return objects.InstanceList( objects=[]) # Without policy edition, test will fail and admin filter will work. 
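# (The rule override below just keeps the index call authorized for the non-admin request context; the assertions inside fake_get_all do the actual checking.)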
self.policy.set_rules({'os_compute_api:servers:index': ''}) self.mock_get_all.side_effect = fake_get_all query_str = '&'.join('%s=test_value' % f for f in self.filters) req = fakes.HTTPRequest.blank(self.path_with_query % query_str, version='2.82') servers = self.controller.index(req)['servers'] self.assertEqual(0, len(servers)) def test_get_servers_by_node_fail_non_admin(self): def fake_get_all(context, search_opts=None, **kwargs): self.assertIsNotNone(search_opts) self.assertNotIn('node', search_opts) return objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=uuids.fake)]) server_filter_rule = 'os_compute_api:servers:allow_all_filters' self.policy.set_rules({'os_compute_api:servers:index': '', server_filter_rule: 'role:admin'}) self.mock_get_all.side_effect = fake_get_all query_str = "node=node1" req = fakes.HTTPRequest.blank(self.path_with_query % query_str, version='2.83') servers = self.controller.index(req)['servers'] self.assertEqual(1, len(servers)) self.assertEqual(uuids.fake, servers[0]['id']) class ServersControllerDeleteTest(ControllerTest): def setUp(self): super(ServersControllerDeleteTest, self).setUp() self.server_delete_called = False def fake_delete(api, context, instance): if instance.uuid == uuids.non_existent_uuid: raise exception.InstanceNotFound(instance_id=instance.uuid) self.server_delete_called = True self.stub_out('nova.compute.api.API.delete', fake_delete) def _create_delete_request(self, uuid): fakes.stub_out_instance_quota(self, 0, 10) req = fakes.HTTPRequestV21.blank(self.path_with_id % uuid) req.method = 'DELETE' fake_get = fakes.fake_compute_get( uuid=uuid, vm_state=vm_states.ACTIVE, project_id=req.environ['nova.context'].project_id, user_id=req.environ['nova.context'].user_id) self.mock_get.side_effect = fake_get return req def _delete_server_instance(self, uuid=FAKE_UUID): req = self._create_delete_request(uuid) self.controller.delete(req, uuid) def test_delete_server_instance(self): self._delete_server_instance() self.assertTrue(self.server_delete_called) def test_delete_server_instance_not_found(self): self.assertRaises(webob.exc.HTTPNotFound, self._delete_server_instance, uuid=uuids.non_existent_uuid) def test_delete_server_instance_while_building(self): req = self._create_delete_request(FAKE_UUID) self.controller.delete(req, FAKE_UUID) self.assertTrue(self.server_delete_called) @mock.patch.object(compute_api.API, 'delete', side_effect=exception.InstanceIsLocked( instance_uuid=FAKE_UUID)) def test_delete_locked_server(self, mock_delete): req = self._create_delete_request(FAKE_UUID) self.assertRaises(webob.exc.HTTPConflict, self.controller.delete, req, FAKE_UUID) mock_delete.assert_called_once_with( req.environ['nova.context'], test.MatchType(objects.Instance)) def test_delete_server_instance_while_resize(self): req = self._create_delete_request(FAKE_UUID) fake_get = fakes.fake_compute_get( vm_state=vm_states.ACTIVE, task_state=task_states.RESIZE_PREP, project_id=req.environ['nova.context'].project_id, user_id=req.environ['nova.context'].user_id) self.mock_get.side_effect = fake_get self.controller.delete(req, FAKE_UUID) def test_delete_server_instance_if_not_launched(self): self.flags(reclaim_instance_interval=3600) req = fakes.HTTPRequestV21.blank(self.path_with_id % FAKE_UUID) req.method = 'DELETE' self.server_delete_called = False fake_get = fakes.fake_compute_get( launched_at=None, project_id=req.environ['nova.context'].project_id, user_id=req.environ['nova.context'].user_id) self.mock_get.side_effect = fake_get def 
instance_destroy_mock(*args, **kwargs): self.server_delete_called = True deleted_at = timeutils.utcnow() return fake_instance.fake_db_instance(deleted_at=deleted_at) self.stub_out('nova.db.api.instance_destroy', instance_destroy_mock) self.controller.delete(req, FAKE_UUID) # delete() should be called for instance which has never been active, # even if reclaim_instance_interval has been set. self.assertTrue(self.server_delete_called) class ServersControllerRebuildInstanceTest(ControllerTest): image_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' expected_key_name = False def setUp(self): super(ServersControllerRebuildInstanceTest, self).setUp() self.req = fakes.HTTPRequest.blank(self.path_action % FAKE_UUID) self.req.method = 'POST' self.req.headers["content-type"] = "application/json" self.req_user_id = self.req.environ['nova.context'].user_id self.req_project_id = self.req.environ['nova.context'].project_id self.useFixture(nova_fixtures.SingleCellSimple()) def fake_get(ctrl, ctxt, uuid): if uuid == 'test_inst': raise webob.exc.HTTPNotFound(explanation='fakeout') return fakes.stub_instance_obj(None, vm_state=vm_states.ACTIVE, project_id=self.req_project_id, user_id=self.req_user_id) self.useFixture( fixtures.MonkeyPatch('nova.api.openstack.compute.servers.' 'ServersController._get_instance', fake_get)) fake_get = fakes.fake_compute_get(vm_state=vm_states.ACTIVE, project_id=self.req_project_id, user_id=self.req_user_id) self.mock_get.side_effect = fake_get self.body = { 'rebuild': { 'name': 'new_name', 'imageRef': self.image_uuid, 'metadata': { 'open': 'stack', }, }, } def test_rebuild_server_with_image_not_uuid(self): self.body['rebuild']['imageRef'] = 'not-uuid' self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) def test_rebuild_server_with_image_as_full_url(self): image_href = ( 'http://localhost/v2/%s/images/' '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' % self.project_id) self.body['rebuild']['imageRef'] = image_href self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) def test_rebuild_server_with_image_as_empty_string(self): self.body['rebuild']['imageRef'] = '' self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) def test_rebuild_instance_name_with_spaces_in_the_middle(self): self.body['rebuild']['name'] = 'abc def' self.req.body = jsonutils.dump_as_bytes(self.body) self.controller._action_rebuild(self.req, FAKE_UUID, body=self.body) def test_rebuild_instance_name_with_leading_trailing_spaces(self): self.body['rebuild']['name'] = ' abc def ' self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) def test_rebuild_instance_name_with_leading_trailing_spaces_compat_mode( self): self.body['rebuild']['name'] = ' abc def ' self.req.body = jsonutils.dump_as_bytes(self.body) self.req.set_legacy_v2() def fake_rebuild(*args, **kwargs): self.assertEqual('abc def', kwargs['display_name']) with mock.patch.object(compute_api.API, 'rebuild') as mock_rebuild: mock_rebuild.side_effect = fake_rebuild self.controller._action_rebuild(self.req, FAKE_UUID, body=self.body) def test_rebuild_instance_with_blank_metadata_key(self): self.body['rebuild']['metadata'][''] = 'world' self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, 
body=self.body) def test_rebuild_instance_with_metadata_key_too_long(self): self.body['rebuild']['metadata'][('a' * 260)] = 'world' self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) def test_rebuild_instance_with_metadata_value_too_long(self): self.body['rebuild']['metadata']['key1'] = ('a' * 260) self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) def test_rebuild_instance_with_metadata_value_not_string(self): self.body['rebuild']['metadata']['key1'] = 1 self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) @mock.patch.object(fake._FakeImageService, 'show', return_value=dict( id='76fa36fc-c930-4bf3-8c8a-ea2a2420deb6', name='public image', is_public=True, status='active', properties={'key1': 'value1'}, min_ram="4096", min_disk="10")) def test_rebuild_instance_fails_when_min_ram_too_small(self, mock_show): # make min_ram larger than our instance ram size self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) mock_show.assert_called_once_with( self.req.environ['nova.context'], self.image_uuid, include_locations=False, show_deleted=True) @mock.patch.object(fake._FakeImageService, 'show', return_value=dict( id='76fa36fc-c930-4bf3-8c8a-ea2a2420deb6', name='public image', is_public=True, status='active', properties={'key1': 'value1'}, min_ram="128", min_disk="100000")) def test_rebuild_instance_fails_when_min_disk_too_small(self, mock_show): # make min_disk larger than our instance disk size self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) mock_show.assert_called_once_with( self.req.environ['nova.context'], self.image_uuid, include_locations=False, show_deleted=True) @mock.patch.object(fake._FakeImageService, 'show', return_value=dict( id='76fa36fc-c930-4bf3-8c8a-ea2a2420deb6', name='public image', is_public=True, status='active', size=str(1000 * (1024 ** 3)))) def test_rebuild_instance_image_too_large(self, mock_show): # make image size larger than our instance disk size self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) mock_show.assert_called_once_with( self.req.environ['nova.context'], self.image_uuid, include_locations=False, show_deleted=True) def test_rebuild_instance_name_all_blank(self): self.body['rebuild']['name'] = ' ' self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) @mock.patch.object(fake._FakeImageService, 'show', return_value=dict( id='76fa36fc-c930-4bf3-8c8a-ea2a2420deb6', name='public image', is_public=True, status='DELETED')) def test_rebuild_instance_with_deleted_image(self, mock_show): self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) mock_show.assert_called_once_with( self.req.environ['nova.context'], self.image_uuid, include_locations=False, show_deleted=True) def 
test_rebuild_instance_onset_file_limit_over_quota(self): def fake_get_image(self, context, image_href, **kwargs): return dict(id='76fa36fc-c930-4bf3-8c8a-ea2a2420deb6', name='public image', is_public=True, status='active') with test.nested( mock.patch.object(fake._FakeImageService, 'show', side_effect=fake_get_image), mock.patch.object(self.controller.compute_api, 'rebuild', side_effect=exception.OnsetFileLimitExceeded) ) as ( show_mock, rebuild_mock ): self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(webob.exc.HTTPForbidden, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) def test_rebuild_bad_personality(self): # Personality files have been deprecated as of v2.57 self.req.api_version_request = \ api_version_request.APIVersionRequest('2.56') body = { "rebuild": { "imageRef": self.image_uuid, "personality": [{ "path": "/path/to/file", "contents": "INVALID b64", }] }, } self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_rebuild_personality(self): # Personality files have been deprecated as of v2.57 self.req.api_version_request = \ api_version_request.APIVersionRequest('2.56') body = { "rebuild": { "imageRef": self.image_uuid, "personality": [{ "path": "/path/to/file", "contents": base64.encode_as_text("Test String"), }] }, } body = self.controller._action_rebuild(self.req, FAKE_UUID, body=body).obj self.assertNotIn('personality', body['server']) def test_rebuild_response_has_no_show_server_only_attributes(self): # There are some old server attributes which were added only for # GET server APIs not for rebuild. GET server and Rebuild share the # same view builder method SHOW() to build the response, So make sure # attributes which are not supposed to be included for Rebuild # response are not present. 
body = { "rebuild": { "imageRef": self.image_uuid, }, } body = self.controller._action_rebuild(self.req, FAKE_UUID, body=body).obj get_only_fields = copy.deepcopy(GET_ONLY_FIELDS) if self.expected_key_name: get_only_fields.remove('key_name') for field in get_only_fields: self.assertNotIn(field, body['server']) @mock.patch.object(compute_api.API, 'start') def test_start(self, mock_start): req = fakes.HTTPRequestV21.blank(self.path_action % FAKE_UUID) body = dict(start="") self.controller._start_server(req, FAKE_UUID, body) mock_start.assert_called_once_with(mock.ANY, mock.ANY) @mock.patch.object(compute_api.API, 'start', fake_start_stop_not_ready) def test_start_not_ready(self): req = fakes.HTTPRequestV21.blank(self.path_action % FAKE_UUID) body = dict(start="") self.assertRaises(webob.exc.HTTPConflict, self.controller._start_server, req, FAKE_UUID, body) @mock.patch.object( compute_api.API, 'start', fakes.fake_actions_to_locked_server) def test_start_locked_server(self): req = fakes.HTTPRequestV21.blank(self.path_action % FAKE_UUID) body = dict(start="") self.assertRaises(webob.exc.HTTPConflict, self.controller._start_server, req, FAKE_UUID, body) @mock.patch.object(compute_api.API, 'start', fake_start_stop_invalid_state) def test_start_invalid(self): req = fakes.HTTPRequestV21.blank(self.path_action % FAKE_UUID) body = dict(start="") self.assertRaises(webob.exc.HTTPConflict, self.controller._start_server, req, FAKE_UUID, body) @mock.patch.object(compute_api.API, 'stop') def test_stop(self, mock_stop): req = fakes.HTTPRequestV21.blank(self.path_action % FAKE_UUID) body = dict(stop="") self.controller._stop_server(req, FAKE_UUID, body) mock_stop.assert_called_once_with(mock.ANY, mock.ANY) @mock.patch.object(compute_api.API, 'stop', fake_start_stop_not_ready) def test_stop_not_ready(self): req = fakes.HTTPRequestV21.blank(self.path_action % FAKE_UUID) body = dict(stop="") self.assertRaises(webob.exc.HTTPConflict, self.controller._stop_server, req, FAKE_UUID, body) @mock.patch.object( compute_api.API, 'stop', fakes.fake_actions_to_locked_server) def test_stop_locked_server(self): req = fakes.HTTPRequestV21.blank(self.path_action % FAKE_UUID) body = dict(stop="") self.assertRaises(webob.exc.HTTPConflict, self.controller._stop_server, req, FAKE_UUID, body) @mock.patch.object(compute_api.API, 'stop', fake_start_stop_invalid_state) def test_stop_invalid_state(self): req = fakes.HTTPRequestV21.blank(self.path_action % FAKE_UUID) body = dict(start="") self.assertRaises(webob.exc.HTTPConflict, self.controller._stop_server, req, FAKE_UUID, body) @mock.patch( 'nova.db.api.instance_get_by_uuid', fake_instance_get_by_uuid_not_found) def test_start_with_bogus_id(self): req = fakes.HTTPRequestV21.blank(self.path_action % 'test_inst') body = dict(start="") self.assertRaises(webob.exc.HTTPNotFound, self.controller._start_server, req, 'test_inst', body) @mock.patch( 'nova.db.api.instance_get_by_uuid', fake_instance_get_by_uuid_not_found) def test_stop_with_bogus_id(self): req = fakes.HTTPRequestV21.blank(self.path_action % 'test_inst') body = dict(stop="") self.assertRaises(webob.exc.HTTPNotFound, self.controller._stop_server, req, 'test_inst', body) class ServersControllerRebuildTestV254(ServersControllerRebuildInstanceTest): expected_key_name = True def setUp(self): super(ServersControllerRebuildTestV254, self).setUp() fakes.stub_out_key_pair_funcs(self) self.req.api_version_request = \ api_version_request.APIVersionRequest('2.54') def _test_set_key_name_rebuild(self, set_key_name=True): key_name = "key" 
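# Stub the instance lookup so the rebuilt server view reports the expected key_name.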
fake_get = fakes.fake_compute_get(vm_state=vm_states.ACTIVE, key_name=key_name, project_id=self.req_project_id, user_id=self.req_user_id) self.mock_get.side_effect = fake_get if set_key_name: self.body['rebuild']['key_name'] = key_name self.req.body = jsonutils.dump_as_bytes(self.body) server = self.controller._action_rebuild( self.req, FAKE_UUID, body=self.body).obj['server'] self.assertEqual(server['id'], FAKE_UUID) self.assertEqual(server['key_name'], key_name) def test_rebuild_accepted_with_keypair_name(self): self._test_set_key_name_rebuild() def test_rebuild_key_not_changed(self): self._test_set_key_name_rebuild(set_key_name=False) def test_rebuild_invalid_microversion_253(self): self.req.api_version_request = \ api_version_request.APIVersionRequest('2.53') body = { "rebuild": { "imageRef": self.image_uuid, "key_name": "key" }, } excpt = self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) self.assertIn('key_name', six.text_type(excpt)) def test_rebuild_with_not_existed_keypair_name(self): body = { "rebuild": { "imageRef": self.image_uuid, "key_name": "nonexistentkey" }, } self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_rebuild_user_has_no_key_pair(self): def no_key_pair(context, user_id, name): raise exception.KeypairNotFound(user_id=user_id, name=name) self.stub_out('nova.db.api.key_pair_get', no_key_pair) fake_get = fakes.fake_compute_get(vm_state=vm_states.ACTIVE, key_name=None, project_id=self.req_project_id, user_id=self.req_user_id) self.mock_get.side_effect = fake_get self.body['rebuild']['key_name'] = "a-key-name" self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) def test_rebuild_with_non_string_keypair_name(self): body = { "rebuild": { "imageRef": self.image_uuid, "key_name": 12345 }, } self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_rebuild_with_invalid_keypair_name(self): body = { "rebuild": { "imageRef": self.image_uuid, "key_name": "123\0d456" }, } self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_rebuild_with_empty_keypair_name(self): body = { "rebuild": { "imageRef": self.image_uuid, "key_name": '' }, } self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_rebuild_with_none_keypair_name(self): key_name = None fake_get = fakes.fake_compute_get(vm_state=vm_states.ACTIVE, key_name=key_name, project_id=self.req_project_id, user_id=self.req_user_id) self.mock_get.side_effect = fake_get with mock.patch.object(objects.KeyPair, 'get_by_name') as key_get: self.body['rebuild']['key_name'] = key_name self.req.body = jsonutils.dump_as_bytes(self.body) self.controller._action_rebuild( self.req, FAKE_UUID, body=self.body) # NOTE: because the api will call _get_server twice. The server # response will always be the same one. So we just use # objects.KeyPair.get_by_name to verify test. 
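# A key_name of None must not trigger any keypair lookup.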
key_get.assert_not_called() def test_rebuild_with_too_large_keypair_name(self): body = { "rebuild": { "imageRef": self.image_uuid, "key_name": 256 * "k" }, } self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) class ServersControllerRebuildTestV257(ServersControllerRebuildTestV254): """Tests server rebuild at microversion 2.57 where user_data can be provided and personality files are no longer accepted. """ def setUp(self): super(ServersControllerRebuildTestV257, self).setUp() self.req.api_version_request = \ api_version_request.APIVersionRequest('2.57') def test_rebuild_personality(self): """Tests that trying to rebuild with personality files fails.""" body = { "rebuild": { "imageRef": self.image_uuid, "personality": [{ "path": "/path/to/file", "contents": base64.encode_as_text("Test String"), }] } } ex = self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) self.assertIn('personality', six.text_type(ex)) def test_rebuild_user_data_old_version(self): """Tests that trying to rebuild with user_data before 2.57 fails.""" body = { "rebuild": { "imageRef": self.image_uuid, "user_data": "ZWNobyAiaGVsbG8gd29ybGQi" } } self.req.api_version_request = \ api_version_request.APIVersionRequest('2.55') ex = self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) self.assertIn('user_data', six.text_type(ex)) def test_rebuild_user_data_malformed(self): """Tests that trying to rebuild with malformed user_data fails.""" body = { "rebuild": { "imageRef": self.image_uuid, "user_data": b'invalid' } } ex = self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) self.assertIn('user_data', six.text_type(ex)) def test_rebuild_user_data_too_large(self): """Tests that passing user_data to rebuild that is too large fails.""" body = { "rebuild": { "imageRef": self.image_uuid, "user_data": ('MQ==' * 16384) } } ex = self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) self.assertIn('user_data', six.text_type(ex)) @mock.patch.object(context.RequestContext, 'can') @mock.patch('nova.db.api.instance_update_and_get_original') def test_rebuild_reset_user_data(self, mock_update, mock_policy): """Tests that passing user_data=None resets the user_data on the instance. """ body = { "rebuild": { "imageRef": self.image_uuid, "user_data": None } } self.mock_get.side_effect = None self.mock_get.return_value = fakes.stub_instance_obj( context.RequestContext(self.req_user_id, self.req_project_id), user_data='ZWNobyAiaGVsbG8gd29ybGQi') def fake_instance_update_and_get_original( ctxt, instance_uuid, values, **kwargs): # save() is called twice and the second one has system_metadata # in the updates, so we can ignore that one. 
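# Only the first save() should carry the user_data reset, so assert on that call alone.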
if 'system_metadata' not in values: self.assertIn('user_data', values) self.assertIsNone(values['user_data']) return instance_update_and_get_original( ctxt, instance_uuid, values, **kwargs) mock_update.side_effect = fake_instance_update_and_get_original self.controller._action_rebuild(self.req, FAKE_UUID, body=body) self.assertEqual(2, mock_update.call_count) class ServersControllerRebuildTestV219(ServersControllerRebuildInstanceTest): def setUp(self): super(ServersControllerRebuildTestV219, self).setUp() self.req.api_version_request = \ api_version_request.APIVersionRequest('2.19') def _rebuild_server(self, set_desc, desc): fake_get = fakes.fake_compute_get(vm_state=vm_states.ACTIVE, display_description=desc, project_id=self.req_project_id, user_id=self.req_user_id) self.mock_get.side_effect = fake_get if set_desc: self.body['rebuild']['description'] = desc self.req.body = jsonutils.dump_as_bytes(self.body) server = self.controller._action_rebuild(self.req, FAKE_UUID, body=self.body).obj['server'] self.assertEqual(server['id'], FAKE_UUID) self.assertEqual(server['description'], desc) def test_rebuild_server_with_description(self): self._rebuild_server(True, 'server desc') def test_rebuild_server_empty_description(self): self._rebuild_server(True, '') def test_rebuild_server_without_description(self): self._rebuild_server(False, '') def test_rebuild_server_remove_description(self): self._rebuild_server(True, None) def test_rebuild_server_description_too_long(self): self.body['rebuild']['description'] = 'x' * 256 self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) def test_rebuild_server_description_invalid(self): # Invalid non-printable control char in the desc. 
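# ("\0" is a NUL byte, i.e. a non-printable control character, so the schema rejects it.)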
self.body['rebuild']['description'] = "123\0d456" self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) # NOTE(jaypipes): Not based from ServersControllerRebuildInstanceTest because # that test case's setUp is completely b0rked class ServersControllerRebuildTestV263(ControllerTest): image_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' def setUp(self): super(ServersControllerRebuildTestV263, self).setUp() self.req = fakes.HTTPRequest.blank(self.path_action % FAKE_UUID) self.req.method = 'POST' self.req.headers["content-type"] = "application/json" self.req_user_id = self.req.environ['nova.context'].user_id self.req_project_id = self.req.environ['nova.context'].project_id self.req.api_version_request = \ api_version_request.APIVersionRequest('2.63') self.body = { 'rebuild': { 'name': 'new_name', 'imageRef': self.image_uuid, 'metadata': { 'open': 'stack', }, }, } @mock.patch('nova.compute.api.API.get') def _rebuild_server(self, mock_get, certs=None, conf_enabled=True, conf_certs=None): fakes.stub_out_trusted_certs(self, certs=certs) ctx = self.req.environ['nova.context'] mock_get.return_value = fakes.stub_instance_obj(ctx, vm_state=vm_states.ACTIVE, trusted_certs=certs, project_id=self.req_project_id, user_id=self.req_user_id) self.flags(default_trusted_certificate_ids=conf_certs, group='glance') if conf_enabled: self.flags(verify_glance_signatures=True, group='glance') self.flags(enable_certificate_validation=True, group='glance') self.body['rebuild']['trusted_image_certificates'] = certs self.req.body = jsonutils.dump_as_bytes(self.body) server = self.controller._action_rebuild( self.req, FAKE_UUID, body=self.body).obj['server'] if certs: self.assertEqual(certs, server['trusted_image_certificates']) else: if conf_enabled: # configuration file default is used self.assertEqual( conf_certs, server['trusted_image_certificates']) else: # either not set or empty self.assertIsNone(server['trusted_image_certificates']) def test_rebuild_server_with_trusted_certs(self): """Test rebuild with valid trusted_image_certificates argument""" self._rebuild_server( certs=['0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8', '674736e3-f25c-405c-8362-bbf991e0ce0a']) def test_rebuild_server_without_trusted_certs(self): """Test rebuild without trusted image certificates""" self._rebuild_server() def test_rebuild_server_conf_options_turned_off_set(self): """Test rebuild with feature disabled and certs specified""" self._rebuild_server( certs=['0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8'], conf_enabled=False) def test_rebuild_server_conf_options_turned_off_empty(self): """Test rebuild with feature disabled""" self._rebuild_server(conf_enabled=False) def test_rebuild_server_default_trusted_certificates_empty(self): """Test rebuild with feature enabled and no certs specified""" self._rebuild_server(conf_enabled=True) def test_rebuild_server_default_trusted_certificates(self): """Test rebuild with certificate specified in configurations""" self._rebuild_server(conf_enabled=True, conf_certs=['conf-id']) def test_rebuild_server_with_empty_trusted_cert_id(self): """Make sure that we can't rebuild with an empty certificate ID""" self.body['rebuild']['trusted_image_certificates'] = [''] self.req.body = jsonutils.dump_as_bytes(self.body) ex = self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) self.assertIn('is too short', six.text_type(ex)) def 
test_rebuild_server_with_empty_trusted_certs(self): """Make sure that we can't rebuild with an empty array of IDs""" self.body['rebuild']['trusted_image_certificates'] = [] self.req.body = jsonutils.dump_as_bytes(self.body) ex = self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) self.assertIn('is too short', six.text_type(ex)) def test_rebuild_server_with_too_many_trusted_certs(self): """Make sure that we can't rebuild with an array of >50 unique IDs""" self.body['rebuild']['trusted_image_certificates'] = [ 'cert{}'.format(i) for i in range(51)] self.req.body = jsonutils.dump_as_bytes(self.body) ex = self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) self.assertIn('is too long', six.text_type(ex)) def test_rebuild_server_with_nonunique_trusted_certs(self): """Make sure that we can't rebuild with a non-unique array of IDs""" self.body['rebuild']['trusted_image_certificates'] = ['cert', 'cert'] self.req.body = jsonutils.dump_as_bytes(self.body) ex = self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) self.assertIn('has non-unique elements', six.text_type(ex)) def test_rebuild_server_with_invalid_trusted_cert_id(self): """Make sure that we can't rebuild with non-string certificate IDs""" self.body['rebuild']['trusted_image_certificates'] = [1, 2] self.req.body = jsonutils.dump_as_bytes(self.body) ex = self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) self.assertIn('is not of type', six.text_type(ex)) def test_rebuild_server_with_invalid_trusted_certs(self): """Make sure that we can't rebuild with certificates in a non-array""" self.body['rebuild']['trusted_image_certificates'] = "not-an-array" self.req.body = jsonutils.dump_as_bytes(self.body) ex = self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) self.assertIn('is not of type', six.text_type(ex)) def test_rebuild_server_with_trusted_certs_pre_2_63_fails(self): """Make sure we can't use trusted_certs before 2.63""" self._rebuild_server(certs=['trusted-cert-id']) self.req.api_version_request = \ api_version_request.APIVersionRequest('2.62') ex = self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=self.body) self.assertIn('Additional properties are not allowed', six.text_type(ex)) def test_rebuild_server_with_trusted_certs_policy_failed(self): rule_name = "os_compute_api:servers:rebuild:trusted_certs" rules = {"os_compute_api:servers:rebuild": "@", rule_name: "project:%s" % fakes.FAKE_PROJECT_ID} self.policy.set_rules(rules) exc = self.assertRaises(exception.PolicyNotAuthorized, self._rebuild_server, certs=['0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8']) self.assertEqual( "Policy doesn't allow %s to be performed." 
% rule_name, exc.format_message()) @mock.patch.object(compute_api.API, 'rebuild') def test_rebuild_server_with_cert_validation_error( self, mock_rebuild): mock_rebuild.side_effect = exception.CertificateValidationFailed( cert_uuid="cert id", reason="test cert validation error") ex = self.assertRaises(webob.exc.HTTPBadRequest, self._rebuild_server, certs=['trusted-cert-id']) self.assertIn('test cert validation error', six.text_type(ex)) class ServersControllerRebuildTestV271(ControllerTest): image_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' def setUp(self): super(ServersControllerRebuildTestV271, self).setUp() self.req = fakes.HTTPRequest.blank(self.path_action % FAKE_UUID, use_admin_context=True) self.req.method = 'POST' self.req.headers["content-type"] = "application/json" self.req_user_id = self.req.environ['nova.context'].user_id self.req_project_id = self.req.environ['nova.context'].project_id self.req.api_version_request = (api_version_request. APIVersionRequest('2.71')) self.body = { "rebuild": { "imageRef": self.image_uuid, "user_data": None } } @mock.patch('nova.compute.api.API.get') def _rebuild_server(self, mock_get): ctx = self.req.environ['nova.context'] mock_get.return_value = fakes.stub_instance_obj(ctx, vm_state=vm_states.ACTIVE, project_id=self.req_project_id, user_id=self.req_user_id) server = self.controller._action_rebuild( self.req, FAKE_UUID, body=self.body).obj['server'] return server @mock.patch.object(InstanceGroup, 'get_by_instance_uuid', side_effect=exception.InstanceGroupNotFound(group_uuid=FAKE_UUID)) def test_rebuild_with_server_group_not_exist(self, mock_sg_get): server = self._rebuild_server() self.assertEqual([], server['server_groups']) class ServersControllerUpdateTest(ControllerTest): def _get_request(self, body=None): req = fakes.HTTPRequestV21.blank(self.path_with_id % FAKE_UUID) req.method = 'PUT' req.content_type = 'application/json' req.body = jsonutils.dump_as_bytes(body) fake_get = fakes.fake_compute_get( project_id=req.environ['nova.context'].project_id, user_id=req.environ['nova.context'].user_id) self.mock_get.side_effect = fake_get return req def test_update_server_all_attributes(self): body = {'server': { 'name': 'server_test', }} req = self._get_request(body) res_dict = self.controller.update(req, FAKE_UUID, body=body) self.assertEqual(res_dict['server']['id'], FAKE_UUID) self.assertEqual(res_dict['server']['name'], 'server_test') def test_update_server_name(self): body = {'server': {'name': 'server_test'}} req = self._get_request(body) res_dict = self.controller.update(req, FAKE_UUID, body=body) self.assertEqual(res_dict['server']['id'], FAKE_UUID) self.assertEqual(res_dict['server']['name'], 'server_test') def test_update_response_has_no_show_server_only_attributes(self): # There are some old server attributes which were added only for # GET server APIs not for PUT. GET server and PUT server share the # same view builder method SHOW() to build the response, So make sure # attributes which are not supposed to be included for PUT # response are not present. 
body = {'server': {'name': 'server_test'}} req = self._get_request(body) res_dict = self.controller.update(req, FAKE_UUID, body=body) for field in GET_ONLY_FIELDS: self.assertNotIn(field, res_dict['server']) def test_update_server_name_too_long(self): body = {'server': {'name': 'x' * 256}} req = self._get_request(body) self.assertRaises(exception.ValidationError, self.controller.update, req, FAKE_UUID, body=body) def test_update_server_name_all_blank_spaces(self): self.stub_out('nova.db.api.instance_get', fakes.fake_instance_get(name='server_test')) req = fakes.HTTPRequest.blank(self.path_with_id % FAKE_UUID) req.method = 'PUT' req.content_type = 'application/json' body = {'server': {'name': ' ' * 64}} req.body = jsonutils.dump_as_bytes(body) self.assertRaises(exception.ValidationError, self.controller.update, req, FAKE_UUID, body=body) def test_update_server_name_with_spaces_in_the_middle(self): body = {'server': {'name': 'abc def'}} req = self._get_request(body) self.controller.update(req, FAKE_UUID, body=body) def test_update_server_name_with_leading_trailing_spaces(self): self.stub_out('nova.db.api.instance_get', fakes.fake_instance_get(name='server_test')) req = fakes.HTTPRequest.blank(self.path_with_id % FAKE_UUID) req.method = 'PUT' req.content_type = 'application/json' body = {'server': {'name': ' abc def '}} req.body = jsonutils.dump_as_bytes(body) self.assertRaises(exception.ValidationError, self.controller.update, req, FAKE_UUID, body=body) def test_update_server_name_with_leading_trailing_spaces_compat_mode(self): body = {'server': {'name': ' abc def '}} req = self._get_request(body) req.set_legacy_v2() self.controller.update(req, FAKE_UUID, body=body) def test_update_server_admin_password_extra_arg(self): inst_dict = dict(name='server_test', admin_password='bacon') body = dict(server=inst_dict) req = fakes.HTTPRequest.blank(self.path_with_id % FAKE_UUID) req.method = 'PUT' req.content_type = "application/json" req.body = jsonutils.dump_as_bytes(body) self.assertRaises(exception.ValidationError, self.controller.update, req, FAKE_UUID, body=body) def test_update_server_host_id(self): inst_dict = dict(host_id='123') body = dict(server=inst_dict) req = fakes.HTTPRequest.blank(self.path_with_id % FAKE_UUID) req.method = 'PUT' req.content_type = "application/json" req.body = jsonutils.dump_as_bytes(body) self.assertRaises(exception.ValidationError, self.controller.update, req, FAKE_UUID, body=body) def test_update_server_not_found(self): self.mock_get.side_effect = exception.InstanceNotFound( instance_id='fake') body = {'server': {'name': 'server_test'}} req = fakes.HTTPRequest.blank(self.path_with_id % FAKE_UUID) req.method = 'PUT' req.content_type = "application/json" req.body = jsonutils.dump_as_bytes(body) self.assertRaises(webob.exc.HTTPNotFound, self.controller.update, req, FAKE_UUID, body=body) @mock.patch.object(compute_api.API, 'update_instance') def test_update_server_not_found_on_update(self, mock_update_instance): def fake_update(*args, **kwargs): raise exception.InstanceNotFound(instance_id='fake') mock_update_instance.side_effect = fake_update body = {'server': {'name': 'server_test'}} req = self._get_request(body) self.assertRaises(webob.exc.HTTPNotFound, self.controller.update, req, FAKE_UUID, body=body) def test_update_server_policy_fail(self): rule = {'compute:update': 'role:admin'} policy.set_rules(oslo_policy.Rules.from_dict(rule)) body = {'server': {'name': 'server_test'}} req = self._get_request(body) self.assertRaises(exception.PolicyNotAuthorized, 
self.controller.update, req, FAKE_UUID, body=body) class ServersControllerTriggerCrashDumpTest(ControllerTest): def setUp(self): super(ServersControllerTriggerCrashDumpTest, self).setUp() self.instance = fakes.stub_instance_obj(None, vm_state=vm_states.ACTIVE, project_id=self.project_id) def fake_get(ctrl, ctxt, uuid): if uuid != FAKE_UUID: raise webob.exc.HTTPNotFound(explanation='fakeout') return self.instance self.useFixture( fixtures.MonkeyPatch('nova.api.openstack.compute.servers.' 'ServersController._get_instance', fake_get)) self.req = fakes.HTTPRequest.blank(self.path_action % FAKE_UUID) self.req.api_version_request =\ api_version_request.APIVersionRequest('2.17') self.body = dict(trigger_crash_dump=None) @mock.patch.object(compute_api.API, 'trigger_crash_dump') def test_trigger_crash_dump(self, mock_trigger_crash_dump): ctxt = self.req.environ['nova.context'] self.controller._action_trigger_crash_dump(self.req, FAKE_UUID, body=self.body) mock_trigger_crash_dump.assert_called_with(ctxt, self.instance) def test_trigger_crash_dump_policy_failed(self): rule_name = "os_compute_api:servers:trigger_crash_dump" self.policy.set_rules({rule_name: "project_id:non_fake"}) exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller._action_trigger_crash_dump, self.req, FAKE_UUID, body=self.body) self.assertIn("os_compute_api:servers:trigger_crash_dump", exc.format_message()) @mock.patch.object(compute_api.API, 'trigger_crash_dump', fake_start_stop_not_ready) def test_trigger_crash_dump_not_ready(self): self.assertRaises(webob.exc.HTTPConflict, self.controller._action_trigger_crash_dump, self.req, FAKE_UUID, body=self.body) @mock.patch.object(compute_api.API, 'trigger_crash_dump', fakes.fake_actions_to_locked_server) def test_trigger_crash_dump_locked_server(self): self.assertRaises(webob.exc.HTTPConflict, self.controller._action_trigger_crash_dump, self.req, FAKE_UUID, body=self.body) @mock.patch.object(compute_api.API, 'trigger_crash_dump', fake_start_stop_invalid_state) def test_trigger_crash_dump_invalid_state(self): self.assertRaises(webob.exc.HTTPConflict, self.controller._action_trigger_crash_dump, self.req, FAKE_UUID, body=self.body) def test_trigger_crash_dump_with_bogus_id(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller._action_trigger_crash_dump, self.req, 'test_inst', body=self.body) def test_trigger_crash_dump_schema_invalid_type(self): self.body['trigger_crash_dump'] = 'not null' self.assertRaises(exception.ValidationError, self.controller._action_trigger_crash_dump, self.req, FAKE_UUID, body=self.body) def test_trigger_crash_dump_schema_extra_property(self): self.body['extra_property'] = 'extra' self.assertRaises(exception.ValidationError, self.controller._action_trigger_crash_dump, self.req, FAKE_UUID, body=self.body) class ServersControllerUpdateTestV219(ServersControllerUpdateTest): def _get_request(self, body=None): req = super(ServersControllerUpdateTestV219, self)._get_request( body=body) req.api_version_request = api_version_request.APIVersionRequest('2.19') return req def _update_server_desc(self, set_desc, desc=None): body = {'server': {}} if set_desc: body['server']['description'] = desc req = self._get_request() res_dict = self.controller.update(req, FAKE_UUID, body=body) return res_dict def test_update_server_description(self): res_dict = self._update_server_desc(True, 'server_desc') self.assertEqual(res_dict['server']['id'], FAKE_UUID) self.assertEqual(res_dict['server']['description'], 'server_desc') def 
test_update_server_empty_description(self): res_dict = self._update_server_desc(True, '') self.assertEqual(res_dict['server']['id'], FAKE_UUID) self.assertEqual(res_dict['server']['description'], '') def test_update_server_without_description(self): res_dict = self._update_server_desc(False) self.assertEqual(res_dict['server']['id'], FAKE_UUID) self.assertIsNone(res_dict['server']['description']) def test_update_server_remove_description(self): res_dict = self._update_server_desc(True) self.assertEqual(res_dict['server']['id'], FAKE_UUID) self.assertIsNone(res_dict['server']['description']) def test_update_server_all_attributes(self): body = {'server': { 'name': 'server_test', 'description': 'server_desc' }} req = self._get_request(body) res_dict = self.controller.update(req, FAKE_UUID, body=body) self.assertEqual(res_dict['server']['id'], FAKE_UUID) self.assertEqual(res_dict['server']['name'], 'server_test') self.assertEqual(res_dict['server']['description'], 'server_desc') def test_update_server_description_too_long(self): body = {'server': {'description': 'x' * 256}} req = self._get_request(body) self.assertRaises(exception.ValidationError, self.controller.update, req, FAKE_UUID, body=body) def test_update_server_description_invalid(self): # Invalid non-printable control char in the desc. body = {'server': {'description': "123\0d456"}} req = self._get_request(body) self.assertRaises(exception.ValidationError, self.controller.update, req, FAKE_UUID, body=body) class ServersControllerUpdateTestV271(ServersControllerUpdateTest): body = {'server': {'name': 'server_test'}} def _get_request(self, body=None): req = super(ServersControllerUpdateTestV271, self)._get_request( body=body) req.api_version_request = api_version_request.APIVersionRequest('2.71') return req @mock.patch.object(InstanceGroup, 'get_by_instance_uuid', side_effect=exception.InstanceGroupNotFound(group_uuid=FAKE_UUID)) def test_update_with_server_group_not_exist(self, mock_sg_get): req = self._get_request(self.body) res_dict = self.controller.update(req, FAKE_UUID, body=self.body) self.assertEqual([], res_dict['server']['server_groups']) class ServerStatusTest(test.TestCase): project_id = fakes.FAKE_PROJECT_ID path = '/%s/servers' % project_id path_with_id = path + '/%s' path_action = path + '/%s/action' def setUp(self): super(ServerStatusTest, self).setUp() fakes.stub_out_nw_api(self) fakes.stub_out_secgroup_api( self, security_groups=[{'name': 'default'}]) self.controller = servers.ServersController() def _get_with_state(self, vm_state, task_state=None): request = fakes.HTTPRequestV21.blank(self.path_with_id % FAKE_UUID) self.stub_out('nova.compute.api.API.get', fakes.fake_compute_get( vm_state=vm_state, task_state=task_state, project_id=request.environ['nova.context'].project_id)) return self.controller.show(request, FAKE_UUID) def test_active(self): response = self._get_with_state(vm_states.ACTIVE) self.assertEqual(response['server']['status'], 'ACTIVE') def test_reboot(self): response = self._get_with_state(vm_states.ACTIVE, task_states.REBOOTING) self.assertEqual(response['server']['status'], 'REBOOT') def test_reboot_hard(self): response = self._get_with_state(vm_states.ACTIVE, task_states.REBOOTING_HARD) self.assertEqual(response['server']['status'], 'HARD_REBOOT') def test_reboot_resize_policy_fail(self): rule = {'compute:reboot': 'role:admin'} policy.set_rules(oslo_policy.Rules.from_dict(rule)) req = fakes.HTTPRequestV21.blank(self.path_action % '1234') self.stub_out('nova.compute.api.API.get', 
fakes.fake_compute_get( vm_state='ACTIVE', task_state=None, project_id=req.environ['nova.context'].project_id)) self.assertRaises(exception.PolicyNotAuthorized, self.controller._action_reboot, req, '1234', body={'reboot': {'type': 'HARD'}}) def test_rebuild(self): response = self._get_with_state(vm_states.ACTIVE, task_states.REBUILDING) self.assertEqual(response['server']['status'], 'REBUILD') def test_rebuild_error(self): response = self._get_with_state(vm_states.ERROR) self.assertEqual(response['server']['status'], 'ERROR') def test_resize(self): response = self._get_with_state(vm_states.ACTIVE, task_states.RESIZE_PREP) self.assertEqual(response['server']['status'], 'RESIZE') def test_confirm_resize_policy_fail(self): rule = {'compute:confirm_resize': 'role:admin'} policy.set_rules(oslo_policy.Rules.from_dict(rule)) req = fakes.HTTPRequestV21.blank(self.path_action % '1234') self.stub_out('nova.compute.api.API.get', fakes.fake_compute_get( vm_state='ACTIVE', task_state=None, project_id=req.environ['nova.context'].project_id)) self.assertRaises(exception.PolicyNotAuthorized, self.controller._action_confirm_resize, req, '1234', {}) def test_verify_resize(self): response = self._get_with_state(vm_states.RESIZED, None) self.assertEqual(response['server']['status'], 'VERIFY_RESIZE') def test_revert_resize(self): response = self._get_with_state(vm_states.RESIZED, task_states.RESIZE_REVERTING) self.assertEqual(response['server']['status'], 'REVERT_RESIZE') def test_revert_resize_policy_fail(self): rule = {'compute:revert_resize': 'role:admin'} policy.set_rules(oslo_policy.Rules.from_dict(rule)) req = fakes.HTTPRequestV21.blank(self.path_action % '1234') self.stub_out('nova.compute.api.API.get', fakes.fake_compute_get( vm_state='ACTIVE', task_state=None, project_id=req.environ['nova.context'].project_id)) self.assertRaises(exception.PolicyNotAuthorized, self.controller._action_revert_resize, req, '1234', {}) def test_password_update(self): response = self._get_with_state(vm_states.ACTIVE, task_states.UPDATING_PASSWORD) self.assertEqual(response['server']['status'], 'PASSWORD') def test_stopped(self): response = self._get_with_state(vm_states.STOPPED) self.assertEqual(response['server']['status'], 'SHUTOFF') class ServersControllerCreateTest(test.TestCase): image_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' project_id = fakes.FAKE_PROJECT_ID def setUp(self): """Shared implementation for tests below that create instance.""" super(ServersControllerCreateTest, self).setUp() self.flags(enable_instance_password=True, group='api') self.instance_cache_num = 0 self.instance_cache_by_id = {} self.instance_cache_by_uuid = {} fakes.stub_out_nw_api(self) self.controller = servers.ServersController() def instance_create(context, inst): inst_type = flavors.get_flavor_by_flavor_id(3) image_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' def_image_ref = 'http://localhost/%s/images/%s' % (self.project_id, image_uuid) self.instance_cache_num += 1 instance = fake_instance.fake_db_instance(**{ 'id': self.instance_cache_num, 'display_name': inst['display_name'] or 'test', 'display_description': inst['display_description'] or '', 'uuid': FAKE_UUID, 'instance_type': inst_type, 'image_ref': inst.get('image_ref', def_image_ref), 'user_id': 'fake', 'project_id': fakes.FAKE_PROJECT_ID, 'reservation_id': inst['reservation_id'], "created_at": datetime.datetime(2010, 10, 10, 12, 0, 0), "updated_at": datetime.datetime(2010, 11, 11, 11, 0, 0), "config_drive": None, "progress": 0, 
"fixed_ips": [], "task_state": "", "vm_state": "", "root_device_name": inst.get('root_device_name', 'vda'), }) self.instance_cache_by_id[instance['id']] = instance self.instance_cache_by_uuid[instance['uuid']] = instance return instance def instance_get(context, instance_id): """Stub for compute/api create() pulling in instance after scheduling """ return self.instance_cache_by_id[instance_id] def instance_update(context, uuid, values): instance = self.instance_cache_by_uuid[uuid] instance.update(values) return instance def server_update_and_get_original( context, instance_uuid, params, columns_to_join=None): inst = self.instance_cache_by_uuid[instance_uuid] inst.update(params) return (inst, inst) fakes.stub_out_key_pair_funcs(self) fake.stub_out_image_service(self) self.stub_out('nova.db.api.instance_create', instance_create) self.stub_out('nova.db.api.instance_system_metadata_update', lambda *a, **kw: None) self.stub_out('nova.db.api.instance_get', instance_get) self.stub_out('nova.db.api.instance_update', instance_update) self.stub_out('nova.db.api.instance_update_and_get_original', server_update_and_get_original) self.body = { 'server': { 'name': 'server_test', 'imageRef': self.image_uuid, 'flavorRef': self.flavor_ref, 'metadata': { 'hello': 'world', 'open': 'stack', }, 'networks': [{ 'uuid': 'ff608d40-75e9-48cb-b745-77bb55b5eaf2' }], }, } self.bdm_v2 = [{ 'no_device': None, 'source_type': 'volume', 'destination_type': 'volume', 'uuid': 'fake', 'device_name': 'vdb', 'delete_on_termination': False, }] self.bdm = [{ 'no_device': None, 'virtual_name': 'root', 'volume_id': fakes.FAKE_UUID, 'device_name': 'vda', 'delete_on_termination': False }] self.req = fakes.HTTPRequest.blank('/%s/servers' % self.project_id) self.req.method = 'POST' self.req.headers["content-type"] = "application/json" server = dict(name='server_test', imageRef=FAKE_UUID, flavorRef=2) body = {'server': server} self.req.body = encodeutils.safe_encode(jsonutils.dumps(body)) def _check_admin_password_len(self, server_dict): """utility function - check server_dict for admin_password length.""" self.assertEqual(CONF.password_length, len(server_dict["adminPass"])) def _check_admin_password_missing(self, server_dict): """utility function - check server_dict for admin_password absence.""" self.assertNotIn("adminPass", server_dict) def _test_create_instance(self, flavor=2): self.stub_out('uuid.uuid4', lambda: FAKE_UUID) image_uuid = 'c905cedb-7281-47e4-8a62-f26bc5fc4c77' self.body['server']['imageRef'] = image_uuid self.body['server']['flavorRef'] = flavor self.req.body = jsonutils.dump_as_bytes(self.body) server = self.controller.create(self.req, body=self.body).obj['server'] self._check_admin_password_len(server) self.assertEqual(FAKE_UUID, server['id']) def test_create_instance_with_none_value_port(self): self.body['server'] = {'networks': [{'port': None, 'uuid': FAKE_UUID}]} self.body['server']['name'] = 'test' self._test_create_instance() def test_create_instance_private_flavor(self): values = { 'name': 'fake_name', 'memory': 512, 'vcpus': 1, 'root_gb': 10, 'ephemeral_gb': 10, 'flavorid': '1324', 'swap': 0, 'rxtx_factor': 0.5, 'is_public': False, } flavors.create(**values) ex = self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_instance, flavor=1324) self.assertEqual('Flavor 1324 could not be found.', six.text_type(ex)) def test_create_server_bad_image_uuid(self): self.body['server']['min_count'] = 1 self.body['server']['imageRef'] = 1, self.req.body = jsonutils.dump_as_bytes(self.body) 
        self.assertRaises(exception.ValidationError,
                          self.controller.create,
                          self.req,
                          body=self.body)

    def test_create_server_with_deleted_image(self):
        # Get the fake image service so we can set the status to deleted
        (image_service, image_id) = glance.get_remote_image_service(
            context, '')
        image_service.update(context, self.image_uuid, {'status': 'DELETED'})
        self.addCleanup(image_service.update, context, self.image_uuid,
                        {'status': 'active'})

        self.body['server']['flavorRef'] = 2
        self.req.body = jsonutils.dump_as_bytes(self.body)
        with testtools.ExpectedException(
                webob.exc.HTTPBadRequest,
                'Image 76fa36fc-c930-4bf3-8c8a-ea2a2420deb6 is not active.'):
            self.controller.create(self.req, body=self.body)

    def test_create_server_image_too_large(self):
        # Get the fake image service so we can update the size of the image
        (image_service, image_id) = glance.get_remote_image_service(
            context, self.image_uuid)
        image = image_service.show(context, image_id)
        orig_size = image['size']
        new_size = str(1000 * (1024 ** 3))
        image_service.update(context, self.image_uuid, {'size': new_size})
        self.addCleanup(image_service.update, context, self.image_uuid,
                        {'size': orig_size})

        self.body['server']['flavorRef'] = 2
        self.req.body = jsonutils.dump_as_bytes(self.body)
        with testtools.ExpectedException(
                webob.exc.HTTPBadRequest,
                "Flavor's disk is too small for requested image."):
            self.controller.create(self.req, body=self.body)

    @mock.patch.object(fake._FakeImageService, 'show', return_value=dict(
        id='76fa36fc-c930-4bf3-8c8a-ea2a2420deb6',
        status='active',
        properties=dict(
            cinder_encryption_key_id=fakes.FAKE_UUID)))
    def test_create_server_image_nonbootable(self, mock_show):
        self.req.body = jsonutils.dump_as_bytes(self.body)

        expected_msg = ("Image {} is unacceptable: Direct booting of an image "
                        "uploaded from an encrypted volume is unsupported.")
        with testtools.ExpectedException(
                webob.exc.HTTPBadRequest,
                expected_msg.format(self.image_uuid)):
            self.controller.create(self.req, body=self.body)

    def test_create_instance_with_image_non_uuid(self):
        self.body['server']['imageRef'] = 'not-uuid'
        self.assertRaises(exception.ValidationError,
                          self.controller.create,
                          self.req, body=self.body)

    def test_create_instance_with_image_as_full_url(self):
        image_href = ('http://localhost/v2/%s/images/'
                      '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' % self.project_id)
        self.body['server']['imageRef'] = image_href
        self.assertRaises(exception.ValidationError,
                          self.controller.create,
                          self.req, body=self.body)

    def test_create_instance_with_image_as_empty_string(self):
        self.body['server']['imageRef'] = ''
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create,
                          self.req, body=self.body)

    def test_create_instance_no_key_pair(self):
        fakes.stub_out_key_pair_funcs(self, have_key_pair=False)
        self._test_create_instance()

    def _test_create_extra(self, params, no_image=False):
        self.body['server']['flavorRef'] = 2
        if no_image:
            self.body['server'].pop('imageRef', None)
        self.body['server'].update(params)
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.req.headers["content-type"] = "application/json"
        self.controller.create(self.req, body=self.body).obj['server']

    @mock.patch.object(compute_api.API, 'create',
                       side_effect=exception.PortRequiresFixedIP(
                           port_id=uuids.port))
    def test_create_instance_with_port_with_no_fixed_ips(self, mock_create):
        requested_networks = [{'port': uuids.port}]
        params = {'networks': requested_networks}
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self._test_create_extra, params)

    def test_create_instance_raise_user_data_too_large(self):
self.body['server']['user_data'] = (b'1' * 65536) ex = self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) # Make sure the failure was about user_data and not something else. self.assertIn('user_data', six.text_type(ex)) @mock.patch.object(compute_api.API, 'create', side_effect=exception.NetworkRequiresSubnet( network_uuid=uuids.network)) def test_create_instance_with_network_with_no_subnet(self, mock_create): requested_networks = [{'uuid': uuids.network}] params = {'networks': requested_networks} self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_extra, params) @mock.patch.object(compute_api.API, 'create', side_effect=exception.NoUniqueMatch( "No Unique match found for ...")) def test_create_instance_with_non_unique_secgroup_name(self, mock_create): requested_networks = [{'uuid': uuids.network}] params = {'networks': requested_networks, 'security_groups': [{'name': 'dup'}, {'name': 'dup'}]} self.assertRaises(webob.exc.HTTPConflict, self._test_create_extra, params) def test_create_instance_secgroup_leading_trailing_spaces(self): network = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee' requested_networks = [{'uuid': network}] params = {'networks': requested_networks, 'security_groups': [{'name': ' sg '}]} self.assertRaises(exception.ValidationError, self._test_create_extra, params) @mock.patch.object(compute_api.API, 'create') def test_create_instance_secgroup_leading_trailing_spaces_compat_mode( self, mock_create): requested_networks = [{'uuid': uuids.network}] params = {'networks': requested_networks, 'security_groups': [{'name': ' sg '}]} def fake_create(*args, **kwargs): self.assertEqual([' sg '], kwargs['security_groups']) return (objects.InstanceList(objects=[fakes.stub_instance_obj( self.req.environ['nova.context'])]), None) mock_create.side_effect = fake_create self.req.set_legacy_v2() self._test_create_extra(params) def test_create_instance_with_networks_disabled_neutronv2(self): net_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' requested_networks = [{'uuid': net_uuid}] params = {'networks': requested_networks} old_create = compute_api.API.create def create(*args, **kwargs): result = [('76fa36fc-c930-4bf3-8c8a-ea2a2420deb6', None, None, None)] self.assertEqual(result, kwargs['requested_networks'].as_tuples()) return old_create(*args, **kwargs) with mock.patch('nova.compute.api.API.create', create): self._test_create_extra(params) def test_create_instance_with_pass_disabled(self): # test with admin passwords disabled See lp bug 921814 self.flags(enable_instance_password=False, group='api') self.stub_out('uuid.uuid4', lambda: FAKE_UUID) self.flags(enable_instance_password=False, group='api') self.req.body = jsonutils.dump_as_bytes(self.body) res = self.controller.create(self.req, body=self.body).obj server = res['server'] self._check_admin_password_missing(server) self.assertEqual(FAKE_UUID, server['id']) def test_create_instance_name_too_long(self): self.body['server']['name'] = 'X' * 256 self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) def test_create_instance_name_with_spaces_in_the_middle(self): self.body['server']['name'] = 'abc def' self.req.body = jsonutils.dump_as_bytes(self.body) self.controller.create(self.req, body=self.body) def test_create_instance_name_with_leading_trailing_spaces(self): self.body['server']['name'] = ' abc def ' self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, 
self.controller.create, self.req, body=self.body) def test_create_instance_name_with_leading_trailing_spaces_in_compat_mode( self): self.body['server']['name'] = ' abc def ' self.req.body = jsonutils.dump_as_bytes(self.body) self.req.set_legacy_v2() self.controller.create(self.req, body=self.body) def test_create_instance_name_all_blank_spaces(self): image_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/%s/flavors/3' % self.project_id body = { 'server': { 'name': ' ' * 64, 'imageRef': image_uuid, 'flavorRef': flavor_ref, 'metadata': { 'hello': 'world', 'open': 'stack', }, }, } req = fakes.HTTPRequest.blank('/%s/servers' % self.project_id) req.method = 'POST' req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(exception.ValidationError, self.controller.create, req, body=body) def test_create_az_with_leading_trailing_spaces(self): self.body['server']['availability_zone'] = ' zone1 ' self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) def test_create_az_with_leading_trailing_spaces_in_compat_mode( self): self.body['server']['name'] = ' abc def ' self.body['server']['availability_zones'] = ' zone1 ' self.req.body = jsonutils.dump_as_bytes(self.body) self.req.set_legacy_v2() with mock.patch.object(availability_zones, 'get_availability_zones', return_value=[' zone1 ']): self.controller.create(self.req, body=self.body) def test_create_instance(self): self.stub_out('uuid.uuid4', lambda: FAKE_UUID) self.req.body = jsonutils.dump_as_bytes(self.body) res = self.controller.create(self.req, body=self.body).obj server = res['server'] self._check_admin_password_len(server) self.assertEqual(FAKE_UUID, server['id']) def test_create_instance_pass_disabled(self): self.stub_out('uuid.uuid4', lambda: FAKE_UUID) self.flags(enable_instance_password=False, group='api') self.req.body = jsonutils.dump_as_bytes(self.body) res = self.controller.create(self.req, body=self.body).obj server = res['server'] self._check_admin_password_missing(server) self.assertEqual(FAKE_UUID, server['id']) @mock.patch('nova.virt.hardware.numa_get_constraints') def _test_create_instance_numa_topology_wrong(self, exc, numa_constraints_mock): numa_constraints_mock.side_effect = exc(**{ 'name': None, 'source': 'flavor', 'requested': 'dummy', 'available': str(objects.fields.CPUAllocationPolicy.ALL), 'cpunum': 0, 'cpumax': 0, 'cpuset': None, 'memsize': 0, 'memtotal': 0}) self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) def test_create_instance_numa_topology_wrong(self): for exc in [exception.ImageNUMATopologyIncomplete, exception.ImageNUMATopologyForbidden, exception.ImageNUMATopologyAsymmetric, exception.ImageNUMATopologyCPUOutOfRange, exception.ImageNUMATopologyCPUDuplicates, exception.ImageNUMATopologyCPUsUnassigned, exception.InvalidCPUAllocationPolicy, exception.InvalidCPUThreadAllocationPolicy, exception.ImageNUMATopologyMemoryOutOfRange]: self._test_create_instance_numa_topology_wrong(exc) def test_create_instance_too_much_metadata(self): self.flags(metadata_items=1, group='quota') self.body['server']['metadata']['vote'] = 'fiddletown' self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, self.req, body=self.body) def test_create_instance_metadata_key_too_long(self): self.flags(metadata_items=1, 
group='quota') self.body['server']['metadata'] = {('a' * 260): '12345'} self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) def test_create_instance_metadata_value_too_long(self): self.flags(metadata_items=1, group='quota') self.body['server']['metadata'] = {'key1': ('a' * 260)} self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) def test_create_instance_metadata_key_blank(self): self.flags(metadata_items=1, group='quota') self.body['server']['metadata'] = {'': 'abcd'} self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) def test_create_instance_metadata_not_dict(self): self.flags(metadata_items=1, group='quota') self.body['server']['metadata'] = 'string' self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) def test_create_instance_metadata_key_not_string(self): self.flags(metadata_items=1, group='quota') self.body['server']['metadata'] = {1: 'test'} self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) def test_create_instance_metadata_value_not_string(self): self.flags(metadata_items=1, group='quota') self.body['server']['metadata'] = {'test': ['a', 'list']} self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) def test_create_user_data_malformed_bad_request(self): params = {'user_data': 'u1234'} self.assertRaises(exception.ValidationError, self._test_create_extra, params) def _create_instance_body_of_config_drive(self, param): def create(*args, **kwargs): self.assertIn('config_drive', kwargs) return old_create(*args, **kwargs) old_create = compute_api.API.create self.stub_out('nova.compute.api.API.create', create) self.body['server']['config_drive'] = param self.req.body = jsonutils.dump_as_bytes(self.body) def test_create_instance_with_config_drive(self): param = True self._create_instance_body_of_config_drive(param) self.controller.create(self.req, body=self.body).obj def test_create_instance_with_config_drive_as_boolean_string(self): param = 'false' self._create_instance_body_of_config_drive(param) self.controller.create(self.req, body=self.body).obj def test_create_instance_with_bad_config_drive(self): param = 12345 self._create_instance_body_of_config_drive(param) self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) def test_create_instance_without_config_drive(self): def create(*args, **kwargs): self.assertIsNone(kwargs['config_drive']) return old_create(*args, **kwargs) old_create = compute_api.API.create self.stub_out('nova.compute.api.API.create', create) self.req.body = jsonutils.dump_as_bytes(self.body) self.controller.create(self.req, body=self.body).obj def test_create_instance_with_empty_config_drive(self): param = '' self._create_instance_body_of_config_drive(param) self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) def _test_create(self, params, no_image=False): self. 
body['server'].update(params) if no_image: del self.body['server']['imageRef'] self.req.body = jsonutils.dump_as_bytes(self.body) self.controller.create(self.req, body=self.body).obj['server'] def test_create_instance_with_volumes_enabled_no_image(self): """Test that the create will fail if there is no image and no bdms supplied in the request """ old_create = compute_api.API.create def create(*args, **kwargs): self.assertNotIn('imageRef', kwargs) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) self.assertRaises(webob.exc.HTTPBadRequest, self._test_create, {}, no_image=True) @mock.patch('nova.compute.api.API._get_volumes_for_bdms') @mock.patch.object(compute_api.API, '_validate_bdm') @mock.patch('nova.utils.get_bdm_image_metadata') def test_create_instance_with_bdms_and_no_image( self, mock_bdm_image_metadata, mock_validate_bdm, mock_get_vols): mock_bdm_image_metadata.return_value = {} mock_validate_bdm.return_value = True old_create = compute_api.API.create def create(*args, **kwargs): self.assertThat( block_device.BlockDeviceDict(self.bdm_v2[0]), matchers.DictMatches(kwargs['block_device_mapping'][0]) ) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) params = {'block_device_mapping_v2': self.bdm_v2} self._test_create(params, no_image=True) mock_validate_bdm.assert_called_once() mock_bdm_image_metadata.assert_called_once_with( mock.ANY, mock.ANY, mock.ANY, mock.ANY, False) @mock.patch('nova.compute.api.API._get_volumes_for_bdms') @mock.patch.object(compute_api.API, '_validate_bdm') @mock.patch('nova.utils.get_bdm_image_metadata') def test_create_instance_with_bdms_and_empty_imageRef( self, mock_bdm_image_metadata, mock_validate_bdm, mock_get_volumes): mock_bdm_image_metadata.return_value = {} mock_validate_bdm.return_value = True old_create = compute_api.API.create def create(*args, **kwargs): self.assertThat( block_device.BlockDeviceDict(self.bdm_v2[0]), matchers.DictMatches(kwargs['block_device_mapping'][0]) ) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) params = {'block_device_mapping_v2': self.bdm_v2, 'imageRef': ''} self._test_create(params) def test_create_instance_with_imageRef_as_full_url(self): bdm = [{'device_name': 'foo'}] image_href = ('http://localhost/v2/%s/images/' '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' % self.project_id) params = {'block_device_mapping_v2': bdm, 'imageRef': image_href} self.assertRaises(exception.ValidationError, self._test_create, params) def test_create_instance_with_non_uuid_imageRef(self): bdm = [{'device_name': 'foo'}] params = {'block_device_mapping_v2': bdm, 'imageRef': '123123abcd'} self.assertRaises(exception.ValidationError, self._test_create, params) def test_create_instance_with_invalid_bdm_in_2nd_dict(self): bdm_1st = {"source_type": "image", "delete_on_termination": True, "boot_index": 0, "uuid": "2ff3a1d3-ed70-4c3f-94ac-941461153bc0", "destination_type": "local"} bdm_2nd = {"source_type": "volume", "uuid": "99d92140-3d0c-4ea5-a49c-f94c38c607f0", "destination_type": "invalid"} bdm = [bdm_1st, bdm_2nd] params = {'block_device_mapping_v2': bdm, 'imageRef': '2ff3a1d3-ed70-4c3f-94ac-941461153bc0'} self.assertRaises(exception.ValidationError, self._test_create, params) def test_create_instance_with_boot_index_none_ok(self): """Tests creating a server with two block devices. One is the boot device and the other is a non-bootable device. 
""" # From the docs: # To disable a device from booting, set the boot index to a negative # value or use the default boot index value, which is None. The # simplest usage is, set the boot index of the boot device to 0 and use # the default boot index value, None, for any other devices. bdms = [ # This is the bootable device that would create a 20GB cinder # volume from the given image. { 'source_type': 'image', 'destination_type': 'volume', 'boot_index': 0, 'uuid': '155d900f-4e14-4e4c-a73d-069cbf4541e6', 'volume_size': 20 }, # This is the non-bootable 10GB ext4 ephemeral block device. { 'source_type': 'blank', 'destination_type': 'local', 'boot_index': None, # If 'guest_format' is 'swap' then a swap device is created. 'guest_format': 'ext4' } ] params = {'block_device_mapping_v2': bdms} self._test_create(params, no_image=True) def test_create_instance_with_boot_index_none_image_local_fails(self): """Tests creating a server with a local image-based block device which has a boot_index of None which is invalid. """ bdms = [{ 'source_type': 'image', 'destination_type': 'local', 'boot_index': None, 'uuid': '155d900f-4e14-4e4c-a73d-069cbf4541e6' }] params = {'block_device_mapping_v2': bdms} self.assertRaises(webob.exc.HTTPBadRequest, self._test_create, params, no_image=True) def test_create_instance_with_invalid_boot_index(self): bdm = [{"source_type": "image", "delete_on_termination": True, "boot_index": 'invalid', "uuid": "2ff3a1d3-ed70-4c3f-94ac-941461153bc0", "destination_type": "local"}] params = {'block_device_mapping_v2': bdm, 'imageRef': '2ff3a1d3-ed70-4c3f-94ac-941461153bc0'} self.assertRaises(exception.ValidationError, self._test_create, params) def test_create_instance_with_device_name_not_string(self): self.bdm_v2[0]['device_name'] = 123 old_create = compute_api.API.create def create(*args, **kwargs): self.assertEqual(kwargs['block_device_mapping'], self.bdm_v2) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) params = {'block_device_mapping_v2': self.bdm_v2} self.assertRaises(exception.ValidationError, self._test_create, params, no_image=True) @mock.patch.object(compute_api.API, 'create') def test_create_instance_with_bdm_param_not_list(self, mock_create): self.params = {'block_device_mapping': '/dev/vdb'} self.assertRaises(exception.ValidationError, self._test_create, self.params) def test_create_instance_with_device_name_empty(self): self.bdm_v2[0]['device_name'] = '' old_create = compute_api.API.create def create(*args, **kwargs): self.assertEqual(kwargs['block_device_mapping'], self.bdm_v2) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) params = {'block_device_mapping_v2': self.bdm_v2} self.assertRaises(exception.ValidationError, self._test_create, params, no_image=True) def test_create_instance_with_device_name_too_long(self): self.bdm_v2[0]['device_name'] = 'a' * 256 old_create = compute_api.API.create def create(*args, **kwargs): self.assertEqual(kwargs['block_device_mapping'], self.bdm_v2) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) params = {'block_device_mapping_v2': self.bdm_v2} self.assertRaises(exception.ValidationError, self._test_create, params, no_image=True) def test_create_instance_with_space_in_device_name(self): self.bdm_v2[0]['device_name'] = 'v da' old_create = compute_api.API.create def create(*args, **kwargs): self.assertTrue(kwargs['legacy_bdm']) self.assertEqual(kwargs['block_device_mapping'], self.bdm_v2) return old_create(*args, 
**kwargs) self.stub_out('nova.compute.api.API.create', create) params = {'block_device_mapping_v2': self.bdm_v2} self.assertRaises(exception.ValidationError, self._test_create, params, no_image=True) def test_create_instance_with_invalid_size(self): self.bdm_v2[0]['volume_size'] = 'hello world' old_create = compute_api.API.create def create(*args, **kwargs): self.assertEqual(kwargs['block_device_mapping'], self.bdm_v2) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) params = {'block_device_mapping_v2': self.bdm_v2} self.assertRaises(exception.ValidationError, self._test_create, params, no_image=True) def _test_create_instance_with_destination_type_error(self, destination_type): self.bdm_v2[0]['destination_type'] = destination_type params = {'block_device_mapping_v2': self.bdm_v2} self.assertRaises(exception.ValidationError, self._test_create, params, no_image=True) def test_create_instance_with_destination_type_empty_string(self): self._test_create_instance_with_destination_type_error('') def test_create_instance_with_invalid_destination_type(self): self._test_create_instance_with_destination_type_error('fake') @mock.patch('nova.compute.api.API._get_volumes_for_bdms') @mock.patch.object(compute_api.API, '_validate_bdm') def test_create_instance_bdm(self, mock_validate_bdm, mock_get_volumes): bdm = [{ 'source_type': 'volume', 'device_name': 'fake_dev', 'uuid': 'fake_vol' }] bdm_expected = [{ 'source_type': 'volume', 'device_name': 'fake_dev', 'volume_id': 'fake_vol' }] old_create = compute_api.API.create def create(*args, **kwargs): self.assertFalse(kwargs['legacy_bdm']) for expected, received in zip(bdm_expected, kwargs['block_device_mapping']): self.assertThat(block_device.BlockDeviceDict(expected), matchers.DictMatches(received)) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) params = {'block_device_mapping_v2': bdm} self._test_create(params, no_image=True) mock_validate_bdm.assert_called_once() @mock.patch('nova.compute.api.API._get_volumes_for_bdms') @mock.patch.object(compute_api.API, '_validate_bdm') def test_create_instance_bdm_missing_device_name(self, mock_validate_bdm, mock_get_volumes): del self.bdm_v2[0]['device_name'] old_create = compute_api.API.create def create(*args, **kwargs): self.assertFalse(kwargs['legacy_bdm']) self.assertNotIn(None, kwargs['block_device_mapping'][0]['device_name']) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) params = {'block_device_mapping_v2': self.bdm_v2} self._test_create(params, no_image=True) mock_validate_bdm.assert_called_once() @mock.patch.object( block_device.BlockDeviceDict, '_validate', side_effect=exception.InvalidBDMFormat(details='Wrong BDM')) def test_create_instance_bdm_validation_error(self, mock_validate): params = {'block_device_mapping_v2': self.bdm_v2} self.assertRaises(webob.exc.HTTPBadRequest, self._test_create, params, no_image=True) @mock.patch('nova.utils.get_bdm_image_metadata') def test_create_instance_non_bootable_volume_fails(self, fake_bdm_meta): params = {'block_device_mapping_v2': self.bdm_v2} fake_bdm_meta.side_effect = exception.InvalidBDMVolumeNotBootable(id=1) self.assertRaises(webob.exc.HTTPBadRequest, self._test_create, params, no_image=True) @mock.patch('nova.compute.api.API._get_volumes_for_bdms') def test_create_instance_bdm_api_validation_fails(self, mock_get_volumes): self.validation_fail_test_validate_called = False self.validation_fail_instance_destroy_called = False bdm_exceptions 
= ((exception.InvalidBDMSnapshot, {'id': 'fake'}), (exception.InvalidBDMVolume, {'id': 'fake'}), (exception.InvalidBDMImage, {'id': 'fake'}), (exception.InvalidBDMBootSequence, {}), (exception.InvalidBDMLocalsLimit, {}), (exception.InvalidBDMDiskBus, {'disk_bus': 'foo'})) ex_iter = iter(bdm_exceptions) def _validate_bdm(*args, **kwargs): self.validation_fail_test_validate_called = True ex, kargs = next(ex_iter) raise ex(**kargs) def _instance_destroy(*args, **kwargs): self.validation_fail_instance_destroy_called = True self.stub_out('nova.compute.api.API._validate_bdm', _validate_bdm) self.stub_out('nova.objects.Instance.destroy', _instance_destroy) for _unused in range(len(bdm_exceptions)): params = {'block_device_mapping_v2': [self.bdm_v2[0].copy()]} self.assertRaises(webob.exc.HTTPBadRequest, self._test_create, params) self.assertTrue(self.validation_fail_test_validate_called) self.assertFalse(self.validation_fail_instance_destroy_called) self.validation_fail_test_validate_called = False self.validation_fail_instance_destroy_called = False @mock.patch('nova.compute.api.API._get_volumes_for_bdms') @mock.patch.object(compute_api.API, '_validate_bdm') def _test_create_bdm(self, params, mock_validate_bdm, mock_get_volumes, no_image=False): self.body['server'].update(params) if no_image: del self.body['server']['imageRef'] self.req.body = jsonutils.dump_as_bytes(self.body) self.controller.create(self.req, body=self.body).obj['server'] mock_validate_bdm.assert_called_once_with( test.MatchType(fakes.FakeRequestContext), test.MatchType(objects.Instance), test.MatchType(objects.Flavor), test.MatchType(objects.BlockDeviceMappingList), {}, mock_get_volumes.return_value, False) def test_create_instance_with_volumes_enabled(self): params = {'block_device_mapping': self.bdm} old_create = compute_api.API.create def create(*args, **kwargs): self.assertEqual(kwargs['block_device_mapping'], self.bdm) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) self._test_create_bdm(params) @mock.patch('nova.utils.get_bdm_image_metadata') def test_create_instance_with_volumes_enabled_and_bdms_no_image( self, mock_get_bdm_image_metadata): """Test that the create works if there is no image supplied but os-volumes extension is enabled and bdms are supplied """ volume = { 'id': uuids.volume_id, 'status': 'active', 'volume_image_metadata': {'test_key': 'test_value'} } mock_get_bdm_image_metadata.return_value = volume params = {'block_device_mapping': self.bdm} old_create = compute_api.API.create def create(*args, **kwargs): self.assertEqual(kwargs['block_device_mapping'], self.bdm) self.assertNotIn('imageRef', kwargs) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) self._test_create_bdm(params, no_image=True) mock_get_bdm_image_metadata.assert_called_once_with( mock.ANY, mock.ANY, mock.ANY, self.bdm, True) @mock.patch('nova.utils.get_bdm_image_metadata') def test_create_instance_with_imageRef_as_empty_string( self, mock_bdm_image_metadata): volume = { 'id': uuids.volume_id, 'status': 'active', 'volume_image_metadata': {'test_key': 'test_value'} } mock_bdm_image_metadata.return_value = volume params = {'block_device_mapping': self.bdm, 'imageRef': ''} old_create = compute_api.API.create def create(*args, **kwargs): self.assertEqual(kwargs['block_device_mapping'], self.bdm) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) self._test_create_bdm(params) def 
test_create_instance_with_imageRef_as_full_url_legacy_bdm(self): bdm = [{ 'volume_id': fakes.FAKE_UUID, 'device_name': 'vda' }] image_href = ('http://localhost/v2/%s/images/' '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' % self.project_id) params = {'block_device_mapping': bdm, 'imageRef': image_href} self.assertRaises(exception.ValidationError, self._test_create_bdm, params) def test_create_instance_with_non_uuid_imageRef_legacy_bdm(self): bdm = [{ 'volume_id': fakes.FAKE_UUID, 'device_name': 'vda' }] params = {'block_device_mapping': bdm, 'imageRef': 'bad-format'} self.assertRaises(exception.ValidationError, self._test_create_bdm, params) @mock.patch('nova.utils.get_bdm_image_metadata') def test_create_instance_non_bootable_volume_fails_legacy_bdm( self, fake_bdm_meta): bdm = [{ 'volume_id': fakes.FAKE_UUID, 'device_name': 'vda' }] params = {'block_device_mapping': bdm} fake_bdm_meta.side_effect = exception.InvalidBDMVolumeNotBootable(id=1) self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_bdm, params, no_image=True) def test_create_instance_with_device_name_not_string_legacy_bdm(self): self.bdm[0]['device_name'] = 123 old_create = compute_api.API.create self.params = {'block_device_mapping': self.bdm} def create(*args, **kwargs): self.assertEqual(kwargs['block_device_mapping'], self.bdm) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) self.assertRaises(exception.ValidationError, self._test_create_bdm, self.params) def test_create_instance_with_snapshot_volume_id_none(self): old_create = compute_api.API.create bdm = [{ 'no_device': None, 'snapshot_id': None, 'volume_id': None, 'device_name': 'vda', 'delete_on_termination': False }] self.params = {'block_device_mapping': bdm} def create(*args, **kwargs): self.assertEqual(kwargs['block_device_mapping'], bdm) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) self.assertRaises(exception.ValidationError, self._test_create_bdm, self.params) @mock.patch.object(compute_api.API, 'create') def test_create_instance_with_legacy_bdm_param_not_list(self, mock_create): self.params = {'block_device_mapping': '/dev/vdb'} self.assertRaises(exception.ValidationError, self._test_create_bdm, self.params) def test_create_instance_with_device_name_empty_legacy_bdm(self): self.bdm[0]['device_name'] = '' params = {'block_device_mapping': self.bdm} old_create = compute_api.API.create def create(*args, **kwargs): self.assertEqual(kwargs['block_device_mapping'], self.bdm) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) self.assertRaises(exception.ValidationError, self._test_create_bdm, params) def test_create_instance_with_device_name_too_long_legacy_bdm(self): self.bdm[0]['device_name'] = 'a' * 256, params = {'block_device_mapping': self.bdm} old_create = compute_api.API.create def create(*args, **kwargs): self.assertEqual(kwargs['block_device_mapping'], self.bdm) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) self.assertRaises(exception.ValidationError, self._test_create_bdm, params) def test_create_instance_with_space_in_device_name_legacy_bdm(self): self.bdm[0]['device_name'] = 'vd a', params = {'block_device_mapping': self.bdm} old_create = compute_api.API.create def create(*args, **kwargs): self.assertTrue(kwargs['legacy_bdm']) self.assertEqual(kwargs['block_device_mapping'], self.bdm) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) 
self.assertRaises(exception.ValidationError, self._test_create_bdm, params) def _test_create_bdm_instance_with_size_error(self, size): bdm = [{'delete_on_termination': True, 'device_name': 'vda', 'volume_size': size, 'volume_id': '11111111-1111-1111-1111-111111111111'}] params = {'block_device_mapping': bdm} old_create = compute_api.API.create def create(*args, **kwargs): self.assertEqual(kwargs['block_device_mapping'], bdm) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) self.assertRaises(exception.ValidationError, self._test_create_bdm, params) def test_create_instance_with_invalid_size_legacy_bdm(self): self._test_create_bdm_instance_with_size_error("hello world") def test_create_instance_with_size_empty_string(self): self._test_create_bdm_instance_with_size_error('') def test_create_instance_with_size_zero(self): self._test_create_bdm_instance_with_size_error("0") def test_create_instance_with_size_greater_than_limit(self): self._test_create_bdm_instance_with_size_error(db.MAX_INT + 1) def test_create_instance_with_bdm_delete_on_termination(self): bdm = [{'device_name': 'foo1', 'volume_id': fakes.FAKE_UUID, 'delete_on_termination': 'True'}, {'device_name': 'foo2', 'volume_id': fakes.FAKE_UUID, 'delete_on_termination': True}, {'device_name': 'foo3', 'volume_id': fakes.FAKE_UUID, 'delete_on_termination': 'False'}, {'device_name': 'foo4', 'volume_id': fakes.FAKE_UUID, 'delete_on_termination': False}, {'device_name': 'foo5', 'volume_id': fakes.FAKE_UUID, 'delete_on_termination': False}] expected_bdm = [ {'device_name': 'foo1', 'volume_id': fakes.FAKE_UUID, 'delete_on_termination': True}, {'device_name': 'foo2', 'volume_id': fakes.FAKE_UUID, 'delete_on_termination': True}, {'device_name': 'foo3', 'volume_id': fakes.FAKE_UUID, 'delete_on_termination': False}, {'device_name': 'foo4', 'volume_id': fakes.FAKE_UUID, 'delete_on_termination': False}, {'device_name': 'foo5', 'volume_id': fakes.FAKE_UUID, 'delete_on_termination': False}] params = {'block_device_mapping': bdm} old_create = compute_api.API.create def create(*args, **kwargs): self.assertEqual(expected_bdm, kwargs['block_device_mapping']) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) self._test_create_bdm(params) def test_create_instance_with_bdm_delete_on_termination_invalid_2nd(self): bdm = [{'device_name': 'foo1', 'volume_id': fakes.FAKE_UUID, 'delete_on_termination': 'True'}, {'device_name': 'foo2', 'volume_id': fakes.FAKE_UUID, 'delete_on_termination': 'invalid'}] params = {'block_device_mapping': bdm} self.assertRaises(exception.ValidationError, self._test_create_bdm, params) def test_create_instance_decide_format_legacy(self): bdm = [{'device_name': 'foo1', 'volume_id': fakes.FAKE_UUID, 'delete_on_termination': True}] expected_legacy_flag = True old_create = compute_api.API.create def create(*args, **kwargs): legacy_bdm = kwargs.get('legacy_bdm', True) self.assertEqual(legacy_bdm, expected_legacy_flag) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) self._test_create_bdm({}) params = {'block_device_mapping': bdm} self._test_create_bdm(params) def test_create_instance_both_bdm_formats(self): bdm = [{'device_name': 'foo'}] bdm_v2 = [{'source_type': 'volume', 'uuid': 'fake_vol'}] params = {'block_device_mapping': bdm, 'block_device_mapping_v2': bdm_v2} self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_bdm, params) def test_create_instance_invalid_key_name(self): self.body['server']['key_name'] = 
'nonexistentkey' self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) def test_create_instance_valid_key_name(self): self.stub_out('uuid.uuid4', lambda: FAKE_UUID) self.body['server']['key_name'] = 'key' self.req.body = jsonutils.dump_as_bytes(self.body) res = self.controller.create(self.req, body=self.body).obj self.assertEqual(FAKE_UUID, res["server"]["id"]) self._check_admin_password_len(res["server"]) def test_create_server_keypair_name_with_leading_trailing(self): self.body['server']['key_name'] = ' abc ' self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) @mock.patch.object(compute_api.API, 'create') def test_create_server_keypair_name_with_leading_trailing_compat_mode( self, mock_create): params = {'key_name': ' abc '} def fake_create(*args, **kwargs): self.assertEqual(' abc ', kwargs['key_name']) return (objects.InstanceList(objects=[fakes.stub_instance_obj( self.req.environ['nova.context'])]), None) mock_create.side_effect = fake_create self.req.set_legacy_v2() self._test_create_extra(params) def test_create_instance_invalid_flavor_href(self): flavor_ref = 'http://localhost/v2/flavors/asdf' self.body['server']['flavorRef'] = flavor_ref self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) def test_create_instance_invalid_flavor_id_int(self): flavor_ref = -1 self.body['server']['flavorRef'] = flavor_ref self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) @mock.patch.object(nova.compute.flavors, 'get_flavor_by_flavor_id', return_value=objects.Flavor()) @mock.patch.object(compute_api.API, 'create') def test_create_instance_with_non_existing_snapshot_id( self, mock_create, mock_get_flavor_by_flavor_id): mock_create.side_effect = exception.SnapshotNotFound(snapshot_id='123') self.body['server'] = {'name': 'server_test', 'flavorRef': self.flavor_ref, 'block_device_mapping_v2': [{'source_type': 'snapshot', 'uuid': '123'}]} self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) def test_create_instance_invalid_flavor_id_empty(self): flavor_ref = "" self.body['server']['flavorRef'] = flavor_ref self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) def test_create_instance_bad_flavor_href(self): flavor_ref = 'http://localhost/v2/flavors/17' self.body['server']['flavorRef'] = flavor_ref self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) def test_create_instance_local_href(self): self.stub_out('uuid.uuid4', lambda: FAKE_UUID) self.req.body = jsonutils.dump_as_bytes(self.body) res = self.controller.create(self.req, body=self.body).obj server = res['server'] self.assertEqual(FAKE_UUID, server['id']) def test_create_instance_admin_password(self): self.body['server']['flavorRef'] = 3 self.body['server']['adminPass'] = 'testpass' self.req.body = jsonutils.dump_as_bytes(self.body) res = self.controller.create(self.req, body=self.body).obj server = res['server'] self.assertEqual(server['adminPass'], self.body['server']['adminPass']) def 
test_create_instance_admin_password_pass_disabled(self): self.flags(enable_instance_password=False, group='api') self.body['server']['flavorRef'] = 3 self.body['server']['adminPass'] = 'testpass' self.req.body = jsonutils.dump_as_bytes(self.body) res = self.controller.create(self.req, body=self.body).obj self.assertIn('server', res) self.assertIn('adminPass', self.body['server']) def test_create_instance_admin_password_empty(self): self.body['server']['flavorRef'] = 3 self.body['server']['adminPass'] = '' self.req.body = jsonutils.dump_as_bytes(self.body) # The fact that the action doesn't raise is enough validation self.controller.create(self.req, body=self.body) def test_create_location(self): self.stub_out('uuid.uuid4', lambda: FAKE_UUID) selfhref = 'http://localhost/v2/%s/servers/%s' % (self.project_id, FAKE_UUID) self.req.body = jsonutils.dump_as_bytes(self.body) robj = self.controller.create(self.req, body=self.body) self.assertEqual(encodeutils.safe_decode(robj['Location']), selfhref) @mock.patch('nova.objects.Quotas.get_all_by_project') @mock.patch('nova.objects.Quotas.get_all_by_project_and_user') @mock.patch('nova.objects.Quotas.count_as_dict') def _do_test_create_instance_above_quota(self, resource, allowed, quota, expected_msg, mock_count, mock_get_all_pu, mock_get_all_p): count = {'project': {}, 'user': {}} for res in ('instances', 'ram', 'cores'): if res == resource: value = quota - allowed count['project'][res] = count['user'][res] = value else: count['project'][res] = count['user'][res] = 0 mock_count.return_value = count mock_get_all_p.return_value = {'project_id': fakes.FAKE_PROJECT_ID} mock_get_all_pu.return_value = {'project_id': fakes.FAKE_PROJECT_ID, 'user_id': 'fake_user'} if resource in db_api.PER_PROJECT_QUOTAS: mock_get_all_p.return_value[resource] = quota else: mock_get_all_pu.return_value[resource] = quota fakes.stub_out_instance_quota(self, allowed, quota, resource) self.body['server']['flavorRef'] = 3 self.req.body = jsonutils.dump_as_bytes(self.body) try: self.controller.create(self.req, body=self.body).obj['server'] self.fail('expected quota to be exceeded') except webob.exc.HTTPForbidden as e: self.assertEqual(e.explanation, expected_msg) def test_create_instance_above_quota_instances(self): msg = ('Quota exceeded for instances: Requested 1, but' ' already used 10 of 10 instances') self._do_test_create_instance_above_quota('instances', 0, 10, msg) def test_create_instance_above_quota_ram(self): msg = ('Quota exceeded for ram: Requested 4096, but' ' already used 8192 of 10240 ram') self._do_test_create_instance_above_quota('ram', 2048, 10 * 1024, msg) def test_create_instance_above_quota_cores(self): msg = ('Quota exceeded for cores: Requested 2, but' ' already used 9 of 10 cores') self._do_test_create_instance_above_quota('cores', 1, 10, msg) @mock.patch.object(fakes.QUOTAS, 'limit_check') def test_create_instance_above_quota_server_group_members( self, mock_limit_check): ctxt = self.req.environ['nova.context'] fake_group = objects.InstanceGroup(ctxt) fake_group.project_id = ctxt.project_id fake_group.user_id = ctxt.user_id fake_group.create() real_count = fakes.QUOTAS.count_as_dict def fake_count(context, name, group, user_id): if name == 'server_group_members': self.assertEqual(group.uuid, fake_group.uuid) self.assertEqual(user_id, self.req.environ['nova.context'].user_id) return {'user': {'server_group_members': 10}} else: return real_count(context, name, group, user_id) def fake_limit_check(context, **kwargs): if 'server_group_members' in kwargs: raise 
exception.OverQuota(overs={}) def fake_instance_destroy(context, uuid, constraint): return fakes.stub_instance(1) mock_limit_check.side_effect = fake_limit_check self.stub_out('nova.db.api.instance_destroy', fake_instance_destroy) self.body['os:scheduler_hints'] = {'group': fake_group.uuid} self.req.body = jsonutils.dump_as_bytes(self.body) expected_msg = "Quota exceeded, too many servers in group" try: with mock.patch.object(fakes.QUOTAS, 'count_as_dict', side_effect=fake_count): self.controller.create(self.req, body=self.body).obj self.fail('expected quota to be exceeded') except webob.exc.HTTPForbidden as e: self.assertEqual(e.explanation, expected_msg) def test_create_instance_with_group_hint(self): ctxt = self.req.environ['nova.context'] test_group = objects.InstanceGroup(ctxt) test_group.project_id = ctxt.project_id test_group.user_id = ctxt.user_id test_group.create() def fake_instance_destroy(context, uuid, constraint): return fakes.stub_instance(1) self.stub_out('nova.db.api.instance_destroy', fake_instance_destroy) self.body['os:scheduler_hints'] = {'group': test_group.uuid} self.req.body = jsonutils.dump_as_bytes(self.body) server = self.controller.create(self.req, body=self.body).obj['server'] test_group = objects.InstanceGroup.get_by_uuid(ctxt, test_group.uuid) self.assertIn(server['id'], test_group.members) def _test_create_instance_with_group_hint(self, hint, hint_name='os:scheduler_hints'): def fake_instance_destroy(context, uuid, constraint): return fakes.stub_instance(1) def fake_create(*args, **kwargs): self.assertEqual(kwargs['scheduler_hints'], hint) return ([fakes.stub_instance(1)], '') self.stub_out('nova.compute.api.API.create', fake_create) self.stub_out('nova.db.instance_destroy', fake_instance_destroy) self.body[hint_name] = hint self.req.body = jsonutils.dump_as_bytes(self.body) return self.controller.create(self.req, body=self.body).obj['server'] def test_create_instance_with_group_hint_legacy(self): self._test_create_instance_with_group_hint( {'different_host': '9c47bf55-e9d8-42da-94ab-7f9e80cd1857'}, hint_name='OS-SCH-HNT:scheduler_hints') def test_create_server_with_different_host_hint(self): self._test_create_instance_with_group_hint( {'different_host': '9c47bf55-e9d8-42da-94ab-7f9e80cd1857'}) self._test_create_instance_with_group_hint( {'different_host': ['9c47bf55-e9d8-42da-94ab-7f9e80cd1857', '82412fa6-0365-43a9-95e4-d8b20e00c0de']}) def test_create_instance_with_group_hint_group_not_found(self): def fake_instance_destroy(context, uuid, constraint): return fakes.stub_instance(1) self.stub_out('nova.db.api.instance_destroy', fake_instance_destroy) self.body['os:scheduler_hints'] = { 'group': '5b674f73-c8cf-40ef-9965-3b6fe4b304b1'} self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) def test_create_instance_with_group_hint_wrong_uuid_format(self): self.body['os:scheduler_hints'] = { 'group': 'non-uuid'} self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) def test_create_server_bad_hints_non_dict(self): sch_hints = ['os:scheduler_hints', 'OS-SCH-HNT:scheduler_hints'] for hint in sch_hints: self.body[hint] = 'non-dict' self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) def test_create_server_bad_hints_long_group(self): self.body['os:scheduler_hints'] = { 'group': 'a' * 256} 
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError,
                          self.controller.create, self.req, body=self.body)

    def test_create_server_with_bad_different_host_hint(self):
        self.body['os:scheduler_hints'] = {
            'different_host': 'non-server-id'}
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError,
                          self.controller.create, self.req, body=self.body)

        self.body['os:scheduler_hints'] = {
            'different_host': ['non-server-id01', 'non-server-id02']}
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError,
                          self.controller.create, self.req, body=self.body)

    @mock.patch.object(compute_api.API, 'create',
                       side_effect=exception.PortInUse(port_id=uuids.port))
    def test_create_instance_with_port_in_use(self, mock_create):
        requested_networks = [{'uuid': uuids.network, 'port': uuids.port}]
        params = {'networks': requested_networks}
        self.assertRaises(webob.exc.HTTPConflict,
                          self._test_create_extra, params)

    @mock.patch.object(compute_api.API, 'create')
    def test_create_instance_public_network_non_admin(self, mock_create):
        public_network_uuid = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'
        params = {'networks': [{'uuid': public_network_uuid}]}
        self.req.body = jsonutils.dump_as_bytes(self.body)
        mock_create.side_effect = exception.ExternalNetworkAttachForbidden(
            network_uuid=public_network_uuid)
        self.assertRaises(webob.exc.HTTPForbidden,
                          self._test_create_extra, params)

    def test_multiple_create_with_string_type_min_and_max(self):
        min_count = '2'
        max_count = '3'
        params = {
            'min_count': min_count,
            'max_count': max_count,
        }
        old_create = compute_api.API.create

        def create(*args, **kwargs):
            self.assertIsInstance(kwargs['min_count'], int)
            self.assertIsInstance(kwargs['max_count'], int)
            self.assertEqual(kwargs['min_count'], 2)
            self.assertEqual(kwargs['max_count'], 3)
            return old_create(*args, **kwargs)

        self.stub_out('nova.compute.api.API.create', create)
        self._test_create_extra(params)

    def test_create_instance_with_multiple_create_enabled(self):
        min_count = 2
        max_count = 3
        params = {
            'min_count': min_count,
            'max_count': max_count,
        }
        old_create = compute_api.API.create

        def create(*args, **kwargs):
            self.assertEqual(kwargs['min_count'], 2)
            self.assertEqual(kwargs['max_count'], 3)
            return old_create(*args, **kwargs)

        self.stub_out('nova.compute.api.API.create', create)
        self._test_create_extra(params)

    def test_create_instance_invalid_negative_min(self):
        image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'
        flavor_ref = 'http://localhost/123/flavors/3'
        body = {
            'server': {
                'min_count': -1,
                'name': 'server_test',
                'imageRef': image_href,
                'flavorRef': flavor_ref,
            }
        }
        self.assertRaises(exception.ValidationError,
                          self.controller.create,
                          self.req,
                          body=body)

    def test_create_instance_invalid_negative_max(self):
        image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'
        flavor_ref = 'http://localhost/123/flavors/3'
        body = {
            'server': {
                'max_count': -1,
                'name': 'server_test',
                'imageRef': image_href,
                'flavorRef': flavor_ref,
            }
        }
        self.assertRaises(exception.ValidationError,
                          self.controller.create,
                          self.req,
                          body=body)

    def test_create_instance_with_blank_min(self):
        image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'
        flavor_ref = 'http://localhost/123/flavors/3'
        body = {
            'server': {
                'min_count': '',
                'name': 'server_test',
                'imageRef': image_href,
                'flavorRef': flavor_ref,
            }
        }
        self.assertRaises(exception.ValidationError,
                          self.controller.create,
                          self.req,
                          body=body)

    def test_create_instance_with_blank_max(self):
        image_href =
'76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { 'max_count': '', 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, } } self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=body) def test_create_instance_invalid_min_greater_than_max(self): image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { 'min_count': 4, 'max_count': 2, 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, } } self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=body) def test_create_instance_invalid_alpha_min(self): image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { 'min_count': 'abcd', 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, } } self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=body) def test_create_instance_invalid_alpha_max(self): image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { 'max_count': 'abcd', 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, } } self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=body) def test_create_multiple_instances(self): """Test creating multiple instances but not asking for reservation_id """ image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { 'min_count': 2, 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, 'metadata': {'hello': 'world', 'open': 'stack'}, } } def create_db_entry_for_new_instance(*args, **kwargs): instance = args[4] self.instance_cache_by_uuid[instance.uuid] = instance return instance self.stub_out('nova.compute.api.API.create_db_entry_for_new_instance', create_db_entry_for_new_instance) res = self.controller.create(self.req, body=body).obj instance_uuids = self.instance_cache_by_uuid.keys() self.assertIn(res["server"]["id"], instance_uuids) self._check_admin_password_len(res["server"]) def test_create_multiple_instances_pass_disabled(self): """Test creating multiple instances but not asking for reservation_id """ self.flags(enable_instance_password=False, group='api') image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { 'min_count': 2, 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, 'metadata': {'hello': 'world', 'open': 'stack'}, } } def create_db_entry_for_new_instance(*args, **kwargs): instance = args[4] self.instance_cache_by_uuid[instance.uuid] = instance return instance self.stub_out('nova.compute.api.API.create_db_entry_for_new_instance', create_db_entry_for_new_instance) res = self.controller.create(self.req, body=body).obj instance_uuids = self.instance_cache_by_uuid.keys() self.assertIn(res["server"]["id"], instance_uuids) self._check_admin_password_missing(res["server"]) def _create_multiple_instances_resv_id_return(self, resv_id_return): """Test creating multiple instances with asking for reservation_id """ def create_db_entry_for_new_instance(*args, **kwargs): instance = args[4] self.instance_cache_by_uuid[instance.uuid] = instance return instance self.stub_out('nova.compute.api.API.create_db_entry_for_new_instance', create_db_entry_for_new_instance) image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' 
flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { 'min_count': 2, 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, 'metadata': {'hello': 'world', 'open': 'stack'}, 'return_reservation_id': resv_id_return } } res = self.controller.create(self.req, body=body) reservation_id = res.obj['reservation_id'] self.assertNotEqual(reservation_id, "") self.assertIsNotNone(reservation_id) self.assertGreater(len(reservation_id), 1) def test_create_multiple_instances_with_resv_id_return(self): self._create_multiple_instances_resv_id_return(True) def test_create_multiple_instances_with_string_resv_id_return(self): self._create_multiple_instances_resv_id_return("True") def test_create_multiple_instances_with_multiple_volume_bdm(self): """Test that a BadRequest is raised if multiple instances are requested with a list of block device mappings for volumes. """ min_count = 2 bdm = [{'source_type': 'volume', 'uuid': 'vol-xxxx'}, {'source_type': 'volume', 'uuid': 'vol-yyyy'} ] params = { 'block_device_mapping_v2': bdm, 'min_count': min_count } old_create = compute_api.API.create def create(*args, **kwargs): self.assertEqual(kwargs['min_count'], 2) self.assertEqual(len(kwargs['block_device_mapping']), 2) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) exc = self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_extra, params, no_image=True) self.assertEqual("Cannot attach one or more volumes to multiple " "instances", exc.explanation) def test_create_multiple_instances_with_single_volume_bdm(self): """Test that a BadRequest is raised if multiple instances are requested to boot from a single volume. """ min_count = 2 bdm = [{'source_type': 'volume', 'uuid': 'vol-xxxx'}] params = { 'block_device_mapping_v2': bdm, 'min_count': min_count } old_create = compute_api.API.create def create(*args, **kwargs): self.assertEqual(kwargs['min_count'], 2) self.assertEqual(kwargs['block_device_mapping'][0]['volume_id'], 'vol-xxxx') return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) exc = self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_extra, params, no_image=True) self.assertEqual("Cannot attach one or more volumes to multiple " "instances", exc.explanation) def test_create_multiple_instance_with_non_integer_max_count(self): image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { 'max_count': 2.5, 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, 'metadata': {'hello': 'world', 'open': 'stack'}, } } self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=body) def test_create_multiple_instance_with_non_integer_min_count(self): image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { 'min_count': 2.5, 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, 'metadata': {'hello': 'world', 'open': 'stack'}, } } self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=body) def test_create_multiple_instance_max_count_overquota_min_count_ok(self): self.flags(instances=3, group='quota') image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { 'min_count': 2, 'max_count': 5, 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, } } def create_db_entry_for_new_instance(*args, **kwargs): instance = args[4] 
self.instance_cache_by_uuid[instance.uuid] = instance return instance self.stub_out('nova.compute.api.API.create_db_entry_for_new_instance', create_db_entry_for_new_instance) res = self.controller.create(self.req, body=body).obj instance_uuids = self.instance_cache_by_uuid.keys() self.assertIn(res["server"]["id"], instance_uuids) def test_create_multiple_instance_max_count_overquota_min_count_over(self): self.flags(instances=3, group='quota') image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { 'min_count': 4, 'max_count': 5, 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, } } self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, self.req, body=body) @mock.patch.object(compute_api.API, 'create') def test_create_multiple_instance_with_specified_ip_neutronv2(self, _api_mock): _api_mock.side_effect = exception.InvalidFixedIpAndMaxCountRequest( reason="") network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' port = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee' address = '10.0.0.1' requested_networks = [{'uuid': network, 'fixed_ip': address, 'port': port}] params = {'networks': requested_networks} self.body['server']['max_count'] = 2 self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_extra, params) @mock.patch.object(compute_api.API, 'create', side_effect=exception.MultiplePortsNotApplicable( reason="Unable to launch multiple instances with " "a single configured port ID. Please " "launch your instance one by one with " "different ports.")) def test_create_multiple_instance_with_port(self, mock_create): requested_networks = [{'uuid': uuids.network, 'port': uuids.port}] params = {'networks': requested_networks} self.body['server']['max_count'] = 2 self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_extra, params) @mock.patch.object(compute_api.API, 'create', side_effect=exception.NetworkNotFound( network_id=uuids.network)) def test_create_instance_with_not_found_network(self, mock_create): requested_networks = [{'uuid': uuids.network}] params = {'networks': requested_networks} self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_extra, params) @mock.patch.object(compute_api.API, 'create', side_effect=exception.PortNotFound(port_id=uuids.port)) def test_create_instance_with_port_not_found(self, mock_create): requested_networks = [{'uuid': uuids.network, 'port': uuids.port}] params = {'networks': requested_networks} self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_extra, params) @mock.patch.object(compute_api.API, 'create') def test_create_instance_with_network_ambiguous(self, mock_create): mock_create.side_effect = exception.NetworkAmbiguous() self.assertRaises(webob.exc.HTTPConflict, self._test_create_extra, {}) @mock.patch.object(compute_api.API, 'create', side_effect=exception.UnableToAutoAllocateNetwork( project_id=FAKE_UUID)) def test_create_instance_with_unable_to_auto_allocate_network(self, mock_create): self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_extra, {}) @mock.patch.object(compute_api.API, 'create', side_effect=exception.ImageNotAuthorized( image_id=FAKE_UUID)) def test_create_instance_with_image_not_authorized(self, mock_create): self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_extra, {}) @mock.patch.object(compute_api.API, 'create', side_effect=exception.InstanceExists( name='instance-name')) def test_create_instance_raise_instance_exists(self, mock_create): self.assertRaises(webob.exc.HTTPConflict, 
self.controller.create, self.req, body=self.body) @mock.patch.object(compute_api.API, 'create', side_effect=exception.InvalidBDMEphemeralSize) def test_create_instance_raise_invalid_bdm_ephsize(self, mock_create): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) @mock.patch.object(compute_api.API, 'create', side_effect=exception.InvalidNUMANodesNumber( nodes='-1')) def test_create_instance_raise_invalid_numa_nodes(self, mock_create): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) @mock.patch.object(compute_api.API, 'create', side_effect=exception.InvalidBDMFormat(details='')) def test_create_instance_raise_invalid_bdm_format(self, mock_create): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) @mock.patch.object(compute_api.API, 'create', side_effect=exception.InvalidBDMSwapSize) def test_create_instance_raise_invalid_bdm_swapsize(self, mock_create): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) @mock.patch.object(compute_api.API, 'create', side_effect=exception.InvalidBDM) def test_create_instance_raise_invalid_bdm(self, mock_create): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) @mock.patch.object(compute_api.API, 'create', side_effect=exception.ImageBadRequest( image_id='dummy', response='dummy')) def test_create_instance_raise_image_bad_request(self, mock_create): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) def test_create_instance_invalid_availability_zone(self): self.body['server']['availability_zone'] = 'invalid::::zone' self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) def test_create_instance_invalid_availability_zone_as_int(self): self.body['server']['availability_zone'] = 123 self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) @mock.patch.object(compute_api.API, 'create', side_effect=exception.FixedIpNotFoundForAddress( address='dummy')) def test_create_instance_raise_fixed_ip_not_found_bad_request(self, mock_create): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) @mock.patch('nova.virt.hardware.numa_get_constraints', side_effect=exception.CPUThreadPolicyConfigurationInvalid()) def test_create_instance_raise_cpu_thread_policy_configuration_invalid( self, mock_numa): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) @mock.patch('nova.virt.hardware.get_mem_encryption_constraint', side_effect=exception.FlavorImageConflict( message="fake conflict reason")) def test_create_instance_raise_flavor_image_conflict( self, mock_conflict): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) @mock.patch('nova.virt.hardware.get_mem_encryption_constraint', side_effect=exception.InvalidMachineType( message="fake conflict reason")) def test_create_instance_raise_invalid_machine_type( self, mock_conflict): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) @mock.patch('nova.virt.hardware.numa_get_constraints', side_effect=exception.ImageCPUPinningForbidden()) def test_create_instance_raise_image_cpu_pinning_forbidden( self, mock_numa): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) 
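# NOTE: descriptive summary added for readability; it restates what the
# surrounding tests already do rather than introducing new behaviour. Each
# of these tests mocks nova.compute.api.API.create (or a
# nova.virt.hardware helper) to raise a specific nova exception and then
# asserts that the controller translates it into a webob HTTP error:
# HTTPBadRequest for invalid-input failures such as the InvalidBDM* and
# image/NUMA constraint errors, and HTTPConflict for conflicts such as
# InstanceExists or NetworkAmbiguous.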
@mock.patch('nova.virt.hardware.numa_get_constraints', side_effect=exception.ImageCPUThreadPolicyForbidden()) def test_create_instance_raise_image_cpu_thread_policy_forbidden( self, mock_numa): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) @mock.patch('nova.virt.hardware.numa_get_constraints', side_effect=exception.MemoryPageSizeInvalid(pagesize='-1')) def test_create_instance_raise_memory_page_size_invalid(self, mock_numa): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) @mock.patch('nova.virt.hardware.numa_get_constraints', side_effect=exception.MemoryPageSizeForbidden(pagesize='1', against='2')) def test_create_instance_raise_memory_page_size_forbidden(self, mock_numa): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) @mock.patch('nova.virt.hardware.numa_get_constraints', side_effect=exception.RealtimeConfigurationInvalid()) def test_create_instance_raise_realtime_configuration_invalid( self, mock_numa): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) @mock.patch('nova.virt.hardware.numa_get_constraints', side_effect=exception.RealtimeMaskNotFoundOrInvalid()) def test_create_instance_raise_realtime_mask_not_found_or_invalid( self, mock_numa): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) @mock.patch.object(compute_api.API, 'create') def test_create_instance_invalid_personality(self, mock_create): # Personality files have been deprecated as of v2.57 self.req.api_version_request = \ api_version_request.APIVersionRequest('2.56') codec = 'utf8' content = encodeutils.safe_encode( 'b25zLiINCg0KLVJpY2hhcmQgQ$$%QQmFjaA==') start_position = 19 end_position = 20 msg = 'invalid start byte' mock_create.side_effect = UnicodeDecodeError(codec, content, start_position, end_position, msg) self.body['server']['personality'] = [ { "path": "/etc/banner.txt", "contents": "b25zLiINCg0KLVJpY2hhcmQgQ$$%QQmFjaA==", }, ] self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) def test_create_instance_without_personality_should_get_empty_list(self): # Personality files have been deprecated as of v2.57 self.req.api_version_request = \ api_version_request.APIVersionRequest('2.56') old_create = compute_api.API.create def create(*args, **kwargs): self.assertEqual([], kwargs['injected_files']) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) self._test_create_instance() def test_create_instance_with_extra_personality_arg(self): # Personality files have been deprecated as of v2.57 self.req.api_version_request = \ api_version_request.APIVersionRequest('2.56') self.body['server']['personality'] = [ { "path": "/etc/banner.txt", "contents": "b25zLiINCg0KLVJpY2hhcmQgQ$$%QQmFjaA==", "extra_arg": "extra value" }, ] self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) @mock.patch.object(compute_api.API, 'create', side_effect=exception.PciRequestAliasNotDefined( alias='fake_name')) def test_create_instance_pci_alias_not_defined(self, mock_create): # Tests that PciRequestAliasNotDefined is translated to a 400 error. 
ex = self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_extra, {}) self.assertIn('PCI alias fake_name is not defined', six.text_type(ex)) @mock.patch.object(compute_api.API, 'create', side_effect=exception.PciInvalidAlias( reason='just because')) def test_create_instance_pci_invalid_alias(self, mock_create): # Tests that PciInvalidAlias is translated to a 400 error. ex = self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_extra, {}) self.assertIn('Invalid PCI alias definition', six.text_type(ex)) def test_create_instance_with_user_data(self): value = base64.encode_as_text("A random string") params = {'user_data': value} self._test_create_extra(params) def test_create_instance_with_bad_user_data(self): value = "A random string" params = {'user_data': value} self.assertRaises(exception.ValidationError, self._test_create_extra, params) @mock.patch('nova.compute.api.API.create') def test_create_instance_with_none_allowd_for_v20_compat_mode(self, mock_create): def create(context, *args, **kwargs): self.assertIsNone(kwargs['user_data']) return ([fakes.stub_instance_obj(context)], None) mock_create.side_effect = create self.req.set_legacy_v2() params = {'user_data': None} self._test_create_extra(params) class ServersControllerCreateTestV219(ServersControllerCreateTest): def _create_instance_req(self, set_desc, desc=None): if set_desc: self.body['server']['description'] = desc self.req.body = jsonutils.dump_as_bytes(self.body) self.req.api_version_request = \ api_version_request.APIVersionRequest('2.19') def test_create_instance_with_description(self): self._create_instance_req(True, 'server_desc') # The fact that the action doesn't raise is enough validation self.controller.create(self.req, body=self.body).obj def test_create_instance_with_none_description(self): self._create_instance_req(True) # The fact that the action doesn't raise is enough validation self.controller.create(self.req, body=self.body).obj def test_create_instance_with_empty_description(self): self._create_instance_req(True, '') # The fact that the action doesn't raise is enough validation self.controller.create(self.req, body=self.body).obj def test_create_instance_without_description(self): self._create_instance_req(False) # The fact that the action doesn't raise is enough validation self.controller.create(self.req, body=self.body).obj def test_create_instance_description_too_long(self): self._create_instance_req(True, 'X' * 256) self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) def test_create_instance_description_invalid(self): self._create_instance_req(True, "abc\0ddef") self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) class ServersControllerCreateTestV232(test.NoDBTestCase): def setUp(self): super(ServersControllerCreateTestV232, self).setUp() self.controller = servers.ServersController() self.body = { 'server': { 'name': 'device-tagging-server', 'imageRef': '6b0edabb-8cde-4684-a3f4-978960a51378', 'flavorRef': '2', 'networks': [{ 'uuid': 'ff608d40-75e9-48cb-b745-77bb55b5eaf2' }], 'block_device_mapping_v2': [{ 'uuid': '70a599e0-31e7-49b7-b260-868f441e862b', 'source_type': 'image', 'destination_type': 'volume', 'boot_index': 0, 'volume_size': '1' }] } } self.req = fakes.HTTPRequestV21.blank( '/%s/servers' % fakes.FAKE_PROJECT_ID, version='2.32') self.req.method = 'POST' self.req.headers['content-type'] = 'application/json' def _create_server(self): self.req.body = jsonutils.dump_as_bytes(self.body) 
self.controller.create(self.req, body=self.body) def test_create_server_no_tags(self): with test.nested( mock.patch.object(nova.compute.flavors, 'get_flavor_by_flavor_id', return_value=objects.Flavor()), mock.patch.object( compute_api.API, 'create', return_value=( [{'uuid': 'f60012d9-5ba4-4547-ab48-f94ff7e62d4e'}], 1)), ): self._create_server() def test_create_server_tagged_nic(self): with test.nested( mock.patch.object(nova.compute.flavors, 'get_flavor_by_flavor_id', return_value=objects.Flavor()), mock.patch.object( compute_api.API, 'create', return_value=( [{'uuid': 'f60012d9-5ba4-4547-ab48-f94ff7e62d4e'}], 1)), ): self.body['server']['networks'][0]['tag'] = 'foo' self._create_server() def test_create_server_tagged_bdm(self): with test.nested( mock.patch.object(nova.compute.flavors, 'get_flavor_by_flavor_id', return_value=objects.Flavor()), mock.patch.object( compute_api.API, 'create', return_value=( [{'uuid': 'f60012d9-5ba4-4547-ab48-f94ff7e62d4e'}], 1)), ): self.body['server']['block_device_mapping_v2'][0]['tag'] = 'foo' self._create_server() class ServersControllerCreateTestV237(test.NoDBTestCase): """Tests server create scenarios with the v2.37 microversion. These tests are mostly about testing the validation on the 2.37 server create request with emphasis on negative scenarios. """ def setUp(self): super(ServersControllerCreateTestV237, self).setUp() # Create the server controller. self.controller = servers.ServersController() # Define a basic server create request body which tests can customize. self.body = { 'server': { 'name': 'auto-allocate-test', 'imageRef': '6b0edabb-8cde-4684-a3f4-978960a51378', 'flavorRef': '2', }, } # Create a fake request using the 2.37 microversion. self.req = fakes.HTTPRequestV21.blank( '/%s/servers' % fakes.FAKE_PROJECT_ID, version='2.37') self.req.method = 'POST' self.req.headers['content-type'] = 'application/json' def _create_server(self, networks): self.body['server']['networks'] = networks self.req.body = jsonutils.dump_as_bytes(self.body) return self.controller.create(self.req, body=self.body).obj['server'] def test_create_server_auth_pre_2_37_fails(self): """Negative test to make sure you can't pass 'auto' before 2.37""" self.req.api_version_request = \ api_version_request.APIVersionRequest('2.36') self.assertRaises(exception.ValidationError, self._create_server, 'auto') def test_create_server_no_requested_networks_fails(self): """Negative test for a server create request with no networks requested which should fail with the v2.37 schema validation. """ self.assertRaises(exception.ValidationError, self._create_server, None) def test_create_server_network_id_not_uuid_fails(self): """Negative test for a server create request where the requested network id is not one of the auto/none enums. """ self.assertRaises(exception.ValidationError, self._create_server, 'not-auto-or-none') def test_create_server_network_id_empty_string_fails(self): """Negative test for a server create request where the requested network id is the empty string. """ self.assertRaises(exception.ValidationError, self._create_server, '') @mock.patch.object(context.RequestContext, 'can') def test_create_server_networks_none_skip_policy(self, context_can): """Test to ensure skip checking policy rule create:attach_network, when networks is 'none' which means no network will be allocated. 
""" with test.nested( mock.patch('nova.objects.service.get_minimum_version_all_cells', return_value=14), mock.patch.object(nova.compute.flavors, 'get_flavor_by_flavor_id', return_value=objects.Flavor()), mock.patch.object( compute_api.API, 'create', return_value=( [{'uuid': 'f9bccadf-5ab1-4a56-9156-c00c178fe5f5'}], 1)), ): network_policy = server_policies.SERVERS % 'create:attach_network' self._create_server('none') call_list = [c for c in context_can.call_args_list if c[0][0] == network_policy] self.assertEqual(0, len(call_list)) @mock.patch.object(objects.Flavor, 'get_by_flavor_id', side_effect=exception.FlavorNotFound(flavor_id='2')) def test_create_server_auto_flavornotfound(self, get_flavor): """Tests that requesting auto networking is OK. This test short-circuits on a FlavorNotFound error. """ self.useFixture(nova_fixtures.AllServicesCurrent()) ex = self.assertRaises( webob.exc.HTTPBadRequest, self._create_server, 'auto') # make sure it was a flavor not found error and not something else self.assertIn('Flavor 2 could not be found', six.text_type(ex)) @mock.patch.object(objects.Flavor, 'get_by_flavor_id', side_effect=exception.FlavorNotFound(flavor_id='2')) def test_create_server_none_flavornotfound(self, get_flavor): """Tests that requesting none for networking is OK. This test short-circuits on a FlavorNotFound error. """ self.useFixture(nova_fixtures.AllServicesCurrent()) ex = self.assertRaises( webob.exc.HTTPBadRequest, self._create_server, 'none') # make sure it was a flavor not found error and not something else self.assertIn('Flavor 2 could not be found', six.text_type(ex)) @mock.patch.object(objects.Flavor, 'get_by_flavor_id', side_effect=exception.FlavorNotFound(flavor_id='2')) def test_create_server_multiple_specific_nics_flavornotfound(self, get_flavor): """Tests that requesting multiple specific network IDs is OK. This test short-circuits on a FlavorNotFound error. """ self.useFixture(nova_fixtures.AllServicesCurrent()) ex = self.assertRaises( webob.exc.HTTPBadRequest, self._create_server, [{'uuid': 'e3b686a8-b91d-4a61-a3fc-1b74bb619ddb'}, {'uuid': 'e0f00941-f85f-46ec-9315-96ded58c2f14'}]) # make sure it was a flavor not found error and not something else self.assertIn('Flavor 2 could not be found', six.text_type(ex)) def test_create_server_legacy_neutron_network_id_fails(self): """Tests that we no longer support the legacy br- format for a network id. 
""" uuid = 'br-00000000-0000-0000-0000-000000000000' self.assertRaises(exception.ValidationError, self._create_server, [{'uuid': uuid}]) @ddt.ddt class ServersControllerCreateTestV252(test.NoDBTestCase): def setUp(self): super(ServersControllerCreateTestV252, self).setUp() self.controller = servers.ServersController() self.body = { 'server': { 'name': 'device-tagging-server', 'imageRef': '6b0edabb-8cde-4684-a3f4-978960a51378', 'flavorRef': '2', 'networks': [{ 'uuid': 'ff608d40-75e9-48cb-b745-77bb55b5eaf2' }] } } self.req = fakes.HTTPRequestV21.blank( '/%s/servers' % fakes.FAKE_PROJECT_ID, version='2.52') self.req.method = 'POST' self.req.headers['content-type'] = 'application/json' def _create_server(self, tags): self.body['server']['tags'] = tags self.req.body = jsonutils.dump_as_bytes(self.body) return self.controller.create(self.req, body=self.body).obj['server'] def test_create_server_with_tags_pre_2_52_fails(self): """Negative test to make sure you can't pass 'tags' before 2.52""" self.req.api_version_request = \ api_version_request.APIVersionRequest('2.51') self.assertRaises( exception.ValidationError, self._create_server, ['tag1']) @ddt.data([','], ['/'], ['a' * (tag.MAX_TAG_LENGTH + 1)], ['a'] * (instance_obj.MAX_TAG_COUNT + 1), [''], [1, 2, 3], {'tag': 'tag'}) def test_create_server_with_tags_incorrect_tags(self, tags): """Negative test to incorrect tags are not allowed""" self.req.api_version_request = \ api_version_request.APIVersionRequest('2.52') self.assertRaises( exception.ValidationError, self._create_server, tags) class ServersControllerCreateTestV257(test.NoDBTestCase): """Tests that trying to create a server with personality files using microversion 2.57 fails. """ def test_create_server_with_personality_fails(self): controller = servers.ServersController() body = { 'server': { 'name': 'no-personality-files', 'imageRef': '6b0edabb-8cde-4684-a3f4-978960a51378', 'flavorRef': '2', 'networks': 'auto', 'personality': [{ 'path': '/path/to/file', 'contents': 'ZWNobyAiaGVsbG8gd29ybGQi' }] } } req = fakes.HTTPRequestV21.blank('/servers', version='2.57') req.body = jsonutils.dump_as_bytes(body) req.method = 'POST' req.headers['content-type'] = 'application/json' ex = self.assertRaises( exception.ValidationError, controller.create, req, body=body) self.assertIn('personality', six.text_type(ex)) @mock.patch('nova.compute.utils.check_num_instances_quota', new=lambda *args, **kwargs: 1) class ServersControllerCreateTestV260(test.NoDBTestCase): """Negative tests for creating a server with a multiattach volume.""" def setUp(self): super(ServersControllerCreateTestV260, self).setUp() self.useFixture(nova_fixtures.NoopQuotaDriverFixture()) self.controller = servers.ServersController() get_flavor_mock = mock.patch( 'nova.compute.flavors.get_flavor_by_flavor_id', return_value=fake_flavor.fake_flavor_obj( context.get_admin_context(), flavorid='1', expected_attrs=['extra_specs'])) get_flavor_mock.start() self.addCleanup(get_flavor_mock.stop) reqspec_create_mock = mock.patch( 'nova.objects.RequestSpec.create') reqspec_create_mock.start() self.addCleanup(reqspec_create_mock.stop) volume_get_mock = mock.patch( 'nova.volume.cinder.API.get', return_value={'id': uuids.fake_volume_id, 'multiattach': True}) volume_get_mock.start() self.addCleanup(volume_get_mock.stop) def _post_server(self, version=None): body = { 'server': { 'name': 'multiattach', 'flavorRef': '1', 'networks': 'none', 'block_device_mapping_v2': [{ 'uuid': uuids.fake_volume_id, 'source_type': 'volume', 'destination_type': 'volume', 
'boot_index': 0, 'delete_on_termination': True}] } } req = fakes.HTTPRequestV21.blank( '/servers', version=version or '2.60') req.body = jsonutils.dump_as_bytes(body) req.method = 'POST' req.headers['content-type'] = 'application/json' return self.controller.create(req, body=body) def test_create_server_with_multiattach_fails_old_microversion(self): """Tests the case that the user tries to boot from volume with a multiattach volume but before using microversion 2.60. """ self.useFixture(nova_fixtures.AllServicesCurrent()) ex = self.assertRaises(webob.exc.HTTPBadRequest, self._post_server, '2.59') self.assertIn('Multiattach volumes are only supported starting with ' 'compute API version 2.60', six.text_type(ex)) class ServersControllerCreateTestV263(ServersControllerCreateTest): def _create_instance_req(self, certs=None): self.body['server']['trusted_image_certificates'] = certs self.flags(verify_glance_signatures=True, group='glance') self.flags(enable_certificate_validation=True, group='glance') self.req.body = jsonutils.dump_as_bytes(self.body) self.req.api_version_request = \ api_version_request.APIVersionRequest('2.63') def test_create_instance_with_trusted_certs(self): """Test create with valid trusted_image_certificates argument""" self._create_instance_req( ['0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8', '674736e3-f25c-405c-8362-bbf991e0ce0a']) # The fact that the action doesn't raise is enough validation self.controller.create(self.req, body=self.body).obj def test_create_instance_without_trusted_certs(self): """Test create without trusted image certificates""" self._create_instance_req() # The fact that the action doesn't raise is enough validation self.controller.create(self.req, body=self.body).obj def test_create_instance_with_empty_trusted_cert_id(self): """Make sure we can't create with an empty certificate ID""" self._create_instance_req(['']) ex = self.assertRaises( exception.ValidationError, self.controller.create, self.req, body=self.body) self.assertIn('is too short', six.text_type(ex)) def test_create_instance_with_empty_trusted_certs(self): """Make sure we can't create with an empty array of IDs""" self.body['server']['trusted_image_certificates'] = [] self.req.body = jsonutils.dump_as_bytes(self.body) self.req.api_version_request = \ api_version_request.APIVersionRequest('2.63') ex = self.assertRaises( exception.ValidationError, self.controller.create, self.req, body=self.body) self.assertIn('is too short', six.text_type(ex)) def test_create_instance_with_too_many_trusted_certs(self): """Make sure we can't create with an array of >50 unique IDs""" self._create_instance_req(['cert{}'.format(i) for i in range(51)]) ex = self.assertRaises( exception.ValidationError, self.controller.create, self.req, body=self.body) self.assertIn('is too long', six.text_type(ex)) def test_create_instance_with_nonunique_trusted_certs(self): """Make sure we can't create with a non-unique array of IDs""" self._create_instance_req(['cert', 'cert']) ex = self.assertRaises( exception.ValidationError, self.controller.create, self.req, body=self.body) self.assertIn('has non-unique elements', six.text_type(ex)) def test_create_instance_with_invalid_trusted_cert_id(self): """Make sure we can't create with non-string certificate IDs""" self._create_instance_req([1, 2]) ex = self.assertRaises( exception.ValidationError, self.controller.create, self.req, body=self.body) self.assertIn('is not of type', six.text_type(ex)) def test_create_instance_with_invalid_trusted_certs(self): """Make sure we can't create 
with certificates in a non-array""" self._create_instance_req("not-an-array") ex = self.assertRaises( exception.ValidationError, self.controller.create, self.req, body=self.body) self.assertIn('is not of type', six.text_type(ex)) def test_create_server_with_trusted_certs_pre_2_63_fails(self): """Make sure we can't use trusted_certs before 2.63""" self._create_instance_req(['trusted-cert-id']) self.req.api_version_request = \ api_version_request.APIVersionRequest('2.62') ex = self.assertRaises( exception.ValidationError, self.controller.create, self.req, body=self.body) self.assertIn('Additional properties are not allowed', six.text_type(ex)) def test_create_server_with_trusted_certs_policy_failed(self): rule_name = "os_compute_api:servers:create:trusted_certs" rules = {"os_compute_api:servers:create": "@", "os_compute_api:servers:create:forced_host": "@", "os_compute_api:servers:create:attach_volume": "@", "os_compute_api:servers:create:attach_network": "@", rule_name: "project:fake"} self._create_instance_req(['0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8']) self.policy.set_rules(rules) exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller.create, self.req, body=self.body) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) @mock.patch.object(compute_api.API, 'create') def test_create_server_with_cert_validation_error( self, mock_create): mock_create.side_effect = exception.CertificateValidationFailed( cert_uuid="cert id", reason="test cert validation error") self._create_instance_req(['trusted-cert-id']) ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) self.assertIn('test cert validation error', six.text_type(ex)) class ServersControllerCreateTestV267(ServersControllerCreateTest): def setUp(self): super(ServersControllerCreateTestV267, self).setUp() self.block_device_mapping_v2 = [ {'uuid': '70a599e0-31e7-49b7-b260-868f441e862b', 'source_type': 'image', 'destination_type': 'volume', 'boot_index': 0, 'volume_size': '1', 'volume_type': 'fake-lvm-1' }] def _test_create_extra(self, *args, **kwargs): self.req.api_version_request = \ api_version_request.APIVersionRequest('2.67') return super(ServersControllerCreateTestV267, self)._test_create_extra( *args, **kwargs) def test_create_server_with_trusted_volume_type_pre_2_67_fails(self): """Make sure we can't use volume_type before 2.67""" self.body['server'].update( {'block_device_mapping_v2': self.block_device_mapping_v2}) self.req.body = jsonutils.dump_as_bytes(self.block_device_mapping_v2) self.req.api_version_request = \ api_version_request.APIVersionRequest('2.66') ex = self.assertRaises( exception.ValidationError, self.controller.create, self.req, body=self.body) self.assertIn("'volume_type' was unexpected", six.text_type(ex)) @mock.patch.object(compute_api.API, 'create', side_effect=exception.VolumeTypeNotFound( id_or_name='fake-lvm-1')) def test_create_instance_with_volume_type_not_found(self, mock_create): """Trying to boot from volume with a volume type that does not exist will result in a 400 error. 
""" params = {'block_device_mapping_v2': self.block_device_mapping_v2} ex = self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_extra, params) self.assertIn('Volume type fake-lvm-1 could not be found', six.text_type(ex)) def test_create_instance_with_volume_type_empty_string(self): """Test passing volume_type='' which is accepted but not used.""" self.block_device_mapping_v2[0]['volume_type'] = '' params = {'block_device_mapping_v2': self.block_device_mapping_v2} self._test_create_extra(params) def test_create_instance_with_none_volume_type(self): """Test passing volume_type=None which is accepted but not used.""" self.block_device_mapping_v2[0]['volume_type'] = None params = {'block_device_mapping_v2': self.block_device_mapping_v2} self._test_create_extra(params) def test_create_instance_without_volume_type(self): """Test passing without volume_type which is accepted but not used.""" self.block_device_mapping_v2[0].pop('volume_type') params = {'block_device_mapping_v2': self.block_device_mapping_v2} self._test_create_extra(params) def test_create_instance_with_volume_type_too_long(self): """Tests the maxLength schema validation on volume_type.""" self.block_device_mapping_v2[0]['volume_type'] = 'X' * 256 params = {'block_device_mapping_v2': self.block_device_mapping_v2} ex = self.assertRaises(exception.ValidationError, self._test_create_extra, params) self.assertIn('is too long', six.text_type(ex)) class ServersControllerCreateTestV274(ServersControllerCreateTest): def setUp(self): super(ServersControllerCreateTestV274, self).setUp() self.req.environ['nova.context'] = fakes.FakeRequestContext( user_id='fake_user', project_id=self.project_id, is_admin=True) self.mock_get = self.useFixture( fixtures.MockPatch('nova.scheduler.client.report.' 
'SchedulerReportClient.get')).mock def _generate_req(self, host=None, node=None, az=None, api_version='2.74'): if host: self.body['server']['host'] = host if node: self.body['server']['hypervisor_hostname'] = node if az: self.body['server']['availability_zone'] = az self.req.body = jsonutils.dump_as_bytes(self.body) self.req.api_version_request = \ api_version_request.APIVersionRequest(api_version) def test_create_instance_with_invalid_host(self): self._generate_req(host='node-invalid') ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) self.assertIn('Compute host node-invalid could not be found.', six.text_type(ex)) def test_create_instance_with_non_string_host(self): self._generate_req(host=123) ex = self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) self.assertIn("Invalid input for field/attribute host.", six.text_type(ex)) def test_create_instance_with_invalid_hypervisor_hostname(self): get_resp = mock.Mock() get_resp.status_code = 404 self.mock_get.return_value = get_resp self._generate_req(node='node-invalid') ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) self.assertIn('Compute host node-invalid could not be found.', six.text_type(ex)) def test_create_instance_with_non_string_hypervisor_hostname(self): get_resp = mock.Mock() get_resp.status_code = 404 self.mock_get.return_value = get_resp self._generate_req(node=123) ex = self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) self.assertIn("Invalid input for field/attribute hypervisor_hostname.", six.text_type(ex)) def test_create_instance_with_invalid_host_and_hypervisor_hostname(self): self._generate_req(host='host-invalid', node='node-invalid') ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) self.assertIn('Compute host host-invalid could not be found.', six.text_type(ex)) def test_create_instance_with_non_string_host_and_hypervisor_hostname( self): self._generate_req(host=123, node=123) ex = self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) self.assertIn("Invalid input for field/attribute", six.text_type(ex)) def test_create_instance_pre_274(self): self._generate_req(host='host', node='node', api_version='2.73') ex = self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) self.assertIn("Invalid input for field/attribute server.", six.text_type(ex)) def test_create_instance_mutual(self): self._generate_req(host='host', node='node', az='nova:host:node') ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) self.assertIn("mutually exclusive", six.text_type(ex)) def test_create_instance_invalid_policy(self): self._generate_req(host='host', node='node') # non-admin self.req.environ['nova.context'] = fakes.FakeRequestContext( user_id='fake_user', project_id=fakes.FAKE_PROJECT_ID, is_admin=False) ex = self.assertRaises(exception.PolicyNotAuthorized, self.controller.create, self.req, body=self.body) self.assertIn("Policy doesn't allow compute:servers:create:" "requested_destination to be performed.", six.text_type(ex)) def test_create_instance_private_flavor(self): # Here we use admin context, so if we do not pass it or # we do not anything, the test case will be failed. 
pass class ServersControllerCreateTestWithMock(test.TestCase): image_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' def setUp(self): """Shared implementation for tests below that create instance.""" super(ServersControllerCreateTestWithMock, self).setUp() self.flags(enable_instance_password=True, group='api') self.instance_cache_num = 0 self.instance_cache_by_id = {} self.instance_cache_by_uuid = {} self.controller = servers.ServersController() self.body = { 'server': { 'name': 'server_test', 'imageRef': self.image_uuid, 'flavorRef': self.flavor_ref, 'metadata': { 'hello': 'world', 'open': 'stack', }, }, } self.req = fakes.HTTPRequest.blank( '/%s/servers' % fakes.FAKE_PROJECT_ID) self.req.method = 'POST' self.req.headers["content-type"] = "application/json" def _test_create_extra(self, params, no_image=False): self.body['server']['flavorRef'] = 2 if no_image: self.body['server'].pop('imageRef', None) self.body['server'].update(params) self.req.body = jsonutils.dump_as_bytes(self.body) self.req.headers["content-type"] = "application/json" self.controller.create(self.req, body=self.body).obj['server'] @mock.patch.object(compute_api.API, 'create') def test_create_instance_with_fixed_ip_already_in_use(self, create_mock): network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' address = '10.0.2.3' requested_networks = [{'uuid': network, 'fixed_ip': address}] params = {'networks': requested_networks} create_mock.side_effect = exception.FixedIpAlreadyInUse( address=address, instance_uuid=network) self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_extra, params) self.assertEqual(1, len(create_mock.call_args_list)) @mock.patch.object(compute_api.API, 'create') def test_create_instance_with_invalid_fixed_ip(self, create_mock): network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' address = '999.0.2.3' requested_networks = [{'uuid': network, 'fixed_ip': address}] params = {'networks': requested_networks} self.assertRaises(exception.ValidationError, self._test_create_extra, params) self.assertFalse(create_mock.called) @mock.patch.object(compute_api.API, 'create', side_effect=exception.InvalidVolume(reason='error')) def test_create_instance_with_invalid_volume_error(self, create_mock): # Tests that InvalidVolume is translated to a 400 error. self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_extra, {}) class ServersViewBuilderTest(test.TestCase): project_id = fakes.FAKE_PROJECT_ID def setUp(self): super(ServersViewBuilderTest, self).setUp() fakes.stub_out_nw_api(self) self.flags(group='glance', api_servers=['http://localhost:9292']) nw_cache_info = self._generate_nw_cache_info() db_inst = fakes.stub_instance( id=1, image_ref="5", uuid=FAKE_UUID, display_name="test_server", include_fake_metadata=False, availability_zone='nova', nw_cache=nw_cache_info, launched_at=None, terminated_at=None, task_state=None, vm_state=vm_states.ACTIVE, power_state=1) fakes.stub_out_secgroup_api( self, security_groups=[{'name': 'default'}]) self.stub_out('nova.db.api.' 'block_device_mapping_get_all_by_instance_uuids', fake_bdms_get_all_by_instance_uuids) self.stub_out('nova.objects.InstanceMappingList.' 
'_get_by_instance_uuids_from_db', fake_get_inst_mappings_by_instance_uuids_from_db) self.uuid = db_inst['uuid'] self.view_builder = views.servers.ViewBuilder() self.request = fakes.HTTPRequestV21.blank("/%s" % self.project_id) self.request.context = context.RequestContext('fake', self.project_id) self.instance = fake_instance.fake_instance_obj( self.request.context, expected_attrs=instance_obj.INSTANCE_DEFAULT_FIELDS, **db_inst) self.self_link = "http://localhost/v2/%s/servers/%s" % ( self.project_id, self.uuid) self.bookmark_link = "http://localhost/%s/servers/%s" % ( self.project_id, self.uuid) def _generate_nw_cache_info(self): fixed_ipv4 = ('192.168.1.100', '192.168.2.100', '192.168.3.100') fixed_ipv6 = ('2001:db8:0:1::1',) def _ip(ip): return {'address': ip, 'type': 'fixed'} nw_cache = [ {'address': 'aa:aa:aa:aa:aa:aa', 'id': 1, 'network': {'bridge': 'br0', 'id': 1, 'label': 'test1', 'subnets': [{'cidr': '192.168.1.0/24', 'ips': [_ip(fixed_ipv4[0])]}, {'cidr': 'b33f::/64', 'ips': [_ip(fixed_ipv6[0])]}]}}, {'address': 'bb:bb:bb:bb:bb:bb', 'id': 2, 'network': {'bridge': 'br0', 'id': 1, 'label': 'test1', 'subnets': [{'cidr': '192.168.2.0/24', 'ips': [_ip(fixed_ipv4[1])]}]}}, {'address': 'cc:cc:cc:cc:cc:cc', 'id': 3, 'network': {'bridge': 'br0', 'id': 2, 'label': 'test2', 'subnets': [{'cidr': '192.168.3.0/24', 'ips': [_ip(fixed_ipv4[2])]}]}}] return nw_cache def test_get_flavor_valid_instance_type(self): flavor_bookmark = "http://localhost/%s/flavors/1" % self.project_id expected = {"id": "1", "links": [{"rel": "bookmark", "href": flavor_bookmark}]} result = self.view_builder._get_flavor(self.request, self.instance, False) self.assertEqual(result, expected) @mock.patch('nova.context.scatter_gather_cells') def test_get_volumes_attached_with_faily_cells(self, mock_sg): bdms = fake_bdms_get_all_by_instance_uuids() # just faking a nova list scenario mock_sg.return_value = { uuids.cell1: bdms[0], uuids.cell2: exception.BDMNotFound(id='fake') } ctxt = context.RequestContext('fake', fakes.FAKE_PROJECT_ID) result = self.view_builder._get_instance_bdms_in_multiple_cells( ctxt, [self.instance.uuid]) # will get the result from cell1 self.assertEqual(result, bdms[0]) mock_sg.assert_called_once() def test_build_server(self): expected_server = { "server": { "id": self.uuid, "name": "test_server", "links": [ { "rel": "self", "href": self.self_link, }, { "rel": "bookmark", "href": self.bookmark_link, }, ], } } output = self.view_builder.basic(self.request, self.instance) self.assertThat(output, matchers.DictMatches(expected_server)) def test_build_server_with_project_id(self): expected_server = { "server": { "id": self.uuid, "name": "test_server", "links": [ { "rel": "self", "href": self.self_link, }, { "rel": "bookmark", "href": self.bookmark_link, }, ], } } output = self.view_builder.basic(self.request, self.instance) self.assertThat(output, matchers.DictMatches(expected_server)) def test_build_server_detail(self): image_bookmark = "http://localhost/%s/images/5" % self.project_id flavor_bookmark = "http://localhost/%s/flavors/1" % self.project_id expected_server = { "server": { "id": self.uuid, "user_id": "fake_user", "tenant_id": "fake_project", "updated": "2010-11-11T11:00:00Z", "created": "2010-10-10T12:00:00Z", "progress": 0, "name": "test_server", "status": "ACTIVE", "hostId": '', "image": { "id": "5", "links": [ { "rel": "bookmark", "href": image_bookmark, }, ], }, "flavor": { "id": "1", "links": [ { "rel": "bookmark", "href": flavor_bookmark, }, ], }, "addresses": { 'test1': [ {'version': 4, 
'addr': '192.168.1.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 6, 'addr': '2001:db8:0:1::1', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 4, 'addr': '192.168.2.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'bb:bb:bb:bb:bb:bb'} ], 'test2': [ {'version': 4, 'addr': '192.168.3.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'cc:cc:cc:cc:cc:cc'}, ] }, "metadata": {}, "links": [ { "rel": "self", "href": self.self_link, }, { "rel": "bookmark", "href": self.bookmark_link, }, ], "OS-DCF:diskConfig": "MANUAL", "accessIPv4": '', "accessIPv6": '', "OS-EXT-AZ:availability_zone": "nova", "config_drive": None, "OS-EXT-SRV-ATTR:host": None, "OS-EXT-SRV-ATTR:hypervisor_hostname": None, "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "key_name": '', "OS-SRV-USG:launched_at": None, "OS-SRV-USG:terminated_at": None, "security_groups": [{'name': 'default'}], "OS-EXT-STS:task_state": None, "OS-EXT-STS:vm_state": vm_states.ACTIVE, "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": [ {'id': 'some_volume_1'}, {'id': 'some_volume_2'}, ] } } output = self.view_builder.show(self.request, self.instance) self.assertThat(output, matchers.DictMatches(expected_server)) def test_build_server_detail_with_fault(self): self.instance['vm_state'] = vm_states.ERROR self.instance['fault'] = fake_instance.fake_fault_obj( self.request.context, self.uuid) image_bookmark = "http://localhost/%s/images/5" % self.project_id flavor_bookmark = "http://localhost/%s/flavors/1" % self.project_id expected_server = { "server": { "id": self.uuid, "user_id": "fake_user", "tenant_id": "fake_project", "updated": "2010-11-11T11:00:00Z", "created": "2010-10-10T12:00:00Z", "name": "test_server", "status": "ERROR", "hostId": '', "image": { "id": "5", "links": [ { "rel": "bookmark", "href": image_bookmark, }, ], }, "flavor": { "id": "1", "links": [ { "rel": "bookmark", "href": flavor_bookmark, }, ], }, "addresses": { 'test1': [ {'version': 4, 'addr': '192.168.1.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 6, 'addr': '2001:db8:0:1::1', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 4, 'addr': '192.168.2.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'bb:bb:bb:bb:bb:bb'} ], 'test2': [ {'version': 4, 'addr': '192.168.3.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'cc:cc:cc:cc:cc:cc'}, ] }, "metadata": {}, "links": [ { "rel": "self", "href": self.self_link, }, { "rel": "bookmark", "href": self.bookmark_link, }, ], "fault": { "code": 404, "created": "2010-10-10T12:00:00Z", "message": "HTTPNotFound", "details": "Stock details for test", }, "OS-DCF:diskConfig": "MANUAL", "accessIPv4": '', "accessIPv6": '', "OS-EXT-AZ:availability_zone": "nova", "config_drive": None, "OS-EXT-SRV-ATTR:host": None, "OS-EXT-SRV-ATTR:hypervisor_hostname": None, "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "key_name": '', "OS-SRV-USG:launched_at": None, "OS-SRV-USG:terminated_at": None, "security_groups": [{'name': 'default'}], "OS-EXT-STS:task_state": None, "OS-EXT-STS:vm_state": vm_states.ERROR, "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": [ {'id': 'some_volume_1'}, {'id': 'some_volume_2'}, ] } } self.request.context = context.RequestContext('fake', self.project_id) output = self.view_builder.show(self.request, self.instance) self.assertThat(output, 
matchers.DictMatches(expected_server)) def test_build_server_detail_with_fault_that_has_been_deleted(self): self.instance['deleted'] = 1 self.instance['vm_state'] = vm_states.ERROR fault = fake_instance.fake_fault_obj(self.request.context, self.uuid, code=500, message="No valid host was found") self.instance['fault'] = fault expected_fault = {"code": 500, "created": "2010-10-10T12:00:00Z", "message": "No valid host was found"} self.request.context = context.RequestContext('fake', self.project_id) output = self.view_builder.show(self.request, self.instance) # Regardless of vm_state deleted servers should be DELETED self.assertEqual("DELETED", output['server']['status']) self.assertThat(output['server']['fault'], matchers.DictMatches(expected_fault)) @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid') def test_build_server_detail_with_fault_no_instance_mapping(self, mock_im): self.instance['vm_state'] = vm_states.ERROR mock_im.side_effect = exception.InstanceMappingNotFound(uuid='foo') self.request.context = context.RequestContext('fake', self.project_id) self.view_builder.show(self.request, self.instance) mock_im.assert_called_once_with(mock.ANY, self.uuid) @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid') def test_build_server_detail_with_fault_loaded(self, mock_im): self.instance['vm_state'] = vm_states.ERROR fault = fake_instance.fake_fault_obj(self.request.context, self.uuid, code=500, message="No valid host was found") self.instance['fault'] = fault self.request.context = context.RequestContext('fake', self.project_id) self.view_builder.show(self.request, self.instance) self.assertFalse(mock_im.called) def test_build_server_detail_with_fault_no_details_not_admin(self): self.instance['vm_state'] = vm_states.ERROR self.instance['fault'] = fake_instance.fake_fault_obj( self.request.context, self.uuid, code=500, message='Error') expected_fault = {"code": 500, "created": "2010-10-10T12:00:00Z", "message": "Error"} self.request.context = context.RequestContext('fake', self.project_id) output = self.view_builder.show(self.request, self.instance) self.assertThat(output['server']['fault'], matchers.DictMatches(expected_fault)) def test_build_server_detail_with_fault_admin(self): self.instance['vm_state'] = vm_states.ERROR self.instance['fault'] = fake_instance.fake_fault_obj( self.request.context, self.uuid, code=500, message='Error') expected_fault = {"code": 500, "created": "2010-10-10T12:00:00Z", "message": "Error", 'details': 'Stock details for test'} self.request.environ['nova.context'].is_admin = True output = self.view_builder.show(self.request, self.instance) self.assertThat(output['server']['fault'], matchers.DictMatches(expected_fault)) def test_build_server_detail_with_fault_no_details_admin(self): self.instance['vm_state'] = vm_states.ERROR self.instance['fault'] = fake_instance.fake_fault_obj( self.request.context, self.uuid, code=500, message='Error', details='') expected_fault = {"code": 500, "created": "2010-10-10T12:00:00Z", "message": "Error"} self.request.environ['nova.context'].is_admin = True output = self.view_builder.show(self.request, self.instance) self.assertThat(output['server']['fault'], matchers.DictMatches(expected_fault)) def test_build_server_detail_with_fault_but_active(self): self.instance['vm_state'] = vm_states.ACTIVE self.instance['progress'] = 100 self.instance['fault'] = fake_instance.fake_fault_obj( self.request.context, self.uuid) output = self.view_builder.show(self.request, self.instance) self.assertNotIn('fault', 
output['server']) def test_build_server_detail_active_status(self): # set the power state of the instance to running self.instance['vm_state'] = vm_states.ACTIVE self.instance['progress'] = 100 image_bookmark = "http://localhost/%s/images/5" % self.project_id flavor_bookmark = "http://localhost/%s/flavors/1" % self.project_id expected_server = { "server": { "id": self.uuid, "user_id": "fake_user", "tenant_id": "fake_project", "updated": "2010-11-11T11:00:00Z", "created": "2010-10-10T12:00:00Z", "progress": 100, "name": "test_server", "status": "ACTIVE", "hostId": '', "image": { "id": "5", "links": [ { "rel": "bookmark", "href": image_bookmark, }, ], }, "flavor": { "id": "1", "links": [ { "rel": "bookmark", "href": flavor_bookmark, }, ], }, "addresses": { 'test1': [ {'version': 4, 'addr': '192.168.1.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 6, 'addr': '2001:db8:0:1::1', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 4, 'addr': '192.168.2.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'bb:bb:bb:bb:bb:bb'} ], 'test2': [ {'version': 4, 'addr': '192.168.3.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'cc:cc:cc:cc:cc:cc'}, ] }, "metadata": {}, "links": [ { "rel": "self", "href": self.self_link, }, { "rel": "bookmark", "href": self.bookmark_link, }, ], "OS-DCF:diskConfig": "MANUAL", "accessIPv4": '', "accessIPv6": '', "OS-EXT-AZ:availability_zone": "nova", "config_drive": None, "OS-EXT-SRV-ATTR:host": None, "OS-EXT-SRV-ATTR:hypervisor_hostname": None, "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "key_name": '', "OS-SRV-USG:launched_at": None, "OS-SRV-USG:terminated_at": None, "security_groups": [{'name': 'default'}], "OS-EXT-STS:task_state": None, "OS-EXT-STS:vm_state": vm_states.ACTIVE, "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": [ {'id': 'some_volume_1'}, {'id': 'some_volume_2'}, ] } } output = self.view_builder.show(self.request, self.instance) self.assertThat(output, matchers.DictMatches(expected_server)) def test_build_server_detail_with_metadata(self): metadata = [] metadata.append(models.InstanceMetadata(key="Open", value="Stack")) metadata = nova_utils.metadata_to_dict(metadata) self.instance['metadata'] = metadata image_bookmark = "http://localhost/%s/images/5" % self.project_id flavor_bookmark = "http://localhost/%s/flavors/1" % self.project_id expected_server = { "server": { "id": self.uuid, "user_id": "fake_user", "tenant_id": "fake_project", "updated": "2010-11-11T11:00:00Z", "created": "2010-10-10T12:00:00Z", "progress": 0, "name": "test_server", "status": "ACTIVE", "hostId": '', "image": { "id": "5", "links": [ { "rel": "bookmark", "href": image_bookmark, }, ], }, "flavor": { "id": "1", "links": [ { "rel": "bookmark", "href": flavor_bookmark, }, ], }, "addresses": { 'test1': [ {'version': 4, 'addr': '192.168.1.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 6, 'addr': '2001:db8:0:1::1', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 4, 'addr': '192.168.2.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'bb:bb:bb:bb:bb:bb'} ], 'test2': [ {'version': 4, 'addr': '192.168.3.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'cc:cc:cc:cc:cc:cc'}, ] }, "metadata": {"Open": "Stack"}, "links": [ { "rel": "self", "href": self.self_link, }, { "rel": "bookmark", "href": self.bookmark_link, }, ], "OS-DCF:diskConfig": "MANUAL", 
"accessIPv4": '', "accessIPv6": '', "OS-EXT-AZ:availability_zone": "nova", "config_drive": None, "OS-EXT-SRV-ATTR:host": None, "OS-EXT-SRV-ATTR:hypervisor_hostname": None, "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "key_name": '', "OS-SRV-USG:launched_at": None, "OS-SRV-USG:terminated_at": None, "security_groups": [{'name': 'default'}], "OS-EXT-STS:task_state": None, "OS-EXT-STS:vm_state": vm_states.ACTIVE, "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": [ {'id': 'some_volume_1'}, {'id': 'some_volume_2'}, ] } } output = self.view_builder.show(self.request, self.instance) self.assertThat(output, matchers.DictMatches(expected_server)) class ServersViewBuilderTestV269(ServersViewBuilderTest): """Server ViewBuilder test for microversion 2.69 The intent here is simply to verify that when showing server details after microversion 2.69 the response could have missing keys for those servers from the down cells. """ wsgi_api_version = '2.69' def setUp(self): super(ServersViewBuilderTestV269, self).setUp() self.view_builder = views.servers.ViewBuilder() self.ctxt = context.RequestContext('fake', self.project_id) def fake_is_supported(req, min_version="2.1", max_version="2.69"): return (fakes.api_version.APIVersionRequest(max_version) >= req.api_version_request >= fakes.api_version.APIVersionRequest(min_version)) self.stub_out('nova.api.openstack.api_version_request.is_supported', fake_is_supported) def req(self, url, use_admin_context=False): return fakes.HTTPRequest.blank(url, use_admin_context=use_admin_context, version=self.wsgi_api_version) def test_get_server_list_detail_with_down_cells(self): # Fake out 1 partially constructued instance and one full instance. self.instances = [ self.instance, objects.Instance( context=self.ctxt, uuid=uuids.fake1, project_id=fakes.FAKE_PROJECT_ID, created_at=datetime.datetime(1955, 11, 5) ) ] req = self.req('/%s/servers/detail' % self.project_id) output = self.view_builder.detail(req, self.instances, True) self.assertEqual(2, len(output['servers'])) image_bookmark = "http://localhost/%s/images/5" % self.project_id expected = { "servers": [{ "id": self.uuid, "user_id": "fake_user", "tenant_id": "fake_project", "updated": "2010-11-11T11:00:00Z", "created": "2010-10-10T12:00:00Z", "progress": 0, "name": "test_server", "status": "ACTIVE", "hostId": '', "image": { "id": "5", "links": [ { "rel": "bookmark", "href": image_bookmark, }, ], }, "flavor": { 'disk': 1, 'ephemeral': 1, 'vcpus': 1, 'ram': 256, 'original_name': 'flavor1', 'extra_specs': {}, 'swap': 0 }, "addresses": { 'test1': [ {'version': 4, 'addr': '192.168.1.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 6, 'addr': '2001:db8:0:1::1', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 4, 'addr': '192.168.2.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'bb:bb:bb:bb:bb:bb'} ], 'test2': [ {'version': 4, 'addr': '192.168.3.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'cc:cc:cc:cc:cc:cc'}, ] }, "metadata": {}, "tags": [], "links": [ { "rel": "self", "href": self.self_link, }, { "rel": "bookmark", "href": self.bookmark_link, }, ], "OS-DCF:diskConfig": "MANUAL", "OS-EXT-SRV-ATTR:root_device_name": None, "accessIPv4": '', "accessIPv6": '', "host_status": '', "OS-EXT-SRV-ATTR:user_data": None, "trusted_image_certificates": None, "OS-EXT-AZ:availability_zone": "nova", "OS-EXT-SRV-ATTR:kernel_id": '', "OS-EXT-SRV-ATTR:reservation_id": '', "config_drive": None, 
"OS-EXT-SRV-ATTR:host": None, "OS-EXT-SRV-ATTR:hypervisor_hostname": None, "OS-EXT-SRV-ATTR:hostname": 'test_server', "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "key_name": '', "locked": False, "description": None, "OS-SRV-USG:launched_at": None, "OS-SRV-USG:terminated_at": None, "security_groups": [{'name': 'default'}], "OS-EXT-STS:task_state": None, "OS-EXT-STS:vm_state": vm_states.ACTIVE, "OS-EXT-STS:power_state": 1, "OS-EXT-SRV-ATTR:launch_index": 0, "OS-EXT-SRV-ATTR:ramdisk_id": '', "os-extended-volumes:volumes_attached": [ {'id': 'some_volume_1', 'delete_on_termination': True}, {'id': 'some_volume_2', 'delete_on_termination': False}, ] }, { 'created': '1955-11-05T00:00:00Z', 'id': uuids.fake1, 'tenant_id': fakes.FAKE_PROJECT_ID, "status": "UNKNOWN", "links": [ { "rel": "self", "href": "http://localhost/v2/%s/servers/%s" % (self.project_id, uuids.fake1), }, { "rel": "bookmark", "href": "http://localhost/%s/servers/%s" % (self.project_id, uuids.fake1), }, ], }] } self.assertThat(output, matchers.DictMatches(expected)) def test_get_server_list_with_down_cells(self): # Fake out 1 partially constructued instance and one full instance. self.instances = [ self.instance, objects.Instance( context=self.ctxt, uuid=uuids.fake1, project_id=fakes.FAKE_PROJECT_ID, created_at=datetime.datetime(1955, 11, 5) ) ] req = self.req('/%s/servers' % self.project_id) output = self.view_builder.index(req, self.instances, True) self.assertEqual(2, len(output['servers'])) expected = { "servers": [{ "id": self.uuid, "name": "test_server", "links": [ { "rel": "self", "href": self.self_link, }, { "rel": "bookmark", "href": self.bookmark_link, }, ] }, { 'id': uuids.fake1, "status": "UNKNOWN", "links": [ { "rel": "self", "href": "http://localhost/v2/%s/servers/%s" % (self.project_id, uuids.fake1), }, { "rel": "bookmark", "href": "http://localhost/%s/servers/%s" % (self.project_id, uuids.fake1), }, ], }] } self.assertThat(output, matchers.DictMatches(expected)) def test_get_server_with_down_cells(self): # Fake out 1 partially constructued instance. self.instance = objects.Instance( context=self.ctxt, uuid=self.uuid, project_id=self.instance.project_id, created_at=datetime.datetime(1955, 11, 5), user_id=self.instance.user_id, image_ref=self.instance.image_ref, power_state=0, flavor=self.instance.flavor, availability_zone=self.instance.availability_zone ) req = self.req('/%s/servers/%s' % (self.project_id, FAKE_UUID)) output = self.view_builder.show(req, self.instance, cell_down_support=True) # ten fields from request_spec and instance_mapping self.assertEqual(10, len(output['server'])) image_bookmark = "http://localhost/%s/images/5" % self.project_id expected = { "server": { "id": self.uuid, "user_id": "fake_user", "tenant_id": "fake_project", "created": '1955-11-05T00:00:00Z', "status": "UNKNOWN", "image": { "id": "5", "links": [ { "rel": "bookmark", "href": image_bookmark, }, ], }, "flavor": { 'disk': 1, 'ephemeral': 1, 'vcpus': 1, 'ram': 256, 'original_name': 'flavor1', 'extra_specs': {}, 'swap': 0 }, "OS-EXT-AZ:availability_zone": "nova", "OS-EXT-STS:power_state": 0, "links": [ { "rel": "self", "href": "http://localhost/v2/%s/servers/%s" % (self.project_id, self.uuid), }, { "rel": "bookmark", "href": "http://localhost/%s/servers/%s" % (self.project_id, self.uuid), }, ] } } self.assertThat(output, matchers.DictMatches(expected)) def test_get_server_without_image_avz_user_id_set_from_down_cells(self): # Fake out 1 partially constructued instance. 
self.instance = objects.Instance( context=self.ctxt, uuid=self.uuid, project_id=self.instance.project_id, created_at=datetime.datetime(1955, 11, 5), user_id=None, image_ref=None, power_state=0, flavor=self.instance.flavor, availability_zone=None ) req = self.req('/%s/servers/%s' % (self.project_id, FAKE_UUID)) output = self.view_builder.show(req, self.instance, cell_down_support=True) # nine fields from request_spec and instance_mapping self.assertEqual(10, len(output['server'])) expected = { "server": { "id": self.uuid, "user_id": "UNKNOWN", "tenant_id": "fake_project", "created": '1955-11-05T00:00:00Z', "status": "UNKNOWN", "image": "", "flavor": { 'disk': 1, 'ephemeral': 1, 'vcpus': 1, 'ram': 256, 'original_name': 'flavor1', 'extra_specs': {}, 'swap': 0 }, "OS-EXT-AZ:availability_zone": "UNKNOWN", "OS-EXT-STS:power_state": 0, "links": [ { "rel": "self", "href": "http://localhost/v2/%s/servers/%s" % (self.project_id, self.uuid), }, { "rel": "bookmark", "href": "http://localhost/%s/servers/%s" % (self.project_id, self.uuid), }, ] } } self.assertThat(output, matchers.DictMatches(expected)) class ServersAllExtensionsTestCase(test.TestCase): """Servers tests using default API router with all extensions enabled. The intent here is to catch cases where extensions end up throwing an exception because of a malformed request before the core API gets a chance to validate the request and return a 422 response. For example, AccessIPsController extends servers.Controller:: | @wsgi.extends | def create(self, req, resp_obj, body): | context = req.environ['nova.context'] | if authorize(context) and 'server' in resp_obj.obj: | resp_obj.attach(xml=AccessIPTemplate()) | server = resp_obj.obj['server'] | self._extend_server(req, server) we want to ensure that the extension isn't barfing on an invalid body. """ def setUp(self): super(ServersAllExtensionsTestCase, self).setUp() self.app = compute.APIRouterV21() @mock.patch.object(compute_api.API, 'create', side_effect=test.TestingException( "Should not reach the compute API.")) def test_create_missing_server(self, mock_create): # Test create with malformed body. req = fakes.HTTPRequestV21.blank( '/%s/servers' % fakes.FAKE_PROJECT_ID) req.method = 'POST' req.content_type = 'application/json' body = {'foo': {'a': 'b'}} req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) self.assertEqual(400, res.status_int) def test_update_missing_server(self): # Test update with malformed body. 
req = fakes.HTTPRequestV21.blank( '/%s/servers/1' % fakes.FAKE_PROJECT_ID) req.method = 'PUT' req.content_type = 'application/json' body = {'foo': {'a': 'b'}} req.body = jsonutils.dump_as_bytes(body) with mock.patch('nova.objects.Instance.save') as mock_save: res = req.get_response(self.app) self.assertFalse(mock_save.called) self.assertEqual(400, res.status_int) class ServersInvalidRequestTestCase(test.TestCase): """Tests of places we throw 400 Bad Request from.""" def setUp(self): super(ServersInvalidRequestTestCase, self).setUp() self.controller = servers.ServersController() def _invalid_server_create(self, body): req = fakes.HTTPRequestV21.blank( '/%s/servers' % fakes.FAKE_PROJECT_ID) req.method = 'POST' self.assertRaises(exception.ValidationError, self.controller.create, req, body=body) def test_create_server_no_body(self): self._invalid_server_create(body=None) def test_create_server_missing_server(self): body = {'foo': {'a': 'b'}} self._invalid_server_create(body=body) def test_create_server_malformed_entity(self): body = {'server': 'string'} self._invalid_server_create(body=body) def _unprocessable_server_update(self, body): req = fakes.HTTPRequestV21.blank( '/%s/servers/%s' % (fakes.FAKE_PROJECT_ID, FAKE_UUID)) req.method = 'PUT' self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, FAKE_UUID, body=body) def test_update_server_no_body(self): self._invalid_server_create(body=None) def test_update_server_missing_server(self): body = {'foo': {'a': 'b'}} self._invalid_server_create(body=body) def test_create_update_malformed_entity(self): body = {'server': 'string'} self._invalid_server_create(body=body) class ServersActionsJsonTestV239(test.NoDBTestCase): def setUp(self): super(ServersActionsJsonTestV239, self).setUp() self.controller = servers.ServersController() self.req = fakes.HTTPRequest.blank('', version='2.39') @mock.patch.object(common, 'check_img_metadata_properties_quota') @mock.patch.object(common, 'get_instance') def test_server_create_image_no_quota_checks(self, mock_get_instance, mock_check_quotas): # 'mock_get_instance' helps to skip the whole logic of the action, # but to make the test mock_get_instance.side_effect = webob.exc.HTTPNotFound body = { 'createImage': { 'name': 'Snapshot 1', }, } self.assertRaises(webob.exc.HTTPNotFound, self.controller._action_create_image, self.req, FAKE_UUID, body=body) # starting from version 2.39 no quota checks on Nova side are performed # for 'createImage' action after removing 'image-metadata' proxy API mock_check_quotas.assert_not_called() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_services.py0000664000175000017500000016156200000000000025335 0ustar00zuulzuul00000000000000# Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
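# NOTE(editor): overview of the fixtures in this module (a summary of what
# is defined below, not new behaviour): fake_services_list contains two
# nova-scheduler and two nova-compute rows, plus nova-osapi_compute /
# nova-metadata rows that the API is expected to filter out.  In the
# expected responses, scheduler services appear in the 'internal' zone and
# compute services in the 'nova' zone, and a service is reported 'up' or
# 'down' depending on how recent its last_seen_up is relative to
# fake_utcnow() (2012-10-29 13:42:11).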
import copy import datetime from keystoneauth1 import exceptions as ks_exc import mock from oslo_utils import fixture as utils_fixture from oslo_utils.fixture import uuidsentinel import six import webob.exc from nova.api.openstack.compute import services as services_v21 from nova.api.openstack import wsgi as os_wsgi from nova import availability_zones from nova.compute import api as compute from nova import context from nova import exception from nova import objects from nova.servicegroup.drivers import db as db_driver from nova import test from nova.tests import fixtures from nova.tests.unit.api.openstack import fakes from nova.tests.unit.objects import test_service # This is tied into the os-services API samples functional tests. FAKE_UUID_COMPUTE_HOST1 = 'e81d66a4-ddd3-4aba-8a84-171d1cb4d339' fake_services_list = [ dict(test_service.fake_service, binary='nova-scheduler', host='host1', id=1, uuid=uuidsentinel.svc1, disabled=True, topic='scheduler', updated_at=datetime.datetime(2012, 10, 29, 13, 42, 2), created_at=datetime.datetime(2012, 9, 18, 2, 46, 27), last_seen_up=datetime.datetime(2012, 10, 29, 13, 42, 2), forced_down=False, disabled_reason='test1'), dict(test_service.fake_service, binary='nova-compute', host='host1', id=2, uuid=FAKE_UUID_COMPUTE_HOST1, disabled=True, topic='compute', updated_at=datetime.datetime(2012, 10, 29, 13, 42, 5), created_at=datetime.datetime(2012, 9, 18, 2, 46, 27), last_seen_up=datetime.datetime(2012, 10, 29, 13, 42, 5), forced_down=False, disabled_reason='test2'), dict(test_service.fake_service, binary='nova-scheduler', host='host2', id=3, uuid=uuidsentinel.svc3, disabled=False, topic='scheduler', updated_at=datetime.datetime(2012, 9, 19, 6, 55, 34), created_at=datetime.datetime(2012, 9, 18, 2, 46, 28), last_seen_up=datetime.datetime(2012, 9, 19, 6, 55, 34), forced_down=False, disabled_reason=None), dict(test_service.fake_service, binary='nova-compute', host='host2', id=4, uuid=uuidsentinel.svc4, disabled=True, topic='compute', updated_at=datetime.datetime(2012, 9, 18, 8, 3, 38), created_at=datetime.datetime(2012, 9, 18, 2, 46, 28), last_seen_up=datetime.datetime(2012, 9, 18, 8, 3, 38), forced_down=False, disabled_reason='test4'), # NOTE(rpodolyaka): API services are special case and must be filtered out dict(test_service.fake_service, binary='nova-osapi_compute', host='host2', id=5, uuid=uuidsentinel.svc5, disabled=False, topic=None, updated_at=None, created_at=datetime.datetime(2012, 9, 18, 2, 46, 28), last_seen_up=None, forced_down=False, disabled_reason=None), dict(test_service.fake_service, binary='nova-metadata', host='host2', id=6, uuid=uuidsentinel.svc6, disabled=False, topic=None, updated_at=None, created_at=datetime.datetime(2012, 9, 18, 2, 46, 28), last_seen_up=None, forced_down=False, disabled_reason=None), ] def fake_service_get_all(services): def service_get_all(context, filters=None, set_zones=False, all_cells=False, cell_down_support=False): if set_zones or 'availability_zone' in filters: return availability_zones.set_availability_zones(context, services) return services return service_get_all def fake_db_api_service_get_all(context, disabled=None): return fake_services_list def fake_db_service_get_by_host_binary(services): def service_get_by_host_binary(context, host, binary): for service in services: if service['host'] == host and service['binary'] == binary: return service raise exception.HostBinaryNotFound(host=host, binary=binary) return service_get_by_host_binary def fake_service_get_by_host_binary(context, host, binary): fake = 
fake_db_service_get_by_host_binary(fake_services_list) return fake(context, host, binary) def _service_get_by_id(services, value): for service in services: if service['id'] == value: return service return None def fake_db_service_update(services): def service_update(context, service_id, values): service = _service_get_by_id(services, service_id) if service is None: raise exception.ServiceNotFound(service_id=service_id) service = copy.deepcopy(service) service.update(values) return service return service_update def fake_service_update(context, service_id, values): fake = fake_db_service_update(fake_services_list) return fake(context, service_id, values) def fake_utcnow(): return datetime.datetime(2012, 10, 29, 13, 42, 11) class ServicesTestV21(test.TestCase): service_is_up_exc = webob.exc.HTTPInternalServerError bad_request = exception.ValidationError wsgi_api_version = os_wsgi.DEFAULT_API_VERSION base_path = '/%s/services' % fakes.FAKE_PROJECT_ID base_path_with_query = base_path + '?%s' def _set_up_controller(self): self.controller = services_v21.ServiceController() def setUp(self): super(ServicesTestV21, self).setUp() self.ctxt = context.get_admin_context() self.host_api = compute.HostAPI() self._set_up_controller() self.controller.host_api.service_get_all = ( mock.Mock(side_effect=fake_service_get_all(fake_services_list))) self.useFixture(utils_fixture.TimeFixture(fake_utcnow())) self.stub_out('nova.db.api.service_get_by_host_and_binary', fake_db_service_get_by_host_binary(fake_services_list)) self.stub_out('nova.db.api.service_update', fake_db_service_update(fake_services_list)) # NOTE(gibi): enable / disable a compute service tries to call # the compute service via RPC to update placement. However in these # tests the compute services are faked. So stub out the RPC call to # avoid waiting for the RPC timeout. 
self.stub_out("nova.compute.rpcapi.ComputeAPI.set_host_enabled", lambda *args, **kwargs: None) self.req = fakes.HTTPRequest.blank('') self.useFixture(fixtures.SingleCellSimple()) def _process_output(self, services, has_disabled=False, has_id=False): return services def test_services_list(self): req = fakes.HTTPRequest.blank(self.base_path, use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'host': 'host1', 'zone': 'internal', 'status': 'disabled', 'id': 1, 'state': 'up', 'disabled_reason': 'test1', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2)}, {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'id': 2, 'status': 'disabled', 'disabled_reason': 'test2', 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5)}, {'binary': 'nova-scheduler', 'host': 'host2', 'zone': 'internal', 'id': 3, 'status': 'enabled', 'disabled_reason': None, 'state': 'down', 'updated_at': datetime.datetime(2012, 9, 19, 6, 55, 34)}, {'binary': 'nova-compute', 'host': 'host2', 'zone': 'nova', 'id': 4, 'status': 'disabled', 'disabled_reason': 'test4', 'state': 'down', 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38)}]} self._process_output(response) self.assertEqual(res_dict, response) def test_services_list_with_host(self): req = fakes.HTTPRequest.blank(self.base_path_with_query % 'host=host1', use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'host': 'host1', 'disabled_reason': 'test1', 'id': 1, 'zone': 'internal', 'status': 'disabled', 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2)}, {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'disabled_reason': 'test2', 'id': 2, 'status': 'disabled', 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5)}]} self._process_output(response) self.assertEqual(res_dict, response) def test_services_list_with_service(self): req = fakes.HTTPRequest.blank( self.base_path_with_query % 'binary=nova-compute', use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-compute', 'host': 'host1', 'disabled_reason': 'test2', 'id': 2, 'zone': 'nova', 'status': 'disabled', 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5)}, {'binary': 'nova-compute', 'host': 'host2', 'zone': 'nova', 'disabled_reason': 'test4', 'id': 4, 'status': 'disabled', 'state': 'down', 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38)}]} self._process_output(response) self.assertEqual(res_dict, response) def _test_services_list_with_param(self, url): req = fakes.HTTPRequest.blank(url, use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'disabled_reason': 'test2', 'id': 2, 'status': 'disabled', 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5)}]} self._process_output(response) self.assertEqual(res_dict, response) def test_services_list_with_host_service(self): url = self.base_path_with_query % 'host=host1&binary=nova-compute' self._test_services_list_with_param(url) def test_services_list_with_additional_filter(self): url = (self.base_path_with_query % 'host=host1&binary=nova-compute&unknown=abc') self._test_services_list_with_param(url) def test_services_list_with_unknown_filter(self): url = self.base_path_with_query % 'unknown=abc' req = fakes.HTTPRequest.blank(url, use_admin_context=True) res_dict = 
self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'disabled_reason': 'test1', 'host': 'host1', 'id': 1, 'state': 'up', 'status': 'disabled', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'zone': 'internal'}, {'binary': 'nova-compute', 'disabled_reason': 'test2', 'host': 'host1', 'id': 2, 'state': 'up', 'status': 'disabled', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5), 'zone': 'nova'}, {'binary': 'nova-scheduler', 'disabled_reason': None, 'host': 'host2', 'id': 3, 'state': 'down', 'status': 'enabled', 'updated_at': datetime.datetime(2012, 9, 19, 6, 55, 34), 'zone': 'internal'}, {'binary': 'nova-compute', 'disabled_reason': 'test4', 'host': 'host2', 'id': 4, 'state': 'down', 'status': 'disabled', 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38), 'zone': 'nova'}]} self._process_output(response) self.assertEqual(res_dict, response) def test_services_list_with_multiple_host_filter(self): url = self.base_path_with_query % 'host=host1&host=host2' req = fakes.HTTPRequest.blank(url, use_admin_context=True) res_dict = self.controller.index(req) # 2nd query param 'host2' is used here response = {'services': [ {'binary': 'nova-scheduler', 'disabled_reason': None, 'host': 'host2', 'id': 3, 'state': 'down', 'status': 'enabled', 'updated_at': datetime.datetime(2012, 9, 19, 6, 55, 34), 'zone': 'internal'}, {'binary': 'nova-compute', 'disabled_reason': 'test4', 'host': 'host2', 'id': 4, 'state': 'down', 'status': 'disabled', 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38), 'zone': 'nova'}]} self._process_output(response) self.assertEqual(response, res_dict) def test_services_list_with_multiple_service_filter(self): url = (self.base_path_with_query % 'binary=nova-compute&binary=nova-scheduler') req = fakes.HTTPRequest.blank(url, use_admin_context=True) res_dict = self.controller.index(req) # 2nd query param 'nova-scheduler' is used here response = {'services': [ {'binary': 'nova-scheduler', 'disabled_reason': 'test1', 'host': 'host1', 'id': 1, 'state': 'up', 'status': 'disabled', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'zone': 'internal'}, {'binary': 'nova-scheduler', 'disabled_reason': None, 'host': 'host2', 'id': 3, 'state': 'down', 'status': 'enabled', 'updated_at': datetime.datetime(2012, 9, 19, 6, 55, 34), 'zone': 'internal'}]} self.assertEqual(response, res_dict) def test_services_list_host_query_allow_int_as_string(self): req = fakes.HTTPRequest.blank('', use_admin_context=True, query_string='binary=1') res_dict = self.controller.index(req) self.assertEqual({'services': []}, res_dict) def test_services_list_service_query_allow_int_as_string(self): req = fakes.HTTPRequest.blank('', use_admin_context=True, query_string='host=1') res_dict = self.controller.index(req) self.assertEqual({'services': []}, res_dict) def test_services_list_with_host_service_dummy(self): # This is for backward compatible, need remove it when # restriction to param is enabled. 
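        # NOTE(editor): before the 2.75 microversion's strict query-parameter
        # validation (exercised by ServicesTestV275 further down), unknown
        # parameters such as 'dummy' are simply ignored, so this request is
        # expected to behave exactly like a plain host+binary filter.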
url = (self.base_path_with_query % 'host=host1&binary=nova-compute&dummy=dummy') self._test_services_list_with_param(url) def test_services_detail(self): req = fakes.HTTPRequest.blank(self.base_path, use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'host': 'host1', 'zone': 'internal', 'status': 'disabled', 'id': 1, 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'disabled_reason': 'test1'}, {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'status': 'disabled', 'state': 'up', 'id': 2, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5), 'disabled_reason': 'test2'}, {'binary': 'nova-scheduler', 'host': 'host2', 'zone': 'internal', 'status': 'enabled', 'id': 3, 'state': 'down', 'updated_at': datetime.datetime(2012, 9, 19, 6, 55, 34), 'disabled_reason': None}, {'binary': 'nova-compute', 'host': 'host2', 'zone': 'nova', 'id': 4, 'status': 'disabled', 'state': 'down', 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38), 'disabled_reason': 'test4'}]} self._process_output(response, has_disabled=True) self.assertEqual(res_dict, response) def test_service_detail_with_host(self): req = fakes.HTTPRequest.blank(self.base_path_with_query % 'host=host1', use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'host': 'host1', 'zone': 'internal', 'id': 1, 'status': 'disabled', 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'disabled_reason': 'test1'}, {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'id': 2, 'status': 'disabled', 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5), 'disabled_reason': 'test2'}]} self._process_output(response, has_disabled=True) self.assertEqual(res_dict, response) def test_service_detail_with_service(self): req = fakes.HTTPRequest.blank( self.base_path_with_query % 'binary=nova-compute', use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'id': 2, 'status': 'disabled', 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5), 'disabled_reason': 'test2'}, {'binary': 'nova-compute', 'host': 'host2', 'id': 4, 'zone': 'nova', 'status': 'disabled', 'state': 'down', 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38), 'disabled_reason': 'test4'}]} self._process_output(response, has_disabled=True) self.assertEqual(res_dict, response) def test_service_detail_with_host_service(self): url = self.base_path_with_query % 'host=host1&binary=nova-compute' req = fakes.HTTPRequest.blank(url, use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'status': 'disabled', 'id': 2, 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5), 'disabled_reason': 'test2'}]} self._process_output(response, has_disabled=True) self.assertEqual(res_dict, response) def test_services_detail_with_delete_extension(self): req = fakes.HTTPRequest.blank(self.base_path, use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'host': 'host1', 'id': 1, 'zone': 'internal', 'disabled_reason': 'test1', 'status': 'disabled', 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2)}, {'binary': 'nova-compute', 'host': 'host1', 'id': 2, 'zone': 'nova', 'disabled_reason': 'test2', 'status': 'disabled', 
'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5)}, {'binary': 'nova-scheduler', 'host': 'host2', 'disabled_reason': None, 'id': 3, 'zone': 'internal', 'status': 'enabled', 'state': 'down', 'updated_at': datetime.datetime(2012, 9, 19, 6, 55, 34)}, {'binary': 'nova-compute', 'host': 'host2', 'id': 4, 'disabled_reason': 'test4', 'zone': 'nova', 'status': 'disabled', 'state': 'down', 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38)}]} self._process_output(response, has_id=True) self.assertEqual(res_dict, response) def test_services_enable(self): def _service_update(context, service_id, values): self.assertIsNone(values['disabled_reason']) return dict(test_service.fake_service, id=service_id, **values) self.stub_out('nova.db.api.service_update', _service_update) body = {'host': 'host1', 'binary': 'nova-compute'} res_dict = self.controller.update(self.req, "enable", body=body) self.assertEqual(res_dict['service']['status'], 'enabled') self.assertNotIn('disabled_reason', res_dict['service']) def test_services_enable_with_invalid_host(self): body = {'host': 'invalid', 'binary': 'nova-compute'} self.assertRaises(webob.exc.HTTPNotFound, self.controller.update, self.req, "enable", body=body) def test_services_enable_with_unmapped_host(self): body = {'host': 'invalid', 'binary': 'nova-compute'} with mock.patch.object(self.controller.host_api, 'service_update') as m: m.side_effect = exception.HostMappingNotFound(name='something') self.assertRaises(webob.exc.HTTPNotFound, self.controller.update, self.req, "enable", body=body) def test_services_enable_with_invalid_binary(self): body = {'host': 'host1', 'binary': 'invalid'} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.req, "enable", body=body) def test_services_disable(self): body = {'host': 'host1', 'binary': 'nova-compute'} res_dict = self.controller.update(self.req, "disable", body=body) self.assertEqual(res_dict['service']['status'], 'disabled') self.assertNotIn('disabled_reason', res_dict['service']) def test_services_disable_with_invalid_host(self): body = {'host': 'invalid', 'binary': 'nova-compute'} self.assertRaises(webob.exc.HTTPNotFound, self.controller.update, self.req, "disable", body=body) def test_services_disable_with_invalid_binary(self): body = {'host': 'host1', 'binary': 'invalid'} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.req, "disable", body=body) def test_services_disable_log_reason(self): body = {'host': 'host1', 'binary': 'nova-compute', 'disabled_reason': 'test-reason', } res_dict = self.controller.update(self.req, "disable-log-reason", body=body) self.assertEqual(res_dict['service']['status'], 'disabled') self.assertEqual(res_dict['service']['disabled_reason'], 'test-reason') def test_mandatory_reason_field(self): body = {'host': 'host1', 'binary': 'nova-compute', } self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.req, "disable-log-reason", body=body) def test_invalid_reason_field(self): reason = 'a' * 256 body = {'host': 'host1', 'binary': 'nova-compute', 'disabled_reason': reason, } self.assertRaises(self.bad_request, self.controller.update, self.req, "disable-log-reason", body=body) @mock.patch('nova.objects.ComputeNodeList.get_all_by_host', return_value=objects.ComputeNodeList(objects=[])) def test_services_delete(self, mock_get_compute_nodes): compute = objects.Service(self.ctxt, **{'host': 'fake-compute-host', 'binary': 'nova-compute', 'topic': 'compute', 'report_count': 0}) compute.create() with 
mock.patch('nova.objects.Service.destroy') as service_delete: self.controller.delete(self.req, compute.id) service_delete.assert_called_once_with() self.assertEqual(self.controller.delete.wsgi_code, 204) mock_get_compute_nodes.assert_called_once_with( self.req.environ['nova.context'], compute.host) def test_services_delete_not_found(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, 1234) def test_services_delete_invalid_id(self): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, self.req, 'abc') def test_services_delete_duplicate_service(self): with mock.patch.object(self.controller, 'host_api') as host_api: host_api.service_get_by_id.side_effect = ( exception.ServiceNotUnique()) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, self.req, 1234) @mock.patch('nova.objects.InstanceList.get_count_by_hosts', return_value=0) @mock.patch('nova.objects.HostMapping.get_by_host', side_effect=exception.HostMappingNotFound(name='host1')) @mock.patch('nova.objects.Service.destroy') def test_compute_service_delete_host_mapping_not_found( self, service_delete, get_hm, get_count_by_hosts): """Tests that we are still able to successfully delete a nova-compute service even if the HostMapping is not found. """ @mock.patch('nova.objects.ComputeNodeList.get_all_by_host', return_value=objects.ComputeNodeList(objects=[ objects.ComputeNode(uuid=uuidsentinel.uuid1, host='host1', hypervisor_hostname='node1'), objects.ComputeNode(uuid=uuidsentinel.uuid2, host='host1', hypervisor_hostname='node2')])) @mock.patch.object(self.controller.host_api, 'service_get_by_id', return_value=objects.Service( host='host1', binary='nova-compute')) @mock.patch.object(self.controller.aggregate_api, 'get_aggregates_by_host', return_value=objects.AggregateList()) @mock.patch.object(self.controller.placementclient, 'delete_resource_provider', # placement connect error doesn't stop the loop side_effect=[ks_exc.EndpointNotFound, None]) @mock.patch.object(services_v21, 'LOG') def _test(mock_log, delete_resource_provider, get_aggregates_by_host, service_get_by_id, cn_get_all_by_host): self.controller.delete(self.req, 2) ctxt = self.req.environ['nova.context'] service_get_by_id.assert_called_once_with(ctxt, 2) get_count_by_hosts.assert_called_once_with(ctxt, ['host1']) get_aggregates_by_host.assert_called_once_with(ctxt, 'host1') self.assertEqual(2, delete_resource_provider.call_count) nodes = cn_get_all_by_host.return_value delete_resource_provider.assert_has_calls([ mock.call(ctxt, node, cascade=True) for node in nodes ], any_order=True) get_hm.assert_called_once_with(ctxt, 'host1') service_delete.assert_called_once_with() mock_log.error.assert_called_once_with( "Failed to delete compute node resource provider for compute " "node %s: %s", uuidsentinel.uuid1, mock.ANY) _test() # This test is just to verify that the servicegroup API gets used when # calling the API @mock.patch.object(db_driver.DbDriver, 'is_up', side_effect=KeyError) def test_services_with_exception(self, mock_is_up): url = self.base_path_with_query % 'host=host1&binary=nova-compute' req = fakes.HTTPRequest.blank(url, use_admin_context=True) self.assertRaises(self.service_is_up_exc, self.controller.index, req) class ServicesTestV211(ServicesTestV21): wsgi_api_version = '2.11' def test_services_list(self): req = fakes.HTTPRequest.blank(self.base_path, use_admin_context=True, version=self.wsgi_api_version) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'host': 
'host1', 'zone': 'internal', 'status': 'disabled', 'id': 1, 'state': 'up', 'forced_down': False, 'disabled_reason': 'test1', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2)}, {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'id': 2, 'status': 'disabled', 'disabled_reason': 'test2', 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5)}, {'binary': 'nova-scheduler', 'host': 'host2', 'zone': 'internal', 'id': 3, 'status': 'enabled', 'disabled_reason': None, 'state': 'down', 'forced_down': False, 'updated_at': datetime.datetime(2012, 9, 19, 6, 55, 34)}, {'binary': 'nova-compute', 'host': 'host2', 'zone': 'nova', 'id': 4, 'status': 'disabled', 'disabled_reason': 'test4', 'state': 'down', 'forced_down': False, 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38)}]} self._process_output(response) self.assertEqual(res_dict, response) def test_services_list_with_host(self): req = fakes.HTTPRequest.blank(self.base_path_with_query % 'host=host1', use_admin_context=True, version=self.wsgi_api_version) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'host': 'host1', 'disabled_reason': 'test1', 'id': 1, 'zone': 'internal', 'status': 'disabled', 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2)}, {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'disabled_reason': 'test2', 'id': 2, 'status': 'disabled', 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5)}]} self._process_output(response) self.assertEqual(res_dict, response) def test_services_list_with_service(self): req = fakes.HTTPRequest.blank( self.base_path_with_query % 'binary=nova-compute', version=self.wsgi_api_version, use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-compute', 'host': 'host1', 'disabled_reason': 'test2', 'id': 2, 'zone': 'nova', 'status': 'disabled', 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5)}, {'binary': 'nova-compute', 'host': 'host2', 'zone': 'nova', 'disabled_reason': 'test4', 'id': 4, 'status': 'disabled', 'state': 'down', 'forced_down': False, 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38)}]} self._process_output(response) self.assertEqual(res_dict, response) def test_services_list_with_host_service(self): url = self.base_path_with_query % 'host=host1&binary=nova-compute' req = fakes.HTTPRequest.blank(url, use_admin_context=True, version=self.wsgi_api_version) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'disabled_reason': 'test2', 'id': 2, 'status': 'disabled', 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5)}]} self._process_output(response) self.assertEqual(res_dict, response) def test_services_detail(self): req = fakes.HTTPRequest.blank(self.base_path, use_admin_context=True, version=self.wsgi_api_version) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'host': 'host1', 'zone': 'internal', 'status': 'disabled', 'id': 1, 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'disabled_reason': 'test1'}, {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'status': 'disabled', 'state': 'up', 'id': 2, 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5), 
'disabled_reason': 'test2'}, {'binary': 'nova-scheduler', 'host': 'host2', 'zone': 'internal', 'status': 'enabled', 'id': 3, 'state': 'down', 'forced_down': False, 'updated_at': datetime.datetime(2012, 9, 19, 6, 55, 34), 'disabled_reason': None}, {'binary': 'nova-compute', 'host': 'host2', 'zone': 'nova', 'id': 4, 'status': 'disabled', 'state': 'down', 'forced_down': False, 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38), 'disabled_reason': 'test4'}]} self._process_output(response, has_disabled=True) self.assertEqual(res_dict, response) def test_service_detail_with_host(self): req = fakes.HTTPRequest.blank(self.base_path_with_query % 'host=host1', use_admin_context=True, version=self.wsgi_api_version) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'host': 'host1', 'zone': 'internal', 'id': 1, 'status': 'disabled', 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'disabled_reason': 'test1'}, {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'id': 2, 'status': 'disabled', 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5), 'disabled_reason': 'test2'}]} self._process_output(response, has_disabled=True) self.assertEqual(res_dict, response) def test_service_detail_with_service(self): req = fakes.HTTPRequest.blank( self.base_path_with_query % 'binary=nova-compute', version=self.wsgi_api_version, use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'id': 2, 'status': 'disabled', 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5), 'disabled_reason': 'test2'}, {'binary': 'nova-compute', 'host': 'host2', 'id': 4, 'zone': 'nova', 'status': 'disabled', 'state': 'down', 'forced_down': False, 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38), 'disabled_reason': 'test4'}]} self._process_output(response, has_disabled=True) self.assertEqual(res_dict, response) def test_service_detail_with_host_service(self): url = self.base_path_with_query % 'host=host1&binary=nova-compute' req = fakes.HTTPRequest.blank(url, use_admin_context=True, version=self.wsgi_api_version) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'status': 'disabled', 'id': 2, 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5), 'disabled_reason': 'test2'}]} self._process_output(response, has_disabled=True) self.assertEqual(res_dict, response) def test_services_detail_with_delete_extension(self): req = fakes.HTTPRequest.blank(self.base_path, use_admin_context=True, version=self.wsgi_api_version) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'host': 'host1', 'id': 1, 'zone': 'internal', 'disabled_reason': 'test1', 'status': 'disabled', 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2)}, {'binary': 'nova-compute', 'host': 'host1', 'id': 2, 'zone': 'nova', 'disabled_reason': 'test2', 'status': 'disabled', 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5)}, {'binary': 'nova-scheduler', 'host': 'host2', 'disabled_reason': None, 'id': 3, 'zone': 'internal', 'status': 'enabled', 'state': 'down', 'forced_down': False, 'updated_at': datetime.datetime(2012, 9, 19, 6, 55, 34)}, {'binary': 'nova-compute', 'host': 
'host2', 'id': 4, 'disabled_reason': 'test4', 'zone': 'nova', 'status': 'disabled', 'state': 'down', 'forced_down': False, 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38)}]} self._process_output(response, has_id=True) self.assertEqual(res_dict, response) def test_force_down_service(self): req = fakes.HTTPRequest.blank(self.base_path, use_admin_context=True, version=self.wsgi_api_version) req_body = {"forced_down": True, "host": "host1", "binary": "nova-compute"} res_dict = self.controller.update(req, 'force-down', body=req_body) response = { "service": { "forced_down": True, "host": "host1", "binary": "nova-compute" } } self.assertEqual(response, res_dict) def test_force_down_service_with_string_forced_down(self): req = fakes.HTTPRequest.blank(self.base_path, use_admin_context=True, version=self.wsgi_api_version) req_body = {"forced_down": "True", "host": "host1", "binary": "nova-compute"} res_dict = self.controller.update(req, 'force-down', body=req_body) response = { "service": { "forced_down": True, "host": "host1", "binary": "nova-compute" } } self.assertEqual(response, res_dict) def test_force_down_service_with_invalid_parameter(self): req = fakes.HTTPRequest.blank(self.base_path, use_admin_context=True, version=self.wsgi_api_version) req_body = {"forced_down": "Invalid", "host": "host1", "binary": "nova-compute"} self.assertRaises(exception.ValidationError, self.controller.update, req, 'force-down', body=req_body) def test_update_forced_down_invalid_service(self): req = fakes.HTTPRequest.blank(self.base_path, use_admin_context=True, version=self.wsgi_api_version) req_body = {"forced_down": True, "host": "host1", "binary": "nova-scheduler"} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, 'force-down', body=req_body) class ServicesTestV252(ServicesTestV211): """This is a boundary test to ensure that 2.52 behaves the same as 2.11.""" wsgi_api_version = '2.52' class FakeServiceGroupAPI(object): def service_is_up(self, *args, **kwargs): return True def get_updated_time(self, *args, **kwargs): return mock.sentinel.updated_time class ServicesTestV253(test.TestCase): """Tests for the 2.53 microversion in the os-services API.""" def setUp(self): super(ServicesTestV253, self).setUp() self.controller = services_v21.ServiceController() self.controller.servicegroup_api = FakeServiceGroupAPI() self.req = fakes.HTTPRequest.blank( '', version=services_v21.UUID_FOR_ID_MIN_VERSION) def assert_services_equal(self, s1, s2): for k in ('binary', 'host'): self.assertEqual(s1[k], s2[k]) def test_list_has_uuid_in_id_field(self): """Tests that a GET response includes an id field but the value is the service uuid rather than the id integer primary key. """ service_uuids = [s['uuid'] for s in fake_services_list] with mock.patch.object( self.controller.host_api, 'service_get_all', side_effect=fake_service_get_all(fake_services_list)): resp = self.controller.index(self.req) for service in resp['services']: # Make sure a uuid field wasn't returned. self.assertNotIn('uuid', service) # Make sure the id field is one of our known uuids. self.assertIn(service['id'], service_uuids) # Make sure this service was in our known list of fake services. expected = next(iter(filter( lambda s: s['uuid'] == service['id'], fake_services_list))) self.assert_services_equal(expected, service) def test_delete_takes_uuid_for_id(self): """Tests that a DELETE request correctly deletes a service when a valid service uuid is provided for an existing service. 
""" service = self.start_service( 'compute', 'fake-compute-host').service_ref with mock.patch('nova.objects.Service.destroy') as service_delete: self.controller.delete(self.req, service.uuid) service_delete.assert_called_once_with() self.assertEqual(204, self.controller.delete.wsgi_code) def test_delete_uuid_not_found(self): """Tests that we get a 404 response when attempting to delete a service that is not found by the given uuid. """ self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, uuidsentinel.svc2) def test_delete_invalid_uuid(self): """Tests that the service uuid is validated in a DELETE request.""" ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, self.req, 1234) self.assertIn('Invalid uuid', six.text_type(ex)) def test_update_invalid_service_uuid(self): """Tests that the service uuid is validated in a PUT request.""" ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.req, 1234, body={}) self.assertIn('Invalid uuid', six.text_type(ex)) def test_update_policy_failed(self): """Tests that policy is checked with microversion 2.53.""" rule_name = "os_compute_api:os-services:update" self.policy.set_rules({rule_name: "project_id:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.update, self.req, uuidsentinel.service_uuid, body={}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_update_service_not_found(self): """Tests that we get a 404 response if the service is not found by the given uuid when handling a PUT request. """ self.assertRaises(webob.exc.HTTPNotFound, self.controller.update, self.req, uuidsentinel.service_uuid, body={}) def test_update_invalid_status(self): """Tests that jsonschema validates the status field in the request body and fails if it's not "enabled" or "disabled". """ service = self.start_service( 'compute', 'fake-compute-host').service_ref self.assertRaises( exception.ValidationError, self.controller.update, self.req, service.uuid, body={'status': 'invalid'}) def test_update_disabled_no_reason_then_enable(self): """Tests disabling a service with no reason given. Then enables it to see the change in the response body. """ service = self.start_service( 'compute', 'fake-compute-host').service_ref resp = self.controller.update(self.req, service.uuid, body={'status': 'disabled'}) expected_resp = { 'service': { 'status': 'disabled', 'state': 'up', 'binary': 'nova-compute', 'host': 'fake-compute-host', 'zone': 'nova', # Comes from CONF.default_availability_zone 'updated_at': mock.sentinel.updated_time, 'disabled_reason': None, 'id': service.uuid, 'forced_down': False } } self.assertDictEqual(expected_resp, resp) # Now enable the service to see the response change. req = fakes.HTTPRequest.blank( '', version=services_v21.UUID_FOR_ID_MIN_VERSION) resp = self.controller.update(req, service.uuid, body={'status': 'enabled'}) expected_resp['service']['status'] = 'enabled' self.assertDictEqual(expected_resp, resp) def test_update_enable_with_disabled_reason_fails(self): """Validates that requesting to both enable a service and set the disabled_reason results in a 400 BadRequest error. 
""" service = self.start_service( 'compute', 'fake-compute-host').service_ref ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.req, service.uuid, body={'status': 'enabled', 'disabled_reason': 'invalid'}) self.assertIn("Specifying 'disabled_reason' with status 'enabled' " "is invalid.", six.text_type(ex)) def test_update_disabled_reason_and_forced_down(self): """Tests disabling a service with a reason and forcing it down is reflected back in the response. """ service = self.start_service( 'compute', 'fake-compute-host').service_ref resp = self.controller.update(self.req, service.uuid, body={'status': 'disabled', 'disabled_reason': 'maintenance', # Also tests bool_from_string usage 'forced_down': 'yes'}) expected_resp = { 'service': { 'status': 'disabled', 'state': 'up', 'binary': 'nova-compute', 'host': 'fake-compute-host', 'zone': 'nova', # Comes from CONF.default_availability_zone 'updated_at': mock.sentinel.updated_time, 'disabled_reason': 'maintenance', 'id': service.uuid, 'forced_down': True } } self.assertDictEqual(expected_resp, resp) def test_update_forced_down_invalid_value(self): """Tests that passing an invalid value for forced_down results in a validation error. """ service = self.start_service( 'compute', 'fake-compute-host').service_ref self.assertRaises(exception.ValidationError, self.controller.update, self.req, service.uuid, body={'status': 'disabled', 'disabled_reason': 'maintenance', 'forced_down': 'invalid'}) def test_update_forced_down_invalid_service(self): """Tests that you can't update a non-nova-compute service.""" service = self.start_service( 'scheduler', 'fake-scheduler-host').service_ref ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.req, service.uuid, body={'forced_down': True}) self.assertEqual('Updating a nova-scheduler service is not supported. ' 'Only nova-compute services can be updated.', six.text_type(ex)) def test_update_empty_body(self): """Tests that the caller gets a 400 error if they don't request any updates. """ service = self.start_service('compute').service_ref ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.req, service.uuid, body={}) self.assertEqual("No updates were requested. Fields 'status' or " "'forced_down' should be specified.", six.text_type(ex)) def test_update_only_disabled_reason(self): """Tests that the caller gets a 400 error if they only specify disabled_reason but don't also specify status='disabled'. """ service = self.start_service('compute').service_ref ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.req, service.uuid, body={'disabled_reason': 'missing status'}) self.assertEqual("No updates were requested. 
Fields 'status' or " "'forced_down' should be specified.", six.text_type(ex)) class ServicesTestV275(test.TestCase): wsgi_api_version = '2.75' def setUp(self): super(ServicesTestV275, self).setUp() self.controller = services_v21.ServiceController() def test_services_list_with_additional_filter_old_version(self): url = ('/%s/services?host=host1&binary=nova-compute&unknown=abc' % fakes.FAKE_PROJECT_ID) req = fakes.HTTPRequest.blank(url, use_admin_context=True, version='2.74') self.controller.index(req) def test_services_list_with_additional_filter(self): url = ('/%s/services?host=host1&binary=nova-compute&unknown=abc' % fakes.FAKE_PROJECT_ID) req = fakes.HTTPRequest.blank(url, use_admin_context=True, version=self.wsgi_api_version) self.assertRaises(exception.ValidationError, self.controller.index, req) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_shelve.py0000664000175000017500000002053700000000000024774 0ustar00zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel import six import webob from nova.api.openstack import api_version_request from nova.api.openstack.compute import shelve as shelve_v21 from nova.compute import task_states from nova.compute import vm_states from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance class ShelvePolicyTestV21(test.NoDBTestCase): plugin = shelve_v21 def setUp(self): super(ShelvePolicyTestV21, self).setUp() self.controller = self.plugin.ShelveController() self.req = fakes.HTTPRequest.blank('') @mock.patch('nova.api.openstack.common.get_instance') def test_shelve_locked_server(self, get_instance_mock): get_instance_mock.return_value = ( fake_instance.fake_instance_obj(self.req.environ['nova.context'])) self.stub_out('nova.compute.api.API.shelve', fakes.fake_actions_to_locked_server) self.assertRaises(webob.exc.HTTPConflict, self.controller._shelve, self.req, uuidsentinel.fake, {}) @mock.patch('nova.api.openstack.common.get_instance') @mock.patch('nova.objects.instance.Instance.save') def test_shelve_task_state_race(self, mock_save, get_instance_mock): instance = fake_instance.fake_instance_obj( self.req.environ['nova.context'], vm_state=vm_states.ACTIVE, task_state=None) instance.launched_at = instance.created_at instance.system_metadata = {} get_instance_mock.return_value = instance mock_save.side_effect = exception.UnexpectedTaskStateError( instance_uuid=instance.uuid, expected=None, actual=task_states.SHELVING) ex = self.assertRaises(webob.exc.HTTPConflict, self.controller._shelve, self.req, uuidsentinel.fake, body={'shelve': {}}) self.assertIn('Conflict updating instance', six.text_type(ex)) mock_save.assert_called_once_with(expected_task_state=[None]) @mock.patch('nova.api.openstack.common.get_instance') def 
test_unshelve_locked_server(self, get_instance_mock): get_instance_mock.return_value = ( fake_instance.fake_instance_obj(self.req.environ['nova.context'])) self.stub_out('nova.compute.api.API.unshelve', fakes.fake_actions_to_locked_server) self.assertRaises(webob.exc.HTTPConflict, self.controller._unshelve, self.req, uuidsentinel.fake, body={'unshelve': {}}) @mock.patch('nova.api.openstack.common.get_instance') def test_shelve_offload_locked_server(self, get_instance_mock): get_instance_mock.return_value = ( fake_instance.fake_instance_obj(self.req.environ['nova.context'])) self.stub_out('nova.compute.api.API.shelve_offload', fakes.fake_actions_to_locked_server) self.assertRaises(webob.exc.HTTPConflict, self.controller._shelve_offload, self.req, uuidsentinel.fake, {}) class UnshelveServerControllerTestV277(test.NoDBTestCase): """Server controller test for microversion 2.77 Add availability_zone parameter to unshelve a shelved-offloaded server of 2.77 microversion. """ wsgi_api_version = '2.77' def setUp(self): super(UnshelveServerControllerTestV277, self).setUp() self.controller = shelve_v21.ShelveController() self.req = fakes.HTTPRequest.blank( '/%s/servers/a/action' % fakes.FAKE_PROJECT_ID, use_admin_context=True, version=self.wsgi_api_version) # These tests don't care about ports with QoS bandwidth resources. self.stub_out('nova.api.openstack.common.' 'instance_has_port_with_resource_request', lambda *a, **kw: False) def fake_get_instance(self): ctxt = self.req.environ['nova.context'] return fake_instance.fake_instance_obj( ctxt, uuid=fakes.FAKE_UUID, vm_state=vm_states.SHELVED_OFFLOADED) @mock.patch('nova.api.openstack.common.get_instance') def test_unshelve_with_az_pre_2_77_failed(self, mock_get_instance): """Make sure specifying an AZ before microversion 2.77 is ignored.""" instance = self.fake_get_instance() mock_get_instance.return_value = instance body = { 'unshelve': { 'availability_zone': 'us-east' }} self.req.body = jsonutils.dump_as_bytes(body) self.req.api_version_request = (api_version_request. APIVersionRequest('2.76')) with mock.patch.object(self.controller.compute_api, 'unshelve') as mock_unshelve: self.controller._unshelve(self.req, fakes.FAKE_UUID, body=body) mock_unshelve.assert_called_once_with( self.req.environ['nova.context'], instance, new_az=None) @mock.patch('nova.compute.api.API.unshelve') @mock.patch('nova.api.openstack.common.get_instance') def test_unshelve_with_none_pre_2_77_success( self, mock_get_instance, mock_unshelve): """Make sure we can unshelve server with None before microversion 2.77. """ instance = self.fake_get_instance() mock_get_instance.return_value = instance body = {'unshelve': None} self.req.body = jsonutils.dump_as_bytes(body) self.req.api_version_request = (api_version_request. 
APIVersionRequest('2.76')) self.controller._unshelve(self.req, fakes.FAKE_UUID, body=body) mock_unshelve.assert_called_once_with( self.req.environ['nova.context'], instance, new_az=None) @mock.patch('nova.compute.api.API.unshelve') @mock.patch('nova.api.openstack.common.get_instance') def test_unshelve_with_empty_dict_with_v2_77_failed( self, mock_get_instance, mock_unshelve): """Make sure we cannot unshelve server with empty dict.""" instance = self.fake_get_instance() mock_get_instance.return_value = instance body = {'unshelve': {}} self.req.body = jsonutils.dump_as_bytes(body) exc = self.assertRaises(exception.ValidationError, self.controller._unshelve, self.req, fakes.FAKE_UUID, body=body) self.assertIn("\'availability_zone\' is a required property", six.text_type(exc)) def test_invalid_az_name_with_int(self): body = { 'unshelve': { 'availability_zone': 1234 }} self.req.body = jsonutils.dump_as_bytes(body) self.assertRaises(exception.ValidationError, self.controller._unshelve, self.req, fakes.FAKE_UUID, body=body) def test_no_az_value(self): body = { 'unshelve': { 'availability_zone': None }} self.req.body = jsonutils.dump_as_bytes(body) self.assertRaises(exception.ValidationError, self.controller._unshelve, self.req, fakes.FAKE_UUID, body=body) def test_unshelve_with_additional_param(self): body = { 'unshelve': { 'availability_zone': 'us-east', 'additional_param': 1 }} self.req.body = jsonutils.dump_as_bytes(body) exc = self.assertRaises( exception.ValidationError, self.controller._unshelve, self.req, fakes.FAKE_UUID, body=body) self.assertIn("Additional properties are not allowed", six.text_type(exc)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_simple_tenant_usage.py0000664000175000017500000006363600000000000027543 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
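# NOTE(editor): illustrative sketch of the arithmetic the assertions in this
# module rely on (numbers taken from the constants defined below; the
# doubling factor assumes the default two-cell fixture, cell0 + cell1):
#
#   total_hours           = SERVERS * HOURS * num_cells
#                         = 5 * 24 * 2 = 240
#   total_local_gb_usage  = SERVERS * (ROOT_GB + EPHEMERAL_GB) * HOURS * num_cells
#                         = 5 * (10 + 20) * 24 * 2 = 7200
#   total_memory_mb_usage = SERVERS * MEMORY_MB * HOURS * num_cells
#   total_vcpus_usage     = SERVERS * VCPUS * HOURS * num_cells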
import datetime import mock from oslo_policy import policy as oslo_policy from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from six.moves import range import webob from nova.api.openstack.compute import simple_tenant_usage as \ simple_tenant_usage_v21 from nova.compute import vm_states import nova.conf from nova import context from nova import exception from nova import objects from nova import policy from nova import test from nova.tests.unit.api.openstack import fakes CONF = nova.conf.CONF SERVERS = 5 TENANTS = 2 HOURS = 24 ROOT_GB = 10 EPHEMERAL_GB = 20 MEMORY_MB = 1024 VCPUS = 2 NOW = timeutils.utcnow() START = NOW - datetime.timedelta(hours=HOURS) STOP = NOW FAKE_INST_TYPE = {'id': 1, 'vcpus': VCPUS, 'root_gb': ROOT_GB, 'ephemeral_gb': EPHEMERAL_GB, 'memory_mb': MEMORY_MB, 'name': 'fakeflavor', 'flavorid': 'foo', 'rxtx_factor': 1.0, 'vcpu_weight': 1, 'swap': 0, 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': 0, 'disabled': False, 'is_public': True, 'extra_specs': {'foo': 'bar'}} def _fake_instance(start, end, instance_id, tenant_id, vm_state=vm_states.ACTIVE): flavor = objects.Flavor(**FAKE_INST_TYPE) return objects.Instance( deleted=False, id=instance_id, uuid=getattr(uuids, 'instance_%d' % instance_id), image_ref='1', project_id=tenant_id, user_id='fakeuser', display_name='name', instance_type_id=FAKE_INST_TYPE['id'], launched_at=start, terminated_at=end, vm_state=vm_state, memory_mb=MEMORY_MB, vcpus=VCPUS, root_gb=ROOT_GB, ephemeral_gb=EPHEMERAL_GB, flavor=flavor) def _fake_instance_deleted_flavorless(context, start, end, instance_id, tenant_id, vm_state=vm_states.ACTIVE): return objects.Instance( context=context, deleted=instance_id, id=instance_id, uuid=getattr(uuids, 'instance_%d' % instance_id), image_ref='1', project_id=tenant_id, user_id='fakeuser', display_name='name', instance_type_id=FAKE_INST_TYPE['id'], launched_at=start, terminated_at=end, deleted_at=start, vm_state=vm_state, memory_mb=MEMORY_MB, vcpus=VCPUS, root_gb=ROOT_GB, ephemeral_gb=EPHEMERAL_GB) @classmethod def fake_get_active_deleted_flavorless(cls, context, begin, end=None, project_id=None, host=None, expected_attrs=None, use_slave=False, limit=None, marker=None): # First get some normal instances to have actual usage instances = [ _fake_instance(START, STOP, x, project_id or 'faketenant_%s' % (x // SERVERS)) for x in range(TENANTS * SERVERS)] # Then get some deleted instances with no flavor to test bugs 1643444 and # 1692893 (duplicates) instances.extend([ _fake_instance_deleted_flavorless( context, START, STOP, x, project_id or 'faketenant_%s' % (x // SERVERS)) for x in range(TENANTS * SERVERS)]) return objects.InstanceList(objects=instances) @classmethod def fake_get_active_by_window_joined(cls, context, begin, end=None, project_id=None, host=None, expected_attrs=None, use_slave=False, limit=None, marker=None): return objects.InstanceList(objects=[ _fake_instance(START, STOP, x, project_id or 'faketenant_%s' % (x // SERVERS)) for x in range(TENANTS * SERVERS)]) class SimpleTenantUsageTestV21(test.TestCase): version = '2.1' policy_rule_prefix = "os_compute_api:os-simple-tenant-usage" controller = simple_tenant_usage_v21.SimpleTenantUsageController() def setUp(self): super(SimpleTenantUsageTestV21, self).setUp() self.admin_context = context.RequestContext('fakeadmin_0', 'faketenant_0', is_admin=True) self.user_context = context.RequestContext('fakeadmin_0', 'faketenant_0', is_admin=False) self.alt_user_context = context.RequestContext('fakeadmin_0', 
'faketenant_1', is_admin=False) self.num_cells = len(objects.CellMappingList.get_all( self.admin_context)) def _test_verify_index(self, start, stop, limit=None): url = '?start=%s&end=%s' if limit: url += '&limit=%s' % (limit) req = fakes.HTTPRequest.blank(url % (start.isoformat(), stop.isoformat()), version=self.version) req.environ['nova.context'] = self.admin_context res_dict = self.controller.index(req) usages = res_dict['tenant_usages'] if limit: num = 1 else: # NOTE(danms): We call our fake data mock once per cell, # and the default fixture has two cells (cell0 and cell1), # so all our math will be doubled. num = self.num_cells for i in range(TENANTS): self.assertEqual(SERVERS * HOURS * num, int(usages[i]['total_hours'])) self.assertEqual(SERVERS * (ROOT_GB + EPHEMERAL_GB) * HOURS * num, int(usages[i]['total_local_gb_usage'])) self.assertEqual(SERVERS * MEMORY_MB * HOURS * num, int(usages[i]['total_memory_mb_usage'])) self.assertEqual(SERVERS * VCPUS * HOURS * num, int(usages[i]['total_vcpus_usage'])) self.assertFalse(usages[i].get('server_usages')) if limit: self.assertIn('tenant_usages_links', res_dict) self.assertEqual('next', res_dict['tenant_usages_links'][0]['rel']) else: self.assertNotIn('tenant_usages_links', res_dict) # NOTE(artom) Test for bugs 1643444 and 1692893 (duplicates). We simulate a # situation where an instance has been deleted (moved to shadow table) and # its corresponding instance_extra row has been archived (deleted from # shadow table). @mock.patch('nova.objects.InstanceList.get_active_by_window_joined', fake_get_active_deleted_flavorless) @mock.patch.object( objects.Instance, '_load_flavor', side_effect=exception.InstanceNotFound(instance_id='fake-id')) def test_verify_index_deleted_flavorless(self, mock_load): with mock.patch.object(self.controller, '_get_flavor', return_value=None): self._test_verify_index(START, STOP) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined', fake_get_active_by_window_joined) def test_verify_index(self): self._test_verify_index(START, STOP) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined', fake_get_active_by_window_joined) def test_verify_index_future_end_time(self): future = NOW + datetime.timedelta(hours=HOURS) self._test_verify_index(START, future) def test_verify_show(self): self._test_verify_show(START, STOP) def test_verify_show_future_end_time(self): future = NOW + datetime.timedelta(hours=HOURS) self._test_verify_show(START, future) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined', fake_get_active_by_window_joined) def _get_tenant_usages(self, detailed=''): req = fakes.HTTPRequest.blank('?detailed=%s&start=%s&end=%s' % (detailed, START.isoformat(), STOP.isoformat()), version=self.version) req.environ['nova.context'] = self.admin_context # Make sure that get_active_by_window_joined is only called with # expected_attrs=['flavor']. 
orig_get_active_by_window_joined = ( objects.InstanceList.get_active_by_window_joined) def fake_get_active_by_window_joined(context, begin, end=None, project_id=None, host=None, expected_attrs=None, use_slave=False, limit=None, marker=None): self.assertEqual(['flavor'], expected_attrs) return orig_get_active_by_window_joined(context, begin, end, project_id, host, expected_attrs, use_slave) with mock.patch.object(objects.InstanceList, 'get_active_by_window_joined', side_effect=fake_get_active_by_window_joined): res_dict = self.controller.index(req) return res_dict['tenant_usages'] def test_verify_detailed_index(self): usages = self._get_tenant_usages('1') for i in range(TENANTS): servers = usages[i]['server_usages'] for j in range(SERVERS): self.assertEqual(HOURS, int(servers[j]['hours'])) def test_verify_simple_index(self): usages = self._get_tenant_usages(detailed='0') for i in range(TENANTS): self.assertIsNone(usages[i].get('server_usages')) def test_verify_simple_index_empty_param(self): # NOTE(lzyeval): 'detailed=&start=..&end=..' usages = self._get_tenant_usages() for i in range(TENANTS): self.assertIsNone(usages[i].get('server_usages')) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined', fake_get_active_by_window_joined) def _test_verify_show(self, start, stop, limit=None): tenant_id = 1 url = '?start=%s&end=%s' if limit: url += '&limit=%s' % (limit) req = fakes.HTTPRequest.blank(url % (start.isoformat(), stop.isoformat()), version=self.version) req.environ['nova.context'] = self.user_context res_dict = self.controller.show(req, tenant_id) if limit: num = 1 else: # NOTE(danms): We call our fake data mock once per cell, # and the default fixture has two cells (cell0 and cell1), # so all our math will be doubled. num = self.num_cells usage = res_dict['tenant_usage'] servers = usage['server_usages'] self.assertEqual(TENANTS * SERVERS * num, len(usage['server_usages'])) server_uuids = [getattr(uuids, 'instance_%d' % x) for x in range(SERVERS)] for j in range(SERVERS): delta = STOP - START # NOTE(javeme): cast seconds from float to int for clarity uptime = int(delta.total_seconds()) self.assertEqual(uptime, int(servers[j]['uptime'])) self.assertEqual(HOURS, int(servers[j]['hours'])) self.assertIn(servers[j]['instance_id'], server_uuids) if limit: self.assertIn('tenant_usage_links', res_dict) self.assertEqual('next', res_dict['tenant_usage_links'][0]['rel']) else: self.assertNotIn('tenant_usage_links', res_dict) def test_verify_show_cannot_view_other_tenant(self): req = fakes.HTTPRequest.blank('?start=%s&end=%s' % (START.isoformat(), STOP.isoformat()), version=self.version) req.environ['nova.context'] = self.alt_user_context rules = { self.policy_rule_prefix + ":show": [ ["role:admin"], ["project_id:%(project_id)s"]] } policy.set_rules(oslo_policy.Rules.from_dict(rules)) try: self.assertRaises(exception.PolicyNotAuthorized, self.controller.show, req, 'faketenant_0') finally: policy.reset() def test_get_tenants_usage_with_bad_start_date(self): future = NOW + datetime.timedelta(hours=HOURS) req = fakes.HTTPRequest.blank('?start=%s&end=%s' % (future.isoformat(), NOW.isoformat()), version=self.version) req.environ['nova.context'] = self.user_context self.assertRaises(webob.exc.HTTPBadRequest, self.controller.show, req, 'faketenant_0') def test_get_tenants_usage_with_invalid_start_date(self): req = fakes.HTTPRequest.blank('?start=%s&end=%s' % ("xxxx", NOW.isoformat()), version=self.version) req.environ['nova.context'] = self.user_context 
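        # 'xxxx' is not a parseable ISO 8601 timestamp, so the controller
        # is expected to reject the request with HTTP 400.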
self.assertRaises(webob.exc.HTTPBadRequest, self.controller.show, req, 'faketenant_0') def _test_get_tenants_usage_with_one_date(self, date_url_param): req = fakes.HTTPRequest.blank('?%s' % date_url_param, version=self.version) req.environ['nova.context'] = self.user_context res = self.controller.show(req, 'faketenant_0') self.assertIn('tenant_usage', res) def test_get_tenants_usage_with_no_start_date(self): self._test_get_tenants_usage_with_one_date( 'end=%s' % (NOW + datetime.timedelta(5)).isoformat()) def test_get_tenants_usage_with_no_end_date(self): self._test_get_tenants_usage_with_one_date( 'start=%s' % (NOW - datetime.timedelta(5)).isoformat()) def test_index_additional_query_parameters(self): req = fakes.HTTPRequest.blank('?start=%s&end=%s&additional=1' % (START.isoformat(), STOP.isoformat()), version=self.version) res = self.controller.index(req) self.assertIn('tenant_usages', res) def _test_index_duplicate_query_parameters_validation(self, params): for param, value in params.items(): req = fakes.HTTPRequest.blank('?start=%s&%s=%s&%s=%s' % (START.isoformat(), param, value, param, value), version=self.version) res = self.controller.index(req) self.assertIn('tenant_usages', res) def test_index_duplicate_query_parameters_validation(self): params = { 'start': START.isoformat(), 'end': STOP.isoformat(), 'detailed': 1 } self._test_index_duplicate_query_parameters_validation(params) def test_show_additional_query_parameters(self): req = fakes.HTTPRequest.blank('?start=%s&end=%s&additional=1' % (START.isoformat(), STOP.isoformat()), version=self.version) res = self.controller.show(req, 1) self.assertIn('tenant_usage', res) def _test_show_duplicate_query_parameters_validation(self, params): for param, value in params.items(): req = fakes.HTTPRequest.blank('?start=%s&%s=%s&%s=%s' % (START.isoformat(), param, value, param, value), version=self.version) res = self.controller.show(req, 1) self.assertIn('tenant_usage', res) def test_show_duplicate_query_parameters_validation(self): params = { 'start': START.isoformat(), 'end': STOP.isoformat() } self._test_show_duplicate_query_parameters_validation(params) class SimpleTenantUsageTestV40(SimpleTenantUsageTestV21): version = '2.40' @mock.patch('nova.objects.InstanceList.get_active_by_window_joined', fake_get_active_by_window_joined) def test_next_links_show(self): self._test_verify_show(START, STOP, limit=SERVERS * TENANTS) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined', fake_get_active_by_window_joined) def test_next_links_index(self): self._test_verify_index(START, STOP, limit=SERVERS * TENANTS) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined', fake_get_active_by_window_joined) def test_index_duplicate_query_parameters_validation(self): params = { 'start': START.isoformat(), 'end': STOP.isoformat(), 'detailed': 1, 'limit': 1, 'marker': 1 } self._test_index_duplicate_query_parameters_validation(params) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined', fake_get_active_by_window_joined) def test_show_duplicate_query_parameters_validation(self): params = { 'start': START.isoformat(), 'end': STOP.isoformat(), 'limit': 1, 'marker': 1 } self._test_show_duplicate_query_parameters_validation(params) class SimpleTenantUsageTestV2_75(SimpleTenantUsageTestV40): version = '2.75' def test_index_additional_query_param_old_version(self): req = fakes.HTTPRequest.blank('?start=%s&end=%s&additional=1' % (START.isoformat(), STOP.isoformat()), version='2.74') res = self.controller.index(req) 
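        # Before microversion 2.75, unknown query parameters such as
        # 'additional' are ignored rather than rejected, so this request
        # succeeds; the 2.75 variants below expect a ValidationError instead.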
self.assertIn('tenant_usages', res) def test_index_additional_query_parameters(self): req = fakes.HTTPRequest.blank('?start=%s&end=%s&additional=1' % (START.isoformat(), STOP.isoformat()), version=self.version) self.assertRaises(exception.ValidationError, self.controller.index, req) def test_show_additional_query_param_old_version(self): req = fakes.HTTPRequest.blank('?start=%s&end=%s&additional=1' % (START.isoformat(), STOP.isoformat()), version='2.74') res = self.controller.show(req, 1) self.assertIn('tenant_usage', res) def test_show_additional_query_parameters(self): req = fakes.HTTPRequest.blank('?start=%s&end=%s&additional=1' % (START.isoformat(), STOP.isoformat()), version=self.version) self.assertRaises(exception.ValidationError, self.controller.show, req, 1) class SimpleTenantUsageLimitsTestV21(test.TestCase): version = '2.1' def setUp(self): super(SimpleTenantUsageLimitsTestV21, self).setUp() self.controller = simple_tenant_usage_v21.SimpleTenantUsageController() self.tenant_id = 1 def _get_request(self, url): url = url % (START.isoformat(), STOP.isoformat()) return fakes.HTTPRequest.blank(url, version=self.version) def assert_limit(self, mock_get, limit): mock_get.assert_called_with( mock.ANY, mock.ANY, mock.ANY, mock.ANY, expected_attrs=['flavor'], limit=1000, marker=None) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined') def test_limit_defaults_to_conf_max_limit_show(self, mock_get): req = self._get_request('?start=%s&end=%s') self.controller.show(req, self.tenant_id) self.assert_limit(mock_get, CONF.api.max_limit) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined') def test_limit_defaults_to_conf_max_limit_index(self, mock_get): req = self._get_request('?start=%s&end=%s') self.controller.index(req) self.assert_limit(mock_get, CONF.api.max_limit) class SimpleTenantUsageLimitsTestV240(SimpleTenantUsageLimitsTestV21): version = '2.40' def assert_limit_and_marker(self, mock_get, limit, marker): # NOTE(danms): Make sure we called at least once with the marker mock_get.assert_any_call( mock.ANY, mock.ANY, mock.ANY, mock.ANY, expected_attrs=['flavor'], limit=3, marker=marker) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined') def test_limit_and_marker_show(self, mock_get): req = self._get_request('?start=%s&end=%s&limit=3&marker=some-marker') self.controller.show(req, self.tenant_id) self.assert_limit_and_marker(mock_get, 3, 'some-marker') @mock.patch('nova.objects.InstanceList.get_active_by_window_joined') def test_limit_and_marker_index(self, mock_get): req = self._get_request('?start=%s&end=%s&limit=3&marker=some-marker') self.controller.index(req) self.assert_limit_and_marker(mock_get, 3, 'some-marker') @mock.patch('nova.objects.InstanceList.get_active_by_window_joined') def test_marker_not_found_show(self, mock_get): mock_get.side_effect = exception.MarkerNotFound(marker='some-marker') req = self._get_request('?start=%s&end=%s&limit=3&marker=some-marker') self.assertRaises( webob.exc.HTTPBadRequest, self.controller.show, req, 1) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined') def test_marker_not_found_index(self, mock_get): mock_get.side_effect = exception.MarkerNotFound(marker='some-marker') req = self._get_request('?start=%s&end=%s&limit=3&marker=some-marker') self.assertRaises( webob.exc.HTTPBadRequest, self.controller.index, req) def test_index_with_invalid_non_int_limit(self): req = self._get_request('?start=%s&end=%s&limit=-3') self.assertRaises(exception.ValidationError, self.controller.index, 
req) def test_index_with_invalid_string_limit(self): req = self._get_request('?start=%s&end=%s&limit=abc') self.assertRaises(exception.ValidationError, self.controller.index, req) def test_index_duplicate_query_with_invalid_string_limit(self): req = self._get_request('?start=%s&end=%s&limit=3&limit=abc') self.assertRaises(exception.ValidationError, self.controller.index, req) def test_show_with_invalid_non_int_limit(self): req = self._get_request('?start=%s&end=%s&limit=-3') self.assertRaises(exception.ValidationError, self.controller.show, req) def test_show_with_invalid_string_limit(self): req = self._get_request('?start=%s&end=%s&limit=abc') self.assertRaises(exception.ValidationError, self.controller.show, req) def test_show_duplicate_query_with_invalid_string_limit(self): req = self._get_request('?start=%s&end=%s&limit=3&limit=abc') self.assertRaises(exception.ValidationError, self.controller.show, req) class SimpleTenantUsageControllerTestV21(test.TestCase): controller = simple_tenant_usage_v21.SimpleTenantUsageController() def setUp(self): super(SimpleTenantUsageControllerTestV21, self).setUp() self.context = context.RequestContext('fakeuser', 'fake-project') self.inst_obj = _fake_instance(START, STOP, instance_id=1, tenant_id=self.context.project_id, vm_state=vm_states.DELETED) @mock.patch('nova.objects.Instance.get_flavor', side_effect=exception.NotFound()) def test_get_flavor_from_non_deleted_with_id_fails(self, fake_get_flavor): # If an instance is not deleted and missing type information from # instance.flavor, then that's a bug self.assertRaises(exception.NotFound, self.controller._get_flavor, self.context, self.inst_obj, {}) @mock.patch('nova.objects.Instance.get_flavor', side_effect=exception.NotFound()) def test_get_flavor_from_deleted_with_notfound(self, fake_get_flavor): # If the flavor is not found from the instance and the instance is # deleted, attempt to look it up from the DB and if found we're OK. self.inst_obj.deleted = 1 flavor = self.controller._get_flavor(self.context, self.inst_obj, {}) self.assertEqual(objects.Flavor, type(flavor)) self.assertEqual(FAKE_INST_TYPE['id'], flavor.id) @mock.patch('nova.objects.Instance.get_flavor', side_effect=exception.NotFound()) def test_get_flavor_from_deleted_with_id_of_deleted(self, fake_get_flavor): # Verify the legacy behavior of instance_type_id pointing to a # missing type being non-fatal self.inst_obj.deleted = 1 self.inst_obj.instance_type_id = 99 flavor = self.controller._get_flavor(self.context, self.inst_obj, {}) self.assertIsNone(flavor) class SimpleTenantUsageUtilsV21(test.NoDBTestCase): simple_tenant_usage = simple_tenant_usage_v21 def test_valid_string(self): dt = self.simple_tenant_usage.parse_strtime( "2014-02-21T13:47:20.824060", "%Y-%m-%dT%H:%M:%S.%f") self.assertEqual(datetime.datetime( microsecond=824060, second=20, minute=47, hour=13, day=21, month=2, year=2014), dt) def test_invalid_string(self): self.assertRaises(exception.InvalidStrTime, self.simple_tenant_usage.parse_strtime, "2014-02-21 13:47:20.824060", "%Y-%m-%dT%H:%M:%S.%f") ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_snapshots.py0000664000175000017500000002314000000000000025521 0ustar00zuulzuul00000000000000# Copyright 2011 Denali Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
# You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import webob

from nova.api.openstack.compute import volumes as volumes_v21
from nova import exception
from nova import test
from nova.tests.unit.api.openstack import fakes
from nova.volume import cinder

FAKE_UUID = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'


class SnapshotApiTestV21(test.NoDBTestCase):
    controller = volumes_v21.SnapshotController()
    validation_error = exception.ValidationError

    def setUp(self):
        super(SnapshotApiTestV21, self).setUp()
        fakes.stub_out_networking(self)
        self.stub_out("nova.volume.cinder.API.create_snapshot",
                      fakes.stub_snapshot_create)
        self.stub_out("nova.volume.cinder.API.create_snapshot_force",
                      fakes.stub_snapshot_create)
        self.stub_out("nova.volume.cinder.API.delete_snapshot",
                      fakes.stub_snapshot_delete)
        self.stub_out("nova.volume.cinder.API.get_snapshot",
                      fakes.stub_snapshot_get)
        self.stub_out("nova.volume.cinder.API.get_all_snapshots",
                      fakes.stub_snapshot_get_all)
        self.stub_out("nova.volume.cinder.API.get", fakes.stub_volume_get)
        self.req = fakes.HTTPRequest.blank('')

    def _test_snapshot_create(self, force):
        snapshot = {"volume_id": '12',
                    "force": force,
                    "display_name": "Snapshot Test Name",
                    "display_description": "Snapshot Test Desc"}
        body = dict(snapshot=snapshot)
        resp_dict = self.controller.create(self.req, body=body)

        self.assertIn('snapshot', resp_dict)
        self.assertEqual(snapshot['display_name'],
                         resp_dict['snapshot']['displayName'])
        self.assertEqual(snapshot['display_description'],
                         resp_dict['snapshot']['displayDescription'])
        self.assertEqual(snapshot['volume_id'],
                         resp_dict['snapshot']['volumeId'])

    def test_snapshot_create(self):
        self._test_snapshot_create(False)

    def test_snapshot_create_force(self):
        self._test_snapshot_create(True)

    def test_snapshot_create_invalid_force_param(self):
        body = {'snapshot': {'volume_id': '1',
                             'force': '**&&^^%%$$##@@'}}
        self.assertRaises(self.validation_error,
                          self.controller.create,
                          self.req, body=body)

    def test_snapshot_delete(self):
        snapshot_id = '123'
        delete = self.controller.delete
        result = delete(self.req, snapshot_id)

        # NOTE: on v2.1, http status code is set as wsgi_code of API
        # method instead of status_int in a response object.
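        # In other words, for the v2.1 controller the expected 202 is read
        # from the bound method's wsgi_code attribute rather than from a
        # webob response object, which is what the branch below checks.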
if isinstance(self.controller, volumes_v21.SnapshotController): status_int = delete.wsgi_code else: status_int = result.status_int self.assertEqual(202, status_int) @mock.patch.object(cinder.API, 'delete_snapshot', side_effect=exception.SnapshotNotFound(snapshot_id=FAKE_UUID)) def test_delete_snapshot_not_exists(self, mock_mr): self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, FAKE_UUID) def test_snapshot_delete_invalid_id(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, '-1') def test_snapshot_show(self): snapshot_id = '123' resp_dict = self.controller.show(self.req, snapshot_id) self.assertIn('snapshot', resp_dict) self.assertEqual(str(snapshot_id), resp_dict['snapshot']['id']) def test_snapshot_show_invalid_id(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, '-1') def test_snapshot_detail(self): resp_dict = self.controller.detail(self.req) self.assertIn('snapshots', resp_dict) resp_snapshots = resp_dict['snapshots'] self.assertEqual(3, len(resp_snapshots)) resp_snapshot = resp_snapshots.pop() self.assertEqual(102, resp_snapshot['id']) def test_snapshot_detail_offset_and_limit(self): path = ('/v2/%s/os-snapshots/detail?offset=1&limit=1' % fakes.FAKE_PROJECT_ID) req = fakes.HTTPRequest.blank(path) resp_dict = self.controller.detail(req) self.assertIn('snapshots', resp_dict) resp_snapshots = resp_dict['snapshots'] self.assertEqual(1, len(resp_snapshots)) resp_snapshot = resp_snapshots.pop() self.assertEqual(101, resp_snapshot['id']) def test_snapshot_index(self): resp_dict = self.controller.index(self.req) self.assertIn('snapshots', resp_dict) resp_snapshots = resp_dict['snapshots'] self.assertEqual(3, len(resp_snapshots)) def test_snapshot_index_offset_and_limit(self): path = ('/v2/%s/os-snapshots?offset=1&limit=1' % fakes.FAKE_PROJECT_ID) req = fakes.HTTPRequest.blank(path) resp_dict = self.controller.index(req) self.assertIn('snapshots', resp_dict) resp_snapshots = resp_dict['snapshots'] self.assertEqual(1, len(resp_snapshots)) def _test_list_with_invalid_filter(self, url): prefix = '/os-snapshots' req = fakes.HTTPRequest.blank(prefix + url) controller_list = self.controller.index if 'detail' in url: controller_list = self.controller.detail self.assertRaises(exception.ValidationError, controller_list, req) def test_list_with_invalid_non_int_limit(self): self._test_list_with_invalid_filter('?limit=-9') def test_list_with_invalid_string_limit(self): self._test_list_with_invalid_filter('?limit=abc') def test_list_duplicate_query_with_invalid_string_limit(self): self._test_list_with_invalid_filter( '?limit=1&limit=abc') def test_detail_list_with_invalid_non_int_limit(self): self._test_list_with_invalid_filter('/detail?limit=-9') def test_detail_list_with_invalid_string_limit(self): self._test_list_with_invalid_filter('/detail?limit=abc') def test_detail_list_duplicate_query_with_invalid_string_limit(self): self._test_list_with_invalid_filter( '/detail?limit=1&limit=abc') def test_list_with_invalid_non_int_offset(self): self._test_list_with_invalid_filter('?offset=-9') def test_list_with_invalid_string_offset(self): self._test_list_with_invalid_filter('?offset=abc') def test_list_duplicate_query_with_invalid_string_offset(self): self._test_list_with_invalid_filter( '?offset=1&offset=abc') def test_detail_list_with_invalid_non_int_offset(self): self._test_list_with_invalid_filter('/detail?offset=-9') def test_detail_list_with_invalid_string_offset(self): 
self._test_list_with_invalid_filter('/detail?offset=abc') def test_detail_list_duplicate_query_with_invalid_string_offset(self): self._test_list_with_invalid_filter( '/detail?offset=1&offset=abc') def _test_list_duplicate_query_parameters_validation(self, url): params = { 'limit': 1, 'offset': 1 } controller_list = self.controller.index if 'detail' in url: controller_list = self.controller.detail for param, value in params.items(): req = fakes.HTTPRequest.blank( url + '?%s=%s&%s=%s' % (param, value, param, value)) controller_list(req) def test_list_duplicate_query_parameters_validation(self): self._test_list_duplicate_query_parameters_validation('/os-snapshots') def test_detail_list_duplicate_query_parameters_validation(self): self._test_list_duplicate_query_parameters_validation( '/os-snapshots/detail') def test_list_with_additional_filter(self): req = fakes.HTTPRequest.blank( '/os-snapshots?limit=1&offset=1&additional=something') self.controller.index(req) def test_detail_list_with_additional_filter(self): req = fakes.HTTPRequest.blank( '/os-snapshots/detail?limit=1&offset=1&additional=something') self.controller.detail(req) class TestSnapshotAPIDeprecation(test.NoDBTestCase): def setUp(self): super(TestSnapshotAPIDeprecation, self).setUp() self.controller = volumes_v21.SnapshotController() self.req = fakes.HTTPRequest.blank('', version='2.36') def test_all_apis_return_not_found(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.show, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.delete, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.index, self.req) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.create, self.req, {}) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.detail, self.req) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_suspend_server.py0000664000175000017500000000505200000000000026550 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import six import webob from nova.api.openstack.compute import suspend_server as \ suspend_server_v21 from nova import objects from nova.tests.unit.api.openstack.compute import admin_only_action_common from nova.tests.unit.api.openstack import fakes class SuspendServerTestsV21(admin_only_action_common.CommonTests): suspend_server = suspend_server_v21 controller_name = 'SuspendServerController' _api_version = '2.1' def setUp(self): super(SuspendServerTestsV21, self).setUp() self.controller = getattr(self.suspend_server, self.controller_name)() self.compute_api = self.controller.compute_api self.stub_out('nova.api.openstack.compute.suspend_server.' 
'SuspendServerController', lambda *a, **kw: self.controller) def test_suspend_resume(self): self._test_actions(['_suspend', '_resume']) @mock.patch('nova.virt.hardware.get_mem_encryption_constraint', new=mock.Mock(return_value=True)) @mock.patch.object(objects.instance.Instance, 'image_meta') def test_suspend_sev_rejected(self, mock_image): instance = self._stub_instance_get() ex = self.assertRaises(webob.exc.HTTPConflict, self.controller._suspend, self.req, fakes.FAKE_UUID, body={}) self.assertIn("Operation 'suspend' not supported for SEV-enabled " "instance (%s)" % instance.uuid, six.text_type(ex)) def test_suspend_resume_with_non_existed_instance(self): self._test_actions_with_non_existed_instance(['_suspend', '_resume']) def test_suspend_resume_raise_conflict_on_invalid_state(self): self._test_actions_raise_conflict_on_invalid_state(['_suspend', '_resume']) def test_actions_with_locked_instance(self): self._test_actions_with_locked_instance(['_suspend', '_resume']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_tenant_networks.py0000664000175000017500000001314700000000000026732 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mock from oslo_config import cfg from oslo_utils.fixture import uuidsentinel as uuids import webob from nova.api.openstack.compute import tenant_networks \ as networks_v21 from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes CONF = cfg.CONF NETWORKS = [ { "id": uuids.fake_net1, "name": "fake_net1", }, { "id": uuids.fake_net2, "name": "fake_net2", } ] DEFAULT_NETWORK = [ { "id": uuids.fake_net3, "name": "default", } ] NETWORKS_WITH_DEFAULT_NET = copy.deepcopy(NETWORKS) NETWORKS_WITH_DEFAULT_NET.extend(DEFAULT_NETWORK) DEFAULT_TENANT_ID = CONF.api.neutron_default_tenant_id def fake_network_api_get_all(context): if context.project_id == DEFAULT_TENANT_ID: return DEFAULT_NETWORK else: return NETWORKS class TenantNetworksTestV21(test.NoDBTestCase): ctrlr = networks_v21.TenantNetworkController validation_error = exception.ValidationError def setUp(self): # TODO(stephenfin): We should probably use NeutronFixture here super(TenantNetworksTestV21, self).setUp() # os-tenant-networks only supports Neutron when listing networks or # showing details about a network, create and delete operations # result in a 503 and 500 response, respectively. 
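        # Only the index and show paths are exercised by this test class;
        # the create and delete behaviour noted above is not tested here.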
self.controller = self.ctrlr() self.req = fakes.HTTPRequest.blank('') self.original_value = CONF.api.use_neutron_default_nets def tearDown(self): super(TenantNetworksTestV21, self).tearDown() CONF.set_override("use_neutron_default_nets", self.original_value, group='api') def test_network_show(self): with mock.patch.object(self.controller.network_api, 'get', return_value=NETWORKS[0]): res = self.controller.show(self.req, 1) expected = { 'id': NETWORKS[0]['id'], 'label': NETWORKS[0]['name'], 'cidr': str(None), } self.assertEqual(expected, res['network']) def test_network_show_not_found(self): ctxt = self.req.environ['nova.context'] with mock.patch.object(self.controller.network_api, 'get', side_effect=exception.NetworkNotFound( network_id=1)) as get_mock: self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, 1) get_mock.assert_called_once_with(ctxt, 1) def _test_network_index(self, default_net=True): CONF.set_override("use_neutron_default_nets", default_net, group='api') networks = NETWORKS if default_net: networks = NETWORKS_WITH_DEFAULT_NET expected = [] for network in networks: expected.append({ 'id': network['id'], 'label': network['name'], 'cidr': str(None), }) with mock.patch.object(self.controller.network_api, 'get_all', side_effect=fake_network_api_get_all): res = self.controller.index(self.req) self.assertEqual(expected, res['networks']) def test_network_index_with_default_net(self): self._test_network_index() def test_network_index_without_default_net(self): self._test_network_index(default_net=False) class TenantNetworksEnforcementV21(test.NoDBTestCase): def setUp(self): super(TenantNetworksEnforcementV21, self).setUp() self.controller = networks_v21.TenantNetworkController() self.req = fakes.HTTPRequest.blank('') def test_index_policy_failed(self): rule_name = 'os_compute_api:os-tenant-networks' self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.index, self.req) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_show_policy_failed(self): rule_name = 'os_compute_api:os-tenant-networks' self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.show, self.req, fakes.FAKE_UUID) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) class TenantNetworksDeprecationTest(test.NoDBTestCase): def setUp(self): super(TenantNetworksDeprecationTest, self).setUp() self.controller = networks_v21.TenantNetworkController() self.req = fakes.HTTPRequest.blank('', version='2.36') def test_all_apis_return_not_found(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.index, self.req) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.show, self.req, fakes.FAKE_UUID) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_urlmap.py0000664000175000017500000001356100000000000025005 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils from nova import test from nova.tests.unit.api.openstack import fakes import nova.tests.unit.image.fake class UrlmapTest(test.NoDBTestCase): def setUp(self): super(UrlmapTest, self).setUp() nova.tests.unit.image.fake.stub_out_image_service(self) def tearDown(self): super(UrlmapTest, self).tearDown() nova.tests.unit.image.fake.FakeImageService_reset() def test_path_version_v2(self): # Test URL path specifying v2 returns v2 content. req = fakes.HTTPRequest.blank('/v2/') req.accept = "application/json" res = req.get_response(fakes.wsgi_app_v21(v2_compatible=True)) self.assertEqual(200, res.status_int) self.assertEqual("application/json", res.content_type) body = jsonutils.loads(res.body) self.assertEqual('v2.0', body['version']['id']) def test_content_type_version_v2(self): # Test Content-Type specifying v2 returns v2 content. req = fakes.HTTPRequest.blank('/') req.content_type = "application/json;version=2" req.accept = "application/json" res = req.get_response(fakes.wsgi_app_v21(v2_compatible=True)) self.assertEqual(200, res.status_int) self.assertEqual("application/json", res.content_type) body = jsonutils.loads(res.body) self.assertEqual('v2.0', body['version']['id']) def test_accept_version_v2(self): # Test Accept header specifying v2 returns v2 content. req = fakes.HTTPRequest.blank('/') req.accept = "application/json;version=2" res = req.get_response(fakes.wsgi_app_v21(v2_compatible=True)) self.assertEqual(200, res.status_int) self.assertEqual("application/json", res.content_type) body = jsonutils.loads(res.body) self.assertEqual('v2.0', body['version']['id']) def test_accept_content_type(self): # Test Accept header specifying JSON returns JSON content. url = ('/v2/%s/images/cedef40a-ed67-4d10-800e-17455edce175' % fakes.FAKE_PROJECT_ID) req = fakes.HTTPRequest.blank(url) req.accept = "application/xml;q=0.8, application/json" res = req.get_response(fakes.wsgi_app_v21()) self.assertEqual(200, res.status_int) self.assertEqual("application/json", res.content_type) body = jsonutils.loads(res.body) self.assertEqual('cedef40a-ed67-4d10-800e-17455edce175', body['image']['id']) def test_path_version_v21(self): # Test URL path specifying v2.1 returns v2.1 content. req = fakes.HTTPRequest.blank('/v2.1/') req.accept = "application/json" res = req.get_response(fakes.wsgi_app_v21()) self.assertEqual(200, res.status_int) self.assertEqual("application/json", res.content_type) body = jsonutils.loads(res.body) self.assertEqual('v2.1', body['version']['id']) def test_content_type_version_v21(self): # Test Content-Type specifying v2.1 returns v2 content. req = fakes.HTTPRequest.blank('/') req.content_type = "application/json;version=2.1" req.accept = "application/json" res = req.get_response(fakes.wsgi_app_v21()) self.assertEqual(200, res.status_int) self.assertEqual("application/json", res.content_type) body = jsonutils.loads(res.body) self.assertEqual('v2.1', body['version']['id']) def test_accept_version_v21(self): # Test Accept header specifying v2.1 returns v2.1 content. 
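        # i.e. an Accept value such as 'application/json;version=2.1'
        # selects the v2.1 endpoint even though the request URL ('/')
        # carries no version prefix.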
req = fakes.HTTPRequest.blank('/') req.accept = "application/json;version=2.1" res = req.get_response(fakes.wsgi_app_v21()) self.assertEqual(200, res.status_int) self.assertEqual("application/json", res.content_type) body = jsonutils.loads(res.body) self.assertEqual('v2.1', body['version']['id']) def test_accept_content_type_v21(self): # Test Accept header specifying JSON returns JSON content. req = fakes.HTTPRequest.blank('/') req.content_type = "application/json;version=2.1" req.accept = "application/xml;q=0.8, application/json" res = req.get_response(fakes.wsgi_app_v21()) self.assertEqual(200, res.status_int) self.assertEqual("application/json", res.content_type) body = jsonutils.loads(res.body) self.assertEqual('v2.1', body['version']['id']) def test_script_name_path_info(self): """Ensure URLMap preserves SCRIPT_NAME and PATH_INFO correctly.""" data = ( ('', '', ''), ('/', '', '/'), ('/v2', '/v2', ''), ('/v2/', '/v2', '/'), ('/v2.1', '/v2.1', ''), ('/v2.1/', '/v2.1', '/'), ('/v2/foo', '/v2', '/foo'), ('/v2.1/foo', '/v2.1', '/foo'), ('/bar/baz', '', '/bar/baz') ) app = fakes.wsgi_app_v21() for url, exp_script_name, exp_path_info in data: req = fakes.HTTPRequest.blank(url) req.get_response(app) # The app uses /v2 as the base URL :( exp_script_name = '/v2' + exp_script_name self.assertEqual(exp_script_name, req.environ['SCRIPT_NAME']) self.assertEqual(exp_path_info, req.environ['PATH_INFO']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_versions.py0000664000175000017500000004166500000000000025363 0ustar00zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import copy from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from nova.api.openstack import api_version_request as avr from nova.api.openstack.compute import views from nova.api import wsgi from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import matchers NS = { 'atom': 'http://www.w3.org/2005/Atom', 'ns': 'http://docs.openstack.org/common/api/v1.0' } MAX_API_VERSION = avr.max_api_version().get_string() EXP_LINKS = { 'v2.0': { 'html': 'http://docs.openstack.org/', }, 'v2.1': { 'html': 'http://docs.openstack.org/' }, } EXP_VERSIONS = { "v2.0": { "id": "v2.0", "status": "SUPPORTED", "version": "", "min_version": "", "updated": "2011-01-21T11:33:21Z", "links": [ { "rel": "describedby", "type": "text/html", "href": EXP_LINKS['v2.0']['html'], }, ], "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.compute+json;version=2", }, ], }, "v2.1": { "id": "v2.1", "status": "CURRENT", "version": MAX_API_VERSION, "min_version": "2.1", "updated": "2013-07-23T11:33:21Z", "links": [ { "rel": "self", "href": "http://localhost/v2.1/", }, { "rel": "describedby", "type": "text/html", "href": EXP_LINKS['v2.1']['html'], }, ], "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.compute+json;version=2.1", } ], } } def _get_self_href(response): """Extract the URL to self from response data.""" data = jsonutils.loads(response.body) for link in data['versions'][0]['links']: if link['rel'] == 'self': return link['href'] return '' class VersionsTestV21WithV2CompatibleWrapper(test.NoDBTestCase): @property def wsgi_app(self): return fakes.wsgi_app_v21(v2_compatible=True) def test_get_version_list(self): req = fakes.HTTPRequest.blank('/', base_url='') req.accept = "application/json" res = req.get_response(self.wsgi_app) self.assertEqual(200, res.status_int) self.assertEqual("application/json", res.content_type) versions = jsonutils.loads(res.body)["versions"] expected = [ { "id": "v2.0", "status": "SUPPORTED", "version": "", "min_version": "", "updated": "2011-01-21T11:33:21Z", "links": [ { "rel": "self", "href": "http://localhost/v2/", }], }, { "id": "v2.1", "status": "CURRENT", "version": MAX_API_VERSION, "min_version": "2.1", "updated": "2013-07-23T11:33:21Z", "links": [ { "rel": "self", "href": "http://localhost/v2.1/", }], }, ] self.assertEqual(expected, versions) def _test_get_version_2_detail(self, url, accept=None): if accept is None: accept = "application/json" req = fakes.HTTPRequest.blank(url, base_url='') req.accept = accept res = req.get_response(self.wsgi_app) self.assertEqual(200, res.status_int) self.assertEqual("application/json", res.content_type) version = jsonutils.loads(res.body) expected = { "version": { "id": "v2.0", "status": "SUPPORTED", "version": "", "min_version": "", "updated": "2011-01-21T11:33:21Z", "links": [ { "rel": "self", "href": "http://localhost/v2/", }, { "rel": "describedby", "type": "text/html", "href": EXP_LINKS['v2.0']['html'], }, ], "media-types": [ { "base": "application/json", "type": "application/" "vnd.openstack.compute+json;version=2", }, ], }, } self.assertEqual(expected, version) def test_get_version_2_detail(self): self._test_get_version_2_detail('/v2/') def test_get_version_2_detail_content_type(self): accept = "application/json;version=2" self._test_get_version_2_detail('/', accept=accept) def test_get_version_2_versions_invalid(self): req = fakes.HTTPRequest.blank('/v2/versions/1234/foo') req.accept = "application/json" res 
= req.get_response(self.wsgi_app) self.assertEqual(404, res.status_int) def test_multi_choice_image(self): req = fakes.HTTPRequest.blank('/images/1', base_url='') req.accept = "application/json" res = req.get_response(self.wsgi_app) self.assertEqual(300, res.status_int) self.assertEqual("application/json", res.content_type) expected = { "choices": [ { "id": "v2.0", "status": "SUPPORTED", "links": [ { "href": "http://localhost/v2/images/1", "rel": "self", }, ], "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.compute+json" ";version=2" }, ], }, { "id": "v2.1", "status": "CURRENT", "links": [ { "href": "http://localhost/v2.1/images/1", "rel": "self", }, ], "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.compute+json;version=2.1", } ], }, ], } self.assertThat(jsonutils.loads(res.body), matchers.DictMatches(expected)) def test_multi_choice_server_atom(self): """Make sure multi choice responses do not have content-type application/atom+xml (should use default of json) """ req = fakes.HTTPRequest.blank('/servers', base_url='') req.accept = "application/atom+xml" res = req.get_response(self.wsgi_app) self.assertEqual(300, res.status_int) self.assertEqual("application/json", res.content_type) def test_multi_choice_server(self): uuid = uuids.fake req = fakes.HTTPRequest.blank('/servers/' + uuid, base_url='') req.accept = "application/json" res = req.get_response(self.wsgi_app) self.assertEqual(300, res.status_int) self.assertEqual("application/json", res.content_type) expected = { "choices": [ { "id": "v2.0", "status": "SUPPORTED", "links": [ { "href": "http://localhost/v2/servers/" + uuid, "rel": "self", }, ], "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.compute+json" ";version=2" }, ], }, { "id": "v2.1", "status": "CURRENT", "links": [ { "href": "http://localhost/v2.1/servers/" + uuid, "rel": "self", }, ], "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.compute+json;version=2.1", } ], }, ], } self.assertThat(jsonutils.loads(res.body), matchers.DictMatches(expected)) class VersionsViewBuilderTests(test.NoDBTestCase): def test_view_builder(self): base_url = "http://example.org/" version_data = { "v3.2.1": { "id": "3.2.1", "status": "CURRENT", "version": "2.3", "min_version": "2.1", "updated": "2011-07-18T11:30:00Z", } } expected = { "versions": [ { "id": "3.2.1", "status": "CURRENT", "version": "2.3", "min_version": "2.1", "updated": "2011-07-18T11:30:00Z", "links": [ { "rel": "self", "href": "http://example.org/v2/", }, ], } ] } builder = views.versions.ViewBuilder(base_url) output = builder.build_versions(version_data) self.assertEqual(expected, output) def _test_view_builder_compute_link_prefix(self, href=None): base_url = "http://example.org/v2.1/" if href is None: href = base_url version_data = { "id": "v2.1", "status": "CURRENT", "version": "2.8", "min_version": "2.1", "updated": "2013-07-23T11:33:21Z", "links": [ { "rel": "describedby", "type": "text/html", "href": EXP_LINKS['v2.1']['html'], } ], "media-types": [ { "base": "application/json", "type": ("application/vnd.openstack." 
"compute+json;version=2.1") } ], } expected_data = copy.deepcopy(version_data) expected = {'version': expected_data} expected['version']['links'].insert(0, { "rel": "self", "href": href, }) builder = views.versions.ViewBuilder(base_url) output = builder.build_version(version_data) self.assertEqual(expected, output) def test_view_builder_with_compute_link_prefix(self): self.flags(compute_link_prefix='http://zoo.com:42', group='api') href = "http://zoo.com:42/v2.1/" self._test_view_builder_compute_link_prefix(href) def test_view_builder_without_compute_link_prefix(self): self._test_view_builder_compute_link_prefix() def test_generate_href(self): base_url = "http://example.org/app/" expected = "http://example.org/app/v2/" builder = views.versions.ViewBuilder(base_url) actual = builder.generate_href('v2') self.assertEqual(expected, actual) def test_generate_href_v21(self): base_url = "http://example.org/app/" expected = "http://example.org/app/v2.1/" builder = views.versions.ViewBuilder(base_url) actual = builder.generate_href('v2.1') self.assertEqual(expected, actual) def test_generate_href_unknown(self): base_url = "http://example.org/app/" expected = "http://example.org/app/v2/" builder = views.versions.ViewBuilder(base_url) actual = builder.generate_href('foo') self.assertEqual(expected, actual) def test_generate_href_with_path(self): path = "random/path" base_url = "http://example.org/app/" expected = "http://example.org/app/v2/%s" % path builder = views.versions.ViewBuilder(base_url) actual = builder.generate_href("v2", path) self.assertEqual(actual, expected) def test_generate_href_with_empty_path(self): path = "" base_url = "http://example.org/app/" expected = "http://example.org/app/v2/" builder = views.versions.ViewBuilder(base_url) actual = builder.generate_href("v2", path) self.assertEqual(actual, expected) # NOTE(oomichi): Now version API of v2.0 covers "/"(root). # So this class tests "/v2.1" only for v2.1 API. 
class VersionsTestV21(test.NoDBTestCase): exp_versions = copy.deepcopy(EXP_VERSIONS) exp_versions['v2.0']['links'].insert(0, {'href': 'http://localhost/v2.1/', 'rel': 'self'}, ) @property def wsgi_app(self): return fakes.wsgi_app_v21() def test_get_version_21_detail(self): req = fakes.HTTPRequest.blank('/v2.1/', base_url='') req.accept = "application/json" res = req.get_response(self.wsgi_app) self.assertEqual(200, res.status_int) self.assertEqual("application/json", res.content_type) version = jsonutils.loads(res.body) expected = {"version": self.exp_versions['v2.1']} self.assertEqual(expected, version) def test_get_version_21_versions_v21_detail(self): req = fakes.HTTPRequest.blank( '/v2.1/%s/versions/v2.1' % fakes.FAKE_PROJECT_ID, base_url='') req.accept = "application/json" res = req.get_response(self.wsgi_app) self.assertEqual(200, res.status_int) self.assertEqual("application/json", res.content_type) version = jsonutils.loads(res.body) expected = {"version": self.exp_versions['v2.1']} self.assertEqual(expected, version) def test_get_version_21_versions_v20_detail(self): req = fakes.HTTPRequest.blank( '/v2.1/%s/versions/v2.0' % fakes.FAKE_PROJECT_ID, base_url='') req.accept = "application/json" res = req.get_response(self.wsgi_app) self.assertEqual(200, res.status_int) self.assertEqual("application/json", res.content_type) version = jsonutils.loads(res.body) expected = {"version": self.exp_versions['v2.0']} self.assertEqual(expected, version) def test_get_version_21_versions_invalid(self): req = fakes.HTTPRequest.blank('/v2.1/versions/1234/foo') req.accept = "application/json" res = req.get_response(self.wsgi_app) self.assertEqual(404, res.status_int) def test_get_version_21_detail_content_type(self): req = fakes.HTTPRequest.blank('/', base_url='') req.accept = "application/json;version=2.1" res = req.get_response(self.wsgi_app) self.assertEqual(200, res.status_int) self.assertEqual("application/json", res.content_type) version = jsonutils.loads(res.body) expected = {"version": self.exp_versions['v2.1']} self.assertEqual(expected, version) class VersionBehindSslTestCase(test.NoDBTestCase): def setUp(self): super(VersionBehindSslTestCase, self).setUp() self.flags(secure_proxy_ssl_header='HTTP_X_FORWARDED_PROTO', group='wsgi') @property def wsgi_app(self): return fakes.wsgi_app_v21(v2_compatible=True) def test_versions_without_headers(self): req = wsgi.Request.blank('/') req.accept = "application/json" res = req.get_response(self.wsgi_app) self.assertEqual(200, res.status_int) href = _get_self_href(res) self.assertTrue(href.startswith('http://')) def test_versions_with_header(self): req = wsgi.Request.blank('/') req.accept = "application/json" req.headers['X-Forwarded-Proto'] = 'https' res = req.get_response(self.wsgi_app) self.assertEqual(200, res.status_int) href = _get_self_href(res) self.assertTrue(href.startswith('https://')) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/compute/test_volumes.py0000664000175000017500000023565600000000000025212 0ustar00zuulzuul00000000000000# Copyright 2013 Josh Durgin # Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import fixtures import mock from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils.fixture import uuidsentinel as uuids import six from six.moves import urllib import webob from webob import exc from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute import assisted_volume_snapshots \ as assisted_snaps_v21 from nova.api.openstack.compute import volumes as volumes_v21 from nova.compute import api as compute_api from nova.compute import flavors from nova.compute import task_states from nova.compute import vm_states import nova.conf from nova import context from nova import exception from nova import objects from nova.objects import block_device as block_device_obj from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_block_device from nova.tests.unit import fake_instance from nova.volume import cinder CONF = nova.conf.CONF # This is the server ID. FAKE_UUID = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' # This is the old volume ID (to swap from). FAKE_UUID_A = '00000000-aaaa-aaaa-aaaa-000000000000' # This is the new volume ID (to swap to). FAKE_UUID_B = 'bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb' # This is a volume that is not found. FAKE_UUID_C = 'cccccccc-cccc-cccc-cccc-cccccccccccc' IMAGE_UUID = 'c905cedb-7281-47e4-8a62-f26bc5fc4c77' def fake_get_instance(self, context, instance_id, expected_attrs=None, cell_down_support=False): return fake_instance.fake_instance_obj( context, id=1, uuid=instance_id, project_id=context.project_id) def fake_get_volume(self, context, id): if id == FAKE_UUID_A: status = 'in-use' attach_status = 'attached' elif id == FAKE_UUID_B: status = 'available' attach_status = 'detached' else: raise exception.VolumeNotFound(volume_id=id) return {'id': id, 'status': status, 'attach_status': attach_status} def fake_create_snapshot(self, context, volume, name, description): return {'id': 123, 'volume_id': 'fakeVolId', 'status': 'available', 'volume_size': 123, 'created_at': '2013-01-01 00:00:01', 'display_name': 'myVolumeName', 'display_description': 'myVolumeDescription'} def fake_delete_snapshot(self, context, snapshot_id): pass def fake_compute_volume_snapshot_delete(self, context, volume_id, snapshot_id, delete_info): pass @classmethod def fake_bdm_get_by_volume_and_instance(cls, ctxt, volume_id, instance_uuid): if volume_id != FAKE_UUID_A: raise exception.VolumeBDMNotFound(volume_id=volume_id) db_bdm = fake_block_device.FakeDbBlockDeviceDict( {'id': 1, 'instance_uuid': instance_uuid, 'device_name': '/dev/fake0', 'delete_on_termination': 'False', 'source_type': 'volume', 'destination_type': 'volume', 'snapshot_id': None, 'volume_id': FAKE_UUID_A, 'volume_size': 1}) return objects.BlockDeviceMapping._from_db_object( ctxt, objects.BlockDeviceMapping(), db_bdm) class BootFromVolumeTest(test.TestCase): def setUp(self): super(BootFromVolumeTest, self).setUp() self.stub_out('nova.compute.api.API.create', self._get_fake_compute_api_create()) fakes.stub_out_nw_api(self) self._block_device_mapping_seen = None 
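        # Together with _legacy_bdm_seen below, this attribute records what
        # the stubbed compute API create() call received, so the tests can
        # assert on the block device mapping and the legacy_bdm flag that
        # the API layer passed through.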
self._legacy_bdm_seen = True def _get_fake_compute_api_create(self): def _fake_compute_api_create(cls, context, instance_type, image_href, **kwargs): self._block_device_mapping_seen = kwargs.get( 'block_device_mapping') self._legacy_bdm_seen = kwargs.get('legacy_bdm') inst_type = flavors.get_flavor_by_flavor_id(2) resv_id = None return ([{'id': 1, 'display_name': 'test_server', 'uuid': FAKE_UUID, 'instance_type': inst_type, 'access_ip_v4': '1.2.3.4', 'access_ip_v6': 'fead::1234', 'image_ref': IMAGE_UUID, 'user_id': 'fake', 'project_id': fakes.FAKE_PROJECT_ID, 'created_at': datetime.datetime(2010, 10, 10, 12, 0, 0), 'updated_at': datetime.datetime(2010, 11, 11, 11, 0, 0), 'progress': 0, 'fixed_ips': [] }], resv_id) return _fake_compute_api_create def test_create_root_volume(self): body = dict(server=dict( name='test_server', imageRef=IMAGE_UUID, flavorRef=2, min_count=1, max_count=1, block_device_mapping=[dict( volume_id='ca9fe3f5-cede-43cb-8050-1672acabe348', device_name='/dev/vda', delete_on_termination=False, )] )) req = fakes.HTTPRequest.blank('/v2/%s/os-volumes_boot' % fakes.FAKE_PROJECT_ID) req.method = 'POST' req.body = jsonutils.dump_as_bytes(body) req.headers['content-type'] = 'application/json' res = req.get_response(fakes.wsgi_app_v21()) self.assertEqual(202, res.status_int) server = jsonutils.loads(res.body)['server'] self.assertEqual(FAKE_UUID, server['id']) self.assertEqual(CONF.password_length, len(server['adminPass'])) self.assertEqual(1, len(self._block_device_mapping_seen)) self.assertTrue(self._legacy_bdm_seen) self.assertEqual('ca9fe3f5-cede-43cb-8050-1672acabe348', self._block_device_mapping_seen[0]['volume_id']) self.assertEqual('/dev/vda', self._block_device_mapping_seen[0]['device_name']) def test_create_root_volume_bdm_v2(self): body = dict(server=dict( name='test_server', imageRef=IMAGE_UUID, flavorRef=2, min_count=1, max_count=1, block_device_mapping_v2=[dict( source_type='volume', uuid='1', device_name='/dev/vda', boot_index=0, delete_on_termination=False, )] )) req = fakes.HTTPRequest.blank('/v2/%s/os-volumes_boot' % fakes.FAKE_PROJECT_ID) req.method = 'POST' req.body = jsonutils.dump_as_bytes(body) req.headers['content-type'] = 'application/json' res = req.get_response(fakes.wsgi_app_v21()) self.assertEqual(202, res.status_int) server = jsonutils.loads(res.body)['server'] self.assertEqual(FAKE_UUID, server['id']) self.assertEqual(CONF.password_length, len(server['adminPass'])) self.assertEqual(1, len(self._block_device_mapping_seen)) self.assertFalse(self._legacy_bdm_seen) self.assertEqual('1', self._block_device_mapping_seen[0]['volume_id']) self.assertEqual(0, self._block_device_mapping_seen[0]['boot_index']) self.assertEqual('/dev/vda', self._block_device_mapping_seen[0]['device_name']) class VolumeApiTestV21(test.NoDBTestCase): url_prefix = '/v2/%s' % fakes.FAKE_PROJECT_ID def setUp(self): super(VolumeApiTestV21, self).setUp() fakes.stub_out_networking(self) self.stub_out('nova.volume.cinder.API.delete', lambda self, context, volume_id: None) self.stub_out('nova.volume.cinder.API.get', fakes.stub_volume_get) self.stub_out('nova.volume.cinder.API.get_all', fakes.stub_volume_get_all) self.context = context.get_admin_context() @property def app(self): return fakes.wsgi_app_v21() def test_volume_create(self): self.stub_out('nova.volume.cinder.API.create', fakes.stub_volume_create) vol = {"size": 100, "display_name": "Volume Test Name", "display_description": "Volume Test Desc", "availability_zone": "zone1:host1"} body = {"volume": vol} req = 
fakes.HTTPRequest.blank(self.url_prefix + '/os-volumes') req.method = 'POST' req.body = jsonutils.dump_as_bytes(body) req.headers['content-type'] = 'application/json' resp = req.get_response(self.app) self.assertEqual(200, resp.status_int) resp_dict = jsonutils.loads(resp.body) self.assertIn('volume', resp_dict) self.assertEqual(vol['size'], resp_dict['volume']['size']) self.assertEqual(vol['display_name'], resp_dict['volume']['displayName']) self.assertEqual(vol['display_description'], resp_dict['volume']['displayDescription']) self.assertEqual(vol['availability_zone'], resp_dict['volume']['availabilityZone']) @mock.patch.object(cinder.API, 'create') def _test_volume_translate_exception(self, cinder_exc, api_exc, mock_create): """Tests that cinder exceptions are correctly translated""" mock_create.side_effect = cinder_exc vol = {"size": '10', "display_name": "Volume Test Name", "display_description": "Volume Test Desc", "availability_zone": "zone1:host1"} body = {"volume": vol} req = fakes.HTTPRequest.blank(self.url_prefix + '/os-volumes') self.assertRaises(api_exc, volumes_v21.VolumeController().create, req, body=body) mock_create.assert_called_once_with( req.environ['nova.context'], '10', 'Volume Test Name', 'Volume Test Desc', availability_zone='zone1:host1', metadata=None, snapshot=None, volume_type=None) @mock.patch.object(cinder.API, 'get_snapshot') @mock.patch.object(cinder.API, 'create') def test_volume_create_bad_snapshot_id(self, mock_create, mock_get): vol = {"snapshot_id": '1', "size": 10} body = {"volume": vol} mock_get.side_effect = exception.SnapshotNotFound(snapshot_id='1') req = fakes.HTTPRequest.blank(self.url_prefix + '/os-volumes') self.assertRaises(webob.exc.HTTPNotFound, volumes_v21.VolumeController().create, req, body=body) def test_volume_create_bad_input(self): self._test_volume_translate_exception( exception.InvalidInput(reason='fake'), webob.exc.HTTPBadRequest) def test_volume_create_bad_quota(self): self._test_volume_translate_exception( exception.OverQuota(overs='fake'), webob.exc.HTTPForbidden) def test_volume_index(self): req = fakes.HTTPRequest.blank(self.url_prefix + '/os-volumes') resp = req.get_response(self.app) self.assertEqual(200, resp.status_int) def test_volume_detail(self): req = fakes.HTTPRequest.blank(self.url_prefix + '/os-volumes/detail') resp = req.get_response(self.app) self.assertEqual(200, resp.status_int) def test_volume_show(self): req = fakes.HTTPRequest.blank(self.url_prefix + '/os-volumes/123') resp = req.get_response(self.app) self.assertEqual(200, resp.status_int) @mock.patch.object(cinder.API, 'get', side_effect=exception.VolumeNotFound( volume_id=uuids.volume)) def test_volume_show_no_volume(self, mock_get): req = fakes.HTTPRequest.blank('%s/os-volumes/%s' % (self.url_prefix, uuids.volume)) resp = req.get_response(self.app) self.assertEqual(404, resp.status_int) self.assertIn('Volume %s could not be found.' 
% uuids.volume, encodeutils.safe_decode(resp.body)) mock_get.assert_called_once_with(req.environ['nova.context'], uuids.volume) def test_volume_delete(self): req = fakes.HTTPRequest.blank(self.url_prefix + '/os-volumes/123') req.method = 'DELETE' resp = req.get_response(self.app) self.assertEqual(202, resp.status_int) @mock.patch.object(cinder.API, 'delete', side_effect=exception.VolumeNotFound( volume_id=uuids.volume)) def test_volume_delete_no_volume(self, mock_delete): req = fakes.HTTPRequest.blank('%s/os-volumes/%s' % (self.url_prefix, uuids.volume)) req.method = 'DELETE' resp = req.get_response(self.app) self.assertEqual(404, resp.status_int) self.assertIn('Volume %s could not be found.' % uuids.volume, encodeutils.safe_decode(resp.body)) mock_delete.assert_called_once_with(req.environ['nova.context'], uuids.volume) def _test_list_with_invalid_filter(self, url): prefix = '/os-volumes' req = fakes.HTTPRequest.blank(prefix + url) self.assertRaises(exception.ValidationError, volumes_v21.VolumeController().index, req) def test_list_with_invalid_non_int_limit(self): self._test_list_with_invalid_filter('?limit=-9') def test_list_with_invalid_string_limit(self): self._test_list_with_invalid_filter('?limit=abc') def test_list_duplicate_query_with_invalid_string_limit(self): self._test_list_with_invalid_filter( '?limit=1&limit=abc') def test_detail_list_with_invalid_non_int_limit(self): self._test_list_with_invalid_filter('/detail?limit=-9') def test_detail_list_with_invalid_string_limit(self): self._test_list_with_invalid_filter('/detail?limit=abc') def test_detail_list_duplicate_query_with_invalid_string_limit(self): self._test_list_with_invalid_filter( '/detail?limit=1&limit=abc') def test_list_with_invalid_non_int_offset(self): self._test_list_with_invalid_filter('?offset=-9') def test_list_with_invalid_string_offset(self): self._test_list_with_invalid_filter('?offset=abc') def test_list_duplicate_query_with_invalid_string_offset(self): self._test_list_with_invalid_filter( '?offset=1&offset=abc') def test_detail_list_with_invalid_non_int_offset(self): self._test_list_with_invalid_filter('/detail?offset=-9') def test_detail_list_with_invalid_string_offset(self): self._test_list_with_invalid_filter('/detail?offset=abc') def test_detail_list_duplicate_query_with_invalid_string_offset(self): self._test_list_with_invalid_filter( '/detail?offset=1&offset=abc') def _test_list_duplicate_query_parameters_validation(self, url): params = { 'limit': 1, 'offset': 1 } for param, value in params.items(): req = fakes.HTTPRequest.blank( self.url_prefix + url + '?%s=%s&%s=%s' % (param, value, param, value)) resp = req.get_response(self.app) self.assertEqual(200, resp.status_int) def test_list_duplicate_query_parameters_validation(self): self._test_list_duplicate_query_parameters_validation('/os-volumes') def test_detail_list_duplicate_query_parameters_validation(self): self._test_list_duplicate_query_parameters_validation( '/os-volumes/detail') def test_list_with_additional_filter(self): req = fakes.HTTPRequest.blank(self.url_prefix + '/os-volumes?limit=1&offset=1&additional=something') resp = req.get_response(self.app) self.assertEqual(200, resp.status_int) def test_detail_list_with_additional_filter(self): req = fakes.HTTPRequest.blank(self.url_prefix + '/os-volumes/detail?limit=1&offset=1&additional=something') resp = req.get_response(self.app) self.assertEqual(200, resp.status_int) class VolumeAttachTestsV21(test.NoDBTestCase): validation_error = exception.ValidationError microversion = '2.1' _prefix = 
'/servers/id/os-volume_attachments' def setUp(self): super(VolumeAttachTestsV21, self).setUp() self.stub_out('nova.objects.BlockDeviceMapping' '.get_by_volume_and_instance', fake_bdm_get_by_volume_and_instance) self.stub_out('nova.compute.api.API.get', fake_get_instance) self.stub_out('nova.volume.cinder.API.get', fake_get_volume) self.context = context.get_admin_context() self.expected_show = {'volumeAttachment': {'device': '/dev/fake0', 'serverId': FAKE_UUID, 'id': FAKE_UUID_A, 'volumeId': FAKE_UUID_A }} self.attachments = volumes_v21.VolumeAttachmentController() self.req = self._build_request('/uuid') self.req.body = jsonutils.dump_as_bytes({}) self.req.headers['content-type'] = 'application/json' self.req.environ['nova.context'] = self.context def _build_request(self, url=''): return fakes.HTTPRequest.blank( self._prefix + url, version=self.microversion) def test_show(self): result = self.attachments.show(self.req, FAKE_UUID, FAKE_UUID_A) self.assertEqual(self.expected_show, result) @mock.patch.object(compute_api.API, 'get', side_effect=exception.InstanceNotFound(instance_id=FAKE_UUID)) def test_show_no_instance(self, mock_mr): self.assertRaises(exc.HTTPNotFound, self.attachments.show, self.req, FAKE_UUID, FAKE_UUID_A) @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance', side_effect=exception.VolumeBDMNotFound( volume_id=FAKE_UUID_A)) def test_show_no_bdms(self, mock_mr): self.assertRaises(exc.HTTPNotFound, self.attachments.show, self.req, FAKE_UUID, FAKE_UUID_A) def test_show_bdms_no_mountpoint(self): FAKE_UUID_NOTEXIST = '00000000-aaaa-aaaa-aaaa-aaaaaaaaaaaa' self.assertRaises(exc.HTTPNotFound, self.attachments.show, self.req, FAKE_UUID, FAKE_UUID_NOTEXIST) def test_detach(self): self.stub_out('nova.compute.api.API.detach_volume', lambda self, context, instance, volume: None) inst = fake_instance.fake_instance_obj(self.context, **{'uuid': FAKE_UUID}) with mock.patch.object(common, 'get_instance', return_value=inst) as mock_get_instance: result = self.attachments.delete(self.req, FAKE_UUID, FAKE_UUID_A) # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. 
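# For the v2.1 controller the 202 comes from the controller method
# itself: methods decorated with something like @wsgi.response(202)
# carry the status as a wsgi_code attribute and return no body, which
# is why the check below reads wsgi_code off the bound method for
# VolumeAttachmentController and falls back to result.status_int for
# anything else.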
if isinstance(self.attachments, volumes_v21.VolumeAttachmentController): status_int = self.attachments.delete.wsgi_code else: status_int = result.status_int self.assertEqual(202, status_int) mock_get_instance.assert_called_with( self.attachments.compute_api, self.context, FAKE_UUID, expected_attrs=['device_metadata']) @mock.patch.object(common, 'get_instance') def test_detach_vol_shelved_not_supported(self, mock_get_instance): inst = fake_instance.fake_instance_obj(self.context, **{'uuid': FAKE_UUID}) inst.vm_state = vm_states.SHELVED mock_get_instance.return_value = inst req = fakes.HTTPRequest.blank( '/v2/servers/id/os-volume_attachments/uuid', version='2.19') req.method = 'DELETE' req.headers['content-type'] = 'application/json' req.environ['nova.context'] = self.context self.assertRaises(webob.exc.HTTPConflict, self.attachments.delete, req, FAKE_UUID, FAKE_UUID_A) @mock.patch.object(compute_api.API, 'detach_volume') @mock.patch.object(common, 'get_instance') def test_detach_vol_shelved_supported(self, mock_get_instance, mock_detach): inst = fake_instance.fake_instance_obj(self.context, **{'uuid': FAKE_UUID}) inst.vm_state = vm_states.SHELVED mock_get_instance.return_value = inst req = fakes.HTTPRequest.blank( '/v2/servers/id/os-volume_attachments/uuid', version='2.20') req.method = 'DELETE' req.headers['content-type'] = 'application/json' req.environ['nova.context'] = self.context self.attachments.delete(req, FAKE_UUID, FAKE_UUID_A) self.assertTrue(mock_detach.called) def test_detach_vol_not_found(self): self.stub_out('nova.compute.api.API.detach_volume', lambda self, context, instance, volume: None) self.assertRaises(exc.HTTPNotFound, self.attachments.delete, self.req, FAKE_UUID, FAKE_UUID_C) @mock.patch('nova.objects.BlockDeviceMapping.is_root', new_callable=mock.PropertyMock) def test_detach_vol_root(self, mock_isroot): mock_isroot.return_value = True self.assertRaises(exc.HTTPBadRequest, self.attachments.delete, self.req, FAKE_UUID, FAKE_UUID_A) @mock.patch.object(compute_api.API, 'detach_volume') def test_detach_volume_from_locked_server(self, mock_detach_volume): mock_detach_volume.side_effect = exception.InstanceIsLocked( instance_uuid=FAKE_UUID) self.assertRaises(webob.exc.HTTPConflict, self.attachments.delete, self.req, FAKE_UUID, FAKE_UUID_A) mock_detach_volume.assert_called_once_with( self.req.environ['nova.context'], test.MatchType(objects.Instance), {'attach_status': 'attached', 'status': 'in-use', 'id': FAKE_UUID_A}) def test_attach_volume(self): self.stub_out('nova.compute.api.API.attach_volume', lambda self, context, instance, volume_id, device, tag=None, supports_multiattach=False, delete_on_termination=False: None) body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake'}} result = self.attachments.create(self.req, FAKE_UUID, body=body) self.assertEqual('00000000-aaaa-aaaa-aaaa-000000000000', result['volumeAttachment']['id']) @mock.patch.object(compute_api.API, 'attach_volume', side_effect=exception.VolumeTaggedAttachNotSupported()) def test_tagged_volume_attach_not_supported(self, mock_attach_volume): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake'}} self.assertRaises(webob.exc.HTTPBadRequest, self.attachments.create, self.req, FAKE_UUID, body=body) @mock.patch.object(common, 'get_instance') def test_attach_vol_shelved_not_supported(self, mock_get_instance): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake'}} inst = fake_instance.fake_instance_obj(self.context, **{'uuid': FAKE_UUID}) 
inst.vm_state = vm_states.SHELVED mock_get_instance.return_value = inst self.assertRaises(webob.exc.HTTPConflict, self.attachments.create, self.req, FAKE_UUID, body=body) @mock.patch.object(compute_api.API, 'attach_volume', return_value='/dev/myfake') @mock.patch.object(common, 'get_instance') def test_attach_vol_shelved_supported(self, mock_get_instance, mock_attach): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake'}} inst = fake_instance.fake_instance_obj(self.context, **{'uuid': FAKE_UUID}) inst.vm_state = vm_states.SHELVED mock_get_instance.return_value = inst req = fakes.HTTPRequest.blank('/v2/servers/id/os-volume_attachments', version='2.20') req.method = 'POST' req.body = jsonutils.dump_as_bytes({}) req.headers['content-type'] = 'application/json' req.environ['nova.context'] = self.context result = self.attachments.create(req, FAKE_UUID, body=body) self.assertEqual('00000000-aaaa-aaaa-aaaa-000000000000', result['volumeAttachment']['id']) self.assertEqual('/dev/myfake', result['volumeAttachment']['device']) @mock.patch.object(compute_api.API, 'attach_volume', return_value='/dev/myfake') def test_attach_volume_with_auto_device(self, mock_attach): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': None}} result = self.attachments.create(self.req, FAKE_UUID, body=body) self.assertEqual('00000000-aaaa-aaaa-aaaa-000000000000', result['volumeAttachment']['id']) self.assertEqual('/dev/myfake', result['volumeAttachment']['device']) @mock.patch.object(compute_api.API, 'attach_volume', side_effect=exception.InstanceIsLocked( instance_uuid=uuids.instance)) def test_attach_volume_to_locked_server(self, mock_attach_volume): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake'}} self.assertRaises(webob.exc.HTTPConflict, self.attachments.create, self.req, FAKE_UUID, body=body) supports_multiattach = api_version_request.is_supported( self.req, '2.60') mock_attach_volume.assert_called_once_with( self.req.environ['nova.context'], test.MatchType(objects.Instance), FAKE_UUID_A, '/dev/fake', supports_multiattach=supports_multiattach, delete_on_termination=False, tag=None) def test_attach_volume_bad_id(self): self.stub_out('nova.compute.api.API.attach_volume', lambda self, context, instance, volume_id, device, tag=None, supports_multiattach=False: None) body = { 'volumeAttachment': { 'device': None, 'volumeId': 'TESTVOLUME', } } self.assertRaises(self.validation_error, self.attachments.create, self.req, FAKE_UUID, body=body) @mock.patch.object(compute_api.API, 'attach_volume', side_effect=exception.DevicePathInUse(path='/dev/sda')) def test_attach_volume_device_in_use(self, mock_attach): body = { 'volumeAttachment': { 'device': '/dev/sda', 'volumeId': FAKE_UUID_A, } } self.assertRaises(webob.exc.HTTPConflict, self.attachments.create, self.req, FAKE_UUID, body=body) def test_attach_volume_without_volumeId(self): self.stub_out('nova.compute.api.API.attach_volume', lambda self, context, instance, volume_id, device, tag=None, supports_multiattach=False: None) body = { 'volumeAttachment': { 'device': None } } self.assertRaises(self.validation_error, self.attachments.create, self.req, FAKE_UUID, body=body) def test_attach_volume_with_extra_arg(self): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake', 'extra': 'extra_arg'}} self.assertRaises(self.validation_error, self.attachments.create, self.req, FAKE_UUID, body=body) @mock.patch.object(compute_api.API, 'attach_volume') def test_attach_volume_with_invalid_input(self, 
mock_attach): mock_attach.side_effect = exception.InvalidInput( reason='Invalid volume') body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake'}} req = self._build_request() req.method = 'POST' req.body = jsonutils.dump_as_bytes({}) req.headers['content-type'] = 'application/json' req.environ['nova.context'] = self.context self.assertRaises(exc.HTTPBadRequest, self.attachments.create, req, FAKE_UUID, body=body) def _test_swap(self, attachments, uuid=FAKE_UUID_A, body=None): body = body or {'volumeAttachment': {'volumeId': FAKE_UUID_B}} return attachments.update(self.req, uuids.instance, uuid, body=body) @mock.patch.object(compute_api.API, 'swap_volume', side_effect=exception.InstanceIsLocked( instance_uuid=uuids.instance)) def test_swap_volume_for_locked_server(self, mock_swap_volume): with mock.patch.object(self.attachments, '_update_volume_regular'): self.assertRaises(webob.exc.HTTPConflict, self._test_swap, self.attachments) mock_swap_volume.assert_called_once_with( self.req.environ['nova.context'], test.MatchType(objects.Instance), {'attach_status': 'attached', 'status': 'in-use', 'id': FAKE_UUID_A}, {'attach_status': 'detached', 'status': 'available', 'id': FAKE_UUID_B}) @mock.patch.object(compute_api.API, 'swap_volume') def test_swap_volume(self, mock_swap_volume): result = self._test_swap(self.attachments) # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. if isinstance(self.attachments, volumes_v21.VolumeAttachmentController): status_int = self.attachments.update.wsgi_code else: status_int = result.status_int self.assertEqual(202, status_int) mock_swap_volume.assert_called_once_with( self.req.environ['nova.context'], test.MatchType(objects.Instance), {'attach_status': 'attached', 'status': 'in-use', 'id': FAKE_UUID_A}, {'attach_status': 'detached', 'status': 'available', 'id': FAKE_UUID_B}) def test_swap_volume_with_nonexistent_uri(self): self.assertRaises(exc.HTTPNotFound, self._test_swap, self.attachments, uuid=FAKE_UUID_C) @mock.patch.object(cinder.API, 'get') def test_swap_volume_with_nonexistent_dest_in_body(self, mock_get): mock_get.side_effect = [ None, exception.VolumeNotFound(volume_id=FAKE_UUID_C)] body = {'volumeAttachment': {'volumeId': FAKE_UUID_C}} with mock.patch.object(self.attachments, '_update_volume_regular'): self.assertRaises(exc.HTTPBadRequest, self._test_swap, self.attachments, body=body) mock_get.assert_has_calls([ mock.call(self.req.environ['nova.context'], FAKE_UUID_A), mock.call(self.req.environ['nova.context'], FAKE_UUID_C)]) def test_swap_volume_without_volumeId(self): body = {'volumeAttachment': {'device': '/dev/fake'}} self.assertRaises(self.validation_error, self._test_swap, self.attachments, body=body) def test_swap_volume_with_extra_arg(self): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake'}} self.assertRaises(self.validation_error, self._test_swap, self.attachments, body=body) @mock.patch.object(compute_api.API, 'swap_volume', side_effect=exception.VolumeBDMNotFound( volume_id=FAKE_UUID_B)) @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance', side_effect=exception.VolumeBDMNotFound( volume_id=FAKE_UUID_A)) def test_swap_volume_for_bdm_not_found(self, mock_bdm, mock_swap_volume): self.assertRaises(webob.exc.HTTPNotFound, self._test_swap, self.attachments) if mock_bdm.called: # New path includes regular PUT procedure mock_bdm.assert_called_once_with(self.req.environ['nova.context'], FAKE_UUID_A, uuids.instance) 
mock_swap_volume.assert_not_called() else: # Old path is pure swap-volume mock_bdm.assert_not_called() mock_swap_volume.assert_called_once_with( self.req.environ['nova.context'], test.MatchType(objects.Instance), {'attach_status': 'attached', 'status': 'in-use', 'id': FAKE_UUID_A}, {'attach_status': 'detached', 'status': 'available', 'id': FAKE_UUID_B}) def _test_list_with_invalid_filter(self, url): req = self._build_request(url) self.assertRaises(exception.ValidationError, self.attachments.index, req, FAKE_UUID) def test_list_with_invalid_non_int_limit(self): self._test_list_with_invalid_filter('?limit=-9') def test_list_with_invalid_string_limit(self): self._test_list_with_invalid_filter('?limit=abc') def test_list_duplicate_query_with_invalid_string_limit(self): self._test_list_with_invalid_filter( '?limit=1&limit=abc') def test_list_with_invalid_non_int_offset(self): self._test_list_with_invalid_filter('?offset=-9') def test_list_with_invalid_string_offset(self): self._test_list_with_invalid_filter('?offset=abc') def test_list_duplicate_query_with_invalid_string_offset(self): self._test_list_with_invalid_filter( '?offset=1&offset=abc') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') def test_list_duplicate_query_parameters_validation(self, mock_get): fake_bdms = objects.BlockDeviceMappingList() mock_get.return_value = fake_bdms params = { 'limit': 1, 'offset': 1 } for param, value in params.items(): req = self._build_request('?%s=%s&%s=%s' % (param, value, param, value)) self.attachments.index(req, FAKE_UUID) @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') def test_list_with_additional_filter(self, mock_get): fake_bdms = objects.BlockDeviceMappingList() mock_get.return_value = fake_bdms req = self._build_request( '?limit=1&additional=something') self.attachments.index(req, FAKE_UUID) class VolumeAttachTestsV249(test.NoDBTestCase): validation_error = exception.ValidationError def setUp(self): super(VolumeAttachTestsV249, self).setUp() self.attachments = volumes_v21.VolumeAttachmentController() self.req = fakes.HTTPRequest.blank( '/v2/servers/id/os-volume_attachments/uuid', version='2.49') def test_tagged_volume_attach_invalid_tag_comma(self): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake', 'tag': ','}} self.assertRaises(exception.ValidationError, self.attachments.create, self.req, FAKE_UUID, body=body) def test_tagged_volume_attach_invalid_tag_slash(self): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake', 'tag': '/'}} self.assertRaises(exception.ValidationError, self.attachments.create, self.req, FAKE_UUID, body=body) def test_tagged_volume_attach_invalid_tag_too_long(self): tag = ''.join(map(str, range(10, 41))) body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake', 'tag': tag}} self.assertRaises(exception.ValidationError, self.attachments.create, self.req, FAKE_UUID, body=body) @mock.patch('nova.compute.api.API.attach_volume') @mock.patch('nova.compute.api.API.get', fake_get_instance) def test_tagged_volume_attach_valid_tag(self, _): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake', 'tag': 'foo'}} self.attachments.create(self.req, FAKE_UUID, body=body) class VolumeAttachTestsV260(test.NoDBTestCase): """Negative tests for attaching a multiattach volume with version 2.60.""" def setUp(self): super(VolumeAttachTestsV260, self).setUp() self.controller = volumes_v21.VolumeAttachmentController() get_instance = 
mock.patch('nova.compute.api.API.get') get_instance.side_effect = fake_get_instance get_instance.start() self.addCleanup(get_instance.stop) def _post_attach(self, version=None): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A}} req = fakes.HTTPRequestV21.blank( '/servers/%s/os-volume_attachments' % FAKE_UUID, version=version or '2.60') req.body = jsonutils.dump_as_bytes(body) req.method = 'POST' req.headers['content-type'] = 'application/json' return self.controller.create(req, FAKE_UUID, body=body) def test_attach_with_multiattach_fails_old_microversion(self): """Tests the case that the user tries to attach with a multiattach volume but before using microversion 2.60. """ with mock.patch.object( self.controller.compute_api, 'attach_volume', side_effect= exception.MultiattachNotSupportedOldMicroversion) as attach: ex = self.assertRaises(webob.exc.HTTPBadRequest, self._post_attach, '2.59') create_kwargs = attach.call_args[1] self.assertFalse(create_kwargs['supports_multiattach']) self.assertIn('Multiattach volumes are only supported starting with ' 'compute API version 2.60', six.text_type(ex)) def test_attach_with_multiattach_fails_not_supported_by_driver(self): """Tests the case that the user tries to attach with a multiattach volume but the compute hosting the instance does not support multiattach volumes. This would come from reserve_block_device_name via RPC call to the compute service. """ with mock.patch.object( self.controller.compute_api, 'attach_volume', side_effect= exception.MultiattachNotSupportedByVirtDriver( volume_id=FAKE_UUID_A)) as attach: ex = self.assertRaises(webob.exc.HTTPConflict, self._post_attach) create_kwargs = attach.call_args[1] self.assertTrue(create_kwargs['supports_multiattach']) self.assertIn("has 'multiattach' set, which is not supported for " "this instance", six.text_type(ex)) def test_attach_with_multiattach_fails_for_shelved_offloaded_server(self): """Tests the case that the user tries to attach with a multiattach volume to a shelved offloaded server which is not supported. 
""" with mock.patch.object( self.controller.compute_api, 'attach_volume', side_effect= exception.MultiattachToShelvedNotSupported) as attach: ex = self.assertRaises(webob.exc.HTTPBadRequest, self._post_attach) create_kwargs = attach.call_args[1] self.assertTrue(create_kwargs['supports_multiattach']) self.assertIn('Attaching multiattach volumes is not supported for ' 'shelved-offloaded instances.', six.text_type(ex)) class VolumeAttachTestsV2_75(VolumeAttachTestsV21): microversion = '2.75' def setUp(self): super(VolumeAttachTestsV2_75, self).setUp() self.expected_show = {'volumeAttachment': {'device': '/dev/fake0', 'serverId': FAKE_UUID, 'id': FAKE_UUID_A, 'volumeId': FAKE_UUID_A, 'tag': None, }} @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') def test_list_with_additional_filter_old_version(self, mock_get): fake_bdms = objects.BlockDeviceMappingList() mock_get.return_value = fake_bdms req = fakes.HTTPRequest.blank( '/os-volumes?limit=1&offset=1&additional=something', version='2.74') self.attachments.index(req, FAKE_UUID) def test_list_with_additional_filter(self): req = self._build_request( '?limit=1&additional=something') self.assertRaises(self.validation_error, self.attachments.index, req, FAKE_UUID) class VolumeAttachTestsV279(VolumeAttachTestsV2_75): microversion = '2.79' def setUp(self): super(VolumeAttachTestsV279, self).setUp() self.controller = volumes_v21.VolumeAttachmentController() self.expected_show = {'volumeAttachment': {'device': '/dev/fake0', 'serverId': FAKE_UUID, 'id': FAKE_UUID_A, 'volumeId': FAKE_UUID_A, 'tag': None, 'delete_on_termination': False }} def _get_req(self, body, microversion=None): req = fakes.HTTPRequest.blank( '/v2/servers/id/os-volume_attachments/uuid', version=microversion or self.microversion) req.body = jsonutils.dump_as_bytes(body) req.method = 'POST' req.headers['content-type'] = 'application/json' return req def test_create_volume_attach_pre_v279(self): """Tests the case that the user tries to attach a volume with delete_on_termination field, but before using microversion 2.79. """ body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'delete_on_termination': False}} req = self._get_req(body, microversion='2.78') ex = self.assertRaises(exception.ValidationError, self.controller.create, req, FAKE_UUID, body=body) self.assertIn("Additional properties are not allowed", six.text_type(ex)) @mock.patch('nova.compute.api.API.attach_volume', return_value=None) def test_attach_volume_pre_v279(self, mock_attach_volume): """Before microversion 2.79, attach a volume will not contain 'delete_on_termination' field in the response. """ body = {'volumeAttachment': {'volumeId': FAKE_UUID_A}} req = self._get_req(body, microversion='2.78') result = self.attachments.create(req, FAKE_UUID, body=body) self.assertNotIn('delete_on_termination', result['volumeAttachment']) mock_attach_volume.assert_called_once_with( req.environ['nova.context'], test.MatchType(objects.Instance), FAKE_UUID_A, None, tag=None, supports_multiattach=True, delete_on_termination=False) @mock.patch('nova.compute.api.API.attach_volume', return_value=None) def test_attach_volume_with_delete_on_termination_default_value( self, mock_attach_volume): """Test attach a volume doesn't specify 'delete_on_termination' in the request, you will be get it's default value in the response. The delete_on_termination's default value is 'False'. 
""" body = {'volumeAttachment': {'volumeId': FAKE_UUID_A}} req = self._get_req(body) result = self.attachments.create(req, FAKE_UUID, body=body) self.assertFalse(result['volumeAttachment']['delete_on_termination']) mock_attach_volume.assert_called_once_with( req.environ['nova.context'], test.MatchType(objects.Instance), FAKE_UUID_A, None, tag=None, supports_multiattach=True, delete_on_termination=False) def test_create_volume_attach_invalid_delete_on_termination_empty(self): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'delete_on_termination': None}} req = self._get_req(body) ex = self.assertRaises(exception.ValidationError, self.controller.create, req, FAKE_UUID, body=body) self.assertIn("Invalid input for field/attribute " "delete_on_termination.", six.text_type(ex)) def test_create_volume_attach_invalid_delete_on_termination_value(self): """"Test the case that the user tries to set the delete_on_termination value not in the boolean or string-boolean check, the valid boolean value are: [True, 'True', 'TRUE', 'true', '1', 'ON', 'On', 'on', 'YES', 'Yes', 'yes', False, 'False', 'FALSE', 'false', '0', 'OFF', 'Off', 'off', 'NO', 'No', 'no'] """ body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'delete_on_termination': 'foo'}} req = self._get_req(body) ex = self.assertRaises(exception.ValidationError, self.controller.create, req, FAKE_UUID, body=body) self.assertIn("Invalid input for field/attribute " "delete_on_termination.", six.text_type(ex)) @mock.patch('nova.compute.api.API.attach_volume', return_value=None) def test_attach_volume_v279(self, mock_attach_volume): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'delete_on_termination': True}} req = self._get_req(body) result = self.attachments.create(req, FAKE_UUID, body=body) self.assertTrue(result['volumeAttachment']['delete_on_termination']) mock_attach_volume.assert_called_once_with( req.environ['nova.context'], test.MatchType(objects.Instance), FAKE_UUID_A, None, tag=None, supports_multiattach=True, delete_on_termination=True) def test_show_pre_v279(self): """Before microversion 2.79, show a detail of a volume attachment does not contain the 'delete_on_termination' field in the response body. """ req = self._get_req(body={}, microversion='2.78') req.method = 'GET' result = self.attachments.show(req, FAKE_UUID, FAKE_UUID_A) self.assertNotIn('delete_on_termination', result['volumeAttachment']) @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') def test_list_pre_v279(self, mock_get_bdms): """Before microversion 2.79, list of a volume attachment does not contain the 'delete_on_termination' field in the response body. 
""" req = fakes.HTTPRequest.blank( '/v2/servers/id/os-volume_attachments', version="2.78") req.body = jsonutils.dump_as_bytes({}) req.method = 'GET' req.headers['content-type'] = 'application/json' vol_bdm = objects.BlockDeviceMapping( self.context, id=1, instance_uuid=FAKE_UUID, volume_id=FAKE_UUID_A, source_type='volume', destination_type='volume', delete_on_termination=True, connection_info=None, tag='fake-tag', device_name='/dev/fake0', attachment_id=uuids.attachment_id) bdms = objects.BlockDeviceMappingList(objects=[vol_bdm]) mock_get_bdms.return_value = bdms result = self.attachments.index(req, FAKE_UUID) self.assertNotIn('delete_on_termination', result['volumeAttachments']) class UpdateVolumeAttachTests(VolumeAttachTestsV279): microversion = '2.85' @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') @mock.patch.object(block_device_obj.BlockDeviceMapping, 'save') def test_swap_volume(self, mock_save_bdm, mock_get_bdm): vol_bdm = objects.BlockDeviceMapping( self.context, id=1, instance_uuid=FAKE_UUID, volume_id=FAKE_UUID_A, source_type='volume', destination_type='volume', delete_on_termination=False, connection_info=None, tag='fake-tag', device_name='/dev/fake0', attachment_id=uuids.attachment_id) mock_get_bdm.return_value = vol_bdm # On the newer microversion, this test will try to look up the # BDM to check for update of other fields. super(UpdateVolumeAttachTests, self).test_swap_volume() def test_swap_volume_with_extra_arg(self): # NOTE(danms): Override this from parent because now device # is checked for unchanged-ness. body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake0', 'notathing': 'foo'}} self.assertRaises(self.validation_error, self._test_swap, self.attachments, body=body) @mock.patch.object(compute_api.API, 'swap_volume') @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') @mock.patch.object(block_device_obj.BlockDeviceMapping, 'save') def test_update_volume(self, mock_bdm_save, mock_get_vol_and_inst, mock_swap): vol_bdm = objects.BlockDeviceMapping( self.context, id=1, instance_uuid=FAKE_UUID, volume_id=FAKE_UUID_A, source_type='volume', destination_type='volume', delete_on_termination=False, connection_info=None, tag='fake-tag', device_name='/dev/fake0', attachment_id=uuids.attachment_id) mock_get_vol_and_inst.return_value = vol_bdm body = {'volumeAttachment': { 'volumeId': FAKE_UUID_A, 'tag': 'fake-tag', 'delete_on_termination': True, 'device': '/dev/fake0', }} self.attachments.update(self.req, FAKE_UUID, FAKE_UUID_A, body=body) mock_swap.assert_not_called() mock_bdm_save.assert_called_once() self.assertTrue(vol_bdm['delete_on_termination']) @mock.patch.object(compute_api.API, 'swap_volume') @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') @mock.patch.object(block_device_obj.BlockDeviceMapping, 'save') def test_update_volume_with_bool_from_string( self, mock_bdm_save, mock_get_vol_and_inst, mock_swap): vol_bdm = objects.BlockDeviceMapping( self.context, id=1, instance_uuid=FAKE_UUID, volume_id=FAKE_UUID_A, source_type='volume', destination_type='volume', delete_on_termination=True, connection_info=None, tag='fake-tag', device_name='/dev/fake0', attachment_id=uuids.attachment_id) mock_get_vol_and_inst.return_value = vol_bdm body = {'volumeAttachment': { 'volumeId': FAKE_UUID_A, 'tag': 'fake-tag', 'delete_on_termination': 'False', 'device': '/dev/fake0', }} self.attachments.update(self.req, FAKE_UUID, FAKE_UUID_A, body=body) mock_swap.assert_not_called() 
mock_bdm_save.assert_called_once() self.assertFalse(vol_bdm['delete_on_termination']) # Update delete_on_termination to False body['volumeAttachment']['delete_on_termination'] = '0' self.attachments.update(self.req, FAKE_UUID, FAKE_UUID_A, body=body) mock_swap.assert_not_called() mock_bdm_save.assert_called() self.assertFalse(vol_bdm['delete_on_termination']) # Update delete_on_termination to True body['volumeAttachment']['delete_on_termination'] = '1' self.attachments.update(self.req, FAKE_UUID, FAKE_UUID_A, body=body) mock_swap.assert_not_called() mock_bdm_save.assert_called() self.assertTrue(vol_bdm['delete_on_termination']) @mock.patch.object(compute_api.API, 'swap_volume') @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') @mock.patch.object(block_device_obj.BlockDeviceMapping, 'save') def test_update_volume_swap(self, mock_bdm_save, mock_get_vol_and_inst, mock_swap): vol_bdm = objects.BlockDeviceMapping( self.context, id=1, instance_uuid=FAKE_UUID, volume_id=FAKE_UUID_A, source_type='volume', destination_type='volume', delete_on_termination=False, connection_info=None, tag='fake-tag', device_name='/dev/fake0', attachment_id=uuids.attachment_id) mock_get_vol_and_inst.return_value = vol_bdm body = {'volumeAttachment': { 'volumeId': FAKE_UUID_B, 'tag': 'fake-tag', 'delete_on_termination': True, }} self.attachments.update(self.req, FAKE_UUID, FAKE_UUID_A, body=body) mock_bdm_save.assert_called_once() self.assertTrue(vol_bdm['delete_on_termination']) # Swap volume is tested elsewhere, just make sure that we did # attempt to call it in addition to updating the BDM self.assertTrue(mock_swap.called) @mock.patch.object(compute_api.API, 'swap_volume') @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') @mock.patch.object(block_device_obj.BlockDeviceMapping, 'save') def test_update_volume_swap_only_old_microversion( self, mock_bdm_save, mock_get_vol_and_inst, mock_swap): vol_bdm = objects.BlockDeviceMapping( self.context, id=1, instance_uuid=FAKE_UUID, volume_id=FAKE_UUID_A, source_type='volume', destination_type='volume', delete_on_termination=False, connection_info=None, tag='fake-tag', device_name='/dev/fake0', attachment_id=uuids.attachment_id) mock_get_vol_and_inst.return_value = vol_bdm body = {'volumeAttachment': { 'volumeId': FAKE_UUID_B, }} req = self._get_req(body, microversion='2.84') self.attachments.update(req, FAKE_UUID, FAKE_UUID_A, body=body) mock_swap.assert_called_once() mock_bdm_save.assert_not_called() @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance', side_effect=exception.VolumeBDMNotFound( volume_id=FAKE_UUID_A)) def test_update_volume_with_invalid_volume_id(self, mock_mr): body = {'volumeAttachment': { 'volumeId': FAKE_UUID_A, 'delete_on_termination': True, }} self.assertRaises(exc.HTTPNotFound, self.attachments.update, self.req, FAKE_UUID, FAKE_UUID_A, body=body) @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') def test_update_volume_with_changed_attachment_id(self, mock_get_vol_and_inst): vol_bdm = objects.BlockDeviceMapping( self.context, id=1, instance_uuid=FAKE_UUID, volume_id=FAKE_UUID_A, source_type='volume', destination_type='volume', delete_on_termination=False, connection_info=None, tag='fake-tag', device_name='/dev/fake0', attachment_id=uuids.attachment_id) mock_get_vol_and_inst.return_value = vol_bdm body = {'volumeAttachment': { 'volumeId': FAKE_UUID_A, 'id': uuids.attachment_id2, }} self.assertRaises(exc.HTTPBadRequest, self.attachments.update, self.req, 
FAKE_UUID, FAKE_UUID_A, body=body) @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') def test_update_volume_with_changed_attachment_id_old_microversion( self, mock_get_vol_and_inst): body = {'volumeAttachment': { 'volumeId': FAKE_UUID_A, 'id': uuids.attachment_id, }} req = self._get_req(body, microversion='2.84') ex = self.assertRaises(exception.ValidationError, self.attachments.update, req, FAKE_UUID, FAKE_UUID_A, body=body) self.assertIn('Additional properties are not allowed', six.text_type(ex)) @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') def test_update_volume_with_changed_serverId(self, mock_get_vol_and_inst): vol_bdm = objects.BlockDeviceMapping( self.context, id=1, instance_uuid=FAKE_UUID, volume_id=FAKE_UUID_A, source_type='volume', destination_type='volume', delete_on_termination=False, connection_info=None, tag='fake-tag', device_name='/dev/fake0', attachment_id=uuids.attachment_id) mock_get_vol_and_inst.return_value = vol_bdm body = {'volumeAttachment': { 'volumeId': FAKE_UUID_A, 'serverId': uuids.server_id, }} self.assertRaises(exc.HTTPBadRequest, self.attachments.update, self.req, FAKE_UUID, FAKE_UUID_A, body=body) @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') def test_update_volume_with_changed_serverId_old_microversion( self, mock_get_vol_and_inst): body = {'volumeAttachment': { 'volumeId': FAKE_UUID_A, 'serverId': uuids.server_id, }} req = self._get_req(body, microversion='2.84') ex = self.assertRaises(exception.ValidationError, self.attachments.update, req, FAKE_UUID, FAKE_UUID_A, body=body) self.assertIn('Additional properties are not allowed', six.text_type(ex)) @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') def test_update_volume_with_changed_device(self, mock_get_vol_and_inst): vol_bdm = objects.BlockDeviceMapping( self.context, id=1, instance_uuid=FAKE_UUID, volume_id=FAKE_UUID_A, source_type='volume', destination_type='volume', delete_on_termination=False, connection_info=None, tag='fake-tag', device_name='/dev/fake0', attachment_id=uuids.attachment_id) mock_get_vol_and_inst.return_value = vol_bdm body = {'volumeAttachment': { 'volumeId': FAKE_UUID_A, 'device': '/dev/sdz', }} self.assertRaises(exc.HTTPBadRequest, self.attachments.update, self.req, FAKE_UUID, FAKE_UUID_A, body=body) def test_update_volume_with_device_name_old_microversion(self): body = {'volumeAttachment': { 'volumeId': FAKE_UUID_A, 'device': '/dev/fake0', }} req = self._get_req(body, microversion='2.84') ex = self.assertRaises(exception.ValidationError, self.attachments.update, req, FAKE_UUID, FAKE_UUID_A, body=body) self.assertIn('Additional properties are not allowed', six.text_type(ex)) @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') def test_update_volume_with_changed_tag(self, mock_get_vol_and_inst): vol_bdm = objects.BlockDeviceMapping( self.context, id=1, instance_uuid=FAKE_UUID, volume_id=FAKE_UUID_A, source_type='volume', destination_type='volume', delete_on_termination=False, connection_info=None, tag='fake-tag', device_name='/dev/fake0', attachment_id=uuids.attachment_id) mock_get_vol_and_inst.return_value = vol_bdm body = {'volumeAttachment': { 'volumeId': FAKE_UUID_A, 'tag': 'icanhaznewtag', }} self.assertRaises(exc.HTTPBadRequest, self.attachments.update, self.req, FAKE_UUID, FAKE_UUID_A, body=body) def test_update_volume_with_tag_old_microversion(self): body = {'volumeAttachment': { 'volumeId': FAKE_UUID_A, 'tag': 'fake-tag', }} req = 
self._get_req(body, microversion='2.84') ex = self.assertRaises(exception.ValidationError, self.attachments.update, req, FAKE_UUID, FAKE_UUID_A, body=body) self.assertIn('Additional properties are not allowed', six.text_type(ex)) def test_update_volume_with_delete_flag_old_microversion(self): body = {'volumeAttachment': { 'volumeId': FAKE_UUID_A, 'delete_on_termination': True, }} req = self._get_req(body, microversion='2.84') ex = self.assertRaises(exception.ValidationError, self.attachments.update, req, FAKE_UUID, FAKE_UUID_A, body=body) self.assertIn('Additional properties are not allowed', six.text_type(ex)) class SwapVolumeMultiattachTestCase(test.NoDBTestCase): @mock.patch('nova.api.openstack.common.get_instance') @mock.patch('nova.volume.cinder.API.begin_detaching') @mock.patch('nova.volume.cinder.API.roll_detaching') def test_swap_multiattach_multiple_readonly_attachments_fails( self, mock_roll_detaching, mock_begin_detaching, mock_get_instance): """Tests that trying to swap from a multiattach volume with multiple read/write attachments will return an error. """ def fake_volume_get(_context, volume_id): if volume_id == uuids.old_vol_id: return { 'id': volume_id, 'size': 1, 'multiattach': True, 'attachments': { uuids.server1: { 'attachment_id': uuids.attachment_id1, 'mountpoint': '/dev/vdb' }, uuids.server2: { 'attachment_id': uuids.attachment_id2, 'mountpoint': '/dev/vdb' } } } if volume_id == uuids.new_vol_id: return { 'id': volume_id, 'size': 1, 'attach_status': 'detached' } raise exception.VolumeNotFound(volume_id=volume_id) def fake_attachment_get(_context, attachment_id): return {'attach_mode': 'rw'} ctxt = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctxt, uuid=uuids.server1, vm_state=vm_states.ACTIVE, task_state=None, launched_at=datetime.datetime(2018, 6, 6)) mock_get_instance.return_value = instance controller = volumes_v21.VolumeAttachmentController() with test.nested( mock.patch.object(controller.volume_api, 'get', side_effect=fake_volume_get), mock.patch.object(controller.compute_api.volume_api, 'attachment_get', side_effect=fake_attachment_get)) as ( mock_volume_get, mock_attachment_get ): req = fakes.HTTPRequest.blank( '/servers/%s/os-volume_attachments/%s' % (uuids.server1, uuids.old_vol_id)) req.headers['content-type'] = 'application/json' req.environ['nova.context'] = ctxt body = { 'volumeAttachment': { 'volumeId': uuids.new_vol_id } } ex = self.assertRaises( webob.exc.HTTPBadRequest, controller.update, req, uuids.server1, uuids.old_vol_id, body=body) self.assertIn('Swapping multi-attach volumes with more than one ', six.text_type(ex)) mock_attachment_get.assert_has_calls([ mock.call(ctxt, uuids.attachment_id1), mock.call(ctxt, uuids.attachment_id2)], any_order=True) mock_roll_detaching.assert_called_once_with(ctxt, uuids.old_vol_id) class CommonBadRequestTestCase(object): resource = None entity_name = None controller_cls = None kwargs = {} bad_request = exc.HTTPBadRequest """ Tests of places we throw 400 Bad Request from """ def setUp(self): super(CommonBadRequestTestCase, self).setUp() self.controller = self.controller_cls() def _bad_request_create(self, body): req = fakes.HTTPRequest.blank('/v2/%s/%s' % ( fakes.FAKE_PROJECT_ID, self.resource)) req.method = 'POST' kwargs = self.kwargs.copy() kwargs['body'] = body self.assertRaises(self.bad_request, self.controller.create, req, **kwargs) def test_create_no_body(self): self._bad_request_create(body=None) def test_create_missing_volume(self): body = {'foo': {'a': 'b'}} 
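# The body names a key other than the expected entity
# (self.entity_name, i.e. 'volume' or 'snapshot' in the subclasses
# below), so create() is expected to reject it with the subclass's
# bad_request exception.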
self._bad_request_create(body=body) def test_create_malformed_entity(self): body = {self.entity_name: 'string'} self._bad_request_create(body=body) class BadRequestVolumeTestCaseV21(CommonBadRequestTestCase, test.NoDBTestCase): resource = 'os-volumes' entity_name = 'volume' controller_cls = volumes_v21.VolumeController bad_request = exception.ValidationError @mock.patch.object(cinder.API, 'delete', side_effect=exception.InvalidInput(reason='vol attach')) def test_delete_invalid_status_volume(self, mock_delete): req = fakes.HTTPRequest.blank('/v2.1/os-volumes') req.method = 'DELETE' self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, req, FAKE_UUID) class BadRequestSnapshotTestCaseV21(CommonBadRequestTestCase, test.NoDBTestCase): resource = 'os-snapshots' entity_name = 'snapshot' controller_cls = volumes_v21.SnapshotController bad_request = exception.ValidationError class AssistedSnapshotCreateTestCaseV21(test.NoDBTestCase): assisted_snaps = assisted_snaps_v21 bad_request = exception.ValidationError def setUp(self): super(AssistedSnapshotCreateTestCaseV21, self).setUp() self.controller = \ self.assisted_snaps.AssistedVolumeSnapshotsController() self.url = ('/v2/%s/os-assisted-volume-snapshots' % fakes.FAKE_PROJECT_ID) @mock.patch.object(compute_api.API, 'volume_snapshot_create') def test_assisted_create(self, mock_volume_snapshot_create): req = fakes.HTTPRequest.blank(self.url) expected_create_info = {'type': 'qcow2', 'new_file': 'new_file', 'snapshot_id': 'snapshot_id'} body = {'snapshot': {'volume_id': uuids.volume_to_snapshot, 'create_info': expected_create_info}} req.method = 'POST' self.controller.create(req, body=body) mock_volume_snapshot_create.assert_called_once_with( req.environ['nova.context'], uuids.volume_to_snapshot, expected_create_info) def test_assisted_create_missing_create_info(self): req = fakes.HTTPRequest.blank(self.url) body = {'snapshot': {'volume_id': '1'}} req.method = 'POST' self.assertRaises(self.bad_request, self.controller.create, req, body=body) def test_assisted_create_with_unexpected_attr(self): req = fakes.HTTPRequest.blank(self.url) body = { 'snapshot': { 'volume_id': '1', 'create_info': { 'type': 'qcow2', 'new_file': 'new_file', 'snapshot_id': 'snapshot_id' } }, 'unexpected': 0, } req.method = 'POST' self.assertRaises(self.bad_request, self.controller.create, req, body=body) @mock.patch('nova.objects.BlockDeviceMapping.get_by_volume', side_effect=exception.VolumeBDMIsMultiAttach(volume_id='1')) def test_assisted_create_multiattach_fails(self, bdm_get_by_volume): req = fakes.HTTPRequest.blank(self.url) body = {'snapshot': {'volume_id': '1', 'create_info': {'type': 'qcow2', 'new_file': 'new_file', 'snapshot_id': 'snapshot_id'}}} req.method = 'POST' self.assertRaises( webob.exc.HTTPBadRequest, self.controller.create, req, body=body) def _test_assisted_create_instance_conflict(self, api_error): req = fakes.HTTPRequest.blank(self.url) body = {'snapshot': {'volume_id': '1', 'create_info': {'type': 'qcow2', 'new_file': 'new_file', 'snapshot_id': 'snapshot_id'}}} req.method = 'POST' with mock.patch.object(compute_api.API, 'volume_snapshot_create', side_effect=api_error): self.assertRaises( webob.exc.HTTPBadRequest, self.controller.create, req, body=body) def test_assisted_create_instance_invalid_state(self): api_error = exception.InstanceInvalidState( instance_uuid=FAKE_UUID, attr='task_state', state=task_states.SHELVING_OFFLOADING, method='volume_snapshot_create') self._test_assisted_create_instance_conflict(api_error) def 
test_assisted_create_instance_not_ready(self): api_error = exception.InstanceNotReady(instance_id=FAKE_UUID) self._test_assisted_create_instance_conflict(api_error) class AssistedSnapshotDeleteTestCaseV21(test.NoDBTestCase): assisted_snaps = assisted_snaps_v21 microversion = '2.1' def _check_status(self, expected_status, res, controller_method): self.assertEqual(expected_status, controller_method.wsgi_code) def setUp(self): super(AssistedSnapshotDeleteTestCaseV21, self).setUp() self.controller = \ self.assisted_snaps.AssistedVolumeSnapshotsController() self.mock_volume_snapshot_delete = self.useFixture( fixtures.MockPatchObject(compute_api.API, 'volume_snapshot_delete')).mock self.url = ('/v2/%s/os-assisted-volume-snapshots' % fakes.FAKE_PROJECT_ID) def test_assisted_delete(self): params = { 'delete_info': jsonutils.dumps({'volume_id': '1'}), } req = fakes.HTTPRequest.blank( self.url + '?%s' % urllib.parse.urlencode(params), version=self.microversion) req.method = 'DELETE' result = self.controller.delete(req, '5') self._check_status(204, result, self.controller.delete) def test_assisted_delete_missing_delete_info(self): req = fakes.HTTPRequest.blank(self.url, version=self.microversion) req.method = 'DELETE' self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, req, '5') def _test_assisted_delete_instance_conflict(self, api_error): # unset the stub on volume_snapshot_delete from setUp self.mock_volume_snapshot_delete.stop() params = { 'delete_info': jsonutils.dumps({'volume_id': '1'}), } req = fakes.HTTPRequest.blank( self.url + '?%s' % urllib.parse.urlencode(params), version=self.microversion) req.method = 'DELETE' with mock.patch.object(compute_api.API, 'volume_snapshot_delete', side_effect=api_error): self.assertRaises( webob.exc.HTTPBadRequest, self.controller.delete, req, '5') def test_assisted_delete_instance_invalid_state(self): api_error = exception.InstanceInvalidState( instance_uuid=FAKE_UUID, attr='task_state', state=task_states.UNSHELVING, method='volume_snapshot_delete') self._test_assisted_delete_instance_conflict(api_error) def test_assisted_delete_instance_not_ready(self): api_error = exception.InstanceNotReady(instance_id=FAKE_UUID) self._test_assisted_delete_instance_conflict(api_error) def test_delete_additional_query_parameters(self): params = { 'delete_info': jsonutils.dumps({'volume_id': '1'}), 'additional': 123 } req = fakes.HTTPRequest.blank( self.url + '?%s' % urllib.parse.urlencode(params), version=self.microversion) req.method = 'DELETE' self.controller.delete(req, '5') def test_delete_duplicate_query_parameters_validation(self): params = [ ('delete_info', jsonutils.dumps({'volume_id': '1'})), ('delete_info', jsonutils.dumps({'volume_id': '2'})) ] req = fakes.HTTPRequest.blank( self.url + '?%s' % urllib.parse.urlencode(params), version=self.microversion) req.method = 'DELETE' self.controller.delete(req, '5') def test_assisted_delete_missing_volume_id(self): params = { 'delete_info': jsonutils.dumps({'something_else': '1'}), } req = fakes.HTTPRequest.blank( self.url + '?%s' % urllib.parse.urlencode(params), version=self.microversion) req.method = 'DELETE' ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, req, '5') # This is the result of a KeyError but the only thing in the message # is the missing key. 
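# i.e. delete_info was valid JSON but lacked the required 'volume_id'
# key, so the resulting 400 message is only expected to name that key.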
self.assertIn('volume_id', six.text_type(ex)) class AssistedSnapshotDeleteTestCaseV275(AssistedSnapshotDeleteTestCaseV21): assisted_snaps = assisted_snaps_v21 microversion = '2.75' def test_delete_additional_query_parameters_old_version(self): params = { 'delete_info': jsonutils.dumps({'volume_id': '1'}), 'additional': 123 } req = fakes.HTTPRequest.blank( self.url + '?%s' % urllib.parse.urlencode(params), version='2.74') self.controller.delete(req, 1) def test_delete_additional_query_parameters(self): req = fakes.HTTPRequest.blank( self.url + '?unknown=1', version=self.microversion) self.assertRaises(exception.ValidationError, self.controller.delete, req, 1) class TestVolumesAPIDeprecation(test.NoDBTestCase): def setUp(self): super(TestVolumesAPIDeprecation, self).setUp() self.controller = volumes_v21.VolumeController() self.req = fakes.HTTPRequest.blank('', version='2.36') def test_all_apis_return_not_found(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.show, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.delete, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.index, self.req) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.create, self.req, {}) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.detail, self.req) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/fakes.py0000664000175000017500000006303100000000000022060 0ustar00zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
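# This module collects the fake WSGI applications, request/context
# helpers and stub_out_* helpers shared by the API unit tests. A
# typical test drives a controller end-to-end roughly like the volume
# tests above do:
#
#     req = fakes.HTTPRequest.blank(
#         '/v2/%s/os-volumes' % fakes.FAKE_PROJECT_ID)
#     req.method = 'POST'
#     req.body = jsonutils.dump_as_bytes(body)
#     res = req.get_response(fakes.wsgi_app_v21())
#
# i.e. requests are built with a fake context and routed through the
# v2.1 API router, with nova internals (db, network, cinder, compute
# API) stubbed out via the helpers defined below.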
import datetime from oslo_serialization import jsonutils from oslo_utils import timeutils from oslo_utils import uuidutils import routes import six from six.moves import range import webob.dec from nova.api import auth as api_auth from nova.api import openstack as openstack_api from nova.api.openstack import api_version_request as api_version from nova.api.openstack import compute from nova.api.openstack.compute import versions from nova.api.openstack import urlmap from nova.api.openstack import wsgi as os_wsgi from nova.api import wsgi from nova.compute import flavors from nova.compute import vm_states import nova.conf from nova import context from nova.db.sqlalchemy import models from nova import exception as exc from nova import objects from nova.objects import base from nova import quota from nova.tests.unit import fake_block_device from nova.tests.unit.objects import test_keypair from nova import utils CONF = nova.conf.CONF QUOTAS = quota.QUOTAS FAKE_UUID = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' FAKE_PROJECT_ID = '6a6a9c9eee154e9cb8cec487b98d36ab' FAKE_USER_ID = '5fae60f5cf4642609ddd31f71748beac' FAKE_UUIDS = {} @webob.dec.wsgify def fake_wsgi(self, req): return self.application def wsgi_app_v21(fake_auth_context=None, v2_compatible=False, custom_routes=None): # NOTE(efried): Keep this (roughly) in sync with api-paste.ini def wrap(app, use_context=False): if v2_compatible: app = openstack_api.LegacyV2CompatibleWrapper(app) if use_context: if fake_auth_context is not None: ctxt = fake_auth_context else: ctxt = context.RequestContext( 'fake', FAKE_PROJECT_ID, auth_token=True) app = api_auth.InjectContext(ctxt, app) app = openstack_api.FaultWrapper(app) return app inner_app_v21 = compute.APIRouterV21(custom_routes=custom_routes) mapper = urlmap.URLMap() mapper['/'] = wrap(versions.Versions()) mapper['/v2'] = wrap(versions.VersionsV2()) mapper['/v2.1'] = wrap(versions.VersionsV2()) mapper['/v2/+'] = wrap(inner_app_v21, use_context=True) mapper['/v2.1/+'] = wrap(inner_app_v21, use_context=True) return mapper def stub_out_key_pair_funcs(testcase, have_key_pair=True, **kwargs): def key_pair(context, user_id): return [dict(test_keypair.fake_keypair, name='key', public_key='public_key', **kwargs)] def one_key_pair(context, user_id, name): if name in ['key', 'new-key']: return dict(test_keypair.fake_keypair, name=name, public_key='public_key', **kwargs) else: raise exc.KeypairNotFound(user_id=user_id, name=name) def no_key_pair(context, user_id): return [] if have_key_pair: testcase.stub_out('nova.db.api.key_pair_get_all_by_user', key_pair) testcase.stub_out('nova.db.api.key_pair_get', one_key_pair) else: testcase.stub_out('nova.db.api.key_pair_get_all_by_user', no_key_pair) def stub_out_trusted_certs(test, certs=None): def fake_trusted_certs(cls, context, instance_uuid): return objects.TrustedCerts(ids=trusted_certs) def fake_instance_extra(context, instance_uuid, columns): if columns is ['trusted_certs']: return {'trusted_certs': trusted_certs} else: return {'numa_topology': None, 'pci_requests': None, 'flavor': None, 'vcpu_model': None, 'trusted_certs': trusted_certs, 'migration_context': None} trusted_certs = [] if certs: trusted_certs = certs test.stub_out('nova.objects.TrustedCerts.get_by_instance_uuid', fake_trusted_certs) test.stub_out('nova.db.instance_extra_get_by_instance_uuid', fake_instance_extra) def stub_out_instance_quota(test, allowed, quota, resource='instances'): def fake_reserve(context, **deltas): requested = deltas.pop(resource, 0) if requested > allowed: quotas = 
dict(instances=1, cores=1, ram=1) quotas[resource] = quota usages = dict(instances=dict(in_use=0, reserved=0), cores=dict(in_use=0, reserved=0), ram=dict(in_use=0, reserved=0)) usages[resource]['in_use'] = (quotas[resource] * 9 // 10 - allowed) usages[resource]['reserved'] = quotas[resource] // 10 raise exc.OverQuota(overs=[resource], quotas=quotas, usages=usages) test.stub_out('nova.quota.QUOTAS.reserve', fake_reserve) def stub_out_networking(test): def get_my_ip(): return '127.0.0.1' test.stub_out('oslo_utils.netutils.get_my_ipv4', get_my_ip) def stub_out_compute_api_snapshot(test): def snapshot(self, context, instance, name, extra_properties=None): # emulate glance rejecting image names which are too long if len(name) > 256: raise exc.Invalid return dict(id='123', status='ACTIVE', name=name, properties=extra_properties) test.stub_out('nova.compute.api.API.snapshot', snapshot) class stub_out_compute_api_backup(object): def __init__(self, test): self.extra_props_last_call = None test.stub_out('nova.compute.api.API.backup', self.backup) def backup(self, context, instance, name, backup_type, rotation, extra_properties=None): self.extra_props_last_call = extra_properties props = dict(backup_type=backup_type, rotation=rotation) props.update(extra_properties or {}) return dict(id='123', status='ACTIVE', name=name, properties=props) def stub_out_nw_api(test, cls=None, private=None, publics=None): if not private: private = '192.168.0.3' if not publics: publics = ['1.2.3.4'] class Fake(object): def __init__(self): pass def get_instance_nw_info(*args, **kwargs): pass def get_floating_ips_by_fixed_address(*args, **kwargs): return publics def validate_networks(self, context, networks, max_count): return max_count def create_resource_requests( self, context, requested_networks, pci_requests=None, affinity_policy=None): return None, [] if cls is None: cls = Fake test.stub_out('nova.network.neutron.API', cls) def stub_out_secgroup_api(test, security_groups=None): def get_instances_security_groups_bindings( context, servers, detailed=False): instances_security_group_bindings = {} if servers: # we don't get security group information for down cells instances_security_group_bindings = { server['id']: security_groups or [] for server in servers if server['status'] != 'UNKNOWN' } return instances_security_group_bindings def get_instance_security_groups(context, instance, detailed=False): return security_groups if security_groups is not None else [] test.stub_out( 'nova.network.security_group_api' '.get_instances_security_groups_bindings', get_instances_security_groups_bindings) test.stub_out( 'nova.network.security_group_api.get_instance_security_groups', get_instance_security_groups) class FakeToken(object): id_count = 0 def __getitem__(self, key): return getattr(self, key) def __init__(self, **kwargs): FakeToken.id_count += 1 self.id = FakeToken.id_count for k, v in kwargs.items(): setattr(self, k, v) class FakeRequestContext(context.RequestContext): def __init__(self, *args, **kwargs): kwargs['auth_token'] = kwargs.get('auth_token', 'fake_auth_token') super(FakeRequestContext, self).__init__(*args, **kwargs) class HTTPRequest(os_wsgi.Request): @classmethod def blank(cls, *args, **kwargs): defaults = {'base_url': 'http://localhost/v2'} use_admin_context = kwargs.pop('use_admin_context', False) project_id = kwargs.pop('project_id', FAKE_PROJECT_ID) version = kwargs.pop('version', os_wsgi.DEFAULT_API_VERSION) defaults.update(kwargs) out = super(HTTPRequest, cls).blank(*args, **defaults) 
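# The plain webob request built above is then given the pieces the API
# code expects on a real request: a fake RequestContext in
# environ['nova.context'] and an explicit APIVersionRequest, so tests
# can target a specific microversion through the 'version' kwarg.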
out.environ['nova.context'] = FakeRequestContext( user_id='fake_user', project_id=project_id, is_admin=use_admin_context) out.api_version_request = api_version.APIVersionRequest(version) return out class HTTPRequestV21(HTTPRequest): pass class TestRouter(wsgi.Router): def __init__(self, controller, mapper=None): if not mapper: mapper = routes.Mapper() mapper.resource("test", "tests", controller=os_wsgi.Resource(controller)) super(TestRouter, self).__init__(mapper) class FakeAuthDatabase(object): data = {} @staticmethod def auth_token_get(context, token_hash): return FakeAuthDatabase.data.get(token_hash, None) @staticmethod def auth_token_create(context, token): fake_token = FakeToken(created_at=timeutils.utcnow(), **token) FakeAuthDatabase.data[fake_token.token_hash] = fake_token FakeAuthDatabase.data['id_%i' % fake_token.id] = fake_token return fake_token @staticmethod def auth_token_destroy(context, token_id): token = FakeAuthDatabase.data.get('id_%i' % token_id) if token and token.token_hash in FakeAuthDatabase.data: del FakeAuthDatabase.data[token.token_hash] del FakeAuthDatabase.data['id_%i' % token_id] def create_info_cache(nw_cache): if nw_cache is None: pub0 = ('192.168.1.100',) pub1 = ('2001:db8:0:1::1',) def _ip(ip): return {'address': ip, 'type': 'fixed'} nw_cache = [ {'address': 'aa:aa:aa:aa:aa:aa', 'id': 1, 'network': {'bridge': 'br0', 'id': 1, 'label': 'test1', 'subnets': [{'cidr': '192.168.1.0/24', 'ips': [_ip(ip) for ip in pub0]}, {'cidr': 'b33f::/64', 'ips': [_ip(ip) for ip in pub1]}]}}] if not isinstance(nw_cache, six.string_types): nw_cache = jsonutils.dumps(nw_cache) return { "info_cache": { "network_info": nw_cache, "deleted": False, "created_at": None, "deleted_at": None, "updated_at": None, } } def get_fake_uuid(token=0): if token not in FAKE_UUIDS: FAKE_UUIDS[token] = uuidutils.generate_uuid() return FAKE_UUIDS[token] def fake_instance_get(**kwargs): def _return_server(context, uuid, columns_to_join=None, use_slave=False): if 'project_id' not in kwargs: kwargs['project_id'] = 'fake' return stub_instance(1, **kwargs) return _return_server def fake_compute_get(**kwargs): def _return_server_obj(context, *a, **kw): return stub_instance_obj(context, **kwargs) return _return_server_obj def fake_actions_to_locked_server(self, context, instance, *args, **kwargs): raise exc.InstanceIsLocked(instance_uuid=instance['uuid']) def fake_instance_get_all_by_filters(num_servers=5, **kwargs): def _return_servers(context, *args, **kwargs): servers_list = [] marker = None limit = None found_marker = False if "marker" in kwargs: marker = kwargs["marker"] if "limit" in kwargs: limit = kwargs["limit"] if 'columns_to_join' in kwargs: kwargs.pop('columns_to_join') if 'use_slave' in kwargs: kwargs.pop('use_slave') if 'sort_keys' in kwargs: kwargs.pop('sort_keys') if 'sort_dirs' in kwargs: kwargs.pop('sort_dirs') if 'cell_mappings' in kwargs: kwargs.pop('cell_mappings') for i in range(num_servers): uuid = get_fake_uuid(i) server = stub_instance(id=i + 1, uuid=uuid, **kwargs) servers_list.append(server) if marker is not None and uuid == marker: found_marker = True servers_list = [] if marker is not None and not found_marker: raise exc.MarkerNotFound(marker=marker) if limit is not None: servers_list = servers_list[:limit] return servers_list return _return_servers def fake_compute_get_all(num_servers=5, **kwargs): def _return_servers_objs(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): db_insts 
= fake_instance_get_all_by_filters()(None, limit=limit, marker=marker) expected = ['metadata', 'system_metadata', 'flavor', 'info_cache', 'security_groups'] return base.obj_make_list(context, objects.InstanceList(), objects.Instance, db_insts, expected_attrs=expected) return _return_servers_objs def stub_instance(id=1, user_id=None, project_id=None, host=None, node=None, vm_state=None, task_state=None, reservation_id="", uuid=FAKE_UUID, image_ref="10", flavor_id="1", name=None, key_name='', access_ipv4=None, access_ipv6=None, progress=0, auto_disk_config=False, display_name=None, display_description=None, include_fake_metadata=True, config_drive=None, power_state=None, nw_cache=None, metadata=None, security_groups=None, root_device_name=None, limit=None, marker=None, launched_at=timeutils.utcnow(), terminated_at=timeutils.utcnow(), availability_zone='', locked_by=None, cleaned=False, memory_mb=0, vcpus=0, root_gb=0, ephemeral_gb=0, instance_type=None, launch_index=0, kernel_id="", ramdisk_id="", user_data=None, system_metadata=None, services=None, trusted_certs=None, hidden=False): if user_id is None: user_id = 'fake_user' if project_id is None: project_id = 'fake_project' if metadata: metadata = [{'key': k, 'value': v} for k, v in metadata.items()] elif include_fake_metadata: metadata = [models.InstanceMetadata(key='seq', value=str(id))] else: metadata = [] inst_type = flavors.get_flavor_by_flavor_id(int(flavor_id)) sys_meta = flavors.save_flavor_info({}, inst_type) sys_meta.update(system_metadata or {}) if host is not None: host = str(host) if key_name: key_data = 'FAKE' else: key_data = '' if security_groups is None: security_groups = [{"id": 1, "name": "test", "description": "Foo:", "project_id": "project", "user_id": "user", "created_at": None, "updated_at": None, "deleted_at": None, "deleted": False}] # ReservationID isn't sent back, hack it in there. 
server_name = name or "server%s" % id if reservation_id != "": server_name = "reservation_%s" % (reservation_id, ) info_cache = create_info_cache(nw_cache) if instance_type is None: instance_type = objects.Flavor.get_by_name( context.get_admin_context(), 'm1.small') flavorinfo = jsonutils.dumps({ 'cur': instance_type.obj_to_primitive(), 'old': None, 'new': None, }) instance = { "id": int(id), "created_at": datetime.datetime(2010, 10, 10, 12, 0, 0), "updated_at": datetime.datetime(2010, 11, 11, 11, 0, 0), "deleted_at": datetime.datetime(2010, 12, 12, 10, 0, 0), "deleted": None, "user_id": user_id, "project_id": project_id, "image_ref": image_ref, "kernel_id": kernel_id, "ramdisk_id": ramdisk_id, "launch_index": launch_index, "key_name": key_name, "key_data": key_data, "config_drive": config_drive, "vm_state": vm_state or vm_states.ACTIVE, "task_state": task_state, "power_state": power_state, "memory_mb": memory_mb, "vcpus": vcpus, "root_gb": root_gb, "ephemeral_gb": ephemeral_gb, "ephemeral_key_uuid": None, "hostname": display_name or server_name, "host": host, "node": node, "instance_type_id": 1, "instance_type": inst_type, "user_data": user_data, "reservation_id": reservation_id, "mac_address": "", "launched_at": launched_at, "terminated_at": terminated_at, "availability_zone": availability_zone, "display_name": display_name or server_name, "display_description": display_description, "locked": locked_by is not None, "locked_by": locked_by, "metadata": metadata, "access_ip_v4": access_ipv4, "access_ip_v6": access_ipv6, "uuid": uuid, "progress": progress, "auto_disk_config": auto_disk_config, "name": "instance-%s" % id, "shutdown_terminate": True, "disable_terminate": False, "security_groups": security_groups, "root_device_name": root_device_name, "system_metadata": utils.dict_to_metadata(sys_meta), "pci_devices": [], "vm_mode": "", "default_swap_device": "", "default_ephemeral_device": "", "launched_on": "", "cell_name": "", "architecture": "", "os_type": "", "extra": {"numa_topology": None, "pci_requests": None, "flavor": flavorinfo, "trusted_certs": trusted_certs, }, "cleaned": cleaned, "services": services, "tags": [], "hidden": hidden, } instance.update(info_cache) instance['info_cache']['instance_uuid'] = instance['uuid'] return instance def stub_instance_obj(ctxt, *args, **kwargs): db_inst = stub_instance(*args, **kwargs) expected = ['metadata', 'system_metadata', 'flavor', 'info_cache', 'security_groups', 'tags'] inst = objects.Instance._from_db_object(ctxt, objects.Instance(), db_inst, expected_attrs=expected) inst.fault = None if db_inst["services"] is not None: # This ensures services there if one wanted so inst.services = db_inst["services"] return inst def stub_volume(id, **kwargs): volume = { 'id': id, 'user_id': 'fakeuser', 'project_id': 'fakeproject', 'host': 'fakehost', 'size': 1, 'availability_zone': 'fakeaz', 'status': 'fakestatus', 'attach_status': 'attached', 'name': 'vol name', 'display_name': 'displayname', 'display_description': 'displaydesc', 'created_at': datetime.datetime(1999, 1, 1, 1, 1, 1), 'snapshot_id': None, 'volume_type_id': 'fakevoltype', 'volume_metadata': [], 'volume_type': {'name': 'vol_type_name'}, 'multiattach': False, 'attachments': {'fakeuuid': {'mountpoint': '/'}, 'fakeuuid2': {'mountpoint': '/dev/sdb'} } } volume.update(kwargs) return volume def stub_volume_create(self, context, size, name, description, snapshot, **param): vol = stub_volume('1') vol['size'] = size vol['display_name'] = name vol['display_description'] = description try: 
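# 'snapshot' may be None or a dict without an 'id'; the except clause below falls back to creating the fake volume with no snapshot.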
vol['snapshot_id'] = snapshot['id'] except (KeyError, TypeError): vol['snapshot_id'] = None vol['availability_zone'] = param.get('availability_zone', 'fakeaz') return vol def stub_volume_update(self, context, *args, **param): pass def stub_volume_get(self, context, volume_id): return stub_volume(volume_id) def stub_volume_get_all(context, search_opts=None): return [stub_volume(100, project_id='fake'), stub_volume(101, project_id='superfake'), stub_volume(102, project_id='superduperfake')] def stub_volume_check_attach(self, context, *args, **param): pass def stub_snapshot(id, **kwargs): snapshot = { 'id': id, 'volume_id': 12, 'status': 'available', 'volume_size': 100, 'created_at': timeutils.utcnow(), 'display_name': 'Default name', 'display_description': 'Default description', 'project_id': 'fake' } snapshot.update(kwargs) return snapshot def stub_snapshot_create(self, context, volume_id, name, description): return stub_snapshot(100, volume_id=volume_id, display_name=name, display_description=description) def stub_compute_volume_snapshot_create(self, context, volume_id, create_info): return {'snapshot': {'id': "421752a6-acf6-4b2d-bc7a-119f9148cd8c", 'volumeId': volume_id}} def stub_snapshot_delete(self, context, snapshot_id): if snapshot_id == '-1': raise exc.SnapshotNotFound(snapshot_id=snapshot_id) def stub_compute_volume_snapshot_delete(self, context, volume_id, snapshot_id, delete_info): pass def stub_snapshot_get(self, context, snapshot_id): if snapshot_id == '-1': raise exc.SnapshotNotFound(snapshot_id=snapshot_id) return stub_snapshot(snapshot_id) def stub_snapshot_get_all(self, context): return [stub_snapshot(100, project_id='fake'), stub_snapshot(101, project_id='superfake'), stub_snapshot(102, project_id='superduperfake')] def stub_bdm_get_all_by_instance_uuids(context, instance_uuids, use_slave=False): i = 1 result = [] for instance_uuid in instance_uuids: for x in range(2): # add two BDMs per instance result.append(fake_block_device.FakeDbBlockDeviceDict({ 'id': i, 'source_type': 'volume', 'destination_type': 'volume', 'volume_id': 'volume_id%d' % (i), 'instance_uuid': instance_uuid, })) i += 1 return result def fake_not_implemented(*args, **kwargs): raise NotImplementedError() FLAVORS = { '1': objects.Flavor( id=1, name='flavor 1', memory_mb=256, vcpus=1, root_gb=10, ephemeral_gb=20, flavorid='1', swap=10, rxtx_factor=1.0, vcpu_weight=None, disabled=False, is_public=True, description=None, extra_specs={"key1": "value1", "key2": "value2"} ), '2': objects.Flavor( id=2, name='flavor 2', memory_mb=512, vcpus=1, root_gb=20, ephemeral_gb=10, flavorid='2', swap=5, rxtx_factor=None, vcpu_weight=None, disabled=True, is_public=True, description='flavor 2 description', extra_specs={} ), } def stub_out_flavor_get_by_flavor_id(test): @staticmethod def fake_get_by_flavor_id(context, flavor_id, read_deleted=None): return FLAVORS[flavor_id] test.stub_out('nova.objects.Flavor.get_by_flavor_id', fake_get_by_flavor_id) def stub_out_flavor_get_all(test): @staticmethod def fake_get_all(context, inactive=False, filters=None, sort_key='flavorid', sort_dir='asc', limit=None, marker=None): if marker in ['99999']: raise exc.MarkerNotFound(marker) def reject_min(db_attr, filter_attr): return (filter_attr in filters and getattr(flavor, db_attr) < int(filters[filter_attr])) filters = filters or {} res = [] for flavor in FLAVORS.values(): if reject_min('memory_mb', 'min_memory_mb'): continue elif reject_min('root_gb', 'min_root_gb'): continue res.append(flavor) res = sorted(res, key=lambda item: 
getattr(item, sort_key)) output = [] marker_found = True if marker is None else False for flavor in res: if not marker_found and marker == flavor.flavorid: marker_found = True elif marker_found: if limit is None or len(output) < int(limit): output.append(flavor) return objects.FlavorList(objects=output) test.stub_out('nova.objects.FlavorList.get_all', fake_get_all) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/test_api_version_request.py0000664000175000017500000001603700000000000026120 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.openstack import api_version_request from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes class APIVersionRequestTests(test.NoDBTestCase): base_path = '/%s' % fakes.FAKE_PROJECT_ID def test_valid_version_strings(self): def _test_string(version, exp_major, exp_minor): v = api_version_request.APIVersionRequest(version) self.assertEqual(v.ver_major, exp_major) self.assertEqual(v.ver_minor, exp_minor) _test_string("1.1", 1, 1) _test_string("2.10", 2, 10) _test_string("5.234", 5, 234) _test_string("12.5", 12, 5) _test_string("2.0", 2, 0) _test_string("2.200", 2, 200) def test_null_version(self): v = api_version_request.APIVersionRequest() self.assertTrue(v.is_null()) def test_invalid_version_strings(self): self.assertRaises(exception.InvalidAPIVersionString, api_version_request.APIVersionRequest, "2") self.assertRaises(exception.InvalidAPIVersionString, api_version_request.APIVersionRequest, "200") self.assertRaises(exception.InvalidAPIVersionString, api_version_request.APIVersionRequest, "2.1.4") self.assertRaises(exception.InvalidAPIVersionString, api_version_request.APIVersionRequest, "200.23.66.3") self.assertRaises(exception.InvalidAPIVersionString, api_version_request.APIVersionRequest, "5 .3") self.assertRaises(exception.InvalidAPIVersionString, api_version_request.APIVersionRequest, "5. 
3") self.assertRaises(exception.InvalidAPIVersionString, api_version_request.APIVersionRequest, "5.03") self.assertRaises(exception.InvalidAPIVersionString, api_version_request.APIVersionRequest, "02.1") self.assertRaises(exception.InvalidAPIVersionString, api_version_request.APIVersionRequest, "2.001") self.assertRaises(exception.InvalidAPIVersionString, api_version_request.APIVersionRequest, "") self.assertRaises(exception.InvalidAPIVersionString, api_version_request.APIVersionRequest, " 2.1") self.assertRaises(exception.InvalidAPIVersionString, api_version_request.APIVersionRequest, "2.1 ") def test_version_comparisons(self): vers1 = api_version_request.APIVersionRequest("2.0") vers2 = api_version_request.APIVersionRequest("2.5") vers3 = api_version_request.APIVersionRequest("5.23") vers4 = api_version_request.APIVersionRequest("2.0") v_null = api_version_request.APIVersionRequest() self.assertLess(v_null, vers2) self.assertLess(vers1, vers2) self.assertLessEqual(vers1, vers2) self.assertLessEqual(vers1, vers4) self.assertGreater(vers2, v_null) self.assertGreater(vers3, vers2) self.assertGreaterEqual(vers1, vers4) self.assertGreaterEqual(vers3, vers2) self.assertNotEqual(vers1, vers2) self.assertEqual(vers1, vers4) self.assertNotEqual(vers1, v_null) self.assertEqual(v_null, v_null) self.assertRaises(TypeError, vers1.__lt__, "2.1") def test_version_matches(self): vers1 = api_version_request.APIVersionRequest("2.0") vers2 = api_version_request.APIVersionRequest("2.5") vers3 = api_version_request.APIVersionRequest("2.45") vers4 = api_version_request.APIVersionRequest("3.3") vers5 = api_version_request.APIVersionRequest("3.23") vers6 = api_version_request.APIVersionRequest("2.0") vers7 = api_version_request.APIVersionRequest("3.3") vers8 = api_version_request.APIVersionRequest("4.0") v_null = api_version_request.APIVersionRequest() self.assertTrue(vers2.matches(vers1, vers3)) self.assertTrue(vers2.matches(vers1, v_null)) self.assertTrue(vers1.matches(vers6, vers2)) self.assertTrue(vers4.matches(vers2, vers7)) self.assertTrue(vers4.matches(v_null, vers7)) self.assertTrue(vers4.matches(v_null, vers8)) self.assertFalse(vers1.matches(vers2, vers3)) self.assertFalse(vers5.matches(vers2, vers4)) self.assertFalse(vers2.matches(vers3, vers1)) self.assertRaises(ValueError, v_null.matches, vers1, vers3) def test_get_string(self): vers1_string = "3.23" vers1 = api_version_request.APIVersionRequest(vers1_string) self.assertEqual(vers1_string, vers1.get_string()) self.assertRaises(ValueError, api_version_request.APIVersionRequest().get_string) def test_is_supported_min_version(self): req = fakes.HTTPRequest.blank(self.base_path, version='2.5') self.assertTrue(api_version_request.is_supported( req, min_version='2.4')) self.assertTrue(api_version_request.is_supported( req, min_version='2.5')) self.assertFalse(api_version_request.is_supported( req, min_version='2.6')) def test_is_supported_max_version(self): req = fakes.HTTPRequest.blank(self.base_path, version='2.5') self.assertFalse(api_version_request.is_supported( req, max_version='2.4')) self.assertTrue(api_version_request.is_supported( req, max_version='2.5')) self.assertTrue(api_version_request.is_supported( req, max_version='2.6')) def test_is_supported_min_and_max_version(self): req = fakes.HTTPRequest.blank(self.base_path, version='2.5') self.assertFalse(api_version_request.is_supported( req, min_version='2.3', max_version='2.4')) self.assertTrue(api_version_request.is_supported( req, min_version='2.3', max_version='2.5')) 
self.assertTrue(api_version_request.is_supported( req, min_version='2.3', max_version='2.7')) self.assertTrue(api_version_request.is_supported( req, min_version='2.5', max_version='2.7')) self.assertFalse(api_version_request.is_supported( req, min_version='2.6', max_version='2.7')) self.assertTrue(api_version_request.is_supported( req, min_version='2.5', max_version='2.5')) self.assertFalse(api_version_request.is_supported( req, min_version='2.10', max_version='2.1')) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/test_auth.py0000664000175000017500000000342300000000000022766 0ustar00zuulzuul00000000000000# Copyright (c) 2018 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_middleware import request_id from oslo_serialization import jsonutils import webob import webob.exc import nova.api.openstack.auth import nova.conf from nova import test CONF = nova.conf.CONF class NoAuthMiddleware(test.NoDBTestCase): def setUp(self): super(NoAuthMiddleware, self).setUp() @webob.dec.wsgify() def fake_app(req): self.context = req.environ['nova.context'] return webob.Response() self.context = None self.middleware = nova.api.openstack.auth.NoAuthMiddleware(fake_app) self.request = webob.Request.blank('/') self.request.headers['X_TENANT_ID'] = 'testtenantid' self.request.headers['X_AUTH_TOKEN'] = 'testauthtoken' self.request.headers['X_SERVICE_CATALOG'] = jsonutils.dumps({}) def test_request_id_extracted_from_env(self): req_id = 'dummy-request-id' self.request.headers['X_PROJECT_ID'] = 'testtenantid' self.request.headers['X_USER_ID'] = 'testuserid' self.request.environ[request_id.ENV_REQUEST_ID] = req_id self.request.get_response(self.middleware) self.assertEqual(req_id, self.context.request_id) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/test_common.py0000664000175000017500000006415200000000000023323 0ustar00zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Test suites for 'common' code used throughout the OpenStack HTTP API. 
""" import mock import six from testtools import matchers import webob import webob.exc import webob.multidict from nova.api.openstack import common from nova.compute import task_states from nova.compute import vm_states from nova import context from nova import exception from nova.network import neutron as network from nova import test from nova.tests.unit.api.openstack import fakes NS = "{http://docs.openstack.org/compute/api/v1.1}" ATOMNS = "{http://www.w3.org/2005/Atom}" class LimiterTest(test.NoDBTestCase): """Unit tests for the `nova.api.openstack.common.limited` method which takes in a list of items and, depending on the 'offset' and 'limit' GET params, returns a subset or complete set of the given items. """ def setUp(self): """Run before each test.""" super(LimiterTest, self).setUp() self.tiny = range(1) self.small = range(10) self.medium = range(1000) self.large = range(10000) def test_limiter_offset_zero(self): # Test offset key works with 0. req = webob.Request.blank('/?offset=0') self.assertEqual(common.limited(self.tiny, req), self.tiny) self.assertEqual(common.limited(self.small, req), self.small) self.assertEqual(common.limited(self.medium, req), self.medium) self.assertEqual(common.limited(self.large, req), self.large[:1000]) def test_limiter_offset_medium(self): # Test offset key works with a medium sized number. req = webob.Request.blank('/?offset=10') self.assertEqual(0, len(common.limited(self.tiny, req))) self.assertEqual(common.limited(self.small, req), self.small[10:]) self.assertEqual(common.limited(self.medium, req), self.medium[10:]) self.assertEqual(common.limited(self.large, req), self.large[10:1010]) def test_limiter_offset_over_max(self): # Test offset key works with a number over 1000 (max_limit). req = webob.Request.blank('/?offset=1001') self.assertEqual(0, len(common.limited(self.tiny, req))) self.assertEqual(0, len(common.limited(self.small, req))) self.assertEqual(0, len(common.limited(self.medium, req))) self.assertEqual( common.limited(self.large, req), self.large[1001:2001]) def test_limiter_offset_blank(self): # Test offset key works with a blank offset. req = webob.Request.blank('/?offset=') self.assertRaises( webob.exc.HTTPBadRequest, common.limited, self.tiny, req) def test_limiter_offset_bad(self): # Test offset key works with a BAD offset. req = webob.Request.blank(u'/?offset=\u0020aa') self.assertRaises( webob.exc.HTTPBadRequest, common.limited, self.tiny, req) def test_limiter_nothing(self): # Test request with no offset or limit. req = webob.Request.blank('/') self.assertEqual(common.limited(self.tiny, req), self.tiny) self.assertEqual(common.limited(self.small, req), self.small) self.assertEqual(common.limited(self.medium, req), self.medium) self.assertEqual(common.limited(self.large, req), self.large[:1000]) def test_limiter_limit_zero(self): # Test limit of zero. req = webob.Request.blank('/?limit=0') self.assertEqual(common.limited(self.tiny, req), self.tiny) self.assertEqual(common.limited(self.small, req), self.small) self.assertEqual(common.limited(self.medium, req), self.medium) self.assertEqual(common.limited(self.large, req), self.large[:1000]) def test_limiter_limit_medium(self): # Test limit of 10. 
req = webob.Request.blank('/?limit=10') self.assertEqual(common.limited(self.tiny, req), self.tiny) self.assertEqual(common.limited(self.small, req), self.small) self.assertEqual(common.limited(self.medium, req), self.medium[:10]) self.assertEqual(common.limited(self.large, req), self.large[:10]) def test_limiter_limit_over_max(self): # Test limit of 3000. req = webob.Request.blank('/?limit=3000') self.assertEqual(common.limited(self.tiny, req), self.tiny) self.assertEqual(common.limited(self.small, req), self.small) self.assertEqual(common.limited(self.medium, req), self.medium) self.assertEqual(common.limited(self.large, req), self.large[:1000]) def test_limiter_limit_and_offset(self): # Test request with both limit and offset. items = range(2000) req = webob.Request.blank('/?offset=1&limit=3') self.assertEqual(common.limited(items, req), items[1:4]) req = webob.Request.blank('/?offset=3&limit=0') self.assertEqual(common.limited(items, req), items[3:1003]) req = webob.Request.blank('/?offset=3&limit=1500') self.assertEqual(common.limited(items, req), items[3:1003]) req = webob.Request.blank('/?offset=3000&limit=10') self.assertEqual(0, len(common.limited(items, req))) def test_limiter_custom_max_limit(self): # Test a max_limit other than 1000. max_limit = 2000 self.flags(max_limit=max_limit, group='api') items = range(max_limit) req = webob.Request.blank('/?offset=1&limit=3') self.assertEqual( common.limited(items, req), items[1:4]) req = webob.Request.blank('/?offset=3&limit=0') self.assertEqual( common.limited(items, req), items[3:]) req = webob.Request.blank('/?offset=3&limit=2500') self.assertEqual( common.limited(items, req), items[3:]) req = webob.Request.blank('/?offset=3000&limit=10') self.assertEqual(0, len(common.limited(items, req))) def test_limiter_negative_limit(self): # Test a negative limit. req = webob.Request.blank('/?limit=-3000') self.assertRaises( webob.exc.HTTPBadRequest, common.limited, self.tiny, req) def test_limiter_negative_offset(self): # Test a negative offset. 
req = webob.Request.blank('/?offset=-30') self.assertRaises( webob.exc.HTTPBadRequest, common.limited, self.tiny, req) class SortParamUtilsTest(test.NoDBTestCase): def test_get_sort_params_defaults(self): '''Verifies the default sort key and direction.''' sort_keys, sort_dirs = common.get_sort_params({}) self.assertEqual(['created_at'], sort_keys) self.assertEqual(['desc'], sort_dirs) def test_get_sort_params_override_defaults(self): '''Verifies that the defaults can be overridden.''' sort_keys, sort_dirs = common.get_sort_params({}, default_key='key1', default_dir='dir1') self.assertEqual(['key1'], sort_keys) self.assertEqual(['dir1'], sort_dirs) sort_keys, sort_dirs = common.get_sort_params({}, default_key=None, default_dir=None) self.assertEqual([], sort_keys) self.assertEqual([], sort_dirs) def test_get_sort_params_single_value(self): '''Verifies a single sort key and direction.''' params = webob.multidict.MultiDict() params.add('sort_key', 'key1') params.add('sort_dir', 'dir1') sort_keys, sort_dirs = common.get_sort_params(params) self.assertEqual(['key1'], sort_keys) self.assertEqual(['dir1'], sort_dirs) def test_get_sort_params_single_with_default(self): '''Verifies a single sort value with a default.''' params = webob.multidict.MultiDict() params.add('sort_key', 'key1') sort_keys, sort_dirs = common.get_sort_params(params) self.assertEqual(['key1'], sort_keys) # sort_key was supplied, sort_dir should be defaulted self.assertEqual(['desc'], sort_dirs) params = webob.multidict.MultiDict() params.add('sort_dir', 'dir1') sort_keys, sort_dirs = common.get_sort_params(params) self.assertEqual(['created_at'], sort_keys) # sort_dir was supplied, sort_key should be defaulted self.assertEqual(['dir1'], sort_dirs) def test_get_sort_params_multiple_values(self): '''Verifies multiple sort parameter values.''' params = webob.multidict.MultiDict() params.add('sort_key', 'key1') params.add('sort_key', 'key2') params.add('sort_key', 'key3') params.add('sort_dir', 'dir1') params.add('sort_dir', 'dir2') params.add('sort_dir', 'dir3') sort_keys, sort_dirs = common.get_sort_params(params) self.assertEqual(['key1', 'key2', 'key3'], sort_keys) self.assertEqual(['dir1', 'dir2', 'dir3'], sort_dirs) # Also ensure that the input parameters are not modified sort_key_vals = [] sort_dir_vals = [] while 'sort_key' in params: sort_key_vals.append(params.pop('sort_key')) while 'sort_dir' in params: sort_dir_vals.append(params.pop('sort_dir')) self.assertEqual(['key1', 'key2', 'key3'], sort_key_vals) self.assertEqual(['dir1', 'dir2', 'dir3'], sort_dir_vals) self.assertEqual(0, len(params)) class PaginationParamsTest(test.NoDBTestCase): """Unit tests for the `nova.api.openstack.common.get_pagination_params` method which takes in a request object and returns 'marker' and 'limit' GET params. """ def test_no_params(self): # Test no params. req = webob.Request.blank('/') self.assertEqual(common.get_pagination_params(req), {}) def test_valid_marker(self): # Test valid marker param. req = webob.Request.blank( '/?marker=263abb28-1de6-412f-b00b-f0ee0c4333c2') self.assertEqual(common.get_pagination_params(req), {'marker': '263abb28-1de6-412f-b00b-f0ee0c4333c2'}) def test_valid_limit(self): # Test valid limit param. req = webob.Request.blank('/?limit=10') self.assertEqual(common.get_pagination_params(req), {'limit': 10}) def test_invalid_limit(self): # Test invalid limit param. 
req = webob.Request.blank('/?limit=-2') self.assertRaises( webob.exc.HTTPBadRequest, common.get_pagination_params, req) def test_valid_limit_and_marker(self): # Test valid limit and marker parameters. marker = '263abb28-1de6-412f-b00b-f0ee0c4333c2' req = webob.Request.blank('/?limit=20&marker=%s' % marker) self.assertEqual(common.get_pagination_params(req), {'marker': marker, 'limit': 20}) def test_valid_page_size(self): # Test valid page_size param. req = webob.Request.blank('/?page_size=10') self.assertEqual(common.get_pagination_params(req), {'page_size': 10}) def test_invalid_page_size(self): # Test invalid page_size param. req = webob.Request.blank('/?page_size=-2') self.assertRaises( webob.exc.HTTPBadRequest, common.get_pagination_params, req) def test_valid_limit_and_page_size(self): # Test valid limit and page_size parameters. req = webob.Request.blank('/?limit=20&page_size=5') self.assertEqual(common.get_pagination_params(req), {'page_size': 5, 'limit': 20}) class MiscFunctionsTest(test.TestCase): def test_remove_trailing_version_from_href(self): fixture = 'http://www.testsite.com/v1.1' expected = 'http://www.testsite.com' actual = common.remove_trailing_version_from_href(fixture) self.assertEqual(actual, expected) def test_remove_trailing_version_from_href_2(self): fixture = 'http://www.testsite.com/compute/v1.1' expected = 'http://www.testsite.com/compute' actual = common.remove_trailing_version_from_href(fixture) self.assertEqual(actual, expected) def test_remove_trailing_version_from_href_3(self): fixture = 'http://www.testsite.com/v1.1/images/v10.5' expected = 'http://www.testsite.com/v1.1/images' actual = common.remove_trailing_version_from_href(fixture) self.assertEqual(actual, expected) def test_remove_trailing_version_from_href_bad_request(self): fixture = 'http://www.testsite.com/v1.1/images' self.assertRaises(ValueError, common.remove_trailing_version_from_href, fixture) def test_remove_trailing_version_from_href_bad_request_2(self): fixture = 'http://www.testsite.com/images/v' self.assertRaises(ValueError, common.remove_trailing_version_from_href, fixture) def test_remove_trailing_version_from_href_bad_request_3(self): fixture = 'http://www.testsite.com/v1.1images' self.assertRaises(ValueError, common.remove_trailing_version_from_href, fixture) def test_get_id_from_href_with_int_url(self): fixture = 'http://www.testsite.com/dir/45' actual = common.get_id_from_href(fixture) expected = '45' self.assertEqual(actual, expected) def test_get_id_from_href_with_int(self): fixture = '45' actual = common.get_id_from_href(fixture) expected = '45' self.assertEqual(actual, expected) def test_get_id_from_href_with_int_url_query(self): fixture = 'http://www.testsite.com/dir/45?asdf=jkl' actual = common.get_id_from_href(fixture) expected = '45' self.assertEqual(actual, expected) def test_get_id_from_href_with_uuid_url(self): fixture = 'http://www.testsite.com/dir/abc123' actual = common.get_id_from_href(fixture) expected = "abc123" self.assertEqual(actual, expected) def test_get_id_from_href_with_uuid_url_query(self): fixture = 'http://www.testsite.com/dir/abc123?asdf=jkl' actual = common.get_id_from_href(fixture) expected = "abc123" self.assertEqual(actual, expected) def test_get_id_from_href_with_uuid(self): fixture = 'abc123' actual = common.get_id_from_href(fixture) expected = 'abc123' self.assertEqual(actual, expected) def test_raise_http_conflict_for_instance_invalid_state(self): exc = exception.InstanceInvalidState(attr='fake_attr', state='fake_state', method='fake_method', 
instance_uuid='fake') try: common.raise_http_conflict_for_instance_invalid_state(exc, 'meow', 'fake_server_id') except webob.exc.HTTPConflict as e: self.assertEqual(six.text_type(e), "Cannot 'meow' instance fake_server_id while it is in " "fake_attr fake_state") else: self.fail("webob.exc.HTTPConflict was not raised") def test_status_from_state(self): for vm_state in (vm_states.ACTIVE, vm_states.STOPPED): for task_state in (task_states.RESIZE_PREP, task_states.RESIZE_MIGRATING, task_states.RESIZE_MIGRATED, task_states.RESIZE_FINISH): actual = common.status_from_state(vm_state, task_state) expected = 'RESIZE' self.assertEqual(expected, actual) def test_status_rebuild_from_state(self): for vm_state in (vm_states.ACTIVE, vm_states.STOPPED, vm_states.ERROR): for task_state in (task_states.REBUILDING, task_states.REBUILD_BLOCK_DEVICE_MAPPING, task_states.REBUILD_SPAWNING): actual = common.status_from_state(vm_state, task_state) expected = 'REBUILD' self.assertEqual(expected, actual) def test_status_migrating_from_state(self): for vm_state in (vm_states.ACTIVE, vm_states.PAUSED): task_state = task_states.MIGRATING actual = common.status_from_state(vm_state, task_state) expected = 'MIGRATING' self.assertEqual(expected, actual) def test_task_and_vm_state_from_status(self): fixture1 = ['reboot'] actual = common.task_and_vm_state_from_status(fixture1) expected = [vm_states.ACTIVE], [task_states.REBOOT_PENDING, task_states.REBOOT_STARTED, task_states.REBOOTING] self.assertEqual(expected, actual) fixture2 = ['resize'] actual = common.task_and_vm_state_from_status(fixture2) expected = ([vm_states.ACTIVE, vm_states.STOPPED], [task_states.RESIZE_FINISH, task_states.RESIZE_MIGRATED, task_states.RESIZE_MIGRATING, task_states.RESIZE_PREP]) self.assertEqual(expected, actual) fixture3 = ['resize', 'reboot'] actual = common.task_and_vm_state_from_status(fixture3) expected = ([vm_states.ACTIVE, vm_states.STOPPED], [task_states.REBOOT_PENDING, task_states.REBOOT_STARTED, task_states.REBOOTING, task_states.RESIZE_FINISH, task_states.RESIZE_MIGRATED, task_states.RESIZE_MIGRATING, task_states.RESIZE_PREP]) self.assertEqual(expected, actual) def test_is_all_tenants_true(self): for value in ('', '1', 'true', 'True'): search_opts = {'all_tenants': value} self.assertTrue(common.is_all_tenants(search_opts)) self.assertIn('all_tenants', search_opts) def test_is_all_tenants_false(self): for value in ('0', 'false', 'False'): search_opts = {'all_tenants': value} self.assertFalse(common.is_all_tenants(search_opts)) self.assertIn('all_tenants', search_opts) def test_is_all_tenants_missing(self): self.assertFalse(common.is_all_tenants({})) def test_is_all_tenants_invalid(self): search_opts = {'all_tenants': 'wonk'} self.assertRaises(exception.InvalidInput, common.is_all_tenants, search_opts) def test_instance_has_port_with_resource_request(self): network_api = mock.Mock(spec=network.API()) network_api.list_ports.return_value = {'ports': [ {'resource_request': mock.sentinel.request} ]} res = common.instance_has_port_with_resource_request( mock.sentinel.uuid, network_api) self.assertTrue(res) network_api.list_ports.assert_called_once_with( test.MatchType(context.RequestContext), device_id=mock.sentinel.uuid, fields=['resource_request']) # assert that the neutron call uses an admin context ctxt = network_api.mock_calls[0][1][0] self.assertTrue(ctxt.is_admin) self.assertIsNone(ctxt.auth_token) class TestCollectionLinks(test.NoDBTestCase): """Tests the _get_collection_links method.""" 
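# Every test below stubs ViewBuilder._get_next_link so only the decision of whether a "next" link is emitted gets exercised.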
@mock.patch('nova.api.openstack.common.ViewBuilder._get_next_link') def test_items_less_than_limit(self, href_link_mock): items = [ {"uuid": "123"} ] req = mock.MagicMock() params = mock.PropertyMock(return_value=dict(limit=10)) type(req).params = params builder = common.ViewBuilder() results = builder._get_collection_links(req, items, "ignored", "uuid") self.assertFalse(href_link_mock.called) self.assertThat(results, matchers.HasLength(0)) @mock.patch('nova.api.openstack.common.ViewBuilder._get_next_link') def test_items_equals_given_limit(self, href_link_mock): items = [ {"uuid": "123"} ] req = mock.MagicMock() params = mock.PropertyMock(return_value=dict(limit=1)) type(req).params = params builder = common.ViewBuilder() results = builder._get_collection_links(req, items, mock.sentinel.coll_key, "uuid") href_link_mock.assert_called_once_with(req, "123", mock.sentinel.coll_key) self.assertThat(results, matchers.HasLength(1)) @mock.patch('nova.api.openstack.common.ViewBuilder._get_next_link') def test_items_equals_default_limit(self, href_link_mock): items = [ {"uuid": "123"} ] req = mock.MagicMock() params = mock.PropertyMock(return_value=dict()) type(req).params = params self.flags(max_limit=1, group='api') builder = common.ViewBuilder() results = builder._get_collection_links(req, items, mock.sentinel.coll_key, "uuid") href_link_mock.assert_called_once_with(req, "123", mock.sentinel.coll_key) self.assertThat(results, matchers.HasLength(1)) @mock.patch('nova.api.openstack.common.ViewBuilder._get_next_link') def test_items_equals_default_limit_with_given(self, href_link_mock): items = [ {"uuid": "123"} ] req = mock.MagicMock() # Given limit is greater than default max, only return default max params = mock.PropertyMock(return_value=dict(limit=2)) type(req).params = params self.flags(max_limit=1, group='api') builder = common.ViewBuilder() results = builder._get_collection_links(req, items, mock.sentinel.coll_key, "uuid") href_link_mock.assert_called_once_with(req, "123", mock.sentinel.coll_key) self.assertThat(results, matchers.HasLength(1)) class LinkPrefixTest(test.NoDBTestCase): def test_update_link_prefix(self): vb = common.ViewBuilder() result = vb._update_link_prefix("http://192.168.0.243:24/", "http://127.0.0.1/compute") self.assertEqual("http://127.0.0.1/compute", result) result = vb._update_link_prefix("http://foo.x.com/v1", "http://new.prefix.com") self.assertEqual("http://new.prefix.com/v1", result) result = vb._update_link_prefix( "http://foo.x.com/v1", "http://new.prefix.com:20455/new_extra_prefix") self.assertEqual("http://new.prefix.com:20455/new_extra_prefix/v1", result) class UrlJoinTest(test.NoDBTestCase): def test_url_join(self): pieces = ["one", "two", "three"] joined = common.url_join(*pieces) self.assertEqual("one/two/three", joined) def test_url_join_extra_slashes(self): pieces = ["one/", "/two//", "/three/"] joined = common.url_join(*pieces) self.assertEqual("one/two/three", joined) def test_url_join_trailing_slash(self): pieces = ["one", "two", "three", ""] joined = common.url_join(*pieces) self.assertEqual("one/two/three/", joined) def test_url_join_empty_list(self): pieces = [] joined = common.url_join(*pieces) self.assertEqual("", joined) def test_url_join_single_empty_string(self): pieces = [""] joined = common.url_join(*pieces) self.assertEqual("", joined) def test_url_join_single_slash(self): pieces = ["/"] joined = common.url_join(*pieces) self.assertEqual("", joined) class ViewBuilderLinkTest(test.NoDBTestCase): project_id = fakes.FAKE_PROJECT_ID 
api_version = "2.1" def setUp(self): super(ViewBuilderLinkTest, self).setUp() self.request = self.req("/%s" % self.project_id) self.vb = common.ViewBuilder() def req(self, url, use_admin_context=False): return fakes.HTTPRequest.blank(url, use_admin_context=use_admin_context, version=self.api_version) def test_get_project_id(self): proj_id = self.vb._get_project_id(self.request) self.assertEqual(self.project_id, proj_id) def test_get_project_id_with_none_project_id(self): self.request.environ["nova.context"].project_id = None proj_id = self.vb._get_project_id(self.request) self.assertEqual('', proj_id) def test_get_next_link(self): identifier = "identifier" collection = "collection" next_link = self.vb._get_next_link(self.request, identifier, collection) expected = "/".join((self.request.url, "%s?marker=%s" % (collection, identifier))) self.assertEqual(expected, next_link) def test_get_href_link(self): identifier = "identifier" collection = "collection" href_link = self.vb._get_href_link(self.request, identifier, collection) expected = "/".join((self.request.url, collection, identifier)) self.assertEqual(expected, href_link) def test_get_bookmark_link(self): identifier = "identifier" collection = "collection" bookmark_link = self.vb._get_bookmark_link(self.request, identifier, collection) bmk_url = common.remove_trailing_version_from_href( self.request.application_url) expected = "/".join((bmk_url, self.project_id, collection, identifier)) self.assertEqual(expected, bookmark_link) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/test_faults.py0000664000175000017500000001270200000000000023323 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_serialization import jsonutils import webob import webob.dec import webob.exc import nova.api.openstack from nova.api.openstack import wsgi from nova import exception from nova import test class TestFaultWrapper(test.NoDBTestCase): """Tests covering `nova.api.openstack:FaultWrapper` class.""" @mock.patch('oslo_i18n.translate') @mock.patch('nova.i18n.get_available_languages') def test_safe_exception_translated(self, mock_languages, mock_translate): def fake_translate(value, locale): return "I've been translated!" mock_translate.side_effect = fake_translate # Create an exception, passing a translatable message with a # known value we can test for later. safe_exception = exception.NotFound('Should be translated.') safe_exception.safe = True safe_exception.code = 404 req = webob.Request.blank('/') def raiser(*args, **kwargs): raise safe_exception wrapper = nova.api.openstack.FaultWrapper(raiser) response = req.get_response(wrapper) # The text of the exception's message attribute (replaced # above with a non-default value) should be passed to # translate(). 
mock_translate.assert_any_call(u'Should be translated.', None) # The return value from translate() should appear in the response. self.assertIn("I've been translated!", response.body.decode("UTF-8")) class TestFaults(test.NoDBTestCase): """Tests covering `nova.api.openstack.faults:Fault` class.""" def _prepare_xml(self, xml_string): """Remove characters from string which hinder XML equality testing.""" xml_string = xml_string.replace(" ", "") xml_string = xml_string.replace("\n", "") xml_string = xml_string.replace("\t", "") return xml_string def test_400_fault_json(self): # Test fault serialized to JSON via file-extension and/or header. requests = [ webob.Request.blank('/.json'), webob.Request.blank('/', headers={"Accept": "application/json"}), ] for request in requests: fault = wsgi.Fault(webob.exc.HTTPBadRequest(explanation='scram')) response = request.get_response(fault) expected = { "badRequest": { "message": "scram", "code": 400, }, } actual = jsonutils.loads(response.body) self.assertEqual(response.content_type, "application/json") self.assertEqual(expected, actual) def test_413_fault_json(self): # Test fault serialized to JSON via file-extension and/or header. requests = [ webob.Request.blank('/.json'), webob.Request.blank('/', headers={"Accept": "application/json"}), ] for request in requests: exc = webob.exc.HTTPRequestEntityTooLarge # NOTE(aloga): we intentionally pass an integer for the # 'Retry-After' header. It should be then converted to a str fault = wsgi.Fault(exc(explanation='sorry', headers={'Retry-After': 4})) response = request.get_response(fault) expected = { "overLimit": { "message": "sorry", "code": 413, "retryAfter": "4", }, } actual = jsonutils.loads(response.body) self.assertEqual(response.content_type, "application/json") self.assertEqual(expected, actual) def test_429_fault_json(self): # Test fault serialized to JSON via file-extension and/or header. requests = [ webob.Request.blank('/.json'), webob.Request.blank('/', headers={"Accept": "application/json"}), ] for request in requests: exc = webob.exc.HTTPTooManyRequests # NOTE(aloga): we intentionally pass an integer for the # 'Retry-After' header. It should be then converted to a str fault = wsgi.Fault(exc(explanation='sorry', headers={'Retry-After': 4})) response = request.get_response(fault) expected = { "overLimit": { "message": "sorry", "code": 429, "retryAfter": "4", }, } actual = jsonutils.loads(response.body) self.assertEqual(response.content_type, "application/json") self.assertEqual(expected, actual) def test_fault_has_status_int(self): # Ensure the status_int is set correctly on faults. fault = wsgi.Fault(webob.exc.HTTPBadRequest(explanation='what?')) self.assertEqual(fault.status_int, 400) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/test_legacy_v2_compatible_wrapper.py0000664000175000017500000001754400000000000027650 0ustar00zuulzuul00000000000000# Copyright 2015 Intel Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from jsonschema import exceptions as jsonschema_exc import webob import webob.dec import nova.api.openstack from nova.api.openstack import wsgi from nova.api.validation import validators from nova import test class TestLegacyV2CompatibleWrapper(test.NoDBTestCase): def test_filter_out_microversions_request_header(self): req = webob.Request.blank('/') req.headers[wsgi.API_VERSION_REQUEST_HEADER] = '2.2' @webob.dec.wsgify def fake_app(req, *args, **kwargs): self.assertNotIn(wsgi.API_VERSION_REQUEST_HEADER, req) resp = webob.Response() return resp wrapper = nova.api.openstack.LegacyV2CompatibleWrapper(fake_app) req.get_response(wrapper) def test_filter_out_microversions_response_header(self): req = webob.Request.blank('/') @webob.dec.wsgify def fake_app(req, *args, **kwargs): resp = webob.Response() resp.status_int = 204 resp.headers[wsgi.API_VERSION_REQUEST_HEADER] = '2.3' return resp wrapper = nova.api.openstack.LegacyV2CompatibleWrapper(fake_app) response = req.get_response(wrapper) self.assertNotIn(wsgi.API_VERSION_REQUEST_HEADER, response.headers) def test_filter_out_microversions_vary_header(self): req = webob.Request.blank('/') @webob.dec.wsgify def fake_app(req, *args, **kwargs): resp = webob.Response() resp.status_int = 204 resp.headers['Vary'] = wsgi.API_VERSION_REQUEST_HEADER return resp wrapper = nova.api.openstack.LegacyV2CompatibleWrapper(fake_app) response = req.get_response(wrapper) self.assertNotIn('Vary', response.headers) def test_filter_out_microversions_vary_header_with_multi_fields(self): req = webob.Request.blank('/') @webob.dec.wsgify def fake_app(req, *args, **kwargs): resp = webob.Response() resp.status_int = 204 resp.headers['Vary'] = '%s, %s, %s' % ( wsgi.API_VERSION_REQUEST_HEADER, 'FAKE_HEADER1', 'FAKE_HEADER2') return resp wrapper = nova.api.openstack.LegacyV2CompatibleWrapper(fake_app) response = req.get_response(wrapper) self.assertEqual('FAKE_HEADER1,FAKE_HEADER2', response.headers['Vary']) def test_filter_out_microversions_no_vary_header(self): req = webob.Request.blank('/') @webob.dec.wsgify def fake_app(req, *args, **kwargs): resp = webob.Response() resp.status_int = 204 return resp wrapper = nova.api.openstack.LegacyV2CompatibleWrapper(fake_app) response = req.get_response(wrapper) self.assertNotIn('Vary', response.headers) def test_legacy_env_variable(self): req = webob.Request.blank('/') @webob.dec.wsgify(RequestClass=wsgi.Request) def fake_app(req, *args, **kwargs): self.assertTrue(req.is_legacy_v2()) resp = webob.Response() resp.status_int = 204 return resp wrapper = nova.api.openstack.LegacyV2CompatibleWrapper(fake_app) req.get_response(wrapper) class TestSoftAdditionalPropertiesValidation(test.NoDBTestCase): def setUp(self): super(TestSoftAdditionalPropertiesValidation, self).setUp() self.schema = { 'type': 'object', 'properties': { 'foo': {'type': 'string'}, 'bar': {'type': 'string'} }, 'additionalProperties': False} self.schema_allow = { 'type': 'object', 'properties': { 'foo': {'type': 'string'}, 'bar': {'type': 'string'} }, 'additionalProperties': True} self.schema_with_pattern = { 'type': 'object', 'patternProperties': { '^[a-zA-Z0-9-_:. ]{1,255}$': {'type': 'string'} }, 'additionalProperties': False} self.schema_allow_with_pattern = { 'type': 'object', 'patternProperties': { '^[a-zA-Z0-9-_:. 
]{1,255}$': {'type': 'string'} }, 'additionalProperties': True} def test_strip_extra_properties_out_without_extra_props(self): validator = validators._SchemaValidator(self.schema).validator instance = {'foo': '1'} gen = validators._soft_validate_additional_properties( validator, False, instance, self.schema) self.assertRaises(StopIteration, next, gen) self.assertEqual({'foo': '1'}, instance) def test_strip_extra_properties_out_with_extra_props(self): validator = validators._SchemaValidator(self.schema).validator instance = {'foo': '1', 'extra_foo': 'extra'} gen = validators._soft_validate_additional_properties( validator, False, instance, self.schema) self.assertRaises(StopIteration, next, gen) self.assertEqual({'foo': '1'}, instance) def test_not_strip_extra_properties_out_with_allow_extra_props(self): validator = validators._SchemaValidator(self.schema_allow).validator instance = {'foo': '1', 'extra_foo': 'extra'} gen = validators._soft_validate_additional_properties( validator, True, instance, self.schema_allow) self.assertRaises(StopIteration, next, gen) self.assertEqual({'foo': '1', 'extra_foo': 'extra'}, instance) def test_pattern_properties_with_invalid_property_and_allow_extra_props( self): validator = validators._SchemaValidator( self.schema_with_pattern).validator instance = {'foo': '1', 'b' * 300: 'extra'} gen = validators._soft_validate_additional_properties( validator, True, instance, self.schema_with_pattern) self.assertRaises(StopIteration, next, gen) def test_pattern_properties(self): validator = validators._SchemaValidator( self.schema_with_pattern).validator instance = {'foo': '1'} gen = validators._soft_validate_additional_properties( validator, False, instance, self.schema_with_pattern) self.assertRaises(StopIteration, next, gen) def test_pattern_properties_with_invalid_property(self): validator = validators._SchemaValidator( self.schema_with_pattern).validator instance = {'foo': '1', 'b' * 300: 'extra'} gen = validators._soft_validate_additional_properties( validator, False, instance, self.schema_with_pattern) exc = next(gen) self.assertIsInstance(exc, jsonschema_exc.ValidationError) self.assertIn('was', exc.message) def test_pattern_properties_with_multiple_invalid_properties(self): validator = validators._SchemaValidator( self.schema_with_pattern).validator instance = {'foo': '1', 'b' * 300: 'extra', 'c' * 300: 'extra'} gen = validators._soft_validate_additional_properties( validator, False, instance, self.schema_with_pattern) exc = next(gen) self.assertIsInstance(exc, jsonschema_exc.ValidationError) self.assertIn('were', exc.message) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/api/openstack/test_mapper.py0000664000175000017500000000317000000000000023310 0ustar00zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import webob from nova.api import openstack as openstack_api from nova import test from nova.tests.unit.api.openstack import fakes class MapperTest(test.NoDBTestCase): def test_resource_project_prefix(self): class Controller(object): def index(self, req): return 'foo' app = fakes.TestRouter(Controller(), openstack_api.ProjectMapper()) req = webob.Request.blank('/1234/tests') resp = req.get_response(app) self.assertEqual(b'foo', resp.body) self.assertEqual(resp.status_int, 200) def test_resource_no_project_prefix(self): class Controller(object): def index(self, req): return 'foo' app = fakes.TestRouter(Controller(), openstack_api.PlainMapper()) req = webob.Request.blank('/tests') resp = req.get_response(app) self.assertEqual(b'foo', resp.body) self.assertEqual(resp.status_int, 200) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/test_requestlog.py0000664000175000017500000001313500000000000024220 0ustar00zuulzuul00000000000000# Copyright 2017 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import fixtures as fx import testtools from nova.tests import fixtures from nova.tests.unit import conf_fixture """Test request logging middleware under various conditions. The request logging middleware is needed when running under something other than eventlet. While Nova grew up on eventlet, and it's wsgi server, it meant that our user facing data (the log stream) was a mix of what Nova was emitting, and what eventlet.wsgi was emitting on our behalf. When running under uwsgi we want to make sure that we have equivalent coverage. All these tests use GET / to hit an endpoint that doesn't require the database setup. We have to do a bit of mocking to make that work. """ class TestRequestLogMiddleware(testtools.TestCase): def setUp(self): super(TestRequestLogMiddleware, self).setUp() # this is the minimal set of magic mocks needed to convince # the API service it can start on it's own without a database. mocks = ['nova.objects.Service.get_by_host_and_binary', 'nova.objects.Service.create', 'nova.utils.raise_if_old_compute'] self.stdlog = fixtures.StandardLogging() self.useFixture(self.stdlog) for m in mocks: p = mock.patch(m) self.addCleanup(p.stop) p.start() @mock.patch('nova.api.openstack.requestlog.RequestLog._should_emit') def test_logs_requests(self, emit): """Ensure requests are logged. Make a standard request for / and ensure there is a log entry. """ emit.return_value = True self.useFixture(conf_fixture.ConfFixture()) self.useFixture(fixtures.RPCFixture('nova.test')) api = self.useFixture(fixtures.OSAPIFixture()).api resp = api.api_request('/', strip_version=True) # the content length might vary, but the important part is # what we log is what we return to the user (which turns out # to excitingly not be the case with eventlet!) 
content_length = resp.headers['content-length'] log1 = ('INFO [nova.api.openstack.requestlog] 127.0.0.1 ' '"GET /" status: 200 len: %s' % content_length) self.assertIn(log1, self.stdlog.logger.output) @mock.patch('nova.api.openstack.requestlog.RequestLog._should_emit') def test_logs_mv(self, emit): """Ensure logs register microversion if passed. This makes sure that microversion logging actually shows up when appropriate. """ emit.return_value = True self.useFixture(conf_fixture.ConfFixture()) # NOTE(sdague): all these tests are using the self.useFixture( fx.MonkeyPatch( 'nova.api.openstack.compute.versions.' 'Versions.support_api_request_version', True)) self.useFixture(fixtures.RPCFixture('nova.test')) api = self.useFixture(fixtures.OSAPIFixture()).api api.microversion = '2.25' resp = api.api_request('/', strip_version=True) content_length = resp.headers['content-length'] log1 = ('INFO [nova.api.openstack.requestlog] 127.0.0.1 ' '"GET /" status: 200 len: %s microversion: 2.25 time:' % content_length) self.assertIn(log1, self.stdlog.logger.output) @mock.patch('nova.api.openstack.compute.versions.Versions.index') @mock.patch('nova.api.openstack.requestlog.RequestLog._should_emit') def test_logs_under_exception(self, emit, v_index): """Ensure that logs still emit under unexpected failure. If we get an unexpected failure all the way up to the top, we should still have a record of that request via the except block. """ emit.return_value = True v_index.side_effect = Exception("Unexpected Error") self.useFixture(conf_fixture.ConfFixture()) self.useFixture(fixtures.RPCFixture('nova.test')) api = self.useFixture(fixtures.OSAPIFixture()).api api.api_request('/', strip_version=True) log1 = ('INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /"' ' status: 500 len: 0 microversion: - time:') self.assertIn(log1, self.stdlog.logger.output) @mock.patch('nova.api.openstack.requestlog.RequestLog._should_emit') def test_no_log_under_eventlet(self, emit): """Ensure that logs don't end up under eventlet. We still set the _should_emit return value directly to prevent the situation where eventlet is removed from tests and this preventing that. NOTE(sdague): this test can be deleted when eventlet is no longer supported for the wsgi stack in Nova. """ emit.return_value = False self.useFixture(conf_fixture.ConfFixture()) self.useFixture(fixtures.RPCFixture('nova.test')) api = self.useFixture(fixtures.OSAPIFixture()).api api.api_request('/', strip_version=True) self.assertNotIn("nova.api.openstack.requestlog", self.stdlog.logger.output) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/api/openstack/test_wsgi.py0000664000175000017500000010757000000000000023006 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
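# Illustrative sketch only, not part of the nova tree: the request-log tests
# above start each mock.patch() by hand and register the matching stop() via
# addCleanup() so the API service can come up without a database.  The same
# pattern as a small, self-contained helper (the dotted paths below are just
# the ones those tests already patch; the helper name is hypothetical):

import mock


def _patch_all(testcase, targets):
    """Start a patcher for every dotted path and undo it on test cleanup."""
    for target in targets:
        patcher = mock.patch(target)
        testcase.addCleanup(patcher.stop)
        patcher.start()

# usage, from inside a TestCase.setUp():
#     _patch_all(self, ['nova.objects.Service.get_by_host_and_binary',
#                       'nova.objects.Service.create',
#                       'nova.utils.raise_if_old_compute'])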
import mock from oslo_serialization import jsonutils import six import testscenarios import webob from nova.api.openstack import api_version_request as api_version from nova.api.openstack import versioned_method from nova.api.openstack import wsgi from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import matchers from nova.tests.unit import utils class MicroversionedTest(testscenarios.WithScenarios, test.NoDBTestCase): scenarios = [ ('legacy-microversion', { 'header_name': 'X-OpenStack-Nova-API-Version', }), ('modern-microversion', { 'header_name': 'OpenStack-API-Version', }) ] def _make_microversion_header(self, value): if 'nova' in self.header_name.lower(): return {self.header_name: value} else: return {self.header_name: 'compute %s' % value} class RequestTest(MicroversionedTest): def setUp(self): super(RequestTest, self).setUp() self.stub_out('nova.i18n.get_available_languages', lambda *args, **kwargs: ['en-GB', 'en-AU', 'de', 'zh-CN', 'en-US', 'ja-JP']) def test_content_type_missing(self): request = wsgi.Request.blank('/tests/123', method='POST') request.body = b"" self.assertIsNone(request.get_content_type()) def test_content_type_unsupported(self): request = wsgi.Request.blank('/tests/123', method='POST') request.headers["Content-Type"] = "text/html" request.body = b"asdf
" self.assertRaises(exception.InvalidContentType, request.get_content_type) def test_content_type_with_charset(self): request = wsgi.Request.blank('/tests/123') request.headers["Content-Type"] = "application/json; charset=UTF-8" result = request.get_content_type() self.assertEqual(result, "application/json") def test_content_type_accept_default(self): request = wsgi.Request.blank('/tests/123.unsupported') request.headers["Accept"] = "application/unsupported1" result = request.best_match_content_type() self.assertEqual(result, "application/json") def test_content_type_accept_with_quality_values(self): request = wsgi.Request.blank('/tests/123') request.headers["Accept"] = ( "application/json;q=0.4," "application/vnd.openstack.compute+json;q=0.6") result = request.best_match_content_type() self.assertEqual("application/vnd.openstack.compute+json", result) def test_from_request(self): request = wsgi.Request.blank('/') accepted = 'bogus;q=1, en-gb;q=0.7,en-us,en;q=0.5,*;q=0.7' request.headers = {'Accept-Language': accepted} self.assertEqual(request.best_match_language(), 'en-US') def test_asterisk(self): # In the section 3.4 of RFC4647, it says as follows: # If the language range "*"(asterisk) is the only one # in the language priority list or if no other language range # follows, the default value is computed and returned. # # In this case, the default value 'None' is returned. request = wsgi.Request.blank('/') accepted = '*;q=0.5' request.headers = {'Accept-Language': accepted} self.assertIsNone(request.best_match_language()) def test_asterisk_followed_by_other_language(self): request = wsgi.Request.blank('/') accepted = '*,ja-jp;q=0.5' request.headers = {'Accept-Language': accepted} self.assertEqual('ja-JP', request.best_match_language()) def test_truncate(self): request = wsgi.Request.blank('/') accepted = 'de-CH' request.headers = {'Accept-Language': accepted} self.assertEqual('de', request.best_match_language()) def test_secondary(self): request = wsgi.Request.blank('/') accepted = 'nn,en-gb;q=0.5' request.headers = {'Accept-Language': accepted} self.assertEqual('en-GB', request.best_match_language()) def test_none_found(self): request = wsgi.Request.blank('/') accepted = 'nb-no' request.headers = {'Accept-Language': accepted} self.assertIsNone(request.best_match_language()) def test_no_lang_header(self): request = wsgi.Request.blank('/') accepted = '' request.headers = {'Accept-Language': accepted} self.assertIsNone(request.best_match_language()) def test_api_version_request_header_none(self): request = wsgi.Request.blank('/') request.set_api_version_request() self.assertEqual(api_version.APIVersionRequest( api_version.DEFAULT_API_VERSION), request.api_version_request) @mock.patch("nova.api.openstack.api_version_request.max_api_version") def test_api_version_request_header(self, mock_maxver): mock_maxver.return_value = api_version.APIVersionRequest("2.14") request = wsgi.Request.blank('/') request.headers = self._make_microversion_header('2.14') request.set_api_version_request() self.assertEqual(api_version.APIVersionRequest("2.14"), request.api_version_request) @mock.patch("nova.api.openstack.api_version_request.max_api_version") def test_api_version_request_header_latest(self, mock_maxver): mock_maxver.return_value = api_version.APIVersionRequest("3.5") request = wsgi.Request.blank('/') request.headers = self._make_microversion_header('latest') request.set_api_version_request() self.assertEqual(api_version.APIVersionRequest("3.5"), request.api_version_request) def 
test_api_version_request_header_invalid(self): request = wsgi.Request.blank('/') request.headers = self._make_microversion_header('2.1.3') self.assertRaises(exception.InvalidAPIVersionString, request.set_api_version_request) class ActionDispatcherTest(test.NoDBTestCase): def test_dispatch(self): serializer = wsgi.ActionDispatcher() serializer.create = lambda x: 'pants' self.assertEqual(serializer.dispatch({}, action='create'), 'pants') def test_dispatch_action_None(self): serializer = wsgi.ActionDispatcher() serializer.create = lambda x: 'pants' serializer.default = lambda x: 'trousers' self.assertEqual(serializer.dispatch({}, action=None), 'trousers') def test_dispatch_default(self): serializer = wsgi.ActionDispatcher() serializer.create = lambda x: 'pants' serializer.default = lambda x: 'trousers' self.assertEqual(serializer.dispatch({}, action='update'), 'trousers') class JSONDictSerializerTest(test.NoDBTestCase): def test_json(self): input_dict = dict(servers=dict(a=(2, 3))) expected_json = '{"servers":{"a":[2,3]}}' serializer = wsgi.JSONDictSerializer() result = serializer.serialize(input_dict) result = result.replace('\n', '').replace(' ', '') self.assertEqual(result, expected_json) class JSONDeserializerTest(test.NoDBTestCase): def test_json(self): data = """{"a": { "a1": "1", "a2": "2", "bs": ["1", "2", "3", {"c": {"c1": "1"}}], "d": {"e": "1"}, "f": "1"}}""" as_dict = { 'body': { 'a': { 'a1': '1', 'a2': '2', 'bs': ['1', '2', '3', {'c': {'c1': '1'}}], 'd': {'e': '1'}, 'f': '1', }, }, } deserializer = wsgi.JSONDeserializer() self.assertEqual(deserializer.deserialize(data), as_dict) def test_json_valid_utf8(self): data = b"""{"server": {"min_count": 1, "flavorRef": "1", "name": "\xe6\xa6\x82\xe5\xbf\xb5", "imageRef": "10bab10c-1304-47d", "max_count": 1}} """ as_dict = { 'body': { u'server': { u'min_count': 1, u'flavorRef': u'1', u'name': u'\u6982\u5ff5', u'imageRef': u'10bab10c-1304-47d', u'max_count': 1 } } } deserializer = wsgi.JSONDeserializer() self.assertEqual(deserializer.deserialize(data), as_dict) def test_json_invalid_utf8(self): """Send invalid utf-8 to JSONDeserializer.""" data = b"""{"server": {"min_count": 1, "flavorRef": "1", "name": "\xf0\x28\x8c\x28", "imageRef": "10bab10c-1304-47d", "max_count": 1}} """ deserializer = wsgi.JSONDeserializer() self.assertRaises(exception.MalformedRequestBody, deserializer.deserialize, data) class ResourceTest(MicroversionedTest): def get_req_id_header_name(self, request): header_name = 'x-openstack-request-id' if utils.get_api_version(request) < 3: header_name = 'x-compute-request-id' return header_name def test_resource_receives_api_version_request_default(self): class Controller(object): def index(self, req): if req.api_version_request != \ api_version.APIVersionRequest( api_version.DEFAULT_API_VERSION): raise webob.exc.HTTPInternalServerError() return 'success' app = fakes.TestRouter(Controller()) req = webob.Request.blank('/tests') response = req.get_response(app) self.assertEqual(b'success', response.body) self.assertEqual(response.status_int, 200) @mock.patch("nova.api.openstack.api_version_request.max_api_version") def test_resource_receives_api_version_request(self, mock_maxver): version = "2.5" mock_maxver.return_value = api_version.APIVersionRequest(version) class Controller(object): def index(self, req): if req.api_version_request != \ api_version.APIVersionRequest(version): raise webob.exc.HTTPInternalServerError() return 'success' app = fakes.TestRouter(Controller()) req = webob.Request.blank('/tests') req.headers = 
self._make_microversion_header(version) response = req.get_response(app) self.assertEqual(b'success', response.body) self.assertEqual(response.status_int, 200) def test_resource_receives_api_version_request_invalid(self): invalid_version = "2.5.3" class Controller(object): def index(self, req): return 'success' app = fakes.TestRouter(Controller()) req = webob.Request.blank('/tests') req.headers = self._make_microversion_header(invalid_version) response = req.get_response(app) self.assertEqual(400, response.status_int) def test_resource_call_with_method_get(self): class Controller(object): def index(self, req): return 'success' app = fakes.TestRouter(Controller()) # the default method is GET req = webob.Request.blank('/tests') response = req.get_response(app) self.assertEqual(b'success', response.body) self.assertEqual(response.status_int, 200) req.body = b'{"body": {"key": "value"}}' response = req.get_response(app) self.assertEqual(b'success', response.body) self.assertEqual(response.status_int, 200) req.content_type = 'application/json' response = req.get_response(app) self.assertEqual(b'success', response.body) self.assertEqual(response.status_int, 200) def test_resource_call_with_method_post(self): class Controller(object): @wsgi.expected_errors(400) def create(self, req, body): if expected_body != body: msg = "The request body invalid" raise webob.exc.HTTPBadRequest(explanation=msg) return "success" # verify the method: POST app = fakes.TestRouter(Controller()) req = webob.Request.blank('/tests', method="POST", content_type='application/json') req.body = b'{"body": {"key": "value"}}' expected_body = {'body': { "key": "value" } } response = req.get_response(app) self.assertEqual(response.status_int, 200) self.assertEqual(b'success', response.body) # verify without body expected_body = None req.body = None response = req.get_response(app) self.assertEqual(response.status_int, 200) self.assertEqual(b'success', response.body) # the body is validated in the controller expected_body = {'body': None} response = req.get_response(app) expected_unsupported_type_body = {'badRequest': {'message': 'The request body invalid', 'code': 400}} self.assertEqual(response.status_int, 400) self.assertEqual(expected_unsupported_type_body, jsonutils.loads(response.body)) def test_resource_call_with_method_put(self): class Controller(object): def update(self, req, id, body): if expected_body != body: msg = "The request body invalid" raise webob.exc.HTTPBadRequest(explanation=msg) return "success" # verify the method: PUT app = fakes.TestRouter(Controller()) req = webob.Request.blank('/tests/test_id', method="PUT", content_type='application/json') req.body = b'{"body": {"key": "value"}}' expected_body = {'body': { "key": "value" } } response = req.get_response(app) self.assertEqual(b'success', response.body) self.assertEqual(response.status_int, 200) req.body = None expected_body = None response = req.get_response(app) self.assertEqual(response.status_int, 200) # verify no content_type is contained in the request req = webob.Request.blank('/tests/test_id', method="PUT", content_type='application/xml') req.content_type = 'application/xml' req.body = b'{"body": {"key": "value"}}' response = req.get_response(app) expected_unsupported_type_body = {'badMediaType': {'message': 'Unsupported Content-Type', 'code': 415}} self.assertEqual(response.status_int, 415) self.assertEqual(expected_unsupported_type_body, jsonutils.loads(response.body)) def test_resource_call_with_method_delete(self): class Controller(object): 
def delete(self, req, id): return "success" # verify the method: DELETE app = fakes.TestRouter(Controller()) req = webob.Request.blank('/tests/test_id', method="DELETE") response = req.get_response(app) self.assertEqual(response.status_int, 200) self.assertEqual(b'success', response.body) # ignore the body req.body = b'{"body": {"key": "value"}}' response = req.get_response(app) self.assertEqual(response.status_int, 200) self.assertEqual(b'success', response.body) def test_resource_forbidden(self): class Controller(object): def index(self, req): raise exception.Forbidden() req = webob.Request.blank('/tests') app = fakes.TestRouter(Controller()) response = req.get_response(app) self.assertEqual(response.status_int, 403) def test_resource_not_authorized(self): class Controller(object): def index(self, req): raise exception.Unauthorized() req = webob.Request.blank('/tests') app = fakes.TestRouter(Controller()) self.assertRaises( exception.Unauthorized, req.get_response, app) def test_dispatch(self): class Controller(object): def index(self, req, pants=None): return pants controller = Controller() resource = wsgi.Resource(controller) method = resource.get_method(None, 'index', None, '') actual = resource.dispatch(method, None, {'pants': 'off'}) expected = 'off' self.assertEqual(actual, expected) def test_get_method_unknown_controller_method(self): class Controller(object): def index(self, req, pants=None): return pants controller = Controller() resource = wsgi.Resource(controller) self.assertRaises(AttributeError, resource.get_method, None, 'create', None, '') def test_get_method_action_json(self): class Controller(wsgi.Controller): @wsgi.action('fooAction') def _action_foo(self, req, id, body): return body controller = Controller() resource = wsgi.Resource(controller) method = resource.get_method(None, 'action', 'application/json', '{"fooAction": true}') self.assertEqual(controller._action_foo, method) def test_get_method_action_bad_body(self): class Controller(wsgi.Controller): @wsgi.action('fooAction') def _action_foo(self, req, id, body): return body controller = Controller() resource = wsgi.Resource(controller) self.assertRaises(exception.MalformedRequestBody, resource.get_method, None, 'action', 'application/json', '{}') def test_get_method_unknown_controller_action(self): class Controller(wsgi.Controller): @wsgi.action('fooAction') def _action_foo(self, req, id, body): return body controller = Controller() resource = wsgi.Resource(controller) self.assertRaises(KeyError, resource.get_method, None, 'action', 'application/json', '{"barAction": true}') def test_get_method_action_method(self): class Controller(object): def action(self, req, pants=None): return pants controller = Controller() resource = wsgi.Resource(controller) method = resource.get_method(None, 'action', 'application/xml', 'true([a-zA-Z0-9_-]{1,64})?)', 'trait(?P([a-zA-Z0-9_-]{1,64})?)', 'vmware', } self.assertTrue( namespaces.issubset(validators.NAMESPACES), f'{namespaces} is not a subset of {validators.NAMESPACES}', ) def test_spec(self): unknown_namespaces = ( ('hhw:cpu_realtime_mask', '^0'), ('w:cpu_realtime_mask', '^0'), ('hw_cpu_realtime_mask', '^0'), ('foo', 'bar'), ) for key, value in unknown_namespaces: validators.validate(key, value) known_invalid_namespaces = ( ('hw:cpu_realtime_maskk', '^0'), ('hw:cpu_realtime_mas', '^0'), ('hw:foo', 'bar'), ) for key, value in known_invalid_namespaces: with testtools.ExpectedException(exception.ValidationError): validators.validate(key, value) def test_value__str(self): 
valid_specs = ( # patterns ('hw:cpu_realtime_mask', '^0'), ('hw:cpu_realtime_mask', '^0,2-3,1'), ('hw:mem_page_size', 'large'), ('hw:mem_page_size', '2kbit'), ('hw:mem_page_size', '1GB'), # enums ('hw:cpu_thread_policy', 'prefer'), ('hw:emulator_threads_policy', 'isolate'), ('hw:pci_numa_affinity_policy', 'legacy'), ) for key, value in valid_specs: validators.validate(key, value) invalid_specs = ( # patterns ('hw:cpu_realtime_mask', '0'), ('hw:cpu_realtime_mask', '^0,2-3,b'), ('hw:mem_page_size', 'largest'), ('hw:mem_page_size', '2kbits'), ('hw:mem_page_size', '1gigabyte'), # enums ('hw:cpu_thread_policy', 'preferred'), ('hw:emulator_threads_policy', 'iisolate'), ('hw:pci_numa_affinity_policy', 'lgacy'), ) for key, value in invalid_specs: with testtools.ExpectedException(exception.ValidationError): validators.validate(key, value) def test_value__int(self): valid_specs = ( ('hw:numa_nodes', '1'), ('os:monitors', '1'), ('powervm:shared_weight', '1'), ('os:monitors', '8'), ('powervm:shared_weight', '255'), ) for key, value in valid_specs: validators.validate(key, value) invalid_specs = ( ('hw:serial_port_count', 'five'), # NaN ('hw:serial_port_count', '!'), # NaN ('hw:numa_nodes', '0'), # has min ('os:monitors', '0'), # has min ('powervm:shared_weight', '-1'), # has min ('os:monitors', '9'), # has max ('powervm:shared_weight', '256'), # has max ) for key, value in invalid_specs: with testtools.ExpectedException(exception.ValidationError): validators.validate(key, value) def test_value__bool(self): valid_specs = ( ('hw:cpu_realtime', '1'), ('hw:cpu_realtime', '0'), ('hw:mem_encryption', 'true'), ('hw:boot_menu', 'y'), ) for key, value in valid_specs: validators.validate(key, value) invalid_specs = ( ('hw:cpu_realtime', '2'), ('hw:cpu_realtime', '00'), ('hw:mem_encryption', 'tru'), ('hw:boot_menu', 'yah'), ) for key, value in invalid_specs: with testtools.ExpectedException(exception.ValidationError): validators.validate(key, value) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/cast_as_call.py0000664000175000017500000000500600000000000020635 0ustar00zuulzuul00000000000000# Copyright 2013 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import sys import fixtures import oslo_messaging as messaging class CastAsCall(fixtures.Fixture): """Make RPC 'cast' behave like a 'call'. This is a little hack for tests that need to know when a cast operation has completed. The idea is that we wait for the RPC endpoint method to complete and return before continuing on the caller. See Ia7f40718533e450f00cd3e7d753ac65755c70588 for more background. 
""" def __init__(self, testcase): super(CastAsCall, self).__init__() self.testcase = testcase @staticmethod def _stub_out(testcase, obj=None): if obj: orig_prepare = obj.prepare else: orig_prepare = messaging.RPCClient.prepare def prepare(self, *args, **kwargs): # Casts with fanout=True would throw errors if its monkeypatched to # the call method, so we must override fanout to False if 'fanout' in kwargs: kwargs['fanout'] = False cctxt = orig_prepare(self, *args, **kwargs) CastAsCall._stub_out(testcase, cctxt) # woo, recurse! return cctxt if obj: cls = getattr(sys.modules[obj.__class__.__module__], obj.__class__.__name__) testcase.stub_out('%s.%s.prepare' % (obj.__class__.__module__, obj.__class__.__name__), prepare) testcase.stub_out('%s.%s.cast' % (obj.__class__.__module__, obj.__class__.__name__), cls.call) else: testcase.stub_out('oslo_messaging.RPCClient.prepare', prepare) testcase.stub_out('oslo_messaging.RPCClient.cast', messaging.RPCClient.call) def setUp(self): super(CastAsCall, self).setUp() self._stub_out(self.testcase) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5304682 nova-21.2.4/nova/tests/unit/cmd/0000775000175000017500000000000000000000000016415 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/cmd/__init__.py0000664000175000017500000000000000000000000020514 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/cmd/test_baseproxy.py0000664000175000017500000001131000000000000022036 0ustar00zuulzuul00000000000000# Copyright 2015 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import fixtures import mock from oslo_log import log as logging from oslo_reports import guru_meditation_report as gmr from six.moves import StringIO from nova.cmd import baseproxy from nova import config from nova.console import websocketproxy from nova import test from nova import version @mock.patch.object(config, 'parse_args', new=lambda *args, **kwargs: None) class BaseProxyTestCase(test.NoDBTestCase): def setUp(self): super(BaseProxyTestCase, self).setUp() self.stderr = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stderr', self.stderr)) @mock.patch('os.path.exists', return_value=False) # NOTE(mriedem): sys.exit raises TestingException so we can actually exit # the test normally. 
@mock.patch('sys.exit', side_effect=test.TestingException) def test_proxy_ssl_without_cert(self, mock_exit, mock_exists): self.flags(ssl_only=True) self.assertRaises(test.TestingException, baseproxy.proxy, '0.0.0.0', '6080') mock_exit.assert_called_once_with(-1) self.assertEqual(self.stderr.getvalue(), "SSL only and self.pem not found\n") @mock.patch('os.path.exists', return_value=False) @mock.patch('sys.exit', side_effect=test.TestingException) def test_proxy_web_dir_does_not_exist(self, mock_exit, mock_exists): self.flags(web='/my/fake/webserver/') self.assertRaises(test.TestingException, baseproxy.proxy, '0.0.0.0', '6080') mock_exit.assert_called_once_with(-1) @mock.patch('os.path.exists', return_value=True) @mock.patch.object(logging, 'setup') @mock.patch.object(gmr.TextGuruMeditation, 'setup_autorun') @mock.patch('nova.console.websocketproxy.NovaWebSocketProxy.__init__', return_value=None) @mock.patch('nova.console.websocketproxy.NovaWebSocketProxy.start_server') @mock.patch('websockify.websocketproxy.select_ssl_version', return_value=None) def test_proxy(self, mock_select_ssl_version, mock_start, mock_init, mock_gmr, mock_log, mock_exists): baseproxy.proxy('0.0.0.0', '6080') mock_log.assert_called_once_with(baseproxy.CONF, 'nova') mock_gmr.assert_called_once_with(version, conf=baseproxy.CONF) mock_init.assert_called_once_with( listen_host='0.0.0.0', listen_port='6080', source_is_ipv6=False, cert='self.pem', key=None, ssl_only=False, ssl_ciphers=None, ssl_minimum_version='default', daemon=False, record=None, security_proxy=None, traffic=True, web='/usr/share/spice-html5', file_only=True, RequestHandlerClass=websocketproxy.NovaProxyRequestHandler) mock_start.assert_called_once_with() @mock.patch('os.path.exists', return_value=False) @mock.patch('sys.exit', side_effect=test.TestingException) def test_proxy_exit_with_error(self, mock_exit, mock_exists): self.flags(ssl_only=True) self.assertRaises(test.TestingException, baseproxy.proxy, '0.0.0.0', '6080') self.assertEqual(self.stderr.getvalue(), "SSL only and self.pem not found\n") mock_exit.assert_called_once_with(-1) @mock.patch('os.path.exists', return_value=True) @mock.patch('nova.console.websocketproxy.NovaWebSocketProxy.__init__', return_value=None) @mock.patch('nova.console.websocketproxy.NovaWebSocketProxy.start_server') def test_proxy_ssl_settings(self, mock_start, mock_init, mock_exists): self.flags(ssl_minimum_version='tlsv1_3', group='console') self.flags(ssl_ciphers='ALL:!aNULL', group='console') baseproxy.proxy('0.0.0.0', '6080') mock_init.assert_called_once_with( listen_host='0.0.0.0', listen_port='6080', source_is_ipv6=False, cert='self.pem', key=None, ssl_only=False, ssl_ciphers='ALL:!aNULL', ssl_minimum_version='tlsv1_3', daemon=False, record=None, security_proxy=None, traffic=True, web='/usr/share/spice-html5', file_only=True, RequestHandlerClass=websocketproxy.NovaProxyRequestHandler) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/cmd/test_cmd_db_blocks.py0000664000175000017500000000260600000000000022577 0ustar00zuulzuul00000000000000# Copyright 2016 Red Hat # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import contextlib import mock from nova.cmd import compute from nova.db import api as db from nova import exception from nova import test @contextlib.contextmanager def restore_db(): orig = db.IMPL try: yield finally: db.IMPL = orig class ComputeMainTest(test.NoDBTestCase): @mock.patch('nova.conductor.api.API.wait_until_ready') @mock.patch('oslo_reports.guru_meditation_report') def _call_main(self, mod, gmr, cond): @mock.patch.object(mod, 'config') @mock.patch.object(mod, 'service') def run_main(serv, conf): mod.main() run_main() def test_compute_main_blocks_db(self): with restore_db(): self._call_main(compute) self.assertRaises(exception.DBNotAllowed, db.instance_get, 1, 2) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/cmd/test_common.py0000664000175000017500000001235400000000000021323 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Unit tests for the common functions used by different CLI interfaces. 
""" import sys import fixtures import mock from six.moves import StringIO from nova.cmd import common as cmd_common from nova.db import api from nova import exception from nova import test class TestCmdCommon(test.NoDBTestCase): @mock.patch.object(cmd_common, 'LOG') @mock.patch.object(api, 'IMPL') def test_block_db_access(self, mock_db_IMPL, mock_LOG): cmd_common.block_db_access('unit-tests') self.assertEqual(api.IMPL, api.IMPL.foo) self.assertRaises(exception.DBNotAllowed, api.IMPL) self.assertEqual('unit-tests', mock_LOG.error.call_args[0][1]['service_name']) def test_args_decorator(self): @cmd_common.args(bar='') @cmd_common.args('foo') def f(): pass f_args = f.__dict__['args'] bar_args = ((), {'bar': ''}) foo_args = (('foo', ), {}) self.assertEqual(bar_args, f_args[0]) self.assertEqual(foo_args, f_args[1]) def test_methods_of(self): class foo(object): foo = 'bar' def public(self): pass def _private(self): pass methods = cmd_common.methods_of(foo()) method_names = [method_name for method_name, method in methods] self.assertIn('public', method_names) self.assertNotIn('_private', method_names) self.assertNotIn('foo', method_names) @mock.patch.object(cmd_common, 'CONF') def test_print_bash_completion_no_query_category(self, mock_CONF): self.useFixture(fixtures.MonkeyPatch('sys.stdout', StringIO())) mock_CONF.category.query_category = None categories = {'foo': mock.sentinel.foo, 'bar': mock.sentinel.bar} cmd_common.print_bash_completion(categories) self.assertEqual(' '.join(categories.keys()) + '\n', sys.stdout.getvalue()) @mock.patch.object(cmd_common, 'CONF') def test_print_bash_completion_mismatch(self, mock_CONF): self.useFixture(fixtures.MonkeyPatch('sys.stdout', StringIO())) mock_CONF.category.query_category = 'bar' categories = {'foo': mock.sentinel.foo} cmd_common.print_bash_completion(categories) self.assertEqual('', sys.stdout.getvalue()) @mock.patch.object(cmd_common, 'methods_of') @mock.patch.object(cmd_common, 'CONF') def test_print_bash_completion(self, mock_CONF, mock_method_of): self.useFixture(fixtures.MonkeyPatch('sys.stdout', StringIO())) mock_CONF.category.query_category = 'foo' actions = [('f1', mock.sentinel.f1), ('f2', mock.sentinel.f2)] mock_method_of.return_value = actions mock_fn = mock.Mock() categories = {'foo': mock_fn} cmd_common.print_bash_completion(categories) mock_fn.assert_called_once_with() mock_method_of.assert_called_once_with(mock_fn.return_value) self.assertEqual(' '.join([k for k, v in actions]) + '\n', sys.stdout.getvalue()) @mock.patch.object(cmd_common, 'validate_args') @mock.patch.object(cmd_common, 'CONF') def test_get_action_fn(self, mock_CONF, mock_validate_args): mock_validate_args.return_value = None action_args = [u'arg'] action_kwargs = ['missing', 'foo', 'bar'] mock_CONF.category.action_fn = mock.sentinel.action_fn mock_CONF.category.action_args = action_args mock_CONF.category.action_kwargs = action_kwargs mock_CONF.category.action_kwarg_foo = u'foo_val' mock_CONF.category.action_kwarg_bar = True mock_CONF.category.action_kwarg_missing = None actual_fn, actual_args, actual_kwargs = cmd_common.get_action_fn() self.assertEqual(mock.sentinel.action_fn, actual_fn) self.assertEqual(action_args, actual_args) self.assertEqual(u'foo_val', actual_kwargs['foo']) self.assertTrue(actual_kwargs['bar']) self.assertNotIn('missing', actual_kwargs) @mock.patch.object(cmd_common, 'validate_args') @mock.patch.object(cmd_common, 'CONF') def test_get_action_fn_missing_args(self, mock_CONF, mock_validate_args): # Don't leak the actual print call 
self.useFixture(fixtures.MonkeyPatch('sys.stdout', StringIO())) mock_validate_args.return_value = ['foo'] mock_CONF.category.action_fn = mock.sentinel.action_fn mock_CONF.category.action_args = [] mock_CONF.category.action_kwargs = [] self.assertRaises(exception.Invalid, cmd_common.get_action_fn) mock_CONF.print_help.assert_called_once_with() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/cmd/test_manage.py0000664000175000017500000043026300000000000021266 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2011 Ilya Alekseyev # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import sys import warnings import ddt import fixtures import mock from oslo_db import exception as db_exc from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel from oslo_utils import uuidutils from six.moves import StringIO from nova.cmd import manage from nova import conf from nova import context from nova.db import api as db from nova.db import migration from nova.db.sqlalchemy import migration as sqla_migration from nova import exception from nova import objects from nova.scheduler.client import report from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.unit import fake_requests CONF = conf.CONF class UtilitiesTestCase(test.NoDBTestCase): def test_mask_passwd(self): # try to trip up the regex match with extra : and @. url1 = ("http://user:pass@domain.com:1234/something?" "email=me@somewhere.com") self.assertEqual( ("http://user:****@domain.com:1234/something?" "email=me@somewhere.com"), manage.mask_passwd_in_url(url1)) # pretty standard kinds of urls that we expect, have different # schemes. This ensures none of the parts get lost. url2 = "mysql+pymysql://root:pass@127.0.0.1/nova_api?charset=utf8" self.assertEqual( "mysql+pymysql://root:****@127.0.0.1/nova_api?charset=utf8", manage.mask_passwd_in_url(url2)) url3 = "rabbit://stackrabbit:pass@10.42.0.53:5672/" self.assertEqual( "rabbit://stackrabbit:****@10.42.0.53:5672/", manage.mask_passwd_in_url(url3)) url4 = ("mysql+pymysql://nova:my_password@my_IP/nova_api?" "charset=utf8&ssl_ca=/etc/nova/tls/mysql/ca-cert.pem" "&ssl_cert=/etc/nova/tls/mysql/server-cert.pem" "&ssl_key=/etc/nova/tls/mysql/server-key.pem") url4_safe = ("mysql+pymysql://nova:****@my_IP/nova_api?" 
"charset=utf8&ssl_ca=/etc/nova/tls/mysql/ca-cert.pem" "&ssl_cert=/etc/nova/tls/mysql/server-cert.pem" "&ssl_key=/etc/nova/tls/mysql/server-key.pem") self.assertEqual( url4_safe, manage.mask_passwd_in_url(url4)) class DbCommandsTestCase(test.NoDBTestCase): USES_DB_SELF = True def setUp(self): super(DbCommandsTestCase, self).setUp() self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) self.commands = manage.DbCommands() self.useFixture(nova_fixtures.Database()) self.useFixture(nova_fixtures.Database(database='api')) def test_online_migrations_unique(self): names = [m.__name__ for m in self.commands.online_migrations] self.assertEqual(len(set(names)), len(names), 'Online migrations must have a unique name') def test_archive_deleted_rows_negative(self): self.assertEqual(2, self.commands.archive_deleted_rows(-1)) def test_archive_deleted_rows_large_number(self): large_number = '1' * 100 self.assertEqual(2, self.commands.archive_deleted_rows(large_number)) @mock.patch.object(manage.DbCommands, 'purge') @mock.patch.object(db, 'archive_deleted_rows', # Each call to archive in each cell returns # total_rows_archived=15, so passing max_rows=30 will # only iterate the first two cells. return_value=(dict(instances=10, consoles=5), list(), 15)) def _test_archive_deleted_rows_all_cells(self, mock_db_archive, mock_purge, purge=False): cell_dbs = nova_fixtures.CellDatabases() cell_dbs.add_cell_database('fake:///db1') cell_dbs.add_cell_database('fake:///db2') cell_dbs.add_cell_database('fake:///db3') self.useFixture(cell_dbs) ctxt = context.RequestContext() cell_mapping1 = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection='fake:///db1', transport_url='fake:///mq1', name='cell1') cell_mapping1.create() cell_mapping2 = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection='fake:///db2', transport_url='fake:///mq2', name='cell2') cell_mapping2.create() cell_mapping3 = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection='fake:///db3', transport_url='fake:///mq3', name='cell3') cell_mapping3.create() # Archive with max_rows=30, so we test the case that when we are out of # limit, we don't go to the remaining cell. result = self.commands.archive_deleted_rows(30, verbose=True, all_cells=True, purge=purge) mock_db_archive.assert_has_calls([ # Called with max_rows=30 but only 15 were archived. mock.call(test.MatchType(context.RequestContext), 30, before=None), # So the total from the last call was 15 and the new max_rows=15 # for the next call in the second cell. 
mock.call(test.MatchType(context.RequestContext), 15, before=None) ]) output = self.output.getvalue() expected = '''\ +-----------------+-------------------------+ | Table | Number of Rows Archived | +-----------------+-------------------------+ | cell1.consoles | 5 | | cell1.instances | 10 | | cell2.consoles | 5 | | cell2.instances | 10 | +-----------------+-------------------------+ ''' if purge: expected += 'Rows were archived, running purge...\n' mock_purge.assert_called_once_with(purge_all=True, verbose=True, all_cells=True) else: mock_purge.assert_not_called() self.assertEqual(expected, output) self.assertEqual(1, result) def test_archive_deleted_rows_all_cells(self): self._test_archive_deleted_rows_all_cells() def test_archive_deleted_rows_all_cells_purge(self): self._test_archive_deleted_rows_all_cells(purge=True) @mock.patch.object(db, 'archive_deleted_rows') def test_archive_deleted_rows_all_cells_until_complete(self, mock_db_archive): # First two calls to archive in each cell return total_rows_archived=15 # and the last call returns 0 (nothing left to archive). fake_return = (dict(instances=10, consoles=5), list(), 15) mock_db_archive.side_effect = [fake_return, (dict(), list(), 0), fake_return, (dict(), list(), 0), (dict(), list(), 0)] cell_dbs = nova_fixtures.CellDatabases() cell_dbs.add_cell_database('fake:///db1') cell_dbs.add_cell_database('fake:///db2') cell_dbs.add_cell_database('fake:///db3') self.useFixture(cell_dbs) ctxt = context.RequestContext() cell_mapping1 = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection='fake:///db1', transport_url='fake:///mq1', name='cell1') cell_mapping1.create() cell_mapping2 = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection='fake:///db2', transport_url='fake:///mq2', name='cell2') cell_mapping2.create() cell_mapping3 = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection='fake:///db3', transport_url='fake:///mq3', name='cell3') cell_mapping3.create() # Archive with max_rows=30, so we test that subsequent max_rows are not # reduced when until_complete=True. There is no max total limit. result = self.commands.archive_deleted_rows(30, verbose=True, all_cells=True, until_complete=True) mock_db_archive.assert_has_calls([ # Called with max_rows=30 but only 15 were archived. mock.call(test.MatchType(context.RequestContext), 30, before=None), # Called with max_rows=30 but 0 were archived (nothing left to # archive in this cell) mock.call(test.MatchType(context.RequestContext), 30, before=None), # So the total from the last call was 0 and the new max_rows=30 # because until_complete=True. 
mock.call(test.MatchType(context.RequestContext), 30, before=None), # Called with max_rows=30 but 0 were archived (nothing left to # archive in this cell) mock.call(test.MatchType(context.RequestContext), 30, before=None), # Called one final time with max_rows=30 mock.call(test.MatchType(context.RequestContext), 30, before=None) ]) output = self.output.getvalue() expected = '''\ Archiving.....complete +-----------------+-------------------------+ | Table | Number of Rows Archived | +-----------------+-------------------------+ | cell1.consoles | 5 | | cell1.instances | 10 | | cell2.consoles | 5 | | cell2.instances | 10 | +-----------------+-------------------------+ ''' self.assertEqual(expected, output) self.assertEqual(1, result) @mock.patch.object(db, 'archive_deleted_rows', return_value=( dict(instances=10, consoles=5), list(), 15)) def _test_archive_deleted_rows(self, mock_db_archive, verbose=False): result = self.commands.archive_deleted_rows(20, verbose=verbose) mock_db_archive.assert_called_once_with( test.MatchType(context.RequestContext), 20, before=None) output = self.output.getvalue() if verbose: expected = '''\ +-----------+-------------------------+ | Table | Number of Rows Archived | +-----------+-------------------------+ | consoles | 5 | | instances | 10 | +-----------+-------------------------+ ''' self.assertEqual(expected, output) else: self.assertEqual(0, len(output)) self.assertEqual(1, result) def test_archive_deleted_rows(self): # Tests that we don't show any table output (not verbose). self._test_archive_deleted_rows() def test_archive_deleted_rows_verbose(self): # Tests that we get table output. self._test_archive_deleted_rows(verbose=True) @mock.patch.object(db, 'archive_deleted_rows') @mock.patch.object(objects.CellMappingList, 'get_all') def test_archive_deleted_rows_until_complete(self, mock_get_all, mock_db_archive, verbose=False): mock_db_archive.side_effect = [ ({'instances': 10, 'instance_extra': 5}, list(), 15), ({'instances': 5, 'instance_faults': 1}, list(), 6), ({}, list(), 0)] result = self.commands.archive_deleted_rows(20, verbose=verbose, until_complete=True) self.assertEqual(1, result) if verbose: expected = """\ Archiving.....complete +-----------------+-------------------------+ | Table | Number of Rows Archived | +-----------------+-------------------------+ | instance_extra | 5 | | instance_faults | 1 | | instances | 15 | +-----------------+-------------------------+ """ else: expected = '' self.assertEqual(expected, self.output.getvalue()) mock_db_archive.assert_has_calls([ mock.call(test.MatchType(context.RequestContext), 20, before=None), mock.call(test.MatchType(context.RequestContext), 20, before=None), mock.call(test.MatchType(context.RequestContext), 20, before=None), ]) def test_archive_deleted_rows_until_complete_quiet(self): self.test_archive_deleted_rows_until_complete(verbose=False) @mock.patch('nova.db.sqlalchemy.api.purge_shadow_tables') @mock.patch.object(db, 'archive_deleted_rows') @mock.patch.object(objects.CellMappingList, 'get_all') def test_archive_deleted_rows_until_stopped(self, mock_get_all, mock_db_archive, mock_db_purge, verbose=True): mock_db_archive.side_effect = [ ({'instances': 10, 'instance_extra': 5}, list(), 15), ({'instances': 5, 'instance_faults': 1}, list(), 6), KeyboardInterrupt] result = self.commands.archive_deleted_rows(20, verbose=verbose, until_complete=True, purge=True) self.assertEqual(1, result) if verbose: expected = """\ Archiving.....stopped +-----------------+-------------------------+ | Table | 
Number of Rows Archived | +-----------------+-------------------------+ | instance_extra | 5 | | instance_faults | 1 | | instances | 15 | +-----------------+-------------------------+ Rows were archived, running purge... """ else: expected = '' self.assertEqual(expected, self.output.getvalue()) mock_db_archive.assert_has_calls([ mock.call(test.MatchType(context.RequestContext), 20, before=None), mock.call(test.MatchType(context.RequestContext), 20, before=None), mock.call(test.MatchType(context.RequestContext), 20, before=None), ]) mock_db_purge.assert_called_once_with(mock.ANY, None, status_fn=mock.ANY) @mock.patch.object(db, 'archive_deleted_rows') def test_archive_deleted_rows_until_stopped_cells(self, mock_db_archive, verbose=True): # Test when archive with all_cells=True and until_complete=True, # when hit KeyboardInterrupt, it will directly return and not # process remaining cells. mock_db_archive.side_effect = [ ({'instances': 10, 'instance_extra': 5}, list(), 15), KeyboardInterrupt] cell_dbs = nova_fixtures.CellDatabases() cell_dbs.add_cell_database('fake:///db1') cell_dbs.add_cell_database('fake:///db2') cell_dbs.add_cell_database('fake:///db3') self.useFixture(cell_dbs) ctxt = context.RequestContext() cell_mapping1 = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection='fake:///db1', transport_url='fake:///mq1', name='cell1') cell_mapping1.create() cell_mapping2 = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection='fake:///db2', transport_url='fake:///mq2', name='cell2') cell_mapping2.create() cell_mapping3 = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection='fake:///db3', transport_url='fake:///mq3', name='cell3') cell_mapping3.create() result = self.commands.archive_deleted_rows(20, verbose=verbose, until_complete=True, all_cells=True) self.assertEqual(1, result) if verbose: expected = '''\ Archiving....stopped +----------------------+-------------------------+ | Table | Number of Rows Archived | +----------------------+-------------------------+ | cell1.instance_extra | 5 | | cell1.instances | 10 | +----------------------+-------------------------+ ''' else: expected = '' self.assertEqual(expected, self.output.getvalue()) mock_db_archive.assert_has_calls([ mock.call(test.MatchType(context.RequestContext), 20, before=None), mock.call(test.MatchType(context.RequestContext), 20, before=None) ]) def test_archive_deleted_rows_until_stopped_quiet(self): self.test_archive_deleted_rows_until_stopped(verbose=False) @mock.patch.object(db, 'archive_deleted_rows') @mock.patch.object(objects.CellMappingList, 'get_all') def test_archive_deleted_rows_before(self, mock_get_all, mock_db_archive): mock_db_archive.side_effect = [ ({'instances': 10, 'instance_extra': 5}, list(), 15), ({'instances': 5, 'instance_faults': 1}, list(), 6), KeyboardInterrupt] result = self.commands.archive_deleted_rows(20, before='2017-01-13') mock_db_archive.assert_called_once_with( test.MatchType(context.RequestContext), 20, before=datetime.datetime(2017, 1, 13)) self.assertEqual(1, result) @mock.patch.object(db, 'archive_deleted_rows', return_value=({}, [], 0)) @mock.patch.object(objects.CellMappingList, 'get_all') def test_archive_deleted_rows_verbose_no_results(self, mock_get_all, mock_db_archive): result = self.commands.archive_deleted_rows(20, verbose=True, purge=True) mock_db_archive.assert_called_once_with( test.MatchType(context.RequestContext), 20, before=None) output = self.output.getvalue() # If nothing 
was archived, there should be no purge messages self.assertIn('Nothing was archived.', output) self.assertEqual(0, result) @mock.patch.object(db, 'archive_deleted_rows') @mock.patch.object(objects.RequestSpec, 'destroy_bulk') @mock.patch.object(objects.InstanceGroup, 'destroy_members_bulk') def test_archive_deleted_rows_and_api_db_records( self, mock_members_destroy, mock_reqspec_destroy, mock_db_archive, verbose=True): self.useFixture(nova_fixtures.Database()) self.useFixture(nova_fixtures.Database(database='api')) ctxt = context.RequestContext('fake-user', 'fake_project') cell_uuid = uuidutils.generate_uuid() cell_mapping = objects.CellMapping(context=ctxt, uuid=cell_uuid, database_connection='fake:///db', transport_url='fake:///mq', name='cell1') cell_mapping.create() uuids = [] for i in range(2): uuid = uuidutils.generate_uuid() uuids.append(uuid) objects.Instance(ctxt, project_id=ctxt.project_id, uuid=uuid)\ .create() objects.InstanceMapping(ctxt, project_id=ctxt.project_id, cell_mapping=cell_mapping, instance_uuid=uuid)\ .create() mock_db_archive.return_value = ( dict(instances=2, consoles=5), uuids, 7) mock_reqspec_destroy.return_value = 2 mock_members_destroy.return_value = 0 result = self.commands.archive_deleted_rows(20, verbose=verbose, all_cells=True) self.assertEqual(1, result) mock_db_archive.assert_has_calls([ mock.call(test.MatchType(context.RequestContext), 20, before=None) ]) self.assertEqual(1, mock_reqspec_destroy.call_count) mock_members_destroy.assert_called_once() output = self.output.getvalue() if verbose: expected = '''\ +------------------------------+-------------------------+ | Table | Number of Rows Archived | +------------------------------+-------------------------+ | API_DB.instance_group_member | 0 | | API_DB.instance_mappings | 2 | | API_DB.request_specs | 2 | | cell1.consoles | 5 | | cell1.instances | 2 | +------------------------------+-------------------------+ ''' self.assertEqual(expected, output) else: self.assertEqual(0, len(output)) @mock.patch.object(objects.CellMappingList, 'get_all', side_effect=db_exc.CantStartEngineError) def test_archive_deleted_rows_without_api_connection_configured(self, mock_get_all): result = self.commands.archive_deleted_rows(20, verbose=True) mock_get_all.assert_called_once() output = self.output.getvalue() expected = '''\ Failed to connect to API DB so aborting this archival attempt. \ Please check your config file to make sure that [api_database]/connection \ is set and run this command again. 
''' self.assertEqual(expected, output) self.assertEqual(3, result) @mock.patch('nova.db.sqlalchemy.api.purge_shadow_tables') def test_purge_all(self, mock_purge): mock_purge.return_value = 1 ret = self.commands.purge(purge_all=True) self.assertEqual(0, ret) mock_purge.assert_called_once_with(mock.ANY, None, status_fn=mock.ANY) @mock.patch('nova.db.sqlalchemy.api.purge_shadow_tables') def test_purge_date(self, mock_purge): mock_purge.return_value = 1 ret = self.commands.purge(before='oct 21 2015') self.assertEqual(0, ret) mock_purge.assert_called_once_with(mock.ANY, datetime.datetime(2015, 10, 21), status_fn=mock.ANY) @mock.patch('nova.db.sqlalchemy.api.purge_shadow_tables') def test_purge_date_fail(self, mock_purge): ret = self.commands.purge(before='notadate') self.assertEqual(2, ret) self.assertFalse(mock_purge.called) @mock.patch('nova.db.sqlalchemy.api.purge_shadow_tables') def test_purge_no_args(self, mock_purge): ret = self.commands.purge() self.assertEqual(1, ret) self.assertFalse(mock_purge.called) @mock.patch('nova.db.sqlalchemy.api.purge_shadow_tables') def test_purge_nothing_deleted(self, mock_purge): mock_purge.return_value = 0 ret = self.commands.purge(purge_all=True) self.assertEqual(3, ret) @mock.patch('nova.db.sqlalchemy.api.purge_shadow_tables') @mock.patch('nova.objects.CellMappingList.get_all') def test_purge_all_cells(self, mock_get_cells, mock_purge): cell1 = objects.CellMapping(uuid=uuidsentinel.cell1, name='cell1', database_connection='foo1', transport_url='bar1') cell2 = objects.CellMapping(uuid=uuidsentinel.cell2, name='cell2', database_connection='foo2', transport_url='bar2') mock_get_cells.return_value = [cell1, cell2] values = [123, 456] def fake_purge(*args, **kwargs): val = values.pop(0) kwargs['status_fn'](val) return val mock_purge.side_effect = fake_purge ret = self.commands.purge(purge_all=True, all_cells=True, verbose=True) self.assertEqual(0, ret) mock_get_cells.assert_called_once_with(mock.ANY) output = self.output.getvalue() expected = """\ Cell %s: 123 Cell %s: 456 """ % (cell1.identity, cell2.identity) self.assertEqual(expected, output) @mock.patch('nova.objects.CellMappingList.get_all') def test_purge_all_cells_no_api_config(self, mock_get_cells): mock_get_cells.side_effect = db_exc.DBError ret = self.commands.purge(purge_all=True, all_cells=True) self.assertEqual(4, ret) self.assertIn('Unable to get cell list', self.output.getvalue()) @mock.patch.object(migration, 'db_null_instance_uuid_scan', return_value={'foo': 0}) def test_null_instance_uuid_scan_no_records_found(self, mock_scan): self.commands.null_instance_uuid_scan() self.assertIn("There were no records found", self.output.getvalue()) @mock.patch.object(migration, 'db_null_instance_uuid_scan', return_value={'foo': 1, 'bar': 0}) def _test_null_instance_uuid_scan(self, mock_scan, delete): self.commands.null_instance_uuid_scan(delete) output = self.output.getvalue() if delete: self.assertIn("Deleted 1 records from table 'foo'.", output) self.assertNotIn("Deleted 0 records from table 'bar'.", output) else: self.assertIn("1 records in the 'foo' table", output) self.assertNotIn("0 records in the 'bar' table", output) self.assertNotIn("There were no records found", output) def test_null_instance_uuid_scan_readonly(self): self._test_null_instance_uuid_scan(delete=False) def test_null_instance_uuid_scan_delete(self): self._test_null_instance_uuid_scan(delete=True) @mock.patch.object(sqla_migration, 'db_version', return_value=2) def test_version(self, sqla_migrate): self.commands.version() 
sqla_migrate.assert_called_once_with(context=None, database='main') @mock.patch.object(sqla_migration, 'db_sync') def test_sync(self, sqla_sync): self.commands.sync(version=4, local_cell=True) sqla_sync.assert_called_once_with(context=None, version=4, database='main') @mock.patch('nova.db.migration.db_sync') @mock.patch.object(objects.CellMapping, 'get_by_uuid', return_value='map') def test_sync_cell0(self, mock_get_by_uuid, mock_db_sync): ctxt = context.get_admin_context() cell_ctxt = context.get_admin_context() with test.nested( mock.patch('nova.context.RequestContext', return_value=ctxt), mock.patch('nova.context.target_cell')) \ as (mock_get_context, mock_target_cell): fake_target_cell_mock = mock.MagicMock() fake_target_cell_mock.__enter__.return_value = cell_ctxt mock_target_cell.return_value = fake_target_cell_mock self.commands.sync(version=4) mock_get_by_uuid.assert_called_once_with(ctxt, objects.CellMapping.CELL0_UUID) mock_target_cell.assert_called_once_with(ctxt, 'map') db_sync_calls = [ mock.call(4, context=cell_ctxt), mock.call(4) ] mock_db_sync.assert_has_calls(db_sync_calls) @mock.patch.object(objects.CellMapping, 'get_by_uuid', side_effect=test.TestingException('invalid connection')) def test_sync_cell0_unknown_error(self, mock_get_by_uuid): """Asserts that a detailed error message is given when an unknown error occurs trying to get the cell0 cell mapping. """ result = self.commands.sync() self.assertEqual(1, result) mock_get_by_uuid.assert_called_once_with( test.MatchType(context.RequestContext), objects.CellMapping.CELL0_UUID) expected = """ERROR: Could not access cell0. Has the nova_api database been created? Has the nova_cell0 database been created? Has "nova-manage api_db sync" been run? Has "nova-manage cell_v2 map_cell0" been run? Is [api_database]/connection set in nova.conf? Is the cell0 database connection URL correct? 
Error: invalid connection """ self.assertEqual(expected, self.output.getvalue()) def _fake_db_command(self, migrations=None): if migrations is None: mock_mig_1 = mock.MagicMock(__name__="mock_mig_1") mock_mig_2 = mock.MagicMock(__name__="mock_mig_2") mock_mig_1.return_value = (5, 4) mock_mig_2.return_value = (6, 6) migrations = (mock_mig_1, mock_mig_2) class _CommandSub(manage.DbCommands): online_migrations = migrations return _CommandSub @mock.patch('nova.context.get_admin_context') def test_online_migrations(self, mock_get_context): self.useFixture(fixtures.MonkeyPatch('sys.stdout', StringIO())) ctxt = mock_get_context.return_value command_cls = self._fake_db_command() command = command_cls() command.online_data_migrations(10) command_cls.online_migrations[0].assert_called_once_with(ctxt, 10) command_cls.online_migrations[1].assert_called_once_with(ctxt, 6) expected = """\ 5 rows matched query mock_mig_1, 4 migrated 6 rows matched query mock_mig_2, 6 migrated +------------+--------------+-----------+ | Migration | Total Needed | Completed | +------------+--------------+-----------+ | mock_mig_1 | 5 | 4 | | mock_mig_2 | 6 | 6 | +------------+--------------+-----------+ """ self.assertEqual(expected, sys.stdout.getvalue()) @mock.patch('nova.context.get_admin_context') def test_online_migrations_no_max_count(self, mock_get_context): self.useFixture(fixtures.MonkeyPatch('sys.stdout', StringIO())) total = [120] batches = [50, 40, 30, 0] runs = [] def fake_migration(context, count): self.assertEqual(mock_get_context.return_value, context) runs.append(count) count = batches.pop(0) total[0] -= count return count, count command_cls = self._fake_db_command((fake_migration,)) command = command_cls() command.online_data_migrations(None) expected = """\ Running batches of 50 until complete 50 rows matched query fake_migration, 50 migrated 40 rows matched query fake_migration, 40 migrated 30 rows matched query fake_migration, 30 migrated +----------------+--------------+-----------+ | Migration | Total Needed | Completed | +----------------+--------------+-----------+ | fake_migration | 120 | 120 | +----------------+--------------+-----------+ """ self.assertEqual(expected, sys.stdout.getvalue()) self.assertEqual([], batches) self.assertEqual(0, total[0]) self.assertEqual([50, 50, 50, 50], runs) @mock.patch('nova.context.get_admin_context') def test_online_migrations_error(self, mock_get_context): good_remaining = [50] def good_migration(context, count): self.assertEqual(mock_get_context.return_value, context) found = good_remaining[0] done = min(found, count) good_remaining[0] -= done return found, done bad_migration = mock.MagicMock() bad_migration.side_effect = test.TestingException bad_migration.__name__ = 'bad' command_cls = self._fake_db_command((bad_migration, good_migration)) command = command_cls() # bad_migration raises an exception, but it could be because # good_migration had not completed yet. We should get 1 in this case, # because some work was done, and the command should be reiterated. self.assertEqual(1, command.online_data_migrations(max_count=50)) # When running this for the second time, there's no work left for # good_migration to do, but bad_migration still fails - should # get 2 this time. 
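        # Taken together, these tests pin down the online_data_migrations
        # exit codes exercised in this class: 0 when nothing is left to
        # migrate, 1 when progress was made but work remains, 2 when the
        # only remaining migrations keep raising, and 127 for invalid
        # --max-count values (see test_online_migrations_bad_max below).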
self.assertEqual(2, command.online_data_migrations(max_count=50)) # When --max-count is not used, we should get 2 if all possible # migrations completed but some raise exceptions good_remaining = [125] self.assertEqual(2, command.online_data_migrations(None)) def test_online_migrations_bad_max(self): self.assertEqual(127, self.commands.online_data_migrations(max_count=-2)) self.assertEqual(127, self.commands.online_data_migrations(max_count='a')) self.assertEqual(127, self.commands.online_data_migrations(max_count=0)) def test_online_migrations_no_max(self): with mock.patch.object(self.commands, '_run_migration') as rm: rm.return_value = {}, False self.assertEqual(0, self.commands.online_data_migrations()) def test_online_migrations_finished(self): with mock.patch.object(self.commands, '_run_migration') as rm: rm.return_value = {}, False self.assertEqual(0, self.commands.online_data_migrations(max_count=5)) def test_online_migrations_not_finished(self): with mock.patch.object(self.commands, '_run_migration') as rm: rm.return_value = {'mig': (10, 5)}, False self.assertEqual(1, self.commands.online_data_migrations(max_count=5)) class ApiDbCommandsTestCase(test.NoDBTestCase): def setUp(self): super(ApiDbCommandsTestCase, self).setUp() self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) self.commands = manage.ApiDbCommands() @mock.patch.object(sqla_migration, 'db_version', return_value=2) def test_version(self, sqla_migrate): self.commands.version() sqla_migrate.assert_called_once_with(context=None, database='api') @mock.patch.object(sqla_migration, 'db_sync') def test_sync(self, sqla_sync): self.commands.sync(version=4) sqla_sync.assert_called_once_with(context=None, version=4, database='api') @ddt.ddt class CellV2CommandsTestCase(test.NoDBTestCase): USES_DB_SELF = True def setUp(self): super(CellV2CommandsTestCase, self).setUp() self.useFixture(nova_fixtures.Database()) self.useFixture(nova_fixtures.Database(database='api')) self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) self.commands = manage.CellV2Commands() def test_map_cell_and_hosts(self): # Create some fake compute nodes and check if they get host mappings ctxt = context.RequestContext() values = { 'vcpus': 4, 'memory_mb': 4096, 'local_gb': 1024, 'vcpus_used': 2, 'memory_mb_used': 2048, 'local_gb_used': 512, 'hypervisor_type': 'Hyper-Dan-VM-ware', 'hypervisor_version': 1001, 'cpu_info': 'Schmintel i786', } for i in range(3): host = 'host%s' % i compute_node = objects.ComputeNode(ctxt, host=host, **values) compute_node.create() cell_transport_url = "fake://guest:devstack@127.0.0.1:9999/" self.commands.map_cell_and_hosts(cell_transport_url, name='ssd', verbose=True) cell_mapping_uuid = self.output.getvalue().strip() # Verify the cell mapping cell_mapping = objects.CellMapping.get_by_uuid(ctxt, cell_mapping_uuid) self.assertEqual('ssd', cell_mapping.name) self.assertEqual(cell_transport_url, cell_mapping.transport_url) # Verify the host mappings for i in range(3): host = 'host%s' % i host_mapping = objects.HostMapping.get_by_host(ctxt, host) self.assertEqual(cell_mapping.uuid, host_mapping.cell_mapping.uuid) def test_map_cell_and_hosts_duplicate(self): # Create a cell mapping and hosts and check that nothing new is created ctxt = context.RequestContext() cell_mapping_uuid = uuidutils.generate_uuid() cell_mapping = objects.CellMapping( ctxt, uuid=cell_mapping_uuid, name='fake', transport_url='fake://', database_connection='fake://') cell_mapping.create() # 
Create compute nodes that will map to the cell values = { 'vcpus': 4, 'memory_mb': 4096, 'local_gb': 1024, 'vcpus_used': 2, 'memory_mb_used': 2048, 'local_gb_used': 512, 'hypervisor_type': 'Hyper-Dan-VM-ware', 'hypervisor_version': 1001, 'cpu_info': 'Schmintel i786', } for i in range(3): host = 'host%s' % i compute_node = objects.ComputeNode(ctxt, host=host, **values) compute_node.create() host_mapping = objects.HostMapping( ctxt, host=host, cell_mapping=cell_mapping) host_mapping.create() cell_transport_url = "fake://guest:devstack@127.0.0.1:9999/" retval = self.commands.map_cell_and_hosts(cell_transport_url, name='ssd', verbose=True) self.assertEqual(0, retval) output = self.output.getvalue().strip() expected = '' for i in range(3): expected += ('Host host%s is already mapped to cell %s\n' % (i, cell_mapping_uuid)) expected += 'All hosts are already mapped to cell(s).' self.assertEqual(expected, output) def test_map_cell_and_hosts_partial_update(self): # Create a cell mapping and partial hosts and check that # missing HostMappings are created ctxt = context.RequestContext() cell_mapping_uuid = uuidutils.generate_uuid() cell_mapping = objects.CellMapping( ctxt, uuid=cell_mapping_uuid, name='fake', transport_url='fake://', database_connection='fake://') cell_mapping.create() # Create compute nodes that will map to the cell values = { 'vcpus': 4, 'memory_mb': 4096, 'local_gb': 1024, 'vcpus_used': 2, 'memory_mb_used': 2048, 'local_gb_used': 512, 'hypervisor_type': 'Hyper-Dan-VM-ware', 'hypervisor_version': 1001, 'cpu_info': 'Schmintel i786', } for i in range(3): host = 'host%s' % i compute_node = objects.ComputeNode(ctxt, host=host, **values) compute_node.create() # NOTE(danms): Create a second node on one compute to make sure # we handle that case compute_node = objects.ComputeNode(ctxt, host='host0', **values) compute_node.create() # Only create 2 existing HostMappings out of 3 for i in range(2): host = 'host%s' % i host_mapping = objects.HostMapping( ctxt, host=host, cell_mapping=cell_mapping) host_mapping.create() cell_transport_url = "fake://guest:devstack@127.0.0.1:9999/" self.commands.map_cell_and_hosts(cell_transport_url, name='ssd', verbose=True) # Verify the HostMapping for the last host was created host_mapping = objects.HostMapping.get_by_host(ctxt, 'host2') self.assertEqual(cell_mapping.uuid, host_mapping.cell_mapping.uuid) # Verify the output output = self.output.getvalue().strip() expected = '' for i in [0, 1, 0]: expected += ('Host host%s is already mapped to cell %s\n' % (i, cell_mapping_uuid)) # The expected CellMapping UUID for the last host should be the same expected += cell_mapping.uuid self.assertEqual(expected, output) def test_map_cell_and_hosts_no_hosts_found(self): cell_transport_url = "fake://guest:devstack@127.0.0.1:9999/" retval = self.commands.map_cell_and_hosts(cell_transport_url, name='ssd', verbose=True) self.assertEqual(0, retval) output = self.output.getvalue().strip() expected = 'No hosts found to map to cell, exiting.' 
self.assertEqual(expected, output) def test_map_cell_and_hosts_no_transport_url(self): self.flags(transport_url=None) retval = self.commands.map_cell_and_hosts() self.assertEqual(1, retval) output = self.output.getvalue().strip() expected = ('Must specify --transport-url if [DEFAULT]/transport_url ' 'is not set in the configuration file.') self.assertEqual(expected, output) def test_map_cell_and_hosts_transport_url_config(self): self.flags(transport_url = "fake://guest:devstack@127.0.0.1:9999/") retval = self.commands.map_cell_and_hosts() self.assertEqual(0, retval) @mock.patch.object(context, 'target_cell') def test_map_instances(self, mock_target_cell): ctxt = context.RequestContext('fake-user', 'fake_project') cell_uuid = uuidutils.generate_uuid() cell_mapping = objects.CellMapping( ctxt, uuid=cell_uuid, name='fake', transport_url='fake://', database_connection='fake://') cell_mapping.create() mock_target_cell.return_value.__enter__.return_value = ctxt instance_uuids = [] for i in range(3): uuid = uuidutils.generate_uuid() instance_uuids.append(uuid) objects.Instance(ctxt, project_id=ctxt.project_id, user_id=ctxt.user_id, uuid=uuid).create() self.commands.map_instances(cell_uuid) for uuid in instance_uuids: inst_mapping = objects.InstanceMapping.get_by_instance_uuid(ctxt, uuid) self.assertEqual(ctxt.project_id, inst_mapping.project_id) # Verify that map_instances populates user_id. self.assertEqual(ctxt.user_id, inst_mapping.user_id) self.assertEqual(cell_mapping.uuid, inst_mapping.cell_mapping.uuid) mock_target_cell.assert_called_once_with( test.MatchType(context.RequestContext), test.MatchObjPrims(cell_mapping)) @mock.patch.object(context, 'target_cell') def test_map_instances_duplicates(self, mock_target_cell): ctxt = context.RequestContext('fake-user', 'fake_project') cell_uuid = uuidutils.generate_uuid() cell_mapping = objects.CellMapping( ctxt, uuid=cell_uuid, name='fake', transport_url='fake://', database_connection='fake://') cell_mapping.create() mock_target_cell.return_value.__enter__.return_value = ctxt instance_uuids = [] for i in range(3): uuid = uuidutils.generate_uuid() instance_uuids.append(uuid) objects.Instance(ctxt, project_id=ctxt.project_id, user_id=ctxt.user_id, uuid=uuid).create() objects.InstanceMapping(ctxt, project_id=ctxt.project_id, user_id=ctxt.user_id, instance_uuid=instance_uuids[0], cell_mapping=cell_mapping).create() self.commands.map_instances(cell_uuid) for uuid in instance_uuids: inst_mapping = objects.InstanceMapping.get_by_instance_uuid(ctxt, uuid) self.assertEqual(ctxt.project_id, inst_mapping.project_id) mappings = objects.InstanceMappingList.get_by_project_id(ctxt, ctxt.project_id) self.assertEqual(3, len(mappings)) mock_target_cell.assert_called_once_with( test.MatchType(context.RequestContext), test.MatchObjPrims(cell_mapping)) @mock.patch.object(context, 'target_cell') def test_map_instances_two_batches(self, mock_target_cell): ctxt = context.RequestContext('fake-user', 'fake_project') cell_uuid = uuidutils.generate_uuid() cell_mapping = objects.CellMapping( ctxt, uuid=cell_uuid, name='fake', transport_url='fake://', database_connection='fake://') cell_mapping.create() mock_target_cell.return_value.__enter__.return_value = ctxt instance_uuids = [] # Batch size is 50 in map_instances for i in range(60): uuid = uuidutils.generate_uuid() instance_uuids.append(uuid) objects.Instance(ctxt, project_id=ctxt.project_id, user_id=ctxt.user_id, uuid=uuid).create() ret = self.commands.map_instances(cell_uuid) self.assertEqual(0, ret) for uuid in 
instance_uuids: inst_mapping = objects.InstanceMapping.get_by_instance_uuid(ctxt, uuid) self.assertEqual(ctxt.project_id, inst_mapping.project_id) self.assertEqual(2, mock_target_cell.call_count) mock_target_cell.assert_called_with( test.MatchType(context.RequestContext), test.MatchObjPrims(cell_mapping)) @mock.patch.object(context, 'target_cell') def test_map_instances_max_count(self, mock_target_cell): # NOTE(gibi): map_instances command uses non canonical UUID # serialization for the marker instance mapping. The db schema is not # violated so we suppress the warning here. warnings.filterwarnings('ignore', message=".*invalid UUID.*") ctxt = context.RequestContext('fake-user', 'fake_project') cell_uuid = uuidutils.generate_uuid() cell_mapping = objects.CellMapping( ctxt, uuid=cell_uuid, name='fake', transport_url='fake://', database_connection='fake://') cell_mapping.create() mock_target_cell.return_value.__enter__.return_value = ctxt instance_uuids = [] for i in range(6): uuid = uuidutils.generate_uuid() instance_uuids.append(uuid) objects.Instance(ctxt, project_id=ctxt.project_id, user_id=ctxt.user_id, uuid=uuid).create() ret = self.commands.map_instances(cell_uuid, max_count=3) self.assertEqual(1, ret) for uuid in instance_uuids[:3]: # First three are mapped inst_mapping = objects.InstanceMapping.get_by_instance_uuid(ctxt, uuid) self.assertEqual(ctxt.project_id, inst_mapping.project_id) for uuid in instance_uuids[3:]: # Last three are not self.assertRaises(exception.InstanceMappingNotFound, objects.InstanceMapping.get_by_instance_uuid, ctxt, uuid) mock_target_cell.assert_called_once_with( test.MatchType(context.RequestContext), test.MatchObjPrims(cell_mapping)) @mock.patch.object(context, 'target_cell') def test_map_instances_marker_deleted(self, mock_target_cell): # NOTE(gibi): map_instances command uses non canonical UUID # serialization for the marker instance mapping. The db schema is not # violated so we suppress the warning here. warnings.filterwarnings('ignore', message=".*invalid UUID.*") ctxt = context.RequestContext('fake-user', 'fake_project') cell_uuid = uuidutils.generate_uuid() cell_mapping = objects.CellMapping( ctxt, uuid=cell_uuid, name='fake', transport_url='fake://', database_connection='fake://') cell_mapping.create() mock_target_cell.return_value.__enter__.return_value = ctxt instance_uuids = [] for i in range(6): uuid = uuidutils.generate_uuid() instance_uuids.append(uuid) objects.Instance(ctxt, project_id=ctxt.project_id, user_id=ctxt.user_id, uuid=uuid).create() ret = self.commands.map_instances(cell_uuid, max_count=3) self.assertEqual(1, ret) # Instances are mapped in the order created so we know the marker is # based off the third instance. marker = instance_uuids[2].replace('-', ' ') marker_mapping = objects.InstanceMapping.get_by_instance_uuid(ctxt, marker) marker_mapping.destroy() ret = self.commands.map_instances(cell_uuid) self.assertEqual(0, ret) for uuid in instance_uuids: inst_mapping = objects.InstanceMapping.get_by_instance_uuid(ctxt, uuid) self.assertEqual(ctxt.project_id, inst_mapping.project_id) self.assertEqual(2, mock_target_cell.call_count) mock_target_cell.assert_called_with( test.MatchType(context.RequestContext), test.MatchObjPrims(cell_mapping)) @mock.patch.object(context, 'target_cell') def test_map_instances_marker_reset(self, mock_target_cell): # NOTE(gibi): map_instances command uses non canonical UUID # serialization for the marker instance mapping. The db schema is not # violated so we suppress the warning here. 
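        # As the assertions below show, the paging marker is persisted as a
        # fake InstanceMapping whose project_id is
        # 'INSTANCE_MIGRATION_MARKER' and whose instance_uuid is the last
        # processed instance UUID with its dashes replaced by spaces;
        # passing --reset simply starts mapping from the beginning again.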
warnings.filterwarnings('ignore', message=".*invalid UUID.*") ctxt = context.RequestContext('fake-user', 'fake_project') cell_uuid = uuidutils.generate_uuid() cell_mapping = objects.CellMapping( ctxt, uuid=cell_uuid, name='fake', transport_url='fake://', database_connection='fake://') cell_mapping.create() mock_target_cell.return_value.__enter__.return_value = ctxt instance_uuids = [] for i in range(5): uuid = uuidutils.generate_uuid() instance_uuids.append(uuid) objects.Instance(ctxt, project_id=ctxt.project_id, user_id=ctxt.user_id, uuid=uuid).create() # Maps first three instances. ret = self.commands.map_instances(cell_uuid, max_count=3) self.assertEqual(1, ret) # Verifying that the marker is now based on third instance # i.e the position of the marker. inst_mappings = objects.InstanceMappingList.get_by_project_id(ctxt, 'INSTANCE_MIGRATION_MARKER') marker = inst_mappings[0].instance_uuid.replace(' ', '-') self.assertEqual(instance_uuids[2], marker) # Now calling reset with map_instances max_count=2 would reset # the marker as expected and start map_instances from the beginning. # This implies we end up finding the marker based on second instance. ret = self.commands.map_instances(cell_uuid, max_count=2, reset_marker=True) self.assertEqual(1, ret) inst_mappings = objects.InstanceMappingList.get_by_project_id(ctxt, 'INSTANCE_MIGRATION_MARKER') marker = inst_mappings[0].instance_uuid.replace(' ', '-') self.assertEqual(instance_uuids[1], marker) # Maps 4th instance using the marker (3rd is already mapped). ret = self.commands.map_instances(cell_uuid, max_count=2) self.assertEqual(1, ret) # Verifying that the marker is now based on fourth instance # i.e the position of the marker. inst_mappings = objects.InstanceMappingList.get_by_project_id(ctxt, 'INSTANCE_MIGRATION_MARKER') marker = inst_mappings[0].instance_uuid.replace(' ', '-') self.assertEqual(instance_uuids[3], marker) # Maps first four instances (all four duplicate entries which # are already present from previous calls) ret = self.commands.map_instances(cell_uuid, max_count=4, reset_marker=True) self.assertEqual(1, ret) # Verifying that the marker is still based on fourth instance # i.e the position of the marker. inst_mappings = objects.InstanceMappingList.get_by_project_id(ctxt, 'INSTANCE_MIGRATION_MARKER') marker = inst_mappings[0].instance_uuid.replace(' ', '-') self.assertEqual(instance_uuids[3], marker) # Maps the 5th instance. 
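        # map_instances returns 1 while unmapped instances remain (for
        # example when --max-count limits the batch) and 0 once everything
        # in the cell has been mapped, as the assertions around this point
        # exercise.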
        ret = self.commands.map_instances(cell_uuid)
        self.assertEqual(0, ret)

    def test_map_instances_validate_cell_uuid(self):
        # create a random cell_uuid which is invalid
        cell_uuid = uuidutils.generate_uuid()
        # check that it raises an exception
        self.assertRaises(exception.CellMappingNotFound,
                          self.commands.map_instances, cell_uuid)

    def test_map_cell0(self):
        ctxt = context.RequestContext()
        database_connection = 'fake:/foobar//'
        self.commands.map_cell0(database_connection)
        cell_mapping = objects.CellMapping.get_by_uuid(
            ctxt, objects.CellMapping.CELL0_UUID)
        self.assertEqual('cell0', cell_mapping.name)
        self.assertEqual('none:///', cell_mapping.transport_url)
        self.assertEqual(database_connection,
                         cell_mapping.database_connection)

    @mock.patch.object(manage.CellV2Commands, '_map_cell0', new=mock.Mock())
    def test_map_cell0_returns_0_on_successful_create(self):
        self.assertEqual(0, self.commands.map_cell0())

    @mock.patch.object(manage.CellV2Commands, '_map_cell0')
    def test_map_cell0_returns_0_if_cell0_already_exists(self, _map_cell0):
        _map_cell0.side_effect = db_exc.DBDuplicateEntry
        exit_code = self.commands.map_cell0()
        self.assertEqual(0, exit_code)
        output = self.output.getvalue().strip()
        self.assertEqual('Cell0 is already setup', output)

    def test_map_cell0_default_database(self):
        CONF.set_default('connection', 'fake://netloc/nova',
                         group='database')
        ctxt = context.RequestContext()
        self.commands.map_cell0()
        cell_mapping = objects.CellMapping.get_by_uuid(
            ctxt, objects.CellMapping.CELL0_UUID)
        self.assertEqual('cell0', cell_mapping.name)
        self.assertEqual('none:///', cell_mapping.transport_url)
        self.assertEqual('fake://netloc/nova_cell0',
                         cell_mapping.database_connection)

    @ddt.data('mysql+pymysql://nova:abcd0123:AB@controller/%s',
              'mysql+pymysql://nova:abcd0123?AB@controller/%s',
              'mysql+pymysql://nova:abcd0123@AB@controller/%s',
              'mysql+pymysql://nova:abcd0123/AB@controller/%s',
              'mysql+pymysql://test:abcd0123/AB@controller/%s?charset=utf8')
    def test_map_cell0_default_database_special_characters(self, connection):
        """Tests that a URL with special characters, like in the credentials,
        is handled properly.
        """
        decoded_connection = connection % 'nova'
        self.flags(connection=decoded_connection, group='database')
        ctxt = context.RequestContext()
        self.commands.map_cell0()
        cell_mapping = objects.CellMapping.get_by_uuid(
            ctxt, objects.CellMapping.CELL0_UUID)
        self.assertEqual('cell0', cell_mapping.name)
        self.assertEqual('none:///', cell_mapping.transport_url)
        self.assertEqual(
            connection % 'nova_cell0', cell_mapping.database_connection)
        # Delete the cell mapping for the next iteration.
cell_mapping.destroy() def _test_migrate_simple_command(self, cell0_sync_fail=False): ctxt = context.RequestContext() CONF.set_default('connection', 'fake://netloc/nova', group='database') values = { 'vcpus': 4, 'memory_mb': 4096, 'local_gb': 1024, 'vcpus_used': 2, 'memory_mb_used': 2048, 'local_gb_used': 512, 'hypervisor_type': 'Hyper-Dan-VM-ware', 'hypervisor_version': 1001, 'cpu_info': 'Schmintel i786', } for i in range(3): host = 'host%s' % i compute_node = objects.ComputeNode(ctxt, host=host, **values) compute_node.create() transport_url = "fake://guest:devstack@127.0.0.1:9999/" cell_uuid = uuidsentinel.cell @mock.patch('nova.db.migration.db_sync') @mock.patch.object(context, 'target_cell') @mock.patch.object(uuidutils, 'generate_uuid', return_value=cell_uuid) def _test(mock_gen_uuid, mock_target_cell, mock_db_sync): if cell0_sync_fail: mock_db_sync.side_effect = db_exc.DBError result = self.commands.simple_cell_setup(transport_url) mock_db_sync.assert_called() return result r = _test() self.assertEqual(0, r) # Check cell0 from default cell_mapping = objects.CellMapping.get_by_uuid(ctxt, objects.CellMapping.CELL0_UUID) self.assertEqual('cell0', cell_mapping.name) self.assertEqual('none:///', cell_mapping.transport_url) self.assertEqual('fake://netloc/nova_cell0', cell_mapping.database_connection) # Verify the cell mapping cell_mapping = objects.CellMapping.get_by_uuid(ctxt, cell_uuid) self.assertEqual(transport_url, cell_mapping.transport_url) # Verify the host mappings for i in range(3): host = 'host%s' % i host_mapping = objects.HostMapping.get_by_host(ctxt, host) self.assertEqual(cell_mapping.uuid, host_mapping.cell_mapping.uuid) def test_simple_command_single(self): self._test_migrate_simple_command() def test_simple_command_cell0_fail(self): # Make sure that if db_sync fails, we still do all the other # bits self._test_migrate_simple_command(cell0_sync_fail=True) def test_simple_command_multiple(self): # Make sure that the command is idempotent self._test_migrate_simple_command() self._test_migrate_simple_command() def test_instance_verify_no_mapping(self): r = self.commands.verify_instance(uuidsentinel.instance) self.assertEqual(1, r) @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid') def test_instance_verify_has_only_instance_mapping(self, mock_get): im = objects.InstanceMapping(cell_mapping=None) mock_get.return_value = im r = self.commands.verify_instance(uuidsentinel.instance) self.assertEqual(2, r) @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid') @mock.patch('nova.objects.Instance.get_by_uuid') @mock.patch.object(context, 'target_cell') def test_instance_verify_has_all_mappings(self, mock_target_cell, mock_get2, mock_get1): cm = objects.CellMapping(name='foo', uuid=uuidsentinel.cel) im = objects.InstanceMapping(cell_mapping=cm) mock_get1.return_value = im mock_get2.return_value = None r = self.commands.verify_instance(uuidsentinel.instance) self.assertEqual(0, r) def test_instance_verify_quiet(self): # NOTE(danms): This will hit the first use of the say() wrapper # and reasonably verify that path self.assertEqual(1, self.commands.verify_instance(uuidsentinel.foo, quiet=True)) @mock.patch.object(context, 'target_cell') def test_instance_verify_has_instance_mapping_but_no_instance(self, mock_target_cell): ctxt = context.RequestContext('fake-user', 'fake_project') cell_uuid = uuidutils.generate_uuid() cell_mapping = objects.CellMapping(context=ctxt, uuid=cell_uuid, database_connection='fake:///db', transport_url='fake:///mq') cell_mapping.create() 
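        # The verify_instance tests in this class collectively cover its
        # return codes: 0 when both the mapping and the instance exist, 1
        # when there is no instance mapping, 2 when the mapping has no cell,
        # 3 when the instance was deleted but not yet archived, and 4 when
        # only a stale mapping is left after archiving.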
mock_target_cell.return_value.__enter__.return_value = ctxt uuid = uuidutils.generate_uuid() objects.Instance(ctxt, project_id=ctxt.project_id, uuid=uuid).create() objects.InstanceMapping(ctxt, project_id=ctxt.project_id, cell_mapping=cell_mapping, instance_uuid=uuid)\ .create() # a scenario where an instance is deleted, but not archived. inst = objects.Instance.get_by_uuid(ctxt, uuid) inst.destroy() r = self.commands.verify_instance(uuid) self.assertEqual(3, r) self.assertIn('has been deleted', self.output.getvalue()) # a scenario where there is only the instance mapping but no instance # like when an instance has been archived but the instance mapping # was not deleted. uuid = uuidutils.generate_uuid() objects.InstanceMapping(ctxt, project_id=ctxt.project_id, cell_mapping=cell_mapping, instance_uuid=uuid)\ .create() r = self.commands.verify_instance(uuid) self.assertEqual(4, r) self.assertIn('has been archived', self.output.getvalue()) def _return_compute_nodes(self, ctxt, num=1): nodes = [] for i in range(num): nodes.append(objects.ComputeNode(ctxt, uuid=uuidutils.generate_uuid(), host='host%s' % i, vcpus=1, memory_mb=1, local_gb=1, vcpus_used=0, memory_mb_used=0, local_gb_used=0, hypervisor_type='', hypervisor_version=1, cpu_info='')) return nodes @mock.patch.object(context, 'target_cell') @mock.patch.object(objects.CellMappingList, 'get_all') def test_discover_hosts_single_cell(self, mock_cell_mapping_get_all, mock_target_cell): ctxt = context.RequestContext() compute_nodes = self._return_compute_nodes(ctxt) for compute_node in compute_nodes: compute_node.create() cell_mapping = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection='fake:///db', transport_url='fake:///mq') cell_mapping.create() mock_target_cell.return_value.__enter__.return_value = ctxt self.commands.discover_hosts(cell_uuid=cell_mapping.uuid) # Check that the host mappings were created for i, compute_node in enumerate(compute_nodes): host_mapping = objects.HostMapping.get_by_host(ctxt, compute_node.host) self.assertEqual('host%s' % i, host_mapping.host) mock_target_cell.assert_called_once_with( test.MatchType(context.RequestContext), test.MatchObjPrims(cell_mapping)) mock_cell_mapping_get_all.assert_not_called() @mock.patch.object(context, 'target_cell') @mock.patch.object(objects.CellMappingList, 'get_all') def test_discover_hosts_single_cell_no_new_hosts( self, mock_cell_mapping_get_all, mock_target_cell): ctxt = context.RequestContext() # Create some compute nodes and matching host mappings cell_mapping = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection='fake:///db', transport_url='fake:///mq') cell_mapping.create() compute_nodes = self._return_compute_nodes(ctxt) for compute_node in compute_nodes: compute_node.create() host_mapping = objects.HostMapping(context=ctxt, host=compute_node.host, cell_mapping=cell_mapping) host_mapping.create() with mock.patch('nova.objects.HostMapping.create') as mock_create: self.commands.discover_hosts(cell_uuid=cell_mapping.uuid) mock_create.assert_not_called() mock_target_cell.assert_called_once_with( test.MatchType(context.RequestContext), test.MatchObjPrims(cell_mapping)) mock_cell_mapping_get_all.assert_not_called() @mock.patch.object(objects.CellMapping, 'get_by_uuid') def test_discover_hosts_multiple_cells(self, mock_cell_mapping_get_by_uuid): # Create in-memory databases for cell1 and cell2 to let target_cell # run for real. We want one compute node in cell1's db and the other # compute node in cell2's db. 
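        # When no cell_uuid is given, discover_hosts walks the registered
        # cell mappings and creates a HostMapping for each unmapped compute
        # node it finds; the assertions at the end of this test verify one
        # mapping per compute node.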
cell_dbs = nova_fixtures.CellDatabases() cell_dbs.add_cell_database('fake:///db1') cell_dbs.add_cell_database('fake:///db2') self.useFixture(cell_dbs) ctxt = context.RequestContext() cell_mapping0 = objects.CellMapping( context=ctxt, uuid=objects.CellMapping.CELL0_UUID, database_connection='fake:///db0', transport_url='none:///') cell_mapping0.create() cell_mapping1 = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection='fake:///db1', transport_url='fake:///mq1') cell_mapping1.create() cell_mapping2 = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection='fake:///db2', transport_url='fake:///mq2') cell_mapping2.create() compute_nodes = self._return_compute_nodes(ctxt, num=2) # Create the first compute node in cell1's db with context.target_cell(ctxt, cell_mapping1) as cctxt: compute_nodes[0]._context = cctxt compute_nodes[0].create() # Create the first compute node in cell2's db with context.target_cell(ctxt, cell_mapping2) as cctxt: compute_nodes[1]._context = cctxt compute_nodes[1].create() self.commands.discover_hosts(verbose=True) output = self.output.getvalue().strip() self.assertNotEqual('', output) # Check that the host mappings were created for i, compute_node in enumerate(compute_nodes): host_mapping = objects.HostMapping.get_by_host(ctxt, compute_node.host) self.assertEqual('host%s' % i, host_mapping.host) mock_cell_mapping_get_by_uuid.assert_not_called() @mock.patch('nova.objects.host_mapping.discover_hosts') def test_discover_hosts_strict(self, mock_discover_hosts): # Check for exit code 0 if unmapped hosts found mock_discover_hosts.return_value = ['fake'] self.assertEqual(self.commands.discover_hosts(strict=True), 0) # Check for exit code 1 if no unmapped hosts are found mock_discover_hosts.return_value = [] self.assertEqual(self.commands.discover_hosts(strict=True), 1) # Check the return when strict=False self.assertIsNone(self.commands.discover_hosts()) @mock.patch('nova.objects.host_mapping.discover_hosts') def test_discover_hosts_by_service(self, mock_discover_hosts): mock_discover_hosts.return_value = ['fake'] ret = self.commands.discover_hosts(by_service=True, strict=True) self.assertEqual(0, ret) mock_discover_hosts.assert_called_once_with(mock.ANY, None, mock.ANY, True) @mock.patch('nova.objects.host_mapping.discover_hosts') def test_discover_hosts_mapping_exists(self, mock_discover_hosts): mock_discover_hosts.side_effect = exception.HostMappingExists( name='fake') ret = self.commands.discover_hosts() output = self.output.getvalue().strip() self.assertEqual(2, ret) expected = ("ERROR: Duplicate host mapping was encountered. This " "command should be run once after all compute hosts " "have been deployed and should not be run in parallel. " "When run in parallel, the commands will collide with " "each other trying to map the same hosts in the database " "at the same time. 
Error: Host 'fake' mapping already " "exists") self.assertEqual(expected, output) def test_validate_transport_url_in_conf(self): from_conf = 'fake://user:pass@host:5672/' self.flags(transport_url=from_conf) self.assertEqual(from_conf, self.commands._validate_transport_url(None)) def test_validate_transport_url_on_command_line(self): from_cli = 'fake://user:pass@host:5672/' self.assertEqual(from_cli, self.commands._validate_transport_url(from_cli)) def test_validate_transport_url_missing(self): self.flags(transport_url=None) self.assertIsNone(self.commands._validate_transport_url(None)) def test_validate_transport_url_favors_command_line(self): self.flags(transport_url='fake://user:pass@host:5672/') from_cli = 'fake://otheruser:otherpass@otherhost:5673' self.assertEqual(from_cli, self.commands._validate_transport_url(from_cli)) def test_validate_transport_url_invalid_url(self): self.assertIsNone(self.commands._validate_transport_url('not-a-url')) self.assertIn('Invalid transport URL', self.output.getvalue()) def test_non_unique_transport_url_database_connection_checker(self): ctxt = context.RequestContext() cell1 = objects.CellMapping(context=ctxt, uuid=uuidsentinel.cell1, name='cell1', transport_url='fake://mq1', database_connection='fake:///db1') cell1.create() objects.CellMapping(context=ctxt, uuid=uuidsentinel.cell2, name='cell2', transport_url='fake://mq2', database_connection='fake:///db2').create() resultf = self.commands.\ _non_unique_transport_url_database_connection_checker( ctxt, None, 'fake://mq3', 'fake:///db3') resultt = self.commands.\ _non_unique_transport_url_database_connection_checker( ctxt, None, 'fake://mq1', 'fake:///db1') resultd = self.commands.\ _non_unique_transport_url_database_connection_checker( ctxt, cell1, 'fake://mq1', 'fake:///db1') self.assertFalse(resultf) self.assertTrue(resultt) self.assertFalse(resultd) self.assertIn('exists', self.output.getvalue()) def test_create_cell_use_params(self): ctxt = context.get_context() kwargs = dict( name='fake-name', transport_url='http://fake-transport-url', database_connection='fake-db-connection') status = self.commands.create_cell(verbose=True, **kwargs) self.assertEqual(0, status) cell2_uuid = self.output.getvalue().strip() self.commands.create_cell(**kwargs) cell2 = objects.CellMapping.get_by_uuid(ctxt, cell2_uuid) self.assertEqual(kwargs['name'], cell2.name) self.assertEqual(kwargs['database_connection'], cell2.database_connection) self.assertEqual(kwargs['transport_url'], cell2.transport_url) self.assertIs(cell2.disabled, False) def test_create_cell_use_params_with_template(self): ctxt = context.get_context() self.flags(transport_url='rabbit://host:1234') kwargs = dict( name='fake-name', transport_url='{scheme}://other-{hostname}:{port}', database_connection='fake-db-connection') status = self.commands.create_cell(verbose=True, **kwargs) self.assertEqual(0, status) cell2_uuid = self.output.getvalue().strip() self.commands.create_cell(**kwargs) # Make sure it ended up as a template in the database db_cm = objects.CellMapping._get_by_uuid_from_db(ctxt, cell2_uuid) self.assertEqual('{scheme}://other-{hostname}:{port}', db_cm.transport_url) # Make sure it gets translated if we load by object cell2 = objects.CellMapping.get_by_uuid(ctxt, cell2_uuid) self.assertEqual(kwargs['name'], cell2.name) self.assertEqual(kwargs['database_connection'], cell2.database_connection) self.assertEqual('rabbit://other-host:1234', cell2.transport_url) self.assertIs(cell2.disabled, False) def test_create_cell_use_config_values(self): 
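        # When no --transport-url or --database_connection arguments are
        # passed, create_cell is expected to fall back to the
        # [DEFAULT]/transport_url and [database]/connection options from
        # nova.conf, which is what this test sets up via self.flags().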
settings = dict( transport_url='http://fake-conf-transport-url', database_connection='fake-conf-db-connection') self.flags(connection=settings['database_connection'], group='database') self.flags(transport_url=settings['transport_url']) ctxt = context.get_context() status = self.commands.create_cell(verbose=True) self.assertEqual(0, status) cell1_uuid = self.output.getvalue().split('\n')[-2].strip() cell1 = objects.CellMapping.get_by_uuid(ctxt, cell1_uuid) self.assertIsNone(cell1.name) self.assertEqual(settings['database_connection'], cell1.database_connection) self.assertEqual(settings['transport_url'], cell1.transport_url) def test_create_cell_failed_if_non_unique(self): kwargs = dict( name='fake-name', transport_url='http://fake-transport-url', database_connection='fake-db-connection') status1 = self.commands.create_cell(verbose=True, **kwargs) status2 = self.commands.create_cell(verbose=True, **kwargs) self.assertEqual(0, status1) self.assertEqual(2, status2) self.assertIn('exists', self.output.getvalue()) def test_create_cell_failed_if_no_transport_url(self): self.flags(transport_url=None) status = self.commands.create_cell() self.assertEqual(1, status) self.assertIn('--transport-url', self.output.getvalue()) def test_create_cell_failed_if_no_database_connection(self): self.flags(connection=None, group='database') status = self.commands.create_cell(transport_url='http://fake-url') self.assertEqual(1, status) self.assertIn('--database_connection', self.output.getvalue()) def test_create_cell_pre_disabled(self): ctxt = context.get_context() kwargs = dict( name='fake-name1', transport_url='http://fake-transport-url1', database_connection='fake-db-connection1') status1 = self.commands.create_cell(verbose=True, disabled=True, **kwargs) self.assertEqual(0, status1) cell_uuid1 = self.output.getvalue().strip() cell1 = objects.CellMapping.get_by_uuid(ctxt, cell_uuid1) self.assertEqual(kwargs['name'], cell1.name) self.assertIs(cell1.disabled, True) def test_list_cells_verbose_false(self): ctxt = context.RequestContext() cell_mapping0 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.map0, database_connection='fake://user1:pass1@host1/db0', transport_url='none://user1:pass1@host1/', name='cell0') cell_mapping0.create() cell_mapping1 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.map1, database_connection='fake://user1@host1/db0', transport_url='none://user1@host1/vhost1', name='cell1') cell_mapping1.create() self.assertEqual(0, self.commands.list_cells()) output = self.output.getvalue().strip() self.assertEqual('''\ +-------+--------------------------------------+---------------------------+-----------------------------+----------+ | Name | UUID | Transport URL | Database Connection | Disabled | +-------+--------------------------------------+---------------------------+-----------------------------+----------+ | cell0 | %(uuid_map0)s | none://user1:****@host1/ | fake://user1:****@host1/db0 | False | | cell1 | %(uuid_map1)s | none://user1@host1/vhost1 | fake://user1@host1/db0 | False | +-------+--------------------------------------+---------------------------+-----------------------------+----------+''' % # noqa {"uuid_map0": uuidsentinel.map0, "uuid_map1": uuidsentinel.map1}, output) def test_list_cells_multiple_sorted_verbose_true(self): ctxt = context.RequestContext() cell_mapping0 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.map0, database_connection='fake:///db0', transport_url='none:///', name='cell0') cell_mapping0.create() cell_mapping1 = objects.CellMapping( 
context=ctxt, uuid=uuidsentinel.map1, database_connection='fake:///dblon', transport_url='fake:///mqlon', name='london', disabled=True) cell_mapping1.create() cell_mapping2 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.map2, database_connection='fake:///dbdal', transport_url='fake:///mqdal', name='dallas') cell_mapping2.create() no_name = objects.CellMapping( context=ctxt, uuid=uuidsentinel.none, database_connection='fake:///dbnone', transport_url='fake:///mqnone') no_name.create() self.assertEqual(0, self.commands.list_cells(verbose=True)) output = self.output.getvalue().strip() self.assertEqual('''\ +--------+--------------------------------------+----------------+---------------------+----------+ | Name | UUID | Transport URL | Database Connection | Disabled | +--------+--------------------------------------+----------------+---------------------+----------+ | | %(uuid_none)s | fake:///mqnone | fake:///dbnone | False | | cell0 | %(uuid_map0)s | none:/// | fake:///db0 | False | | dallas | %(uuid_map2)s | fake:///mqdal | fake:///dbdal | False | | london | %(uuid_map1)s | fake:///mqlon | fake:///dblon | True | +--------+--------------------------------------+----------------+---------------------+----------+''' % # noqa {"uuid_map0": uuidsentinel.map0, "uuid_map1": uuidsentinel.map1, "uuid_map2": uuidsentinel.map2, "uuid_none": uuidsentinel.none}, output) def test_delete_cell_not_found(self): """Tests trying to delete a cell that is not found by uuid.""" cell_uuid = uuidutils.generate_uuid() self.assertEqual(1, self.commands.delete_cell(cell_uuid)) output = self.output.getvalue().strip() self.assertEqual('Cell with uuid %s was not found.' % cell_uuid, output) @mock.patch.object(objects.ComputeNodeList, 'get_all') def test_delete_cell_host_mappings_exist(self, mock_get_cn): """Tests trying to delete a cell which has host mappings.""" cell_uuid = uuidutils.generate_uuid() ctxt = context.get_admin_context() # create the cell mapping cm = objects.CellMapping( context=ctxt, uuid=cell_uuid, database_connection='fake:///db', transport_url='fake:///mq') cm.create() # create a host mapping in this cell hm = objects.HostMapping( context=ctxt, host='fake-host', cell_mapping=cm) hm.create() mock_get_cn.return_value = [] self.assertEqual(2, self.commands.delete_cell(cell_uuid)) output = self.output.getvalue().strip() self.assertIn('There are existing hosts mapped to cell', output) @mock.patch.object(objects.InstanceList, 'get_all') def test_delete_cell_instance_mappings_exist_with_instances( self, mock_get_all): """Tests trying to delete a cell which has instance mappings.""" cell_uuid = uuidutils.generate_uuid() ctxt = context.get_admin_context() mock_get_all.return_value = [objects.Instance( ctxt, uuid=uuidsentinel.instance)] # create the cell mapping cm = objects.CellMapping( context=ctxt, uuid=cell_uuid, database_connection='fake:///db', transport_url='fake:///mq') cm.create() # create an instance mapping in this cell im = objects.InstanceMapping( context=ctxt, instance_uuid=uuidutils.generate_uuid(), cell_mapping=cm, project_id=uuidutils.generate_uuid()) im.create() self.assertEqual(3, self.commands.delete_cell(cell_uuid)) output = self.output.getvalue().strip() self.assertIn('There are existing instances mapped to cell', output) @mock.patch.object(objects.InstanceList, 'get_all', return_value=[]) def test_delete_cell_instance_mappings_exist_without_instances( self, mock_get_all): """Tests trying to delete a cell which has instance mappings.""" cell_uuid = uuidutils.generate_uuid() ctxt = 
context.get_admin_context() # create the cell mapping cm = objects.CellMapping( context=ctxt, uuid=cell_uuid, database_connection='fake:///db', transport_url='fake:///mq') cm.create() # create an instance mapping in this cell im = objects.InstanceMapping( context=ctxt, instance_uuid=uuidutils.generate_uuid(), cell_mapping=cm, project_id=uuidutils.generate_uuid()) im.create() self.assertEqual(4, self.commands.delete_cell(cell_uuid)) output = self.output.getvalue().strip() self.assertIn('There are instance mappings to cell with uuid', output) self.assertIn('but all instances have been deleted in the cell.', output) self.assertIn("So execute 'nova-manage db archive_deleted_rows' to " "delete the instance mappings.", output) def test_delete_cell_success_without_host_mappings(self): """Tests trying to delete an empty cell.""" cell_uuid = uuidutils.generate_uuid() ctxt = context.get_admin_context() # create the cell mapping cm = objects.CellMapping( context=ctxt, uuid=cell_uuid, database_connection='fake:///db', transport_url='fake:///mq') cm.create() self.assertEqual(0, self.commands.delete_cell(cell_uuid)) output = self.output.getvalue().strip() self.assertEqual('', output) @mock.patch.object(objects.ComputeNodeList, 'get_all') @mock.patch.object(objects.HostMapping, 'destroy') @mock.patch.object(objects.CellMapping, 'destroy') def test_delete_cell_success_with_host_mappings(self, mock_cell_destroy, mock_hm_destroy, mock_get_cn): """Tests trying to delete a cell with host.""" ctxt = context.get_admin_context() # create the cell mapping cm = objects.CellMapping( context=ctxt, uuid=uuidsentinel.cell1, database_connection='fake:///db', transport_url='fake:///mq') cm.create() # create a host mapping in this cell hm = objects.HostMapping( context=ctxt, host='fake-host', cell_mapping=cm) hm.create() mock_get_cn.return_value = [] self.assertEqual(0, self.commands.delete_cell(uuidsentinel.cell1, force=True)) output = self.output.getvalue().strip() self.assertEqual('', output) mock_hm_destroy.assert_called_once_with() mock_cell_destroy.assert_called_once_with() @mock.patch.object(context, 'target_cell') @mock.patch.object(objects.InstanceMapping, 'destroy') @mock.patch.object(objects.HostMapping, 'destroy') @mock.patch.object(objects.CellMapping, 'destroy') def test_delete_cell_force_with_inst_mappings_of_deleted_instances(self, mock_cell_destroy, mock_hm_destroy, mock_im_destroy, mock_target_cell): # Test for verifying the deletion of instance_mappings # of deleted instances when using the --force option ctxt = context.get_admin_context() # create the cell mapping cm = objects.CellMapping( context=ctxt, uuid=uuidsentinel.cell1, database_connection='fake:///db', transport_url='fake:///mq') cm.create() mock_target_cell.return_value.__enter__.return_value = ctxt # create a host mapping in this cell hm = objects.HostMapping( context=ctxt, host='fake-host', cell_mapping=cm) hm.create() # create an instance and its mapping. 
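        # Even with force=True, delete_cell refuses (return code 3) while a
        # mapped instance still exists; once the instance itself is
        # destroyed, a second forced delete removes the leftover instance
        # mapping along with the host and cell mappings.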
inst_uuid = uuidutils.generate_uuid() proj_uuid = uuidutils.generate_uuid() instance = objects.Instance(ctxt, project_id=proj_uuid, uuid=inst_uuid) instance.create() im = objects.InstanceMapping(ctxt, project_id=proj_uuid, cell_mapping=cm, instance_uuid=inst_uuid) im.create() res = self.commands.delete_cell(uuidsentinel.cell1, force=True) self.assertEqual(3, res) output = self.output.getvalue().strip() self.assertIn('There are existing instances mapped to cell', output) # delete the instance such that we now have only its mapping instance.destroy() res = self.commands.delete_cell(uuidsentinel.cell1, force=True) self.assertEqual(0, res) mock_hm_destroy.assert_called_once_with() mock_cell_destroy.assert_called_once_with() mock_im_destroy.assert_called_once_with() self.assertEqual(4, mock_target_cell.call_count) def test_update_cell_not_found(self): self.assertEqual(1, self.commands.update_cell( uuidsentinel.cell1, 'foo', 'fake://new', 'fake:///new')) self.assertIn('not found', self.output.getvalue()) def test_update_cell_failed_if_non_unique_transport_db_urls(self): ctxt = context.get_admin_context() objects.CellMapping(context=ctxt, uuid=uuidsentinel.cell1, name='cell1', transport_url='fake://mq1', database_connection='fake:///db1').create() objects.CellMapping(context=ctxt, uuid=uuidsentinel.cell2, name='cell2', transport_url='fake://mq2', database_connection='fake:///db2').create() cell2_update1 = self.commands.update_cell( uuidsentinel.cell2, 'foo', 'fake://mq1', 'fake:///db1') self.assertEqual(3, cell2_update1) self.assertIn('exists', self.output.getvalue()) cell2_update2 = self.commands.update_cell( uuidsentinel.cell2, 'foo', 'fake://mq1', 'fake:///db3') self.assertEqual(3, cell2_update2) self.assertIn('exists', self.output.getvalue()) cell2_update3 = self.commands.update_cell( uuidsentinel.cell2, 'foo', 'fake://mq3', 'fake:///db1') self.assertEqual(3, cell2_update3) self.assertIn('exists', self.output.getvalue()) cell2_update4 = self.commands.update_cell( uuidsentinel.cell2, 'foo', 'fake://mq3', 'fake:///db3') self.assertEqual(0, cell2_update4) def test_update_cell_failed(self): ctxt = context.get_admin_context() objects.CellMapping(context=ctxt, uuid=uuidsentinel.cell1, name='cell1', transport_url='fake://mq', database_connection='fake:///db').create() with mock.patch('nova.objects.CellMapping.save') as mock_save: mock_save.side_effect = Exception self.assertEqual(2, self.commands.update_cell( uuidsentinel.cell1, 'foo', 'fake://new', 'fake:///new')) self.assertIn('Unable to update', self.output.getvalue()) def test_update_cell_success(self): ctxt = context.get_admin_context() objects.CellMapping(context=ctxt, uuid=uuidsentinel.cell1, name='cell1', transport_url='fake://mq', database_connection='fake:///db').create() self.assertEqual(0, self.commands.update_cell( uuidsentinel.cell1, 'foo', 'fake://new', 'fake:///new')) cm = objects.CellMapping.get_by_uuid(ctxt, uuidsentinel.cell1) self.assertEqual('foo', cm.name) self.assertEqual('fake://new', cm.transport_url) self.assertEqual('fake:///new', cm.database_connection) output = self.output.getvalue().strip() self.assertEqual('', output) def test_update_cell_success_defaults(self): ctxt = context.get_admin_context() objects.CellMapping(context=ctxt, uuid=uuidsentinel.cell1, name='cell1', transport_url='fake://mq', database_connection='fake:///db').create() self.assertEqual(0, self.commands.update_cell(uuidsentinel.cell1)) cm = objects.CellMapping.get_by_uuid(ctxt, uuidsentinel.cell1) self.assertEqual('cell1', cm.name) 
expected_transport_url = CONF.transport_url or 'fake://mq' self.assertEqual(expected_transport_url, cm.transport_url) expected_db_connection = CONF.database.connection or 'fake:///db' self.assertEqual(expected_db_connection, cm.database_connection) output = self.output.getvalue().strip() lines = output.split('\n') self.assertIn('using the value [DEFAULT]/transport_url', lines[0]) self.assertIn('using the value [database]/connection', lines[1]) self.assertEqual(2, len(lines)) def test_update_cell_disable_and_enable(self): ctxt = context.get_admin_context() objects.CellMapping(context=ctxt, uuid=uuidsentinel.cell1, name='cell1', transport_url='fake://mq', database_connection='fake:///db').create() self.assertEqual(4, self.commands.update_cell(uuidsentinel.cell1, disable=True, enable=True)) output = self.output.getvalue().strip() self.assertIn('Cell cannot be disabled and enabled at the same ' 'time.', output) def test_update_cell_disable_cell0(self): ctxt = context.get_admin_context() uuid0 = objects.CellMapping.CELL0_UUID objects.CellMapping(context=ctxt, uuid=uuid0, name='cell0', transport_url='fake://mq', database_connection='fake:///db').create() self.assertEqual(5, self.commands.update_cell(uuid0, disable=True)) output = self.output.getvalue().strip() self.assertIn('Cell0 cannot be disabled.', output) def test_update_cell_disable_success(self): ctxt = context.get_admin_context() uuid = uuidsentinel.cell1 objects.CellMapping(context=ctxt, uuid=uuid, name='cell1', transport_url='fake://mq', database_connection='fake:///db').create() cm = objects.CellMapping.get_by_uuid(ctxt, uuid) self.assertFalse(cm.disabled) self.assertEqual(0, self.commands.update_cell(uuid, disable=True)) cm = objects.CellMapping.get_by_uuid(ctxt, uuid) self.assertTrue(cm.disabled) output = self.output.getvalue().strip() lines = output.split('\n') self.assertIn('using the value [DEFAULT]/transport_url', lines[0]) self.assertIn('using the value [database]/connection', lines[1]) self.assertEqual(2, len(lines)) def test_update_cell_enable_success(self): ctxt = context.get_admin_context() uuid = uuidsentinel.cell1 objects.CellMapping(context=ctxt, uuid=uuid, name='cell1', transport_url='fake://mq', database_connection='fake:///db', disabled=True).create() cm = objects.CellMapping.get_by_uuid(ctxt, uuid) self.assertTrue(cm.disabled) self.assertEqual(0, self.commands.update_cell(uuid, enable=True)) cm = objects.CellMapping.get_by_uuid(ctxt, uuid) self.assertFalse(cm.disabled) output = self.output.getvalue().strip() lines = output.split('\n') self.assertIn('using the value [DEFAULT]/transport_url', lines[0]) self.assertIn('using the value [database]/connection', lines[1]) self.assertEqual(2, len(lines)) def test_update_cell_disable_already_disabled(self): ctxt = context.get_admin_context() objects.CellMapping(context=ctxt, uuid=uuidsentinel.cell1, name='cell1', transport_url='fake://mq', database_connection='fake:///db', disabled=True).create() cm = objects.CellMapping.get_by_uuid(ctxt, uuidsentinel.cell1) self.assertTrue(cm.disabled) self.assertEqual(0, self.commands.update_cell(uuidsentinel.cell1, disable=True)) self.assertTrue(cm.disabled) output = self.output.getvalue().strip() self.assertIn('is already disabled', output) def test_update_cell_enable_already_enabled(self): ctxt = context.get_admin_context() objects.CellMapping(context=ctxt, uuid=uuidsentinel.cell1, name='cell1', transport_url='fake://mq', database_connection='fake:///db').create() cm = objects.CellMapping.get_by_uuid(ctxt, uuidsentinel.cell1) 
self.assertFalse(cm.disabled) self.assertEqual(0, self.commands.update_cell(uuidsentinel.cell1, enable=True)) self.assertFalse(cm.disabled) output = self.output.getvalue().strip() self.assertIn('is already enabled', output) def test_list_hosts(self): ctxt = context.get_admin_context() # create the cell mapping cm1 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.map0, name='london', database_connection='fake:///db', transport_url='fake:///mq') cm1.create() cm2 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.map1, name='dallas', database_connection='fake:///db', transport_url='fake:///mq') cm2.create() # create a host mapping in another cell hm1 = objects.HostMapping( context=ctxt, host='fake-host-1', cell_mapping=cm1) hm1.create() hm2 = objects.HostMapping( context=ctxt, host='fake-host-2', cell_mapping=cm2) hm2.create() self.assertEqual(0, self.commands.list_hosts()) output = self.output.getvalue().strip() self.assertEqual('''\ +-----------+--------------------------------------+-------------+ | Cell Name | Cell UUID | Hostname | +-----------+--------------------------------------+-------------+ | london | %(uuid_map0)s | fake-host-1 | | dallas | %(uuid_map1)s | fake-host-2 | +-----------+--------------------------------------+-------------+''' % {"uuid_map0": uuidsentinel.map0, "uuid_map1": uuidsentinel.map1}, output) def test_list_hosts_in_cell(self): ctxt = context.get_admin_context() # create the cell mapping cm1 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.map0, name='london', database_connection='fake:///db', transport_url='fake:///mq') cm1.create() cm2 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.map1, name='dallas', database_connection='fake:///db', transport_url='fake:///mq') cm2.create() # create a host mapping in another cell hm1 = objects.HostMapping( context=ctxt, host='fake-host-1', cell_mapping=cm1) hm1.create() hm2 = objects.HostMapping( context=ctxt, host='fake-host-2', cell_mapping=cm2) hm2.create() self.assertEqual(0, self.commands.list_hosts( cell_uuid=uuidsentinel.map0)) output = self.output.getvalue().strip() self.assertEqual('''\ +-----------+--------------------------------------+-------------+ | Cell Name | Cell UUID | Hostname | +-----------+--------------------------------------+-------------+ | london | %(uuid_map0)s | fake-host-1 | +-----------+--------------------------------------+-------------+''' % {"uuid_map0": uuidsentinel.map0}, output) def test_list_hosts_cell_not_found(self): """Tests trying to delete a host but a specified cell is not found.""" self.assertEqual(1, self.commands.list_hosts( cell_uuid=uuidsentinel.cell1)) output = self.output.getvalue().strip() self.assertEqual( 'Cell with uuid %s was not found.' % uuidsentinel.cell1, output) def test_delete_host_cell_not_found(self): """Tests trying to delete a host but a specified cell is not found.""" self.assertEqual(1, self.commands.delete_host(uuidsentinel.cell1, 'fake-host')) output = self.output.getvalue().strip() self.assertEqual( 'Cell with uuid %s was not found.' 
% uuidsentinel.cell1, output) def test_delete_host_host_not_found(self): """Tests trying to delete a host but the host is not found.""" ctxt = context.get_admin_context() # create the cell mapping cm = objects.CellMapping( context=ctxt, uuid=uuidsentinel.cell1, database_connection='fake:///db', transport_url='fake:///mq') cm.create() self.assertEqual(2, self.commands.delete_host(uuidsentinel.cell1, 'fake-host')) output = self.output.getvalue().strip() self.assertEqual('The host fake-host was not found.', output) def test_delete_host_host_not_in_cell(self): """Tests trying to delete a host but the host does not belongs to a specified cell. """ ctxt = context.get_admin_context() # create the cell mapping cm1 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.cell1, database_connection='fake:///db', transport_url='fake:///mq') cm1.create() cm2 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.cell2, database_connection='fake:///db', transport_url='fake:///mq') cm2.create() # create a host mapping in another cell hm = objects.HostMapping( context=ctxt, host='fake-host', cell_mapping=cm2) hm.create() self.assertEqual(3, self.commands.delete_host(uuidsentinel.cell1, 'fake-host')) output = self.output.getvalue().strip() self.assertEqual(('The host fake-host was not found in the cell %s.' % uuidsentinel.cell1), output) @mock.patch.object(objects.InstanceList, 'get_by_host') @mock.patch.object(objects.ComputeNodeList, 'get_all_by_host') def test_delete_host_instances_exist(self, mock_get_cn, mock_get_by_host): """Tests trying to delete a host but the host has instances.""" ctxt = context.get_admin_context() # create the cell mapping cm1 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.cell1, database_connection='fake:///db', transport_url='fake:///mq') cm1.create() # create a host mapping in the cell hm = objects.HostMapping( context=ctxt, host='fake-host', cell_mapping=cm1) hm.create() mock_get_by_host.return_value = [objects.Instance( ctxt, uuid=uuidsentinel.instance)] mock_get_cn.return_value = [] self.assertEqual(4, self.commands.delete_host(uuidsentinel.cell1, 'fake-host')) output = self.output.getvalue().strip() self.assertEqual('There are instances on the host fake-host.', output) mock_get_by_host.assert_called_once_with( test.MatchType(context.RequestContext), 'fake-host') @mock.patch.object(objects.InstanceList, 'get_by_host', return_value=[]) @mock.patch.object(objects.HostMapping, 'destroy') @mock.patch.object(objects.ComputeNodeList, 'get_all_by_host') def test_delete_host_success(self, mock_get_cn, mock_destroy, mock_get_by_host): """Tests trying to delete a host that has no instances.""" ctxt = context.get_admin_context() # create the cell mapping cm1 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.cell1, database_connection='fake:///db', transport_url='fake:///mq') cm1.create() # create a host mapping in the cell hm = objects.HostMapping( context=ctxt, host='fake-host', cell_mapping=cm1) hm.create() mock_get_cn.return_value = [mock.MagicMock(), mock.MagicMock()] self.assertEqual(0, self.commands.delete_host(uuidsentinel.cell1, 'fake-host')) output = self.output.getvalue().strip() self.assertEqual('', output) mock_get_by_host.assert_called_once_with( test.MatchType(context.RequestContext), 'fake-host') mock_destroy.assert_called_once_with() for node in mock_get_cn.return_value: self.assertEqual(0, node.mapped) node.save.assert_called_once_with() @mock.patch.object(objects.InstanceList, 'get_by_host', return_value=[]) @mock.patch.object(objects.HostMapping, 
'destroy') @mock.patch.object(objects.ComputeNodeList, 'get_all_by_host', side_effect=exception.ComputeHostNotFound(host='fake-host')) def test_delete_host_success_compute_host_not_found(self, mock_get_cn, mock_destroy, mock_get_by_host): """Tests trying to delete a host that has no instances, but cannot be found by ComputeNodeList.get_all_by_host. """ ctxt = context.get_admin_context() # create the cell mapping cm1 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.cell1, database_connection='fake:///db', transport_url='fake:///mq') cm1.create() # create a host mapping in the cell hm = objects.HostMapping( context=ctxt, host='fake-host', cell_mapping=cm1) hm.create() self.assertEqual(0, self.commands.delete_host(uuidsentinel.cell1, 'fake-host')) output = self.output.getvalue().strip() self.assertEqual('', output) mock_get_by_host.assert_called_once_with( test.MatchType(context.RequestContext), 'fake-host') mock_destroy.assert_called_once_with() mock_get_cn.assert_called_once_with( test.MatchType(context.RequestContext), 'fake-host') @ddt.ddt class TestNovaManagePlacement(test.NoDBTestCase): """Unit tests for the nova-manage placement commands. Tests in this class should be simple and can rely on mock, so they are usually restricted to negative or side-effect type tests. For more involved functional scenarios, use nova.tests.functional.test_nova_manage. """ def setUp(self): super(TestNovaManagePlacement, self).setUp() self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) self.cli = manage.PlacementCommands() self.useFixture(fixtures.MockPatch('nova.network.neutron.get_client')) def test_heal_allocations_with_cell_instance_id(self): """Test heal allocation with both cell id and instance id""" cell_uuid = uuidutils.generate_uuid() instance_uuid = uuidutils.generate_uuid() self.assertEqual(127, self.cli.heal_allocations( instance_uuid=instance_uuid, cell_uuid=cell_uuid)) self.assertIn('The --cell and --instance options', self.output.getvalue()) @mock.patch('nova.objects.CellMapping.get_by_uuid', side_effect=exception.CellMappingNotFound(uuid='fake')) def test_heal_allocations_with_cell_id_not_found(self, mock_get): """Test the case where cell_id is not found""" self.assertEqual(127, self.cli.heal_allocations(cell_uuid='fake')) output = self.output.getvalue().strip() self.assertEqual('Cell with uuid fake was not found.', output) @ddt.data(-1, 0, "one") def test_heal_allocations_invalid_max_count(self, max_count): self.assertEqual(127, self.cli.heal_allocations(max_count=max_count)) @mock.patch('nova.objects.CellMappingList.get_all', return_value=objects.CellMappingList()) def test_heal_allocations_no_cells(self, mock_get_all_cells): self.assertEqual(4, self.cli.heal_allocations(verbose=True)) self.assertIn('No cells to process', self.output.getvalue()) @mock.patch('nova.objects.CellMappingList.get_all', return_value=objects.CellMappingList(objects=[ objects.CellMapping(name='cell1', uuid=uuidsentinel.cell1)])) @mock.patch('nova.objects.InstanceList.get_by_filters', return_value=objects.InstanceList()) def test_heal_allocations_no_instances( self, mock_get_instances, mock_get_all_cells): self.assertEqual(4, self.cli.heal_allocations(verbose=True)) self.assertIn('Processed 0 instances.', self.output.getvalue()) @mock.patch('nova.objects.CellMappingList.get_all', return_value=objects.CellMappingList(objects=[ objects.CellMapping(name='cell1', uuid=uuidsentinel.cell1)])) @mock.patch('nova.objects.InstanceList.get_by_filters', 
return_value=objects.InstanceList(objects=[ objects.Instance( uuid=uuidsentinel.instance, host='fake', node='fake', task_state=None)])) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocs_for_consumer', return_value={}) @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename', side_effect=exception.ComputeHostNotFound(host='fake')) def test_heal_allocations_compute_host_not_found( self, mock_get_compute_node, mock_get_allocs, mock_get_instances, mock_get_all_cells): self.assertEqual(2, self.cli.heal_allocations()) self.assertIn('Compute host fake could not be found.', self.output.getvalue()) @mock.patch('nova.objects.CellMappingList.get_all', return_value=objects.CellMappingList(objects=[ objects.CellMapping(name='cell1', uuid=uuidsentinel.cell1)])) @mock.patch('nova.objects.InstanceList.get_by_filters', return_value=objects.InstanceList(objects=[ objects.Instance( uuid=uuidsentinel.instance, host='fake', node='fake', task_state=None, flavor=objects.Flavor(), project_id='fake-project', user_id='fake-user')])) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocs_for_consumer', return_value={}) @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename', return_value=objects.ComputeNode(uuid=uuidsentinel.node)) @mock.patch('nova.scheduler.utils.resources_from_flavor', return_value={'VCPU': 1}) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.put', return_value=fake_requests.FakeResponse( 500, content=jsonutils.dumps({"errors": [{"code": ""}]}))) def test_heal_allocations_put_allocations_fails( self, mock_put_allocations, mock_res_from_flavor, mock_get_compute_node, mock_get_allocs, mock_get_instances, mock_get_all_cells): self.assertEqual(3, self.cli.heal_allocations()) self.assertIn('Failed to update allocations for consumer', self.output.getvalue()) instance = mock_get_instances.return_value[0] mock_res_from_flavor.assert_called_once_with( instance, instance.flavor) expected_payload = { 'allocations': { uuidsentinel.node: { 'resources': {'VCPU': 1} } }, 'user_id': 'fake-user', 'project_id': 'fake-project', 'consumer_generation': None } mock_put_allocations.assert_called_once_with( '/allocations/%s' % instance.uuid, expected_payload, global_request_id=mock.ANY, version='1.28') @mock.patch('nova.objects.CellMappingList.get_all', new=mock.Mock(return_value=objects.CellMappingList(objects=[ objects.CellMapping(name='cell1', uuid=uuidsentinel.cell1)]))) @mock.patch('nova.objects.InstanceList.get_by_filters', new=mock.Mock(return_value=objects.InstanceList(objects=[ objects.Instance( uuid=uuidsentinel.instance, host='fake', node='fake', task_state=None, flavor=objects.Flavor(), project_id='fake-project', user_id='fake-user')]))) def test_heal_allocations_get_allocs_placement_fails(self): self.assertEqual(3, self.cli.heal_allocations()) output = self.output.getvalue() self.assertIn('Allocation retrieval failed', output) # Having not mocked get_allocs_for_consumer, we get MissingAuthPlugin. 
self.assertIn('An auth plugin is required', output) @mock.patch('nova.objects.CellMappingList.get_all', new=mock.Mock(return_value=objects.CellMappingList(objects=[ objects.CellMapping(name='cell1', uuid=uuidsentinel.cell1)]))) @mock.patch('nova.objects.InstanceList.get_by_filters', side_effect=[ objects.InstanceList(objects=[objects.Instance( uuid=uuidsentinel.instance, host='fake', node='fake', task_state=None, flavor=objects.Flavor(), project_id='fake-project', user_id='fake-user')]), objects.InstanceList(objects=[])]) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocs_for_consumer', new=mock.Mock( side_effect=exception.ConsumerAllocationRetrievalFailed( consumer_uuid='CONSUMER', error='ERROR'))) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.put', new_callable=mock.NonCallableMagicMock) def test_heal_allocations_get_allocs_retrieval_fails(self, mock_put, mock_getinst): self.assertEqual(3, self.cli.heal_allocations()) @mock.patch('nova.objects.CellMappingList.get_all', return_value=objects.CellMappingList(objects=[ objects.CellMapping(name='cell1', uuid=uuidsentinel.cell1)])) @mock.patch('nova.objects.InstanceList.get_by_filters', # Called twice, first returns 1 instance, second returns [] side_effect=( objects.InstanceList(objects=[ objects.Instance( uuid=uuidsentinel.instance, host='fake', node='fake', task_state=None, project_id='fake-project', user_id='fake-user')]), objects.InstanceList())) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocs_for_consumer') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename', new_callable=mock.NonCallableMock) # assert not called @mock.patch('nova.scheduler.client.report.SchedulerReportClient.put', return_value=fake_requests.FakeResponse(204)) def test_heal_allocations( self, mock_put, mock_get_compute_node, mock_get_allocs, mock_get_instances, mock_get_all_cells): """Tests the scenario that there are allocations created using placement API microversion < 1.8 where project/user weren't provided. The allocations will be re-put with the instance project_id/user_id values. Note that GET /allocations/{consumer_id} since commit f44965010 will create the missing consumer record using the config option sentinels for project and user, so we won't get null back for the consumer project/user. 
""" mock_get_allocs.return_value = { "allocations": { "92637880-2d79-43c6-afab-d860886c6391": { "generation": 2, "resources": { "DISK_GB": 50, "MEMORY_MB": 512, "VCPU": 2 } } }, "project_id": uuidsentinel.project_id, "user_id": uuidsentinel.user_id, "consumer_generation": 12, } self.assertEqual(0, self.cli.heal_allocations(verbose=True)) self.assertIn('Processed 1 instances.', self.output.getvalue()) mock_get_allocs.assert_called_once_with( test.MatchType(context.RequestContext), uuidsentinel.instance) expected_put_data = mock_get_allocs.return_value expected_put_data['project_id'] = 'fake-project' expected_put_data['user_id'] = 'fake-user' mock_put.assert_called_once_with( '/allocations/%s' % uuidsentinel.instance, expected_put_data, global_request_id=mock.ANY, version='1.28') @mock.patch('nova.objects.CellMappingList.get_all', return_value=objects.CellMappingList(objects=[ objects.CellMapping(name='cell1', uuid=uuidsentinel.cell1)])) @mock.patch('nova.objects.InstanceList.get_by_filters', return_value=objects.InstanceList(objects=[ objects.Instance( uuid=uuidsentinel.instance, host='fake', node='fake', task_state=None, project_id='fake-project', user_id='fake-user')])) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocs_for_consumer') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.put', return_value=fake_requests.FakeResponse( 409, content=jsonutils.dumps( {"errors": [ {"code": "placement.concurrent_update", "detail": "consumer generation conflict"}]}))) def test_heal_allocations_put_fails( self, mock_put, mock_get_allocs, mock_get_instances, mock_get_all_cells): """Tests the scenario that there are allocations created using placement API microversion < 1.8 where project/user weren't provided and there was no consumer. The allocations will be re-put with the instance project_id/user_id values but that fails with a 409 so a return code of 3 is expected from the command. """ mock_get_allocs.return_value = { "allocations": { "92637880-2d79-43c6-afab-d860886c6391": { "generation": 2, "resources": { "DISK_GB": 50, "MEMORY_MB": 512, "VCPU": 2 } } }, "project_id": uuidsentinel.project_id, "user_id": uuidsentinel.user_id } self.assertEqual(3, self.cli.heal_allocations(verbose=True)) self.assertIn( 'consumer generation conflict', self.output.getvalue()) mock_get_allocs.assert_called_once_with( test.MatchType(context.RequestContext), uuidsentinel.instance) expected_put_data = mock_get_allocs.return_value expected_put_data['project_id'] = 'fake-project' expected_put_data['user_id'] = 'fake-user' mock_put.assert_called_once_with( '/allocations/%s' % uuidsentinel.instance, expected_put_data, global_request_id=mock.ANY, version='1.28') @mock.patch('nova.compute.api.AggregateAPI.get_aggregate_list', return_value=objects.AggregateList(objects=[ objects.Aggregate(name='foo', hosts=['host1'])])) @mock.patch('nova.objects.HostMapping.get_by_host', side_effect=exception.HostMappingNotFound(name='host1')) def test_sync_aggregates_host_mapping_not_found( self, mock_get_host_mapping, mock_get_aggs): """Tests that we handle HostMappingNotFound.""" result = self.cli.sync_aggregates(verbose=True) self.assertEqual(4, result) self.assertIn('The following hosts were found in nova host aggregates ' 'but no host mappings were found in the nova API DB. ' 'Run "nova-manage cell_v2 discover_hosts" and then ' 'retry. 
Missing: host1', self.output.getvalue()) @mock.patch('nova.compute.api.AggregateAPI.get_aggregate_list', return_value=objects.AggregateList(objects=[ objects.Aggregate(name='foo', hosts=['host1'])])) @mock.patch('nova.objects.HostMapping.get_by_host', return_value=objects.HostMapping( host='host1', cell_mapping=objects.CellMapping())) @mock.patch('nova.objects.ComputeNodeList.get_all_by_host', return_value=objects.ComputeNodeList(objects=[ objects.ComputeNode(hypervisor_hostname='node1'), objects.ComputeNode(hypervisor_hostname='node2')])) @mock.patch('nova.context.target_cell') def test_sync_aggregates_too_many_computes_for_host( self, mock_target_cell, mock_get_nodes, mock_get_host_mapping, mock_get_aggs): """Tests the scenario that a host in an aggregate has more than one compute node so the command does not know which compute node uuid to use for the placement resource provider aggregate and fails. """ mock_target_cell.return_value.__enter__.return_value = ( mock.sentinel.cell_context) result = self.cli.sync_aggregates(verbose=True) self.assertEqual(1, result) self.assertIn('Unexpected number of compute node records ' '(2) found for host host1. There should ' 'only be a one-to-one mapping.', self.output.getvalue()) mock_get_nodes.assert_called_once_with( mock.sentinel.cell_context, 'host1') @mock.patch('nova.compute.api.AggregateAPI.get_aggregate_list', return_value=objects.AggregateList(objects=[ objects.Aggregate(name='foo', hosts=['host1'])])) @mock.patch('nova.objects.HostMapping.get_by_host', return_value=objects.HostMapping( host='host1', cell_mapping=objects.CellMapping())) @mock.patch('nova.objects.ComputeNodeList.get_all_by_host', side_effect=exception.ComputeHostNotFound(host='host1')) @mock.patch('nova.context.target_cell') def test_sync_aggregates_compute_not_found( self, mock_target_cell, mock_get_nodes, mock_get_host_mapping, mock_get_aggs): """Tests the scenario that no compute node record is found for a given host in an aggregate. """ mock_target_cell.return_value.__enter__.return_value = ( mock.sentinel.cell_context) result = self.cli.sync_aggregates(verbose=True) self.assertEqual(5, result) self.assertIn('Unable to find matching compute_nodes record entries ' 'in the cell database for the following hosts; does the ' 'nova-compute service on each host need to be ' 'restarted? Missing: host1', self.output.getvalue()) mock_get_nodes.assert_called_once_with( mock.sentinel.cell_context, 'host1') @mock.patch('nova.compute.api.AggregateAPI.get_aggregate_list', new=mock.Mock(return_value=objects.AggregateList(objects=[ objects.Aggregate(name='foo', hosts=['host1'], uuid=uuidsentinel.aggregate)]))) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_add_host') def test_sync_aggregates_get_provider_aggs_placement_server_error( self, mock_agg_add): """Tests the scenario that placement returns an unexpected server error when getting aggregates for a given resource provider. 
""" mock_agg_add.side_effect = ( exception.ResourceProviderAggregateRetrievalFailed( uuid=uuidsentinel.rp_uuid)) with mock.patch.object(self.cli, '_get_rp_uuid_for_host', return_value=uuidsentinel.rp_uuid): result = self.cli.sync_aggregates(verbose=True) self.assertEqual(2, result) self.assertIn('Failed to get aggregates for resource provider with ' 'UUID %s' % uuidsentinel.rp_uuid, self.output.getvalue()) @mock.patch('nova.compute.api.AggregateAPI.get_aggregate_list', new=mock.Mock(return_value=objects.AggregateList(objects=[ objects.Aggregate(name='foo', hosts=['host1'], uuid=uuidsentinel.aggregate)]))) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_add_host') def test_sync_aggregates_put_aggregates_fails_provider_not_found( self, mock_agg_add): """Tests the scenario that we are trying to add a provider to an aggregate in placement but the PUT /resource_providers/{rp_uuid}/aggregates call fails with a 404 because the provider is not found. """ mock_agg_add.side_effect = exception.ResourceProviderNotFound( name_or_uuid=uuidsentinel.rp_uuid) with mock.patch.object(self.cli, '_get_rp_uuid_for_host', return_value=uuidsentinel.rp_uuid): result = self.cli.sync_aggregates(verbose=True) self.assertEqual(6, result) self.assertIn('Unable to find matching resource provider record in ' 'placement with uuid for the following hosts: ' '(host1=%s)' % uuidsentinel.rp_uuid, self.output.getvalue()) mock_agg_add.assert_called_once_with( mock.ANY, uuidsentinel.aggregate, rp_uuid=uuidsentinel.rp_uuid) @mock.patch('nova.compute.api.AggregateAPI.get_aggregate_list', new=mock.Mock(return_value=objects.AggregateList(objects=[ objects.Aggregate(name='foo', hosts=['host1'], uuid=uuidsentinel.aggregate)]))) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_add_host') def test_sync_aggregates_put_aggregates_fails_generation_conflict( self, mock_agg_add): """Tests the scenario that we are trying to add a provider to an aggregate in placement but the PUT /resource_providers/{rp_uuid}/aggregates call fails with a 409 generation conflict (even after retries). """ mock_agg_add.side_effect = exception.ResourceProviderUpdateConflict( uuid=uuidsentinel.rp_uuid, generation=1, error="Conflict!") with mock.patch.object(self.cli, '_get_rp_uuid_for_host', return_value=uuidsentinel.rp_uuid): result = self.cli.sync_aggregates(verbose=True) self.assertEqual(3, result) self.assertIn("Failed updating provider aggregates for " "host (host1), provider (%s) and aggregate " "(%s)." % (uuidsentinel.rp_uuid, uuidsentinel.aggregate), self.output.getvalue()) self.assertIn("Conflict!", self.output.getvalue()) def test_has_request_but_no_allocation(self): # False because there is a full resource_request and allocation set. self.assertFalse( self.cli._has_request_but_no_allocation( { 'id': uuidsentinel.healed, 'resource_request': { 'resources': { 'NET_BW_EGR_KILOBIT_PER_SEC': 1000, }, 'required': [ 'CUSTOM_VNIC_TYPE_NORMAL' ] }, 'binding:profile': {'allocation': uuidsentinel.rp1} })) # True because there is a full resource_request but no allocation set. self.assertTrue( self.cli._has_request_but_no_allocation( { 'id': uuidsentinel.needs_healing, 'resource_request': { 'resources': { 'NET_BW_EGR_KILOBIT_PER_SEC': 1000, }, 'required': [ 'CUSTOM_VNIC_TYPE_NORMAL' ] }, 'binding:profile': {} })) # True because there is a full resource_request but no allocation set. 
self.assertTrue( self.cli._has_request_but_no_allocation( { 'id': uuidsentinel.needs_healing_null_profile, 'resource_request': { 'resources': { 'NET_BW_EGR_KILOBIT_PER_SEC': 1000, }, 'required': [ 'CUSTOM_VNIC_TYPE_NORMAL' ] }, 'binding:profile': None, })) # False because there are no resources in the resource_request. self.assertFalse( self.cli._has_request_but_no_allocation( { 'id': uuidsentinel.empty_resources, 'resource_request': { 'resources': {}, 'required': [ 'CUSTOM_VNIC_TYPE_NORMAL' ] }, 'binding:profile': {} })) # False because there are no resources in the resource_request. self.assertFalse( self.cli._has_request_but_no_allocation( { 'id': uuidsentinel.missing_resources, 'resource_request': { 'required': [ 'CUSTOM_VNIC_TYPE_NORMAL' ] }, 'binding:profile': {} })) # False because there are no required traits in the resource_request. self.assertFalse( self.cli._has_request_but_no_allocation( { 'id': uuidsentinel.empty_required, 'resource_request': { 'resources': { 'NET_BW_EGR_KILOBIT_PER_SEC': 1000, }, 'required': [] }, 'binding:profile': {} })) # False because there are no required traits in the resource_request. self.assertFalse( self.cli._has_request_but_no_allocation( { 'id': uuidsentinel.missing_required, 'resource_request': { 'resources': { 'NET_BW_EGR_KILOBIT_PER_SEC': 1000, }, }, 'binding:profile': {} })) # False because there are no resources or required traits in the # resource_request. self.assertFalse( self.cli._has_request_but_no_allocation( { 'id': uuidsentinel.empty_resource_request, 'resource_request': {}, 'binding:profile': {} })) # False because there is no resource_request. self.assertFalse( self.cli._has_request_but_no_allocation( { 'id': uuidsentinel.missing_resource_request, 'binding:profile': {} })) def test_update_ports_only_updates_binding_profile(self): """Simple test to make sure that only the port's binding:profile is updated based on the provided port dict's binding:profile and not just the binding:profile allocation key or other fields on the port. 
""" neutron = mock.Mock() output = mock.Mock() binding_profile = { 'allocation': uuidsentinel.rp_uuid, 'foo': 'bar' } ports_to_update = [{ 'id': uuidsentinel.port_id, 'binding:profile': binding_profile, 'bar': 'baz' }] self.cli._update_ports(neutron, ports_to_update, output) expected_update_body = { 'port': { 'binding:profile': binding_profile } } neutron.update_port.assert_called_once_with( uuidsentinel.port_id, body=expected_update_body) def test_audit_with_wrong_provider_uuid(self): with mock.patch.object( self.cli, '_get_resource_provider', side_effect=exception.ResourceProviderNotFound( name_or_uuid=uuidsentinel.fake_uuid)): ret = self.cli.audit( provider_uuid=uuidsentinel.fake_uuid) self.assertEqual(127, ret) output = self.output.getvalue() self.assertIn( 'Resource provider with UUID %s' % uuidsentinel.fake_uuid, output) @mock.patch.object(manage.PlacementCommands, '_check_orphaned_allocations_for_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.get') def _test_audit(self, get_resource_providers, check_orphaned_allocs, verbose=False, delete=False, errors=False, found=False): rps = [ {"generation": 1, "uuid": uuidsentinel.rp1, "links": None, "name": "rp1", "parent_provider_uuid": None, "root_provider_uuid": uuidsentinel.rp1}, {"generation": 1, "uuid": uuidsentinel.rp2, "links": None, "name": "rp2", "parent_provider_uuid": None, "root_provider_uuid": uuidsentinel.rp2}, ] get_resource_providers.return_value = fake_requests.FakeResponse( 200, content=jsonutils.dumps({"resource_providers": rps})) if errors: # We found one orphaned allocation per RP but RP1 got a fault check_orphaned_allocs.side_effect = ((1, 1), (1, 0)) elif found: # we found one orphaned allocation per RP and we had no faults check_orphaned_allocs.side_effect = ((1, 0), (1, 0)) else: # No orphaned allocations are found for all the RPs check_orphaned_allocs.side_effect = ((0, 0), (0, 0)) ret = self.cli.audit(verbose=verbose, delete=delete) if errors: # Any fault stops the audit and provides a return code equals to 1 expected_ret = 1 elif found and delete: # We found orphaned allocations and deleted them expected_ret = 4 elif found and not delete: # We found orphaned allocations but we left them expected_ret = 3 else: # Nothing was found expected_ret = 0 self.assertEqual(expected_ret, ret) call1 = mock.call(mock.ANY, mock.ANY, mock.ANY, rps[0], delete) call2 = mock.call(mock.ANY, mock.ANY, mock.ANY, rps[1], delete) if errors: # We stop checking other RPs once we got a fault check_orphaned_allocs.assert_has_calls([call1]) else: # All the RPs are checked check_orphaned_allocs.assert_has_calls([call1, call2]) if verbose and found: output = self.output.getvalue() self.assertIn('Processed 2 allocations', output) if errors: output = self.output.getvalue() self.assertIn( 'The Resource Provider %s had problems' % rps[0]["uuid"], output) def test_audit_not_found_orphaned_allocs(self): self._test_audit(found=False) def test_audit_found_orphaned_allocs_not_verbose(self): self._test_audit(found=True) def test_audit_found_orphaned_allocs_verbose(self): self._test_audit(found=True, verbose=True) def test_audit_found_orphaned_allocs_and_deleted_them(self): self._test_audit(found=True, delete=True) def test_audit_found_orphaned_allocs_but_got_errors(self): self._test_audit(errors=True) @mock.patch.object(manage.PlacementCommands, '_delete_allocations_from_consumer') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'get_allocations_for_resource_provider') @mock.patch.object(manage.PlacementCommands, '_get_instances_and_current_migrations') def test_check_orphaned_allocations_for_provider(self, get_insts_and_migs, get_allocs_for_rp, delete_allocs): provider = {"generation": 1, "uuid": uuidsentinel.rp1, "links": None, "name": "rp1", "parent_provider_uuid": None, "root_provider_uuid": uuidsentinel.rp1} compute_resources = {'VCPU': 1, 'MEMORY_MB': 2048, 'DISK_GB': 20} allocations = { # Some orphaned compute allocation uuidsentinel.orphaned_alloc1: {'resources': compute_resources}, # Some existing instance allocation uuidsentinel.inst1: {'resources': compute_resources}, # Some existing migration allocation uuidsentinel.mig1: {'resources': compute_resources}, # Some other allocation not related to Nova uuidsentinel.other_alloc1: {'resources': {'CUSTOM_GOO'}}, } get_insts_and_migs.return_value = ( [uuidsentinel.inst1], [uuidsentinel.mig1]) get_allocs_for_rp.return_value = report.ProviderAllocInfo(allocations) ctxt = context.RequestContext() placement = report.SchedulerReportClient() ret = self.cli._check_orphaned_allocations_for_provider( ctxt, placement, lambda x: x, provider, True) get_allocs_for_rp.assert_called_once_with(ctxt, uuidsentinel.rp1) delete_allocs.assert_called_once_with(ctxt, placement, provider, uuidsentinel.orphaned_alloc1, 'instance') self.assertEqual((1, 0), ret) class TestNovaManageMain(test.NoDBTestCase): """Tests the nova-manage:main() setup code.""" def setUp(self): super(TestNovaManageMain, self).setUp() self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) @mock.patch.object(manage.config, 'parse_args') @mock.patch.object(manage, 'CONF') def test_error_traceback(self, mock_conf, mock_parse_args): with mock.patch.object(manage.cmd_common, 'get_action_fn', side_effect=test.TestingException('oops')): mock_conf.post_mortem = False self.assertEqual(255, manage.main()) # assert the traceback is dumped to stdout output = self.output.getvalue() self.assertIn('An error has occurred', output) self.assertIn('Traceback', output) self.assertIn('oops', output) @mock.patch('pdb.post_mortem') @mock.patch.object(manage.config, 'parse_args') @mock.patch.object(manage, 'CONF') def test_error_post_mortem(self, mock_conf, mock_parse_args, mock_pm): with mock.patch.object(manage.cmd_common, 'get_action_fn', side_effect=test.TestingException('oops')): mock_conf.post_mortem = True self.assertEqual(255, manage.main()) self.assertTrue(mock_pm.called) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/cmd/test_nova_api.py0000664000175000017500000000606700000000000021633 0ustar00zuulzuul00000000000000# Copyright 2015 HPE, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
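# NOTE(editor): illustrative sketch only, not part of the original module.
# The TestNovaAPI class below is decorated at class level with ``mock.patch``
# and ``mock.patch.object``; ``unittest.mock`` applies a class-level patch to
# every test method and passes the created mock in as an extra positional
# argument, which is why methods such as ``test_continues_on_failure`` accept
# a ``version_cache`` parameter. A minimal, self-contained version of that
# pattern (the ``Thing``/``ThingTest`` names are hypothetical) looks like:
#
#     import unittest
#
#     import mock
#
#     class Thing(object):
#         def ping(self):
#             return 'real'
#
#     @mock.patch.object(Thing, 'ping', return_value='fake')
#     class ThingTest(unittest.TestCase):
#         def test_ping(self, mock_ping):
#             # the class-level patch is injected into each test method
#             self.assertEqual('fake', Thing().ping())
#             mock_ping.assert_called_once_with()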
import mock from nova.cmd import api from nova import config from nova import exception from nova import test # required because otherwise oslo early parse_args dies @mock.patch.object(config, 'parse_args', new=lambda *args, **kwargs: None) # required so we don't set the global service version cache @mock.patch('nova.objects.Service.enable_min_version_cache') class TestNovaAPI(test.NoDBTestCase): def test_continues_on_failure(self, version_cache): count = [1] fake_server = mock.MagicMock() fake_server.workers = 123 def fake_service(api, **kw): while count: count.pop() raise exception.PasteAppNotFound(name=api, path='/') return fake_server self.flags(enabled_apis=['osapi_compute', 'metadata']) with mock.patch.object(api, 'service') as mock_service: mock_service.WSGIService.side_effect = fake_service api.main() mock_service.WSGIService.assert_has_calls([ mock.call('osapi_compute', use_ssl=False), mock.call('metadata', use_ssl=False), ]) launcher = mock_service.process_launcher.return_value launcher.launch_service.assert_called_once_with( fake_server, workers=123) self.assertTrue(version_cache.called) @mock.patch('sys.exit') def test_fails_if_none_started(self, mock_exit, version_cache): mock_exit.side_effect = test.TestingException self.flags(enabled_apis=[]) with mock.patch.object(api, 'service') as mock_service: self.assertRaises(test.TestingException, api.main) mock_exit.assert_called_once_with(1) launcher = mock_service.process_launcher.return_value self.assertFalse(launcher.wait.called) self.assertFalse(version_cache.called) @mock.patch('sys.exit') def test_fails_if_all_failed(self, mock_exit, version_cache): mock_exit.side_effect = test.TestingException self.flags(enabled_apis=['osapi_compute', 'metadata']) with mock.patch.object(api, 'service') as mock_service: mock_service.WSGIService.side_effect = exception.PasteAppNotFound( name='foo', path='/') self.assertRaises(test.TestingException, api.main) mock_exit.assert_called_once_with(1) launcher = mock_service.process_launcher.return_value self.assertFalse(launcher.wait.called) self.assertTrue(version_cache.called) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/cmd/test_policy.py0000664000175000017500000002147500000000000021336 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Unit tests for the nova-policy-check CLI interfaces. 
""" import fixtures import mock from six.moves import StringIO from nova.cmd import policy import nova.conf from nova import context as nova_context from nova.db import api as db from nova import exception from nova.policies import base as base_policies from nova.policies import instance_actions as ia_policies from nova import test from nova.tests.unit import fake_instance from nova.tests.unit import policy_fixture CONF = nova.conf.CONF class TestPolicyCheck(test.NoDBTestCase): def setUp(self): super(TestPolicyCheck, self).setUp() self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) self.policy = self.useFixture(policy_fixture.RealPolicyFixture()) self.cmd = policy.PolicyCommands() @mock.patch.object(policy.PolicyCommands, '_filter_rules') @mock.patch.object(policy.PolicyCommands, '_get_target') @mock.patch.object(policy.PolicyCommands, '_get_context') def test_check(self, mock_get_context, mock_get_target, mock_filter_rules): fake_rules = ['fake:rule', 'faux:roule'] mock_filter_rules.return_value = fake_rules self.cmd.check(target=mock.sentinel.target) mock_get_context.assert_called_once_with() mock_get_target.assert_called_once_with(mock_get_context.return_value, mock.sentinel.target) mock_filter_rules.assert_called_once_with( mock_get_context.return_value, '', mock_get_target.return_value) self.assertEqual('\n'.join(fake_rules) + '\n', self.output.getvalue()) @mock.patch.object(nova_context, 'RequestContext') @mock.patch.object(policy, 'CONF') def test_get_context(self, mock_CONF, mock_RequestContext): context = self.cmd._get_context() self.assertEqual(mock_RequestContext.return_value, context) mock_RequestContext.assert_called_once_with( roles=mock_CONF.os_roles, user_id=mock_CONF.os_user_id, project_id=mock_CONF.os_tenant_id) def test_get_target_none(self): target = self.cmd._get_target(mock.sentinel.context, None) self.assertIsNone(target) def test_get_target_invalid_attribute(self): self.assertRaises(exception.InvalidAttribute, self.cmd._get_target, mock.sentinel.context, ['nope=nada']) def test_get_target(self): expected_target = { 'project_id': 'fake-proj', 'user_id': 'fake-user', 'quota_class': 'fake-quota-class', 'availability_zone': 'fake-az', } given_target = ['='.join([key, val]) for key, val in expected_target.items()] actual_target = self.cmd._get_target(mock.sentinel.context, given_target) self.assertDictEqual(expected_target, actual_target) @mock.patch.object(nova_context, 'get_admin_context') @mock.patch.object(db, 'instance_get_by_uuid') def test_get_target_instance(self, mock_instance_get, mock_get_admin_context): admin_context = nova_context.RequestContext(is_admin=True) mock_get_admin_context.return_value = admin_context given_target = ['instance_id=fake_id'] mock_instance_get.return_value = fake_instance.fake_db_instance() target = self.cmd._get_target(mock.sentinel.context, given_target) self.assertEqual(target, {'user_id': 'fake-user', 'project_id': 'fake-project'}) mock_instance_get.assert_called_once_with(admin_context, 'fake_id') def _check_filter_rules(self, context=None, target=None, expected_rules=None): context = context or nova_context.get_admin_context() if expected_rules is None: expected_rules = [ r.name for r in ia_policies.list_rules()] passing_rules = self.cmd._filter_rules( context, 'os-instance-actions:list', target) passing_rules += self.cmd._filter_rules( context, 'os-instance-actions:show', target) passing_rules += self.cmd._filter_rules( context, 'os-instance-actions:events', target) passing_rules += 
self.cmd._filter_rules( context, 'os-instance-actions:events:details', target) self.assertEqual(set(expected_rules), set(passing_rules)) def test_filter_rules_non_admin(self): context = nova_context.RequestContext() rule_conditions = [base_policies.PROJECT_READER_OR_SYSTEM_READER] expected_rules = [r.name for r in ia_policies.list_rules() if r.check_str in rule_conditions] self._check_filter_rules(context, expected_rules=expected_rules) def test_filter_rules_admin(self): self._check_filter_rules() def test_filter_rules_instance_non_admin(self): db_context = nova_context.RequestContext(user_id='fake-user', project_id='fake-project') instance = fake_instance.fake_instance_obj(db_context) context = nova_context.RequestContext() expected_rules = [r.name for r in ia_policies.list_rules() if r.check_str == base_policies.RULE_ANY] self._check_filter_rules(context, instance, expected_rules) def test_filter_rules_instance_admin(self): db_context = nova_context.RequestContext(user_id='fake-user', project_id='fake-project') instance = fake_instance.fake_instance_obj(db_context) self._check_filter_rules(target=instance) def test_filter_rules_instance_owner(self): db_context = nova_context.RequestContext(user_id='fake-user', project_id='fake-project') instance = fake_instance.fake_instance_obj(db_context) rule_conditions = [base_policies.PROJECT_READER_OR_SYSTEM_READER] expected_rules = [r.name for r in ia_policies.list_rules() if r.check_str in rule_conditions] self._check_filter_rules(db_context, instance, expected_rules) @mock.patch.object(policy.config, 'parse_args') @mock.patch.object(policy, 'CONF') def _check_main(self, mock_CONF, mock_parse_args, category_name='check', expected_return_value=0): mock_CONF.category.name = category_name return_value = policy.main() self.assertEqual(expected_return_value, return_value) mock_CONF.register_cli_opts.assert_called_once_with( policy.cli_opts) mock_CONF.register_cli_opt.assert_called_once_with( policy.category_opt) @mock.patch.object(policy.version, 'version_string_with_package', return_value="x.x.x") def test_main_version(self, mock_version_string): self._check_main(category_name='version') self.assertEqual("x.x.x\n", self.output.getvalue()) @mock.patch.object(policy.cmd_common, 'print_bash_completion') def test_main_bash_completion(self, mock_print_bash): self._check_main(category_name='bash-completion') mock_print_bash.assert_called_once_with(policy.CATEGORIES) @mock.patch.object(policy.cmd_common, 'get_action_fn') def test_main(self, mock_get_action_fn): mock_fn = mock.Mock() mock_fn_args = [mock.sentinel.arg] mock_fn_kwargs = {'key': mock.sentinel.value} mock_get_action_fn.return_value = (mock_fn, mock_fn_args, mock_fn_kwargs) self._check_main(expected_return_value=mock_fn.return_value) mock_fn.assert_called_once_with(mock.sentinel.arg, key=mock.sentinel.value) @mock.patch.object(policy.cmd_common, 'get_action_fn') def test_main_error(self, mock_get_action_fn): mock_fn = mock.Mock(side_effect=Exception) mock_get_action_fn.return_value = (mock_fn, [], {}) self._check_main(expected_return_value=1) self.assertIn("error: ", self.output.getvalue()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/cmd/test_scheduler.py0000664000175000017500000000362400000000000022011 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.cmd import scheduler from nova import config from nova import test # required because otherwise oslo early parse_args dies @mock.patch.object(config, 'parse_args', new=lambda *args, **kwargs: None) class TestScheduler(test.NoDBTestCase): @mock.patch('nova.service.Service.create') @mock.patch('nova.service.serve') @mock.patch('nova.service.wait') @mock.patch('oslo_concurrency.processutils.get_worker_count', return_value=2) def test_workers_defaults(self, get_worker_count, mock_wait, mock_serve, service_create): scheduler.main() get_worker_count.assert_called_once_with() mock_serve.assert_called_once_with( service_create.return_value, workers=2) mock_wait.assert_called_once_with() @mock.patch('nova.service.Service.create') @mock.patch('nova.service.serve') @mock.patch('nova.service.wait') @mock.patch('oslo_concurrency.processutils.get_worker_count') def test_workers_override(self, get_worker_count, mock_wait, mock_serve, service_create): self.flags(workers=4, group='scheduler') scheduler.main() get_worker_count.assert_not_called() mock_serve.assert_called_once_with( service_create.return_value, workers=4) mock_wait.assert_called_once_with() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/cmd/test_status.py0000664000175000017500000006224400000000000021361 0ustar00zuulzuul00000000000000# Copyright 2016 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Unit tests for the nova-status CLI interfaces. """ # NOTE(cdent): Additional tests of nova-status may be found in # nova/tests/functional/test_nova_status.py. Those tests use the external # PlacementFixture, which is only available in functioanl tests. import fixtures import mock from six.moves import StringIO from keystoneauth1 import exceptions as ks_exc from keystoneauth1 import loading as keystone from keystoneauth1 import session from oslo_upgradecheck import upgradecheck from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import uuidutils from requests import models from nova.cmd import status import nova.conf from nova import context from nova import exception # NOTE(mriedem): We only use objects as a convenience to populate the database # in the tests, we don't use them in the actual CLI. 
from nova import objects from nova.objects import service from nova import policy from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.unit import policy_fixture CONF = nova.conf.CONF class TestNovaStatusMain(test.NoDBTestCase): """Tests for the basic nova-status command infrastructure.""" def setUp(self): super(TestNovaStatusMain, self).setUp() self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) @mock.patch.object(status.config, 'parse_args') @mock.patch.object(status, 'CONF') def _check_main(self, mock_CONF, mock_parse_args, category_name='check', expected_return_value=0): mock_CONF.category.name = category_name return_value = status.main() self.assertEqual(expected_return_value, return_value) mock_CONF.register_cli_opt.assert_called_once_with( status.category_opt) @mock.patch.object(status.version, 'version_string_with_package', return_value="x.x.x") def test_main_version(self, mock_version_string): self._check_main(category_name='version') self.assertEqual("x.x.x\n", self.output.getvalue()) @mock.patch.object(status.cmd_common, 'print_bash_completion') def test_main_bash_completion(self, mock_print_bash): self._check_main(category_name='bash-completion') mock_print_bash.assert_called_once_with(status.CATEGORIES) @mock.patch.object(status.cmd_common, 'get_action_fn') def test_main(self, mock_get_action_fn): mock_fn = mock.Mock() mock_fn_args = [mock.sentinel.arg] mock_fn_kwargs = {'key': mock.sentinel.value} mock_get_action_fn.return_value = (mock_fn, mock_fn_args, mock_fn_kwargs) self._check_main(expected_return_value=mock_fn.return_value) mock_fn.assert_called_once_with(mock.sentinel.arg, key=mock.sentinel.value) @mock.patch.object(status.cmd_common, 'get_action_fn') def test_main_error(self, mock_get_action_fn): mock_fn = mock.Mock(side_effect=Exception('wut')) mock_get_action_fn.return_value = (mock_fn, [], {}) self._check_main(expected_return_value=255) output = self.output.getvalue() self.assertIn('Error:', output) # assert the traceback is in the output self.assertIn('wut', output) class TestPlacementCheck(test.NoDBTestCase): """Tests the nova-status placement checks. These are done with mock as the ability to replicate all failure domains otherwise is quite complicated. Using a devstack environment you can validate each of these tests are matching reality. """ def setUp(self): super(TestPlacementCheck, self).setUp() self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) self.cmd = status.UpgradeCommands() @mock.patch.object(keystone, "load_auth_from_conf_options") def test_no_auth(self, auth): """Test failure when no credentials are specified. Replicate in devstack: start devstack with or without placement engine, remove the auth section from the [placement] block in nova.conf. 
""" auth.side_effect = ks_exc.MissingAuthPlugin() res = self.cmd._check_placement() self.assertEqual(upgradecheck.Code.FAILURE, res.code) self.assertIn('No credentials specified', res.details) @mock.patch.object(keystone, "load_auth_from_conf_options") @mock.patch.object(session.Session, 'request') def _test_placement_get_interface( self, expected_interface, mock_get, mock_auth): def fake_request(path, method, *a, **kw): self.assertEqual(mock.sentinel.path, path) self.assertEqual('GET', method) self.assertIn('endpoint_filter', kw) self.assertEqual(expected_interface, kw['endpoint_filter']['interface']) return mock.Mock(autospec=models.Response) mock_get.side_effect = fake_request self.cmd._placement_get(mock.sentinel.path) mock_auth.assert_called_once_with(status.CONF, 'placement') self.assertTrue(mock_get.called) def test_placement_get_interface_default(self): """Tests that we try internal, then public interface by default.""" self._test_placement_get_interface(['internal', 'public']) def test_placement_get_interface_internal(self): """Tests that "internal" is specified for interface when configured.""" self.flags(valid_interfaces='internal', group='placement') self._test_placement_get_interface(['internal']) @mock.patch.object(status.UpgradeCommands, "_placement_get") def test_invalid_auth(self, get): """Test failure when wrong credentials are specified or service user doesn't exist. Replicate in devstack: start devstack with or without placement engine, specify random credentials in auth section from the [placement] block in nova.conf. """ get.side_effect = ks_exc.Unauthorized() res = self.cmd._check_placement() self.assertEqual(upgradecheck.Code.FAILURE, res.code) self.assertIn('Placement service credentials do not work', res.details) @mock.patch.object(status.UpgradeCommands, "_placement_get") def test_invalid_endpoint(self, get): """Test failure when no endpoint exists. Replicate in devstack: start devstack without placement engine, but create valid placement service user and specify it in auth section of [placement] in nova.conf. """ get.side_effect = ks_exc.EndpointNotFound() res = self.cmd._check_placement() self.assertEqual(upgradecheck.Code.FAILURE, res.code) self.assertIn('Placement API endpoint not found', res.details) @mock.patch.object(status.UpgradeCommands, "_placement_get") def test_discovery_failure(self, get): """Test failure when discovery for placement URL failed. Replicate in devstack: start devstack with placement engine, create valid placement service user and specify it in auth section of [placement] in nova.conf. Stop keystone service. """ get.side_effect = ks_exc.DiscoveryFailure() res = self.cmd._check_placement() self.assertEqual(upgradecheck.Code.FAILURE, res.code) self.assertIn('Discovery for placement API URI failed.', res.details) @mock.patch.object(status.UpgradeCommands, "_placement_get") def test_down_endpoint(self, get): """Test failure when endpoint is down. Replicate in devstack: start devstack with placement engine, disable placement engine apache config. 
""" get.side_effect = ks_exc.NotFound() res = self.cmd._check_placement() self.assertEqual(upgradecheck.Code.FAILURE, res.code) self.assertIn('Placement API does not seem to be running', res.details) @mock.patch.object(status.UpgradeCommands, "_placement_get") def test_valid_version(self, get): get.return_value = { "versions": [ { "min_version": "1.0", "max_version": status.MIN_PLACEMENT_MICROVERSION, "id": "v1.0" } ] } res = self.cmd._check_placement() self.assertEqual(upgradecheck.Code.SUCCESS, res.code) @mock.patch.object(status.UpgradeCommands, "_placement_get") def test_version_comparison_does_not_use_floats(self, get): # NOTE(rpodolyaka): previously _check_placement() coerced the version # numbers to floats prior to comparison, that would lead to failures # in cases like float('1.10') < float('1.4'). As we require 1.4+ now, # the _check_placement() call below will assert that version comparison # continues to work correctly when Placement API versions 1.10 # (or newer) is released get.return_value = { "versions": [ { "min_version": "1.0", "max_version": status.MIN_PLACEMENT_MICROVERSION, "id": "v1.0" } ] } res = self.cmd._check_placement() self.assertEqual(upgradecheck.Code.SUCCESS, res.code) @mock.patch.object(status.UpgradeCommands, "_placement_get") def test_invalid_version(self, get): get.return_value = { "versions": [ { "min_version": "0.9", "max_version": "0.9", "id": "v1.0" } ] } res = self.cmd._check_placement() self.assertEqual(upgradecheck.Code.FAILURE, res.code) self.assertIn('Placement API version %s needed, you have 0.9' % status.MIN_PLACEMENT_MICROVERSION, res.details) class TestUpgradeCheckCellsV2(test.NoDBTestCase): """Tests for the nova-status upgrade cells v2 specific check.""" # We'll setup the API DB fixture ourselves and slowly build up the # contents until the check passes. USES_DB_SELF = True def setUp(self): super(TestUpgradeCheckCellsV2, self).setUp() self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) self.useFixture(nova_fixtures.Database(database='api')) self.cmd = status.UpgradeCommands() def test_check_no_cell_mappings(self): """The cells v2 check should fail because there are no cell mappings. """ result = self.cmd._check_cellsv2() self.assertEqual(upgradecheck.Code.FAILURE, result.code) self.assertIn('There needs to be at least two cell mappings', result.details) def _create_cell_mapping(self, uuid): cm = objects.CellMapping( context=context.get_admin_context(), uuid=uuid, name=uuid, transport_url='fake://%s/' % uuid, database_connection=uuid) cm.create() return cm def test_check_no_cell0_mapping(self): """We'll create two cell mappings but not have cell0 mapped yet.""" for i in range(2): uuid = getattr(uuids, str(i)) self._create_cell_mapping(uuid) result = self.cmd._check_cellsv2() self.assertEqual(upgradecheck.Code.FAILURE, result.code) self.assertIn('No cell0 mapping found', result.details) def test_check_no_host_mappings_with_computes(self): """Creates a cell0 and cell1 mapping but no host mappings and there are compute nodes in the cell database. 
""" self._setup_cells() cn = objects.ComputeNode( context=context.get_admin_context(), host='fake-host', vcpus=4, memory_mb=8 * 1024, local_gb=40, vcpus_used=2, memory_mb_used=2 * 1024, local_gb_used=10, hypervisor_type='fake', hypervisor_version=1, cpu_info='{"arch": "x86_64"}') cn.create() result = self.cmd._check_cellsv2() self.assertEqual(upgradecheck.Code.FAILURE, result.code) self.assertIn('No host mappings found but there are compute nodes', result.details) def test_check_no_host_mappings_no_computes(self): """Creates the cell0 and cell1 mappings but no host mappings and no compute nodes so it's assumed to be an initial install. """ self._setup_cells() result = self.cmd._check_cellsv2() self.assertEqual(upgradecheck.Code.SUCCESS, result.code) self.assertIn('No host mappings or compute nodes were found', result.details) def test_check_success(self): """Tests a successful cells v2 upgrade check.""" # create the cell0 and first cell mappings self._setup_cells() # Start a compute service and create a hostmapping for it svc = self.start_service('compute') cell = self.cell_mappings[test.CELL1_NAME] hm = objects.HostMapping(context=context.get_admin_context(), host=svc.host, cell_mapping=cell) hm.create() result = self.cmd._check_cellsv2() self.assertEqual(upgradecheck.Code.SUCCESS, result.code) self.assertIsNone(result.details) class TestUpgradeCheckIronicFlavorMigration(test.NoDBTestCase): """Tests for the nova-status upgrade check on ironic flavor migration.""" # We'll setup the database ourselves because we need to use cells fixtures # for multiple cell mappings. USES_DB_SELF = True # This will create three cell mappings: cell0, cell1 (default) and cell2 NUMBER_OF_CELLS = 2 def setUp(self): super(TestUpgradeCheckIronicFlavorMigration, self).setUp() self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) # We always need the API DB to be setup. self.useFixture(nova_fixtures.Database(database='api')) self.cmd = status.UpgradeCommands() @staticmethod def _create_node_in_cell(ctxt, cell, hypervisor_type, nodename): with context.target_cell(ctxt, cell) as cctxt: cn = objects.ComputeNode( context=cctxt, hypervisor_type=hypervisor_type, hypervisor_hostname=nodename, # The rest of these values are fakes. host=uuids.host, vcpus=4, memory_mb=8 * 1024, local_gb=40, vcpus_used=2, memory_mb_used=2 * 1024, local_gb_used=10, hypervisor_version=1, cpu_info='{"arch": "x86_64"}') cn.create() return cn @staticmethod def _create_instance_in_cell(ctxt, cell, node, is_deleted=False, flavor_migrated=False): with context.target_cell(ctxt, cell) as cctxt: inst = objects.Instance( context=cctxt, host=node.host, node=node.hypervisor_hostname, uuid=uuidutils.generate_uuid()) inst.create() if is_deleted: inst.destroy() else: # Create an embedded flavor for the instance. We don't create # this because we're in a cell context and flavors are global, # but we don't actually care about global flavors in this # check. extra_specs = {} if flavor_migrated: extra_specs['resources:CUSTOM_BAREMETAL_GOLD'] = '1' inst.flavor = objects.Flavor(cctxt, extra_specs=extra_specs) inst.old_flavor = None inst.new_flavor = None inst.save() return inst def test_fresh_install_no_cell_mappings(self): """Tests the scenario where we don't have any cell mappings (no cells v2 setup yet) so we don't know what state we're in and we return a warning. 
""" result = self.cmd._check_ironic_flavor_migration() self.assertEqual(upgradecheck.Code.WARNING, result.code) self.assertIn('Unable to determine ironic flavor migration without ' 'cell mappings', result.details) def test_fresh_install_no_computes(self): """Tests a fresh install scenario where we have two non-cell0 cells but no compute nodes in either cell yet, so there is nothing to do and we return success. """ self._setup_cells() result = self.cmd._check_ironic_flavor_migration() self.assertEqual(upgradecheck.Code.SUCCESS, result.code) def test_mixed_computes_deleted_ironic_instance(self): """Tests the scenario where we have a kvm compute node in one cell and an ironic compute node in another cell. The kvm compute node does not have any instances. The ironic compute node has an instance with the same hypervisor_hostname match but the instance is (soft) deleted so it's ignored. """ self._setup_cells() ctxt = context.get_admin_context() # Create the ironic compute node in cell1 ironic_node = self._create_node_in_cell( ctxt, self.cell_mappings['cell1'], 'ironic', uuids.node_uuid) # Create the kvm compute node in cell2 self._create_node_in_cell( ctxt, self.cell_mappings['cell2'], 'kvm', 'fake-kvm-host') # Now create an instance in cell1 which is on the ironic node but is # soft deleted (instance.deleted == instance.id). self._create_instance_in_cell( ctxt, self.cell_mappings['cell1'], ironic_node, is_deleted=True) result = self.cmd._check_ironic_flavor_migration() self.assertEqual(upgradecheck.Code.SUCCESS, result.code) def test_unmigrated_ironic_instances(self): """Tests a scenario where we have two cells with only ironic compute nodes. The first cell has one migrated and one unmigrated instance. The second cell has two unmigrated instances. The result is the check returns failure. """ self._setup_cells() ctxt = context.get_admin_context() # Create the ironic compute nodes in cell1 for x in range(2): cell = self.cell_mappings['cell1'] ironic_node = self._create_node_in_cell( ctxt, cell, 'ironic', getattr(uuids, 'cell1-node-%d' % x)) # Create an instance for this node. In cell1, we have one # migrated and one unmigrated instance. flavor_migrated = True if x % 2 else False self._create_instance_in_cell( ctxt, cell, ironic_node, flavor_migrated=flavor_migrated) # Create the ironic compute nodes in cell2 for x in range(2): cell = self.cell_mappings['cell2'] ironic_node = self._create_node_in_cell( ctxt, cell, 'ironic', getattr(uuids, 'cell2-node-%d' % x)) # Create an instance for this node. In cell2, all instances are # unmigrated. self._create_instance_in_cell( ctxt, cell, ironic_node, flavor_migrated=False) result = self.cmd._check_ironic_flavor_migration() self.assertEqual(upgradecheck.Code.FAILURE, result.code) # Check the message - it should point out cell1 has one unmigrated # instance and cell2 has two unmigrated instances. unmigrated_instance_count_by_cell = { self.cell_mappings['cell1'].uuid: 1, self.cell_mappings['cell2'].uuid: 2 } self.assertIn( 'There are (cell=x) number of unmigrated instances in each ' 'cell: %s.' 
% ' '.join('(%s=%s)' % ( cell_id, unmigrated_instance_count_by_cell[cell_id]) for cell_id in sorted(unmigrated_instance_count_by_cell.keys())), result.details) class TestUpgradeCheckCinderAPI(test.NoDBTestCase): def setUp(self): super(TestUpgradeCheckCinderAPI, self).setUp() self.cmd = status.UpgradeCommands() def test_cinder_not_configured(self): self.flags(auth_type=None, group='cinder') self.assertEqual(upgradecheck.Code.SUCCESS, self.cmd._check_cinder().code) @mock.patch('nova.volume.cinder.is_microversion_supported', side_effect=exception.CinderAPIVersionNotAvailable( version='3.44')) def test_microversion_not_available(self, mock_version_check): self.flags(auth_type='password', group='cinder') result = self.cmd._check_cinder() mock_version_check.assert_called_once() self.assertEqual(upgradecheck.Code.FAILURE, result.code) self.assertIn('Cinder API 3.44 or greater is required.', result.details) @mock.patch('nova.volume.cinder.is_microversion_supported', side_effect=test.TestingException('oops')) def test_unknown_error(self, mock_version_check): self.flags(auth_type='password', group='cinder') result = self.cmd._check_cinder() mock_version_check.assert_called_once() self.assertEqual(upgradecheck.Code.WARNING, result.code) self.assertIn('oops', result.details) @mock.patch('nova.volume.cinder.is_microversion_supported') def test_microversion_available(self, mock_version_check): self.flags(auth_type='password', group='cinder') result = self.cmd._check_cinder() mock_version_check.assert_called_once() self.assertEqual(upgradecheck.Code.SUCCESS, result.code) class TestUpgradeCheckPolicy(test.NoDBTestCase): new_default_status = upgradecheck.Code.WARNING def setUp(self): super(TestUpgradeCheckPolicy, self).setUp() self.cmd = status.UpgradeCommands() self.rule_name = "system_admin_api" def tearDown(self): super(TestUpgradeCheckPolicy, self).tearDown() # Check if policy is reset back after the upgrade check self.assertIsNone(policy._ENFORCER) def test_policy_rule_with_new_defaults(self): new_default = "role:admin and system_scope:all" rule = {self.rule_name: new_default} self.policy.set_rules(rule, overwrite=False) self.assertEqual(self.new_default_status, self.cmd._check_policy().code) def test_policy_rule_with_old_defaults(self): new_default = "is_admin:True" rule = {self.rule_name: new_default} self.policy.set_rules(rule, overwrite=False) self.assertEqual(upgradecheck.Code.SUCCESS, self.cmd._check_policy().code) def test_policy_rule_with_both_defaults(self): new_default = "(role:admin and system_scope:all) or is_admin:True" rule = {self.rule_name: new_default} self.policy.set_rules(rule, overwrite=False) self.assertEqual(upgradecheck.Code.SUCCESS, self.cmd._check_policy().code) def test_policy_checks_with_fresh_init_and_no_policy_override(self): self.policy = self.useFixture(policy_fixture.OverridePolicyFixture( rules_in_file={})) policy.reset() self.assertEqual(upgradecheck.Code.SUCCESS, self.cmd._check_policy().code) class TestUpgradeCheckPolicyEnableScope(TestUpgradeCheckPolicy): new_default_status = upgradecheck.Code.SUCCESS def setUp(self): super(TestUpgradeCheckPolicyEnableScope, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class TestUpgradeCheckOldCompute(test.NoDBTestCase): def setUp(self): super(TestUpgradeCheckOldCompute, self).setUp() self.cmd = status.UpgradeCommands() def test_no_compute(self): self.assertEqual( upgradecheck.Code.SUCCESS, self.cmd._check_old_computes().code) def test_only_new_compute(self): last_supported_version = 
service.SERVICE_VERSION_ALIASES[ service.OLDEST_SUPPORTED_SERVICE_VERSION] with mock.patch( "nova.objects.service.get_minimum_version_all_cells", return_value=last_supported_version): self.assertEqual( upgradecheck.Code.SUCCESS, self.cmd._check_old_computes().code) def test_old_compute(self): too_old = service.SERVICE_VERSION_ALIASES[ service.OLDEST_SUPPORTED_SERVICE_VERSION] - 1 with mock.patch( "nova.objects.service.get_minimum_version_all_cells", return_value=too_old): result = self.cmd._check_old_computes() self.assertEqual(upgradecheck.Code.WARNING, result.code) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5344682 nova-21.2.4/nova/tests/unit/compute/0000775000175000017500000000000000000000000017326 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/compute/__init__.py0000664000175000017500000000000000000000000021425 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/compute/eventlet_utils.py0000664000175000017500000000146700000000000022756 0ustar00zuulzuul00000000000000# Rackspace Hosting 2014 # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import eventlet class SyncPool(eventlet.GreenPool): """Synchronous pool for testing threaded code without adding sleep waits. """ def spawn_n(self, func, *args, **kwargs): func(*args, **kwargs) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/compute/fake_resource_tracker.py0000664000175000017500000000215200000000000024230 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
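# NOTE (editor's illustrative sketch, not part of the original tree): the
# SyncPool helper in eventlet_utils.py above overrides GreenPool.spawn_n() so
# that spawned work runs inline, letting tests assert on side effects without
# sleeps or joins.  The self-contained example below only demonstrates that
# idea; the _InlinePool/_demo_inline_pool names are hypothetical.
import eventlet


class _InlinePool(eventlet.GreenPool):
    """Run spawn_n() callables synchronously, like SyncPool above."""

    def spawn_n(self, func, *args, **kwargs):
        func(*args, **kwargs)


def _demo_inline_pool():
    results = []
    # With a plain GreenPool the append might still be pending at this point;
    # the inline pool guarantees it has already run.
    _InlinePool().spawn_n(results.append, 'done')
    assert results == ['done']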
import fixtures

from nova.compute import resource_tracker


class FakeResourceTracker(resource_tracker.ResourceTracker):
    """Version without a DB requirement."""

    def _update(self, context, compute_node, startup=False):
        pass


class RTMockMixin(object):
    def _mock_rt(self, **kwargs):
        if 'spec_set' in kwargs:
            kwargs.update({'autospec': False})
        return self.useFixture(fixtures.MockPatchObject(
            self.compute, 'rt', **kwargs)).mock
././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5344682 nova-21.2.4/nova/tests/unit/compute/monitors/0000775000175000017500000000000000000000000021200 5ustar00zuulzuul00000000000000
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/compute/monitors/__init__.py0000664000175000017500000000000000000000000023277 0ustar00zuulzuul00000000000000
././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5344682 nova-21.2.4/nova/tests/unit/compute/monitors/cpu/0000775000175000017500000000000000000000000021767 5ustar00zuulzuul00000000000000
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/compute/monitors/cpu/__init__.py0000664000175000017500000000000000000000000024066 0ustar00zuulzuul00000000000000
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/compute/monitors/cpu/test_virt_driver.py0000664000175000017500000000660200000000000025743 0ustar00zuulzuul00000000000000
# Copyright 2013 Intel Corporation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
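# NOTE (editor's illustrative sketch, not part of the original tree): the
# RTMockMixin._mock_rt() helper above swaps a compute manager's resource
# tracker attribute ('rt') for a mock via a fixture, so the patch is undone on
# test cleanup.  The hypothetical mini test case below shows the intended call
# pattern; _ComputeStub and _RTMockSketch are illustrative names only.
import fixtures


class _ComputeStub(object):
    rt = None  # stands in for ComputeManager.rt (the resource tracker)


class _RTMockSketch(fixtures.TestWithFixtures):
    def test_rt_is_replaced_and_restored(self):
        self.compute = _ComputeStub()
        mocked_rt = self.useFixture(
            fixtures.MockPatchObject(self.compute, 'rt')).mock
        # While the fixture is active, the tracker attribute is the mock;
        # cleanup puts the original value back automatically.
        self.assertIs(mocked_rt, self.compute.rt)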
"""Tests for Compute Driver CPU resource monitor.""" import mock from nova.compute.monitors.cpu import virt_driver from nova import objects from nova import test class FakeDriver(object): def get_host_cpu_stats(self): return {'kernel': 5664160000000, 'idle': 1592705190000000, 'frequency': 800, 'user': 26728850000000, 'iowait': 6121490000000} class FakeResourceTracker(object): driver = FakeDriver() class VirtDriverCPUMonitorTestCase(test.NoDBTestCase): def test_get_metric_names(self): monitor = virt_driver.Monitor(FakeResourceTracker()) names = monitor.get_metric_names() self.assertEqual(10, len(names)) self.assertIn("cpu.frequency", names) self.assertIn("cpu.user.time", names) self.assertIn("cpu.kernel.time", names) self.assertIn("cpu.idle.time", names) self.assertIn("cpu.iowait.time", names) self.assertIn("cpu.user.percent", names) self.assertIn("cpu.kernel.percent", names) self.assertIn("cpu.idle.percent", names) self.assertIn("cpu.iowait.percent", names) self.assertIn("cpu.percent", names) def test_populate_metrics(self): metrics = objects.MonitorMetricList() monitor = virt_driver.Monitor(FakeResourceTracker()) monitor.populate_metrics(metrics) names = monitor.get_metric_names() for metric in metrics.objects: self.assertIn(metric.name, names) # Some conversion to a dict to ease testing... metrics = {m.name: m.value for m in metrics.objects} self.assertEqual(metrics["cpu.frequency"], 800) self.assertEqual(metrics["cpu.user.time"], 26728850000000) self.assertEqual(metrics["cpu.kernel.time"], 5664160000000) self.assertEqual(metrics["cpu.idle.time"], 1592705190000000) self.assertEqual(metrics["cpu.iowait.time"], 6121490000000) self.assertEqual(metrics["cpu.user.percent"], 1) self.assertEqual(metrics["cpu.kernel.percent"], 0) self.assertEqual(metrics["cpu.idle.percent"], 97) self.assertEqual(metrics["cpu.iowait.percent"], 0) self.assertEqual(metrics["cpu.percent"], 2) def test_ensure_single_sampling(self): # We want to ensure that the virt driver's get_host_cpu_stats() # is only ever called once, otherwise values for monitor metrics # might be illogical -- e.g. pct cpu times for user/system/idle # may add up to more than 100. metrics = objects.MonitorMetricList() monitor = virt_driver.Monitor(FakeResourceTracker()) with mock.patch.object(FakeDriver, 'get_host_cpu_stats') as mocked: monitor.populate_metrics(metrics) mocked.assert_called_once_with() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/compute/monitors/test_monitors.py0000664000175000017500000000521700000000000024470 0ustar00zuulzuul00000000000000# Copyright 2013 Intel Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Tests for resource monitors.""" import mock from nova.compute import monitors from nova import test class MonitorsTestCase(test.NoDBTestCase): """Test case for monitors.""" @mock.patch('stevedore.enabled.EnabledExtensionManager') def test_check_enabled_monitor(self, _mock_ext_manager): class FakeExt(object): def __init__(self, ept, name): self.entry_point_target = ept self.name = name # We check to ensure only one CPU monitor is loaded... self.flags(compute_monitors=['mon1', 'mon2']) handler = monitors.MonitorHandler(None) ext_cpu_mon1 = FakeExt('nova.compute.monitors.cpu.virt_driver:Monitor', 'mon1') ext_cpu_mon2 = FakeExt('nova.compute.monitors.cpu.virt_driver:Monitor', 'mon2') self.assertTrue(handler.check_enabled_monitor(ext_cpu_mon1)) self.assertFalse(handler.check_enabled_monitor(ext_cpu_mon2)) # We check to ensure that the auto-prefixing of the CPU # namespace is handled properly... self.flags(compute_monitors=['cpu.mon1', 'mon2']) handler = monitors.MonitorHandler(None) ext_cpu_mon1 = FakeExt('nova.compute.monitors.cpu.virt_driver:Monitor', 'mon1') ext_cpu_mon2 = FakeExt('nova.compute.monitors.cpu.virt_driver:Monitor', 'mon2') self.assertTrue(handler.check_enabled_monitor(ext_cpu_mon1)) self.assertFalse(handler.check_enabled_monitor(ext_cpu_mon2)) # Run the check but with no monitors enabled to make sure we don't log. self.flags(compute_monitors=[]) handler = monitors.MonitorHandler(None) ext_cpu_mon1 = FakeExt('nova.compute.monitors.cpu.virt_driver:Monitor', 'mon1') with mock.patch.object(monitors.LOG, 'warning') as mock_warning: self.assertFalse(handler.check_enabled_monitor(ext_cpu_mon1)) mock_warning.assert_not_called() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/compute/test_claims.py0000664000175000017500000004064000000000000022213 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Tests for resource tracker claims.""" import uuid import mock from nova.compute import claims from nova import context from nova import exception from nova import objects from nova.pci import manager as pci_manager from nova import test from nova.tests.unit import fake_instance from nova.tests.unit.pci import fakes as pci_fakes _NODENAME = 'fake-node' class FakeResourceHandler(object): test_called = False usage_is_instance = False def test_resources(self, usage, limits): self.test_called = True self.usage_is_itype = usage.get('name') == 'fakeitype' return [] class DummyTracker(object): icalled = False rcalled = False def __init__(self): self.new_pci_tracker() def abort_instance_claim(self, *args, **kwargs): self.icalled = True def drop_move_claim(self, *args, **kwargs): self.rcalled = True def new_pci_tracker(self): ctxt = context.RequestContext('testuser', 'testproject') self.pci_tracker = pci_manager.PciDevTracker(ctxt) class ClaimTestCase(test.NoDBTestCase): def setUp(self): super(ClaimTestCase, self).setUp() self.context = context.RequestContext('fake-user', 'fake-project') self.instance = None self.compute_node = self._fake_compute_node() self.tracker = DummyTracker() self.empty_requests = objects.InstancePCIRequests( requests=[] ) def _claim(self, limits=None, requests=None, **kwargs): numa_topology = kwargs.pop('numa_topology', None) instance = self._fake_instance(**kwargs) instance.flavor = self._fake_instance_type(**kwargs) if numa_topology: db_numa_topology = { 'id': 1, 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': None, 'instance_uuid': instance.uuid, 'numa_topology': numa_topology._to_json(), 'pci_requests': (requests or self.empty_requests).to_json() } else: db_numa_topology = None requests = requests or self.empty_requests @mock.patch('nova.db.api.instance_extra_get_by_instance_uuid', return_value=db_numa_topology) def get_claim(mock_extra_get): return claims.Claim(self.context, instance, _NODENAME, self.tracker, self.compute_node, requests, limits=limits) return get_claim() def _fake_instance(self, **kwargs): instance = { 'uuid': str(uuid.uuid1()), 'memory_mb': 1024, 'root_gb': 10, 'ephemeral_gb': 5, 'vcpus': 1, 'system_metadata': {}, 'numa_topology': None } instance.update(**kwargs) return fake_instance.fake_instance_obj(self.context, **instance) def _fake_instance_type(self, **kwargs): instance_type = { 'id': 1, 'name': 'fakeitype', 'memory_mb': 1024, 'vcpus': 1, 'root_gb': 10, 'ephemeral_gb': 5 } instance_type.update(**kwargs) return objects.Flavor(**instance_type) def _fake_compute_node(self, values=None): compute_node = { 'memory_mb': 2048, 'memory_mb_used': 0, 'free_ram_mb': 2048, 'local_gb': 20, 'local_gb_used': 0, 'free_disk_gb': 20, 'vcpus': 2, 'vcpus_used': 0, 'numa_topology': objects.NUMATopology(cells=[ objects.NUMACell( id=1, cpuset=set([1, 2]), pcpuset=set(), memory=512, memory_usage=0, cpu_usage=0, mempages=[], siblings=[set([1]), set([2])], pinned_cpus=set()), objects.NUMACell( id=2, cpuset=set([3, 4]), pcpuset=set(), memory=512, memory_usage=0, cpu_usage=0, mempages=[], siblings=[set([3]), set([4])], pinned_cpus=set())] )._to_json() } if values: compute_node.update(values) return objects.ComputeNode(**compute_node) @mock.patch('nova.pci.stats.PciDeviceStats.support_requests', return_value=True) def test_pci_pass(self, mock_pci_supports_requests): request = objects.InstancePCIRequest(count=1, spec=[{'vendor_id': 'v', 'product_id': 'p'}]) requests = objects.InstancePCIRequests(requests=[request]) self._claim(requests=requests) 
mock_pci_supports_requests.assert_called_once_with([request]) @mock.patch('nova.pci.stats.PciDeviceStats.support_requests', return_value=False) def test_pci_fail(self, mock_pci_supports_requests): request = objects.InstancePCIRequest(count=1, spec=[{'vendor_id': 'v', 'product_id': 'p'}]) requests = objects.InstancePCIRequests(requests=[request]) self.assertRaisesRegex( exception.ComputeResourcesUnavailable, 'Claim pci failed.', self._claim, requests=requests) mock_pci_supports_requests.assert_called_once_with([request]) @mock.patch('nova.pci.stats.PciDeviceStats.support_requests') def test_pci_pass_no_requests(self, mock_pci_supports_requests): self._claim() self.assertFalse(mock_pci_supports_requests.called) def test_numa_topology_no_limit(self): huge_instance = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( id=1, cpuset=set([1, 2]), memory=512)]) self._claim(numa_topology=huge_instance) def test_numa_topology_fails(self): huge_instance = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( id=1, cpuset=set([1, 2, 3, 4, 5]), memory=2048)]) limit_topo = objects.NUMATopologyLimits( cpu_allocation_ratio=1, ram_allocation_ratio=1) self.assertRaises(exception.ComputeResourcesUnavailable, self._claim, limits={'numa_topology': limit_topo}, numa_topology=huge_instance) def test_numa_topology_passes(self): huge_instance = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( id=1, cpuset=set([1, 2]), memory=512)]) limit_topo = objects.NUMATopologyLimits( cpu_allocation_ratio=1, ram_allocation_ratio=1) self._claim(limits={'numa_topology': limit_topo}, numa_topology=huge_instance) @pci_fakes.patch_pci_whitelist @mock.patch('nova.objects.InstancePCIRequests.get_by_instance') def test_numa_topology_with_pci(self, mock_get_by_instance): dev_dict = { 'compute_node_id': 1, 'address': 'a', 'product_id': 'p', 'vendor_id': 'v', 'numa_node': 1, 'dev_type': 'type-PCI', 'parent_addr': 'a1', 'status': 'available'} self.tracker.new_pci_tracker() self.tracker.pci_tracker._set_hvdevs([dev_dict]) request = objects.InstancePCIRequest(count=1, spec=[{'vendor_id': 'v', 'product_id': 'p'}]) requests = objects.InstancePCIRequests(requests=[request]) mock_get_by_instance.return_value = requests huge_instance = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( id=1, cpuset=set([1, 2]), memory=512)]) self._claim(requests=requests, numa_topology=huge_instance) @pci_fakes.patch_pci_whitelist @mock.patch('nova.objects.InstancePCIRequests.get_by_instance') def test_numa_topology_with_pci_fail(self, mock_get_by_instance): dev_dict = { 'compute_node_id': 1, 'address': 'a', 'product_id': 'p', 'vendor_id': 'v', 'numa_node': 1, 'dev_type': 'type-PCI', 'parent_addr': 'a1', 'status': 'available'} dev_dict2 = { 'compute_node_id': 1, 'address': 'a', 'product_id': 'p', 'vendor_id': 'v', 'numa_node': 2, 'dev_type': 'type-PCI', 'parent_addr': 'a1', 'status': 'available'} self.tracker.new_pci_tracker() self.tracker.pci_tracker._set_hvdevs([dev_dict, dev_dict2]) request = objects.InstancePCIRequest(count=2, spec=[{'vendor_id': 'v', 'product_id': 'p'}]) requests = objects.InstancePCIRequests(requests=[request]) mock_get_by_instance.return_value = requests huge_instance = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( id=1, cpuset=set([1, 2]), memory=512)]) self.assertRaises(exception.ComputeResourcesUnavailable, self._claim, requests=requests, numa_topology=huge_instance) @pci_fakes.patch_pci_whitelist @mock.patch('nova.objects.InstancePCIRequests.get_by_instance') def 
test_numa_topology_with_pci_no_numa_info(self, mock_get_by_instance): dev_dict = { 'compute_node_id': 1, 'address': 'a', 'product_id': 'p', 'vendor_id': 'v', 'numa_node': None, 'dev_type': 'type-PCI', 'parent_addr': 'a1', 'status': 'available'} self.tracker.new_pci_tracker() self.tracker.pci_tracker._set_hvdevs([dev_dict]) request = objects.InstancePCIRequest(count=1, spec=[{'vendor_id': 'v', 'product_id': 'p'}]) requests = objects.InstancePCIRequests(requests=[request]) mock_get_by_instance.return_value = requests huge_instance = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( id=1, cpuset=set([1, 2]), memory=512)]) self._claim(requests=requests, numa_topology=huge_instance) def test_abort(self): claim = self._abort() self.assertTrue(claim.tracker.icalled) def _abort(self): claim = None try: with self._claim(memory_mb=4096) as claim: raise test.TestingException("abort") except test.TestingException: pass return claim class MoveClaimTestCase(ClaimTestCase): def _claim(self, limits=None, requests=None, image_meta=None, **kwargs): instance_type = self._fake_instance_type(**kwargs) numa_topology = kwargs.pop('numa_topology', None) image_meta = image_meta or {} self.instance = self._fake_instance(**kwargs) self.instance.numa_topology = None if numa_topology: self.db_numa_topology = { 'id': 1, 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': None, 'instance_uuid': self.instance.uuid, 'numa_topology': numa_topology._to_json(), 'pci_requests': (requests or self.empty_requests).to_json() } else: self.db_numa_topology = None requests = requests or self.empty_requests @mock.patch('nova.virt.hardware.numa_get_constraints', return_value=numa_topology) @mock.patch('nova.db.api.instance_extra_get_by_instance_uuid', return_value=self.db_numa_topology) def get_claim(mock_extra_get, mock_numa_get): return claims.MoveClaim( self.context, self.instance, _NODENAME, instance_type, image_meta, self.tracker, self.compute_node, requests, objects.Migration(migration_type='migration'), limits=limits) return get_claim() @mock.patch('nova.objects.Instance.drop_migration_context') def test_abort(self, mock_drop): claim = self._abort() self.assertTrue(claim.tracker.rcalled) mock_drop.assert_called_once_with() def test_image_meta(self): claim = self._claim() self.assertIsInstance(claim.image_meta, objects.ImageMeta) def test_image_meta_object_passed(self): image_meta = objects.ImageMeta() claim = self._claim(image_meta=image_meta) self.assertIsInstance(claim.image_meta, objects.ImageMeta) class LiveMigrationClaimTestCase(ClaimTestCase): def test_live_migration_claim_bad_pci_request(self): instance_type = self._fake_instance_type() instance = self._fake_instance() instance.numa_topology = None self.assertRaisesRegex( exception.ComputeResourcesUnavailable, 'PCI requests are not supported', claims.MoveClaim, self.context, instance, _NODENAME, instance_type, {}, self.tracker, self.compute_node, objects.InstancePCIRequests(requests=[ objects.InstancePCIRequest(alias_name='fake-alias')]), objects.Migration(migration_type='live-migration'), None) def test_live_migration_page_size(self): instance_type = self._fake_instance_type() instance = self._fake_instance() instance.numa_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell(id=1, cpuset=set([1, 2]), memory=512, pagesize=2)]) claimed_numa_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell(id=1, cpuset=set([1, 2]), memory=512, pagesize=1)]) with 
mock.patch('nova.virt.hardware.numa_fit_instance_to_host', return_value=claimed_numa_topology): self.assertRaisesRegex( exception.ComputeResourcesUnavailable, 'Requested page size is different', claims.MoveClaim, self.context, instance, _NODENAME, instance_type, {}, self.tracker, self.compute_node, self.empty_requests, objects.Migration(migration_type='live-migration'), None) def test_claim_fails_page_size_not_called(self): instance_type = self._fake_instance_type() instance = self._fake_instance() # This topology cannot fit in self.compute_node # (see _fake_compute_node()) numa_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell(id=1, cpuset=set([1, 2, 3]), memory=1024)]) with test.nested( mock.patch('nova.virt.hardware.numa_get_constraints', return_value=numa_topology), mock.patch( 'nova.compute.claims.MoveClaim._test_live_migration_page_size' )) as (mock_test_numa, mock_test_page_size): self.assertRaisesRegex( exception.ComputeResourcesUnavailable, 'Requested instance NUMA topology', claims.MoveClaim, self.context, instance, _NODENAME, instance_type, {}, self.tracker, self.compute_node, self.empty_requests, objects.Migration(migration_type='live-migration'), None) mock_test_page_size.assert_not_called() def test_live_migration_no_instance_numa_topology(self): instance_type = self._fake_instance_type() instance = self._fake_instance() instance.numa_topology = None claims.MoveClaim( self.context, instance, _NODENAME, instance_type, {}, self.tracker, self.compute_node, self.empty_requests, objects.Migration(migration_type='live-migration'), None) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/compute/test_compute.py0000664000175000017500000230453300000000000022425 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2011 Piston Cloud Computing, Inc. # All Rights Reserved. # Copyright 2013 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
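# NOTE (editor's illustrative sketch, not part of the original tree): the
# ClaimTestCase.test_abort/_abort pattern above relies on a claim acting as a
# context manager that rolls itself back (DummyTracker.abort_instance_claim)
# when the guarded block raises.  The toy claim below shows the shape of that
# contract; it is self-contained and unrelated to nova.compute.claims.
class _SketchClaim(object):
    def __init__(self, on_abort):
        self._on_abort = on_abort

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type is not None:
            self._on_abort()
        return False  # never swallow the exception


def _sketch_claim_abort():
    aborted = []
    try:
        with _SketchClaim(lambda: aborted.append(True)):
            raise RuntimeError('abort')
    except RuntimeError:
        pass
    # The abort callback ran because the guarded block raised.
    assert aborted == [True]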
"""Tests for compute service.""" import datetime import fixtures as std_fixtures from itertools import chain import operator import sys import time import ddt from castellan import key_manager import mock from neutronclient.common import exceptions as neutron_exceptions from oslo_log import log as logging import oslo_messaging as messaging from oslo_serialization import base64 from oslo_serialization import jsonutils from oslo_utils import fixture as utils_fixture from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from oslo_utils import units from oslo_utils import uuidutils import six import testtools from testtools import matchers as testtools_matchers import nova from nova import availability_zones from nova import block_device from nova.compute import api as compute from nova.compute import flavors from nova.compute import instance_actions from nova.compute import manager as compute_manager from nova.compute import power_state from nova.compute import rpcapi as compute_rpcapi from nova.compute import task_states from nova.compute import utils as compute_utils from nova.compute import vm_states import nova.conf from nova.console import type as ctype from nova import context from nova.db import api as db from nova import exception from nova.image import glance as image_api from nova.network import model as network_model from nova import objects from nova.objects import block_device as block_device_obj from nova.objects import fields as obj_fields from nova.objects import instance as instance_obj from nova.objects import migrate_data as migrate_data_obj from nova.policies import servers as servers_policy from nova import test from nova.tests import fixtures from nova.tests.unit.compute import eventlet_utils from nova.tests.unit.compute import fake_resource_tracker from nova.tests.unit import fake_block_device from nova.tests.unit import fake_diagnostics from nova.tests.unit import fake_instance from nova.tests.unit import fake_network from nova.tests.unit import fake_network_cache_model from nova.tests.unit import fake_notifier from nova.tests.unit import fake_server_actions from nova.tests.unit.image import fake as fake_image from nova.tests.unit import matchers from nova.tests.unit.objects import test_diagnostics from nova.tests.unit.objects import test_flavor from nova.tests.unit.objects import test_instance_numa from nova.tests.unit.objects import test_migration from nova.tests.unit import utils as test_utils from nova import utils from nova.virt import block_device as driver_block_device from nova.virt.block_device import DriverVolumeBlockDevice as driver_bdm_volume from nova.virt import event from nova.virt import fake from nova.virt import hardware from nova.volume import cinder LOG = logging.getLogger(__name__) CONF = nova.conf.CONF FAKE_IMAGE_REF = uuids.image_ref NODENAME = 'fakenode1' NODENAME2 = 'fakenode2' def fake_not_implemented(*args, **kwargs): raise NotImplementedError() def get_primitive_instance_by_uuid(context, instance_uuid): """Helper method to get an instance and then convert it to a primitive form using jsonutils. """ instance = db.instance_get_by_uuid(context, instance_uuid) return jsonutils.to_primitive(instance) def unify_instance(instance): """Return a dict-like instance for both object-initiated and model-initiated sources that can reasonably be compared. 
""" newdict = dict() for k, v in instance.items(): if isinstance(v, datetime.datetime): # NOTE(danms): DB models and Instance objects have different # timezone expectations v = v.replace(tzinfo=None) elif k == 'fault': # NOTE(danms): DB models don't have 'fault' continue elif k == 'pci_devices': # NOTE(yonlig.he) pci devices need lazy loading # fake db does not support it yet. continue newdict[k] = v return newdict class FakeComputeTaskAPI(object): def resize_instance(self, ctxt, instance, scheduler_hint, flavor, reservations=None, clean_shutdown=True, request_spec=None, host_list=None): pass class BaseTestCase(test.TestCase): def setUp(self): super(BaseTestCase, self).setUp() fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) self.compute = compute_manager.ComputeManager() # NOTE(gibi): this is a hack to make the fake virt driver use the nodes # needed for these tests. self.compute.driver._set_nodes([NODENAME, NODENAME2]) # execute power syncing synchronously for testing: self.compute._sync_power_pool = eventlet_utils.SyncPool() # override tracker with a version that doesn't need the database: fake_rt = fake_resource_tracker.FakeResourceTracker(self.compute.host, self.compute.driver) self.compute.rt = fake_rt def fake_get_compute_nodes_in_db(self, context, *args, **kwargs): fake_compute_nodes = [{'local_gb': 259, 'uuid': uuids.fake_compute_node, 'vcpus_used': 0, 'deleted': 0, 'hypervisor_type': 'powervm', 'created_at': '2013-04-01T00:27:06.000000', 'local_gb_used': 0, 'updated_at': '2013-04-03T00:35:41.000000', 'hypervisor_hostname': 'fake_phyp1', 'memory_mb_used': 512, 'memory_mb': 131072, 'current_workload': 0, 'vcpus': 16, 'mapped': 1, 'cpu_info': 'ppc64,powervm,3940', 'running_vms': 0, 'free_disk_gb': 259, 'service_id': 7, 'hypervisor_version': 7, 'disk_available_least': 265856, 'deleted_at': None, 'free_ram_mb': 130560, 'metrics': '', 'stats': '', 'numa_topology': '', 'id': 2, 'host': 'fake_phyp1', 'cpu_allocation_ratio': 16.0, 'ram_allocation_ratio': 1.5, 'disk_allocation_ratio': 1.0, 'host_ip': '127.0.0.1'}] return [objects.ComputeNode._from_db_object( context, objects.ComputeNode(), cn) for cn in fake_compute_nodes] def fake_compute_node_delete(context, compute_node_id): self.assertEqual(2, compute_node_id) self.stub_out( 'nova.compute.manager.ComputeManager._get_compute_nodes_in_db', fake_get_compute_nodes_in_db) self.stub_out('nova.db.api.compute_node_delete', fake_compute_node_delete) self.compute.update_available_resource( context.get_admin_context()) self.user_id = 'fake' self.project_id = 'fake' self.context = context.RequestContext(self.user_id, self.project_id) def fake_show(meh, context, id, **kwargs): if id: return {'id': id, 'name': 'fake_name', 'status': 'active', 'properties': {'kernel_id': uuids.kernel_id, 'ramdisk_id': uuids.ramdisk_id, 'something_else': 'meow'}} else: raise exception.ImageNotFound(image_id=id) fake_image.stub_out_image_service(self) self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', fake_show) fake_network.set_stub_network_methods(self) fake_server_actions.stub_out_action_events(self) def fake_get_nw_info(cls, ctxt, instance, *args, **kwargs): return network_model.NetworkInfo() self.stub_out('nova.network.neutron.API.get_instance_nw_info', fake_get_nw_info) self.stub_out('nova.network.neutron.API.migrate_instance_start', lambda *args, **kwargs: None) self.useFixture(fixtures.NeutronFixture(self)) self.compute_api = compute.API() # Just to make long lines short self.rt = self.compute.rt self.mock_get_allocations = 
self.useFixture( fixtures.fixtures.MockPatch( 'nova.scheduler.client.report.SchedulerReportClient.' 'get_allocations_for_consumer')).mock self.mock_get_allocations.return_value = {} self.mock_get_allocs = self.useFixture( fixtures.fixtures.MockPatch( 'nova.scheduler.client.report.SchedulerReportClient.' 'get_allocs_for_consumer')).mock self.mock_get_allocs.return_value = {'allocations': {}} def tearDown(self): ctxt = context.get_admin_context() fake_image.FakeImageService_reset() instances = db.instance_get_all(ctxt) for instance in instances: db.instance_destroy(ctxt, instance['uuid']) super(BaseTestCase, self).tearDown() def _fake_instance(self, updates): return fake_instance.fake_instance_obj(None, **updates) def _create_fake_instance_obj(self, params=None, type_name='m1.tiny', services=False, ctxt=None): ctxt = ctxt or self.context flavor = objects.Flavor.get_by_name(ctxt, type_name) inst = objects.Instance(context=ctxt) inst.vm_state = vm_states.ACTIVE inst.task_state = None inst.power_state = power_state.RUNNING inst.image_ref = FAKE_IMAGE_REF inst.reservation_id = 'r-fakeres' inst.user_id = self.user_id inst.project_id = self.project_id inst.host = self.compute.host inst.node = NODENAME inst.instance_type_id = flavor.id inst.ami_launch_index = 0 inst.memory_mb = 0 inst.vcpus = 0 inst.root_gb = 0 inst.ephemeral_gb = 0 inst.architecture = obj_fields.Architecture.X86_64 inst.os_type = 'Linux' inst.system_metadata = ( params and params.get('system_metadata', {}) or {}) inst.locked = False inst.created_at = timeutils.utcnow() inst.updated_at = timeutils.utcnow() inst.launched_at = timeutils.utcnow() inst.security_groups = objects.SecurityGroupList(objects=[]) inst.keypairs = objects.KeyPairList(objects=[]) inst.flavor = flavor inst.old_flavor = None inst.new_flavor = None if params: inst.flavor.update(params.pop('flavor', {})) inst.update(params) if services: _create_service_entries(self.context.elevated(), [['fake_zone', [inst.host]]]) cell1 = self.cell_mappings[test.CELL1_NAME] with context.target_cell(ctxt, cell1) as cctxt: inst._context = cctxt inst.create() # Create an instance mapping in cell1 so the API can get the instance. 
inst_map = objects.InstanceMapping( ctxt, instance_uuid=inst.uuid, project_id=inst.project_id, cell_mapping=cell1) inst_map.create() return inst def _init_aggregate_with_host(self, aggr, aggr_name, zone, host): if not aggr: aggr = self.api.create_aggregate(self.context, aggr_name, zone) aggr = self.api.add_host_to_aggregate(self.context, aggr.id, host) return aggr class ComputeVolumeTestCase(BaseTestCase): def setUp(self): super(ComputeVolumeTestCase, self).setUp() self.fetched_attempts = 0 self.instance = { 'id': 'fake', 'uuid': uuids.instance, 'name': 'fake', 'root_device_name': '/dev/vda', } self.fake_volume = fake_block_device.FakeDbBlockDeviceDict( {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': uuids.volume_id, 'device_name': '/dev/vdb', 'connection_info': jsonutils.dumps({})}) self.instance_object = objects.Instance._from_db_object( self.context, objects.Instance(), fake_instance.fake_db_instance()) self.stub_out('nova.volume.cinder.API.get', lambda *a, **kw: {'id': uuids.volume_id, 'size': 4, 'attach_status': 'detached'}) self.stub_out('nova.virt.fake.FakeDriver.get_volume_connector', lambda *a, **kw: None) self.stub_out('nova.volume.cinder.API.initialize_connection', lambda *a, **kw: {}) self.stub_out('nova.volume.cinder.API.terminate_connection', lambda *a, **kw: None) self.stub_out('nova.volume.cinder.API.attach', lambda *a, **kw: None) self.stub_out('nova.volume.cinder.API.detach', lambda *a, **kw: None) self.stub_out('nova.volume.cinder.API.check_availability_zone', lambda *a, **kw: None) self.stub_out('eventlet.greenthread.sleep', lambda *a, **kw: None) def store_cinfo(context, *args, **kwargs): self.cinfo = jsonutils.loads(args[-1].get('connection_info')) return self.fake_volume self.stub_out('nova.db.api.block_device_mapping_create', store_cinfo) self.stub_out('nova.db.api.block_device_mapping_update', store_cinfo) @mock.patch.object(compute_utils, 'EventReporter') def test_attach_volume_serial(self, mock_event): fake_bdm = objects.BlockDeviceMapping(context=self.context, **self.fake_volume) with (mock.patch.object(cinder.API, 'get_volume_encryption_metadata', return_value={})): instance = self._create_fake_instance_obj() self.compute.attach_volume(self.context, instance, bdm=fake_bdm) self.assertEqual(self.cinfo.get('serial'), uuids.volume_id) mock_event.assert_called_once_with( self.context, 'compute_attach_volume', CONF.host, instance.uuid, graceful_exit=False) @mock.patch.object(compute_utils, 'EventReporter') @mock.patch('nova.context.RequestContext.elevated') @mock.patch('nova.compute.utils.notify_about_volume_attach_detach') def test_attach_volume_raises(self, mock_notify, mock_elevate, mock_event): mock_elevate.return_value = self.context fake_bdm = objects.BlockDeviceMapping(**self.fake_volume) instance = self._create_fake_instance_obj() expected_exception = test.TestingException() def fake_attach(*args, **kwargs): raise expected_exception with test.nested( mock.patch.object(driver_block_device.DriverVolumeBlockDevice, 'attach'), mock.patch.object(cinder.API, 'unreserve_volume'), mock.patch.object(objects.BlockDeviceMapping, 'destroy') ) as (mock_attach, mock_unreserve, mock_destroy): mock_attach.side_effect = fake_attach self.assertRaises( test.TestingException, self.compute.attach_volume, self.context, instance, fake_bdm) self.assertTrue(mock_unreserve.called) self.assertTrue(mock_destroy.called) mock_notify.assert_has_calls([ mock.call(self.context, instance, 'fake-mini', action='volume_attach', phase='start', volume_id=uuids.volume_id), 
mock.call(self.context, instance, 'fake-mini', action='volume_attach', phase='error', volume_id=uuids.volume_id, exception=expected_exception, tb=mock.ANY), ]) mock_event.assert_called_once_with( self.context, 'compute_attach_volume', CONF.host, instance.uuid, graceful_exit=False) @mock.patch('nova.context.RequestContext.elevated') @mock.patch('nova.compute.utils.notify_about_volume_attach_detach') def test_attach_volume_raises_new_flow(self, mock_notify, mock_elevate): mock_elevate.return_value = self.context attachment_id = uuids.attachment_id fake_bdm = objects.BlockDeviceMapping(**self.fake_volume) fake_bdm.attachment_id = attachment_id instance = self._create_fake_instance_obj() expected_exception = test.TestingException() def fake_attach(*args, **kwargs): raise expected_exception with test.nested( mock.patch.object(driver_block_device.DriverVolumeBlockDevice, 'attach'), mock.patch.object(cinder.API, 'attachment_delete'), mock.patch.object(objects.BlockDeviceMapping, 'destroy') ) as (mock_attach, mock_attachment_delete, mock_destroy): mock_attach.side_effect = fake_attach self.assertRaises( test.TestingException, self.compute.attach_volume, self.context, instance, fake_bdm) self.assertTrue(mock_attachment_delete.called) self.assertTrue(mock_destroy.called) mock_notify.assert_has_calls([ mock.call(self.context, instance, 'fake-mini', action='volume_attach', phase='start', volume_id=uuids.volume_id), mock.call(self.context, instance, 'fake-mini', action='volume_attach', phase='error', volume_id=uuids.volume_id, exception=expected_exception, tb=mock.ANY), ]) @mock.patch.object(compute_manager.LOG, 'debug') @mock.patch.object(compute_utils, 'EventReporter') @mock.patch('nova.context.RequestContext.elevated') @mock.patch('nova.compute.utils.notify_about_volume_attach_detach') def test_attach_volume_ignore_VolumeAttachmentNotFound( self, mock_notify, mock_elevate, mock_event, mock_debug_log): """Tests the scenario that the DriverVolumeBlockDevice.attach flow already deleted the volume attachment before the ComputeManager.attach_volume flow tries to rollback the attachment record and delete it, which raises VolumeAttachmentNotFound and is ignored. 
""" mock_elevate.return_value = self.context attachment_id = uuids.attachment_id fake_bdm = objects.BlockDeviceMapping(**self.fake_volume) fake_bdm.attachment_id = attachment_id instance = self._create_fake_instance_obj() expected_exception = test.TestingException() def fake_attach(*args, **kwargs): raise expected_exception with test.nested( mock.patch.object(driver_block_device.DriverVolumeBlockDevice, 'attach'), mock.patch.object(cinder.API, 'attachment_delete'), mock.patch.object(objects.BlockDeviceMapping, 'destroy') ) as (mock_attach, mock_attach_delete, mock_destroy): mock_attach.side_effect = fake_attach mock_attach_delete.side_effect = \ exception.VolumeAttachmentNotFound( attachment_id=attachment_id) self.assertRaises( test.TestingException, self.compute.attach_volume, self.context, instance, fake_bdm) mock_destroy.assert_called_once_with() mock_notify.assert_has_calls([ mock.call(self.context, instance, 'fake-mini', action='volume_attach', phase='start', volume_id=uuids.volume_id), mock.call(self.context, instance, 'fake-mini', action='volume_attach', phase='error', volume_id=uuids.volume_id, exception=expected_exception, tb=mock.ANY), ]) mock_event.assert_called_once_with( self.context, 'compute_attach_volume', CONF.host, instance.uuid, graceful_exit=False) self.assertIsInstance(mock_debug_log.call_args[0][1], exception.VolumeAttachmentNotFound) @mock.patch.object(compute_utils, 'EventReporter') def test_detach_volume_api_raises(self, mock_event): fake_bdm = objects.BlockDeviceMapping(**self.fake_volume) instance = self._create_fake_instance_obj() with test.nested( mock.patch.object(driver_bdm_volume, 'driver_detach'), mock.patch.object(self.compute.volume_api, 'detach'), mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance'), mock.patch.object(fake_bdm, 'destroy') ) as (mock_internal_detach, mock_detach, mock_get, mock_destroy): mock_detach.side_effect = test.TestingException mock_get.return_value = fake_bdm self.assertRaises( test.TestingException, self.compute.detach_volume, self.context, uuids.volume, instance, 'fake_id') self.assertFalse(mock_destroy.called) mock_event.assert_called_once_with( self.context, 'compute_detach_volume', CONF.host, instance.uuid, graceful_exit=False) @mock.patch.object(compute_utils, 'EventReporter') def test_detach_volume_bdm_destroyed(self, mock_event): # Assert that the BDM is destroyed given a successful call to detach # the volume from the instance in Cinder. 
fake_bdm = objects.BlockDeviceMapping(**self.fake_volume) instance = self._create_fake_instance_obj() with test.nested( mock.patch.object(driver_bdm_volume, 'driver_detach'), mock.patch.object(self.compute.volume_api, 'detach'), mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance'), mock.patch.object(fake_bdm, 'destroy') ) as (mock_internal_detach, mock_detach, mock_get, mock_destroy): mock_get.return_value = fake_bdm self.compute.detach_volume(self.context, uuids.volume_id, instance, uuids.attachment_id) mock_detach.assert_called_once_with(mock.ANY, uuids.volume_id, instance.uuid, uuids.attachment_id) self.assertTrue(mock_destroy.called) mock_event.assert_called_once_with( self.context, 'compute_detach_volume', CONF.host, instance.uuid, graceful_exit=False) def test_await_block_device_created_too_slow(self): self.flags(block_device_allocate_retries=2) self.flags(block_device_allocate_retries_interval=0.1) def never_get(self, context, vol_id): return { 'status': 'creating', 'id': 'blah', } self.stub_out('nova.volume.cinder.API.get', never_get) self.assertRaises(exception.VolumeNotCreated, self.compute._await_block_device_map_created, self.context, '1') def test_await_block_device_created_failed(self): c = self.compute fake_result = {'status': 'error', 'id': 'blah'} with mock.patch.object(c.volume_api, 'get', return_value=fake_result) as fake_get: self.assertRaises(exception.VolumeNotCreated, c._await_block_device_map_created, self.context, '1') fake_get.assert_called_once_with(self.context, '1') def test_await_block_device_created_slow(self): c = self.compute self.flags(block_device_allocate_retries=4) self.flags(block_device_allocate_retries_interval=0.1) def slow_get(cls, context, vol_id): if self.fetched_attempts < 2: self.fetched_attempts += 1 return { 'status': 'creating', 'id': 'blah', } return { 'status': 'available', 'id': 'blah', } self.stub_out('nova.volume.cinder.API.get', slow_get) attempts = c._await_block_device_map_created(self.context, '1') self.assertEqual(attempts, 3) def test_await_block_device_created_retries_zero(self): c = self.compute self.flags(block_device_allocate_retries=0) self.flags(block_device_allocate_retries_interval=0.1) def volume_get(self, context, vol_id): return { 'status': 'available', 'id': 'blah', } self.stub_out('nova.volume.cinder.API.get', volume_get) attempts = c._await_block_device_map_created(self.context, '1') self.assertEqual(1, attempts) def test_boot_volume_serial(self): with ( mock.patch.object(objects.BlockDeviceMapping, 'save') ) as mock_save: block_device_mapping = [ block_device.BlockDeviceDict({ 'id': 1, 'no_device': None, 'source_type': 'volume', 'destination_type': 'volume', 'snapshot_id': None, 'volume_id': uuids.volume_id, 'device_name': '/dev/vdb', 'volume_size': 55, 'delete_on_termination': False, 'attachment_id': None, })] bdms = block_device_obj.block_device_make_list_from_dicts( self.context, block_device_mapping) prepped_bdm = self.compute._prep_block_device( self.context, self.instance_object, bdms) self.assertEqual(2, mock_save.call_count) volume_driver_bdm = prepped_bdm['block_device_mapping'][0] self.assertEqual(volume_driver_bdm['connection_info']['serial'], uuids.volume_id) def test_boot_volume_metadata(self, metadata=True): def volume_api_get(*args, **kwargs): if metadata: return { 'size': 1, 'volume_image_metadata': {'vol_test_key': 'vol_test_value', 'min_ram': u'128', 'min_disk': u'256', 'size': u'536870912' }, } else: return {} self.stub_out('nova.volume.cinder.API.get', volume_api_get) 
expected_no_metadata = {'min_disk': 0, 'min_ram': 0, 'properties': {}, 'size': 0, 'status': 'active'} block_device_mapping = [{ 'id': 1, 'device_name': 'vda', 'no_device': None, 'virtual_name': None, 'snapshot_id': None, 'volume_id': uuids.volume_id, 'delete_on_termination': False, }] image_meta = utils.get_bdm_image_metadata( self.context, self.compute_api.image_api, self.compute_api.volume_api, block_device_mapping) if metadata: self.assertEqual(image_meta['properties']['vol_test_key'], 'vol_test_value') self.assertEqual(128, image_meta['min_ram']) self.assertEqual(256, image_meta['min_disk']) self.assertEqual(units.Gi, image_meta['size']) else: self.assertEqual(expected_no_metadata, image_meta) # Test it with new-style BDMs block_device_mapping = [{ 'boot_index': 0, 'source_type': 'volume', 'destination_type': 'volume', 'volume_id': uuids.volume_id, 'delete_on_termination': False, }] image_meta = utils.get_bdm_image_metadata( self.context, self.compute_api.image_api, self.compute_api.volume_api, block_device_mapping, legacy_bdm=False) if metadata: self.assertEqual(image_meta['properties']['vol_test_key'], 'vol_test_value') self.assertEqual(128, image_meta['min_ram']) self.assertEqual(256, image_meta['min_disk']) self.assertEqual(units.Gi, image_meta['size']) else: self.assertEqual(expected_no_metadata, image_meta) def test_boot_volume_no_metadata(self): self.test_boot_volume_metadata(metadata=False) def test_boot_image_metadata(self, metadata=True): def image_api_get(*args, **kwargs): if metadata: return { 'properties': {'img_test_key': 'img_test_value'} } else: return {} self.stub_out('nova.image.glance.API.get', image_api_get) block_device_mapping = [{ 'boot_index': 0, 'source_type': 'image', 'destination_type': 'local', 'image_id': uuids.image, 'delete_on_termination': True, }] image_meta = utils.get_bdm_image_metadata( self.context, self.compute_api.image_api, self.compute_api.volume_api, block_device_mapping, legacy_bdm=False) if metadata: self.assertEqual('img_test_value', image_meta['properties']['img_test_key']) else: self.assertEqual(image_meta, {}) def test_boot_image_no_metadata(self): self.test_boot_image_metadata(metadata=False) @mock.patch.object(time, 'time') @mock.patch.object(objects.InstanceList, 'get_by_host') @mock.patch.object(utils, 'last_completed_audit_period') @mock.patch.object(fake.FakeDriver, 'get_all_bw_counters') def test_poll_bandwidth_usage_not_implemented(self, mock_get_counter, mock_last, mock_get_host, mock_time): ctxt = context.get_admin_context() # Following methods will be called # Note - time called two more times from Log mock_last.return_value = (0, 0) mock_time.side_effect = (10, 20, 21) mock_get_host.return_value = [] mock_get_counter.side_effect = NotImplementedError self.flags(bandwidth_poll_interval=1) # The first call will catch a NotImplementedError, # then LOG.info something below: self.compute._poll_bandwidth_usage(ctxt) self.assertIn('Bandwidth usage not supported ' 'by %s' % CONF.compute_driver, self.stdlog.logger.output) # A second call won't call the stubs again as the bandwidth # poll is now disabled self.compute._poll_bandwidth_usage(ctxt) mock_get_counter.assert_called_once_with([]) mock_last.assert_called_once_with() mock_get_host.assert_called_once_with(ctxt, 'fake-mini', use_slave=True) @mock.patch.object(objects.InstanceList, 'get_by_host') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') def test_get_host_volume_bdms(self, mock_get_by_inst, mock_get_by_host): fake_instance = 
mock.Mock(uuid=uuids.volume_instance) mock_get_by_host.return_value = [fake_instance] volume_bdm = mock.Mock(id=1, is_volume=True) not_volume_bdm = mock.Mock(id=2, is_volume=False) mock_get_by_inst.return_value = [volume_bdm, not_volume_bdm] expected_host_bdms = [{'instance': fake_instance, 'instance_bdms': [volume_bdm]}] got_host_bdms = self.compute._get_host_volume_bdms('fake-context') mock_get_by_host.assert_called_once_with('fake-context', self.compute.host, use_slave=False) mock_get_by_inst.assert_called_once_with('fake-context', uuids.volume_instance, use_slave=False) self.assertEqual(expected_host_bdms, got_host_bdms) @mock.patch.object(utils, 'last_completed_audit_period') @mock.patch.object(compute_manager.ComputeManager, '_get_host_volume_bdms') def test_poll_volume_usage_disabled(self, mock_get, mock_last): # None of the mocks should be called. ctxt = 'MockContext' self.flags(volume_usage_poll_interval=0) self.compute._poll_volume_usage(ctxt) self.assertFalse(mock_get.called) self.assertFalse(mock_last.called) @mock.patch.object(compute_manager.ComputeManager, '_get_host_volume_bdms') @mock.patch.object(fake.FakeDriver, 'get_all_volume_usage') def test_poll_volume_usage_returns_no_vols(self, mock_get_usage, mock_get_bdms): ctxt = 'MockContext' # Following methods are called. mock_get_bdms.return_value = [] self.flags(volume_usage_poll_interval=10) self.compute._poll_volume_usage(ctxt) mock_get_bdms.assert_called_once_with(ctxt, use_slave=True) @mock.patch.object(compute_utils, 'notify_about_volume_usage') @mock.patch.object(compute_manager.ComputeManager, '_get_host_volume_bdms') @mock.patch.object(fake.FakeDriver, 'get_all_volume_usage') def test_poll_volume_usage_with_data(self, mock_get_usage, mock_get_bdms, mock_notify): # All the mocks are called mock_get_bdms.return_value = [1, 2] mock_get_usage.return_value = [ {'volume': uuids.volume, 'instance': self.instance_object, 'rd_req': 100, 'rd_bytes': 500, 'wr_req': 25, 'wr_bytes': 75}] self.flags(volume_usage_poll_interval=10) self.compute._poll_volume_usage(self.context) mock_get_bdms.assert_called_once_with(self.context, use_slave=True) mock_notify.assert_called_once_with( self.context, test.MatchType(objects.VolumeUsage), self.compute.host) @mock.patch.object(compute_utils, 'notify_about_volume_usage') @mock.patch('nova.context.RequestContext.elevated') @mock.patch('nova.compute.utils.notify_about_volume_attach_detach') @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') @mock.patch.object(fake.FakeDriver, 'block_stats') @mock.patch.object(compute_manager.ComputeManager, '_get_host_volume_bdms') @mock.patch.object(fake.FakeDriver, 'get_all_volume_usage') @mock.patch.object(fake.FakeDriver, 'instance_exists') def test_detach_volume_usage(self, mock_exists, mock_get_all, mock_get_bdms, mock_stats, mock_get, mock_notify, mock_elevate, mock_notify_usage): mock_elevate.return_value = self.context # Test that detach volume update the volume usage cache table correctly instance = self._create_fake_instance_obj() bdm = objects.BlockDeviceMapping(context=self.context, id=1, device_name='/dev/vdb', connection_info='{}', instance_uuid=instance['uuid'], source_type='volume', destination_type='volume', no_device=False, disk_bus='foo', device_type='disk', volume_size=1, volume_id=uuids.volume_id, attachment_id=None) host_volume_bdms = {'id': 1, 'device_name': '/dev/vdb', 'connection_info': '{}', 'instance_uuid': instance['uuid'], 'volume_id': uuids.volume_id} mock_get.return_value = bdm.obj_clone() 
mock_stats.return_value = [1, 30, 1, 20, None] mock_get_bdms.return_value = host_volume_bdms mock_get_all.return_value = [{'volume': uuids.volume_id, 'rd_req': 1, 'rd_bytes': 10, 'wr_req': 1, 'wr_bytes': 5, 'instance': instance}] mock_exists.return_value = True def fake_get_volume_encryption_metadata(self, context, volume_id): return {} self.stub_out('nova.volume.cinder.API.get_volume_encryption_metadata', fake_get_volume_encryption_metadata) self.compute.attach_volume(self.context, instance, bdm) # Poll volume usage & then detach the volume. This will update the # total fields in the volume usage cache. self.flags(volume_usage_poll_interval=10) self.compute._poll_volume_usage(self.context) # Check that a volume.usage and volume.attach notification was sent self.assertEqual(2, len(fake_notifier.NOTIFICATIONS)) self.compute.detach_volume(self.context, uuids.volume_id, instance, 'attach-id') # Check that volume.attach, 2 volume.usage, and volume.detach # notifications were sent self.assertEqual(4, len(fake_notifier.NOTIFICATIONS)) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual('compute.instance.volume.attach', msg.event_type) msg = fake_notifier.NOTIFICATIONS[2] self.assertEqual('volume.usage', msg.event_type) payload = msg.payload self.assertEqual(instance['uuid'], payload['instance_id']) self.assertEqual('fake', payload['user_id']) self.assertEqual('fake', payload['tenant_id']) self.assertEqual(1, payload['reads']) self.assertEqual(30, payload['read_bytes']) self.assertEqual(1, payload['writes']) self.assertEqual(20, payload['write_bytes']) self.assertIsNone(payload['availability_zone']) msg = fake_notifier.NOTIFICATIONS[3] self.assertEqual('compute.instance.volume.detach', msg.event_type) mock_notify_usage.assert_has_calls([ mock.call(self.context, test.MatchType(objects.VolumeUsage), self.compute.host), mock.call(self.context, test.MatchType(objects.VolumeUsage), self.compute.host)]) self.assertEqual(2, mock_notify_usage.call_count) # Check the database for the volume_usages = db.vol_get_usage_by_time(self.context, 0) self.assertEqual(1, len(volume_usages)) volume_usage = volume_usages[0] self.assertEqual(0, volume_usage['curr_reads']) self.assertEqual(0, volume_usage['curr_read_bytes']) self.assertEqual(0, volume_usage['curr_writes']) self.assertEqual(0, volume_usage['curr_write_bytes']) self.assertEqual(1, volume_usage['tot_reads']) self.assertEqual(30, volume_usage['tot_read_bytes']) self.assertEqual(1, volume_usage['tot_writes']) self.assertEqual(20, volume_usage['tot_write_bytes']) mock_notify.assert_has_calls([ mock.call(self.context, instance, 'fake-mini', action='volume_attach', phase='start', volume_id=uuids.volume_id), mock.call(self.context, instance, 'fake-mini', action='volume_attach', phase='end', volume_id=uuids.volume_id), mock.call(self.context, instance, 'fake-mini', action='volume_detach', phase='start', volume_id=uuids.volume_id), mock.call(self.context, instance, 'fake-mini', action='volume_detach', phase='end', volume_id=uuids.volume_id), ]) mock_get.assert_called_once_with(self.context, uuids.volume_id, instance.uuid) mock_stats.assert_called_once_with(instance, 'vdb') mock_get_bdms.assert_called_once_with(self.context, use_slave=True) mock_get_all(self.context, host_volume_bdms) mock_exists.assert_called_once_with(mock.ANY) def test_prepare_image_mapping(self): swap_size = 1 ephemeral_size = 1 instance_type = {'swap': swap_size, 'ephemeral_gb': ephemeral_size} mappings = [ {'virtual': 'ami', 'device': 'sda1'}, {'virtual': 'root', 'device': '/dev/sda1'}, 
{'virtual': 'swap', 'device': 'sdb4'}, {'virtual': 'ephemeral0', 'device': 'sdc1'}, {'virtual': 'ephemeral1', 'device': 'sdc2'}, ] preped_bdm = self.compute_api._prepare_image_mapping( instance_type, mappings) expected_result = [ { 'device_name': '/dev/sdb4', 'source_type': 'blank', 'destination_type': 'local', 'device_type': 'disk', 'guest_format': 'swap', 'boot_index': -1, 'volume_size': swap_size }, { 'device_name': '/dev/sdc1', 'source_type': 'blank', 'destination_type': 'local', 'device_type': 'disk', 'guest_format': CONF.default_ephemeral_format, 'boot_index': -1, 'volume_size': ephemeral_size }, { 'device_name': '/dev/sdc2', 'source_type': 'blank', 'destination_type': 'local', 'device_type': 'disk', 'guest_format': CONF.default_ephemeral_format, 'boot_index': -1, 'volume_size': ephemeral_size } ] for expected, got in zip(expected_result, preped_bdm): self.assertThat(expected, matchers.IsSubDictOf(got)) def test_validate_bdm(self): def fake_get(self, context, res_id): return {'id': res_id, 'size': 4, 'multiattach': False} def fake_attachment_create(_self, ctxt, vol_id, *args, **kwargs): # Return a unique attachment id per volume. return {'id': getattr(uuids, vol_id)} self.stub_out('nova.volume.cinder.API.get', fake_get) self.stub_out('nova.volume.cinder.API.get_snapshot', fake_get) self.stub_out('nova.volume.cinder.API.attachment_create', fake_attachment_create) volume_id = '55555555-aaaa-bbbb-cccc-555555555555' snapshot_id = '66666666-aaaa-bbbb-cccc-555555555555' image_id = '77777777-aaaa-bbbb-cccc-555555555555' instance = self._create_fake_instance_obj() instance_type = {'swap': 1, 'ephemeral_gb': 2} mappings = [ fake_block_device.FakeDbBlockDeviceDict({ 'device_name': '/dev/sdb4', 'source_type': 'blank', 'destination_type': 'local', 'device_type': 'disk', 'guest_format': 'swap', 'boot_index': -1, 'volume_size': 1 }, anon=True), fake_block_device.FakeDbBlockDeviceDict({ 'device_name': '/dev/sda1', 'source_type': 'volume', 'destination_type': 'volume', 'device_type': 'disk', 'volume_id': volume_id, 'guest_format': None, 'boot_index': 1, }, anon=True), fake_block_device.FakeDbBlockDeviceDict({ 'device_name': '/dev/sda2', 'source_type': 'snapshot', 'destination_type': 'volume', 'snapshot_id': snapshot_id, 'device_type': 'disk', 'guest_format': None, 'volume_size': 6, 'boot_index': 0, }, anon=True), fake_block_device.FakeDbBlockDeviceDict({ 'device_name': '/dev/sda3', 'source_type': 'image', 'destination_type': 'local', 'device_type': 'disk', 'guest_format': None, 'boot_index': 2, 'volume_size': 1 }, anon=True) ] mappings = block_device_obj.block_device_make_list_from_dicts( self.context, mappings) # Make sure it passes at first volumes = { volume_id: fake_get(None, None, volume_id) } self.compute_api._validate_bdm(self.context, instance, instance_type, mappings, {}, volumes) self.assertEqual(4, mappings[1].volume_size) self.assertEqual(6, mappings[2].volume_size) # Boot sequence mappings[2].boot_index = 2 self.assertRaises(exception.InvalidBDMBootSequence, self.compute_api._validate_bdm, self.context, instance, instance_type, mappings, {}, volumes) mappings[2].boot_index = 0 # number of local block_devices self.flags(max_local_block_devices=1) self.assertRaises(exception.InvalidBDMLocalsLimit, self.compute_api._validate_bdm, self.context, instance, instance_type, mappings, {}, volumes) ephemerals = [ fake_block_device.FakeDbBlockDeviceDict({ 'device_name': '/dev/vdb', 'source_type': 'blank', 'destination_type': 'local', 'device_type': 'disk', 'guest_format': None, 'boot_index': -1, 
'volume_size': 1 }, anon=True), fake_block_device.FakeDbBlockDeviceDict({ 'device_name': '/dev/vdc', 'source_type': 'blank', 'destination_type': 'local', 'device_type': 'disk', 'guest_format': None, 'boot_index': -1, 'volume_size': 1 }, anon=True) ] ephemerals = block_device_obj.block_device_make_list_from_dicts( self.context, ephemerals) self.flags(max_local_block_devices=4) # More ephemerals are OK as long as they are not over the size limit mappings_ = mappings[:] mappings_.objects.extend(ephemerals) self.compute_api._validate_bdm(self.context, instance, instance_type, mappings_, {}, volumes) # Ephemerals over the size limit ephemerals[0].volume_size = 3 mappings_ = mappings[:] mappings_.objects.extend(ephemerals) self.assertRaises(exception.InvalidBDMEphemeralSize, self.compute_api._validate_bdm, self.context, instance, instance_type, mappings_, {}, volumes) # Swap over the size limit mappings[0].volume_size = 3 self.assertRaises(exception.InvalidBDMSwapSize, self.compute_api._validate_bdm, self.context, instance, instance_type, mappings, {}, volumes) mappings[0].volume_size = 1 additional_swap = [ fake_block_device.FakeDbBlockDeviceDict({ 'device_name': '/dev/vdb', 'source_type': 'blank', 'destination_type': 'local', 'device_type': 'disk', 'guest_format': 'swap', 'boot_index': -1, 'volume_size': 1 }, anon=True) ] additional_swap = block_device_obj.block_device_make_list_from_dicts( self.context, additional_swap) # More than one swap mappings_ = mappings[:] mappings_.objects.extend(additional_swap) self.assertRaises(exception.InvalidBDMFormat, self.compute_api._validate_bdm, self.context, instance, instance_type, mappings_, {}, volumes) image_no_size = [ fake_block_device.FakeDbBlockDeviceDict({ 'device_name': '/dev/sda4', 'source_type': 'image', 'image_id': image_id, 'destination_type': 'volume', 'boot_index': -1, 'volume_size': None, }, anon=True) ] image_no_size = block_device_obj.block_device_make_list_from_dicts( self.context, image_no_size) mappings_ = mappings[:] mappings_.objects.extend(image_no_size) self.assertRaises(exception.InvalidBDM, self.compute_api._validate_bdm, self.context, instance, instance_type, mappings_, {}, volumes) # blank device without a specified size fails blank_no_size = [ fake_block_device.FakeDbBlockDeviceDict({ 'device_name': '/dev/sda4', 'source_type': 'blank', 'destination_type': 'volume', 'boot_index': -1, 'volume_size': None, }, anon=True) ] blank_no_size = block_device_obj.block_device_make_list_from_dicts( self.context, blank_no_size) mappings_ = mappings[:] mappings_.objects.extend(blank_no_size) self.assertRaises(exception.InvalidBDM, self.compute_api._validate_bdm, self.context, instance, instance_type, mappings_, {}, volumes) def test_validate_bdm_with_more_than_one_default(self): instance_type = {'swap': 1, 'ephemeral_gb': 1} all_mappings = [fake_block_device.FakeDbBlockDeviceDict({ 'id': 1, 'no_device': None, 'source_type': 'volume', 'destination_type': 'volume', 'snapshot_id': None, 'volume_size': 1, 'device_name': 'vda', 'boot_index': 0, 'delete_on_termination': False}, anon=True), fake_block_device.FakeDbBlockDeviceDict({ 'device_name': '/dev/vdb', 'source_type': 'blank', 'destination_type': 'local', 'device_type': 'disk', 'volume_size': None, 'boot_index': -1}, anon=True), fake_block_device.FakeDbBlockDeviceDict({ 'device_name': '/dev/vdc', 'source_type': 'blank', 'destination_type': 'local', 'device_type': 'disk', 'volume_size': None, 'boot_index': -1}, anon=True)] all_mappings = block_device_obj.block_device_make_list_from_dicts( 
self.context, all_mappings) image_cache = volumes = {} self.assertRaises(exception.InvalidBDMEphemeralSize, self.compute_api._validate_bdm, self.context, self.instance, instance_type, all_mappings, image_cache, volumes) @mock.patch.object(cinder.API, 'attachment_create', side_effect=exception.InvalidVolume(reason='error')) def test_validate_bdm_media_service_invalid_volume(self, mock_att_create): volume_id = uuids.volume_id instance_type = {'swap': 1, 'ephemeral_gb': 1} bdms = [fake_block_device.FakeDbBlockDeviceDict({ 'id': 1, 'no_device': None, 'source_type': 'volume', 'destination_type': 'volume', 'snapshot_id': None, 'volume_id': volume_id, 'device_name': 'vda', 'boot_index': 0, 'delete_on_termination': False}, anon=True)] bdms = block_device_obj.block_device_make_list_from_dicts(self.context, bdms) # We test a list of invalid status values that should result # in an InvalidVolume exception being raised. status_values = ( # First two check that the status is 'available'. ('creating', 'detached'), ('error', 'detached'), # Checks that the attach_status is 'detached'. ('available', 'attached') ) for status, attach_status in status_values: if attach_status == 'attached': volumes = {volume_id: {'id': volume_id, 'status': status, 'attach_status': attach_status, 'multiattach': False, 'attachments': {}}} else: volumes = {volume_id: {'id': volume_id, 'status': status, 'attach_status': attach_status, 'multiattach': False}} self.assertRaises(exception.InvalidVolume, self.compute_api._validate_bdm, self.context, self.instance_object, instance_type, bdms, {}, volumes) @mock.patch.object(cinder.API, 'check_availability_zone') @mock.patch.object(cinder.API, 'attachment_create', return_value={'id': uuids.attachment_id}) def test_validate_bdm_media_service_valid(self, mock_att_create, mock_check_av_zone): volume_id = uuids.volume_id instance_type = {'swap': 1, 'ephemeral_gb': 1} bdms = [fake_block_device.FakeDbBlockDeviceDict({ 'id': 1, 'no_device': None, 'source_type': 'volume', 'destination_type': 'volume', 'snapshot_id': None, 'volume_id': volume_id, 'device_name': 'vda', 'boot_index': 0, 'delete_on_termination': False}, anon=True)] bdms = block_device_obj.block_device_make_list_from_dicts(self.context, bdms) volume = {'id': volume_id, 'status': 'available', 'attach_status': 'detached', 'multiattach': False} image_cache = {} volumes = {volume_id: volume} self.compute_api._validate_bdm(self.context, self.instance_object, instance_type, bdms, image_cache, volumes) mock_check_av_zone.assert_not_called() mock_att_create.assert_called_once_with( self.context, volume_id, self.instance_object.uuid) def test_volume_snapshot_create(self): self.assertRaises(messaging.ExpectedException, self.compute.volume_snapshot_create, self.context, self.instance_object, 'fake_id', {}) self.compute = utils.ExceptionHelper(self.compute) self.assertRaises(NotImplementedError, self.compute.volume_snapshot_create, self.context, self.instance_object, 'fake_id', {}) @mock.patch.object(compute_manager.LOG, 'debug') def test_volume_snapshot_create_instance_not_running(self, mock_debug): with mock.patch.object(self.compute.driver, 'volume_snapshot_create') as mock_create: exc = exception.InstanceNotRunning(instance_id=uuids.instance) mock_create.side_effect = exc self.compute.volume_snapshot_create(self.context, self.instance_object, uuids.volume, {}) mock_debug.assert_called_once_with('Instance disappeared during ' 'volume snapshot create', instance=self.instance_object) def test_volume_snapshot_delete(self): 
self.assertRaises(messaging.ExpectedException, self.compute.volume_snapshot_delete, self.context, self.instance_object, uuids.volume, uuids.snapshot, {}) self.compute = utils.ExceptionHelper(self.compute) self.assertRaises(NotImplementedError, self.compute.volume_snapshot_delete, self.context, self.instance_object, uuids.volume, uuids.snapshot, {}) @mock.patch.object(compute_manager.LOG, 'debug') def test_volume_snapshot_delete_instance_not_running(self, mock_debug): with mock.patch.object(self.compute.driver, 'volume_snapshot_delete') as mock_delete: exc = exception.InstanceNotRunning(instance_id=uuids.instance) mock_delete.side_effect = exc self.compute.volume_snapshot_delete(self.context, self.instance_object, uuids.volume, uuids.snapshot, {}) mock_debug.assert_called_once_with('Instance disappeared during ' 'volume snapshot delete', instance=self.instance_object) @mock.patch.object(cinder.API, 'create', side_effect=exception.OverQuota(overs='something')) def test_prep_block_device_over_quota_failure(self, mock_create): instance = self._create_fake_instance_obj() bdms = [ block_device.BlockDeviceDict({ 'boot_index': 0, 'guest_format': None, 'connection_info': None, 'device_type': u'disk', 'source_type': 'image', 'destination_type': 'volume', 'volume_size': 1, 'image_id': 1, 'device_name': '/dev/vdb', })] bdms = block_device_obj.block_device_make_list_from_dicts( self.context, bdms) self.assertRaises(exception.OverQuota, compute_manager.ComputeManager()._prep_block_device, self.context, instance, bdms) self.assertTrue(mock_create.called) @mock.patch.object(nova.virt.block_device, 'get_swap') @mock.patch.object(nova.virt.block_device, 'convert_blanks') @mock.patch.object(nova.virt.block_device, 'convert_images') @mock.patch.object(nova.virt.block_device, 'convert_snapshots') @mock.patch.object(nova.virt.block_device, 'convert_volumes') @mock.patch.object(nova.virt.block_device, 'convert_ephemerals') @mock.patch.object(nova.virt.block_device, 'convert_swap') @mock.patch.object(nova.virt.block_device, 'attach_block_devices') def test_prep_block_device_with_blanks(self, attach_block_devices, convert_swap, convert_ephemerals, convert_volumes, convert_snapshots, convert_images, convert_blanks, get_swap): instance = self._create_fake_instance_obj() instance['root_device_name'] = '/dev/vda' root_volume = objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict({ 'instance_uuid': uuids.block_device_instance, 'source_type': 'image', 'destination_type': 'volume', 'image_id': uuids.image, 'volume_size': 1, 'boot_index': 0})) blank_volume1 = objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict({ 'instance_uuid': uuids.block_device_instance, 'source_type': 'blank', 'destination_type': 'volume', 'volume_size': 1, 'boot_index': 1})) blank_volume2 = objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict({ 'instance_uuid': uuids.block_device_instance, 'source_type': 'blank', 'destination_type': 'volume', 'volume_size': 1, 'boot_index': 2})) bdms = [blank_volume1, blank_volume2, root_volume] def fake_attach_block_devices(bdm, *args, **kwargs): return bdm convert_swap.return_value = [] convert_ephemerals.return_value = [] convert_volumes.return_value = [blank_volume1, blank_volume2] convert_snapshots.return_value = [] convert_images.return_value = [root_volume] convert_blanks.return_value = [] attach_block_devices.side_effect = fake_attach_block_devices get_swap.return_value = [] expected_block_device_info = { 'root_device_name': '/dev/vda', 'swap': [], 
'ephemerals': [], 'block_device_mapping': bdms } manager = compute_manager.ComputeManager() manager.use_legacy_block_device_info = False mock_bdm_saves = [mock.patch.object(bdm, 'save') for bdm in bdms] with test.nested(*mock_bdm_saves): block_device_info = manager._prep_block_device(self.context, instance, bdms) for bdm in bdms: bdm.save.assert_called_once_with() self.assertIsNotNone(bdm.device_name) convert_swap.assert_called_once_with(bdms) convert_ephemerals.assert_called_once_with(bdms) bdm_args = tuple(bdms) convert_volumes.assert_called_once_with(bdm_args) convert_snapshots.assert_called_once_with(bdm_args) convert_images.assert_called_once_with(bdm_args) convert_blanks.assert_called_once_with(bdm_args) self.assertEqual(expected_block_device_info, block_device_info) self.assertEqual(1, attach_block_devices.call_count) get_swap.assert_called_once_with([]) class ComputeTestCase(BaseTestCase, test_diagnostics.DiagnosticsComparisonMixin, fake_resource_tracker.RTMockMixin): def setUp(self): # This needs to go before we call setUp because the thread pool # executor is created in ComputeManager.__init__, which is called # during setUp. self.useFixture(fixtures.SynchronousThreadPoolExecutorFixture()) super(ComputeTestCase, self).setUp() self.useFixture(fixtures.SpawnIsSynchronousFixture()) self.image_api = image_api.API() self.default_flavor = objects.Flavor.get_by_name(self.context, 'm1.small') self.tiny_flavor = objects.Flavor.get_by_name(self.context, 'm1.tiny') def test_wrap_instance_fault(self): inst = {"uuid": uuids.instance} called = {'fault_added': False} def did_it_add_fault(*args): called['fault_added'] = True self.stub_out('nova.compute.utils.add_instance_fault_from_exc', did_it_add_fault) @compute_manager.wrap_instance_fault def failer(self2, context, instance): raise NotImplementedError() self.assertRaises(NotImplementedError, failer, self.compute, self.context, instance=inst) self.assertTrue(called['fault_added']) def test_wrap_instance_fault_instance_in_args(self): inst = {"uuid": uuids.instance} called = {'fault_added': False} def did_it_add_fault(*args): called['fault_added'] = True self.stub_out('nova.compute.utils.add_instance_fault_from_exc', did_it_add_fault) @compute_manager.wrap_instance_fault def failer(self2, context, instance): raise NotImplementedError() self.assertRaises(NotImplementedError, failer, self.compute, self.context, inst) self.assertTrue(called['fault_added']) def test_wrap_instance_fault_no_instance(self): inst = {"uuid": uuids.instance} called = {'fault_added': False} def did_it_add_fault(*args): called['fault_added'] = True self.stub_out('nova.utils.add_instance_fault_from_exc', did_it_add_fault) @compute_manager.wrap_instance_fault def failer(self2, context, instance): raise exception.InstanceNotFound(instance_id=instance['uuid']) self.assertRaises(exception.InstanceNotFound, failer, self.compute, self.context, inst) self.assertFalse(called['fault_added']) def test_object_compat(self): db_inst = fake_instance.fake_db_instance() @compute_manager.object_compat def test_fn(_self, context, instance): self.assertIsInstance(instance, objects.Instance) self.assertEqual(instance.uuid, db_inst['uuid']) self.assertEqual(instance.metadata, db_inst['metadata']) self.assertEqual(instance.system_metadata, db_inst['system_metadata']) test_fn(None, self.context, instance=db_inst) def test_object_compat_no_metas(self): # Tests that we don't try to set metadata/system_metadata on the # instance object using fields that aren't in the db object. 
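# NOTE: the @object_compat decorator (exercised above) is what turns the
# raw DB dict into an objects.Instance before the decorated manager method
# runs; the checks below rely on it leaving metadata/system_metadata unset,
# rather than raising, when those keys are missing from the DB record.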
db_inst = fake_instance.fake_db_instance() db_inst.pop('metadata', None) db_inst.pop('system_metadata', None) @compute_manager.object_compat def test_fn(_self, context, instance): self.assertIsInstance(instance, objects.Instance) self.assertEqual(instance.uuid, db_inst['uuid']) self.assertNotIn('metadata', instance) self.assertNotIn('system_metadata', instance) test_fn(None, self.context, instance=db_inst) def test_object_compat_more_positional_args(self): db_inst = fake_instance.fake_db_instance() @compute_manager.object_compat def test_fn(_self, context, instance, pos_arg_1, pos_arg_2): self.assertIsInstance(instance, objects.Instance) self.assertEqual(instance.uuid, db_inst['uuid']) self.assertEqual(instance.metadata, db_inst['metadata']) self.assertEqual(instance.system_metadata, db_inst['system_metadata']) self.assertEqual(pos_arg_1, 'fake_pos_arg1') self.assertEqual(pos_arg_2, 'fake_pos_arg2') test_fn(None, self.context, db_inst, 'fake_pos_arg1', 'fake_pos_arg2') def test_create_instance_with_img_ref_associates_config_drive(self): # Make sure create associates a config drive. instance = self._create_fake_instance_obj( params={'config_drive': '1234', }) try: self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instances = db.instance_get_all(self.context) instance = instances[0] self.assertTrue(instance['config_drive']) finally: db.instance_destroy(self.context, instance['uuid']) def test_create_instance_associates_config_drive(self): # Make sure create associates a config drive. instance = self._create_fake_instance_obj( params={'config_drive': '1234', }) try: self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instances = db.instance_get_all(self.context) instance = instances[0] self.assertTrue(instance['config_drive']) finally: db.instance_destroy(self.context, instance['uuid']) def test_create_instance_unlimited_memory(self): # Default of memory limit=None is unlimited. 
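# NOTE: a None value in the scheduler-supplied limits dict means "no limit"
# for that resource, so the resource tracker accepts the claim no matter how
# large the flavor is. Illustrative shape only:
#     filter_properties = {'limits': {'memory_mb': None}}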
self.flags(reserved_host_disk_mb=0, reserved_host_memory_mb=0) self.rt.update_available_resource(self.context.elevated(), NODENAME) params = {"flavor": {"memory_mb": 999999999999}} filter_properties = {'limits': {'memory_mb': None}} instance = self._create_fake_instance_obj(params) self.compute.build_and_run_instance(self.context, instance, {}, {}, filter_properties, block_device_mapping=[]) cn = self.rt.compute_nodes[NODENAME] self.assertEqual(999999999999, cn.memory_mb_used) def test_create_instance_unlimited_disk(self): self.flags(reserved_host_disk_mb=0, reserved_host_memory_mb=0) self.rt.update_available_resource(self.context.elevated(), NODENAME) params = {"root_gb": 999999999999, "ephemeral_gb": 99999999999} filter_properties = {'limits': {'disk_gb': None}} instance = self._create_fake_instance_obj(params) self.compute.build_and_run_instance(self.context, instance, {}, {}, filter_properties, block_device_mapping=[]) def test_create_multiple_instances_then_starve(self): self.flags(reserved_host_disk_mb=0, reserved_host_memory_mb=0) self.rt.update_available_resource(self.context.elevated(), NODENAME) limits = {'memory_mb': 4096, 'disk_gb': 1000} params = {"flavor": {"memory_mb": 1024, "root_gb": 128, "ephemeral_gb": 128}} instance = self._create_fake_instance_obj(params) self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[], limits=limits) cn = self.rt.compute_nodes[NODENAME] self.assertEqual(1024, cn.memory_mb_used) self.assertEqual(256, cn.local_gb_used) params = {"flavor": {"memory_mb": 2048, "root_gb": 256, "ephemeral_gb": 256}} instance = self._create_fake_instance_obj(params) self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[], limits=limits) self.assertEqual(3072, cn.memory_mb_used) self.assertEqual(768, cn.local_gb_used) params = {"flavor": {"memory_mb": 8192, "root_gb": 8192, "ephemeral_gb": 8192}} instance = self._create_fake_instance_obj(params) self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[], limits=limits) # NOTE(danms): Since we no longer claim memory and disk, this should # complete normally. In reality, this would have been rejected by # placement/scheduler before the instance got here. self.assertEqual(11264, cn.memory_mb_used) self.assertEqual(17152, cn.local_gb_used) def test_create_multiple_instance_with_neutron_port(self): requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(port_id=uuids.port_instance)]) self.assertRaises(exception.MultiplePortsNotApplicable, self.compute_api.create, self.context, instance_type=self.default_flavor, image_href=None, max_count=2, requested_networks=requested_networks) def test_create_instance_with_oversubscribed_ram(self): # Test passing of oversubscribed ram policy from the scheduler. 
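# NOTE: the oversubscribed-RAM tests below derive both figures from the
# memory the fake virt driver reports: the limit handed down from the
# scheduler is 1.5x the physical total, while the flavor requests 1.45x of
# it (under the limit, so the build succeeds) or 1.55x (over the limit).
# For example, with 10000 MB reported, the limit would be 15000 MB and the
# passing instance would ask for 14500 MB (numbers illustrative only).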
self.flags(reserved_host_disk_mb=0, reserved_host_memory_mb=0) self.rt.update_available_resource(self.context.elevated(), NODENAME) # get total memory as reported by virt driver: resources = self.compute.driver.get_available_resource(NODENAME) total_mem_mb = resources['memory_mb'] oversub_limit_mb = total_mem_mb * 1.5 instance_mb = int(total_mem_mb * 1.45) # build an instance, specifying an amount of memory that exceeds # total_mem_mb, but is less than the oversubscribed limit: params = {"flavor": {"memory_mb": instance_mb, "root_gb": 128, "ephemeral_gb": 128}} instance = self._create_fake_instance_obj(params) limits = {'memory_mb': oversub_limit_mb} filter_properties = {'limits': limits} self.compute.build_and_run_instance(self.context, instance, {}, {}, filter_properties, block_device_mapping=[]) cn = self.rt.compute_nodes[NODENAME] self.assertEqual(instance_mb, cn.memory_mb_used) def test_create_instance_with_oversubscribed_ram_fail(self): """Test passing of oversubscribed ram policy from the scheduler, but with insufficient memory. """ self.flags(reserved_host_disk_mb=0, reserved_host_memory_mb=0) self.rt.update_available_resource(self.context.elevated(), NODENAME) # get total memory as reported by virt driver: resources = self.compute.driver.get_available_resource(NODENAME) total_mem_mb = resources['memory_mb'] oversub_limit_mb = total_mem_mb * 1.5 instance_mb = int(total_mem_mb * 1.55) # build an instance, specifying an amount of memory that exceeds # both total_mem_mb and the oversubscribed limit: params = {"flavor": {"memory_mb": instance_mb, "root_gb": 128, "ephemeral_gb": 128}} instance = self._create_fake_instance_obj(params) filter_properties = {'limits': {'memory_mb': oversub_limit_mb}} self.compute.build_and_run_instance(self.context, instance, {}, {}, filter_properties, block_device_mapping=[]) def test_create_instance_with_oversubscribed_disk(self): # Test passing of oversubscribed disk policy from the scheduler. self.flags(reserved_host_disk_mb=0, reserved_host_memory_mb=0) self.rt.update_available_resource(self.context.elevated(), NODENAME) # get total memory as reported by virt driver: resources = self.compute.driver.get_available_resource(NODENAME) total_disk_gb = resources['local_gb'] oversub_limit_gb = total_disk_gb * 1.5 instance_gb = int(total_disk_gb * 1.45) # build an instance, specifying an amount of disk that exceeds # total_disk_gb, but is less than the oversubscribed limit: params = {"flavor": {"root_gb": instance_gb, "memory_mb": 10}} instance = self._create_fake_instance_obj(params) limits = {'disk_gb': oversub_limit_gb} filter_properties = {'limits': limits} self.compute.build_and_run_instance(self.context, instance, {}, {}, filter_properties, block_device_mapping=[]) cn = self.rt.compute_nodes[NODENAME] self.assertEqual(instance_gb, cn.local_gb_used) def test_create_instance_without_node_param(self): instance = self._create_fake_instance_obj({'node': None}) self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instances = db.instance_get_all(self.context) instance = instances[0] self.assertEqual(NODENAME, instance['node']) def test_create_instance_no_image(self): # Create instance with no image provided. 
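# An empty image_ref models a boot-from-volume instance; the build is still
# expected to reach ACTIVE with no task_state left behind.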
params = {'image_ref': ''} instance = self._create_fake_instance_obj(params) self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) self._assert_state({'vm_state': vm_states.ACTIVE, 'task_state': None}) @testtools.skipIf(test_utils.is_osx(), 'IPv6 pretty-printing broken on OSX, see bug 1409135') def test_default_access_ip(self): self.flags(default_access_ip_network_name='test1') fake_network.unset_stub_network_methods(self) instance = self._create_fake_instance_obj() orig_update = self.compute._instance_update # Stub out allocate_for_instance to return a fake network_info list of # VIFs ipv4_ip = network_model.IP(version=4, address='192.168.1.100') ipv4_subnet = network_model.Subnet(ips=[ipv4_ip]) ipv6_ip = network_model.IP(version=6, address='2001:db8:0:1::1') ipv6_subnet = network_model.Subnet(ips=[ipv6_ip]) network = network_model.Network( label='test1', subnets=[ipv4_subnet, ipv6_subnet]) network_info = [network_model.VIF(network=network)] allocate_for_inst_mock = mock.Mock(return_value=network_info) self.compute.network_api.allocate_for_instance = allocate_for_inst_mock # mock out deallocate_for_instance since we don't need it now self.compute.network_api.deallocate_for_instance = mock.Mock() # Make sure the access_ip_* updates happen in the same DB # update as the set to ACTIVE. def _instance_update(self, ctxt, instance_uuid, **kwargs): if kwargs.get('vm_state', None) == vm_states.ACTIVE: self.assertEqual(kwargs['access_ip_v4'], '192.168.1.100') self.assertEqual(kwargs['access_ip_v6'], '2001:db8:0:1::1') return orig_update(ctxt, instance_uuid, **kwargs) self.stub_out('nova.compute.manager.ComputeManager._instance_update', _instance_update) try: self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instances = db.instance_get_all(self.context) instance = instances[0] self.assertEqual(instance['access_ip_v4'], '192.168.1.100') self.assertEqual(instance['access_ip_v6'], '2001:db8:0:1::1') finally: db.instance_destroy(self.context, instance['uuid']) def test_no_default_access_ip(self): instance = self._create_fake_instance_obj() try: self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instances = db.instance_get_all(self.context) instance = instances[0] self.assertFalse(instance['access_ip_v4']) self.assertFalse(instance['access_ip_v6']) finally: db.instance_destroy(self.context, instance['uuid']) def test_fail_to_schedule_persists(self): # check the persistence of the ERROR(scheduling) state. params = {'vm_state': vm_states.ERROR, 'task_state': task_states.SCHEDULING} self._create_fake_instance_obj(params=params) # check state is failed even after the periodic poll self.compute.periodic_tasks(context.get_admin_context()) self._assert_state({'vm_state': vm_states.ERROR, 'task_state': task_states.SCHEDULING}) def test_run_instance_setup_block_device_mapping_fail(self): """block device mapping failure test. 
Make sure that when there is a block device mapping problem, the instance goes to ERROR state, cleaning the task state """ def fake(*args, **kwargs): raise exception.InvalidBDM() self.stub_out('nova.compute.manager.ComputeManager' '._prep_block_device', fake) instance = self._create_fake_instance_obj() self.compute.build_and_run_instance( self.context, instance=instance, image={}, request_spec={}, block_device_mapping=[], filter_properties={}, requested_networks=[], injected_files=None, admin_password=None, node=None) # check state is failed even after the periodic poll self._assert_state({'vm_state': vm_states.ERROR, 'task_state': None}) self.compute.periodic_tasks(context.get_admin_context()) self._assert_state({'vm_state': vm_states.ERROR, 'task_state': None}) @mock.patch('nova.compute.manager.ComputeManager._prep_block_device', side_effect=exception.OverQuota(overs='volumes')) def test_setup_block_device_over_quota_fail(self, mock_prep_block_dev): """block device mapping over quota failure test. Make sure when we're over volume quota according to Cinder client, the appropriate exception is raised and the instances to ERROR state, cleaning the task state. """ instance = self._create_fake_instance_obj() self.compute.build_and_run_instance( self.context, instance=instance, request_spec={}, filter_properties={}, requested_networks=[], injected_files=None, admin_password=None, node=None, block_device_mapping=[], image={}) # check state is failed even after the periodic poll self._assert_state({'vm_state': vm_states.ERROR, 'task_state': None}) self.compute.periodic_tasks(context.get_admin_context()) self._assert_state({'vm_state': vm_states.ERROR, 'task_state': None}) self.assertTrue(mock_prep_block_dev.called) def test_run_instance_spawn_fail(self): """spawn failure test. Make sure that when there is a spawning problem, the instance goes to ERROR state, cleaning the task state. """ def fake(*args, **kwargs): raise test.TestingException() self.stub_out('nova.virt.fake.FakeDriver.spawn', fake) instance = self._create_fake_instance_obj() self.compute.build_and_run_instance( self.context, instance=instance, request_spec={}, filter_properties={}, requested_networks=[], injected_files=None, admin_password=None, block_device_mapping=[], image={}, node=None) # check state is failed even after the periodic poll self._assert_state({'vm_state': vm_states.ERROR, 'task_state': None}) self.compute.periodic_tasks(context.get_admin_context()) self._assert_state({'vm_state': vm_states.ERROR, 'task_state': None}) def test_run_instance_dealloc_network_instance_not_found(self): """spawn network deallocate test. 
Make sure that when an instance is not found during spawn that the network is deallocated """ instance = self._create_fake_instance_obj() def fake(*args, **kwargs): raise exception.InstanceNotFound(instance_id="fake") with test.nested( mock.patch.object(self.compute, '_deallocate_network'), mock.patch.object(self.compute.driver, 'spawn') ) as (mock_deallocate, mock_spawn): mock_spawn.side_effect = fake self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) mock_deallocate.assert_called_with(mock.ANY, mock.ANY, None) self.assertTrue(mock_spawn.called) def test_run_instance_bails_on_missing_instance(self): # Make sure that run_instance() will quickly ignore a deleted instance instance = self._create_fake_instance_obj() with mock.patch.object(instance, 'save') as mock_save: mock_save.side_effect = exception.InstanceNotFound(instance_id=1) self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) self.assertTrue(mock_save.called) def test_run_instance_bails_on_deleting_instance(self): # Make sure that run_instance() will quickly ignore a deleting instance instance = self._create_fake_instance_obj() with mock.patch.object(instance, 'save') as mock_save: mock_save.side_effect = exception.UnexpectedDeletingTaskStateError( instance_uuid=instance['uuid'], expected={'task_state': 'bar'}, actual={'task_state': 'foo'}) self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) self.assertTrue(mock_save.called) def test_can_terminate_on_error_state(self): # Make sure that the instance can be terminated in ERROR state. # check failed to schedule --> terminate params = {'vm_state': vm_states.ERROR} instance = self._create_fake_instance_obj(params=params) self.compute.terminate_instance(self.context, instance, []) self.assertRaises(exception.InstanceNotFound, db.instance_get_by_uuid, self.context, instance['uuid']) # Double check it's not there for admins, either. self.assertRaises(exception.InstanceNotFound, db.instance_get_by_uuid, self.context.elevated(), instance['uuid']) def test_run_terminate(self): # Make sure it is possible to run and terminate instance. 
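# NOTE: terminate soft-deletes the instance row, so a plain
# instance_get_all() no longer returns it; the read_deleted="only" admin
# context used below is what lets the test confirm the record is still
# present with vm_state DELETED and a cleared task_state.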
instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instances = db.instance_get_all(self.context) LOG.info("Running instances: %s", instances) self.assertEqual(len(instances), 1) self.compute.terminate_instance(self.context, instance, []) instances = db.instance_get_all(self.context) LOG.info("After terminating instances: %s", instances) self.assertEqual(len(instances), 0) admin_deleted_context = context.get_admin_context( read_deleted="only") instance = db.instance_get_by_uuid(admin_deleted_context, instance['uuid']) self.assertEqual(instance['vm_state'], vm_states.DELETED) self.assertIsNone(instance['task_state']) def test_run_terminate_with_vol_attached(self): """Make sure it is possible to run and terminate instance with volume attached """ instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instances = db.instance_get_all(self.context) LOG.info("Running instances: %s", instances) self.assertEqual(len(instances), 1) def fake_check_availability_zone(*args, **kwargs): pass def fake_attachment_create(*args, **kwargs): return {'id': uuids.attachment_id} def fake_volume_get(self, context, volume_id): return {'id': volume_id, 'attach_status': 'attached', 'attachments': {instance.uuid: { 'attachment_id': uuids.attachment_id } }, 'multiattach': False } def fake_terminate_connection(self, context, volume_id, connector): pass def fake_detach(self, context, volume_id, instance_uuid): pass bdms = [] def fake_rpc_reserve_block_device_name(self, context, instance, device, volume_id, **kwargs): bdm = objects.BlockDeviceMapping( **{'context': context, 'source_type': 'volume', 'destination_type': 'volume', 'volume_id': uuids.volume_id, 'instance_uuid': instance['uuid'], 'device_name': '/dev/vdc'}) bdm.create() bdms.append(bdm) return bdm self.stub_out('nova.volume.cinder.API.get', fake_volume_get) self.stub_out('nova.volume.cinder.API.check_availability_zone', fake_check_availability_zone) self.stub_out('nova.volume.cinder.API.attachment_create', fake_attachment_create) self.stub_out('nova.volume.cinder.API.terminate_connection', fake_terminate_connection) self.stub_out('nova.volume.cinder.API.detach', fake_detach) self.stub_out('nova.compute.rpcapi.ComputeAPI.' 
'reserve_block_device_name', fake_rpc_reserve_block_device_name) self.compute_api.attach_volume(self.context, instance, 1, '/dev/vdc') self.compute.terminate_instance(self.context, instance, bdms) instances = db.instance_get_all(self.context) LOG.info("After terminating instances: %s", instances) self.assertEqual(len(instances), 0) bdms = db.block_device_mapping_get_all_by_instance(self.context, instance['uuid']) self.assertEqual(len(bdms), 0) def test_run_terminate_no_image(self): """Make sure instance started without image (from volume) can be terminated without issues """ params = {'image_ref': ''} instance = self._create_fake_instance_obj(params) self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) self._assert_state({'vm_state': vm_states.ACTIVE, 'task_state': None}) self.compute.terminate_instance(self.context, instance, []) instances = db.instance_get_all(self.context) self.assertEqual(len(instances), 0) def test_terminate_no_network(self): # This is as reported in LP bug 1008875 instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instances = db.instance_get_all(self.context) LOG.info("Running instances: %s", instances) self.assertEqual(len(instances), 1) self.compute.terminate_instance(self.context, instance, []) instances = db.instance_get_all(self.context) LOG.info("After terminating instances: %s", instances) self.assertEqual(len(instances), 0) def test_run_terminate_timestamps(self): # Make sure timestamps are set for launched and destroyed. instance = self._create_fake_instance_obj() instance['launched_at'] = None self.assertIsNone(instance['launched_at']) self.assertIsNone(instance['deleted_at']) launch = timeutils.utcnow() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instance.refresh() self.assertGreater(instance['launched_at'].replace(tzinfo=None), launch) self.assertIsNone(instance['deleted_at']) terminate = timeutils.utcnow() self.compute.terminate_instance(self.context, instance, []) with utils.temporary_mutation(self.context, read_deleted='only'): instance = db.instance_get_by_uuid(self.context, instance['uuid']) self.assertTrue(instance['launched_at'].replace( tzinfo=None) < terminate) self.assertGreater(instance['deleted_at'].replace( tzinfo=None), terminate) def test_run_terminate_deallocate_net_failure_sets_error_state(self): instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instances = db.instance_get_all(self.context) LOG.info("Running instances: %s", instances) self.assertEqual(len(instances), 1) def _fake_deallocate_network(*args, **kwargs): raise test.TestingException() self.stub_out('nova.compute.manager.ComputeManager.' '_deallocate_network', _fake_deallocate_network) self.assertRaises(test.TestingException, self.compute.terminate_instance, self.context, instance, []) instance = db.instance_get_by_uuid(self.context, instance['uuid']) self.assertEqual(instance['vm_state'], vm_states.ERROR) @mock.patch('nova.compute.utils.notify_about_instance_action') def test_stop(self, mock_notify): # Ensure instance can be stopped.
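# NOTE: stop_instance() expects the instance to already carry the
# POWERING_OFF task state, so the test primes that with a direct DB update
# before the call; the versioned power_off start/end notifications are then
# asserted through the mocked notify_about_instance_action helper.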
instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) db.instance_update(self.context, instance['uuid'], {"task_state": task_states.POWERING_OFF}) inst_uuid = instance['uuid'] extra = ['system_metadata', 'metadata'] inst_obj = objects.Instance.get_by_uuid(self.context, inst_uuid, expected_attrs=extra) self.compute.stop_instance(self.context, instance=inst_obj, clean_shutdown=True) mock_notify.assert_has_calls([ mock.call(self.context, inst_obj, 'fake-mini', action='power_off', phase='start'), mock.call(self.context, inst_obj, 'fake-mini', action='power_off', phase='end')]) self.compute.terminate_instance(self.context, instance, []) @mock.patch('nova.compute.utils.notify_about_instance_action') def test_start(self, mock_notify): # Ensure instance can be started. instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) db.instance_update(self.context, instance['uuid'], {"task_state": task_states.POWERING_OFF}) extra = ['system_metadata', 'metadata'] inst_uuid = instance['uuid'] inst_obj = objects.Instance.get_by_uuid(self.context, inst_uuid, expected_attrs=extra) self.compute.stop_instance(self.context, instance=inst_obj, clean_shutdown=True) inst_obj.task_state = task_states.POWERING_ON inst_obj.save() self.compute.start_instance(self.context, instance=inst_obj) mock_notify.assert_has_calls([ mock.call(self.context, inst_obj, 'fake-mini', action='power_on', phase='start'), mock.call(self.context, inst_obj, 'fake-mini', action='power_on', phase='end')]) self.compute.terminate_instance(self.context, instance, []) def test_start_shelved_instance(self): # Ensure shelved instance can be started. self.deleted_image_id = None def fake_delete(self_, ctxt, image_id): self.deleted_image_id = image_id fake_image.stub_out_image_service(self) self.stub_out('nova.tests.unit.image.fake._FakeImageService.delete', fake_delete) instance = self._create_fake_instance_obj() image = {'id': uuids.image} # Adding shelved information to instance system metadata. 
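# The keys set below (shelved_at, shelved_image_id, shelved_host) mirror
# what shelving normally records; starting the instance is expected to
# delete the shelved image and strip all three keys again, which the
# assertions at the end of this test verify.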
shelved_time = timeutils.utcnow().isoformat() instance.system_metadata['shelved_at'] = shelved_time instance.system_metadata['shelved_image_id'] = image['id'] instance.system_metadata['shelved_host'] = 'fake-mini' instance.save() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) db.instance_update(self.context, instance['uuid'], {"task_state": task_states.POWERING_OFF, "vm_state": vm_states.SHELVED}) extra = ['system_metadata', 'metadata'] inst_uuid = instance['uuid'] inst_obj = objects.Instance.get_by_uuid(self.context, inst_uuid, expected_attrs=extra) self.compute.stop_instance(self.context, instance=inst_obj, clean_shutdown=True) inst_obj.task_state = task_states.POWERING_ON inst_obj.save() self.compute.start_instance(self.context, instance=inst_obj) self.assertEqual(image['id'], self.deleted_image_id) self.assertNotIn('shelved_at', inst_obj.system_metadata) self.assertNotIn('shelved_image_id', inst_obj.system_metadata) self.assertNotIn('shelved_host', inst_obj.system_metadata) self.compute.terminate_instance(self.context, instance, []) def test_stop_start_no_image(self): params = {'image_ref': ''} instance = self._create_fake_instance_obj(params) self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) db.instance_update(self.context, instance['uuid'], {"task_state": task_states.POWERING_OFF}) extra = ['system_metadata', 'metadata'] inst_uuid = instance['uuid'] inst_obj = objects.Instance.get_by_uuid(self.context, inst_uuid, expected_attrs=extra) self.compute.stop_instance(self.context, instance=inst_obj, clean_shutdown=True) inst_obj.task_state = task_states.POWERING_ON inst_obj.save() self.compute.start_instance(self.context, instance=inst_obj) self.compute.terminate_instance(self.context, instance, []) def test_rescue(self): # Ensure instance can be rescued and unrescued. called = {'rescued': False, 'unrescued': False} def fake_rescue(self, context, instance_ref, network_info, image_meta, rescue_password, block_device_info): called['rescued'] = True self.stub_out('nova.virt.fake.FakeDriver.rescue', fake_rescue) def fake_unrescue(self, instance_ref, network_info): called['unrescued'] = True self.stub_out('nova.virt.fake.FakeDriver.unrescue', fake_unrescue) instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instance.task_state = task_states.RESCUING instance.save() self.compute.rescue_instance(self.context, instance, None, None, True) self.assertTrue(called['rescued']) instance.task_state = task_states.UNRESCUING instance.save() self.compute.unrescue_instance(self.context, instance) self.assertTrue(called['unrescued']) self.compute.terminate_instance(self.context, instance, []) @mock.patch.object(nova.compute.utils, 'notify_about_instance_rescue_action') @mock.patch('nova.context.RequestContext.elevated') def test_rescue_notifications(self, mock_context, mock_notify): # Ensure notifications on instance rescue. 
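# NOTE: rescue is expected to emit both the legacy (unversioned) events
# captured by fake_notifier and the versioned payloads asserted via the
# mocked notify_about_instance_rescue_action calls. Illustrative only, the
# legacy sequence checked below is:
#     ['compute.instance.rescue.start',
#      'compute.instance.exists',
#      'compute.instance.rescue.end']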
def fake_rescue(self, context, instance_ref, network_info, image_meta, rescue_password, block_device_info): pass self.stub_out('nova.virt.fake.FakeDriver.rescue', fake_rescue) mock_context.return_value = self.context instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) fake_notifier.NOTIFICATIONS = [] instance.task_state = task_states.RESCUING instance.save() self.compute.rescue_instance(self.context, instance, None, rescue_image_ref=uuids.fake_image_ref_1, clean_shutdown=True) expected_notifications = ['compute.instance.rescue.start', 'compute.instance.exists', 'compute.instance.rescue.end'] self.assertEqual([m.event_type for m in fake_notifier.NOTIFICATIONS], expected_notifications) mock_notify.assert_has_calls([ mock.call(self.context, instance, 'fake-mini', uuids.fake_image_ref_1, phase='start'), mock.call(self.context, instance, 'fake-mini', uuids.fake_image_ref_1, phase='end')]) for n, msg in enumerate(fake_notifier.NOTIFICATIONS): self.assertEqual(msg.event_type, expected_notifications[n]) self.assertEqual(msg.priority, 'INFO') payload = msg.payload self.assertEqual(payload['tenant_id'], self.project_id) self.assertEqual(payload['user_id'], self.user_id) self.assertEqual(payload['instance_id'], instance.uuid) self.assertEqual(payload['instance_type'], 'm1.tiny') self.assertEqual(str(self.tiny_flavor.id), str(payload['instance_type_id'])) self.assertIn('display_name', payload) self.assertIn('created_at', payload) self.assertIn('launched_at', payload) image_ref_url = self.image_api.generate_image_url(FAKE_IMAGE_REF, self.context) self.assertEqual(payload['image_ref_url'], image_ref_url) msg = fake_notifier.NOTIFICATIONS[0] self.assertIn('rescue_image_name', msg.payload) self.compute.terminate_instance(self.context, instance, []) @mock.patch.object(nova.compute.utils, 'notify_about_instance_action') @mock.patch('nova.context.RequestContext.elevated') def test_unrescue_notifications(self, mock_context, mock_notify): # Ensure notifications on instance rescue. 
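# This test covers the unrescue side of the same flow: the legacy
# compute.instance.unrescue.start/end pair plus the versioned
# notify_about_instance_action calls made with action='unrescue'.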
def fake_unrescue(self, instance_ref, network_info): pass self.stub_out('nova.virt.fake.FakeDriver.unrescue', fake_unrescue) context = self.context mock_context.return_value = context instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) fake_notifier.NOTIFICATIONS = [] instance.task_state = task_states.UNRESCUING instance.save() self.compute.unrescue_instance(self.context, instance) expected_notifications = ['compute.instance.unrescue.start', 'compute.instance.unrescue.end'] self.assertEqual([m.event_type for m in fake_notifier.NOTIFICATIONS], expected_notifications) mock_notify.assert_has_calls([ mock.call(context, instance, 'fake-mini', action='unrescue', phase='start'), mock.call(context, instance, 'fake-mini', action='unrescue', phase='end')]) for n, msg in enumerate(fake_notifier.NOTIFICATIONS): self.assertEqual(msg.event_type, expected_notifications[n]) self.assertEqual(msg.priority, 'INFO') payload = msg.payload self.assertEqual(payload['tenant_id'], self.project_id) self.assertEqual(payload['user_id'], self.user_id) self.assertEqual(payload['instance_id'], instance.uuid) self.assertEqual(payload['instance_type'], 'm1.tiny') self.assertEqual(str(self.tiny_flavor.id), str(payload['instance_type_id'])) self.assertIn('display_name', payload) self.assertIn('created_at', payload) self.assertIn('launched_at', payload) image_ref_url = self.image_api.generate_image_url(FAKE_IMAGE_REF, self.context) self.assertEqual(payload['image_ref_url'], image_ref_url) self.compute.terminate_instance(self.context, instance, []) @mock.patch.object(nova.compute.manager.ComputeManager, '_get_instance_block_device_info') @mock.patch.object(fake.FakeDriver, 'power_off') @mock.patch.object(fake.FakeDriver, 'rescue') @mock.patch.object(compute_manager.ComputeManager, '_get_rescue_image') def test_rescue_handle_err(self, mock_get, mock_rescue, mock_power_off, mock_get_block_info): # If the driver fails to rescue, instance state should got to ERROR # and the exception should be converted to InstanceNotRescuable inst_obj = self._create_fake_instance_obj() mock_get.return_value = objects.ImageMeta.from_dict({}) mock_get_block_info.return_value = mock.sentinel.block_device_info mock_rescue.side_effect = RuntimeError("Try again later") expected_message = ('Instance %s cannot be rescued: ' 'Driver Error: Try again later' % inst_obj.uuid) with testtools.ExpectedException( exception.InstanceNotRescuable, expected_message): self.compute.rescue_instance( self.context, instance=inst_obj, rescue_password='password', rescue_image_ref=None, clean_shutdown=True) self.assertEqual(vm_states.ERROR, inst_obj.vm_state) mock_get.assert_called_once_with(mock.ANY, inst_obj, mock.ANY) mock_rescue.assert_called_once_with(mock.ANY, inst_obj, [], mock.ANY, 'password', mock.sentinel.block_device_info) @mock.patch.object(nova.compute.manager.ComputeManager, '_get_instance_block_device_info') @mock.patch.object(image_api.API, "get") @mock.patch.object(fake.FakeDriver, 'power_off') @mock.patch.object(nova.virt.fake.FakeDriver, "rescue") def test_rescue_with_image_specified(self, mock_rescue, mock_power_off, mock_image_get, mock_get_block_info): image_ref = uuids.image_instance rescue_image_meta = {} params = {"task_state": task_states.RESCUING} instance = self._create_fake_instance_obj(params=params) ctxt = context.get_admin_context() mock_context = mock.Mock() mock_context.elevated.return_value = ctxt mock_get_block_info.return_value = 
mock.sentinel.block_device_info mock_image_get.return_value = rescue_image_meta self.compute.rescue_instance(mock_context, instance=instance, rescue_password="password", rescue_image_ref=image_ref, clean_shutdown=True) mock_image_get.assert_called_with(ctxt, image_ref) mock_rescue.assert_called_with(ctxt, instance, [], test.MatchType(objects.ImageMeta), 'password', mock.sentinel.block_device_info) self.compute.terminate_instance(ctxt, instance, []) @mock.patch.object(nova.compute.manager.ComputeManager, '_get_instance_block_device_info') @mock.patch.object(image_api.API, "get") @mock.patch.object(fake.FakeDriver, 'power_off') @mock.patch.object(nova.virt.fake.FakeDriver, "rescue") def test_rescue_with_base_image_when_image_not_specified(self, mock_rescue, mock_power_off, mock_image_get, mock_get_block_info): image_ref = FAKE_IMAGE_REF system_meta = {"image_base_image_ref": image_ref} rescue_image_meta = {} params = {"task_state": task_states.RESCUING, "system_metadata": system_meta} instance = self._create_fake_instance_obj(params=params) ctxt = context.get_admin_context() mock_context = mock.Mock() mock_context.elevated.return_value = ctxt mock_get_block_info.return_value = mock.sentinel.block_device_info mock_image_get.return_value = rescue_image_meta self.compute.rescue_instance(mock_context, instance=instance, rescue_password="password", rescue_image_ref=None, clean_shutdown=True) mock_image_get.assert_called_with(ctxt, image_ref) mock_rescue.assert_called_with(ctxt, instance, [], test.MatchType(objects.ImageMeta), 'password', mock.sentinel.block_device_info) self.compute.terminate_instance(self.context, instance, []) def test_power_on(self): # Ensure instance can be powered on. called = {'power_on': False} def fake_driver_power_on(self, context, instance, network_info, block_device_info, accel_device_info=None): called['power_on'] = True self.stub_out('nova.virt.fake.FakeDriver.power_on', fake_driver_power_on) instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) extra = ['system_metadata', 'metadata'] inst_obj = objects.Instance.get_by_uuid(self.context, instance['uuid'], expected_attrs=extra) inst_obj.task_state = task_states.POWERING_ON inst_obj.save() self.compute.start_instance(self.context, instance=inst_obj) self.assertTrue(called['power_on']) self.compute.terminate_instance(self.context, inst_obj, []) @mock.patch.object(compute_manager.ComputeManager, '_get_instance_block_device_info') @mock.patch('nova.network.neutron.API.get_instance_nw_info') @mock.patch.object(fake.FakeDriver, 'power_on') @mock.patch('nova.accelerator.cyborg._CyborgClient.get_arqs_for_instance') def test_power_on_with_accels(self, mock_get_arqs, mock_power_on, mock_nw_info, mock_blockdev): instance = self._create_fake_instance_obj() instance.flavor.extra_specs = {'accel:device_profile': 'mydp'} accel_info = [{'k1': 'v1', 'k2': 'v2'}] mock_get_arqs.return_value = accel_info mock_nw_info.return_value = 'nw_info' mock_blockdev.return_value = 'blockdev_info' self.compute._power_on(self.context, instance) mock_get_arqs.assert_called_once_with(instance['uuid']) mock_power_on.assert_called_once_with(self.context, instance, 'nw_info', 'blockdev_info', accel_info) def test_power_off(self): # Ensure instance can be powered off. 
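# NOTE: stop_instance(clean_shutdown=True) ends up in the driver's
# power_off hook, which is why the stub below accepts the extra shutdown
# timeout/attempt arguments the manager passes along for a clean shutdown.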
called = {'power_off': False} def fake_driver_power_off(self, instance, shutdown_timeout, shutdown_attempts): called['power_off'] = True self.stub_out('nova.virt.fake.FakeDriver.power_off', fake_driver_power_off) instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) extra = ['system_metadata', 'metadata'] inst_obj = objects.Instance.get_by_uuid(self.context, instance['uuid'], expected_attrs=extra) inst_obj.task_state = task_states.POWERING_OFF inst_obj.save() self.compute.stop_instance(self.context, instance=inst_obj, clean_shutdown=True) self.assertTrue(called['power_off']) self.compute.terminate_instance(self.context, inst_obj, []) @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch.object(nova.context.RequestContext, 'elevated') def test_pause(self, mock_context, mock_notify): # Ensure instance can be paused and unpaused. instance = self._create_fake_instance_obj() ctxt = context.get_admin_context() mock_context.return_value = ctxt self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instance.task_state = task_states.PAUSING instance.save() fake_notifier.NOTIFICATIONS = [] self.compute.pause_instance(self.context, instance=instance) mock_notify.assert_has_calls([ mock.call(ctxt, instance, 'fake-mini', action='pause', phase='start'), mock.call(ctxt, instance, 'fake-mini', action='pause', phase='end')]) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 2) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.event_type, 'compute.instance.pause.start') msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual(msg.event_type, 'compute.instance.pause.end') instance.task_state = task_states.UNPAUSING instance.save() mock_notify.reset_mock() fake_notifier.NOTIFICATIONS = [] self.compute.unpause_instance(self.context, instance=instance) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 2) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.event_type, 'compute.instance.unpause.start') msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual(msg.event_type, 'compute.instance.unpause.end') mock_notify.assert_has_calls([ mock.call(ctxt, instance, 'fake-mini', action='unpause', phase='start'), mock.call(ctxt, instance, 'fake-mini', action='unpause', phase='end')]) self.compute.terminate_instance(self.context, instance, []) @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch('nova.context.RequestContext.elevated') def test_suspend(self, mock_context, mock_notify): # ensure instance can be suspended and resumed. 
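# NOTE: six legacy notifications are expected in total here: the first two
# are emitted while the instance is built, then the suspend start/end pair
# (indexes 2 and 3, asserted below), then the resume start/end pair.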
context = self.context mock_context.return_value = context instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(context, instance, {}, {}, {}, block_device_mapping=[]) instance.task_state = task_states.SUSPENDING instance.save() self.compute.suspend_instance(context, instance) instance.task_state = task_states.RESUMING instance.save() self.compute.resume_instance(context, instance) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 6) msg = fake_notifier.NOTIFICATIONS[2] self.assertEqual(msg.event_type, 'compute.instance.suspend.start') msg = fake_notifier.NOTIFICATIONS[3] self.assertEqual(msg.event_type, 'compute.instance.suspend.end') mock_notify.assert_has_calls([ mock.call(context, instance, 'fake-mini', action='suspend', phase='start'), mock.call(context, instance, 'fake-mini', action='suspend', phase='end')]) self.compute.terminate_instance(self.context, instance, []) def test_suspend_error(self): # Ensure vm_state is ERROR when suspend error occurs. instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) with mock.patch.object(self.compute.driver, 'suspend', side_effect=test.TestingException): self.assertRaises(test.TestingException, self.compute.suspend_instance, self.context, instance=instance) instance = db.instance_get_by_uuid(self.context, instance.uuid) self.assertEqual(vm_states.ERROR, instance.vm_state) def test_suspend_not_implemented(self): # Ensure expected exception is raised and the vm_state of instance # restore to original value if suspend is not implemented by driver instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) with mock.patch.object(self.compute.driver, 'suspend', side_effect=NotImplementedError('suspend test')): self.assertRaises(NotImplementedError, self.compute.suspend_instance, self.context, instance=instance) instance = db.instance_get_by_uuid(self.context, instance.uuid) self.assertEqual(vm_states.ACTIVE, instance.vm_state) def test_suspend_rescued(self): # ensure rescued instance can be suspended and resumed. instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instance.vm_state = vm_states.RESCUED instance.task_state = task_states.SUSPENDING instance.save() self.compute.suspend_instance(self.context, instance) self.assertEqual(instance.vm_state, vm_states.SUSPENDED) instance.task_state = task_states.RESUMING instance.save() self.compute.resume_instance(self.context, instance) self.assertEqual(instance.vm_state, vm_states.RESCUED) self.compute.terminate_instance(self.context, instance, []) @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(nova.compute.utils, 'notify_about_instance_action') @mock.patch('nova.context.RequestContext.elevated') def test_resume_notifications(self, mock_context, mock_notify, mock_get_bdms): # ensure instance can be suspended and resumed. 
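# NOTE: same six-notification layout as the previous test, but this one
# focuses on the resume pair at indexes 4 and 5 and additionally checks
# that the versioned resume notifications carry the instance's (empty) BDM
# list, which is why BlockDeviceMappingList.get_by_instance_uuid is mocked.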
context = self.context mock_context.return_value = context instance = self._create_fake_instance_obj() bdms = block_device_obj.block_device_make_list(self.context, []) mock_get_bdms.return_value = bdms self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instance.task_state = task_states.SUSPENDING instance.save() self.compute.suspend_instance(self.context, instance) instance.task_state = task_states.RESUMING instance.save() self.compute.resume_instance(self.context, instance) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 6) msg = fake_notifier.NOTIFICATIONS[4] self.assertEqual(msg.event_type, 'compute.instance.resume.start') msg = fake_notifier.NOTIFICATIONS[5] self.assertEqual(msg.event_type, 'compute.instance.resume.end') mock_notify.assert_has_calls([ mock.call(context, instance, 'fake-mini', action='resume', phase='start', bdms=bdms), mock.call(context, instance, 'fake-mini', action='resume', phase='end', bdms=bdms)]) self.compute.terminate_instance(self.context, instance, []) def test_resume_no_old_state(self): # ensure a suspended instance with no old_vm_state is resumed to the # ACTIVE state instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instance.vm_state = vm_states.SUSPENDED instance.task_state = task_states.RESUMING instance.save() self.compute.resume_instance(self.context, instance) self.assertEqual(instance.vm_state, vm_states.ACTIVE) self.compute.terminate_instance(self.context, instance, []) def test_resume_error(self): # Ensure vm_state is ERROR when resume error occurs. instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instance.task_state = task_states.SUSPENDING instance.save() self.compute.suspend_instance(self.context, instance) instance.task_state = task_states.RESUMING instance.save() with mock.patch.object(self.compute.driver, 'resume', side_effect=test.TestingException): self.assertRaises(test.TestingException, self.compute.resume_instance, self.context, instance) instance = db.instance_get_by_uuid(self.context, instance.uuid) self.assertEqual(vm_states.ERROR, instance.vm_state) def test_rebuild(self): # Ensure instance can be rebuilt. 
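# NOTE: rebuild_instance() is called with its full keyword signature even
# for this simple case; recreate=False and on_shared_storage=False mark it
# as a plain same-host rebuild rather than an evacuation.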
instance = self._create_fake_instance_obj() image_ref = instance['image_ref'] sys_metadata = db.instance_system_metadata_get(self.context, instance['uuid']) self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) db.instance_update(self.context, instance['uuid'], {"task_state": task_states.REBUILDING}) self.compute.rebuild_instance(self.context, instance, image_ref, image_ref, injected_files=[], new_pass="new_password", orig_sys_metadata=sys_metadata, bdms=[], recreate=False, on_shared_storage=False, preserve_ephemeral=False, migration=None, scheduled_node=None, limits={}, request_spec=None) self.compute.terminate_instance(self.context, instance, []) def test_rebuild_driver(self): # Make sure virt drivers can override default rebuild called = {'rebuild': False} def fake(*args, **kwargs): instance = kwargs['instance'] instance.task_state = task_states.REBUILD_BLOCK_DEVICE_MAPPING instance.save(expected_task_state=[task_states.REBUILDING]) instance.task_state = task_states.REBUILD_SPAWNING instance.save( expected_task_state=[task_states.REBUILD_BLOCK_DEVICE_MAPPING]) called['rebuild'] = True self.stub_out('nova.virt.fake.FakeDriver.rebuild', fake) instance = self._create_fake_instance_obj() image_ref = instance['image_ref'] sys_metadata = db.instance_system_metadata_get(self.context, instance['uuid']) self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) db.instance_update(self.context, instance['uuid'], {"task_state": task_states.REBUILDING}) self.compute.rebuild_instance(self.context, instance, image_ref, image_ref, injected_files=[], new_pass="new_password", orig_sys_metadata=sys_metadata, bdms=[], recreate=False, on_shared_storage=False, preserve_ephemeral=False, migration=None, scheduled_node=None, limits={}, request_spec=None) self.assertTrue(called['rebuild']) self.compute.terminate_instance(self.context, instance, []) @mock.patch('nova.compute.manager.ComputeManager._detach_volume') def test_rebuild_driver_with_volumes(self, mock_detach): bdms = block_device_obj.block_device_make_list(self.context, [fake_block_device.FakeDbBlockDeviceDict({ 'id': 3, 'volume_id': uuids.volume_id, 'instance_uuid': uuids.block_device_instance, 'device_name': '/dev/vda', 'connection_info': '{"driver_volume_type": "rbd"}', 'source_type': 'image', 'destination_type': 'volume', 'image_id': uuids.image, 'boot_index': 0 })]) # Make sure virt drivers can override default rebuild called = {'rebuild': False} def fake(*args, **kwargs): instance = kwargs['instance'] instance.task_state = task_states.REBUILD_BLOCK_DEVICE_MAPPING instance.save(expected_task_state=[task_states.REBUILDING]) instance.task_state = task_states.REBUILD_SPAWNING instance.save( expected_task_state=[task_states.REBUILD_BLOCK_DEVICE_MAPPING]) called['rebuild'] = True func = kwargs['detach_block_devices'] # Have the fake driver call the function to detach block devices func(self.context, bdms) # Verify volumes to be detached without destroying mock_detach.assert_called_once_with(self.context, bdms[0], instance, destroy_bdm=False) self.stub_out('nova.virt.fake.FakeDriver.rebuild', fake) instance = self._create_fake_instance_obj() image_ref = instance['image_ref'] sys_metadata = db.instance_system_metadata_get(self.context, instance['uuid']) self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) db.instance_update(self.context, instance['uuid'], {"task_state": task_states.REBUILDING}) 
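# rebuild_instance below runs with the BDM list built above; the fake
# driver's rebuild() invokes the detach_block_devices callback, which is
# what triggers the mock_detach assertion inside fake().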
self.compute.rebuild_instance(self.context, instance, image_ref, image_ref, injected_files=[], new_pass="new_password", orig_sys_metadata=sys_metadata, bdms=bdms, recreate=False, preserve_ephemeral=False, migration=None, scheduled_node=None, limits={}, on_shared_storage=False, request_spec=None) self.assertTrue(called['rebuild']) self.compute.terminate_instance(self.context, instance, []) def test_rebuild_no_image(self): # Ensure instance can be rebuilt when started with no image. params = {'image_ref': ''} instance = self._create_fake_instance_obj(params) sys_metadata = db.instance_system_metadata_get(self.context, instance['uuid']) self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) db.instance_update(self.context, instance['uuid'], {"task_state": task_states.REBUILDING}) self.compute.rebuild_instance(self.context, instance, '', '', injected_files=[], new_pass="new_password", orig_sys_metadata=sys_metadata, bdms=[], recreate=False, on_shared_storage=False, preserve_ephemeral=False, migration=None, scheduled_node=None, limits=None, request_spec=None) self.compute.terminate_instance(self.context, instance, []) def test_rebuild_launched_at_time(self): # Ensure instance can be rebuilt. old_time = datetime.datetime(2012, 4, 1) cur_time = datetime.datetime(2012, 12, 21, 12, 21) time_fixture = self.useFixture(utils_fixture.TimeFixture(old_time)) instance = self._create_fake_instance_obj() image_ref = instance['image_ref'] self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) time_fixture.advance_time_delta(cur_time - old_time) db.instance_update(self.context, instance['uuid'], {"task_state": task_states.REBUILDING}) self.compute.rebuild_instance(self.context, instance, image_ref, image_ref, injected_files=[], new_pass="new_password", orig_sys_metadata={}, bdms=[], recreate=False, on_shared_storage=False, preserve_ephemeral=False, migration=None, scheduled_node=None, limits={}, request_spec=None) instance.refresh() self.assertEqual(cur_time, instance['launched_at'].replace(tzinfo=None)) self.compute.terminate_instance(self.context, instance, []) def test_rebuild_with_injected_files(self): # Ensure instance can be rebuilt with injected files. 
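# The file contents are handed to rebuild_instance base64-encoded; the
# stubbed spawn() below asserts that the compute manager passes the driver
# the decoded bytes.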
injected_files = [ (b'/a/b/c', base64.encode_as_bytes(b'foobarbaz')), ] self.decoded_files = [ (b'/a/b/c', b'foobarbaz'), ] def _spawn(cls, context, instance, image_meta, injected_files, admin_password, allocations, network_info, block_device_info): self.assertEqual(self.decoded_files, injected_files) self.stub_out('nova.virt.fake.FakeDriver.spawn', _spawn) instance = self._create_fake_instance_obj() image_ref = instance['image_ref'] sys_metadata = db.instance_system_metadata_get(self.context, instance['uuid']) db.instance_update(self.context, instance['uuid'], {"task_state": task_states.REBUILDING}) self.compute.rebuild_instance(self.context, instance, image_ref, image_ref, injected_files=injected_files, new_pass="new_password", orig_sys_metadata=sys_metadata, bdms=[], recreate=False, on_shared_storage=False, preserve_ephemeral=False, migration=None, scheduled_node=None, limits={}, request_spec=None) self.compute.terminate_instance(self.context, instance, []) @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(compute_manager.ComputeManager, '_get_instance_block_device_info') @mock.patch.object(compute_manager.ComputeManager, '_notify_about_instance_usage') @mock.patch.object(compute_manager.ComputeManager, '_instance_update') @mock.patch.object(db, 'instance_update_and_get_original') @mock.patch.object(compute_manager.ComputeManager, '_get_power_state') @mock.patch('nova.compute.utils.notify_about_instance_action') def _test_reboot(self, soft, mock_notify_action, mock_get_power, mock_get_orig, mock_update, mock_notify_usage, mock_get_blk, mock_get_bdms, test_delete=False, test_unrescue=False, fail_reboot=False, fail_running=False): reboot_type = soft and 'SOFT' or 'HARD' task_pending = (soft and task_states.REBOOT_PENDING or task_states.REBOOT_PENDING_HARD) task_started = (soft and task_states.REBOOT_STARTED or task_states.REBOOT_STARTED_HARD) expected_task = (soft and task_states.REBOOTING or task_states.REBOOTING_HARD) expected_tasks = (soft and (task_states.REBOOTING, task_states.REBOOT_PENDING, task_states.REBOOT_STARTED) or (task_states.REBOOTING_HARD, task_states.REBOOT_PENDING_HARD, task_states.REBOOT_STARTED_HARD)) # This is a true unit test, so we don't need the network stubs. fake_network.unset_stub_network_methods(self) # FIXME(comstud): I don't feel like the context needs to # be elevated at all. Hopefully remove elevated from # reboot_instance and remove the mock here in a future patch. # econtext would just become self.context below then. 
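# The remainder of this helper builds up the expected notification,
# power-state and DB-update call lists for the chosen combination of
# soft/test_delete/test_unrescue/fail_reboot/fail_running flags, then
# replays reboot_instance and compares against them.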
econtext = self.context.elevated() db_instance = fake_instance.fake_db_instance( **dict(uuid=uuids.db_instance, power_state=power_state.NOSTATE, vm_state=vm_states.ACTIVE, task_state=expected_task, launched_at=timeutils.utcnow())) instance = objects.Instance._from_db_object(econtext, objects.Instance(), db_instance) instance.flavor = self.default_flavor updated_dbinstance1 = fake_instance.fake_db_instance( **dict(uuid=uuids.db_instance_1, power_state=10003, vm_state=vm_states.ACTIVE, task_state=expected_task, instance_type=self.default_flavor, launched_at=timeutils.utcnow())) updated_dbinstance2 = fake_instance.fake_db_instance( **dict(uuid=uuids.db_instance_2, power_state=10003, vm_state=vm_states.ACTIVE, instance_type=self.default_flavor, task_state=expected_task, launched_at=timeutils.utcnow())) bdms = block_device_obj.block_device_make_list(self.context, []) mock_get_bdms.return_value = bdms if test_unrescue: instance.vm_state = vm_states.RESCUED instance.obj_reset_changes() fake_nw_model = network_model.NetworkInfo() fake_block_dev_info = 'fake_block_dev_info' fake_power_state1 = 10001 fake_power_state2 = power_state.RUNNING fake_power_state3 = 10002 def _fake_elevated(self): return econtext # Beginning of calls we expect. self.stub_out('nova.context.RequestContext.elevated', _fake_elevated) mock_get_blk.return_value = fake_block_dev_info mock_get_nw = mock.Mock() mock_get_nw.return_value = fake_nw_model self.compute.network_api.get_instance_nw_info = mock_get_nw mock_get_power.side_effect = [fake_power_state1] mock_get_orig.side_effect = [(None, updated_dbinstance1), (None, updated_dbinstance1)] notify_call_list = [mock.call(econtext, instance, 'reboot.start')] notify_action_call_list = [ mock.call(econtext, instance, 'fake-mini', action='reboot', phase='start', bdms=bdms)] ps_call_list = [mock.call(econtext, instance)] db_call_list = [mock.call(econtext, instance['uuid'], {'task_state': task_pending, 'expected_task_state': expected_tasks, 'power_state': fake_power_state1}, columns_to_join=['system_metadata']), mock.call(econtext, updated_dbinstance1['uuid'], {'task_state': task_started, 'expected_task_state': task_pending}, columns_to_join=['system_metadata'])] expected_nw_info = fake_nw_model # Annoying. driver.reboot is wrapped in a try/except, and # doesn't re-raise. It eats exception generated by mock if # this is called with the wrong args, so we have to hack # around it. reboot_call_info = {} expected_call_info = { 'args': (econtext, instance, expected_nw_info, reboot_type), 'kwargs': {'block_device_info': fake_block_dev_info, 'accel_info': []}} fault = exception.InstanceNotFound(instance_id='instance-0000') def fake_reboot(self, *args, **kwargs): reboot_call_info['args'] = args reboot_call_info['kwargs'] = kwargs # NOTE(sirp): Since `bad_volumes_callback` is a function defined # within `reboot_instance`, we don't have access to its value and # can't stub it out, thus we skip that comparison. 
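# bad_volumes_callback is therefore popped from the recorded kwargs before
# the final comparison against expected_call_info.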
kwargs.pop('bad_volumes_callback') if fail_reboot: raise fault self.stub_out('nova.virt.fake.FakeDriver.reboot', fake_reboot) # Power state should be updated again if not fail_reboot or fail_running: new_power_state = fake_power_state2 ps_call_list.append(mock.call(econtext, instance)) mock_get_power.side_effect = chain(mock_get_power.side_effect, [fake_power_state2]) else: new_power_state = fake_power_state3 ps_call_list.append(mock.call(econtext, instance)) mock_get_power.side_effect = chain(mock_get_power.side_effect, [fake_power_state3]) if test_delete: fault = exception.InstanceNotFound( instance_id=instance['uuid']) mock_get_orig.side_effect = chain(mock_get_orig.side_effect, [fault]) db_call_list.append( mock.call(econtext, updated_dbinstance1['uuid'], {'power_state': new_power_state, 'task_state': None, 'vm_state': vm_states.ACTIVE}, columns_to_join=['system_metadata'])) notify_call_list.append(mock.call(econtext, instance, 'reboot.end')) notify_action_call_list.append( mock.call(econtext, instance, 'fake-mini', action='reboot', phase='end', bdms=bdms)) elif fail_reboot and not fail_running: mock_get_orig.side_effect = chain(mock_get_orig.side_effect, [fault]) db_call_list.append( mock.call(econtext, updated_dbinstance1['uuid'], {'vm_state': vm_states.ERROR}, columns_to_join=['system_metadata'], )) else: mock_get_orig.side_effect = chain(mock_get_orig.side_effect, [(None, updated_dbinstance2)]) db_call_list.append( mock.call(econtext, updated_dbinstance1['uuid'], {'power_state': new_power_state, 'task_state': None, 'vm_state': vm_states.ACTIVE}, columns_to_join=['system_metadata'], )) if fail_running: notify_call_list.append(mock.call(econtext, instance, 'reboot.error', fault=fault)) notify_action_call_list.append( mock.call(econtext, instance, 'fake-mini', action='reboot', phase='error', exception=fault, bdms=bdms, tb=mock.ANY)) notify_call_list.append(mock.call(econtext, instance, 'reboot.end')) notify_action_call_list.append( mock.call(econtext, instance, 'fake-mini', action='reboot', phase='end', bdms=bdms)) if not fail_reboot or fail_running: self.compute.reboot_instance(self.context, instance=instance, block_device_info=None, reboot_type=reboot_type) else: self.assertRaises(exception.InstanceNotFound, self.compute.reboot_instance, self.context, instance=instance, block_device_info=None, reboot_type=reboot_type) self.assertEqual(expected_call_info, reboot_call_info) mock_get_blk.assert_called_once_with(econtext, instance, bdms=bdms) mock_get_nw.assert_called_once_with(econtext, instance) mock_notify_usage.assert_has_calls(notify_call_list) mock_notify_action.assert_has_calls(notify_action_call_list) mock_get_power.assert_has_calls(ps_call_list) mock_get_orig.assert_has_calls(db_call_list) def test_reboot_soft(self): self._test_reboot(True) def test_reboot_soft_and_delete(self): self._test_reboot(True, test_delete=True) def test_reboot_soft_and_rescued(self): self._test_reboot(True, test_delete=False, test_unrescue=True) def test_reboot_soft_and_delete_and_rescued(self): self._test_reboot(True, test_delete=True, test_unrescue=True) def test_reboot_hard(self): self._test_reboot(False) def test_reboot_hard_and_delete(self): self._test_reboot(False, test_delete=True) def test_reboot_hard_and_rescued(self): self._test_reboot(False, test_delete=False, test_unrescue=True) def test_reboot_hard_and_delete_and_rescued(self): self._test_reboot(False, test_delete=True, test_unrescue=True) @mock.patch('nova.virt.fake.FakeDriver.reboot') @mock.patch('nova.objects.instance.Instance.save') 
@mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(compute_manager.ComputeManager, '_get_instance_block_device_info') @mock.patch.object(compute_manager.ComputeManager, '_notify_about_instance_usage') @mock.patch.object(compute_manager.ComputeManager, '_instance_update') @mock.patch.object(db, 'instance_update_and_get_original') @mock.patch.object(compute_manager.ComputeManager, '_get_power_state') @mock.patch('nova.compute.utils.notify_about_instance_action') def _test_reboot_with_accels(self, mock_notify_action, mock_get_power, mock_get_orig, mock_update, mock_notify_usage, mock_get_blk, mock_get_bdms, mock_inst_save, mock_reboot, extra_specs=None, accel_info=None): self.compute.network_api.get_instance_nw_info = mock.Mock() reboot_type = 'SOFT' instance = self._create_fake_instance_obj() if extra_specs: instance.flavor.extra_specs = extra_specs self.compute.reboot_instance(self.context, instance=instance, block_device_info=None, reboot_type=reboot_type) mock_reboot.assert_called_once_with( mock.ANY, instance, mock.ANY, reboot_type, block_device_info=mock.ANY, bad_volumes_callback=mock.ANY, accel_info=accel_info or [] ) return instance['uuid'] @mock.patch('nova.accelerator.cyborg._CyborgClient.get_arqs_for_instance') def test_reboot_with_accels_ok(self, mock_get_arqs): dp_name = 'mydp' extra_specs = {'accel:device_profile': dp_name} _, accel_info = fixtures.get_arqs(dp_name) mock_get_arqs.return_value = accel_info instance_uuid = self._test_reboot_with_accels( extra_specs=extra_specs, accel_info=accel_info) mock_get_arqs.assert_called_once_with(instance_uuid) @mock.patch('nova.accelerator.cyborg._CyborgClient.get_arqs_for_instance') def test_reboot_with_accels_no_dp(self, mock_get_arqs): self._test_reboot_with_accels(extra_specs=None, accel_info=None) mock_get_arqs.assert_not_called() @mock.patch.object(jsonutils, 'to_primitive') def test_reboot_fail(self, mock_to_primitive): self._test_reboot(False, fail_reboot=True) def test_reboot_fail_running(self): self._test_reboot(False, fail_reboot=True, fail_running=True) def test_reboot_hard_pausing(self): # We need an actual instance in the database for this test to make # sure that expected_task_state works OK with Instance.save(). 
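# A hard reboot is allowed even while the task_state is PAUSING; only the
# RPC cast is mocked, so Instance.save() and its expected_task_state
# handling run against the real database fixture.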
instance = self._create_fake_instance_obj( params={'task_state': task_states.PAUSING}) with mock.patch.object(self.compute_api.compute_rpcapi, 'reboot_instance') as rpc_reboot: self.compute_api.reboot(self.context, instance, 'HARD') rpc_reboot.assert_called_once_with( self.context, instance=instance, block_device_info=None, reboot_type='HARD') def test_get_instance_block_device_info_source_image(self): bdms = block_device_obj.block_device_make_list(self.context, [fake_block_device.FakeDbBlockDeviceDict({ 'id': 3, 'volume_id': uuids.volume_id, 'instance_uuid': uuids.block_device_instance, 'device_name': '/dev/vda', 'connection_info': '{"driver_volume_type": "rbd"}', 'source_type': 'image', 'destination_type': 'volume', 'image_id': uuids.image, 'boot_index': 0 })]) with (mock.patch.object( objects.BlockDeviceMappingList, 'get_by_instance_uuid', return_value=bdms) ) as mock_get_by_instance: block_device_info = ( self.compute._get_instance_block_device_info( self.context, self._create_fake_instance_obj()) ) expected = { 'swap': None, 'ephemerals': [], 'root_device_name': None, 'block_device_mapping': [{ 'connection_info': { 'driver_volume_type': 'rbd' }, 'mount_device': '/dev/vda', 'delete_on_termination': False }] } self.assertTrue(mock_get_by_instance.called) self.assertEqual(block_device_info, expected) def test_get_instance_block_device_info_passed_bdms(self): bdms = block_device_obj.block_device_make_list(self.context, [fake_block_device.FakeDbBlockDeviceDict({ 'id': 3, 'volume_id': uuids.volume_id, 'device_name': '/dev/vdd', 'connection_info': '{"driver_volume_type": "rbd"}', 'source_type': 'volume', 'destination_type': 'volume'}) ]) with (mock.patch.object( objects.BlockDeviceMappingList, 'get_by_instance_uuid')) as mock_get_by_instance: block_device_info = ( self.compute._get_instance_block_device_info( self.context, self._create_fake_instance_obj(), bdms=bdms) ) expected = { 'swap': None, 'ephemerals': [], 'root_device_name': None, 'block_device_mapping': [{ 'connection_info': { 'driver_volume_type': 'rbd' }, 'mount_device': '/dev/vdd', 'delete_on_termination': False }] } self.assertFalse(mock_get_by_instance.called) self.assertEqual(block_device_info, expected) def test_get_instance_block_device_info_swap_and_ephemerals(self): instance = self._create_fake_instance_obj() ephemeral0 = fake_block_device.FakeDbBlockDeviceDict({ 'id': 1, 'instance_uuid': uuids.block_device_instance, 'device_name': '/dev/vdb', 'source_type': 'blank', 'destination_type': 'local', 'device_type': 'disk', 'disk_bus': 'virtio', 'delete_on_termination': True, 'guest_format': None, 'volume_size': 1, 'boot_index': -1 }) ephemeral1 = fake_block_device.FakeDbBlockDeviceDict({ 'id': 2, 'instance_uuid': uuids.block_device_instance, 'device_name': '/dev/vdc', 'source_type': 'blank', 'destination_type': 'local', 'device_type': 'disk', 'disk_bus': 'virtio', 'delete_on_termination': True, 'guest_format': None, 'volume_size': 2, 'boot_index': -1 }) swap = fake_block_device.FakeDbBlockDeviceDict({ 'id': 3, 'instance_uuid': uuids.block_device_instance, 'device_name': '/dev/vdd', 'source_type': 'blank', 'destination_type': 'local', 'device_type': 'disk', 'disk_bus': 'virtio', 'delete_on_termination': True, 'guest_format': 'swap', 'volume_size': 1, 'boot_index': -1 }) bdms = block_device_obj.block_device_make_list(self.context, [swap, ephemeral0, ephemeral1]) with ( mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid', return_value=bdms) ) as mock_get_by_instance_uuid: expected_block_device_info = { 'swap': 
{'device_name': '/dev/vdd', 'swap_size': 1}, 'ephemerals': [{'device_name': '/dev/vdb', 'num': 0, 'size': 1, 'virtual_name': 'ephemeral0'}, {'device_name': '/dev/vdc', 'num': 1, 'size': 2, 'virtual_name': 'ephemeral1'}], 'block_device_mapping': [], 'root_device_name': None } block_device_info = ( self.compute._get_instance_block_device_info( self.context, instance) ) mock_get_by_instance_uuid.assert_called_once_with(self.context, instance['uuid']) self.assertEqual(expected_block_device_info, block_device_info) def test_inject_network_info(self): # Ensure we can inject network info. called = {'inject': False} def fake_driver_inject_network(self, instance, network_info): called['inject'] = True self.stub_out('nova.virt.fake.FakeDriver.inject_network_info', fake_driver_inject_network) instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) self.compute.inject_network_info(self.context, instance=instance) self.assertTrue(called['inject']) self.compute.terminate_instance(self.context, instance, []) def test_reset_network(self): # Ensure we can reset networking on an instance. called = {'count': 0} def fake_driver_reset_network(self, instance): called['count'] += 1 self.stub_out('nova.virt.fake.FakeDriver.reset_network', fake_driver_reset_network) instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) self.compute.reset_network(self.context, instance) self.assertEqual(called['count'], 1) self.compute.terminate_instance(self.context, instance, []) def _get_snapshotting_instance(self): # Ensure instance can be snapshotted. instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instance.task_state = task_states.IMAGE_SNAPSHOT_PENDING instance.save() return instance @mock.patch.object(nova.compute.utils, 'notify_about_instance_snapshot') def test_snapshot(self, mock_notify_snapshot): inst_obj = self._get_snapshotting_instance() mock_context = mock.Mock() with mock.patch.object(self.context, 'elevated', return_value=mock_context) as mock_context_elevated: self.compute.snapshot_instance(self.context, image_id=uuids.snapshot, instance=inst_obj) mock_context_elevated.assert_called_once_with() mock_notify_snapshot.assert_has_calls([ mock.call(mock_context, inst_obj, 'fake-mini', phase='start', snapshot_image_id=uuids.snapshot), mock.call(mock_context, inst_obj, 'fake-mini', phase='end', snapshot_image_id=uuids.snapshot)]) def test_snapshot_no_image(self): inst_obj = self._get_snapshotting_instance() inst_obj.image_ref = '' inst_obj.save() self.compute.snapshot_instance(self.context, image_id=uuids.snapshot, instance=inst_obj) def _test_snapshot_fails(self, raise_during_cleanup, method, expected_state=True): def fake_snapshot(*args, **kwargs): raise test.TestingException() self.fake_image_delete_called = False def fake_delete(self_, context, image_id): self.fake_image_delete_called = True if raise_during_cleanup: raise Exception() self.stub_out('nova.virt.fake.FakeDriver.snapshot', fake_snapshot) fake_image.stub_out_image_service(self) self.stub_out('nova.tests.unit.image.fake._FakeImageService.delete', fake_delete) inst_obj = self._get_snapshotting_instance() with mock.patch.object(compute_utils, 'EventReporter') as mock_event: if method == 'snapshot': self.assertRaises(test.TestingException, self.compute.snapshot_instance, self.context, 
image_id=uuids.snapshot, instance=inst_obj) mock_event.assert_called_once_with(self.context, 'compute_snapshot_instance', CONF.host, inst_obj.uuid, graceful_exit=False) else: self.assertRaises(test.TestingException, self.compute.backup_instance, self.context, image_id=uuids.snapshot, instance=inst_obj, backup_type='fake', rotation=1) mock_event.assert_called_once_with(self.context, 'compute_backup_instance', CONF.host, inst_obj.uuid, graceful_exit=False) self.assertEqual(expected_state, self.fake_image_delete_called) self._assert_state({'task_state': None}) @mock.patch.object(nova.compute.manager.ComputeManager, '_rotate_backups') def test_backup_fails(self, mock_rotate): self._test_snapshot_fails(False, 'backup') @mock.patch.object(nova.compute.manager.ComputeManager, '_rotate_backups') def test_backup_fails_cleanup_ignores_exception(self, mock_rotate): self._test_snapshot_fails(True, 'backup') @mock.patch.object(nova.compute.manager.ComputeManager, '_rotate_backups') @mock.patch.object(nova.compute.manager.ComputeManager, '_do_snapshot_instance') def test_backup_fails_rotate_backup(self, mock_snap, mock_rotate): mock_rotate.side_effect = test.TestingException() self._test_snapshot_fails(True, 'backup', False) def test_snapshot_fails(self): self._test_snapshot_fails(False, 'snapshot') def test_snapshot_fails_cleanup_ignores_exception(self): self._test_snapshot_fails(True, 'snapshot') def _test_snapshot_deletes_image_on_failure(self, status, exc, image_not_exist=False): self.fake_image_delete_called = False def fake_show(self_, context, image_id, **kwargs): if image_not_exist: raise exception.ImageNotFound(image_id=uuids.snapshot) self.assertEqual(uuids.snapshot, image_id) image = {'id': image_id, 'status': status} return image self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', fake_show) def fake_delete(self_, context, image_id): self.fake_image_delete_called = True self.assertEqual(uuids.snapshot, image_id) self.stub_out('nova.tests.unit.image.fake._FakeImageService.delete', fake_delete) def fake_snapshot(*args, **kwargs): raise exc self.stub_out('nova.virt.fake.FakeDriver.snapshot', fake_snapshot) fake_image.stub_out_image_service(self) inst_obj = self._get_snapshotting_instance() self.compute.snapshot_instance(self.context, image_id=uuids.snapshot, instance=inst_obj) def test_snapshot_fails_with_glance_error(self): image_not_found = exception.ImageNotFound(image_id=uuids.snapshot) self._test_snapshot_deletes_image_on_failure('error', image_not_found) self.assertFalse(self.fake_image_delete_called) self._assert_state({'task_state': None}) def test_snapshot_fails_with_task_state_error(self): deleting_state_error = exception.UnexpectedDeletingTaskStateError( instance_uuid=uuids.instance, expected={'task_state': task_states.IMAGE_SNAPSHOT}, actual={'task_state': task_states.DELETING}) self._test_snapshot_deletes_image_on_failure( 'error', deleting_state_error) self.assertTrue(self.fake_image_delete_called) self._test_snapshot_deletes_image_on_failure( 'active', deleting_state_error) self.assertFalse(self.fake_image_delete_called) def test_snapshot_fails_with_instance_not_found(self): instance_not_found = exception.InstanceNotFound(instance_id='uuid') self._test_snapshot_deletes_image_on_failure( 'error', instance_not_found) self.assertTrue(self.fake_image_delete_called) self._test_snapshot_deletes_image_on_failure( 'active', instance_not_found) self.assertFalse(self.fake_image_delete_called) @mock.patch.object(compute_manager.LOG, 'warning') def 
test_snapshot_fails_with_instance_not_found_and_image_not_found(self, mock_warning): instance_not_found = exception.InstanceNotFound(instance_id='uuid') self._test_snapshot_deletes_image_on_failure( 'active', instance_not_found, image_not_exist=True) self.assertFalse(self.fake_image_delete_called) mock_warning.assert_not_called() def test_snapshot_fails_with_instance_not_running(self): instance_not_running = exception.InstanceNotRunning(instance_id='uuid') self._test_snapshot_deletes_image_on_failure( 'error', instance_not_running) self.assertTrue(self.fake_image_delete_called) self._test_snapshot_deletes_image_on_failure( 'active', instance_not_running) self.assertFalse(self.fake_image_delete_called) def test_snapshot_handles_cases_when_instance_is_deleted(self): inst_obj = self._get_snapshotting_instance() inst_obj.task_state = task_states.DELETING inst_obj.save() self.compute.snapshot_instance(self.context, image_id=uuids.snapshot, instance=inst_obj) def test_snapshot_handles_cases_when_instance_is_not_found(self): inst_obj = self._get_snapshotting_instance() inst_obj2 = objects.Instance.get_by_uuid(self.context, inst_obj.uuid) inst_obj2.destroy() self.compute.snapshot_instance(self.context, image_id=uuids.snapshot, instance=inst_obj) def _assert_state(self, state_dict): """Assert state of VM is equal to state passed as parameter.""" instances = db.instance_get_all(self.context) self.assertEqual(len(instances), 1) if 'vm_state' in state_dict: self.assertEqual(state_dict['vm_state'], instances[0]['vm_state']) if 'task_state' in state_dict: self.assertEqual(state_dict['task_state'], instances[0]['task_state']) if 'power_state' in state_dict: self.assertEqual(state_dict['power_state'], instances[0]['power_state']) @mock.patch('nova.image.glance.API.get_all') @mock.patch('nova.image.glance.API.delete') def test_rotate_backups(self, mock_delete, mock_get_all_images): instance = self._create_fake_instance_obj() instance_uuid = instance['uuid'] fake_images = [{ 'id': uuids.image_id_1, 'name': 'fake_name_1', 'status': 'active', 'properties': {'kernel_id': uuids.kernel_id_1, 'ramdisk_id': uuids.ramdisk_id_1, 'image_type': 'backup', 'backup_type': 'daily', 'instance_uuid': instance_uuid}, }, { 'id': uuids.image_id_2, 'name': 'fake_name_2', 'status': 'active', 'properties': {'kernel_id': uuids.kernel_id_2, 'ramdisk_id': uuids.ramdisk_id_2, 'image_type': 'backup', 'backup_type': 'daily', 'instance_uuid': instance_uuid}, }, { 'id': uuids.image_id_3, 'name': 'fake_name_3', 'status': 'active', 'properties': {'kernel_id': uuids.kernel_id_3, 'ramdisk_id': uuids.ramdisk_id_3, 'image_type': 'backup', 'backup_type': 'daily', 'instance_uuid': instance_uuid}, }] mock_get_all_images.return_value = fake_images mock_delete.side_effect = (exception.ImageNotFound( image_id=uuids.image_id_1), None) self.compute._rotate_backups(self.context, instance=instance, backup_type='daily', rotation=1) self.assertEqual(2, mock_delete.call_count) @mock.patch('nova.image.glance.API.get_all') def test_rotate_backups_with_image_delete_failed(self, mock_get_all_images): instance = self._create_fake_instance_obj() instance_uuid = instance['uuid'] fake_images = [{ 'id': uuids.image_id_1, 'created_at': timeutils.parse_strtime('2017-01-04T00:00:00.00'), 'name': 'fake_name_1', 'status': 'active', 'properties': {'kernel_id': uuids.kernel_id_1, 'ramdisk_id': uuids.ramdisk_id_1, 'image_type': 'backup', 'backup_type': 'daily', 'instance_uuid': instance_uuid}, }, { 'id': uuids.image_id_2, 'created_at': 
timeutils.parse_strtime('2017-01-03T00:00:00.00'), 'name': 'fake_name_2', 'status': 'active', 'properties': {'kernel_id': uuids.kernel_id_2, 'ramdisk_id': uuids.ramdisk_id_2, 'image_type': 'backup', 'backup_type': 'daily', 'instance_uuid': instance_uuid}, }, { 'id': uuids.image_id_3, 'created_at': timeutils.parse_strtime('2017-01-02T00:00:00.00'), 'name': 'fake_name_3', 'status': 'active', 'properties': {'kernel_id': uuids.kernel_id_3, 'ramdisk_id': uuids.ramdisk_id_3, 'image_type': 'backup', 'backup_type': 'daily', 'instance_uuid': instance_uuid}, }, { 'id': uuids.image_id_4, 'created_at': timeutils.parse_strtime('2017-01-01T00:00:00.00'), 'name': 'fake_name_4', 'status': 'active', 'properties': {'kernel_id': uuids.kernel_id_4, 'ramdisk_id': uuids.ramdisk_id_4, 'image_type': 'backup', 'backup_type': 'daily', 'instance_uuid': instance_uuid}, }] mock_get_all_images.return_value = fake_images def _check_image_id(context, image_id): self.assertIn(image_id, [uuids.image_id_2, uuids.image_id_3, uuids.image_id_4]) if image_id == uuids.image_id_3: raise Exception('fake %s delete exception' % image_id) if image_id == uuids.image_id_4: raise exception.ImageDeleteConflict(reason='image is in use') with mock.patch.object(nova.image.glance.API, 'delete', side_effect=_check_image_id) as mock_delete: # Fake images 4,3,2 should be rotated in sequence self.compute._rotate_backups(self.context, instance=instance, backup_type='daily', rotation=1) self.assertEqual(3, mock_delete.call_count) def test_console_output(self): # Make sure we can get console output from instance. instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) output = self.compute.get_console_output(self.context, instance=instance, tail_length=None) self.assertEqual('FAKE CONSOLE OUTPUT\nANOTHER\nLAST LINE', output) self.compute.terminate_instance(self.context, instance, []) def test_console_output_bytes(self): # Make sure we can get console output from instance. instance = self._create_fake_instance_obj() with mock.patch.object(self.compute, 'get_console_output') as mock_console_output: mock_console_output.return_value = b'Hello.' output = self.compute.get_console_output(self.context, instance=instance, tail_length=None) self.assertEqual(output, b'Hello.') self.compute.terminate_instance(self.context, instance, []) def test_console_output_tail(self): # Make sure we can get console output from instance. 
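# With tail_length=2 only the last two lines of the fake driver's canned
# console output ('ANOTHER' and 'LAST LINE') should be returned.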
instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) output = self.compute.get_console_output(self.context, instance=instance, tail_length=2) self.assertEqual('ANOTHER\nLAST LINE', output) self.compute.terminate_instance(self.context, instance, []) def test_console_output_not_implemented(self): def fake_not_implemented(*args, **kwargs): raise NotImplementedError() self.stub_out('nova.virt.fake.FakeDriver.get_console_output', fake_not_implemented) instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) self.assertRaises(messaging.ExpectedException, self.compute.get_console_output, self.context, instance, 0) self.compute = utils.ExceptionHelper(self.compute) self.assertRaises(NotImplementedError, self.compute.get_console_output, self.context, instance, 0) self.compute.terminate_instance(self.context, instance, []) def test_console_output_instance_not_found(self): def fake_not_found(*args, **kwargs): raise exception.InstanceNotFound(instance_id='fake-instance') self.stub_out('nova.virt.fake.FakeDriver.get_console_output', fake_not_found) instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) self.assertRaises(messaging.ExpectedException, self.compute.get_console_output, self.context, instance, 0) self.compute = utils.ExceptionHelper(self.compute) self.assertRaises(exception.InstanceNotFound, self.compute.get_console_output, self.context, instance, 0) self.compute.terminate_instance(self.context, instance, []) def test_novnc_vnc_console(self): # Make sure we can get a vnc console for an instance. self.flags(enabled=True, group='vnc') self.flags(enabled=False, group='spice') instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) # Try with the full instance console = self.compute.get_vnc_console(self.context, 'novnc', instance=instance) self.assertTrue(console) # Verify that the console auth has also been stored in the # database backend.
auth = objects.ConsoleAuthToken.validate(self.context, console['token']) self.assertIsNotNone(auth) self.compute.terminate_instance(self.context, instance, []) def test_validate_console_port_vnc(self): self.flags(enabled=True, group='vnc') self.flags(enabled=True, group='spice') instance = self._create_fake_instance_obj() def fake_driver_get_console(*args, **kwargs): return ctype.ConsoleVNC(host="fake_host", port=5900) self.stub_out("nova.virt.fake.FakeDriver.get_vnc_console", fake_driver_get_console) self.assertTrue(self.compute.validate_console_port( context=self.context, instance=instance, port="5900", console_type="novnc")) def test_validate_console_port_spice(self): self.flags(enabled=True, group='vnc') self.flags(enabled=True, group='spice') instance = self._create_fake_instance_obj() def fake_driver_get_console(*args, **kwargs): return ctype.ConsoleSpice(host="fake_host", port=5900, tlsPort=88) self.stub_out("nova.virt.fake.FakeDriver.get_spice_console", fake_driver_get_console) self.assertTrue(self.compute.validate_console_port( context=self.context, instance=instance, port="5900", console_type="spice-html5")) def test_validate_console_port_rdp(self): self.flags(enabled=True, group='rdp') instance = self._create_fake_instance_obj() def fake_driver_get_console(*args, **kwargs): return ctype.ConsoleRDP(host="fake_host", port=5900) self.stub_out("nova.virt.fake.FakeDriver.get_rdp_console", fake_driver_get_console) self.assertTrue(self.compute.validate_console_port( context=self.context, instance=instance, port="5900", console_type="rdp-html5")) def test_validate_console_port_serial(self): self.flags(enabled=True, group='serial_console') instance = self._create_fake_instance_obj() def fake_driver_get_console(*args, **kwargs): return ctype.ConsoleSerial(host="fake_host", port=5900) self.stub_out("nova.virt.fake.FakeDriver.get_serial_console", fake_driver_get_console) self.assertTrue(self.compute.validate_console_port( context=self.context, instance=instance, port="5900", console_type="serial")) def test_validate_console_port_mks(self): self.flags(enabled=True, group='mks') instance = self._create_fake_instance_obj() with mock.patch.object( self.compute.driver, 'get_mks_console') as mock_getmks: mock_getmks.return_value = ctype.ConsoleMKS(host="fake_host", port=5900) result = self.compute.validate_console_port(context=self.context, instance=instance, port="5900", console_type="webmks") self.assertTrue(result) def test_validate_console_port_wrong_port(self): self.flags(enabled=True, group='vnc') self.flags(enabled=True, group='spice') instance = self._create_fake_instance_obj() def fake_driver_get_console(*args, **kwargs): return ctype.ConsoleSpice(host="fake_host", port=5900, tlsPort=88) self.stub_out("nova.virt.fake.FakeDriver.get_vnc_console", fake_driver_get_console) self.assertFalse(self.compute.validate_console_port( context=self.context, instance=instance, port="wrongport", console_type="spice-html5")) def test_invalid_vnc_console_type(self): # Raise useful error if console type is an unrecognised string. 
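# Over RPC the failure surfaces as messaging.ExpectedException; wrapping the
# manager in utils.ExceptionHelper unwraps it so the underlying
# ConsoleTypeInvalid can also be asserted. The spice and rdp variants below
# follow the same pattern.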
self.flags(enabled=True, group='vnc') self.flags(enabled=False, group='spice') instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) self.assertRaises(messaging.ExpectedException, self.compute.get_vnc_console, self.context, 'invalid', instance=instance) self.compute = utils.ExceptionHelper(self.compute) self.assertRaises(exception.ConsoleTypeInvalid, self.compute.get_vnc_console, self.context, 'invalid', instance=instance) self.compute.terminate_instance(self.context, instance, []) def test_missing_vnc_console_type(self): # Raise useful error if console type is None. self.flags(enabled=True, group='vnc') self.flags(enabled=False, group='spice') instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) self.assertRaises(messaging.ExpectedException, self.compute.get_vnc_console, self.context, None, instance=instance) self.compute = utils.ExceptionHelper(self.compute) self.assertRaises(exception.ConsoleTypeInvalid, self.compute.get_vnc_console, self.context, None, instance=instance) self.compute.terminate_instance(self.context, instance, []) def test_get_vnc_console_not_implemented(self): self.stub_out('nova.virt.fake.FakeDriver.get_vnc_console', fake_not_implemented) instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) self.assertRaises(messaging.ExpectedException, self.compute.get_vnc_console, self.context, 'novnc', instance=instance) self.compute = utils.ExceptionHelper(self.compute) self.assertRaises(NotImplementedError, self.compute.get_vnc_console, self.context, 'novnc', instance=instance) self.compute.terminate_instance(self.context, instance, []) def test_spicehtml5_spice_console(self): # Make sure we can get a spice console for an instance. self.flags(enabled=False, group='vnc') self.flags(enabled=True, group='spice') instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) # Try with the full instance console = self.compute.get_spice_console(self.context, 'spice-html5', instance=instance) self.assertTrue(console) # Verify that the console auth has also been stored in the # database backend.
auth = objects.ConsoleAuthToken.validate(self.context, console['token']) self.assertIsNotNone(auth) self.compute.terminate_instance(self.context, instance, []) def test_invalid_spice_console_type(self): # Raise useful error if console type is an unrecognised string self.flags(enabled=False, group='vnc') self.flags(enabled=True, group='spice') instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) self.assertRaises(messaging.ExpectedException, self.compute.get_spice_console, self.context, 'invalid', instance=instance) self.compute = utils.ExceptionHelper(self.compute) self.assertRaises(exception.ConsoleTypeInvalid, self.compute.get_spice_console, self.context, 'invalid', instance=instance) self.compute.terminate_instance(self.context, instance, []) def test_get_spice_console_not_implemented(self): self.stub_out('nova.virt.fake.FakeDriver.get_spice_console', fake_not_implemented) self.flags(enabled=False, group='vnc') self.flags(enabled=True, group='spice') instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) self.assertRaises(messaging.ExpectedException, self.compute.get_spice_console, self.context, 'spice-html5', instance=instance) self.compute = utils.ExceptionHelper(self.compute) self.assertRaises(NotImplementedError, self.compute.get_spice_console, self.context, 'spice-html5', instance=instance) self.compute.terminate_instance(self.context, instance, []) def test_missing_spice_console_type(self): # Raise useful error if console type is None self.flags(enabled=False, group='vnc') self.flags(enabled=True, group='spice') instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) self.assertRaises(messaging.ExpectedException, self.compute.get_spice_console, self.context, None, instance=instance) self.compute = utils.ExceptionHelper(self.compute) self.assertRaises(exception.ConsoleTypeInvalid, self.compute.get_spice_console, self.context, None, instance=instance) self.compute.terminate_instance(self.context, instance, []) def test_rdphtml5_rdp_console(self): # Make sure we can get an rdp console for an instance. self.flags(enabled=False, group='vnc') self.flags(enabled=True, group='rdp') instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) # Try with the full instance console = self.compute.get_rdp_console(self.context, 'rdp-html5', instance=instance) self.assertTrue(console) # Verify that the console auth has also been stored in the # database backend.
auth = objects.ConsoleAuthToken.validate(self.context, console['token']) self.assertIsNotNone(auth) self.compute.terminate_instance(self.context, instance, []) def test_invalid_rdp_console_type(self): # Raise useful error if console type is an unrecognised string self.flags(enabled=False, group='vnc') self.flags(enabled=True, group='rdp') instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) self.assertRaises(messaging.ExpectedException, self.compute.get_rdp_console, self.context, 'invalid', instance=instance) self.compute = utils.ExceptionHelper(self.compute) self.assertRaises(exception.ConsoleTypeInvalid, self.compute.get_rdp_console, self.context, 'invalid', instance=instance) self.compute.terminate_instance(self.context, instance, []) def test_missing_rdp_console_type(self): # Raise useful error is console type is None self.flags(enabled=False, group='vnc') self.flags(enabled=True, group='rdp') instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) self.assertRaises(messaging.ExpectedException, self.compute.get_rdp_console, self.context, None, instance=instance) self.compute = utils.ExceptionHelper(self.compute) self.assertRaises(exception.ConsoleTypeInvalid, self.compute.get_rdp_console, self.context, None, instance=instance) self.compute.terminate_instance(self.context, instance, []) def test_vnc_console_instance_not_ready(self): self.flags(enabled=True, group='vnc') self.flags(enabled=False, group='spice') instance = self._create_fake_instance_obj( params={'vm_state': vm_states.BUILDING}) def fake_driver_get_console(*args, **kwargs): raise exception.InstanceNotFound(instance_id=instance['uuid']) self.stub_out("nova.virt.fake.FakeDriver.get_vnc_console", fake_driver_get_console) self.compute = utils.ExceptionHelper(self.compute) self.assertRaises(exception.InstanceNotReady, self.compute.get_vnc_console, self.context, 'novnc', instance=instance) def test_spice_console_instance_not_ready(self): self.flags(enabled=False, group='vnc') self.flags(enabled=True, group='spice') instance = self._create_fake_instance_obj( params={'vm_state': vm_states.BUILDING}) def fake_driver_get_console(*args, **kwargs): raise exception.InstanceNotFound(instance_id=instance['uuid']) self.stub_out("nova.virt.fake.FakeDriver.get_spice_console", fake_driver_get_console) self.compute = utils.ExceptionHelper(self.compute) self.assertRaises(exception.InstanceNotReady, self.compute.get_spice_console, self.context, 'spice-html5', instance=instance) def test_rdp_console_instance_not_ready(self): self.flags(enabled=False, group='vnc') self.flags(enabled=True, group='rdp') instance = self._create_fake_instance_obj( params={'vm_state': vm_states.BUILDING}) def fake_driver_get_console(*args, **kwargs): raise exception.InstanceNotFound(instance_id=instance['uuid']) self.stub_out("nova.virt.fake.FakeDriver.get_rdp_console", fake_driver_get_console) self.compute = utils.ExceptionHelper(self.compute) self.assertRaises(exception.InstanceNotReady, self.compute.get_rdp_console, self.context, 'rdp-html5', instance=instance) def test_vnc_console_disabled(self): self.flags(enabled=False, group='vnc') instance = self._create_fake_instance_obj( params={'vm_state': vm_states.BUILDING}) self.compute = utils.ExceptionHelper(self.compute) self.assertRaises(exception.ConsoleTypeUnavailable, self.compute.get_vnc_console, self.context, 'novnc', instance=instance) def 
test_spice_console_disabled(self): self.flags(enabled=False, group='spice') instance = self._create_fake_instance_obj( params={'vm_state': vm_states.BUILDING}) self.compute = utils.ExceptionHelper(self.compute) self.assertRaises(exception.ConsoleTypeUnavailable, self.compute.get_spice_console, self.context, 'spice-html5', instance=instance) def test_rdp_console_disabled(self): self.flags(enabled=False, group='rdp') instance = self._create_fake_instance_obj( params={'vm_state': vm_states.BUILDING}) self.compute = utils.ExceptionHelper(self.compute) self.assertRaises(exception.ConsoleTypeUnavailable, self.compute.get_rdp_console, self.context, 'rdp-html5', instance=instance) def test_diagnostics(self): # Make sure we can get diagnostics for an instance. expected_diagnostic = {'cpu0_time': 17300000000, 'memory': 524288, 'vda_errors': -1, 'vda_read': 262144, 'vda_read_req': 112, 'vda_write': 5778432, 'vda_write_req': 488, 'vnet1_rx': 2070139, 'vnet1_rx_drop': 0, 'vnet1_rx_errors': 0, 'vnet1_rx_packets': 26701, 'vnet1_tx': 140208, 'vnet1_tx_drop': 0, 'vnet1_tx_errors': 0, 'vnet1_tx_packets': 662, } instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) diagnostics = self.compute.get_diagnostics(self.context, instance=instance) self.assertEqual(diagnostics, expected_diagnostic) self.compute.terminate_instance(self.context, instance, []) def test_instance_diagnostics(self): # Make sure we can get diagnostics for an instance. instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) diagnostics = self.compute.get_instance_diagnostics(self.context, instance=instance) expected = fake_diagnostics.fake_diagnostics_obj( config_drive=True, cpu_details=[{'id': 0, 'time': 17300000000, 'utilisation': 15}], disk_details=[{'errors_count': 1, 'read_bytes': 262144, 'read_requests': 112, 'write_bytes': 5778432, 'write_requests': 488}], driver='libvirt', hypervisor='kvm', hypervisor_os='ubuntu', memory_details={'maximum': 524288, 'used': 0}, nic_details=[{'mac_address': '01:23:45:67:89:ab', 'rx_octets': 2070139, 'rx_errors': 100, 'rx_drop': 200, 'rx_packets': 26701, 'rx_rate': 300, 'tx_octets': 140208, 'tx_errors': 400, 'tx_drop': 500, 'tx_packets': 662, 'tx_rate': 600}], state='running', uptime=46664) self.assertDiagnosticsEqual(expected, diagnostics) self.compute.terminate_instance(self.context, instance, []) def test_add_fixed_ip_usage_notification(self): def dummy(*args, **kwargs): pass self.stub_out('nova.compute.manager.ComputeManager.' 'inject_network_info', dummy) self.stub_out('nova.compute.manager.ComputeManager.' 'reset_network', dummy) instance = self._create_fake_instance_obj() self.assertEqual(len(fake_notifier.NOTIFICATIONS), 0) with mock.patch.object(self.compute.network_api, 'add_fixed_ip_to_instance', dummy): self.compute.add_fixed_ip_to_instance(self.context, network_id=1, instance=instance) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 2) self.compute.terminate_instance(self.context, instance, []) def test_remove_fixed_ip_usage_notification(self): def dummy(*args, **kwargs): pass self.stub_out('nova.compute.manager.ComputeManager.' 'inject_network_info', dummy) self.stub_out('nova.compute.manager.ComputeManager.' 
'reset_network', dummy) instance = self._create_fake_instance_obj() self.assertEqual(len(fake_notifier.NOTIFICATIONS), 0) with mock.patch.object(self.compute.network_api, 'remove_fixed_ip_from_instance', dummy): self.compute.remove_fixed_ip_from_instance(self.context, 1, instance=instance) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 2) self.compute.terminate_instance(self.context, instance, []) def test_run_instance_usage_notification(self, request_spec=None): # Ensure run instance generates appropriate usage notification. request_spec = request_spec or objects.RequestSpec( image=objects.ImageMeta(), requested_resources=[]) instance = self._create_fake_instance_obj() expected_image_name = (request_spec.image.name if 'name' in request_spec.image else '') self.compute.build_and_run_instance(self.context, instance, request_spec=request_spec, filter_properties={}, image={'name': expected_image_name}, block_device_mapping=[]) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 2) instance.refresh() msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.event_type, 'compute.instance.create.start') # The last event is the one with the sugar in it. msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual(msg.priority, 'INFO') self.assertEqual(msg.event_type, 'compute.instance.create.end') payload = msg.payload self.assertEqual(payload['tenant_id'], self.project_id) self.assertEqual(payload['user_id'], self.user_id) self.assertEqual(payload['instance_id'], instance['uuid']) self.assertEqual(payload['instance_type'], 'm1.tiny') self.assertEqual(str(self.tiny_flavor.id), str(payload['instance_type_id'])) self.assertEqual(str(self.tiny_flavor.flavorid), str(payload['instance_flavor_id'])) self.assertEqual(payload['state'], 'active') self.assertIn('display_name', payload) self.assertIn('created_at', payload) self.assertIn('launched_at', payload) self.assertIn('fixed_ips', payload) self.assertTrue(payload['launched_at']) image_ref_url = self.image_api.generate_image_url(FAKE_IMAGE_REF, self.context) self.assertEqual(payload['image_ref_url'], image_ref_url) self.assertEqual('Success', payload['message']) self.compute.terminate_instance(self.context, instance, []) def test_run_instance_image_usage_notification(self): request_spec = objects.RequestSpec( image=objects.ImageMeta(name='fake_name', key='value'), requested_resources=[]) self.test_run_instance_usage_notification(request_spec=request_spec) def test_run_instance_usage_notification_volume_meta(self): # Volume's image metadata won't contain the image name request_spec = objects.RequestSpec( image=objects.ImageMeta(key='value'), requested_resources=[]) self.test_run_instance_usage_notification(request_spec=request_spec) def test_run_instance_end_notification_on_abort(self): # Test that an error notif is sent if the build is aborted instance = self._create_fake_instance_obj() instance_uuid = instance['uuid'] def build_inst_abort(*args, **kwargs): raise exception.BuildAbortException(reason="already deleted", instance_uuid=instance_uuid) self.stub_out('nova.virt.fake.FakeDriver.spawn', build_inst_abort) self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) self.assertGreaterEqual(len(fake_notifier.NOTIFICATIONS), 2) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.event_type, 'compute.instance.create.start') msg = fake_notifier.NOTIFICATIONS[-1] self.assertEqual(msg.event_type, 'compute.instance.create.error') self.assertEqual('ERROR', msg.priority) payload = msg.payload message = 
payload['message'] self.assertNotEqual(-1, message.find("already deleted")) def test_run_instance_error_notification_on_reschedule(self): # Test that error notif is sent if the build got rescheduled instance = self._create_fake_instance_obj() instance_uuid = instance['uuid'] def build_inst_fail(*args, **kwargs): raise exception.RescheduledException(instance_uuid=instance_uuid, reason="something bad happened") self.stub_out('nova.virt.fake.FakeDriver.spawn', build_inst_fail) self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) self.assertGreaterEqual(len(fake_notifier.NOTIFICATIONS), 2) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.event_type, 'compute.instance.create.start') msg = fake_notifier.NOTIFICATIONS[-1] self.assertEqual(msg.event_type, 'compute.instance.create.error') self.assertEqual('ERROR', msg.priority) payload = msg.payload message = payload['message'] self.assertNotEqual(-1, message.find("something bad happened")) def test_run_instance_error_notification_on_failure(self): # Test that error notif is sent if build fails hard instance = self._create_fake_instance_obj() def build_inst_fail(*args, **kwargs): raise test.TestingException("i'm dying") self.stub_out('nova.virt.fake.FakeDriver.spawn', build_inst_fail) self.compute.build_and_run_instance( self.context, instance, {}, {}, {}, block_device_mapping=[]) self.assertGreaterEqual(len(fake_notifier.NOTIFICATIONS), 2) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.event_type, 'compute.instance.create.start') msg = fake_notifier.NOTIFICATIONS[-1] self.assertEqual(msg.event_type, 'compute.instance.create.error') self.assertEqual('ERROR', msg.priority) payload = msg.payload message = payload['message'] # The fault message does not contain the exception value, only the # class name. self.assertEqual(-1, message.find("i'm dying")) self.assertIn('TestingException', message) def test_terminate_usage_notification(self): # Ensure terminate_instance generates correct usage notification. 
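# fake_notifier is cleared after the boot and the clock is advanced, so the
# four notifications checked below (delete.start, shutdown.start,
# shutdown.end, delete.end) all come from terminate_instance and carry the
# advanced terminated_at/deleted_at timestamps.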
old_time = datetime.datetime(2012, 4, 1) cur_time = datetime.datetime(2012, 12, 21, 12, 21) time_fixture = self.useFixture(utils_fixture.TimeFixture(old_time)) instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) fake_notifier.NOTIFICATIONS = [] time_fixture.advance_time_delta(cur_time - old_time) self.compute.terminate_instance(self.context, instance, []) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 4) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.priority, 'INFO') self.assertEqual(msg.event_type, 'compute.instance.delete.start') msg1 = fake_notifier.NOTIFICATIONS[1] self.assertEqual(msg1.event_type, 'compute.instance.shutdown.start') msg1 = fake_notifier.NOTIFICATIONS[2] self.assertEqual(msg1.event_type, 'compute.instance.shutdown.end') msg1 = fake_notifier.NOTIFICATIONS[3] self.assertEqual(msg1.event_type, 'compute.instance.delete.end') payload = msg1.payload self.assertEqual(payload['tenant_id'], self.project_id) self.assertEqual(payload['user_id'], self.user_id) self.assertEqual(payload['instance_id'], instance['uuid']) self.assertEqual(payload['instance_type'], 'm1.tiny') self.assertEqual(str(self.tiny_flavor.id), str(payload['instance_type_id'])) self.assertEqual(str(self.tiny_flavor.flavorid), str(payload['instance_flavor_id'])) self.assertIn('display_name', payload) self.assertIn('created_at', payload) self.assertIn('launched_at', payload) self.assertIn('terminated_at', payload) self.assertIn('deleted_at', payload) self.assertEqual(payload['terminated_at'], utils.strtime(cur_time)) self.assertEqual(payload['deleted_at'], utils.strtime(cur_time)) image_ref_url = self.image_api.generate_image_url(FAKE_IMAGE_REF, self.context) self.assertEqual(payload['image_ref_url'], image_ref_url) def test_instance_set_to_error_on_uncaught_exception(self): # Test that instance is set to error state when exception is raised. 
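# allocate_for_instance is patched to raise messaging.RemoteError mid-build;
# only the resulting vm_state is asserted, and deallocate_for_instance is
# patched out so the cleanup path does not hit the real network API.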
        instance = self._create_fake_instance_obj()

        fake_network.unset_stub_network_methods(self)

        @mock.patch.object(self.compute.network_api, 'allocate_for_instance',
                           side_effect=messaging.RemoteError())
        @mock.patch.object(self.compute.network_api,
                           'deallocate_for_instance')
        def _do_test(mock_deallocate, mock_allocate):
            self.compute.build_and_run_instance(self.context, instance, {},
                    {}, {}, block_device_mapping=[])

            instance.refresh()
            self.assertEqual(vm_states.ERROR, instance.vm_state)

            self.compute.terminate_instance(self.context, instance, [])

        _do_test()

    @mock.patch.object(fake.FakeDriver, 'destroy')
    def test_delete_instance_keeps_net_on_power_off_fail(self, mock_destroy):
        exp = exception.InstancePowerOffFailure(reason='')
        mock_destroy.side_effect = exp
        instance = self._create_fake_instance_obj()

        self.assertRaises(exception.InstancePowerOffFailure,
                          self.compute._delete_instance,
                          self.context, instance, [])
        mock_destroy.assert_called_once_with(mock.ANY, mock.ANY, mock.ANY,
                                             mock.ANY)

    @mock.patch.object(compute_manager.ComputeManager, '_deallocate_network')
    @mock.patch.object(fake.FakeDriver, 'destroy')
    def test_delete_instance_loses_net_on_other_fail(self, mock_destroy,
                                                     mock_deallocate):
        exp = test.TestingException()
        mock_destroy.side_effect = exp
        instance = self._create_fake_instance_obj()

        self.assertRaises(test.TestingException,
                          self.compute._delete_instance,
                          self.context, instance, [])
        mock_destroy.assert_called_once_with(mock.ANY, mock.ANY, mock.ANY,
                                             mock.ANY)
        mock_deallocate.assert_called_once_with(mock.ANY, mock.ANY, mock.ANY)

    def test_delete_instance_changes_power_state(self):
        """Test that the power state is NOSTATE after deleting an
        instance."""
        instance = self._create_fake_instance_obj()
        self.compute._delete_instance(self.context, instance, [])
        self.assertEqual(power_state.NOSTATE, instance.power_state)

    def test_instance_termination_exception_sets_error(self):
        """Test that we handle InstanceTerminationFailure which is propagated
        up from the underlying driver.
        """
        instance = self._create_fake_instance_obj()

        def fake_delete_instance(self, context, instance, bdms):
            raise exception.InstanceTerminationFailure(reason='')

        self.stub_out('nova.compute.manager.ComputeManager._delete_instance',
                      fake_delete_instance)

        self.assertRaises(exception.InstanceTerminationFailure,
                          self.compute.terminate_instance,
                          self.context, instance, [])
        instance = db.instance_get_by_uuid(self.context, instance['uuid'])
        self.assertEqual(instance['vm_state'], vm_states.ERROR)

    @mock.patch.object(compute_manager.ComputeManager, '_prep_block_device')
    def test_network_is_deallocated_on_spawn_failure(self, mock_prep):
        # When a spawn fails the network must be deallocated.
instance = self._create_fake_instance_obj() mock_prep.side_effect = messaging.RemoteError('', '', '') self.compute.build_and_run_instance( self.context, instance, {}, {}, {}, block_device_mapping=[]) self.compute.terminate_instance(self.context, instance, []) mock_prep.assert_called_once_with(mock.ANY, mock.ANY, mock.ANY) def _test_state_revert(self, instance, operation, pre_task_state, kwargs=None, vm_state=None): if kwargs is None: kwargs = {} # The API would have set task_state, so do that here to test # that the state gets reverted on failure db.instance_update(self.context, instance['uuid'], {"task_state": pre_task_state}) orig_elevated = self.context.elevated orig_notify = self.compute._notify_about_instance_usage def _get_an_exception(*args, **kwargs): raise test.TestingException() self.stub_out('nova.context.RequestContext.elevated', _get_an_exception) self.stub_out('nova.compute.manager.ComputeManager.' '_notify_about_instance_usage', _get_an_exception) self.stub_out('nova.compute.utils.' 'notify_about_instance_delete', _get_an_exception) func = getattr(self.compute, operation) self.assertRaises(test.TestingException, func, self.context, instance=instance, **kwargs) # self.context.elevated() is called in tearDown() self.stub_out('nova.context.RequestContext.elevated', orig_elevated) self.stub_out('nova.compute.manager.ComputeManager.' '_notify_about_instance_usage', orig_notify) # Fetch the instance's task_state and make sure it reverted to None. instance = db.instance_get_by_uuid(self.context, instance['uuid']) if vm_state: self.assertEqual(instance.vm_state, vm_state) self.assertIsNone(instance["task_state"]) def test_state_revert(self): # ensure that task_state is reverted after a failed operation. migration = objects.Migration(context=self.context.elevated()) migration.instance_uuid = 'b48316c5-71e8-45e4-9884-6c78055b9b13' migration.uuid = uuids.migration_uuid migration.new_instance_type_id = '1' instance_type = objects.Flavor() actions = [ ("reboot_instance", task_states.REBOOTING, {'block_device_info': [], 'reboot_type': 'SOFT'}), ("stop_instance", task_states.POWERING_OFF, {'clean_shutdown': True}), ("start_instance", task_states.POWERING_ON), ("terminate_instance", task_states.DELETING, {'bdms': []}, vm_states.ERROR), ("soft_delete_instance", task_states.SOFT_DELETING, {}), ("restore_instance", task_states.RESTORING), ("rebuild_instance", task_states.REBUILDING, {'orig_image_ref': None, 'image_ref': None, 'injected_files': [], 'new_pass': '', 'orig_sys_metadata': {}, 'bdms': [], 'recreate': False, 'preserve_ephemeral': False, 'migration': None, 'scheduled_node': None, 'limits': {}, 'request_spec': None, 'on_shared_storage': False}), ("set_admin_password", task_states.UPDATING_PASSWORD, {'new_pass': None}), ("rescue_instance", task_states.RESCUING, {'rescue_password': None, 'rescue_image_ref': None, 'clean_shutdown': True}), ("unrescue_instance", task_states.UNRESCUING), ("revert_resize", task_states.RESIZE_REVERTING, {'migration': migration, 'request_spec': {}}), ("prep_resize", task_states.RESIZE_PREP, {'image': {}, 'instance_type': instance_type, 'request_spec': {}, 'filter_properties': {}, 'node': None, 'migration': None, 'host_list': [], 'clean_shutdown': True}), ("resize_instance", task_states.RESIZE_PREP, {'migration': migration, 'image': {}, 'instance_type': {}, 'clean_shutdown': True, 'request_spec': {}}), ("pause_instance", task_states.PAUSING), ("unpause_instance", task_states.UNPAUSING), ("suspend_instance", task_states.SUSPENDING), ("resume_instance", 
task_states.RESUMING), ] instance = self._create_fake_instance_obj() for operation in actions: if 'revert_resize' in operation: migration.source_compute = 'fake-mini' migration.source_node = 'fake-mini' def fake_migration_save(*args, **kwargs): raise test.TestingException() self.stub_out('nova.objects.migration.Migration.save', fake_migration_save) self._test_state_revert(instance, *operation) def test_quotas_successful_delete(self): instance = self._create_fake_instance_obj() self.compute.terminate_instance(self.context, instance, bdms=[]) def test_quotas_failed_delete(self): instance = self._create_fake_instance_obj() def fake_shutdown_instance(*args, **kwargs): raise test.TestingException() self.stub_out('nova.compute.manager.ComputeManager._shutdown_instance', fake_shutdown_instance) self.assertRaises(test.TestingException, self.compute.terminate_instance, self.context, instance, bdms=[]) def test_quotas_successful_soft_delete(self): instance = self._create_fake_instance_obj( params=dict(task_state=task_states.SOFT_DELETING)) self.compute.soft_delete_instance(self.context, instance) def test_quotas_failed_soft_delete(self): instance = self._create_fake_instance_obj( params=dict(task_state=task_states.SOFT_DELETING)) def fake_soft_delete(*args, **kwargs): raise test.TestingException() self.stub_out('nova.virt.fake.FakeDriver.soft_delete', fake_soft_delete) self.assertRaises(test.TestingException, self.compute.soft_delete_instance, self.context, instance) def test_quotas_destroy_of_soft_deleted_instance(self): instance = self._create_fake_instance_obj( params=dict(vm_state=vm_states.SOFT_DELETED)) self.compute.terminate_instance(self.context, instance, bdms=[]) def _test_finish_resize(self, power_on, resize_instance=True): # Contrived test to ensure finish_resize doesn't raise anything and # also tests resize from ACTIVE or STOPPED state which determines # if the resized instance is powered on or not. vm_state = None if power_on: vm_state = vm_states.ACTIVE else: vm_state = vm_states.STOPPED params = {'vm_state': vm_state} instance = self._create_fake_instance_obj(params) image = {} disk_info = 'fake-disk-info' instance_type = self.default_flavor if not resize_instance: old_instance_type = self.tiny_flavor instance_type['root_gb'] = old_instance_type['root_gb'] instance_type['swap'] = old_instance_type['swap'] instance_type['ephemeral_gb'] = old_instance_type['ephemeral_gb'] instance.task_state = task_states.RESIZE_PREP instance.save() self.compute.prep_resize(self.context, instance=instance, instance_type=instance_type, image={}, request_spec={}, filter_properties={}, node=None, migration=None, clean_shutdown=True, host_list=[]) instance.task_state = task_states.RESIZE_MIGRATED instance.save() # NOTE(mriedem): make sure prep_resize set old_vm_state correctly sys_meta = instance.system_metadata self.assertIn('old_vm_state', sys_meta) if power_on: self.assertEqual(vm_states.ACTIVE, sys_meta['old_vm_state']) else: self.assertEqual(vm_states.STOPPED, sys_meta['old_vm_state']) migration = objects.Migration.get_by_instance_and_status( self.context.elevated(), instance.uuid, 'pre-migrating') orig_mig_save = migration.save orig_inst_save = instance.save network_api = self.compute.network_api # Three fake BDMs: # 1. volume BDM with an attachment_id which will be updated/completed # 2. volume BDM without an attachment_id so it's not updated # 3. 
non-volume BDM so it's not updated fake_bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping(destination_type='volume', attachment_id=uuids.attachment_id, device_name='/dev/vdb'), objects.BlockDeviceMapping(destination_type='volume', attachment_id=None), objects.BlockDeviceMapping(destination_type='local') ]) with test.nested( mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid', return_value=fake_bdms), mock.patch.object(network_api, 'setup_networks_on_host'), mock.patch.object(network_api, 'migrate_instance_finish'), mock.patch.object(self.compute.network_api, 'get_instance_nw_info'), mock.patch.object(self.compute, '_notify_about_instance_usage'), mock.patch.object(compute_utils, 'notify_about_instance_action'), mock.patch.object(self.compute.driver, 'finish_migration'), mock.patch.object(self.compute, '_get_instance_block_device_info'), mock.patch.object(migration, 'save'), mock.patch.object(instance, 'save'), mock.patch.object(self.compute.driver, 'get_volume_connector'), mock.patch.object(self.compute.volume_api, 'attachment_update'), mock.patch.object(self.compute.volume_api, 'attachment_complete') ) as (mock_get_bdm, mock_setup, mock_net_mig, mock_get_nw, mock_notify, mock_notify_action, mock_virt_mig, mock_get_blk, mock_mig_save, mock_inst_save, mock_get_vol_connector, mock_attachment_update, mock_attachment_complete): def _mig_save(): self.assertEqual(migration.status, 'finished') self.assertEqual(vm_state, instance.vm_state) self.assertEqual(task_states.RESIZE_FINISH, instance.task_state) self.assertTrue(migration._context.is_admin) orig_mig_save() def _instance_save0(expected_task_state=None): self.assertEqual(task_states.RESIZE_MIGRATED, expected_task_state) self.assertEqual(instance_type['id'], instance.instance_type_id) self.assertEqual(task_states.RESIZE_FINISH, instance.task_state) orig_inst_save(expected_task_state=expected_task_state) def _instance_save1(expected_task_state=None): self.assertEqual(task_states.RESIZE_FINISH, expected_task_state) self.assertEqual(vm_states.RESIZED, instance.vm_state) self.assertIsNone(instance.task_state) self.assertIn('launched_at', instance.obj_what_changed()) orig_inst_save(expected_task_state=expected_task_state) mock_get_nw.return_value = 'fake-nwinfo1' mock_get_blk.return_value = 'fake-bdminfo' inst_call_list = [] # First save to update old/current flavor and task state exp_kwargs = dict(expected_task_state=task_states.RESIZE_MIGRATED) inst_call_list.append(mock.call(**exp_kwargs)) mock_inst_save.side_effect = [_instance_save0] # Ensure instance status updates is after the migration finish mock_mig_save.side_effect = _mig_save exp_kwargs = dict(expected_task_state=task_states.RESIZE_FINISH) inst_call_list.append(mock.call(**exp_kwargs)) mock_inst_save.side_effect = chain(mock_inst_save.side_effect, [_instance_save1]) self.compute.finish_resize(self.context, migration=migration, disk_info=disk_info, image=image, instance=instance, request_spec=objects.RequestSpec()) mock_setup.assert_called_once_with(self.context, instance, 'fake-mini') mock_net_mig.assert_called_once_with(self.context, test.MatchType(objects.Instance), migration, None) mock_get_nw.assert_called_once_with(self.context, instance) mock_notify.assert_has_calls([ mock.call(self.context, instance, 'finish_resize.start', network_info='fake-nwinfo1'), mock.call(self.context, instance, 'finish_resize.end', network_info='fake-nwinfo1')]) mock_notify_action.assert_has_calls([ mock.call(self.context, instance, 'fake-mini', 
action='resize_finish', phase='start', bdms=fake_bdms), mock.call(self.context, instance, 'fake-mini', action='resize_finish', phase='end', bdms=fake_bdms)]) # nova.conf sets the default flavor to m1.small and the test # sets the default flavor to m1.tiny so they should be different # which makes this a resize mock_virt_mig.assert_called_once_with(self.context, migration, instance, disk_info, 'fake-nwinfo1', test.MatchType(objects.ImageMeta), resize_instance, mock.ANY, 'fake-bdminfo', power_on) mock_get_blk.assert_called_once_with(self.context, instance, refresh_conn_info=True, bdms=fake_bdms) mock_inst_save.assert_has_calls(inst_call_list) mock_mig_save.assert_called_once_with() # We should only have one attachment_update/complete call for the # volume BDM that had an attachment. mock_attachment_update.assert_called_once_with( self.context, uuids.attachment_id, mock_get_vol_connector.return_value, '/dev/vdb') mock_attachment_complete.assert_called_once_with( self.context, uuids.attachment_id) self.mock_get_allocs.assert_called_once_with(self.context, instance.uuid) def test_finish_resize_from_active(self): self._test_finish_resize(power_on=True) def test_finish_resize_from_stopped(self): self._test_finish_resize(power_on=False) def test_finish_resize_without_resize_instance(self): self._test_finish_resize(power_on=True, resize_instance=False) def test_finish_resize_with_volumes(self): """Contrived test to ensure finish_resize doesn't raise anything.""" # create instance instance = self._create_fake_instance_obj() request_spec = objects.RequestSpec() # create volume volume = {'instance_uuid': None, 'device_name': None, 'id': uuids.volume, 'size': 200, 'attach_status': 'detached'} bdm = objects.BlockDeviceMapping( **{'context': self.context, 'source_type': 'volume', 'destination_type': 'volume', 'volume_id': uuids.volume, 'instance_uuid': instance['uuid'], 'device_name': '/dev/vdc'}) bdm.create() # stub out volume attach def fake_volume_get(self, context, volume_id, microversion=None): return volume self.stub_out('nova.volume.cinder.API.get', fake_volume_get) def fake_volume_check_availability_zone(self, context, volume_id, instance): pass self.stub_out('nova.volume.cinder.API.check_availability_zone', fake_volume_check_availability_zone) def fake_get_volume_encryption_metadata(self, context, volume_id): return {} self.stub_out('nova.volume.cinder.API.get_volume_encryption_metadata', fake_get_volume_encryption_metadata) orig_connection_data = { 'target_discovered': True, 'target_iqn': 'iqn.2010-10.org.openstack:%s.1' % uuids.volume_id, 'target_portal': '127.0.0.0.1:3260', 'volume_id': uuids.volume_id, } connection_info = { 'driver_volume_type': 'iscsi', 'data': orig_connection_data, } def fake_init_conn(self, context, volume_id, session): return connection_info self.stub_out('nova.volume.cinder.API.initialize_connection', fake_init_conn) def fake_attach(self, context, volume_id, instance_uuid, device_name, mode='rw'): volume['instance_uuid'] = instance_uuid volume['device_name'] = device_name self.stub_out('nova.volume.cinder.API.attach', fake_attach) # stub out virt driver attach def fake_get_volume_connector(*args, **kwargs): return {} self.stub_out('nova.virt.fake.FakeDriver.get_volume_connector', fake_get_volume_connector) def fake_attach_volume(*args, **kwargs): pass self.stub_out('nova.virt.fake.FakeDriver.attach_volume', fake_attach_volume) # attach volume to instance self.compute.attach_volume(self.context, instance, bdm) # assert volume attached correctly 
self.assertEqual(volume['device_name'], '/dev/vdc') disk_info = db.block_device_mapping_get_all_by_instance( self.context, instance.uuid) self.assertEqual(len(disk_info), 1) for bdm in disk_info: self.assertEqual(bdm['device_name'], volume['device_name']) self.assertEqual(bdm['connection_info'], jsonutils.dumps(connection_info)) # begin resize instance_type = self.default_flavor instance.task_state = task_states.RESIZE_PREP instance.save() self.compute.prep_resize(self.context, instance=instance, instance_type=instance_type, image={}, request_spec=request_spec, filter_properties={}, node=None, clean_shutdown=True, migration=None, host_list=[]) # fake out detach for prep_resize (and later terminate) def fake_terminate_connection(self, context, volume, connector): connection_info['data'] = None self.stub_out('nova.volume.cinder.API.terminate_connection', fake_terminate_connection) migration = objects.Migration.get_by_instance_and_status( self.context.elevated(), instance.uuid, 'pre-migrating') self.compute.resize_instance(self.context, instance=instance, migration=migration, image={}, instance_type=jsonutils.to_primitive(instance_type), clean_shutdown=True, request_spec=request_spec) # assert bdm is unchanged disk_info = db.block_device_mapping_get_all_by_instance( self.context, instance.uuid) self.assertEqual(len(disk_info), 1) for bdm in disk_info: self.assertEqual(bdm['device_name'], volume['device_name']) cached_connection_info = jsonutils.loads(bdm['connection_info']) self.assertEqual(cached_connection_info['data'], orig_connection_data) # but connection was terminated self.assertIsNone(connection_info['data']) # stub out virt driver finish_migration def fake(*args, **kwargs): pass self.stub_out('nova.virt.fake.FakeDriver.finish_migration', fake) instance.task_state = task_states.RESIZE_MIGRATED instance.save() # new initialize connection new_connection_data = dict(orig_connection_data) new_iqn = 'iqn.2010-10.org.openstack:%s.2' % uuids.volume_id, new_connection_data['target_iqn'] = new_iqn def fake_init_conn_with_data(self, context, volume, session): connection_info['data'] = new_connection_data return connection_info self.stub_out('nova.volume.cinder.API.initialize_connection', fake_init_conn_with_data) self.compute.finish_resize(self.context, migration=migration, disk_info={}, image={}, instance=instance, request_spec=request_spec) # assert volume attached correctly disk_info = db.block_device_mapping_get_all_by_instance( self.context, instance['uuid']) self.assertEqual(len(disk_info), 1) for bdm in disk_info: self.assertEqual(bdm['connection_info'], jsonutils.dumps(connection_info)) # stub out detach def fake_detach(self, context, volume_uuid): volume['device_path'] = None volume['instance_uuid'] = None self.stub_out('nova.volume.cinder.API.detach', fake_detach) # clean up self.compute.terminate_instance(self.context, instance, []) def test_finish_resize_handles_error(self): # Make sure we don't leave the instance in RESIZE on error. 
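        # The driver's finish_migration is stubbed to raise, so the resize
        # should fail: the instance should be put into ERROR and its original
        # flavor (m1.tiny) should be restored, as asserted below.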
        def throw_up(*args, **kwargs):
            raise test.TestingException()

        self.stub_out('nova.virt.fake.FakeDriver.finish_migration', throw_up)

        old_flavor_name = 'm1.tiny'
        instance = self._create_fake_instance_obj(type_name=old_flavor_name)

        instance_type = objects.Flavor.get_by_name(self.context, 'm1.small')
        request_spec = objects.RequestSpec()

        self.compute.prep_resize(self.context, instance=instance,
                                 instance_type=instance_type,
                                 image={}, request_spec=request_spec,
                                 filter_properties={}, node=None,
                                 migration=None, clean_shutdown=True,
                                 host_list=[])

        migration = objects.Migration.get_by_instance_and_status(
                self.context.elevated(),
                instance.uuid, 'pre-migrating')

        instance.refresh()
        instance.task_state = task_states.RESIZE_MIGRATED
        instance.save()

        self.assertRaises(test.TestingException, self.compute.finish_resize,
                          self.context,
                          migration=migration,
                          disk_info={}, image={}, instance=instance,
                          request_spec=request_spec)
        instance.refresh()
        self.assertEqual(vm_states.ERROR, instance.vm_state)

        old_flavor = objects.Flavor.get_by_name(self.context, old_flavor_name)
        self.assertEqual(old_flavor['memory_mb'], instance.memory_mb)
        self.assertEqual(old_flavor['vcpus'], instance.vcpus)
        self.assertEqual(old_flavor['root_gb'], instance.root_gb)
        self.assertEqual(old_flavor['ephemeral_gb'], instance.ephemeral_gb)
        self.assertEqual(old_flavor['id'], instance.instance_type_id)
        self.assertNotEqual(instance_type['id'], instance.instance_type_id)

    def test_set_instance_info(self):
        old_flavor_name = 'm1.tiny'
        new_flavor_name = 'm1.small'
        instance = self._create_fake_instance_obj(type_name=old_flavor_name)
        new_flavor = objects.Flavor.get_by_name(self.context, new_flavor_name)

        self.compute._set_instance_info(instance, new_flavor.obj_clone())

        self.assertEqual(new_flavor['memory_mb'], instance.memory_mb)
        self.assertEqual(new_flavor['vcpus'], instance.vcpus)
        self.assertEqual(new_flavor['root_gb'], instance.root_gb)
        self.assertEqual(new_flavor['ephemeral_gb'], instance.ephemeral_gb)
        self.assertEqual(new_flavor['id'], instance.instance_type_id)

    def test_rebuild_instance_notification(self):
        # Ensure notifications are generated on instance rebuild.
old_time = datetime.datetime(2012, 4, 1) cur_time = datetime.datetime(2012, 12, 21, 12, 21) time_fixture = self.useFixture(utils_fixture.TimeFixture(old_time)) inst_ref = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, inst_ref, {}, {}, {}, block_device_mapping=[]) time_fixture.advance_time_delta(cur_time - old_time) fake_notifier.NOTIFICATIONS = [] instance = db.instance_get_by_uuid(self.context, inst_ref['uuid']) orig_sys_metadata = db.instance_system_metadata_get(self.context, inst_ref['uuid']) image_ref = instance["image_ref"] new_image_ref = uuids.new_image_ref db.instance_update(self.context, inst_ref['uuid'], {'image_ref': new_image_ref}) password = "new_password" inst_ref.task_state = task_states.REBUILDING inst_ref.save() self.compute.rebuild_instance(self.context, inst_ref, image_ref, new_image_ref, injected_files=[], new_pass=password, orig_sys_metadata=orig_sys_metadata, bdms=[], recreate=False, on_shared_storage=False, preserve_ephemeral=False, migration=None, scheduled_node=None, limits={}, request_spec=None) inst_ref.refresh() image_ref_url = self.image_api.generate_image_url(image_ref, self.context) new_image_ref_url = self.image_api.generate_image_url(new_image_ref, self.context) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 3) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.event_type, 'compute.instance.exists') self.assertEqual(msg.payload['image_ref_url'], image_ref_url) msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual(msg.event_type, 'compute.instance.rebuild.start') self.assertEqual(msg.payload['image_ref_url'], new_image_ref_url) self.assertEqual(msg.payload['image_name'], 'fake_name') msg = fake_notifier.NOTIFICATIONS[2] self.assertEqual(msg.event_type, 'compute.instance.rebuild.end') self.assertEqual(msg.priority, 'INFO') payload = msg.payload self.assertEqual(payload['image_name'], 'fake_name') self.assertEqual(payload['tenant_id'], self.project_id) self.assertEqual(payload['user_id'], self.user_id) self.assertEqual(payload['instance_id'], inst_ref['uuid']) self.assertEqual(payload['instance_type'], 'm1.tiny') self.assertEqual(str(self.tiny_flavor.id), str(payload['instance_type_id'])) self.assertEqual(str(self.tiny_flavor.flavorid), str(payload['instance_flavor_id'])) self.assertIn('display_name', payload) self.assertIn('created_at', payload) self.assertIn('launched_at', payload) self.assertEqual(payload['launched_at'], utils.strtime(cur_time)) self.assertEqual(payload['image_ref_url'], new_image_ref_url) self.compute.terminate_instance(self.context, inst_ref, []) def test_finish_resize_instance_notification(self): # Ensure notifications on instance migrate/resize. 
old_time = datetime.datetime(2012, 4, 1) cur_time = datetime.datetime(2012, 12, 21, 12, 21) time_fixture = self.useFixture(utils_fixture.TimeFixture(old_time)) instance = self._create_fake_instance_obj() new_type = objects.Flavor.get_by_name(self.context, 'm1.small') new_type_id = new_type['id'] flavor_id = new_type['flavorid'] request_spec = objects.RequestSpec() self.compute.build_and_run_instance(self.context, instance, {}, request_spec, {}, block_device_mapping=[]) instance.host = 'foo' instance.task_state = task_states.RESIZE_PREP instance.save() self.compute.prep_resize(self.context, instance=instance, instance_type=new_type, image={}, request_spec={}, filter_properties={}, node=None, clean_shutdown=True, migration=None, host_list=[]) migration = objects.Migration.get_by_instance_and_status( self.context.elevated(), instance.uuid, 'pre-migrating') self.compute.resize_instance(self.context, instance=instance, migration=migration, image={}, instance_type=new_type, clean_shutdown=True, request_spec=request_spec) time_fixture.advance_time_delta(cur_time - old_time) fake_notifier.NOTIFICATIONS = [] self.compute.finish_resize(self.context, migration=migration, disk_info={}, image={}, instance=instance, request_spec=request_spec) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 2) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.event_type, 'compute.instance.finish_resize.start') msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual(msg.event_type, 'compute.instance.finish_resize.end') self.assertEqual(msg.priority, 'INFO') payload = msg.payload self.assertEqual(payload['tenant_id'], self.project_id) self.assertEqual(payload['user_id'], self.user_id) self.assertEqual(payload['instance_id'], instance.uuid) self.assertEqual(payload['instance_type'], 'm1.small') self.assertEqual(str(payload['instance_type_id']), str(new_type_id)) self.assertEqual(str(payload['instance_flavor_id']), str(flavor_id)) self.assertIn('display_name', payload) self.assertIn('created_at', payload) self.assertIn('launched_at', payload) self.assertEqual(payload['launched_at'], utils.strtime(cur_time)) image_ref_url = self.image_api.generate_image_url(FAKE_IMAGE_REF, self.context) self.assertEqual(payload['image_ref_url'], image_ref_url) self.compute.terminate_instance(self.context, instance, []) @mock.patch('nova.compute.utils.notify_about_resize_prep_instance') def test_resize_instance_notification(self, mock_notify): # Ensure notifications on instance migrate/resize. 
old_time = datetime.datetime(2012, 4, 1) cur_time = datetime.datetime(2012, 12, 21, 12, 21) time_fixture = self.useFixture(utils_fixture.TimeFixture(old_time)) instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) time_fixture.advance_time_delta(cur_time - old_time) fake_notifier.NOTIFICATIONS = [] instance.host = 'foo' instance.task_state = task_states.RESIZE_PREP instance.save() self.compute.prep_resize(self.context, instance=instance, instance_type=self.default_flavor, image={}, request_spec={}, filter_properties={}, node=None, clean_shutdown=True, migration=None, host_list=[]) db.migration_get_by_instance_and_status(self.context.elevated(), instance.uuid, 'pre-migrating') self.assertEqual(len(fake_notifier.NOTIFICATIONS), 3) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.event_type, 'compute.instance.exists') msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual(msg.event_type, 'compute.instance.resize.prep.start') msg = fake_notifier.NOTIFICATIONS[2] self.assertEqual(msg.event_type, 'compute.instance.resize.prep.end') self.assertEqual(msg.priority, 'INFO') payload = msg.payload self.assertEqual(payload['tenant_id'], self.project_id) self.assertEqual(payload['user_id'], self.user_id) self.assertEqual(payload['instance_id'], instance.uuid) self.assertEqual(payload['instance_type'], 'm1.tiny') self.assertEqual(str(self.tiny_flavor.id), str(payload['instance_type_id'])) self.assertEqual(str(self.tiny_flavor.flavorid), str(payload['instance_flavor_id'])) self.assertIn('display_name', payload) self.assertIn('created_at', payload) self.assertIn('launched_at', payload) image_ref_url = self.image_api.generate_image_url(FAKE_IMAGE_REF, self.context) self.assertEqual(payload['image_ref_url'], image_ref_url) self.compute.terminate_instance(self.context, instance, []) mock_notify.assert_has_calls([ mock.call(self.context, instance, 'fake-mini', 'start', self.default_flavor), mock.call(self.context, instance, 'fake-mini', 'end', self.default_flavor)]) def test_prep_resize_instance_migration_error_on_none_host(self): """Ensure prep_resize raises a migration error if destination host is not defined """ instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instance.host = None instance.save() self.assertRaises(exception.MigrationError, self.compute.prep_resize, self.context, instance=instance, instance_type=self.default_flavor, image={}, request_spec={}, filter_properties={}, node=None, clean_shutdown=True, migration=mock.Mock(), host_list=[]) self.compute.terminate_instance(self.context, instance, []) def test_resize_instance_driver_error(self): # Ensure instance status set to Error on resize error. 
def throw_up(*args, **kwargs): raise test.TestingException() self.stub_out('nova.virt.fake.FakeDriver.migrate_disk_and_power_off', throw_up) instance = self._create_fake_instance_obj() request_spec = objects.RequestSpec() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instance.host = 'foo' instance.save() self.compute.prep_resize(self.context, instance=instance, instance_type=self.default_flavor, image={}, request_spec=request_spec, filter_properties={}, node=None, clean_shutdown=True, migration=None, host_list=[]) instance.task_state = task_states.RESIZE_PREP instance.save() migration = objects.Migration.get_by_instance_and_status( self.context.elevated(), instance.uuid, 'pre-migrating') # verify self.assertRaises(test.TestingException, self.compute.resize_instance, self.context, instance=instance, migration=migration, image={}, instance_type=jsonutils.to_primitive( self.default_flavor), clean_shutdown=True, request_spec=request_spec) # NOTE(comstud): error path doesn't use objects, so our object # is not updated. Refresh and compare against the DB. instance.refresh() self.assertEqual(instance.vm_state, vm_states.ERROR) self.compute.terminate_instance(self.context, instance, []) def test_resize_instance_driver_rollback(self): # Ensure instance status set to Running after rollback. def throw_up(*args, **kwargs): raise exception.InstanceFaultRollback(test.TestingException()) self.stub_out('nova.virt.fake.FakeDriver.migrate_disk_and_power_off', throw_up) instance = self._create_fake_instance_obj() request_spec = objects.RequestSpec() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instance.host = 'foo' instance.save() self.compute.prep_resize(self.context, instance=instance, instance_type=self.default_flavor, image={}, request_spec=request_spec, filter_properties={}, node=None, migration=None, host_list=[], clean_shutdown=True) instance.task_state = task_states.RESIZE_PREP instance.save() migration = objects.Migration.get_by_instance_and_status( self.context.elevated(), instance.uuid, 'pre-migrating') self.assertRaises(test.TestingException, self.compute.resize_instance, self.context, instance=instance, migration=migration, image={}, instance_type=jsonutils.to_primitive( self.default_flavor), clean_shutdown=True, request_spec=request_spec) # NOTE(comstud): error path doesn't use objects, so our object # is not updated. Refresh and compare against the DB. instance.refresh() self.assertEqual(instance.vm_state, vm_states.ACTIVE) self.assertIsNone(instance.task_state) self.compute.terminate_instance(self.context, instance, []) def _test_resize_instance(self, clean_shutdown=True): # Ensure instance can be migrated/resized. 
instance = self._create_fake_instance_obj() request_spec = objects.RequestSpec() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instance.host = 'foo' instance.save() self.compute.prep_resize(self.context, instance=instance, instance_type=self.default_flavor, image={}, request_spec=request_spec, filter_properties={}, node=None, clean_shutdown=True, migration=None, host_list=[]) # verify 'old_vm_state' was set on system_metadata instance.refresh() sys_meta = instance.system_metadata self.assertEqual(vm_states.ACTIVE, sys_meta['old_vm_state']) instance.task_state = task_states.RESIZE_PREP instance.save() migration = objects.Migration.get_by_instance_and_status( self.context.elevated(), instance.uuid, 'pre-migrating') with test.nested( mock.patch.object(nova.compute.utils, 'notify_about_instance_action'), mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid', return_value='fake_bdms'), mock.patch.object( self.compute, '_get_instance_block_device_info', return_value='fake_bdinfo'), mock.patch.object(self.compute, '_terminate_volume_connections'), mock.patch.object(self.compute, '_get_power_off_values', return_value=(1, 2)), mock.patch.object(nova.compute.utils, 'is_volume_backed_instance', return_value=False) ) as (mock_notify_action, mock_get_by_inst_uuid, mock_get_instance_vol_bdinfo, mock_terminate_vol_conn, mock_get_power_off_values, mock_check_is_bfv): self.compute.resize_instance(self.context, instance=instance, migration=migration, image={}, instance_type=jsonutils.to_primitive(self.default_flavor), clean_shutdown=clean_shutdown, request_spec=request_spec) mock_notify_action.assert_has_calls([ mock.call(self.context, instance, 'fake-mini', action='resize', phase='start', bdms='fake_bdms'), mock.call(self.context, instance, 'fake-mini', action='resize', phase='end', bdms='fake_bdms')]) mock_get_instance_vol_bdinfo.assert_called_once_with( self.context, instance, bdms='fake_bdms') mock_terminate_vol_conn.assert_called_once_with(self.context, instance, 'fake_bdms') mock_get_power_off_values.assert_called_once_with(self.context, instance, clean_shutdown) self.assertEqual(migration.dest_compute, instance.host) self.compute.terminate_instance(self.context, instance, []) def test_resize_instance(self): self._test_resize_instance() def test_resize_instance_forced_shutdown(self): self._test_resize_instance(clean_shutdown=False) @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(nova.compute.utils, 'notify_about_instance_action') def _test_confirm_resize(self, mock_notify, mock_get_by_instance_uuid, power_on, numa_topology=None): # Common test case method for confirm_resize def fake(*args, **kwargs): pass def fake_confirm_migration_driver(*args, **kwargs): # Confirm the instance uses the new type in finish_resize self.assertEqual('3', instance.flavor.flavorid) expected_bdm = objects.BlockDeviceMappingList(objects=[]) mock_get_by_instance_uuid.return_value = expected_bdm old_vm_state = None p_state = None if power_on: old_vm_state = vm_states.ACTIVE p_state = power_state.RUNNING else: old_vm_state = vm_states.STOPPED p_state = power_state.SHUTDOWN params = {'vm_state': old_vm_state, 'power_state': p_state} instance = self._create_fake_instance_obj(params) request_spec = objects.RequestSpec() self.flags(allow_resize_to_same_host=True) self.stub_out('nova.virt.fake.FakeDriver.finish_migration', fake) self.stub_out('nova.virt.fake.FakeDriver.confirm_migration', fake_confirm_migration_driver) 
        # Get initial memory usage
        memory_mb_used = self.rt.compute_nodes[NODENAME].memory_mb_used

        self.compute.build_and_run_instance(self.context, instance, {},
                                            request_spec, {},
                                            block_device_mapping=[])

        # Confirm the instance size before the resize starts
        instance.refresh()
        flavor = objects.Flavor.get_by_id(self.context,
                                          instance.instance_type_id)
        self.assertEqual(flavor.flavorid, '1')

        # Memory usage should have increased by the claim
        self.assertEqual(self.rt.compute_nodes[NODENAME].memory_mb_used,
                         memory_mb_used + flavor.memory_mb)

        instance.vm_state = old_vm_state
        instance.power_state = p_state
        instance.numa_topology = numa_topology
        instance.save()

        new_instance_type_ref = flavors.get_flavor_by_flavor_id(3)
        self.compute.prep_resize(self.context,
                instance=instance,
                instance_type=new_instance_type_ref,
                image={}, request_spec=request_spec,
                filter_properties={}, node=None, clean_shutdown=True,
                migration=None, host_list=None)

        # Memory usage should increase after the resize as well
        self.assertEqual(self.rt.compute_nodes[NODENAME].memory_mb_used,
                         memory_mb_used + flavor.memory_mb +
                         new_instance_type_ref.memory_mb)

        migration = objects.Migration.get_by_instance_and_status(
                self.context.elevated(),
                instance.uuid, 'pre-migrating')
        migration_context = objects.MigrationContext.get_by_instance_uuid(
                self.context.elevated(), instance.uuid)
        self.assertIsInstance(migration_context.old_numa_topology,
                              numa_topology.__class__)
        self.assertIsNone(migration_context.new_numa_topology)

        # NOTE(mriedem): ensure prep_resize set old_vm_state in
        # system_metadata
        sys_meta = instance.system_metadata
        self.assertEqual(old_vm_state, sys_meta['old_vm_state'])
        instance.task_state = task_states.RESIZE_PREP
        instance.save()

        self.compute.resize_instance(self.context, instance=instance,
                                     migration=migration,
                                     image={},
                                     instance_type=new_instance_type_ref,
                                     clean_shutdown=True,
                                     request_spec=request_spec)
        self.compute.finish_resize(self.context,
                    migration=migration,
                    disk_info={}, image={}, instance=instance,
                    request_spec=request_spec)

        # Memory usage shouldn't have changed
        self.assertEqual(self.rt.compute_nodes[NODENAME].memory_mb_used,
                         memory_mb_used + flavor.memory_mb +
                         new_instance_type_ref.memory_mb)

        # Prove that the instance size is now the new size
        flavor = objects.Flavor.get_by_id(self.context,
                                          instance.instance_type_id)
        self.assertEqual(flavor.flavorid, '3')
        # Prove that the NUMA topology has also been updated to that of the
        # new flavor - meaning None
        self.assertIsNone(instance.numa_topology)

        # Finally, confirm the resize and verify the new flavor is applied
        instance.task_state = None
        instance.save()

        with mock.patch.object(objects.Instance, 'get_by_uuid',
                               return_value=instance) as mock_get_by_uuid:
            self.compute.confirm_resize(self.context, instance=instance,
                                        migration=migration)
            mock_get_by_uuid.assert_called_once_with(
                self.context, instance.uuid,
                expected_attrs=['metadata', 'system_metadata', 'flavor'])

        # Resources from the migration (based on initial flavor) should
        # be freed now
        self.assertEqual(self.rt.compute_nodes[NODENAME].memory_mb_used,
                         memory_mb_used + new_instance_type_ref.memory_mb)

        mock_notify.assert_has_calls([
            mock.call(self.context, instance, 'fake-mini',
                      action='resize', bdms=expected_bdm, phase='start'),
            mock.call(self.context, instance, 'fake-mini',
                      action='resize', bdms=expected_bdm, phase='end'),
            mock.call(self.context, instance, 'fake-mini',
                      action='resize_finish', bdms=expected_bdm,
                      phase='start'),
            mock.call(self.context, instance, 'fake-mini',
                      action='resize_finish', bdms=expected_bdm,
                      phase='end'),
mock.call(self.context, instance, 'fake-mini', action='resize_confirm', phase='start'), mock.call(self.context, instance, 'fake-mini', action='resize_confirm', phase='end')]) instance.refresh() flavor = objects.Flavor.get_by_id(self.context, instance.instance_type_id) self.assertEqual(flavor.flavorid, '3') self.assertEqual('fake-mini', migration.source_compute) self.assertEqual(old_vm_state, instance.vm_state) self.assertIsNone(instance.task_state) self.assertIsNone(instance.migration_context) self.assertEqual(p_state, instance.power_state) self.compute.terminate_instance(self.context, instance, []) def test_confirm_resize_from_active(self): self._test_confirm_resize(power_on=True) def test_confirm_resize_from_stopped(self): self._test_confirm_resize(power_on=False) def test_confirm_resize_with_migration_context(self): numa_topology = test_instance_numa.get_fake_obj_numa_topology( self.context) self._test_confirm_resize(power_on=True, numa_topology=numa_topology) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'remove_provider_tree_from_instance_allocation') def test_confirm_resize_with_numa_topology_and_cpu_pinning( self, mock_remove_allocs): instance = self._create_fake_instance_obj() instance.old_flavor = instance.flavor instance.new_flavor = instance.flavor # we have two hosts with the same NUMA topologies. # now instance use two cpus from node_0 (cpu1 and cpu2) on current host old_inst_topology = objects.InstanceNUMATopology( instance_uuid=instance.uuid, cells=[ objects.InstanceNUMACell( id=0, cpuset=set([1, 2]), memory=512, pagesize=2048, cpu_policy=obj_fields.CPUAllocationPolicy.DEDICATED, cpu_pinning={'0': 1, '1': 2}) ]) # instance will use two cpus from node_1 (cpu3 and cpu4) # on *some other host* new_inst_topology = objects.InstanceNUMATopology( instance_uuid=instance.uuid, cells=[ objects.InstanceNUMACell( id=1, cpuset=set([3, 4]), memory=512, pagesize=2048, cpu_policy=obj_fields.CPUAllocationPolicy.DEDICATED, cpu_pinning={'0': 3, '1': 4}) ]) instance.numa_topology = old_inst_topology # instance placed in node_0 on current host. 
        # cpu1 and cpu2 from node_0
        # are used
        cell1 = objects.NUMACell(
            id=0, cpuset=set(), pcpuset=set([1, 2]), memory=512,
            pagesize=2048, cpu_usage=2, memory_usage=0,
            pinned_cpus=set([1, 2]), siblings=[set([1]), set([2])],
            mempages=[objects.NUMAPagesTopology(
                size_kb=2048, total=256, used=256)])
        # as instance placed in node_0 all cpus from node_1 (cpu3 and cpu4)
        # are free (on current host)
        cell2 = objects.NUMACell(
            id=1, cpuset=set(), pcpuset=set([3, 4]), pinned_cpus=set(),
            memory=512, pagesize=2048, memory_usage=0, cpu_usage=0,
            siblings=[set([3]), set([4])],
            mempages=[objects.NUMAPagesTopology(
                size_kb=2048, total=256, used=0)])
        host_numa_topology = objects.NUMATopology(cells=[cell1, cell2])

        migration = objects.Migration(context=self.context.elevated())
        migration.instance_uuid = instance.uuid
        migration.status = 'finished'
        migration.migration_type = 'migration'
        migration.source_node = NODENAME
        migration.create()

        migration_context = objects.MigrationContext()
        migration_context.migration_id = migration.id
        migration_context.old_numa_topology = old_inst_topology
        migration_context.new_numa_topology = new_inst_topology
        migration_context.old_pci_devices = None
        migration_context.new_pci_devices = None

        instance.migration_context = migration_context
        instance.vm_state = vm_states.RESIZED
        instance.system_metadata = {}
        instance.save()

        self.rt.tracked_migrations[instance.uuid] = (migration,
                                                     instance.flavor)

        cn = self.rt.compute_nodes[NODENAME]
        cn.numa_topology = jsonutils.dumps(
            host_numa_topology.obj_to_primitive())

        with mock.patch.object(self.compute.network_api,
                               'setup_networks_on_host'):
            self.compute.confirm_resize(self.context, instance=instance,
                                        migration=migration)

        instance.refresh()
        self.assertEqual(vm_states.ACTIVE, instance['vm_state'])

        updated_topology = objects.NUMATopology.obj_from_primitive(
            jsonutils.loads(cn.numa_topology))

        # after confirming resize all cpus on current host must be free
        self.assertEqual(2, len(updated_topology.cells))
        for cell in updated_topology.cells:
            self.assertEqual(set(), cell.pinned_cpus)

    def _test_resize_with_pci(self, method, expected_pci_addr):
        instance = self._create_fake_instance_obj()
        instance.old_flavor = instance.flavor
        instance.new_flavor = instance.flavor

        old_pci_devices = objects.PciDeviceList(
            objects=[objects.PciDevice(vendor_id='1377',
                                       product_id='0047',
                                       address='0000:0a:00.1',
                                       request_id=uuids.req1)])

        new_pci_devices = objects.PciDeviceList(
            objects=[objects.PciDevice(vendor_id='1377',
                                       product_id='0047',
                                       address='0000:0b:00.1',
                                       request_id=uuids.req2)])

        if expected_pci_addr == old_pci_devices[0].address:
            expected_pci_device = old_pci_devices[0]
        else:
            expected_pci_device = new_pci_devices[0]

        migration = objects.Migration(context=self.context.elevated())
        migration.instance_uuid = instance.uuid
        migration.status = 'finished'
        migration.migration_type = 'migration'
        migration.source_node = NODENAME
        migration.dest_node = NODENAME
        migration.create()

        migration_context = objects.MigrationContext()
        migration_context.migration_id = migration.id
        migration_context.old_pci_devices = old_pci_devices
        migration_context.new_pci_devices = new_pci_devices

        instance.pci_devices = old_pci_devices
        instance.migration_context = migration_context
        instance.vm_state = vm_states.RESIZED
        instance.system_metadata = {}
        instance.save()

        self.rt.pci_tracker = mock.Mock()
        self.rt.tracked_migrations[instance.uuid] = migration

        with test.nested(
            mock.patch.object(self.compute.network_api,
                              'setup_networks_on_host'),
            mock.patch.object(self.compute.network_api,
                              'migrate_instance_start'),
mock.patch.object(self.rt.pci_tracker, 'free_device'), mock.patch.object(self.rt.pci_tracker.stats, 'to_device_pools_obj', return_value=objects.PciDevicePoolList()) ) as (mock_setup, mock_migrate, mock_pci_free_device, mock_to_device_pools_obj): method(self.context, instance=instance, migration=migration) mock_pci_free_device.assert_called_once_with( expected_pci_device, mock.ANY) def test_confirm_resize_with_pci(self): self._test_resize_with_pci( self.compute.confirm_resize, '0000:0a:00.1') def test_revert_resize_with_pci(self): self._test_resize_with_pci( lambda context, instance, migration: self.compute.revert_resize( context, instance, migration, objects.RequestSpec()), '0000:0b:00.1') @mock.patch.object(nova.compute.utils, 'notify_about_instance_action') def _test_finish_revert_resize(self, mock_notify, power_on, remove_old_vm_state=False, numa_topology=None): """Convenience method that does most of the work for the test_finish_revert_resize tests. :param power_on -- True if testing resize from ACTIVE state, False if testing resize from STOPPED state. :param remove_old_vm_state -- True if testing a case where the 'old_vm_state' system_metadata is not present when the finish_revert_resize method is called. """ def fake(*args, **kwargs): pass def fake_finish_revert_migration_driver(*args, **kwargs): # Confirm the instance uses the old type in finish_revert_resize inst = args[2] self.assertEqual('1', inst.flavor.flavorid) old_vm_state = None if power_on: old_vm_state = vm_states.ACTIVE else: old_vm_state = vm_states.STOPPED params = {'vm_state': old_vm_state, 'info_cache': objects.InstanceInfoCache( network_info=network_model.NetworkInfo([]))} instance = self._create_fake_instance_obj(params) request_spec = objects.RequestSpec() self.stub_out('nova.virt.fake.FakeDriver.finish_migration', fake) self.stub_out('nova.virt.fake.FakeDriver.finish_revert_migration', fake_finish_revert_migration_driver) # Get initial memory usage memory_mb_used = self.rt.compute_nodes[NODENAME].memory_mb_used self.compute.build_and_run_instance(self.context, instance, {}, request_spec, {}, block_device_mapping=[]) instance.refresh() flavor = objects.Flavor.get_by_id(self.context, instance.instance_type_id) self.assertEqual(flavor.flavorid, '1') # Memory usage should have increased by the claim self.assertEqual(self.rt.compute_nodes[NODENAME].memory_mb_used, memory_mb_used + flavor.memory_mb) old_vm_state = instance['vm_state'] instance.vm_state = old_vm_state instance.numa_topology = numa_topology instance.save() new_instance_type_ref = flavors.get_flavor_by_flavor_id(3) self.compute.prep_resize(self.context, instance=instance, instance_type=new_instance_type_ref, image={}, request_spec=request_spec, filter_properties={}, node=None, migration=None, clean_shutdown=True, host_list=[]) # Memory usage should increase after the resize as well self.assertEqual(self.rt.compute_nodes[NODENAME].memory_mb_used, memory_mb_used + flavor.memory_mb + new_instance_type_ref.memory_mb) migration = objects.Migration.get_by_instance_and_status( self.context.elevated(), instance.uuid, 'pre-migrating') migration_context = objects.MigrationContext.get_by_instance_uuid( self.context.elevated(), instance.uuid) self.assertIsInstance(migration_context.old_numa_topology, numa_topology.__class__) # NOTE(mriedem): ensure prep_resize set old_vm_state in system_metadata sys_meta = instance.system_metadata self.assertEqual(old_vm_state, sys_meta['old_vm_state']) instance.task_state = task_states.RESIZE_PREP instance.save() 
self.compute.resize_instance(self.context, instance=instance, migration=migration, image={}, instance_type=new_instance_type_ref, clean_shutdown=True, request_spec=request_spec) self.compute.finish_resize(self.context, migration=migration, disk_info={}, image={}, instance=instance, request_spec=request_spec) # Memory usage shouldn't had changed self.assertEqual(self.rt.compute_nodes[NODENAME].memory_mb_used, memory_mb_used + flavor.memory_mb + new_instance_type_ref.memory_mb) # Prove that the instance size is now the new size instance_type_ref = flavors.get_flavor_by_flavor_id(3) self.assertEqual(instance_type_ref['flavorid'], '3') # Prove that the NUMA topology has also been updated to that of the new # flavor - meaning None self.assertIsNone(instance.numa_topology) instance.task_state = task_states.RESIZE_REVERTING instance.save() self.compute.revert_resize(self.context, migration=migration, instance=instance, request_spec=request_spec) # Resources from the migration (based on initial flavor) should # be freed now self.assertEqual(self.rt.compute_nodes[NODENAME].memory_mb_used, memory_mb_used + flavor.memory_mb) instance.refresh() if remove_old_vm_state: # need to wipe out the old_vm_state from system_metadata # before calling finish_revert_resize sys_meta = instance.system_metadata sys_meta.pop('old_vm_state') # Have to reset for save() to work instance.system_metadata = sys_meta instance.save() self.compute.finish_revert_resize(self.context, migration=migration, instance=instance, request_spec=request_spec) mock_notify.assert_has_calls([ mock.call(self.context, instance, 'fake-mini', action='resize_revert', phase='start', bdms=test.MatchType(objects.BlockDeviceMappingList)), mock.call(self.context, instance, 'fake-mini', action='resize_revert', phase='end', bdms=test.MatchType(objects.BlockDeviceMappingList))]) self.assertIsNone(instance.task_state) flavor = objects.Flavor.get_by_id(self.context, instance['instance_type_id']) self.assertEqual(flavor.flavorid, '1') self.assertEqual(instance.host, migration.source_compute) self.assertIsInstance(instance.numa_topology, numa_topology.__class__) if remove_old_vm_state: self.assertEqual(vm_states.ACTIVE, instance.vm_state) else: self.assertEqual(old_vm_state, instance.vm_state) def test_finish_revert_resize_from_active(self): self._test_finish_revert_resize(power_on=True) def test_finish_revert_resize_from_stopped(self): self._test_finish_revert_resize(power_on=False) def test_finish_revert_resize_from_stopped_remove_old_vm_state(self): # in this case we resize from STOPPED but end up with ACTIVE # because the old_vm_state value is not present in # finish_revert_resize self._test_finish_revert_resize(power_on=False, remove_old_vm_state=True) def test_finish_revert_resize_migration_context(self): numa_topology = test_instance_numa.get_fake_obj_numa_topology( self.context) self._test_finish_revert_resize(power_on=True, numa_topology=numa_topology) # NOTE(lbeliveau): Move unit test changes from b816e3 to a separate # test. It introduced a hacky way to force the migration dest_compute # and makes it hard to keep _test_finish_revert_resize() generic # and have the resources correctly tracked. 
def test_finish_revert_resize_validate_source_compute(self): def fake(*args, **kwargs): pass params = {'info_cache': objects.InstanceInfoCache( network_info=network_model.NetworkInfo([]))} instance = self._create_fake_instance_obj(params) request_spec = objects.RequestSpec() self.stub_out('nova.virt.fake.FakeDriver.finish_migration', fake) self.stub_out('nova.virt.fake.FakeDriver.finish_revert_migration', fake) self.compute.build_and_run_instance(self.context, instance, {}, request_spec, {}, block_device_mapping=[]) new_instance_type_ref = flavors.get_flavor_by_flavor_id(3) self.compute.prep_resize(self.context, instance=instance, instance_type=new_instance_type_ref, image={}, request_spec=request_spec, filter_properties={}, node=None, clean_shutdown=True, migration=None, host_list=[]) migration = objects.Migration.get_by_instance_and_status( self.context.elevated(), instance.uuid, 'pre-migrating') migration.dest_compute = NODENAME2 migration.dest_node = NODENAME2 migration.save() instance.task_state = task_states.RESIZE_PREP instance.save() self.compute.resize_instance(self.context, instance=instance, migration=migration, image={}, instance_type=new_instance_type_ref, clean_shutdown=True, request_spec=request_spec) self.compute.finish_resize(self.context, migration=migration, disk_info={}, image={}, instance=instance, request_spec=request_spec) instance.task_state = task_states.RESIZE_REVERTING instance.save() self.compute.revert_resize(self.context, migration=migration, instance=instance, request_spec=request_spec) self.compute.finish_revert_resize(self.context, migration=migration, instance=instance, request_spec=request_spec) self.assertEqual(instance.host, migration.source_compute) self.assertNotEqual(migration.dest_compute, migration.source_compute) self.assertNotEqual(migration.dest_node, migration.source_node) self.assertEqual(NODENAME2, migration.dest_compute) def test_get_by_flavor_id(self): flavor_type = flavors.get_flavor_by_flavor_id(1) self.assertEqual(flavor_type['name'], 'm1.tiny') def test_resize_instance_handles_migration_error(self): # Ensure vm_state is ERROR when error occurs. def raise_migration_failure(*args): raise test.TestingException() self.stub_out('nova.virt.fake.FakeDriver.migrate_disk_and_power_off', raise_migration_failure) instance = self._create_fake_instance_obj() request_spec = objects.RequestSpec() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instance.host = 'foo' instance.save() self.compute.prep_resize(self.context, instance=instance, instance_type=self.default_flavor, image={}, request_spec=request_spec, filter_properties={}, node=None, clean_shutdown=True, migration=None, host_list=[]) migration = objects.Migration.get_by_instance_and_status( self.context.elevated(), instance.uuid, 'pre-migrating') instance.task_state = task_states.RESIZE_PREP instance.save() self.assertRaises(test.TestingException, self.compute.resize_instance, self.context, instance=instance, migration=migration, image={}, instance_type=jsonutils.to_primitive( self.default_flavor), clean_shutdown=True, request_spec=request_spec) # NOTE(comstud): error path doesn't use objects, so our object # is not updated. Refresh and compare against the DB. 
instance.refresh() self.assertEqual(instance.vm_state, vm_states.ERROR) self.compute.terminate_instance(self.context, instance, []) @mock.patch.object(fake.FakeDriver, 'pre_live_migration') @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid', return_value=objects.BlockDeviceMappingList()) def test_pre_live_migration_works_correctly(self, mock_get_bdms, mock_notify, mock_pre): # Confirm setup_compute_volume is called when volume is mounted. def stupid(*args, **kwargs): return fake_network.fake_get_instance_nw_info(self) self.stub_out('nova.network.neutron.API.get_instance_nw_info', stupid) self.useFixture( std_fixtures.MonkeyPatch( 'nova.network.neutron.API.supports_port_binding_extension', lambda *args: True)) # creating instance testdata instance = self._create_fake_instance_obj({'host': 'dummy'}) c = context.get_admin_context() fake_notifier.NOTIFICATIONS = [] migrate_data = objects.LibvirtLiveMigrateData( is_shared_instance_path=False) vifs = migrate_data_obj.VIFMigrateData.create_skeleton_migrate_vifs( stupid()) migrate_data.vifs = vifs mock_pre.return_value = migrate_data with mock.patch.object(self.compute.network_api, 'setup_networks_on_host') as mock_setup: ret = self.compute.pre_live_migration(c, instance=instance, block_migration=False, disk=None, migrate_data=migrate_data) self.assertIs(migrate_data, ret) self.assertTrue(ret.wait_for_vif_plugged, ret) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 2) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.event_type, 'compute.instance.live_migration.pre.start') msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual(msg.event_type, 'compute.instance.live_migration.pre.end') mock_notify.assert_has_calls([ mock.call(c, instance, 'fake-mini', action='live_migration_pre', phase='start', bdms=mock_get_bdms.return_value), mock.call(c, instance, 'fake-mini', action='live_migration_pre', phase='end', bdms=mock_get_bdms.return_value)]) mock_pre.assert_called_once_with( test.MatchType(nova.context.RequestContext), test.MatchType(objects.Instance), {'swap': None, 'ephemerals': [], 'root_device_name': None, 'block_device_mapping': []}, mock.ANY, mock.ANY, mock.ANY) mock_setup.assert_called_once_with(c, instance, self.compute.host) # cleanup db.instance_destroy(c, instance['uuid']) @mock.patch.object(cinder.API, 'attachment_delete') @mock.patch.object(fake.FakeDriver, 'get_instance_disk_info') @mock.patch.object(compute_rpcapi.ComputeAPI, 'pre_live_migration') @mock.patch.object(objects.ComputeNode, 'get_by_host_and_nodename') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(compute_rpcapi.ComputeAPI, 'remove_volume_connection') @mock.patch.object(compute_rpcapi.ComputeAPI, 'rollback_live_migration_at_destination') @mock.patch('nova.objects.Migration.save') def test_live_migration_exception_rolls_back(self, mock_save, mock_rollback, mock_remove, mock_get_bdms, mock_get_node, mock_pre, mock_get_disk, mock_attachment_delete): # Confirm exception when pre_live_migration fails. 
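        # pre_live_migration on the destination is made to raise, so the
        # migration must be rolled back: the instance stays on the source
        # host in ACTIVE state, the migration record is marked 'error', and
        # the volume connections and attachments created for the destination
        # are cleaned up, as asserted below.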
c = context.get_admin_context() instance = self._create_fake_instance_obj( {'host': 'src_host', 'task_state': task_states.MIGRATING, 'info_cache': objects.InstanceInfoCache( network_info=network_model.NetworkInfo([]))}) updated_instance = self._create_fake_instance_obj( {'host': 'fake-dest-host'}) dest_host = updated_instance['host'] dest_node = objects.ComputeNode(host=dest_host, uuid=uuids.dest_node) mock_get_node.return_value = dest_node # All the fake BDMs we've generated, in order fake_bdms = [] # A list of the attachment_ids returned by gen_fake_bdms fake_attachment_ids = [] def gen_fake_bdms(obj, instance): # generate a unique fake connection_info and attachment_id every # time we're called, simulating attachment_id and connection_info # being mutated elsewhere. bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping( **fake_block_device.AnonFakeDbBlockDeviceDict( {'volume_id': uuids.volume_id_1, 'attachment_id': uuidutils.generate_uuid(), 'source_type': 'volume', 'connection_info': jsonutils.dumps(uuidutils.generate_uuid()), 'destination_type': 'volume'})), objects.BlockDeviceMapping( **fake_block_device.AnonFakeDbBlockDeviceDict( {'volume_id': uuids.volume_id_2, 'attachment_id': uuidutils.generate_uuid(), 'source_type': 'volume', 'connection_info': jsonutils.dumps(uuidutils.generate_uuid()), 'destination_type': 'volume'})) ]) for bdm in bdms: bdm.save = mock.Mock() fake_attachment_ids.append(bdm.attachment_id) fake_bdms.append(bdms) return bdms migrate_data = migrate_data_obj.XenapiLiveMigrateData( block_migration=True) mock_pre.side_effect = test.TestingException mock_get_bdms.side_effect = gen_fake_bdms # start test migration = objects.Migration(c, uuid=uuids.migration) @mock.patch.object(self.compute.network_api, 'setup_networks_on_host') @mock.patch.object(self.compute, 'reportclient') def do_it(mock_client, mock_setup): mock_client.get_allocations_for_consumer.return_value = { mock.sentinel.source: { 'resources': mock.sentinel.allocs, } } # This call will not raise TestingException because if a task # submitted to a thread executor raises, the exception will not be # raised unless Future.result() is called. self.compute.live_migration( c, dest=dest_host, block_migration=True, instance=instance, migration=migration, migrate_data=migrate_data) mock_setup.assert_called_once_with(c, instance, self.compute.host) mock_client.move_allocations.assert_called_once_with( c, migration.uuid, instance.uuid) do_it() instance.refresh() self.assertEqual('src_host', instance.host) self.assertEqual(vm_states.ACTIVE, instance.vm_state) self.assertIsNone(instance.task_state) self.assertEqual('error', migration.status) mock_get_disk.assert_called() mock_pre.assert_called_once_with(c, instance, True, mock_get_disk.return_value, dest_host, migrate_data) # Assert that _rollback_live_migration puts connection_info back to # what it was before the call to pre_live_migration. # BlockDeviceMappingList.get_by_instance_uuid is mocked to generate # BDMs with unique connection_info every time it's called. These are # stored in fake_bdms in the order they were generated. We assert here # that the last BDMs generated (in _rollback_live_migration) now have # the same connection_info and attachment_id as the first BDMs # generated (before calling pre_live_migration), and that we saved # them. 
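# NOTE: the assertGreater below is a sanity check that get_by_instance_uuid
# really was called more than once (i.e. that _rollback_live_migration
# refreshed the BDM list); otherwise zipping the first and last generations
# would just compare a list with itself.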
self.assertGreater(len(fake_bdms), 1) for source_bdm, final_bdm in zip(fake_bdms[0], fake_bdms[-1]): self.assertEqual(source_bdm.attachment_id, final_bdm.attachment_id) self.assertEqual(source_bdm.connection_info, final_bdm.connection_info) final_bdm.save.assert_called() mock_get_bdms.assert_called_with(c, instance.uuid) mock_remove.assert_has_calls([ mock.call(c, instance, uuids.volume_id_1, dest_host), mock.call(c, instance, uuids.volume_id_2, dest_host)]) mock_rollback.assert_called_once_with(c, instance, dest_host, destroy_disks=True, migrate_data=test.MatchType( migrate_data_obj.XenapiLiveMigrateData)) # Assert that the final attachment_ids returned by # BlockDeviceMappingList.get_by_instance_uuid are then deleted. mock_attachment_delete.assert_has_calls([ mock.call(c, fake_attachment_ids.pop()), mock.call(c, fake_attachment_ids.pop())], any_order=True) @mock.patch.object(compute_rpcapi.ComputeAPI, 'pre_live_migration') @mock.patch.object(compute_rpcapi.ComputeAPI, 'post_live_migration_at_destination') @mock.patch.object(compute_manager.InstanceEvents, 'clear_events_for_instance') @mock.patch.object(compute_utils, 'EventReporter') @mock.patch('nova.objects.Migration.save') def test_live_migration_works_correctly(self, mock_save, mock_event, mock_clear, mock_post, mock_pre): # Confirm live_migration() works as expected correctly. # creating instance testdata c = context.get_admin_context() params = {'info_cache': objects.InstanceInfoCache( network_info=network_model.NetworkInfo([]))} instance = self._create_fake_instance_obj(params=params, ctxt=c) instance.host = self.compute.host dest = 'desthost' migration = objects.Migration(uuid=uuids.migration, source_node=instance.node) migrate_data = migrate_data_obj.LibvirtLiveMigrateData( is_shared_instance_path=False, is_shared_block_storage=False, migration=migration) mock_pre.return_value = migrate_data # start test with test.nested( mock.patch.object( self.compute.network_api, 'migrate_instance_start'), mock.patch.object( self.compute.network_api, 'setup_networks_on_host') ) as ( mock_migrate, mock_setup ): ret = self.compute.live_migration(c, dest=dest, instance=instance, block_migration=False, migration=migration, migrate_data=migrate_data) self.assertIsNone(ret) mock_event.assert_called_with( c, 'compute_live_migration', CONF.host, instance.uuid, graceful_exit=False) # cleanup instance.destroy() self.assertEqual('completed', migration.status) mock_pre.assert_called_once_with(c, instance, False, None, dest, migrate_data) mock_migrate.assert_called_once_with(c, instance, {'source_compute': instance[ 'host'], 'dest_compute': dest}) mock_post.assert_called_once_with(c, instance, False, dest) mock_clear.assert_called_once_with(mock.ANY) @mock.patch.object(compute_rpcapi.ComputeAPI, 'pre_live_migration') @mock.patch.object(compute_rpcapi.ComputeAPI, 'post_live_migration_at_destination') @mock.patch.object(compute_manager.InstanceEvents, 'clear_events_for_instance') @mock.patch.object(compute_utils, 'EventReporter') @mock.patch('nova.objects.Migration.save') def test_live_migration_handles_errors_correctly(self, mock_save, mock_event, mock_clear, mock_post, mock_pre): # Confirm live_migration() works as expected correctly. 
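# NOTE: this variant exercises the failure path: driver.cleanup is mocked
# to raise below, and the test asserts that the instance ends up with
# vm_state ERROR, the migration is marked 'error', and
# clear_events_for_instance is never reached.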
# creating instance testdata c = context.get_admin_context() params = {'info_cache': objects.InstanceInfoCache( network_info=network_model.NetworkInfo([]))} instance = self._create_fake_instance_obj(params=params, ctxt=c) instance.host = self.compute.host dest = 'desthost' migrate_data = migrate_data_obj.LibvirtLiveMigrateData( is_shared_instance_path=False, is_shared_block_storage=False) mock_pre.return_value = migrate_data # start test migration = objects.Migration(c, uuid=uuids.migration) with mock.patch.object(self.compute.driver, 'cleanup') as mock_cleanup: mock_cleanup.side_effect = test.TestingException # This call will not raise TestingException because if a task # submitted to a thread executor raises, the exception will not be # raised unless Future.result() is called. self.compute.live_migration( c, dest, instance, False, migration, migrate_data) # ensure we have updated the instance and migration objects self.assertEqual(vm_states.ERROR, instance.vm_state) self.assertIsNone(instance.task_state) self.assertEqual("error", migration.status) mock_pre.assert_called_once_with(c, instance, False, None, dest, migrate_data) self.assertEqual(0, mock_clear.call_count) # cleanup instance.destroy() @mock.patch.object(compute_rpcapi.ComputeAPI, 'post_live_migration_at_destination') @mock.patch.object(compute_manager.InstanceEvents, 'clear_events_for_instance') def test_post_live_migration_no_shared_storage_working_correctly(self, mock_clear, mock_post): """Confirm post_live_migration() works correctly as expected for non shared storage migration. """ # Create stubs result = {} # No share storage live migration don't need to destroy at source # server because instance has been migrated to destination, but a # cleanup for block device and network are needed. def fakecleanup(*args, **kwargs): result['cleanup'] = True self.stub_out('nova.virt.fake.FakeDriver.cleanup', fakecleanup) dest = 'desthost' srchost = self.compute.host # creating testdata c = context.get_admin_context() instance = self._create_fake_instance_obj({ 'host': srchost, 'state_description': 'migrating', 'state': power_state.PAUSED, 'task_state': task_states.MIGRATING, 'power_state': power_state.PAUSED}) migration_obj = objects.Migration(uuid=uuids.migration, source_node=instance.node, status='completed') migration = {'source_compute': srchost, 'dest_compute': dest, } migrate_data = objects.LibvirtLiveMigrateData( is_shared_instance_path=False, is_shared_block_storage=False, migration=migration_obj, block_migration=False) bdms = objects.BlockDeviceMappingList(objects=[]) with test.nested( mock.patch.object( self.compute.network_api, 'migrate_instance_start'), mock.patch.object( self.compute.network_api, 'setup_networks_on_host'), mock.patch.object(migration_obj, 'save'), ) as ( mock_migrate, mock_setup, mock_mig_save ): self.compute._post_live_migration(c, instance, dest, migrate_data=migrate_data, source_bdms=bdms) self.assertIn('cleanup', result) self.assertTrue(result['cleanup']) mock_migrate.assert_called_once_with(c, instance, migration) mock_post.assert_called_once_with(c, instance, False, dest) mock_clear.assert_called_once_with(mock.ANY) @mock.patch('nova.compute.utils.notify_about_instance_action') def test_post_live_migration_working_correctly(self, mock_notify): # Confirm post_live_migration() works as expected correctly. 
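# NOTE: the happy-path sequence asserted below is, roughly: notify 'start',
# driver.post_live_migration with the (empty) source block device info,
# network_api.migrate_instance_start, RPC post_live_migration_at_destination
# on the target host, driver.post_live_migration_at_source with the cached
# network info, clear_events_for_instance, update_available_resource, the
# migration flipped to 'completed' and saved, then notify 'end'.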
dest = 'desthost' srchost = self.compute.host # creating testdata c = context.get_admin_context() instance = self._create_fake_instance_obj({ 'host': srchost, 'state_description': 'migrating', 'state': power_state.PAUSED}, ctxt=c) instance.update({'task_state': task_states.MIGRATING, 'power_state': power_state.PAUSED}) instance.save() migration_obj = objects.Migration(uuid=uuids.migration, source_node=instance.node, status='completed') migrate_data = migrate_data_obj.LiveMigrateData( migration=migration_obj) bdms = objects.BlockDeviceMappingList(objects=[]) # creating mocks with test.nested( mock.patch.object(self.compute.driver, 'post_live_migration'), mock.patch.object(self.compute.network_api, 'migrate_instance_start'), mock.patch.object(self.compute.compute_rpcapi, 'post_live_migration_at_destination'), mock.patch.object(self.compute.driver, 'post_live_migration_at_source'), mock.patch.object(self.compute.network_api, 'setup_networks_on_host'), mock.patch.object(self.compute.instance_events, 'clear_events_for_instance'), mock.patch.object(self.compute, 'update_available_resource'), mock.patch.object(migration_obj, 'save'), mock.patch.object(instance, 'get_network_info'), ) as ( post_live_migration, migrate_instance_start, post_live_migration_at_destination, post_live_migration_at_source, setup_networks_on_host, clear_events, update_available_resource, mig_save, get_nw_info, ): nw_info = network_model.NetworkInfo.hydrate([]) get_nw_info.return_value = nw_info self.compute._post_live_migration(c, instance, dest, migrate_data=migrate_data, source_bdms=bdms) mock_notify.assert_has_calls([ mock.call(c, instance, 'fake-mini', action='live_migration_post', phase='start'), mock.call(c, instance, 'fake-mini', action='live_migration_post', phase='end')]) self.assertEqual(2, mock_notify.call_count) post_live_migration.assert_has_calls([ mock.call(c, instance, {'swap': None, 'ephemerals': [], 'root_device_name': None, 'block_device_mapping': []}, migrate_data)]) migration = {'source_compute': srchost, 'dest_compute': dest, } migrate_instance_start.assert_has_calls([ mock.call(c, instance, migration)]) post_live_migration_at_destination.assert_has_calls([ mock.call(c, instance, False, dest)]) post_live_migration_at_source.assert_has_calls( [mock.call(c, instance, nw_info)]) clear_events.assert_called_once_with(instance) update_available_resource.assert_has_calls([mock.call(c)]) self.assertEqual('completed', migration_obj.status) mig_save.assert_called_once_with() # assert we logged a success message self.assertIn( 'Migrating instance to desthost finished successfully.', self.stdlog.logger.output) def test_post_live_migration_exc_on_dest_works_correctly(self): """Confirm that post_live_migration() completes successfully even after post_live_migration_at_destination() raises an exception. 
""" dest = 'desthost' srchost = self.compute.host srcnode = 'srcnode' # creating testdata c = context.get_admin_context() instance = self._create_fake_instance_obj({ 'host': srchost, 'node': srcnode, 'state_description': 'migrating', 'state': power_state.PAUSED}, ctxt=c) instance.update({'task_state': task_states.MIGRATING, 'power_state': power_state.PAUSED}) instance.save() migration_obj = objects.Migration(source_node=srcnode, uuid=uuids.migration, status='completed') migrate_data = migrate_data_obj.LiveMigrateData( migration=migration_obj) bdms = objects.BlockDeviceMappingList(objects=[]) # creating mocks with test.nested( mock.patch.object(self.compute.driver, 'post_live_migration'), mock.patch.object(self.compute.network_api, 'migrate_instance_start'), mock.patch.object(self.compute.compute_rpcapi, 'post_live_migration_at_destination', side_effect=Exception), mock.patch.object(self.compute.driver, 'post_live_migration_at_source'), mock.patch.object(self.compute.network_api, 'setup_networks_on_host'), mock.patch.object(self.compute.instance_events, 'clear_events_for_instance'), mock.patch.object(self.compute, 'update_available_resource'), mock.patch.object(migration_obj, 'save'), ) as ( post_live_migration, migrate_instance_start, post_live_migration_at_destination, post_live_migration_at_source, setup_networks_on_host, clear_events, update_available_resource, mig_save ): self.compute._post_live_migration(c, instance, dest, migrate_data=migrate_data, source_bdms=bdms) update_available_resource.assert_has_calls([mock.call(c)]) self.assertEqual('completed', migration_obj.status) # assert we did not log a success message self.assertNotIn( 'Migrating instance to desthost finished successfully.', self.stdlog.logger.output) @mock.patch.object(objects.ComputeNode, 'get_by_host_and_nodename') @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') def test_rollback_live_migration(self, mock_bdms, mock_get_node, migration_status=None): c = context.get_admin_context() instance = mock.MagicMock() migration = objects.Migration(uuid=uuids.migration) migrate_data = objects.LibvirtLiveMigrateData( migration=migration, src_supports_numa_live_migration=True) source_bdms = objects.BlockDeviceMappingList() dest_node = objects.ComputeNode(host='foo', uuid=uuids.dest_node) mock_get_node.return_value = dest_node bdms = objects.BlockDeviceMappingList() mock_bdms.return_value = bdms @mock.patch('nova.objects.InstancePCIRequests.get_by_instance_uuid') @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch.object(migration, 'save') @mock.patch.object(self.compute, '_revert_allocation') @mock.patch.object(self.compute, '_live_migration_cleanup_flags') @mock.patch.object(self.compute, 'network_api') @mock.patch.object(compute_rpcapi.ComputeAPI, 'drop_move_claim_at_destination') def _test(mock_drop_claim, mock_nw_api, mock_lmcf, mock_ra, mock_mig_save, mock_notify, mock_get_pci): mock_get_pci.return_value = objects.InstancePCIRequests() mock_lmcf.return_value = False, False if migration_status: self.compute._rollback_live_migration( c, instance, 'foo', migrate_data=migrate_data, migration_status=migration_status, source_bdms=source_bdms) else: self.compute._rollback_live_migration( c, instance, 'foo', migrate_data=migrate_data, source_bdms=source_bdms) mock_drop_claim.assert_called_once_with(c, instance, 'foo') mock_notify.assert_has_calls([ mock.call(c, instance, self.compute.host, action='live_migration_rollback', phase='start', bdms=bdms), mock.call(c, instance, 
self.compute.host, action='live_migration_rollback', phase='end', bdms=bdms)]) mock_nw_api.setup_networks_on_host.assert_has_calls([ mock.call(c, instance, self.compute.host), mock.call(c, instance, host='foo', teardown=True) ]) mock_ra.assert_called_once_with(mock.ANY, instance, migration) mock_mig_save.assert_called_once_with() mock_get_pci.assert_called_once_with(c, instance.uuid) self.assertEqual(mock_get_pci.return_value, instance.pci_requests) _test() self.assertEqual(migration_status or 'error', migration.status) self.assertEqual(0, instance.progress) def test_rollback_live_migration_set_migration_status(self): self.test_rollback_live_migration(migration_status='fake') @mock.patch.object(objects.ComputeNode, 'get_by_host_and_nodename', return_value=objects.ComputeNode( host='dest-host', uuid=uuids.dest_node)) @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid', return_value=objects.BlockDeviceMappingList()) def test_rollback_live_migration_network_teardown_fails( self, mock_bdms, mock_get_node): """Tests that _rollback_live_migration calls setup_networks_on_host directly, which raises an exception, and the migration record status is still set to 'error' before re-raising the error. """ ctxt = context.get_admin_context() instance = fake_instance.fake_instance_obj(ctxt) migration = objects.Migration(ctxt, uuid=uuids.migration) migrate_data = objects.LibvirtLiveMigrateData(migration=migration) source_bdms = objects.BlockDeviceMappingList() @mock.patch('nova.objects.InstancePCIRequests.get_by_instance_uuid') @mock.patch.object(self.compute, '_notify_about_instance_usage') @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch.object(instance, 'save') @mock.patch.object(migration, 'save') @mock.patch.object(self.compute, '_revert_allocation') @mock.patch.object(self.compute, '_live_migration_cleanup_flags', return_value=(False, False)) @mock.patch.object(self.compute.network_api, 'setup_networks_on_host', side_effect=(None, test.TestingException)) def _test(mock_nw_setup, _mock_lmcf, mock_ra, mock_mig_save, mock_inst_save, _mock_notify_action, mock_notify_usage, mock_get_pci): mock_get_pci.return_value = objects.InstancePCIRequests() self.assertRaises(test.TestingException, self.compute._rollback_live_migration, ctxt, instance, 'dest-host', migrate_data, migration_status='goofballs', source_bdms=source_bdms) # setup_networks_on_host is called twice: # - once to re-setup networking on the source host, which for # neutron doesn't actually do anything since the port's host # binding didn't change since live migration failed # - once to teardown the 'migrating_to' information in the port # binding profile, where migrating_to points at the destination # host (that's done in pre_live_migration on the dest host). This # cleanup would happen in rollback_live_migration_at_destination # except _live_migration_cleanup_flags returned False for # 'do_cleanup'. mock_nw_setup.assert_has_calls([ mock.call(ctxt, instance, self.compute.host), mock.call(ctxt, instance, host='dest-host', teardown=True) ]) mock_ra.assert_called_once_with(ctxt, instance, migration) mock_mig_save.assert_called_once_with() # Since we failed during rollback, the migration status gets set # to 'error' instead of 'goofballs'. 
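# NOTE: the failure is injected by the second element of the
# setup_networks_on_host side_effect tuple above (None, TestingException):
# re-setup on the source host succeeds, and the teardown call against the
# destination host raises.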
self.assertEqual('error', migration.status) mock_get_pci.assert_called_once_with(ctxt, instance.uuid) self.assertEqual(mock_get_pci.return_value, instance.pci_requests) _test() @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch.object(fake.FakeDriver, 'rollback_live_migration_at_destination') def test_rollback_live_migration_at_destination_correctly(self, mock_rollback, mock_notify): # creating instance testdata c = context.get_admin_context() instance = self._create_fake_instance_obj({'host': 'dummy'}) fake_notifier.NOTIFICATIONS = [] # start test with mock.patch.object(self.compute.network_api, 'setup_networks_on_host') as mock_setup: ret = self.compute.rollback_live_migration_at_destination(c, instance=instance, destroy_disks=True, migrate_data=None) self.assertIsNone(ret) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 2) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.event_type, 'compute.instance.live_migration.rollback.dest.start') msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual(msg.event_type, 'compute.instance.live_migration.rollback.dest.end') mock_setup.assert_called_once_with(c, instance, self.compute.host, teardown=True) mock_rollback.assert_called_once_with(c, instance, [], {'swap': None, 'ephemerals': [], 'root_device_name': None, 'block_device_mapping': []}, destroy_disks=True, migrate_data=None) mock_notify.assert_has_calls([ mock.call(c, instance, self.compute.host, action='live_migration_rollback_dest', phase='start'), mock.call(c, instance, self.compute.host, action='live_migration_rollback_dest', phase='end')]) @mock.patch.object(fake.FakeDriver, 'rollback_live_migration_at_destination') def test_rollback_live_migration_at_destination_network_fails( self, mock_rollback): c = context.get_admin_context() instance = self._create_fake_instance_obj() md = objects.LibvirtLiveMigrateData() with mock.patch.object(self.compute.network_api, 'setup_networks_on_host', side_effect=test.TestingException): self.assertRaises( test.TestingException, self.compute.rollback_live_migration_at_destination, c, instance, destroy_disks=True, migrate_data=md) mock_rollback.assert_called_once_with( c, instance, mock.ANY, mock.ANY, destroy_disks=True, migrate_data=md) def test_run_kill_vm(self): # Detect when a vm is terminated behind the scenes. 
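# NOTE: the fake driver's _test_remove_vm drops the VM from the driver
# only; the DB row is untouched. Invoking the _sync_power_states periodic
# task directly then reconciles the power state, and the test checks that
# the instance record survives with its task_state cleared rather than
# being deleted.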
instance = self._create_fake_instance_obj() self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instances = db.instance_get_all(self.context) LOG.info("Running instances: %s", instances) self.assertEqual(len(instances), 1) instance_uuid = instances[0]['uuid'] self.compute.driver._test_remove_vm(instance_uuid) # Force the compute manager to do its periodic poll ctxt = context.get_admin_context() self.compute._sync_power_states(ctxt) instances = db.instance_get_all(self.context) LOG.info("After force-killing instances: %s", instances) self.assertEqual(len(instances), 1) self.assertIsNone(instances[0]['task_state']) def _fill_fault(self, values): extra = {x: None for x in ['created_at', 'deleted_at', 'updated_at', 'deleted']} extra['id'] = 1 extra['details'] = '' extra.update(values) return extra def test_add_instance_fault(self): instance = self._create_fake_instance_obj() exc_info = None def fake_db_fault_create(ctxt, values): self.assertIn('raise NotImplementedError', values['details']) self.assertIn('test', values['details']) del values['details'] expected = { 'code': 500, 'message': 'NotImplementedError', 'instance_uuid': instance['uuid'], 'host': self.compute.host } self.assertEqual(expected, values) return self._fill_fault(expected) try: raise NotImplementedError('test') except NotImplementedError: exc_info = sys.exc_info() self.stub_out('nova.db.api.instance_fault_create', fake_db_fault_create) ctxt = context.get_admin_context() compute_utils.add_instance_fault_from_exc(ctxt, instance, NotImplementedError('test'), exc_info) def test_add_instance_fault_with_remote_error(self): instance = self._create_fake_instance_obj() exc_info = None raised_exc = None def fake_db_fault_create(ctxt, values): global exc_info global raised_exc self.assertIn('raise messaging.RemoteError', values['details']) self.assertIn('Remote error: test My Test Message\nNone.', values['details']) del values['details'] expected = { 'code': 500, 'instance_uuid': instance['uuid'], 'message': 'RemoteError', 'host': self.compute.host } self.assertEqual(expected, values) return self._fill_fault(expected) try: raise messaging.RemoteError('test', 'My Test Message') except messaging.RemoteError as exc: raised_exc = exc exc_info = sys.exc_info() self.stub_out('nova.db.api.instance_fault_create', fake_db_fault_create) ctxt = context.get_admin_context() compute_utils.add_instance_fault_from_exc(ctxt, instance, raised_exc, exc_info) def test_add_instance_fault_user_error(self): instance = self._create_fake_instance_obj() exc_info = None def fake_db_fault_create(ctxt, values): expected = { 'code': 400, 'message': 'fake details', 'details': '', 'instance_uuid': instance['uuid'], 'host': self.compute.host } self.assertEqual(expected, values) return self._fill_fault(expected) user_exc = exception.Invalid('fake details', code=400) try: raise user_exc except exception.Invalid: exc_info = sys.exc_info() self.stub_out('nova.db.api.instance_fault_create', fake_db_fault_create) ctxt = context.get_admin_context() compute_utils.add_instance_fault_from_exc(ctxt, instance, user_exc, exc_info) def test_add_instance_fault_no_exc_info(self): instance = self._create_fake_instance_obj() def fake_db_fault_create(ctxt, values): expected = { 'code': 500, 'message': 'NotImplementedError', 'details': '', 'instance_uuid': instance['uuid'], 'host': self.compute.host } self.assertEqual(expected, values) return self._fill_fault(expected) self.stub_out('nova.db.api.instance_fault_create', fake_db_fault_create) 
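# NOTE: no exc_info is passed here, so the recorded fault has empty
# 'details' and only the exception class name ends up in 'message',
# matching the expected dict asserted in fake_db_fault_create above.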
ctxt = context.get_admin_context() compute_utils.add_instance_fault_from_exc(ctxt, instance, NotImplementedError('test')) def test_add_instance_fault_long_message(self): instance = self._create_fake_instance_obj() message = 300 * 'a' def fake_db_fault_create(ctxt, values): expected = { 'code': 500, 'message': message[:255], 'details': '', 'instance_uuid': instance['uuid'], 'host': self.compute.host } self.assertEqual(expected, values) return self._fill_fault(expected) self.stub_out('nova.db.api.instance_fault_create', fake_db_fault_create) ctxt = context.get_admin_context() # Use a NovaException because non-nova exceptions will just have the # class name recorded in the fault message which will not exercise our # length trim code. exc = exception.NovaException(message=message) compute_utils.add_instance_fault_from_exc(ctxt, instance, exc) def test_add_instance_fault_with_message(self): instance = self._create_fake_instance_obj() exc_info = None def fake_db_fault_create(ctxt, values): self.assertIn('raise NotImplementedError', values['details']) del values['details'] expected = { 'code': 500, 'message': 'hoge', 'instance_uuid': instance['uuid'], 'host': self.compute.host } self.assertEqual(expected, values) return self._fill_fault(expected) try: raise NotImplementedError('test') except NotImplementedError: exc_info = sys.exc_info() self.stub_out('nova.db.api.instance_fault_create', fake_db_fault_create) ctxt = context.get_admin_context() compute_utils.add_instance_fault_from_exc(ctxt, instance, NotImplementedError('test'), exc_info, fault_message='hoge') def _test_cleanup_running(self, action): admin_context = context.get_admin_context() deleted_at = (timeutils.utcnow() - datetime.timedelta(hours=1, minutes=5)) instance1 = self._create_fake_instance_obj() instance2 = self._create_fake_instance_obj() timeutils.set_time_override(deleted_at) instance1.destroy() instance2.destroy() timeutils.clear_time_override() self.flags(running_deleted_instance_timeout=3600, running_deleted_instance_action=action) return admin_context, instance1, instance2 @mock.patch.object(compute_manager.ComputeManager, '_get_instances_on_driver') @mock.patch.object(compute_manager.ComputeManager, "_cleanup_volumes") @mock.patch.object(compute_manager.ComputeManager, "_shutdown_instance") @mock.patch.object(objects.BlockDeviceMappingList, "get_by_instance_uuid") def test_cleanup_running_deleted_instances_reap(self, mock_get_uuid, mock_shutdown, mock_cleanup, mock_get_inst): ctxt, inst1, inst2 = self._test_cleanup_running('reap') bdms = block_device_obj.block_device_make_list(ctxt, []) # Simulate an error and make sure cleanup proceeds with next instance. 
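# NOTE: giving side_effect a list makes consecutive calls consume it in
# order, so the first _shutdown_instance call raises TestingException and
# the second returns None; that is why _cleanup_volumes is only expected
# for inst2 below. Illustrative pattern only (hypothetical names):
#     m = mock.Mock(side_effect=[ValueError('boom'), 'ok'])
#     m()   # raises ValueError
#     m()   # returns 'ok'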
mock_shutdown.side_effect = [test.TestingException, None] mock_get_uuid.side_effect = [bdms, bdms] mock_cleanup.return_value = None mock_get_inst.return_value = [inst1, inst2] self.compute._cleanup_running_deleted_instances(ctxt) mock_shutdown.assert_has_calls([ mock.call(ctxt, inst1, bdms, notify=False), mock.call(ctxt, inst2, bdms, notify=False)]) mock_cleanup.assert_called_once_with(ctxt, inst2, bdms, detach=False) mock_get_uuid.assert_has_calls([ mock.call(ctxt, inst1.uuid, use_slave=True), mock.call(ctxt, inst2.uuid, use_slave=True)]) mock_get_inst.assert_called_once_with(ctxt, {'deleted': True, 'soft_deleted': False}) @mock.patch.object(compute_manager.ComputeManager, '_get_instances_on_driver') @mock.patch.object(fake.FakeDriver, "set_bootable") @mock.patch.object(fake.FakeDriver, "power_off") def test_cleanup_running_deleted_instances_shutdown(self, mock_power, mock_set, mock_get): ctxt, inst1, inst2 = self._test_cleanup_running('shutdown') mock_get.return_value = [inst1, inst2] self.compute._cleanup_running_deleted_instances(ctxt) mock_get.assert_called_once_with(ctxt, {'deleted': True, 'soft_deleted': False}) mock_power.assert_has_calls([mock.call(inst1), mock.call(inst2)]) mock_set.assert_has_calls([mock.call(inst1, False), mock.call(inst2, False)]) @mock.patch.object(compute_manager.ComputeManager, '_get_instances_on_driver') @mock.patch.object(fake.FakeDriver, "set_bootable") @mock.patch.object(fake.FakeDriver, "power_off") def test_cleanup_running_deleted_instances_shutdown_notimpl(self, mock_power, mock_set, mock_get): ctxt, inst1, inst2 = self._test_cleanup_running('shutdown') mock_get.return_value = [inst1, inst2] mock_set.side_effect = [NotImplementedError, NotImplementedError] self.compute._cleanup_running_deleted_instances(ctxt) mock_get.assert_called_once_with(ctxt, {'deleted': True, 'soft_deleted': False}) mock_set.assert_has_calls([mock.call(inst1, False), mock.call(inst2, False)]) mock_power.assert_has_calls([mock.call(inst1), mock.call(inst2)]) @mock.patch.object(compute_manager.ComputeManager, '_get_instances_on_driver') @mock.patch.object(fake.FakeDriver, "set_bootable") @mock.patch.object(fake.FakeDriver, "power_off") def test_cleanup_running_deleted_instances_shutdown_error(self, mock_power, mock_set, mock_get): ctxt, inst1, inst2 = self._test_cleanup_running('shutdown') e = test.TestingException('bad') mock_get.return_value = [inst1, inst2] mock_power.side_effect = [e, e] self.compute._cleanup_running_deleted_instances(ctxt) mock_get.assert_called_once_with(ctxt, {'deleted': True, 'soft_deleted': False}) mock_power.assert_has_calls([mock.call(inst1), mock.call(inst2)]) mock_set.assert_has_calls([mock.call(inst1, False), mock.call(inst2, False)]) @mock.patch.object(compute_manager.ComputeManager, '_get_instances_on_driver') @mock.patch.object(timeutils, 'is_older_than') def test_running_deleted_instances(self, mock_is_older, mock_get): admin_context = context.get_admin_context() self.compute.host = 'host' instance = self._create_fake_instance_obj() instance.deleted = True now = timeutils.utcnow() instance.deleted_at = now mock_get.return_value = [instance] mock_is_older.return_value = True val = self.compute._running_deleted_instances(admin_context) self.assertEqual(val, [instance]) mock_get.assert_called_once_with( admin_context, {'deleted': True, 'soft_deleted': False}) mock_is_older.assert_called_once_with(now, CONF.running_deleted_instance_timeout) @mock.patch('nova.network.neutron.API.list_ports') def test_require_nw_info_update_host_match(self, 
mock_list_ports): ctxt = context.get_admin_context() instance = self._create_fake_instance_obj() mock_list_ports.return_value = {'ports': [ {'binding:host_id': self.compute.host, 'binding:vif_type': 'foo'} ]} search_opts = {'device_id': instance.uuid, 'fields': ['binding:host_id', 'binding:vif_type']} val = self.compute._require_nw_info_update(ctxt, instance) # return false since hosts match and vif_type is not unbound or # binding_failed self.assertFalse(val) mock_list_ports.assert_called_once_with(ctxt, **search_opts) @mock.patch('nova.network.neutron.API.list_ports') def test_require_nw_info_update_host_mismatch(self, mock_list_ports): ctxt = context.get_admin_context() instance = self._create_fake_instance_obj() mock_list_ports.return_value = {'ports': [ {'binding:host_id': 'foo', 'binding:vif_type': 'foo'} ]} search_opts = {'device_id': instance.uuid, 'fields': ['binding:host_id', 'binding:vif_type']} val = self.compute._require_nw_info_update(ctxt, instance) self.assertTrue(val) mock_list_ports.assert_called_once_with(ctxt, **search_opts) @mock.patch('nova.network.neutron.API.list_ports') def test_require_nw_info_update_failed_vif_types(self, mock_list_ports): ctxt = context.get_admin_context() instance = self._create_fake_instance_obj() search_opts = {'device_id': instance.uuid, 'fields': ['binding:host_id', 'binding:vif_type']} FAILED_VIF_TYPES = (network_model.VIF_TYPE_UNBOUND, network_model.VIF_TYPE_BINDING_FAILED) for vif_type in FAILED_VIF_TYPES: mock_list_ports.reset_mock() mock_list_ports.return_value = {'ports': [ {'binding:host_id': self.compute.host, 'binding:vif_type': vif_type} ]} val = self.compute._require_nw_info_update(ctxt, instance) self.assertTrue(val) mock_list_ports.assert_called_once_with(ctxt, **search_opts) def test_require_nw_info_update_for_baremetal(self): """Tests _require_nw_info_update for baremetal instance, expected behavior is to return False. 
""" compute_mgr = compute_manager.ComputeManager('ironic.IronicDriver') with mock.patch.object(compute_mgr, 'network_api') as \ network_api_mock: ctxt = context.get_admin_context() instance = self._create_fake_instance_obj() val = compute_mgr._require_nw_info_update(ctxt, instance) self.assertFalse(val) network_api_mock.assert_not_called() def _heal_instance_info_cache(self, _get_instance_nw_info_raise=False, _get_instance_nw_info_raise_cache=False, _require_nw_info_update=False, _task_state_not_none=False): # Update on every call for the test self.flags(heal_instance_info_cache_interval=-1) ctxt = context.get_admin_context() instance_map = {} instances = [] for x in range(8): inst_uuid = getattr(uuids, 'db_instance_%i' % x) instance_map[inst_uuid] = fake_instance.fake_db_instance( uuid=inst_uuid, host=CONF.host, created_at=None) # These won't be in our instance since they're not requested if _task_state_not_none: instance_map[inst_uuid]['task_state'] = 'foo' instances.append(instance_map[inst_uuid]) call_info = {'get_all_by_host': 0, 'get_by_uuid': 0, 'get_nw_info': 0, 'expected_instance': None, 'require_nw_info': 0, 'setup_network': 0} def fake_instance_get_all_by_host(context, host, columns_to_join, use_slave=False): call_info['get_all_by_host'] += 1 self.assertEqual([], columns_to_join) return instances[:] def fake_instance_get_by_uuid(context, instance_uuid, columns_to_join, use_slave=False): if instance_uuid not in instance_map: raise exception.InstanceNotFound(instance_id=instance_uuid) call_info['get_by_uuid'] += 1 self.assertEqual(['system_metadata', 'info_cache', 'extra', 'extra.flavor'], columns_to_join) return instance_map[instance_uuid] def fake_require_nw_info_update(cls, context, instance): self.assertEqual(call_info['expected_instance']['uuid'], instance['uuid']) if call_info['expected_instance']['task_state'] is None: call_info['require_nw_info'] += 1 return _require_nw_info_update def fake_setup_instance_network_on_host(cls, context, instance, host): self.assertEqual(call_info['expected_instance']['uuid'], instance['uuid']) if call_info['expected_instance']['task_state'] is None and \ _require_nw_info_update: call_info['setup_network'] += 1 # NOTE(comstud): Override the stub in setUp() def fake_get_instance_nw_info(cls, context, instance, **kwargs): # Note that this exception gets caught in compute/manager # and is ignored. However, the below increment of # 'get_nw_info' won't happen, and you'll get an assert # failure checking it below. self.assertEqual(call_info['expected_instance']['uuid'], instance['uuid']) self.assertTrue(kwargs['force_refresh']) call_info['get_nw_info'] += 1 if _get_instance_nw_info_raise: raise exception.InstanceNotFound(instance_id=instance['uuid']) if _get_instance_nw_info_raise_cache: raise exception.InstanceInfoCacheNotFound( instance_uuid=instance['uuid']) self.stub_out('nova.db.api.instance_get_all_by_host', fake_instance_get_all_by_host) self.stub_out('nova.db.api.instance_get_by_uuid', fake_instance_get_by_uuid) self.stub_out('nova.compute.manager.ComputeManager.' 
'_require_nw_info_update', fake_require_nw_info_update) self.stub_out( 'nova.network.neutron.API.get_instance_nw_info', fake_get_instance_nw_info) self.stub_out( 'nova.network.neutron.API.setup_instance_network_on_host', fake_setup_instance_network_on_host) expected_require_nw_info = 0 expect_setup_network = 0 # Make an instance appear to be still Building instances[0]['vm_state'] = vm_states.BUILDING # Make an instance appear to be Deleting instances[1]['task_state'] = task_states.DELETING # '0', '1' should be skipped.. call_info['expected_instance'] = instances[2] self.compute._heal_instance_info_cache(ctxt) self.assertEqual(1, call_info['get_all_by_host']) self.assertEqual(0, call_info['get_by_uuid']) if not _task_state_not_none: expected_require_nw_info += 1 self.assertEqual(expected_require_nw_info, call_info['require_nw_info']) if _require_nw_info_update and not _task_state_not_none: expect_setup_network += 1 self.assertEqual(expect_setup_network, call_info['setup_network']) self.assertEqual(1, call_info['get_nw_info']) call_info['expected_instance'] = instances[3] self.compute._heal_instance_info_cache(ctxt) self.assertEqual(1, call_info['get_all_by_host']) self.assertEqual(1, call_info['get_by_uuid']) if not _task_state_not_none: expected_require_nw_info += 1 self.assertEqual(expected_require_nw_info, call_info['require_nw_info']) if _require_nw_info_update and not _task_state_not_none: expect_setup_network += 1 self.assertEqual(expect_setup_network, call_info['setup_network']) self.assertEqual(2, call_info['get_nw_info']) # Make an instance switch hosts instances[4]['host'] = 'not-me' # Make an instance disappear instance_map.pop(instances[5]['uuid']) # Make an instance switch to be Deleting instances[6]['task_state'] = task_states.DELETING # '4', '5', and '6' should be skipped.. call_info['expected_instance'] = instances[7] self.compute._heal_instance_info_cache(ctxt) self.assertEqual(1, call_info['get_all_by_host']) self.assertEqual(4, call_info['get_by_uuid']) if not _task_state_not_none: expected_require_nw_info += 1 self.assertEqual(expected_require_nw_info, call_info['require_nw_info']) if _require_nw_info_update and not _task_state_not_none: expect_setup_network += 1 self.assertEqual(expect_setup_network, call_info['setup_network']) self.assertEqual(3, call_info['get_nw_info']) # Should be no more left. self.assertEqual(0, len(self.compute._instance_uuids_to_heal)) # This should cause a DB query now, so get a list of instances # where none can be processed to make sure we handle that case # cleanly. 
Use just '0' (Building) and '1' (Deleting) instances = instances[0:2] self.compute._heal_instance_info_cache(ctxt) # Should have called the list once more self.assertEqual(2, call_info['get_all_by_host']) # Stays the same because we remove invalid entries from the list self.assertEqual(4, call_info['get_by_uuid']) # Stays the same because we didn't find anything to process self.assertEqual(expected_require_nw_info, call_info['require_nw_info']) self.assertEqual(expect_setup_network, call_info['setup_network']) self.assertEqual(3, call_info['get_nw_info']) def test_heal_instance_info_cache(self): self._heal_instance_info_cache() def test_heal_instance_info_cache_with_instance_exception(self): self._heal_instance_info_cache(_get_instance_nw_info_raise=True) def test_heal_instance_info_cache_with_info_cache_exception(self): self._heal_instance_info_cache(_get_instance_nw_info_raise_cache=True) def test_heal_instance_info_cache_with_port_update(self): self._heal_instance_info_cache(_require_nw_info_update=True) def test_heal_instance_info_cache_with_port_update_instance_not_steady( self): self._heal_instance_info_cache(_require_nw_info_update=True, _task_state_not_none=True) @mock.patch('nova.objects.InstanceList.get_by_filters') @mock.patch('nova.compute.api.API.unrescue') def test_poll_rescued_instances(self, unrescue, get): timed_out_time = timeutils.utcnow() - datetime.timedelta(minutes=5) not_timed_out_time = timeutils.utcnow() instances = [objects.Instance( uuid=uuids.pool_instance_1, vm_state=vm_states.RESCUED, launched_at=timed_out_time), objects.Instance( uuid=uuids.pool_instance_2, vm_state=vm_states.RESCUED, launched_at=timed_out_time), objects.Instance( uuid=uuids.pool_instance_3, vm_state=vm_states.RESCUED, launched_at=not_timed_out_time)] unrescued_instances = {uuids.pool_instance_1: False, uuids.pool_instance_2: False} def fake_instance_get_all_by_filters(context, filters, expected_attrs=None, use_slave=False): self.assertEqual(["system_metadata"], expected_attrs) return instances get.side_effect = fake_instance_get_all_by_filters def fake_unrescue(context, instance): unrescued_instances[instance['uuid']] = True unrescue.side_effect = fake_unrescue self.flags(rescue_timeout=60) ctxt = context.get_admin_context() self.compute._poll_rescued_instances(ctxt) for instance in unrescued_instances.values(): self.assertTrue(instance) @mock.patch('nova.objects.InstanceList.get_by_filters') def test_poll_rebooting_instances(self, get): reboot_timeout = 60 updated_at = timeutils.utcnow() - datetime.timedelta(minutes=5) to_poll = [objects.Instance( uuid=uuids.pool_instance_1, task_state=task_states.REBOOTING, updated_at=updated_at), objects.Instance( uuid=uuids.pool_instance_2, task_state=task_states.REBOOT_STARTED, updated_at=updated_at), objects.Instance( uuid=uuids.pool_instance_3, task_state=task_states.REBOOT_PENDING, updated_at=updated_at)] self.flags(reboot_timeout=reboot_timeout) get.return_value = to_poll ctxt = context.get_admin_context() with (mock.patch.object( self.compute.driver, 'poll_rebooting_instances' )) as mock_poll: self.compute._poll_rebooting_instances(ctxt) mock_poll.assert_called_with(reboot_timeout, to_poll) filters = {'host': 'fake-mini', 'task_state': [ task_states.REBOOTING, task_states.REBOOT_STARTED, task_states.REBOOT_PENDING]} get.assert_called_once_with(ctxt, filters, expected_attrs=[], use_slave=True) def test_poll_unconfirmed_resizes(self): instances = [ fake_instance.fake_db_instance(uuid=uuids.migration_instance_1, vm_state=vm_states.RESIZED, 
task_state=None), fake_instance.fake_db_instance(uuid=uuids.migration_instance_none), fake_instance.fake_db_instance(uuid=uuids.migration_instance_2, vm_state=vm_states.ERROR, task_state=None), fake_instance.fake_db_instance(uuid=uuids.migration_instance_3, vm_state=vm_states.ACTIVE, task_state= task_states.REBOOTING), fake_instance.fake_db_instance(uuid=uuids.migration_instance_4, vm_state=vm_states.RESIZED, task_state=None), fake_instance.fake_db_instance(uuid=uuids.migration_instance_5, vm_state=vm_states.ACTIVE, task_state=None), # The expected migration result will be None instead of error # since _poll_unconfirmed_resizes will not change it # when the instance vm state is RESIZED and task state # is deleting, see bug 1301696 for more detail fake_instance.fake_db_instance(uuid=uuids.migration_instance_6, vm_state=vm_states.RESIZED, task_state='deleting'), fake_instance.fake_db_instance(uuid=uuids.migration_instance_7, vm_state=vm_states.RESIZED, task_state='soft-deleting'), fake_instance.fake_db_instance(uuid=uuids.migration_instance_8, vm_state=vm_states.ACTIVE, task_state='resize_finish')] expected_migration_status = {uuids.migration_instance_1: 'confirmed', uuids.migration_instance_none: 'error', uuids.migration_instance_2: 'error', uuids.migration_instance_3: 'error', uuids.migration_instance_4: None, uuids.migration_instance_5: 'error', uuids.migration_instance_6: None, uuids.migration_instance_7: None, uuids.migration_instance_8: None} migrations = [] for i, instance in enumerate(instances, start=1): fake_mig = test_migration.fake_db_migration() fake_mig.update({'id': i, 'instance_uuid': instance['uuid'], 'status': None}) migrations.append(fake_mig) def fake_instance_get_by_uuid(context, instance_uuid, columns_to_join=None, use_slave=False): self.assertIn('metadata', columns_to_join) self.assertIn('system_metadata', columns_to_join) # raise InstanceNotFound exception for non-existing instance # represented by UUID: uuids.migration_instance_none if instance_uuid == uuids.db_instance_nonexist: raise exception.InstanceNotFound(instance_id=instance_uuid) for instance in instances: if instance['uuid'] == instance_uuid: return instance def fake_migration_get_unconfirmed_by_dest_compute(context, resize_confirm_window, dest_compute, use_slave=False): self.assertEqual(dest_compute, CONF.host) return migrations def fake_migration_update(context, mid, updates): for migration in migrations: if migration['id'] == mid: migration.update(updates) return migration def fake_confirm_resize(cls, context, instance, migration=None): # raise exception for uuids.migration_instance_4 to check # migration status does not get set to 'error' on confirm_resize # failure.
if instance['uuid'] == uuids.migration_instance_4: raise test.TestingException('bomb') self.assertIsNotNone(migration) for migration2 in migrations: if (migration2['instance_uuid'] == migration['instance_uuid']): migration2['status'] = 'confirmed' self.stub_out('nova.db.api.instance_get_by_uuid', fake_instance_get_by_uuid) self.stub_out('nova.db.api.migration_get_unconfirmed_by_dest_compute', fake_migration_get_unconfirmed_by_dest_compute) self.stub_out('nova.db.api.migration_update', fake_migration_update) self.stub_out('nova.compute.api.API.confirm_resize', fake_confirm_resize) def fetch_instance_migration_status(instance_uuid): for migration in migrations: if migration['instance_uuid'] == instance_uuid: return migration['status'] self.flags(resize_confirm_window=60) ctxt = context.get_admin_context() self.compute._poll_unconfirmed_resizes(ctxt) for instance_uuid, status in expected_migration_status.items(): self.assertEqual(status, fetch_instance_migration_status(instance_uuid)) def test_instance_build_timeout_mixed_instances(self): # Tests that instances which failed to build within the configured # instance_build_timeout value are set to error state. self.flags(instance_build_timeout=30) ctxt = context.get_admin_context() created_at = timeutils.utcnow() + datetime.timedelta(seconds=-60) filters = {'vm_state': vm_states.BUILDING, 'host': CONF.host} # these are the ones that are expired old_instances = [] for x in range(4): instance = {'uuid': uuidutils.generate_uuid(), 'created_at': created_at} instance.update(filters) old_instances.append(fake_instance.fake_db_instance(**instance)) # not expired instances = list(old_instances) # copy the contents of old_instances new_instance = { 'uuid': uuids.fake, 'created_at': timeutils.utcnow(), } sort_key = 'created_at' sort_dir = 'desc' new_instance.update(filters) instances.append(fake_instance.fake_db_instance(**new_instance)) # creating mocks with test.nested( mock.patch.object(db, 'instance_get_all_by_filters', return_value=instances), mock.patch.object(objects.Instance, 'save'), ) as ( instance_get_all_by_filters, conductor_instance_update ): # run the code self.compute._check_instance_build_time(ctxt) # check our assertions instance_get_all_by_filters.assert_called_once_with( ctxt, filters, sort_key, sort_dir, marker=None, columns_to_join=[], limit=None) self.assertThat(conductor_instance_update.mock_calls, testtools_matchers.HasLength(len(old_instances))) for inst in old_instances: conductor_instance_update.assert_has_calls([ mock.call()]) @mock.patch.object(objects.Instance, 'save') def test_instance_update_host_check(self, mock_save): # make sure rt usage doesn't update if the host or node is different instance = self._create_fake_instance_obj({'host': 'someotherhost'}) with mock.patch.object(self.compute.rt, '_update_usage', new=mock.NonCallableMock()): self.compute._instance_update(self.context, instance, vcpus=4) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'remove_provider_tree_from_instance_allocation') @mock.patch.object(compute_manager.ComputeManager, '_get_instances_on_driver') @mock.patch.object(compute_manager.ComputeManager, '_get_instance_block_device_info') @mock.patch.object(compute_manager.ComputeManager, '_is_instance_storage_shared') @mock.patch.object(fake.FakeDriver, 'destroy') @mock.patch('nova.objects.MigrationList.get_by_filters') @mock.patch('nova.objects.Migration.save') def test_destroy_evacuated_instance_on_shared_storage(self, mock_save, mock_get_filter, mock_destroy, mock_is_inst, mock_get_blk, mock_get_inst, mock_remove_allocs): fake_context = context.get_admin_context() # instances in central db instances = [ # those are still related to this host self._create_fake_instance_obj( {'host': self.compute.host}), self._create_fake_instance_obj( {'host': self.compute.host}), self._create_fake_instance_obj( {'host': self.compute.host}) ] # those are already been evacuated to other host evacuated_instance = self._create_fake_instance_obj( {'host': 'otherhost'}) migration = objects.Migration(instance_uuid=evacuated_instance.uuid) migration.source_node = NODENAME mock_get_filter.return_value = [migration] instances.append(evacuated_instance) mock_get_inst.return_value = instances mock_get_blk.return_value = 'fake_bdi' mock_is_inst.return_value = True node_cache = { self.rt.compute_nodes[NODENAME].uuid: objects.ComputeNode( uuid=self.rt.compute_nodes[NODENAME].uuid, hypervisor_hostname=NODENAME) } with mock.patch.object( self.compute.network_api, 'get_instance_nw_info', return_value='fake_network_info') as mock_get_nw: self.compute._destroy_evacuated_instances(fake_context, node_cache) mock_get_filter.assert_called_once_with(fake_context, {'source_compute': self.compute.host, 'status': [ 'accepted', 'pre-migrating', 'done'], 'migration_type': 'evacuation'}) mock_get_inst.assert_called_once_with(fake_context) mock_get_nw.assert_called_once_with(fake_context, evacuated_instance) mock_get_blk.assert_called_once_with(fake_context, evacuated_instance) mock_is_inst.assert_called_once_with(fake_context, evacuated_instance) mock_destroy.assert_called_once_with(fake_context, evacuated_instance, 'fake_network_info', 'fake_bdi', False) mock_remove_allocs.assert_called_once_with( fake_context, evacuated_instance.uuid, self.rt.compute_nodes[NODENAME].uuid) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'remove_provider_tree_from_instance_allocation') @mock.patch.object(compute_manager.ComputeManager, '_get_instances_on_driver') @mock.patch.object(compute_manager.ComputeManager, '_get_instance_block_device_info') @mock.patch.object(fake.FakeDriver, 'check_instance_shared_storage_local') @mock.patch.object(compute_rpcapi.ComputeAPI, 'check_instance_shared_storage') @mock.patch.object(fake.FakeDriver, 'check_instance_shared_storage_cleanup') @mock.patch.object(fake.FakeDriver, 'destroy') @mock.patch('nova.objects.MigrationList.get_by_filters') @mock.patch('nova.objects.Migration.save') def test_destroy_evacuated_instance_with_disks(self, mock_save, mock_get_filter, mock_destroy, mock_check_clean, mock_check, mock_check_local, mock_get_blk, mock_get_drv, mock_remove_allocs): fake_context = context.get_admin_context() # instances in central db instances = [ # those are still related to this host self._create_fake_instance_obj( {'host': self.compute.host}), self._create_fake_instance_obj( {'host': self.compute.host}), self._create_fake_instance_obj( {'host': self.compute.host}) ] # those are already been evacuated to other host evacuated_instance = self._create_fake_instance_obj( {'host': 'otherhost'}) migration = objects.Migration(instance_uuid=evacuated_instance.uuid) migration.source_node = NODENAME mock_get_filter.return_value = [migration] instances.append(evacuated_instance) mock_get_drv.return_value = instances mock_get_blk.return_value = 'fake-bdi' mock_check_local.return_value = {'filename': 'tmpfilename'} mock_check.return_value = False node_cache = { self.rt.compute_nodes[NODENAME].uuid: objects.ComputeNode( uuid=self.rt.compute_nodes[NODENAME].uuid, hypervisor_hostname=NODENAME) } with mock.patch.object( self.compute.network_api, 'get_instance_nw_info', return_value='fake_network_info') as mock_get_nw: self.compute._destroy_evacuated_instances(fake_context, node_cache) mock_get_drv.assert_called_once_with(fake_context) mock_get_nw.assert_called_once_with(fake_context, evacuated_instance) mock_get_blk.assert_called_once_with(fake_context, evacuated_instance) mock_check_local.assert_called_once_with(fake_context, evacuated_instance) mock_check.assert_called_once_with(fake_context, evacuated_instance, {'filename': 'tmpfilename'}, host=None) mock_check_clean.assert_called_once_with(fake_context, {'filename': 'tmpfilename'}) mock_destroy.assert_called_once_with(fake_context, evacuated_instance, 'fake_network_info', 'fake-bdi', True) mock_remove_allocs.assert_called_once_with( fake_context, evacuated_instance.uuid, self.rt.compute_nodes[NODENAME].uuid) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'remove_provider_tree_from_instance_allocation') @mock.patch.object(compute_manager.ComputeManager, '_get_instances_on_driver') @mock.patch.object(compute_manager.ComputeManager, '_get_instance_block_device_info') @mock.patch.object(fake.FakeDriver, 'check_instance_shared_storage_local') @mock.patch.object(compute_rpcapi.ComputeAPI, 'check_instance_shared_storage') @mock.patch.object(fake.FakeDriver, 'check_instance_shared_storage_cleanup') @mock.patch.object(fake.FakeDriver, 'destroy') @mock.patch('nova.objects.MigrationList.get_by_filters') @mock.patch('nova.objects.Migration.save') def test_destroy_evacuated_instance_not_implemented(self, mock_save, mock_get_filter, mock_destroy, mock_check_clean, mock_check, mock_check_local, mock_get_blk, mock_get_inst, mock_remove_allocs): fake_context = context.get_admin_context() # instances in central db instances = [ # those are still related to this host self._create_fake_instance_obj( {'host': self.compute.host}), self._create_fake_instance_obj( {'host': self.compute.host}), self._create_fake_instance_obj( {'host': self.compute.host}) ] # those are already been evacuated to other host evacuated_instance = self._create_fake_instance_obj( {'host': 'otherhost'}) migration = objects.Migration(instance_uuid=evacuated_instance.uuid) migration.source_node = NODENAME mock_get_filter.return_value = [migration] instances.append(evacuated_instance) mock_get_inst.return_value = instances mock_get_blk.return_value = 'fake_bdi' mock_check_local.side_effect = NotImplementedError node_cache = { self.rt.compute_nodes[NODENAME].uuid: objects.ComputeNode( uuid=self.rt.compute_nodes[NODENAME].uuid, hypervisor_hostname=NODENAME) } with mock.patch.object( self.compute.network_api, 'get_instance_nw_info', return_value='fake_network_info') as mock_get_nw: self.compute._destroy_evacuated_instances(fake_context, node_cache) mock_get_inst.assert_called_once_with(fake_context) mock_get_nw.assert_called_once_with(fake_context, evacuated_instance) mock_get_blk.assert_called_once_with(fake_context, evacuated_instance) mock_check_local.assert_called_once_with(fake_context, evacuated_instance) mock_destroy.assert_called_once_with(fake_context, evacuated_instance, 'fake_network_info', 'fake_bdi', True) def test_complete_partial_deletion(self): admin_context = context.get_admin_context() instance = objects.Instance() instance.id = 1 instance.uuid = uuids.instance instance.vm_state = vm_states.DELETED instance.task_state = None instance.system_metadata = {'fake_key': 'fake_value'} instance.flavor = objects.Flavor(vcpus=1, memory_mb=1) instance.project_id = 'fake-prj' instance.user_id = 'fake-user' instance.deleted = False def fake_destroy(self): instance.deleted = True self.stub_out('nova.objects.instance.Instance.destroy', fake_destroy) self.stub_out('nova.db.api.block_device_mapping_get_all_by_instance', lambda *a, **k: None) self.stub_out('nova.compute.manager.ComputeManager.' 
'_complete_deletion', lambda *a, **k: None) self.stub_out('nova.objects.quotas.Quotas.reserve', lambda *a, **k: None) self.stub_out('nova.compute.utils.notify_about_instance_usage', lambda *a, **k: None) self.stub_out('nova.compute.utils.notify_about_instance_action', lambda *a, **k: None) self.compute._complete_partial_deletion(admin_context, instance) self.assertNotEqual(0, instance.deleted) def test_terminate_instance_updates_tracker(self): admin_context = context.get_admin_context() cn = self.rt.compute_nodes[NODENAME] self.assertEqual(0, cn.vcpus_used) instance = self._create_fake_instance_obj() instance.vcpus = 1 self.compute.rt.instance_claim(admin_context, instance, NODENAME, None) self.assertEqual(1, cn.vcpus_used) self.compute.terminate_instance(admin_context, instance, []) self.assertEqual(0, cn.vcpus_used) @mock.patch('nova.compute.manager.ComputeManager' '._notify_about_instance_usage') @mock.patch('nova.objects.Quotas.reserve') # NOTE(cdent): At least in this test destroy() on the instance sets it # state back to active, meaning the resource tracker won't # update properly. @mock.patch('nova.objects.Instance.destroy') def test_init_deleted_instance_updates_tracker(self, noop1, noop2, noop3): admin_context = context.get_admin_context() cn = self.compute.rt.compute_nodes[NODENAME] self.assertEqual(0, cn.vcpus_used) instance = self._create_fake_instance_obj() instance.vcpus = 1 self.assertEqual(0, cn.vcpus_used) self.compute.rt.instance_claim(admin_context, instance, NODENAME, None) self.compute._init_instance(admin_context, instance) self.assertEqual(1, cn.vcpus_used) instance.vm_state = vm_states.DELETED self.compute._init_instance(admin_context, instance) self.assertEqual(0, cn.vcpus_used) def test_init_instance_for_partial_deletion(self): admin_context = context.get_admin_context() instance = objects.Instance(admin_context) instance.id = 1 instance.vm_state = vm_states.DELETED instance.deleted = False instance.host = self.compute.host def fake_partial_deletion(self, context, instance): instance['deleted'] = instance['id'] self.stub_out('nova.compute.manager.ComputeManager.' 
'_complete_partial_deletion', fake_partial_deletion) self.compute._init_instance(admin_context, instance) self.assertNotEqual(0, instance['deleted']) @mock.patch.object(compute_manager.ComputeManager, '_complete_partial_deletion') def test_partial_deletion_raise_exception(self, mock_complete): admin_context = context.get_admin_context() instance = objects.Instance(admin_context) instance.uuid = uuids.fake instance.vm_state = vm_states.DELETED instance.deleted = False instance.host = self.compute.host mock_complete.side_effect = ValueError self.compute._init_instance(admin_context, instance) mock_complete.assert_called_once_with(admin_context, instance) def test_add_remove_fixed_ip_updates_instance_updated_at(self): def _noop(*args, **kwargs): pass instance = self._create_fake_instance_obj() updated_at_1 = instance['updated_at'] with mock.patch.object( self.compute.network_api, 'add_fixed_ip_to_instance', _noop): self.compute.add_fixed_ip_to_instance( self.context, 'fake', instance) updated_at_2 = db.instance_get_by_uuid(self.context, instance['uuid'])['updated_at'] with mock.patch.object( self.compute.network_api, 'remove_fixed_ip_from_instance', _noop): self.compute.remove_fixed_ip_from_instance( self.context, 'fake', instance) updated_at_3 = db.instance_get_by_uuid(self.context, instance['uuid'])['updated_at'] updated_ats = (updated_at_1, updated_at_2, updated_at_3) self.assertEqual(len(updated_ats), len(set(updated_ats))) def test_no_pending_deletes_for_soft_deleted_instances(self): self.flags(reclaim_instance_interval=0) ctxt = context.get_admin_context() instance = self._create_fake_instance_obj( params={'host': CONF.host, 'vm_state': vm_states.SOFT_DELETED, 'deleted_at': timeutils.utcnow()}) self.compute._run_pending_deletes(ctxt) instance = db.instance_get_by_uuid(self.context, instance['uuid']) self.assertFalse(instance['cleaned']) @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(compute_manager.ComputeManager, '_delete_instance') def test_reclaim_queued_deletes(self, mock_delete, mock_bdms): self.flags(reclaim_instance_interval=3600) ctxt = context.get_admin_context() mock_bdms.return_value = [] # Active self._create_fake_instance_obj(params={'host': CONF.host}) # Deleted not old enough self._create_fake_instance_obj(params={'host': CONF.host, 'vm_state': vm_states.SOFT_DELETED, 'deleted_at': timeutils.utcnow()}) # Deleted old enough (only this one should be reclaimed) deleted_at = (timeutils.utcnow() - datetime.timedelta(hours=1, minutes=5)) self._create_fake_instance_obj( params={'host': CONF.host, 'vm_state': vm_states.SOFT_DELETED, 'deleted_at': deleted_at}) # Restoring # NOTE(hanlind): This specifically tests for a race condition # where restoring a previously soft deleted instance sets # deleted_at back to None, causing reclaim to think it can be # deleted, see LP #1186243. 
self._create_fake_instance_obj( params={'host': CONF.host, 'vm_state': vm_states.SOFT_DELETED, 'task_state': task_states.RESTORING}) self.compute._reclaim_queued_deletes(ctxt) mock_delete.assert_called_once_with( ctxt, test.MatchType(objects.Instance), []) mock_bdms.assert_called_once_with(ctxt, mock.ANY) @mock.patch.object(objects.InstanceList, 'get_by_filters') @mock.patch.object(compute_manager.ComputeManager, '_deleted_old_enough') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(compute_manager.ComputeManager, '_delete_instance') def test_reclaim_queued_deletes_continue_on_error(self, mock_delete_inst, mock_get_uuid, mock_delete_old, mock_get_filter): # Verify that reclaim continues on error. self.flags(reclaim_instance_interval=3600) ctxt = context.get_admin_context() deleted_at = (timeutils.utcnow() - datetime.timedelta(hours=1, minutes=5)) instance1 = self._create_fake_instance_obj( params={'host': CONF.host, 'vm_state': vm_states.SOFT_DELETED, 'deleted_at': deleted_at}) instance2 = self._create_fake_instance_obj( params={'host': CONF.host, 'vm_state': vm_states.SOFT_DELETED, 'deleted_at': deleted_at}) mock_get_filter.return_value = [instance1, instance2] mock_delete_old.side_effect = (True, True) mock_get_uuid.side_effect = ([], []) mock_delete_inst.side_effect = (test.TestingException, None) self.compute._reclaim_queued_deletes(ctxt) mock_get_filter.assert_called_once_with(ctxt, mock.ANY, expected_attrs=instance_obj.INSTANCE_DEFAULT_FIELDS, use_slave=True) mock_delete_old.assert_has_calls([mock.call(instance1, 3600), mock.call(instance2, 3600)]) mock_get_uuid.assert_has_calls([mock.call(ctxt, instance1.uuid), mock.call(ctxt, instance2.uuid)]) mock_delete_inst.assert_has_calls([ mock.call(ctxt, instance1, []), mock.call(ctxt, instance2, [])]) @mock.patch.object(fake.FakeDriver, 'get_info') @mock.patch.object(compute_manager.ComputeManager, '_sync_instance_power_state') def test_sync_power_states(self, mock_sync, mock_get): ctxt = self.context.elevated() self._create_fake_instance_obj({'host': self.compute.host}) self._create_fake_instance_obj({'host': self.compute.host}) self._create_fake_instance_obj({'host': self.compute.host}) mock_get.side_effect = [ exception.InstanceNotFound(instance_id=uuids.instance), hardware.InstanceInfo(state=power_state.RUNNING), hardware.InstanceInfo(state=power_state.SHUTDOWN)] mock_sync.side_effect = \ exception.InstanceNotFound(instance_id=uuids.instance) self.compute._sync_power_states(ctxt) mock_get.assert_has_calls([mock.call(mock.ANY), mock.call(mock.ANY), mock.call(mock.ANY)]) mock_sync.assert_has_calls([ mock.call(ctxt, mock.ANY, power_state.NOSTATE, use_slave=True), mock.call(ctxt, mock.ANY, power_state.RUNNING, use_slave=True), mock.call(ctxt, mock.ANY, power_state.SHUTDOWN, use_slave=True)]) @mock.patch.object(compute_manager.ComputeManager, '_get_power_state') @mock.patch.object(compute_manager.ComputeManager, '_sync_instance_power_state') def _test_lifecycle_event(self, lifecycle_event, vm_power_state, mock_sync, mock_get, is_actual_state=True): instance = self._create_fake_instance_obj() uuid = instance['uuid'] actual_state = (vm_power_state if vm_power_state is not None and is_actual_state else power_state.NOSTATE) mock_get.return_value = actual_state self.compute.handle_events(event.LifecycleEvent(uuid, lifecycle_event)) mock_get.assert_called_once_with(mock.ANY, test.ContainKeyValue('uuid', uuid)) if actual_state == vm_power_state: mock_sync.assert_called_once_with(mock.ANY, 
test.ContainKeyValue('uuid', uuid), vm_power_state) def test_lifecycle_events(self): self._test_lifecycle_event(event.EVENT_LIFECYCLE_STOPPED, power_state.SHUTDOWN) self._test_lifecycle_event(event.EVENT_LIFECYCLE_STOPPED, power_state.SHUTDOWN, is_actual_state=False) self._test_lifecycle_event(event.EVENT_LIFECYCLE_STARTED, power_state.RUNNING) self._test_lifecycle_event(event.EVENT_LIFECYCLE_PAUSED, power_state.PAUSED) self._test_lifecycle_event(event.EVENT_LIFECYCLE_RESUMED, power_state.RUNNING) self._test_lifecycle_event(-1, None) def test_lifecycle_event_non_existent_instance(self): # No error raised for non-existent instance because of inherent race # between database updates and hypervisor events. See bug #1180501. event_instance = event.LifecycleEvent('does-not-exist', event.EVENT_LIFECYCLE_STOPPED) self.compute.handle_events(event_instance) @mock.patch.object(objects.Migration, 'get_by_id') def test_confirm_resize_roll_back_quota_migration_not_found(self, mock_get_by_id): instance = self._create_fake_instance_obj() migration = objects.Migration() migration.instance_uuid = instance.uuid migration.status = 'finished' migration.id = 0 mock_get_by_id.side_effect = exception.MigrationNotFound( migration_id=0) self.compute.confirm_resize(self.context, instance=instance, migration=migration) @mock.patch.object(instance_obj.Instance, 'get_by_uuid') def test_confirm_resize_roll_back_quota_instance_not_found(self, mock_get_by_id): instance = self._create_fake_instance_obj() migration = objects.Migration() migration.instance_uuid = instance.uuid migration.status = 'finished' migration.id = 0 mock_get_by_id.side_effect = exception.InstanceNotFound( instance_id=instance.uuid) self.compute.confirm_resize(self.context, instance=instance, migration=migration) @mock.patch.object(objects.Migration, 'get_by_id') def test_confirm_resize_roll_back_quota_status_confirmed(self, mock_get_by_id): instance = self._create_fake_instance_obj() migration = objects.Migration() migration.instance_uuid = instance.uuid migration.status = 'confirmed' migration.id = 0 mock_get_by_id.return_value = migration self.compute.confirm_resize(self.context, instance=instance, migration=migration) @mock.patch.object(objects.Migration, 'get_by_id') def test_confirm_resize_roll_back_quota_status_dummy(self, mock_get_by_id): instance = self._create_fake_instance_obj() migration = objects.Migration() migration.instance_uuid = instance.uuid migration.status = 'dummy' migration.id = 0 mock_get_by_id.return_value = migration self.compute.confirm_resize(self.context, instance=instance, migration=migration) @mock.patch.object(objects.MigrationContext, 'get_pci_mapping_for_migration') def test_allow_confirm_resize_on_instance_in_deleting_task_state( self, mock_pci_mapping): instance = self._create_fake_instance_obj() old_type = instance.flavor new_type = flavors.get_flavor_by_flavor_id('4') instance.flavor = new_type instance.old_flavor = old_type instance.new_flavor = new_type instance.migration_context = objects.MigrationContext() def fake_setup_networks_on_host(self, *args, **kwargs): pass self._mock_rt() with test.nested( mock.patch.object(self.compute.network_api, 'setup_networks_on_host', side_effect=fake_setup_networks_on_host), mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'remove_provider_tree_from_instance_allocation') ) as (mock_setup, mock_remove_allocs): migration = objects.Migration(context=self.context.elevated()) migration.instance_uuid = instance.uuid migration.status = 'finished' migration.migration_type = 'resize' migration.create() instance.task_state = task_states.DELETING instance.vm_state = vm_states.RESIZED instance.system_metadata = {} instance.save() self.compute.confirm_resize(self.context, instance=instance, migration=migration) instance.refresh() self.assertEqual(vm_states.ACTIVE, instance['vm_state']) def _get_instance_and_bdm_for_dev_defaults_tests(self): instance = self._create_fake_instance_obj( params={'root_device_name': '/dev/vda'}) block_device_mapping = block_device_obj.block_device_make_list( self.context, [fake_block_device.FakeDbBlockDeviceDict( {'id': 3, 'instance_uuid': uuids.block_device_instance, 'device_name': '/dev/vda', 'source_type': 'volume', 'destination_type': 'volume', 'image_id': uuids.image, 'boot_index': 0})]) return instance, block_device_mapping @mock.patch.object(objects.Instance, 'save') @mock.patch.object(compute_manager.ComputeManager, '_default_device_names_for_instance') def test_default_block_device_names_empty_instance_root_dev(self, mock_def, mock_save): instance, bdms = self._get_instance_and_bdm_for_dev_defaults_tests() instance.root_device_name = None self.compute._default_block_device_names(instance, {}, bdms) self.assertEqual('/dev/vda', instance.root_device_name) mock_def.assert_called_once_with(instance, '/dev/vda', [], [], [bdm for bdm in bdms]) @mock.patch.object(objects.BlockDeviceMapping, 'save') @mock.patch.object(compute_manager.ComputeManager, '_default_device_names_for_instance') def test_default_block_device_names_empty_root_device(self, mock_def, mock_save): instance, bdms = self._get_instance_and_bdm_for_dev_defaults_tests() bdms[0]['device_name'] = None mock_save.return_value = None self.compute._default_block_device_names(instance, {}, bdms) mock_def.assert_called_once_with(instance, '/dev/vda', [], [], [bdm for bdm in bdms]) @mock.patch.object(objects.Instance, 'save') @mock.patch.object(objects.BlockDeviceMapping, 'save') @mock.patch.object(compute_manager.ComputeManager, '_default_root_device_name') @mock.patch.object(compute_manager.ComputeManager, '_default_device_names_for_instance') def test_default_block_device_names_no_root_device(self, mock_default_name, mock_default_dev, mock_blk_save, mock_inst_save): instance, bdms = self._get_instance_and_bdm_for_dev_defaults_tests() instance.root_device_name = None bdms[0]['device_name'] = None mock_default_dev.return_value = '/dev/vda' mock_blk_save.return_value = None self.compute._default_block_device_names(instance, {}, bdms) self.assertEqual('/dev/vda', instance.root_device_name) mock_default_dev.assert_called_once_with(instance, mock.ANY, bdms[0]) mock_default_name.assert_called_once_with(instance, '/dev/vda', [], [], [bdm for bdm in bdms]) def test_default_block_device_names_with_blank_volumes(self): instance = self._create_fake_instance_obj() image_meta = {} root_volume = objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict({ 'id': 1, 'instance_uuid': uuids.block_device_instance, 'source_type': 'volume', 'destination_type': 'volume', 'image_id': uuids.image, 'boot_index': 0})) blank_volume1 = objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict({ 'id': 2, 'instance_uuid': uuids.block_device_instance, 'source_type': 'blank', 'destination_type': 'volume', 'boot_index': -1})) blank_volume2 = 
objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict({ 'id': 3, 'instance_uuid': uuids.block_device_instance, 'source_type': 'blank', 'destination_type': 'volume', 'boot_index': -1})) ephemeral = objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict({ 'id': 4, 'instance_uuid': uuids.block_device_instance, 'source_type': 'blank', 'destination_type': 'local'})) swap = objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict({ 'id': 5, 'instance_uuid': uuids.block_device_instance, 'source_type': 'blank', 'destination_type': 'local', 'guest_format': 'swap' })) bdms = block_device_obj.block_device_make_list( self.context, [root_volume, blank_volume1, blank_volume2, ephemeral, swap]) with test.nested( mock.patch.object(self.compute, '_default_root_device_name', return_value='/dev/vda'), mock.patch.object(objects.BlockDeviceMapping, 'save'), mock.patch.object(self.compute, '_default_device_names_for_instance') ) as (default_root_device, object_save, default_device_names): self.compute._default_block_device_names(instance, image_meta, bdms) default_root_device.assert_called_once_with(instance, image_meta, bdms[0]) self.assertEqual('/dev/vda', instance.root_device_name) self.assertTrue(object_save.called) default_device_names.assert_called_once_with(instance, '/dev/vda', [bdms[-2]], [bdms[-1]], [bdm for bdm in bdms[:-2]]) def test_reserve_block_device_name(self): instance = self._create_fake_instance_obj( params={'root_device_name': '/dev/vda'}) bdm = objects.BlockDeviceMapping( **{'context': self.context, 'source_type': 'image', 'destination_type': 'local', 'image_id': uuids.image_instance, 'device_name': '/dev/vda', 'instance_uuid': instance.uuid}) bdm.create() self.compute.reserve_block_device_name(self.context, instance, '/dev/vdb', uuids.block_device_instance, 'virtio', 'disk', None, False) bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( self.context, instance.uuid) bdms = list(bdms) self.assertEqual(len(bdms), 2) bdms.sort(key=operator.attrgetter('device_name')) vol_bdm = bdms[1] self.assertEqual(vol_bdm.source_type, 'volume') self.assertIsNone(vol_bdm.boot_index) self.assertIsNone(vol_bdm.guest_format) self.assertEqual(vol_bdm.destination_type, 'volume') self.assertEqual(vol_bdm.device_name, '/dev/vdb') self.assertEqual(vol_bdm.volume_id, uuids.block_device_instance) self.assertEqual(vol_bdm.disk_bus, 'virtio') self.assertEqual(vol_bdm.device_type, 'disk') def test_reserve_block_device_name_with_iso_instance(self): instance = self._create_fake_instance_obj( params={'root_device_name': '/dev/hda'}) bdm = objects.BlockDeviceMapping( context=self.context, **{'source_type': 'image', 'destination_type': 'local', 'image_id': uuids.image, 'device_name': '/dev/hda', 'instance_uuid': instance.uuid}) bdm.create() self.compute.reserve_block_device_name(self.context, instance, '/dev/vdb', uuids.block_device_instance, 'ide', 'disk', None, False) bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( self.context, instance.uuid) bdms = list(bdms) self.assertEqual(2, len(bdms)) bdms.sort(key=operator.attrgetter('device_name')) vol_bdm = bdms[1] self.assertEqual('volume', vol_bdm.source_type) self.assertEqual('volume', vol_bdm.destination_type) self.assertEqual('/dev/hdb', vol_bdm.device_name) self.assertEqual(uuids.block_device_instance, vol_bdm.volume_id) self.assertEqual('ide', vol_bdm.disk_bus) self.assertEqual('disk', vol_bdm.device_type) @mock.patch.object(cinder.API, 'get_snapshot') def test_quiesce(self, mock_snapshot_get): # ensure 
instance can be quiesced and unquiesced instance = self._create_fake_instance_obj() mapping = [{'source_type': 'snapshot', 'snapshot_id': uuids.snap1}, {'source_type': 'snapshot', 'snapshot_id': uuids.snap2}] # unquiesce should wait until volume snapshots are completed mock_snapshot_get.side_effect = [{'status': 'creating'}, {'status': 'available'}] * 2 self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) self.compute.quiesce_instance(self.context, instance) self.compute.unquiesce_instance(self.context, instance, mapping) self.compute.terminate_instance(self.context, instance, []) mock_snapshot_get.assert_any_call(mock.ANY, uuids.snap1) mock_snapshot_get.assert_any_call(mock.ANY, uuids.snap2) self.assertEqual(4, mock_snapshot_get.call_count) def test_instance_fault_message_no_rescheduled_details_without_retry(self): """This test simulates a spawn failure with no retry data. If driver spawn raises an exception and there is no retry data available, the instance fault message should not contain any details about rescheduling. The fault message field is limited in size and a long message about rescheduling displaces the original error message. """ instance = self._create_fake_instance_obj() with mock.patch.object(self.compute.driver, 'spawn') as mock_spawn: mock_spawn.side_effect = test.TestingException('Preserve this') self.compute.build_and_run_instance( self.context, instance, {}, {}, {}, block_device_mapping=[]) self.assertEqual('Preserve this', instance.fault.message) @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch('nova.objects.Instance.destroy') @mock.patch('nova.context.target_cell') @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid') @mock.patch('nova.objects.Service.get_minimum_version') @mock.patch('nova.compute.utils.notify_about_instance_usage') @mock.patch('nova.objects.BuildRequest.get_by_instance_uuid') def test_delete_while_booting_instance_not_in_cell_db_cellsv2( self, br_get_by_instance, legacy_notify, minimum_server_version, im_get_by_instance, target_cell, instance_destroy, notify): minimum_server_version.return_value = 15 im_get_by_instance.return_value = mock.Mock() target_cell.return_value.__enter__.return_value = self.context instance = self._create_fake_instance_obj() instance.host = None instance.save() self.compute_api._delete_instance(self.context, instance) instance_destroy.assert_called_once_with() # the instance is updated during the delete so we only match by uuid test_utils.assert_instance_delete_notification_by_uuid( legacy_notify, notify, instance.uuid, self.compute_api.notifier, self.context) @mock.patch('nova.compute.api.API._local_delete_cleanup') @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch('nova.objects.Instance.destroy') @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid') @mock.patch('nova.compute.utils.notify_about_instance_usage') @mock.patch('nova.objects.BuildRequest.get_by_instance_uuid') def test_delete_while_booting_instance_not_scheduled_cellv1( self, br_get_by_instance, legacy_notify, im_get_by_instance, instance_destroy, notify, api_del_cleanup): instance = self._create_fake_instance_obj() instance.host = None instance.save() # This means compute api looks for an instance to destroy br_get_by_instance.side_effect = exception.BuildRequestNotFound( uuid=instance.uuid) # no mapping means cellv1 im_get_by_instance.return_value = None self.compute_api._delete_instance(self.context, instance) 
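# With no instance mapping (the cells v1 case above), the API takes the local delete path; the assertions below verify that the delete notifications are still emitted and the instance record is destroyed.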
test_utils.assert_instance_delete_notification_by_uuid( legacy_notify, notify, instance.uuid, self.compute_api.notifier, self.context) instance_destroy.assert_called_once_with() api_del_cleanup.assert_called_once() @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch('nova.objects.Instance.destroy') @mock.patch('nova.context.target_cell') @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid') @mock.patch('nova.compute.utils.notify_about_instance_usage') @mock.patch('nova.objects.BuildRequest.get_by_instance_uuid') def test_delete_while_booting_instance_not_scheduled_cellv2( self, br_get_by_instance, legacy_notify, im_get_by_instance, target_cell, instance_destroy, notify): target_cell.return_value.__enter__.return_value = self.context instance = self._create_fake_instance_obj() instance.host = None instance.save() # This means compute api looks for an instance to destroy br_get_by_instance.side_effect = exception.BuildRequestNotFound( uuid=instance.uuid) # having a mapping means cellsv2 im_get_by_instance.return_value = mock.Mock() self.compute_api._delete_instance(self.context, instance) instance_destroy.assert_called_once_with() test_utils.assert_instance_delete_notification_by_uuid( legacy_notify, notify, instance.uuid, self.compute_api.notifier, self.context) @ddt.ddt class ComputeAPITestCase(BaseTestCase): def setUp(self): super(ComputeAPITestCase, self).setUp() self.useFixture(fixtures.SpawnIsSynchronousFixture()) self.compute_api = compute.API() self.fake_image = { 'id': 'f9000000-0000-0000-0000-000000000000', 'name': 'fake_name', 'status': 'active', 'properties': {'kernel_id': uuids.kernel_id, 'ramdisk_id': uuids.ramdisk_id}, } def fake_show(obj, context, image_id, **kwargs): if image_id: return self.fake_image else: raise exception.ImageNotFound(image_id=image_id) self.fake_show = fake_show def fake_lookup(self, context, instance): return None, instance self.stub_out('nova.compute.api.API._lookup_instance', fake_lookup) # Mock out schedule_and_build_instances and rebuild_instance # since nothing in these tests should need those to actually # run. We do this to avoid possible races with other tests # that actually test those methods and mock things out within # them, like conductor tests. _patch = mock.patch.object(self.compute_api.compute_task_api, 'schedule_and_build_instances', autospec=True) self.schedule_and_build_instances_mock = _patch.start() self.addCleanup(_patch.stop) _patch = mock.patch.object(self.compute_api.compute_task_api, 'rebuild_instance', autospec=True) self.rebuild_instance_mock = _patch.start() self.addCleanup(_patch.stop) # Assume that we're always OK for network quota. def fake_validate_networks(context, requested_networks, num_instances): return num_instances validate_nets_patch = mock.patch.object( self.compute_api.network_api, 'validate_networks', fake_validate_networks) validate_nets_patch.start() self.addCleanup(validate_nets_patch.stop) self.default_flavor = objects.Flavor.get_by_name(self.context, 'm1.small') self.tiny_flavor = objects.Flavor.get_by_name(self.context, 'm1.tiny') def _run_instance(self, params=None): instance = self._create_fake_instance_obj(params, services=True) instance_uuid = instance['uuid'] self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instance.refresh() self.assertIsNone(instance['task_state']) return instance, instance_uuid def test_create_with_too_little_ram(self): # Test an instance type with too little memory. 
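# The image's min_ram (2) exceeds the flavor's memory_mb (1), so create should raise FlavorMemoryTooSmall; bumping the flavor memory afterwards should let it succeed.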
inst_type = self.default_flavor inst_type['memory_mb'] = 1 self.fake_image['min_ram'] = 2 self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', self.fake_show) self.assertRaises(exception.FlavorMemoryTooSmall, self.compute_api.create, self.context, inst_type, self.fake_image['id']) # Now increase the inst_type memory and make sure all is fine. inst_type['memory_mb'] = 2 (refs, resv_id) = self.compute_api.create(self.context, inst_type, self.fake_image['id']) def test_create_with_too_little_disk(self): # Test an instance type with too little disk space. inst_type = self.default_flavor inst_type['root_gb'] = 1 self.fake_image['min_disk'] = 2 self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', self.fake_show) self.assertRaises(exception.FlavorDiskSmallerThanMinDisk, self.compute_api.create, self.context, inst_type, self.fake_image['id']) # Now increase the inst_type disk space and make sure all is fine. inst_type['root_gb'] = 2 (refs, resv_id) = self.compute_api.create(self.context, inst_type, self.fake_image['id']) def test_create_with_too_large_image(self): # Test an image that is larger than the flavor's disk space. inst_type = self.default_flavor inst_type['root_gb'] = 1 self.fake_image['size'] = '1073741825' self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', self.fake_show) self.assertRaises(exception.FlavorDiskSmallerThanImage, self.compute_api.create, self.context, inst_type, self.fake_image['id']) # Reduce image to 1 GB limit and ensure it works self.fake_image['size'] = '1073741824' (refs, resv_id) = self.compute_api.create(self.context, inst_type, self.fake_image['id']) def test_create_just_enough_ram_and_disk(self): # Test an instance type with just enough ram and disk space. inst_type = self.default_flavor inst_type['root_gb'] = 2 inst_type['memory_mb'] = 2 self.fake_image['min_ram'] = 2 self.fake_image['min_disk'] = 2 self.fake_image['name'] = 'fake_name' self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', self.fake_show) (refs, resv_id) = self.compute_api.create(self.context, inst_type, self.fake_image['id']) def test_create_with_no_ram_and_disk_reqs(self): # Test an instance type with no min_ram or min_disk. 
inst_type = self.default_flavor inst_type['root_gb'] = 1 inst_type['memory_mb'] = 1 self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', self.fake_show) (refs, resv_id) = self.compute_api.create(self.context, inst_type, self.fake_image['id']) def test_create_with_deleted_image(self): # If we're given a deleted image by glance, we should not be able to # build from it self.fake_image['name'] = 'fake_name' self.fake_image['status'] = 'DELETED' self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', self.fake_show) expected_message = ( exception.ImageNotActive.msg_fmt % {'image_id': self.fake_image['id']}) with testtools.ExpectedException(exception.ImageNotActive, expected_message): self.compute_api.create(self.context, self.default_flavor, self.fake_image['id']) @mock.patch('nova.virt.hardware.numa_get_constraints') def test_create_with_numa_topology(self, numa_constraints_mock): numa_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( id=0, cpuset=set([1, 2]), memory=512), objects.InstanceNUMACell( id=1, cpuset=set([3, 4]), memory=512)]) numa_topology.obj_reset_changes() numa_constraints_mock.return_value = numa_topology instances, resv_id = self.compute_api.create(self.context, self.default_flavor, self.fake_image['id']) numa_constraints_mock.assert_called_once_with( self.default_flavor, test.MatchType(objects.ImageMeta)) self.assertEqual( numa_topology.cells[0].obj_to_primitive(), instances[0].numa_topology.cells[0].obj_to_primitive()) self.assertEqual( numa_topology.cells[1].obj_to_primitive(), instances[0].numa_topology.cells[1].obj_to_primitive()) def test_create_instance_defaults_display_name(self): # Verify that an instance cannot be created without a display_name. cases = [dict(), dict(display_name=None)] for instance in cases: (ref, resv_id) = self.compute_api.create(self.context, self.default_flavor, 'f5000000-0000-0000-0000-000000000000', **instance) self.assertIsNotNone(ref[0]['display_name']) def test_create_instance_sets_system_metadata(self): # Make sure image properties are copied into system metadata. with mock.patch.object(self.compute_api.compute_task_api, 'schedule_and_build_instances') as mock_sbi: (ref, resv_id) = self.compute_api.create( self.context, instance_type=self.default_flavor, image_href='f5000000-0000-0000-0000-000000000000') build_call = mock_sbi.call_args_list[0] instance = build_call[1]['build_requests'][0].instance image_props = {'image_kernel_id': uuids.kernel_id, 'image_ramdisk_id': uuids.ramdisk_id, 'image_something_else': 'meow', } for key, value in image_props.items(): self.assertIn(key, instance.system_metadata) self.assertEqual(value, instance.system_metadata[key]) def test_create_saves_flavor(self): with mock.patch.object(self.compute_api.compute_task_api, 'schedule_and_build_instances') as mock_sbi: (ref, resv_id) = self.compute_api.create( self.context, instance_type=self.default_flavor, image_href=uuids.image_href_id) build_call = mock_sbi.call_args_list[0] instance = build_call[1]['build_requests'][0].instance self.assertIn('flavor', instance) self.assertEqual(self.default_flavor.flavorid, instance.flavor.flavorid) self.assertNotIn('instance_type_id', instance.system_metadata) def test_create_instance_associates_security_groups(self): # Make sure create associates security groups. 
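# The resolved security group UUID should land on the RequestSpec passed to schedule_and_build_instances, which the assertions below check.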
with test.nested( mock.patch.object(self.compute_api.compute_task_api, 'schedule_and_build_instances'), mock.patch('nova.network.security_group_api.validate_name', return_value=uuids.secgroup_id), ) as (mock_sbi, mock_secgroups): self.compute_api.create( self.context, instance_type=self.default_flavor, image_href=uuids.image_href_id, security_groups=['testgroup']) build_call = mock_sbi.call_args_list[0] reqspec = build_call[1]['request_spec'][0] self.assertEqual(1, len(reqspec.security_groups)) self.assertEqual(uuids.secgroup_id, reqspec.security_groups[0].uuid) mock_secgroups.assert_called_once_with(mock.ANY, 'testgroup') def test_create_instance_with_invalid_security_group_raises(self): pre_build_len = len(db.instance_get_all(self.context)) with mock.patch( 'nova.network.security_group_api.validate_name', side_effect=exception.SecurityGroupNotFound('foo'), ) as mock_secgroups: self.assertRaises(exception.SecurityGroupNotFound, self.compute_api.create, self.context, instance_type=self.default_flavor, image_href=None, security_groups=['invalid_sec_group']) self.assertEqual(pre_build_len, len(db.instance_get_all(self.context))) mock_secgroups.assert_called_once_with(mock.ANY, 'invalid_sec_group') def test_create_with_malformed_user_data(self): # Test an instance type with malformed user data. self.fake_image['min_ram'] = 2 self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', self.fake_show) self.assertRaises(exception.InstanceUserDataMalformed, self.compute_api.create, self.context, self.default_flavor, self.fake_image['id'], user_data=b'banana') def test_create_with_base64_user_data(self): # Test an instance type with an acceptable amount of user data. self.fake_image['min_ram'] = 2 self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', self.fake_show) # NOTE(mikal): a string of length 48510 encodes to 65532 characters of # base64 (refs, resv_id) = self.compute_api.create( self.context, self.default_flavor, self.fake_image['id'], user_data=base64.encode_as_text(b'1' * 48510)) def test_populate_instance_for_create(self, num_instances=1): base_options = {'image_ref': self.fake_image['id'], 'system_metadata': {'fake': 'value'}, 'display_name': 'foo', 'uuid': uuids.instance} instance = objects.Instance() instance.update(base_options) instance = self.compute_api._populate_instance_for_create( self.context, instance, self.fake_image, 1, security_groups=objects.SecurityGroupList(), instance_type=self.tiny_flavor, num_instances=num_instances, shutdown_terminate=False) self.assertEqual(str(base_options['image_ref']), instance['system_metadata']['image_base_image_ref']) self.assertEqual(vm_states.BUILDING, instance['vm_state']) self.assertEqual(task_states.SCHEDULING, instance['task_state']) self.assertEqual(1, instance['launch_index']) self.assertEqual(base_options['display_name'], instance['display_name']) self.assertIsNotNone(instance.get('uuid')) self.assertEqual([], instance.security_groups.objects) self.assertIsNone(instance.ephemeral_key_uuid) def test_populate_instance_for_create_encrypted(self, num_instances=1): CONF.set_override('enabled', True, group='ephemeral_storage_encryption') CONF.set_override('backend', 'castellan.tests.unit.key_manager.mock_key_manager.' 
'MockKeyManager', group='key_manager') base_options = {'image_ref': self.fake_image['id'], 'system_metadata': {'fake': 'value'}, 'display_name': 'foo', 'uuid': uuids.instance} instance = objects.Instance() instance.update(base_options) self.compute_api.key_manager = key_manager.API() index = 1 instance = self.compute_api._populate_instance_for_create( self.context, instance, self.fake_image, index, security_groups=objects.SecurityGroupList(), instance_type=self.tiny_flavor, num_instances=num_instances, shutdown_terminate=False) self.assertIsNotNone(instance.ephemeral_key_uuid) def test_default_hostname_generator(self): fake_uuids = [uuidutils.generate_uuid() for x in range(4)] orig_populate = self.compute_api._populate_instance_for_create def _fake_populate(self, context, base_options, *args, **kwargs): base_options['uuid'] = fake_uuids.pop(0) return orig_populate(context, base_options, *args, **kwargs) self.stub_out('nova.compute.api.API.' '_populate_instance_for_create', _fake_populate) cases = [(None, 'server-%s' % fake_uuids[0]), ('Hello, Server!', 'hello-server'), ('<}\x1fh\x10e\x08l\x02l\x05o\x12!{>', 'hello'), ('hello_server', 'hello-server')] for display_name, hostname in cases: (ref, resv_id) = self.compute_api.create(self.context, self.default_flavor, image_href=uuids.image_href_id, display_name=display_name) self.assertEqual(ref[0]['hostname'], hostname) @mock.patch('nova.compute.api.API._get_requested_instance_group') def test_instance_create_adds_to_instance_group(self, get_group_mock): self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', self.fake_show) group = objects.InstanceGroup(self.context) group.uuid = uuids.fake group.project_id = self.context.project_id group.user_id = self.context.user_id group.create() get_group_mock.return_value = group (refs, resv_id) = self.compute_api.create( self.context, self.default_flavor, self.fake_image['id'], scheduler_hints={'group': group.uuid}) self.assertEqual(len(refs), len(group.members)) group = objects.InstanceGroup.get_by_uuid(self.context, group.uuid) self.assertIn(refs[0]['uuid'], group.members) @mock.patch('nova.objects.quotas.Quotas.check_deltas') @mock.patch('nova.compute.api.API._get_requested_instance_group') def test_create_instance_group_members_over_quota_during_recheck( self, get_group_mock, check_deltas_mock): self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', self.fake_show) # Simulate a race where the first check passes and the recheck fails. # The first call is for checking instances/cores/ram, second and third # calls are for checking server group members. check_deltas_mock.side_effect = [ None, None, exception.OverQuota(overs='server_group_members')] group = objects.InstanceGroup(self.context) group.uuid = uuids.fake group.project_id = self.context.project_id group.user_id = self.context.user_id group.create() get_group_mock.return_value = group self.assertRaises(exception.QuotaError, self.compute_api.create, self.context, self.default_flavor, self.fake_image['id'], scheduler_hints={'group': group.uuid}, check_server_group_quota=True) # The first call was for the instances/cores/ram check. self.assertEqual(3, check_deltas_mock.call_count) call1 = mock.call(self.context, {'server_group_members': 1}, group, self.context.user_id) call2 = mock.call(self.context, {'server_group_members': 0}, group, self.context.user_id) check_deltas_mock.assert_has_calls([call1, call2]) # Verify we removed the group members that were added after the first # quota check passed. 
group = objects.InstanceGroup.get_by_uuid(self.context, group.uuid) self.assertEqual(0, len(group.members)) @mock.patch('nova.objects.quotas.Quotas.check_deltas') @mock.patch('nova.compute.api.API._get_requested_instance_group') def test_create_instance_group_members_no_quota_recheck(self, get_group_mock, check_deltas_mock): self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', self.fake_show) # Disable recheck_quota. self.flags(recheck_quota=False, group='quota') group = objects.InstanceGroup(self.context) group.uuid = uuids.fake group.project_id = self.context.project_id group.user_id = self.context.user_id group.create() get_group_mock.return_value = group (refs, resv_id) = self.compute_api.create( self.context, self.default_flavor, self.fake_image['id'], scheduler_hints={'group': group.uuid}, check_server_group_quota=True) self.assertEqual(len(refs), len(group.members)) # check_deltas should have been called only once for server group # members. self.assertEqual(2, check_deltas_mock.call_count) call1 = mock.call(self.context, {'instances': 1, 'cores': self.default_flavor.vcpus, 'ram': self.default_flavor.memory_mb}, self.context.project_id, user_id=None, check_project_id=self.context.project_id, check_user_id=None) call2 = mock.call(self.context, {'server_group_members': 1}, group, self.context.user_id) check_deltas_mock.assert_has_calls([call1, call2]) group = objects.InstanceGroup.get_by_uuid(self.context, group.uuid) self.assertIn(refs[0]['uuid'], group.members) def test_instance_create_with_group_uuid_fails_group_not_exist(self): self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', self.fake_show) self.assertRaises( exception.InstanceGroupNotFound, self.compute_api.create, self.context, self.default_flavor, self.fake_image['id'], scheduler_hints={'group': '5b674f73-c8cf-40ef-9965-3b6fe4b304b1'}) @mock.patch('nova.objects.RequestSpec') def _test_rebuild(self, mock_reqspec, vm_state=None): instance = self._create_fake_instance_obj() instance_uuid = instance['uuid'] self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instance = objects.Instance.get_by_uuid(self.context, instance_uuid) self.assertIsNone(instance.task_state) # Set some image metadata that should get wiped out and reset # as well as some other metadata that should be preserved. instance.system_metadata.update({ 'image_kernel_id': 'old-data', 'image_ramdisk_id': 'old_data', 'image_something_else': 'old-data', 'image_should_remove': 'bye-bye', 'preserved': 'preserve this!'}) instance.save() # Make sure Compute API updates the image_ref before casting to # compute manager. 
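# fake_rpc_rebuild below captures the instance handed to the rebuild RPC so the test can assert on the updated image_ref and on which fields were left dirty.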
info = {'image_ref': None, 'clean': False} def fake_rpc_rebuild(context, **kwargs): info['image_ref'] = kwargs['instance'].image_ref info['clean'] = ('progress' not in kwargs['instance'].obj_what_changed()) with mock.patch.object(self.compute_api.compute_task_api, 'rebuild_instance', fake_rpc_rebuild): image_ref = uuids.new_image_ref password = "new_password" instance.vm_state = vm_state instance.save() self.compute_api.rebuild(self.context, instance, image_ref, password) self.assertEqual(info['image_ref'], image_ref) self.assertTrue(info['clean']) instance.refresh() self.assertEqual(instance.task_state, task_states.REBUILDING) sys_meta = {k: v for k, v in instance.system_metadata.items() if not k.startswith('instance_type')} self.assertEqual( {'image_kernel_id': uuids.kernel_id, 'image_min_disk': '1', 'image_ramdisk_id': uuids.ramdisk_id, 'image_something_else': 'meow', 'preserved': 'preserve this!', 'image_base_image_ref': image_ref, 'boot_roles': ''}, sys_meta) def test_rebuild(self): self._test_rebuild(vm_state=vm_states.ACTIVE) def test_rebuild_in_error_state(self): self._test_rebuild(vm_state=vm_states.ERROR) def test_rebuild_in_error_not_launched(self): instance = self._create_fake_instance_obj(params={'image_ref': ''}) flavor = instance.flavor self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', self.fake_show) self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) db.instance_update(self.context, instance['uuid'], {"vm_state": vm_states.ERROR, "launched_at": None}) instance = db.instance_get_by_uuid(self.context, instance['uuid']) instance['flavor'] = flavor self.assertRaises(exception.InstanceInvalidState, self.compute_api.rebuild, self.context, instance, instance['image_ref'], "new password") def test_rebuild_no_image(self): """Tests that rebuild fails if no root BDM is found for an instance without an image_ref (volume-backed instance). """ instance = self._create_fake_instance_obj(params={'image_ref': ''}) self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', self.fake_show) # The API request schema validates that a UUID is passed for the # imageRef parameter so we need to provide an image. 
ex = self.assertRaises(exception.NovaException, self.compute_api.rebuild, self.context, instance, self.fake_image['id'], 'new_password') self.assertIn('Unable to find root block device mapping for ' 'volume-backed instance', six.text_type(ex)) @mock.patch('nova.objects.RequestSpec') def test_rebuild_with_deleted_image(self, mock_reqspec): # If we're given a deleted image by glance, we should not be able to # rebuild from it instance = self._create_fake_instance_obj( params={'image_ref': FAKE_IMAGE_REF}) self.fake_image['name'] = 'fake_name' self.fake_image['status'] = 'DELETED' self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', self.fake_show) expected_message = ( exception.ImageNotActive.msg_fmt % {'image_id': self.fake_image['id']}) with testtools.ExpectedException(exception.ImageNotActive, expected_message): self.compute_api.rebuild(self.context, instance, self.fake_image['id'], 'new_password') @mock.patch('nova.objects.RequestSpec') def test_rebuild_with_too_little_ram(self, mock_reqspec): instance = self._create_fake_instance_obj( params={'image_ref': FAKE_IMAGE_REF}) instance.flavor.memory_mb = 64 instance.flavor.root_gb = 1 self.fake_image['min_ram'] = 128 self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', self.fake_show) self.assertRaises(exception.FlavorMemoryTooSmall, self.compute_api.rebuild, self.context, instance, self.fake_image['id'], 'new_password') # Reduce image memory requirements and make sure it works self.fake_image['min_ram'] = 64 self.compute_api.rebuild(self.context, instance, self.fake_image['id'], 'new_password') @mock.patch('nova.objects.RequestSpec') def test_rebuild_with_too_little_disk(self, mock_reqspec): instance = self._create_fake_instance_obj( params={'image_ref': FAKE_IMAGE_REF}) def fake_extract_flavor(_inst, prefix=''): if prefix == '': f = objects.Flavor(**test_flavor.fake_flavor) f.memory_mb = 64 f.root_gb = 1 return f else: raise KeyError() self.stub_out('nova.compute.flavors.extract_flavor', fake_extract_flavor) self.fake_image['min_disk'] = 2 self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', self.fake_show) self.assertRaises(exception.FlavorDiskSmallerThanMinDisk, self.compute_api.rebuild, self.context, instance, self.fake_image['id'], 'new_password') # Reduce image disk requirements and make sure it works self.fake_image['min_disk'] = 1 self.compute_api.rebuild(self.context, instance, self.fake_image['id'], 'new_password') @mock.patch('nova.objects.RequestSpec') def test_rebuild_with_just_enough_ram_and_disk(self, mock_reqspec): instance = self._create_fake_instance_obj( params={'image_ref': FAKE_IMAGE_REF}) def fake_extract_flavor(_inst, prefix=''): if prefix == '': f = objects.Flavor(**test_flavor.fake_flavor) f.memory_mb = 64 f.root_gb = 1 return f else: raise KeyError() self.stub_out('nova.compute.flavors.extract_flavor', fake_extract_flavor) self.fake_image['min_ram'] = 64 self.fake_image['min_disk'] = 1 self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', self.fake_show) self.compute_api.rebuild(self.context, instance, self.fake_image['id'], 'new_password') @mock.patch('nova.objects.RequestSpec') def test_rebuild_with_no_ram_and_disk_reqs(self, mock_reqspec): instance = self._create_fake_instance_obj( params={'image_ref': FAKE_IMAGE_REF}) def fake_extract_flavor(_inst, prefix=''): if prefix == '': f = objects.Flavor(**test_flavor.fake_flavor) f.memory_mb = 64 f.root_gb = 1 return f else: raise KeyError() self.stub_out('nova.compute.flavors.extract_flavor', fake_extract_flavor) 
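# The fake image defines no min_ram or min_disk, so the rebuild below should pass the flavor checks without raising.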
self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', self.fake_show) self.compute_api.rebuild(self.context, instance, self.fake_image['id'], 'new_password') @mock.patch('nova.objects.RequestSpec') def test_rebuild_with_too_large_image(self, mock_reqspec): instance = self._create_fake_instance_obj( params={'image_ref': FAKE_IMAGE_REF}) def fake_extract_flavor(_inst, prefix=''): if prefix == '': f = objects.Flavor(**test_flavor.fake_flavor) f.memory_mb = 64 f.root_gb = 1 return f else: raise KeyError() self.stub_out('nova.compute.flavors.extract_flavor', fake_extract_flavor) self.fake_image['size'] = '1073741825' self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', self.fake_show) self.assertRaises(exception.FlavorDiskSmallerThanImage, self.compute_api.rebuild, self.context, instance, self.fake_image['id'], 'new_password') # Reduce image to 1 GB limit and ensure it works self.fake_image['size'] = '1073741824' self.compute_api.rebuild(self.context, instance, self.fake_image['id'], 'new_password') def test_hostname_create(self): # Ensure instance hostname is set during creation. (instances, _) = self.compute_api.create(self.context, self.tiny_flavor, image_href=uuids.image_href_id, display_name='test host') self.assertEqual('test-host', instances[0]['hostname']) def _fake_rescue_block_devices(self, instance, status="in-use"): fake_bdms = block_device_obj.block_device_make_list(self.context, [fake_block_device.FakeDbBlockDeviceDict( {'device_name': '/dev/vda', 'source_type': 'volume', 'boot_index': 0, 'destination_type': 'volume', 'volume_id': 'bf0b6b00-a20c-11e2-9e96-0800200c9a66'})]) volume = {'id': 'bf0b6b00-a20c-11e2-9e96-0800200c9a66', 'state': 'active', 'instance_uuid': instance['uuid']} return fake_bdms, volume @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(cinder.API, 'get') def test_rescue_volume_backed_no_image(self, mock_get_vol, mock_get_bdms): # Instance started without an image params = {'image_ref': ''} volume_backed_inst_1 = self._create_fake_instance_obj(params=params) bdms, volume = self._fake_rescue_block_devices(volume_backed_inst_1) mock_get_vol.return_value = {'id': volume['id'], 'status': "in-use"} mock_get_bdms.return_value = bdms with mock.patch.object(self.compute, '_prep_block_device'): self.compute.build_and_run_instance(self.context, volume_backed_inst_1, {}, {}, {}, block_device_mapping=[]) self.assertRaises(exception.InstanceNotRescuable, self.compute_api.rescue, self.context, volume_backed_inst_1) @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(cinder.API, 'get') def test_rescue_volume_backed_placeholder_image(self, mock_get_vol, mock_get_bdms): # Instance started with a placeholder image (for metadata) volume_backed_inst_2 = self._create_fake_instance_obj( {'image_ref': FAKE_IMAGE_REF, 'root_device_name': '/dev/vda'}) bdms, volume = self._fake_rescue_block_devices(volume_backed_inst_2) mock_get_vol.return_value = {'id': volume['id'], 'status': "in-use"} mock_get_bdms.return_value = bdms with mock.patch.object(self.compute, '_prep_block_device'): self.compute.build_and_run_instance(self.context, volume_backed_inst_2, {}, {}, {}, block_device_mapping=[]) self.assertRaises(exception.InstanceNotRescuable, self.compute_api.rescue, self.context, volume_backed_inst_2) def test_get(self): # Test get instance. 
exp_instance = self._create_fake_instance_obj() instance = self.compute_api.get(self.context, exp_instance.uuid) self.assertEqual(exp_instance.id, instance.id) def test_get_with_admin_context(self): # Test get instance. c = context.get_admin_context() exp_instance = self._create_fake_instance_obj() instance = self.compute_api.get(c, exp_instance['uuid']) self.assertEqual(exp_instance.id, instance.id) def test_get_all_by_name_regexp(self): # Test searching instances by name (display_name). c = context.get_admin_context() instance1 = self._create_fake_instance_obj({'display_name': 'woot'}) instance2 = self._create_fake_instance_obj({ 'display_name': 'woo'}) instance3 = self._create_fake_instance_obj({ 'display_name': 'not-woot'}) instances = self.compute_api.get_all(c, search_opts={'name': '^woo.*'}) self.assertEqual(len(instances), 2) instance_uuids = [instance['uuid'] for instance in instances] self.assertIn(instance1['uuid'], instance_uuids) self.assertIn(instance2['uuid'], instance_uuids) instances = self.compute_api.get_all(c, search_opts={'name': '^woot.*'}) instance_uuids = [instance['uuid'] for instance in instances] self.assertEqual(len(instances), 1) self.assertIn(instance1['uuid'], instance_uuids) instances = self.compute_api.get_all(c, search_opts={'name': '.*oot.*'}) self.assertEqual(len(instances), 2) instance_uuids = [instance['uuid'] for instance in instances] self.assertIn(instance1['uuid'], instance_uuids) self.assertIn(instance3['uuid'], instance_uuids) instances = self.compute_api.get_all(c, search_opts={'name': '^n.*'}) self.assertEqual(len(instances), 1) instance_uuids = [instance['uuid'] for instance in instances] self.assertIn(instance3['uuid'], instance_uuids) instances = self.compute_api.get_all(c, search_opts={'name': 'noth.*'}) self.assertEqual(len(instances), 0) def test_get_all_by_multiple_options_at_once(self): # Test searching by multiple options at once. c = context.get_admin_context() def fake_network_info(ip): info = [{ 'address': 'aa:bb:cc:dd:ee:ff', 'id': 1, 'network': { 'bridge': 'br0', 'id': 1, 'label': 'private', 'subnets': [{ 'cidr': '192.168.0.0/24', 'ips': [{ 'address': ip, 'type': 'fixed', }] }] } }] return jsonutils.dumps(info) instance1 = self._create_fake_instance_obj({ 'display_name': 'woot', 'uuid': '00000000-0000-0000-0000-000000000010', 'info_cache': objects.InstanceInfoCache( network_info=fake_network_info('192.168.0.1'))}) self._create_fake_instance_obj({ # instance2 'display_name': 'woo', 'uuid': '00000000-0000-0000-0000-000000000020', 'info_cache': objects.InstanceInfoCache( network_info=fake_network_info('192.168.0.2'))}) instance3 = self._create_fake_instance_obj({ 'display_name': 'not-woot', 'uuid': '00000000-0000-0000-0000-000000000030', 'info_cache': objects.InstanceInfoCache( network_info=fake_network_info('192.168.0.3'))}) # ip ends up matching 2nd octet here.. so all 3 match ip # but 'name' only matches one instances = self.compute_api.get_all(c, search_opts={'ip': r'.*\.1', 'name': 'not.*'}) self.assertEqual(len(instances), 1) self.assertEqual(instances[0]['uuid'], instance3['uuid']) # ip ends up matching any ip with a '1' in the last octet.. # so instance 1 and 3.. 
but name should only match #1 # but 'name' only matches one instances = self.compute_api.get_all(c, search_opts={'ip': r'.*\.1$', 'name': '^woo.*'}) self.assertEqual(len(instances), 1) self.assertEqual(instances[0]['uuid'], instance1['uuid']) # same as above but no match on name (name matches instance1 # but the ip query doesn't) instances = self.compute_api.get_all(c, search_opts={'ip': r'.*\.2$', 'name': '^woot.*'}) self.assertEqual(len(instances), 0) # ip matches all 3... ipv6 matches #2+#3...name matches #3 instances = self.compute_api.get_all(c, search_opts={'ip': r'.*\.1', 'name': 'not.*', 'ip6': '^.*12.*34.*'}) self.assertEqual(len(instances), 1) self.assertEqual(instances[0]['uuid'], instance3['uuid']) def test_get_all_by_image(self): # Test searching instances by image. c = context.get_admin_context() instance1 = self._create_fake_instance_obj( {'image_ref': uuids.fake_image_ref_1}) instance2 = self._create_fake_instance_obj( {'image_ref': uuids.fake_image_ref_2}) instance3 = self._create_fake_instance_obj( {'image_ref': uuids.fake_image_ref_2}) instances = self.compute_api.get_all(c, search_opts={'image': '123'}) self.assertEqual(len(instances), 0) instances = self.compute_api.get_all( c, search_opts={'image': uuids.fake_image_ref_1}) self.assertEqual(len(instances), 1) self.assertEqual(instances[0]['uuid'], instance1['uuid']) instances = self.compute_api.get_all( c, search_opts={'image': uuids.fake_image_ref_2}) self.assertEqual(len(instances), 2) instance_uuids = [instance['uuid'] for instance in instances] self.assertIn(instance2['uuid'], instance_uuids) self.assertIn(instance3['uuid'], instance_uuids) # Test passing a list as search arg instances = self.compute_api.get_all( c, search_opts={'image': [uuids.fake_image_ref_1, uuids.fake_image_ref_2]}) self.assertEqual(len(instances), 3) def test_get_all_by_flavor(self): # Test searching instances by flavor. c = context.get_admin_context() flavor_dict = {f.flavorid: f for f in objects.FlavorList.get_all(c)} instance1 = self._create_fake_instance_obj( {'instance_type_id': flavor_dict['1'].id}) instance2 = self._create_fake_instance_obj( {'instance_type_id': flavor_dict['2'].id}) instance3 = self._create_fake_instance_obj( {'instance_type_id': flavor_dict['2'].id}) instances = self.compute_api.get_all(c, search_opts={'flavor': 5}) self.assertEqual(len(instances), 0) # ensure unknown filter maps to an exception self.assertRaises(exception.FlavorNotFound, self.compute_api.get_all, c, search_opts={'flavor': 99}) instances = self.compute_api.get_all(c, search_opts={'flavor': 1}) self.assertEqual(len(instances), 1) self.assertEqual(instances[0]['id'], instance1['id']) instances = self.compute_api.get_all(c, search_opts={'flavor': 2}) self.assertEqual(len(instances), 2) instance_uuids = [instance['uuid'] for instance in instances] self.assertIn(instance2['uuid'], instance_uuids) self.assertIn(instance3['uuid'], instance_uuids) def test_get_all_by_state(self): # Test searching instances by state. 
c = context.get_admin_context() instance1 = self._create_fake_instance_obj({ 'power_state': power_state.SHUTDOWN, }) instance2 = self._create_fake_instance_obj({ 'power_state': power_state.RUNNING, }) instance3 = self._create_fake_instance_obj({ 'power_state': power_state.RUNNING, }) instances = self.compute_api.get_all(c, search_opts={'power_state': power_state.SUSPENDED}) self.assertEqual(len(instances), 0) instances = self.compute_api.get_all(c, search_opts={'power_state': power_state.SHUTDOWN}) self.assertEqual(len(instances), 1) self.assertEqual(instances[0]['uuid'], instance1['uuid']) instances = self.compute_api.get_all(c, search_opts={'power_state': power_state.RUNNING}) self.assertEqual(len(instances), 2) instance_uuids = [instance['uuid'] for instance in instances] self.assertIn(instance2['uuid'], instance_uuids) self.assertIn(instance3['uuid'], instance_uuids) # Test passing a list as search arg instances = self.compute_api.get_all(c, search_opts={'power_state': [power_state.SHUTDOWN, power_state.RUNNING]}) self.assertEqual(len(instances), 3) def test_instance_metadata(self): meta_changes = [None] self.flags(notify_on_state_change='vm_state', group='notifications') def fake_change_instance_metadata(inst, ctxt, diff, instance=None, instance_uuid=None): meta_changes[0] = diff self.stub_out('nova.compute.rpcapi.ComputeAPI.' 'change_instance_metadata', fake_change_instance_metadata) _context = context.get_admin_context() instance = self._create_fake_instance_obj({'metadata': {'key1': 'value1'}}) metadata = self.compute_api.get_instance_metadata(_context, instance) self.assertEqual(metadata, {'key1': 'value1'}) self.compute_api.update_instance_metadata(_context, instance, {'key2': 'value2'}) metadata = self.compute_api.get_instance_metadata(_context, instance) self.assertEqual(metadata, {'key1': 'value1', 'key2': 'value2'}) self.assertEqual(meta_changes, [{'key2': ['+', 'value2']}]) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 1) msg = fake_notifier.NOTIFICATIONS[0] payload = msg.payload self.assertIn('metadata', payload) self.assertEqual(payload['metadata'], metadata) new_metadata = {'key2': 'bah', 'key3': 'value3'} self.compute_api.update_instance_metadata(_context, instance, new_metadata, delete=True) metadata = self.compute_api.get_instance_metadata(_context, instance) self.assertEqual(metadata, new_metadata) self.assertEqual(meta_changes, [{ 'key1': ['-'], 'key2': ['+', 'bah'], 'key3': ['+', 'value3'], }]) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 2) msg = fake_notifier.NOTIFICATIONS[1] payload = msg.payload self.assertIn('metadata', payload) self.assertEqual(payload['metadata'], metadata) self.compute_api.delete_instance_metadata(_context, instance, 'key2') metadata = self.compute_api.get_instance_metadata(_context, instance) self.assertEqual(metadata, {'key3': 'value3'}) self.assertEqual(meta_changes, [{'key2': ['-']}]) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 3) msg = fake_notifier.NOTIFICATIONS[2] payload = msg.payload self.assertIn('metadata', payload) self.assertEqual(payload['metadata'], {'key3': 'value3'}) def test_disallow_metadata_changes_during_building(self): def fake_change_instance_metadata(inst, ctxt, diff, instance=None, instance_uuid=None): pass self.stub_out('nova.compute.rpcapi.ComputeAPI.' 
'change_instance_metadata', fake_change_instance_metadata) instance = self._create_fake_instance_obj( {'vm_state': vm_states.BUILDING}) self.assertRaises(exception.InstanceInvalidState, self.compute_api.delete_instance_metadata, self.context, instance, "key") self.assertRaises(exception.InstanceInvalidState, self.compute_api.update_instance_metadata, self.context, instance, "key") @staticmethod def _parse_db_block_device_mapping(bdm_ref): attr_list = ('delete_on_termination', 'device_name', 'no_device', 'virtual_name', 'volume_id', 'volume_size', 'snapshot_id') bdm = {} for attr in attr_list: val = bdm_ref.get(attr, None) if val: bdm[attr] = val return bdm def _test_check_and_transform_bdm(self, bdms, expected_bdms, image_bdms=None, base_options=None, legacy_bdms=False, legacy_image_bdms=False): image_bdms = image_bdms or [] image_meta = {} if image_bdms: image_meta = {'properties': {'block_device_mapping': image_bdms}} if not legacy_image_bdms: image_meta['properties']['bdm_v2'] = True base_options = base_options or {'root_device_name': 'vda', 'image_ref': FAKE_IMAGE_REF} transformed_bdm = self.compute_api._check_and_transform_bdm( self.context, base_options, {}, image_meta, 1, 1, bdms, legacy_bdms) for expected, got in zip(expected_bdms, transformed_bdm): self.assertEqual(dict(expected.items()), dict(got.items())) def test_check_and_transform_legacy_bdm_no_image_bdms(self): legacy_bdms = [ {'device_name': '/dev/vda', 'volume_id': '33333333-aaaa-bbbb-cccc-333333333333', 'delete_on_termination': False}] expected_bdms = [block_device.BlockDeviceDict.from_legacy( legacy_bdms[0])] expected_bdms[0]['boot_index'] = 0 expected_bdms = block_device_obj.block_device_make_list_from_dicts( self.context, expected_bdms) self._test_check_and_transform_bdm(legacy_bdms, expected_bdms, legacy_bdms=True) def test_check_and_transform_legacy_bdm_legacy_image_bdms(self): image_bdms = [ {'device_name': '/dev/vda', 'volume_id': '33333333-aaaa-bbbb-cccc-333333333333', 'delete_on_termination': False}] legacy_bdms = [ {'device_name': '/dev/vdb', 'volume_id': '33333333-aaaa-bbbb-cccc-444444444444', 'delete_on_termination': False}] expected_bdms = [ block_device.BlockDeviceDict.from_legacy(legacy_bdms[0]), block_device.BlockDeviceDict.from_legacy(image_bdms[0])] expected_bdms[0]['boot_index'] = -1 expected_bdms[1]['boot_index'] = 0 expected_bdms = block_device_obj.block_device_make_list_from_dicts( self.context, expected_bdms) self._test_check_and_transform_bdm(legacy_bdms, expected_bdms, image_bdms=image_bdms, legacy_bdms=True, legacy_image_bdms=True) def test_check_and_transform_legacy_bdm_image_bdms(self): legacy_bdms = [ {'device_name': '/dev/vdb', 'volume_id': '33333333-aaaa-bbbb-cccc-444444444444', 'delete_on_termination': False}] image_bdms = [block_device.BlockDeviceDict( {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': '33333333-aaaa-bbbb-cccc-444444444444', 'boot_index': 0})] expected_bdms = [ block_device.BlockDeviceDict.from_legacy(legacy_bdms[0]), image_bdms[0]] expected_bdms[0]['boot_index'] = -1 expected_bdms = block_device_obj.block_device_make_list_from_dicts( self.context, expected_bdms) self._test_check_and_transform_bdm(legacy_bdms, expected_bdms, image_bdms=image_bdms, legacy_bdms=True) def test_check_and_transform_bdm_no_image_bdms(self): bdms = [block_device.BlockDeviceDict({'source_type': 'image', 'destination_type': 'local', 'image_id': FAKE_IMAGE_REF, 'boot_index': 0})] expected_bdms = block_device_obj.block_device_make_list_from_dicts( self.context, bdms) 
self._test_check_and_transform_bdm(bdms, expected_bdms) def test_check_and_transform_bdm_image_bdms(self): bdms = [block_device.BlockDeviceDict({'source_type': 'image', 'destination_type': 'local', 'image_id': FAKE_IMAGE_REF, 'boot_index': 0})] image_bdms = [block_device.BlockDeviceDict( {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': '33333333-aaaa-bbbb-cccc-444444444444'})] expected_bdms = bdms + image_bdms expected_bdms = block_device_obj.block_device_make_list_from_dicts( self.context, expected_bdms) self._test_check_and_transform_bdm(bdms, expected_bdms, image_bdms=image_bdms) def test_check_and_transform_bdm_image_bdms_w_overrides(self): bdms = [block_device.BlockDeviceDict({'source_type': 'image', 'destination_type': 'local', 'image_id': FAKE_IMAGE_REF, 'boot_index': 0}), block_device.BlockDeviceDict({'device_name': 'vdb', 'no_device': True})] image_bdms = [block_device.BlockDeviceDict( {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': '33333333-aaaa-bbbb-cccc-444444444444', 'device_name': '/dev/vdb'})] expected_bdms = block_device_obj.block_device_make_list_from_dicts( self.context, bdms) self._test_check_and_transform_bdm(bdms, expected_bdms, image_bdms=image_bdms) def test_check_and_transform_bdm_image_bdms_w_overrides_complex(self): bdms = [block_device.BlockDeviceDict({'source_type': 'image', 'destination_type': 'local', 'image_id': FAKE_IMAGE_REF, 'boot_index': 0}), block_device.BlockDeviceDict({'device_name': 'vdb', 'no_device': True}), block_device.BlockDeviceDict( {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': '11111111-aaaa-bbbb-cccc-222222222222', 'device_name': 'vdc'})] image_bdms = [ block_device.BlockDeviceDict( {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': '33333333-aaaa-bbbb-cccc-444444444444', 'device_name': '/dev/vdb'}), block_device.BlockDeviceDict( {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': '55555555-aaaa-bbbb-cccc-666666666666', 'device_name': '/dev/vdc'}), block_device.BlockDeviceDict( {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': '77777777-aaaa-bbbb-cccc-8888888888888', 'device_name': '/dev/vdd'})] expected_bdms = block_device_obj.block_device_make_list_from_dicts( self.context, bdms + [image_bdms[2]]) self._test_check_and_transform_bdm(bdms, expected_bdms, image_bdms=image_bdms) def test_check_and_transform_bdm_legacy_image_bdms(self): bdms = [block_device.BlockDeviceDict({'source_type': 'image', 'destination_type': 'local', 'image_id': FAKE_IMAGE_REF, 'boot_index': 0})] image_bdms = [{'device_name': '/dev/vda', 'volume_id': '33333333-aaaa-bbbb-cccc-333333333333', 'delete_on_termination': False}] expected_bdms = [block_device.BlockDeviceDict.from_legacy( image_bdms[0])] expected_bdms[0]['boot_index'] = 0 expected_bdms = block_device_obj.block_device_make_list_from_dicts( self.context, expected_bdms) self._test_check_and_transform_bdm(bdms, expected_bdms, image_bdms=image_bdms, legacy_image_bdms=True) def test_check_and_transform_image(self): base_options = {'root_device_name': 'vdb', 'image_ref': FAKE_IMAGE_REF} fake_legacy_bdms = [ {'device_name': '/dev/vda', 'volume_id': '33333333-aaaa-bbbb-cccc-333333333333', 'delete_on_termination': False}] image_meta = {'properties': {'block_device_mapping': [ {'device_name': '/dev/vda', 'snapshot_id': '33333333-aaaa-bbbb-cccc-333333333333', 'boot_index': 0}]}} # We get an image BDM transformed_bdm = self.compute_api._check_and_transform_bdm( self.context, base_options, {}, {}, 1, 1, 
fake_legacy_bdms, True) self.assertEqual(len(transformed_bdm), 2) # No image BDM created if image already defines a root BDM base_options['root_device_name'] = 'vda' base_options['image_ref'] = None transformed_bdm = self.compute_api._check_and_transform_bdm( self.context, base_options, {}, image_meta, 1, 1, [], True) self.assertEqual(len(transformed_bdm), 1) # No image BDM created transformed_bdm = self.compute_api._check_and_transform_bdm( self.context, base_options, {}, {}, 1, 1, fake_legacy_bdms, True) self.assertEqual(len(transformed_bdm), 1) # Volumes with multiple instances fails self.assertRaises(exception.InvalidRequest, self.compute_api._check_and_transform_bdm, self.context, base_options, {}, {}, 1, 2, fake_legacy_bdms, True) # Volume backed so no image_ref in base_options # v2 bdms contains a root image to volume mapping # image_meta contains a snapshot as the image # is created by nova image-create from a volume backed server # see bug 1381598 fake_v2_bdms = [{'boot_index': 0, 'connection_info': None, 'delete_on_termination': None, 'destination_type': u'volume', 'image_id': FAKE_IMAGE_REF, 'source_type': u'image', 'volume_id': None, 'volume_size': 1}] base_options['image_ref'] = None transformed_bdm = self.compute_api._check_and_transform_bdm( self.context, base_options, {}, image_meta, 1, 1, fake_v2_bdms, False) self.assertEqual(len(transformed_bdm), 1) # Image BDM overrides mappings base_options['image_ref'] = FAKE_IMAGE_REF image_meta = { 'properties': { 'mappings': [ {'virtual': 'ephemeral0', 'device': 'vdb'}], 'bdm_v2': True, 'block_device_mapping': [ {'device_name': '/dev/vdb', 'source_type': 'blank', 'destination_type': 'volume', 'volume_size': 1}]}} transformed_bdm = self.compute_api._check_and_transform_bdm( self.context, base_options, {}, image_meta, 1, 1, [], False) self.assertEqual(1, len(transformed_bdm)) self.assertEqual('volume', transformed_bdm[0]['destination_type']) self.assertEqual('/dev/vdb', transformed_bdm[0]['device_name']) def test_volume_size(self): ephemeral_size = 2 swap_size = 3 volume_size = 5 swap_bdm = {'source_type': 'blank', 'guest_format': 'swap', 'destination_type': 'local'} ephemeral_bdm = {'source_type': 'blank', 'guest_format': None, 'destination_type': 'local'} volume_bdm = {'source_type': 'volume', 'volume_size': volume_size, 'destination_type': 'volume'} blank_bdm = {'source_type': 'blank', 'destination_type': 'volume'} inst_type = {'ephemeral_gb': ephemeral_size, 'swap': swap_size} self.assertEqual( self.compute_api._volume_size(inst_type, ephemeral_bdm), ephemeral_size) ephemeral_bdm['volume_size'] = 42 self.assertEqual( self.compute_api._volume_size(inst_type, ephemeral_bdm), 42) self.assertEqual( self.compute_api._volume_size(inst_type, swap_bdm), swap_size) swap_bdm['volume_size'] = 42 self.assertEqual( self.compute_api._volume_size(inst_type, swap_bdm), 42) self.assertEqual( self.compute_api._volume_size(inst_type, volume_bdm), volume_size) self.assertIsNone( self.compute_api._volume_size(inst_type, blank_bdm)) def test_reservation_id_one_instance(self): """Verify building an instance has a reservation_id that matches return value from create. """ (refs, resv_id) = self.compute_api.create(self.context, self.default_flavor, image_href=uuids.image_href_id) self.assertEqual(len(refs), 1) self.assertEqual(refs[0]['reservation_id'], resv_id) def test_reservation_ids_two_instances(self): """Verify building 2 instances at once results in a reservation_id being returned equal to reservation id set in both instances. 
""" (refs, resv_id) = self.compute_api.create(self.context, self.default_flavor, image_href=uuids.image_href_id, min_count=2, max_count=2) self.assertEqual(len(refs), 2) self.assertIsNotNone(resv_id) for instance in refs: self.assertEqual(instance['reservation_id'], resv_id) def test_single_instance_display_name(self): """Verify building one instance doesn't do anything funky with the display and host names. """ num_instances = 1 refs, _ = self.compute_api.create(self.context, self.default_flavor, image_href=uuids.image_href_id, min_count=num_instances, max_count=num_instances, display_name='x') name = 'x' self.assertEqual(refs[0]['display_name'], name) self.assertEqual(refs[0]['hostname'], name) def test_multi_instance_display_name(self): """Verify building two instances at once results in a unique display and host name. """ num_instances = 2 refs, _ = self.compute_api.create(self.context, self.default_flavor, image_href=uuids.image_href_id, min_count=num_instances, max_count=num_instances, display_name='x') for i in range(num_instances): name = 'x-%s' % (i + 1,) self.assertEqual(refs[i]['display_name'], name) self.assertEqual(refs[i]['hostname'], name) def test_instance_architecture(self): # Test the instance architecture. i_ref = self._create_fake_instance_obj() self.assertEqual(i_ref['architecture'], obj_fields.Architecture.X86_64) def test_instance_unknown_architecture(self): # Test if the architecture is unknown. instance = self._create_fake_instance_obj( params={'architecture': ''}) self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instance = db.instance_get_by_uuid(self.context, instance['uuid']) self.assertNotEqual(instance['architecture'], 'Unknown') def test_instance_name_template(self): # Test the instance_name template. self.flags(instance_name_template='instance-%d') i_ref = self._create_fake_instance_obj() self.assertEqual(i_ref['name'], 'instance-%d' % i_ref['id']) self.flags(instance_name_template='instance-%(uuid)s') i_ref = self._create_fake_instance_obj() self.assertEqual(i_ref['name'], 'instance-%s' % i_ref['uuid']) self.flags(instance_name_template='%(id)d-%(uuid)s') i_ref = self._create_fake_instance_obj() self.assertEqual(i_ref['name'], '%d-%s' % (i_ref['id'], i_ref['uuid'])) # not allowed.. 
default is uuid self.flags(instance_name_template='%(name)s') i_ref = self._create_fake_instance_obj() self.assertEqual(i_ref['name'], i_ref['uuid']) @mock.patch('nova.compute.api.API._update_queued_for_deletion') def test_add_remove_fixed_ip(self, mock_update_queued_for_delete): instance = self._create_fake_instance_obj(params={'host': CONF.host}) self.compute_api.add_fixed_ip(self.context, instance, '1') self.compute_api.remove_fixed_ip(self.context, instance, '192.168.1.1') with mock.patch.object(self.compute_api, '_lookup_instance', return_value=(None, instance)): with mock.patch.object(self.compute_api.network_api, 'deallocate_for_instance'): self.compute_api.delete(self.context, instance) def test_attach_volume_invalid(self): instance = fake_instance.fake_instance_obj(None, **{ 'locked': False, 'vm_state': vm_states.ACTIVE, 'task_state': None, 'launched_at': timeutils.utcnow()}) self.assertRaises(exception.InvalidDevicePath, self.compute_api.attach_volume, self.context, instance, None, '/invalid') def test_add_missing_dev_names_assign_dev_name(self): instance = self._create_fake_instance_obj() bdms = [objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict( { 'instance_uuid': instance.uuid, 'volume_id': 'vol-id', 'source_type': 'volume', 'destination_type': 'volume', 'device_name': None, 'boot_index': None, 'disk_bus': None, 'device_type': None }))] with mock.patch.object(objects.BlockDeviceMapping, 'save') as mock_save: self.compute._add_missing_dev_names(bdms, instance) mock_save.assert_called_once_with() self.assertIsNotNone(bdms[0].device_name) @mock.patch.object(compute_manager.ComputeManager, '_get_device_name_for_instance') def test_add_missing_dev_names_skip_bdms_with_dev_name(self, mock_get_dev_name): instance = self._create_fake_instance_obj() bdms = [objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict( { 'instance_uuid': instance.uuid, 'volume_id': 'vol-id', 'source_type': 'volume', 'destination_type': 'volume', 'device_name': '/dev/vda', 'boot_index': None, 'disk_bus': None, 'device_type': None }))] self.compute._add_missing_dev_names(bdms, instance) self.assertFalse(mock_get_dev_name.called) def test_no_attach_volume_in_rescue_state(self): def fake(*args, **kwargs): pass def fake_volume_get(self, context, volume_id): return {'id': volume_id} self.stub_out('nova.volume.cinder.API.get', fake_volume_get) self.stub_out('nova.volume.cinder.API.check_availability_zone', fake) self.stub_out('nova.volume.cinder.API.reserve_volume', fake) instance = fake_instance.fake_instance_obj(None, **{ 'uuid': 'f3000000-0000-0000-0000-000000000000', 'locked': False, 'vm_state': vm_states.RESCUED}) self.assertRaises(exception.InstanceInvalidState, self.compute_api.attach_volume, self.context, instance, None, '/dev/vdb') def test_no_attach_volume_in_suspended_state(self): instance = fake_instance.fake_instance_obj(None, **{ 'uuid': 'f3000000-0000-0000-0000-000000000000', 'locked': False, 'vm_state': vm_states.SUSPENDED}) self.assertRaises(exception.InstanceInvalidState, self.compute_api.attach_volume, self.context, instance, {'id': uuids.volume}, '/dev/vdb') def test_no_detach_volume_in_rescue_state(self): # Ensure volume can be detached from instance params = {'vm_state': vm_states.RESCUED} instance = self._create_fake_instance_obj(params=params) volume = {'id': 1, 'attach_status': 'attached', 'instance_uuid': instance['uuid']} self.assertRaises(exception.InstanceInvalidState, self.compute_api.detach_volume, self.context, instance, volume) 
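    # A minimal illustrative sketch, not part of the upstream suite: the
    # "no detach while RESCUED" case above and the SUSPENDED case covered
    # later in this class both rely only on the instance vm_state being
    # rejected before any Cinder call is made, so they could be folded into
    # one loop. The helper name and the combined form are assumptions for
    # illustration; the states are taken from those existing tests, not from
    # the API definition.
    def _sketch_detach_rejected_in_states(self):
        for vm_state in (vm_states.RESCUED, vm_states.SUSPENDED):
            # assumes _create_fake_instance_obj yields an instance that
            # passes the "has launched" check, as in the RESCUED test above
            instance = self._create_fake_instance_obj(
                params={'vm_state': vm_state})
            self.assertRaises(exception.InstanceInvalidState,
                              self.compute_api.detach_volume,
                              self.context, instance,
                              {'id': uuids.volume,
                               'attach_status': 'attached'})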
    @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid')
    @mock.patch.object(cinder.API, 'get')
    def test_no_rescue_in_volume_state_attaching(self, mock_get_vol,
                                                 mock_get_bdms):
        # Make sure a VM cannot be rescued while volume is being attached
        instance = self._create_fake_instance_obj()
        bdms, volume = self._fake_rescue_block_devices(instance)

        mock_get_vol.return_value = {'id': volume['id'],
                                     'status': "attaching"}
        mock_get_bdms.return_value = bdms

        self.assertRaises(exception.InvalidVolume,
                          self.compute_api.rescue, self.context, instance)

    @mock.patch.object(compute_rpcapi.ComputeAPI, 'get_vnc_console')
    def test_vnc_console(self, mock_get):
        # Make sure we can get a vnc console for an instance.
        fake_instance = self._fake_instance(
            {'uuid': 'f3000000-0000-0000-0000-000000000000',
             'host': 'fake_compute_host'})
        fake_console_type = "novnc"
        fake_connect_info = {'token': 'fake_token',
                             'console_type': fake_console_type,
                             'host': 'fake_console_host',
                             'port': 'fake_console_port',
                             'internal_access_path': 'fake_access_path',
                             'instance_uuid': fake_instance.uuid,
                             'access_url': 'fake_console_url'}
        mock_get.return_value = fake_connect_info

        console = self.compute_api.get_vnc_console(self.context,
                                                   fake_instance,
                                                   fake_console_type)
        self.assertEqual(console, {'url': 'fake_console_url'})
        mock_get.assert_called_once_with(
            self.context, instance=fake_instance,
            console_type=fake_console_type)

    def test_get_vnc_console_no_host(self):
        instance = self._create_fake_instance_obj(params={'host': ''})

        self.assertRaises(exception.InstanceNotReady,
                          self.compute_api.get_vnc_console,
                          self.context, instance, 'novnc')

    @mock.patch.object(compute_rpcapi.ComputeAPI, 'get_spice_console')
    def test_spice_console(self, mock_spice):
        # Make sure we can get a spice console for an instance.
        fake_instance = self._fake_instance(
            {'uuid': 'f3000000-0000-0000-0000-000000000000',
             'host': 'fake_compute_host'})
        fake_console_type = "spice-html5"
        fake_connect_info = {'token': 'fake_token',
                             'console_type': fake_console_type,
                             'host': 'fake_console_host',
                             'port': 'fake_console_port',
                             'internal_access_path': 'fake_access_path',
                             'instance_uuid': fake_instance.uuid,
                             'access_url': 'fake_console_url'}
        mock_spice.return_value = fake_connect_info

        console = self.compute_api.get_spice_console(self.context,
                                                     fake_instance,
                                                     fake_console_type)
        self.assertEqual(console, {'url': 'fake_console_url'})
        mock_spice.assert_called_once_with(self.context,
                                           instance=fake_instance,
                                           console_type=fake_console_type)

    def test_get_spice_console_no_host(self):
        instance = self._create_fake_instance_obj(params={'host': ''})

        self.assertRaises(exception.InstanceNotReady,
                          self.compute_api.get_spice_console,
                          self.context, instance, 'spice')

    @ddt.data(('spice', task_states.DELETING),
              ('spice', task_states.MIGRATING),
              ('rdp', task_states.DELETING),
              ('rdp', task_states.MIGRATING),
              ('vnc', task_states.DELETING),
              ('vnc', task_states.MIGRATING),
              ('mks', task_states.DELETING),
              ('mks', task_states.MIGRATING),
              ('serial', task_states.DELETING),
              ('serial', task_states.MIGRATING))
    @ddt.unpack
    def test_get_console_invalid_task_state(self, console_type, task_state):
        instance = self._create_fake_instance_obj(
            params={'task_state': task_state})
        self.assertRaises(
            exception.InstanceInvalidState,
            getattr(self.compute_api, 'get_%s_console' % console_type),
            self.context, instance, console_type)

    @mock.patch.object(compute_rpcapi.ComputeAPI, 'get_rdp_console')
    def test_rdp_console(self, mock_rdp):
        # Make sure we can get a rdp console for an instance.
fake_instance = self._fake_instance({ 'uuid': 'f3000000-0000-0000-0000-000000000000', 'host': 'fake_compute_host'}) fake_console_type = "rdp-html5" fake_connect_info = {'token': 'fake_token', 'console_type': fake_console_type, 'host': 'fake_console_host', 'port': 'fake_console_port', 'internal_access_path': 'fake_access_path', 'instance_uuid': fake_instance.uuid, 'access_url': 'fake_console_url'} mock_rdp.return_value = fake_connect_info console = self.compute_api.get_rdp_console(self.context, fake_instance, fake_console_type) self.assertEqual(console, {'url': 'fake_console_url'}) mock_rdp.assert_called_once_with(self.context, instance=fake_instance, console_type=fake_console_type) def test_get_rdp_console_no_host(self): instance = self._create_fake_instance_obj(params={'host': ''}) self.assertRaises(exception.InstanceNotReady, self.compute_api.get_rdp_console, self.context, instance, 'rdp') def test_serial_console(self): # Make sure we can get a serial proxy url for an instance. fake_instance = self._fake_instance({ 'uuid': 'f3000000-0000-0000-0000-000000000000', 'host': 'fake_compute_host'}) fake_console_type = 'serial' fake_connect_info = {'token': 'fake_token', 'console_type': fake_console_type, 'host': 'fake_serial_host', 'port': 'fake_tcp_port', 'internal_access_path': 'fake_access_path', 'instance_uuid': fake_instance.uuid, 'access_url': 'fake_access_url'} with mock.patch.object(self.compute_api.compute_rpcapi, 'get_serial_console', return_value=fake_connect_info): console = self.compute_api.get_serial_console(self.context, fake_instance, fake_console_type) self.assertEqual(console, {'url': 'fake_access_url'}) def test_get_serial_console_no_host(self): # Make sure an exception is raised when instance is not Active. instance = self._create_fake_instance_obj(params={'host': ''}) self.assertRaises(exception.InstanceNotReady, self.compute_api.get_serial_console, self.context, instance, 'serial') def test_mks_console(self): fake_instance = self._fake_instance({ 'uuid': 'f3000000-0000-0000-0000-000000000000', 'host': 'fake_compute_host'}) fake_console_type = 'webmks' fake_connect_info = {'token': 'fake_token', 'console_type': fake_console_type, 'host': 'fake_mks_host', 'port': 'fake_tcp_port', 'internal_access_path': 'fake_access_path', 'instance_uuid': fake_instance.uuid, 'access_url': 'fake_access_url'} with mock.patch.object(self.compute_api.compute_rpcapi, 'get_mks_console', return_value=fake_connect_info): console = self.compute_api.get_mks_console(self.context, fake_instance, fake_console_type) self.assertEqual(console, {'url': 'fake_access_url'}) def test_get_mks_console_no_host(self): # Make sure an exception is raised when instance is not Active. 
instance = self._create_fake_instance_obj(params={'host': ''}) self.assertRaises(exception.InstanceNotReady, self.compute_api.get_mks_console, self.context, instance, 'mks') @mock.patch.object(compute_rpcapi.ComputeAPI, 'get_console_output') def test_console_output(self, mock_console): fake_instance = self._fake_instance({ 'uuid': 'f3000000-0000-0000-0000-000000000000', 'host': 'fake_compute_host'}) fake_tail_length = 699 fake_console_output = 'fake console output' mock_console.return_value = fake_console_output output = self.compute_api.get_console_output(self.context, fake_instance, tail_length=fake_tail_length) self.assertEqual(output, fake_console_output) mock_console.assert_called_once_with(self.context, instance=fake_instance, tail_length=fake_tail_length) def test_console_output_no_host(self): instance = self._create_fake_instance_obj(params={'host': ''}) self.assertRaises(exception.InstanceNotReady, self.compute_api.get_console_output, self.context, instance) @mock.patch.object(compute_utils, 'notify_about_instance_action') def test_attach_interface(self, mock_notify): instance = self._create_fake_instance_obj() nwinfo = [fake_network_cache_model.new_vif()] network_id = nwinfo[0]['network']['id'] port_id = nwinfo[0]['id'] req_ip = '1.2.3.4' lock_name = 'interface-%s-%s' % (instance.uuid, port_id) mock_allocate = mock.Mock(return_value=nwinfo) self.compute.network_api.allocate_port_for_instance = mock_allocate with test.nested( mock.patch.dict(self.compute.driver.capabilities, supports_attach_interface=True), mock.patch('oslo_concurrency.lockutils.lock') ) as (cap, mock_lock): vif = self.compute.attach_interface(self.context, instance, network_id, port_id, req_ip, None) self.assertEqual(vif['id'], network_id) mock_allocate.assert_called_once_with( self.context, instance, port_id, network_id, req_ip, bind_host_id='fake-mini', tag=None) mock_notify.assert_has_calls([ mock.call(self.context, instance, self.compute.host, action='interface_attach', phase='start'), mock.call(self.context, instance, self.compute.host, action='interface_attach', phase='end')]) mock_lock.assert_called_once_with(lock_name, mock.ANY, mock.ANY, mock.ANY, delay=mock.ANY, do_log=mock.ANY, fair=mock.ANY, semaphores=mock.ANY) return nwinfo, port_id @mock.patch.object(compute_utils, 'notify_about_instance_action') def test_interface_tagged_attach(self, mock_notify): instance = self._create_fake_instance_obj() nwinfo = [fake_network_cache_model.new_vif()] network_id = nwinfo[0]['network']['id'] port_id = nwinfo[0]['id'] req_ip = '1.2.3.4' mock_allocate = mock.Mock(return_value=nwinfo) self.compute.network_api.allocate_port_for_instance = mock_allocate with mock.patch.dict(self.compute.driver.capabilities, supports_attach_interface=True, supports_tagged_attach_interface=True): vif = self.compute.attach_interface(self.context, instance, network_id, port_id, req_ip, tag='foo') self.assertEqual(vif['id'], network_id) mock_allocate.assert_called_once_with( self.context, instance, port_id, network_id, req_ip, bind_host_id='fake-mini', tag='foo') mock_notify.assert_has_calls([ mock.call(self.context, instance, self.compute.host, action='interface_attach', phase='start'), mock.call(self.context, instance, self.compute.host, action='interface_attach', phase='end')]) return nwinfo, port_id def test_tagged_attach_interface_raises(self): instance = self._create_fake_instance_obj() with mock.patch.dict(self.compute.driver.capabilities, supports_attach_interface=True, supports_tagged_attach_interface=False): expected_exception = 
self.assertRaises( messaging.ExpectedException, self.compute.attach_interface, self.context, instance, 'fake-network-id', 'fake-port-id', 'fake-req-ip', tag='foo') wrapped_exc = expected_exception.exc_info[1] self.assertIsInstance( wrapped_exc, exception.NetworkInterfaceTaggedAttachNotSupported) def test_attach_interface_failed(self): new_type = flavors.get_flavor_by_flavor_id('4') instance = objects.Instance( id=42, uuid=uuids.interface_failed_instance, image_ref='foo', system_metadata={}, flavor=new_type, host='fake-host') nwinfo = [fake_network_cache_model.new_vif()] network_id = nwinfo[0]['network']['id'] port_id = nwinfo[0]['id'] req_ip = '1.2.3.4' with test.nested( mock.patch.object(compute_utils, 'notify_about_instance_action'), mock.patch.object(self.compute.driver, 'attach_interface'), mock.patch.object(self.compute.network_api, 'allocate_port_for_instance'), mock.patch.object(self.compute.network_api, 'deallocate_port_for_instance'), mock.patch.dict(self.compute.driver.capabilities, supports_attach_interface=True)) as ( mock_notify, mock_attach, mock_allocate, mock_deallocate, mock_dict): mock_allocate.return_value = nwinfo mock_attach.side_effect = exception.NovaException("attach_failed") self.assertRaises(exception.InterfaceAttachFailed, self.compute.attach_interface, self.context, instance, network_id, port_id, req_ip, None) mock_allocate.assert_called_once_with(self.context, instance, network_id, port_id, req_ip, bind_host_id='fake-host', tag=None) mock_deallocate.assert_called_once_with(self.context, instance, port_id) mock_notify.assert_has_calls([ mock.call(self.context, instance, self.compute.host, action='interface_attach', phase='start'), mock.call(self.context, instance, self.compute.host, action='interface_attach', exception=mock_attach.side_effect, phase='error', tb=mock.ANY)]) @mock.patch.object(compute_utils, 'notify_about_instance_action') def test_detach_interface(self, mock_notify): nwinfo, port_id = self.test_attach_interface() instance = self._create_fake_instance_obj() instance.info_cache = objects.InstanceInfoCache.new( self.context, uuids.info_cache_instance) instance.info_cache.network_info = network_model.NetworkInfo.hydrate( nwinfo) lock_name = 'interface-%s-%s' % (instance.uuid, port_id) port_allocation = {uuids.rp1: {'NET_BW_EGR_KILOBIT_PER_SEC': 10000}} with test.nested( mock.patch.object( self.compute.reportclient, 'remove_resources_from_instance_allocation'), mock.patch.object(self.compute.network_api, 'deallocate_port_for_instance', return_value=([], port_allocation)), mock.patch('oslo_concurrency.lockutils.lock')) as ( mock_remove_alloc, mock_deallocate, mock_lock): self.compute.detach_interface(self.context, instance, port_id) mock_deallocate.assert_called_once_with( self.context, instance, port_id) mock_remove_alloc.assert_called_once_with( self.context, instance.uuid, port_allocation) self.assertEqual(self.compute.driver._interfaces, {}) mock_notify.assert_has_calls([ mock.call(self.context, instance, self.compute.host, action='interface_detach', phase='start'), mock.call(self.context, instance, self.compute.host, action='interface_detach', phase='end')]) mock_lock.assert_called_once_with(lock_name, mock.ANY, mock.ANY, mock.ANY, delay=mock.ANY, do_log=mock.ANY, fair=mock.ANY, semaphores=mock.ANY) @mock.patch('nova.compute.manager.LOG.log') def test_detach_interface_failed(self, mock_log): nwinfo, port_id = self.test_attach_interface() instance = self._create_fake_instance_obj() instance['uuid'] = uuids.info_cache_instance instance.info_cache = 
objects.InstanceInfoCache.new( self.context, uuids.info_cache_instance) instance.info_cache.network_info = network_model.NetworkInfo.hydrate( nwinfo) with test.nested( mock.patch.object(compute_utils, 'notify_about_instance_action'), mock.patch.object(self.compute.driver, 'detach_interface', side_effect=exception.NovaException('detach_failed')), mock.patch.object(self.compute.network_api, 'deallocate_port_for_instance')) as ( mock_notify, mock_detach, mock_deallocate): self.assertRaises(exception.InterfaceDetachFailed, self.compute.detach_interface, self.context, instance, port_id) self.assertFalse(mock_deallocate.called) mock_notify.assert_has_calls([ mock.call(self.context, instance, self.compute.host, action='interface_detach', phase='start')]) self.assertEqual(1, mock_log.call_count) self.assertEqual(logging.WARNING, mock_log.call_args[0][0]) @mock.patch.object(compute_manager.LOG, 'warning') def test_detach_interface_deallocate_port_for_instance_failed(self, warn_mock): # Tests that when deallocate_port_for_instance fails we log the failure # before exiting compute.detach_interface. nwinfo, port_id = self.test_attach_interface() instance = self._create_fake_instance_obj() instance.info_cache = objects.InstanceInfoCache.new( self.context, uuids.info_cache_instance) instance.info_cache.network_info = network_model.NetworkInfo.hydrate( nwinfo) # Sometimes neutron errors slip through the neutronv2 API so we want # to make sure we catch those in the compute manager and not just # NovaExceptions. error = neutron_exceptions.PortNotFoundClient() with test.nested( mock.patch.object(compute_utils, 'notify_about_instance_action'), mock.patch.object(self.compute.driver, 'detach_interface'), mock.patch.object(self.compute.network_api, 'deallocate_port_for_instance', side_effect=error), mock.patch.object(self.compute, '_instance_update')) as ( mock_notify, mock_detach, mock_deallocate, mock_instance_update): ex = self.assertRaises(neutron_exceptions.PortNotFoundClient, self.compute.detach_interface, self.context, instance, port_id) self.assertEqual(error, ex) mock_deallocate.assert_called_once_with( self.context, instance, port_id) self.assertEqual(1, warn_mock.call_count) mock_notify.assert_has_calls([ mock.call(self.context, instance, self.compute.host, action='interface_detach', phase='start')]) def test_attach_volume_new_flow(self): fake_bdm = fake_block_device.FakeDbBlockDeviceDict( {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': uuids.volume_id, 'device_name': '/dev/vdb'}) bdm = block_device_obj.BlockDeviceMapping()._from_db_object( self.context, block_device_obj.BlockDeviceMapping(), fake_bdm) instance = self._create_fake_instance_obj() instance.id = 42 fake_volume = {'id': uuids.volume, 'multiattach': False} with test.nested( mock.patch.object(cinder.API, 'get', return_value=fake_volume), mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance'), mock.patch.object(cinder.API, 'check_availability_zone'), mock.patch.object(cinder.API, 'attachment_create', return_value={'id': uuids.attachment_id}), mock.patch.object(objects.BlockDeviceMapping, 'save'), mock.patch.object(compute_rpcapi.ComputeAPI, 'reserve_block_device_name', return_value=bdm), mock.patch.object(compute_rpcapi.ComputeAPI, 'attach_volume') ) as (mock_get, mock_no_bdm, mock_check_availability_zone, mock_attachment_create, mock_bdm_save, mock_reserve_bdm, mock_attach): mock_no_bdm.side_effect = exception.VolumeBDMNotFound( volume_id=uuids.volume) self.compute_api.attach_volume( self.context, 
instance, uuids.volume, '/dev/vdb', 'ide', 'cdrom') mock_reserve_bdm.assert_called_once_with( self.context, instance, '/dev/vdb', uuids.volume, disk_bus='ide', device_type='cdrom', tag=None, multiattach=False) self.assertEqual(mock_get.call_args, mock.call(self.context, uuids.volume)) self.assertEqual(mock_check_availability_zone.call_args, mock.call( self.context, fake_volume, instance=instance)) mock_attachment_create.assert_called_once_with(self.context, uuids.volume, instance.uuid) a, kw = mock_attach.call_args self.assertEqual(a[2].device_name, '/dev/vdb') self.assertEqual(a[2].volume_id, uuids.volume_id) def test_attach_volume_no_device_new_flow(self): fake_bdm = fake_block_device.FakeDbBlockDeviceDict( {'source_type': 'volume', 'destination_type': 'volume', 'device_name': '/dev/vdb', 'volume_id': uuids.volume_id}) bdm = block_device_obj.BlockDeviceMapping()._from_db_object( self.context, block_device_obj.BlockDeviceMapping(), fake_bdm) instance = self._create_fake_instance_obj() instance.id = 42 fake_volume = {'id': uuids.volume, 'multiattach': False} with test.nested( mock.patch.object(cinder.API, 'get', return_value=fake_volume), mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance', side_effect=exception.VolumeBDMNotFound), mock.patch.object(cinder.API, 'check_availability_zone'), mock.patch.object(cinder.API, 'attachment_create', return_value={'id': uuids.attachment_id}), mock.patch.object(objects.BlockDeviceMapping, 'save'), mock.patch.object(compute_rpcapi.ComputeAPI, 'reserve_block_device_name', return_value=bdm), mock.patch.object(compute_rpcapi.ComputeAPI, 'attach_volume') ) as (mock_get, mock_no_bdm, mock_check_availability_zone, mock_attachment_create, mock_bdm_save, mock_reserve_bdm, mock_attach): mock_no_bdm.side_effect = exception.VolumeBDMNotFound( volume_id=uuids.volume) self.compute_api.attach_volume( self.context, instance, uuids.volume, device=None) mock_reserve_bdm.assert_called_once_with( self.context, instance, None, uuids.volume, disk_bus=None, device_type=None, tag=None, multiattach=False) self.assertEqual(mock_get.call_args, mock.call(self.context, uuids.volume)) self.assertEqual(mock_check_availability_zone.call_args, mock.call( self.context, fake_volume, instance=instance)) mock_attachment_create.assert_called_once_with(self.context, uuids.volume, instance.uuid) a, kw = mock_attach.call_args self.assertEqual(a[2].volume_id, uuids.volume_id) def test_attach_volume_shelved_offloaded(self): instance = self._create_fake_instance_obj() fake_bdm = objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict( {'volume_id': uuids.volume, 'source_type': 'volume', 'destination_type': 'volume', 'device_type': 'cdrom', 'disk_bus': 'ide', 'instance_uuid': instance.uuid})) with test.nested( mock.patch.object(compute.API, '_create_volume_bdm', return_value=fake_bdm), mock.patch.object(compute.API, '_check_attach_and_reserve_volume'), mock.patch.object(cinder.API, 'attach'), mock.patch.object(compute_utils, 'EventReporter') ) as (mock_bdm_create, mock_attach_and_reserve, mock_attach, mock_event): volume = {'id': uuids.volume} self.compute_api._attach_volume_shelved_offloaded( self.context, instance, volume, '/dev/vdb', 'ide', 'cdrom', False) mock_attach_and_reserve.assert_called_once_with(self.context, volume, instance, fake_bdm) mock_attach.assert_called_once_with(self.context, uuids.volume, instance.uuid, '/dev/vdb') mock_event.assert_called_once_with( self.context, 'api_attach_volume', CONF.host, instance.uuid, graceful_exit=False) 
self.assertTrue(mock_attach.called) def test_attach_volume_shelved_offloaded_new_flow(self): instance = self._create_fake_instance_obj() fake_bdm = objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict( {'volume_id': uuids.volume, 'source_type': 'volume', 'destination_type': 'volume', 'device_type': 'cdrom', 'disk_bus': 'ide', 'instance_uuid': instance.uuid})) def fake_check_attach_and_reserve(*args, **kwargs): fake_bdm.attachment_id = uuids.attachment_id with test.nested( mock.patch.object(compute.API, '_create_volume_bdm', return_value=fake_bdm), mock.patch.object(compute.API, '_check_attach_and_reserve_volume', side_effect=fake_check_attach_and_reserve), mock.patch.object(cinder.API, 'attachment_complete') ) as (mock_bdm_create, mock_attach_and_reserve, mock_attach_complete): volume = {'id': uuids.volume} self.compute_api._attach_volume_shelved_offloaded( self.context, instance, volume, '/dev/vdb', 'ide', 'cdrom', False) mock_attach_and_reserve.assert_called_once_with(self.context, volume, instance, fake_bdm) mock_attach_complete.assert_called_once_with( self.context, uuids.attachment_id) @mock.patch('nova.compute.api.API._record_action_start') def test_detach_volume(self, mock_record): # Ensure volume can be detached from instance called = {} instance = self._create_fake_instance_obj() # Set attach_status to 'fake' as nothing is reading the value. volume = {'id': uuids.volume, 'attach_status': 'fake'} def fake_begin_detaching(*args, **kwargs): called['fake_begin_detaching'] = True def fake_rpc_detach_volume(self, context, **kwargs): called['fake_rpc_detach_volume'] = True self.stub_out('nova.volume.cinder.API.begin_detaching', fake_begin_detaching) self.stub_out('nova.compute.rpcapi.ComputeAPI.detach_volume', fake_rpc_detach_volume) self.compute_api.detach_volume(self.context, instance, volume) self.assertTrue(called.get('fake_begin_detaching')) self.assertTrue(called.get('fake_rpc_detach_volume')) mock_record.assert_called_once_with( self.context, instance, instance_actions.DETACH_VOLUME) @mock.patch('nova.compute.api.API._record_action_start') @mock.patch.object(compute_utils, 'EventReporter') @mock.patch.object(nova.volume.cinder.API, 'begin_detaching') @mock.patch.object(compute.API, '_local_cleanup_bdm_volumes') @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_id') def test_detach_volume_shelved_offloaded(self, mock_block_dev, mock_local_cleanup, mock_begin_detaching, mock_event, mock_record): fake_bdm = block_device_obj.BlockDeviceMapping(context=context) fake_bdm.attachment_id = None mock_block_dev.return_value = fake_bdm instance = self._create_fake_instance_obj() volume = {'id': uuids.volume, 'attach_status': 'fake'} self.compute_api._detach_volume_shelved_offloaded(self.context, instance, volume) mock_begin_detaching.assert_called_once_with(self.context, volume['id']) mock_record.assert_called_once_with( self.context, instance, instance_actions.DETACH_VOLUME) mock_event.assert_called_once_with(self.context, 'api_detach_volume', CONF.host, instance.uuid, graceful_exit=False) self.assertTrue(mock_local_cleanup.called) @mock.patch.object(nova.volume.cinder.API, 'begin_detaching') @mock.patch.object(compute.API, '_local_cleanup_bdm_volumes') @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_id') def test_detach_volume_shelved_offloaded_new_flow(self, mock_block_dev, mock_local_cleanup, mock_begin_detaching): fake_bdm = block_device_obj.BlockDeviceMapping(context=context) fake_bdm.attachment_id = uuids.attachment_id fake_bdm.volume_id = 1 
mock_block_dev.return_value = fake_bdm instance = self._create_fake_instance_obj() volume = {'id': uuids.volume, 'attach_status': 'fake'} self.compute_api._detach_volume_shelved_offloaded(self.context, instance, volume) mock_begin_detaching.assert_not_called() self.assertTrue(mock_local_cleanup.called) @mock.patch.object(nova.volume.cinder.API, 'begin_detaching', side_effect=exception.InvalidInput(reason='error')) def test_detach_invalid_volume(self, mock_begin_detaching): # Ensure exception is raised while detaching an un-attached volume fake_instance = self._fake_instance({ 'uuid': 'f7000000-0000-0000-0000-000000000001', 'locked': False, 'launched_at': timeutils.utcnow(), 'vm_state': vm_states.ACTIVE, 'task_state': None}) volume = {'id': 1, 'attach_status': 'detached', 'status': 'available'} self.assertRaises(exception.InvalidVolume, self.compute_api.detach_volume, self.context, fake_instance, volume) def test_detach_suspended_instance_fails(self): fake_instance = self._fake_instance({ 'uuid': 'f7000000-0000-0000-0000-000000000001', 'locked': False, 'launched_at': timeutils.utcnow(), 'vm_state': vm_states.SUSPENDED, 'task_state': None}) # Unused volume = {} self.assertRaises(exception.InstanceInvalidState, self.compute_api.detach_volume, self.context, fake_instance, volume) @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') @mock.patch.object(cinder.API, 'get', return_value={'id': uuids.volume_id}) def test_detach_volume_libvirt_is_down(self, mock_get_vol, mock_get): # Ensure rollback during detach if libvirt goes down called = {} instance = self._create_fake_instance_obj() fake_bdm = fake_block_device.FakeDbBlockDeviceDict( {'device_name': '/dev/vdb', 'volume_id': uuids.volume_id, 'source_type': 'snapshot', 'destination_type': 'volume', 'connection_info': '{"test": "test"}'}) def fake_libvirt_driver_instance_exists(self, _instance): called['fake_libvirt_driver_instance_exists'] = True return False def fake_libvirt_driver_detach_volume_fails(*args, **kwargs): called['fake_libvirt_driver_detach_volume_fails'] = True raise AttributeError() def fake_roll_detaching(*args, **kwargs): called['fake_roll_detaching'] = True self.stub_out('nova.volume.cinder.API.roll_detaching', fake_roll_detaching) self.stub_out('nova.virt.fake.FakeDriver.instance_exists', fake_libvirt_driver_instance_exists) self.stub_out('nova.virt.fake.FakeDriver.detach_volume', fake_libvirt_driver_detach_volume_fails) mock_get.return_value = objects.BlockDeviceMapping( context=self.context, **fake_bdm) self.assertRaises(AttributeError, self.compute.detach_volume, self.context, 1, instance, None) self.assertTrue(called.get('fake_libvirt_driver_instance_exists')) self.assertTrue(called.get('fake_roll_detaching')) mock_get.assert_called_once_with(self.context, 1, instance.uuid) def test_detach_volume_not_found(self): # Ensure that a volume can be detached even when it is removed # from an instance but remaining in bdm. See bug #1367964. 
# TODO(lyarwood): Move into ../virt/test_block_device.py instance = self._create_fake_instance_obj() fake_bdm = fake_block_device.FakeDbBlockDeviceDict( {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': uuids.volume, 'device_name': '/dev/vdb', 'connection_info': '{"test": "test"}'}) bdm = objects.BlockDeviceMapping(context=self.context, **fake_bdm) # Stub out fake_volume_get so cinder api does not raise exception # and manager gets to call bdm.destroy() def fake_volume_get(self, context, volume_id, microversion=None): return {'id': volume_id} self.stub_out('nova.volume.cinder.API.get', fake_volume_get) with test.nested( mock.patch.object(self.compute.driver, 'detach_volume', side_effect=exception.DiskNotFound('sdb')), mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance', return_value=bdm), mock.patch.object(cinder.API, 'terminate_connection'), mock.patch.object(bdm, 'destroy'), mock.patch.object(self.compute, '_notify_about_instance_usage'), mock.patch.object(self.compute.volume_api, 'detach'), mock.patch.object(self.compute.driver, 'get_volume_connector', return_value='fake-connector') ) as (mock_detach_volume, mock_volume, mock_terminate_connection, mock_destroy, mock_notify, mock_detach, mock_volume_connector): self.compute.detach_volume(self.context, uuids.volume, instance, None) self.assertTrue(mock_detach_volume.called) mock_terminate_connection.assert_called_once_with(self.context, uuids.volume, 'fake-connector') mock_destroy.assert_called_once_with() mock_detach.assert_called_once_with(mock.ANY, uuids.volume, instance.uuid, None) @mock.patch.object(context.RequestContext, 'elevated') @mock.patch.object(cinder.API, 'attachment_delete') @mock.patch.object(compute_manager.ComputeManager, '_get_instance_block_device_info') def test_shutdown_with_attachment_delete(self, mock_info, mock_attach_delete, mock_elevated): # test _shutdown_instance with volume bdm containing an # attachment id. This should use the v3 cinder api. admin = context.get_admin_context() instance = self._create_fake_instance_obj() attachment_id = uuids.attachment_id vol_bdm = block_device_obj.BlockDeviceMapping( instance_uuid=instance['uuid'], source_type='volume', destination_type='volume', delete_on_termination=False, volume_id=uuids.volume_id, attachment_id=attachment_id) bdms = [vol_bdm] mock_elevated.return_value = admin self.compute._shutdown_instance(admin, instance, bdms) mock_attach_delete.assert_called_once_with(admin, attachment_id) @mock.patch.object(compute_manager.LOG, 'debug') @mock.patch.object(cinder.API, 'attachment_delete') @mock.patch.object(compute_manager.ComputeManager, '_get_instance_block_device_info') def test_shutdown_with_attachment_not_found(self, mock_info, mock_attach_delete, mock_debug_log): # test _shutdown_instance with attachment_delete throwing # a VolumeAttachmentNotFound exception. This should not # cause _shutdown_instance to fail. Only a debug log # message should be generated. 
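        # (The attachment_id path exercised below is the "new style" Cinder
        # v3 flow: _shutdown_instance cleans up via attachment_delete, as in
        # test_shutdown_with_attachment_delete above, and a missing
        # attachment is tolerated rather than treated as an error.)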
        admin = context.get_admin_context()
        instance = self._create_fake_instance_obj()

        attachment_id = uuids.attachment_id
        vol_bdm = block_device_obj.BlockDeviceMapping(
            instance_uuid=instance['uuid'],
            source_type='volume', destination_type='volume',
            delete_on_termination=False,
            volume_id=uuids.volume_id,
            attachment_id=attachment_id)
        bdms = [vol_bdm]

        mock_attach_delete.side_effect = \
            exception.VolumeAttachmentNotFound(attachment_id=attachment_id)

        self.compute._shutdown_instance(admin, instance, bdms)

        # get last call to LOG.debug and verify correct exception is in there
        self.assertIsInstance(mock_debug_log.call_args[0][1],
                              exception.VolumeAttachmentNotFound)

    def test_terminate_with_volumes(self):
        # Make sure that volumes get detached during instance termination.
        admin = context.get_admin_context()
        instance = self._create_fake_instance_obj()

        volume_id = uuids.volume
        values = {'instance_uuid': instance['uuid'],
                  'device_name': '/dev/vdc',
                  'delete_on_termination': False,
                  'volume_id': volume_id,
                  'destination_type': 'volume'
                  }
        db.block_device_mapping_create(admin, values)

        def fake_volume_get(self, context, volume_id):
            return {'id': volume_id}
        self.stub_out("nova.volume.cinder.API.get", fake_volume_get)

        # Stub out and record whether it gets detached
        result = {"detached": False}

        def fake_detach(self, context, volume_id_param, instance_uuid):
            result["detached"] = volume_id_param == volume_id

        self.stub_out("nova.volume.cinder.API.detach", fake_detach)

        def fake_terminate_connection(self, context, volume_id, connector):
            return {}

        self.stub_out("nova.volume.cinder.API.terminate_connection",
                      fake_terminate_connection)

        # Kill the instance and check that it was detached
        bdms = objects.BlockDeviceMappingList.get_by_instance_uuid(
                admin, instance['uuid'])
        self.compute.terminate_instance(admin, instance, bdms)

        self.assertTrue(result["detached"])

    def test_terminate_deletes_all_bdms(self):
        admin = context.get_admin_context()
        instance = self._create_fake_instance_obj()

        img_bdm = {'context': admin,
                   'instance_uuid': instance['uuid'],
                   'device_name': '/dev/vda',
                   'source_type': 'image',
                   'destination_type': 'local',
                   'delete_on_termination': False,
                   'boot_index': 0,
                   'image_id': uuids.image}
        vol_bdm = {'context': admin,
                   'instance_uuid': instance['uuid'],
                   'device_name': '/dev/vdc',
                   'source_type': 'volume',
                   'destination_type': 'volume',
                   'delete_on_termination': False,
                   'volume_id': uuids.volume}
        bdms = []
        for bdm in img_bdm, vol_bdm:
            bdm_obj = objects.BlockDeviceMapping(**bdm)
            bdm_obj.create()
            bdms.append(bdm_obj)

        self.stub_out('nova.volume.cinder.API.terminate_connection',
                      mock.MagicMock())
        self.stub_out('nova.volume.cinder.API.detach', mock.MagicMock())

        def fake_volume_get(self, context, volume_id):
            return {'id': volume_id}
        self.stub_out('nova.volume.cinder.API.get', fake_volume_get)

        self.stub_out('nova.compute.manager.ComputeManager.'
                      '_prep_block_device', mock.MagicMock())
        self.compute.build_and_run_instance(self.context, instance, {}, {},
                                            {}, block_device_mapping=[])

        self.compute.terminate_instance(self.context, instance, bdms)

        bdms = db.block_device_mapping_get_all_by_instance(admin,
                                                           instance['uuid'])
        self.assertEqual(len(bdms), 0)

    def test_inject_network_info(self):
        instance = self._create_fake_instance_obj(params={'host': CONF.host})
        self.compute.build_and_run_instance(self.context, instance, {}, {},
                                            {}, block_device_mapping=[])
        instance = self.compute_api.get(self.context, instance['uuid'])
        self.compute_api.inject_network_info(self.context, instance)

    def test_reset_network(self):
        instance = self._create_fake_instance_obj()
self.compute.build_and_run_instance(self.context, instance, {}, {}, {}, block_device_mapping=[]) instance = self.compute_api.get(self.context, instance['uuid']) self.compute_api.reset_network(self.context, instance) @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch('nova.context.RequestContext.elevated') @mock.patch('nova.compute.api.API._record_action_start') @mock.patch.object(compute_utils, 'EventReporter') def test_lock(self, mock_event, mock_record, mock_elevate, mock_notify): mock_elevate.return_value = self.context instance = self._create_fake_instance_obj() self.compute_api.lock(self.context, instance) mock_record.assert_called_once_with( self.context, instance, instance_actions.LOCK ) mock_event.assert_called_once_with(self.context, 'api_lock', CONF.host, instance.uuid, graceful_exit=False) mock_notify.assert_called_once_with( self.context, instance, CONF.host, action='lock', source='nova-api') @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch('nova.context.RequestContext.elevated') @mock.patch('nova.compute.api.API._record_action_start') @mock.patch.object(compute_utils, 'EventReporter') def test_lock_with_reason(self, mock_event, mock_record, mock_elevate, mock_notify): mock_elevate.return_value = self.context instance = self._create_fake_instance_obj() self.assertNotIn("locked_reason", instance.system_metadata) self.compute_api.lock(self.context, instance, reason="blah") self.assertEqual("blah", instance.system_metadata["locked_reason"]) mock_record.assert_called_once_with( self.context, instance, instance_actions.LOCK ) mock_event.assert_called_once_with(self.context, 'api_lock', CONF.host, instance.uuid, graceful_exit=False) mock_notify.assert_called_once_with( self.context, instance, CONF.host, action='lock', source='nova-api') @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch('nova.context.RequestContext.elevated') @mock.patch('nova.compute.api.API._record_action_start') @mock.patch.object(compute_utils, 'EventReporter') def test_unlock(self, mock_event, mock_record, mock_elevate, mock_notify): mock_elevate.return_value = self.context instance = self._create_fake_instance_obj() self.compute_api.unlock(self.context, instance) mock_record.assert_called_once_with( self.context, instance, instance_actions.UNLOCK ) mock_event.assert_called_once_with(self.context, 'api_unlock', CONF.host, instance.uuid, graceful_exit=False) mock_notify.assert_called_once_with( self.context, instance, CONF.host, action='unlock', source='nova-api') @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch('nova.context.RequestContext.elevated') @mock.patch('nova.compute.api.API._record_action_start') @mock.patch.object(compute_utils, 'EventReporter') def test_unlock_with_reason(self, mock_event, mock_record, mock_elevate, mock_notify): mock_elevate.return_value = self.context sm = {"locked_reason": "blah"} instance = self._create_fake_instance_obj( params={"system_metadata": sm}) self.compute_api.unlock(self.context, instance) self.assertNotIn("locked_reason", instance.system_metadata) mock_record.assert_called_once_with( self.context, instance, instance_actions.UNLOCK ) mock_event.assert_called_once_with(self.context, 'api_unlock', CONF.host, instance.uuid, graceful_exit=False) mock_notify.assert_called_once_with( self.context, instance, CONF.host, action='unlock', source='nova-api') @mock.patch.object(compute_rpcapi.ComputeAPI, 'get_diagnostics') def test_get_diagnostics(self, mock_get): instance = 
self._create_fake_instance_obj() self.compute_api.get_diagnostics(self.context, instance) mock_get.assert_called_once_with(self.context, instance=instance) @mock.patch.object(compute_rpcapi.ComputeAPI, 'get_instance_diagnostics') def test_get_instance_diagnostics(self, mock_get): instance = self._create_fake_instance_obj() self.compute_api.get_instance_diagnostics(self.context, instance) mock_get.assert_called_once_with(self.context, instance=instance) def _test_live_migrate(self, force=None): instance, instance_uuid = self._run_instance() rpcapi = self.compute_api.compute_task_api fake_spec = objects.RequestSpec() @mock.patch.object(rpcapi, 'live_migrate_instance') @mock.patch.object(objects.ComputeNodeList, 'get_all_by_host') @mock.patch.object(objects.RequestSpec, 'get_by_instance_uuid') @mock.patch.object(self.compute_api, '_record_action_start') def do_test(record_action_start, get_by_instance_uuid, get_all_by_host, live_migrate_instance): get_by_instance_uuid.return_value = fake_spec get_all_by_host.return_value = objects.ComputeNodeList( objects=[objects.ComputeNode( host='fake_dest_host', hypervisor_hostname='fake_dest_node')]) self.compute_api.live_migrate(self.context, instance, block_migration=True, disk_over_commit=True, host_name='fake_dest_host', force=force, async_=False) record_action_start.assert_called_once_with(self.context, instance, 'live-migration') if force is False: host = None else: host = 'fake_dest_host' live_migrate_instance.assert_called_once_with( self.context, instance, host, block_migration=True, disk_over_commit=True, request_spec=fake_spec, async_=False) do_test() instance.refresh() self.assertEqual(instance['task_state'], task_states.MIGRATING) if force is False: req_dest = fake_spec.requested_destination self.assertIsNotNone(req_dest) self.assertIsInstance(req_dest, objects.Destination) self.assertEqual('fake_dest_host', req_dest.host) self.assertEqual('fake_dest_node', req_dest.node) def test_live_migrate(self): self._test_live_migrate() def test_live_migrate_with_not_forced_host(self): self._test_live_migrate(force=False) def test_live_migrate_with_forced_host(self): self._test_live_migrate(force=True) def test_fail_live_migrate_with_non_existing_destination(self): instance = self._create_fake_instance_obj(services=True) self.assertIsNone(instance.task_state) self.assertRaises( exception.ComputeHostNotFound, self.compute_api.live_migrate, self.context.elevated(), instance, block_migration=True, disk_over_commit=True, host_name='fake_dest_host', force=False) @mock.patch('nova.compute.utils.notify_about_instance_action') def _test_evacuate(self, mock_notify, force=None): instance = self._create_fake_instance_obj(services=True) self.assertIsNone(instance.task_state) ctxt = self.context.elevated() fake_spec = objects.RequestSpec() def fake_rebuild_instance(*args, **kwargs): # NOTE(sbauza): Host can be set to None, we need to fake a correct # destination if this is the case. 
instance.host = kwargs['host'] or 'fake_dest_host' instance.save() @mock.patch.object(objects.Service, 'get_by_compute_host') @mock.patch.object(self.compute_api.compute_task_api, 'rebuild_instance') @mock.patch.object(objects.ComputeNodeList, 'get_all_by_host') @mock.patch.object(objects.RequestSpec, 'get_by_instance_uuid') @mock.patch.object(self.compute_api.servicegroup_api, 'service_is_up') def do_test(service_is_up, get_by_instance_uuid, get_all_by_host, rebuild_instance, get_service): service_is_up.return_value = False get_by_instance_uuid.return_value = fake_spec rebuild_instance.side_effect = fake_rebuild_instance get_all_by_host.return_value = objects.ComputeNodeList( objects=[objects.ComputeNode( host='fake_dest_host', hypervisor_hostname='fake_dest_node')]) get_service.return_value = objects.Service( host='fake_dest_host') self.compute_api.evacuate(ctxt, instance, host='fake_dest_host', on_shared_storage=True, admin_password=None, force=force) if force is False: host = None else: host = 'fake_dest_host' rebuild_instance.assert_called_once_with( ctxt, instance=instance, new_pass=None, injected_files=None, image_ref=None, orig_image_ref=None, orig_sys_metadata=None, bdms=None, recreate=True, on_shared_storage=True, request_spec=fake_spec, host=host) do_test() instance.refresh() self.assertEqual(instance.task_state, task_states.REBUILDING) self.assertEqual(instance.host, 'fake_dest_host') migs = objects.MigrationList.get_by_filters( self.context, {'source_host': 'fake_host'}) self.assertEqual(1, len(migs)) self.assertEqual(self.compute.host, migs[0].source_compute) self.assertEqual('accepted', migs[0].status) self.assertEqual('compute.instance.evacuate', fake_notifier.NOTIFICATIONS[0].event_type) mock_notify.assert_called_once_with( ctxt, instance, self.compute.host, action='evacuate', source='nova-api') if force is False: req_dest = fake_spec.requested_destination self.assertIsNotNone(req_dest) self.assertIsInstance(req_dest, objects.Destination) self.assertEqual('fake_dest_host', req_dest.host) self.assertEqual('fake_dest_node', req_dest.node) def test_evacuate(self): self._test_evacuate() def test_evacuate_with_not_forced_host(self): self._test_evacuate(force=False) def test_evacuate_with_forced_host(self): self._test_evacuate(force=True) @mock.patch('nova.servicegroup.api.API.service_is_up', return_value=False) def test_fail_evacuate_with_non_existing_destination(self, _service_is_up): instance = self._create_fake_instance_obj(services=True) self.assertIsNone(instance.task_state) self.assertRaises(exception.ComputeHostNotFound, self.compute_api.evacuate, self.context.elevated(), instance, host='fake_dest_host', on_shared_storage=True, admin_password=None, force=False) def test_fail_evacuate_from_non_existing_host(self): inst = {} inst['vm_state'] = vm_states.ACTIVE inst['launched_at'] = timeutils.utcnow() inst['image_ref'] = FAKE_IMAGE_REF inst['reservation_id'] = 'r-fakeres' inst['user_id'] = self.user_id inst['project_id'] = self.project_id inst['host'] = 'fake_host' inst['node'] = NODENAME inst['instance_type_id'] = self.tiny_flavor.id inst['ami_launch_index'] = 0 inst['memory_mb'] = 0 inst['vcpus'] = 0 inst['root_gb'] = 0 inst['ephemeral_gb'] = 0 inst['architecture'] = obj_fields.Architecture.X86_64 inst['os_type'] = 'Linux' instance = self._create_fake_instance_obj(inst) self.assertIsNone(instance.task_state) self.assertRaises(exception.ComputeHostNotFound, self.compute_api.evacuate, self.context.elevated(), instance, host='fake_dest_host', on_shared_storage=True, 
admin_password=None) @mock.patch('nova.objects.Service.get_by_compute_host') def test_fail_evacuate_from_running_host(self, mock_service): instance = self._create_fake_instance_obj(services=True) self.assertIsNone(instance.task_state) def fake_service_is_up(*args, **kwargs): return True self.stub_out('nova.servicegroup.api.API.service_is_up', fake_service_is_up) self.assertRaises(exception.ComputeServiceInUse, self.compute_api.evacuate, self.context.elevated(), instance, host='fake_dest_host', on_shared_storage=True, admin_password=None) def test_fail_evacuate_instance_in_wrong_state(self): states = [vm_states.BUILDING, vm_states.PAUSED, vm_states.SUSPENDED, vm_states.RESCUED, vm_states.RESIZED, vm_states.SOFT_DELETED, vm_states.DELETED] instances = [self._create_fake_instance_obj({'vm_state': state}) for state in states] for instance in instances: self.assertRaises(exception.InstanceInvalidState, self.compute_api.evacuate, self.context, instance, host='fake_dest_host', on_shared_storage=True, admin_password=None) @mock.patch('nova.objects.MigrationList.get_by_filters') def test_get_migrations(self, mock_migration): migration = test_migration.fake_db_migration() filters = {'host': 'host1'} def fake_migrations(context, filters): self.assertIsNotNone(context.db_connection) return objects.MigrationList( objects=[objects.Migration(**migration)]) mock_migration.side_effect = fake_migrations migrations = self.compute_api.get_migrations(self.context, filters) self.assertEqual(1, len(migrations)) self.assertEqual(migrations[0].id, migration['id']) mock_migration.assert_called_once_with(mock.ANY, filters) called_context = mock_migration.call_args_list[0][0][0] self.assertIsInstance(called_context, context.RequestContext) self.assertNotEqual(self.context, called_context) @mock.patch("nova.db.api.migration_get_in_progress_by_instance") def test_get_migrations_in_progress_by_instance(self, mock_get): migration = test_migration.fake_db_migration( instance_uuid=uuids.instance) mock_get.return_value = [migration] migrations = self.compute_api.get_migrations_in_progress_by_instance( self.context, uuids.instance) self.assertEqual(1, len(migrations)) self.assertEqual(migrations[0].id, migration['id']) mock_get.assert_called_once_with(self.context, uuids.instance, None) @mock.patch("nova.db.api.migration_get_by_id_and_instance") def test_get_migration_by_id_and_instance(self, mock_get): migration = test_migration.fake_db_migration( instance_uuid=uuids.instance) mock_get.return_value = migration res = self.compute_api.get_migration_by_id_and_instance( self.context, migration['id'], uuids.instance) self.assertEqual(res.id, migration['id']) mock_get.assert_called_once_with(self.context, migration['id'], uuids.instance) class ComputeAPIIpFilterTestCase(test.NoDBTestCase): '''Verifies the IP filtering in the compute API.''' def setUp(self): super(ComputeAPIIpFilterTestCase, self).setUp() self.compute_api = compute.API() def _get_ip_filtering_instances(self): '''Utility function to get instances for the IP filtering tests.''' info = [{ 'address': 'aa:bb:cc:dd:ee:ff', 'id': 1, 'network': { 'bridge': 'br0', 'id': 1, 'label': 'private', 'subnets': [{ 'cidr': '192.168.0.0/24', 'ips': [{ 'address': '192.168.0.10', 'type': 'fixed' }, { 'address': '192.168.0.11', 'type': 'fixed' }] }] } }, { 'address': 'aa:bb:cc:dd:ee:ff', 'id': 2, 'network': { 'bridge': 'br1', 'id': 2, 'label': 'private', 'subnets': [{ 'cidr': '192.164.0.0/24', 'ips': [{ 'address': '192.164.0.10', 'type': 'fixed' }] }] } }] info1 = 
objects.InstanceInfoCache(network_info=jsonutils.dumps(info)) inst1 = objects.Instance(id=1, info_cache=info1) info[0]['network']['subnets'][0]['ips'][0]['address'] = '192.168.0.20' info[0]['network']['subnets'][0]['ips'][1]['address'] = '192.168.0.21' info[1]['network']['subnets'][0]['ips'][0]['address'] = '192.164.0.20' info2 = objects.InstanceInfoCache(network_info=jsonutils.dumps(info)) inst2 = objects.Instance(id=2, info_cache=info2) return objects.InstanceList(objects=[inst1, inst2]) def test_ip_filtering_no_matches(self): instances = self._get_ip_filtering_instances() insts = self.compute_api._ip_filter(instances, {'ip': '.*30'}, None) self.assertEqual(0, len(insts)) def test_ip_filtering_one_match(self): instances = self._get_ip_filtering_instances() for val in ('192.168.0.10', '192.168.0.1', '192.164.0.10', '.*10'): insts = self.compute_api._ip_filter(instances, {'ip': val}, None) self.assertEqual([1], [i.id for i in insts]) def test_ip_filtering_one_match_limit(self): instances = self._get_ip_filtering_instances() for limit in (None, 1, 2): insts = self.compute_api._ip_filter(instances, {'ip': '.*10'}, limit) self.assertEqual([1], [i.id for i in insts]) def test_ip_filtering_two_matches(self): instances = self._get_ip_filtering_instances() for val in ('192.16', '192.168', '192.164'): insts = self.compute_api._ip_filter(instances, {'ip': val}, None) self.assertEqual([1, 2], [i.id for i in insts]) def test_ip_filtering_two_matches_limit(self): instances = self._get_ip_filtering_instances() # Up to 2 match, based on the passed limit for limit in (None, 1, 2, 3): insts = self.compute_api._ip_filter(instances, {'ip': '192.168.0.*'}, limit) expected_ids = [1, 2] if limit: expected_len = min(limit, len(expected_ids)) expected_ids = expected_ids[:expected_len] self.assertEqual(expected_ids, [inst.id for inst in insts]) @mock.patch.object(objects.BuildRequestList, 'get_by_filters') @mock.patch.object(objects.CellMapping, 'get_by_uuid', side_effect=exception.CellMappingNotFound(uuid=uuids.volume)) @mock.patch('nova.network.neutron.API.has_substr_port_filtering_extension', return_value=False) def test_ip_filtering_no_limit_to_db(self, mock_has_port_filter_ext, _mock_cell_map_get, mock_buildreq_get): c = context.get_admin_context() # Limit is not supplied to the DB when using an IP filter with mock.patch('nova.compute.instance_list.' 'get_instance_objects_sorted') as m_get: m_get.return_value = objects.InstanceList(objects=[]), list() self.compute_api.get_all(c, search_opts={'ip': '.10'}, limit=1) self.assertEqual(1, m_get.call_count) args = m_get.call_args[0] self.assertIsNone(args[2]) @mock.patch.object(objects.BuildRequestList, 'get_by_filters') @mock.patch.object(objects.CellMapping, 'get_by_uuid', side_effect=exception.CellMappingNotFound(uuid=uuids.volume)) def test_ip_filtering_pass_limit_to_db(self, _mock_cell_map_get, mock_buildreq_get): c = context.get_admin_context() # No IP filter, verify that the limit is passed with mock.patch('nova.compute.instance_list.' 
'get_instance_objects_sorted') as m_get: m_get.return_value = objects.InstanceList(objects=[]), list() self.compute_api.get_all(c, search_opts={}, limit=1) self.assertEqual(1, m_get.call_count) args = m_get.call_args[0] self.assertEqual(1, args[2]) def fake_rpc_method(self, context, method, **kwargs): pass def _create_service_entries( ctxt, values=[['avail_zone1', ['fake_host1', 'fake_host2']], ['avail_zone2', ['fake_host3']]], skip_host_mapping_creation_for_hosts=None): cells = objects.CellMappingList.get_all(ctxt) index = 0 for (avail_zone, hosts) in values: for host in hosts: # NOTE(danms): spread these services across cells cell = cells[index % len(cells)] index += 1 with context.target_cell(ctxt, cell) as cctxt: s = objects.Service(context=cctxt, host=host, binary='nova-compute', topic='compute', report_count=0) s.create() if host not in (skip_host_mapping_creation_for_hosts or []): hm = objects.HostMapping(context=ctxt, cell_mapping=cell, host=host) hm.create() return values class ComputeAPIAggrTestCase(BaseTestCase): """This is for unit coverage of aggregate-related methods defined in nova.compute.api. """ def setUp(self): super(ComputeAPIAggrTestCase, self).setUp() self.api = compute.AggregateAPI() self.context = context.get_admin_context() self.stub_out('oslo_messaging.rpc.client.call', fake_rpc_method) self.stub_out('oslo_messaging.rpc.client.cast', fake_rpc_method) @mock.patch('nova.compute.utils.notify_about_aggregate_action') def test_aggregate_no_zone(self, mock_notify): # Ensure we can create an aggregate without an availability zone aggr = self.api.create_aggregate(self.context, 'fake_aggregate', None) mock_notify.assert_has_calls([ mock.call(context=self.context, aggregate=aggr, action='create', phase='start'), mock.call(context=self.context, aggregate=aggr, action='create', phase='end')]) self.api.delete_aggregate(self.context, aggr.id) self.assertRaises(exception.AggregateNotFound, self.api.delete_aggregate, self.context, aggr.id) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_add_host') def test_check_az_for_aggregate(self, mock_add_host): # Ensure all conflict hosts can be returned values = _create_service_entries(self.context) fake_zone = values[0][0] fake_host1 = values[0][1][0] fake_host2 = values[0][1][1] aggr1 = self._init_aggregate_with_host(None, 'fake_aggregate1', fake_zone, fake_host1) aggr1 = self._init_aggregate_with_host(aggr1, None, None, fake_host2) aggr2 = self._init_aggregate_with_host(None, 'fake_aggregate2', None, fake_host2) aggr2 = self._init_aggregate_with_host(aggr2, None, None, fake_host1) metadata = {'availability_zone': 'another_zone'} self.assertRaises(exception.InvalidAggregateActionUpdate, self.api.update_aggregate, self.context, aggr2.id, metadata) def test_update_aggregate(self): # Ensure metadata can be updated. aggr = self.api.create_aggregate(self.context, 'fake_aggregate', 'fake_zone') fake_notifier.NOTIFICATIONS = [] self.api.update_aggregate(self.context, aggr.id, {'name': 'new_fake_aggregate'}) self.assertIsNone(availability_zones._get_cache().get('cache')) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 2) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.event_type, 'aggregate.updateprop.start') msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual(msg.event_type, 'aggregate.updateprop.end') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'aggregate_add_host') def test_update_aggregate_no_az(self, mock_add_host): # Ensure metadata without availability zone can be # updated,even the aggregate contains hosts belong # to another availability zone values = _create_service_entries(self.context) fake_zone = values[0][0] fake_host = values[0][1][0] self._init_aggregate_with_host(None, 'fake_aggregate1', fake_zone, fake_host) aggr2 = self._init_aggregate_with_host(None, 'fake_aggregate2', None, fake_host) metadata = {'name': 'new_fake_aggregate'} fake_notifier.NOTIFICATIONS = [] self.api.update_aggregate(self.context, aggr2.id, metadata) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 2) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.event_type, 'aggregate.updateprop.start') msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual(msg.event_type, 'aggregate.updateprop.end') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_add_host') def test_update_aggregate_az_change(self, mock_add_host): # Ensure availability zone can be updated, # when the aggregate is the only one with # availability zone values = _create_service_entries(self.context) fake_zone = values[0][0] fake_host = values[0][1][0] aggr1 = self._init_aggregate_with_host(None, 'fake_aggregate1', fake_zone, fake_host) self._init_aggregate_with_host(None, 'fake_aggregate2', None, fake_host) metadata = {'availability_zone': 'new_fake_zone'} fake_notifier.NOTIFICATIONS = [] self.api.update_aggregate(self.context, aggr1.id, metadata) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 2) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.event_type, 'aggregate.updatemetadata.start') msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual(msg.event_type, 'aggregate.updatemetadata.end') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_add_host') def test_update_aggregate_az_fails(self, mock_add_host): # Ensure aggregate's availability zone can't be updated, # when aggregate has hosts in other availability zone fake_notifier.NOTIFICATIONS = [] values = _create_service_entries(self.context) fake_zone = values[0][0] fake_host = values[0][1][0] self._init_aggregate_with_host(None, 'fake_aggregate1', fake_zone, fake_host) aggr2 = self._init_aggregate_with_host(None, 'fake_aggregate2', None, fake_host) metadata = {'availability_zone': 'another_zone'} self.assertRaises(exception.InvalidAggregateActionUpdate, self.api.update_aggregate, self.context, aggr2.id, metadata) fake_host2 = values[0][1][1] aggr3 = self._init_aggregate_with_host(None, 'fake_aggregate3', None, fake_host2) metadata = {'availability_zone': fake_zone} self.api.update_aggregate(self.context, aggr3.id, metadata) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 15) msg = fake_notifier.NOTIFICATIONS[13] self.assertEqual(msg.event_type, 'aggregate.updatemetadata.start') msg = fake_notifier.NOTIFICATIONS[14] self.assertEqual(msg.event_type, 'aggregate.updatemetadata.end') aggr4 = self.api.create_aggregate(self.context, 'fake_aggregate', None) metadata = {'availability_zone': ""} self.assertRaises(exception.InvalidAggregateActionUpdate, self.api.update_aggregate, self.context, aggr4.id, metadata) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'aggregate_add_host') def test_update_aggregate_az_fails_with_nova_az(self, mock_add_host): # Ensure aggregate's availability zone can't be updated, # when aggregate has hosts in other availability zone fake_notifier.NOTIFICATIONS = [] values = _create_service_entries(self.context) fake_host = values[0][1][0] self._init_aggregate_with_host(None, 'fake_aggregate1', CONF.default_availability_zone, fake_host) aggr2 = self._init_aggregate_with_host(None, 'fake_aggregate2', None, fake_host) metadata = {'availability_zone': 'another_zone'} self.assertRaises(exception.InvalidAggregateActionUpdate, self.api.update_aggregate, self.context, aggr2.id, metadata) @mock.patch('nova.compute.utils.notify_about_aggregate_action') def test_update_aggregate_metadata(self, mock_notify): # Ensure metadata can be updated. aggr = self.api.create_aggregate(self.context, 'fake_aggregate', 'fake_zone') metadata = {'foo_key1': 'foo_value1', 'foo_key2': 'foo_value2', 'availability_zone': 'fake_zone'} fake_notifier.NOTIFICATIONS = [] availability_zones._get_cache().region.get_or_create( 'fake_ky', lambda: 'fake_value') aggr = self.api.update_aggregate_metadata(self.context, aggr.id, metadata) self.assertIsNone(availability_zones._get_cache().get('fake_key')) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 2) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.event_type, 'aggregate.updatemetadata.start') msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual(msg.event_type, 'aggregate.updatemetadata.end') mock_notify.assert_has_calls([ mock.call(context=self.context, aggregate=aggr, action='update_metadata', phase='start'), mock.call(context=self.context, aggregate=aggr, action='update_metadata', phase='end')]) fake_notifier.NOTIFICATIONS = [] metadata['foo_key1'] = None expected_payload_meta_data = {'foo_key1': None, 'foo_key2': 'foo_value2', 'availability_zone': 'fake_zone'} expected = self.api.update_aggregate_metadata(self.context, aggr.id, metadata) self.assertEqual(2, len(fake_notifier.NOTIFICATIONS)) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual('aggregate.updatemetadata.start', msg.event_type) self.assertEqual(expected_payload_meta_data, msg.payload['meta_data']) msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual('aggregate.updatemetadata.end', msg.event_type) self.assertEqual(expected_payload_meta_data, msg.payload['meta_data']) self.assertThat(expected.metadata, matchers.DictMatches({'availability_zone': 'fake_zone', 'foo_key2': 'foo_value2'})) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'aggregate_add_host') @mock.patch('nova.compute.utils.notify_about_aggregate_action') def test_update_aggregate_metadata_no_az(self, mock_notify, mock_add_host): # Ensure metadata without availability zone can be # updated,even the aggregate contains hosts belong # to another availability zone values = _create_service_entries(self.context) fake_zone = values[0][0] fake_host = values[0][1][0] self._init_aggregate_with_host(None, 'fake_aggregate1', fake_zone, fake_host) aggr2 = self._init_aggregate_with_host(None, 'fake_aggregate2', None, fake_host) metadata = {'foo_key2': 'foo_value3'} fake_notifier.NOTIFICATIONS = [] aggr2 = self.api.update_aggregate_metadata(self.context, aggr2.id, metadata) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 2) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.event_type, 'aggregate.updatemetadata.start') msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual(msg.event_type, 'aggregate.updatemetadata.end') mock_notify.assert_has_calls([ mock.call(context=self.context, aggregate=aggr2, action='update_metadata', phase='start'), mock.call(context=self.context, aggregate=aggr2, action='update_metadata', phase='end')]) self.assertThat(aggr2.metadata, matchers.DictMatches({'foo_key2': 'foo_value3'})) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_add_host') def test_update_aggregate_metadata_az_change(self, mock_add_host): # Ensure availability zone can be updated, # when the aggregate is the only one with # availability zone values = _create_service_entries(self.context) fake_zone = values[0][0] fake_host = values[0][1][0] aggr1 = self._init_aggregate_with_host(None, 'fake_aggregate1', fake_zone, fake_host) self._init_aggregate_with_host(None, 'fake_aggregate2', None, fake_host) metadata = {'availability_zone': 'new_fake_zone'} fake_notifier.NOTIFICATIONS = [] self.api.update_aggregate_metadata(self.context, aggr1.id, metadata) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 2) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.event_type, 'aggregate.updatemetadata.start') msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual(msg.event_type, 'aggregate.updatemetadata.end') def test_update_aggregate_az_do_not_replace_existing_metadata(self): # Ensure that the update of the aggregate availability zone # does not replace the aggregate existing metadata aggr = self.api.create_aggregate(self.context, 'fake_aggregate', 'fake_zone') metadata = {'foo_key1': 'foo_value1'} aggr = self.api.update_aggregate_metadata(self.context, aggr.id, metadata) metadata = {'availability_zone': 'new_fake_zone'} aggr = self.api.update_aggregate(self.context, aggr.id, metadata) self.assertThat(aggr.metadata, matchers.DictMatches( {'availability_zone': 'new_fake_zone', 'foo_key1': 'foo_value1'})) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'aggregate_add_host') def test_update_aggregate_metadata_az_fails(self, mock_add_host): # Ensure aggregate's availability zone can't be updated, # when aggregate has hosts in other availability zone fake_notifier.NOTIFICATIONS = [] values = _create_service_entries(self.context) fake_zone = values[0][0] fake_host = values[0][1][0] self._init_aggregate_with_host(None, 'fake_aggregate1', fake_zone, fake_host) aggr2 = self._init_aggregate_with_host(None, 'fake_aggregate2', None, fake_host) metadata = {'availability_zone': 'another_zone'} self.assertRaises(exception.InvalidAggregateActionUpdateMeta, self.api.update_aggregate_metadata, self.context, aggr2.id, metadata) aggr3 = self._init_aggregate_with_host(None, 'fake_aggregate3', None, fake_host) metadata = {'availability_zone': fake_zone} self.api.update_aggregate_metadata(self.context, aggr3.id, metadata) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 15) msg = fake_notifier.NOTIFICATIONS[13] self.assertEqual(msg.event_type, 'aggregate.updatemetadata.start') msg = fake_notifier.NOTIFICATIONS[14] self.assertEqual(msg.event_type, 'aggregate.updatemetadata.end') aggr4 = self.api.create_aggregate(self.context, 'fake_aggregate', None) metadata = {'availability_zone': ""} self.assertRaises(exception.InvalidAggregateActionUpdateMeta, self.api.update_aggregate_metadata, self.context, aggr4.id, metadata) @mock.patch('nova.compute.utils.notify_about_aggregate_action') def test_delete_aggregate(self, mock_notify): # Ensure we can delete an aggregate. fake_notifier.NOTIFICATIONS = [] aggr = self.api.create_aggregate(self.context, 'fake_aggregate', 'fake_zone') self.assertEqual(len(fake_notifier.NOTIFICATIONS), 2) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.event_type, 'aggregate.create.start') msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual(msg.event_type, 'aggregate.create.end') mock_notify.assert_has_calls([ mock.call(context=self.context, aggregate=aggr, action='create', phase='start'), mock.call(context=self.context, aggregate=aggr, action='create', phase='end')]) mock_notify.reset_mock() fake_notifier.NOTIFICATIONS = [] self.api.delete_aggregate(self.context, aggr.id) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 2) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.event_type, 'aggregate.delete.start') msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual(msg.event_type, 'aggregate.delete.end') self.assertRaises(exception.AggregateNotFound, self.api.delete_aggregate, self.context, aggr.id) class AggregateIdMatcher(object): def __init__(self, aggr): self.aggr = aggr def __eq__(self, other_aggr): if type(self.aggr) != type(other_aggr): return False return self.aggr.id == other_aggr.id mock_notify.assert_has_calls([ mock.call(context=self.context, aggregate=AggregateIdMatcher(aggr), action='delete', phase='start'), mock.call(context=self.context, aggregate=AggregateIdMatcher(aggr), action='delete', phase='end')]) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_remove_host') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_add_host') def test_delete_non_empty_aggregate(self, mock_add_host, mock_remove_host): # Ensure InvalidAggregateAction is raised when non empty aggregate. 
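        # Flow exercised below: create a service entry for 'fake_host' in
        # 'fake_availability_zone', create an aggregate in that AZ, add the
        # host to it, then verify that deleting the still-populated
        # aggregate raises InvalidAggregateActionDelete and never reaches
        # the placement aggregate_remove_host call.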
_create_service_entries(self.context, [['fake_availability_zone', ['fake_host']]]) aggr = self.api.create_aggregate(self.context, 'fake_aggregate', 'fake_availability_zone') self.api.add_host_to_aggregate(self.context, aggr.id, 'fake_host') self.assertRaises(exception.InvalidAggregateActionDelete, self.api.delete_aggregate, self.context, aggr.id) mock_remove_host.assert_not_called() @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_add_host') @mock.patch('nova.compute.utils.notify_about_aggregate_action') @mock.patch.object(availability_zones, 'update_host_availability_zone_cache') def test_add_host_to_aggregate(self, mock_az, mock_notify, mock_add_host): # Ensure we can add a host to an aggregate. values = _create_service_entries(self.context) fake_zone = values[0][0] fake_host = values[0][1][0] aggr = self.api.create_aggregate(self.context, 'fake_aggregate', fake_zone) def fake_add_aggregate_host(*args, **kwargs): hosts = kwargs["aggregate"].hosts self.assertIn(fake_host, hosts) self.stub_out('nova.compute.rpcapi.ComputeAPI.add_aggregate_host', fake_add_aggregate_host) fake_notifier.NOTIFICATIONS = [] aggr = self.api.add_host_to_aggregate(self.context, aggr.id, fake_host) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 2) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.event_type, 'aggregate.addhost.start') msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual(msg.event_type, 'aggregate.addhost.end') self.assertEqual(len(aggr.hosts), 1) mock_az.assert_called_once_with(self.context, fake_host) mock_notify.assert_has_calls([ mock.call(context=self.context, aggregate=aggr, action='add_host', phase='start'), mock.call(context=self.context, aggregate=aggr, action='add_host', phase='end')]) mock_add_host.assert_called_once_with( self.context, aggr.uuid, host_name=fake_host) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_add_host') def test_add_host_to_aggr_with_no_az(self, mock_add_host): values = _create_service_entries(self.context) fake_zone = values[0][0] fake_host = values[0][1][0] aggr = self.api.create_aggregate(self.context, 'fake_aggregate', fake_zone) aggr = self.api.add_host_to_aggregate(self.context, aggr.id, fake_host) aggr_no_az = self.api.create_aggregate(self.context, 'fake_aggregate2', None) aggr_no_az = self.api.add_host_to_aggregate(self.context, aggr_no_az.id, fake_host) self.assertIn(fake_host, aggr.hosts) self.assertIn(fake_host, aggr_no_az.hosts) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_add_host') def test_add_host_to_multi_az(self, mock_add_host): # Ensure we can't add a host to different availability zone values = _create_service_entries(self.context) fake_zone = values[0][0] fake_host = values[0][1][0] aggr = self.api.create_aggregate(self.context, 'fake_aggregate', fake_zone) aggr = self.api.add_host_to_aggregate(self.context, aggr.id, fake_host) self.assertEqual(len(aggr.hosts), 1) fake_zone2 = "another_zone" aggr2 = self.api.create_aggregate(self.context, 'fake_aggregate2', fake_zone2) self.assertRaises(exception.InvalidAggregateActionAdd, self.api.add_host_to_aggregate, self.context, aggr2.id, fake_host) self.assertEqual(1, mock_add_host.call_count) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'aggregate_add_host') def test_add_host_to_multi_az_with_nova_agg(self, mock_add_host): # Ensure we can't add a host if already existing in an agg with AZ set # to default values = _create_service_entries(self.context) fake_host = values[0][1][0] aggr = self.api.create_aggregate(self.context, 'fake_aggregate', CONF.default_availability_zone) aggr = self.api.add_host_to_aggregate(self.context, aggr.id, fake_host) self.assertEqual(len(aggr.hosts), 1) fake_zone2 = "another_zone" aggr2 = self.api.create_aggregate(self.context, 'fake_aggregate2', fake_zone2) self.assertRaises(exception.InvalidAggregateActionAdd, self.api.add_host_to_aggregate, self.context, aggr2.id, fake_host) self.assertEqual(1, mock_add_host.call_count) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_add_host') def test_add_host_to_aggregate_multiple(self, mock_add_host): # Ensure we can add multiple hosts to an aggregate. values = _create_service_entries(self.context) fake_zone = values[0][0] aggr = self.api.create_aggregate(self.context, 'fake_aggregate', fake_zone) for host in values[0][1]: aggr = self.api.add_host_to_aggregate(self.context, aggr.id, host) self.assertEqual(len(aggr.hosts), len(values[0][1])) self.assertEqual(len(aggr.hosts), mock_add_host.call_count) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_add_host') def test_add_host_to_aggregate_raise_not_found(self, mock_add_host): # Ensure ComputeHostNotFound is raised when adding invalid host. aggr = self.api.create_aggregate(self.context, 'fake_aggregate', 'fake_zone') fake_notifier.NOTIFICATIONS = [] self.assertRaises(exception.ComputeHostNotFound, self.api.add_host_to_aggregate, self.context, aggr.id, 'invalid_host') self.assertEqual(len(fake_notifier.NOTIFICATIONS), 2) self.assertEqual(fake_notifier.NOTIFICATIONS[1].publisher_id, 'compute.fake-mini') mock_add_host.assert_not_called() @mock.patch('nova.objects.Service.get_by_compute_host') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_add_host') def test_add_host_to_aggregate_raise_not_found_case(self, mock_add_host, mock_get_service): # Ensure ComputeHostNotFound is raised when adding a host with a # hostname that doesn't exactly map what we have stored. def return_anyway(context, host_name): return objects.Service(host=host_name.upper()) mock_get_service.side_effect = return_anyway aggr = self.api.create_aggregate(self.context, 'fake_aggregate', 'fake_zone') fake_notifier.NOTIFICATIONS = [] values = _create_service_entries(self.context) fake_host = values[0][1][0] self.assertRaises(exception.ComputeHostNotFound, self.api.add_host_to_aggregate, self.context, aggr.id, fake_host) mock_add_host.assert_not_called() @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_add_host') @mock.patch('nova.objects.HostMapping.get_by_host') @mock.patch('nova.context.set_target_cell') def test_add_host_to_aggregate_raise_cn_not_found(self, mock_st, mock_hm, mock_add_host): # Ensure ComputeHostNotFound is raised when adding invalid host. aggr = self.api.create_aggregate(self.context, 'fake_aggregate', 'fake_zone') fake_notifier.NOTIFICATIONS = [] self.assertRaises(exception.ComputeHostNotFound, self.api.add_host_to_aggregate, self.context, aggr.id, 'invalid_host') mock_add_host.assert_not_called() @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_remove_host') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'aggregate_add_host') @mock.patch('nova.compute.utils.notify_about_aggregate_action') @mock.patch.object(availability_zones, 'update_host_availability_zone_cache') def test_remove_host_from_aggregate_active( self, mock_az, mock_notify, mock_add_host, mock_remove_host): # Ensure we can remove a host from an aggregate. values = _create_service_entries(self.context) fake_zone = values[0][0] aggr = self.api.create_aggregate(self.context, 'fake_aggregate', fake_zone) for host in values[0][1]: aggr = self.api.add_host_to_aggregate(self.context, aggr.id, host) host_to_remove = values[0][1][0] def fake_remove_aggregate_host(*args, **kwargs): hosts = kwargs["aggregate"].hosts self.assertNotIn(host_to_remove, hosts) self.stub_out('nova.compute.rpcapi.ComputeAPI.remove_aggregate_host', fake_remove_aggregate_host) fake_notifier.NOTIFICATIONS = [] mock_notify.reset_mock() expected = self.api.remove_host_from_aggregate(self.context, aggr.id, host_to_remove) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 2) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.event_type, 'aggregate.removehost.start') msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual(msg.event_type, 'aggregate.removehost.end') self.assertEqual(len(aggr.hosts) - 1, len(expected.hosts)) mock_az.assert_called_with(self.context, host_to_remove) mock_notify.assert_has_calls([ mock.call(context=self.context, aggregate=expected, action='remove_host', phase='start'), mock.call(context=self.context, aggregate=expected, action='remove_host', phase='end')]) mock_remove_host.assert_called_once_with( self.context, aggr.uuid, host_to_remove) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_remove_host') def test_remove_host_from_aggregate_raise_not_found( self, mock_remove_host): # Ensure ComputeHostNotFound is raised when removing invalid host. _create_service_entries(self.context, [['fake_zone', ['fake_host']]]) aggr = self.api.create_aggregate(self.context, 'fake_aggregate', 'fake_zone') self.assertRaises(exception.ComputeHostNotFound, self.api.remove_host_from_aggregate, self.context, aggr.id, 'invalid_host') mock_remove_host.assert_not_called() @mock.patch('nova.objects.Service.get_by_compute_host') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_remove_host') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_add_host') @mock.patch('nova.compute.utils.notify_about_aggregate_action') @mock.patch.object(availability_zones, 'update_host_availability_zone_cache') def test_remove_host_from_aggregate_no_host_mapping_service_exists( self, mock_az, mock_notify, mock_add_host, mock_rm_host, mock_get_service): # Ensure ComputeHostNotFound is not raised when adding a host with a # hostname that doesn't have host mapping but has a service entry. fake_host = 'fake_host' # This is called 4 times, during addition to aggregate for cell0 and # cell1, and during deletion for cell0 and cell1 as well mock_get_service.side_effect = [ exception.NotFound(), objects.Service(host=fake_host), exception.NotFound(), objects.Service(host=fake_host) ] aggr = self.api.create_aggregate(self.context, 'fake_aggregate', 'fake_zone') fake_notifier.NOTIFICATIONS = [] _create_service_entries( self.context, values=[['az', [fake_host]]], skip_host_mapping_creation_for_hosts=[fake_host]) self.api.add_host_to_aggregate(self.context, aggr.id, fake_host) self.api.remove_host_from_aggregate(self.context, aggr.id, fake_host) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'aggregate_remove_host') @mock.patch('nova.objects.HostMapping.get_by_host') @mock.patch('nova.context.set_target_cell') def test_remove_host_from_aggregate_raise_cn_not_found(self, mock_st, mock_hm, mock_remove_host): # Ensure ComputeHostNotFound is raised when removing invalid host. _create_service_entries(self.context, [['fake_zone', ['fake_host']]]) aggr = self.api.create_aggregate(self.context, 'fake_aggregate', 'fake_zone') self.assertRaises(exception.ComputeHostNotFound, self.api.remove_host_from_aggregate, self.context, aggr.id, 'invalid_host') mock_remove_host.assert_not_called() def test_aggregate_list(self): aggregate = self.api.create_aggregate(self.context, 'fake_aggregate', 'fake_zone') metadata = {'foo_key1': 'foo_value1', 'foo_key2': 'foo_value2'} meta_aggregate = self.api.create_aggregate(self.context, 'fake_aggregate2', 'fake_zone2') self.api.update_aggregate_metadata(self.context, meta_aggregate.id, metadata) aggregate_list = self.api.get_aggregate_list(self.context) self.assertIn(aggregate.id, map(lambda x: x.id, aggregate_list)) self.assertIn(meta_aggregate.id, map(lambda x: x.id, aggregate_list)) self.assertIn('fake_aggregate', map(lambda x: x.name, aggregate_list)) self.assertIn('fake_aggregate2', map(lambda x: x.name, aggregate_list)) self.assertIn('fake_zone', map(lambda x: x.availability_zone, aggregate_list)) self.assertIn('fake_zone2', map(lambda x: x.availability_zone, aggregate_list)) test_agg_meta = aggregate_list[1].metadata self.assertIn('foo_key1', test_agg_meta) self.assertIn('foo_key2', test_agg_meta) self.assertEqual('foo_value1', test_agg_meta['foo_key1']) self.assertEqual('foo_value2', test_agg_meta['foo_key2']) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_add_host') def test_aggregate_list_with_hosts(self, mock_add_host): values = _create_service_entries(self.context) fake_zone = values[0][0] host_aggregate = self.api.create_aggregate(self.context, 'fake_aggregate', fake_zone) self.api.add_host_to_aggregate(self.context, host_aggregate.id, values[0][1][0]) aggregate_list = self.api.get_aggregate_list(self.context) aggregate = aggregate_list[0] hosts = aggregate.hosts if 'hosts' in aggregate else None self.assertIn(values[0][1][0], hosts) @mock.patch('nova.scheduler.client.report.SchedulerReportClient') def test_placement_client_init(self, mock_report_client): """Tests to make sure that the construction of the placement client only happens once per AggregateAPI class instance. """ self.assertIsNone(self.api._placement_client) # Access the property twice to make sure SchedulerReportClient is # only loaded once. for x in range(2): self.api.placement_client mock_report_client.assert_called_once_with() class ComputeAPIAggrCallsSchedulerTestCase(test.NoDBTestCase): """This is for making sure that all Aggregate API methods which are updating the aggregates DB table also notifies the Scheduler by using its client. """ def setUp(self): super(ComputeAPIAggrCallsSchedulerTestCase, self).setUp() self.api = compute.AggregateAPI() self.context = context.RequestContext('fake', 'fake') @mock.patch('nova.scheduler.client.query.SchedulerQueryClient.' 'update_aggregates') def test_create_aggregate(self, update_aggregates): with mock.patch.object(objects.Aggregate, 'create'): agg = self.api.create_aggregate(self.context, 'fake', None) update_aggregates.assert_called_once_with(self.context, [agg]) @mock.patch('nova.scheduler.client.query.SchedulerQueryClient.' 
'update_aggregates') def test_update_aggregate(self, update_aggregates): self.api.is_safe_to_update_az = mock.Mock() agg = objects.Aggregate() with mock.patch.object(objects.Aggregate, 'get_by_id', return_value=agg): self.api.update_aggregate(self.context, 1, {}) update_aggregates.assert_called_once_with(self.context, [agg]) @mock.patch('nova.scheduler.client.query.SchedulerQueryClient.' 'update_aggregates') def test_update_aggregate_metadata(self, update_aggregates): self.api.is_safe_to_update_az = mock.Mock() agg = objects.Aggregate() agg.update_metadata = mock.Mock() with mock.patch.object(objects.Aggregate, 'get_by_id', return_value=agg): self.api.update_aggregate_metadata(self.context, 1, {}) update_aggregates.assert_called_once_with(self.context, [agg]) @mock.patch('nova.scheduler.client.query.SchedulerQueryClient.' 'delete_aggregate') def test_delete_aggregate(self, delete_aggregate): self.api.is_safe_to_update_az = mock.Mock() agg = objects.Aggregate(uuid=uuids.agg, name='fake-aggregate', hosts=[]) agg.destroy = mock.Mock() with mock.patch.object(objects.Aggregate, 'get_by_id', return_value=agg): self.api.delete_aggregate(self.context, 1) delete_aggregate.assert_called_once_with(self.context, agg) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_add_host') @mock.patch('nova.compute.utils.notify_about_aggregate_action') @mock.patch('nova.compute.rpcapi.ComputeAPI.add_aggregate_host') @mock.patch('nova.scheduler.client.query.SchedulerQueryClient.' 'update_aggregates') def test_add_host_to_aggregate(self, update_aggregates, mock_add_agg, mock_notify, mock_add_host): self.api.is_safe_to_update_az = mock.Mock() self.api._update_az_cache_for_host = mock.Mock() agg = objects.Aggregate(name='fake', metadata={}, uuid=uuids.agg) agg.add_host = mock.Mock() with test.nested( mock.patch.object(objects.Service, 'get_by_compute_host', return_value=objects.Service( host='fakehost')), mock.patch.object(objects.Aggregate, 'get_by_id', return_value=agg)): self.api.add_host_to_aggregate(self.context, 1, 'fakehost') update_aggregates.assert_called_once_with(self.context, [agg]) mock_add_agg.assert_called_once_with(self.context, aggregate=agg, host_param='fakehost', host='fakehost') mock_add_host.assert_called_once_with( self.context, agg.uuid, host_name='fakehost') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_remove_host') @mock.patch('nova.compute.utils.notify_about_aggregate_action') @mock.patch('nova.compute.rpcapi.ComputeAPI.remove_aggregate_host') @mock.patch('nova.scheduler.client.query.SchedulerQueryClient.' 
'update_aggregates') def test_remove_host_from_aggregate(self, update_aggregates, mock_remove_agg, mock_notify, mock_remove_host): self.api._update_az_cache_for_host = mock.Mock() agg = objects.Aggregate(name='fake', metadata={}, uuid=uuids.agg) agg.delete_host = mock.Mock() with test.nested( mock.patch.object(objects.Service, 'get_by_compute_host'), mock.patch.object(objects.Aggregate, 'get_by_id', return_value=agg)): self.api.remove_host_from_aggregate(self.context, 1, 'fakehost') update_aggregates.assert_called_once_with(self.context, [agg]) mock_remove_agg.assert_called_once_with(self.context, aggregate=agg, host_param='fakehost', host='fakehost') mock_notify.assert_has_calls([ mock.call(context=self.context, aggregate=agg, action='remove_host', phase='start'), mock.call(context=self.context, aggregate=agg, action='remove_host', phase='end')]) mock_remove_host.assert_called_once_with( self.context, agg.uuid, 'fakehost') class ComputeAggrTestCase(BaseTestCase): """This is for unit coverage of aggregate-related methods defined in nova.compute.manager. """ def setUp(self): super(ComputeAggrTestCase, self).setUp() self.context = context.get_admin_context() az = {'availability_zone': 'test_zone'} self.aggr = objects.Aggregate(self.context, name='test_aggr', metadata=az) self.aggr.create() def test_add_aggregate_host(self): def fake_driver_add_to_aggregate(self, context, aggregate, host, **_ignore): fake_driver_add_to_aggregate.called = True return {"foo": "bar"} self.stub_out("nova.virt.fake.FakeDriver.add_to_aggregate", fake_driver_add_to_aggregate) self.compute.add_aggregate_host(self.context, host="host", aggregate=self.aggr, slave_info=None) self.assertTrue(fake_driver_add_to_aggregate.called) def test_remove_aggregate_host(self): def fake_driver_remove_from_aggregate(cls, context, aggregate, host, **_ignore): fake_driver_remove_from_aggregate.called = True self.assertEqual("host", host, "host") return {"foo": "bar"} self.stub_out("nova.virt.fake.FakeDriver.remove_from_aggregate", fake_driver_remove_from_aggregate) self.compute.remove_aggregate_host(self.context, aggregate=self.aggr, host="host", slave_info=None) self.assertTrue(fake_driver_remove_from_aggregate.called) def test_add_aggregate_host_passes_slave_info_to_driver(self): def driver_add_to_aggregate(cls, context, aggregate, host, **kwargs): self.assertEqual(self.context, context) self.assertEqual(aggregate.id, self.aggr.id) self.assertEqual(host, "the_host") self.assertEqual("SLAVE_INFO", kwargs.get("slave_info")) self.stub_out("nova.virt.fake.FakeDriver.add_to_aggregate", driver_add_to_aggregate) self.compute.add_aggregate_host(self.context, host="the_host", slave_info="SLAVE_INFO", aggregate=self.aggr) def test_remove_from_aggregate_passes_slave_info_to_driver(self): def driver_remove_from_aggregate(cls, context, aggregate, host, **kwargs): self.assertEqual(self.context, context) self.assertEqual(aggregate.id, self.aggr.id) self.assertEqual(host, "the_host") self.assertEqual("SLAVE_INFO", kwargs.get("slave_info")) self.stub_out("nova.virt.fake.FakeDriver.remove_from_aggregate", driver_remove_from_aggregate) self.compute.remove_aggregate_host(self.context, aggregate=self.aggr, host="the_host", slave_info="SLAVE_INFO") class DisabledInstanceTypesTestCase(BaseTestCase): """Some instance-types are marked 'disabled' which means that they will not show up in customer-facing listings. We do, however, want those instance-types to be available for emergency migrations and for rebuilding of existing instances. 
    One legitimate use of the 'disabled' field would be when phasing out a
    particular instance-type. We still want customers to be able to use an
    instance of the old type, and we want Ops to be able to perform
    migrations against it, but we *don't* want customers building new
    instances with the phased-out instance-type.
    """
    def setUp(self):
        super(DisabledInstanceTypesTestCase, self).setUp()
        self.compute_api = compute.API()
        self.inst_type = objects.Flavor.get_by_name(self.context, 'm1.small')

    def test_can_build_instance_from_visible_instance_type(self):
        self.inst_type['disabled'] = False
        # Assert that exception.FlavorNotFound is not raised
        self.compute_api.create(self.context, self.inst_type,
                                image_href=uuids.image_instance)

    def test_cannot_build_instance_from_disabled_instance_type(self):
        self.inst_type['disabled'] = True
        self.assertRaises(exception.FlavorNotFound,
                          self.compute_api.create, self.context,
                          self.inst_type, None)

    @mock.patch('nova.compute.api.API.get_instance_host_status',
                new=mock.Mock(return_value=obj_fields.HostStatus.UP))
    @mock.patch('nova.compute.api.API._validate_flavor_image_nostatus')
    @mock.patch('nova.objects.RequestSpec')
    def test_can_resize_to_visible_instance_type(self, mock_reqspec,
                                                 mock_validate):
        instance = self._create_fake_instance_obj()
        orig_get_flavor_by_flavor_id =\
            flavors.get_flavor_by_flavor_id

        def fake_get_flavor_by_flavor_id(flavor_id, ctxt=None,
                                         read_deleted="yes"):
            instance_type = orig_get_flavor_by_flavor_id(flavor_id, ctxt,
                                                         read_deleted)
            instance_type['disabled'] = False
            return instance_type

        self.stub_out('nova.compute.flavors.get_flavor_by_flavor_id',
                      fake_get_flavor_by_flavor_id)

        with mock.patch('nova.conductor.api.ComputeTaskAPI.resize_instance'):
            self.compute_api.resize(self.context, instance, '4')

    @mock.patch('nova.compute.api.API.get_instance_host_status',
                new=mock.Mock(return_value=obj_fields.HostStatus.UP))
    def test_cannot_resize_to_disabled_instance_type(self):
        instance = self._create_fake_instance_obj()
        orig_get_flavor_by_flavor_id = \
            flavors.get_flavor_by_flavor_id

        def fake_get_flavor_by_flavor_id(flavor_id, ctxt=None,
                                         read_deleted="yes"):
            instance_type = orig_get_flavor_by_flavor_id(flavor_id, ctxt,
                                                         read_deleted)
            instance_type['disabled'] = True
            return instance_type

        self.stub_out('nova.compute.flavors.get_flavor_by_flavor_id',
                      fake_get_flavor_by_flavor_id)

        self.assertRaises(exception.FlavorNotFound,
                          self.compute_api.resize,
                          self.context, instance, '4')


class InnerTestingException(Exception):
    pass


class ComputeRescheduleResizeOrReraiseTestCase(BaseTestCase):
    """Test logic and exception handling around rescheduling prep resize
    requests
    """
    def setUp(self):
        super(ComputeRescheduleResizeOrReraiseTestCase, self).setUp()
        self.instance = self._create_fake_instance_obj()
        self.instance_uuid = self.instance['uuid']
        self.instance_type = objects.Flavor.get_by_name(
            context.get_admin_context(), 'm1.tiny')
        self.request_spec = objects.RequestSpec()

    @mock.patch('nova.compute.manager.ComputeManager._prep_resize',
                side_effect=test.TestingException)
    @mock.patch.object(compute_manager.ComputeManager,
                       '_reschedule_resize_or_reraise')
    def test_reschedule_resize_or_reraise_called(self, mock_res, mock_prep):
        """Verify the rescheduling logic gets called when there is an error
        during prep_resize.
""" inst_obj = self._create_fake_instance_obj() self.compute.prep_resize(self.context, image=None, instance=inst_obj, instance_type=self.instance_type, request_spec=self.request_spec, filter_properties={}, migration=mock.Mock(), node=None, clean_shutdown=True, host_list=None) mock_res.assert_called_once_with(mock.ANY, inst_obj, mock.ANY, self.instance_type, self.request_spec, {}, None) def test_reschedule_resize_or_reraise_no_filter_properties(self): """Test behavior when ``filter_properties`` is None. This should disable rescheduling and the original exception should be raised. """ filter_properties = None try: raise test.TestingException("Original") except Exception: exc_info = sys.exc_info() # because we're not retrying, we should re-raise the exception self.assertRaises(test.TestingException, self.compute._reschedule_resize_or_reraise, self.context, self.instance, exc_info, self.instance_type, self.request_spec, filter_properties, None) def test_reschedule_resize_or_reraise_no_retry_info(self): """Test behavior when ``filter_properties`` doesn't contain 'retry'. This should disable rescheduling and the original exception should be raised. """ filter_properties = {} try: raise test.TestingException("Original") except Exception: exc_info = sys.exc_info() # because we're not retrying, we should re-raise the exception self.assertRaises(test.TestingException, self.compute._reschedule_resize_or_reraise, self.context, self.instance, exc_info, self.instance_type, self.request_spec, filter_properties, None) @mock.patch.object(compute_manager.ComputeManager, '_instance_update') @mock.patch('nova.conductor.api.ComputeTaskAPI.resize_instance', side_effect=InnerTestingException('inner')) @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch.object(compute_manager.ComputeManager, "_log_original_error") def test_reschedule_fails_with_exception(self, mock_log, mock_notify, mock_resize, mock_update): """Original exception should be raised if the conductor call raises another exception. """ filter_properties = {'retry': {'num_attempts': 0}} try: raise test.TestingException('Original') except Exception: exc_info = sys.exc_info() self.assertRaises(test.TestingException, self.compute._reschedule_resize_or_reraise, self.context, self.instance, exc_info, self.instance_type, self.request_spec, filter_properties, None) mock_update.assert_called_once_with( self.context, mock.ANY, task_state=task_states.RESIZE_PREP) mock_resize.assert_called_once_with( self.context, mock.ANY, {'filter_properties': filter_properties}, self.instance_type, request_spec=self.request_spec, host_list=None) mock_notify.assert_called_once_with( self.context, self.instance, 'fake-mini', action='resize', phase='error', exception=mock_resize.side_effect, tb=mock.ANY) # If not rescheduled, the original resize exception should not be # logged. 
mock_log.assert_not_called() @mock.patch.object(compute_manager.ComputeManager, '_instance_update') @mock.patch('nova.conductor.api.ComputeTaskAPI.resize_instance') @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch.object(compute_manager.ComputeManager, "_log_original_error") def test_reschedule_passes(self, mock_log, mock_notify, mock_resize, mock_update): filter_properties = {'retry': {'num_attempts': 0}} try: raise test.TestingException("Original") except Exception: exc_info = sys.exc_info() self.compute._reschedule_resize_or_reraise( self.context, self.instance, exc_info, self.instance_type, self.request_spec, filter_properties, None) mock_update.assert_called_once_with( self.context, mock.ANY, task_state=task_states.RESIZE_PREP) mock_resize.assert_called_once_with( self.context, mock.ANY, {'filter_properties': filter_properties}, self.instance_type, request_spec=self.request_spec, host_list=None) mock_notify.assert_called_once_with( self.context, self.instance, 'fake-mini', action='resize', phase='error', exception=exc_info[1], tb=mock.ANY) # If rescheduled, the original resize exception should be logged. mock_log.assert_called_once_with(exc_info, self.instance.uuid) class ComputeInactiveImageTestCase(BaseTestCase): def setUp(self): super(ComputeInactiveImageTestCase, self).setUp() def fake_show(meh, context, id, **kwargs): return {'id': id, 'name': 'fake_name', 'status': 'deleted', 'min_ram': 0, 'min_disk': 0, 'properties': {'kernel_id': uuids.kernel_id, 'ramdisk_id': uuids.ramdisk_id, 'something_else': 'meow'}} fake_image.stub_out_image_service(self) self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', fake_show) self.compute_api = compute.API() def test_create_instance_with_deleted_image(self): # Make sure we can't start an instance with a deleted image. inst_type = objects.Flavor.get_by_name(context.get_admin_context(), 'm1.tiny') self.assertRaises(exception.ImageNotActive, self.compute_api.create, self.context, inst_type, uuids.image_instance) class EvacuateHostTestCase(BaseTestCase): def setUp(self): super(EvacuateHostTestCase, self).setUp() self.inst = self._create_fake_instance_obj( {'host': 'fake_host_2', 'node': 'fakenode2'}) self.inst.system_metadata = {} self.inst.task_state = task_states.REBUILDING self.inst.save() def fake_get_compute_info(cls, context, host): cn = objects.ComputeNode(hypervisor_hostname=NODENAME) return cn self.stub_out('nova.compute.manager.ComputeManager._get_compute_info', fake_get_compute_info) self.useFixture(fixtures.SpawnIsSynchronousFixture()) def tearDown(self): db.instance_destroy(self.context, self.inst.uuid) super(EvacuateHostTestCase, self).tearDown() def _rebuild(self, on_shared_storage=True, migration=None, send_node=False, vm_states_is_stopped=False): network_api = self.compute.network_api ctxt = context.get_admin_context() node = limits = None if send_node: node = NODENAME limits = {} @mock.patch( 'nova.compute.manager.ComputeManager._get_request_group_mapping', return_value=mock.sentinel.mapping) @mock.patch( 'nova.compute.utils.' 
'update_pci_request_spec_with_allocated_interface_name') @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch('nova.compute.utils.notify_about_instance_rebuild') @mock.patch.object(network_api, 'setup_networks_on_host') @mock.patch.object(network_api, 'setup_instance_network_on_host') @mock.patch('nova.context.RequestContext.elevated', return_value=ctxt) def _test_rebuild(mock_context, mock_setup_instance_network_on_host, mock_setup_networks_on_host, mock_notify_rebuild, mock_notify_action, mock_update_pci_req, mock_get_mapping, vm_is_stopped=False): orig_image_ref = None image_ref = None injected_files = None bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( self.context, self.inst.uuid) self.compute.rebuild_instance( ctxt, self.inst, orig_image_ref, image_ref, injected_files, 'newpass', {}, bdms, recreate=True, on_shared_storage=on_shared_storage, migration=migration, preserve_ephemeral=False, scheduled_node=node, limits=limits, request_spec=None) if vm_states_is_stopped: mock_notify_rebuild.assert_has_calls([ mock.call(ctxt, self.inst, self.inst.host, phase='start', bdms=bdms), mock.call(ctxt, self.inst, self.inst.host, phase='end', bdms=bdms)]) mock_notify_action.assert_has_calls([ mock.call(ctxt, self.inst, self.inst.host, action='power_off', phase='start'), mock.call(ctxt, self.inst, self.inst.host, action='power_off', phase='end')]) else: mock_notify_rebuild.assert_has_calls([ mock.call(ctxt, self.inst, self.inst.host, phase='start', bdms=bdms), mock.call(ctxt, self.inst, self.inst.host, phase='end', bdms=bdms)]) mock_setup_networks_on_host.assert_called_once_with( ctxt, self.inst, self.inst.host) mock_setup_instance_network_on_host.assert_called_once_with( ctxt, self.inst, self.inst.host, migration, provider_mappings=mock.sentinel.mapping) self.mock_get_allocations.assert_called_once_with(ctxt, self.inst.uuid) mock_update_pci_req.assert_called_once_with( ctxt, self.compute.reportclient, self.inst, mock.sentinel.mapping) _test_rebuild(vm_is_stopped=vm_states_is_stopped) def test_rebuild_on_host_updated_target(self): """Confirm evacuate scenario updates host and node.""" def fake_get_compute_info(context, host): self.assertTrue(context.is_admin) self.assertEqual('fake-mini', host) cn = objects.ComputeNode(hypervisor_hostname=NODENAME) return cn with test.nested( mock.patch.object(self.compute.driver, 'instance_on_disk', side_effect=lambda x: True), mock.patch.object(self.compute, '_get_compute_info', side_effect=fake_get_compute_info) ) as (mock_inst, mock_get): self._rebuild() # Should be on destination host instance = db.instance_get(self.context, self.inst.id) self.assertEqual(instance['host'], self.compute.host) self.assertEqual(NODENAME, instance['node']) self.assertTrue(mock_inst.called) self.assertTrue(mock_get.called) def test_rebuild_on_host_updated_target_node_not_found(self): """Confirm evacuate scenario where compute_node isn't found.""" def fake_get_compute_info(context, host): raise exception.ComputeHostNotFound(host=host) with test.nested( mock.patch.object(self.compute.driver, 'instance_on_disk', side_effect=lambda x: True), mock.patch.object(self.compute, '_get_compute_info', side_effect=fake_get_compute_info) ) as (mock_inst, mock_get): self._rebuild() # Should be on destination host instance = db.instance_get(self.context, self.inst.id) self.assertEqual(instance['host'], self.compute.host) self.assertIsNone(instance['node']) self.assertTrue(mock_inst.called) self.assertTrue(mock_get.called) def test_rebuild_on_host_node_passed(self): 
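        """Confirm the evacuate scenario when a scheduled node is passed in:
        the compute manager should not look the node up via
        _get_compute_info and the instance should end up on the passed node.
        """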
patch_get_info = mock.patch.object(self.compute, '_get_compute_info') patch_on_disk = mock.patch.object( self.compute.driver, 'instance_on_disk', return_value=True) with patch_get_info as get_compute_info, patch_on_disk: self._rebuild(send_node=True) self.assertEqual(0, get_compute_info.call_count) # Should be on destination host and node set to what was passed in instance = db.instance_get(self.context, self.inst.id) self.assertEqual(instance['host'], self.compute.host) self.assertEqual(instance['node'], NODENAME) def test_rebuild_with_instance_in_stopped_state(self): """Confirm evacuate scenario updates vm_state to stopped if instance is in stopped state """ # Initialize the VM to stopped state db.instance_update(self.context, self.inst.uuid, {"vm_state": vm_states.STOPPED}) self.inst.vm_state = vm_states.STOPPED self.stub_out('nova.virt.fake.FakeDriver.instance_on_disk', lambda *a, **ka: True) self._rebuild(vm_states_is_stopped=True) # Check the vm state is reset to stopped instance = db.instance_get(self.context, self.inst.id) self.assertEqual(instance['vm_state'], vm_states.STOPPED) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'remove_provider_tree_from_instance_allocation') def test_rebuild_with_wrong_shared_storage(self, mock_remove_allocs): """Confirm evacuate scenario does not update host.""" with mock.patch.object(self.compute.driver, 'instance_on_disk', side_effect=lambda x: True) as mock_inst: self.assertRaises(exception.InvalidSharedStorage, lambda: self._rebuild(on_shared_storage=False)) # Should remain on original host instance = db.instance_get(self.context, self.inst.id) self.assertEqual(instance['host'], 'fake_host_2') self.assertTrue(mock_inst.called) mock_remove_allocs.assert_called_once_with( mock.ANY, instance.uuid, self.rt.compute_nodes[NODENAME].uuid) @mock.patch.object(cinder.API, 'detach') @mock.patch.object(compute_manager.ComputeManager, '_prep_block_device') @mock.patch.object(driver_bdm_volume, 'driver_detach') def test_rebuild_on_remote_host_with_volumes(self, mock_drv_detach, mock_prep, mock_detach): """Confirm that the evacuate scenario does not attempt a driver detach when rebuilding an instance with volumes on a remote host """ values = {'instance_uuid': self.inst.uuid, 'source_type': 'volume', 'device_name': '/dev/vdc', 'delete_on_termination': False, 'volume_id': uuids.volume_id, 'connection_info': '{}'} db.block_device_mapping_create(self.context, values) def fake_volume_get(self, context, volume, microversion=None): return {'id': uuids.volume} self.stub_out("nova.volume.cinder.API.get", fake_volume_get) # Stub out and record whether it gets detached result = {"detached": False} def fake_detach(context, volume, instance_uuid, attachment_id): result["detached"] = volume == uuids.volume mock_detach.side_effect = fake_detach def fake_terminate_connection(self, context, volume, connector): return {} self.stub_out("nova.volume.cinder.API.terminate_connection", fake_terminate_connection) self.stub_out('nova.virt.fake.FakeDriver.instance_on_disk', lambda *a, **ka: True) self._rebuild() # cleanup bdms = db.block_device_mapping_get_all_by_instance(self.context, self.inst.uuid) if not bdms: self.fail('BDM entry for the attached volume is missing') for bdm in bdms: db.block_device_mapping_destroy(self.context, bdm['id']) self.assertFalse(mock_drv_detach.called) # make sure volumes attach, detach are called mock_detach.assert_called_once_with( test.MatchType(context.RequestContext), mock.ANY, mock.ANY, None) mock_prep.assert_called_once_with( 
test.MatchType(context.RequestContext), test.MatchType(objects.Instance), mock.ANY) @mock.patch.object(fake.FakeDriver, 'spawn') def test_rebuild_on_host_with_shared_storage(self, mock_spawn): """Confirm evacuate scenario on shared storage.""" self.stub_out('nova.virt.fake.FakeDriver.instance_on_disk', lambda *a, **ka: True) self._rebuild() mock_spawn.assert_called_once_with( test.MatchType(context.RequestContext), test.MatchType(objects.Instance), test.MatchType(objects.ImageMeta), mock.ANY, 'newpass', mock.ANY, network_info=mock.ANY, block_device_info=mock.ANY) @mock.patch.object(fake.FakeDriver, 'spawn') def test_rebuild_on_host_without_shared_storage(self, mock_spawn): """Confirm evacuate scenario without shared storage (rebuild from image) """ self.stub_out('nova.virt.fake.FakeDriver.instance_on_disk', lambda *a, **ka: False) self._rebuild(on_shared_storage=False) mock_spawn.assert_called_once_with( test.MatchType(context.RequestContext), test.MatchType(objects.Instance), test.MatchType(objects.ImageMeta), mock.ANY, 'newpass', mock.ANY, network_info=mock.ANY, block_device_info=mock.ANY) def test_rebuild_on_host_instance_exists(self): """Rebuild if instance exists raises an exception.""" db.instance_update(self.context, self.inst.uuid, {"task_state": task_states.SCHEDULING}) self.compute.build_and_run_instance(self.context, self.inst, {}, {}, {}, block_device_mapping=[]) self.stub_out('nova.virt.fake.FakeDriver.instance_on_disk', lambda *a, **kw: True) patch_remove_allocs = mock.patch( 'nova.scheduler.client.report.SchedulerReportClient.' 'remove_provider_tree_from_instance_allocation') with patch_remove_allocs: self.assertRaises(exception.InstanceExists, lambda: self._rebuild(on_shared_storage=True)) def test_driver_does_not_support_recreate(self): patch_remove_allocs = mock.patch( 'nova.scheduler.client.report.SchedulerReportClient.' 
'remove_provider_tree_from_instance_allocation') with mock.patch.dict(self.compute.driver.capabilities, supports_evacuate=False), patch_remove_allocs: self.stub_out('nova.virt.fake.FakeDriver.instance_on_disk', lambda *a, **kw: True) self.assertRaises(exception.InstanceEvacuateNotSupported, lambda: self._rebuild(on_shared_storage=True)) @mock.patch.object(fake.FakeDriver, 'spawn') def test_on_shared_storage_not_provided_host_without_shared_storage(self, mock_spawn): self.stub_out('nova.virt.fake.FakeDriver.instance_on_disk', lambda *a, **ka: False) self._rebuild(on_shared_storage=None) # 'spawn' should be called with the image_meta from the instance mock_spawn.assert_called_once_with( test.MatchType(context.RequestContext), test.MatchType(objects.Instance), test.MatchObjPrims(self.inst.image_meta), mock.ANY, 'newpass', mock.ANY, network_info=mock.ANY, block_device_info=mock.ANY) @mock.patch.object(fake.FakeDriver, 'spawn') def test_on_shared_storage_not_provided_host_with_shared_storage(self, mock_spawn): self.stub_out('nova.virt.fake.FakeDriver.instance_on_disk', lambda *a, **ka: True) self._rebuild(on_shared_storage=None) mock_spawn.assert_called_once_with( test.MatchType(context.RequestContext), test.MatchType(objects.Instance), test.MatchType(objects.ImageMeta), mock.ANY, 'newpass', mock.ANY, network_info=mock.ANY, block_device_info=mock.ANY) def test_rebuild_migration_passed_in(self): migration = mock.Mock(spec=objects.Migration) patch_spawn = mock.patch.object(self.compute.driver, 'spawn') patch_on_disk = mock.patch.object( self.compute.driver, 'instance_on_disk', return_value=True) self.compute.rt.rebuild_claim = mock.MagicMock() with patch_spawn, patch_on_disk: self._rebuild(migration=migration) self.assertTrue(self.compute.rt.rebuild_claim.called) self.assertEqual('done', migration.status) migration.save.assert_called_once_with() def test_rebuild_migration_node_passed_in(self): patch_spawn = mock.patch.object(self.compute.driver, 'spawn') patch_on_disk = mock.patch.object( self.compute.driver, 'instance_on_disk', return_value=True) with patch_spawn, patch_on_disk: self._rebuild(send_node=True) migrations = objects.MigrationList.get_in_progress_by_host_and_node( self.context, self.compute.host, NODENAME) self.assertEqual(1, len(migrations)) migration = migrations[0] self.assertEqual("evacuation", migration.migration_type) self.assertEqual("pre-migrating", migration.status) def test_rebuild_migration_claim_fails(self): migration = mock.Mock(spec=objects.Migration) patch_spawn = mock.patch.object(self.compute.driver, 'spawn') patch_on_disk = mock.patch.object( self.compute.driver, 'instance_on_disk', return_value=True) patch_claim = mock.patch.object( self.compute.rt, 'rebuild_claim', side_effect=exception.ComputeResourcesUnavailable(reason="boom")) patch_remove_allocs = mock.patch( 'nova.scheduler.client.report.SchedulerReportClient.' 
'remove_provider_tree_from_instance_allocation') with patch_spawn, patch_on_disk, patch_claim, patch_remove_allocs: self.assertRaises(exception.BuildAbortException, self._rebuild, migration=migration, send_node=True) self.assertEqual("failed", migration.status) migration.save.assert_called_once_with() def test_rebuild_fails_migration_failed(self): migration = mock.Mock(spec=objects.Migration) patch_spawn = mock.patch.object(self.compute.driver, 'spawn') patch_on_disk = mock.patch.object( self.compute.driver, 'instance_on_disk', return_value=True) patch_claim = mock.patch.object(self.compute.rt, 'rebuild_claim') patch_rebuild = mock.patch.object( self.compute, '_do_rebuild_instance_with_claim', side_effect=test.TestingException()) patch_remove_allocs = mock.patch( 'nova.scheduler.client.report.SchedulerReportClient.' 'remove_provider_tree_from_instance_allocation') with patch_spawn, patch_on_disk, patch_claim, patch_rebuild, \ patch_remove_allocs: self.assertRaises(test.TestingException, self._rebuild, migration=migration, send_node=True) self.assertEqual("failed", migration.status) migration.save.assert_called_once_with() def test_rebuild_numa_migration_context_honoured(self): numa_topology = test_instance_numa.get_fake_obj_numa_topology( self.context) # NOTE(ndipanov): Make sure that we pass the topology from the context def fake_spawn(context, instance, image_meta, injected_files, admin_password, allocations, network_info=None, block_device_info=None): self.assertIsNone(instance.numa_topology) self.inst.numa_topology = numa_topology patch_spawn = mock.patch.object(self.compute.driver, 'spawn', side_effect=fake_spawn) patch_on_disk = mock.patch.object( self.compute.driver, 'instance_on_disk', return_value=True) with patch_spawn, patch_on_disk: self._rebuild(send_node=True) self.assertIsNone(self.inst.numa_topology) self.assertIsNone(self.inst.migration_context) class ComputeInjectedFilesTestCase(BaseTestCase): # Test that running instances with injected_files decodes files correctly def setUp(self): super(ComputeInjectedFilesTestCase, self).setUp() self.instance = self._create_fake_instance_obj() self.stub_out('nova.virt.fake.FakeDriver.spawn', self._spawn) self.useFixture(fixtures.SpawnIsSynchronousFixture()) def _spawn(self, context, instance, image_meta, injected_files, admin_password, allocations, nw_info, block_device_info, db_api=None): self.assertEqual(self.expected, injected_files) self.assertEqual(self.mock_get_allocations.return_value, allocations) self.mock_get_allocations.assert_called_once_with(instance.uuid) def _test(self, injected_files, decoded_files): self.expected = decoded_files self.compute.build_and_run_instance(self.context, self.instance, {}, {}, {}, block_device_mapping=[], injected_files=injected_files) def test_injected_none(self): # test an input of None for injected_files self._test(None, []) def test_injected_empty(self): # test an input of [] for injected_files self._test([], []) def test_injected_success(self): # test with valid b64 encoded content. 
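        # For example, base64.encode_as_bytes(b'foobarbaz') yields
        # b'Zm9vYmFyYmF6'; the manager is expected to decode that back to
        # 'foobarbaz' before the file content reaches the driver.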
injected_files = [ ('/a/b/c', base64.encode_as_bytes(b'foobarbaz')), ('/d/e/f', base64.encode_as_bytes(b'seespotrun')), ] decoded_files = [ ('/a/b/c', 'foobarbaz'), ('/d/e/f', 'seespotrun'), ] self._test(injected_files, decoded_files) def test_injected_invalid(self): # test with invalid b64 encoded content injected_files = [ ('/a/b/c', base64.encode_as_bytes(b'foobarbaz')), ('/d/e/f', 'seespotrun'), ] self.assertRaises(exception.Base64Exception, self.compute.build_and_run_instance, self.context, self.instance, {}, {}, {}, block_device_mapping=[], injected_files=injected_files) class CheckConfigDriveTestCase(test.NoDBTestCase): # NOTE(sirp): `TestCase` is far too heavyweight for this test, this should # probably derive from a `test.FastTestCase` that omits DB and env # handling def setUp(self): super(CheckConfigDriveTestCase, self).setUp() self.compute_api = compute.API() def _assertCheck(self, expected, config_drive): self.assertEqual(expected, self.compute_api._check_config_drive(config_drive)) def _assertInvalid(self, config_drive): self.assertRaises(exception.ConfigDriveInvalidValue, self.compute_api._check_config_drive, config_drive) def test_config_drive_false_values(self): self._assertCheck('', None) self._assertCheck('', '') self._assertCheck('', 'False') self._assertCheck('', 'f') self._assertCheck('', '0') def test_config_drive_true_values(self): self._assertCheck(True, 'True') self._assertCheck(True, 't') self._assertCheck(True, '1') def test_config_drive_bogus_values_raise(self): self._assertInvalid('asd') self._assertInvalid(uuids.fake) class CheckRequestedImageTestCase(test.TestCase): def setUp(self): super(CheckRequestedImageTestCase, self).setUp() self.compute_api = compute.API() self.context = context.RequestContext( 'fake_user_id', 'fake_project_id') self.instance_type = objects.Flavor.get_by_name(self.context, 'm1.small') self.instance_type['memory_mb'] = 64 self.instance_type['root_gb'] = 1 def test_no_image_specified(self): self.compute_api._validate_flavor_image(self.context, None, None, self.instance_type, None) def test_image_status_must_be_active(self): image = dict(id=uuids.image_id, status='foo') self.assertRaises(exception.ImageNotActive, self.compute_api._validate_flavor_image, self.context, image['id'], image, self.instance_type, None) image['status'] = 'active' self.compute_api._validate_flavor_image(self.context, image['id'], image, self.instance_type, None) def test_image_min_ram_check(self): image = dict(id=uuids.image_id, status='active', min_ram='65') self.assertRaises(exception.FlavorMemoryTooSmall, self.compute_api._validate_flavor_image, self.context, image['id'], image, self.instance_type, None) image['min_ram'] = '64' self.compute_api._validate_flavor_image(self.context, image['id'], image, self.instance_type, None) def test_image_min_disk_check(self): image = dict(id=uuids.image_id, status='active', min_disk='2') self.assertRaises(exception.FlavorDiskSmallerThanMinDisk, self.compute_api._validate_flavor_image, self.context, image['id'], image, self.instance_type, None) image['min_disk'] = '1' self.compute_api._validate_flavor_image(self.context, image['id'], image, self.instance_type, None) def test_image_too_large(self): image = dict(id=uuids.image_id, status='active', size='1073741825') self.assertRaises(exception.FlavorDiskSmallerThanImage, self.compute_api._validate_flavor_image, self.context, image['id'], image, self.instance_type, None) image['size'] = '1073741824' self.compute_api._validate_flavor_image(self.context, image['id'], image, 
self.instance_type, None) def test_root_gb_zero_disables_size_check(self): self.policy.set_rules({ servers_policy.ZERO_DISK_FLAVOR: servers_policy.RULE_AOO }, overwrite=False) self.instance_type['root_gb'] = 0 image = dict(id=uuids.image_id, status='active', size='1073741825') self.compute_api._validate_flavor_image(self.context, image['id'], image, self.instance_type, None) def test_root_gb_zero_disables_min_disk(self): self.policy.set_rules({ servers_policy.ZERO_DISK_FLAVOR: servers_policy.RULE_AOO }, overwrite=False) self.instance_type['root_gb'] = 0 image = dict(id=uuids.image_id, status='active', min_disk='2') self.compute_api._validate_flavor_image(self.context, image['id'], image, self.instance_type, None) def test_config_drive_option(self): image = {'id': uuids.image_id, 'status': 'active'} image['properties'] = {'img_config_drive': 'optional'} self.compute_api._validate_flavor_image(self.context, image['id'], image, self.instance_type, None) image['properties'] = {'img_config_drive': 'mandatory'} self.compute_api._validate_flavor_image(self.context, image['id'], image, self.instance_type, None) image['properties'] = {'img_config_drive': 'bar'} self.assertRaises(exception.InvalidImageConfigDrive, self.compute_api._validate_flavor_image, self.context, image['id'], image, self.instance_type, None) def test_volume_blockdevicemapping(self): # We should allow a root volume which is larger than the flavor root # disk. # We should allow a root volume created from an image whose min_disk is # larger than the flavor root disk. image_uuid = uuids.fake image = dict(id=image_uuid, status='active', size=self.instance_type.root_gb * units.Gi, min_disk=self.instance_type.root_gb + 1) volume_uuid = uuids.fake_2 root_bdm = block_device_obj.BlockDeviceMapping( source_type='volume', destination_type='volume', volume_id=volume_uuid, volume_size=self.instance_type.root_gb + 1) self.compute_api._validate_flavor_image(self.context, image['id'], image, self.instance_type, root_bdm) def test_volume_blockdevicemapping_min_disk(self): # A bdm object volume smaller than the image's min_disk should not be # allowed image_uuid = uuids.fake image = dict(id=image_uuid, status='active', size=self.instance_type.root_gb * units.Gi, min_disk=self.instance_type.root_gb + 1) volume_uuid = uuids.fake_2 root_bdm = block_device_obj.BlockDeviceMapping( source_type='image', destination_type='volume', image_id=image_uuid, volume_id=volume_uuid, volume_size=self.instance_type.root_gb) self.assertRaises(exception.VolumeSmallerThanMinDisk, self.compute_api._validate_flavor_image, self.context, image_uuid, image, self.instance_type, root_bdm) def test_volume_blockdevicemapping_min_disk_no_size(self): # We should allow a root volume whose size is not given image_uuid = uuids.fake image = dict(id=image_uuid, status='active', size=self.instance_type.root_gb * units.Gi, min_disk=self.instance_type.root_gb) volume_uuid = uuids.fake_2 root_bdm = block_device_obj.BlockDeviceMapping( source_type='volume', destination_type='volume', volume_id=volume_uuid, volume_size=None) self.compute_api._validate_flavor_image(self.context, image['id'], image, self.instance_type, root_bdm) def test_image_blockdevicemapping(self): # Test that we can succeed when passing bdms, and the root bdm isn't a # volume image_uuid = uuids.fake image = dict(id=image_uuid, status='active', size=self.instance_type.root_gb * units.Gi, min_disk=0) root_bdm = block_device_obj.BlockDeviceMapping( source_type='image', destination_type='local', image_id=image_uuid) 
self.compute_api._validate_flavor_image(self.context, image['id'], image, self.instance_type, root_bdm) def test_image_blockdevicemapping_too_big(self): # We should do a size check against flavor if we were passed bdms but # the root bdm isn't a volume image_uuid = uuids.fake image = dict(id=image_uuid, status='active', size=(self.instance_type.root_gb + 1) * units.Gi, min_disk=0) root_bdm = block_device_obj.BlockDeviceMapping( source_type='image', destination_type='local', image_id=image_uuid) self.assertRaises(exception.FlavorDiskSmallerThanImage, self.compute_api._validate_flavor_image, self.context, image['id'], image, self.instance_type, root_bdm) def test_image_blockdevicemapping_min_disk(self): # We should do a min_disk check against flavor if we were passed bdms # but the root bdm isn't a volume image_uuid = uuids.fake image = dict(id=image_uuid, status='active', size=0, min_disk=self.instance_type.root_gb + 1) root_bdm = block_device_obj.BlockDeviceMapping( source_type='image', destination_type='local', image_id=image_uuid) self.assertRaises(exception.FlavorDiskSmallerThanMinDisk, self.compute_api._validate_flavor_image, self.context, image['id'], image, self.instance_type, root_bdm) def test_cpu_policy(self): image = {'id': uuids.image_id, 'status': 'active'} for v in obj_fields.CPUAllocationPolicy.ALL: image['properties'] = {'hw_cpu_policy': v} self.compute_api._validate_flavor_image( self.context, image['id'], image, self.instance_type, None) image['properties'] = {'hw_cpu_policy': 'bar'} self.assertRaises(exception.InvalidRequest, self.compute_api._validate_flavor_image, self.context, image['id'], image, self.instance_type, None) def test_cpu_thread_policy(self): image = {'id': uuids.image_id, 'status': 'active'} image['properties'] = { 'hw_cpu_policy': obj_fields.CPUAllocationPolicy.DEDICATED} for v in obj_fields.CPUThreadAllocationPolicy.ALL: image['properties']['hw_cpu_thread_policy'] = v self.compute_api._validate_flavor_image( self.context, image['id'], image, self.instance_type, None) image['properties']['hw_cpu_thread_policy'] = 'bar' self.assertRaises(exception.InvalidRequest, self.compute_api._validate_flavor_image, self.context, image['id'], image, self.instance_type, None) image['properties'] = { 'hw_cpu_policy': obj_fields.CPUAllocationPolicy.SHARED, 'hw_cpu_thread_policy': obj_fields.CPUThreadAllocationPolicy.ISOLATE} self.assertRaises(exception.CPUThreadPolicyConfigurationInvalid, self.compute_api._validate_flavor_image, self.context, image['id'], image, self.instance_type, None) class ComputeHooksTestCase(test.BaseHookTestCase): def test_delete_instance_has_hook(self): delete_func = compute_manager.ComputeManager._delete_instance self.assert_has_hook('delete_instance', delete_func) def test_create_instance_has_hook(self): create_func = compute.API.create self.assert_has_hook('create_instance', create_func) def test_build_instance_has_hook(self): build_instance_func = (compute_manager.ComputeManager. _do_build_and_run_instance) self.assert_has_hook('build_instance', build_instance_func)
nova-21.2.4/nova/tests/unit/compute/test_compute_api.py
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Unit tests for compute API.""" import contextlib import datetime import ddt import fixtures import iso8601 import mock import os_traits as ot from oslo_messaging import exceptions as oslo_exceptions from oslo_serialization import jsonutils from oslo_utils import fixture as utils_fixture from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from oslo_utils import uuidutils import six from nova.compute import api as compute_api from nova.compute import flavors from nova.compute import instance_actions from nova.compute import power_state from nova.compute import rpcapi as compute_rpcapi from nova.compute import task_states from nova.compute import utils as compute_utils from nova.compute import vm_states from nova import conductor import nova.conf from nova import context from nova.db import api as db from nova import exception from nova.image import glance as image_api from nova.network import constants from nova.network import model from nova.network import neutron as neutron_api from nova import objects from nova.objects import base as obj_base from nova.objects import block_device as block_device_obj from nova.objects import fields as fields_obj from nova.objects import image_meta as image_meta_obj from nova.objects import quotas as quotas_obj from nova.objects import security_group as secgroup_obj from nova.servicegroup import api as servicegroup_api from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.unit import fake_block_device from nova.tests.unit import fake_build_request from nova.tests.unit import fake_instance from nova.tests.unit import fake_request_spec from nova.tests.unit import fake_volume from nova.tests.unit.image import fake as fake_image from nova.tests.unit import matchers from nova.tests.unit.objects import test_flavor from nova.tests.unit.objects import test_migration from nova import utils from nova.volume import cinder CONF = nova.conf.CONF FAKE_IMAGE_REF = 'fake-image-ref' NODENAME = 'fakenode1' SHELVED_IMAGE = 'fake-shelved-image' SHELVED_IMAGE_NOT_FOUND = 'fake-shelved-image-notfound' SHELVED_IMAGE_NOT_AUTHORIZED = 'fake-shelved-image-not-authorized' SHELVED_IMAGE_EXCEPTION = 'fake-shelved-image-exception' @ddt.ddt class _ComputeAPIUnitTestMixIn(object): def setUp(self): super(_ComputeAPIUnitTestMixIn, self).setUp() self.user_id = 'fake' self.project_id = 'fake' self.compute_api = compute_api.API() self.context = context.RequestContext(self.user_id, self.project_id) def _get_vm_states(self, exclude_states=None): vm_state = set([vm_states.ACTIVE, vm_states.BUILDING, vm_states.PAUSED, vm_states.SUSPENDED, vm_states.RESCUED, vm_states.STOPPED, vm_states.RESIZED, vm_states.SOFT_DELETED, vm_states.DELETED, vm_states.ERROR, vm_states.SHELVED, vm_states.SHELVED_OFFLOADED]) if not exclude_states: exclude_states = set() return vm_state - exclude_states def _create_flavor(self, **updates): flavor = {'id': 1, 'flavorid': 1, 'name': 'm1.tiny', 'memory_mb': 512, 'vcpus': 1, 'vcpu_weight': None, 'root_gb': 1, 'ephemeral_gb': 0, 'rxtx_factor': 1, 'swap': 0, 'deleted': 0, 'disabled': False, 'is_public': True, 
'deleted_at': None, 'created_at': datetime.datetime(2012, 1, 19, 18, 49, 30, 877329), 'updated_at': None, 'description': None } if updates: flavor.update(updates) expected_attrs = None if 'extra_specs' in updates and updates['extra_specs']: expected_attrs = ['extra_specs'] return objects.Flavor._from_db_object( self.context, objects.Flavor(extra_specs={}), flavor, expected_attrs=expected_attrs) def _create_instance_obj(self, params=None, flavor=None): """Create a test instance.""" if not params: params = {} if flavor is None: flavor = self._create_flavor() now = timeutils.utcnow() instance = objects.Instance() instance.metadata = {} instance.metadata.update(params.pop('metadata', {})) instance.system_metadata = params.pop('system_metadata', {}) instance._context = self.context instance.id = 1 instance.uuid = uuidutils.generate_uuid() instance.vm_state = vm_states.ACTIVE instance.task_state = None instance.image_ref = FAKE_IMAGE_REF instance.reservation_id = 'r-fakeres' instance.user_id = self.user_id instance.project_id = self.project_id instance.host = 'fake_host' instance.node = NODENAME instance.instance_type_id = flavor.id instance.ami_launch_index = 0 instance.memory_mb = 0 instance.vcpus = 0 instance.root_gb = 0 instance.ephemeral_gb = 0 instance.architecture = fields_obj.Architecture.X86_64 instance.os_type = 'Linux' instance.locked = False instance.created_at = now instance.updated_at = now instance.launched_at = now instance.disable_terminate = False instance.info_cache = objects.InstanceInfoCache() instance.flavor = flavor instance.old_flavor = instance.new_flavor = None instance.numa_topology = None if params: instance.update(params) instance.obj_reset_changes() return instance def _create_keypair_obj(self, instance): """Create a test keypair.""" keypair = objects.KeyPair() keypair.id = 1 keypair.name = 'fake_key' keypair.user_id = instance.user_id keypair.fingerprint = 'fake' keypair.public_key = 'fake key' keypair.type = 'ssh' return keypair def _obj_to_list_obj(self, list_obj, obj): list_obj.objects = [] list_obj.objects.append(obj) list_obj._context = self.context list_obj.obj_reset_changes() return list_obj @mock.patch('nova.objects.Quotas.check_deltas') @mock.patch('nova.conductor.conductor_api.ComputeTaskAPI.build_instances') @mock.patch('nova.compute.api.API._record_action_start') @mock.patch('nova.compute.api.API._check_requested_networks') @mock.patch('nova.objects.Quotas.limit_check') @mock.patch('nova.compute.api.API._get_image') @mock.patch('nova.compute.api.API._provision_instances') def test_create_with_networks_max_count_none(self, provision_instances, get_image, check_limit, check_requested_networks, record_action_start, build_instances, check_deltas): # Make sure max_count is checked for None, as Python3 doesn't allow # comparison between NoneType and Integer, something that's allowed in # Python 2. 
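        # For example, ``None > 1`` raises TypeError under Python 3, while
        # Python 2 quietly ordered None below every integer, so a
        # max_count of None must be handled explicitly before any count
        # comparison is made.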
provision_instances.return_value = [] get_image.return_value = (None, {}) check_requested_networks.return_value = 1 instance_type = self._create_flavor() port = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' address = '10.0.0.1' requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(address=address, port_id=port)]) with mock.patch.object(self.compute_api.network_api, 'create_resource_requests', return_value=(None, [])): self.compute_api.create(self.context, instance_type, 'image_id', requested_networks=requested_networks, max_count=None) @mock.patch('nova.objects.Quotas.get_all_by_project_and_user', new=mock.MagicMock()) @mock.patch('nova.objects.Quotas.count_as_dict') @mock.patch('nova.objects.Quotas.limit_check') @mock.patch('nova.objects.Quotas.limit_check_project_and_user') def test_create_quota_exceeded_messages(self, mock_limit_check_pu, mock_limit_check, mock_count): image_href = "image_href" image_id = 0 instance_type = self._create_flavor() quotas = {'instances': 1, 'cores': 1, 'ram': 1} quota_exception = exception.OverQuota(quotas=quotas, usages={'instances': 1, 'cores': 1, 'ram': 1}, overs=['instances']) proj_count = {'instances': 1, 'cores': 1, 'ram': 1} user_count = proj_count.copy() mock_count.return_value = {'project': proj_count, 'user': user_count} # instances/cores/ram quota mock_limit_check_pu.side_effect = quota_exception # We don't care about network validation in this test. self.compute_api.network_api.validate_networks = ( mock.Mock(return_value=40)) with mock.patch.object(self.compute_api, '_get_image', return_value=(image_id, {})) as mock_get_image: for min_count, message in [(20, '20-40'), (40, '40')]: try: self.compute_api.create(self.context, instance_type, "image_href", min_count=min_count, max_count=40) except exception.TooManyInstances as e: self.assertEqual(message, e.kwargs['req']) else: self.fail("Exception not raised") mock_get_image.assert_called_with(self.context, image_href) self.assertEqual(2, mock_get_image.call_count) self.assertEqual(2, mock_limit_check_pu.call_count) @mock.patch('nova.objects.Quotas.limit_check') def test_create_volume_backed_instance_with_trusted_certs(self, check_limit): # Creating an instance with no image_ref specified will result in # creating a volume-backed instance self.assertRaises(exception.CertificateValidationFailed, self.compute_api.create, self.context, instance_type=self._create_flavor(), image_href=None, trusted_certs=['test-cert-1', 'test-cert-2']) @mock.patch('nova.objects.Quotas.limit_check') def test_create_volume_backed_instance_with_conf_trusted_certs( self, check_limit): self.flags(verify_glance_signatures=True, group='glance') self.flags(enable_certificate_validation=True, group='glance') self.flags(default_trusted_certificate_ids=['certs'], group='glance') # Creating an instance with no image_ref specified will result in # creating a volume-backed instance self.assertRaises(exception.CertificateValidationFailed, self.compute_api.create, self.context, instance_type=self._create_flavor(), image_href=None) def _test_create_max_net_count(self, max_net_count, min_count, max_count): with test.nested( mock.patch.object(self.compute_api, '_get_image', return_value=(None, {})), mock.patch.object(self.compute_api, '_check_auto_disk_config'), mock.patch.object(self.compute_api, '_validate_and_build_base_options', return_value=({}, max_net_count, None, ['default'], None)) ) as ( get_image, check_auto_disk_config, validate_and_build_base_options ): self.assertRaises(exception.PortLimitExceeded, 
self.compute_api.create, self.context, 'fake_flavor', 'image_id', min_count=min_count, max_count=max_count) def test_max_net_count_zero(self): # Test when max_net_count is zero. max_net_count = 0 min_count = 2 max_count = 3 self._test_create_max_net_count(max_net_count, min_count, max_count) def test_max_net_count_less_than_min_count(self): # Test when max_net_count is nonzero but less than min_count. max_net_count = 1 min_count = 2 max_count = 3 self._test_create_max_net_count(max_net_count, min_count, max_count) def test_specified_port_and_multiple_instances(self): # Tests that if port is specified there is only one instance booting # (i.e max_count == 1) as we can't share the same port across multiple # instances. port = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' address = '10.0.0.1' min_count = 1 max_count = 2 requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(address=address, port_id=port)]) self.assertRaises(exception.MultiplePortsNotApplicable, self.compute_api.create, self.context, 'fake_flavor', 'image_id', min_count=min_count, max_count=max_count, requested_networks=requested_networks) def _test_specified_ip_and_multiple_instances_helper(self, requested_networks): # Tests that if ip is specified there is only one instance booting # (i.e max_count == 1) min_count = 1 max_count = 2 self.assertRaises(exception.InvalidFixedIpAndMaxCountRequest, self.compute_api.create, self.context, "fake_flavor", 'image_id', min_count=min_count, max_count=max_count, requested_networks=requested_networks) def test_specified_ip_and_multiple_instances(self): network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' address = '10.0.0.1' requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id=network, address=address)]) self._test_specified_ip_and_multiple_instances_helper( requested_networks) @mock.patch('nova.objects.BlockDeviceMapping.save') @mock.patch.object(compute_rpcapi.ComputeAPI, 'reserve_block_device_name') def test_create_volume_bdm_call_reserve_dev_name(self, mock_reserve, mock_bdm_save): bdm = objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict( { 'id': 1, 'volume_id': 1, 'source_type': 'volume', 'destination_type': 'volume', 'device_name': 'vda', 'boot_index': 1, })) mock_reserve.return_value = bdm instance = self._create_instance_obj() volume = {'id': '1', 'multiattach': False} result = self.compute_api._create_volume_bdm(self.context, instance, 'vda', volume, None, None) self.assertTrue(mock_reserve.called) self.assertEqual(result, bdm) mock_bdm_save.assert_called_once_with() @mock.patch.object(objects.BlockDeviceMapping, 'create') def test_create_volume_bdm_local_creation(self, bdm_create): instance = self._create_instance_obj() volume_id = 'fake-vol-id' bdm = objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict( { 'instance_uuid': instance.uuid, 'volume_id': volume_id, 'source_type': 'volume', 'destination_type': 'volume', 'device_name': 'vda', 'boot_index': None, 'disk_bus': None, 'device_type': None })) result = self.compute_api._create_volume_bdm(self.context, instance, '/dev/vda', {'id': volume_id}, None, None, is_local_creation=True) self.assertEqual(result.instance_uuid, bdm.instance_uuid) self.assertIsNone(result.device_name) self.assertEqual(result.volume_id, bdm.volume_id) self.assertTrue(bdm_create.called) @mock.patch.object(compute_api.API, '_record_action_start') @mock.patch.object(compute_rpcapi.ComputeAPI, 'reserve_block_device_name') @mock.patch.object(objects.BlockDeviceMapping, 
'get_by_volume_and_instance') @mock.patch.object(compute_rpcapi.ComputeAPI, 'attach_volume') def test_attach_volume_new_flow(self, mock_attach, mock_bdm, mock_reserve, mock_record): mock_bdm.side_effect = exception.VolumeBDMNotFound( volume_id='fake-volume-id') instance = self._create_instance_obj() volume = fake_volume.fake_volume(1, 'test-vol', 'test-vol', None, None, None, None, None) fake_bdm = mock.MagicMock(spec=objects.BlockDeviceMapping) mock_reserve.return_value = fake_bdm mock_volume_api = mock.patch.object(self.compute_api, 'volume_api', mock.MagicMock(spec=cinder.API)) with mock_volume_api as mock_v_api: mock_v_api.get.return_value = volume mock_v_api.attachment_create.return_value = \ {'id': uuids.attachment_id} self.compute_api.attach_volume( self.context, instance, volume['id']) mock_v_api.check_availability_zone.assert_called_once_with( self.context, volume, instance=instance) mock_v_api.attachment_create.assert_called_once_with(self.context, volume['id'], instance.uuid) mock_attach.assert_called_once_with(self.context, instance, fake_bdm) @mock.patch.object(compute_api.API, '_record_action_start') @mock.patch.object(compute_rpcapi.ComputeAPI, 'reserve_block_device_name') @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') @mock.patch.object(compute_rpcapi.ComputeAPI, 'attach_volume') def test_tagged_volume_attach_new_flow(self, mock_attach, mock_bdm, mock_reserve, mock_record): mock_bdm.side_effect = exception.VolumeBDMNotFound( volume_id='fake-volume-id') instance = self._create_instance_obj() volume = fake_volume.fake_volume(1, 'test-vol', 'test-vol', None, None, None, None, None) fake_bdm = mock.MagicMock(spec=objects.BlockDeviceMapping) mock_reserve.return_value = fake_bdm mock_volume_api = mock.patch.object(self.compute_api, 'volume_api', mock.MagicMock(spec=cinder.API)) with mock_volume_api as mock_v_api: mock_v_api.get.return_value = volume mock_v_api.attachment_create.return_value = \ {'id': uuids.attachment_id} self.compute_api.attach_volume( self.context, instance, volume['id'], tag='foo') mock_reserve.assert_called_once_with(self.context, instance, None, volume['id'], device_type=None, disk_bus=None, tag='foo', multiattach=False) mock_v_api.check_availability_zone.assert_called_once_with( self.context, volume, instance=instance) mock_v_api.attachment_create.assert_called_once_with( self.context, volume['id'], instance.uuid) mock_attach.assert_called_once_with(self.context, instance, fake_bdm) @mock.patch.object(compute_rpcapi.ComputeAPI, 'reserve_block_device_name') @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') @mock.patch.object(compute_rpcapi.ComputeAPI, 'attach_volume') def test_attach_volume_attachment_create_fails(self, mock_attach, mock_bdm, mock_reserve): mock_bdm.side_effect = exception.VolumeBDMNotFound( volume_id='fake-volume-id') instance = self._create_instance_obj() volume = fake_volume.fake_volume(1, 'test-vol', 'test-vol', None, None, None, None, None) fake_bdm = mock.MagicMock(spec=objects.BlockDeviceMapping) mock_reserve.return_value = fake_bdm mock_volume_api = mock.patch.object(self.compute_api, 'volume_api', mock.MagicMock(spec=cinder.API)) with mock_volume_api as mock_v_api: mock_v_api.get.return_value = volume mock_v_api.attachment_create.side_effect = test.TestingException() self.assertRaises(test.TestingException, self.compute_api.attach_volume, self.context, instance, volume['id']) mock_v_api.check_availability_zone.assert_called_once_with( self.context, volume, instance=instance) 
mock_v_api.attachment_create.assert_called_once_with( self.context, volume['id'], instance.uuid) self.assertEqual(0, mock_attach.call_count) fake_bdm.destroy.assert_called_once_with() def test_suspend(self): # Ensure instance can be suspended. instance = self._create_instance_obj() self.assertEqual(instance.vm_state, vm_states.ACTIVE) self.assertIsNone(instance.task_state) rpcapi = self.compute_api.compute_rpcapi with test.nested( mock.patch.object(instance, 'save'), mock.patch.object(self.compute_api, '_record_action_start'), mock.patch.object(rpcapi, 'suspend_instance') ) as (mock_inst_save, mock_record_action, mock_suspend_instance): self.compute_api.suspend(self.context, instance) self.assertEqual(vm_states.ACTIVE, instance.vm_state) self.assertEqual(task_states.SUSPENDING, instance.task_state) mock_inst_save.assert_called_once_with(expected_task_state=[None]) mock_record_action.assert_called_once_with( self.context, instance, instance_actions.SUSPEND) mock_suspend_instance.assert_called_once_with(self.context, instance) def _test_suspend_fails(self, vm_state): params = dict(vm_state=vm_state) instance = self._create_instance_obj(params=params) self.assertIsNone(instance.task_state) self.assertRaises(exception.InstanceInvalidState, self.compute_api.suspend, self.context, instance) def test_suspend_fails_invalid_states(self): invalid_vm_states = self._get_vm_states(set([vm_states.ACTIVE])) for state in invalid_vm_states: self._test_suspend_fails(state) def test_resume(self): # Ensure instance can be resumed (if suspended). instance = self._create_instance_obj( params=dict(vm_state=vm_states.SUSPENDED)) self.assertEqual(instance.vm_state, vm_states.SUSPENDED) self.assertIsNone(instance.task_state) rpcapi = self.compute_api.compute_rpcapi with test.nested( mock.patch.object(instance, 'save'), mock.patch.object(self.compute_api, '_record_action_start'), mock.patch.object(rpcapi, 'resume_instance') ) as (mock_inst_save, mock_record_action, mock_resume_instance): self.compute_api.resume(self.context, instance) self.assertEqual(vm_states.SUSPENDED, instance.vm_state) self.assertEqual(task_states.RESUMING, instance.task_state) mock_inst_save.assert_called_once_with(expected_task_state=[None]) mock_record_action.assert_called_once_with( self.context, instance, instance_actions.RESUME) mock_resume_instance.assert_called_once_with(self.context, instance) def test_start(self): params = dict(vm_state=vm_states.STOPPED) instance = self._create_instance_obj(params=params) rpcapi = self.compute_api.compute_rpcapi with test.nested( mock.patch.object(instance, 'save'), mock.patch.object(self.compute_api, '_record_action_start'), mock.patch.object(rpcapi, 'start_instance') ) as (mock_inst_save, mock_record_action, mock_start_instance): self.compute_api.start(self.context, instance) self.assertEqual(task_states.POWERING_ON, instance.task_state) mock_inst_save.assert_called_once_with(expected_task_state=[None]) mock_record_action.assert_called_once_with( self.context, instance, instance_actions.START) mock_start_instance.assert_called_once_with(self.context, instance) def test_start_invalid_state(self): instance = self._create_instance_obj() self.assertEqual(instance.vm_state, vm_states.ACTIVE) self.assertRaises(exception.InstanceInvalidState, self.compute_api.start, self.context, instance) def test_start_no_host(self): params = dict(vm_state=vm_states.STOPPED, host='') instance = self._create_instance_obj(params=params) self.assertRaises(exception.InstanceNotReady, self.compute_api.start, self.context, 
instance) def _test_stop(self, vm_state, force=False, clean_shutdown=True): # Make sure 'progress' gets reset params = dict(task_state=None, progress=99, vm_state=vm_state) instance = self._create_instance_obj(params=params) rpcapi = self.compute_api.compute_rpcapi with test.nested( mock.patch.object(instance, 'save'), mock.patch.object(self.compute_api, '_record_action_start'), mock.patch.object(rpcapi, 'stop_instance') ) as (mock_inst_save, mock_record_action, mock_stop_instance): if force: self.compute_api.force_stop(self.context, instance, clean_shutdown=clean_shutdown) else: self.compute_api.stop(self.context, instance, clean_shutdown=clean_shutdown) self.assertEqual(task_states.POWERING_OFF, instance.task_state) self.assertEqual(0, instance.progress) mock_inst_save.assert_called_once_with(expected_task_state=[None]) mock_record_action.assert_called_once_with( self.context, instance, instance_actions.STOP) mock_stop_instance.assert_called_once_with( self.context, instance, do_cast=True, clean_shutdown=clean_shutdown) def test_stop(self): self._test_stop(vm_states.ACTIVE) def test_stop_stopped_instance_with_bypass(self): self._test_stop(vm_states.STOPPED, force=True) def test_stop_forced_shutdown(self): self._test_stop(vm_states.ACTIVE, force=True) def test_stop_without_clean_shutdown(self): self._test_stop(vm_states.ACTIVE, clean_shutdown=False) def test_stop_forced_without_clean_shutdown(self): self._test_stop(vm_states.ACTIVE, force=True, clean_shutdown=False) def _test_stop_invalid_state(self, vm_state): params = dict(vm_state=vm_state) instance = self._create_instance_obj(params=params) self.assertRaises(exception.InstanceInvalidState, self.compute_api.stop, self.context, instance) def test_stop_fails_invalid_states(self): invalid_vm_states = self._get_vm_states(set([vm_states.ACTIVE, vm_states.ERROR])) for state in invalid_vm_states: self._test_stop_invalid_state(state) def test_stop_a_stopped_inst(self): params = {'vm_state': vm_states.STOPPED} instance = self._create_instance_obj(params=params) self.assertRaises(exception.InstanceInvalidState, self.compute_api.stop, self.context, instance) def test_stop_no_host(self): params = {'host': ''} instance = self._create_instance_obj(params=params) self.assertRaises(exception.InstanceNotReady, self.compute_api.stop, self.context, instance) @mock.patch('nova.compute.api.API._record_action_start') @mock.patch('nova.compute.rpcapi.ComputeAPI.trigger_crash_dump') def test_trigger_crash_dump(self, trigger_crash_dump, _record_action_start): instance = self._create_instance_obj() self.compute_api.trigger_crash_dump(self.context, instance) _record_action_start.assert_called_once_with(self.context, instance, instance_actions.TRIGGER_CRASH_DUMP) trigger_crash_dump.assert_called_once_with(self.context, instance) self.assertIsNone(instance.task_state) def test_trigger_crash_dump_invalid_state(self): params = dict(vm_state=vm_states.STOPPED) instance = self._create_instance_obj(params) self.assertRaises(exception.InstanceInvalidState, self.compute_api.trigger_crash_dump, self.context, instance) def test_trigger_crash_dump_no_host(self): params = dict(host='') instance = self._create_instance_obj(params=params) self.assertRaises(exception.InstanceNotReady, self.compute_api.trigger_crash_dump, self.context, instance) def test_trigger_crash_dump_locked(self): params = dict(locked=True) instance = self._create_instance_obj(params=params) self.assertRaises(exception.InstanceIsLocked, self.compute_api.trigger_crash_dump, self.context, instance) def 
_test_reboot_type(self, vm_state, reboot_type, task_state=None): # Ensure instance can be soft rebooted. inst = self._create_instance_obj() inst.vm_state = vm_state inst.task_state = task_state expected_task_state = [None] if reboot_type == 'HARD': expected_task_state = task_states.ALLOW_REBOOT rpcapi = self.compute_api.compute_rpcapi with test.nested( mock.patch.object(self.context, 'elevated'), mock.patch.object(inst, 'save'), mock.patch.object(self.compute_api, '_record_action_start'), mock.patch.object(rpcapi, 'reboot_instance') ) as (mock_elevated, mock_inst_save, mock_record_action, mock_reboot_instance): self.compute_api.reboot(self.context, inst, reboot_type) mock_inst_save.assert_called_once_with( expected_task_state=expected_task_state) mock_record_action.assert_called_once_with( self.context, inst, instance_actions.REBOOT) mock_reboot_instance.assert_called_once_with( self.context, instance=inst, block_device_info=None, reboot_type=reboot_type) def _test_reboot_type_fails(self, reboot_type, **updates): inst = self._create_instance_obj() inst.update(updates) self.assertRaises(exception.InstanceInvalidState, self.compute_api.reboot, self.context, inst, reboot_type) def test_reboot_hard_active(self): self._test_reboot_type(vm_states.ACTIVE, 'HARD') def test_reboot_hard_error(self): self._test_reboot_type(vm_states.ERROR, 'HARD') def test_reboot_hard_rebooting(self): self._test_reboot_type(vm_states.ACTIVE, 'HARD', task_state=task_states.REBOOTING) def test_reboot_hard_reboot_started(self): self._test_reboot_type(vm_states.ACTIVE, 'HARD', task_state=task_states.REBOOT_STARTED) def test_reboot_hard_reboot_pending(self): self._test_reboot_type(vm_states.ACTIVE, 'HARD', task_state=task_states.REBOOT_PENDING) def test_reboot_hard_rescued(self): self._test_reboot_type_fails('HARD', vm_state=vm_states.RESCUED) def test_reboot_hard_resuming(self): self._test_reboot_type(vm_states.ACTIVE, 'HARD', task_state=task_states.RESUMING) def test_reboot_hard_unpausing(self): self._test_reboot_type(vm_states.ACTIVE, 'HARD', task_state=task_states.UNPAUSING) def test_reboot_hard_suspending(self): self._test_reboot_type(vm_states.ACTIVE, 'HARD', task_state=task_states.SUSPENDING) def test_reboot_hard_error_not_launched(self): self._test_reboot_type_fails('HARD', vm_state=vm_states.ERROR, launched_at=None) def test_reboot_soft(self): self._test_reboot_type(vm_states.ACTIVE, 'SOFT') def test_reboot_soft_error(self): self._test_reboot_type_fails('SOFT', vm_state=vm_states.ERROR) def test_reboot_soft_paused(self): self._test_reboot_type_fails('SOFT', vm_state=vm_states.PAUSED) def test_reboot_soft_stopped(self): self._test_reboot_type_fails('SOFT', vm_state=vm_states.STOPPED) def test_reboot_soft_suspended(self): self._test_reboot_type_fails('SOFT', vm_state=vm_states.SUSPENDED) def test_reboot_soft_rebooting(self): self._test_reboot_type_fails('SOFT', task_state=task_states.REBOOTING) def test_reboot_soft_rebooting_hard(self): self._test_reboot_type_fails('SOFT', task_state=task_states.REBOOTING_HARD) def test_reboot_soft_reboot_started(self): self._test_reboot_type_fails('SOFT', task_state=task_states.REBOOT_STARTED) def test_reboot_soft_reboot_pending(self): self._test_reboot_type_fails('SOFT', task_state=task_states.REBOOT_PENDING) def test_reboot_soft_rescued(self): self._test_reboot_type_fails('SOFT', vm_state=vm_states.RESCUED) def test_reboot_soft_error_not_launched(self): self._test_reboot_type_fails('SOFT', vm_state=vm_states.ERROR, launched_at=None) def test_reboot_soft_resuming(self): 
self._test_reboot_type_fails('SOFT', task_state=task_states.RESUMING) def test_reboot_soft_pausing(self): self._test_reboot_type_fails('SOFT', task_state=task_states.PAUSING) def test_reboot_soft_unpausing(self): self._test_reboot_type_fails('SOFT', task_state=task_states.UNPAUSING) def test_reboot_soft_suspending(self): self._test_reboot_type_fails('SOFT', task_state=task_states.SUSPENDING) def _test_delete_resizing_part(self, inst, deltas): old_flavor = inst.old_flavor deltas['cores'] = -old_flavor.vcpus deltas['ram'] = -old_flavor.memory_mb def _set_delete_shelved_part(self, inst, mock_image_delete): snapshot_id = inst.system_metadata.get('shelved_image_id') if snapshot_id == SHELVED_IMAGE: mock_image_delete.return_value = True elif snapshot_id == SHELVED_IMAGE_NOT_FOUND: mock_image_delete.side_effect = exception.ImageNotFound( image_id=snapshot_id) elif snapshot_id == SHELVED_IMAGE_NOT_AUTHORIZED: mock_image_delete.side_effect = exception.ImageNotAuthorized( image_id=snapshot_id) elif snapshot_id == SHELVED_IMAGE_EXCEPTION: mock_image_delete.side_effect = test.TestingException( "Unexpected error") return snapshot_id @mock.patch.object(compute_utils, 'notify_about_instance_action') @mock.patch.object(objects.Migration, 'get_by_instance_and_status') @mock.patch.object(image_api.API, 'delete') @mock.patch.object(objects.InstanceMapping, 'save') @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') @mock.patch.object(compute_utils, 'notify_about_instance_usage') @mock.patch.object(db, 'instance_destroy') @mock.patch.object(db, 'instance_system_metadata_get') @mock.patch.object(neutron_api.API, 'deallocate_for_instance') @mock.patch.object(db, 'instance_update_and_get_original') @mock.patch.object(compute_api.API, '_record_action_start') @mock.patch.object(servicegroup_api.API, 'service_is_up') @mock.patch.object(objects.Service, 'get_by_compute_host') @mock.patch.object(context.RequestContext, 'elevated') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid', return_value=[]) @mock.patch.object(objects.Instance, 'save') def _test_delete(self, delete_type, mock_save, mock_bdm_get, mock_elevated, mock_get_cn, mock_up, mock_record, mock_inst_update, mock_deallocate, mock_inst_meta, mock_inst_destroy, mock_notify_legacy, mock_get_inst, mock_save_im, mock_image_delete, mock_mig_get, mock_notify, **attrs): expected_save_calls = [mock.call()] expected_record_calls = [] expected_elevated_calls = [] inst = self._create_instance_obj() inst.update(attrs) inst._context = self.context deltas = {'instances': -1, 'cores': -inst.flavor.vcpus, 'ram': -inst.flavor.memory_mb} delete_time = datetime.datetime(1955, 11, 5, 9, 30, tzinfo=iso8601.UTC) self.useFixture(utils_fixture.TimeFixture(delete_time)) task_state = (delete_type == 'soft_delete' and task_states.SOFT_DELETING or task_states.DELETING) updates = {'progress': 0, 'task_state': task_state} if delete_type == 'soft_delete': updates['deleted_at'] = delete_time rpcapi = self.compute_api.compute_rpcapi mock_confirm = self.useFixture( fixtures.MockPatchObject(rpcapi, 'confirm_resize')).mock def _reset_task_state(context, instance, migration, src_host, cast=False): inst.update({'task_state': None}) # After confirm resize action, instance task_state is reset to None mock_confirm.side_effect = _reset_task_state is_shelved = inst.vm_state in (vm_states.SHELVED, vm_states.SHELVED_OFFLOADED) if is_shelved: snapshot_id = self._set_delete_shelved_part(inst, mock_image_delete) mock_terminate = self.useFixture( 
fixtures.MockPatchObject(rpcapi, 'terminate_instance')).mock mock_soft_delete = self.useFixture( fixtures.MockPatchObject(rpcapi, 'soft_delete_instance')).mock if inst.task_state == task_states.RESIZE_FINISH: self._test_delete_resizing_part(inst, deltas) # NOTE(comstud): This is getting messy. But what we are wanting # to test is: # * Check for downed host # * If downed host: # * Clean up instance, destroying it, sending notifications. # (Tested in _test_downed_host_part()) # * If not downed host: # * Record the action start. # * Cast to compute_rpcapi. cast = True is_downed_host = inst.host == 'down-host' or inst.host is None if inst.vm_state == vm_states.RESIZED: migration = objects.Migration._from_db_object( self.context, objects.Migration(), test_migration.fake_db_migration()) mock_elevated.return_value = self.context expected_elevated_calls.append(mock.call()) mock_mig_get.return_value = migration expected_record_calls.append( mock.call(self.context, inst, instance_actions.CONFIRM_RESIZE)) # After confirm resize action, instance task_state # is reset to None, so is the expected value. But # for soft delete, task_state will be again reset # back to soft-deleting in the code to avoid status # checking failure. updates['task_state'] = None if delete_type == 'soft_delete': expected_save_calls.append(mock.call()) updates['task_state'] = 'soft-deleting' if inst.host is not None: mock_elevated.return_value = self.context expected_elevated_calls.append(mock.call()) mock_get_cn.return_value = objects.Service() mock_up.return_value = (inst.host != 'down-host') if is_downed_host: mock_elevated.return_value = self.context expected_elevated_calls.append(mock.call()) expected_save_calls.append(mock.call()) state = ('soft' in delete_type and vm_states.SOFT_DELETED or vm_states.DELETED) updates.update({'vm_state': state, 'task_state': None, 'terminated_at': delete_time, 'deleted_at': delete_time, 'deleted': True}) fake_inst = fake_instance.fake_db_instance(**updates) mock_inst_destroy.return_value = fake_inst cell = objects.CellMapping(uuid=uuids.cell, transport_url='fake://', database_connection='fake://') im = objects.InstanceMapping(cell_mapping=cell) mock_get_inst.return_value = im cast = False if cast: expected_record_calls.append(mock.call(self.context, inst, instance_actions.DELETE)) # NOTE(takashin): If objects.Instance.destroy() is called, # objects.Instance.uuid (inst.uuid) and host (inst.host) are changed. # So preserve them before calling the method to test. 
instance_uuid = inst.uuid instance_host = inst.host getattr(self.compute_api, delete_type)(self.context, inst) for k, v in updates.items(): self.assertEqual(inst[k], v) mock_save.assert_has_calls(expected_save_calls) mock_bdm_get.assert_called_once_with(self.context, instance_uuid) if expected_record_calls: mock_record.assert_has_calls(expected_record_calls) if expected_elevated_calls: mock_elevated.assert_has_calls(expected_elevated_calls) if inst.vm_state == vm_states.RESIZED: mock_mig_get.assert_called_once_with( self.context, instance_uuid, 'finished') mock_confirm.assert_called_once_with( self.context, inst, migration, migration['source_compute'], cast=False) if instance_host is not None: mock_get_cn.assert_called_once_with(self.context, instance_host) mock_up.assert_called_once_with( test.MatchType(objects.Service)) if is_downed_host: if 'soft' in delete_type: mock_notify_legacy.assert_has_calls([ mock.call(self.compute_api.notifier, self.context, inst, 'delete.start'), mock.call(self.compute_api.notifier, self.context, inst, 'delete.end')]) else: mock_notify_legacy.assert_has_calls([ mock.call(self.compute_api.notifier, self.context, inst, '%s.start' % delete_type), mock.call(self.compute_api.notifier, self.context, inst, '%s.end' % delete_type)]) mock_deallocate.assert_called_once_with(self.context, inst) mock_inst_destroy.assert_called_once_with( self.context, instance_uuid, constraint=None, hard_delete=False) mock_get_inst.assert_called_with(self.context, instance_uuid) self.assertEqual(2, mock_get_inst.call_count) self.assertTrue(mock_get_inst.return_value.queued_for_delete) mock_save_im.assert_called_once_with() if cast: if delete_type == 'soft_delete': mock_soft_delete.assert_called_once_with(self.context, inst) elif delete_type in ['delete', 'force_delete']: mock_terminate.assert_called_once_with( self.context, inst, []) if is_shelved: mock_image_delete.assert_called_once_with(self.context, snapshot_id) if not cast and delete_type == 'delete': mock_notify.assert_has_calls([ mock.call(self.context, inst, host='fake-mini', source='nova-api', action=delete_type, phase='start'), mock.call(self.context, inst, host='fake-mini', source='nova-api', action=delete_type, phase='end')]) def test_delete(self): self._test_delete('delete') def test_delete_if_not_launched(self): self._test_delete('delete', launched_at=None) def test_delete_in_resizing(self): old_flavor = objects.Flavor(vcpus=1, memory_mb=512, extra_specs={}) self._test_delete('delete', task_state=task_states.RESIZE_FINISH, old_flavor=old_flavor) def test_delete_in_resized(self): self._test_delete('delete', vm_state=vm_states.RESIZED) def test_delete_shelved(self): fake_sys_meta = {'shelved_image_id': SHELVED_IMAGE} self._test_delete('delete', vm_state=vm_states.SHELVED, system_metadata=fake_sys_meta) def test_delete_shelved_offloaded(self): fake_sys_meta = {'shelved_image_id': SHELVED_IMAGE} self._test_delete('delete', vm_state=vm_states.SHELVED_OFFLOADED, system_metadata=fake_sys_meta) def test_delete_shelved_image_not_found(self): fake_sys_meta = {'shelved_image_id': SHELVED_IMAGE_NOT_FOUND} self._test_delete('delete', vm_state=vm_states.SHELVED_OFFLOADED, system_metadata=fake_sys_meta) def test_delete_shelved_image_not_authorized(self): fake_sys_meta = {'shelved_image_id': SHELVED_IMAGE_NOT_AUTHORIZED} self._test_delete('delete', vm_state=vm_states.SHELVED_OFFLOADED, system_metadata=fake_sys_meta) def test_delete_shelved_exception(self): fake_sys_meta = {'shelved_image_id': SHELVED_IMAGE_EXCEPTION} 
self._test_delete('delete', vm_state=vm_states.SHELVED, system_metadata=fake_sys_meta) def test_delete_with_down_host(self): self._test_delete('delete', host='down-host') def test_delete_soft_with_down_host(self): self._test_delete('soft_delete', host='down-host') def test_delete_soft_in_resized(self): self._test_delete('soft_delete', vm_state=vm_states.RESIZED) def test_delete_soft(self): self._test_delete('soft_delete') def test_delete_forced(self): fake_sys_meta = {'shelved_image_id': SHELVED_IMAGE} for vm_state in self._get_vm_states(): if vm_state in (vm_states.SHELVED, vm_states.SHELVED_OFFLOADED): self._test_delete('force_delete', vm_state=vm_state, system_metadata=fake_sys_meta) self._test_delete('force_delete', vm_state=vm_state) @mock.patch('nova.compute.api.API._delete_while_booting', return_value=False) @mock.patch('nova.compute.api.API._lookup_instance') @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') @mock.patch('nova.objects.Instance.save') @mock.patch('nova.compute.utils.notify_about_instance_usage') @mock.patch('nova.objects.Service.get_by_compute_host') @mock.patch('nova.compute.api.API._local_delete') def test_delete_error_state_with_no_host( self, mock_local_delete, mock_service_get, _mock_notify, _mock_save, mock_bdm_get, mock_lookup, _mock_del_booting): # Instance in error state with no host should be a local delete # for non API cells inst = self._create_instance_obj(params=dict(vm_state=vm_states.ERROR, host=None)) mock_lookup.return_value = None, inst with mock.patch.object(self.compute_api.compute_rpcapi, 'terminate_instance') as mock_terminate: self.compute_api.delete(self.context, inst) mock_local_delete.assert_called_once_with( self.context, inst, mock_bdm_get.return_value, 'delete', self.compute_api._do_delete) mock_terminate.assert_not_called() mock_service_get.assert_not_called() @mock.patch('nova.compute.api.API._delete_while_booting', return_value=False) @mock.patch('nova.compute.api.API._lookup_instance') @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') @mock.patch('nova.objects.Instance.save') @mock.patch('nova.compute.utils.notify_about_instance_usage') @mock.patch('nova.objects.Service.get_by_compute_host') @mock.patch('nova.context.RequestContext.elevated') @mock.patch('nova.servicegroup.api.API.service_is_up', return_value=True) @mock.patch('nova.compute.api.API._record_action_start') @mock.patch('nova.compute.api.API._local_delete') def test_delete_error_state_with_host_set( self, mock_local_delete, _mock_record, mock_service_up, mock_elevated, mock_service_get, _mock_notify, _mock_save, mock_bdm_get, mock_lookup, _mock_del_booting): # Instance in error state with host set should be a non-local delete # for non API cells if the service is up inst = self._create_instance_obj(params=dict(vm_state=vm_states.ERROR, host='fake-host')) mock_lookup.return_value = inst with mock.patch.object(self.compute_api.compute_rpcapi, 'terminate_instance') as mock_terminate: self.compute_api.delete(self.context, inst) mock_service_get.assert_called_once_with( mock_elevated.return_value, 'fake-host') mock_service_up.assert_called_once_with( mock_service_get.return_value) mock_terminate.assert_called_once_with( self.context, inst, mock_bdm_get.return_value) mock_local_delete.assert_not_called() def test_delete_forced_when_task_state_is_not_none(self): for vm_state in self._get_vm_states(): self._test_delete('force_delete', vm_state=vm_state, task_state=task_states.RESIZE_MIGRATING) @mock.patch.object(compute_utils, 
'notify_about_instance_action') @mock.patch.object(compute_utils, 'notify_about_instance_usage') @mock.patch.object(db, 'instance_destroy') @mock.patch.object(db, 'constraint') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') def test_delete_fast_if_host_not_set(self, mock_br_get, mock_save, mock_bdm_get, mock_cons, mock_inst_destroy, mock_notify_legacy, mock_notify): self.useFixture(nova_fixtures.AllServicesCurrent()) inst = self._create_instance_obj() inst.host = '' updates = {'progress': 0, 'task_state': task_states.DELETING} mock_lookup = self.useFixture( fixtures.MockPatchObject(self.compute_api, '_lookup_instance')).mock mock_lookup.return_value = (None, inst) mock_bdm_get.return_value = objects.BlockDeviceMappingList() mock_br_get.side_effect = exception.BuildRequestNotFound( uuid=inst.uuid) mock_cons.return_value = 'constraint' delete_time = datetime.datetime(1955, 11, 5, 9, 30, tzinfo=iso8601.UTC) updates['deleted_at'] = delete_time updates['deleted'] = True fake_inst = fake_instance.fake_db_instance(**updates) mock_inst_destroy.return_value = fake_inst instance_uuid = inst.uuid self.compute_api.delete(self.context, inst) for k, v in updates.items(): self.assertEqual(inst[k], v) mock_lookup.assert_called_once_with(self.context, instance_uuid) mock_bdm_get.assert_called_once_with(self.context, instance_uuid) mock_br_get.assert_called_once_with(self.context, instance_uuid) mock_save.assert_called_once_with() mock_notify_legacy.assert_has_calls([ mock.call(self.compute_api.notifier, self.context, inst, 'delete.start'), mock.call(self.compute_api.notifier, self.context, inst, 'delete.end')]) mock_notify.assert_has_calls([ mock.call(self.context, inst, host='fake-mini', source='nova-api', action='delete', phase='start'), mock.call(self.context, inst, host='fake-mini', source='nova-api', action='delete', phase='end')]) mock_cons.assert_called_once_with(host=mock.ANY) mock_inst_destroy.assert_called_once_with( self.context, instance_uuid, constraint='constraint', hard_delete=False) def _fake_do_delete(context, instance, bdms, rservations=None, local=False): pass @mock.patch.object(compute_utils, 'notify_about_instance_action') @mock.patch.object(objects.BlockDeviceMapping, 'destroy') @mock.patch.object(cinder.API, 'detach') @mock.patch.object(compute_utils, 'notify_about_instance_usage') @mock.patch.object(neutron_api.API, 'deallocate_for_instance') @mock.patch.object(context.RequestContext, 'elevated') @mock.patch.object(objects.Instance, 'destroy') def test_local_delete_with_deleted_volume( self, mock_inst_destroy, mock_elevated, mock_dealloc, mock_notify_legacy, mock_detach, mock_bdm_destroy, mock_notify): bdms = [objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict( {'id': 42, 'volume_id': 'volume_id', 'source_type': 'volume', 'destination_type': 'volume', 'delete_on_termination': False}))] inst = self._create_instance_obj() inst._context = self.context mock_elevated.return_value = self.context mock_detach.side_effect = exception.VolumeNotFound('volume_id') self.compute_api._local_delete(self.context, inst, bdms, 'delete', self._fake_do_delete) mock_notify_legacy.assert_has_calls([ mock.call(self.compute_api.notifier, self.context, inst, 'delete.start'), mock.call(self.compute_api.notifier, self.context, inst, 'delete.end')]) mock_notify.assert_has_calls([ mock.call(self.context, inst, host='fake-mini', source='nova-api', 
action='delete', phase='start'), mock.call(self.context, inst, host='fake-mini', source='nova-api', action='delete', phase='end')]) mock_elevated.assert_has_calls([mock.call(), mock.call()]) mock_detach.assert_called_once_with(mock.ANY, 'volume_id', inst.uuid) mock_bdm_destroy.assert_called_once_with() mock_inst_destroy.assert_called_once_with() mock_dealloc.assert_called_once_with(self.context, inst) @mock.patch.object(compute_utils, 'notify_about_instance_action') @mock.patch.object(objects.BlockDeviceMapping, 'destroy') @mock.patch.object(cinder.API, 'detach') @mock.patch.object(compute_utils, 'notify_about_instance_usage') @mock.patch.object(neutron_api.API, 'deallocate_for_instance') @mock.patch.object(context.RequestContext, 'elevated') @mock.patch.object(objects.Instance, 'destroy') @mock.patch.object(compute_utils, 'delete_arqs_if_needed') def test_local_delete_for_arqs( self, mock_del_arqs, mock_inst_destroy, mock_elevated, mock_dealloc, mock_notify_legacy, mock_detach, mock_bdm_destroy, mock_notify): inst = self._create_instance_obj() inst._context = self.context mock_elevated.return_value = self.context bdms = [] self.compute_api._local_delete(self.context, inst, bdms, 'delete', self._fake_do_delete) mock_del_arqs.assert_called_once_with(self.context, inst) @mock.patch.object(objects.BlockDeviceMapping, 'destroy') def test_local_cleanup_bdm_volumes_stashed_connector(self, mock_destroy): """Tests that we call volume_api.terminate_connection when we found a stashed connector in the bdm.connection_info dict. """ inst = self._create_instance_obj() # create two fake bdms, one is a volume and one isn't, both will be # destroyed but we only cleanup the volume bdm in cinder conn_info = {'connector': {'host': inst.host}} vol_bdm = objects.BlockDeviceMapping(self.context, id=1, instance_uuid=inst.uuid, volume_id=uuids.volume_id, source_type='volume', destination_type='volume', delete_on_termination=True, connection_info=jsonutils.dumps( conn_info ), attachment_id=None,) loc_bdm = objects.BlockDeviceMapping(self.context, id=2, instance_uuid=inst.uuid, volume_id=uuids.volume_id2, source_type='blank', destination_type='local') bdms = objects.BlockDeviceMappingList(objects=[vol_bdm, loc_bdm]) @mock.patch.object(self.compute_api.volume_api, 'terminate_connection') @mock.patch.object(self.compute_api.volume_api, 'detach') @mock.patch.object(self.compute_api.volume_api, 'delete') @mock.patch.object(self.context, 'elevated', return_value=self.context) def do_test(self, mock_elevated, mock_delete, mock_detach, mock_terminate): self.compute_api._local_cleanup_bdm_volumes( bdms, inst, self.context) mock_terminate.assert_called_once_with( self.context, uuids.volume_id, conn_info['connector']) mock_detach.assert_called_once_with( self.context, uuids.volume_id, inst.uuid) mock_delete.assert_called_once_with(self.context, uuids.volume_id) self.assertEqual(2, mock_destroy.call_count) do_test(self) @mock.patch.object(objects.BlockDeviceMapping, 'destroy') def test_local_cleanup_bdm_volumes_with_attach_id(self, mock_destroy): """Tests that we call volume_api.attachment_delete when we have an attachment_id in the bdm. 
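
        This exercises the new-style (Cinder 3.44) attachment flow, so
        attachment_delete is expected rather than the terminate_connection/
        detach calls used for the legacy stashed-connector flow above.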
""" instance = self._create_instance_obj() conn_info = {'connector': {'host': instance.host}} vol_bdm = objects.BlockDeviceMapping( self.context, id=1, instance_uuid=instance.uuid, volume_id=uuids.volume_id, source_type='volume', destination_type='volume', delete_on_termination=True, connection_info=jsonutils.dumps(conn_info), attachment_id=uuids.attachment_id) bdms = objects.BlockDeviceMappingList(objects=[vol_bdm]) @mock.patch.object(self.compute_api.volume_api, 'delete') @mock.patch.object(self.compute_api.volume_api, 'attachment_delete') @mock.patch.object(self.context, 'elevated', return_value=self.context) def do_test(self, mock_elevated, mock_attach_delete, mock_delete): self.compute_api._local_cleanup_bdm_volumes( bdms, instance, self.context) mock_attach_delete.assert_called_once_with( self.context, vol_bdm.attachment_id) mock_delete.assert_called_once_with( self.context, vol_bdm.volume_id) mock_destroy.assert_called_once_with() do_test(self) @mock.patch.object(objects.BlockDeviceMapping, 'destroy') def test_local_cleanup_bdm_volumes_stashed_connector_host_none( self, mock_destroy): """Tests that we call volume_api.terminate_connection when we found a stashed connector in the bdm.connection_info dict. This tests the case where: 1) the instance host is None 2) the instance vm_state is one where we expect host to be None We allow a mismatch of the host in this situation if the instance is in a state where we expect its host to have been set to None, such as ERROR or SHELVED_OFFLOADED. """ params = dict(host=None, vm_state=vm_states.ERROR) inst = self._create_instance_obj(params=params) conn_info = {'connector': {'host': 'orig-host'}} vol_bdm = objects.BlockDeviceMapping(self.context, id=1, instance_uuid=inst.uuid, volume_id=uuids.volume_id, source_type='volume', destination_type='volume', delete_on_termination=True, connection_info=jsonutils.dumps( conn_info), attachment_id=None) bdms = objects.BlockDeviceMappingList(objects=[vol_bdm]) @mock.patch.object(self.compute_api.volume_api, 'terminate_connection') @mock.patch.object(self.compute_api.volume_api, 'detach') @mock.patch.object(self.compute_api.volume_api, 'delete') @mock.patch.object(self.context, 'elevated', return_value=self.context) def do_test(self, mock_elevated, mock_delete, mock_detach, mock_terminate): self.compute_api._local_cleanup_bdm_volumes( bdms, inst, self.context) mock_terminate.assert_called_once_with( self.context, uuids.volume_id, conn_info['connector']) mock_detach.assert_called_once_with( self.context, uuids.volume_id, inst.uuid) mock_delete.assert_called_once_with(self.context, uuids.volume_id) mock_destroy.assert_called_once_with() do_test(self) def test_delete_disabled(self): # If 'disable_terminate' is True, log output is executed only and # just return immediately. 
inst = self._create_instance_obj() inst.disable_terminate = True self.compute_api.delete(self.context, inst) @mock.patch.object(objects.Instance, 'save', side_effect=test.TestingException) @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid', return_value=objects.BlockDeviceMappingList()) def test_delete_soft_rollback(self, mock_get, mock_save): inst = self._create_instance_obj() delete_time = datetime.datetime(1955, 11, 5) self.useFixture(utils_fixture.TimeFixture(delete_time)) self.assertRaises(test.TestingException, self.compute_api.soft_delete, self.context, inst) mock_get.assert_called_once_with(self.context, inst.uuid) mock_save.assert_called_once_with() @mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') def test_attempt_delete_of_buildrequest_success(self, mock_get_by_inst): build_req_mock = mock.MagicMock() mock_get_by_inst.return_value = build_req_mock inst = self._create_instance_obj() self.assertTrue( self.compute_api._attempt_delete_of_buildrequest(self.context, inst)) self.assertTrue(build_req_mock.destroy.called) @mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') def test_attempt_delete_of_buildrequest_not_found(self, mock_get_by_inst): mock_get_by_inst.side_effect = exception.BuildRequestNotFound( uuid='fake') inst = self._create_instance_obj() self.assertFalse( self.compute_api._attempt_delete_of_buildrequest(self.context, inst)) def test_attempt_delete_of_buildrequest_already_deleted(self): inst = self._create_instance_obj() build_req_mock = mock.MagicMock() build_req_mock.destroy.side_effect = exception.BuildRequestNotFound( uuid='fake') with mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid', return_value=build_req_mock): self.assertFalse( self.compute_api._attempt_delete_of_buildrequest(self.context, inst)) self.assertTrue(build_req_mock.destroy.called) def test_delete_while_booting_buildreq_not_deleted(self): self.useFixture(nova_fixtures.AllServicesCurrent()) inst = self._create_instance_obj() with mock.patch.object(self.compute_api, '_attempt_delete_of_buildrequest', return_value=False): self.assertFalse( self.compute_api._delete_while_booting(self.context, inst)) def test_delete_while_booting_buildreq_deleted_instance_none(self): self.useFixture(nova_fixtures.AllServicesCurrent()) inst = self._create_instance_obj() @mock.patch.object(self.compute_api, '_attempt_delete_of_buildrequest', return_value=True) @mock.patch.object(self.compute_api, '_lookup_instance', return_value=(None, None)) def test(mock_lookup, mock_attempt): self.assertTrue( self.compute_api._delete_while_booting(self.context, inst)) test() def test_delete_while_booting_buildreq_deleted_instance_not_found(self): self.useFixture(nova_fixtures.AllServicesCurrent()) inst = self._create_instance_obj() @mock.patch.object(self.compute_api, '_attempt_delete_of_buildrequest', return_value=True) @mock.patch.object(self.compute_api, '_lookup_instance', side_effect=exception.InstanceNotFound( instance_id='fake')) def test(mock_lookup, mock_attempt): self.assertTrue( self.compute_api._delete_while_booting(self.context, inst)) test() @mock.patch('nova.compute.utils.notify_about_instance_delete') @mock.patch('nova.objects.Instance.destroy') def test_delete_instance_from_cell0(self, destroy_mock, notify_mock): """Tests the case that the instance does not have a host and was not deleted while building, so conductor put it into cell0 so the API has to delete the instance from cell0. 
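
        The instance record is destroyed directly and the delete notification
        is emitted; the final argument passed to _delete below is a
        NonCallableMock, so that path must not attempt to call it.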
""" instance = self._create_instance_obj({'host': None}) cell0 = objects.CellMapping(uuid=objects.CellMapping.CELL0_UUID) with test.nested( mock.patch.object(self.compute_api, '_delete_while_booting', return_value=False), mock.patch.object(self.compute_api, '_lookup_instance', return_value=(cell0, instance)), ) as ( _delete_while_booting, _lookup_instance, ): self.compute_api._delete( self.context, instance, 'delete', mock.NonCallableMock()) _delete_while_booting.assert_called_once_with( self.context, instance) _lookup_instance.assert_called_once_with( self.context, instance.uuid) notify_mock.assert_called_once_with( self.compute_api.notifier, self.context, instance) destroy_mock.assert_called_once_with() @mock.patch.object(context, 'target_cell') @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid', side_effect=exception.InstanceMappingNotFound( uuid='fake')) def test_lookup_instance_mapping_none(self, mock_map_get, mock_target_cell): instance = self._create_instance_obj() with mock.patch.object(objects.Instance, 'get_by_uuid', return_value=instance) as mock_inst_get: cell, ret_instance = self.compute_api._lookup_instance( self.context, instance.uuid) self.assertEqual((None, instance), (cell, ret_instance)) mock_inst_get.assert_called_once_with(self.context, instance.uuid) self.assertFalse(mock_target_cell.called) @mock.patch.object(context, 'target_cell') @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid', return_value=objects.InstanceMapping(cell_mapping=None)) def test_lookup_instance_cell_mapping_none(self, mock_map_get, mock_target_cell): instance = self._create_instance_obj() with mock.patch.object(objects.Instance, 'get_by_uuid', return_value=instance) as mock_inst_get: cell, ret_instance = self.compute_api._lookup_instance( self.context, instance.uuid) self.assertEqual((None, instance), (cell, ret_instance)) mock_inst_get.assert_called_once_with(self.context, instance.uuid) self.assertFalse(mock_target_cell.called) @mock.patch.object(context, 'target_cell') def test_lookup_instance_cell_mapping(self, mock_target_cell): instance = self._create_instance_obj() mock_target_cell.return_value.__enter__.return_value = self.context inst_map = objects.InstanceMapping( cell_mapping=objects.CellMapping(database_connection='', transport_url='none')) @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid', return_value=inst_map) @mock.patch.object(objects.Instance, 'get_by_uuid', return_value=instance) def test(mock_inst_get, mock_map_get): cell, ret_instance = self.compute_api._lookup_instance( self.context, instance.uuid) expected_cell = inst_map.cell_mapping self.assertEqual((expected_cell, instance), (cell, ret_instance)) mock_inst_get.assert_called_once_with(self.context, instance.uuid) mock_target_cell.assert_called_once_with(self.context, inst_map.cell_mapping) test() @mock.patch('nova.compute.api.API._get_source_compute_service') @mock.patch('nova.servicegroup.api.API.service_is_up', return_value=True) @mock.patch.object(objects.Migration, 'save') @mock.patch.object(objects.Migration, 'get_by_instance_and_status') @mock.patch.object(context.RequestContext, 'elevated') def _test_confirm_resize(self, mock_elevated, mock_get, mock_save, mock_service_is_up, mock_get_service, mig_ref_passed=False): params = dict(vm_state=vm_states.RESIZED) fake_inst = self._create_instance_obj(params=params) fake_mig = objects.Migration._from_db_object( self.context, objects.Migration(), test_migration.fake_db_migration()) mock_record = self.useFixture( 
fixtures.MockPatchObject(self.compute_api, '_record_action_start')).mock mock_confirm = self.useFixture( fixtures.MockPatchObject(self.compute_api.compute_rpcapi, 'confirm_resize')).mock mock_elevated.return_value = self.context if not mig_ref_passed: mock_get.return_value = fake_mig def _check_mig(expected_task_state=None): self.assertEqual('confirming', fake_mig.status) mock_save.side_effect = _check_mig if mig_ref_passed: self.compute_api.confirm_resize(self.context, fake_inst, migration=fake_mig) else: self.compute_api.confirm_resize(self.context, fake_inst) mock_elevated.assert_called_once_with() mock_service_is_up.assert_called_once_with( mock_get_service.return_value) mock_save.assert_called_once_with() mock_record.assert_called_once_with(self.context, fake_inst, 'confirmResize') mock_confirm.assert_called_once_with(self.context, fake_inst, fake_mig, 'compute-source') if not mig_ref_passed: mock_get.assert_called_once_with(self.context, fake_inst['uuid'], 'finished') def test_confirm_resize(self): self._test_confirm_resize() def test_confirm_resize_with_migration_ref(self): self._test_confirm_resize(mig_ref_passed=True) @mock.patch('nova.objects.HostMapping.get_by_host', return_value=objects.HostMapping( cell_mapping=objects.CellMapping( database_connection='fake://', transport_url='none://', uuid=uuids.cell_uuid))) @mock.patch('nova.objects.Service.get_by_compute_host') def test_get_source_compute_service(self, mock_service_get, mock_hm_get): # First start with a same-cell migration. migration = objects.Migration(source_compute='source.host', cross_cell_move=False) self.compute_api._get_source_compute_service(self.context, migration) mock_hm_get.assert_not_called() mock_service_get.assert_called_once_with(self.context, 'source.host') # Make sure the context was not targeted. ctxt = mock_service_get.call_args[0][0] self.assertIsNone(ctxt.cell_uuid) # Now test with a cross-cell migration. mock_service_get.reset_mock() migration.cross_cell_move = True self.compute_api._get_source_compute_service(self.context, migration) mock_hm_get.assert_called_once_with(self.context, 'source.host') mock_service_get.assert_called_once_with( test.MatchType(context.RequestContext), 'source.host') # Make sure the context was targeted. 
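        # (i.e. it should carry the cell UUID from the HostMapping looked up
        # for the migration's source host)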
ctxt = mock_service_get.call_args[0][0] self.assertEqual(uuids.cell_uuid, ctxt.cell_uuid) @mock.patch('nova.virt.hardware.numa_get_constraints') @mock.patch('nova.network.neutron.API.get_requested_resource_for_instance', return_value=[]) @mock.patch('nova.availability_zones.get_host_availability_zone', return_value='nova') @mock.patch('nova.objects.Quotas.check_deltas') @mock.patch('nova.objects.Migration.get_by_instance_and_status') @mock.patch('nova.context.RequestContext.elevated') @mock.patch('nova.objects.RequestSpec.get_by_instance_uuid') def _test_revert_resize( self, mock_get_reqspec, mock_elevated, mock_get_migration, mock_check, mock_get_host_az, mock_get_requested_resources, mock_get_numa, same_flavor): params = dict(vm_state=vm_states.RESIZED) fake_inst = self._create_instance_obj(params=params) fake_inst.info_cache.network_info = model.NetworkInfo([ model.VIF(id=uuids.port1, profile={'allocation': uuids.rp})]) fake_mig = objects.Migration._from_db_object( self.context, objects.Migration(), test_migration.fake_db_migration()) fake_reqspec = objects.RequestSpec() fake_reqspec.flavor = fake_inst.flavor fake_numa_topology = objects.InstanceNUMATopology(cells=[ objects.InstanceNUMACell( id=0, cpuset=set([0]), memory=512, pagesize=None, cpu_pinning_raw=None, cpuset_reserved=None, cpu_policy=None, cpu_thread_policy=None)]) if same_flavor: fake_inst.old_flavor = fake_inst.flavor else: fake_inst.old_flavor = self._create_flavor( id=200, flavorid='new-flavor-id', name='new_flavor', disabled=False, extra_specs={'hw:numa_nodes': '1'}) mock_elevated.return_value = self.context mock_get_migration.return_value = fake_mig mock_get_reqspec.return_value = fake_reqspec mock_get_numa.return_value = fake_numa_topology def _check_reqspec(): if same_flavor: assert_func = self.assertNotEqual else: assert_func = self.assertEqual assert_func(fake_numa_topology, fake_reqspec.numa_topology) assert_func(fake_inst.old_flavor, fake_reqspec.flavor) def _check_state(expected_task_state=None): self.assertEqual(task_states.RESIZE_REVERTING, fake_inst.task_state) def _check_mig(): self.assertEqual('reverting', fake_mig.status) with test.nested( mock.patch.object(fake_reqspec, 'save', side_effect=_check_reqspec), mock.patch.object(fake_inst, 'save', side_effect=_check_state), mock.patch.object(fake_mig, 'save', side_effect=_check_mig), mock.patch.object(self.compute_api, '_record_action_start'), mock.patch.object(self.compute_api.compute_rpcapi, 'revert_resize') ) as (mock_reqspec_save, mock_inst_save, mock_mig_save, mock_record_action, mock_revert_resize): self.compute_api.revert_resize(self.context, fake_inst) mock_elevated.assert_called_once_with() mock_get_migration.assert_called_once_with( self.context, fake_inst['uuid'], 'finished') mock_inst_save.assert_called_once_with(expected_task_state=[None]) mock_mig_save.assert_called_once_with() mock_get_reqspec.assert_called_once_with( self.context, fake_inst.uuid) if same_flavor: # if we are not changing flavors through the revert, we # shouldn't attempt to rebuild the NUMA topology since it won't # have changed mock_get_numa.assert_not_called() else: # not so if the flavor *has* changed though mock_get_numa.assert_called_once_with( fake_inst.old_flavor, mock.ANY) mock_record_action.assert_called_once_with(self.context, fake_inst, 'revertResize') mock_revert_resize.assert_called_once_with( self.context, fake_inst, fake_mig, 'compute-dest', mock_get_reqspec.return_value) mock_get_requested_resources.assert_called_once_with( self.context, fake_inst.uuid) 
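        # The request spec's requested_resources should have been reset from
        # the (empty) port resource request list returned by neutron above.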
self.assertEqual( [], mock_get_reqspec.return_value.requested_resources) def test_revert_resize(self): self._test_revert_resize(same_flavor=False) def test_revert_resize_same_flavor(self): """Test behavior when reverting a migration or a resize to the same flavor. """ self._test_revert_resize(same_flavor=True) @mock.patch('nova.network.neutron.API.get_requested_resource_for_instance') @mock.patch('nova.availability_zones.get_host_availability_zone', return_value='nova') @mock.patch('nova.objects.Quotas.check_deltas') @mock.patch('nova.objects.Migration.get_by_instance_and_status') @mock.patch('nova.context.RequestContext.elevated') @mock.patch('nova.objects.RequestSpec') def test_revert_resize_concurrent_fail( self, mock_reqspec, mock_elevated, mock_get_migration, mock_check, mock_get_host_az, mock_get_requested_resources): params = dict(vm_state=vm_states.RESIZED) fake_inst = self._create_instance_obj(params=params) fake_inst.info_cache.network_info = model.NetworkInfo([]) fake_inst.old_flavor = fake_inst.flavor fake_mig = objects.Migration._from_db_object( self.context, objects.Migration(), test_migration.fake_db_migration()) mock_elevated.return_value = self.context mock_get_migration.return_value = fake_mig exc = exception.UnexpectedTaskStateError( instance_uuid=fake_inst['uuid'], actual={'task_state': task_states.RESIZE_REVERTING}, expected={'task_state': [None]}) with mock.patch.object( fake_inst, 'save', side_effect=exc) as mock_inst_save: self.assertRaises(exception.UnexpectedTaskStateError, self.compute_api.revert_resize, self.context, fake_inst) mock_elevated.assert_called_once_with() mock_get_migration.assert_called_once_with( self.context, fake_inst['uuid'], 'finished') mock_inst_save.assert_called_once_with(expected_task_state=[None]) mock_get_requested_resources.assert_not_called() @mock.patch('nova.compute.api.API.get_instance_host_status', new=mock.Mock(return_value=fields_obj.HostStatus.UP)) @mock.patch('nova.virt.hardware.numa_get_constraints') @mock.patch('nova.compute.api.API._allow_resize_to_same_host') @mock.patch('nova.compute.utils.is_volume_backed_instance', return_value=False) @mock.patch('nova.compute.api.API._validate_flavor_image_nostatus') @mock.patch('nova.objects.Migration') @mock.patch.object(compute_api.API, '_record_action_start') @mock.patch.object(quotas_obj.Quotas, 'limit_check_project_and_user') @mock.patch.object(quotas_obj.Quotas, 'count_as_dict') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(compute_utils, 'upsize_quota_delta') @mock.patch.object(flavors, 'get_flavor_by_flavor_id') @mock.patch.object(objects.RequestSpec, 'get_by_instance_uuid') @mock.patch.object(objects.ComputeNodeList, 'get_all_by_host') def _test_resize(self, mock_get_all_by_host, mock_get_by_instance_uuid, mock_get_flavor, mock_upsize, mock_inst_save, mock_count, mock_limit, mock_record, mock_migration, mock_validate, mock_is_vol_backed, mock_allow_resize_to_same_host, mock_get_numa, flavor_id_passed=True, same_host=False, allow_same_host=False, project_id=None, same_flavor=False, clean_shutdown=True, host_name=None, request_spec=True, requested_destination=False, allow_cross_cell_resize=False): self.flags(allow_resize_to_same_host=allow_same_host) mock_allow_resize_to_same_host.return_value = allow_same_host params = {} if project_id is not None: # To test instance w/ different project id than context (admin) params['project_id'] = project_id fake_inst = self._create_instance_obj(params=params) fake_numa_topology = objects.InstanceNUMATopology(cells=[ 
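            # A minimal single-cell guest NUMA topology, matching the
            # hw:numa_nodes=1 extra spec used for the new flavor below.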
objects.InstanceNUMACell( id=0, cpuset=set([0]), memory=512, pagesize=None, cpu_pinning_raw=None, cpuset_reserved=None, cpu_policy=None, cpu_thread_policy=None)]) mock_resize = self.useFixture( fixtures.MockPatchObject(self.compute_api.compute_task_api, 'resize_instance')).mock mock_get_numa.return_value = fake_numa_topology if host_name: mock_get_all_by_host.return_value = [objects.ComputeNode( host=host_name, hypervisor_hostname='hypervisor_host')] current_flavor = fake_inst.get_flavor() if flavor_id_passed: new_flavor = self._create_flavor( id=200, flavorid='new-flavor-id', name='new_flavor', disabled=False, extra_specs={'hw:numa_nodes': '1'}) if same_flavor: new_flavor.id = current_flavor.id mock_get_flavor.return_value = new_flavor else: new_flavor = current_flavor if not (flavor_id_passed and same_flavor): project_id, user_id = quotas_obj.ids_from_instance(self.context, fake_inst) if flavor_id_passed: mock_upsize.return_value = {'cores': 0, 'ram': 0} proj_count = {'instances': 1, 'cores': current_flavor.vcpus, 'ram': current_flavor.memory_mb} user_count = proj_count.copy() mock_count.return_value = {'project': proj_count, 'user': user_count} def _check_state(expected_task_state=None): self.assertEqual(task_states.RESIZE_PREP, fake_inst.task_state) self.assertEqual(fake_inst.progress, 0) mock_inst_save.side_effect = _check_state if allow_same_host: filter_properties = {'ignore_hosts': []} else: filter_properties = {'ignore_hosts': [fake_inst['host']]} if request_spec: fake_spec = objects.RequestSpec() if requested_destination: cell1 = objects.CellMapping(uuid=uuids.cell1, name='cell1') fake_spec.requested_destination = objects.Destination( host='dummy_host', node='dummy_node', cell=cell1) mock_get_by_instance_uuid.return_value = fake_spec else: mock_get_by_instance_uuid.side_effect = ( exception.RequestSpecNotFound(instance_uuid=fake_inst.id)) fake_spec = None scheduler_hint = {'filter_properties': filter_properties} mock_allow_cross_cell_resize = self.useFixture( fixtures.MockPatchObject( self.compute_api, '_allow_cross_cell_resize')).mock mock_allow_cross_cell_resize.return_value = allow_cross_cell_resize if flavor_id_passed: self.compute_api.resize(self.context, fake_inst, flavor_id='new-flavor-id', clean_shutdown=clean_shutdown, host_name=host_name) else: if request_spec: self.compute_api.resize(self.context, fake_inst, clean_shutdown=clean_shutdown, host_name=host_name) else: self.assertRaises(exception.RequestSpecNotFound, self.compute_api.resize, self.context, fake_inst, clean_shutdown=clean_shutdown, host_name=host_name) if request_spec: if allow_same_host: self.assertEqual([], fake_spec.ignore_hosts) else: self.assertEqual([fake_inst['host']], fake_spec.ignore_hosts) if host_name is None: self.assertIsNotNone(fake_spec.requested_destination) else: self.assertIn('host', fake_spec.requested_destination) self.assertEqual(host_name, fake_spec.requested_destination.host) self.assertIn('node', fake_spec.requested_destination) self.assertEqual('hypervisor_host', fake_spec.requested_destination.node) self.assertEqual( allow_cross_cell_resize, fake_spec.requested_destination.allow_cross_cell_move) mock_allow_resize_to_same_host.assert_called_once() if flavor_id_passed and not same_flavor: mock_get_numa.assert_called_once_with(new_flavor, mock.ANY) else: mock_get_numa.assert_not_called() if host_name: mock_get_all_by_host.assert_called_once_with( self.context, host_name, True) if flavor_id_passed: mock_get_flavor.assert_called_once_with('new-flavor-id', read_deleted='no') if not 
(flavor_id_passed and same_flavor): if flavor_id_passed: mock_upsize.assert_called_once_with( test.MatchType(objects.Flavor), test.MatchType(objects.Flavor)) image_meta = utils.get_image_from_system_metadata( fake_inst.system_metadata) if not same_flavor: mock_validate.assert_called_once_with( self.context, image_meta, new_flavor, root_bdm=None, validate_pci=True) # mock.ANY might be 'instances', 'cores', or 'ram' # depending on how the deltas dict is iterated in check_deltas mock_count.assert_called_once_with( self.context, mock.ANY, project_id, user_id=user_id) # The current and new flavor have the same cores/ram req_cores = current_flavor.vcpus req_ram = current_flavor.memory_mb values = {'cores': req_cores, 'ram': req_ram} mock_limit.assert_called_once_with( self.context, user_values=values, project_values=values, project_id=project_id, user_id=user_id) mock_inst_save.assert_called_once_with( expected_task_state=[None]) else: # This is a migration mock_validate.assert_not_called() mock_migration.assert_not_called() mock_get_by_instance_uuid.assert_called_once_with(self.context, fake_inst.uuid) if flavor_id_passed: mock_record.assert_called_once_with(self.context, fake_inst, 'resize') else: if request_spec: mock_record.assert_called_once_with( self.context, fake_inst, 'migrate') else: mock_record.assert_not_called() if request_spec: mock_resize.assert_called_once_with( self.context, fake_inst, scheduler_hint=scheduler_hint, flavor=test.MatchType(objects.Flavor), clean_shutdown=clean_shutdown, request_spec=fake_spec, do_cast=True) else: mock_resize.assert_not_called() def _test_migrate(self, *args, **kwargs): self._test_resize(*args, flavor_id_passed=False, **kwargs) def test_resize(self): self._test_resize() def test_resize_same_host_and_allowed(self): self._test_resize(same_host=True, allow_same_host=True) def test_resize_same_host_and_not_allowed(self): self._test_resize(same_host=True, allow_same_host=False) def test_resize_different_project_id(self): self._test_resize(project_id='different') def test_resize_forced_shutdown(self): self._test_resize(clean_shutdown=False) def test_resize_allow_cross_cell_resize_true(self): self._test_resize(allow_cross_cell_resize=True) @mock.patch('nova.compute.api.API.get_instance_host_status', new=mock.Mock(return_value=fields_obj.HostStatus.UP)) @mock.patch('nova.compute.flavors.get_flavor_by_flavor_id') @mock.patch('nova.objects.Quotas.count_as_dict') @mock.patch('nova.objects.Quotas.limit_check_project_and_user') def test_resize_quota_check(self, mock_check, mock_count, mock_get): self.flags(cores=1, group='quota') self.flags(ram=2048, group='quota') proj_count = {'instances': 1, 'cores': 1, 'ram': 1024} user_count = proj_count.copy() mock_count.return_value = {'project': proj_count, 'user': user_count} cur_flavor = objects.Flavor(id=1, name='foo', vcpus=1, memory_mb=512, root_gb=10, disabled=False, extra_specs={}) fake_inst = self._create_instance_obj() fake_inst.flavor = cur_flavor new_flavor = objects.Flavor(id=2, name='bar', vcpus=1, memory_mb=2048, root_gb=10, disabled=False, extra_specs={}) mock_get.return_value = new_flavor mock_check.side_effect = exception.OverQuota( overs=['ram'], quotas={'cores': 1, 'ram': 2048}, usages={'instances': 1, 'cores': 1, 'ram': 2048}, headroom={'ram': 2048}) self.assertRaises(exception.TooManyInstances, self.compute_api.resize, self.context, fake_inst, flavor_id='new') mock_check.assert_called_once_with( self.context, user_values={'cores': 1, 'ram': 2560}, project_values={'cores': 1, 'ram': 2560}, 
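        # FlavorNotFound is raised before any quota counting, action
        # recording or resize cast can happen.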
project_id=fake_inst.project_id, user_id=fake_inst.user_id) @mock.patch('nova.compute.api.API.get_instance_host_status', new=mock.Mock(return_value=fields_obj.HostStatus.UP)) @mock.patch('nova.compute.utils.is_volume_backed_instance', new=mock.Mock(return_value=False)) @mock.patch.object(flavors, 'get_flavor_by_flavor_id') def test_resize__with_accelerator(self, mock_get_flavor): """Ensure resizes are rejected if either flavor requests accelerator. """ fake_inst = self._create_instance_obj() new_flavor = self._create_flavor( id=200, flavorid='new-flavor-id', name='new_flavor', disabled=False, extra_specs={'accel:device_profile': 'dp'}) mock_get_flavor.return_value = new_flavor self.assertRaises( exception.ForbiddenWithAccelerators, self.compute_api.resize, self.context, fake_inst, flavor_id=new_flavor.flavorid) def test_migrate(self): self._test_migrate() def test_migrate_same_host_and_allowed(self): self._test_migrate(same_host=True, allow_same_host=True) def test_migrate_same_host_and_not_allowed(self): self._test_migrate(same_host=True, allow_same_host=False) def test_migrate_different_project_id(self): self._test_migrate(project_id='different') def test_migrate_request_spec_not_found(self): self._test_migrate(request_spec=False) def test_migrate_with_requested_destination(self): # RequestSpec has requested_destination self._test_migrate(requested_destination=True) def test_migrate_with_host_name(self): self._test_migrate(host_name='target_host') def test_migrate_with_host_name_allow_cross_cell_resize_true(self): self._test_migrate(host_name='target_host', allow_cross_cell_resize=True) @mock.patch('nova.compute.api.API.get_instance_host_status', new=mock.Mock(return_value=fields_obj.HostStatus.UP)) @mock.patch.object(objects.ComputeNodeList, 'get_all_by_host', side_effect=exception.ComputeHostNotFound( host='nonexistent_host')) def test_migrate_nonexistent_host(self, mock_get_all_by_host): fake_inst = self._create_instance_obj() self.assertRaises(exception.ComputeHostNotFound, self.compute_api.resize, self.context, fake_inst, host_name='nonexistent_host') @mock.patch('nova.compute.api.API.get_instance_host_status', new=mock.Mock(return_value=fields_obj.HostStatus.UP)) @mock.patch.object(objects.Instance, 'save') @mock.patch.object(compute_api.API, '_record_action_start') @mock.patch.object(quotas_obj.Quotas, 'limit_check_project_and_user') @mock.patch.object(quotas_obj.Quotas, 'count_as_dict') @mock.patch.object(flavors, 'get_flavor_by_flavor_id') def test_resize_invalid_flavor_fails(self, mock_get_flavor, mock_count, mock_limit, mock_record, mock_save): mock_resize = self.useFixture(fixtures.MockPatchObject( self.compute_api.compute_task_api, 'resize_instance')).mock fake_inst = self._create_instance_obj() exc = exception.FlavorNotFound(flavor_id='flavor-id') mock_get_flavor.side_effect = exc self.assertRaises(exception.FlavorNotFound, self.compute_api.resize, self.context, fake_inst, flavor_id='flavor-id') mock_get_flavor.assert_called_once_with('flavor-id', read_deleted='no') # Should never reach these. 
mock_count.assert_not_called() mock_limit.assert_not_called() mock_record.assert_not_called() mock_resize.assert_not_called() mock_save.assert_not_called() @mock.patch('nova.compute.api.API.get_instance_host_status', new=mock.Mock(return_value=fields_obj.HostStatus.UP)) @mock.patch.object(objects.Instance, 'save') @mock.patch.object(compute_api.API, '_record_action_start') @mock.patch.object(quotas_obj.Quotas, 'limit_check_project_and_user') @mock.patch.object(quotas_obj.Quotas, 'count_as_dict') @mock.patch.object(flavors, 'get_flavor_by_flavor_id') def test_resize_disabled_flavor_fails(self, mock_get_flavor, mock_count, mock_limit, mock_record, mock_save): mock_resize = self.useFixture(fixtures.MockPatchObject( self.compute_api.compute_task_api, 'resize_instance')).mock fake_inst = self._create_instance_obj() fake_flavor = self._create_flavor(id=200, flavorid='flavor-id', name='foo', disabled=True) mock_get_flavor.return_value = fake_flavor self.assertRaises(exception.FlavorNotFound, self.compute_api.resize, self.context, fake_inst, flavor_id='flavor-id') mock_get_flavor.assert_called_once_with('flavor-id', read_deleted='no') # Should never reach these. mock_count.assert_not_called() mock_limit.assert_not_called() mock_record.assert_not_called() mock_resize.assert_not_called() mock_save.assert_not_called() @mock.patch('nova.compute.api.API.get_instance_host_status', new=mock.Mock(return_value=fields_obj.HostStatus.UP)) @mock.patch.object(flavors, 'get_flavor_by_flavor_id') def test_resize_to_zero_disk_flavor_fails(self, get_flavor_by_flavor_id): fake_inst = self._create_instance_obj() fake_flavor = self._create_flavor(id=200, flavorid='flavor-id', name='foo', root_gb=0) get_flavor_by_flavor_id.return_value = fake_flavor with mock.patch.object(compute_utils, 'is_volume_backed_instance', return_value=False): self.assertRaises(exception.CannotResizeDisk, self.compute_api.resize, self.context, fake_inst, flavor_id='flavor-id') @mock.patch('nova.compute.api.API.get_instance_host_status', new=mock.Mock(return_value=fields_obj.HostStatus.UP)) @mock.patch('nova.compute.api.API._validate_flavor_image_nostatus') @mock.patch.object(objects.RequestSpec, 'get_by_instance_uuid') @mock.patch('nova.compute.api.API._record_action_start') @mock.patch('nova.conductor.conductor_api.ComputeTaskAPI.resize_instance') @mock.patch.object(flavors, 'get_flavor_by_flavor_id') def test_resize_to_zero_disk_flavor_volume_backed(self, get_flavor_by_flavor_id, resize_instance_mock, record_mock, get_by_inst, validate_mock): params = dict(image_ref='') fake_inst = self._create_instance_obj(params=params) fake_flavor = self._create_flavor(id=200, flavorid='flavor-id', name='foo', root_gb=0) get_flavor_by_flavor_id.return_value = fake_flavor @mock.patch.object(compute_utils, 'is_volume_backed_instance', return_value=True) @mock.patch.object(fake_inst, 'save') def do_test(mock_save, mock_volume): self.compute_api.resize(self.context, fake_inst, flavor_id='flavor-id') mock_volume.assert_called_once_with(self.context, fake_inst) do_test() @mock.patch('nova.compute.api.API.get_instance_host_status', new=mock.Mock(return_value=fields_obj.HostStatus.UP)) @mock.patch.object(objects.Instance, 'save') @mock.patch.object(compute_api.API, '_record_action_start') @mock.patch.object(quotas_obj.Quotas, 'limit_check_project_and_user') @mock.patch.object(quotas_obj.Quotas, 'count_as_dict') @mock.patch.object(compute_utils, 'upsize_quota_delta') @mock.patch.object(flavors, 'get_flavor_by_flavor_id') def test_resize_quota_exceeds_fails(self, 
mock_get_flavor, mock_upsize, mock_count, mock_limit, mock_record, mock_save): mock_resize = self.useFixture(fixtures.MockPatchObject( self.compute_api.compute_task_api, 'resize_instance')).mock fake_inst = self._create_instance_obj() fake_flavor = self._create_flavor(id=200, flavorid='flavor-id', name='foo', disabled=False) mock_get_flavor.return_value = fake_flavor deltas = dict(cores=0) mock_upsize.return_value = deltas quotas = {'cores': 0} overs = ['cores'] over_quota_args = dict(quotas=quotas, usages={'instances': 1, 'cores': 1, 'ram': 512}, overs=overs) proj_count = {'instances': 1, 'cores': fake_inst.flavor.vcpus, 'ram': fake_inst.flavor.memory_mb} user_count = proj_count.copy() mock_count.return_value = {'project': proj_count, 'user': user_count} req_cores = fake_inst.flavor.vcpus req_ram = fake_inst.flavor.memory_mb values = {'cores': req_cores, 'ram': req_ram} mock_limit.side_effect = exception.OverQuota(**over_quota_args) self.assertRaises(exception.TooManyInstances, self.compute_api.resize, self.context, fake_inst, flavor_id='flavor-id') mock_save.assert_not_called() mock_get_flavor.assert_called_once_with('flavor-id', read_deleted='no') mock_upsize.assert_called_once_with(test.MatchType(objects.Flavor), test.MatchType(objects.Flavor)) # mock.ANY might be 'instances', 'cores', or 'ram' # depending on how the deltas dict is iterated in check_deltas mock_count.assert_called_once_with(self.context, mock.ANY, fake_inst.project_id, user_id=fake_inst.user_id) mock_limit.assert_called_once_with(self.context, user_values=values, project_values=values, project_id=fake_inst.project_id, user_id=fake_inst.user_id) # Should never reach these. mock_record.assert_not_called() mock_resize.assert_not_called() @mock.patch('nova.compute.api.API.get_instance_host_status', new=mock.Mock(return_value=fields_obj.HostStatus.UP)) @mock.patch.object(flavors, 'get_flavor_by_flavor_id') @mock.patch.object(compute_utils, 'upsize_quota_delta') @mock.patch.object(quotas_obj.Quotas, 'count_as_dict') @mock.patch.object(quotas_obj.Quotas, 'limit_check_project_and_user') def test_resize_quota_exceeds_fails_instance(self, mock_check, mock_count, mock_upsize, mock_flavor): fake_inst = self._create_instance_obj() fake_flavor = self._create_flavor(id=200, flavorid='flavor-id', name='foo', disabled=False) mock_flavor.return_value = fake_flavor deltas = dict(cores=1, ram=1) mock_upsize.return_value = deltas quotas = {'instances': 1, 'cores': -1, 'ram': -1} overs = ['ram'] over_quota_args = dict(quotas=quotas, usages={'instances': 1, 'cores': 1, 'ram': 512}, overs=overs) mock_check.side_effect = exception.OverQuota(**over_quota_args) with mock.patch.object(fake_inst, 'save') as mock_save: self.assertRaises(exception.TooManyInstances, self.compute_api.resize, self.context, fake_inst, flavor_id='flavor-id') self.assertFalse(mock_save.called) @mock.patch('nova.compute.api.API.get_instance_host_status', new=mock.Mock(return_value=fields_obj.HostStatus.UP)) @mock.patch.object(flavors, 'get_flavor_by_flavor_id') @mock.patch.object(objects.Quotas, 'count_as_dict') @mock.patch.object(objects.Quotas, 'limit_check_project_and_user') def test_resize_instance_quota_exceeds_with_multiple_resources( self, mock_check, mock_count, mock_get_flavor): quotas = {'cores': 1, 'ram': 512} overs = ['cores', 'ram'] over_quota_args = dict(quotas=quotas, usages={'instances': 1, 'cores': 1, 'ram': 512}, overs=overs) proj_count = {'instances': 1, 'cores': 1, 'ram': 512} user_count = proj_count.copy() mock_count.return_value = {'project': 
proj_count, 'user': user_count} mock_check.side_effect = exception.OverQuota(**over_quota_args) mock_get_flavor.return_value = self._create_flavor(id=333, vcpus=3, memory_mb=1536) try: self.compute_api.resize(self.context, self._create_instance_obj(), 'fake_flavor_id') except exception.TooManyInstances as e: self.assertEqual('cores, ram', e.kwargs['overs']) self.assertEqual('2, 1024', e.kwargs['req']) self.assertEqual('1, 512', e.kwargs['used']) self.assertEqual('1, 512', e.kwargs['allowed']) mock_get_flavor.assert_called_once_with('fake_flavor_id', read_deleted="no") else: self.fail("Exception not raised") @mock.patch.object(compute_api.API, '_record_action_start') @mock.patch.object(objects.Instance, 'save') def test_pause(self, mock_save, mock_record): # Ensure instance can be paused. instance = self._create_instance_obj() self.assertEqual(instance.vm_state, vm_states.ACTIVE) self.assertIsNone(instance.task_state) rpcapi = self.compute_api.compute_rpcapi mock_pause = self.useFixture( fixtures.MockPatchObject(rpcapi, 'pause_instance')).mock with mock.patch.object(rpcapi, 'pause_instance') as mock_pause: self.compute_api.pause(self.context, instance) self.assertEqual(vm_states.ACTIVE, instance.vm_state) self.assertEqual(task_states.PAUSING, instance.task_state) mock_save.assert_called_once_with(expected_task_state=[None]) mock_record.assert_called_once_with(self.context, instance, instance_actions.PAUSE) mock_pause.assert_called_once_with(self.context, instance) def _test_pause_fails(self, vm_state): params = dict(vm_state=vm_state) instance = self._create_instance_obj(params=params) self.assertIsNone(instance.task_state) self.assertRaises(exception.InstanceInvalidState, self.compute_api.pause, self.context, instance) def test_pause_fails_invalid_states(self): invalid_vm_states = self._get_vm_states(set([vm_states.ACTIVE])) for state in invalid_vm_states: self._test_pause_fails(state) @mock.patch.object(compute_api.API, '_record_action_start') @mock.patch.object(objects.Instance, 'save') def test_unpause(self, mock_save, mock_record): # Ensure instance can be unpaused. 
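        # Start from PAUSED; a successful unpause request should leave
        # vm_state alone and move task_state to UNPAUSING.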
params = dict(vm_state=vm_states.PAUSED) instance = self._create_instance_obj(params=params) self.assertEqual(instance.vm_state, vm_states.PAUSED) self.assertIsNone(instance.task_state) rpcapi = self.compute_api.compute_rpcapi with mock.patch.object(rpcapi, 'unpause_instance') as mock_unpause: self.compute_api.unpause(self.context, instance) self.assertEqual(vm_states.PAUSED, instance.vm_state) self.assertEqual(task_states.UNPAUSING, instance.task_state) mock_save.assert_called_once_with(expected_task_state=[None]) mock_record.assert_called_once_with(self.context, instance, instance_actions.UNPAUSE) mock_unpause.assert_called_once_with(self.context, instance) def test_get_diagnostics_none_host(self): instance = self._create_instance_obj() instance.host = None self.assertRaises(exception.InstanceNotReady, self.compute_api.get_diagnostics, self.context, instance) def test_get_instance_diagnostics_none_host(self): instance = self._create_instance_obj() instance.host = None self.assertRaises(exception.InstanceNotReady, self.compute_api.get_instance_diagnostics, self.context, instance) @mock.patch.object(objects.ComputeNodeList, 'get_all_by_host') def test_live_migrate_active_vm_state(self, mock_nodelist): instance = self._create_instance_obj() self._live_migrate_instance(instance) @mock.patch.object(objects.ComputeNodeList, 'get_all_by_host') def test_live_migrate_paused_vm_state(self, mock_nodelist): paused_state = dict(vm_state=vm_states.PAUSED) instance = self._create_instance_obj(params=paused_state) self._live_migrate_instance(instance) @mock.patch.object(objects.ComputeNodeList, 'get_all_by_host') @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') @mock.patch.object(objects.RequestSpec, 'get_by_instance_uuid') @mock.patch.object(objects.InstanceAction, 'action_start') @mock.patch.object(objects.Instance, 'save') def test_live_migrate_messaging_timeout(self, _save, _action, get_spec, add_instance_fault_from_exc, mock_nodelist): instance = self._create_instance_obj() api = conductor.api.ComputeTaskAPI with mock.patch.object(api, 'live_migrate_instance', side_effect=oslo_exceptions.MessagingTimeout): self.assertRaises(oslo_exceptions.MessagingTimeout, self.compute_api.live_migrate, self.context, instance, host_name='fake_dest_host', block_migration=True, disk_over_commit=True) add_instance_fault_from_exc.assert_called_once_with( self.context, instance, mock.ANY) @mock.patch.object(objects.RequestSpec, 'get_by_instance_uuid') @mock.patch.object(objects.InstanceAction, 'action_start') @mock.patch.object(objects.ComputeNodeList, 'get_all_by_host', side_effect=exception.ComputeHostNotFound( host='fake_host')) def test_live_migrate_computehost_notfound(self, mock_nodelist, mock_action, mock_get_spec): instance = self._create_instance_obj() self.assertRaises(exception.ComputeHostNotFound, self.compute_api.live_migrate, self.context, instance, host_name='fake_host', block_migration='auto', disk_over_commit=False) self.assertIsNone(instance.task_state) @mock.patch.object(objects.RequestSpec, 'get_by_instance_uuid') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(objects.InstanceAction, 'action_start') def _live_migrate_instance(self, instance, _save, _action, get_spec): api = conductor.api.ComputeTaskAPI fake_spec = objects.RequestSpec() get_spec.return_value = fake_spec with mock.patch.object(api, 'live_migrate_instance') as task: self.compute_api.live_migrate(self.context, instance, block_migration=True, disk_over_commit=True, host_name='fake_dest_host') 
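        # The API should have set the instance into the MIGRATING task state
        # and handed the request spec through to the conductor task.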
self.assertEqual(task_states.MIGRATING, instance.task_state) task.assert_called_once_with(self.context, instance, 'fake_dest_host', block_migration=True, disk_over_commit=True, request_spec=fake_spec, async_=False) def _get_volumes_for_test_swap_volume(self): volumes = {} volumes[uuids.old_volume] = { 'id': uuids.old_volume, 'display_name': 'old_volume', 'attach_status': 'attached', 'size': 5, 'status': 'in-use', 'multiattach': False, 'attachments': {uuids.vol_instance: {'attachment_id': 'fakeid'}}} volumes[uuids.new_volume] = { 'id': uuids.new_volume, 'display_name': 'new_volume', 'attach_status': 'detached', 'size': 5, 'status': 'available', 'multiattach': False} return volumes def _get_instance_for_test_swap_volume(self): return fake_instance.fake_instance_obj(None, **{ 'vm_state': vm_states.ACTIVE, 'launched_at': timeutils.utcnow(), 'locked': False, 'availability_zone': 'fake_az', 'uuid': uuids.vol_instance, 'task_state': None}) def _test_swap_volume_for_precheck_with_exception( self, exc, instance_update=None, volume_update=None): volumes = self._get_volumes_for_test_swap_volume() instance = self._get_instance_for_test_swap_volume() if instance_update: instance.update(instance_update) if volume_update: volumes[volume_update['target']].update(volume_update['value']) self.assertRaises(exc, self.compute_api.swap_volume, self.context, instance, volumes[uuids.old_volume], volumes[uuids.new_volume]) self.assertEqual('in-use', volumes[uuids.old_volume]['status']) self.assertEqual('available', volumes[uuids.new_volume]['status']) def test_swap_volume_with_invalid_server_state(self): # Should fail if VM state is not valid self._test_swap_volume_for_precheck_with_exception( exception.InstanceInvalidState, instance_update={'vm_state': vm_states.BUILDING}) self._test_swap_volume_for_precheck_with_exception( exception.InstanceInvalidState, instance_update={'vm_state': vm_states.STOPPED}) self._test_swap_volume_for_precheck_with_exception( exception.InstanceInvalidState, instance_update={'vm_state': vm_states.SUSPENDED}) def test_swap_volume_with_another_server_volume(self): # Should fail if old volume's instance_uuid is not that of the instance self._test_swap_volume_for_precheck_with_exception( exception.InvalidVolume, volume_update={ 'target': uuids.old_volume, 'value': { 'attachments': { uuids.vol_instance_2: {'attachment_id': 'fakeid'}}}}) def test_swap_volume_with_new_volume_attached(self): # Should fail if new volume is attached self._test_swap_volume_for_precheck_with_exception( exception.InvalidVolume, volume_update={'target': uuids.new_volume, 'value': {'attach_status': 'attached'}}) def test_swap_volume_with_smaller_new_volume(self): # Should fail if new volume is smaller than the old volume self._test_swap_volume_for_precheck_with_exception( exception.InvalidVolume, volume_update={'target': uuids.new_volume, 'value': {'size': 4}}) def test_swap_volume_with_swap_volume_error(self): self._test_swap_volume(expected_exception=AttributeError) def test_swap_volume_volume_api_usage(self): self._test_swap_volume() def test_swap_volume_volume_api_usage_new_attach_flow(self): self._test_swap_volume(attachment_id=uuids.attachment_id) def test_swap_volume_with_swap_volume_error_new_attach_flow(self): self._test_swap_volume(expected_exception=AttributeError, attachment_id=uuids.attachment_id) def test_swap_volume_new_vol_already_attached_new_flow(self): self._test_swap_volume(attachment_id=uuids.attachment_id, volume_already_attached=True) def _test_swap_volume(self, expected_exception=None, 
attachment_id=None, volume_already_attached=False): volumes = self._get_volumes_for_test_swap_volume() instance = self._get_instance_for_test_swap_volume() def fake_vol_api_begin_detaching(context, volume_id): self.assertTrue(uuidutils.is_uuid_like(volume_id)) volumes[volume_id]['status'] = 'detaching' def fake_vol_api_roll_detaching(context, volume_id): self.assertTrue(uuidutils.is_uuid_like(volume_id)) if volumes[volume_id]['status'] == 'detaching': volumes[volume_id]['status'] = 'in-use' def fake_vol_api_reserve(context, volume_id): self.assertTrue(uuidutils.is_uuid_like(volume_id)) self.assertEqual('available', volumes[volume_id]['status']) volumes[volume_id]['status'] = 'attaching' def fake_vol_api_unreserve(context, volume_id): self.assertTrue(uuidutils.is_uuid_like(volume_id)) if volumes[volume_id]['status'] == 'attaching': volumes[volume_id]['status'] = 'available' def fake_vol_api_attachment_create(context, volume_id, instance_id): self.assertTrue(uuidutils.is_uuid_like(volume_id)) self.assertEqual('available', volumes[volume_id]['status']) volumes[volume_id]['status'] = 'reserved' return {'id': uuids.attachment_id} def fake_vol_api_attachment_delete(context, attachment_id): self.assertTrue(uuidutils.is_uuid_like(attachment_id)) if volumes[uuids.new_volume]['status'] == 'reserved': volumes[uuids.new_volume]['status'] = 'available' def fake_volume_is_attached(context, instance, volume_id): if volume_already_attached: raise exception.InvalidVolume(reason='Volume already attached') else: pass @mock.patch.object(compute_api.API, '_record_action_start') @mock.patch.object(self.compute_api.compute_rpcapi, 'swap_volume', return_value=True) @mock.patch.object(self.compute_api.volume_api, 'unreserve_volume', side_effect=fake_vol_api_unreserve) @mock.patch.object(self.compute_api.volume_api, 'attachment_delete', side_effect=fake_vol_api_attachment_delete) @mock.patch.object(self.compute_api.volume_api, 'reserve_volume', side_effect=fake_vol_api_reserve) @mock.patch.object(self.compute_api, '_check_volume_already_attached_to_instance', side_effect=fake_volume_is_attached) @mock.patch.object(self.compute_api.volume_api, 'attachment_create', side_effect=fake_vol_api_attachment_create) @mock.patch.object(self.compute_api.volume_api, 'roll_detaching', side_effect=fake_vol_api_roll_detaching) @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') @mock.patch.object(self.compute_api.volume_api, 'begin_detaching', side_effect=fake_vol_api_begin_detaching) def _do_test(mock_begin_detaching, mock_get_by_volume_and_instance, mock_roll_detaching, mock_attachment_create, mock_check_volume_attached, mock_reserve_volume, mock_attachment_delete, mock_unreserve_volume, mock_swap_volume, mock_record): bdm = objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict( {'no_device': False, 'volume_id': '1', 'boot_index': 0, 'connection_info': 'inf', 'device_name': '/dev/vda', 'source_type': 'volume', 'destination_type': 'volume', 'tag': None, 'attachment_id': attachment_id}, anon=True)) mock_get_by_volume_and_instance.return_value = bdm if expected_exception: mock_swap_volume.side_effect = AttributeError() self.assertRaises(expected_exception, self.compute_api.swap_volume, self.context, instance, volumes[uuids.old_volume], volumes[uuids.new_volume]) self.assertEqual('in-use', volumes[uuids.old_volume]['status']) self.assertEqual('available', volumes[uuids.new_volume]['status']) # Make assertions about what was called if there was or was not # a Cinder 3.44 style attachment 
provided. if attachment_id is None: # Old style attachment, so unreserve was called and # attachment_delete was not called. mock_unreserve_volume.assert_called_once_with( self.context, uuids.new_volume) mock_attachment_delete.assert_not_called() else: # New style attachment, so unreserve was not called and # attachment_delete was called. mock_unreserve_volume.assert_not_called() mock_attachment_delete.assert_called_once_with( self.context, attachment_id) # Assert the call to the rpcapi. mock_swap_volume.assert_called_once_with( self.context, instance=instance, old_volume_id=uuids.old_volume, new_volume_id=uuids.new_volume, new_attachment_id=attachment_id) mock_record.assert_called_once_with( self.context, instance, instance_actions.SWAP_VOLUME) elif volume_already_attached: self.assertRaises(exception.InvalidVolume, self.compute_api.swap_volume, self.context, instance, volumes[uuids.old_volume], volumes[uuids.new_volume]) self.assertEqual('in-use', volumes[uuids.old_volume]['status']) self.assertEqual('available', volumes[uuids.new_volume]['status']) mock_check_volume_attached.assert_called_once_with( self.context, instance, uuids.new_volume) mock_roll_detaching.assert_called_once_with(self.context, uuids.old_volume) else: self.compute_api.swap_volume(self.context, instance, volumes[uuids.old_volume], volumes[uuids.new_volume]) # Make assertions about what was called if there was or was not # a Cinder 3.44 style attachment provided. if attachment_id is None: # Old style attachment, so reserve was called and # attachment_create was not called. mock_reserve_volume.assert_called_once_with( self.context, uuids.new_volume) mock_attachment_create.assert_not_called() mock_check_volume_attached.assert_not_called() else: # New style attachment, so reserve was not called and # attachment_create was called. mock_reserve_volume.assert_not_called() mock_check_volume_attached.assert_called_once_with( self.context, instance, uuids.new_volume) mock_attachment_create.assert_called_once_with( self.context, uuids.new_volume, instance.uuid) # Assert the call to the rpcapi. mock_swap_volume.assert_called_once_with( self.context, instance=instance, old_volume_id=uuids.old_volume, new_volume_id=uuids.new_volume, new_attachment_id=attachment_id) mock_record.assert_called_once_with( self.context, instance, instance_actions.SWAP_VOLUME) _do_test() def test_count_attachments_for_swap_not_found_and_readonly(self): """Tests that attachment records that aren't found are considered read/write by default. Also tests that read-only attachments are not counted. """ ctxt = context.get_admin_context() volume = { 'attachments': { uuids.server1: { 'attachment_id': uuids.attachment1 }, uuids.server2: { 'attachment_id': uuids.attachment2 } } } def fake_attachment_get(_context, attachment_id): if attachment_id == uuids.attachment1: raise exception.VolumeAttachmentNotFound( attachment_id=attachment_id) return {'attach_mode': 'ro'} with mock.patch.object(self.compute_api.volume_api, 'attachment_get', side_effect=fake_attachment_get) as mock_get: self.assertEqual( 1, self.compute_api._count_attachments_for_swap(ctxt, volume)) mock_get.assert_has_calls([ mock.call(ctxt, uuids.attachment1), mock.call(ctxt, uuids.attachment2)], any_order=True) @mock.patch('nova.volume.cinder.API.attachment_get', new_callable=mock.NonCallableMock) # asserts not called def test_count_attachments_for_swap_no_query(self, mock_attachment_get): """Tests that if the volume has <2 attachments, we don't query the attachments for their attach_mode value. 
""" volume = {} self.assertEqual( 0, self.compute_api._count_attachments_for_swap( mock.sentinel.context, volume)) volume = { 'attachments': { uuids.server: { 'attachment_id': uuids.attach1 } } } self.assertEqual( 1, self.compute_api._count_attachments_for_swap( mock.sentinel.context, volume)) @mock.patch.object(compute_utils, 'is_volume_backed_instance') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(image_api.API, 'create') @mock.patch.object(utils, 'get_image_from_system_metadata') @mock.patch.object(compute_api.API, '_record_action_start') def _test_snapshot_and_backup(self, mock_record, mock_get_image, mock_create, mock_save, mock_is_volume, is_snapshot=True, with_base_ref=False, min_ram=None, min_disk=None, create_fails=False, instance_vm_state=vm_states.ACTIVE): params = dict(locked=True) instance = self._create_instance_obj(params=params) instance.vm_state = instance_vm_state # Test non-inheritable properties, 'user_id' should also not be # carried from sys_meta into image property...since it should be set # explicitly by _create_image() in compute api. fake_image_meta = { 'name': 'base-name', 'disk_format': 'fake', 'container_format': 'fake', 'properties': { 'user_id': 'meow', 'foo': 'bar', 'blah': 'bug?', 'cache_in_nova': 'dropped', 'bittorrent': 'dropped', 'img_signature_hash_method': 'dropped', 'img_signature': 'dropped', 'img_signature_key_type': 'dropped', 'img_signature_certificate_uuid': 'dropped' }, } image_type = is_snapshot and 'snapshot' or 'backup' sent_meta = { 'visibility': 'private', 'name': 'fake-name', 'disk_format': 'fake', 'container_format': 'fake', 'properties': { 'user_id': self.context.user_id, 'instance_uuid': instance.uuid, 'image_type': image_type, 'foo': 'bar', 'blah': 'bug?', 'cow': 'moo', 'cat': 'meow', }, } if is_snapshot: if min_ram is not None: fake_image_meta['min_ram'] = min_ram sent_meta['min_ram'] = min_ram if min_disk is not None: fake_image_meta['min_disk'] = min_disk sent_meta['min_disk'] = min_disk sent_meta.pop('disk_format', None) sent_meta.pop('container_format', None) else: sent_meta['properties']['backup_type'] = 'fake-backup-type' extra_props = dict(cow='moo', cat='meow') if not is_snapshot: mock_is_volume.return_value = False mock_get_image.return_value = fake_image_meta fake_image = dict(id='fake-image-id') if create_fails: mock_create.side_effect = test.TestingException() else: mock_create.return_value = fake_image def check_state(expected_task_state=None): expected_state = (is_snapshot and task_states.IMAGE_SNAPSHOT_PENDING or task_states.IMAGE_BACKUP) self.assertEqual(expected_state, instance.task_state) if not create_fails: mock_save.side_effect = check_state if is_snapshot: with mock.patch.object(self.compute_api.compute_rpcapi, 'snapshot_instance') as mock_snapshot: if create_fails: self.assertRaises(test.TestingException, self.compute_api.snapshot, self.context, instance, 'fake-name', extra_properties=extra_props) else: res = self.compute_api.snapshot( self.context, instance, 'fake-name', extra_properties=extra_props) mock_record.assert_called_once_with( self.context, instance, instance_actions.CREATE_IMAGE) mock_snapshot.assert_called_once_with( self.context, instance, fake_image['id']) else: with mock.patch.object(self.compute_api.compute_rpcapi, 'backup_instance') as mock_backup: if create_fails: self.assertRaises(test.TestingException, self.compute_api.backup, self.context, instance, 'fake-name', 'fake-backup-type', 'fake-rotation', extra_properties=extra_props) else: res = self.compute_api.backup( 
self.context, instance, 'fake-name', 'fake-backup-type', 'fake-rotation', extra_properties=extra_props) mock_record.assert_called_once_with( self.context, instance, instance_actions.BACKUP) mock_backup.assert_called_once_with( self.context, instance, fake_image['id'], 'fake-backup-type', 'fake-rotation') mock_create.assert_called_once_with(self.context, sent_meta) mock_get_image.assert_called_once_with(instance.system_metadata) if not is_snapshot: mock_is_volume.assert_called_once_with(self.context, instance) else: mock_is_volume.assert_not_called() if not create_fails: self.assertEqual(fake_image, res) mock_save.assert_called_once_with(expected_task_state=[None]) def test_snapshot(self): self._test_snapshot_and_backup() def test_snapshot_fails(self): self._test_snapshot_and_backup(create_fails=True) def test_snapshot_invalid_state(self): instance = self._create_instance_obj() instance.vm_state = vm_states.ACTIVE instance.task_state = task_states.IMAGE_SNAPSHOT self.assertRaises(exception.InstanceInvalidState, self.compute_api.snapshot, self.context, instance, 'fake-name') instance.vm_state = vm_states.ACTIVE instance.task_state = task_states.IMAGE_BACKUP self.assertRaises(exception.InstanceInvalidState, self.compute_api.snapshot, self.context, instance, 'fake-name') instance.vm_state = vm_states.BUILDING instance.task_state = None self.assertRaises(exception.InstanceInvalidState, self.compute_api.snapshot, self.context, instance, 'fake-name') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(compute_utils, 'create_image') @mock.patch.object(compute_rpcapi.ComputeAPI, 'snapshot_instance') def test_vm_deleting_while_creating_snapshot(self, snapshot_instance, _create_image, save): instance = self._create_instance_obj() save.side_effect = exception.UnexpectedDeletingTaskStateError( "Exception") _create_image.return_value = dict(id='fake-image-id') with mock.patch.object(self.compute_api.image_api, 'delete') as image_delete: self.assertRaises(exception.InstanceInvalidState, self.compute_api.snapshot, self.context, instance, 'fake-image') image_delete.assert_called_once_with(self.context, 'fake-image-id') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(compute_utils, 'create_image') @mock.patch.object(compute_rpcapi.ComputeAPI, 'snapshot_instance') def test_vm_deleted_while_creating_snapshot(self, snapshot_instance, _create_image, save): instance = self._create_instance_obj() save.side_effect = exception.InstanceNotFound( "Exception") _create_image.return_value = dict(id='fake-image-id') with mock.patch.object(self.compute_api.image_api, 'delete') as image_delete: self.assertRaises(exception.InstanceInvalidState, self.compute_api.snapshot, self.context, instance, 'fake-image') image_delete.assert_called_once_with(self.context, 'fake-image-id') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(compute_utils, 'create_image') @mock.patch.object(compute_rpcapi.ComputeAPI, 'snapshot_instance') def test_vm_deleted_while_snapshot_and_snapshot_delete_failed(self, snapshot_instance, _create_image, save): instance = self._create_instance_obj() save.side_effect = exception.InstanceNotFound(instance_id='fake') _create_image.return_value = dict(id='fake-image-id') with mock.patch.object(self.compute_api.image_api, 'delete') as image_delete: image_delete.side_effect = test.TestingException() self.assertRaises(exception.InstanceInvalidState, self.compute_api.snapshot, self.context, instance, 'fake-image') image_delete.assert_called_once_with(self.context, 
'fake-image-id') def test_snapshot_with_base_image_ref(self): self._test_snapshot_and_backup(with_base_ref=True) def test_snapshot_min_ram(self): self._test_snapshot_and_backup(min_ram=42) def test_snapshot_min_disk(self): self._test_snapshot_and_backup(min_disk=42) def test_backup(self): for state in [vm_states.ACTIVE, vm_states.STOPPED, vm_states.PAUSED, vm_states.SUSPENDED]: self._test_snapshot_and_backup(is_snapshot=False, instance_vm_state=state) def test_backup_fails(self): self._test_snapshot_and_backup(is_snapshot=False, create_fails=True) def test_backup_invalid_state(self): instance = self._create_instance_obj() instance.vm_state = vm_states.ACTIVE instance.task_state = task_states.IMAGE_SNAPSHOT self.assertRaises(exception.InstanceInvalidState, self.compute_api.backup, self.context, instance, 'fake-name', 'fake', 'fake') instance.vm_state = vm_states.ACTIVE instance.task_state = task_states.IMAGE_BACKUP self.assertRaises(exception.InstanceInvalidState, self.compute_api.backup, self.context, instance, 'fake-name', 'fake', 'fake') instance.vm_state = vm_states.BUILDING instance.task_state = None self.assertRaises(exception.InstanceInvalidState, self.compute_api.backup, self.context, instance, 'fake-name', 'fake', 'fake') def test_backup_with_base_image_ref(self): self._test_snapshot_and_backup(is_snapshot=False, with_base_ref=True) def test_backup_volume_backed_instance(self): instance = self._create_instance_obj() with mock.patch.object(compute_utils, 'is_volume_backed_instance', return_value=True) as mock_is_volume_backed: self.assertRaises(exception.InvalidRequest, self.compute_api.backup, self.context, instance, 'fake-name', 'weekly', 3, extra_properties={}) mock_is_volume_backed.assert_called_once_with(self.context, instance) def _test_snapshot_volume_backed(self, quiesce_required=False, quiesce_fails=False, quiesce_unsupported=False, vm_state=vm_states.ACTIVE, snapshot_fails=False, limits=None): fake_sys_meta = {'image_min_ram': '11', 'image_min_disk': '22', 'image_container_format': 'ami', 'image_disk_format': 'ami', 'image_ram_disk': 'fake_ram_disk_id', 'image_bdm_v2': 'True', 'image_block_device_mapping': '[]', 'image_mappings': '[]', 'image_cache_in_nova': 'True'} if quiesce_required: fake_sys_meta['image_os_require_quiesce'] = 'yes' params = dict(locked=True, vm_state=vm_state, system_metadata=fake_sys_meta) instance = self._create_instance_obj(params=params) instance['root_device_name'] = 'vda' instance_bdms = [] expect_meta = { 'name': 'test-snapshot', 'properties': {'root_device_name': 'vda', 'ram_disk': 'fake_ram_disk_id'}, 'size': 0, 'min_disk': '22', 'visibility': 'private', 'min_ram': '11', } if quiesce_required: expect_meta['properties']['os_require_quiesce'] = 'yes' quiesced = [False, False] quiesce_expected = not (quiesce_unsupported or quiesce_fails) \ and vm_state == vm_states.ACTIVE @classmethod def fake_bdm_list_get_by_instance_uuid(cls, context, instance_uuid): return obj_base.obj_make_list(context, cls(), objects.BlockDeviceMapping, instance_bdms) def fake_image_create(_self, context, image_meta, data=None): self.assertThat(image_meta, matchers.DictMatches(expect_meta)) def fake_volume_create_snapshot(self, context, volume_id, name, description): if snapshot_fails: raise exception.OverQuota(overs="snapshots") return {'id': '%s-snapshot' % volume_id} def fake_quiesce_instance(context, instance): if quiesce_unsupported: raise exception.InstanceQuiesceNotSupported( instance_id=instance['uuid'], reason='unsupported') if quiesce_fails: raise 
oslo_exceptions.MessagingTimeout('quiece timeout') quiesced[0] = True def fake_unquiesce_instance(context, instance, mapping=None): quiesced[1] = True def fake_get_absolute_limits(context): if limits is not None: return limits return {"totalSnapshotsUsed": 0, "maxTotalSnapshots": 10} self.stub_out('nova.objects.BlockDeviceMappingList' '.get_by_instance_uuid', fake_bdm_list_get_by_instance_uuid) self.stub_out('nova.image.glance.API.create', fake_image_create) self.stub_out('nova.volume.cinder.API.get', lambda self, context, volume_id: {'id': volume_id, 'display_description': ''}) self.stub_out('nova.volume.cinder.API.create_snapshot_force', fake_volume_create_snapshot) self.useFixture(fixtures.MockPatchObject( self.compute_api.compute_rpcapi, 'quiesce_instance', side_effect=fake_quiesce_instance)) self.useFixture(fixtures.MockPatchObject( self.compute_api.compute_rpcapi, 'unquiesce_instance', side_effect=fake_unquiesce_instance)) fake_image.stub_out_image_service(self) with test.nested( mock.patch.object(compute_api.API, '_record_action_start'), mock.patch.object(compute_utils, 'EventReporter')) as ( mock_record, mock_event): # No block devices defined self.compute_api.snapshot_volume_backed( self.context, instance, 'test-snapshot') mock_record.assert_called_once_with(self.context, instance, instance_actions.CREATE_IMAGE) mock_event.assert_called_once_with(self.context, 'api_snapshot_instance', CONF.host, instance.uuid, graceful_exit=False) bdm = fake_block_device.FakeDbBlockDeviceDict( {'no_device': False, 'volume_id': '1', 'boot_index': 0, 'connection_info': 'inf', 'device_name': '/dev/vda', 'source_type': 'volume', 'destination_type': 'volume', 'tag': None}) instance_bdms.append(bdm) expect_meta['properties']['bdm_v2'] = True expect_meta['properties']['block_device_mapping'] = [] expect_meta['properties']['block_device_mapping'].append( {'guest_format': None, 'boot_index': 0, 'no_device': None, 'image_id': None, 'volume_id': None, 'disk_bus': None, 'volume_size': None, 'source_type': 'snapshot', 'device_type': None, 'snapshot_id': '1-snapshot', 'device_name': '/dev/vda', 'destination_type': 'volume', 'delete_on_termination': False, 'tag': None, 'volume_type': None}) limits_patcher = mock.patch.object( self.compute_api.volume_api, 'get_absolute_limits', side_effect=fake_get_absolute_limits) limits_patcher.start() self.addCleanup(limits_patcher.stop) with test.nested( mock.patch.object(compute_api.API, '_record_action_start'), mock.patch.object(compute_utils, 'EventReporter')) as ( mock_record, mock_event): # All the db_only fields and the volume ones are removed if snapshot_fails: self.assertRaises(exception.OverQuota, self.compute_api.snapshot_volume_backed, self.context, instance, "test-snapshot") else: self.compute_api.snapshot_volume_backed( self.context, instance, 'test-snapshot') self.assertEqual(quiesce_expected, quiesced[0]) self.assertEqual(quiesce_expected, quiesced[1]) mock_record.assert_called_once_with(self.context, instance, instance_actions.CREATE_IMAGE) mock_event.assert_called_once_with(self.context, 'api_snapshot_instance', CONF.host, instance.uuid, graceful_exit=False) instance.system_metadata['image_mappings'] = jsonutils.dumps( [{'virtual': 'ami', 'device': 'vda'}, {'device': 'vda', 'virtual': 'ephemeral0'}, {'device': 'vdb', 'virtual': 'swap'}, {'device': 'vdc', 'virtual': 'ephemeral1'}])[:255] instance.system_metadata['image_block_device_mapping'] = ( jsonutils.dumps( [{'source_type': 'snapshot', 'destination_type': 'volume', 'guest_format': None, 'device_type': 
'disk', 'boot_index': 1, 'disk_bus': 'ide', 'device_name': '/dev/vdf', 'delete_on_termination': True, 'snapshot_id': 'snapshot-2', 'volume_id': None, 'volume_size': 100, 'image_id': None, 'no_device': None, 'volume_type': None}])[:255]) bdm = fake_block_device.FakeDbBlockDeviceDict( {'no_device': False, 'volume_id': None, 'boot_index': -1, 'connection_info': 'inf', 'device_name': '/dev/vdh', 'source_type': 'blank', 'destination_type': 'local', 'guest_format': 'swap', 'delete_on_termination': True, 'tag': None, 'volume_type': None}) instance_bdms.append(bdm) # The non-volume image mapping will go at the front of the list # because the volume BDMs are processed separately. expect_meta['properties']['block_device_mapping'].insert(0, {'guest_format': 'swap', 'boot_index': -1, 'no_device': False, 'image_id': None, 'volume_id': None, 'disk_bus': None, 'volume_size': None, 'source_type': 'blank', 'device_type': None, 'snapshot_id': None, 'device_name': '/dev/vdh', 'destination_type': 'local', 'delete_on_termination': True, 'tag': None, 'volume_type': None}) quiesced = [False, False] with test.nested( mock.patch.object(compute_api.API, '_record_action_start'), mock.patch.object(compute_utils, 'EventReporter')) as ( mock_record, mock_event): # Check that the mappings from the image properties are not # included if snapshot_fails: self.assertRaises(exception.OverQuota, self.compute_api.snapshot_volume_backed, self.context, instance, "test-snapshot") else: self.compute_api.snapshot_volume_backed( self.context, instance, 'test-snapshot') self.assertEqual(quiesce_expected, quiesced[0]) self.assertEqual(quiesce_expected, quiesced[1]) mock_record.assert_called_once_with(self.context, instance, instance_actions.CREATE_IMAGE) mock_event.assert_called_once_with(self.context, 'api_snapshot_instance', CONF.host, instance.uuid, graceful_exit=False) def test_snapshot_volume_backed(self): self._test_snapshot_volume_backed(quiesce_required=False, quiesce_unsupported=False) def test_snapshot_volume_backed_with_quiesce_unsupported(self): self._test_snapshot_volume_backed(quiesce_required=True, quiesce_unsupported=False) def test_snaphost_volume_backed_with_quiesce_failure(self): self.assertRaises(oslo_exceptions.MessagingTimeout, self._test_snapshot_volume_backed, quiesce_required=True, quiesce_fails=True) def test_snapshot_volume_backed_with_quiesce_create_snap_fails(self): self._test_snapshot_volume_backed(quiesce_required=True, snapshot_fails=True) def test_snapshot_volume_backed_unlimited_quota(self): """Tests that there is unlimited quota on volume snapshots so we don't perform a quota check. """ limits = {'maxTotalSnapshots': -1, 'totalSnapshotsUsed': 0} self._test_snapshot_volume_backed(limits=limits) def test_snapshot_volume_backed_over_quota_before_snapshot(self): """Tests that the up-front check on quota fails before actually attempting to snapshot any volumes. 
""" limits = {'maxTotalSnapshots': 1, 'totalSnapshotsUsed': 1} self.assertRaises(exception.OverQuota, self._test_snapshot_volume_backed, limits=limits) def test_snapshot_volume_backed_with_quiesce_skipped(self): self._test_snapshot_volume_backed(quiesce_required=False, quiesce_unsupported=True) def test_snapshot_volume_backed_with_quiesce_exception(self): self.assertRaises(exception.NovaException, self._test_snapshot_volume_backed, quiesce_required=True, quiesce_unsupported=True) def test_snapshot_volume_backed_with_quiesce_stopped(self): self._test_snapshot_volume_backed(quiesce_required=True, quiesce_unsupported=True, vm_state=vm_states.STOPPED) def test_snapshot_volume_backed_with_quiesce_suspended(self): self._test_snapshot_volume_backed(quiesce_required=True, quiesce_unsupported=True, vm_state=vm_states.SUSPENDED) def test_snapshot_volume_backed_with_suspended(self): self._test_snapshot_volume_backed(quiesce_required=False, quiesce_unsupported=True, vm_state=vm_states.SUSPENDED) def test_snapshot_volume_backed_with_pause(self): self._test_snapshot_volume_backed(quiesce_required=False, quiesce_unsupported=True, vm_state=vm_states.PAUSED) @mock.patch.object(context, 'set_target_cell') @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume') def test_get_bdm_by_volume_id(self, mock_get_by_volume, mock_target_cell): fake_cells = [mock.sentinel.cell0, mock.sentinel.cell1] mock_get_by_volume.side_effect = [ exception.VolumeBDMNotFound(volume_id=mock.sentinel.volume_id), mock.sentinel.bdm] with mock.patch.object(compute_api, 'CELLS', fake_cells): bdm = self.compute_api._get_bdm_by_volume_id( self.context, mock.sentinel.volume_id, mock.sentinel.expected_attrs) self.assertEqual(mock.sentinel.bdm, bdm) mock_target_cell.assert_has_calls([ mock.call(self.context, cell) for cell in fake_cells]) mock_get_by_volume.assert_has_calls( [mock.call(self.context, mock.sentinel.volume_id, expected_attrs=mock.sentinel.expected_attrs)] * 2) @mock.patch.object(context, 'set_target_cell') @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume') def test_get_missing_bdm_by_volume_id(self, mock_get_by_volume, mock_target_cell): fake_cells = [mock.sentinel.cell0, mock.sentinel.cell1] mock_get_by_volume.side_effect = exception.VolumeBDMNotFound( volume_id=mock.sentinel.volume_id) with mock.patch.object(compute_api, 'CELLS', fake_cells): self.assertRaises( exception.VolumeBDMNotFound, self.compute_api._get_bdm_by_volume_id, self.context, mock.sentinel.volume_id, mock.sentinel.expected_attrs) @mock.patch.object(compute_api.API, '_get_bdm_by_volume_id') def test_volume_snapshot_create(self, mock_get_bdm): volume_id = '1' create_info = {'id': 'eyedee'} fake_bdm = fake_block_device.FakeDbBlockDeviceDict({ 'id': 123, 'device_name': '/dev/sda2', 'source_type': 'volume', 'destination_type': 'volume', 'connection_info': "{'fake': 'connection_info'}", 'volume_id': 1, 'boot_index': -1}) fake_bdm['instance'] = fake_instance.fake_db_instance( launched_at=timeutils.utcnow(), vm_state=vm_states.ACTIVE) fake_bdm['instance_uuid'] = fake_bdm['instance']['uuid'] fake_bdm = objects.BlockDeviceMapping._from_db_object( self.context, objects.BlockDeviceMapping(), fake_bdm, expected_attrs=['instance']) mock_get_bdm.return_value = fake_bdm with mock.patch.object(self.compute_api.compute_rpcapi, 'volume_snapshot_create') as mock_snapshot: snapshot = self.compute_api.volume_snapshot_create(self.context, volume_id, create_info) expected_snapshot = { 'snapshot': { 'id': create_info['id'], 'volumeId': volume_id, }, } 
self.assertEqual(snapshot, expected_snapshot) mock_get_bdm.assert_called_once_with( self.context, volume_id, expected_attrs=['instance']) mock_snapshot.assert_called_once_with( self.context, fake_bdm['instance'], volume_id, create_info) @mock.patch.object( objects.BlockDeviceMapping, 'get_by_volume', return_value=objects.BlockDeviceMapping( instance=objects.Instance( launched_at=timeutils.utcnow(), uuid=uuids.instance_uuid, vm_state=vm_states.ACTIVE, task_state=task_states.SHELVING, host='fake_host'))) def test_volume_snapshot_create_shelving(self, bdm_get_by_volume): """Tests a negative scenario where the instance task_state is not accepted for creating a guest-assisted volume snapshot. """ self.assertRaises(exception.InstanceInvalidState, self.compute_api.volume_snapshot_create, self.context, mock.sentinel.volume_id, mock.sentinel.create_info) @mock.patch.object( objects.BlockDeviceMapping, 'get_by_volume', return_value=objects.BlockDeviceMapping( instance=objects.Instance( launched_at=timeutils.utcnow(), uuid=uuids.instance_uuid, vm_state=vm_states.SHELVED_OFFLOADED, task_state=None, host=None))) def test_volume_snapshot_create_shelved_offloaded(self, bdm_get_by_volume): """Tests a negative scenario where the instance is shelved offloaded so we don't have a host to cast to for the guest-assisted snapshot. """ self.assertRaises(exception.InstanceNotReady, self.compute_api.volume_snapshot_create, self.context, mock.sentinel.volume_id, mock.sentinel.create_info) @mock.patch.object(compute_api.API, '_get_bdm_by_volume_id') def test_volume_snapshot_delete(self, mock_get_bdm): volume_id = '1' snapshot_id = '2' fake_bdm = fake_block_device.FakeDbBlockDeviceDict({ 'id': 123, 'device_name': '/dev/sda2', 'source_type': 'volume', 'destination_type': 'volume', 'connection_info': "{'fake': 'connection_info'}", 'volume_id': 1, 'boot_index': -1}) fake_bdm['instance'] = fake_instance.fake_db_instance( launched_at=timeutils.utcnow(), vm_state=vm_states.STOPPED) fake_bdm['instance_uuid'] = fake_bdm['instance']['uuid'] fake_bdm = objects.BlockDeviceMapping._from_db_object( self.context, objects.BlockDeviceMapping(), fake_bdm, expected_attrs=['instance']) mock_get_bdm.return_value = fake_bdm with mock.patch.object(self.compute_api.compute_rpcapi, 'volume_snapshot_delete') as mock_snapshot: self.compute_api.volume_snapshot_delete(self.context, volume_id, snapshot_id, {}) mock_get_bdm.assert_called_once_with(self.context, volume_id, expected_attrs=['instance']) mock_snapshot.assert_called_once_with( self.context, fake_bdm['instance'], volume_id, snapshot_id, {}) @mock.patch.object( objects.BlockDeviceMapping, 'get_by_volume', return_value=objects.BlockDeviceMapping( instance=objects.Instance( launched_at=timeutils.utcnow(), uuid=uuids.instance_uuid, vm_state=vm_states.ACTIVE, task_state=task_states.SHELVING, host='fake_host'))) def test_volume_snapshot_delete_shelving(self, bdm_get_by_volume): """Tests a negative scenario where the instance is shelving and the task_state is set so we can't perform the guest-assisted snapshot. 
""" self.assertRaises(exception.InstanceInvalidState, self.compute_api.volume_snapshot_delete, self.context, mock.sentinel.volume_id, mock.sentinel.snapshot_id, mock.sentinel.delete_info) @mock.patch.object( objects.BlockDeviceMapping, 'get_by_volume', return_value=objects.BlockDeviceMapping( instance=objects.Instance( launched_at=timeutils.utcnow(), uuid=uuids.instance_uuid, vm_state=vm_states.SHELVED_OFFLOADED, task_state=None, host=None))) def test_volume_snapshot_delete_shelved_offloaded(self, bdm_get_by_volume): """Tests a negative scenario where the instance is shelved offloaded so there is no host to cast to for the guest-assisted snapshot delete. """ self.assertRaises(exception.InstanceNotReady, self.compute_api.volume_snapshot_delete, self.context, mock.sentinel.volume_id, mock.sentinel.snapshot_id, mock.sentinel.delete_info) def _test_boot_volume_bootable(self, is_bootable=False): def get_vol_data(*args, **kwargs): return {'bootable': is_bootable} block_device_mapping = [{ 'id': 1, 'device_name': 'vda', 'no_device': None, 'virtual_name': None, 'snapshot_id': None, 'volume_id': '1', 'delete_on_termination': False, }] expected_meta = {'min_disk': 0, 'min_ram': 0, 'properties': {}, 'size': 0, 'status': 'active'} with mock.patch.object(self.compute_api.volume_api, 'get', side_effect=get_vol_data): if not is_bootable: self.assertRaises(exception.InvalidBDMVolumeNotBootable, utils.get_bdm_image_metadata, self.context, self.compute_api.image_api, self.compute_api.volume_api, block_device_mapping) else: meta = utils.get_bdm_image_metadata( self.context, self.compute_api.image_api, self.compute_api.volume_api, block_device_mapping) self.assertEqual(expected_meta, meta) def test_boot_volume_non_bootable(self): self._test_boot_volume_bootable(False) def test_boot_volume_bootable(self): self._test_boot_volume_bootable(True) def test_boot_volume_basic_property(self): block_device_mapping = [{ 'id': 1, 'device_name': 'vda', 'no_device': None, 'virtual_name': None, 'snapshot_id': None, 'volume_id': '1', 'delete_on_termination': False, }] fake_volume = {"volume_image_metadata": {"min_ram": 256, "min_disk": 128, "foo": "bar"}} with mock.patch.object(self.compute_api.volume_api, 'get', return_value=fake_volume): meta = utils.get_bdm_image_metadata( self.context, self.compute_api.image_api, self.compute_api.volume_api, block_device_mapping) self.assertEqual(256, meta['min_ram']) self.assertEqual(128, meta['min_disk']) self.assertEqual('active', meta['status']) self.assertEqual('bar', meta['properties']['foo']) def test_boot_volume_snapshot_basic_property(self): block_device_mapping = [{ 'id': 1, 'device_name': 'vda', 'no_device': None, 'virtual_name': None, 'snapshot_id': '2', 'volume_id': None, 'delete_on_termination': False, }] fake_volume = {"volume_image_metadata": {"min_ram": 256, "min_disk": 128, "foo": "bar"}} fake_snapshot = {"volume_id": "1"} with test.nested( mock.patch.object(self.compute_api.volume_api, 'get', return_value=fake_volume), mock.patch.object(self.compute_api.volume_api, 'get_snapshot', return_value=fake_snapshot)) as ( volume_get, volume_get_snapshot): meta = utils.get_bdm_image_metadata( self.context, self.compute_api.image_api, self.compute_api.volume_api, block_device_mapping) self.assertEqual(256, meta['min_ram']) self.assertEqual(128, meta['min_disk']) self.assertEqual('active', meta['status']) self.assertEqual('bar', meta['properties']['foo']) volume_get_snapshot.assert_called_once_with(self.context, block_device_mapping[0]['snapshot_id']) 
volume_get.assert_called_once_with(self.context, fake_snapshot['volume_id']) def _create_instance_with_disabled_disk_config(self, object=False): sys_meta = {"image_auto_disk_config": "Disabled"} params = {"system_metadata": sys_meta} instance = self._create_instance_obj(params=params) if object: return instance return obj_base.obj_to_primitive(instance) def _setup_fake_image_with_disabled_disk_config(self): self.fake_image = { 'id': 1, 'name': 'fake_name', 'status': 'active', 'properties': {"auto_disk_config": "Disabled"}, } fake_image.stub_out_image_service(self) self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', lambda obj, context, image_id, **kwargs: self.fake_image) return self.fake_image['id'] def _setup_fake_image_with_invalid_arch(self): self.fake_image = { 'id': 2, 'name': 'fake_name', 'status': 'active', 'properties': {"hw_architecture": "arm64"}, } fake_image.stub_out_image_service(self) self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', lambda obj, context, image_id, **kwargs: self.fake_image) return self.fake_image['id'] @mock.patch('nova.compute.api.API.get_instance_host_status', new=mock.Mock(return_value=fields_obj.HostStatus.UP)) def test_resize_with_disabled_auto_disk_config_fails(self): fake_inst = self._create_instance_with_disabled_disk_config( object=True) self.assertRaises(exception.AutoDiskConfigDisabledByImage, self.compute_api.resize, self.context, fake_inst, auto_disk_config=True) def test_create_with_disabled_auto_disk_config_fails(self): image_id = self._setup_fake_image_with_disabled_disk_config() self.assertRaises(exception.AutoDiskConfigDisabledByImage, self.compute_api.create, self.context, "fake_flavor", image_id, auto_disk_config=True) def test_rebuild_with_disabled_auto_disk_config_fails(self): fake_inst = self._create_instance_with_disabled_disk_config( object=True) image_id = self._setup_fake_image_with_disabled_disk_config() self.assertRaises(exception.AutoDiskConfigDisabledByImage, self.compute_api.rebuild, self.context, fake_inst, image_id, "new password", auto_disk_config=True) def test_rebuild_with_invalid_image_arch(self): instance = fake_instance.fake_instance_obj( self.context, vm_state=vm_states.ACTIVE, cell_name='fake-cell', launched_at=timeutils.utcnow(), system_metadata={}, image_ref='foo', expected_attrs=['system_metadata']) image_id = self._setup_fake_image_with_invalid_arch() self.assertRaises(exception.InvalidArchitectureName, self.compute_api.rebuild, self.context, instance, image_id, "new password") self.assertIsNone(instance.task_state) @mock.patch.object(objects.RequestSpec, 'get_by_instance_uuid') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(objects.Instance, 'get_flavor') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(compute_api.API, '_get_image') @mock.patch.object(compute_api.API, '_check_image_arch') @mock.patch.object(compute_api.API, '_check_auto_disk_config') @mock.patch.object(compute_api.API, '_checks_for_create_and_rebuild') @mock.patch.object(compute_api.API, '_record_action_start') def test_rebuild_with_invalid_volume(self, _record_action_start, _checks_for_create_and_rebuild, _check_auto_disk_config, _check_image_arch, mock_get_image, mock_get_bdms, get_flavor, instance_save, req_spec_get_by_inst_uuid): """Test a negative scenario where the instance has an invalid volume. 
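Here the volume attached to the instance is in 'retyping' status, so the API raises InvalidVolume and leaves the instance task_state unset.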
""" instance = fake_instance.fake_instance_obj( self.context, vm_state=vm_states.ACTIVE, cell_name='fake-cell', launched_at=timeutils.utcnow(), system_metadata={}, image_ref='foo', expected_attrs=['system_metadata']) bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping( boot_index=None, image_id=None, source_type='volume', destination_type='volume', volume_type=None, snapshot_id=None, volume_id=uuids.volume_id, volume_size=None)]) mock_get_bdms.return_value = bdms get_flavor.return_value = test_flavor.fake_flavor flavor = instance.get_flavor() image_href = 'foo' image = { "min_ram": 10, "min_disk": 1, "properties": { 'architecture': fields_obj.Architecture.X86_64}} mock_get_image.return_value = (None, image) fake_spec = objects.RequestSpec() req_spec_get_by_inst_uuid.return_value = fake_spec fake_volume = {'id': uuids.volume_id, 'status': 'retyping'} with mock.patch.object(self.compute_api.volume_api, 'get', return_value=fake_volume) as mock_get_volume: self.assertRaises(exception.InvalidVolume, self.compute_api.rebuild, self.context, instance, image_href, "new password") self.assertIsNone(instance.task_state) mock_get_bdms.assert_called_once_with(self.context, instance.uuid) mock_get_volume.assert_called_once_with(self.context, uuids.volume_id) _check_auto_disk_config.assert_called_once_with( image=image, auto_disk_config=None) _check_image_arch.assert_called_once_with(image=image) _checks_for_create_and_rebuild.assert_called_once_with( self.context, None, image, flavor, {}, [], None) @mock.patch.object(objects.RequestSpec, 'get_by_instance_uuid') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(objects.Instance, 'get_flavor') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(compute_api.API, '_get_image') @mock.patch.object(compute_api.API, '_check_auto_disk_config') @mock.patch.object(compute_api.API, '_checks_for_create_and_rebuild') @mock.patch.object(compute_api.API, '_record_action_start') def test_rebuild(self, _record_action_start, _checks_for_create_and_rebuild, _check_auto_disk_config, _get_image, bdm_get_by_instance_uuid, get_flavor, instance_save, req_spec_get_by_inst_uuid): orig_system_metadata = {} instance = fake_instance.fake_instance_obj(self.context, vm_state=vm_states.ACTIVE, cell_name='fake-cell', launched_at=timeutils.utcnow(), system_metadata=orig_system_metadata, image_ref='foo', expected_attrs=['system_metadata']) get_flavor.return_value = test_flavor.fake_flavor flavor = instance.get_flavor() image_href = 'foo' image = { "min_ram": 10, "min_disk": 1, "properties": { 'architecture': fields_obj.Architecture.X86_64}} admin_pass = '' files_to_inject = [] bdms = objects.BlockDeviceMappingList() _get_image.return_value = (None, image) bdm_get_by_instance_uuid.return_value = bdms fake_spec = objects.RequestSpec() req_spec_get_by_inst_uuid.return_value = fake_spec with mock.patch.object(self.compute_api.compute_task_api, 'rebuild_instance') as rebuild_instance: self.compute_api.rebuild(self.context, instance, image_href, admin_pass, files_to_inject) rebuild_instance.assert_called_once_with(self.context, instance=instance, new_pass=admin_pass, injected_files=files_to_inject, image_ref=image_href, orig_image_ref=image_href, orig_sys_metadata=orig_system_metadata, bdms=bdms, preserve_ephemeral=False, host=instance.host, request_spec=fake_spec) _check_auto_disk_config.assert_called_once_with( image=image, auto_disk_config=None) _checks_for_create_and_rebuild.assert_called_once_with(self.context, 
None, image, flavor, {}, [], None) self.assertNotEqual(orig_system_metadata, instance.system_metadata) bdm_get_by_instance_uuid.assert_called_once_with( self.context, instance.uuid) @mock.patch.object(objects.RequestSpec, 'save') @mock.patch.object(objects.RequestSpec, 'get_by_instance_uuid') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(objects.Instance, 'get_flavor') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(compute_api.API, '_get_image') @mock.patch.object(compute_api.API, '_check_auto_disk_config') @mock.patch.object(compute_api.API, '_checks_for_create_and_rebuild') @mock.patch.object(compute_api.API, '_record_action_start') def test_rebuild_change_image(self, _record_action_start, _checks_for_create_and_rebuild, _check_auto_disk_config, _get_image, bdm_get_by_instance_uuid, get_flavor, instance_save, req_spec_get_by_inst_uuid, req_spec_save): orig_system_metadata = {} get_flavor.return_value = test_flavor.fake_flavor orig_image_href = 'orig_image' orig_image = { "min_ram": 10, "min_disk": 1, "properties": {'architecture': fields_obj.Architecture.X86_64, 'vm_mode': 'hvm'}} new_image_href = 'new_image' new_image = { "min_ram": 10, "min_disk": 1, "properties": {'architecture': fields_obj.Architecture.X86_64, 'vm_mode': 'xen'}} admin_pass = '' files_to_inject = [] bdms = objects.BlockDeviceMappingList() instance = fake_instance.fake_instance_obj(self.context, vm_state=vm_states.ACTIVE, cell_name='fake-cell', launched_at=timeutils.utcnow(), system_metadata=orig_system_metadata, expected_attrs=['system_metadata'], image_ref=orig_image_href, node='node', vm_mode=fields_obj.VMMode.HVM) flavor = instance.get_flavor() def get_image(context, image_href): if image_href == new_image_href: return (None, new_image) if image_href == orig_image_href: return (None, orig_image) _get_image.side_effect = get_image bdm_get_by_instance_uuid.return_value = bdms fake_spec = objects.RequestSpec(id=1) req_spec_get_by_inst_uuid.return_value = fake_spec with mock.patch.object(self.compute_api.compute_task_api, 'rebuild_instance') as rebuild_instance: self.compute_api.rebuild(self.context, instance, new_image_href, admin_pass, files_to_inject) rebuild_instance.assert_called_once_with(self.context, instance=instance, new_pass=admin_pass, injected_files=files_to_inject, image_ref=new_image_href, orig_image_ref=orig_image_href, orig_sys_metadata=orig_system_metadata, bdms=bdms, preserve_ephemeral=False, host=None, request_spec=fake_spec) # assert the request spec was modified so the scheduler picks # the existing instance host/node req_spec_save.assert_called_once_with() self.assertIn('_nova_check_type', fake_spec.scheduler_hints) self.assertEqual('rebuild', fake_spec.scheduler_hints['_nova_check_type'][0]) _check_auto_disk_config.assert_called_once_with( image=new_image, auto_disk_config=None) _checks_for_create_and_rebuild.assert_called_once_with(self.context, None, new_image, flavor, {}, [], None) self.assertEqual(fields_obj.VMMode.XEN, instance.vm_mode) @mock.patch.object(objects.KeyPair, 'get_by_name') @mock.patch.object(objects.RequestSpec, 'get_by_instance_uuid') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(objects.Instance, 'get_flavor') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(compute_api.API, '_get_image') @mock.patch.object(compute_api.API, '_check_auto_disk_config') @mock.patch.object(compute_api.API, '_checks_for_create_and_rebuild') 
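# Rebuilding with a new key_name should replace the instance's key_name and key_data; the assertions at the end of the test verify both changed.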
@mock.patch.object(compute_api.API, '_record_action_start') def test_rebuild_change_keypair(self, _record_action_start, _checks_for_create_and_rebuild, _check_auto_disk_config, _get_image, bdm_get_by_instance_uuid, get_flavor, instance_save, req_spec_get_by_inst_uuid, mock_get_keypair): orig_system_metadata = {} orig_key_name = 'orig_key_name' orig_key_data = 'orig_key_data_XXX' instance = fake_instance.fake_instance_obj(self.context, vm_state=vm_states.ACTIVE, cell_name='fake-cell', launched_at=timeutils.utcnow(), system_metadata=orig_system_metadata, image_ref='foo', expected_attrs=['system_metadata'], key_name=orig_key_name, key_data=orig_key_data) get_flavor.return_value = test_flavor.fake_flavor flavor = instance.get_flavor() image_href = 'foo' image = { "min_ram": 10, "min_disk": 1, "properties": {'architecture': fields_obj.Architecture.X86_64, 'vm_mode': 'hvm'}} admin_pass = '' files_to_inject = [] bdms = objects.BlockDeviceMappingList() _get_image.return_value = (None, image) bdm_get_by_instance_uuid.return_value = bdms fake_spec = objects.RequestSpec() req_spec_get_by_inst_uuid.return_value = fake_spec keypair = self._create_keypair_obj(instance) mock_get_keypair.return_value = keypair with mock.patch.object(self.compute_api.compute_task_api, 'rebuild_instance') as rebuild_instance: self.compute_api.rebuild(self.context, instance, image_href, admin_pass, files_to_inject, key_name=keypair.name) rebuild_instance.assert_called_once_with(self.context, instance=instance, new_pass=admin_pass, injected_files=files_to_inject, image_ref=image_href, orig_image_ref=image_href, orig_sys_metadata=orig_system_metadata, bdms=bdms, preserve_ephemeral=False, host=instance.host, request_spec=fake_spec) _check_auto_disk_config.assert_called_once_with( image=image, auto_disk_config=None) _checks_for_create_and_rebuild.assert_called_once_with(self.context, None, image, flavor, {}, [], None) self.assertNotEqual(orig_key_name, instance.key_name) self.assertNotEqual(orig_key_data, instance.key_data) @mock.patch.object(objects.RequestSpec, 'get_by_instance_uuid') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(objects.Instance, 'get_flavor') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(compute_api.API, '_get_image') @mock.patch.object(compute_api.API, '_check_auto_disk_config') @mock.patch.object(compute_api.API, '_checks_for_create_and_rebuild') @mock.patch.object(compute_api.API, '_record_action_start') def test_rebuild_change_trusted_certs(self, _record_action_start, _checks_for_create_and_rebuild, _check_auto_disk_config, _get_image, bdm_get_by_instance_uuid, get_flavor, instance_save, req_spec_get_by_inst_uuid): orig_system_metadata = {} orig_trusted_certs = ['orig-trusted-cert-1', 'orig-trusted-cert-2'] new_trusted_certs = ['new-trusted-cert-1', 'new-trusted-cert-2'] instance = fake_instance.fake_instance_obj( self.context, vm_state=vm_states.ACTIVE, cell_name='fake-cell', launched_at=timeutils.utcnow(), system_metadata=orig_system_metadata, image_ref='foo', expected_attrs=['system_metadata'], trusted_certs=orig_trusted_certs) get_flavor.return_value = test_flavor.fake_flavor flavor = instance.get_flavor() image_href = 'foo' image = { "min_ram": 10, "min_disk": 1, "properties": {'architecture': fields_obj.Architecture.X86_64, 'vm_mode': 'hvm'}} admin_pass = '' files_to_inject = [] bdms = objects.BlockDeviceMappingList() _get_image.return_value = (None, image) bdm_get_by_instance_uuid.return_value = bdms fake_spec = 
objects.RequestSpec() req_spec_get_by_inst_uuid.return_value = fake_spec with mock.patch.object(self.compute_api.compute_task_api, 'rebuild_instance') as rebuild_instance: self.compute_api.rebuild(self.context, instance, image_href, admin_pass, files_to_inject, trusted_certs=new_trusted_certs) rebuild_instance.assert_called_once_with( self.context, instance=instance, new_pass=admin_pass, injected_files=files_to_inject, image_ref=image_href, orig_image_ref=image_href, orig_sys_metadata=orig_system_metadata, bdms=bdms, preserve_ephemeral=False, host=instance.host, request_spec=fake_spec) _check_auto_disk_config.assert_called_once_with( image=image, auto_disk_config=None) _checks_for_create_and_rebuild.assert_called_once_with( self.context, None, image, flavor, {}, [], None) self.assertEqual(new_trusted_certs, instance.trusted_certs.ids) @mock.patch.object(objects.RequestSpec, 'get_by_instance_uuid') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(objects.Instance, 'get_flavor') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(compute_api.API, '_get_image') @mock.patch.object(compute_api.API, '_check_auto_disk_config') @mock.patch.object(compute_api.API, '_checks_for_create_and_rebuild') @mock.patch.object(compute_api.API, '_record_action_start') def test_rebuild_unset_trusted_certs(self, _record_action_start, _checks_for_create_and_rebuild, _check_auto_disk_config, _get_image, bdm_get_by_instance_uuid, get_flavor, instance_save, req_spec_get_by_inst_uuid): """Tests the scenario that the server was created with some trusted certs and then rebuilt without trusted_image_certificates=None explicitly to unset the trusted certs on the server. """ orig_system_metadata = {} orig_trusted_certs = ['orig-trusted-cert-1', 'orig-trusted-cert-2'] new_trusted_certs = None instance = fake_instance.fake_instance_obj( self.context, vm_state=vm_states.ACTIVE, cell_name='fake-cell', launched_at=timeutils.utcnow(), system_metadata=orig_system_metadata, image_ref='foo', expected_attrs=['system_metadata'], trusted_certs=orig_trusted_certs) get_flavor.return_value = test_flavor.fake_flavor flavor = instance.get_flavor() image_href = 'foo' image = { "min_ram": 10, "min_disk": 1, "properties": {'architecture': fields_obj.Architecture.X86_64, 'vm_mode': 'hvm'}} admin_pass = '' files_to_inject = [] bdms = objects.BlockDeviceMappingList() _get_image.return_value = (None, image) bdm_get_by_instance_uuid.return_value = bdms fake_spec = objects.RequestSpec() req_spec_get_by_inst_uuid.return_value = fake_spec with mock.patch.object(self.compute_api.compute_task_api, 'rebuild_instance') as rebuild_instance: self.compute_api.rebuild(self.context, instance, image_href, admin_pass, files_to_inject, trusted_certs=new_trusted_certs) rebuild_instance.assert_called_once_with( self.context, instance=instance, new_pass=admin_pass, injected_files=files_to_inject, image_ref=image_href, orig_image_ref=image_href, orig_sys_metadata=orig_system_metadata, bdms=bdms, preserve_ephemeral=False, host=instance.host, request_spec=fake_spec) _check_auto_disk_config.assert_called_once_with( image=image, auto_disk_config=None) _checks_for_create_and_rebuild.assert_called_once_with( self.context, None, image, flavor, {}, [], None) self.assertIsNone(instance.trusted_certs) @mock.patch.object(compute_utils, 'is_volume_backed_instance', return_value=True) @mock.patch.object(objects.Instance, 'get_flavor') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') 
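# A volume-backed instance (image_ref=None) cannot be rebuilt with trusted image certificates; the API is expected to raise CertificateValidationFailed.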
@mock.patch.object(compute_api.API, '_get_image') @mock.patch.object(compute_api.API, '_check_auto_disk_config') @mock.patch.object(compute_api.API, '_record_action_start') def test_rebuild_volume_backed_instance_with_trusted_certs( self, _record_action_start, _check_auto_disk_config, _get_image, bdm_get_by_instance_uuid, get_flavor, instance_is_volume_backed): orig_system_metadata = {} new_trusted_certs = ['new-trusted-cert-1', 'new-trusted-cert-2'] instance = fake_instance.fake_instance_obj( self.context, vm_state=vm_states.ACTIVE, cell_name='fake-cell', launched_at=timeutils.utcnow(), system_metadata=orig_system_metadata, image_ref=None, expected_attrs=['system_metadata'], trusted_certs=None) get_flavor.return_value = test_flavor.fake_flavor image_href = 'foo' image = { "min_ram": 10, "min_disk": 1, "properties": {'architecture': fields_obj.Architecture.X86_64, 'vm_mode': 'hvm'}} admin_pass = '' files_to_inject = [] bdms = objects.BlockDeviceMappingList() _get_image.return_value = (None, image) bdm_get_by_instance_uuid.return_value = bdms self.assertRaises(exception.CertificateValidationFailed, self.compute_api.rebuild, self.context, instance, image_href, admin_pass, files_to_inject, trusted_certs=new_trusted_certs) _check_auto_disk_config.assert_called_once_with( image=image, auto_disk_config=None) @mock.patch('nova.objects.Quotas.limit_check') def test_check_metadata_properties_quota_with_empty_dict(self, limit_check): metadata = {} self.compute_api._check_metadata_properties_quota(self.context, metadata) self.assertEqual(0, limit_check.call_count) @mock.patch('nova.objects.Quotas.limit_check') def test_check_injected_file_quota_with_empty_list(self, limit_check): injected_files = [] self.compute_api._check_injected_file_quota(self.context, injected_files) self.assertEqual(0, limit_check.call_count) def _test_check_injected_file_quota_onset_file_limit_exceeded(self, side_effect): injected_files = [ { "path": "/etc/banner.txt", "contents": "foo" } ] with mock.patch('nova.objects.Quotas.limit_check', side_effect=side_effect): self.compute_api._check_injected_file_quota( self.context, injected_files) def test_check_injected_file_quota_onset_file_limit_exceeded(self): # This is the first call to limit_check. side_effect = exception.OverQuota(overs='injected_files') self.assertRaises(exception.OnsetFileLimitExceeded, self._test_check_injected_file_quota_onset_file_limit_exceeded, side_effect) def test_check_injected_file_quota_onset_file_path_limit(self): # This is the second call to limit_check. side_effect = (mock.DEFAULT, exception.OverQuota(overs='injected_file_path_bytes', quotas={'injected_file_path_bytes': 255})) self.assertRaises(exception.OnsetFilePathLimitExceeded, self._test_check_injected_file_quota_onset_file_limit_exceeded, side_effect) def test_check_injected_file_quota_onset_file_content_limit(self): # This is the second call to limit_check but with different overs. 
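# mock.DEFAULT lets the first limit_check call pass normally; the second call then raises OverQuota for injected_file_content_bytes, which surfaces as OnsetFileContentLimitExceeded.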
side_effect = (mock.DEFAULT, exception.OverQuota(overs='injected_file_content_bytes', quotas={'injected_file_content_bytes': 10240})) self.assertRaises(exception.OnsetFileContentLimitExceeded, self._test_check_injected_file_quota_onset_file_limit_exceeded, side_effect) @mock.patch('nova.objects.Quotas.get_all_by_project_and_user', new=mock.MagicMock()) @mock.patch('nova.objects.Quotas.count_as_dict') @mock.patch('nova.objects.Quotas.limit_check_project_and_user') @mock.patch('nova.objects.Instance.save') @mock.patch('nova.objects.InstanceAction.action_start') @mock.patch('nova.compute.api.API._update_queued_for_deletion') def test_restore_by_admin(self, update_qfd, action_start, instance_save, quota_check, quota_count): admin_context = context.RequestContext('admin_user', 'admin_project', True) proj_count = {'instances': 1, 'cores': 1, 'ram': 512} user_count = proj_count.copy() quota_count.return_value = {'project': proj_count, 'user': user_count} instance = self._create_instance_obj() instance.vm_state = vm_states.SOFT_DELETED instance.task_state = None instance.save() with mock.patch.object(self.compute_api, 'compute_rpcapi') as rpc: self.compute_api.restore(admin_context, instance) rpc.restore_instance.assert_called_once_with(admin_context, instance) self.assertEqual(instance.task_state, task_states.RESTORING) # mock.ANY might be 'instances', 'cores', or 'ram' depending on how the # deltas dict is iterated in check_deltas # user_id is expected to be None because no per-user quotas have been # defined quota_count.assert_called_once_with(admin_context, mock.ANY, instance.project_id, user_id=None) quota_check.assert_called_once_with( admin_context, user_values={'instances': 2, 'cores': 1 + instance.flavor.vcpus, 'ram': 512 + instance.flavor.memory_mb}, project_values={'instances': 2, 'cores': 1 + instance.flavor.vcpus, 'ram': 512 + instance.flavor.memory_mb}, project_id=instance.project_id) update_qfd.assert_called_once_with(admin_context, instance.uuid, False) @mock.patch('nova.objects.Quotas.get_all_by_project_and_user', new=mock.MagicMock()) @mock.patch('nova.objects.Quotas.count_as_dict') @mock.patch('nova.objects.Quotas.limit_check_project_and_user') @mock.patch('nova.objects.Instance.save') @mock.patch('nova.objects.InstanceAction.action_start') @mock.patch('nova.compute.api.API._update_queued_for_deletion') def test_restore_by_instance_owner(self, update_qfd, action_start, instance_save, quota_check, quota_count): proj_count = {'instances': 1, 'cores': 1, 'ram': 512} user_count = proj_count.copy() quota_count.return_value = {'project': proj_count, 'user': user_count} instance = self._create_instance_obj() instance.vm_state = vm_states.SOFT_DELETED instance.task_state = None instance.save() with mock.patch.object(self.compute_api, 'compute_rpcapi') as rpc: self.compute_api.restore(self.context, instance) rpc.restore_instance.assert_called_once_with(self.context, instance) self.assertEqual(instance.project_id, self.context.project_id) self.assertEqual(instance.task_state, task_states.RESTORING) # mock.ANY might be 'instances', 'cores', or 'ram' depending on how the # deltas dict is iterated in check_deltas # user_id is expected to be None because no per-user quotas have been # defined quota_count.assert_called_once_with(self.context, mock.ANY, instance.project_id, user_id=None) quota_check.assert_called_once_with( self.context, user_values={'instances': 2, 'cores': 1 + instance.flavor.vcpus, 'ram': 512 + instance.flavor.memory_mb}, project_values={'instances': 2, 'cores': 1 + 
instance.flavor.vcpus, 'ram': 512 + instance.flavor.memory_mb}, project_id=instance.project_id) update_qfd.assert_called_once_with(self.context, instance.uuid, False) @mock.patch.object(objects.InstanceAction, 'action_start') def test_external_instance_event(self, mock_action_start): instances = [ objects.Instance(uuid=uuids.instance_1, host='host1', migration_context=None), objects.Instance(uuid=uuids.instance_2, host='host1', migration_context=None), objects.Instance(uuid=uuids.instance_3, host='host2', migration_context=None), objects.Instance(uuid=uuids.instance_4, host='host2', migration_context=None), objects.Instance(uuid=uuids.instance_5, host='host2', migration_context=None, task_state=None, vm_state=vm_states.STOPPED, power_state=power_state.SHUTDOWN), objects.Instance(uuid=uuids.instance_6, host='host2', migration_context=None, task_state=None, vm_state=vm_states.ACTIVE, power_state=power_state.RUNNING) ] # Create a single cell context and associate it with all instances mapping = objects.InstanceMapping.get_by_instance_uuid( self.context, instances[0].uuid) with context.target_cell(self.context, mapping.cell_mapping) as cc: cell_context = cc for instance in instances: instance._context = cell_context volume_id = uuidutils.generate_uuid() events = [ objects.InstanceExternalEvent( instance_uuid=uuids.instance_1, name='network-changed'), objects.InstanceExternalEvent( instance_uuid=uuids.instance_2, name='network-changed'), objects.InstanceExternalEvent( instance_uuid=uuids.instance_3, name='network-changed'), objects.InstanceExternalEvent( instance_uuid=uuids.instance_4, name='volume-extended', tag=volume_id), objects.InstanceExternalEvent( instance_uuid=uuids.instance_5, name='power-update', tag="POWER_ON"), objects.InstanceExternalEvent( instance_uuid=uuids.instance_6, name='power-update', tag="POWER_OFF"), ] self.compute_api.compute_rpcapi = mock.MagicMock() self.compute_api.external_instance_event(self.context, instances, events) method = self.compute_api.compute_rpcapi.external_instance_event method.assert_any_call(cell_context, instances[0:2], events[0:2], host='host1') method.assert_any_call(cell_context, instances[2:], events[2:], host='host2') calls = [mock.call(self.context, uuids.instance_4, instance_actions.EXTEND_VOLUME, want_result=False), mock.call(self.context, uuids.instance_5, instance_actions.START, want_result=False), mock.call(self.context, uuids.instance_6, instance_actions.STOP, want_result=False)] mock_action_start.assert_has_calls(calls) self.assertEqual(2, method.call_count) def test_external_instance_event_power_update_invalid_tag(self): instance1 = objects.Instance(self.context) instance1.uuid = uuids.instance1 instance1.id = 1 instance1.vm_state = vm_states.ACTIVE instance1.task_state = None instance1.power_state = power_state.RUNNING instance1.host = 'host1' instance1.migration_context = None instance2 = objects.Instance(self.context) instance2.uuid = uuids.instance2 instance2.id = 2 instance2.vm_state = vm_states.STOPPED instance2.task_state = None instance2.power_state = power_state.SHUTDOWN instance2.host = 'host2' instance2.migration_context = None instances = [instance1, instance2] events = [ objects.InstanceExternalEvent( instance_uuid=instance1.uuid, name='power-update', tag="VACATION"), objects.InstanceExternalEvent( instance_uuid=instance2.uuid, name='power-update', tag="POWER_ON") ] with test.nested( mock.patch.object(self.compute_api.compute_rpcapi, 'external_instance_event'), mock.patch.object(objects.InstanceAction, 'action_start'), 
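# compute_api.LOG is patched so the warning about the invalid 'VACATION' power-update tag can be asserted.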
mock.patch.object(compute_api, 'LOG') ) as ( mock_ex, mock_action_start, mock_log ): self.compute_api.external_instance_event(self.context, instances, events) self.assertEqual(2, mock_ex.call_count) # event VACATION requested on instance1 is ignored because # its an invalid event tag. mock_ex.assert_has_calls( [mock.call(self.context, [instance2], [events[1]], host=u'host2'), mock.call(self.context, [instance1], [], host=u'host1')], any_order=True) mock_action_start.assert_called_once_with( self.context, instance2.uuid, instance_actions.START, want_result=False) self.assertEqual(1, mock_log.warning.call_count) self.assertIn( 'Invalid power state', mock_log.warning.call_args[0][0]) def test_external_instance_event_evacuating_instance(self): # Since we're patching the db's migration_get(), use a dict here so # that we can validate the id is making its way correctly to the db api migrations = {} migrations[42] = {'id': 42, 'source_compute': 'host1', 'dest_compute': 'host2', 'source_node': None, 'dest_node': None, 'dest_host': None, 'old_instance_type_id': None, 'new_instance_type_id': None, 'uuid': uuids.migration, 'instance_uuid': uuids.instance_2, 'status': None, 'migration_type': 'evacuation', 'memory_total': None, 'memory_processed': None, 'memory_remaining': None, 'disk_total': None, 'disk_processed': None, 'disk_remaining': None, 'deleted': False, 'hidden': False, 'created_at': None, 'updated_at': None, 'deleted_at': None, 'cross_cell_move': False, 'user_id': None, 'project_id': None} def migration_get(context, id): return migrations[id] instances = [ objects.Instance(uuid=uuids.instance_1, host='host1', migration_context=None), objects.Instance(uuid=uuids.instance_2, host='host1', migration_context=objects.MigrationContext( migration_id=42)), objects.Instance(uuid=uuids.instance_3, host='host2', migration_context=None) ] # Create a single cell context and associate it with all instances mapping = objects.InstanceMapping.get_by_instance_uuid( self.context, instances[0].uuid) with context.target_cell(self.context, mapping.cell_mapping) as cc: cell_context = cc for instance in instances: instance._context = cell_context events = [ objects.InstanceExternalEvent( instance_uuid=uuids.instance_1, name='network-changed'), objects.InstanceExternalEvent( instance_uuid=uuids.instance_2, name='network-changed'), objects.InstanceExternalEvent( instance_uuid=uuids.instance_3, name='network-changed'), ] with mock.patch('nova.db.sqlalchemy.api.migration_get', migration_get): self.compute_api.compute_rpcapi = mock.MagicMock() self.compute_api.external_instance_event(self.context, instances, events) method = self.compute_api.compute_rpcapi.external_instance_event method.assert_any_call(cell_context, instances[0:2], events[0:2], host='host1') method.assert_any_call(cell_context, instances[1:], events[1:], host='host2') self.assertEqual(2, method.call_count) @mock.patch('nova.objects.Migration.get_by_id') @mock.patch('nova.objects.HostMapping.get_by_host') @mock.patch('nova.context.set_target_cell') @mock.patch('nova.context.get_admin_context') def test_external_instance_event_cross_cell_move( self, get_admin_context, set_target_cell, get_hm_by_host, get_mig_by_id): """Tests a scenario where an external server event comes for an instance undergoing a cross-cell migration so the event is routed to both the source host in the source cell and dest host in dest cell using the properly targeted request contexts. 
""" migration = objects.Migration( id=1, source_compute='host1', dest_compute='host2', cross_cell_move=True) migration_context = objects.MigrationContext( instance_uuid=uuids.instance, migration_id=migration.id, migration_type='resize', cross_cell_move=True) instance = objects.Instance( self.context, uuid=uuids.instance, host=migration.source_compute, migration_context=migration_context) get_mig_by_id.return_value = migration source_cell_mapping = objects.CellMapping(name='source-cell') dest_cell_mapping = objects.CellMapping(name='dest-cell') # Wrap _get_relevant_hosts and sort the result for predictable asserts. original_get_relevant_hosts = self.compute_api._get_relevant_hosts def wrap_get_relevant_hosts(_self, *a, **kw): hosts, cross_cell_move = original_get_relevant_hosts(*a, **kw) return sorted(hosts), cross_cell_move self.stub_out('nova.compute.api.API._get_relevant_hosts', wrap_get_relevant_hosts) def fake_hm_get_by_host(ctxt, host): if host == migration.source_compute: return objects.HostMapping( host=host, cell_mapping=source_cell_mapping) if host == migration.dest_compute: return objects.HostMapping( host=host, cell_mapping=dest_cell_mapping) raise Exception('Unexpected host: %s' % host) get_hm_by_host.side_effect = fake_hm_get_by_host # get_admin_context should be called twice in order (source and dest) get_admin_context.side_effect = [ mock.sentinel.source_context, mock.sentinel.dest_context] event = objects.InstanceExternalEvent( instance_uuid=instance.uuid, name='network-vif-plugged') events = [event] with mock.patch.object(self.compute_api.compute_rpcapi, 'external_instance_event') as rpc_mock: self.compute_api.external_instance_event( self.context, [instance], events) # We should have gotten the migration because of the migration_context. get_mig_by_id.assert_called_once_with(self.context, migration.id) # We should have gotten two host mappings (for source and dest). self.assertEqual(2, get_hm_by_host.call_count) get_hm_by_host.assert_has_calls([ mock.call(self.context, migration.source_compute), mock.call(self.context, migration.dest_compute)]) self.assertEqual(2, get_admin_context.call_count) # We should have targeted a context to both cells. self.assertEqual(2, set_target_cell.call_count) set_target_cell.assert_has_calls([ mock.call(mock.sentinel.source_context, source_cell_mapping), mock.call(mock.sentinel.dest_context, dest_cell_mapping)]) # We should have RPC cast to both hosts in different cells. self.assertEqual(2, rpc_mock.call_count) rpc_mock.assert_has_calls([ mock.call(mock.sentinel.source_context, [instance], events, host=migration.source_compute), mock.call(mock.sentinel.dest_context, [instance], events, host=migration.dest_compute)], # The rpc calls are based on iterating over a dict which is not # ordered so we have to just assert the calls in any order. 
any_order=True) def test_volume_ops_invalid_task_state(self): instance = self._create_instance_obj() self.assertEqual(instance.vm_state, vm_states.ACTIVE) instance.task_state = 'Any' volume_id = uuidutils.generate_uuid() self.assertRaises(exception.InstanceInvalidState, self.compute_api.attach_volume, self.context, instance, volume_id) self.assertRaises(exception.InstanceInvalidState, self.compute_api.detach_volume, self.context, instance, volume_id) new_volume_id = uuidutils.generate_uuid() self.assertRaises(exception.InstanceInvalidState, self.compute_api.swap_volume, self.context, instance, volume_id, new_volume_id) @mock.patch.object(cinder.API, 'get', side_effect=exception.CinderConnectionFailed(reason='error')) def test_get_bdm_image_metadata_with_cinder_down(self, mock_get): bdms = [objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict( { 'id': 1, 'volume_id': 1, 'source_type': 'volume', 'destination_type': 'volume', 'device_name': 'vda', }))] self.assertRaises(exception.CinderConnectionFailed, utils.get_bdm_image_metadata, self.context, self.compute_api.image_api, self.compute_api.volume_api, bdms, legacy_bdm=True) def test_get_volumes_for_bdms_errors(self): """Simple test to make sure _get_volumes_for_bdms raises up errors.""" # Use a mix of pre-existing and source_type=image volumes to test the # filtering logic in the method. bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping(source_type='image', volume_id=None), objects.BlockDeviceMapping(source_type='volume', volume_id=uuids.volume_id)]) for exc in ( exception.VolumeNotFound(volume_id=uuids.volume_id), exception.CinderConnectionFailed(reason='gremlins'), exception.Forbidden() ): with mock.patch.object(self.compute_api.volume_api, 'get', side_effect=exc) as mock_vol_get: self.assertRaises(type(exc), self.compute_api._get_volumes_for_bdms, self.context, bdms) mock_vol_get.assert_called_once_with(self.context, uuids.volume_id) @ddt.data(True, False) def test_validate_vol_az_for_create_multiple_vols_diff_az(self, cross_az): """Tests cross_az_attach=True|False scenarios where the volumes are in different zones. """ self.flags(cross_az_attach=cross_az, group='cinder') volumes = [{'availability_zone': str(x)} for x in range(2)] if cross_az: # Since cross_az_attach=True (the default) we do not care that the # volumes are in different zones. self.assertIsNone(self.compute_api._validate_vol_az_for_create( None, volumes)) else: # In this case the volumes cannot be in different zones. ex = self.assertRaises( exception.MismatchVolumeAZException, self.compute_api._validate_vol_az_for_create, None, volumes) self.assertIn('Volumes are in different availability zones: 0,1', six.text_type(ex)) def test_validate_vol_az_for_create_vol_az_matches_default_cpu_az(self): """Tests the scenario where the instance is not being created in a specific zone and the volume's zone matches CONF.default_availability_zone, so None is returned indicating the RequestSpec.availability_zone does not need to be updated.
""" self.flags(cross_az_attach=False, group='cinder') volumes = [{'availability_zone': CONF.default_availability_zone}] self.assertIsNone(self.compute_api._validate_vol_az_for_create( None, volumes)) @mock.patch.object(cinder.API, 'get_snapshot', side_effect=exception.CinderConnectionFailed(reason='error')) def test_validate_bdm_with_cinder_down(self, mock_get_snapshot): instance = self._create_instance_obj() instance_type = self._create_flavor() bdms = [objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict( { 'id': 1, 'snapshot_id': 1, 'source_type': 'volume', 'destination_type': 'volume', 'device_name': 'vda', 'boot_index': 0, }))] image_cache = volumes = {} self.assertRaises(exception.CinderConnectionFailed, self.compute_api._validate_bdm, self.context, instance, instance_type, bdms, image_cache, volumes) @mock.patch.object(cinder.API, 'attachment_create', side_effect=exception.InvalidInput(reason='error')) def test_validate_bdm_with_error_volume_new_flow(self, mock_attach_create): # Tests that an InvalidInput exception raised from # volume_api.attachment_create due to the volume status not being # 'available' results in _validate_bdm re-raising InvalidVolume. instance = self._create_instance_obj() del instance.id instance_type = self._create_flavor() volume_id = 'e856840e-9f5b-4894-8bde-58c6e29ac1e8' volume_info = {'status': 'error', 'attach_status': 'detached', 'id': volume_id, 'multiattach': False} bdms = [objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict( { 'boot_index': 0, 'volume_id': volume_id, 'source_type': 'volume', 'destination_type': 'volume', 'device_name': 'vda', }))] self.assertRaises(exception.InvalidVolume, self.compute_api._validate_bdm, self.context, instance, instance_type, bdms, {}, {volume_id: volume_info}) mock_attach_create.assert_called_once_with( self.context, volume_id, instance.uuid) def test_validate_bdm_missing_boot_index(self): """Tests that _validate_bdm will fail if there is no boot_index=0 entry """ bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping( boot_index=None, image_id=uuids.image_id, source_type='image', destination_type='volume')]) image_cache = volumes = {} self.assertRaises(exception.InvalidBDMBootSequence, self.compute_api._validate_bdm, self.context, objects.Instance(), objects.Flavor(), bdms, image_cache, volumes) def test_validate_bdm_with_volume_type_name_is_specified(self): """Test _check_requested_volume_type method is used. 
""" instance = self._create_instance_obj() instance_type = self._create_flavor() volume_type = 'fake_lvm_1' volume_types = [{'id': 'fake_volume_type_id_1', 'name': 'fake_lvm_1'}, {'id': 'fake_volume_type_id_2', 'name': 'fake_lvm_2'}] bdm1 = objects.BlockDeviceMapping( **fake_block_device.AnonFakeDbBlockDeviceDict( { 'uuid': uuids.image_id, 'source_type': 'image', 'destination_type': 'volume', 'device_name': 'vda', 'boot_index': 0, 'volume_size': 3, 'volume_type': 'fake_lvm_1' })) bdm2 = objects.BlockDeviceMapping( **fake_block_device.AnonFakeDbBlockDeviceDict( { 'uuid': uuids.image_id, 'source_type': 'snapshot', 'destination_type': 'volume', 'device_name': 'vdb', 'boot_index': 1, 'volume_size': 3, 'volume_type': 'fake_lvm_1' })) bdms = [bdm1, bdm2] with test.nested( mock.patch.object(cinder.API, 'get_all_volume_types', return_value=volume_types), mock.patch.object(compute_api.API, '_check_requested_volume_type')) as ( get_all_vol_types, vol_type_requested): image_cache = volumes = {} self.compute_api._validate_bdm(self.context, instance, instance_type, bdms, image_cache, volumes) get_all_vol_types.assert_called_once_with(self.context) vol_type_requested.assert_any_call(bdms[0], volume_type, volume_types) vol_type_requested.assert_any_call(bdms[1], volume_type, volume_types) @mock.patch('nova.compute.api.API._get_image') def test_validate_bdm_missing_volume_size(self, mock_get_image): """Tests that _validate_bdm fail if there volume_size not provided """ instance = self._create_instance_obj() # first we test the case of instance.image_ref == bdm.image_id bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping( boot_index=0, image_id=instance.image_ref, source_type='image', destination_type='volume', volume_type=None, snapshot_id=None, volume_id=None, volume_size=None)]) image_cache = volumes = {} self.assertRaises(exception.InvalidBDM, self.compute_api._validate_bdm, self.context, instance, objects.Flavor(), bdms, image_cache, volumes) self.assertEqual(0, mock_get_image.call_count) # then we test the case of instance.image_ref != bdm.image_id image_id = uuids.image_id bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping( boot_index=0, image_id=image_id, source_type='image', destination_type='volume', volume_type=None, snapshot_id=None, volume_id=None, volume_size=None)]) self.assertRaises(exception.InvalidBDM, self.compute_api._validate_bdm, self.context, instance, objects.Flavor(), bdms, image_cache, volumes) mock_get_image.assert_called_once_with(self.context, image_id) @mock.patch('nova.compute.api.API._get_image') def test_validate_bdm_disk_bus(self, mock_get_image): """Tests that _validate_bdm fail if an invalid disk_bus is provided """ instance = self._create_instance_obj() bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping( boot_index=0, image_id=instance.image_ref, source_type='image', destination_type='volume', volume_type=None, snapshot_id=None, volume_id=None, volume_size=1, disk_bus='virtio-scsi')]) image_cache = volumes = {} self.assertRaises(exception.InvalidBDMDiskBus, self.compute_api._validate_bdm, self.context, instance, objects.Flavor(), bdms, image_cache, volumes) def test_the_specified_volume_type_id_assignment_to_name(self): """Test _check_requested_volume_type method is called, if the user is using the volume type ID, assign volume_type to volume type name. 
""" volume_type = 'fake_volume_type_id_1' volume_types = [{'id': 'fake_volume_type_id_1', 'name': 'fake_lvm_1'}, {'id': 'fake_volume_type_id_2', 'name': 'fake_lvm_2'}] bdms = [objects.BlockDeviceMapping( **fake_block_device.AnonFakeDbBlockDeviceDict( { 'uuid': uuids.image_id, 'source_type': 'image', 'destination_type': 'volume', 'device_name': 'vda', 'boot_index': 0, 'volume_size': 3, 'volume_type': 'fake_volume_type_id_1' }))] self.compute_api._check_requested_volume_type(bdms[0], volume_type, volume_types) self.assertEqual(bdms[0].volume_type, volume_types[0]['name']) def _test_provision_instances_with_cinder_error(self, expected_exception): @mock.patch('nova.compute.utils.check_num_instances_quota') @mock.patch.object(objects.Instance, 'create') @mock.patch.object(objects.RequestSpec, 'from_components') def do_test( mock_req_spec_from_components, _mock_create, mock_check_num_inst_quota): req_spec_mock = mock.MagicMock() mock_check_num_inst_quota.return_value = 1 mock_req_spec_from_components.return_value = req_spec_mock ctxt = context.RequestContext('fake-user', 'fake-project') flavor = self._create_flavor() min_count = max_count = 1 boot_meta = { 'id': 'fake-image-id', 'properties': {'mappings': []}, 'status': 'fake-status', 'location': 'far-away'} base_options = {'image_ref': 'fake-ref', 'display_name': 'fake-name', 'project_id': 'fake-project', 'availability_zone': None, 'metadata': {}, 'access_ip_v4': None, 'access_ip_v6': None, 'config_drive': None, 'key_name': None, 'reservation_id': None, 'kernel_id': None, 'ramdisk_id': None, 'root_device_name': None, 'user_data': None, 'numa_topology': None, 'pci_requests': None, 'port_resource_requests': None} security_groups = {} block_device_mapping = objects.BlockDeviceMappingList( objects=[objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict( { 'id': 1, 'volume_id': 1, 'source_type': 'volume', 'destination_type': 'volume', 'device_name': 'vda', 'boot_index': 0, }))]) shutdown_terminate = True instance_group = None check_server_group_quota = False filter_properties = {'scheduler_hints': None, 'instance_type': flavor} trusted_certs = None self.assertRaises(expected_exception, self.compute_api._provision_instances, ctxt, flavor, min_count, max_count, base_options, boot_meta, security_groups, block_device_mapping, shutdown_terminate, instance_group, check_server_group_quota, filter_properties, None, objects.TagList(), trusted_certs, False) do_test() @mock.patch.object(cinder.API, 'get', side_effect=exception.CinderConnectionFailed(reason='error')) def test_provision_instances_with_cinder_down(self, mock_get): self._test_provision_instances_with_cinder_error( expected_exception=exception.CinderConnectionFailed) @mock.patch.object(cinder.API, 'get', new=mock.Mock( return_value={'id': '1', 'multiattach': False})) @mock.patch.object(cinder.API, 'check_availability_zone', new=mock.Mock()) @mock.patch.object(cinder.API, 'attachment_create', new=mock.Mock( side_effect=exception.InvalidInput(reason='error'))) def test_provision_instances_with_error_volume(self): self._test_provision_instances_with_cinder_error( expected_exception=exception.InvalidVolume) @mock.patch('nova.objects.RequestSpec.from_components') @mock.patch('nova.objects.BuildRequest') @mock.patch('nova.objects.Instance') @mock.patch('nova.objects.InstanceMapping.create') def test_provision_instances_with_keypair(self, mock_im, mock_instance, mock_br, mock_rs): fake_keypair = objects.KeyPair(name='test') inst_type = self._create_flavor() 
@mock.patch.object(self.compute_api, '_get_volumes_for_bdms') @mock.patch.object(self.compute_api, '_create_reqspec_buildreq_instmapping', new=mock.MagicMock()) @mock.patch('nova.compute.utils.check_num_instances_quota') @mock.patch('nova.network.security_group_api') @mock.patch.object(self.compute_api, 'create_db_entry_for_new_instance') @mock.patch.object(self.compute_api, '_bdm_validate_set_size_and_instance') def do_test(mock_bdm_v, mock_cdb, mock_sg, mock_cniq, mock_get_vols): mock_cniq.return_value = 1 self.compute_api._provision_instances(self.context, inst_type, 1, 1, mock.MagicMock(), {}, None, None, None, None, {}, None, fake_keypair, objects.TagList(), None, False) self.assertEqual( 'test', mock_instance.return_value.keypairs.objects[0].name) self.compute_api._provision_instances(self.context, inst_type, 1, 1, mock.MagicMock(), {}, None, None, None, None, {}, None, None, objects.TagList(), None, False) self.assertEqual( 0, len(mock_instance.return_value.keypairs.objects)) do_test() @mock.patch('nova.accelerator.cyborg.get_device_profile_request_groups') @mock.patch('nova.objects.RequestSpec.from_components') @mock.patch('nova.objects.BuildRequest') @mock.patch('nova.objects.Instance') @mock.patch('nova.objects.InstanceMapping.create') def _test_provision_instances_with_accels(self, instance_type, dp_request_groups, prev_request_groups, mock_im, mock_instance, mock_br, mock_rs, mock_get_dp): @mock.patch.object(self.compute_api, '_get_volumes_for_bdms') @mock.patch.object(self.compute_api, '_create_reqspec_buildreq_instmapping', new=mock.MagicMock()) @mock.patch('nova.compute.utils.check_num_instances_quota') @mock.patch('nova.network.security_group_api') @mock.patch.object(self.compute_api, 'create_db_entry_for_new_instance') @mock.patch.object(self.compute_api, '_bdm_validate_set_size_and_instance') def do_test(mock_bdm_v, mock_cdb, mock_sg, mock_cniq, mock_get_vols): mock_cniq.return_value = 1 self.compute_api._provision_instances(self.context, instance_type, 1, 1, mock.MagicMock(), {}, None, None, None, None, {}, None, None, objects.TagList(), None, False) mock_get_dp.return_value = dp_request_groups fake_rs = fake_request_spec.fake_spec_obj() fake_rs.requested_resources = prev_request_groups mock_rs.return_value = fake_rs do_test() return mock_get_dp, fake_rs def test_provision_instances_with_accels_ok(self): # If extra_specs has accel spec, device profile's request_groups # should be obtained, and added to reqspec's requested_resources. dp_name = 'mydp' extra_specs = {'extra_specs': {'accel:device_profile': dp_name}} instance_type = self._create_flavor(**extra_specs) prev_groups = [objects.RequestGroup(requester_id='prev0'), objects.RequestGroup(requester_id='prev1')] dp_groups = [objects.RequestGroup(requester_id='deviceprofile2'), objects.RequestGroup(requester_id='deviceprofile3')] mock_get_dp, fake_rs = self._test_provision_instances_with_accels( instance_type, dp_groups, prev_groups) mock_get_dp.assert_called_once_with(self.context, dp_name) self.assertEqual(prev_groups + dp_groups, fake_rs.requested_resources) def test_provision_instances_with_accels_no_dp(self): # If extra specs has no accel spec, no attempt should be made to # get device profile's request_groups, and reqspec.requested_resources # should be left unchanged. 
instance_type = self._create_flavor() prev_groups = [objects.RequestGroup(requester_id='prev0'), objects.RequestGroup(requester_id='prev1')] mock_get_dp, fake_rs = self._test_provision_instances_with_accels( instance_type, [], prev_groups) mock_get_dp.assert_not_called() self.assertEqual(prev_groups, fake_rs.requested_resources) def test_provision_instances_creates_build_request(self): @mock.patch.object(self.compute_api, '_get_volumes_for_bdms') @mock.patch.object(self.compute_api, '_create_reqspec_buildreq_instmapping') @mock.patch.object(objects.Instance, 'create') @mock.patch('nova.compute.utils.check_num_instances_quota') @mock.patch.object(objects.RequestSpec, 'from_components') def do_test(mock_req_spec_from_components, mock_check_num_inst_quota, mock_inst_create, mock_create_rs_br_im, mock_get_volumes): min_count = 1 max_count = 2 mock_check_num_inst_quota.return_value = 2 ctxt = context.RequestContext('fake-user', 'fake-project') flavor = self._create_flavor() boot_meta = { 'id': 'fake-image-id', 'properties': {'mappings': []}, 'status': 'fake-status', 'location': 'far-away'} base_options = {'image_ref': 'fake-ref', 'display_name': 'fake-name', 'project_id': 'fake-project', 'availability_zone': None, 'metadata': {}, 'access_ip_v4': None, 'access_ip_v6': None, 'config_drive': None, 'key_name': None, 'reservation_id': None, 'kernel_id': None, 'ramdisk_id': None, 'root_device_name': None, 'user_data': None, 'numa_topology': None, 'pci_requests': None, 'port_resource_requests': None} security_groups = {} block_device_mappings = objects.BlockDeviceMappingList( objects=[objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict( { 'id': 1, 'volume_id': 1, 'source_type': 'volume', 'destination_type': 'volume', 'device_name': 'vda', 'boot_index': 0, }))]) instance_tags = objects.TagList(objects=[objects.Tag(tag='tag')]) shutdown_terminate = True trusted_certs = None instance_group = None check_server_group_quota = False filter_properties = {'scheduler_hints': None, 'instance_type': flavor} with mock.patch.object( self.compute_api, '_bdm_validate_set_size_and_instance', return_value=block_device_mappings) as validate_bdm: instances_to_build = self.compute_api._provision_instances( ctxt, flavor, min_count, max_count, base_options, boot_meta, security_groups, block_device_mappings, shutdown_terminate, instance_group, check_server_group_quota, filter_properties, None, instance_tags, trusted_certs, False) validate_bdm.assert_has_calls([mock.call( ctxt, test.MatchType(objects.Instance), flavor, block_device_mappings, {}, mock_get_volumes.return_value, False)] * max_count) for rs, br, im in instances_to_build: self.assertIsInstance(br.instance, objects.Instance) self.assertTrue(uuidutils.is_uuid_like(br.instance.uuid)) self.assertEqual(base_options['project_id'], br.instance.project_id) self.assertEqual(1, br.block_device_mappings[0].id) self.assertEqual(br.instance.uuid, br.tags[0].resource_id) mock_create_rs_br_im.assert_any_call(ctxt, rs, br, im) do_test() def test_provision_instances_creates_instance_mapping(self): @mock.patch.object(self.compute_api, '_get_volumes_for_bdms') @mock.patch.object(self.compute_api, '_create_reqspec_buildreq_instmapping', new=mock.MagicMock()) @mock.patch('nova.compute.utils.check_num_instances_quota') @mock.patch.object(objects.Instance, 'create', new=mock.MagicMock()) @mock.patch.object(self.compute_api, '_validate_bdm', new=mock.MagicMock()) @mock.patch.object(objects.RequestSpec, 'from_components') @mock.patch('nova.objects.InstanceMapping') def 
do_test(mock_inst_mapping, mock_rs, mock_check_num_inst_quota, mock_get_vols): inst_mapping_mock = mock.MagicMock() mock_check_num_inst_quota.return_value = 1 mock_inst_mapping.return_value = inst_mapping_mock ctxt = context.RequestContext('fake-user', 'fake-project') flavor = self._create_flavor() min_count = max_count = 1 boot_meta = { 'id': 'fake-image-id', 'properties': {'mappings': []}, 'status': 'fake-status', 'location': 'far-away'} base_options = {'image_ref': 'fake-ref', 'display_name': 'fake-name', 'project_id': 'fake-project', 'availability_zone': None, 'metadata': {}, 'access_ip_v4': None, 'access_ip_v6': None, 'config_drive': None, 'key_name': None, 'reservation_id': None, 'kernel_id': None, 'ramdisk_id': None, 'root_device_name': None, 'user_data': None, 'numa_topology': None, 'pci_requests': None, 'port_resource_requests': None} security_groups = {} block_device_mapping = objects.BlockDeviceMappingList( objects=[objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict( { 'id': 1, 'volume_id': 1, 'source_type': 'volume', 'destination_type': 'volume', 'device_name': 'vda', 'boot_index': 0, }))]) shutdown_terminate = True instance_group = None check_server_group_quota = False filter_properties = {'scheduler_hints': None, 'instance_type': flavor} trusted_certs = None instances_to_build = ( self.compute_api._provision_instances(ctxt, flavor, min_count, max_count, base_options, boot_meta, security_groups, block_device_mapping, shutdown_terminate, instance_group, check_server_group_quota, filter_properties, None, objects.TagList(), trusted_certs, False)) rs, br, im = instances_to_build[0] self.assertTrue(uuidutils.is_uuid_like(br.instance.uuid)) self.assertEqual(br.instance_uuid, im.instance_uuid) self.assertEqual(br.instance.uuid, inst_mapping_mock.instance_uuid) self.assertIsNone(inst_mapping_mock.cell_mapping) self.assertEqual(ctxt.project_id, inst_mapping_mock.project_id) # Verify that the instance mapping created has user_id populated. 
self.assertEqual(ctxt.user_id, inst_mapping_mock.user_id) do_test() @mock.patch.object(cinder.API, 'get', return_value={'id': '1', 'multiattach': False}) @mock.patch.object(cinder.API, 'check_availability_zone',) @mock.patch.object(cinder.API, 'attachment_create', side_effect=[{'id': uuids.attachment_id}, exception.InvalidInput(reason='error')]) @mock.patch.object(objects.BlockDeviceMapping, 'save') def test_provision_instances_cleans_up_when_volume_invalid_new_flow(self, _mock_bdm, _mock_cinder_attach_create, _mock_cinder_check_availability_zone, _mock_cinder_get): @mock.patch.object(self.compute_api, '_create_reqspec_buildreq_instmapping') @mock.patch('nova.compute.utils.check_num_instances_quota') @mock.patch.object(objects, 'Instance') @mock.patch.object(objects.RequestSpec, 'from_components') @mock.patch.object(objects, 'BuildRequest') @mock.patch.object(objects, 'InstanceMapping') def do_test(mock_inst_mapping, mock_build_req, mock_req_spec_from_components, mock_inst, mock_check_num_inst_quota, mock_create_rs_br_im): min_count = 1 max_count = 2 mock_check_num_inst_quota.return_value = 2 req_spec_mock = mock.MagicMock() mock_req_spec_from_components.return_value = req_spec_mock inst_mocks = [mock.MagicMock() for i in range(max_count)] for inst_mock in inst_mocks: inst_mock.project_id = 'fake-project' mock_inst.side_effect = inst_mocks build_req_mocks = [mock.MagicMock() for i in range(max_count)] mock_build_req.side_effect = build_req_mocks inst_map_mocks = [mock.MagicMock() for i in range(max_count)] mock_inst_mapping.side_effect = inst_map_mocks ctxt = context.RequestContext('fake-user', 'fake-project') flavor = self._create_flavor(extra_specs={}) boot_meta = { 'id': 'fake-image-id', 'properties': {'mappings': []}, 'status': 'fake-status', 'location': 'far-away'} base_options = {'image_ref': 'fake-ref', 'display_name': 'fake-name', 'project_id': 'fake-project', 'availability_zone': None, 'metadata': {}, 'access_ip_v4': None, 'access_ip_v6': None, 'config_drive': None, 'key_name': None, 'reservation_id': None, 'kernel_id': None, 'ramdisk_id': None, 'root_device_name': None, 'user_data': None, 'numa_topology': None, 'pci_requests': None, 'port_resource_requests': None} security_groups = {} block_device_mapping = objects.BlockDeviceMappingList( objects=[objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict( { 'id': 1, 'volume_id': 1, 'source_type': 'volume', 'destination_type': 'volume', 'device_name': 'vda', 'boot_index': 0, }))]) shutdown_terminate = True instance_group = None check_server_group_quota = False filter_properties = {'scheduler_hints': None, 'instance_type': flavor} tags = objects.TagList() trusted_certs = None self.assertRaises(exception.InvalidVolume, self.compute_api._provision_instances, ctxt, flavor, min_count, max_count, base_options, boot_meta, security_groups, block_device_mapping, shutdown_terminate, instance_group, check_server_group_quota, filter_properties, None, tags, trusted_certs, False) # First instance, build_req, mapping is created and destroyed mock_create_rs_br_im.assert_called_once_with(ctxt, req_spec_mock, build_req_mocks[0], inst_map_mocks[0]) self.assertTrue(build_req_mocks[0].destroy.called) self.assertTrue(inst_map_mocks[0].destroy.called) # Second instance, build_req, mapping is not created nor destroyed self.assertFalse(inst_mocks[1].create.called) self.assertFalse(inst_mocks[1].destroy.called) self.assertFalse(build_req_mocks[1].destroy.called) self.assertFalse(inst_map_mocks[1].destroy.called) do_test() def 
test_provision_instances_creates_reqspec_with_secgroups(self): @mock.patch.object(self.compute_api, '_create_reqspec_buildreq_instmapping', new=mock.MagicMock()) @mock.patch('nova.compute.utils.check_num_instances_quota') @mock.patch('nova.network.security_group_api' '.populate_security_groups') @mock.patch.object(compute_api, 'objects') @mock.patch.object(self.compute_api, 'create_db_entry_for_new_instance', new=mock.MagicMock()) @mock.patch.object(self.compute_api, '_bdm_validate_set_size_and_instance', new=mock.MagicMock()) def test(mock_objects, mock_secgroup, mock_cniq): ctxt = context.RequestContext('fake-user', 'fake-project') mock_cniq.return_value = 1 inst_type = self._create_flavor() self.compute_api._provision_instances(ctxt, inst_type, None, None, mock.MagicMock(), None, None, [], None, None, None, None, None, objects.TagList(), None, False) secgroups = mock_secgroup.return_value mock_objects.RequestSpec.from_components.assert_called_once_with( mock.ANY, mock.ANY, mock.ANY, mock.ANY, mock.ANY, mock.ANY, mock.ANY, mock.ANY, mock.ANY, security_groups=secgroups, port_resource_requests=mock.ANY) test() def _test_rescue(self, vm_state=vm_states.ACTIVE, rescue_password=None, rescue_image=None, clean_shutdown=True): instance = self._create_instance_obj(params={'vm_state': vm_state}) rescue_image_meta_obj = image_meta_obj.ImageMeta.from_dict({}) bdms = [] with test.nested( mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid', return_value=bdms), mock.patch.object(compute_utils, 'is_volume_backed_instance', return_value=False), mock.patch.object(instance, 'save'), mock.patch.object(self.compute_api, '_record_action_start'), mock.patch.object(self.compute_api.compute_rpcapi, 'rescue_instance'), mock.patch.object(objects.ImageMeta, 'from_image_ref', return_value=rescue_image_meta_obj), ) as ( bdm_get_by_instance_uuid, volume_backed_inst, instance_save, record_action_start, rpcapi_rescue_instance, mock_find_image_ref, ): self.compute_api.rescue(self.context, instance, rescue_password=rescue_password, rescue_image_ref=rescue_image, clean_shutdown=clean_shutdown) # assert field values set on the instance object self.assertEqual(task_states.RESCUING, instance.task_state) # assert our mock calls bdm_get_by_instance_uuid.assert_called_once_with( self.context, instance.uuid) volume_backed_inst.assert_called_once_with( self.context, instance, bdms) instance_save.assert_called_once_with(expected_task_state=[None]) record_action_start.assert_called_once_with( self.context, instance, instance_actions.RESCUE) rpcapi_rescue_instance.assert_called_once_with( self.context, instance=instance, rescue_password=rescue_password, rescue_image_ref=rescue_image, clean_shutdown=clean_shutdown) if rescue_image: mock_find_image_ref.assert_called_once_with( self.context, self.compute_api.image_api, rescue_image) def test_rescue_active(self): self._test_rescue() def test_rescue_stopped(self): self._test_rescue(vm_state=vm_states.STOPPED) def test_rescue_error(self): self._test_rescue(vm_state=vm_states.ERROR) def test_rescue_with_password(self): self._test_rescue(rescue_password='fake-password') def test_rescue_with_image(self): self._test_rescue(rescue_image='fake-image') def test_rescue_forced_shutdown(self): self._test_rescue(clean_shutdown=False) def test_unrescue(self): instance = self._create_instance_obj( params={'vm_state': vm_states.RESCUED}) with test.nested( mock.patch.object(instance, 'save'), mock.patch.object(self.compute_api, '_record_action_start'), 
mock.patch.object(self.compute_api.compute_rpcapi, 'unrescue_instance') ) as ( instance_save, record_action_start, rpcapi_unrescue_instance ): self.compute_api.unrescue(self.context, instance) # assert field values set on the instance object self.assertEqual(task_states.UNRESCUING, instance.task_state) # assert our mock calls instance_save.assert_called_once_with(expected_task_state=[None]) record_action_start.assert_called_once_with( self.context, instance, instance_actions.UNRESCUE) rpcapi_unrescue_instance.assert_called_once_with( self.context, instance=instance) @mock.patch('nova.objects.image_meta.ImageMeta.from_image_ref') @mock.patch('nova.objects.compute_node.ComputeNode' '.get_by_host_and_nodename') @mock.patch('nova.compute.utils.is_volume_backed_instance', return_value=True) @mock.patch('nova.objects.block_device.BlockDeviceMappingList' '.get_by_instance_uuid') def test_rescue_bfv_with_required_trait(self, mock_get_bdms, mock_is_volume_backed, mock_get_cn, mock_image_meta_obj_from_ref): instance = self._create_instance_obj() bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping( boot_index=0, image_id=uuids.image_id, source_type='image', destination_type='volume', volume_type=None, snapshot_id=None, volume_id=uuids.volume_id, volume_size=None)]) rescue_image_meta_obj = image_meta_obj.ImageMeta.from_dict({}) with test.nested( mock.patch.object(self.compute_api.placementclient, 'get_provider_traits'), mock.patch.object(self.compute_api.volume_api, 'get'), mock.patch.object(self.compute_api.volume_api, 'check_attached'), mock.patch.object(instance, 'save'), mock.patch.object(self.compute_api, '_record_action_start'), mock.patch.object(self.compute_api.compute_rpcapi, 'rescue_instance') ) as ( mock_get_traits, mock_get_volume, mock_check_attached, mock_instance_save, mock_record_start, mock_rpcapi_rescue ): # Mock out the returned compute node, image, bdms and volume mock_get_cn.return_value = mock.Mock(uuid=uuids.cn) mock_image_meta_obj_from_ref.return_value = rescue_image_meta_obj mock_get_bdms.return_value = bdms mock_get_volume.return_value = mock.sentinel.volume # Ensure the required trait is returned, allowing BFV rescue mock_trait_info = mock.Mock(traits=[ot.COMPUTE_RESCUE_BFV]) mock_get_traits.return_value = mock_trait_info # Try to rescue the instance self.compute_api.rescue(self.context, instance, rescue_image_ref=uuids.rescue_image_id, allow_bfv_rescue=True) # Assert all of the calls made in the compute API mock_get_bdms.assert_called_once_with(self.context, instance.uuid) mock_get_volume.assert_called_once_with( self.context, uuids.volume_id) mock_check_attached.assert_called_once_with( self.context, mock.sentinel.volume) mock_is_volume_backed.assert_called_once_with( self.context, instance, bdms) mock_get_cn.assert_called_once_with( self.context, instance.host, instance.node) mock_get_traits.assert_called_once_with(self.context, uuids.cn) mock_instance_save.assert_called_once_with( expected_task_state=[None]) mock_record_start.assert_called_once_with( self.context, instance, instance_actions.RESCUE) mock_rpcapi_rescue.assert_called_once_with( self.context, instance=instance, rescue_password=None, rescue_image_ref=uuids.rescue_image_id, clean_shutdown=True) # Assert that the instance task state as set in the compute API self.assertEqual(task_states.RESCUING, instance.task_state) @mock.patch('nova.objects.compute_node.ComputeNode' '.get_by_host_and_nodename') @mock.patch('nova.compute.utils.is_volume_backed_instance', return_value=True) 
@mock.patch('nova.objects.block_device.BlockDeviceMappingList' '.get_by_instance_uuid') def test_rescue_bfv_without_required_trait(self, mock_get_bdms, mock_is_volume_backed, mock_get_cn): instance = self._create_instance_obj() bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping( boot_index=0, image_id=uuids.image_id, source_type='image', destination_type='volume', volume_type=None, snapshot_id=None, volume_id=uuids.volume_id, volume_size=None)]) with test.nested( mock.patch.object(self.compute_api.placementclient, 'get_provider_traits'), mock.patch.object(self.compute_api.volume_api, 'get'), mock.patch.object(self.compute_api.volume_api, 'check_attached'), ) as ( mock_get_traits, mock_get_volume, mock_check_attached ): # Mock out the returned compute node, bdms and volume mock_get_bdms.return_value = bdms mock_get_volume.return_value = mock.sentinel.volume mock_get_cn.return_value = mock.Mock(uuid=uuids.cn) # Ensure the required trait is not returned, denying BFV rescue mock_trait_info = mock.Mock(traits=[]) mock_get_traits.return_value = mock_trait_info # Assert that any attempt to rescue a bfv instance on a compute # node that does not report the COMPUTE_RESCUE_BFV trait fails and # raises InstanceNotRescuable self.assertRaises(exception.InstanceNotRescuable, self.compute_api.rescue, self.context, instance, rescue_image_ref=None, allow_bfv_rescue=True) # Assert the calls made in the compute API prior to the failure mock_get_bdms.assert_called_once_with(self.context, instance.uuid) mock_get_volume.assert_called_once_with( self.context, uuids.volume_id) mock_check_attached.assert_called_once_with( self.context, mock.sentinel.volume) mock_is_volume_backed.assert_called_once_with( self.context, instance, bdms) mock_get_cn.assert_called_once_with( self.context, instance.host, instance.node) mock_get_traits.assert_called_once_with( self.context, uuids.cn) @mock.patch('nova.compute.utils.is_volume_backed_instance', return_value=True) @mock.patch('nova.objects.block_device.BlockDeviceMappingList' '.get_by_instance_uuid') def test_rescue_bfv_without_allow_flag(self, mock_get_bdms, mock_is_volume_backed): instance = self._create_instance_obj() bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping( boot_index=0, image_id=uuids.image_id, source_type='image', destination_type='volume', volume_type=None, snapshot_id=None, volume_id=uuids.volume_id, volume_size=None)]) with test.nested( mock.patch.object(self.compute_api.volume_api, 'get'), mock.patch.object(self.compute_api.volume_api, 'check_attached'), ) as ( mock_get_volume, mock_check_attached ): # Mock out the returned bdms and volume mock_get_bdms.return_value = bdms mock_get_volume.return_value = mock.sentinel.volume # Assert that any attempt to rescue a bfv instance with # allow_bfv_rescue=False fails and raises InstanceNotRescuable self.assertRaises(exception.InstanceNotRescuable, self.compute_api.rescue, self.context, instance, rescue_image_ref=None, allow_bfv_rescue=False) # Assert the calls made in the compute API prior to the failure mock_get_bdms.assert_called_once_with(self.context, instance.uuid) mock_get_volume.assert_called_once_with( self.context, uuids.volume_id) mock_check_attached.assert_called_once_with( self.context, mock.sentinel.volume) mock_is_volume_backed.assert_called_once_with( self.context, instance, bdms) @mock.patch('nova.objects.image_meta.ImageMeta.from_image_ref') def test_rescue_from_image_ref_failure(self, mock_image_meta_obj_from_ref): instance = 
self._create_instance_obj() mock_image_meta_obj_from_ref.side_effect = [ exception.ImageNotFound(image_id=mock.sentinel.rescue_image_ref), exception.ImageBadRequest( image_id=mock.sentinel.rescue_image_ref, response='bar')] # Assert that UnsupportedRescueImage is raised when from_image_ref # returns exception.ImageNotFound self.assertRaises(exception.UnsupportedRescueImage, self.compute_api.rescue, self.context, instance, rescue_image_ref=mock.sentinel.rescue_image_ref) # Assert that UnsupportedRescueImage is raised when from_image_ref # returns exception.ImageBadRequest self.assertRaises(exception.UnsupportedRescueImage, self.compute_api.rescue, self.context, instance, rescue_image_ref=mock.sentinel.rescue_image_ref) # Assert that we called from_image_ref using the provided ref mock_image_meta_obj_from_ref.assert_has_calls([ mock.call(self.context, self.compute_api.image_api, mock.sentinel.rescue_image_ref), mock.call(self.context, self.compute_api.image_api, mock.sentinel.rescue_image_ref)]) @mock.patch('nova.objects.image_meta.ImageMeta.from_image_ref') def test_rescue_using_volume_backed_snapshot(self, mock_image_meta_obj_from_ref): instance = self._create_instance_obj() rescue_image_obj = image_meta_obj.ImageMeta.from_dict( {'min_disk': 0, 'min_ram': 0, 'properties': {'bdm_v2': True, 'block_device_mapping': [{}]}, 'size': 0, 'status': 'active'}) mock_image_meta_obj_from_ref.return_value = rescue_image_obj # Assert that UnsupportedRescueImage is raised self.assertRaises(exception.UnsupportedRescueImage, self.compute_api.rescue, self.context, instance, rescue_image_ref=mock.sentinel.rescue_image_ref) # Assert that we called from_image_ref using the provided ref mock_image_meta_obj_from_ref.assert_called_once_with( self.context, self.compute_api.image_api, mock.sentinel.rescue_image_ref) def test_set_admin_password_invalid_state(self): # Tests that InstanceInvalidState is raised when not ACTIVE. instance = self._create_instance_obj({'vm_state': vm_states.STOPPED}) self.assertRaises(exception.InstanceInvalidState, self.compute_api.set_admin_password, self.context, instance) def test_set_admin_password(self): # Ensure instance can have its admin password set. 
instance = self._create_instance_obj() @mock.patch.object(objects.Instance, 'save') @mock.patch.object(self.compute_api, '_record_action_start') @mock.patch.object(self.compute_api.compute_rpcapi, 'set_admin_password') def do_test(compute_rpcapi_mock, record_mock, instance_save_mock): # call the API self.compute_api.set_admin_password(self.context, instance, 'pass') # make our assertions instance_save_mock.assert_called_once_with( expected_task_state=[None]) record_mock.assert_called_once_with( self.context, instance, instance_actions.CHANGE_PASSWORD) compute_rpcapi_mock.assert_called_once_with( self.context, instance=instance, new_pass='pass') do_test() def _test_attach_interface_invalid_state(self, state): instance = self._create_instance_obj( params={'vm_state': state}) self.assertRaises(exception.InstanceInvalidState, self.compute_api.attach_interface, self.context, instance, '', '', '', []) def test_attach_interface_invalid_state(self): for state in [vm_states.BUILDING, vm_states.DELETED, vm_states.ERROR, vm_states.RESCUED, vm_states.RESIZED, vm_states.SOFT_DELETED, vm_states.SUSPENDED, vm_states.SHELVED, vm_states.SHELVED_OFFLOADED]: self._test_attach_interface_invalid_state(state) def _test_detach_interface_invalid_state(self, state): instance = self._create_instance_obj( params={'vm_state': state}) self.assertRaises(exception.InstanceInvalidState, self.compute_api.detach_interface, self.context, instance, '', '', '', []) def test_detach_interface_invalid_state(self): for state in [vm_states.BUILDING, vm_states.DELETED, vm_states.ERROR, vm_states.RESCUED, vm_states.RESIZED, vm_states.SOFT_DELETED, vm_states.SUSPENDED, vm_states.SHELVED, vm_states.SHELVED_OFFLOADED]: self._test_detach_interface_invalid_state(state) def _test_check_and_transform_bdm(self, block_device_mapping): instance_type = self._create_flavor() base_options = {'uuid': uuids.bdm_instance, 'image_ref': 'fake_image_ref', 'metadata': {}} image_meta = {'status': 'active', 'name': 'image_name', 'deleted': False, 'container_format': 'bare', 'id': 'image_id'} legacy_bdm = False block_device_mapping = block_device_mapping self.assertRaises(exception.InvalidRequest, self.compute_api._check_and_transform_bdm, self.context, base_options, instance_type, image_meta, 1, 1, block_device_mapping, legacy_bdm) def test_check_and_transform_bdm_source_volume(self): block_device_mapping = [{'boot_index': 0, 'device_name': None, 'image_id': 'image_id', 'source_type': 'image'}, {'device_name': '/dev/vda', 'source_type': 'volume', 'destination_type': 'volume', 'device_type': None, 'volume_id': 'volume_id'}] self._test_check_and_transform_bdm(block_device_mapping) def test_check_and_transform_bdm_source_snapshot(self): block_device_mapping = [{'boot_index': 0, 'device_name': None, 'image_id': 'image_id', 'source_type': 'image'}, {'device_name': '/dev/vda', 'source_type': 'snapshot', 'destination_type': 'volume', 'device_type': None, 'volume_id': 'volume_id'}] self._test_check_and_transform_bdm(block_device_mapping) def test_bdm_validate_set_size_and_instance(self): swap_size = 42 ephemeral_size = 24 instance = self._create_instance_obj() instance_type = self._create_flavor(swap=swap_size, ephemeral_gb=ephemeral_size) block_device_mapping = [ {'device_name': '/dev/sda1', 'source_type': 'snapshot', 'destination_type': 'volume', 'snapshot_id': '00000000-aaaa-bbbb-cccc-000000000000', 'delete_on_termination': False, 'boot_index': 0}, {'device_name': '/dev/sdb2', 'source_type': 'blank', 'destination_type': 'local', 'guest_format': 'swap', 
'delete_on_termination': False}, {'device_name': '/dev/sdb3', 'source_type': 'blank', 'destination_type': 'local', 'guest_format': 'ext3', 'delete_on_termination': False}] block_device_mapping = ( block_device_obj.block_device_make_list_from_dicts( self.context, map(fake_block_device.AnonFakeDbBlockDeviceDict, block_device_mapping))) with mock.patch.object(self.compute_api, '_validate_bdm'): image_cache = volumes = {} bdms = self.compute_api._bdm_validate_set_size_and_instance( self.context, instance, instance_type, block_device_mapping, image_cache, volumes) expected = [{'device_name': '/dev/sda1', 'source_type': 'snapshot', 'destination_type': 'volume', 'snapshot_id': '00000000-aaaa-bbbb-cccc-000000000000', 'delete_on_termination': False, 'boot_index': 0}, {'device_name': '/dev/sdb2', 'source_type': 'blank', 'destination_type': 'local', 'guest_format': 'swap', 'delete_on_termination': False}, {'device_name': '/dev/sdb3', 'source_type': 'blank', 'destination_type': 'local', 'delete_on_termination': False}] # Check that the bdm matches what was asked for and that instance_uuid # and volume_size are set properly. for exp, bdm in zip(expected, bdms): self.assertEqual(exp['device_name'], bdm.device_name) self.assertEqual(exp['destination_type'], bdm.destination_type) self.assertEqual(exp['source_type'], bdm.source_type) self.assertEqual(exp['delete_on_termination'], bdm.delete_on_termination) self.assertEqual(instance.uuid, bdm.instance_uuid) self.assertEqual(swap_size, bdms[1].volume_size) self.assertEqual(ephemeral_size, bdms[2].volume_size) @mock.patch.object(objects.BuildRequestList, 'get_by_filters') @mock.patch('nova.compute.instance_list.get_instance_objects_sorted') @mock.patch.object(objects.CellMapping, 'get_by_uuid') def test_tenant_to_project_conversion(self, mock_cell_map_get, mock_get, mock_buildreq_get): mock_cell_map_get.side_effect = exception.CellMappingNotFound( uuid='fake') mock_get.return_value = objects.InstanceList(objects=[]), list() api = compute_api.API() api.get_all(self.context, search_opts={'tenant_id': 'foo'}) filters = mock_get.call_args_list[0][0][1] self.assertEqual({'project_id': 'foo'}, filters) def test_populate_instance_names_host_name(self): params = dict(display_name="vm1") instance = self._create_instance_obj(params=params) self.compute_api._populate_instance_names(instance, 1, 0) self.assertEqual('vm1', instance.hostname) def test_populate_instance_names_host_name_is_empty(self): params = dict(display_name=u'\u865a\u62df\u673a\u662f\u4e2d\u6587') instance = self._create_instance_obj(params=params) self.compute_api._populate_instance_names(instance, 1, 0) self.assertEqual('Server-%s' % instance.uuid, instance.hostname) def test_populate_instance_names_host_name_multi(self): params = dict(display_name="vm") instance = self._create_instance_obj(params=params) self.compute_api._populate_instance_names(instance, 2, 1) self.assertEqual('vm-2', instance.hostname) def test_populate_instance_names_host_name_is_empty_multi(self): params = dict(display_name=u'\u865a\u62df\u673a\u662f\u4e2d\u6587') instance = self._create_instance_obj(params=params) self.compute_api._populate_instance_names(instance, 2, 1) self.assertEqual('Server-%s' % instance.uuid, instance.hostname) def test_host_statuses(self): five_min_ago = timeutils.utcnow() - datetime.timedelta(minutes=5) instances = [ objects.Instance(uuid=uuids.instance_1, host='host1', services= self._obj_to_list_obj(objects.ServiceList( self.context), objects.Service(id=0, host='host1', disabled=True, 
forced_down=True, binary='nova-compute'))), objects.Instance(uuid=uuids.instance_2, host='host2', services= self._obj_to_list_obj(objects.ServiceList( self.context), objects.Service(id=0, host='host2', disabled=True, forced_down=False, binary='nova-compute'))), objects.Instance(uuid=uuids.instance_3, host='host3', services= self._obj_to_list_obj(objects.ServiceList( self.context), objects.Service(id=0, host='host3', disabled=False, last_seen_up=five_min_ago, forced_down=False, binary='nova-compute'))), objects.Instance(uuid=uuids.instance_4, host='host4', services= self._obj_to_list_obj(objects.ServiceList( self.context), objects.Service(id=0, host='host4', disabled=False, last_seen_up=timeutils.utcnow(), forced_down=False, binary='nova-compute'))), objects.Instance(uuid=uuids.instance_5, host='host5', services= objects.ServiceList()), objects.Instance(uuid=uuids.instance_6, host=None, services= self._obj_to_list_obj(objects.ServiceList( self.context), objects.Service(id=0, host='host6', disabled=True, forced_down=False, binary='nova-compute'))), objects.Instance(uuid=uuids.instance_7, host='host2', services= self._obj_to_list_obj(objects.ServiceList( self.context), objects.Service(id=0, host='host2', disabled=True, forced_down=False, binary='nova-compute'))) ] host_statuses = self.compute_api.get_instances_host_statuses( instances) expect_statuses = {uuids.instance_1: fields_obj.HostStatus.DOWN, uuids.instance_2: fields_obj.HostStatus.MAINTENANCE, uuids.instance_3: fields_obj.HostStatus.UNKNOWN, uuids.instance_4: fields_obj.HostStatus.UP, uuids.instance_5: fields_obj.HostStatus.NONE, uuids.instance_6: fields_obj.HostStatus.NONE, uuids.instance_7: fields_obj.HostStatus.MAINTENANCE} for instance in instances: self.assertEqual(expect_statuses[instance.uuid], host_statuses[instance.uuid]) @mock.patch.object(objects.Migration, 'get_by_id_and_instance') @mock.patch.object(objects.InstanceAction, 'action_start') def test_live_migrate_force_complete_succeeded( self, action_start, get_by_id_and_instance): rpcapi = self.compute_api.compute_rpcapi instance = self._create_instance_obj() instance.task_state = task_states.MIGRATING migration = objects.Migration() migration.id = 0 migration.status = 'running' get_by_id_and_instance.return_value = migration with mock.patch.object( rpcapi, 'live_migration_force_complete') as lm_force_complete: self.compute_api.live_migrate_force_complete( self.context, instance, migration) lm_force_complete.assert_called_once_with(self.context, instance, migration) action_start.assert_called_once_with( self.context, instance.uuid, 'live_migration_force_complete', want_result=False) @mock.patch.object(objects.Migration, 'get_by_id_and_instance') def test_live_migrate_force_complete_invalid_migration_state( self, get_by_id_and_instance): instance = self._create_instance_obj() instance.task_state = task_states.MIGRATING migration = objects.Migration() migration.id = 0 migration.status = 'error' get_by_id_and_instance.return_value = migration self.assertRaises(exception.InvalidMigrationState, self.compute_api.live_migrate_force_complete, self.context, instance, migration.id) def test_live_migrate_force_complete_invalid_vm_state(self): instance = self._create_instance_obj() instance.task_state = None self.assertRaises(exception.InstanceInvalidState, self.compute_api.live_migrate_force_complete, self.context, instance, '1') def _get_migration(self, migration_id, status, migration_type): migration = objects.Migration() migration.id = migration_id migration.status = status 
migration.migration_type = migration_type return migration @mock.patch('nova.compute.api.API._record_action_start') @mock.patch.object(compute_rpcapi.ComputeAPI, 'live_migration_abort') @mock.patch.object(objects.Migration, 'get_by_id_and_instance') def test_live_migrate_abort_succeeded(self, mock_get_migration, mock_lm_abort, mock_rec_action): instance = self._create_instance_obj() instance.task_state = task_states.MIGRATING migration = self._get_migration(21, 'running', 'live-migration') mock_get_migration.return_value = migration self.compute_api.live_migrate_abort(self.context, instance, migration.id) mock_rec_action.assert_called_once_with(self.context, instance, instance_actions.LIVE_MIGRATION_CANCEL) mock_lm_abort.assert_called_once_with(self.context, instance, migration.id) @mock.patch('nova.compute.api.API._record_action_start') @mock.patch.object(compute_rpcapi.ComputeAPI, 'live_migration_abort') @mock.patch.object(objects.Migration, 'get_by_id_and_instance') def test_live_migrate_abort_in_queue_succeeded(self, mock_get_migration, mock_lm_abort, mock_rec_action): instance = self._create_instance_obj() instance.task_state = task_states.MIGRATING for migration_status in ('queued', 'preparing'): migration = self._get_migration( 21, migration_status, 'live-migration') mock_get_migration.return_value = migration self.compute_api.live_migrate_abort(self.context, instance, migration.id, support_abort_in_queue=True) mock_rec_action.assert_called_once_with( self.context, instance, instance_actions.LIVE_MIGRATION_CANCEL) mock_lm_abort.assert_called_once_with(self.context, instance, migration.id) mock_get_migration.reset_mock() mock_rec_action.reset_mock() mock_lm_abort.reset_mock() @mock.patch.object(objects.Migration, 'get_by_id_and_instance') def test_live_migration_abort_in_queue_old_microversion_fails( self, mock_get_migration): instance = self._create_instance_obj() instance.task_state = task_states.MIGRATING migration = self._get_migration(21, 'queued', 'live-migration') mock_get_migration.return_value = migration self.assertRaises(exception.InvalidMigrationState, self.compute_api.live_migrate_abort, self.context, instance, migration.id, support_abort_in_queue=False) @mock.patch.object(objects.Migration, 'get_by_id_and_instance') def test_live_migration_abort_wrong_migration_status(self, mock_get_migration): instance = self._create_instance_obj() instance.task_state = task_states.MIGRATING migration = self._get_migration(21, 'completed', 'live-migration') mock_get_migration.return_value = migration self.assertRaises(exception.InvalidMigrationState, self.compute_api.live_migrate_abort, self.context, instance, migration.id) def test_check_requested_networks_no_requested_networks(self): # When there are no requested_networks we call validate_networks on # the network API and return the results. with mock.patch.object(self.compute_api.network_api, 'validate_networks', return_value=3): count = self.compute_api._check_requested_networks( self.context, None, 5) self.assertEqual(3, count) def test_check_requested_networks_no_allocate(self): # When requested_networks is the single 'none' case for no allocation, # we don't validate networks and return the count passed in. 
requested_networks = ( objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id='none')])) with mock.patch.object(self.compute_api.network_api, 'validate_networks') as validate: count = self.compute_api._check_requested_networks( self.context, requested_networks, 5) self.assertEqual(5, count) self.assertFalse(validate.called) def test_check_requested_networks_auto_allocate(self): # When requested_networks is the single 'auto' case for allocation, # we validate networks and return the results. requested_networks = ( objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id='auto')])) with mock.patch.object(self.compute_api.network_api, 'validate_networks', return_value=4): count = self.compute_api._check_requested_networks( self.context, requested_networks, 5) self.assertEqual(4, count) @mock.patch.object(objects.InstanceMapping, 'save') @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') def test_update_queued_for_deletion(self, mock_get, mock_save): uuid = uuids.inst inst = objects.Instance(uuid=uuid) im = objects.InstanceMapping(instance_uuid=uuid) mock_get.return_value = im self.compute_api._update_queued_for_deletion( self.context, inst.uuid, True) self.assertTrue(im.queued_for_delete) mock_get.assert_called_once_with(self.context, inst.uuid) mock_save.assert_called_once_with() @mock.patch.object(objects.InstanceMappingList, 'get_not_deleted_by_cell_and_project') def test_generate_minimal_construct_for_down_cells(self, mock_get_ims): im1 = objects.InstanceMapping(instance_uuid=uuids.inst1, cell_id=1, project_id='fake', created_at=None, queued_for_delete=False) mock_get_ims.return_value = [im1] down_cell_uuids = [uuids.cell1, uuids.cell2, uuids.cell3] result = self.compute_api._generate_minimal_construct_for_down_cells( self.context, down_cell_uuids, [self.context.project_id], None) for inst in result: self.assertEqual(inst.uuid, im1.instance_uuid) self.assertIn('created_at', inst) # minimal construct doesn't contain the usual keys self.assertNotIn('display_name', inst) self.assertEqual(3, mock_get_ims.call_count) @mock.patch.object(objects.InstanceMappingList, 'get_not_deleted_by_cell_and_project') def test_generate_minimal_construct_for_down_cells_limited(self, mock_get_ims): im1 = objects.InstanceMapping(instance_uuid=uuids.inst1, cell_id=1, project_id='fake', created_at=None, queued_for_delete=False) # If this gets called a third time, it'll explode, thus asserting # that we break out of the loop once the limit is reached mock_get_ims.side_effect = [[im1, im1], [im1]] down_cell_uuids = [uuids.cell1, uuids.cell2, uuids.cell3] result = self.compute_api._generate_minimal_construct_for_down_cells( self.context, down_cell_uuids, [self.context.project_id], 3) for inst in result: self.assertEqual(inst.uuid, im1.instance_uuid) self.assertIn('created_at', inst) # minimal construct doesn't contain the usual keys self.assertNotIn('display_name', inst) # Two instances at limit 3 from first cell, one at limit 1 from the # second, no third call. 
self.assertEqual(2, mock_get_ims.call_count) mock_get_ims.assert_has_calls([ mock.call(self.context, uuids.cell1, [self.context.project_id], limit=3), mock.call(self.context, uuids.cell2, [self.context.project_id], limit=1), ]) @mock.patch.object(objects.BuildRequestList, 'get_by_filters') @mock.patch.object(objects.InstanceMappingList, 'get_not_deleted_by_cell_and_project') def test_get_all_without_cell_down_support(self, mock_get_ims, mock_buildreq_get): mock_buildreq_get.return_value = objects.BuildRequestList() im1 = objects.InstanceMapping(instance_uuid=uuids.inst1, cell_id=1, project_id='fake', created_at=None, queued_for_delete=False) mock_get_ims.return_value = [im1] cell_instances = self._list_of_instances(2) with mock.patch('nova.compute.instance_list.' 'get_instance_objects_sorted') as mock_inst_get: mock_inst_get.return_value = objects.InstanceList( self.context, objects=cell_instances), [uuids.cell1] insts = self.compute_api.get_all(self.context, cell_down_support=False) fields = ['metadata', 'info_cache', 'security_groups'] mock_inst_get.assert_called_once_with(self.context, {}, None, None, fields, None, None, cell_down_support=False) for i, instance in enumerate(cell_instances): self.assertEqual(instance, insts[i]) mock_get_ims.assert_not_called() @mock.patch.object(objects.BuildRequestList, 'get_by_filters') @mock.patch.object(objects.InstanceMappingList, 'get_not_deleted_by_cell_and_project') def test_get_all_with_cell_down_support(self, mock_get_ims, mock_buildreq_get): mock_buildreq_get.return_value = objects.BuildRequestList() im = objects.InstanceMapping(context=self.context, instance_uuid=uuids.inst1, cell_id=1, project_id='fake', created_at=None, queued_for_delete=False) mock_get_ims.return_value = [im] cell_instances = self._list_of_instances(2) full_instances = objects.InstanceList(self.context, objects=cell_instances) inst = objects.Instance(context=self.context, uuid=im.instance_uuid, project_id=im.project_id, created_at=im.created_at) partial_instances = objects.InstanceList(self.context, objects=[inst]) with mock.patch('nova.compute.instance_list.' 'get_instance_objects_sorted') as mock_inst_get: mock_inst_get.return_value = objects.InstanceList( self.context, objects=cell_instances), [uuids.cell1] insts = self.compute_api.get_all(self.context, limit=3, cell_down_support=True) fields = ['metadata', 'info_cache', 'security_groups'] mock_inst_get.assert_called_once_with(self.context, {}, 3, None, fields, None, None, cell_down_support=True) for i, instance in enumerate(partial_instances + full_instances): self.assertTrue(obj_base.obj_equal_prims(instance, insts[i])) # With an original limit of 3, and 0 build requests but 2 instances # from "up" cells, we should only get at most 1 instance mapping # to fill the limit. 
mock_get_ims.assert_called_once_with(self.context, uuids.cell1, self.context.project_id, limit=1) @mock.patch.object(objects.BuildRequestList, 'get_by_filters') @mock.patch.object(objects.InstanceMappingList, 'get_not_deleted_by_cell_and_project') def test_get_all_with_cell_down_support_all_tenants(self, mock_get_ims, mock_buildreq_get): mock_buildreq_get.return_value = objects.BuildRequestList() im = objects.InstanceMapping(context=self.context, instance_uuid=uuids.inst1, cell_id=1, project_id='fake', created_at=None, queued_for_delete=False) mock_get_ims.return_value = [im] inst = objects.Instance(context=self.context, uuid=im.instance_uuid, project_id=im.project_id, created_at=im.created_at) partial_instances = objects.InstanceList(self.context, objects=[inst]) with mock.patch('nova.compute.instance_list.' 'get_instance_objects_sorted') as mock_inst_get: mock_inst_get.return_value = objects.InstanceList( partial_instances), [uuids.cell1] insts = self.compute_api.get_all(self.context, limit=3, cell_down_support=True, all_tenants=True) for i, instance in enumerate(partial_instances): self.assertTrue(obj_base.obj_equal_prims(instance, insts[i])) # get_not_deleted_by_cell_and_project is called with None # project_id because of the all_tenants case. mock_get_ims.assert_called_once_with(self.context, uuids.cell1, None, limit=3) @mock.patch('nova.compute.api.API._save_user_id_in_instance_mapping', new=mock.MagicMock()) @mock.patch.object(objects.Instance, 'get_by_uuid') def test_get_instance_from_cell_success(self, mock_get_inst): cell_mapping = objects.CellMapping(uuid=uuids.cell1, name='1', id=1) im = objects.InstanceMapping(instance_uuid=uuids.inst, cell_mapping=cell_mapping) mock_get_inst.return_value = objects.Instance(uuid=uuids.inst) result = self.compute_api._get_instance_from_cell(self.context, im, [], True) self.assertEqual(uuids.inst, result.uuid) mock_get_inst.assert_called_once() @mock.patch.object(objects.Instance, 'get_by_uuid') def test_get_instance_from_cell_failure(self, mock_get_inst): # Make sure InstanceNotFound is bubbled up and not treated like # other errors mock_get_inst.side_effect = exception.InstanceNotFound( instance_id=uuids.inst) cell_mapping = objects.CellMapping(uuid=uuids.cell1, name='1', id=1) im = objects.InstanceMapping(instance_uuid=uuids.inst, cell_mapping=cell_mapping) exp = self.assertRaises(exception.InstanceNotFound, self.compute_api._get_instance_from_cell, self.context, im, [], False) self.assertIn('could not be found', six.text_type(exp)) @mock.patch('nova.compute.api.API._save_user_id_in_instance_mapping') @mock.patch.object(objects.RequestSpec, 'get_by_instance_uuid') @mock.patch('nova.context.scatter_gather_cells') def test_get_instance_with_cell_down_support(self, mock_sg, mock_rs, mock_save_uid): cell_mapping = objects.CellMapping(uuid=uuids.cell1, name='1', id=1) im1 = objects.InstanceMapping(instance_uuid=uuids.inst1, cell_mapping=cell_mapping, queued_for_delete=True) im2 = objects.InstanceMapping(instance_uuid=uuids.inst2, cell_mapping=cell_mapping, queued_for_delete=False, project_id='fake', created_at=None) mock_sg.return_value = { uuids.cell1: context.did_not_respond_sentinel } # No cell down support, error means we return 500 exp = self.assertRaises(exception.NovaException, self.compute_api._get_instance_from_cell, self.context, im1, [], False) self.assertIn('info is not available', six.text_type(exp)) # Have cell down support, error + queued_for_delete = NotFound exp = self.assertRaises(exception.InstanceNotFound, 
self.compute_api._get_instance_from_cell, self.context, im1, [], True) self.assertIn('could not be found', six.text_type(exp)) # Have cell down support, error + archived reqspec = NotFound mock_rs.side_effect = exception.RequestSpecNotFound( instance_uuid=uuids.inst2) exp = self.assertRaises(exception.InstanceNotFound, self.compute_api._get_instance_from_cell, self.context, im2, [], True) self.assertIn('could not be found', six.text_type(exp)) # Have cell down support, error + reqspec + not queued_for_delete # means we return a minimal instance req_spec = objects.RequestSpec(instance_uuid=uuids.inst2, user_id='fake', flavor=objects.Flavor(name='fake1'), image=objects.ImageMeta(id=uuids.image, name='fake1'), availability_zone='nova') mock_rs.return_value = req_spec mock_rs.side_effect = None result = self.compute_api._get_instance_from_cell(self.context, im2, [], True) self.assertIn('user_id', result) self.assertNotIn('display_name', result) self.assertEqual(uuids.inst2, result.uuid) self.assertEqual('nova', result.availability_zone) self.assertEqual(uuids.image, result.image_ref) # Verify that user_id is populated during a compute_api.get(). mock_save_uid.assert_called_once_with(im2, result) # Same as above, but boot-from-volume where image is not None but the # id of the image is not set. req_spec.image = objects.ImageMeta(name='fake1') result = self.compute_api._get_instance_from_cell(self.context, im2, [], True) self.assertIsNone(result.image_ref) # Same as above, but boot-from-volume where image is None req_spec.image = None result = self.compute_api._get_instance_from_cell(self.context, im2, [], True) self.assertIsNone(result.image_ref) @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid', side_effect=exception.InstanceMappingNotFound(uuid='fake')) @mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') @mock.patch.object(objects.Instance, 'get_by_uuid') def test_get_instance_no_mapping(self, mock_get_inst, mock_get_build_req, mock_get_inst_map): self.useFixture(nova_fixtures.AllServicesCurrent()) # No Mapping means NotFound self.assertRaises(exception.InstanceNotFound, self.compute_api.get, self.context, uuids.inst_uuid) @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') @mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') @mock.patch.object(objects.Instance, 'get_by_uuid') def test_get_instance_not_in_cell(self, mock_get_inst, mock_get_build_req, mock_get_inst_map): build_req_obj = fake_build_request.fake_req_obj(self.context) mock_get_inst_map.return_value = objects.InstanceMapping( cell_mapping=None) mock_get_build_req.return_value = build_req_obj instance = build_req_obj.instance mock_get_inst.return_value = instance inst_from_build_req = self.compute_api.get(self.context, instance.uuid) mock_get_inst_map.assert_called_once_with(self.context, instance.uuid) mock_get_build_req.assert_called_once_with(self.context, instance.uuid) self.assertEqual(instance, inst_from_build_req) @mock.patch('nova.compute.api.API._save_user_id_in_instance_mapping', new=mock.MagicMock()) @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') @mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') @mock.patch.object(objects.Instance, 'get_by_uuid') def test_get_instance_not_in_cell_buildreq_deleted_inst_in_cell( self, mock_get_inst, mock_get_build_req, mock_get_inst_map): # This test checks the following scenario: # The instance is not mapped to a cell, so it should be retrieved from # a BuildRequest object. 
However the BuildRequest does not exist # because the instance was put in a cell and mapped while # attempting to get the BuildRequest. So pull the instance from the # cell. self.useFixture(nova_fixtures.AllServicesCurrent()) build_req_obj = fake_build_request.fake_req_obj(self.context) instance = build_req_obj.instance inst_map = objects.InstanceMapping(cell_mapping=objects.CellMapping( uuid=uuids.cell), instance_uuid=instance.uuid) mock_get_inst_map.side_effect = [ objects.InstanceMapping(cell_mapping=None), inst_map] mock_get_build_req.side_effect = exception.BuildRequestNotFound( uuid=instance.uuid) mock_get_inst.return_value = instance inst_from_get = self.compute_api.get(self.context, instance.uuid) inst_map_calls = [mock.call(self.context, instance.uuid), mock.call(self.context, instance.uuid)] mock_get_inst_map.assert_has_calls(inst_map_calls) self.assertEqual(2, mock_get_inst_map.call_count) mock_get_build_req.assert_called_once_with(self.context, instance.uuid) mock_get_inst.assert_called_once_with(self.context, instance.uuid, expected_attrs=[ 'metadata', 'system_metadata', 'security_groups', 'info_cache']) self.assertEqual(instance, inst_from_get) @mock.patch.object(context, 'target_cell') @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') @mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') @mock.patch.object(objects.Instance, 'get_by_uuid') def test_get_instance_not_in_cell_buildreq_deleted_inst_still_not_in_cell( self, mock_get_inst, mock_get_build_req, mock_get_inst_map, mock_target_cell): # This test checks the following scenario: # The instance is not mapped to a cell, so it should be retrieved from # a BuildRequest object. However the BuildRequest does not exist which # means it should now be possible to find the instance in a cell db. # But the instance is not mapped which means the cellsv2 migration has # not occurred in this scenario, so the instance is pulled from the # configured Nova db. # TODO(alaski): The tested case will eventually be an error condition. # But until we force cellsv2 migrations we need this to work. 
self.useFixture(nova_fixtures.AllServicesCurrent()) build_req_obj = fake_build_request.fake_req_obj(self.context) instance = build_req_obj.instance mock_get_inst_map.side_effect = [ objects.InstanceMapping(cell_mapping=None), objects.InstanceMapping(cell_mapping=None)] mock_get_build_req.side_effect = exception.BuildRequestNotFound( uuid=instance.uuid) mock_get_inst.return_value = instance self.assertRaises(exception.InstanceNotFound, self.compute_api.get, self.context, instance.uuid) @mock.patch('nova.objects.InstanceMapping.save') def test_save_user_id_in_instance_mapping(self, im_save): # Verify user_id is populated if it not set im = objects.InstanceMapping() i = objects.Instance(user_id='fake') self.compute_api._save_user_id_in_instance_mapping(im, i) self.assertEqual(im.user_id, i.user_id) im_save.assert_called_once_with() # Verify user_id is not saved if it is already set im_save.reset_mock() im.user_id = 'fake-other' self.compute_api._save_user_id_in_instance_mapping(im, i) self.assertNotEqual(im.user_id, i.user_id) im_save.assert_not_called() # Verify user_id is not saved if it is None im_save.reset_mock() im = objects.InstanceMapping() i = objects.Instance(user_id=None) self.compute_api._save_user_id_in_instance_mapping(im, i) self.assertNotIn('user_id', im) im_save.assert_not_called() @mock.patch('nova.compute.api.API._save_user_id_in_instance_mapping') @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') @mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') @mock.patch.object(objects.Instance, 'get_by_uuid') def test_get_instance_in_cell(self, mock_get_inst, mock_get_build_req, mock_get_inst_map, mock_save_uid): self.useFixture(nova_fixtures.AllServicesCurrent()) # This just checks that the instance is looked up normally and not # synthesized from a BuildRequest object. Verification of pulling the # instance from the proper cell will be added when that capability is. instance = self._create_instance_obj() build_req_obj = fake_build_request.fake_req_obj(self.context) inst_map = objects.InstanceMapping(cell_mapping=objects.CellMapping( uuid=uuids.cell), instance_uuid=instance.uuid) mock_get_inst_map.return_value = inst_map mock_get_build_req.return_value = build_req_obj mock_get_inst.return_value = instance returned_inst = self.compute_api.get(self.context, instance.uuid) mock_get_build_req.assert_not_called() mock_get_inst_map.assert_called_once_with(self.context, instance.uuid) # Verify that user_id is populated during a compute_api.get(). 
mock_save_uid.assert_called_once_with(inst_map, instance) self.assertEqual(instance, returned_inst) mock_get_inst.assert_called_once_with(self.context, instance.uuid, expected_attrs=[ 'metadata', 'system_metadata', 'security_groups', 'info_cache']) def _list_of_instances(self, length=1): instances = [] for i in range(length): instances.append( fake_instance.fake_instance_obj(self.context, objects.Instance, uuid=uuidutils.generate_uuid()) ) return instances @mock.patch.object(objects.BuildRequestList, 'get_by_filters') @mock.patch.object(objects.CellMapping, 'get_by_uuid', side_effect=exception.CellMappingNotFound(uuid='fake')) def test_get_all_includes_build_requests(self, mock_cell_mapping_get, mock_buildreq_get): build_req_instances = self._list_of_instances(2) build_reqs = [objects.BuildRequest(self.context, instance=instance) for instance in build_req_instances] mock_buildreq_get.return_value = objects.BuildRequestList(self.context, objects=build_reqs) cell_instances = self._list_of_instances(2) with mock.patch('nova.compute.instance_list.' 'get_instance_objects_sorted') as mock_inst_get: mock_inst_get.return_value = objects.InstanceList( self.context, objects=cell_instances), list() instances = self.compute_api.get_all( self.context, search_opts={'foo': 'bar'}, limit=None, marker='fake-marker', sort_keys=['baz'], sort_dirs=['desc']) mock_buildreq_get.assert_called_once_with( self.context, {'foo': 'bar'}, limit=None, marker='fake-marker', sort_keys=['baz'], sort_dirs=['desc']) fields = ['metadata', 'info_cache', 'security_groups'] mock_inst_get.assert_called_once_with( self.context, {'foo': 'bar'}, None, None, fields, ['baz'], ['desc'], cell_down_support=False) for i, instance in enumerate(build_req_instances + cell_instances): self.assertEqual(instance, instances[i]) @mock.patch.object(objects.BuildRequestList, 'get_by_filters') @mock.patch.object(objects.CellMapping, 'get_by_uuid', side_effect=exception.CellMappingNotFound(uuid='fake')) def test_get_all_includes_build_requests_filter_dupes(self, mock_cell_mapping_get, mock_buildreq_get): build_req_instances = self._list_of_instances(2) build_reqs = [objects.BuildRequest(self.context, instance=instance) for instance in build_req_instances] mock_buildreq_get.return_value = objects.BuildRequestList(self.context, objects=build_reqs) cell_instances = self._list_of_instances(2) with mock.patch('nova.compute.instance_list.' 
'get_instance_objects_sorted') as mock_inst_get: # Insert one of the build_req_instances here so it shows up twice mock_inst_get.return_value = objects.InstanceList(self.context, objects=build_req_instances[:1] + cell_instances), list() instances = self.compute_api.get_all( self.context, search_opts={'foo': 'bar'}, limit=None, marker='fake-marker', sort_keys=['baz'], sort_dirs=['desc']) mock_buildreq_get.assert_called_once_with( self.context, {'foo': 'bar'}, limit=None, marker='fake-marker', sort_keys=['baz'], sort_dirs=['desc']) fields = ['metadata', 'info_cache', 'security_groups'] mock_inst_get.assert_called_once_with( self.context, {'foo': 'bar'}, None, None, fields, ['baz'], ['desc'], cell_down_support=False) for i, instance in enumerate(build_req_instances + cell_instances): self.assertEqual(instance, instances[i]) @mock.patch.object(objects.BuildRequestList, 'get_by_filters') @mock.patch.object(objects.CellMapping, 'get_by_uuid', side_effect=exception.CellMappingNotFound(uuid='fake')) def test_get_all_build_requests_decrement_limit(self, mock_cell_mapping_get, mock_buildreq_get): build_req_instances = self._list_of_instances(2) build_reqs = [objects.BuildRequest(self.context, instance=instance) for instance in build_req_instances] mock_buildreq_get.return_value = objects.BuildRequestList(self.context, objects=build_reqs) cell_instances = self._list_of_instances(2) with mock.patch('nova.compute.instance_list.' 'get_instance_objects_sorted') as mock_inst_get: mock_inst_get.return_value = objects.InstanceList( self.context, objects=cell_instances), list() instances = self.compute_api.get_all( self.context, search_opts={'foo': 'bar'}, limit=10, marker='fake-marker', sort_keys=['baz'], sort_dirs=['desc']) mock_buildreq_get.assert_called_once_with( self.context, {'foo': 'bar'}, limit=10, marker='fake-marker', sort_keys=['baz'], sort_dirs=['desc']) fields = ['metadata', 'info_cache', 'security_groups'] mock_inst_get.assert_called_once_with( self.context, {'foo': 'bar'}, 8, None, fields, ['baz'], ['desc'], cell_down_support=False) for i, instance in enumerate(build_req_instances + cell_instances): self.assertEqual(instance, instances[i]) @mock.patch.object(context, 'target_cell') @mock.patch.object(objects.BuildRequestList, 'get_by_filters') @mock.patch.object(objects.CellMapping, 'get_by_uuid') @mock.patch.object(objects.CellMappingList, 'get_all') def test_get_all_includes_build_request_cell0(self, mock_cm_get_all, mock_cell_mapping_get, mock_buildreq_get, mock_target_cell): build_req_instances = self._list_of_instances(2) build_reqs = [objects.BuildRequest(self.context, instance=instance) for instance in build_req_instances] mock_buildreq_get.return_value = objects.BuildRequestList(self.context, objects=build_reqs) cell_instances = self._list_of_instances(2) with mock.patch('nova.compute.instance_list.' 
'get_instance_objects_sorted') as mock_inst_get: mock_inst_get.return_value = objects.InstanceList( self.context, objects=cell_instances), [] instances = self.compute_api.get_all( self.context, search_opts={'foo': 'bar'}, limit=10, marker='fake-marker', sort_keys=['baz'], sort_dirs=['desc']) for cm in mock_cm_get_all.return_value: mock_target_cell.assert_any_call(self.context, cm) fields = ['metadata', 'info_cache', 'security_groups'] mock_inst_get.assert_called_once_with( mock.ANY, {'foo': 'bar'}, 8, None, fields, ['baz'], ['desc'], cell_down_support=False) for i, instance in enumerate(build_req_instances + cell_instances): self.assertEqual(instance, instances[i]) @mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') def test_update_existing_instance_not_in_cell(self, mock_instmap_get, mock_buildreq_get): mock_instmap_get.side_effect = exception.InstanceMappingNotFound( uuid='fake') self.useFixture(nova_fixtures.AllServicesCurrent()) instance = self._create_instance_obj() # Just making sure that the instance has been created self.assertIsNotNone(instance.id) updates = {'display_name': 'foo_updated'} with mock.patch.object(instance, 'save') as mock_inst_save: returned_instance = self.compute_api.update_instance( self.context, instance, updates) mock_buildreq_get.assert_not_called() self.assertEqual('foo_updated', returned_instance.display_name) mock_inst_save.assert_called_once_with() @mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') def test_update_existing_instance_in_cell(self, mock_instmap_get, mock_buildreq_get): inst_map = objects.InstanceMapping(cell_mapping=objects.CellMapping()) mock_instmap_get.return_value = inst_map self.useFixture(nova_fixtures.AllServicesCurrent()) instance = self._create_instance_obj() # Just making sure that the instance has been created self.assertIsNotNone(instance.id) updates = {'display_name': 'foo_updated'} with mock.patch.object(instance, 'save') as mock_inst_save: returned_instance = self.compute_api.update_instance( self.context, instance, updates) mock_buildreq_get.assert_not_called() self.assertEqual('foo_updated', returned_instance.display_name) mock_inst_save.assert_called_once_with() @mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') def test_update_future_instance_with_buildreq(self, mock_buildreq_get): # This test checks that a new instance which is not yet persisted in # DB can be found by looking up the BuildRequest object so we can # update it. 
build_req_obj = fake_build_request.fake_req_obj(self.context) mock_buildreq_get.return_value = build_req_obj self.useFixture(nova_fixtures.AllServicesCurrent()) instance = self._create_instance_obj() # Fake the fact that the instance is not yet persisted in DB del instance.id updates = {'display_name': 'foo_updated'} with mock.patch.object(build_req_obj, 'save') as mock_buildreq_save: returned_instance = self.compute_api.update_instance( self.context, instance, updates) mock_buildreq_get.assert_called_once_with(self.context, instance.uuid) self.assertEqual(build_req_obj.instance, returned_instance) mock_buildreq_save.assert_called_once_with() self.assertEqual('foo_updated', returned_instance.display_name) @mock.patch.object(context, 'target_cell') @mock.patch.object(objects.Instance, 'get_by_uuid') @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') @mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') def test_update_instance_in_cell_in_transition_state(self, mock_buildreq_get, mock_instmap_get, mock_inst_get, mock_target_cell): # This test is for covering the following case: # - when we lookup the instance initially, that one is not yet mapped # to a cell and consequently we retrieve it from the BuildRequest # - when we update the instance, that one could have been mapped # meanwhile and the BuildRequest was deleted # - if the instance is mapped, lookup the cell DB to find the instance self.useFixture(nova_fixtures.AllServicesCurrent()) instance = self._create_instance_obj() # Fake the fact that the instance is not yet persisted in DB del instance.id mock_buildreq_get.side_effect = exception.BuildRequestNotFound( uuid=instance.uuid) inst_map = objects.InstanceMapping(cell_mapping=objects.CellMapping()) mock_instmap_get.return_value = inst_map mock_inst_get.return_value = instance updates = {'display_name': 'foo_updated'} with mock.patch.object(instance, 'save') as mock_inst_save: returned_instance = self.compute_api.update_instance( self.context, instance, updates) mock_buildreq_get.assert_called_once_with(self.context, instance.uuid) mock_target_cell.assert_called_once_with(self.context, inst_map.cell_mapping) mock_inst_save.assert_called_once_with() self.assertEqual('foo_updated', returned_instance.display_name) @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') @mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') def test_update_instance_not_in_cell_in_transition_state(self, mock_buildreq_get, mock_instmap_get): # This test is for covering the following case: # - when we lookup the instance initially, that one is not yet mapped # to a cell and consequently we retrieve it from the BuildRequest # - when we update the instance, that one could have been mapped # meanwhile and the BuildRequest was deleted # - if the instance is not mapped, lookup the API DB to find whether # the instance was deleted self.useFixture(nova_fixtures.AllServicesCurrent()) instance = self._create_instance_obj() # Fake the fact that the instance is not yet persisted in DB del instance.id mock_buildreq_get.side_effect = exception.BuildRequestNotFound( uuid=instance.uuid) mock_instmap_get.side_effect = exception.InstanceMappingNotFound( uuid='fake') updates = {'display_name': 'foo_updated'} with mock.patch.object(instance, 'save') as mock_inst_save: self.assertRaises(exception.InstanceNotFound, self.compute_api.update_instance, self.context, instance, updates) mock_buildreq_get.assert_called_once_with(self.context, instance.uuid) 
mock_inst_save.assert_not_called() def test_populate_instance_for_create_neutron_secgroups(self): """Tests that a list of security groups passed in do not actually get stored on with the instance when using neutron. """ flavor = self._create_flavor() params = {'display_name': 'fake-instance'} instance = self._create_instance_obj(params, flavor) security_groups = objects.SecurityGroupList() security_groups.objects = [ secgroup_obj.SecurityGroup(uuid=uuids.secgroup_id) ] instance = self.compute_api._populate_instance_for_create( self.context, instance, {}, 0, security_groups, flavor, 1, False) self.assertEqual(0, len(instance.security_groups)) def test_retrieve_trusted_certs_object(self): ids = ['0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8', '674736e3-f25c-405c-8362-bbf991e0ce0a'] retrieved_certs = self.compute_api._retrieve_trusted_certs_object( self.context, ids) self.assertEqual(ids, retrieved_certs.ids) def test_retrieve_trusted_certs_object_conf(self): ids = ['conf-trusted-cert-1', 'conf-trusted-cert-2'] self.flags(verify_glance_signatures=True, group='glance') self.flags(enable_certificate_validation=True, group='glance') self.flags(default_trusted_certificate_ids='conf-trusted-cert-1, ' 'conf-trusted-cert-2', group='glance') retrieved_certs = self.compute_api._retrieve_trusted_certs_object( self.context, None) self.assertEqual(ids, retrieved_certs.ids) def test_retrieve_trusted_certs_object_none(self): self.flags(enable_certificate_validation=False, group='glance') self.assertIsNone( self.compute_api._retrieve_trusted_certs_object(self.context, None)) def test_retrieve_trusted_certs_object_empty(self): self.flags(enable_certificate_validation=False, group='glance') self.assertIsNone(self.compute_api._retrieve_trusted_certs_object( self.context, [])) @mock.patch('nova.objects.HostMapping.get_by_host') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_provider_by_name') def test__validate_host_or_node_with_host( self, mock_get_provider_by_name, mock_get_host_node, mock_get_hm): host = 'fake-host' node = None self.compute_api._validate_host_or_node(self.context, host, node) mock_get_hm.assert_called_once_with(self.context, 'fake-host') mock_get_host_node.assert_not_called() mock_get_provider_by_name.assert_not_called() @mock.patch('nova.objects.HostMapping.get_by_host') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_provider_by_name') def test__validate_host_or_node_with_invalid_host( self, mock_get_provider_by_name, mock_get_host_node, mock_get_hm): host = 'fake-host' node = None mock_get_hm.side_effect = exception.HostMappingNotFound(name=host) self.assertRaises(exception.ComputeHostNotFound, self.compute_api._validate_host_or_node, self.context, host, node) mock_get_hm.assert_called_once_with(self.context, 'fake-host') mock_get_host_node.assert_not_called() mock_get_provider_by_name.assert_not_called() @mock.patch('nova.context.target_cell') @mock.patch('nova.objects.HostMapping.get_by_host') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'get_provider_by_name') def test__validate_host_or_node_with_host_and_node( self, mock_get_provider_by_name, mock_get_host_node, mock_get_hm, mock_target_cell): host = 'fake-host' node = 'fake-host' self.compute_api._validate_host_or_node(self.context, host, node) mock_get_host_node.assert_called_once_with( mock_target_cell.return_value.__enter__.return_value, 'fake-host', 'fake-host') mock_get_hm.assert_called_once_with(self.context, 'fake-host') mock_get_provider_by_name.assert_not_called() @mock.patch('nova.context.target_cell') @mock.patch('nova.objects.HostMapping.get_by_host') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_provider_by_name') def test__validate_host_or_node_with_invalid_host_and_node( self, mock_get_provider_by_name, mock_get_host_node, mock_get_hm, mock_target_cell): host = 'fake-host' node = 'fake-host' mock_get_host_node.side_effect = ( exception.ComputeHostNotFound(host=host)) self.assertRaises(exception.ComputeHostNotFound, self.compute_api._validate_host_or_node, self.context, host, node) mock_get_host_node.assert_called_once_with( mock_target_cell.return_value.__enter__.return_value, 'fake-host', 'fake-host') mock_get_hm.assert_called_once_with(self.context, 'fake-host') mock_get_provider_by_name.assert_not_called() @mock.patch('nova.objects.HostMapping.get_by_host') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_provider_by_name') def test__validate_host_or_node_with_node( self, mock_get_provider_by_name, mock_get_host_node, mock_get_hm): host = None node = 'fake-host' self.compute_api._validate_host_or_node(self.context, host, node) mock_get_provider_by_name.assert_called_once_with( self.context, 'fake-host') mock_get_host_node.assert_not_called() mock_get_hm.assert_not_called() @mock.patch('nova.objects.HostMapping.get_by_host') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_provider_by_name') def test__validate_host_or_node_with_invalid_node( self, mock_get_provider_by_name, mock_get_host_node, mock_get_hm): host = None node = 'fake-host' mock_get_provider_by_name.side_effect = ( exception.ResourceProviderNotFound(name_or_uuid=node)) self.assertRaises(exception.ComputeHostNotFound, self.compute_api._validate_host_or_node, self.context, host, node) mock_get_provider_by_name.assert_called_once_with( self.context, 'fake-host') mock_get_host_node.assert_not_called() mock_get_hm.assert_not_called() @mock.patch('nova.objects.HostMapping.get_by_host') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'get_provider_by_name') def test__validate_host_or_node_with_rp_500_exception( self, mock_get_provider_by_name, mock_get_host_node, mock_get_hm): host = None node = 'fake-host' mock_get_provider_by_name.side_effect = ( exception.PlacementAPIConnectFailure()) self.assertRaises(exception.PlacementAPIConnectFailure, self.compute_api._validate_host_or_node, self.context, host, node) mock_get_provider_by_name.assert_called_once_with( self.context, 'fake-host') mock_get_host_node.assert_not_called() mock_get_hm.assert_not_called() class ComputeAPIUnitTestCase(_ComputeAPIUnitTestMixIn, test.NoDBTestCase): def setUp(self): super(ComputeAPIUnitTestCase, self).setUp() self.compute_api = compute_api.API() def test_resize_same_flavor_fails(self): self.assertRaises(exception.CannotResizeToSameFlavor, self._test_resize, same_flavor=True) def test_find_service_in_cell_error_case(self): self.assertRaises(exception.NovaException, compute_api._find_service_in_cell, self.context) @mock.patch('nova.objects.Service.get_by_id') def test_find_service_in_cell_targets(self, mock_get_service): mock_get_service.side_effect = [exception.NotFound(), mock.sentinel.service] compute_api.CELLS = [mock.sentinel.cell0, mock.sentinel.cell1] @contextlib.contextmanager def fake_target(context, cell): yield 'context-for-%s' % cell with mock.patch('nova.context.target_cell') as mock_target: mock_target.side_effect = fake_target s = compute_api._find_service_in_cell(self.context, service_id=123) self.assertEqual(mock.sentinel.service, s) cells = [call[0][0] for call in mock_get_service.call_args_list] self.assertEqual(['context-for-%s' % c for c in compute_api.CELLS], cells) def test__validate_numa_rebuild_non_numa(self): """Assert that a rebuild of an instance without a NUMA topology passes validation. """ flavor = objects.Flavor( id=42, vcpus=1, memory_mb=512, root_gb=1, extra_specs={}) instance = self._create_instance_obj(flavor=flavor) # we use a dict instead of image metadata object as # _validate_numa_rebuild constructs the object internally image = { 'id': uuids.image_id, 'status': 'foo', 'properties': {}} self.compute_api._validate_numa_rebuild(instance, image, flavor) def test__validate_numa_rebuild_no_conflict(self): """Assert that a rebuild of an instance without a change in NUMA topology passes validation. """ flavor = objects.Flavor( id=42, vcpus=1, memory_mb=512, root_gb=1, extra_specs={"hw:numa_nodes": 1}) instance = self._create_instance_obj(flavor=flavor) # we use a dict instead of image metadata object as # _validate_numa_rebuild constructs the object internally image = { 'id': uuids.image_id, 'status': 'foo', 'properties': {}} # The flavor creates a NUMA topology but the default image and the # rebuild image do not have any image properties so there will # be no conflict. self.compute_api._validate_numa_rebuild(instance, image, flavor) def test__validate_numa_rebuild_add_numa_toplogy(self): """Assert that a rebuild of an instance with a new image that requests a NUMA topology when the original instance did not have a NUMA topology is invalid. """ flavor = objects.Flavor( id=42, vcpus=1, memory_mb=512, root_gb=1, extra_specs={}) # _create_instance_obj results in the instance.image_meta being None. instance = self._create_instance_obj(flavor=flavor) # we use a dict instead of image metadata object as # _validate_numa_rebuild constructs the object internally image = { 'id': uuids.image_id, 'status': 'foo', 'properties': {"hw_numa_nodes": 1}} # The flavor and default image have no NUMA topology defined. 
The image # used to rebuild requests a NUMA topology which is not allowed as it # would alter the NUMA constraints. self.assertRaises( exception.ImageNUMATopologyRebuildConflict, self.compute_api._validate_numa_rebuild, instance, image, flavor) def test__validate_numa_rebuild_remove_numa_toplogy(self): """Assert that a rebuild of an instance with a new image that does not request a NUMA topology when the original image did is invalid if it would alter the instance's topology as a result. """ flavor = objects.Flavor( id=42, vcpus=1, memory_mb=512, root_gb=1, extra_specs={}) # _create_instance_obj results in the instance.image_meta being None. instance = self._create_instance_obj(flavor=flavor) # we use a dict instead of image metadata object as # _validate_numa_rebuild constructs the object internally old_image = { 'id': uuidutils.generate_uuid(), 'status': 'foo', 'properties': {"hw_numa_nodes": 1}} old_image_meta = objects.ImageMeta.from_dict(old_image) image = { 'id': uuidutils.generate_uuid(), 'status': 'foo', 'properties': {}} with mock.patch( 'nova.objects.instance.Instance.image_meta', new_callable=mock.PropertyMock(return_value=old_image_meta)): # The old image has a NUMA topology defined but the new image # used to rebuild does not. This would alter the NUMA constraints # and therefore should raise. self.assertRaises( exception.ImageNUMATopologyRebuildConflict, self.compute_api._validate_numa_rebuild, instance, image, flavor) def test__validate_numa_rebuild_alter_numa_toplogy(self): """Assert that a rebuild of an instance with a new image that requests a different NUMA topology than the original image is invalid. """ # NOTE(sean-k-mooney): we need to use 2 vcpus here or we will fail # with a different exception ImageNUMATopologyAsymmetric when we # construct the NUMA constraints as the rebuild image would result # in an invalid topology. flavor = objects.Flavor( id=42, vcpus=2, memory_mb=512, root_gb=1, extra_specs={}) # _create_instance_obj results in the instance.image_meta being None. instance = self._create_instance_obj(flavor=flavor) # we use a dict instead of image metadata object as # _validate_numa_rebuild constructs the object internally old_image = { 'id': uuidutils.generate_uuid(), 'status': 'foo', 'properties': {"hw_numa_nodes": 1}} old_image_meta = objects.ImageMeta.from_dict(old_image) image = { 'id': uuidutils.generate_uuid(), 'status': 'foo', 'properties': {"hw_numa_nodes": 2}} with mock.patch( 'nova.objects.instance.Instance.image_meta', new_callable=mock.PropertyMock(return_value=old_image_meta)): # the original image requested 1 NUMA node and the image used # for rebuild requests 2 so assert an error is raised. self.assertRaises( exception.ImageNUMATopologyRebuildConflict, self.compute_api._validate_numa_rebuild, instance, image, flavor) @mock.patch('nova.pci.request.get_pci_requests_from_flavor') def test_pmu_image_and_flavor_conflict(self, mock_request): """Tests that calling _validate_flavor_image_nostatus() with an image that conflicts with the flavor raises, but no exception is raised if there is no conflict. 
""" image = {'id': uuids.image_id, 'status': 'foo', 'properties': {'hw_pmu': False}} flavor = objects.Flavor( vcpus=1, memory_mb=512, root_gb=1, extra_specs={'hw:pmu': "true"}) self.assertRaises( exception.ImagePMUConflict, self.compute_api._validate_flavor_image_nostatus, self.context, image, flavor, None) @mock.patch('nova.pci.request.get_pci_requests_from_flavor') def test_pmu_image_and_flavor_same_value(self, mock_request): # assert that if both the image and flavor are set to the same value # no exception is raised and the function returns nothing. flavor = objects.Flavor( vcpus=1, memory_mb=512, root_gb=1, extra_specs={'hw:pmu': "true"}) image = {'id': uuids.image_id, 'status': 'foo', 'properties': {'hw_pmu': True}} self.assertIsNone(self.compute_api._validate_flavor_image_nostatus( self.context, image, flavor, None)) @mock.patch('nova.pci.request.get_pci_requests_from_flavor') def test_pmu_image_only(self, mock_request): # assert that if only the image metadata is set then it is valid flavor = objects.Flavor( vcpus=1, memory_mb=512, root_gb=1, extra_specs={}) # ensure string to bool conversion works for image metadata # property by using "yes". image = {'id': uuids.image_id, 'status': 'foo', 'properties': {'hw_pmu': "yes"}} self.assertIsNone(self.compute_api._validate_flavor_image_nostatus( self.context, image, flavor, None)) @mock.patch('nova.pci.request.get_pci_requests_from_flavor') def test_pmu_flavor_only(self, mock_request): # assert that if only the flavor extra_spec is set then it is valid # and test the string to bool conversion of "on" works. flavor = objects.Flavor( vcpus=1, memory_mb=512, root_gb=1, extra_specs={'hw:pmu': "on"}) image = {'id': uuids.image_id, 'status': 'foo', 'properties': {}} self.assertIsNone(self.compute_api._validate_flavor_image_nostatus( self.context, image, flavor, None)) @mock.patch('nova.pci.request.get_pci_requests_from_flavor') def test_pci_validated(self, mock_request): """Tests that calling _validate_flavor_image_nostatus() with validate_pci=True results in get_pci_requests_from_flavor() being called. """ image = {'id': uuids.image_id, 'status': 'foo'} flavor = self._create_flavor() self.compute_api._validate_flavor_image_nostatus( self.context, image, flavor, root_bdm=None, validate_pci=True) mock_request.assert_called_once_with(flavor) def test_validate_and_build_base_options_translate_neutron_secgroup(self): """Tests that _check_requested_secgroups will return a uuid for a requested Neutron security group and that will be returned from _validate_and_build_base_options """ instance_type = objects.Flavor(**test_flavor.fake_flavor) boot_meta = metadata = {} kernel_id = ramdisk_id = key_name = key_data = user_data = \ access_ip_v4 = access_ip_v6 = config_drive = \ auto_disk_config = reservation_id = None # This tests that 'default' is unchanged, but 'fake-security-group' # will be translated to a UUID for Neutron. 
requested_secgroups = ['default', 'fake-security-group'] # This will short-circuit _check_requested_networks requested_networks = objects.NetworkRequestList(objects=[ objects.NetworkRequest(network_id='none')]) max_count = 1 supports_port_resource_request = False with mock.patch( 'nova.network.security_group_api.validate_name', return_value=uuids.secgroup_uuid) as scget: base_options, max_network_count, key_pair, security_groups, \ network_metadata = ( self.compute_api._validate_and_build_base_options( self.context, instance_type, boot_meta, uuids.image_href, mock.sentinel.image_id, kernel_id, ramdisk_id, 'fake-display-name', 'fake-description', key_name, key_data, requested_secgroups, 'fake-az', user_data, metadata, access_ip_v4, access_ip_v6, requested_networks, config_drive, auto_disk_config, reservation_id, max_count, supports_port_resource_request ) ) # Assert the neutron security group API get method was called once # and only for the non-default security group name. scget.assert_called_once_with(self.context, 'fake-security-group') # Assert we translated the non-default secgroup name to uuid. self.assertItemsEqual(['default', uuids.secgroup_uuid], security_groups) @mock.patch('nova.compute.api.API._record_action_start') @mock.patch.object(compute_rpcapi.ComputeAPI, 'attach_interface') def test_tagged_interface_attach(self, mock_attach, mock_record): instance = self._create_instance_obj() self.compute_api.attach_interface(self.context, instance, None, None, None, tag='foo') mock_attach.assert_called_with(self.context, instance=instance, network_id=None, port_id=None, requested_ip=None, tag='foo') mock_record.assert_called_once_with( self.context, instance, instance_actions.ATTACH_INTERFACE) @mock.patch('nova.compute.api.API._record_action_start') def test_attach_interface_qos_aware_port(self, mock_record): instance = self._create_instance_obj() with mock.patch.object( self.compute_api.network_api, 'show_port', return_value={'port': { constants.RESOURCE_REQUEST: { 'resources': {'CUSTOM_RESOURCE_CLASS': 42} }}}) as mock_show_port: self.assertRaises( exception.AttachInterfaceWithQoSPolicyNotSupported, self.compute_api.attach_interface, self.context, instance, 'foo_net_id', 'foo_port_id', None ) mock_show_port.assert_called_once_with(self.context, 'foo_port_id') @mock.patch('nova.compute.api.API._record_action_start') @mock.patch.object(compute_rpcapi.ComputeAPI, 'detach_interface') def test_detach_interface(self, mock_detach, mock_record): instance = self._create_instance_obj() self.compute_api.detach_interface(self.context, instance, None) mock_detach.assert_called_with(self.context, instance=instance, port_id=None) mock_record.assert_called_once_with( self.context, instance, instance_actions.DETACH_INTERFACE) def test_check_attach_and_reserve_volume_multiattach_old_version(self): """Tests that _check_attach_and_reserve_volume fails if trying to use a multiattach volume with a microversion<2.60. 
""" instance = self._create_instance_obj() volume = {'id': uuids.volumeid, 'multiattach': True} bdm = objects.BlockDeviceMapping(volume_id=uuids.volumeid, instance_uuid=instance.uuid) self.assertRaises(exception.MultiattachNotSupportedOldMicroversion, self.compute_api._check_attach_and_reserve_volume, self.context, volume, instance, bdm, supports_multiattach=False) @mock.patch('nova.volume.cinder.API.get', return_value={'id': uuids.volumeid, 'multiattach': True}) def test_attach_volume_shelved_offloaded_fails( self, volume_get): """Tests that trying to attach a multiattach volume to a shelved offloaded instance fails because it's not supported. """ instance = self._create_instance_obj( params={'vm_state': vm_states.SHELVED_OFFLOADED}) with mock.patch.object( self.compute_api, '_check_volume_already_attached_to_instance', return_value=None): self.assertRaises(exception.MultiattachToShelvedNotSupported, self.compute_api.attach_volume, self.context, instance, uuids.volumeid) def test_validate_bdm_check_volume_type_raise_not_found(self): """Tests that _validate_bdm will fail if the requested volume type name or id does not match the volume types in Cinder. """ volume_types = [{'id': 'fake_volume_type_id_1', 'name': 'fake_lvm_1'}, {'id': 'fake_volume_type_id_2', 'name': 'fake_lvm_2'}] bdm = objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict( { 'uuid': uuids.image_id, 'source_type': 'image', 'destination_type': 'volume', 'device_name': 'vda', 'boot_index': 0, 'volume_size': 3, 'volume_type': 'lvm-1'})) self.assertRaises(exception.VolumeTypeNotFound, self.compute_api._check_requested_volume_type, bdm, 'lvm-1', volume_types) @mock.patch.object(neutron_api.API, 'has_substr_port_filtering_extension') @mock.patch.object(neutron_api.API, 'list_ports') @mock.patch.object(objects.BuildRequestList, 'get_by_filters', new_callable=mock.NonCallableMock) def test_get_all_ip_filter(self, mock_buildreq_get, mock_list_port, mock_check_ext): mock_check_ext.return_value = True cell_instances = self._list_of_instances(2) mock_list_port.return_value = { 'ports': [{'device_id': 'fake_device_id'}]} with mock.patch('nova.compute.instance_list.' 'get_instance_objects_sorted') as mock_inst_get: mock_inst_get.return_value = objects.InstanceList( self.context, objects=cell_instances), list() self.compute_api.get_all( self.context, search_opts={'ip': 'fake'}, limit=None, marker=None, sort_keys=['baz'], sort_dirs=['desc']) mock_list_port.assert_called_once_with( self.context, fixed_ips='ip_address_substr=fake', fields=['device_id']) fields = ['metadata', 'info_cache', 'security_groups'] mock_inst_get.assert_called_once_with( self.context, {'ip': 'fake', 'uuid': ['fake_device_id']}, None, None, fields, ['baz'], ['desc'], cell_down_support=False) @mock.patch.object(neutron_api.API, 'has_substr_port_filtering_extension') @mock.patch.object(neutron_api.API, 'list_ports') @mock.patch.object(objects.BuildRequestList, 'get_by_filters', new_callable=mock.NonCallableMock) def test_get_all_ip6_filter(self, mock_buildreq_get, mock_list_port, mock_check_ext): mock_check_ext.return_value = True cell_instances = self._list_of_instances(2) mock_list_port.return_value = { 'ports': [{'device_id': 'fake_device_id'}]} with mock.patch('nova.compute.instance_list.' 
'get_instance_objects_sorted') as mock_inst_get: mock_inst_get.return_value = objects.InstanceList( self.context, objects=cell_instances), list() self.compute_api.get_all( self.context, search_opts={'ip6': 'fake'}, limit=None, marker=None, sort_keys=['baz'], sort_dirs=['desc']) mock_list_port.assert_called_once_with( self.context, fixed_ips='ip_address_substr=fake', fields=['device_id']) fields = ['metadata', 'info_cache', 'security_groups'] mock_inst_get.assert_called_once_with( self.context, {'ip6': 'fake', 'uuid': ['fake_device_id']}, None, None, fields, ['baz'], ['desc'], cell_down_support=False) @mock.patch.object(neutron_api.API, 'has_substr_port_filtering_extension') @mock.patch.object(neutron_api.API, 'list_ports') @mock.patch.object(objects.BuildRequestList, 'get_by_filters', new_callable=mock.NonCallableMock) def test_get_all_ip_and_ip6_filter(self, mock_buildreq_get, mock_list_port, mock_check_ext): mock_check_ext.return_value = True cell_instances = self._list_of_instances(2) mock_list_port.return_value = { 'ports': [{'device_id': 'fake_device_id'}]} with mock.patch('nova.compute.instance_list.' 'get_instance_objects_sorted') as mock_inst_get: mock_inst_get.return_value = objects.InstanceList( self.context, objects=cell_instances), list() self.compute_api.get_all( self.context, search_opts={'ip': 'fake1', 'ip6': 'fake2'}, limit=None, marker=None, sort_keys=['baz'], sort_dirs=['desc']) mock_list_port.assert_has_calls([ mock.call( self.context, fixed_ips='ip_address_substr=fake1', fields=['device_id']), mock.call( self.context, fixed_ips='ip_address_substr=fake2', fields=['device_id']) ]) fields = ['metadata', 'info_cache', 'security_groups'] mock_inst_get.assert_called_once_with( self.context, {'ip': 'fake1', 'ip6': 'fake2', 'uuid': ['fake_device_id', 'fake_device_id']}, None, None, fields, ['baz'], ['desc'], cell_down_support=False) @mock.patch.object(neutron_api.API, 'has_substr_port_filtering_extension') @mock.patch.object(neutron_api.API, 'list_ports') def test_get_all_ip6_filter_exc(self, mock_list_port, mock_check_ext): mock_check_ext.return_value = True mock_list_port.side_effect = exception.InternalError('fake') instances = self.compute_api.get_all( self.context, search_opts={'ip6': 'fake'}, limit=None, marker='fake-marker', sort_keys=['baz'], sort_dirs=['desc']) mock_list_port.assert_called_once_with( self.context, fixed_ips='ip_address_substr=fake', fields=['device_id']) self.assertEqual([], instances.objects) @mock.patch.object(compute_utils, 'notify_about_instance_action') @mock.patch('nova.compute.api.API._delete_while_booting', return_value=False) @mock.patch('nova.compute.api.API._lookup_instance') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch('nova.context.RequestContext.elevated') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(compute_utils, 'notify_about_instance_usage') @mock.patch.object(objects.BlockDeviceMapping, 'destroy') @mock.patch.object(objects.Instance, 'destroy') def _test_delete_volume_backed_instance( self, vm_state, mock_instance_destroy, bdm_destroy, notify_about_instance_usage, mock_save, mock_elevated, bdm_get_by_instance_uuid, mock_lookup, _mock_del_booting, notify_about_instance_action): volume_id = uuidutils.generate_uuid() conn_info = {'connector': {'host': 'orig-host'}} bdms = [objects.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict( {'id': 42, 'volume_id': volume_id, 'source_type': 'volume', 'destination_type': 'volume', 'delete_on_termination': False, 
'connection_info': jsonutils.dumps(conn_info)}))] bdm_get_by_instance_uuid.return_value = bdms mock_elevated.return_value = self.context params = {'host': None, 'vm_state': vm_state} inst = self._create_instance_obj(params=params) mock_lookup.return_value = None, inst connector = conn_info['connector'] with mock.patch.object(self.compute_api.network_api, 'deallocate_for_instance') as mock_deallocate, \ mock.patch.object(self.compute_api.volume_api, 'terminate_connection') as mock_terminate_conn, \ mock.patch.object(self.compute_api.volume_api, 'detach') as mock_detach: self.compute_api.delete(self.context, inst) mock_deallocate.assert_called_once_with(self.context, inst) mock_detach.assert_called_once_with(self.context, volume_id, inst.uuid) mock_terminate_conn.assert_called_once_with(self.context, volume_id, connector) bdm_destroy.assert_called_once_with() def test_delete_volume_backed_instance_in_error(self): self._test_delete_volume_backed_instance(vm_states.ERROR) def test_delete_volume_backed_instance_in_shelved_offloaded(self): self._test_delete_volume_backed_instance(vm_states.SHELVED_OFFLOADED) def test_compute_api_host(self): self.assertTrue(hasattr(self.compute_api, 'host')) self.assertEqual(CONF.host, self.compute_api.host) @mock.patch('nova.scheduler.client.report.SchedulerReportClient') def test_placement_client_init(self, mock_report_client): """Tests to make sure that the construction of the placement client only happens once per API class instance. """ self.assertIsNone(self.compute_api._placementclient) # Access the property twice to make sure SchedulerReportClient is # only loaded once. for x in range(2): self.compute_api.placementclient mock_report_client.assert_called_once_with() def test_validate_host_for_cold_migrate_same_host_fails(self): """Asserts CannotMigrateToSameHost is raised when trying to cold migrate to the same host. """ instance = fake_instance.fake_instance_obj(self.context) self.assertRaises(exception.CannotMigrateToSameHost, self.compute_api._validate_host_for_cold_migrate, self.context, instance, instance.host, allow_cross_cell_resize=False) @mock.patch('nova.objects.ComputeNode.' 'get_first_node_by_host_for_old_compat') def test_validate_host_for_cold_migrate_diff_host_no_cross_cell( self, mock_cn_get): """Tests the scenario where allow_cross_cell_resize=False and the host is found in the same cell as the instance. """ instance = fake_instance.fake_instance_obj(self.context) node = self.compute_api._validate_host_for_cold_migrate( self.context, instance, uuids.host, allow_cross_cell_resize=False) self.assertIs(node, mock_cn_get.return_value) mock_cn_get.assert_called_once_with( self.context, uuids.host, use_slave=True) @mock.patch('nova.objects.HostMapping.get_by_host', side_effect=exception.HostMappingNotFound(name=uuids.host)) def test_validate_host_for_cold_migrate_cross_cell_host_mapping_not_found( self, mock_hm_get): """Tests the scenario where allow_cross_cell_resize=True but the HostMapping for the given host could not be found. """ instance = fake_instance.fake_instance_obj(self.context) self.assertRaises(exception.ComputeHostNotFound, self.compute_api._validate_host_for_cold_migrate, self.context, instance, uuids.host, allow_cross_cell_resize=True) mock_hm_get.assert_called_once_with(self.context, uuids.host) @mock.patch('nova.objects.HostMapping.get_by_host', return_value=objects.HostMapping( cell_mapping=objects.CellMapping(uuid=uuids.cell2))) @mock.patch('nova.context.target_cell') @mock.patch('nova.objects.ComputeNode.' 
'get_first_node_by_host_for_old_compat') def test_validate_host_for_cold_migrate_cross_cell( self, mock_cn_get, mock_target_cell, mock_hm_get): """Tests the scenario where allow_cross_cell_resize=True and the ComputeNode is pulled from the target cell defined by the HostMapping. """ instance = fake_instance.fake_instance_obj(self.context) node = self.compute_api._validate_host_for_cold_migrate( self.context, instance, uuids.host, allow_cross_cell_resize=True) self.assertIs(node, mock_cn_get.return_value) mock_hm_get.assert_called_once_with(self.context, uuids.host) # get_first_node_by_host_for_old_compat is called with a temporarily # cell-targeted context mock_cn_get.assert_called_once_with( mock_target_cell.return_value.__enter__.return_value, uuids.host, use_slave=True) def _test_get_migrations_sorted_filter_duplicates(self, migrations, expected): """Tests the cross-cell scenario where there are multiple migrations with the same UUID from different cells and only one should be returned. """ sort_keys = ['created_at', 'id'] sort_dirs = ['desc', 'desc'] filters = {'migration_type': 'resize'} limit = 1000 marker = None with mock.patch( 'nova.compute.migration_list.get_migration_objects_sorted', return_value=objects.MigrationList( objects=migrations)) as getter: sorted_migrations = self.compute_api.get_migrations_sorted( self.context, filters, sort_dirs=sort_dirs, sort_keys=sort_keys, limit=limit, marker=marker) self.assertEqual(1, len(sorted_migrations)) getter.assert_called_once_with( self.context, filters, limit, marker, sort_keys, sort_dirs) self.assertIs(expected, sorted_migrations[0]) def test_get_migrations_sorted_filter_duplicates(self): """Tests filtering duplicated Migration records where both have created_at and updated_at set. """ t1 = timeutils.utcnow() source_cell_migration = objects.Migration( uuid=uuids.migration, created_at=t1, updated_at=t1) t2 = t1 + datetime.timedelta(seconds=1) target_cell_migration = objects.Migration( uuid=uuids.migration, created_at=t2, updated_at=t2) self._test_get_migrations_sorted_filter_duplicates( [source_cell_migration, target_cell_migration], target_cell_migration) # Run it again in reverse. self._test_get_migrations_sorted_filter_duplicates( [target_cell_migration, source_cell_migration], target_cell_migration) def test_get_migrations_sorted_filter_duplicates_using_created_at(self): """Tests the cross-cell scenario where there are multiple migrations with the same UUID from different cells and only one should be returned. In this test the first Migration object to be processed has not been updated yet but is created after the second record to process. """ t1 = timeutils.utcnow() older = objects.Migration( uuid=uuids.migration, created_at=t1, updated_at=t1) t2 = t1 + datetime.timedelta(seconds=1) newer = objects.Migration( uuid=uuids.migration, created_at=t2, updated_at=None) self._test_get_migrations_sorted_filter_duplicates( [newer, older], newer) # Test with just created_at. older.updated_at = None self._test_get_migrations_sorted_filter_duplicates( [newer, older], newer) # Run it again in reverse. 
self._test_get_migrations_sorted_filter_duplicates( [older, newer], newer) @mock.patch('nova.servicegroup.api.API.service_is_up', return_value=True) @mock.patch('nova.objects.Migration.get_by_instance_and_status') def test_confirm_resize_cross_cell_move_true(self, mock_migration_get, mock_service_is_up): """Tests confirm_resize where Migration.cross_cell_move is True""" instance = fake_instance.fake_instance_obj( self.context, vm_state=vm_states.RESIZED, task_state=None, launched_at=timeutils.utcnow()) migration = objects.Migration(cross_cell_move=True) mock_migration_get.return_value = migration with test.nested( mock.patch.object(self.context, 'elevated', return_value=self.context), mock.patch.object(migration, 'save'), mock.patch.object(self.compute_api, '_record_action_start'), mock.patch.object(self.compute_api.compute_task_api, 'confirm_snapshot_based_resize'), mock.patch.object(self.compute_api, '_get_source_compute_service'), ) as ( mock_elevated, mock_migration_save, mock_record_action, mock_conductor_confirm, mock_get_service ): self.compute_api.confirm_resize(self.context, instance) mock_elevated.assert_called_once_with() mock_service_is_up.assert_called_once_with( mock_get_service.return_value) mock_migration_save.assert_called_once_with() self.assertEqual('confirming', migration.status) mock_record_action.assert_called_once_with( self.context, instance, instance_actions.CONFIRM_RESIZE) mock_conductor_confirm.assert_called_once_with( self.context, instance, migration) @mock.patch('nova.objects.service.get_minimum_version_all_cells') def test_allow_cross_cell_resize_default_false(self, mock_get_min_ver): """Based on the default policy this asserts nobody is allowed to perform cross-cell resize. """ instance = objects.Instance( project_id='fake-project', user_id='fake-user') self.assertFalse(self.compute_api._allow_cross_cell_resize( self.context, instance)) # We did not need to check the minimum nova-compute version since the # policy check failed. mock_get_min_ver.assert_not_called() @mock.patch('nova.objects.service.get_minimum_version_all_cells', return_value=compute_api.MIN_COMPUTE_CROSS_CELL_RESIZE - 1) def test_allow_cross_cell_resize_false_old_version(self, mock_get_min_ver): """Policy allows cross-cell resize but minimum nova-compute service version is not new enough. """ instance = objects.Instance( project_id='fake-project', user_id='fake-user', uuid=uuids.instance) with mock.patch.object(self.context, 'can', return_value=True) as can: self.assertFalse(self.compute_api._allow_cross_cell_resize( self.context, instance)) can.assert_called_once() mock_get_min_ver.assert_called_once_with( self.context, ['nova-compute']) @mock.patch('nova.network.neutron.API.get_requested_resource_for_instance', return_value=[objects.RequestGroup()]) @mock.patch('nova.objects.service.get_minimum_version_all_cells', return_value=compute_api.MIN_COMPUTE_CROSS_CELL_RESIZE) def test_allow_cross_cell_resize_false_port_with_resource_req( self, mock_get_min_ver, mock_get_res_req): """Policy allows cross-cell resize and the minimum nova-compute service version is new enough, but the instance has a port with a resource request, so cross-cell resize is not allowed.
""" instance = objects.Instance( project_id='fake-project', user_id='fake-user', uuid=uuids.instance) with mock.patch.object(self.context, 'can', return_value=True) as can: self.assertFalse(self.compute_api._allow_cross_cell_resize( self.context, instance)) can.assert_called_once() mock_get_min_ver.assert_called_once_with( self.context, ['nova-compute']) mock_get_res_req.assert_called_once_with(self.context, uuids.instance) @mock.patch('nova.network.neutron.API.get_requested_resource_for_instance', return_value=[]) @mock.patch('nova.objects.service.get_minimum_version_all_cells', return_value=compute_api.MIN_COMPUTE_CROSS_CELL_RESIZE) def test_allow_cross_cell_resize_true( self, mock_get_min_ver, mock_get_res_req): """Policy allows cross-cell resize and minimum nova-compute service version is new enough. """ instance = objects.Instance( project_id='fake-project', user_id='fake-user', uuid=uuids.instance) with mock.patch.object(self.context, 'can', return_value=True) as can: self.assertTrue(self.compute_api._allow_cross_cell_resize( self.context, instance)) can.assert_called_once() mock_get_min_ver.assert_called_once_with( self.context, ['nova-compute']) mock_get_res_req.assert_called_once_with(self.context, uuids.instance) def _test_block_accelerators(self, instance, args_info): @compute_api.block_accelerators def myfunc(self, context, instance, *args, **kwargs): args_info['args'] = (context, instance, *args) args_info['kwargs'] = dict(**kwargs) args = ('arg1', 'arg2') kwargs = {'arg3': 'dummy3', 'arg4': 'dummy4'} myfunc(mock.ANY, self.context, instance, *args, **kwargs) expected_args = (self.context, instance, *args) return expected_args, kwargs def test_block_accelerators_no_device_profile(self): instance = self._create_instance_obj() args_info = {} expected_args, kwargs = self._test_block_accelerators( instance, args_info) self.assertEqual(expected_args, args_info['args']) self.assertEqual(kwargs, args_info['kwargs']) def test_block_accelerators_with_device_profile(self): extra_specs = {'accel:device_profile': 'mydp'} flavor = self._create_flavor(extra_specs=extra_specs) instance = self._create_instance_obj(flavor=flavor) args_info = {} self.assertRaisesRegex(exception.ForbiddenWithAccelerators, 'Forbidden with instances that have accelerators.', self._test_block_accelerators, instance, args_info) # myfunc was not called self.assertEqual({}, args_info) class DiffDictTestCase(test.NoDBTestCase): """Unit tests for _diff_dict().""" def test_no_change(self): old = dict(a=1, b=2, c=3) new = dict(a=1, b=2, c=3) diff = compute_api._diff_dict(old, new) self.assertEqual(diff, {}) def test_new_key(self): old = dict(a=1, b=2, c=3) new = dict(a=1, b=2, c=3, d=4) diff = compute_api._diff_dict(old, new) self.assertEqual(diff, dict(d=['+', 4])) def test_changed_key(self): old = dict(a=1, b=2, c=3) new = dict(a=1, b=4, c=3) diff = compute_api._diff_dict(old, new) self.assertEqual(diff, dict(b=['+', 4])) def test_removed_key(self): old = dict(a=1, b=2, c=3) new = dict(a=1, c=3) diff = compute_api._diff_dict(old, new) self.assertEqual(diff, dict(b=['-'])) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/compute/test_compute_mgr.py0000664000175000017500000226341200000000000023272 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Unit tests for ComputeManager().""" import contextlib import copy import datetime import fixtures as std_fixtures import time from cinderclient import exceptions as cinder_exception from cursive import exception as cursive_exception import ddt from eventlet import event as eventlet_event from eventlet import timeout as eventlet_timeout from keystoneauth1 import exceptions as keystone_exception import mock import netaddr from oslo_log import log as logging import oslo_messaging as messaging from oslo_serialization import jsonutils from oslo_service import fixture as service_fixture from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from oslo_utils import uuidutils import six import testtools import nova from nova.compute import build_results from nova.compute import manager from nova.compute import power_state from nova.compute import resource_tracker from nova.compute import task_states from nova.compute import utils as compute_utils from nova.compute import vm_states from nova.conductor import api as conductor_api import nova.conf from nova import context from nova.db import api as db from nova import exception from nova.network import model as network_model from nova.network import neutron as neutronv2_api from nova import objects from nova.objects import base as base_obj from nova.objects import block_device as block_device_obj from nova.objects import fields from nova.objects import instance as instance_obj from nova.objects import migrate_data as migrate_data_obj from nova.objects import network_request as net_req_obj from nova.pci import request as pci_request from nova.scheduler.client import report from nova import test from nova.tests import fixtures from nova.tests.unit.api.openstack import fakes from nova.tests.unit.compute import fake_resource_tracker from nova.tests.unit import fake_block_device from nova.tests.unit import fake_flavor from nova.tests.unit import fake_instance from nova.tests.unit import fake_network from nova.tests.unit import fake_network_cache_model from nova.tests.unit import fake_notifier from nova.tests.unit.objects import test_instance_fault from nova.tests.unit.objects import test_instance_info_cache from nova.tests.unit.objects import test_instance_numa from nova import utils from nova.virt.block_device import DriverVolumeBlockDevice as driver_bdm_volume from nova.virt import driver as virt_driver from nova.virt import event as virtevent from nova.virt import fake as fake_driver from nova.virt import hardware from nova.volume import cinder CONF = nova.conf.CONF fake_host_list = [mock.sentinel.host1] @ddt.ddt class ComputeManagerUnitTestCase(test.NoDBTestCase, fake_resource_tracker.RTMockMixin): def setUp(self): super(ComputeManagerUnitTestCase, self).setUp() self.compute = manager.ComputeManager() self.context = context.RequestContext(fakes.FAKE_USER_ID, fakes.FAKE_PROJECT_ID) self.useFixture(fixtures.SpawnIsSynchronousFixture()) self.useFixture(fixtures.EventReporterStub()) self.allocations = { uuids.provider1: { "generation": 0, "resources": { "VCPU": 1, "MEMORY_MB": 512 } } } 
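# NOTE: illustrative aside, not part of the original test module. The
# ``self.allocations`` dict built in setUp() above mirrors the general shape
# of a placement allocation record keyed by resource provider UUID, roughly:
#
#     {
#         "<resource provider uuid>": {
#             "generation": 0,
#             "resources": {"VCPU": 1, "MEMORY_MB": 512},
#         },
#     }
#
# It is kept on the test case so individual tests can reuse it wherever a
# canned placement response is needed.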
@mock.patch.object(manager.ComputeManager, '_get_power_state') @mock.patch.object(manager.ComputeManager, '_sync_instance_power_state') @mock.patch.object(objects.Instance, 'get_by_uuid') @mock.patch.object(objects.Migration, 'get_by_instance_and_status') @mock.patch.object(neutronv2_api.API, 'migrate_instance_start') def _test_handle_lifecycle_event(self, migrate_instance_start, mock_get_migration, mock_get, mock_sync, mock_get_power_state, transition, event_pwr_state, current_pwr_state): event = mock.Mock() mock_get.return_value = fake_instance.fake_instance_obj(self.context, task_state=task_states.MIGRATING) event.get_transition.return_value = transition mock_get_power_state.return_value = current_pwr_state self.compute.handle_lifecycle_event(event) expected_attrs = [] if transition in [virtevent.EVENT_LIFECYCLE_POSTCOPY_STARTED, virtevent.EVENT_LIFECYCLE_MIGRATION_COMPLETED]: expected_attrs.append('info_cache') mock_get.assert_called_once_with( test.MatchType(context.RequestContext), event.get_instance_uuid.return_value, expected_attrs=expected_attrs) if event_pwr_state == current_pwr_state: mock_sync.assert_called_with(mock.ANY, mock_get.return_value, event_pwr_state) else: self.assertFalse(mock_sync.called) migrate_finish_statuses = { virtevent.EVENT_LIFECYCLE_POSTCOPY_STARTED: 'running (post-copy)', virtevent.EVENT_LIFECYCLE_MIGRATION_COMPLETED: 'running' } if transition in migrate_finish_statuses: mock_get_migration.assert_called_with( test.MatchType(context.RequestContext), mock_get.return_value.uuid, migrate_finish_statuses[transition]) migrate_instance_start.assert_called_once_with( test.MatchType(context.RequestContext), mock_get.return_value, mock_get_migration.return_value) else: mock_get_migration.assert_not_called() migrate_instance_start.assert_not_called() def test_handle_lifecycle_event(self): event_map = {virtevent.EVENT_LIFECYCLE_STOPPED: power_state.SHUTDOWN, virtevent.EVENT_LIFECYCLE_STARTED: power_state.RUNNING, virtevent.EVENT_LIFECYCLE_PAUSED: power_state.PAUSED, virtevent.EVENT_LIFECYCLE_RESUMED: power_state.RUNNING, virtevent.EVENT_LIFECYCLE_SUSPENDED: power_state.SUSPENDED, virtevent.EVENT_LIFECYCLE_POSTCOPY_STARTED: power_state.PAUSED, virtevent.EVENT_LIFECYCLE_MIGRATION_COMPLETED: power_state.PAUSED, } for transition, pwr_state in event_map.items(): self._test_handle_lifecycle_event(transition=transition, event_pwr_state=pwr_state, current_pwr_state=pwr_state) def test_handle_lifecycle_event_state_mismatch(self): self._test_handle_lifecycle_event( transition=virtevent.EVENT_LIFECYCLE_STOPPED, event_pwr_state=power_state.SHUTDOWN, current_pwr_state=power_state.RUNNING) @mock.patch('nova.objects.Instance.get_by_uuid') @mock.patch('nova.compute.manager.ComputeManager.' '_sync_instance_power_state') @mock.patch('nova.objects.Migration.get_by_instance_and_status', side_effect=exception.MigrationNotFoundByStatus( instance_id=uuids.instance, status='running (post-copy)')) def test_handle_lifecycle_event_postcopy_migration_not_found( self, mock_get_migration, mock_sync, mock_get_instance): """Tests a EVENT_LIFECYCLE_POSTCOPY_STARTED scenario where the migration record is not found by the expected status. 
""" inst = fake_instance.fake_instance_obj( self.context, uuid=uuids.instance, task_state=task_states.MIGRATING) mock_get_instance.return_value = inst event = virtevent.LifecycleEvent( uuids.instance, virtevent.EVENT_LIFECYCLE_POSTCOPY_STARTED) with mock.patch.object(self.compute, '_get_power_state', return_value=power_state.PAUSED): with mock.patch.object(self.compute.network_api, 'migrate_instance_start') as mig_start: self.compute.handle_lifecycle_event(event) # Since we failed to find the migration record, we shouldn't call # migrate_instance_start. mig_start.assert_not_called() mock_get_migration.assert_called_once_with( test.MatchType(context.RequestContext), uuids.instance, 'running (post-copy)') @mock.patch('nova.compute.utils.notify_about_instance_action') def test_delete_instance_info_cache_delete_ordering(self, mock_notify): call_tracker = mock.Mock() call_tracker.clear_events_for_instance.return_value = None mgr_class = self.compute.__class__ orig_delete = mgr_class._delete_instance specd_compute = mock.create_autospec(mgr_class) # spec out everything except for the method we really want # to test, then use call_tracker to verify call sequence specd_compute._delete_instance = orig_delete specd_compute.host = 'compute' mock_inst = mock.Mock() mock_inst.uuid = uuids.instance mock_inst.save = mock.Mock() mock_inst.destroy = mock.Mock() mock_inst.system_metadata = mock.Mock() def _mark_notify(*args, **kwargs): call_tracker._notify_about_instance_usage(*args, **kwargs) def _mark_shutdown(*args, **kwargs): call_tracker._shutdown_instance(*args, **kwargs) specd_compute.instance_events = call_tracker specd_compute._notify_about_instance_usage = _mark_notify specd_compute._shutdown_instance = _mark_shutdown mock_bdms = mock.Mock() specd_compute._delete_instance(specd_compute, self.context, mock_inst, mock_bdms) methods_called = [n for n, a, k in call_tracker.mock_calls] self.assertEqual(['clear_events_for_instance', '_notify_about_instance_usage', '_shutdown_instance', '_notify_about_instance_usage'], methods_called) mock_notify.assert_has_calls([ mock.call(self.context, mock_inst, specd_compute.host, action='delete', phase='start', bdms=mock_bdms), mock.call(self.context, mock_inst, specd_compute.host, action='delete', phase='end', bdms=mock_bdms)]) @mock.patch.object(objects.Instance, 'destroy') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(manager.ComputeManager, '_complete_deletion') @mock.patch.object(manager.ComputeManager, '_cleanup_volumes') @mock.patch.object(manager.ComputeManager, '_shutdown_instance') @mock.patch.object(compute_utils, 'notify_about_instance_action') @mock.patch.object(manager.ComputeManager, '_notify_about_instance_usage') def _test_delete_instance_with_accels(self, instance, mock_inst_usage, mock_inst_action, mock_shutdown, mock_cleanup_vols, mock_complete_del, mock_inst_save, mock_inst_destroy): self.compute._delete_instance(self.context, instance, bdms=None) @mock.patch('nova.accelerator.cyborg._CyborgClient.' 'delete_arqs_for_instance') def test_delete_instance_with_accels_ok(self, mock_del_arqs): # _delete_instance() calls Cyborg to delete ARQs, if # the extra specs has a device profile name. instance = fake_instance.fake_instance_obj(self.context) instance.flavor.extra_specs = {'accel:device_profile': 'mydp'} self._test_delete_instance_with_accels(instance) mock_del_arqs.assert_called_once_with(instance.uuid) @mock.patch('nova.accelerator.cyborg._CyborgClient.' 
'delete_arqs_for_instance') def test_delete_instance_with_accels_no_dp(self, mock_del_arqs): # _delete_instance() does not call Cyborg to delete ARQs, if # the extra specs has no device profile name. instance = fake_instance.fake_instance_obj(self.context) self._test_delete_instance_with_accels(instance) mock_del_arqs.assert_not_called() def _make_compute_node(self, hyp_hostname, cn_id): cn = mock.Mock(spec_set=['hypervisor_hostname', 'id', 'uuid', 'destroy']) cn.id = cn_id cn.hypervisor_hostname = hyp_hostname return cn def test_update_available_resource_for_node(self): rt = self._mock_rt(spec_set=['update_available_resource']) self.compute._update_available_resource_for_node( self.context, mock.sentinel.node, ) rt.update_available_resource.assert_called_once_with( self.context, mock.sentinel.node, startup=False, ) @mock.patch('nova.compute.manager.LOG') def test_update_available_resource_for_node_reshape_failed(self, log_mock): """ReshapeFailed logs and reraises.""" rt = self._mock_rt(spec_set=['update_available_resource']) rt.update_available_resource.side_effect = exception.ReshapeFailed( error='error') self.assertRaises(exception.ReshapeFailed, self.compute._update_available_resource_for_node, self.context, mock.sentinel.node, # While we're here, unit test the startup kwarg startup=True) rt.update_available_resource.assert_called_once_with( self.context, mock.sentinel.node, startup=True) log_mock.critical.assert_called_once() @mock.patch('nova.compute.manager.LOG') def test_update_available_resource_for_node_reshape_needed(self, log_mock): """ReshapeNeeded logs and reraises.""" rt = self._mock_rt(spec_set=['update_available_resource']) rt.update_available_resource.side_effect = exception.ReshapeNeeded() self.assertRaises(exception.ReshapeNeeded, self.compute._update_available_resource_for_node, self.context, mock.sentinel.node, # While we're here, unit test the startup kwarg startup=True) rt.update_available_resource.assert_called_once_with( self.context, mock.sentinel.node, startup=True) log_mock.exception.assert_called_once() @mock.patch.object(manager, 'LOG') @mock.patch.object(manager.ComputeManager, '_update_available_resource_for_node') @mock.patch.object(fake_driver.FakeDriver, 'get_available_nodes') @mock.patch.object(manager.ComputeManager, '_get_compute_nodes_in_db') def test_update_available_resource(self, get_db_nodes, get_avail_nodes, update_mock, mock_log): mock_rt = self._mock_rt() rc_mock = self.useFixture(fixtures.fixtures.MockPatchObject( self.compute, 'reportclient')).mock rc_mock.delete_resource_provider.side_effect = ( keystone_exception.EndpointNotFound) db_nodes = [self._make_compute_node('node%s' % i, i) for i in range(1, 5)] avail_nodes = set(['node2', 'node3', 'node4', 'node5']) avail_nodes_l = list(avail_nodes) get_db_nodes.return_value = db_nodes get_avail_nodes.return_value = avail_nodes self.compute.update_available_resource(self.context, startup=True) get_db_nodes.assert_called_once_with(self.context, avail_nodes, use_slave=True, startup=True) self.assertEqual(len(avail_nodes_l), update_mock.call_count) update_mock.assert_has_calls( [mock.call(self.context, node, startup=True) for node in avail_nodes_l] ) # First node in set should have been removed from DB for db_node in db_nodes: if db_node.hypervisor_hostname == 'node1': db_node.destroy.assert_called_once_with() rc_mock.delete_resource_provider.assert_called_once_with( self.context, db_node, cascade=True) mock_rt.remove_node.assert_called_once_with( 'node1') mock_log.error.assert_called_once_with( 
"Failed to delete compute node resource provider for " "compute node %s: %s", db_node.uuid, mock.ANY) else: self.assertFalse(db_node.destroy.called) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'delete_resource_provider') @mock.patch.object(manager.ComputeManager, '_update_available_resource_for_node') @mock.patch.object(fake_driver.FakeDriver, 'get_available_nodes') @mock.patch.object(manager.ComputeManager, '_get_compute_nodes_in_db') def test_update_available_resource_not_ready(self, get_db_nodes, get_avail_nodes, update_mock, del_rp_mock): db_nodes = [self._make_compute_node('node1', 1)] get_db_nodes.return_value = db_nodes get_avail_nodes.side_effect = exception.VirtDriverNotReady self.compute.update_available_resource(self.context) # these shouldn't get processed on VirtDriverNotReady update_mock.assert_not_called() del_rp_mock.assert_not_called() @mock.patch('nova.context.get_admin_context') def test_pre_start_hook(self, get_admin_context): """Very simple test just to make sure update_available_resource is called as expected. """ with mock.patch.object( self.compute, 'update_available_resource') as update_res: self.compute.pre_start_hook() update_res.assert_called_once_with( get_admin_context.return_value, startup=True) @mock.patch.object(objects.ComputeNodeList, 'get_all_by_host', side_effect=exception.NotFound) @mock.patch('nova.compute.manager.LOG') def test_get_compute_nodes_in_db_on_startup(self, mock_log, get_all_by_host): """Tests to make sure we only log a warning when we do not find a compute node on startup since this may be expected. """ self.assertEqual([], self.compute._get_compute_nodes_in_db( self.context, {'fake-node'}, startup=True)) get_all_by_host.assert_called_once_with( self.context, self.compute.host, use_slave=False) self.assertTrue(mock_log.warning.called) self.assertFalse(mock_log.error.called) @mock.patch.object(objects.ComputeNodeList, 'get_all_by_host', side_effect=exception.NotFound) @mock.patch('nova.compute.manager.LOG') def test_get_compute_nodes_in_db_not_found_no_nodenames( self, mock_log, get_all_by_host): """Tests to make sure that _get_compute_nodes_in_db does not log anything when ComputeNodeList.get_all_by_host raises NotFound and the driver did not report any nodenames. 
""" self.assertEqual([], self.compute._get_compute_nodes_in_db( self.context, set())) get_all_by_host.assert_called_once_with( self.context, self.compute.host, use_slave=False) mock_log.assert_not_called() def _trusted_certs_setup_instance(self, include_trusted_certs=True): instance = fake_instance.fake_instance_obj(self.context) if include_trusted_certs: instance.trusted_certs = objects.trusted_certs.TrustedCerts( ids=['fake-trusted-cert-1', 'fake-trusted-cert-2']) else: instance.trusted_certs = None return instance def test_check_trusted_certs_provided_no_support(self): instance = self._trusted_certs_setup_instance() with mock.patch.dict(self.compute.driver.capabilities, supports_trusted_certs=False): self.assertRaises(exception.BuildAbortException, self.compute._check_trusted_certs, instance) def test_check_trusted_certs_not_provided_no_support(self): instance = self._trusted_certs_setup_instance( include_trusted_certs=False) with mock.patch.dict(self.compute.driver.capabilities, supports_trusted_certs=False): self.compute._check_trusted_certs(instance) def test_check_trusted_certs_provided_support(self): instance = self._trusted_certs_setup_instance() with mock.patch.dict(self.compute.driver.capabilities, supports_trusted_certs=True): self.compute._check_trusted_certs(instance) def test_check_device_tagging_no_tagging(self): bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping(source_type='volume', destination_type='volume', instance_uuid=uuids.instance)]) net_req = net_req_obj.NetworkRequest(tag=None) net_req_list = net_req_obj.NetworkRequestList(objects=[net_req]) with mock.patch.dict(self.compute.driver.capabilities, supports_device_tagging=False): self.compute._check_device_tagging(net_req_list, bdms) def test_check_device_tagging_no_networks(self): bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping(source_type='volume', destination_type='volume', instance_uuid=uuids.instance)]) with mock.patch.dict(self.compute.driver.capabilities, supports_device_tagging=False): self.compute._check_device_tagging(None, bdms) def test_check_device_tagging_tagged_net_req_no_virt_support(self): bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping(source_type='volume', destination_type='volume', instance_uuid=uuids.instance)]) net_req = net_req_obj.NetworkRequest(port_id=uuids.bar, tag='foo') net_req_list = net_req_obj.NetworkRequestList(objects=[net_req]) with mock.patch.dict(self.compute.driver.capabilities, supports_device_tagging=False): self.assertRaises(exception.BuildAbortException, self.compute._check_device_tagging, net_req_list, bdms) def test_check_device_tagging_tagged_bdm_no_driver_support(self): bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping(source_type='volume', destination_type='volume', tag='foo', instance_uuid=uuids.instance)]) with mock.patch.dict(self.compute.driver.capabilities, supports_device_tagging=False): self.assertRaises(exception.BuildAbortException, self.compute._check_device_tagging, None, bdms) def test_check_device_tagging_tagged_bdm_no_driver_support_declared(self): bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping(source_type='volume', destination_type='volume', tag='foo', instance_uuid=uuids.instance)]) with mock.patch.dict(self.compute.driver.capabilities): self.compute.driver.capabilities.pop('supports_device_tagging', None) self.assertRaises(exception.BuildAbortException, self.compute._check_device_tagging, None, bdms) def 
test_check_device_tagging_tagged_bdm_with_driver_support(self): bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping(source_type='volume', destination_type='volume', tag='foo', instance_uuid=uuids.instance)]) net_req = net_req_obj.NetworkRequest(network_id=uuids.bar) net_req_list = net_req_obj.NetworkRequestList(objects=[net_req]) with mock.patch.dict(self.compute.driver.capabilities, supports_device_tagging=True): self.compute._check_device_tagging(net_req_list, bdms) def test_check_device_tagging_tagged_net_req_with_driver_support(self): bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping(source_type='volume', destination_type='volume', instance_uuid=uuids.instance)]) net_req = net_req_obj.NetworkRequest(network_id=uuids.bar, tag='foo') net_req_list = net_req_obj.NetworkRequestList(objects=[net_req]) with mock.patch.dict(self.compute.driver.capabilities, supports_device_tagging=True): self.compute._check_device_tagging(net_req_list, bdms) @mock.patch.object(objects.BlockDeviceMapping, 'create') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid', return_value=objects.BlockDeviceMappingList()) def test_reserve_block_device_name_with_tag(self, mock_get, mock_create): instance = fake_instance.fake_instance_obj(self.context) with test.nested( mock.patch.object(self.compute, '_get_device_name_for_instance', return_value='/dev/vda'), mock.patch.dict(self.compute.driver.capabilities, supports_tagged_attach_volume=True)): bdm = self.compute.reserve_block_device_name( self.context, instance, None, None, None, None, 'foo', False) self.assertEqual('foo', bdm.tag) @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') def test_reserve_block_device_name_raises(self, _): with mock.patch.dict(self.compute.driver.capabilities, supports_tagged_attach_volume=False): self.assertRaises(exception.VolumeTaggedAttachNotSupported, self.compute.reserve_block_device_name, self.context, fake_instance.fake_instance_obj(self.context), 'fake_device', 'fake_volume_id', 'fake_disk_bus', 'fake_device_type', 'foo', False) @mock.patch.object(objects.BlockDeviceMapping, 'create') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid', return_value=objects.BlockDeviceMappingList()) def test_reserve_block_device_name_multiattach(self, mock_get, mock_create): """Tests the case that multiattach=True and the driver supports it.""" instance = fake_instance.fake_instance_obj(self.context) with test.nested( mock.patch.object(self.compute, '_get_device_name_for_instance', return_value='/dev/vda'), mock.patch.dict(self.compute.driver.capabilities, supports_multiattach=True)): self.compute.reserve_block_device_name( self.context, instance, device=None, volume_id=uuids.volume_id, disk_bus=None, device_type=None, tag=None, multiattach=True) @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') def test_reserve_block_device_name_multiattach_raises(self, _): with mock.patch.dict(self.compute.driver.capabilities, supports_multiattach=False): self.assertRaises(exception.MultiattachNotSupportedByVirtDriver, self.compute.reserve_block_device_name, self.context, fake_instance.fake_instance_obj(self.context), 'fake_device', 'fake_volume_id', 'fake_disk_bus', 'fake_device_type', tag=None, multiattach=True) @mock.patch.object(objects.Instance, 'save') @mock.patch.object(time, 'sleep') def test_allocate_network_succeeds_after_retries( self, mock_sleep, mock_save): self.flags(network_allocate_retries=8) instance = 
fake_instance.fake_instance_obj( self.context, expected_attrs=['system_metadata']) is_vpn = 'fake-is-vpn' req_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id='fake')]) sec_groups = 'fake-sec-groups' final_result = 'meow' rp_mapping = {} expected_sleep_times = [mock.call(t) for t in (1, 2, 4, 8, 16, 30, 30)] with mock.patch.object( self.compute.network_api, 'allocate_for_instance', side_effect=[test.TestingException()] * 7 + [final_result]): res = self.compute._allocate_network_async(self.context, instance, req_networks, sec_groups, is_vpn, rp_mapping) self.assertEqual(7, mock_sleep.call_count) mock_sleep.assert_has_calls(expected_sleep_times) self.assertEqual(final_result, res) # Ensure save is not called in while allocating networks, the instance # is saved after the allocation. self.assertFalse(mock_save.called) self.assertEqual('True', instance.system_metadata['network_allocated']) def test_allocate_network_fails(self): self.flags(network_allocate_retries=0) instance = {} is_vpn = 'fake-is-vpn' req_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id='fake')]) sec_groups = 'fake-sec-groups' rp_mapping = {} with mock.patch.object( self.compute.network_api, 'allocate_for_instance', side_effect=test.TestingException) as mock_allocate: self.assertRaises(test.TestingException, self.compute._allocate_network_async, self.context, instance, req_networks, sec_groups, is_vpn, rp_mapping) mock_allocate.assert_called_once_with( self.context, instance, vpn=is_vpn, requested_networks=req_networks, security_groups=sec_groups, bind_host_id=instance.get('host'), resource_provider_mapping=rp_mapping) @mock.patch.object(manager.ComputeManager, '_instance_update') @mock.patch.object(time, 'sleep') def test_allocate_network_with_conf_value_is_one( self, sleep, _instance_update): self.flags(network_allocate_retries=1) instance = fake_instance.fake_instance_obj( self.context, expected_attrs=['system_metadata']) is_vpn = 'fake-is-vpn' req_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id='fake')]) sec_groups = 'fake-sec-groups' final_result = 'zhangtralon' rp_mapping = {} with mock.patch.object(self.compute.network_api, 'allocate_for_instance', side_effect = [test.TestingException(), final_result]): res = self.compute._allocate_network_async(self.context, instance, req_networks, sec_groups, is_vpn, rp_mapping) self.assertEqual(final_result, res) self.assertEqual(1, sleep.call_count) def test_allocate_network_skip_for_no_allocate(self): # Ensures that we don't do anything if requested_networks has 'none' # for the network_id. req_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id='none')]) nwinfo = self.compute._allocate_network_async( self.context, mock.sentinel.instance, req_networks, security_groups=['default'], is_vpn=False, resource_provider_mapping={}) self.assertEqual(0, len(nwinfo)) @mock.patch('nova.compute.manager.ComputeManager.' 
'_do_build_and_run_instance') def _test_max_concurrent_builds(self, mock_dbari): with mock.patch.object(self.compute, '_build_semaphore') as mock_sem: instance = objects.Instance(uuid=uuidutils.generate_uuid()) for i in (1, 2, 3): self.compute.build_and_run_instance(self.context, instance, mock.sentinel.image, mock.sentinel.request_spec, {}) self.assertEqual(3, mock_sem.__enter__.call_count) def test_max_concurrent_builds_limited(self): self.flags(max_concurrent_builds=2) self._test_max_concurrent_builds() def test_max_concurrent_builds_unlimited(self): self.flags(max_concurrent_builds=0) self._test_max_concurrent_builds() def test_max_concurrent_builds_semaphore_limited(self): self.flags(max_concurrent_builds=123) self.assertEqual(123, manager.ComputeManager()._build_semaphore.balance) def test_max_concurrent_builds_semaphore_unlimited(self): self.flags(max_concurrent_builds=0) compute = manager.ComputeManager() self.assertEqual(0, compute._build_semaphore.balance) self.assertIsInstance(compute._build_semaphore, compute_utils.UnlimitedSemaphore) def test_nil_out_inst_obj_host_and_node_sets_nil(self): instance = fake_instance.fake_instance_obj(self.context, uuid=uuids.instance, host='foo-host', node='foo-node', launched_on='foo-host') self.assertIsNotNone(instance.host) self.assertIsNotNone(instance.node) self.assertIsNotNone(instance.launched_on) self.compute._nil_out_instance_obj_host_and_node(instance) self.assertIsNone(instance.host) self.assertIsNone(instance.node) self.assertIsNone(instance.launched_on) def test_init_host(self): our_host = self.compute.host inst = fake_instance.fake_db_instance( vm_state=vm_states.ACTIVE, info_cache=dict(test_instance_info_cache.fake_info_cache, network_info=None), security_groups=None) startup_instances = [inst, inst, inst] def _make_instance_list(db_list): return instance_obj._make_instance_list( self.context, objects.InstanceList(), db_list, None) @mock.patch.object(manager.ComputeManager, '_get_nodes') @mock.patch.object(manager.ComputeManager, '_error_out_instances_whose_build_was_interrupted') @mock.patch.object(fake_driver.FakeDriver, 'init_host') @mock.patch.object(objects.InstanceList, 'get_by_host') @mock.patch.object(context, 'get_admin_context') @mock.patch.object(manager.ComputeManager, '_destroy_evacuated_instances') @mock.patch.object(manager.ComputeManager, '_validate_pinning_configuration') @mock.patch.object(manager.ComputeManager, '_init_instance') @mock.patch.object(self.compute, '_update_scheduler_instance_info') def _do_mock_calls(mock_update_scheduler, mock_inst_init, mock_validate_pinning, mock_destroy, mock_admin_ctxt, mock_host_get, mock_init_host, mock_error_interrupted, mock_get_nodes): mock_admin_ctxt.return_value = self.context inst_list = _make_instance_list(startup_instances) mock_host_get.return_value = inst_list our_node = objects.ComputeNode( host='fake-host', uuid=uuids.our_node_uuid, hypervisor_hostname='fake-node') mock_get_nodes.return_value = {uuids.our_node_uuid: our_node} self.compute.init_host() mock_validate_pinning.assert_called_once_with(inst_list) mock_destroy.assert_called_once_with( self.context, {uuids.our_node_uuid: our_node}) mock_inst_init.assert_has_calls( [mock.call(self.context, inst_list[0]), mock.call(self.context, inst_list[1]), mock.call(self.context, inst_list[2])]) mock_init_host.assert_called_once_with(host=our_host) mock_host_get.assert_called_once_with(self.context, our_host, expected_attrs=['info_cache', 'metadata', 'numa_topology']) mock_update_scheduler.assert_called_once_with( 
self.context, inst_list) mock_error_interrupted.assert_called_once_with( self.context, {inst.uuid for inst in inst_list}, mock_get_nodes.return_value.keys()) _do_mock_calls() @mock.patch('nova.compute.manager.ComputeManager._get_nodes') @mock.patch('nova.compute.manager.ComputeManager.' '_error_out_instances_whose_build_was_interrupted') @mock.patch('nova.objects.InstanceList.get_by_host', return_value=objects.InstanceList()) @mock.patch('nova.compute.manager.ComputeManager.' '_destroy_evacuated_instances') @mock.patch('nova.compute.manager.ComputeManager._init_instance', mock.NonCallableMock()) @mock.patch('nova.compute.manager.ComputeManager.' '_update_scheduler_instance_info', mock.NonCallableMock()) def test_init_host_no_instances( self, mock_destroy_evac_instances, mock_get_by_host, mock_error_interrupted, mock_get_nodes): """Tests the case that init_host runs and there are no instances on this host yet (it's brand new). Uses NonCallableMock for the methods we assert should not be called. """ mock_get_nodes.return_value = { uuids.cn_uuid1: objects.ComputeNode( uuid=uuids.cn_uuid1, hypervisor_hostname='node1')} self.compute.init_host() mock_error_interrupted.assert_called_once_with( test.MatchType(nova.context.RequestContext), set(), mock_get_nodes.return_value.keys()) mock_get_nodes.assert_called_once_with( test.MatchType(nova.context.RequestContext)) @mock.patch('nova.objects.InstanceList') @mock.patch('nova.objects.MigrationList.get_by_filters') def test_cleanup_host(self, mock_miglist_get, mock_instance_list): # just testing whether the cleanup_host method # when fired will invoke the underlying driver's # equivalent method. mock_miglist_get.return_value = [] mock_instance_list.get_by_host.return_value = [] with mock.patch.object(self.compute, 'driver') as mock_driver: self.compute.init_host() mock_driver.init_host.assert_called_once_with(host='fake-mini') self.compute.cleanup_host() # register_event_listener is called on startup (init_host) and # in cleanup_host mock_driver.register_event_listener.assert_has_calls([ mock.call(self.compute.handle_events), mock.call(None)]) mock_driver.cleanup_host.assert_called_once_with(host='fake-mini') def test_cleanup_live_migrations_in_pool_with_record(self): fake_future = mock.MagicMock() fake_instance_uuid = uuids.instance fake_migration = objects.Migration( uuid=uuids.migration, instance_uuid=fake_instance_uuid) fake_migration.save = mock.MagicMock() self.compute._waiting_live_migrations[fake_instance_uuid] = ( fake_migration, fake_future) with mock.patch.object(self.compute, '_live_migration_executor' ) as mock_migration_pool: self.compute._cleanup_live_migrations_in_pool() mock_migration_pool.shutdown.assert_called_once_with(wait=False) self.assertEqual('cancelled', fake_migration.status) fake_future.cancel.assert_called_once_with() self.assertEqual({}, self.compute._waiting_live_migrations) # test again with Future is None self.compute._waiting_live_migrations[fake_instance_uuid] = ( None, None) self.compute._cleanup_live_migrations_in_pool() mock_migration_pool.shutdown.assert_called_with(wait=False) self.assertEqual(2, mock_migration_pool.shutdown.call_count) self.assertEqual({}, self.compute._waiting_live_migrations) def test_init_virt_events_disabled(self): self.flags(handle_virt_lifecycle_events=False, group='workarounds') with mock.patch.object(self.compute.driver, 'register_event_listener') as mock_register: self.compute.init_virt_events() self.assertFalse(mock_register.called) 
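# NOTE: illustrative aside, not part of the original test module. The test
# above disables lifecycle event handling via
# ``self.flags(handle_virt_lifecycle_events=False, group='workarounds')``;
# in a deployment the equivalent nova.conf snippet would look roughly like:
#
#     [workarounds]
#     handle_virt_lifecycle_events = False
#
# With the option off, init_virt_events() never calls
# driver.register_event_listener(), so hypervisor lifecycle events are
# ignored by the compute manager.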
@mock.patch('nova.compute.manager.ComputeManager._get_nodes') @mock.patch.object(manager.ComputeManager, '_error_out_instances_whose_build_was_interrupted') @mock.patch('nova.scheduler.utils.resources_from_flavor') @mock.patch.object(manager.ComputeManager, '_get_instances_on_driver') @mock.patch.object(manager.ComputeManager, 'init_virt_events') @mock.patch.object(context, 'get_admin_context') @mock.patch.object(objects.InstanceList, 'get_by_host') @mock.patch.object(fake_driver.FakeDriver, 'destroy') @mock.patch.object(fake_driver.FakeDriver, 'init_host') @mock.patch('nova.utils.temporary_mutation') @mock.patch('nova.objects.MigrationList.get_by_filters') @mock.patch('nova.objects.Migration.save') def test_init_host_with_evacuated_instance(self, mock_save, mock_mig_get, mock_temp_mut, mock_init_host, mock_destroy, mock_host_get, mock_admin_ctxt, mock_init_virt, mock_get_inst, mock_resources, mock_error_interrupted, mock_get_nodes): our_host = self.compute.host not_our_host = 'not-' + our_host deleted_instance = fake_instance.fake_instance_obj( self.context, host=not_our_host, uuid=uuids.deleted_instance) migration = objects.Migration(instance_uuid=deleted_instance.uuid) migration.source_node = 'fake-node' mock_mig_get.return_value = [migration] mock_admin_ctxt.return_value = self.context mock_host_get.return_value = objects.InstanceList() our_node = objects.ComputeNode( host=our_host, uuid=uuids.our_node_uuid, hypervisor_hostname='fake-node') mock_get_nodes.return_value = {uuids.our_node_uuid: our_node} mock_resources.return_value = mock.sentinel.my_resources # simulate failed instance mock_get_inst.return_value = [deleted_instance] with test.nested( mock.patch.object( self.compute.network_api, 'get_instance_nw_info', side_effect = exception.InstanceNotFound( instance_id=deleted_instance['uuid'])), mock.patch.object( self.compute.reportclient, 'remove_provider_tree_from_instance_allocation') ) as (mock_get_net, mock_remove_allocation): self.compute.init_host() mock_remove_allocation.assert_called_once_with( self.context, deleted_instance.uuid, uuids.our_node_uuid) mock_init_host.assert_called_once_with(host=our_host) mock_host_get.assert_called_once_with(self.context, our_host, expected_attrs=['info_cache', 'metadata', 'numa_topology']) mock_init_virt.assert_called_once_with() mock_temp_mut.assert_called_once_with(self.context, read_deleted='yes') mock_get_inst.assert_called_once_with(self.context) mock_get_net.assert_called_once_with(self.context, deleted_instance) # ensure driver.destroy is called so that driver may # clean up any dangling files mock_destroy.assert_called_once_with(self.context, deleted_instance, mock.ANY, mock.ANY, mock.ANY) mock_save.assert_called_once_with() mock_error_interrupted.assert_called_once_with( self.context, {deleted_instance.uuid}, mock_get_nodes.return_value.keys()) @mock.patch('nova.compute.manager.ComputeManager._get_nodes') @mock.patch.object(manager.ComputeManager, '_error_out_instances_whose_build_was_interrupted') @mock.patch.object(context, 'get_admin_context') @mock.patch.object(objects.InstanceList, 'get_by_host') @mock.patch.object(fake_driver.FakeDriver, 'init_host') @mock.patch('nova.compute.manager.ComputeManager._init_instance') @mock.patch('nova.compute.manager.ComputeManager.' 
'_destroy_evacuated_instances') def test_init_host_with_in_progress_evacuations(self, mock_destroy_evac, mock_init_instance, mock_init_host, mock_host_get, mock_admin_ctxt, mock_error_interrupted, mock_get_nodes): """Assert that init_instance is not called for instances that are evacuating from the host during init_host. """ active_instance = fake_instance.fake_instance_obj( self.context, host=self.compute.host, uuid=uuids.active_instance) evacuating_instance = fake_instance.fake_instance_obj( self.context, host=self.compute.host, uuid=uuids.evac_instance) instance_list = objects.InstanceList(self.context, objects=[active_instance, evacuating_instance]) mock_host_get.return_value = instance_list mock_admin_ctxt.return_value = self.context mock_destroy_evac.return_value = { uuids.evac_instance: evacuating_instance } our_node = objects.ComputeNode( host='fake-host', uuid=uuids.our_node_uuid, hypervisor_hostname='fake-node') mock_get_nodes.return_value = {uuids.our_node_uuid: our_node} self.compute.init_host() mock_init_instance.assert_called_once_with( self.context, active_instance) mock_error_interrupted.assert_called_once_with( self.context, {active_instance.uuid, evacuating_instance.uuid}, mock_get_nodes.return_value.keys()) @mock.patch.object(objects.ComputeNode, 'get_by_host_and_nodename') @mock.patch.object(fake_driver.FakeDriver, 'get_available_nodes') def test_get_nodes(self, mock_driver_get_nodes, mock_get_by_host_and_node): mock_driver_get_nodes.return_value = ['fake-node1', 'fake-node2'] cn1 = objects.ComputeNode(uuid=uuids.cn1) cn2 = objects.ComputeNode(uuid=uuids.cn2) mock_get_by_host_and_node.side_effect = [cn1, cn2] nodes = self.compute._get_nodes(self.context) self.assertEqual({uuids.cn1: cn1, uuids.cn2: cn2}, nodes) mock_driver_get_nodes.assert_called_once_with() mock_get_by_host_and_node.assert_has_calls([ mock.call(self.context, self.compute.host, 'fake-node1'), mock.call(self.context, self.compute.host, 'fake-node2'), ]) @mock.patch.object(manager.LOG, 'warning') @mock.patch.object( objects.ComputeNode, 'get_by_host_and_nodename', new_callable=mock.NonCallableMock) @mock.patch.object( fake_driver.FakeDriver, 'get_available_nodes', side_effect=exception.VirtDriverNotReady) def test_get_nodes_driver_not_ready( self, mock_driver_get_nodes, mock_get_by_host_and_node, mock_log_warning): mock_driver_get_nodes.return_value = ['fake-node1', 'fake-node2'] nodes = self.compute._get_nodes(self.context) self.assertEqual({}, nodes) mock_log_warning.assert_called_once_with( "Virt driver is not ready. If this is the first time this service " "is starting on this host, then you can ignore this warning.") @mock.patch.object(manager.LOG, 'warning') @mock.patch.object(objects.ComputeNode, 'get_by_host_and_nodename') @mock.patch.object(fake_driver.FakeDriver, 'get_available_nodes') def test_get_nodes_node_not_found( self, mock_driver_get_nodes, mock_get_by_host_and_node, mock_log_warning): mock_driver_get_nodes.return_value = ['fake-node1', 'fake-node2'] cn2 = objects.ComputeNode(uuid=uuids.cn2) mock_get_by_host_and_node.side_effect = [ exception.ComputeHostNotFound(host='fake-node1'), cn2] nodes = self.compute._get_nodes(self.context) self.assertEqual({uuids.cn2: cn2}, nodes) mock_driver_get_nodes.assert_called_once_with() mock_get_by_host_and_node.assert_has_calls([ mock.call(self.context, self.compute.host, 'fake-node1'), mock.call(self.context, self.compute.host, 'fake-node2'), ]) mock_log_warning.assert_called_once_with( "Compute node %s not found in the database. 
If this is the first " "time this service is starting on this host, then you can ignore " "this warning.", 'fake-node1') def test_init_host_disk_devices_configuration_failure(self): self.flags(max_disk_devices_to_attach=0, group='compute') self.assertRaises(exception.InvalidConfiguration, self.compute.init_host) @mock.patch.object(objects.InstanceList, 'get_by_host', new=mock.Mock()) @mock.patch('nova.compute.manager.ComputeManager.' '_validate_pinning_configuration') def test_init_host_pinning_configuration_validation_failure(self, mock_validate_pinning): """Test that we fail init_host if the pinning configuration check fails. """ mock_validate_pinning.side_effect = exception.InvalidConfiguration self.assertRaises(exception.InvalidConfiguration, self.compute.init_host) @mock.patch.object(objects.Instance, 'save') @mock.patch.object(objects.InstanceList, 'get_by_filters') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocations_for_resource_provider') def test_init_host_with_interrupted_instance_build( self, mock_get_allocations, mock_get_instances, mock_instance_save): active_instance = fake_instance.fake_instance_obj( self.context, host=self.compute.host, uuid=uuids.active_instance) evacuating_instance = fake_instance.fake_instance_obj( self.context, host=self.compute.host, uuid=uuids.evac_instance) interrupted_instance = fake_instance.fake_instance_obj( self.context, host=None, uuid=uuids.interrupted_instance, vm_state=vm_states.BUILDING) # we have 3 different instances. We need consumers for each instance # in placement and an extra consumer that is not an instance allocations = { uuids.active_instance: "fake-resources-active", uuids.evac_instance: "fake-resources-evacuating", uuids.interrupted_instance: "fake-resources-interrupted", uuids.not_an_instance: "fake-resources-not-an-instance", } mock_get_allocations.return_value = report.ProviderAllocInfo( allocations=allocations) # get is called with a uuid filter containing interrupted_instance, # error_instance, and not_an_instance but it will only return the # interrupted_instance as the error_instance is not in building state # and not_an_instance does not match with any instance in the db. mock_get_instances.return_value = objects.InstanceList( self.context, objects=[interrupted_instance]) # interrupted_instance and error_instance is not in the list passed in # because it is not assigned to the compute and therefore not processed # by init_host and init_instance self.compute._error_out_instances_whose_build_was_interrupted( self.context, {inst.uuid for inst in [active_instance, evacuating_instance]}, [uuids.cn_uuid]) mock_get_instances.assert_called_once_with( self.context, {'vm_state': 'building', 'uuid': {uuids.interrupted_instance, uuids.not_an_instance} }, expected_attrs=[]) # this is expected to be called only once for interrupted_instance mock_instance_save.assert_called_once_with() self.assertEqual(vm_states.ERROR, interrupted_instance.vm_state) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'get_allocations_for_resource_provider') def test_init_host_with_interrupted_instance_build_compute_rp_not_found( self, mock_get_allocations): mock_get_allocations.side_effect = [ exception.ResourceProviderAllocationRetrievalFailed( rp_uuid=uuids.cn1_uuid, error='error'), report.ProviderAllocInfo(allocations={}) ] self.compute._error_out_instances_whose_build_was_interrupted( self.context, set(), [uuids.cn1_uuid, uuids.cn2_uuid]) # check that nova skip the node that is not found in placement and # continue with the next mock_get_allocations.assert_has_calls( [ mock.call(self.context, uuids.cn1_uuid), mock.call(self.context, uuids.cn2_uuid), ] ) def test_init_instance_with_binding_failed_vif_type(self): # this instance will plug a 'binding_failed' vif instance = fake_instance.fake_instance_obj( self.context, uuid=uuids.instance, info_cache=None, power_state=power_state.RUNNING, vm_state=vm_states.ACTIVE, task_state=None, host=self.compute.host, expected_attrs=['info_cache']) with test.nested( mock.patch.object(context, 'get_admin_context', return_value=self.context), mock.patch.object(objects.Instance, 'get_network_info', return_value=network_model.NetworkInfo()), mock.patch.object(self.compute.driver, 'plug_vifs', side_effect=exception.VirtualInterfacePlugException( "Unexpected vif_type=binding_failed")), mock.patch.object(self.compute, '_set_instance_obj_error_state') ) as (get_admin_context, get_nw_info, plug_vifs, set_error_state): self.compute._init_instance(self.context, instance) set_error_state.assert_called_once_with(self.context, instance) def _test__validate_pinning_configuration(self, supports_pcpus=True): instance_1 = fake_instance.fake_instance_obj( self.context, uuid=uuids.instance_1) instance_2 = fake_instance.fake_instance_obj( self.context, uuid=uuids.instance_2) instance_3 = fake_instance.fake_instance_obj( self.context, uuid=uuids.instance_3) instance_4 = fake_instance.fake_instance_obj( self.context, uuid=uuids.instance_4) instance_1.numa_topology = None numa_wo_pinning = test_instance_numa.get_fake_obj_numa_topology( self.context) instance_2.numa_topology = numa_wo_pinning numa_w_pinning = test_instance_numa.get_fake_obj_numa_topology( self.context) numa_w_pinning.cells[0].pin_vcpus((1, 10), (2, 11)) numa_w_pinning.cells[0].cpu_policy = ( fields.CPUAllocationPolicy.DEDICATED) numa_w_pinning.cells[1].pin_vcpus((3, 0), (4, 1)) numa_w_pinning.cells[1].cpu_policy = ( fields.CPUAllocationPolicy.DEDICATED) instance_3.numa_topology = numa_w_pinning instance_4.deleted = True instances = objects.InstanceList(objects=[ instance_1, instance_2, instance_3, instance_4]) with mock.patch.dict(self.compute.driver.capabilities, supports_pcpus=supports_pcpus): self.compute._validate_pinning_configuration(instances) def test__validate_pinning_configuration_invalid_unpinned_config(self): """Test that configuring only 'cpu_dedicated_set' when there are unpinned instances on the host results in an error. 
""" self.flags(cpu_dedicated_set='0-7', group='compute') ex = self.assertRaises( exception.InvalidConfiguration, self._test__validate_pinning_configuration) self.assertIn('This host has unpinned instances but has no CPUs ' 'set aside for this purpose;', six.text_type(ex)) def test__validate_pinning_configuration_invalid_pinned_config(self): """Test that configuring only 'cpu_shared_set' when there are pinned instances on the host results in an error """ self.flags(cpu_shared_set='0-7', group='compute') ex = self.assertRaises( exception.InvalidConfiguration, self._test__validate_pinning_configuration) self.assertIn('This host has pinned instances but has no CPUs ' 'set aside for this purpose;', six.text_type(ex)) @mock.patch.object(manager.LOG, 'warning') def test__validate_pinning_configuration_warning(self, mock_log): """Test that configuring 'cpu_dedicated_set' such that some pinned cores of the instance are outside the range it specifies results in a warning. """ self.flags(cpu_shared_set='0-7', cpu_dedicated_set='8-15', group='compute') self._test__validate_pinning_configuration() self.assertEqual(1, mock_log.call_count) self.assertIn('Instance is pinned to host CPUs %(cpus)s ' 'but one or more of these CPUs are not included in ', six.text_type(mock_log.call_args[0])) def test__validate_pinning_configuration_no_config(self): """Test that the entire check is skipped if there's no host configuration. """ self._test__validate_pinning_configuration() def test__validate_pinning_configuration_not_supported(self): """Test that the entire check is skipped if the driver doesn't even support PCPUs. """ self._test__validate_pinning_configuration(supports_pcpus=False) def test__get_power_state_InstanceNotFound(self): instance = fake_instance.fake_instance_obj( self.context, power_state=power_state.RUNNING) with mock.patch.object(self.compute.driver, 'get_info', side_effect=exception.InstanceNotFound(instance_id=1)): self.assertEqual(self.compute._get_power_state(self.context, instance), power_state.NOSTATE) def test__get_power_state_NotFound(self): instance = fake_instance.fake_instance_obj( self.context, power_state=power_state.RUNNING) with mock.patch.object(self.compute.driver, 'get_info', side_effect=exception.NotFound()): self.assertRaises(exception.NotFound, self.compute._get_power_state, self.context, instance) @mock.patch.object(manager.ComputeManager, '_get_power_state') @mock.patch.object(fake_driver.FakeDriver, 'plug_vifs') @mock.patch.object(fake_driver.FakeDriver, 'resume_state_on_host_boot') @mock.patch.object(manager.ComputeManager, '_get_instance_block_device_info') @mock.patch.object(manager.ComputeManager, '_set_instance_obj_error_state') def test_init_instance_failed_resume_sets_error(self, mock_set_inst, mock_get_inst, mock_resume, mock_plug, mock_get_power): instance = fake_instance.fake_instance_obj( self.context, uuid=uuids.instance, info_cache=None, power_state=power_state.RUNNING, vm_state=vm_states.ACTIVE, task_state=None, host=self.compute.host, expected_attrs=['info_cache']) self.flags(resume_guests_state_on_host_boot=True) mock_get_power.side_effect = (power_state.SHUTDOWN, power_state.SHUTDOWN) mock_get_inst.return_value = 'fake-bdm' mock_resume.side_effect = test.TestingException self.compute._init_instance('fake-context', instance) mock_get_power.assert_has_calls([mock.call(mock.ANY, instance), mock.call(mock.ANY, instance)]) mock_plug.assert_called_once_with(instance, mock.ANY) mock_get_inst.assert_called_once_with(mock.ANY, instance) 
mock_resume.assert_called_once_with(mock.ANY, instance, mock.ANY, 'fake-bdm') mock_set_inst.assert_called_once_with(mock.ANY, instance) @mock.patch.object(objects.BlockDeviceMapping, 'destroy') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(objects.Instance, 'destroy') @mock.patch.object(objects.Instance, 'obj_load_attr') @mock.patch.object(objects.quotas, 'ids_from_instance') def test_init_instance_complete_partial_deletion( self, mock_ids_from_instance, mock_inst_destroy, mock_obj_load_attr, mock_get_by_instance_uuid, mock_bdm_destroy): """Test to complete deletion for instances in DELETED status but not marked as deleted in the DB """ instance = fake_instance.fake_instance_obj( self.context, project_id=fakes.FAKE_PROJECT_ID, uuid=uuids.instance, vcpus=1, memory_mb=64, power_state=power_state.SHUTDOWN, vm_state=vm_states.DELETED, host=self.compute.host, task_state=None, deleted=False, deleted_at=None, metadata={}, system_metadata={}, expected_attrs=['metadata', 'system_metadata']) # Make sure instance vm_state is marked as 'DELETED' but instance is # not destroyed from db. self.assertEqual(vm_states.DELETED, instance.vm_state) self.assertFalse(instance.deleted) def fake_inst_destroy(): instance.deleted = True instance.deleted_at = timeutils.utcnow() mock_ids_from_instance.return_value = (instance.project_id, instance.user_id) mock_inst_destroy.side_effect = fake_inst_destroy() self.compute._init_instance(self.context, instance) # Make sure that instance.destroy method was called and # instance was deleted from db. self.assertNotEqual(0, instance.deleted) @mock.patch('nova.compute.manager.LOG') def test_init_instance_complete_partial_deletion_raises_exception( self, mock_log): instance = fake_instance.fake_instance_obj( self.context, project_id=fakes.FAKE_PROJECT_ID, uuid=uuids.instance, vcpus=1, memory_mb=64, power_state=power_state.SHUTDOWN, vm_state=vm_states.DELETED, host=self.compute.host, task_state=None, deleted=False, deleted_at=None, metadata={}, system_metadata={}, expected_attrs=['metadata', 'system_metadata']) with mock.patch.object(self.compute, '_complete_partial_deletion') as mock_deletion: mock_deletion.side_effect = test.TestingException() self.compute._init_instance(self, instance) msg = u'Failed to complete a deletion' mock_log.exception.assert_called_once_with(msg, instance=instance) def test_init_instance_stuck_in_deleting(self): instance = fake_instance.fake_instance_obj( self.context, project_id=fakes.FAKE_PROJECT_ID, uuid=uuids.instance, vcpus=1, memory_mb=64, power_state=power_state.RUNNING, vm_state=vm_states.ACTIVE, host=self.compute.host, task_state=task_states.DELETING) bdms = [] with test.nested( mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid', return_value=bdms), mock.patch.object(self.compute, '_delete_instance'), mock.patch.object(instance, 'obj_load_attr') ) as (mock_get, mock_delete, mock_load): self.compute._init_instance(self.context, instance) mock_get.assert_called_once_with(self.context, instance.uuid) mock_delete.assert_called_once_with(self.context, instance, bdms) @mock.patch.object(objects.Instance, 'get_by_uuid') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') def test_init_instance_stuck_in_deleting_raises_exception( self, mock_get_by_instance_uuid, mock_get_by_uuid): instance = fake_instance.fake_instance_obj( self.context, project_id=fakes.FAKE_PROJECT_ID, uuid=uuids.instance, vcpus=1, memory_mb=64, metadata={}, system_metadata={}, 
host=self.compute.host, vm_state=vm_states.ACTIVE, task_state=task_states.DELETING, expected_attrs=['metadata', 'system_metadata']) bdms = [] def _create_patch(name, attr): patcher = mock.patch.object(name, attr) mocked_obj = patcher.start() self.addCleanup(patcher.stop) return mocked_obj mock_delete_instance = _create_patch(self.compute, '_delete_instance') mock_set_instance_error_state = _create_patch( self.compute, '_set_instance_obj_error_state') mock_get_by_instance_uuid.return_value = bdms mock_get_by_uuid.return_value = instance mock_delete_instance.side_effect = test.TestingException('test') self.compute._init_instance(self.context, instance) mock_set_instance_error_state.assert_called_once_with( self.context, instance) def _test_init_instance_reverts_crashed_migrations(self, old_vm_state=None): power_on = True if (not old_vm_state or old_vm_state == vm_states.ACTIVE) else False sys_meta = { 'old_vm_state': old_vm_state } instance = fake_instance.fake_instance_obj( self.context, uuid=uuids.instance, vm_state=vm_states.ERROR, task_state=task_states.RESIZE_MIGRATING, power_state=power_state.SHUTDOWN, system_metadata=sys_meta, host=self.compute.host, expected_attrs=['system_metadata']) instance.migration_context = objects.MigrationContext(migration_id=42) migration = objects.Migration(source_compute='fake-host1', dest_compute='fake-host2') with test.nested( mock.patch.object(objects.Instance, 'get_network_info', return_value=network_model.NetworkInfo()), mock.patch.object(self.compute.driver, 'plug_vifs'), mock.patch.object(self.compute.driver, 'finish_revert_migration'), mock.patch.object(self.compute, '_get_instance_block_device_info', return_value=[]), mock.patch.object(self.compute.driver, 'get_info'), mock.patch.object(instance, 'save'), mock.patch.object(self.compute, '_retry_reboot', return_value=(False, None)), mock.patch.object(objects.Migration, 'get_by_id_and_instance', return_value=migration) ) as (mock_get_nw, mock_plug, mock_finish, mock_get_inst, mock_get_info, mock_save, mock_retry, mock_get_mig): mock_get_info.side_effect = ( hardware.InstanceInfo(state=power_state.SHUTDOWN), hardware.InstanceInfo(state=power_state.SHUTDOWN)) self.compute._init_instance(self.context, instance) mock_get_mig.assert_called_with(self.context, 42, instance.uuid) mock_retry.assert_called_once_with(self.context, instance, power_state.SHUTDOWN) mock_get_nw.assert_called_once_with() mock_plug.assert_called_once_with(instance, []) mock_get_inst.assert_called_once_with(self.context, instance) mock_finish.assert_called_once_with(self.context, instance, [], migration, [], power_on) mock_save.assert_called_once_with() mock_get_info.assert_has_calls( [mock.call(instance, use_cache=False), mock.call(instance, use_cache=False)]) self.assertIsNone(instance.task_state) def test_init_instance_reverts_crashed_migration_from_active(self): self._test_init_instance_reverts_crashed_migrations( old_vm_state=vm_states.ACTIVE) def test_init_instance_reverts_crashed_migration_from_stopped(self): self._test_init_instance_reverts_crashed_migrations( old_vm_state=vm_states.STOPPED) def test_init_instance_reverts_crashed_migration_no_old_state(self): self._test_init_instance_reverts_crashed_migrations(old_vm_state=None) def test_init_instance_resets_crashed_live_migration(self): instance = fake_instance.fake_instance_obj( self.context, uuid=uuids.instance, vm_state=vm_states.ACTIVE, host=self.compute.host, task_state=task_states.MIGRATING) migration = objects.Migration(source_compute='fake-host1', id=39, 
dest_compute='fake-host2') with test.nested( mock.patch.object(instance, 'save'), mock.patch('nova.objects.Instance.get_network_info', return_value=network_model.NetworkInfo()), mock.patch.object(objects.Migration, 'get_by_instance_and_status', return_value=migration), mock.patch.object(self.compute, 'live_migration_abort'), mock.patch.object(self.compute, '_set_migration_status') ) as (save, get_nw_info, mock_get_status, mock_abort, mock_set_migr): self.compute._init_instance(self.context, instance) save.assert_called_once_with(expected_task_state=['migrating']) get_nw_info.assert_called_once_with() mock_get_status.assert_called_with(self.context, instance.uuid, 'running') mock_abort.assert_called_with(self.context, instance, migration.id) mock_set_migr.assert_called_with(migration, 'error') self.assertIsNone(instance.task_state) self.assertEqual(vm_states.ACTIVE, instance.vm_state) def _test_init_instance_sets_building_error(self, vm_state, task_state=None): instance = fake_instance.fake_instance_obj( self.context, uuid=uuids.instance, vm_state=vm_state, host=self.compute.host, task_state=task_state) with mock.patch.object(instance, 'save') as save: self.compute._init_instance(self.context, instance) save.assert_called_once_with() self.assertIsNone(instance.task_state) self.assertEqual(vm_states.ERROR, instance.vm_state) def test_init_instance_sets_building_error(self): self._test_init_instance_sets_building_error(vm_states.BUILDING) def test_init_instance_sets_rebuilding_errors(self): tasks = [task_states.REBUILDING, task_states.REBUILD_BLOCK_DEVICE_MAPPING, task_states.REBUILD_SPAWNING] vms = [vm_states.ACTIVE, vm_states.STOPPED] for vm_state in vms: for task_state in tasks: self._test_init_instance_sets_building_error( vm_state, task_state) def _test_init_instance_sets_building_tasks_error(self, instance): instance.host = self.compute.host with mock.patch.object(instance, 'save') as save: self.compute._init_instance(self.context, instance) save.assert_called_once_with() self.assertIsNone(instance.task_state) self.assertEqual(vm_states.ERROR, instance.vm_state) def test_init_instance_sets_building_tasks_error_scheduling(self): instance = fake_instance.fake_instance_obj( self.context, uuid=uuids.instance, vm_state=None, task_state=task_states.SCHEDULING) self._test_init_instance_sets_building_tasks_error(instance) def test_init_instance_sets_building_tasks_error_block_device(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.vm_state = None instance.task_state = task_states.BLOCK_DEVICE_MAPPING self._test_init_instance_sets_building_tasks_error(instance) def test_init_instance_sets_building_tasks_error_networking(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.vm_state = None instance.task_state = task_states.NETWORKING self._test_init_instance_sets_building_tasks_error(instance) def test_init_instance_sets_building_tasks_error_spawning(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.vm_state = None instance.task_state = task_states.SPAWNING self._test_init_instance_sets_building_tasks_error(instance) def _test_init_instance_cleans_image_states(self, instance): with mock.patch.object(instance, 'save') as save: self.compute._get_power_state = mock.Mock() self.compute.driver.post_interrupted_snapshot_cleanup = mock.Mock() instance.info_cache = None instance.power_state = power_state.RUNNING instance.host = self.compute.host self.compute._init_instance(self.context, 
instance) save.assert_called_once_with() self.compute.driver.post_interrupted_snapshot_cleanup.\ assert_called_once_with(self.context, instance) self.assertIsNone(instance.task_state) @mock.patch('nova.compute.manager.ComputeManager._get_power_state', return_value=power_state.RUNNING) @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') def _test_init_instance_cleans_task_states(self, powerstate, state, mock_get_uuid, mock_get_power_state): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.info_cache = None instance.power_state = power_state.RUNNING instance.vm_state = vm_states.ACTIVE instance.task_state = state instance.host = self.compute.host mock_get_power_state.return_value = powerstate self.compute._init_instance(self.context, instance) return instance def test_init_instance_cleans_image_state_pending_upload(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.vm_state = vm_states.ACTIVE instance.task_state = task_states.IMAGE_PENDING_UPLOAD self._test_init_instance_cleans_image_states(instance) def test_init_instance_cleans_image_state_uploading(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.vm_state = vm_states.ACTIVE instance.task_state = task_states.IMAGE_UPLOADING self._test_init_instance_cleans_image_states(instance) def test_init_instance_cleans_image_state_snapshot(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.vm_state = vm_states.ACTIVE instance.task_state = task_states.IMAGE_SNAPSHOT self._test_init_instance_cleans_image_states(instance) def test_init_instance_cleans_image_state_snapshot_pending(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.vm_state = vm_states.ACTIVE instance.task_state = task_states.IMAGE_SNAPSHOT_PENDING self._test_init_instance_cleans_image_states(instance) @mock.patch.object(objects.Instance, 'save') def test_init_instance_cleans_running_pausing(self, mock_save): instance = self._test_init_instance_cleans_task_states( power_state.RUNNING, task_states.PAUSING) mock_save.assert_called_once_with() self.assertEqual(vm_states.ACTIVE, instance.vm_state) self.assertIsNone(instance.task_state) @mock.patch.object(objects.Instance, 'save') def test_init_instance_cleans_running_unpausing(self, mock_save): instance = self._test_init_instance_cleans_task_states( power_state.RUNNING, task_states.UNPAUSING) mock_save.assert_called_once_with() self.assertEqual(vm_states.ACTIVE, instance.vm_state) self.assertIsNone(instance.task_state) @mock.patch('nova.compute.manager.ComputeManager.unpause_instance') def test_init_instance_cleans_paused_unpausing(self, mock_unpause): def fake_unpause(context, instance): instance.task_state = None mock_unpause.side_effect = fake_unpause instance = self._test_init_instance_cleans_task_states( power_state.PAUSED, task_states.UNPAUSING) mock_unpause.assert_called_once_with(self.context, instance) self.assertEqual(vm_states.ACTIVE, instance.vm_state) self.assertIsNone(instance.task_state) def test_init_instance_deletes_error_deleting_instance(self): instance = fake_instance.fake_instance_obj( self.context, project_id=fakes.FAKE_PROJECT_ID, uuid=uuids.instance, vcpus=1, memory_mb=64, vm_state=vm_states.ERROR, host=self.compute.host, task_state=task_states.DELETING) bdms = [] with test.nested( mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid', return_value=bdms), mock.patch.object(self.compute, 
'_delete_instance'), mock.patch.object(instance, 'obj_load_attr') ) as (mock_get, mock_delete, mock_load): self.compute._init_instance(self.context, instance) mock_get.assert_called_once_with(self.context, instance.uuid) mock_delete.assert_called_once_with(self.context, instance, bdms) def test_init_instance_resize_prep(self): instance = fake_instance.fake_instance_obj( self.context, uuid=uuids.instance, vm_state=vm_states.ACTIVE, host=self.compute.host, task_state=task_states.RESIZE_PREP, power_state=power_state.RUNNING) with test.nested( mock.patch.object(self.compute, '_get_power_state', return_value=power_state.RUNNING), mock.patch.object(objects.Instance, 'get_network_info'), mock.patch.object(instance, 'save', autospec=True) ) as (mock_get_power_state, mock_nw_info, mock_instance_save): self.compute._init_instance(self.context, instance) mock_instance_save.assert_called_once_with() self.assertIsNone(instance.task_state) @mock.patch('nova.virt.fake.FakeDriver.power_off') @mock.patch.object(compute_utils, 'get_value_from_system_metadata', return_value=CONF.shutdown_timeout) def test_power_off_values(self, mock_get_metadata, mock_power_off): self.flags(shutdown_retry_interval=20, group='compute') instance = fake_instance.fake_instance_obj( self.context, uuid=uuids.instance, vm_state=vm_states.ACTIVE, task_state=task_states.POWERING_OFF) self.compute._power_off_instance( self.context, instance, clean_shutdown=True) mock_power_off.assert_called_once_with( instance, CONF.shutdown_timeout, 20) @mock.patch('nova.context.RequestContext.elevated') @mock.patch('nova.objects.Instance.get_network_info') @mock.patch( 'nova.compute.manager.ComputeManager._get_instance_block_device_info') @mock.patch('nova.virt.driver.ComputeDriver.destroy') @mock.patch('nova.virt.fake.FakeDriver.get_volume_connector') @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch( 'nova.compute.manager.ComputeManager._notify_about_instance_usage') def test_shutdown_instance_versioned_notifications(self, mock_notify_unversioned, mock_notify, mock_connector, mock_destroy, mock_blk_device_info, mock_nw_info, mock_elevated): mock_elevated.return_value = self.context instance = fake_instance.fake_instance_obj( self.context, uuid=uuids.instance, vm_state=vm_states.ERROR, task_state=task_states.DELETING) bdms = [mock.Mock(id=1, is_volume=True)] self.compute._shutdown_instance(self.context, instance, bdms, notify=True, try_deallocate_networks=False) mock_notify.assert_has_calls([ mock.call(self.context, instance, 'fake-mini', action='shutdown', phase='start', bdms=bdms), mock.call(self.context, instance, 'fake-mini', action='shutdown', phase='end', bdms=bdms)]) @mock.patch('nova.context.RequestContext.elevated') @mock.patch('nova.objects.Instance.get_network_info') @mock.patch( 'nova.compute.manager.ComputeManager._get_instance_block_device_info') @mock.patch('nova.virt.driver.ComputeDriver.destroy') @mock.patch('nova.virt.fake.FakeDriver.get_volume_connector') def _test_shutdown_instance_exception(self, exc, mock_connector, mock_destroy, mock_blk_device_info, mock_nw_info, mock_elevated): mock_connector.side_effect = exc mock_elevated.return_value = self.context instance = fake_instance.fake_instance_obj( self.context, uuid=uuids.instance, vm_state=vm_states.ERROR, task_state=task_states.DELETING) bdms = [mock.Mock(id=1, is_volume=True, attachment_id=None)] self.compute._shutdown_instance(self.context, instance, bdms, notify=False, try_deallocate_networks=False) mock_connector.assert_called_once_with(instance) 
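    # The tests below push representative Cinder and Nova errors (plus a generic
    # Exception) through _test_shutdown_instance_exception; the helper does not use
    # assertRaises, so a failing get_volume_connector is expected to be tolerated
    # rather than abort _shutdown_instance.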
def test_shutdown_instance_endpoint_not_found(self): exc = cinder_exception.EndpointNotFound self._test_shutdown_instance_exception(exc) def test_shutdown_instance_client_exception(self): exc = cinder_exception.ClientException(code=9001) self._test_shutdown_instance_exception(exc) def test_shutdown_instance_volume_not_found(self): exc = exception.VolumeNotFound(volume_id=42) self._test_shutdown_instance_exception(exc) def test_shutdown_instance_disk_not_found(self): exc = exception.DiskNotFound(location="not\\here") self._test_shutdown_instance_exception(exc) def test_shutdown_instance_other_exception(self): exc = Exception('some other exception') self._test_shutdown_instance_exception(exc) def _test_init_instance_retries_reboot(self, instance, reboot_type, return_power_state): instance.host = self.compute.host with test.nested( mock.patch.object(self.compute, '_get_power_state', return_value=return_power_state), mock.patch.object(self.compute, 'reboot_instance'), mock.patch.object(objects.Instance, 'get_network_info') ) as ( _get_power_state, reboot_instance, get_network_info ): self.compute._init_instance(self.context, instance) call = mock.call(self.context, instance, block_device_info=None, reboot_type=reboot_type) reboot_instance.assert_has_calls([call]) def test_init_instance_retries_reboot_pending(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.task_state = task_states.REBOOT_PENDING for state in vm_states.ALLOW_SOFT_REBOOT: instance.vm_state = state self._test_init_instance_retries_reboot(instance, 'SOFT', power_state.RUNNING) def test_init_instance_retries_reboot_pending_hard(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.task_state = task_states.REBOOT_PENDING_HARD for state in vm_states.ALLOW_HARD_REBOOT: # NOTE(dave-mcnally) while a reboot of a vm in error state is # possible we don't attempt to recover an error during init if state == vm_states.ERROR: continue instance.vm_state = state self._test_init_instance_retries_reboot(instance, 'HARD', power_state.RUNNING) def test_init_instance_retries_reboot_pending_soft_became_hard(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.task_state = task_states.REBOOT_PENDING for state in vm_states.ALLOW_HARD_REBOOT: # NOTE(dave-mcnally) while a reboot of a vm in error state is # possible we don't attempt to recover an error during init if state == vm_states.ERROR: continue instance.vm_state = state with mock.patch.object(instance, 'save'): self._test_init_instance_retries_reboot(instance, 'HARD', power_state.SHUTDOWN) self.assertEqual(task_states.REBOOT_PENDING_HARD, instance.task_state) def test_init_instance_retries_reboot_started(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.vm_state = vm_states.ACTIVE instance.task_state = task_states.REBOOT_STARTED with mock.patch.object(instance, 'save'): self._test_init_instance_retries_reboot(instance, 'HARD', power_state.NOSTATE) def test_init_instance_retries_reboot_started_hard(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.vm_state = vm_states.ACTIVE instance.task_state = task_states.REBOOT_STARTED_HARD self._test_init_instance_retries_reboot(instance, 'HARD', power_state.NOSTATE) def _test_init_instance_cleans_reboot_state(self, instance): instance.host = self.compute.host with test.nested( mock.patch.object(self.compute, '_get_power_state', return_value=power_state.RUNNING), 
mock.patch.object(instance, 'save', autospec=True), mock.patch.object(objects.Instance, 'get_network_info') ) as ( _get_power_state, instance_save, get_network_info ): self.compute._init_instance(self.context, instance) instance_save.assert_called_once_with() self.assertIsNone(instance.task_state) self.assertEqual(vm_states.ACTIVE, instance.vm_state) def test_init_instance_cleans_image_state_reboot_started(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.vm_state = vm_states.ACTIVE instance.task_state = task_states.REBOOT_STARTED instance.power_state = power_state.RUNNING self._test_init_instance_cleans_reboot_state(instance) def test_init_instance_cleans_image_state_reboot_started_hard(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.vm_state = vm_states.ACTIVE instance.task_state = task_states.REBOOT_STARTED_HARD instance.power_state = power_state.RUNNING self._test_init_instance_cleans_reboot_state(instance) def test_init_instance_retries_power_off(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.id = 1 instance.vm_state = vm_states.ACTIVE instance.task_state = task_states.POWERING_OFF instance.host = self.compute.host with mock.patch.object(self.compute, 'stop_instance'): self.compute._init_instance(self.context, instance) call = mock.call(self.context, instance, True) self.compute.stop_instance.assert_has_calls([call]) def test_init_instance_retries_power_on(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.id = 1 instance.vm_state = vm_states.ACTIVE instance.task_state = task_states.POWERING_ON instance.host = self.compute.host with mock.patch.object(self.compute, 'start_instance'): self.compute._init_instance(self.context, instance) call = mock.call(self.context, instance) self.compute.start_instance.assert_has_calls([call]) def test_init_instance_retries_power_on_silent_exception(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.id = 1 instance.vm_state = vm_states.ACTIVE instance.task_state = task_states.POWERING_ON instance.host = self.compute.host with mock.patch.object(self.compute, 'start_instance', return_value=Exception): init_return = self.compute._init_instance(self.context, instance) call = mock.call(self.context, instance) self.compute.start_instance.assert_has_calls([call]) self.assertIsNone(init_return) def test_init_instance_retries_power_off_silent_exception(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.id = 1 instance.vm_state = vm_states.ACTIVE instance.task_state = task_states.POWERING_OFF instance.host = self.compute.host with mock.patch.object(self.compute, 'stop_instance', return_value=Exception): init_return = self.compute._init_instance(self.context, instance) call = mock.call(self.context, instance, True) self.compute.stop_instance.assert_has_calls([call]) self.assertIsNone(init_return) def test_get_power_state(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.id = 1 instance.vm_state = vm_states.STOPPED instance.task_state = None instance.host = self.compute.host with mock.patch.object(self.compute.driver, 'get_info') as mock_info: mock_info.return_value = hardware.InstanceInfo( state=power_state.SHUTDOWN) res = self.compute._get_power_state(self.context, instance) mock_info.assert_called_once_with(instance, use_cache=False) self.assertEqual(res, power_state.SHUTDOWN) 
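    # The next three tests cover ComputeManager._get_instances_on_driver: the normal
    # path that filters DB instances by the UUIDs the driver reports, the
    # short-circuit that skips the DB query entirely when the driver reports no
    # instances, and the fallback that matches on instance name when
    # list_instance_uuids raises NotImplementedError.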
@mock.patch('nova.objects.InstanceList.get_by_filters') def test_get_instances_on_driver(self, mock_instance_list): driver_instances = [] for x in range(10): driver_instances.append(fake_instance.fake_db_instance()) def _make_instance_list(db_list): return instance_obj._make_instance_list( self.context, objects.InstanceList(), db_list, None) driver_uuids = [inst['uuid'] for inst in driver_instances] mock_instance_list.return_value = _make_instance_list(driver_instances) with mock.patch.object(self.compute.driver, 'list_instance_uuids') as mock_driver_uuids: mock_driver_uuids.return_value = driver_uuids result = self.compute._get_instances_on_driver(self.context) self.assertEqual([x['uuid'] for x in driver_instances], [x['uuid'] for x in result]) expected_filters = {'uuid': driver_uuids} mock_instance_list.assert_called_with(self.context, expected_filters, use_slave=True) @mock.patch('nova.objects.InstanceList.get_by_filters') def test_get_instances_on_driver_empty(self, mock_instance_list): with mock.patch.object(self.compute.driver, 'list_instance_uuids') as mock_driver_uuids: mock_driver_uuids.return_value = [] result = self.compute._get_instances_on_driver(self.context) # Short circuit DB call, get_by_filters should not be called self.assertEqual(0, mock_instance_list.call_count) self.assertEqual(1, mock_driver_uuids.call_count) self.assertEqual([], [x['uuid'] for x in result]) @mock.patch('nova.objects.InstanceList.get_by_filters') def test_get_instances_on_driver_fallback(self, mock_instance_list): # Test getting instances when driver doesn't support # 'list_instance_uuids' self.compute.host = 'host' filters = {} self.flags(instance_name_template='inst-%i') all_instances = [] driver_instances = [] for x in range(10): instance = fake_instance.fake_db_instance(name='inst-%i' % x, id=x) if x % 2: driver_instances.append(instance) all_instances.append(instance) def _make_instance_list(db_list): return instance_obj._make_instance_list( self.context, objects.InstanceList(), db_list, None) driver_instance_names = [inst['name'] for inst in driver_instances] mock_instance_list.return_value = _make_instance_list(all_instances) with test.nested( mock.patch.object(self.compute.driver, 'list_instance_uuids'), mock.patch.object(self.compute.driver, 'list_instances') ) as ( mock_driver_uuids, mock_driver_instances ): mock_driver_uuids.side_effect = NotImplementedError() mock_driver_instances.return_value = driver_instance_names result = self.compute._get_instances_on_driver(self.context, filters) self.assertEqual([x['uuid'] for x in driver_instances], [x['uuid'] for x in result]) expected_filters = {'host': self.compute.host} mock_instance_list.assert_called_with(self.context, expected_filters, use_slave=True) @mock.patch.object(compute_utils, 'notify_usage_exists') @mock.patch.object(objects.TaskLog, 'end_task') @mock.patch.object(objects.TaskLog, 'begin_task') @mock.patch.object(objects.InstanceList, 'get_active_by_window_joined') @mock.patch.object(objects.TaskLog, 'get') def test_instance_usage_audit(self, mock_get, mock_get_active, mock_begin, mock_end, mock_notify): instances = [objects.Instance(uuid=uuids.instance)] def fake_task_log(*a, **k): pass def fake_get(*a, **k): return instances mock_get.side_effect = fake_task_log mock_get_active.side_effect = fake_get mock_begin.side_effect = fake_task_log mock_end.side_effect = fake_task_log self.flags(instance_usage_audit=True) self.compute._instance_usage_audit(self.context) mock_notify.assert_called_once_with( self.compute.notifier, 
self.context, instances[0], 'fake-mini', ignore_missing_network_data=False) self.assertTrue(mock_get.called) self.assertTrue(mock_get_active.called) self.assertTrue(mock_begin.called) self.assertTrue(mock_end.called) @mock.patch.object(objects.InstanceList, 'get_by_host') def test_sync_power_states(self, mock_get): instance = mock.Mock() mock_get.return_value = [instance] with mock.patch.object(self.compute._sync_power_pool, 'spawn_n') as mock_spawn: self.compute._sync_power_states(mock.sentinel.context) mock_get.assert_called_with(mock.sentinel.context, self.compute.host, expected_attrs=[], use_slave=True) mock_spawn.assert_called_once_with(mock.ANY, instance) @mock.patch('nova.objects.InstanceList.get_by_host', new=mock.Mock()) @mock.patch('nova.compute.manager.ComputeManager.' '_query_driver_power_state_and_sync', new_callable=mock.NonCallableMock) def test_sync_power_states_virt_driver_not_ready(self, _mock_sync): """"Tests that the periodic task exits early if the driver raises VirtDriverNotReady. """ with mock.patch.object( self.compute.driver, 'get_num_instances', side_effect=exception.VirtDriverNotReady) as gni: self.compute._sync_power_states(mock.sentinel.context) gni.assert_called_once_with() def _get_sync_instance(self, power_state, vm_state, task_state=None, shutdown_terminate=False): instance = objects.Instance() instance.uuid = uuids.instance instance.power_state = power_state instance.vm_state = vm_state instance.host = self.compute.host instance.task_state = task_state instance.shutdown_terminate = shutdown_terminate return instance @mock.patch.object(objects.Instance, 'refresh') def test_sync_instance_power_state_match(self, mock_refresh): instance = self._get_sync_instance(power_state.RUNNING, vm_states.ACTIVE) self.compute._sync_instance_power_state(self.context, instance, power_state.RUNNING) mock_refresh.assert_called_once_with(use_slave=False) @mock.patch.object(fake_driver.FakeDriver, 'get_info') @mock.patch.object(objects.Instance, 'refresh') @mock.patch.object(objects.Instance, 'save') def test_sync_instance_power_state_running_stopped(self, mock_save, mock_refresh, mock_get_info): mock_get_info.return_value = hardware.InstanceInfo( state=power_state.SHUTDOWN) instance = self._get_sync_instance(power_state.RUNNING, vm_states.ACTIVE) self.compute._sync_instance_power_state(self.context, instance, power_state.SHUTDOWN) self.assertEqual(instance.power_state, power_state.SHUTDOWN) mock_refresh.assert_called_once_with(use_slave=False) self.assertTrue(mock_save.called) mock_get_info.assert_called_once_with(instance, use_cache=False) def _test_sync_to_stop(self, vm_power_state, vm_state, driver_power_state, stop=True, force=False, shutdown_terminate=False): instance = self._get_sync_instance( vm_power_state, vm_state, shutdown_terminate=shutdown_terminate) with test.nested( mock.patch.object(objects.Instance, 'refresh'), mock.patch.object(objects.Instance, 'save'), mock.patch.object(self.compute.compute_api, 'stop'), mock.patch.object(self.compute.compute_api, 'delete'), mock.patch.object(self.compute.compute_api, 'force_stop'), mock.patch.object(self.compute.driver, 'get_info') ) as (mock_refresh, mock_save, mock_stop, mock_delete, mock_force, mock_get_info): mock_get_info.return_value = hardware.InstanceInfo( state=driver_power_state) self.compute._sync_instance_power_state(self.context, instance, driver_power_state) if shutdown_terminate: mock_delete.assert_called_once_with(self.context, instance) elif stop: if force: mock_force.assert_called_once_with(self.context, 
instance) else: mock_stop.assert_called_once_with(self.context, instance) if (vm_state == vm_states.ACTIVE and vm_power_state in (power_state.SHUTDOWN, power_state.CRASHED)): mock_get_info.assert_called_once_with(instance, use_cache=False) mock_refresh.assert_called_once_with(use_slave=False) self.assertTrue(mock_save.called) def test_sync_instance_power_state_to_stop(self): for ps in (power_state.SHUTDOWN, power_state.CRASHED, power_state.SUSPENDED): self._test_sync_to_stop(power_state.RUNNING, vm_states.ACTIVE, ps) for ps in (power_state.SHUTDOWN, power_state.CRASHED): self._test_sync_to_stop(power_state.PAUSED, vm_states.PAUSED, ps, force=True) self._test_sync_to_stop(power_state.SHUTDOWN, vm_states.STOPPED, power_state.RUNNING, force=True) def test_sync_instance_power_state_to_terminate(self): self._test_sync_to_stop(power_state.RUNNING, vm_states.ACTIVE, power_state.SHUTDOWN, force=False, shutdown_terminate=True) def test_sync_instance_power_state_to_no_stop(self): for ps in (power_state.PAUSED, power_state.NOSTATE): self._test_sync_to_stop(power_state.RUNNING, vm_states.ACTIVE, ps, stop=False) for vs in (vm_states.SOFT_DELETED, vm_states.DELETED): for ps in (power_state.NOSTATE, power_state.SHUTDOWN): self._test_sync_to_stop(power_state.RUNNING, vs, ps, stop=False) @mock.patch('nova.compute.manager.ComputeManager.' '_sync_instance_power_state') def test_query_driver_power_state_and_sync_pending_task( self, mock_sync_power_state): with mock.patch.object(self.compute.driver, 'get_info') as mock_get_info: db_instance = objects.Instance(uuid=uuids.db_instance, task_state=task_states.POWERING_OFF) self.compute._query_driver_power_state_and_sync(self.context, db_instance) self.assertFalse(mock_get_info.called) self.assertFalse(mock_sync_power_state.called) @mock.patch('nova.compute.manager.ComputeManager.' '_sync_instance_power_state') def test_query_driver_power_state_and_sync_not_found_driver( self, mock_sync_power_state): error = exception.InstanceNotFound(instance_id=1) with mock.patch.object(self.compute.driver, 'get_info', side_effect=error) as mock_get_info: db_instance = objects.Instance(uuid=uuids.db_instance, task_state=None) self.compute._query_driver_power_state_and_sync(self.context, db_instance) mock_get_info.assert_called_once_with(db_instance) mock_sync_power_state.assert_called_once_with(self.context, db_instance, power_state.NOSTATE, use_slave=True) def test_cleanup_running_deleted_instances_virt_driver_not_ready(self): """Tests the scenario that the driver raises VirtDriverNotReady when listing instances so the task returns early. """ self.flags(running_deleted_instance_action='shutdown') with mock.patch.object(self.compute, '_running_deleted_instances', side_effect=exception.VirtDriverNotReady) as ls: # Mock the virt driver to make sure nothing calls it. 
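        # NOTE: replacing the driver with a NonCallableMock documents the intent
        # that _cleanup_running_deleted_instances must return early, before touching
        # the virt layer, once listing instances fails with VirtDriverNotReady.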
with mock.patch.object(self.compute, 'driver', new_callable=mock.NonCallableMock): self.compute._cleanup_running_deleted_instances(self.context) ls.assert_called_once_with(test.MatchType(context.RequestContext)) @mock.patch.object(virt_driver.ComputeDriver, 'delete_instance_files') @mock.patch.object(objects.InstanceList, 'get_by_filters') def test_run_pending_deletes(self, mock_get, mock_delete): self.flags(instance_delete_interval=10) class FakeInstance(object): def __init__(self, uuid, name, smd): self.uuid = uuid self.name = name self.system_metadata = smd self.cleaned = False def __getitem__(self, name): return getattr(self, name) def save(self): pass def _fake_get(ctx, filter, expected_attrs, use_slave): mock_get.assert_called_once_with( {'read_deleted': 'yes'}, {'deleted': True, 'soft_deleted': False, 'host': 'fake-mini', 'cleaned': False}, expected_attrs=['system_metadata'], use_slave=True) return [a, b, c] a = FakeInstance(uuids.instanceA, 'apple', {'clean_attempts': '100'}) b = FakeInstance(uuids.instanceB, 'orange', {'clean_attempts': '3'}) c = FakeInstance(uuids.instanceC, 'banana', {}) mock_get.side_effect = _fake_get mock_delete.side_effect = [True, False] self.compute._run_pending_deletes({}) self.assertFalse(a.cleaned) self.assertEqual('100', a.system_metadata['clean_attempts']) self.assertTrue(b.cleaned) self.assertEqual('4', b.system_metadata['clean_attempts']) self.assertFalse(c.cleaned) self.assertEqual('1', c.system_metadata['clean_attempts']) mock_delete.assert_has_calls([mock.call(mock.ANY), mock.call(mock.ANY)]) @mock.patch.object(objects.Migration, 'save') @mock.patch.object(objects.MigrationList, 'get_by_filters') @mock.patch.object(objects.InstanceList, 'get_by_filters') def _test_cleanup_incomplete_migrations(self, inst_host, mock_inst_get_by_filters, mock_migration_get_by_filters, mock_save): def fake_inst(context, uuid, host): inst = objects.Instance(context) inst.uuid = uuid inst.host = host return inst def fake_migration(uuid, status, inst_uuid, src_host, dest_host): migration = objects.Migration() migration.uuid = uuid migration.status = status migration.instance_uuid = inst_uuid migration.source_compute = src_host migration.dest_compute = dest_host return migration fake_instances = [fake_inst(self.context, uuids.instance_1, inst_host), fake_inst(self.context, uuids.instance_2, inst_host)] fake_migrations = [fake_migration(uuids.mig1, 'error', uuids.instance_1, 'fake-host', 'fake-mini'), fake_migration(uuids.mig2, 'error', uuids.instance_2, 'fake-host', 'fake-mini')] mock_migration_get_by_filters.return_value = fake_migrations mock_inst_get_by_filters.return_value = fake_instances with mock.patch.object(self.compute.driver, 'delete_instance_files'): self.compute._cleanup_incomplete_migrations(self.context) # Ensure that migration status is set to 'failed' after instance # files deletion for those instances whose instance.host is not # same as compute host where periodic task is running. for inst in fake_instances: if inst.host != CONF.host: for mig in fake_migrations: if inst.uuid == mig.instance_uuid: self.assertEqual('failed', mig.status) def test_cleanup_incomplete_migrations_dest_node(self): """Test to ensure instance files are deleted from destination node. If instance gets deleted during resizing/revert-resizing operation, in that case instance files gets deleted from instance.host (source host here), but there is possibility that instance files could be present on destination node. 
This test ensures that the `_cleanup_incomplete_migrations` periodic task deletes orphaned instance files from the destination compute node. """ self.flags(host='fake-mini') self._test_cleanup_incomplete_migrations('fake-host') def test_cleanup_incomplete_migrations_source_node(self): """Test to ensure instance files are deleted from the source node. If an instance gets deleted during a resize or revert-resize operation, its instance files get deleted from instance.host (the dest host here), but there is a possibility that instance files could still be present on the source node. This test ensures that the `_cleanup_incomplete_migrations` periodic task deletes orphaned instance files from the source compute node. """ self.flags(host='fake-host') self._test_cleanup_incomplete_migrations('fake-mini') def test_attach_interface_failure(self): # Test that the fault methods are invoked when an attach fails db_instance = fake_instance.fake_db_instance() f_instance = objects.Instance._from_db_object(self.context, objects.Instance(), db_instance) e = exception.InterfaceAttachFailed(instance_uuid=f_instance.uuid) @mock.patch.object(compute_utils, 'EventReporter') @mock.patch.object(compute_utils, 'notify_about_instance_action') @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') @mock.patch.object(self.compute.network_api, 'allocate_port_for_instance', side_effect=e) @mock.patch.object(self.compute, '_instance_update', side_effect=lambda *a, **k: {}) def do_test(update, meth, add_fault, notify, event): self.assertRaises(exception.InterfaceAttachFailed, self.compute.attach_interface, self.context, f_instance, 'net_id', 'port_id', None, None) add_fault.assert_has_calls([ mock.call(self.context, f_instance, e, mock.ANY)]) event.assert_called_once_with( self.context, 'compute_attach_interface', CONF.host, f_instance.uuid, graceful_exit=False) with mock.patch.dict(self.compute.driver.capabilities, supports_attach_interface=True): do_test() def test_detach_interface_failure(self): # Test that the fault methods are invoked when a detach fails # Build test data that will cause a PortNotFound exception nw_info = network_model.NetworkInfo([]) info_cache = objects.InstanceInfoCache(network_info=nw_info, instance_uuid=uuids.instance) f_instance = objects.Instance(id=3, uuid=uuids.instance, info_cache=info_cache) @mock.patch.object(compute_utils, 'EventReporter') @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') @mock.patch.object(compute_utils, 'refresh_info_cache_for_instance') @mock.patch.object(self.compute, '_set_instance_obj_error_state') def do_test(meth, refresh, add_fault, event): self.assertRaises(exception.PortNotFound, self.compute.detach_interface, self.context, f_instance, 'port_id') add_fault.assert_has_calls( [mock.call(self.context, f_instance, mock.ANY, mock.ANY)]) event.assert_called_once_with( self.context, 'compute_detach_interface', CONF.host, f_instance.uuid, graceful_exit=False) refresh.assert_called_once_with(self.context, f_instance) do_test() @mock.patch('nova.compute.manager.LOG.log') @mock.patch.object(compute_utils, 'EventReporter') @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') @mock.patch.object(compute_utils, 'notify_about_instance_action') @mock.patch.object(compute_utils, 'refresh_info_cache_for_instance') def test_detach_interface_instance_not_found(self, mock_refresh, mock_notify, mock_fault, mock_event, mock_log): nw_info = network_model.NetworkInfo([ network_model.VIF(uuids.port_id)]) info_cache = objects.InstanceInfoCache(network_info=nw_info,
instance_uuid=uuids.instance) instance = objects.Instance(id=1, uuid=uuids.instance, info_cache=info_cache) with mock.patch.object(self.compute.driver, 'detach_interface', side_effect=exception.InstanceNotFound( instance_id=uuids.instance)): self.assertRaises(exception.InterfaceDetachFailed, self.compute.detach_interface, self.context, instance, uuids.port_id) mock_refresh.assert_called_once_with(self.context, instance) self.assertEqual(1, mock_log.call_count) self.assertEqual(logging.DEBUG, mock_log.call_args[0][0]) @mock.patch.object(compute_utils, 'EventReporter') @mock.patch.object(virt_driver.ComputeDriver, 'get_volume_connector', return_value={}) @mock.patch.object(manager.ComputeManager, '_instance_update', return_value={}) @mock.patch.object(db, 'instance_fault_create') @mock.patch.object(db, 'block_device_mapping_update') @mock.patch.object(db, 'block_device_mapping_get_by_instance_and_volume_id') @mock.patch.object(cinder.API, 'migrate_volume_completion') @mock.patch.object(cinder.API, 'terminate_connection') @mock.patch.object(cinder.API, 'unreserve_volume') @mock.patch.object(cinder.API, 'get') @mock.patch.object(cinder.API, 'roll_detaching') @mock.patch.object(compute_utils, 'notify_about_volume_swap') def _test_swap_volume(self, mock_notify, mock_roll_detaching, mock_cinder_get, mock_unreserve_volume, mock_terminate_connection, mock_migrate_volume_completion, mock_bdm_get, mock_bdm_update, mock_instance_fault_create, mock_instance_update, mock_get_volume_connector, mock_event, expected_exception=None): # This test ensures that volume_id arguments are passed to volume_api # and that volume states are OK volumes = {} volumes[uuids.old_volume] = {'id': uuids.old_volume, 'display_name': 'old_volume', 'status': 'detaching', 'size': 1} volumes[uuids.new_volume] = {'id': uuids.new_volume, 'display_name': 'new_volume', 'status': 'available', 'size': 2} fake_bdm = fake_block_device.FakeDbBlockDeviceDict( {'device_name': '/dev/vdb', 'source_type': 'volume', 'destination_type': 'volume', 'instance_uuid': uuids.instance, 'connection_info': '{"foo": "bar"}', 'volume_id': uuids.old_volume, 'attachment_id': None}) def fake_vol_api_roll_detaching(context, volume_id): self.assertTrue(uuidutils.is_uuid_like(volume_id)) if volumes[volume_id]['status'] == 'detaching': volumes[volume_id]['status'] = 'in-use' def fake_vol_api_func(context, volume, *args): self.assertTrue(uuidutils.is_uuid_like(volume)) return {} def fake_vol_get(context, volume_id): self.assertTrue(uuidutils.is_uuid_like(volume_id)) return volumes[volume_id] def fake_vol_unreserve(context, volume_id): self.assertTrue(uuidutils.is_uuid_like(volume_id)) if volumes[volume_id]['status'] == 'attaching': volumes[volume_id]['status'] = 'available' def fake_vol_migrate_volume_completion(context, old_volume_id, new_volume_id, error=False): self.assertTrue(uuidutils.is_uuid_like(old_volume_id)) self.assertTrue(uuidutils.is_uuid_like(new_volume_id)) volumes[old_volume_id]['status'] = 'in-use' return {'save_volume_id': new_volume_id} def fake_block_device_mapping_update(ctxt, id, updates, legacy): self.assertEqual(2, updates['volume_size']) return fake_bdm mock_roll_detaching.side_effect = fake_vol_api_roll_detaching mock_terminate_connection.side_effect = fake_vol_api_func mock_cinder_get.side_effect = fake_vol_get mock_migrate_volume_completion.side_effect = ( fake_vol_migrate_volume_completion) mock_unreserve_volume.side_effect = fake_vol_unreserve mock_bdm_get.return_value = fake_bdm mock_bdm_update.side_effect = 
fake_block_device_mapping_update mock_instance_fault_create.return_value = ( test_instance_fault.fake_faults['fake-uuid'][0]) instance1 = fake_instance.fake_instance_obj( self.context, **{'uuid': uuids.instance}) if expected_exception: volumes[uuids.old_volume]['status'] = 'detaching' volumes[uuids.new_volume]['status'] = 'attaching' self.assertRaises(expected_exception, self.compute.swap_volume, self.context, uuids.old_volume, uuids.new_volume, instance1, None) self.assertEqual('in-use', volumes[uuids.old_volume]['status']) self.assertEqual('available', volumes[uuids.new_volume]['status']) self.assertEqual(2, mock_notify.call_count) mock_notify.assert_any_call( test.MatchType(context.RequestContext), instance1, self.compute.host, fields.NotificationPhase.START, uuids.old_volume, uuids.new_volume) mock_notify.assert_any_call( test.MatchType(context.RequestContext), instance1, self.compute.host, fields.NotificationPhase.ERROR, uuids.old_volume, uuids.new_volume, test.MatchType(expected_exception), mock.ANY) else: self.compute.swap_volume(self.context, uuids.old_volume, uuids.new_volume, instance1, None) self.assertEqual(volumes[uuids.old_volume]['status'], 'in-use') self.assertEqual(2, mock_notify.call_count) mock_notify.assert_any_call(test.MatchType(context.RequestContext), instance1, self.compute.host, fields.NotificationPhase.START, uuids.old_volume, uuids.new_volume) mock_notify.assert_any_call(test.MatchType(context.RequestContext), instance1, self.compute.host, fields.NotificationPhase.END, uuids.old_volume, uuids.new_volume) mock_event.assert_called_once_with(self.context, 'compute_swap_volume', CONF.host, instance1.uuid, graceful_exit=False) def _assert_volume_api(self, context, volume, *args): self.assertTrue(uuidutils.is_uuid_like(volume)) return {} def _assert_swap_volume(self, context, old_connection_info, new_connection_info, instance, mountpoint, resize_to): self.assertEqual(2, resize_to) @mock.patch.object(cinder.API, 'initialize_connection') @mock.patch.object(fake_driver.FakeDriver, 'swap_volume') def test_swap_volume_volume_api_usage(self, mock_swap_volume, mock_initialize_connection): mock_initialize_connection.side_effect = self._assert_volume_api mock_swap_volume.side_effect = self._assert_swap_volume self._test_swap_volume() @mock.patch.object(cinder.API, 'initialize_connection') @mock.patch.object(fake_driver.FakeDriver, 'swap_volume', side_effect=test.TestingException()) def test_swap_volume_with_compute_driver_exception( self, mock_swap_volume, mock_initialize_connection): mock_initialize_connection.side_effect = self._assert_volume_api self._test_swap_volume(expected_exception=test.TestingException) @mock.patch.object(cinder.API, 'initialize_connection', side_effect=test.TestingException()) @mock.patch.object(fake_driver.FakeDriver, 'swap_volume') def test_swap_volume_with_initialize_connection_exception( self, mock_swap_volume, mock_initialize_connection): self._test_swap_volume(expected_exception=test.TestingException) @mock.patch('nova.compute.utils.notify_about_volume_swap') @mock.patch( 'nova.db.api.block_device_mapping_get_by_instance_and_volume_id') @mock.patch('nova.db.api.block_device_mapping_update') @mock.patch('nova.volume.cinder.API.get') @mock.patch('nova.virt.libvirt.LibvirtDriver.get_volume_connector') @mock.patch('nova.compute.manager.ComputeManager._swap_volume') def test_swap_volume_delete_on_termination_flag(self, swap_volume_mock, volume_connector_mock, get_volume_mock, update_bdm_mock, get_bdm_mock, notify_mock): # This test ensures that 
delete_on_termination flag arguments # are reserved volumes = {} old_volume_id = uuids.fake volumes[old_volume_id] = {'id': old_volume_id, 'display_name': 'old_volume', 'status': 'detaching', 'size': 2} new_volume_id = uuids.fake_2 volumes[new_volume_id] = {'id': new_volume_id, 'display_name': 'new_volume', 'status': 'available', 'size': 2} fake_bdm = fake_block_device.FakeDbBlockDeviceDict( {'device_name': '/dev/vdb', 'source_type': 'volume', 'destination_type': 'volume', 'instance_uuid': uuids.instance, 'delete_on_termination': True, 'connection_info': '{"foo": "bar"}', 'attachment_id': None}) comp_ret = {'save_volume_id': old_volume_id} new_info = {"foo": "bar", "serial": old_volume_id} swap_volume_mock.return_value = (comp_ret, new_info) volume_connector_mock.return_value = {} update_bdm_mock.return_value = fake_bdm get_bdm_mock.return_value = fake_bdm get_volume_mock.return_value = volumes[old_volume_id] self.compute.swap_volume(self.context, old_volume_id, new_volume_id, fake_instance.fake_instance_obj(self.context, **{'uuid': uuids.instance}), None) update_values = {'no_device': False, 'connection_info': jsonutils.dumps(new_info), 'volume_id': old_volume_id, 'source_type': u'volume', 'snapshot_id': None, 'destination_type': u'volume'} update_bdm_mock.assert_called_once_with(mock.ANY, mock.ANY, update_values, legacy=False) @mock.patch.object(compute_utils, 'notify_about_volume_swap') @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') @mock.patch('nova.volume.cinder.API.get') @mock.patch('nova.volume.cinder.API.attachment_update') @mock.patch('nova.volume.cinder.API.attachment_delete') @mock.patch('nova.volume.cinder.API.attachment_complete') @mock.patch('nova.volume.cinder.API.migrate_volume_completion', return_value={'save_volume_id': uuids.old_volume_id}) def test_swap_volume_with_new_attachment_id_cinder_migrate_true( self, migrate_volume_completion, attachment_complete, attachment_delete, attachment_update, get_volume, get_bdm, notify_about_volume_swap): """Tests a swap volume operation with a new style volume attachment passed in from the compute API, and the case that Cinder initiated the swap volume because of a volume retype situation. This is a happy path test. Since it is a retype there is no volume size change. """ bdm = objects.BlockDeviceMapping( volume_id=uuids.old_volume_id, device_name='/dev/vda', attachment_id=uuids.old_attachment_id, connection_info='{"data": {}}', volume_size=1) old_volume = { 'id': uuids.old_volume_id, 'size': 1, 'status': 'retyping', 'migration_status': 'migrating', 'multiattach': False } new_volume = { 'id': uuids.new_volume_id, 'size': 1, 'status': 'reserved', 'migration_status': 'migrating', 'multiattach': False } attachment_update.return_value = {"connection_info": {"data": {}}} get_bdm.return_value = bdm get_volume.side_effect = (old_volume, new_volume) instance = fake_instance.fake_instance_obj(self.context) with test.nested( mock.patch.object(self.context, 'elevated', return_value=self.context), mock.patch.object(self.compute.driver, 'get_volume_connector', return_value=mock.sentinel.connector), mock.patch.object(bdm, 'save') ) as ( mock_elevated, mock_get_volume_connector, mock_save ): self.compute.swap_volume( self.context, uuids.old_volume_id, uuids.new_volume_id, instance, uuids.new_attachment_id) # Assert the expected calls. get_bdm.assert_called_once_with( self.context, uuids.old_volume_id, instance.uuid) # We updated the new attachment with the host connector. 
attachment_update.assert_called_once_with( self.context, uuids.new_attachment_id, mock.sentinel.connector, bdm.device_name) # We tell Cinder that the new volume is connected attachment_complete.assert_called_once_with( self.context, uuids.new_attachment_id) # After a successful swap volume, we deleted the old attachment. attachment_delete.assert_called_once_with( self.context, uuids.old_attachment_id) # After a successful swap volume, we tell Cinder so it can complete # the retype operation. migrate_volume_completion.assert_called_once_with( self.context, uuids.old_volume_id, uuids.new_volume_id, error=False) # The BDM should have been updated. Since it's a retype, the old # volume ID is returned from Cinder so that's what goes into the # BDM but the new attachment ID is saved. mock_save.assert_called_once_with() self.assertEqual(uuids.old_volume_id, bdm.volume_id) self.assertEqual(uuids.new_attachment_id, bdm.attachment_id) self.assertEqual(1, bdm.volume_size) self.assertEqual(uuids.old_volume_id, jsonutils.loads(bdm.connection_info)['serial']) @mock.patch.object(compute_utils, 'notify_about_volume_swap') @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') @mock.patch('nova.volume.cinder.API.get') @mock.patch('nova.volume.cinder.API.attachment_update') @mock.patch('nova.volume.cinder.API.attachment_delete') @mock.patch('nova.volume.cinder.API.attachment_complete') @mock.patch('nova.volume.cinder.API.migrate_volume_completion') def test_swap_volume_with_new_attachment_id_cinder_migrate_false( self, migrate_volume_completion, attachment_complete, attachment_delete, attachment_update, get_volume, get_bdm, notify_about_volume_swap): """Tests a swap volume operation with a new style volume attachment passed in from the compute API, and the case that Cinder did not initiate the swap volume. This is a happy path test. Since it is not a retype we also change the size. """ bdm = objects.BlockDeviceMapping( volume_id=uuids.old_volume_id, device_name='/dev/vda', attachment_id=uuids.old_attachment_id, connection_info='{"data": {}}') old_volume = { 'id': uuids.old_volume_id, 'size': 1, 'status': 'detaching', 'multiattach': False } new_volume = { 'id': uuids.new_volume_id, 'size': 2, 'status': 'reserved', 'multiattach': False } attachment_update.return_value = {"connection_info": {"data": {}}} get_bdm.return_value = bdm get_volume.side_effect = (old_volume, new_volume) instance = fake_instance.fake_instance_obj(self.context) with test.nested( mock.patch.object(self.context, 'elevated', return_value=self.context), mock.patch.object(self.compute.driver, 'get_volume_connector', return_value=mock.sentinel.connector), mock.patch.object(bdm, 'save') ) as ( mock_elevated, mock_get_volume_connector, mock_save ): self.compute.swap_volume( self.context, uuids.old_volume_id, uuids.new_volume_id, instance, uuids.new_attachment_id) # Assert the expected calls. get_bdm.assert_called_once_with( self.context, uuids.old_volume_id, instance.uuid) # We updated the new attachment with the host connector. attachment_update.assert_called_once_with( self.context, uuids.new_attachment_id, mock.sentinel.connector, bdm.device_name) # We tell Cinder that the new volume is connected attachment_complete.assert_called_once_with( self.context, uuids.new_attachment_id) # After a successful swap volume, we deleted the old attachment. 
attachment_delete.assert_called_once_with( self.context, uuids.old_attachment_id) # After a successful swap volume, since it was not a # Cinder-initiated call, we don't call migrate_volume_completion. migrate_volume_completion.assert_not_called() # The BDM should have been updated. Since it's a not a retype, the # volume_id is now the new volume ID. mock_save.assert_called_once_with() self.assertEqual(uuids.new_volume_id, bdm.volume_id) self.assertEqual(uuids.new_attachment_id, bdm.attachment_id) self.assertEqual(2, bdm.volume_size) new_conn_info = jsonutils.loads(bdm.connection_info) self.assertEqual(uuids.new_volume_id, new_conn_info['serial']) self.assertNotIn('multiattach', new_conn_info) @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') @mock.patch.object(compute_utils, 'notify_about_volume_swap') @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') @mock.patch('nova.volume.cinder.API.get') @mock.patch('nova.volume.cinder.API.attachment_update', side_effect=exception.VolumeAttachmentNotFound( attachment_id=uuids.new_attachment_id)) @mock.patch('nova.volume.cinder.API.roll_detaching') @mock.patch('nova.volume.cinder.API.attachment_delete') @mock.patch('nova.volume.cinder.API.migrate_volume_completion') def test_swap_volume_with_new_attachment_id_attachment_update_fails( self, migrate_volume_completion, attachment_delete, roll_detaching, attachment_update, get_volume, get_bdm, notify_about_volume_swap, add_instance_fault_from_exc): """Tests a swap volume operation with a new style volume attachment passed in from the compute API, and the case that Cinder initiated the swap volume because of a volume migrate situation. This is a negative test where attachment_update fails. """ bdm = objects.BlockDeviceMapping( volume_id=uuids.old_volume_id, device_name='/dev/vda', attachment_id=uuids.old_attachment_id, connection_info='{"data": {}}') old_volume = { 'id': uuids.old_volume_id, 'size': 1, 'status': 'in-use', 'migration_status': 'migrating', 'multiattach': False } new_volume = { 'id': uuids.new_volume_id, 'size': 1, 'status': 'reserved', 'migration_status': 'migrating', 'multiattach': False } get_bdm.return_value = bdm get_volume.side_effect = (old_volume, new_volume) instance = fake_instance.fake_instance_obj(self.context) with test.nested( mock.patch.object(self.context, 'elevated', return_value=self.context), mock.patch.object(self.compute.driver, 'get_volume_connector', return_value=mock.sentinel.connector) ) as ( mock_elevated, mock_get_volume_connector ): self.assertRaises( exception.VolumeAttachmentNotFound, self.compute.swap_volume, self.context, uuids.old_volume_id, uuids.new_volume_id, instance, uuids.new_attachment_id) # Assert the expected calls. get_bdm.assert_called_once_with( self.context, uuids.old_volume_id, instance.uuid) # We tried to update the new attachment with the host connector. attachment_update.assert_called_once_with( self.context, uuids.new_attachment_id, mock.sentinel.connector, bdm.device_name) # After a failure, we rollback the detaching status of the old # volume. roll_detaching.assert_called_once_with( self.context, uuids.old_volume_id) # After a failure, we deleted the new attachment. attachment_delete.assert_called_once_with( self.context, uuids.new_attachment_id) # After a failure for a Cinder-initiated swap volume, we called # migrate_volume_completion to let Cinder know things blew up. 
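            # (error=True in the call below is how the failure is reported back to
            # Cinder for the migration it initiated.)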
migrate_volume_completion.assert_called_once_with( self.context, uuids.old_volume_id, uuids.new_volume_id, error=True) @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') @mock.patch.object(compute_utils, 'notify_about_volume_swap') @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') @mock.patch('nova.volume.cinder.API.get') @mock.patch('nova.volume.cinder.API.attachment_update') @mock.patch('nova.volume.cinder.API.roll_detaching') @mock.patch('nova.volume.cinder.API.attachment_delete') @mock.patch('nova.volume.cinder.API.migrate_volume_completion') def test_swap_volume_with_new_attachment_id_driver_swap_fails( self, migrate_volume_completion, attachment_delete, roll_detaching, attachment_update, get_volume, get_bdm, notify_about_volume_swap, add_instance_fault_from_exc): """Tests a swap volume operation with a new style volume attachment passed in from the compute API, and the case that Cinder did not initiate the swap volume. This is a negative test where the compute driver swap_volume method fails. """ bdm = objects.BlockDeviceMapping( volume_id=uuids.old_volume_id, device_name='/dev/vda', attachment_id=uuids.old_attachment_id, connection_info='{"data": {}}') old_volume = { 'id': uuids.old_volume_id, 'size': 1, 'status': 'detaching', 'multiattach': False } new_volume = { 'id': uuids.new_volume_id, 'size': 2, 'status': 'reserved', 'multiattach': False } attachment_update.return_value = {"connection_info": {"data": {}}} get_bdm.return_value = bdm get_volume.side_effect = (old_volume, new_volume) instance = fake_instance.fake_instance_obj(self.context) with test.nested( mock.patch.object(self.context, 'elevated', return_value=self.context), mock.patch.object(self.compute.driver, 'get_volume_connector', return_value=mock.sentinel.connector), mock.patch.object(self.compute.driver, 'swap_volume', side_effect=test.TestingException('yikes')) ) as ( mock_elevated, mock_get_volume_connector, mock_driver_swap ): self.assertRaises( test.TestingException, self.compute.swap_volume, self.context, uuids.old_volume_id, uuids.new_volume_id, instance, uuids.new_attachment_id) # Assert the expected calls. # The new connection_info has the new_volume_id as the serial. new_cinfo = mock_driver_swap.call_args[0][2] self.assertIn('serial', new_cinfo) self.assertEqual(uuids.new_volume_id, new_cinfo['serial']) get_bdm.assert_called_once_with( self.context, uuids.old_volume_id, instance.uuid) # We updated the new attachment with the host connector. attachment_update.assert_called_once_with( self.context, uuids.new_attachment_id, mock.sentinel.connector, bdm.device_name) # After a failure, we rollback the detaching status of the old # volume. roll_detaching.assert_called_once_with( self.context, uuids.old_volume_id) # After a failed swap volume, we deleted the new attachment. attachment_delete.assert_called_once_with( self.context, uuids.new_attachment_id) # After a failed swap volume, since it was not a # Cinder-initiated call, we don't call migrate_volume_completion. migrate_volume_completion.assert_not_called() @mock.patch('nova.volume.cinder.API.attachment_update') def test_swap_volume_with_multiattach(self, attachment_update): """Tests swap volume where the volume being swapped-to supports multiattach as well as the compute driver, so the attachment for the new volume (created in the API) is updated with the host connector and the new_connection_info is updated with the multiattach flag. 
""" bdm = objects.BlockDeviceMapping( volume_id=uuids.old_volume_id, device_name='/dev/vda', attachment_id=uuids.old_attachment_id, connection_info='{"data": {}}') new_volume = { 'id': uuids.new_volume_id, 'size': 2, 'status': 'reserved', 'multiattach': True } attachment_update.return_value = {"connection_info": {"data": {}}} connector = mock.sentinel.connector with mock.patch.dict(self.compute.driver.capabilities, {'supports_multiattach': True}): _, new_cinfo = self.compute._init_volume_connection( self.context, new_volume, uuids.old_volume_id, connector, bdm, uuids.new_attachment_id, bdm.device_name) self.assertEqual(uuids.new_volume_id, new_cinfo['serial']) self.assertIn('multiattach', new_cinfo) self.assertTrue(new_cinfo['multiattach']) attachment_update.assert_called_once_with( self.context, uuids.new_attachment_id, connector, bdm.device_name) def test_swap_volume_with_multiattach_no_driver_support(self): """Tests a swap volume scenario where the new volume being swapped-to supports multiattach but the virt driver does not, so swap volume fails. """ bdm = objects.BlockDeviceMapping( volume_id=uuids.old_volume_id, device_name='/dev/vda', attachment_id=uuids.old_attachment_id, connection_info='{"data": {}}') new_volume = { 'id': uuids.new_volume_id, 'size': 2, 'status': 'reserved', 'multiattach': True } connector = {'host': 'localhost'} with mock.patch.dict(self.compute.driver.capabilities, {'supports_multiattach': False}): self.assertRaises(exception.MultiattachNotSupportedByVirtDriver, self.compute._init_volume_connection, self.context, new_volume, uuids.old_volume_id, connector, bdm, uuids.new_attachment_id, bdm.device_name) def test_live_migration_claim(self): mock_claim = mock.Mock() mock_claim.claimed_numa_topology = mock.sentinel.claimed_topology stub_image_meta = objects.ImageMeta() post_claim_md = migrate_data_obj.LiveMigrateData() with test.nested( mock.patch.object(self.compute.rt, 'live_migration_claim', return_value=mock_claim), mock.patch.object(self.compute, '_get_nodename', return_value='fake-dest-node'), mock.patch.object(objects.ImageMeta, 'from_instance', return_value=stub_image_meta), mock.patch.object(fake_driver.FakeDriver, 'post_claim_migrate_data', return_value=post_claim_md) ) as (mock_lm_claim, mock_get_nodename, mock_from_instance, mock_post_claim_migrate_data): instance = objects.Instance(flavor=objects.Flavor()) md = objects.LibvirtLiveMigrateData() migration = objects.Migration() self.assertEqual( post_claim_md, self.compute._live_migration_claim( self.context, instance, md, migration, mock.sentinel.limits, None)) mock_lm_claim.assert_called_once_with( self.context, instance, 'fake-dest-node', migration, mock.sentinel.limits, None) mock_post_claim_migrate_data.assert_called_once_with( self.context, instance, md, mock_claim) def test_live_migration_claim_claim_raises(self): stub_image_meta = objects.ImageMeta() with test.nested( mock.patch.object( self.compute.rt, 'live_migration_claim', side_effect=exception.ComputeResourcesUnavailable( reason='bork')), mock.patch.object(self.compute, '_get_nodename', return_value='fake-dest-node'), mock.patch.object(objects.ImageMeta, 'from_instance', return_value=stub_image_meta), mock.patch.object(fake_driver.FakeDriver, 'post_claim_migrate_data') ) as (mock_lm_claim, mock_get_nodename, mock_from_instance, mock_post_claim_migrate_data): instance = objects.Instance(flavor=objects.Flavor()) migration = objects.Migration() self.assertRaises( exception.MigrationPreCheckError, self.compute._live_migration_claim, self.context, 
instance, objects.LibvirtLiveMigrateData(), migration, mock.sentinel.limits, None) mock_lm_claim.assert_called_once_with( self.context, instance, 'fake-dest-node', migration, mock.sentinel.limits, None) mock_get_nodename.assert_called_once_with(instance) mock_post_claim_migrate_data.assert_not_called() def test_drop_move_claim_at_destination(self): instance = objects.Instance(flavor=objects.Flavor()) with test.nested( mock.patch.object(self.compute.rt, 'drop_move_claim'), mock.patch.object(self.compute, '_get_nodename', return_value='fake-node') ) as (mock_drop_move_claim, mock_get_nodename): self.compute.drop_move_claim_at_destination(self.context, instance) mock_get_nodename.assert_called_once_with(instance) mock_drop_move_claim.assert_called_once_with( self.context, instance, 'fake-node', instance_type=instance.flavor) @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(fake_driver.FakeDriver, 'check_can_live_migrate_source') @mock.patch.object(manager.ComputeManager, '_get_instance_block_device_info') @mock.patch.object(compute_utils, 'is_volume_backed_instance') @mock.patch.object(compute_utils, 'EventReporter') @mock.patch.object(manager.ComputeManager, '_source_can_numa_live_migrate') def test_check_can_live_migrate_source( self, mock_src_can_numa, mock_event, mock_volume, mock_get_inst, mock_check, mock_get_bdms): fake_bdms = objects.BlockDeviceMappingList() mock_get_bdms.return_value = fake_bdms drvr_check_result = migrate_data_obj.LiveMigrateData() mock_check.return_value = drvr_check_result can_numa_result = migrate_data_obj.LiveMigrateData() mock_src_can_numa.return_value = can_numa_result is_volume_backed = 'volume_backed' dest_check_data = migrate_data_obj.LiveMigrateData() db_instance = fake_instance.fake_db_instance() instance = objects.Instance._from_db_object( self.context, objects.Instance(), db_instance) mock_volume.return_value = is_volume_backed mock_get_inst.return_value = {'block_device_mapping': 'fake'} result = self.compute.check_can_live_migrate_source( self.context, instance=instance, dest_check_data=dest_check_data) self.assertEqual(can_numa_result, result) mock_src_can_numa.assert_called_once_with( self.context, dest_check_data, drvr_check_result) mock_event.assert_called_once_with( self.context, 'compute_check_can_live_migrate_source', CONF.host, instance.uuid, graceful_exit=False) mock_check.assert_called_once_with(self.context, instance, dest_check_data, {'block_device_mapping': 'fake'}) mock_volume.assert_called_once_with(self.context, instance, fake_bdms) mock_get_inst.assert_called_once_with(self.context, instance, refresh_conn_info=False, bdms=fake_bdms) self.assertTrue(dest_check_data.is_volume_backed) def _test_check_can_live_migrate_destination(self, do_raise=False, src_numa_lm=None): db_instance = fake_instance.fake_db_instance(host='fake-host') instance = objects.Instance._from_db_object( self.context, objects.Instance(), db_instance) instance.host = 'fake-host' block_migration = 'block_migration' disk_over_commit = 'disk_over_commit' src_info = 'src_info' dest_info = 'dest_info' dest_check_data = objects.LibvirtLiveMigrateData( dst_supports_numa_live_migration=True) mig_data = objects.LibvirtLiveMigrateData() if src_numa_lm is not None: mig_data.src_supports_numa_live_migration = src_numa_lm with test.nested( mock.patch.object(self.compute, '_live_migration_claim'), mock.patch.object(self.compute, '_get_compute_info'), mock.patch.object(self.compute.driver, 'check_can_live_migrate_destination'), 
mock.patch.object(self.compute.compute_rpcapi, 'check_can_live_migrate_source'), mock.patch.object(self.compute.driver, 'cleanup_live_migration_destination_check'), mock.patch.object(db, 'instance_fault_create'), mock.patch.object(compute_utils, 'EventReporter'), mock.patch.object(migrate_data_obj.VIFMigrateData, 'create_skeleton_migrate_vifs', return_value=[]), mock.patch.object(instance, 'get_network_info'), mock.patch.object(self.compute, '_claim_pci_for_instance_vifs'), mock.patch.object(self.compute, '_update_migrate_vifs_profile_with_pci'), mock.patch.object(self.compute, '_dest_can_numa_live_migrate'), mock.patch('nova.compute.manager.LOG'), ) as (mock_lm_claim, mock_get, mock_check_dest, mock_check_src, mock_check_clean, mock_fault_create, mock_event, mock_create_mig_vif, mock_nw_info, mock_claim_pci, mock_update_mig_vif, mock_dest_can_numa, mock_log): mock_get.side_effect = (src_info, dest_info) mock_check_dest.return_value = dest_check_data mock_dest_can_numa.return_value = dest_check_data post_claim_md = objects.LibvirtLiveMigrateData( dst_numa_info=objects.LibvirtLiveMigrateNUMAInfo()) mock_lm_claim.return_value = post_claim_md if do_raise: mock_check_src.side_effect = test.TestingException mock_fault_create.return_value = \ test_instance_fault.fake_faults['fake-uuid'][0] else: mock_check_src.return_value = mig_data migration = objects.Migration() limits = objects.SchedulerLimits() result = self.compute.check_can_live_migrate_destination( self.context, instance=instance, block_migration=block_migration, disk_over_commit=disk_over_commit, migration=migration, limits=limits) if do_raise: mock_fault_create.assert_called_once_with(self.context, mock.ANY) mock_check_src.assert_called_once_with(self.context, instance, dest_check_data) mock_dest_can_numa.assert_called_with(dest_check_data, migration) if not src_numa_lm: mock_lm_claim.assert_not_called() self.assertEqual(mig_data, result) mock_log.info.assert_called() self.assertThat( mock_log.info.call_args[0][0], testtools.matchers.MatchesRegex( 'Destination was ready for NUMA live migration')) else: mock_lm_claim.assert_called_once_with( self.context, instance, mig_data, migration, limits, None) self.assertEqual(post_claim_md, result) mock_check_clean.assert_called_once_with(self.context, dest_check_data) mock_get.assert_has_calls([mock.call(self.context, 'fake-host'), mock.call(self.context, CONF.host)]) mock_check_dest.assert_called_once_with(self.context, instance, src_info, dest_info, block_migration, disk_over_commit) mock_event.assert_called_once_with( self.context, 'compute_check_can_live_migrate_destination', CONF.host, instance.uuid, graceful_exit=False) return result @mock.patch('nova.objects.InstanceGroup.get_by_instance_uuid', mock.Mock( side_effect=exception.InstanceGroupNotFound(group_uuid=''))) def test_check_can_live_migrate_destination_success(self): self.useFixture(std_fixtures.MonkeyPatch( 'nova.network.neutron.API.supports_port_binding_extension', lambda *args: True)) self._test_check_can_live_migrate_destination() @mock.patch('nova.objects.InstanceGroup.get_by_instance_uuid', mock.Mock( side_effect=exception.InstanceGroupNotFound(group_uuid=''))) def test_check_can_live_migrate_destination_fail(self): self.useFixture(std_fixtures.MonkeyPatch( 'nova.network.neutron.API.supports_port_binding_extension', lambda *args: True)) self.assertRaises( test.TestingException, self._test_check_can_live_migrate_destination, do_raise=True) @mock.patch('nova.objects.InstanceGroup.get_by_instance_uuid', mock.Mock( 
side_effect=exception.InstanceGroupNotFound(group_uuid=''))) def test_check_can_live_migrate_destination_contains_vifs(self): self.useFixture(std_fixtures.MonkeyPatch( 'nova.network.neutron.API.supports_port_binding_extension', lambda *args: True)) migrate_data = self._test_check_can_live_migrate_destination() self.assertIn('vifs', migrate_data) self.assertIsNotNone(migrate_data.vifs) @mock.patch('nova.objects.InstanceGroup.get_by_instance_uuid', mock.Mock( side_effect=exception.InstanceGroupNotFound(group_uuid=''))) def test_check_can_live_migrate_destination_no_binding_extended(self): self.useFixture(std_fixtures.MonkeyPatch( 'nova.network.neutron.API.supports_port_binding_extension', lambda *args: False)) migrate_data = self._test_check_can_live_migrate_destination() self.assertNotIn('vifs', migrate_data) @mock.patch('nova.objects.InstanceGroup.get_by_instance_uuid', mock.Mock( side_effect=exception.InstanceGroupNotFound(group_uuid=''))) def test_check_can_live_migrate_destination_src_numa_lm_false(self): self.useFixture(std_fixtures.MonkeyPatch( 'nova.network.neutron.API.supports_port_binding_extension', lambda *args: True)) self._test_check_can_live_migrate_destination(src_numa_lm=False) @mock.patch('nova.objects.InstanceGroup.get_by_instance_uuid', mock.Mock( side_effect=exception.InstanceGroupNotFound(group_uuid=''))) def test_check_can_live_migrate_destination_src_numa_lm_true(self): self.useFixture(std_fixtures.MonkeyPatch( 'nova.network.neutron.API.supports_port_binding_extension', lambda *args: True)) self._test_check_can_live_migrate_destination(src_numa_lm=True) @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') def test_check_can_live_migrate_destination_fail_group_policy( self, mock_fail_db): instance = fake_instance.fake_instance_obj( self.context, host=self.compute.host, vm_state=vm_states.ACTIVE, node='fake-node') ex = exception.RescheduledException( instance_uuid=instance.uuid, reason="policy violated") with mock.patch.object(self.compute, '_validate_instance_group_policy', side_effect=ex): self.assertRaises( exception.MigrationPreCheckError, self.compute.check_can_live_migrate_destination, self.context, instance, None, None, None, None) def test_dest_can_numa_live_migrate(self): positive_dest_check_data = objects.LibvirtLiveMigrateData( dst_supports_numa_live_migration=True) negative_dest_check_data = objects.LibvirtLiveMigrateData() self.assertNotIn( 'dst_supports_numa_live_migration', self.compute._dest_can_numa_live_migrate( copy.deepcopy(positive_dest_check_data), None)) self.assertIn( 'dst_supports_numa_live_migration', self.compute._dest_can_numa_live_migrate( copy.deepcopy(positive_dest_check_data), 'fake-migration')) self.assertNotIn( 'dst_supports_numa_live_migration', self.compute._dest_can_numa_live_migrate( copy.deepcopy(negative_dest_check_data), 'fake-migration')) self.assertNotIn( 'dst_supports_numa_live_migration', self.compute._dest_can_numa_live_migrate( copy.deepcopy(negative_dest_check_data), None)) def test_source_can_numa_live_migrate(self): positive_dest_check_data = objects.LibvirtLiveMigrateData( dst_supports_numa_live_migration=True) negative_dest_check_data = objects.LibvirtLiveMigrateData() positive_source_check_data = objects.LibvirtLiveMigrateData( src_supports_numa_live_migration=True) negative_source_check_data = objects.LibvirtLiveMigrateData() with mock.patch.object(self.compute.compute_rpcapi, 'supports_numa_live_migration', return_value=True ) as mock_supports_numa_lm: self.assertIn( 'src_supports_numa_live_migration', 
self.compute._source_can_numa_live_migrate( self.context, positive_dest_check_data, copy.deepcopy(positive_source_check_data))) mock_supports_numa_lm.assert_called_once_with(self.context) self.assertNotIn( 'src_supports_numa_live_migration', self.compute._source_can_numa_live_migrate( self.context, positive_dest_check_data, copy.deepcopy(negative_source_check_data))) self.assertNotIn( 'src_supports_numa_live_migration', self.compute._source_can_numa_live_migrate( self.context, negative_dest_check_data, copy.deepcopy(positive_source_check_data))) self.assertNotIn( 'src_supports_numa_live_migration', self.compute._source_can_numa_live_migrate( self.context, negative_dest_check_data, copy.deepcopy(negative_source_check_data))) with mock.patch.object(self.compute.compute_rpcapi, 'supports_numa_live_migration', return_value=False ) as mock_supports_numa_lm: self.assertNotIn( 'src_supports_numa_live_migration', self.compute._source_can_numa_live_migrate( self.context, positive_dest_check_data, copy.deepcopy(positive_source_check_data))) mock_supports_numa_lm.assert_called_once_with(self.context) self.assertNotIn( 'src_supports_numa_live_migration', self.compute._source_can_numa_live_migrate( self.context, positive_dest_check_data, copy.deepcopy(negative_source_check_data))) self.assertNotIn( 'src_supports_numa_live_migration', self.compute._source_can_numa_live_migrate( self.context, negative_dest_check_data, copy.deepcopy(positive_source_check_data))) self.assertNotIn( 'src_supports_numa_live_migration', self.compute._source_can_numa_live_migrate( self.context, negative_dest_check_data, copy.deepcopy(negative_source_check_data))) @mock.patch('nova.compute.manager.InstanceEvents._lock_name') def test_prepare_for_instance_event(self, lock_name_mock): inst_obj = objects.Instance(uuid=uuids.instance) result = self.compute.instance_events.prepare_for_instance_event( inst_obj, 'test-event', None) self.assertIn(uuids.instance, self.compute.instance_events._events) self.assertIn(('test-event', None), self.compute.instance_events._events[uuids.instance]) self.assertEqual( result, self.compute.instance_events._events[uuids.instance] [('test-event', None)]) self.assertTrue(hasattr(result, 'send')) lock_name_mock.assert_called_once_with(inst_obj) @mock.patch('nova.compute.manager.InstanceEvents._lock_name') def test_pop_instance_event(self, lock_name_mock): event = eventlet_event.Event() self.compute.instance_events._events = { uuids.instance: { ('network-vif-plugged', None): event, } } inst_obj = objects.Instance(uuid=uuids.instance) event_obj = objects.InstanceExternalEvent(name='network-vif-plugged', tag=None) result = self.compute.instance_events.pop_instance_event(inst_obj, event_obj) self.assertEqual(result, event) lock_name_mock.assert_called_once_with(inst_obj) @mock.patch('nova.compute.manager.InstanceEvents._lock_name') def test_clear_events_for_instance(self, lock_name_mock): event = eventlet_event.Event() self.compute.instance_events._events = { uuids.instance: { ('test-event', None): event, } } inst_obj = objects.Instance(uuid=uuids.instance) result = self.compute.instance_events.clear_events_for_instance( inst_obj) self.assertEqual(result, {'test-event-None': event}) lock_name_mock.assert_called_once_with(inst_obj) def test_instance_events_lock_name(self): inst_obj = objects.Instance(uuid=uuids.instance) result = self.compute.instance_events._lock_name(inst_obj) self.assertEqual(result, "%s-events" % uuids.instance) def test_prepare_for_instance_event_again(self): inst_obj = 
objects.Instance(uuid=uuids.instance) self.compute.instance_events.prepare_for_instance_event( inst_obj, 'test-event', None) # A second attempt will avoid creating a new list; make sure we # get the current list result = self.compute.instance_events.prepare_for_instance_event( inst_obj, 'test-event', None) self.assertIn(uuids.instance, self.compute.instance_events._events) self.assertIn(('test-event', None), self.compute.instance_events._events[uuids.instance]) self.assertEqual( result, self.compute.instance_events._events[uuids.instance] [('test-event', None)]) self.assertTrue(hasattr(result, 'send')) def test_process_instance_event(self): event = eventlet_event.Event() self.compute.instance_events._events = { uuids.instance: { ('network-vif-plugged', None): event, } } inst_obj = objects.Instance(uuid=uuids.instance) event_obj = objects.InstanceExternalEvent(name='network-vif-plugged', tag=None) self.compute._process_instance_event(inst_obj, event_obj) self.assertTrue(event.ready()) self.assertEqual(event_obj, event.wait()) self.assertEqual({}, self.compute.instance_events._events) @ddt.data(task_states.DELETING, task_states.MIGRATING) @mock.patch('nova.compute.manager.LOG') def test_process_instance_event_expected_task(self, task_state, mock_log): """Tests that we don't log a warning when we get a network-vif-unplugged event for an instance that's undergoing a task state transition that will generate the expected event. """ inst_obj = objects.Instance(uuid=uuids.instance, vm_state=vm_states.ACTIVE, task_state=task_state) event_obj = objects.InstanceExternalEvent(name='network-vif-unplugged', tag=uuids.port_id) with mock.patch.object(self.compute.instance_events, 'pop_instance_event', return_value=None): self.compute._process_instance_event(inst_obj, event_obj) # assert we logged at debug level mock_log.debug.assert_called() self.assertThat(mock_log.debug.call_args[0][0], testtools.matchers.MatchesRegex( 'Received event .* for instance with task_state ' '%s')) @mock.patch('nova.compute.manager.LOG') def test_process_instance_event_unexpected_warning(self, mock_log): """Tests that we log a warning when we get an unexpected event.""" inst_obj = objects.Instance(uuid=uuids.instance, vm_state=vm_states.ACTIVE, task_state=None) event_obj = objects.InstanceExternalEvent(name='network-vif-unplugged', tag=uuids.port_id) with mock.patch.object(self.compute.instance_events, 'pop_instance_event', return_value=None): self.compute._process_instance_event(inst_obj, event_obj) # assert we logged at warning level mock_log.warning.assert_called() self.assertThat(mock_log.warning.call_args[0][0], testtools.matchers.MatchesRegex( 'Received unexpected event .* for ' 'instance with vm_state .* and ' 'task_state .*.')) def test_process_instance_vif_deleted_event(self): vif1 = fake_network_cache_model.new_vif() vif1['id'] = '1' vif2 = fake_network_cache_model.new_vif() vif2['id'] = '2' nw_info = network_model.NetworkInfo([vif1, vif2]) info_cache = objects.InstanceInfoCache(network_info=nw_info, instance_uuid=uuids.instance) inst_obj = objects.Instance(id=3, uuid=uuids.instance, info_cache=info_cache) @mock.patch.object(manager.neutron, 'update_instance_cache_with_nw_info') @mock.patch.object(self.compute.driver, 'detach_interface') def do_test(detach_interface, update_instance_cache_with_nw_info): self.compute._process_instance_vif_deleted_event(self.context, inst_obj, vif2['id']) update_instance_cache_with_nw_info.assert_called_once_with( self.compute.network_api, self.context, inst_obj, nw_info=[vif1]) 
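# Only the deleted VIF (vif2) is detached from the guest; the surviving vif1
# stays in the refreshed network info cache asserted above.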
detach_interface.assert_called_once_with(self.context, inst_obj, vif2) do_test() def test_process_instance_vif_deleted_event_not_implemented_error(self): """Tests the case where driver.detach_interface raises NotImplementedError. """ vif = fake_network_cache_model.new_vif() nw_info = network_model.NetworkInfo([vif]) info_cache = objects.InstanceInfoCache(network_info=nw_info, instance_uuid=uuids.instance) inst_obj = objects.Instance(id=3, uuid=uuids.instance, info_cache=info_cache) @mock.patch.object(manager.neutron, 'update_instance_cache_with_nw_info') @mock.patch.object(self.compute.driver, 'detach_interface', side_effect=NotImplementedError) def do_test(detach_interface, update_instance_cache_with_nw_info): self.compute._process_instance_vif_deleted_event( self.context, inst_obj, vif['id']) update_instance_cache_with_nw_info.assert_called_once_with( self.compute.network_api, self.context, inst_obj, nw_info=[]) detach_interface.assert_called_once_with( self.context, inst_obj, vif) do_test() @mock.patch('nova.compute.manager.LOG.info') # This is needed for py35. @mock.patch('nova.compute.manager.LOG.log') def test_process_instance_vif_deleted_event_instance_not_found( self, mock_log, mock_log_info): """Tests the case where driver.detach_interface raises InstanceNotFound. """ vif = fake_network_cache_model.new_vif() nw_info = network_model.NetworkInfo([vif]) info_cache = objects.InstanceInfoCache(network_info=nw_info, instance_uuid=uuids.instance) inst_obj = objects.Instance(id=3, uuid=uuids.instance, info_cache=info_cache) @mock.patch.object(manager.neutron, 'update_instance_cache_with_nw_info') @mock.patch.object(self.compute.driver, 'detach_interface', side_effect=exception.InstanceNotFound( instance_id=uuids.instance)) def do_test(detach_interface, update_instance_cache_with_nw_info): self.compute._process_instance_vif_deleted_event( self.context, inst_obj, vif['id']) update_instance_cache_with_nw_info.assert_called_once_with( self.compute.network_api, self.context, inst_obj, nw_info=[]) detach_interface.assert_called_once_with( self.context, inst_obj, vif) # LOG.log should have been called with a DEBUG level message. 
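# InstanceNotFound here means the instance was deleted while the event was in
# flight, which is expected and therefore only logged at DEBUG, not WARNING.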
self.assertEqual(1, mock_log.call_count, mock_log.mock_calls) self.assertEqual(logging.DEBUG, mock_log.call_args[0][0]) do_test() def test_power_update(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.id = 1 instance.vm_state = vm_states.STOPPED instance.task_state = None instance.power_state = power_state.SHUTDOWN instance.host = self.compute.host with test.nested( mock.patch.object(nova.compute.utils, 'notify_about_instance_action'), mock.patch.object(self.compute, '_notify_about_instance_usage'), mock.patch.object(self.compute.driver, 'power_update_event'), mock.patch.object(objects.Instance, 'save'), mock.patch.object(manager, 'LOG') ) as ( mock_instance_notify, mock_instance_usage, mock_event, mock_save, mock_log ): self.compute.power_update(self.context, instance, "POWER_ON") calls = [mock.call(self.context, instance, self.compute.host, action=fields.NotificationAction.POWER_ON, phase=fields.NotificationPhase.START), mock.call(self.context, instance, self.compute.host, action=fields.NotificationAction.POWER_ON, phase=fields.NotificationPhase.END)] mock_instance_notify.assert_has_calls(calls) calls = [mock.call(self.context, instance, "power_on.start"), mock.call(self.context, instance, "power_on.end")] mock_instance_usage.assert_has_calls(calls) mock_event.assert_called_once_with(instance, 'POWER_ON') mock_save.assert_called_once_with( expected_task_state=[None]) self.assertEqual(2, mock_log.debug.call_count) self.assertIn('Trying to', mock_log.debug.call_args[0][0]) def test_power_update_not_implemented(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.id = 1 instance.vm_state = vm_states.STOPPED instance.task_state = None instance.power_state = power_state.SHUTDOWN instance.host = self.compute.host with test.nested( mock.patch.object(nova.compute.utils, 'notify_about_instance_action'), mock.patch.object(self.compute, '_notify_about_instance_usage'), mock.patch.object(self.compute.driver, 'power_update_event', side_effect=NotImplementedError()), mock.patch.object(instance, 'save'), mock.patch.object(nova.compute.utils, 'add_instance_fault_from_exc'), ) as ( mock_instance_notify, mock_instance_usage, mock_event, mock_save, mock_fault ): self.assertRaises(NotImplementedError, self.compute.power_update, self.context, instance, "POWER_ON") self.assertIsNone(instance.task_state) self.assertEqual(2, mock_save.call_count) # second save is done by revert_task_state mock_save.assert_has_calls( [mock.call(expected_task_state=[None]), mock.call()]) mock_instance_notify.assert_called_once_with( self.context, instance, self.compute.host, action=fields.NotificationAction.POWER_ON, phase=fields.NotificationPhase.START) mock_instance_usage.assert_called_once_with( self.context, instance, "power_on.start") mock_fault.assert_called_once_with( self.context, instance, mock.ANY, mock.ANY) def test_external_instance_event_power_update_invalid_state(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance1 instance.id = 1 instance.vm_state = vm_states.ACTIVE instance.task_state = task_states.POWERING_OFF instance.power_state = power_state.RUNNING instance.host = 'host1' instance.migration_context = None with test.nested( mock.patch.object(nova.compute.utils, 'notify_about_instance_action'), mock.patch.object(self.compute, '_notify_about_instance_usage'), mock.patch.object(self.compute.driver, 'power_update_event'), mock.patch.object(objects.Instance, 'save'), mock.patch.object(manager, 'LOG') ) as ( 
mock_instance_notify, mock_instance_usage, mock_event, mock_save, mock_log ): self.compute.power_update(self.context, instance, "POWER_ON") mock_instance_notify.assert_not_called() mock_instance_usage.assert_not_called() mock_event.assert_not_called() mock_save.assert_not_called() self.assertEqual(1, mock_log.info.call_count) self.assertIn('is a no-op', mock_log.info.call_args[0][0]) def test_external_instance_event_power_update_unexpected_task_state(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance1 instance.id = 1 instance.vm_state = vm_states.ACTIVE instance.task_state = None instance.power_state = power_state.RUNNING instance.host = 'host1' instance.migration_context = None with test.nested( mock.patch.object(nova.compute.utils, 'notify_about_instance_action'), mock.patch.object(self.compute, '_notify_about_instance_usage'), mock.patch.object(self.compute.driver, 'power_update_event'), mock.patch.object(objects.Instance, 'save', side_effect=exception.UnexpectedTaskStateError("blah")), mock.patch.object(manager, 'LOG') ) as ( mock_instance_notify, mock_instance_usage, mock_event, mock_save, mock_log ): self.compute.power_update(self.context, instance, "POWER_OFF") mock_instance_notify.assert_not_called() mock_instance_usage.assert_not_called() mock_event.assert_not_called() self.assertEqual(1, mock_log.info.call_count) self.assertIn('possibly preempted', mock_log.info.call_args[0][0]) def test_extend_volume(self): inst_obj = objects.Instance(id=3, uuid=uuids.instance) connection_info = {'foo': 'bar'} new_size = 20 bdm = objects.BlockDeviceMapping( source_type='volume', destination_type='volume', volume_id=uuids.volume_id, volume_size=10, instance_uuid=uuids.instance, device_name='/dev/vda', connection_info=jsonutils.dumps(connection_info)) @mock.patch.object(self.compute, 'volume_api') @mock.patch.object(self.compute.driver, 'extend_volume') @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') @mock.patch.object(objects.BlockDeviceMapping, 'save') def do_test(bdm_save, bdm_get_by_vol_and_inst, extend_volume, volume_api): bdm_get_by_vol_and_inst.return_value = bdm volume_api.get.return_value = {'size': new_size} self.compute.extend_volume( self.context, inst_obj, uuids.volume_id) bdm_save.assert_called_once_with() extend_volume.assert_called_once_with( self.context, connection_info, inst_obj, new_size * pow(1024, 3)) do_test() def test_extend_volume_not_implemented_error(self): """Tests the case where driver.extend_volume raises NotImplementedError. 
""" inst_obj = objects.Instance(id=3, uuid=uuids.instance) connection_info = {'foo': 'bar'} bdm = objects.BlockDeviceMapping( source_type='volume', destination_type='volume', volume_id=uuids.volume_id, volume_size=10, instance_uuid=uuids.instance, device_name='/dev/vda', connection_info=jsonutils.dumps(connection_info)) @mock.patch.object(self.compute, 'volume_api') @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') @mock.patch.object(objects.BlockDeviceMapping, 'save') @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') def do_test(add_fault_mock, bdm_save, bdm_get_by_vol_and_inst, volume_api): bdm_get_by_vol_and_inst.return_value = bdm volume_api.get.return_value = {'size': 20} self.assertRaises( exception.ExtendVolumeNotSupported, self.compute.extend_volume, self.context, inst_obj, uuids.volume_id) add_fault_mock.assert_called_once_with( self.context, inst_obj, mock.ANY, mock.ANY) with mock.patch.dict(self.compute.driver.capabilities, supports_extend_volume=False): do_test() def test_extend_volume_volume_not_found(self): """Tests the case where driver.extend_volume tries to extend a volume not attached to the specified instance. """ inst_obj = objects.Instance(id=3, uuid=uuids.instance) @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance', side_effect=exception.NotFound()) def do_test(bdm_get_by_vol_and_inst): self.compute.extend_volume( self.context, inst_obj, uuids.volume_id) do_test() def test_external_instance_event(self): instances = [ objects.Instance(id=1, uuid=uuids.instance_1), objects.Instance(id=2, uuid=uuids.instance_2), objects.Instance(id=3, uuid=uuids.instance_3), objects.Instance(id=4, uuid=uuids.instance_4), objects.Instance(id=4, uuid=uuids.instance_5)] events = [ objects.InstanceExternalEvent(name='network-changed', tag='tag1', instance_uuid=uuids.instance_1), objects.InstanceExternalEvent(name='network-vif-plugged', instance_uuid=uuids.instance_2, tag='tag2'), objects.InstanceExternalEvent(name='network-vif-deleted', instance_uuid=uuids.instance_3, tag='tag3'), objects.InstanceExternalEvent(name='volume-extended', instance_uuid=uuids.instance_4, tag='tag4'), objects.InstanceExternalEvent(name='power-update', instance_uuid=uuids.instance_5, tag='POWER_ON')] @mock.patch.object(self.compute, 'power_update') @mock.patch.object(self.compute, 'extend_volume') @mock.patch.object(self.compute, '_process_instance_vif_deleted_event') @mock.patch.object(self.compute.network_api, 'get_instance_nw_info') @mock.patch.object(self.compute, '_process_instance_event') def do_test(_process_instance_event, get_instance_nw_info, _process_instance_vif_deleted_event, extend_volume, power_update): self.compute.external_instance_event(self.context, instances, events) get_instance_nw_info.assert_called_once_with(self.context, instances[0], refresh_vif_id='tag1') _process_instance_event.assert_called_once_with(instances[1], events[1]) _process_instance_vif_deleted_event.assert_called_once_with( self.context, instances[2], events[2].tag) extend_volume.assert_called_once_with( self.context, instances[3], events[3].tag) power_update.assert_called_once_with( self.context, instances[4], events[4].tag) do_test() def test_external_instance_event_with_exception(self): vif1 = fake_network_cache_model.new_vif() vif1['id'] = '1' vif2 = fake_network_cache_model.new_vif() vif2['id'] = '2' nw_info = network_model.NetworkInfo([vif1, vif2]) info_cache = objects.InstanceInfoCache(network_info=nw_info, instance_uuid=uuids.instance_2) instances = [ 
objects.Instance(id=1, uuid=uuids.instance_1), objects.Instance(id=2, uuid=uuids.instance_2, info_cache=info_cache), objects.Instance(id=3, uuid=uuids.instance_3), # instance_4 doesn't have info_cache set so it will be lazy-loaded # and blow up with an InstanceNotFound error. objects.Instance(id=4, uuid=uuids.instance_4), objects.Instance(id=5, uuid=uuids.instance_5), ] events = [ objects.InstanceExternalEvent(name='network-changed', tag='tag1', instance_uuid=uuids.instance_1), objects.InstanceExternalEvent(name='network-vif-deleted', instance_uuid=uuids.instance_2, tag='2'), objects.InstanceExternalEvent(name='network-vif-plugged', instance_uuid=uuids.instance_3, tag='tag3'), objects.InstanceExternalEvent(name='network-vif-deleted', instance_uuid=uuids.instance_4, tag='tag4'), objects.InstanceExternalEvent(name='volume-extended', instance_uuid=uuids.instance_5, tag='tag5'), ] # Make sure all the four events are handled despite the exceptions in # processing events 1, 2, 4 and 5. @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance', side_effect=exception.InstanceNotFound( instance_id=uuids.instance_5)) @mock.patch.object(instances[3], 'obj_load_attr', side_effect=exception.InstanceNotFound( instance_id=uuids.instance_4)) @mock.patch.object(manager.neutron, 'update_instance_cache_with_nw_info') @mock.patch.object(self.compute.driver, 'detach_interface', side_effect=exception.NovaException) @mock.patch.object(self.compute.network_api, 'get_instance_nw_info', side_effect=exception.InstanceInfoCacheNotFound( instance_uuid=uuids.instance_1)) @mock.patch.object(self.compute, '_process_instance_event') def do_test(_process_instance_event, get_instance_nw_info, detach_interface, update_instance_cache_with_nw_info, obj_load_attr, bdm_get_by_vol_and_inst): self.compute.external_instance_event(self.context, instances, events) get_instance_nw_info.assert_called_once_with(self.context, instances[0], refresh_vif_id='tag1') update_instance_cache_with_nw_info.assert_called_once_with( self.compute.network_api, self.context, instances[1], nw_info=[vif1]) detach_interface.assert_called_once_with(self.context, instances[1], vif2) _process_instance_event.assert_called_once_with(instances[2], events[2]) obj_load_attr.assert_called_once_with('info_cache') bdm_get_by_vol_and_inst.assert_called_once_with( self.context, 'tag5', instances[4].uuid) do_test() def test_cancel_all_events(self): inst = objects.Instance(uuid=uuids.instance) fake_eventlet_event = mock.MagicMock() self.compute.instance_events._events = { inst.uuid: { ('network-vif-plugged', uuids.portid): fake_eventlet_event, } } self.compute.instance_events.cancel_all_events() # call it again to make sure we handle that gracefully self.compute.instance_events.cancel_all_events() self.assertTrue(fake_eventlet_event.send.called) event = fake_eventlet_event.send.call_args_list[0][0][0] self.assertEqual('network-vif-plugged', event.name) self.assertEqual(uuids.portid, event.tag) self.assertEqual('failed', event.status) def test_cleanup_cancels_all_events(self): with mock.patch.object(self.compute, 'instance_events') as mock_ev: self.compute.cleanup_host() mock_ev.cancel_all_events.assert_called_once_with() def test_cleanup_blocks_new_events(self): instance = objects.Instance(uuid=uuids.instance) self.compute.instance_events.cancel_all_events() callback = mock.MagicMock() body = mock.MagicMock() with self.compute.virtapi.wait_for_instance_event( instance, [('network-vif-plugged', 'bar')], error_callback=callback): body() 
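# Because all events were cancelled above, wait_for_instance_event fires the
# error callback with the event name instead of blocking on the event.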
self.assertTrue(body.called) callback.assert_called_once_with('network-vif-plugged-bar', instance) def test_pop_events_fails_gracefully(self): inst = objects.Instance(uuid=uuids.instance) event = mock.MagicMock() self.compute.instance_events._events = None self.assertIsNone( self.compute.instance_events.pop_instance_event(inst, event)) def test_clear_events_fails_gracefully(self): inst = objects.Instance(uuid=uuids.instance) self.compute.instance_events._events = None self.assertEqual( self.compute.instance_events.clear_events_for_instance(inst), {}) def test_retry_reboot_pending_soft(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.task_state = task_states.REBOOT_PENDING instance.vm_state = vm_states.ACTIVE allow_reboot, reboot_type = self.compute._retry_reboot( context, instance, power_state.RUNNING) self.assertTrue(allow_reboot) self.assertEqual(reboot_type, 'SOFT') def test_retry_reboot_pending_hard(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.task_state = task_states.REBOOT_PENDING_HARD instance.vm_state = vm_states.ACTIVE allow_reboot, reboot_type = self.compute._retry_reboot( context, instance, power_state.RUNNING) self.assertTrue(allow_reboot) self.assertEqual(reboot_type, 'HARD') def test_retry_reboot_starting_soft_off(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.task_state = task_states.REBOOT_STARTED allow_reboot, reboot_type = self.compute._retry_reboot( context, instance, power_state.NOSTATE) self.assertTrue(allow_reboot) self.assertEqual(reboot_type, 'HARD') def test_retry_reboot_starting_hard_off(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.task_state = task_states.REBOOT_STARTED_HARD allow_reboot, reboot_type = self.compute._retry_reboot( context, instance, power_state.NOSTATE) self.assertTrue(allow_reboot) self.assertEqual(reboot_type, 'HARD') def test_retry_reboot_starting_hard_on(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.task_state = task_states.REBOOT_STARTED_HARD allow_reboot, reboot_type = self.compute._retry_reboot( context, instance, power_state.RUNNING) self.assertFalse(allow_reboot) self.assertEqual(reboot_type, 'HARD') def test_retry_reboot_no_reboot(self): instance = objects.Instance(self.context) instance.uuid = uuids.instance instance.task_state = 'bar' allow_reboot, reboot_type = self.compute._retry_reboot( context, instance, power_state.RUNNING) self.assertFalse(allow_reboot) self.assertEqual(reboot_type, 'HARD') @mock.patch('nova.objects.BlockDeviceMapping.get_by_volume_and_instance') def test_remove_volume_connection(self, bdm_get): inst = mock.Mock() inst.uuid = uuids.instance_uuid fake_bdm = fake_block_device.FakeDbBlockDeviceDict( {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': uuids.volume_id, 'device_name': '/dev/vdb', 'connection_info': '{"test": "test"}'}) bdm = objects.BlockDeviceMapping(context=self.context, **fake_bdm) bdm_get.return_value = bdm with test.nested( mock.patch.object(self.compute, 'volume_api'), mock.patch.object(self.compute, 'driver'), mock.patch.object(driver_bdm_volume, 'driver_detach'), ) as (mock_volume_api, mock_virt_driver, mock_driver_detach): connector = mock.Mock() def fake_driver_detach(context, instance, volume_api, virt_driver): # This is just here to validate the function signature. pass # There should be an easier way to do this with autospec... 
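# Using a real function as the side effect makes the mock raise a TypeError if
# the call below does not match driver_detach's
# (context, instance, volume_api, virt_driver) signature.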
mock_driver_detach.side_effect = fake_driver_detach mock_virt_driver.get_volume_connector.return_value = connector self.compute.remove_volume_connection(self.context, uuids.volume_id, inst) bdm_get.assert_called_once_with(self.context, uuids.volume_id, uuids.instance_uuid) mock_driver_detach.assert_called_once_with(self.context, inst, mock_volume_api, mock_virt_driver) mock_volume_api.terminate_connection.assert_called_once_with( self.context, uuids.volume_id, connector) def test_remove_volume_connection_cinder_v3_api(self): instance = fake_instance.fake_instance_obj(self.context, uuid=uuids.instance) volume_id = uuids.volume vol_bdm = fake_block_device.fake_bdm_object( self.context, {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': volume_id, 'device_name': '/dev/vdb', 'instance_uuid': instance.uuid, 'connection_info': '{"test": "test"}'}) vol_bdm.attachment_id = uuids.attachment @mock.patch.object(self.compute.volume_api, 'terminate_connection') @mock.patch.object(self.compute, 'driver') @mock.patch.object(driver_bdm_volume, 'driver_detach') @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance') def _test(mock_get_bdms, mock_detach, mock_driver, mock_terminate): mock_get_bdms.return_value = vol_bdm self.compute.remove_volume_connection(self.context, volume_id, instance) mock_detach.assert_called_once_with(self.context, instance, self.compute.volume_api, mock_driver) mock_terminate.assert_not_called() _test() @mock.patch('nova.volume.cinder.API.attachment_delete') @mock.patch('nova.virt.block_device.DriverVolumeBlockDevice.driver_detach') def test_remove_volume_connection_v3_delete_attachment_true( self, driver_detach, attachment_delete): """Tests _remove_volume_connection with a cinder v3 style volume attachment and delete_attachment=True. """ instance = objects.Instance(uuid=uuids.instance) volume_id = uuids.volume vol_bdm = fake_block_device.fake_bdm_object( self.context, {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': volume_id, 'device_name': '/dev/vdb', 'instance_uuid': instance.uuid, 'connection_info': '{"test": "test"}'}) vol_bdm.attachment_id = uuids.attachment_id self.compute._remove_volume_connection( self.context, vol_bdm, instance, delete_attachment=True) driver_detach.assert_called_once_with( self.context, instance, self.compute.volume_api, self.compute.driver) attachment_delete.assert_called_once_with( self.context, vol_bdm.attachment_id) def test_delete_disk_metadata(self): bdm = objects.BlockDeviceMapping(volume_id=uuids.volume_id, tag='foo') instance = fake_instance.fake_instance_obj(self.context) instance.device_metadata = objects.InstanceDeviceMetadata( devices=[objects.DiskMetadata(serial=uuids.volume_id, tag='foo')]) instance.save = mock.Mock() self.compute._delete_disk_metadata(instance, bdm) self.assertEqual(0, len(instance.device_metadata.devices)) instance.save.assert_called_once_with() def test_delete_disk_metadata_no_serial(self): bdm = objects.BlockDeviceMapping(tag='foo') instance = fake_instance.fake_instance_obj(self.context) instance.device_metadata = objects.InstanceDeviceMetadata( devices=[objects.DiskMetadata(tag='foo')]) self.compute._delete_disk_metadata(instance, bdm) # NOTE(artom) This looks weird because we haven't deleted anything, but # it's normal behaviour for when DiskMetadata doesn't have serial set # and we can't find it based on BlockDeviceMapping's volume_id. 
self.assertEqual(1, len(instance.device_metadata.devices)) def test_detach_volume(self): # TODO(lyarwood): Test DriverVolumeBlockDevice.detach in # ../virt/test_block_device.py self._test_detach_volume() def test_detach_volume_not_destroy_bdm(self): # TODO(lyarwood): Test DriverVolumeBlockDevice.detach in # ../virt/test_block_device.py self._test_detach_volume(destroy_bdm=False) @mock.patch('nova.compute.manager.ComputeManager.' '_notify_about_instance_usage') @mock.patch.object(driver_bdm_volume, 'detach') @mock.patch('nova.compute.utils.notify_about_volume_attach_detach') @mock.patch('nova.compute.manager.ComputeManager._delete_disk_metadata') def test_detach_untagged_volume_metadata_not_deleted( self, mock_delete_metadata, _, __, ___): inst_obj = mock.Mock() fake_bdm = fake_block_device.FakeDbBlockDeviceDict( {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': uuids.volume, 'device_name': '/dev/vdb', 'connection_info': '{"test": "test"}'}) bdm = objects.BlockDeviceMapping(context=self.context, **fake_bdm) self.compute._detach_volume(self.context, bdm, inst_obj, destroy_bdm=False, attachment_id=uuids.attachment) self.assertFalse(mock_delete_metadata.called) @mock.patch('nova.compute.manager.ComputeManager.' '_notify_about_instance_usage') @mock.patch.object(driver_bdm_volume, 'detach') @mock.patch('nova.compute.utils.notify_about_volume_attach_detach') @mock.patch('nova.compute.manager.ComputeManager._delete_disk_metadata') def test_detach_tagged_volume(self, mock_delete_metadata, _, __, ___): inst_obj = mock.Mock() fake_bdm = fake_block_device.FakeDbBlockDeviceDict( {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': uuids.volume, 'device_name': '/dev/vdb', 'connection_info': '{"test": "test"}', 'tag': 'foo'}) bdm = objects.BlockDeviceMapping(context=self.context, **fake_bdm) self.compute._detach_volume(self.context, bdm, inst_obj, destroy_bdm=False, attachment_id=uuids.attachment) mock_delete_metadata.assert_called_once_with(inst_obj, bdm) @mock.patch.object(driver_bdm_volume, 'detach') @mock.patch('nova.compute.manager.ComputeManager.' 
'_notify_about_instance_usage') @mock.patch('nova.compute.utils.notify_about_volume_attach_detach') def _test_detach_volume(self, mock_notify_attach_detach, notify_inst_usage, detach, destroy_bdm=True): # TODO(lyarwood): Test DriverVolumeBlockDevice.detach in # ../virt/test_block_device.py volume_id = uuids.volume inst_obj = mock.Mock() inst_obj.uuid = uuids.instance inst_obj.host = CONF.host attachment_id = uuids.attachment fake_bdm = fake_block_device.FakeDbBlockDeviceDict( {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': volume_id, 'device_name': '/dev/vdb', 'connection_info': '{"test": "test"}'}) bdm = objects.BlockDeviceMapping(context=self.context, **fake_bdm) with test.nested( mock.patch.object(self.compute, 'volume_api'), mock.patch.object(self.compute, 'driver'), mock.patch.object(bdm, 'destroy'), ) as (volume_api, driver, bdm_destroy): self.compute._detach_volume(self.context, bdm, inst_obj, destroy_bdm=destroy_bdm, attachment_id=attachment_id) detach.assert_called_once_with(self.context, inst_obj, self.compute.volume_api, self.compute.driver, attachment_id=attachment_id, destroy_bdm=destroy_bdm) notify_inst_usage.assert_called_once_with( self.context, inst_obj, "volume.detach", extra_usage_info={'volume_id': volume_id}) if destroy_bdm: bdm_destroy.assert_called_once_with() else: self.assertFalse(bdm_destroy.called) mock_notify_attach_detach.assert_has_calls([ mock.call(self.context, inst_obj, 'fake-mini', action='volume_detach', phase='start', volume_id=volume_id), mock.call(self.context, inst_obj, 'fake-mini', action='volume_detach', phase='end', volume_id=volume_id), ]) def test_detach_volume_evacuate(self): """For evacuate, terminate_connection is called with original host.""" # TODO(lyarwood): Test DriverVolumeBlockDevice.driver_detach in # ../virt/test_block_device.py expected_connector = {'host': 'evacuated-host'} conn_info_str = '{"connector": {"host": "evacuated-host"}}' self._test_detach_volume_evacuate(conn_info_str, expected=expected_connector) def test_detach_volume_evacuate_legacy(self): """Test coverage for evacuate with legacy attachments. In this case, legacy means the volume was attached to the instance before nova stashed the connector in connection_info. The connector sent to terminate_connection will still be for the local host in this case because nova does not have the info to get the connector for the original (evacuated) host. """ # TODO(lyarwood): Test DriverVolumeBlockDevice.driver_detach in # ../virt/test_block_device.py conn_info_str = '{"foo": "bar"}' # Has no 'connector'. self._test_detach_volume_evacuate(conn_info_str) def test_detach_volume_evacuate_mismatch(self): """Test coverage for evacuate with connector mismatch. For evacuate, if the stashed connector also has the wrong host, then log it and stay with the local connector. """ # TODO(lyarwood): Test DriverVolumeBlockDevice.driver_detach in # ../virt/test_block_device.py conn_info_str = '{"connector": {"host": "other-host"}}' self._test_detach_volume_evacuate(conn_info_str) @mock.patch('nova.compute.manager.ComputeManager.' '_notify_about_instance_usage') @mock.patch('nova.compute.utils.notify_about_volume_attach_detach') def _test_detach_volume_evacuate(self, conn_info_str, mock_notify_attach_detach, notify_inst_usage, expected=None): """Re-usable code for detach volume evacuate test cases. :param conn_info_str: String form of the stashed connector. :param expected: Dict of the connector that is expected in the terminate call (optional). 
Default is to expect the local connector to be used. """ # TODO(lyarwood): Test DriverVolumeBlockDevice.driver_detach in # ../virt/test_block_device.py volume_id = 'vol_id' instance = fake_instance.fake_instance_obj(self.context, host='evacuated-host') fake_bdm = fake_block_device.FakeDbBlockDeviceDict( {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': volume_id, 'device_name': '/dev/vdb', 'connection_info': '{"test": "test"}'}) bdm = objects.BlockDeviceMapping(context=self.context, **fake_bdm) bdm.connection_info = conn_info_str local_connector = {'host': 'local-connector-host'} expected_connector = local_connector if not expected else expected with test.nested( mock.patch.object(self.compute, 'volume_api'), mock.patch.object(self.compute, 'driver'), mock.patch.object(driver_bdm_volume, 'driver_detach'), ) as (volume_api, driver, driver_detach): driver.get_volume_connector.return_value = local_connector self.compute._detach_volume(self.context, bdm, instance, destroy_bdm=False) driver_detach.assert_not_called() driver.get_volume_connector.assert_called_once_with(instance) volume_api.terminate_connection.assert_called_once_with( self.context, volume_id, expected_connector) volume_api.detach.assert_called_once_with(mock.ANY, volume_id, instance.uuid, None) notify_inst_usage.assert_called_once_with( self.context, instance, "volume.detach", extra_usage_info={'volume_id': volume_id} ) mock_notify_attach_detach.assert_has_calls([ mock.call(self.context, instance, 'fake-mini', action='volume_detach', phase='start', volume_id=volume_id), mock.call(self.context, instance, 'fake-mini', action='volume_detach', phase='end', volume_id=volume_id), ]) @mock.patch('nova.compute.utils.notify_about_instance_rescue_action') def _test_rescue(self, mock_notify, clean_shutdown=True): instance = fake_instance.fake_instance_obj( self.context, vm_state=vm_states.ACTIVE) fake_nw_info = network_model.NetworkInfo() rescue_image_meta = objects.ImageMeta.from_dict( {'id': uuids.image_id, 'name': uuids.image_name}) with test.nested( mock.patch.object(self.context, 'elevated', return_value=self.context), mock.patch.object(self.compute.network_api, 'get_instance_nw_info', return_value=fake_nw_info), mock.patch.object(self.compute, '_get_rescue_image', return_value=rescue_image_meta), mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid', return_value=mock.sentinel.bdms), mock.patch.object(self.compute, '_get_instance_block_device_info', return_value=mock.sentinel.block_device_info), mock.patch.object(self.compute, '_notify_about_instance_usage'), mock.patch.object(self.compute, '_power_off_instance'), mock.patch.object(self.compute.driver, 'rescue'), mock.patch.object(compute_utils, 'notify_usage_exists'), mock.patch.object(self.compute, '_get_power_state', return_value=power_state.RUNNING), mock.patch.object(instance, 'save') ) as ( elevated_context, get_nw_info, get_rescue_image, get_bdm_list, get_block_info, notify_instance_usage, power_off_instance, driver_rescue, notify_usage_exists, get_power_state, instance_save ): self.compute.rescue_instance( self.context, instance, rescue_password='verybadpass', rescue_image_ref=None, clean_shutdown=clean_shutdown) # assert the field values on the instance object self.assertEqual(vm_states.RESCUED, instance.vm_state) self.assertIsNone(instance.task_state) self.assertEqual(power_state.RUNNING, instance.power_state) self.assertIsNotNone(instance.launched_at) # assert our mock calls get_nw_info.assert_called_once_with(self.context, instance) 
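# rescue_image_ref=None was passed through, so _get_rescue_image is called
# with None and is left to choose a fallback image itself.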
get_rescue_image.assert_called_once_with( self.context, instance, None) get_bdm_list.assert_called_once_with(self.context, instance.uuid) get_block_info.assert_called_once_with(self.context, instance, bdms=mock.sentinel.bdms) extra_usage_info = {'rescue_image_name': uuids.image_name} notify_calls = [ mock.call(self.context, instance, "rescue.start", extra_usage_info=extra_usage_info, network_info=fake_nw_info), mock.call(self.context, instance, "rescue.end", extra_usage_info=extra_usage_info, network_info=fake_nw_info) ] notify_instance_usage.assert_has_calls(notify_calls) power_off_instance.assert_called_once_with(self.context, instance, clean_shutdown) driver_rescue.assert_called_once_with( self.context, instance, fake_nw_info, rescue_image_meta, 'verybadpass', mock.sentinel.block_device_info) notify_usage_exists.assert_called_once_with(self.compute.notifier, self.context, instance, 'fake-mini', current_period=True) mock_notify.assert_has_calls([ mock.call(self.context, instance, 'fake-mini', None, phase='start'), mock.call(self.context, instance, 'fake-mini', None, phase='end')]) instance_save.assert_called_once_with( expected_task_state=task_states.RESCUING) def test_rescue(self): self._test_rescue() def test_rescue_forced_shutdown(self): self._test_rescue(clean_shutdown=False) @mock.patch('nova.compute.utils.notify_about_instance_action') def test_unrescue(self, mock_notify): instance = fake_instance.fake_instance_obj( self.context, vm_state=vm_states.RESCUED) fake_nw_info = network_model.NetworkInfo() with test.nested( mock.patch.object(self.context, 'elevated', return_value=self.context), mock.patch.object(self.compute.network_api, 'get_instance_nw_info', return_value=fake_nw_info), mock.patch.object(self.compute, '_notify_about_instance_usage'), mock.patch.object(self.compute.driver, 'unrescue'), mock.patch.object(self.compute, '_get_power_state', return_value=power_state.RUNNING), mock.patch.object(instance, 'save') ) as ( elevated_context, get_nw_info, notify_instance_usage, driver_unrescue, get_power_state, instance_save ): self.compute.unrescue_instance(self.context, instance) # assert the field values on the instance object self.assertEqual(vm_states.ACTIVE, instance.vm_state) self.assertIsNone(instance.task_state) self.assertEqual(power_state.RUNNING, instance.power_state) # assert our mock calls get_nw_info.assert_called_once_with(self.context, instance) notify_calls = [ mock.call(self.context, instance, "unrescue.start", network_info=fake_nw_info), mock.call(self.context, instance, "unrescue.end", network_info=fake_nw_info) ] notify_instance_usage.assert_has_calls(notify_calls) driver_unrescue.assert_called_once_with(instance, fake_nw_info) mock_notify.assert_has_calls([ mock.call(self.context, instance, 'fake-mini', action='unrescue', phase='start'), mock.call(self.context, instance, 'fake-mini', action='unrescue', phase='end')]) instance_save.assert_called_once_with( expected_task_state=task_states.UNRESCUING) @mock.patch('nova.compute.manager.ComputeManager._get_power_state', return_value=power_state.RUNNING) @mock.patch.object(objects.Instance, 'save') def test_set_admin_password(self, instance_save_mock, power_state_mock): # Ensure instance can have its admin password set. 
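# The instance starts ACTIVE with the UPDATING_PASSWORD task state; on success
# the manager clears the task state again (asserted below).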
instance = fake_instance.fake_instance_obj( self.context, vm_state=vm_states.ACTIVE, task_state=task_states.UPDATING_PASSWORD) @mock.patch.object(self.context, 'elevated', return_value=self.context) @mock.patch.object(self.compute.driver, 'set_admin_password') def do_test(driver_mock, elevated_mock): # call the manager method self.compute.set_admin_password(self.context, instance, 'fake-pass') # make our assertions self.assertEqual(vm_states.ACTIVE, instance.vm_state) self.assertIsNone(instance.task_state) power_state_mock.assert_called_once_with(self.context, instance) driver_mock.assert_called_once_with(instance, 'fake-pass') instance_save_mock.assert_called_once_with( expected_task_state=task_states.UPDATING_PASSWORD) do_test() @mock.patch('nova.compute.manager.ComputeManager._get_power_state', return_value=power_state.NOSTATE) @mock.patch('nova.compute.manager.ComputeManager._instance_update') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') def test_set_admin_password_bad_state(self, add_fault_mock, instance_save_mock, update_mock, power_state_mock): # Test setting password while instance is rebuilding. instance = fake_instance.fake_instance_obj(self.context) with mock.patch.object(self.context, 'elevated', return_value=self.context): # call the manager method self.assertRaises(exception.InstancePasswordSetFailed, self.compute.set_admin_password, self.context, instance, None) # make our assertions power_state_mock.assert_called_once_with(self.context, instance) instance_save_mock.assert_called_once_with( expected_task_state=task_states.UPDATING_PASSWORD) add_fault_mock.assert_called_once_with( self.context, instance, mock.ANY, mock.ANY) @mock.patch('nova.utils.generate_password', return_value='fake-pass') @mock.patch('nova.compute.manager.ComputeManager._get_power_state', return_value=power_state.RUNNING) @mock.patch('nova.compute.manager.ComputeManager._instance_update') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') def _do_test_set_admin_password_driver_error(self, exc, expected_vm_state, expected_task_state, expected_exception, add_fault_mock, instance_save_mock, update_mock, power_state_mock, gen_password_mock): # Ensure expected exception is raised if set_admin_password fails. instance = fake_instance.fake_instance_obj( self.context, vm_state=vm_states.ACTIVE, task_state=task_states.UPDATING_PASSWORD) @mock.patch.object(self.context, 'elevated', return_value=self.context) @mock.patch.object(self.compute.driver, 'set_admin_password', side_effect=exc) def do_test(driver_mock, elevated_mock): # error raised from the driver should not reveal internal # information so a new error is raised self.assertRaises(expected_exception, self.compute.set_admin_password, self.context, instance=instance, new_pass=None) if expected_exception != exception.InstancePasswordSetFailed: instance_save_mock.assert_called_once_with( expected_task_state=task_states.UPDATING_PASSWORD) self.assertEqual(expected_vm_state, instance.vm_state) # check revert_task_state decorator update_mock.assert_called_once_with( self.context, instance, task_state=expected_task_state) # check wrap_instance_fault decorator add_fault_mock.assert_called_once_with( self.context, instance, mock.ANY, mock.ANY) do_test() def test_set_admin_password_driver_not_authorized(self): # Ensure expected exception is raised if set_admin_password not # authorized. 
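# The driver's Forbidden error is masked as InstancePasswordSetFailed so that
# internal details are not leaked back to the API caller.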
exc = exception.Forbidden('Internal error') expected_exception = exception.InstancePasswordSetFailed self._do_test_set_admin_password_driver_error( exc, vm_states.ACTIVE, None, expected_exception) def test_set_admin_password_driver_not_implemented(self): # Ensure expected exception is raised if set_admin_password not # implemented by driver. exc = NotImplementedError() expected_exception = NotImplementedError self._do_test_set_admin_password_driver_error( exc, vm_states.ACTIVE, None, expected_exception) def test_set_admin_password_driver_not_supported(self): exc = exception.SetAdminPasswdNotSupported() expected_exception = exception.SetAdminPasswdNotSupported self._do_test_set_admin_password_driver_error( exc, vm_states.ACTIVE, None, expected_exception) def test_set_admin_password_guest_agent_no_enabled(self): exc = exception.QemuGuestAgentNotEnabled() expected_exception = exception.InstanceAgentNotEnabled self._do_test_set_admin_password_driver_error( exc, vm_states.ACTIVE, None, expected_exception) def test_destroy_evacuated_instances_no_migrations(self): with mock.patch( 'nova.objects.MigrationList.get_by_filters') as migration_list: migration_list.return_value = [] result = self.compute._destroy_evacuated_instances( self.context, {}) self.assertEqual({}, result) def test_destroy_evacuated_instances(self): our_host = self.compute.host flavor = objects.Flavor() instance_1 = objects.Instance(self.context, flavor=flavor) instance_1.uuid = uuids.instance_1 instance_1.task_state = None instance_1.vm_state = vm_states.ACTIVE instance_1.host = 'not-' + our_host instance_1.user_id = uuids.user_id instance_1.project_id = uuids.project_id instance_2 = objects.Instance(self.context, flavor=flavor) instance_2.uuid = uuids.instance_2 instance_2.task_state = None instance_2.vm_state = vm_states.ACTIVE instance_2.host = 'not-' + our_host instance_2.user_id = uuids.user_id instance_2.project_id = uuids.project_id instance_2.deleted = False # Only instance 2 has a migration record migration = objects.Migration(instance_uuid=instance_2.uuid) # Consider the migration successful migration.status = 'done' migration.source_node = 'fake-node' node_cache = { uuids.our_node_uuid: objects.ComputeNode( uuid=uuids.our_node_uuid, hypervisor_hostname='fake-node') } with test.nested( mock.patch.object(self.compute, '_get_instances_on_driver', return_value=[instance_1, instance_2]), mock.patch.object(self.compute.network_api, 'get_instance_nw_info', return_value=None), mock.patch.object(self.compute, '_get_instance_block_device_info', return_value={}), mock.patch.object(self.compute, '_is_instance_storage_shared', return_value=False), mock.patch.object(self.compute.driver, 'destroy'), mock.patch('nova.objects.MigrationList.get_by_filters'), mock.patch('nova.objects.Migration.save'), mock.patch('nova.scheduler.utils.resources_from_flavor'), mock.patch.object(self.compute.reportclient, 'remove_provider_tree_from_instance_allocation') ) as (_get_instances_on_driver, get_instance_nw_info, _get_instance_block_device_info, _is_instance_storage_shared, destroy, migration_list, migration_save, get_resources, remove_allocation): migration_list.return_value = [migration] get_resources.return_value = mock.sentinel.resources self.compute._destroy_evacuated_instances(self.context, node_cache) # Only instance 2 should be deleted. 
Instance 1 is still running # here, but no migration from our host exists, so ignore it destroy.assert_called_once_with(self.context, instance_2, None, {}, True) remove_allocation.assert_called_once_with( self.context, instance_2.uuid, uuids.our_node_uuid) def test_destroy_evacuated_instances_node_deleted(self): our_host = self.compute.host flavor = objects.Flavor() instance_1 = objects.Instance(self.context, flavor=flavor) instance_1.uuid = uuids.instance_1 instance_1.task_state = None instance_1.vm_state = vm_states.ACTIVE instance_1.host = 'not-' + our_host instance_1.user_id = uuids.user_id instance_1.project_id = uuids.project_id instance_1.deleted = False instance_2 = objects.Instance(self.context, flavor=flavor) instance_2.uuid = uuids.instance_2 instance_2.task_state = None instance_2.vm_state = vm_states.ACTIVE instance_2.host = 'not-' + our_host instance_2.user_id = uuids.user_id instance_2.project_id = uuids.project_id instance_2.deleted = False migration_1 = objects.Migration(instance_uuid=instance_1.uuid) # Consider the migration successful but the node was deleted while the # compute was down migration_1.status = 'done' migration_1.source_node = 'deleted-node' migration_2 = objects.Migration(instance_uuid=instance_2.uuid) # Consider the migration successful migration_2.status = 'done' migration_2.source_node = 'fake-node' node_cache = { uuids.our_node_uuid: objects.ComputeNode( uuid=uuids.our_node_uuid, hypervisor_hostname='fake-node') } with test.nested( mock.patch.object(self.compute, '_get_instances_on_driver', return_value=[instance_1, instance_2]), mock.patch.object(self.compute.network_api, 'get_instance_nw_info', return_value=None), mock.patch.object(self.compute, '_get_instance_block_device_info', return_value={}), mock.patch.object(self.compute, '_is_instance_storage_shared', return_value=False), mock.patch.object(self.compute.driver, 'destroy'), mock.patch('nova.objects.MigrationList.get_by_filters'), mock.patch('nova.objects.Migration.save'), mock.patch('nova.scheduler.utils.resources_from_flavor'), mock.patch.object(self.compute.reportclient, 'remove_provider_tree_from_instance_allocation') ) as (_get_instances_on_driver, get_instance_nw_info, _get_instance_block_device_info, _is_instance_storage_shared, destroy, migration_list, migration_save, get_resources, remove_allocation): migration_list.return_value = [migration_1, migration_2] get_resources.return_value = mock.sentinel.resources self.compute._destroy_evacuated_instances(self.context, node_cache) # both instance_1 and instance_2 is destroyed in the driver destroy.assert_has_calls( [mock.call(self.context, instance_1, None, {}, True), mock.call(self.context, instance_2, None, {}, True)], any_order=True) # but only instance_2 is deallocated as the compute node for # instance_1 is already deleted remove_allocation.assert_called_once_with( self.context, instance_2.uuid, uuids.our_node_uuid) def test_destroy_evacuated_instances_not_on_hypervisor(self): our_host = self.compute.host flavor = objects.Flavor() instance_1 = objects.Instance(self.context, flavor=flavor) instance_1.uuid = uuids.instance_1 instance_1.task_state = None instance_1.vm_state = vm_states.ACTIVE instance_1.host = 'not-' + our_host instance_1.user_id = uuids.user_id instance_1.project_id = uuids.project_id instance_1.deleted = False migration_1 = objects.Migration(instance_uuid=instance_1.uuid) migration_1.status = 'done' migration_1.source_node = 'our-node' node_cache = { uuids.our_node_uuid: objects.ComputeNode( uuid=uuids.our_node_uuid, 
hypervisor_hostname='our-node') } with test.nested( mock.patch.object(self.compute, '_get_instances_on_driver', return_value=[]), mock.patch.object(self.compute.driver, 'destroy'), mock.patch('nova.objects.MigrationList.get_by_filters'), mock.patch('nova.objects.Migration.save'), mock.patch('nova.scheduler.utils.resources_from_flavor'), mock.patch.object(self.compute.reportclient, 'remove_provider_tree_from_instance_allocation'), mock.patch('nova.objects.Instance.get_by_uuid') ) as (_get_intances_on_driver, destroy, migration_list, migration_save, get_resources, remove_allocation, instance_get_by_uuid): migration_list.return_value = [migration_1] instance_get_by_uuid.return_value = instance_1 get_resources.return_value = mock.sentinel.resources self.compute._destroy_evacuated_instances(self.context, node_cache) # nothing to be destroyed as the driver returned no instances on # the hypervisor self.assertFalse(destroy.called) # but our only instance still cleaned up in placement remove_allocation.assert_called_once_with( self.context, instance_1.uuid, uuids.our_node_uuid) instance_get_by_uuid.assert_called_once_with( self.context, instance_1.uuid) def test_destroy_evacuated_instances_not_on_hyp_and_instance_deleted(self): migration_1 = objects.Migration(instance_uuid=uuids.instance_1) migration_1.status = 'done' migration_1.source_node = 'our-node' with test.nested( mock.patch.object(self.compute, '_get_instances_on_driver', return_value=[]), mock.patch.object(self.compute.driver, 'destroy'), mock.patch('nova.objects.MigrationList.get_by_filters'), mock.patch('nova.objects.Migration.save'), mock.patch('nova.scheduler.utils.resources_from_flavor'), mock.patch.object(self.compute.reportclient, 'remove_provider_tree_from_instance_allocation'), mock.patch('nova.objects.Instance.get_by_uuid') ) as (_get_instances_on_driver, destroy, migration_list, migration_save, get_resources, remove_allocation, instance_get_by_uuid): migration_list.return_value = [migration_1] instance_get_by_uuid.side_effect = exception.InstanceNotFound( instance_id=uuids.instance_1) self.compute._destroy_evacuated_instances(self.context, {}) # nothing to be destroyed as the driver returned no instances on # the hypervisor self.assertFalse(destroy.called) instance_get_by_uuid.assert_called_once_with( self.context, uuids.instance_1) # nothing to be cleaned as the instance was deleted already self.assertFalse(remove_allocation.called) @mock.patch('nova.compute.manager.ComputeManager.' '_destroy_evacuated_instances') @mock.patch('nova.compute.manager.LOG') def test_init_host_foreign_instance(self, mock_log, mock_destroy): inst = mock.MagicMock() inst.host = self.compute.host + '-alt' self.compute._init_instance(mock.sentinel.context, inst) self.assertFalse(inst.save.called) self.assertTrue(mock_log.warning.called) msg = mock_log.warning.call_args_list[0] self.assertIn('appears to not be owned by this host', msg[0][0]) def test_init_host_pci_passthrough_whitelist_validation_failure(self): # Tests that we fail init_host if there is a pci.passthrough_whitelist # configured incorrectly. 
self.flags(passthrough_whitelist=[ # it's invalid to specify both in the same devspec jsonutils.dumps({'address': 'foo', 'devname': 'bar'})], group='pci') self.assertRaises(exception.PciDeviceInvalidDeviceName, self.compute.init_host) @mock.patch('nova.compute.manager.ComputeManager._instance_update') def test_error_out_instance_on_exception_not_implemented_err(self, inst_update_mock): instance = fake_instance.fake_instance_obj(self.context) def do_test(): with self.compute._error_out_instance_on_exception( self.context, instance, instance_state=vm_states.STOPPED): raise NotImplementedError('test') self.assertRaises(NotImplementedError, do_test) inst_update_mock.assert_called_once_with( self.context, instance, vm_state=vm_states.STOPPED, task_state=None) @mock.patch('nova.compute.manager.ComputeManager._instance_update') def test_error_out_instance_on_exception_inst_fault_rollback(self, inst_update_mock): instance = fake_instance.fake_instance_obj(self.context) def do_test(): with self.compute._error_out_instance_on_exception( self.context, instance, instance_state=vm_states.STOPPED): raise exception.InstanceFaultRollback( inner_exception=test.TestingException('test')) self.assertRaises(test.TestingException, do_test) # The vm_state should be set to the instance_state parameter. inst_update_mock.assert_called_once_with( self.context, instance, vm_state=vm_states.STOPPED, task_state=None) @mock.patch('nova.compute.manager.ComputeManager.' '_set_instance_obj_error_state') def test_error_out_instance_on_exception_unknown_with_quotas(self, set_error): instance = fake_instance.fake_instance_obj(self.context) def do_test(): with self.compute._error_out_instance_on_exception( self.context, instance): raise test.TestingException('test') self.assertRaises(test.TestingException, do_test) set_error.assert_called_once_with(self.context, instance) @mock.patch('nova.compute.manager.ComputeManager.' '_detach_volume') def test_cleanup_volumes(self, mock_detach): instance = fake_instance.fake_instance_obj(self.context) bdm_do_not_delete_dict = fake_block_device.FakeDbBlockDeviceDict( {'volume_id': 'fake-id1', 'source_type': 'image', 'delete_on_termination': False}) bdm_delete_dict = fake_block_device.FakeDbBlockDeviceDict( {'volume_id': 'fake-id2', 'source_type': 'image', 'delete_on_termination': True}) bdms = block_device_obj.block_device_make_list(self.context, [bdm_do_not_delete_dict, bdm_delete_dict]) with mock.patch.object(self.compute.volume_api, 'delete') as volume_delete: self.compute._cleanup_volumes(self.context, instance, bdms) calls = [mock.call(self.context, bdm, instance, destroy_bdm=bdm.delete_on_termination) for bdm in bdms] self.assertEqual(calls, mock_detach.call_args_list) volume_delete.assert_called_once_with(self.context, bdms[1].volume_id) @mock.patch('nova.compute.manager.ComputeManager.' 
'_detach_volume') def test_cleanup_volumes_exception_do_not_raise(self, mock_detach): instance = fake_instance.fake_instance_obj(self.context) bdm_dict1 = fake_block_device.FakeDbBlockDeviceDict( {'volume_id': 'fake-id1', 'source_type': 'image', 'delete_on_termination': True}) bdm_dict2 = fake_block_device.FakeDbBlockDeviceDict( {'volume_id': 'fake-id2', 'source_type': 'image', 'delete_on_termination': True}) bdms = block_device_obj.block_device_make_list(self.context, [bdm_dict1, bdm_dict2]) with mock.patch.object(self.compute.volume_api, 'delete', side_effect=[test.TestingException(), None]) as volume_delete: self.compute._cleanup_volumes(self.context, instance, bdms, raise_exc=False) calls = [mock.call(self.context, bdm.volume_id) for bdm in bdms] self.assertEqual(calls, volume_delete.call_args_list) calls = [mock.call(self.context, bdm, instance, destroy_bdm=True) for bdm in bdms] self.assertEqual(calls, mock_detach.call_args_list) @mock.patch('nova.compute.manager.ComputeManager.' '_detach_volume') def test_cleanup_volumes_exception_raise(self, mock_detach): instance = fake_instance.fake_instance_obj(self.context) bdm_dict1 = fake_block_device.FakeDbBlockDeviceDict( {'volume_id': 'fake-id1', 'source_type': 'image', 'delete_on_termination': True}) bdm_dict2 = fake_block_device.FakeDbBlockDeviceDict( {'volume_id': 'fake-id2', 'source_type': 'image', 'delete_on_termination': True}) bdms = block_device_obj.block_device_make_list(self.context, [bdm_dict1, bdm_dict2]) with mock.patch.object(self.compute.volume_api, 'delete', side_effect=[test.TestingException(), None]) as volume_delete: self.assertRaises(test.TestingException, self.compute._cleanup_volumes, self.context, instance, bdms) calls = [mock.call(self.context, bdm.volume_id) for bdm in bdms] self.assertEqual(calls, volume_delete.call_args_list) calls = [mock.call(self.context, bdm, instance, destroy_bdm=bdm.delete_on_termination) for bdm in bdms] self.assertEqual(calls, mock_detach.call_args_list) @mock.patch('nova.compute.manager.ComputeManager._detach_volume', side_effect=exception.CinderConnectionFailed(reason='idk')) def test_cleanup_volumes_detach_fails_raise_exc(self, mock_detach): instance = fake_instance.fake_instance_obj(self.context) bdms = block_device_obj.block_device_make_list( self.context, [fake_block_device.FakeDbBlockDeviceDict( {'volume_id': uuids.volume_id, 'source_type': 'volume', 'destination_type': 'volume', 'delete_on_termination': False})]) self.assertRaises(exception.CinderConnectionFailed, self.compute._cleanup_volumes, self.context, instance, bdms) mock_detach.assert_called_once_with( self.context, bdms[0], instance, destroy_bdm=False) def test_stop_instance_task_state_none_power_state_shutdown(self): # Tests that stop_instance doesn't puke when the instance power_state # is shutdown and the task_state is None. 
instance = fake_instance.fake_instance_obj( self.context, vm_state=vm_states.ACTIVE, task_state=None, power_state=power_state.SHUTDOWN) @mock.patch.object(self.compute, '_get_power_state', return_value=power_state.SHUTDOWN) @mock.patch.object(compute_utils, 'notify_about_instance_action') @mock.patch.object(self.compute, '_notify_about_instance_usage') @mock.patch.object(self.compute, '_power_off_instance') @mock.patch.object(instance, 'save') def do_test(save_mock, power_off_mock, notify_mock, notify_action_mock, get_state_mock): # run the code self.compute.stop_instance(self.context, instance, True) # assert the calls self.assertEqual(2, get_state_mock.call_count) notify_mock.assert_has_calls([ mock.call(self.context, instance, 'power_off.start'), mock.call(self.context, instance, 'power_off.end') ]) notify_action_mock.assert_has_calls([ mock.call(self.context, instance, 'fake-mini', action='power_off', phase='start'), mock.call(self.context, instance, 'fake-mini', action='power_off', phase='end'), ]) power_off_mock.assert_called_once_with( self.context, instance, True) save_mock.assert_called_once_with( expected_task_state=[task_states.POWERING_OFF, None]) self.assertEqual(power_state.SHUTDOWN, instance.power_state) self.assertIsNone(instance.task_state) self.assertEqual(vm_states.STOPPED, instance.vm_state) do_test() def test_reset_network_driver_not_implemented(self): instance = fake_instance.fake_instance_obj(self.context) @mock.patch.object(self.compute.driver, 'reset_network', side_effect=NotImplementedError()) @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') def do_test(mock_add_fault, mock_reset): self.assertRaises(messaging.ExpectedException, self.compute.reset_network, self.context, instance) self.compute = utils.ExceptionHelper(self.compute) self.assertRaises(NotImplementedError, self.compute.reset_network, self.context, instance) do_test() @mock.patch.object(manager.ComputeManager, '_set_migration_status') @mock.patch.object(manager.ComputeManager, '_do_rebuild_instance_with_claim') @mock.patch('nova.compute.utils.notify_about_instance_rebuild') @mock.patch.object(manager.ComputeManager, '_notify_about_instance_usage') def _test_rebuild_ex(self, instance, exc, mock_notify_about_instance_usage, mock_notify, mock_rebuild, mock_set, recreate=False, scheduled_node=None): mock_rebuild.side_effect = exc self.compute.rebuild_instance(self.context, instance, None, None, None, None, None, None, recreate, False, False, None, scheduled_node, {}, None) mock_set.assert_called_once_with(None, 'failed') mock_notify_about_instance_usage.assert_called_once_with( mock.ANY, instance, 'rebuild.error', fault=mock_rebuild.side_effect ) mock_notify.assert_called_once_with( mock.ANY, instance, 'fake-mini', phase='error', exception=exc, bdms=None, tb=mock.ANY) def test_rebuild_deleting(self): instance = fake_instance.fake_instance_obj(self.context) ex = exception.UnexpectedDeletingTaskStateError( instance_uuid=instance.uuid, expected='expected', actual='actual') self._test_rebuild_ex(instance, ex) def test_rebuild_notfound(self): instance = fake_instance.fake_instance_obj(self.context) ex = exception.InstanceNotFound(instance_id=instance.uuid) self._test_rebuild_ex(instance, ex) @mock.patch('nova.compute.utils.add_instance_fault_from_exc') @mock.patch.object(manager.ComputeManager, '_error_out_instance_on_exception') def test_rebuild_driver_error_same_host(self, mock_error, mock_aiffe): instance = fake_instance.fake_instance_obj(self.context) ex = test.TestingException('foo') rt = 
self._mock_rt() self.assertRaises(test.TestingException, self._test_rebuild_ex, instance, ex) self.assertFalse( rt.delete_allocation_for_evacuated_instance.called) @mock.patch('nova.context.RequestContext.elevated') @mock.patch('nova.compute.utils.add_instance_fault_from_exc') @mock.patch.object(manager.ComputeManager, '_error_out_instance_on_exception') def test_rebuild_driver_error_evacuate(self, mock_error, mock_aiffe, mock_elevated): mock_elevated.return_value = self.context instance = fake_instance.fake_instance_obj(self.context) instance.system_metadata = {} ex = test.TestingException('foo') rt = self._mock_rt() self.assertRaises(test.TestingException, self._test_rebuild_ex, instance, ex, recreate=True, scheduled_node='foo') delete_alloc = rt.delete_allocation_for_evacuated_instance delete_alloc.assert_called_once_with(self.context, instance, 'foo', node_type='destination') @mock.patch('nova.objects.Instance.save') @mock.patch('nova.compute.utils.add_instance_fault_from_exc') @mock.patch('nova.compute.resource_tracker.ResourceTracker.' 'delete_allocation_for_evacuated_instance') def test_rebuild_compute_resources_unavailable(self, mock_delete_alloc, mock_add_fault, mock_save): """Tests that when the rebuild_claim fails with ComputeResourcesUnavailable the vm_state on the instance remains unchanged. """ instance = fake_instance.fake_instance_obj(self.context) instance.vm_state = vm_states.ACTIVE ex = exception.ComputeResourcesUnavailable(reason='out of foo') self.assertRaises(exception.BuildAbortException, self._test_rebuild_ex, instance, ex) # Make sure the instance vm_state did not change. self.assertEqual(vm_states.ACTIVE, instance.vm_state) mock_delete_alloc.assert_called_once() mock_save.assert_called() mock_add_fault.assert_called_once() @mock.patch('nova.context.RequestContext.elevated') @mock.patch('nova.objects.instance.Instance.drop_migration_context') @mock.patch('nova.objects.instance.Instance.apply_migration_context') @mock.patch('nova.objects.instance.Instance.mutated_migration_context') @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') @mock.patch('nova.network.neutron.API.setup_instance_network_on_host') @mock.patch('nova.network.neutron.API.setup_networks_on_host') @mock.patch('nova.objects.instance.Instance.save') @mock.patch('nova.compute.utils.notify_about_instance_rebuild') @mock.patch('nova.compute.utils.notify_about_instance_usage') @mock.patch('nova.compute.utils.notify_usage_exists') @mock.patch('nova.objects.instance.Instance.image_meta', new_callable=mock.PropertyMock) @mock.patch('nova.compute.manager.ComputeManager.' 
'_validate_instance_group_policy') @mock.patch('nova.compute.manager.ComputeManager._set_migration_status') @mock.patch('nova.compute.resource_tracker.ResourceTracker.rebuild_claim') def test_evacuate_late_server_group_policy_check( self, mock_rebuild_claim, mock_set_migration_status, mock_validate_policy, mock_image_meta, mock_notify_exists, mock_notify_legacy, mock_notify, mock_instance_save, mock_setup_networks, mock_setup_intance_network, mock_get_bdms, mock_mutate_migration, mock_appy_migration, mock_drop_migration, mock_context_elevated): self.flags(api_servers=['http://localhost/image/v2'], group='glance') instance = fake_instance.fake_instance_obj(self.context) instance.trusted_certs = None instance.info_cache = None elevated_context = mock.Mock() mock_context_elevated.return_value = elevated_context request_spec = objects.RequestSpec() request_spec.scheduler_hints = {'group': [uuids.group]} with mock.patch.object(self.compute, 'network_api'): self.compute.rebuild_instance( self.context, instance, None, None, None, None, None, None, recreate=True, on_shared_storage=None, preserve_ephemeral=False, migration=None, scheduled_node='fake-node', limits={}, request_spec=request_spec) mock_validate_policy.assert_called_once_with( elevated_context, instance, {'group': [uuids.group]}) @mock.patch('nova.compute.utils.add_instance_fault_from_exc') @mock.patch('nova.compute.resource_tracker.ResourceTracker.' 'delete_allocation_for_evacuated_instance') @mock.patch('nova.context.RequestContext.elevated') @mock.patch('nova.objects.instance.Instance.save') @mock.patch('nova.compute.utils.notify_about_instance_rebuild') @mock.patch('nova.compute.utils.notify_about_instance_usage') @mock.patch('nova.compute.manager.ComputeManager.' '_validate_instance_group_policy') @mock.patch('nova.compute.manager.ComputeManager._set_migration_status') @mock.patch('nova.compute.resource_tracker.ResourceTracker.rebuild_claim') def test_evacuate_late_server_group_policy_check_fails( self, mock_rebuild_claim, mock_set_migration_status, mock_validate_policy, mock_notify_legacy, mock_notify, mock_instance_save, mock_context_elevated, mock_delete_allocation, mock_instance_fault): instance = fake_instance.fake_instance_obj(self.context) instance.info_cache = None instance.system_metadata = {} instance.vm_state = vm_states.ACTIVE elevated_context = mock.Mock() mock_context_elevated.return_value = elevated_context request_spec = objects.RequestSpec() request_spec.scheduler_hints = {'group': [uuids.group]} exc = exception.RescheduledException( instance_uuid=instance.uuid, reason='policy violation') mock_validate_policy.side_effect = exc self.assertRaises( exception.BuildAbortException, self.compute.rebuild_instance, self.context, instance, None, None, None, None, None, None, recreate=True, on_shared_storage=None, preserve_ephemeral=False, migration=None, scheduled_node='fake-node', limits={}, request_spec=request_spec) mock_validate_policy.assert_called_once_with( elevated_context, instance, {'group': [uuids.group]}) mock_delete_allocation.assert_called_once_with( elevated_context, instance, 'fake-node', node_type='destination') mock_notify.assert_called_once_with( elevated_context, instance, 'fake-mini', bdms=None, exception=exc, phase='error', tb=mock.ANY) # Make sure the instance vm_state did not change. 
self.assertEqual(vm_states.ACTIVE, instance.vm_state) def test_rebuild_node_not_updated_if_not_recreate(self): node = uuidutils.generate_uuid() # ironic node uuid instance = fake_instance.fake_instance_obj(self.context, node=node) instance.migration_context = None migration = objects.Migration(status='accepted') with test.nested( mock.patch.object(self.compute, '_get_compute_info'), mock.patch.object(self.compute, '_do_rebuild_instance_with_claim'), mock.patch.object(objects.Instance, 'save'), mock.patch.object(migration, 'save'), ) as (mock_get, mock_rebuild, mock_save, mock_migration_save): self.compute.rebuild_instance( self.context, instance, None, None, None, None, None, None, False, False, False, migration, None, {}, None) self.assertFalse(mock_get.called) self.assertEqual(node, instance.node) self.assertEqual('done', migration.status) mock_migration_save.assert_called_once_with() def test_rebuild_node_updated_if_recreate(self): dead_node = uuidutils.generate_uuid() img_sys_meta = {'image_hw_numa_nodes': 1} instance = fake_instance.fake_instance_obj(self.context, node=dead_node) instance.system_metadata = img_sys_meta instance.migration_context = None mock_rt = self._mock_rt() with test.nested( mock.patch.object(self.compute, '_get_compute_info'), mock.patch.object(self.compute, '_do_rebuild_instance'), ) as (mock_get, mock_rebuild): mock_get.return_value.hypervisor_hostname = 'new-node' self.compute.rebuild_instance( self.context, instance, None, None, None, None, None, None, True, False, False, mock.sentinel.migration, None, {}, None) mock_get.assert_called_once_with(mock.ANY, self.compute.host) mock_rt.finish_evacuation.assert_called_once_with( instance, 'new-node', mock.sentinel.migration) # Make sure the rebuild_claim was called with the proper image_meta # from the instance. 
mock_rt.rebuild_claim.assert_called_once() self.assertIn('image_meta', mock_rt.rebuild_claim.call_args[1]) actual_image_meta = mock_rt.rebuild_claim.call_args[1][ 'image_meta'].properties self.assertIn('hw_numa_nodes', actual_image_meta) self.assertEqual(1, actual_image_meta.hw_numa_nodes) @mock.patch.object(compute_utils, 'notify_about_instance_rebuild') @mock.patch.object(compute_utils, 'notify_usage_exists') @mock.patch.object(compute_utils, 'notify_about_instance_action') @mock.patch.object(objects.ImageMeta, 'from_instance') @mock.patch.object(objects.Instance, 'save', return_value=None) def test_rebuild_nw_updated_if_recreate(self, mock_save, mock_image_ref, mock_notify, mock_notify_exists, mock_notify_rebuild): with test.nested( mock.patch.object(self.compute, '_notify_about_instance_usage'), mock.patch.object(self.compute.network_api, 'setup_networks_on_host'), mock.patch.object(self.compute.network_api, 'setup_instance_network_on_host'), mock.patch.object(self.compute.network_api, 'get_instance_nw_info'), mock.patch.object(self.compute, '_get_instance_block_device_info', return_value='fake-bdminfo'), mock.patch.object(self.compute, '_check_trusted_certs'), ) as( mock_notify_usage, mock_setup, mock_setup_inst, mock_get_nw_info, mock_get_blk, mock_check_trusted_certs ): self.flags(group="glance", api_servers="http://127.0.0.1:9292") instance = fake_instance.fake_instance_obj(self.context) orig_vif = fake_network_cache_model.new_vif( {'profile': {"pci_slot": "0000:01:00.1"}}) orig_nw_info = network_model.NetworkInfo([orig_vif]) new_vif = fake_network_cache_model.new_vif( {'profile': {"pci_slot": "0000:02:00.1"}}) new_nw_info = network_model.NetworkInfo([new_vif]) info_cache = objects.InstanceInfoCache(network_info=orig_nw_info, instance_uuid=instance.uuid) instance.info_cache = info_cache instance.task_state = task_states.REBUILDING instance.migration_context = None instance.numa_topology = None instance.pci_requests = None instance.pci_devices = None orig_image_ref = None image_meta = instance.image_meta injected_files = [] new_pass = None orig_sys_metadata = None bdms = [] recreate = True on_shared_storage = None preserve_ephemeral = None mock_get_nw_info.return_value = new_nw_info self.compute._do_rebuild_instance(self.context, instance, orig_image_ref, image_meta, injected_files, new_pass, orig_sys_metadata, bdms, recreate, on_shared_storage, preserve_ephemeral, {}, {}, self.allocations, mock.sentinel.mapping) mock_notify_usage.assert_has_calls( [mock.call(self.context, instance, "rebuild.start", extra_usage_info=mock.ANY), mock.call(self.context, instance, "rebuild.end", network_info=new_nw_info, extra_usage_info=mock.ANY)]) self.assertTrue(mock_image_ref.called) self.assertTrue(mock_save.called) self.assertTrue(mock_notify_exists.called) mock_setup.assert_called_once_with(self.context, instance, mock.ANY) mock_setup_inst.assert_called_once_with( self.context, instance, mock.ANY, mock.ANY, provider_mappings=mock.sentinel.mapping) mock_get_nw_info.assert_called_once_with(self.context, instance) def test_rebuild_default_impl(self): def _detach(context, bdms): # NOTE(rpodolyaka): check that instance has been powered off by # the time we detach block devices, exact calls arguments will be # checked below self.assertTrue(mock_power_off.called) self.assertFalse(mock_destroy.called) def _attach(context, instance, bdms): return {'block_device_mapping': 'shared_block_storage'} def _spawn(context, instance, image_meta, injected_files, admin_password, allocations, network_info=None, 
block_device_info=None): self.assertEqual(block_device_info['block_device_mapping'], 'shared_block_storage') with test.nested( mock.patch.object(self.compute.driver, 'destroy', return_value=None), mock.patch.object(self.compute.driver, 'spawn', side_effect=_spawn), mock.patch.object(objects.Instance, 'save', return_value=None), mock.patch.object(self.compute, '_power_off_instance', return_value=None) ) as( mock_destroy, mock_spawn, mock_save, mock_power_off ): instance = fake_instance.fake_instance_obj(self.context) instance.migration_context = None instance.numa_topology = None instance.pci_requests = None instance.pci_devices = None instance.device_metadata = None instance.task_state = task_states.REBUILDING instance.save(expected_task_state=[task_states.REBUILDING]) self.compute._rebuild_default_impl(self.context, instance, None, [], admin_password='new_pass', bdms=[], allocations={}, detach_block_devices=_detach, attach_block_devices=_attach, network_info=None, evacuate=False, block_device_info=None, preserve_ephemeral=False) self.assertTrue(mock_save.called) self.assertTrue(mock_spawn.called) mock_destroy.assert_called_once_with( self.context, instance, network_info=None, block_device_info=None) mock_power_off.assert_called_once_with( self.context, instance, clean_shutdown=True) def test_do_rebuild_instance_check_trusted_certs(self): """Tests the scenario that we're rebuilding an instance with trusted_certs on a host that does not support trusted certs so a BuildAbortException is raised. """ instance = self._trusted_certs_setup_instance() instance.system_metadata = {} with mock.patch.dict(self.compute.driver.capabilities, supports_trusted_certs=False): ex = self.assertRaises( exception.BuildAbortException, self.compute._do_rebuild_instance, self.context, instance, instance.image_ref, instance.image_meta, injected_files=[], new_pass=None, orig_sys_metadata={}, bdms=objects.BlockDeviceMapping(), evacuate=False, on_shared_storage=None, preserve_ephemeral=False, migration=objects.Migration(), request_spec=objects.RequestSpec(), allocations=self.allocations, request_group_resource_providers_mapping=mock.sentinel.mapping) self.assertIn('Trusted image certificates provided on host', six.text_type(ex)) @mock.patch.object(utils, 'last_completed_audit_period', return_value=(0, 0)) @mock.patch.object(time, 'time', side_effect=[10, 20, 21]) @mock.patch.object(objects.InstanceList, 'get_by_host', return_value=[]) @mock.patch.object(objects.BandwidthUsage, 'get_by_instance_uuid_and_mac') @mock.patch.object(db, 'bw_usage_update') def test_poll_bandwidth_usage(self, bw_usage_update, get_by_uuid_mac, get_by_host, time, last_completed_audit): bw_counters = [{'uuid': uuids.instance, 'mac_address': 'fake-mac', 'bw_in': 1, 'bw_out': 2}] usage = objects.BandwidthUsage() usage.bw_in = 3 usage.bw_out = 4 usage.last_ctr_in = 0 usage.last_ctr_out = 0 self.flags(bandwidth_poll_interval=1) get_by_uuid_mac.return_value = usage _time = timeutils.utcnow() bw_usage_update.return_value = {'uuid': uuids.instance, 'mac': '', 'start_period': _time, 'last_refreshed': _time, 'bw_in': 0, 'bw_out': 0, 'last_ctr_in': 0, 'last_ctr_out': 0, 'deleted': 0, 'created_at': _time, 'updated_at': _time, 'deleted_at': _time} with mock.patch.object(self.compute.driver, 'get_all_bw_counters', return_value=bw_counters): self.compute._poll_bandwidth_usage(self.context) get_by_uuid_mac.assert_called_once_with(self.context, uuids.instance, 'fake-mac', start_period=0, use_slave=True) # NOTE(sdague): bw_usage_update happens at some time in # 
the future, so what last_refreshed is irrelevant. bw_usage_update.assert_called_once_with(self.context, uuids.instance, 'fake-mac', 0, 4, 6, 1, 2, last_refreshed=mock.ANY) def test_reverts_task_state_instance_not_found(self): # Tests that the reverts_task_state decorator in the compute manager # will not trace when an InstanceNotFound is raised. instance = objects.Instance(uuid=uuids.instance, task_state="FAKE") instance_update_mock = mock.Mock( side_effect=exception.InstanceNotFound(instance_id=instance.uuid)) self.compute._instance_update = instance_update_mock log_mock = mock.Mock() manager.LOG = log_mock @manager.reverts_task_state def fake_function(self, context, instance): raise test.TestingException() self.assertRaises(test.TestingException, fake_function, self, self.context, instance) self.assertFalse(log_mock.called) @mock.patch.object(nova.scheduler.client.query.SchedulerQueryClient, 'update_instance_info') def test_update_scheduler_instance_info(self, mock_update): instance = objects.Instance(uuid=uuids.instance) self.compute._update_scheduler_instance_info(self.context, instance) self.assertEqual(mock_update.call_count, 1) args = mock_update.call_args[0] self.assertNotEqual(args[0], self.context) self.assertIsInstance(args[0], self.context.__class__) self.assertEqual(args[1], self.compute.host) # Send a single instance; check that the method converts to an # InstanceList self.assertIsInstance(args[2], objects.InstanceList) self.assertEqual(args[2].objects[0], instance) @mock.patch.object(nova.scheduler.client.query.SchedulerQueryClient, 'delete_instance_info') def test_delete_scheduler_instance_info(self, mock_delete): self.compute._delete_scheduler_instance_info(self.context, mock.sentinel.inst_uuid) self.assertEqual(mock_delete.call_count, 1) args = mock_delete.call_args[0] self.assertNotEqual(args[0], self.context) self.assertIsInstance(args[0], self.context.__class__) self.assertEqual(args[1], self.compute.host) self.assertEqual(args[2], mock.sentinel.inst_uuid) @ddt.data(('vnc', 'spice', 'rdp', 'serial_console', 'mks'), ('spice', 'vnc', 'rdp', 'serial_console', 'mks'), ('rdp', 'vnc', 'spice', 'serial_console', 'mks'), ('serial_console', 'vnc', 'spice', 'rdp', 'mks'), ('mks', 'vnc', 'spice', 'rdp', 'serial_console')) @ddt.unpack @mock.patch('nova.objects.ConsoleAuthToken.' 'clean_console_auths_for_instance') def test_clean_instance_console_tokens(self, g1, g2, g3, g4, g5, mock_clean): # Enable one of each of the console types and disable the rest self.flags(enabled=True, group=g1) for g in [g2, g3, g4, g5]: self.flags(enabled=False, group=g) instance = objects.Instance(uuid=uuids.instance) self.compute._clean_instance_console_tokens(self.context, instance) mock_clean.assert_called_once_with(self.context, instance.uuid) @mock.patch('nova.objects.ConsoleAuthToken.' 
'clean_console_auths_for_instance') def test_clean_instance_console_tokens_no_consoles_enabled(self, mock_clean): for g in ['vnc', 'spice', 'rdp', 'serial_console', 'mks']: self.flags(enabled=False, group=g) instance = objects.Instance(uuid=uuids.instance) self.compute._clean_instance_console_tokens(self.context, instance) mock_clean.assert_not_called() @mock.patch('nova.objects.ConsoleAuthToken.clean_expired_console_auths') def test_cleanup_expired_console_auth_tokens(self, mock_clean): self.compute._cleanup_expired_console_auth_tokens(self.context) mock_clean.assert_called_once_with(self.context) @mock.patch.object(nova.context.RequestContext, 'elevated') @mock.patch.object(nova.objects.InstanceList, 'get_by_host') @mock.patch.object(nova.scheduler.client.query.SchedulerQueryClient, 'sync_instance_info') def test_sync_scheduler_instance_info(self, mock_sync, mock_get_by_host, mock_elevated): inst1 = objects.Instance(uuid=uuids.instance_1) inst2 = objects.Instance(uuid=uuids.instance_2) inst3 = objects.Instance(uuid=uuids.instance_3) exp_uuids = [inst.uuid for inst in [inst1, inst2, inst3]] mock_get_by_host.return_value = objects.InstanceList( objects=[inst1, inst2, inst3]) fake_elevated = context.get_admin_context() mock_elevated.return_value = fake_elevated self.compute._sync_scheduler_instance_info(self.context) mock_get_by_host.assert_called_once_with( fake_elevated, self.compute.host, expected_attrs=[], use_slave=True) mock_sync.assert_called_once_with(fake_elevated, self.compute.host, exp_uuids) @mock.patch.object(nova.scheduler.client.query.SchedulerQueryClient, 'sync_instance_info') @mock.patch.object(nova.scheduler.client.query.SchedulerQueryClient, 'delete_instance_info') @mock.patch.object(nova.scheduler.client.query.SchedulerQueryClient, 'update_instance_info') def test_scheduler_info_updates_off(self, mock_update, mock_delete, mock_sync): mgr = self.compute mgr.send_instance_updates = False mgr._update_scheduler_instance_info(self.context, mock.sentinel.instance) mgr._delete_scheduler_instance_info(self.context, mock.sentinel.instance_uuid) mgr._sync_scheduler_instance_info(self.context) # None of the calls should have been made self.assertFalse(mock_update.called) self.assertFalse(mock_delete.called) self.assertFalse(mock_sync.called) def test_set_instance_obj_error_state_with_clean_task_state(self): instance = fake_instance.fake_instance_obj(self.context, vm_state=vm_states.BUILDING, task_state=task_states.SPAWNING) with mock.patch.object(instance, 'save'): self.compute._set_instance_obj_error_state(self.context, instance, clean_task_state=True) self.assertEqual(vm_states.ERROR, instance.vm_state) self.assertIsNone(instance.task_state) def test_set_instance_obj_error_state_by_default(self): instance = fake_instance.fake_instance_obj(self.context, vm_state=vm_states.BUILDING, task_state=task_states.SPAWNING) with mock.patch.object(instance, 'save'): self.compute._set_instance_obj_error_state(self.context, instance) self.assertEqual(vm_states.ERROR, instance.vm_state) self.assertEqual(task_states.SPAWNING, instance.task_state) @mock.patch.object(objects.Instance, 'save') def test_instance_update(self, mock_save): instance = objects.Instance(task_state=task_states.SCHEDULING, vm_state=vm_states.BUILDING) updates = {'task_state': None, 'vm_state': vm_states.ERROR} with mock.patch.object(self.compute, '_update_resource_tracker') as mock_rt: self.compute._instance_update(self.context, instance, **updates) self.assertIsNone(instance.task_state) self.assertEqual(vm_states.ERROR, 
instance.vm_state) mock_save.assert_called_once_with() mock_rt.assert_called_once_with(self.context, instance) def test_reset_reloads_rpcapi(self): orig_rpc = self.compute.compute_rpcapi with mock.patch('nova.compute.rpcapi.ComputeAPI') as mock_rpc: self.compute.reset() mock_rpc.assert_called_once_with() self.assertIsNot(orig_rpc, self.compute.compute_rpcapi) def test_reset_clears_provider_cache(self): # Seed the cache so we can tell we've cleared it reportclient = self.compute.reportclient ptree = reportclient._provider_tree ptree.new_root('foo', uuids.foo) self.assertEqual([uuids.foo], ptree.get_provider_uuids()) times = reportclient._association_refresh_time times[uuids.foo] = time.time() self.compute.reset() ptree = reportclient._provider_tree self.assertEqual([], ptree.get_provider_uuids()) times = reportclient._association_refresh_time self.assertEqual({}, times) @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') @mock.patch('nova.compute.manager.ComputeManager._delete_instance') def test_terminate_instance_no_bdm_volume_id(self, mock_delete_instance, mock_bdm_get_by_inst): # Tests that we refresh the bdm list if a volume bdm does not have the # volume_id set. instance = fake_instance.fake_instance_obj( self.context, vm_state=vm_states.ERROR, task_state=task_states.DELETING) bdm = fake_block_device.FakeDbBlockDeviceDict( {'source_type': 'snapshot', 'destination_type': 'volume', 'instance_uuid': instance.uuid, 'device_name': '/dev/vda'}) bdms = block_device_obj.block_device_make_list(self.context, [bdm]) # since the bdms passed in don't have a volume_id, we'll go back to the # database looking for updated versions mock_bdm_get_by_inst.return_value = bdms self.compute.terminate_instance(self.context, instance, bdms) mock_bdm_get_by_inst.assert_called_once_with( self.context, instance.uuid) mock_delete_instance.assert_called_once_with( self.context, instance, bdms) @mock.patch('nova.context.RequestContext.elevated') def test_terminate_instance_no_network_info(self, mock_elevated): # Tests that we refresh the network info if it was empty instance = fake_instance.fake_instance_obj( self.context, vm_state=vm_states.ACTIVE) empty_nw_info = network_model.NetworkInfo() instance.info_cache = objects.InstanceInfoCache( network_info=empty_nw_info) vif = fake_network_cache_model.new_vif() nw_info = network_model.NetworkInfo([vif]) bdms = objects.BlockDeviceMappingList() elevated = context.get_admin_context() mock_elevated.return_value = elevated # Call shutdown instance with test.nested( mock.patch.object(self.compute.network_api, 'get_instance_nw_info', return_value=nw_info), mock.patch.object(self.compute, '_get_instance_block_device_info'), mock.patch.object(self.compute.driver, 'destroy') ) as ( mock_nw_api_info, mock_get_bdi, mock_destroy ): self.compute._shutdown_instance(self.context, instance, bdms, notify=False, try_deallocate_networks=False) # Verify mock_nw_api_info.assert_called_once_with(elevated, instance) mock_get_bdi.assert_called_once_with(elevated, instance, bdms=bdms) # destroy should have been called with the refresh network_info mock_destroy.assert_called_once_with( elevated, instance, nw_info, mock_get_bdi.return_value) @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch.object(nova.compute.manager.ComputeManager, '_notify_about_instance_usage') @mock.patch.object(compute_utils, 'EventReporter') def test_trigger_crash_dump(self, event_mock, notify_mock, mock_instance_action_notify): instance = fake_instance.fake_instance_obj( 
self.context, vm_state=vm_states.ACTIVE) self.compute.trigger_crash_dump(self.context, instance) mock_instance_action_notify.assert_has_calls([ mock.call(self.context, instance, 'fake-mini', action='trigger_crash_dump', phase='start'), mock.call(self.context, instance, 'fake-mini', action='trigger_crash_dump', phase='end')]) notify_mock.assert_has_calls([ mock.call(self.context, instance, 'trigger_crash_dump.start'), mock.call(self.context, instance, 'trigger_crash_dump.end') ]) self.assertIsNone(instance.task_state) self.assertEqual(vm_states.ACTIVE, instance.vm_state) def test_instance_restore_notification(self): inst_obj = fake_instance.fake_instance_obj(self.context, vm_state=vm_states.SOFT_DELETED) with test.nested( mock.patch.object(nova.compute.utils, 'notify_about_instance_action'), mock.patch.object(self.compute, '_notify_about_instance_usage'), mock.patch.object(objects.Instance, 'save'), mock.patch.object(self.compute.driver, 'restore') ) as ( fake_notify, fake_usage, fake_save, fake_restore ): self.compute.restore_instance(self.context, inst_obj) fake_notify.assert_has_calls([ mock.call(self.context, inst_obj, 'fake-mini', action='restore', phase='start'), mock.call(self.context, inst_obj, 'fake-mini', action='restore', phase='end')]) def test_delete_image_on_error_image_not_found_ignored(self): """Tests that we don't log an exception trace if we get a 404 when trying to delete an image as part of the image cleanup decorator. """ @manager.delete_image_on_error def some_image_related_op(self, context, image_id, instance): raise test.TestingException('oops!') image_id = uuids.image_id instance = objects.Instance(uuid=uuids.instance_uuid) with mock.patch.object(manager.LOG, 'exception') as mock_log: with mock.patch.object( self, 'image_api', create=True) as mock_image_api: mock_image_api.delete.side_effect = ( exception.ImageNotFound(image_id=image_id)) self.assertRaises(test.TestingException, some_image_related_op, self, self.context, image_id, instance) mock_image_api.delete.assert_called_once_with( self.context, image_id) # make sure nothing was logged at exception level mock_log.assert_not_called() @mock.patch('nova.volume.cinder.API.attachment_delete') @mock.patch('nova.volume.cinder.API.attachment_create', return_value={'id': uuids.attachment_id}) @mock.patch('nova.objects.BlockDeviceMapping.save') @mock.patch('nova.volume.cinder.API.terminate_connection') def test_terminate_volume_connections(self, mock_term_conn, mock_bdm_save, mock_attach_create, mock_attach_delete): """Tests _terminate_volume_connections with cinder v2 style, cinder v3.44 style, and non-volume BDMs. """ bdms = objects.BlockDeviceMappingList( objects=[ # We use two old-style BDMs to make sure we only build the # connector object once. 
objects.BlockDeviceMapping(volume_id=uuids.v2_volume_id_1, destination_type='volume', attachment_id=None), objects.BlockDeviceMapping(volume_id=uuids.v2_volume_id_2, destination_type='volume', attachment_id=None), objects.BlockDeviceMapping(volume_id=uuids.v3_volume_id, destination_type='volume', attachment_id=uuids.attach_id), objects.BlockDeviceMapping(volume_id=None, destination_type='local') ]) instance = fake_instance.fake_instance_obj( self.context, vm_state=vm_states.ACTIVE) fake_connector = mock.sentinel.fake_connector with mock.patch.object(self.compute.driver, 'get_volume_connector', return_value=fake_connector) as connector_mock: self.compute._terminate_volume_connections( self.context, instance, bdms) # assert we called terminate_connection twice (once per old volume bdm) mock_term_conn.assert_has_calls([ mock.call(self.context, uuids.v2_volume_id_1, fake_connector), mock.call(self.context, uuids.v2_volume_id_2, fake_connector) ]) # assert we only build the connector once connector_mock.assert_called_once_with(instance) # assert we called delete_attachment once for the single new volume bdm mock_attach_delete.assert_called_once_with( self.context, uuids.attach_id) mock_attach_create.assert_called_once_with( self.context, uuids.v3_volume_id, instance.uuid) def test_instance_soft_delete_notification(self): inst_obj = fake_instance.fake_instance_obj(self.context, vm_state=vm_states.ACTIVE) inst_obj.system_metadata = {} with test.nested( mock.patch.object(nova.compute.utils, 'notify_about_instance_action'), mock.patch.object(nova.compute.utils, 'notify_about_instance_usage'), mock.patch.object(objects.Instance, 'save'), mock.patch.object(self.compute.driver, 'soft_delete') ) as (fake_notify, fake_legacy_notify, fake_save, fake_soft_delete): self.compute.soft_delete_instance(self.context, inst_obj) fake_notify.assert_has_calls([ mock.call(self.context, inst_obj, action='soft_delete', source='nova-compute', host='fake-mini', phase='start'), mock.call(self.context, inst_obj, action='soft_delete', source='nova-compute', host='fake-mini', phase='end')]) def test_get_scheduler_hints(self): # 1. No hints and no request_spec. self.assertEqual({}, self.compute._get_scheduler_hints({})) # 2. Hints come from the filter_properties. hints = {'foo': 'bar'} filter_properties = {'scheduler_hints': hints} self.assertEqual( hints, self.compute._get_scheduler_hints(filter_properties)) # 3. Hints come from filter_properties because reqspec is empty. reqspec = objects.RequestSpec.from_primitives(self.context, {}, {}) self.assertEqual( hints, self.compute._get_scheduler_hints( filter_properties, reqspec)) # 4. Hints come from the request spec. reqspec_hints = {'boo': 'baz'} reqspec = objects.RequestSpec.from_primitives( self.context, {}, {'scheduler_hints': reqspec_hints}) # The RequestSpec unconditionally stores hints as a key=list # unlike filter_properties which just stores whatever came in from # the API request. expected_reqspec_hints = {'boo': ['baz']} self.assertDictEqual( expected_reqspec_hints, self.compute._get_scheduler_hints( filter_properties, reqspec)) def test_notify_volume_usage_detach_no_block_stats(self): """Tests the case that the virt driver returns None from the block_stats() method and no notification is sent, similar to the virt driver raising NotImplementedError. 
""" self.flags(volume_usage_poll_interval=60) fake_instance = objects.Instance() fake_bdm = objects.BlockDeviceMapping(device_name='/dev/vda') with mock.patch.object(self.compute.driver, 'block_stats', return_value=None) as block_stats: # Assert a notification isn't sent. with mock.patch.object(self.compute.notifier, 'info', new_callable=mock.NonCallableMock): self.compute._notify_volume_usage_detach( self.context, fake_instance, fake_bdm) block_stats.assert_called_once_with(fake_instance, 'vda') def _test_finish_revert_resize_network_migrate_finish( self, vifs, events, migration=None): instance = fake_instance.fake_instance_obj(self.context) instance.info_cache = objects.InstanceInfoCache( network_info=network_model.NetworkInfo(vifs)) if migration is None: migration = objects.Migration( source_compute='fake-source', dest_compute='fake-dest') def fake_migrate_instance_finish( context, instance, migration, mapping): # NOTE(artom) This looks weird, but it's checking that the # temporaty_mutation() context manager did its job. self.assertEqual(migration.dest_compute, migration.source_compute) with test.nested( mock.patch.object(self.compute.virtapi, 'wait_for_instance_event'), mock.patch.object(self.compute.network_api, 'migrate_instance_finish', side_effect=fake_migrate_instance_finish) ) as (mock_wait, mock_migrate_instance_finish): self.compute._finish_revert_resize_network_migrate_finish( self.context, instance, migration, mock.sentinel.mapping) mock_wait.assert_called_once_with( instance, events, deadline=CONF.vif_plugging_timeout, error_callback=self.compute._neutron_failed_migration_callback) mock_migrate_instance_finish.assert_called_once_with( self.context, instance, migration, mock.sentinel.mapping) def test_finish_revert_resize_network_migrate_finish_wait(self): """Test that we wait for bind-time events if we have a hybrid-plugged VIF. """ self._test_finish_revert_resize_network_migrate_finish( [network_model.VIF(id=uuids.hybrid_vif, details={'ovs_hybrid_plug': True}), network_model.VIF(id=uuids.normal_vif, details={'ovs_hybrid_plug': False})], [('network-vif-plugged', uuids.hybrid_vif)]) def test_finish_revert_resize_network_migrate_finish_same_host(self): """Test that we're not waiting for any events if its a same host resize revert. """ migration = objects.Migration( source_compute='fake-source', dest_compute='fake-source') self._test_finish_revert_resize_network_migrate_finish( [network_model.VIF(id=uuids.hybrid_vif, details={'ovs_hybrid_plug': True}), network_model.VIF(id=uuids.normal_vif, details={'ovs_hybrid_plug': False})], [], migration=migration ) def test_finish_revert_resize_network_migrate_finish_dont_wait(self): """Test that we're not waiting for any events if we don't have any hybrid-plugged VIFs. """ self._test_finish_revert_resize_network_migrate_finish( [network_model.VIF(id=uuids.hybrid_vif, details={'ovs_hybrid_plug': False}), network_model.VIF(id=uuids.normal_vif, details={'ovs_hybrid_plug': False})], []) def test_finish_revert_resize_network_migrate_finish_no_vif_timeout(self): """Test that we're not waiting for any events if vif_plugging_timeout is 0. 
""" self.flags(vif_plugging_timeout=0) self._test_finish_revert_resize_network_migrate_finish( [network_model.VIF(id=uuids.hybrid_vif, details={'ovs_hybrid_plug': True}), network_model.VIF(id=uuids.normal_vif, details={'ovs_hybrid_plug': True})], []) @mock.patch('nova.compute.manager.LOG') def test_cache_images_unsupported(self, mock_log): r = self.compute.cache_images(self.context, ['an-image']) self.assertEqual({'an-image': 'unsupported'}, r) mock_log.warning.assert_called_once_with( 'Virt driver does not support image pre-caching; ignoring request') def test_cache_image_existing(self): with mock.patch.object(self.compute.driver, 'cache_image') as c: c.return_value = False r = self.compute.cache_images(self.context, ['an-image']) self.assertEqual({'an-image': 'existing'}, r) def test_cache_image_downloaded(self): with mock.patch.object(self.compute.driver, 'cache_image') as c: c.return_value = True r = self.compute.cache_images(self.context, ['an-image']) self.assertEqual({'an-image': 'cached'}, r) def test_cache_image_failed(self): with mock.patch.object(self.compute.driver, 'cache_image') as c: c.side_effect = test.TestingException('foo') r = self.compute.cache_images(self.context, ['an-image']) self.assertEqual({'an-image': 'error'}, r) def test_cache_images_multi(self): with mock.patch.object(self.compute.driver, 'cache_image') as c: c.side_effect = [True, False] r = self.compute.cache_images(self.context, ['one-image', 'two-image']) self.assertEqual({'one-image': 'cached', 'two-image': 'existing'}, r) class ComputeManagerBuildInstanceTestCase(test.NoDBTestCase): def setUp(self): super(ComputeManagerBuildInstanceTestCase, self).setUp() self.compute = manager.ComputeManager() self.context = context.RequestContext(fakes.FAKE_USER_ID, fakes.FAKE_PROJECT_ID) self.instance = fake_instance.fake_instance_obj(self.context, vm_state=vm_states.ACTIVE, expected_attrs=['metadata', 'system_metadata', 'info_cache']) self.instance.trusted_certs = None # avoid lazy-load failures self.instance.pci_requests = None # avoid lazy-load failures self.admin_pass = 'pass' self.injected_files = [] self.image = {} self.node = 'fake-node' self.limits = {} self.requested_networks = [] self.security_groups = [] self.block_device_mapping = [] self.accel_uuids = None self.filter_properties = {'retry': {'num_attempts': 1, 'hosts': [[self.compute.host, 'fake-node']]}} self.resource_provider_mapping = None self.useFixture(fixtures.SpawnIsSynchronousFixture()) def fake_network_info(): return network_model.NetworkInfo([{'address': '1.2.3.4'}]) self.network_info = network_model.NetworkInfoAsyncWrapper( fake_network_info) self.block_device_info = self.compute._prep_block_device(context, self.instance, self.block_device_mapping) # override tracker with a version that doesn't need the database: fake_rt = fake_resource_tracker.FakeResourceTracker(self.compute.host, self.compute.driver) self.compute.rt = fake_rt self.allocations = { uuids.provider1: { "generation": 0, "resources": { "VCPU": 1, "MEMORY_MB": 512 } } } self.mock_get_allocs = self.useFixture( fixtures.fixtures.MockPatchObject( self.compute.reportclient, 'get_allocations_for_consumer')).mock self.mock_get_allocs.return_value = self.allocations def _do_build_instance_update(self, mock_save, reschedule_update=False): mock_save.return_value = self.instance if reschedule_update: mock_save.side_effect = (self.instance, self.instance) @staticmethod def _assert_build_instance_update(mock_save, reschedule_update=False): if reschedule_update: mock_save.assert_has_calls([ 
mock.call(expected_task_state=(task_states.SCHEDULING, None)), mock.call()]) else: mock_save.assert_called_once_with(expected_task_state= (task_states.SCHEDULING, None)) def _instance_action_events(self, mock_start, mock_finish): mock_start.assert_called_once_with(self.context, self.instance.uuid, mock.ANY, host=CONF.host, want_result=False) mock_finish.assert_called_once_with(self.context, self.instance.uuid, mock.ANY, exc_val=mock.ANY, exc_tb=mock.ANY, want_result=False) @staticmethod def _assert_build_instance_hook_called(mock_hooks, result): # NOTE(coreywright): we want to test the return value of # _do_build_and_run_instance, but it doesn't bubble all the way up, so # mock the hooking, which allows us to test that too, though a little # too intimately mock_hooks.setdefault().run_post.assert_called_once_with( 'build_instance', result, mock.ANY, mock.ANY, f=None) @mock.patch.object(objects.Instance, 'save') @mock.patch.object(nova.compute.manager.ComputeManager, '_default_block_device_names') @mock.patch.object(nova.compute.manager.ComputeManager, '_prep_block_device') @mock.patch.object(virt_driver.ComputeDriver, 'prepare_for_spawn') @mock.patch.object(nova.compute.manager.ComputeManager, '_build_networks_for_instance') @mock.patch.object(virt_driver.ComputeDriver, 'prepare_networks_before_block_device_mapping') def _test_accel_build_resources(self, accel_uuids, mock_prep_net, mock_build_net, mock_prep_spawn, mock_prep_bd, mock_bdnames, mock_save): args = (self.context, self.instance, self.requested_networks, self.security_groups, self.image, self.block_device_mapping, self.resource_provider_mapping, accel_uuids) resources = [] with self.compute._build_resources(*args) as resources: pass return resources @mock.patch.object(nova.compute.manager.ComputeManager, '_get_bound_arq_resources') def test_accel_build_resources_no_device_profile(self, mock_get_arqs): # If dp_name is None, accel path is a no-op. self.instance.flavor.extra_specs = {} self._test_accel_build_resources(None) mock_get_arqs.assert_not_called() @mock.patch.object(nova.compute.manager.ComputeManager, '_get_bound_arq_resources') def test_accel_build_resources(self, mock_get_arqs): # Happy path for accels in build_resources dp_name = "mydp" self.instance.flavor.extra_specs = {"accel:device_profile": dp_name} arq_list = fixtures.CyborgFixture.bound_arq_list mock_get_arqs.return_value = arq_list arq_uuids = [arq['uuid'] for arq in arq_list] resources = self._test_accel_build_resources(arq_uuids) mock_get_arqs.assert_called_once_with(self.context, dp_name, self.instance, arq_uuids) self.assertEqual(sorted(resources['accel_info']), sorted(arq_list)) @mock.patch.object(virt_driver.ComputeDriver, 'clean_networks_preparation') @mock.patch.object(nova.compute.manager.ComputeManager, '_get_bound_arq_resources') def test_accel_build_resources_exception(self, mock_get_arqs, mock_clean_net): dp_name = "mydp" self.instance.flavor.extra_specs = {"accel:device_profile": dp_name} mock_get_arqs.side_effect = ( exception.AcceleratorRequestOpFailed(op='get', msg='')) self.assertRaises(exception.NovaException, self._test_accel_build_resources, None) mock_clean_net.assert_called_once() @mock.patch.object(nova.compute.manager.ComputeVirtAPI, 'exit_wait_early') @mock.patch.object(nova.compute.manager.ComputeVirtAPI, 'wait_for_instance_event') @mock.patch('nova.accelerator.cyborg._CyborgClient.' 
'get_arqs_for_instance') def test_arq_bind_wait_exit_early(self, mock_get_arqs, mock_wait_inst_ev, mock_exit_wait_early): # Bound ARQs available on first query, quit early. dp_name = fixtures.CyborgFixture.dp_name arq_list = fixtures.CyborgFixture.bound_arq_list self.instance.flavor.extra_specs = {"accel:device_profile": dp_name} arq_events = [('accelerator-request-bound', arq['uuid']) for arq in arq_list] arq_uuids = [arq['uuid'] for arq in arq_list] mock_get_arqs.return_value = arq_list ret_arqs = self.compute._get_bound_arq_resources( self.context, dp_name, self.instance, arq_uuids) mock_wait_inst_ev.assert_called_once_with( self.instance, arq_events, deadline=mock.ANY) mock_exit_wait_early.assert_called_once_with(arq_events) mock_get_arqs.assert_has_calls([ mock.call(self.instance.uuid, only_resolved=True)]) self.assertEqual(sorted(ret_arqs), sorted(arq_list)) @mock.patch.object(nova.compute.manager.ComputeVirtAPI, 'exit_wait_early') @mock.patch.object(nova.compute.manager.ComputeVirtAPI, 'wait_for_instance_event') @mock.patch('nova.accelerator.cyborg._CyborgClient.' 'get_arqs_for_instance') def test_arq_bind_wait_exit_early_no_arq_uuids(self, mock_get_arqs, mock_wait_inst_ev, mock_exit_wait_early): # If no ARQ UUIDs are passed in, call Cyborg to get the ARQs. # Then, if bound ARQs available on first query, quit early. dp_name = fixtures.CyborgFixture.dp_name arq_list = fixtures.CyborgFixture.bound_arq_list self.instance.flavor.extra_specs = {"accel:device_profile": dp_name} arq_events = [('accelerator-request-bound', arq['uuid']) for arq in arq_list] mock_get_arqs.side_effect = [arq_list, arq_list] ret_arqs = self.compute._get_bound_arq_resources( self.context, dp_name, self.instance, arq_uuids=None) mock_wait_inst_ev.assert_called_once_with( self.instance, arq_events, deadline=mock.ANY) mock_exit_wait_early.assert_called_once_with(arq_events) mock_get_arqs.assert_has_calls([ mock.call(self.instance.uuid), mock.call(self.instance.uuid, only_resolved=True)]) self.assertEqual(sorted(ret_arqs), sorted(arq_list)) @mock.patch.object(nova.compute.manager.ComputeVirtAPI, 'exit_wait_early') @mock.patch.object(nova.compute.manager.ComputeVirtAPI, 'wait_for_instance_event') @mock.patch('nova.accelerator.cyborg._CyborgClient.' 'get_arqs_for_instance') def test_arq_bind_wait(self, mock_get_arqs, mock_wait_inst_ev, mock_exit_wait_early): # If binding is in progress, must wait. dp_name = fixtures.CyborgFixture.dp_name arq_list = fixtures.CyborgFixture.bound_arq_list self.instance.flavor.extra_specs = {"accel:device_profile": dp_name} arq_events = [('accelerator-request-bound', arq['uuid']) for arq in arq_list] arq_uuids = [arq['uuid'] for arq in arq_list] # get_arqs_for_instance gets called 2 times, returning the # resolved ARQs first, and the full list finally mock_get_arqs.side_effect = [[], arq_list] ret_arqs = self.compute._get_bound_arq_resources( self.context, dp_name, self.instance, arq_uuids) mock_wait_inst_ev.assert_called_once_with( self.instance, arq_events, deadline=mock.ANY) mock_exit_wait_early.assert_not_called() self.assertEqual(sorted(ret_arqs), sorted(arq_list)) mock_get_arqs.assert_has_calls([ mock.call(self.instance.uuid, only_resolved=True), mock.call(self.instance.uuid)]) @mock.patch.object(nova.compute.manager.ComputeVirtAPI, 'exit_wait_early') @mock.patch.object(nova.compute.manager.ComputeVirtAPI, 'wait_for_instance_event') @mock.patch('nova.accelerator.cyborg._CyborgClient.' 
'get_arqs_for_instance') def test_arq_bind_timeout(self, mock_get_arqs, mock_wait_inst_ev, mock_exit_wait_early): # If binding fails even after wait, exception is thrown dp_name = fixtures.CyborgFixture.dp_name arq_list = fixtures.CyborgFixture.bound_arq_list self.instance.flavor.extra_specs = {"accel:device_profile": dp_name} arq_events = [('accelerator-request-bound', arq['uuid']) for arq in arq_list] arq_uuids = [arq['uuid'] for arq in arq_list] mock_get_arqs.return_value = arq_list mock_wait_inst_ev.side_effect = eventlet_timeout.Timeout self.assertRaises(eventlet_timeout.Timeout, self.compute._get_bound_arq_resources, self.context, dp_name, self.instance, arq_uuids) mock_wait_inst_ev.assert_called_once_with( self.instance, arq_events, deadline=mock.ANY) mock_exit_wait_early.assert_not_called() mock_get_arqs.assert_not_called() @mock.patch.object(nova.compute.manager.ComputeVirtAPI, 'exit_wait_early') @mock.patch.object(nova.compute.manager.ComputeVirtAPI, 'wait_for_instance_event') @mock.patch('nova.accelerator.cyborg._CyborgClient.' 'get_arqs_for_instance') def test_arq_bind_exception(self, mock_get_arqs, mock_wait_inst_ev, mock_exit_wait_early): # If the code inside the context manager of _get_bound_arq_resources # raises an exception, that exception must be handled. dp_name = fixtures.CyborgFixture.dp_name arq_list = fixtures.CyborgFixture.bound_arq_list self.instance.flavor.extra_specs = {"accel:device_profile": dp_name} arq_events = [('accelerator-request-bound', arq['uuid']) for arq in arq_list] arq_uuids = [arq['uuid'] for arq in arq_list] mock_get_arqs.side_effect = ( exception.AcceleratorRequestOpFailed(op='', msg='')) self.assertRaises(exception.AcceleratorRequestOpFailed, self.compute._get_bound_arq_resources, self.context, dp_name, self.instance, arq_uuids) mock_wait_inst_ev.assert_called_once_with( self.instance, arq_events, deadline=mock.ANY) mock_exit_wait_early.assert_not_called() mock_get_arqs.assert_called_once_with( self.instance.uuid, only_resolved=True) @mock.patch.object(fake_driver.FakeDriver, 'spawn') @mock.patch('nova.objects.Instance.save') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'get_allocations_for_consumer') @mock.patch.object(manager.ComputeManager, '_get_request_group_mapping') @mock.patch.object(manager.ComputeManager, '_check_trusted_certs') @mock.patch.object(manager.ComputeManager, '_check_device_tagging') @mock.patch.object(compute_utils, 'notify_about_instance_create') @mock.patch.object(manager.ComputeManager, '_notify_about_instance_usage') def test_spawn_called_with_accel_info(self, mock_ins_usage, mock_ins_create, mock_dev_tag, mock_certs, mock_req_group_map, mock_get_allocations, mock_ins_save, mock_spawn): accel_info = [{'k1': 'v1', 'k2': 'v2'}] @contextlib.contextmanager def fake_build_resources(compute_mgr, *args, **kwargs): yield { 'block_device_info': None, 'network_info': None, 'accel_info': accel_info, } self.stub_out('nova.compute.manager.ComputeManager._build_resources', fake_build_resources) mock_req_group_map.return_value = None mock_get_allocations.return_value = mock.sentinel.allocation self.compute._build_and_run_instance(self.context, self.instance, self.image, injected_files=self.injected_files, admin_password=self.admin_pass, requested_networks=self.requested_networks, security_groups=self.security_groups, block_device_mapping=self.block_device_mapping, node=self.node, limits=self.limits, filter_properties=self.filter_properties) mock_spawn.assert_called_once_with(self.context, self.instance, mock.ANY, self.injected_files, self.admin_pass, mock.ANY, network_info=None, block_device_info=None, accel_info=accel_info) @mock.patch.object(objects.Instance, 'save') @mock.patch.object(nova.compute.manager.ComputeManager, '_build_networks_for_instance') @mock.patch.object(nova.compute.manager.ComputeManager, '_default_block_device_names') @mock.patch.object(nova.compute.manager.ComputeManager, '_prep_block_device') @mock.patch.object(virt_driver.ComputeDriver, 'prepare_for_spawn') @mock.patch.object(virt_driver.ComputeDriver, 'prepare_networks_before_block_device_mapping') @mock.patch.object(virt_driver.ComputeDriver, 'clean_networks_preparation') @mock.patch.object(nova.compute.manager.ComputeManager, '_get_bound_arq_resources') def _test_delete_arqs_exception(self, mock_get_arqs, mock_clean_net, mock_prep_net, mock_prep_spawn, mock_prep_bd, mock_bdnames, mock_build_net, mock_save): args = (self.context, self.instance, self.requested_networks, self.security_groups, self.image, self.block_device_mapping, self.resource_provider_mapping, self.accel_uuids) mock_get_arqs.side_effect = ( exception.AcceleratorRequestOpFailed(op='get', msg='')) with self.compute._build_resources(*args): raise test.TestingException() @mock.patch('nova.accelerator.cyborg._CyborgClient.' 'delete_arqs_for_instance') def test_delete_arqs_if_build_res_exception(self, mock_del_arqs): # Cyborg is called to delete ARQs if exception is thrown inside # the context of # _build_resources(). self.instance.flavor.extra_specs = {'accel:device_profile': 'mydp'} self.assertRaisesRegex(exception.BuildAbortException, 'Failure getting accelerator requests', self._test_delete_arqs_exception) mock_del_arqs.assert_called_once_with(self.instance.uuid) @mock.patch('nova.accelerator.cyborg._CyborgClient.' 'delete_arqs_for_instance') def test_delete_arqs_if_build_res_exception_no_dp(self, mock_del_arqs): # Cyborg is not called to delete ARQs, even if an exception is # thrown inside the context of _build_resources(), if there is no # device profile name in the extra specs. 
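        # NOTE: purely illustrative sketch (the key and value mirror the
        # neighbouring tests, nothing new is assumed): the accelerator path
        # keys off the 'accel:device_profile' flavor extra spec, so
        #     flavor.extra_specs = {'accel:device_profile': 'mydp'}
        # makes _build_resources request (and on failure clean up) ARQs,
        # while
        #     flavor.extra_specs = {}
        # means no device profile and hence no Cyborg cleanup call.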
self.instance.flavor.extra_specs = {} self.assertRaises(exception.BuildAbortException, self._test_delete_arqs_exception) mock_del_arqs.assert_not_called() def test_build_and_run_instance_called_with_proper_args(self): self._test_build_and_run_instance() def test_build_and_run_instance_with_unlimited_max_concurrent_builds(self): self.flags(max_concurrent_builds=0) self.compute = manager.ComputeManager() self._test_build_and_run_instance() @mock.patch.object(objects.InstanceActionEvent, 'event_finish_with_failure') @mock.patch.object(objects.InstanceActionEvent, 'event_start') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(manager.ComputeManager, '_build_and_run_instance') @mock.patch('nova.hooks._HOOKS') def _test_build_and_run_instance(self, mock_hooks, mock_build, mock_save, mock_start, mock_finish): self._do_build_instance_update(mock_save) self.compute.build_and_run_instance(self.context, self.instance, self.image, request_spec={}, filter_properties=self.filter_properties, injected_files=self.injected_files, admin_password=self.admin_pass, requested_networks=self.requested_networks, security_groups=self.security_groups, block_device_mapping=self.block_device_mapping, node=self.node, limits=self.limits, host_list=fake_host_list) self._assert_build_instance_hook_called(mock_hooks, build_results.ACTIVE) self._instance_action_events(mock_start, mock_finish) self._assert_build_instance_update(mock_save) mock_build.assert_called_once_with(self.context, self.instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, self.filter_properties, {}, self.accel_uuids) # This test verifies that, when an icehouse-compatible RPC call is sent to # a juno compute node, the NetworkRequest object can be loaded from a # three-item tuple.
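    # For illustration (a sketch inferred from the assertions in the test
    # below, not nova's exact RPC wire format): the legacy requested network
    # arrives as a three-item tuple, e.g.
    #     ('fake_network_id', '10.0.0.1', uuids.port_instance)
    # and is hydrated into an objects.NetworkRequest whose network_id,
    # address and port_id are set to those three values.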
@mock.patch.object(compute_utils, 'EventReporter') @mock.patch('nova.objects.Instance.save') @mock.patch('nova.compute.manager.ComputeManager._build_and_run_instance') def test_build_and_run_instance_with_icehouse_requested_network( self, mock_build_and_run, mock_save, mock_event): mock_save.return_value = self.instance self.compute.build_and_run_instance(self.context, self.instance, self.image, request_spec={}, filter_properties=self.filter_properties, injected_files=self.injected_files, admin_password=self.admin_pass, requested_networks=[objects.NetworkRequest( network_id='fake_network_id', address='10.0.0.1', port_id=uuids.port_instance)], security_groups=self.security_groups, block_device_mapping=self.block_device_mapping, node=self.node, limits=self.limits, host_list=fake_host_list) requested_network = mock_build_and_run.call_args[0][5][0] self.assertEqual('fake_network_id', requested_network.network_id) self.assertEqual('10.0.0.1', str(requested_network.address)) self.assertEqual(uuids.port_instance, requested_network.port_id) @mock.patch.object(objects.InstanceActionEvent, 'event_finish_with_failure') @mock.patch.object(objects.InstanceActionEvent, 'event_start') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(manager.ComputeManager, '_cleanup_allocated_networks') @mock.patch.object(manager.ComputeManager, '_cleanup_volumes') @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') @mock.patch.object(manager.ComputeManager, '_nil_out_instance_obj_host_and_node') @mock.patch.object(manager.ComputeManager, '_set_instance_obj_error_state') @mock.patch.object(conductor_api.ComputeTaskAPI, 'build_instances') @mock.patch.object(manager.ComputeManager, '_build_and_run_instance') @mock.patch('nova.hooks._HOOKS') def test_build_abort_exception(self, mock_hooks, mock_build_run, mock_build, mock_set, mock_nil, mock_add, mock_clean_vol, mock_clean_net, mock_save, mock_start, mock_finish): self._do_build_instance_update(mock_save) mock_build_run.side_effect = exception.BuildAbortException(reason='', instance_uuid=self.instance.uuid) self.compute.build_and_run_instance(self.context, self.instance, self.image, request_spec={}, filter_properties=self.filter_properties, injected_files=self.injected_files, admin_password=self.admin_pass, requested_networks=self.requested_networks, security_groups=self.security_groups, block_device_mapping=self.block_device_mapping, node=self.node, limits=self.limits, host_list=fake_host_list) self._instance_action_events(mock_start, mock_finish) self._assert_build_instance_update(mock_save) self._assert_build_instance_hook_called(mock_hooks, build_results.FAILED) mock_build_run.assert_called_once_with(self.context, self.instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, self.filter_properties, {}, self.accel_uuids) mock_clean_net.assert_called_once_with(self.context, self.instance, self.requested_networks) mock_clean_vol.assert_called_once_with(self.context, self.instance, self.block_device_mapping, raise_exc=False) mock_add.assert_called_once_with(self.context, self.instance, mock.ANY, mock.ANY) mock_nil.assert_called_once_with(self.instance) mock_set.assert_called_once_with(self.context, self.instance, clean_task_state=True) @mock.patch.object(objects.InstanceActionEvent, 'event_finish_with_failure') @mock.patch.object(objects.InstanceActionEvent, 'event_start') @mock.patch.object(objects.Instance, 'save') 
@mock.patch.object(manager.ComputeManager, '_nil_out_instance_obj_host_and_node') @mock.patch.object(manager.ComputeManager, '_set_instance_obj_error_state') @mock.patch.object(conductor_api.ComputeTaskAPI, 'build_instances') @mock.patch.object(manager.ComputeManager, '_build_and_run_instance') @mock.patch('nova.hooks._HOOKS') def test_rescheduled_exception(self, mock_hooks, mock_build_run, mock_build, mock_set, mock_nil, mock_save, mock_start, mock_finish): self._do_build_instance_update(mock_save, reschedule_update=True) mock_build_run.side_effect = exception.RescheduledException(reason='', instance_uuid=self.instance.uuid) with mock.patch.object( self.compute.network_api, 'get_instance_nw_info', ): self.compute.build_and_run_instance( self.context, self.instance, self.image, request_spec={}, filter_properties=self.filter_properties, injected_files=self.injected_files, admin_password=self.admin_pass, requested_networks=self.requested_networks, security_groups=self.security_groups, block_device_mapping=self.block_device_mapping, node=self.node, limits=self.limits, host_list=fake_host_list) self._assert_build_instance_hook_called(mock_hooks, build_results.RESCHEDULED) self._instance_action_events(mock_start, mock_finish) self._assert_build_instance_update(mock_save, reschedule_update=True) mock_build_run.assert_called_once_with(self.context, self.instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, self.filter_properties, {}, self.accel_uuids) mock_nil.assert_called_once_with(self.instance) mock_build.assert_called_once_with(self.context, [self.instance], self.image, self.filter_properties, self.admin_pass, self.injected_files, self.requested_networks, self.security_groups, self.block_device_mapping, request_spec={}, host_lists=[fake_host_list]) @mock.patch.object(manager.ComputeManager, '_shutdown_instance') @mock.patch.object(manager.ComputeManager, '_build_networks_for_instance') @mock.patch.object(fake_driver.FakeDriver, 'spawn') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(manager.ComputeManager, '_notify_about_instance_usage') def test_rescheduled_exception_with_non_ascii_exception(self, mock_notify, mock_save, mock_spawn, mock_build, mock_shutdown): exc = exception.NovaException(u's\xe9quence') mock_build.return_value = self.network_info mock_spawn.side_effect = exc self.assertRaises(exception.RescheduledException, self.compute._build_and_run_instance, self.context, self.instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, self.filter_properties, self.accel_uuids) mock_save.assert_has_calls([ mock.call(), mock.call(), mock.call(expected_task_state='block_device_mapping'), ]) mock_notify.assert_has_calls([ mock.call(self.context, self.instance, 'create.start', extra_usage_info={'image_name': self.image.get('name')}), mock.call(self.context, self.instance, 'create.error', fault=exc)]) mock_build.assert_called_once_with(self.context, self.instance, self.requested_networks, self.security_groups, self.resource_provider_mapping) mock_shutdown.assert_called_once_with(self.context, self.instance, self.block_device_mapping, self.requested_networks, try_deallocate_networks=False) mock_spawn.assert_called_once_with(self.context, self.instance, test.MatchType(objects.ImageMeta), self.injected_files, self.admin_pass, self.allocations, network_info=self.network_info, 
block_device_info=self.block_device_info, accel_info=[]) @mock.patch.object(manager.ComputeManager, '_build_and_run_instance') @mock.patch.object(conductor_api.ComputeTaskAPI, 'build_instances') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(objects.InstanceActionEvent, 'event_start') @mock.patch.object(objects.InstanceActionEvent, 'event_finish_with_failure') def test_rescheduled_exception_with_network_allocated(self, mock_event_finish, mock_event_start, mock_ins_save, mock_build_ins, mock_build_and_run): instance = fake_instance.fake_instance_obj(self.context, vm_state=vm_states.ACTIVE, system_metadata={'network_allocated': 'True'}, expected_attrs=['metadata', 'system_metadata', 'info_cache']) mock_ins_save.return_value = instance mock_build_and_run.side_effect = exception.RescheduledException( reason='', instance_uuid=self.instance.uuid) with mock.patch.object( self.compute.network_api, 'get_instance_nw_info', ): self.compute._do_build_and_run_instance( self.context, instance, self.image, request_spec={}, filter_properties=self.filter_properties, injected_files=self.injected_files, admin_password=self.admin_pass, requested_networks=self.requested_networks, security_groups=self.security_groups, block_device_mapping=self.block_device_mapping, node=self.node, limits=self.limits, host_list=fake_host_list) mock_build_and_run.assert_called_once_with(self.context, instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, self.filter_properties, {}, self.accel_uuids) mock_build_ins.assert_called_once_with(self.context, [instance], self.image, self.filter_properties, self.admin_pass, self.injected_files, self.requested_networks, self.security_groups, self.block_device_mapping, request_spec={}, host_lists=[fake_host_list]) @mock.patch.object(manager.ComputeManager, '_build_and_run_instance') @mock.patch.object(manager.ComputeManager, '_cleanup_allocated_networks') @mock.patch.object(conductor_api.ComputeTaskAPI, 'build_instances') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(objects.InstanceActionEvent, 'event_start') @mock.patch.object(objects.InstanceActionEvent, 'event_finish_with_failure') def test_rescheduled_exception_with_network_allocated_with_neutron(self, mock_event_finish, mock_event_start, mock_ins_save, mock_build_ins, mock_cleanup_network, mock_build_and_run): """Tests that we always cleanup allocated networks for the instance when using neutron and before we reschedule off the failed host. 
""" instance = fake_instance.fake_instance_obj(self.context, vm_state=vm_states.ACTIVE, system_metadata={'network_allocated': 'True'}, expected_attrs=['metadata', 'system_metadata', 'info_cache']) mock_ins_save.return_value = instance mock_build_and_run.side_effect = exception.RescheduledException( reason='', instance_uuid=self.instance.uuid) self.compute._do_build_and_run_instance(self.context, instance, self.image, request_spec={}, filter_properties=self.filter_properties, injected_files=self.injected_files, admin_password=self.admin_pass, requested_networks=self.requested_networks, security_groups=self.security_groups, block_device_mapping=self.block_device_mapping, node=self.node, limits=self.limits, host_list=fake_host_list) mock_build_and_run.assert_called_once_with(self.context, instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, self.filter_properties, {}, self.accel_uuids) mock_cleanup_network.assert_called_once_with( self.context, instance, self.requested_networks) mock_build_ins.assert_called_once_with(self.context, [instance], self.image, self.filter_properties, self.admin_pass, self.injected_files, self.requested_networks, self.security_groups, self.block_device_mapping, request_spec={}, host_lists=[fake_host_list]) @mock.patch.object(manager.ComputeManager, '_build_and_run_instance') @mock.patch.object(conductor_api.ComputeTaskAPI, 'build_instances') @mock.patch.object(manager.ComputeManager, '_cleanup_allocated_networks') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(objects.InstanceActionEvent, 'event_start') @mock.patch.object(objects.InstanceActionEvent, 'event_finish_with_failure') def test_rescheduled_exception_with_sriov_network_allocated(self, mock_event_finish, mock_event_start, mock_ins_save, mock_cleanup_network, mock_build_ins, mock_build_and_run): vif1 = fake_network_cache_model.new_vif() vif1['id'] = '1' vif1['vnic_type'] = network_model.VNIC_TYPE_NORMAL vif2 = fake_network_cache_model.new_vif() vif2['id'] = '2' vif1['vnic_type'] = network_model.VNIC_TYPE_DIRECT nw_info = network_model.NetworkInfo([vif1, vif2]) instance = fake_instance.fake_instance_obj(self.context, vm_state=vm_states.ACTIVE, system_metadata={'network_allocated': 'True'}, expected_attrs=['metadata', 'system_metadata', 'info_cache']) info_cache = objects.InstanceInfoCache(network_info=nw_info, instance_uuid=instance.uuid) instance.info_cache = info_cache mock_ins_save.return_value = instance mock_build_and_run.side_effect = exception.RescheduledException( reason='', instance_uuid=self.instance.uuid) self.compute._do_build_and_run_instance(self.context, instance, self.image, request_spec={}, filter_properties=self.filter_properties, injected_files=self.injected_files, admin_password=self.admin_pass, requested_networks=self.requested_networks, security_groups=self.security_groups, block_device_mapping=self.block_device_mapping, node=self.node, limits=self.limits, host_list=fake_host_list) mock_build_and_run.assert_called_once_with(self.context, instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, self.filter_properties, {}, self.accel_uuids) mock_cleanup_network.assert_called_once_with( self.context, instance, self.requested_networks) mock_build_ins.assert_called_once_with(self.context, [instance], self.image, self.filter_properties, self.admin_pass, self.injected_files, 
self.requested_networks, self.security_groups, self.block_device_mapping, request_spec={}, host_lists=[fake_host_list]) @mock.patch.object(objects.InstanceActionEvent, 'event_finish_with_failure') @mock.patch.object(objects.InstanceActionEvent, 'event_start') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(manager.ComputeManager, '_nil_out_instance_obj_host_and_node') @mock.patch.object(manager.ComputeManager, '_cleanup_volumes') @mock.patch.object(manager.ComputeManager, '_cleanup_allocated_networks') @mock.patch.object(manager.ComputeManager, '_set_instance_obj_error_state') @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') @mock.patch.object(manager.ComputeManager, '_build_and_run_instance') @mock.patch('nova.hooks._HOOKS') def test_rescheduled_exception_without_retry(self, mock_hooks, mock_build_run, mock_add, mock_set, mock_clean_net, mock_clean_vol, mock_nil, mock_save, mock_start, mock_finish): self._do_build_instance_update(mock_save) mock_build_run.side_effect = exception.RescheduledException(reason='', instance_uuid=self.instance.uuid) self.compute.build_and_run_instance(self.context, self.instance, self.image, request_spec={}, filter_properties={}, injected_files=self.injected_files, admin_password=self.admin_pass, requested_networks=self.requested_networks, security_groups=self.security_groups, block_device_mapping=self.block_device_mapping, node=self.node, limits=self.limits, host_list=fake_host_list) self._assert_build_instance_hook_called(mock_hooks, build_results.FAILED) self._instance_action_events(mock_start, mock_finish) self._assert_build_instance_update(mock_save) mock_build_run.assert_called_once_with(self.context, self.instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, {}, {}, self.accel_uuids) mock_clean_net.assert_called_once_with(self.context, self.instance, self.requested_networks) mock_clean_vol.assert_called_once_with(self.context, self.instance, self.block_device_mapping, raise_exc=False) mock_add.assert_called_once_with(self.context, self.instance, mock.ANY, mock.ANY, fault_message=mock.ANY) mock_nil.assert_called_once_with(self.instance) mock_set.assert_called_once_with(self.context, self.instance, clean_task_state=True) @mock.patch.object(objects.InstanceActionEvent, 'event_finish_with_failure') @mock.patch.object(objects.InstanceActionEvent, 'event_start') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(manager.ComputeManager, '_cleanup_allocated_networks') @mock.patch.object(manager.ComputeManager, '_nil_out_instance_obj_host_and_node') @mock.patch.object(conductor_api.ComputeTaskAPI, 'build_instances') @mock.patch.object(manager.ComputeManager, '_build_and_run_instance') @mock.patch('nova.hooks._HOOKS') def test_rescheduled_exception_do_not_deallocate_network(self, mock_hooks, mock_build_run, mock_build, mock_nil, mock_clean_net, mock_save, mock_start, mock_finish): self._do_build_instance_update(mock_save, reschedule_update=True) mock_build_run.side_effect = exception.RescheduledException(reason='', instance_uuid=self.instance.uuid) self.compute.build_and_run_instance(self.context, self.instance, self.image, request_spec={}, filter_properties=self.filter_properties, injected_files=self.injected_files, admin_password=self.admin_pass, requested_networks=self.requested_networks, security_groups=self.security_groups, block_device_mapping=self.block_device_mapping, node=self.node, limits=self.limits, 
host_list=fake_host_list) self._assert_build_instance_hook_called(mock_hooks, build_results.RESCHEDULED) self._instance_action_events(mock_start, mock_finish) self._assert_build_instance_update(mock_save, reschedule_update=True) mock_build_run.assert_called_once_with(self.context, self.instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, self.filter_properties, {}, self.accel_uuids) mock_nil.assert_called_once_with(self.instance) mock_build.assert_called_once_with(self.context, [self.instance], self.image, self.filter_properties, self.admin_pass, self.injected_files, self.requested_networks, self.security_groups, self.block_device_mapping, request_spec={}, host_lists=[fake_host_list]) @mock.patch.object(objects.InstanceActionEvent, 'event_finish_with_failure') @mock.patch.object(objects.InstanceActionEvent, 'event_start') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(manager.ComputeManager, '_cleanup_allocated_networks') @mock.patch.object(manager.ComputeManager, '_nil_out_instance_obj_host_and_node') @mock.patch.object(conductor_api.ComputeTaskAPI, 'build_instances') @mock.patch.object(manager.ComputeManager, '_build_and_run_instance') @mock.patch('nova.hooks._HOOKS') def test_rescheduled_exception_deallocate_network(self, mock_hooks, mock_build_run, mock_build, mock_nil, mock_clean, mock_save, mock_start, mock_finish): self._do_build_instance_update(mock_save, reschedule_update=True) mock_build_run.side_effect = exception.RescheduledException(reason='', instance_uuid=self.instance.uuid) self.compute.build_and_run_instance(self.context, self.instance, self.image, request_spec={}, filter_properties=self.filter_properties, injected_files=self.injected_files, admin_password=self.admin_pass, requested_networks=self.requested_networks, security_groups=self.security_groups, block_device_mapping=self.block_device_mapping, node=self.node, limits=self.limits, host_list=fake_host_list) self._assert_build_instance_hook_called(mock_hooks, build_results.RESCHEDULED) self._instance_action_events(mock_start, mock_finish) self._assert_build_instance_update(mock_save, reschedule_update=True) mock_build_run.assert_called_once_with(self.context, self.instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, self.filter_properties, {}, self.accel_uuids) mock_clean.assert_called_once_with(self.context, self.instance, self.requested_networks) mock_nil.assert_called_once_with(self.instance) mock_build.assert_called_once_with(self.context, [self.instance], self.image, self.filter_properties, self.admin_pass, self.injected_files, self.requested_networks, self.security_groups, self.block_device_mapping, request_spec={}, host_lists=[fake_host_list]) @mock.patch.object(objects.InstanceActionEvent, 'event_finish_with_failure') @mock.patch.object(objects.InstanceActionEvent, 'event_start') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(manager.ComputeManager, '_cleanup_allocated_networks') @mock.patch.object(manager.ComputeManager, '_cleanup_volumes') @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') @mock.patch.object(manager.ComputeManager, '_nil_out_instance_obj_host_and_node') @mock.patch.object(manager.ComputeManager, '_set_instance_obj_error_state') @mock.patch.object(conductor_api.ComputeTaskAPI, 'build_instances') @mock.patch.object(manager.ComputeManager, 
'_build_and_run_instance') @mock.patch('nova.hooks._HOOKS') def _test_build_and_run_exceptions(self, exc, mock_hooks, mock_build_run, mock_build, mock_set, mock_nil, mock_add, mock_clean_vol, mock_clean_net, mock_save, mock_start, mock_finish, set_error=False, cleanup_volumes=False, nil_out_host_and_node=False): self._do_build_instance_update(mock_save) mock_build_run.side_effect = exc self.compute.build_and_run_instance(self.context, self.instance, self.image, request_spec={}, filter_properties=self.filter_properties, injected_files=self.injected_files, admin_password=self.admin_pass, requested_networks=self.requested_networks, security_groups=self.security_groups, block_device_mapping=self.block_device_mapping, node=self.node, limits=self.limits, host_list=fake_host_list) self._assert_build_instance_hook_called(mock_hooks, build_results.FAILED) self._instance_action_events(mock_start, mock_finish) self._assert_build_instance_update(mock_save) if cleanup_volumes: mock_clean_vol.assert_called_once_with(self.context, self.instance, self.block_device_mapping, raise_exc=False) if nil_out_host_and_node: mock_nil.assert_called_once_with(self.instance) if set_error: mock_add.assert_called_once_with(self.context, self.instance, mock.ANY, mock.ANY) mock_set.assert_called_once_with(self.context, self.instance, clean_task_state=True) mock_build_run.assert_called_once_with(self.context, self.instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, self.filter_properties, {}, self.accel_uuids) mock_clean_net.assert_called_once_with(self.context, self.instance, self.requested_networks) def test_build_and_run_notfound_exception(self): self._test_build_and_run_exceptions(exception.InstanceNotFound( instance_id='')) def test_build_and_run_unexpecteddeleting_exception(self): self._test_build_and_run_exceptions( exception.UnexpectedDeletingTaskStateError( instance_uuid=uuids.instance, expected={}, actual={})) @mock.patch('nova.compute.manager.LOG.error') def test_build_and_run_buildabort_exception(self, mock_le): self._test_build_and_run_exceptions( exception.BuildAbortException(instance_uuid='', reason=''), set_error=True, cleanup_volumes=True, nil_out_host_and_node=True) mock_le.assert_called_once_with('Build of instance aborted: ', instance=mock.ANY) def test_build_and_run_unhandled_exception(self): self._test_build_and_run_exceptions(test.TestingException(), set_error=True, cleanup_volumes=True, nil_out_host_and_node=True) @mock.patch.object(manager.ComputeManager, '_do_build_and_run_instance') @mock.patch('nova.compute.stats.Stats.build_failed') def test_build_failures_reported(self, mock_failed, mock_dbari): mock_dbari.return_value = build_results.FAILED instance = objects.Instance(uuid=uuids.instance) for i in range(0, 10): self.compute.build_and_run_instance(self.context, instance, None, None, None) self.assertEqual(10, mock_failed.call_count) @mock.patch.object(manager.ComputeManager, '_do_build_and_run_instance') @mock.patch('nova.compute.stats.Stats.build_failed') def test_build_failures_not_reported(self, mock_failed, mock_dbari): self.flags(consecutive_build_service_disable_threshold=0, group='compute') mock_dbari.return_value = build_results.FAILED instance = objects.Instance(uuid=uuids.instance) for i in range(0, 10): self.compute.build_and_run_instance(self.context, instance, None, None, None) mock_failed.assert_not_called() @mock.patch.object(manager.ComputeManager, 
'_do_build_and_run_instance') @mock.patch.object(manager.ComputeManager, '_build_failed') @mock.patch.object(manager.ComputeManager, '_build_succeeded') def test_transient_build_failures_no_report(self, mock_succeeded, mock_failed, mock_dbari): results = [build_results.FAILED, build_results.ACTIVE, build_results.RESCHEDULED] def _fake_build(*a, **k): if results: return results.pop(0) else: return build_results.ACTIVE mock_dbari.side_effect = _fake_build instance = objects.Instance(uuid=uuids.instance) for i in range(0, 10): self.compute.build_and_run_instance(self.context, instance, None, None, None) self.assertEqual(2, mock_failed.call_count) self.assertEqual(8, mock_succeeded.call_count) @mock.patch.object(manager.ComputeManager, '_do_build_and_run_instance') @mock.patch.object(manager.ComputeManager, '_build_failed') @mock.patch.object(manager.ComputeManager, '_build_succeeded') def test_build_reschedules_reported(self, mock_succeeded, mock_failed, mock_dbari): mock_dbari.return_value = build_results.RESCHEDULED instance = objects.Instance(uuid=uuids.instance) for i in range(0, 10): self.compute.build_and_run_instance(self.context, instance, None, None, None) self.assertEqual(10, mock_failed.call_count) mock_succeeded.assert_not_called() @mock.patch.object(manager.ComputeManager, '_do_build_and_run_instance') @mock.patch('nova.exception_wrapper._emit_exception_notification') @mock.patch('nova.compute.utils.add_instance_fault_from_exc') @mock.patch.object(manager.ComputeManager, '_build_failed') @mock.patch.object(manager.ComputeManager, '_build_succeeded') def test_build_exceptions_reported(self, mock_succeeded, mock_failed, mock_if, mock_notify, mock_dbari): mock_dbari.side_effect = test.TestingException() instance = objects.Instance(uuid=uuids.instance, task_state=None) for i in range(0, 10): self.assertRaises(test.TestingException, self.compute.build_and_run_instance, self.context, instance, None, None, None) self.assertEqual(10, mock_failed.call_count) mock_succeeded.assert_not_called() @mock.patch.object(manager.ComputeManager, '_shutdown_instance') @mock.patch.object(manager.ComputeManager, '_build_networks_for_instance') @mock.patch.object(fake_driver.FakeDriver, 'spawn') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(manager.ComputeManager, '_notify_about_instance_usage') def _test_instance_exception(self, exc, raised_exc, mock_notify, mock_save, mock_spawn, mock_build, mock_shutdown): """Test the instance-related InstanceNotFound and reschedule-on-exception error paths. The concrete test cases are supplied via the arguments.
:param exc: Injected exception into the code under test :param raised_exc: Exception type expected to be raised by the code under test """ mock_build.return_value = self.network_info mock_spawn.side_effect = exc self.assertRaises(raised_exc, self.compute._build_and_run_instance, self.context, self.instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, self.filter_properties, self.accel_uuids) mock_save.assert_has_calls([ mock.call(), mock.call(), mock.call(expected_task_state='block_device_mapping')]) mock_notify.assert_has_calls([ mock.call(self.context, self.instance, 'create.start', extra_usage_info={'image_name': self.image.get('name')}), mock.call(self.context, self.instance, 'create.error', fault=exc)]) mock_build.assert_called_once_with( self.context, self.instance, self.requested_networks, self.security_groups, self.resource_provider_mapping) mock_shutdown.assert_called_once_with( self.context, self.instance, self.block_device_mapping, self.requested_networks, try_deallocate_networks=False) mock_spawn.assert_called_once_with( self.context, self.instance, test.MatchType(objects.ImageMeta), self.injected_files, self.admin_pass, self.allocations, network_info=self.network_info, block_device_info=self.block_device_info, accel_info=[]) def test_instance_not_found(self): got_exc = exception.InstanceNotFound(instance_id=1) self._test_instance_exception(got_exc, exception.InstanceNotFound) def test_reschedule_on_exception(self): got_exc = test.TestingException() self._test_instance_exception(got_exc, exception.RescheduledException) def test_spawn_network_auto_alloc_failure(self): # This isn't really a driver.spawn failure, it's a failure from # network_api.allocate_for_instance, but testing it here is convenient.
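        # The shared helper used below, _test_build_and_run_spawn_exceptions
        # (defined later in this class), injects the given exception as
        # driver.spawn's side_effect and asserts that _build_and_run_instance
        # converts it into a BuildAbortException, roughly:
        #     spawn.side_effect = exc
        #     self.assertRaises(exception.BuildAbortException,
        #                       self.compute._build_and_run_instance, ...)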
self._test_build_and_run_spawn_exceptions( exception.UnableToAutoAllocateNetwork( project_id=self.context.project_id)) def test_spawn_network_fixed_ip_not_valid_on_host_failure(self): self._test_build_and_run_spawn_exceptions( exception.FixedIpInvalidOnHost(port_id='fake-port-id')) def test_build_and_run_no_more_fixedips_exception(self): self._test_build_and_run_spawn_exceptions( exception.NoMoreFixedIps("error messge")) def test_build_and_run_flavor_disk_smaller_image_exception(self): self._test_build_and_run_spawn_exceptions( exception.FlavorDiskSmallerThanImage( flavor_size=0, image_size=1)) def test_build_and_run_flavor_disk_smaller_min_disk(self): self._test_build_and_run_spawn_exceptions( exception.FlavorDiskSmallerThanMinDisk( flavor_size=0, image_min_disk=1)) def test_build_and_run_flavor_memory_too_small_exception(self): self._test_build_and_run_spawn_exceptions( exception.FlavorMemoryTooSmall()) def test_build_and_run_image_not_active_exception(self): self._test_build_and_run_spawn_exceptions( exception.ImageNotActive(image_id=self.image.get('id'))) def test_build_and_run_image_unacceptable_exception(self): self._test_build_and_run_spawn_exceptions( exception.ImageUnacceptable(image_id=self.image.get('id'), reason="")) def test_build_and_run_invalid_disk_info_exception(self): self._test_build_and_run_spawn_exceptions( exception.InvalidDiskInfo(reason="")) def test_build_and_run_invalid_disk_format_exception(self): self._test_build_and_run_spawn_exceptions( exception.InvalidDiskFormat(disk_format="")) def test_build_and_run_signature_verification_error(self): self._test_build_and_run_spawn_exceptions( cursive_exception.SignatureVerificationError(reason="")) def test_build_and_run_certificate_validation_error(self): self._test_build_and_run_spawn_exceptions( exception.CertificateValidationFailed(cert_uuid='trusted-cert-id', reason="")) def test_build_and_run_volume_encryption_not_supported(self): self._test_build_and_run_spawn_exceptions( exception.VolumeEncryptionNotSupported(volume_type='something', volume_id='something')) def test_build_and_run_invalid_input(self): self._test_build_and_run_spawn_exceptions( exception.InvalidInput(reason="")) def test_build_and_run_requested_vram_too_high(self): self._test_build_and_run_spawn_exceptions( exception.RequestedVRamTooHigh(req_vram=200, max_vram=100)) def _test_build_and_run_spawn_exceptions(self, exc): with test.nested( mock.patch.object(self.compute.driver, 'spawn', side_effect=exc), mock.patch.object(self.instance, 'save', side_effect=[self.instance, self.instance, self.instance]), mock.patch.object(self.compute, '_build_networks_for_instance', return_value=self.network_info), mock.patch.object(self.compute, '_notify_about_instance_usage'), mock.patch.object(self.compute, '_shutdown_instance'), mock.patch.object(self.compute, '_validate_instance_group_policy'), mock.patch('nova.compute.utils.notify_about_instance_create') ) as (spawn, save, _build_networks_for_instance, _notify_about_instance_usage, _shutdown_instance, _validate_instance_group_policy, mock_notify): self.assertRaises(exception.BuildAbortException, self.compute._build_and_run_instance, self.context, self.instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, self.filter_properties, self.accel_uuids) _validate_instance_group_policy.assert_called_once_with( self.context, self.instance, {}) _build_networks_for_instance.assert_has_calls( [mock.call(self.context, 
self.instance, self.requested_networks, self.security_groups, self.resource_provider_mapping)]) _notify_about_instance_usage.assert_has_calls([ mock.call(self.context, self.instance, 'create.start', extra_usage_info={'image_name': self.image.get('name')}), mock.call(self.context, self.instance, 'create.error', fault=exc)]) mock_notify.assert_has_calls([ mock.call(self.context, self.instance, 'fake-mini', phase='start', bdms=[]), mock.call(self.context, self.instance, 'fake-mini', phase='error', exception=exc, bdms=[], tb=mock.ANY)]) save.assert_has_calls([ mock.call(), mock.call(), mock.call( expected_task_state=task_states.BLOCK_DEVICE_MAPPING)]) spawn.assert_has_calls([mock.call(self.context, self.instance, test.MatchType(objects.ImageMeta), self.injected_files, self.admin_pass, self.allocations, network_info=self.network_info, block_device_info=self.block_device_info, accel_info=[])]) _shutdown_instance.assert_called_once_with(self.context, self.instance, self.block_device_mapping, self.requested_networks, try_deallocate_networks=False) @mock.patch.object(manager.ComputeManager, '_notify_about_instance_usage') @mock.patch.object(objects.InstanceActionEvent, 'event_finish_with_failure') @mock.patch.object(objects.InstanceActionEvent, 'event_start') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(manager.ComputeManager, '_nil_out_instance_obj_host_and_node') @mock.patch.object(conductor_api.ComputeTaskAPI, 'build_instances') @mock.patch.object(resource_tracker.ResourceTracker, 'instance_claim') def test_reschedule_on_resources_unavailable(self, mock_claim, mock_build, mock_nil, mock_save, mock_start, mock_finish, mock_notify): reason = 'resource unavailable' exc = exception.ComputeResourcesUnavailable(reason=reason) mock_claim.side_effect = exc self._do_build_instance_update(mock_save, reschedule_update=True) with mock.patch.object( self.compute.network_api, 'get_instance_nw_info', ): self.compute.build_and_run_instance( self.context, self.instance, self.image, request_spec={}, filter_properties=self.filter_properties, injected_files=self.injected_files, admin_password=self.admin_pass, requested_networks=self.requested_networks, security_groups=self.security_groups, block_device_mapping=self.block_device_mapping, node=self.node, limits=self.limits, host_list=fake_host_list) self._instance_action_events(mock_start, mock_finish) self._assert_build_instance_update(mock_save, reschedule_update=True) mock_claim.assert_called_once_with(self.context, self.instance, self.node, self.allocations, self.limits) mock_notify.assert_has_calls([ mock.call(self.context, self.instance, 'create.start', extra_usage_info= {'image_name': self.image.get('name')}), mock.call(self.context, self.instance, 'create.error', fault=exc)]) mock_build.assert_called_once_with(self.context, [self.instance], self.image, self.filter_properties, self.admin_pass, self.injected_files, self.requested_networks, self.security_groups, self.block_device_mapping, request_spec={}, host_lists=[fake_host_list]) mock_nil.assert_called_once_with(self.instance) @mock.patch.object(manager.ComputeManager, '_build_resources') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(manager.ComputeManager, '_notify_about_instance_usage') def test_build_resources_buildabort_reraise(self, mock_notify, mock_save, mock_build): exc = exception.BuildAbortException( instance_uuid=self.instance.uuid, reason='') mock_build.side_effect = exc self.assertRaises(exception.BuildAbortException, self.compute._build_and_run_instance, 
self.context, self.instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, self.filter_properties, self.accel_uuids) mock_save.assert_called_once_with() mock_notify.assert_has_calls([ mock.call(self.context, self.instance, 'create.start', extra_usage_info={'image_name': self.image.get('name')}), mock.call(self.context, self.instance, 'create.error', fault=exc)]) mock_build.assert_called_once_with(self.context, self.instance, self.requested_networks, self.security_groups, test.MatchType(objects.ImageMeta), self.block_device_mapping, self.resource_provider_mapping, self.accel_uuids) @mock.patch.object(virt_driver.ComputeDriver, 'failed_spawn_cleanup') @mock.patch.object(virt_driver.ComputeDriver, 'prepare_for_spawn') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(manager.ComputeManager, '_build_networks_for_instance') @mock.patch.object(manager.ComputeManager, '_prep_block_device') @mock.patch.object(virt_driver.ComputeDriver, 'prepare_networks_before_block_device_mapping') @mock.patch.object(virt_driver.ComputeDriver, 'clean_networks_preparation') def test_build_resources_reraises_on_failed_bdm_prep( self, mock_clean, mock_prepnet, mock_prep, mock_build, mock_save, mock_prepspawn, mock_failedspawn): mock_save.return_value = self.instance mock_build.return_value = self.network_info mock_prep.side_effect = test.TestingException try: with self.compute._build_resources(self.context, self.instance, self.requested_networks, self.security_groups, self.image, self.block_device_mapping, self.resource_provider_mapping, self.accel_uuids): pass except Exception as e: self.assertIsInstance(e, exception.BuildAbortException) mock_save.assert_called_once_with() mock_build.assert_called_once_with(self.context, self.instance, self.requested_networks, self.security_groups, self.resource_provider_mapping) mock_prep.assert_called_once_with(self.context, self.instance, self.block_device_mapping) mock_prepnet.assert_called_once_with(self.instance, self.network_info) mock_clean.assert_called_once_with(self.instance, self.network_info) mock_prepspawn.assert_called_once_with(self.instance) mock_failedspawn.assert_called_once_with(self.instance) @mock.patch('nova.virt.block_device.attach_block_devices', side_effect=exception.VolumeNotCreated('oops!')) def test_prep_block_device_maintain_original_error_message(self, mock_attach): """Tests that when attach_block_devices raises an Exception, the re-raised InvalidBDM has the original error message which contains the actual details of the failure. 
""" bdms = objects.BlockDeviceMappingList( objects=[fake_block_device.fake_bdm_object( self.context, dict(source_type='image', destination_type='volume', boot_index=0, image_id=uuids.image_id, device_name='/dev/vda', volume_size=1))]) ex = self.assertRaises(exception.InvalidBDM, self.compute._prep_block_device, self.context, self.instance, bdms) self.assertEqual('oops!', six.text_type(ex)) @mock.patch('nova.objects.InstanceGroup.get_by_hint') def test_validate_policy_honors_workaround_disabled(self, mock_get): instance = objects.Instance(uuid=uuids.instance) hints = {'group': 'foo'} mock_get.return_value = objects.InstanceGroup(policy=None, uuid=uuids.group) self.compute._validate_instance_group_policy(self.context, instance, hints) mock_get.assert_called_once_with(self.context, 'foo') @mock.patch('nova.objects.InstanceGroup.get_by_hint') def test_validate_policy_honors_workaround_enabled(self, mock_get): self.flags(disable_group_policy_check_upcall=True, group='workarounds') instance = objects.Instance(uuid=uuids.instance) hints = {'group': 'foo'} self.compute._validate_instance_group_policy(self.context, instance, hints) self.assertFalse(mock_get.called) @mock.patch('nova.objects.InstanceGroup.get_by_hint') def test_validate_instance_group_policy_handles_hint_list(self, mock_get): """Tests that _validate_instance_group_policy handles getting scheduler_hints from a RequestSpec which stores the hints as a key=list pair. """ instance = objects.Instance(uuid=uuids.instance) hints = {'group': [uuids.group_hint]} self.compute._validate_instance_group_policy(self.context, instance, hints) mock_get.assert_called_once_with(self.context, uuids.group_hint) @mock.patch('nova.objects.InstanceGroup.get_by_uuid') @mock.patch('nova.objects.InstanceList.get_uuids_by_host') @mock.patch('nova.objects.InstanceGroup.get_by_hint') @mock.patch.object(fake_driver.FakeDriver, 'get_available_nodes') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') def test_validate_instance_group_policy_with_rules( self, migration_list, nodes, mock_get_by_hint, mock_get_by_host, mock_get_by_uuid): # Create 2 instance in same host, inst2 created before inst1 instance = objects.Instance(uuid=uuids.inst1) hints = {'group': [uuids.group_hint]} existing_insts = [uuids.inst1, uuids.inst2] members_uuids = [uuids.inst1, uuids.inst2] mock_get_by_host.return_value = existing_insts # if group policy rules limit to 1, raise RescheduledException group = objects.InstanceGroup( policy='anti-affinity', rules={'max_server_per_host': '1'}, hosts=['host1'], members=members_uuids, uuid=uuids.group) mock_get_by_hint.return_value = group mock_get_by_uuid.return_value = group nodes.return_value = ['nodename'] migration_list.return_value = [objects.Migration( uuid=uuids.migration, instance_uuid=uuids.instance)] self.assertRaises(exception.RescheduledException, self.compute._validate_instance_group_policy, self.context, instance, hints) # if group policy rules limit change to 2, validate OK group2 = objects.InstanceGroup( policy='anti-affinity', rules={'max_server_per_host': 2}, hosts=['host1'], members=members_uuids, uuid=uuids.group) mock_get_by_hint.return_value = group2 mock_get_by_uuid.return_value = group2 self.compute._validate_instance_group_policy(self.context, instance, hints) @mock.patch.object(virt_driver.ComputeDriver, 'failed_spawn_cleanup') @mock.patch.object(virt_driver.ComputeDriver, 'prepare_for_spawn') @mock.patch.object(virt_driver.ComputeDriver, 'prepare_networks_before_block_device_mapping') 
@mock.patch.object(virt_driver.ComputeDriver, 'clean_networks_preparation') def test_failed_bdm_prep_from_delete_raises_unexpected(self, mock_clean, mock_prepnet, mock_prepspawn, mock_failedspawn): with test.nested( mock.patch.object(self.compute, '_build_networks_for_instance', return_value=self.network_info), mock.patch.object(self.instance, 'save', side_effect=exception.UnexpectedDeletingTaskStateError( instance_uuid=uuids.instance, actual={'task_state': task_states.DELETING}, expected={'task_state': None})), ) as (_build_networks_for_instance, save): try: with self.compute._build_resources(self.context, self.instance, self.requested_networks, self.security_groups, self.image, self.block_device_mapping, self.resource_provider_mapping, self.accel_uuids): pass except Exception as e: self.assertIsInstance(e, exception.UnexpectedDeletingTaskStateError) _build_networks_for_instance.assert_has_calls( [mock.call(self.context, self.instance, self.requested_networks, self.security_groups, self.resource_provider_mapping)]) save.assert_has_calls([mock.call()]) mock_prepnet.assert_called_once_with(self.instance, self.network_info) mock_clean.assert_called_once_with(self.instance, self.network_info) mock_prepspawn.assert_called_once_with(self.instance) mock_failedspawn.assert_called_once_with(self.instance) @mock.patch.object(virt_driver.ComputeDriver, 'failed_spawn_cleanup') @mock.patch.object(virt_driver.ComputeDriver, 'prepare_for_spawn') @mock.patch.object(manager.ComputeManager, '_build_networks_for_instance') def test_build_resources_aborts_on_failed_network_alloc(self, mock_build, mock_prepspawn, mock_failedspawn): mock_build.side_effect = test.TestingException try: with self.compute._build_resources(self.context, self.instance, self.requested_networks, self.security_groups, self.image, self.block_device_mapping, self.resource_provider_mapping, self.accel_uuids): pass except Exception as e: self.assertIsInstance(e, exception.BuildAbortException) mock_build.assert_called_once_with(self.context, self.instance, self.requested_networks, self.security_groups, self.resource_provider_mapping) # This exception is raised prior to initial prep and cleanup # with the virt driver, and as such these should not record # any calls. 
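        # Put differently (a simplified sketch implied by the assertions
        # below, not the literal manager code): network allocation runs
        # before the driver preparation hooks, so a failure there never
        # reaches them:
        #     network_info = self._build_networks_for_instance(...)  # raises
        #     self.driver.prepare_for_spawn(instance)       # never reached
        #     ...
        #     self.driver.failed_spawn_cleanup(instance)    # never reached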
mock_prepspawn.assert_not_called() mock_failedspawn.assert_not_called() @mock.patch.object(virt_driver.ComputeDriver, 'failed_spawn_cleanup') @mock.patch.object(virt_driver.ComputeDriver, 'prepare_for_spawn') def test_failed_network_alloc_from_delete_raises_unexpected(self, mock_prepspawn, mock_failedspawn): with mock.patch.object(self.compute, '_build_networks_for_instance') as _build_networks: exc = exception.UnexpectedDeletingTaskStateError _build_networks.side_effect = exc( instance_uuid=uuids.instance, actual={'task_state': task_states.DELETING}, expected={'task_state': None}) try: with self.compute._build_resources(self.context, self.instance, self.requested_networks, self.security_groups, self.image, self.block_device_mapping, self.resource_provider_mapping, self.accel_uuids): pass except Exception as e: self.assertIsInstance(e, exc) _build_networks.assert_has_calls( [mock.call(self.context, self.instance, self.requested_networks, self.security_groups, self.resource_provider_mapping)]) mock_prepspawn.assert_not_called() mock_failedspawn.assert_not_called() @mock.patch.object(virt_driver.ComputeDriver, 'failed_spawn_cleanup') @mock.patch.object(virt_driver.ComputeDriver, 'prepare_for_spawn') @mock.patch.object(manager.ComputeManager, '_build_networks_for_instance') @mock.patch.object(manager.ComputeManager, '_shutdown_instance') @mock.patch.object(objects.Instance, 'save') def test_build_resources_cleans_up_and_reraises_on_spawn_failure(self, mock_save, mock_shutdown, mock_build, mock_prepspawn, mock_failedspawn): mock_save.return_value = self.instance mock_build.return_value = self.network_info test_exception = test.TestingException() def fake_spawn(): raise test_exception try: with self.compute._build_resources(self.context, self.instance, self.requested_networks, self.security_groups, self.image, self.block_device_mapping, self.resource_provider_mapping, self.accel_uuids): fake_spawn() except Exception as e: self.assertEqual(test_exception, e) mock_save.assert_called_once_with() mock_build.assert_called_once_with(self.context, self.instance, self.requested_networks, self.security_groups, self.resource_provider_mapping) mock_shutdown.assert_called_once_with(self.context, self.instance, self.block_device_mapping, self.requested_networks, try_deallocate_networks=False) mock_prepspawn.assert_called_once_with(self.instance) # Complete should have occured with _shutdown_instance # so calling after the fact is not necessary. 
mock_failedspawn.assert_not_called() @mock.patch.object(virt_driver.ComputeDriver, 'failed_spawn_cleanup') @mock.patch.object(virt_driver.ComputeDriver, 'prepare_for_spawn') @mock.patch('nova.network.model.NetworkInfoAsyncWrapper.wait') @mock.patch( 'nova.compute.manager.ComputeManager._build_networks_for_instance') @mock.patch('nova.objects.Instance.save') def test_build_resources_instance_not_found_before_yield( self, mock_save, mock_build_network, mock_info_wait, mock_prepspawn, mock_failedspawn): mock_build_network.return_value = self.network_info expected_exc = exception.InstanceNotFound( instance_id=self.instance.uuid) mock_save.side_effect = expected_exc try: with self.compute._build_resources(self.context, self.instance, self.requested_networks, self.security_groups, self.image, self.block_device_mapping, self.resource_provider_mapping, self.accel_uuids): raise test.TestingException() except Exception as e: self.assertEqual(expected_exc, e) mock_build_network.assert_called_once_with(self.context, self.instance, self.requested_networks, self.security_groups, self.resource_provider_mapping) mock_info_wait.assert_called_once_with(do_raise=False) mock_prepspawn.assert_called_once_with(self.instance) mock_failedspawn.assert_called_once_with(self.instance) @mock.patch.object(virt_driver.ComputeDriver, 'failed_spawn_cleanup') @mock.patch.object(virt_driver.ComputeDriver, 'prepare_for_spawn') @mock.patch('nova.network.model.NetworkInfoAsyncWrapper.wait') @mock.patch( 'nova.compute.manager.ComputeManager._build_networks_for_instance') @mock.patch('nova.objects.Instance.save') @mock.patch.object(virt_driver.ComputeDriver, 'prepare_networks_before_block_device_mapping') @mock.patch.object(virt_driver.ComputeDriver, 'clean_networks_preparation') def test_build_resources_unexpected_task_error_before_yield( self, mock_clean, mock_prepnet, mock_save, mock_build_network, mock_info_wait, mock_prepspawn, mock_failedspawn): mock_build_network.return_value = self.network_info mock_save.side_effect = exception.UnexpectedTaskStateError( instance_uuid=uuids.instance, expected={}, actual={}) try: with self.compute._build_resources(self.context, self.instance, self.requested_networks, self.security_groups, self.image, self.block_device_mapping, self.resource_provider_mapping, self.accel_uuids): raise test.TestingException() except exception.BuildAbortException: pass mock_build_network.assert_called_once_with(self.context, self.instance, self.requested_networks, self.security_groups, self.resource_provider_mapping) mock_info_wait.assert_called_once_with(do_raise=False) mock_prepnet.assert_called_once_with(self.instance, self.network_info) mock_clean.assert_called_once_with(self.instance, self.network_info) mock_prepspawn.assert_called_once_with(self.instance) mock_failedspawn.assert_called_once_with(self.instance) @mock.patch.object(virt_driver.ComputeDriver, 'failed_spawn_cleanup') @mock.patch.object(virt_driver.ComputeDriver, 'prepare_for_spawn') @mock.patch('nova.network.model.NetworkInfoAsyncWrapper.wait') @mock.patch( 'nova.compute.manager.ComputeManager._build_networks_for_instance') @mock.patch('nova.objects.Instance.save') def test_build_resources_exception_before_yield( self, mock_save, mock_build_network, mock_info_wait, mock_prepspawn, mock_failedspawn): mock_build_network.return_value = self.network_info mock_save.side_effect = Exception() try: with self.compute._build_resources(self.context, self.instance, self.requested_networks, self.security_groups, self.image, self.block_device_mapping, 
self.resource_provider_mapping, self.accel_uuids): raise test.TestingException() except exception.BuildAbortException: pass mock_build_network.assert_called_once_with(self.context, self.instance, self.requested_networks, self.security_groups, self.resource_provider_mapping) mock_info_wait.assert_called_once_with(do_raise=False) mock_prepspawn.assert_called_once_with(self.instance) mock_failedspawn.assert_called_once_with(self.instance) @mock.patch.object(virt_driver.ComputeDriver, 'failed_spawn_cleanup') @mock.patch.object(virt_driver.ComputeDriver, 'prepare_for_spawn') @mock.patch.object(manager.ComputeManager, '_build_networks_for_instance') @mock.patch.object(manager.ComputeManager, '_shutdown_instance') @mock.patch.object(objects.Instance, 'save') @mock.patch('nova.compute.manager.LOG') def test_build_resources_aborts_on_cleanup_failure(self, mock_log, mock_save, mock_shutdown, mock_build, mock_prepspawn, mock_failedspawn): mock_save.return_value = self.instance mock_build.return_value = self.network_info mock_shutdown.side_effect = test.TestingException('Failed to shutdown') def fake_spawn(): raise test.TestingException('Failed to spawn') with self.assertRaisesRegex(exception.BuildAbortException, 'Failed to spawn'): with self.compute._build_resources(self.context, self.instance, self.requested_networks, self.security_groups, self.image, self.block_device_mapping, self.resource_provider_mapping, self.accel_uuids): fake_spawn() self.assertTrue(mock_log.warning.called) msg = mock_log.warning.call_args_list[0] self.assertIn('Failed to shutdown', msg[0][1]) mock_save.assert_called_once_with() mock_build.assert_called_once_with(self.context, self.instance, self.requested_networks, self.security_groups, self.resource_provider_mapping) mock_shutdown.assert_called_once_with(self.context, self.instance, self.block_device_mapping, self.requested_networks, try_deallocate_networks=False) mock_prepspawn.assert_called_once_with(self.instance) # Cleanup should have occurred via _shutdown_instance, # so calling failed_spawn_cleanup after the fact is not necessary.
mock_failedspawn.assert_not_called() @mock.patch.object(manager.ComputeManager, '_allocate_network') def test_build_networks_if_not_allocated(self, mock_allocate): instance = fake_instance.fake_instance_obj(self.context, system_metadata={}, expected_attrs=['system_metadata']) nw_info_obj = self.compute._build_networks_for_instance(self.context, instance, self.requested_networks, self.security_groups, self.resource_provider_mapping) mock_allocate.assert_called_once_with(self.context, instance, self.requested_networks, self.security_groups, self.resource_provider_mapping) self.assertTrue(hasattr(nw_info_obj, 'wait'), "wait must be there") @mock.patch.object(manager.ComputeManager, '_allocate_network') def test_build_networks_if_allocated_false(self, mock_allocate): instance = fake_instance.fake_instance_obj(self.context, system_metadata=dict(network_allocated='False'), expected_attrs=['system_metadata']) nw_info_obj = self.compute._build_networks_for_instance(self.context, instance, self.requested_networks, self.security_groups, self.resource_provider_mapping) mock_allocate.assert_called_once_with(self.context, instance, self.requested_networks, self.security_groups, self.resource_provider_mapping) self.assertTrue(hasattr(nw_info_obj, 'wait'), "wait must be there") @mock.patch.object(manager.ComputeManager, '_allocate_network') def test_return_networks_if_found(self, mock_allocate): instance = fake_instance.fake_instance_obj(self.context, system_metadata=dict(network_allocated='True'), expected_attrs=['system_metadata']) def fake_network_info(): return network_model.NetworkInfo([{'address': '123.123.123.123'}]) with test.nested( mock.patch.object( self.compute.network_api, 'setup_instance_network_on_host'), mock.patch.object( self.compute.network_api, 'get_instance_nw_info')) as ( mock_setup, mock_get ): # this should be a NetworkInfo, not NetworkInfoAsyncWrapper, to # match what get_instance_nw_info really returns mock_get.return_value = fake_network_info() self.compute._build_networks_for_instance(self.context, instance, self.requested_networks, self.security_groups, self.resource_provider_mapping) mock_get.assert_called_once_with(self.context, instance) mock_setup.assert_called_once_with(self.context, instance, instance.host) def test__cleanup_allocated_networks__instance_not_found(self): with test.nested( mock.patch.object(self.compute.network_api, 'get_instance_nw_info'), mock.patch.object(self.compute.driver, 'unplug_vifs'), mock.patch.object(self.compute, '_deallocate_network'), mock.patch.object(self.instance, 'save', side_effect=exception.InstanceNotFound(instance_id='')) ) as (mock_nwinfo, mock_unplug, mock_deallocate_network, mock_save): # Testing that this doesn't raise an exception self.compute._cleanup_allocated_networks( self.context, self.instance, self.requested_networks) mock_nwinfo.assert_called_once_with( self.context, self.instance) mock_unplug.assert_called_once_with( self.instance, mock_nwinfo.return_value) mock_deallocate_network.assert_called_once_with( self.context, self.instance, self.requested_networks) mock_save.assert_called_once_with() self.assertEqual( 'False', self.instance.system_metadata['network_allocated']) @mock.patch('nova.compute.manager.LOG') def test__cleanup_allocated_networks__error(self, mock_log): with test.nested( mock.patch.object( self.compute.network_api, 'get_instance_nw_info', side_effect=Exception('some neutron error') ), mock.patch.object(self.compute.driver, 'unplug_vifs'), ) as (mock_nwinfo, mock_unplug): 
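# The neutron error must not bubble up; the cleanup path only logs a # warning and skips unplugging VIFs.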
self.compute._cleanup_allocated_networks( self.context, self.instance, self.requested_networks) mock_nwinfo.assert_called_once_with(self.context, self.instance) self.assertEqual(1, mock_log.warning.call_count) self.assertIn( 'Failed to update network info cache', mock_log.warning.call_args[0][0], ) mock_unplug.assert_not_called() def test_deallocate_network_none_requested(self): # Tests that we don't deallocate networks if 'none' were # specifically requested. req_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id='none')]) with mock.patch.object(self.compute.network_api, 'deallocate_for_instance') as deallocate: self.compute._deallocate_network( self.context, mock.sentinel.instance, req_networks) self.assertFalse(deallocate.called) def test_deallocate_network_auto_requested_or_none_provided(self): # Tests that we deallocate networks if we were requested to # auto-allocate networks or requested_networks=None. req_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id='auto')]) for requested_networks in (req_networks, None): with mock.patch.object(self.compute.network_api, 'deallocate_for_instance') as deallocate: self.compute._deallocate_network( self.context, mock.sentinel.instance, requested_networks) deallocate.assert_called_once_with( self.context, mock.sentinel.instance, requested_networks=requested_networks) @mock.patch('nova.compute.manager.ComputeManager._deallocate_network') @mock.patch('nova.compute.manager.LOG.warning') def test_try_deallocate_network_retry_direct(self, warning_mock, deallocate_network_mock): """Tests that _try_deallocate_network will retry calling _deallocate_network on keystone ConnectFailure errors up to a limit. """ self.useFixture(service_fixture.SleepFixture()) deallocate_network_mock.side_effect = \ keystone_exception.connection.ConnectFailure req_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id='auto')]) instance = mock.MagicMock() ctxt = mock.MagicMock() self.assertRaises(keystone_exception.connection.ConnectFailure, self.compute._try_deallocate_network, ctxt, instance, req_networks) # should come to 3 retries and 1 default call , total 4 self.assertEqual(4, deallocate_network_mock.call_count) # And we should have logged a warning. warning_mock.assert_called() self.assertIn('Failed to deallocate network for instance; retrying.', warning_mock.call_args[0][0]) @mock.patch('nova.compute.manager.ComputeManager._deallocate_network') @mock.patch('nova.compute.manager.LOG.warning') def test_try_deallocate_network_no_retry(self, warning_mock, deallocate_network_mock): """Tests that _try_deallocate_network will not retry _deallocate_network for non-ConnectFailure errors. 
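The error is propagated to the caller after a single attempt.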
""" deallocate_network_mock.side_effect = test.TestingException('oops') req_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id='auto')]) instance = mock.MagicMock() ctxt = mock.MagicMock() self.assertRaises(test.TestingException, self.compute._try_deallocate_network, ctxt, instance, req_networks) deallocate_network_mock.assert_called_once_with( ctxt, instance, req_networks) warning_mock.assert_not_called() @mock.patch('nova.compute.utils.notify_about_instance_create') @mock.patch.object(manager.ComputeManager, '_instance_update') def test_launched_at_in_create_end_notification(self, mock_instance_update, mock_notify_instance_create): def fake_notify(*args, **kwargs): if args[2] == 'create.end': # Check that launched_at is set on the instance self.assertIsNotNone(args[1].launched_at) with test.nested( mock.patch.object(self.compute, '_update_scheduler_instance_info'), mock.patch.object(self.compute.driver, 'spawn'), mock.patch.object(self.compute, '_build_networks_for_instance', return_value=[]), mock.patch.object(self.instance, 'save'), mock.patch.object(self.compute, '_notify_about_instance_usage', side_effect=fake_notify) ) as (mock_upd, mock_spawn, mock_networks, mock_save, mock_notify): self.compute._build_and_run_instance(self.context, self.instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, self.filter_properties, self.accel_uuids) expected_call = mock.call(self.context, self.instance, 'create.end', extra_usage_info={'message': u'Success'}, network_info=[]) create_end_call = mock_notify.call_args_list[ mock_notify.call_count - 1] self.assertEqual(expected_call, create_end_call) mock_notify_instance_create.assert_has_calls([ mock.call(self.context, self.instance, 'fake-mini', phase='start', bdms=[]), mock.call(self.context, self.instance, 'fake-mini', phase='end', bdms=[])]) def test_access_ip_set_when_instance_set_to_active(self): self.flags(default_access_ip_network_name='test1') instance = fake_instance.fake_db_instance() @mock.patch.object(db, 'instance_update_and_get_original', return_value=({}, instance)) @mock.patch.object(self.compute.driver, 'spawn') @mock.patch.object(self.compute, '_build_networks_for_instance', return_value=fake_network.fake_get_instance_nw_info(self)) @mock.patch.object(db, 'instance_extra_update_by_uuid') @mock.patch.object(self.compute, '_notify_about_instance_usage') def _check_access_ip(mock_notify, mock_extra, mock_networks, mock_spawn, mock_db_update): self.compute._build_and_run_instance(self.context, self.instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, self.filter_properties, self.accel_uuids) updates = {'vm_state': u'active', 'access_ip_v6': netaddr.IPAddress('2001:db8:0:1:dcad:beff:feef:1'), 'access_ip_v4': netaddr.IPAddress('192.168.1.100'), 'power_state': 0, 'task_state': None, 'launched_at': mock.ANY, 'expected_task_state': 'spawning'} expected_call = mock.call(self.context, self.instance.uuid, updates, columns_to_join=['metadata', 'system_metadata', 'info_cache', 'tags']) last_update_call = mock_db_update.call_args_list[ mock_db_update.call_count - 1] self.assertEqual(expected_call, last_update_call) _check_access_ip() @mock.patch.object(manager.ComputeManager, '_instance_update') def test_create_error_on_instance_delete(self, mock_instance_update): def fake_notify(*args, **kwargs): if args[2] == 
'create.error': # Check that launched_at is set on the instance self.assertIsNotNone(args[1].launched_at) exc = exception.InstanceNotFound(instance_id='') with test.nested( mock.patch.object(self.compute.driver, 'spawn'), mock.patch.object(self.compute, '_build_networks_for_instance', return_value=[]), mock.patch.object(self.instance, 'save', side_effect=[None, None, None, exc]), mock.patch.object(self.compute, '_notify_about_instance_usage', side_effect=fake_notify) ) as (mock_spawn, mock_networks, mock_save, mock_notify): self.assertRaises(exception.InstanceNotFound, self.compute._build_and_run_instance, self.context, self.instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, self.filter_properties, self.accel_uuids) expected_call = mock.call(self.context, self.instance, 'create.error', fault=exc) create_error_call = mock_notify.call_args_list[ mock_notify.call_count - 1] self.assertEqual(expected_call, create_error_call) def test_build_with_resource_request_in_the_request_spec(self): request_spec = objects.RequestSpec( requested_resources=[ objects.RequestGroup( requester_id=uuids.port1, provider_uuids=[uuids.rp1])]) with test.nested( mock.patch.object(self.compute.driver, 'spawn'), mock.patch.object( self.compute, '_build_networks_for_instance', return_value=[]), mock.patch.object(self.instance, 'save'), ) as (mock_spawn, mock_networks, mock_save): self.compute._build_and_run_instance( self.context, self.instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, self.filter_properties, request_spec, self.accel_uuids) mock_networks.assert_called_once_with( self.context, self.instance, self.requested_networks, self.security_groups, {uuids.port1: [uuids.rp1]}) def test_build_with_resource_request_sriov_port(self): request_spec = objects.RequestSpec( requested_resources=[ objects.RequestGroup( requester_id=uuids.port1, provider_uuids=[uuids.rp1])]) # NOTE(gibi): the first request will not match to any request group # this is the case when the request is not coming from a Neutron port # but from flavor or when the instance is old enough that the # requester_id field is not filled. # The second request will match with the request group in the request # spec and will trigger an update on that pci request. # The third request is coming from a Neutron port which doesn't have # resource request and therefore no matching request group exists in # the request spec. self.instance.pci_requests = objects.InstancePCIRequests(requests=[ objects.InstancePCIRequest(), objects.InstancePCIRequest( requester_id=uuids.port1, spec=[{'vendor_id': '1377', 'product_id': '0047'}]), objects.InstancePCIRequest(requester_id=uuids.port2), ]) with test.nested( mock.patch.object(self.compute.driver, 'spawn'), mock.patch.object(self.compute, '_build_networks_for_instance', return_value=[]), mock.patch.object(self.instance, 'save'), mock.patch('nova.scheduler.client.report.' 
'SchedulerReportClient._get_resource_provider'), ) as (mock_spawn, mock_networks, mock_save, mock_get_rp): mock_get_rp.return_value = { 'uuid': uuids.rp1, 'name': 'compute1:sriov-agent:ens3' } self.compute._build_and_run_instance( self.context, self.instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, self.filter_properties, request_spec, self.accel_uuids) mock_networks.assert_called_once_with( self.context, self.instance, self.requested_networks, self.security_groups, {uuids.port1: [uuids.rp1]}) mock_get_rp.assert_called_once_with(self.context, uuids.rp1) # As the second pci request matched with the request group from the # request spec, that pci request is extended with the # parent_ifname calculated from the corresponding RP name. self.assertEqual( [{'parent_ifname': 'ens3', 'vendor_id': '1377', 'product_id': '0047'}], self.instance.pci_requests.requests[1].spec) # the rest of the pci requests are unchanged self.assertNotIn('spec', self.instance.pci_requests.requests[0]) self.assertNotIn('spec', self.instance.pci_requests.requests[2]) def test_build_with_resource_request_sriov_rp_not_found(self): request_spec = objects.RequestSpec( requested_resources=[ objects.RequestGroup( requester_id=uuids.port1, provider_uuids=[uuids.rp1])]) self.instance.pci_requests = objects.InstancePCIRequests(requests=[ objects.InstancePCIRequest(requester_id=uuids.port1)]) with mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_resource_provider') as (mock_get_rp): mock_get_rp.return_value = None self.assertRaises( exception.ResourceProviderNotFound, self.compute._build_and_run_instance, self.context, self.instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, self.filter_properties, request_spec, self.accel_uuids) def test_build_with_resource_request_sriov_rp_wrongly_formatted_name(self): request_spec = objects.RequestSpec( requested_resources=[ objects.RequestGroup( requester_id=uuids.port1, provider_uuids=[uuids.rp1])]) self.instance.pci_requests = objects.InstancePCIRequests(requests=[ objects.InstancePCIRequest(requester_id=uuids.port1)]) with mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
'_get_resource_provider') as (mock_get_rp): mock_get_rp.return_value = { 'uuid': uuids.rp1, 'name': 'my-awesome-rp' } self.assertRaises( exception.BuildAbortException, self.compute._build_and_run_instance, self.context, self.instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, self.filter_properties, request_spec, self.accel_uuids) def test_build_with_resource_request_more_than_one_providers(self): request_spec = objects.RequestSpec( requested_resources=[ objects.RequestGroup( requester_id=uuids.port1, provider_uuids=[uuids.rp1, uuids.rp2])]) self.instance.pci_requests = objects.InstancePCIRequests(requests=[ objects.InstancePCIRequest(requester_id=uuids.port1)]) self.assertRaises( exception.BuildAbortException, self.compute._build_and_run_instance, self.context, self.instance, self.image, self.injected_files, self.admin_pass, self.requested_networks, self.security_groups, self.block_device_mapping, self.node, self.limits, self.filter_properties, request_spec, self.accel_uuids) class ComputeManagerErrorsOutMigrationTestCase(test.NoDBTestCase): def setUp(self): super(ComputeManagerErrorsOutMigrationTestCase, self).setUp() self.context = context.RequestContext(fakes.FAKE_USER_ID, fakes.FAKE_PROJECT_ID) self.instance = fake_instance.fake_instance_obj(self.context) self.migration = objects.Migration() self.migration.instance_uuid = self.instance.uuid self.migration.status = 'migrating' self.migration.id = 0 @mock.patch.object(objects.Migration, 'save') def test_decorator(self, mock_save): # Tests that errors_out_migration decorator in compute manager sets # migration status to 'error' when an exception is raised from # decorated method @manager.errors_out_migration def fake_function(self, context, instance, migration): raise test.TestingException() self.assertRaises(test.TestingException, fake_function, self, self.context, self.instance, self.migration) self.assertEqual('error', self.migration.status) mock_save.assert_called_once_with() @mock.patch.object(objects.Migration, 'save') def test_contextmanager(self, mock_save): # Tests that errors_out_migration_ctxt context manager in compute # manager sets migration status to 'error' when an exception is raised # from decorated method def test_function(): with manager.errors_out_migration_ctxt(self.migration): raise test.TestingException() self.assertRaises(test.TestingException, test_function) self.assertEqual('error', self.migration.status) mock_save.assert_called_once_with() @ddt.ddt class ComputeManagerMigrationTestCase(test.NoDBTestCase, fake_resource_tracker.RTMockMixin): class TestResizeError(Exception): pass def setUp(self): super(ComputeManagerMigrationTestCase, self).setUp() fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) self.flags(compute_driver='fake.SameHostColdMigrateDriver') self.compute = manager.ComputeManager() self.context = context.RequestContext(fakes.FAKE_USER_ID, fakes.FAKE_PROJECT_ID) self.image = {} self.instance = fake_instance.fake_instance_obj(self.context, vm_state=vm_states.ACTIVE, expected_attrs=['metadata', 'system_metadata', 'info_cache']) self.migration = objects.Migration( context=self.context.elevated(), id=1, uuid=uuids.migration_uuid, instance_uuid=self.instance.uuid, new_instance_type_id=7, dest_compute='dest_compute', dest_node='dest_node', dest_host=None, source_compute='source_compute', source_node='source_node', status='migrating') self.migration.save = mock.MagicMock() 
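# Run spawned tasks synchronously and stub instance event reporting so # the migration tests below behave deterministically.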
self.useFixture(fixtures.SpawnIsSynchronousFixture()) self.useFixture(fixtures.EventReporterStub()) @contextlib.contextmanager def _mock_finish_resize(self): with test.nested( mock.patch.object(self.compute, '_finish_resize'), mock.patch.object(db, 'instance_fault_create'), mock.patch.object(self.compute, '_update_resource_tracker'), mock.patch.object(self.instance, 'save'), mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') ) as (_finish_resize, fault_create, instance_update, instance_save, get_bdm): fault_create.return_value = ( test_instance_fault.fake_faults['fake-uuid'][0]) yield _finish_resize def test_finish_resize_failure(self): self.migration.status = 'post-migrating' with self._mock_finish_resize() as _finish_resize: _finish_resize.side_effect = self.TestResizeError self.assertRaises( self.TestResizeError, self.compute.finish_resize, context=self.context, disk_info=[], image=self.image, instance=self.instance, migration=self.migration, request_spec=objects.RequestSpec() ) # Assert that we set the migration to an error state self.assertEqual("error", self.migration.status) @mock.patch('nova.compute.manager.ComputeManager.' '_notify_about_instance_usage') def test_finish_resize_notify_failure(self, notify): self.migration.status = 'post-migrating' with self._mock_finish_resize(): notify.side_effect = self.TestResizeError self.assertRaises( self.TestResizeError, self.compute.finish_resize, context=self.context, disk_info=[], image=self.image, instance=self.instance, migration=self.migration, request_spec=objects.RequestSpec() ) # Assert that we set the migration to an error state self.assertEqual("error", self.migration.status) @contextlib.contextmanager def _mock_resize_instance(self): with test.nested( mock.patch.object(self.compute.driver, 'migrate_disk_and_power_off'), mock.patch.object(db, 'instance_fault_create'), mock.patch.object(self.compute, '_update_resource_tracker'), mock.patch.object(self.compute, 'network_api'), mock.patch.object(self.instance, 'save'), mock.patch.object(self.compute, '_notify_about_instance_usage'), mock.patch.object(self.compute, '_get_instance_block_device_info'), mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid'), mock.patch.object(objects.Flavor, 'get_by_id'), mock.patch.object(self.compute, '_terminate_volume_connections'), mock.patch.object(self.compute, 'compute_rpcapi'), ) as ( migrate_disk_and_power_off, fault_create, instance_update, network_api, save_inst, notify, vol_block_info, bdm, flavor, terminate_volume_connections, compute_rpcapi ): fault_create.return_value = ( test_instance_fault.fake_faults['fake-uuid'][0]) yield (migrate_disk_and_power_off, notify) def test_resize_instance_failure(self): with self._mock_resize_instance() as ( migrate_disk_and_power_off, notify): migrate_disk_and_power_off.side_effect = self.TestResizeError self.assertRaises( self.TestResizeError, self.compute.resize_instance, context=self.context, instance=self.instance, image=self.image, migration=self.migration, instance_type='type', clean_shutdown=True, request_spec=objects.RequestSpec()) # Assert that we set the migration to an error state self.assertEqual("error", self.migration.status) def test_resize_instance_fail_rollback_stays_stopped(self): """Tests that when the driver's migrate_disk_and_power_off method raises InstanceFaultRollback that the instance vm_state is preserved rather than reset to ACTIVE which would be wrong if resizing a STOPPED server. 
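The wrapped inner exception (ResizeError) is what is re-raised to the caller.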
""" with self._mock_resize_instance() as ( migrate_disk_and_power_off, notify): migrate_disk_and_power_off.side_effect = \ exception.InstanceFaultRollback( exception.ResizeError(reason='unable to resize disk down')) self.instance.vm_state = vm_states.STOPPED self.assertRaises( exception.ResizeError, self.compute.resize_instance, context=self.context, instance=self.instance, image=self.image, migration=self.migration, instance_type='type', clean_shutdown=True, request_spec=objects.RequestSpec()) # Assert the instance vm_state was unchanged. self.assertEqual(vm_states.STOPPED, self.instance.vm_state) def test_resize_instance_notify_failure(self): # Raise an exception sending the end notification, which is after we # cast the migration to the destination host def fake_notify(context, instance, event, network_info=None): if event == 'resize.end': raise self.TestResizeError() with self._mock_resize_instance() as ( migrate_disk_and_power_off, notify_about_instance_action): notify_about_instance_action.side_effect = fake_notify self.assertRaises( self.TestResizeError, self.compute.resize_instance, context=self.context, instance=self.instance, image=self.image, migration=self.migration, instance_type='type', clean_shutdown=True, request_spec=objects.RequestSpec()) # Assert that we did not set the migration to an error state self.assertEqual('post-migrating', self.migration.status) def _test_revert_resize_instance_destroy_disks(self, is_shared=False): # This test asserts that _is_instance_storage_shared() is called from # revert_resize() and the return value is passed to driver.destroy(). # Otherwise we could regress this. @mock.patch('nova.compute.rpcapi.ComputeAPI.finish_revert_resize') @mock.patch.object(self.instance, 'revert_migration_context') @mock.patch.object(self.compute.network_api, 'get_instance_nw_info') @mock.patch.object(self.compute, '_is_instance_storage_shared') @mock.patch.object(self.compute, 'finish_revert_resize') @mock.patch.object(self.compute, '_instance_update') @mock.patch.object(self.compute.driver, 'destroy') @mock.patch.object(self.compute.network_api, 'setup_networks_on_host') @mock.patch.object(self.compute.network_api, 'migrate_instance_start') @mock.patch.object(compute_utils, 'notify_usage_exists') @mock.patch.object(self.migration, 'save') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') def do_test(get_by_instance_uuid, migration_save, notify_usage_exists, migrate_instance_start, setup_networks_on_host, destroy, _instance_update, finish_revert_resize, _is_instance_storage_shared, get_instance_nw_info, revert_migration_context, mock_finish_revert): self._mock_rt() # NOTE(danms): Before a revert, the instance is "on" # the destination host/node self.migration.uuid = uuids.migration self.migration.source_compute = 'src' self.migration.source_node = 'srcnode' self.migration.dest_compute = self.instance.host self.migration.dest_node = self.instance.node # Inform compute that instance uses non-shared or shared storage _is_instance_storage_shared.return_value = is_shared request_spec = objects.RequestSpec() self.compute.revert_resize(context=self.context, migration=self.migration, instance=self.instance, request_spec=request_spec) _is_instance_storage_shared.assert_called_once_with( self.context, self.instance, host=self.migration.source_compute) # If instance storage is shared, driver destroy method # should not destroy disks otherwise it should destroy disks. 
destroy.assert_called_once_with(self.context, self.instance, mock.ANY, mock.ANY, not is_shared) mock_finish_revert.assert_called_once_with( self.context, self.instance, self.migration, self.migration.source_compute, request_spec) do_test() def test_revert_resize_instance_destroy_disks_shared_storage(self): self._test_revert_resize_instance_destroy_disks(is_shared=True) def test_revert_resize_instance_destroy_disks_non_shared_storage(self): self._test_revert_resize_instance_destroy_disks(is_shared=False) def test_finish_revert_resize_network_calls_order(self): self.nw_info = None def _migrate_instance_finish( context, instance, migration, provider_mappings): # The migration.dest_compute is temporarily set to source_compute. self.assertEqual(migration.source_compute, migration.dest_compute) self.nw_info = 'nw_info' def _get_instance_nw_info(context, instance): return self.nw_info reportclient = self.compute.reportclient @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch.object(self.compute.driver, 'finish_revert_migration') @mock.patch.object(self.compute.network_api, 'get_instance_nw_info', side_effect=_get_instance_nw_info) @mock.patch.object(self.compute.network_api, 'migrate_instance_finish', side_effect=_migrate_instance_finish) @mock.patch.object(self.compute.network_api, 'setup_networks_on_host') @mock.patch.object(self.migration, 'save') @mock.patch.object(self.instance, 'save') @mock.patch.object(self.compute, '_set_instance_info') @mock.patch.object(db, 'instance_fault_create') @mock.patch.object(db, 'instance_extra_update_by_uuid') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(compute_utils, 'notify_about_instance_usage') def do_test(notify_about_instance_usage, get_by_instance_uuid, extra_update, fault_create, set_instance_info, instance_save, migration_save, setup_networks_on_host, migrate_instance_finish, get_instance_nw_info, finish_revert_migration, mock_get_cn): # Mock the resource tracker, but keep the report client self._mock_rt().reportclient = reportclient fault_create.return_value = ( test_instance_fault.fake_faults['fake-uuid'][0]) self.instance.migration_context = objects.MigrationContext() self.migration.uuid = uuids.migration self.migration.source_compute = self.instance['host'] self.migration.source_node = self.instance['host'] request_spec = objects.RequestSpec() self.compute.finish_revert_resize(context=self.context, migration=self.migration, instance=self.instance, request_spec=request_spec) finish_revert_migration.assert_called_with(self.context, self.instance, 'nw_info', self.migration, mock.ANY, mock.ANY) # Make sure the migration.dest_compute is not still set to the # source_compute value. self.assertNotEqual(self.migration.dest_compute, self.migration.source_compute) do_test() def test_finish_revert_resize_migration_context(self): request_spec = objects.RequestSpec() @mock.patch('nova.compute.resource_tracker.ResourceTracker.' 
'drop_move_claim_at_dest') @mock.patch('nova.compute.rpcapi.ComputeAPI.finish_revert_resize') @mock.patch.object(self.instance, 'revert_migration_context') @mock.patch.object(self.compute.network_api, 'get_instance_nw_info') @mock.patch.object(self.compute, '_is_instance_storage_shared') @mock.patch.object(self.compute, '_instance_update') @mock.patch.object(self.compute.driver, 'destroy') @mock.patch.object(self.compute.network_api, 'setup_networks_on_host') @mock.patch.object(self.compute.network_api, 'migrate_instance_start') @mock.patch.object(compute_utils, 'notify_usage_exists') @mock.patch.object(db, 'instance_extra_update_by_uuid') @mock.patch.object(self.migration, 'save') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') def do_revert_resize(mock_get_by_instance_uuid, mock_migration_save, mock_extra_update, mock_notify_usage_exists, mock_migrate_instance_start, mock_setup_networks_on_host, mock_destroy, mock_instance_update, mock_is_instance_storage_shared, mock_get_instance_nw_info, mock_revert_migration_context, mock_finish_revert, mock_drop_move_claim): self.compute.rt.tracked_migrations[self.instance['uuid']] = ( self.migration, None) self.instance.migration_context = objects.MigrationContext() self.migration.source_compute = self.instance['host'] self.migration.source_node = self.instance['node'] self.compute.revert_resize(context=self.context, migration=self.migration, instance=self.instance, request_spec=request_spec) mock_drop_move_claim.assert_called_once_with( self.context, self.instance, self.migration) # Three fake BDMs: # 1. volume BDM with an attachment_id which will be updated/completed # 2. volume BDM without an attachment_id so it's not updated # 3. non-volume BDM so it's not updated fake_bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping(destination_type='volume', attachment_id=uuids.attachment_id, device_name='/dev/vdb'), objects.BlockDeviceMapping(destination_type='volume', attachment_id=None), objects.BlockDeviceMapping(destination_type='local') ]) @mock.patch('nova.objects.Service.get_minimum_version', return_value=22) @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch.object(self.compute, "_notify_about_instance_usage") @mock.patch.object(compute_utils, 'notify_about_instance_action') @mock.patch.object(self.compute, "_set_instance_info") @mock.patch.object(self.instance, 'save') @mock.patch.object(self.migration, 'save') @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') @mock.patch.object(db, 'instance_fault_create') @mock.patch.object(db, 'instance_extra_update_by_uuid') @mock.patch.object(self.compute.network_api, 'setup_networks_on_host') @mock.patch.object(self.compute.network_api, 'migrate_instance_finish') @mock.patch.object(self.compute.network_api, 'get_instance_nw_info') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid', return_value=fake_bdms) @mock.patch.object(self.compute, '_get_instance_block_device_info') @mock.patch.object(self.compute.driver, 'get_volume_connector') @mock.patch.object(self.compute.volume_api, 'attachment_update') @mock.patch.object(self.compute.volume_api, 'attachment_complete') def do_finish_revert_resize(mock_attachment_complete, mock_attachment_update, mock_get_vol_connector, mock_get_blk, mock_get_by_instance_uuid, mock_get_instance_nw_info, mock_instance_finish, mock_setup_network, mock_extra_update, mock_fault_create, mock_fault_from_exc, mock_mig_save, mock_inst_save, mock_set, 
mock_notify_about_instance_action, mock_notify, mock_get_cn, mock_version): self.migration.uuid = uuids.migration self.compute.finish_revert_resize(context=self.context, instance=self.instance, migration=self.migration, request_spec=request_spec) self.assertIsNone(self.instance.migration_context) # We should only have one attachment_update/complete call for the # volume BDM that had an attachment. mock_attachment_update.assert_called_once_with( self.context, uuids.attachment_id, mock_get_vol_connector.return_value, '/dev/vdb') mock_attachment_complete.assert_called_once_with( self.context, uuids.attachment_id) do_revert_resize() do_finish_revert_resize() @mock.patch.object(objects.Instance, 'drop_migration_context') @mock.patch('nova.compute.manager.ComputeManager.' '_finish_revert_resize_network_migrate_finish') @mock.patch('nova.scheduler.utils.' 'fill_provider_mapping_based_on_allocation') @mock.patch('nova.compute.manager.ComputeManager._revert_allocation') @mock.patch.object(objects.Instance, 'save') @mock.patch('nova.compute.manager.ComputeManager.' '_set_instance_info') @mock.patch('nova.compute.manager.ComputeManager.' '_notify_about_instance_usage') @mock.patch.object(compute_utils, 'notify_about_instance_action') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') def test_finish_revert_resize_recalc_group_rp_mapping( self, mock_get_bdms, mock_notify_action, mock_notify_usage, mock_set_instance_info, mock_instance_save, mock_revert_allocation, mock_fill_provider_mapping, mock_network_migrate_finish, mock_drop_migration_context): mock_get_bdms.return_value = objects.BlockDeviceMappingList() request_spec = objects.RequestSpec() mock_revert_allocation.return_value = mock.sentinel.allocation with mock.patch.object( self.compute.network_api, 'get_instance_nw_info'): self.compute.finish_revert_resize( self.context, self.instance, self.migration, request_spec) mock_fill_provider_mapping.assert_called_once_with( self.context, self.compute.reportclient, request_spec, mock.sentinel.allocation) @mock.patch.object(objects.Instance, 'drop_migration_context') @mock.patch('nova.compute.manager.ComputeManager.' '_finish_revert_resize_network_migrate_finish') @mock.patch('nova.scheduler.utils.' 'fill_provider_mapping_based_on_allocation') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocations_for_consumer') @mock.patch('nova.compute.manager.ComputeManager._revert_allocation') @mock.patch.object(objects.Instance, 'save') @mock.patch('nova.compute.manager.ComputeManager.' '_set_instance_info') @mock.patch('nova.compute.manager.ComputeManager.' 
'_notify_about_instance_usage') @mock.patch.object(compute_utils, 'notify_about_instance_action') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') def test_finish_revert_resize_recalc_group_rp_mapping_missing_request_spec( self, mock_get_bdms, mock_notify_action, mock_notify_usage, mock_set_instance_info, mock_instance_save, mock_revert_allocation, mock_get_allocations, mock_fill_provider_mapping, mock_network_migrate_finish, mock_drop_migration_context): mock_get_bdms.return_value = objects.BlockDeviceMappingList() mock_get_allocations.return_value = mock.sentinel.allocation with mock.patch.object( self.compute.network_api, 'get_instance_nw_info'): # This is the case when the compute is pinned to use older than # RPC version 5.2 self.compute.finish_revert_resize( self.context, self.instance, self.migration, request_spec=None) mock_get_allocations.assert_not_called() mock_fill_provider_mapping.assert_not_called() mock_network_migrate_finish.assert_called_once_with( self.context, self.instance, self.migration, None) def test_confirm_resize_deletes_allocations_and_update_scheduler(self): @mock.patch.object(self.compute, '_delete_scheduler_instance_info') @mock.patch('nova.objects.Instance.get_by_uuid') @mock.patch('nova.objects.Migration.get_by_id') @mock.patch.object(self.migration, 'save') @mock.patch.object(self.compute, '_notify_about_instance_usage') @mock.patch.object(self.compute, 'network_api') @mock.patch.object(self.compute.driver, 'confirm_migration') @mock.patch.object(self.compute, '_delete_allocation_after_move') @mock.patch.object(self.instance, 'drop_migration_context') @mock.patch.object(self.instance, 'save') def do_confirm_resize(mock_save, mock_drop, mock_delete, mock_confirm, mock_nwapi, mock_notify, mock_mig_save, mock_mig_get, mock_inst_get, mock_delete_scheduler_info): self._mock_rt() self.instance.migration_context = objects.MigrationContext( new_pci_devices=None, old_pci_devices=None) self.migration.source_compute = self.instance['host'] self.migration.source_node = self.instance['node'] self.migration.status = 'confirming' mock_mig_get.return_value = self.migration mock_inst_get.return_value = self.instance self.compute.confirm_resize(self.context, self.instance, self.migration) mock_delete.assert_called_once_with(self.context, self.instance, self.migration) mock_save.assert_has_calls([ mock.call( expected_task_state=[ None, task_states.DELETING, task_states.SOFT_DELETING, ], ), mock.call(), ]) mock_delete_scheduler_info.assert_called_once_with( self.context, self.instance.uuid) do_confirm_resize() @mock.patch('nova.objects.MigrationContext.get_pci_mapping_for_migration') @mock.patch('nova.compute.utils.add_instance_fault_from_exc') @mock.patch('nova.objects.Migration.get_by_id') @mock.patch('nova.objects.Instance.get_by_uuid') @mock.patch('nova.compute.utils.notify_about_instance_usage') @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch('nova.objects.Instance.save') def test_confirm_resize_driver_confirm_migration_fails( self, instance_save, notify_action, notify_usage, instance_get_by_uuid, migration_get_by_id, add_fault, get_mapping): """Tests the scenario that driver.confirm_migration raises some error to make sure the error is properly handled, like the instance and migration status is set to 'error'. 
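Allocations held by the migration are still cleaned up even though the driver call failed.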
""" self.migration.status = 'confirming' migration_get_by_id.return_value = self.migration instance_get_by_uuid.return_value = self.instance self.instance.migration_context = objects.MigrationContext() error = exception.HypervisorUnavailable( host=self.migration.source_compute) with test.nested( mock.patch.object(self.compute, 'network_api'), mock.patch.object(self.compute.driver, 'confirm_migration', side_effect=error), mock.patch.object(self.compute, '_delete_allocation_after_move'), mock.patch.object(self.compute, '_get_updated_nw_info_with_pci_mapping') ) as ( network_api, confirm_migration, delete_allocation, pci_mapping ): self.assertRaises(exception.HypervisorUnavailable, self.compute.confirm_resize, self.context, self.instance, self.migration) # Make sure the instance is in ERROR status. self.assertEqual(vm_states.ERROR, self.instance.vm_state) # Make sure the migration is in error status. self.assertEqual('error', self.migration.status) # Instance.save is called twice, once to clear the resize metadata # and once to set the instance to ERROR status. self.assertEqual(2, instance_save.call_count) # The migration.status should have been saved. self.migration.save.assert_called_once_with() # Allocations should always be cleaned up even if cleaning up the # source host fails. delete_allocation.assert_called_once_with( self.context, self.instance, self.migration) # Assert other mocks we care less about. notify_usage.assert_called_once() notify_action.assert_called_once() add_fault.assert_called_once() confirm_migration.assert_called_once() network_api.setup_networks_on_host.assert_called_once() instance_get_by_uuid.assert_called_once() migration_get_by_id.assert_called_once() def test_confirm_resize_calls_virt_driver_with_old_pci(self): @mock.patch.object(self.migration, 'save') @mock.patch.object(self.compute, '_notify_about_instance_usage') @mock.patch.object(self.compute, 'network_api') @mock.patch.object(self.compute.driver, 'confirm_migration') @mock.patch.object(self.compute, '_delete_allocation_after_move') @mock.patch.object(self.instance, 'drop_migration_context') @mock.patch.object(self.instance, 'save') def do_confirm_resize(mock_save, mock_drop, mock_delete, mock_confirm, mock_nwapi, mock_notify, mock_mig_save): # Mock virt driver confirm_resize() to save the provided # network_info, we will check it later. updated_nw_info = [] def driver_confirm_resize(*args, **kwargs): if 'network_info' in kwargs: nw_info = kwargs['network_info'] else: nw_info = args[3] updated_nw_info.extend(nw_info) mock_confirm.side_effect = driver_confirm_resize self._mock_rt() old_devs = objects.PciDeviceList( objects=[objects.PciDevice( address='0000:04:00.2', request_id=uuids.pcidev1)]) new_devs = objects.PciDeviceList( objects=[objects.PciDevice( address='0000:05:00.3', request_id=uuids.pcidev1)]) self.instance.migration_context = objects.MigrationContext( new_pci_devices=new_devs, old_pci_devices=old_devs) # Create VIF with new_devs[0] PCI address. nw_info = network_model.NetworkInfo([ network_model.VIF( id=uuids.port1, vnic_type=network_model.VNIC_TYPE_DIRECT, profile={'pci_slot': new_devs[0].address})]) mock_nwapi.get_instance_nw_info.return_value = nw_info self.migration.source_compute = self.instance['host'] self.migration.source_node = self.instance['node'] self.compute._confirm_resize(self.context, self.instance, self.migration) # Assert virt driver confirm_migration() was called # with the updated nw_info object. 
self.assertEqual(old_devs[0].address, updated_nw_info[0]['profile']['pci_slot']) do_confirm_resize() def test_delete_allocation_after_move_confirm_by_migration(self): with mock.patch.object(self.compute, 'reportclient') as mock_report: mock_report.delete_allocation_for_instance.return_value = True self.compute._delete_allocation_after_move(self.context, self.instance, self.migration) mock_report.delete_allocation_for_instance.assert_called_once_with( self.context, self.migration.uuid, consumer_type='migration') def test_revert_allocation_allocation_exists(self): """New-style migration-based allocation revert.""" @mock.patch('nova.compute.manager.LOG.info') @mock.patch.object(self.compute, 'reportclient') def doit(mock_report, mock_info): a = { uuids.node: {'resources': {'DISK_GB': 1}}, uuids.child_rp: {'resources': {'CUSTOM_FOO': 1}} } mock_report.get_allocations_for_consumer.return_value = a self.migration.uuid = uuids.migration r = self.compute._revert_allocation(mock.sentinel.ctx, self.instance, self.migration) self.assertTrue(r) mock_report.move_allocations.assert_called_once_with( mock.sentinel.ctx, self.migration.uuid, self.instance.uuid) mock_info.assert_called_once_with( 'Swapping old allocation on %(rp_uuids)s held by migration ' '%(mig)s for instance', {'rp_uuids': a.keys(), 'mig': self.migration.uuid}, instance=self.instance) doit() def test_revert_allocation_allocation_not_exist(self): """Test that we don't delete allocs for migration if none found.""" @mock.patch('nova.compute.manager.LOG.error') @mock.patch.object(self.compute, 'reportclient') def doit(mock_report, mock_error): mock_report.get_allocations_for_consumer.return_value = {} self.migration.uuid = uuids.migration r = self.compute._revert_allocation(mock.sentinel.ctx, self.instance, self.migration) self.assertFalse(r) self.assertFalse(mock_report.move_allocations.called) mock_error.assert_called_once_with( 'Did not find resource allocations for migration ' '%s on source node %s. 
Unable to revert source node ' 'allocations back to the instance.', self.migration.uuid, self.migration.source_node, instance=self.instance) doit() def test_consoles_enabled(self): self.flags(enabled=False, group='vnc') self.flags(enabled=False, group='spice') self.flags(enabled=False, group='rdp') self.flags(enabled=False, group='serial_console') self.assertFalse(self.compute._consoles_enabled()) self.flags(enabled=True, group='vnc') self.assertTrue(self.compute._consoles_enabled()) self.flags(enabled=False, group='vnc') for console in ['spice', 'rdp', 'serial_console']: self.flags(enabled=True, group=console) self.assertTrue(self.compute._consoles_enabled()) self.flags(enabled=False, group=console) @mock.patch('nova.context.RequestContext.elevated') @mock.patch('nova.objects.ConsoleAuthToken') def test_get_mks_console(self, mock_console_obj, mock_elevated): self.flags(enabled=True, group='mks') instance = objects.Instance(uuid=uuids.instance) with mock.patch.object(self.compute.driver, 'get_mks_console') as mock_get_console: console = self.compute.get_mks_console(self.context, 'webmks', instance) driver_console = mock_get_console.return_value mock_console_obj.assert_called_once_with( context=mock_elevated.return_value, console_type='webmks', host=driver_console.host, port=driver_console.port, internal_access_path=driver_console.internal_access_path, instance_uuid=instance.uuid, access_url_base=CONF.mks.mksproxy_base_url) mock_console_obj.return_value.authorize.assert_called_once_with( CONF.consoleauth.token_ttl) self.assertEqual(driver_console.get_connection_info.return_value, console) @mock.patch('nova.context.RequestContext.elevated') @mock.patch('nova.objects.ConsoleAuthToken') def test_get_serial_console(self, mock_console_obj, mock_elevated): self.flags(enabled=True, group='serial_console') instance = objects.Instance(uuid=uuids.instance) with mock.patch.object(self.compute.driver, 'get_serial_console') as mock_get_console: console = self.compute.get_serial_console(self.context, 'serial', instance) driver_console = mock_get_console.return_value mock_console_obj.assert_called_once_with( context=mock_elevated.return_value, console_type='serial', host=driver_console.host, port=driver_console.port, internal_access_path=driver_console.internal_access_path, instance_uuid=instance.uuid, access_url_base=CONF.serial_console.base_url) mock_console_obj.return_value.authorize.assert_called_once_with( CONF.consoleauth.token_ttl) self.assertEqual(driver_console.get_connection_info.return_value, console) @mock.patch('nova.compute.manager.ComputeManager.' 
'_do_live_migration') def _test_max_concurrent_live(self, mock_lm): @mock.patch('nova.objects.Migration.save') def _do_it(mock_mig_save): instance = objects.Instance(uuid=uuids.fake) migration = objects.Migration(uuid=uuids.migration) self.compute.live_migration(self.context, mock.sentinel.dest, instance, mock.sentinel.block_migration, migration, mock.sentinel.migrate_data) self.assertEqual('queued', migration.status) migration.save.assert_called_once_with() with mock.patch.object(self.compute, '_live_migration_executor') as mock_exc: for i in (1, 2, 3): _do_it() self.assertEqual(3, mock_exc.submit.call_count) def test_max_concurrent_live_limited(self): self.flags(max_concurrent_live_migrations=2) self._test_max_concurrent_live() def test_max_concurrent_live_unlimited(self): self.flags(max_concurrent_live_migrations=0) self._test_max_concurrent_live() @mock.patch('futurist.GreenThreadPoolExecutor') def test_max_concurrent_live_semaphore_limited(self, mock_executor): self.flags(max_concurrent_live_migrations=123) manager.ComputeManager() mock_executor.assert_called_once_with(max_workers=123) @mock.patch('futurist.GreenThreadPoolExecutor') def test_max_concurrent_live_semaphore_unlimited(self, mock_executor): self.flags(max_concurrent_live_migrations=0) manager.ComputeManager() mock_executor.assert_called_once_with() @mock.patch('nova.objects.InstanceGroup.get_by_instance_uuid', mock.Mock( side_effect=exception.InstanceGroupNotFound(group_uuid=''))) def test_pre_live_migration_cinder_v3_api(self): # This tests that pre_live_migration with a bdm with an # attachment_id, will create a new attachment and update # attachment_id's in the bdm. compute = manager.ComputeManager() instance = fake_instance.fake_instance_obj(self.context, uuid=uuids.instance) volume_id = uuids.volume vol_bdm = fake_block_device.fake_bdm_object( self.context, {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': volume_id, 'device_name': '/dev/vdb', 'instance_uuid': instance.uuid, 'connection_info': '{"test": "test"}'}) # attach_create should not be called on this image_bdm = fake_block_device.fake_bdm_object( self.context, {'source_type': 'image', 'destination_type': 'local', 'volume_id': volume_id, 'device_name': '/dev/vda', 'instance_uuid': instance.uuid, 'connection_info': '{"test": "test"}'}) orig_attachment_id = uuids.attachment1 vol_bdm.attachment_id = orig_attachment_id new_attachment_id = uuids.attachment2 image_bdm.attachment_id = uuids.attachment3 migrate_data = migrate_data_obj.LiveMigrateData() migrate_data.old_vol_attachment_ids = {} @mock.patch.object(compute_utils, 'notify_about_instance_action') @mock.patch.object(compute.volume_api, 'attachment_complete') @mock.patch.object(vol_bdm, 'save') @mock.patch.object(compute, '_notify_about_instance_usage') @mock.patch.object(compute, 'network_api') @mock.patch.object(compute.driver, 'pre_live_migration') @mock.patch.object(compute, '_get_instance_block_device_info') @mock.patch.object(compute_utils, 'is_volume_backed_instance') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(compute.volume_api, 'attachment_create') def _test(mock_attach, mock_get_bdms, mock_ivbi, mock_gibdi, mock_plm, mock_nwapi, mock_notify, mock_bdm_save, mock_attach_complete, mock_notify_about_inst): mock_get_bdms.return_value = [vol_bdm, image_bdm] mock_attach.return_value = {'id': new_attachment_id} mock_plm.return_value = migrate_data connector = compute.driver.get_volume_connector(instance) r = 
compute.pre_live_migration(self.context, instance, False, {}, migrate_data) mock_notify_about_inst.assert_has_calls([ mock.call(self.context, instance, 'fake-mini', action='live_migration_pre', phase='start', bdms=mock_get_bdms.return_value), mock.call(self.context, instance, 'fake-mini', action='live_migration_pre', phase='end', bdms=mock_get_bdms.return_value)]) self.assertIsInstance(r, migrate_data_obj.LiveMigrateData) self.assertIsInstance(mock_plm.call_args_list[0][0][5], migrate_data_obj.LiveMigrateData) mock_attach.assert_called_once_with( self.context, volume_id, instance.uuid, connector=connector, mountpoint=vol_bdm.device_name) self.assertEqual(vol_bdm.attachment_id, new_attachment_id) self.assertEqual(migrate_data.old_vol_attachment_ids[volume_id], orig_attachment_id) mock_bdm_save.assert_called_once_with() mock_attach_complete.assert_called_once_with(self.context, new_attachment_id) _test() @mock.patch('nova.objects.InstanceGroup.get_by_instance_uuid', mock.Mock( side_effect=exception.InstanceGroupNotFound(group_uuid=''))) def test_pre_live_migration_exception_cinder_v3_api(self): # The instance in this test has 2 attachments. The second attach_create # will throw an exception. This will test that the first attachment # is restored after the exception is thrown. compute = manager.ComputeManager() instance = fake_instance.fake_instance_obj(self.context, uuid=uuids.instance) volume1_id = uuids.volume1 vol1_bdm = fake_block_device.fake_bdm_object( self.context, {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': volume1_id, 'device_name': '/dev/vdb', 'instance_uuid': instance.uuid, 'connection_info': '{"test": "test"}'}) vol1_orig_attachment_id = uuids.attachment1 vol1_bdm.attachment_id = vol1_orig_attachment_id volume2_id = uuids.volume2 vol2_bdm = fake_block_device.fake_bdm_object( self.context, {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': volume2_id, 'device_name': '/dev/vdb', 'instance_uuid': instance.uuid, 'connection_info': '{"test": "test"}'}) vol2_orig_attachment_id = uuids.attachment2 vol2_bdm.attachment_id = vol2_orig_attachment_id migrate_data = migrate_data_obj.LiveMigrateData() migrate_data.old_vol_attachment_ids = {} @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') @mock.patch.object(vol1_bdm, 'save') @mock.patch.object(compute, '_notify_about_instance_usage') @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch.object(compute, 'network_api') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(compute.volume_api, 'attachment_delete') @mock.patch.object(compute.volume_api, 'attachment_create') def _test(mock_attach_create, mock_attach_delete, mock_get_bdms, mock_nwapi, mock_ver_notify, mock_notify, mock_bdm_save, mock_exception): new_attachment_id = uuids.attachment3 mock_attach_create.side_effect = [{'id': new_attachment_id}, test.TestingException] mock_get_bdms.return_value = [vol1_bdm, vol2_bdm] self.assertRaises(test.TestingException, compute.pre_live_migration, self.context, instance, False, {}, migrate_data) self.assertEqual(vol1_orig_attachment_id, vol1_bdm.attachment_id) self.assertEqual(vol2_orig_attachment_id, vol2_bdm.attachment_id) self.assertEqual(mock_attach_create.call_count, 2) mock_attach_delete.assert_called_once_with(self.context, new_attachment_id) # Meta: ensure un-asserted mocks are still required for m in (mock_nwapi, mock_get_bdms, mock_ver_notify, mock_notify, mock_bdm_save, mock_exception): # NOTE(artom) This is different 
from assert_called() because # mock_calls contains the calls to a mock's method as well # (which is what we want for network_api.get_instance_nw_info # for example), whereas assert_called() only asserts # calls to the mock itself. self.assertGreater(len(m.mock_calls), 0) _test() @mock.patch('nova.objects.InstanceGroup.get_by_instance_uuid', mock.Mock( side_effect=exception.InstanceGroupNotFound(group_uuid=''))) def test_pre_live_migration_exceptions_delete_attachments(self): # The instance in this test has 2 attachments. The call to # driver.pre_live_migration will raise an exception. This will test # that the attachments are restored after the exception is thrown. compute = manager.ComputeManager() instance = fake_instance.fake_instance_obj(self.context, uuid=uuids.instance) vol1_bdm = fake_block_device.fake_bdm_object( self.context, {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': uuids.vol1, 'device_name': '/dev/vdb', 'instance_uuid': instance.uuid, 'connection_info': '{"test": "test"}'}) vol1_bdm.attachment_id = uuids.vol1_attach_orig vol2_bdm = fake_block_device.fake_bdm_object( self.context, {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': uuids.vol2, 'device_name': '/dev/vdc', 'instance_uuid': instance.uuid, 'connection_info': '{"test": "test"}'}) vol2_bdm.attachment_id = uuids.vol2_attach_orig migrate_data = migrate_data_obj.LiveMigrateData() migrate_data.old_vol_attachment_ids = {} @mock.patch.object(manager, 'compute_utils', autospec=True) @mock.patch.object(compute, 'network_api', autospec=True) @mock.patch.object(compute, 'volume_api', autospec=True) @mock.patch.object(objects.BlockDeviceMapping, 'save') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(compute.driver, 'pre_live_migration', autospec=True) def _test(mock_plm, mock_bdms_get, mock_bdm_save, mock_vol_api, mock_net_api, mock_compute_utils): mock_vol_api.attachment_create.side_effect = [ {'id': uuids.vol1_attach_new}, {'id': uuids.vol2_attach_new}] mock_bdms_get.return_value = [vol1_bdm, vol2_bdm] mock_plm.side_effect = test.TestingException self.assertRaises(test.TestingException, compute.pre_live_migration, self.context, instance, False, {}, migrate_data) self.assertEqual(2, mock_vol_api.attachment_create.call_count) # Assert BDMs have original attachments restored self.assertEqual(uuids.vol1_attach_orig, vol1_bdm.attachment_id) self.assertEqual(uuids.vol2_attach_orig, vol2_bdm.attachment_id) # Assert attachment cleanup self.assertEqual(2, mock_vol_api.attachment_delete.call_count) mock_vol_api.attachment_delete.assert_has_calls( [mock.call(self.context, uuids.vol1_attach_new), mock.call(self.context, uuids.vol2_attach_new)], any_order=True) # Meta: ensure un-asserted mocks are still required for m in (mock_net_api, mock_compute_utils): self.assertGreater(len(m.mock_calls), 0) _test() def test_get_neutron_events_for_live_migration_empty(self): """Tests the various ways that _get_neutron_events_for_live_migration will return an empty list. """ migration = mock.Mock() migration.is_same_host = lambda: False self.assertFalse(migration.is_same_host()) # 1. 
no timeout self.flags(vif_plugging_timeout=0) with mock.patch.object(self.instance, 'get_network_info') as nw_info: nw_info.return_value = network_model.NetworkInfo( [network_model.VIF(uuids.port1, details={ network_model.VIF_DETAILS_OVS_HYBRID_PLUG: True})]) self.assertTrue(nw_info.return_value[0].is_hybrid_plug_enabled()) self.assertEqual( [], self.compute._get_neutron_events_for_live_migration( self.instance)) # 2. no VIFs self.flags(vif_plugging_timeout=300) with mock.patch.object(self.instance, 'get_network_info') as nw_info: nw_info.return_value = network_model.NetworkInfo([]) self.assertEqual( [], self.compute._get_neutron_events_for_live_migration( self.instance)) # 3. no plug time events with mock.patch.object(self.instance, 'get_network_info') as nw_info: nw_info.return_value = network_model.NetworkInfo( [network_model.VIF( uuids.port1, details={ network_model.VIF_DETAILS_OVS_HYBRID_PLUG: False})]) self.assertFalse(nw_info.return_value[0].is_hybrid_plug_enabled()) self.assertEqual( [], self.compute._get_neutron_events_for_live_migration( self.instance)) @mock.patch('nova.compute.rpcapi.ComputeAPI.pre_live_migration') @mock.patch('nova.compute.manager.ComputeManager._post_live_migration') @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') def test_live_migration_wait_vif_plugged( self, mock_get_bdms, mock_post_live_mig, mock_pre_live_mig): """Tests the happy path of waiting for network-vif-plugged events from neutron when pre_live_migration returns a migrate_data object with wait_for_vif_plugged=True. """ migrate_data = objects.LibvirtLiveMigrateData( wait_for_vif_plugged=True) mock_get_bdms.return_value = objects.BlockDeviceMappingList(objects=[]) mock_pre_live_mig.return_value = migrate_data details = {network_model.VIF_DETAILS_OVS_HYBRID_PLUG: True} self.instance.info_cache = objects.InstanceInfoCache( network_info=network_model.NetworkInfo([ network_model.VIF(uuids.port1, details=details), network_model.VIF(uuids.port2, details=details) ])) self.compute._waiting_live_migrations[self.instance.uuid] = ( self.migration, mock.MagicMock() ) with mock.patch.object(self.compute.virtapi, 'wait_for_instance_event') as wait_for_event: self.compute._do_live_migration( self.context, 'dest-host', self.instance, None, self.migration, migrate_data) self.assertEqual(2, len(wait_for_event.call_args[0][1])) self.assertEqual(CONF.vif_plugging_timeout, wait_for_event.call_args[1]['deadline']) mock_pre_live_mig.assert_called_once_with( self.context, self.instance, None, None, 'dest-host', migrate_data) @mock.patch('nova.compute.rpcapi.ComputeAPI.pre_live_migration') @mock.patch('nova.compute.manager.ComputeManager._post_live_migration') @mock.patch('nova.compute.manager.LOG.debug') @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') def test_live_migration_wait_vif_plugged_old_dest_host( self, mock_get_bdms, mock_log_debug, mock_post_live_mig, mock_pre_live_mig): """Tests the scenario that the destination compute returns a migrate_data with no wait_for_vif_plugged set because the dest compute doesn't have that code yet. In this case, we default to legacy behavior of not waiting. 
""" migrate_data = objects.LibvirtLiveMigrateData() details = {network_model.VIF_DETAILS_OVS_HYBRID_PLUG: True} mock_get_bdms.return_value = objects.BlockDeviceMappingList(objects=[]) mock_pre_live_mig.return_value = migrate_data self.instance.info_cache = objects.InstanceInfoCache( network_info=network_model.NetworkInfo([ network_model.VIF(uuids.port1, details=details)])) self.compute._waiting_live_migrations[self.instance.uuid] = ( self.migration, mock.MagicMock() ) with mock.patch.object( self.compute.virtapi, 'wait_for_instance_event'): self.compute._do_live_migration( self.context, 'dest-host', self.instance, None, self.migration, migrate_data) # This isn't awesome, but we need a way to assert that we # short-circuit'ed the wait_for_instance_event context manager. self.assertEqual(2, mock_log_debug.call_count) self.assertIn('Not waiting for events after pre_live_migration', mock_log_debug.call_args_list[0][0][0]) # first call/arg @mock.patch('nova.compute.rpcapi.ComputeAPI.pre_live_migration') @mock.patch('nova.compute.manager.ComputeManager._rollback_live_migration') @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') def test_live_migration_wait_vif_plugged_vif_plug_error( self, mock_get_bdms, mock_rollback_live_mig, mock_pre_live_mig): """Tests the scenario where wait_for_instance_event fails with VirtualInterfacePlugException. """ migrate_data = objects.LibvirtLiveMigrateData( wait_for_vif_plugged=True) source_bdms = objects.BlockDeviceMappingList(objects=[]) mock_get_bdms.return_value = source_bdms mock_pre_live_mig.return_value = migrate_data self.instance.info_cache = objects.InstanceInfoCache( network_info=network_model.NetworkInfo([ network_model.VIF(uuids.port1)])) self.compute._waiting_live_migrations[self.instance.uuid] = ( self.migration, mock.MagicMock() ) with mock.patch.object( self.compute.virtapi, 'wait_for_instance_event') as wait_for_event: wait_for_event.return_value.__enter__.side_effect = ( exception.VirtualInterfacePlugException()) self.assertRaises( exception.VirtualInterfacePlugException, self.compute._do_live_migration, self.context, 'dest-host', self.instance, None, self.migration, migrate_data) self.assertEqual('error', self.migration.status) mock_rollback_live_mig.assert_called_once_with( self.context, self.instance, 'dest-host', migrate_data=migrate_data, source_bdms=source_bdms) @mock.patch('nova.compute.rpcapi.ComputeAPI.pre_live_migration') @mock.patch('nova.compute.manager.ComputeManager._rollback_live_migration') @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') def test_live_migration_wait_vif_plugged_timeout_error( self, mock_get_bdms, mock_rollback_live_mig, mock_pre_live_mig): """Tests the scenario where wait_for_instance_event raises an eventlet Timeout exception and we're configured such that vif plugging failures are fatal (which is the default). 
""" migrate_data = objects.LibvirtLiveMigrateData( wait_for_vif_plugged=True) source_bdms = objects.BlockDeviceMappingList(objects=[]) mock_get_bdms.return_value = source_bdms mock_pre_live_mig.return_value = migrate_data self.instance.info_cache = objects.InstanceInfoCache( network_info=network_model.NetworkInfo([ network_model.VIF(uuids.port1)])) self.compute._waiting_live_migrations[self.instance.uuid] = ( self.migration, mock.MagicMock() ) with mock.patch.object( self.compute.virtapi, 'wait_for_instance_event') as wait_for_event: wait_for_event.return_value.__enter__.side_effect = ( eventlet_timeout.Timeout()) ex = self.assertRaises( exception.MigrationError, self.compute._do_live_migration, self.context, 'dest-host', self.instance, None, self.migration, migrate_data) self.assertIn('Timed out waiting for events', six.text_type(ex)) self.assertEqual('error', self.migration.status) mock_rollback_live_mig.assert_called_once_with( self.context, self.instance, 'dest-host', migrate_data=migrate_data, source_bdms=source_bdms) @mock.patch('nova.compute.rpcapi.ComputeAPI.pre_live_migration') @mock.patch('nova.compute.manager.ComputeManager._rollback_live_migration') @mock.patch('nova.compute.manager.ComputeManager._post_live_migration') @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') def test_live_migration_wait_vif_plugged_timeout_non_fatal( self, mock_get_bdms, mock_post_live_mig, mock_rollback_live_mig, mock_pre_live_mig): """Tests the scenario where wait_for_instance_event raises an eventlet Timeout exception and we're configured such that vif plugging failures are NOT fatal. """ self.flags(vif_plugging_is_fatal=False) mock_get_bdms.return_value = objects.BlockDeviceMappingList(objects=[]) migrate_data = objects.LibvirtLiveMigrateData( wait_for_vif_plugged=True) mock_pre_live_mig.return_value = migrate_data self.instance.info_cache = objects.InstanceInfoCache( network_info=network_model.NetworkInfo([ network_model.VIF(uuids.port1)])) self.compute._waiting_live_migrations[self.instance.uuid] = ( self.migration, mock.MagicMock() ) with mock.patch.object( self.compute.virtapi, 'wait_for_instance_event') as wait_for_event: wait_for_event.return_value.__enter__.side_effect = ( eventlet_timeout.Timeout()) self.compute._do_live_migration( self.context, 'dest-host', self.instance, None, self.migration, migrate_data) self.assertEqual('running', self.migration.status) mock_rollback_live_mig.assert_not_called() @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') @mock.patch('nova.compute.utils.notify_about_instance_action') def test_live_migration_submit_failed(self, mock_notify, mock_exc): migration = objects.Migration(self.context, uuid=uuids.migration) migration.save = mock.MagicMock() with mock.patch.object( self.compute._live_migration_executor, 'submit') as mock_sub: mock_sub.side_effect = RuntimeError self.assertRaises(exception.LiveMigrationNotSubmitted, self.compute.live_migration, self.context, 'fake', self.instance, True, migration, {}) self.assertEqual('error', migration.status) @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') @mock.patch( 'nova.compute.manager.ComputeManager._notify_about_instance_usage') @mock.patch.object(compute_utils, 'notify_about_instance_action') @mock.patch('nova.compute.manager.ComputeManager._rollback_live_migration') @mock.patch('nova.compute.rpcapi.ComputeAPI.pre_live_migration') def test_live_migration_aborted_before_running(self, mock_rpc, mock_rollback, mock_action_notify, mock_usage_notify, 
mock_get_bdms): source_bdms = objects.BlockDeviceMappingList(objects=[]) mock_get_bdms.return_value = source_bdms migrate_data = objects.LibvirtLiveMigrateData( wait_for_vif_plugged=True) details = {network_model.VIF_DETAILS_OVS_HYBRID_PLUG: True} self.instance.info_cache = objects.InstanceInfoCache( network_info=network_model.NetworkInfo([ network_model.VIF(uuids.port1, details=details), network_model.VIF(uuids.port2, details=details) ])) self.compute._waiting_live_migrations = {} fake_migration = objects.Migration( uuid=uuids.migration, instance_uuid=self.instance.uuid) fake_migration.save = mock.MagicMock() with mock.patch.object(self.compute.virtapi, 'wait_for_instance_event') as wait_for_event: self.compute._do_live_migration( self.context, 'dest-host', self.instance, 'block_migration', fake_migration, migrate_data) self.assertEqual(2, len(wait_for_event.call_args[0][1])) mock_rpc.assert_called_once_with( self.context, self.instance, 'block_migration', None, 'dest-host', migrate_data) # Ensure that rollback is specifically called with the migrate_data # that came back from the call to pre_live_migration on the dest host # rather than the one passed to _do_live_migration. mock_rollback.assert_called_once_with( self.context, self.instance, 'dest-host', mock_rpc.return_value, 'cancelled', source_bdms=source_bdms) mock_usage_notify.assert_called_once_with( self.context, self.instance, 'live.migration.abort.end') mock_action_notify.assert_called_once_with( self.context, self.instance, self.compute.host, action=fields.NotificationAction.LIVE_MIGRATION_ABORT, phase=fields.NotificationPhase.END) def test_live_migration_force_complete_succeeded(self): migration = objects.Migration() migration.status = 'running' migration.id = 0 @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch('nova.image.glance.API.generate_image_url', return_value='fake-url') @mock.patch.object(objects.Migration, 'get_by_id', return_value=migration) @mock.patch.object(self.compute.driver, 'live_migration_force_complete') def _do_test(force_complete, get_by_id, gen_img_url, mock_notify): self.compute.live_migration_force_complete( self.context, self.instance) force_complete.assert_called_once_with(self.instance) self.assertEqual(2, len(fake_notifier.NOTIFICATIONS)) self.assertEqual( 'compute.instance.live.migration.force.complete.start', fake_notifier.NOTIFICATIONS[0].event_type) self.assertEqual( self.instance.uuid, fake_notifier.NOTIFICATIONS[0].payload['instance_id']) self.assertEqual( 'compute.instance.live.migration.force.complete.end', fake_notifier.NOTIFICATIONS[1].event_type) self.assertEqual( self.instance.uuid, fake_notifier.NOTIFICATIONS[1].payload['instance_id']) self.assertEqual(2, mock_notify.call_count) mock_notify.assert_has_calls([ mock.call(self.context, self.instance, self.compute.host, action='live_migration_force_complete', phase='start'), mock.call(self.context, self.instance, self.compute.host, action='live_migration_force_complete', phase='end')]) _do_test() def test_post_live_migration_at_destination_success(self): @mock.patch.object(objects.Instance, 'drop_migration_context') @mock.patch.object(objects.Instance, 'apply_migration_context') @mock.patch.object(self.compute, 'rt') @mock.patch.object(self.instance, 'save') @mock.patch.object(self.compute.network_api, 'get_instance_nw_info', return_value='test_network') @mock.patch.object(self.compute.network_api, 'setup_networks_on_host') @mock.patch.object(self.compute.network_api, 'migrate_instance_finish') 
@mock.patch.object(self.compute, '_notify_about_instance_usage') @mock.patch.object(self.compute, '_get_instance_block_device_info') @mock.patch.object(self.compute, '_get_power_state', return_value=1) @mock.patch.object(self.compute, '_get_compute_info') @mock.patch.object(self.compute.driver, 'post_live_migration_at_destination') @mock.patch('nova.compute.utils.notify_about_instance_action') def _do_test(mock_notify, post_live_migration_at_destination, _get_compute_info, _get_power_state, _get_instance_block_device_info, _notify_about_instance_usage, migrate_instance_finish, setup_networks_on_host, get_instance_nw_info, save, rt_mock, mock_apply_mig_ctxt, mock_drop_mig_ctxt): cn = mock.Mock(spec_set=['hypervisor_hostname']) cn.hypervisor_hostname = 'test_host' _get_compute_info.return_value = cn cn_old = self.instance.host instance_old = self.instance self.compute.post_live_migration_at_destination( self.context, self.instance, False) mock_notify.assert_has_calls([ mock.call(self.context, self.instance, self.instance.host, action='live_migration_post_dest', phase='start'), mock.call(self.context, self.instance, self.instance.host, action='live_migration_post_dest', phase='end')]) setup_networks_calls = [ mock.call(self.context, self.instance, self.compute.host), mock.call(self.context, self.instance, cn_old, teardown=True), mock.call(self.context, self.instance, self.compute.host) ] setup_networks_on_host.assert_has_calls(setup_networks_calls) notify_usage_calls = [ mock.call(self.context, instance_old, "live_migration.post.dest.start", network_info='test_network'), mock.call(self.context, self.instance, "live_migration.post.dest.end", network_info='test_network') ] _notify_about_instance_usage.assert_has_calls(notify_usage_calls) migrate_instance_finish.assert_called_once_with( self.context, self.instance, test.MatchType(objects.Migration), provider_mappings=None) mig = migrate_instance_finish.call_args[0][2] self.assertTrue(base_obj.obj_equal_prims( objects.Migration(source_compute=cn_old, dest_compute=self.compute.host, migration_type='live-migration'), mig)) _get_instance_block_device_info.assert_called_once_with( self.context, self.instance ) get_instance_nw_info.assert_called_once_with(self.context, self.instance) _get_power_state.assert_called_once_with(self.context, self.instance) _get_compute_info.assert_called_once_with(self.context, self.compute.host) rt_mock.allocate_pci_devices_for_instance.assert_called_once_with( self.context, self.instance) self.assertEqual(self.compute.host, self.instance.host) self.assertEqual('test_host', self.instance.node) self.assertEqual(1, self.instance.power_state) self.assertEqual(0, self.instance.progress) self.assertIsNone(self.instance.task_state) save.assert_called_once_with( expected_task_state=task_states.MIGRATING) mock_apply_mig_ctxt.assert_called_once_with() mock_drop_mig_ctxt.assert_called_once_with() _do_test() def test_post_live_migration_at_destination_compute_not_found(self): @mock.patch.object(objects.Instance, 'drop_migration_context') @mock.patch.object(objects.Instance, 'apply_migration_context') @mock.patch.object(self.compute, 'rt') @mock.patch.object(self.instance, 'save') @mock.patch.object(self.compute, 'network_api') @mock.patch.object(self.compute, '_notify_about_instance_usage') @mock.patch.object(self.compute, '_get_instance_block_device_info') @mock.patch.object(self.compute, '_get_power_state', return_value=1) @mock.patch.object(self.compute, '_get_compute_info', side_effect=exception.ComputeHostNotFound( 
host=uuids.fake_host)) @mock.patch.object(self.compute.driver, 'post_live_migration_at_destination') @mock.patch('nova.compute.utils.notify_about_instance_action') def _do_test(mock_notify, post_live_migration_at_destination, _get_compute_info, _get_power_state, _get_instance_block_device_info, _notify_about_instance_usage, network_api, save, rt_mock, mock_apply_mig_ctxt, mock_drop_mig_ctxt): cn = mock.Mock(spec_set=['hypervisor_hostname']) cn.hypervisor_hostname = 'test_host' _get_compute_info.return_value = cn self.compute.post_live_migration_at_destination( self.context, self.instance, False) mock_notify.assert_has_calls([ mock.call(self.context, self.instance, self.instance.host, action='live_migration_post_dest', phase='start'), mock.call(self.context, self.instance, self.instance.host, action='live_migration_post_dest', phase='end')]) self.assertIsNone(self.instance.node) mock_apply_mig_ctxt.assert_called_with() mock_drop_mig_ctxt.assert_called_once_with() _do_test() def test_post_live_migration_at_destination_unexpected_exception(self): @mock.patch.object(objects.Instance, 'drop_migration_context') @mock.patch.object(objects.Instance, 'apply_migration_context') @mock.patch.object(self.compute, 'rt') @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') @mock.patch.object(self.instance, 'save') @mock.patch.object(self.compute, 'network_api') @mock.patch.object(self.compute, '_notify_about_instance_usage') @mock.patch.object(self.compute, '_get_instance_block_device_info') @mock.patch.object(self.compute, '_get_power_state', return_value=1) @mock.patch.object(self.compute, '_get_compute_info') @mock.patch.object(self.compute.driver, 'post_live_migration_at_destination', side_effect=exception.NovaException) def _do_test(post_live_migration_at_destination, _get_compute_info, _get_power_state, _get_instance_block_device_info, _notify_about_instance_usage, network_api, save, add_instance_fault_from_exc, rt_mock, mock_apply_mig_ctxt, mock_drop_mig_ctxt): cn = mock.Mock(spec_set=['hypervisor_hostname']) cn.hypervisor_hostname = 'test_host' _get_compute_info.return_value = cn self.assertRaises(exception.NovaException, self.compute.post_live_migration_at_destination, self.context, self.instance, False) self.assertEqual(vm_states.ERROR, self.instance.vm_state) mock_apply_mig_ctxt.assert_called_with() mock_drop_mig_ctxt.assert_called_once_with() _do_test() @mock.patch('nova.compute.manager.LOG.error') @mock.patch('nova.compute.utils.notify_about_instance_action') def test_post_live_migration_at_destination_port_binding_delete_fails( self, mock_notify, mock_log_error): """Tests that neutron fails to delete the source host port bindings but we handle the error and just log it. """ @mock.patch.object(self.instance, 'drop_migration_context') @mock.patch.object(objects.Instance, 'apply_migration_context') @mock.patch.object(self.compute, 'rt') @mock.patch.object(self.compute, '_notify_about_instance_usage') @mock.patch.object(self.compute, 'network_api') @mock.patch.object(self.compute, '_get_instance_block_device_info') @mock.patch.object(self.compute, '_get_power_state', return_value=power_state.RUNNING) @mock.patch.object(self.compute, '_get_compute_info', return_value=objects.ComputeNode( hypervisor_hostname='fake-dest-host')) @mock.patch.object(self.instance, 'save') def _do_test(instance_save, get_compute_node, get_power_state, get_bdms, network_api, legacy_notify, rt_mock, mock_apply_mig_ctxt, mock_drop_mig_ctxt): # setup_networks_on_host is called three times: # 1. 
set the migrating_to port binding profile value (no-op) # 2. delete the source host port bindings - we make this raise # 3. once more to update dhcp for nova-network (no-op for neutron) network_api.setup_networks_on_host.side_effect = [ None, exception.PortBindingDeletionFailed( port_id=uuids.port_id, host='fake-source-host'), None] self.compute.post_live_migration_at_destination( self.context, self.instance, block_migration=False) self.assertEqual(1, mock_log_error.call_count) self.assertIn('Network cleanup failed for source host', mock_log_error.call_args[0][0]) mock_apply_mig_ctxt.assert_called_once_with() mock_drop_mig_ctxt.assert_called_once_with() _do_test() @mock.patch('nova.objects.ConsoleAuthToken.' 'clean_console_auths_for_instance') def _call_post_live_migration(self, mock_clean, *args, **kwargs): @mock.patch.object(self.compute, 'update_available_resource') @mock.patch.object(self.compute, 'compute_rpcapi') @mock.patch.object(self.compute, '_notify_about_instance_usage') @mock.patch.object(self.compute, 'network_api') def _do_call(nwapi, notify, rpc, update): bdms = objects.BlockDeviceMappingList(objects=[]) result = self.compute._post_live_migration( self.context, self.instance, 'foo', *args, source_bdms=bdms, **kwargs) return result mock_rt = self._mock_rt() result = _do_call() mock_clean.assert_called_once_with(self.context, self.instance.uuid) mock_rt.free_pci_device_allocations_for_instance.\ assert_called_once_with(self.context, self.instance) return result def test_post_live_migration_new_allocations(self): # We have a migrate_data with a migration... migration = objects.Migration(uuid=uuids.migration) migration.save = mock.MagicMock() md = objects.LibvirtLiveMigrateData(migration=migration, is_shared_instance_path=False, is_shared_block_storage=False) with test.nested( mock.patch.object(self.compute, 'reportclient'), mock.patch.object(self.compute, '_delete_allocation_after_move'), ) as ( mock_report, mock_delete, ): # ...and that migration has allocations... mock_report.get_allocations_for_consumer.return_value = ( mock.sentinel.allocs) self._call_post_live_migration(migrate_data=md) # ...so we should have called the new style delete mock_delete.assert_called_once_with(self.context, self.instance, migration) def test_post_live_migration_cinder_pre_344_api(self): # Because live migration has # succeeded,_post_live_migration_remove_source_vol_connections() # should call terminate_connection() with the volume UUID. 
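# NOTE: The two tests around this point exercise how the source host's volume
# connections are removed after a successful live migration. The helper below
# is only an illustrative sketch of that behaviour (it is not the real
# ComputeManager code): BDMs that carry a cinder v3.44+ attachment_id are
# cleaned up with attachment_delete(), while legacy BDMs fall back to
# terminate_connection() using the connector reported by the virt driver.
def _remove_source_volume_connection_sketch(compute, context, instance, bdm):
    if bdm.attachment_id:
        # New-style volume attachment: delete the source host's attachment.
        compute.volume_api.attachment_delete(context, bdm.attachment_id)
    else:
        # Legacy flow: terminate the connection with the driver's connector.
        connector = compute.driver.get_volume_connector(instance)
        compute.volume_api.terminate_connection(
            context, bdm.volume_id, connector)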
dest_host = 'test_dest_host' instance = fake_instance.fake_instance_obj(self.context, node='dest', uuid=uuids.instance) vol_bdm = fake_block_device.fake_bdm_object( self.context, {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': uuids.volume, 'device_name': '/dev/vdb', 'instance_uuid': instance.uuid, 'id': 42, 'connection_info': '{"connector": {"host": "%s"}}' % dest_host}) image_bdm = fake_block_device.fake_bdm_object( self.context, {'source_type': 'image', 'destination_type': 'local', 'volume_id': uuids.image_volume, 'device_name': '/dev/vdb', 'instance_uuid': instance.uuid}) @mock.patch.object(self.compute.driver, 'get_volume_connector') @mock.patch.object(self.compute.volume_api, 'terminate_connection') def _test(mock_term_conn, mock_get_vol_conn): bdms = objects.BlockDeviceMappingList(objects=[vol_bdm, image_bdm]) self.compute._post_live_migration_remove_source_vol_connections( self.context, instance, bdms) mock_term_conn.assert_called_once_with( self.context, uuids.volume, mock_get_vol_conn.return_value) _test() def test_post_live_migration_cinder_v3_api(self): # Because live migration has succeeded, _post_live_migration # should call attachment_delete with the original/old attachment_id dest_host = 'test_dest_host' instance = fake_instance.fake_instance_obj(self.context, node='dest', uuid=uuids.instance) bdm_id = 1 volume_id = uuids.volume vol_bdm = fake_block_device.fake_bdm_object( self.context, {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': volume_id, 'device_name': '/dev/vdb', 'instance_uuid': instance.uuid, 'id': bdm_id, 'connection_info': '{"connector": {"host": "%s"}}' % dest_host}) image_bdm = fake_block_device.fake_bdm_object( self.context, {'source_type': 'image', 'destination_type': 'local', 'volume_id': volume_id, 'device_name': '/dev/vdb', 'instance_uuid': instance.uuid}) vol_bdm.attachment_id = uuids.attachment image_bdm.attachment_id = uuids.attachment3 @mock.patch.object(self.compute.volume_api, 'attachment_delete') def _test(mock_attach_delete): bdms = objects.BlockDeviceMappingList(objects=[vol_bdm, image_bdm]) self.compute._post_live_migration_remove_source_vol_connections( self.context, instance, bdms) mock_attach_delete.assert_called_once_with( self.context, uuids.attachment) _test() @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid', return_value=objects.BlockDeviceMappingList()) @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch('nova.compute.utils.add_instance_fault_from_exc') def test_post_live_migration_unplug_with_stashed_source_vifs( self, mock_add_fault, mock_notify, mock_get_bdms): """Tests the scenario that migrate_data.vifs is set so we unplug using the stashed source vifs from that rather than the current instance network info cache. """ migrate_data = objects.LibvirtLiveMigrateData() source_vif = network_model.VIF(uuids.port_id, type='ovs') migrate_data.vifs = [objects.VIFMigrateData(source_vif=source_vif)] bdms = objects.BlockDeviceMappingList(objects=[]) nw_info = network_model.NetworkInfo( [network_model.VIF(uuids.port_id, type='ovn')]) def fake_post_live_migration_at_source( _context, _instance, network_info): # Make sure we got the source_vif for unplug. self.assertEqual(1, len(network_info)) self.assertEqual(source_vif, network_info[0]) def fake_driver_cleanup(_context, _instance, network_info, *a, **kw): # Make sure we got the source_vif for unplug. 
self.assertEqual(1, len(network_info)) self.assertEqual(source_vif, network_info[0]) # Based on the number of mocks here, clearly _post_live_migration is # too big at this point... @mock.patch.object(self.compute, '_get_instance_block_device_info') @mock.patch.object(self.compute.network_api, 'get_instance_nw_info', return_value=nw_info) @mock.patch.object(self.compute, '_notify_about_instance_usage') @mock.patch.object(self.compute.network_api, 'migrate_instance_start') @mock.patch.object(self.compute.driver, 'post_live_migration_at_source', side_effect=fake_post_live_migration_at_source) @mock.patch.object(self.compute.compute_rpcapi, 'post_live_migration_at_destination') @mock.patch.object(self.compute, '_live_migration_cleanup_flags', return_value=(True, False)) @mock.patch.object(self.compute.driver, 'cleanup', side_effect=fake_driver_cleanup) @mock.patch.object(self.compute, 'update_available_resource') @mock.patch.object(self.compute, '_update_scheduler_instance_info') @mock.patch.object(self.compute, '_clean_instance_console_tokens') def _test(_clean_instance_console_tokens, _update_scheduler_instance_info, update_available_resource, driver_cleanup, _live_migration_cleanup_flags, post_live_migration_at_destination, post_live_migration_at_source, migrate_instance_start, _notify_about_instance_usage, get_instance_nw_info, _get_instance_block_device_info): self._mock_rt() self.compute._post_live_migration( self.context, self.instance, 'fake-dest', migrate_data=migrate_data, source_bdms=bdms) post_live_migration_at_source.assert_called_once_with( self.context, self.instance, test.MatchType(network_model.NetworkInfo)) driver_cleanup.assert_called_once_with( self.context, self.instance, test.MatchType(network_model.NetworkInfo), destroy_disks=False, migrate_data=migrate_data, destroy_vifs=False) _test() def _generate_volume_bdm_list(self, instance, original=False): # TODO(lyarwood): There are various methods generating fake bdms within # this class, we should really look at writing a small number of # generic reusable methods somewhere to replace all of these. 
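# NOTE: Hypothetical helper, sketched from the TODO above rather than taken
# from the test suite. It shows one way the repeated fake volume BDM setup in
# these tests could be collapsed into a single reusable function; the name and
# signature are illustrative only, and it relies on the module's existing
# fake_block_device and mock imports.
def _fake_volume_bdm_sketch(context, instance, volume_id, device_name,
                            attachment_id=None,
                            connection_info='{"test": "test"}'):
    bdm = fake_block_device.fake_bdm_object(
        context,
        {'source_type': 'volume', 'destination_type': 'volume',
         'volume_id': volume_id, 'device_name': device_name,
         'instance_uuid': instance.uuid,
         'connection_info': connection_info})
    bdm.attachment_id = attachment_id
    bdm.save = mock.Mock()
    return bdm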
connection_info = "{'data': {'host': 'dest'}}" if original: connection_info = "{'data': {'host': 'original'}}" vol1_bdm = fake_block_device.fake_bdm_object( self.context, {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': uuids.vol1, 'device_name': '/dev/vdb', 'instance_uuid': instance.uuid, 'connection_info': connection_info}) vol1_bdm.save = mock.Mock() vol2_bdm = fake_block_device.fake_bdm_object( self.context, {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': uuids.vol2, 'device_name': '/dev/vdc', 'instance_uuid': instance.uuid, 'connection_info': connection_info}) vol2_bdm.save = mock.Mock() if original: vol1_bdm.attachment_id = uuids.vol1_attach_original vol2_bdm.attachment_id = uuids.vol2_attach_original else: vol1_bdm.attachment_id = uuids.vol1_attach vol2_bdm.attachment_id = uuids.vol2_attach return objects.BlockDeviceMappingList(objects=[vol1_bdm, vol2_bdm]) @mock.patch('nova.compute.rpcapi.ComputeAPI.remove_volume_connection') def test_remove_remote_volume_connections(self, mock_remove_vol_conn): instance = fake_instance.fake_instance_obj(self.context, uuid=uuids.instance) bdms = self._generate_volume_bdm_list(instance) self.compute._remove_remote_volume_connections(self.context, 'fake', bdms, instance) mock_remove_vol_conn.assert_has_calls([ mock.call(self.context, instance, bdm.volume_id, 'fake') for bdm in bdms]) @mock.patch('nova.compute.rpcapi.ComputeAPI.remove_volume_connection') def test_remove_remote_volume_connections_exc(self, mock_remove_vol_conn): instance = fake_instance.fake_instance_obj(self.context, uuid=uuids.instance) bdms = self._generate_volume_bdm_list(instance) # Raise an exception for the second call to remove_volume_connections mock_remove_vol_conn.side_effect = [None, test.TestingException] # Assert that errors are ignored self.compute._remove_remote_volume_connections(self.context, 'fake', bdms, instance) @mock.patch('nova.volume.cinder.API.attachment_delete') def test_rollback_volume_bdms(self, mock_delete_attachment): instance = fake_instance.fake_instance_obj(self.context, uuid=uuids.instance) bdms = self._generate_volume_bdm_list(instance) original_bdms = self._generate_volume_bdm_list(instance, original=True) self.compute._rollback_volume_bdms(self.context, bdms, original_bdms, instance) # Assert that we delete the current attachments mock_delete_attachment.assert_has_calls([ mock.call(self.context, uuids.vol1_attach), mock.call(self.context, uuids.vol2_attach)]) # Assert that we switch the attachment ids and connection_info for each # bdm back to their original values self.assertEqual(uuids.vol1_attach_original, bdms[0].attachment_id) self.assertEqual("{'data': {'host': 'original'}}", bdms[0].connection_info) self.assertEqual(uuids.vol2_attach_original, bdms[1].attachment_id) self.assertEqual("{'data': {'host': 'original'}}", bdms[1].connection_info) # Assert that save is called for each bdm bdms[0].save.assert_called_once() bdms[1].save.assert_called_once() @mock.patch('nova.compute.manager.LOG') @mock.patch('nova.volume.cinder.API.attachment_delete') def test_rollback_volume_bdms_exc(self, mock_delete_attachment, mock_log): instance = fake_instance.fake_instance_obj(self.context, uuid=uuids.instance) bdms = self._generate_volume_bdm_list(instance) original_bdms = self._generate_volume_bdm_list(instance, original=True) # Assert that we ignore cinderclient exceptions and continue to attempt # to rollback any remaining bdms. 
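# NOTE: Illustrative sketch only (the real _rollback_volume_bdms lives in
# nova.compute.manager and may differ) of the error handling asserted by the
# _rollback_volume_bdms tests around this point: BDMs that were already
# restored are skipped, a cinderclient error for one BDM is skipped so the
# remaining BDMs are still rolled back, and any other exception propagates.
def _rollback_volume_bdms_sketch(compute, context, bdms, original_bdms,
                                 instance):
    for bdm, original in zip(bdms, original_bdms):
        if bdm.attachment_id == original.attachment_id:
            # Nothing to roll back, e.g. pre_live_migration already failed
            # and the BDM still references the source attachment.
            continue
        try:
            compute.volume_api.attachment_delete(context, bdm.attachment_id)
        except cinder_exception.ClientException:
            # The real code also logs a warning here; either way the loop
            # keeps going so the remaining BDMs are still rolled back.
            continue
        bdm.attachment_id = original.attachment_id
        bdm.connection_info = original.connection_info
        bdm.save()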
        mock_delete_attachment.side_effect = [
            cinder_exception.ClientException(code=9001), None]

        self.compute._rollback_volume_bdms(self.context, bdms,
                                           original_bdms, instance)
        self.assertEqual(uuids.vol2_attach_original, bdms[1].attachment_id)
        self.assertEqual("{'data': {'host': 'original'}}",
                         bdms[1].connection_info)
        bdms[0].save.assert_not_called()
        bdms[1].save.assert_called_once()
        mock_log.warning.assert_called_once()
        self.assertIn('Ignoring cinderclient exception',
                      mock_log.warning.call_args[0][0])

        # Assert that we raise unknown Exceptions
        mock_log.reset_mock()
        bdms[0].save.reset_mock()
        bdms[1].save.reset_mock()
        mock_delete_attachment.side_effect = test.TestingException
        self.assertRaises(test.TestingException,
                          self.compute._rollback_volume_bdms, self.context,
                          bdms, original_bdms, instance)
        bdms[0].save.assert_not_called()
        bdms[1].save.assert_not_called()
        mock_log.exception.assert_called_once()
        self.assertIn('Exception while attempting to rollback',
                      mock_log.exception.call_args[0][0])

    @mock.patch('nova.volume.cinder.API.attachment_delete')
    def test_rollback_volume_bdms_after_pre_failure(
            self, mock_delete_attachment):
        instance = fake_instance.fake_instance_obj(
            self.context, uuid=uuids.instance)
        original_bdms = bdms = self._generate_volume_bdm_list(instance)
        self.compute._rollback_volume_bdms(
            self.context, bdms, original_bdms, instance)
        # Assert that attachment_delete isn't called when the bdms have
        # already been rolled back by a failure in pre_live_migration to
        # reference the source bdms.
        mock_delete_attachment.assert_not_called()

    @mock.patch.object(objects.ComputeNode,
                       'get_first_node_by_host_for_old_compat')
    @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
                'remove_provider_tree_from_instance_allocation')
    def test_rollback_live_migration_cinder_v3_api(self, mock_remove_allocs,
                                                   mock_get_node):
        compute = manager.ComputeManager()
        dest_node = objects.ComputeNode(host='foo', uuid=uuids.dest_node)
        mock_get_node.return_value = dest_node
        instance = fake_instance.fake_instance_obj(self.context,
                                                   uuid=uuids.instance)
        volume_id = uuids.volume
        orig_attachment_id = uuids.attachment1
        new_attachment_id = uuids.attachment2

        migrate_data = migrate_data_obj.LiveMigrateData()
        migrate_data.old_vol_attachment_ids = {
            volume_id: orig_attachment_id}

        def fake_bdm():
            bdm = fake_block_device.fake_bdm_object(
                self.context,
                {'source_type': 'volume', 'destination_type': 'volume',
                 'volume_id': volume_id, 'device_name': '/dev/vdb',
                 'instance_uuid': instance.uuid})
            bdm.save = mock.Mock()
            return bdm

        # NOTE(mdbooth): Use of attachment_id as connection_info is a
        # test convenience. It just needs to be a string.
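# NOTE: Assumed sketch (not the actual _rollback_live_migration) of the BDM
# restore step that test_rollback_live_migration_cinder_v3_api asserts below:
# the attachment created for the destination host is deleted and the BDM is
# pointed back at the attachment id stashed in
# migrate_data.old_vol_attachment_ids, with connection_info taken from the
# corresponding source BDM.
def _rollback_bdm_attachment_sketch(compute, context, bdm, source_bdm,
                                    migrate_data):
    compute.volume_api.attachment_delete(context, bdm.attachment_id)
    bdm.attachment_id = migrate_data.old_vol_attachment_ids[bdm.volume_id]
    bdm.connection_info = source_bdm.connection_info
    bdm.save()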
source_bdm = fake_bdm() source_bdm.attachment_id = orig_attachment_id source_bdm.connection_info = orig_attachment_id source_bdms = objects.BlockDeviceMappingList(objects=[source_bdm]) bdm = fake_bdm() bdm.attachment_id = new_attachment_id bdm.connection_info = new_attachment_id bdms = objects.BlockDeviceMappingList(objects=[bdm]) @mock.patch('nova.objects.InstancePCIRequests.get_by_instance_uuid') @mock.patch.object(compute.volume_api, 'attachment_delete') @mock.patch.object(compute_utils, 'notify_about_instance_action') @mock.patch.object(instance, 'save') @mock.patch.object(compute, '_notify_about_instance_usage') @mock.patch.object(compute.compute_rpcapi, 'remove_volume_connection') @mock.patch.object(compute, 'network_api') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(objects.Instance, 'drop_migration_context') def _test(mock_drop_mig_ctxt, mock_get_bdms, mock_net_api, mock_remove_conn, mock_usage, mock_instance_save, mock_action, mock_attach_delete, mock_get_pci): # this tests that _rollback_live_migration replaces the bdm's # attachment_id with the original attachment id that is in # migrate_data. mock_get_bdms.return_value = bdms mock_get_pci.return_value = objects.InstancePCIRequests() compute._rollback_live_migration(self.context, instance, None, migrate_data=migrate_data, source_bdms=source_bdms) mock_remove_conn.assert_called_once_with(self.context, instance, bdm.volume_id, None) mock_attach_delete.assert_called_once_with(self.context, new_attachment_id) self.assertEqual(bdm.attachment_id, orig_attachment_id) self.assertEqual(orig_attachment_id, bdm.connection_info) bdm.save.assert_called_once_with() mock_drop_mig_ctxt.assert_called_once_with() mock_get_pci.assert_called_once_with(self.context, instance.uuid) self.assertEqual(mock_get_pci.return_value, instance.pci_requests) _test() @mock.patch('nova.compute.manager.LOG.error') @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid', return_value=objects.BlockDeviceMappingList()) @mock.patch('nova.compute.utils.notify_about_instance_action') def test_rollback_live_migration_port_binding_delete_fails( self, mock_notify, mock_get_bdms, mock_log_error): """Tests that neutron fails to delete the destination host port bindings but we handle the error and just log it. """ migrate_data = objects.LibvirtLiveMigrateData( migration=self.migration, is_shared_instance_path=True, is_shared_block_storage=True) @mock.patch('nova.objects.InstancePCIRequests.get_by_instance_uuid') @mock.patch.object(self.compute, '_revert_allocation') @mock.patch.object(self.instance, 'save') @mock.patch.object(self.compute, 'network_api') @mock.patch.object(self.compute, '_notify_about_instance_usage') @mock.patch.object(objects.Instance, 'drop_migration_context') def _do_test(drop_mig_ctxt, legacy_notify, network_api, instance_save, _revert_allocation, mock_get_pci): # setup_networks_on_host is called two times: # 1. set the migrating_to attribute in the port binding profile, # which is a no-op in this case for neutron # 2. 
delete the dest host port bindings - we make this raise network_api.setup_networks_on_host.side_effect = [ None, exception.PortBindingDeletionFailed( port_id=uuids.port_id, host='fake-dest-host')] mock_get_pci.return_value = objects.InstancePCIRequests() self.compute._rollback_live_migration( self.context, self.instance, 'fake-dest-host', migrate_data, source_bdms=objects.BlockDeviceMappingList()) self.assertEqual(1, mock_log_error.call_count) self.assertIn('Network cleanup failed for destination host', mock_log_error.call_args[0][0]) drop_mig_ctxt.assert_called_once_with() mock_get_pci.assert_called_once_with( self.context, self.instance.uuid) self.assertEqual( mock_get_pci.return_value, self.instance.pci_requests) _do_test() @mock.patch('nova.compute.manager.LOG.error') def test_rollback_live_migration_at_destination_port_binding_delete_fails( self, mock_log_error): """Tests that neutron fails to delete the destination host port bindings but we handle the error and just log it. """ @mock.patch.object(self.compute, 'rt') @mock.patch.object(self.compute, '_notify_about_instance_usage') @mock.patch.object(self.compute, 'network_api') @mock.patch.object(self.compute, '_get_instance_block_device_info') @mock.patch.object(self.compute.driver, 'rollback_live_migration_at_destination') def _do_test(driver_rollback, get_bdms, network_api, legacy_notify, rt_mock): self.compute.network_api.setup_networks_on_host.side_effect = ( exception.PortBindingDeletionFailed( port_id=uuids.port_id, host='fake-dest-host')) mock_md = mock.MagicMock() self.compute.rollback_live_migration_at_destination( self.context, self.instance, destroy_disks=False, migrate_data=mock_md) self.assertEqual(1, mock_log_error.call_count) self.assertIn('Network cleanup failed for destination host', mock_log_error.call_args[0][0]) driver_rollback.assert_called_once_with( self.context, self.instance, network_api.get_instance_nw_info.return_value, get_bdms.return_value, destroy_disks=False, migrate_data=mock_md) rt_mock.free_pci_device_claims_for_instance.\ assert_called_once_with(self.context, self.instance) _do_test() def _get_migration(self, migration_id, status, migration_type): migration = objects.Migration() migration.id = migration_id migration.status = status migration.migration_type = migration_type return migration @mock.patch.object(manager.ComputeManager, '_notify_about_instance_usage') @mock.patch.object(objects.Migration, 'get_by_id') @mock.patch.object(nova.virt.fake.FakeDriver, 'live_migration_abort') @mock.patch('nova.compute.utils.notify_about_instance_action') def test_live_migration_abort(self, mock_notify_action, mock_driver, mock_get_migration, mock_notify): instance = objects.Instance(id=123, uuid=uuids.instance) migration = self._get_migration(10, 'running', 'live-migration') mock_get_migration.return_value = migration self.compute.live_migration_abort(self.context, instance, migration.id) mock_driver.assert_called_with(instance) mock_notify.assert_has_calls( [mock.call(self.context, instance, 'live.migration.abort.start'), mock.call(self.context, instance, 'live.migration.abort.end')] ) mock_notify_action.assert_has_calls( [mock.call(self.context, instance, 'fake-mini', action='live_migration_abort', phase='start'), mock.call(self.context, instance, 'fake-mini', action='live_migration_abort', phase='end')] ) @mock.patch.object(manager.ComputeManager, '_notify_about_instance_usage') @mock.patch.object(objects.Migration, 'get_by_id') @mock.patch('nova.compute.utils.notify_about_instance_action') def 
test_live_migration_abort_queued(self, mock_notify_action, mock_get_migration, mock_notify): instance = objects.Instance(id=123, uuid=uuids.instance) migration = self._get_migration(10, 'queued', 'live-migration') migration.save = mock.MagicMock() mock_get_migration.return_value = migration fake_future = mock.MagicMock() self.compute._waiting_live_migrations[instance.uuid] = ( migration, fake_future) self.compute.live_migration_abort(self.context, instance, migration.id) mock_notify.assert_has_calls( [mock.call(self.context, instance, 'live.migration.abort.start'), mock.call(self.context, instance, 'live.migration.abort.end')] ) mock_notify_action.assert_has_calls( [mock.call(self.context, instance, 'fake-mini', action='live_migration_abort', phase='start'), mock.call(self.context, instance, 'fake-mini', action='live_migration_abort', phase='end')] ) self.assertEqual('cancelled', migration.status) fake_future.cancel.assert_called_once_with() @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') @mock.patch.object(manager.ComputeManager, '_notify_about_instance_usage') @mock.patch.object(objects.Migration, 'get_by_id') @mock.patch.object(nova.virt.fake.FakeDriver, 'live_migration_abort') @mock.patch('nova.compute.utils.notify_about_instance_action') def test_live_migration_abort_not_supported(self, mock_notify_action, mock_driver, mock_get_migration, mock_notify, mock_instance_fault): instance = objects.Instance(id=123, uuid=uuids.instance) migration = self._get_migration(10, 'running', 'live-migration') mock_get_migration.return_value = migration mock_driver.side_effect = NotImplementedError() self.assertRaises(NotImplementedError, self.compute.live_migration_abort, self.context, instance, migration.id) mock_notify_action.assert_called_once_with(self.context, instance, 'fake-mini', action='live_migration_abort', phase='start') @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') @mock.patch.object(manager.ComputeManager, '_notify_about_instance_usage') @mock.patch.object(objects.Migration, 'get_by_id') @mock.patch('nova.compute.utils.notify_about_instance_action') def test_live_migration_abort_wrong_migration_state(self, mock_notify_action, mock_get_migration, mock_notify, mock_instance_fault): instance = objects.Instance(id=123, uuid=uuids.instance) migration = self._get_migration(10, 'completed', 'live-migration') mock_get_migration.return_value = migration self.assertRaises(exception.InvalidMigrationState, self.compute.live_migration_abort, self.context, instance, migration.id) def test_live_migration_cleanup_flags_shared_path_and_vpmem_libvirt(self): migrate_data = objects.LibvirtLiveMigrateData( is_shared_block_storage=False, is_shared_instance_path=True) migr_ctxt = objects.MigrationContext() vpmem_resource = objects.Resource( provider_uuid=uuids.rp_uuid, resource_class="CUSTOM_PMEM_NAMESPACE_4GB", identifier='ns_0', metadata=objects.LibvirtVPMEMDevice( label='4GB', name='ns_0', devpath='/dev/dax0.0', size=4292870144, align=2097152)) migr_ctxt.old_resources = objects.ResourceList( objects=[vpmem_resource]) do_cleanup, destroy_disks = self.compute._live_migration_cleanup_flags( migrate_data, migr_ctxt) self.assertTrue(do_cleanup) self.assertTrue(destroy_disks) def test_live_migration_cleanup_flags_block_migrate_libvirt(self): migrate_data = objects.LibvirtLiveMigrateData( is_shared_block_storage=False, is_shared_instance_path=False) do_cleanup, destroy_disks = self.compute._live_migration_cleanup_flags( migrate_data) self.assertTrue(do_cleanup) 
        self.assertTrue(destroy_disks)

    def test_live_migration_cleanup_flags_shared_block_libvirt(self):
        migrate_data = objects.LibvirtLiveMigrateData(
            is_shared_block_storage=True,
            is_shared_instance_path=False)
        do_cleanup, destroy_disks = self.compute._live_migration_cleanup_flags(
            migrate_data)
        self.assertTrue(do_cleanup)
        self.assertFalse(destroy_disks)

    def test_live_migration_cleanup_flags_shared_path_libvirt(self):
        migrate_data = objects.LibvirtLiveMigrateData(
            is_shared_block_storage=False,
            is_shared_instance_path=True)
        do_cleanup, destroy_disks = self.compute._live_migration_cleanup_flags(
            migrate_data)
        self.assertFalse(do_cleanup)
        self.assertTrue(destroy_disks)

    def test_live_migration_cleanup_flags_shared_libvirt(self):
        migrate_data = objects.LibvirtLiveMigrateData(
            is_shared_block_storage=True,
            is_shared_instance_path=True)
        do_cleanup, destroy_disks = self.compute._live_migration_cleanup_flags(
            migrate_data)
        self.assertFalse(do_cleanup)
        self.assertFalse(destroy_disks)

    def test_live_migration_cleanup_flags_block_migrate_xenapi(self):
        migrate_data = objects.XenapiLiveMigrateData(block_migration=True)
        do_cleanup, destroy_disks = self.compute._live_migration_cleanup_flags(
            migrate_data)
        self.assertTrue(do_cleanup)
        self.assertTrue(destroy_disks)

    def test_live_migration_cleanup_flags_live_migrate_xenapi(self):
        migrate_data = objects.XenapiLiveMigrateData(block_migration=False)
        do_cleanup, destroy_disks = self.compute._live_migration_cleanup_flags(
            migrate_data)
        self.assertFalse(do_cleanup)
        self.assertFalse(destroy_disks)

    def test_live_migration_cleanup_flags_live_migrate(self):
        do_cleanup, destroy_disks = self.compute._live_migration_cleanup_flags(
            {})
        self.assertFalse(do_cleanup)
        self.assertFalse(destroy_disks)

    def test_live_migration_cleanup_flags_block_migrate_hyperv(self):
        migrate_data = objects.HyperVLiveMigrateData(
            is_shared_instance_path=False)
        do_cleanup, destroy_disks = self.compute._live_migration_cleanup_flags(
            migrate_data)
        self.assertTrue(do_cleanup)
        self.assertTrue(destroy_disks)

    def test_live_migration_cleanup_flags_shared_hyperv(self):
        migrate_data = objects.HyperVLiveMigrateData(
            is_shared_instance_path=True)
        do_cleanup, destroy_disks = self.compute._live_migration_cleanup_flags(
            migrate_data)
        self.assertTrue(do_cleanup)
        self.assertFalse(destroy_disks)

    def test_live_migration_cleanup_flags_other(self):
        migrate_data = mock.Mock()
        do_cleanup, destroy_disks = self.compute._live_migration_cleanup_flags(
            migrate_data)
        self.assertFalse(do_cleanup)
        self.assertFalse(destroy_disks)

    @mock.patch('nova.compute.utils.notify_about_resize_prep_instance')
    @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename')
    @mock.patch('nova.objects.InstanceFault.create')
    @mock.patch('nova.objects.Instance.save')
    @mock.patch('nova.compute.utils.notify_usage_exists')
    @mock.patch('nova.compute.utils.notify_about_instance_usage')
    @mock.patch('nova.compute.utils.is_volume_backed_instance',
                new=lambda *a: False)
    def test_prep_resize_errors_migration(self, mock_niu, mock_notify,
                                          mock_save, mock_if, mock_cn,
                                          mock_notify_resize):
        migration = mock.MagicMock()
        flavor = objects.Flavor(name='flavor', id=1)
        cn = objects.ComputeNode(uuid=uuids.compute)
        mock_cn.return_value = cn
        reportclient = self.compute.reportclient

        @mock.patch.object(self.compute, '_reschedule_resize_or_reraise')
        @mock.patch.object(self.compute, '_prep_resize')
        def doit(mock_pr, mock_r):
            # Mock the resource tracker, but keep the report client
            self._mock_rt().reportclient = reportclient
            mock_pr.side_effect = test.TestingException
mock_r.side_effect = test.TestingException instance = objects.Instance(uuid=uuids.instance, id=1, host='host', node='node', vm_state='active', task_state=None) self.assertRaises(test.TestingException, self.compute.prep_resize, self.context, mock.sentinel.image, instance, flavor, mock.sentinel.request_spec, {}, 'node', False, migration, []) # Make sure we set migration status to error self.assertEqual(migration.status, 'error') migration.save.assert_called_once_with() mock_r.assert_called_once_with( self.context, instance, mock.ANY, flavor, mock.sentinel.request_spec, {}, []) mock_notify_resize.assert_has_calls([ mock.call(self.context, instance, 'fake-mini', 'start', flavor), mock.call(self.context, instance, 'fake-mini', 'end', flavor)]) doit() def test_prep_resize_fails_unable_to_migrate_to_self(self): """Asserts that _prep_resize handles UnableToMigrateToSelf when _prep_resize is called on the host on which the instance lives and the flavor is not changing. """ instance = fake_instance.fake_instance_obj( self.context, host=self.compute.host, expected_attrs=['system_metadata', 'flavor']) migration = mock.MagicMock(spec='nova.objects.Migration') with mock.patch.dict(self.compute.driver.capabilities, {'supports_migrate_to_same_host': False}): ex = self.assertRaises( exception.InstanceFaultRollback, self.compute._prep_resize, self.context, instance.image_meta, instance, instance.flavor, filter_properties={}, node=instance.node, migration=migration, request_spec=mock.sentinel) self.assertIsInstance( ex.inner_exception, exception.UnableToMigrateToSelf) @mock.patch('nova.compute.rpcapi.ComputeAPI.resize_instance') @mock.patch('nova.compute.resource_tracker.ResourceTracker.resize_claim') @mock.patch('nova.compute.manager.ComputeManager.' '_get_request_group_mapping') @mock.patch.object(objects.Instance, 'save') def test_prep_resize_handles_legacy_request_spec( self, mock_save, mock_get_mapping, mock_claim, mock_resize): instance = fake_instance.fake_instance_obj( self.context, host=self.compute.host, expected_attrs=['system_metadata', 'flavor']) request_spec = objects.RequestSpec(instance_uuid = instance.uuid) mock_get_mapping.return_value = {} self.compute._prep_resize( self.context, instance.image_meta, instance, instance.flavor, filter_properties={}, node=instance.node, migration=self.migration, request_spec=request_spec.to_legacy_request_spec_dict()) # we expect that the legacy request spec is transformed to object # before _prep_resize calls _get_request_group_mapping() mock_get_mapping.assert_called_once_with( test.MatchType(objects.RequestSpec)) @mock.patch('nova.compute.utils.notify_usage_exists') @mock.patch('nova.compute.manager.ComputeManager.' '_notify_about_instance_usage') @mock.patch('nova.compute.utils.notify_about_resize_prep_instance') @mock.patch('nova.objects.Instance.save') @mock.patch('nova.compute.manager.ComputeManager._revert_allocation') @mock.patch('nova.compute.manager.ComputeManager.' 
'_reschedule_resize_or_reraise') @mock.patch('nova.compute.utils.add_instance_fault_from_exc') def test_prep_resize_fails_rollback( self, add_instance_fault_from_exc, _reschedule_resize_or_reraise, _revert_allocation, mock_instance_save, notify_about_resize_prep_instance, _notify_about_instance_usage, notify_usage_exists): """Tests that if _prep_resize raises InstanceFaultRollback, the instance.vm_state is reset properly in _error_out_instance_on_exception """ instance = fake_instance.fake_instance_obj( self.context, host=self.compute.host, vm_state=vm_states.STOPPED, node='fake-node', expected_attrs=['system_metadata', 'flavor']) migration = mock.MagicMock(spec='nova.objects.Migration') request_spec = mock.MagicMock(spec='nova.objects.RequestSpec') ex = exception.InstanceFaultRollback( inner_exception=exception.UnableToMigrateToSelf( instance_id=instance.uuid, host=instance.host)) def fake_reschedule_resize_or_reraise(*args, **kwargs): raise ex _reschedule_resize_or_reraise.side_effect = ( fake_reschedule_resize_or_reraise) with mock.patch.object(self.compute, '_prep_resize', side_effect=ex): self.assertRaises( # _error_out_instance_on_exception should reraise the # UnableToMigrateToSelf inside InstanceFaultRollback. exception.UnableToMigrateToSelf, self.compute.prep_resize, self.context, instance.image_meta, instance, instance.flavor, request_spec, filter_properties={}, node=instance.node, clean_shutdown=True, migration=migration, host_list=[]) # The instance.vm_state should remain unchanged # (_error_out_instance_on_exception will set to ACTIVE by default). self.assertEqual(vm_states.STOPPED, instance.vm_state) @mock.patch('nova.compute.utils.notify_usage_exists') @mock.patch('nova.compute.manager.ComputeManager.' '_notify_about_instance_usage') @mock.patch('nova.compute.utils.notify_about_resize_prep_instance') @mock.patch('nova.objects.Instance.save') @mock.patch('nova.compute.manager.ComputeManager._revert_allocation') @mock.patch('nova.compute.manager.ComputeManager.' '_reschedule_resize_or_reraise') @mock.patch('nova.compute.utils.add_instance_fault_from_exc') # this is almost copy-paste from test_prep_resize_fails_rollback def test_prep_resize_fails_group_validation( self, add_instance_fault_from_exc, _reschedule_resize_or_reraise, _revert_allocation, mock_instance_save, notify_about_resize_prep_instance, _notify_about_instance_usage, notify_usage_exists): """Tests that if _validate_instance_group_policy raises InstanceFaultRollback, the instance.vm_state is reset properly in _error_out_instance_on_exception """ instance = fake_instance.fake_instance_obj( self.context, host=self.compute.host, vm_state=vm_states.STOPPED, node='fake-node', expected_attrs=['system_metadata', 'flavor']) migration = mock.MagicMock(spec='nova.objects.Migration') request_spec = mock.MagicMock(spec='nova.objects.RequestSpec') ex = exception.RescheduledException( instance_uuid=instance.uuid, reason="policy violated") ex2 = exception.InstanceFaultRollback( inner_exception=ex) def fake_reschedule_resize_or_reraise(*args, **kwargs): raise ex2 _reschedule_resize_or_reraise.side_effect = ( fake_reschedule_resize_or_reraise) with mock.patch.object(self.compute, '_validate_instance_group_policy', side_effect=ex): self.assertRaises( # _error_out_instance_on_exception should reraise the # RescheduledException inside InstanceFaultRollback. 
exception.RescheduledException, self.compute.prep_resize, self.context, instance.image_meta, instance, instance.flavor, request_spec, filter_properties={}, node=instance.node, clean_shutdown=True, migration=migration, host_list=[]) # The instance.vm_state should remain unchanged # (_error_out_instance_on_exception will set to ACTIVE by default). self.assertEqual(vm_states.STOPPED, instance.vm_state) @mock.patch('nova.compute.rpcapi.ComputeAPI.resize_instance') @mock.patch('nova.compute.resource_tracker.ResourceTracker.resize_claim') @mock.patch('nova.objects.Instance.save') @mock.patch('nova.compute.utils.' 'update_pci_request_spec_with_allocated_interface_name') @mock.patch('nova.compute.utils.notify_usage_exists') @mock.patch('nova.compute.manager.ComputeManager.' '_notify_about_instance_usage') @mock.patch('nova.compute.utils.notify_about_resize_prep_instance') def test_prep_resize_update_pci_request( self, mock_notify, mock_notify_usage, mock_notify_exists, mock_update_pci, mock_save, mock_claim, mock_resize): instance = fake_instance.fake_instance_obj( self.context, host=self.compute.host, vm_state=vm_states.STOPPED, node='fake-node', expected_attrs=['system_metadata', 'flavor']) instance.pci_requests = objects.InstancePCIRequests(requests=[]) request_spec = objects.RequestSpec() request_spec.requested_resources = [ objects.RequestGroup( requester_id=uuids.port_id, provider_uuids=[uuids.rp_uuid])] self.compute.prep_resize( self.context, instance.image_meta, instance, instance.flavor, request_spec, filter_properties={}, node=instance.node, clean_shutdown=True, migration=self.migration, host_list=[]) mock_update_pci.assert_called_once_with( self.context, self.compute.reportclient, instance, {uuids.port_id: [uuids.rp_uuid]}) mock_save.assert_called_once_with() @mock.patch('nova.compute.manager.ComputeManager._revert_allocation') @mock.patch('nova.compute.utils.add_instance_fault_from_exc') @mock.patch('nova.compute.rpcapi.ComputeAPI.resize_instance') @mock.patch('nova.compute.resource_tracker.ResourceTracker.resize_claim') @mock.patch('nova.objects.Instance.save') @mock.patch('nova.compute.utils.' 'update_pci_request_spec_with_allocated_interface_name') @mock.patch('nova.compute.utils.notify_usage_exists') @mock.patch('nova.compute.manager.ComputeManager.' 
'_notify_about_instance_usage') @mock.patch('nova.compute.utils.notify_about_resize_prep_instance') def test_prep_resize_update_pci_request_fails( self, mock_notify, mock_notify_usage, mock_notify_exists, mock_update_pci, mock_save, mock_claim, mock_resize, mock_fault, mock_revert): instance = fake_instance.fake_instance_obj( self.context, host=self.compute.host, vm_state=vm_states.STOPPED, node='fake-node', expected_attrs=['system_metadata', 'flavor']) instance.pci_requests = objects.InstancePCIRequests(requests=[]) request_spec = objects.RequestSpec() request_spec.requested_resources = [ objects.RequestGroup( requester_id=uuids.port_id, provider_uuids=[uuids.rp_uuid])] migration = mock.MagicMock(spec='nova.objects.Migration') mock_update_pci.side_effect = exception.BuildAbortException( instance_uuid=instance.uuid, reason="") self.assertRaises( exception.BuildAbortException, self.compute.prep_resize, self.context, instance.image_meta, instance, instance.flavor, request_spec, filter_properties={}, node=instance.node, clean_shutdown=True, migration=migration, host_list=[]) mock_revert.assert_called_once_with(self.context, instance, migration) mock_notify.assert_has_calls([ mock.call( self.context, instance, 'fake-mini', 'start', instance.flavor), mock.call( self.context, instance, 'fake-mini', 'end', instance.flavor), ]) mock_notify_usage.assert_has_calls([ mock.call( self.context, instance, 'resize.prep.start', extra_usage_info=None), mock.call( self.context, instance, 'resize.prep.end', extra_usage_info={ 'new_instance_type_id': 1, 'new_instance_type': 'flavor1'}), ]) mock_notify_exists.assert_called_once_with( self.compute.notifier, self.context, instance, 'fake-mini', current_period=True) def test__claim_pci_for_instance_no_vifs(self): @mock.patch.object(self.compute, 'rt') @mock.patch.object(pci_request, 'get_instance_pci_request_from_vif') @mock.patch.object(self.instance, 'get_network_info') def _test(mock_get_network_info, mock_get_instance_pci_request_from_vif, rt_mock): # when no VIFS, expecting no pci devices to be claimed mock_get_network_info.return_value = [] port_id_to_pci = self.compute._claim_pci_for_instance_vifs( self.context, self.instance) rt_mock.claim_pci_devices.assert_not_called() self.assertEqual(0, len(port_id_to_pci)) _test() def test__claim_pci_for_instance_vifs(self): @mock.patch.object(self.compute, 'rt') @mock.patch.object(pci_request, 'get_instance_pci_request_from_vif') @mock.patch.object(self.instance, 'get_network_info') def _test(mock_get_network_info, mock_get_instance_pci_request_from_vif, rt_mock): # when there are VIFs, expect only ones with related PCI to be # claimed and their migrate vif profile to be updated. 
# Mock needed objects nw_vifs = network_model.NetworkInfo([ network_model.VIF( id=uuids.port0, vnic_type='direct', type=network_model.VIF_TYPE_HW_VEB, profile={'pci_slot': '0000:04:00.3', 'pci_vendor_info': '15b3:1018', 'physical_network': 'default'}), network_model.VIF( id=uuids.port1, vnic_type='normal', type=network_model.VIF_TYPE_OVS, profile={'some': 'attribute'})]) mock_get_network_info.return_value = nw_vifs pci_req = objects.InstancePCIRequest(request_id=uuids.pci_req) instance_pci_reqs = objects.InstancePCIRequests(requests=[pci_req]) instance_pci_devs = objects.PciDeviceList( objects=[objects.PciDevice(request_id=uuids.pci_req, address='0000:04:00.3', vendor_id='15b3', product_id='1018')]) def get_pci_req_side_effect(context, instance, vif): if vif['id'] == uuids.port0: return pci_req return None mock_get_instance_pci_request_from_vif.side_effect = \ get_pci_req_side_effect self.instance.pci_devices = instance_pci_devs self.instance.pci_requests = instance_pci_reqs rt_mock.reset() claimed_pci_dev = objects.PciDevice(request_id=uuids.pci_req, address='0000:05:00.4', vendor_id='15b3', product_id='1018') rt_mock.claim_pci_devices.return_value = [claimed_pci_dev] # Do the actual test port_id_to_pci = self.compute._claim_pci_for_instance_vifs( self.context, self.instance) self.assertEqual(len(nw_vifs), mock_get_instance_pci_request_from_vif.call_count) self.assertTrue(rt_mock.claim_pci_devices.called) self.assertEqual(len(port_id_to_pci), 1) _test() def test__update_migrate_vifs_profile_with_pci(self): # Define two migrate vifs with only one pci that is required # to be updated. Make sure method under test updated the correct one nw_vifs = network_model.NetworkInfo( [network_model.VIF( id=uuids.port0, vnic_type='direct', type=network_model.VIF_TYPE_HW_VEB, profile={'pci_slot': '0000:04:00.3', 'pci_vendor_info': '15b3:1018', 'physical_network': 'default'}), network_model.VIF( id=uuids.port1, vnic_type='normal', type=network_model.VIF_TYPE_OVS, profile={'some': 'attribute'})]) pci_dev = objects.PciDevice(request_id=uuids.pci_req, address='0000:05:00.4', vendor_id='15b3', product_id='1018') port_id_to_pci_dev = {uuids.port0: pci_dev} mig_vifs = migrate_data_obj.VIFMigrateData.\ create_skeleton_migrate_vifs(nw_vifs) self.compute._update_migrate_vifs_profile_with_pci(mig_vifs, port_id_to_pci_dev) # Make sure method under test updated the correct one. changed_mig_vif = mig_vifs[0] unchanged_mig_vif = mig_vifs[1] # Migrate vifs profile was updated with pci_dev.address # for port ID uuids.port0. self.assertEqual(changed_mig_vif.profile['pci_slot'], pci_dev.address) # Migrate vifs profile was unchanged for port ID uuids.port1. # i.e 'profile' attribute does not exist. self.assertNotIn('profile', unchanged_mig_vif) def test_get_updated_nw_info_with_pci_mapping(self): old_dev = objects.PciDevice(address='0000:04:00.2') new_dev = objects.PciDevice(address='0000:05:00.3') pci_mapping = {old_dev.address: new_dev} nw_info = network_model.NetworkInfo([ network_model.VIF( id=uuids.port1, vnic_type=network_model.VNIC_TYPE_NORMAL), network_model.VIF( id=uuids.port2, vnic_type=network_model.VNIC_TYPE_DIRECT, profile={'pci_slot': old_dev.address})]) updated_nw_info = self.compute._get_updated_nw_info_with_pci_mapping( nw_info, pci_mapping) self.assertDictEqual(nw_info[0], updated_nw_info[0]) self.assertEqual(new_dev.address, updated_nw_info[1]['profile']['pci_slot']) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'get_allocs_for_consumer') def test_prep_snapshot_based_resize_at_dest(self, get_allocs): """Tests happy path for prep_snapshot_based_resize_at_dest""" # Setup mocks. flavor = self.instance.flavor limits = objects.SchedulerLimits() request_spec = objects.RequestSpec() # resize_claim normally sets instance.migration_context and returns # a MoveClaim which is a context manager. Rather than deal with # mocking a context manager we just set the migration_context on the # fake instance ahead of time to ensure it is returned as expected. self.instance.migration_context = objects.MigrationContext() with test.nested( mock.patch.object(self.compute, '_send_prep_resize_notifications'), mock.patch.object(self.compute.rt, 'resize_claim'), ) as ( _send_prep_resize_notifications, resize_claim, ): # Run the code. mc = self.compute.prep_snapshot_based_resize_at_dest( self.context, self.instance, flavor, 'nodename', self.migration, limits, request_spec) self.assertIs(mc, self.instance.migration_context) # Assert the mock calls. _send_prep_resize_notifications.assert_has_calls([ mock.call(self.context, self.instance, fields.NotificationPhase.START, flavor), mock.call(self.context, self.instance, fields.NotificationPhase.END, flavor)]) resize_claim.assert_called_once_with( self.context, self.instance, flavor, 'nodename', self.migration, get_allocs.return_value['allocations'], image_meta=test.MatchType(objects.ImageMeta), limits=limits) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocs_for_consumer') @mock.patch('nova.compute.utils.add_instance_fault_from_exc') def test_prep_snapshot_based_resize_at_dest_get_allocs_fails( self, add_fault, get_allocs): """Tests that getting allocations fails and ExpectedException is raised with the MigrationPreCheckError inside. """ # Setup mocks. flavor = self.instance.flavor limits = objects.SchedulerLimits() request_spec = objects.RequestSpec() ex1 = exception.ConsumerAllocationRetrievalFailed( consumer_uuid=self.instance.uuid, error='oops') get_allocs.side_effect = ex1 with test.nested( mock.patch.object(self.compute, '_send_prep_resize_notifications'), mock.patch.object(self.compute.rt, 'resize_claim') ) as ( _send_prep_resize_notifications, resize_claim, ): # Run the code. ex2 = self.assertRaises( messaging.ExpectedException, self.compute.prep_snapshot_based_resize_at_dest, self.context, self.instance, flavor, 'nodename', self.migration, limits, request_spec) wrapped_exc = ex2.exc_info[1] # The original error should be in the MigrationPreCheckError which # itself is in the ExpectedException. self.assertIn(ex1.format_message(), six.text_type(wrapped_exc)) # Assert the mock calls. _send_prep_resize_notifications.assert_has_calls([ mock.call(self.context, self.instance, fields.NotificationPhase.START, flavor), mock.call(self.context, self.instance, fields.NotificationPhase.END, flavor)]) resize_claim.assert_not_called() # Assert the decorators that are triggered on error add_fault.assert_called_once_with( self.context, self.instance, wrapped_exc, mock.ANY) # There would really be three notifications but because we mocked out # _send_prep_resize_notifications there is just the one error # notification from the wrap_exception decorator. self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self.assertEqual( 'compute.%s' % fields.NotificationAction.EXCEPTION, fake_notifier.VERSIONED_NOTIFICATIONS[0]['event_type']) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'get_allocs_for_consumer') @mock.patch('nova.compute.utils.add_instance_fault_from_exc') def test_prep_snapshot_based_resize_at_dest_claim_fails( self, add_fault, get_allocs): """Tests that the resize_claim fails and ExpectedException is raised with the MigrationPreCheckError inside. """ # Setup mocks. flavor = self.instance.flavor limits = objects.SchedulerLimits() request_spec = objects.RequestSpec() ex1 = exception.ComputeResourcesUnavailable(reason='numa') with test.nested( mock.patch.object(self.compute, '_send_prep_resize_notifications'), mock.patch.object(self.compute.rt, 'resize_claim', side_effect=ex1) ) as ( _send_prep_resize_notifications, resize_claim, ): # Run the code. ex2 = self.assertRaises( messaging.ExpectedException, self.compute.prep_snapshot_based_resize_at_dest, self.context, self.instance, flavor, 'nodename', self.migration, limits, request_spec) wrapped_exc = ex2.exc_info[1] # The original error should be in the MigrationPreCheckError which # itself is in the ExpectedException. self.assertIn(ex1.format_message(), six.text_type(wrapped_exc)) # Assert the mock calls. _send_prep_resize_notifications.assert_has_calls([ mock.call(self.context, self.instance, fields.NotificationPhase.START, flavor), mock.call(self.context, self.instance, fields.NotificationPhase.END, flavor)]) resize_claim.assert_called_once_with( self.context, self.instance, flavor, 'nodename', self.migration, get_allocs.return_value['allocations'], image_meta=test.MatchType(objects.ImageMeta), limits=limits) # Assert the decorators that are triggered on error add_fault.assert_called_once_with( self.context, self.instance, wrapped_exc, mock.ANY) # There would really be three notifications but because we mocked out # _send_prep_resize_notifications there is just the one error # notification from the wrap_exception decorator. self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self.assertEqual( 'compute.%s' % fields.NotificationAction.EXCEPTION, fake_notifier.VERSIONED_NOTIFICATIONS[0]['event_type']) def test_snapshot_for_resize(self): """Happy path test for _snapshot_for_resize.""" with mock.patch.object(self.compute.driver, 'snapshot') as snapshot: self.compute._snapshot_for_resize( self.context, uuids.image_id, self.instance) snapshot.assert_called_once_with( self.context, self.instance, uuids.image_id, mock.ANY) def test_snapshot_for_resize_delete_image_on_error(self): """Tests that any exception raised from _snapshot_for_resize will result in attempting to delete the image. 
""" with mock.patch.object(self.compute.driver, 'snapshot', side_effect=test.TestingException) as snapshot: with mock.patch.object(self.compute.image_api, 'delete') as delete: self.assertRaises(test.TestingException, self.compute._snapshot_for_resize, self.context, uuids.image_id, self.instance) snapshot.assert_called_once_with( self.context, self.instance, uuids.image_id, mock.ANY) delete.assert_called_once_with(self.context, uuids.image_id) @mock.patch('nova.objects.Instance.get_bdms', return_value=objects.BlockDeviceMappingList()) @mock.patch('nova.objects.Instance.save') @mock.patch( 'nova.compute.manager.InstanceEvents.clear_events_for_instance') def _test_prep_snapshot_based_resize_at_source( self, clear_events_for_instance, instance_save, get_bdms, snapshot_id=None): """Happy path test for prep_snapshot_based_resize_at_source.""" with test.nested( mock.patch.object( self.compute.network_api, 'get_instance_nw_info', return_value=network_model.NetworkInfo()), mock.patch.object( self.compute, '_send_resize_instance_notifications'), mock.patch.object(self.compute, '_power_off_instance'), mock.patch.object(self.compute, '_get_power_state', return_value=power_state.SHUTDOWN), mock.patch.object(self.compute, '_snapshot_for_resize'), mock.patch.object(self.compute, '_get_instance_block_device_info'), mock.patch.object(self.compute.driver, 'destroy'), mock.patch.object(self.compute, '_terminate_volume_connections'), mock.patch.object( self.compute.network_api, 'migrate_instance_start') ) as ( get_instance_nw_info, _send_resize_instance_notifications, _power_off_instance, _get_power_state, _snapshot_for_resize, _get_instance_block_device_info, destroy, _terminate_volume_connections, migrate_instance_start ): self.compute.prep_snapshot_based_resize_at_source( self.context, self.instance, self.migration, snapshot_id=snapshot_id) # Assert the mock calls. 
get_instance_nw_info.assert_called_once_with( self.context, self.instance) _send_resize_instance_notifications.assert_has_calls([ mock.call(self.context, self.instance, get_bdms.return_value, get_instance_nw_info.return_value, fields.NotificationPhase.START), mock.call(self.context, self.instance, get_bdms.return_value, get_instance_nw_info.return_value, fields.NotificationPhase.END)]) _power_off_instance.assert_called_once_with( self.context, self.instance) self.assertEqual(power_state.SHUTDOWN, self.instance.power_state) if snapshot_id is None: _snapshot_for_resize.assert_not_called() else: _snapshot_for_resize.assert_called_once_with( self.context, snapshot_id, self.instance) _get_instance_block_device_info.assert_called_once_with( self.context, self.instance, bdms=get_bdms.return_value) destroy.assert_called_once_with( self.context, self.instance, get_instance_nw_info.return_value, block_device_info=_get_instance_block_device_info.return_value, destroy_disks=False) _terminate_volume_connections.assert_called_once_with( self.context, self.instance, get_bdms.return_value) migrate_instance_start.assert_called_once_with( self.context, self.instance, self.migration) self.assertEqual('post-migrating', self.migration.status) self.assertEqual(2, self.migration.save.call_count) self.assertEqual(task_states.RESIZE_MIGRATED, self.instance.task_state) instance_save.assert_called_once_with( expected_task_state=task_states.RESIZE_MIGRATING) clear_events_for_instance.assert_called_once_with(self.instance) def test_prep_snapshot_based_resize_at_source_with_snapshot(self): self._test_prep_snapshot_based_resize_at_source( snapshot_id=uuids.snapshot_id) def test_prep_snapshot_based_resize_at_source_without_snapshot(self): self._test_prep_snapshot_based_resize_at_source() @mock.patch('nova.objects.Instance.get_bdms', return_value=objects.BlockDeviceMappingList()) def test_prep_snapshot_based_resize_at_source_power_off_failure( self, get_bdms): """Tests that the power off fails and raises InstancePowerOffFailure""" with test.nested( mock.patch.object( self.compute.network_api, 'get_instance_nw_info', return_value=network_model.NetworkInfo()), mock.patch.object( self.compute, '_send_resize_instance_notifications'), mock.patch.object(self.compute, '_power_off_instance', side_effect=test.TestingException), ) as ( get_instance_nw_info, _send_resize_instance_notifications, _power_off_instance, ): self.assertRaises( exception.InstancePowerOffFailure, self.compute._prep_snapshot_based_resize_at_source, self.context, self.instance, self.migration) _power_off_instance.assert_called_once_with( self.context, self.instance) @mock.patch('nova.objects.Instance.get_bdms', return_value=objects.BlockDeviceMappingList()) @mock.patch('nova.objects.Instance.save') def test_prep_snapshot_based_resize_at_source_destroy_error( self, instance_save, get_bdms): """Tests that the driver.destroy fails and _error_out_instance_on_exception sets the instance.vm_state='error'. 
""" with test.nested( mock.patch.object( self.compute.network_api, 'get_instance_nw_info', return_value=network_model.NetworkInfo()), mock.patch.object( self.compute, '_send_resize_instance_notifications'), mock.patch.object(self.compute, '_power_off_instance'), mock.patch.object(self.compute, '_get_power_state', return_value=power_state.SHUTDOWN), mock.patch.object(self.compute, '_snapshot_for_resize'), mock.patch.object(self.compute, '_get_instance_block_device_info'), mock.patch.object(self.compute.driver, 'destroy', side_effect=test.TestingException), ) as ( get_instance_nw_info, _send_resize_instance_notifications, _power_off_instance, _get_power_state, _snapshot_for_resize, _get_instance_block_device_info, destroy, ): self.assertRaises( test.TestingException, self.compute._prep_snapshot_based_resize_at_source, self.context, self.instance, self.migration) destroy.assert_called_once_with( self.context, self.instance, get_instance_nw_info.return_value, block_device_info=_get_instance_block_device_info.return_value, destroy_disks=False) instance_save.assert_called_once_with() self.assertEqual(vm_states.ERROR, self.instance.vm_state) @mock.patch('nova.compute.utils.add_instance_fault_from_exc') @mock.patch('nova.objects.Instance.save') def test_prep_snapshot_based_resize_at_source_general_error_handling( self, instance_save, add_fault): """Tests the general error handling and allocation rollback code when _prep_snapshot_based_resize_at_source raises an exception. """ ex1 = exception.InstancePowerOffFailure(reason='testing') with mock.patch.object( self.compute, '_prep_snapshot_based_resize_at_source', side_effect=ex1) as _prep_snapshot_based_resize_at_source: self.instance.task_state = task_states.RESIZE_MIGRATING ex2 = self.assertRaises( messaging.ExpectedException, self.compute.prep_snapshot_based_resize_at_source, self.context, self.instance, self.migration, snapshot_id=uuids.snapshot_id) # The InstancePowerOffFailure should be wrapped in the # ExpectedException. wrapped_exc = ex2.exc_info[1] self.assertIn('Failed to power off instance: testing', six.text_type(wrapped_exc)) # Assert the non-decorator mock calls. _prep_snapshot_based_resize_at_source.assert_called_once_with( self.context, self.instance, self.migration, snapshot_id=uuids.snapshot_id) # Assert wrap_instance_fault is called. add_fault.assert_called_once_with( self.context, self.instance, wrapped_exc, mock.ANY) # Assert wrap_exception is called. self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self.assertEqual( 'compute.%s' % fields.NotificationAction.EXCEPTION, fake_notifier.VERSIONED_NOTIFICATIONS[0]['event_type']) # Assert errors_out_migration is called. self.assertEqual('error', self.migration.status) self.migration.save.assert_called_once_with() # Assert reverts_task_state is called. self.assertIsNone(self.instance.task_state) instance_save.assert_called_once_with() @mock.patch('nova.compute.utils.add_instance_fault_from_exc') @mock.patch('nova.objects.Instance.save') def test_finish_snapshot_based_resize_at_dest_outer_error( self, instance_save, add_fault): """Tests the error handling on the finish_snapshot_based_resize_at_dest method. 
""" request_spec = objects.RequestSpec() self.instance.task_state = task_states.RESIZE_MIGRATED with mock.patch.object( self.compute, '_finish_snapshot_based_resize_at_dest', side_effect=test.TestingException('oops')) as _finish: ex = self.assertRaises( test.TestingException, self.compute.finish_snapshot_based_resize_at_dest, self.context, self.instance, self.migration, uuids.snapshot_id, request_spec) # Assert the non-decorator mock calls. _finish.assert_called_once_with( self.context, self.instance, self.migration, uuids.snapshot_id) # Assert _error_out_instance_on_exception is called. self.assertEqual(vm_states.ERROR, self.instance.vm_state) # Assert wrap_instance_fault is called. add_fault.assert_called_once_with( self.context, self.instance, ex, mock.ANY) # Assert wrap_exception is called. self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self.assertEqual( 'compute.%s' % fields.NotificationAction.EXCEPTION, fake_notifier.VERSIONED_NOTIFICATIONS[0]['event_type']) # Assert errors_out_migration is called. self.assertEqual('error', self.migration.status) self.migration.save.assert_called_once_with() # Assert reverts_task_state is called. self.assertIsNone(self.instance.task_state) # Instance.save is called twice: # 1. _error_out_instance_on_exception # 2. reverts_task_state self.assertEqual(2, instance_save.call_count) @mock.patch('nova.objects.Instance.get_bdms') @mock.patch('nova.objects.Instance.apply_migration_context') @mock.patch('nova.objects.Instance.save') @mock.patch('nova.compute.manager.ComputeManager.' '_send_finish_resize_notifications') @mock.patch('nova.compute.manager.ComputeManager.' '_finish_snapshot_based_resize_at_dest_spawn') @mock.patch('nova.objects.ImageMeta.from_image_ref') @mock.patch('nova.compute.utils.delete_image') def _test_finish_snapshot_based_resize_at_dest( self, delete_image, from_image_ref, _finish_spawn, notify, inst_save, apply_migration_context, get_bdms, snapshot_id=None): """Happy path test for finish_snapshot_based_resize_at_dest.""" # Setup the fake instance. request_spec = objects.RequestSpec() self.instance.task_state = task_states.RESIZE_MIGRATED nwinfo = network_model.NetworkInfo([ network_model.VIF(id=uuids.port_id)]) self.instance.info_cache = objects.InstanceInfoCache( network_info=nwinfo) self.instance.new_flavor = fake_flavor.fake_flavor_obj(self.context) old_flavor = self.instance.flavor # Mock out ImageMeta. if snapshot_id: from_image_ref.return_value = objects.ImageMeta() # Setup the fake migration. self.migration.migration_type = 'resize' self.migration.dest_compute = uuids.dest self.migration.dest_node = uuids.dest with mock.patch.object(self.compute, 'network_api') as network_api: network_api.get_instance_nw_info.return_value = nwinfo # Run that big beautiful code! self.compute.finish_snapshot_based_resize_at_dest( self.context, self.instance, self.migration, snapshot_id, request_spec) # Check the changes to the instance and migration object. self.assertEqual(vm_states.RESIZED, self.instance.vm_state) self.assertIsNone(self.instance.task_state) self.assertIs(self.instance.flavor, self.instance.new_flavor) self.assertIs(self.instance.old_flavor, old_flavor) self.assertEqual(self.migration.dest_compute, self.instance.host) self.assertEqual(self.migration.dest_node, self.instance.node) self.assertEqual('finished', self.migration.status) # Assert the mock calls. 
if snapshot_id: from_image_ref.assert_called_once_with( self.context, self.compute.image_api, snapshot_id) delete_image.assert_called_once_with( self.context, self.instance, self.compute.image_api, snapshot_id) else: from_image_ref.assert_not_called() delete_image.assert_not_called() # The instance migration context was applied and changes were saved # to the instance twice. apply_migration_context.assert_called_once_with() inst_save.assert_has_calls([ mock.call(expected_task_state=task_states.RESIZE_MIGRATED), mock.call(expected_task_state=task_states.RESIZE_FINISH)]) self.migration.save.assert_called_once_with() # Start and end notifications were sent. notify.assert_has_calls([ mock.call(self.context, self.instance, get_bdms.return_value, nwinfo, fields.NotificationPhase.START), mock.call(self.context, self.instance, get_bdms.return_value, nwinfo, fields.NotificationPhase.END)]) # Volumes and networking were setup prior to calling driver spawn. spawn_image_meta = from_image_ref.return_value \ if snapshot_id else test.MatchType(objects.ImageMeta) _finish_spawn.assert_called_once_with( self.context, self.instance, self.migration, spawn_image_meta, get_bdms.return_value) def test_finish_snapshot_based_resize_at_dest_image_backed(self): """Happy path test for finish_snapshot_based_resize_at_dest with an image-backed server where snapshot_id is provided. """ self._test_finish_snapshot_based_resize_at_dest( snapshot_id=uuids.snapshot_id) def test_finish_snapshot_based_resize_at_dest_volume_backed(self): """Happy path test for finish_snapshot_based_resize_at_dest with a volume-backed server where snapshot_id is None. """ self._test_finish_snapshot_based_resize_at_dest(snapshot_id=None) @mock.patch('nova.compute.manager.ComputeManager._prep_block_device') @mock.patch('nova.compute.manager.ComputeManager.' '_remove_volume_connection') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocations_for_consumer') def _test_finish_snapshot_based_resize_at_dest_spawn_fails( self, get_allocs, remove_volume_connection, _prep_block_device, volume_backed=False): """Tests _finish_snapshot_based_resize_at_dest_spawn where spawn fails. """ nwinfo = network_model.NetworkInfo([ network_model.VIF(id=uuids.port_id)]) self.instance.system_metadata['old_vm_state'] = vm_states.STOPPED # Mock out BDMs. if volume_backed: # Single remote volume BDM. bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping( source_type='volume', destination_type='volume', volume_id=uuids.volume_id, boot_index=0)]) else: # Single local image BDM. bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping( source_type='image', destination_type='local', image_id=uuids.image_id, boot_index=0)]) self.migration.migration_type = 'migration' self.migration.dest_compute = uuids.dest self.migration.source_compute = uuids.source image_meta = self.instance.image_meta # Stub out migrate_instance_start so we can assert how it is called. def fake_migrate_instance_start(context, instance, migration): # Make sure the migration.dest_compute was temporarily changed # to the source_compute value. 
self.assertEqual(uuids.source, migration.dest_compute) with test.nested( mock.patch.object(self.compute, 'network_api'), mock.patch.object(self.compute.driver, 'spawn', side_effect=test.TestingException('spawn fail')), ) as ( network_api, spawn, ): network_api.get_instance_nw_info.return_value = nwinfo network_api.migrate_instance_start.side_effect = \ fake_migrate_instance_start # Run that big beautiful code! self.assertRaises( test.TestingException, self.compute._finish_snapshot_based_resize_at_dest_spawn, self.context, self.instance, self.migration, image_meta, bdms) # Assert the mock calls. # Volumes and networking were setup prior to calling driver spawn. _prep_block_device.assert_called_once_with( self.context, self.instance, bdms) get_allocs.assert_called_once_with(self.context, self.instance.uuid) network_api.migrate_instance_finish.assert_called_once_with( self.context, self.instance, self.migration, provider_mappings=None) spawn.assert_called_once_with( self.context, self.instance, image_meta, injected_files=[], admin_password=None, allocations=get_allocs.return_value, network_info=nwinfo, block_device_info=_prep_block_device.return_value, power_on=False) # Port bindings were rolled back to the source host. network_api.migrate_instance_start.assert_called_once_with( self.context, self.instance, self.migration) if volume_backed: # Volume connections were deleted. remove_volume_connection.assert_called_once_with( self.context, bdms[0], self.instance, delete_attachment=True) else: remove_volume_connection.assert_not_called() def test_finish_snapshot_based_resize_at_dest_spawn_fails_image_back(self): """Tests _finish_snapshot_based_resize_at_dest_spawn failing with an image-backed server. """ self._test_finish_snapshot_based_resize_at_dest_spawn_fails( volume_backed=False) def test_finish_snapshot_based_resize_at_dest_spawn_fails_vol_backed(self): """Tests _finish_snapshot_based_resize_at_dest_spawn failing with a volume-backed server. """ self._test_finish_snapshot_based_resize_at_dest_spawn_fails( volume_backed=True) @mock.patch('nova.compute.manager.ComputeManager._prep_block_device') @mock.patch('nova.compute.manager.ComputeManager.' '_remove_volume_connection') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocations_for_consumer') def test_finish_snapshot_based_resize_at_dest_spawn_fail_graceful_rollback( self, get_allocs, remove_volume_connection, _prep_block_device): """Tests that the cleanup except block is graceful in that one failure does not prevent trying to cleanup the other resources. """ nwinfo = network_model.NetworkInfo([ network_model.VIF(id=uuids.port_id)]) self.instance.system_metadata['old_vm_state'] = vm_states.STOPPED # Three BDMs: two volume (one of which will fail rollback) and a local. bdms = objects.BlockDeviceMappingList(objects=[ # First volume BDM which fails rollback. objects.BlockDeviceMapping( destination_type='volume', volume_id=uuids.bad_volume), # Second volume BDM is rolled back. objects.BlockDeviceMapping( destination_type='volume', volume_id=uuids.good_volume), # Third BDM is a local image BDM so we do not try to roll it back. 
objects.BlockDeviceMapping( destination_type='local', image_id=uuids.image_id) ]) self.migration.migration_type = 'migration' self.migration.dest_compute = uuids.dest self.migration.source_compute = uuids.source image_meta = self.instance.image_meta with test.nested( mock.patch.object(self.compute, 'network_api'), mock.patch.object(self.compute.driver, 'spawn', side_effect=test.TestingException( 'spawn fail')), ) as ( network_api, spawn, ): network_api.get_instance_nw_info.return_value = nwinfo # Mock migrate_instance_start to fail on rollback. network_api.migrate_instance_start.side_effect = \ exception.PortNotFound(port_id=uuids.port_id) # Mock remove_volume_connection to fail on the first call. remove_volume_connection.side_effect = [ exception.CinderConnectionFailed(reason='gremlins'), None] # Run that big beautiful code! self.assertRaises( test.TestingException, self.compute._finish_snapshot_based_resize_at_dest_spawn, self.context, self.instance, self.migration, image_meta, bdms) # Assert the mock calls. # Volumes and networking were setup prior to calling driver spawn. _prep_block_device.assert_called_once_with( self.context, self.instance, bdms) get_allocs.assert_called_once_with(self.context, self.instance.uuid) network_api.migrate_instance_finish.assert_called_once_with( self.context, self.instance, self.migration, provider_mappings=None) spawn.assert_called_once_with( self.context, self.instance, image_meta, injected_files=[], admin_password=None, allocations=get_allocs.return_value, network_info=nwinfo, block_device_info=_prep_block_device.return_value, power_on=False) # Port bindings were rolled back to the source host. network_api.migrate_instance_start.assert_called_once_with( self.context, self.instance, self.migration) # Volume connections were deleted. remove_volume_connection.assert_has_calls([ mock.call(self.context, bdms[0], self.instance, delete_attachment=True), mock.call(self.context, bdms[1], self.instance, delete_attachment=True)]) # Assert the expected errors to get logged. self.assertIn('Failed to activate port bindings on the source', self.stdlog.logger.output) self.assertIn('Failed to remove volume connection', self.stdlog.logger.output) @mock.patch('nova.compute.manager.ComputeManager._prep_block_device') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocations_for_consumer') def test_finish_snapshot_based_resize_at_dest_spawn( self, get_allocs, _prep_block_device): """Happy path test for test_finish_snapshot_based_resize_at_dest_spawn. """ nwinfo = network_model.NetworkInfo([ network_model.VIF(id=uuids.port_id)]) self.instance.system_metadata['old_vm_state'] = vm_states.ACTIVE self.migration.migration_type = 'migration' self.migration.dest_compute = uuids.dest self.migration.source_compute = uuids.source image_meta = self.instance.image_meta bdms = objects.BlockDeviceMappingList() with test.nested( mock.patch.object(self.compute, 'network_api'), mock.patch.object(self.compute.driver, 'spawn') ) as ( network_api, spawn, ): network_api.get_instance_nw_info.return_value = nwinfo # Run that big beautiful code! self.compute._finish_snapshot_based_resize_at_dest_spawn( self.context, self.instance, self.migration, image_meta, bdms) # Assert the mock calls. 
_prep_block_device.assert_called_once_with( self.context, self.instance, bdms) get_allocs.assert_called_once_with(self.context, self.instance.uuid) network_api.migrate_instance_finish.assert_called_once_with( self.context, self.instance, self.migration, provider_mappings=None) spawn.assert_called_once_with( self.context, self.instance, image_meta, injected_files=[], admin_password=None, allocations=get_allocs.return_value, network_info=nwinfo, block_device_info=_prep_block_device.return_value, power_on=True) @mock.patch('nova.objects.Instance.save') @mock.patch('nova.compute.manager.ComputeManager.' '_delete_allocation_after_move') @mock.patch('nova.compute.utils.add_instance_fault_from_exc') def test_confirm_snapshot_based_resize_at_source_error_handling( self, mock_add_fault, mock_delete_allocs, mock_inst_save): """Tests the error handling in confirm_snapshot_based_resize_at_source when a failure occurs. """ error = test.TestingException('oops') with mock.patch.object( self.compute, '_confirm_snapshot_based_resize_at_source', side_effect=error) as confirm_mock: self.assertRaises( test.TestingException, self.compute.confirm_snapshot_based_resize_at_source, self.context, self.instance, self.migration) confirm_mock.assert_called_once_with( self.context, self.instance, self.migration) self.assertIn('Confirm resize failed on source host', self.stdlog.logger.output) # _error_out_instance_on_exception should set the instance to ERROR self.assertEqual(vm_states.ERROR, self.instance.vm_state) mock_inst_save.assert_called() mock_delete_allocs.assert_called_once_with( self.context, self.instance, self.migration) # wrap_instance_fault should record a fault mock_add_fault.assert_called_once_with( self.context, self.instance, error, test.MatchType(tuple)) # errors_out_migration should set the migration status to 'error' self.assertEqual('error', self.migration.status) self.migration.save.assert_called_once_with() # Assert wrap_exception is called. self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self.assertEqual( 'compute.%s' % fields.NotificationAction.EXCEPTION, fake_notifier.VERSIONED_NOTIFICATIONS[0]['event_type']) @mock.patch('nova.objects.Instance.save') @mock.patch('nova.compute.manager.ComputeManager.' '_confirm_snapshot_based_resize_at_source') @mock.patch('nova.compute.utils.add_instance_fault_from_exc') def test_confirm_snapshot_based_resize_at_source_delete_alloc_fails( self, mock_add_fault, mock_confirm, mock_inst_save): """Tests the error handling in confirm_snapshot_based_resize_at_source when _delete_allocation_after_move fails. 
""" error = exception.AllocationDeleteFailed( consumer_uuid=self.migration.uuid, error='placement down') with mock.patch.object( self.compute, '_delete_allocation_after_move', side_effect=error) as mock_delete_allocs: self.assertRaises( exception.AllocationDeleteFailed, self.compute.confirm_snapshot_based_resize_at_source, self.context, self.instance, self.migration) mock_confirm.assert_called_once_with( self.context, self.instance, self.migration) # _error_out_instance_on_exception should set the instance to ERROR self.assertEqual(vm_states.ERROR, self.instance.vm_state) mock_inst_save.assert_called() mock_delete_allocs.assert_called_once_with( self.context, self.instance, self.migration) # wrap_instance_fault should record a fault mock_add_fault.assert_called_once_with( self.context, self.instance, error, test.MatchType(tuple)) # errors_out_migration should set the migration status to 'error' self.assertEqual('error', self.migration.status) self.migration.save.assert_called_once_with() # Assert wrap_exception is called. self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self.assertEqual( 'compute.%s' % fields.NotificationAction.EXCEPTION, fake_notifier.VERSIONED_NOTIFICATIONS[0]['event_type']) @mock.patch('nova.objects.Instance.get_bdms') @mock.patch('nova.compute.manager.ComputeManager.' '_delete_allocation_after_move') @mock.patch('nova.compute.manager.ComputeManager.' '_confirm_snapshot_based_resize_delete_port_bindings') @mock.patch('nova.compute.manager.ComputeManager.' '_delete_volume_attachments') def test_confirm_snapshot_based_resize_at_source( self, mock_delete_vols, mock_delete_bindings, mock_delete_allocs, mock_get_bdms): """Happy path test for confirm_snapshot_based_resize_at_source.""" self.instance.old_flavor = objects.Flavor() with test.nested( mock.patch.object(self.compute, 'network_api'), mock.patch.object(self.compute.driver, 'cleanup'), mock.patch.object(self.compute.rt, 'drop_move_claim_at_source') ) as ( mock_network_api, mock_cleanup, mock_drop_claim, ): # Run the code. self.compute.confirm_snapshot_based_resize_at_source( self.context, self.instance, self.migration) # Assert the mocks. mock_network_api.get_instance_nw_info.assert_called_once_with( self.context, self.instance) # The guest was cleaned up. mock_cleanup.assert_called_once_with( self.context, self.instance, mock_network_api.get_instance_nw_info.return_value, block_device_info=None, destroy_disks=True, destroy_vifs=False) # Ports and volumes were cleaned up. mock_delete_bindings.assert_called_once_with( self.context, self.instance, self.migration) mock_delete_vols.assert_called_once_with( self.context, mock_get_bdms.return_value) # Move claim and migration context were dropped. mock_drop_claim.assert_called_once_with( self.context, self.instance, self.migration) mock_delete_allocs.assert_called_once_with( self.context, self.instance, self.migration) def test_confirm_snapshot_based_resize_delete_port_bindings(self): """Happy path test for _confirm_snapshot_based_resize_delete_port_bindings. """ with mock.patch.object( self.compute.network_api, 'cleanup_instance_network_on_host') as cleanup_networks: self.compute._confirm_snapshot_based_resize_delete_port_bindings( self.context, self.instance, self.migration) cleanup_networks.assert_called_once_with( self.context, self.instance, self.compute.host) def test_confirm_snapshot_based_resize_delete_port_bindings_errors(self): """Tests error handling for _confirm_snapshot_based_resize_delete_port_bindings. 
""" # PortBindingDeletionFailed will be caught and logged. with mock.patch.object( self.compute.network_api, 'cleanup_instance_network_on_host', side_effect=exception.PortBindingDeletionFailed( port_id=uuids.port_id, host=self.compute.host) ) as cleanup_networks: self.compute._confirm_snapshot_based_resize_delete_port_bindings( self.context, self.instance, self.migration) cleanup_networks.assert_called_once_with( self.context, self.instance, self.compute.host) self.assertIn('Failed to delete port bindings from source host', self.stdlog.logger.output) # Anything else is re-raised. func = self.compute._confirm_snapshot_based_resize_delete_port_bindings with mock.patch.object( self.compute.network_api, 'cleanup_instance_network_on_host', side_effect=test.TestingException('neutron down') ) as cleanup_networks: self.assertRaises(test.TestingException, func, self.context, self.instance, self.migration) cleanup_networks.assert_called_once_with( self.context, self.instance, self.compute.host) def test_delete_volume_attachments(self): """Happy path test for _delete_volume_attachments.""" bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping(attachment_id=uuids.attachment1), objects.BlockDeviceMapping(attachment_id=None), # skipped objects.BlockDeviceMapping(attachment_id=uuids.attachment2) ]) with mock.patch.object(self.compute.volume_api, 'attachment_delete') as attachment_delete: self.compute._delete_volume_attachments(self.context, bdms) self.assertEqual(2, attachment_delete.call_count) attachment_delete.assert_has_calls([ mock.call(self.context, uuids.attachment1), mock.call(self.context, uuids.attachment2)]) def test_delete_volume_attachments_errors(self): """Tests error handling for _delete_volume_attachments.""" bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping(attachment_id=uuids.attachment1, instance_uuid=self.instance.uuid), objects.BlockDeviceMapping(attachment_id=uuids.attachment2) ]) # First attachment_delete call fails and is logged, second is OK. errors = [test.TestingException('cinder down'), None] with mock.patch.object(self.compute.volume_api, 'attachment_delete', side_effect=errors) as attachment_delete: self.compute._delete_volume_attachments(self.context, bdms) self.assertEqual(2, attachment_delete.call_count) attachment_delete.assert_has_calls([ mock.call(self.context, uuids.attachment1), mock.call(self.context, uuids.attachment2)]) self.assertIn('Failed to delete volume attachment with ID %s' % uuids.attachment1, self.stdlog.logger.output) @mock.patch('nova.compute.utils.notify_usage_exists') @mock.patch('nova.objects.Instance.save') @mock.patch('nova.compute.utils.add_instance_fault_from_exc') @mock.patch('nova.compute.manager.InstanceEvents.' 'clear_events_for_instance') def test_revert_snapshot_based_resize_at_dest_error_handling( self, mock_clear_events, mock_add_fault, mock_inst_save, mock_notify_usage): """Tests error handling in revert_snapshot_based_resize_at_dest when a failure occurs. 
""" self.instance.task_state = task_states.RESIZE_REVERTING error = test.TestingException('oops') with mock.patch.object( self.compute, '_revert_snapshot_based_resize_at_dest', side_effect=error) as mock_revert: self.assertRaises( test.TestingException, self.compute.revert_snapshot_based_resize_at_dest, self.context, self.instance, self.migration) mock_notify_usage.assert_called_once_with( self.compute.notifier, self.context, self.instance, self.compute.host, current_period=True) mock_revert.assert_called_once_with( self.context, self.instance, self.migration) mock_inst_save.assert_called() # _error_out_instance_on_exception sets the instance to ERROR. self.assertEqual(vm_states.ERROR, self.instance.vm_state) # reverts_task_state will reset the task_state to None. self.assertIsNone(self.instance.task_state) # Ensure wrap_instance_fault was called. mock_add_fault.assert_called_once_with( self.context, self.instance, error, test.MatchType(tuple)) # errors_out_migration should mark the migration as 'error' status self.assertEqual('error', self.migration.status) self.migration.save.assert_called_once_with() # Assert wrap_exception is called. self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self.assertEqual( 'compute.%s' % fields.NotificationAction.EXCEPTION, fake_notifier.VERSIONED_NOTIFICATIONS[0]['event_type']) # clear_events_for_instance should not have been called. mock_clear_events.assert_not_called() @mock.patch('nova.compute.utils.notify_usage_exists', new=mock.Mock()) @mock.patch('nova.compute.manager.ComputeManager.' '_revert_snapshot_based_resize_at_dest') def test_revert_snapshot_based_resize_at_dest_post_error_log(self, revert): """Tests when _revert_snapshot_based_resize_at_dest is OK but post-processing cleanup fails and is just logged. """ # First test _delete_scheduler_instance_info failing. with mock.patch.object( self.compute, '_delete_scheduler_instance_info', side_effect=( test.TestingException('scheduler'), None)) as mock_del: self.compute.revert_snapshot_based_resize_at_dest( self.context, self.instance, self.migration) revert.assert_called_once() mock_del.assert_called_once_with(self.context, self.instance.uuid) self.assertIn('revert_snapshot_based_resize_at_dest failed during ' 'post-processing. Error: scheduler', self.stdlog.logger.output) revert.reset_mock() mock_del.reset_mock() # Now test clear_events_for_instance failing. with mock.patch.object( self.compute.instance_events, 'clear_events_for_instance', side_effect=test.TestingException( 'events')) as mock_events: self.compute.revert_snapshot_based_resize_at_dest( self.context, self.instance, self.migration) revert.assert_called_once() mock_del.assert_called_once_with(self.context, self.instance.uuid) mock_events.assert_called_once_with(self.instance) self.assertIn('revert_snapshot_based_resize_at_dest failed during ' 'post-processing. Error: events', self.stdlog.logger.output) # Assert _error_out_instance_on_exception wasn't tripped somehow. self.assertNotEqual(vm_states.ERROR, self.instance.vm_state) @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') def test_revert_snapshot_based_resize_at_dest(self, mock_get_bdms): """Happy path test for _revert_snapshot_based_resize_at_dest""" # Setup more mocks. def stub_migrate_instance_start(ctxt, instance, migration): # The migration.dest_compute should have been mutated to point # at the source compute. 
self.assertEqual(migration.source_compute, migration.dest_compute) with test.nested( mock.patch.object(self.compute, 'network_api'), mock.patch.object(self.compute, '_get_instance_block_device_info'), mock.patch.object(self.compute.driver, 'destroy'), mock.patch.object(self.compute, '_delete_volume_attachments'), mock.patch.object(self.compute.rt, 'drop_move_claim_at_dest') ) as ( mock_network_api, mock_get_bdi, mock_destroy, mock_delete_attachments, mock_drop_claim ): mock_network_api.migrate_instance_start.side_effect = \ stub_migrate_instance_start # Raise PortBindingDeletionFailed to make sure it's caught and # logged but not fatal. mock_network_api.cleanup_instance_network_on_host.side_effect = \ exception.PortBindingDeletionFailed(port_id=uuids.port_id, host=self.compute.host) # Run the code. self.compute._revert_snapshot_based_resize_at_dest( self.context, self.instance, self.migration) # Assert the calls. mock_network_api.get_instance_nw_info.assert_called_once_with( self.context, self.instance) mock_get_bdi.assert_called_once_with( self.context, self.instance, bdms=mock_get_bdms.return_value) mock_destroy.assert_called_once_with( self.context, self.instance, mock_network_api.get_instance_nw_info.return_value, block_device_info=mock_get_bdi.return_value) mock_network_api.migrate_instance_start.assert_called_once_with( self.context, self.instance, self.migration) mock_network_api.cleanup_instance_network_on_host.\ assert_called_once_with( self.context, self.instance, self.compute.host) # Assert that even though setup_networks_on_host raised # PortBindingDeletionFailed it was handled and logged. self.assertIn('Failed to delete port bindings from target host.', self.stdlog.logger.output) mock_delete_attachments.assert_called_once_with( self.context, mock_get_bdms.return_value) mock_drop_claim.assert_called_once_with( self.context, self.instance, self.migration) @mock.patch('nova.compute.manager.ComputeManager.' '_finish_revert_snapshot_based_resize_at_source') @mock.patch('nova.objects.Instance.save') @mock.patch('nova.compute.manager.ComputeManager.' '_update_scheduler_instance_info') @mock.patch('nova.compute.utils.add_instance_fault_from_exc') def test_finish_revert_snapshot_based_resize_at_source_error_handling( self, mock_add_fault, mock_update_sched, mock_inst_save, mock_finish_revert): """Tests error handling (context managers, decorators, post-processing) in finish_revert_snapshot_based_resize_at_source. """ self.instance.task_state = task_states.RESIZE_REVERTING # First make _finish_revert_snapshot_based_resize_at_source fail. error = test.TestingException('oops') mock_finish_revert.side_effect = error self.assertRaises( test.TestingException, self.compute.finish_revert_snapshot_based_resize_at_source, self.context, self.instance, self.migration) mock_finish_revert.assert_called_once_with( self.context, self.instance, self.migration) # _error_out_instance_on_exception should have set the instance status # to ERROR. mock_inst_save.assert_called() self.assertEqual(vm_states.ERROR, self.instance.vm_state) # We should not have updated the scheduler since we failed. mock_update_sched.assert_not_called() # reverts_task_state should have set the task_state to None. self.assertIsNone(self.instance.task_state) # errors_out_migration should have set the migration status to error. self.migration.save.assert_called_once_with() self.assertEqual('error', self.migration.status) # wrap_instance_fault should have recorded a fault. 
mock_add_fault.assert_called_once_with( self.context, self.instance, error, test.MatchType(tuple)) # wrap_exception should have sent an error notification. self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self.assertEqual( 'compute.%s' % fields.NotificationAction.EXCEPTION, fake_notifier.VERSIONED_NOTIFICATIONS[0]['event_type']) # Now run it again but _finish_revert_snapshot_based_resize_at_source # will pass and _update_scheduler_instance_info will fail but not be # fatal (just logged). mock_finish_revert.reset_mock(side_effect=True) # reset side_effect mock_update_sched.side_effect = test.TestingException('scheduler oops') self.compute.finish_revert_snapshot_based_resize_at_source( self.context, self.instance, self.migration) mock_finish_revert.assert_called_once_with( self.context, self.instance, self.migration) mock_update_sched.assert_called_once_with(self.context, self.instance) self.assertIn('finish_revert_snapshot_based_resize_at_source failed ' 'during post-processing. Error: scheduler oops', self.stdlog.logger.output) @mock.patch('nova.objects.Instance.save') @mock.patch('nova.compute.manager.ComputeManager._revert_allocation', side_effect=exception.AllocationMoveFailed('placement down')) @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') @mock.patch('nova.compute.manager.ComputeManager.' '_update_volume_attachments') @mock.patch('nova.compute.manager.ComputeManager.' '_finish_revert_resize_network_migrate_finish') @mock.patch('nova.compute.manager.ComputeManager.' '_get_instance_block_device_info') @mock.patch('nova.objects.Instance.drop_migration_context') @mock.patch('nova.compute.manager.ComputeManager.' '_update_instance_after_spawn') @mock.patch('nova.compute.manager.ComputeManager.' '_complete_volume_attachments') def test_finish_revert_snapshot_based_resize_at_source( self, mock_complete_attachments, mock_update_after_spawn, mock_drop_mig_context, mock_get_bdi, mock_net_migrate_finish, mock_update_attachments, mock_get_bdms, mock_revert_allocs, mock_inst_save): """Happy path test for finish_revert_snapshot_based_resize_at_source. Also makes sure some cleanups that are handled gracefully do not raise. """ # Make _update_scheduler_instance_info a no-op. self.flags(track_instance_changes=False, group='filter_scheduler') # Configure the instance with old_vm_state = STOPPED so the guest is # created but not powered on. self.instance.system_metadata['old_vm_state'] = vm_states.STOPPED # Configure the instance with an old_flavor for the revert. old_flavor = fake_flavor.fake_flavor_obj(self.context) self.instance.old_flavor = old_flavor with test.nested( mock.patch.object(self.compute.network_api, 'get_instance_nw_info'), mock.patch.object(self.compute.driver, 'finish_revert_migration') ) as ( mock_get_nw_info, mock_driver_finish ): # Run the code. self.compute.finish_revert_snapshot_based_resize_at_source( self.context, self.instance, self.migration) # Assert the instance host/node and flavor info was updated. self.assertEqual(self.migration.source_compute, self.instance.host) self.assertEqual(self.migration.source_node, self.instance.node) self.assertIs(self.instance.flavor, old_flavor) self.assertEqual(old_flavor.id, self.instance.instance_type_id) # Assert _revert_allocation was called, raised and logged the error. mock_revert_allocs.assert_called_once_with( self.context, self.instance, self.migration) self.assertIn('Reverting allocation in placement for migration %s ' 'failed.' 
% self.migration.uuid, self.stdlog.logger.output) # Assert that volume attachments were updated. mock_update_attachments.assert_called_once_with( self.context, self.instance, mock_get_bdms.return_value) # Assert that port bindings were updated to point at the source host. mock_net_migrate_finish.assert_called_once_with( self.context, self.instance, self.migration, provider_mappings=None) # Assert the driver finished reverting the migration. mock_get_bdi.assert_called_once_with( self.context, self.instance, bdms=mock_get_bdms.return_value) mock_driver_finish.assert_called_once_with( self.context, self.instance, mock_get_nw_info.return_value, self.migration, block_device_info=mock_get_bdi.return_value, power_on=False) # Assert final DB cleanup for the instance. mock_drop_mig_context.assert_called_once_with() mock_update_after_spawn.assert_called_once_with( self.context, self.instance, vm_state=vm_states.STOPPED) mock_inst_save.assert_has_calls([ mock.call(expected_task_state=[task_states.RESIZE_REVERTING])] * 2) # And finally that the volume attachments were completed. mock_complete_attachments.assert_called_once_with( self.context, mock_get_bdms.return_value) @mock.patch('nova.objects.Instance.save') @mock.patch('nova.compute.manager.ComputeManager._revert_allocation') @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') @mock.patch('nova.compute.manager.ComputeManager.' '_update_volume_attachments') @mock.patch('nova.compute.manager.ComputeManager.' '_finish_revert_resize_network_migrate_finish') @mock.patch('nova.compute.manager.ComputeManager.' '_get_instance_block_device_info') @mock.patch('nova.objects.Instance.drop_migration_context') @mock.patch('nova.compute.manager.ComputeManager.' '_update_instance_after_spawn') @mock.patch('nova.compute.manager.ComputeManager.' '_complete_volume_attachments', side_effect=test.TestingException('vol complete failed')) def test_finish_revert_snapshot_based_resize_at_source_driver_fails( self, mock_complete_attachments, mock_update_after_spawn, mock_drop_mig_context, mock_get_bdi, mock_net_migrate_finish, mock_update_attachments, mock_get_bdms, mock_revert_allocs, mock_inst_save): """Test for _finish_revert_snapshot_based_resize_at_source where the driver call to finish_revert_migration fails. """ self.instance.system_metadata['old_vm_state'] = vm_states.ACTIVE # Configure the instance with an old_flavor for the revert. old_flavor = fake_flavor.fake_flavor_obj(self.context) self.instance.old_flavor = old_flavor with test.nested( mock.patch.object(self.compute.network_api, 'get_instance_nw_info'), mock.patch.object(self.compute.driver, 'finish_revert_migration', side_effect=test.TestingException('driver fail')) ) as ( mock_get_nw_info, mock_driver_finish, ): # Run the code. ex = self.assertRaises( test.TestingException, self.compute._finish_revert_snapshot_based_resize_at_source, self.context, self.instance, self.migration) # Assert the driver call (note power_on=True b/c old_vm_state=active) mock_get_bdi.assert_called_once_with( self.context, self.instance, bdms=mock_get_bdms.return_value) mock_driver_finish.assert_called_once_with( self.context, self.instance, mock_get_nw_info.return_value, self.migration, block_device_info=mock_get_bdi.return_value, power_on=True) # _complete_volume_attachments is called even though the driver call # failed. 
mock_complete_attachments.assert_called_once_with( self.context, mock_get_bdms.return_value) # finish_revert_migration failed but _complete_volume_attachments # is still called and in this case also fails so the resulting # exception should be the one from _complete_volume_attachments # but the finish_revert_migration error should have been logged. self.assertIn('vol complete failed', six.text_type(ex)) self.assertIn('driver fail', self.stdlog.logger.output) # Assert the migration status was not updated. self.migration.save.assert_not_called() # Assert the instance host/node and flavor info was updated. self.assertEqual(self.migration.source_compute, self.instance.host) self.assertEqual(self.migration.source_node, self.instance.node) self.assertIs(self.instance.flavor, old_flavor) self.assertEqual(old_flavor.id, self.instance.instance_type_id) # Assert allocations were reverted. mock_revert_allocs.assert_called_once_with( self.context, self.instance, self.migration) # Assert that volume attachments were updated. mock_update_attachments.assert_called_once_with( self.context, self.instance, mock_get_bdms.return_value) # Assert that port bindings were updated to point at the source host. mock_net_migrate_finish.assert_called_once_with( self.context, self.instance, self.migration, provider_mappings=None) # Assert final DB cleanup for the instance did not happen. mock_drop_mig_context.assert_not_called() mock_update_after_spawn.assert_not_called() # _finish_revert_resize_update_instance_flavor_host_node updated the # instance. mock_inst_save.assert_called_once_with( expected_task_state=[task_states.RESIZE_REVERTING]) class ComputeManagerInstanceUsageAuditTestCase(test.TestCase): def setUp(self): super(ComputeManagerInstanceUsageAuditTestCase, self).setUp() self.flags(group='glance', api_servers=['http://localhost:9292']) self.flags(instance_usage_audit=True) @mock.patch('nova.objects.TaskLog') def test_deleted_instance(self, mock_task_log): mock_task_log.get.return_value = None compute = manager.ComputeManager() admin_context = context.get_admin_context() fake_db_flavor = fake_flavor.fake_db_flavor() flavor = objects.Flavor(admin_context, **fake_db_flavor) updates = {'host': compute.host, 'flavor': flavor, 'root_gb': 0, 'ephemeral_gb': 0} # fudge beginning and ending time by a second (backwards and forwards, # respectively) so they differ from the instance's launch and # termination times when sub-seconds are truncated and fall within the # audit period one_second = datetime.timedelta(seconds=1) begin = timeutils.utcnow() - one_second instance = objects.Instance(admin_context, **updates) instance.create() instance.launched_at = timeutils.utcnow() instance.save() instance.destroy() end = timeutils.utcnow() + one_second def fake_last_completed_audit_period(): return (begin, end) self.stub_out('nova.utils.last_completed_audit_period', fake_last_completed_audit_period) compute._instance_usage_audit(admin_context) self.assertEqual(1, mock_task_log().task_items, 'the deleted test instance was not found in the audit' ' period') self.assertEqual(0, mock_task_log().errors, 'an error was encountered processing the deleted test' ' instance') @ddt.ddt class ComputeManagerSetHostEnabledTestCase(test.NoDBTestCase): def setUp(self): super(ComputeManagerSetHostEnabledTestCase, self).setUp() self.compute = manager.ComputeManager() self.context = context.RequestContext(user_id=fakes.FAKE_USER_ID, project_id=fakes.FAKE_PROJECT_ID) @ddt.data(True, False) def test_set_host_enabled(self, enabled): """Happy path test 
for set_host_enabled""" with mock.patch.object(self.compute, '_update_compute_provider_status') as ucpt: retval = self.compute.set_host_enabled(self.context, enabled) expected_retval = 'enabled' if enabled else 'disabled' self.assertEqual(expected_retval, retval) ucpt.assert_called_once_with(self.context, enabled) @mock.patch('nova.compute.manager.LOG.warning') def test_set_host_enabled_compute_host_not_found(self, mock_warning): """Tests _update_compute_provider_status raising ComputeHostNotFound""" error = exception.ComputeHostNotFound(host=self.compute.host) with mock.patch.object(self.compute, '_update_compute_provider_status', side_effect=error) as ucps: retval = self.compute.set_host_enabled(self.context, False) self.assertEqual('disabled', retval) ucps.assert_called_once_with(self.context, False) # A warning should have been logged for the ComputeHostNotFound error. mock_warning.assert_called_once() self.assertIn('Unable to add/remove trait COMPUTE_STATUS_DISABLED. ' 'No ComputeNode(s) found for host', mock_warning.call_args[0][0]) def test_set_host_enabled_update_provider_status_error(self): """Tests _update_compute_provider_status raising some unexpected error """ error = messaging.MessagingTimeout with test.nested( mock.patch.object(self.compute, '_update_compute_provider_status', side_effect=error), mock.patch.object(self.compute.driver, 'set_host_enabled', # The driver is not called in this case. new_callable=mock.NonCallableMock), ) as ( ucps, driver_set_host_enabled, ): self.assertRaises(error, self.compute.set_host_enabled, self.context, False) ucps.assert_called_once_with(self.context, False) @ddt.data(True, False) def test_set_host_enabled_not_implemented_error(self, enabled): """Tests the driver raising NotImplementedError""" with test.nested( mock.patch.object(self.compute, '_update_compute_provider_status'), mock.patch.object(self.compute.driver, 'set_host_enabled', side_effect=NotImplementedError), ) as ( ucps, driver_set_host_enabled, ): retval = self.compute.set_host_enabled(self.context, enabled) expected_retval = 'enabled' if enabled else 'disabled' self.assertEqual(expected_retval, retval) ucps.assert_called_once_with(self.context, enabled) driver_set_host_enabled.assert_called_once_with(enabled) def test_set_host_enabled_driver_error(self): """Tests the driver raising some unexpected error""" error = exception.HypervisorUnavailable(host=self.compute.host) with test.nested( mock.patch.object(self.compute, '_update_compute_provider_status'), mock.patch.object(self.compute.driver, 'set_host_enabled', side_effect=error), ) as ( ucps, driver_set_host_enabled, ): self.assertRaises(exception.HypervisorUnavailable, self.compute.set_host_enabled, self.context, False) ucps.assert_called_once_with(self.context, False) driver_set_host_enabled.assert_called_once_with(False) @ddt.data(True, False) def test_update_compute_provider_status(self, enabled): """Happy path test for _update_compute_provider_status""" # Fake out some fake compute nodes (ironic driver case). 
self.compute.rt.compute_nodes = { uuids.node1: objects.ComputeNode(uuid=uuids.node1), uuids.node2: objects.ComputeNode(uuid=uuids.node2), } with mock.patch.object(self.compute.virtapi, 'update_compute_provider_status') as ucps: self.compute._update_compute_provider_status( self.context, enabled=enabled) self.assertEqual(2, ucps.call_count) ucps.assert_has_calls([ mock.call(self.context, uuids.node1, enabled), mock.call(self.context, uuids.node2, enabled), ], any_order=True) def test_update_compute_provider_status_no_nodes(self): """Tests the case that _update_compute_provider_status will raise ComputeHostNotFound if there are no nodes in the resource tracker. """ self.assertRaises(exception.ComputeHostNotFound, self.compute._update_compute_provider_status, self.context, enabled=True) @mock.patch('nova.compute.manager.LOG.warning') def test_update_compute_provider_status_expected_errors(self, m_warn): """Tests _update_compute_provider_status handling a set of expected errors from the ComputeVirtAPI and logging a warning. """ # Setup a fake compute in the resource tracker. self.compute.rt.compute_nodes = { uuids.node: objects.ComputeNode(uuid=uuids.node) } errors = ( exception.ResourceProviderTraitRetrievalFailed(uuid=uuids.node), exception.ResourceProviderUpdateConflict( uuid=uuids.node, generation=1, error='conflict'), exception.ResourceProviderUpdateFailed( url='https://placement', error='dogs'), exception.TraitRetrievalFailed(error='cats'), ) for error in errors: with mock.patch.object( self.compute.virtapi, 'update_compute_provider_status', side_effect=error) as ucps: self.compute._update_compute_provider_status( self.context, enabled=False) ucps.assert_called_once_with(self.context, uuids.node, False) # The expected errors are logged as a warning. m_warn.assert_called_once() self.assertIn('An error occurred while updating ' 'COMPUTE_STATUS_DISABLED trait on compute node', m_warn.call_args[0][0]) m_warn.reset_mock() @mock.patch('nova.compute.manager.LOG.exception') def test_update_compute_provider_status_unexpected_error(self, m_exc): """Tests _update_compute_provider_status handling an unexpected exception from the ComputeVirtAPI and logging it. """ # Use two fake nodes here to make sure we try updating each even when # an error occurs. self.compute.rt.compute_nodes = { uuids.node1: objects.ComputeNode(uuid=uuids.node1), uuids.node2: objects.ComputeNode(uuid=uuids.node2), } with mock.patch.object( self.compute.virtapi, 'update_compute_provider_status', side_effect=(TypeError, AttributeError)) as ucps: self.compute._update_compute_provider_status( self.context, enabled=False) self.assertEqual(2, ucps.call_count) ucps.assert_has_calls([ mock.call(self.context, uuids.node1, False), mock.call(self.context, uuids.node2, False), ], any_order=True) # Each exception should have been logged. self.assertEqual(2, m_exc.call_count) self.assertIn('An error occurred while updating ' 'COMPUTE_STATUS_DISABLED trait', m_exc.call_args_list[0][0][0]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/compute/test_compute_utils.py0000664000175000017500000022734600000000000023651 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # Copyright 2013 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests For miscellaneous util methods used with compute.""" import copy import datetime import string import traceback import mock from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import uuidutils import six from nova.accelerator.cyborg import _CyborgClient as cyborgclient from nova.compute import manager from nova.compute import power_state from nova.compute import task_states from nova.compute import utils as compute_utils from nova.compute import vm_states import nova.conf from nova import context from nova import exception from nova.image import glance from nova.network import model from nova import objects from nova.objects import base from nova.objects import block_device as block_device_obj from nova.objects import fields from nova import rpc from nova.scheduler.client import report from nova import test from nova.tests.unit import fake_block_device from nova.tests.unit import fake_crypto from nova.tests.unit import fake_instance from nova.tests.unit import fake_network from nova.tests.unit import fake_notifier from nova.tests.unit import fake_server_actions from nova.tests.unit.objects import test_flavor FAKE_IMAGE_REF = uuids.image_ref CONF = nova.conf.CONF def create_instance(context, user_id='fake', project_id='fake', params=None): """Create a test instance.""" flavor = objects.Flavor.get_by_name(context, 'm1.tiny') net_info = model.NetworkInfo([model.VIF(id=uuids.port_id)]) info_cache = objects.InstanceInfoCache(network_info=net_info) inst = objects.Instance(context=context, image_ref=uuids.fake_image_ref, reservation_id='r-fakeres', user_id=user_id, project_id=project_id, instance_type_id=flavor.id, flavor=flavor, old_flavor=None, new_flavor=None, system_metadata={}, ami_launch_index=0, root_gb=0, ephemeral_gb=0, info_cache=info_cache) if params: inst.update(params) inst.create() return inst class ComputeValidateDeviceTestCase(test.NoDBTestCase): def setUp(self): super(ComputeValidateDeviceTestCase, self).setUp() self.context = context.RequestContext('fake', 'fake') # check if test name includes "xen" if 'xen' in self.id(): self.flags(compute_driver='xenapi.XenAPIDriver') self.instance = objects.Instance( uuid=uuidutils.generate_uuid(dashed=False), root_device_name=None, default_ephemeral_device=None) else: self.instance = objects.Instance( uuid=uuidutils.generate_uuid(dashed=False), root_device_name='/dev/vda', default_ephemeral_device='/dev/vdb') flavor = objects.Flavor(**test_flavor.fake_flavor) self.instance.system_metadata = {} self.instance.flavor = flavor self.instance.default_swap_device = None self.data = [] def _validate_device(self, device=None): bdms = base.obj_make_list(self.context, objects.BlockDeviceMappingList(), objects.BlockDeviceMapping, self.data) return compute_utils.get_device_name_for_instance( self.instance, bdms, device) @staticmethod def _fake_bdm(device): return fake_block_device.FakeDbBlockDeviceDict({ 'source_type': 'volume', 'destination_type': 'volume', 'device_name': device, 'no_device': None, 'volume_id': 'fake', 'snapshot_id': None, 'guest_format': None }) def test_wrap(self): 
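        # All single-letter names from /dev/vdc through /dev/vdz are taken
        # below, so the next free device name is expected to wrap around to
        # the two-letter form /dev/vdaa.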
self.data = [] for letter in string.ascii_lowercase[2:]: self.data.append(self._fake_bdm('/dev/vd' + letter)) device = self._validate_device() self.assertEqual(device, '/dev/vdaa') def test_wrap_plus_one(self): self.data = [] for letter in string.ascii_lowercase[2:]: self.data.append(self._fake_bdm('/dev/vd' + letter)) self.data.append(self._fake_bdm('/dev/vdaa')) device = self._validate_device() self.assertEqual(device, '/dev/vdab') def test_later(self): self.data = [ self._fake_bdm('/dev/vdc'), self._fake_bdm('/dev/vdd'), self._fake_bdm('/dev/vde'), ] device = self._validate_device() self.assertEqual(device, '/dev/vdf') def test_gap(self): self.data = [ self._fake_bdm('/dev/vdc'), self._fake_bdm('/dev/vde'), ] device = self._validate_device() self.assertEqual(device, '/dev/vdd') def test_no_bdms(self): self.data = [] device = self._validate_device() self.assertEqual(device, '/dev/vdc') def test_lxc_names_work(self): self.instance['root_device_name'] = '/dev/a' self.instance['ephemeral_device_name'] = '/dev/b' self.data = [] device = self._validate_device() self.assertEqual(device, '/dev/c') def test_name_conversion(self): self.data = [] device = self._validate_device('/dev/c') self.assertEqual(device, '/dev/vdc') device = self._validate_device('/dev/sdc') self.assertEqual(device, '/dev/vdc') device = self._validate_device('/dev/xvdc') self.assertEqual(device, '/dev/vdc') def test_invalid_device_prefix(self): self.assertRaises(exception.InvalidDevicePath, self._validate_device, '/baddata/vdc') def test_device_in_use(self): exc = self.assertRaises(exception.DevicePathInUse, self._validate_device, '/dev/vda') self.assertIn('/dev/vda', six.text_type(exc)) def test_swap(self): self.instance['default_swap_device'] = "/dev/vdc" device = self._validate_device() self.assertEqual(device, '/dev/vdd') def test_swap_no_ephemeral(self): self.instance.default_ephemeral_device = None self.instance.default_swap_device = "/dev/vdb" device = self._validate_device() self.assertEqual(device, '/dev/vdc') def test_ephemeral_xenapi(self): self.instance.flavor.ephemeral_gb = 10 self.instance.flavor.swap = 0 device = self._validate_device() self.assertEqual(device, '/dev/xvdc') def test_swap_xenapi(self): self.instance.flavor.ephemeral_gb = 0 self.instance.flavor.swap = 10 device = self._validate_device() self.assertEqual(device, '/dev/xvdb') def test_swap_and_ephemeral_xenapi(self): self.instance.flavor.ephemeral_gb = 10 self.instance.flavor.swap = 10 device = self._validate_device() self.assertEqual(device, '/dev/xvdd') def test_swap_and_one_attachment_xenapi(self): self.instance.flavor.ephemeral_gb = 0 self.instance.flavor.swap = 10 device = self._validate_device() self.assertEqual(device, '/dev/xvdb') self.data.append(self._fake_bdm(device)) device = self._validate_device() self.assertEqual(device, '/dev/xvdd') def test_no_dev_root_device_name_get_next_name(self): self.instance['root_device_name'] = 'vda' device = self._validate_device() self.assertEqual('/dev/vdc', device) class DefaultDeviceNamesForInstanceTestCase(test.NoDBTestCase): def setUp(self): super(DefaultDeviceNamesForInstanceTestCase, self).setUp() self.context = context.RequestContext('fake', 'fake') self.ephemerals = block_device_obj.block_device_make_list( self.context, [fake_block_device.FakeDbBlockDeviceDict( {'id': 1, 'instance_uuid': uuids.block_device_instance, 'device_name': '/dev/vdb', 'source_type': 'blank', 'destination_type': 'local', 'delete_on_termination': True, 'guest_format': None, 'boot_index': -1})]) self.swap = 
block_device_obj.block_device_make_list( self.context, [fake_block_device.FakeDbBlockDeviceDict( {'id': 2, 'instance_uuid': uuids.block_device_instance, 'device_name': '/dev/vdc', 'source_type': 'blank', 'destination_type': 'local', 'delete_on_termination': True, 'guest_format': 'swap', 'boot_index': -1})]) self.block_device_mapping = block_device_obj.block_device_make_list( self.context, [fake_block_device.FakeDbBlockDeviceDict( {'id': 3, 'instance_uuid': uuids.block_device_instance, 'device_name': '/dev/vda', 'source_type': 'volume', 'destination_type': 'volume', 'volume_id': 'fake-volume-id-1', 'boot_index': 0}), fake_block_device.FakeDbBlockDeviceDict( {'id': 4, 'instance_uuid': uuids.block_device_instance, 'device_name': '/dev/vdd', 'source_type': 'snapshot', 'destination_type': 'volume', 'snapshot_id': 'fake-snapshot-id-1', 'boot_index': -1}), fake_block_device.FakeDbBlockDeviceDict( {'id': 5, 'instance_uuid': uuids.block_device_instance, 'device_name': '/dev/vde', 'source_type': 'blank', 'destination_type': 'volume', 'boot_index': -1})]) self.instance = {'uuid': uuids.instance, 'ephemeral_gb': 2} self.is_libvirt = False self.root_device_name = '/dev/vda' self.update_called = False self.patchers = [] self.patchers.append( mock.patch.object(objects.BlockDeviceMapping, 'save')) for patcher in self.patchers: patcher.start() def tearDown(self): super(DefaultDeviceNamesForInstanceTestCase, self).tearDown() for patcher in self.patchers: patcher.stop() def _test_default_device_names(self, *block_device_lists): compute_utils.default_device_names_for_instance(self.instance, self.root_device_name, *block_device_lists) def test_only_block_device_mapping(self): # Test no-op original_bdm = copy.deepcopy(self.block_device_mapping) self._test_default_device_names([], [], self.block_device_mapping) for original, new in zip(original_bdm, self.block_device_mapping): self.assertEqual(original.device_name, new.device_name) # Assert it defaults the missing one as expected self.block_device_mapping[1]['device_name'] = None self.block_device_mapping[2]['device_name'] = None self._test_default_device_names([], [], self.block_device_mapping) self.assertEqual('/dev/vdb', self.block_device_mapping[1]['device_name']) self.assertEqual('/dev/vdc', self.block_device_mapping[2]['device_name']) def test_with_ephemerals(self): # Test ephemeral gets assigned self.ephemerals[0]['device_name'] = None self._test_default_device_names(self.ephemerals, [], self.block_device_mapping) self.assertEqual(self.ephemerals[0]['device_name'], '/dev/vdb') self.block_device_mapping[1]['device_name'] = None self.block_device_mapping[2]['device_name'] = None self._test_default_device_names(self.ephemerals, [], self.block_device_mapping) self.assertEqual('/dev/vdc', self.block_device_mapping[1]['device_name']) self.assertEqual('/dev/vdd', self.block_device_mapping[2]['device_name']) def test_with_swap(self): # Test swap only self.swap[0]['device_name'] = None self._test_default_device_names([], self.swap, []) self.assertEqual(self.swap[0]['device_name'], '/dev/vdb') # Test swap and block_device_mapping self.swap[0]['device_name'] = None self.block_device_mapping[1]['device_name'] = None self.block_device_mapping[2]['device_name'] = None self._test_default_device_names([], self.swap, self.block_device_mapping) self.assertEqual(self.swap[0]['device_name'], '/dev/vdb') self.assertEqual('/dev/vdc', self.block_device_mapping[1]['device_name']) self.assertEqual('/dev/vdd', self.block_device_mapping[2]['device_name']) def 
test_all_together(self): # Test swap missing self.swap[0]['device_name'] = None self._test_default_device_names(self.ephemerals, self.swap, self.block_device_mapping) self.assertEqual(self.swap[0]['device_name'], '/dev/vdc') # Test swap and eph missing self.swap[0]['device_name'] = None self.ephemerals[0]['device_name'] = None self._test_default_device_names(self.ephemerals, self.swap, self.block_device_mapping) self.assertEqual(self.ephemerals[0]['device_name'], '/dev/vdb') self.assertEqual(self.swap[0]['device_name'], '/dev/vdc') # Test all missing self.swap[0]['device_name'] = None self.ephemerals[0]['device_name'] = None self.block_device_mapping[1]['device_name'] = None self.block_device_mapping[2]['device_name'] = None self._test_default_device_names(self.ephemerals, self.swap, self.block_device_mapping) self.assertEqual(self.ephemerals[0]['device_name'], '/dev/vdb') self.assertEqual(self.swap[0]['device_name'], '/dev/vdc') self.assertEqual('/dev/vdd', self.block_device_mapping[1]['device_name']) self.assertEqual('/dev/vde', self.block_device_mapping[2]['device_name']) class UsageInfoTestCase(test.TestCase): def setUp(self): self.public_key = fake_crypto.get_ssh_public_key() self.fingerprint = '1e:2c:9b:56:79:4b:45:77:f9:ca:7a:98:2c:b0:d5:3c' super(UsageInfoTestCase, self).setUp() fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) self.flags(compute_driver='fake.FakeDriver') self.compute = manager.ComputeManager() self.user_id = 'fake' self.project_id = 'fake' self.context = context.RequestContext(self.user_id, self.project_id) self.flavor = objects.Flavor.get_by_name(self.context, 'm1.tiny') def fake_show(meh, context, id, **kwargs): return {'id': 1, 'properties': {'kernel_id': 1, 'ramdisk_id': 1}} self.flags(group='glance', api_servers=['http://localhost:9292']) self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', fake_show) fake_network.set_stub_network_methods(self) fake_server_actions.stub_out_action_events(self) def test_notify_usage_exists(self): # Ensure 'exists' notification generates appropriate usage data. 
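        # Only the 'image_'-prefixed system metadata keys are expected to end
        # up in the payload's image_meta (with the prefix stripped); unrelated
        # keys such as 'other_data' are not reported (asserted below).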
instance = create_instance(self.context) # Set some system metadata sys_metadata = {'image_md_key1': 'val1', 'image_md_key2': 'val2', 'other_data': 'meow'} instance.system_metadata.update(sys_metadata) instance.save() compute_utils.notify_usage_exists( rpc.get_notifier('compute'), self.context, instance, 'fake-host') self.assertEqual(len(fake_notifier.NOTIFICATIONS), 1) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.priority, 'INFO') self.assertEqual(msg.event_type, 'compute.instance.exists') payload = msg.payload self.assertEqual(payload['tenant_id'], self.project_id) self.assertEqual(payload['user_id'], self.user_id) self.assertEqual(payload['instance_id'], instance['uuid']) self.assertEqual(payload['instance_type'], 'm1.tiny') type_id = self.flavor.id self.assertEqual(str(payload['instance_type_id']), str(type_id)) flavor_id = self.flavor.flavorid self.assertEqual(str(payload['instance_flavor_id']), str(flavor_id)) for attr in ('display_name', 'created_at', 'launched_at', 'state', 'state_description', 'bandwidth', 'audit_period_beginning', 'audit_period_ending', 'image_meta'): self.assertIn(attr, payload, "Key %s not in payload" % attr) self.assertEqual(payload['image_meta'], {'md_key1': 'val1', 'md_key2': 'val2'}) image_ref_url = "%s/images/%s" % ( glance.generate_glance_url(self.context), uuids.fake_image_ref) self.assertEqual(payload['image_ref_url'], image_ref_url) self.compute.terminate_instance(self.context, instance, []) def test_notify_usage_exists_emits_versioned(self): # Ensure 'exists' notification generates appropriate usage data. instance = create_instance(self.context) compute_utils.notify_usage_exists( rpc.get_notifier('compute'), self.context, instance, 'fake-host') self.assertEqual(len(fake_notifier.VERSIONED_NOTIFICATIONS), 1) msg = fake_notifier.VERSIONED_NOTIFICATIONS[0] self.assertEqual(msg['priority'], 'INFO') self.assertEqual(msg['event_type'], 'instance.exists') payload = msg['payload']['nova_object.data'] self.assertEqual(payload['tenant_id'], self.project_id) self.assertEqual(payload['user_id'], self.user_id) self.assertEqual(payload['uuid'], instance['uuid']) flavor = payload['flavor']['nova_object.data'] self.assertEqual(flavor['name'], 'm1.tiny') flavorid = self.flavor.flavorid self.assertEqual(str(flavor['flavorid']), str(flavorid)) for attr in ('display_name', 'created_at', 'launched_at', 'state', 'bandwidth', 'audit_period'): self.assertIn(attr, payload, "Key %s not in payload" % attr) self.assertEqual(payload['image_uuid'], uuids.fake_image_ref) self.compute.terminate_instance(self.context, instance, []) def test_notify_usage_exists_deleted_instance(self): # Ensure 'exists' notification generates appropriate usage data. 
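        # The instance is terminated before notify_usage_exists is called to
        # show that an exists notification can still be generated for an
        # already-deleted instance.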
instance = create_instance(self.context) # Set some system metadata sys_metadata = {'image_md_key1': 'val1', 'image_md_key2': 'val2', 'other_data': 'meow'} instance.system_metadata.update(sys_metadata) instance.save() self.compute.terminate_instance(self.context, instance, []) compute_utils.notify_usage_exists( rpc.get_notifier('compute'), self.context, instance, 'fake-host') msg = fake_notifier.NOTIFICATIONS[-1] self.assertEqual(msg.priority, 'INFO') self.assertEqual(msg.event_type, 'compute.instance.exists') payload = msg.payload self.assertEqual(payload['tenant_id'], self.project_id) self.assertEqual(payload['user_id'], self.user_id) self.assertEqual(payload['instance_id'], instance['uuid']) self.assertEqual(payload['instance_type'], 'm1.tiny') type_id = self.flavor.id self.assertEqual(str(payload['instance_type_id']), str(type_id)) flavor_id = self.flavor.flavorid self.assertEqual(str(payload['instance_flavor_id']), str(flavor_id)) for attr in ('display_name', 'created_at', 'launched_at', 'state', 'state_description', 'bandwidth', 'audit_period_beginning', 'audit_period_ending', 'image_meta'): self.assertIn(attr, payload, "Key %s not in payload" % attr) self.assertEqual(payload['image_meta'], {'md_key1': 'val1', 'md_key2': 'val2'}) image_ref_url = "%s/images/%s" % ( glance.generate_glance_url(self.context), uuids.fake_image_ref) self.assertEqual(payload['image_ref_url'], image_ref_url) def test_notify_about_instance_action(self): instance = create_instance(self.context) bdms = block_device_obj.block_device_make_list( self.context, [fake_block_device.FakeDbBlockDeviceDict( {'source_type': 'volume', 'device_name': '/dev/vda', 'instance_uuid': 'f8000000-0000-0000-0000-000000000000', 'destination_type': 'volume', 'boot_index': 0, 'volume_id': 'de8836ac-d75e-11e2-8271-5254009297d6'})]) compute_utils.notify_about_instance_action( self.context, instance, host='fake-compute', action='delete', phase='start', bdms=bdms) self.assertEqual(len(fake_notifier.VERSIONED_NOTIFICATIONS), 1) notification = fake_notifier.VERSIONED_NOTIFICATIONS[0] self.assertEqual(notification['priority'], 'INFO') self.assertEqual(notification['event_type'], 'instance.delete.start') self.assertEqual(notification['publisher_id'], 'nova-compute:fake-compute') payload = notification['payload']['nova_object.data'] self.assertEqual(payload['tenant_id'], self.project_id) self.assertEqual(payload['user_id'], self.user_id) self.assertEqual(payload['uuid'], instance['uuid']) self.assertEqual( self.flavor.flavorid, str(payload['flavor']['nova_object.data']['flavorid'])) for attr in ('display_name', 'created_at', 'launched_at', 'state', 'task_state', 'display_description', 'locked', 'auto_disk_config', 'key_name'): self.assertIn(attr, payload, "Key %s not in payload" % attr) self.assertEqual(payload['image_uuid'], uuids.fake_image_ref) self.assertEqual(1, len(payload['block_devices'])) payload_bdm = payload['block_devices'][0]['nova_object.data'] self.assertEqual( {'boot_index': 0, 'delete_on_termination': False, 'device_name': '/dev/vda', 'tag': None, 'volume_id': 'de8836ac-d75e-11e2-8271-5254009297d6'}, payload_bdm) def test_notify_about_instance_create(self): keypair = objects.KeyPair(name='my-key', user_id='fake', type='ssh', public_key=self.public_key, fingerprint=self.fingerprint) keypairs = objects.KeyPairList(objects=[keypair]) instance = create_instance(self.context, params={'keypairs': keypairs}) compute_utils.notify_about_instance_create( self.context, instance, host='fake-compute', phase='start') self.assertEqual(1, 
len(fake_notifier.VERSIONED_NOTIFICATIONS)) notification = fake_notifier.VERSIONED_NOTIFICATIONS[0] self.assertEqual('INFO', notification['priority']) self.assertEqual('instance.create.start', notification['event_type']) self.assertEqual('nova-compute:fake-compute', notification['publisher_id']) payload = notification['payload']['nova_object.data'] self.assertEqual('fake', payload['tenant_id']) self.assertEqual('fake', payload['user_id']) self.assertEqual(instance['uuid'], payload['uuid']) flavorid = self.flavor.flavorid flavor = payload['flavor']['nova_object.data'] self.assertEqual(flavorid, str(flavor['flavorid'])) keypairs_payload = payload['keypairs'] self.assertEqual(1, len(keypairs_payload)) keypair_data = keypairs_payload[0]['nova_object.data'] self.assertEqual(keypair_data, {'name': 'my-key', 'user_id': 'fake', 'type': 'ssh', 'public_key': self.public_key, 'fingerprint': self.fingerprint}) for attr in ('display_name', 'created_at', 'launched_at', 'state', 'task_state', 'display_description', 'locked', 'auto_disk_config'): self.assertIn(attr, payload, "Key %s not in payload" % attr) self.assertEqual(uuids.fake_image_ref, payload['image_uuid']) def test_notify_about_instance_create_without_keypair(self): instance = create_instance(self.context) compute_utils.notify_about_instance_create( self.context, instance, host='fake-compute', phase='start') self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) notification = fake_notifier.VERSIONED_NOTIFICATIONS[0] self.assertEqual('INFO', notification['priority']) self.assertEqual('instance.create.start', notification['event_type']) self.assertEqual('nova-compute:fake-compute', notification['publisher_id']) payload = notification['payload']['nova_object.data'] self.assertEqual('fake', payload['tenant_id']) self.assertEqual('fake', payload['user_id']) self.assertEqual(instance['uuid'], payload['uuid']) self.assertEqual( self.flavor.flavorid, str(payload['flavor']['nova_object.data']['flavorid'])) self.assertEqual(0, len(payload['keypairs'])) for attr in ('display_name', 'created_at', 'launched_at', 'state', 'task_state', 'display_description', 'locked', 'auto_disk_config'): self.assertIn(attr, payload, "Key %s not in payload" % attr) self.assertEqual(uuids.fake_image_ref, payload['image_uuid']) def test_notify_about_instance_create_with_tags(self): instance = create_instance(self.context) # TODO(Kevin Zheng): clean this up to pass tags as params to # create_instance() once instance.create() handles tags. 
instance.tags = objects.TagList( objects=[objects.Tag(self.context, tag='tag1')]) compute_utils.notify_about_instance_create( self.context, instance, host='fake-compute', phase='start') self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) notification = fake_notifier.VERSIONED_NOTIFICATIONS[0] self.assertEqual('INFO', notification['priority']) self.assertEqual('instance.create.start', notification['event_type']) self.assertEqual('nova-compute:fake-compute', notification['publisher_id']) payload = notification['payload']['nova_object.data'] self.assertEqual('fake', payload['tenant_id']) self.assertEqual('fake', payload['user_id']) self.assertEqual(instance.uuid, payload['uuid']) self.assertEqual( self.flavor.flavorid, str(payload['flavor']['nova_object.data']['flavorid'])) self.assertEqual(0, len(payload['keypairs'])) for attr in ('display_name', 'created_at', 'launched_at', 'state', 'task_state', 'display_description', 'locked', 'auto_disk_config', 'tags'): self.assertIn(attr, payload, "Key %s not in payload" % attr) self.assertEqual(1, len(payload['tags'])) self.assertEqual('tag1', payload['tags'][0]) self.assertEqual(uuids.fake_image_ref, payload['image_uuid']) def test_notify_about_volume_swap(self): instance = create_instance(self.context) compute_utils.notify_about_volume_swap( self.context, instance, 'fake-compute', fields.NotificationPhase.START, uuids.old_volume_id, uuids.new_volume_id) self.assertEqual(len(fake_notifier.VERSIONED_NOTIFICATIONS), 1) notification = fake_notifier.VERSIONED_NOTIFICATIONS[0] self.assertEqual('INFO', notification['priority']) self.assertEqual('instance.%s.%s' % (fields.NotificationAction.VOLUME_SWAP, fields.NotificationPhase.START), notification['event_type']) self.assertEqual('nova-compute:fake-compute', notification['publisher_id']) payload = notification['payload']['nova_object.data'] self.assertEqual(self.project_id, payload['tenant_id']) self.assertEqual(self.user_id, payload['user_id']) self.assertEqual(instance['uuid'], payload['uuid']) self.assertEqual( self.flavor.flavorid, str(payload['flavor']['nova_object.data']['flavorid'])) for attr in ('display_name', 'created_at', 'launched_at', 'state', 'task_state'): self.assertIn(attr, payload) self.assertEqual(uuids.fake_image_ref, payload['image_uuid']) self.assertEqual(uuids.old_volume_id, payload['old_volume_id']) self.assertEqual(uuids.new_volume_id, payload['new_volume_id']) def test_notify_about_volume_swap_with_error(self): instance = create_instance(self.context) try: # To get exception trace, raise and catch an exception raise test.TestingException('Volume swap error.') except Exception as ex: tb = traceback.format_exc() compute_utils.notify_about_volume_swap( self.context, instance, 'fake-compute', fields.NotificationPhase.ERROR, uuids.old_volume_id, uuids.new_volume_id, ex, tb) self.assertEqual(len(fake_notifier.VERSIONED_NOTIFICATIONS), 1) notification = fake_notifier.VERSIONED_NOTIFICATIONS[0] self.assertEqual('ERROR', notification['priority']) self.assertEqual('instance.%s.%s' % (fields.NotificationAction.VOLUME_SWAP, fields.NotificationPhase.ERROR), notification['event_type']) self.assertEqual('nova-compute:fake-compute', notification['publisher_id']) payload = notification['payload']['nova_object.data'] self.assertEqual(self.project_id, payload['tenant_id']) self.assertEqual(self.user_id, payload['user_id']) self.assertEqual(instance['uuid'], payload['uuid']) self.assertEqual( self.flavor.flavorid, str(payload['flavor']['nova_object.data']['flavorid'])) for attr in 
('display_name', 'created_at', 'launched_at', 'state', 'task_state'): self.assertIn(attr, payload) self.assertEqual(uuids.fake_image_ref, payload['image_uuid']) self.assertEqual(uuids.old_volume_id, payload['old_volume_id']) self.assertEqual(uuids.new_volume_id, payload['new_volume_id']) # Check ExceptionPayload exception_payload = payload['fault']['nova_object.data'] self.assertEqual('TestingException', exception_payload['exception']) self.assertEqual('Volume swap error.', exception_payload['exception_message']) self.assertEqual('test_notify_about_volume_swap_with_error', exception_payload['function_name']) self.assertEqual('nova.tests.unit.compute.test_compute_utils', exception_payload['module_name']) self.assertIn('test_notify_about_volume_swap_with_error', exception_payload['traceback']) def test_notify_about_instance_rescue_action(self): instance = create_instance(self.context) compute_utils.notify_about_instance_rescue_action( self.context, instance, 'fake-compute', uuids.rescue_image_ref, phase='start') self.assertEqual(len(fake_notifier.VERSIONED_NOTIFICATIONS), 1) notification = fake_notifier.VERSIONED_NOTIFICATIONS[0] self.assertEqual(notification['priority'], 'INFO') self.assertEqual(notification['event_type'], 'instance.rescue.start') self.assertEqual(notification['publisher_id'], 'nova-compute:fake-compute') payload = notification['payload']['nova_object.data'] self.assertEqual(payload['tenant_id'], self.project_id) self.assertEqual(payload['user_id'], self.user_id) self.assertEqual(payload['uuid'], instance['uuid']) self.assertEqual( self.flavor.flavorid, str(payload['flavor']['nova_object.data']['flavorid'])) for attr in ('display_name', 'created_at', 'launched_at', 'state', 'task_state', 'display_description', 'locked', 'auto_disk_config', 'key_name'): self.assertIn(attr, payload, "Key %s not in payload" % attr) self.assertEqual(payload['image_uuid'], uuids.fake_image_ref) self.assertEqual(payload['rescue_image_ref'], uuids.rescue_image_ref) def test_notify_about_resize_prep_instance(self): instance = create_instance(self.context) new_flavor = objects.Flavor.get_by_name(self.context, 'm1.small') compute_utils.notify_about_resize_prep_instance( self.context, instance, 'fake-compute', 'start', new_flavor) self.assertEqual(len(fake_notifier.VERSIONED_NOTIFICATIONS), 1) notification = fake_notifier.VERSIONED_NOTIFICATIONS[0] self.assertEqual(notification['priority'], 'INFO') self.assertEqual(notification['event_type'], 'instance.resize_prep.start') self.assertEqual(notification['publisher_id'], 'nova-compute:fake-compute') payload = notification['payload']['nova_object.data'] self.assertEqual(payload['tenant_id'], self.project_id) self.assertEqual(payload['user_id'], self.user_id) self.assertEqual(payload['uuid'], instance['uuid']) self.assertEqual( self.flavor.flavorid, str(payload['flavor']['nova_object.data']['flavorid'])) for attr in ('display_name', 'created_at', 'launched_at', 'state', 'task_state', 'display_description', 'locked', 'auto_disk_config', 'key_name'): self.assertIn(attr, payload, "Key %s not in payload" % attr) self.assertEqual(payload['image_uuid'], uuids.fake_image_ref) self.assertEqual(payload['new_flavor']['nova_object.data'][ 'flavorid'], new_flavor.flavorid) def test_notify_usage_exists_instance_not_found(self): # Ensure 'exists' notification generates appropriate usage data. 
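        # Unlike test_notify_usage_exists_deleted_instance above, no image
        # system metadata is set before the instance is terminated, so the
        # payload's image_meta is expected to be empty.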
instance = create_instance(self.context) self.compute.terminate_instance(self.context, instance, []) compute_utils.notify_usage_exists( rpc.get_notifier('compute'), self.context, instance, 'fake-host') msg = fake_notifier.NOTIFICATIONS[-1] self.assertEqual(msg.priority, 'INFO') self.assertEqual(msg.event_type, 'compute.instance.exists') payload = msg.payload self.assertEqual(payload['tenant_id'], self.project_id) self.assertEqual(payload['user_id'], self.user_id) self.assertEqual(payload['instance_id'], instance['uuid']) self.assertEqual(payload['instance_type'], 'm1.tiny') self.assertEqual(str(self.flavor.id), str(payload['instance_type_id'])) self.assertEqual(str(self.flavor.flavorid), str(payload['instance_flavor_id'])) for attr in ('display_name', 'created_at', 'launched_at', 'state', 'state_description', 'bandwidth', 'audit_period_beginning', 'audit_period_ending', 'image_meta'): self.assertIn(attr, payload, "Key %s not in payload" % attr) self.assertEqual(payload['image_meta'], {}) image_ref_url = "%s/images/%s" % ( glance.generate_glance_url(self.context), uuids.fake_image_ref) self.assertEqual(payload['image_ref_url'], image_ref_url) def test_notify_about_volume_usage(self): # Ensure 'volume.usage' notification generates appropriate usage data. vol_usage = objects.VolumeUsage( id=1, volume_id=uuids.volume, instance_uuid=uuids.instance, project_id=self.project_id, user_id=self.user_id, availability_zone='AZ1', tot_last_refreshed=datetime.datetime(second=1, minute=1, hour=1, day=5, month=7, year=2018), tot_reads=100, tot_read_bytes=100, tot_writes=100, tot_write_bytes=100, curr_last_refreshed=datetime.datetime(second=1, minute=1, hour=2, day=5, month=7, year=2018), curr_reads=100, curr_read_bytes=100, curr_writes=100, curr_write_bytes=100) compute_utils.notify_about_volume_usage(self.context, vol_usage, 'fake-compute') self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) notification = fake_notifier.VERSIONED_NOTIFICATIONS[0] self.assertEqual('INFO', notification['priority']) self.assertEqual('volume.usage', notification['event_type']) self.assertEqual('nova-compute:fake-compute', notification['publisher_id']) payload = notification['payload']['nova_object.data'] self.assertEqual(uuids.volume, payload['volume_id']) self.assertEqual(uuids.instance, payload['instance_uuid']) self.assertEqual(self.project_id, payload['project_id']) self.assertEqual(self.user_id, payload['user_id']) self.assertEqual('AZ1', payload['availability_zone']) self.assertEqual('2018-07-05T02:01:01Z', payload['last_refreshed']) self.assertEqual(200, payload['read_bytes']) self.assertEqual(200, payload['reads']) self.assertEqual(200, payload['write_bytes']) self.assertEqual(200, payload['writes']) def test_notify_about_instance_usage(self): instance = create_instance(self.context) # Set some system metadata sys_metadata = {'image_md_key1': 'val1', 'image_md_key2': 'val2', 'other_data': 'meow'} instance.system_metadata.update(sys_metadata) instance.save() extra_usage_info = {'image_name': 'fake_name'} compute_utils.notify_about_instance_usage( rpc.get_notifier('compute'), self.context, instance, 'create.start', extra_usage_info=extra_usage_info) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 1) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.priority, 'INFO') self.assertEqual(msg.event_type, 'compute.instance.create.start') payload = msg.payload self.assertEqual(payload['tenant_id'], self.project_id) self.assertEqual(payload['user_id'], self.user_id) self.assertEqual(payload['instance_id'], 
instance['uuid']) self.assertEqual(payload['instance_type'], 'm1.tiny') self.assertEqual(str(self.flavor.id), str(payload['instance_type_id'])) self.assertEqual(str(self.flavor.flavorid), str(payload['instance_flavor_id'])) for attr in ('display_name', 'created_at', 'launched_at', 'state', 'state_description', 'image_meta'): self.assertIn(attr, payload, "Key %s not in payload" % attr) self.assertEqual(payload['image_meta'], {'md_key1': 'val1', 'md_key2': 'val2'}) self.assertEqual(payload['image_name'], 'fake_name') image_ref_url = "%s/images/%s" % ( glance.generate_glance_url(self.context), uuids.fake_image_ref) self.assertEqual(payload['image_ref_url'], image_ref_url) self.compute.terminate_instance(self.context, instance, []) def test_notify_about_aggregate_update_with_id(self): # Set aggregate payload aggregate_payload = {'aggregate_id': 1} compute_utils.notify_about_aggregate_update(self.context, "create.end", aggregate_payload) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 1) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.priority, 'INFO') self.assertEqual(msg.event_type, 'aggregate.create.end') payload = msg.payload self.assertEqual(payload['aggregate_id'], 1) def test_notify_about_aggregate_update_with_name(self): # Set aggregate payload aggregate_payload = {'name': 'fakegroup'} compute_utils.notify_about_aggregate_update(self.context, "create.start", aggregate_payload) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 1) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual(msg.priority, 'INFO') self.assertEqual(msg.event_type, 'aggregate.create.start') payload = msg.payload self.assertEqual(payload['name'], 'fakegroup') def test_notify_about_aggregate_update_without_name_id(self): # Set empty aggregate payload aggregate_payload = {} compute_utils.notify_about_aggregate_update(self.context, "create.start", aggregate_payload) self.assertEqual(len(fake_notifier.NOTIFICATIONS), 0) class ComputeUtilsGetValFromSysMetadata(test.NoDBTestCase): def test_get_value_from_system_metadata(self): instance = fake_instance.fake_instance_obj('fake-context') system_meta = {'int_val': 1, 'int_string': '2', 'not_int': 'Nope'} instance.system_metadata = system_meta result = compute_utils.get_value_from_system_metadata( instance, 'int_val', int, 0) self.assertEqual(1, result) result = compute_utils.get_value_from_system_metadata( instance, 'int_string', int, 0) self.assertEqual(2, result) result = compute_utils.get_value_from_system_metadata( instance, 'not_int', int, 0) self.assertEqual(0, result) class ComputeUtilsRefreshInfoCacheForInstance(test.NoDBTestCase): def test_instance_info_cache_not_found(self): inst = fake_instance.fake_instance_obj('fake-context') net_info = model.NetworkInfo([]) info_cache = objects.InstanceInfoCache(network_info=net_info) inst.info_cache = info_cache with mock.patch.object(inst.info_cache, 'refresh', side_effect=exception.InstanceInfoCacheNotFound( instance_uuid=inst.uuid)): # we expect that the raised exception is ok with mock.patch.object(compute_utils.LOG, 'debug') as log_mock: compute_utils.refresh_info_cache_for_instance(None, inst) log_mock.assert_called_once_with( 'Can not refresh info_cache because instance ' 'was not found', instance=inst) class ComputeUtilsGetRebootTypes(test.NoDBTestCase): def setUp(self): super(ComputeUtilsGetRebootTypes, self).setUp() self.context = context.RequestContext('fake', 'fake') def test_get_reboot_type_started_soft(self): reboot_type = compute_utils.get_reboot_type(task_states.REBOOT_STARTED, power_state.RUNNING) 
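        # A reboot that has already started (REBOOT_STARTED) on a running
        # guest is still expected to be handled as a soft reboot; the tests
        # below cover the pending, unknown-task-state and not-running cases,
        # which fall back to a hard reboot.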
self.assertEqual(reboot_type, 'SOFT') def test_get_reboot_type_pending_soft(self): reboot_type = compute_utils.get_reboot_type(task_states.REBOOT_PENDING, power_state.RUNNING) self.assertEqual(reboot_type, 'SOFT') def test_get_reboot_type_hard(self): reboot_type = compute_utils.get_reboot_type('foo', power_state.RUNNING) self.assertEqual(reboot_type, 'HARD') def test_get_reboot_not_running_hard(self): reboot_type = compute_utils.get_reboot_type('foo', 'bar') self.assertEqual(reboot_type, 'HARD') class ComputeUtilsTestCase(test.NoDBTestCase): def setUp(self): super(ComputeUtilsTestCase, self).setUp() self.compute = 'compute' self.user_id = 'fake' self.project_id = 'fake' self.context = context.RequestContext(self.user_id, self.project_id) @mock.patch.object(compute_utils, 'EventReporter') def test_wrap_instance_event_without_host(self, mock_event): inst = objects.Instance(uuid=uuids.instance) @compute_utils.wrap_instance_event(prefix='compute') def fake_event(self, context, instance): pass fake_event(self.compute, self.context, instance=inst) # if the class doesn't include a self.host, the default host is None mock_event.assert_called_once_with(self.context, 'compute_fake_event', None, uuids.instance, graceful_exit=False) @mock.patch.object(objects.InstanceActionEvent, 'event_start') @mock.patch.object(objects.InstanceActionEvent, 'event_finish_with_failure') def test_wrap_instance_event(self, mock_finish, mock_start): inst = {"uuid": uuids.instance} @compute_utils.wrap_instance_event(prefix='compute') def fake_event(self, context, instance): pass fake_event(self.compute, self.context, instance=inst) self.assertTrue(mock_start.called) self.assertTrue(mock_finish.called) @mock.patch.object(objects.InstanceActionEvent, 'event_start') @mock.patch.object(objects.InstanceActionEvent, 'event_finish_with_failure') def test_wrap_instance_event_return(self, mock_finish, mock_start): inst = {"uuid": uuids.instance} @compute_utils.wrap_instance_event(prefix='compute') def fake_event(self, context, instance): return True retval = fake_event(self.compute, self.context, instance=inst) self.assertTrue(retval) self.assertTrue(mock_start.called) self.assertTrue(mock_finish.called) @mock.patch.object(objects.InstanceActionEvent, 'event_start') @mock.patch.object(objects.InstanceActionEvent, 'event_finish_with_failure') def test_wrap_instance_event_log_exception(self, mock_finish, mock_start): inst = {"uuid": uuids.instance} @compute_utils.wrap_instance_event(prefix='compute') def fake_event(self2, context, instance): raise exception.NovaException() self.assertRaises(exception.NovaException, fake_event, self.compute, self.context, instance=inst) self.assertTrue(mock_start.called) self.assertTrue(mock_finish.called) args, kwargs = mock_finish.call_args self.assertIsInstance(kwargs['exc_val'], exception.NovaException) @mock.patch('nova.objects.InstanceActionEvent.event_start') @mock.patch('nova.objects.InstanceActionEvent.event_finish_with_failure') def _test_event_reporter_graceful_exit(self, error, mock_event_finish, mock_event_start): with compute_utils.EventReporter(self.context, 'fake_event', 'fake.host', uuids.instance, graceful_exit=True): mock_event_finish.side_effect = error mock_event_start.assert_called_once_with( self.context, uuids.instance, 'fake_event', want_result=False, host='fake.host') mock_event_finish.assert_called_once_with( self.context, uuids.instance, 'fake_event', exc_val=None, exc_tb=None, want_result=False) def test_event_reporter_graceful_exit_action_not_found(self): """Tests that 
when graceful_exit=True and InstanceActionNotFound is raised it is handled and not re-raised. """ error = exception.InstanceActionNotFound( request_id=self.context.request_id, instance_uuid=uuids.instance) self._test_event_reporter_graceful_exit(error) def test_event_reporter_graceful_exit_unexpected_error(self): """Tests that even if graceful_exit=True the EventReporter will re-raise an unexpected exception. """ error = test.TestingException('uh oh') self.assertRaises(test.TestingException, self._test_event_reporter_graceful_exit, error) @mock.patch('netifaces.interfaces') def test_get_machine_ips_value_error(self, mock_interfaces): # Tests that the utility method does not explode if netifaces raises # a ValueError. iface = mock.sentinel mock_interfaces.return_value = [iface] with mock.patch('netifaces.ifaddresses', side_effect=ValueError) as mock_ifaddresses: addresses = compute_utils.get_machine_ips() self.assertEqual([], addresses) mock_ifaddresses.assert_called_once_with(iface) @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch('nova.compute.utils.notify_about_instance_usage') @mock.patch('nova.objects.Instance.destroy') def test_notify_about_instance_delete(self, mock_instance_destroy, mock_notify_usage, mock_notify_action): instance = fake_instance.fake_instance_obj( self.context, expected_attrs=('system_metadata',)) with compute_utils.notify_about_instance_delete( mock.sentinel.notifier, self.context, instance): instance.destroy() expected_notify_calls = [ mock.call(mock.sentinel.notifier, self.context, instance, 'delete.start'), mock.call(mock.sentinel.notifier, self.context, instance, 'delete.end') ] mock_notify_usage.assert_has_calls(expected_notify_calls) mock_notify_action.assert_has_calls([ mock.call(self.context, instance, host='fake-mini', source='nova-api', action='delete', phase='start'), mock.call(self.context, instance, host='fake-mini', source='nova-api', action='delete', phase='end'), ]) def test_get_stashed_volume_connector_none(self): inst = fake_instance.fake_instance_obj(self.context) # connection_info isn't set bdm = objects.BlockDeviceMapping(self.context) self.assertIsNone( compute_utils.get_stashed_volume_connector(bdm, inst)) # connection_info is None bdm.connection_info = None self.assertIsNone( compute_utils.get_stashed_volume_connector(bdm, inst)) # connector is not set in connection_info bdm.connection_info = jsonutils.dumps({}) self.assertIsNone( compute_utils.get_stashed_volume_connector(bdm, inst)) # connector is set but different host conn_info = {'connector': {'host': 'other_host'}} bdm.connection_info = jsonutils.dumps(conn_info) self.assertIsNone( compute_utils.get_stashed_volume_connector(bdm, inst)) def test_may_have_ports_or_volumes(self): inst = objects.Instance() for vm_state, expected_result in ((vm_states.ERROR, True), (vm_states.SHELVED_OFFLOADED, True), (vm_states.BUILDING, False)): inst.vm_state = vm_state self.assertEqual( expected_result, compute_utils.may_have_ports_or_volumes(inst), vm_state) def test_heal_reqspec_is_bfv_no_update(self): reqspec = objects.RequestSpec(is_bfv=False) with mock.patch.object(compute_utils, 'is_volume_backed_instance', new_callable=mock.NonCallableMock): compute_utils.heal_reqspec_is_bfv( self.context, reqspec, mock.sentinel.instance) @mock.patch('nova.objects.RequestSpec.save') def test_heal_reqspec_is_bfv_with_update(self, mock_save): reqspec = objects.RequestSpec() with mock.patch.object(compute_utils, 'is_volume_backed_instance', return_value=True): 
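            # is_volume_backed_instance is mocked to report a volume-backed
            # instance, so the heal is expected to set is_bfv on the request
            # spec and save it (asserted below).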
compute_utils.heal_reqspec_is_bfv( self.context, reqspec, mock.sentinel.instance) self.assertTrue(reqspec.is_bfv) mock_save.assert_called_once_with() def test_delete_image(self): """Happy path test for the delete_image utility method""" image_api = mock.Mock() compute_utils.delete_image( self.context, mock.sentinel.instance, image_api, uuids.image_id) image_api.delete.assert_called_once_with(self.context, uuids.image_id) @mock.patch('nova.compute.utils.LOG.exception') def test_delete_image_not_found(self, mock_log_exception): """Tests the delete_image method when ImageNotFound is raised.""" image_api = mock.Mock() image_api.delete.side_effect = exception.ImageNotFound( image_id=uuids.image_id) compute_utils.delete_image( self.context, mock.sentinel.instance, image_api, uuids.image_id) image_api.delete.assert_called_once_with(self.context, uuids.image_id) # The image was not found but that's OK so no errors should be logged. mock_log_exception.assert_not_called() @mock.patch('nova.compute.utils.LOG.exception') def test_delete_image_unknown_error(self, mock_log_exception): """Tests the delete_image method when some unexpected error is raised. """ image_api = mock.Mock() image_api.delete.side_effect = test.TestingException compute_utils.delete_image( self.context, mock.sentinel.instance, image_api, uuids.image_id) image_api.delete.assert_called_once_with(self.context, uuids.image_id) # An unexpected error should not be re-raised but just log it. mock_log_exception.assert_called_once() self.assertIn('Error while trying to clean up image', mock_log_exception.call_args[0][0]) class ServerGroupTestCase(test.TestCase): def setUp(self): super(ServerGroupTestCase, self).setUp() fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) self.user_id = 'fake' self.project_id = 'fake' self.context = context.RequestContext(self.user_id, self.project_id) self.group = objects.InstanceGroup(context=self.context, id=1, uuid=uuids.server_group, user_id=self.user_id, project_id=self.project_id, name="test-server-group", policy="anti-affinity", policies=["anti-affinity"], rules={"max_server_per_host": 3}) def test_notify_about_server_group_action(self): compute_utils.notify_about_server_group_action(self.context, self.group, 'create') self.assertEqual(len(fake_notifier.VERSIONED_NOTIFICATIONS), 1) notification = fake_notifier.VERSIONED_NOTIFICATIONS[0] expected = {'priority': 'INFO', 'event_type': u'server_group.create', 'publisher_id': u'nova-api:fake-mini', 'payload': { 'nova_object.data': { 'name': u'test-server-group', 'policies': [u'anti-affinity'], 'policy': u'anti-affinity', 'rules': {"max_server_per_host": "3"}, 'project_id': u'fake', 'user_id': u'fake', 'uuid': uuids.server_group, 'hosts': None, 'members': None }, 'nova_object.name': 'ServerGroupPayload', 'nova_object.namespace': 'nova', 'nova_object.version': '1.1' } } self.assertEqual(notification, expected) @mock.patch.object(objects.InstanceGroup, 'get_by_uuid') def test_notify_about_server_group_add_member(self, mock_get_by_uuid): self.group.members = [uuids.instance] mock_get_by_uuid.return_value = self.group compute_utils.notify_about_server_group_add_member( self.context, uuids.server_group) mock_get_by_uuid.assert_called_once_with(self.context, uuids.server_group) self.assertEqual(len(fake_notifier.VERSIONED_NOTIFICATIONS), 1) notification = fake_notifier.VERSIONED_NOTIFICATIONS[0] expected = {'priority': 'INFO', 'event_type': u'server_group.add_member', 'publisher_id': u'nova-api:fake-mini', 'payload': { 'nova_object.data': { 
'name': u'test-server-group', 'policies': [u'anti-affinity'], 'policy': u'anti-affinity', 'rules': {"max_server_per_host": "3"}, 'project_id': u'fake', 'user_id': u'fake', 'uuid': uuids.server_group, 'hosts': None, 'members': [uuids.instance] }, 'nova_object.name': 'ServerGroupPayload', 'nova_object.namespace': 'nova', 'nova_object.version': '1.1' } } self.assertEqual(notification, expected) class ComputeUtilsQuotaTestCase(test.TestCase): def setUp(self): super(ComputeUtilsQuotaTestCase, self).setUp() self.context = context.RequestContext('fake', 'fake') def test_upsize_quota_delta(self): old_flavor = objects.Flavor.get_by_name(self.context, 'm1.tiny') new_flavor = objects.Flavor.get_by_name(self.context, 'm1.medium') expected_deltas = { 'cores': new_flavor['vcpus'] - old_flavor['vcpus'], 'ram': new_flavor['memory_mb'] - old_flavor['memory_mb'] } deltas = compute_utils.upsize_quota_delta(new_flavor, old_flavor) self.assertEqual(expected_deltas, deltas) @mock.patch('nova.objects.Quotas.count_as_dict') def test_check_instance_quota_exceeds_with_multiple_resources(self, mock_count): quotas = {'cores': 1, 'instances': 1, 'ram': 512} overs = ['cores', 'instances', 'ram'] over_quota_args = dict(quotas=quotas, usages={'instances': 1, 'cores': 1, 'ram': 512}, overs=overs) e = exception.OverQuota(**over_quota_args) fake_flavor = objects.Flavor(vcpus=1, memory_mb=512) instance_num = 1 proj_count = {'instances': 1, 'cores': 1, 'ram': 512} user_count = proj_count.copy() mock_count.return_value = {'project': proj_count, 'user': user_count} with mock.patch.object(objects.Quotas, 'limit_check_project_and_user', side_effect=e): try: compute_utils.check_num_instances_quota(self.context, fake_flavor, instance_num, instance_num) except exception.TooManyInstances as e: self.assertEqual('cores, instances, ram', e.kwargs['overs']) self.assertEqual('1, 1, 512', e.kwargs['req']) self.assertEqual('1, 1, 512', e.kwargs['used']) self.assertEqual('1, 1, 512', e.kwargs['allowed']) else: self.fail("Exception not raised") @mock.patch('nova.objects.Quotas.get_all_by_project_and_user') @mock.patch('nova.objects.Quotas.check_deltas') def test_check_num_instances_omits_user_if_no_user_quota(self, mock_check, mock_get): # Return no per-user quota. mock_get.return_value = {'project_id': self.context.project_id, 'user_id': self.context.user_id} fake_flavor = objects.Flavor(vcpus=1, memory_mb=512) compute_utils.check_num_instances_quota( self.context, fake_flavor, 1, 1) deltas = {'instances': 1, 'cores': 1, 'ram': 512} # Verify that user_id has not been passed along to scope the resource # counting. mock_check.assert_called_once_with( self.context, deltas, self.context.project_id, user_id=None, check_project_id=self.context.project_id, check_user_id=None) @mock.patch('nova.objects.Quotas.get_all_by_project_and_user') @mock.patch('nova.objects.Quotas.check_deltas') def test_check_num_instances_passes_user_if_user_quota(self, mock_check, mock_get): for resource in ['instances', 'cores', 'ram']: # Return some per-user quota for each of the instance-related # resources. mock_get.return_value = {'project_id': self.context.project_id, 'user_id': self.context.user_id, resource: 5} fake_flavor = objects.Flavor(vcpus=1, memory_mb=512) compute_utils.check_num_instances_quota( self.context, fake_flavor, 1, 1) deltas = {'instances': 1, 'cores': 1, 'ram': 512} # Verify that user_id is passed along to scope the resource # counting and limit checking. 
mock_check.assert_called_once_with( self.context, deltas, self.context.project_id, user_id=self.context.user_id, check_project_id=self.context.project_id, check_user_id=self.context.user_id) mock_check.reset_mock() class IsVolumeBackedInstanceTestCase(test.TestCase): def setUp(self): super(IsVolumeBackedInstanceTestCase, self).setUp() self.user_id = 'fake' self.project_id = 'fake' self.context = context.RequestContext(self.user_id, self.project_id) def test_is_volume_backed_instance_no_bdm_no_image(self): ctxt = self.context instance = create_instance(ctxt, params={'image_ref': ''}) self.assertTrue( compute_utils.is_volume_backed_instance(ctxt, instance, None)) def test_is_volume_backed_instance_empty_bdm_with_image(self): ctxt = self.context instance = create_instance(ctxt, params={ 'root_device_name': 'vda', 'image_ref': FAKE_IMAGE_REF }) self.assertFalse( compute_utils.is_volume_backed_instance( ctxt, instance, block_device_obj.block_device_make_list(ctxt, []))) def test_is_volume_backed_instance_bdm_volume_no_image(self): ctxt = self.context instance = create_instance(ctxt, params={ 'root_device_name': 'vda', 'image_ref': '' }) bdms = block_device_obj.block_device_make_list(ctxt, [fake_block_device.FakeDbBlockDeviceDict( {'source_type': 'volume', 'device_name': '/dev/vda', 'volume_id': uuids.volume_id, 'instance_uuid': 'f8000000-0000-0000-0000-000000000000', 'boot_index': 0, 'destination_type': 'volume'})]) self.assertTrue( compute_utils.is_volume_backed_instance(ctxt, instance, bdms)) def test_is_volume_backed_instance_bdm_local_no_image(self): # if the root device is local the instance is not volume backed, even # if no image_ref is set. ctxt = self.context instance = create_instance(ctxt, params={ 'root_device_name': 'vda', 'image_ref': '' }) bdms = block_device_obj.block_device_make_list(ctxt, [fake_block_device.FakeDbBlockDeviceDict( {'source_type': 'volume', 'device_name': '/dev/vda', 'volume_id': uuids.volume_id, 'destination_type': 'local', 'instance_uuid': 'f8000000-0000-0000-0000-000000000000', 'boot_index': 0, 'snapshot_id': None}), fake_block_device.FakeDbBlockDeviceDict( {'source_type': 'volume', 'device_name': '/dev/vdb', 'instance_uuid': 'f8000000-0000-0000-0000-000000000000', 'boot_index': 1, 'destination_type': 'volume', 'volume_id': 'c2ec2156-d75e-11e2-985b-5254009297d6', 'snapshot_id': None})]) self.assertFalse( compute_utils.is_volume_backed_instance(ctxt, instance, bdms)) def test_is_volume_backed_instance_bdm_volume_with_image(self): ctxt = self.context instance = create_instance(ctxt, params={ 'root_device_name': 'vda', 'image_ref': FAKE_IMAGE_REF }) bdms = block_device_obj.block_device_make_list(ctxt, [fake_block_device.FakeDbBlockDeviceDict( {'source_type': 'volume', 'device_name': '/dev/vda', 'volume_id': uuids.volume_id, 'boot_index': 0, 'destination_type': 'volume'})]) self.assertTrue( compute_utils.is_volume_backed_instance(ctxt, instance, bdms)) def test_is_volume_backed_instance_bdm_snapshot(self): ctxt = self.context instance = create_instance(ctxt, params={ 'root_device_name': 'vda' }) bdms = block_device_obj.block_device_make_list(ctxt, [fake_block_device.FakeDbBlockDeviceDict( {'source_type': 'volume', 'device_name': '/dev/vda', 'snapshot_id': 'de8836ac-d75e-11e2-8271-5254009297d6', 'instance_uuid': 'f8000000-0000-0000-0000-000000000000', 'destination_type': 'volume', 'boot_index': 0, 'volume_id': None})]) self.assertTrue( compute_utils.is_volume_backed_instance(ctxt, instance, bdms)) @mock.patch.object(objects.BlockDeviceMappingList, 
'get_by_instance_uuid') def test_is_volume_backed_instance_empty_bdm_by_uuid(self, mock_bdms): ctxt = self.context instance = create_instance(ctxt) mock_bdms.return_value = block_device_obj.block_device_make_list( ctxt, []) self.assertFalse( compute_utils.is_volume_backed_instance(ctxt, instance, None)) mock_bdms.assert_called_with(ctxt, instance.uuid) class ComputeUtilsImageFunctionsTestCase(test.TestCase): def setUp(self): super(ComputeUtilsImageFunctionsTestCase, self).setUp() self.context = context.RequestContext('fake', 'fake') def test_initialize_instance_snapshot_metadata_no_metadata(self): # show no borkage from empty system meta ctxt = self.context instance = create_instance(ctxt) image_meta = compute_utils.initialize_instance_snapshot_metadata( ctxt, instance, 'empty properties') self.assertEqual({}, image_meta['properties']) def test_initialize_instance_snapshot_metadata_removed_metadata(self): # show non-inheritable properties are excluded ctxt = self.context instance = create_instance(ctxt) instance.system_metadata = { 'image_img_signature': 'an-image-signature', 'image_cinder_encryption_key_id': 'deeeeeac-d75e-11e2-8271-1234567897d6', 'image_some_key': 'some_value', 'image_fred': 'barney', 'image_cache_in_nova': 'true' } image_meta = compute_utils.initialize_instance_snapshot_metadata( ctxt, instance, 'removed properties') properties = image_meta['properties'] self.assertGreater(len(properties), 0) self.assertIn('some_key', properties) self.assertIn('fred', properties) for p in compute_utils.NON_INHERITABLE_IMAGE_PROPERTIES: self.assertNotIn(p, properties) for p in CONF.non_inheritable_image_properties: self.assertNotIn(p, properties) class PciRequestUpdateTestCase(test.NoDBTestCase): def setUp(self): super().setUp() self.context = context.RequestContext('fake', 'fake') def test_no_pci_request(self): instance = objects.Instance( pci_requests=objects.InstancePCIRequests(requests=[])) provider_mapping = {} compute_utils.update_pci_request_spec_with_allocated_interface_name( self.context, mock.sentinel.report_client, instance, provider_mapping) def test_pci_request_from_flavor(self): instance = objects.Instance( pci_requests=objects.InstancePCIRequests(requests=[ objects.InstancePCIRequest(requester_id=None) ])) provider_mapping = {} compute_utils.update_pci_request_spec_with_allocated_interface_name( self.context, mock.sentinel.report_client, instance, provider_mapping) def test_pci_request_has_no_mapping(self): instance = objects.Instance( pci_requests=objects.InstancePCIRequests(requests=[ objects.InstancePCIRequest(requester_id=uuids.port_1) ])) provider_mapping = {} compute_utils.update_pci_request_spec_with_allocated_interface_name( self.context, mock.sentinel.report_client, instance, provider_mapping) def test_pci_request_ambiguous_mapping(self): instance = objects.Instance( pci_requests=objects.InstancePCIRequests(requests=[ objects.InstancePCIRequest(requester_id=uuids.port_1) ])) provider_mapping = {uuids.port_1: [uuids.rp1, uuids.rp2]} self.assertRaises( exception.AmbiguousResourceProviderForPCIRequest, (compute_utils. 
update_pci_request_spec_with_allocated_interface_name), self.context, mock.sentinel.report_client, instance, provider_mapping) def test_unexpected_provider_name(self): report_client = mock.Mock(spec=report.SchedulerReportClient) report_client.get_resource_provider_name.return_value = 'unexpected' instance = objects.Instance( pci_requests=objects.InstancePCIRequests(requests=[ objects.InstancePCIRequest( requester_id=uuids.port_1, spec=[{}]) ])) provider_mapping = {uuids.port_1: [uuids.rp1]} self.assertRaises( exception.UnexpectedResourceProviderNameForPCIRequest, (compute_utils. update_pci_request_spec_with_allocated_interface_name), self.context, report_client, instance, provider_mapping) report_client.get_resource_provider_name.assert_called_once_with( self.context, uuids.rp1) self.assertNotIn( 'parent_ifname', instance.pci_requests.requests[0].spec[0]) def test_pci_request_updated(self): report_client = mock.Mock(spec=report.SchedulerReportClient) report_client.get_resource_provider_name.return_value = ( 'host:agent:enp0s31f6') instance = objects.Instance( pci_requests=objects.InstancePCIRequests(requests=[ objects.InstancePCIRequest( requester_id=uuids.port_1, spec=[{}], ) ])) provider_mapping = {uuids.port_1: [uuids.rp1]} compute_utils.update_pci_request_spec_with_allocated_interface_name( self.context, report_client, instance, provider_mapping) report_client.get_resource_provider_name.assert_called_once_with( self.context, uuids.rp1) self.assertEqual( 'enp0s31f6', instance.pci_requests.requests[0].spec[0]['parent_ifname']) class AcceleratorRequestTestCase(test.NoDBTestCase): def setUp(self): super(AcceleratorRequestTestCase, self).setUp() self.context = context.get_admin_context() @mock.patch.object(cyborgclient, 'delete_arqs_for_instance') def test_delete_with_device_profile(self, mock_del_arq): flavor = objects.Flavor(**test_flavor.fake_flavor) flavor['extra_specs'] = {'accel:device_profile': 'mydp'} instance = fake_instance.fake_instance_obj(self.context, flavor=flavor) compute_utils.delete_arqs_if_needed(self.context, instance) mock_del_arq.assert_called_once_with(instance.uuid) @mock.patch.object(cyborgclient, 'delete_arqs_for_instance') def test_delete_with_no_device_profile(self, mock_del_arq): flavor = objects.Flavor(**test_flavor.fake_flavor) flavor['extra_specs'] = {} instance = fake_instance.fake_instance_obj(self.context, flavor=flavor) compute_utils.delete_arqs_if_needed(self.context, instance) mock_del_arq.assert_not_called() @mock.patch('nova.compute.utils.LOG.exception') @mock.patch.object(cyborgclient, 'delete_arqs_for_instance') def test_delete_with_device_profile_exception(self, mock_del_arq, mock_log_exc): flavor = objects.Flavor(**test_flavor.fake_flavor) flavor['extra_specs'] = {'accel:device_profile': 'mydp'} instance = fake_instance.fake_instance_obj(self.context, flavor=flavor) mock_del_arq.side_effect = exception.AcceleratorRequestOpFailed( op='', msg='') compute_utils.delete_arqs_if_needed(self.context, instance) mock_del_arq.assert_called_once_with(instance.uuid) mock_log_exc.assert_called_once() self.assertIn('Failed to delete accelerator requests for instance', mock_log_exc.call_args[0][0]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/compute/test_compute_xen.py0000664000175000017500000000575400000000000023300 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the 
License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Tests for expectations of behaviour from the Xen driver."""

import mock

from nova.compute import manager
from nova.compute import power_state
from nova import context
from nova import objects
from nova.objects import instance as instance_obj
from nova.tests.unit.compute import eventlet_utils
from nova.tests.unit import fake_instance
from nova.tests.unit.virt.xenapi import stubs
from nova.virt.xenapi import vm_utils


class ComputeXenTestCase(stubs.XenAPITestBaseNoDB):
    def setUp(self):
        super(ComputeXenTestCase, self).setUp()
        self.flags(compute_driver='xenapi.XenAPIDriver')
        self.flags(connection_url='http://localhost',
                   connection_password='test_pass',
                   group='xenserver')
        stubs.stubout_session(self, stubs.FakeSessionForVMTests)
        self.compute = manager.ComputeManager()
        # execute power syncing synchronously for testing:
        self.compute._sync_power_pool = eventlet_utils.SyncPool()

    def test_sync_power_states_instance_not_found(self):
        db_instance = fake_instance.fake_db_instance()
        ctxt = context.get_admin_context()
        instance_list = instance_obj._make_instance_list(
            ctxt, objects.InstanceList(), [db_instance], None)
        instance = instance_list[0]

        @mock.patch.object(vm_utils, 'lookup')
        @mock.patch.object(objects.InstanceList, 'get_by_host')
        @mock.patch.object(self.compute.driver, 'get_num_instances')
        @mock.patch.object(self.compute, '_sync_instance_power_state')
        def do_test(mock_compute_sync_powerstate,
                    mock_compute_get_num_instances,
                    mock_instance_list_get_by_host,
                    mock_vm_utils_lookup):
            mock_instance_list_get_by_host.return_value = instance_list
            mock_compute_get_num_instances.return_value = 1
            mock_vm_utils_lookup.return_value = None

            self.compute._sync_power_states(ctxt)

            mock_instance_list_get_by_host.assert_called_once_with(
                ctxt, self.compute.host, expected_attrs=[], use_slave=True)
            mock_compute_get_num_instances.assert_called_once_with()
            mock_compute_sync_powerstate.assert_called_once_with(
                ctxt, instance, power_state.NOSTATE, use_slave=True)
            mock_vm_utils_lookup.assert_called_once_with(
                self.compute.driver._session, instance['name'], False)

        do_test()
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0
nova-21.2.4/nova/tests/unit/compute/test_flavors.py0000664000175000017500000000443300000000000022417 0ustar00zuulzuul00000000000000
# Copyright 2014 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Tests for flavor basic functions"""

from nova.compute import flavors
from nova import exception
from nova import test


class ExtraSpecTestCase(test.NoDBTestCase):
    def _flavor_validate_extra_spec_keys_invalid_input(self, key_name_list):
        self.assertRaises(exception.InvalidInput,
                          flavors.validate_extra_spec_keys, key_name_list)

    def test_flavor_validate_extra_spec_keys_invalid_input(self):
        lists = [['', ], ['*', ], ['+', ]]
        for x in lists:
            self._flavor_validate_extra_spec_keys_invalid_input(x)

    def test_flavor_validate_extra_spec_keys(self):
        key_name_list = ['abc', 'ab c', 'a-b-c', 'a_b-c', 'a:bc']
        flavors.validate_extra_spec_keys(key_name_list)


class CreateFlavorTestCase(test.NoDBTestCase):
    def test_create_flavor_ram_error(self):
        args = ("ram_test", "9999999999", "1", "10", "1")
        try:
            flavors.create(*args)
            self.fail("Be sure this will never be executed.")
        except exception.InvalidInput as e:
            self.assertIn("ram", e.message)

    def test_create_flavor_disk_error(self):
        args = ("disk_test", "1024", "1", "9999999999", "1")
        try:
            flavors.create(*args)
            self.fail("Be sure this will never be executed.")
        except exception.InvalidInput as e:
            self.assertIn("disk", e.message)

    def test_create_flavor_ephemeral_error(self):
        args = ("ephemeral_test", "1024", "1", "10", "9999999999")
        try:
            flavors.create(*args)
            self.fail("Be sure this will never be executed.")
        except exception.InvalidInput as e:
            self.assertIn("ephemeral", e.message)
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0
nova-21.2.4/nova/tests/unit/compute/test_host_api.py0000664000175000017500000010152200000000000022546 0ustar00zuulzuul00000000000000
# Copyright (c) 2012 OpenStack Foundation
# All Rights Reserved.
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fixtures import mock import oslo_messaging as messaging from oslo_utils.fixture import uuidsentinel as uuids from nova.api.openstack.compute import services from nova.compute import api as compute from nova import context from nova import exception from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_notifier from nova.tests.unit.objects import test_objects from nova.tests.unit.objects import test_service class ComputeHostAPITestCase(test.TestCase): def setUp(self): super(ComputeHostAPITestCase, self).setUp() self.host_api = compute.HostAPI() self.aggregate_api = compute.AggregateAPI() self.ctxt = context.get_admin_context() fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) self.req = fakes.HTTPRequest.blank('') self.controller = services.ServiceController() self.useFixture(nova_fixtures.SingleCellSimple()) def _compare_obj(self, obj, db_obj): test_objects.compare_obj(self, obj, db_obj, allow_missing=test_service.OPTIONAL) def _compare_objs(self, obj_list, db_obj_list): self.assertEqual(len(obj_list), len(db_obj_list), "The length of two object lists are different.") for index, obj in enumerate(obj_list): self._compare_obj(obj, db_obj_list[index]) def test_set_host_enabled(self): fake_notifier.NOTIFICATIONS = [] @mock.patch.object(self.host_api.rpcapi, 'set_host_enabled', return_value='fake-result') @mock.patch.object(self.host_api, '_assert_host_exists', return_value='fake_host') def _do_test(mock_assert_host_exists, mock_set_host_enabled): result = self.host_api.set_host_enabled(self.ctxt, 'fake_host', 'fake_enabled') self.assertEqual('fake-result', result) self.assertEqual(2, len(fake_notifier.NOTIFICATIONS)) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual('HostAPI.set_enabled.start', msg.event_type) self.assertEqual('api.fake_host', msg.publisher_id) self.assertEqual('INFO', msg.priority) self.assertEqual('fake_enabled', msg.payload['enabled']) self.assertEqual('fake_host', msg.payload['host_name']) msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual('HostAPI.set_enabled.end', msg.event_type) self.assertEqual('api.fake_host', msg.publisher_id) self.assertEqual('INFO', msg.priority) self.assertEqual('fake_enabled', msg.payload['enabled']) self.assertEqual('fake_host', msg.payload['host_name']) _do_test() def test_host_name_from_assert_hosts_exists(self): @mock.patch.object(self.host_api.rpcapi, 'set_host_enabled', return_value='fake-result') @mock.patch.object(self.host_api, '_assert_host_exists', return_value='fake_host') def _do_test(mock_assert_host_exists, mock_set_host_enabled): result = self.host_api.set_host_enabled(self.ctxt, 'fake_host', 'fake_enabled') self.assertEqual('fake-result', result) _do_test() def test_get_host_uptime(self): @mock.patch.object(self.host_api.rpcapi, 'get_host_uptime', return_value='fake-result') @mock.patch.object(self.host_api, '_assert_host_exists', return_value='fake_host') def _do_test(mock_assert_host_exists, mock_get_host_uptime): result = self.host_api.get_host_uptime(self.ctxt, 'fake_host') self.assertEqual('fake-result', result) _do_test() def test_get_host_uptime_service_down(self): @mock.patch.object(self.host_api.db, 'service_get_by_compute_host', return_value=dict(test_service.fake_service, id=1)) @mock.patch.object(self.host_api.servicegroup_api, 'service_is_up', return_value=False) def _do_test(mock_service_is_up, mock_service_get_by_compute_host): 
self.assertRaises(exception.ComputeServiceUnavailable, self.host_api.get_host_uptime, self.ctxt, 'fake_host') _do_test() def test_host_power_action(self): fake_notifier.NOTIFICATIONS = [] @mock.patch.object(self.host_api.rpcapi, 'host_power_action', return_value='fake-result') @mock.patch.object(self.host_api, '_assert_host_exists', return_value='fake_host') def _do_test(mock_assert_host_exists, mock_host_power_action): result = self.host_api.host_power_action(self.ctxt, 'fake_host', 'fake_action') self.assertEqual('fake-result', result) self.assertEqual(2, len(fake_notifier.NOTIFICATIONS)) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual('HostAPI.power_action.start', msg.event_type) self.assertEqual('api.fake_host', msg.publisher_id) self.assertEqual('INFO', msg.priority) self.assertEqual('fake_action', msg.payload['action']) self.assertEqual('fake_host', msg.payload['host_name']) msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual('HostAPI.power_action.end', msg.event_type) self.assertEqual('api.fake_host', msg.publisher_id) self.assertEqual('INFO', msg.priority) self.assertEqual('fake_action', msg.payload['action']) self.assertEqual('fake_host', msg.payload['host_name']) _do_test() def test_set_host_maintenance(self): fake_notifier.NOTIFICATIONS = [] @mock.patch.object(self.host_api.rpcapi, 'host_maintenance_mode', return_value='fake-result') @mock.patch.object(self.host_api, '_assert_host_exists', return_value='fake_host') def _do_test(mock_assert_host_exists, mock_host_maintenance_mode): result = self.host_api.set_host_maintenance(self.ctxt, 'fake_host', 'fake_mode') self.assertEqual('fake-result', result) self.assertEqual(2, len(fake_notifier.NOTIFICATIONS)) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual('HostAPI.set_maintenance.start', msg.event_type) self.assertEqual('api.fake_host', msg.publisher_id) self.assertEqual('INFO', msg.priority) self.assertEqual('fake_host', msg.payload['host_name']) self.assertEqual('fake_mode', msg.payload['mode']) msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual('HostAPI.set_maintenance.end', msg.event_type) self.assertEqual('api.fake_host', msg.publisher_id) self.assertEqual('INFO', msg.priority) self.assertEqual('fake_host', msg.payload['host_name']) self.assertEqual('fake_mode', msg.payload['mode']) _do_test() def test_service_get_all_cells(self): cells = objects.CellMappingList.get_all(self.ctxt) for cell in cells: with context.target_cell(self.ctxt, cell) as cctxt: objects.Service(context=cctxt, binary='nova-compute', host='host-%s' % cell.uuid).create() services = self.host_api.service_get_all(self.ctxt, all_cells=True) self.assertEqual(sorted(['host-%s' % cell.uuid for cell in cells]), sorted([svc.host for svc in services])) @mock.patch('nova.context.scatter_gather_cells') def test_service_get_all_cells_with_failures(self, mock_sg): service = objects.Service(binary='nova-compute', host='host-%s' % uuids.cell1) mock_sg.return_value = { uuids.cell1: [service], uuids.cell2: context.did_not_respond_sentinel } services = self.host_api.service_get_all(self.ctxt, all_cells=True) # returns the results from cell1 and ignores cell2. 
self.assertEqual(['host-%s' % uuids.cell1], [svc.host for svc in services]) @mock.patch('nova.objects.CellMappingList.get_all') @mock.patch.object(objects.HostMappingList, 'get_by_cell_id') @mock.patch('nova.context.scatter_gather_all_cells') def test_service_get_all_cells_with_minimal_constructs(self, mock_sg, mock_get_hm, mock_cm_list): service = objects.Service(binary='nova-compute', host='host-%s' % uuids.cell0) cells = [ objects.CellMapping(uuid=uuids.cell1, id=1), objects.CellMapping(uuid=uuids.cell2, id=2), ] mock_cm_list.return_value = cells context.load_cells() # create two hms in cell1, which is the down cell in this test. hm1 = objects.HostMapping(self.ctxt, host='host1-unavailable', cell_mapping=cells[0]) hm1.create() hm2 = objects.HostMapping(self.ctxt, host='host2-unavailable', cell_mapping=cells[0]) hm2.create() mock_sg.return_value = { cells[0].uuid: [service], cells[1].uuid: context.did_not_respond_sentinel, } mock_get_hm.return_value = [hm1, hm2] services = self.host_api.service_get_all(self.ctxt, all_cells=True, cell_down_support=True) # returns the results from cell0 and minimal construct from cell1. self.assertEqual(sorted(['host-%s' % uuids.cell0, 'host1-unavailable', 'host2-unavailable']), sorted([svc.host for svc in services])) mock_sg.assert_called_once_with(self.ctxt, objects.ServiceList.get_all, None, set_zones=False) mock_get_hm.assert_called_once_with(self.ctxt, cells[1].id) def test_service_get_all_no_zones(self): services = [dict(test_service.fake_service, id=1, topic='compute', host='host1'), dict(test_service.fake_service, topic='compute', host='host2')] @mock.patch.object(self.host_api.db, 'service_get_all') def _do_test(mock_service_get_all): mock_service_get_all.return_value = services # Test no filters result = self.host_api.service_get_all(self.ctxt) mock_service_get_all.assert_called_once_with(self.ctxt, disabled=None) self._compare_objs(result, services) # Test no filters #2 mock_service_get_all.reset_mock() result = self.host_api.service_get_all(self.ctxt, filters={}) mock_service_get_all.assert_called_once_with(self.ctxt, disabled=None) self._compare_objs(result, services) # Test w/ filter mock_service_get_all.reset_mock() result = self.host_api.service_get_all(self.ctxt, filters=dict(host='host2')) mock_service_get_all.assert_called_once_with(self.ctxt, disabled=None) self._compare_objs(result, [services[1]]) _do_test() def test_service_get_all(self): services = [dict(test_service.fake_service, topic='compute', host='host1'), dict(test_service.fake_service, topic='compute', host='host2')] exp_services = [] for service in services: exp_service = {} exp_service.update(availability_zone='nova', **service) exp_services.append(exp_service) @mock.patch.object(self.host_api.db, 'service_get_all') def _do_test(mock_service_get_all): mock_service_get_all.return_value = services # Test no filters result = self.host_api.service_get_all(self.ctxt, set_zones=True) mock_service_get_all.assert_called_once_with(self.ctxt, disabled=None) self._compare_objs(result, exp_services) # Test no filters #2 mock_service_get_all.reset_mock() result = self.host_api.service_get_all(self.ctxt, filters={}, set_zones=True) mock_service_get_all.assert_called_once_with(self.ctxt, disabled=None) self._compare_objs(result, exp_services) # Test w/ filter mock_service_get_all.reset_mock() result = self.host_api.service_get_all(self.ctxt, filters=dict(host='host2'), set_zones=True) mock_service_get_all.assert_called_once_with(self.ctxt, disabled=None) self._compare_objs(result, 
[exp_services[1]]) # Test w/ zone filter but no set_zones arg. mock_service_get_all.reset_mock() filters = {'availability_zone': 'nova'} result = self.host_api.service_get_all(self.ctxt, filters=filters) mock_service_get_all.assert_called_once_with(self.ctxt, disabled=None) self._compare_objs(result, exp_services) _do_test() def test_service_get_by_compute_host(self): @mock.patch.object(self.host_api.db, 'service_get_by_compute_host', return_value=test_service.fake_service) def _do_test(mock_service_get_by_compute_host): result = self.host_api.service_get_by_compute_host(self.ctxt, 'fake-host') self.assertEqual(test_service.fake_service['id'], result.id) _do_test() def test_service_update_by_host_and_binary(self): host_name = 'fake-host' binary = 'nova-compute' params_to_update = dict(disabled=True) service_id = 42 expected_result = dict(test_service.fake_service, id=service_id) @mock.patch.object(self.host_api, '_update_compute_provider_status') @mock.patch.object(self.host_api.db, 'service_get_by_host_and_binary') @mock.patch.object(self.host_api.db, 'service_update') def _do_test(mock_service_update, mock_service_get_by_host_and_binary, mock_update_compute_provider_status): mock_service_get_by_host_and_binary.return_value = expected_result mock_service_update.return_value = expected_result result = self.host_api.service_update_by_host_and_binary( self.ctxt, host_name, binary, params_to_update) self._compare_obj(result, expected_result) mock_update_compute_provider_status.assert_called_once_with( self.ctxt, test.MatchType(objects.Service)) _do_test() @mock.patch('nova.compute.api.HostAPI._update_compute_provider_status', new_callable=mock.NonCallableMock) def test_service_update_no_update_provider_status(self, mock_ucps): """Tests the scenario that the service is updated but the disabled field is not changed, for example the forced_down field is only updated. In this case _update_compute_provider_status should not be called. """ service = objects.Service(forced_down=True) self.assertIn('forced_down', service.obj_what_changed()) with mock.patch.object(service, 'save') as mock_save: retval = self.host_api.service_update(self.ctxt, service) self.assertIs(retval, service) mock_save.assert_called_once_with() @mock.patch('nova.compute.rpcapi.ComputeAPI.set_host_enabled', new_callable=mock.NonCallableMock) def test_update_compute_provider_status_service_too_old(self, mock_she): """Tests the scenario that the service is up but is too old to sync the COMPUTE_STATUS_DISABLED trait. """ service = objects.Service(host='fake-host') service.version = compute.MIN_COMPUTE_SYNC_COMPUTE_STATUS_DISABLED - 1 with mock.patch.object( self.host_api.servicegroup_api, 'service_is_up', return_value=True) as service_is_up: self.host_api._update_compute_provider_status(self.ctxt, service) service_is_up.assert_called_once_with(service) self.assertIn('Compute service on host fake-host is too old to sync ' 'the COMPUTE_STATUS_DISABLED trait in Placement.', self.stdlog.logger.output) @mock.patch('nova.compute.rpcapi.ComputeAPI.set_host_enabled', side_effect=messaging.MessagingTimeout) def test_update_compute_provider_status_service_rpc_error(self, mock_she): """Tests the scenario that the RPC call to the compute service raised some exception. 
""" service = objects.Service(host='fake-host', disabled=True) with mock.patch.object( self.host_api.servicegroup_api, 'service_is_up', return_value=True) as service_is_up: self.host_api._update_compute_provider_status(self.ctxt, service) service_is_up.assert_called_once_with(service) mock_she.assert_called_once_with(self.ctxt, 'fake-host', False) log_output = self.stdlog.logger.output self.assertIn('An error occurred while updating the ' 'COMPUTE_STATUS_DISABLED trait on compute node ' 'resource providers managed by host fake-host.', log_output) self.assertIn('MessagingTimeout', log_output) @mock.patch.object(objects.InstanceList, 'get_by_host', return_value = ['fake-responses']) def test_instance_get_all_by_host(self, mock_get): result = self.host_api.instance_get_all_by_host(self.ctxt, 'fake-host') self.assertEqual(['fake-responses'], result) def test_task_log_get_all(self): @mock.patch.object(self.host_api.db, 'task_log_get_all', return_value='fake-response') def _do_test(mock_task_log_get_all): result = self.host_api.task_log_get_all(self.ctxt, 'fake-name', 'fake-begin', 'fake-end', host='fake-host', state='fake-state') self.assertEqual('fake-response', result) _do_test() @mock.patch.object(objects.CellMappingList, 'get_all', return_value=objects.CellMappingList(objects=[ objects.CellMapping( uuid=uuids.cell1_uuid, transport_url='mq://fake1', database_connection='db://fake1'), objects.CellMapping( uuid=uuids.cell2_uuid, transport_url='mq://fake2', database_connection='db://fake2'), objects.CellMapping( uuid=uuids.cell3_uuid, transport_url='mq://fake3', database_connection='db://fake3')])) @mock.patch.object(objects.Service, 'get_by_uuid', side_effect=[ exception.ServiceNotFound( service_id=uuids.service_uuid), objects.Service(uuid=uuids.service_uuid)]) def test_service_get_by_id_using_uuid(self, service_get_by_uuid, cell_mappings_get_all): """Tests that we can lookup a service in the HostAPI using a uuid. There are two calls to objects.Service.get_by_uuid and the first raises ServiceNotFound so that we ensure we keep looping over the cells. We'll find the service in the second cell and break the loop so that we don't needlessly check in the third cell. """ def _fake_set_target_cell(ctxt, cell_mapping): if cell_mapping: # These aren't really what would be set for values but let's # keep this simple so we can assert something is set when a # mapping is provided. ctxt.db_connection = cell_mapping.database_connection ctxt.mq_connection = cell_mapping.transport_url # We have to override the SingleCellSimple fixture. self.useFixture(fixtures.MonkeyPatch( 'nova.context.set_target_cell', _fake_set_target_cell)) ctxt = context.get_admin_context() self.assertIsNone(ctxt.db_connection) self.host_api.service_get_by_id(ctxt, uuids.service_uuid) # We should have broken the loop over the cells and set the target cell # on the context. service_get_by_uuid.assert_has_calls( [mock.call(ctxt, uuids.service_uuid)] * 2) self.assertEqual('db://fake2', ctxt.db_connection) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_remove_host') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'aggregate_add_host') @mock.patch.object(objects.ComputeNodeList, 'get_all_by_host') @mock.patch.object(objects.HostMapping, 'get_by_host') def test_service_delete_compute_in_aggregate( self, mock_hm, mock_get_cn, mock_add_host, mock_remove_host): compute = objects.Service(self.ctxt, **{'host': 'fake-compute-host', 'binary': 'nova-compute', 'topic': 'compute', 'report_count': 0}) compute.create() # This is needed because of lazy-loading service.compute_node cn = objects.ComputeNode(uuid=uuids.cn, host="fake-compute-host", hypervisor_hostname="fake-compute-host") mock_get_cn.return_value = [cn] aggregate = self.aggregate_api.create_aggregate(self.ctxt, 'aggregate', None) self.aggregate_api.add_host_to_aggregate(self.ctxt, aggregate.id, 'fake-compute-host') mock_add_host.assert_called_once_with( mock.ANY, aggregate.uuid, host_name='fake-compute-host') self.controller.delete(self.req, compute.id) result = self.aggregate_api.get_aggregate(self.ctxt, aggregate.id).hosts self.assertEqual([], result) mock_hm.return_value.destroy.assert_called_once_with() mock_remove_host.assert_called_once_with( mock.ANY, aggregate.uuid, 'fake-compute-host') @mock.patch('nova.db.api.compute_node_statistics') def test_compute_node_statistics(self, mock_cns): # Note this should only be called twice mock_cns.side_effect = [ {'stat1': 1, 'stat2': 4.0}, {'stat1': 5, 'stat2': 1.2}, ] compute.CELLS = [objects.CellMapping(uuid=uuids.cell1), objects.CellMapping( uuid=objects.CellMapping.CELL0_UUID), objects.CellMapping(uuid=uuids.cell2)] stats = self.host_api.compute_node_statistics(self.ctxt) self.assertEqual({'stat1': 6, 'stat2': 5.2}, stats) @mock.patch.object(objects.CellMappingList, 'get_all', return_value=objects.CellMappingList(objects=[ objects.CellMapping( uuid=objects.CellMapping.CELL0_UUID, transport_url='mq://cell0', database_connection='db://cell0'), objects.CellMapping( uuid=uuids.cell1_uuid, transport_url='mq://fake1', database_connection='db://fake1'), objects.CellMapping( uuid=uuids.cell2_uuid, transport_url='mq://fake2', database_connection='db://fake2')])) @mock.patch.object(objects.ComputeNode, 'get_by_uuid', side_effect=[exception.ComputeHostNotFound( host=uuids.cn_uuid), objects.ComputeNode(uuid=uuids.cn_uuid)]) def test_compute_node_get_using_uuid(self, compute_get_by_uuid, cell_mappings_get_all): """Tests that we can lookup a compute node in the HostAPI using a uuid. """ self.host_api.compute_node_get(self.ctxt, uuids.cn_uuid) # cell0 should have been skipped, and the compute node wasn't found # in cell1 so we checked cell2 and found it self.assertEqual(2, compute_get_by_uuid.call_count) compute_get_by_uuid.assert_has_calls( [mock.call(self.ctxt, uuids.cn_uuid)] * 2) @mock.patch.object(objects.CellMappingList, 'get_all', return_value=objects.CellMappingList(objects=[ objects.CellMapping( uuid=objects.CellMapping.CELL0_UUID, transport_url='mq://cell0', database_connection='db://cell0'), objects.CellMapping( uuid=uuids.cell1_uuid, transport_url='mq://fake1', database_connection='db://fake1'), objects.CellMapping( uuid=uuids.cell2_uuid, transport_url='mq://fake2', database_connection='db://fake2')])) @mock.patch.object(objects.ComputeNode, 'get_by_uuid', side_effect=exception.ComputeHostNotFound( host=uuids.cn_uuid)) def test_compute_node_get_not_found(self, compute_get_by_uuid, cell_mappings_get_all): """Tests that we can lookup a compute node in the HostAPI using a uuid and will fail with ComputeHostNotFound if we didn't find it in any cell. 
""" self.assertRaises(exception.ComputeHostNotFound, self.host_api.compute_node_get, self.ctxt, uuids.cn_uuid) # cell0 should have been skipped, and the compute node wasn't found # in cell1 or cell2. self.assertEqual(2, compute_get_by_uuid.call_count) compute_get_by_uuid.assert_has_calls( [mock.call(self.ctxt, uuids.cn_uuid)] * 2) class ComputeAggregateAPITestCase(test.TestCase): def setUp(self): super(ComputeAggregateAPITestCase, self).setUp() self.aggregate_api = compute.AggregateAPI() self.ctxt = context.get_admin_context() # NOTE(jaypipes): We just mock out the HostNapping and Service object # lookups in order to bypass the code that does cell lookup stuff, # which isn't germane to these tests self.useFixture( fixtures.MockPatch('nova.objects.HostMapping.get_by_host')) self.useFixture( fixtures.MockPatch('nova.context.set_target_cell')) mock_service_get_by_compute_host = ( self.useFixture( fixtures.MockPatch( 'nova.objects.Service.get_by_compute_host')).mock) mock_service_get_by_compute_host.return_value = ( objects.Service(host='fake-host')) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_add_host') @mock.patch.object(compute.LOG, 'warning') def test_aggregate_add_host_placement_missing_provider( self, mock_log, mock_pc_add_host): hostname = 'fake-host' err = exception.ResourceProviderNotFound(name_or_uuid=hostname) mock_pc_add_host.side_effect = err aggregate = self.aggregate_api.create_aggregate( self.ctxt, 'aggregate', None) self.aggregate_api.add_host_to_aggregate( self.ctxt, aggregate.id, hostname) # Nothing should blow up in Rocky, but we should get a warning msg = ("Failed to associate %s with a placement " "aggregate: %s. This may be corrected after running " "nova-manage placement sync_aggregates.") mock_log.assert_called_with(msg, hostname, err) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_add_host') def test_aggregate_add_host_bad_placement(self, mock_pc_add_host): hostname = 'fake-host' mock_pc_add_host.side_effect = exception.PlacementAPIConnectFailure aggregate = self.aggregate_api.create_aggregate( self.ctxt, 'aggregate', None) agg_uuid = aggregate.uuid self.assertRaises(exception.PlacementAPIConnectFailure, self.aggregate_api.add_host_to_aggregate, self.ctxt, aggregate.id, hostname) mock_pc_add_host.assert_called_once_with( self.ctxt, agg_uuid, host_name=hostname) @mock.patch('nova.objects.Aggregate.delete_host') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'aggregate_remove_host') def test_aggregate_remove_host_bad_placement( self, mock_pc_remove_host, mock_agg_obj_delete_host): hostname = 'fake-host' mock_pc_remove_host.side_effect = exception.PlacementAPIConnectFailure aggregate = self.aggregate_api.create_aggregate( self.ctxt, 'aggregate', None) agg_uuid = aggregate.uuid self.assertRaises(exception.PlacementAPIConnectFailure, self.aggregate_api.remove_host_from_aggregate, self.ctxt, aggregate.id, hostname) mock_pc_remove_host.assert_called_once_with( self.ctxt, agg_uuid, hostname) # Make sure mock_agg_obj_delete_host wasn't called since placement # should be tried first and failed with a server failure. mock_agg_obj_delete_host.assert_not_called() @mock.patch('nova.objects.Aggregate.delete_host') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'aggregate_remove_host') @mock.patch.object(compute.LOG, 'warning') def test_aggregate_remove_host_placement_missing_provider( self, mock_log, mock_pc_remove_host, mock_agg_obj_delete_host): hostname = 'fake-host' err = exception.ResourceProviderNotFound(name_or_uuid=hostname) mock_pc_remove_host.side_effect = err aggregate = self.aggregate_api.create_aggregate( self.ctxt, 'aggregate', None) self.aggregate_api.remove_host_from_aggregate( self.ctxt, aggregate.id, hostname) # Nothing should blow up in Rocky, but we should get a warning msg = ("Failed to remove association of %s with a placement " "aggregate: %s.") mock_log.assert_called_with(msg, hostname, err) # In this case Aggregate.delete_host is still called because the # ResourceProviderNotFound error is just logged. mock_agg_obj_delete_host.assert_called_once_with(hostname) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/compute/test_instance_list.py0000664000175000017500000004035100000000000023601 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils.fixture import uuidsentinel as uuids import six from nova.compute import instance_list from nova.compute import multi_cell_list from nova import context as nova_context from nova import exception from nova import objects from nova import test from nova.tests import fixtures FAKE_CELLS = [objects.CellMapping(), objects.CellMapping()] class TestInstanceList(test.NoDBTestCase): def setUp(self): super(TestInstanceList, self).setUp() cells = [objects.CellMapping(uuid=getattr(uuids, 'cell%i' % i), name='cell%i' % i, transport_url='fake:///', database_connection='fake://') for i in range(0, 3)] insts = {} for cell in cells: insts[cell.uuid] = list([ dict( uuid=getattr(uuids, '%s-inst%i' % (cell.name, i)), hostname='%s-inst%i' % (cell.name, i)) for i in range(0, 3)]) self.cells = cells self.insts = insts self.context = nova_context.RequestContext() self.useFixture(fixtures.SpawnIsSynchronousFixture()) self.flags(instance_list_cells_batch_strategy='fixed', group='api') def test_compare_simple_instance_quirks(self): # Ensure uuid,asc is added ctx = instance_list.InstanceSortContext(['key0'], ['asc']) self.assertEqual(['key0', 'uuid'], ctx.sort_keys) self.assertEqual(['asc', 'asc'], ctx.sort_dirs) # Ensure defaults are added ctx = instance_list.InstanceSortContext(None, None) self.assertEqual(['created_at', 'id', 'uuid'], ctx.sort_keys) self.assertEqual(['desc', 'desc', 'asc'], ctx.sort_dirs) @mock.patch('nova.db.api.instance_get_all_by_filters_sort') @mock.patch('nova.objects.CellMappingList.get_all') def test_get_instances_sorted(self, mock_cells, mock_inst): mock_cells.return_value = self.cells insts_by_cell = self.insts.values() mock_inst.side_effect = insts_by_cell obj, insts = instance_list.get_instances_sorted(self.context, {}, None, None, [], ['hostname'], ['asc']) insts_one = [inst['hostname'] for inst in insts] # Reverse the order that we get things from the cells so 
we can # make sure that the result is still sorted the same way insts_by_cell = list(reversed(list(insts_by_cell))) mock_inst.reset_mock() mock_inst.side_effect = insts_by_cell obj, insts = instance_list.get_instances_sorted(self.context, {}, None, None, [], ['hostname'], ['asc']) insts_two = [inst['hostname'] for inst in insts] self.assertEqual(insts_one, insts_two) @mock.patch('nova.objects.BuildRequestList.get_by_filters') @mock.patch('nova.compute.instance_list.get_instances_sorted') @mock.patch('nova.objects.CellMappingList.get_by_project_id') def test_user_gets_subset_of_cells(self, mock_cm, mock_gi, mock_br): self.flags(instance_list_per_project_cells=True, group='api') mock_gi.return_value = instance_list.InstanceLister(None, None), [] mock_br.return_value = [] user_context = nova_context.RequestContext('fake', 'fake') instance_list.get_instance_objects_sorted( user_context, {}, None, None, [], None, None) mock_gi.assert_called_once_with(user_context, {}, None, None, [], None, None, cell_mappings=mock_cm.return_value, batch_size=1000, cell_down_support=False) @mock.patch('nova.context.CELLS', new=FAKE_CELLS) @mock.patch('nova.context.load_cells') @mock.patch('nova.objects.BuildRequestList.get_by_filters') @mock.patch('nova.compute.instance_list.get_instances_sorted') @mock.patch('nova.objects.CellMappingList.get_by_project_id') def test_admin_gets_all_cells(self, mock_cm, mock_gi, mock_br, mock_lc): mock_gi.return_value = instance_list.InstanceLister(None, None), [] mock_br.return_value = [] admin_context = nova_context.RequestContext('fake', 'fake', is_admin=True) instance_list.get_instance_objects_sorted( admin_context, {}, None, None, [], None, None) mock_gi.assert_called_once_with(admin_context, {}, None, None, [], None, None, cell_mappings=FAKE_CELLS, batch_size=100, cell_down_support=False) mock_cm.assert_not_called() mock_lc.assert_called_once_with() @mock.patch('nova.context.CELLS', new=FAKE_CELLS) @mock.patch('nova.context.load_cells') @mock.patch('nova.objects.BuildRequestList.get_by_filters') @mock.patch('nova.compute.instance_list.get_instances_sorted') @mock.patch('nova.objects.CellMappingList.get_by_project_id') def test_user_gets_all_cells(self, mock_cm, mock_gi, mock_br, mock_lc): self.flags(instance_list_per_project_cells=False, group='api') mock_gi.return_value = instance_list.InstanceLister(None, None), [] mock_br.return_value = [] user_context = nova_context.RequestContext('fake', 'fake') instance_list.get_instance_objects_sorted( user_context, {}, None, None, [], None, None) mock_gi.assert_called_once_with(user_context, {}, None, None, [], None, None, cell_mappings=FAKE_CELLS, batch_size=100, cell_down_support=False) mock_lc.assert_called_once_with() @mock.patch('nova.context.CELLS', new=FAKE_CELLS) @mock.patch('nova.context.load_cells') @mock.patch('nova.objects.BuildRequestList.get_by_filters') @mock.patch('nova.compute.instance_list.get_instances_sorted') @mock.patch('nova.objects.CellMappingList.get_by_project_id') def test_admin_gets_all_cells_anyway(self, mock_cm, mock_gi, mock_br, mock_lc): self.flags(instance_list_per_project_cells=True, group='api') mock_gi.return_value = instance_list.InstanceLister(None, None), [] mock_br.return_value = [] admin_context = nova_context.RequestContext('fake', 'fake', is_admin=True) instance_list.get_instance_objects_sorted( admin_context, {}, None, None, [], None, None) mock_gi.assert_called_once_with(admin_context, {}, None, None, [], None, None, cell_mappings=FAKE_CELLS, batch_size=100, cell_down_support=False) 
mock_cm.assert_not_called() mock_lc.assert_called_once_with() @mock.patch('nova.context.scatter_gather_cells') def test_get_instances_with_down_cells(self, mock_sg): inst_cell0 = self.insts[uuids.cell0] # storing the uuids of the instances from the up cell uuid_initial = [inst['uuid'] for inst in inst_cell0] def wrap(thing): return multi_cell_list.RecordWrapper(ctx, self.context, thing) ctx = nova_context.RequestContext() instances = [wrap(inst) for inst in inst_cell0] # creating one up cell and two down cells ret_val = {} ret_val[uuids.cell0] = instances ret_val[uuids.cell1] = [wrap(exception.BuildRequestNotFound(uuid='f'))] ret_val[uuids.cell2] = [wrap(nova_context.did_not_respond_sentinel)] mock_sg.return_value = ret_val obj, res = instance_list.get_instances_sorted(self.context, {}, None, None, [], None, None) uuid_final = [inst['uuid'] for inst in res] # return the results from the up cell, ignoring the down cell. self.assertEqual(uuid_initial, uuid_final) @mock.patch('nova.context.scatter_gather_cells') def test_get_instances_by_not_skipping_down_cells(self, mock_sg): self.flags(list_records_by_skipping_down_cells=False, group='api') inst_cell0 = self.insts[uuids.cell0] def wrap(thing): return multi_cell_list.RecordWrapper(ctx, self.context, thing) ctx = nova_context.RequestContext() instances = [wrap(inst) for inst in inst_cell0] # creating one up cell and two down cells ret_val = {} ret_val[uuids.cell0] = instances ret_val[uuids.cell1] = [wrap(exception.BuildRequestNotFound(uuid='f'))] ret_val[uuids.cell2] = [wrap(nova_context.did_not_respond_sentinel)] mock_sg.return_value = ret_val # Raises exception if a cell is down without skipping them # as CONF.api.list_records_by_skipping_down_cells is set to False. # This would in turn result in an API 500 internal error. exp = self.assertRaises(exception.NovaException, instance_list.get_instance_objects_sorted, self.context, {}, None, None, [], None, None) self.assertIn('configuration indicates', six.text_type(exp)) @mock.patch('nova.context.scatter_gather_cells') def test_get_instances_with_cell_down_support(self, mock_sg): self.flags(list_records_by_skipping_down_cells=False, group='api') inst_cell0 = self.insts[uuids.cell0] # storing the uuids of the instances from the up cell uuid_initial = [inst['uuid'] for inst in inst_cell0] def wrap(thing): return multi_cell_list.RecordWrapper(ctx, self.context, thing) ctx = nova_context.RequestContext() instances = [wrap(inst) for inst in inst_cell0] # creating one up cell and two down cells ret_val = {} ret_val[uuids.cell0] = instances ret_val[uuids.cell1] = [wrap(exception.BuildRequestNotFound(uuid='f'))] ret_val[uuids.cell2] = [wrap(nova_context.did_not_respond_sentinel)] mock_sg.return_value = ret_val # From the new microversion (2.68) if cell_down_support is True # then CONF.api.list_records_by_skipping_down_cells will be ignored. # Exception will not be raised even if its False. obj, res = instance_list.get_instances_sorted(self.context, {}, None, None, [], None, None, cell_down_support=True) uuid_final = [inst['uuid'] for inst in res] # return the results from the up cell, ignoring the down cell and # constructing partial results later. 
self.assertEqual(uuid_initial, uuid_final) def test_batch_size_fixed(self): fixed_size = 200 self.flags(instance_list_cells_batch_strategy='fixed', group='api') self.flags(instance_list_cells_batch_fixed_size=fixed_size, group='api') # We call the batch size calculator with various arguments, including # lists of cells which are just counted, so the cardinality is all that # matters. # One cell, so batch at $limit ret = instance_list.get_instance_list_cells_batch_size( 1000, [mock.sentinel.cell1]) self.assertEqual(1000, ret) # Two cells, so batch at $fixed_size ret = instance_list.get_instance_list_cells_batch_size( 1000, [mock.sentinel.cell1, mock.sentinel.cell2]) self.assertEqual(fixed_size, ret) # Four cells, so batch at $fixed_size ret = instance_list.get_instance_list_cells_batch_size( 1000, [mock.sentinel.cell1, mock.sentinel.cell2, mock.sentinel.cell3, mock.sentinel.cell4]) self.assertEqual(fixed_size, ret) # Three cells, tiny limit, so batch at lower threshold ret = instance_list.get_instance_list_cells_batch_size( 10, [mock.sentinel.cell1, mock.sentinel.cell2, mock.sentinel.cell3]) self.assertEqual(100, ret) # Three cells, limit above floor, so batch at limit ret = instance_list.get_instance_list_cells_batch_size( 110, [mock.sentinel.cell1, mock.sentinel.cell2, mock.sentinel.cell3]) self.assertEqual(110, ret) def test_batch_size_distributed(self): self.flags(instance_list_cells_batch_strategy='distributed', group='api') # One cell, so batch at $limit ret = instance_list.get_instance_list_cells_batch_size(1000, [1]) self.assertEqual(1000, ret) # Two cells so batch at ($limit/2)+10% ret = instance_list.get_instance_list_cells_batch_size(1000, [1, 2]) self.assertEqual(550, ret) # Four cells so batch at ($limit/4)+10% ret = instance_list.get_instance_list_cells_batch_size(1000, [1, 2, 3, 4]) self.assertEqual(275, ret) # Three cells, tiny limit, so batch at lower threshold ret = instance_list.get_instance_list_cells_batch_size(10, [1, 2, 3]) self.assertEqual(100, ret) # Three cells, small limit, so batch at lower threshold ret = instance_list.get_instance_list_cells_batch_size(110, [1, 2, 3]) self.assertEqual(100, ret) # No cells, so batch at $limit ret = instance_list.get_instance_list_cells_batch_size(1000, []) self.assertEqual(1000, ret) class TestInstanceListBig(test.NoDBTestCase): def setUp(self): super(TestInstanceListBig, self).setUp() cells = [objects.CellMapping(uuid=getattr(uuids, 'cell%i' % i), name='cell%i' % i, transport_url='fake:///', database_connection='fake://') for i in range(0, 3)] insts = list([ dict( uuid=getattr(uuids, 'inst%i' % i), hostname='inst%i' % i) for i in range(0, 100)]) self.cells = cells self.insts = insts self.context = nova_context.RequestContext() self.useFixture(fixtures.SpawnIsSynchronousFixture()) @mock.patch('nova.db.api.instance_get_all_by_filters_sort') @mock.patch('nova.objects.CellMappingList.get_all') def test_get_instances_batched(self, mock_cells, mock_inst): mock_cells.return_value = self.cells def fake_get_insts(ctx, filters, limit, *a, **k): for i in range(0, limit): yield self.insts.pop() mock_inst.side_effect = fake_get_insts obj, insts = instance_list.get_instances_sorted(self.context, {}, 50, None, [], ['hostname'], ['desc'], batch_size=10) # Make sure we returned exactly how many were requested insts = list(insts) self.assertEqual(50, len(insts)) # Since the instances are all uniform, we should have a # predictable number of queries to the database. 
5 queries # would get us 50 results, plus one more gets triggered by the # sort to fill the buffer for the first cell feeder that runs # dry. self.assertEqual(6, mock_inst.call_count) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/compute/test_keypairs.py0000664000175000017500000002705200000000000022574 0ustar00zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for keypair API.""" import mock from oslo_concurrency import processutils from oslo_config import cfg import six from nova.compute import api as compute_api from nova import context from nova import exception from nova.objects import keypair as keypair_obj from nova import quota from nova.tests.unit.compute import test_compute from nova.tests.unit import fake_crypto from nova.tests.unit import fake_notifier from nova.tests.unit.objects import test_keypair from nova.tests.unit import utils as test_utils CONF = cfg.CONF class KeypairAPITestCase(test_compute.BaseTestCase): def setUp(self): super(KeypairAPITestCase, self).setUp() self.keypair_api = compute_api.KeypairAPI() self.ctxt = context.RequestContext('fake', 'fake') self._keypair_db_call_stubs() self.existing_key_name = 'fake existing key name' self.pub_key = ('ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDLnVkqJu9WVf' '/5StU3JCrBR2r1s1j8K1tux+5XeSvdqaM8lMFNorzbY5iyoBbR' 'S56gy1jmm43QsMPJsrpfUZKcJpRENSe3OxIIwWXRoiapZe78u/' 'a9xKwj0avFYMcws9Rk9iAB7W4K1nEJbyCPl5lRBoyqeHBqrnnu' 'XWEgGxJCK0Ah6wcOzwlEiVjdf4kxzXrwPHyi7Ea1qvnNXTziF8' 'yYmUlH4C8UXfpTQckwSwpDyxZUc63P8q+vPbs3Q2kw+/7vvkCK' 'HJAXVI+oCiyMMfffoTq16M1xfV58JstgtTqAXG+ZFpicGajREU' 'E/E3hO5MGgcHmyzIrWHKpe1n3oEGuz') self.fingerprint = '4e:48:c6:a0:4a:f9:dd:b5:4c:85:54:5a:af:43:47:5a' self.keypair_type = keypair_obj.KEYPAIR_TYPE_SSH self.key_destroyed = False def _keypair_db_call_stubs(self): def db_key_pair_get_all_by_user(context, user_id, limit, marker): return [dict(test_keypair.fake_keypair, name=self.existing_key_name, public_key=self.pub_key, fingerprint=self.fingerprint)] def db_key_pair_create(context, keypair): return dict(test_keypair.fake_keypair, **keypair) def db_key_pair_destroy(context, user_id, name): if name == self.existing_key_name: self.key_destroyed = True def db_key_pair_get(context, user_id, name): if name == self.existing_key_name and not self.key_destroyed: return dict(test_keypair.fake_keypair, name=self.existing_key_name, public_key=self.pub_key, fingerprint=self.fingerprint) else: raise exception.KeypairNotFound(user_id=user_id, name=name) self.stub_out("nova.db.api.key_pair_get_all_by_user", db_key_pair_get_all_by_user) self.stub_out("nova.db.api.key_pair_create", db_key_pair_create) self.stub_out("nova.db.api.key_pair_destroy", db_key_pair_destroy) self.stub_out("nova.db.api.key_pair_get", db_key_pair_get) def _check_notifications(self, action='create', key_name='foo'): self.assertEqual(2, len(fake_notifier.NOTIFICATIONS)) n1 = 
fake_notifier.NOTIFICATIONS[0] self.assertEqual('INFO', n1.priority) self.assertEqual('keypair.%s.start' % action, n1.event_type) self.assertEqual('api.%s' % CONF.host, n1.publisher_id) self.assertEqual('fake', n1.payload['user_id']) self.assertEqual('fake', n1.payload['tenant_id']) self.assertEqual(key_name, n1.payload['key_name']) n2 = fake_notifier.NOTIFICATIONS[1] self.assertEqual('INFO', n2.priority) self.assertEqual('keypair.%s.end' % action, n2.event_type) self.assertEqual('api.%s' % CONF.host, n2.publisher_id) self.assertEqual('fake', n2.payload['user_id']) self.assertEqual('fake', n2.payload['tenant_id']) self.assertEqual(key_name, n2.payload['key_name']) class CreateImportSharedTestMixIn(object): """Tests shared between create and import_key. Mix-in pattern is used here so that these `test_*` methods aren't picked up by the test runner unless they are part of a 'concrete' test case. """ def assertKeypairRaises(self, exc_class, expected_message, name): func = getattr(self.keypair_api, self.func_name) args = [] if self.func_name == 'import_key_pair': args.append(self.pub_key) args.append(self.keypair_type) exc = self.assertRaises(exc_class, func, self.ctxt, self.ctxt.user_id, name, *args) self.assertEqual(expected_message, six.text_type(exc)) def assertInvalidKeypair(self, expected_message, name): msg = 'Keypair data is invalid: %s' % expected_message self.assertKeypairRaises(exception.InvalidKeypair, msg, name) def test_name_too_short(self): msg = ('Keypair name must be string and between 1 ' 'and 255 characters long') self.assertInvalidKeypair(msg, '') def test_name_too_long(self): msg = ('Keypair name must be string and between 1 ' 'and 255 characters long') self.assertInvalidKeypair(msg, 'x' * 256) def test_invalid_chars(self): msg = "Keypair name contains unsafe characters" self.assertInvalidKeypair(msg, '* BAD CHARACTERS! *') def test_already_exists(self): def db_key_pair_create_duplicate(context, keypair): raise exception.KeyPairExists(key_name=keypair.get('name', '')) self.stub_out("nova.db.api.key_pair_create", db_key_pair_create_duplicate) msg = ("Key pair '%(key_name)s' already exists." % {'key_name': self.existing_key_name}) self.assertKeypairRaises(exception.KeyPairExists, msg, self.existing_key_name) @mock.patch.object(quota.QUOTAS, 'count_as_dict', return_value={'user': { 'key_pairs': CONF.quota.key_pairs}}) def test_quota_limit(self, mock_count_as_dict): msg = "Maximum number of key pairs exceeded" self.assertKeypairRaises(exception.KeypairLimitExceeded, msg, 'foo') class CreateKeypairTestCase(KeypairAPITestCase, CreateImportSharedTestMixIn): func_name = 'create_key_pair' @mock.patch('nova.compute.utils.notify_about_keypair_action') def _check_success(self, mock_notify): keypair, private_key = self.keypair_api.create_key_pair( self.ctxt, self.ctxt.user_id, 'foo', key_type=self.keypair_type) self.assertEqual('foo', keypair['name']) self.assertEqual(self.keypair_type, keypair['type']) mock_notify.assert_has_calls([ mock.call(context=self.ctxt, keypair=keypair, action='create', phase='start'), mock.call(context=self.ctxt, keypair=keypair, action='create', phase='end')]) self._check_notifications() def test_success_ssh(self): self._check_success() def test_success_x509(self): self.keypair_type = keypair_obj.KEYPAIR_TYPE_X509 self._check_success() def test_x509_subject_too_long(self): # X509 keypairs will fail if the Subject they're created with # is longer than 64 characters. The previous unit tests could not # detect the issue because the ctxt.user_id was too short. 
# This unit tests is added to prove this issue. self.keypair_type = keypair_obj.KEYPAIR_TYPE_X509 self.ctxt.user_id = 'a' * 65 self.assertRaises(processutils.ProcessExecutionError, self._check_success) class ImportKeypairTestCase(KeypairAPITestCase, CreateImportSharedTestMixIn): func_name = 'import_key_pair' @mock.patch('nova.compute.utils.notify_about_keypair_action') def _check_success(self, mock_notify): keypair = self.keypair_api.import_key_pair(self.ctxt, self.ctxt.user_id, 'foo', self.pub_key, self.keypair_type) self.assertEqual('foo', keypair['name']) self.assertEqual(self.keypair_type, keypair['type']) self.assertEqual(self.fingerprint, keypair['fingerprint']) self.assertEqual(self.pub_key, keypair['public_key']) self.assertEqual(self.keypair_type, keypair['type']) mock_notify.assert_has_calls([ mock.call(context=self.ctxt, keypair=keypair, action='import', phase='start'), mock.call(context=self.ctxt, keypair=keypair, action='import', phase='end')]) self._check_notifications(action='import') def test_success_ssh(self): self._check_success() def test_success_x509(self): self.keypair_type = keypair_obj.KEYPAIR_TYPE_X509 certif, fingerprint = fake_crypto.get_x509_cert_and_fingerprint() self.pub_key = certif self.fingerprint = fingerprint self._check_success() def test_bad_key_data(self): exc = self.assertRaises(exception.InvalidKeypair, self.keypair_api.import_key_pair, self.ctxt, self.ctxt.user_id, 'foo', 'bad key data') msg = u'Keypair data is invalid: failed to generate fingerprint' self.assertEqual(msg, six.text_type(exc)) class GetKeypairTestCase(KeypairAPITestCase): def test_success(self): keypair = self.keypair_api.get_key_pair(self.ctxt, self.ctxt.user_id, self.existing_key_name) self.assertEqual(self.existing_key_name, keypair['name']) class GetKeypairsTestCase(KeypairAPITestCase): def test_success(self): keypairs = self.keypair_api.get_key_pairs(self.ctxt, self.ctxt.user_id) self.assertEqual([self.existing_key_name], [k['name'] for k in keypairs]) class DeleteKeypairTestCase(KeypairAPITestCase): @mock.patch('nova.compute.utils.notify_about_keypair_action') def test_success(self, mock_notify): self.keypair_api.delete_key_pair(self.ctxt, self.ctxt.user_id, self.existing_key_name) self.assertRaises(exception.KeypairNotFound, self.keypair_api.get_key_pair, self.ctxt, self.ctxt.user_id, self.existing_key_name) match_by_name = test_utils.CustomMockCallMatcher( lambda keypair: keypair['name'] == self.existing_key_name) mock_notify.assert_has_calls([ mock.call(context=self.ctxt, keypair=match_by_name, action='delete', phase='start'), mock.call(context=self.ctxt, keypair=match_by_name, action='delete', phase='end')]) self._check_notifications(action='delete', key_name=self.existing_key_name) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/compute/test_multi_cell_list.py0000664000175000017500000004371600000000000024136 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
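# NOTE: illustrative sketch only, not part of the module under test.  At its
# core, the behaviour exercised below is a merge of per-cell result streams
# that are each already sorted, yielding one globally sorted stream up to the
# caller's limit.  The helper below shows that idea with heapq.merge over
# assumed, simplified inputs; it does not mirror CrossCellLister's real
# internals (batching, markers, timeout/error sentinels).
import heapq as _example_heapq
import itertools as _example_itertools


def _example_merge_sorted_cell_results(per_cell_results, sort_key, limit):
    """Merge already-sorted per-cell record lists into one sorted stream.

    Hypothetical usage:
        >>> _example_merge_sorted_cell_results(
        ...     [[{'id': 1}, {'id': 4}], [{'id': 2}, {'id': 3}]],
        ...     lambda rec: rec['id'], 3)
        [{'id': 1}, {'id': 2}, {'id': 3}]
    """
    merged = _example_heapq.merge(*per_cell_results, key=sort_key)
    return list(_example_itertools.islice(merged, limit))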
from contextlib import contextmanager import copy import datetime import mock from oslo_utils.fixture import uuidsentinel as uuids from nova.compute import multi_cell_list from nova import context from nova import exception from nova import objects from nova import test class TestUtils(test.NoDBTestCase): def test_compare_simple(self): dt1 = datetime.datetime(2015, 11, 5, 20, 30, 00) dt2 = datetime.datetime(1955, 10, 25, 1, 21, 00) inst1 = {'key0': 'foo', 'key1': 'd', 'key2': 456, 'key4': dt1} inst2 = {'key0': 'foo', 'key1': 's', 'key2': 123, 'key4': dt2} # Equal key0, inst == inst2 ctx = multi_cell_list.RecordSortContext(['key0'], ['asc']) self.assertEqual(0, ctx.compare_records(inst1, inst2)) # Equal key0, inst == inst2 (direction should not matter) ctx = multi_cell_list.RecordSortContext(['key0'], ['desc']) self.assertEqual(0, ctx.compare_records(inst1, inst2)) # Ascending by key1, inst1 < inst2 ctx = multi_cell_list.RecordSortContext(['key1'], ['asc']) self.assertEqual(-1, ctx.compare_records(inst1, inst2)) # Descending by key1, inst2 < inst1 ctx = multi_cell_list.RecordSortContext(['key1'], ['desc']) self.assertEqual(1, ctx.compare_records(inst1, inst2)) # Ascending by key2, inst2 < inst1 ctx = multi_cell_list.RecordSortContext(['key2'], ['asc']) self.assertEqual(1, ctx.compare_records(inst1, inst2)) # Descending by key2, inst1 < inst2 ctx = multi_cell_list.RecordSortContext(['key2'], ['desc']) self.assertEqual(-1, ctx.compare_records(inst1, inst2)) # Ascending by key4, inst1 > inst2 ctx = multi_cell_list.RecordSortContext(['key4'], ['asc']) self.assertEqual(1, ctx.compare_records(inst1, inst2)) # Descending by key4, inst1 < inst2 ctx = multi_cell_list.RecordSortContext(['key4'], ['desc']) self.assertEqual(-1, ctx.compare_records(inst1, inst2)) def test_compare_multiple(self): # key0 should not affect ordering, but key1 should inst1 = {'key0': 'foo', 'key1': 'd', 'key2': 456} inst2 = {'key0': 'foo', 'key1': 's', 'key2': 123} # Should be equivalent to ascending by key1 ctx = multi_cell_list.RecordSortContext(['key0', 'key1'], ['asc', 'asc']) self.assertEqual(-1, ctx.compare_records(inst1, inst2)) # Should be equivalent to descending by key1 ctx = multi_cell_list.RecordSortContext(['key0', 'key1'], ['asc', 'desc']) self.assertEqual(1, ctx.compare_records(inst1, inst2)) def test_wrapper(self): inst1 = {'key0': 'foo', 'key1': 'd', 'key2': 456} inst2 = {'key0': 'foo', 'key1': 's', 'key2': 123} ctx = context.RequestContext() ctx.cell_uuid = uuids.cell # Should sort by key1 sort_ctx = multi_cell_list.RecordSortContext(['key0', 'key1'], ['asc', 'asc']) iw1 = multi_cell_list.RecordWrapper(ctx, sort_ctx, inst1) iw2 = multi_cell_list.RecordWrapper(ctx, sort_ctx, inst2) # Check this both ways to make sure we're comparing against -1 # and not just nonzero return from cmp() self.assertTrue(iw1 < iw2) self.assertFalse(iw2 < iw1) # Should sort reverse by key1 sort_ctx = multi_cell_list.RecordSortContext(['key0', 'key1'], ['asc', 'desc']) iw1 = multi_cell_list.RecordWrapper(ctx, sort_ctx, inst1) iw2 = multi_cell_list.RecordWrapper(ctx, sort_ctx, inst2) # Check this both ways to make sure we're comparing against -1 # and not just nonzero return from cmp() self.assertTrue(iw1 > iw2) self.assertFalse(iw2 > iw1) # Make sure we can tell which cell a request came from self.assertEqual(uuids.cell, iw1.cell_uuid) def test_wrapper_sentinels(self): inst1 = {'key0': 'foo', 'key1': 'd', 'key2': 456} ctx = context.RequestContext() ctx.cell_uuid = uuids.cell sort_ctx = multi_cell_list.RecordSortContext(['key0', 
'key1'], ['asc', 'asc']) iw1 = multi_cell_list.RecordWrapper(ctx, sort_ctx, inst1) # Wrappers with sentinels iw2 = multi_cell_list.RecordWrapper(ctx, sort_ctx, context.did_not_respond_sentinel) iw3 = multi_cell_list.RecordWrapper(ctx, sort_ctx, exception.InstanceNotFound( instance_id='fake')) # NOTE(danms): The sentinel wrappers always win self.assertTrue(iw2 < iw1) self.assertTrue(iw3 < iw1) self.assertFalse(iw1 < iw2) self.assertFalse(iw1 < iw3) # NOTE(danms): Comparing two wrappers with sentinels will always return # True for less-than because we're just naive about always favoring the # left hand side. This is fine for our purposes but put it here to make # it explicit. self.assertTrue(iw2 < iw3) self.assertTrue(iw3 < iw2) def test_query_wrapper_success(self): def test(ctx, data): for thing in data: yield thing self.assertEqual([1, 2, 3], list(multi_cell_list.query_wrapper( None, test, [1, 2, 3]))) def test_query_wrapper_timeout(self): def test(ctx): raise exception.CellTimeout self.assertEqual([context.did_not_respond_sentinel], [x._db_record for x in multi_cell_list.query_wrapper( mock.MagicMock(), test)]) def test_query_wrapper_fail(self): def tester(ctx): raise test.TestingException self.assertIsInstance( # query_wrapper is a generator so we convert to a list and # check the type on the first and only result [x._db_record for x in multi_cell_list.query_wrapper( mock.MagicMock(), tester)][0], test.TestingException) class TestListContext(multi_cell_list.RecordSortContext): def compare_records(self, rec1, rec2): return -1 class TestLister(multi_cell_list.CrossCellLister): CONTEXT_CLS = TestListContext def __init__(self, data, sort_keys, sort_dirs, cells=None, batch_size=None): self._data = data self._count_by_cell = {} super(TestLister, self).__init__(self.CONTEXT_CLS(sort_keys, sort_dirs), cells=cells, batch_size=batch_size) @property def marker_identifier(self): return 'id' def _method_called(self, ctx, method, arg): self._count_by_cell.setdefault(ctx.cell_uuid, {}) self._count_by_cell[ctx.cell_uuid].setdefault(method, []) self._count_by_cell[ctx.cell_uuid][method].append(arg) def call_summary(self, method): results = { 'total': 0, 'count_by_cell': [], 'limit_by_cell': [], 'total_by_cell': [], 'called_in_cell': [], } for i, cell in enumerate(self._count_by_cell): if method not in self._count_by_cell[cell]: continue results['total'] += len(self._count_by_cell[cell][method]) # List of number of calls in each cell results['count_by_cell'].append( len(self._count_by_cell[cell][method])) # List of limits used in calls to each cell results['limit_by_cell'].append( self._count_by_cell[cell][method]) try: # List of total results fetched from each cell results['total_by_cell'].append(sum( self._count_by_cell[cell][method])) except TypeError: # Don't do this for non-integer args pass results['called_in_cell'].append(cell) results['count_by_cell'].sort() results['limit_by_cell'].sort() results['total_by_cell'].sort() results['called_in_cell'].sort() return results def get_marker_record(self, ctx, marker): self._method_called(ctx, 'get_marker_record', marker) # Always assume this came from the second cell cell = self.cells[1] return cell.uuid, self._data[0] def get_marker_by_values(self, ctx, values): self._method_called(ctx, 'get_marker_by_values', values) return self._data[0] def get_by_filters(self, ctx, filters, limit, marker, **kwargs): self._method_called(ctx, 'get_by_filters', limit) if 'batch_size' in kwargs: count = min(kwargs['batch_size'], limit) else: count = limit batch = 
self._data[:count] self._data = self._data[count:] return batch @contextmanager def target_cell_cheater(context, target_cell): # In order to help us do accounting, we need to mimic the real # behavior where at least cell_uuid gets set on the context, which # doesn't happen in the simple test fixture. context = copy.deepcopy(context) context.cell_uuid = target_cell.uuid yield context @mock.patch('nova.context.target_cell', new=target_cell_cheater) class TestBatching(test.NoDBTestCase): def setUp(self): super(TestBatching, self).setUp() self._data = [{'id': 'foo-%i' % i} for i in range(0, 1000)] self._cells = [objects.CellMapping(uuid=getattr(uuids, 'cell%i' % i), name='cell%i' % i) for i in range(0, 10)] def test_batches_not_needed(self): lister = TestLister(self._data, [], [], cells=self._cells, batch_size=10) ctx = context.RequestContext() res = list(lister.get_records_sorted(ctx, {}, 5, None)) self.assertEqual(5, len(res)) summary = lister.call_summary('get_by_filters') # We only needed one batch per cell to hit the total, # so we should have the same number of calls as cells self.assertEqual(len(self._cells), summary['total']) # One call per cell, hitting all cells self.assertEqual(len(self._cells), len(summary['count_by_cell'])) self.assertTrue(all([ cell_count == 1 for cell_count in summary['count_by_cell']])) def test_batches(self): lister = TestLister(self._data, [], [], cells=self._cells, batch_size=10) ctx = context.RequestContext() res = list(lister.get_records_sorted(ctx, {}, 500, None)) self.assertEqual(500, len(res)) summary = lister.call_summary('get_by_filters') # Since we got everything from one cell (due to how things are sorting) # we should have made 500 / 10 calls to one cell, and 1 call to # the rest calls_expected = [1 for cell in self._cells[1:]] + [500 / 10] self.assertEqual(calls_expected, summary['count_by_cell']) # Since we got everything from one cell (due to how things are sorting) # we should have received 500 from one cell and 10 from the rest count_expected = [10 for cell in self._cells[1:]] + [500] self.assertEqual(count_expected, summary['total_by_cell']) # Since we got everything from one cell (due to how things are sorting) # we should have a bunch of calls for batches of 10, one each for # every cell except the one that served the bulk of the requests which # should have 500 / 10 batches of 10. 
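        # Worked numbers for this scenario (assumed reading of the setup
        # above): limit=500, batch_size=10 and ten cells.  Because the test
        # sort context compares every pair of records as -1, the merge keeps
        # draining the same cell, so that cell serves all 500 records in
        # 500 / 10 = 50 batched calls of limit 10, while each of the other
        # nine cells is asked for exactly one batch of 10.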
limit_expected = ([[10] for cell in self._cells[1:]] + [[10 for i in range(0, 500 // 10)]]) self.assertEqual(limit_expected, summary['limit_by_cell']) def test_no_batches(self): lister = TestLister(self._data, [], [], cells=self._cells) ctx = context.RequestContext() res = list(lister.get_records_sorted(ctx, {}, 50, None)) self.assertEqual(50, len(res)) summary = lister.call_summary('get_by_filters') # Since we used no batches we should have one call per cell calls_expected = [1 for cell in self._cells] self.assertEqual(calls_expected, summary['count_by_cell']) # Since we used no batches, each cell should have returned 50 results count_expected = [50 for cell in self._cells] self.assertEqual(count_expected, summary['total_by_cell']) # Since we used no batches, each cell call should be for $limit limit_expected = [[count] for count in count_expected] self.assertEqual(limit_expected, summary['limit_by_cell']) class FailureListContext(multi_cell_list.RecordSortContext): def compare_records(self, rec1, rec2): return 0 class FailureLister(TestLister): CONTEXT_CLS = FailureListContext def __init__(self, *a, **k): super(FailureLister, self).__init__(*a, **k) self._fails = {} def set_fails(self, cell, fails): self._fails[cell] = fails def get_by_filters(self, ctx, *a, **k): try: action = self._fails[ctx.cell_uuid].pop(0) except (IndexError, KeyError): action = None if action == context.did_not_respond_sentinel: raise exception.CellTimeout elif isinstance(action, Exception): raise test.TestingException else: return super(FailureLister, self).get_by_filters(ctx, *a, **k) @mock.patch('nova.context.target_cell', new=target_cell_cheater) class TestBaseClass(test.NoDBTestCase): def test_with_failing_cells(self): data = [{'id': 'foo-%i' % i} for i in range(0, 100)] cells = [objects.CellMapping(uuid=getattr(uuids, 'cell%i' % i), name='cell%i' % i) for i in range(0, 3)] lister = FailureLister(data, [], [], cells=cells) # Two of the cells will fail, one with timeout and one # with an error lister.set_fails(uuids.cell0, [context.did_not_respond_sentinel]) # Note that InstanceNotFound exception will never appear during # instance listing, the aim is to only simulate a situation where # there could be some type of exception arising. lister.set_fails(uuids.cell1, exception.InstanceNotFound( instance_id='fake')) ctx = context.RequestContext() result = lister.get_records_sorted(ctx, {}, 50, None, batch_size=10) # We should still have 50 results since there are enough from the # good cells to fill our limit. self.assertEqual(50, len(list(result))) # Make sure the counts line up self.assertEqual(1, len(lister.cells_failed)) self.assertEqual(1, len(lister.cells_timed_out)) self.assertEqual(1, len(lister.cells_responded)) def test_with_failing_middle_cells(self): data = [{'id': 'foo-%i' % i} for i in range(0, 100)] cells = [objects.CellMapping(uuid=getattr(uuids, 'cell%i' % i), name='cell%i' % i) for i in range(0, 3)] lister = FailureLister(data, [], [], cells=cells) # One cell will succeed and then time out, one will fail immediately, # and the last will always work lister.set_fails(uuids.cell0, [None, context.did_not_respond_sentinel]) # Note that BuildAbortException will never appear during instance # listing, the aim is to only simulate a situation where there could # be some type of exception arising. 
lister.set_fails(uuids.cell1, exception.BuildAbortException( instance_uuid='fake', reason='fake')) ctx = context.RequestContext() result = lister.get_records_sorted(ctx, {}, 50, None, batch_size=5) # We should still have 50 results since there are enough from the # good cells to fill our limit. self.assertEqual(50, len(list(result))) # Make sure the counts line up self.assertEqual(1, len(lister.cells_responded)) self.assertEqual(1, len(lister.cells_failed)) self.assertEqual(1, len(lister.cells_timed_out)) def test_marker_cell_not_requeried(self): data = [{'id': 'foo-%i' % i} for i in range(0, 100)] cells = [objects.CellMapping(uuid=getattr(uuids, 'cell%i' % i), name='cell%i' % i) for i in range(0, 3)] lister = TestLister(data, [], [], cells=cells) ctx = context.RequestContext() result = list(lister.get_records_sorted(ctx, {}, 10, None)) result = list(lister.get_records_sorted(ctx, {}, 10, result[-1]['id'])) # get_marker_record() is called untargeted and its result defines which # cell we skip. gmr_summary = lister.call_summary('get_marker_record') self.assertEqual([None], gmr_summary['called_in_cell']) # All cells other than the second one should have been called for # a local marker gmbv_summary = lister.call_summary('get_marker_by_values') self.assertEqual(sorted([cell.uuid for cell in cells if cell.uuid != uuids.cell1]), gmbv_summary['called_in_cell']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/compute/test_provider_tree.py0000664000175000017500000007040000000000000023611 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
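# NOTE: illustrative sketch only, not part of the module under test.  It
# walks the typical ProviderTree flow the tests below rely on: create a root,
# hang a child off it, push inventory, then ask whether re-sending the same
# inventory would count as a change.  The uuid and inventory values are made
# up for illustration.
def _example_provider_tree_usage():
    import uuid

    from nova.compute import provider_tree as _pt_mod

    cn_uuid = str(uuid.uuid4())
    tree = _pt_mod.ProviderTree()
    tree.new_root('compute-node-1', cn_uuid, generation=0)
    numa0_uuid = tree.new_child('numa_cell0', cn_uuid)
    inventory = {
        'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1,
                 'allocation_ratio': 16.0},
    }
    # A fresh provider reports the first inventory as a change; repeating
    # the exact same record afterwards does not.
    assert tree.has_inventory_changed(cn_uuid, inventory)
    tree.update_inventory(cn_uuid, inventory, generation=1)
    assert not tree.has_inventory_changed(cn_uuid, inventory)
    return tree, numa0_uuid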
from oslo_utils.fixture import uuidsentinel as uuids from nova.compute import provider_tree from nova import objects from nova import test class TestProviderTree(test.NoDBTestCase): def setUp(self): super(TestProviderTree, self).setUp() self.compute_node1 = objects.ComputeNode( uuid=uuids.cn1, hypervisor_hostname='compute-node-1', ) self.compute_node2 = objects.ComputeNode( uuid=uuids.cn2, hypervisor_hostname='compute-node-2', ) self.compute_nodes = objects.ComputeNodeList( objects=[self.compute_node1, self.compute_node2], ) def _pt_with_cns(self): pt = provider_tree.ProviderTree() for cn in self.compute_nodes: pt.new_root(cn.hypervisor_hostname, cn.uuid, generation=0) return pt def test_tree_ops(self): cn1 = self.compute_node1 cn2 = self.compute_node2 pt = self._pt_with_cns() self.assertRaises( ValueError, pt.new_root, cn1.hypervisor_hostname, cn1.uuid, ) self.assertTrue(pt.exists(cn1.uuid)) self.assertTrue(pt.exists(cn1.hypervisor_hostname)) self.assertFalse(pt.exists(uuids.non_existing_rp)) self.assertFalse(pt.exists('noexist')) self.assertEqual([cn1.uuid], pt.get_provider_uuids(name_or_uuid=cn1.uuid)) # Same with ..._in_tree self.assertEqual([cn1.uuid], pt.get_provider_uuids_in_tree(cn1.uuid)) self.assertEqual(set([cn1.uuid, cn2.uuid]), set(pt.get_provider_uuids())) numa_cell0_uuid = pt.new_child('numa_cell0', cn1.uuid) numa_cell1_uuid = pt.new_child('numa_cell1', cn1.hypervisor_hostname) self.assertEqual(cn1.uuid, pt.data(numa_cell1_uuid).parent_uuid) self.assertTrue(pt.exists(numa_cell0_uuid)) self.assertTrue(pt.exists('numa_cell0')) self.assertTrue(pt.exists(numa_cell1_uuid)) self.assertTrue(pt.exists('numa_cell1')) pf1_cell0_uuid = pt.new_child('pf1_cell0', numa_cell0_uuid) self.assertTrue(pt.exists(pf1_cell0_uuid)) self.assertTrue(pt.exists('pf1_cell0')) # Now we've got a 3-level tree under cn1 - check provider UUIDs again all_cn1 = [cn1.uuid, numa_cell0_uuid, pf1_cell0_uuid, numa_cell1_uuid] self.assertEqual( set(all_cn1), set(pt.get_provider_uuids(name_or_uuid=cn1.uuid))) # Same with ..._in_tree if we're asking for the root self.assertEqual( set(all_cn1), set(pt.get_provider_uuids_in_tree(cn1.uuid))) # Asking for a subtree. self.assertEqual( [numa_cell0_uuid, pf1_cell0_uuid], pt.get_provider_uuids(name_or_uuid=numa_cell0_uuid)) # With ..._in_tree, get the whole tree no matter which we specify. for node in all_cn1: self.assertEqual(set(all_cn1), set(pt.get_provider_uuids_in_tree( node))) # With no provider specified, get everything self.assertEqual( set([cn1.uuid, cn2.uuid, numa_cell0_uuid, pf1_cell0_uuid, numa_cell1_uuid]), set(pt.get_provider_uuids())) self.assertRaises( ValueError, pt.new_child, 'pf1_cell0', uuids.non_existing_rp, ) # Fail attempting to add a child that already exists in the tree # Existing provider is a child; search by name self.assertRaises(ValueError, pt.new_child, 'numa_cell0', cn1.uuid) # Existing provider is a root; search by UUID self.assertRaises(ValueError, pt.new_child, cn1.uuid, cn2.uuid) # Test data(). 
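        # data() hands back a read-only snapshot of a single provider;
        # later changes to the tree do not alter a snapshot that was taken
        # earlier (the inventory/traits/aggregates tests further down rely
        # on exactly that).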
# Root, by UUID cn1_snap = pt.data(cn1.uuid) # Fields were faithfully copied self.assertEqual(cn1.uuid, cn1_snap.uuid) self.assertEqual(cn1.hypervisor_hostname, cn1_snap.name) self.assertIsNone(cn1_snap.parent_uuid) self.assertEqual({}, cn1_snap.inventory) self.assertEqual(set(), cn1_snap.traits) self.assertEqual(set(), cn1_snap.aggregates) # Validate read-only-ness self.assertRaises(AttributeError, setattr, cn1_snap, 'name', 'foo') cn3 = objects.ComputeNode( uuid=uuids.cn3, hypervisor_hostname='compute-node-3', ) self.assertFalse(pt.exists(cn3.uuid)) self.assertFalse(pt.exists(cn3.hypervisor_hostname)) pt.new_root(cn3.hypervisor_hostname, cn3.uuid) self.assertTrue(pt.exists(cn3.uuid)) self.assertTrue(pt.exists(cn3.hypervisor_hostname)) self.assertRaises( ValueError, pt.new_root, cn3.hypervisor_hostname, cn3.uuid, ) self.assertRaises( ValueError, pt.remove, uuids.non_existing_rp, ) pt.remove(numa_cell1_uuid) self.assertFalse(pt.exists(numa_cell1_uuid)) self.assertTrue(pt.exists(pf1_cell0_uuid)) self.assertTrue(pt.exists(numa_cell0_uuid)) self.assertTrue(pt.exists(uuids.cn1)) # Now remove the root and check that children no longer exist pt.remove(uuids.cn1) self.assertFalse(pt.exists(pf1_cell0_uuid)) self.assertFalse(pt.exists(numa_cell0_uuid)) self.assertFalse(pt.exists(uuids.cn1)) def test_populate_from_iterable_empty(self): pt = provider_tree.ProviderTree() # Empty list is a no-op pt.populate_from_iterable([]) self.assertEqual([], pt.get_provider_uuids()) def test_populate_from_iterable_error_orphan_cycle(self): pt = provider_tree.ProviderTree() # Error trying to populate with an orphan grandchild1_1 = { 'uuid': uuids.grandchild1_1, 'name': 'grandchild1_1', 'generation': 11, 'parent_provider_uuid': uuids.child1, } self.assertRaises(ValueError, pt.populate_from_iterable, [grandchild1_1]) # Create a cycle so there are no orphans, but no path to a root cycle = { 'uuid': uuids.child1, 'name': 'child1', 'generation': 1, # There's a country song about this 'parent_provider_uuid': uuids.grandchild1_1, } self.assertRaises(ValueError, pt.populate_from_iterable, [grandchild1_1, cycle]) def test_populate_from_iterable_complex(self): # root # +-> child1 # | +-> grandchild1_2 # | +-> ggc1_2_1 # | +-> ggc1_2_2 # | +-> ggc1_2_3 # +-> child2 # another_root pt = provider_tree.ProviderTree() plist = [ { 'uuid': uuids.root, 'name': 'root', 'generation': 0, }, { 'uuid': uuids.child1, 'name': 'child1', 'generation': 1, 'parent_provider_uuid': uuids.root, }, { 'uuid': uuids.child2, 'name': 'child2', 'generation': 2, 'parent_provider_uuid': uuids.root, }, { 'uuid': uuids.grandchild1_2, 'name': 'grandchild1_2', 'generation': 12, 'parent_provider_uuid': uuids.child1, }, { 'uuid': uuids.ggc1_2_1, 'name': 'ggc1_2_1', 'generation': 121, 'parent_provider_uuid': uuids.grandchild1_2, }, { 'uuid': uuids.ggc1_2_2, 'name': 'ggc1_2_2', 'generation': 122, 'parent_provider_uuid': uuids.grandchild1_2, }, { 'uuid': uuids.ggc1_2_3, 'name': 'ggc1_2_3', 'generation': 123, 'parent_provider_uuid': uuids.grandchild1_2, }, { 'uuid': uuids.another_root, 'name': 'another_root', 'generation': 911, }, ] pt.populate_from_iterable(plist) def validate_root(expected_uuids): # Make sure we have all and only the expected providers self.assertEqual(expected_uuids, set(pt.get_provider_uuids())) # Now make sure they're in the right hierarchy. Cheat: get the # actual _Provider to make it easier to walk the tree (ProviderData # doesn't include children). 
root = pt._find_with_lock(uuids.root) self.assertEqual(uuids.root, root.uuid) self.assertEqual('root', root.name) self.assertEqual(0, root.generation) self.assertIsNone(root.parent_uuid) self.assertEqual(2, len(list(root.children))) for child in root.children.values(): self.assertTrue(child.name.startswith('child')) if child.name == 'child1': if uuids.grandchild1_1 in expected_uuids: self.assertEqual(2, len(list(child.children))) else: self.assertEqual(1, len(list(child.children))) for grandchild in child.children.values(): self.assertTrue(grandchild.name.startswith( 'grandchild1_')) if grandchild.name == 'grandchild1_1': self.assertEqual(0, len(list(grandchild.children))) if grandchild.name == 'grandchild1_2': self.assertEqual(3, len(list(grandchild.children))) for ggc in grandchild.children.values(): self.assertTrue(ggc.name.startswith('ggc1_2_')) another_root = pt._find_with_lock(uuids.another_root) self.assertEqual(uuids.another_root, another_root.uuid) self.assertEqual('another_root', another_root.name) self.assertEqual(911, another_root.generation) self.assertIsNone(another_root.parent_uuid) self.assertEqual(0, len(list(another_root.children))) if uuids.new_root in expected_uuids: new_root = pt._find_with_lock(uuids.new_root) self.assertEqual(uuids.new_root, new_root.uuid) self.assertEqual('new_root', new_root.name) self.assertEqual(42, new_root.generation) self.assertIsNone(new_root.parent_uuid) self.assertEqual(0, len(list(new_root.children))) expected_uuids = set([ uuids.root, uuids.child1, uuids.child2, uuids.grandchild1_2, uuids.ggc1_2_1, uuids.ggc1_2_2, uuids.ggc1_2_3, uuids.another_root]) validate_root(expected_uuids) # Merge an orphan - still an error orphan = { 'uuid': uuids.orphan, 'name': 'orphan', 'generation': 86, 'parent_provider_uuid': uuids.mystery, } self.assertRaises(ValueError, pt.populate_from_iterable, [orphan]) # And the tree didn't change validate_root(expected_uuids) # Merge a list with a new grandchild and a new root plist = [ { 'uuid': uuids.grandchild1_1, 'name': 'grandchild1_1', 'generation': 11, 'parent_provider_uuid': uuids.child1, }, { 'uuid': uuids.new_root, 'name': 'new_root', 'generation': 42, }, ] pt.populate_from_iterable(plist) expected_uuids |= set([uuids.grandchild1_1, uuids.new_root]) validate_root(expected_uuids) # Merge an empty list - still a no-op pt.populate_from_iterable([]) validate_root(expected_uuids) # Since we have a complex tree, test the ordering of get_provider_uuids # We can't predict the order of siblings, or where nephews will appear # relative to their uncles, but we can guarantee that any given child # always comes after its parent (and by extension, its ancestors too). puuids = pt.get_provider_uuids() for desc in (uuids.child1, uuids.child2): self.assertGreater(puuids.index(desc), puuids.index(uuids.root)) for desc in (uuids.grandchild1_1, uuids.grandchild1_2): self.assertGreater(puuids.index(desc), puuids.index(uuids.child1)) for desc in (uuids.ggc1_2_1, uuids.ggc1_2_2, uuids.ggc1_2_3): self.assertGreater( puuids.index(desc), puuids.index(uuids.grandchild1_2)) def test_populate_from_iterable_with_root_update(self): # Ensure we can update hierarchies, including adding children, in a # tree that's already populated. This tests the case where a given # provider exists both in the tree and in the input. We must replace # that provider *before* we inject its descendants; otherwise the # descendants will be lost. 
Note that this test case is not 100% # reliable, as we can't predict the order over which hashed values are # iterated. pt = provider_tree.ProviderTree() # Let's create a root plist = [ { 'uuid': uuids.root, 'name': 'root', 'generation': 0, }, ] pt.populate_from_iterable(plist) expected_uuids = [uuids.root] self.assertEqual(expected_uuids, pt.get_provider_uuids()) # Let's add a child updating the name and generation for the root. # root # +-> child1 plist = [ { 'uuid': uuids.root, 'name': 'root_with_new_name', 'generation': 1, }, { 'uuid': uuids.child1, 'name': 'child1', 'generation': 1, 'parent_provider_uuid': uuids.root, }, ] pt.populate_from_iterable(plist) expected_uuids = [uuids.root, uuids.child1] self.assertEqual(expected_uuids, pt.get_provider_uuids()) def test_populate_from_iterable_disown_grandchild(self): # Start with: # root # +-> child # | +-> grandchild # Then send in [child] and grandchild should disappear. child = { 'uuid': uuids.child, 'name': 'child', 'generation': 1, 'parent_provider_uuid': uuids.root, } pt = provider_tree.ProviderTree() plist = [ { 'uuid': uuids.root, 'name': 'root', 'generation': 0, }, child, { 'uuid': uuids.grandchild, 'name': 'grandchild', 'generation': 2, 'parent_provider_uuid': uuids.child, }, ] pt.populate_from_iterable(plist) self.assertEqual([uuids.root, uuids.child, uuids.grandchild], pt.get_provider_uuids()) self.assertTrue(pt.exists(uuids.grandchild)) pt.populate_from_iterable([child]) self.assertEqual([uuids.root, uuids.child], pt.get_provider_uuids()) self.assertFalse(pt.exists(uuids.grandchild)) def test_has_inventory_changed_no_existing_rp(self): pt = self._pt_with_cns() self.assertRaises( ValueError, pt.has_inventory_changed, uuids.non_existing_rp, {} ) def test_update_inventory_no_existing_rp(self): pt = self._pt_with_cns() self.assertRaises( ValueError, pt.update_inventory, uuids.non_existing_rp, {}, ) def test_has_inventory_changed(self): cn = self.compute_node1 pt = self._pt_with_cns() rp_gen = 1 cn_inv = { 'VCPU': { 'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, }, 'MEMORY_MB': { 'total': 1024, 'reserved': 512, 'min_unit': 64, 'max_unit': 1024, 'step_size': 64, 'allocation_ratio': 1.5, }, 'DISK_GB': { 'total': 1000, 'reserved': 100, 'min_unit': 10, 'max_unit': 1000, 'step_size': 10, 'allocation_ratio': 1.0, }, } self.assertTrue(pt.has_inventory_changed(cn.uuid, cn_inv)) self.assertTrue(pt.update_inventory(cn.uuid, cn_inv, generation=rp_gen)) # Updating with the same inventory info should return False self.assertFalse(pt.has_inventory_changed(cn.uuid, cn_inv)) self.assertFalse(pt.update_inventory(cn.uuid, cn_inv, generation=rp_gen)) # A data-grab's inventory should be "equal" to the original cndata = pt.data(cn.uuid) self.assertFalse(pt.has_inventory_changed(cn.uuid, cndata.inventory)) cn_inv['VCPU']['total'] = 6 self.assertTrue(pt.has_inventory_changed(cn.uuid, cn_inv)) self.assertTrue(pt.update_inventory(cn.uuid, cn_inv, generation=rp_gen)) # The data() result was not affected; now the tree's copy is different self.assertTrue(pt.has_inventory_changed(cn.uuid, cndata.inventory)) self.assertFalse(pt.has_inventory_changed(cn.uuid, cn_inv)) self.assertFalse(pt.update_inventory(cn.uuid, cn_inv, generation=rp_gen)) # Deleting a key in the new record should NOT result in changes being # recorded... 
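        # The change detection deliberately ignores fields that are missing
        # from the *new* record, so dropping an optional field below is not
        # reported as a change, whereas dropping an entire resource class
        # (or adding a brand new field) is.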
del cn_inv['VCPU']['allocation_ratio'] self.assertFalse(pt.has_inventory_changed(cn.uuid, cn_inv)) self.assertFalse(pt.update_inventory(cn.uuid, cn_inv, generation=rp_gen)) del cn_inv['MEMORY_MB'] self.assertTrue(pt.has_inventory_changed(cn.uuid, cn_inv)) self.assertTrue(pt.update_inventory(cn.uuid, cn_inv, generation=rp_gen)) # ...but *adding* a key in the new record *should* result in changes # being recorded cn_inv['VCPU']['reserved'] = 0 self.assertTrue(pt.has_inventory_changed(cn.uuid, cn_inv)) self.assertTrue(pt.update_inventory(cn.uuid, cn_inv, generation=rp_gen)) def test_have_traits_changed_no_existing_rp(self): pt = self._pt_with_cns() self.assertRaises( ValueError, pt.have_traits_changed, uuids.non_existing_rp, []) def test_update_traits_no_existing_rp(self): pt = self._pt_with_cns() self.assertRaises( ValueError, pt.update_traits, uuids.non_existing_rp, []) def test_have_traits_changed(self): cn = self.compute_node1 pt = self._pt_with_cns() rp_gen = 1 traits = [ "HW_GPU_API_DIRECT3D_V7_0", "HW_NIC_OFFLOAD_SG", "HW_CPU_X86_AVX", ] self.assertTrue(pt.have_traits_changed(cn.uuid, traits)) # A data-grab's traits are the same cnsnap = pt.data(cn.uuid) self.assertFalse(pt.have_traits_changed(cn.uuid, cnsnap.traits)) self.assertTrue(pt.has_traits(cn.uuid, [])) self.assertFalse(pt.has_traits(cn.uuid, traits)) self.assertFalse(pt.has_traits(cn.uuid, traits[:1])) self.assertTrue(pt.update_traits(cn.uuid, traits, generation=rp_gen)) self.assertTrue(pt.has_traits(cn.uuid, traits)) self.assertTrue(pt.has_traits(cn.uuid, traits[:1])) # Updating with the same traits info should return False self.assertFalse(pt.have_traits_changed(cn.uuid, traits)) # But the generation should get updated rp_gen = 2 self.assertFalse(pt.update_traits(cn.uuid, traits, generation=rp_gen)) self.assertFalse(pt.have_traits_changed(cn.uuid, traits)) self.assertEqual(rp_gen, pt.data(cn.uuid).generation) self.assertTrue(pt.has_traits(cn.uuid, traits)) self.assertTrue(pt.has_traits(cn.uuid, traits[:1])) # Make a change to the traits list traits.append("HW_GPU_RESOLUTION_W800H600") self.assertTrue(pt.have_traits_changed(cn.uuid, traits)) # The previously-taken data now differs self.assertTrue(pt.have_traits_changed(cn.uuid, cnsnap.traits)) self.assertFalse(pt.has_traits(cn.uuid, traits[-1:])) # Don't update the generation self.assertTrue(pt.update_traits(cn.uuid, traits)) self.assertEqual(rp_gen, pt.data(cn.uuid).generation) self.assertTrue(pt.has_traits(cn.uuid, traits[-1:])) def test_add_remove_traits(self): cn = self.compute_node1 pt = self._pt_with_cns() self.assertEqual(set([]), pt.data(cn.uuid).traits) # Test adding with no trait provided for a bogus provider pt.add_traits('bogus-uuid') self.assertEqual( set([]), pt.data(cn.uuid).traits ) # Add a couple of traits pt.add_traits(cn.uuid, "HW_GPU_API_DIRECT3D_V7_0", "HW_NIC_OFFLOAD_SG") self.assertEqual( set(["HW_GPU_API_DIRECT3D_V7_0", "HW_NIC_OFFLOAD_SG"]), pt.data(cn.uuid).traits) # set() behavior: add a trait that's already there, and one that's not. # The unrelated one is unaffected. 
pt.add_traits(cn.uuid, "HW_GPU_API_DIRECT3D_V7_0", "HW_CPU_X86_AVX") self.assertEqual( set(["HW_GPU_API_DIRECT3D_V7_0", "HW_NIC_OFFLOAD_SG", "HW_CPU_X86_AVX"]), pt.data(cn.uuid).traits) # Test removing with no trait provided for a bogus provider pt.remove_traits('bogus-uuid') self.assertEqual( set(["HW_GPU_API_DIRECT3D_V7_0", "HW_NIC_OFFLOAD_SG", "HW_CPU_X86_AVX"]), pt.data(cn.uuid).traits) # Now remove a trait pt.remove_traits(cn.uuid, "HW_NIC_OFFLOAD_SG") self.assertEqual( set(["HW_GPU_API_DIRECT3D_V7_0", "HW_CPU_X86_AVX"]), pt.data(cn.uuid).traits) # set() behavior: remove a trait that's there, and one that's not. # The unrelated one is unaffected. pt.remove_traits(cn.uuid, "HW_NIC_OFFLOAD_SG", "HW_GPU_API_DIRECT3D_V7_0") self.assertEqual(set(["HW_CPU_X86_AVX"]), pt.data(cn.uuid).traits) # Remove the last trait, and an unrelated one pt.remove_traits(cn.uuid, "CUSTOM_FOO", "HW_CPU_X86_AVX") self.assertEqual(set([]), pt.data(cn.uuid).traits) def test_have_aggregates_changed_no_existing_rp(self): pt = self._pt_with_cns() self.assertRaises( ValueError, pt.have_aggregates_changed, uuids.non_existing_rp, []) def test_update_aggregates_no_existing_rp(self): pt = self._pt_with_cns() self.assertRaises( ValueError, pt.update_aggregates, uuids.non_existing_rp, []) def test_have_aggregates_changed(self): cn = self.compute_node1 pt = self._pt_with_cns() rp_gen = 1 aggregates = [ uuids.agg1, uuids.agg2, ] self.assertTrue(pt.have_aggregates_changed(cn.uuid, aggregates)) self.assertTrue(pt.in_aggregates(cn.uuid, [])) self.assertFalse(pt.in_aggregates(cn.uuid, aggregates)) self.assertFalse(pt.in_aggregates(cn.uuid, aggregates[:1])) self.assertTrue(pt.update_aggregates(cn.uuid, aggregates, generation=rp_gen)) self.assertTrue(pt.in_aggregates(cn.uuid, aggregates)) self.assertTrue(pt.in_aggregates(cn.uuid, aggregates[:1])) # data() gets the same aggregates cnsnap = pt.data(cn.uuid) self.assertFalse( pt.have_aggregates_changed(cn.uuid, cnsnap.aggregates)) # Updating with the same aggregates info should return False self.assertFalse(pt.have_aggregates_changed(cn.uuid, aggregates)) # But the generation should get updated rp_gen = 2 self.assertFalse(pt.update_aggregates(cn.uuid, aggregates, generation=rp_gen)) self.assertFalse(pt.have_aggregates_changed(cn.uuid, aggregates)) self.assertEqual(rp_gen, pt.data(cn.uuid).generation) self.assertTrue(pt.in_aggregates(cn.uuid, aggregates)) self.assertTrue(pt.in_aggregates(cn.uuid, aggregates[:1])) # Make a change to the aggregates list aggregates.append(uuids.agg3) self.assertTrue(pt.have_aggregates_changed(cn.uuid, aggregates)) self.assertFalse(pt.in_aggregates(cn.uuid, aggregates[-1:])) # Don't update the generation self.assertTrue(pt.update_aggregates(cn.uuid, aggregates)) self.assertEqual(rp_gen, pt.data(cn.uuid).generation) self.assertTrue(pt.in_aggregates(cn.uuid, aggregates[-1:])) # Previously-taken data now differs self.assertTrue(pt.have_aggregates_changed(cn.uuid, cnsnap.aggregates)) def test_add_remove_aggregates(self): cn = self.compute_node1 pt = self._pt_with_cns() self.assertEqual(set([]), pt.data(cn.uuid).aggregates) # Add a couple of aggregates pt.add_aggregates(cn.uuid, uuids.agg1, uuids.agg2) self.assertEqual( set([uuids.agg1, uuids.agg2]), pt.data(cn.uuid).aggregates) # set() behavior: add an aggregate that's already there, and one that's # not. The unrelated one is unaffected. 
pt.add_aggregates(cn.uuid, uuids.agg1, uuids.agg3) self.assertEqual(set([uuids.agg1, uuids.agg2, uuids.agg3]), pt.data(cn.uuid).aggregates) # Now remove an aggregate pt.remove_aggregates(cn.uuid, uuids.agg2) self.assertEqual(set([uuids.agg1, uuids.agg3]), pt.data(cn.uuid).aggregates) # set() behavior: remove an aggregate that's there, and one that's not. # The unrelated one is unaffected. pt.remove_aggregates(cn.uuid, uuids.agg2, uuids.agg3) self.assertEqual(set([uuids.agg1]), pt.data(cn.uuid).aggregates) # Remove the last aggregate, and an unrelated one pt.remove_aggregates(cn.uuid, uuids.agg4, uuids.agg1) self.assertEqual(set([]), pt.data(cn.uuid).aggregates) def test_update_resources_no_existing_rp(self): pt = self._pt_with_cns() self.assertRaises( ValueError, pt.update_resources, uuids.non_existing_rp, {}, ) def test_update_resources(self): cn = self.compute_node1 pt = self._pt_with_cns() cn_resources = { "CUSTOM_RESOURCE_0": { objects.Resource(provider_uuid=cn.uuid, resource_class="CUSTOM_RESOURCE_0", identifier="bar")}, "CUSTOM_RESOURCE_1": { objects.Resource(provider_uuid=cn.uuid, resource_class="CUSTOM_RESOURCE_1", identifier="foo_1"), objects.Resource(provider_uuid=cn.uuid, resource_class="CUSTOM_RESOURCE_1", identifier="foo_2")}} # resources changed self.assertTrue(pt.update_resources(cn.uuid, cn_resources)) # resources not changed self.assertFalse(pt.update_resources(cn.uuid, cn_resources)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/compute/test_resource_tracker.py0000664000175000017500000053420700000000000024314 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
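# NOTE: illustrative sketch only, not part of the module under test.  The
# fixtures below encode the simple accounting the resource tracker performs:
# usage moves with a flavor's vcpus / memory_mb / root_gb + ephemeral_gb, and
# the "free" figures are just totals minus usage.  A stand-alone version of
# that arithmetic, with made-up numbers, looks like this:
def _example_free_resources(totals, usage):
    """Return (free_ram_mb, free_disk_gb) from plain dict inputs."""
    free_ram_mb = totals['memory_mb'] - usage['memory_mb_used']
    free_disk_gb = totals['local_gb'] - usage['local_gb_used']
    return free_ram_mb, free_disk_gb


# e.g. _example_free_resources({'memory_mb': 512, 'local_gb': 6},
#                              {'memory_mb_used': 128, 'local_gb_used': 1})
# returns (384, 5).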
import copy import datetime from keystoneauth1 import exceptions as ks_exc import mock import os_resource_classes as orc import os_traits from oslo_config import cfg from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from oslo_utils import units from nova.compute import claims from nova.compute.monitors import base as monitor_base from nova.compute import power_state from nova.compute import provider_tree from nova.compute import resource_tracker from nova.compute import task_states from nova.compute import utils as compute_utils from nova.compute import vm_states from nova import context from nova import exception as exc from nova import objects from nova.objects import base as obj_base from nova.objects import fields as obj_fields from nova.objects import pci_device from nova.pci import manager as pci_manager from nova.scheduler.client import report from nova import test from nova.tests.unit import fake_instance from nova.tests.unit import fake_notifier from nova.tests.unit.objects import test_pci_device as fake_pci_device from nova.tests.unit import utils from nova import utils as nova_utils from nova.virt import driver _HOSTNAME = 'fake-host' _NODENAME = 'fake-node' CONF = cfg.CONF _VIRT_DRIVER_AVAIL_RESOURCES = { 'vcpus': 4, 'memory_mb': 512, 'local_gb': 6, 'vcpus_used': 0, 'memory_mb_used': 0, 'local_gb_used': 0, 'hypervisor_type': 'fake', 'hypervisor_version': 0, 'hypervisor_hostname': _NODENAME, 'cpu_info': '', 'numa_topology': None, } _COMPUTE_NODE_FIXTURES = [ objects.ComputeNode( id=1, uuid=uuids.cn1, host=_HOSTNAME, vcpus=_VIRT_DRIVER_AVAIL_RESOURCES['vcpus'], memory_mb=_VIRT_DRIVER_AVAIL_RESOURCES['memory_mb'], local_gb=_VIRT_DRIVER_AVAIL_RESOURCES['local_gb'], vcpus_used=_VIRT_DRIVER_AVAIL_RESOURCES['vcpus_used'], memory_mb_used=_VIRT_DRIVER_AVAIL_RESOURCES['memory_mb_used'], local_gb_used=_VIRT_DRIVER_AVAIL_RESOURCES['local_gb_used'], hypervisor_type='fake', hypervisor_version=0, hypervisor_hostname=_NODENAME, free_ram_mb=(_VIRT_DRIVER_AVAIL_RESOURCES['memory_mb'] - _VIRT_DRIVER_AVAIL_RESOURCES['memory_mb_used']), free_disk_gb=(_VIRT_DRIVER_AVAIL_RESOURCES['local_gb'] - _VIRT_DRIVER_AVAIL_RESOURCES['local_gb_used']), current_workload=0, running_vms=0, cpu_info='{}', disk_available_least=0, host_ip='1.1.1.1', supported_hv_specs=[ objects.HVSpec.from_list([ obj_fields.Architecture.I686, obj_fields.HVType.KVM, obj_fields.VMMode.HVM]) ], metrics=None, pci_device_pools=None, extra_resources=None, stats={}, numa_topology=None, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0, ), ] _INSTANCE_TYPE_FIXTURES = { 1: { 'id': 1, 'flavorid': 'fakeid-1', 'name': 'fake1.small', 'memory_mb': 128, 'vcpus': 1, 'root_gb': 1, 'ephemeral_gb': 0, 'swap': 0, 'rxtx_factor': 0, 'vcpu_weight': 1, 'extra_specs': {}, 'deleted': 0, }, 2: { 'id': 2, 'flavorid': 'fakeid-2', 'name': 'fake1.medium', 'memory_mb': 256, 'vcpus': 2, 'root_gb': 5, 'ephemeral_gb': 0, 'swap': 0, 'rxtx_factor': 0, 'vcpu_weight': 1, 'extra_specs': {}, 'deleted': 0, }, } _INSTANCE_TYPE_OBJ_FIXTURES = { 1: objects.Flavor(id=1, flavorid='fakeid-1', name='fake1.small', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, swap=0, rxtx_factor=0, vcpu_weight=1, extra_specs={}, deleted=False), 2: objects.Flavor(id=2, flavorid='fakeid-2', name='fake1.medium', memory_mb=256, vcpus=2, root_gb=5, ephemeral_gb=0, swap=0, rxtx_factor=0, vcpu_weight=1, extra_specs={}, deleted=False), } _2MB = 2 * units.Mi / units.Ki _INSTANCE_NUMA_TOPOLOGIES = { '2mb': 
objects.InstanceNUMATopology(cells=[ objects.InstanceNUMACell( id=0, cpuset=set([1]), memory=_2MB, pagesize=0), objects.InstanceNUMACell( id=1, cpuset=set([3]), memory=_2MB, pagesize=0)]), } _NUMA_LIMIT_TOPOLOGIES = { '2mb': objects.NUMATopologyLimits(id=0, cpu_allocation_ratio=1.0, ram_allocation_ratio=1.0), } _NUMA_PAGE_TOPOLOGIES = { '2mb*1024': objects.NUMAPagesTopology(size_kb=2048, total=1024, used=0) } _NUMA_HOST_TOPOLOGIES = { '2mb': objects.NUMATopology(cells=[ objects.NUMACell( id=0, cpuset=set([1, 2]), pcpuset=set(), memory=_2MB, cpu_usage=0, memory_usage=0, mempages=[_NUMA_PAGE_TOPOLOGIES['2mb*1024']], siblings=[set([1]), set([2])], pinned_cpus=set()), objects.NUMACell( id=1, cpuset=set([3, 4]), pcpuset=set(), memory=_2MB, cpu_usage=0, memory_usage=0, mempages=[_NUMA_PAGE_TOPOLOGIES['2mb*1024']], siblings=[set([3]), set([4])], pinned_cpus=set())]), } _INSTANCE_FIXTURES = [ objects.Instance( id=1, host=_HOSTNAME, node=_NODENAME, uuid='c17741a5-6f3d-44a8-ade8-773dc8c29124', memory_mb=_INSTANCE_TYPE_FIXTURES[1]['memory_mb'], vcpus=_INSTANCE_TYPE_FIXTURES[1]['vcpus'], root_gb=_INSTANCE_TYPE_FIXTURES[1]['root_gb'], ephemeral_gb=_INSTANCE_TYPE_FIXTURES[1]['ephemeral_gb'], numa_topology=_INSTANCE_NUMA_TOPOLOGIES['2mb'], pci_requests=None, pci_devices=None, instance_type_id=1, vm_state=vm_states.ACTIVE, power_state=power_state.RUNNING, task_state=None, os_type='fake-os', # Used by the stats collector. project_id='fake-project', # Used by the stats collector. user_id=uuids.user_id, flavor = _INSTANCE_TYPE_OBJ_FIXTURES[1], old_flavor = _INSTANCE_TYPE_OBJ_FIXTURES[1], new_flavor = _INSTANCE_TYPE_OBJ_FIXTURES[1], deleted = False, resources = None, ), objects.Instance( id=2, host=_HOSTNAME, node=_NODENAME, uuid='33805b54-dea6-47b8-acb2-22aeb1b57919', memory_mb=_INSTANCE_TYPE_FIXTURES[2]['memory_mb'], vcpus=_INSTANCE_TYPE_FIXTURES[2]['vcpus'], root_gb=_INSTANCE_TYPE_FIXTURES[2]['root_gb'], ephemeral_gb=_INSTANCE_TYPE_FIXTURES[2]['ephemeral_gb'], numa_topology=None, pci_requests=None, pci_devices=None, instance_type_id=2, vm_state=vm_states.DELETED, power_state=power_state.SHUTDOWN, task_state=None, os_type='fake-os', project_id='fake-project-2', user_id=uuids.user_id, flavor = _INSTANCE_TYPE_OBJ_FIXTURES[2], old_flavor = _INSTANCE_TYPE_OBJ_FIXTURES[2], new_flavor = _INSTANCE_TYPE_OBJ_FIXTURES[2], deleted = False, resources = None, ), ] _MIGRATION_FIXTURES = { # A migration that has only this compute node as the source host 'source-only': objects.Migration( id=1, instance_uuid='f15ecfb0-9bf6-42db-9837-706eb2c4bf08', source_compute=_HOSTNAME, dest_compute='other-host', source_node=_NODENAME, dest_node='other-node', old_instance_type_id=1, new_instance_type_id=2, migration_type='resize', status='migrating', uuid=uuids.source_only, ), # A migration that has only this compute node as the dest host 'dest-only': objects.Migration( id=2, instance_uuid='f6ed631a-8645-4b12-8e1e-2fff55795765', source_compute='other-host', dest_compute=_HOSTNAME, source_node='other-node', dest_node=_NODENAME, old_instance_type_id=1, new_instance_type_id=2, migration_type='resize', status='migrating', uuid=uuids.dest_only, ), # A migration that has this compute node as both the source and dest host 'source-and-dest': objects.Migration( id=3, instance_uuid='f4f0bfea-fe7e-4264-b598-01cb13ef1997', source_compute=_HOSTNAME, dest_compute=_HOSTNAME, source_node=_NODENAME, dest_node=_NODENAME, old_instance_type_id=1, new_instance_type_id=2, migration_type='resize', status='migrating', uuid=uuids.source_and_dest, ), # A 
migration that has this compute node as destination and is an evac 'dest-only-evac': objects.Migration( id=4, instance_uuid='077fb63a-bdc8-4330-90ef-f012082703dc', source_compute='other-host', dest_compute=_HOSTNAME, source_node='other-node', dest_node=_NODENAME, old_instance_type_id=2, new_instance_type_id=None, migration_type='evacuation', status='pre-migrating', uuid=uuids.dest_only_evac, ), } _MIGRATION_INSTANCE_FIXTURES = { # source-only 'f15ecfb0-9bf6-42db-9837-706eb2c4bf08': objects.Instance( id=101, host=None, # prevent RT trying to lazy-load this node=None, uuid='f15ecfb0-9bf6-42db-9837-706eb2c4bf08', memory_mb=_INSTANCE_TYPE_FIXTURES[1]['memory_mb'], vcpus=_INSTANCE_TYPE_FIXTURES[1]['vcpus'], root_gb=_INSTANCE_TYPE_FIXTURES[1]['root_gb'], ephemeral_gb=_INSTANCE_TYPE_FIXTURES[1]['ephemeral_gb'], numa_topology=_INSTANCE_NUMA_TOPOLOGIES['2mb'], pci_requests=None, pci_devices=None, instance_type_id=1, vm_state=vm_states.ACTIVE, power_state=power_state.RUNNING, task_state=task_states.RESIZE_MIGRATING, system_metadata={}, os_type='fake-os', project_id='fake-project', flavor=_INSTANCE_TYPE_OBJ_FIXTURES[1], old_flavor=_INSTANCE_TYPE_OBJ_FIXTURES[1], new_flavor=_INSTANCE_TYPE_OBJ_FIXTURES[2], resources = None, ), # dest-only 'f6ed631a-8645-4b12-8e1e-2fff55795765': objects.Instance( id=102, host=None, # prevent RT trying to lazy-load this node=None, uuid='f6ed631a-8645-4b12-8e1e-2fff55795765', memory_mb=_INSTANCE_TYPE_FIXTURES[2]['memory_mb'], vcpus=_INSTANCE_TYPE_FIXTURES[2]['vcpus'], root_gb=_INSTANCE_TYPE_FIXTURES[2]['root_gb'], ephemeral_gb=_INSTANCE_TYPE_FIXTURES[2]['ephemeral_gb'], numa_topology=None, pci_requests=None, pci_devices=None, instance_type_id=2, vm_state=vm_states.ACTIVE, power_state=power_state.RUNNING, task_state=task_states.RESIZE_MIGRATING, system_metadata={}, os_type='fake-os', project_id='fake-project', flavor=_INSTANCE_TYPE_OBJ_FIXTURES[2], old_flavor=_INSTANCE_TYPE_OBJ_FIXTURES[1], new_flavor=_INSTANCE_TYPE_OBJ_FIXTURES[2], resources=None, ), # source-and-dest 'f4f0bfea-fe7e-4264-b598-01cb13ef1997': objects.Instance( id=3, host=None, # prevent RT trying to lazy-load this node=None, uuid='f4f0bfea-fe7e-4264-b598-01cb13ef1997', memory_mb=_INSTANCE_TYPE_FIXTURES[2]['memory_mb'], vcpus=_INSTANCE_TYPE_FIXTURES[2]['vcpus'], root_gb=_INSTANCE_TYPE_FIXTURES[2]['root_gb'], ephemeral_gb=_INSTANCE_TYPE_FIXTURES[2]['ephemeral_gb'], numa_topology=None, pci_requests=None, pci_devices=None, instance_type_id=2, vm_state=vm_states.ACTIVE, power_state=power_state.RUNNING, task_state=task_states.RESIZE_MIGRATING, system_metadata={}, os_type='fake-os', project_id='fake-project', flavor=_INSTANCE_TYPE_OBJ_FIXTURES[2], old_flavor=_INSTANCE_TYPE_OBJ_FIXTURES[1], new_flavor=_INSTANCE_TYPE_OBJ_FIXTURES[2], resources=None, ), # dest-only-evac '077fb63a-bdc8-4330-90ef-f012082703dc': objects.Instance( id=102, host=None, # prevent RT trying to lazy-load this node=None, uuid='077fb63a-bdc8-4330-90ef-f012082703dc', memory_mb=_INSTANCE_TYPE_FIXTURES[2]['memory_mb'], vcpus=_INSTANCE_TYPE_FIXTURES[2]['vcpus'], root_gb=_INSTANCE_TYPE_FIXTURES[2]['root_gb'], ephemeral_gb=_INSTANCE_TYPE_FIXTURES[2]['ephemeral_gb'], numa_topology=None, pci_requests=None, pci_devices=None, instance_type_id=2, vm_state=vm_states.ACTIVE, power_state=power_state.RUNNING, task_state=task_states.REBUILDING, system_metadata={}, os_type='fake-os', project_id='fake-project', flavor=_INSTANCE_TYPE_OBJ_FIXTURES[2], old_flavor=_INSTANCE_TYPE_OBJ_FIXTURES[1], new_flavor=_INSTANCE_TYPE_OBJ_FIXTURES[2], resources=None, ), } 
_MIGRATION_CONTEXT_FIXTURES = { 'f4f0bfea-fe7e-4264-b598-01cb13ef1997': objects.MigrationContext( instance_uuid='f4f0bfea-fe7e-4264-b598-01cb13ef1997', migration_id=3, new_numa_topology=None, old_numa_topology=None), 'c17741a5-6f3d-44a8-ade8-773dc8c29124': objects.MigrationContext( instance_uuid='c17741a5-6f3d-44a8-ade8-773dc8c29124', migration_id=3, new_numa_topology=None, old_numa_topology=None), 'f15ecfb0-9bf6-42db-9837-706eb2c4bf08': objects.MigrationContext( instance_uuid='f15ecfb0-9bf6-42db-9837-706eb2c4bf08', migration_id=1, new_numa_topology=None, old_numa_topology=_INSTANCE_NUMA_TOPOLOGIES['2mb']), 'f6ed631a-8645-4b12-8e1e-2fff55795765': objects.MigrationContext( instance_uuid='f6ed631a-8645-4b12-8e1e-2fff55795765', migration_id=2, new_numa_topology=_INSTANCE_NUMA_TOPOLOGIES['2mb'], old_numa_topology=None), '077fb63a-bdc8-4330-90ef-f012082703dc': objects.MigrationContext( instance_uuid='077fb63a-bdc8-4330-90ef-f012082703dc', migration_id=2, new_numa_topology=None, old_numa_topology=None), } def setup_rt(hostname, virt_resources=_VIRT_DRIVER_AVAIL_RESOURCES): """Sets up the resource tracker instance with mock fixtures. :param virt_resources: Optional override of the resource representation returned by the virt driver's `get_available_resource()` method. """ query_client_mock = mock.MagicMock() report_client_mock = mock.MagicMock() notifier_mock = mock.MagicMock() vd = mock.MagicMock(autospec=driver.ComputeDriver) # Make sure we don't change any global fixtures during tests virt_resources = copy.deepcopy(virt_resources) vd.get_available_resource.return_value = virt_resources def fake_upt(provider_tree, nodename, allocations=None): inventory = { 'VCPU': { 'total': virt_resources['vcpus'], 'min_unit': 1, 'max_unit': virt_resources['vcpus'], 'step_size': 1, 'allocation_ratio': ( CONF.cpu_allocation_ratio or CONF.initial_cpu_allocation_ratio), 'reserved': CONF.reserved_host_cpus, }, 'MEMORY_MB': { 'total': virt_resources['memory_mb'], 'min_unit': 1, 'max_unit': virt_resources['memory_mb'], 'step_size': 1, 'allocation_ratio': ( CONF.ram_allocation_ratio or CONF.initial_ram_allocation_ratio), 'reserved': CONF.reserved_host_memory_mb, }, 'DISK_GB': { 'total': virt_resources['local_gb'], 'min_unit': 1, 'max_unit': virt_resources['local_gb'], 'step_size': 1, 'allocation_ratio': ( CONF.disk_allocation_ratio or CONF.initial_disk_allocation_ratio), 'reserved': compute_utils.convert_mb_to_ceil_gb( CONF.reserved_host_disk_mb), }, } provider_tree.update_inventory(nodename, inventory) vd.update_provider_tree.side_effect = fake_upt vd.get_host_ip_addr.return_value = _NODENAME vd.rebalances_nodes = False with test.nested( mock.patch('nova.scheduler.client.query.SchedulerQueryClient', return_value=query_client_mock), mock.patch('nova.scheduler.client.report.SchedulerReportClient', return_value=report_client_mock), mock.patch('nova.rpc.get_notifier', return_value=notifier_mock)): rt = resource_tracker.ResourceTracker(hostname, vd) return (rt, query_client_mock, report_client_mock, vd) def compute_update_usage(resources, flavor, sign=1): resources.vcpus_used += sign * flavor.vcpus resources.memory_mb_used += sign * flavor.memory_mb resources.local_gb_used += sign * (flavor.root_gb + flavor.ephemeral_gb) resources.free_ram_mb = resources.memory_mb - resources.memory_mb_used resources.free_disk_gb = resources.local_gb - resources.local_gb_used return resources class BaseTestCase(test.NoDBTestCase): def setUp(self): super(BaseTestCase, self).setUp() self.rt = None self.flags(my_ip='1.1.1.1', 
reserved_host_disk_mb=0, reserved_host_memory_mb=0, reserved_host_cpus=0) self.allocations = { _COMPUTE_NODE_FIXTURES[0].uuid: { "generation": 0, "resources": { "VCPU": 1, "MEMORY_MB": 512 } } } self.compute = _COMPUTE_NODE_FIXTURES[0] self.resource_0 = objects.Resource(provider_uuid=self.compute.uuid, resource_class="CUSTOM_RESOURCE_0", identifier="bar") self.resource_1 = objects.Resource(provider_uuid=self.compute.uuid, resource_class="CUSTOM_RESOURCE_1", identifier="foo_1") self.resource_2 = objects.Resource(provider_uuid=self.compute.uuid, resource_class="CUSTOM_RESOURCE_1", identifier="foo_2") def _setup_rt(self, virt_resources=_VIRT_DRIVER_AVAIL_RESOURCES): (self.rt, self.sched_client_mock, self.report_client_mock, self.driver_mock) = setup_rt(_HOSTNAME, virt_resources) def _setup_ptree(self, compute): """Set up a ProviderTree with a compute node root, and mock the ReportClient's get_provider_tree_and_ensure_root() to return it. update_traits() is mocked so that tests can specify a return value. Returns the new ProviderTree so that tests can control its behaviour further. """ ptree = provider_tree.ProviderTree() ptree.new_root(compute.hypervisor_hostname, compute.uuid) ptree.update_traits = mock.Mock() resources = {"CUSTOM_RESOURCE_0": {self.resource_0}, "CUSTOM_RESOURCE_1": {self.resource_1, self.resource_2}} ptree.update_resources(compute.uuid, resources) rc_mock = self.rt.reportclient gptaer_mock = rc_mock.get_provider_tree_and_ensure_root gptaer_mock.return_value = ptree return ptree class TestUpdateAvailableResources(BaseTestCase): def _update_available_resources(self, **kwargs): # We test RT._update separately, since the complexity # of the update_available_resource() function is high enough as # it is, we just want to focus here on testing the resources # parameter that update_available_resource() eventually passes # to _update(). with mock.patch.object(self.rt, '_update') as update_mock: self.rt.update_available_resource(mock.MagicMock(), _NODENAME, **kwargs) return update_mock @mock.patch('nova.objects.InstancePCIRequests.get_by_instance', return_value=objects.InstancePCIRequests(requests=[])) @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', return_value=objects.PciDeviceList()) @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') @mock.patch('nova.objects.InstanceList.get_by_host_and_node') def test_disabled(self, get_mock, migr_mock, get_cn_mock, pci_mock, instance_pci_mock): self._setup_rt() # Set up resource tracker in an enabled state and verify that all is # good before simulating a disabled node. get_mock.return_value = [] migr_mock.return_value = [] get_cn_mock.return_value = _COMPUTE_NODE_FIXTURES[0] # This will call _init_compute_node() and create a ComputeNode object # and will also call through to InstanceList.get_by_host_and_node() # because the node is available. self._update_available_resources() self.assertTrue(get_mock.called) get_mock.reset_mock() # OK, now simulate a node being disabled by the Ironic virt driver. 
vd = self.driver_mock vd.node_is_available.return_value = False self._update_available_resources() self.assertFalse(get_mock.called) @mock.patch('nova.objects.InstancePCIRequests.get_by_instance', return_value=objects.InstancePCIRequests(requests=[])) @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', return_value=objects.PciDeviceList()) @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') @mock.patch('nova.objects.InstanceList.get_by_host_and_node') def test_no_instances_no_migrations_no_reserved(self, get_mock, migr_mock, get_cn_mock, pci_mock, instance_pci_mock): self._setup_rt() get_mock.return_value = [] migr_mock.return_value = [] get_cn_mock.return_value = _COMPUTE_NODE_FIXTURES[0] update_mock = self._update_available_resources() vd = self.driver_mock vd.get_available_resource.assert_called_once_with(_NODENAME) get_mock.assert_called_once_with(mock.ANY, _HOSTNAME, _NODENAME, expected_attrs=[ 'system_metadata', 'numa_topology', 'flavor', 'migration_context', 'resources']) get_cn_mock.assert_called_once_with(mock.ANY, _HOSTNAME, _NODENAME) migr_mock.assert_called_once_with(mock.ANY, _HOSTNAME, _NODENAME) expected_resources = copy.deepcopy(_COMPUTE_NODE_FIXTURES[0]) vals = { 'free_disk_gb': 6, 'local_gb': 6, 'free_ram_mb': 512, 'memory_mb_used': 0, 'vcpus_used': 0, 'local_gb_used': 0, 'memory_mb': 512, 'current_workload': 0, 'vcpus': 4, 'running_vms': 0 } _update_compute_node(expected_resources, **vals) actual_resources = update_mock.call_args[0][1] self.assertTrue(obj_base.obj_equal_prims(expected_resources, actual_resources)) update_mock.assert_called_once() @mock.patch('nova.objects.InstancePCIRequests.get_by_instance', return_value=objects.InstancePCIRequests(requests=[])) @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', return_value=objects.PciDeviceList()) @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') @mock.patch('nova.objects.InstanceList.get_by_host_and_node') @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_remove_deleted_instances_allocations') def test_startup_makes_it_through(self, rdia, get_mock, migr_mock, get_cn_mock, pci_mock, instance_pci_mock): """Just make sure the startup kwarg makes it from _update_available_resource all the way down the call stack to _update. In this case a compute node record already exists. """ self._setup_rt() get_mock.return_value = [] migr_mock.return_value = [] get_cn_mock.return_value = _COMPUTE_NODE_FIXTURES[0] update_mock = self._update_available_resources(startup=True) update_mock.assert_called_once_with(mock.ANY, mock.ANY, startup=True) rdia.assert_called_once_with( mock.ANY, get_cn_mock.return_value, [], {}) @mock.patch('nova.objects.InstancePCIRequests.get_by_instance', return_value=objects.InstancePCIRequests(requests=[])) @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', return_value=objects.PciDeviceList()) @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_init_compute_node', return_value=True) @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') @mock.patch('nova.objects.InstanceList.get_by_host_and_node') @mock.patch('nova.compute.resource_tracker.ResourceTracker.' 
'_remove_deleted_instances_allocations') def test_startup_new_compute(self, rdia, get_mock, migr_mock, init_cn_mock, pci_mock, instance_pci_mock): """Just make sure the startup kwarg makes it from _update_available_resource all the way down the call stack to _update. In this case a new compute node record is created. """ self._setup_rt() cn = _COMPUTE_NODE_FIXTURES[0] self.rt.compute_nodes[cn.hypervisor_hostname] = cn mock_pci_tracker = mock.MagicMock() mock_pci_tracker.stats.to_device_pools_obj.return_value = ( objects.PciDevicePoolList()) self.rt.pci_tracker = mock_pci_tracker get_mock.return_value = [] migr_mock.return_value = [] update_mock = self._update_available_resources(startup=True) update_mock.assert_called_once_with(mock.ANY, mock.ANY, startup=True) rdia.assert_not_called() @mock.patch('nova.objects.InstancePCIRequests.get_by_instance', return_value=objects.InstancePCIRequests(requests=[])) @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', return_value=objects.PciDeviceList()) @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') @mock.patch('nova.objects.InstanceList.get_by_host_and_node') def test_no_instances_no_migrations_reserved_disk_ram_and_cpu( self, get_mock, migr_mock, get_cn_mock, pci_mock, instance_pci_mock): self.flags(reserved_host_disk_mb=1024, reserved_host_memory_mb=512, reserved_host_cpus=1) self._setup_rt() get_mock.return_value = [] migr_mock.return_value = [] get_cn_mock.return_value = _COMPUTE_NODE_FIXTURES[0] update_mock = self._update_available_resources() get_cn_mock.assert_called_once_with(mock.ANY, _HOSTNAME, _NODENAME) expected_resources = copy.deepcopy(_COMPUTE_NODE_FIXTURES[0]) vals = { 'free_disk_gb': 5, # 6GB avail - 1 GB reserved 'local_gb': 6, 'free_ram_mb': 0, # 512MB avail - 512MB reserved 'memory_mb_used': 512, # 0MB used + 512MB reserved 'vcpus_used': 1, 'local_gb_used': 1, # 0GB used + 1 GB reserved 'memory_mb': 512, 'current_workload': 0, 'vcpus': 4, 'running_vms': 0 } _update_compute_node(expected_resources, **vals) actual_resources = update_mock.call_args[0][1] self.assertTrue(obj_base.obj_equal_prims(expected_resources, actual_resources)) update_mock.assert_called_once() @mock.patch('nova.compute.utils.is_volume_backed_instance') @mock.patch('nova.objects.InstancePCIRequests.get_by_instance', return_value=objects.InstancePCIRequests(requests=[])) @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', return_value=objects.PciDeviceList()) @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') @mock.patch('nova.objects.InstanceList.get_by_host_and_node') def test_some_instances_no_migrations(self, get_mock, migr_mock, get_cn_mock, pci_mock, instance_pci_mock, bfv_check_mock): # Setup virt resources to match used resources to number # of defined instances on the hypervisor # Note that the usage numbers here correspond to only the first # Instance object, because the second instance object fixture is in # DELETED state and therefore we should not expect it to be accounted # for in the auditing process. 
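        # Concretely: only the single ACTIVE instance fixture should count,
        # contributing 1 vCPU, 128MB of RAM and 1GB of disk, which is what
        # virt_resources is seeded with below and what the expected vals
        # dict asserts as the used/free amounts.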
virt_resources = copy.deepcopy(_VIRT_DRIVER_AVAIL_RESOURCES) virt_resources.update(vcpus_used=1, memory_mb_used=128, local_gb_used=1) self._setup_rt(virt_resources=virt_resources) get_mock.return_value = _INSTANCE_FIXTURES migr_mock.return_value = [] get_cn_mock.return_value = _COMPUTE_NODE_FIXTURES[0] bfv_check_mock.return_value = False update_mock = self._update_available_resources() get_cn_mock.assert_called_once_with(mock.ANY, _HOSTNAME, _NODENAME) expected_resources = copy.deepcopy(_COMPUTE_NODE_FIXTURES[0]) vals = { 'free_disk_gb': 5, # 6 - 1 used 'local_gb': 6, 'free_ram_mb': 384, # 512 - 128 used 'memory_mb_used': 128, 'vcpus_used': 1, 'local_gb_used': 1, 'memory_mb': 512, 'current_workload': 0, 'vcpus': 4, 'running_vms': 1 # One active instance } _update_compute_node(expected_resources, **vals) actual_resources = update_mock.call_args[0][1] self.assertTrue(obj_base.obj_equal_prims(expected_resources, actual_resources)) update_mock.assert_called_once() @mock.patch('nova.objects.InstancePCIRequests.get_by_instance', return_value=objects.InstancePCIRequests(requests=[])) @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', return_value=objects.PciDeviceList()) @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') @mock.patch('nova.objects.InstanceList.get_by_host_and_node') def test_orphaned_instances_no_migrations(self, get_mock, migr_mock, get_cn_mock, pci_mock, instance_pci_mock): # Setup virt resources to match used resources to number # of defined instances on the hypervisor virt_resources = copy.deepcopy(_VIRT_DRIVER_AVAIL_RESOURCES) virt_resources.update(memory_mb_used=64) self._setup_rt(virt_resources=virt_resources) get_mock.return_value = [] migr_mock.return_value = [] get_cn_mock.return_value = _COMPUTE_NODE_FIXTURES[0] # Orphaned instances are those that the virt driver has on # record as consuming resources on the compute node, but the # Nova database has no record of the instance being active # on the host. For some reason, the resource tracker only # considers orphaned instance's memory usage in its calculations # of free resources... orphaned_usages = { '71ed7ef6-9d2e-4c65-9f4e-90bb6b76261d': { # Yes, the return result format of get_per_instance_usage # is indeed this stupid and redundant. Also note that the # libvirt driver just returns an empty dict always for this # method and so who the heck knows whether this stuff # actually works. 'uuid': '71ed7ef6-9d2e-4c65-9f4e-90bb6b76261d', 'memory_mb': 64 } } vd = self.driver_mock vd.get_per_instance_usage.return_value = orphaned_usages update_mock = self._update_available_resources() get_cn_mock.assert_called_once_with(mock.ANY, _HOSTNAME, _NODENAME) expected_resources = copy.deepcopy(_COMPUTE_NODE_FIXTURES[0]) vals = { 'free_disk_gb': 6, 'local_gb': 6, 'free_ram_mb': 448, # 512 - 64 orphaned usage 'memory_mb_used': 64, 'vcpus_used': 0, 'local_gb_used': 0, 'memory_mb': 512, 'current_workload': 0, 'vcpus': 4, # Yep, for some reason, orphaned instances are not counted # as running VMs... 
'running_vms': 0 } _update_compute_node(expected_resources, **vals) actual_resources = update_mock.call_args[0][1] self.assertTrue(obj_base.obj_equal_prims(expected_resources, actual_resources)) update_mock.assert_called_once() @mock.patch('nova.compute.utils.is_volume_backed_instance', return_value=False) @mock.patch('nova.objects.InstancePCIRequests.get_by_instance', return_value=objects.InstancePCIRequests(requests=[])) @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', return_value=objects.PciDeviceList()) @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') @mock.patch('nova.objects.Instance.get_by_uuid') @mock.patch('nova.objects.InstanceList.get_by_host_and_node') def test_no_instances_source_migration(self, get_mock, get_inst_mock, migr_mock, get_cn_mock, pci_mock, instance_pci_mock, mock_is_volume_backed_instance): # We test the behavior of update_available_resource() when # there is an active migration that involves this compute node # as the source host not the destination host, and the resource # tracker does not have any instances assigned to it. This is # the case when a migration from this compute host to another # has been completed, but the user has not confirmed the resize # yet, so the resource tracker must continue to keep the resources # for the original instance type available on the source compute # node in case of a revert of the resize. # Setup virt resources to match used resources to number # of defined instances on the hypervisor virt_resources = copy.deepcopy(_VIRT_DRIVER_AVAIL_RESOURCES) virt_resources.update(vcpus_used=4, memory_mb_used=128, local_gb_used=1) self._setup_rt(virt_resources=virt_resources) get_mock.return_value = [] migr_obj = _MIGRATION_FIXTURES['source-only'] migr_mock.return_value = [migr_obj] get_cn_mock.return_value = _COMPUTE_NODE_FIXTURES[0] # Migration.instance property is accessed in the migration # processing code, and this property calls # objects.Instance.get_by_uuid, so we have the migration return inst_uuid = migr_obj.instance_uuid instance = _MIGRATION_INSTANCE_FIXTURES[inst_uuid].obj_clone() get_inst_mock.return_value = instance instance.migration_context = _MIGRATION_CONTEXT_FIXTURES[inst_uuid] update_mock = self._update_available_resources() get_cn_mock.assert_called_once_with(mock.ANY, _HOSTNAME, _NODENAME) expected_resources = copy.deepcopy(_COMPUTE_NODE_FIXTURES[0]) vals = { 'free_disk_gb': 5, 'local_gb': 6, 'free_ram_mb': 384, # 512 total - 128 for possible revert of orig 'memory_mb_used': 128, # 128 possible revert amount 'vcpus_used': 1, 'local_gb_used': 1, 'memory_mb': 512, 'current_workload': 0, 'vcpus': 4, 'running_vms': 0 } _update_compute_node(expected_resources, **vals) actual_resources = update_mock.call_args[0][1] self.assertTrue(obj_base.obj_equal_prims(expected_resources, actual_resources)) update_mock.assert_called_once() @mock.patch('nova.compute.utils.is_volume_backed_instance', return_value=False) @mock.patch('nova.objects.InstancePCIRequests.get_by_instance', return_value=objects.InstancePCIRequests(requests=[])) @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', return_value=objects.PciDeviceList()) @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') @mock.patch('nova.objects.Instance.get_by_uuid') @mock.patch('nova.objects.InstanceList.get_by_host_and_node') def test_no_instances_dest_migration(self, get_mock, 
get_inst_mock, migr_mock, get_cn_mock, pci_mock, instance_pci_mock, mock_is_volume_backed_instance): # We test the behavior of update_available_resource() when # there is an active migration that involves this compute node # as the destination host not the source host, and the resource # tracker does not yet have any instances assigned to it. This is # the case when a migration to this compute host from another host # is in progress, but the user has not confirmed the resize # yet, so the resource tracker must reserve the resources # for the possibly-to-be-confirmed instance's instance type # node in case of a confirm of the resize. # Setup virt resources to match used resources to number # of defined instances on the hypervisor virt_resources = copy.deepcopy(_VIRT_DRIVER_AVAIL_RESOURCES) virt_resources.update(vcpus_used=2, memory_mb_used=256, local_gb_used=5) self._setup_rt(virt_resources=virt_resources) get_mock.return_value = [] migr_obj = _MIGRATION_FIXTURES['dest-only'] migr_mock.return_value = [migr_obj] inst_uuid = migr_obj.instance_uuid instance = _MIGRATION_INSTANCE_FIXTURES[inst_uuid].obj_clone() get_inst_mock.return_value = instance get_cn_mock.return_value = _COMPUTE_NODE_FIXTURES[0] instance.migration_context = _MIGRATION_CONTEXT_FIXTURES[inst_uuid] update_mock = self._update_available_resources() get_cn_mock.assert_called_once_with(mock.ANY, _HOSTNAME, _NODENAME) expected_resources = copy.deepcopy(_COMPUTE_NODE_FIXTURES[0]) vals = { 'free_disk_gb': 1, 'local_gb': 6, 'free_ram_mb': 256, # 512 total - 256 for possible confirm of new 'memory_mb_used': 256, # 256 possible confirmed amount 'vcpus_used': 2, 'local_gb_used': 5, 'memory_mb': 512, 'current_workload': 0, 'vcpus': 4, 'running_vms': 0 } _update_compute_node(expected_resources, **vals) actual_resources = update_mock.call_args[0][1] self.assertTrue(obj_base.obj_equal_prims(expected_resources, actual_resources)) update_mock.assert_called_once() @mock.patch('nova.compute.utils.is_volume_backed_instance', return_value=False) @mock.patch('nova.objects.InstancePCIRequests.get_by_instance', return_value=objects.InstancePCIRequests(requests=[])) @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', return_value=objects.PciDeviceList()) @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') @mock.patch('nova.objects.Instance.get_by_uuid') @mock.patch('nova.objects.InstanceList.get_by_host_and_node') def test_no_instances_dest_evacuation(self, get_mock, get_inst_mock, migr_mock, get_cn_mock, pci_mock, instance_pci_mock, mock_is_volume_backed_instance): # We test the behavior of update_available_resource() when # there is an active evacuation that involves this compute node # as the destination host not the source host, and the resource # tracker does not yet have any instances assigned to it. This is # the case when a migration to this compute host from another host # is in progress, but not finished yet. 
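        # Unlike the resize cases above, the fixture's migration context is
        # re-pointed at the evacuation migration below
        # (instance.migration_context.migration_id = migr_obj.id) so the
        # tracker can associate the context with this migration record.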
# Setup virt resources to match used resources to number # of defined instances on the hypervisor virt_resources = copy.deepcopy(_VIRT_DRIVER_AVAIL_RESOURCES) virt_resources.update(vcpus_used=2, memory_mb_used=256, local_gb_used=5) self._setup_rt(virt_resources=virt_resources) get_mock.return_value = [] migr_obj = _MIGRATION_FIXTURES['dest-only-evac'] migr_mock.return_value = [migr_obj] inst_uuid = migr_obj.instance_uuid instance = _MIGRATION_INSTANCE_FIXTURES[inst_uuid].obj_clone() get_inst_mock.return_value = instance get_cn_mock.return_value = _COMPUTE_NODE_FIXTURES[0] instance.migration_context = _MIGRATION_CONTEXT_FIXTURES[inst_uuid] instance.migration_context.migration_id = migr_obj.id update_mock = self._update_available_resources() get_cn_mock.assert_called_once_with(mock.ANY, _HOSTNAME, _NODENAME) expected_resources = copy.deepcopy(_COMPUTE_NODE_FIXTURES[0]) vals = { 'free_disk_gb': 1, 'free_ram_mb': 256, # 512 total - 256 for possible confirm of new 'memory_mb_used': 256, # 256 possible confirmed amount 'vcpus_used': 2, 'local_gb_used': 5, 'memory_mb': 512, 'current_workload': 0, 'vcpus': 4, 'running_vms': 0 } _update_compute_node(expected_resources, **vals) actual_resources = update_mock.call_args[0][1] self.assertTrue(obj_base.obj_equal_prims(expected_resources, actual_resources)) update_mock.assert_called_once() @mock.patch('nova.compute.utils.is_volume_backed_instance') @mock.patch('nova.objects.InstancePCIRequests.get_by_instance', return_value=objects.InstancePCIRequests(requests=[])) @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', return_value=objects.PciDeviceList()) @mock.patch('nova.objects.MigrationContext.get_by_instance_uuid', return_value=None) @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') @mock.patch('nova.objects.Instance.get_by_uuid') @mock.patch('nova.objects.InstanceList.get_by_host_and_node') def test_some_instances_source_and_dest_migration(self, get_mock, get_inst_mock, migr_mock, get_cn_mock, get_mig_ctxt_mock, pci_mock, instance_pci_mock, bfv_check_mock): # We test the behavior of update_available_resource() when # there is an active migration that involves this compute node # as the destination host AND the source host, and the resource # tracker has a few instances assigned to it, including the # instance that is resizing to this same compute node. The tracking # of resource amounts takes into account both the old and new # resize instance types as taking up space on the node. 
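        # Expected arithmetic, mirrored in the vals dict below:
        #   disk: 6GB total - 1GB existing - 5GB new flavor - 1GB old flavor
        #         = -1GB free
        #   RAM:  512MB total - 128MB existing - 256MB new flavor
        #         - 128MB old flavor = 0MB free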
# Setup virt resources to match used resources to number # of defined instances on the hypervisor virt_resources = copy.deepcopy(_VIRT_DRIVER_AVAIL_RESOURCES) virt_resources.update(vcpus_used=4, memory_mb_used=512, local_gb_used=7) self._setup_rt(virt_resources=virt_resources) migr_obj = _MIGRATION_FIXTURES['source-and-dest'] migr_mock.return_value = [migr_obj] inst_uuid = migr_obj.instance_uuid # The resizing instance has already had its instance type # changed to the *new* instance type (the bigger one, instance type 2) resizing_instance = _MIGRATION_INSTANCE_FIXTURES[inst_uuid].obj_clone() resizing_instance.migration_context = ( _MIGRATION_CONTEXT_FIXTURES[resizing_instance.uuid]) all_instances = _INSTANCE_FIXTURES + [resizing_instance] get_mock.return_value = all_instances get_inst_mock.return_value = resizing_instance get_cn_mock.return_value = _COMPUTE_NODE_FIXTURES[0] bfv_check_mock.return_value = False update_mock = self._update_available_resources() get_cn_mock.assert_called_once_with(mock.ANY, _HOSTNAME, _NODENAME) expected_resources = copy.deepcopy(_COMPUTE_NODE_FIXTURES[0]) vals = { # 6 total - 1G existing - 5G new flav - 1G old flav 'free_disk_gb': -1, 'local_gb': 6, # 512 total - 128 existing - 256 new flav - 128 old flav 'free_ram_mb': 0, 'memory_mb_used': 512, # 128 exist + 256 new flav + 128 old flav 'vcpus_used': 4, 'local_gb_used': 7, # 1G existing, 5G new flav + 1 old flav 'memory_mb': 512, 'current_workload': 1, # One migrating instance... 'vcpus': 4, 'running_vms': 2 } _update_compute_node(expected_resources, **vals) actual_resources = update_mock.call_args[0][1] self.assertTrue(obj_base.obj_equal_prims(expected_resources, actual_resources)) update_mock.assert_called_once() @mock.patch('nova.compute.utils.is_volume_backed_instance', new=mock.Mock(return_value=False)) @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', new=mock.Mock(return_value=objects.PciDeviceList())) @mock.patch('nova.objects.MigrationContext.get_by_instance_uuid', new=mock.Mock(return_value=None)) @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') @mock.patch('nova.objects.Instance.get_by_uuid') @mock.patch('nova.objects.InstanceList.get_by_host_and_node') def test_populate_assigned_resources(self, mock_get_instances, mock_get_instance, mock_get_migrations, mock_get_cn): # when update_available_resources, rt.assigned_resources # will be populated, resources assigned to tracked migrations # and instances will be tracked in rt.assigned_resources. 
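        # The structure asserted at the end of this test is
        # {compute_node_uuid: {resource_class: set(objects.Resource)}}.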
        self._setup_rt()

        # one instance is in the middle of being "resized" to the same host,
        # meaning there are two related resource allocations - one against
        # the instance and one against the migration record
        # here resource_1 and resource_2 are assigned to resizing inst
        migr_obj = _MIGRATION_FIXTURES['source-and-dest']
        inst_uuid = migr_obj.instance_uuid
        resizing_inst = _MIGRATION_INSTANCE_FIXTURES[inst_uuid].obj_clone()
        mig_ctxt = _MIGRATION_CONTEXT_FIXTURES[resizing_inst.uuid]
        mig_ctxt.old_resources = objects.ResourceList(
            objects=[self.resource_1])
        mig_ctxt.new_resources = objects.ResourceList(
            objects=[self.resource_2])
        resizing_inst.migration_context = mig_ctxt
        # the other instance is not being resized and only has the single
        # resource allocation for itself
        # here resource_0 is assigned to inst
        inst = _INSTANCE_FIXTURES[0]
        inst.resources = objects.ResourceList(objects=[self.resource_0])
        mock_get_instances.return_value = [inst, resizing_inst]
        mock_get_instance.return_value = resizing_inst
        mock_get_migrations.return_value = [migr_obj]
        mock_get_cn.return_value = self.compute

        update_mock = self._update_available_resources()
        update_mock.assert_called_once()
        expected_assigned_resources = {self.compute.uuid: {
            "CUSTOM_RESOURCE_0": {self.resource_0},
            "CUSTOM_RESOURCE_1": {self.resource_1, self.resource_2}
        }}
        self.assertEqual(expected_assigned_resources,
                         self.rt.assigned_resources)

    @mock.patch('nova.compute.utils.is_volume_backed_instance',
                new=mock.Mock(return_value=False))
    @mock.patch('nova.objects.PciDeviceList.get_by_compute_node',
                new=mock.Mock(return_value=objects.PciDeviceList()))
    @mock.patch('nova.objects.MigrationContext.get_by_instance_uuid',
                new=mock.Mock(return_value=None))
    @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename')
    @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node')
    @mock.patch('nova.objects.Instance.get_by_uuid')
    @mock.patch('nova.objects.InstanceList.get_by_host_and_node')
    def test_check_resources_startup_success(self, mock_get_instances,
                                             mock_get_instance,
                                             mock_get_migrations,
                                             mock_get_cn):
        # When update_available_resources runs at startup it triggers a
        # check for assigned resources that are missing from the provider
        # tree. If any are found, the likely cause is that an admin deleted
        # the resources on the host or removed their resource configuration
        # from the config file.
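        # The corresponding failure path is covered by
        # test_check_resources_startup_fail below, which expects
        # exc.AssignedResourceNotFound to be raised.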
self._setup_rt() # there are three resources in provider tree self.rt.provider_tree = self._setup_ptree(self.compute) migr_obj = migr_obj = _MIGRATION_FIXTURES['source-and-dest'] inst_uuid = migr_obj.instance_uuid resizing_inst = _MIGRATION_INSTANCE_FIXTURES[inst_uuid].obj_clone() mig_ctxt = _MIGRATION_CONTEXT_FIXTURES[resizing_inst.uuid] mig_ctxt.old_resources = objects.ResourceList( objects=[self.resource_1]) mig_ctxt.new_resources = objects.ResourceList( objects=[self.resource_2]) resizing_inst.migration_context = mig_ctxt inst = _INSTANCE_FIXTURES[0] inst.resources = objects.ResourceList(objects=[self.resource_0]) mock_get_instances.return_value = [inst, resizing_inst] mock_get_instance.return_value = resizing_inst mock_get_migrations.return_value = [migr_obj] mock_get_cn.return_value = self.compute # check_resources is only triggered when startup update_mock = self._update_available_resources(startup=True) update_mock.assert_called_once() @mock.patch('nova.compute.utils.is_volume_backed_instance', new=mock.Mock(return_value=False)) @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', new=mock.Mock(return_value=objects.PciDeviceList())) @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') @mock.patch('nova.objects.InstanceList.get_by_host_and_node') def test_check_resources_startup_fail(self, mock_get_instances, mock_get_migrations, mock_get_cn): # Similar to testcase test_check_resources_startup_success, # and this one is for check_resources failed resource = objects.Resource(provider_uuid=self.compute.uuid, resource_class="CUSTOM_RESOURCE_0", identifier="notfound") self._setup_rt() # there are three resources in provider tree self.rt.provider_tree = self._setup_ptree(self.compute) inst = _INSTANCE_FIXTURES[0] inst.resources = objects.ResourceList(objects=[resource]) mock_get_instances.return_value = [inst] mock_get_migrations.return_value = [] mock_get_cn.return_value = self.compute # There are assigned resources not found in provider tree self.assertRaises(exc.AssignedResourceNotFound, self._update_available_resources, startup=True) class TestInitComputeNode(BaseTestCase): @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', return_value=objects.PciDeviceList()) @mock.patch('nova.objects.ComputeNode.create') @mock.patch('nova.objects.Service.get_by_compute_host') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_update') def test_no_op_init_compute_node(self, update_mock, get_mock, service_mock, create_mock, pci_mock): self._setup_rt() resources = copy.deepcopy(_VIRT_DRIVER_AVAIL_RESOURCES) compute_node = copy.deepcopy(_COMPUTE_NODE_FIXTURES[0]) self.rt.compute_nodes[_NODENAME] = compute_node self.assertFalse( self.rt._init_compute_node(mock.sentinel.ctx, resources)) self.assertFalse(service_mock.called) self.assertFalse(get_mock.called) self.assertFalse(create_mock.called) self.assertTrue(pci_mock.called) self.assertFalse(update_mock.called) @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', return_value=objects.PciDeviceList()) @mock.patch('nova.objects.ComputeNode.create') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.compute.resource_tracker.ResourceTracker.' 
'_update') def test_compute_node_loaded(self, update_mock, get_mock, create_mock, pci_mock): self._setup_rt() def fake_get_node(_ctx, host, node): res = copy.deepcopy(_COMPUTE_NODE_FIXTURES[0]) return res get_mock.side_effect = fake_get_node resources = copy.deepcopy(_VIRT_DRIVER_AVAIL_RESOURCES) self.assertFalse( self.rt._init_compute_node(mock.sentinel.ctx, resources)) get_mock.assert_called_once_with(mock.sentinel.ctx, _HOSTNAME, _NODENAME) self.assertFalse(create_mock.called) self.assertFalse(update_mock.called) @mock.patch('nova.objects.ComputeNodeList.get_by_hypervisor') @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', return_value=objects.PciDeviceList()) @mock.patch('nova.objects.ComputeNode.create') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_update') def test_compute_node_rebalanced(self, update_mock, get_mock, create_mock, pci_mock, get_by_hypervisor_mock): self._setup_rt() self.driver_mock.rebalances_nodes = True cn = copy.deepcopy(_COMPUTE_NODE_FIXTURES[0]) cn.host = "old-host" def fake_get_all(_ctx, nodename): return [cn] get_mock.side_effect = exc.NotFound get_by_hypervisor_mock.side_effect = fake_get_all resources = copy.deepcopy(_VIRT_DRIVER_AVAIL_RESOURCES) self.assertFalse( self.rt._init_compute_node(mock.sentinel.ctx, resources)) get_mock.assert_called_once_with(mock.sentinel.ctx, _HOSTNAME, _NODENAME) get_by_hypervisor_mock.assert_called_once_with(mock.sentinel.ctx, _NODENAME) create_mock.assert_not_called() update_mock.assert_called_once_with(mock.sentinel.ctx, cn) self.assertEqual(_HOSTNAME, self.rt.compute_nodes[_NODENAME].host) @mock.patch('nova.objects.ComputeNodeList.get_by_hypervisor') @mock.patch('nova.objects.ComputeNode.create') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_update') def test_compute_node_created_on_empty(self, update_mock, get_mock, create_mock, get_by_hypervisor_mock): get_by_hypervisor_mock.return_value = [] self._test_compute_node_created(update_mock, get_mock, create_mock, get_by_hypervisor_mock) @mock.patch('nova.objects.ComputeNodeList.get_by_hypervisor') @mock.patch('nova.objects.ComputeNode.create') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_update') def test_compute_node_created_on_empty_rebalance(self, update_mock, get_mock, create_mock, get_by_hypervisor_mock): get_by_hypervisor_mock.return_value = [] self._test_compute_node_created(update_mock, get_mock, create_mock, get_by_hypervisor_mock, rebalances_nodes=True) @mock.patch('nova.objects.ComputeNodeList.get_by_hypervisor') @mock.patch('nova.objects.ComputeNode.create') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.compute.resource_tracker.ResourceTracker.' 
'_update') def test_compute_node_created_too_many(self, update_mock, get_mock, create_mock, get_by_hypervisor_mock): get_by_hypervisor_mock.return_value = ["fake_node_1", "fake_node_2"] self._test_compute_node_created(update_mock, get_mock, create_mock, get_by_hypervisor_mock, rebalances_nodes=True) def _test_compute_node_created(self, update_mock, get_mock, create_mock, get_by_hypervisor_mock, rebalances_nodes=False): self.flags(cpu_allocation_ratio=1.0, ram_allocation_ratio=1.0, disk_allocation_ratio=1.0) self._setup_rt() self.driver_mock.rebalances_nodes = rebalances_nodes get_mock.side_effect = exc.NotFound resources = { 'host_ip': '1.1.1.1', 'numa_topology': None, 'metrics': '[]', 'cpu_info': '', 'hypervisor_hostname': _NODENAME, 'free_disk_gb': 6, 'hypervisor_version': 0, 'local_gb': 6, 'free_ram_mb': 512, 'memory_mb_used': 0, 'pci_device_pools': [], 'vcpus_used': 0, 'hypervisor_type': 'fake', 'local_gb_used': 0, 'memory_mb': 512, 'current_workload': 0, 'vcpus': 4, 'running_vms': 0, 'pci_passthrough_devices': '[]', 'uuid': uuids.compute_node_uuid } # The expected compute represents the initial values used # when creating a compute node. expected_compute = objects.ComputeNode( host_ip=resources['host_ip'], vcpus=resources['vcpus'], memory_mb=resources['memory_mb'], local_gb=resources['local_gb'], cpu_info=resources['cpu_info'], vcpus_used=resources['vcpus_used'], memory_mb_used=resources['memory_mb_used'], local_gb_used=resources['local_gb_used'], numa_topology=resources['numa_topology'], hypervisor_type=resources['hypervisor_type'], hypervisor_version=resources['hypervisor_version'], hypervisor_hostname=resources['hypervisor_hostname'], # NOTE(sbauza): ResourceTracker adds host field host=_HOSTNAME, # NOTE(sbauza): ResourceTracker adds CONF allocation ratios ram_allocation_ratio=CONF.initial_ram_allocation_ratio, cpu_allocation_ratio=CONF.initial_cpu_allocation_ratio, disk_allocation_ratio=CONF.initial_disk_allocation_ratio, stats={'failed_builds': 0}, uuid=uuids.compute_node_uuid ) with mock.patch.object(self.rt, '_setup_pci_tracker') as setup_pci: self.assertTrue( self.rt._init_compute_node(mock.sentinel.ctx, resources)) cn = self.rt.compute_nodes[_NODENAME] get_mock.assert_called_once_with(mock.sentinel.ctx, _HOSTNAME, _NODENAME) if rebalances_nodes: get_by_hypervisor_mock.assert_called_once_with( mock.sentinel.ctx, _NODENAME) else: get_by_hypervisor_mock.assert_not_called() create_mock.assert_called_once_with() self.assertTrue(obj_base.obj_equal_prims(expected_compute, cn)) setup_pci.assert_called_once_with(mock.sentinel.ctx, cn, resources) self.assertFalse(update_mock.called) @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_setup_pci_tracker') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename', side_effect=exc.ComputeHostNotFound(host=_HOSTNAME)) @mock.patch('nova.objects.ComputeNode.create', side_effect=(test.TestingException, None)) def test_compute_node_create_fail_retry_works(self, mock_create, mock_get, mock_setup_pci): """Tests that _init_compute_node will not save the ComputeNode object in the compute_nodes dict if create() fails. """ self._setup_rt() self.assertEqual({}, self.rt.compute_nodes) ctxt = context.get_context() # The first ComputeNode.create fails so rt.compute_nodes should # remain empty. 
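        # ComputeNode.create is mocked above with
        # side_effect=(test.TestingException, None): the first attempt
        # raises and the retry later in this test succeeds.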
        resources = copy.deepcopy(_VIRT_DRIVER_AVAIL_RESOURCES)
        resources['uuid'] = uuids.cn_uuid  # for the LOG.info message
        self.assertRaises(test.TestingException,
                          self.rt._init_compute_node, ctxt, resources)
        self.assertEqual({}, self.rt.compute_nodes)
        # Second create works so compute_nodes should have a mapping.
        self.assertTrue(self.rt._init_compute_node(ctxt, resources))
        self.assertIn(_NODENAME, self.rt.compute_nodes)
        mock_get.assert_has_calls([mock.call(
            ctxt, _HOSTNAME, _NODENAME)] * 2)
        self.assertEqual(2, mock_create.call_count)
        mock_setup_pci.assert_called_once_with(
            ctxt, test.MatchType(objects.ComputeNode), resources)

    @mock.patch('nova.objects.ComputeNodeList.get_by_hypervisor')
    @mock.patch('nova.objects.ComputeNode.create')
    @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename')
    @mock.patch('nova.compute.resource_tracker.ResourceTracker.'
                '_update')
    def test_node_removed(self, update_mock, get_mock,
                          create_mock, get_by_hypervisor_mock):
        self._test_compute_node_created(update_mock, get_mock, create_mock,
                                        get_by_hypervisor_mock)
        self.rt.old_resources[_NODENAME] = mock.sentinel.foo
        self.assertIn(_NODENAME, self.rt.compute_nodes)
        self.assertIn(_NODENAME, self.rt.stats)
        self.assertIn(_NODENAME, self.rt.old_resources)
        self.rt.remove_node(_NODENAME)
        self.assertNotIn(_NODENAME, self.rt.compute_nodes)
        self.assertNotIn(_NODENAME, self.rt.stats)
        self.assertNotIn(_NODENAME, self.rt.old_resources)


class TestUpdateComputeNode(BaseTestCase):
    @mock.patch('nova.compute.resource_tracker.ResourceTracker.'
                '_sync_compute_service_disabled_trait', new=mock.Mock())
    @mock.patch('nova.objects.ComputeNode.save')
    def test_existing_compute_node_updated_same_resources(self, save_mock):
        self._setup_rt()

        # This is the same set of resources as the fixture, deliberately. We
        # are checking below to see that compute_node.save is not needlessly
        # called when the resources don't actually change.
        orig_compute = _COMPUTE_NODE_FIXTURES[0].obj_clone()
        self.rt.compute_nodes[_NODENAME] = orig_compute
        self.rt.old_resources[_NODENAME] = orig_compute

        new_compute = orig_compute.obj_clone()

        self.rt._update(mock.sentinel.ctx, new_compute)
        self.assertFalse(save_mock.called)
        # Even though the compute node is not updated, update_provider_tree
        # still gets called.
        self.driver_mock.update_provider_tree.assert_called_once()

    @mock.patch('nova.compute.resource_tracker.ResourceTracker.'
                '_sync_compute_service_disabled_trait', new=mock.Mock())
    @mock.patch('nova.objects.ComputeNode.save')
    def test_existing_compute_node_updated_diff_updated_at(self, save_mock):
        # If only updated_at changed, compute_node.save() should not be
        # called.
        self._setup_rt()
        ts1 = timeutils.utcnow()
        ts2 = ts1 + datetime.timedelta(seconds=10)

        orig_compute = _COMPUTE_NODE_FIXTURES[0].obj_clone()
        orig_compute.updated_at = ts1
        self.rt.compute_nodes[_NODENAME] = orig_compute
        self.rt.old_resources[_NODENAME] = orig_compute

        # Make the new_compute object have a different timestamp
        # from orig_compute.
        new_compute = orig_compute.obj_clone()
        new_compute.updated_at = ts2

        self.rt._update(mock.sentinel.ctx, new_compute)
        self.assertFalse(save_mock.called)

    @mock.patch('nova.compute.resource_tracker.ResourceTracker.'
'_sync_compute_service_disabled_trait', new=mock.Mock()) @mock.patch('nova.objects.ComputeNode.save') def test_existing_compute_node_updated_new_resources(self, save_mock): self._setup_rt() orig_compute = _COMPUTE_NODE_FIXTURES[0].obj_clone() self.rt.compute_nodes[_NODENAME] = orig_compute self.rt.old_resources[_NODENAME] = orig_compute # Deliberately changing local_gb_used, vcpus_used, and memory_mb_used # below to be different from the compute node fixture's base usages. # We want to check that the code paths update the stored compute node # usage records with what is supplied to _update(). new_compute = orig_compute.obj_clone() new_compute.memory_mb_used = 128 new_compute.vcpus_used = 2 new_compute.local_gb_used = 4 self.rt._update(mock.sentinel.ctx, new_compute) save_mock.assert_called_once_with() @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_sync_compute_service_disabled_trait') def test_existing_node_capabilities_as_traits(self, mock_sync_disabled): """The capabilities_as_traits() driver method returns traits information for a node/provider. """ self._setup_rt() rc = self.rt.reportclient rc.set_traits_for_provider = mock.MagicMock() # Emulate a driver that has implemented the update_from_provider_tree() # virt driver method self.driver_mock.update_provider_tree = mock.Mock() self.driver_mock.capabilities_as_traits.return_value = \ {mock.sentinel.trait: True} orig_compute = _COMPUTE_NODE_FIXTURES[0].obj_clone() self.rt.compute_nodes[_NODENAME] = orig_compute self.rt.old_resources[_NODENAME] = orig_compute new_compute = orig_compute.obj_clone() ptree = self._setup_ptree(orig_compute) self.rt._update(mock.sentinel.ctx, new_compute) self.driver_mock.capabilities_as_traits.assert_called_once() # We always decorate with COMPUTE_NODE exp_traits = {mock.sentinel.trait, os_traits.COMPUTE_NODE} # Can't predict the order of the traits list, so use ItemsMatcher ptree.update_traits.assert_called_once_with( new_compute.hypervisor_hostname, utils.ItemsMatcher(exp_traits)) mock_sync_disabled.assert_called_once_with( mock.sentinel.ctx, exp_traits) @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_sync_compute_service_disabled_trait') @mock.patch('nova.objects.ComputeNode.save') def test_existing_node_update_provider_tree_implemented( self, save_mock, mock_sync_disabled): """The update_provider_tree() virt driver method must be implemented by all virt drivers. This method returns inventory, trait, and aggregate information for resource providers in a tree associated with the compute node. 
""" fake_inv = { orc.VCPU: { 'total': 2, 'min_unit': 1, 'max_unit': 2, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 1, }, orc.MEMORY_MB: { 'total': 4096, 'min_unit': 1, 'max_unit': 4096, 'step_size': 1, 'allocation_ratio': 1.5, 'reserved': 512, }, orc.DISK_GB: { 'total': 500, 'min_unit': 1, 'max_unit': 500, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1, }, } def fake_upt(ptree, nodename, allocations=None): self.assertIsNone(allocations) ptree.update_inventory(nodename, fake_inv) self._setup_rt() # Emulate a driver that has implemented the update_from_provider_tree() # virt driver method self.driver_mock.update_provider_tree.side_effect = fake_upt orig_compute = _COMPUTE_NODE_FIXTURES[0].obj_clone() # self.rt.compute_nodes[_NODENAME] = orig_compute self.rt.old_resources[_NODENAME] = orig_compute # Deliberately changing local_gb to trigger updating inventory new_compute = orig_compute.obj_clone() new_compute.local_gb = 210000 ptree = self._setup_ptree(orig_compute) self.rt._update(mock.sentinel.ctx, new_compute) save_mock.assert_called_once_with() gptaer_mock = self.rt.reportclient.get_provider_tree_and_ensure_root gptaer_mock.assert_called_once_with( mock.sentinel.ctx, new_compute.uuid, name=new_compute.hypervisor_hostname) self.driver_mock.update_provider_tree.assert_called_once_with( ptree, new_compute.hypervisor_hostname) self.rt.reportclient.update_from_provider_tree.assert_called_once_with( mock.sentinel.ctx, ptree, allocations=None) ptree.update_traits.assert_called_once_with( new_compute.hypervisor_hostname, [os_traits.COMPUTE_NODE] ) exp_inv = copy.deepcopy(fake_inv) # These ratios and reserved amounts come from fake_upt exp_inv[orc.VCPU]['allocation_ratio'] = 16.0 exp_inv[orc.MEMORY_MB]['allocation_ratio'] = 1.5 exp_inv[orc.DISK_GB]['allocation_ratio'] = 1.0 exp_inv[orc.VCPU]['reserved'] = 1 exp_inv[orc.MEMORY_MB]['reserved'] = 512 # 1024MB in GB exp_inv[orc.DISK_GB]['reserved'] = 1 self.assertEqual(exp_inv, ptree.data(new_compute.uuid).inventory) mock_sync_disabled.assert_called_once() @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_sync_compute_service_disabled_trait') @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_resource_change', return_value=False) def test_update_retry_success(self, mock_resource_change, mock_sync_disabled): self._setup_rt() orig_compute = _COMPUTE_NODE_FIXTURES[0].obj_clone() self.rt.compute_nodes[_NODENAME] = orig_compute self.rt.old_resources[_NODENAME] = orig_compute # Deliberately changing local_gb to trigger updating inventory new_compute = orig_compute.obj_clone() new_compute.local_gb = 210000 # Emulate a driver that has implemented the update_from_provider_tree() # virt driver method, so we hit the update_from_provider_tree path. self.driver_mock.update_provider_tree.side_effect = lambda *a: None ufpt_mock = self.rt.reportclient.update_from_provider_tree ufpt_mock.side_effect = ( exc.ResourceProviderUpdateConflict( uuid='uuid', generation=42, error='error'), None) self.rt._update(mock.sentinel.ctx, new_compute) self.assertEqual(2, ufpt_mock.call_count) self.assertEqual(2, mock_sync_disabled.call_count) # The retry is restricted to _update_to_placement self.assertEqual(1, mock_resource_change.call_count) @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_sync_compute_service_disabled_trait') @mock.patch('nova.compute.resource_tracker.ResourceTracker.' 
'_resource_change', return_value=False) def test_update_retry_raises(self, mock_resource_change, mock_sync_disabled): self._setup_rt() orig_compute = _COMPUTE_NODE_FIXTURES[0].obj_clone() self.rt.compute_nodes[_NODENAME] = orig_compute self.rt.old_resources[_NODENAME] = orig_compute # Deliberately changing local_gb to trigger updating inventory new_compute = orig_compute.obj_clone() new_compute.local_gb = 210000 # Emulate a driver that has implemented the update_from_provider_tree() # virt driver method, so we hit the update_from_provider_tree path. self.driver_mock.update_provider_tree.side_effect = lambda *a: None ufpt_mock = self.rt.reportclient.update_from_provider_tree ufpt_mock.side_effect = ( exc.ResourceProviderUpdateConflict( uuid='uuid', generation=42, error='error')) self.assertRaises(exc.ResourceProviderUpdateConflict, self.rt._update, mock.sentinel.ctx, new_compute) self.assertEqual(4, ufpt_mock.call_count) self.assertEqual(4, mock_sync_disabled.call_count) # The retry is restricted to _update_to_placement self.assertEqual(1, mock_resource_change.call_count) @mock.patch('nova.objects.Service.get_by_compute_host', return_value=objects.Service(disabled=True)) def test_sync_compute_service_disabled_trait_add(self, mock_get_by_host): """Tests the scenario that the compute service is disabled so the COMPUTE_STATUS_DISABLED trait is added to the traits set. """ self._setup_rt() ctxt = context.get_admin_context() traits = set() self.rt._sync_compute_service_disabled_trait(ctxt, traits) self.assertEqual({os_traits.COMPUTE_STATUS_DISABLED}, traits) mock_get_by_host.assert_called_once_with(ctxt, self.rt.host) @mock.patch('nova.objects.Service.get_by_compute_host', return_value=objects.Service(disabled=False)) def test_sync_compute_service_disabled_trait_remove( self, mock_get_by_host): """Tests the scenario that the compute service is enabled so the COMPUTE_STATUS_DISABLED trait is removed from the traits set. """ self._setup_rt() ctxt = context.get_admin_context() # First test with the trait actually in the set. traits = {os_traits.COMPUTE_STATUS_DISABLED} self.rt._sync_compute_service_disabled_trait(ctxt, traits) self.assertEqual(set(), traits) mock_get_by_host.assert_called_once_with(ctxt, self.rt.host) # Now run it again with the empty set to make sure the method handles # the trait not already being in the set (idempotency). self.rt._sync_compute_service_disabled_trait(ctxt, traits) self.assertEqual(0, len(traits)) @mock.patch('nova.objects.Service.get_by_compute_host', # One might think Service.get_by_compute_host would raise # ServiceNotFound but the DB API raises ComputeHostNotFound. side_effect=exc.ComputeHostNotFound(host=_HOSTNAME)) @mock.patch('nova.compute.resource_tracker.LOG.error') def test_sync_compute_service_disabled_trait_service_not_found( self, mock_log_error, mock_get_by_host): """Tests the scenario that the compute service is not found so the traits set is unmodified and an error is logged. 
""" self._setup_rt() ctxt = context.get_admin_context() traits = set() self.rt._sync_compute_service_disabled_trait(ctxt, traits) self.assertEqual(0, len(traits)) mock_get_by_host.assert_called_once_with(ctxt, self.rt.host) mock_log_error.assert_called_once() self.assertIn('Unable to find services table record for nova-compute', mock_log_error.call_args[0][0]) def test_update_compute_node_save_fails_restores_old_resources(self): """Tests the scenario that compute_node.save() fails and the old_resources value for the node is restored to its previous value before calling _resource_change updated it. """ self._setup_rt() orig_compute = _COMPUTE_NODE_FIXTURES[0].obj_clone() # Pretend the ComputeNode was just created in the DB but not yet saved # with the free_disk_gb field. delattr(orig_compute, 'free_disk_gb') nodename = orig_compute.hypervisor_hostname self.rt.old_resources[nodename] = orig_compute # Now have an updated compute node with free_disk_gb set which should # make _resource_change modify old_resources and return True. updated_compute = _COMPUTE_NODE_FIXTURES[0].obj_clone() ctxt = context.get_admin_context() # Mock ComputeNode.save() to trigger some failure (realistically this # could be a DBConnectionError). with mock.patch.object(updated_compute, 'save', side_effect=test.TestingException('db error')): self.assertRaises(test.TestingException, self.rt._update, ctxt, updated_compute, startup=True) # Make sure that the old_resources entry for the node has not changed # from the original. self.assertTrue(self.rt._resource_change(updated_compute)) def test_copy_resources_no_update_allocation_ratios(self): """Tests that a ComputeNode object's allocation ratio fields are not set if the configured allocation ratio values are default None. """ self._setup_rt() compute = _COMPUTE_NODE_FIXTURES[0].obj_clone() compute.obj_reset_changes() # make sure we start clean self.rt._copy_resources( compute, self.driver_mock.get_available_resource.return_value) # Assert that the ComputeNode fields were not changed. changes = compute.obj_get_changes() for res in ('cpu', 'disk', 'ram'): attr_name = '%s_allocation_ratio' % res self.assertNotIn(attr_name, changes) def test_copy_resources_update_allocation_zero_ratios(self): """Tests that a ComputeNode object's allocation ratio fields are not set if the configured allocation ratio values are 0.0. """ # NOTE(yikun): In Stein version, we change the default value of # (cpu|ram|disk)_allocation_ratio from 0.0 to None, but we still # should allow 0.0 to keep compatibility, and this 0.0 condition # will be removed in the next version (T version). # Set explicit ratio config values to 0.0 (the default is None). for res in ('cpu', 'disk', 'ram'): opt_name = '%s_allocation_ratio' % res CONF.set_override(opt_name, 0.0) self._setup_rt() compute = _COMPUTE_NODE_FIXTURES[0].obj_clone() compute.obj_reset_changes() # make sure we start clean self.rt._copy_resources( compute, self.driver_mock.get_available_resource.return_value) # Assert that the ComputeNode fields were not changed. changes = compute.obj_get_changes() for res in ('cpu', 'disk', 'ram'): attr_name = '%s_allocation_ratio' % res self.assertNotIn(attr_name, changes) def test_copy_resources_update_allocation_ratios_from_config(self): """Tests that a ComputeNode object's allocation ratio fields are set if the configured allocation ratio values are not default. """ # Set explicit ratio config values to 1.0 (the default is None). 
for res in ('cpu', 'disk', 'ram'): opt_name = '%s_allocation_ratio' % res CONF.set_override(opt_name, 1.0) self._setup_rt() compute = _COMPUTE_NODE_FIXTURES[0].obj_clone() compute.obj_reset_changes() # make sure we start clean self.rt._copy_resources( compute, self.driver_mock.get_available_resource.return_value) # Assert that the ComputeNode fields were changed. changes = compute.obj_get_changes() for res in ('cpu', 'disk', 'ram'): attr_name = '%s_allocation_ratio' % res self.assertIn(attr_name, changes) self.assertEqual(1.0, changes[attr_name]) class TestInstanceClaim(BaseTestCase): def setUp(self): super(TestInstanceClaim, self).setUp() self.flags(reserved_host_disk_mb=0, reserved_host_memory_mb=0) self._setup_rt() cn = _COMPUTE_NODE_FIXTURES[0].obj_clone() self.rt.compute_nodes[_NODENAME] = cn self.rt.provider_tree = self._setup_ptree(cn) # not using mock.sentinel.ctx because instance_claim calls #elevated self.ctx = mock.MagicMock() self.elevated = mock.MagicMock() self.ctx.elevated.return_value = self.elevated self.instance = _INSTANCE_FIXTURES[0].obj_clone() def assertEqualNUMAHostTopology(self, expected, got): attrs = ('cpuset', 'pcpuset', 'memory', 'id', 'cpu_usage', 'memory_usage') if None in (expected, got): if expected != got: raise AssertionError("Topologies don't match. Expected: " "%(expected)s, but got: %(got)s" % {'expected': expected, 'got': got}) else: return if len(expected) != len(got): raise AssertionError("Topologies don't match due to different " "number of cells. Expected: " "%(expected)s, but got: %(got)s" % {'expected': expected, 'got': got}) for exp_cell, got_cell in zip(expected.cells, got.cells): for attr in attrs: if getattr(exp_cell, attr) != getattr(got_cell, attr): raise AssertionError("Topologies don't match. Expected: " "%(expected)s, but got: %(got)s" % {'expected': expected, 'got': got}) def test_claim_disabled(self): self.rt.compute_nodes = {} self.assertTrue(self.rt.disabled(_NODENAME)) with mock.patch.object(self.instance, 'save'): claim = self.rt.instance_claim(mock.sentinel.ctx, self.instance, _NODENAME, self.allocations, None) self.assertEqual(self.rt.host, self.instance.host) self.assertEqual(self.rt.host, self.instance.launched_on) self.assertEqual(_NODENAME, self.instance.node) self.assertIsInstance(claim, claims.NopClaim) @mock.patch('nova.compute.utils.is_volume_backed_instance') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') def test_update_usage_with_claim(self, migr_mock, check_bfv_mock): # Test that RT.update_usage() only changes the compute node # resources if there has been a claim first. 
self.instance.pci_requests = objects.InstancePCIRequests(requests=[]) check_bfv_mock.return_value = False expected = copy.deepcopy(_COMPUTE_NODE_FIXTURES[0]) self.rt.update_usage(self.ctx, self.instance, _NODENAME) cn = self.rt.compute_nodes[_NODENAME] self.assertTrue(obj_base.obj_equal_prims(expected, cn)) disk_used = self.instance.root_gb + self.instance.ephemeral_gb vals = { 'local_gb_used': disk_used, 'memory_mb_used': self.instance.memory_mb, 'free_disk_gb': expected.local_gb - disk_used, "free_ram_mb": expected.memory_mb - self.instance.memory_mb, 'running_vms': 1, 'vcpus_used': 1, 'pci_device_pools': objects.PciDevicePoolList(), 'stats': { 'io_workload': 0, 'num_instances': 1, 'num_task_None': 1, 'num_os_type_' + self.instance.os_type: 1, 'num_proj_' + self.instance.project_id: 1, 'num_vm_' + self.instance.vm_state: 1, }, } _update_compute_node(expected, **vals) with mock.patch.object(self.rt, '_update') as update_mock: with mock.patch.object(self.instance, 'save'): self.rt.instance_claim(self.ctx, self.instance, _NODENAME, self.allocations, None) cn = self.rt.compute_nodes[_NODENAME] update_mock.assert_called_once_with(self.elevated, cn) self.assertTrue(obj_base.obj_equal_prims(expected, cn)) @mock.patch('nova.compute.utils.is_volume_backed_instance') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') def test_update_usage_removed(self, migr_mock, check_bfv_mock): # Test that RT.update_usage() removes the instance when update is # called in a removed state self.instance.pci_requests = objects.InstancePCIRequests(requests=[]) check_bfv_mock.return_value = False cn = self.rt.compute_nodes[_NODENAME] allocations = { cn.uuid: { "generation": 0, "resources": { "VCPU": 1, "MEMORY_MB": 512, "CUSTOM_RESOURCE_0": 1, "CUSTOM_RESOURCE_1": 2, } } } expected = copy.deepcopy(_COMPUTE_NODE_FIXTURES[0]) disk_used = self.instance.root_gb + self.instance.ephemeral_gb vals = { 'local_gb_used': disk_used, 'memory_mb_used': self.instance.memory_mb, 'free_disk_gb': expected.local_gb - disk_used, "free_ram_mb": expected.memory_mb - self.instance.memory_mb, 'running_vms': 1, 'vcpus_used': 1, 'pci_device_pools': objects.PciDevicePoolList(), 'stats': { 'io_workload': 0, 'num_instances': 1, 'num_task_None': 1, 'num_os_type_' + self.instance.os_type: 1, 'num_proj_' + self.instance.project_id: 1, 'num_vm_' + self.instance.vm_state: 1, }, } _update_compute_node(expected, **vals) with mock.patch.object(self.rt, '_update') as update_mock: with mock.patch.object(self.instance, 'save'): self.rt.instance_claim(self.ctx, self.instance, _NODENAME, allocations, None) cn = self.rt.compute_nodes[_NODENAME] update_mock.assert_called_once_with(self.elevated, cn) self.assertTrue(obj_base.obj_equal_prims(expected, cn)) # Verify that the assigned resources are tracked for rc, amount in [("CUSTOM_RESOURCE_0", 1), ("CUSTOM_RESOURCE_1", 2)]: self.assertEqual(amount, len(self.rt.assigned_resources[cn.uuid][rc])) expected_updated = copy.deepcopy(_COMPUTE_NODE_FIXTURES[0]) vals = { 'pci_device_pools': objects.PciDevicePoolList(), 'stats': { 'io_workload': 0, 'num_instances': 0, 'num_task_None': 0, 'num_os_type_' + self.instance.os_type: 0, 'num_proj_' + self.instance.project_id: 0, 'num_vm_' + self.instance.vm_state: 0, }, } _update_compute_node(expected_updated, **vals) self.instance.vm_state = vm_states.SHELVED_OFFLOADED with mock.patch.object(self.rt, '_update') as update_mock: self.rt.update_usage(self.ctx, self.instance, _NODENAME) cn = self.rt.compute_nodes[_NODENAME] 
self.assertTrue(obj_base.obj_equal_prims(expected_updated, cn)) # Verify that the resources are released for rc in ["CUSTOM_RESOURCE_0", "CUSTOM_RESOURCE_1"]: self.assertEqual(0, len(self.rt.assigned_resources[cn.uuid][rc])) @mock.patch('nova.compute.utils.is_volume_backed_instance') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') def test_claim(self, migr_mock, check_bfv_mock): self.instance.pci_requests = objects.InstancePCIRequests(requests=[]) check_bfv_mock.return_value = False disk_used = self.instance.root_gb + self.instance.ephemeral_gb expected = copy.deepcopy(_COMPUTE_NODE_FIXTURES[0]) vals = { 'local_gb_used': disk_used, 'memory_mb_used': self.instance.memory_mb, 'free_disk_gb': expected.local_gb - disk_used, "free_ram_mb": expected.memory_mb - self.instance.memory_mb, 'running_vms': 1, 'vcpus_used': 1, 'pci_device_pools': objects.PciDevicePoolList(), 'stats': { 'io_workload': 0, 'num_instances': 1, 'num_task_None': 1, 'num_os_type_' + self.instance.os_type: 1, 'num_proj_' + self.instance.project_id: 1, 'num_vm_' + self.instance.vm_state: 1, }, } _update_compute_node(expected, **vals) with mock.patch.object(self.rt, '_update') as update_mock: with mock.patch.object(self.instance, 'save'): self.rt.instance_claim(self.ctx, self.instance, _NODENAME, self.allocations, None) cn = self.rt.compute_nodes[_NODENAME] update_mock.assert_called_once_with(self.elevated, cn) self.assertTrue(obj_base.obj_equal_prims(expected, cn)) self.assertEqual(self.rt.host, self.instance.host) self.assertEqual(self.rt.host, self.instance.launched_on) self.assertEqual(_NODENAME, self.instance.node) @mock.patch('nova.compute.utils.is_volume_backed_instance') @mock.patch('nova.pci.stats.PciDeviceStats.support_requests', return_value=True) @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') def test_claim_with_pci(self, migr_mock, pci_stats_mock, check_bfv_mock): # Test that a claim involving PCI requests correctly claims # PCI devices on the host and sends an updated pci_device_pools # attribute of the ComputeNode object. # TODO(jaypipes): Remove once the PCI tracker is always created # upon the resource tracker being initialized... 
self.rt.pci_tracker = pci_manager.PciDevTracker(mock.sentinel.ctx) pci_dev = pci_device.PciDevice.create( None, fake_pci_device.dev_dict) pci_devs = [pci_dev] self.rt.pci_tracker.pci_devs = objects.PciDeviceList(objects=pci_devs) request = objects.InstancePCIRequest(count=1, spec=[{'vendor_id': 'v', 'product_id': 'p'}]) pci_requests = objects.InstancePCIRequests( requests=[request], instance_uuid=self.instance.uuid) self.instance.pci_requests = pci_requests check_bfv_mock.return_value = False disk_used = self.instance.root_gb + self.instance.ephemeral_gb expected = copy.deepcopy(_COMPUTE_NODE_FIXTURES[0]) vals = { 'local_gb_used': disk_used, 'memory_mb_used': self.instance.memory_mb, 'free_disk_gb': expected.local_gb - disk_used, "free_ram_mb": expected.memory_mb - self.instance.memory_mb, 'running_vms': 1, 'vcpus_used': 1, 'pci_device_pools': objects.PciDevicePoolList(), 'stats': { 'io_workload': 0, 'num_instances': 1, 'num_task_None': 1, 'num_os_type_' + self.instance.os_type: 1, 'num_proj_' + self.instance.project_id: 1, 'num_vm_' + self.instance.vm_state: 1, }, } _update_compute_node(expected, **vals) with mock.patch.object(self.rt, '_update') as update_mock: with mock.patch.object(self.instance, 'save'): self.rt.instance_claim(self.ctx, self.instance, _NODENAME, self.allocations, None) cn = self.rt.compute_nodes[_NODENAME] update_mock.assert_called_once_with(self.elevated, cn) pci_stats_mock.assert_called_once_with([request]) self.assertTrue(obj_base.obj_equal_prims(expected, cn)) @mock.patch('nova.compute.utils.is_volume_backed_instance', new=mock.Mock(return_value=False)) def test_claim_with_resources(self): self.instance.pci_requests = objects.InstancePCIRequests(requests=[]) cn = self.rt.compute_nodes[_NODENAME] allocations = { cn.uuid: { "generation": 0, "resources": { "VCPU": 1, "MEMORY_MB": 512, "CUSTOM_RESOURCE_0": 1, "CUSTOM_RESOURCE_1": 2, } } } expected_resources_0 = {self.resource_0} expected_resources_1 = {self.resource_1, self.resource_2} with mock.patch.object(self.rt, '_update'): with mock.patch.object(self.instance, 'save'): self.rt.instance_claim(self.ctx, self.instance, _NODENAME, allocations, None) self.assertEqual((expected_resources_0 | expected_resources_1), set(self.instance.resources)) @mock.patch('nova.compute.utils.is_volume_backed_instance', new=mock.Mock(return_value=False)) def test_claim_with_resources_from_free(self): self.instance.pci_requests = objects.InstancePCIRequests(requests=[]) cn = self.rt.compute_nodes[_NODENAME] self.rt.assigned_resources = { self.resource_1.provider_uuid: { self.resource_1.resource_class: {self.resource_1}}} allocations = { cn.uuid: { "generation": 0, "resources": { "VCPU": 1, "MEMORY_MB": 512, "CUSTOM_RESOURCE_1": 1, } } } # resource_1 is assigned to other instances, # so only resource_2 is available expected_resources = {self.resource_2} with mock.patch.object(self.rt, '_update'): with mock.patch.object(self.instance, 'save'): self.rt.instance_claim(self.ctx, self.instance, _NODENAME, allocations, None) self.assertEqual(expected_resources, set(self.instance.resources)) @mock.patch('nova.compute.utils.is_volume_backed_instance', new=mock.Mock(return_value=False)) def test_claim_failed_with_resources(self): self.instance.pci_requests = objects.InstancePCIRequests(requests=[]) cn = self.rt.compute_nodes[_NODENAME] # Only one "CUSTOM_RESOURCE_0" resource is available allocations = { cn.uuid: { "generation": 0, "resources": { "VCPU": 1, "MEMORY_MB": 512, "CUSTOM_RESOURCE_0": 2 } } } with mock.patch.object(self.instance, 
'save'): self.assertRaises(exc.ComputeResourcesUnavailable, self.rt.instance_claim, self.ctx, self.instance, _NODENAME, allocations, None) self.assertEqual( 0, len(self.rt.assigned_resources[cn.uuid]['CUSTOM_RESOURCE_0'])) @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_sync_compute_service_disabled_trait', new=mock.Mock()) @mock.patch('nova.compute.utils.is_volume_backed_instance') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') @mock.patch('nova.objects.ComputeNode.save') def test_claim_abort_context_manager(self, save_mock, migr_mock, check_bfv_mock): self.instance.pci_requests = objects.InstancePCIRequests(requests=[]) check_bfv_mock.return_value = False cn = self.rt.compute_nodes[_NODENAME] self.assertEqual(0, cn.local_gb_used) self.assertEqual(0, cn.memory_mb_used) self.assertEqual(0, cn.running_vms) mock_save = mock.MagicMock() mock_clear_numa = mock.MagicMock() @mock.patch.object(self.instance, 'save', mock_save) @mock.patch.object(self.instance, 'clear_numa_topology', mock_clear_numa) @mock.patch.object(objects.Instance, 'obj_clone', return_value=self.instance) def _doit(mock_clone): with self.rt.instance_claim(self.ctx, self.instance, _NODENAME, self.allocations, None): # Raise an exception. Just make sure below that the abort() # method of the claim object was called (and the resulting # resources reset to the pre-claimed amounts) raise test.TestingException() self.assertRaises(test.TestingException, _doit) self.assertEqual(2, mock_save.call_count) mock_clear_numa.assert_called_once_with() self.assertIsNone(self.instance.host) self.assertIsNone(self.instance.node) # Assert that the resources claimed by the Claim() constructor # are returned to the resource tracker due to the claim's abort() # method being called when triggered by the exception raised above. self.assertEqual(0, cn.local_gb_used) self.assertEqual(0, cn.memory_mb_used) self.assertEqual(0, cn.running_vms) @mock.patch('nova.compute.resource_tracker.ResourceTracker.' 
'_sync_compute_service_disabled_trait', new=mock.Mock()) @mock.patch('nova.compute.utils.is_volume_backed_instance') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') @mock.patch('nova.objects.ComputeNode.save') def test_claim_abort(self, save_mock, migr_mock, check_bfv_mock): self.instance.pci_requests = objects.InstancePCIRequests(requests=[]) check_bfv_mock.return_value = False disk_used = self.instance.root_gb + self.instance.ephemeral_gb @mock.patch.object(objects.Instance, 'obj_clone', return_value=self.instance) @mock.patch.object(self.instance, 'save') def _claim(mock_save, mock_clone): return self.rt.instance_claim(self.ctx, self.instance, _NODENAME, self.allocations, None) cn = self.rt.compute_nodes[_NODENAME] claim = _claim() self.assertEqual(disk_used, cn.local_gb_used) self.assertEqual(self.instance.memory_mb, cn.memory_mb_used) self.assertEqual(1, cn.running_vms) mock_save = mock.MagicMock() mock_clear_numa = mock.MagicMock() @mock.patch.object(self.instance, 'save', mock_save) @mock.patch.object(self.instance, 'clear_numa_topology', mock_clear_numa) def _abort(): claim.abort() _abort() mock_save.assert_called_once_with() mock_clear_numa.assert_called_once_with() self.assertIsNone(self.instance.host) self.assertIsNone(self.instance.node) self.assertEqual(0, cn.local_gb_used) self.assertEqual(0, cn.memory_mb_used) self.assertEqual(0, cn.running_vms) @mock.patch('nova.compute.utils.is_volume_backed_instance') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') @mock.patch('nova.objects.ComputeNode.save') def test_claim_numa(self, save_mock, migr_mock, check_bfv_mock): self.instance.pci_requests = objects.InstancePCIRequests(requests=[]) check_bfv_mock.return_value = False cn = self.rt.compute_nodes[_NODENAME] self.instance.numa_topology = _INSTANCE_NUMA_TOPOLOGIES['2mb'] host_topology = _NUMA_HOST_TOPOLOGIES['2mb'] cn.numa_topology = host_topology._to_json() limits = {'numa_topology': _NUMA_LIMIT_TOPOLOGIES['2mb']} expected_numa = copy.deepcopy(host_topology) for cell in expected_numa.cells: cell.memory_usage += _2MB cell.cpu_usage += 1 with mock.patch.object(self.rt, '_update') as update_mock: with mock.patch.object(self.instance, 'save'): self.rt.instance_claim(self.ctx, self.instance, _NODENAME, self.allocations, limits) update_mock.assert_called_once_with(self.ctx.elevated(), cn) new_numa = cn.numa_topology new_numa = objects.NUMATopology.obj_from_db_obj(new_numa) self.assertEqualNUMAHostTopology(expected_numa, new_numa) class TestResize(BaseTestCase): @mock.patch('nova.compute.resource_tracker.ResourceTracker.' 
'_sync_compute_service_disabled_trait', new=mock.Mock()) @mock.patch('nova.compute.utils.is_volume_backed_instance', return_value=False) @mock.patch('nova.objects.InstancePCIRequests.get_by_instance', return_value=objects.InstancePCIRequests(requests=[])) @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', return_value=objects.PciDeviceList()) @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') @mock.patch('nova.objects.InstanceList.get_by_host_and_node') @mock.patch('nova.objects.ComputeNode.save') def test_resize_claim_same_host(self, save_mock, get_mock, migr_mock, get_cn_mock, pci_mock, instance_pci_mock, is_bfv_mock): # Resize an existing instance from its current flavor (instance type # 1) to a new flavor (instance type 2) and verify that the compute # node's resources are appropriately updated to account for the new # flavor's resources. In this scenario, we use an Instance that has not # already had its "current" flavor set to the new flavor. self.flags(reserved_host_disk_mb=0, reserved_host_memory_mb=0) virt_resources = copy.deepcopy(_VIRT_DRIVER_AVAIL_RESOURCES) virt_resources.update(vcpus_used=1, memory_mb_used=128, local_gb_used=1) self._setup_rt(virt_resources=virt_resources) get_mock.return_value = _INSTANCE_FIXTURES migr_mock.return_value = [] get_cn_mock.return_value = _COMPUTE_NODE_FIXTURES[0] instance = _INSTANCE_FIXTURES[0].obj_clone() instance.new_flavor = _INSTANCE_TYPE_OBJ_FIXTURES[2] # This migration context is fine, it points to the first instance # fixture and indicates a source-and-dest resize. mig_context_obj = _MIGRATION_CONTEXT_FIXTURES[instance.uuid] instance.migration_context = mig_context_obj self.rt.update_available_resource(mock.MagicMock(), _NODENAME) migration = objects.Migration( id=3, instance_uuid=instance.uuid, source_compute=_HOSTNAME, dest_compute=_HOSTNAME, source_node=_NODENAME, dest_node=_NODENAME, old_instance_type_id=1, new_instance_type_id=2, migration_type='resize', status='migrating', uuid=uuids.migration, ) new_flavor = _INSTANCE_TYPE_OBJ_FIXTURES[2] # not using mock.sentinel.ctx because resize_claim calls #elevated ctx = mock.MagicMock() expected = self.rt.compute_nodes[_NODENAME].obj_clone() expected.vcpus_used = (expected.vcpus_used + new_flavor.vcpus) expected.memory_mb_used = (expected.memory_mb_used + new_flavor.memory_mb) expected.free_ram_mb = expected.memory_mb - expected.memory_mb_used expected.local_gb_used = (expected.local_gb_used + (new_flavor.root_gb + new_flavor.ephemeral_gb)) expected.free_disk_gb = (expected.free_disk_gb - (new_flavor.root_gb + new_flavor.ephemeral_gb)) with test.nested( mock.patch('nova.compute.resource_tracker.ResourceTracker' '._create_migration', return_value=migration), mock.patch('nova.objects.MigrationContext', return_value=mig_context_obj), mock.patch('nova.objects.Instance.save'), ) as (create_mig_mock, ctxt_mock, inst_save_mock): claim = self.rt.resize_claim(ctx, instance, new_flavor, _NODENAME, None, self.allocations) create_mig_mock.assert_called_once_with( ctx, instance, new_flavor, _NODENAME, None # move_type is None for resize... ) self.assertIsInstance(claim, claims.MoveClaim) cn = self.rt.compute_nodes[_NODENAME] self.assertTrue(obj_base.obj_equal_prims(expected, cn)) self.assertEqual(1, len(self.rt.tracked_migrations)) # Now abort the resize claim and check that the resources have been set # back to their original values. with mock.patch('nova.objects.Instance.' 
'drop_migration_context') as drop_migr_mock: claim.abort() drop_migr_mock.assert_called_once_with() self.assertEqual(1, cn.vcpus_used) self.assertEqual(1, cn.local_gb_used) self.assertEqual(128, cn.memory_mb_used) self.assertEqual(0, len(self.rt.tracked_migrations)) @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_sync_compute_service_disabled_trait', new=mock.Mock()) @mock.patch('nova.compute.utils.is_volume_backed_instance', return_value=False) @mock.patch('nova.objects.InstancePCIRequests.get_by_instance', return_value=objects.InstancePCIRequests(requests=[])) @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', return_value=objects.PciDeviceList()) @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename', return_value=_COMPUTE_NODE_FIXTURES[0]) @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node', return_value=[]) @mock.patch('nova.objects.InstanceList.get_by_host_and_node', return_value=[]) @mock.patch('nova.objects.ComputeNode.save') def _test_instance_build_resize(self, save_mock, get_by_host_and_node_mock, get_in_progress_by_host_and_node_mock, get_by_host_and_nodename_mock, pci_get_by_compute_node_mock, pci_get_by_instance_mock, is_bfv_mock, revert=False): self.flags(reserved_host_disk_mb=0, reserved_host_memory_mb=0) virt_resources = copy.deepcopy(_VIRT_DRIVER_AVAIL_RESOURCES) self._setup_rt(virt_resources=virt_resources) cn = _COMPUTE_NODE_FIXTURES[0].obj_clone() self.rt.provider_tree = self._setup_ptree(cn) # not using mock.sentinel.ctx because resize_claim calls #elevated ctx = mock.MagicMock() # Init compute node self.rt.update_available_resource(mock.MagicMock(), _NODENAME) expected = self.rt.compute_nodes[_NODENAME].obj_clone() instance = _INSTANCE_FIXTURES[0].obj_clone() old_flavor = instance.flavor instance.new_flavor = _INSTANCE_TYPE_OBJ_FIXTURES[2] instance.pci_requests = objects.InstancePCIRequests(requests=[]) # allocations for create allocations = { cn.uuid: { "generation": 0, "resources": { "CUSTOM_RESOURCE_0": 1, } } } # Build instance with mock.patch.object(instance, 'save'): self.rt.instance_claim(ctx, instance, _NODENAME, allocations, None) expected = compute_update_usage(expected, old_flavor, sign=1) expected.running_vms = 1 self.assertTrue(obj_base.obj_equal_prims( expected, self.rt.compute_nodes[_NODENAME], ignore=['stats'] )) # Verify that resources are assigned and tracked self.assertEqual( 1, len(self.rt.assigned_resources[cn.uuid]["CUSTOM_RESOURCE_0"])) # allocation for resize allocations = { cn.uuid: { "generation": 0, "resources": { "CUSTOM_RESOURCE_1": 2, } } } # This migration context is fine, it points to the first instance # fixture and indicates a source-and-dest resize. 
mig_context_obj = _MIGRATION_CONTEXT_FIXTURES[instance.uuid] mig_context_obj.old_resources = objects.ResourceList( objects=[self.resource_0]) mig_context_obj.new_resources = objects.ResourceList( objects=[self.resource_1, self.resource_2]) instance.migration_context = mig_context_obj instance.system_metadata = {} migration = objects.Migration( id=3, instance_uuid=instance.uuid, source_compute=_HOSTNAME, dest_compute=_HOSTNAME, source_node=_NODENAME, dest_node=_NODENAME, old_instance_type_id=1, new_instance_type_id=2, migration_type='resize', status='migrating', uuid=uuids.migration, ) new_flavor = _INSTANCE_TYPE_OBJ_FIXTURES[2] # Resize instance with test.nested( mock.patch('nova.compute.resource_tracker.ResourceTracker' '._create_migration', return_value=migration), mock.patch('nova.objects.MigrationContext', return_value=mig_context_obj), mock.patch('nova.objects.Instance.save'), ) as (create_mig_mock, ctxt_mock, inst_save_mock): self.rt.resize_claim(ctx, instance, new_flavor, _NODENAME, None, allocations) expected = compute_update_usage(expected, new_flavor, sign=1) self.assertTrue(obj_base.obj_equal_prims( expected, self.rt.compute_nodes[_NODENAME], ignore=['stats'] )) # Verify that resources are assigned and tracked for rc, amount in [("CUSTOM_RESOURCE_0", 1), ("CUSTOM_RESOURCE_1", 2)]: self.assertEqual(amount, len(self.rt.assigned_resources[cn.uuid][rc])) # Confirm or revert resize with test.nested( mock.patch('nova.objects.Migration.save'), mock.patch('nova.objects.Instance.drop_migration_context'), mock.patch('nova.objects.Instance.save'), ): if revert: flavor = new_flavor self.rt.drop_move_claim_at_dest(ctx, instance, migration) else: # confirm flavor = old_flavor self.rt.drop_move_claim_at_source(ctx, instance, migration) expected = compute_update_usage(expected, flavor, sign=-1) self.assertTrue(obj_base.obj_equal_prims( expected, self.rt.compute_nodes[_NODENAME], ignore=['stats'] )) if revert: # Verify that the new resources are released self.assertEqual( 0, len(self.rt.assigned_resources[cn.uuid][ "CUSTOM_RESOURCE_1"])) # Old resources are not released self.assertEqual( 1, len(self.rt.assigned_resources[cn.uuid][ "CUSTOM_RESOURCE_0"])) else: # Verify that the old resources are released self.assertEqual( 0, len(self.rt.assigned_resources[cn.uuid][ "CUSTOM_RESOURCE_0"])) # new resources are not released self.assertEqual( 2, len(self.rt.assigned_resources[cn.uuid][ "CUSTOM_RESOURCE_1"])) def test_instance_build_resize_revert(self): self._test_instance_build_resize(revert=True) def test_instance_build_resize_confirm(self): self._test_instance_build_resize() @mock.patch('nova.compute.resource_tracker.ResourceTracker.' 
'_sync_compute_service_disabled_trait', new=mock.Mock()) @mock.patch('nova.compute.utils.is_volume_backed_instance', return_value=False) @mock.patch('nova.pci.stats.PciDeviceStats.support_requests', return_value=True) @mock.patch('nova.objects.PciDevice.save') @mock.patch('nova.pci.manager.PciDevTracker.claim_instance') @mock.patch('nova.pci.request.get_pci_requests_from_flavor') @mock.patch('nova.objects.PciDeviceList.get_by_compute_node') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') @mock.patch('nova.objects.InstanceList.get_by_host_and_node') @mock.patch('nova.objects.ComputeNode.save') def test_resize_claim_dest_host_with_pci(self, save_mock, get_mock, migr_mock, get_cn_mock, pci_mock, pci_req_mock, pci_claim_mock, pci_dev_save_mock, pci_supports_mock, mock_is_volume_backed_instance): # Starting from an empty destination compute node, perform a resize # operation for an instance containing SR-IOV PCI devices on the # original host. self.flags(reserved_host_disk_mb=0, reserved_host_memory_mb=0) self._setup_rt() # TODO(jaypipes): Remove once the PCI tracker is always created # upon the resource tracker being initialized... self.rt.pci_tracker = pci_manager.PciDevTracker(mock.sentinel.ctx) pci_dev = pci_device.PciDevice.create( None, fake_pci_device.dev_dict) pci_devs = [pci_dev] self.rt.pci_tracker.pci_devs = objects.PciDeviceList(objects=pci_devs) pci_claim_mock.return_value = [pci_dev] # start with an empty dest compute node. No migrations, no instances get_mock.return_value = [] migr_mock.return_value = [] get_cn_mock.return_value = _COMPUTE_NODE_FIXTURES[0] self.rt.update_available_resource(mock.MagicMock(), _NODENAME) instance = _INSTANCE_FIXTURES[0].obj_clone() instance.task_state = task_states.RESIZE_MIGRATING instance.new_flavor = _INSTANCE_TYPE_OBJ_FIXTURES[2] # A destination-only migration migration = objects.Migration( id=3, instance_uuid=instance.uuid, source_compute="other-host", dest_compute=_HOSTNAME, source_node="other-node", dest_node=_NODENAME, old_instance_type_id=1, new_instance_type_id=2, migration_type='resize', status='migrating', instance=instance, uuid=uuids.migration, ) mig_context_obj = objects.MigrationContext( instance_uuid=instance.uuid, migration_id=3, new_numa_topology=None, old_numa_topology=None, ) instance.migration_context = mig_context_obj new_flavor = _INSTANCE_TYPE_OBJ_FIXTURES[2] request = objects.InstancePCIRequest(count=1, spec=[{'vendor_id': 'v', 'product_id': 'p'}]) pci_requests = objects.InstancePCIRequests( requests=[request], instance_uuid=instance.uuid, ) instance.pci_requests = pci_requests # NOTE(jaypipes): This looks weird, so let me explain. The Instance PCI # requests on a resize come from two places. The first is the PCI # information from the new flavor. The second is for SR-IOV devices # that are directly attached to the migrating instance. The # pci_req_mock.return value here is for the flavor PCI device requests # (which is nothing). This empty list will be merged with the Instance # PCI requests defined directly above. 
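        # Put differently (this is what the assertions at the end of this
        # test verify): the empty flavor-derived request list has the
        # instance's own SR-IOV request merged into it, and it is that merged
        # InstancePCIRequests object, now holding exactly one request, that
        # gets passed to the PCI tracker's claim_instance() call.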
pci_req_mock.return_value = objects.InstancePCIRequests(requests=[]) # not using mock.sentinel.ctx because resize_claim calls elevated ctx = mock.MagicMock() with test.nested( mock.patch('nova.pci.manager.PciDevTracker.allocate_instance'), mock.patch('nova.compute.resource_tracker.ResourceTracker' '._create_migration', return_value=migration), mock.patch('nova.objects.MigrationContext', return_value=mig_context_obj), mock.patch('nova.objects.Instance.save'), ) as (alloc_mock, create_mig_mock, ctxt_mock, inst_save_mock): self.rt.resize_claim(ctx, instance, new_flavor, _NODENAME, None, self.allocations) pci_claim_mock.assert_called_once_with(ctx, pci_req_mock.return_value, None) # Validate that the pci.request.get_pci_request_from_flavor() return # value was merged with the instance PCI requests from the Instance # itself that represent the SR-IOV devices from the original host. pci_req_mock.assert_called_once_with(new_flavor) self.assertEqual(1, len(pci_req_mock.return_value.requests)) self.assertEqual(request, pci_req_mock.return_value.requests[0]) alloc_mock.assert_called_once_with(instance) def test_drop_move_claim_on_revert(self): self._setup_rt() cn = _COMPUTE_NODE_FIXTURES[0].obj_clone() self.rt.compute_nodes[_NODENAME] = cn # TODO(jaypipes): Remove once the PCI tracker is always created # upon the resource tracker being initialized... self.rt.pci_tracker = pci_manager.PciDevTracker(mock.sentinel.ctx) pci_dev = pci_device.PciDevice.create( None, fake_pci_device.dev_dict) pci_devs = [pci_dev] instance = _INSTANCE_FIXTURES[0].obj_clone() instance.task_state = task_states.RESIZE_MIGRATING instance.new_flavor = _INSTANCE_TYPE_OBJ_FIXTURES[2] instance.migration_context = objects.MigrationContext() instance.migration_context.new_pci_devices = objects.PciDeviceList( objects=pci_devs) # When reverting a resize and dropping the move claim, the destination # compute calls drop_move_claim_at_dest to drop the new_flavor # usage and the instance should be in tracked_migrations from when # the resize_claim was made on the dest during prep_resize. migration = objects.Migration( dest_node=cn.hypervisor_hostname, migration_type='resize', ) self.rt.tracked_migrations = {instance.uuid: migration} # not using mock.sentinel.ctx because _drop_move_claim calls elevated ctx = mock.MagicMock() with test.nested( mock.patch.object(self.rt, '_update'), mock.patch.object(self.rt.pci_tracker, 'free_device'), mock.patch.object(self.rt, '_get_usage_dict'), mock.patch.object(self.rt, '_update_usage'), mock.patch.object(migration, 'save'), mock.patch.object(instance, 'save'), ) as ( update_mock, mock_pci_free_device, mock_get_usage, mock_update_usage, mock_migrate_save, mock_instance_save, ): self.rt.drop_move_claim_at_dest(ctx, instance, migration) mock_pci_free_device.assert_called_once_with( pci_dev, mock.ANY) mock_get_usage.assert_called_once_with( instance.new_flavor, instance, numa_topology=None) mock_update_usage.assert_called_once_with( mock_get_usage.return_value, _NODENAME, sign=-1) mock_migrate_save.assert_called_once() mock_instance_save.assert_called_once() @mock.patch('nova.compute.resource_tracker.ResourceTracker.' 
'_sync_compute_service_disabled_trait', new=mock.Mock()) @mock.patch('nova.compute.utils.is_volume_backed_instance', return_value=False) @mock.patch('nova.objects.InstancePCIRequests.get_by_instance', return_value=objects.InstancePCIRequests(requests=[])) @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', return_value=objects.PciDeviceList()) @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') @mock.patch('nova.objects.InstanceList.get_by_host_and_node') @mock.patch('nova.objects.ComputeNode.save') def test_resize_claim_two_instances(self, save_mock, get_mock, migr_mock, get_cn_mock, pci_mock, instance_pci_mock, mock_is_volume_backed_instance): # Issue two resize claims against a destination host with no prior # instances on it and validate that the accounting for resources is # correct. self.flags(reserved_host_disk_mb=0, reserved_host_memory_mb=0) self._setup_rt() get_mock.return_value = [] migr_mock.return_value = [] get_cn_mock.return_value = _COMPUTE_NODE_FIXTURES[0].obj_clone() self.rt.update_available_resource(mock.MagicMock(), _NODENAME) # Instance #1 is resizing to instance type 2 which has 2 vCPUs, 256MB # RAM and 5GB root disk. instance1 = _INSTANCE_FIXTURES[0].obj_clone() instance1.id = 1 instance1.uuid = uuids.instance1 instance1.task_state = task_states.RESIZE_MIGRATING instance1.new_flavor = _INSTANCE_TYPE_OBJ_FIXTURES[2] migration1 = objects.Migration( id=1, instance_uuid=instance1.uuid, source_compute="other-host", dest_compute=_HOSTNAME, source_node="other-node", dest_node=_NODENAME, old_instance_type_id=1, new_instance_type_id=2, migration_type='resize', status='migrating', instance=instance1, uuid=uuids.migration1, ) mig_context_obj1 = objects.MigrationContext( instance_uuid=instance1.uuid, migration_id=1, new_numa_topology=None, old_numa_topology=None, ) instance1.migration_context = mig_context_obj1 flavor1 = _INSTANCE_TYPE_OBJ_FIXTURES[2] # Instance #2 is resizing to instance type 1 which has 1 vCPU, 128MB # RAM and 1GB root disk. 
instance2 = _INSTANCE_FIXTURES[0].obj_clone() instance2.id = 2 instance2.uuid = uuids.instance2 instance2.task_state = task_states.RESIZE_MIGRATING instance2.old_flavor = _INSTANCE_TYPE_OBJ_FIXTURES[2] instance2.new_flavor = _INSTANCE_TYPE_OBJ_FIXTURES[1] migration2 = objects.Migration( id=2, instance_uuid=instance2.uuid, source_compute="other-host", dest_compute=_HOSTNAME, source_node="other-node", dest_node=_NODENAME, old_instance_type_id=2, new_instance_type_id=1, migration_type='resize', status='migrating', instance=instance1, uuid=uuids.migration2, ) mig_context_obj2 = objects.MigrationContext( instance_uuid=instance2.uuid, migration_id=2, new_numa_topology=None, old_numa_topology=None, ) instance2.migration_context = mig_context_obj2 flavor2 = _INSTANCE_TYPE_OBJ_FIXTURES[1] expected = self.rt.compute_nodes[_NODENAME].obj_clone() expected.vcpus_used = (expected.vcpus_used + flavor1.vcpus + flavor2.vcpus) expected.memory_mb_used = (expected.memory_mb_used + flavor1.memory_mb + flavor2.memory_mb) expected.free_ram_mb = expected.memory_mb - expected.memory_mb_used expected.local_gb_used = (expected.local_gb_used + (flavor1.root_gb + flavor1.ephemeral_gb + flavor2.root_gb + flavor2.ephemeral_gb)) expected.free_disk_gb = (expected.free_disk_gb - (flavor1.root_gb + flavor1.ephemeral_gb + flavor2.root_gb + flavor2.ephemeral_gb)) # not using mock.sentinel.ctx because resize_claim calls #elevated ctx = mock.MagicMock() with test.nested( mock.patch('nova.compute.resource_tracker.ResourceTracker' '._create_migration', side_effect=[migration1, migration2]), mock.patch('nova.objects.MigrationContext', side_effect=[mig_context_obj1, mig_context_obj2]), mock.patch('nova.objects.Instance.save'), ) as (create_mig_mock, ctxt_mock, inst_save_mock): self.rt.resize_claim(ctx, instance1, flavor1, _NODENAME, None, self.allocations) self.rt.resize_claim(ctx, instance2, flavor2, _NODENAME, None, self.allocations) cn = self.rt.compute_nodes[_NODENAME] self.assertTrue(obj_base.obj_equal_prims(expected, cn)) self.assertEqual(2, len(self.rt.tracked_migrations), "Expected 2 tracked migrations but got %s" % self.rt.tracked_migrations) class TestRebuild(BaseTestCase): @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_sync_compute_service_disabled_trait', new=mock.Mock()) @mock.patch('nova.compute.utils.is_volume_backed_instance') @mock.patch('nova.objects.InstancePCIRequests.get_by_instance', return_value=objects.InstancePCIRequests(requests=[])) @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', return_value=objects.PciDeviceList()) @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.objects.MigrationList.get_in_progress_by_host_and_node') @mock.patch('nova.objects.InstanceList.get_by_host_and_node') @mock.patch('nova.objects.ComputeNode.save') def test_rebuild_claim(self, save_mock, get_mock, migr_mock, get_cn_mock, pci_mock, instance_pci_mock, bfv_check_mock): # Rebuild an instance, emulating an evacuate command issued against the # original instance. The rebuild operation uses the resource tracker's # _move_claim() method, but unlike with resize_claim(), rebuild_claim() # passes in a pre-created Migration object from the destination compute # manager. self.flags(reserved_host_disk_mb=0, reserved_host_memory_mb=0) # Starting state for the destination node of the rebuild claim is the # normal compute node fixture containing a single active running VM # having instance type #1. 
virt_resources = copy.deepcopy(_VIRT_DRIVER_AVAIL_RESOURCES) virt_resources.update(vcpus_used=1, memory_mb_used=128, local_gb_used=1) self._setup_rt(virt_resources=virt_resources) get_mock.return_value = _INSTANCE_FIXTURES migr_mock.return_value = [] get_cn_mock.return_value = _COMPUTE_NODE_FIXTURES[0].obj_clone() bfv_check_mock.return_value = False ctx = mock.MagicMock() self.rt.update_available_resource(ctx, _NODENAME) # Now emulate the evacuate command by calling rebuild_claim() on the # resource tracker as the compute manager does, supplying a Migration # object that corresponds to the evacuation. migration = objects.Migration( mock.sentinel.ctx, id=1, instance_uuid=uuids.rebuilding_instance, source_compute='fake-other-compute', source_node='fake-other-node', status='accepted', migration_type='evacuation', uuid=uuids.migration, ) instance = objects.Instance( id=1, host=None, node=None, uuid='abef5b54-dea6-47b8-acb2-22aeb1b57919', memory_mb=_INSTANCE_TYPE_FIXTURES[2]['memory_mb'], vcpus=_INSTANCE_TYPE_FIXTURES[2]['vcpus'], root_gb=_INSTANCE_TYPE_FIXTURES[2]['root_gb'], ephemeral_gb=_INSTANCE_TYPE_FIXTURES[2]['ephemeral_gb'], numa_topology=None, pci_requests=None, pci_devices=None, instance_type_id=2, vm_state=vm_states.ACTIVE, power_state=power_state.RUNNING, task_state=task_states.REBUILDING, os_type='fake-os', project_id='fake-project', flavor = _INSTANCE_TYPE_OBJ_FIXTURES[2], old_flavor = _INSTANCE_TYPE_OBJ_FIXTURES[2], new_flavor = _INSTANCE_TYPE_OBJ_FIXTURES[2], resources = None, ) # not using mock.sentinel.ctx because resize_claim calls #elevated ctx = mock.MagicMock() with test.nested( mock.patch('nova.objects.Migration.save'), mock.patch('nova.objects.Instance.save'), ) as (mig_save_mock, inst_save_mock): self.rt.rebuild_claim(ctx, instance, _NODENAME, self.allocations, migration=migration) self.assertEqual(_HOSTNAME, migration.dest_compute) self.assertEqual(_NODENAME, migration.dest_node) self.assertEqual("pre-migrating", migration.status) self.assertEqual(1, len(self.rt.tracked_migrations)) mig_save_mock.assert_called_once_with() inst_save_mock.assert_called_once_with() class TestLiveMigration(BaseTestCase): def test_live_migration_claim(self): self._setup_rt() self.rt.compute_nodes[_NODENAME] = _COMPUTE_NODE_FIXTURES[0] ctxt = context.get_admin_context() instance = fake_instance.fake_instance_obj(ctxt) instance.pci_requests = None instance.pci_devices = None instance.numa_topology = None migration = objects.Migration(id=42, migration_type='live-migration', status='accepted') image_meta = objects.ImageMeta(properties=objects.ImageMetaProps()) self.rt.pci_tracker = pci_manager.PciDevTracker(mock.sentinel.ctx) with test.nested( mock.patch.object(objects.ImageMeta, 'from_instance', return_value=image_meta), mock.patch.object(objects.Migration, 'save'), mock.patch.object(objects.Instance, 'save'), mock.patch.object(self.rt, '_update'), mock.patch.object(self.rt.pci_tracker, 'claim_instance'), mock.patch.object(self.rt, '_update_usage_from_migration') ) as (mock_from_instance, mock_migration_save, mock_instance_save, mock_update, mock_pci_claim_instance, mock_update_usage): claim = self.rt.live_migration_claim(ctxt, instance, _NODENAME, migration, limits=None, allocs=None) self.assertEqual(42, claim.migration.id) # Check that we didn't set the status to 'pre-migrating', like we # do for cold migrations, but which doesn't exist for live # migrations. 
self.assertEqual('accepted', claim.migration.status) self.assertIn('migration_context', instance) mock_update.assert_called_with( mock.ANY, _COMPUTE_NODE_FIXTURES[0]) mock_pci_claim_instance.assert_not_called() mock_update_usage.assert_called_with(ctxt, instance, migration, _NODENAME) class TestUpdateUsageFromMigration(test.NoDBTestCase): def test_missing_old_flavor_outbound_resize(self): """Tests the case that an instance is not being tracked on the source host because it has been resized to a dest host. The confirm_resize operation in ComputeManager sets instance.old_flavor to None before the migration.status is changed to "confirmed" so the source compute RT considers it an in-progress migration and tries to update tracked usage from the instance.old_flavor (which is None when _update_usage_from_migration runs). This test just makes sure that the RT method gracefully handles the instance.old_flavor being gone. """ migration = _MIGRATION_FIXTURES['source-only'] rt = resource_tracker.ResourceTracker( migration.source_compute, mock.sentinel.virt_driver) ctxt = context.get_admin_context() instance = objects.Instance( uuid=migration.instance_uuid, old_flavor=None, migration_context=objects.MigrationContext()) rt._update_usage_from_migration( ctxt, instance, migration, migration.source_node) self.assertNotIn('Starting to track outgoing migration', self.stdlog.logger.output) self.assertNotIn(migration.instance_uuid, rt.tracked_migrations) class TestUpdateUsageFromMigrations(BaseTestCase): @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_update_usage_from_migration') def test_no_migrations(self, mock_update_usage): migrations = [] self._setup_rt() self.rt._update_usage_from_migrations(mock.sentinel.ctx, migrations, _NODENAME) self.assertFalse(mock_update_usage.called) @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_update_usage_from_migration') @mock.patch('nova.objects.instance.Instance.get_by_uuid') def test_instance_not_found(self, mock_get_instance, mock_update_usage): mock_get_instance.side_effect = exc.InstanceNotFound( instance_id='some_id', ) migration = objects.Migration( context=mock.sentinel.ctx, instance_uuid='some_uuid', ) self._setup_rt() self.rt._update_usage_from_migrations(mock.sentinel.ctx, [migration], _NODENAME) mock_get_instance.assert_called_once_with(mock.sentinel.ctx, 'some_uuid', expected_attrs=[ 'migration_context', 'flavor']) self.assertFalse(mock_update_usage.called) @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_update_usage_from_migration') def test_duplicate_migrations_filtered(self, upd_mock): # The wrapper function _update_usage_from_migrations() looks at the # list of migration objects returned from # MigrationList.get_in_progress_by_host_and_node() and ensures that # only the most recent migration record for an instance is used in # determining the usage records. Here we pass multiple migration # objects for a single instance and ensure that we only call the # _update_usage_from_migration() (note: not migration*s*...) once with # the migration object with greatest updated_at value. We also pass # some None values for various updated_at attributes to exercise some # of the code paths in the filtering logic. 
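        # Concretely, the expectation exercised below is that, of the two
        # migration records created for the same instance, only the one
        # whose id matches instance.migration_context.migration_id and which
        # is the most recently updated (an unset updated_at losing out to
        # any set timestamp) is handed to _update_usage_from_migration().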
self._setup_rt() instance = objects.Instance(vm_state=vm_states.RESIZED, task_state=None) ts1 = timeutils.utcnow() ts0 = ts1 - datetime.timedelta(seconds=10) ts2 = ts1 + datetime.timedelta(seconds=10) migrations = [ objects.Migration(source_compute=_HOSTNAME, source_node=_NODENAME, dest_compute=_HOSTNAME, dest_node=_NODENAME, instance_uuid=uuids.instance, created_at=ts0, updated_at=ts1, id=1, instance=instance), objects.Migration(source_compute=_HOSTNAME, source_node=_NODENAME, dest_compute=_HOSTNAME, dest_node=_NODENAME, instance_uuid=uuids.instance, created_at=ts0, updated_at=ts2, id=2, instance=instance) ] mig1, mig2 = migrations mig_list = objects.MigrationList(objects=migrations) instance.migration_context = objects.MigrationContext( migration_id=mig2.id) self.rt._update_usage_from_migrations(mock.sentinel.ctx, mig_list, _NODENAME) upd_mock.assert_called_once_with(mock.sentinel.ctx, instance, mig2, _NODENAME) upd_mock.reset_mock() mig2.updated_at = None instance.migration_context.migration_id = mig1.id self.rt._update_usage_from_migrations(mock.sentinel.ctx, mig_list, _NODENAME) upd_mock.assert_called_once_with(mock.sentinel.ctx, instance, mig1, _NODENAME) @mock.patch('nova.objects.migration.Migration.save') @mock.patch.object(resource_tracker.ResourceTracker, '_update_usage_from_migration') def test_ignore_stale_migration(self, upd_mock, save_mock): # In _update_usage_from_migrations() we want to only look at # migrations where the migration id matches the migration ID that is # stored in the instance migration context. The problem scenario is # that the instance is migrating on host B, but we run the resource # audit on host A and there is a stale migration in the DB for the # same instance involving host A. self._setup_rt() # Create an instance which is migrating with a migration id of 2 migration_context = objects.MigrationContext(migration_id=2) instance = objects.Instance(vm_state=vm_states.RESIZED, task_state=None, migration_context=migration_context) # Create a stale migration object with id of 1 mig1 = objects.Migration(source_compute=_HOSTNAME, source_node=_NODENAME, dest_compute=_HOSTNAME, dest_node=_NODENAME, instance_uuid=uuids.instance, updated_at=timeutils.utcnow(), id=1, instance=instance) mig_list = objects.MigrationList(objects=[mig1]) self.rt._update_usage_from_migrations(mock.sentinel.ctx, mig_list, _NODENAME) self.assertFalse(upd_mock.called) self.assertEqual(mig1.status, "error") @mock.patch('nova.objects.migration.Migration.save') @mock.patch.object(resource_tracker.ResourceTracker, '_update_usage_from_migration') def test_evacuate_and_resizing_states(self, mock_update_usage, mock_save): self._setup_rt() migration_context = objects.MigrationContext(migration_id=1) instance = objects.Instance( vm_state=vm_states.STOPPED, task_state=None, migration_context=migration_context) migration = objects.Migration( source_compute='other-host', source_node='other-node', dest_compute=_HOSTNAME, dest_node=_NODENAME, instance_uuid=uuids.instance, id=1, instance=instance) for state in task_states.rebuild_states + task_states.resizing_states: instance.task_state = state self.rt._update_usage_from_migrations( mock.sentinel.ctx, [migration], _NODENAME) mock_update_usage.assert_called_once_with( mock.sentinel.ctx, instance, migration, _NODENAME) mock_update_usage.reset_mock() @mock.patch('nova.objects.migration.Migration.save') @mock.patch.object(resource_tracker.ResourceTracker, '_update_usage_from_migration') def test_live_migrating_state(self, mock_update_usage, mock_save): 
self._setup_rt() migration_context = objects.MigrationContext(migration_id=1) instance = objects.Instance( vm_state=vm_states.ACTIVE, task_state=task_states.MIGRATING, migration_context=migration_context) migration = objects.Migration( source_compute='other-host', source_node='other-node', dest_compute=_HOSTNAME, dest_node=_NODENAME, instance_uuid=uuids.instance, id=1, instance=instance, migration_type='live-migration') self.rt._update_usage_from_migrations( mock.sentinel.ctx, [migration], _NODENAME) mock_update_usage.assert_called_once_with( mock.sentinel.ctx, instance, migration, _NODENAME) class TestUpdateUsageFromInstance(BaseTestCase): def setUp(self): super(TestUpdateUsageFromInstance, self).setUp() self._setup_rt() cn = _COMPUTE_NODE_FIXTURES[0].obj_clone() self.rt.compute_nodes[_NODENAME] = cn self.instance = _INSTANCE_FIXTURES[0].obj_clone() @mock.patch('nova.compute.utils.is_volume_backed_instance') def test_get_usage_dict_return_0_root_gb_for_bfv_instance( self, mock_check_bfv): mock_check_bfv.return_value = True # Make sure the cache is empty. self.assertNotIn(self.instance.uuid, self.rt.is_bfv) result = self.rt._get_usage_dict(self.instance, self.instance) self.assertEqual(0, result['root_gb']) mock_check_bfv.assert_called_once_with( self.instance._context, self.instance) # Make sure we updated the cache. self.assertIn(self.instance.uuid, self.rt.is_bfv) self.assertTrue(self.rt.is_bfv[self.instance.uuid]) # Now run _get_usage_dict again to make sure we don't call # is_volume_backed_instance. mock_check_bfv.reset_mock() result = self.rt._get_usage_dict(self.instance, self.instance) self.assertEqual(0, result['root_gb']) mock_check_bfv.assert_not_called() @mock.patch('nova.compute.utils.is_volume_backed_instance') def test_get_usage_dict_include_swap( self, mock_check_bfv): mock_check_bfv.return_value = False instance_with_swap = self.instance.obj_clone() instance_with_swap.flavor.swap = 10 result = self.rt._get_usage_dict( instance_with_swap, instance_with_swap) self.assertEqual(10, result['swap']) @mock.patch('nova.compute.utils.is_volume_backed_instance') @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_update_usage') def test_building(self, mock_update_usage, mock_check_bfv): mock_check_bfv.return_value = False self.instance.vm_state = vm_states.BUILDING self.rt._update_usage_from_instance(mock.sentinel.ctx, self.instance, _NODENAME) mock_update_usage.assert_called_once_with( self.rt._get_usage_dict(self.instance, self.instance), _NODENAME, sign=1) @mock.patch('nova.compute.utils.is_volume_backed_instance') @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_update_usage') def test_shelve_offloading(self, mock_update_usage, mock_check_bfv): mock_check_bfv.return_value = False self.instance.vm_state = vm_states.SHELVED_OFFLOADED # Stub out the is_bfv cache to make sure we remove the instance # from it after updating usage. self.rt.is_bfv[self.instance.uuid] = False self.rt.tracked_instances = set([self.instance.uuid]) self.rt._update_usage_from_instance(mock.sentinel.ctx, self.instance, _NODENAME) # The instance should have been removed from the is_bfv cache. self.assertNotIn(self.instance.uuid, self.rt.is_bfv) mock_update_usage.assert_called_once_with( self.rt._get_usage_dict(self.instance, self.instance), _NODENAME, sign=-1) @mock.patch('nova.compute.utils.is_volume_backed_instance') @mock.patch('nova.compute.resource_tracker.ResourceTracker.' 
'_update_usage') def test_unshelving(self, mock_update_usage, mock_check_bfv): mock_check_bfv.return_value = False self.instance.vm_state = vm_states.SHELVED_OFFLOADED self.rt._update_usage_from_instance(mock.sentinel.ctx, self.instance, _NODENAME) mock_update_usage.assert_called_once_with( self.rt._get_usage_dict(self.instance, self.instance), _NODENAME, sign=1) @mock.patch('nova.compute.utils.is_volume_backed_instance') @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_update_usage') def test_deleted(self, mock_update_usage, mock_check_bfv): mock_check_bfv.return_value = False self.instance.vm_state = vm_states.DELETED self.rt.tracked_instances = set([self.instance.uuid]) self.rt._update_usage_from_instance(mock.sentinel.ctx, self.instance, _NODENAME, True) mock_update_usage.assert_called_once_with( self.rt._get_usage_dict(self.instance, self.instance), _NODENAME, sign=-1) @mock.patch('nova.objects.Instance.get_by_uuid') def test_remove_deleted_instances_allocations_deleted_instance(self, mock_inst_get): rc = self.rt.reportclient allocs = report.ProviderAllocInfo( allocations={uuids.deleted: "fake_deleted_instance"}) rc.get_allocations_for_resource_provider = mock.MagicMock( return_value=allocs) rc.delete_allocation_for_instance = mock.MagicMock() mock_inst_get.return_value = objects.Instance( uuid=uuids.deleted, deleted=True, hidden=False) cn = self.rt.compute_nodes[_NODENAME] ctx = mock.MagicMock() # Call the method. self.rt._remove_deleted_instances_allocations(ctx, cn, [], {}) # Only one call should be made to delete allocations, and that should # be for the first instance created above rc.delete_allocation_for_instance.assert_called_once_with( ctx, uuids.deleted) mock_inst_get.assert_called_once_with( ctx.elevated.return_value, uuids.deleted, expected_attrs=[]) ctx.elevated.assert_called_once_with(read_deleted='yes') @mock.patch('nova.objects.Instance.get_by_uuid') def test_remove_deleted_instances_allocations_deleted_hidden_instance(self, mock_inst_get): """Tests the scenario where there are allocations against the local compute node held by a deleted instance but it is hidden=True so the ResourceTracker does not delete the allocations because it assumes the cross-cell resize flow will handle the allocations. """ rc = self.rt.reportclient allocs = report.ProviderAllocInfo( allocations={uuids.deleted: "fake_deleted_instance"}) rc.get_allocations_for_resource_provider = mock.MagicMock( return_value=allocs) rc.delete_allocation_for_instance = mock.MagicMock() cn = self.rt.compute_nodes[_NODENAME] mock_inst_get.return_value = objects.Instance( uuid=uuids.deleted, deleted=True, hidden=True, host=cn.host, node=cn.hypervisor_hostname, task_state=task_states.RESIZE_MIGRATING) ctx = mock.MagicMock() # Call the method. 
self.rt._remove_deleted_instances_allocations(ctx, cn, [], {}) # Only one call should be made to delete allocations, and that should # be for the first instance created above rc.delete_allocation_for_instance.assert_not_called() mock_inst_get.assert_called_once_with( ctx.elevated.return_value, uuids.deleted, expected_attrs=[]) ctx.elevated.assert_called_once_with(read_deleted='yes') @mock.patch('nova.objects.Instance.get_by_uuid') def test_remove_deleted_instances_allocations_building_instance(self, mock_inst_get): rc = self.rt.reportclient allocs = report.ProviderAllocInfo( allocations={uuids.deleted: "fake_deleted_instance"}) rc.get_allocations_for_resource_provider = mock.MagicMock( return_value=allocs) rc.delete_allocation_for_instance = mock.MagicMock() mock_inst_get.side_effect = exc.InstanceNotFound( instance_id=uuids.deleted) cn = self.rt.compute_nodes[_NODENAME] ctx = mock.MagicMock() # Call the method. self.rt._remove_deleted_instances_allocations(ctx, cn, [], {}) # Instance wasn't found in the database at all, so the allocation # should not have been deleted self.assertFalse(rc.delete_allocation_for_instance.called) @mock.patch('nova.objects.Instance.get_by_uuid') def test_remove_deleted_instances_allocations_ignores_migrations(self, mock_inst_get): rc = self.rt.reportclient allocs = report.ProviderAllocInfo( allocations={uuids.deleted: "fake_deleted_instance", uuids.migration: "fake_migration"}) mig = objects.Migration(uuid=uuids.migration) rc.get_allocations_for_resource_provider = mock.MagicMock( return_value=allocs) rc.delete_allocation_for_instance = mock.MagicMock() mock_inst_get.return_value = objects.Instance( uuid=uuids.deleted, deleted=True, hidden=False) cn = self.rt.compute_nodes[_NODENAME] ctx = mock.MagicMock() # Call the method. self.rt._remove_deleted_instances_allocations( ctx, cn, [mig], {uuids.migration: objects.Instance(uuid=uuids.imigration)}) # Only one call should be made to delete allocations, and that should # be for the first instance created above rc.delete_allocation_for_instance.assert_called_once_with( ctx, uuids.deleted) @mock.patch('nova.objects.Instance.get_by_uuid') def test_remove_deleted_instances_allocations_scheduled_instance(self, mock_inst_get): rc = self.rt.reportclient allocs = report.ProviderAllocInfo( allocations={uuids.scheduled: "fake_scheduled_instance"}) rc.get_allocations_for_resource_provider = mock.MagicMock( return_value=allocs) rc.delete_allocation_for_instance = mock.MagicMock() instance_by_uuid = {uuids.scheduled: objects.Instance(uuid=uuids.scheduled, deleted=False, host=None)} cn = self.rt.compute_nodes[_NODENAME] ctx = mock.MagicMock() # Call the method. self.rt._remove_deleted_instances_allocations(ctx, cn, [], instance_by_uuid) # Scheduled instances should not have their allocations removed rc.delete_allocation_for_instance.assert_not_called() def test_remove_deleted_instances_allocations_move_ops(self): """Test that we do NOT delete allocations for instances that are currently undergoing move operations. 
""" # Create 1 instance instance = _INSTANCE_FIXTURES[0].obj_clone() instance.uuid = uuids.moving_instance instance.host = uuids.destination # Instances in resizing/move will be ACTIVE or STOPPED instance.vm_state = vm_states.ACTIVE # Mock out the allocation call rpt_clt = self.report_client_mock allocs = report.ProviderAllocInfo( allocations={uuids.inst0: mock.sentinel.moving_instance}) rpt_clt.get_allocations_for_resource_provider.return_value = allocs cn = self.rt.compute_nodes[_NODENAME] ctx = mock.MagicMock() self.rt._remove_deleted_instances_allocations( ctx, cn, [], {uuids.inst0: instance}) rpt_clt.delete_allocation_for_instance.assert_not_called() def test_remove_deleted_instances_allocations_known_instance(self): """Tests the case that actively tracked instances for the given node do not have their allocations removed. """ rc = self.rt.reportclient self.rt.tracked_instances = set([uuids.known]) allocs = report.ProviderAllocInfo( allocations={ uuids.known: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 2048, 'DISK_GB': 20 } } } ) rc.get_allocations_for_resource_provider = mock.MagicMock( return_value=allocs) rc.delete_allocation_for_instance = mock.MagicMock() cn = self.rt.compute_nodes[_NODENAME] ctx = mock.MagicMock() instance_by_uuid = {uuids.known: objects.Instance(uuid=uuids.known)} # Call the method. self.rt._remove_deleted_instances_allocations(ctx, cn, [], instance_by_uuid) # We don't delete the allocation because the node is tracking the # instance and has allocations for it. rc.delete_allocation_for_instance.assert_not_called() @mock.patch('nova.compute.resource_tracker.LOG.warning') @mock.patch('nova.objects.Instance.get_by_uuid') def test_remove_deleted_instances_allocations_unknown_instance( self, mock_inst_get, mock_log_warning): """Tests the case that an instance is found with allocations for this host/node but is not in the dict of tracked instances. The allocations are not removed for the instance since we don't know how this happened or what to do. """ instance = _INSTANCE_FIXTURES[0] mock_inst_get.return_value = instance rc = self.rt.reportclient # No tracked instances on this node. # But there is an allocation for an instance on this node. allocs = report.ProviderAllocInfo( allocations={ instance.uuid: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 2048, 'DISK_GB': 20 } } } ) rc.get_allocations_for_resource_provider = mock.MagicMock( return_value=allocs) rc.delete_allocation_for_instance = mock.MagicMock() cn = self.rt.compute_nodes[_NODENAME] ctx = mock.MagicMock() # Call the method. self.rt._remove_deleted_instances_allocations( ctx, cn, [], {}) # We don't delete the allocation because we're not sure what to do. # NOTE(mriedem): This is not actually the behavior we want. This is # testing the current behavior but in the future when we get smart # and figure things out, this should actually be an error. rc.delete_allocation_for_instance.assert_not_called() # Assert the expected warning was logged. 
mock_log_warning.assert_called_once() self.assertIn("Instance %s is not being actively managed by " "this compute host but has allocations " "referencing this compute host", mock_log_warning.call_args[0][0]) @mock.patch('nova.compute.resource_tracker.LOG.debug') @mock.patch('nova.objects.Instance.get_by_uuid') def test_remove_deleted_instances_allocations_state_transition_instance( self, mock_inst_get, mock_log_debug): """Tests the case that an instance is found with allocations for this host/node but is not in the dict of tracked instances but the instance.task_state is not None so we do not log a warning nor remove allocations since we want to let the operation play out. """ instance = copy.deepcopy(_INSTANCE_FIXTURES[0]) instance.task_state = task_states.SPAWNING mock_inst_get.return_value = instance rc = self.rt.reportclient # No tracked instances on this node. # But there is an allocation for an instance on this node. allocs = report.ProviderAllocInfo( allocations={ instance.uuid: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 2048, 'DISK_GB': 20 } } } ) rc.get_allocations_for_resource_provider = mock.MagicMock( return_value=allocs) rc.delete_allocation_for_instance = mock.MagicMock() cn = self.rt.compute_nodes[_NODENAME] ctx = mock.MagicMock() # Call the method. self.rt._remove_deleted_instances_allocations( ctx, cn, [], {}) # We don't delete the allocation because the instance is on this host # but is transitioning task states. rc.delete_allocation_for_instance.assert_not_called() # Assert the expected debug message was logged. mock_log_debug.assert_called_once() self.assertIn('Instance with task_state "%s" is not being ' 'actively managed by this compute host but has ' 'allocations referencing this compute node', mock_log_debug.call_args[0][0]) def test_remove_deleted_instances_allocations_retrieval_fail(self): """When the report client errs or otherwise retrieves no allocations, _remove_deleted_instances_allocations gracefully no-ops. """ cn = self.rt.compute_nodes[_NODENAME] rc = self.rt.reportclient # We'll test three different ways get_allocations_for_resource_provider # can cause us to no-op. side_effects = ( # Actual placement error exc.ResourceProviderAllocationRetrievalFailed( rp_uuid='rp_uuid', error='error'), # API communication failure ks_exc.ClientException, # Legitimately no allocations report.ProviderAllocInfo(allocations={}), ) rc.get_allocations_for_resource_provider = mock.Mock( side_effect=side_effects) for _ in side_effects: # If we didn't no op, this would blow up at 'ctx'.elevated() self.rt._remove_deleted_instances_allocations( 'ctx', cn, [], {}) rc.get_allocations_for_resource_provider.assert_called_once_with( 'ctx', cn.uuid) rc.get_allocations_for_resource_provider.reset_mock() def test_delete_allocation_for_shelve_offloaded_instance(self): instance = _INSTANCE_FIXTURES[0].obj_clone() instance.uuid = uuids.inst0 self.rt.delete_allocation_for_shelve_offloaded_instance( mock.sentinel.ctx, instance) rc = self.rt.reportclient mock_remove_allocation = rc.delete_allocation_for_instance mock_remove_allocation.assert_called_once_with( mock.sentinel.ctx, instance.uuid) def test_update_usage_from_instances_goes_negative(self): # NOTE(danms): The resource tracker _should_ report negative resources # for things like free_ram_mb if overcommit is being used. This test # ensures that we don't collapse negative values to zero. 
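        # The arithmetic being checked: reserving 2048 MB of RAM on a node
        # with only 1024 MB gives free_ram_mb = 1024 - 2048 = -1024, and
        # reserving 11 GB (11 * 1024 MB) of disk on a node with 10 GB of
        # local storage gives free_disk_gb = 10 - 11 = -1; both negative
        # values must be reported as-is rather than clamped to zero.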
self.flags(reserved_host_memory_mb=2048) self.flags(reserved_host_disk_mb=(11 * 1024)) cn = objects.ComputeNode(memory_mb=1024, local_gb=10) self.rt.compute_nodes['foo'] = cn @mock.patch.object(self.rt, '_update_usage_from_instance') def test(uufi): self.rt._update_usage_from_instances('ctxt', [], 'foo') test() self.assertEqual(-1024, cn.free_ram_mb) self.assertEqual(-1, cn.free_disk_gb) def test_delete_allocation_for_evacuated_instance(self): instance = _INSTANCE_FIXTURES[0].obj_clone() instance.uuid = uuids.inst0 ctxt = context.get_admin_context() self.rt.delete_allocation_for_evacuated_instance( ctxt, instance, _NODENAME) rc = self.rt.reportclient mock_remove_allocs = rc.remove_provider_tree_from_instance_allocation mock_remove_allocs.assert_called_once_with( ctxt, instance.uuid, self.rt.compute_nodes[_NODENAME].uuid) class TestInstanceInResizeState(test.NoDBTestCase): def test_active_suspending(self): instance = objects.Instance(vm_state=vm_states.ACTIVE, task_state=task_states.SUSPENDING) self.assertFalse(resource_tracker._instance_in_resize_state(instance)) def test_resized_suspending(self): instance = objects.Instance(vm_state=vm_states.RESIZED, task_state=task_states.SUSPENDING) self.assertTrue(resource_tracker._instance_in_resize_state(instance)) def test_resized_resize_migrating(self): instance = objects.Instance(vm_state=vm_states.RESIZED, task_state=task_states.RESIZE_MIGRATING) self.assertTrue(resource_tracker._instance_in_resize_state(instance)) def test_resized_resize_finish(self): instance = objects.Instance(vm_state=vm_states.RESIZED, task_state=task_states.RESIZE_FINISH) self.assertTrue(resource_tracker._instance_in_resize_state(instance)) class TestInstanceIsLiveMigrating(test.NoDBTestCase): def test_migrating_active(self): instance = objects.Instance(vm_state=vm_states.ACTIVE, task_state=task_states.MIGRATING) self.assertTrue( resource_tracker._instance_is_live_migrating(instance)) def test_migrating_paused(self): instance = objects.Instance(vm_state=vm_states.PAUSED, task_state=task_states.MIGRATING) self.assertTrue( resource_tracker._instance_is_live_migrating(instance)) def test_migrating_other(self): instance = objects.Instance(vm_state=vm_states.STOPPED, task_state=task_states.MIGRATING) self.assertFalse( resource_tracker._instance_is_live_migrating(instance)) def test_non_migrating_active(self): instance = objects.Instance(vm_state=vm_states.ACTIVE, task_state=None) self.assertFalse( resource_tracker._instance_is_live_migrating(instance)) def test_non_migrating_paused(self): instance = objects.Instance(vm_state=vm_states.PAUSED, task_state=None) self.assertFalse( resource_tracker._instance_is_live_migrating(instance)) class TestSetInstanceHostAndNode(BaseTestCase): def setUp(self): super(TestSetInstanceHostAndNode, self).setUp() self._setup_rt() @mock.patch('nova.objects.Instance.save') def test_set_instance_host_and_node(self, save_mock): inst = objects.Instance() self.rt._set_instance_host_and_node(inst, _NODENAME) save_mock.assert_called_once_with() self.assertEqual(self.rt.host, inst.host) self.assertEqual(_NODENAME, inst.node) self.assertEqual(self.rt.host, inst.launched_on) @mock.patch('nova.objects.Instance.save') def test_unset_instance_host_and_node(self, save_mock): inst = objects.Instance() self.rt._set_instance_host_and_node(inst, _NODENAME) self.rt._unset_instance_host_and_node(inst) self.assertEqual(2, save_mock.call_count) self.assertIsNone(inst.host) self.assertIsNone(inst.node) self.assertEqual(self.rt.host, inst.launched_on) def 
_update_compute_node(node, **kwargs): for key, value in kwargs.items(): setattr(node, key, value) class ComputeMonitorTestCase(BaseTestCase): def setUp(self): super(ComputeMonitorTestCase, self).setUp() self._setup_rt() self.info = {} self.context = context.RequestContext(mock.sentinel.user_id, mock.sentinel.project_id) def test_get_host_metrics_none(self): self.rt.monitors = [] metrics = self.rt._get_host_metrics(self.context, _NODENAME) self.assertEqual(len(metrics), 0) @mock.patch.object(resource_tracker.LOG, 'warning') def test_get_host_metrics_exception(self, mock_LOG_warning): monitor = mock.MagicMock() monitor.populate_metrics.side_effect = Exception self.rt.monitors = [monitor] metrics = self.rt._get_host_metrics(self.context, _NODENAME) mock_LOG_warning.assert_called_once_with( u'Cannot get the metrics from %(mon)s; error: %(exc)s', mock.ANY) self.assertEqual(0, len(metrics)) @mock.patch('nova.compute.utils.notify_about_metrics_update') def test_get_host_metrics(self, mock_notify): fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) class FakeCPUMonitor(monitor_base.MonitorBase): NOW_TS = timeutils.utcnow() def __init__(self, *args): super(FakeCPUMonitor, self).__init__(*args) self.source = 'FakeCPUMonitor' def get_metric_names(self): return set(["cpu.frequency"]) def populate_metrics(self, monitor_list): metric_object = objects.MonitorMetric() metric_object.name = 'cpu.frequency' metric_object.value = 100 metric_object.timestamp = self.NOW_TS metric_object.source = self.source monitor_list.objects.append(metric_object) self.rt.monitors = [FakeCPUMonitor(None)] metrics = self.rt._get_host_metrics(self.context, _NODENAME) mock_notify.assert_called_once_with( self.context, _HOSTNAME, '1.1.1.1', _NODENAME, test.MatchType(objects.MonitorMetricList)) expected_metrics = [ { 'timestamp': FakeCPUMonitor.NOW_TS.isoformat(), 'name': 'cpu.frequency', 'value': 100, 'source': 'FakeCPUMonitor' }, ] payload = { 'metrics': expected_metrics, 'host': _HOSTNAME, 'host_ip': '1.1.1.1', 'nodename': _NODENAME, } self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual('compute.metrics.update', msg.event_type) for p_key in payload: if p_key == 'metrics': self.assertIn(p_key, msg.payload) self.assertEqual(1, len(msg.payload['metrics'])) # make sure the expected metrics match the actual metrics self.assertDictEqual(expected_metrics[0], msg.payload['metrics'][0]) else: self.assertEqual(payload[p_key], msg.payload[p_key]) self.assertEqual(metrics, expected_metrics) class OverCommitTestCase(BaseTestCase): def test_cpu_allocation_ratio_none_negative(self): self.assertRaises(ValueError, CONF.set_default, 'cpu_allocation_ratio', -1.0) def test_ram_allocation_ratio_none_negative(self): self.assertRaises(ValueError, CONF.set_default, 'ram_allocation_ratio', -1.0) def test_disk_allocation_ratio_none_negative(self): self.assertRaises(ValueError, CONF.set_default, 'disk_allocation_ratio', -1.0) class TestPciTrackerDelegationMethods(BaseTestCase): def setUp(self): super(TestPciTrackerDelegationMethods, self).setUp() self._setup_rt() self.rt.pci_tracker = mock.MagicMock() self.context = context.RequestContext(mock.sentinel.user_id, mock.sentinel.project_id) self.instance = _INSTANCE_FIXTURES[0].obj_clone() def test_claim_pci_devices(self): request = objects.InstancePCIRequest( count=1, spec=[{'vendor_id': 'v', 'product_id': 'p'}]) pci_requests = objects.InstancePCIRequests( requests=[request], instance_uuid=self.instance.uuid) 
self.rt.claim_pci_devices(self.context, pci_requests) self.rt.pci_tracker.claim_instance.assert_called_once_with( self.context, pci_requests, None) self.assertTrue(self.rt.pci_tracker.save.called) def test_allocate_pci_devices_for_instance(self): self.rt.allocate_pci_devices_for_instance(self.context, self.instance) self.rt.pci_tracker.allocate_instance.assert_called_once_with( self.instance) self.assertTrue(self.rt.pci_tracker.save.called) def test_free_pci_device_allocations_for_instance(self): self.rt.free_pci_device_allocations_for_instance(self.context, self.instance) self.rt.pci_tracker.free_instance_allocations.assert_called_once_with( self.context, self.instance) self.assertTrue(self.rt.pci_tracker.save.called) def test_free_pci_device_claims_for_instance(self): self.rt.free_pci_device_claims_for_instance(self.context, self.instance) self.rt.pci_tracker.free_instance_claims.assert_called_once_with( self.context, self.instance) self.assertTrue(self.rt.pci_tracker.save.called) class ResourceTrackerTestCase(test.NoDBTestCase): def test_init_ensure_provided_reportclient_is_used(self): """Simple test to make sure if a reportclient is provided it is used""" rt = resource_tracker.ResourceTracker( _HOSTNAME, mock.sentinel.driver, mock.sentinel.reportclient) self.assertIs(rt.reportclient, mock.sentinel.reportclient) def test_that_unfair_usage_of_compute_resource_semaphore_is_caught(self): def _test_explict_unfair(): class MyResourceTracker(resource_tracker.ResourceTracker): @nova_utils.synchronized( resource_tracker.COMPUTE_RESOURCE_SEMAPHORE, fair=False) def foo(self): pass def _test_implicit_unfair(): class MyResourceTracker(resource_tracker.ResourceTracker): @nova_utils.synchronized( resource_tracker.COMPUTE_RESOURCE_SEMAPHORE) def foo(self): pass self.assertRaises(AssertionError, _test_explict_unfair) self.assertRaises(AssertionError, _test_implicit_unfair) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/compute/test_rpcapi.py0000664000175000017500000013461500000000000022227 0ustar00zuulzuul00000000000000# Copyright 2012, Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Unit Tests for nova.compute.rpcapi """ import mock from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids import six from nova.compute import rpcapi as compute_rpcapi from nova import context from nova import exception from nova import objects from nova.objects import block_device as objects_block_dev from nova.objects import migration as migration_obj from nova.objects import service as service_obj from nova import test from nova.tests.unit import fake_block_device from nova.tests.unit import fake_flavor from nova.tests.unit import fake_instance from nova.tests.unit import fake_request_spec class ComputeRpcAPITestCase(test.NoDBTestCase): def setUp(self): super(ComputeRpcAPITestCase, self).setUp() self.context = context.get_admin_context() self.fake_flavor_obj = fake_flavor.fake_flavor_obj(self.context) self.fake_flavor = jsonutils.to_primitive(self.fake_flavor_obj) instance_attr = {'host': 'fake_host', 'instance_type_id': self.fake_flavor_obj['id'], 'instance_type': self.fake_flavor_obj} self.fake_instance_obj = fake_instance.fake_instance_obj(self.context, **instance_attr) self.fake_instance = jsonutils.to_primitive(self.fake_instance_obj) self.fake_volume_bdm = objects_block_dev.BlockDeviceMapping( **fake_block_device.FakeDbBlockDeviceDict( {'source_type': 'volume', 'destination_type': 'volume', 'instance_uuid': self.fake_instance_obj.uuid, 'volume_id': 'fake-volume-id'})) self.fake_request_spec_obj = fake_request_spec.fake_spec_obj() # FIXME(melwitt): Temporary while things have no mappings self.patcher1 = mock.patch('nova.objects.InstanceMapping.' 'get_by_instance_uuid') self.patcher2 = mock.patch('nova.objects.HostMapping.get_by_host') mock_inst_mapping = self.patcher1.start() mock_host_mapping = self.patcher2.start() mock_inst_mapping.side_effect = exception.InstanceMappingNotFound( uuid=self.fake_instance_obj.uuid) mock_host_mapping.side_effect = exception.HostMappingNotFound( name=self.fake_instance_obj.host) def tearDown(self): super(ComputeRpcAPITestCase, self).tearDown() self.patcher1.stop() self.patcher2.stop() @mock.patch('nova.objects.service.get_minimum_version_all_cells') def test_auto_pin(self, mock_get_min): mock_get_min.return_value = 1 self.flags(compute='auto', group='upgrade_levels') compute_rpcapi.LAST_VERSION = None rpcapi = compute_rpcapi.ComputeAPI() self.assertEqual('4.4', rpcapi.router.version_cap) mock_get_min.assert_called_once_with(mock.ANY, ['nova-compute']) @mock.patch('nova.objects.service.get_minimum_version_all_cells') def test_auto_pin_fails_if_too_old(self, mock_get_min): mock_get_min.return_value = 1955 self.flags(compute='auto', group='upgrade_levels') self.assertRaises(exception.ServiceTooOld, compute_rpcapi.ComputeAPI()._determine_version_cap, mock.Mock) @mock.patch('nova.objects.service.get_minimum_version_all_cells') def test_auto_pin_with_service_version_zero(self, mock_get_min): mock_get_min.return_value = 0 self.flags(compute='auto', group='upgrade_levels') compute_rpcapi.LAST_VERSION = None rpcapi = compute_rpcapi.ComputeAPI() history = service_obj.SERVICE_VERSION_HISTORY current_version = history[service_obj.SERVICE_VERSION]['compute_rpc'] self.assertEqual(current_version, rpcapi.router.version_cap) mock_get_min.assert_called_once_with(mock.ANY, ['nova-compute']) self.assertIsNone(compute_rpcapi.LAST_VERSION) @mock.patch('nova.objects.service.get_minimum_version_all_cells') def test_auto_pin_caches(self, mock_get_min): mock_get_min.return_value = 1 self.flags(compute='auto', group='upgrade_levels') 
compute_rpcapi.LAST_VERSION = None api = compute_rpcapi.ComputeAPI() for x in range(2): api._determine_version_cap(mock.Mock()) mock_get_min.assert_called_once_with(mock.ANY, ['nova-compute']) self.assertEqual('4.4', compute_rpcapi.LAST_VERSION) def _test_compute_api(self, method, rpc_method, expected_args=None, **kwargs): ctxt = context.RequestContext('fake_user', 'fake_project') rpcapi = kwargs.pop('rpcapi_class', compute_rpcapi.ComputeAPI)() self.assertIsNotNone(rpcapi.router) self.assertEqual(rpcapi.router.target.topic, compute_rpcapi.RPC_TOPIC) # This test wants to run the real prepare function, so must use # a real client object default_client = rpcapi.router.default_client orig_prepare = default_client.prepare base_version = rpcapi.router.target.version expected_version = kwargs.pop('version', base_version) prepare_extra_kwargs = {} cm_timeout = kwargs.pop('call_monitor_timeout', None) timeout = kwargs.pop('timeout', None) if cm_timeout: prepare_extra_kwargs['call_monitor_timeout'] = cm_timeout if timeout: prepare_extra_kwargs['timeout'] = timeout expected_kwargs = kwargs.copy() if expected_args: expected_kwargs.update(expected_args) if 'host_param' in expected_kwargs: expected_kwargs['host'] = expected_kwargs.pop('host_param') else: expected_kwargs.pop('host', None) cast_and_call = ['confirm_resize', 'stop_instance'] if rpc_method == 'call' and method in cast_and_call: if method == 'confirm_resize': kwargs['cast'] = False else: kwargs['do_cast'] = False if 'host' in kwargs: host = kwargs['host'] elif 'instances' in kwargs: host = kwargs['instances'][0]['host'] elif 'destination' in kwargs: host = expected_kwargs.pop('destination') elif 'prepare_server' in kwargs: # This is the "server" kwarg to the prepare() method so remove it # from both kwargs that go to the actual RPC method call. 
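# The popped value is only used to pick the RPC server target passed to
# prepare(); it must not be forwarded as an argument to the compute method
# itself, which is why it is also removed from expected_kwargs here.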
expected_kwargs.pop('prepare_server') host = kwargs.pop('prepare_server') else: host = kwargs['instance']['host'] if method == 'rebuild_instance' and 'node' in expected_kwargs: expected_kwargs['scheduled_node'] = expected_kwargs.pop('node') with test.nested( mock.patch.object(default_client, rpc_method), mock.patch.object(default_client, 'prepare'), mock.patch.object(default_client, 'can_send_version'), ) as ( rpc_mock, prepare_mock, csv_mock ): prepare_mock.return_value = default_client if '_return_value' in kwargs: rpc_mock.return_value = kwargs.pop('_return_value') del expected_kwargs['_return_value'] elif rpc_method == 'call': rpc_mock.return_value = 'foo' else: rpc_mock.return_value = None csv_mock.side_effect = ( lambda v: orig_prepare(version=v).can_send_version()) retval = getattr(rpcapi, method)(ctxt, **kwargs) self.assertEqual(retval, rpc_mock.return_value) prepare_mock.assert_called_once_with(version=expected_version, server=host, **prepare_extra_kwargs) rpc_mock.assert_called_once_with(ctxt, method, **expected_kwargs) def test_add_aggregate_host(self): self._test_compute_api('add_aggregate_host', 'cast', aggregate={'id': 'fake_id'}, host_param='host', host='host', slave_info={}) def test_add_fixed_ip_to_instance(self): self._test_compute_api('add_fixed_ip_to_instance', 'cast', instance=self.fake_instance_obj, network_id='id', version='5.0') def test_attach_interface(self): self._test_compute_api('attach_interface', 'call', instance=self.fake_instance_obj, network_id='id', port_id='id2', version='5.0', requested_ip='192.168.1.50', tag='foo') def test_attach_volume(self): self._test_compute_api('attach_volume', 'cast', instance=self.fake_instance_obj, bdm=self.fake_volume_bdm, version='5.0') def test_change_instance_metadata(self): self._test_compute_api('change_instance_metadata', 'cast', instance=self.fake_instance_obj, diff={}, version='5.0') def test_check_instance_shared_storage(self): self._test_compute_api('check_instance_shared_storage', 'call', instance=self.fake_instance_obj, data='foo', version='5.0') def test_confirm_resize_cast(self): self._test_compute_api('confirm_resize', 'cast', instance=self.fake_instance_obj, migration={'id': 'foo'}, host='host') def test_confirm_resize_call(self): self._test_compute_api('confirm_resize', 'call', instance=self.fake_instance_obj, migration={'id': 'foo'}, host='host') def test_detach_interface(self): self._test_compute_api('detach_interface', 'cast', version='5.0', instance=self.fake_instance_obj, port_id='fake_id') def test_detach_volume(self): self._test_compute_api('detach_volume', 'cast', instance=self.fake_instance_obj, volume_id='id', attachment_id='fake_id', version='5.0') def test_finish_resize(self): self._test_compute_api('finish_resize', 'cast', instance=self.fake_instance_obj, migration={'id': 'foo'}, image='image', disk_info='disk_info', host='host', request_spec=self.fake_request_spec_obj, version='5.2') def test_finish_resize_old_compute(self): ctxt = context.RequestContext('fake_user', 'fake_project') rpcapi = compute_rpcapi.ComputeAPI() rpcapi.router.client = mock.Mock() mock_client = mock.MagicMock() rpcapi.router.client.return_value = mock_client # So we expect that the messages is backported therefore the # request_spec is dropped mock_client.can_send_version.return_value = False mock_cctx = mock.MagicMock() mock_client.prepare.return_value = mock_cctx rpcapi.finish_resize( ctxt, instance=self.fake_instance_obj, migration=mock.sentinel.migration, image='image', disk_info='disk_info', host='host', 
request_spec=self.fake_request_spec_obj) mock_client.can_send_version.assert_called_once_with('5.2') mock_client.prepare.assert_called_with( server='host', version='5.0') mock_cctx.cast.assert_called_with( ctxt, 'finish_resize', instance=self.fake_instance_obj, migration=mock.sentinel.migration, image='image', disk_info='disk_info') def test_finish_revert_resize(self): self._test_compute_api('finish_revert_resize', 'cast', instance=self.fake_instance_obj, migration={'id': 'fake_id'}, host='host', request_spec=self.fake_request_spec_obj, version='5.2') def test_finish_revert_resize_old_compute(self): ctxt = context.RequestContext('fake_user', 'fake_project') rpcapi = compute_rpcapi.ComputeAPI() rpcapi.router.client = mock.Mock() mock_client = mock.MagicMock() rpcapi.router.client.return_value = mock_client # So we expect that the messages is backported therefore the # request_spec is dropped mock_client.can_send_version.return_value = False mock_cctx = mock.MagicMock() mock_client.prepare.return_value = mock_cctx rpcapi.finish_revert_resize( ctxt, instance=self.fake_instance_obj, migration=mock.sentinel.migration, host='host', request_spec=self.fake_request_spec_obj) mock_client.can_send_version.assert_called_once_with('5.2') mock_client.prepare.assert_called_with( server='host', version='5.0') mock_cctx.cast.assert_called_with( ctxt, 'finish_revert_resize', instance=self.fake_instance_obj, migration=mock.sentinel.migration) def test_get_console_output(self): self._test_compute_api('get_console_output', 'call', instance=self.fake_instance_obj, tail_length='tl', version='5.0') def test_get_console_pool_info(self): self._test_compute_api('get_console_pool_info', 'call', console_type='type', host='host') def test_get_console_topic(self): self._test_compute_api('get_console_topic', 'call', host='host') def test_get_diagnostics(self): self._test_compute_api('get_diagnostics', 'call', instance=self.fake_instance_obj, version='5.0') def test_get_instance_diagnostics(self): expected_args = {'instance': self.fake_instance_obj} self._test_compute_api('get_instance_diagnostics', 'call', expected_args, instance=self.fake_instance_obj, version='5.0') def test_get_vnc_console(self): self._test_compute_api('get_vnc_console', 'call', instance=self.fake_instance_obj, console_type='type', version='5.0') def test_get_spice_console(self): self._test_compute_api('get_spice_console', 'call', instance=self.fake_instance_obj, console_type='type', version='5.0') def test_get_rdp_console(self): self._test_compute_api('get_rdp_console', 'call', instance=self.fake_instance_obj, console_type='type', version='5.0') def test_get_serial_console(self): self._test_compute_api('get_serial_console', 'call', instance=self.fake_instance_obj, console_type='serial', version='5.0') def test_get_mks_console(self): self._test_compute_api('get_mks_console', 'call', instance=self.fake_instance_obj, console_type='webmks', version='5.0') def test_validate_console_port(self): self._test_compute_api('validate_console_port', 'call', instance=self.fake_instance_obj, port="5900", console_type="novnc", version='5.0') def test_host_maintenance_mode(self): self._test_compute_api('host_maintenance_mode', 'call', host_param='param', mode='mode', host='host') def test_host_power_action(self): self._test_compute_api('host_power_action', 'call', action='action', host='host') def test_inject_network_info(self): self._test_compute_api('inject_network_info', 'cast', instance=self.fake_instance_obj) def test_live_migration(self): 
self._test_compute_api('live_migration', 'cast', instance=self.fake_instance_obj, dest='dest', block_migration='blockity_block', host='tsoh', migration='migration', migrate_data={}, version='5.0') def test_live_migration_force_complete(self): migration = migration_obj.Migration() migration.id = 1 migration.source_compute = 'fake' ctxt = context.RequestContext('fake_user', 'fake_project') version = '5.0' rpcapi = compute_rpcapi.ComputeAPI() rpcapi.router.client = mock.Mock() mock_client = mock.MagicMock() rpcapi.router.client.return_value = mock_client mock_client.can_send_version.return_value = True mock_cctx = mock.MagicMock() mock_client.prepare.return_value = mock_cctx rpcapi.live_migration_force_complete(ctxt, self.fake_instance_obj, migration) mock_client.prepare.assert_called_with(server=migration.source_compute, version=version) mock_cctx.cast.assert_called_with(ctxt, 'live_migration_force_complete', instance=self.fake_instance_obj) def test_live_migration_abort(self): self._test_compute_api('live_migration_abort', 'cast', instance=self.fake_instance_obj, migration_id='1', version='5.0') def test_post_live_migration_at_destination(self): self.flags(long_rpc_timeout=1234) self._test_compute_api('post_live_migration_at_destination', 'call', instance=self.fake_instance_obj, block_migration='block_migration', host='host', version='5.0', timeout=1234, call_monitor_timeout=60) def test_pause_instance(self): self._test_compute_api('pause_instance', 'cast', instance=self.fake_instance_obj) def test_soft_delete_instance(self): self._test_compute_api('soft_delete_instance', 'cast', instance=self.fake_instance_obj) def test_swap_volume(self): self._test_compute_api('swap_volume', 'cast', instance=self.fake_instance_obj, old_volume_id='oldid', new_volume_id='newid', new_attachment_id=uuids.attachment_id, version='5.0') def test_restore_instance(self): self._test_compute_api('restore_instance', 'cast', instance=self.fake_instance_obj, version='5.0') def test_pre_live_migration(self): self.flags(long_rpc_timeout=1234) self._test_compute_api('pre_live_migration', 'call', instance=self.fake_instance_obj, block_migration='block_migration', disk='disk', host='host', migrate_data=None, version='5.0', call_monitor_timeout=60, timeout=1234) def test_supports_numa_live_migration(self): mock_client = mock.MagicMock() rpcapi = compute_rpcapi.ComputeAPI() rpcapi.router.client = mock.Mock() rpcapi.router.client.return_value = mock_client ctxt = context.RequestContext('fake_user', 'fake_project'), mock_client.can_send_version.return_value = False self.assertFalse(rpcapi.supports_numa_live_migration(ctxt)) mock_client.can_send_version.return_value = True self.assertTrue(rpcapi.supports_numa_live_migration(ctxt)) mock_client.can_send_version.assert_has_calls( [mock.call('5.3'), mock.call('5.3')]) def test_check_can_live_migrate_destination(self): self.flags(long_rpc_timeout=1234) self._test_compute_api('check_can_live_migrate_destination', 'call', instance=self.fake_instance_obj, destination='dest', block_migration=False, disk_over_commit=False, version='5.3', call_monitor_timeout=60, migration='migration', limits='limits', timeout=1234) def test_check_can_live_migrate_destination_backlevel(self): mock_cctxt = mock.MagicMock() mock_client = mock.MagicMock() mock_client.can_send_version.return_value = False mock_client.prepare.return_value = mock_cctxt rpcapi = compute_rpcapi.ComputeAPI() rpcapi.router.client = mock.Mock() rpcapi.router.client.return_value = mock_client ctxt = context.RequestContext('fake_user', 
'fake_project'), rpcapi.check_can_live_migrate_destination( ctxt, instance=self.fake_instance_obj, destination='dest', block_migration=False, disk_over_commit=False, migration='migration', limits='limits') mock_client.prepare.assert_called_with(server='dest', version='5.0', call_monitor_timeout=mock.ANY, timeout=mock.ANY) mock_cctxt.call.assert_called_once_with( ctxt, 'check_can_live_migrate_destination', instance=self.fake_instance_obj, block_migration=False, disk_over_commit=False) def test_drop_move_claim_at_destination(self): self._test_compute_api('drop_move_claim_at_destination', 'call', instance=self.fake_instance_obj, host='host', version='5.3', _return_value=None) def test_prep_resize(self): self._test_compute_api('prep_resize', 'cast', instance=self.fake_instance_obj, instance_type=self.fake_flavor_obj, image='fake_image', host='host', request_spec='fake_spec', filter_properties={'fakeprop': 'fakeval'}, migration='migration', node='node', clean_shutdown=True, host_list=None, version='5.1') def test_prep_snapshot_based_resize_at_dest(self): """Tests happy path for prep_snapshot_based_resize_at_dest rpc call""" self.flags(long_rpc_timeout=1234) self._test_compute_api( 'prep_snapshot_based_resize_at_dest', 'call', # compute method kwargs instance=self.fake_instance_obj, flavor=self.fake_flavor_obj, nodename='node', migration=migration_obj.Migration(), limits={}, request_spec=objects.RequestSpec(), destination='dest', # client.prepare kwargs version='5.5', call_monitor_timeout=60, timeout=1234, # assert the expected return value _return_value=mock.sentinel.migration_context) @mock.patch('nova.rpc.ClientRouter.client') def test_prep_snapshot_based_resize_at_dest_old_compute(self, mock_client): """Tests when the destination compute service is too old to call prep_snapshot_based_resize_at_dest so MigrationPreCheckError is raised. """ mock_client.return_value.can_send_version.return_value = False rpcapi = compute_rpcapi.ComputeAPI() ex = self.assertRaises( exception.MigrationPreCheckError, rpcapi.prep_snapshot_based_resize_at_dest, self.context, instance=self.fake_instance_obj, flavor=self.fake_flavor_obj, nodename='node', migration=migration_obj.Migration(), limits={}, request_spec=objects.RequestSpec(), destination='dest') self.assertIn('Compute too old', six.text_type(ex)) def test_prep_snapshot_based_resize_at_source(self): """Tests happy path for prep_snapshot_based_resize_at_source rpc call """ self.flags(long_rpc_timeout=1234) self._test_compute_api( 'prep_snapshot_based_resize_at_source', 'call', # compute method kwargs instance=self.fake_instance_obj, migration=migration_obj.Migration(), snapshot_id=uuids.snapshot_id, # client.prepare kwargs version='5.6', call_monitor_timeout=60, timeout=1234) @mock.patch('nova.rpc.ClientRouter.client') def test_prep_snapshot_based_resize_at_source_old_compute( self, mock_client): """Tests when the source compute service is too old to call prep_snapshot_based_resize_at_source so MigrationError is raised. 
""" mock_client.return_value.can_send_version.return_value = False rpcapi = compute_rpcapi.ComputeAPI() ex = self.assertRaises( exception.MigrationError, rpcapi.prep_snapshot_based_resize_at_source, self.context, instance=self.fake_instance_obj, migration=migration_obj.Migration(), snapshot_id=uuids.snapshot_id) self.assertIn('Compute too old', six.text_type(ex)) def test_finish_snapshot_based_resize_at_dest(self): """Tests happy path for finish_snapshot_based_resize_at_dest.""" self.flags(long_rpc_timeout=1234) self._test_compute_api( 'finish_snapshot_based_resize_at_dest', 'call', # compute method kwargs instance=self.fake_instance_obj, migration=migration_obj.Migration(dest_compute='dest'), snapshot_id=uuids.snapshot_id, request_spec=objects.RequestSpec(), # client.prepare kwargs version='5.7', prepare_server='dest', call_monitor_timeout=60, timeout=1234) @mock.patch('nova.rpc.ClientRouter.client') def test_finish_snapshot_based_resize_at_dest_old_compute(self, client): """Tests when the dest compute service is too old to call finish_snapshot_based_resize_at_dest so MigrationError is raised. """ client.return_value.can_send_version.return_value = False rpcapi = compute_rpcapi.ComputeAPI() ex = self.assertRaises( exception.MigrationError, rpcapi.finish_snapshot_based_resize_at_dest, self.context, instance=self.fake_instance_obj, migration=migration_obj.Migration(dest_compute='dest'), snapshot_id=uuids.snapshot_id, request_spec=objects.RequestSpec()) self.assertIn('Compute too old', six.text_type(ex)) def test_confirm_snapshot_based_resize_at_source(self): """Tests happy path for confirm_snapshot_based_resize_at_source.""" self.flags(long_rpc_timeout=1234) self._test_compute_api( 'confirm_snapshot_based_resize_at_source', 'call', # compute method kwargs instance=self.fake_instance_obj, migration=migration_obj.Migration(source_compute='source'), # client.prepare kwargs version='5.8', prepare_server='source', call_monitor_timeout=60, timeout=1234) @mock.patch('nova.rpc.ClientRouter.client') def test_confirm_snapshot_based_resize_at_source_old_compute(self, client): """Tests when the source compute service is too old to call confirm_snapshot_based_resize_at_source so MigrationError is raised. """ client.return_value.can_send_version.return_value = False rpcapi = compute_rpcapi.ComputeAPI() ex = self.assertRaises( exception.MigrationError, rpcapi.confirm_snapshot_based_resize_at_source, self.context, instance=self.fake_instance_obj, migration=migration_obj.Migration(source_compute='source')) self.assertIn('Compute too old', six.text_type(ex)) def test_revert_snapshot_based_resize_at_dest(self): """Tests happy path for revert_snapshot_based_resize_at_dest.""" self.flags(long_rpc_timeout=1234) self._test_compute_api( 'revert_snapshot_based_resize_at_dest', 'call', # compute method kwargs instance=self.fake_instance_obj, migration=migration_obj.Migration(dest_compute='dest'), # client.prepare kwargs version='5.9', prepare_server='dest', call_monitor_timeout=60, timeout=1234) @mock.patch('nova.rpc.ClientRouter.client') def test_revert_snapshot_based_resize_at_dest_old_compute(self, client): """Tests when the dest compute service is too old to call revert_snapshot_based_resize_at_dest so MigrationError is raised. 
""" client.return_value.can_send_version.return_value = False rpcapi = compute_rpcapi.ComputeAPI() ex = self.assertRaises( exception.MigrationError, rpcapi.revert_snapshot_based_resize_at_dest, self.context, instance=self.fake_instance_obj, migration=migration_obj.Migration(dest_compute='dest')) self.assertIn('Compute too old', six.text_type(ex)) def test_finish_revert_snapshot_based_resize_at_source(self): """Tests happy path for finish_revert_snapshot_based_resize_at_source. """ self.flags(long_rpc_timeout=1234) self._test_compute_api( 'finish_revert_snapshot_based_resize_at_source', 'call', # compute method kwargs instance=self.fake_instance_obj, migration=migration_obj.Migration(source_compute='source'), # client.prepare kwargs version='5.10', prepare_server='source', call_monitor_timeout=60, timeout=1234) @mock.patch('nova.rpc.ClientRouter.client') def test_finish_revert_snapshot_based_resize_at_source_old_compute( self, client): """Tests when the source compute service is too old to call finish_revert_snapshot_based_resize_at_source so MigrationError is raised. """ client.return_value.can_send_version.return_value = False rpcapi = compute_rpcapi.ComputeAPI() ex = self.assertRaises( exception.MigrationError, rpcapi.finish_revert_snapshot_based_resize_at_source, self.context, instance=self.fake_instance_obj, migration=migration_obj.Migration(source_compute='source')) self.assertIn('Compute too old', six.text_type(ex)) def test_reboot_instance(self): self.maxDiff = None self._test_compute_api('reboot_instance', 'cast', instance=self.fake_instance_obj, block_device_info={}, reboot_type='type') def test_rebuild_instance(self): self._test_compute_api('rebuild_instance', 'cast', new_pass='None', injected_files='None', image_ref='None', orig_image_ref='None', bdms=[], instance=self.fake_instance_obj, host='new_host', orig_sys_metadata=None, recreate=True, on_shared_storage=True, preserve_ephemeral=True, migration=None, node=None, limits=None, request_spec=None, version='5.0') def test_reserve_block_device_name(self): self.flags(long_rpc_timeout=1234) self._test_compute_api('reserve_block_device_name', 'call', instance=self.fake_instance_obj, device='device', volume_id='id', disk_bus='ide', device_type='cdrom', tag='foo', multiattach=True, version='5.0', timeout=1234, call_monitor_timeout=60, _return_value=objects_block_dev.BlockDeviceMapping()) # TODO(stephenfin): Remove this since it's nova-network only def test_refresh_instance_security_rules(self): expected_args = {'instance': self.fake_instance_obj} self._test_compute_api('refresh_instance_security_rules', 'cast', expected_args, host='fake_host', instance=self.fake_instance_obj, version='5.0') def test_remove_aggregate_host(self): self._test_compute_api('remove_aggregate_host', 'cast', aggregate={'id': 'fake_id'}, host_param='host', host='host', slave_info={}) def test_remove_fixed_ip_from_instance(self): self._test_compute_api('remove_fixed_ip_from_instance', 'cast', instance=self.fake_instance_obj, address='addr', version='5.0') def test_remove_volume_connection(self): self._test_compute_api('remove_volume_connection', 'call', instance=self.fake_instance_obj, volume_id='id', host='host', version='5.0') def test_rescue_instance(self): self._test_compute_api('rescue_instance', 'cast', instance=self.fake_instance_obj, rescue_password='pw', rescue_image_ref='fake_image_ref', clean_shutdown=True, version='5.0') def test_reset_network(self): self._test_compute_api('reset_network', 'cast', instance=self.fake_instance_obj) def 
test_resize_instance(self): self._test_compute_api('resize_instance', 'cast', instance=self.fake_instance_obj, migration={'id': 'fake_id'}, image='image', instance_type=self.fake_flavor_obj, clean_shutdown=True, request_spec=self.fake_request_spec_obj, version='5.2') def test_resize_instance_old_compute(self): ctxt = context.RequestContext('fake_user', 'fake_project') rpcapi = compute_rpcapi.ComputeAPI() rpcapi.router.client = mock.Mock() mock_client = mock.MagicMock() rpcapi.router.client.return_value = mock_client # So we expect that the messages is backported therefore the # request_spec is dropped mock_client.can_send_version.return_value = False mock_cctx = mock.MagicMock() mock_client.prepare.return_value = mock_cctx rpcapi.resize_instance( ctxt, instance=self.fake_instance_obj, migration=mock.sentinel.migration, image='image', instance_type='instance_type', clean_shutdown=True, request_spec=self.fake_request_spec_obj) mock_client.can_send_version.assert_called_once_with('5.2') mock_client.prepare.assert_called_with( server=self.fake_instance_obj.host, version='5.0') mock_cctx.cast.assert_called_with( ctxt, 'resize_instance', instance=self.fake_instance_obj, migration=mock.sentinel.migration, image='image', instance_type='instance_type', clean_shutdown=True) def test_resume_instance(self): self._test_compute_api('resume_instance', 'cast', instance=self.fake_instance_obj) def test_revert_resize(self): self._test_compute_api('revert_resize', 'cast', instance=self.fake_instance_obj, migration={'id': 'fake_id'}, host='host', request_spec=self.fake_request_spec_obj, version='5.2') def test_revert_resize_old_compute(self): ctxt = context.RequestContext('fake_user', 'fake_project') rpcapi = compute_rpcapi.ComputeAPI() rpcapi.router.client = mock.Mock() mock_client = mock.MagicMock() rpcapi.router.client.return_value = mock_client # So we expect that the messages is backported therefore the # request_spec is dropped mock_client.can_send_version.return_value = False mock_cctx = mock.MagicMock() mock_client.prepare.return_value = mock_cctx rpcapi.revert_resize( ctxt, instance=self.fake_instance_obj, migration=mock.sentinel.migration, host='host', request_spec=self.fake_request_spec_obj) mock_client.can_send_version.assert_called_once_with('5.2') mock_client.prepare.assert_called_with( server='host', version='5.0') mock_cctx.cast.assert_called_with( ctxt, 'revert_resize', instance=self.fake_instance_obj, migration=mock.sentinel.migration) def test_set_admin_password(self): self._test_compute_api('set_admin_password', 'call', instance=self.fake_instance_obj, new_pass='pw', version='5.0') def test_set_host_enabled(self): self.flags(long_rpc_timeout=600, rpc_response_timeout=120) self._test_compute_api('set_host_enabled', 'call', enabled='enabled', host='host', call_monitor_timeout=120, timeout=600) def test_get_host_uptime(self): self._test_compute_api('get_host_uptime', 'call', host='host') def test_backup_instance(self): self._test_compute_api('backup_instance', 'cast', instance=self.fake_instance_obj, image_id='id', backup_type='type', rotation='rotation') def test_snapshot_instance(self): self._test_compute_api('snapshot_instance', 'cast', instance=self.fake_instance_obj, image_id='id') def test_start_instance(self): self._test_compute_api('start_instance', 'cast', instance=self.fake_instance_obj) def test_stop_instance_cast(self): self._test_compute_api('stop_instance', 'cast', instance=self.fake_instance_obj, clean_shutdown=True, version='5.0') def test_stop_instance_call(self): 
self._test_compute_api('stop_instance', 'call', instance=self.fake_instance_obj, clean_shutdown=True, version='5.0') def test_suspend_instance(self): self._test_compute_api('suspend_instance', 'cast', instance=self.fake_instance_obj) def test_terminate_instance(self): self._test_compute_api('terminate_instance', 'cast', instance=self.fake_instance_obj, bdms=[], version='5.0') def test_unpause_instance(self): self._test_compute_api('unpause_instance', 'cast', instance=self.fake_instance_obj) def test_unrescue_instance(self): self._test_compute_api('unrescue_instance', 'cast', instance=self.fake_instance_obj, version='5.0') def test_shelve_instance(self): self._test_compute_api('shelve_instance', 'cast', instance=self.fake_instance_obj, image_id='image_id', clean_shutdown=True, version='5.0') def test_shelve_offload_instance(self): self._test_compute_api('shelve_offload_instance', 'cast', instance=self.fake_instance_obj, clean_shutdown=True, version='5.0') def test_unshelve_instance(self): self._test_compute_api('unshelve_instance', 'cast', instance=self.fake_instance_obj, host='host', image='image', filter_properties={'fakeprop': 'fakeval'}, node='node', request_spec=self.fake_request_spec_obj, version='5.2') def test_cache_image(self): self._test_compute_api('cache_images', 'call', host='host', image_ids=['image'], call_monitor_timeout=60, timeout=1800, version='5.4') def test_cache_image_pinned(self): ctxt = context.RequestContext('fake_user', 'fake_project') rpcapi = compute_rpcapi.ComputeAPI() rpcapi.router.client = mock.Mock() mock_client = mock.MagicMock() rpcapi.router.client.return_value = mock_client mock_client.can_send_version.return_value = False self.assertRaises(exception.NovaException, rpcapi.cache_images, ctxt, 'host', ['image']) def test_unshelve_instance_old_compute(self): ctxt = context.RequestContext('fake_user', 'fake_project') rpcapi = compute_rpcapi.ComputeAPI() rpcapi.router.client = mock.Mock() mock_client = mock.MagicMock() rpcapi.router.client.return_value = mock_client # So we expect that the messages is backported therefore the # request_spec is dropped mock_client.can_send_version.return_value = False mock_cctx = mock.MagicMock() mock_client.prepare.return_value = mock_cctx rpcapi.unshelve_instance( ctxt, instance=self.fake_instance_obj, host='host', request_spec=self.fake_request_spec_obj, image='image') mock_client.can_send_version.assert_called_once_with('5.2') mock_client.prepare.assert_called_with( server='host', version='5.0') mock_cctx.cast.assert_called_with( ctxt, 'unshelve_instance', instance=self.fake_instance_obj, image='image', filter_properties=None, node=None) def test_volume_snapshot_create(self): self._test_compute_api('volume_snapshot_create', 'cast', instance=self.fake_instance_obj, volume_id='fake_id', create_info={}, version='5.0') def test_volume_snapshot_delete(self): self._test_compute_api('volume_snapshot_delete', 'cast', instance=self.fake_instance_obj, volume_id='fake_id', snapshot_id='fake_id2', delete_info={}, version='5.0') def test_external_instance_event(self): self._test_compute_api('external_instance_event', 'cast', instances=[self.fake_instance_obj], events=['event'], version='5.0') def test_build_and_run_instance(self): # With rpcapi 5.11, when a list of accel_uuids is passed as a param, # that list must be passed to the client. That is tested in # _test_compute_api with rpc_mock.assert, where expected_kwargs # must have the accel_uuids. 
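# The complementary backlevel behaviour (accel_uuids dropped when the client
# is pinned below 5.11) is covered by test_build_and_run_instance_old_rpcapi
# below.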
accel_uuids = ['938af7f9-f136-4e5a-bdbe-3b6feab54311'] self._test_compute_api('build_and_run_instance', 'cast', instance=self.fake_instance_obj, host='host', image='image', request_spec={'request': 'spec'}, filter_properties=[], admin_password='passwd', injected_files=None, requested_networks=['network1'], security_groups=None, block_device_mapping=None, node='node', limits=[], host_list=None, accel_uuids=accel_uuids, version='5.11') def test_build_and_run_instance_old_rpcapi(self): # With rpcapi < 5.11, accel_uuids must be dropped in the client call. ctxt = context.RequestContext('fake_user', 'fake_project') compute_api = compute_rpcapi.ComputeAPI() compute_api.router.client = mock.Mock() mock_client = mock.MagicMock() compute_api.router.client.return_value = mock_client # Force can_send_version to False, so that 5.0 version is used. mock_client.can_send_version.return_value = False mock_cctx = mock.MagicMock() mock_client.prepare.return_value = mock_cctx compute_api.build_and_run_instance( ctxt, instance=self.fake_instance_obj, host='host', image='image', request_spec=self.fake_request_spec_obj, filter_properties={}, accel_uuids=['938af7f9-f136-4e5a-bdbe-3b6feab54311']) mock_client.can_send_version.assert_called_once_with('5.11') mock_client.prepare.assert_called_with( server='host', version='5.0') mock_cctx.cast.assert_called_with( # No accel_uuids ctxt, 'build_and_run_instance', instance=self.fake_instance_obj, image='image', request_spec=self.fake_request_spec_obj, filter_properties={}, admin_password=None, injected_files=None, requested_networks=None, security_groups=None, block_device_mapping=None, node=None, limits=None, host_list=None) def test_quiesce_instance(self): self._test_compute_api('quiesce_instance', 'call', instance=self.fake_instance_obj, version='5.0') def test_unquiesce_instance(self): self._test_compute_api('unquiesce_instance', 'cast', instance=self.fake_instance_obj, mapping=None, version='5.0') def test_trigger_crash_dump(self): self._test_compute_api('trigger_crash_dump', 'cast', instance=self.fake_instance_obj, version='5.0') @mock.patch('nova.compute.rpcapi.LOG') @mock.patch('nova.objects.Service.get_minimum_version') @mock.patch('nova.objects.service.get_minimum_version_all_cells') def test_version_cap_no_computes_log_once(self, mock_allcells, mock_minver, mock_log): self.flags(connection=None, group='api_database') self.flags(compute='auto', group='upgrade_levels') mock_minver.return_value = 0 api = compute_rpcapi.ComputeAPI() for x in range(2): api._determine_version_cap(mock.Mock()) mock_allcells.assert_not_called() mock_minver.assert_has_calls([ mock.call(mock.ANY, 'nova-compute'), mock.call(mock.ANY, 'nova-compute')]) @mock.patch('nova.objects.Service.get_minimum_version') @mock.patch('nova.objects.service.get_minimum_version_all_cells') def test_version_cap_all_cells(self, mock_allcells, mock_minver): self.flags(connection='sqlite:///', group='api_database') self.flags(compute='auto', group='upgrade_levels') mock_allcells.return_value = 0 compute_rpcapi.ComputeAPI()._determine_version_cap(mock.Mock()) mock_allcells.assert_called_once_with(mock.ANY, ['nova-compute']) mock_minver.assert_not_called() @mock.patch('nova.compute.rpcapi.LOG.error') @mock.patch('nova.objects.Service.get_minimum_version') @mock.patch('nova.objects.service.get_minimum_version_all_cells', side_effect=exception.DBNotAllowed(binary='nova-compute')) def test_version_cap_all_cells_no_access(self, mock_allcells, mock_minver, mock_log_error): """Tests a scenario where nova-compute is 
configured with a connection to the API database and fails trying to get the minium nova-compute service version across all cells because nova-compute is configured to not allow direct database access. """ self.flags(connection='sqlite:///', group='api_database') self.assertRaises(exception.DBNotAllowed, compute_rpcapi.ComputeAPI()._determine_version_cap, mock.Mock()) mock_allcells.assert_called_once_with(mock.ANY, ['nova-compute']) mock_minver.assert_not_called() # Make sure the expected error was logged. mock_log_error.assert_called_once_with( 'This service is configured for access to the ' 'API database but is not allowed to directly ' 'access the database. You should run this ' 'service without the [api_database]/connection ' 'config option.') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/compute/test_shelve.py0000664000175000017500000013475500000000000022244 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils import fixture as utils_fixture from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.compute import api as compute_api from nova.compute import claims from nova.compute import instance_actions from nova.compute import power_state from nova.compute import task_states from nova.compute import utils as compute_utils from nova.compute import vm_states import nova.conf from nova.db import api as db from nova import exception from nova.network import neutron as neutron_api from nova import objects from nova import test from nova.tests.unit.compute import test_compute from nova.tests.unit.image import fake as fake_image CONF = nova.conf.CONF def _fake_resources(): resources = { 'memory_mb': 2048, 'memory_mb_used': 0, 'free_ram_mb': 2048, 'local_gb': 20, 'local_gb_used': 0, 'free_disk_gb': 20, 'vcpus': 2, 'vcpus_used': 0 } return objects.ComputeNode(**resources) class ShelveComputeManagerTestCase(test_compute.BaseTestCase): @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(nova.compute.manager.ComputeManager, '_terminate_volume_connections') @mock.patch.object(nova.virt.fake.SmallFakeDriver, 'power_off') @mock.patch.object(nova.virt.fake.SmallFakeDriver, 'snapshot') @mock.patch.object(nova.compute.manager.ComputeManager, '_get_power_state') @mock.patch.object(nova.compute.manager.ComputeManager, '_notify_about_instance_usage') @mock.patch('nova.compute.utils.notify_about_instance_action') def _shelve_instance(self, shelved_offload_time, mock_notify, mock_notify_instance_usage, mock_get_power_state, mock_snapshot, mock_power_off, mock_terminate, mock_get_bdms, clean_shutdown=True, guest_power_state=power_state.RUNNING): mock_get_power_state.return_value = 123 CONF.set_override('shelved_offload_time', shelved_offload_time) host = 'fake-mini' instance = self._create_fake_instance_obj( params={'host': host, 'power_state': guest_power_state}) image_id = 'fake_image_id' host = 
'fake-mini' self.useFixture(utils_fixture.TimeFixture()) instance.task_state = task_states.SHELVING instance.save() fake_bdms = None if shelved_offload_time == 0: fake_bdms = objects.BlockDeviceMappingList() mock_get_bdms.return_value = fake_bdms tracking = {'last_state': instance.vm_state} def check_save(expected_task_state=None): self.assertEqual(123, instance.power_state) if tracking['last_state'] == vm_states.ACTIVE: if CONF.shelved_offload_time == 0: self.assertEqual(task_states.SHELVING_OFFLOADING, instance.task_state) else: self.assertIsNone(instance.task_state) self.assertEqual(vm_states.SHELVED, instance.vm_state) self.assertEqual([task_states.SHELVING, task_states.SHELVING_IMAGE_UPLOADING], expected_task_state) self.assertIn('shelved_at', instance.system_metadata) self.assertEqual(image_id, instance.system_metadata['shelved_image_id']) self.assertEqual(host, instance.system_metadata['shelved_host']) tracking['last_state'] = instance.vm_state elif (tracking['last_state'] == vm_states.SHELVED and CONF.shelved_offload_time == 0): self.assertIsNone(instance.task_state) self.assertEqual(vm_states.SHELVED_OFFLOADED, instance.vm_state) self.assertEqual([task_states.SHELVING, task_states.SHELVING_OFFLOADING], expected_task_state) tracking['last_state'] = instance.vm_state elif (tracking['last_state'] == vm_states.SHELVED_OFFLOADED and CONF.shelved_offload_time == 0): self.assertIsNone(instance.host) self.assertIsNone(instance.node) self.assertIsNone(expected_task_state) else: self.fail('Unexpected save!') with test.nested( mock.patch.object(instance, 'save'), mock.patch.object(self.compute.network_api, 'cleanup_instance_network_on_host')) as ( mock_save, mock_cleanup ): mock_save.side_effect = check_save self.compute.shelve_instance(self.context, instance, image_id=image_id, clean_shutdown=clean_shutdown) mock_notify.assert_has_calls([ mock.call(self.context, instance, 'fake-mini', action='shelve', phase='start', bdms=fake_bdms), mock.call(self.context, instance, 'fake-mini', action='shelve', phase='end', bdms=fake_bdms)]) # prepare expect call lists mock_notify_instance_usage_call_list = [ mock.call(self.context, instance, 'shelve.start'), mock.call(self.context, instance, 'shelve.end')] mock_power_off_call_list = [] mock_get_power_state_call_list = [ mock.call(self.context, instance)] if clean_shutdown: if guest_power_state == power_state.PAUSED: mock_power_off_call_list.append(mock.call(instance, 0, 0)) else: mock_power_off_call_list.append( mock.call(instance, CONF.shutdown_timeout, CONF.compute.shutdown_retry_interval)) else: mock_power_off_call_list.append(mock.call(instance, 0, 0)) if CONF.shelved_offload_time == 0: mock_notify_instance_usage_call_list.extend([ mock.call(self.context, instance, 'shelve_offload.start'), mock.call(self.context, instance, 'shelve_offload.end')]) mock_power_off_call_list.append(mock.call(instance, 0, 0)) mock_get_power_state_call_list.append(mock.call(self.context, instance)) mock_notify_instance_usage.assert_has_calls( mock_notify_instance_usage_call_list) mock_power_off.assert_has_calls(mock_power_off_call_list) mock_cleanup.assert_not_called() mock_snapshot.assert_called_once_with(self.context, instance, 'fake_image_id', mock.ANY) mock_get_power_state.assert_has_calls(mock_get_power_state_call_list) if CONF.shelved_offload_time == 0: self.assertTrue(mock_terminate.called) def test_shelve(self): self._shelve_instance(-1) def test_shelve_forced_shutdown(self): self._shelve_instance(-1, clean_shutdown=False) def test_shelve_and_offload(self): 
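# shelved_offload_time=0 makes the _shelve_instance helper expect an
# immediate offload after shelving: the SHELVING_OFFLOADING task state,
# shelve_offload.start/end notifications, a second power_off call and
# terminated volume connections.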
self._shelve_instance(0) def test_shelve_paused_instance(self): self._shelve_instance(-1, guest_power_state=power_state.PAUSED) @mock.patch.object(nova.virt.fake.SmallFakeDriver, 'power_off') def test_shelve_offload(self, mock_power_off): instance = self._shelve_offload() mock_power_off.assert_called_once_with(instance, CONF.shutdown_timeout, CONF.compute.shutdown_retry_interval) @mock.patch.object(nova.virt.fake.SmallFakeDriver, 'power_off') def test_shelve_offload_forced_shutdown(self, mock_power_off): instance = self._shelve_offload(clean_shutdown=False) mock_power_off.assert_called_once_with(instance, 0, 0) @mock.patch.object(compute_utils, 'EventReporter') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(nova.compute.manager.ComputeManager, '_terminate_volume_connections') @mock.patch('nova.compute.resource_tracker.ResourceTracker.' 'delete_allocation_for_shelve_offloaded_instance') @mock.patch.object(nova.compute.manager.ComputeManager, '_update_resource_tracker') @mock.patch.object(nova.compute.manager.ComputeManager, '_get_power_state', return_value=123) @mock.patch.object(nova.compute.manager.ComputeManager, '_notify_about_instance_usage') @mock.patch('nova.compute.utils.notify_about_instance_action') def _shelve_offload(self, mock_notify, mock_notify_instance_usage, mock_get_power_state, mock_update_resource_tracker, mock_delete_alloc, mock_terminate, mock_get_bdms, mock_event, clean_shutdown=True): host = 'fake-mini' instance = self._create_fake_instance_obj(params={'host': host}) instance.task_state = task_states.SHELVING instance.save() self.useFixture(utils_fixture.TimeFixture()) fake_bdms = objects.BlockDeviceMappingList() mock_get_bdms.return_value = fake_bdms def stub_instance_save(inst, *args, **kwargs): # If the vm_state is changed to SHELVED_OFFLOADED make sure we # have already freed up allocations in placement. if inst.vm_state == vm_states.SHELVED_OFFLOADED: self.assertTrue(mock_delete_alloc.called, 'Allocations must be deleted before the ' 'vm_status can change to shelved_offloaded.') self.stub_out('nova.objects.Instance.save', stub_instance_save) self.compute.shelve_offload_instance(self.context, instance, clean_shutdown=clean_shutdown) mock_notify.assert_has_calls([ mock.call(self.context, instance, 'fake-mini', action='shelve_offload', phase='start', bdms=fake_bdms), mock.call(self.context, instance, 'fake-mini', action='shelve_offload', phase='end', bdms=fake_bdms)]) self.assertEqual(vm_states.SHELVED_OFFLOADED, instance.vm_state) self.assertIsNone(instance.task_state) self.assertTrue(mock_terminate.called) # prepare expect call lists mock_notify_instance_usage_call_list = [ mock.call(self.context, instance, 'shelve_offload.start'), mock.call(self.context, instance, 'shelve_offload.end')] mock_notify_instance_usage.assert_has_calls( mock_notify_instance_usage_call_list) # instance.host is replaced with host because # original instance.host is clear after # ComputeManager.shelve_offload_instance execute mock_get_power_state.assert_called_once_with( self.context, instance) mock_update_resource_tracker.assert_called_once_with(self.context, instance) mock_delete_alloc.assert_called_once_with(self.context, instance) mock_event.assert_called_once_with(self.context, 'compute_shelve_offload_instance', CONF.host, instance.uuid, graceful_exit=False) return instance @mock.patch('nova.compute.utils.' 
'update_pci_request_spec_with_allocated_interface_name', new=mock.NonCallableMock()) @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch.object(nova.compute.manager.ComputeManager, '_notify_about_instance_usage') @mock.patch.object(nova.compute.manager.ComputeManager, '_prep_block_device', return_value='fake_bdm') @mock.patch.object(nova.virt.fake.SmallFakeDriver, 'spawn') @mock.patch.object(nova.compute.manager.ComputeManager, '_get_power_state', return_value=123) @mock.patch.object(neutron_api.API, 'setup_instance_network_on_host') def test_unshelve(self, mock_setup_network, mock_get_power_state, mock_spawn, mock_prep_block_device, mock_notify_instance_usage, mock_notify_instance_action, mock_get_bdms): mock_bdms = mock.Mock() mock_get_bdms.return_value = mock_bdms instance = self._create_fake_instance_obj() instance.task_state = task_states.UNSHELVING instance.save() image = {'id': uuids.image_id} node = test_compute.NODENAME limits = {} filter_properties = {'limits': limits} host = 'fake-mini' cur_time = timeutils.utcnow() # Adding shelved_* keys in system metadata to verify # whether those are deleted after unshelve call. sys_meta = dict(instance.system_metadata) sys_meta['shelved_at'] = cur_time.isoformat() sys_meta['shelved_image_id'] = image['id'] sys_meta['shelved_host'] = host instance.system_metadata = sys_meta self.deleted_image_id = None def fake_delete(self2, ctxt, image_id): self.deleted_image_id = image_id def fake_claim(context, instance, node, allocations, limits): instance.host = self.compute.host requests = objects.InstancePCIRequests(requests=[]) return claims.Claim(context, instance, test_compute.NODENAME, self.rt, _fake_resources(), requests) tracking = { 'last_state': instance.task_state, 'spawned': False, } def check_save(expected_task_state=None): if tracking['last_state'] == task_states.UNSHELVING: if tracking['spawned']: self.assertIsNone(instance.task_state) else: self.assertEqual(task_states.SPAWNING, instance.task_state) tracking['spawned'] = True tracking['last_state'] == instance.task_state elif tracking['last_state'] == task_states.SPAWNING: self.assertEqual(vm_states.ACTIVE, instance.vm_state) tracking['last_state'] == instance.task_state else: self.fail('Unexpected save!') fake_image.stub_out_image_service(self) self.stub_out('nova.tests.unit.image.fake._FakeImageService.delete', fake_delete) with mock.patch.object(self.rt, 'instance_claim', side_effect=fake_claim), \ mock.patch.object(instance, 'save') as mock_save: mock_save.side_effect = check_save self.compute.unshelve_instance( self.context, instance, image=image, filter_properties=filter_properties, node=node, request_spec=objects.RequestSpec()) mock_notify_instance_action.assert_has_calls([ mock.call(self.context, instance, 'fake-mini', action='unshelve', phase='start', bdms=mock_bdms), mock.call(self.context, instance, 'fake-mini', action='unshelve', phase='end', bdms=mock_bdms)]) # prepare expect call lists mock_notify_instance_usage_call_list = [ mock.call(self.context, instance, 'unshelve.start'), mock.call(self.context, instance, 'unshelve.end')] mock_notify_instance_usage.assert_has_calls( mock_notify_instance_usage_call_list) mock_prep_block_device.assert_called_once_with(self.context, instance, mock.ANY) mock_setup_network.assert_called_once_with( self.context, instance, self.compute.host, provider_mappings=None) mock_spawn.assert_called_once_with(self.context, instance, 
test.MatchType(objects.ImageMeta), injected_files=[], admin_password=None, allocations={}, network_info=[], block_device_info='fake_bdm') self.mock_get_allocations.assert_called_once_with(self.context, instance.uuid) mock_get_power_state.assert_called_once_with(self.context, instance) self.assertNotIn('shelved_at', instance.system_metadata) self.assertNotIn('shelved_image_id', instance.system_metadata) self.assertNotIn('shelved_host', instance.system_metadata) self.assertEqual(image['id'], self.deleted_image_id) self.assertEqual(instance.host, self.compute.host) self.assertEqual(123, instance.power_state) self.assertEqual(vm_states.ACTIVE, instance.vm_state) self.assertIsNone(instance.task_state) self.assertIsNone(instance.key_data) self.assertEqual(self.compute.host, instance.host) self.assertFalse(instance.auto_disk_config) @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch.object(nova.compute.resource_tracker.ResourceTracker, 'instance_claim') @mock.patch.object(neutron_api.API, 'setup_instance_network_on_host') @mock.patch.object(nova.compute.manager.ComputeManager, '_get_power_state', return_value=123) @mock.patch.object(nova.virt.fake.SmallFakeDriver, 'spawn') @mock.patch.object(nova.compute.manager.ComputeManager, '_prep_block_device', return_value='fake_bdm') @mock.patch.object(nova.compute.manager.ComputeManager, '_notify_about_instance_usage') @mock.patch('nova.utils.get_image_from_system_metadata') def test_unshelve_volume_backed(self, mock_image_meta, mock_notify_instance_usage, mock_prep_block_device, mock_spawn, mock_get_power_state, mock_setup_network, mock_instance_claim, mock_notify_instance_action, mock_get_bdms): mock_bdms = mock.Mock() mock_get_bdms.return_value = mock_bdms instance = self._create_fake_instance_obj() node = test_compute.NODENAME limits = {} filter_properties = {'limits': limits} instance.task_state = task_states.UNSHELVING instance.save() image_meta = {'properties': {'base_image_ref': uuids.image_id}} mock_image_meta.return_value = image_meta tracking = {'last_state': instance.task_state} def fake_claim(context, instance, node, allocations, limits): instance.host = self.compute.host requests = objects.InstancePCIRequests(requests=[]) return claims.Claim(context, instance, test_compute.NODENAME, self.rt, _fake_resources(), requests) mock_instance_claim.side_effect = fake_claim def check_save(expected_task_state=None): if tracking['last_state'] == task_states.UNSHELVING: self.assertEqual(task_states.SPAWNING, instance.task_state) tracking['last_state'] = instance.task_state elif tracking['last_state'] == task_states.SPAWNING: self.assertEqual(123, instance.power_state) self.assertEqual(vm_states.ACTIVE, instance.vm_state) self.assertIsNone(instance.task_state) self.assertIsNone(instance.key_data) self.assertFalse(instance.auto_disk_config) self.assertIsNone(instance.task_state) tracking['last_state'] = instance.task_state else: self.fail('Unexpected save!') with mock.patch.object(instance, 'save') as mock_save: mock_save.side_effect = check_save self.compute.unshelve_instance(self.context, instance, image=None, filter_properties=filter_properties, node=node, request_spec=objects.RequestSpec()) mock_notify_instance_action.assert_has_calls([ mock.call(self.context, instance, 'fake-mini', action='unshelve', phase='start', bdms=mock_bdms), mock.call(self.context, instance, 'fake-mini', action='unshelve', phase='end', bdms=mock_bdms)]) # prepare expect call lists 
mock_notify_instance_usage_call_list = [ mock.call(self.context, instance, 'unshelve.start'), mock.call(self.context, instance, 'unshelve.end')] mock_notify_instance_usage.assert_has_calls( mock_notify_instance_usage_call_list) mock_prep_block_device.assert_called_once_with(self.context, instance, mock.ANY) mock_setup_network.assert_called_once_with( self.context, instance, self.compute.host, provider_mappings=None) mock_instance_claim.assert_called_once_with(self.context, instance, test_compute.NODENAME, {}, limits) mock_spawn.assert_called_once_with(self.context, instance, test.MatchType(objects.ImageMeta), injected_files=[], admin_password=None, allocations={}, network_info=[], block_device_info='fake_bdm') self.mock_get_allocations.assert_called_once_with(self.context, instance.uuid) mock_get_power_state.assert_called_once_with(self.context, instance) @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch.object(nova.compute.resource_tracker.ResourceTracker, 'instance_claim') @mock.patch.object(neutron_api.API, 'setup_instance_network_on_host') @mock.patch.object(nova.virt.fake.SmallFakeDriver, 'spawn', side_effect=test.TestingException('oops!')) @mock.patch.object(nova.compute.manager.ComputeManager, '_prep_block_device', return_value='fake_bdm') @mock.patch.object(nova.compute.manager.ComputeManager, '_notify_about_instance_usage') @mock.patch('nova.utils.get_image_from_system_metadata') @mock.patch.object(nova.compute.manager.ComputeManager, '_terminate_volume_connections') def test_unshelve_spawn_fails_cleanup_volume_connections( self, mock_terminate_volume_connections, mock_image_meta, mock_notify_instance_usage, mock_prep_block_device, mock_spawn, mock_setup_network, mock_instance_claim, mock_notify_instance_action, mock_get_bdms): """Tests error handling when a instance fails to unshelve and makes sure that volume connections are cleaned up from the host and that the host/node values are unset on the instance. """ mock_bdms = mock.Mock() mock_get_bdms.return_value = mock_bdms instance = self._create_fake_instance_obj() node = test_compute.NODENAME limits = {} filter_properties = {'limits': limits} instance.task_state = task_states.UNSHELVING instance.save() image_meta = {'properties': {'base_image_ref': uuids.image_id}} mock_image_meta.return_value = image_meta tracking = {'last_state': instance.task_state} def fake_claim(context, instance, node, allocations, limits): instance.host = self.compute.host instance.node = node requests = objects.InstancePCIRequests(requests=[]) return claims.Claim(context, instance, node, self.rt, _fake_resources(), requests, limits=limits) mock_instance_claim.side_effect = fake_claim def check_save(expected_task_state=None): if tracking['last_state'] == task_states.UNSHELVING: # This is before we've failed. self.assertEqual(task_states.SPAWNING, instance.task_state) tracking['last_state'] = instance.task_state elif tracking['last_state'] == task_states.SPAWNING: # This is after we've failed. 
self.assertIsNone(instance.host) self.assertIsNone(instance.node) self.assertIsNone(instance.task_state) tracking['last_state'] = instance.task_state else: self.fail('Unexpected save!') with mock.patch.object(instance, 'save') as mock_save: mock_save.side_effect = check_save self.assertRaises(test.TestingException, self.compute.unshelve_instance, self.context, instance, image=None, filter_properties=filter_properties, node=node, request_spec=objects.RequestSpec()) mock_notify_instance_action.assert_called_once_with( self.context, instance, 'fake-mini', action='unshelve', phase='start', bdms=mock_bdms) mock_notify_instance_usage.assert_called_once_with( self.context, instance, 'unshelve.start') mock_prep_block_device.assert_called_once_with( self.context, instance, mock_bdms) mock_setup_network.assert_called_once_with( self.context, instance, self.compute.host, provider_mappings=None) mock_instance_claim.assert_called_once_with(self.context, instance, test_compute.NODENAME, {}, limits) mock_spawn.assert_called_once_with( self.context, instance, test.MatchType(objects.ImageMeta), injected_files=[], admin_password=None, allocations={}, network_info=[], block_device_info='fake_bdm') mock_terminate_volume_connections.assert_called_once_with( self.context, instance, mock_bdms) @mock.patch('nova.network.neutron.API.setup_instance_network_on_host') @mock.patch('nova.compute.utils.' 'update_pci_request_spec_with_allocated_interface_name') def test_unshelve_with_resource_request( self, mock_update_pci, mock_setup_network): requested_res = [objects.RequestGroup( requester_id=uuids.port_1, provider_uuids=[uuids.rp1])] request_spec = objects.RequestSpec(requested_resources=requested_res) instance = self._create_fake_instance_obj() self.compute.unshelve_instance( self.context, instance, image=None, filter_properties={}, node='fake-node', request_spec=request_spec) mock_update_pci.assert_called_once_with( self.context, self.compute.reportclient, instance, {uuids.port_1: [uuids.rp1]}) mock_setup_network.assert_called_once_with( self.context, instance, self.compute.host, provider_mappings={uuids.port_1: [uuids.rp1]}) @mock.patch('nova.network.neutron.API.setup_instance_network_on_host', new=mock.NonCallableMock()) @mock.patch('nova.compute.utils.' 
'update_pci_request_spec_with_allocated_interface_name') def test_unshelve_with_resource_request_update_raises( self, mock_update_pci): requested_res = [objects.RequestGroup( requester_id=uuids.port_1, provider_uuids=[uuids.rp1])] request_spec = objects.RequestSpec(requested_resources=requested_res) instance = self._create_fake_instance_obj() mock_update_pci.side_effect = ( exception.UnexpectedResourceProviderNameForPCIRequest( provider=uuids.rp1, requester=uuids.port1, provider_name='unexpected')) self.assertRaises( exception.UnexpectedResourceProviderNameForPCIRequest, self.compute.unshelve_instance, self.context, instance, image=None, filter_properties={}, node='fake-node', request_spec=request_spec) mock_update_pci.assert_called_once_with( self.context, self.compute.reportclient, instance, {uuids.port_1: [uuids.rp1]}) @mock.patch.object(objects.InstanceList, 'get_by_filters') def test_shelved_poll_none_offloaded(self, mock_get_by_filters): # Test instances are not offloaded when shelved_offload_time is -1 self.flags(shelved_offload_time=-1) self.compute._poll_shelved_instances(self.context) self.assertEqual(0, mock_get_by_filters.call_count) @mock.patch('oslo_utils.timeutils.is_older_than') def test_shelved_poll_none_exist(self, mock_older): self.flags(shelved_offload_time=1) mock_older.return_value = False with mock.patch.object(self.compute, 'shelve_offload_instance') as soi: self.compute._poll_shelved_instances(self.context) self.assertFalse(soi.called) @mock.patch('oslo_utils.timeutils.is_older_than') def test_shelved_poll_not_timedout(self, mock_older): mock_older.return_value = False self.flags(shelved_offload_time=1) shelved_time = timeutils.utcnow() time_fixture = self.useFixture(utils_fixture.TimeFixture(shelved_time)) time_fixture.advance_time_seconds(CONF.shelved_offload_time - 1) instance = self._create_fake_instance_obj() instance.vm_state = vm_states.SHELVED instance.task_state = None instance.host = self.compute.host sys_meta = instance.system_metadata sys_meta['shelved_at'] = shelved_time.isoformat() instance.save() with mock.patch.object(self.compute, 'shelve_offload_instance') as soi: self.compute._poll_shelved_instances(self.context) self.assertFalse(soi.called) self.assertTrue(mock_older.called) def test_shelved_poll_timedout(self): self.flags(shelved_offload_time=1) shelved_time = timeutils.utcnow() time_fixture = self.useFixture(utils_fixture.TimeFixture(shelved_time)) time_fixture.advance_time_seconds(CONF.shelved_offload_time + 1) instance = self._create_fake_instance_obj() instance.vm_state = vm_states.SHELVED instance.task_state = None instance.host = self.compute.host sys_meta = instance.system_metadata sys_meta['shelved_at'] = shelved_time.isoformat() instance.save() data = [] def fake_soi(context, instance, **kwargs): data.append(instance.uuid) with mock.patch.object(self.compute, 'shelve_offload_instance') as soi: soi.side_effect = fake_soi self.compute._poll_shelved_instances(self.context) self.assertTrue(soi.called) self.assertEqual(instance.uuid, data[0]) @mock.patch('oslo_utils.timeutils.is_older_than') @mock.patch('oslo_utils.timeutils.parse_strtime') def test_shelved_poll_filters_task_state(self, mock_parse, mock_older): self.flags(shelved_offload_time=1) mock_older.return_value = True instance1 = self._create_fake_instance_obj() instance1.task_state = task_states.SPAWNING instance1.vm_state = vm_states.SHELVED instance1.host = self.compute.host instance1.system_metadata = {'shelved_at': ''} instance1.save() instance2 = 
self._create_fake_instance_obj() instance2.task_state = None instance2.vm_state = vm_states.SHELVED instance2.host = self.compute.host instance2.system_metadata = {'shelved_at': ''} instance2.save() data = [] def fake_soi(context, instance, **kwargs): data.append(instance.uuid) with mock.patch.object(self.compute, 'shelve_offload_instance') as soi: soi.side_effect = fake_soi self.compute._poll_shelved_instances(self.context) self.assertTrue(soi.called) self.assertEqual([instance2.uuid], data) @mock.patch('oslo_utils.timeutils.is_older_than') @mock.patch('oslo_utils.timeutils.parse_strtime') def test_shelved_poll_checks_task_state_on_save(self, mock_parse, mock_older): self.flags(shelved_offload_time=1) mock_older.return_value = True instance = self._create_fake_instance_obj() instance.task_state = None instance.vm_state = vm_states.SHELVED instance.host = self.compute.host instance.system_metadata = {'shelved_at': ''} instance.save() def fake_parse_hook(timestring): instance.task_state = task_states.SPAWNING instance.save() mock_parse.side_effect = fake_parse_hook with mock.patch.object(self.compute, 'shelve_offload_instance') as soi: self.compute._poll_shelved_instances(self.context) self.assertFalse(soi.called) class ShelveComputeAPITestCase(test_compute.BaseTestCase): def _get_vm_states(self, exclude_states=None): vm_state = set([vm_states.ACTIVE, vm_states.BUILDING, vm_states.PAUSED, vm_states.SUSPENDED, vm_states.RESCUED, vm_states.STOPPED, vm_states.RESIZED, vm_states.SOFT_DELETED, vm_states.DELETED, vm_states.ERROR, vm_states.SHELVED, vm_states.SHELVED_OFFLOADED]) if not exclude_states: exclude_states = set() return vm_state - exclude_states def _test_shelve(self, vm_state=vm_states.ACTIVE, boot_from_volume=False, clean_shutdown=True): # Ensure instance can be shelved. 
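        # This helper drives compute_api.shelve() for the given vm_state and
        # asserts which RPC is cast: image-backed instances get a
        # shelve_instance cast carrying the snapshot image id, while
        # boot-from-volume instances skip the snapshot and are offloaded
        # directly via shelve_offload_instance.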
params = dict(task_state=None, vm_state=vm_state, display_name='vm01') fake_instance = self._create_fake_instance_obj(params=params) instance = fake_instance self.assertIsNone(instance['task_state']) with test.nested( mock.patch.object(compute_utils, 'is_volume_backed_instance', return_value=boot_from_volume), mock.patch.object(compute_utils, 'create_image', return_value=dict(id='fake-image-id')), mock.patch.object(instance, 'save'), mock.patch.object(self.compute_api, '_record_action_start'), mock.patch.object(self.compute_api.compute_rpcapi, 'shelve_instance'), mock.patch.object(self.compute_api.compute_rpcapi, 'shelve_offload_instance') ) as ( volume_backed_inst, create_image, instance_save, record_action_start, rpcapi_shelve_instance, rpcapi_shelve_offload_instance ): self.compute_api.shelve(self.context, instance, clean_shutdown=clean_shutdown) self.assertEqual(instance.task_state, task_states.SHELVING) # assert our mock calls volume_backed_inst.assert_called_once_with( self.context, instance) instance_save.assert_called_once_with(expected_task_state=[None]) record_action_start.assert_called_once_with( self.context, instance, instance_actions.SHELVE) if boot_from_volume: rpcapi_shelve_offload_instance.assert_called_once_with( self.context, instance=instance, clean_shutdown=clean_shutdown) else: rpcapi_shelve_instance.assert_called_once_with( self.context, instance=instance, image_id='fake-image-id', clean_shutdown=clean_shutdown) db.instance_destroy(self.context, instance['uuid']) def test_shelve(self): self._test_shelve() def test_shelves_stopped(self): self._test_shelve(vm_state=vm_states.STOPPED) def test_shelves_paused(self): self._test_shelve(vm_state=vm_states.PAUSED) def test_shelves_suspended(self): self._test_shelve(vm_state=vm_states.SUSPENDED) def test_shelves_boot_from_volume(self): self._test_shelve(boot_from_volume=True) def test_shelve_forced_shutdown(self): self._test_shelve(clean_shutdown=False) def test_shelve_boot_from_volume_forced_shutdown(self): self._test_shelve(boot_from_volume=True, clean_shutdown=False) def _test_shelve_invalid_state(self, vm_state): params = dict(vm_state=vm_state) fake_instance = self._create_fake_instance_obj(params=params) self.assertRaises(exception.InstanceInvalidState, self.compute_api.shelve, self.context, fake_instance) def test_shelve_fails_invalid_states(self): invalid_vm_states = self._get_vm_states(set([vm_states.ACTIVE, vm_states.STOPPED, vm_states.PAUSED, vm_states.SUSPENDED])) for state in invalid_vm_states: self._test_shelve_invalid_state(state) def _test_shelve_offload(self, clean_shutdown=True): params = dict(task_state=None, vm_state=vm_states.SHELVED) fake_instance = self._create_fake_instance_obj(params=params) with test.nested( mock.patch.object(fake_instance, 'save'), mock.patch.object(self.compute_api.compute_rpcapi, 'shelve_offload_instance'), mock.patch('nova.compute.api.API._record_action_start') ) as ( instance_save, rpcapi_shelve_offload_instance, record ): self.compute_api.shelve_offload(self.context, fake_instance, clean_shutdown=clean_shutdown) # assert field values set on the instance object self.assertEqual(task_states.SHELVING_OFFLOADING, fake_instance.task_state) instance_save.assert_called_once_with(expected_task_state=[None]) rpcapi_shelve_offload_instance.assert_called_once_with( self.context, instance=fake_instance, clean_shutdown=clean_shutdown) record.assert_called_once_with(self.context, fake_instance, instance_actions.SHELVE_OFFLOAD) def test_shelve_offload(self): self._test_shelve_offload() def 
test_shelve_offload_forced_shutdown(self): self._test_shelve_offload(clean_shutdown=False) def _test_shelve_offload_invalid_state(self, vm_state): params = dict(vm_state=vm_state) fake_instance = self._create_fake_instance_obj(params=params) self.assertRaises(exception.InstanceInvalidState, self.compute_api.shelve_offload, self.context, fake_instance) def test_shelve_offload_fails_invalid_states(self): invalid_vm_states = self._get_vm_states(set([vm_states.SHELVED])) for state in invalid_vm_states: self._test_shelve_offload_invalid_state(state) def _get_specify_state_instance(self, vm_state): # Ensure instance can be unshelved. instance = self._create_fake_instance_obj() self.assertIsNone(instance['task_state']) self.compute_api.shelve(self.context, instance) instance.task_state = None instance.vm_state = vm_state instance.save() return instance @mock.patch.object(objects.RequestSpec, 'get_by_instance_uuid') def test_unshelve(self, get_by_instance_uuid): # Ensure instance can be unshelved. instance = self._get_specify_state_instance(vm_states.SHELVED) fake_spec = objects.RequestSpec() get_by_instance_uuid.return_value = fake_spec with mock.patch.object(self.compute_api.compute_task_api, 'unshelve_instance') as unshelve: self.compute_api.unshelve(self.context, instance) get_by_instance_uuid.assert_called_once_with(self.context, instance.uuid) unshelve.assert_called_once_with(self.context, instance, fake_spec) self.assertEqual(instance.task_state, task_states.UNSHELVING) db.instance_destroy(self.context, instance['uuid']) @mock.patch('nova.availability_zones.get_availability_zones', return_value=['az1', 'az2']) @mock.patch.object(objects.RequestSpec, 'save') @mock.patch.object(objects.RequestSpec, 'get_by_instance_uuid') def test_specified_az_ushelve_invalid_request(self, get_by_instance_uuid, mock_save, mock_availability_zones): # Ensure instance can be unshelved. instance = self._get_specify_state_instance( vm_states.SHELVED_OFFLOADED) new_az = "fake-new-az" fake_spec = objects.RequestSpec() fake_spec.availability_zone = "fake-old-az" get_by_instance_uuid.return_value = fake_spec exc = self.assertRaises(exception.InvalidRequest, self.compute_api.unshelve, self.context, instance, new_az=new_az) self.assertEqual("The requested availability zone is not available", exc.format_message()) @mock.patch('nova.availability_zones.get_availability_zones', return_value=['az1', 'az2']) @mock.patch.object(objects.RequestSpec, 'save') @mock.patch.object(objects.RequestSpec, 'get_by_instance_uuid') def test_specified_az_unshelve_invalid_state(self, get_by_instance_uuid, mock_save, mock_availability_zones): # Ensure instance can be unshelved. instance = self._get_specify_state_instance(vm_states.SHELVED) new_az = "az1" fake_spec = objects.RequestSpec() fake_spec.availability_zone = "fake-old-az" get_by_instance_uuid.return_value = fake_spec self.assertRaises(exception.UnshelveInstanceInvalidState, self.compute_api.unshelve, self.context, instance, new_az=new_az) @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid', new_callable=mock.NonCallableMock) @mock.patch('nova.availability_zones.get_availability_zones') def test_validate_unshelve_az_cross_az_attach_true( self, mock_get_azs, mock_get_bdms): """Tests a case where the new AZ to unshelve does not match the volume attached to the server but cross_az_attach=True so it's not an error. """ # Ensure instance can be unshelved. 
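        # With [cinder]/cross_az_attach=True Nova does not require the
        # instance and its attached volumes to share an availability zone,
        # so _validate_unshelve_az should succeed without ever loading the
        # BDMs (enforced here by the NonCallableMock on
        # BlockDeviceMappingList.get_by_instance_uuid).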
instance = self._create_fake_instance_obj( params=dict(vm_state=vm_states.SHELVED_OFFLOADED)) new_az = "west_az" mock_get_azs.return_value = ["west_az", "east_az"] self.flags(cross_az_attach=True, group='cinder') self.compute_api._validate_unshelve_az(self.context, instance, new_az) mock_get_azs.assert_called_once_with( self.context, self.compute_api.host_api, get_only_available=True) @mock.patch('nova.volume.cinder.API.get') @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') @mock.patch('nova.availability_zones.get_availability_zones') def test_validate_unshelve_az_cross_az_attach_false( self, mock_get_azs, mock_get_bdms, mock_get): """Tests a case where the new AZ to unshelve does not match the volume attached to the server and cross_az_attach=False so it's an error. """ # Ensure instance can be unshelved. instance = self._create_fake_instance_obj( params=dict(vm_state=vm_states.SHELVED_OFFLOADED)) new_az = "west_az" mock_get_azs.return_value = ["west_az", "east_az"] bdms = [objects.BlockDeviceMapping(destination_type='volume', volume_id=uuids.volume_id)] mock_get_bdms.return_value = bdms volume = {'id': uuids.volume_id, 'availability_zone': 'east_az'} mock_get.return_value = volume self.flags(cross_az_attach=False, group='cinder') self.assertRaises(exception.MismatchVolumeAZException, self.compute_api._validate_unshelve_az, self.context, instance, new_az) mock_get_azs.assert_called_once_with( self.context, self.compute_api.host_api, get_only_available=True) mock_get_bdms.assert_called_once_with(self.context, instance.uuid) mock_get.assert_called_once_with(self.context, uuids.volume_id) @mock.patch.object(compute_api.API, '_validate_unshelve_az') @mock.patch.object(objects.RequestSpec, 'save') @mock.patch.object(objects.RequestSpec, 'get_by_instance_uuid') def test_specified_az_unshelve(self, get_by_instance_uuid, mock_save, mock_validate_unshelve_az): # Ensure instance can be unshelved. instance = self._get_specify_state_instance( vm_states.SHELVED_OFFLOADED) new_az = "west_az" fake_spec = objects.RequestSpec() fake_spec.availability_zone = "fake-old-az" get_by_instance_uuid.return_value = fake_spec self.compute_api.unshelve(self.context, instance, new_az=new_az) mock_save.assert_called_once_with() self.assertEqual(new_az, fake_spec.availability_zone) mock_validate_unshelve_az.assert_called_once_with( self.context, instance, new_az) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/compute/test_stats.py0000664000175000017500000002364100000000000022103 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Tests for compute node stats.""" from oslo_utils.fixture import uuidsentinel as uuids from nova.compute import stats from nova.compute import task_states from nova.compute import vm_states from nova import test from nova.tests.unit import fake_instance class StatsTestCase(test.NoDBTestCase): def setUp(self): super(StatsTestCase, self).setUp() self.stats = stats.Stats() def _fake_object(self, updates): return fake_instance.fake_instance_obj(None, **updates) def _create_instance(self, values=None): instance = { "os_type": "Linux", "project_id": "1234", "task_state": None, "vm_state": vm_states.BUILDING, "vcpus": 1, "uuid": uuids.stats_linux_instance_1, } if values: instance.update(values) return self._fake_object(instance) def test_os_type_count(self): os_type = "Linux" self.assertEqual(0, self.stats.num_os_type(os_type)) self.stats._increment("num_os_type_" + os_type) self.stats._increment("num_os_type_" + os_type) self.stats._increment("num_os_type_Vax") self.assertEqual(2, self.stats.num_os_type(os_type)) self.stats["num_os_type_" + os_type] -= 1 self.assertEqual(1, self.stats.num_os_type(os_type)) def test_update_project_count(self): proj_id = "1234" def _get(): return self.stats.num_instances_for_project(proj_id) self.assertEqual(0, _get()) self.stats._increment("num_proj_" + proj_id) self.assertEqual(1, _get()) self.stats["num_proj_" + proj_id] -= 1 self.assertEqual(0, _get()) def test_instance_count(self): self.assertEqual(0, self.stats.num_instances) for i in range(5): self.stats._increment("num_instances") self.stats["num_instances"] -= 1 self.assertEqual(4, self.stats.num_instances) def test_add_stats_for_instance(self): instance = { "os_type": "Linux", "project_id": "1234", "task_state": None, "vm_state": vm_states.BUILDING, "vcpus": 3, "uuid": uuids.stats_linux_instance_1, } self.stats.update_stats_for_instance(self._fake_object(instance)) instance = { "os_type": "FreeBSD", "project_id": "1234", "task_state": task_states.SCHEDULING, "vm_state": None, "vcpus": 1, "uuid": uuids.stats_freebsd_instance, } self.stats.update_stats_for_instance(self._fake_object(instance)) instance = { "os_type": "Linux", "project_id": "2345", "task_state": task_states.SCHEDULING, "vm_state": vm_states.BUILDING, "vcpus": 2, "uuid": uuids.stats_linux_instance_2, } self.stats.update_stats_for_instance(self._fake_object(instance)) instance = { "os_type": "Linux", "project_id": "2345", "task_state": task_states.RESCUING, "vm_state": vm_states.ACTIVE, "vcpus": 2, "uuid": uuids.stats_linux_instance_3, } self.stats.update_stats_for_instance(self._fake_object(instance)) instance = { "os_type": "Linux", "project_id": "2345", "task_state": task_states.UNSHELVING, "vm_state": vm_states.ACTIVE, "vcpus": 2, "uuid": uuids.stats_linux_instance_4, } self.stats.update_stats_for_instance(self._fake_object(instance)) self.assertEqual(4, self.stats.num_os_type("Linux")) self.assertEqual(1, self.stats.num_os_type("FreeBSD")) self.assertEqual(2, self.stats.num_instances_for_project("1234")) self.assertEqual(3, self.stats.num_instances_for_project("2345")) self.assertEqual(1, self.stats["num_task_None"]) self.assertEqual(2, self.stats["num_task_" + task_states.SCHEDULING]) self.assertEqual(1, self.stats["num_task_" + task_states.UNSHELVING]) self.assertEqual(1, self.stats["num_task_" + task_states.RESCUING]) self.assertEqual(1, self.stats["num_vm_None"]) self.assertEqual(2, self.stats["num_vm_" + vm_states.BUILDING]) def test_calculate_workload(self): self.stats._increment("num_task_None") self.stats._increment("num_task_" 
+ task_states.SCHEDULING) self.stats._increment("num_task_" + task_states.SCHEDULING) self.assertEqual(2, self.stats.calculate_workload()) def test_update_stats_for_instance_no_change(self): instance = self._create_instance() self.stats.update_stats_for_instance(instance) self.stats.update_stats_for_instance(instance) # no change self.assertEqual(1, self.stats.num_instances) self.assertEqual(1, self.stats.num_instances_for_project("1234")) self.assertEqual(1, self.stats["num_os_type_Linux"]) self.assertEqual(1, self.stats["num_task_None"]) self.assertEqual(1, self.stats["num_vm_" + vm_states.BUILDING]) def test_update_stats_for_instance_vm_change(self): instance = self._create_instance() self.stats.update_stats_for_instance(instance) instance["vm_state"] = vm_states.PAUSED self.stats.update_stats_for_instance(instance) self.assertEqual(1, self.stats.num_instances) self.assertEqual(1, self.stats.num_instances_for_project(1234)) self.assertEqual(1, self.stats["num_os_type_Linux"]) self.assertEqual(0, self.stats["num_vm_%s" % vm_states.BUILDING]) self.assertEqual(1, self.stats["num_vm_%s" % vm_states.PAUSED]) def test_update_stats_for_instance_task_change(self): instance = self._create_instance() self.stats.update_stats_for_instance(instance) instance["task_state"] = task_states.REBUILDING self.stats.update_stats_for_instance(instance) self.assertEqual(1, self.stats.num_instances) self.assertEqual(1, self.stats.num_instances_for_project("1234")) self.assertEqual(1, self.stats["num_os_type_Linux"]) self.assertEqual(0, self.stats["num_task_None"]) self.assertEqual(1, self.stats["num_task_%s" % task_states.REBUILDING]) def test_update_stats_for_instance_deleted(self): instance = self._create_instance() self.stats.update_stats_for_instance(instance) self.assertEqual(1, self.stats.num_instances_for_project("1234")) instance["vm_state"] = vm_states.DELETED self.stats.update_stats_for_instance(instance) self.assertEqual(0, self.stats.num_instances) self.assertEqual(0, self.stats.num_instances_for_project("1234")) self.assertEqual(0, self.stats.num_os_type("Linux")) self.assertEqual(0, self.stats["num_vm_" + vm_states.BUILDING]) def test_update_stats_for_instance_offloaded(self): instance = self._create_instance() self.stats.update_stats_for_instance(instance) self.assertEqual(1, self.stats.num_instances_for_project("1234")) instance["vm_state"] = vm_states.SHELVED_OFFLOADED self.stats.update_stats_for_instance(instance) self.assertEqual(0, self.stats.num_instances) self.assertEqual(0, self.stats.num_instances_for_project("1234")) self.assertEqual(0, self.stats.num_os_type("Linux")) self.assertEqual(0, self.stats["num_vm_" + vm_states.BUILDING]) def test_io_workload(self): vms = [vm_states.ACTIVE, vm_states.BUILDING, vm_states.PAUSED] tasks = [task_states.RESIZE_MIGRATING, task_states.REBUILDING, task_states.RESIZE_PREP, task_states.IMAGE_SNAPSHOT, task_states.IMAGE_BACKUP, task_states.RESCUING, task_states.UNSHELVING, task_states.SHELVING] for state in vms: self.stats._increment("num_vm_" + state) for state in tasks: self.stats._increment("num_task_" + state) self.assertEqual(8, self.stats.io_workload) def test_io_workload_saved_to_stats(self): values = {'task_state': task_states.RESIZE_MIGRATING} instance = self._create_instance(values) self.stats.update_stats_for_instance(instance) self.assertEqual(2, self.stats["io_workload"]) def test_clear(self): instance = self._create_instance() self.stats.update_stats_for_instance(instance) self.assertNotEqual(0, len(self.stats)) self.assertEqual(1, 
len(self.stats.states)) self.stats.clear() self.assertEqual(0, len(self.stats)) self.assertEqual(0, len(self.stats.states)) def test_build_failed_succeded(self): self.assertEqual('not-set', self.stats.get('failed_builds', 'not-set')) self.stats.build_failed() self.assertEqual(1, self.stats['failed_builds']) self.stats.build_failed() self.assertEqual(2, self.stats['failed_builds']) self.stats.build_succeeded() self.assertEqual(0, self.stats['failed_builds']) self.stats.build_succeeded() self.assertEqual(0, self.stats['failed_builds']) def test_build_succeeded_first(self): self.assertEqual('not-set', self.stats.get('failed_builds', 'not-set')) self.stats.build_succeeded() self.assertEqual(0, self.stats['failed_builds']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/compute/test_virtapi.py0000664000175000017500000002126100000000000022417 0ustar00zuulzuul00000000000000# Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import mock import os_traits from oslo_utils.fixture import uuidsentinel as uuids from nova.compute import manager as compute_manager from nova import context as nova_context from nova import exception from nova import objects from nova.scheduler.client import report from nova import test from nova.virt import fake from nova.virt import virtapi class VirtAPIBaseTest(test.NoDBTestCase, test.APICoverage): cover_api = virtapi.VirtAPI def setUp(self): super(VirtAPIBaseTest, self).setUp() self.set_up_virtapi() def set_up_virtapi(self): self.virtapi = virtapi.VirtAPI() def assertExpected(self, method, *args, **kwargs): self.assertRaises(NotImplementedError, getattr(self.virtapi, method), *args, **kwargs) def test_wait_for_instance_event(self): self.assertExpected('wait_for_instance_event', 'instance', ['event']) def test_exit_wait_early(self): self.assertExpected('exit_wait_early', []) def test_update_compute_provider_status(self): self.assertExpected('update_compute_provider_status', nova_context.get_admin_context(), uuids.rp_uuid, enabled=False) class FakeVirtAPITest(VirtAPIBaseTest): cover_api = fake.FakeVirtAPI def set_up_virtapi(self): self.virtapi = fake.FakeVirtAPI() def assertExpected(self, method, *args, **kwargs): if method == 'wait_for_instance_event': run = False with self.virtapi.wait_for_instance_event(*args, **kwargs): run = True self.assertTrue(run) elif method == 'exit_wait_early': self.virtapi.exit_wait_early(*args, **kwargs) elif method == 'update_compute_provider_status': self.virtapi.update_compute_provider_status(*args, **kwargs) else: self.fail("Unhandled FakeVirtAPI method: %s" % method) class FakeCompute(object): def __init__(self): self.conductor_api = mock.MagicMock() self.db = mock.MagicMock() self._events = [] self.instance_events = mock.MagicMock() self.instance_events.prepare_for_instance_event.side_effect = \ self._prepare_for_instance_event self.reportclient = mock.Mock(spec=report.SchedulerReportClient) # Keep track of the traits set on 
each provider in the test. self.provider_traits = collections.defaultdict(set) self.reportclient.get_provider_traits.side_effect = ( self._get_provider_traits) self.reportclient.set_traits_for_provider.side_effect = ( self._set_traits_for_provider) def _get_provider_traits(self, context, rp_uuid): return mock.Mock(traits=self.provider_traits[rp_uuid]) def _set_traits_for_provider(self, context, rp_uuid, traits): self.provider_traits[rp_uuid] = traits def _event_waiter(self): event = mock.MagicMock() event.status = 'completed' return event def _prepare_for_instance_event(self, instance, name, tag): m = mock.MagicMock() m.instance = instance m.name = name m.tag = tag m.event_name = '%s-%s' % (name, tag) m.wait.side_effect = self._event_waiter self._events.append(m) return m class ComputeVirtAPITest(VirtAPIBaseTest): cover_api = compute_manager.ComputeVirtAPI def set_up_virtapi(self): self.compute = FakeCompute() self.virtapi = compute_manager.ComputeVirtAPI(self.compute) def test_exit_wait_early(self): self.assertRaises(self.virtapi._exit_early_exc, self.virtapi.exit_wait_early, [('foo', 'bar'), ('foo', 'baz')]) def test_wait_for_instance_event(self): and_i_ran = '' event_1_tag = objects.InstanceExternalEvent.make_key( 'event1') event_2_tag = objects.InstanceExternalEvent.make_key( 'event2', 'tag') events = { ('event1', None): event_1_tag, ('event2', 'tag'): event_2_tag, } with self.virtapi.wait_for_instance_event('instance', events.keys()): and_i_ran = 'I ran so far a-waa-y' self.assertEqual('I ran so far a-waa-y', and_i_ran) self.assertEqual(2, len(self.compute._events)) for event in self.compute._events: self.assertEqual('instance', event.instance) self.assertIn((event.name, event.tag), events.keys()) event.wait.assert_called_once_with() def test_wait_for_instance_event_failed(self): def _failer(): event = mock.MagicMock() event.status = 'failed' return event @mock.patch.object(self.virtapi._compute, '_event_waiter', _failer) def do_test(): with self.virtapi.wait_for_instance_event('instance', [('foo', 'bar')]): pass self.assertRaises(exception.NovaException, do_test) def test_wait_for_instance_event_failed_callback(self): def _failer(): event = mock.MagicMock() event.status = 'failed' return event @mock.patch.object(self.virtapi._compute, '_event_waiter', _failer) def do_test(): callback = mock.MagicMock() with self.virtapi.wait_for_instance_event('instance', [('foo', None)], error_callback=callback): pass callback.assert_called_with('foo', 'instance') do_test() def test_wait_for_instance_event_timeout(self): @mock.patch.object(self.virtapi._compute, '_event_waiter', side_effect=test.TestingException()) @mock.patch('eventlet.timeout.Timeout') def do_test(mock_timeout, mock_waiter): with self.virtapi.wait_for_instance_event('instance', [('foo', 'bar')]): pass self.assertRaises(test.TestingException, do_test) def test_wait_for_instance_event_exit_early(self): # Wait for two events, exit early skipping one. # Make sure we waited for one and did not wait for the other with self.virtapi.wait_for_instance_event('instance', [('foo', 'bar'), ('foo', 'baz')]): self.virtapi.exit_wait_early([('foo', 'baz')]) self.fail('never gonna happen') self.assertEqual(2, len(self.compute._events)) for event in self.compute._events: if event.tag == 'bar': event.wait.assert_called_once_with() else: event.wait.assert_not_called() def test_update_compute_provider_status(self): """Tests scenarios for adding/removing the COMPUTE_STATUS_DISABLED trait on a given compute node resource provider. 
""" ctxt = nova_context.get_admin_context() # Start by adding the trait to a disabled provider. self.assertNotIn(uuids.rp_uuid, self.compute.provider_traits) self.virtapi.update_compute_provider_status( ctxt, uuids.rp_uuid, enabled=False) self.assertEqual({os_traits.COMPUTE_STATUS_DISABLED}, self.compute.provider_traits[uuids.rp_uuid]) # Now run it again to make sure nothing changed. with mock.patch.object(self.compute.reportclient, 'set_traits_for_provider', new_callable=mock.NonCallableMock): self.virtapi.update_compute_provider_status( ctxt, uuids.rp_uuid, enabled=False) # Now enable the provider and make sure the trait is removed. self.virtapi.update_compute_provider_status( ctxt, uuids.rp_uuid, enabled=True) self.assertEqual(set(), self.compute.provider_traits[uuids.rp_uuid]) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5344682 nova-21.2.4/nova/tests/unit/conductor/0000775000175000017500000000000000000000000017652 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/conductor/__init__.py0000664000175000017500000000000000000000000021751 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5384681 nova-21.2.4/nova/tests/unit/conductor/tasks/0000775000175000017500000000000000000000000020777 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/conductor/tasks/__init__.py0000664000175000017500000000000000000000000023076 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/conductor/tasks/test_base.py0000664000175000017500000000313500000000000023324 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock from nova.conductor.tasks import base from nova import test class FakeTask(base.TaskBase): def __init__(self, context, instance, fail=False): super(FakeTask, self).__init__(context, instance) self.fail = fail def _execute(self): if self.fail: raise Exception else: pass class TaskBaseTestCase(test.NoDBTestCase): def setUp(self): super(TaskBaseTestCase, self).setUp() self.task = FakeTask(mock.MagicMock(), mock.MagicMock()) @mock.patch.object(FakeTask, 'rollback') def test_wrapper_exception(self, fake_rollback): self.task.fail = True try: self.task.execute() except Exception: pass fake_rollback.assert_called_once_with(test.MatchType(Exception)) @mock.patch.object(FakeTask, 'rollback') def test_wrapper_no_exception(self, fake_rollback): try: self.task.execute() except Exception: pass self.assertFalse(fake_rollback.called) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/conductor/tasks/test_cross_cell_migrate.py0000664000175000017500000024312400000000000026256 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mock from oslo_messaging import exceptions as messaging_exceptions from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils import six from nova.compute import instance_actions from nova.compute import power_state from nova.compute import task_states from nova.compute import utils as compute_utils from nova.compute import vm_states from nova.conductor.tasks import cross_cell_migrate from nova import context as nova_context from nova import exception from nova.network import model as network_model from nova import objects from nova.objects import base as obj_base from nova.objects import fields from nova.objects import instance as instance_obj from nova import test from nova.tests.unit.db import test_db_api from nova.tests.unit import fake_block_device from nova.tests.unit import fake_instance from nova.tests.unit.objects import test_compute_node from nova.tests.unit.objects import test_instance_device_metadata from nova.tests.unit.objects import test_instance_numa from nova.tests.unit.objects import test_instance_pci_requests from nova.tests.unit.objects import test_keypair from nova.tests.unit.objects import test_migration from nova.tests.unit.objects import test_pci_device from nova.tests.unit.objects import test_service from nova.tests.unit.objects import test_vcpu_model class ObjectComparatorMixin(test_db_api.ModelsObjectComparatorMixin): """Mixin class to aid in comparing two objects.""" def _compare_objs(self, obj1, obj2, ignored_keys=None): # We can always ignore id since it is not deterministic when records # are copied over to the target cell database. 
if ignored_keys is None: ignored_keys = [] if 'id' not in ignored_keys: ignored_keys.append('id') prim1 = obj1.obj_to_primitive()['nova_object.data'] prim2 = obj2.obj_to_primitive()['nova_object.data'] if isinstance(obj1, obj_base.ObjectListBase): self.assertEqual(len(obj1), len(obj2)) prim1 = [o['nova_object.data'] for o in prim1['objects']] prim2 = [o['nova_object.data'] for o in prim2['objects']] self._assertEqualListsOfObjects( prim1, prim2, ignored_keys=ignored_keys) else: self._assertEqualObjects(prim1, prim2, ignored_keys=ignored_keys) class TargetDBSetupTaskTestCase(test.TestCase, ObjectComparatorMixin): def setUp(self): super(TargetDBSetupTaskTestCase, self).setUp() cells = list(self.cell_mappings.values()) self.source_cell = cells[0] self.target_cell = cells[1] # Pass is_admin=True because of the funky DB API # _check_instance_exists_in_project check when creating instance tags. self.source_context = nova_context.RequestContext( user_id='fake-user', project_id='fake-project', is_admin=True) self.target_context = self.source_context.elevated() # copy source nova_context.set_target_cell(self.source_context, self.source_cell) nova_context.set_target_cell(self.target_context, self.target_cell) def _create_instance_data(self): """Creates an instance record and associated data like BDMs, VIFs, migrations, etc in the source cell and returns the Instance object. The idea is to create as many things from the Instance.INSTANCE_OPTIONAL_ATTRS list as possible. :returns: The created Instance and Migration objects """ # Create the nova-compute services record first. fake_service = test_service._fake_service() fake_service.pop('version', None) # version field is immutable fake_service.pop('id', None) # cannot create with an id set service = objects.Service(self.source_context, **fake_service) service.create() # Create the compute node using the service. fake_compute_node = copy.copy(test_compute_node.fake_compute_node) fake_compute_node['host'] = service.host fake_compute_node['hypervisor_hostname'] = service.host fake_compute_node['stats'] = {} # the object requires a dict fake_compute_node['service_id'] = service.id fake_compute_node.pop('id', None) # cannot create with an id set compute_node = objects.ComputeNode( self.source_context, **fake_compute_node) compute_node.create() # Build an Instance object with basic fields set. updates = { 'metadata': {'foo': 'bar'}, 'system_metadata': {'roles': ['member']}, 'host': compute_node.host, 'node': compute_node.hypervisor_hostname } inst = fake_instance.fake_instance_obj(self.source_context, **updates) delattr(inst, 'id') # cannot create an instance with an id set # Now we have to dirty all of the fields because fake_instance_obj # uses Instance._from_db_object to create the Instance object we have # but _from_db_object calls obj_reset_changes() which resets all of # the fields that were on the object, including the basic stuff like # the 'host' field, which means those fields don't get set in the DB. # TODO(mriedem): This should live in fake_instance_obj with a # make_creatable kwarg. for field in inst.obj_fields: if field in inst: setattr(inst, field, getattr(inst, field)) # Make sure at least one expected basic field is dirty on the Instance. self.assertIn('host', inst.obj_what_changed()) # Set the optional fields on the instance before creating it. 
inst.pci_requests = objects.InstancePCIRequests(requests=[ objects.InstancePCIRequest( **test_instance_pci_requests.fake_pci_requests[0])]) inst.numa_topology = objects.InstanceNUMATopology( cells=test_instance_numa.fake_obj_numa_topology.cells) inst.trusted_certs = objects.TrustedCerts(ids=[uuids.cert]) inst.vcpu_model = test_vcpu_model.fake_vcpumodel inst.keypairs = objects.KeyPairList(objects=[ objects.KeyPair(**test_keypair.fake_keypair)]) inst.device_metadata = ( test_instance_device_metadata.get_fake_obj_device_metadata( self.source_context)) # FIXME(mriedem): db.instance_create does not handle tags inst.obj_reset_changes(['tags']) inst.create() bdm = { 'instance_uuid': inst.uuid, 'source_type': 'volume', 'destination_type': 'volume', 'volume_id': uuids.volume_id, 'volume_size': 1, 'device_name': '/dev/vda', } bdm = objects.BlockDeviceMapping( self.source_context, **fake_block_device.FakeDbBlockDeviceDict(bdm_dict=bdm)) delattr(bdm, 'id') # cannot create a bdm with an id set bdm.obj_reset_changes(['id']) bdm.create() vif = objects.VirtualInterface( self.source_context, address='de:ad:be:ef:ca:fe', uuid=uuids.port, instance_uuid=inst.uuid) vif.create() info_cache = objects.InstanceInfoCache().new( self.source_context, inst.uuid) info_cache.network_info = network_model.NetworkInfo([ network_model.VIF(id=vif.uuid, address=vif.address)]) info_cache.save(update_cells=False) objects.TagList.create(self.source_context, inst.uuid, ['test']) try: raise test.TestingException('test-fault') except test.TestingException as fault: compute_utils.add_instance_fault_from_exc( self.source_context, inst, fault) objects.InstanceAction().action_start( self.source_context, inst.uuid, 'resize', want_result=False) objects.InstanceActionEvent().event_start( self.source_context, inst.uuid, 'migrate_server', want_result=False) # Create a fake migration for the cross-cell resize operation. migration = objects.Migration( self.source_context, **test_migration.fake_db_migration( instance_uuid=inst.uuid, cross_cell_move=True, migration_type='resize')) delattr(migration, 'id') # cannot create a migration with an id set migration.obj_reset_changes(['id']) migration.create() # Create an old non-resize migration to make sure it is copied to the # target cell database properly. old_migration = objects.Migration( self.source_context, **test_migration.fake_db_migration( instance_uuid=inst.uuid, migration_type='live-migration', status='completed', uuid=uuids.old_migration)) delattr(old_migration, 'id') # cannot create a migration with an id old_migration.obj_reset_changes(['id']) old_migration.create() fake_pci_device = copy.copy(test_pci_device.fake_db_dev) fake_pci_device['extra_info'] = {} # the object requires a dict fake_pci_device['compute_node_id'] = compute_node.id pci_device = objects.PciDevice.create( self.source_context, fake_pci_device) pci_device.allocate(inst) # sets the status and instance_uuid fields pci_device.save() # Return a fresh copy of the instance from the DB with as many joined # fields loaded as possible. expected_attrs = copy.copy(instance_obj.INSTANCE_OPTIONAL_ATTRS) # Cannot load fault from get_by_uuid. expected_attrs.remove('fault') inst = objects.Instance.get_by_uuid( self.source_context, inst.uuid, expected_attrs=expected_attrs) return inst, migration def test_execute_and_rollback(self): """Happy path test which creates an instance with related records in a source cell and then executes TargetDBSetupTask to create those same records in a target cell. 
Runs rollback to make sure the target cell instance is deleted. """ source_cell_instance, migration = self._create_instance_data() instance_uuid = source_cell_instance.uuid task = cross_cell_migrate.TargetDBSetupTask( self.source_context, source_cell_instance, migration, self.target_context) target_cell_instance = task.execute()[0] # The instance in the target cell should be hidden. self.assertTrue(target_cell_instance.hidden, 'Target cell instance should be hidden') # Assert that the various records created in _create_instance_data are # found in the target cell database. We ignore 'hidden' because the # values are explicitly different between source and target DB. The # pci_devices and services fields are not set on the target instance # during TargetDBSetupTask.execute so we ignore those here and verify # them below. tags are also special in that we have to lazy-load them # on target_cell_instance so we check those explicitly below as well. ignored_keys = ['hidden', 'pci_devices', 'services', 'tags'] self._compare_objs(source_cell_instance, target_cell_instance, ignored_keys=ignored_keys) # Explicitly compare flavor fields to make sure they are created and # loaded properly. for flavor_field in ('old_', 'new_', ''): source_field = getattr( source_cell_instance, flavor_field + 'flavor') target_field = getattr( target_cell_instance, flavor_field + 'flavor') # old/new may not be set if source_field is None or target_field is None: self.assertIsNone(source_field) self.assertIsNone(target_field) else: self._compare_objs(source_field, target_field) # Compare PCI requests self.assertIsNotNone(target_cell_instance.pci_requests) self._compare_objs(source_cell_instance.pci_requests, target_cell_instance.pci_requests) # Compare requested instance NUMA topology self.assertIsNotNone(target_cell_instance.numa_topology) self._compare_objs(source_cell_instance.numa_topology, target_cell_instance.numa_topology) # Compare trusted certs self.assertIsNotNone(target_cell_instance.trusted_certs) self._compare_objs(source_cell_instance.trusted_certs, target_cell_instance.trusted_certs) # Compare vcpu_model self.assertIsNotNone(target_cell_instance.vcpu_model) self._compare_objs(source_cell_instance.vcpu_model, target_cell_instance.vcpu_model) # Compare keypairs self.assertEqual(1, len(target_cell_instance.keypairs)) self._compare_objs(source_cell_instance.keypairs, target_cell_instance.keypairs) # Compare device_metadata self.assertIsNotNone(target_cell_instance.device_metadata) self._compare_objs(source_cell_instance.device_metadata, target_cell_instance.device_metadata) # Compare BDMs target_bdms = target_cell_instance.get_bdms() self.assertEqual(1, len(target_bdms)) self._compare_objs(source_cell_instance.get_bdms(), target_bdms) self.assertEqual(source_cell_instance.uuid, target_bdms[0].instance_uuid) # Compare VIFs source_vifs = objects.VirtualInterfaceList.get_by_instance_uuid( self.source_context, instance_uuid) target_vifs = objects.VirtualInterfaceList.get_by_instance_uuid( self.target_context, instance_uuid) self.assertEqual(1, len(target_vifs)) self._compare_objs(source_vifs, target_vifs) # Compare info cache (there should be a single vif in the target) self.assertEqual(1, len(target_cell_instance.info_cache.network_info)) self.assertEqual(target_vifs[0].uuid, target_cell_instance.info_cache.network_info[0]['id']) self._compare_objs(source_cell_instance.info_cache, target_cell_instance.info_cache) # Compare tags self.assertEqual(1, len(target_cell_instance.tags)) 
self._compare_objs(source_cell_instance.tags, target_cell_instance.tags) # Assert that the fault from the source is not in the target. self.assertIsNone(target_cell_instance.fault) # Compare instance actions and events source_actions = objects.InstanceActionList.get_by_instance_uuid( self.source_context, instance_uuid) target_actions = objects.InstanceActionList.get_by_instance_uuid( self.target_context, instance_uuid) self._compare_objs(source_actions, target_actions) # The InstanceActionEvent.action_id is per-cell DB so we need to get # the events per action and compare them but ignore the action_id. source_events = objects.InstanceActionEventList.get_by_action( self.source_context, source_actions[0].id) target_events = objects.InstanceActionEventList.get_by_action( self.target_context, target_actions[0].id) self._compare_objs(source_events, target_events, ignored_keys=['action_id']) # Compare migrations filters = {'instance_uuid': instance_uuid} source_migrations = objects.MigrationList.get_by_filters( self.source_context, filters) target_migrations = objects.MigrationList.get_by_filters( self.target_context, filters) # There should be two migrations in the target cell. self.assertEqual(2, len(target_migrations)) self._compare_objs(source_migrations, target_migrations) # One should be a live-migration type (make sure Migration._from-db_obj # did not set the migration_type for us). migration_types = [mig.migration_type for mig in target_migrations] self.assertIn('resize', migration_types) self.assertIn('live-migration', migration_types) # pci_devices and services should not have been copied over since they # are specific to the compute node in the source cell database for field in ('pci_devices', 'services'): source_value = getattr(source_cell_instance, field) self.assertEqual( 1, len(source_value), 'Unexpected number of %s in source cell instance' % field) target_value = getattr(target_cell_instance, field) self.assertEqual( 0, len(target_value), 'Unexpected number of %s in target cell instance' % field) # Rollback the task and assert the instance and its related data are # gone from the target cell database. Use a modified context to make # sure the instance was hard-deleted. 
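        # read_deleted='yes' proves the instance was hard-deleted: a merely
        # soft-deleted row would still be returned and InstanceNotFound
        # would not be raised.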
task.rollback(test.TestingException('error')) read_deleted_ctxt = self.target_context.elevated(read_deleted='yes') self.assertRaises(exception.InstanceNotFound, objects.Instance.get_by_uuid, read_deleted_ctxt, target_cell_instance.uuid) class CrossCellMigrationTaskTestCase(test.NoDBTestCase): def setUp(self): super(CrossCellMigrationTaskTestCase, self).setUp() source_context = nova_context.get_context() host_selection = objects.Selection( service_host='target.host.com', cell_uuid=uuids.cell_uuid) migration = objects.Migration( id=1, cross_cell_move=False, source_compute='source.host.com') instance = objects.Instance() self.task = cross_cell_migrate.CrossCellMigrationTask( source_context, instance, objects.Flavor(), mock.sentinel.request_spec, migration, mock.sentinel.compute_rpcapi, host_selection, mock.sentinel.alternate_hosts) def test_execute_and_rollback(self): """Basic test to just hit execute and rollback.""" # Mock out the things that execute calls with test.nested( mock.patch.object(self.task.source_migration, 'save'), mock.patch.object(self.task, '_perform_external_api_checks'), mock.patch.object(self.task, '_setup_target_cell_db'), mock.patch.object(self.task, '_prep_resize_at_dest'), mock.patch.object(self.task, '_prep_resize_at_source'), mock.patch.object(self.task, '_finish_resize_at_dest'), ) as ( mock_migration_save, mock_perform_external_api_checks, mock_setup_target_cell_db, mock_prep_resize_at_dest, mock_prep_resize_at_source, mock_finish_resize_at_dest, ): mock_setup_target_cell_db.return_value = ( mock.sentinel.target_cell_migration, mock.sentinel.target_cell_mapping) self.task.execute() # Assert the calls self.assertTrue(self.task.source_migration.cross_cell_move, 'Migration.cross_cell_move should be True.') mock_migration_save.assert_called_once_with() mock_perform_external_api_checks.assert_called_once_with() mock_setup_target_cell_db.assert_called_once_with() mock_prep_resize_at_dest.assert_called_once_with( mock.sentinel.target_cell_migration) mock_prep_resize_at_source.assert_called_once_with() mock_finish_resize_at_dest.assert_called_once_with( mock_prep_resize_at_dest.return_value, mock.sentinel.target_cell_mapping, mock_prep_resize_at_source.return_value) # Now rollback the completed sub-tasks self.task.rollback(test.TestingException('error')) def test_perform_external_api_checks_ok(self): """Tests the happy path scenario where neutron APIs are new enough for what we need. """ with mock.patch.object( self.task.network_api, 'supports_port_binding_extension', return_value=True) as mock_neutron_check: self.task._perform_external_api_checks() mock_neutron_check.assert_called_once_with(self.task.context) def test_perform_external_api_checks_old_neutron(self): """Tests the case that neutron API is old.""" with mock.patch.object( self.task.network_api, 'supports_port_binding_extension', return_value=False): ex = self.assertRaises(exception.MigrationPreCheckError, self.task._perform_external_api_checks) self.assertIn('Required networking service API extension', six.text_type(ex)) @mock.patch('nova.conductor.tasks.cross_cell_migrate.LOG.exception') def test_rollback_idempotent(self, mock_log_exception): """Tests that the rollback routine hits all completed tasks even if one or more of them fail their own rollback routine. """ # Mock out some completed tasks for x in range(3): task = mock.Mock() # The 2nd task will fail its rollback. 
if x == 1: task.rollback.side_effect = test.TestingException('sub-task') self.task._completed_tasks[str(x)] = task # Run execute but mock _execute to fail somehow. error = test.TestingException('main task') with mock.patch.object(self.task, '_execute', side_effect=error): # The TestingException from the main task should be raised. ex = self.assertRaises(test.TestingException, self.task.execute) self.assertEqual('main task', six.text_type(ex)) # And all three sub-task rollbacks should have been called. for subtask in self.task._completed_tasks.values(): subtask.rollback.assert_called_once_with(error) # The 2nd task rollback should have raised and been logged. mock_log_exception.assert_called_once() self.assertEqual('1', mock_log_exception.call_args[0][1]) @mock.patch('nova.objects.CellMapping.get_by_uuid') @mock.patch('nova.context.set_target_cell') @mock.patch.object(cross_cell_migrate.TargetDBSetupTask, 'execute') def test_setup_target_cell_db(self, mock_target_db_set_task_execute, mock_set_target_cell, mock_get_cell_mapping): """Tests setting up and executing TargetDBSetupTask""" mock_target_db_set_task_execute.return_value = ( mock.sentinel.target_cell_instance, mock.sentinel.target_cell_migration) result = self.task._setup_target_cell_db() mock_target_db_set_task_execute.assert_called_once_with() mock_get_cell_mapping.assert_called_once_with( self.task.context, self.task.host_selection.cell_uuid) # The target_cell_context should be set on the main task but as a copy # of the source context. self.assertIsNotNone(self.task._target_cell_context) self.assertIsNot(self.task._target_cell_context, self.task.context) # The target cell context should have been targeted to the target # cell mapping. mock_set_target_cell.assert_called_once_with( self.task._target_cell_context, mock_get_cell_mapping.return_value) # The resulting migration record from TargetDBSetupTask should have # been returned along with the target cell mapping. self.assertIs(result[0], mock.sentinel.target_cell_migration) self.assertIs(result[1], mock_get_cell_mapping.return_value) # The target_cell_instance should be set on the main task. self.assertIsNotNone(self.task._target_cell_instance) self.assertIs(self.task._target_cell_instance, mock.sentinel.target_cell_instance) # And the completed task should have been recorded for rollbacks. 
self.assertIn('TargetDBSetupTask', self.task._completed_tasks) self.assertIsInstance(self.task._completed_tasks['TargetDBSetupTask'], cross_cell_migrate.TargetDBSetupTask) @mock.patch.object(cross_cell_migrate.PrepResizeAtDestTask, 'execute') @mock.patch('nova.availability_zones.get_host_availability_zone', return_value='cell2-az1') def test_prep_resize_at_dest(self, mock_get_az, mock_task_execute): """Tests setting up and executing PrepResizeAtDestTask""" # _setup_target_cell_db set the _target_cell_context and # _target_cell_instance variables so fake those out here self.task._target_cell_context = mock.sentinel.target_cell_context target_inst = objects.Instance( vm_state=vm_states.ACTIVE, system_metadata={}) self.task._target_cell_instance = target_inst target_cell_migration = objects.Migration( # use unique ids for comparisons id=self.task.source_migration.id + 1) self.assertNotIn('migration_context', self.task.instance) mock_task_execute.return_value = objects.MigrationContext( migration_id=target_cell_migration.id) with test.nested( mock.patch.object(self.task, '_update_migration_from_dest_after_claim'), mock.patch.object(self.task.instance, 'save'), mock.patch.object(target_inst, 'save') ) as ( _upd_mig, source_inst_save, target_inst_save ): retval = self.task._prep_resize_at_dest(target_cell_migration) self.assertIs(retval, _upd_mig.return_value) mock_task_execute.assert_called_once_with() mock_get_az.assert_called_once_with( self.task.context, self.task.host_selection.service_host) self.assertIn('PrepResizeAtDestTask', self.task._completed_tasks) self.assertIsInstance( self.task._completed_tasks['PrepResizeAtDestTask'], cross_cell_migrate.PrepResizeAtDestTask) # The new_flavor should be set on the target cell instance along with # the AZ and old_vm_state. self.assertIs(target_inst.new_flavor, self.task.flavor) self.assertEqual(vm_states.ACTIVE, target_inst.system_metadata['old_vm_state']) self.assertEqual(mock_get_az.return_value, target_inst.availability_zone) # A clone of the MigrationContext returned from execute() should be # stored on the source instance with the internal context targeted # at the source cell context and the migration_id updated. self.assertIsNotNone(self.task.instance.migration_context) self.assertEqual(self.task.source_migration.id, self.task.instance.migration_context.migration_id) source_inst_save.assert_called_once_with() _upd_mig.assert_called_once_with(target_cell_migration) @mock.patch('nova.objects.Migration.get_by_uuid') def test_update_migration_from_dest_after_claim(self, get_by_uuid): """Tests the _update_migration_from_dest_after_claim method.""" self.task._target_cell_context = mock.sentinel.target_cell_context target_cell_migration = objects.Migration( uuid=uuids.migration, cross_cell_move=True, dest_compute='dest-compute', dest_node='dest-node', dest_host='192.168.159.176') get_by_uuid.return_value = target_cell_migration.obj_clone() with mock.patch.object(self.task.source_migration, 'save') as save: retval = self.task._update_migration_from_dest_after_claim( target_cell_migration) # The returned target cell migration should be the one we pulled from # the target cell database. self.assertIs(retval, get_by_uuid.return_value) get_by_uuid.assert_called_once_with( self.task._target_cell_context, target_cell_migration.uuid) # The source cell migration on the task should have been updated. 
source_cell_migration = self.task.source_migration self.assertEqual('dest-compute', source_cell_migration.dest_compute) self.assertEqual('dest-node', source_cell_migration.dest_node) self.assertEqual('192.168.159.176', source_cell_migration.dest_host) save.assert_called_once_with() @mock.patch.object(cross_cell_migrate.PrepResizeAtSourceTask, 'execute') def test_prep_resize_at_source(self, mock_task_execute): """Tests setting up and executing PrepResizeAtSourceTask""" snapshot_id = self.task._prep_resize_at_source() self.assertIs(snapshot_id, mock_task_execute.return_value) self.assertIn('PrepResizeAtSourceTask', self.task._completed_tasks) self.assertIsInstance( self.task._completed_tasks['PrepResizeAtSourceTask'], cross_cell_migrate.PrepResizeAtSourceTask) @mock.patch.object(cross_cell_migrate.FinishResizeAtDestTask, 'execute') def test_finish_resize_at_dest(self, mock_task_execute): """Tests setting up and executing FinishResizeAtDestTask""" target_cell_migration = objects.Migration() target_cell_mapping = objects.CellMapping() self.task._finish_resize_at_dest( target_cell_migration, target_cell_mapping, uuids.snapshot_id) mock_task_execute.assert_called_once_with() self.assertIn('FinishResizeAtDestTask', self.task._completed_tasks) self.assertIsInstance( self.task._completed_tasks['FinishResizeAtDestTask'], cross_cell_migrate.FinishResizeAtDestTask) class PrepResizeAtDestTaskTestCase(test.NoDBTestCase): def setUp(self): super(PrepResizeAtDestTaskTestCase, self).setUp() host_selection = objects.Selection( service_host='fake-host', nodename='fake-host', limits=objects.SchedulerLimits()) self.task = cross_cell_migrate.PrepResizeAtDestTask( nova_context.get_context(), objects.Instance(uuid=uuids.instance), objects.Flavor(), objects.Migration(), objects.RequestSpec(), compute_rpcapi=mock.Mock(), host_selection=host_selection, network_api=mock.Mock(), volume_api=mock.Mock()) def test_create_port_bindings(self): """Happy path test for creating port bindings""" with mock.patch.object( self.task.network_api, 'bind_ports_to_host') as mock_bind: self.task._create_port_bindings() self.assertIs(self.task._bindings_by_port_id, mock_bind.return_value) mock_bind.assert_called_once_with( self.task.context, self.task.instance, self.task.host_selection.service_host) def test_create_port_bindings_port_binding_failed(self): """Tests that bind_ports_to_host raises PortBindingFailed which results in a MigrationPreCheckError. """ with mock.patch.object( self.task.network_api, 'bind_ports_to_host', side_effect=exception.PortBindingFailed( port_id=uuids.port_id)) as mock_bind: self.assertRaises(exception.MigrationPreCheckError, self.task._create_port_bindings) self.assertEqual({}, self.task._bindings_by_port_id) mock_bind.assert_called_once_with( self.task.context, self.task.instance, self.task.host_selection.service_host) @mock.patch('nova.objects.BlockDeviceMapping.save') def test_create_volume_attachments(self, mock_bdm_save): """Happy path test for creating volume attachments""" # Two BDMs: one as a local image and one as an attached data volume; # only the volume BDM should be processed and returned. 
bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping( source_type='image', destination_type='local'), objects.BlockDeviceMapping( source_type='volume', destination_type='volume', volume_id=uuids.volume_id, instance_uuid=self.task.instance.uuid)]) with test.nested( mock.patch.object( self.task.instance, 'get_bdms', return_value=bdms), mock.patch.object( self.task.volume_api, 'attachment_create', return_value={'id': uuids.attachment_id}), ) as ( mock_get_bdms, mock_attachment_create ): volume_bdms = self.task._create_volume_attachments() mock_attachment_create.assert_called_once_with( self.task.context, uuids.volume_id, self.task.instance.uuid) # The created attachment ID should be saved for rollbacks. self.assertEqual(1, len(self.task._created_volume_attachment_ids)) self.assertEqual( uuids.attachment_id, self.task._created_volume_attachment_ids[0]) # Only the volume BDM should have been processed and returned. self.assertEqual(1, len(volume_bdms)) self.assertIs(bdms[1], volume_bdms[0]) # The volume BDM attachment_id should have been updated. self.assertEqual(uuids.attachment_id, volume_bdms[0].attachment_id) def test_execute(self): """Happy path for executing the task""" def fake_create_port_bindings(): self.task._bindings_by_port_id = mock.sentinel.bindings with test.nested( mock.patch.object(self.task, '_create_port_bindings', side_effect=fake_create_port_bindings), mock.patch.object(self.task, '_create_volume_attachments'), mock.patch.object( self.task.compute_rpcapi, 'prep_snapshot_based_resize_at_dest') ) as ( _create_port_bindings, _create_volume_attachments, prep_snapshot_based_resize_at_dest ): # Execute the task. The return value should be the MigrationContext # returned from prep_snapshot_based_resize_at_dest. self.assertEqual( prep_snapshot_based_resize_at_dest.return_value, self.task.execute()) _create_port_bindings.assert_called_once_with() _create_volume_attachments.assert_called_once_with() prep_snapshot_based_resize_at_dest.assert_called_once_with( self.task.context, self.task.instance, self.task.flavor, self.task.host_selection.nodename, self.task.target_migration, self.task.host_selection.limits, self.task.request_spec, self.task.host_selection.service_host) def test_execute_messaging_timeout(self): """Tests the case that prep_snapshot_based_resize_at_dest raises MessagingTimeout which results in a MigrationPreCheckError. 
""" with test.nested( mock.patch.object(self.task, '_create_port_bindings'), mock.patch.object(self.task, '_create_volume_attachments'), mock.patch.object( self.task.compute_rpcapi, 'prep_snapshot_based_resize_at_dest', side_effect=messaging_exceptions.MessagingTimeout) ) as ( _create_port_bindings, _create_volume_attachments, prep_snapshot_based_resize_at_dest ): ex = self.assertRaises( exception.MigrationPreCheckError, self.task.execute) self.assertIn( 'RPC timeout while checking if we can cross-cell migrate to ' 'host: fake-host', six.text_type(ex)) _create_port_bindings.assert_called_once_with() _create_volume_attachments.assert_called_once_with() prep_snapshot_based_resize_at_dest.assert_called_once_with( self.task.context, self.task.instance, self.task.flavor, self.task.host_selection.nodename, self.task.target_migration, self.task.host_selection.limits, self.task.request_spec, self.task.host_selection.service_host) @mock.patch('nova.conductor.tasks.cross_cell_migrate.LOG.exception') def test_rollback(self, mock_log_exception): """Tests rollback to make sure it idempotently handles cleaning up port bindings and volume attachments even if one in the set fails for each. """ # Make sure we have two port bindings and two volume attachments # because we are going to make the first of each fail and we want to # make sure we still try to delete the other. self.task._bindings_by_port_id = { uuids.port_id1: mock.sentinel.binding1, uuids.port_id2: mock.sentinel.binding2 } self.task._created_volume_attachment_ids = [ uuids.attachment_id1, uuids.attachment_id2 ] with test.nested( mock.patch.object( self.task.network_api, 'delete_port_binding', # First call fails, second is OK. side_effect=(exception.PortBindingDeletionFailed, None)), mock.patch.object( self.task.volume_api, 'attachment_delete', # First call fails, second is OK. side_effect=(exception.CinderConnectionFailed, None)), ) as ( delete_port_binding, attachment_delete ): self.task.rollback(test.TestingException('error')) # Should have called both delete methods twice in any order. host = self.task.host_selection.service_host delete_port_binding.assert_has_calls([ mock.call(self.task.context, port_id, host) for port_id in self.task._bindings_by_port_id], any_order=True) attachment_delete.assert_has_calls([ mock.call(self.task.context, attachment_id) for attachment_id in self.task._created_volume_attachment_ids], any_order=True) # Should have logged both exceptions. self.assertEqual(2, mock_log_exception.call_count) class PrepResizeAtSourceTaskTestCase(test.NoDBTestCase): def setUp(self): super(PrepResizeAtSourceTaskTestCase, self).setUp() self.task = cross_cell_migrate.PrepResizeAtSourceTask( nova_context.get_context(), objects.Instance( uuid=uuids.instance, vm_state=vm_states.ACTIVE, display_name='fake-server', system_metadata={}, host='source.host.com'), objects.Migration(), objects.RequestSpec(), compute_rpcapi=mock.Mock(), image_api=mock.Mock()) @mock.patch('nova.compute.utils.create_image') @mock.patch('nova.objects.Instance.save') def test_execute_volume_backed(self, instance_save, create_image): """Tests execution with a volume-backed server so no snapshot image is created. """ self.task.request_spec.is_bfv = True # No image should be created so no image is returned. 
self.assertIsNone(self.task.execute()) self.assertIsNone(self.task._image_id) create_image.assert_not_called() self.task.compute_rpcapi.prep_snapshot_based_resize_at_source.\ assert_called_once_with( self.task.context, self.task.instance, self.task.migration, snapshot_id=None) # The instance should have been updated. instance_save.assert_called_once_with( expected_task_state=task_states.RESIZE_PREP) self.assertEqual( task_states.RESIZE_MIGRATING, self.task.instance.task_state) self.assertEqual(self.task.instance.vm_state, self.task.instance.system_metadata['old_vm_state']) @mock.patch('nova.compute.utils.create_image', return_value={'id': uuids.snapshot_id}) @mock.patch('nova.objects.Instance.save') def test_execute_image_backed(self, instance_save, create_image): """Tests execution with an image-backed server so a snapshot image is created. """ self.task.request_spec.is_bfv = False self.task.instance.image_ref = uuids.old_image_ref # An image should be created so an image ID is returned. self.assertEqual(uuids.snapshot_id, self.task.execute()) self.assertEqual(uuids.snapshot_id, self.task._image_id) create_image.assert_called_once_with( self.task.context, self.task.instance, 'fake-server-resize-temp', 'snapshot', self.task.image_api) self.task.compute_rpcapi.prep_snapshot_based_resize_at_source.\ assert_called_once_with( self.task.context, self.task.instance, self.task.migration, snapshot_id=uuids.snapshot_id) # The instance should have been updated. instance_save.assert_called_once_with( expected_task_state=task_states.RESIZE_PREP) self.assertEqual( task_states.RESIZE_MIGRATING, self.task.instance.task_state) self.assertEqual(self.task.instance.vm_state, self.task.instance.system_metadata['old_vm_state']) @mock.patch('nova.compute.utils.delete_image') def test_rollback(self, delete_image): """Tests rollback when there is an image and when there is not.""" # First test when there is no image_id so we do not try to delete it. self.task.rollback(test.TestingException('error')) delete_image.assert_not_called() # Now set an image and we should try to delete it. self.task._image_id = uuids.image_id self.task.rollback(test.TestingException('error')) delete_image.assert_called_once_with( self.task.context, self.task.instance, self.task.image_api, self.task._image_id) class FinishResizeAtDestTaskTestCase(test.TestCase): """Tests for FinishResizeAtDestTask which rely on a database""" NUMBER_OF_CELLS = 2 # setUp below needs a source and a target cell def _create_instance(self, ctxt, create_instance_mapping=False, **updates): """Create a fake instance with the given cell-targeted context :param ctxt: Cell-targeted RequestContext :param create_instance_mapping: If True, create an InstanceMapping for the instance pointed at the cell in which the ctxt is targeted, otherwise no InstanceMapping is created. :param updates: Additional fields to set on the Instance object. :returns: Instance object that was created. """ inst = fake_instance.fake_instance_obj(ctxt, **updates) delattr(inst, 'id') # make it creatable # Now we have to dirty all of the fields because fake_instance_obj # uses Instance._from_db_object to create the Instance object we have # but _from_db_object calls obj_reset_changes() which resets all of # the fields that were on the object, including the basic stuff like # the 'host' field, which means those fields don't get set in the DB. 
# TODO(mriedem): This should live in fake_instance_obj for field in inst.obj_fields: if field in inst: setattr(inst, field, getattr(inst, field)) # FIXME(mriedem): db.instance_create does not handle tags inst.obj_reset_changes(['tags']) inst.create() if create_instance_mapping: # Find the cell mapping from the context. self.assertIsNotNone(ctxt.cell_uuid, 'ctxt must be targeted to a cell.') for cell in self.cell_mappings.values(): if cell.uuid == ctxt.cell_uuid: break else: raise Exception('Unable to find CellMapping with UUID %s' % ctxt.cell_uuid) mapping = objects.InstanceMapping( ctxt, instance_uuid=inst.uuid, project_id=inst.project_id, cell_mapping=cell) mapping.create() return inst def setUp(self): super(FinishResizeAtDestTaskTestCase, self).setUp() cells = list(self.cell_mappings.values()) source_cell = cells[0] target_cell = cells[1] self.source_context = nova_context.RequestContext( user_id='fake-user', project_id='fake-project', is_admin=True) self.target_context = self.source_context.elevated() # copy source nova_context.set_target_cell(self.source_context, source_cell) nova_context.set_target_cell(self.target_context, target_cell) # Create the source cell instance. source_instance = self._create_instance( self.source_context, create_instance_mapping=True, hidden=False) # Create the instance action record in the source cell which is needed # by the EventReporter. objects.InstanceAction.action_start( self.source_context, source_instance.uuid, instance_actions.RESIZE, want_result=False) # Create the target cell instance which would normally be a clone of # the source cell instance but the only thing these tests care about # is that the UUID matches. The target cell instance is also hidden. target_instance = self._create_instance( self.target_context, hidden=True, uuid=source_instance.uuid) target_migration = objects.Migration(dest_compute='target.host.com') self.task = cross_cell_migrate.FinishResizeAtDestTask( self.target_context, target_instance, target_migration, source_instance, compute_rpcapi=mock.Mock(), target_cell_mapping=target_cell, snapshot_id=uuids.snapshot_id, request_spec=objects.RequestSpec()) def test_execute(self): """Tests the happy path scenario for the task execution.""" with test.nested( mock.patch.object( self.task.compute_rpcapi, 'finish_snapshot_based_resize_at_dest'), mock.patch.object(self.task.instance, 'refresh') ) as ( finish_resize, refresh ): self.task.execute() # _finish_snapshot_based_resize_at_dest will set the instance # task_state to resize_migrated, save the change, and call the # finish_snapshot_based_resize_at_dest method. target_instance = self.task.instance self.assertEqual(task_states.RESIZE_MIGRATED, self.task.instance.task_state) finish_resize.assert_called_once_with( self.task.context, target_instance, self.task.migration, self.task.snapshot_id, self.task.request_spec) refresh.assert_called_once_with() # _update_instance_mapping will swap the hidden fields and update # the instance mapping to point at the target cell. 
self.assertFalse(target_instance.hidden, 'Target cell instance should not be hidden') source_instance = self.task.source_cell_instance source_instance.refresh() self.assertTrue(source_instance.hidden, 'Source cell instance should be hidden') mapping = objects.InstanceMapping.get_by_instance_uuid( self.task.context, target_instance.uuid) self.assertEqual(self.target_context.cell_uuid, mapping.cell_mapping.uuid) @mock.patch('nova.objects.InstanceMapping.save') def test_finish_snapshot_based_resize_at_dest_fails(self, mock_im_save): """Tests when the finish_snapshot_based_resize_at_dest compute method raises an error. """ with test.nested( mock.patch.object(self.task.compute_rpcapi, 'finish_snapshot_based_resize_at_dest', side_effect=test.TestingException('oops')), mock.patch.object(self.task, '_copy_latest_fault'), ) as ( finish_resize, copy_fault ): self.assertRaises(test.TestingException, self.task._finish_snapshot_based_resize_at_dest) # The source cell instance should be in error state. source_instance = self.task.source_cell_instance source_instance.refresh() self.assertEqual(vm_states.ERROR, source_instance.vm_state) self.assertIsNone(source_instance.task_state) # And the latest fault and instance action event should have been # copied from the target cell DB to the source cell DB. copy_fault.assert_called_once_with(self.source_context) # Assert the event was recorded in the source cell DB. event_name = 'compute_finish_snapshot_based_resize_at_dest' action = objects.InstanceAction.get_by_request_id( source_instance._context, source_instance.uuid, source_instance._context.request_id) self.assertIsNotNone(action, 'InstanceAction not found.') events = objects.InstanceActionEventList.get_by_action( source_instance._context, action.id) self.assertEqual(1, len(events), events) self.assertEqual(event_name, events[0].event) self.assertEqual('Error', events[0].result) self.assertIn('_finish_snapshot_based_resize_at_dest', events[0].traceback) self.assertEqual(self.task.migration.dest_compute, events[0].host) # Assert the instance mapping was never updated. mock_im_save.assert_not_called() def test_copy_latest_fault(self): """Tests _copy_latest_fault working as expected""" # Inject a fault in the target cell database. try: raise test.TestingException('test-fault') except test.TestingException as fault: compute_utils.add_instance_fault_from_exc( self.target_context, self.task.instance, fault) self.task._copy_latest_fault(self.source_context) # Now make sure that fault shows up in the source cell DB (it will # get lazy-loaded here). fault = self.task.source_cell_instance.fault self.assertIsNotNone(fault, 'Fault not copied to source cell DB') # And it's the fault we expect. self.assertEqual('TestingException', fault.message) @mock.patch('nova.conductor.tasks.cross_cell_migrate.LOG.exception') def test_copy_latest_fault_error(self, mock_log): """Tests that _copy_latest_fault errors are swallowed""" with mock.patch('nova.objects.InstanceFault.get_latest_for_instance', side_effect=test.TestingException): self.task._copy_latest_fault(self.source_context) # The source cell should not have a fault. self.assertIsNone(self.task.source_cell_instance.fault) # The error should have been logged. 
mock_log.assert_called_once() self.assertIn('Failed to copy instance fault from target cell DB', mock_log.call_args[0][0]) class UtilityTestCase(test.NoDBTestCase): """Tests utility methods in the cross_cell_migrate module.""" @mock.patch('nova.objects.HostMapping.get_by_host', return_value=objects.HostMapping( cell_mapping=objects.CellMapping(uuid=uuids.cell))) @mock.patch('nova.objects.Instance.get_by_uuid') def test_get_inst_and_cell_map_from_source(self, mock_get_inst, mock_get_by_host): target_cell_context = nova_context.get_admin_context() # Stub out Instance.get_by_uuid to make sure a copy of the context is # targeted at the source cell mapping. def stub_get_by_uuid(ctxt, *args, **kwargs): self.assertIsNot(ctxt, target_cell_context) self.assertEqual(uuids.cell, ctxt.cell_uuid) return mock.sentinel.instance mock_get_inst.side_effect = stub_get_by_uuid inst, cell_mapping = ( cross_cell_migrate.get_inst_and_cell_map_from_source( target_cell_context, 'source-host', uuids.instance)) self.assertIs(inst, mock.sentinel.instance) self.assertIs(cell_mapping, mock_get_by_host.return_value.cell_mapping) mock_get_by_host.assert_called_once_with( target_cell_context, 'source-host') mock_get_inst.assert_called_once_with( test.MatchType(nova_context.RequestContext), uuids.instance, expected_attrs=['flavor', 'info_cache', 'system_metadata']) class ConfirmResizeTaskTestCase(test.NoDBTestCase): def setUp(self): super(ConfirmResizeTaskTestCase, self).setUp() context = nova_context.get_admin_context() compute_rpcapi = mock.Mock() self.task = cross_cell_migrate.ConfirmResizeTask( context, objects.Instance(context, uuid=uuids.instance, host='target-host', vm_state=vm_states.RESIZED, system_metadata={ 'old_vm_state': vm_states.ACTIVE}), objects.Migration(context, uuid=uuids.migration, dest_compute='target-host', source_compute='source-host', status='confirming'), mock.sentinel.legacy_notifier, compute_rpcapi) @mock.patch('nova.conductor.tasks.cross_cell_migrate.' 'get_inst_and_cell_map_from_source') def test_execute(self, mock_get_instance): source_cell_instance = objects.Instance( mock.MagicMock(), uuid=uuids.instance) source_cell_instance.destroy = mock.Mock() mock_get_instance.return_value = ( source_cell_instance, objects.CellMapping()) with test.nested( mock.patch.object(self.task, '_send_resize_confirm_notification'), mock.patch.object(self.task, '_cleanup_source_host'), mock.patch.object(self.task, '_finish_confirm_in_target_cell') ) as ( _send_resize_confirm_notification, _cleanup_source_host, _finish_confirm_in_target_cell ): self.task.execute() mock_get_instance.assert_called_once_with( self.task.context, self.task.migration.source_compute, self.task.instance.uuid) self.assertEqual(2, _send_resize_confirm_notification.call_count) _send_resize_confirm_notification.assert_has_calls([ mock.call(source_cell_instance, fields.NotificationPhase.START), mock.call(self.task.instance, fields.NotificationPhase.END)]) _cleanup_source_host.assert_called_once_with(source_cell_instance) source_cell_instance.destroy.assert_called_once_with( hard_delete=True) _finish_confirm_in_target_cell.assert_called_once_with() @mock.patch('nova.conductor.tasks.cross_cell_migrate.' 
'get_inst_and_cell_map_from_source', side_effect=exception.InstanceNotFound( instance_id=uuids.instance)) @mock.patch('nova.objects.Migration.save') @mock.patch('nova.objects.RequestSpec.get_by_instance_uuid') @mock.patch('nova.scheduler.utils.set_vm_state_and_notify') def test_rollback(self, mock_set_state_notify, mock_get_reqspec, mock_mig_save, mock_get_instance): self.assertRaises(exception.InstanceNotFound, self.task.execute) mock_get_instance.assert_called_once_with( self.task.context, self.task.migration.source_compute, self.task.instance.uuid) self.assertEqual('error', self.task.migration.status) mock_mig_save.assert_called_once_with() mock_get_reqspec.assert_called_once_with( self.task.context, self.task.instance.uuid) mock_set_state_notify.assert_called_once_with( self.task.context, self.task.instance.uuid, 'compute_task', 'migrate_server', {'vm_state': vm_states.ERROR, 'task_state': None}, mock_get_instance.side_effect, mock_get_reqspec.return_value) @mock.patch('nova.compute.utils.notify_about_instance_usage') @mock.patch('nova.compute.utils.notify_about_instance_action') def test_send_resize_confirm_notification(self, mock_versioned_notify, mock_legacy_notify): self.flags(host='fake-conductor-host') instance = self.task.instance self.task._send_resize_confirm_notification(instance, 'fake-phase') mock_legacy_notify.assert_called_once_with( self.task.legacy_notifier, instance._context, instance, 'resize.confirm.fake-phase') mock_versioned_notify.assert_called_once_with( instance._context, instance, 'fake-conductor-host', action=fields.NotificationAction.RESIZE_CONFIRM, phase='fake-phase') @mock.patch('nova.objects.InstanceAction.action_start') @mock.patch('nova.objects.Migration.get_by_uuid') @mock.patch('nova.objects.InstanceActionEvent') # stub EventReporter calls def test_cleanup_source_host( self, mock_action_event, mock_get_mig, mock_action_start): instance = objects.Instance(nova_context.get_admin_context(), uuid=uuids.instance, flavor=objects.Flavor()) self.task._cleanup_source_host(instance) self.assertIs(instance.old_flavor, instance.flavor) mock_action_start.assert_called_once_with( instance._context, instance.uuid, instance_actions.CONFIRM_RESIZE, want_result=False) mock_get_mig.assert_called_once_with( instance._context, self.task.migration.uuid) self.task.compute_rpcapi.confirm_snapshot_based_resize_at_source.\ assert_called_once_with(instance._context, instance, mock_get_mig.return_value) mock_action_event.event_start.assert_called_once_with( self.task.context, uuids.instance, 'compute_confirm_snapshot_based_resize_at_source', want_result=False, host=mock_get_mig.return_value.source_compute) mock_action_event.event_finish_with_failure.assert_called_once_with( self.task.context, uuids.instance, 'compute_confirm_snapshot_based_resize_at_source', exc_val=None, exc_tb=None, want_result=False) @mock.patch('nova.objects.Migration.save') @mock.patch('nova.objects.Instance.save') @mock.patch('nova.objects.Instance.drop_migration_context') def test_finish_confirm_in_target_cell(self, mock_drop_ctx, mock_inst_save, mock_mig_save): with mock.patch.object( self.task, '_set_vm_and_task_state') as mock_set_state: self.task._finish_confirm_in_target_cell() self.assertEqual('confirmed', self.task.migration.status) mock_mig_save.assert_called_once_with() self.assertNotIn('old_vm_state', self.task.instance.system_metadata) self.assertIsNone(self.task.instance.old_flavor) self.assertIsNone(self.task.instance.new_flavor) mock_set_state.assert_called_once_with() 
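# The migration context should be dropped and the instance saved with # the task states that allow a pending delete or soft-delete to proceed.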
mock_drop_ctx.assert_called_once_with() mock_inst_save.assert_called_once_with(expected_task_state=[ None, task_states.DELETING, task_states.SOFT_DELETING]) def test_set_vm_and_task_state_shutdown(self): self.task.instance.power_state = power_state.SHUTDOWN self.task._set_vm_and_task_state() self.assertEqual(vm_states.STOPPED, self.task.instance.vm_state) self.assertIsNone(self.task.instance.task_state) def test_set_vm_and_task_state_active(self): self.task.instance.power_state = power_state.RUNNING self.task._set_vm_and_task_state() self.assertEqual(vm_states.ACTIVE, self.task.instance.vm_state) self.assertIsNone(self.task.instance.task_state) class RevertResizeTaskTestCase(test.NoDBTestCase, ObjectComparatorMixin): def setUp(self): super(RevertResizeTaskTestCase, self).setUp() target_cell_context = nova_context.get_admin_context() target_cell_context.cell_uuid = uuids.target_cell instance = fake_instance.fake_instance_obj( target_cell_context, **{ 'vm_state': vm_states.RESIZED, 'task_state': task_states.RESIZE_REVERTING, 'expected_attrs': ['system_metadata', 'flavor'] }) migration = objects.Migration( target_cell_context, uuid=uuids.migration, status='reverting', source_compute='source-host', dest_compute='dest-host') legacy_notifier = mock.MagicMock() compute_rpcapi = mock.MagicMock() self.task = cross_cell_migrate.RevertResizeTask( target_cell_context, instance, migration, legacy_notifier, compute_rpcapi) def _generate_source_cell_instance(self): source_cell_context = nova_context.get_admin_context() source_cell_context.cell_uuid = uuids.source_cell source_cell_instance = self.task.instance.obj_clone() source_cell_instance._context = source_cell_context return source_cell_instance @mock.patch('nova.conductor.tasks.cross_cell_migrate.' 'get_inst_and_cell_map_from_source') @mock.patch('nova.objects.InstanceActionEvent') # Stub EventReport calls. def test_execute(self, mock_action_event, mock_get_instance): """Happy path test for the execute method.""" # Setup mocks. source_cell_instance = self._generate_source_cell_instance() source_cell_context = source_cell_instance._context source_cell_mapping = objects.CellMapping(source_cell_context, uuid=uuids.source_cell) mock_get_instance.return_value = (source_cell_instance, source_cell_mapping) def stub_update_instance_in_source_cell(*args, **kwargs): # Ensure _update_instance_mapping is not called before # _update_instance_in_source_cell. _update_instance_mapping.assert_not_called() return mock.sentinel.source_cell_migration with test.nested( mock.patch.object(self.task, '_send_resize_revert_notification'), mock.patch.object(self.task, '_update_instance_in_source_cell', side_effect=stub_update_instance_in_source_cell), mock.patch.object(self.task, '_update_instance_mapping'), mock.patch.object(self.task.instance, 'destroy'), mock.patch.object(source_cell_instance, 'refresh'), ) as ( _send_resize_revert_notification, _update_instance_in_source_cell, _update_instance_mapping, mock_inst_destroy, mock_inst_refresh, ): # Run the code. self.task.execute() # Should have sent a start and end notification. 
self.assertEqual(2, _send_resize_revert_notification.call_count, _send_resize_revert_notification.calls) _send_resize_revert_notification.assert_has_calls([ mock.call(self.task.instance, fields.NotificationPhase.START), mock.call(source_cell_instance, fields.NotificationPhase.END), ]) mock_get_instance.assert_called_once_with( self.task.context, self.task.migration.source_compute, self.task.instance.uuid) _update_instance_in_source_cell.assert_called_once_with( source_cell_instance) _update_instance_mapping.assert_called_once_with( source_cell_instance, source_cell_mapping) # _source_cell_instance and _source_cell_migration should have been # set for rollbacks self.assertIs(self.task._source_cell_instance, source_cell_instance) self.assertIs(self.task._source_cell_migration, mock.sentinel.source_cell_migration) # Cleanup at dest host. self.task.compute_rpcapi.revert_snapshot_based_resize_at_dest.\ assert_called_once_with(self.task.context, self.task.instance, self.task.migration) # EventReporter should have been used. event_name = 'compute_revert_snapshot_based_resize_at_dest' mock_action_event.event_start.assert_called_once_with( source_cell_context, source_cell_instance.uuid, event_name, want_result=False, host=self.task.migration.dest_compute) mock_action_event.event_finish_with_failure.assert_called_once_with( source_cell_context, source_cell_instance.uuid, event_name, exc_val=None, exc_tb=None, want_result=False) mock_action_event.event_finish.assert_called_once_with( source_cell_context, source_cell_instance.uuid, 'conductor_revert_snapshot_based_resize', want_result=False) # Destroy the instance in the target cell. mock_inst_destroy.assert_called_once_with(hard_delete=True) # Cleanup at source host. self.task.compute_rpcapi.\ finish_revert_snapshot_based_resize_at_source.\ assert_called_once_with( source_cell_context, source_cell_instance, mock.sentinel.source_cell_migration) # Refresh the source cell instance so we have the latest data. mock_inst_refresh.assert_called_once_with() @mock.patch('nova.conductor.tasks.cross_cell_migrate.RevertResizeTask.' '_execute') @mock.patch('nova.objects.RequestSpec.get_by_instance_uuid') @mock.patch('nova.scheduler.utils.set_vm_state_and_notify') def test_rollback_target_cell( self, mock_set_state_notify, mock_get_reqspec, mock_execute): """Tests the case that we did not update the instance mapping so we set the target cell migration to error status. """ error = test.TestingException('zoinks!') mock_execute.side_effect = error with mock.patch.object(self.task.migration, 'save') as mock_save: self.assertRaises(test.TestingException, self.task.execute) self.assertEqual('error', self.task.migration.status) mock_save.assert_called_once_with() mock_get_reqspec.assert_called_once_with( self.task.context, self.task.instance.uuid) mock_set_state_notify.assert_called_once_with( self.task.instance._context, self.task.instance.uuid, 'compute_task', 'migrate_server', {'vm_state': vm_states.ERROR, 'task_state': None}, error, mock_get_reqspec.return_value) self.assertIn('The instance is mapped to the target cell', self.stdlog.logger.output) @mock.patch('nova.conductor.tasks.cross_cell_migrate.RevertResizeTask.' '_execute') @mock.patch('nova.objects.RequestSpec.get_by_instance_uuid') @mock.patch('nova.scheduler.utils.set_vm_state_and_notify') def test_rollback_source_cell( self, mock_set_state_notify, mock_get_reqspec, mock_execute): """Tests the case that we did update the instance mapping so we set the source cell migration to error status. 
""" source_cell_instance = self._generate_source_cell_instance() source_cell_context = source_cell_instance._context self.task._source_cell_instance = source_cell_instance self.task._source_cell_migration = objects.Migration( source_cell_context, status='reverting', dest_compute='dest-host') error = test.TestingException('jinkies!') mock_execute.side_effect = error with mock.patch.object(self.task._source_cell_migration, 'save') as mock_save: self.assertRaises(test.TestingException, self.task.execute) self.assertEqual('error', self.task._source_cell_migration.status) mock_save.assert_called_once_with() mock_get_reqspec.assert_called_once_with( self.task.context, self.task.instance.uuid) mock_set_state_notify.assert_called_once_with( source_cell_context, source_cell_instance.uuid, 'compute_task', 'migrate_server', {'vm_state': vm_states.ERROR, 'task_state': None}, error, mock_get_reqspec.return_value) self.assertIn('The instance is mapped to the source cell', self.stdlog.logger.output) @mock.patch('nova.compute.utils.notify_about_instance_usage') @mock.patch('nova.compute.utils.notify_about_instance_action') def test_send_resize_revert_notification(self, mock_notify_action, mock_notify_usage): self.flags(host='fake-conductor-host') instance = self.task.instance self.task._send_resize_revert_notification(instance, 'foo') # Assert the legacy notification was sent. mock_notify_usage.assert_called_once_with( self.task.legacy_notifier, instance._context, instance, 'resize.revert.foo') # Assert the versioned notification was sent. mock_notify_action.assert_called_once_with( instance._context, instance, 'fake-conductor-host', action=fields.NotificationAction.RESIZE_REVERT, phase='foo') def test_update_instance_in_source_cell(self): # Setup mocks. source_cell_instance = self._generate_source_cell_instance() source_cell_instance.task_state = None self.task.instance.system_metadata = {'old_vm_state': vm_states.ACTIVE} with test.nested( mock.patch.object(source_cell_instance, 'save'), mock.patch.object(self.task, '_update_bdms_in_source_cell'), mock.patch.object(self.task, '_update_instance_actions_in_source_cell'), mock.patch.object(self.task, '_update_migration_in_source_cell') ) as ( mock_inst_save, _update_bdms_in_source_cell, _update_instance_actions_in_source_cell, _update_migration_in_source_cell ): # Run the code. source_cell_migration = self.task._update_instance_in_source_cell( source_cell_instance) # The returned object should be the updated migration object from the # source cell database. self.assertIs(source_cell_migration, _update_migration_in_source_cell.return_value) # Fields on the source cell instance should have been updated. self.assertEqual(vm_states.ACTIVE, source_cell_instance.system_metadata['old_vm_state']) self.assertIs(source_cell_instance.old_flavor, source_cell_instance.flavor) self.assertEqual(task_states.RESIZE_REVERTING, source_cell_instance.task_state) mock_inst_save.assert_called_once_with() _update_bdms_in_source_cell.assert_called_once_with( source_cell_instance._context) _update_instance_actions_in_source_cell.assert_called_once_with( source_cell_instance._context) _update_migration_in_source_cell.assert_called_once_with( source_cell_instance._context) @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') @mock.patch('nova.conductor.tasks.cross_cell_migrate.' 
'clone_creatable_object') def test_update_bdms_in_source_cell(self, mock_clone, mock_get_bdms): """Test updating BDMs from the target cell to the source cell.""" source_cell_context = nova_context.get_admin_context() source_cell_context.cell_uuid = uuids.source_cell # Setup fake bdms. bdm1 = objects.BlockDeviceMapping( source_cell_context, uuid=uuids.bdm1, volume_id='vol1', attachment_id=uuids.attach1) bdm2 = objects.BlockDeviceMapping( source_cell_context, uuid=uuids.bdm2, volume_id='vol2', attachment_id=uuids.attach2) source_bdms = objects.BlockDeviceMappingList(objects=[bdm1, bdm2]) # With the target BDMs bdm1 from the source is gone and bdm3 is new # to simulate bdm1 being detached and bdm3 being attached while the # instance was in VERIFY_RESIZE status. bdm3 = objects.BlockDeviceMapping( self.task.context, uuid=uuids.bdm3, volume_id='vol3', attachment_id=uuids.attach1) target_bdms = objects.BlockDeviceMappingList(objects=[bdm2, bdm3]) def stub_get_bdms(ctxt, *args, **kwargs): if ctxt.cell_uuid == uuids.source_cell: return source_bdms return target_bdms mock_get_bdms.side_effect = stub_get_bdms def stub_mock_clone(ctxt, obj, *args, **kwargs): # We want to make assertions on our mocks so do not create a copy. return obj mock_clone.side_effect = stub_mock_clone with test.nested( mock.patch.object(self.task.volume_api, 'attachment_create', return_value={'id': uuids.attachment_id}), mock.patch.object(self.task.volume_api, 'attachment_delete'), mock.patch.object(bdm3, 'create'), mock.patch.object(bdm1, 'destroy') ) as ( mock_attachment_create, mock_attachment_delete, mock_bdm_create, mock_bdm_destroy ): self.task._update_bdms_in_source_cell(source_cell_context) # Should have gotten BDMs from the source and target cell (order does # not matter). self.assertEqual(2, mock_get_bdms.call_count, mock_get_bdms.calls) mock_get_bdms.assert_has_calls([ mock.call(source_cell_context, self.task.instance.uuid), mock.call(self.task.context, self.task.instance.uuid)], any_order=True) # Since bdm3 was new in the target cell an attachment should have been # created for it in the source cell. mock_attachment_create.assert_called_once_with( source_cell_context, bdm3.volume_id, self.task.instance.uuid) self.assertEqual(uuids.attachment_id, bdm3.attachment_id) # And bdm3 should have been created in the source cell. mock_bdm_create.assert_called_once_with() # Since bdm1 was not in the target cell it should be destroyed in the # source cell since we can assume it was detached from the target host # in the target cell while the instance was in VERIFY_RESIZE status. mock_attachment_delete.assert_called_once_with( bdm1._context, bdm1.attachment_id) mock_bdm_destroy.assert_called_once_with() @mock.patch('nova.objects.BlockDeviceMapping.destroy') def test_delete_orphan_source_cell_bdms_attach_delete_fails(self, destroy): """Tests attachment_delete failing but not being fatal.""" source_cell_context = nova_context.get_admin_context() bdm1 = objects.BlockDeviceMapping( source_cell_context, volume_id='vol1', attachment_id=uuids.attach1) bdm2 = objects.BlockDeviceMapping( source_cell_context, volume_id='vol2', attachment_id=uuids.attach2) source_cell_bdms = objects.BlockDeviceMappingList(objects=[bdm1, bdm2]) with mock.patch.object(self.task.volume_api, 'attachment_delete') as attachment_delete: # First call to attachment_delete fails, second is OK. 
attachment_delete.side_effect = [ test.TestingException('cinder is down'), None] self.task._delete_orphan_source_cell_bdms(source_cell_bdms) attachment_delete.assert_has_calls([ mock.call(bdm1._context, bdm1.attachment_id), mock.call(bdm2._context, bdm2.attachment_id)]) self.assertEqual(2, destroy.call_count, destroy.mock_calls) self.assertIn('cinder is down', self.stdlog.logger.output) @mock.patch('nova.objects.InstanceAction.get_by_request_id') @mock.patch('nova.objects.InstanceActionEventList.get_by_action') @mock.patch('nova.objects.InstanceAction.create') @mock.patch('nova.objects.InstanceActionEvent.create') def test_update_instance_actions_in_source_cell( self, mock_event_create, mock_action_create, mock_get_events, mock_get_action): """Tests copying instance actions from the target to source cell.""" source_cell_context = nova_context.get_admin_context() source_cell_context.cell_uuid = uuids.source_cell # Setup a fake action and fake event. action = objects.InstanceAction( id=1, action=instance_actions.REVERT_RESIZE, instance_uuid=self.task.instance.uuid, request_id=self.task.context.request_id) mock_get_action.return_value = action event = objects.InstanceActionEvent( id=2, action_id=action.id, event='conductor_revert_snapshot_based_resize') mock_get_events.return_value = objects.InstanceActionEventList( objects=[event]) # Run the code. self.task._update_instance_actions_in_source_cell(source_cell_context) # Should have created a clone of the action and event. mock_get_action.assert_called_once_with( self.task.context, self.task.instance.uuid, self.task.context.request_id) mock_get_events.assert_called_once_with(self.task.context, action.id) mock_action_create.assert_called_once_with() mock_event_create.assert_called_once_with( action.instance_uuid, action.request_id) def test_update_source_obj_from_target_cell(self): # Create a fake source object to be updated. t1 = timeutils.utcnow() source_obj = objects.Migration(id=1, created_at=t1, updated_at=t1, uuid=uuids.migration, status='post-migrating') t2 = timeutils.utcnow() target_obj = objects.Migration(id=2, created_at=t2, updated_at=t2, uuid=uuids.migration, status='reverting', # Add a field that is not in source_obj. migration_type='resize') # Run the copy code. self.task._update_source_obj_from_target_cell(source_obj, target_obj) # First make sure that id, created_at and updated_at are not changed. ignored_keys = ['id', 'created_at', 'updated_at'] for field in ignored_keys: self.assertNotEqual(getattr(source_obj, field), getattr(target_obj, field)) # Now make sure the rest of the fields are the same. self._compare_objs(source_obj, target_obj, ignored_keys=ignored_keys) def test_update_source_obj_from_target_cell_nested_object(self): """Tests that calling _update_source_obj_from_target_cell with an object that has nested object fields will raise ObjectActionError. """ source = objects.Instance(flavor=objects.Flavor(flavorid='a')) target = objects.Instance(flavor=objects.Flavor(flavorid='b')) ex = self.assertRaises(exception.ObjectActionError, self.task._update_source_obj_from_target_cell, source, target) self.assertIn('nested objects are not supported', six.text_type(ex)) @mock.patch('nova.objects.Migration.get_by_uuid') def test_update_migration_in_source_cell(self, mock_get_migration): """Tests updating the migration record in the source cell from the target cell. 
""" source_cell_context = nova_context.get_admin_context() with mock.patch.object( self.task, '_update_source_obj_from_target_cell') as mock_update_obj: source_cell_migration = \ self.task._update_migration_in_source_cell(source_cell_context) mock_get_migration.assert_called_once_with(source_cell_context, self.task.migration.uuid) mock_update_obj.assert_called_once_with(source_cell_migration, self.task.migration) self.assertIs(source_cell_migration, mock_get_migration.return_value) source_cell_migration.save.assert_called_once_with() @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid') def test_update_instance_mapping(self, get_inst_map): """Tests updating the instance mapping from the target to source cell. """ source_cell_instance = self._generate_source_cell_instance() source_cell_context = source_cell_instance._context source_cell_mapping = objects.CellMapping(source_cell_context, uuid=uuids.source_cell) inst_map = objects.InstanceMapping( cell_mapping=objects.CellMapping(uuids.target_cell)) get_inst_map.return_value = inst_map with test.nested( mock.patch.object(source_cell_instance, 'save'), mock.patch.object(self.task.instance, 'save'), mock.patch.object(inst_map, 'save') ) as ( source_inst_save, target_inst_save, inst_map_save ): self.task._update_instance_mapping(source_cell_instance, source_cell_mapping) get_inst_map.assert_called_once_with(self.task.context, self.task.instance.uuid) # The source cell instance should not be hidden. self.assertFalse(source_cell_instance.hidden) source_inst_save.assert_called_once_with() # The instance mapping should point at the source cell. self.assertIs(source_cell_mapping, inst_map.cell_mapping) inst_map_save.assert_called_once_with() # The target cell instance should be hidden. self.assertTrue(self.task.instance.hidden) target_inst_save.assert_called_once_with() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/conductor/tasks/test_live_migrate.py0000664000175000017500000012456300000000000025072 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock import oslo_messaging as messaging from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids import six from nova.compute import power_state from nova.compute import rpcapi as compute_rpcapi from nova.compute import vm_states from nova.conductor.tasks import live_migrate from nova import context as nova_context from nova import exception from nova.network import model as network_model from nova import objects from nova.scheduler.client import query from nova.scheduler.client import report from nova.scheduler import utils as scheduler_utils from nova import servicegroup from nova import test from nova.tests.unit import fake_instance fake_limits1 = objects.SchedulerLimits() fake_selection1 = objects.Selection(service_host="host1", nodename="node1", cell_uuid=uuids.cell, limits=fake_limits1, compute_node_uuid=uuids.compute_node1) fake_limits2 = objects.SchedulerLimits() fake_selection2 = objects.Selection(service_host="host2", nodename="node2", cell_uuid=uuids.cell, limits=fake_limits2, compute_node_uuid=uuids.compute_node2) class LiveMigrationTaskTestCase(test.NoDBTestCase): def setUp(self): super(LiveMigrationTaskTestCase, self).setUp() self.context = nova_context.get_admin_context() self.instance_host = "host" self.instance_uuid = uuids.instance self.instance_image = "image_ref" db_instance = fake_instance.fake_db_instance( host=self.instance_host, uuid=self.instance_uuid, power_state=power_state.RUNNING, vm_state = vm_states.ACTIVE, memory_mb=512, image_ref=self.instance_image) self.instance = objects.Instance._from_db_object( self.context, objects.Instance(), db_instance) self.instance.system_metadata = {'image_hw_disk_bus': 'scsi'} self.instance.numa_topology = None self.instance.pci_requests = None self.instance.resources = None self.destination = "destination" self.block_migration = "bm" self.disk_over_commit = "doc" self.migration = objects.Migration() self.fake_spec = objects.RequestSpec() self._generate_task() _p = mock.patch('nova.compute.utils.heal_reqspec_is_bfv') self.heal_reqspec_is_bfv_mock = _p.start() self.addCleanup(_p.stop) _p = mock.patch('nova.objects.RequestSpec.ensure_network_metadata') self.ensure_network_metadata_mock = _p.start() self.addCleanup(_p.stop) _p = mock.patch( 'nova.network.neutron.API.' 'get_requested_resource_for_instance', return_value=[]) self.mock_get_res_req = _p.start() self.addCleanup(_p.stop) def _generate_task(self): self.task = live_migrate.LiveMigrationTask(self.context, self.instance, self.destination, self.block_migration, self.disk_over_commit, self.migration, compute_rpcapi.ComputeAPI(), servicegroup.API(), query.SchedulerQueryClient(), report.SchedulerReportClient(), self.fake_spec) @mock.patch('nova.availability_zones.get_host_availability_zone', return_value='fake-az') def test_execute_with_destination(self, mock_get_az): dest_node = objects.ComputeNode(hypervisor_hostname='dest_node') with test.nested( mock.patch.object(self.task, '_check_host_is_up'), mock.patch.object(self.task, '_check_requested_destination'), mock.patch.object(scheduler_utils, 'claim_resources_on_destination'), mock.patch.object(self.migration, 'save'), mock.patch.object(self.task.compute_rpcapi, 'live_migration'), mock.patch('nova.conductor.tasks.migrate.' 
'replace_allocation_with_migration'), mock.patch.object(self.task, '_check_destination_is_not_source'), mock.patch.object(self.task, '_check_destination_has_enough_memory'), mock.patch.object(self.task, '_check_compatible_with_source_hypervisor', return_value=(mock.sentinel.source_node, dest_node)), ) as (mock_check_up, mock_check_dest, mock_claim, mock_save, mock_mig, m_alloc, m_check_diff, m_check_enough_mem, m_check_compatible): mock_mig.return_value = "bob" m_alloc.return_value = (mock.MagicMock(), mock.sentinel.allocs) self.assertEqual("bob", self.task.execute()) mock_check_up.assert_has_calls([ mock.call(self.instance_host), mock.call(self.destination)]) mock_check_dest.assert_called_once_with() m_check_diff.assert_called_once() m_check_enough_mem.assert_called_once() m_check_compatible.assert_called_once() allocs = mock.sentinel.allocs mock_claim.assert_called_once_with( self.context, self.task.report_client, self.instance, mock.sentinel.source_node, dest_node, source_allocations=allocs, consumer_generation=None) mock_mig.assert_called_once_with( self.context, host=self.instance_host, instance=self.instance, dest=self.destination, block_migration=self.block_migration, migration=self.migration, migrate_data=None) self.assertTrue(mock_save.called) mock_get_az.assert_called_once_with(self.context, self.destination) self.assertEqual('fake-az', self.instance.availability_zone) # make sure the source/dest fields were set on the migration object self.assertEqual(self.instance.node, self.migration.source_node) self.assertEqual(dest_node.hypervisor_hostname, self.migration.dest_node) self.assertEqual(self.task.destination, self.migration.dest_compute) m_alloc.assert_called_once_with(self.context, self.instance, self.migration) # When the task is executed with a destination it means the host is # being forced and we don't call the scheduler, so we don't need to # heal the request spec. self.heal_reqspec_is_bfv_mock.assert_not_called() # When the task is executed with a destination it means the host is # being forced and we don't call the scheduler, so we don't need to # modify the request spec self.ensure_network_metadata_mock.assert_not_called() @mock.patch('nova.availability_zones.get_host_availability_zone', return_value='nova') def test_execute_without_destination(self, mock_get_az): self.destination = None self._generate_task() self.assertIsNone(self.task.destination) with test.nested( mock.patch.object(self.task, '_check_host_is_up'), mock.patch.object(self.task, '_find_destination'), mock.patch.object(self.task.compute_rpcapi, 'live_migration'), mock.patch.object(self.migration, 'save'), mock.patch('nova.conductor.tasks.migrate.' 
'replace_allocation_with_migration'), ) as (mock_check, mock_find, mock_mig, mock_save, mock_alloc): mock_find.return_value = ("found_host", "found_node", None) mock_mig.return_value = "bob" mock_alloc.return_value = (mock.MagicMock(), mock.MagicMock()) self.assertEqual("bob", self.task.execute()) mock_check.assert_called_once_with(self.instance_host) mock_find.assert_called_once_with() mock_mig.assert_called_once_with(self.context, host=self.instance_host, instance=self.instance, dest="found_host", block_migration=self.block_migration, migration=self.migration, migrate_data=None) self.assertTrue(mock_save.called) mock_get_az.assert_called_once_with(self.context, 'found_host') self.assertEqual('found_host', self.migration.dest_compute) self.assertEqual('found_node', self.migration.dest_node) self.assertEqual(self.instance.node, self.migration.source_node) self.assertTrue(mock_alloc.called) def test_check_instance_is_active_passes_when_paused(self): self.task.instance['power_state'] = power_state.PAUSED self.task._check_instance_is_active() def test_check_instance_is_active_fails_when_shutdown(self): self.task.instance['power_state'] = power_state.SHUTDOWN self.assertRaises(exception.InstanceInvalidState, self.task._check_instance_is_active) @mock.patch.object(objects.ComputeNode, 'get_by_host_and_nodename') def test_check_instance_has_no_numa_passes_no_numa(self, mock_get): self.flags(enable_numa_live_migration=False, group='workarounds') self.task.instance.numa_topology = None mock_get.return_value = objects.ComputeNode( uuid=uuids.cn1, hypervisor_type='qemu') self.task._check_instance_has_no_numa() @mock.patch.object(objects.ComputeNode, 'get_by_host_and_nodename') def test_check_instance_has_no_numa_passes_non_kvm(self, mock_get): self.flags(enable_numa_live_migration=False, group='workarounds') self.task.instance.numa_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell(id=0, cpuset=set([0]), memory=1024)]) mock_get.return_value = objects.ComputeNode( uuid=uuids.cn1, hypervisor_type='xen') self.task._check_instance_has_no_numa() @mock.patch.object(objects.ComputeNode, 'get_by_host_and_nodename') @mock.patch.object(objects.Service, 'get_minimum_version', return_value=39) def test_check_instance_has_no_numa_passes_workaround( self, mock_get_min_ver, mock_get): self.flags(enable_numa_live_migration=True, group='workarounds') self.task.instance.numa_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell(id=0, cpuset=set([0]), memory=1024)]) mock_get.return_value = objects.ComputeNode( uuid=uuids.cn1, hypervisor_type='qemu') self.task._check_instance_has_no_numa() mock_get_min_ver.assert_called_once_with(self.context, 'nova-compute') @mock.patch.object(objects.ComputeNode, 'get_by_host_and_nodename') @mock.patch.object(objects.Service, 'get_minimum_version', return_value=39) def test_check_instance_has_no_numa_fails(self, mock_get_min_ver, mock_get): self.flags(enable_numa_live_migration=False, group='workarounds') mock_get.return_value = objects.ComputeNode( uuid=uuids.cn1, hypervisor_type='qemu') self.task.instance.numa_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell(id=0, cpuset=set([0]), memory=1024)]) self.assertRaises(exception.MigrationPreCheckError, self.task._check_instance_has_no_numa) mock_get_min_ver.assert_called_once_with(self.context, 'nova-compute') @mock.patch.object(objects.ComputeNode, 'get_by_host_and_nodename') @mock.patch.object(objects.Service, 'get_minimum_version', return_value=40) def 
test_check_instance_has_no_numa_new_svc_passes(self, mock_get_min_ver, mock_get): self.flags(enable_numa_live_migration=False, group='workarounds') mock_get.return_value = objects.ComputeNode( uuid=uuids.cn1, hypervisor_type='qemu') self.task.instance.numa_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell(id=0, cpuset=set([0]), memory=1024)]) self.task._check_instance_has_no_numa() mock_get_min_ver.assert_called_once_with(self.context, 'nova-compute') @mock.patch.object(objects.Service, 'get_by_compute_host') @mock.patch.object(servicegroup.API, 'service_is_up') def test_check_instance_host_is_up(self, mock_is_up, mock_get): mock_get.return_value = "service" mock_is_up.return_value = True self.task._check_host_is_up("host") mock_get.assert_called_once_with(self.context, "host") mock_is_up.assert_called_once_with("service") @mock.patch.object(objects.Service, 'get_by_compute_host') @mock.patch.object(servicegroup.API, 'service_is_up') def test_check_instance_host_is_up_fails_if_not_up(self, mock_is_up, mock_get): mock_get.return_value = "service" mock_is_up.return_value = False self.assertRaises(exception.ComputeServiceUnavailable, self.task._check_host_is_up, "host") mock_get.assert_called_once_with(self.context, "host") mock_is_up.assert_called_once_with("service") @mock.patch.object(objects.Service, 'get_by_compute_host', side_effect=exception.ComputeHostNotFound(host='host')) def test_check_instance_host_is_up_fails_if_not_found(self, mock): self.assertRaises(exception.ComputeHostNotFound, self.task._check_host_is_up, "host") def test_check_destination_fails_with_same_dest(self): self.task.destination = "same" self.task.source = "same" self.assertRaises(exception.UnableToMigrateToSelf, self.task._check_destination_is_not_source) @mock.patch.object(objects.ComputeNode, 'get_first_node_by_host_for_old_compat') def test_check_destination_fails_with_not_enough_memory( self, mock_get_first): mock_get_first.return_value = ( objects.ComputeNode(free_ram_mb=513, memory_mb=1024, ram_allocation_ratio=0.9,)) # free_ram is bigger than instance.ram (512) but the allocation # ratio reduces the total available RAM to 410MB # (1024 * 0.9 - (1024 - 513)) self.assertRaises(exception.MigrationPreCheckError, self.task._check_destination_has_enough_memory) mock_get_first.assert_called_once_with(self.context, self.destination) @mock.patch.object(live_migrate.LiveMigrationTask, '_get_compute_info') def test_check_compatible_fails_with_hypervisor_diff( self, mock_get_info): mock_get_info.side_effect = [ objects.ComputeNode(hypervisor_type='b'), objects.ComputeNode(hypervisor_type='a')] self.assertRaises(exception.InvalidHypervisorType, self.task._check_compatible_with_source_hypervisor, self.destination) self.assertEqual([mock.call(self.instance_host), mock.call(self.destination)], mock_get_info.call_args_list) @mock.patch.object(live_migrate.LiveMigrationTask, '_get_compute_info') def test_check_compatible_fails_with_hypervisor_too_old( self, mock_get_info): host1 = {'hypervisor_type': 'a', 'hypervisor_version': 7} host2 = {'hypervisor_type': 'a', 'hypervisor_version': 6} mock_get_info.side_effect = [objects.ComputeNode(**host1), objects.ComputeNode(**host2)] self.assertRaises(exception.DestinationHypervisorTooOld, self.task._check_compatible_with_source_hypervisor, self.destination) self.assertEqual([mock.call(self.instance_host), mock.call(self.destination)], mock_get_info.call_args_list) @mock.patch.object(compute_rpcapi.ComputeAPI, 'check_can_live_migrate_destination') def 
test_check_requested_destination(self, mock_check): mock_check.return_value = "migrate_data" self.task.limits = fake_limits1 with test.nested( mock.patch.object(self.task.network_api, 'supports_port_binding_extension', return_value=False), mock.patch.object(self.task, '_check_can_migrate_pci')): self.assertIsNone(self.task._check_requested_destination()) self.assertEqual("migrate_data", self.task.migrate_data) mock_check.assert_called_once_with(self.context, self.instance, self.destination, self.block_migration, self.disk_over_commit, self.task.migration, fake_limits1) @mock.patch.object(objects.Service, 'get_by_compute_host') @mock.patch.object(live_migrate.LiveMigrationTask, '_get_compute_info') @mock.patch.object(servicegroup.API, 'service_is_up') @mock.patch.object(compute_rpcapi.ComputeAPI, 'check_can_live_migrate_destination') @mock.patch.object(objects.HostMapping, 'get_by_host', return_value=objects.HostMapping( cell_mapping=objects.CellMapping( uuid=uuids.different))) def test_check_requested_destination_fails_different_cells( self, mock_get_host_mapping, mock_check, mock_is_up, mock_get_info, mock_get_host): mock_get_host.return_value = "service" mock_is_up.return_value = True hypervisor_details = objects.ComputeNode( hypervisor_type="a", hypervisor_version=6.1, free_ram_mb=513, memory_mb=512, ram_allocation_ratio=1.0) mock_get_info.return_value = hypervisor_details mock_check.return_value = "migrate_data" with test.nested( mock.patch.object(self.task.network_api, 'supports_port_binding_extension', return_value=False), mock.patch.object(self.task, '_check_can_migrate_pci')): ex = self.assertRaises(exception.MigrationPreCheckError, self.task._check_requested_destination) self.assertIn('across cells', six.text_type(ex)) @mock.patch.object(live_migrate.LiveMigrationTask, '_call_livem_checks_on_host') @mock.patch.object(live_migrate.LiveMigrationTask, '_check_compatible_with_source_hypervisor') @mock.patch.object(query.SchedulerQueryClient, 'select_destinations', return_value=[[fake_selection1]]) @mock.patch.object(objects.RequestSpec, 'reset_forced_destinations') @mock.patch.object(scheduler_utils, 'setup_instance_group') def test_find_destination_works(self, mock_setup, mock_reset, mock_select, mock_check, mock_call): self.assertEqual(("host1", "node1", fake_limits1), self.task._find_destination()) # Make sure the request_spec was updated to include the cell # mapping. self.assertIsNotNone(self.fake_spec.requested_destination.cell) # Make sure the spec was updated to include the project_id. 
self.assertEqual(self.fake_spec.project_id, self.instance.project_id) mock_setup.assert_called_once_with(self.context, self.fake_spec) mock_reset.assert_called_once_with() self.ensure_network_metadata_mock.assert_called_once_with( self.instance) self.heal_reqspec_is_bfv_mock.assert_called_once_with( self.context, self.fake_spec, self.instance) mock_select.assert_called_once_with(self.context, self.fake_spec, [self.instance.uuid], return_objects=True, return_alternates=False) mock_check.assert_called_once_with('host1') mock_call.assert_called_once_with('host1', {}) @mock.patch.object(live_migrate.LiveMigrationTask, '_call_livem_checks_on_host') @mock.patch.object(live_migrate.LiveMigrationTask, '_check_compatible_with_source_hypervisor') @mock.patch.object(query.SchedulerQueryClient, 'select_destinations', return_value=[[fake_selection1]]) @mock.patch.object(scheduler_utils, 'setup_instance_group') def test_find_destination_no_image_works(self, mock_setup, mock_select, mock_check, mock_call): self.instance['image_ref'] = '' self.assertEqual(("host1", "node1", fake_limits1), self.task._find_destination()) mock_setup.assert_called_once_with(self.context, self.fake_spec) mock_select.assert_called_once_with( self.context, self.fake_spec, [self.instance.uuid], return_objects=True, return_alternates=False) mock_check.assert_called_once_with('host1') mock_call.assert_called_once_with('host1', {}) @mock.patch.object(live_migrate.LiveMigrationTask, '_remove_host_allocations') @mock.patch.object(live_migrate.LiveMigrationTask, '_call_livem_checks_on_host') @mock.patch.object(live_migrate.LiveMigrationTask, '_check_compatible_with_source_hypervisor') @mock.patch.object(query.SchedulerQueryClient, 'select_destinations', side_effect=[[[fake_selection1]], [[fake_selection2]]]) @mock.patch.object(scheduler_utils, 'setup_instance_group') def _test_find_destination_retry_hypervisor_raises( self, error, mock_setup, mock_select, mock_check, mock_call, mock_remove): mock_check.side_effect = [error, None] self.assertEqual(("host2", "node2", fake_limits2), self.task._find_destination()) # Should have removed allocations for the first host. 
mock_remove.assert_called_once_with(fake_selection1.compute_node_uuid) mock_setup.assert_called_once_with(self.context, self.fake_spec) mock_select.assert_has_calls([ mock.call(self.context, self.fake_spec, [self.instance.uuid], return_objects=True, return_alternates=False), mock.call(self.context, self.fake_spec, [self.instance.uuid], return_objects=True, return_alternates=False)]) mock_check.assert_has_calls([mock.call('host1'), mock.call('host2')]) mock_call.assert_called_once_with('host2', {}) def test_find_destination_retry_with_old_hypervisor(self): self._test_find_destination_retry_hypervisor_raises( exception.DestinationHypervisorTooOld) def test_find_destination_retry_with_invalid_hypervisor_type(self): self._test_find_destination_retry_hypervisor_raises( exception.InvalidHypervisorType) @mock.patch.object(live_migrate.LiveMigrationTask, '_remove_host_allocations') @mock.patch.object(live_migrate.LiveMigrationTask, '_call_livem_checks_on_host') @mock.patch.object(live_migrate.LiveMigrationTask, '_check_compatible_with_source_hypervisor') @mock.patch.object(query.SchedulerQueryClient, 'select_destinations', side_effect=[[[fake_selection1]], [[fake_selection2]]]) @mock.patch.object(scheduler_utils, 'setup_instance_group') def test_find_destination_retry_with_invalid_livem_checks( self, mock_setup, mock_select, mock_check, mock_call, mock_remove): self.flags(migrate_max_retries=1) mock_call.side_effect = [exception.Invalid(), None] self.assertEqual(("host2", "node2", fake_limits2), self.task._find_destination()) # Should have removed allocations for the first host. mock_remove.assert_called_once_with(fake_selection1.compute_node_uuid) mock_setup.assert_called_once_with(self.context, self.fake_spec) mock_select.assert_has_calls([ mock.call(self.context, self.fake_spec, [self.instance.uuid], return_objects=True, return_alternates=False), mock.call(self.context, self.fake_spec, [self.instance.uuid], return_objects=True, return_alternates=False)]) mock_check.assert_has_calls([mock.call('host1'), mock.call('host2')]) mock_call.assert_has_calls( [mock.call('host1', {}), mock.call('host2', {})]) @mock.patch.object(live_migrate.LiveMigrationTask, '_remove_host_allocations') @mock.patch.object(live_migrate.LiveMigrationTask, '_call_livem_checks_on_host') @mock.patch.object(live_migrate.LiveMigrationTask, '_check_compatible_with_source_hypervisor') @mock.patch.object(query.SchedulerQueryClient, 'select_destinations', side_effect=[[[fake_selection1]], [[fake_selection2]]]) @mock.patch.object(scheduler_utils, 'setup_instance_group') def test_find_destination_retry_with_failed_migration_pre_checks( self, mock_setup, mock_select, mock_check, mock_call, mock_remove): self.flags(migrate_max_retries=1) mock_call.side_effect = [exception.MigrationPreCheckError('reason'), None] self.assertEqual(("host2", "node2", fake_limits2), self.task._find_destination()) # Should have removed allocations for the first host. 
mock_remove.assert_called_once_with(fake_selection1.compute_node_uuid) mock_setup.assert_called_once_with(self.context, self.fake_spec) mock_select.assert_has_calls([ mock.call(self.context, self.fake_spec, [self.instance.uuid], return_objects=True, return_alternates=False), mock.call(self.context, self.fake_spec, [self.instance.uuid], return_objects=True, return_alternates=False)]) mock_check.assert_has_calls([mock.call('host1'), mock.call('host2')]) mock_call.assert_has_calls( [mock.call('host1', {}), mock.call('host2', {})]) @mock.patch.object(objects.Migration, 'save') @mock.patch.object(live_migrate.LiveMigrationTask, '_remove_host_allocations') @mock.patch.object(live_migrate.LiveMigrationTask, '_check_compatible_with_source_hypervisor', side_effect=exception.DestinationHypervisorTooOld()) @mock.patch.object(query.SchedulerQueryClient, 'select_destinations', return_value=[[fake_selection1]]) @mock.patch.object(scheduler_utils, 'setup_instance_group') def test_find_destination_retry_exceeds_max( self, mock_setup, mock_select, mock_check, mock_remove, mock_save): self.flags(migrate_max_retries=0) self.assertRaises(exception.MaxRetriesExceeded, self.task._find_destination) self.assertEqual('failed', self.task.migration.status) mock_save.assert_called_once_with() # Should have removed allocations for the first host. mock_remove.assert_called_once_with(fake_selection1.compute_node_uuid) mock_setup.assert_called_once_with(self.context, self.fake_spec) mock_select.assert_called_once_with( self.context, self.fake_spec, [self.instance.uuid], return_objects=True, return_alternates=False) mock_check.assert_called_once_with('host1') @mock.patch.object(query.SchedulerQueryClient, 'select_destinations', side_effect=exception.NoValidHost(reason="")) @mock.patch.object(scheduler_utils, 'setup_instance_group') def test_find_destination_when_runs_out_of_hosts(self, mock_setup, mock_select): self.assertRaises(exception.NoValidHost, self.task._find_destination) mock_setup.assert_called_once_with(self.context, self.fake_spec) mock_select.assert_called_once_with( self.context, self.fake_spec, [self.instance.uuid], return_objects=True, return_alternates=False) @mock.patch("nova.utils.get_image_from_system_metadata") @mock.patch("nova.scheduler.utils.build_request_spec") @mock.patch("nova.scheduler.utils.setup_instance_group") @mock.patch("nova.objects.RequestSpec.from_primitives") def test_find_destination_with_remoteError(self, m_from_primitives, m_setup_instance_group, m_build_request_spec, m_get_image_from_system_metadata): m_get_image_from_system_metadata.return_value = {'properties': {}} m_build_request_spec.return_value = {} fake_spec = objects.RequestSpec() m_from_primitives.return_value = fake_spec with mock.patch.object(self.task.query_client, 'select_destinations') as m_select_destinations: error = messaging.RemoteError() m_select_destinations.side_effect = error self.assertRaises(exception.MigrationSchedulerRPCError, self.task._find_destination) def test_call_livem_checks_on_host(self): with test.nested( mock.patch.object(self.task.compute_rpcapi, 'check_can_live_migrate_destination', side_effect=messaging.MessagingTimeout), mock.patch.object(self.task, '_check_can_migrate_pci')): self.assertRaises(exception.MigrationPreCheckError, self.task._call_livem_checks_on_host, {}, {}) def test_call_livem_checks_on_host_bind_ports(self): data = objects.LibvirtLiveMigrateData() bindings = { uuids.port1: {'host': 'dest-host'}, uuids.port2: {'host': 'dest-host'} } @mock.patch.object(self.task, 
'_check_can_migrate_pci') @mock.patch.object(self.task.compute_rpcapi, 'check_can_live_migrate_destination', return_value=data) @mock.patch.object(self.task.network_api, 'supports_port_binding_extension', return_value=True) @mock.patch.object(self.task.network_api, 'bind_ports_to_host', return_value=bindings) def _test(mock_bind_ports_to_host, mock_supports_port_binding, mock_check_can_live_migrate_dest, mock_check_can_migrate_pci): nwinfo = network_model.NetworkInfo([ network_model.VIF(uuids.port1), network_model.VIF(uuids.port2)]) self.instance.info_cache = objects.InstanceInfoCache( network_info=nwinfo) self.task._call_livem_checks_on_host('dest-host', {}) # Assert the migrate_data set on the task based on the port # bindings created. self.assertIn('vifs', data) self.assertEqual(2, len(data.vifs)) for vif in data.vifs: self.assertIn('source_vif', vif) self.assertEqual('dest-host', vif.host) self.assertEqual(vif.port_id, vif.source_vif['id']) _test() @mock.patch('nova.network.neutron.API.bind_ports_to_host') def test_bind_ports_on_destination_merges_profiles(self, mock_bind_ports): """Assert that if both the migration_data and the provider mapping contains binding profile related information then such information is merged in the resulting profile. """ self.task.migrate_data = objects.LibvirtLiveMigrateData( vifs=[ objects.VIFMigrateData( port_id=uuids.port1, profile_json=jsonutils.dumps( {'some-key': 'value'})) ]) provider_mappings = {uuids.port1: [uuids.dest_bw_rp]} self.task._bind_ports_on_destination('dest-host', provider_mappings) mock_bind_ports.assert_called_once_with( context=self.context, instance=self.instance, host='dest-host', vnic_types=None, port_profiles={uuids.port1: {'allocation': uuids.dest_bw_rp, 'some-key': 'value'}}) @mock.patch('nova.network.neutron.API.bind_ports_to_host') def test_bind_ports_on_destination_migration_data(self, mock_bind_ports): """Assert that if only the migration_data contains binding profile related information then that is sent to neutron. """ self.task.migrate_data = objects.LibvirtLiveMigrateData( vifs=[ objects.VIFMigrateData( port_id=uuids.port1, profile_json=jsonutils.dumps( {'some-key': 'value'})) ]) provider_mappings = {} self.task._bind_ports_on_destination('dest-host', provider_mappings) mock_bind_ports.assert_called_once_with( context=self.context, instance=self.instance, host='dest-host', vnic_types=None, port_profiles={uuids.port1: {'some-key': 'value'}}) @mock.patch('nova.network.neutron.API.bind_ports_to_host') def test_bind_ports_on_destination_provider_mapping(self, mock_bind_ports): """Assert that if only the provider mapping contains binding profile related information then that is sent to neutron. """ self.task.migrate_data = objects.LibvirtLiveMigrateData( vifs=[ objects.VIFMigrateData( port_id=uuids.port1) ]) provider_mappings = {uuids.port1: [uuids.dest_bw_rp]} self.task._bind_ports_on_destination('dest-host', provider_mappings) mock_bind_ports.assert_called_once_with( context=self.context, instance=self.instance, host='dest-host', vnic_types=None, port_profiles={uuids.port1: {'allocation': uuids.dest_bw_rp}}) @mock.patch( 'nova.compute.utils.' 
'update_pci_request_spec_with_allocated_interface_name') @mock.patch('nova.scheduler.utils.fill_provider_mapping') @mock.patch.object(live_migrate.LiveMigrationTask, '_call_livem_checks_on_host') @mock.patch.object(live_migrate.LiveMigrationTask, '_check_compatible_with_source_hypervisor') @mock.patch.object(query.SchedulerQueryClient, 'select_destinations', return_value=[[fake_selection1]]) @mock.patch.object(objects.RequestSpec, 'reset_forced_destinations') @mock.patch.object(scheduler_utils, 'setup_instance_group') def test_find_destination_with_resource_request( self, mock_setup, mock_reset, mock_select, mock_check, mock_call, mock_fill_provider_mapping, mock_update_pci_req): resource_req = [objects.RequestGroup(requester_id=uuids.port_id)] self.mock_get_res_req.return_value = resource_req self.assertEqual(("host1", "node1", fake_limits1), self.task._find_destination()) # Make sure the request_spec was updated to include the cell # mapping. self.assertIsNotNone(self.fake_spec.requested_destination.cell) # Make sure the spec was updated to include the project_id. self.assertEqual(self.fake_spec.project_id, self.instance.project_id) # Make sure that requested_resources are added to the request spec self.assertEqual( resource_req, self.task.request_spec.requested_resources) mock_setup.assert_called_once_with(self.context, self.fake_spec) mock_reset.assert_called_once_with() self.ensure_network_metadata_mock.assert_called_once_with( self.instance) self.heal_reqspec_is_bfv_mock.assert_called_once_with( self.context, self.fake_spec, self.instance) mock_select.assert_called_once_with(self.context, self.fake_spec, [self.instance.uuid], return_objects=True, return_alternates=False) mock_check.assert_called_once_with('host1') mock_call.assert_called_once_with('host1', {uuids.port_id: []}) mock_fill_provider_mapping.assert_called_once_with( self.task.request_spec, fake_selection1) mock_update_pci_req.assert_called_once_with( self.context, self.task.report_client, self.instance, {uuids.port_id: []}) @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid', side_effect=exception.InstanceMappingNotFound( uuid=uuids.instance)) def test_get_source_cell_mapping_not_found(self, mock_get): """Negative test where InstanceMappingNotFound is raised and converted to MigrationPreCheckError. """ self.assertRaises(exception.MigrationPreCheckError, self.task._get_source_cell_mapping) mock_get.assert_called_once_with( self.task.context, self.task.instance.uuid) @mock.patch.object(objects.HostMapping, 'get_by_host', side_effect=exception.HostMappingNotFound( name='destination')) def test_get_destination_cell_mapping_not_found(self, mock_get): """Negative test where HostMappingNotFound is raised and converted to MigrationPreCheckError. """ self.assertRaises(exception.MigrationPreCheckError, self.task._get_destination_cell_mapping) mock_get.assert_called_once_with( self.task.context, self.task.destination) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
                'remove_provider_tree_from_instance_allocation')
    def test_remove_host_allocations(self, remove_provider):
        self.task._remove_host_allocations(uuids.cn)
        remove_provider.assert_called_once_with(
            self.task.context, self.task.instance.uuid, uuids.cn)

    def test_check_can_migrate_pci(self):
        """Tests that _check_can_migrate_pci() allows live-migration if
        instance does not contain non-network related PCI requests and
        raises MigrationPreCheckError otherwise
        """

        @mock.patch.object(self.task.network_api,
                           'supports_port_binding_extension')
        @mock.patch.object(live_migrate,
                           'supports_vif_related_pci_allocations')
        def _test(instance_pci_reqs,
                  supp_binding_ext_retval,
                  supp_vif_related_pci_alloc_retval,
                  mock_supp_vif_related_pci_alloc,
                  mock_supp_port_binding_ext):
            mock_supp_vif_related_pci_alloc.return_value = \
                supp_vif_related_pci_alloc_retval
            mock_supp_port_binding_ext.return_value = \
                supp_binding_ext_retval
            self.task.instance.pci_requests = instance_pci_reqs
            self.task._check_can_migrate_pci("Src", "Dst")
            # in case we managed to get away without raising, check mocks
            if instance_pci_reqs:
                mock_supp_port_binding_ext.assert_called_once_with(
                    self.context)
                self.assertTrue(mock_supp_vif_related_pci_alloc.called)

        # instance has no PCI requests
        _test(None, False, False)  # No support in Neutron and Computes
        _test(None, True, False)  # No support in Computes
        _test(None, False, True)  # No support in Neutron
        _test(None, True, True)  # Support in both Neutron and Computes

        # instance contains network related PCI requests (alias_name=None)
        pci_requests = objects.InstancePCIRequests(
            requests=[objects.InstancePCIRequest(alias_name=None)])
        self.assertRaises(exception.MigrationPreCheckError,
                          _test, pci_requests, False, False)
        self.assertRaises(exception.MigrationPreCheckError,
                          _test, pci_requests, True, False)
        self.assertRaises(exception.MigrationPreCheckError,
                          _test, pci_requests, False, True)
        _test(pci_requests, True, True)

        # instance contains non-network related PCI requests
        # (alias_name != None)
        pci_requests.requests.append(
            objects.InstancePCIRequest(alias_name="non-network-related-pci"))
        self.assertRaises(exception.MigrationPreCheckError,
                          _test, pci_requests, False, False)
        self.assertRaises(exception.MigrationPreCheckError,
                          _test, pci_requests, True, False)
        self.assertRaises(exception.MigrationPreCheckError,
                          _test, pci_requests, False, True)
        self.assertRaises(exception.MigrationPreCheckError,
                          _test, pci_requests, True, True)

    def test_check_can_migrate_specific_resources(self):
        """Test _check_can_migrate_specific_resources allows live migration
        with vpmem.
""" @mock.patch.object(live_migrate, 'supports_vpmem_live_migration') def _test(resources, supp_lm_vpmem_retval, mock_support_lm_vpmem): self.instance.resources = resources mock_support_lm_vpmem.return_value = supp_lm_vpmem_retval self.task._check_can_migrate_specific_resources() vpmem_0 = objects.LibvirtVPMEMDevice( label='4GB', name='ns_0', devpath='/dev/dax0.0', size=4292870144, align=2097152) resource_0 = objects.Resource( provider_uuid=uuids.rp, resource_class="CUSTOM_PMEM_NAMESPACE_4GB", identifier='ns_0', metadata=vpmem_0) resources = objects.ResourceList( objects=[resource_0]) _test(None, False) _test(None, True) _test(resources, True) self.assertRaises(exception.MigrationPreCheckError, _test, resources, False) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/conductor/tasks/test_migrate.py0000664000175000017500000012550100000000000024044 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from nova.compute import rpcapi as compute_rpcapi from nova.conductor.tasks import migrate from nova import context from nova import exception from nova import objects from nova.scheduler.client import query from nova.scheduler.client import report from nova.scheduler import utils as scheduler_utils from nova import test from nova.tests.unit.conductor.test_conductor import FakeContext from nova.tests.unit import fake_flavor from nova.tests.unit import fake_instance class MigrationTaskTestCase(test.NoDBTestCase): def setUp(self): super(MigrationTaskTestCase, self).setUp() self.user_id = 'fake' self.project_id = 'fake' self.context = FakeContext(self.user_id, self.project_id) # Normally RequestContext.cell_uuid would be set when targeting # the context in nova.conductor.manager.targets_cell but we just # fake it here. 
self.context.cell_uuid = uuids.cell1 self.flavor = fake_flavor.fake_flavor_obj(self.context) self.flavor.extra_specs = {'extra_specs': 'fake'} inst = fake_instance.fake_db_instance(image_ref='image_ref', instance_type=self.flavor) inst_object = objects.Instance( flavor=self.flavor, numa_topology=None, pci_requests=None, system_metadata={'image_hw_disk_bus': 'scsi'}) self.instance = objects.Instance._from_db_object( self.context, inst_object, inst, []) self.request_spec = objects.RequestSpec(image=objects.ImageMeta()) self.host_lists = [[objects.Selection(service_host="host1", nodename="node1", cell_uuid=uuids.cell1)]] self.filter_properties = {'limits': {}, 'retry': {'num_attempts': 1, 'hosts': [['host1', 'node1']]}} self.reservations = [] self.clean_shutdown = True _p = mock.patch('nova.compute.utils.heal_reqspec_is_bfv') self.heal_reqspec_is_bfv_mock = _p.start() self.addCleanup(_p.stop) _p = mock.patch('nova.objects.RequestSpec.ensure_network_metadata') self.ensure_network_metadata_mock = _p.start() self.addCleanup(_p.stop) self.mock_network_api = mock.Mock() def _generate_task(self): return migrate.MigrationTask(self.context, self.instance, self.flavor, self.request_spec, self.clean_shutdown, compute_rpcapi.ComputeAPI(), query.SchedulerQueryClient(), report.SchedulerReportClient(), host_list=None, network_api=self.mock_network_api) @mock.patch.object(objects.MigrationList, 'get_by_filters') @mock.patch('nova.scheduler.client.report.SchedulerReportClient') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.objects.Migration.save') @mock.patch('nova.objects.Migration.create') @mock.patch('nova.objects.Service.get_minimum_version_multi') @mock.patch('nova.availability_zones.get_host_availability_zone') @mock.patch.object(scheduler_utils, 'setup_instance_group') @mock.patch.object(query.SchedulerQueryClient, 'select_destinations') @mock.patch.object(compute_rpcapi.ComputeAPI, 'prep_resize') @mock.patch('nova.conductor.tasks.cross_cell_migrate.' 'CrossCellMigrationTask.execute') def _test_execute(self, cross_cell_exec_mock, prep_resize_mock, sel_dest_mock, sig_mock, az_mock, gmv_mock, cm_mock, sm_mock, cn_mock, rc_mock, gbf_mock, requested_destination=False, same_cell=True): sel_dest_mock.return_value = self.host_lists az_mock.return_value = 'myaz' gbf_mock.return_value = objects.MigrationList() mock_get_resources = \ self.mock_network_api.get_requested_resource_for_instance mock_get_resources.return_value = [] if requested_destination: self.request_spec.requested_destination = objects.Destination( host='target_host', node=None, allow_cross_cell_move=not same_cell) self.request_spec.retry = objects.SchedulerRetries.from_dict( self.context, self.filter_properties['retry']) self.filter_properties.pop('retry') self.filter_properties['requested_destination'] = ( self.request_spec.requested_destination) task = self._generate_task() gmv_mock.return_value = 23 # We just need this hook point to set a uuid on the # migration before we use it for teardown def set_migration_uuid(*a, **k): task._migration.uuid = uuids.migration return mock.MagicMock() # NOTE(danms): It's odd to do this on cn_mock, but it's just because # of when we need to have it set in the flow and where we have an easy # place to find it via self.migration. 
cn_mock.side_effect = set_migration_uuid selection = self.host_lists[0][0] with mock.patch.object(task, '_is_selected_host_in_source_cell', return_value=same_cell) as _is_source_cell_mock: task.execute() _is_source_cell_mock.assert_called_once_with(selection) self.ensure_network_metadata_mock.assert_called_once_with( self.instance) self.heal_reqspec_is_bfv_mock.assert_called_once_with( self.context, self.request_spec, self.instance) sig_mock.assert_called_once_with(self.context, self.request_spec) task.query_client.select_destinations.assert_called_once_with( self.context, self.request_spec, [self.instance.uuid], return_objects=True, return_alternates=True) if same_cell: prep_resize_mock.assert_called_once_with( self.context, self.instance, self.request_spec.image, self.flavor, selection.service_host, task._migration, request_spec=self.request_spec, filter_properties=self.filter_properties, node=selection.nodename, clean_shutdown=self.clean_shutdown, host_list=[]) az_mock.assert_called_once_with(self.context, 'host1') cross_cell_exec_mock.assert_not_called() else: cross_cell_exec_mock.assert_called_once_with() az_mock.assert_not_called() prep_resize_mock.assert_not_called() self.assertIsNotNone(task._migration) old_flavor = self.instance.flavor new_flavor = self.flavor self.assertEqual(old_flavor.id, task._migration.old_instance_type_id) self.assertEqual(new_flavor.id, task._migration.new_instance_type_id) self.assertEqual('pre-migrating', task._migration.status) self.assertEqual(self.instance.uuid, task._migration.instance_uuid) self.assertEqual(self.instance.host, task._migration.source_compute) self.assertEqual(self.instance.node, task._migration.source_node) if old_flavor.id != new_flavor.id: self.assertEqual('resize', task._migration.migration_type) else: self.assertEqual('migration', task._migration.migration_type) task._migration.create.assert_called_once_with() if requested_destination: self.assertIsNone(self.request_spec.retry) self.assertIn('cell', self.request_spec.requested_destination) self.assertIsNotNone(self.request_spec.requested_destination.cell) self.assertEqual( not same_cell, self.request_spec.requested_destination.allow_cross_cell_move) mock_get_resources.assert_called_once_with( self.context, self.instance.uuid) self.assertEqual([], self.request_spec.requested_resources) def test_execute(self): self._test_execute() def test_execute_with_destination(self): self._test_execute(requested_destination=True) def test_execute_resize(self): self.flavor = self.flavor.obj_clone() self.flavor.id = 3 self._test_execute() def test_execute_same_cell_false(self): """Tests the execute() scenario that the RequestSpec allows cross cell move and the selected target host is in another cell so CrossCellMigrationTask is executed. 
""" self._test_execute(same_cell=False) @mock.patch.object(objects.MigrationList, 'get_by_filters') @mock.patch('nova.conductor.tasks.migrate.revert_allocation_for_migration') @mock.patch('nova.scheduler.client.report.SchedulerReportClient') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.objects.Migration.save') @mock.patch('nova.objects.Migration.create') @mock.patch('nova.objects.Service.get_minimum_version_multi') @mock.patch('nova.availability_zones.get_host_availability_zone') @mock.patch.object(scheduler_utils, 'setup_instance_group') @mock.patch.object(query.SchedulerQueryClient, 'select_destinations') @mock.patch.object(compute_rpcapi.ComputeAPI, 'prep_resize') def test_execute_rollback(self, prep_resize_mock, sel_dest_mock, sig_mock, az_mock, gmv_mock, cm_mock, sm_mock, cn_mock, rc_mock, mock_ra, mock_gbf): sel_dest_mock.return_value = self.host_lists az_mock.return_value = 'myaz' task = self._generate_task() gmv_mock.return_value = 23 mock_gbf.return_value = objects.MigrationList() mock_get_resources = \ self.mock_network_api.get_requested_resource_for_instance mock_get_resources.return_value = [] # We just need this hook point to set a uuid on the # migration before we use it for teardown def set_migration_uuid(*a, **k): task._migration.uuid = uuids.migration return mock.MagicMock() # NOTE(danms): It's odd to do this on cn_mock, but it's just because # of when we need to have it set in the flow and where we have an easy # place to find it via self.migration. cn_mock.side_effect = set_migration_uuid prep_resize_mock.side_effect = test.TestingException task._held_allocations = mock.sentinel.allocs self.assertRaises(test.TestingException, task.execute) self.assertIsNotNone(task._migration) task._migration.create.assert_called_once_with() task._migration.save.assert_called_once_with() self.assertEqual('error', task._migration.status) mock_ra.assert_called_once_with(task.context, task._source_cn, task.instance, task._migration) mock_get_resources.assert_called_once_with( self.context, self.instance.uuid) @mock.patch.object(scheduler_utils, 'claim_resources') @mock.patch.object(context.RequestContext, 'elevated') def test_execute_reschedule(self, mock_elevated, mock_claim): report_client = report.SchedulerReportClient() # setup task for re-schedule alloc_req = { "allocations": { uuids.host1: { "resources": { "VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100}}}} alternate_selection = objects.Selection( service_host="host1", nodename="node1", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps(alloc_req), allocation_request_version='1.19') task = migrate.MigrationTask( self.context, self.instance, self.flavor, self.request_spec, self.clean_shutdown, compute_rpcapi.ComputeAPI(), query.SchedulerQueryClient(), report_client, host_list=[alternate_selection], network_api=self.mock_network_api) mock_claim.return_value = True actual_selection = task._reschedule() self.assertIs(alternate_selection, actual_selection) mock_claim.assert_called_once_with( mock_elevated.return_value, report_client, self.request_spec, self.instance.uuid, alloc_req, '1.19') @mock.patch.object(scheduler_utils, 'fill_provider_mapping') @mock.patch.object(scheduler_utils, 'claim_resources') @mock.patch.object(context.RequestContext, 'elevated') def test_execute_reschedule_claim_fails_no_more_alternate( self, mock_elevated, mock_claim, mock_fill_provider_mapping): report_client = report.SchedulerReportClient() # set up the task for re-schedule alloc_req = { "allocations": { 
uuids.host1: { "resources": { "VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100}}}} alternate_selection = objects.Selection( service_host="host1", nodename="node1", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps(alloc_req), allocation_request_version='1.19') task = migrate.MigrationTask( self.context, self.instance, self.flavor, self.request_spec, self.clean_shutdown, compute_rpcapi.ComputeAPI(), query.SchedulerQueryClient(), report_client, host_list=[alternate_selection], network_api=self.mock_network_api) mock_claim.return_value = False self.assertRaises(exception.MaxRetriesExceeded, task._reschedule) mock_claim.assert_called_once_with( mock_elevated.return_value, report_client, self.request_spec, self.instance.uuid, alloc_req, '1.19') mock_fill_provider_mapping.assert_not_called() @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'delete_allocation_for_instance') @mock.patch('nova.objects.Service.get_by_host_and_binary') def test_get_host_supporting_request_no_resource_request( self, mock_get_service, mock_delete_allocation, mock_claim_resources): # no resource request so we expect the first host is simply returned self.request_spec.requested_resources = [] task = self._generate_task() resources = { "resources": { "VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100}} first = objects.Selection( service_host="host1", nodename="node1", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps( {"allocations": {uuids.host1: resources}}), allocation_request_version='1.19') alternate = objects.Selection( service_host="host2", nodename="node2", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps( {"allocations": {uuids.host2: resources}}), allocation_request_version='1.19') selection_list = [first, alternate] selected, alternates = task._get_host_supporting_request( selection_list) self.assertEqual(first, selected) self.assertEqual([alternate], alternates) mock_get_service.assert_not_called() # The first host was good and the scheduler made allocation on that # host. So we don't expect any resource claim manipulation mock_delete_allocation.assert_not_called() mock_claim_resources.assert_not_called() @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'delete_allocation_for_instance') @mock.patch('nova.objects.Service.get_by_host_and_binary') def test_get_host_supporting_request_first_host_is_new( self, mock_get_service, mock_delete_allocation, mock_claim_resources): self.request_spec.requested_resources = [ objects.RequestGroup() ] task = self._generate_task() resources = { "resources": { "VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100}} first = objects.Selection( service_host="host1", nodename="node1", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps( {"allocations": {uuids.host1: resources}}), allocation_request_version='1.19') alternate = objects.Selection( service_host="host2", nodename="node2", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps( {"allocations": {uuids.host2: resources}}), allocation_request_version='1.19') selection_list = [first, alternate] first_service = objects.Service(service_host='host1') first_service.version = 39 mock_get_service.return_value = first_service selected, alternates = task._get_host_supporting_request( selection_list) self.assertEqual(first, selected) self.assertEqual([alternate], alternates) mock_get_service.assert_called_once_with( task.context, 'host1', 'nova-compute') # The first host was good and the scheduler made allocation on that # host. So we don't expect any resource claim manipulation mock_delete_allocation.assert_not_called() mock_claim_resources.assert_not_called() @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'delete_allocation_for_instance') @mock.patch('nova.objects.Service.get_by_host_and_binary') def test_get_host_supporting_request_first_host_is_old_no_alternates( self, mock_get_service, mock_delete_allocation, mock_claim_resources): self.request_spec.requested_resources = [ objects.RequestGroup() ] task = self._generate_task() resources = { "resources": { "VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100}} first = objects.Selection( service_host="host1", nodename="node1", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps( {"allocations": {uuids.host1: resources}}), allocation_request_version='1.19') selection_list = [first] first_service = objects.Service(service_host='host1') first_service.version = 38 mock_get_service.return_value = first_service self.assertRaises( exception.MaxRetriesExceeded, task._get_host_supporting_request, selection_list) mock_get_service.assert_called_once_with( task.context, 'host1', 'nova-compute') mock_delete_allocation.assert_called_once_with( task.context, self.instance.uuid) mock_claim_resources.assert_not_called() @mock.patch.object(migrate.LOG, 'debug') @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'delete_allocation_for_instance') @mock.patch('nova.objects.Service.get_by_host_and_binary') def test_get_host_supporting_request_first_host_is_old_second_good( self, mock_get_service, mock_delete_allocation, mock_claim_resources, mock_debug): self.request_spec.requested_resources = [ objects.RequestGroup() ] task = self._generate_task() resources = { "resources": { "VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100}} first = objects.Selection( service_host="host1", nodename="node1", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps( {"allocations": {uuids.host1: resources}}), allocation_request_version='1.19') second = objects.Selection( service_host="host2", nodename="node2", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps( {"allocations": {uuids.host2: resources}}), allocation_request_version='1.19') third = objects.Selection( service_host="host3", nodename="node3", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps( {"allocations": {uuids.host3: resources}}), allocation_request_version='1.19') selection_list = [first, second, third] first_service = objects.Service(service_host='host1') first_service.version = 38 second_service = objects.Service(service_host='host2') second_service.version = 39 mock_get_service.side_effect = [first_service, second_service] selected, alternates = task._get_host_supporting_request( selection_list) self.assertEqual(second, selected) self.assertEqual([third], alternates) mock_get_service.assert_has_calls([ mock.call(task.context, 'host1', 'nova-compute'), mock.call(task.context, 'host2', 'nova-compute'), ]) mock_delete_allocation.assert_called_once_with( task.context, self.instance.uuid) mock_claim_resources.assert_called_once_with( self.context, task.reportclient, task.request_spec, self.instance.uuid, {"allocations": {uuids.host2: resources}}, '1.19') mock_debug.assert_called_once_with( 'Scheduler returned host %(host)s as a possible migration target ' 'but that host is not new enough to support the migration with ' 'resource request %(request)s or the compute RPC is pinned to ' 'less than 5.2. Trying alternate hosts.', {'host': 'host1', 'request': self.request_spec.requested_resources}, instance=self.instance) @mock.patch.object(migrate.LOG, 'debug') @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'delete_allocation_for_instance') @mock.patch('nova.objects.Service.get_by_host_and_binary') def test_get_host_supporting_request_first_host_is_old_second_claim_fails( self, mock_get_service, mock_delete_allocation, mock_claim_resources, mock_debug): self.request_spec.requested_resources = [ objects.RequestGroup() ] task = self._generate_task() resources = { "resources": { "VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100}} first = objects.Selection( service_host="host1", nodename="node1", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps( {"allocations": {uuids.host1: resources}}), allocation_request_version='1.19') second = objects.Selection( service_host="host2", nodename="node2", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps( {"allocations": {uuids.host2: resources}}), allocation_request_version='1.19') third = objects.Selection( service_host="host3", nodename="node3", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps( {"allocations": {uuids.host3: resources}}), allocation_request_version='1.19') fourth = objects.Selection( service_host="host4", nodename="node4", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps( {"allocations": {uuids.host4: resources}}), allocation_request_version='1.19') selection_list = [first, second, third, fourth] first_service = objects.Service(service_host='host1') first_service.version = 38 second_service = objects.Service(service_host='host2') second_service.version = 39 third_service = objects.Service(service_host='host3') third_service.version = 39 mock_get_service.side_effect = [ first_service, second_service, third_service] # not called for the first host but called for the second and third # make the second claim fail to force the selection of the third mock_claim_resources.side_effect = [False, True] selected, alternates = task._get_host_supporting_request( selection_list) self.assertEqual(third, selected) self.assertEqual([fourth], alternates) mock_get_service.assert_has_calls([ mock.call(task.context, 'host1', 'nova-compute'), mock.call(task.context, 'host2', 'nova-compute'), mock.call(task.context, 'host3', 'nova-compute'), ]) mock_delete_allocation.assert_called_once_with( task.context, self.instance.uuid) mock_claim_resources.assert_has_calls([ mock.call( self.context, task.reportclient, task.request_spec, self.instance.uuid, {"allocations": {uuids.host2: resources}}, '1.19'), mock.call( self.context, task.reportclient, task.request_spec, self.instance.uuid, {"allocations": {uuids.host3: resources}}, '1.19'), ]) mock_debug.assert_has_calls([ mock.call( 'Scheduler returned host %(host)s as a possible migration ' 'target but that host is not new enough to support the ' 'migration with resource request %(request)s or the compute ' 'RPC is pinned to less than 5.2. Trying alternate hosts.', {'host': 'host1', 'request': self.request_spec.requested_resources}, instance=self.instance), mock.call( 'Scheduler returned alternate host %(host)s as a possible ' 'migration target but resource claim ' 'failed on that host. Trying another alternate.', {'host': 'host2'}, instance=self.instance), ]) @mock.patch.object(migrate.LOG, 'debug') @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'delete_allocation_for_instance') @mock.patch('nova.objects.Service.get_by_host_and_binary') def test_get_host_supporting_request_both_first_and_second_too_old( self, mock_get_service, mock_delete_allocation, mock_claim_resources, mock_debug): self.request_spec.requested_resources = [ objects.RequestGroup() ] task = self._generate_task() resources = { "resources": { "VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100}} first = objects.Selection( service_host="host1", nodename="node1", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps( {"allocations": {uuids.host1: resources}}), allocation_request_version='1.19') second = objects.Selection( service_host="host2", nodename="node2", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps( {"allocations": {uuids.host2: resources}}), allocation_request_version='1.19') third = objects.Selection( service_host="host3", nodename="node3", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps( {"allocations": {uuids.host3: resources}}), allocation_request_version='1.19') fourth = objects.Selection( service_host="host4", nodename="node4", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps( {"allocations": {uuids.host4: resources}}), allocation_request_version='1.19') selection_list = [first, second, third, fourth] first_service = objects.Service(service_host='host1') first_service.version = 38 second_service = objects.Service(service_host='host2') second_service.version = 38 third_service = objects.Service(service_host='host3') third_service.version = 39 mock_get_service.side_effect = [ first_service, second_service, third_service] # not called for the first and second hosts but called for the third mock_claim_resources.side_effect = [True] selected, alternates = task._get_host_supporting_request( selection_list) self.assertEqual(third, selected) self.assertEqual([fourth], alternates) mock_get_service.assert_has_calls([ mock.call(task.context, 'host1', 'nova-compute'), mock.call(task.context, 'host2', 'nova-compute'), mock.call(task.context, 'host3', 'nova-compute'), ]) mock_delete_allocation.assert_called_once_with( task.context, self.instance.uuid) mock_claim_resources.assert_called_once_with( self.context, task.reportclient, task.request_spec, self.instance.uuid, {"allocations": {uuids.host3: resources}}, '1.19') mock_debug.assert_has_calls([ mock.call( 'Scheduler returned host %(host)s as a possible migration ' 'target but that host is not new enough to support the ' 'migration with resource request %(request)s or the compute ' 'RPC is pinned to less than 5.2. Trying alternate hosts.', {'host': 'host1', 'request': self.request_spec.requested_resources}, instance=self.instance), mock.call( 'Scheduler returned alternate host %(host)s as a possible ' 'migration target but that host is not new enough to support ' 'the migration with resource request %(request)s or the ' 'compute RPC is pinned to less than 5.2. 
Trying another ' 'alternate.', {'host': 'host2', 'request': self.request_spec.requested_resources}, instance=self.instance), ]) @mock.patch.object(migrate.LOG, 'debug') @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch('nova.objects.Service.get_by_host_and_binary') def test_reschedule_old_compute_skipped( self, mock_get_service, mock_claim_resources, mock_debug): self.request_spec.requested_resources = [ objects.RequestGroup(requester_id=uuids.port1) ] task = self._generate_task() resources = { "resources": { "VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100}} first = objects.Selection( service_host="host1", nodename="node1", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps( {"allocations": {uuids.host1: resources}, "mappings": {uuids.port1: [uuids.host1]}}), allocation_request_version='1.35') second = objects.Selection( service_host="host2", nodename="node2", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps( {"allocations": {uuids.host2: resources}, "mappings": {uuids.port1: [uuids.host2]}}), allocation_request_version='1.35') first_service = objects.Service(service_host='host1') first_service.version = 38 second_service = objects.Service(service_host='host2') second_service.version = 39 mock_get_service.side_effect = [first_service, second_service] # set up task for re-schedule task.host_list = [first, second] selected = task._reschedule() self.assertEqual(second, selected) self.assertEqual([], task.host_list) mock_get_service.assert_has_calls([ mock.call(task.context, 'host1', 'nova-compute'), mock.call(task.context, 'host2', 'nova-compute'), ]) mock_claim_resources.assert_called_once_with( self.context.elevated(), task.reportclient, task.request_spec, self.instance.uuid, {"allocations": {uuids.host2: resources}, "mappings": {uuids.port1: [uuids.host2]}}, '1.35') mock_debug.assert_has_calls([ mock.call( 'Scheduler returned alternate host %(host)s as a possible ' 'migration target for re-schedule but that host is not ' 'new enough to support the migration with resource ' 'request %(request)s. 
Trying another alternate.', {'host': 'host1', 'request': self.request_spec.requested_resources}, instance=self.instance), ]) @mock.patch.object(migrate.LOG, 'debug') @mock.patch('nova.scheduler.utils.fill_provider_mapping') @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch('nova.objects.Service.get_by_host_and_binary') def test_reschedule_old_computes_no_more_alternates( self, mock_get_service, mock_claim_resources, mock_fill_mapping, mock_debug): self.request_spec.requested_resources = [ objects.RequestGroup() ] task = self._generate_task() resources = { "resources": { "VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100}} first = objects.Selection( service_host="host1", nodename="node1", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps( {"allocations": {uuids.host1: resources}}), allocation_request_version='1.19') second = objects.Selection( service_host="host2", nodename="node2", cell_uuid=uuids.cell1, allocation_request=jsonutils.dumps( {"allocations": {uuids.host2: resources}}), allocation_request_version='1.19') first_service = objects.Service(service_host='host1') first_service.version = 38 second_service = objects.Service(service_host='host2') second_service.version = 38 mock_get_service.side_effect = [first_service, second_service] # set up task for re-schedule task.host_list = [first, second] self.assertRaises(exception.MaxRetriesExceeded, task._reschedule) self.assertEqual([], task.host_list) mock_get_service.assert_has_calls([ mock.call(task.context, 'host1', 'nova-compute'), mock.call(task.context, 'host2', 'nova-compute'), ]) mock_claim_resources.assert_not_called() mock_fill_mapping.assert_not_called() mock_debug.assert_has_calls([ mock.call( 'Scheduler returned alternate host %(host)s as a possible ' 'migration target for re-schedule but that host is not ' 'new enough to support the migration with resource ' 'request %(request)s. Trying another alternate.', {'host': 'host1', 'request': self.request_spec.requested_resources}, instance=self.instance), mock.call( 'Scheduler returned alternate host %(host)s as a possible ' 'migration target for re-schedule but that host is not ' 'new enough to support the migration with resource ' 'request %(request)s. Trying another alternate.', {'host': 'host2', 'request': self.request_spec.requested_resources}, instance=self.instance), ]) @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid', return_value=objects.InstanceMapping( cell_mapping=objects.CellMapping(uuid=uuids.cell1))) @mock.patch('nova.conductor.tasks.migrate.LOG.debug') def test_set_requested_destination_cell_allow_cross_cell_resize_true( self, mock_debug, mock_get_im): """Tests the scenario that the RequestSpec is configured for allow_cross_cell_resize=True. 
""" task = self._generate_task() legacy_props = self.request_spec.to_legacy_filter_properties_dict() self.request_spec.requested_destination = objects.Destination( allow_cross_cell_move=True) task._set_requested_destination_cell(legacy_props) mock_get_im.assert_called_once_with(self.context, self.instance.uuid) mock_debug.assert_called_once() self.assertIn('Allowing migration from cell', mock_debug.call_args[0][0]) self.assertEqual(mock_get_im.return_value.cell_mapping, self.request_spec.requested_destination.cell) @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid', return_value=objects.InstanceMapping( cell_mapping=objects.CellMapping(uuid=uuids.cell1))) @mock.patch('nova.conductor.tasks.migrate.LOG.debug') def test_set_requested_destination_cell_allow_cross_cell_resize_true_host( self, mock_debug, mock_get_im): """Tests the scenario that the RequestSpec is configured for allow_cross_cell_resize=True and there is a requested target host. """ task = self._generate_task() legacy_props = self.request_spec.to_legacy_filter_properties_dict() self.request_spec.requested_destination = objects.Destination( allow_cross_cell_move=True, host='fake-host') task._set_requested_destination_cell(legacy_props) mock_get_im.assert_called_once_with(self.context, self.instance.uuid) mock_debug.assert_called_once() self.assertIn('Not restricting cell', mock_debug.call_args[0][0]) self.assertIsNone(self.request_spec.requested_destination.cell) @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid', return_value=objects.InstanceMapping( cell_mapping=objects.CellMapping(uuid=uuids.cell1))) @mock.patch('nova.conductor.tasks.migrate.LOG.debug') def test_set_requested_destination_cell_allow_cross_cell_resize_false( self, mock_debug, mock_get_im): """Tests the scenario that the RequestSpec is configured for allow_cross_cell_resize=False. """ task = self._generate_task() legacy_props = self.request_spec.to_legacy_filter_properties_dict() # We don't have to explicitly set RequestSpec.requested_destination # since _set_requested_destination_cell will do that and the # Destination object will default allow_cross_cell_move to False. task._set_requested_destination_cell(legacy_props) mock_get_im.assert_called_once_with(self.context, self.instance.uuid) mock_debug.assert_called_once() self.assertIn('Restricting to cell', mock_debug.call_args[0][0]) def test_is_selected_host_in_source_cell_true(self): """Tests the scenario that the host Selection from the scheduler is in the same cell as the instance. """ task = self._generate_task() selection = objects.Selection(cell_uuid=self.context.cell_uuid) self.assertTrue(task._is_selected_host_in_source_cell(selection)) def test_is_selected_host_in_source_cell_false(self): """Tests the scenario that the host Selection from the scheduler is not in the same cell as the instance. 
""" task = self._generate_task() selection = objects.Selection(cell_uuid=uuids.cell2, service_host='x') self.assertFalse(task._is_selected_host_in_source_cell(selection)) class MigrationTaskAllocationUtils(test.NoDBTestCase): @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') def test_replace_allocation_with_migration_no_host(self, mock_cn): mock_cn.side_effect = exception.ComputeHostNotFound(host='host') migration = objects.Migration() instance = objects.Instance(host='host', node='node') self.assertRaises(exception.ComputeHostNotFound, migrate.replace_allocation_with_migration, mock.sentinel.context, instance, migration) mock_cn.assert_called_once_with(mock.sentinel.context, instance.host, instance.node) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocs_for_consumer') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') def test_replace_allocation_with_migration_no_allocs(self, mock_cn, mock_ga): mock_ga.return_value = {'allocations': {}} migration = objects.Migration(uuid=uuids.migration) instance = objects.Instance(uuid=uuids.instance, host='host', node='node') result = migrate.replace_allocation_with_migration( mock.sentinel.context, instance, migration) self.assertEqual((None, None), result) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'put_allocations') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocs_for_consumer') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') def test_replace_allocation_with_migration_allocs_fail(self, mock_cn, mock_ga, mock_pa): ctxt = context.get_admin_context() migration = objects.Migration(uuid=uuids.migration) instance = objects.Instance(uuid=uuids.instance, user_id='fake', project_id='fake', host='host', node='node') mock_pa.return_value = False self.assertRaises(exception.NoValidHost, migrate.replace_allocation_with_migration, ctxt, instance, migration) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/conductor/test_conductor.py0000664000175000017500000061721000000000000023272 0ustar00zuulzuul00000000000000# Copyright 2012 IBM Corp. # Copyright 2013 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Tests for the conductor service.""" import copy import mock from oslo_db import exception as db_exc import oslo_messaging as messaging from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from oslo_versionedobjects import exception as ovo_exc import six from nova import block_device from nova.compute import flavors from nova.compute import rpcapi as compute_rpcapi from nova.compute import task_states from nova.compute import utils as compute_utils from nova.compute import vm_states from nova.conductor import api as conductor_api from nova.conductor import manager as conductor_manager from nova.conductor import rpcapi as conductor_rpcapi from nova.conductor.tasks import live_migrate from nova.conductor.tasks import migrate from nova import conf from nova import context from nova.db import api as db from nova.db.sqlalchemy import api as db_api from nova.db.sqlalchemy import api_models from nova import exception as exc from nova.image import glance as image_api from nova import objects from nova.objects import base as obj_base from nova.objects import block_device as block_device_obj from nova.objects import fields from nova.scheduler.client import query from nova.scheduler import utils as scheduler_utils from nova import test from nova.tests import fixtures from nova.tests.unit.api.openstack import fakes from nova.tests.unit import cast_as_call from nova.tests.unit.compute import test_compute from nova.tests.unit import fake_build_request from nova.tests.unit import fake_instance from nova.tests.unit import fake_notifier from nova.tests.unit import fake_request_spec from nova.tests.unit import fake_server_actions from nova.tests.unit import utils as test_utils from nova import utils from nova.volume import cinder CONF = conf.CONF fake_alloc1 = { "allocations": { uuids.host1: { "resources": {"VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100} } }, "mappings": { uuids.port1: [uuids.host1] } } fake_alloc2 = { "allocations": { uuids.host2: { "resources": {"VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100} } }, "mappings": { uuids.port1: [uuids.host2] } } fake_alloc3 = { "allocations": { uuids.host3: { "resources": {"VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100} } }, "mappings": { uuids.port1: [uuids.host3] } } fake_alloc_json1 = jsonutils.dumps(fake_alloc1) fake_alloc_json2 = jsonutils.dumps(fake_alloc2) fake_alloc_json3 = jsonutils.dumps(fake_alloc3) fake_alloc_version = "1.28" fake_selection1 = objects.Selection(service_host="host1", nodename="node1", cell_uuid=uuids.cell, limits=None, allocation_request=fake_alloc_json1, allocation_request_version=fake_alloc_version) fake_selection2 = objects.Selection(service_host="host2", nodename="node2", cell_uuid=uuids.cell, limits=None, allocation_request=fake_alloc_json2, allocation_request_version=fake_alloc_version) fake_selection3 = objects.Selection(service_host="host3", nodename="node3", cell_uuid=uuids.cell, limits=None, allocation_request=fake_alloc_json3, allocation_request_version=fake_alloc_version) fake_host_lists1 = [[fake_selection1]] fake_host_lists2 = [[fake_selection1], [fake_selection2]] fake_host_lists_alt = [[fake_selection1, fake_selection2, fake_selection3]] class FakeContext(context.RequestContext): def elevated(self): """Return a consistent elevated context so we can detect it.""" if not hasattr(self, '_elevated'): self._elevated = super(FakeContext, self).elevated() return self._elevated class _BaseTestCase(object): def setUp(self): super(_BaseTestCase, 
self).setUp() self.user_id = fakes.FAKE_USER_ID self.project_id = fakes.FAKE_PROJECT_ID self.context = FakeContext(self.user_id, self.project_id) fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) self.stub_out('nova.rpc.RequestContextSerializer.deserialize_context', lambda *args, **kwargs: self.context) self.useFixture(fixtures.SpawnIsSynchronousFixture()) class ConductorTestCase(_BaseTestCase, test.TestCase): """Conductor Manager Tests.""" def setUp(self): super(ConductorTestCase, self).setUp() self.conductor = conductor_manager.ConductorManager() self.conductor_manager = self.conductor def _test_object_action(self, is_classmethod, raise_exception): class TestObject(obj_base.NovaObject): def foo(self, raise_exception=False): if raise_exception: raise Exception('test') else: return 'test' @classmethod def bar(cls, context, raise_exception=False): if raise_exception: raise Exception('test') else: return 'test' obj_base.NovaObjectRegistry.register(TestObject) obj = TestObject() # NOTE(danms): After a trip over RPC, any tuple will be a list, # so use a list here to make sure we can handle it fake_args = [] if is_classmethod: versions = {'TestObject': '1.0'} result = self.conductor.object_class_action_versions( self.context, TestObject.obj_name(), 'bar', versions, fake_args, {'raise_exception': raise_exception}) else: updates, result = self.conductor.object_action( self.context, obj, 'foo', fake_args, {'raise_exception': raise_exception}) self.assertEqual('test', result) def test_object_action(self): self._test_object_action(False, False) def test_object_action_on_raise(self): self.assertRaises(messaging.ExpectedException, self._test_object_action, False, True) def test_object_class_action(self): self._test_object_action(True, False) def test_object_class_action_on_raise(self): self.assertRaises(messaging.ExpectedException, self._test_object_action, True, True) def test_object_action_copies_object(self): class TestObject(obj_base.NovaObject): fields = {'dict': fields.DictOfStringsField()} def touch_dict(self): self.dict['foo'] = 'bar' self.obj_reset_changes() obj_base.NovaObjectRegistry.register(TestObject) obj = TestObject() obj.dict = {} obj.obj_reset_changes() updates, result = self.conductor.object_action( self.context, obj, 'touch_dict', tuple(), {}) # NOTE(danms): If conductor did not properly copy the object, then # the new and reference copies of the nested dict object will be # the same, and thus 'dict' will not be reported as changed self.assertIn('dict', updates) self.assertEqual({'foo': 'bar'}, updates['dict']) def test_object_class_action_versions(self): @obj_base.NovaObjectRegistry.register class TestObject(obj_base.NovaObject): VERSION = '1.10' @classmethod def foo(cls, context): return cls() versions = { 'TestObject': '1.2', 'OtherObj': '1.0', } with mock.patch.object(self.conductor_manager, '_object_dispatch') as m: m.return_value = TestObject() m.return_value.obj_to_primitive = mock.MagicMock() self.conductor.object_class_action_versions( self.context, TestObject.obj_name(), 'foo', versions, tuple(), {}) m.return_value.obj_to_primitive.assert_called_once_with( target_version='1.2', version_manifest=versions) def test_object_class_action_versions_old_object(self): # Make sure we return older than requested objects unmodified, # see bug #1596119. 
@obj_base.NovaObjectRegistry.register class TestObject(obj_base.NovaObject): VERSION = '1.10' @classmethod def foo(cls, context): return cls() versions = { 'TestObject': '1.10', 'OtherObj': '1.0', } with mock.patch.object(self.conductor_manager, '_object_dispatch') as m: m.return_value = TestObject() m.return_value.VERSION = '1.9' m.return_value.obj_to_primitive = mock.MagicMock() obj = self.conductor.object_class_action_versions( self.context, TestObject.obj_name(), 'foo', versions, tuple(), {}) self.assertFalse(m.return_value.obj_to_primitive.called) self.assertEqual('1.9', obj.VERSION) def test_object_class_action_versions_major_version_diff(self): @obj_base.NovaObjectRegistry.register class TestObject(obj_base.NovaObject): VERSION = '2.10' @classmethod def foo(cls, context): return cls() versions = { 'TestObject': '2.10', 'OtherObj': '1.0', } with mock.patch.object(self.conductor_manager, '_object_dispatch') as m: m.return_value = TestObject() m.return_value.VERSION = '1.9' self.assertRaises( ovo_exc.InvalidTargetVersion, self.conductor.object_class_action_versions, self.context, TestObject.obj_name(), 'foo', versions, tuple(), {}) def test_reset(self): with mock.patch.object(objects.Service, 'clear_min_version_cache' ) as mock_clear_cache: self.conductor.reset() mock_clear_cache.assert_called_once_with() def test_provider_fw_rule_get_all(self): result = self.conductor.provider_fw_rule_get_all(self.context) self.assertEqual([], result) def test_conductor_host(self): self.assertTrue(hasattr(self.conductor_manager, 'host')) self.assertEqual(CONF.host, self.conductor_manager.host) class ConductorRPCAPITestCase(_BaseTestCase, test.TestCase): """Conductor RPC API Tests.""" def setUp(self): super(ConductorRPCAPITestCase, self).setUp() self.conductor_service = self.start_service( 'conductor', manager='nova.conductor.manager.ConductorManager') self.conductor_manager = self.conductor_service.manager self.conductor = conductor_rpcapi.ConductorAPI() class ConductorAPITestCase(_BaseTestCase, test.TestCase): """Conductor API Tests.""" def setUp(self): super(ConductorAPITestCase, self).setUp() self.conductor_service = self.start_service( 'conductor', manager='nova.conductor.manager.ConductorManager') self.conductor = conductor_api.API() self.conductor_manager = self.conductor_service.manager def test_wait_until_ready(self): timeouts = [] calls = dict(count=0) def fake_ping(self, context, message, timeout): timeouts.append(timeout) calls['count'] += 1 if calls['count'] < 15: raise messaging.MessagingTimeout("fake") self.stub_out('nova.baserpc.BaseAPI.ping', fake_ping) self.conductor.wait_until_ready(self.context) self.assertEqual(timeouts.count(10), 10) self.assertIn(None, timeouts) class _BaseTaskTestCase(object): def setUp(self): super(_BaseTaskTestCase, self).setUp() self.user_id = fakes.FAKE_USER_ID self.project_id = fakes.FAKE_PROJECT_ID self.context = FakeContext(self.user_id, self.project_id) fake_server_actions.stub_out_action_events(self) self.request_spec = objects.RequestSpec() self.stub_out('nova.rpc.RequestContextSerializer.deserialize_context', lambda *args, **kwargs: self.context) self.useFixture(fixtures.SpawnIsSynchronousFixture()) _p = mock.patch('nova.compute.utils.heal_reqspec_is_bfv') self.heal_reqspec_is_bfv_mock = _p.start() self.addCleanup(_p.stop) _p = mock.patch('nova.objects.RequestSpec.ensure_network_metadata') self.ensure_network_metadata_mock = _p.start() self.addCleanup(_p.stop) def _prepare_rebuild_args(self, update_args=None): # Args that don't get passed in to the 
method but do get passed to RPC migration = update_args and update_args.pop('migration', None) node = update_args and update_args.pop('node', None) limits = update_args and update_args.pop('limits', None) rebuild_args = {'new_pass': 'admin_password', 'injected_files': 'files_to_inject', 'image_ref': uuids.image_ref, 'orig_image_ref': uuids.orig_image_ref, 'orig_sys_metadata': 'orig_sys_meta', 'bdms': {}, 'recreate': False, 'on_shared_storage': False, 'preserve_ephemeral': False, 'host': 'compute-host', 'request_spec': None} if update_args: rebuild_args.update(update_args) compute_rebuild_args = copy.deepcopy(rebuild_args) compute_rebuild_args['migration'] = migration compute_rebuild_args['node'] = node compute_rebuild_args['limits'] = limits return rebuild_args, compute_rebuild_args @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') @mock.patch.object(objects.RequestSpec, 'save') @mock.patch.object(migrate.MigrationTask, 'execute') @mock.patch.object(utils, 'get_image_from_system_metadata') @mock.patch.object(objects.RequestSpec, 'from_components') def _test_cold_migrate(self, spec_from_components, get_image_from_metadata, migration_task_execute, spec_save, get_im, clean_shutdown=True): get_im.return_value.cell_mapping = ( objects.CellMappingList.get_all(self.context)[0]) get_image_from_metadata.return_value = 'image' inst = fake_instance.fake_db_instance(image_ref='image_ref') inst_obj = objects.Instance._from_db_object( self.context, objects.Instance(), inst, []) inst_obj.system_metadata = {'image_hw_disk_bus': 'scsi'} flavor = objects.Flavor.get_by_name(self.context, 'm1.small') flavor.extra_specs = {'extra_specs': 'fake'} inst_obj.flavor = flavor fake_spec = fake_request_spec.fake_spec_obj() spec_from_components.return_value = fake_spec scheduler_hint = {'filter_properties': {}} if isinstance(self.conductor, conductor_api.ComputeTaskAPI): # The API method is actually 'resize_instance'. It gets # converted into 'migrate_server' when doing RPC. self.conductor.resize_instance( self.context, inst_obj, scheduler_hint, flavor, [], clean_shutdown, host_list=None) else: self.conductor.migrate_server( self.context, inst_obj, scheduler_hint, False, False, flavor, None, None, [], clean_shutdown) get_image_from_metadata.assert_called_once_with( inst_obj.system_metadata) migration_task_execute.assert_called_once_with() spec_save.assert_called_once_with() def test_cold_migrate(self): self._test_cold_migrate() def test_cold_migrate_forced_shutdown(self): self._test_cold_migrate(clean_shutdown=False) @mock.patch.object(conductor_manager.ComputeTaskManager, '_create_and_bind_arqs') @mock.patch.object(compute_rpcapi.ComputeAPI, 'build_and_run_instance') @mock.patch.object(db, 'block_device_mapping_get_all_by_instance', return_value=[]) @mock.patch.object(conductor_manager.ComputeTaskManager, '_schedule_instances') @mock.patch('nova.objects.BuildRequest.get_by_instance_uuid') @mock.patch('nova.availability_zones.get_host_availability_zone') @mock.patch('nova.objects.Instance.save') @mock.patch.object(objects.RequestSpec, 'from_primitives') def test_build_instances(self, mock_fp, mock_save, mock_getaz, mock_buildreq, mock_schedule, mock_bdm, mock_build, mock_create_bind_arqs): """Tests creating two instances and the scheduler returns a unique host/node combo for each instance. 
""" fake_spec = objects.RequestSpec() mock_fp.return_value = fake_spec instance_type = objects.Flavor.get_by_name(self.context, 'm1.small') # NOTE(danms): Avoid datetime timezone issues with converted flavors instance_type.created_at = None instances = [objects.Instance(context=self.context, id=i, uuid=uuids.fake, flavor=instance_type) for i in range(2)] instance_type_p = obj_base.obj_to_primitive(instance_type) instance_properties = obj_base.obj_to_primitive(instances[0]) instance_properties['system_metadata'] = flavors.save_flavor_info( {}, instance_type) spec = {'image': {'fake_data': 'should_pass_silently'}, 'instance_properties': instance_properties, 'instance_type': instance_type_p, 'num_instances': 2} filter_properties = {'retry': {'num_attempts': 1, 'hosts': []}} sched_return = copy.deepcopy(fake_host_lists2) mock_schedule.return_value = sched_return filter_properties2 = {'retry': {'num_attempts': 1, 'hosts': [['host1', 'node1']]}, 'limits': {}} filter_properties3 = {'limits': {}, 'retry': {'num_attempts': 1, 'hosts': [['host2', 'node2']]}} # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) mock_getaz.return_value = 'myaz' mock_create_bind_arqs.return_value = mock.sentinel self.conductor.build_instances(self.context, instances=instances, image={'fake_data': 'should_pass_silently'}, filter_properties={}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False, host_lists=None) mock_getaz.assert_has_calls([ mock.call(self.context, 'host1'), mock.call(self.context, 'host2')]) # A RequestSpec is built from primitives once before calling the # scheduler to get hosts and then once per instance we're building. 
mock_fp.assert_has_calls([ mock.call(self.context, spec, filter_properties), mock.call(self.context, spec, filter_properties2), mock.call(self.context, spec, filter_properties3)]) mock_schedule.assert_called_once_with( self.context, fake_spec, [uuids.fake, uuids.fake], return_alternates=True) mock_bdm.assert_has_calls([mock.call(self.context, instances[0].uuid), mock.call(self.context, instances[1].uuid)]) mock_build.assert_has_calls([ mock.call(self.context, instance=mock.ANY, host='host1', image={'fake_data': 'should_pass_silently'}, request_spec=fake_spec, filter_properties=filter_properties2, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping=mock.ANY, node='node1', limits=None, host_list=sched_return[0], accel_uuids=mock.sentinel), mock.call(self.context, instance=mock.ANY, host='host2', image={'fake_data': 'should_pass_silently'}, request_spec=fake_spec, filter_properties=filter_properties3, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping=mock.ANY, node='node2', limits=None, host_list=sched_return[1], accel_uuids=mock.sentinel)]) mock_create_bind_arqs.assert_has_calls([ mock.call(self.context, instances[0].uuid, instances[0].flavor.extra_specs, 'node1', mock.ANY), mock.call(self.context, instances[1].uuid, instances[1].flavor.extra_specs, 'node2', mock.ANY), ]) @mock.patch.object(conductor_manager.ComputeTaskManager, '_cleanup_when_reschedule_fails') @mock.patch.object(conductor_manager.ComputeTaskManager, '_create_and_bind_arqs') @mock.patch.object(compute_rpcapi.ComputeAPI, 'build_and_run_instance') @mock.patch.object(db, 'block_device_mapping_get_all_by_instance', return_value=[]) @mock.patch.object(conductor_manager.ComputeTaskManager, '_schedule_instances') @mock.patch('nova.objects.BuildRequest.get_by_instance_uuid') @mock.patch('nova.availability_zones.get_host_availability_zone') @mock.patch('nova.objects.Instance.save') @mock.patch.object(objects.RequestSpec, 'from_primitives') def test_build_instances_arq_failure(self, mock_fp, mock_save, mock_getaz, mock_buildreq, mock_schedule, mock_bdm, mock_build, mock_create_bind_arqs, mock_cleanup): """If _create_and_bind_arqs throws an exception, _destroy_build_request must be called for each instance. 
""" fake_spec = objects.RequestSpec() mock_fp.return_value = fake_spec instance_type = objects.Flavor.get_by_name(self.context, 'm1.small') # NOTE(danms): Avoid datetime timezone issues with converted flavors instance_type.created_at = None instances = [objects.Instance(context=self.context, id=i, uuid=uuids.fake, flavor=instance_type) for i in range(2)] instance_properties = obj_base.obj_to_primitive(instances[0]) instance_properties['system_metadata'] = flavors.save_flavor_info( {}, instance_type) sched_return = copy.deepcopy(fake_host_lists2) mock_schedule.return_value = sched_return # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) mock_getaz.return_value = 'myaz' mock_create_bind_arqs.side_effect = ( exc.AcceleratorRequestOpFailed(op='', msg='')) self.conductor.build_instances(self.context, instances=instances, image={'fake_data': 'should_pass_silently'}, filter_properties={}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False, host_lists=None) mock_create_bind_arqs.assert_has_calls([ mock.call(self.context, instances[0].uuid, instances[0].flavor.extra_specs, 'node1', mock.ANY), mock.call(self.context, instances[1].uuid, instances[1].flavor.extra_specs, 'node2', mock.ANY), ]) # Comparing instances fails because the instance objects have changed # in the above flow. So, we compare the fields instead. mock_cleanup.assert_has_calls([ mock.call(self.context, test.MatchType(objects.Instance), test.MatchType(exc.AcceleratorRequestOpFailed), test.MatchType(dict), None), mock.call(self.context, test.MatchType(objects.Instance), test.MatchType(exc.AcceleratorRequestOpFailed), test.MatchType(dict), None), ]) call_list = mock_cleanup.call_args_list for idx, instance in enumerate(instances): actual_inst = call_list[idx][0][1] self.assertEqual(actual_inst['uuid'], instance['uuid']) self.assertEqual(actual_inst['flavor']['extra_specs'], {}) @mock.patch.object(scheduler_utils, 'build_request_spec') @mock.patch.object(scheduler_utils, 'setup_instance_group') @mock.patch.object(scheduler_utils, 'set_vm_state_and_notify') @mock.patch.object(query.SchedulerQueryClient, 'select_destinations') @mock.patch.object(conductor_manager.ComputeTaskManager, '_cleanup_allocated_networks') @mock.patch.object(conductor_manager.ComputeTaskManager, '_destroy_build_request') def test_build_instances_scheduler_failure( self, dest_build_req_mock, cleanup_mock, sd_mock, state_mock, sig_mock, bs_mock): instances = [fake_instance.fake_instance_obj(self.context) for i in range(2)] image = {'fake-data': 'should_pass_silently'} spec = {'fake': 'specs', 'instance_properties': instances[0]} exception = exc.NoValidHost(reason='fake-reason') dest_build_req_mock.side_effect = ( exc.BuildRequestNotFound(uuid='fake'), None) bs_mock.return_value = spec sd_mock.side_effect = exception updates = {'vm_state': vm_states.ERROR, 'task_state': None} # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) self.conductor.build_instances( self.context, instances=instances, image=image, filter_properties={}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False) set_state_calls = [] cleanup_network_calls = [] dest_build_req_calls = [] for instance in instances: 
set_state_calls.append(mock.call( self.context, instance.uuid, 'compute_task', 'build_instances', updates, exception, spec)) cleanup_network_calls.append(mock.call( self.context, mock.ANY, None)) dest_build_req_calls.append( mock.call(self.context, test.MatchType(type(instance)))) state_mock.assert_has_calls(set_state_calls) cleanup_mock.assert_has_calls(cleanup_network_calls) dest_build_req_mock.assert_has_calls(dest_build_req_calls) def test_build_instances_retry_exceeded(self): instances = [fake_instance.fake_instance_obj(self.context)] image = {'fake-data': 'should_pass_silently'} filter_properties = {'retry': {'num_attempts': 10, 'hosts': []}} updates = {'vm_state': vm_states.ERROR, 'task_state': None} @mock.patch.object(conductor_manager.ComputeTaskManager, '_cleanup_allocated_networks') @mock.patch.object(scheduler_utils, 'set_vm_state_and_notify') @mock.patch.object(scheduler_utils, 'build_request_spec') @mock.patch.object(scheduler_utils, 'populate_retry') def _test(populate_retry, build_spec, set_vm_state_and_notify, cleanup_mock): # build_instances() is a cast, we need to wait for it to # complete self.useFixture(cast_as_call.CastAsCall(self)) populate_retry.side_effect = exc.MaxRetriesExceeded( reason="Too many try") self.conductor.build_instances( self.context, instances=instances, image=image, filter_properties=filter_properties, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False) populate_retry.assert_called_once_with( filter_properties, instances[0].uuid) set_vm_state_and_notify.assert_called_once_with( self.context, instances[0].uuid, 'compute_task', 'build_instances', updates, mock.ANY, build_spec.return_value) cleanup_mock.assert_called_once_with(self.context, mock.ANY, None) _test() @mock.patch.object(scheduler_utils, 'build_request_spec') @mock.patch.object(scheduler_utils, 'setup_instance_group') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify') @mock.patch.object(conductor_manager.ComputeTaskManager, '_cleanup_allocated_networks') def test_build_instances_scheduler_group_failure( self, cleanup_mock, state_mock, sig_mock, bs_mock): instances = [fake_instance.fake_instance_obj(self.context) for i in range(2)] image = {'fake-data': 'should_pass_silently'} spec = {'fake': 'specs', 'instance_properties': instances[0]} bs_mock.return_value = spec exception = exc.UnsupportedPolicyException(reason='fake-reason') sig_mock.side_effect = exception updates = {'vm_state': vm_states.ERROR, 'task_state': None} # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) self.conductor.build_instances( context=self.context, instances=instances, image=image, filter_properties={}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False) set_state_calls = [] cleanup_network_calls = [] for instance in instances: set_state_calls.append(mock.call( self.context, instance.uuid, 'build_instances', updates, exception, spec)) cleanup_network_calls.append(mock.call( self.context, mock.ANY, None)) state_mock.assert_has_calls(set_state_calls) cleanup_mock.assert_has_calls(cleanup_network_calls) @mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(objects.InstanceMapping, 
'get_by_instance_uuid', side_effect=exc.InstanceMappingNotFound(uuid='fake')) @mock.patch.object(objects.HostMapping, 'get_by_host') @mock.patch.object(query.SchedulerQueryClient, 'select_destinations') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify') def test_build_instances_no_instance_mapping(self, _mock_set_state, mock_select_dests, mock_get_by_host, mock_get_inst_map_by_uuid, _mock_save, _mock_buildreq): mock_select_dests.return_value = [[fake_selection1], [fake_selection2]] instances = [fake_instance.fake_instance_obj(self.context) for i in range(2)] image = {'fake-data': 'should_pass_silently'} # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) with mock.patch.object(self.conductor_manager.compute_rpcapi, 'build_and_run_instance'): self.conductor.build_instances( context=self.context, instances=instances, image=image, filter_properties={}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False) mock_get_inst_map_by_uuid.assert_has_calls([ mock.call(self.context, instances[0].uuid), mock.call(self.context, instances[1].uuid)]) self.assertFalse(mock_get_by_host.called) @mock.patch('nova.compute.utils.notify_about_compute_task_error') @mock.patch.object(objects.Instance, 'save') def test_build_instances_exhaust_host_list(self, _mock_save, mock_notify): # A list of three alternate hosts for one instance host_lists = copy.deepcopy(fake_host_lists_alt) instance = fake_instance.fake_instance_obj(self.context) image = {'fake-data': 'should_pass_silently'} # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) self.conductor.build_instances( context=self.context, instances=[instance], image=image, filter_properties={}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping=None, legacy_bdm=None, host_lists=host_lists ) # Since claim_resources() is mocked to always return False, we will run # out of alternate hosts, and complain about MaxRetriesExceeded. 
mock_notify.assert_called_once_with( self.context, 'build_instances', instance.uuid, test.MatchType(dict), 'error', test.MatchType(exc.MaxRetriesExceeded), test.MatchType(str)) @mock.patch.object(conductor_manager.ComputeTaskManager, '_destroy_build_request') @mock.patch.object(conductor_manager.LOG, 'debug') @mock.patch("nova.scheduler.utils.claim_resources", return_value=True) @mock.patch.object(objects.Instance, 'save') def test_build_instances_logs_selected_and_alts(self, _mock_save, mock_claim, mock_debug, mock_destroy): # A list of three alternate hosts for one instance host_lists = copy.deepcopy(fake_host_lists_alt) expected_host = host_lists[0][0] expected_alts = host_lists[0][1:] instance = fake_instance.fake_instance_obj(self.context) image = {'fake-data': 'should_pass_silently'} # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) with mock.patch.object(self.conductor_manager.compute_rpcapi, 'build_and_run_instance'): self.conductor.build_instances(context=self.context, instances=[instance], image=image, filter_properties={}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping=None, legacy_bdm=None, host_lists=host_lists) # The last LOG.debug call should record the selected host name and the # list of alternates. last_call = mock_debug.call_args_list[-1][0] self.assertIn(expected_host.service_host, last_call) expected_alt_hosts = [(alt.service_host, alt.nodename) for alt in expected_alts] self.assertIn(expected_alt_hosts, last_call) @mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') @mock.patch.object(objects.HostMapping, 'get_by_host', side_effect=exc.HostMappingNotFound(name='fake')) @mock.patch.object(query.SchedulerQueryClient, 'select_destinations') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify') def test_build_instances_no_host_mapping(self, _mock_set_state, mock_select_dests, mock_get_by_host, mock_get_inst_map_by_uuid, _mock_save, mock_buildreq): mock_select_dests.return_value = [[fake_selection1], [fake_selection2]] num_instances = 2 instances = [fake_instance.fake_instance_obj(self.context) for i in range(num_instances)] inst_mapping_mocks = [mock.Mock() for i in range(num_instances)] mock_get_inst_map_by_uuid.side_effect = inst_mapping_mocks image = {'fake-data': 'should_pass_silently'} # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) with mock.patch.object(self.conductor_manager.compute_rpcapi, 'build_and_run_instance'): self.conductor.build_instances( context=self.context, instances=instances, image=image, filter_properties={}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False) for instance in instances: mock_get_inst_map_by_uuid.assert_any_call(self.context, instance.uuid) for inst_mapping in inst_mapping_mocks: inst_mapping.destroy.assert_called_once_with() mock_get_by_host.assert_has_calls([mock.call(self.context, 'host1'), mock.call(self.context, 'host2')]) @mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') 
@mock.patch.object(objects.HostMapping, 'get_by_host') @mock.patch.object(query.SchedulerQueryClient, 'select_destinations') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify') def test_build_instances_update_instance_mapping(self, _mock_set_state, mock_select_dests, mock_get_by_host, mock_get_inst_map_by_uuid, _mock_save, _mock_buildreq): mock_select_dests.return_value = [[fake_selection1], [fake_selection2]] mock_get_by_host.side_effect = [ objects.HostMapping(cell_mapping=objects.CellMapping(id=1)), objects.HostMapping(cell_mapping=objects.CellMapping(id=2))] num_instances = 2 instances = [fake_instance.fake_instance_obj(self.context) for i in range(num_instances)] inst_mapping_mocks = [mock.Mock() for i in range(num_instances)] mock_get_inst_map_by_uuid.side_effect = inst_mapping_mocks image = {'fake-data': 'should_pass_silently'} # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) with mock.patch.object(self.conductor_manager.compute_rpcapi, 'build_and_run_instance'): self.conductor.build_instances( context=self.context, instances=instances, image=image, filter_properties={}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False) for instance in instances: mock_get_inst_map_by_uuid.assert_any_call(self.context, instance.uuid) for inst_mapping in inst_mapping_mocks: inst_mapping.save.assert_called_once_with() self.assertEqual(1, inst_mapping_mocks[0].cell_mapping.id) self.assertEqual(2, inst_mapping_mocks[1].cell_mapping.id) mock_get_by_host.assert_has_calls([mock.call(self.context, 'host1'), mock.call(self.context, 'host2')]) @mock.patch.object(objects.Instance, 'save', new=mock.MagicMock()) @mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') @mock.patch.object(query.SchedulerQueryClient, 'select_destinations') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify', new=mock.MagicMock()) def test_build_instances_destroy_build_request(self, mock_select_dests, mock_build_req_get): mock_select_dests.return_value = [[fake_selection1], [fake_selection2]] num_instances = 2 instances = [fake_instance.fake_instance_obj(self.context) for i in range(num_instances)] build_req_mocks = [mock.Mock() for i in range(num_instances)] mock_build_req_get.side_effect = build_req_mocks image = {'fake-data': 'should_pass_silently'} # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) @mock.patch.object(self.conductor_manager.compute_rpcapi, 'build_and_run_instance', new=mock.MagicMock()) @mock.patch.object(self.conductor_manager, '_populate_instance_mapping', new=mock.MagicMock()) def do_test(): self.conductor.build_instances( context=self.context, instances=instances, image=image, filter_properties={}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False, host_lists=None) do_test() for build_req in build_req_mocks: build_req.destroy.assert_called_once_with() @mock.patch.object(objects.Instance, 'save', new=mock.MagicMock()) @mock.patch.object(query.SchedulerQueryClient, 'select_destinations') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify', new=mock.MagicMock()) def 
test_build_instances_reschedule_ignores_build_request(self, mock_select_dests): # This test calls build_instances as if it was a reschedule. This means # that the exc.BuildRequestNotFound() exception raised by # conductor_manager._destroy_build_request() should not cause the # build to stop. mock_select_dests.return_value = [[fake_selection1]] instance = fake_instance.fake_instance_obj(self.context) image = {'fake-data': 'should_pass_silently'} # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) @mock.patch.object(self.conductor_manager.compute_rpcapi, 'build_and_run_instance') @mock.patch.object(self.conductor_manager, '_populate_instance_mapping') @mock.patch.object(self.conductor_manager, '_destroy_build_request', side_effect=exc.BuildRequestNotFound(uuid='fake')) def do_test(mock_destroy_build_req, mock_pop_inst_map, mock_build_and_run): self.conductor.build_instances( context=self.context, instances=[instance], image=image, filter_properties={'retry': {'num_attempts': 1, 'hosts': []}}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False, host_lists=None) expected_build_run_host_list = copy.copy(fake_host_lists1[0]) if expected_build_run_host_list: expected_build_run_host_list.pop(0) mock_build_and_run.assert_called_once_with( self.context, instance=mock.ANY, host='host1', image=image, request_spec=mock.ANY, filter_properties={'retry': {'num_attempts': 2, 'hosts': [['host1', 'node1']]}, 'limits': {}}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping=test.MatchType( objects.BlockDeviceMappingList), node='node1', limits=None, host_list=expected_build_run_host_list, accel_uuids=[]) mock_pop_inst_map.assert_not_called() mock_destroy_build_req.assert_not_called() do_test() @mock.patch.object(objects.Instance, 'save', new=mock.MagicMock()) @mock.patch.object(query.SchedulerQueryClient, 'select_destinations') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify', new=mock.MagicMock()) def test_build_instances_reschedule_recalculates_provider_mapping(self, mock_select_dests): rg1 = objects.RequestGroup(resources={"CUSTOM_FOO": 1}) request_spec = objects.RequestSpec(requested_resources=[rg1]) mock_select_dests.return_value = [[fake_selection1]] instance = fake_instance.fake_instance_obj(self.context) image = {'fake-data': 'should_pass_silently'} # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) @mock.patch('nova.scheduler.utils.' 
'fill_provider_mapping') @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch('nova.objects.request_spec.RequestSpec.from_primitives', return_value=request_spec) @mock.patch.object(self.conductor_manager.compute_rpcapi, 'build_and_run_instance') @mock.patch.object(self.conductor_manager, '_populate_instance_mapping') @mock.patch.object(self.conductor_manager, '_destroy_build_request') def do_test(mock_destroy_build_req, mock_pop_inst_map, mock_build_and_run, mock_request_spec_from_primitives, mock_claim, mock_rp_mapping): self.conductor.build_instances( context=self.context, instances=[instance], image=image, filter_properties={'retry': {'num_attempts': 1, 'hosts': []}}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False, host_lists=copy.deepcopy(fake_host_lists1), request_spec=request_spec) expected_build_run_host_list = copy.copy(fake_host_lists1[0]) if expected_build_run_host_list: expected_build_run_host_list.pop(0) mock_build_and_run.assert_called_once_with( self.context, instance=mock.ANY, host='host1', image=image, request_spec=request_spec, filter_properties={'retry': {'num_attempts': 2, 'hosts': [['host1', 'node1']]}, 'limits': {}}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping=test.MatchType( objects.BlockDeviceMappingList), node='node1', limits=None, host_list=expected_build_run_host_list, accel_uuids=[]) mock_rp_mapping.assert_called_once_with( test.MatchType(objects.RequestSpec), test.MatchType(objects.Selection)) actual_request_spec = mock_rp_mapping.mock_calls[0][1][0] self.assertEqual( rg1.resources, actual_request_spec.requested_resources[0].resources) do_test() @mock.patch.object(objects.Instance, 'save', new=mock.MagicMock()) @mock.patch.object(query.SchedulerQueryClient, 'select_destinations') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify', new=mock.MagicMock()) def test_build_instances_reschedule_not_recalc_mapping_if_claim_fails( self, mock_select_dests): rg1 = objects.RequestGroup(resources={"CUSTOM_FOO": 1}) request_spec = objects.RequestSpec(requested_resources=[rg1]) mock_select_dests.return_value = [[fake_selection1]] instance = fake_instance.fake_instance_obj(self.context) image = {'fake-data': 'should_pass_silently'} # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) @mock.patch('nova.scheduler.utils.' 
'fill_provider_mapping') @mock.patch('nova.scheduler.utils.claim_resources', # simulate that the first claim fails during re-schedule side_effect=[False, True]) @mock.patch('nova.objects.request_spec.RequestSpec.from_primitives', return_value=request_spec) @mock.patch.object(self.conductor_manager.compute_rpcapi, 'build_and_run_instance') @mock.patch.object(self.conductor_manager, '_populate_instance_mapping') @mock.patch.object(self.conductor_manager, '_destroy_build_request') def do_test(mock_destroy_build_req, mock_pop_inst_map, mock_build_and_run, mock_request_spec_from_primitives, mock_claim, mock_rp_mapping): self.conductor.build_instances( context=self.context, instances=[instance], image=image, filter_properties={'retry': {'num_attempts': 1, 'hosts': []}}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False, host_lists=copy.deepcopy(fake_host_lists_alt), request_spec=request_spec) expected_build_run_host_list = copy.copy(fake_host_lists_alt[0]) if expected_build_run_host_list: # first is consumed but the claim fails so the conductor takes # the next host expected_build_run_host_list.pop(0) # second is consumed and claim succeeds expected_build_run_host_list.pop(0) mock_build_and_run.assert_called_with( self.context, instance=mock.ANY, host='host2', image=image, request_spec=request_spec, filter_properties={'retry': {'num_attempts': 2, 'hosts': [['host2', 'node2']]}, 'limits': {}}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping=test.MatchType( objects.BlockDeviceMappingList), node='node2', limits=None, host_list=expected_build_run_host_list, accel_uuids=[]) # called only once when the claim succeeded mock_rp_mapping.assert_called_once_with( test.MatchType(objects.RequestSpec), test.MatchType(objects.Selection)) actual_request_spec = mock_rp_mapping.mock_calls[0][1][0] self.assertEqual( rg1.resources, actual_request_spec.requested_resources[0].resources) do_test() @mock.patch.object(cinder.API, 'attachment_get') @mock.patch.object(cinder.API, 'attachment_create') @mock.patch.object(block_device_obj.BlockDeviceMapping, 'save') def test_validate_existing_attachment_ids_with_missing_attachments(self, mock_bdm_save, mock_attachment_create, mock_attachment_get): instance = self._create_fake_instance_obj() bdms = [ block_device.BlockDeviceDict({ 'boot_index': 0, 'guest_format': None, 'connection_info': None, 'device_type': u'disk', 'source_type': 'image', 'destination_type': 'volume', 'volume_size': 1, 'image_id': 1, 'device_name': '/dev/vdb', 'attachment_id': uuids.attachment, 'volume_id': uuids.volume })] bdms = block_device_obj.block_device_make_list_from_dicts( self.context, bdms) mock_attachment_get.side_effect = exc.VolumeAttachmentNotFound( attachment_id=uuids.attachment) mock_attachment_create.return_value = {'id': uuids.new_attachment} self.assertEqual(uuids.attachment, bdms[0].attachment_id) self.conductor_manager._validate_existing_attachment_ids(self.context, instance, bdms) mock_attachment_get.assert_called_once_with(self.context, uuids.attachment) mock_attachment_create.assert_called_once_with(self.context, uuids.volume, instance.uuid) mock_bdm_save.assert_called_once() self.assertEqual(uuids.new_attachment, bdms[0].attachment_id) @mock.patch.object(cinder.API, 'attachment_get') @mock.patch.object(cinder.API, 'attachment_create') 
@mock.patch.object(block_device_obj.BlockDeviceMapping, 'save') def test_validate_existing_attachment_ids_with_attachments_present(self, mock_bdm_save, mock_attachment_create, mock_attachment_get): instance = self._create_fake_instance_obj() bdms = [ block_device.BlockDeviceDict({ 'boot_index': 0, 'guest_format': None, 'connection_info': None, 'device_type': u'disk', 'source_type': 'image', 'destination_type': 'volume', 'volume_size': 1, 'image_id': 1, 'device_name': '/dev/vdb', 'attachment_id': uuids.attachment, 'volume_id': uuids.volume })] bdms = block_device_obj.block_device_make_list_from_dicts( self.context, bdms) mock_attachment_get.return_value = { "attachment": { "status": "attaching", "detached_at": "2015-09-16T09:28:52.000000", "connection_info": {}, "attached_at": "2015-09-16T09:28:52.000000", "attach_mode": "ro", "instance": instance.uuid, "volume_id": uuids.volume, "id": uuids.attachment }} self.assertEqual(uuids.attachment, bdms[0].attachment_id) self.conductor_manager._validate_existing_attachment_ids(self.context, instance, bdms) mock_attachment_get.assert_called_once_with(self.context, uuids.attachment) mock_attachment_create.assert_not_called() mock_bdm_save.assert_not_called() self.assertEqual(uuids.attachment, bdms[0].attachment_id) @mock.patch.object(compute_rpcapi.ComputeAPI, 'unshelve_instance') @mock.patch.object(compute_rpcapi.ComputeAPI, 'start_instance') def test_unshelve_instance_on_host(self, mock_start, mock_unshelve): instance = self._create_fake_instance_obj() instance.vm_state = vm_states.SHELVED instance.task_state = task_states.UNSHELVING instance.save() system_metadata = instance.system_metadata system_metadata['shelved_at'] = timeutils.utcnow() system_metadata['shelved_image_id'] = 'fake_image_id' system_metadata['shelved_host'] = 'fake-mini' self.conductor_manager.unshelve_instance( self.context, instance, self.request_spec) mock_start.assert_called_once_with(self.context, instance) mock_unshelve.assert_not_called() def test_unshelve_offload_instance_on_host_with_request_spec(self): instance = self._create_fake_instance_obj() instance.vm_state = vm_states.SHELVED_OFFLOADED instance.task_state = task_states.UNSHELVING instance.save() system_metadata = instance.system_metadata system_metadata['shelved_at'] = timeutils.utcnow() system_metadata['shelved_image_id'] = 'fake_image_id' system_metadata['shelved_host'] = 'fake-mini' fake_spec = fake_request_spec.fake_spec_obj() # FIXME(sbauza): Modify the fake RequestSpec object to either add a # non-empty SchedulerRetries object or nullify the field fake_spec.retry = None # FIXME(sbauza): Modify the fake RequestSpec object to either add a # non-empty SchedulerLimits object or nullify the field fake_spec.limits = None # FIXME(sbauza): Modify the fake RequestSpec object to either add a # non-empty InstanceGroup object or nullify the field fake_spec.instance_group = None filter_properties = fake_spec.to_legacy_filter_properties_dict() host = {'host': 'host1', 'nodename': 'node1', 'limits': {}} # unshelve_instance() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') @mock.patch.object(self.conductor_manager.compute_rpcapi, 'unshelve_instance') @mock.patch.object(scheduler_utils, 'populate_filter_properties') @mock.patch.object(self.conductor_manager, '_schedule_instances') @mock.patch.object(objects.RequestSpec, 'to_legacy_filter_properties_dict') @mock.patch.object(objects.RequestSpec, 
'reset_forced_destinations') def do_test(reset_forced_destinations, to_filtprops, sched_instances, populate_filter_properties, unshelve_instance, get_by_instance_uuid): cell_mapping = objects.CellMapping.get_by_uuid(self.context, uuids.cell1) get_by_instance_uuid.return_value = objects.InstanceMapping( cell_mapping=cell_mapping) to_filtprops.return_value = filter_properties sched_instances.return_value = [[fake_selection1]] self.conductor.unshelve_instance(self.context, instance, fake_spec) # The fake_spec already has a project_id set which doesn't match # the instance.project_id so the spec's project_id won't be # overridden using the instance.project_id. self.assertNotEqual(fake_spec.project_id, instance.project_id) reset_forced_destinations.assert_called_once_with() # The fake_spec is only going to modified by reference for # ComputeTaskManager. if isinstance(self.conductor, conductor_manager.ComputeTaskManager): self.ensure_network_metadata_mock.assert_called_once_with( test.MatchType(objects.Instance)) self.heal_reqspec_is_bfv_mock.assert_called_once_with( self.context, fake_spec, instance) sched_instances.assert_called_once_with( self.context, fake_spec, [instance.uuid], return_alternates=False) self.assertEqual(cell_mapping, fake_spec.requested_destination.cell) else: # RPC API tests won't have the same request spec or instance # since they go over the wire. self.ensure_network_metadata_mock.assert_called_once_with( test.MatchType(objects.Instance)) self.heal_reqspec_is_bfv_mock.assert_called_once_with( self.context, test.MatchType(objects.RequestSpec), test.MatchType(objects.Instance)) sched_instances.assert_called_once_with( self.context, test.MatchType(objects.RequestSpec), [instance.uuid], return_alternates=False) # NOTE(sbauza): Since the instance is dehydrated when passing # through the RPC API, we can only assert mock.ANY for it unshelve_instance.assert_called_once_with( self.context, mock.ANY, host['host'], test.MatchType(objects.RequestSpec), image=mock.ANY, filter_properties=filter_properties, node=host['nodename'] ) do_test() @mock.patch('nova.compute.utils.add_instance_fault_from_exc') @mock.patch.object(image_api.API, 'get', side_effect=exc.ImageNotFound(image_id=uuids.image)) def test_unshelve_offloaded_instance_glance_image_not_found( self, mock_get, add_instance_fault_from_exc): instance = self._create_fake_instance_obj() instance.vm_state = vm_states.SHELVED_OFFLOADED instance.task_state = task_states.UNSHELVING instance.save() system_metadata = instance.system_metadata system_metadata['shelved_at'] = timeutils.utcnow() system_metadata['shelved_host'] = 'fake-mini' system_metadata['shelved_image_id'] = uuids.image reason = ('Unshelve attempted but the image %s ' 'cannot be found.') % uuids.image self.assertRaises( exc.UnshelveException, self.conductor_manager.unshelve_instance, self.context, instance, self.request_spec) add_instance_fault_from_exc.assert_called_once_with( self.context, instance, mock_get.side_effect, mock.ANY, fault_message=reason) self.assertEqual(instance.vm_state, vm_states.ERROR) mock_get.assert_called_once_with(self.context, uuids.image, show_deleted=False) def test_unshelve_offloaded_instance_image_id_is_none(self): instance = self._create_fake_instance_obj() instance.vm_state = vm_states.SHELVED_OFFLOADED instance.task_state = task_states.UNSHELVING # 'shelved_image_id' is None for volumebacked instance instance.system_metadata['shelved_image_id'] = None with test.nested( mock.patch.object(self.conductor_manager, '_schedule_instances'), 
mock.patch.object(self.conductor_manager.compute_rpcapi, 'unshelve_instance'), mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid'), ) as (schedule_mock, unshelve_mock, get_by_instance_uuid): schedule_mock.return_value = [[fake_selection1]] get_by_instance_uuid.return_value = objects.InstanceMapping( cell_mapping=objects.CellMapping.get_by_uuid( self.context, uuids.cell1)) self.conductor_manager.unshelve_instance( self.context, instance, self.request_spec) self.assertEqual(1, unshelve_mock.call_count) @mock.patch.object(compute_rpcapi.ComputeAPI, 'unshelve_instance') @mock.patch.object(conductor_manager.ComputeTaskManager, '_schedule_instances', return_value=[[objects.Selection( service_host='fake_host', nodename='fake_node', limits=None)]]) @mock.patch.object(image_api.API, 'get', return_value='fake_image') @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') def test_unshelve_instance_schedule_and_rebuild( self, mock_im, mock_get, mock_schedule, mock_unshelve): fake_spec = objects.RequestSpec() # Set requested_destination to test setting cell_mapping in # existing object. fake_spec.requested_destination = objects.Destination( host="dummy", cell=None) cell_mapping = objects.CellMapping.get_by_uuid(self.context, uuids.cell1) mock_im.return_value = objects.InstanceMapping( cell_mapping=cell_mapping) instance = self._create_fake_instance_obj() instance.vm_state = vm_states.SHELVED_OFFLOADED instance.save() system_metadata = instance.system_metadata system_metadata['shelved_at'] = timeutils.utcnow() system_metadata['shelved_image_id'] = 'fake_image_id' system_metadata['shelved_host'] = 'fake-mini' self.conductor_manager.unshelve_instance( self.context, instance, fake_spec) self.assertEqual(cell_mapping, fake_spec.requested_destination.cell) mock_get.assert_called_once_with( self.context, 'fake_image_id', show_deleted=False) mock_schedule.assert_called_once_with( self.context, fake_spec, [instance.uuid], return_alternates=False) mock_unshelve.assert_called_once_with( self.context, instance, 'fake_host', fake_spec, image='fake_image', filter_properties=dict( # populate_filter_properties adds limits={} fake_spec.to_legacy_filter_properties_dict(), limits={}), node='fake_node') def test_unshelve_instance_schedule_and_rebuild_novalid_host(self): instance = self._create_fake_instance_obj() instance.vm_state = vm_states.SHELVED_OFFLOADED instance.save() system_metadata = instance.system_metadata def fake_schedule_instances(context, request_spec, *instances, **kwargs): raise exc.NoValidHost(reason='') with test.nested( mock.patch.object(self.conductor_manager.image_api, 'get', return_value='fake_image'), mock.patch.object(self.conductor_manager, '_schedule_instances', fake_schedule_instances), mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid'), mock.patch.object(objects.Instance, 'save') ) as (_get_image, _schedule_instances, get_by_instance_uuid, save): get_by_instance_uuid.return_value = objects.InstanceMapping( cell_mapping=objects.CellMapping.get_by_uuid( self.context, uuids.cell1)) system_metadata['shelved_at'] = timeutils.utcnow() system_metadata['shelved_image_id'] = 'fake_image_id' system_metadata['shelved_host'] = 'fake-mini' self.conductor_manager.unshelve_instance( self.context, instance, self.request_spec) _get_image.assert_has_calls([mock.call(self.context, system_metadata['shelved_image_id'], show_deleted=False)]) self.assertEqual(vm_states.SHELVED_OFFLOADED, instance.vm_state) @mock.patch.object(objects.InstanceMapping, 
'get_by_instance_uuid') @mock.patch.object(conductor_manager.ComputeTaskManager, '_schedule_instances', side_effect=messaging.MessagingTimeout()) @mock.patch.object(image_api.API, 'get', return_value='fake_image') @mock.patch.object(objects.Instance, 'save') def test_unshelve_instance_schedule_and_rebuild_messaging_exception( self, mock_save, mock_get_image, mock_schedule_instances, mock_im): mock_im.return_value = objects.InstanceMapping( cell_mapping=objects.CellMapping.get_by_uuid(self.context, uuids.cell1)) instance = self._create_fake_instance_obj() instance.vm_state = vm_states.SHELVED_OFFLOADED instance.task_state = task_states.UNSHELVING instance.save() system_metadata = instance.system_metadata system_metadata['shelved_at'] = timeutils.utcnow() system_metadata['shelved_image_id'] = 'fake_image_id' system_metadata['shelved_host'] = 'fake-mini' self.assertRaises(messaging.MessagingTimeout, self.conductor_manager.unshelve_instance, self.context, instance, self.request_spec) mock_get_image.assert_has_calls([mock.call(self.context, system_metadata['shelved_image_id'], show_deleted=False)]) self.assertEqual(vm_states.SHELVED_OFFLOADED, instance.vm_state) self.assertIsNone(instance.task_state) @mock.patch.object(compute_rpcapi.ComputeAPI, 'unshelve_instance') @mock.patch.object(conductor_manager.ComputeTaskManager, '_schedule_instances', return_value=[[ objects.Selection(service_host='fake_host', nodename='fake_node', limits=None)]]) @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') def test_unshelve_instance_schedule_and_rebuild_volume_backed( self, mock_im, mock_schedule, mock_unshelve): fake_spec = objects.RequestSpec() mock_im.return_value = objects.InstanceMapping( cell_mapping=objects.CellMapping.get_by_uuid(self.context, uuids.cell1)) instance = self._create_fake_instance_obj() instance.vm_state = vm_states.SHELVED_OFFLOADED instance.save() system_metadata = instance.system_metadata system_metadata['shelved_at'] = timeutils.utcnow() system_metadata['shelved_host'] = 'fake-mini' self.conductor_manager.unshelve_instance( self.context, instance, fake_spec) mock_schedule.assert_called_once_with( self.context, fake_spec, [instance.uuid], return_alternates=False) mock_unshelve.assert_called_once_with( self.context, instance, 'fake_host', fake_spec, image=None, filter_properties={'limits': {}}, node='fake_node') @mock.patch('nova.scheduler.utils.fill_provider_mapping') @mock.patch('nova.network.neutron.API.get_requested_resource_for_instance') @mock.patch.object(conductor_manager.ComputeTaskManager, '_schedule_instances', ) def test_unshelve_instance_resource_request( self, mock_schedule, mock_get_res_req, mock_fill_provider_mapping): instance = self._create_fake_instance_obj() instance.vm_state = vm_states.SHELVED_OFFLOADED instance.save() request_spec = objects.RequestSpec() selection = objects.Selection( service_host='fake_host', nodename='fake_node', limits=None) mock_schedule.return_value = [[selection]] res_req = [objects.RequestGroup()] mock_get_res_req.return_value = res_req self.conductor_manager.unshelve_instance( self.context, instance, request_spec) self.assertEqual(res_req, request_spec.requested_resources) mock_get_res_req.assert_called_once_with(self.context, instance.uuid) mock_schedule.assert_called_once_with( self.context, request_spec, [instance.uuid], return_alternates=False) mock_fill_provider_mapping.assert_called_once_with( request_spec, selection) def test_rebuild_instance(self): inst_obj = self._create_fake_instance_obj() rebuild_args, 
compute_args = self._prepare_rebuild_args( {'host': inst_obj.host}) with test.nested( mock.patch.object(self.conductor_manager.compute_rpcapi, 'rebuild_instance'), mock.patch.object(self.conductor_manager.query_client, 'select_destinations'), mock.patch('nova.scheduler.utils.fill_provider_mapping', new_callable=mock.NonCallableMock), mock.patch('nova.network.neutron.API.' 'get_requested_resource_for_instance', new_callable=mock.NonCallableMock) ) as (rebuild_mock, select_dest_mock, fill_provider_mock, get_resources_mock): self.conductor_manager.rebuild_instance(context=self.context, instance=inst_obj, **rebuild_args) self.assertFalse(select_dest_mock.called) rebuild_mock.assert_called_once_with(self.context, instance=inst_obj, **compute_args) @mock.patch('nova.compute.utils.notify_about_instance_rebuild') def test_rebuild_instance_with_scheduler(self, mock_notify): inst_obj = self._create_fake_instance_obj() inst_obj.host = 'noselect' expected_host = 'thebesthost' expected_node = 'thebestnode' expected_limits = None fake_selection = objects.Selection(service_host=expected_host, nodename=expected_node, limits=None) rebuild_args, compute_args = self._prepare_rebuild_args( {'host': None, 'node': expected_node, 'limits': expected_limits}) fake_spec = objects.RequestSpec() rebuild_args['request_spec'] = fake_spec inst_uuids = [inst_obj.uuid] with test.nested( mock.patch.object(self.conductor_manager.compute_rpcapi, 'rebuild_instance'), mock.patch.object(scheduler_utils, 'setup_instance_group', return_value=False), mock.patch.object(self.conductor_manager.query_client, 'select_destinations', return_value=[[fake_selection]]) ) as (rebuild_mock, sig_mock, select_dest_mock): self.conductor_manager.rebuild_instance(context=self.context, instance=inst_obj, **rebuild_args) self.ensure_network_metadata_mock.assert_called_once_with( inst_obj) self.heal_reqspec_is_bfv_mock.assert_called_once_with( self.context, fake_spec, inst_obj) select_dest_mock.assert_called_once_with(self.context, fake_spec, inst_uuids, return_objects=True, return_alternates=False) compute_args['host'] = expected_host compute_args['request_spec'] = fake_spec rebuild_mock.assert_called_once_with(self.context, instance=inst_obj, **compute_args) self.assertEqual(inst_obj.project_id, fake_spec.project_id) self.assertEqual('compute.instance.rebuild.scheduled', fake_notifier.NOTIFICATIONS[0].event_type) mock_notify.assert_called_once_with( self.context, inst_obj, 'thebesthost', action='rebuild_scheduled', source='nova-conductor') def test_rebuild_instance_with_scheduler_no_host(self): inst_obj = self._create_fake_instance_obj() inst_obj.host = 'noselect' rebuild_args, _ = self._prepare_rebuild_args({'host': None}) fake_spec = objects.RequestSpec() rebuild_args['request_spec'] = fake_spec with test.nested( mock.patch.object(self.conductor_manager.compute_rpcapi, 'rebuild_instance'), mock.patch.object(scheduler_utils, 'setup_instance_group', return_value=False), mock.patch.object(self.conductor_manager.query_client, 'select_destinations', side_effect=exc.NoValidHost(reason='')), mock.patch.object(scheduler_utils, 'set_vm_state_and_notify') ) as (rebuild_mock, sig_mock, select_dest_mock, set_vm_state_and_notify_mock): self.assertRaises(exc.NoValidHost, self.conductor_manager.rebuild_instance, context=self.context, instance=inst_obj, **rebuild_args) select_dest_mock.assert_called_once_with(self.context, fake_spec, [inst_obj.uuid], return_objects=True, return_alternates=False) self.assertEqual( 
set_vm_state_and_notify_mock.call_args[0][4]['vm_state'], vm_states.ERROR) self.assertFalse(rebuild_mock.called) @mock.patch.object(conductor_manager.compute_rpcapi.ComputeAPI, 'rebuild_instance') @mock.patch.object(scheduler_utils, 'setup_instance_group') @mock.patch.object(conductor_manager.query.SchedulerQueryClient, 'select_destinations') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify') def test_rebuild_instance_with_scheduler_group_failure(self, state_mock, select_dest_mock, sig_mock, rebuild_mock): inst_obj = self._create_fake_instance_obj() rebuild_args, _ = self._prepare_rebuild_args({'host': None}) rebuild_args['request_spec'] = self.request_spec exception = exc.UnsupportedPolicyException(reason='') sig_mock.side_effect = exception # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) # Create the migration record (normally created by the compute API). migration = objects.Migration(self.context, source_compute=inst_obj.host, source_node=inst_obj.node, instance_uuid=inst_obj.uuid, status='accepted', migration_type='evacuation') migration.create() self.assertRaises(exc.UnsupportedPolicyException, self.conductor.rebuild_instance, self.context, inst_obj, **rebuild_args) updates = {'vm_state': vm_states.ERROR, 'task_state': None} state_mock.assert_called_once_with(self.context, inst_obj.uuid, 'rebuild_server', updates, exception, mock.ANY) self.assertFalse(select_dest_mock.called) self.assertFalse(rebuild_mock.called) # Assert the migration status was updated. migration = objects.Migration.get_by_id(self.context, migration.id) self.assertEqual('error', migration.status) def test_rebuild_instance_fill_provider_mapping_raises(self): inst_obj = self._create_fake_instance_obj() rebuild_args, _ = self._prepare_rebuild_args( {'host': None, 'recreate': True}) fake_spec = objects.RequestSpec() rebuild_args['request_spec'] = fake_spec with test.nested( mock.patch.object(self.conductor_manager.compute_rpcapi, 'rebuild_instance'), mock.patch.object(scheduler_utils, 'setup_instance_group', return_value=False), mock.patch.object(self.conductor_manager.query_client, 'select_destinations'), mock.patch.object(scheduler_utils, 'set_vm_state_and_notify'), mock.patch.object(scheduler_utils, 'fill_provider_mapping', side_effect=ValueError( 'No valid group - RP mapping is found')) ) as (rebuild_mock, sig_mock, select_dest_mock, set_vm_state_and_notify_mock, fill_mapping_mock): self.assertRaises(ValueError, self.conductor_manager.rebuild_instance, context=self.context, instance=inst_obj, **rebuild_args) select_dest_mock.assert_called_once_with(self.context, fake_spec, [inst_obj.uuid], return_objects=True, return_alternates=False) set_vm_state_and_notify_mock.assert_called_once_with( self.context, inst_obj.uuid, 'compute_task', 'rebuild_server', {'vm_state': vm_states.ERROR, 'task_state': None}, test.MatchType(ValueError), fake_spec) self.assertFalse(rebuild_mock.called) def test_rebuild_instance_evacuate_migration_record(self): inst_obj = self._create_fake_instance_obj() migration = objects.Migration(context=self.context, source_compute=inst_obj.host, source_node=inst_obj.node, instance_uuid=inst_obj.uuid, status='accepted', migration_type='evacuation') rebuild_args, compute_args = self._prepare_rebuild_args( {'host': inst_obj.host, 'migration': migration}) with test.nested( mock.patch.object(self.conductor_manager.compute_rpcapi, 'rebuild_instance'), mock.patch.object(self.conductor_manager.query_client, 
'select_destinations'), mock.patch.object(objects.Migration, 'get_by_instance_and_status', return_value=migration) ) as (rebuild_mock, select_dest_mock, get_migration_mock): self.conductor_manager.rebuild_instance(context=self.context, instance=inst_obj, **rebuild_args) self.assertFalse(select_dest_mock.called) rebuild_mock.assert_called_once_with(self.context, instance=inst_obj, **compute_args) @mock.patch('nova.compute.utils.notify_about_instance_rebuild') def test_evacuate_instance_with_request_spec(self, mock_notify): inst_obj = self._create_fake_instance_obj() inst_obj.host = 'noselect' expected_host = 'thebesthost' expected_node = 'thebestnode' expected_limits = None fake_selection = objects.Selection(service_host=expected_host, nodename=expected_node, limits=None) fake_spec = objects.RequestSpec(ignore_hosts=[uuids.ignored_host]) rebuild_args, compute_args = self._prepare_rebuild_args( {'host': None, 'node': expected_node, 'limits': expected_limits, 'request_spec': fake_spec, 'recreate': True}) with test.nested( mock.patch.object(self.conductor_manager.compute_rpcapi, 'rebuild_instance'), mock.patch.object(scheduler_utils, 'setup_instance_group', return_value=False), mock.patch.object(self.conductor_manager.query_client, 'select_destinations', return_value=[[fake_selection]]), mock.patch.object(fake_spec, 'reset_forced_destinations'), mock.patch('nova.scheduler.utils.fill_provider_mapping'), mock.patch('nova.network.neutron.API.' 'get_requested_resource_for_instance', return_value=[]) ) as (rebuild_mock, sig_mock, select_dest_mock, reset_fd, fill_rp_mapping_mock, get_req_res_mock): self.conductor_manager.rebuild_instance(context=self.context, instance=inst_obj, **rebuild_args) reset_fd.assert_called_once_with() # The RequestSpec.ignore_hosts field should be overwritten. self.assertEqual([inst_obj.host], fake_spec.ignore_hosts) # The RequestSpec.requested_destination.cell field should be set. 
self.assertIn('requested_destination', fake_spec) self.assertIn('cell', fake_spec.requested_destination) self.assertIsNotNone(fake_spec.requested_destination.cell) select_dest_mock.assert_called_once_with(self.context, fake_spec, [inst_obj.uuid], return_objects=True, return_alternates=False) compute_args['host'] = expected_host compute_args['request_spec'] = fake_spec rebuild_mock.assert_called_once_with(self.context, instance=inst_obj, **compute_args) get_req_res_mock.assert_called_once_with( self.context, inst_obj.uuid) fill_rp_mapping_mock.assert_called_once_with( fake_spec, fake_selection) self.assertEqual('compute.instance.rebuild.scheduled', fake_notifier.NOTIFICATIONS[0].event_type) mock_notify.assert_called_once_with( self.context, inst_obj, 'thebesthost', action='rebuild_scheduled', source='nova-conductor') @mock.patch( 'nova.conductor.tasks.cross_cell_migrate.ConfirmResizeTask.execute') def test_confirm_snapshot_based_resize(self, mock_execute): instance = self._create_fake_instance_obj(ctxt=self.context) migration = objects.Migration( context=self.context, source_compute=instance.host, source_node=instance.node, instance_uuid=instance.uuid, status='confirming', migration_type='resize') self.conductor_manager.confirm_snapshot_based_resize( self.context, instance=instance, migration=migration) mock_execute.assert_called_once_with() @mock.patch('nova.compute.utils.EventReporter') @mock.patch( 'nova.conductor.tasks.cross_cell_migrate.RevertResizeTask.execute') def test_revert_snapshot_based_resize(self, mock_execute, mock_er): instance = self._create_fake_instance_obj(ctxt=self.context) migration = objects.Migration( context=self.context, source_compute=instance.host, source_node=instance.node, instance_uuid=instance.uuid, status='reverting', migration_type='migration') self.conductor_manager.revert_snapshot_based_resize( self.context, instance=instance, migration=migration) mock_execute.assert_called_once_with() mock_er.assert_called_once_with( self.context, 'conductor_revert_snapshot_based_resize', self.conductor_manager.host, instance.uuid, graceful_exit=True) class ConductorTaskTestCase(_BaseTaskTestCase, test_compute.BaseTestCase): """ComputeTaskManager Tests.""" NUMBER_OF_CELLS = 2 def setUp(self): super(ConductorTaskTestCase, self).setUp() self.conductor = conductor_manager.ComputeTaskManager() self.conductor_manager = self.conductor params = {} self.ctxt = params['context'] = context.RequestContext( 'fake-user', 'fake-project').elevated() build_request = fake_build_request.fake_req_obj(self.ctxt) del build_request.instance.id build_request.create() params['build_requests'] = objects.BuildRequestList( objects=[build_request]) im = objects.InstanceMapping( self.ctxt, instance_uuid=build_request.instance.uuid, cell_mapping=None, project_id=self.ctxt.project_id) im.create() rs = fake_request_spec.fake_spec_obj(remove_id=True) rs._context = self.ctxt rs.instance_uuid = build_request.instance_uuid rs.instance_group = None rs.retry = None rs.limits = None rs.create() params['request_specs'] = [rs] params['image'] = {'fake_data': 'should_pass_silently'} params['admin_password'] = 'admin_password', params['injected_files'] = 'injected_files' params['requested_networks'] = None bdm = objects.BlockDeviceMapping(self.ctxt, **dict( source_type='blank', destination_type='local', guest_format='foo', device_type='disk', disk_bus='', boot_index=1, device_name='xvda', delete_on_termination=False, snapshot_id=None, volume_id=None, volume_size=1, image_id='bar', no_device=False, 
connection_info=None, tag='')) params['block_device_mapping'] = objects.BlockDeviceMappingList( objects=[bdm]) tag = objects.Tag(self.ctxt, tag='tag1') params['tags'] = objects.TagList(objects=[tag]) self.params = params self.flavor = objects.Flavor.get_by_name(self.ctxt, 'm1.tiny') @mock.patch('nova.accelerator.cyborg.get_client') def test_create_bind_arqs_no_device_profile(self, mock_get_client): # If no device profile name, it is a no op. hostname = 'myhost' instance = fake_instance.fake_instance_obj(self.context) instance.flavor.extra_specs = {} self.conductor._create_and_bind_arqs(self.context, instance.uuid, instance.flavor.extra_specs, hostname, resource_provider_mapping=mock.ANY) mock_get_client.assert_not_called() @mock.patch('nova.accelerator.cyborg._CyborgClient.bind_arqs') @mock.patch('nova.accelerator.cyborg._CyborgClient.' 'create_arqs_and_match_resource_providers') def test_create_bind_arqs(self, mock_create, mock_bind): # Happy path hostname = 'myhost' instance = fake_instance.fake_instance_obj(self.context) dp_name = 'mydp' instance.flavor.extra_specs = {'accel:device_profile': dp_name} in_arq_list, _ = fixtures.get_arqs(dp_name) mock_create.return_value = in_arq_list self.conductor._create_and_bind_arqs(self.context, instance.uuid, instance.flavor.extra_specs, hostname, resource_provider_mapping=mock.ANY) mock_create.assert_called_once_with(dp_name, mock.ANY) expected_bindings = { 'b59d34d3-787b-4fb0-a6b9-019cd81172f8': {'hostname': hostname, 'device_rp_uuid': mock.ANY, 'instance_uuid': instance.uuid} } mock_bind.assert_called_once_with(bindings=expected_bindings) @mock.patch('nova.availability_zones.get_host_availability_zone') @mock.patch('nova.compute.rpcapi.ComputeAPI.build_and_run_instance') @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations') def _do_schedule_and_build_instances_test( self, params, select_destinations, build_and_run_instance, get_az, host_list=None): if not host_list: host_list = copy.deepcopy(fake_host_lists1) select_destinations.return_value = host_list get_az.return_value = 'myaz' details = {} def _build_and_run_instance(ctxt, *args, **kwargs): details['instance'] = kwargs['instance'] self.assertTrue(kwargs['instance'].id) self.assertTrue(kwargs['filter_properties'].get('retry')) self.assertEqual(1, len(kwargs['block_device_mapping'])) # FIXME(danms): How to validate the db connection here? self.start_service('compute', host='host1') build_and_run_instance.side_effect = _build_and_run_instance self.conductor.schedule_and_build_instances(**params) self.assertTrue(build_and_run_instance.called) get_az.assert_called_once_with(mock.ANY, 'host1') instance_uuid = details['instance'].uuid bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( self.ctxt, instance_uuid) ephemeral = list(filter(block_device.new_format_is_ephemeral, bdms)) self.assertEqual(1, len(ephemeral)) swap = list(filter(block_device.new_format_is_swap, bdms)) self.assertEqual(0, len(swap)) self.assertEqual(1, ephemeral[0].volume_size) return instance_uuid @mock.patch('nova.notifications.send_update_with_states') def test_schedule_and_build_instances(self, mock_notify): # NOTE(melwitt): This won't work with call_args because the call # arguments are recorded as references and not as copies of objects. # So even though the notify method was called with Instance._context # targeted, by the time we assert with call_args, the target_cell # context manager has already exited and the referenced Instance # object's _context.db_connection has been restored to None. 
def fake_notify(ctxt, instance, *args, **kwargs): # Assert the instance object is targeted when going through the # notification code. self.assertIsNotNone(ctxt.db_connection) self.assertIsNotNone(instance._context.db_connection) mock_notify.side_effect = fake_notify instance_uuid = self._do_schedule_and_build_instances_test( self.params) cells = objects.CellMappingList.get_all(self.ctxt) # NOTE(danms): Assert that we created the InstanceAction in the # correct cell # NOTE(Kevin Zheng): Also assert tags in the correct cell for cell in cells: with context.target_cell(self.ctxt, cell) as cctxt: actions = objects.InstanceActionList.get_by_instance_uuid( cctxt, instance_uuid) if cell.name == 'cell1': self.assertEqual(1, len(actions)) tags = objects.TagList.get_by_resource_id( cctxt, instance_uuid) self.assertEqual(1, len(tags)) else: self.assertEqual(0, len(actions)) def test_schedule_and_build_instances_no_tags_provided(self): params = copy.deepcopy(self.params) del params['tags'] instance_uuid = self._do_schedule_and_build_instances_test(params) cells = objects.CellMappingList.get_all(self.ctxt) # NOTE(danms): Assert that we created the InstanceAction in the # correct cell # NOTE(Kevin Zheng): Also assert tags in the correct cell for cell in cells: with context.target_cell(self.ctxt, cell) as cctxt: actions = objects.InstanceActionList.get_by_instance_uuid( cctxt, instance_uuid) if cell.name == 'cell1': self.assertEqual(1, len(actions)) tags = objects.TagList.get_by_resource_id( cctxt, instance_uuid) self.assertEqual(0, len(tags)) else: self.assertEqual(0, len(actions)) @mock.patch('nova.compute.rpcapi.ComputeAPI.build_and_run_instance') @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations') @mock.patch('nova.objects.HostMapping.get_by_host') @mock.patch('nova.availability_zones.get_host_availability_zone', return_value='nova') def test_schedule_and_build_multiple_instances(self, mock_get_az, get_hostmapping, select_destinations, build_and_run_instance): # This list needs to match the number of build_requests and the number # of request_specs in params. select_destinations.return_value = [[fake_selection1], [fake_selection2], [fake_selection1], [fake_selection2]] params = self.params self.start_service('compute', host='host1') self.start_service('compute', host='host2') # Because of the cache, this should only be called twice, # once for the first and once for the third request. get_hostmapping.side_effect = self.host_mappings.values() # create three additional build requests for a total of four for x in range(3): build_request = fake_build_request.fake_req_obj(self.ctxt) del build_request.instance.id build_request.create() params['build_requests'].objects.append(build_request) im2 = objects.InstanceMapping( self.ctxt, instance_uuid=build_request.instance.uuid, cell_mapping=None, project_id=self.ctxt.project_id) im2.create() params['request_specs'].append(objects.RequestSpec( instance_uuid=build_request.instance_uuid, instance_group=None)) # Now let's have some fun and delete the third build request before # passing the object on to schedule_and_build_instances so that the # instance will be created for that build request but when it calls # BuildRequest.destroy(), it will raise BuildRequestNotFound and we'll # cleanup the instance instead of passing it to build_and_run_instance # and we make sure that the fourth build request still gets processed. 
deleted_build_request = params['build_requests'][2] deleted_build_request.destroy() def _build_and_run_instance(ctxt, *args, **kwargs): # Make sure the instance wasn't the one that was deleted. instance = kwargs['instance'] self.assertNotEqual(deleted_build_request.instance_uuid, instance.uuid) # This just makes sure that the instance was created in the DB. self.assertTrue(kwargs['instance'].id) self.assertEqual(1, len(kwargs['block_device_mapping'])) # FIXME(danms): How to validate the db connection here? build_and_run_instance.side_effect = _build_and_run_instance self.conductor.schedule_and_build_instances(**params) self.assertEqual(3, build_and_run_instance.call_count) # We're processing 4 instances over 2 hosts, so we should only lookup # the AZ per host once. mock_get_az.assert_has_calls([ mock.call(self.ctxt, 'host1'), mock.call(self.ctxt, 'host2')], any_order=True) @mock.patch('nova.compute.rpcapi.ComputeAPI.build_and_run_instance') @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations') @mock.patch('nova.objects.HostMapping.get_by_host') def test_schedule_and_build_multiple_cells( self, get_hostmapping, select_destinations, build_and_run_instance): """Test that creates two instances in separate cells.""" # This list needs to match the number of build_requests and the number # of request_specs in params. select_destinations.return_value = [[fake_selection1], [fake_selection2]] params = self.params # The cells are created in the base TestCase setup. self.start_service('compute', host='host1', cell_name='cell1') self.start_service('compute', host='host2', cell_name='cell2') get_hostmapping.side_effect = self.host_mappings.values() # create an additional build request and request spec build_request = fake_build_request.fake_req_obj(self.ctxt) del build_request.instance.id build_request.create() params['build_requests'].objects.append(build_request) im2 = objects.InstanceMapping( self.ctxt, instance_uuid=build_request.instance.uuid, cell_mapping=None, project_id=self.ctxt.project_id) im2.create() params['request_specs'].append(objects.RequestSpec( instance_uuid=build_request.instance_uuid, instance_group=None)) instance_cells = set() def _build_and_run_instance(ctxt, *args, **kwargs): instance = kwargs['instance'] # Keep track of the cells that the instances were created in. 
inst_mapping = objects.InstanceMapping.get_by_instance_uuid( ctxt, instance.uuid) instance_cells.add(inst_mapping.cell_mapping.uuid) build_and_run_instance.side_effect = _build_and_run_instance self.conductor.schedule_and_build_instances(**params) self.assertEqual(2, build_and_run_instance.call_count) self.assertEqual(2, len(instance_cells)) @mock.patch('nova.compute.utils.notify_about_compute_task_error') @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations') def test_schedule_and_build_scheduler_failure(self, select_destinations, mock_notify): select_destinations.side_effect = Exception self.start_service('compute', host='fake-host') self.conductor.schedule_and_build_instances(**self.params) with conductor_manager.try_target_cell(self.ctxt, self.cell_mappings['cell0']): instance = objects.Instance.get_by_uuid( self.ctxt, self.params['build_requests'][0].instance_uuid) self.assertEqual('error', instance.vm_state) self.assertIsNone(instance.task_state) mock_notify.assert_called_once_with( test.MatchType(context.RequestContext), 'build_instances', instance.uuid, test.MatchType(dict), 'error', test.MatchType(Exception), test.MatchType(str)) request_spec_dict = mock_notify.call_args_list[0][0][3] for key in ('instance_type', 'num_instances', 'instance_properties', 'image'): self.assertIn(key, request_spec_dict) tb = mock_notify.call_args_list[0][0][6] self.assertIn('Traceback (most recent call last):', tb) @mock.patch('nova.objects.TagList.destroy') @mock.patch('nova.objects.TagList.create') @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch('nova.compute.utils.notify_about_instance_usage') @mock.patch('nova.compute.rpcapi.ComputeAPI.build_and_run_instance') @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations') @mock.patch('nova.objects.BuildRequest.destroy') @mock.patch('nova.conductor.manager.ComputeTaskManager._bury_in_cell0') def test_schedule_and_build_delete_during_scheduling(self, bury, br_destroy, select_destinations, build_and_run, legacy_notify, notify, taglist_create, taglist_destroy): br_destroy.side_effect = exc.BuildRequestNotFound(uuid='foo') self.start_service('compute', host='host1') select_destinations.return_value = [[fake_selection1]] taglist_create.return_value = self.params['tags'] self.conductor.schedule_and_build_instances(**self.params) self.assertFalse(build_and_run.called) self.assertFalse(bury.called) self.assertTrue(br_destroy.called) taglist_destroy.assert_called_once_with( test.MatchType(context.RequestContext), self.params['build_requests'][0].instance_uuid) # Make sure TagList.destroy was called with the targeted context. 
self.assertIsNotNone(taglist_destroy.call_args[0][0].db_connection) test_utils.assert_instance_delete_notification_by_uuid( legacy_notify, notify, self.params['build_requests'][0].instance_uuid, self.conductor.notifier, test.MatchType(context.RequestContext), expect_targeted_context=True, expected_source='nova-conductor', expected_host='host1') @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch('nova.objects.Instance.destroy') @mock.patch('nova.compute.utils.notify_about_instance_usage') @mock.patch('nova.compute.rpcapi.ComputeAPI.build_and_run_instance') @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations') @mock.patch('nova.objects.BuildRequest.destroy') @mock.patch('nova.conductor.manager.ComputeTaskManager._bury_in_cell0') def test_schedule_and_build_delete_during_scheduling_host_changed( self, bury, br_destroy, select_destinations, build_and_run, legacy_notify, instance_destroy, notify): br_destroy.side_effect = exc.BuildRequestNotFound(uuid='foo') instance_destroy.side_effect = [ exc.ObjectActionError(action='destroy', reason='host changed'), None, ] self.start_service('compute', host='host1') select_destinations.return_value = [[fake_selection1]] self.conductor.schedule_and_build_instances(**self.params) self.assertFalse(build_and_run.called) self.assertFalse(bury.called) self.assertTrue(br_destroy.called) self.assertEqual(2, instance_destroy.call_count) test_utils.assert_instance_delete_notification_by_uuid( legacy_notify, notify, self.params['build_requests'][0].instance_uuid, self.conductor.notifier, test.MatchType(context.RequestContext), expect_targeted_context=True, expected_source='nova-conductor', expected_host='host1') @mock.patch('nova.compute.utils.notify_about_instance_action') @mock.patch('nova.objects.Instance.destroy') @mock.patch('nova.compute.utils.notify_about_instance_usage') @mock.patch('nova.compute.rpcapi.ComputeAPI.build_and_run_instance') @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations') @mock.patch('nova.objects.BuildRequest.destroy') @mock.patch('nova.conductor.manager.ComputeTaskManager._bury_in_cell0') def test_schedule_and_build_delete_during_scheduling_instance_not_found( self, bury, br_destroy, select_destinations, build_and_run, legacy_notify, instance_destroy, notify): br_destroy.side_effect = exc.BuildRequestNotFound(uuid='foo') instance_destroy.side_effect = [ exc.InstanceNotFound(instance_id='fake'), None, ] self.start_service('compute', host='host1') select_destinations.return_value = [[fake_selection1]] self.conductor.schedule_and_build_instances(**self.params) self.assertFalse(build_and_run.called) self.assertFalse(bury.called) self.assertTrue(br_destroy.called) self.assertEqual(1, instance_destroy.call_count) test_utils.assert_instance_delete_notification_by_uuid( legacy_notify, notify, self.params['build_requests'][0].instance_uuid, self.conductor.notifier, test.MatchType(context.RequestContext), expect_targeted_context=True, expected_source='nova-conductor', expected_host='host1') @mock.patch('nova.compute.rpcapi.ComputeAPI.build_and_run_instance') @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations') @mock.patch('nova.objects.BuildRequest.get_by_instance_uuid') @mock.patch('nova.objects.BuildRequest.destroy') @mock.patch('nova.conductor.manager.ComputeTaskManager._bury_in_cell0') @mock.patch('nova.objects.Instance.create') def test_schedule_and_build_delete_before_scheduling(self, inst_create, bury, br_destroy, br_get_by_inst, select_destinations, build_and_run): 
"""Tests the case that the build request is deleted before the instance is created, so we do not create the instance. """ inst_uuid = self.params['build_requests'][0].instance.uuid br_get_by_inst.side_effect = exc.BuildRequestNotFound(uuid=inst_uuid) self.start_service('compute', host='host1') select_destinations.return_value = [[fake_selection1]] self.conductor.schedule_and_build_instances(**self.params) # we don't create the instance since the build request is gone self.assertFalse(inst_create.called) # we don't build the instance since we didn't create it self.assertFalse(build_and_run.called) # we don't bury the instance in cell0 since it's already deleted self.assertFalse(bury.called) # we don't don't destroy the build request since it's already gone self.assertFalse(br_destroy.called) # Make sure the instance mapping is gone. self.assertRaises(exc.InstanceMappingNotFound, objects.InstanceMapping.get_by_instance_uuid, self.context, inst_uuid) @mock.patch('nova.compute.rpcapi.ComputeAPI.build_and_run_instance') @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations') @mock.patch('nova.objects.BuildRequest.destroy') @mock.patch('nova.conductor.manager.ComputeTaskManager._bury_in_cell0') def test_schedule_and_build_unmapped_host_ends_up_in_cell0(self, bury, br_destroy, select_dest, build_and_run): def _fake_bury(ctxt, request_spec, exc, build_requests=None, instances=None, block_device_mapping=None, tags=None): self.assertIn('not mapped to any cell', str(exc)) self.assertEqual(1, len(build_requests)) self.assertEqual(1, len(instances)) self.assertEqual(build_requests[0].instance_uuid, instances[0].uuid) self.assertEqual(self.params['block_device_mapping'], block_device_mapping) self.assertEqual(self.params['tags'], tags) bury.side_effect = _fake_bury select_dest.return_value = [[fake_selection1]] self.conductor.schedule_and_build_instances(**self.params) self.assertTrue(bury.called) self.assertFalse(build_and_run.called) @mock.patch('nova.compute.utils.notify_about_compute_task_error') @mock.patch('nova.objects.quotas.Quotas.check_deltas') @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations') def test_schedule_and_build_over_quota_during_recheck(self, mock_select, mock_check, mock_notify): mock_select.return_value = [[fake_selection1]] # Simulate a race where the first check passes and the recheck fails. # First check occurs in compute/api. fake_quotas = {'instances': 5, 'cores': 10, 'ram': 4096} fake_headroom = {'instances': 5, 'cores': 10, 'ram': 4096} fake_usages = {'instances': 5, 'cores': 10, 'ram': 4096} e = exc.OverQuota(overs=['instances'], quotas=fake_quotas, headroom=fake_headroom, usages=fake_usages) mock_check.side_effect = e original_save = objects.Instance.save def fake_save(inst, *args, **kwargs): # Make sure the context is targeted to the cell that the instance # was created in. self.assertIsNotNone( inst._context.db_connection, 'Context is not targeted') original_save(inst, *args, **kwargs) self.stub_out('nova.objects.Instance.save', fake_save) # This is needed to register the compute node in a cell. self.start_service('compute', host='host1') self.assertRaises( exc.TooManyInstances, self.conductor.schedule_and_build_instances, **self.params) project_id = self.params['context'].project_id mock_check.assert_called_once_with( self.params['context'], {'instances': 0, 'cores': 0, 'ram': 0}, project_id, user_id=None, check_project_id=project_id, check_user_id=None) # Verify we set the instance to ERROR state and set the fault message. 
instances = objects.InstanceList.get_all(self.ctxt) self.assertEqual(1, len(instances)) instance = instances[0] self.assertEqual(vm_states.ERROR, instance.vm_state) self.assertIsNone(instance.task_state) self.assertIn('Quota exceeded', instance.fault.message) # Verify we removed the build objects. build_requests = objects.BuildRequestList.get_all(self.ctxt) # Verify that the instance is mapped to a cell inst_mapping = objects.InstanceMapping.get_by_instance_uuid( self.ctxt, instance.uuid) self.assertIsNotNone(inst_mapping.cell_mapping) self.assertEqual(0, len(build_requests)) @db_api.api_context_manager.reader def request_spec_get_all(context): return context.session.query(api_models.RequestSpec).all() request_specs = request_spec_get_all(self.ctxt) self.assertEqual(0, len(request_specs)) mock_notify.assert_called_once_with( test.MatchType(context.RequestContext), 'build_instances', instance.uuid, test.MatchType(dict), 'error', test.MatchType(exc.TooManyInstances), test.MatchType(str)) request_spec_dict = mock_notify.call_args_list[0][0][3] for key in ('instance_type', 'num_instances', 'instance_properties', 'image'): self.assertIn(key, request_spec_dict) tb = mock_notify.call_args_list[0][0][6] self.assertIn('Traceback (most recent call last):', tb) @mock.patch('nova.compute.rpcapi.ComputeAPI.build_and_run_instance') @mock.patch('nova.objects.quotas.Quotas.check_deltas') @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations') def test_schedule_and_build_no_quota_recheck(self, mock_select, mock_check, mock_build): mock_select.return_value = [[fake_selection1]] # Disable recheck_quota. self.flags(recheck_quota=False, group='quota') # This is needed to register the compute node in a cell. self.start_service('compute', host='host1') self.conductor.schedule_and_build_instances(**self.params) # check_deltas should not have been called a second time. The first # check occurs in compute/api. mock_check.assert_not_called() self.assertTrue(mock_build.called) def test_schedule_and_build_instances_fill_request_spec(self): # makes sure there is some request group in the spec to be mapped self.params['request_specs'][0].requested_resources = [ objects.RequestGroup(requester_id=uuids.port1)] self._do_schedule_and_build_instances_test(self.params) @mock.patch('nova.conductor.manager.ComputeTaskManager.' '_cleanup_build_artifacts') @mock.patch('nova.scheduler.utils.' 'fill_provider_mapping', side_effect=test.TestingException) def test_schedule_and_build_instances_fill_request_spec_error( self, mock_fill, mock_cleanup): self.assertRaises( test.TestingException, self._do_schedule_and_build_instances_test, self.params) mock_fill.assert_called_once() mock_cleanup.assert_called_once() @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_provider_traits', new_callable=mock.NonCallableMock) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocs_for_consumer', new_callable=mock.NonCallableMock) @mock.patch('nova.objects.request_spec.RequestSpec.' 'map_requested_resources_to_providers', new_callable=mock.NonCallableMock) def test_schedule_and_build_instances_fill_request_spec_noop( self, mock_map, mock_get_allocs, mock_traits): """Tests to make sure _fill_provider_mapping exits early if there are no requested_resources on the RequestSpec. 
""" self.params['request_specs'][0].requested_resources = [] self._do_schedule_and_build_instances_test(self.params) @mock.patch.object(conductor_manager.ComputeTaskManager, '_create_and_bind_arqs') def test_schedule_and_build_instances_with_arqs_bind_ok( self, mock_create_bind_arqs): extra_specs = {'accel:device_profile': 'mydp'} instance = self.params['build_requests'][0].instance instance.flavor.extra_specs = extra_specs self._do_schedule_and_build_instances_test(self.params) # NOTE(Sundar): At this point, the instance has not been # associated with a host yet. The default host.nodename is # 'node1'. mock_create_bind_arqs.assert_called_once_with( self.params['context'], instance.uuid, extra_specs, 'node1', mock.ANY) @mock.patch.object(conductor_manager.ComputeTaskManager, '_cleanup_build_artifacts') @mock.patch.object(conductor_manager.ComputeTaskManager, '_create_and_bind_arqs') def test_schedule_and_build_instances_with_arqs_bind_exception( self, mock_create_bind_arqs, mock_cleanup): # Exceptions in _create_and_bind_arqs result in cleanup mock_create_bind_arqs.side_effect = ( exc.AcceleratorRequestOpFailed(op='', msg='')) try: self._do_schedule_and_build_instances_test(self.params) except exc.AcceleratorRequestOpFailed: pass mock_cleanup.assert_called_once_with( self.params['context'], mock.ANY, mock.ANY, self.params['build_requests'], self.params['request_specs'], self.params['block_device_mapping'], self.params['tags'], mock.ANY) def test_map_instance_to_cell_already_mapped(self): """Tests a scenario where an instance is already mapped to a cell during scheduling. """ build_request = self.params['build_requests'][0] instance = build_request.get_new_instance(self.ctxt) # Simulate MQ split brain craziness by updating the instance mapping # to point at cell0. inst_mapping = objects.InstanceMapping.get_by_instance_uuid( self.ctxt, instance.uuid) inst_mapping.cell_mapping = self.cell_mappings['cell0'] inst_mapping.save() cell1 = self.cell_mappings['cell1'] inst_mapping = self.conductor._map_instance_to_cell( self.ctxt, instance, cell1) # Assert that the instance mapping was updated to point at cell1 but # also that an error was logged. self.assertEqual(cell1.uuid, inst_mapping.cell_mapping.uuid) self.assertIn('During scheduling instance is already mapped to ' 'another cell', self.stdlog.logger.output) @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid') def test_cleanup_build_artifacts(self, inst_map_get): """Simple test to ensure the order of operations in the cleanup method is enforced. """ req_spec = fake_request_spec.fake_spec_obj() build_req = fake_build_request.fake_req_obj(self.context) instance = build_req.instance bdms = objects.BlockDeviceMappingList(objects=[ objects.BlockDeviceMapping(instance_uuid=instance.uuid)]) tags = objects.TagList(objects=[objects.Tag(tag='test')]) cell1 = self.cell_mappings['cell1'] cell_mapping_cache = {instance.uuid: cell1} err = exc.TooManyInstances('test') # We need to assert that BDMs and tags are created in the cell DB # before the instance mapping is updated. 
def fake_create_block_device_mapping(*args, **kwargs): inst_map_get.return_value.save.assert_not_called() def fake_create_tags(*args, **kwargs): inst_map_get.return_value.save.assert_not_called() with test.nested( mock.patch.object(self.conductor_manager, '_set_vm_state_and_notify'), mock.patch.object(self.conductor_manager, '_create_block_device_mapping', side_effect=fake_create_block_device_mapping), mock.patch.object(self.conductor_manager, '_create_tags', side_effect=fake_create_tags), mock.patch.object(build_req, 'destroy'), mock.patch.object(req_spec, 'destroy'), ) as ( _set_vm_state_and_notify, _create_block_device_mapping, _create_tags, build_req_destroy, req_spec_destroy, ): self.conductor_manager._cleanup_build_artifacts( self.context, err, [instance], [build_req], [req_spec], bdms, tags, cell_mapping_cache) # Assert the various mock calls. _set_vm_state_and_notify.assert_called_once_with( test.MatchType(context.RequestContext), instance.uuid, 'build_instances', {'vm_state': vm_states.ERROR, 'task_state': None}, err, req_spec) _create_block_device_mapping.assert_called_once_with( cell1, instance.flavor, instance.uuid, bdms) _create_tags.assert_called_once_with( test.MatchType(context.RequestContext), instance.uuid, tags) inst_map_get.return_value.save.assert_called_once_with() self.assertEqual(cell1, inst_map_get.return_value.cell_mapping) build_req_destroy.assert_called_once_with() req_spec_destroy.assert_called_once_with() @mock.patch('nova.objects.CellMapping.get_by_uuid') def test_bury_in_cell0_no_cell0(self, mock_cm_get): mock_cm_get.side_effect = exc.CellMappingNotFound(uuid='0') # Without an iterable build_requests in the database, this # wouldn't work if it continued past the cell0 lookup. self.conductor._bury_in_cell0(self.ctxt, None, None, build_requests=1) self.assertTrue(mock_cm_get.called) @mock.patch('nova.compute.utils.notify_about_compute_task_error') def test_bury_in_cell0(self, mock_notify): bare_br = self.params['build_requests'][0] inst_br = fake_build_request.fake_req_obj(self.ctxt) del inst_br.instance.id inst_br.create() inst = inst_br.get_new_instance(self.ctxt) deleted_br = fake_build_request.fake_req_obj(self.ctxt) del deleted_br.instance.id deleted_br.create() deleted_inst = inst_br.get_new_instance(self.ctxt) deleted_br.destroy() fast_deleted_br = fake_build_request.fake_req_obj(self.ctxt) del fast_deleted_br.instance.id fast_deleted_br.create() fast_deleted_br.destroy() self.conductor._bury_in_cell0(self.ctxt, self.params['request_specs'][0], Exception('Foo'), build_requests=[bare_br, inst_br, deleted_br, fast_deleted_br], instances=[inst, deleted_inst]) with conductor_manager.try_target_cell(self.ctxt, self.cell_mappings['cell0']): self.ctxt.read_deleted = 'yes' build_requests = objects.BuildRequestList.get_all(self.ctxt) instances = objects.InstanceList.get_all(self.ctxt) # Verify instance mappings. inst_mappings = objects.InstanceMappingList.get_by_cell_id( self.ctxt, self.cell_mappings['cell0'].id) # bare_br is the only instance that has a mapping from setUp. self.assertEqual(1, len(inst_mappings)) # Since we did not setup instance mappings for the other fake build # requests used in this test, we should see a message logged about # there being no instance mappings. 
self.assertIn('While burying instance in cell0, no instance mapping ' 'was found', self.stdlog.logger.output) self.assertEqual(0, len(build_requests)) self.assertEqual(4, len(instances)) inst_states = {inst.uuid: (inst.deleted, inst.vm_state) for inst in instances} expected = { bare_br.instance_uuid: (False, vm_states.ERROR), inst_br.instance_uuid: (False, vm_states.ERROR), deleted_br.instance_uuid: (True, vm_states.ERROR), fast_deleted_br.instance_uuid: (True, vm_states.ERROR), } self.assertEqual(expected, inst_states) self.assertEqual(4, mock_notify.call_count) mock_notify.assert_has_calls([ mock.call( test.MatchType(context.RequestContext), 'build_instances', bare_br.instance_uuid, test.MatchType(dict), 'error', test.MatchType(Exception), test.MatchType(str)), mock.call( test.MatchType(context.RequestContext), 'build_instances', inst_br.instance_uuid, test.MatchType(dict), 'error', test.MatchType(Exception), test.MatchType(str)), mock.call( test.MatchType(context.RequestContext), 'build_instances', deleted_br.instance_uuid, test.MatchType(dict), 'error', test.MatchType(Exception), test.MatchType(str)), mock.call( test.MatchType(context.RequestContext), 'build_instances', fast_deleted_br.instance_uuid, test.MatchType(dict), 'error', test.MatchType(Exception), test.MatchType(str))], any_order=True) for i in range(0, 3): # traceback.format_exc() returns 'NoneType' # because an exception is not raised in this test. # So the argument for traceback is not checked. request_spec_dict = mock_notify.call_args_list[i][0][3] for key in ('instance_type', 'num_instances', 'instance_properties', 'image'): self.assertIn(key, request_spec_dict) @mock.patch('nova.compute.utils.notify_about_compute_task_error') @mock.patch.object(objects.CellMapping, 'get_by_uuid') @mock.patch.object(conductor_manager.ComputeTaskManager, '_create_block_device_mapping') def test_bury_in_cell0_with_block_device_mapping(self, mock_create_bdm, mock_get_cell, mock_notify): mock_get_cell.return_value = self.cell_mappings['cell0'] inst_br = fake_build_request.fake_req_obj(self.ctxt) del inst_br.instance.id inst_br.create() inst = inst_br.get_new_instance(self.ctxt) self.conductor._bury_in_cell0( self.ctxt, self.params['request_specs'][0], Exception('Foo'), build_requests=[inst_br], instances=[inst], block_device_mapping=self.params['block_device_mapping']) mock_create_bdm.assert_called_once_with( self.cell_mappings['cell0'], inst.flavor, inst.uuid, self.params['block_device_mapping']) mock_notify.assert_called_once_with( test.MatchType(context.RequestContext), 'build_instances', inst.uuid, test.MatchType(dict), 'error', test.MatchType(Exception), test.MatchType(str)) # traceback.format_exc() returns 'NoneType' # because an exception is not raised in this test. # So the argument for traceback is not checked. 
request_spec_dict = mock_notify.call_args_list[0][0][3] for key in ('instance_type', 'num_instances', 'instance_properties', 'image'): self.assertIn(key, request_spec_dict) @mock.patch('nova.compute.utils.notify_about_compute_task_error') @mock.patch.object(objects.CellMapping, 'get_by_uuid') @mock.patch.object(conductor_manager.ComputeTaskManager, '_create_tags') def test_bury_in_cell0_with_tags(self, mock_create_tags, mock_get_cell, mock_notify): mock_get_cell.return_value = self.cell_mappings['cell0'] inst_br = fake_build_request.fake_req_obj(self.ctxt) del inst_br.instance.id inst_br.create() inst = inst_br.get_new_instance(self.ctxt) self.conductor._bury_in_cell0( self.ctxt, self.params['request_specs'][0], Exception('Foo'), build_requests=[inst_br], instances=[inst], tags=self.params['tags']) mock_create_tags.assert_called_once_with( test.MatchType(context.RequestContext), inst.uuid, self.params['tags']) @mock.patch('nova.objects.Instance.create') def test_bury_in_cell0_already_mapped(self, mock_inst_create): """Tests a scenario where the instance mapping is already mapped to a cell when we attempt to bury the instance in cell0. """ build_request = self.params['build_requests'][0] # Simulate MQ split brain craziness by updating the instance mapping # to point at cell1. inst_mapping = objects.InstanceMapping.get_by_instance_uuid( self.ctxt, build_request.instance_uuid) inst_mapping.cell_mapping = self.cell_mappings['cell1'] inst_mapping.save() # Now attempt to bury the instance in cell0. with mock.patch.object(inst_mapping, 'save') as mock_inst_map_save: self.conductor._bury_in_cell0( self.ctxt, self.params['request_specs'][0], exc.NoValidHost(reason='idk'), build_requests=[build_request]) # We should have exited without creating the instance in cell0 nor # should the instance mapping have been updated to point at cell0. mock_inst_create.assert_not_called() mock_inst_map_save.assert_not_called() # And we should have logged an error. 
self.assertIn('When attempting to bury instance in cell0, the ' 'instance is already mapped to cell', self.stdlog.logger.output) def test_reset(self): with mock.patch('nova.compute.rpcapi.ComputeAPI') as mock_rpc: old_rpcapi = self.conductor_manager.compute_rpcapi self.conductor_manager.reset() mock_rpc.assert_called_once_with() self.assertNotEqual(old_rpcapi, self.conductor_manager.compute_rpcapi) @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') def test_migrate_server_fails_with_rebuild(self, get_im): get_im.return_value.cell_mapping = ( objects.CellMappingList.get_all(self.context)[0]) instance = fake_instance.fake_instance_obj(self.context, vm_state=vm_states.ACTIVE) self.assertRaises(NotImplementedError, self.conductor.migrate_server, self.context, instance, None, True, True, None, None, None) @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') def test_migrate_server_fails_with_flavor(self, get_im): get_im.return_value.cell_mapping = ( objects.CellMappingList.get_all(self.context)[0]) instance = fake_instance.fake_instance_obj(self.context, vm_state=vm_states.ACTIVE, flavor=self.flavor) self.assertRaises(NotImplementedError, self.conductor.migrate_server, self.context, instance, None, True, False, self.flavor, None, None) def _build_request_spec(self, instance): return { 'instance_properties': { 'uuid': instance['uuid'], }, } @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') @mock.patch.object(scheduler_utils, 'set_vm_state_and_notify') @mock.patch.object(live_migrate.LiveMigrationTask, 'execute') def _test_migrate_server_deals_with_expected_exceptions(self, ex, mock_execute, mock_set, get_im): get_im.return_value.cell_mapping = ( objects.CellMappingList.get_all(self.context)[0]) instance = fake_instance.fake_db_instance(uuid=uuids.instance, vm_state=vm_states.ACTIVE) inst_obj = objects.Instance._from_db_object( self.context, objects.Instance(), instance, []) mock_execute.side_effect = ex self.conductor = utils.ExceptionHelper(self.conductor) self.assertRaises(type(ex), self.conductor.migrate_server, self.context, inst_obj, {'host': 'destination'}, True, False, None, 'block_migration', 'disk_over_commit') mock_set.assert_called_once_with(self.context, inst_obj.uuid, 'compute_task', 'migrate_server', {'vm_state': vm_states.ACTIVE, 'task_state': None, 'expected_task_state': task_states.MIGRATING}, ex, self._build_request_spec(inst_obj)) @mock.patch.object(scheduler_utils, 'set_vm_state_and_notify') @mock.patch.object(live_migrate.LiveMigrationTask, 'execute') @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') def test_migrate_server_deals_with_invalidcpuinfo_exception( self, get_im, mock_execute, mock_set): get_im.return_value.cell_mapping = ( objects.CellMappingList.get_all(self.context)[0]) instance = fake_instance.fake_db_instance(uuid=uuids.instance, vm_state=vm_states.ACTIVE) inst_obj = objects.Instance._from_db_object( self.context, objects.Instance(), instance, []) ex = exc.InvalidCPUInfo(reason="invalid cpu info.") mock_execute.side_effect = ex self.conductor = utils.ExceptionHelper(self.conductor) self.assertRaises(exc.InvalidCPUInfo, self.conductor.migrate_server, self.context, inst_obj, {'host': 'destination'}, True, False, None, 'block_migration', 'disk_over_commit') mock_execute.assert_called_once_with() mock_set.assert_called_once_with( self.context, inst_obj.uuid, 'compute_task', 'migrate_server', {'vm_state': vm_states.ACTIVE, 'task_state': None, 'expected_task_state': task_states.MIGRATING}, ex, 
self._build_request_spec(inst_obj)) def test_migrate_server_deals_with_expected_exception(self): exs = [exc.InstanceInvalidState(instance_uuid="fake", attr='', state='', method=''), exc.DestinationHypervisorTooOld(), exc.HypervisorUnavailable(host='dummy'), exc.MigrationPreCheckError(reason='dummy'), exc.InvalidSharedStorage(path='dummy', reason='dummy'), exc.NoValidHost(reason='dummy'), exc.ComputeServiceUnavailable(host='dummy'), exc.InvalidHypervisorType(), exc.InvalidCPUInfo(reason='dummy'), exc.UnableToMigrateToSelf(instance_id='dummy', host='dummy'), exc.InvalidLocalStorage(path='dummy', reason='dummy'), exc.MigrationSchedulerRPCError(reason='dummy'), exc.ComputeHostNotFound(host='dummy')] for ex in exs: self._test_migrate_server_deals_with_expected_exceptions(ex) @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') @mock.patch.object(scheduler_utils, 'set_vm_state_and_notify') @mock.patch.object(live_migrate.LiveMigrationTask, 'execute') def test_migrate_server_deals_with_unexpected_exceptions(self, mock_live_migrate, mock_set_state, get_im): get_im.return_value.cell_mapping = ( objects.CellMappingList.get_all(self.context)[0]) expected_ex = IOError('fake error') mock_live_migrate.side_effect = expected_ex instance = fake_instance.fake_db_instance() inst_obj = objects.Instance._from_db_object( self.context, objects.Instance(), instance, []) ex = self.assertRaises(exc.MigrationError, self.conductor.migrate_server, self.context, inst_obj, {'host': 'destination'}, True, False, None, 'block_migration', 'disk_over_commit') request_spec = {'instance_properties': { 'uuid': instance['uuid'], }, } mock_set_state.assert_called_once_with(self.context, instance['uuid'], 'compute_task', 'migrate_server', dict(vm_state=vm_states.ERROR, task_state=None, expected_task_state=task_states.MIGRATING,), expected_ex, request_spec) self.assertEqual(ex.kwargs['reason'], six.text_type(expected_ex)) @mock.patch.object(scheduler_utils, 'set_vm_state_and_notify') def test_set_vm_state_and_notify(self, mock_set): self.conductor._set_vm_state_and_notify( self.context, 1, 'method', 'updates', 'ex', 'request_spec') mock_set.assert_called_once_with( self.context, 1, 'compute_task', 'method', 'updates', 'ex', 'request_spec') @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') @mock.patch.object(objects.RequestSpec, 'from_components') @mock.patch.object(scheduler_utils, 'setup_instance_group') @mock.patch.object(utils, 'get_image_from_system_metadata') @mock.patch.object(query.SchedulerQueryClient, 'select_destinations') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify') @mock.patch.object(migrate.MigrationTask, 'rollback') @mock.patch.object(migrate.MigrationTask, '_preallocate_migration') def test_cold_migrate_no_valid_host_back_in_active_state( self, _preallocate_migration, rollback_mock, notify_mock, select_dest_mock, metadata_mock, sig_mock, spec_fc_mock, im_mock): inst_obj = objects.Instance( image_ref='fake-image_ref', instance_type_id=self.flavor.id, vm_state=vm_states.ACTIVE, system_metadata={}, uuid=uuids.instance, user_id=fakes.FAKE_USER_ID, flavor=self.flavor, availability_zone=None, pci_requests=None, numa_topology=None, project_id=self.context.project_id) image = 'fake-image' fake_spec = objects.RequestSpec(image=objects.ImageMeta()) spec_fc_mock.return_value = fake_spec metadata_mock.return_value = image exc_info = exc.NoValidHost(reason="") select_dest_mock.side_effect = exc_info updates = {'vm_state': vm_states.ACTIVE, 'task_state': 
None} im_mock.return_value = objects.InstanceMapping( cell_mapping=objects.CellMapping.get_by_uuid(self.context, uuids.cell1)) self.assertRaises(exc.NoValidHost, self.conductor._cold_migrate, self.context, inst_obj, self.flavor, {}, True, None, None) metadata_mock.assert_called_with({}) sig_mock.assert_called_once_with(self.context, fake_spec) self.assertEqual(inst_obj.project_id, fake_spec.project_id) notify_mock.assert_called_once_with(self.context, inst_obj.uuid, 'migrate_server', updates, exc_info, fake_spec) rollback_mock.assert_called_once_with(exc_info) @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') @mock.patch.object(scheduler_utils, 'setup_instance_group') @mock.patch.object(objects.RequestSpec, 'from_components') @mock.patch.object(utils, 'get_image_from_system_metadata') @mock.patch.object(query.SchedulerQueryClient, 'select_destinations') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify') @mock.patch.object(migrate.MigrationTask, 'rollback') @mock.patch.object(migrate.MigrationTask, '_preallocate_migration') def test_cold_migrate_no_valid_host_back_in_stopped_state( self, _preallocate_migration, rollback_mock, notify_mock, select_dest_mock, metadata_mock, spec_fc_mock, sig_mock, im_mock): inst_obj = objects.Instance( image_ref='fake-image_ref', vm_state=vm_states.STOPPED, instance_type_id=self.flavor.id, system_metadata={}, uuid=uuids.instance, user_id=fakes.FAKE_USER_ID, flavor=self.flavor, numa_topology=None, pci_requests=None, availability_zone=None, project_id=self.context.project_id) image = 'fake-image' fake_spec = objects.RequestSpec(image=objects.ImageMeta()) spec_fc_mock.return_value = fake_spec im_mock.return_value = objects.InstanceMapping( cell_mapping=objects.CellMapping.get_by_uuid(self.context, uuids.cell1)) metadata_mock.return_value = image exc_info = exc.NoValidHost(reason="") select_dest_mock.side_effect = exc_info updates = {'vm_state': vm_states.STOPPED, 'task_state': None} self.assertRaises(exc.NoValidHost, self.conductor._cold_migrate, self.context, inst_obj, self.flavor, {}, True, None, None) metadata_mock.assert_called_with({}) sig_mock.assert_called_once_with(self.context, fake_spec) self.assertEqual(inst_obj.project_id, fake_spec.project_id) notify_mock.assert_called_once_with(self.context, inst_obj.uuid, 'migrate_server', updates, exc_info, fake_spec) rollback_mock.assert_called_once_with(exc_info) def test_cold_migrate_no_valid_host_error_msg(self): inst_obj = objects.Instance( image_ref='fake-image_ref', vm_state=vm_states.STOPPED, instance_type_id=self.flavor.id, system_metadata={}, uuid=uuids.instance, user_id=fakes.FAKE_USER_ID) fake_spec = fake_request_spec.fake_spec_obj() image = 'fake-image' with test.nested( mock.patch.object(utils, 'get_image_from_system_metadata', return_value=image), mock.patch.object(self.conductor, '_set_vm_state_and_notify'), mock.patch.object(migrate.MigrationTask, 'execute', side_effect=exc.NoValidHost(reason="")), mock.patch.object(migrate.MigrationTask, 'rollback') ) as (image_mock, set_vm_mock, task_execute_mock, task_rollback_mock): nvh = self.assertRaises(exc.NoValidHost, self.conductor._cold_migrate, self.context, inst_obj, self.flavor, {}, True, fake_spec, None) self.assertIn('cold migrate', nvh.message) @mock.patch.object(utils, 'get_image_from_system_metadata') @mock.patch.object(migrate.MigrationTask, 'execute') @mock.patch.object(migrate.MigrationTask, 'rollback') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify') 
@mock.patch.object(objects.RequestSpec, 'from_components') def test_cold_migrate_no_valid_host_in_group(self, spec_fc_mock, set_vm_mock, task_rollback_mock, task_exec_mock, image_mock): inst_obj = objects.Instance( image_ref='fake-image_ref', vm_state=vm_states.STOPPED, instance_type_id=self.flavor.id, system_metadata={}, uuid=uuids.instance, project_id=fakes.FAKE_PROJECT_ID, user_id=fakes.FAKE_USER_ID, flavor=self.flavor, numa_topology=None, pci_requests=None, availability_zone=None) image = 'fake-image' exception = exc.UnsupportedPolicyException(reason='') fake_spec = fake_request_spec.fake_spec_obj() spec_fc_mock.return_value = fake_spec image_mock.return_value = image task_exec_mock.side_effect = exception self.assertRaises(exc.UnsupportedPolicyException, self.conductor._cold_migrate, self.context, inst_obj, self.flavor, {}, True, None, None) updates = {'vm_state': vm_states.STOPPED, 'task_state': None} set_vm_mock.assert_called_once_with(self.context, inst_obj.uuid, 'migrate_server', updates, exception, fake_spec) spec_fc_mock.assert_called_once_with( self.context, inst_obj.uuid, image, self.flavor, inst_obj.numa_topology, inst_obj.pci_requests, {}, None, inst_obj.availability_zone, project_id=inst_obj.project_id, user_id=inst_obj.user_id) @mock.patch.object(migrate.MigrationTask, '_is_selected_host_in_source_cell', return_value=True) @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') @mock.patch.object(scheduler_utils, 'setup_instance_group') @mock.patch.object(objects.RequestSpec, 'from_components') @mock.patch.object(utils, 'get_image_from_system_metadata') @mock.patch.object(query.SchedulerQueryClient, 'select_destinations') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify') @mock.patch.object(migrate.MigrationTask, 'rollback') @mock.patch.object(compute_rpcapi.ComputeAPI, 'prep_resize') @mock.patch.object(migrate.MigrationTask, '_preallocate_migration') def test_cold_migrate_exception_host_in_error_state_and_raise( self, _preallocate_migration, prep_resize_mock, rollback_mock, notify_mock, select_dest_mock, metadata_mock, spec_fc_mock, sig_mock, im_mock, check_cell_mock): inst_obj = objects.Instance( image_ref='fake-image_ref', vm_state=vm_states.STOPPED, instance_type_id=self.flavor.id, system_metadata={}, uuid=uuids.instance, user_id=fakes.FAKE_USER_ID, flavor=self.flavor, availability_zone=None, pci_requests=None, numa_topology=None, project_id=self.context.project_id) image = 'fake-image' fake_spec = objects.RequestSpec(image=objects.ImageMeta()) spec_fc_mock.return_value = fake_spec im_mock.return_value = objects.InstanceMapping( cell_mapping=objects.CellMapping.get_by_uuid(self.context, uuids.cell1)) hosts = [dict(host='host1', nodename='node1', limits={})] metadata_mock.return_value = image exc_info = test.TestingException('something happened') select_dest_mock.return_value = [[fake_selection1]] updates = {'vm_state': inst_obj.vm_state, 'task_state': None} prep_resize_mock.side_effect = exc_info with mock.patch.object(inst_obj, 'refresh') as mock_refresh: self.assertRaises(test.TestingException, self.conductor._cold_migrate, self.context, inst_obj, self.flavor, {}, True, None, None) mock_refresh.assert_called_once_with() # Filter properties are populated during code execution legacy_filter_props = {'retry': {'num_attempts': 1, 'hosts': [['host1', 'node1']]}, 'limits': {}} metadata_mock.assert_called_with({}) sig_mock.assert_called_once_with(self.context, fake_spec) self.assertEqual(inst_obj.project_id, 
fake_spec.project_id) select_dest_mock.assert_called_once_with(self.context, fake_spec, [inst_obj.uuid], return_objects=True, return_alternates=True) prep_resize_mock.assert_called_once_with( self.context, inst_obj, fake_spec.image, self.flavor, hosts[0]['host'], _preallocate_migration.return_value, request_spec=fake_spec, filter_properties=legacy_filter_props, node=hosts[0]['nodename'], clean_shutdown=True, host_list=[]) notify_mock.assert_called_once_with(self.context, inst_obj.uuid, 'migrate_server', updates, exc_info, fake_spec) rollback_mock.assert_called_once_with(exc_info) @mock.patch('nova.conductor.tasks.migrate.MigrationTask.execute', side_effect=test.TestingException('execute fails')) @mock.patch('nova.objects.Instance.refresh', side_effect=exc.InstanceNotFound(instance_id=uuids.instance)) def test_cold_migrate_exception_instance_refresh_not_found( self, mock_refresh, mock_execute): """Tests the scenario where MigrationTask.execute raises some error and then the instance.refresh() in the exception block raises InstanceNotFound because the instance was deleted during the operation. """ params = {'uuid': uuids.instance} instance = self._create_fake_instance_obj(params=params) filter_properties = {} clean_shutdown = True request_spec = fake_request_spec.fake_spec_obj() request_spec.flavor = instance.flavor host_list = None self.assertRaises(test.TestingException, self.conductor._cold_migrate, self.context, instance, instance.flavor, filter_properties, clean_shutdown, request_spec, host_list) self.assertIn('During cold migrate the instance was deleted.', self.stdlog.logger.output) mock_execute.assert_called_once_with() mock_refresh.assert_called_once_with() @mock.patch.object(objects.RequestSpec, 'save') @mock.patch.object(migrate.MigrationTask, 'execute') @mock.patch.object(utils, 'get_image_from_system_metadata') def test_cold_migrate_updates_flavor_if_existing_reqspec(self, image_mock, task_exec_mock, spec_save_mock): inst_obj = objects.Instance( image_ref='fake-image_ref', vm_state=vm_states.STOPPED, instance_type_id=self.flavor.id, system_metadata={}, uuid=uuids.instance, user_id=fakes.FAKE_USER_ID, flavor=self.flavor, availability_zone=None, pci_requests=None, numa_topology=None) image = 'fake-image' fake_spec = fake_request_spec.fake_spec_obj() image_mock.return_value = image # Just make sure we have an original flavor which is different from # the new one self.assertNotEqual(self.flavor, fake_spec.flavor) self.conductor._cold_migrate(self.context, inst_obj, self.flavor, {}, True, fake_spec, None) # Now the RequestSpec should be updated... self.assertEqual(self.flavor, fake_spec.flavor) # ...and persisted spec_save_mock.assert_called_once_with() @mock.patch('nova.objects.RequestSpec.from_primitives') @mock.patch.object(objects.RequestSpec, 'save') def test_cold_migrate_reschedule_legacy_request_spec( self, spec_save_mock, from_primitives_mock): """Tests the scenario that compute RPC API is pinned to less than 5.1 so conductor passes a legacy dict request spec to compute and compute sends it back to conductor on a reschedule during prep_resize so conductor has to convert the legacy request spec dict to an object. 
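        Once converted, the spec is handled like any other request spec:
        the migrate task is built from the object and the spec is saved.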
""" instance = objects.Instance(system_metadata={}) fake_spec = fake_request_spec.fake_spec_obj() from_primitives_mock.return_value = fake_spec legacy_spec = fake_spec.to_legacy_request_spec_dict() filter_props = {} clean_shutdown = True host_list = mock.sentinel.host_list with mock.patch.object( self.conductor, '_build_cold_migrate_task') as build_task_mock: self.conductor._cold_migrate( self.context, instance, self.flavor, filter_props, clean_shutdown, legacy_spec, host_list) # Make sure the legacy request spec was converted. from_primitives_mock.assert_called_once_with( self.context, legacy_spec, filter_props) build_task_mock.assert_called_once_with( self.context, instance, self.flavor, fake_spec, clean_shutdown, host_list) spec_save_mock.assert_called_once_with() def test_resize_no_valid_host_error_msg(self): flavor_new = objects.Flavor.get_by_name(self.ctxt, 'm1.small') inst_obj = objects.Instance( image_ref='fake-image_ref', vm_state=vm_states.STOPPED, instance_type_id=self.flavor.id, system_metadata={}, uuid=uuids.instance, user_id=fakes.FAKE_USER_ID) fake_spec = fake_request_spec.fake_spec_obj() image = 'fake-image' with test.nested( mock.patch.object(utils, 'get_image_from_system_metadata', return_value=image), mock.patch.object(scheduler_utils, 'build_request_spec', return_value=fake_spec), mock.patch.object(self.conductor, '_set_vm_state_and_notify'), mock.patch.object(migrate.MigrationTask, 'execute', side_effect=exc.NoValidHost(reason="")), mock.patch.object(migrate.MigrationTask, 'rollback') ) as (image_mock, brs_mock, vm_st_mock, task_execute_mock, task_rb_mock): nvh = self.assertRaises(exc.NoValidHost, self.conductor._cold_migrate, self.context, inst_obj, flavor_new, {}, True, fake_spec, None) self.assertIn('resize', nvh.message) @mock.patch.object(compute_rpcapi.ComputeAPI, 'build_and_run_instance') @mock.patch.object(conductor_manager.ComputeTaskManager, '_schedule_instances') @mock.patch.object(scheduler_utils, 'build_request_spec') @mock.patch.object(objects.Instance, 'save') @mock.patch('nova.objects.BuildRequest.get_by_instance_uuid') @mock.patch.object(objects.RequestSpec, 'from_primitives') def test_build_instances_instance_not_found( self, fp, _mock_buildreq, mock_save, mock_build_rspec, mock_schedule, mock_build_run): fake_spec = objects.RequestSpec() fp.return_value = fake_spec instances = [fake_instance.fake_instance_obj(self.context) for i in range(2)] image = {'fake-data': 'should_pass_silently'} spec = {'fake': 'specs', 'instance_properties': instances[0]} mock_build_rspec.return_value = spec filter_properties = {'retry': {'num_attempts': 1, 'hosts': []}} inst_uuids = [inst.uuid for inst in instances] sched_return = copy.deepcopy(fake_host_lists2) mock_schedule.return_value = sched_return mock_save.side_effect = [ exc.InstanceNotFound(instance_id=instances[0].uuid), None] filter_properties2 = {'limits': {}, 'retry': {'num_attempts': 1, 'hosts': [['host2', 'node2']]}} # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) self.conductor.build_instances(self.context, instances=instances, image=image, filter_properties={}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False, host_lists=None) # RequestSpec.from_primitives is called once before we call the # scheduler to select_destinations and then once per instance that # gets build in the compute. 
fp.assert_has_calls([ mock.call(self.context, spec, filter_properties), mock.call(self.context, spec, filter_properties2)]) mock_build_rspec.assert_called_once_with(image, mock.ANY) mock_schedule.assert_called_once_with( self.context, fake_spec, inst_uuids, return_alternates=True) mock_save.assert_has_calls([mock.call(), mock.call()]) mock_build_run.assert_called_once_with( self.context, instance=instances[1], host='host2', image={'fake-data': 'should_pass_silently'}, request_spec=fake_spec, filter_properties=filter_properties2, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping=test.MatchType( objects.BlockDeviceMappingList), node='node2', limits=None, host_list=[], accel_uuids=[]) @mock.patch.object(scheduler_utils, 'setup_instance_group') @mock.patch.object(scheduler_utils, 'build_request_spec') def test_build_instances_info_cache_not_found(self, build_request_spec, setup_instance_group): instances = [fake_instance.fake_instance_obj(self.context) for i in range(2)] image = {'fake-data': 'should_pass_silently'} destinations = [[fake_selection1], [fake_selection2]] spec = {'fake': 'specs', 'instance_properties': instances[0]} build_request_spec.return_value = spec fake_spec = objects.RequestSpec() with test.nested( mock.patch.object(instances[0], 'save', side_effect=exc.InstanceInfoCacheNotFound( instance_uuid=instances[0].uuid)), mock.patch.object(instances[1], 'save'), mock.patch.object(objects.RequestSpec, 'from_primitives', return_value=fake_spec), mock.patch.object(self.conductor_manager.query_client, 'select_destinations', return_value=destinations), mock.patch.object(self.conductor_manager.compute_rpcapi, 'build_and_run_instance'), mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') ) as (inst1_save, inst2_save, from_primitives, select_destinations, build_and_run_instance, get_buildreq): # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) self.conductor.build_instances(self.context, instances=instances, image=image, filter_properties={}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False) setup_instance_group.assert_called_once_with( self.context, fake_spec) get_buildreq.return_value.destroy.assert_called_once_with() build_and_run_instance.assert_called_once_with(self.context, instance=instances[1], host='host2', image={'fake-data': 'should_pass_silently'}, request_spec=from_primitives.return_value, filter_properties={'limits': {}, 'retry': {'num_attempts': 1, 'hosts': [['host2', 'node2']]}}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping=mock.ANY, node='node2', limits=None, host_list=[], accel_uuids=[]) @mock.patch('nova.compute.utils.notify_about_compute_task_error') @mock.patch('nova.objects.Instance.save') def test_build_instances_max_retries_exceeded(self, mock_save, mock_notify): """Tests that when populate_retry raises MaxRetriesExceeded in build_instances, we don't attempt to cleanup the build request. 
""" instance = fake_instance.fake_instance_obj(self.context) image = {'id': uuids.image_id} filter_props = { 'retry': { 'num_attempts': CONF.scheduler.max_attempts } } requested_networks = objects.NetworkRequestList() with mock.patch.object(self.conductor, '_destroy_build_request', new_callable=mock.NonCallableMock): self.conductor.build_instances( self.context, [instance], image, filter_props, mock.sentinel.admin_pass, mock.sentinel.files, requested_networks, mock.sentinel.secgroups) mock_save.assert_called_once_with() mock_notify.assert_called_once_with( self.context, 'build_instances', instance.uuid, test.MatchType(dict), 'error', test.MatchType(exc.MaxRetriesExceeded), test.MatchType(str)) request_spec_dict = mock_notify.call_args_list[0][0][3] for key in ('instance_type', 'num_instances', 'instance_properties', 'image'): self.assertIn(key, request_spec_dict) tb = mock_notify.call_args_list[0][0][6] self.assertIn('Traceback (most recent call last):', tb) @mock.patch('nova.compute.utils.notify_about_compute_task_error') @mock.patch('nova.objects.Instance.save') def test_build_instances_reschedule_no_valid_host(self, mock_save, mock_notify): """Tests that when select_destinations raises NoValidHost in build_instances, we don't attempt to cleanup the build request if we're rescheduling (num_attempts>1). """ instance = fake_instance.fake_instance_obj(self.context) image = {'id': uuids.image_id} filter_props = { 'retry': { 'num_attempts': 1 # populate_retry will increment this } } requested_networks = objects.NetworkRequestList() with mock.patch.object(self.conductor, '_destroy_build_request', new_callable=mock.NonCallableMock): with mock.patch.object( self.conductor.query_client, 'select_destinations', side_effect=exc.NoValidHost(reason='oops')): self.conductor.build_instances( self.context, [instance], image, filter_props, mock.sentinel.admin_pass, mock.sentinel.files, requested_networks, mock.sentinel.secgroups) mock_save.assert_called_once_with() mock_notify.assert_called_once_with( self.context, 'build_instances', instance.uuid, test.MatchType(dict), 'error', test.MatchType(exc.NoValidHost), test.MatchType(str)) request_spec_dict = mock_notify.call_args_list[0][0][3] for key in ('instance_type', 'num_instances', 'instance_properties', 'image'): self.assertIn(key, request_spec_dict) tb = mock_notify.call_args_list[0][0][6] self.assertIn('Traceback (most recent call last):', tb) @mock.patch('nova.scheduler.utils.claim_resources', return_value=True) @mock.patch('nova.scheduler.utils.fill_provider_mapping') @mock.patch('nova.availability_zones.get_host_availability_zone', side_effect=db_exc.CantStartEngineError) @mock.patch('nova.conductor.manager.ComputeTaskManager.' '_cleanup_when_reschedule_fails') @mock.patch('nova.objects.Instance.save') def test_build_reschedule_get_az_error(self, mock_save, mock_cleanup, mock_get_az, mock_fill, mock_claim): """Tests a scenario where rescheduling during a build fails trying to get the AZ for the selected host will put the instance into a terminal (ERROR) state. """ instance = fake_instance.fake_instance_obj(self.context) image = objects.ImageMeta() requested_networks = objects.NetworkRequestList() request_spec = fake_request_spec.fake_spec_obj() host_lists = copy.deepcopy(fake_host_lists_alt) filter_props = {} # Pre-populate the filter properties with the initial host we tried to # build on which failed and triggered a reschedule. 
host1 = host_lists[0].pop(0) scheduler_utils.populate_filter_properties(filter_props, host1) # We have to save off the first alternate we try since build_instances # modifies the host_lists list. host2 = host_lists[0][0] self.conductor.build_instances( self.context, [instance], image, filter_props, mock.sentinel.admin_password, mock.sentinel.injected_files, requested_networks, mock.sentinel.security_groups, request_spec=request_spec, host_lists=host_lists) mock_claim.assert_called_once() mock_fill.assert_called_once() mock_get_az.assert_called_once_with(self.context, host2.service_host) mock_cleanup.assert_called_once_with( self.context, instance, test.MatchType(db_exc.CantStartEngineError), test.MatchType(dict), requested_networks) # Assert that we did not continue processing the instance once we # handled the error. mock_save.assert_not_called() @mock.patch.object(conductor_manager.ComputeTaskManager, '_cleanup_allocated_networks') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify') @mock.patch.object(compute_utils, 'delete_arqs_if_needed') def test_cleanup_arqs_on_reschedule(self, mock_del_arqs, mock_set_vm, mock_clean_net): instance = fake_instance.fake_instance_obj(self.context) self.conductor_manager._cleanup_when_reschedule_fails( self.context, instance, exception=None, legacy_request_spec=None, requested_networks=None) mock_del_arqs.assert_called_once_with(self.context, instance) def test_cleanup_allocated_networks_none_requested(self): # Tests that we don't deallocate networks if 'none' were specifically # requested. fake_inst = fake_instance.fake_instance_obj( self.context, expected_attrs=['system_metadata']) requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id='none')]) with mock.patch.object(self.conductor.network_api, 'deallocate_for_instance') as deallocate: with mock.patch.object(fake_inst, 'save') as mock_save: self.conductor._cleanup_allocated_networks( self.context, fake_inst, requested_networks) self.assertFalse(deallocate.called) self.assertEqual('False', fake_inst.system_metadata['network_allocated'], fake_inst.system_metadata) mock_save.assert_called_once_with() def test_cleanup_allocated_networks_auto_or_none_provided(self): # Tests that we deallocate networks if auto-allocating networks or # requested_networks=None. fake_inst = fake_instance.fake_instance_obj( self.context, expected_attrs=['system_metadata']) requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id='auto')]) for req_net in (requested_networks, None): with mock.patch.object(self.conductor.network_api, 'deallocate_for_instance') as deallocate: with mock.patch.object(fake_inst, 'save') as mock_save: self.conductor._cleanup_allocated_networks( self.context, fake_inst, req_net) deallocate.assert_called_once_with( self.context, fake_inst, requested_networks=req_net) self.assertEqual('False', fake_inst.system_metadata['network_allocated'], fake_inst.system_metadata) mock_save.assert_called_once_with() @mock.patch.object(objects.ComputeNode, 'get_by_host_and_nodename', side_effect=exc.ComputeHostNotFound('source-host')) def test_allocate_for_evacuate_dest_host_source_node_not_found_no_reqspec( self, get_compute_node): """Tests that the source node for the instance isn't found. In this case there is no request spec provided. 
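        The error notification is therefore expected to be sent with a
        null request spec.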
""" instance = self.params['build_requests'][0].instance instance.host = 'source-host' with mock.patch.object(self.conductor, '_set_vm_state_and_notify') as notify: ex = self.assertRaises( exc.ComputeHostNotFound, self.conductor._allocate_for_evacuate_dest_host, self.ctxt, instance, 'dest-host') get_compute_node.assert_called_once_with( self.ctxt, instance.host, instance.node) notify.assert_called_once_with( self.ctxt, instance.uuid, 'rebuild_server', {'vm_state': instance.vm_state, 'task_state': None}, ex, None) @mock.patch.object(objects.ComputeNode, 'get_by_host_and_nodename', return_value=objects.ComputeNode(host='source-host')) @mock.patch.object(objects.ComputeNode, 'get_first_node_by_host_for_old_compat', side_effect=exc.ComputeHostNotFound(host='dest-host')) def test_allocate_for_evacuate_dest_host_dest_node_not_found_reqspec( self, get_dest_node, get_source_node): """Tests that the destination node for the request isn't found. In this case there is a request spec provided. """ instance = self.params['build_requests'][0].instance instance.host = 'source-host' reqspec = self.params['request_specs'][0] with mock.patch.object(self.conductor, '_set_vm_state_and_notify') as notify: ex = self.assertRaises( exc.ComputeHostNotFound, self.conductor._allocate_for_evacuate_dest_host, self.ctxt, instance, 'dest-host', reqspec) get_source_node.assert_called_once_with( self.ctxt, instance.host, instance.node) get_dest_node.assert_called_once_with( self.ctxt, 'dest-host', use_slave=True) notify.assert_called_once_with( self.ctxt, instance.uuid, 'rebuild_server', {'vm_state': instance.vm_state, 'task_state': None}, ex, reqspec) @mock.patch.object(objects.ComputeNode, 'get_by_host_and_nodename', return_value=objects.ComputeNode(host='source-host')) @mock.patch.object(objects.ComputeNode, 'get_first_node_by_host_for_old_compat', return_value=objects.ComputeNode(host='dest-host')) def test_allocate_for_evacuate_dest_host_claim_fails( self, get_dest_node, get_source_node): """Tests that the allocation claim fails.""" instance = self.params['build_requests'][0].instance instance.host = 'source-host' reqspec = self.params['request_specs'][0] with test.nested( mock.patch.object(self.conductor, '_set_vm_state_and_notify'), mock.patch.object(scheduler_utils, 'claim_resources_on_destination', side_effect=exc.NoValidHost(reason='I am full')) ) as ( notify, claim ): ex = self.assertRaises( exc.NoValidHost, self.conductor._allocate_for_evacuate_dest_host, self.ctxt, instance, 'dest-host', reqspec) get_source_node.assert_called_once_with( self.ctxt, instance.host, instance.node) get_dest_node.assert_called_once_with( self.ctxt, 'dest-host', use_slave=True) claim.assert_called_once_with( self.ctxt, self.conductor.report_client, instance, get_source_node.return_value, get_dest_node.return_value) notify.assert_called_once_with( self.ctxt, instance.uuid, 'rebuild_server', {'vm_state': instance.vm_state, 'task_state': None}, ex, reqspec) @mock.patch('nova.conductor.tasks.live_migrate.LiveMigrationTask.execute') def test_live_migrate_instance(self, mock_execute): """Tests that asynchronous live migration targets the cell that the instance lives in. """ instance = self.params['build_requests'][0].instance scheduler_hint = {'host': None} reqspec = self.params['request_specs'][0] # setUp created the instance mapping but didn't target it to a cell, # to mock out the API doing that, but let's just update it to point # at cell1. 
im = objects.InstanceMapping.get_by_instance_uuid( self.ctxt, instance.uuid) im.cell_mapping = self.cell_mappings[test.CELL1_NAME] im.save() # Make sure the InstanceActionEvent is created in the cell. original_event_start = objects.InstanceActionEvent.event_start def fake_event_start(_cls, ctxt, *args, **kwargs): # Make sure the context is targeted to the cell that the instance # was created in. self.assertIsNotNone(ctxt.db_connection, 'Context is not targeted') original_event_start(ctxt, *args, **kwargs) self.stub_out( 'nova.objects.InstanceActionEvent.event_start', fake_event_start) self.conductor.live_migrate_instance( self.ctxt, instance, scheduler_hint, block_migration=None, disk_over_commit=None, request_spec=reqspec) mock_execute.assert_called_once_with() class ConductorTaskRPCAPITestCase(_BaseTaskTestCase, test_compute.BaseTestCase): """Conductor compute_task RPC namespace Tests.""" def setUp(self): super(ConductorTaskRPCAPITestCase, self).setUp() self.conductor_service = self.start_service( 'conductor', manager='nova.conductor.manager.ConductorManager') self.conductor = conductor_rpcapi.ComputeTaskAPI() service_manager = self.conductor_service.manager self.conductor_manager = service_manager.compute_task_mgr def test_live_migrate_instance(self): inst = fake_instance.fake_db_instance() inst_obj = objects.Instance._from_db_object( self.context, objects.Instance(), inst, []) version = '1.15' scheduler_hint = {'host': 'destination'} cctxt_mock = mock.MagicMock() @mock.patch.object(self.conductor.client, 'prepare', return_value=cctxt_mock) def _test(prepare_mock): self.conductor.live_migrate_instance( self.context, inst_obj, scheduler_hint, 'block_migration', 'disk_over_commit', request_spec=None) prepare_mock.assert_called_once_with(version=version) kw = {'instance': inst_obj, 'scheduler_hint': scheduler_hint, 'block_migration': 'block_migration', 'disk_over_commit': 'disk_over_commit', 'request_spec': None, } cctxt_mock.cast.assert_called_once_with( self.context, 'live_migrate_instance', **kw) _test() @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') def test_targets_cell_no_instance_mapping(self, mock_im): @conductor_manager.targets_cell def test(self, context, instance): return mock.sentinel.iransofaraway mock_im.side_effect = exc.InstanceMappingNotFound(uuid='something') ctxt = mock.MagicMock() inst = mock.MagicMock() self.assertEqual(mock.sentinel.iransofaraway, test(None, ctxt, inst)) mock_im.assert_called_once_with(ctxt, inst.uuid) @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid', side_effect=db_exc.CantStartEngineError) def test_targets_cell_no_api_db_conn_noop(self, mock_im): """Tests that targets_cell noops on CantStartEngineError if the API DB is not configured because we assume we're in the cell conductor. """ self.flags(connection=None, group='api_database') @conductor_manager.targets_cell def _test(self, context, instance): return mock.sentinel.iransofaraway ctxt = mock.MagicMock() inst = mock.MagicMock() self.assertEqual(mock.sentinel.iransofaraway, _test(None, ctxt, inst)) mock_im.assert_called_once_with(ctxt, inst.uuid) @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid', side_effect=db_exc.CantStartEngineError) def test_targets_cell_no_api_db_conn_reraise(self, mock_im): """Tests that targets_cell reraises CantStartEngineError if the API DB is configured. 
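        In that case we are the API-level (super)conductor, so not being
        able to reach the API database is a genuine error and must not be
        swallowed.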
""" self.flags(connection='mysql://dbhost?nova_api', group='api_database') @conductor_manager.targets_cell def _test(self, context, instance): return mock.sentinel.iransofaraway ctxt = mock.MagicMock() inst = mock.MagicMock() self.assertRaises(db_exc.CantStartEngineError, _test, None, ctxt, inst) mock_im.assert_called_once_with(ctxt, inst.uuid) def test_schedule_and_build_instances_with_tags(self): build_request = fake_build_request.fake_req_obj(self.context) request_spec = objects.RequestSpec( instance_uuid=build_request.instance_uuid) image = {'fake_data': 'should_pass_silently'} admin_password = 'fake_password' injected_file = 'fake' requested_network = None block_device_mapping = None tags = ['fake_tag'] version = '1.17' cctxt_mock = mock.MagicMock() @mock.patch.object(self.conductor.client, 'can_send_version', return_value=True) @mock.patch.object(self.conductor.client, 'prepare', return_value=cctxt_mock) def _test(prepare_mock, can_send_mock): self.conductor.schedule_and_build_instances( self.context, build_request, request_spec, image, admin_password, injected_file, requested_network, block_device_mapping, tags=tags) prepare_mock.assert_called_once_with(version=version) kw = {'build_requests': build_request, 'request_specs': request_spec, 'image': jsonutils.to_primitive(image), 'admin_password': admin_password, 'injected_files': injected_file, 'requested_networks': requested_network, 'block_device_mapping': block_device_mapping, 'tags': tags} cctxt_mock.cast.assert_called_once_with( self.context, 'schedule_and_build_instances', **kw) _test() def test_schedule_and_build_instances_with_tags_cannot_send(self): build_request = fake_build_request.fake_req_obj(self.context) request_spec = objects.RequestSpec( instance_uuid=build_request.instance_uuid) image = {'fake_data': 'should_pass_silently'} admin_password = 'fake_password' injected_file = 'fake' requested_network = None block_device_mapping = None tags = ['fake_tag'] cctxt_mock = mock.MagicMock() @mock.patch.object(self.conductor.client, 'can_send_version', return_value=False) @mock.patch.object(self.conductor.client, 'prepare', return_value=cctxt_mock) def _test(prepare_mock, can_send_mock): self.conductor.schedule_and_build_instances( self.context, build_request, request_spec, image, admin_password, injected_file, requested_network, block_device_mapping, tags=tags) prepare_mock.assert_called_once_with(version='1.16') kw = {'build_requests': build_request, 'request_specs': request_spec, 'image': jsonutils.to_primitive(image), 'admin_password': admin_password, 'injected_files': injected_file, 'requested_networks': requested_network, 'block_device_mapping': block_device_mapping} cctxt_mock.cast.assert_called_once_with( self.context, 'schedule_and_build_instances', **kw) _test() def test_build_instances_with_request_spec_ok(self): """Tests passing a request_spec to the build_instances RPC API method and having it passed through to the conductor task manager. 
""" image = {} cctxt_mock = mock.MagicMock() @mock.patch.object(self.conductor.client, 'can_send_version', side_effect=(False, True, True, True, True)) @mock.patch.object(self.conductor.client, 'prepare', return_value=cctxt_mock) def _test(prepare_mock, can_send_mock): self.conductor.build_instances( self.context, mock.sentinel.instances, image, mock.sentinel.filter_properties, mock.sentinel.admin_password, mock.sentinel.injected_files, mock.sentinel.requested_networks, mock.sentinel.security_groups, mock.sentinel.block_device_mapping, request_spec=mock.sentinel.request_spec) kw = {'instances': mock.sentinel.instances, 'image': image, 'filter_properties': mock.sentinel.filter_properties, 'admin_password': mock.sentinel.admin_password, 'injected_files': mock.sentinel.injected_files, 'requested_networks': mock.sentinel.requested_networks, 'security_groups': mock.sentinel.security_groups, 'request_spec': mock.sentinel.request_spec} cctxt_mock.cast.assert_called_once_with( self.context, 'build_instances', **kw) _test() def test_build_instances_with_request_spec_cannot_send(self): """Tests passing a request_spec to the build_instances RPC API method but not having it passed through to the conductor task manager because the version is too old to handle it. """ image = {} cctxt_mock = mock.MagicMock() @mock.patch.object(self.conductor.client, 'can_send_version', side_effect=(False, False, True, True, True)) @mock.patch.object(self.conductor.client, 'prepare', return_value=cctxt_mock) def _test(prepare_mock, can_send_mock): self.conductor.build_instances( self.context, mock.sentinel.instances, image, mock.sentinel.filter_properties, mock.sentinel.admin_password, mock.sentinel.injected_files, mock.sentinel.requested_networks, mock.sentinel.security_groups, mock.sentinel.block_device_mapping, request_spec=mock.sentinel.request_spec) kw = {'instances': mock.sentinel.instances, 'image': image, 'filter_properties': mock.sentinel.filter_properties, 'admin_password': mock.sentinel.admin_password, 'injected_files': mock.sentinel.injected_files, 'requested_networks': mock.sentinel.requested_networks, 'security_groups': mock.sentinel.security_groups} cctxt_mock.cast.assert_called_once_with( self.context, 'build_instances', **kw) _test() def test_cache_images(self): with mock.patch.object(self.conductor, 'client') as client: self.conductor.cache_images(self.context, mock.sentinel.aggregate, [mock.sentinel.image]) client.prepare.return_value.cast.assert_called_once_with( self.context, 'cache_images', aggregate=mock.sentinel.aggregate, image_ids=[mock.sentinel.image]) client.prepare.assert_called_once_with(version='1.21') with mock.patch.object(self.conductor.client, 'can_send_version') as v: v.return_value = False self.assertRaises(exc.NovaException, self.conductor.cache_images, self.context, mock.sentinel.aggregate, [mock.sentinel.image]) def test_migrate_server(self): self.flags(rpc_response_timeout=10, long_rpc_timeout=120) instance = objects.Instance() scheduler_hint = {} live = rebuild = False flavor = objects.Flavor() block_migration = disk_over_commit = None @mock.patch.object(self.conductor.client, 'can_send_version', return_value=True) @mock.patch.object(self.conductor.client, 'prepare') def _test(prepare_mock, can_send_mock): self.conductor.migrate_server( self.context, instance, scheduler_hint, live, rebuild, flavor, block_migration, disk_over_commit) kw = {'instance': instance, 'scheduler_hint': scheduler_hint, 'live': live, 'rebuild': rebuild, 'flavor': flavor, 'block_migration': block_migration, 
'disk_over_commit': disk_over_commit, 'reservations': None, 'clean_shutdown': True, 'request_spec': None, 'host_list': None} prepare_mock.assert_called_once_with( version=test.MatchType(str), # version call_monitor_timeout=10, timeout=120) prepare_mock.return_value.call.assert_called_once_with( self.context, 'migrate_server', **kw) _test() def test_migrate_server_cast(self): """Tests that if calling migrate_server() with do_cast=True an RPC cast is performed rather than a call. """ instance = objects.Instance() scheduler_hint = {} live = rebuild = False flavor = objects.Flavor() block_migration = disk_over_commit = None @mock.patch.object(self.conductor.client, 'can_send_version', return_value=True) @mock.patch.object(self.conductor.client, 'prepare') def _test(prepare_mock, can_send_mock): self.conductor.migrate_server( self.context, instance, scheduler_hint, live, rebuild, flavor, block_migration, disk_over_commit, do_cast=True) kw = {'instance': instance, 'scheduler_hint': scheduler_hint, 'live': live, 'rebuild': rebuild, 'flavor': flavor, 'block_migration': block_migration, 'disk_over_commit': disk_over_commit, 'reservations': None, 'clean_shutdown': True, 'request_spec': None, 'host_list': None} prepare_mock.assert_called_once_with( version=test.MatchType(str), # version call_monitor_timeout=CONF.rpc_response_timeout, timeout=CONF.long_rpc_timeout) prepare_mock.return_value.cast.assert_called_once_with( self.context, 'migrate_server', **kw) _test() def _test_confirm_snapshot_based_resize(self, do_cast): """Tests how confirm_snapshot_based_resize is called when do_cast is True or False. """ instance = objects.Instance() migration = objects.Migration() @mock.patch.object(self.conductor.client, 'can_send_version', return_value=True) @mock.patch.object(self.conductor.client, 'prepare') def _test(prepare_mock, can_send_mock): self.conductor.confirm_snapshot_based_resize( self.context, instance, migration, do_cast=do_cast) kw = {'instance': instance, 'migration': migration} if do_cast: prepare_mock.return_value.cast.assert_called_once_with( self.context, 'confirm_snapshot_based_resize', **kw) else: prepare_mock.return_value.call.assert_called_once_with( self.context, 'confirm_snapshot_based_resize', **kw) _test() def test_confirm_snapshot_based_resize_cast(self): self._test_confirm_snapshot_based_resize(do_cast=True) def test_confirm_snapshot_based_resize_call(self): self._test_confirm_snapshot_based_resize(do_cast=False) def test_confirm_snapshot_based_resize_old_service(self): """Tests confirm_snapshot_based_resize when the service is too old.""" with mock.patch.object( self.conductor.client, 'can_send_version', return_value=False): self.assertRaises(exc.ServiceTooOld, self.conductor.confirm_snapshot_based_resize, self.context, mock.sentinel.instance, mock.sentinel.migration) def test_revert_snapshot_based_resize_old_service(self): """Tests revert_snapshot_based_resize when the service is too old.""" with mock.patch.object( self.conductor.client, 'can_send_version', return_value=False) as can_send_version: self.assertRaises(exc.ServiceTooOld, self.conductor.revert_snapshot_based_resize, self.context, mock.sentinel.instance, mock.sentinel.migration) can_send_version.assert_called_once_with('1.23') class ConductorTaskAPITestCase(_BaseTaskTestCase, test_compute.BaseTestCase): """Compute task API Tests.""" def setUp(self): super(ConductorTaskAPITestCase, self).setUp() self.conductor_service = self.start_service( 'conductor', manager='nova.conductor.manager.ConductorManager') self.conductor = 
conductor_api.ComputeTaskAPI() service_manager = self.conductor_service.manager self.conductor_manager = service_manager.compute_task_mgr def test_live_migrate(self): inst = fake_instance.fake_db_instance() inst_obj = objects.Instance._from_db_object( self.context, objects.Instance(), inst, []) with mock.patch.object(self.conductor.conductor_compute_rpcapi, 'migrate_server') as mock_migrate_server: self.conductor.live_migrate_instance(self.context, inst_obj, 'destination', 'block_migration', 'disk_over_commit') mock_migrate_server.assert_called_once_with( self.context, inst_obj, {'host': 'destination'}, True, False, None, 'block_migration', 'disk_over_commit', None, request_spec=None) def test_cache_images(self): @mock.patch.object(self.conductor.conductor_compute_rpcapi, 'cache_images') @mock.patch.object(self.conductor.image_api, 'get') def _test(mock_image, mock_cache): self.conductor.cache_images(self.context, mock.sentinel.aggregate, [mock.sentinel.image1, mock.sentinel.image2]) mock_image.assert_has_calls([mock.call(self.context, mock.sentinel.image1), mock.call(self.context, mock.sentinel.image2)]) mock_cache.assert_called_once_with( self.context, mock.sentinel.aggregate, [mock.sentinel.image1, mock.sentinel.image2]) _test() def test_cache_images_fail(self): @mock.patch.object(self.conductor.conductor_compute_rpcapi, 'cache_images') @mock.patch.object(self.conductor.image_api, 'get') def _test(mock_image, mock_cache): mock_image.side_effect = test.TestingException() # We should expect to see non-NovaException errors # raised directly so the API can 500 for them. self.assertRaises(test.TestingException, self.conductor.cache_images, self.context, mock.sentinel.aggregate, [mock.sentinel.image1, mock.sentinel.image2]) mock_cache.assert_not_called() _test() def test_cache_images_missing(self): @mock.patch.object(self.conductor.conductor_compute_rpcapi, 'cache_images') @mock.patch.object(self.conductor.image_api, 'get') def _test(mock_image, mock_cache): mock_image.side_effect = exc.ImageNotFound('foo') self.assertRaises(exc.ImageNotFound, self.conductor.cache_images, self.context, mock.sentinel.aggregate, [mock.sentinel.image1, mock.sentinel.image2]) mock_cache.assert_not_called() _test() @mock.patch('nova.objects.HostMapping.get_by_host') @mock.patch('nova.context.target_cell') @mock.patch('nova.objects.Service.get_by_compute_host') def test_cache_images_failed_compute(self, mock_service, mock_target, mock_gbh): """Test the edge cases for cache_images(), specifically the error, skip, and down situations. 
""" fake_service = objects.Service(disabled=False, forced_down=False, last_seen_up=timeutils.utcnow()) fake_down_service = objects.Service(disabled=False, forced_down=True, last_seen_up=None) mock_service.side_effect = [fake_service, fake_service, fake_down_service] mock_target.__return_value.__enter__.return_value = self.context fake_cell = objects.CellMapping(uuid=uuids.cell, database_connection='', transport_url='') fake_mapping = objects.HostMapping(cell_mapping=fake_cell) mock_gbh.return_value = fake_mapping fake_agg = objects.Aggregate(name='agg', uuid=uuids.agg, id=1, hosts=['host1', 'host2', 'host3']) @mock.patch.object(self.conductor_manager.compute_rpcapi, 'cache_images') def _test(mock_cache): mock_cache.side_effect = [ {'image1': 'unsupported'}, {'image1': 'error'}, ] self.conductor_manager.cache_images(self.context, fake_agg, ['image1']) _test() logtext = self.stdlog.logger.output self.assertIn( '0 cached, 0 existing, 1 errors, 1 unsupported, 1 skipped', logtext) self.assertIn('host3\' because it is not up', logtext) self.assertIn('image1 failed 1 times', logtext) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5384681 nova-21.2.4/nova/tests/unit/conf/0000775000175000017500000000000000000000000016577 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/conf/__init__.py0000664000175000017500000000000000000000000020676 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/conf/test_devices.py0000664000175000017500000000226700000000000021641 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import nova.conf from nova import test CONF = nova.conf.CONF class DevicesConfTestCase(test.NoDBTestCase): def test_register_dynamic_opts(self): self.flags(enabled_vgpu_types=['nvidia-11', 'nvidia-12'], group='devices') self.assertNotIn('vgpu_nvidia-11', CONF) self.assertNotIn('vgpu_nvidia-12', CONF) nova.conf.devices.register_dynamic_opts(CONF) self.assertIn('vgpu_nvidia-11', CONF) self.assertIn('vgpu_nvidia-12', CONF) self.assertEqual([], getattr(CONF, 'vgpu_nvidia-11').device_addresses) self.assertEqual([], getattr(CONF, 'vgpu_nvidia-12').device_addresses) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/conf/test_neutron.py0000664000175000017500000000222300000000000021701 0ustar00zuulzuul00000000000000# Copyright 2018 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import nova.conf from nova import test CONF = nova.conf.CONF class NeutronConfTestCase(test.NoDBTestCase): def test_register_dynamic_opts(self): self.flags(physnets=['foo', 'bar', 'baz'], group='neutron') self.assertNotIn('neutron_physnet_foo', CONF) self.assertNotIn('neutron_physnet_bar', CONF) nova.conf.neutron.register_dynamic_opts(CONF) self.assertIn('neutron_physnet_foo', CONF) self.assertIn('neutron_physnet_bar', CONF) self.assertIn('neutron_tunnel', CONF) self.assertIn('numa_nodes', CONF.neutron_tunnel) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/conf_fixture.py0000664000175000017500000000541400000000000020723 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import fixture as config_fixture from oslo_policy import opts as policy_opts from nova.conf import devices from nova.conf import neutron from nova.conf import paths from nova import config class ConfFixture(config_fixture.Config): """Fixture to manage global conf settings.""" def setUp(self): super(ConfFixture, self).setUp() # default group self.conf.set_default('compute_driver', 'fake.SmallFakeDriver') self.conf.set_default('host', 'fake-mini') self.conf.set_default('periodic_enable', False) # api_database group self.conf.set_default('connection', "sqlite://", group='api_database') self.conf.set_default('sqlite_synchronous', False, group='api_database') # database group self.conf.set_default('connection', "sqlite://", group='database') self.conf.set_default('sqlite_synchronous', False, group='database') # key_manager group self.conf.set_default('backend', 'nova.keymgr.conf_key_mgr.ConfKeyManager', group='key_manager') # wsgi group self.conf.set_default('api_paste_config', paths.state_path_def('etc/nova/api-paste.ini'), group='wsgi') # The functional tests run wsgi API services using fixtures and # eventlet and we want one connection per request so things don't # leak between requests from separate services in concurrently running # tests. 
self.conf.set_default('keep_alive', False, group="wsgi") # many tests synchronizes on the reception of versioned notifications self.conf.set_default( 'notification_format', "both", group="notifications") config.parse_args([], default_config_files=[], configure_db=False, init_rpc=False) policy_opts.set_defaults(self.conf) neutron.register_dynamic_opts(self.conf) devices.register_dynamic_opts(self.conf) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5384681 nova-21.2.4/nova/tests/unit/console/0000775000175000017500000000000000000000000017314 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/console/__init__.py0000664000175000017500000000000000000000000021413 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5384681 nova-21.2.4/nova/tests/unit/console/rfb/0000775000175000017500000000000000000000000020065 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/console/rfb/__init__.py0000664000175000017500000000000000000000000022164 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/console/rfb/test_auth.py0000664000175000017500000000453700000000000022450 0ustar00zuulzuul00000000000000# Copyright (c) 2016 Red Hat, Inc # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.console.rfb import auth from nova.console.rfb import authnone from nova.console.rfb import auths from nova import exception from nova import test class RFBAuthSchemeListTestCase(test.NoDBTestCase): def setUp(self): super(RFBAuthSchemeListTestCase, self).setUp() self.flags(auth_schemes=["none", "vencrypt"], group="vnc") def test_load_ok(self): schemelist = auths.RFBAuthSchemeList() security_types = sorted(schemelist.schemes.keys()) self.assertEqual([auth.AuthType.NONE, auth.AuthType.VENCRYPT], security_types) def test_load_unknown(self): """Ensure invalid auth schemes are not supported. We're really testing oslo_policy functionality, but this case is esoteric enough to warrant this. 
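        (The ValueError comes from oslo.config rejecting an unknown choice
        for the [vnc]auth_schemes option.)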
""" self.assertRaises(ValueError, self.flags, auth_schemes=['none', 'wibble'], group='vnc') def test_find_scheme_ok(self): schemelist = auths.RFBAuthSchemeList() scheme = schemelist.find_scheme( [auth.AuthType.TIGHT, auth.AuthType.NONE]) self.assertIsInstance(scheme, authnone.RFBAuthSchemeNone) def test_find_scheme_fail(self): schemelist = auths.RFBAuthSchemeList() self.assertRaises(exception.RFBAuthNoAvailableScheme, schemelist.find_scheme, [auth.AuthType.TIGHT]) def test_find_scheme_priority(self): schemelist = auths.RFBAuthSchemeList() tight = mock.MagicMock(spec=auth.RFBAuthScheme) schemelist.schemes[auth.AuthType.TIGHT] = tight scheme = schemelist.find_scheme( [auth.AuthType.TIGHT, auth.AuthType.NONE]) self.assertEqual(tight, scheme) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/console/rfb/test_authnone.py0000664000175000017500000000215600000000000023323 0ustar00zuulzuul00000000000000# Copyright (c) 2014-2016 Red Hat, Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.console.rfb import auth from nova.console.rfb import authnone from nova import test class RFBAuthSchemeNoneTestCase(test.NoDBTestCase): def test_handshake(self): scheme = authnone.RFBAuthSchemeNone() sock = mock.MagicMock() ret = scheme.security_handshake(sock) self.assertEqual(sock, ret) def test_types(self): scheme = authnone.RFBAuthSchemeNone() self.assertEqual(auth.AuthType.NONE, scheme.security_type()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/console/rfb/test_authvencrypt.py0000664000175000017500000001512600000000000024237 0ustar00zuulzuul00000000000000# Copyright (c) 2016 Red Hat, Inc # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import ssl import struct import mock from nova.console.rfb import auth from nova.console.rfb import authvencrypt from nova import exception from nova import test class RFBAuthSchemeVeNCryptTestCase(test.NoDBTestCase): def setUp(self): super(RFBAuthSchemeVeNCryptTestCase, self).setUp() self.scheme = authvencrypt.RFBAuthSchemeVeNCrypt() self.compute_sock = mock.MagicMock() self.compute_sock.recv.side_effect = [] self.expected_calls = [] self.flags(vencrypt_ca_certs="/certs/ca.pem", group="vnc") def _expect_send(self, val): self.expected_calls.append(mock.call.sendall(val)) def _expect_recv(self, amt, ret_val): self.expected_calls.append(mock.call.recv(amt)) self.compute_sock.recv.side_effect = ( list(self.compute_sock.recv.side_effect) + [ret_val]) @mock.patch.object(ssl, "wrap_socket", return_value="wrapped") def test_security_handshake_with_x509(self, mock_socket): self.flags(vencrypt_client_key='/certs/keyfile', vencrypt_client_cert='/certs/cert.pem', group="vnc") self._expect_recv(1, "\x00") self._expect_recv(1, "\x02") self._expect_send(b"\x00\x02") self._expect_recv(1, "\x00") self._expect_recv(1, "\x02") subtypes_raw = [authvencrypt.AuthVeNCryptSubtype.X509NONE, authvencrypt.AuthVeNCryptSubtype.X509VNC] subtypes = struct.pack('!2I', *subtypes_raw) self._expect_recv(8, subtypes) self._expect_send(struct.pack('!I', subtypes_raw[0])) self._expect_recv(1, "\x01") self.assertEqual("wrapped", self.scheme.security_handshake( self.compute_sock)) mock_socket.assert_called_once_with( self.compute_sock, keyfile='/certs/keyfile', certfile='/certs/cert.pem', server_side=False, cert_reqs=ssl.CERT_REQUIRED, ca_certs='/certs/ca.pem') self.assertEqual(self.expected_calls, self.compute_sock.mock_calls) @mock.patch.object(ssl, "wrap_socket", return_value="wrapped") def test_security_handshake_without_x509(self, mock_socket): self._expect_recv(1, "\x00") self._expect_recv(1, "\x02") self._expect_send(b"\x00\x02") self._expect_recv(1, "\x00") self._expect_recv(1, "\x02") subtypes_raw = [authvencrypt.AuthVeNCryptSubtype.X509NONE, authvencrypt.AuthVeNCryptSubtype.X509VNC] subtypes = struct.pack('!2I', *subtypes_raw) self._expect_recv(8, subtypes) self._expect_send(struct.pack('!I', subtypes_raw[0])) self._expect_recv(1, "\x01") self.assertEqual("wrapped", self.scheme.security_handshake( self.compute_sock)) mock_socket.assert_called_once_with( self.compute_sock, keyfile=None, certfile=None, server_side=False, cert_reqs=ssl.CERT_REQUIRED, ca_certs='/certs/ca.pem' ) self.assertEqual(self.expected_calls, self.compute_sock.mock_calls) def _test_security_handshake_fails(self): self.assertRaises(exception.RFBAuthHandshakeFailed, self.scheme.security_handshake, self.compute_sock) self.assertEqual(self.expected_calls, self.compute_sock.mock_calls) def test_security_handshake_fails_on_low_version(self): self._expect_recv(1, "\x00") self._expect_recv(1, "\x01") self._test_security_handshake_fails() def test_security_handshake_fails_on_cant_use_version(self): self._expect_recv(1, "\x00") self._expect_recv(1, "\x02") self._expect_send(b"\x00\x02") self._expect_recv(1, "\x01") self._test_security_handshake_fails() def test_security_handshake_fails_on_missing_subauth(self): self._expect_recv(1, "\x00") self._expect_recv(1, "\x02") self._expect_send(b"\x00\x02") self._expect_recv(1, "\x00") self._expect_recv(1, "\x01") subtypes_raw = [authvencrypt.AuthVeNCryptSubtype.X509VNC] subtypes = struct.pack('!I', *subtypes_raw) self._expect_recv(4, subtypes) self._test_security_handshake_fails() def 
test_security_handshake_fails_on_auth_not_accepted(self): self._expect_recv(1, "\x00") self._expect_recv(1, "\x02") self._expect_send(b"\x00\x02") self._expect_recv(1, "\x00") self._expect_recv(1, "\x02") subtypes_raw = [authvencrypt.AuthVeNCryptSubtype.X509NONE, authvencrypt.AuthVeNCryptSubtype.X509VNC] subtypes = struct.pack('!2I', *subtypes_raw) self._expect_recv(8, subtypes) self._expect_send(struct.pack('!I', subtypes_raw[0])) self._expect_recv(1, "\x00") self._test_security_handshake_fails() @mock.patch.object(ssl, "wrap_socket") def test_security_handshake_fails_on_ssl_failure(self, mock_socket): self._expect_recv(1, "\x00") self._expect_recv(1, "\x02") self._expect_send(b"\x00\x02") self._expect_recv(1, "\x00") self._expect_recv(1, "\x02") subtypes_raw = [authvencrypt.AuthVeNCryptSubtype.X509NONE, authvencrypt.AuthVeNCryptSubtype.X509VNC] subtypes = struct.pack('!2I', *subtypes_raw) self._expect_recv(8, subtypes) self._expect_send(struct.pack('!I', subtypes_raw[0])) self._expect_recv(1, "\x01") mock_socket.side_effect = ssl.SSLError("cheese") self._test_security_handshake_fails() mock_socket.assert_called_once_with( self.compute_sock, keyfile=None, certfile=None, server_side=False, cert_reqs=ssl.CERT_REQUIRED, ca_certs='/certs/ca.pem' ) def test_types(self): scheme = authvencrypt.RFBAuthSchemeVeNCrypt() self.assertEqual(auth.AuthType.VENCRYPT, scheme.security_type()) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5384681 nova-21.2.4/nova/tests/unit/console/securityproxy/0000775000175000017500000000000000000000000022265 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/console/securityproxy/__init__.py0000664000175000017500000000000000000000000024364 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/console/securityproxy/test_rfb.py0000664000175000017500000002424000000000000024451 0ustar00zuulzuul00000000000000# Copyright (c) 2014-2016 Red Hat, Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
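# Flow exercised by these tests: the proxy relays the "RFB 003.008\n"
# version handshake between the compute node and the tenant (only
# protocol 3.8 is accepted on either side), reads the compute node's
# security type list, offers the tenant just the None (1) type, and once
# the tenant selects it negotiates a supported scheme with the compute
# node and runs that scheme's security handshake, handing back the
# resulting compute-side socket.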
"""Tests the Console Security Proxy Framework.""" import six import mock from nova.console.rfb import auth from nova.console.rfb import authnone from nova.console.securityproxy import rfb from nova import exception from nova import test class RFBSecurityProxyTestCase(test.NoDBTestCase): """Test case for the base RFBSecurityProxy.""" def setUp(self): super(RFBSecurityProxyTestCase, self).setUp() self.manager = mock.Mock() self.tenant_sock = mock.Mock() self.compute_sock = mock.Mock() self.tenant_sock.recv.side_effect = [] self.compute_sock.recv.side_effect = [] self.expected_manager_calls = [] self.expected_tenant_calls = [] self.expected_compute_calls = [] self.proxy = rfb.RFBSecurityProxy() def _assert_expected_calls(self): self.assertEqual(self.expected_manager_calls, self.manager.mock_calls) self.assertEqual(self.expected_tenant_calls, self.tenant_sock.mock_calls) self.assertEqual(self.expected_compute_calls, self.compute_sock.mock_calls) def _version_handshake(self): full_version_str = "RFB 003.008\n" self._expect_compute_recv(auth.VERSION_LENGTH, full_version_str) self._expect_compute_send(full_version_str) self._expect_tenant_send(full_version_str) self._expect_tenant_recv(auth.VERSION_LENGTH, full_version_str) def _to_binary(self, val): if not isinstance(val, six.binary_type): val = six.binary_type(val, 'utf-8') return val def _expect_tenant_send(self, val): val = self._to_binary(val) self.expected_tenant_calls.append(mock.call.sendall(val)) def _expect_compute_send(self, val): val = self._to_binary(val) self.expected_compute_calls.append(mock.call.sendall(val)) def _expect_tenant_recv(self, amt, ret_val): ret_val = self._to_binary(ret_val) self.expected_tenant_calls.append(mock.call.recv(amt)) self.tenant_sock.recv.side_effect = ( list(self.tenant_sock.recv.side_effect) + [ret_val]) def _expect_compute_recv(self, amt, ret_val): ret_val = self._to_binary(ret_val) self.expected_compute_calls.append(mock.call.recv(amt)) self.compute_sock.recv.side_effect = ( list(self.compute_sock.recv.side_effect) + [ret_val]) def test_fail(self): """Validate behavior for invalid initial message from tenant. The spec defines the sequence that should be used in the handshaking process. Anything outside of this is invalid. """ self._expect_tenant_send("\x00\x00\x00\x01\x00\x00\x00\x04blah") self.proxy._fail(self.tenant_sock, None, 'blah') self._assert_expected_calls() def test_fail_server_message(self): """Validate behavior for invalid initial message from server. The spec defines the sequence that should be used in the handshaking process. Anything outside of this is invalid. """ self._expect_tenant_send("\x00\x00\x00\x01\x00\x00\x00\x04blah") self._expect_compute_send("\x00") self.proxy._fail(self.tenant_sock, self.compute_sock, 'blah') self._assert_expected_calls() def test_parse_version(self): """Validate behavior of version parser.""" res = self.proxy._parse_version("RFB 012.034\n") self.assertEqual(12.34, res) def test_fails_on_compute_version(self): """Validate behavior for unsupported compute RFB version. We only support RFB protocol version 3.8. """ for full_version_str in ["RFB 003.007\n", "RFB 003.009\n"]: self._expect_compute_recv(auth.VERSION_LENGTH, full_version_str) ex = self.assertRaises(exception.SecurityProxyNegotiationFailed, self.proxy.connect, self.tenant_sock, self.compute_sock) self.assertIn('version 3.8, but server', six.text_type(ex)) self._assert_expected_calls() def test_fails_on_tenant_version(self): """Validate behavior for unsupported tenant RFB version. 
        We only support RFB protocol version 3.8.
        """
        full_version_str = "RFB 003.008\n"
        for full_version_str_invalid in ["RFB 003.007\n", "RFB 003.009\n"]:
            self._expect_compute_recv(auth.VERSION_LENGTH, full_version_str)
            self._expect_compute_send(full_version_str)
            self._expect_tenant_send(full_version_str)
            self._expect_tenant_recv(auth.VERSION_LENGTH,
                                     full_version_str_invalid)
            ex = self.assertRaises(exception.SecurityProxyNegotiationFailed,
                                   self.proxy.connect,
                                   self.tenant_sock,
                                   self.compute_sock)
            self.assertIn('version 3.8, but tenant', six.text_type(ex))
            self._assert_expected_calls()

    def test_fails_on_sec_type_cnt_zero(self):
        """Validate behavior if a server returns 0 supported security types.

        In this case the server follows up with a reason string, which
        should be decoded and reported in the exception.
        """
        self.proxy._fail = mock.Mock()

        self._version_handshake()

        self._expect_compute_recv(1, "\x00")
        self._expect_compute_recv(4, "\x00\x00\x00\x06")
        self._expect_compute_recv(6, "cheese")

        self._expect_tenant_send("\x00\x00\x00\x00\x06cheese")

        ex = self.assertRaises(exception.SecurityProxyNegotiationFailed,
                               self.proxy.connect,
                               self.tenant_sock,
                               self.compute_sock)
        self.assertIn('cheese', six.text_type(ex))

        self._assert_expected_calls()

    @mock.patch.object(authnone.RFBAuthSchemeNone, "security_handshake")
    def test_full_run(self, mock_handshake):
        """Validate correct behavior."""
        new_sock = mock.MagicMock()
        mock_handshake.return_value = new_sock

        self._version_handshake()

        self._expect_compute_recv(1, "\x02")
        self._expect_compute_recv(2, "\x01\x02")

        self._expect_tenant_send("\x01\x01")
        self._expect_tenant_recv(1, "\x01")

        self._expect_compute_send("\x01")

        self.assertEqual(new_sock, self.proxy.connect(
            self.tenant_sock, self.compute_sock))

        mock_handshake.assert_called_once_with(self.compute_sock)

        self._assert_expected_calls()

    def test_client_auth_invalid_fails(self):
        """Validate behavior if the tenant picks an unsupported auth type."""
        self.proxy._fail = self.manager.proxy._fail
        self.proxy.security_handshake = self.manager.proxy.security_handshake

        self._version_handshake()

        self._expect_compute_recv(1, "\x02")
        self._expect_compute_recv(2, "\x01\x02")

        self._expect_tenant_send("\x01\x01")
        self._expect_tenant_recv(1, "\x02")

        self.expected_manager_calls.append(
            mock.call.proxy._fail(self.tenant_sock, self.compute_sock,
                                  "Only the security type "
                                  "None (1) is supported"))

        self.assertRaises(exception.SecurityProxyNegotiationFailed,
                          self.proxy.connect,
                          self.tenant_sock,
                          self.compute_sock)

        self._assert_expected_calls()

    def test_exception_in_choose_security_type_fails(self):
        """Validate behavior if the server does not offer the None type."""
        self.proxy._fail = self.manager.proxy._fail
        self.proxy.security_handshake = self.manager.proxy.security_handshake

        self._version_handshake()

        self._expect_compute_recv(1, "\x02")
        self._expect_compute_recv(2, "\x02\x05")

        self._expect_tenant_send("\x01\x01")
        self._expect_tenant_recv(1, "\x01")

        self.expected_manager_calls.extend([
            mock.call.proxy._fail(
                self.tenant_sock, self.compute_sock,
                'Unable to negotiate security with server')])

        self.assertRaises(exception.SecurityProxyNegotiationFailed,
                          self.proxy.connect,
                          self.tenant_sock,
                          self.compute_sock)

        self._assert_expected_calls()

    @mock.patch.object(authnone.RFBAuthSchemeNone, "security_handshake")
    def test_exception_security_handshake_fails(self, mock_auth):
        """Validate behavior if the security handshake fails for any reason."""
        self.proxy._fail = self.manager.proxy._fail

        self._version_handshake()

        self._expect_compute_recv(1, "\x02")
self._expect_compute_recv(2, "\x01\x02") self._expect_tenant_send("\x01\x01") self._expect_tenant_recv(1, "\x01") self._expect_compute_send("\x01") ex = exception.RFBAuthHandshakeFailed(reason="crackers") mock_auth.side_effect = ex self.expected_manager_calls.extend([ mock.call.proxy._fail(self.tenant_sock, None, 'Unable to negotiate security with server')]) self.assertRaises(exception.SecurityProxyNegotiationFailed, self.proxy.connect, self.tenant_sock, self.compute_sock) mock_auth.assert_called_once_with(self.compute_sock) self._assert_expected_calls() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/console/test_serial.py0000664000175000017500000001047700000000000022215 0ustar00zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for Serial Console.""" import socket import mock import six.moves from nova.console import serial from nova import exception from nova import test class SerialTestCase(test.NoDBTestCase): def setUp(self): super(SerialTestCase, self).setUp() serial.ALLOCATED_PORTS = set() def test_get_port_range(self): start, stop = serial._get_port_range() self.assertEqual(10000, start) self.assertEqual(20000, stop) def test_get_port_range_customized(self): self.flags(port_range='30000:40000', group='serial_console') start, stop = serial._get_port_range() self.assertEqual(30000, start) self.assertEqual(40000, stop) def test_get_port_range_bad_range(self): self.flags(port_range='40000:30000', group='serial_console') start, stop = serial._get_port_range() self.assertEqual(10000, start) self.assertEqual(20000, stop) @mock.patch('socket.socket') def test_verify_port(self, fake_socket): s = mock.MagicMock() fake_socket.return_value = s serial._verify_port('127.0.0.1', 10) s.bind.assert_called_once_with(('127.0.0.1', 10)) @mock.patch('socket.socket') def test_verify_port_in_use(self, fake_socket): s = mock.MagicMock() s.bind.side_effect = socket.error() fake_socket.return_value = s self.assertRaises( exception.SocketPortInUseException, serial._verify_port, '127.0.0.1', 10) s.bind.assert_called_once_with(('127.0.0.1', 10)) @mock.patch('nova.console.serial._verify_port', lambda x, y: None) def test_acquire_port(self): start, stop = 15, 20 self.flags( port_range='%d:%d' % (start, stop), group='serial_console') for port in six.moves.range(start, stop): self.assertEqual(port, serial.acquire_port('127.0.0.1')) for port in six.moves.range(start, stop): self.assertEqual(port, serial.acquire_port('127.0.0.2')) self.assertEqual(10, len(serial.ALLOCATED_PORTS)) @mock.patch('nova.console.serial._verify_port') def test_acquire_port_in_use(self, fake_verify_port): def port_10000_already_used(host, port): if port == 10000 and host == '127.0.0.1': raise exception.SocketPortInUseException( port=port, host=host, error="already in use") fake_verify_port.side_effect = port_10000_already_used self.assertEqual(10001, serial.acquire_port('127.0.0.1')) self.assertEqual(10000, 
serial.acquire_port('127.0.0.2')) self.assertNotIn(('127.0.0.1', 10000), serial.ALLOCATED_PORTS) self.assertIn(('127.0.0.1', 10001), serial.ALLOCATED_PORTS) self.assertIn(('127.0.0.2', 10000), serial.ALLOCATED_PORTS) @mock.patch('nova.console.serial._verify_port') def test_acquire_port_not_ble_to_bind_at_any_port(self, fake_verify_port): start, stop = 15, 20 self.flags( port_range='%d:%d' % (start, stop), group='serial_console') fake_verify_port.side_effect = ( exception.SocketPortRangeExhaustedException(host='127.0.0.1')) self.assertRaises( exception.SocketPortRangeExhaustedException, serial.acquire_port, '127.0.0.1') def test_release_port(self): serial.ALLOCATED_PORTS.add(('127.0.0.1', 100)) serial.ALLOCATED_PORTS.add(('127.0.0.2', 100)) self.assertEqual(2, len(serial.ALLOCATED_PORTS)) serial.release_port('127.0.0.1', 100) self.assertEqual(1, len(serial.ALLOCATED_PORTS)) serial.release_port('127.0.0.2', 100) self.assertEqual(0, len(serial.ALLOCATED_PORTS)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/console/test_type.py0000664000175000017500000000400000000000000021700 0ustar00zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.console import type as ctype from nova import test class TypeTestCase(test.NoDBTestCase): def test_console(self): c = ctype.Console(host='127.0.0.1', port=8945) self.assertTrue(hasattr(c, 'host')) self.assertTrue(hasattr(c, 'port')) self.assertTrue(hasattr(c, 'internal_access_path')) self.assertEqual('127.0.0.1', c.host) self.assertEqual(8945, c.port) self.assertIsNone(c.internal_access_path) self.assertEqual({ 'host': '127.0.0.1', 'port': 8945, 'internal_access_path': None, 'token': 'a-token', 'access_url': 'an-url'}, c.get_connection_info('a-token', 'an-url')) def test_console_vnc(self): c = ctype.ConsoleVNC(host='127.0.0.1', port=8945) self.assertIsInstance(c, ctype.Console) def test_console_rdp(self): c = ctype.ConsoleRDP(host='127.0.0.1', port=8945) self.assertIsInstance(c, ctype.Console) def test_console_spice(self): c = ctype.ConsoleSpice(host='127.0.0.1', port=8945, tlsPort=6547) self.assertIsInstance(c, ctype.Console) self.assertEqual(6547, c.tlsPort) self.assertEqual( 6547, c.get_connection_info('a-token', 'an-url')['tlsPort']) def test_console_serial(self): c = ctype.ConsoleSerial(host='127.0.0.1', port=8945) self.assertIsInstance(c, ctype.Console) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/console/test_websocketproxy.py0000664000175000017500000007443600000000000024033 0ustar00zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for nova websocketproxy.""" import copy import io import socket import mock from oslo_utils.fixture import uuidsentinel as uuids import nova.conf from nova.console.securityproxy import base from nova.console import websocketproxy from nova import context as nova_context from nova import exception from nova import objects from nova import test from nova.tests.unit import fake_console_auth_token as fake_ca from nova import utils CONF = nova.conf.CONF class NovaProxyRequestHandlerDBTestCase(test.TestCase): def setUp(self): super(NovaProxyRequestHandlerDBTestCase, self).setUp() self.flags(console_allowed_origins=['allowed-origin-example-1.net', 'allowed-origin-example-2.net']) with mock.patch('websockify.ProxyRequestHandler'): self.wh = websocketproxy.NovaProxyRequestHandler() self.wh.server = websocketproxy.NovaWebSocketProxy() self.wh.socket = mock.MagicMock() self.wh.msg = mock.MagicMock() self.wh.do_proxy = mock.MagicMock() self.wh.headers = mock.MagicMock() def _fake_console_db(self, **updates): console_db = copy.deepcopy(fake_ca.fake_token_dict) console_db['token_hash'] = utils.get_sha256_str('123-456-789') if updates: console_db.update(updates) return console_db fake_header = { 'cookie': 'token="123-456-789"', 'Origin': 'https://example.net:6080', 'Host': 'example.net:6080', } @mock.patch('nova.objects.ConsoleAuthToken.validate') @mock.patch('nova.objects.Instance.get_by_uuid') @mock.patch('nova.compute.rpcapi.ComputeAPI.validate_console_port') def test_new_websocket_client_db( self, mock_validate_port, mock_inst_get, mock_validate, internal_access_path=None, instance_not_found=False): db_obj = self._fake_console_db( host='node1', port=10000, console_type='novnc', access_url_base='https://example.net:6080', internal_access_path=internal_access_path, instance_uuid=uuids.instance, # This is set by ConsoleAuthToken.validate token='123-456-789' ) ctxt = nova_context.get_context() obj = nova.objects.ConsoleAuthToken._from_db_object( ctxt, nova.objects.ConsoleAuthToken(), db_obj) mock_validate.return_value = obj if instance_not_found: mock_inst_get.side_effect = exception.InstanceNotFound( instance_id=uuids.instance) if internal_access_path is None: self.wh.socket.return_value = '' else: tsock = mock.MagicMock() tsock.recv.return_value = "HTTP/1.1 200 OK\r\n\r\n" self.wh.socket.return_value = tsock self.wh.path = "http://127.0.0.1/?token=123-456-789" self.wh.headers = self.fake_header if instance_not_found: self.assertRaises(exception.InvalidToken, self.wh.new_websocket_client) else: with mock.patch('nova.context.get_admin_context', return_value=ctxt): self.wh.new_websocket_client() mock_validate.assert_called_once_with(ctxt, '123-456-789') mock_validate_port.assert_called_once_with( ctxt, mock_inst_get.return_value, str(db_obj['port']), db_obj['console_type']) self.wh.socket.assert_called_with('node1', 10000, connect=True) if internal_access_path is None: self.wh.do_proxy.assert_called_with('') else: self.wh.do_proxy.assert_called_with(tsock) def test_new_websocket_client_db_internal_access_path(self): self.test_new_websocket_client_db(internal_access_path='vmid') def 
test_new_websocket_client_db_instance_not_found(self): self.test_new_websocket_client_db(instance_not_found=True) class NovaProxyRequestHandlerTestCase(test.NoDBTestCase): def setUp(self): super(NovaProxyRequestHandlerTestCase, self).setUp() self.flags(allowed_origins=['allowed-origin-example-1.net', 'allowed-origin-example-2.net'], group='console') self.server = websocketproxy.NovaWebSocketProxy() with mock.patch('websockify.ProxyRequestHandler'): self.wh = websocketproxy.NovaProxyRequestHandler() self.wh.server = self.server self.wh.socket = mock.MagicMock() self.wh.msg = mock.MagicMock() self.wh.do_proxy = mock.MagicMock() self.wh.headers = mock.MagicMock() fake_header = { 'cookie': 'token="123-456-789"', 'Origin': 'https://example.net:6080', 'Host': 'example.net:6080', } fake_header_ipv6 = { 'cookie': 'token="123-456-789"', 'Origin': 'https://[2001:db8::1]:6080', 'Host': '[2001:db8::1]:6080', } fake_header_bad_token = { 'cookie': 'token="XXX"', 'Origin': 'https://example.net:6080', 'Host': 'example.net:6080', } fake_header_bad_origin = { 'cookie': 'token="123-456-789"', 'Origin': 'https://bad-origin-example.net:6080', 'Host': 'example.net:6080', } fake_header_allowed_origin = { 'cookie': 'token="123-456-789"', 'Origin': 'https://allowed-origin-example-2.net:6080', 'Host': 'example.net:6080', } fake_header_blank_origin = { 'cookie': 'token="123-456-789"', 'Origin': '', 'Host': 'example.net:6080', } fake_header_no_origin = { 'cookie': 'token="123-456-789"', 'Host': 'example.net:6080', } fake_header_http = { 'cookie': 'token="123-456-789"', 'Origin': 'http://example.net:6080', 'Host': 'example.net:6080', } fake_header_malformed_cookie = { 'cookie': '?=!; token="123-456-789"', 'Origin': 'https://example.net:6080', 'Host': 'example.net:6080', } @mock.patch('nova.console.websocketproxy.NovaProxyRequestHandler.' '_check_console_port') @mock.patch('nova.objects.ConsoleAuthToken.validate') def test_new_websocket_client(self, validate, check_port): params = { 'id': 1, 'token': '123-456-789', 'instance_uuid': uuids.instance, 'host': 'node1', 'port': '10000', 'console_type': 'novnc', 'access_url_base': 'https://example.net:6080' } validate.return_value = objects.ConsoleAuthToken(**params) self.wh.socket.return_value = '' self.wh.path = "http://127.0.0.1/?token=123-456-789" self.wh.headers = self.fake_header self.wh.new_websocket_client() validate.assert_called_with(mock.ANY, "123-456-789") self.wh.socket.assert_called_with('node1', 10000, connect=True) self.wh.do_proxy.assert_called_with('') # ensure that token is masked when logged connection_info = self.wh.msg.mock_calls[0][1][1] self.assertEqual('***', connection_info.token) @mock.patch('nova.console.websocketproxy.NovaProxyRequestHandler.' 
                '_check_console_port')
    @mock.patch('nova.objects.ConsoleAuthToken.validate')
    def test_new_websocket_client_ipv6_url(self, validate, check_port):
        params = {
            'id': 1,
            'token': '123-456-789',
            'instance_uuid': uuids.instance,
            'host': 'node1',
            'port': '10000',
            'console_type': 'novnc',
            'access_url_base': 'https://[2001:db8::1]:6080'
        }
        validate.return_value = objects.ConsoleAuthToken(**params)

        self.wh.socket.return_value = ''
        self.wh.path = "http://[2001:db8::1]/?token=123-456-789"
        self.wh.headers = self.fake_header_ipv6

        self.wh.new_websocket_client()

        validate.assert_called_with(mock.ANY, "123-456-789")
        self.wh.socket.assert_called_with('node1', 10000, connect=True)
        self.wh.do_proxy.assert_called_with('')

    @mock.patch('nova.objects.ConsoleAuthToken.validate')
    def test_new_websocket_client_token_invalid(self, validate):
        validate.side_effect = exception.InvalidToken(token='XXX')

        self.wh.path = "http://127.0.0.1/?token=XXX"
        self.wh.headers = self.fake_header_bad_token

        self.assertRaises(exception.InvalidToken,
                          self.wh.new_websocket_client)
        validate.assert_called_with(mock.ANY, "XXX")

    @mock.patch('nova.console.websocketproxy.NovaProxyRequestHandler.'
                '_check_console_port')
    @mock.patch('nova.objects.ConsoleAuthToken.validate')
    def test_new_websocket_client_internal_access_path(self, validate,
                                                       check_port):
        params = {
            'id': 1,
            'token': '123-456-789',
            'instance_uuid': uuids.instance,
            'host': 'node1',
            'port': '10000',
            'internal_access_path': 'vmid',
            'console_type': 'novnc',
            'access_url_base': 'https://example.net:6080'
        }
        validate.return_value = objects.ConsoleAuthToken(**params)

        tsock = mock.MagicMock()
        tsock.recv.return_value = "HTTP/1.1 200 OK\r\n\r\n"

        self.wh.socket.return_value = tsock
        self.wh.path = "http://127.0.0.1/?token=123-456-789"
        self.wh.headers = self.fake_header

        self.wh.new_websocket_client()

        validate.assert_called_with(mock.ANY, "123-456-789")
        self.wh.socket.assert_called_with('node1', 10000, connect=True)
        tsock.send.assert_called_with(test.MatchType(bytes))
        self.wh.do_proxy.assert_called_with(tsock)

    @mock.patch('nova.console.websocketproxy.NovaProxyRequestHandler.'
                '_check_console_port')
    @mock.patch('nova.objects.ConsoleAuthToken.validate')
    def test_new_websocket_client_internal_access_path_err(self, validate,
                                                           check_port):
        params = {
            'id': 1,
            'token': '123-456-789',
            'instance_uuid': uuids.instance,
            'host': 'node1',
            'port': '10000',
            'internal_access_path': 'xxx',
            'console_type': 'novnc',
            'access_url_base': 'https://example.net:6080'
        }
        validate.return_value = objects.ConsoleAuthToken(**params)

        tsock = mock.MagicMock()
        tsock.recv.return_value = "HTTP/1.1 500 Internal Server Error\r\n\r\n"

        self.wh.socket.return_value = tsock
        self.wh.path = "http://127.0.0.1/?token=123-456-789"
        self.wh.headers = self.fake_header

        self.assertRaises(exception.InvalidConnectionInfo,
                          self.wh.new_websocket_client)
        validate.assert_called_with(mock.ANY, "123-456-789")

    @mock.patch('nova.console.websocketproxy.NovaProxyRequestHandler.'
'_check_console_port') @mock.patch('nova.objects.ConsoleAuthToken.validate') def test_new_websocket_client_internal_access_path_rfb(self, validate, check_port): params = { 'id': 1, 'token': '123-456-789', 'instance_uuid': uuids.instance, 'host': 'node1', 'port': '10000', 'internal_access_path': 'vmid', 'console_type': 'novnc', 'access_url_base': 'https://example.net:6080' } validate.return_value = objects.ConsoleAuthToken(**params) tsock = mock.MagicMock() HTTP_RESP = "HTTP/1.1 200 OK\r\n\r\n" RFB_MSG = "RFB 003.003\n" # RFB negotiation message may arrive earlier. tsock.recv.side_effect = [HTTP_RESP + RFB_MSG, HTTP_RESP] self.wh.socket.return_value = tsock self.wh.path = "http://127.0.0.1/?token=123-456-789" self.wh.headers = self.fake_header self.wh.new_websocket_client() validate.assert_called_with(mock.ANY, "123-456-789") self.wh.socket.assert_called_with('node1', 10000, connect=True) tsock.recv.assert_has_calls([mock.call(4096, socket.MSG_PEEK), mock.call(len(HTTP_RESP))]) self.wh.do_proxy.assert_called_with(tsock) @mock.patch.object(websocketproxy, 'sys') @mock.patch('nova.console.websocketproxy.NovaProxyRequestHandler.' '_check_console_port') @mock.patch('nova.objects.ConsoleAuthToken.validate') def test_new_websocket_client_py273_good_scheme( self, validate, check_port, mock_sys): mock_sys.version_info.return_value = (2, 7, 3) params = { 'id': 1, 'token': '123-456-789', 'instance_uuid': uuids.instance, 'host': 'node1', 'port': '10000', 'console_type': 'novnc', 'access_url_base': 'https://example.net:6080' } validate.return_value = objects.ConsoleAuthToken(**params) self.wh.socket.return_value = '' self.wh.path = "http://127.0.0.1/?token=123-456-789" self.wh.headers = self.fake_header self.wh.new_websocket_client() validate.assert_called_with(mock.ANY, "123-456-789") self.wh.socket.assert_called_with('node1', 10000, connect=True) self.wh.do_proxy.assert_called_with('') @mock.patch('socket.getfqdn') def test_address_string_doesnt_do_reverse_dns_lookup(self, getfqdn): request_mock = mock.MagicMock() request_mock.makefile().readline.side_effect = [ b'GET /vnc.html?token=123-456-789 HTTP/1.1\r\n', b'' ] server_mock = mock.MagicMock() client_address = ('8.8.8.8', 54321) handler = websocketproxy.NovaProxyRequestHandler( request_mock, client_address, server_mock) handler.log_message('log message using client address context info') self.assertFalse(getfqdn.called) # no reverse dns look up self.assertEqual(handler.address_string(), '8.8.8.8') # plain address @mock.patch('nova.console.websocketproxy.NovaProxyRequestHandler.' '_check_console_port') @mock.patch('nova.objects.ConsoleAuthToken.validate') def test_new_websocket_client_novnc_bad_origin_header(self, validate, check_port): params = { 'id': 1, 'token': '123-456-789', 'instance_uuid': uuids.instance, 'host': 'node1', 'port': '10000', 'console_type': 'novnc' } validate.return_value = objects.ConsoleAuthToken(**params) self.wh.path = "http://127.0.0.1/" self.wh.headers = self.fake_header_bad_origin self.assertRaises(exception.ValidationError, self.wh.new_websocket_client) @mock.patch('nova.console.websocketproxy.NovaProxyRequestHandler.' 
'_check_console_port') @mock.patch('nova.objects.ConsoleAuthToken.validate') def test_new_websocket_client_novnc_allowed_origin_header(self, validate, check_port): params = { 'id': 1, 'token': '123-456-789', 'instance_uuid': uuids.instance, 'host': 'node1', 'port': '10000', 'console_type': 'novnc', 'access_url_base': 'https://example.net:6080' } validate.return_value = objects.ConsoleAuthToken(**params) self.wh.socket.return_value = '' self.wh.path = "http://127.0.0.1/" self.wh.headers = self.fake_header_allowed_origin self.wh.new_websocket_client() validate.assert_called_with(mock.ANY, "123-456-789") self.wh.socket.assert_called_with('node1', 10000, connect=True) self.wh.do_proxy.assert_called_with('') @mock.patch('nova.console.websocketproxy.NovaProxyRequestHandler.' '_check_console_port') @mock.patch('nova.objects.ConsoleAuthToken.validate') def test_new_websocket_client_novnc_blank_origin_header(self, validate, check_port): params = { 'id': 1, 'token': '123-456-789', 'instance_uuid': uuids.instance, 'host': 'node1', 'port': '10000', 'console_type': 'novnc' } validate.return_value = objects.ConsoleAuthToken(**params) self.wh.path = "http://127.0.0.1/" self.wh.headers = self.fake_header_blank_origin self.assertRaises(exception.ValidationError, self.wh.new_websocket_client) @mock.patch('nova.console.websocketproxy.NovaProxyRequestHandler.' '_check_console_port') @mock.patch('nova.objects.ConsoleAuthToken.validate') def test_new_websocket_client_novnc_no_origin_header(self, validate, check_port): params = { 'id': 1, 'token': '123-456-789', 'instance_uuid': uuids.instance, 'host': 'node1', 'port': '10000', 'console_type': 'novnc' } validate.return_value = objects.ConsoleAuthToken(**params) self.wh.socket.return_value = '' self.wh.path = "http://127.0.0.1/" self.wh.headers = self.fake_header_no_origin self.wh.new_websocket_client() validate.assert_called_with(mock.ANY, "123-456-789") self.wh.socket.assert_called_with('node1', 10000, connect=True) self.wh.do_proxy.assert_called_with('') @mock.patch('nova.console.websocketproxy.NovaProxyRequestHandler.' '_check_console_port') @mock.patch('nova.objects.ConsoleAuthToken.validate') def test_new_websocket_client_novnc_https_origin_proto_http( self, validate, check_port): params = { 'id': 1, 'token': '123-456-789', 'instance_uuid': uuids.instance, 'host': 'node1', 'port': '10000', 'console_type': 'novnc', 'access_url_base': 'http://example.net:6080' } validate.return_value = objects.ConsoleAuthToken(**params) self.wh.path = "https://127.0.0.1/" self.wh.headers = self.fake_header self.assertRaises(exception.ValidationError, self.wh.new_websocket_client) @mock.patch('nova.console.websocketproxy.NovaProxyRequestHandler.' '_check_console_port') @mock.patch('nova.objects.ConsoleAuthToken.validate') def test_new_websocket_client_novnc_https_origin_proto_ws(self, validate, check_port): params = { 'id': 1, 'token': '123-456-789', 'instance_uuid': uuids.instance, 'host': 'node1', 'port': '10000', 'console_type': 'serial', 'access_url_base': 'ws://example.net:6080' } validate.return_value = objects.ConsoleAuthToken(**params) self.wh.path = "https://127.0.0.1/" self.wh.headers = self.fake_header self.assertRaises(exception.ValidationError, self.wh.new_websocket_client) @mock.patch('nova.console.websocketproxy.NovaProxyRequestHandler.' 
'_check_console_port') @mock.patch('nova.objects.ConsoleAuthToken.validate') def test_new_websocket_client_http_forwarded_proto_https(self, validate, check_port): params = { 'id': 1, 'token': '123-456-789', 'instance_uuid': uuids.instance, 'host': 'node1', 'port': '10000', 'console_type': 'serial', 'access_url_base': 'wss://example.net:6080' } validate.return_value = objects.ConsoleAuthToken(**params) header = { 'cookie': 'token="123-456-789"', 'Origin': 'http://example.net:6080', 'Host': 'example.net:6080', 'X-Forwarded-Proto': 'https' } self.wh.socket.return_value = '' self.wh.path = "https://127.0.0.1/" self.wh.headers = header self.wh.new_websocket_client() validate.assert_called_with(mock.ANY, "123-456-789") self.wh.socket.assert_called_with('node1', 10000, connect=True) self.wh.do_proxy.assert_called_with('') @mock.patch('nova.console.websocketproxy.NovaProxyRequestHandler.' '_check_console_port') @mock.patch('nova.objects.ConsoleAuthToken.validate') def test_new_websocket_client_novnc_bad_console_type(self, validate, check_port): params = { 'id': 1, 'token': '123-456-789', 'instance_uuid': uuids.instance, 'host': 'node1', 'port': '10000', 'console_type': 'bad-console-type' } validate.return_value = objects.ConsoleAuthToken(**params) self.wh.path = "http://127.0.0.1/" self.wh.headers = self.fake_header self.assertRaises(exception.ValidationError, self.wh.new_websocket_client) @mock.patch('nova.console.websocketproxy.NovaProxyRequestHandler.' '_check_console_port') @mock.patch('nova.objects.ConsoleAuthToken.validate') def test_malformed_cookie(self, validate, check_port): params = { 'id': 1, 'token': '123-456-789', 'instance_uuid': uuids.instance, 'host': 'node1', 'port': '10000', 'console_type': 'novnc', 'access_url_base': 'https://example.net:6080' } validate.return_value = objects.ConsoleAuthToken(**params) self.wh.socket.return_value = '' self.wh.path = "http://127.0.0.1/" self.wh.headers = self.fake_header_malformed_cookie self.wh.new_websocket_client() validate.assert_called_with(mock.ANY, "123-456-789") self.wh.socket.assert_called_with('node1', 10000, connect=True) self.wh.do_proxy.assert_called_with('') def test_tcp_rst_no_compute_rpcapi(self): # Tests that we don't create a ComputeAPI object if we receive a # TCP RST message. Simulate by raising the socket.err upon recv. err = socket.error('[Errno 104] Connection reset by peer') self.wh.socket.recv.side_effect = err conn = mock.MagicMock() address = mock.MagicMock() self.wh.server.top_new_client(conn, address) self.assertIsNone(self.wh._compute_rpcapi) def test_reject_open_redirect(self): # This will test the behavior when an attempt is made to cause an open # redirect. It should be rejected. mock_req = mock.MagicMock() mock_req.makefile().readline.side_effect = [ b'GET //example.com/%2F.. HTTP/1.1\r\n', b'' ] client_addr = ('8.8.8.8', 54321) mock_server = mock.MagicMock() # This specifies that the server will be able to handle requests other # than only websockets. mock_server.only_upgrade = False # Constructing a handler will process the mock_req request passed in. handler = websocketproxy.NovaProxyRequestHandler( mock_req, client_addr, mock_server) # Collect the response data to verify at the end. The # SimpleHTTPRequestHandler writes the response data to a 'wfile' # attribute. output = io.BytesIO() handler.wfile = output # Process the mock_req again to do the capture. handler.do_GET() output.seek(0) result = output.readlines() # Verify no redirect happens and instead a 400 Bad Request is returned. 
self.assertIn('400 URI must not start with //', result[0].decode()) def test_reject_open_redirect_3_slashes(self): # This will test the behavior when an attempt is made to cause an open # redirect. It should be rejected. mock_req = mock.MagicMock() mock_req.makefile().readline.side_effect = [ b'GET ///example.com/%2F.. HTTP/1.1\r\n', b'' ] # Collect the response data to verify at the end. The # SimpleHTTPRequestHandler writes the response data by calling the # request socket sendall() method. self.data = b'' def fake_sendall(data): self.data += data mock_req.sendall.side_effect = fake_sendall client_addr = ('8.8.8.8', 54321) mock_server = mock.MagicMock() # This specifies that the server will be able to handle requests other # than only websockets. mock_server.only_upgrade = False # Constructing a handler will process the mock_req request passed in. websocketproxy.NovaProxyRequestHandler( mock_req, client_addr, mock_server) # Verify no redirect happens and instead a 400 Bad Request is returned. self.data = self.data.decode() self.assertIn('Error code: 400', self.data) self.assertIn('Message: URI must not start with //', self.data) @mock.patch('websockify.websocketproxy.select_ssl_version') def test_ssl_min_version_is_not_set(self, mock_select_ssl): websocketproxy.NovaWebSocketProxy() self.assertFalse(mock_select_ssl.called) @mock.patch('websockify.websocketproxy.select_ssl_version') def test_ssl_min_version_not_set_by_default(self, mock_select_ssl): websocketproxy.NovaWebSocketProxy(ssl_minimum_version='default') self.assertFalse(mock_select_ssl.called) @mock.patch('websockify.websocketproxy.select_ssl_version') def test_non_default_ssl_min_version_is_set(self, mock_select_ssl): minver = 'tlsv1_3' websocketproxy.NovaWebSocketProxy(ssl_minimum_version=minver) mock_select_ssl.assert_called_once_with(minver) class NovaWebsocketSecurityProxyTestCase(test.NoDBTestCase): def setUp(self): super(NovaWebsocketSecurityProxyTestCase, self).setUp() self.flags(allowed_origins=['allowed-origin-example-1.net', 'allowed-origin-example-2.net'], group='console') self.server = websocketproxy.NovaWebSocketProxy( security_proxy=mock.MagicMock( spec=base.SecurityProxy) ) with mock.patch('websockify.ProxyRequestHandler'): self.wh = websocketproxy.NovaProxyRequestHandler() self.wh.server = self.server self.wh.path = "http://127.0.0.1/?token=123-456-789" self.wh.socket = mock.MagicMock() self.wh.msg = mock.MagicMock() self.wh.do_proxy = mock.MagicMock() self.wh.headers = mock.MagicMock() def get_header(header): if header == 'cookie': return 'token="123-456-789"' elif header == 'Origin': return 'https://example.net:6080' elif header == 'Host': return 'example.net:6080' else: return self.wh.headers.get = get_header @mock.patch('nova.objects.ConsoleAuthToken.validate') @mock.patch('nova.objects.Instance.get_by_uuid') @mock.patch('nova.compute.rpcapi.ComputeAPI.validate_console_port') @mock.patch('nova.console.websocketproxy.TenantSock.close') @mock.patch('nova.console.websocketproxy.TenantSock.finish_up') def test_proxy_connect_ok(self, mock_finish, mock_close, mock_port_validate, mock_get, mock_token_validate): mock_token_validate.return_value = nova.objects.ConsoleAuthToken( instance_uuid=uuids.instance, host='node1', port='10000', console_type='novnc', access_url_base='https://example.net:6080') # The token and id attributes are set by the validate() method. 
mock_token_validate.return_value.token = '123-456-789' mock_token_validate.return_value.id = 1 sock = mock.MagicMock( spec=websocketproxy.TenantSock) self.server.security_proxy.connect.return_value = sock self.wh.new_websocket_client() self.wh.do_proxy.assert_called_with(sock) mock_finish.assert_called_with() self.assertEqual(len(mock_close.calls), 0) @mock.patch('nova.objects.ConsoleAuthToken.validate') @mock.patch('nova.objects.Instance.get_by_uuid') @mock.patch('nova.compute.rpcapi.ComputeAPI.validate_console_port') @mock.patch('nova.console.websocketproxy.TenantSock.close') @mock.patch('nova.console.websocketproxy.TenantSock.finish_up') def test_proxy_connect_err(self, mock_finish, mock_close, mock_port_validate, mock_get, mock_token_validate): mock_token_validate.return_value = nova.objects.ConsoleAuthToken( instance_uuid=uuids.instance, host='node1', port='10000', console_type='novnc', access_url_base='https://example.net:6080') # The token attribute is set by the validate() method. mock_token_validate.return_value.token = '123-456-789' mock_token_validate.return_value.id = 1 ex = exception.SecurityProxyNegotiationFailed("Wibble") self.server.security_proxy.connect.side_effect = ex self.assertRaises(exception.SecurityProxyNegotiationFailed, self.wh.new_websocket_client) self.assertEqual(len(self.wh.do_proxy.calls), 0) mock_close.assert_called_with() self.assertEqual(len(mock_finish.calls), 0) ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.542468 nova-21.2.4/nova/tests/unit/db/0000775000175000017500000000000000000000000016237 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/db/__init__.py0000664000175000017500000000126000000000000020347 0ustar00zuulzuul00000000000000# Copyright (c) 2010 Citrix Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ :mod:`db` -- Stubs for DB API ============================= """ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/db/fakes.py0000664000175000017500000001133300000000000017703 0ustar00zuulzuul00000000000000# Copyright (c) 2011 X.commerce, a business unit of eBay Inc. # Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
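# ---------------------------------------------------------------------------
# Editor's note: an illustrative sketch, not part of the original archive.
# The test_reject_open_redirect* cases above expect any GET whose request
# path starts with two or more slashes to be answered with
# "400 URI must not start with //" instead of a redirect.  A minimal,
# stand-alone version of that guard (function name is hypothetical) is:
def reject_scheme_relative_path(path):
    """Return an HTTP error string for paths that could cause an open redirect.

    A path such as "//example.com/%2F.." would otherwise be turned into a
    scheme-relative redirect, sending the browser to an attacker-chosen host.
    """
    if path.startswith('//'):
        return '400 URI must not start with //'
    return None


# e.g. reject_scheme_relative_path('//example.com/%2F..') returns the error
# string, while reject_scheme_relative_path('/vnc.html?token=...') returns
# None and the request is served normally.
# ---------------------------------------------------------------------------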
"""Stubouts, mocks and fixtures for the test suite.""" import datetime def stub_out(test, funcs): """Set the stubs in mapping in the db api.""" for module, func in funcs.items(): test.stub_out(module, func) def stub_out_db_instance_api(test, injected=True): """Stubs out the db API for creating Instances.""" def _create_instance_type(**updates): instance_type = {'id': 2, 'name': 'm1.tiny', 'memory_mb': 512, 'vcpus': 1, 'vcpu_weight': None, 'root_gb': 0, 'ephemeral_gb': 10, 'flavorid': 1, 'rxtx_factor': 1.0, 'swap': 0, 'deleted_at': None, 'created_at': datetime.datetime(2014, 8, 8, 0, 0, 0), 'updated_at': None, 'deleted': False, 'disabled': False, 'is_public': True, 'extra_specs': {}, 'description': None } if updates: instance_type.update(updates) return instance_type INSTANCE_TYPES = { 'm1.tiny': _create_instance_type( id=2, name='m1.tiny', memory_mb=512, vcpus=1, vcpu_weight=None, root_gb=0, ephemeral_gb=10, flavorid=1, rxtx_factor=1.0, swap=0), 'm1.small': _create_instance_type( id=5, name='m1.small', memory_mb=2048, vcpus=1, vcpu_weight=None, root_gb=20, ephemeral_gb=0, flavorid=2, rxtx_factor=1.0, swap=1024), 'm1.medium': _create_instance_type( id=1, name='m1.medium', memory_mb=4096, vcpus=2, vcpu_weight=None, root_gb=40, ephemeral_gb=40, flavorid=3, rxtx_factor=1.0, swap=0), 'm1.large': _create_instance_type( id=3, name='m1.large', memory_mb=8192, vcpus=4, vcpu_weight=10, root_gb=80, ephemeral_gb=80, flavorid=4, rxtx_factor=1.0, swap=0), 'm1.xlarge': _create_instance_type( id=4, name='m1.xlarge', memory_mb=16384, vcpus=8, vcpu_weight=None, root_gb=160, ephemeral_gb=160, flavorid=5, rxtx_factor=1.0, swap=0)} def fake_flavor_get_all(*a, **k): return INSTANCE_TYPES.values() @classmethod def fake_flavor_get_by_name(cls, context, name): return INSTANCE_TYPES[name] @classmethod def fake_flavor_get(cls, context, id): for inst_type in INSTANCE_TYPES.values(): if str(inst_type['id']) == str(id): return inst_type return None funcs = { 'nova.objects.flavor._flavor_get_all_from_db': ( fake_flavor_get_all), 'nova.objects.Flavor._flavor_get_by_name_from_db': ( fake_flavor_get_by_name), 'nova.objects.Flavor._flavor_get_from_db': fake_flavor_get, } stub_out(test, funcs) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/db/test_db_api.py0000664000175000017500000125300100000000000021070 0ustar00zuulzuul00000000000000# encoding=UTF8 # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Unit tests for the DB API.""" import copy import datetime from dateutil import parser as dateutil_parser import iso8601 import mock import netaddr from oslo_db import api as oslo_db_api from oslo_db import exception as db_exc from oslo_db.sqlalchemy import enginefacade from oslo_db.sqlalchemy import test_fixtures from oslo_db.sqlalchemy import update_match from oslo_db.sqlalchemy import utils as sqlalchemyutils from oslo_serialization import jsonutils from oslo_utils import fixture as utils_fixture from oslo_utils.fixture import uuidsentinel from oslo_utils import timeutils from oslo_utils import uuidutils import six from six.moves import range from sqlalchemy import Column from sqlalchemy.exc import OperationalError from sqlalchemy.exc import SQLAlchemyError from sqlalchemy import inspect from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy.orm import query from sqlalchemy.orm import session as sqla_session from sqlalchemy import sql from sqlalchemy import Table from nova import block_device from nova.compute import rpcapi as compute_rpcapi from nova.compute import task_states from nova.compute import vm_states import nova.conf from nova import context from nova.db import api as db from nova.db.sqlalchemy import api as sqlalchemy_api from nova.db.sqlalchemy import models from nova.db.sqlalchemy import types as col_types from nova.db.sqlalchemy import utils as db_utils from nova import exception from nova.objects import fields from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.unit import fake_console_auth_token from nova import utils CONF = nova.conf.CONF get_engine = sqlalchemy_api.get_engine def _make_compute_node(host, node, hv_type, service_id): compute_node_dict = dict(vcpus=2, memory_mb=1024, local_gb=2048, uuid=uuidutils.generate_uuid(), vcpus_used=0, memory_mb_used=0, local_gb_used=0, free_ram_mb=1024, free_disk_gb=2048, hypervisor_type=hv_type, hypervisor_version=1, cpu_info="", running_vms=0, current_workload=0, service_id=service_id, host=host, disk_available_least=100, hypervisor_hostname=node, host_ip='127.0.0.1', supported_instances='', pci_stats='', metrics='', extra_resources='', cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0, stats='', numa_topology='') # add some random stats stats = dict(num_instances=3, num_proj_12345=2, num_proj_23456=2, num_vm_building=3) compute_node_dict['stats'] = jsonutils.dumps(stats) return compute_node_dict def _quota_create(context, project_id, user_id): """Create sample Quota objects.""" quotas = {} user_quotas = {} for i in range(3): resource = 'resource%d' % i if i == 2: # test for project level resources resource = 'fixed_ips' quotas[resource] = db.quota_create(context, project_id, resource, i + 2).hard_limit user_quotas[resource] = quotas[resource] else: quotas[resource] = db.quota_create(context, project_id, resource, i + 1).hard_limit user_quotas[resource] = db.quota_create(context, project_id, resource, i + 1, user_id=user_id).hard_limit @sqlalchemy_api.pick_context_manager_reader def _assert_instance_id_mapping(_ctxt, tc, inst_uuid, expected_existing=False): # NOTE(mriedem): We can't use ec2_instance_get_by_uuid to assert # the instance_id_mappings record is gone because it hard-codes # read_deleted='yes' and will read the soft-deleted record. So we # do the model_query directly here. See bug 1061166. 
inst_id_mapping = sqlalchemy_api.model_query( _ctxt, models.InstanceIdMapping).filter_by(uuid=inst_uuid).first() if not expected_existing: tc.assertFalse(inst_id_mapping, 'instance_id_mapping not deleted for ' 'instance: %s' % inst_uuid) else: tc.assertTrue(inst_id_mapping, 'instance_id_mapping not found for ' 'instance: %s' % inst_uuid) class DbTestCase(test.TestCase): def setUp(self): super(DbTestCase, self).setUp() self.user_id = 'fake' self.project_id = 'fake' self.context = context.RequestContext(self.user_id, self.project_id) def create_instance_with_args(self, **kwargs): args = {'reservation_id': 'a', 'image_ref': 1, 'host': 'host1', 'node': 'node1', 'project_id': self.project_id, 'vm_state': 'fake'} if 'context' in kwargs: ctxt = kwargs.pop('context') args['project_id'] = ctxt.project_id else: ctxt = self.context args.update(kwargs) return db.instance_create(ctxt, args) def fake_metadata(self, content): meta = {} for i in range(0, 10): meta["foo%i" % i] = "this is %s item %i" % (content, i) return meta def create_metadata_for_instance(self, instance_uuid): meta = self.fake_metadata('metadata') db.instance_metadata_update(self.context, instance_uuid, meta, False) sys_meta = self.fake_metadata('system_metadata') db.instance_system_metadata_update(self.context, instance_uuid, sys_meta, False) return meta, sys_meta class HelperTestCase(test.TestCase): @mock.patch.object(sqlalchemy_api, 'joinedload') def test_joinedload_helper(self, mock_jl): query = sqlalchemy_api._joinedload_all('foo.bar.baz') # We call sqlalchemy.orm.joinedload() on the first element mock_jl.assert_called_once_with('foo') # Then first.joinedload(second) column2 = mock_jl.return_value column2.joinedload.assert_called_once_with('bar') # Then second.joinedload(third) column3 = column2.joinedload.return_value column3.joinedload.assert_called_once_with('baz') self.assertEqual(column3.joinedload.return_value, query) @mock.patch.object(sqlalchemy_api, 'joinedload') def test_joinedload_helper_single(self, mock_jl): query = sqlalchemy_api._joinedload_all('foo') # We call sqlalchemy.orm.joinedload() on the first element mock_jl.assert_called_once_with('foo') # We should have gotten back just the result of the joinedload() # call if there were no other elements self.assertEqual(mock_jl.return_value, query) class DecoratorTestCase(test.TestCase): def _test_decorator_wraps_helper(self, decorator): def test_func(): """Test docstring.""" decorated_func = decorator(test_func) self.assertEqual(test_func.__name__, decorated_func.__name__) self.assertEqual(test_func.__doc__, decorated_func.__doc__) self.assertEqual(test_func.__module__, decorated_func.__module__) def test_require_context_decorator_wraps_functions_properly(self): self._test_decorator_wraps_helper(sqlalchemy_api.require_context) def test_require_deadlock_retry_wraps_functions_properly(self): self._test_decorator_wraps_helper( oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)) @mock.patch.object(enginefacade._TransactionContextManager, 'using') @mock.patch.object(enginefacade._TransactionContextManager, '_clone') def test_select_db_reader_mode_select_sync(self, mock_clone, mock_using): @db.select_db_reader_mode def func(self, context, value, use_slave=False): pass mock_clone.return_value = enginefacade._TransactionContextManager( mode=enginefacade._READER) ctxt = context.get_admin_context() value = 'some_value' func(self, ctxt, value) mock_clone.assert_called_once_with(mode=enginefacade._READER) mock_using.assert_called_once_with(ctxt) 
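    # -----------------------------------------------------------------------
    # Editor's note: an illustrative sketch, not part of the original archive.
    # The test_select_db_reader_mode_* cases here verify that the decorator
    # clones the enginefacade transaction context manager in reader mode by
    # default and in async-reader mode when use_slave=True.  Conceptually the
    # decorator behaves like this simplified, hypothetical version:
    #
    #     import functools
    #
    #     def select_db_reader_mode(f):
    #         @functools.wraps(f)
    #         def wrapper(context, *args, **kwargs):
    #             use_slave = kwargs.get('use_slave', False)
    #             mode = (enginefacade._ASYNC_READER if use_slave
    #                     else enginefacade._READER)
    #             reader = main_context_manager._clone(mode=mode)
    #             with reader.using(context):
    #                 return f(context, *args, **kwargs)
    #         return wrapper
    # -----------------------------------------------------------------------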
@mock.patch.object(enginefacade._TransactionContextManager, 'using') @mock.patch.object(enginefacade._TransactionContextManager, '_clone') def test_select_db_reader_mode_select_async(self, mock_clone, mock_using): @db.select_db_reader_mode def func(self, context, value, use_slave=False): pass mock_clone.return_value = enginefacade._TransactionContextManager( mode=enginefacade._ASYNC_READER) ctxt = context.get_admin_context() value = 'some_value' func(self, ctxt, value, use_slave=True) mock_clone.assert_called_once_with(mode=enginefacade._ASYNC_READER) mock_using.assert_called_once_with(ctxt) @mock.patch.object(enginefacade._TransactionContextManager, 'using') @mock.patch.object(enginefacade._TransactionContextManager, '_clone') def test_select_db_reader_mode_no_use_slave_select_sync(self, mock_clone, mock_using): @db.select_db_reader_mode def func(self, context, value): pass mock_clone.return_value = enginefacade._TransactionContextManager( mode=enginefacade._READER) ctxt = context.get_admin_context() value = 'some_value' func(self, ctxt, value) mock_clone.assert_called_once_with(mode=enginefacade._READER) mock_using.assert_called_once_with(ctxt) def _get_fake_aggr_values(): return {'name': 'fake_aggregate'} def _get_fake_aggr_metadata(): return {'fake_key1': 'fake_value1', 'fake_key2': 'fake_value2', 'availability_zone': 'fake_avail_zone'} def _get_fake_aggr_hosts(): return ['foo.openstack.org'] def _create_aggregate(context=context.get_admin_context(), values=_get_fake_aggr_values(), metadata=_get_fake_aggr_metadata()): return db.aggregate_create(context, values, metadata) def _create_aggregate_with_hosts(context=context.get_admin_context(), values=_get_fake_aggr_values(), metadata=_get_fake_aggr_metadata(), hosts=_get_fake_aggr_hosts()): result = _create_aggregate(context=context, values=values, metadata=metadata) for host in hosts: db.aggregate_host_add(context, result['id'], host) return result @mock.patch.object(sqlalchemy_api, '_get_regexp_ops', return_value=(lambda x: x, 'LIKE')) class UnsupportedDbRegexpTestCase(DbTestCase): def test_instance_get_all_by_filters_paginate(self, mock_get_regexp): test1 = self.create_instance_with_args(display_name='test1') test2 = self.create_instance_with_args(display_name='test2') test3 = self.create_instance_with_args(display_name='test3') result = db.instance_get_all_by_filters(self.context, {'display_name': '%test%'}, marker=None) self.assertEqual(3, len(result)) result = db.instance_get_all_by_filters(self.context, {'display_name': '%test%'}, sort_dir="asc", marker=test1['uuid']) self.assertEqual(2, len(result)) result = db.instance_get_all_by_filters(self.context, {'display_name': '%test%'}, sort_dir="asc", marker=test2['uuid']) self.assertEqual(1, len(result)) result = db.instance_get_all_by_filters(self.context, {'display_name': '%test%'}, sort_dir="asc", marker=test3['uuid']) self.assertEqual(0, len(result)) self.assertRaises(exception.MarkerNotFound, db.instance_get_all_by_filters, self.context, {'display_name': '%test%'}, marker=uuidsentinel.uuid1) def test_instance_get_all_uuids_by_hosts(self, mock_get_regexp): test1 = self.create_instance_with_args(display_name='test1') test2 = self.create_instance_with_args(display_name='test2') test3 = self.create_instance_with_args(display_name='test3') uuids = [i.uuid for i in (test1, test2, test3)] results = db.instance_get_all_uuids_by_hosts( self.context, [test1.host]) self.assertEqual(1, len(results)) self.assertIn(test1.host, results) found_uuids = results[test1.host] 
self.assertEqual(sorted(uuids), sorted(found_uuids)) def _assert_equals_inst_order(self, correct_order, filters, sort_keys=None, sort_dirs=None, limit=None, marker=None, match_keys=['uuid', 'vm_state', 'display_name', 'id']): '''Retrieves instances based on the given filters and sorting information and verifies that the instances are returned in the correct sorted order by ensuring that the supplied keys match. ''' result = db.instance_get_all_by_filters_sort( self.context, filters, limit=limit, marker=marker, sort_keys=sort_keys, sort_dirs=sort_dirs) self.assertEqual(len(correct_order), len(result)) for inst1, inst2 in zip(result, correct_order): for key in match_keys: self.assertEqual(inst1.get(key), inst2.get(key)) return result def test_instance_get_all_by_filters_sort_keys(self, mock_get_regexp): '''Verifies sort order and direction for multiple instances.''' # Instances that will reply to the query test1_active = self.create_instance_with_args( display_name='test1', vm_state=vm_states.ACTIVE) test1_error = self.create_instance_with_args( display_name='test1', vm_state=vm_states.ERROR) test1_error2 = self.create_instance_with_args( display_name='test1', vm_state=vm_states.ERROR) test2_active = self.create_instance_with_args( display_name='test2', vm_state=vm_states.ACTIVE) test2_error = self.create_instance_with_args( display_name='test2', vm_state=vm_states.ERROR) test2_error2 = self.create_instance_with_args( display_name='test2', vm_state=vm_states.ERROR) # Other instances in the DB, will not match name filter other_error = self.create_instance_with_args( display_name='other', vm_state=vm_states.ERROR) other_active = self.create_instance_with_args( display_name='other', vm_state=vm_states.ACTIVE) filters = {'display_name': '%test%'} # Verify different sort key/direction combinations sort_keys = ['display_name', 'vm_state', 'created_at'] sort_dirs = ['asc', 'asc', 'asc'] correct_order = [test1_active, test1_error, test1_error2, test2_active, test2_error, test2_error2] self._assert_equals_inst_order(correct_order, filters, sort_keys=sort_keys, sort_dirs=sort_dirs) sort_dirs = ['asc', 'desc', 'asc'] correct_order = [test1_error, test1_error2, test1_active, test2_error, test2_error2, test2_active] self._assert_equals_inst_order(correct_order, filters, sort_keys=sort_keys, sort_dirs=sort_dirs) sort_dirs = ['desc', 'desc', 'asc'] correct_order = [test2_error, test2_error2, test2_active, test1_error, test1_error2, test1_active] self._assert_equals_inst_order(correct_order, filters, sort_keys=sort_keys, sort_dirs=sort_dirs) # created_at is added by default if not supplied, descending order sort_keys = ['display_name', 'vm_state'] sort_dirs = ['desc', 'desc'] correct_order = [test2_error2, test2_error, test2_active, test1_error2, test1_error, test1_active] self._assert_equals_inst_order(correct_order, filters, sort_keys=sort_keys, sort_dirs=sort_dirs) # Now created_at should be in ascending order (defaults to the first # sort dir direction) sort_dirs = ['asc', 'asc'] correct_order = [test1_active, test1_error, test1_error2, test2_active, test2_error, test2_error2] self._assert_equals_inst_order(correct_order, filters, sort_keys=sort_keys, sort_dirs=sort_dirs) # Remove name filter, get all instances correct_order = [other_active, other_error, test1_active, test1_error, test1_error2, test2_active, test2_error, test2_error2] self._assert_equals_inst_order(correct_order, {}, sort_keys=sort_keys, sort_dirs=sort_dirs) # Default sorting, 'created_at' then 'id' in desc order correct_order = 
[other_active, other_error, test2_error2, test2_error, test2_active, test1_error2, test1_error, test1_active] self._assert_equals_inst_order(correct_order, {}) def test_instance_get_all_by_filters_sort_keys_paginate(self, mock_get_regexp): '''Verifies sort order with pagination.''' # Instances that will reply to the query test1_active = self.create_instance_with_args( display_name='test1', vm_state=vm_states.ACTIVE) test1_error = self.create_instance_with_args( display_name='test1', vm_state=vm_states.ERROR) test1_error2 = self.create_instance_with_args( display_name='test1', vm_state=vm_states.ERROR) test2_active = self.create_instance_with_args( display_name='test2', vm_state=vm_states.ACTIVE) test2_error = self.create_instance_with_args( display_name='test2', vm_state=vm_states.ERROR) test2_error2 = self.create_instance_with_args( display_name='test2', vm_state=vm_states.ERROR) # Other instances in the DB, will not match name filter self.create_instance_with_args(display_name='other') self.create_instance_with_args(display_name='other') filters = {'display_name': '%test%'} # Common sort information for every query sort_keys = ['display_name', 'vm_state', 'created_at'] sort_dirs = ['asc', 'desc', 'asc'] # Overall correct instance order based on the sort keys correct_order = [test1_error, test1_error2, test1_active, test2_error, test2_error2, test2_active] # Limits of 1, 2, and 3, verify that the instances returned are in the # correct sorted order, update the marker to get the next correct page for limit in range(1, 4): marker = None # Include the maximum number of instances (ie, 6) to ensure that # the last query (with marker pointing to the last instance) # returns 0 servers for i in range(0, 7, limit): if i == len(correct_order): correct = [] else: correct = correct_order[i:i + limit] insts = self._assert_equals_inst_order( correct, filters, sort_keys=sort_keys, sort_dirs=sort_dirs, limit=limit, marker=marker) if correct: marker = insts[-1]['uuid'] self.assertEqual(correct[-1]['uuid'], marker) def test_instance_get_deleted_by_filters_sort_keys_paginate(self, mock_get_regexp): '''Verifies sort order with pagination for deleted instances.''' ctxt = context.get_admin_context() # Instances that will reply to the query test1_active = self.create_instance_with_args( display_name='test1', vm_state=vm_states.ACTIVE) db.instance_destroy(ctxt, test1_active['uuid']) test1_error = self.create_instance_with_args( display_name='test1', vm_state=vm_states.ERROR) db.instance_destroy(ctxt, test1_error['uuid']) test1_error2 = self.create_instance_with_args( display_name='test1', vm_state=vm_states.ERROR) db.instance_destroy(ctxt, test1_error2['uuid']) test2_active = self.create_instance_with_args( display_name='test2', vm_state=vm_states.ACTIVE) db.instance_destroy(ctxt, test2_active['uuid']) test2_error = self.create_instance_with_args( display_name='test2', vm_state=vm_states.ERROR) db.instance_destroy(ctxt, test2_error['uuid']) test2_error2 = self.create_instance_with_args( display_name='test2', vm_state=vm_states.ERROR) db.instance_destroy(ctxt, test2_error2['uuid']) # Other instances in the DB, will not match name filter self.create_instance_with_args(display_name='other') self.create_instance_with_args(display_name='other') filters = {'display_name': '%test%', 'deleted': True} # Common sort information for every query sort_keys = ['display_name', 'vm_state', 'created_at'] sort_dirs = ['asc', 'desc', 'asc'] # Overall correct instance order based on the sort keys correct_order = [test1_error, 
test1_error2, test1_active, test2_error, test2_error2, test2_active] # Limits of 1, 2, and 3, verify that the instances returned are in the # correct sorted order, update the marker to get the next correct page for limit in range(1, 4): marker = None # Include the maximum number of instances (ie, 6) to ensure that # the last query (with marker pointing to the last instance) # returns 0 servers for i in range(0, 7, limit): if i == len(correct_order): correct = [] else: correct = correct_order[i:i + limit] insts = self._assert_equals_inst_order( correct, filters, sort_keys=sort_keys, sort_dirs=sort_dirs, limit=limit, marker=marker) if correct: marker = insts[-1]['uuid'] self.assertEqual(correct[-1]['uuid'], marker) class ModelQueryTestCase(DbTestCase): def test_model_query_invalid_arguments(self): @sqlalchemy_api.pick_context_manager_reader def test(context): # read_deleted shouldn't accept invalid values self.assertRaises(ValueError, sqlalchemy_api.model_query, context, models.Instance, read_deleted=False) self.assertRaises(ValueError, sqlalchemy_api.model_query, context, models.Instance, read_deleted="foo") # Check model is a valid model self.assertRaises(TypeError, sqlalchemy_api.model_query, context, "") test(self.context) @mock.patch.object(sqlalchemyutils, 'model_query') def test_model_query_use_context_session(self, mock_model_query): @sqlalchemy_api.main_context_manager.reader def fake_method(context): session = context.session sqlalchemy_api.model_query(context, models.Instance) return session session = fake_method(self.context) mock_model_query.assert_called_once_with(models.Instance, session, None, deleted=False) class EngineFacadeTestCase(DbTestCase): def test_use_single_context_session_writer(self): # Checks that session in context would not be overwritten by # annotation @sqlalchemy_api.main_context_manager.writer if annotation # is used twice. @sqlalchemy_api.main_context_manager.writer def fake_parent_method(context): session = context.session return fake_child_method(context), session @sqlalchemy_api.main_context_manager.writer def fake_child_method(context): session = context.session sqlalchemy_api.model_query(context, models.Instance) return session parent_session, child_session = fake_parent_method(self.context) self.assertEqual(parent_session, child_session) def test_use_single_context_session_reader(self): # Checks that session in context would not be overwritten by # annotation @sqlalchemy_api.main_context_manager.reader if annotation # is used twice. @sqlalchemy_api.main_context_manager.reader def fake_parent_method(context): session = context.session return fake_child_method(context), session @sqlalchemy_api.main_context_manager.reader def fake_child_method(context): session = context.session sqlalchemy_api.model_query(context, models.Instance) return session parent_session, child_session = fake_parent_method(self.context) self.assertEqual(parent_session, child_session) class SqlAlchemyDbApiNoDbTestCase(test.NoDBTestCase): """No-DB test class for simple test cases that do not require a backend.""" def test_manual_join_columns_immutable_list(self): # Tests that _manual_join_columns doesn't modify the list passed in. 
columns_to_join = ['system_metadata', 'test'] manual_joins, columns_to_join2 = ( sqlalchemy_api._manual_join_columns(columns_to_join)) self.assertEqual(['system_metadata'], manual_joins) self.assertEqual(['test'], columns_to_join2) self.assertEqual(['system_metadata', 'test'], columns_to_join) def test_convert_objects_related_datetimes(self): t1 = timeutils.utcnow() t2 = t1 + datetime.timedelta(seconds=10) t3 = t2 + datetime.timedelta(hours=1) t2_utc = t2.replace(tzinfo=iso8601.UTC) t3_utc = t3.replace(tzinfo=iso8601.UTC) datetime_keys = ('created_at', 'deleted_at') test1 = {'created_at': t1, 'deleted_at': t2, 'updated_at': t3} expected_dict = {'created_at': t1, 'deleted_at': t2, 'updated_at': t3} sqlalchemy_api.convert_objects_related_datetimes(test1, *datetime_keys) self.assertEqual(test1, expected_dict) test2 = {'created_at': t1, 'deleted_at': t2_utc, 'updated_at': t3} expected_dict = {'created_at': t1, 'deleted_at': t2, 'updated_at': t3} sqlalchemy_api.convert_objects_related_datetimes(test2, *datetime_keys) self.assertEqual(test2, expected_dict) test3 = {'deleted_at': t2_utc, 'updated_at': t3_utc} expected_dict = {'deleted_at': t2, 'updated_at': t3_utc} sqlalchemy_api.convert_objects_related_datetimes(test3, *datetime_keys) self.assertEqual(test3, expected_dict) def test_convert_objects_related_datetimes_with_strings(self): t1 = '2015-05-28T17:15:53.000000' t2 = '2012-04-21T18:25:43-05:00' t3 = '2012-04-23T18:25:43.511Z' datetime_keys = ('created_at', 'deleted_at', 'updated_at') test1 = {'created_at': t1, 'deleted_at': t2, 'updated_at': t3} expected_dict = { 'created_at': timeutils.parse_strtime(t1).replace(tzinfo=None), 'deleted_at': timeutils.parse_isotime(t2).replace(tzinfo=None), 'updated_at': timeutils.parse_isotime(t3).replace(tzinfo=None)} sqlalchemy_api.convert_objects_related_datetimes(test1) self.assertEqual(test1, expected_dict) sqlalchemy_api.convert_objects_related_datetimes(test1, *datetime_keys) self.assertEqual(test1, expected_dict) def test_get_regexp_op_for_database_sqlite(self): filter, op = sqlalchemy_api._get_regexp_ops('sqlite:///') self.assertEqual('|', filter('|')) self.assertEqual('REGEXP', op) def test_get_regexp_op_for_database_mysql(self): filter, op = sqlalchemy_api._get_regexp_ops( 'mysql+pymysql://root@localhost') self.assertEqual('\\|', filter('|')) self.assertEqual('REGEXP', op) def test_get_regexp_op_for_database_postgresql(self): filter, op = sqlalchemy_api._get_regexp_ops( 'postgresql://localhost') self.assertEqual('|', filter('|')) self.assertEqual('~', op) def test_get_regexp_op_for_database_unknown(self): filter, op = sqlalchemy_api._get_regexp_ops('notdb:///') self.assertEqual('|', filter('|')) self.assertEqual('LIKE', op) @mock.patch.object(sqlalchemy_api, 'main_context_manager') def test_get_engine(self, mock_ctxt_mgr): sqlalchemy_api.get_engine() mock_ctxt_mgr.writer.get_engine.assert_called_once_with() @mock.patch.object(sqlalchemy_api, 'main_context_manager') def test_get_engine_use_slave(self, mock_ctxt_mgr): sqlalchemy_api.get_engine(use_slave=True) mock_ctxt_mgr.reader.get_engine.assert_called_once_with() def test_get_db_conf_with_connection(self): mock_conf_group = mock.MagicMock() mock_conf_group.connection = 'fakemain://' db_conf = sqlalchemy_api._get_db_conf(mock_conf_group, connection='fake://') self.assertEqual('fake://', db_conf['connection']) @mock.patch.object(sqlalchemy_api, 'api_context_manager') def test_get_api_engine(self, mock_ctxt_mgr): sqlalchemy_api.get_api_engine() 
mock_ctxt_mgr.writer.get_engine.assert_called_once_with() @mock.patch.object(sqlalchemy_api, '_instance_get_by_uuid') @mock.patch.object(sqlalchemy_api, '_instances_fill_metadata') @mock.patch('oslo_db.sqlalchemy.utils.paginate_query') def test_instance_get_all_by_filters_paginated_allows_deleted_marker( self, mock_paginate, mock_fill, mock_get): ctxt = mock.MagicMock() ctxt.elevated.return_value = mock.sentinel.elevated sqlalchemy_api.instance_get_all_by_filters_sort(ctxt, {}, marker='foo') mock_get.assert_called_once_with(mock.sentinel.elevated, 'foo') ctxt.elevated.assert_called_once_with(read_deleted='yes') def test_replace_sub_expression(self): ret = sqlalchemy_api._safe_regex_mysql('|') self.assertEqual('\\|', ret) ret = sqlalchemy_api._safe_regex_mysql('||') self.assertEqual('\\|\\|', ret) ret = sqlalchemy_api._safe_regex_mysql('a||') self.assertEqual('a\\|\\|', ret) ret = sqlalchemy_api._safe_regex_mysql('|a|') self.assertEqual('\\|a\\|', ret) ret = sqlalchemy_api._safe_regex_mysql('||a') self.assertEqual('\\|\\|a', ret) class SqlAlchemyDbApiTestCase(DbTestCase): def test_instance_get_all_by_host(self): ctxt = context.get_admin_context() self.create_instance_with_args() self.create_instance_with_args() self.create_instance_with_args(host='host2') @sqlalchemy_api.pick_context_manager_reader def test(context): return sqlalchemy_api.instance_get_all_by_host( context, 'host1') result = test(ctxt) self.assertEqual(2, len(result)) # make sure info_cache and security_groups were auto-joined instance = result[0] self.assertIn('info_cache', instance) self.assertIn('security_groups', instance) def test_instance_get_all_by_host_no_joins(self): """Tests that we don't join on the info_cache and security_groups tables if columns_to_join is an empty list. 
""" self.create_instance_with_args() @sqlalchemy_api.pick_context_manager_reader def test(ctxt): return sqlalchemy_api.instance_get_all_by_host( ctxt, 'host1', columns_to_join=[]) result = test(context.get_admin_context()) self.assertEqual(1, len(result)) # make sure info_cache and security_groups were not auto-joined instance = result[0] self.assertNotIn('info_cache', instance) self.assertNotIn('security_groups', instance) def test_instance_get_all_uuids_by_hosts(self): ctxt = context.get_admin_context() self.create_instance_with_args() self.create_instance_with_args() self.create_instance_with_args(host='host2') @sqlalchemy_api.pick_context_manager_reader def test1(context): return sqlalchemy_api._instance_get_all_uuids_by_hosts( context, ['host1']) @sqlalchemy_api.pick_context_manager_reader def test2(context): return sqlalchemy_api._instance_get_all_uuids_by_hosts( context, ['host1', 'host2']) result = test1(ctxt) self.assertEqual(1, len(result)) self.assertEqual(2, len(result['host1'])) self.assertEqual(six.text_type, type(result['host1'][0])) result = test2(ctxt) self.assertEqual(2, len(result)) self.assertEqual(2, len(result['host1'])) self.assertEqual(1, len(result['host2'])) @mock.patch('oslo_utils.uuidutils.generate_uuid') def test_instance_get_active_by_window_joined_paging(self, mock_uuids): mock_uuids.side_effect = ['BBB', 'ZZZ', 'AAA', 'CCC'] ctxt = context.get_admin_context() now = datetime.datetime(2015, 10, 2) self.create_instance_with_args(project_id='project-ZZZ') self.create_instance_with_args(project_id='project-ZZZ') self.create_instance_with_args(project_id='project-ZZZ') self.create_instance_with_args(project_id='project-AAA') # no limit or marker result = sqlalchemy_api.instance_get_active_by_window_joined( ctxt, begin=now, columns_to_join=[]) actual_uuids = [row['uuid'] for row in result] self.assertEqual(['CCC', 'AAA', 'BBB', 'ZZZ'], actual_uuids) # just limit result = sqlalchemy_api.instance_get_active_by_window_joined( ctxt, begin=now, columns_to_join=[], limit=2) actual_uuids = [row['uuid'] for row in result] self.assertEqual(['CCC', 'AAA'], actual_uuids) # limit & marker result = sqlalchemy_api.instance_get_active_by_window_joined( ctxt, begin=now, columns_to_join=[], limit=2, marker='CCC') actual_uuids = [row['uuid'] for row in result] self.assertEqual(['AAA', 'BBB'], actual_uuids) # unknown marker self.assertRaises( exception.MarkerNotFound, sqlalchemy_api.instance_get_active_by_window_joined, ctxt, begin=now, columns_to_join=[], limit=2, marker='unknown') def test_instance_get_active_by_window_joined(self): now = datetime.datetime(2013, 10, 10, 17, 16, 37, 156701) start_time = now - datetime.timedelta(minutes=10) now1 = now + datetime.timedelta(minutes=1) now2 = now + datetime.timedelta(minutes=2) now3 = now + datetime.timedelta(minutes=3) ctxt = context.get_admin_context() # used for testing columns_to_join network_info = jsonutils.dumps({'ckey': 'cvalue'}) sample_data = { 'metadata': {'mkey1': 'mval1', 'mkey2': 'mval2'}, 'system_metadata': {'smkey1': 'smval1', 'smkey2': 'smval2'}, 'info_cache': {'network_info': network_info}, } self.create_instance_with_args(launched_at=now, **sample_data) self.create_instance_with_args(launched_at=now1, terminated_at=now2, **sample_data) self.create_instance_with_args(launched_at=now2, terminated_at=now3, **sample_data) self.create_instance_with_args(launched_at=now3, terminated_at=None, **sample_data) result = sqlalchemy_api.instance_get_active_by_window_joined( ctxt, begin=now) self.assertEqual(4, len(result)) # 
verify that all default columns are joined meta = utils.metadata_to_dict(result[0]['metadata']) self.assertEqual(sample_data['metadata'], meta) sys_meta = utils.metadata_to_dict(result[0]['system_metadata']) self.assertEqual(sample_data['system_metadata'], sys_meta) self.assertIn('info_cache', result[0]) result = sqlalchemy_api.instance_get_active_by_window_joined( ctxt, begin=now3, columns_to_join=['info_cache']) self.assertEqual(2, len(result)) # verify that only info_cache is loaded meta = utils.metadata_to_dict(result[0]['metadata']) self.assertEqual({}, meta) self.assertIn('info_cache', result[0]) result = sqlalchemy_api.instance_get_active_by_window_joined( ctxt, begin=start_time, end=now) self.assertEqual(0, len(result)) result = sqlalchemy_api.instance_get_active_by_window_joined( ctxt, begin=start_time, end=now2, columns_to_join=['system_metadata']) self.assertEqual(2, len(result)) # verify that only system_metadata is loaded meta = utils.metadata_to_dict(result[0]['metadata']) self.assertEqual({}, meta) sys_meta = utils.metadata_to_dict(result[0]['system_metadata']) self.assertEqual(sample_data['system_metadata'], sys_meta) self.assertNotIn('info_cache', result[0]) result = sqlalchemy_api.instance_get_active_by_window_joined( ctxt, begin=now2, end=now3, columns_to_join=['metadata', 'info_cache']) self.assertEqual(2, len(result)) # verify that only metadata and info_cache are loaded meta = utils.metadata_to_dict(result[0]['metadata']) self.assertEqual(sample_data['metadata'], meta) sys_meta = utils.metadata_to_dict(result[0]['system_metadata']) self.assertEqual({}, sys_meta) self.assertIn('info_cache', result[0]) self.assertEqual(network_info, result[0]['info_cache']['network_info']) @mock.patch('nova.db.sqlalchemy.api.instance_get_all_by_filters_sort') def test_instance_get_all_by_filters_calls_sort(self, mock_get_all_filters_sort): '''Verifies instance_get_all_by_filters calls the sort function.''' # sort parameters should be wrapped in a list, all other parameters # should be passed through ctxt = context.get_admin_context() sqlalchemy_api.instance_get_all_by_filters(ctxt, {'foo': 'bar'}, 'sort_key', 'sort_dir', limit=100, marker='uuid', columns_to_join='columns') mock_get_all_filters_sort.assert_called_once_with(ctxt, {'foo': 'bar'}, limit=100, marker='uuid', columns_to_join='columns', sort_keys=['sort_key'], sort_dirs=['sort_dir']) def test_instance_get_all_by_filters_sort_key_invalid(self): '''InvalidSortKey raised if an invalid key is given.''' for keys in [['foo'], ['uuid', 'foo']]: self.assertRaises(exception.InvalidSortKey, db.instance_get_all_by_filters_sort, self.context, filters={}, sort_keys=keys) def test_instance_get_all_by_filters_sort_hidden(self): """Tests the default filtering behavior of the hidden column.""" # Create a hidden instance record. self.create_instance_with_args(hidden=True) # Get instances which by default will filter out the hidden instance. instances = sqlalchemy_api.instance_get_all_by_filters_sort( self.context, filters={}, limit=10) self.assertEqual(0, len(instances)) # Now explicitly filter for hidden instances. 
instances = sqlalchemy_api.instance_get_all_by_filters_sort( self.context, filters={'hidden': True}, limit=10) self.assertEqual(1, len(instances)) class ProcessSortParamTestCase(test.TestCase): def test_process_sort_params_defaults(self): '''Verifies default sort parameters.''' sort_keys, sort_dirs = sqlalchemy_api.process_sort_params([], []) self.assertEqual(['created_at', 'id'], sort_keys) self.assertEqual(['asc', 'asc'], sort_dirs) sort_keys, sort_dirs = sqlalchemy_api.process_sort_params(None, None) self.assertEqual(['created_at', 'id'], sort_keys) self.assertEqual(['asc', 'asc'], sort_dirs) def test_process_sort_params_override_default_keys(self): '''Verifies that the default keys can be overridden.''' sort_keys, sort_dirs = sqlalchemy_api.process_sort_params( [], [], default_keys=['key1', 'key2', 'key3']) self.assertEqual(['key1', 'key2', 'key3'], sort_keys) self.assertEqual(['asc', 'asc', 'asc'], sort_dirs) def test_process_sort_params_override_default_dir(self): '''Verifies that the default direction can be overridden.''' sort_keys, sort_dirs = sqlalchemy_api.process_sort_params( [], [], default_dir='dir1') self.assertEqual(['created_at', 'id'], sort_keys) self.assertEqual(['dir1', 'dir1'], sort_dirs) def test_process_sort_params_override_default_key_and_dir(self): '''Verifies that the default key and dir can be overridden.''' sort_keys, sort_dirs = sqlalchemy_api.process_sort_params( [], [], default_keys=['key1', 'key2', 'key3'], default_dir='dir1') self.assertEqual(['key1', 'key2', 'key3'], sort_keys) self.assertEqual(['dir1', 'dir1', 'dir1'], sort_dirs) sort_keys, sort_dirs = sqlalchemy_api.process_sort_params( [], [], default_keys=[], default_dir='dir1') self.assertEqual([], sort_keys) self.assertEqual([], sort_dirs) def test_process_sort_params_non_default(self): '''Verifies that non-default keys are added correctly.''' sort_keys, sort_dirs = sqlalchemy_api.process_sort_params( ['key1', 'key2'], ['asc', 'desc']) self.assertEqual(['key1', 'key2', 'created_at', 'id'], sort_keys) # First sort_dir in list is used when adding the default keys self.assertEqual(['asc', 'desc', 'asc', 'asc'], sort_dirs) def test_process_sort_params_default(self): '''Verifies that default keys are added correctly.''' sort_keys, sort_dirs = sqlalchemy_api.process_sort_params( ['id', 'key2'], ['asc', 'desc']) self.assertEqual(['id', 'key2', 'created_at'], sort_keys) self.assertEqual(['asc', 'desc', 'asc'], sort_dirs) # Include default key value, rely on default direction sort_keys, sort_dirs = sqlalchemy_api.process_sort_params( ['id', 'key2'], []) self.assertEqual(['id', 'key2', 'created_at'], sort_keys) self.assertEqual(['asc', 'asc', 'asc'], sort_dirs) def test_process_sort_params_default_dir(self): '''Verifies that the default dir is applied to all keys.''' # Direction is set, ignore default dir sort_keys, sort_dirs = sqlalchemy_api.process_sort_params( ['id', 'key2'], ['desc'], default_dir='dir') self.assertEqual(['id', 'key2', 'created_at'], sort_keys) self.assertEqual(['desc', 'desc', 'desc'], sort_dirs) # But should be used if no direction is set sort_keys, sort_dirs = sqlalchemy_api.process_sort_params( ['id', 'key2'], [], default_dir='dir') self.assertEqual(['id', 'key2', 'created_at'], sort_keys) self.assertEqual(['dir', 'dir', 'dir'], sort_dirs) def test_process_sort_params_unequal_length(self): '''Verifies that a sort direction list is applied correctly.''' sort_keys, sort_dirs = sqlalchemy_api.process_sort_params( ['id', 'key2', 'key3'], ['desc']) self.assertEqual(['id', 'key2', 'key3', 
'created_at'], sort_keys) self.assertEqual(['desc', 'desc', 'desc', 'desc'], sort_dirs) # Default direction is the first key in the list sort_keys, sort_dirs = sqlalchemy_api.process_sort_params( ['id', 'key2', 'key3'], ['desc', 'asc']) self.assertEqual(['id', 'key2', 'key3', 'created_at'], sort_keys) self.assertEqual(['desc', 'asc', 'desc', 'desc'], sort_dirs) sort_keys, sort_dirs = sqlalchemy_api.process_sort_params( ['id', 'key2', 'key3'], ['desc', 'asc', 'asc']) self.assertEqual(['id', 'key2', 'key3', 'created_at'], sort_keys) self.assertEqual(['desc', 'asc', 'asc', 'desc'], sort_dirs) def test_process_sort_params_extra_dirs_lengths(self): '''InvalidInput raised if more directions are given.''' self.assertRaises(exception.InvalidInput, sqlalchemy_api.process_sort_params, ['key1', 'key2'], ['asc', 'desc', 'desc']) def test_process_sort_params_invalid_sort_dir(self): '''InvalidInput raised if invalid directions are given.''' for dirs in [['foo'], ['asc', 'foo'], ['asc', 'desc', 'foo']]: self.assertRaises(exception.InvalidInput, sqlalchemy_api.process_sort_params, ['key'], dirs) class MigrationTestCase(test.TestCase): def setUp(self): super(MigrationTestCase, self).setUp() self.ctxt = context.get_admin_context() self._create() self._create() self._create(status='reverted') self._create(status='confirmed') self._create(status='error') self._create(status='failed') self._create(status='accepted') self._create(status='done') self._create(status='completed') self._create(status='cancelled') self._create(source_compute='host2', source_node='b', dest_compute='host1', dest_node='a') self._create(source_compute='host2', dest_compute='host3') self._create(source_compute='host3', dest_compute='host4') def _create(self, status='migrating', source_compute='host1', source_node='a', dest_compute='host2', dest_node='b', system_metadata=None, migration_type=None, uuid=None, created_at=None, updated_at=None, user_id=None, project_id=None): values = {'host': source_compute} instance = db.instance_create(self.ctxt, values) if system_metadata: db.instance_system_metadata_update(self.ctxt, instance['uuid'], system_metadata, False) values = {'status': status, 'source_compute': source_compute, 'source_node': source_node, 'dest_compute': dest_compute, 'dest_node': dest_node, 'instance_uuid': instance['uuid'], 'migration_type': migration_type, 'uuid': uuid} if created_at: values['created_at'] = created_at if updated_at: values['updated_at'] = updated_at if user_id: values['user_id'] = user_id if project_id: values['project_id'] = project_id db.migration_create(self.ctxt, values) return values def _assert_in_progress(self, migrations): for migration in migrations: self.assertNotEqual('confirmed', migration['status']) self.assertNotEqual('reverted', migration['status']) self.assertNotEqual('error', migration['status']) self.assertNotEqual('failed', migration['status']) self.assertNotEqual('done', migration['status']) self.assertNotEqual('cancelled', migration['status']) def test_migration_get_in_progress_joins(self): self._create(source_compute='foo', system_metadata={'foo': 'bar'}) migrations = db.migration_get_in_progress_by_host_and_node(self.ctxt, 'foo', 'a') system_metadata = migrations[0]['instance']['system_metadata'][0] self.assertEqual(system_metadata['key'], 'foo') self.assertEqual(system_metadata['value'], 'bar') def test_in_progress_host1_nodea(self): migrations = db.migration_get_in_progress_by_host_and_node(self.ctxt, 'host1', 'a') # 2 as source + 1 as dest self.assertEqual(4, len(migrations)) 
        self._assert_in_progress(migrations)

    def test_in_progress_host1_nodeb(self):
        migrations = db.migration_get_in_progress_by_host_and_node(self.ctxt,
                'host1', 'b')
        # some migrations are to/from host1, but none with a node 'b'
        self.assertEqual(0, len(migrations))

    def test_in_progress_host2_nodeb(self):
        migrations = db.migration_get_in_progress_by_host_and_node(self.ctxt,
                'host2', 'b')
        # 2 as dest, 1 as source
        self.assertEqual(4, len(migrations))
        self._assert_in_progress(migrations)

    def test_instance_join(self):
        migrations = db.migration_get_in_progress_by_host_and_node(self.ctxt,
                'host2', 'b')
        for migration in migrations:
            instance = migration['instance']
            self.assertEqual(migration['instance_uuid'], instance['uuid'])

    def test_migration_get_by_uuid(self):
        migration1 = self._create(uuid=uuidsentinel.migration1_uuid)
        self._create(uuid=uuidsentinel.other_uuid)
        real_migration1 = db.migration_get_by_uuid(
            self.ctxt, uuidsentinel.migration1_uuid)
        for key in migration1:
            self.assertEqual(migration1[key], real_migration1[key])

    def test_migration_get_by_uuid_soft_deleted_and_deleted(self):
        migration1 = self._create(uuid=uuidsentinel.migration1_uuid)

        @sqlalchemy_api.pick_context_manager_writer
        def soft_delete_it(context):
            sqlalchemy_api.model_query(context, models.Migration).\
                filter_by(uuid=uuidsentinel.migration1_uuid).\
                soft_delete()

        @sqlalchemy_api.pick_context_manager_writer
        def delete_it(context):
            sqlalchemy_api.model_query(context, models.Migration,
                                       read_deleted="yes").\
                filter_by(uuid=uuidsentinel.migration1_uuid).\
                delete()

        soft_delete_it(self.ctxt)
        soft_deleted_migration1 = db.migration_get_by_uuid(
            self.ctxt, uuidsentinel.migration1_uuid)
        for key in migration1:
            self.assertEqual(migration1[key], soft_deleted_migration1[key])

        delete_it(self.ctxt)
        self.assertRaises(exception.MigrationNotFound,
                          db.migration_get_by_uuid,
                          self.ctxt, uuidsentinel.migration1_uuid)

    def test_migration_get_by_uuid_not_found(self):
        """Asserts that MigrationNotFound is raised if a migration is not
        found by a given uuid.
""" self.assertRaises(exception.MigrationNotFound, db.migration_get_by_uuid, self.ctxt, uuidsentinel.migration_not_found) def test_get_migrations_by_filters(self): filters = {"status": "migrating", "host": "host3", "migration_type": None, "hidden": False} migrations = db.migration_get_all_by_filters(self.ctxt, filters) self.assertEqual(2, len(migrations)) for migration in migrations: self.assertEqual(filters["status"], migration['status']) hosts = [migration['source_compute'], migration['dest_compute']] self.assertIn(filters["host"], hosts) def test_get_migrations_by_uuid_filters(self): mig_uuid1 = self._create(uuid=uuidsentinel.mig_uuid1) filters = {"uuid": [uuidsentinel.mig_uuid1]} mig_get = db.migration_get_all_by_filters(self.ctxt, filters) self.assertEqual(1, len(mig_get)) for key in mig_uuid1: self.assertEqual(mig_uuid1[key], mig_get[0][key]) def test_get_migrations_by_filters_with_multiple_statuses(self): filters = {"status": ["reverted", "confirmed"], "migration_type": None, "hidden": False} migrations = db.migration_get_all_by_filters(self.ctxt, filters) self.assertEqual(2, len(migrations)) for migration in migrations: self.assertIn(migration['status'], filters['status']) def test_get_migrations_by_filters_unicode_status(self): self._create(status=u"unicode") filters = {"status": u"unicode"} migrations = db.migration_get_all_by_filters(self.ctxt, filters) self.assertEqual(1, len(migrations)) for migration in migrations: self.assertIn(migration['status'], filters['status']) def test_get_migrations_by_filters_with_type(self): self._create(status="special", source_compute="host9", migration_type="evacuation") self._create(status="special", source_compute="host9", migration_type="live-migration") filters = {"status": "special", "host": "host9", "migration_type": "evacuation", "hidden": False} migrations = db.migration_get_all_by_filters(self.ctxt, filters) self.assertEqual(1, len(migrations)) def test_get_migrations_by_filters_source_compute(self): filters = {'source_compute': 'host2'} migrations = db.migration_get_all_by_filters(self.ctxt, filters) self.assertEqual(2, len(migrations)) sources = [x['source_compute'] for x in migrations] self.assertEqual(['host2', 'host2'], sources) dests = [x['dest_compute'] for x in migrations] self.assertEqual(['host1', 'host3'], dests) def test_get_migrations_by_filters_instance_uuid(self): migrations = db.migration_get_all_by_filters(self.ctxt, filters={}) for migration in migrations: filters = {'instance_uuid': migration['instance_uuid']} instance_migrations = db.migration_get_all_by_filters( self.ctxt, filters) self.assertEqual(1, len(instance_migrations)) self.assertEqual(migration['instance_uuid'], instance_migrations[0]['instance_uuid']) def test_get_migrations_by_filters_user_id(self): # Create two migrations with different user_id user_id1 = "fake_user_id" self._create(user_id=user_id1) user_id2 = "other_fake_user_id" self._create(user_id=user_id2) # Filter on only the first user_id filters = {"user_id": user_id1} migrations = db.migration_get_all_by_filters(self.ctxt, filters) # We should only get one migration back because we filtered on only # one of the two different user_id self.assertEqual(1, len(migrations)) for migration in migrations: self.assertEqual(filters['user_id'], migration['user_id']) def test_get_migrations_by_filters_project_id(self): # Create two migrations with different project_id project_id1 = "fake_project_id" self._create(project_id=project_id1) project_id2 = "other_fake_project_id" 
self._create(project_id=project_id2) # Filter on only the first project_id filters = {"project_id": project_id1} migrations = db.migration_get_all_by_filters(self.ctxt, filters) # We should only get one migration back because we filtered on only # one of the two different project_id self.assertEqual(1, len(migrations)) for migration in migrations: self.assertEqual(filters['project_id'], migration['project_id']) def test_get_migrations_by_filters_user_id_and_project_id(self): # Create two migrations with different user_id and project_id user_id1 = "fake_user_id" project_id1 = "fake_project_id" self._create(user_id=user_id1, project_id=project_id1) user_id2 = "other_fake_user_id" project_id2 = "other_fake_project_id" self._create(user_id=user_id2, project_id=project_id2) # Filter on only the first user_id and project_id filters = {"user_id": user_id1, "project_id": project_id1} migrations = db.migration_get_all_by_filters(self.ctxt, filters) # We should only get one migration back because we filtered on only # one of the two different user_id and project_id self.assertEqual(1, len(migrations)) for migration in migrations: self.assertEqual(filters['user_id'], migration['user_id']) self.assertEqual(filters['project_id'], migration['project_id']) def test_migration_get_unconfirmed_by_dest_compute(self): # Ensure no migrations are returned. results = db.migration_get_unconfirmed_by_dest_compute(self.ctxt, 10, 'fake_host') self.assertEqual(0, len(results)) # Ensure no migrations are returned. results = db.migration_get_unconfirmed_by_dest_compute(self.ctxt, 10, 'fake_host2') self.assertEqual(0, len(results)) updated_at = datetime.datetime(2000, 1, 1, 12, 0, 0) values = {"status": "finished", "updated_at": updated_at, "dest_compute": "fake_host2"} migration = db.migration_create(self.ctxt, values) # Ensure different host is not returned results = db.migration_get_unconfirmed_by_dest_compute(self.ctxt, 10, 'fake_host') self.assertEqual(0, len(results)) # Ensure one migration older than 10 seconds is returned. results = db.migration_get_unconfirmed_by_dest_compute(self.ctxt, 10, 'fake_host2') self.assertEqual(1, len(results)) db.migration_update(self.ctxt, migration['id'], {"status": "CONFIRMED"}) # Ensure the new migration is not returned. 
        updated_at = timeutils.utcnow()
        values = {"status": "finished", "updated_at": updated_at,
                  "dest_compute": "fake_host2"}
        migration = db.migration_create(self.ctxt, values)
        results = db.migration_get_unconfirmed_by_dest_compute(self.ctxt, 10,
                "fake_host2")
        self.assertEqual(0, len(results))
        db.migration_update(self.ctxt, migration['id'],
                            {"status": "CONFIRMED"})

    def test_migration_get_in_progress_by_instance(self):
        values = self._create(status='running',
                              migration_type="live-migration")
        results = db.migration_get_in_progress_by_instance(
                self.ctxt, values["instance_uuid"], "live-migration")
        self.assertEqual(1, len(results))
        for key in values:
            self.assertEqual(values[key], results[0][key])
        self.assertEqual("running", results[0]["status"])

    def test_migration_get_in_progress_by_instance_not_in_progress(self):
        values = self._create(migration_type="live-migration")
        results = db.migration_get_in_progress_by_instance(
                self.ctxt, values["instance_uuid"], "live-migration")
        self.assertEqual(0, len(results))

    def test_migration_get_in_progress_by_instance_not_live_migration(self):
        values = self._create(migration_type="resize")
        results = db.migration_get_in_progress_by_instance(
                self.ctxt, values["instance_uuid"], "live-migration")
        self.assertEqual(0, len(results))
        results = db.migration_get_in_progress_by_instance(
                self.ctxt, values["instance_uuid"])
        self.assertEqual(0, len(results))

    def test_migration_update_not_found(self):
        self.assertRaises(exception.MigrationNotFound,
                          db.migration_update, self.ctxt, 42, {})

    def test_get_migration_for_instance(self):
        migrations = db.migration_get_all_by_filters(self.ctxt, [])
        migration_id = migrations[0].id
        instance_uuid = migrations[0].instance_uuid
        instance_migration = db.migration_get_by_id_and_instance(
            self.ctxt, migration_id, instance_uuid)
        self.assertEqual(migration_id, instance_migration.id)
        self.assertEqual(instance_uuid, instance_migration.instance_uuid)

    def test_get_migration_for_instance_not_found(self):
        self.assertRaises(exception.MigrationNotFoundForInstance,
                          db.migration_get_by_id_and_instance, self.ctxt,
                          '500', '501')

    def _create_3_migration_after_time(self, time=None):
        time = time or timeutils.utcnow()
        tmp_time = time + datetime.timedelta(days=1)
        after_1hour = datetime.timedelta(hours=1)
        self._create(uuid=uuidsentinel.uuid_time1, created_at=tmp_time,
                     updated_at=tmp_time + after_1hour)
        tmp_time = time + datetime.timedelta(days=2)
        self._create(uuid=uuidsentinel.uuid_time2, created_at=tmp_time,
                     updated_at=tmp_time + after_1hour)
        tmp_time = time + datetime.timedelta(days=3)
        self._create(uuid=uuidsentinel.uuid_time3, created_at=tmp_time,
                     updated_at=tmp_time + after_1hour)

    def test_get_migrations_by_filters_with_limit(self):
        migrations = db.migration_get_all_by_filters(self.ctxt, {}, limit=3)
        self.assertEqual(3, len(migrations))

    def test_get_migrations_by_filters_with_limit_marker(self):
        self._create_3_migration_after_time()
        # order by created_at, desc: time3, time2, time1
        migrations = db.migration_get_all_by_filters(
            self.ctxt, {}, limit=2, marker=uuidsentinel.uuid_time3)
        # time3 as marker: time2, time1
        self.assertEqual(2, len(migrations))
        self.assertEqual(migrations[0]['uuid'], uuidsentinel.uuid_time2)
        self.assertEqual(migrations[1]['uuid'], uuidsentinel.uuid_time1)
        # time3 as marker, limit 1: time2
        migrations = db.migration_get_all_by_filters(
            self.ctxt, {}, limit=1, marker=uuidsentinel.uuid_time3)
        self.assertEqual(1, len(migrations))
        self.assertEqual(migrations[0]['uuid'], uuidsentinel.uuid_time2)

    def test_get_migrations_by_filters_with_limit_marker_sort(self):
        self._create_3_migration_after_time()
        # order by created_at, desc: time3, time2, time1
        migrations = db.migration_get_all_by_filters(
            self.ctxt, {}, limit=2, marker=uuidsentinel.uuid_time3)
        # time2, time1
        self.assertEqual(2, len(migrations))
        self.assertEqual(migrations[0]['uuid'], uuidsentinel.uuid_time2)
        self.assertEqual(migrations[1]['uuid'], uuidsentinel.uuid_time1)
        # order by updated_at, asc: time1, time2, time3
        migrations = db.migration_get_all_by_filters(
            self.ctxt, {}, sort_keys=['updated_at'], sort_dirs=['asc'],
            limit=2, marker=uuidsentinel.uuid_time1)
        # time2, time3
        self.assertEqual(2, len(migrations))
        self.assertEqual(migrations[0]['uuid'], uuidsentinel.uuid_time2)
        self.assertEqual(migrations[1]['uuid'], uuidsentinel.uuid_time3)

    def test_get_migrations_by_filters_with_not_found_marker(self):
        self.assertRaises(exception.MarkerNotFound,
                          db.migration_get_all_by_filters, self.ctxt,
                          {}, marker=uuidsentinel.not_found_marker)

    def test_get_migrations_by_filters_with_changes_since(self):
        changes_time = timeutils.utcnow(with_timezone=True)
        self._create_3_migration_after_time(changes_time)
        after_1day_2hours = datetime.timedelta(days=1, hours=2)
        filters = {"changes-since": changes_time + after_1day_2hours}
        migrations = db.migration_get_all_by_filters(
            self.ctxt, filters,
            sort_keys=['updated_at'], sort_dirs=['asc'])
        self.assertEqual(2, len(migrations))
        self.assertEqual(migrations[0]['uuid'], uuidsentinel.uuid_time2)
        self.assertEqual(migrations[1]['uuid'], uuidsentinel.uuid_time3)

    def test_get_migrations_by_filters_with_changes_before(self):
        changes_time = timeutils.utcnow(with_timezone=True)
        self._create_3_migration_after_time(changes_time)
        after_3day_2hours = datetime.timedelta(days=3, hours=2)
        filters = {"changes-before": changes_time + after_3day_2hours}
        migrations = db.migration_get_all_by_filters(
            self.ctxt, filters,
            sort_keys=['updated_at'], sort_dirs=['asc'])
        self.assertEqual(3, len(migrations))
        self.assertEqual(migrations[0]['uuid'], uuidsentinel.uuid_time1)
        self.assertEqual(migrations[1]['uuid'], uuidsentinel.uuid_time2)
        self.assertEqual(migrations[2]['uuid'], uuidsentinel.uuid_time3)


class ModelsObjectComparatorMixin(object):
    def _dict_from_object(self, obj, ignored_keys):
        if ignored_keys is None:
            ignored_keys = []

        return {k: v for k, v in obj.items()
                if k not in ignored_keys}

    def _assertEqualObjects(self, obj1, obj2, ignored_keys=None):
        obj1 = self._dict_from_object(obj1, ignored_keys)
        obj2 = self._dict_from_object(obj2, ignored_keys)

        self.assertEqual(len(obj1),
                         len(obj2),
                         "Keys mismatch: %s" %
                         str(set(obj1.keys()) ^ set(obj2.keys())))
        for key, value in obj1.items():
            self.assertEqual(value, obj2[key], "Key mismatch: %s" % key)

    def _assertEqualListsOfObjects(self, objs1, objs2, ignored_keys=None):
        obj_to_dict = lambda o: self._dict_from_object(o, ignored_keys)
        sort_key = lambda d: [d[k] for k in sorted(d)]
        conv_and_sort = lambda obj: sorted(map(obj_to_dict, obj), key=sort_key)

        self.assertEqual(conv_and_sort(objs1), conv_and_sort(objs2))

    def _assertEqualOrderedListOfObjects(self, objs1, objs2,
                                         ignored_keys=None):
        obj_to_dict = lambda o: self._dict_from_object(o, ignored_keys)
        conv = lambda objs: [obj_to_dict(obj) for obj in objs]

        self.assertEqual(conv(objs1), conv(objs2))

    def _assertEqualListsOfPrimitivesAsSets(self, primitives1, primitives2):
        self.assertEqual(len(primitives1), len(primitives2))
        for primitive in primitives1:
            self.assertIn(primitive, primitives2)

        for primitive in primitives2:
            self.assertIn(primitive,
primitives1) class InstanceSystemMetadataTestCase(test.TestCase): """Tests for db.api.instance_system_metadata_* methods.""" def setUp(self): super(InstanceSystemMetadataTestCase, self).setUp() values = {'host': 'h1', 'project_id': 'p1', 'system_metadata': {'key': 'value'}} self.ctxt = context.get_admin_context() self.instance = db.instance_create(self.ctxt, values) def test_instance_system_metadata_get(self): metadata = db.instance_system_metadata_get(self.ctxt, self.instance['uuid']) self.assertEqual(metadata, {'key': 'value'}) def test_instance_system_metadata_update_new_pair(self): db.instance_system_metadata_update( self.ctxt, self.instance['uuid'], {'new_key': 'new_value'}, False) metadata = db.instance_system_metadata_get(self.ctxt, self.instance['uuid']) self.assertEqual(metadata, {'key': 'value', 'new_key': 'new_value'}) def test_instance_system_metadata_update_existent_pair(self): db.instance_system_metadata_update( self.ctxt, self.instance['uuid'], {'key': 'new_value'}, True) metadata = db.instance_system_metadata_get(self.ctxt, self.instance['uuid']) self.assertEqual(metadata, {'key': 'new_value'}) def test_instance_system_metadata_update_delete_true(self): db.instance_system_metadata_update( self.ctxt, self.instance['uuid'], {'new_key': 'new_value'}, True) metadata = db.instance_system_metadata_get(self.ctxt, self.instance['uuid']) self.assertEqual(metadata, {'new_key': 'new_value'}) @test.testtools.skip("bug 1189462") def test_instance_system_metadata_update_nonexistent(self): self.assertRaises(exception.InstanceNotFound, db.instance_system_metadata_update, self.ctxt, 'nonexistent-uuid', {'key': 'value'}, True) @mock.patch('time.sleep', new=lambda x: None) class InstanceTestCase(test.TestCase, ModelsObjectComparatorMixin): """Tests for db.api.instance_* methods.""" sample_data = { 'project_id': 'project1', 'hostname': 'example.com', 'host': 'h1', 'node': 'n1', 'metadata': {'mkey1': 'mval1', 'mkey2': 'mval2'}, 'system_metadata': {'smkey1': 'smval1', 'smkey2': 'smval2'}, 'info_cache': {'ckey': 'cvalue'}, } def setUp(self): super(InstanceTestCase, self).setUp() self.ctxt = context.get_admin_context() def _assertEqualInstances(self, instance1, instance2): self._assertEqualObjects(instance1, instance2, ignored_keys=['metadata', 'system_metadata', 'info_cache', 'extra']) def _assertEqualListsOfInstances(self, list1, list2): self._assertEqualListsOfObjects(list1, list2, ignored_keys=['metadata', 'system_metadata', 'info_cache', 'extra']) def create_instance_with_args(self, **kwargs): if 'context' in kwargs: context = kwargs.pop('context') else: context = self.ctxt args = self.sample_data.copy() args.update(kwargs) return db.instance_create(context, args) def test_instance_create(self): instance = self.create_instance_with_args() self.assertTrue(uuidutils.is_uuid_like(instance['uuid'])) @mock.patch.object(sqlalchemy_api, 'security_group_ensure_default') def test_instance_create_with_deadlock_retry(self, mock_sg): mock_sg.side_effect = [db_exc.DBDeadlock(), None] instance = self.create_instance_with_args() self.assertTrue(uuidutils.is_uuid_like(instance['uuid'])) def test_instance_create_with_object_values(self): values = { 'access_ip_v4': netaddr.IPAddress('1.2.3.4'), 'access_ip_v6': netaddr.IPAddress('::1'), } dt_keys = ('created_at', 'deleted_at', 'updated_at', 'launched_at', 'terminated_at') dt = timeutils.utcnow() dt_utc = dt.replace(tzinfo=iso8601.UTC) for key in dt_keys: values[key] = dt_utc inst = db.instance_create(self.ctxt, values) self.assertEqual(inst['access_ip_v4'], 
'1.2.3.4') self.assertEqual(inst['access_ip_v6'], '::1') for key in dt_keys: self.assertEqual(inst[key], dt) def test_instance_update_with_object_values(self): values = { 'access_ip_v4': netaddr.IPAddress('1.2.3.4'), 'access_ip_v6': netaddr.IPAddress('::1'), } dt_keys = ('created_at', 'deleted_at', 'updated_at', 'launched_at', 'terminated_at') dt = timeutils.utcnow() dt_utc = dt.replace(tzinfo=iso8601.UTC) for key in dt_keys: values[key] = dt_utc inst = db.instance_create(self.ctxt, {}) inst = db.instance_update(self.ctxt, inst['uuid'], values) self.assertEqual(inst['access_ip_v4'], '1.2.3.4') self.assertEqual(inst['access_ip_v6'], '::1') for key in dt_keys: self.assertEqual(inst[key], dt) def test_instance_update_no_metadata_clobber(self): meta = {'foo': 'bar'} sys_meta = {'sfoo': 'sbar'} values = { 'metadata': meta, 'system_metadata': sys_meta, } inst = db.instance_create(self.ctxt, {}) inst = db.instance_update(self.ctxt, inst['uuid'], values) self.assertEqual(meta, utils.metadata_to_dict(inst['metadata'])) self.assertEqual(sys_meta, utils.metadata_to_dict(inst['system_metadata'])) def test_instance_get_all_with_meta(self): self.create_instance_with_args() for inst in db.instance_get_all(self.ctxt): meta = utils.metadata_to_dict(inst['metadata']) self.assertEqual(meta, self.sample_data['metadata']) sys_meta = utils.metadata_to_dict(inst['system_metadata']) self.assertEqual(sys_meta, self.sample_data['system_metadata']) def test_instance_get_with_meta(self): inst_id = self.create_instance_with_args().id inst = db.instance_get(self.ctxt, inst_id) meta = utils.metadata_to_dict(inst['metadata']) self.assertEqual(meta, self.sample_data['metadata']) sys_meta = utils.metadata_to_dict(inst['system_metadata']) self.assertEqual(sys_meta, self.sample_data['system_metadata']) def test_instance_update(self): instance = self.create_instance_with_args() metadata = {'host': 'bar', 'key2': 'wuff'} system_metadata = {'original_image_ref': 'baz'} # Update the metadata db.instance_update(self.ctxt, instance['uuid'], {'metadata': metadata, 'system_metadata': system_metadata}) # Retrieve the user-provided metadata to ensure it was successfully # updated self.assertEqual(metadata, db.instance_metadata_get(self.ctxt, instance['uuid'])) self.assertEqual(system_metadata, db.instance_system_metadata_get(self.ctxt, instance['uuid'])) def test_instance_update_bad_str_dates(self): instance = self.create_instance_with_args() values = {'created_at': '123'} self.assertRaises(ValueError, db.instance_update, self.ctxt, instance['uuid'], values) def test_instance_update_good_str_dates(self): instance = self.create_instance_with_args() values = {'created_at': '2011-01-31T00:00:00.0'} actual = db.instance_update(self.ctxt, instance['uuid'], values) expected = datetime.datetime(2011, 1, 31) self.assertEqual(expected, actual["created_at"]) def test_create_instance_unique_hostname(self): context1 = context.RequestContext('user1', 'p1') context2 = context.RequestContext('user2', 'p2') self.create_instance_with_args(hostname='h1', project_id='p1') # With scope 'global' any duplicate should fail, be it this project: self.flags(osapi_compute_unique_server_name_scope='global') self.assertRaises(exception.InstanceExists, self.create_instance_with_args, context=context1, hostname='h1', project_id='p3') # or another: self.assertRaises(exception.InstanceExists, self.create_instance_with_args, context=context2, hostname='h1', project_id='p2') # With scope 'project' a duplicate in the project should fail: 
self.flags(osapi_compute_unique_server_name_scope='project') self.assertRaises(exception.InstanceExists, self.create_instance_with_args, context=context1, hostname='h1', project_id='p1') # With scope 'project' a duplicate in a different project should work: self.flags(osapi_compute_unique_server_name_scope='project') self.create_instance_with_args(context=context2, hostname='h2') self.flags(osapi_compute_unique_server_name_scope=None) def test_instance_get_all_by_filters_empty_list_filter(self): filters = {'uuid': []} instances = db.instance_get_all_by_filters_sort(self.ctxt, filters) self.assertEqual([], instances) @mock.patch('nova.db.sqlalchemy.api.undefer') @mock.patch('nova.db.sqlalchemy.api.joinedload') def test_instance_get_all_by_filters_extra_columns(self, mock_joinedload, mock_undefer): db.instance_get_all_by_filters_sort( self.ctxt, {}, columns_to_join=['info_cache', 'extra.pci_requests']) mock_joinedload.assert_called_once_with('info_cache') mock_undefer.assert_called_once_with('extra.pci_requests') @mock.patch('nova.db.sqlalchemy.api.undefer') @mock.patch('nova.db.sqlalchemy.api.joinedload') def test_instance_get_active_by_window_extra_columns(self, mock_joinedload, mock_undefer): now = datetime.datetime(2013, 10, 10, 17, 16, 37, 156701) db.instance_get_active_by_window_joined( self.ctxt, now, columns_to_join=['info_cache', 'extra.pci_requests']) mock_joinedload.assert_called_once_with('info_cache') mock_undefer.assert_called_once_with('extra.pci_requests') def test_instance_get_all_by_filters_with_meta(self): self.create_instance_with_args() for inst in db.instance_get_all_by_filters(self.ctxt, {}): meta = utils.metadata_to_dict(inst['metadata']) self.assertEqual(meta, self.sample_data['metadata']) sys_meta = utils.metadata_to_dict(inst['system_metadata']) self.assertEqual(sys_meta, self.sample_data['system_metadata']) def test_instance_get_all_by_filters_without_meta(self): self.create_instance_with_args() result = db.instance_get_all_by_filters(self.ctxt, {}, columns_to_join=[]) for inst in result: meta = utils.metadata_to_dict(inst['metadata']) self.assertEqual(meta, {}) sys_meta = utils.metadata_to_dict(inst['system_metadata']) self.assertEqual(sys_meta, {}) def test_instance_get_all_by_filters_with_fault(self): inst = self.create_instance_with_args() result = db.instance_get_all_by_filters(self.ctxt, {}, columns_to_join=['fault']) self.assertIsNone(result[0]['fault']) db.instance_fault_create(self.ctxt, {'instance_uuid': inst['uuid'], 'code': 123}) fault2 = db.instance_fault_create(self.ctxt, {'instance_uuid': inst['uuid'], 'code': 123}) result = db.instance_get_all_by_filters(self.ctxt, {}, columns_to_join=['fault']) # Make sure we get the latest fault self.assertEqual(fault2['id'], result[0]['fault']['id']) def test_instance_get_all_by_filters(self): instances = [self.create_instance_with_args() for i in range(3)] filtered_instances = db.instance_get_all_by_filters(self.ctxt, {}) self._assertEqualListsOfInstances(instances, filtered_instances) def test_instance_get_all_by_filters_zero_limit(self): self.create_instance_with_args() instances = db.instance_get_all_by_filters(self.ctxt, {}, limit=0) self.assertEqual([], instances) def test_instance_metadata_get_multi(self): uuids = [self.create_instance_with_args()['uuid'] for i in range(3)] @sqlalchemy_api.pick_context_manager_reader def test(context): return sqlalchemy_api._instance_metadata_get_multi( context, uuids) meta = test(self.ctxt) for row in meta: self.assertIn(row['instance_uuid'], uuids) 
@mock.patch.object(query.Query, 'filter') def test_instance_metadata_get_multi_no_uuids(self, mock_query_filter): with sqlalchemy_api.main_context_manager.reader.using(self.ctxt): sqlalchemy_api._instance_metadata_get_multi(self.ctxt, []) self.assertFalse(mock_query_filter.called) def test_instance_system_system_metadata_get_multi(self): uuids = [self.create_instance_with_args()['uuid'] for i in range(3)] @sqlalchemy_api.pick_context_manager_reader def test(context): return sqlalchemy_api._instance_system_metadata_get_multi( context, uuids) sys_meta = test(self.ctxt) for row in sys_meta: self.assertIn(row['instance_uuid'], uuids) @mock.patch.object(query.Query, 'filter') def test_instance_system_metadata_get_multi_no_uuids(self, mock_query_filter): sqlalchemy_api._instance_system_metadata_get_multi(self.ctxt, []) self.assertFalse(mock_query_filter.called) def test_instance_get_all_by_filters_regex(self): i1 = self.create_instance_with_args(display_name='test1') i2 = self.create_instance_with_args(display_name='teeeest2') self.create_instance_with_args(display_name='diff') result = db.instance_get_all_by_filters(self.ctxt, {'display_name': 't.*st.'}) self._assertEqualListsOfInstances(result, [i1, i2]) def test_instance_get_all_by_filters_changes_since(self): i1 = self.create_instance_with_args(updated_at= '2013-12-05T15:03:25.000000') i2 = self.create_instance_with_args(updated_at= '2013-12-05T15:03:26.000000') changes_since = iso8601.parse_date('2013-12-05T15:03:25.000000') result = db.instance_get_all_by_filters(self.ctxt, {'changes-since': changes_since}) self._assertEqualListsOfInstances([i1, i2], result) changes_since = iso8601.parse_date('2013-12-05T15:03:26.000000') result = db.instance_get_all_by_filters(self.ctxt, {'changes-since': changes_since}) self._assertEqualListsOfInstances([i2], result) db.instance_destroy(self.ctxt, i1['uuid']) filters = {} filters['changes-since'] = changes_since filters['marker'] = i1['uuid'] result = db.instance_get_all_by_filters(self.ctxt, filters) self._assertEqualListsOfInstances([i2], result) def test_instance_get_all_by_filters_changes_before(self): i1 = self.create_instance_with_args(updated_at= '2013-12-05T15:03:25.000000') i2 = self.create_instance_with_args(updated_at= '2013-12-05T15:03:26.000000') changes_before = iso8601.parse_date('2013-12-05T15:03:26.000000') result = db.instance_get_all_by_filters(self.ctxt, {'changes-before': changes_before}) self._assertEqualListsOfInstances([i1, i2], result) changes_before = iso8601.parse_date('2013-12-05T15:03:25.000000') result = db.instance_get_all_by_filters(self.ctxt, {'changes-before': changes_before}) self._assertEqualListsOfInstances([i1], result) db.instance_destroy(self.ctxt, i2['uuid']) filters = {} filters['changes-before'] = changes_before filters['marker'] = i2['uuid'] result = db.instance_get_all_by_filters(self.ctxt, filters) self._assertEqualListsOfInstances([i1], result) def test_instance_get_all_by_filters_changes_time_period(self): i1 = self.create_instance_with_args(updated_at= '2013-12-05T15:03:25.000000') i2 = self.create_instance_with_args(updated_at= '2013-12-05T15:03:26.000000') i3 = self.create_instance_with_args(updated_at= '2013-12-05T15:03:27.000000') changes_since = iso8601.parse_date('2013-12-05T15:03:25.000000') changes_before = iso8601.parse_date('2013-12-05T15:03:27.000000') result = db.instance_get_all_by_filters(self.ctxt, {'changes-since': changes_since, 'changes-before': changes_before}) self._assertEqualListsOfInstances([i1, i2, i3], result) changes_since = 
iso8601.parse_date('2013-12-05T15:03:26.000000') changes_before = iso8601.parse_date('2013-12-05T15:03:27.000000') result = db.instance_get_all_by_filters(self.ctxt, {'changes-since': changes_since, 'changes-before': changes_before}) self._assertEqualListsOfInstances([i2, i3], result) db.instance_destroy(self.ctxt, i1['uuid']) filters = {} filters['changes-since'] = changes_since filters['changes-before'] = changes_before filters['marker'] = i1['uuid'] result = db.instance_get_all_by_filters(self.ctxt, filters) self._assertEqualListsOfInstances([i2, i3], result) def test_instance_get_all_by_filters_exact_match(self): instance = self.create_instance_with_args(host='host1') self.create_instance_with_args(host='host12') result = db.instance_get_all_by_filters(self.ctxt, {'host': 'host1'}) self._assertEqualListsOfInstances([instance], result) def test_instance_get_all_by_filters_locked_key_true(self): instance = self.create_instance_with_args(locked=True) self.create_instance_with_args(locked=False) result = db.instance_get_all_by_filters(self.ctxt, {'locked': True}) self._assertEqualListsOfInstances([instance], result) def test_instance_get_all_by_filters_locked_key_false(self): self.create_instance_with_args(locked=True) result = db.instance_get_all_by_filters(self.ctxt, {'locked': False}) self._assertEqualListsOfInstances([], result) def test_instance_get_all_by_filters_metadata(self): instance = self.create_instance_with_args(metadata={'foo': 'bar'}) self.create_instance_with_args() result = db.instance_get_all_by_filters(self.ctxt, {'metadata': {'foo': 'bar'}}) self._assertEqualListsOfInstances([instance], result) def test_instance_get_all_by_filters_system_metadata(self): instance = self.create_instance_with_args( system_metadata={'foo': 'bar'}) self.create_instance_with_args() result = db.instance_get_all_by_filters(self.ctxt, {'system_metadata': {'foo': 'bar'}}) self._assertEqualListsOfInstances([instance], result) def test_instance_get_all_by_filters_unicode_value(self): i1 = self.create_instance_with_args(display_name=u'test♥') i2 = self.create_instance_with_args(display_name=u'test') i3 = self.create_instance_with_args(display_name=u'test♥test') self.create_instance_with_args(display_name='diff') result = db.instance_get_all_by_filters(self.ctxt, {'display_name': u'test'}) self._assertEqualListsOfInstances([i1, i2, i3], result) result = db.instance_get_all_by_filters(self.ctxt, {'display_name': u'test♥'}) self._assertEqualListsOfInstances(result, [i1, i3]) def test_instance_get_by_uuid(self): inst = self.create_instance_with_args() result = db.instance_get_by_uuid(self.ctxt, inst['uuid']) # instance_create() will return a fault=None, so delete it before # comparing the result of instance_get_by_uuid() del inst.fault self._assertEqualInstances(inst, result) def test_instance_get_by_uuid_join_empty(self): inst = self.create_instance_with_args() result = db.instance_get_by_uuid(self.ctxt, inst['uuid'], columns_to_join=[]) meta = utils.metadata_to_dict(result['metadata']) self.assertEqual(meta, {}) sys_meta = utils.metadata_to_dict(result['system_metadata']) self.assertEqual(sys_meta, {}) def test_instance_get_by_uuid_join_meta(self): inst = self.create_instance_with_args() result = db.instance_get_by_uuid(self.ctxt, inst['uuid'], columns_to_join=['metadata']) meta = utils.metadata_to_dict(result['metadata']) self.assertEqual(meta, self.sample_data['metadata']) sys_meta = utils.metadata_to_dict(result['system_metadata']) self.assertEqual(sys_meta, {}) def 
test_instance_get_by_uuid_join_sys_meta(self): inst = self.create_instance_with_args() result = db.instance_get_by_uuid(self.ctxt, inst['uuid'], columns_to_join=['system_metadata']) meta = utils.metadata_to_dict(result['metadata']) self.assertEqual(meta, {}) sys_meta = utils.metadata_to_dict(result['system_metadata']) self.assertEqual(sys_meta, self.sample_data['system_metadata']) def test_instance_get_all_by_filters_deleted(self): inst1 = self.create_instance_with_args() inst2 = self.create_instance_with_args(reservation_id='b') db.instance_destroy(self.ctxt, inst1['uuid']) result = db.instance_get_all_by_filters(self.ctxt, {}) self._assertEqualListsOfObjects([inst1, inst2], result, ignored_keys=['metadata', 'system_metadata', 'deleted', 'deleted_at', 'info_cache', 'pci_devices', 'extra']) def test_instance_get_all_by_filters_deleted_and_soft_deleted(self): inst1 = self.create_instance_with_args() inst2 = self.create_instance_with_args(vm_state=vm_states.SOFT_DELETED) self.create_instance_with_args() db.instance_destroy(self.ctxt, inst1['uuid']) result = db.instance_get_all_by_filters(self.ctxt, {'deleted': True}) self._assertEqualListsOfObjects([inst1, inst2], result, ignored_keys=['metadata', 'system_metadata', 'deleted', 'deleted_at', 'info_cache', 'pci_devices', 'extra']) def test_instance_get_all_by_filters_deleted_no_soft_deleted(self): inst1 = self.create_instance_with_args() self.create_instance_with_args(vm_state=vm_states.SOFT_DELETED) self.create_instance_with_args() db.instance_destroy(self.ctxt, inst1['uuid']) result = db.instance_get_all_by_filters(self.ctxt, {'deleted': True, 'soft_deleted': False}) self._assertEqualListsOfObjects([inst1], result, ignored_keys=['deleted', 'deleted_at', 'metadata', 'system_metadata', 'info_cache', 'pci_devices', 'extra']) def test_instance_get_all_by_filters_alive_and_soft_deleted(self): inst1 = self.create_instance_with_args() inst2 = self.create_instance_with_args(vm_state=vm_states.SOFT_DELETED) inst3 = self.create_instance_with_args() db.instance_destroy(self.ctxt, inst1['uuid']) result = db.instance_get_all_by_filters(self.ctxt, {'deleted': False, 'soft_deleted': True}) self._assertEqualListsOfInstances([inst2, inst3], result) def test_instance_get_all_by_filters_not_deleted(self): inst1 = self.create_instance_with_args() self.create_instance_with_args(vm_state=vm_states.SOFT_DELETED) inst3 = self.create_instance_with_args() inst4 = self.create_instance_with_args(vm_state=vm_states.ACTIVE) db.instance_destroy(self.ctxt, inst1['uuid']) result = db.instance_get_all_by_filters(self.ctxt, {'deleted': False}) self.assertIsNone(inst3.vm_state) self._assertEqualListsOfInstances([inst3, inst4], result) def test_instance_get_all_by_filters_cleaned(self): inst1 = self.create_instance_with_args() inst2 = self.create_instance_with_args(reservation_id='b') db.instance_update(self.ctxt, inst1['uuid'], {'cleaned': 1}) result = db.instance_get_all_by_filters(self.ctxt, {}) self.assertEqual(2, len(result)) self.assertIn(inst1['uuid'], [result[0]['uuid'], result[1]['uuid']]) self.assertIn(inst2['uuid'], [result[0]['uuid'], result[1]['uuid']]) if inst1['uuid'] == result[0]['uuid']: self.assertTrue(result[0]['cleaned']) self.assertFalse(result[1]['cleaned']) else: self.assertTrue(result[1]['cleaned']) self.assertFalse(result[0]['cleaned']) def test_instance_get_all_by_host_and_node_no_join(self): instance = self.create_instance_with_args() result = db.instance_get_all_by_host_and_node(self.ctxt, 'h1', 'n1') self.assertEqual(result[0]['uuid'], 
instance['uuid']) self.assertEqual(result[0]['system_metadata'], []) def test_instance_get_all_by_host_and_node(self): instance = self.create_instance_with_args( system_metadata={'foo': 'bar'}) result = db.instance_get_all_by_host_and_node( self.ctxt, 'h1', 'n1', columns_to_join=['system_metadata', 'extra']) self.assertEqual(instance['uuid'], result[0]['uuid']) self.assertEqual('bar', result[0]['system_metadata'][0]['value']) self.assertEqual(instance['uuid'], result[0]['extra']['instance_uuid']) @mock.patch('nova.db.sqlalchemy.api._instances_fill_metadata') @mock.patch('nova.db.sqlalchemy.api._instance_get_all_query') def test_instance_get_all_by_host_and_node_fills_manually(self, mock_getall, mock_fill): db.instance_get_all_by_host_and_node( self.ctxt, 'h1', 'n1', columns_to_join=['metadata', 'system_metadata', 'extra', 'foo']) self.assertEqual(sorted(['extra', 'foo']), sorted(mock_getall.call_args[1]['joins'])) self.assertEqual(sorted(['metadata', 'system_metadata']), sorted(mock_fill.call_args[1]['manual_joins'])) def test_instance_get_all_hung_in_rebooting(self): # Ensure no instances are returned. results = db.instance_get_all_hung_in_rebooting(self.ctxt, 10) self.assertEqual([], results) # Ensure one rebooting instance with updated_at older than 10 seconds # is returned. instance = self.create_instance_with_args(task_state="rebooting", updated_at=datetime.datetime(2000, 1, 1, 12, 0, 0)) results = db.instance_get_all_hung_in_rebooting(self.ctxt, 10) self._assertEqualListsOfObjects([instance], results, ignored_keys=['task_state', 'info_cache', 'security_groups', 'metadata', 'system_metadata', 'pci_devices', 'extra']) db.instance_update(self.ctxt, instance['uuid'], {"task_state": None}) # Ensure the newly rebooted instance is not returned. self.create_instance_with_args(task_state="rebooting", updated_at=timeutils.utcnow()) results = db.instance_get_all_hung_in_rebooting(self.ctxt, 10) self.assertEqual([], results) def test_instance_update_with_expected_vm_state(self): instance = self.create_instance_with_args(vm_state='foo') db.instance_update(self.ctxt, instance['uuid'], {'host': 'h1', 'expected_vm_state': ('foo', 'bar')}) def test_instance_update_with_unexpected_vm_state(self): instance = self.create_instance_with_args(vm_state='foo') self.assertRaises(exception.InstanceUpdateConflict, db.instance_update, self.ctxt, instance['uuid'], {'host': 'h1', 'expected_vm_state': ('spam', 'bar')}) def test_instance_update_with_instance_uuid(self): # test instance_update() works when an instance UUID is passed. 
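# (A sketch of the call pattern exercised below, assuming only the db API calls already used in this test: instance_update() takes the instance UUID plus a values dict, and a nested 'metadata'/'system_metadata' dict replaces the stored keys rather than merging with them, e.g.
#   db.instance_update(ctxt, instance['uuid'], {'metadata': {'host': 'bar'}})
# after which db.instance_metadata_get(ctxt, instance['uuid']) returns only the newly supplied keys, as asserted below.)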
ctxt = context.get_admin_context() # Create an instance with some metadata values = {'metadata': {'host': 'foo', 'key1': 'meow'}, 'system_metadata': {'original_image_ref': 'blah'}} instance = db.instance_create(ctxt, values) # Update the metadata values = {'metadata': {'host': 'bar', 'key2': 'wuff'}, 'system_metadata': {'original_image_ref': 'baz'}} db.instance_update(ctxt, instance['uuid'], values) # Retrieve the user-provided metadata to ensure it was successfully # updated instance_meta = db.instance_metadata_get(ctxt, instance['uuid']) self.assertEqual('bar', instance_meta['host']) self.assertEqual('wuff', instance_meta['key2']) self.assertNotIn('key1', instance_meta) # Retrieve the system metadata to ensure it was successfully updated system_meta = db.instance_system_metadata_get(ctxt, instance['uuid']) self.assertEqual('baz', system_meta['original_image_ref']) def test_delete_block_device_mapping_on_instance_destroy(self): # Makes sure that the block device mapping is deleted when the # related instance is deleted. ctxt = context.get_admin_context() instance = db.instance_create(ctxt, dict(display_name='bdm-test')) bdm = { 'volume_id': uuidsentinel.uuid1, 'device_name': '/dev/vdb', 'instance_uuid': instance['uuid'], } bdm = db.block_device_mapping_create(ctxt, bdm, legacy=False) db.instance_destroy(ctxt, instance['uuid']) # make sure the bdm is deleted as well bdms = db.block_device_mapping_get_all_by_instance( ctxt, instance['uuid']) self.assertEqual([], bdms) def test_delete_instance_metadata_on_instance_destroy(self): ctxt = context.get_admin_context() # Create an instance with some metadata values = {'metadata': {'host': 'foo', 'key1': 'meow'}, 'system_metadata': {'original_image_ref': 'blah'}} instance = db.instance_create(ctxt, values) instance_meta = db.instance_metadata_get(ctxt, instance['uuid']) self.assertEqual('foo', instance_meta['host']) self.assertEqual('meow', instance_meta['key1']) db.instance_destroy(ctxt, instance['uuid']) instance_meta = db.instance_metadata_get(ctxt, instance['uuid']) # Make sure instance metadata is deleted as well self.assertEqual({}, instance_meta) def test_delete_instance_faults_on_instance_destroy(self): ctxt = context.get_admin_context() uuid = uuidsentinel.uuid1 # Create faults db.instance_create(ctxt, {'uuid': uuid}) fault_values = { 'message': 'message', 'details': 'detail', 'instance_uuid': uuid, 'code': 404, 'host': 'localhost' } fault = db.instance_fault_create(ctxt, fault_values) # Retrieve the fault to ensure it was successfully added faults = db.instance_fault_get_by_instance_uuids(ctxt, [uuid]) self.assertEqual(1, len(faults[uuid])) self._assertEqualObjects(fault, faults[uuid][0]) db.instance_destroy(ctxt, uuid) faults = db.instance_fault_get_by_instance_uuids(ctxt, [uuid]) # Make sure instance faults is deleted as well self.assertEqual(0, len(faults[uuid])) def test_delete_migrations_on_instance_destroy(self): ctxt = context.get_admin_context() uuid = uuidsentinel.uuid1 db.instance_create(ctxt, {'uuid': uuid}) migrations_values = {'instance_uuid': uuid} migration = db.migration_create(ctxt, migrations_values) migrations = db.migration_get_all_by_filters( ctxt, {'instance_uuid': uuid}) self.assertEqual(1, len(migrations)) self._assertEqualObjects(migration, migrations[0]) instance = db.instance_destroy(ctxt, uuid) migrations = db.migration_get_all_by_filters( ctxt, {'instance_uuid': uuid}) self.assertTrue(instance.deleted) self.assertEqual(0, len(migrations)) def test_delete_virtual_interfaces_on_instance_destroy(self): # Create 
the instance. ctxt = context.get_admin_context() uuid = uuidsentinel.uuid1 db.instance_create(ctxt, {'uuid': uuid}) # Create the VirtualInterface. db.virtual_interface_create(ctxt, {'instance_uuid': uuid}) # Make sure the vif is tied to the instance. vifs = db.virtual_interface_get_by_instance(ctxt, uuid) self.assertEqual(1, len(vifs)) # Destroy the instance and verify the vif is gone as well. db.instance_destroy(ctxt, uuid) self.assertEqual( 0, len(db.virtual_interface_get_by_instance(ctxt, uuid))) def test_instance_update_and_get_original(self): instance = self.create_instance_with_args(vm_state='building') (old_ref, new_ref) = db.instance_update_and_get_original(self.ctxt, instance['uuid'], {'vm_state': 'needscoffee'}) self.assertEqual('building', old_ref['vm_state']) self.assertEqual('needscoffee', new_ref['vm_state']) def test_instance_update_and_get_original_metadata(self): instance = self.create_instance_with_args() columns_to_join = ['metadata'] (old_ref, new_ref) = db.instance_update_and_get_original( self.ctxt, instance['uuid'], {'vm_state': 'needscoffee'}, columns_to_join=columns_to_join) meta = utils.metadata_to_dict(new_ref['metadata']) self.assertEqual(meta, self.sample_data['metadata']) sys_meta = utils.metadata_to_dict(new_ref['system_metadata']) self.assertEqual(sys_meta, {}) def test_instance_update_and_get_original_metadata_none_join(self): instance = self.create_instance_with_args() (old_ref, new_ref) = db.instance_update_and_get_original( self.ctxt, instance['uuid'], {'metadata': {'mk1': 'mv3'}}) meta = utils.metadata_to_dict(new_ref['metadata']) self.assertEqual(meta, {'mk1': 'mv3'}) def test_instance_update_and_get_original_no_conflict_on_session(self): @sqlalchemy_api.pick_context_manager_writer def test(context): instance = self.create_instance_with_args() (old_ref, new_ref) = db.instance_update_and_get_original( context, instance['uuid'], {'metadata': {'mk1': 'mv3'}}) # test some regular persisted fields self.assertEqual(old_ref.uuid, new_ref.uuid) self.assertEqual(old_ref.project_id, new_ref.project_id) # after a copy operation, we can assert: # 1. the two states have their own InstanceState old_insp = inspect(old_ref) new_insp = inspect(new_ref) self.assertNotEqual(old_insp, new_insp) # 2. only one of the objects is still in our Session self.assertIs(new_insp.session, self.ctxt.session) self.assertIsNone(old_insp.session) # 3. The "new" object remains persistent and ready # for updates self.assertTrue(new_insp.persistent) # 4. the "old" object is detached from this Session. self.assertTrue(old_insp.detached) test(self.ctxt) def test_instance_update_and_get_original_conflict_race(self): # Ensure that we correctly process expected_task_state when retrying # due to an unknown conflict # This requires modelling the MySQL read view, which means that if we # have read something in the current transaction and we read it again, # we will read the same data every time even if another committed # transaction has since altered that data. In this test we have an # instance whose task state was originally None, but has been set to # SHELVING by another, concurrent transaction. Therefore the first time # we read the data we will read None, but when we restart the # transaction we will read the correct data. instance = self.create_instance_with_args( task_state=task_states.SHELVING) instance_out_of_date = copy.copy(instance) instance_out_of_date['task_state'] = None # NOTE(mdbooth): SQLA magic which makes this dirty object look # like a freshly loaded one. 
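# (For context: make_transient() and make_transient_to_detached() are SQLAlchemy session helpers; the intent is that the copied instance behaves like a row loaded in a separate transaction, so it can be returned as stale data from the mocked _instance_get_by_uuid below.)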
sqla_session.make_transient(instance_out_of_date) sqla_session.make_transient_to_detached(instance_out_of_date) # update_on_match will fail first time because the actual task state # (SHELVING) doesn't match the expected task state (None). However, # we ensure that the first time we fetch the instance object we get # out-of-date data. This forces us to retry the operation to find out # what really went wrong. with mock.patch.object(sqlalchemy_api, '_instance_get_by_uuid', side_effect=[instance_out_of_date, instance]), \ mock.patch.object(sqlalchemy_api, '_instance_update', side_effect=sqlalchemy_api._instance_update): self.assertRaises(exception.UnexpectedTaskStateError, db.instance_update_and_get_original, self.ctxt, instance['uuid'], {'expected_task_state': [None]}) sqlalchemy_api._instance_update.assert_has_calls([ mock.call(self.ctxt, instance['uuid'], {'expected_task_state': [None]}, None, original=instance_out_of_date), mock.call(self.ctxt, instance['uuid'], {'expected_task_state': [None]}, None, original=instance), ]) def test_instance_update_and_get_original_conflict_race_fallthrough(self): # Ensure that if update_match continuously fails for no discernable # reason, we eventually raise UnknownInstanceUpdateConflict instance = self.create_instance_with_args() # Reproduce the conditions of a race between fetching and updating the # instance by making update_on_match fail for no discernable reason. with mock.patch.object(update_match, 'update_on_match', side_effect=update_match.NoRowsMatched): self.assertRaises(exception.UnknownInstanceUpdateConflict, db.instance_update_and_get_original, self.ctxt, instance['uuid'], {'metadata': {'mk1': 'mv3'}}) def test_instance_update_and_get_original_expected_host(self): # Ensure that we allow update when expecting a host field instance = self.create_instance_with_args() (orig, new) = db.instance_update_and_get_original( self.ctxt, instance['uuid'], {'host': None}, expected={'host': 'h1'}) self.assertIsNone(new['host']) def test_instance_update_and_get_original_expected_host_fail(self): # Ensure that we detect a changed expected host and raise # InstanceUpdateConflict instance = self.create_instance_with_args() try: db.instance_update_and_get_original( self.ctxt, instance['uuid'], {'host': None}, expected={'host': 'h2'}) except exception.InstanceUpdateConflict as ex: self.assertEqual(ex.kwargs['instance_uuid'], instance['uuid']) self.assertEqual(ex.kwargs['actual'], {'host': 'h1'}) self.assertEqual(ex.kwargs['expected'], {'host': ['h2']}) else: self.fail('InstanceUpdateConflict was not raised') def test_instance_update_and_get_original_expected_host_none(self): # Ensure that we allow update when expecting a host field of None instance = self.create_instance_with_args(host=None) (old, new) = db.instance_update_and_get_original( self.ctxt, instance['uuid'], {'host': 'h1'}, expected={'host': None}) self.assertEqual('h1', new['host']) def test_instance_update_and_get_original_expected_host_none_fail(self): # Ensure that we detect a changed expected host of None and raise # InstanceUpdateConflict instance = self.create_instance_with_args() try: db.instance_update_and_get_original( self.ctxt, instance['uuid'], {'host': None}, expected={'host': None}) except exception.InstanceUpdateConflict as ex: self.assertEqual(ex.kwargs['instance_uuid'], instance['uuid']) self.assertEqual(ex.kwargs['actual'], {'host': 'h1'}) self.assertEqual(ex.kwargs['expected'], {'host': [None]}) else: self.fail('InstanceUpdateConflict was not raised') def 
test_instance_update_and_get_original_expected_task_state_single_fail(self): # noqa # Ensure that we detect a changed expected task and raise # UnexpectedTaskStateError instance = self.create_instance_with_args() try: db.instance_update_and_get_original( self.ctxt, instance['uuid'], { 'host': None, 'expected_task_state': task_states.SCHEDULING }) except exception.UnexpectedTaskStateError as ex: self.assertEqual(ex.kwargs['instance_uuid'], instance['uuid']) self.assertEqual(ex.kwargs['actual'], {'task_state': None}) self.assertEqual(ex.kwargs['expected'], {'task_state': [task_states.SCHEDULING]}) else: self.fail('UnexpectedTaskStateError was not raised') def test_instance_update_and_get_original_expected_task_state_single_pass(self): # noqa # Ensure that we allow an update when expected task is correct instance = self.create_instance_with_args() (orig, new) = db.instance_update_and_get_original( self.ctxt, instance['uuid'], { 'host': None, 'expected_task_state': None }) self.assertIsNone(new['host']) def test_instance_update_and_get_original_expected_task_state_multi_fail(self): # noqa # Ensure that we detect a changed expected task and raise # UnexpectedTaskStateError when there are multiple potential expected # tasks instance = self.create_instance_with_args() try: db.instance_update_and_get_original( self.ctxt, instance['uuid'], { 'host': None, 'expected_task_state': [task_states.SCHEDULING, task_states.REBUILDING] }) except exception.UnexpectedTaskStateError as ex: self.assertEqual(ex.kwargs['instance_uuid'], instance['uuid']) self.assertEqual(ex.kwargs['actual'], {'task_state': None}) self.assertEqual(ex.kwargs['expected'], {'task_state': [task_states.SCHEDULING, task_states.REBUILDING]}) else: self.fail('UnexpectedTaskStateError was not raised') def test_instance_update_and_get_original_expected_task_state_multi_pass(self): # noqa # Ensure that we allow an update when expected task is in a list of # expected tasks instance = self.create_instance_with_args() (orig, new) = db.instance_update_and_get_original( self.ctxt, instance['uuid'], { 'host': None, 'expected_task_state': [task_states.SCHEDULING, None] }) self.assertIsNone(new['host']) def test_instance_update_and_get_original_expected_task_state_deleting(self): # noqa # Ensure that we raise UnexpectedDeletingTaskStateError when task state # is not as expected, and it is DELETING instance = self.create_instance_with_args( task_state=task_states.DELETING) try: db.instance_update_and_get_original( self.ctxt, instance['uuid'], { 'host': None, 'expected_task_state': task_states.SCHEDULING }) except exception.UnexpectedDeletingTaskStateError as ex: self.assertEqual(ex.kwargs['instance_uuid'], instance['uuid']) self.assertEqual(ex.kwargs['actual'], {'task_state': task_states.DELETING}) self.assertEqual(ex.kwargs['expected'], {'task_state': [task_states.SCHEDULING]}) else: self.fail('UnexpectedDeletingTaskStateError was not raised') def test_instance_update_unique_name(self): context1 = context.RequestContext('user1', 'p1') context2 = context.RequestContext('user2', 'p2') inst1 = self.create_instance_with_args(context=context1, project_id='p1', hostname='fake_name1') inst2 = self.create_instance_with_args(context=context1, project_id='p1', hostname='fake_name2') inst3 = self.create_instance_with_args(context=context2, project_id='p2', hostname='fake_name3') # osapi_compute_unique_server_name_scope is unset so this should work: db.instance_update(context1, inst1['uuid'], {'hostname': 'fake_name2'}) db.instance_update(context1, 
inst1['uuid'], {'hostname': 'fake_name1'}) # With scope 'global' any duplicate should fail. self.flags(osapi_compute_unique_server_name_scope='global') self.assertRaises(exception.InstanceExists, db.instance_update, context1, inst2['uuid'], {'hostname': 'fake_name1'}) self.assertRaises(exception.InstanceExists, db.instance_update, context2, inst3['uuid'], {'hostname': 'fake_name1'}) # But we should definitely be able to update our name if we aren't # really changing it. db.instance_update(context1, inst1['uuid'], {'hostname': 'fake_NAME'}) # With scope 'project' a duplicate in the project should fail: self.flags(osapi_compute_unique_server_name_scope='project') self.assertRaises(exception.InstanceExists, db.instance_update, context1, inst2['uuid'], {'hostname': 'fake_NAME'}) # With scope 'project' a duplicate in a different project should work: self.flags(osapi_compute_unique_server_name_scope='project') db.instance_update(context2, inst3['uuid'], {'hostname': 'fake_NAME'}) def _test_instance_update_updates_metadata(self, metadata_type): instance = self.create_instance_with_args() def set_and_check(meta): inst = db.instance_update(self.ctxt, instance['uuid'], {metadata_type: dict(meta)}) _meta = utils.metadata_to_dict(inst[metadata_type]) self.assertEqual(meta, _meta) meta = {'speed': '88', 'units': 'MPH'} set_and_check(meta) meta['gigawatts'] = '1.21' set_and_check(meta) del meta['gigawatts'] set_and_check(meta) self.ctxt.read_deleted = 'yes' self.assertNotIn('gigawatts', db.instance_system_metadata_get(self.ctxt, instance.uuid)) def test_security_group_in_use(self): db.instance_create(self.ctxt, dict(host='foo')) def test_instance_update_updates_system_metadata(self): # Ensure that system_metadata is updated during instance_update self._test_instance_update_updates_metadata('system_metadata') def test_instance_update_updates_metadata(self): # Ensure that metadata is updated during instance_update self._test_instance_update_updates_metadata('metadata') def test_instance_stringified_ips(self): instance = self.create_instance_with_args() instance = db.instance_update( self.ctxt, instance['uuid'], {'access_ip_v4': netaddr.IPAddress('1.2.3.4'), 'access_ip_v6': netaddr.IPAddress('::1')}) self.assertIsInstance(instance['access_ip_v4'], six.string_types) self.assertIsInstance(instance['access_ip_v6'], six.string_types) instance = db.instance_get_by_uuid(self.ctxt, instance['uuid']) self.assertIsInstance(instance['access_ip_v4'], six.string_types) self.assertIsInstance(instance['access_ip_v6'], six.string_types) @mock.patch('nova.db.sqlalchemy.api._check_instance_exists_in_project', return_value=None) def test_instance_destroy(self, mock_check_inst_exists): ctxt = context.get_admin_context() values = { 'metadata': {'key': 'value'}, 'system_metadata': {'key': 'value'} } inst_uuid = self.create_instance_with_args(**values)['uuid'] db.instance_tag_set(ctxt, inst_uuid, [u'tag1', u'tag2']) db.instance_destroy(ctxt, inst_uuid) self.assertRaises(exception.InstanceNotFound, db.instance_get, ctxt, inst_uuid) self.assertIsNone(db.instance_info_cache_get(ctxt, inst_uuid)) self.assertEqual({}, db.instance_metadata_get(ctxt, inst_uuid)) self.assertEqual([], db.instance_tag_get_by_instance_uuid( ctxt, inst_uuid)) _assert_instance_id_mapping(ctxt, self, inst_uuid) ctxt.read_deleted = 'yes' self.assertEqual(values['system_metadata'], db.instance_system_metadata_get(ctxt, inst_uuid)) def test_instance_destroy_already_destroyed(self): ctxt = context.get_admin_context() instance = self.create_instance_with_args() 
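# Destroy the instance once below, then confirm that a second destroy of the same (already soft-deleted) instance raises InstanceNotFound.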
db.instance_destroy(ctxt, instance['uuid']) self.assertRaises(exception.InstanceNotFound, db.instance_destroy, ctxt, instance['uuid']) def test_instance_destroy_hard(self): ctxt = context.get_admin_context() instance = self.create_instance_with_args() uuid = instance['uuid'] utc_now = timeutils.utcnow() action_values = { 'action': 'run_instance', 'instance_uuid': uuid, 'request_id': ctxt.request_id, 'user_id': ctxt.user_id, 'project_id': ctxt.project_id, 'start_time': utc_now, 'updated_at': utc_now, 'message': 'action-message' } action = db.action_start(ctxt, action_values) action_event_values = { 'event': 'schedule', 'action_id': action['id'], 'instance_uuid': uuid, 'start_time': utc_now, 'request_id': ctxt.request_id, 'host': 'fake-host', } db.action_event_start(ctxt, action_event_values) security_group_values = { 'name': 'fake_sec_group', 'user_id': ctxt.user_id, 'project_id': ctxt.project_id, 'instances': [] } security_group = db.security_group_create(ctxt, security_group_values) db.instance_add_security_group(ctxt, uuid, security_group['id']) instance_fault_values = { 'message': 'message', 'details': 'detail', 'instance_uuid': uuid, 'code': 404, 'host': 'localhost' } db.instance_fault_create(ctxt, instance_fault_values) bdm_values = { 'instance_uuid': uuid, 'device_name': '/dev/vda', 'source_type': 'volume', 'destination_type': 'volume', } block_dev = block_device.BlockDeviceDict(bdm_values) db.block_device_mapping_create(self.ctxt, block_dev, legacy=False) # Create a second BDM that is soft-deleted to simulate that the # volume was detached and the BDM was deleted before the instance # was hard destroyed. bdm2_values = { 'instance_uuid': uuid, 'device_name': '/dev/vdb', 'source_type': 'volume', 'destination_type': 'volume', } block_dev2 = block_device.BlockDeviceDict(bdm2_values) bdm2 = db.block_device_mapping_create( self.ctxt, block_dev2, legacy=False) db.block_device_mapping_destroy(self.ctxt, bdm2.id) migration_values = { "status": "finished", "instance_uuid": uuid, "dest_compute": "fake_host2" } db.migration_create(self.ctxt, migration_values) db.virtual_interface_create(ctxt, {'instance_uuid': uuid}) # Hard delete the instance db.instance_destroy(ctxt, uuid, hard_delete=True) # Check that related records are deleted with utils.temporary_mutation(ctxt, read_deleted="yes"): # Assert that all information related to the instance is not found # even using a context that can read soft deleted records. 
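# (After a plain soft delete, read_deleted="yes" would still surface these rows; after instance_destroy(..., hard_delete=True) they should be gone entirely, which is what the lookups below verify.)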
self.assertEqual(0, len(db.actions_get(ctxt, uuid))) self.assertEqual(0, len(db.action_events_get(ctxt, action['id']))) db_sg = db.security_group_get_by_name( ctxt, ctxt.project_id, security_group_values['name']) self.assertEqual(0, len(db_sg['instances'])) instance_faults = db.instance_fault_get_by_instance_uuids( ctxt, [uuid]) self.assertEqual(0, len(instance_faults[uuid])) inst_bdms = db.block_device_mapping_get_all_by_instance(ctxt, uuid) self.assertEqual(0, len(inst_bdms)) filters = {"instance_uuid": uuid} inst_migrations = db.migration_get_all_by_filters(ctxt, filters) self.assertEqual(0, len(inst_migrations)) vifs = db.virtual_interface_get_by_instance(ctxt, uuid) self.assertEqual(0, len(vifs)) self.assertIsNone(db.instance_info_cache_get(ctxt, uuid)) self.assertEqual({}, db.instance_metadata_get(ctxt, uuid)) self.assertIsNone(db.instance_extra_get_by_instance_uuid( ctxt, uuid)) system_meta = db.instance_system_metadata_get(ctxt, uuid) self.assertEqual({}, system_meta) _assert_instance_id_mapping(ctxt, self, uuid) self.assertRaises(exception.InstanceNotFound, db.instance_destroy, ctxt, uuid) # NOTE(ttsiouts): FixedIp has the instance_uuid as a foreign key def test_check_instance_exists(self): instance = self.create_instance_with_args() @sqlalchemy_api.pick_context_manager_reader def test(context): self.assertIsNone(sqlalchemy_api._check_instance_exists_in_project( context, instance['uuid'])) test(self.ctxt) def test_check_instance_exists_non_existing_instance(self): @sqlalchemy_api.pick_context_manager_reader def test(ctxt): self.assertRaises(exception.InstanceNotFound, sqlalchemy_api._check_instance_exists_in_project, self.ctxt, '123') test(self.ctxt) def test_check_instance_exists_from_different_tenant(self): context1 = context.RequestContext('user1', 'project1') context2 = context.RequestContext('user2', 'project2') instance = self.create_instance_with_args(context=context1) @sqlalchemy_api.pick_context_manager_reader def test1(context): self.assertIsNone(sqlalchemy_api._check_instance_exists_in_project( context, instance['uuid'])) test1(context1) @sqlalchemy_api.pick_context_manager_reader def test2(context): self.assertRaises(exception.InstanceNotFound, sqlalchemy_api._check_instance_exists_in_project, context, instance['uuid']) test2(context2) def test_check_instance_exists_admin_context(self): some_context = context.RequestContext('some_user', 'some_project') instance = self.create_instance_with_args(context=some_context) @sqlalchemy_api.pick_context_manager_reader def test(context): # Check that method works correctly with admin context self.assertIsNone(sqlalchemy_api._check_instance_exists_in_project( context, instance['uuid'])) test(self.ctxt) class InstanceMetadataTestCase(test.TestCase): """Tests for db.api.instance_metadata_* methods.""" def setUp(self): super(InstanceMetadataTestCase, self).setUp() self.ctxt = context.get_admin_context() def test_instance_metadata_get(self): instance = db.instance_create(self.ctxt, {'metadata': {'key': 'value'}}) self.assertEqual({'key': 'value'}, db.instance_metadata_get( self.ctxt, instance['uuid'])) def test_instance_metadata_delete(self): instance = db.instance_create(self.ctxt, {'metadata': {'key': 'val', 'key1': 'val1'}}) db.instance_metadata_delete(self.ctxt, instance['uuid'], 'key1') self.assertEqual({'key': 'val'}, db.instance_metadata_get( self.ctxt, instance['uuid'])) def test_instance_metadata_update(self): instance = db.instance_create(self.ctxt, {'host': 'h1', 'project_id': 'p1', 'metadata': {'key': 'value'}}) # This 
should add new key/value pair db.instance_metadata_update(self.ctxt, instance['uuid'], {'new_key': 'new_value'}, False) metadata = db.instance_metadata_get(self.ctxt, instance['uuid']) self.assertEqual(metadata, {'key': 'value', 'new_key': 'new_value'}) # This should leave only one key/value pair db.instance_metadata_update(self.ctxt, instance['uuid'], {'new_key': 'new_value'}, True) metadata = db.instance_metadata_get(self.ctxt, instance['uuid']) self.assertEqual(metadata, {'new_key': 'new_value'}) class InstanceExtraTestCase(test.TestCase): def setUp(self): super(InstanceExtraTestCase, self).setUp() self.ctxt = context.get_admin_context() self.instance = db.instance_create(self.ctxt, {}) def test_instance_extra_get_by_uuid_instance_create(self): inst_extra = db.instance_extra_get_by_instance_uuid( self.ctxt, self.instance['uuid']) self.assertIsNotNone(inst_extra) def test_instance_extra_update_by_uuid(self): db.instance_extra_update_by_uuid(self.ctxt, self.instance['uuid'], {'numa_topology': 'changed', 'trusted_certs': "['123', 'foo']", 'resources': "['res0', 'res1']", }) inst_extra = db.instance_extra_get_by_instance_uuid( self.ctxt, self.instance['uuid']) self.assertEqual('changed', inst_extra.numa_topology) # NOTE(jackie-truong): trusted_certs is stored as a Text type in # instance_extra and read as a list of strings self.assertEqual("['123', 'foo']", inst_extra.trusted_certs) self.assertEqual("['res0', 'res1']", inst_extra.resources) def test_instance_extra_update_by_uuid_and_create(self): @sqlalchemy_api.pick_context_manager_writer def test(context): sqlalchemy_api.model_query(context, models.InstanceExtra).\ filter_by(instance_uuid=self.instance['uuid']).\ delete() test(self.ctxt) inst_extra = db.instance_extra_get_by_instance_uuid( self.ctxt, self.instance['uuid']) self.assertIsNone(inst_extra) db.instance_extra_update_by_uuid(self.ctxt, self.instance['uuid'], {'numa_topology': 'changed'}) inst_extra = db.instance_extra_get_by_instance_uuid( self.ctxt, self.instance['uuid']) self.assertEqual('changed', inst_extra.numa_topology) def test_instance_extra_get_with_columns(self): extra = db.instance_extra_get_by_instance_uuid( self.ctxt, self.instance['uuid'], columns=['numa_topology', 'vcpu_model', 'trusted_certs', 'resources']) self.assertRaises(SQLAlchemyError, extra.__getitem__, 'pci_requests') self.assertIn('numa_topology', extra) self.assertIn('vcpu_model', extra) self.assertIn('trusted_certs', extra) self.assertIn('resources', extra) class ServiceTestCase(test.TestCase, ModelsObjectComparatorMixin): def setUp(self): super(ServiceTestCase, self).setUp() self.ctxt = context.get_admin_context() def _get_base_values(self): return { 'uuid': None, 'host': 'fake_host', 'binary': 'fake_binary', 'topic': 'fake_topic', 'report_count': 3, 'disabled': False, 'forced_down': False } def _create_service(self, values): v = self._get_base_values() v.update(values) return db.service_create(self.ctxt, v) def test_service_create(self): service = self._create_service({}) self.assertIsNotNone(service['id']) for key, value in self._get_base_values().items(): self.assertEqual(value, service[key]) def test_service_create_disabled(self): self.flags(enable_new_services=False) service = self._create_service({'binary': 'nova-compute'}) self.assertTrue(service['disabled']) def test_service_create_disabled_reason(self): self.flags(enable_new_services=False) service = self._create_service({'binary': 'nova-compute'}) msg = "New compute service disabled due to config option." 
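# (A sketch of the behaviour being asserted, assuming only the flag used in this test: with enable_new_services=False, service_create() marks a new nova-compute service as disabled and records this reason string; other binaries are unaffected, as the next test shows.)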
self.assertEqual(msg, service['disabled_reason']) def test_service_create_disabled_non_compute_ignored(self): """Tests that enable_new_services=False has no effect on auto-disabling a new non-nova-compute service. """ self.flags(enable_new_services=False) service = self._create_service({'binary': 'nova-scheduler'}) self.assertFalse(service['disabled']) self.assertIsNone(service['disabled_reason']) def test_service_destroy(self): service1 = self._create_service({}) service2 = self._create_service({'host': 'fake_host2'}) db.service_destroy(self.ctxt, service1['id']) self.assertRaises(exception.ServiceNotFound, db.service_get, self.ctxt, service1['id']) self._assertEqualObjects(db.service_get(self.ctxt, service2['id']), service2, ignored_keys=['compute_node']) def test_service_destroy_not_nova_compute(self): service = self._create_service({'binary': 'nova-consoleauth', 'host': 'host1'}) compute_node_dict = _make_compute_node('host1', 'node1', 'kvm', None) compute_node = db.compute_node_create(self.ctxt, compute_node_dict) db.service_destroy(self.ctxt, service['id']) # make sure ComputeHostNotFound is not raised db.compute_node_get(self.ctxt, compute_node['id']) def test_service_update(self): service = self._create_service({}) new_values = { 'uuid': uuidsentinel.service, 'host': 'fake_host1', 'binary': 'fake_binary1', 'topic': 'fake_topic1', 'report_count': 4, 'disabled': True } db.service_update(self.ctxt, service['id'], new_values) updated_service = db.service_get(self.ctxt, service['id']) for key, value in new_values.items(): self.assertEqual(value, updated_service[key]) def test_service_update_not_found_exception(self): self.assertRaises(exception.ServiceNotFound, db.service_update, self.ctxt, 100500, {}) def test_service_update_with_set_forced_down(self): service = self._create_service({}) db.service_update(self.ctxt, service['id'], {'forced_down': True}) updated_service = db.service_get(self.ctxt, service['id']) self.assertTrue(updated_service['forced_down']) def test_service_update_with_unset_forced_down(self): service = self._create_service({'forced_down': True}) db.service_update(self.ctxt, service['id'], {'forced_down': False}) updated_service = db.service_get(self.ctxt, service['id']) self.assertFalse(updated_service['forced_down']) def test_service_get(self): service1 = self._create_service({}) self._create_service({'host': 'some_other_fake_host'}) real_service1 = db.service_get(self.ctxt, service1['id']) self._assertEqualObjects(service1, real_service1, ignored_keys=['compute_node']) def test_service_get_by_uuid(self): service1 = self._create_service({'uuid': uuidsentinel.service1_uuid}) self._create_service({'host': 'some_other_fake_host', 'uuid': uuidsentinel.other_uuid}) real_service1 = db.service_get_by_uuid( self.ctxt, uuidsentinel.service1_uuid) self._assertEqualObjects(service1, real_service1, ignored_keys=['compute_node']) def test_service_get_by_uuid_not_found(self): """Asserts that ServiceNotFound is raised if a service is not found by a given uuid. 
""" self.assertRaises(exception.ServiceNotFound, db.service_get_by_uuid, self.ctxt, uuidsentinel.service_not_found) def test_service_get_minimum_version(self): self._create_service({'version': 1, 'host': 'host3', 'binary': 'compute', 'forced_down': True}) self._create_service({'version': 2, 'host': 'host1', 'binary': 'compute'}) self._create_service({'version': 3, 'host': 'host2', 'binary': 'compute'}) self._create_service({'version': 0, 'host': 'host0', 'binary': 'compute', 'deleted': 1}) self.assertEqual({'compute': 2}, db.service_get_minimum_version(self.ctxt, ['compute'])) def test_service_get_not_found_exception(self): self.assertRaises(exception.ServiceNotFound, db.service_get, self.ctxt, 100500) def test_service_get_by_host_and_topic(self): service1 = self._create_service({'host': 'host1', 'topic': 'topic1'}) self._create_service({'host': 'host2', 'topic': 'topic2'}) real_service1 = db.service_get_by_host_and_topic(self.ctxt, host='host1', topic='topic1') self._assertEqualObjects(service1, real_service1) def test_service_get_by_host_and_binary(self): service1 = self._create_service({'host': 'host1', 'binary': 'foo'}) self._create_service({'host': 'host2', 'binary': 'bar'}) real_service1 = db.service_get_by_host_and_binary(self.ctxt, host='host1', binary='foo') self._assertEqualObjects(service1, real_service1) def test_service_get_by_host_and_binary_raises(self): self.assertRaises(exception.HostBinaryNotFound, db.service_get_by_host_and_binary, self.ctxt, host='host1', binary='baz') def test_service_get_all(self): values = [ {'host': 'host1', 'topic': 'topic1'}, {'host': 'host2', 'topic': 'topic2'}, {'disabled': True} ] services = [self._create_service(vals) for vals in values] disabled_services = [services[-1]] non_disabled_services = services[:-1] compares = [ (services, db.service_get_all(self.ctxt)), (disabled_services, db.service_get_all(self.ctxt, True)), (non_disabled_services, db.service_get_all(self.ctxt, False)) ] for comp in compares: self._assertEqualListsOfObjects(*comp) def test_service_get_all_by_topic(self): values = [ {'host': 'host1', 'topic': 't1'}, {'host': 'host2', 'topic': 't1'}, {'disabled': True, 'topic': 't1'}, {'host': 'host3', 'topic': 't2'} ] services = [self._create_service(vals) for vals in values] expected = services[:2] real = db.service_get_all_by_topic(self.ctxt, 't1') self._assertEqualListsOfObjects(expected, real) def test_service_get_all_by_binary(self): values = [ {'host': 'host1', 'binary': 'b1'}, {'host': 'host2', 'binary': 'b1'}, {'disabled': True, 'binary': 'b1'}, {'host': 'host3', 'binary': 'b2'} ] services = [self._create_service(vals) for vals in values] expected = services[:2] real = db.service_get_all_by_binary(self.ctxt, 'b1') self._assertEqualListsOfObjects(expected, real) def test_service_get_all_by_binary_include_disabled(self): values = [ {'host': 'host1', 'binary': 'b1'}, {'host': 'host2', 'binary': 'b1'}, {'disabled': True, 'binary': 'b1'}, {'host': 'host3', 'binary': 'b2'} ] services = [self._create_service(vals) for vals in values] expected = services[:3] real = db.service_get_all_by_binary(self.ctxt, 'b1', include_disabled=True) self._assertEqualListsOfObjects(expected, real) def test_service_get_all_computes_by_hv_type(self): values = [ {'host': 'host1', 'binary': 'nova-compute'}, {'host': 'host2', 'binary': 'nova-compute', 'disabled': True}, {'host': 'host3', 'binary': 'nova-compute'}, {'host': 'host4', 'binary': 'b2'} ] services = [self._create_service(vals) for vals in values] compute_nodes = [ _make_compute_node('host1', 
'node1', 'ironic', services[0]['id']), _make_compute_node('host1', 'node2', 'ironic', services[0]['id']), _make_compute_node('host2', 'node3', 'ironic', services[1]['id']), _make_compute_node('host3', 'host3', 'kvm', services[2]['id']), ] for cn in compute_nodes: db.compute_node_create(self.ctxt, cn) expected = services[:1] real = db.service_get_all_computes_by_hv_type(self.ctxt, 'ironic', include_disabled=False) self._assertEqualListsOfObjects(expected, real) def test_service_get_all_computes_by_hv_type_include_disabled(self): values = [ {'host': 'host1', 'binary': 'nova-compute'}, {'host': 'host2', 'binary': 'nova-compute', 'disabled': True}, {'host': 'host3', 'binary': 'nova-compute'}, {'host': 'host4', 'binary': 'b2'} ] services = [self._create_service(vals) for vals in values] compute_nodes = [ _make_compute_node('host1', 'node1', 'ironic', services[0]['id']), _make_compute_node('host1', 'node2', 'ironic', services[0]['id']), _make_compute_node('host2', 'node3', 'ironic', services[1]['id']), _make_compute_node('host3', 'host3', 'kvm', services[2]['id']), ] for cn in compute_nodes: db.compute_node_create(self.ctxt, cn) expected = services[:2] real = db.service_get_all_computes_by_hv_type(self.ctxt, 'ironic', include_disabled=True) self._assertEqualListsOfObjects(expected, real) def test_service_get_all_by_host(self): values = [ {'host': 'host1', 'topic': 't11', 'binary': 'b11'}, {'host': 'host1', 'topic': 't12', 'binary': 'b12'}, {'host': 'host2', 'topic': 't1'}, {'host': 'host3', 'topic': 't1'} ] services = [self._create_service(vals) for vals in values] expected = services[:2] real = db.service_get_all_by_host(self.ctxt, 'host1') self._assertEqualListsOfObjects(expected, real) def test_service_get_by_compute_host(self): values = [ {'host': 'host1', 'binary': 'nova-compute'}, {'host': 'host2', 'binary': 'nova-scheduler'}, {'host': 'host3', 'binary': 'nova-compute'} ] services = [self._create_service(vals) for vals in values] real_service = db.service_get_by_compute_host(self.ctxt, 'host1') self._assertEqualObjects(services[0], real_service) self.assertRaises(exception.ComputeHostNotFound, db.service_get_by_compute_host, self.ctxt, 'non-exists-host') def test_service_get_by_compute_host_not_found(self): self.assertRaises(exception.ComputeHostNotFound, db.service_get_by_compute_host, self.ctxt, 'non-exists-host') def test_service_binary_exists_exception(self): db.service_create(self.ctxt, self._get_base_values()) values = self._get_base_values() values.update({'topic': 'top1'}) self.assertRaises(exception.ServiceBinaryExists, db.service_create, self.ctxt, values) def test_service_topic_exists_exceptions(self): db.service_create(self.ctxt, self._get_base_values()) values = self._get_base_values() values.update({'binary': 'bin1'}) self.assertRaises(exception.ServiceTopicExists, db.service_create, self.ctxt, values) def test_migration_migrate_to_uuid(self): total, done = sqlalchemy_api.migration_migrate_to_uuid(self.ctxt, 10) self.assertEqual(0, total) self.assertEqual(0, done) # Create two migrations, one with a uuid and one without. 
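# (Illustrative batch pattern, using only the call under test: migration_migrate_to_uuid(ctxt, max_count) returns (total, done) for one batch, e.g.
#   total, done = sqlalchemy_api.migration_migrate_to_uuid(self.ctxt, 10)
# so an online data migration would typically repeat the call until total == 0, which is the sequence the assertions below walk through.)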
db.migration_create(self.ctxt, dict(source_compute='src', source_node='srcnode', dest_compute='dst', dest_node='dstnode', status='running')) db.migration_create(self.ctxt, dict(source_compute='src', source_node='srcnode', dest_compute='dst', dest_node='dstnode', status='running', uuid=uuidsentinel.migration2)) # Now migrate them, we should find one and update one total, done = sqlalchemy_api.migration_migrate_to_uuid(self.ctxt, 10) self.assertEqual(1, total) self.assertEqual(1, done) # Get the migrations back to make sure the original uuid didn't change. migrations = db.migration_get_all_by_filters(self.ctxt, {}) uuids = [m.uuid for m in migrations] self.assertIn(uuidsentinel.migration2, uuids) self.assertNotIn(None, uuids) # Run the online migration again to see nothing was processed. total, done = sqlalchemy_api.migration_migrate_to_uuid(self.ctxt, 10) self.assertEqual(0, total) self.assertEqual(0, done) class InstanceActionTestCase(test.TestCase, ModelsObjectComparatorMixin): IGNORED_FIELDS = [ 'id', 'created_at', 'updated_at', 'deleted_at', 'deleted' ] def setUp(self): super(InstanceActionTestCase, self).setUp() self.ctxt = context.get_admin_context() def _create_action_values(self, uuid, action='run_instance', ctxt=None, extra=None, instance_create=True): if ctxt is None: ctxt = self.ctxt if instance_create: db.instance_create(ctxt, {'uuid': uuid}) utc_now = timeutils.utcnow() values = { 'action': action, 'instance_uuid': uuid, 'request_id': ctxt.request_id, 'user_id': ctxt.user_id, 'project_id': ctxt.project_id, 'start_time': utc_now, 'updated_at': utc_now, 'message': 'action-message' } if extra is not None: values.update(extra) return values def _create_event_values(self, uuid, event='schedule', ctxt=None, extra=None): if ctxt is None: ctxt = self.ctxt values = { 'event': event, 'instance_uuid': uuid, 'request_id': ctxt.request_id, 'start_time': timeutils.utcnow(), 'host': 'fake-host', 'details': 'fake-details', } if extra is not None: values.update(extra) return values def _assertActionSaved(self, action, uuid): """Retrieve the action to ensure it was successfully added.""" actions = db.actions_get(self.ctxt, uuid) self.assertEqual(1, len(actions)) self._assertEqualObjects(action, actions[0]) def _assertActionEventSaved(self, event, action_id): # Retrieve the event to ensure it was successfully added events = db.action_events_get(self.ctxt, action_id) self.assertEqual(1, len(events)) self._assertEqualObjects(event, events[0], ['instance_uuid', 'request_id']) def test_instance_action_start(self): """Create an instance action.""" uuid = uuidsentinel.uuid1 action_values = self._create_action_values(uuid) action = db.action_start(self.ctxt, action_values) ignored_keys = self.IGNORED_FIELDS + ['finish_time'] self._assertEqualObjects(action_values, action, ignored_keys) self._assertActionSaved(action, uuid) def test_instance_action_finish(self): """Create an instance action.""" uuid = uuidsentinel.uuid1 action_values = self._create_action_values(uuid) db.action_start(self.ctxt, action_values) action_values['finish_time'] = timeutils.utcnow() action = db.action_finish(self.ctxt, action_values) self._assertEqualObjects(action_values, action, self.IGNORED_FIELDS) self._assertActionSaved(action, uuid) def test_instance_action_finish_without_started_event(self): """Create an instance finish action.""" uuid = uuidsentinel.uuid1 action_values = self._create_action_values(uuid) action_values['finish_time'] = timeutils.utcnow() self.assertRaises(exception.InstanceActionNotFound, db.action_finish, 
self.ctxt, action_values) def test_instance_actions_get_by_instance(self): """Ensure we can get actions by UUID.""" uuid1 = uuidsentinel.uuid1 expected = [] action_values = self._create_action_values(uuid1) action = db.action_start(self.ctxt, action_values) expected.append(action) action_values['action'] = 'resize' action = db.action_start(self.ctxt, action_values) expected.append(action) # Create some extra actions uuid2 = uuidsentinel.uuid2 ctxt2 = context.get_admin_context() action_values = self._create_action_values(uuid2, 'reboot', ctxt2) db.action_start(ctxt2, action_values) db.action_start(ctxt2, action_values) # Retrieve the action to ensure it was successfully added actions = db.actions_get(self.ctxt, uuid1) self._assertEqualListsOfObjects(expected, actions) def test_instance_actions_get_are_in_order(self): """Ensure retrieved actions are in order.""" uuid1 = uuidsentinel.uuid1 extra = { 'created_at': timeutils.utcnow() } action_values = self._create_action_values(uuid1, extra=extra) action1 = db.action_start(self.ctxt, action_values) action_values['action'] = 'delete' action2 = db.action_start(self.ctxt, action_values) actions = db.actions_get(self.ctxt, uuid1) self.assertEqual(2, len(actions)) self._assertEqualOrderedListOfObjects([action2, action1], actions) def test_instance_actions_get_with_limit(self): """Test that listing instance actions supports pagination.""" uuid1 = uuidsentinel.uuid1 extra = { 'created_at': timeutils.utcnow() } action_values = self._create_action_values(uuid1, extra=extra) action1 = db.action_start(self.ctxt, action_values) action_values['action'] = 'delete' action_values['request_id'] = 'req-' + uuidsentinel.reqid1 db.action_start(self.ctxt, action_values) actions = db.actions_get(self.ctxt, uuid1) self.assertEqual(2, len(actions)) actions = db.actions_get(self.ctxt, uuid1, limit=1) self.assertEqual(1, len(actions)) actions = db.actions_get( self.ctxt, uuid1, limit=1, marker=action_values['request_id']) self.assertEqual(1, len(actions)) self._assertEqualListsOfObjects([action1], actions) def test_instance_actions_get_with_changes_since(self): """Test that listing instance actions supports the changes-since filter.""" uuid1 = uuidsentinel.uuid1 extra = { 'created_at': timeutils.utcnow() } action_values = self._create_action_values(uuid1, extra=extra) db.action_start(self.ctxt, action_values) timestamp = timeutils.utcnow() action_values['start_time'] = timestamp action_values['updated_at'] = timestamp action_values['action'] = 'delete' action2 = db.action_start(self.ctxt, action_values) actions = db.actions_get(self.ctxt, uuid1) self.assertEqual(2, len(actions)) self.assertNotEqual(actions[0]['updated_at'], actions[1]['updated_at']) actions = db.actions_get( self.ctxt, uuid1, filters={'changes-since': timestamp}) self.assertEqual(1, len(actions)) self._assertEqualListsOfObjects([action2], actions) def test_instance_actions_get_with_changes_before(self): """Test that listing instance actions supports the changes-before filter.""" uuid1 = uuidsentinel.uuid1 expected = [] extra = { 'created_at': timeutils.utcnow() } action_values = self._create_action_values(uuid1, extra=extra) action = db.action_start(self.ctxt, action_values) expected.append(action) timestamp = timeutils.utcnow() action_values['start_time'] = timestamp action_values['updated_at'] = timestamp action_values['action'] = 'delete' action = db.action_start(self.ctxt, action_values) expected.append(action) actions = db.actions_get(self.ctxt, uuid1) self.assertEqual(2, len(actions)) self.assertNotEqual(actions[0]['updated_at'], 
actions[1]['updated_at']) actions = db.actions_get( self.ctxt, uuid1, filters={'changes-before': timestamp}) self.assertEqual(2, len(actions)) self._assertEqualListsOfObjects(expected, actions) def test_instance_actions_get_with_not_found_marker(self): self.assertRaises(exception.MarkerNotFound, db.actions_get, self.ctxt, uuidsentinel.uuid1, marker=uuidsentinel.not_found_marker) def test_instance_action_get_by_instance_and_action(self): """Ensure we can get an action by instance UUID and action id.""" ctxt2 = context.get_admin_context() uuid1 = uuidsentinel.uuid1 uuid2 = uuidsentinel.uuid2 action_values = self._create_action_values(uuid1) db.action_start(self.ctxt, action_values) request_id = action_values['request_id'] # NOTE(rpodolyaka): ensure we use a different req id for the 2nd req action_values['action'] = 'resize' action_values['request_id'] = 'req-00000000-7522-4d99-7ff-111111111111' db.action_start(self.ctxt, action_values) action_values = self._create_action_values(uuid2, 'reboot', ctxt2) db.action_start(ctxt2, action_values) db.action_start(ctxt2, action_values) action = db.action_get_by_request_id(self.ctxt, uuid1, request_id) self.assertEqual('run_instance', action['action']) self.assertEqual(self.ctxt.request_id, action['request_id']) def test_instance_action_get_by_instance_and_action_by_order(self): instance_uuid = uuidsentinel.uuid1 t1 = { 'created_at': timeutils.utcnow() } t2 = { 'created_at': timeutils.utcnow() + datetime.timedelta(seconds=5) } # Create a confirmResize action action_values = self._create_action_values( instance_uuid, action='confirmResize', extra=t1) a1 = db.action_start(self.ctxt, action_values) # Create a delete action with same instance uuid and req id action_values = self._create_action_values( instance_uuid, action='delete', extra=t2, instance_create=False) a2 = db.action_start(self.ctxt, action_values) self.assertEqual(a1['request_id'], a2['request_id']) self.assertEqual(a1['instance_uuid'], a2['instance_uuid']) self.assertTrue(a1['created_at'] < a2['created_at']) action = db.action_get_by_request_id(self.ctxt, instance_uuid, a1['request_id']) # Only get the delete action(last created) self.assertEqual(action['action'], a2['action']) def test_instance_action_event_start(self): """Create an instance action event.""" uuid = uuidsentinel.uuid1 action_values = self._create_action_values(uuid) action = db.action_start(self.ctxt, action_values) event_values = self._create_event_values(uuid) event = db.action_event_start(self.ctxt, event_values) # self.fail(self._dict_from_object(event, None)) event_values['action_id'] = action['id'] ignored = self.IGNORED_FIELDS + ['finish_time', 'traceback', 'result'] self._assertEqualObjects(event_values, event, ignored) self._assertActionEventSaved(event, action['id']) def test_instance_action_event_start_without_action(self): """Create an instance action event.""" uuid = uuidsentinel.uuid1 event_values = self._create_event_values(uuid) self.assertRaises(exception.InstanceActionNotFound, db.action_event_start, self.ctxt, event_values) def test_instance_action_event_finish_without_started_event(self): """Finish an instance action event.""" uuid = uuidsentinel.uuid1 db.action_start(self.ctxt, self._create_action_values(uuid)) event_values = { 'finish_time': timeutils.utcnow() + datetime.timedelta(seconds=5), 'result': 'Success' } event_values = self._create_event_values(uuid, extra=event_values) self.assertRaises(exception.InstanceActionEventNotFound, db.action_event_finish, self.ctxt, event_values) def 
test_instance_action_event_finish_without_action(self): """Finish an instance action event.""" uuid = uuidsentinel.uuid1 event_values = { 'finish_time': timeutils.utcnow() + datetime.timedelta(seconds=5), 'result': 'Success' } event_values = self._create_event_values(uuid, extra=event_values) self.assertRaises(exception.InstanceActionNotFound, db.action_event_finish, self.ctxt, event_values) def test_instance_action_event_finish_success(self): """Finish an instance action event.""" uuid = uuidsentinel.uuid1 action = db.action_start(self.ctxt, self._create_action_values(uuid)) db.action_event_start(self.ctxt, self._create_event_values(uuid)) event_values = { 'finish_time': timeutils.utcnow() + datetime.timedelta(seconds=5), 'result': 'Success' } event_values = self._create_event_values(uuid, extra=event_values) event = db.action_event_finish(self.ctxt, event_values) self._assertActionEventSaved(event, action['id']) action = db.action_get_by_request_id(self.ctxt, uuid, self.ctxt.request_id) self.assertNotEqual('Error', action['message']) def test_instance_action_event_finish_error(self): """Finish an instance action event with an error.""" uuid = uuidsentinel.uuid1 action = db.action_start(self.ctxt, self._create_action_values(uuid)) db.action_event_start(self.ctxt, self._create_event_values(uuid)) event_values = { 'finish_time': timeutils.utcnow() + datetime.timedelta(seconds=5), 'result': 'Error' } event_values = self._create_event_values(uuid, extra=event_values) event = db.action_event_finish(self.ctxt, event_values) self._assertActionEventSaved(event, action['id']) action = db.action_get_by_request_id(self.ctxt, uuid, self.ctxt.request_id) self.assertEqual('Error', action['message']) def test_instance_action_and_event_start_string_time(self): """Create an instance action and event with a string start_time.""" uuid = uuidsentinel.uuid1 action = db.action_start(self.ctxt, self._create_action_values(uuid)) event_values = {'start_time': timeutils.utcnow().isoformat()} event_values = self._create_event_values(uuid, extra=event_values) event = db.action_event_start(self.ctxt, event_values) self._assertActionEventSaved(event, action['id']) def test_instance_action_events_get_are_in_order(self): """Ensure retrieved action events are in order.""" uuid1 = uuidsentinel.uuid1 action = db.action_start(self.ctxt, self._create_action_values(uuid1)) extra1 = { 'created_at': timeutils.utcnow() } extra2 = { 'created_at': timeutils.utcnow() + datetime.timedelta(seconds=5) } event_val1 = self._create_event_values(uuid1, 'schedule', extra=extra1) event_val2 = self._create_event_values(uuid1, 'run', extra=extra1) event_val3 = self._create_event_values(uuid1, 'stop', extra=extra2) event1 = db.action_event_start(self.ctxt, event_val1) event2 = db.action_event_start(self.ctxt, event_val2) event3 = db.action_event_start(self.ctxt, event_val3) events = db.action_events_get(self.ctxt, action['id']) self.assertEqual(3, len(events)) self._assertEqualOrderedListOfObjects([event3, event2, event1], events, ['instance_uuid', 'request_id']) def test_instance_action_event_get_by_id(self): """Get a specific instance action event.""" ctxt2 = context.get_admin_context() uuid1 = uuidsentinel.uuid1 uuid2 = uuidsentinel.uuid2 action = db.action_start(self.ctxt, self._create_action_values(uuid1)) db.action_start(ctxt2, self._create_action_values(uuid2, 'reboot', ctxt2)) event = db.action_event_start(self.ctxt, self._create_event_values(uuid1)) event_values = self._create_event_values(uuid2, 'reboot', ctxt2) 
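# Start a second event under a different context and action so that the action_event_get_by_id() lookup below has to select the right record by both action id and event id.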
db.action_event_start(ctxt2, event_values) # Retrieve the event to ensure it was successfully added saved_event = db.action_event_get_by_id(self.ctxt, action['id'], event['id']) self._assertEqualObjects(event, saved_event, ['instance_uuid', 'request_id']) def test_instance_action_event_start_with_different_request_id(self): uuid = uuidsentinel.uuid1 action_values = self._create_action_values(uuid) action = db.action_start(self.ctxt, action_values) # init_host case fake_admin_context = context.get_admin_context() event_values = self._create_event_values(uuid, ctxt=fake_admin_context) event = db.action_event_start(fake_admin_context, event_values) event_values['action_id'] = action['id'] ignored = self.IGNORED_FIELDS + ['finish_time', 'traceback', 'result'] self._assertEqualObjects(event_values, event, ignored) self._assertActionEventSaved(event, action['id']) def test_instance_action_event_finish_with_different_request_id(self): uuid = uuidsentinel.uuid1 action = db.action_start(self.ctxt, self._create_action_values(uuid)) # init_host case fake_admin_context = context.get_admin_context() db.action_event_start(fake_admin_context, self._create_event_values( uuid, ctxt=fake_admin_context)) event_values = { 'finish_time': timeutils.utcnow() + datetime.timedelta(seconds=5), 'result': 'Success' } event_values = self._create_event_values(uuid, ctxt=fake_admin_context, extra=event_values) event = db.action_event_finish(fake_admin_context, event_values) self._assertActionEventSaved(event, action['id']) action = db.action_get_by_request_id(self.ctxt, uuid, self.ctxt.request_id) self.assertNotEqual('Error', action['message']) def test_instance_action_updated_with_event_start_and_finish_action(self): uuid = uuidsentinel.uuid1 action = db.action_start(self.ctxt, self._create_action_values(uuid)) updated_create = action['updated_at'] self.assertIsNotNone(updated_create) event_values = self._create_event_values(uuid) # event start action time_start = timeutils.utcnow() + datetime.timedelta(seconds=5) event_values['start_time'] = time_start db.action_event_start(self.ctxt, event_values) action = db.action_get_by_request_id(self.ctxt, uuid, self.ctxt.request_id) updated_event_start = action['updated_at'] self.assertEqual(time_start.isoformat(), updated_event_start.isoformat()) self.assertTrue(updated_event_start > updated_create) # event finish action time_finish = timeutils.utcnow() + datetime.timedelta(seconds=10) event_values = { 'finish_time': time_finish, 'result': 'Success' } event_values = self._create_event_values(uuid, extra=event_values) db.action_event_finish(self.ctxt, event_values) action = db.action_get_by_request_id(self.ctxt, uuid, self.ctxt.request_id) updated_event_finish = action['updated_at'] self.assertEqual(time_finish.isoformat(), updated_event_finish.isoformat()) self.assertTrue(updated_event_finish > updated_event_start) def test_instance_action_not_updated_with_unknown_event_request(self): """Tests that we don't update the action.updated_at field when starting or finishing an action event if we couldn't find the action by the request_id. """ # Create a valid action - this represents an active user request. uuid = uuidsentinel.uuid1 action = db.action_start(self.ctxt, self._create_action_values(uuid)) updated_create = action['updated_at'] self.assertIsNotNone(updated_create) event_values = self._create_event_values(uuid) # Now start an event on an unknown request ID and admin context where # project_id won't be set. 
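# (The request id below is deliberately fabricated from a uuidsentinel so it cannot match the action's request_id; the expectation, per the docstring above, is that action['updated_at'] stays at its original value.)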
time_start = timeutils.utcnow() + datetime.timedelta(seconds=5) event_values['start_time'] = time_start random_request_id = 'req-%s' % uuidsentinel.request_id event_values['request_id'] = random_request_id admin_context = context.get_admin_context() event_ref = db.action_event_start(admin_context, event_values) # The event would be created on the existing action. self.assertEqual(action['id'], event_ref['action_id']) # And the action.update_at should be the same as before the event was # started. action = db.action_get_by_request_id(self.ctxt, uuid, self.ctxt.request_id) self.assertEqual(updated_create, action['updated_at']) # Now finish the event on the unknown request ID and admin context. time_finish = timeutils.utcnow() + datetime.timedelta(seconds=10) event_values = { 'finish_time': time_finish, 'request_id': random_request_id, 'result': 'Success' } event_values = self._create_event_values(uuid, extra=event_values) db.action_event_finish(admin_context, event_values) # And the action.update_at should be the same as before the event was # finished. action = db.action_get_by_request_id(self.ctxt, uuid, self.ctxt.request_id) self.assertEqual(updated_create, action['updated_at']) class InstanceFaultTestCase(test.TestCase, ModelsObjectComparatorMixin): def setUp(self): super(InstanceFaultTestCase, self).setUp() self.ctxt = context.get_admin_context() def _create_fault_values(self, uuid, code=404): return { 'message': 'message', 'details': 'detail', 'instance_uuid': uuid, 'code': code, 'host': 'localhost' } def test_instance_fault_create(self): """Ensure we can create an instance fault.""" uuid = uuidsentinel.uuid1 # Ensure no faults registered for this instance faults = db.instance_fault_get_by_instance_uuids(self.ctxt, [uuid]) self.assertEqual(0, len(faults[uuid])) # Create a fault fault_values = self._create_fault_values(uuid) db.instance_create(self.ctxt, {'uuid': uuid}) fault = db.instance_fault_create(self.ctxt, fault_values) ignored_keys = ['deleted', 'created_at', 'updated_at', 'deleted_at', 'id'] self._assertEqualObjects(fault_values, fault, ignored_keys) # Retrieve the fault to ensure it was successfully added faults = db.instance_fault_get_by_instance_uuids(self.ctxt, [uuid]) self.assertEqual(1, len(faults[uuid])) self._assertEqualObjects(fault, faults[uuid][0]) def test_instance_fault_get_by_instance(self): """Ensure we can retrieve faults for instance.""" uuids = [uuidsentinel.uuid1, uuidsentinel.uuid2] fault_codes = [404, 500] expected = {} # Create faults for uuid in uuids: db.instance_create(self.ctxt, {'uuid': uuid}) expected[uuid] = [] for code in fault_codes: fault_values = self._create_fault_values(uuid, code) fault = db.instance_fault_create(self.ctxt, fault_values) # We expect the faults to be returned ordered by created_at in # descending order, so insert the newly created fault at the # front of our list. 
expected[uuid].insert(0, fault) # Ensure faults are saved faults = db.instance_fault_get_by_instance_uuids(self.ctxt, uuids) self.assertEqual(len(expected), len(faults)) for uuid in uuids: self._assertEqualOrderedListOfObjects(expected[uuid], faults[uuid]) def test_instance_fault_get_latest_by_instance(self): """Ensure we can retrieve only latest faults for instance.""" uuids = [uuidsentinel.uuid1, uuidsentinel.uuid2] fault_codes = [404, 500] expected = {} # Create faults for uuid in uuids: db.instance_create(self.ctxt, {'uuid': uuid}) expected[uuid] = [] for code in fault_codes: fault_values = self._create_fault_values(uuid, code) fault = db.instance_fault_create(self.ctxt, fault_values) expected[uuid].append(fault) # We are only interested in the latest fault for each instance for uuid in expected: expected[uuid] = expected[uuid][-1:] # Ensure faults are saved faults = db.instance_fault_get_by_instance_uuids(self.ctxt, uuids, latest=True) self.assertEqual(len(expected), len(faults)) for uuid in uuids: self._assertEqualListsOfObjects(expected[uuid], faults[uuid]) def test_instance_faults_get_by_instance_uuids_no_faults(self): uuid = uuidsentinel.uuid1 # None should be returned when no faults exist. faults = db.instance_fault_get_by_instance_uuids(self.ctxt, [uuid]) expected = {uuid: []} self.assertEqual(expected, faults) @mock.patch.object(query.Query, 'filter') def test_instance_faults_get_by_instance_uuids_no_uuids(self, mock_filter): faults = db.instance_fault_get_by_instance_uuids(self.ctxt, []) self.assertEqual({}, faults) self.assertFalse(mock_filter.called) class InstanceDestroyConstraints(test.TestCase): def test_destroy_with_equal_any_constraint_met_single_value(self): ctx = context.get_admin_context() instance = db.instance_create(ctx, {'task_state': 'deleting'}) constraint = db.constraint(task_state=db.equal_any('deleting')) db.instance_destroy(ctx, instance['uuid'], constraint) self.assertRaises(exception.InstanceNotFound, db.instance_get_by_uuid, ctx, instance['uuid']) def test_destroy_with_equal_any_constraint_met(self): ctx = context.get_admin_context() instance = db.instance_create(ctx, {'task_state': 'deleting'}) constraint = db.constraint(task_state=db.equal_any('deleting', 'error')) db.instance_destroy(ctx, instance['uuid'], constraint) self.assertRaises(exception.InstanceNotFound, db.instance_get_by_uuid, ctx, instance['uuid']) def test_destroy_with_equal_any_constraint_not_met(self): ctx = context.get_admin_context() instance = db.instance_create(ctx, {'vm_state': 'resize'}) constraint = db.constraint(vm_state=db.equal_any('active', 'error')) self.assertRaises(exception.ConstraintNotMet, db.instance_destroy, ctx, instance['uuid'], constraint) instance = db.instance_get_by_uuid(ctx, instance['uuid']) self.assertFalse(instance['deleted']) def test_destroy_with_not_equal_constraint_met(self): ctx = context.get_admin_context() instance = db.instance_create(ctx, {'task_state': 'deleting'}) constraint = db.constraint(task_state=db.not_equal('error', 'resize')) db.instance_destroy(ctx, instance['uuid'], constraint) self.assertRaises(exception.InstanceNotFound, db.instance_get_by_uuid, ctx, instance['uuid']) def test_destroy_with_not_equal_constraint_not_met(self): ctx = context.get_admin_context() instance = db.instance_create(ctx, {'vm_state': 'active'}) constraint = db.constraint(vm_state=db.not_equal('active', 'error')) self.assertRaises(exception.ConstraintNotMet, db.instance_destroy, ctx, instance['uuid'], constraint) instance = db.instance_get_by_uuid(ctx, 
instance['uuid']) self.assertFalse(instance['deleted']) class VolumeUsageDBApiTestCase(test.TestCase): def setUp(self): super(VolumeUsageDBApiTestCase, self).setUp() self.user_id = 'fake' self.project_id = 'fake' self.context = context.RequestContext(self.user_id, self.project_id) self.useFixture(test.TimeOverride()) def test_vol_usage_update_no_totals_update(self): ctxt = context.get_admin_context() now = timeutils.utcnow() self.useFixture(utils_fixture.TimeFixture(now)) start_time = now - datetime.timedelta(seconds=10) expected_vol_usages = { u'1': {'volume_id': u'1', 'instance_uuid': 'fake-instance-uuid1', 'project_id': 'fake-project-uuid1', 'user_id': 'fake-user-uuid1', 'curr_reads': 1000, 'curr_read_bytes': 2000, 'curr_writes': 3000, 'curr_write_bytes': 4000, 'curr_last_refreshed': now, 'tot_reads': 0, 'tot_read_bytes': 0, 'tot_writes': 0, 'tot_write_bytes': 0, 'tot_last_refreshed': None}, u'2': {'volume_id': u'2', 'instance_uuid': 'fake-instance-uuid2', 'project_id': 'fake-project-uuid2', 'user_id': 'fake-user-uuid2', 'curr_reads': 100, 'curr_read_bytes': 200, 'curr_writes': 300, 'curr_write_bytes': 400, 'tot_reads': 0, 'tot_read_bytes': 0, 'tot_writes': 0, 'tot_write_bytes': 0, 'tot_last_refreshed': None} } def _compare(vol_usage, expected): for key, value in expected.items(): self.assertEqual(vol_usage[key], value) vol_usages = db.vol_get_usage_by_time(ctxt, start_time) self.assertEqual(len(vol_usages), 0) db.vol_usage_update(ctxt, u'1', rd_req=10, rd_bytes=20, wr_req=30, wr_bytes=40, instance_id='fake-instance-uuid1', project_id='fake-project-uuid1', user_id='fake-user-uuid1', availability_zone='fake-az') db.vol_usage_update(ctxt, u'2', rd_req=100, rd_bytes=200, wr_req=300, wr_bytes=400, instance_id='fake-instance-uuid2', project_id='fake-project-uuid2', user_id='fake-user-uuid2', availability_zone='fake-az') db.vol_usage_update(ctxt, u'1', rd_req=1000, rd_bytes=2000, wr_req=3000, wr_bytes=4000, instance_id='fake-instance-uuid1', project_id='fake-project-uuid1', user_id='fake-user-uuid1', availability_zone='fake-az') vol_usages = db.vol_get_usage_by_time(ctxt, start_time) self.assertEqual(len(vol_usages), 2) for usage in vol_usages: _compare(usage, expected_vol_usages[usage.volume_id]) def test_vol_usage_update_totals_update(self): ctxt = context.get_admin_context() now = datetime.datetime(1, 1, 1, 1, 0, 0) start_time = now - datetime.timedelta(seconds=10) now1 = now + datetime.timedelta(minutes=1) now2 = now + datetime.timedelta(minutes=2) now3 = now + datetime.timedelta(minutes=3) time_fixture = self.useFixture(utils_fixture.TimeFixture(now)) db.vol_usage_update(ctxt, u'1', rd_req=100, rd_bytes=200, wr_req=300, wr_bytes=400, instance_id='fake-instance-uuid', project_id='fake-project-uuid', user_id='fake-user-uuid', availability_zone='fake-az') current_usage = db.vol_get_usage_by_time(ctxt, start_time)[0] self.assertEqual(current_usage['tot_reads'], 0) self.assertEqual(current_usage['curr_reads'], 100) time_fixture.advance_time_delta(now1 - now) db.vol_usage_update(ctxt, u'1', rd_req=200, rd_bytes=300, wr_req=400, wr_bytes=500, instance_id='fake-instance-uuid', project_id='fake-project-uuid', user_id='fake-user-uuid', availability_zone='fake-az', update_totals=True) current_usage = db.vol_get_usage_by_time(ctxt, start_time)[0] self.assertEqual(current_usage['tot_reads'], 200) self.assertEqual(current_usage['curr_reads'], 0) time_fixture.advance_time_delta(now2 - now1) db.vol_usage_update(ctxt, u'1', rd_req=300, rd_bytes=400, wr_req=500, wr_bytes=600, 
instance_id='fake-instance-uuid', project_id='fake-project-uuid', availability_zone='fake-az', user_id='fake-user-uuid') current_usage = db.vol_get_usage_by_time(ctxt, start_time)[0] self.assertEqual(current_usage['tot_reads'], 200) self.assertEqual(current_usage['curr_reads'], 300) time_fixture.advance_time_delta(now3 - now2) db.vol_usage_update(ctxt, u'1', rd_req=400, rd_bytes=500, wr_req=600, wr_bytes=700, instance_id='fake-instance-uuid', project_id='fake-project-uuid', user_id='fake-user-uuid', availability_zone='fake-az', update_totals=True) vol_usages = db.vol_get_usage_by_time(ctxt, start_time) expected_vol_usages = {'volume_id': u'1', 'project_id': 'fake-project-uuid', 'user_id': 'fake-user-uuid', 'instance_uuid': 'fake-instance-uuid', 'availability_zone': 'fake-az', 'tot_reads': 600, 'tot_read_bytes': 800, 'tot_writes': 1000, 'tot_write_bytes': 1200, 'tot_last_refreshed': now3, 'curr_reads': 0, 'curr_read_bytes': 0, 'curr_writes': 0, 'curr_write_bytes': 0, 'curr_last_refreshed': now2} self.assertEqual(1, len(vol_usages)) for key, value in expected_vol_usages.items(): self.assertEqual(vol_usages[0][key], value, key) def test_vol_usage_update_when_blockdevicestats_reset(self): ctxt = context.get_admin_context() now = timeutils.utcnow() start_time = now - datetime.timedelta(seconds=10) vol_usages = db.vol_get_usage_by_time(ctxt, start_time) self.assertEqual(len(vol_usages), 0) db.vol_usage_update(ctxt, u'1', rd_req=10000, rd_bytes=20000, wr_req=30000, wr_bytes=40000, instance_id='fake-instance-uuid1', project_id='fake-project-uuid1', availability_zone='fake-az', user_id='fake-user-uuid1') # Instance rebooted or crashed. block device stats were reset and are # less than the previous values db.vol_usage_update(ctxt, u'1', rd_req=100, rd_bytes=200, wr_req=300, wr_bytes=400, instance_id='fake-instance-uuid1', project_id='fake-project-uuid1', availability_zone='fake-az', user_id='fake-user-uuid1') db.vol_usage_update(ctxt, u'1', rd_req=200, rd_bytes=300, wr_req=400, wr_bytes=500, instance_id='fake-instance-uuid1', project_id='fake-project-uuid1', availability_zone='fake-az', user_id='fake-user-uuid1') vol_usage = db.vol_get_usage_by_time(ctxt, start_time)[0] expected_vol_usage = {'volume_id': u'1', 'instance_uuid': 'fake-instance-uuid1', 'project_id': 'fake-project-uuid1', 'availability_zone': 'fake-az', 'user_id': 'fake-user-uuid1', 'curr_reads': 200, 'curr_read_bytes': 300, 'curr_writes': 400, 'curr_write_bytes': 500, 'tot_reads': 10000, 'tot_read_bytes': 20000, 'tot_writes': 30000, 'tot_write_bytes': 40000} for key, value in expected_vol_usage.items(): self.assertEqual(vol_usage[key], value, key) def test_vol_usage_update_totals_update_when_blockdevicestats_reset(self): # This is unlikely to happen, but could when a volume is detached # right after an instance has rebooted / recovered and before # the system polled and updated the volume usage cache table. ctxt = context.get_admin_context() now = timeutils.utcnow() start_time = now - datetime.timedelta(seconds=10) vol_usages = db.vol_get_usage_by_time(ctxt, start_time) self.assertEqual(len(vol_usages), 0) db.vol_usage_update(ctxt, u'1', rd_req=10000, rd_bytes=20000, wr_req=30000, wr_bytes=40000, instance_id='fake-instance-uuid1', project_id='fake-project-uuid1', availability_zone='fake-az', user_id='fake-user-uuid1') # Instance rebooted or crashed. 
block device stats were reset and are # less than the previous values db.vol_usage_update(ctxt, u'1', rd_req=100, rd_bytes=200, wr_req=300, wr_bytes=400, instance_id='fake-instance-uuid1', project_id='fake-project-uuid1', availability_zone='fake-az', user_id='fake-user-uuid1', update_totals=True) vol_usage = db.vol_get_usage_by_time(ctxt, start_time)[0] expected_vol_usage = {'volume_id': u'1', 'instance_uuid': 'fake-instance-uuid1', 'project_id': 'fake-project-uuid1', 'availability_zone': 'fake-az', 'user_id': 'fake-user-uuid1', 'curr_reads': 0, 'curr_read_bytes': 0, 'curr_writes': 0, 'curr_write_bytes': 0, 'tot_reads': 10100, 'tot_read_bytes': 20200, 'tot_writes': 30300, 'tot_write_bytes': 40400} for key, value in expected_vol_usage.items(): self.assertEqual(vol_usage[key], value, key) class TaskLogTestCase(test.TestCase): def setUp(self): super(TaskLogTestCase, self).setUp() self.context = context.get_admin_context() now = timeutils.utcnow() self.begin = (now - datetime.timedelta(seconds=10)).isoformat() self.end = (now - datetime.timedelta(seconds=5)).isoformat() self.task_name = 'fake-task-name' self.host = 'fake-host' self.message = 'Fake task message' db.task_log_begin_task(self.context, self.task_name, self.begin, self.end, self.host, message=self.message) def test_task_log_get(self): result = db.task_log_get(self.context, self.task_name, self.begin, self.end, self.host) self.assertEqual(result['task_name'], self.task_name) self.assertEqual(result['period_beginning'], timeutils.parse_strtime(self.begin)) self.assertEqual(result['period_ending'], timeutils.parse_strtime(self.end)) self.assertEqual(result['host'], self.host) self.assertEqual(result['message'], self.message) def test_task_log_get_all(self): result = db.task_log_get_all(self.context, self.task_name, self.begin, self.end, host=self.host) self.assertEqual(len(result), 1) result = db.task_log_get_all(self.context, self.task_name, self.begin, self.end, host=self.host, state='') self.assertEqual(len(result), 0) def test_task_log_begin_task(self): db.task_log_begin_task(self.context, 'fake', self.begin, self.end, self.host, task_items=42, message=self.message) result = db.task_log_get(self.context, 'fake', self.begin, self.end, self.host) self.assertEqual(result['task_name'], 'fake') def test_task_log_begin_task_duplicate(self): params = (self.context, 'fake', self.begin, self.end, self.host) db.task_log_begin_task(*params, message=self.message) self.assertRaises(exception.TaskAlreadyRunning, db.task_log_begin_task, *params, message=self.message) def test_task_log_end_task(self): errors = 1 db.task_log_end_task(self.context, self.task_name, self.begin, self.end, self.host, errors, message=self.message) result = db.task_log_get(self.context, self.task_name, self.begin, self.end, self.host) self.assertEqual(result['errors'], 1) def test_task_log_end_task_task_not_running(self): self.assertRaises(exception.TaskNotRunning, db.task_log_end_task, self.context, 'nonexistent', self.begin, self.end, self.host, 42, message=self.message) class BlockDeviceMappingTestCase(test.TestCase): def setUp(self): super(BlockDeviceMappingTestCase, self).setUp() self.ctxt = context.get_admin_context() self.instance = db.instance_create(self.ctxt, {}) def _create_bdm(self, values): values.setdefault('instance_uuid', self.instance['uuid']) values.setdefault('device_name', 'fake_device') values.setdefault('source_type', 'volume') values.setdefault('destination_type', 'volume') block_dev = block_device.BlockDeviceDict(values) 
db.block_device_mapping_create(self.ctxt, block_dev, legacy=False) uuid = block_dev['instance_uuid'] bdms = db.block_device_mapping_get_all_by_instance(self.ctxt, uuid) for bdm in bdms: if bdm['device_name'] == values['device_name']: return bdm def test_scrub_empty_str_values_no_effect(self): values = {'volume_size': 5} expected = copy.copy(values) sqlalchemy_api._scrub_empty_str_values(values, ['volume_size']) self.assertEqual(values, expected) def test_scrub_empty_str_values_empty_string(self): values = {'volume_size': ''} sqlalchemy_api._scrub_empty_str_values(values, ['volume_size']) self.assertEqual(values, {}) def test_scrub_empty_str_values_empty_unicode(self): values = {'volume_size': u''} sqlalchemy_api._scrub_empty_str_values(values, ['volume_size']) self.assertEqual(values, {}) def test_block_device_mapping_create(self): bdm = self._create_bdm({}) self.assertIsNotNone(bdm) self.assertTrue(uuidutils.is_uuid_like(bdm['uuid'])) def test_block_device_mapping_create_with_blank_uuid(self): bdm = self._create_bdm({'uuid': ''}) self.assertIsNotNone(bdm) self.assertTrue(uuidutils.is_uuid_like(bdm['uuid'])) def test_block_device_mapping_create_with_invalid_uuid(self): self.assertRaises(exception.InvalidUUID, self._create_bdm, {'uuid': 'invalid-uuid'}) def test_block_device_mapping_create_with_attachment_id(self): bdm = self._create_bdm({'attachment_id': uuidsentinel.attachment_id}) self.assertEqual(uuidsentinel.attachment_id, bdm.attachment_id) def test_block_device_mapping_update(self): bdm = self._create_bdm({}) self.assertIsNone(bdm.attachment_id) result = db.block_device_mapping_update( self.ctxt, bdm['id'], {'destination_type': 'moon', 'attachment_id': uuidsentinel.attachment_id}, legacy=False) uuid = bdm['instance_uuid'] bdm_real = db.block_device_mapping_get_all_by_instance(self.ctxt, uuid) self.assertEqual(bdm_real[0]['destination_type'], 'moon') self.assertEqual(uuidsentinel.attachment_id, bdm_real[0].attachment_id) # Also make sure the update call returned correct data self.assertEqual(dict(bdm_real[0]), dict(result)) def test_block_device_mapping_update_or_create(self): values = { 'instance_uuid': self.instance['uuid'], 'device_name': 'fake_name', 'source_type': 'volume', 'destination_type': 'volume' } # check create bdm = db.block_device_mapping_update_or_create(self.ctxt, copy.deepcopy(values), legacy=False) self.assertTrue(uuidutils.is_uuid_like(bdm['uuid'])) uuid = values['instance_uuid'] bdm_real = db.block_device_mapping_get_all_by_instance(self.ctxt, uuid) self.assertEqual(len(bdm_real), 1) self.assertEqual(bdm_real[0]['device_name'], 'fake_name') # check update bdm0 = copy.deepcopy(values) bdm0['destination_type'] = 'camelot' db.block_device_mapping_update_or_create(self.ctxt, bdm0, legacy=False) bdm_real = db.block_device_mapping_get_all_by_instance(self.ctxt, uuid) self.assertEqual(len(bdm_real), 1) bdm_real = bdm_real[0] self.assertEqual(bdm_real['device_name'], 'fake_name') self.assertEqual(bdm_real['destination_type'], 'camelot') # check create without device_name bdm1 = copy.deepcopy(values) bdm1['device_name'] = None db.block_device_mapping_update_or_create(self.ctxt, bdm1, legacy=False) bdms = db.block_device_mapping_get_all_by_instance(self.ctxt, uuid) with_device_name = [b for b in bdms if b['device_name'] is not None] without_device_name = [b for b in bdms if b['device_name'] is None] self.assertEqual(2, len(bdms)) self.assertEqual(len(with_device_name), 1, 'expected 1 bdm with device_name, found %d' % len(with_device_name)) 
self.assertEqual(len(without_device_name), 1, 'expected 1 bdm without device_name, found %d' % len(without_device_name)) # check create multiple devices without device_name bdm2 = dict(values) bdm2['device_name'] = None db.block_device_mapping_update_or_create(self.ctxt, bdm2, legacy=False) bdms = db.block_device_mapping_get_all_by_instance(self.ctxt, uuid) with_device_name = [b for b in bdms if b['device_name'] is not None] without_device_name = [b for b in bdms if b['device_name'] is None] self.assertEqual(len(with_device_name), 1, 'expected 1 bdm with device_name, found %d' % len(with_device_name)) self.assertEqual(len(without_device_name), 2, 'expected 2 bdms without device_name, found %d' % len(without_device_name)) def test_block_device_mapping_update_or_create_with_uuid(self): # Test that we are able to change device_name when calling # block_device_mapping_update_or_create with a uuid. bdm = self._create_bdm({}) values = { 'uuid': bdm['uuid'], 'instance_uuid': bdm['instance_uuid'], 'device_name': 'foobar', } db.block_device_mapping_update_or_create(self.ctxt, values, legacy=False) real_bdms = db.block_device_mapping_get_all_by_instance( self.ctxt, bdm['instance_uuid']) self.assertEqual(1, len(real_bdms)) self.assertEqual('foobar', real_bdms[0]['device_name']) def test_block_device_mapping_update_or_create_with_blank_uuid(self): # Test that create with block_device_mapping_update_or_create raises an # exception if given an invalid uuid values = { 'uuid': '', 'instance_uuid': uuidsentinel.instance, 'device_name': 'foobar', } db.block_device_mapping_update_or_create(self.ctxt, values) real_bdms = db.block_device_mapping_get_all_by_instance( self.ctxt, uuidsentinel.instance) self.assertEqual(1, len(real_bdms)) self.assertTrue(uuidutils.is_uuid_like(real_bdms[0]['uuid'])) def test_block_device_mapping_update_or_create_with_invalid_uuid(self): # Test that create with block_device_mapping_update_or_create raises an # exception if given an invalid uuid values = { 'uuid': 'invalid-uuid', 'instance_uuid': uuidsentinel.instance, 'device_name': 'foobar', } self.assertRaises(exception.InvalidUUID, db.block_device_mapping_update_or_create, self.ctxt, values) def test_block_device_mapping_update_or_create_multiple_ephemeral(self): uuid = self.instance['uuid'] values = { 'instance_uuid': uuid, 'source_type': 'blank', 'guest_format': 'myformat', } bdm1 = dict(values) bdm1['device_name'] = '/dev/sdb' db.block_device_mapping_update_or_create(self.ctxt, bdm1, legacy=False) bdm2 = dict(values) bdm2['device_name'] = '/dev/sdc' db.block_device_mapping_update_or_create(self.ctxt, bdm2, legacy=False) bdm_real = sorted( db.block_device_mapping_get_all_by_instance(self.ctxt, uuid), key=lambda bdm: bdm['device_name'] ) self.assertEqual(len(bdm_real), 2) for bdm, device_name in zip(bdm_real, ['/dev/sdb', '/dev/sdc']): self.assertEqual(bdm['device_name'], device_name) self.assertEqual(bdm['guest_format'], 'myformat') def test_block_device_mapping_update_or_create_check_remove_virt(self): uuid = self.instance['uuid'] values = { 'instance_uuid': uuid, 'source_type': 'blank', 'destination_type': 'local', 'guest_format': 'swap', } # check that old swap bdms are deleted on create val1 = dict(values) val1['device_name'] = 'device1' db.block_device_mapping_create(self.ctxt, val1, legacy=False) val2 = dict(values) val2['device_name'] = 'device2' db.block_device_mapping_update_or_create(self.ctxt, val2, legacy=False) bdm_real = db.block_device_mapping_get_all_by_instance(self.ctxt, uuid) self.assertEqual(len(bdm_real), 
1) bdm_real = bdm_real[0] self.assertEqual(bdm_real['device_name'], 'device2') self.assertEqual(bdm_real['source_type'], 'blank') self.assertEqual(bdm_real['guest_format'], 'swap') db.block_device_mapping_destroy(self.ctxt, bdm_real['id']) def test_block_device_mapping_get_all_by_instance_uuids(self): uuid1 = self.instance['uuid'] uuid2 = db.instance_create(self.ctxt, {})['uuid'] bdms_values = [{'instance_uuid': uuid1, 'device_name': '/dev/vda'}, {'instance_uuid': uuid2, 'device_name': '/dev/vdb'}, {'instance_uuid': uuid2, 'device_name': '/dev/vdc'}] for bdm in bdms_values: self._create_bdm(bdm) bdms = db.block_device_mapping_get_all_by_instance_uuids( self.ctxt, []) self.assertEqual(len(bdms), 0) bdms = db.block_device_mapping_get_all_by_instance_uuids( self.ctxt, [uuid2]) self.assertEqual(len(bdms), 2) bdms = db.block_device_mapping_get_all_by_instance_uuids( self.ctxt, [uuid1, uuid2]) self.assertEqual(len(bdms), 3) def test_block_device_mapping_get_all_by_instance(self): uuid1 = self.instance['uuid'] uuid2 = db.instance_create(self.ctxt, {})['uuid'] bdms_values = [{'instance_uuid': uuid1, 'device_name': '/dev/vda'}, {'instance_uuid': uuid2, 'device_name': '/dev/vdb'}, {'instance_uuid': uuid2, 'device_name': '/dev/vdc'}] for bdm in bdms_values: self._create_bdm(bdm) bdms = db.block_device_mapping_get_all_by_instance(self.ctxt, uuid1) self.assertEqual(len(bdms), 1) self.assertEqual(bdms[0]['device_name'], '/dev/vda') bdms = db.block_device_mapping_get_all_by_instance(self.ctxt, uuid2) self.assertEqual(len(bdms), 2) def test_block_device_mapping_destroy(self): bdm = self._create_bdm({}) db.block_device_mapping_destroy(self.ctxt, bdm['id']) bdm = db.block_device_mapping_get_all_by_instance(self.ctxt, bdm['instance_uuid']) self.assertEqual(len(bdm), 0) def test_block_device_mapping_destroy_by_instance_and_volume(self): vol_id1 = '69f5c254-1a5b-4fff-acf7-cb369904f58f' vol_id2 = '69f5c254-1a5b-4fff-acf7-cb369904f59f' self._create_bdm({'device_name': '/dev/vda', 'volume_id': vol_id1}) self._create_bdm({'device_name': '/dev/vdb', 'volume_id': vol_id2}) uuid = self.instance['uuid'] db.block_device_mapping_destroy_by_instance_and_volume(self.ctxt, uuid, vol_id1) bdms = db.block_device_mapping_get_all_by_instance(self.ctxt, uuid) self.assertEqual(len(bdms), 1) self.assertEqual(bdms[0]['device_name'], '/dev/vdb') def test_block_device_mapping_destroy_by_instance_and_device(self): self._create_bdm({'device_name': '/dev/vda'}) self._create_bdm({'device_name': '/dev/vdb'}) uuid = self.instance['uuid'] params = (self.ctxt, uuid, '/dev/vdb') db.block_device_mapping_destroy_by_instance_and_device(*params) bdms = db.block_device_mapping_get_all_by_instance(self.ctxt, uuid) self.assertEqual(len(bdms), 1) self.assertEqual(bdms[0]['device_name'], '/dev/vda') def test_block_device_mapping_get_all_by_volume_id(self): self._create_bdm({'volume_id': 'fake_id'}) self._create_bdm({'volume_id': 'fake_id'}) bdms = db.block_device_mapping_get_all_by_volume_id(self.ctxt, 'fake_id') self.assertEqual(bdms[0]['volume_id'], 'fake_id') self.assertEqual(bdms[1]['volume_id'], 'fake_id') self.assertEqual(2, len(bdms)) def test_block_device_mapping_get_all_by_volume_id_join_instance(self): self._create_bdm({'volume_id': 'fake_id'}) bdms = db.block_device_mapping_get_all_by_volume_id(self.ctxt, 'fake_id', ['instance']) self.assertEqual(bdms[0]['volume_id'], 'fake_id') self.assertEqual(bdms[0]['instance']['uuid'], self.instance['uuid']) def test_block_device_mapping_get_by_instance_and_volume_id(self): 
self._create_bdm({'volume_id': 'fake_id'}) bdm = db.block_device_mapping_get_by_instance_and_volume_id(self.ctxt, 'fake_id', self.instance['uuid']) self.assertEqual(bdm['volume_id'], 'fake_id') self.assertEqual(bdm['instance_uuid'], self.instance['uuid']) def test_block_device_mapping_get_by_instance_and_volume_id_multiplebdms( self): self._create_bdm({'volume_id': 'fake_id', 'instance_uuid': self.instance['uuid']}) self._create_bdm({'volume_id': 'fake_id', 'instance_uuid': self.instance['uuid']}) db_bdm = db.block_device_mapping_get_by_instance_and_volume_id( self.ctxt, 'fake_id', self.instance['uuid']) self.assertIsNotNone(db_bdm) self.assertEqual(self.instance['uuid'], db_bdm['instance_uuid']) def test_block_device_mapping_get_by_instance_and_volume_id_multiattach( self): self.instance2 = db.instance_create(self.ctxt, {}) self._create_bdm({'volume_id': 'fake_id', 'instance_uuid': self.instance['uuid']}) self._create_bdm({'volume_id': 'fake_id', 'instance_uuid': self.instance2['uuid']}) bdm = db.block_device_mapping_get_by_instance_and_volume_id(self.ctxt, 'fake_id', self.instance['uuid']) self.assertEqual(bdm['volume_id'], 'fake_id') self.assertEqual(bdm['instance_uuid'], self.instance['uuid']) bdm2 = db.block_device_mapping_get_by_instance_and_volume_id( self.ctxt, 'fake_id', self.instance2['uuid']) self.assertEqual(bdm2['volume_id'], 'fake_id') self.assertEqual(bdm2['instance_uuid'], self.instance2['uuid']) class AgentBuildTestCase(test.TestCase, ModelsObjectComparatorMixin): """Tests for db.api.agent_build_* methods.""" def setUp(self): super(AgentBuildTestCase, self).setUp() self.ctxt = context.get_admin_context() def test_agent_build_create_and_get_all(self): self.assertEqual(0, len(db.agent_build_get_all(self.ctxt))) agent_build = db.agent_build_create(self.ctxt, {'os': 'GNU/HURD'}) all_agent_builds = db.agent_build_get_all(self.ctxt) self.assertEqual(1, len(all_agent_builds)) self._assertEqualObjects(agent_build, all_agent_builds[0]) def test_agent_build_get_by_triple(self): agent_build = db.agent_build_create( self.ctxt, {'hypervisor': 'kvm', 'os': 'FreeBSD', 'architecture': fields.Architecture.X86_64}) self.assertIsNone(db.agent_build_get_by_triple( self.ctxt, 'kvm', 'FreeBSD', 'i386')) self._assertEqualObjects(agent_build, db.agent_build_get_by_triple( self.ctxt, 'kvm', 'FreeBSD', fields.Architecture.X86_64)) def test_agent_build_destroy(self): agent_build = db.agent_build_create(self.ctxt, {}) self.assertEqual(1, len(db.agent_build_get_all(self.ctxt))) db.agent_build_destroy(self.ctxt, agent_build.id) self.assertEqual(0, len(db.agent_build_get_all(self.ctxt))) def test_agent_build_update(self): agent_build = db.agent_build_create(self.ctxt, {'os': 'HaikuOS'}) db.agent_build_update(self.ctxt, agent_build.id, {'os': 'ReactOS'}) self.assertEqual('ReactOS', db.agent_build_get_all(self.ctxt)[0].os) def test_agent_build_destroy_destroyed(self): agent_build = db.agent_build_create(self.ctxt, {}) db.agent_build_destroy(self.ctxt, agent_build.id) self.assertRaises(exception.AgentBuildNotFound, db.agent_build_destroy, self.ctxt, agent_build.id) def test_agent_build_update_destroyed(self): agent_build = db.agent_build_create(self.ctxt, {'os': 'HaikuOS'}) db.agent_build_destroy(self.ctxt, agent_build.id) self.assertRaises(exception.AgentBuildNotFound, db.agent_build_update, self.ctxt, agent_build.id, {'os': 'OS/2'}) def test_agent_build_exists(self): values = {'hypervisor': 'kvm', 'os': 'FreeBSD', 'architecture': fields.Architecture.X86_64} db.agent_build_create(self.ctxt, values) 
self.assertRaises(exception.AgentBuildExists, db.agent_build_create, self.ctxt, values) def test_agent_build_get_all_by_hypervisor(self): values = {'hypervisor': 'kvm', 'os': 'FreeBSD', 'architecture': fields.Architecture.X86_64} created = db.agent_build_create(self.ctxt, values) actual = db.agent_build_get_all(self.ctxt, hypervisor='kvm') self._assertEqualListsOfObjects([created], actual) class VirtualInterfaceTestCase(test.TestCase, ModelsObjectComparatorMixin): def setUp(self): super(VirtualInterfaceTestCase, self).setUp() self.ctxt = context.get_admin_context() self.instance_uuid = db.instance_create(self.ctxt, {})['uuid'] def _get_base_values(self): return { 'instance_uuid': self.instance_uuid, 'address': 'fake_address', 'network_id': uuidsentinel.network, 'uuid': uuidutils.generate_uuid(), 'tag': 'fake-tag', } def _create_virt_interface(self, values): v = self._get_base_values() v.update(values) return db.virtual_interface_create(self.ctxt, v) def test_virtual_interface_create(self): vif = self._create_virt_interface({}) self.assertIsNotNone(vif['id']) ignored_keys = ['id', 'deleted', 'deleted_at', 'updated_at', 'created_at', 'uuid'] self._assertEqualObjects(vif, self._get_base_values(), ignored_keys) def test_virtual_interface_create_with_duplicate_address(self): vif = self._create_virt_interface({}) self.assertRaises(exception.VirtualInterfaceCreateException, self._create_virt_interface, {"uuid": vif['uuid']}) def test_virtual_interface_get(self): vifs = [self._create_virt_interface({'address': 'a'}), self._create_virt_interface({'address': 'b'})] for vif in vifs: real_vif = db.virtual_interface_get(self.ctxt, vif['id']) self._assertEqualObjects(vif, real_vif) def test_virtual_interface_get_by_address(self): vifs = [self._create_virt_interface({'address': 'first'}), self._create_virt_interface({'address': 'second'})] for vif in vifs: real_vif = db.virtual_interface_get_by_address(self.ctxt, vif['address']) self._assertEqualObjects(vif, real_vif) def test_virtual_interface_get_by_address_not_found(self): self.assertIsNone(db.virtual_interface_get_by_address(self.ctxt, "i.nv.ali.ip")) @mock.patch.object(query.Query, 'first', side_effect=db_exc.DBError()) def test_virtual_interface_get_by_address_data_error_exception(self, mock_query): self.assertRaises(exception.InvalidIpAddressError, db.virtual_interface_get_by_address, self.ctxt, "i.nv.ali.ip") mock_query.assert_called_once_with() def test_virtual_interface_get_by_uuid(self): vifs = [self._create_virt_interface({"address": "address_1"}), self._create_virt_interface({"address": "address_2"})] for vif in vifs: real_vif = db.virtual_interface_get_by_uuid(self.ctxt, vif['uuid']) self._assertEqualObjects(vif, real_vif) def test_virtual_interface_get_by_instance(self): inst_uuid2 = db.instance_create(self.ctxt, {})['uuid'] vifs1 = [self._create_virt_interface({'address': 'fake1'}), self._create_virt_interface({'address': 'fake2'})] # multiple nic of same instance vifs2 = [self._create_virt_interface({'address': 'fake3', 'instance_uuid': inst_uuid2}), self._create_virt_interface({'address': 'fake4', 'instance_uuid': inst_uuid2})] vifs1_real = db.virtual_interface_get_by_instance(self.ctxt, self.instance_uuid) vifs2_real = db.virtual_interface_get_by_instance(self.ctxt, inst_uuid2) self._assertEqualListsOfObjects(vifs1, vifs1_real) self._assertEqualOrderedListOfObjects(vifs2, vifs2_real) def test_virtual_interface_get_by_instance_and_network(self): inst_uuid2 = db.instance_create(self.ctxt, {})['uuid'] network_id = 
uuidutils.generate_uuid() vifs = [self._create_virt_interface({'address': 'fake1'}), self._create_virt_interface({'address': 'fake2', 'network_id': network_id, 'instance_uuid': inst_uuid2}), self._create_virt_interface({'address': 'fake3', 'instance_uuid': inst_uuid2})] for vif in vifs: params = (self.ctxt, vif['instance_uuid'], vif['network_id']) r_vif = db.virtual_interface_get_by_instance_and_network(*params) self._assertEqualObjects(r_vif, vif) def test_virtual_interface_delete_by_instance(self): inst_uuid2 = db.instance_create(self.ctxt, {})['uuid'] values = [dict(address='fake1'), dict(address='fake2'), dict(address='fake3', instance_uuid=inst_uuid2)] for vals in values: self._create_virt_interface(vals) db.virtual_interface_delete_by_instance(self.ctxt, self.instance_uuid) real_vifs1 = db.virtual_interface_get_by_instance(self.ctxt, self.instance_uuid) real_vifs2 = db.virtual_interface_get_by_instance(self.ctxt, inst_uuid2) self.assertEqual(len(real_vifs1), 0) self.assertEqual(len(real_vifs2), 1) def test_virtual_interface_delete(self): values = [dict(address='fake1'), dict(address='fake2'), dict(address='fake3')] vifs = [] for vals in values: vifs.append(self._create_virt_interface( dict(vals, instance_uuid=self.instance_uuid))) db.virtual_interface_delete(self.ctxt, vifs[0]['id']) real_vifs = db.virtual_interface_get_by_instance(self.ctxt, self.instance_uuid) self.assertEqual(2, len(real_vifs)) def test_virtual_interface_get_all(self): inst_uuid2 = db.instance_create(self.ctxt, {})['uuid'] values = [dict(address='fake1'), dict(address='fake2'), dict(address='fake3', instance_uuid=inst_uuid2)] vifs = [self._create_virt_interface(val) for val in values] real_vifs = db.virtual_interface_get_all(self.ctxt) self._assertEqualListsOfObjects(vifs, real_vifs) def test_virtual_interface_update(self): instance_uuid = db.instance_create(self.ctxt, {})['uuid'] network_id = uuidutils.generate_uuid() create = {'address': 'fake1', 'network_id': network_id, 'instance_uuid': instance_uuid, 'uuid': uuidsentinel.vif_uuid, 'tag': 'foo'} update = {'tag': 'bar'} updated = {'address': 'fake1', 'network_id': network_id, 'instance_uuid': instance_uuid, 'uuid': uuidsentinel.vif_uuid, 'tag': 'bar', 'deleted': 0} ignored_keys = ['created_at', 'id', 'deleted_at', 'updated_at'] vif_addr = db.virtual_interface_create(self.ctxt, create)['address'] db.virtual_interface_update(self.ctxt, vif_addr, update) updated_vif = db.virtual_interface_get_by_address(self.ctxt, updated['address']) self._assertEqualObjects(updated, updated_vif, ignored_keys) class KeyPairTestCase(test.TestCase, ModelsObjectComparatorMixin): def setUp(self): super(KeyPairTestCase, self).setUp() self.ctxt = context.get_admin_context() def _create_key_pair(self, values): return db.key_pair_create(self.ctxt, values) def test_key_pair_create(self): param = { 'name': 'test_1', 'type': 'ssh', 'user_id': 'test_user_id_1', 'public_key': 'test_public_key_1', 'fingerprint': 'test_fingerprint_1' } key_pair = self._create_key_pair(param) self.assertIsNotNone(key_pair['id']) ignored_keys = ['deleted', 'created_at', 'updated_at', 'deleted_at', 'id'] self._assertEqualObjects(key_pair, param, ignored_keys) def test_key_pair_create_with_duplicate_name(self): params = {'name': 'test_name', 'user_id': 'test_user_id', 'type': 'ssh'} self._create_key_pair(params) self.assertRaises(exception.KeyPairExists, self._create_key_pair, params) def test_key_pair_get(self): params = [ {'name': 'test_1', 'user_id': 'test_user_id_1', 'type': 'ssh'}, {'name': 'test_2', 'user_id': 
'test_user_id_2', 'type': 'ssh'}, {'name': 'test_3', 'user_id': 'test_user_id_3', 'type': 'ssh'} ] key_pairs = [self._create_key_pair(p) for p in params] for key in key_pairs: real_key = db.key_pair_get(self.ctxt, key['user_id'], key['name']) self._assertEqualObjects(key, real_key) def test_key_pair_get_no_results(self): param = {'name': 'test_1', 'user_id': 'test_user_id_1'} self.assertRaises(exception.KeypairNotFound, db.key_pair_get, self.ctxt, param['user_id'], param['name']) def test_key_pair_get_deleted(self): param = {'name': 'test_1', 'user_id': 'test_user_id_1', 'type': 'ssh'} key_pair_created = self._create_key_pair(param) db.key_pair_destroy(self.ctxt, param['user_id'], param['name']) self.assertRaises(exception.KeypairNotFound, db.key_pair_get, self.ctxt, param['user_id'], param['name']) ctxt = self.ctxt.elevated(read_deleted='yes') key_pair_deleted = db.key_pair_get(ctxt, param['user_id'], param['name']) ignored_keys = ['deleted', 'created_at', 'updated_at', 'deleted_at'] self._assertEqualObjects(key_pair_deleted, key_pair_created, ignored_keys) self.assertEqual(key_pair_deleted['deleted'], key_pair_deleted['id']) def test_key_pair_get_all_by_user(self): params = [ {'name': 'test_1', 'user_id': 'test_user_id_1', 'type': 'ssh'}, {'name': 'test_2', 'user_id': 'test_user_id_1', 'type': 'ssh'}, {'name': 'test_3', 'user_id': 'test_user_id_2', 'type': 'ssh'} ] key_pairs_user_1 = [self._create_key_pair(p) for p in params if p['user_id'] == 'test_user_id_1'] key_pairs_user_2 = [self._create_key_pair(p) for p in params if p['user_id'] == 'test_user_id_2'] real_keys_1 = db.key_pair_get_all_by_user(self.ctxt, 'test_user_id_1') real_keys_2 = db.key_pair_get_all_by_user(self.ctxt, 'test_user_id_2') self._assertEqualListsOfObjects(key_pairs_user_1, real_keys_1) self._assertEqualListsOfObjects(key_pairs_user_2, real_keys_2) def test_key_pair_get_all_by_user_limit_and_marker(self): params = [ {'name': 'test_1', 'user_id': 'test_user_id', 'type': 'ssh'}, {'name': 'test_2', 'user_id': 'test_user_id', 'type': 'ssh'}, {'name': 'test_3', 'user_id': 'test_user_id', 'type': 'ssh'} ] # check all 3 keypairs keys = [self._create_key_pair(p) for p in params] db_keys = db.key_pair_get_all_by_user(self.ctxt, 'test_user_id') self._assertEqualListsOfObjects(keys, db_keys) # check only 1 keypair expected_keys = [keys[0]] db_keys = db.key_pair_get_all_by_user(self.ctxt, 'test_user_id', limit=1) self._assertEqualListsOfObjects(expected_keys, db_keys) # check keypairs after 'test_1' expected_keys = [keys[1], keys[2]] db_keys = db.key_pair_get_all_by_user(self.ctxt, 'test_user_id', marker='test_1') self._assertEqualListsOfObjects(expected_keys, db_keys) # check only 1 keypairs after 'test_1' expected_keys = [keys[1]] db_keys = db.key_pair_get_all_by_user(self.ctxt, 'test_user_id', limit=1, marker='test_1') self._assertEqualListsOfObjects(expected_keys, db_keys) # check non-existing keypair self.assertRaises(exception.MarkerNotFound, db.key_pair_get_all_by_user, self.ctxt, 'test_user_id', limit=1, marker='unknown_kp') def test_key_pair_get_all_by_user_different_users(self): params1 = [ {'name': 'test_1', 'user_id': 'test_user_1', 'type': 'ssh'}, {'name': 'test_2', 'user_id': 'test_user_1', 'type': 'ssh'}, {'name': 'test_3', 'user_id': 'test_user_1', 'type': 'ssh'} ] params2 = [ {'name': 'test_1', 'user_id': 'test_user_2', 'type': 'ssh'}, {'name': 'test_2', 'user_id': 'test_user_2', 'type': 'ssh'}, {'name': 'test_3', 'user_id': 'test_user_2', 'type': 'ssh'} ] # create keypairs for two users keys1 = 
[self._create_key_pair(p) for p in params1] keys2 = [self._create_key_pair(p) for p in params2] # check all 2 keypairs for test_user_1 db_keys = db.key_pair_get_all_by_user(self.ctxt, 'test_user_1') self._assertEqualListsOfObjects(keys1, db_keys) # check all 2 keypairs for test_user_2 db_keys = db.key_pair_get_all_by_user(self.ctxt, 'test_user_2') self._assertEqualListsOfObjects(keys2, db_keys) # check only 1 keypair for test_user_1 expected_keys = [keys1[0]] db_keys = db.key_pair_get_all_by_user(self.ctxt, 'test_user_1', limit=1) self._assertEqualListsOfObjects(expected_keys, db_keys) # check keypairs after 'test_1' for test_user_2 expected_keys = [keys2[1], keys2[2]] db_keys = db.key_pair_get_all_by_user(self.ctxt, 'test_user_2', marker='test_1') self._assertEqualListsOfObjects(expected_keys, db_keys) # check only 1 keypairs after 'test_1' for test_user_1 expected_keys = [keys1[1]] db_keys = db.key_pair_get_all_by_user(self.ctxt, 'test_user_1', limit=1, marker='test_1') self._assertEqualListsOfObjects(expected_keys, db_keys) # check non-existing keypair for test_user_2 self.assertRaises(exception.MarkerNotFound, db.key_pair_get_all_by_user, self.ctxt, 'test_user_2', limit=1, marker='unknown_kp') def test_key_pair_count_by_user(self): params = [ {'name': 'test_1', 'user_id': 'test_user_id_1', 'type': 'ssh'}, {'name': 'test_2', 'user_id': 'test_user_id_1', 'type': 'ssh'}, {'name': 'test_3', 'user_id': 'test_user_id_2', 'type': 'ssh'} ] for p in params: self._create_key_pair(p) count_1 = db.key_pair_count_by_user(self.ctxt, 'test_user_id_1') self.assertEqual(count_1, 2) count_2 = db.key_pair_count_by_user(self.ctxt, 'test_user_id_2') self.assertEqual(count_2, 1) def test_key_pair_destroy(self): param = {'name': 'test_1', 'user_id': 'test_user_id_1', 'type': 'ssh'} self._create_key_pair(param) db.key_pair_destroy(self.ctxt, param['user_id'], param['name']) self.assertRaises(exception.KeypairNotFound, db.key_pair_get, self.ctxt, param['user_id'], param['name']) def test_key_pair_destroy_no_such_key(self): param = {'name': 'test_1', 'user_id': 'test_user_id_1'} self.assertRaises(exception.KeypairNotFound, db.key_pair_destroy, self.ctxt, param['user_id'], param['name']) class QuotaTestCase(test.TestCase, ModelsObjectComparatorMixin): """Tests for db.api.quota_* methods.""" def setUp(self): super(QuotaTestCase, self).setUp() self.ctxt = context.get_admin_context() def test_quota_create(self): quota = db.quota_create(self.ctxt, 'project1', 'resource', 99) self.assertEqual(quota.resource, 'resource') self.assertEqual(quota.hard_limit, 99) self.assertEqual(quota.project_id, 'project1') def test_quota_get(self): quota = db.quota_create(self.ctxt, 'project1', 'resource', 99) quota_db = db.quota_get(self.ctxt, 'project1', 'resource') self._assertEqualObjects(quota, quota_db) def test_quota_get_all_by_project(self): for i in range(3): for j in range(3): db.quota_create(self.ctxt, 'proj%d' % i, 'resource%d' % j, j) for i in range(3): quotas_db = db.quota_get_all_by_project(self.ctxt, 'proj%d' % i) self.assertEqual(quotas_db, {'project_id': 'proj%d' % i, 'resource0': 0, 'resource1': 1, 'resource2': 2}) def test_quota_get_all_by_project_and_user(self): for i in range(3): for j in range(3): db.quota_create(self.ctxt, 'proj%d' % i, 'resource%d' % j, j - 1, user_id='user%d' % i) for i in range(3): quotas_db = db.quota_get_all_by_project_and_user(self.ctxt, 'proj%d' % i, 'user%d' % i) self.assertEqual(quotas_db, {'project_id': 'proj%d' % i, 'user_id': 'user%d' % i, 'resource0': -1, 'resource1': 0, 
'resource2': 1}) def test_quota_update(self): db.quota_create(self.ctxt, 'project1', 'resource1', 41) db.quota_update(self.ctxt, 'project1', 'resource1', 42) quota = db.quota_get(self.ctxt, 'project1', 'resource1') self.assertEqual(quota.hard_limit, 42) self.assertEqual(quota.resource, 'resource1') self.assertEqual(quota.project_id, 'project1') def test_quota_update_nonexistent(self): self.assertRaises(exception.ProjectQuotaNotFound, db.quota_update, self.ctxt, 'project1', 'resource1', 42) def test_quota_get_nonexistent(self): self.assertRaises(exception.ProjectQuotaNotFound, db.quota_get, self.ctxt, 'project1', 'resource1') def test_quota_destroy_all_by_project(self): _quota_create(self.ctxt, 'project1', 'user1') db.quota_destroy_all_by_project(self.ctxt, 'project1') self.assertEqual(db.quota_get_all_by_project(self.ctxt, 'project1'), {'project_id': 'project1'}) self.assertEqual(db.quota_get_all_by_project_and_user(self.ctxt, 'project1', 'user1'), {'project_id': 'project1', 'user_id': 'user1'}) def test_quota_destroy_all_by_project_and_user(self): _quota_create(self.ctxt, 'project1', 'user1') db.quota_destroy_all_by_project_and_user(self.ctxt, 'project1', 'user1') self.assertEqual(db.quota_get_all_by_project_and_user(self.ctxt, 'project1', 'user1'), {'project_id': 'project1', 'user_id': 'user1'}) def test_quota_create_exists(self): db.quota_create(self.ctxt, 'project1', 'resource1', 41) self.assertRaises(exception.QuotaExists, db.quota_create, self.ctxt, 'project1', 'resource1', 42) class QuotaClassTestCase(test.TestCase, ModelsObjectComparatorMixin): def setUp(self): super(QuotaClassTestCase, self).setUp() self.ctxt = context.get_admin_context() def test_quota_class_get_default(self): params = { 'test_resource1': '10', 'test_resource2': '20', 'test_resource3': '30', } for res, limit in params.items(): db.quota_class_create(self.ctxt, 'default', res, limit) defaults = db.quota_class_get_default(self.ctxt) self.assertEqual(defaults, dict(class_name='default', test_resource1=10, test_resource2=20, test_resource3=30)) def test_quota_class_create(self): qc = db.quota_class_create(self.ctxt, 'class name', 'resource', 42) self.assertEqual(qc.class_name, 'class name') self.assertEqual(qc.resource, 'resource') self.assertEqual(qc.hard_limit, 42) def test_quota_class_get(self): qc = db.quota_class_create(self.ctxt, 'class name', 'resource', 42) qc_db = db.quota_class_get(self.ctxt, 'class name', 'resource') self._assertEqualObjects(qc, qc_db) def test_quota_class_get_nonexistent(self): self.assertRaises(exception.QuotaClassNotFound, db.quota_class_get, self.ctxt, 'nonexistent', 'resource') def test_quota_class_get_all_by_name(self): for i in range(3): for j in range(3): db.quota_class_create(self.ctxt, 'class%d' % i, 'resource%d' % j, j) for i in range(3): classes = db.quota_class_get_all_by_name(self.ctxt, 'class%d' % i) self.assertEqual(classes, {'class_name': 'class%d' % i, 'resource0': 0, 'resource1': 1, 'resource2': 2}) def test_quota_class_update(self): db.quota_class_create(self.ctxt, 'class name', 'resource', 42) db.quota_class_update(self.ctxt, 'class name', 'resource', 43) self.assertEqual(db.quota_class_get(self.ctxt, 'class name', 'resource').hard_limit, 43) def test_quota_class_update_nonexistent(self): self.assertRaises(exception.QuotaClassNotFound, db.quota_class_update, self.ctxt, 'class name', 'resource', 42) class S3ImageTestCase(test.TestCase): def setUp(self): super(S3ImageTestCase, self).setUp() self.ctxt = context.get_admin_context() self.values = [uuidutils.generate_uuid() 
for i in range(3)] self.images = [db.s3_image_create(self.ctxt, uuid) for uuid in self.values] def test_s3_image_create(self): for ref in self.images: self.assertTrue(uuidutils.is_uuid_like(ref.uuid)) self.assertEqual(sorted(self.values), sorted([ref.uuid for ref in self.images])) def test_s3_image_get_by_uuid(self): for uuid in self.values: ref = db.s3_image_get_by_uuid(self.ctxt, uuid) self.assertTrue(uuidutils.is_uuid_like(ref.uuid)) self.assertEqual(uuid, ref.uuid) def test_s3_image_get(self): self.assertEqual(sorted(self.values), sorted([db.s3_image_get(self.ctxt, ref.id).uuid for ref in self.images])) def test_s3_image_get_not_found(self): self.assertRaises(exception.ImageNotFound, db.s3_image_get, self.ctxt, 100500) def test_s3_image_get_by_uuid_not_found(self): self.assertRaises(exception.ImageNotFound, db.s3_image_get_by_uuid, self.ctxt, uuidsentinel.uuid1) class ComputeNodeTestCase(test.TestCase, ModelsObjectComparatorMixin): _ignored_keys = ['id', 'deleted', 'deleted_at', 'created_at', 'updated_at'] def setUp(self): super(ComputeNodeTestCase, self).setUp() self.ctxt = context.get_admin_context() self.service_dict = dict(host='host1', binary='nova-compute', topic=compute_rpcapi.RPC_TOPIC, report_count=1, disabled=False) self.service = db.service_create(self.ctxt, self.service_dict) self.compute_node_dict = dict(vcpus=2, memory_mb=1024, local_gb=2048, uuid=uuidutils.generate_uuid(), vcpus_used=0, memory_mb_used=0, local_gb_used=0, free_ram_mb=1024, free_disk_gb=2048, hypervisor_type="xen", hypervisor_version=1, cpu_info="", running_vms=0, current_workload=0, service_id=self.service['id'], host=self.service['host'], disk_available_least=100, hypervisor_hostname='abracadabra104', host_ip='127.0.0.1', supported_instances='', pci_stats='', metrics='', mapped=0, extra_resources='', cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0, stats='', numa_topology='') # add some random stats self.stats = dict(num_instances=3, num_proj_12345=2, num_proj_23456=2, num_vm_building=3) self.compute_node_dict['stats'] = jsonutils.dumps(self.stats) self.flags(reserved_host_memory_mb=0) self.flags(reserved_host_disk_mb=0) self.item = db.compute_node_create(self.ctxt, self.compute_node_dict) def test_compute_node_create(self): self._assertEqualObjects(self.compute_node_dict, self.item, ignored_keys=self._ignored_keys + ['stats']) new_stats = jsonutils.loads(self.item['stats']) self.assertEqual(self.stats, new_stats) def test_compute_node_create_duplicate_host_hypervisor_hostname(self): """Tests to make sure that DBDuplicateEntry is raised when trying to create a duplicate ComputeNode with the same host and hypervisor_hostname values but different uuid values. This makes sure that when _compute_node_get_and_update_deleted returns None the DBDuplicateEntry is re-raised. 
""" other_node = dict(self.compute_node_dict) other_node['uuid'] = uuidutils.generate_uuid() self.assertRaises(db_exc.DBDuplicateEntry, db.compute_node_create, self.ctxt, other_node) def test_compute_node_get_all(self): nodes = db.compute_node_get_all(self.ctxt) self.assertEqual(1, len(nodes)) node = nodes[0] self._assertEqualObjects(self.compute_node_dict, node, ignored_keys=self._ignored_keys + ['stats', 'service']) new_stats = jsonutils.loads(node['stats']) self.assertEqual(self.stats, new_stats) def test_compute_node_get_all_mapped_less_than(self): cn = dict(self.compute_node_dict, hostname='foo', hypervisor_hostname='foo', mapped=None, uuid=uuidutils.generate_uuid()) db.compute_node_create(self.ctxt, cn) cn = dict(self.compute_node_dict, hostname='bar', hypervisor_hostname='nar', mapped=3, uuid=uuidutils.generate_uuid()) db.compute_node_create(self.ctxt, cn) cns = db.compute_node_get_all_mapped_less_than(self.ctxt, 1) self.assertEqual(2, len(cns)) def test_compute_node_get_all_by_pagination(self): service_dict = dict(host='host2', binary='nova-compute', topic=compute_rpcapi.RPC_TOPIC, report_count=1, disabled=False) service = db.service_create(self.ctxt, service_dict) compute_node_dict = dict(vcpus=2, memory_mb=1024, local_gb=2048, uuid=uuidsentinel.fake_compute_node, vcpus_used=0, memory_mb_used=0, local_gb_used=0, free_ram_mb=1024, free_disk_gb=2048, hypervisor_type="xen", hypervisor_version=1, cpu_info="", running_vms=0, current_workload=0, service_id=service['id'], host=service['host'], disk_available_least=100, hypervisor_hostname='abcde11', host_ip='127.0.0.1', supported_instances='', pci_stats='', metrics='', mapped=0, extra_resources='', cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0, stats='', numa_topology='') stats = dict(num_instances=2, num_proj_12345=1, num_proj_23456=1, num_vm_building=2) compute_node_dict['stats'] = jsonutils.dumps(stats) db.compute_node_create(self.ctxt, compute_node_dict) nodes = db.compute_node_get_all_by_pagination(self.ctxt, limit=1, marker=1) self.assertEqual(1, len(nodes)) node = nodes[0] self._assertEqualObjects(compute_node_dict, node, ignored_keys=self._ignored_keys + ['stats', 'service']) new_stats = jsonutils.loads(node['stats']) self.assertEqual(stats, new_stats) nodes = db.compute_node_get_all_by_pagination(self.ctxt) self.assertEqual(2, len(nodes)) node = nodes[0] self._assertEqualObjects(self.compute_node_dict, node, ignored_keys=self._ignored_keys + ['stats', 'service']) new_stats = jsonutils.loads(node['stats']) self.assertEqual(self.stats, new_stats) self.assertRaises(exception.MarkerNotFound, db.compute_node_get_all_by_pagination, self.ctxt, limit=1, marker=999) def test_compute_node_get_all_deleted_compute_node(self): # Create a service and compute node and ensure we can find its stats; # delete the service and compute node when done and loop again for x in range(2, 5): # Create a service service_data = self.service_dict.copy() service_data['host'] = 'host-%s' % x service = db.service_create(self.ctxt, service_data) # Create a compute node compute_node_data = self.compute_node_dict.copy() compute_node_data['uuid'] = uuidutils.generate_uuid() compute_node_data['service_id'] = service['id'] compute_node_data['stats'] = jsonutils.dumps(self.stats.copy()) compute_node_data['hypervisor_hostname'] = 'hypervisor-%s' % x node = db.compute_node_create(self.ctxt, compute_node_data) # Ensure the "new" compute node is found nodes = db.compute_node_get_all(self.ctxt) self.assertEqual(2, len(nodes)) found = None 
for n in nodes: if n['id'] == node['id']: found = n break self.assertIsNotNone(found) # Now ensure the match has stats! self.assertNotEqual(jsonutils.loads(found['stats']), {}) # Now delete the newly-created compute node to ensure the related # compute node stats are wiped in a cascaded fashion db.compute_node_delete(self.ctxt, node['id']) # Clean up the service db.service_destroy(self.ctxt, service['id']) def test_compute_node_get_all_mult_compute_nodes_one_service_entry(self): service_data = self.service_dict.copy() service_data['host'] = 'host2' service = db.service_create(self.ctxt, service_data) existing_node = dict(self.item.items()) expected = [existing_node] for name in ['bm_node1', 'bm_node2']: compute_node_data = self.compute_node_dict.copy() compute_node_data['uuid'] = uuidutils.generate_uuid() compute_node_data['service_id'] = service['id'] compute_node_data['stats'] = jsonutils.dumps(self.stats) compute_node_data['hypervisor_hostname'] = name node = db.compute_node_create(self.ctxt, compute_node_data) node = dict(node) expected.append(node) result = sorted(db.compute_node_get_all(self.ctxt), key=lambda n: n['hypervisor_hostname']) self._assertEqualListsOfObjects(expected, result, ignored_keys=['stats']) def test_compute_node_get_all_by_host_with_distinct_hosts(self): # Create another service with another node service2 = self.service_dict.copy() service2['host'] = 'host2' db.service_create(self.ctxt, service2) compute_node_another_host = self.compute_node_dict.copy() compute_node_another_host['uuid'] = uuidutils.generate_uuid() compute_node_another_host['stats'] = jsonutils.dumps(self.stats) compute_node_another_host['hypervisor_hostname'] = 'node_2' compute_node_another_host['host'] = 'host2' node = db.compute_node_create(self.ctxt, compute_node_another_host) result = db.compute_node_get_all_by_host(self.ctxt, 'host1') self._assertEqualListsOfObjects([self.item], result) result = db.compute_node_get_all_by_host(self.ctxt, 'host2') self._assertEqualListsOfObjects([node], result) def test_compute_node_get_all_by_host_with_same_host(self): # Create another node on top of the same service compute_node_same_host = self.compute_node_dict.copy() compute_node_same_host['uuid'] = uuidutils.generate_uuid() compute_node_same_host['stats'] = jsonutils.dumps(self.stats) compute_node_same_host['hypervisor_hostname'] = 'node_3' node = db.compute_node_create(self.ctxt, compute_node_same_host) expected = [self.item, node] result = sorted(db.compute_node_get_all_by_host( self.ctxt, 'host1'), key=lambda n: n['hypervisor_hostname']) ignored = ['stats'] self._assertEqualListsOfObjects(expected, result, ignored_keys=ignored) def test_compute_node_get_all_by_host_not_found(self): self.assertRaises(exception.ComputeHostNotFound, db.compute_node_get_all_by_host, self.ctxt, 'wrong') def test_compute_nodes_get_by_service_id_one_result(self): expected = [self.item] result = db.compute_nodes_get_by_service_id( self.ctxt, self.service['id']) ignored = ['stats'] self._assertEqualListsOfObjects(expected, result, ignored_keys=ignored) def test_compute_nodes_get_by_service_id_multiple_results(self): # Create another node on top of the same service compute_node_same_host = self.compute_node_dict.copy() compute_node_same_host['uuid'] = uuidutils.generate_uuid() compute_node_same_host['stats'] = jsonutils.dumps(self.stats) compute_node_same_host['hypervisor_hostname'] = 'node_2' node = db.compute_node_create(self.ctxt, compute_node_same_host) expected = [self.item, node] result = 
sorted(db.compute_nodes_get_by_service_id( self.ctxt, self.service['id']), key=lambda n: n['hypervisor_hostname']) ignored = ['stats'] self._assertEqualListsOfObjects(expected, result, ignored_keys=ignored) def test_compute_nodes_get_by_service_id_not_found(self): self.assertRaises(exception.ServiceNotFound, db.compute_nodes_get_by_service_id, self.ctxt, 'fake') def test_compute_node_get_by_host_and_nodename(self): # Create another node on top of the same service compute_node_same_host = self.compute_node_dict.copy() compute_node_same_host['uuid'] = uuidutils.generate_uuid() compute_node_same_host['stats'] = jsonutils.dumps(self.stats) compute_node_same_host['hypervisor_hostname'] = 'node_2' node = db.compute_node_create(self.ctxt, compute_node_same_host) expected = node result = db.compute_node_get_by_host_and_nodename( self.ctxt, 'host1', 'node_2') self._assertEqualObjects(expected, result, ignored_keys=self._ignored_keys + ['stats', 'service']) def test_compute_node_get_by_host_and_nodename_not_found(self): self.assertRaises(exception.ComputeHostNotFound, db.compute_node_get_by_host_and_nodename, self.ctxt, 'host1', 'wrong') def test_compute_node_get_by_nodename(self): # Create another node on top of the same service compute_node_same_host = self.compute_node_dict.copy() compute_node_same_host['uuid'] = uuidutils.generate_uuid() compute_node_same_host['stats'] = jsonutils.dumps(self.stats) compute_node_same_host['hypervisor_hostname'] = 'node_2' node = db.compute_node_create(self.ctxt, compute_node_same_host) expected = node result = db.compute_node_get_by_nodename( self.ctxt, 'node_2') self._assertEqualObjects(expected, result, ignored_keys=self._ignored_keys + ['stats', 'service']) def test_compute_node_get_by_nodename_not_found(self): self.assertRaises(exception.ComputeHostNotFound, db.compute_node_get_by_nodename, self.ctxt, 'wrong') def test_compute_node_get(self): compute_node_id = self.item['id'] node = db.compute_node_get(self.ctxt, compute_node_id) self._assertEqualObjects(self.compute_node_dict, node, ignored_keys=self._ignored_keys + ['stats', 'service']) new_stats = jsonutils.loads(node['stats']) self.assertEqual(self.stats, new_stats) def test_compute_node_update(self): compute_node_id = self.item['id'] stats = jsonutils.loads(self.item['stats']) # change some values: stats['num_instances'] = 8 stats['num_tribbles'] = 1 values = { 'vcpus': 4, 'stats': jsonutils.dumps(stats), } item_updated = db.compute_node_update(self.ctxt, compute_node_id, values) self.assertEqual(4, item_updated['vcpus']) new_stats = jsonutils.loads(item_updated['stats']) self.assertEqual(stats, new_stats) def test_compute_node_delete(self): compute_node_id = self.item['id'] db.compute_node_delete(self.ctxt, compute_node_id) nodes = db.compute_node_get_all(self.ctxt) self.assertEqual(len(nodes), 0) def test_compute_node_search_by_hypervisor(self): nodes_created = [] new_service = copy.copy(self.service_dict) for i in range(3): new_service['binary'] += str(i) new_service['topic'] += str(i) service = db.service_create(self.ctxt, new_service) self.compute_node_dict['service_id'] = service['id'] self.compute_node_dict['hypervisor_hostname'] = 'testhost' + str(i) self.compute_node_dict['stats'] = jsonutils.dumps(self.stats) self.compute_node_dict['uuid'] = uuidutils.generate_uuid() node = db.compute_node_create(self.ctxt, self.compute_node_dict) nodes_created.append(node) nodes = db.compute_node_search_by_hypervisor(self.ctxt, 'host') self.assertEqual(3, len(nodes)) 
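        # The lookup term 'host' is a substring of every hypervisor_hostname
        # created above ('testhost0'..'testhost2'), so
        # compute_node_search_by_hypervisor behaves as a substring match and
        # all three nodes come back before being compared field-by-field.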
self._assertEqualListsOfObjects(nodes_created, nodes, ignored_keys=self._ignored_keys + ['stats', 'service']) def test_compute_node_statistics(self): service_dict = dict(host='hostA', binary='nova-compute', topic=compute_rpcapi.RPC_TOPIC, report_count=1, disabled=False) service = db.service_create(self.ctxt, service_dict) # Define the various values for the new compute node new_vcpus = 4 new_memory_mb = 4096 new_local_gb = 2048 new_vcpus_used = 1 new_memory_mb_used = 1024 new_local_gb_used = 100 new_free_ram_mb = 3072 new_free_disk_gb = 1948 new_running_vms = 1 new_current_workload = 0 # Calculate the expected values by adding the values for the new # compute node to those for self.item itm = self.item exp_count = 2 exp_vcpus = new_vcpus + itm['vcpus'] exp_memory_mb = new_memory_mb + itm['memory_mb'] exp_local_gb = new_local_gb + itm['local_gb'] exp_vcpus_used = new_vcpus_used + itm['vcpus_used'] exp_memory_mb_used = new_memory_mb_used + itm['memory_mb_used'] exp_local_gb_used = new_local_gb_used + itm['local_gb_used'] exp_free_ram_mb = new_free_ram_mb + itm['free_ram_mb'] exp_free_disk_gb = new_free_disk_gb + itm['free_disk_gb'] exp_running_vms = new_running_vms + itm['running_vms'] exp_current_workload = new_current_workload + itm['current_workload'] # Create the new compute node compute_node_dict = dict(vcpus=new_vcpus, memory_mb=new_memory_mb, local_gb=new_local_gb, uuid=uuidsentinel.fake_compute_node, vcpus_used=new_vcpus_used, memory_mb_used=new_memory_mb_used, local_gb_used=new_local_gb_used, free_ram_mb=new_free_ram_mb, free_disk_gb=new_free_disk_gb, hypervisor_type="xen", hypervisor_version=1, cpu_info="", running_vms=new_running_vms, current_workload=new_current_workload, service_id=service['id'], host=service['host'], disk_available_least=100, hypervisor_hostname='abracadabra', host_ip='127.0.0.2', supported_instances='', pci_stats='', metrics='', extra_resources='', cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0, stats='', numa_topology='') db.compute_node_create(self.ctxt, compute_node_dict) # Get the stats, and make sure the stats agree with the expected # amounts. stats = db.compute_node_statistics(self.ctxt) self.assertEqual(exp_count, stats['count']) self.assertEqual(exp_vcpus, stats['vcpus']) self.assertEqual(exp_memory_mb, stats['memory_mb']) self.assertEqual(exp_local_gb, stats['local_gb']) self.assertEqual(exp_vcpus_used, stats['vcpus_used']) self.assertEqual(exp_memory_mb_used, stats['memory_mb_used']) self.assertEqual(exp_local_gb_used, stats['local_gb_used']) self.assertEqual(exp_free_ram_mb, stats['free_ram_mb']) self.assertEqual(exp_free_disk_gb, stats['free_disk_gb']) self.assertEqual(exp_running_vms, stats['running_vms']) self.assertEqual(exp_current_workload, stats['current_workload']) def test_compute_node_statistics_disabled_service(self): serv = db.service_get_by_host_and_topic( self.ctxt, 'host1', compute_rpcapi.RPC_TOPIC) db.service_update(self.ctxt, serv['id'], {'disabled': True}) stats = db.compute_node_statistics(self.ctxt) self.assertEqual(stats.pop('count'), 0) def test_compute_node_statistics_with_old_service_id(self): # NOTE(sbauza): This test is only for checking backwards compatibility # with old versions of compute_nodes not providing host column. 
# This test could be removed once we are sure that all compute nodes # are populating the host field thanks to the ResourceTracker service2 = self.service_dict.copy() service2['host'] = 'host2' db_service2 = db.service_create(self.ctxt, service2) compute_node_old_host = self.compute_node_dict.copy() compute_node_old_host['uuid'] = uuidutils.generate_uuid() compute_node_old_host['stats'] = jsonutils.dumps(self.stats) compute_node_old_host['hypervisor_hostname'] = 'node_2' compute_node_old_host['service_id'] = db_service2['id'] compute_node_old_host.pop('host') db.compute_node_create(self.ctxt, compute_node_old_host) stats = db.compute_node_statistics(self.ctxt) self.assertEqual(2, stats.pop('count')) def test_compute_node_statistics_with_other_service(self): other_service = self.service_dict.copy() other_service['topic'] = 'fake-topic' other_service['binary'] = 'nova-api' db.service_create(self.ctxt, other_service) stats = db.compute_node_statistics(self.ctxt) data = {'count': 1, 'vcpus_used': 0, 'local_gb_used': 0, 'memory_mb': 1024, 'current_workload': 0, 'vcpus': 2, 'running_vms': 0, 'free_disk_gb': 2048, 'disk_available_least': 100, 'local_gb': 2048, 'free_ram_mb': 1024, 'memory_mb_used': 0} for key, value in data.items(): self.assertEqual(value, stats.pop(key)) def test_compute_node_statistics_delete_and_recreate_service(self): # Test added for bug #1692397, this test tests that deleted # service record will not be selected when calculate compute # node statistics. # Let's first assert what we expect the setup to look like. self.assertEqual(1, len(db.service_get_all_by_binary( self.ctxt, 'nova-compute'))) self.assertEqual(1, len(db.compute_node_get_all_by_host( self.ctxt, 'host1'))) # Get the statistics for the original node/service before we delete # the service. original_stats = db.compute_node_statistics(self.ctxt) # At this point we have one compute_nodes record and one services # record pointing at the same host. Now we need to simulate the user # deleting the service record in the API, which will only delete very # old compute_nodes records where the service and compute node are # linked via the compute_nodes.service_id column, which is the case # in this test class; at some point we should decouple those to be more # modern. db.service_destroy(self.ctxt, self.service['id']) # Now we're going to simulate that the nova-compute service was # restarted, which will create a new services record with a unique # uuid but it will have the same host, binary and topic values as the # deleted service. The unique constraints don't fail in this case since # they include the deleted column and this service and the old service # have a different deleted value. service2_dict = self.service_dict.copy() service2_dict['uuid'] = uuidsentinel.service2_uuid db.service_create(self.ctxt, service2_dict) # Again, because of the way the setUp is done currently, the compute # node was linked to the original now-deleted service, so when we # deleted that service it also deleted the compute node record, so we # have to simulate the ResourceTracker in the nova-compute worker # re-creating the compute nodes record. new_compute_node = self.compute_node_dict.copy() del new_compute_node['service_id'] # make it a new style compute node new_compute_node['uuid'] = uuidsentinel.new_compute_uuid db.compute_node_create(self.ctxt, new_compute_node) # Now get the stats for all compute nodes (we just have one) and it # should just be for a single service, not double, as we should ignore # the (soft) deleted service. 
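        # compute_node_statistics() only aggregates nodes whose service record
        # is alive (not disabled and not soft-deleted), so the totals gathered
        # here should match the ones captured before the original service was
        # destroyed and re-created.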
stats = db.compute_node_statistics(self.ctxt) self.assertDictEqual(original_stats, stats) def test_compute_node_not_found(self): self.assertRaises(exception.ComputeHostNotFound, db.compute_node_get, self.ctxt, 100500) def test_compute_node_update_always_updates_updated_at(self): item_updated = db.compute_node_update(self.ctxt, self.item['id'], {}) self.assertNotEqual(self.item['updated_at'], item_updated['updated_at']) def test_compute_node_update_override_updated_at(self): # Update the record once so updated_at is set. first = db.compute_node_update(self.ctxt, self.item['id'], {'free_ram_mb': '12'}) self.assertIsNotNone(first['updated_at']) # Update a second time. Make sure that the updated_at value we send # is overridden. second = db.compute_node_update(self.ctxt, self.item['id'], {'updated_at': first.updated_at, 'free_ram_mb': '13'}) self.assertNotEqual(first['updated_at'], second['updated_at']) def test_service_destroy_with_compute_node(self): db.service_destroy(self.ctxt, self.service['id']) self.assertRaises(exception.ComputeHostNotFound, db.compute_node_get_model, self.ctxt, self.item['id']) def test_service_destroy_with_old_compute_node(self): # NOTE(sbauza): This test is only for checking backwards compatibility # with old versions of compute_nodes not providing host column. # This test could be removed once we are sure that all compute nodes # are populating the host field thanks to the ResourceTracker compute_node_old_host_dict = self.compute_node_dict.copy() compute_node_old_host_dict['uuid'] = uuidutils.generate_uuid() compute_node_old_host_dict.pop('host') item_old = db.compute_node_create(self.ctxt, compute_node_old_host_dict) db.service_destroy(self.ctxt, self.service['id']) self.assertRaises(exception.ComputeHostNotFound, db.compute_node_get_model, self.ctxt, item_old['id']) @mock.patch("nova.db.sqlalchemy.api.compute_node_get_model") def test_dbapi_compute_node_get_model(self, mock_get_model): cid = self.item["id"] db.compute_node_get_model(self.ctxt, cid) mock_get_model.assert_called_once_with(self.ctxt, cid) @mock.patch("nova.db.sqlalchemy.api.model_query") def test_compute_node_get_model(self, mock_model_query): class FakeFiltered(object): def first(self): return mock.sentinel.first fake_filtered_cn = FakeFiltered() class FakeModelQuery(object): def filter_by(self, id): return fake_filtered_cn mock_model_query.return_value = FakeModelQuery() result = sqlalchemy_api.compute_node_get_model(self.ctxt, self.item["id"]) self.assertEqual(result, mock.sentinel.first) mock_model_query.assert_called_once_with(self.ctxt, models.ComputeNode) class CertificateTestCase(test.TestCase, ModelsObjectComparatorMixin): def setUp(self): super(CertificateTestCase, self).setUp() self.ctxt = context.get_admin_context() self.created = self._certificates_create() def _get_certs_values(self): base_values = { 'user_id': 'user', 'project_id': 'project', 'file_name': 'filename' } return [{k: v + str(x) for k, v in base_values.items()} for x in range(1, 4)] def _certificates_create(self): return [db.certificate_create(self.ctxt, cert) for cert in self._get_certs_values()] def test_certificate_create(self): ignored_keys = ['id', 'deleted', 'deleted_at', 'created_at', 'updated_at'] for i, cert in enumerate(self._get_certs_values()): self._assertEqualObjects(self.created[i], cert, ignored_keys=ignored_keys) def test_certificate_get_all_by_project(self): cert = db.certificate_get_all_by_project(self.ctxt, self.created[1].project_id) self._assertEqualObjects(self.created[1], cert[0]) def 
test_certificate_get_all_by_user(self): cert = db.certificate_get_all_by_user(self.ctxt, self.created[1].user_id) self._assertEqualObjects(self.created[1], cert[0]) def test_certificate_get_all_by_user_and_project(self): cert = db.certificate_get_all_by_user_and_project(self.ctxt, self.created[1].user_id, self.created[1].project_id) self._assertEqualObjects(self.created[1], cert[0]) class BwUsageTestCase(test.TestCase, ModelsObjectComparatorMixin): _ignored_keys = ['id', 'deleted', 'deleted_at', 'created_at', 'updated_at'] def setUp(self): super(BwUsageTestCase, self).setUp() self.ctxt = context.get_admin_context() self.useFixture(test.TimeOverride()) def test_bw_usage_get_by_uuids(self): now = timeutils.utcnow() start_period = now - datetime.timedelta(seconds=10) start_period_str = start_period.isoformat() uuid3_refreshed = now - datetime.timedelta(seconds=5) uuid3_refreshed_str = uuid3_refreshed.isoformat() expected_bw_usages = { 'fake_uuid1': {'uuid': 'fake_uuid1', 'mac': 'fake_mac1', 'start_period': start_period, 'bw_in': 100, 'bw_out': 200, 'last_ctr_in': 12345, 'last_ctr_out': 67890, 'last_refreshed': now}, 'fake_uuid2': {'uuid': 'fake_uuid2', 'mac': 'fake_mac2', 'start_period': start_period, 'bw_in': 200, 'bw_out': 300, 'last_ctr_in': 22345, 'last_ctr_out': 77890, 'last_refreshed': now}, 'fake_uuid3': {'uuid': 'fake_uuid3', 'mac': 'fake_mac3', 'start_period': start_period, 'bw_in': 400, 'bw_out': 500, 'last_ctr_in': 32345, 'last_ctr_out': 87890, 'last_refreshed': uuid3_refreshed} } bw_usages = db.bw_usage_get_by_uuids(self.ctxt, ['fake_uuid1', 'fake_uuid2'], start_period_str) # No matches self.assertEqual(len(bw_usages), 0) # Add 3 entries db.bw_usage_update(self.ctxt, 'fake_uuid1', 'fake_mac1', start_period_str, 100, 200, 12345, 67890) db.bw_usage_update(self.ctxt, 'fake_uuid2', 'fake_mac2', start_period_str, 100, 200, 42, 42) # Test explicit refreshed time db.bw_usage_update(self.ctxt, 'fake_uuid3', 'fake_mac3', start_period_str, 400, 500, 32345, 87890, last_refreshed=uuid3_refreshed_str) # Update 2nd entry db.bw_usage_update(self.ctxt, 'fake_uuid2', 'fake_mac2', start_period_str, 200, 300, 22345, 77890) bw_usages = db.bw_usage_get_by_uuids(self.ctxt, ['fake_uuid1', 'fake_uuid2', 'fake_uuid3'], start_period_str) self.assertEqual(len(bw_usages), 3) for usage in bw_usages: self._assertEqualObjects(expected_bw_usages[usage['uuid']], usage, ignored_keys=self._ignored_keys) def _test_bw_usage_update(self, **expected_bw_usage): bw_usage = db.bw_usage_update(self.ctxt, **expected_bw_usage) self._assertEqualObjects(expected_bw_usage, bw_usage, ignored_keys=self._ignored_keys) uuid = expected_bw_usage['uuid'] mac = expected_bw_usage['mac'] start_period = expected_bw_usage['start_period'] bw_usage = db.bw_usage_get(self.ctxt, uuid, start_period, mac) self._assertEqualObjects(expected_bw_usage, bw_usage, ignored_keys=self._ignored_keys) def _create_bw_usage(self, context, uuid, mac, start_period, bw_in, bw_out, last_ctr_in, last_ctr_out, id, last_refreshed=None): with sqlalchemy_api.get_context_manager(context).writer.using(context): bwusage = models.BandwidthUsage() bwusage.start_period = start_period bwusage.uuid = uuid bwusage.mac = mac bwusage.last_refreshed = last_refreshed bwusage.bw_in = bw_in bwusage.bw_out = bw_out bwusage.last_ctr_in = last_ctr_in bwusage.last_ctr_out = last_ctr_out bwusage.id = id bwusage.save(context.session) def test_bw_usage_update_exactly_one_record(self): now = timeutils.utcnow() start_period = now - datetime.timedelta(seconds=10) uuid = 'fake_uuid' # create 
two equal bw_usages with IDs 1 and 2 for id in range(1, 3): bw_usage = {'uuid': uuid, 'mac': 'fake_mac', 'start_period': start_period, 'bw_in': 100, 'bw_out': 200, 'last_ctr_in': 12345, 'last_ctr_out': 67890, 'last_refreshed': now, 'id': id} self._create_bw_usage(self.ctxt, **bw_usage) # check that we have two equal bw_usages self.assertEqual( 2, len(db.bw_usage_get_by_uuids(self.ctxt, [uuid], start_period))) # update 'last_ctr_in' field in one bw_usage updated_bw_usage = {'uuid': uuid, 'mac': 'fake_mac', 'start_period': start_period, 'bw_in': 100, 'bw_out': 200, 'last_ctr_in': 54321, 'last_ctr_out': 67890, 'last_refreshed': now} result = db.bw_usage_update(self.ctxt, **updated_bw_usage) # check that only bw_usage with ID 1 was updated self.assertEqual(1, result['id']) self._assertEqualObjects(updated_bw_usage, result, ignored_keys=self._ignored_keys) def test_bw_usage_get(self): now = timeutils.utcnow() start_period = now - datetime.timedelta(seconds=10) start_period_str = start_period.isoformat() expected_bw_usage = {'uuid': 'fake_uuid1', 'mac': 'fake_mac1', 'start_period': start_period, 'bw_in': 100, 'bw_out': 200, 'last_ctr_in': 12345, 'last_ctr_out': 67890, 'last_refreshed': now} bw_usage = db.bw_usage_get(self.ctxt, 'fake_uuid1', start_period_str, 'fake_mac1') self.assertIsNone(bw_usage) self._test_bw_usage_update(**expected_bw_usage) def test_bw_usage_update_new(self): now = timeutils.utcnow() start_period = now - datetime.timedelta(seconds=10) expected_bw_usage = {'uuid': 'fake_uuid1', 'mac': 'fake_mac1', 'start_period': start_period, 'bw_in': 100, 'bw_out': 200, 'last_ctr_in': 12345, 'last_ctr_out': 67890, 'last_refreshed': now} self._test_bw_usage_update(**expected_bw_usage) def test_bw_usage_update_existing(self): now = timeutils.utcnow() start_period = now - datetime.timedelta(seconds=10) expected_bw_usage = {'uuid': 'fake_uuid1', 'mac': 'fake_mac1', 'start_period': start_period, 'bw_in': 100, 'bw_out': 200, 'last_ctr_in': 12345, 'last_ctr_out': 67890, 'last_refreshed': now} self._test_bw_usage_update(**expected_bw_usage) expected_bw_usage['bw_in'] = 300 expected_bw_usage['bw_out'] = 400 expected_bw_usage['last_ctr_in'] = 23456 expected_bw_usage['last_ctr_out'] = 78901 self._test_bw_usage_update(**expected_bw_usage) class Ec2TestCase(test.TestCase): def setUp(self): super(Ec2TestCase, self).setUp() self.ctxt = context.RequestContext('fake_user', 'fake_project') def test_ec2_instance_create(self): inst = db.ec2_instance_create(self.ctxt, 'fake-uuid') self.assertIsNotNone(inst['id']) self.assertEqual(inst['uuid'], 'fake-uuid') def test_ec2_instance_get_by_uuid(self): inst = db.ec2_instance_create(self.ctxt, 'fake-uuid') inst2 = db.ec2_instance_get_by_uuid(self.ctxt, 'fake-uuid') self.assertEqual(inst['id'], inst2['id']) def test_ec2_instance_get_by_id(self): inst = db.ec2_instance_create(self.ctxt, 'fake-uuid') inst2 = db.ec2_instance_get_by_id(self.ctxt, inst['id']) self.assertEqual(inst['id'], inst2['id']) def test_ec2_instance_get_by_uuid_not_found(self): self.assertRaises(exception.InstanceNotFound, db.ec2_instance_get_by_uuid, self.ctxt, 'uuid-not-present') def test_ec2_instance_get_by_id_not_found(self): self.assertRaises(exception.InstanceNotFound, db.ec2_instance_get_by_uuid, self.ctxt, 12345) def test_get_instance_uuid_by_ec2_id(self): inst = db.ec2_instance_create(self.ctxt, 'fake-uuid') inst_uuid = db.get_instance_uuid_by_ec2_id(self.ctxt, inst['id']) self.assertEqual(inst_uuid, 'fake-uuid') def test_get_instance_uuid_by_ec2_id_not_found(self): 
self.assertRaises(exception.InstanceNotFound, db.get_instance_uuid_by_ec2_id, self.ctxt, 100500) class ArchiveTestCase(test.TestCase, ModelsObjectComparatorMixin): def setUp(self): super(ArchiveTestCase, self).setUp() self.engine = get_engine() self.metadata = MetaData(self.engine) self.conn = self.engine.connect() self.instance_id_mappings = models.InstanceIdMapping.__table__ self.shadow_instance_id_mappings = sqlalchemyutils.get_table( self.engine, "shadow_instance_id_mappings") self.instances = models.Instance.__table__ self.shadow_instances = sqlalchemyutils.get_table( self.engine, "shadow_instances") self.instance_actions = models.InstanceAction.__table__ self.shadow_instance_actions = sqlalchemyutils.get_table( self.engine, "shadow_instance_actions") self.instance_actions_events = models.InstanceActionEvent.__table__ self.shadow_instance_actions_events = sqlalchemyutils.get_table( self.engine, "shadow_instance_actions_events") self.migrations = models.Migration.__table__ self.shadow_migrations = sqlalchemyutils.get_table( self.engine, "shadow_migrations") self.uuidstrs = [] for _ in range(6): self.uuidstrs.append(uuidutils.generate_uuid(dashed=False)) def _assert_shadow_tables_empty_except(self, *exceptions): """Ensure shadow tables are empty This method ensures that all the shadow tables in the schema, except for specificially named exceptions, are empty. This makes sure that archiving isn't moving unexpected content. """ metadata = MetaData(bind=self.engine) metadata.reflect() for table in metadata.tables: if table.startswith("shadow_") and table not in exceptions: rows = self.conn.execute("select * from %s" % table).fetchall() self.assertEqual(rows, [], "Table %s not empty" % table) def test_shadow_tables(self): metadata = MetaData(bind=self.engine) metadata.reflect() for table_name in metadata.tables: # NOTE(rpodolyaka): migration 209 introduced a few new tables, # which don't have shadow tables and it's # completely OK, so we should skip them here if table_name.startswith("dump_"): continue # NOTE(snikitin): migration 266 introduced a new table 'tags', # which have no shadow table and it's # completely OK, so we should skip it here # NOTE(cdent): migration 314 introduced three new # ('resource_providers', 'allocations' and 'inventories') # with no shadow table and it's OK, so skip. # 318 adds one more: 'resource_provider_aggregates'. # NOTE(PaulMurray): migration 333 adds 'console_auth_tokens' if table_name in ['tags', 'resource_providers', 'allocations', 'inventories', 'resource_provider_aggregates', 'console_auth_tokens']: continue if table_name.startswith("shadow_"): self.assertIn(table_name[7:], metadata.tables) continue self.assertTrue(db_utils.check_shadow_table(self.engine, table_name)) self._assert_shadow_tables_empty_except() def test_archive_deleted_rows(self): # Add 6 rows to table for uuidstr in self.uuidstrs: ins_stmt = self.instance_id_mappings.insert().values(uuid=uuidstr) self.conn.execute(ins_stmt) # Set 4 to deleted update_statement = self.instance_id_mappings.update().\ where(self.instance_id_mappings.c.uuid.in_(self.uuidstrs[:4]))\ .values(deleted=1, deleted_at=timeutils.utcnow()) self.conn.execute(update_statement) qiim = sql.select([self.instance_id_mappings]).where(self. 
instance_id_mappings.c.uuid.in_(self.uuidstrs)) rows = self.conn.execute(qiim).fetchall() # Verify we have 6 in main self.assertEqual(len(rows), 6) qsiim = sql.select([self.shadow_instance_id_mappings]).\ where(self.shadow_instance_id_mappings.c.uuid.in_( self.uuidstrs)) rows = self.conn.execute(qsiim).fetchall() # Verify we have 0 in shadow self.assertEqual(len(rows), 0) # Archive 2 rows results = db.archive_deleted_rows(max_rows=2) expected = dict(instance_id_mappings=2) self._assertEqualObjects(expected, results[0]) rows = self.conn.execute(qiim).fetchall() # Verify we have 4 left in main self.assertEqual(len(rows), 4) rows = self.conn.execute(qsiim).fetchall() # Verify we have 2 in shadow self.assertEqual(len(rows), 2) # Archive 2 more rows results = db.archive_deleted_rows(max_rows=2) expected = dict(instance_id_mappings=2) self._assertEqualObjects(expected, results[0]) rows = self.conn.execute(qiim).fetchall() # Verify we have 2 left in main self.assertEqual(len(rows), 2) rows = self.conn.execute(qsiim).fetchall() # Verify we have 4 in shadow self.assertEqual(len(rows), 4) # Try to archive more, but there are no deleted rows left. results = db.archive_deleted_rows(max_rows=2) expected = dict() self._assertEqualObjects(expected, results[0]) rows = self.conn.execute(qiim).fetchall() # Verify we still have 2 left in main self.assertEqual(len(rows), 2) rows = self.conn.execute(qsiim).fetchall() # Verify we still have 4 in shadow self.assertEqual(len(rows), 4) # Ensure only deleted rows were deleted self._assert_shadow_tables_empty_except( 'shadow_instance_id_mappings') def test_archive_deleted_rows_before(self): # Add 6 rows to table for uuidstr in self.uuidstrs: ins_stmt = self.instances.insert().values(uuid=uuidstr) self.conn.execute(ins_stmt) ins_stmt = self.instance_actions.insert().\ values(instance_uuid=uuidstr) result = self.conn.execute(ins_stmt) instance_action_uuid = result.inserted_primary_key[0] ins_stmt = self.instance_actions_events.insert().\ values(action_id=instance_action_uuid) self.conn.execute(ins_stmt) # Set 1 to deleted before 2017-01-01 deleted_at = timeutils.parse_strtime('2017-01-01T00:00:00.0') update_statement = self.instances.update().\ where(self.instances.c.uuid.in_(self.uuidstrs[0:1]))\ .values(deleted=1, deleted_at=deleted_at) self.conn.execute(update_statement) # Set 1 to deleted before 2017-01-02 deleted_at = timeutils.parse_strtime('2017-01-02T00:00:00.0') update_statement = self.instances.update().\ where(self.instances.c.uuid.in_(self.uuidstrs[1:2]))\ .values(deleted=1, deleted_at=deleted_at) self.conn.execute(update_statement) # Set 2 to deleted now update_statement = self.instances.update().\ where(self.instances.c.uuid.in_(self.uuidstrs[2:4]))\ .values(deleted=1, deleted_at=timeutils.utcnow()) self.conn.execute(update_statement) qiim = sql.select([self.instances]).where(self. 
instances.c.uuid.in_(self.uuidstrs)) qsiim = sql.select([self.shadow_instances]).\ where(self.shadow_instances.c.uuid.in_(self.uuidstrs)) # Verify we have 6 in main rows = self.conn.execute(qiim).fetchall() self.assertEqual(len(rows), 6) # Make sure 'before' comparison is for < not <=, nothing deleted before_date = dateutil_parser.parse('2017-01-01', fuzzy=True) _, uuids, _ = db.archive_deleted_rows(max_rows=1, before=before_date) self.assertEqual([], uuids) # Archive rows deleted before 2017-01-02 before_date = dateutil_parser.parse('2017-01-02', fuzzy=True) results = db.archive_deleted_rows(max_rows=100, before=before_date) expected = dict(instances=1, instance_actions=1, instance_actions_events=1) self._assertEqualObjects(expected, results[0]) # Archive 1 row deleted before 2017-01-03 # Because the instances table will be processed first, tables that # refer to it (instance_actions and instance_action_events) will be # visited and archived in the same transaction as the instance, to # avoid orphaning the instance record (archive dependent records in one # transaction) before_date = dateutil_parser.parse('2017-01-03', fuzzy=True) results = db.archive_deleted_rows(max_rows=1, before=before_date) expected = dict(instances=1, instance_actions=1, instance_actions_events=1) self._assertEqualObjects(expected, results[0]) # Try to archive all other rows deleted before 2017-01-03. This should # not archive anything because the instances table and tables that # refer to it (instance_actions and instance_action_events) were all # archived in the last run. results = db.archive_deleted_rows(max_rows=100, before=before_date) expected = {} self._assertEqualObjects(expected, results[0]) # Verify we have 4 left in main rows = self.conn.execute(qiim).fetchall() self.assertEqual(len(rows), 4) # Verify we have 2 in shadow rows = self.conn.execute(qsiim).fetchall() self.assertEqual(len(rows), 2) # Archive everything else, make sure default operation without # before argument didn't break results = db.archive_deleted_rows(max_rows=1000) # Verify we have 2 left in main rows = self.conn.execute(qiim).fetchall() self.assertEqual(len(rows), 2) # Verify we have 4 in shadow rows = self.conn.execute(qsiim).fetchall() self.assertEqual(len(rows), 4) def test_archive_deleted_rows_for_every_uuid_table(self): tablenames = [] for model_class in six.itervalues(models.__dict__): if hasattr(model_class, "__tablename__"): tablenames.append(model_class.__tablename__) tablenames.sort() for tablename in tablenames: self._test_archive_deleted_rows_for_one_uuid_table(tablename) def _test_archive_deleted_rows_for_one_uuid_table(self, tablename): """:returns: 0 on success, 1 if no uuid column, 2 if insert failed.""" # NOTE(cdent): migration 314 adds the resource_providers # table with a uuid column that does not archive, so skip. skip_tables = ['resource_providers'] if tablename in skip_tables: return 1 main_table = sqlalchemyutils.get_table(self.engine, tablename) if not hasattr(main_table.c, "uuid"): # Not a uuid table, so skip it. return 1 shadow_table = sqlalchemyutils.get_table( self.engine, "shadow_" + tablename) # Add 6 rows to table for uuidstr in self.uuidstrs: ins_stmt = main_table.insert().values(uuid=uuidstr) try: self.conn.execute(ins_stmt) except (db_exc.DBError, OperationalError): # This table has constraints that require a table-specific # insert, so skip it. 
return 2 # Set 4 to deleted update_statement = main_table.update().\ where(main_table.c.uuid.in_(self.uuidstrs[:4]))\ .values(deleted=1, deleted_at=timeutils.utcnow()) self.conn.execute(update_statement) qmt = sql.select([main_table]).where(main_table.c.uuid.in_( self.uuidstrs)) rows = self.conn.execute(qmt).fetchall() # Verify we have 6 in main self.assertEqual(len(rows), 6) qst = sql.select([shadow_table]).\ where(shadow_table.c.uuid.in_(self.uuidstrs)) rows = self.conn.execute(qst).fetchall() # Verify we have 0 in shadow self.assertEqual(len(rows), 0) # Archive 2 rows sqlalchemy_api._archive_deleted_rows_for_table(self.metadata, tablename, max_rows=2, before=None) # Verify we have 4 left in main rows = self.conn.execute(qmt).fetchall() self.assertEqual(len(rows), 4) # Verify we have 2 in shadow rows = self.conn.execute(qst).fetchall() self.assertEqual(len(rows), 2) # Archive 2 more rows sqlalchemy_api._archive_deleted_rows_for_table(self.metadata, tablename, max_rows=2, before=None) # Verify we have 2 left in main rows = self.conn.execute(qmt).fetchall() self.assertEqual(len(rows), 2) # Verify we have 4 in shadow rows = self.conn.execute(qst).fetchall() self.assertEqual(len(rows), 4) # Try to archive more, but there are no deleted rows left. sqlalchemy_api._archive_deleted_rows_for_table(self.metadata, tablename, max_rows=2, before=None) # Verify we still have 2 left in main rows = self.conn.execute(qmt).fetchall() self.assertEqual(len(rows), 2) # Verify we still have 4 in shadow rows = self.conn.execute(qst).fetchall() self.assertEqual(len(rows), 4) return 0 def test_archive_deleted_rows_shadow_insertions_equals_deletions(self): # Add 2 rows to table for uuidstr in self.uuidstrs[:2]: ins_stmt = self.instance_id_mappings.insert().values(uuid=uuidstr) self.conn.execute(ins_stmt) # Set both to deleted update_statement = self.instance_id_mappings.update().\ where(self.instance_id_mappings.c.uuid.in_(self.uuidstrs[:2]))\ .values(deleted=1) self.conn.execute(update_statement) qiim = sql.select([self.instance_id_mappings]).where(self. instance_id_mappings.c.uuid.in_(self.uuidstrs[:2])) rows = self.conn.execute(qiim).fetchall() # Verify we have 2 in main self.assertEqual(len(rows), 2) qsiim = sql.select([self.shadow_instance_id_mappings]).\ where(self.shadow_instance_id_mappings.c.uuid.in_( self.uuidstrs[:2])) shadow_rows = self.conn.execute(qsiim).fetchall() # Verify we have 0 in shadow self.assertEqual(len(shadow_rows), 0) # Archive the rows db.archive_deleted_rows(max_rows=2) main_rows = self.conn.execute(qiim).fetchall() shadow_rows = self.conn.execute(qsiim).fetchall() # Verify the insertions into shadow is same as deletions from main self.assertEqual(len(shadow_rows), len(rows) - len(main_rows)) def _check_sqlite_version_less_than_3_7(self): # SQLite doesn't enforce foreign key constraints without a pragma. self.enforce_fk_constraints(engine=self.engine) def test_archive_deleted_rows_for_migrations(self): # migrations.instance_uuid depends on instances.uuid self._check_sqlite_version_less_than_3_7() instance_uuid = uuidsentinel.instance ins_stmt = self.instances.insert().values( uuid=instance_uuid, deleted=1, deleted_at=timeutils.utcnow()) self.conn.execute(ins_stmt) ins_stmt = self.migrations.insert().values(instance_uuid=instance_uuid, deleted=0) self.conn.execute(ins_stmt) # Archiving instances should result in migrations related to the # instances also being archived. 
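        # _archive_deleted_rows_for_table() appears to return a tuple whose
        # first element is the number of rows moved out of the named table
        # (hence num[0] below); the dependent migrations rows are moved to
        # shadow_migrations in the same call, which
        # _assert_shadow_tables_empty_except then verifies.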
num = sqlalchemy_api._archive_deleted_rows_for_table(self.metadata, "instances", max_rows=None, before=None) self.assertEqual(1, num[0]) self._assert_shadow_tables_empty_except( 'shadow_instances', 'shadow_migrations' ) def test_archive_deleted_rows_2_tables(self): # Add 6 rows to each table for uuidstr in self.uuidstrs: ins_stmt = self.instance_id_mappings.insert().values(uuid=uuidstr) self.conn.execute(ins_stmt) ins_stmt2 = self.instances.insert().values(uuid=uuidstr) self.conn.execute(ins_stmt2) # Set 4 of each to deleted update_statement = self.instance_id_mappings.update().\ where(self.instance_id_mappings.c.uuid.in_(self.uuidstrs[:4]))\ .values(deleted=1, deleted_at=timeutils.utcnow()) self.conn.execute(update_statement) update_statement2 = self.instances.update().\ where(self.instances.c.uuid.in_(self.uuidstrs[:4]))\ .values(deleted=1, deleted_at=timeutils.utcnow()) self.conn.execute(update_statement2) # Verify we have 6 in each main table qiim = sql.select([self.instance_id_mappings]).where( self.instance_id_mappings.c.uuid.in_(self.uuidstrs)) rows = self.conn.execute(qiim).fetchall() self.assertEqual(len(rows), 6) qi = sql.select([self.instances]).where(self.instances.c.uuid.in_( self.uuidstrs)) rows = self.conn.execute(qi).fetchall() self.assertEqual(len(rows), 6) # Verify we have 0 in each shadow table qsiim = sql.select([self.shadow_instance_id_mappings]).\ where(self.shadow_instance_id_mappings.c.uuid.in_( self.uuidstrs)) rows = self.conn.execute(qsiim).fetchall() self.assertEqual(len(rows), 0) qsi = sql.select([self.shadow_instances]).\ where(self.shadow_instances.c.uuid.in_(self.uuidstrs)) rows = self.conn.execute(qsi).fetchall() self.assertEqual(len(rows), 0) # Archive 7 rows, which should be 4 in one table and 3 in the other. db.archive_deleted_rows(max_rows=7) # Verify we have 5 left in the two main tables combined iim_rows = self.conn.execute(qiim).fetchall() i_rows = self.conn.execute(qi).fetchall() self.assertEqual(len(iim_rows) + len(i_rows), 5) # Verify we have 7 in the two shadow tables combined. siim_rows = self.conn.execute(qsiim).fetchall() si_rows = self.conn.execute(qsi).fetchall() self.assertEqual(len(siim_rows) + len(si_rows), 7) # Archive the remaining deleted rows. db.archive_deleted_rows(max_rows=1) # Verify we have 4 total left in both main tables. iim_rows = self.conn.execute(qiim).fetchall() i_rows = self.conn.execute(qi).fetchall() self.assertEqual(len(iim_rows) + len(i_rows), 4) # Verify we have 8 in shadow siim_rows = self.conn.execute(qsiim).fetchall() si_rows = self.conn.execute(qsi).fetchall() self.assertEqual(len(siim_rows) + len(si_rows), 8) # Try to archive more, but there are no deleted rows left. db.archive_deleted_rows(max_rows=500) # Verify we have 4 total left in both main tables. 
iim_rows = self.conn.execute(qiim).fetchall() i_rows = self.conn.execute(qi).fetchall() self.assertEqual(len(iim_rows) + len(i_rows), 4) # Verify we have 8 in shadow siim_rows = self.conn.execute(qsiim).fetchall() si_rows = self.conn.execute(qsi).fetchall() self.assertEqual(len(siim_rows) + len(si_rows), 8) self._assert_shadow_tables_empty_except( 'shadow_instances', 'shadow_instance_id_mappings' ) class PciDeviceDBApiTestCase(test.TestCase, ModelsObjectComparatorMixin): def setUp(self): super(PciDeviceDBApiTestCase, self).setUp() self.user_id = 'fake_user' self.project_id = 'fake_project' self.context = context.RequestContext(self.user_id, self.project_id) self.admin_context = context.get_admin_context() self.ignored_keys = ['id', 'deleted', 'deleted_at', 'updated_at', 'created_at'] self._compute_node = None def _get_fake_pci_devs(self): return {'id': 3353, 'uuid': uuidsentinel.pci_device1, 'compute_node_id': 1, 'address': '0000:0f:08.7', 'vendor_id': '8086', 'product_id': '1520', 'numa_node': 1, 'dev_type': fields.PciDeviceType.SRIOV_VF, 'dev_id': 'pci_0000:0f:08.7', 'extra_info': '{}', 'label': 'label_8086_1520', 'status': fields.PciDeviceStatus.AVAILABLE, 'instance_uuid': '00000000-0000-0000-0000-000000000010', 'request_id': None, 'parent_addr': '0000:0f:00.1', }, {'id': 3356, 'uuid': uuidsentinel.pci_device3356, 'compute_node_id': 1, 'address': '0000:0f:03.7', 'parent_addr': '0000:0f:03.0', 'vendor_id': '8083', 'product_id': '1523', 'numa_node': 0, 'dev_type': fields.PciDeviceType.SRIOV_VF, 'dev_id': 'pci_0000:0f:08.7', 'extra_info': '{}', 'label': 'label_8086_1520', 'status': fields.PciDeviceStatus.AVAILABLE, 'instance_uuid': '00000000-0000-0000-0000-000000000010', 'request_id': None, } @property def compute_node(self): if self._compute_node is None: self._compute_node = db.compute_node_create(self.admin_context, { 'vcpus': 0, 'memory_mb': 0, 'local_gb': 0, 'vcpus_used': 0, 'memory_mb_used': 0, 'local_gb_used': 0, 'hypervisor_type': 'fake', 'hypervisor_version': 0, 'cpu_info': 'fake', }) return self._compute_node def _create_fake_pci_devs(self): v1, v2 = self._get_fake_pci_devs() for i in v1, v2: i['compute_node_id'] = self.compute_node['id'] db.pci_device_update(self.admin_context, v1['compute_node_id'], v1['address'], v1) db.pci_device_update(self.admin_context, v2['compute_node_id'], v2['address'], v2) return (v1, v2) def test_pci_device_get_by_addr(self): v1, v2 = self._create_fake_pci_devs() result = db.pci_device_get_by_addr(self.admin_context, 1, '0000:0f:08.7') self._assertEqualObjects(v1, result, self.ignored_keys) def test_pci_device_get_by_addr_not_found(self): self._create_fake_pci_devs() self.assertRaises(exception.PciDeviceNotFound, db.pci_device_get_by_addr, self.admin_context, 1, '0000:0f:08:09') def test_pci_device_get_all_by_parent_addr(self): v1, v2 = self._create_fake_pci_devs() results = db.pci_device_get_all_by_parent_addr(self.admin_context, 1, '0000:0f:00.1') self._assertEqualListsOfObjects([v1], results, self.ignored_keys) def test_pci_device_get_all_by_parent_addr_empty(self): v1, v2 = self._create_fake_pci_devs() results = db.pci_device_get_all_by_parent_addr(self.admin_context, 1, '0000:0f:01.6') self.assertEqual(len(results), 0) def test_pci_device_get_by_id(self): v1, v2 = self._create_fake_pci_devs() result = db.pci_device_get_by_id(self.admin_context, 3353) self._assertEqualObjects(v1, result, self.ignored_keys) def test_pci_device_get_by_id_not_found(self): self._create_fake_pci_devs() self.assertRaises(exception.PciDeviceNotFoundById, 
db.pci_device_get_by_id, self.admin_context, 3354) def test_pci_device_get_all_by_node(self): v1, v2 = self._create_fake_pci_devs() results = db.pci_device_get_all_by_node(self.admin_context, 1) self._assertEqualListsOfObjects(results, [v1, v2], self.ignored_keys) def test_pci_device_get_all_by_node_empty(self): v1, v2 = self._get_fake_pci_devs() results = db.pci_device_get_all_by_node(self.admin_context, 9) self.assertEqual(len(results), 0) def test_pci_device_get_by_instance_uuid(self): v1, v2 = self._create_fake_pci_devs() v1['status'] = fields.PciDeviceStatus.ALLOCATED v2['status'] = fields.PciDeviceStatus.ALLOCATED db.pci_device_update(self.admin_context, v1['compute_node_id'], v1['address'], v1) db.pci_device_update(self.admin_context, v2['compute_node_id'], v2['address'], v2) results = db.pci_device_get_all_by_instance_uuid( self.context, '00000000-0000-0000-0000-000000000010') self._assertEqualListsOfObjects(results, [v1, v2], self.ignored_keys) def test_pci_device_get_by_instance_uuid_check_status(self): v1, v2 = self._create_fake_pci_devs() v1['status'] = fields.PciDeviceStatus.ALLOCATED v2['status'] = fields.PciDeviceStatus.CLAIMED db.pci_device_update(self.admin_context, v1['compute_node_id'], v1['address'], v1) db.pci_device_update(self.admin_context, v2['compute_node_id'], v2['address'], v2) results = db.pci_device_get_all_by_instance_uuid( self.context, '00000000-0000-0000-0000-000000000010') self._assertEqualListsOfObjects(results, [v1], self.ignored_keys) def test_pci_device_update(self): v1, v2 = self._create_fake_pci_devs() v1['status'] = fields.PciDeviceStatus.ALLOCATED db.pci_device_update(self.admin_context, v1['compute_node_id'], v1['address'], v1) result = db.pci_device_get_by_addr( self.admin_context, 1, '0000:0f:08.7') self._assertEqualObjects(v1, result, self.ignored_keys) v1['status'] = fields.PciDeviceStatus.CLAIMED db.pci_device_update(self.admin_context, v1['compute_node_id'], v1['address'], v1) result = db.pci_device_get_by_addr( self.admin_context, 1, '0000:0f:08.7') self._assertEqualObjects(v1, result, self.ignored_keys) def test_pci_device_destroy(self): v1, v2 = self._create_fake_pci_devs() results = db.pci_device_get_all_by_node(self.admin_context, self.compute_node['id']) self._assertEqualListsOfObjects(results, [v1, v2], self.ignored_keys) db.pci_device_destroy(self.admin_context, v1['compute_node_id'], v1['address']) results = db.pci_device_get_all_by_node(self.admin_context, self.compute_node['id']) self._assertEqualListsOfObjects(results, [v2], self.ignored_keys) def test_pci_device_destroy_exception(self): v1, v2 = self._get_fake_pci_devs() self.assertRaises(exception.PciDeviceNotFound, db.pci_device_destroy, self.admin_context, v1['compute_node_id'], v1['address']) def _create_fake_pci_devs_old_format(self): v1, v2 = self._get_fake_pci_devs() for v in (v1, v2): v['parent_addr'] = None v['extra_info'] = jsonutils.dumps( {'phys_function': 'fake-phys-func'}) db.pci_device_update(self.admin_context, v['compute_node_id'], v['address'], v) @mock.patch('time.sleep', new=lambda x: None) class RetryOnDeadlockTestCase(test.TestCase): def test_without_deadlock(self): @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def call_api(*args, **kwargs): return True self.assertTrue(call_api()) def test_raise_deadlock(self): self.attempts = 2 @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def call_api(*args, **kwargs): while self.attempts: self.attempts = self.attempts - 1 raise db_exc.DBDeadlock("fake exception") return True 
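        # With retry_on_deadlock=True, wrap_db_retry re-invokes call_api after
        # each simulated DBDeadlock, so the two failures raised above are
        # absorbed and the final attempt returns True.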
self.assertTrue(call_api()) class TestSqlalchemyTypesRepr( test_fixtures.OpportunisticDBTestMixin, test.NoDBTestCase): def setUp(self): # NOTE(sdague): the oslo_db base test case completely # invalidates our logging setup, we actually have to do that # before it is called to keep this from vomitting all over our # test output. self.useFixture(nova_fixtures.StandardLogging()) super(TestSqlalchemyTypesRepr, self).setUp() self.engine = enginefacade.writer.get_engine() meta = MetaData(bind=self.engine) self.table = Table( 'cidr_tbl', meta, Column('id', Integer, primary_key=True), Column('addr', col_types.CIDR()) ) self.table.create() self.addCleanup(meta.drop_all) def test_cidr_repr(self): addrs = [('192.168.3.0/24', '192.168.3.0/24'), ('2001:db8::/64', '2001:db8::/64'), ('192.168.3.0', '192.168.3.0/32'), ('2001:db8::', '2001:db8::/128'), (None, None)] with self.engine.begin() as conn: for i in addrs: conn.execute(self.table.insert(), {'addr': i[0]}) query = self.table.select().order_by(self.table.c.id) result = conn.execute(query) for idx, row in enumerate(result): self.assertEqual(addrs[idx][1], row.addr) class TestMySQLSqlalchemyTypesRepr(TestSqlalchemyTypesRepr): FIXTURE = test_fixtures.MySQLOpportunisticFixture class TestPostgreSQLSqlalchemyTypesRepr(TestSqlalchemyTypesRepr): FIXTURE = test_fixtures.PostgresqlOpportunisticFixture class TestDBInstanceTags(test.TestCase): sample_data = { 'project_id': 'project1', 'hostname': 'example.com', 'host': 'h1', 'node': 'n1', 'metadata': {'mkey1': 'mval1', 'mkey2': 'mval2'}, 'system_metadata': {'smkey1': 'smval1', 'smkey2': 'smval2'}, 'info_cache': {'ckey': 'cvalue'} } def setUp(self): super(TestDBInstanceTags, self).setUp() self.user_id = 'user1' self.project_id = 'project1' self.context = context.RequestContext(self.user_id, self.project_id) def _create_instance(self): inst = db.instance_create(self.context, self.sample_data) return inst['uuid'] def _get_tags_from_resp(self, tag_refs): return [(t.resource_id, t.tag) for t in tag_refs] def test_instance_tag_add(self): uuid = self._create_instance() tag = u'tag' tag_ref = db.instance_tag_add(self.context, uuid, tag) self.assertEqual(uuid, tag_ref.resource_id) self.assertEqual(tag, tag_ref.tag) tag_refs = db.instance_tag_get_by_instance_uuid(self.context, uuid) # Check the tag for the instance was added tags = self._get_tags_from_resp(tag_refs) self.assertEqual([(uuid, tag)], tags) def test_instance_tag_add_duplication(self): uuid = self._create_instance() tag = u'tag' for x in range(5): db.instance_tag_add(self.context, uuid, tag) tag_refs = db.instance_tag_get_by_instance_uuid(self.context, uuid) # Check the only one tag for the instance was added tags = self._get_tags_from_resp(tag_refs) self.assertEqual([(uuid, tag)], tags) def test_instance_tag_set(self): uuid = self._create_instance() tag1 = u'tag1' tag2 = u'tag2' tag3 = u'tag3' tag4 = u'tag4' # Set tags to the instance db.instance_tag_set(self.context, uuid, [tag1, tag2]) tag_refs = db.instance_tag_get_by_instance_uuid(self.context, uuid) # Check the tags for the instance were set tags = self._get_tags_from_resp(tag_refs) expected = [(uuid, tag1), (uuid, tag2)] self.assertEqual(expected, tags) # Set new tags to the instance db.instance_tag_set(self.context, uuid, [tag3, tag4, tag2]) tag_refs = db.instance_tag_get_by_instance_uuid(self.context, uuid) # Check the tags for the instance were replaced tags = self._get_tags_from_resp(tag_refs) expected = [(uuid, tag3), (uuid, tag4), (uuid, tag2)] self.assertEqual(set(expected), set(tags)) 
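    # Illustrative sketch (not part of the upstream test suite): the test
    # above shows that db.instance_tag_set() replaces the instance's whole
    # tag list rather than appending to it. A minimal helper demonstrating
    # that behaviour, reusing only APIs already exercised in this class,
    # might look like the following; the method name is hypothetical.
    def _example_tag_replacement(self):
        uuid = self._create_instance()
        db.instance_tag_set(self.context, uuid, [u'a', u'b'])
        # The second call drops u'a' and adds u'c'; only the latest set
        # survives.
        db.instance_tag_set(self.context, uuid, [u'b', u'c'])
        tag_refs = db.instance_tag_get_by_instance_uuid(self.context, uuid)
        return sorted(t.tag for t in tag_refs)  # expected: [u'b', u'c']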
@mock.patch('nova.db.sqlalchemy.models.Tag.__table__.insert', return_value=models.Tag.__table__.insert()) def test_instance_tag_set_empty_add(self, mock_insert): uuid = self._create_instance() tag1 = u'tag1' tag2 = u'tag2' db.instance_tag_set(self.context, uuid, [tag1, tag2]) # Check insert() was called to insert 'tag1' and 'tag2' mock_insert.assert_called_once_with(None) mock_insert.reset_mock() db.instance_tag_set(self.context, uuid, [tag1]) # Check insert() wasn't called because there are no tags for creation mock_insert.assert_not_called() @mock.patch('sqlalchemy.orm.query.Query.delete') def test_instance_tag_set_empty_delete(self, mock_delete): uuid = self._create_instance() db.instance_tag_set(self.context, uuid, [u'tag1', u'tag2']) # Check delete() wasn't called because there are no tags for deletion mock_delete.assert_not_called() db.instance_tag_set(self.context, uuid, [u'tag1', u'tag3']) # Check delete() was called to delete 'tag2' mock_delete.assert_called_once_with(synchronize_session=False) def test_instance_tag_get_by_instance_uuid(self): uuid1 = self._create_instance() uuid2 = self._create_instance() tag1 = u'tag1' tag2 = u'tag2' tag3 = u'tag3' db.instance_tag_add(self.context, uuid1, tag1) db.instance_tag_add(self.context, uuid2, tag1) db.instance_tag_add(self.context, uuid2, tag2) db.instance_tag_add(self.context, uuid2, tag3) # Check the tags for the first instance tag_refs = db.instance_tag_get_by_instance_uuid(self.context, uuid1) tags = self._get_tags_from_resp(tag_refs) expected = [(uuid1, tag1)] self.assertEqual(expected, tags) # Check the tags for the second instance tag_refs = db.instance_tag_get_by_instance_uuid(self.context, uuid2) tags = self._get_tags_from_resp(tag_refs) expected = [(uuid2, tag1), (uuid2, tag2), (uuid2, tag3)] self.assertEqual(expected, tags) def test_instance_tag_get_by_instance_uuid_no_tags(self): uuid = self._create_instance() self.assertEqual([], db.instance_tag_get_by_instance_uuid(self.context, uuid)) def test_instance_tag_delete(self): uuid = self._create_instance() tag1 = u'tag1' tag2 = u'tag2' db.instance_tag_add(self.context, uuid, tag1) db.instance_tag_add(self.context, uuid, tag2) tag_refs = db.instance_tag_get_by_instance_uuid(self.context, uuid) tags = self._get_tags_from_resp(tag_refs) expected = [(uuid, tag1), (uuid, tag2)] # Check the tags for the instance were added self.assertEqual(expected, tags) db.instance_tag_delete(self.context, uuid, tag1) tag_refs = db.instance_tag_get_by_instance_uuid(self.context, uuid) tags = self._get_tags_from_resp(tag_refs) expected = [(uuid, tag2)] self.assertEqual(expected, tags) def test_instance_tag_delete_non_existent(self): uuid = self._create_instance() self.assertRaises(exception.InstanceTagNotFound, db.instance_tag_delete, self.context, uuid, u'tag') def test_instance_tag_delete_all(self): uuid = self._create_instance() tag1 = u'tag1' tag2 = u'tag2' db.instance_tag_add(self.context, uuid, tag1) db.instance_tag_add(self.context, uuid, tag2) tag_refs = db.instance_tag_get_by_instance_uuid(self.context, uuid) tags = self._get_tags_from_resp(tag_refs) expected = [(uuid, tag1), (uuid, tag2)] # Check the tags for the instance were added self.assertEqual(expected, tags) db.instance_tag_delete_all(self.context, uuid) tag_refs = db.instance_tag_get_by_instance_uuid(self.context, uuid) tags = self._get_tags_from_resp(tag_refs) self.assertEqual([], tags) def test_instance_tag_exists(self): uuid = self._create_instance() tag1 = u'tag1' tag2 = u'tag2' db.instance_tag_add(self.context, uuid, tag1) # 
NOTE(snikitin): Make sure it's actually a bool self.assertTrue(db.instance_tag_exists(self.context, uuid, tag1)) self.assertFalse(db.instance_tag_exists(self.context, uuid, tag2)) def test_instance_tag_add_to_non_existing_instance(self): self._create_instance() self.assertRaises(exception.InstanceNotFound, db.instance_tag_add, self.context, 'fake_uuid', 'tag') def test_instance_tag_set_to_non_existing_instance(self): self._create_instance() self.assertRaises(exception.InstanceNotFound, db.instance_tag_set, self.context, 'fake_uuid', ['tag1', 'tag2']) def test_instance_tag_get_from_non_existing_instance(self): self._create_instance() self.assertRaises(exception.InstanceNotFound, db.instance_tag_get_by_instance_uuid, self.context, 'fake_uuid') def test_instance_tag_delete_from_non_existing_instance(self): self._create_instance() self.assertRaises(exception.InstanceNotFound, db.instance_tag_delete, self.context, 'fake_uuid', 'tag') def test_instance_tag_delete_all_from_non_existing_instance(self): self._create_instance() self.assertRaises(exception.InstanceNotFound, db.instance_tag_delete_all, self.context, 'fake_uuid') def test_instance_tag_exists_non_existing_instance(self): self._create_instance() self.assertRaises(exception.InstanceNotFound, db.instance_tag_exists, self.context, 'fake_uuid', 'tag') @mock.patch('time.sleep', new=lambda x: None) class TestInstanceInfoCache(test.TestCase): def setUp(self): super(TestInstanceInfoCache, self).setUp() user_id = 'fake' project_id = 'fake' self.context = context.RequestContext(user_id, project_id) def test_instance_info_cache_get(self): instance = db.instance_create(self.context, {}) network_info = 'net' db.instance_info_cache_update(self.context, instance.uuid, {'network_info': network_info}) info_cache = db.instance_info_cache_get(self.context, instance.uuid) self.assertEqual(network_info, info_cache.network_info) def test_instance_info_cache_update(self): instance = db.instance_create(self.context, {}) network_info1 = 'net1' db.instance_info_cache_update(self.context, instance.uuid, {'network_info': network_info1}) info_cache = db.instance_info_cache_get(self.context, instance.uuid) self.assertEqual(network_info1, info_cache.network_info) network_info2 = 'net2' db.instance_info_cache_update(self.context, instance.uuid, {'network_info': network_info2}) info_cache = db.instance_info_cache_get(self.context, instance.uuid) self.assertEqual(network_info2, info_cache.network_info) def test_instance_info_cache_delete(self): instance = db.instance_create(self.context, {}) network_info = 'net' db.instance_info_cache_update(self.context, instance.uuid, {'network_info': network_info}) info_cache = db.instance_info_cache_get(self.context, instance.uuid) self.assertEqual(network_info, info_cache.network_info) db.instance_info_cache_delete(self.context, instance.uuid) info_cache = db.instance_info_cache_get(self.context, instance.uuid) self.assertIsNone(info_cache) def test_instance_info_cache_update_duplicate(self): instance1 = db.instance_create(self.context, {}) instance2 = db.instance_create(self.context, {}) network_info1 = 'net1' db.instance_info_cache_update(self.context, instance1.uuid, {'network_info': network_info1}) network_info2 = 'net2' db.instance_info_cache_update(self.context, instance2.uuid, {'network_info': network_info2}) # updating of instance_uuid causes unique constraint failure, # using of savepoint helps to continue working with existing session # after DB errors, so exception was successfully handled 
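        # The update below tries to point instance2's cache at instance1's
        # uuid; the duplicate is expected to be handled (via the savepoint
        # noted above) and both caches keep their original network_info, as
        # the assertions that follow confirm.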
db.instance_info_cache_update(self.context, instance2.uuid, {'instance_uuid': instance1.uuid}) info_cache1 = db.instance_info_cache_get(self.context, instance1.uuid) self.assertEqual(network_info1, info_cache1.network_info) info_cache2 = db.instance_info_cache_get(self.context, instance2.uuid) self.assertEqual(network_info2, info_cache2.network_info) def test_instance_info_cache_create_using_update(self): network_info = 'net' instance_uuid = uuidsentinel.uuid1 db.instance_info_cache_update(self.context, instance_uuid, {'network_info': network_info}) info_cache = db.instance_info_cache_get(self.context, instance_uuid) self.assertEqual(network_info, info_cache.network_info) self.assertEqual(instance_uuid, info_cache.instance_uuid) @mock.patch.object(models.InstanceInfoCache, 'update') def test_instance_info_cache_retried_on_deadlock(self, update): update.side_effect = [db_exc.DBDeadlock(), db_exc.DBDeadlock(), None] instance = db.instance_create(self.context, {}) network_info = 'net' updated = db.instance_info_cache_update(self.context, instance.uuid, {'network_info': network_info}) self.assertEqual(instance.uuid, updated.instance_uuid) @mock.patch.object(models.InstanceInfoCache, 'update') def test_instance_info_cache_not_retried_on_deadlock_forever(self, update): update.side_effect = db_exc.DBDeadlock instance = db.instance_create(self.context, {}) network_info = 'net' self.assertRaises(db_exc.DBDeadlock, db.instance_info_cache_update, self.context, instance.uuid, {'network_info': network_info}) class TestInstanceTagsFiltering(test.TestCase): sample_data = { 'project_id': 'project1' } def setUp(self): super(TestInstanceTagsFiltering, self).setUp() self.ctxt = context.RequestContext('user1', 'project1') def _create_instance_with_kwargs(self, **kw): context = kw.pop('context', self.ctxt) data = self.sample_data.copy() data.update(kw) return db.instance_create(context, data) def _create_instances(self, count): return [self._create_instance_with_kwargs()['uuid'] for i in range(count)] def _assertEqualInstanceUUIDs(self, expected_uuids, observed_instances): observed_uuids = [inst['uuid'] for inst in observed_instances] self.assertEqual(sorted(expected_uuids), sorted(observed_uuids)) def test_instance_get_all_by_filters_tag_any(self): uuids = self._create_instances(3) db.instance_tag_set(self.ctxt, uuids[0], [u't1']) db.instance_tag_set(self.ctxt, uuids[1], [u't1', u't2', u't3']) db.instance_tag_set(self.ctxt, uuids[2], [u't3']) result = db.instance_get_all_by_filters(self.ctxt, {'tags-any': [u't1', u't2']}) self._assertEqualInstanceUUIDs([uuids[0], uuids[1]], result) def test_instance_get_all_by_filters_tag_any_empty(self): uuids = self._create_instances(2) db.instance_tag_set(self.ctxt, uuids[0], [u't1']) db.instance_tag_set(self.ctxt, uuids[1], [u't1', u't2']) result = db.instance_get_all_by_filters(self.ctxt, {'tags-any': [u't3', u't4']}) self.assertEqual([], result) def test_instance_get_all_by_filters_tag(self): uuids = self._create_instances(3) db.instance_tag_set(self.ctxt, uuids[0], [u't1', u't3']) db.instance_tag_set(self.ctxt, uuids[1], [u't1', u't2']) db.instance_tag_set(self.ctxt, uuids[2], [u't1', u't2', u't3']) result = db.instance_get_all_by_filters(self.ctxt, {'tags': [u't1', u't2']}) self._assertEqualInstanceUUIDs([uuids[1], uuids[2]], result) def test_instance_get_all_by_filters_tag_empty(self): uuids = self._create_instances(2) db.instance_tag_set(self.ctxt, uuids[0], [u't1']) db.instance_tag_set(self.ctxt, uuids[1], [u't1', u't2']) result = 
db.instance_get_all_by_filters(self.ctxt, {'tags': [u't3']}) self.assertEqual([], result) def test_instance_get_all_by_filters_tag_any_and_tag(self): uuids = self._create_instances(3) db.instance_tag_set(self.ctxt, uuids[0], [u't1', u't2']) db.instance_tag_set(self.ctxt, uuids[1], [u't1', u't2', u't4']) db.instance_tag_set(self.ctxt, uuids[2], [u't2', u't3']) result = db.instance_get_all_by_filters(self.ctxt, {'tags': [u't1', u't2'], 'tags-any': [u't3', u't4']}) self._assertEqualInstanceUUIDs([uuids[1]], result) def test_instance_get_all_by_filters_tags_and_project_id(self): context1 = context.RequestContext('user1', 'p1') context2 = context.RequestContext('user2', 'p2') uuid1 = self._create_instance_with_kwargs( context=context1, project_id='p1')['uuid'] uuid2 = self._create_instance_with_kwargs( context=context1, project_id='p1')['uuid'] uuid3 = self._create_instance_with_kwargs( context=context2, project_id='p2')['uuid'] db.instance_tag_set(context1, uuid1, [u't1', u't2']) db.instance_tag_set(context1, uuid2, [u't1', u't2', u't4']) db.instance_tag_set(context2, uuid3, [u't1', u't2', u't3', u't4']) result = db.instance_get_all_by_filters(context.get_admin_context(), {'tags': [u't1', u't2'], 'tags-any': [u't3', u't4'], 'project_id': 'p1'}) self._assertEqualInstanceUUIDs([uuid2], result) def test_instance_get_all_by_filters_not_tags(self): uuids = self._create_instances(8) db.instance_tag_set(self.ctxt, uuids[0], [u't1']) db.instance_tag_set(self.ctxt, uuids[1], [u't2']) db.instance_tag_set(self.ctxt, uuids[2], [u't1', u't2']) db.instance_tag_set(self.ctxt, uuids[3], [u't2', u't3']) db.instance_tag_set(self.ctxt, uuids[4], [u't3']) db.instance_tag_set(self.ctxt, uuids[5], [u't1', u't2', u't3']) db.instance_tag_set(self.ctxt, uuids[6], [u't3', u't4']) db.instance_tag_set(self.ctxt, uuids[7], []) result = db.instance_get_all_by_filters( self.ctxt, {'not-tags': [u't1', u't2']}) self._assertEqualInstanceUUIDs([uuids[0], uuids[1], uuids[3], uuids[4], uuids[6], uuids[7]], result) def test_instance_get_all_by_filters_not_tags_multiple_cells(self): """Test added for bug 1682693. In cells v2 scenario, db.instance_get_all_by_filters() will be called multiple times to search across all cells. This test tests that filters for all cells remain the same in the loop. 
""" uuids = self._create_instances(8) db.instance_tag_set(self.ctxt, uuids[0], [u't1']) db.instance_tag_set(self.ctxt, uuids[1], [u't2']) db.instance_tag_set(self.ctxt, uuids[2], [u't1', u't2']) db.instance_tag_set(self.ctxt, uuids[3], [u't2', u't3']) db.instance_tag_set(self.ctxt, uuids[4], [u't3']) db.instance_tag_set(self.ctxt, uuids[5], [u't1', u't2', u't3']) db.instance_tag_set(self.ctxt, uuids[6], [u't3', u't4']) db.instance_tag_set(self.ctxt, uuids[7], []) filters = {'not-tags': [u't1', u't2']} result = db.instance_get_all_by_filters(self.ctxt, filters) self._assertEqualInstanceUUIDs([uuids[0], uuids[1], uuids[3], uuids[4], uuids[6], uuids[7]], result) def test_instance_get_all_by_filters_not_tags_any(self): uuids = self._create_instances(8) db.instance_tag_set(self.ctxt, uuids[0], [u't1']) db.instance_tag_set(self.ctxt, uuids[1], [u't2']) db.instance_tag_set(self.ctxt, uuids[2], [u't1', u't2']) db.instance_tag_set(self.ctxt, uuids[3], [u't2', u't3']) db.instance_tag_set(self.ctxt, uuids[4], [u't3']) db.instance_tag_set(self.ctxt, uuids[5], [u't1', u't2', u't3']) db.instance_tag_set(self.ctxt, uuids[6], [u't3', u't4']) db.instance_tag_set(self.ctxt, uuids[7], []) result = db.instance_get_all_by_filters( self.ctxt, {'not-tags-any': [u't1', u't2']}) self._assertEqualInstanceUUIDs([uuids[4], uuids[6], uuids[7]], result) def test_instance_get_all_by_filters_not_tags_and_tags(self): uuids = self._create_instances(5) db.instance_tag_set(self.ctxt, uuids[0], [u't1', u't2', u't4', u't5']) db.instance_tag_set(self.ctxt, uuids[1], [u't1', u't2', u't4']) db.instance_tag_set(self.ctxt, uuids[2], [u't1', u't2', u't3']) db.instance_tag_set(self.ctxt, uuids[3], [u't1', u't3']) db.instance_tag_set(self.ctxt, uuids[4], []) result = db.instance_get_all_by_filters(self.ctxt, {'tags': [u't1', u't2'], 'not-tags': [u't4', u't5']}) self._assertEqualInstanceUUIDs([uuids[1], uuids[2]], result) def test_instance_get_all_by_filters_tags_contradictory(self): uuids = self._create_instances(4) db.instance_tag_set(self.ctxt, uuids[0], [u't1']) db.instance_tag_set(self.ctxt, uuids[1], [u't2', u't3']) db.instance_tag_set(self.ctxt, uuids[2], [u't1', u't2']) db.instance_tag_set(self.ctxt, uuids[3], []) result = db.instance_get_all_by_filters(self.ctxt, {'tags': [u't1'], 'not-tags': [u't1']}) self.assertEqual([], result) result = db.instance_get_all_by_filters(self.ctxt, {'tags': [u't1'], 'not-tags-any': [u't1']}) self.assertEqual([], result) result = db.instance_get_all_by_filters(self.ctxt, {'tags-any': [u't1'], 'not-tags-any': [u't1']}) self.assertEqual([], result) result = db.instance_get_all_by_filters(self.ctxt, {'tags-any': [u't1'], 'not-tags': [u't1']}) self.assertEqual([], result) def test_instance_get_all_by_filters_not_tags_and_tags_any(self): uuids = self._create_instances(6) db.instance_tag_set(self.ctxt, uuids[0], [u't1']) db.instance_tag_set(self.ctxt, uuids[1], [u't2']) db.instance_tag_set(self.ctxt, uuids[2], [u't1', u't2']) db.instance_tag_set(self.ctxt, uuids[3], [u't1', u't3']) db.instance_tag_set(self.ctxt, uuids[4], [u't1', u't2', u't3']) db.instance_tag_set(self.ctxt, uuids[5], []) result = db.instance_get_all_by_filters(self.ctxt, {'tags-any': [u't1', u't2'], 'not-tags': [u't1', u't2']}) self._assertEqualInstanceUUIDs([uuids[0], uuids[1], uuids[3]], result) def test_instance_get_all_by_filters_not_tags_and_not_tags_any(self): uuids = self._create_instances(6) db.instance_tag_set(self.ctxt, uuids[0], [u't1']) db.instance_tag_set(self.ctxt, uuids[1], [u't2', u't5']) 
db.instance_tag_set(self.ctxt, uuids[2], [u't1', u't2']) db.instance_tag_set(self.ctxt, uuids[3], [u't1', u't3']) db.instance_tag_set(self.ctxt, uuids[4], [u't1', u't2', u't4', u't5']) db.instance_tag_set(self.ctxt, uuids[5], []) result = db.instance_get_all_by_filters(self.ctxt, {'not-tags': [u't1', u't2'], 'not-tags-any': [u't3', u't4']}) self._assertEqualInstanceUUIDs([uuids[0], uuids[1], uuids[5]], result) def test_instance_get_all_by_filters_all_tag_filters(self): uuids = self._create_instances(9) db.instance_tag_set(self.ctxt, uuids[0], [u't1', u't3', u't7']) db.instance_tag_set(self.ctxt, uuids[1], [u't1', u't2']) db.instance_tag_set(self.ctxt, uuids[2], [u't1', u't2', u't7']) db.instance_tag_set(self.ctxt, uuids[3], [u't1', u't2', u't3', u't5']) db.instance_tag_set(self.ctxt, uuids[4], [u't1', u't2', u't3', u't7']) db.instance_tag_set(self.ctxt, uuids[5], [u't1', u't2', u't3']) db.instance_tag_set(self.ctxt, uuids[6], [u't1', u't2', u't3', u't4', u't5']) db.instance_tag_set(self.ctxt, uuids[7], [u't1', u't2', u't3', u't4', u't5', u't6']) db.instance_tag_set(self.ctxt, uuids[8], []) result = db.instance_get_all_by_filters(self.ctxt, {'tags': [u't1', u't2'], 'tags-any': [u't3', u't4'], 'not-tags': [u't5', u't6'], 'not-tags-any': [u't7', u't8']}) self._assertEqualInstanceUUIDs([uuids[3], uuids[5], uuids[6]], result) class ConsoleAuthTokenTestCase(test.TestCase): def _create_instances(self, uuids): for uuid in uuids: db.instance_create(self.context, {'uuid': uuid, 'project_id': self.context.project_id}) def _create(self, token_hash, instance_uuid, expire_offset, host=None): t = copy.deepcopy(fake_console_auth_token.fake_token_dict) del t['id'] t['token_hash'] = token_hash t['instance_uuid'] = instance_uuid t['expires'] = timeutils.utcnow_ts() + expire_offset if host: t['host'] = host db.console_auth_token_create(self.context, t) def setUp(self): super(ConsoleAuthTokenTestCase, self).setUp() self.context = context.RequestContext('fake', 'fake') def test_console_auth_token_create_no_instance(self): t = copy.deepcopy(fake_console_auth_token.fake_token_dict) del t['id'] self.assertRaises(exception.InstanceNotFound, db.console_auth_token_create, self.context, t) def test_console_auth_token_get_valid_deleted_instance(self): uuid1 = uuidsentinel.uuid1 hash1 = utils.get_sha256_str(uuidsentinel.token1) self._create_instances([uuid1]) self._create(hash1, uuid1, 100) db_obj1 = db.console_auth_token_get_valid(self.context, hash1, uuid1) self.assertIsNotNone(db_obj1, "a valid token should be in database") db.instance_destroy(self.context, uuid1) self.assertRaises(exception.InstanceNotFound, db.console_auth_token_get_valid, self.context, hash1, uuid1) def test_console_auth_token_destroy_all_by_instance(self): uuid1 = uuidsentinel.uuid1 uuid2 = uuidsentinel.uuid2 hash1 = utils.get_sha256_str(uuidsentinel.token1) hash2 = utils.get_sha256_str(uuidsentinel.token2) hash3 = utils.get_sha256_str(uuidsentinel.token3) self._create_instances([uuid1, uuid2]) self._create(hash1, uuid1, 100) self._create(hash2, uuid1, 100) self._create(hash3, uuid2, 100) db_obj1 = db.console_auth_token_get_valid(self.context, hash1, uuid1) db_obj2 = db.console_auth_token_get_valid(self.context, hash2, uuid1) db_obj3 = db.console_auth_token_get_valid(self.context, hash3, uuid2) self.assertIsNotNone(db_obj1, "a valid token should be in database") self.assertIsNotNone(db_obj2, "a valid token should be in database") self.assertIsNotNone(db_obj3, "a valid token should be in database") 
db.console_auth_token_destroy_all_by_instance(self.context, uuid1) db_obj4 = db.console_auth_token_get_valid(self.context, hash1, uuid1) db_obj5 = db.console_auth_token_get_valid(self.context, hash2, uuid1) db_obj6 = db.console_auth_token_get_valid(self.context, hash3, uuid2) self.assertIsNone(db_obj4, "no valid token should be in database") self.assertIsNone(db_obj5, "no valid token should be in database") self.assertIsNotNone(db_obj6, "a valid token should be in database") def test_console_auth_token_get_valid_by_expiry(self): uuid1 = uuidsentinel.uuid1 uuid2 = uuidsentinel.uuid2 hash1 = utils.get_sha256_str(uuidsentinel.token1) hash2 = utils.get_sha256_str(uuidsentinel.token2) self.addCleanup(timeutils.clear_time_override) timeutils.set_time_override(timeutils.utcnow()) self._create_instances([uuid1, uuid2]) self._create(hash1, uuid1, 10) timeutils.advance_time_seconds(100) self._create(hash2, uuid2, 10) db_obj1 = db.console_auth_token_get_valid(self.context, hash1, uuid1) db_obj2 = db.console_auth_token_get_valid(self.context, hash2, uuid2) self.assertIsNone(db_obj1, "the token should have expired") self.assertIsNotNone(db_obj2, "a valid token should be found here") def test_console_auth_token_get_valid_by_uuid(self): uuid1 = uuidsentinel.uuid1 uuid2 = uuidsentinel.uuid2 hash1 = utils.get_sha256_str(uuidsentinel.token1) self._create_instances([uuid1, uuid2]) self._create(hash1, uuid1, 10) db_obj1 = db.console_auth_token_get_valid(self.context, hash1, uuid1) db_obj2 = db.console_auth_token_get_valid(self.context, hash1, uuid2) self.assertIsNotNone(db_obj1, "a valid token should be found here") self.assertEqual(hash1, db_obj1['token_hash']) self.assertIsNone(db_obj2, "the token uuid should not match") def test_console_auth_token_destroy_expired(self): uuid1 = uuidsentinel.uuid1 uuid2 = uuidsentinel.uuid2 uuid3 = uuidsentinel.uuid3 hash1 = utils.get_sha256_str(uuidsentinel.token1) hash2 = utils.get_sha256_str(uuidsentinel.token2) hash3 = utils.get_sha256_str(uuidsentinel.token3) self.addCleanup(timeutils.clear_time_override) timeutils.set_time_override(timeutils.utcnow()) self._create_instances([uuid1, uuid2, uuid3]) self._create(hash1, uuid1, 10) self._create(hash2, uuid2, 10, host='other-host') timeutils.advance_time_seconds(100) self._create(hash3, uuid3, 10) db.console_auth_token_destroy_expired(self.context) # the api only supports getting unexpired tokens # but by rolling back time we can see if a token that # should be deleted is still there timeutils.advance_time_seconds(-100) db_obj1 = db.console_auth_token_get_valid(self.context, hash1, uuid1) db_obj2 = db.console_auth_token_get_valid(self.context, hash2, uuid2) db_obj3 = db.console_auth_token_get_valid(self.context, hash3, uuid3) self.assertIsNone(db_obj1, "the token should have been deleted") self.assertIsNone(db_obj2, "the token should have been deleted") self.assertIsNotNone(db_obj3, "a valid token should be found here") def test_console_auth_token_destroy_expired_by_host(self): uuid1 = uuidsentinel.uuid1 uuid2 = uuidsentinel.uuid2 uuid3 = uuidsentinel.uuid3 hash1 = utils.get_sha256_str(uuidsentinel.token1) hash2 = utils.get_sha256_str(uuidsentinel.token2) hash3 = utils.get_sha256_str(uuidsentinel.token3) self.addCleanup(timeutils.clear_time_override) timeutils.set_time_override(timeutils.utcnow()) self._create_instances([uuid1, uuid2, uuid3]) self._create(hash1, uuid1, 10) self._create(hash2, uuid2, 10, host='other-host') timeutils.advance_time_seconds(100) self._create(hash3, uuid3, 10) 
db.console_auth_token_destroy_expired_by_host( self.context, 'fake-host') # the api only supports getting unexpired tokens # but by rolling back time we can see if a token that # should be deleted is still there timeutils.advance_time_seconds(-100) db_obj1 = db.console_auth_token_get_valid(self.context, hash1, uuid1) db_obj2 = db.console_auth_token_get_valid(self.context, hash2, uuid2) db_obj3 = db.console_auth_token_get_valid(self.context, hash3, uuid3) self.assertIsNone(db_obj1, "the token should have been deleted") self.assertIsNotNone(db_obj2, "a valid token should be found here") self.assertIsNotNone(db_obj3, "a valid token should be found here") def test_console_auth_token_get_valid_without_uuid_deleted_instance(self): uuid1 = uuidsentinel.uuid1 hash1 = utils.get_sha256_str(uuidsentinel.token1) self._create_instances([uuid1]) self._create(hash1, uuid1, 100) db_obj1 = db.console_auth_token_get_valid(self.context, hash1) self.assertIsNotNone(db_obj1, "a valid token should be in database") db.instance_destroy(self.context, uuid1) db_obj1 = db.console_auth_token_get_valid(self.context, hash1) self.assertIsNone(db_obj1, "the token should have been deleted") def test_console_auth_token_get_valid_without_uuid_by_expiry(self): uuid1 = uuidsentinel.uuid1 uuid2 = uuidsentinel.uuid2 hash1 = utils.get_sha256_str(uuidsentinel.token1) hash2 = utils.get_sha256_str(uuidsentinel.token2) self.addCleanup(timeutils.clear_time_override) timeutils.set_time_override(timeutils.utcnow()) self._create_instances([uuid1, uuid2]) self._create(hash1, uuid1, 10) timeutils.advance_time_seconds(100) self._create(hash2, uuid2, 10) db_obj1 = db.console_auth_token_get_valid(self.context, hash1) db_obj2 = db.console_auth_token_get_valid(self.context, hash2) self.assertIsNone(db_obj1, "the token should have expired") self.assertIsNotNone(db_obj2, "a valid token should be found here") class SortMarkerHelper(test.TestCase): def setUp(self): super(SortMarkerHelper, self).setUp() self.context = context.RequestContext('fake', 'fake') self.instances = [] launched = datetime.datetime(2005, 4, 30, 13, 00, 00) td = datetime.timedelta values = { 'key_name': ['dan', 'dan', 'taylor', 'jax'], 'memory_mb': [512, 1024, 512, 256], 'launched_at': [launched + td(1), launched - td(256), launched + td(32), launched - td(5000)], } for i in range(0, 4): inst = {'user_id': self.context.user_id, 'project_id': self.context.project_id, 'auto_disk_config': bool(i % 2), 'vcpus': 1} for key in values: inst[key] = values[key].pop(0) db_instance = db.instance_create(self.context, inst) self.instances.append(db_instance) def test_with_one_key(self): """Test instance_get_by_sort_filters() with one sort key.""" # If we sort ascending by key_name and our marker was something # just after jax, taylor would be the next one. marker = db.instance_get_by_sort_filters( self.context, ['key_name'], ['asc'], ['jaxz']) self.assertEqual(self.instances[2]['uuid'], marker) def _test_with_multiple_keys(self, sort_keys, sort_dirs, value_fn): """Test instance_get_by_sort_filters() with multiple sort keys. Since this returns the marker it's looking for, it's actually really hard to test this like we normally would with pagination, i.e. marching through the instances in order. Attempting to do so covered up a bug in this previously. So, for a list of marker values, query and assert we get the instance we expect. 
""" # For the query below, ordering memory_mb asc, key_name desc, # The following is the expected ordering of the instances we # have to test: # # 256-jax # 512-taylor # 512-dan # 1024-dan steps = [ (200, 'foo', 3), # all less than 256-jax (256, 'xyz', 3), # name comes before jax (256, 'jax', 3), # all equal to 256-jax (256, 'abc', 2), # name after jax (500, 'foo', 2), # all greater than 256-jax (512, 'xyz', 2), # name before taylor and dan (512, 'mno', 0), # name after taylor, before dan-512 (512, 'abc', 1), # name after dan-512 (999, 'foo', 1), # all greater than 512-taylor (1024, 'xyz', 1), # name before dan (1024, 'abc', None), # name after dan (2048, 'foo', None), # all greater than 1024-dan ] for mem, name, expected in steps: marker = db.instance_get_by_sort_filters( self.context, sort_keys, sort_dirs, value_fn(mem, name)) if expected is None: self.assertIsNone(marker) else: expected_inst = self.instances[expected] got_inst = [inst for inst in self.instances if inst['uuid'] == marker][0] self.assertEqual( expected_inst['uuid'], marker, 'marker %s-%s expected %s-%s got %s-%s' % ( mem, name, expected_inst['memory_mb'], expected_inst['key_name'], got_inst['memory_mb'], got_inst['key_name'])) def test_with_two_keys(self): """Test instance_get_by_sort_filters() with two sort_keys.""" self._test_with_multiple_keys( ['memory_mb', 'key_name'], ['asc', 'desc'], lambda mem, name: [mem, name]) def test_with_three_keys(self): """Test instance_get_by_sort_filters() with three sort_keys. This inserts another key in the middle of memory_mb,key_name which is always equal in all the test instances. We do this to make sure that we are only including the equivalence fallback on the final sort_key, otherwise we might stall out in the middle of a series of instances with equivalent values for a key in the middle of sort_keys. """ self._test_with_multiple_keys( ['memory_mb', 'vcpus', 'key_name'], ['asc', 'asc', 'desc'], lambda mem, name: [mem, 1, name]) def test_no_match(self): marker = db.instance_get_by_sort_filters(self.context, ['memory_mb'], ['asc'], [4096]) # None of our instances have >= 4096mb, so nothing matches self.assertIsNone(marker) def test_by_bool(self): """Verify that we can use booleans in sort_keys.""" # If we sort ascending by auto_disk_config, the first one # with True for that value would be the second instance we # create, because bool(1 % 2) == True. marker = db.instance_get_by_sort_filters( self.context, ['auto_disk_config', 'id'], ['asc', 'asc'], [True, 2]) self.assertEqual(self.instances[1]['uuid'], marker) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/db/test_migration_utils.py0000664000175000017500000002235300000000000023066 0ustar00zuulzuul00000000000000# Copyright (c) 2013 Boris Pavlovic (boris@pavlovic.me). # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_db.sqlalchemy import enginefacade from oslo_db.sqlalchemy import test_base from oslo_db.sqlalchemy import test_fixtures from oslo_utils import uuidutils from sqlalchemy import Integer, String from sqlalchemy import MetaData, Table, Column from sqlalchemy.exc import NoSuchTableError from sqlalchemy import sql from sqlalchemy.types import UserDefinedType from nova.db.sqlalchemy import api as db from nova.db.sqlalchemy import utils from nova import exception from nova import test from nova.tests import fixtures as nova_fixtures class CustomType(UserDefinedType): """Dummy column type for testing unsupported types.""" def get_col_spec(self): return "CustomType" # TODO(sdague): no tests in the nova/tests tree should inherit from # base test classes in another library. This causes all kinds of havoc # in these doing things incorrectly for what we need in subunit # reporting. This is a long unwind, but should be done in the future # and any code needed out of oslo_db should be exported / accessed as # a fixture. class TestMigrationUtilsSQLite( test_fixtures.OpportunisticDBTestMixin, test.NoDBTestCase): """Class for testing utils that are used in db migrations.""" def setUp(self): # NOTE(sdague): the oslo_db base test case completely # invalidates our logging setup, we actually have to do that # before it is called to keep this from vomitting all over our # test output. self.useFixture(nova_fixtures.StandardLogging()) super(TestMigrationUtilsSQLite, self).setUp() self.engine = enginefacade.writer.get_engine() self.meta = MetaData(bind=self.engine) def test_delete_from_select(self): table_name = "__test_deletefromselect_table__" uuidstrs = [] for unused in range(10): uuidstrs.append(uuidutils.generate_uuid(dashed=False)) conn = self.engine.connect() test_table = Table(table_name, self.meta, Column('id', Integer, primary_key=True, nullable=False, autoincrement=True), Column('uuid', String(36), nullable=False)) test_table.create() # Add 10 rows to table for uuidstr in uuidstrs: ins_stmt = test_table.insert().values(uuid=uuidstr) conn.execute(ins_stmt) # Delete 4 rows in one chunk column = test_table.c.id query_delete = sql.select([column], test_table.c.id < 5).order_by(column) delete_statement = db.DeleteFromSelect(test_table, query_delete, column) result_delete = conn.execute(delete_statement) # Verify we delete 4 rows self.assertEqual(result_delete.rowcount, 4) query_all = sql.select([test_table])\ .where(test_table.c.uuid.in_(uuidstrs)) rows = conn.execute(query_all).fetchall() # Verify we still have 6 rows in table self.assertEqual(len(rows), 6) def test_check_shadow_table(self): table_name = 'test_check_shadow_table' table = Table(table_name, self.meta, Column('id', Integer, primary_key=True), Column('a', Integer), Column('c', String(256))) table.create() # check missing shadow table self.assertRaises(NoSuchTableError, utils.check_shadow_table, self.engine, table_name) shadow_table = Table(db._SHADOW_TABLE_PREFIX + table_name, self.meta, Column('id', Integer), Column('a', Integer)) shadow_table.create() # check missing column self.assertRaises(exception.NovaException, utils.check_shadow_table, self.engine, table_name) # check when all is ok c = Column('c', String(256)) shadow_table.create_column(c) self.assertTrue(utils.check_shadow_table(self.engine, table_name)) # check extra column d = Column('d', Integer) shadow_table.create_column(d) self.assertRaises(exception.NovaException, utils.check_shadow_table, self.engine, table_name) def test_check_shadow_table_different_types(self): 
table_name = 'test_check_shadow_table_different_types' table = Table(table_name, self.meta, Column('id', Integer, primary_key=True), Column('a', Integer)) table.create() shadow_table = Table(db._SHADOW_TABLE_PREFIX + table_name, self.meta, Column('id', Integer, primary_key=True), Column('a', String(256))) shadow_table.create() self.assertRaises(exception.NovaException, utils.check_shadow_table, self.engine, table_name) @test_base.backend_specific('sqlite') def test_check_shadow_table_with_unsupported_sqlite_type(self): table_name = 'test_check_shadow_table_with_unsupported_sqlite_type' table = Table(table_name, self.meta, Column('id', Integer, primary_key=True), Column('a', Integer), Column('c', CustomType)) table.create() shadow_table = Table(db._SHADOW_TABLE_PREFIX + table_name, self.meta, Column('id', Integer, primary_key=True), Column('a', Integer), Column('c', CustomType)) shadow_table.create() self.assertTrue(utils.check_shadow_table(self.engine, table_name)) def test_create_shadow_table_by_table_instance(self): table_name = 'test_create_shadow_table_by_table_instance' table = Table(table_name, self.meta, Column('id', Integer, primary_key=True), Column('a', Integer), Column('b', String(256))) table.create() utils.create_shadow_table(self.engine, table=table) self.assertTrue(utils.check_shadow_table(self.engine, table_name)) def test_create_shadow_table_by_name(self): table_name = 'test_create_shadow_table_by_name' table = Table(table_name, self.meta, Column('id', Integer, primary_key=True), Column('a', Integer), Column('b', String(256))) table.create() utils.create_shadow_table(self.engine, table_name=table_name) self.assertTrue(utils.check_shadow_table(self.engine, table_name)) @test_base.backend_specific('sqlite') def test_create_shadow_table_not_supported_type(self): table_name = 'test_create_shadow_table_not_supported_type' table = Table(table_name, self.meta, Column('id', Integer, primary_key=True), Column('a', CustomType)) table.create() utils.create_shadow_table(self.engine, table_name=table_name, a=Column('a', CustomType())) self.assertTrue(utils.check_shadow_table(self.engine, table_name)) def test_create_shadow_both_table_and_table_name_are_none(self): self.assertRaises(exception.NovaException, utils.create_shadow_table, self.engine) def test_create_shadow_both_table_and_table_name_are_specified(self): table_name = ('test_create_shadow_both_table_and_table_name_are_' 'specified') table = Table(table_name, self.meta, Column('id', Integer, primary_key=True), Column('a', Integer)) table.create() self.assertRaises(exception.NovaException, utils.create_shadow_table, self.engine, table=table, table_name=table_name) def test_create_duplicate_shadow_table(self): table_name = 'test_create_duplicate_shadow_table' table = Table(table_name, self.meta, Column('id', Integer, primary_key=True), Column('a', Integer)) table.create() utils.create_shadow_table(self.engine, table_name=table_name) self.assertRaises(exception.ShadowTableExists, utils.create_shadow_table, self.engine, table_name=table_name) class TestMigrationUtilsPostgreSQL(TestMigrationUtilsSQLite): FIXTURE = test_fixtures.PostgresqlOpportunisticFixture class TestMigrationUtilsMySQL(TestMigrationUtilsSQLite): FIXTURE = test_fixtures.MySQLOpportunisticFixture ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/db/test_migrations.py0000664000175000017500000014312700000000000022034 0ustar00zuulzuul00000000000000# Copyright 2010-2011 OpenStack 
Foundation # Copyright 2012-2013 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests for database migrations. There are "opportunistic" tests which allows testing against all 3 databases (sqlite in memory, mysql, pg) in a properly configured unit test environment. For the opportunistic testing you need to set up db's named 'openstack_citest' with user 'openstack_citest' and password 'openstack_citest' on localhost. The test will then use that db and u/p combo to run the tests. For postgres on Ubuntu this can be done with the following commands:: | sudo -u postgres psql | postgres=# create user openstack_citest with createdb login password | 'openstack_citest'; | postgres=# create database openstack_citest with owner openstack_citest; """ import glob import os from migrate import UniqueConstraint from migrate.versioning import repository import mock from oslo_db.sqlalchemy import enginefacade from oslo_db.sqlalchemy import test_fixtures from oslo_db.sqlalchemy import test_migrations from oslo_db.sqlalchemy import utils as oslodbutils from oslotest import timeout import sqlalchemy from sqlalchemy.engine import reflection import sqlalchemy.exc from sqlalchemy.sql import null import testtools from nova.db import migration from nova.db.sqlalchemy import migrate_repo from nova.db.sqlalchemy import migration as sa_migration from nova.db.sqlalchemy import models from nova.db.sqlalchemy import utils as db_utils from nova import exception from nova import test from nova.tests import fixtures as nova_fixtures # TODO(sdague): no tests in the nova/tests tree should inherit from # base test classes in another library. This causes all kinds of havoc # in these doing things incorrectly for what we need in subunit # reporting. This is a long unwind, but should be done in the future # and any code needed out of oslo_db should be exported / accessed as # a fixture. class NovaMigrationsCheckers(test_migrations.ModelsMigrationsSync, test_migrations.WalkVersionsMixin): """Test sqlalchemy-migrate migrations.""" TIMEOUT_SCALING_FACTOR = 4 @property def INIT_VERSION(self): return migration.db_initial_version() @property def REPOSITORY(self): return repository.Repository( os.path.abspath(os.path.dirname(migrate_repo.__file__))) @property def migration_api(self): return sa_migration.versioning_api @property def migrate_engine(self): return self.engine def setUp(self): # NOTE(sdague): the oslo_db base test case completely # invalidates our logging setup, we actually have to do that # before it is called to keep this from vomitting all over our # test output. self.useFixture(nova_fixtures.StandardLogging()) super(NovaMigrationsCheckers, self).setUp() # The Timeout fixture picks up env.OS_TEST_TIMEOUT, defaulting to 0. 
self.useFixture(timeout.Timeout( scaling_factor=self.TIMEOUT_SCALING_FACTOR)) self.engine = enginefacade.writer.get_engine() def assertColumnExists(self, engine, table_name, column): self.assertTrue(oslodbutils.column_exists(engine, table_name, column), 'Column %s.%s does not exist' % (table_name, column)) def assertColumnNotExists(self, engine, table_name, column): self.assertFalse(oslodbutils.column_exists(engine, table_name, column), 'Column %s.%s should not exist' % (table_name, column)) def assertTableNotExists(self, engine, table): self.assertRaises(sqlalchemy.exc.NoSuchTableError, oslodbutils.get_table, engine, table) def assertIndexExists(self, engine, table_name, index): self.assertTrue(oslodbutils.index_exists(engine, table_name, index), 'Index %s on table %s does not exist' % (index, table_name)) def assertIndexNotExists(self, engine, table_name, index): self.assertFalse(oslodbutils.index_exists(engine, table_name, index), 'Index %s on table %s should not exist' % (index, table_name)) def assertIndexMembers(self, engine, table, index, members): # NOTE(johannes): Order of columns can matter. Most SQL databases # can use the leading columns for optimizing queries that don't # include all of the covered columns. self.assertIndexExists(engine, table, index) t = oslodbutils.get_table(engine, table) index_columns = None for idx in t.indexes: if idx.name == index: index_columns = [c.name for c in idx.columns] break self.assertEqual(members, index_columns) # Implementations for ModelsMigrationsSync def db_sync(self, engine): with mock.patch.object(sa_migration, 'get_engine', return_value=engine): sa_migration.db_sync() def get_engine(self, context=None): return self.migrate_engine def get_metadata(self): return models.BASE.metadata def include_object(self, object_, name, type_, reflected, compare_to): if type_ == 'table': # migrate_version is a sqlalchemy-migrate control table and # isn't included in the model. shadow_* are generated from # the model and have their own tests to ensure they don't # drift. if name == 'migrate_version' or name.startswith('shadow_'): return False return True def _skippable_migrations(self): special = [ 216, # Havana 272, # NOOP migration due to revert ] havana_placeholders = list(range(217, 227)) icehouse_placeholders = list(range(235, 244)) juno_placeholders = list(range(255, 265)) kilo_placeholders = list(range(281, 291)) liberty_placeholders = list(range(303, 313)) mitaka_placeholders = list(range(320, 330)) newton_placeholders = list(range(335, 345)) ocata_placeholders = list(range(348, 358)) pike_placeholders = list(range(363, 373)) queens_placeholders = list(range(379, 389)) # We forgot to add the rocky placeholder. We've also switched to 5 # placeholders per cycle since the rate of DB changes has dropped # significantly stein_placeholders = list(range(392, 397)) train_placeholders = list(range(403, 408)) return (special + havana_placeholders + icehouse_placeholders + juno_placeholders + kilo_placeholders + liberty_placeholders + mitaka_placeholders + newton_placeholders + ocata_placeholders + pike_placeholders + queens_placeholders + stein_placeholders + train_placeholders) def migrate_up(self, version, with_data=False): if with_data: check = getattr(self, "_check_%03d" % version, None) if version not in self._skippable_migrations(): self.assertIsNotNone(check, ('DB Migration %i does not have a ' 'test. Please add one!') % version) # NOTE(danms): This is a list of migrations where we allow dropping # things. 
The rules for adding things here are very very specific. # Chances are you don't meet the critera. # Reviewers: DO NOT ALLOW THINGS TO BE ADDED HERE exceptions = [ # 267 enforces non-nullable instance.uuid. This was mostly # a special case because instance.uuid shouldn't be able # to be nullable 267, # 278 removes a FK restriction, so it's an alter operation # that doesn't break existing users 278, # 280 enforces non-null keypair name. This is really not # something we should allow, but it's in the past 280, # 292 drops completely orphaned tables with no users, so # it can be done without affecting anything. 292, # 346 Drops column scheduled_at from instances table since it # is no longer used. The field value is always NULL so # it does not affect anything. 346, ] # Reviewers: DO NOT ALLOW THINGS TO BE ADDED HERE # NOTE(danms): We only started requiring things be additive in # kilo, so ignore all migrations before that point. KILO_START = 265 if version >= KILO_START and version not in exceptions: banned = ['Table', 'Column'] else: banned = None with nova_fixtures.BannedDBSchemaOperations(banned): super(NovaMigrationsCheckers, self).migrate_up(version, with_data) def test_walk_versions(self): self.walk_versions(snake_walk=False, downgrade=False) def _check_227(self, engine, data): table = oslodbutils.get_table(engine, 'project_user_quotas') # Insert fake_quotas with the longest resource name. fake_quotas = {'id': 5, 'project_id': 'fake_project', 'user_id': 'fake_user', 'resource': 'injected_file_content_bytes', 'hard_limit': 10} table.insert().execute(fake_quotas) # Check we can get the longest resource name. quota = table.select(table.c.id == 5).execute().first() self.assertEqual(quota['resource'], 'injected_file_content_bytes') def _check_228(self, engine, data): self.assertColumnExists(engine, 'compute_nodes', 'metrics') compute_nodes = oslodbutils.get_table(engine, 'compute_nodes') self.assertIsInstance(compute_nodes.c.metrics.type, sqlalchemy.types.Text) def _check_229(self, engine, data): self.assertColumnExists(engine, 'compute_nodes', 'extra_resources') compute_nodes = oslodbutils.get_table(engine, 'compute_nodes') self.assertIsInstance(compute_nodes.c.extra_resources.type, sqlalchemy.types.Text) def _check_230(self, engine, data): for table_name in ['instance_actions_events', 'shadow_instance_actions_events']: self.assertColumnExists(engine, table_name, 'host') self.assertColumnExists(engine, table_name, 'details') action_events = oslodbutils.get_table(engine, 'instance_actions_events') self.assertIsInstance(action_events.c.host.type, sqlalchemy.types.String) self.assertIsInstance(action_events.c.details.type, sqlalchemy.types.Text) def _check_231(self, engine, data): self.assertColumnExists(engine, 'instances', 'ephemeral_key_uuid') instances = oslodbutils.get_table(engine, 'instances') self.assertIsInstance(instances.c.ephemeral_key_uuid.type, sqlalchemy.types.String) self.assertTrue(db_utils.check_shadow_table(engine, 'instances')) def _check_232(self, engine, data): table_names = ['compute_node_stats', 'compute_nodes', 'instance_actions', 'instance_actions_events', 'instance_faults', 'migrations'] for table_name in table_names: self.assertTableNotExists(engine, 'dump_' + table_name) def _check_233(self, engine, data): self.assertColumnExists(engine, 'compute_nodes', 'stats') compute_nodes = oslodbutils.get_table(engine, 'compute_nodes') self.assertIsInstance(compute_nodes.c.stats.type, sqlalchemy.types.Text) self.assertRaises(sqlalchemy.exc.NoSuchTableError, 
oslodbutils.get_table, engine, 'compute_node_stats') def _check_234(self, engine, data): self.assertIndexMembers(engine, 'reservations', 'reservations_deleted_expire_idx', ['deleted', 'expire']) def _check_244(self, engine, data): volume_usage_cache = oslodbutils.get_table( engine, 'volume_usage_cache') self.assertEqual(64, volume_usage_cache.c.user_id.type.length) def _pre_upgrade_245(self, engine): # create a fake network networks = oslodbutils.get_table(engine, 'networks') fake_network = {'id': 1} networks.insert().execute(fake_network) def _check_245(self, engine, data): networks = oslodbutils.get_table(engine, 'networks') network = networks.select(networks.c.id == 1).execute().first() # mtu should default to None self.assertIsNone(network.mtu) # dhcp_server should default to None self.assertIsNone(network.dhcp_server) # enable dhcp should default to true self.assertTrue(network.enable_dhcp) # share address should default to false self.assertFalse(network.share_address) def _check_246(self, engine, data): pci_devices = oslodbutils.get_table(engine, 'pci_devices') self.assertEqual(1, len([fk for fk in pci_devices.foreign_keys if fk.parent.name == 'compute_node_id'])) def _check_247(self, engine, data): quota_usages = oslodbutils.get_table(engine, 'quota_usages') self.assertFalse(quota_usages.c.resource.nullable) pci_devices = oslodbutils.get_table(engine, 'pci_devices') self.assertTrue(pci_devices.c.deleted.nullable) self.assertFalse(pci_devices.c.product_id.nullable) self.assertFalse(pci_devices.c.vendor_id.nullable) self.assertFalse(pci_devices.c.dev_type.nullable) def _check_248(self, engine, data): self.assertIndexMembers(engine, 'reservations', 'reservations_deleted_expire_idx', ['deleted', 'expire']) def _check_249(self, engine, data): # Assert that only one index exists that covers columns # instance_uuid and device_name bdm = oslodbutils.get_table(engine, 'block_device_mapping') self.assertEqual(1, len([i for i in bdm.indexes if [c.name for c in i.columns] == ['instance_uuid', 'device_name']])) def _check_250(self, engine, data): self.assertTableNotExists(engine, 'instance_group_metadata') self.assertTableNotExists(engine, 'shadow_instance_group_metadata') def _check_251(self, engine, data): self.assertColumnExists(engine, 'compute_nodes', 'numa_topology') self.assertColumnExists(engine, 'shadow_compute_nodes', 'numa_topology') compute_nodes = oslodbutils.get_table(engine, 'compute_nodes') shadow_compute_nodes = oslodbutils.get_table(engine, 'shadow_compute_nodes') self.assertIsInstance(compute_nodes.c.numa_topology.type, sqlalchemy.types.Text) self.assertIsInstance(shadow_compute_nodes.c.numa_topology.type, sqlalchemy.types.Text) def _check_252(self, engine, data): oslodbutils.get_table(engine, 'instance_extra') oslodbutils.get_table(engine, 'shadow_instance_extra') self.assertIndexMembers(engine, 'instance_extra', 'instance_extra_idx', ['instance_uuid']) def _check_253(self, engine, data): self.assertColumnExists(engine, 'instance_extra', 'pci_requests') self.assertColumnExists( engine, 'shadow_instance_extra', 'pci_requests') instance_extra = oslodbutils.get_table(engine, 'instance_extra') shadow_instance_extra = oslodbutils.get_table(engine, 'shadow_instance_extra') self.assertIsInstance(instance_extra.c.pci_requests.type, sqlalchemy.types.Text) self.assertIsInstance(shadow_instance_extra.c.pci_requests.type, sqlalchemy.types.Text) def _check_254(self, engine, data): self.assertColumnExists(engine, 'pci_devices', 'request_id') self.assertColumnExists( engine, 
'shadow_pci_devices', 'request_id') pci_devices = oslodbutils.get_table(engine, 'pci_devices') shadow_pci_devices = oslodbutils.get_table( engine, 'shadow_pci_devices') self.assertIsInstance(pci_devices.c.request_id.type, sqlalchemy.types.String) self.assertIsInstance(shadow_pci_devices.c.request_id.type, sqlalchemy.types.String) def _check_265(self, engine, data): # Assert that only one index exists that covers columns # host and deleted instances = oslodbutils.get_table(engine, 'instances') self.assertEqual(1, len([i for i in instances.indexes if [c.name for c in i.columns][:2] == ['host', 'deleted']])) # and only one index covers host column iscsi_targets = oslodbutils.get_table(engine, 'iscsi_targets') self.assertEqual(1, len([i for i in iscsi_targets.indexes if [c.name for c in i.columns][:1] == ['host']])) def _check_266(self, engine, data): self.assertColumnExists(engine, 'tags', 'resource_id') self.assertColumnExists(engine, 'tags', 'tag') table = oslodbutils.get_table(engine, 'tags') self.assertIsInstance(table.c.resource_id.type, sqlalchemy.types.String) self.assertIsInstance(table.c.tag.type, sqlalchemy.types.String) def _pre_upgrade_267(self, engine): # Create a fixed_ips row with a null instance_uuid (if not already # there) to make sure that's not deleted. fixed_ips = oslodbutils.get_table(engine, 'fixed_ips') fake_fixed_ip = {'id': 1} fixed_ips.insert().execute(fake_fixed_ip) # Create an instance record with a valid (non-null) UUID so we make # sure we don't do something stupid and delete valid records. instances = oslodbutils.get_table(engine, 'instances') fake_instance = {'id': 1, 'uuid': 'fake-non-null-uuid'} instances.insert().execute(fake_instance) # Add a null instance_uuid entry for the volumes table # since it doesn't have a foreign key back to the instances table. volumes = oslodbutils.get_table(engine, 'volumes') fake_volume = {'id': '9c3c317e-24db-4d57-9a6f-96e6d477c1da'} volumes.insert().execute(fake_volume) def _check_267(self, engine, data): # Make sure the column is non-nullable and the UC exists. fixed_ips = oslodbutils.get_table(engine, 'fixed_ips') self.assertTrue(fixed_ips.c.instance_uuid.nullable) fixed_ip = fixed_ips.select(fixed_ips.c.id == 1).execute().first() self.assertIsNone(fixed_ip.instance_uuid) instances = oslodbutils.get_table(engine, 'instances') self.assertFalse(instances.c.uuid.nullable) inspector = reflection.Inspector.from_engine(engine) constraints = inspector.get_unique_constraints('instances') constraint_names = [constraint['name'] for constraint in constraints] self.assertIn('uniq_instances0uuid', constraint_names) # Make sure the instances record with the valid uuid is still there. instance = instances.select(instances.c.id == 1).execute().first() self.assertIsNotNone(instance) # Check that the null entry in the volumes table is still there since # we skipped tables that don't have FK's back to the instances table. volumes = oslodbutils.get_table(engine, 'volumes') self.assertTrue(volumes.c.instance_uuid.nullable) volume = fixed_ips.select( volumes.c.id == '9c3c317e-24db-4d57-9a6f-96e6d477c1da' ).execute().first() self.assertIsNone(volume.instance_uuid) def test_migration_267(self): # This is separate from test_walk_versions so we can test the case # where there are non-null instance_uuid entries in the database which # cause the 267 migration to fail. 
engine = self.migrate_engine self.migration_api.version_control( engine, self.REPOSITORY, self.INIT_VERSION) self.migration_api.upgrade(engine, self.REPOSITORY, 266) # Create a consoles record with a null instance_uuid so # we can test that the upgrade fails if that entry is found. # NOTE(mriedem): We use the consoles table since that's the only table # created in the 216 migration with a ForeignKey created on the # instance_uuid table for sqlite. consoles = oslodbutils.get_table(engine, 'consoles') fake_console = {'id': 1} consoles.insert().execute(fake_console) # NOTE(mriedem): We handle the 267 migration where we expect to # hit a ValidationError on the consoles table to have # a null instance_uuid entry ex = self.assertRaises(exception.ValidationError, self.migration_api.upgrade, engine, self.REPOSITORY, 267) self.assertIn("There are 1 records in the " "'consoles' table where the uuid or " "instance_uuid column is NULL.", ex.kwargs['detail']) # Remove the consoles entry with the null instance_uuid column. rows = consoles.delete().where( consoles.c['instance_uuid'] == null()).execute().rowcount self.assertEqual(1, rows) # Now run the 267 upgrade again. self.migration_api.upgrade(engine, self.REPOSITORY, 267) # Make sure the consoles entry with the null instance_uuid # was deleted. console = consoles.select(consoles.c.id == 1).execute().first() self.assertIsNone(console) def _check_268(self, engine, data): # We can only assert that the col exists, not the unique constraint # as the engine is running sqlite self.assertColumnExists(engine, 'compute_nodes', 'host') self.assertColumnExists(engine, 'shadow_compute_nodes', 'host') compute_nodes = oslodbutils.get_table(engine, 'compute_nodes') shadow_compute_nodes = oslodbutils.get_table( engine, 'shadow_compute_nodes') self.assertIsInstance(compute_nodes.c.host.type, sqlalchemy.types.String) self.assertIsInstance(shadow_compute_nodes.c.host.type, sqlalchemy.types.String) def _check_269(self, engine, data): self.assertColumnExists(engine, 'pci_devices', 'numa_node') self.assertColumnExists(engine, 'shadow_pci_devices', 'numa_node') pci_devices = oslodbutils.get_table(engine, 'pci_devices') shadow_pci_devices = oslodbutils.get_table( engine, 'shadow_pci_devices') self.assertIsInstance(pci_devices.c.numa_node.type, sqlalchemy.types.Integer) self.assertTrue(pci_devices.c.numa_node.nullable) self.assertIsInstance(shadow_pci_devices.c.numa_node.type, sqlalchemy.types.Integer) self.assertTrue(shadow_pci_devices.c.numa_node.nullable) def _check_270(self, engine, data): self.assertColumnExists(engine, 'instance_extra', 'flavor') self.assertColumnExists(engine, 'shadow_instance_extra', 'flavor') instance_extra = oslodbutils.get_table(engine, 'instance_extra') shadow_instance_extra = oslodbutils.get_table( engine, 'shadow_instance_extra') self.assertIsInstance(instance_extra.c.flavor.type, sqlalchemy.types.Text) self.assertIsInstance(shadow_instance_extra.c.flavor.type, sqlalchemy.types.Text) def _check_271(self, engine, data): self.assertIndexMembers(engine, 'block_device_mapping', 'snapshot_id', ['snapshot_id']) self.assertIndexMembers(engine, 'block_device_mapping', 'volume_id', ['volume_id']) self.assertIndexMembers(engine, 'dns_domains', 'dns_domains_project_id_idx', ['project_id']) self.assertIndexMembers(engine, 'fixed_ips', 'network_id', ['network_id']) self.assertIndexMembers(engine, 'fixed_ips', 'fixed_ips_instance_uuid_fkey', ['instance_uuid']) self.assertIndexMembers(engine, 'fixed_ips', 'fixed_ips_virtual_interface_id_fkey', 
['virtual_interface_id']) self.assertIndexMembers(engine, 'floating_ips', 'fixed_ip_id', ['fixed_ip_id']) self.assertIndexMembers(engine, 'iscsi_targets', 'iscsi_targets_volume_id_fkey', ['volume_id']) self.assertIndexMembers(engine, 'virtual_interfaces', 'virtual_interfaces_network_id_idx', ['network_id']) self.assertIndexMembers(engine, 'virtual_interfaces', 'virtual_interfaces_instance_uuid_fkey', ['instance_uuid']) # Removed on MySQL, never existed on other databases self.assertIndexNotExists(engine, 'dns_domains', 'project_id') self.assertIndexNotExists(engine, 'virtual_interfaces', 'network_id') def _pre_upgrade_273(self, engine): if engine.name != 'sqlite': return # Drop a variety of unique constraints to ensure that the script # properly readds them back for table_name, constraint_name in [ ('compute_nodes', 'uniq_compute_nodes0' 'host0hypervisor_hostname'), ('fixed_ips', 'uniq_fixed_ips0address0deleted'), ('instance_info_caches', 'uniq_instance_info_caches0' 'instance_uuid'), ('instance_type_projects', 'uniq_instance_type_projects0' 'instance_type_id0project_id0' 'deleted'), ('pci_devices', 'uniq_pci_devices0compute_node_id0' 'address0deleted'), ('virtual_interfaces', 'uniq_virtual_interfaces0' 'address0deleted')]: table = oslodbutils.get_table(engine, table_name) constraints = [c for c in table.constraints if c.name == constraint_name] for cons in constraints: # Need to use sqlalchemy-migrate UniqueConstraint cons = UniqueConstraint(*[c.name for c in cons.columns], name=cons.name, table=table) cons.drop() def _check_273(self, engine, data): for src_table, src_column, dst_table, dst_column in [ ('fixed_ips', 'instance_uuid', 'instances', 'uuid'), ('block_device_mapping', 'instance_uuid', 'instances', 'uuid'), ('instance_info_caches', 'instance_uuid', 'instances', 'uuid'), ('instance_metadata', 'instance_uuid', 'instances', 'uuid'), ('instance_system_metadata', 'instance_uuid', 'instances', 'uuid'), ('instance_type_projects', 'instance_type_id', 'instance_types', 'id'), ('iscsi_targets', 'volume_id', 'volumes', 'id'), ('reservations', 'usage_id', 'quota_usages', 'id'), ('security_group_instance_association', 'instance_uuid', 'instances', 'uuid'), ('security_group_instance_association', 'security_group_id', 'security_groups', 'id'), ('virtual_interfaces', 'instance_uuid', 'instances', 'uuid'), ('compute_nodes', 'service_id', 'services', 'id'), ('instance_actions', 'instance_uuid', 'instances', 'uuid'), ('instance_faults', 'instance_uuid', 'instances', 'uuid'), ('migrations', 'instance_uuid', 'instances', 'uuid')]: src_table = oslodbutils.get_table(engine, src_table) fkeys = {fk.parent.name: fk.column for fk in src_table.foreign_keys} self.assertIn(src_column, fkeys) self.assertEqual(fkeys[src_column].table.name, dst_table) self.assertEqual(fkeys[src_column].name, dst_column) def _check_274(self, engine, data): self.assertIndexMembers(engine, 'instances', 'instances_project_id_deleted_idx', ['project_id', 'deleted']) self.assertIndexNotExists(engine, 'instances', 'project_id') def _pre_upgrade_275(self, engine): # Create a keypair record so we can test that the upgrade will set # 'ssh' as default value in the new column for the previous keypair # entries. 
key_pairs = oslodbutils.get_table(engine, 'key_pairs') fake_keypair = {'name': 'test-migr'} key_pairs.insert().execute(fake_keypair) def _check_275(self, engine, data): self.assertColumnExists(engine, 'key_pairs', 'type') self.assertColumnExists(engine, 'shadow_key_pairs', 'type') key_pairs = oslodbutils.get_table(engine, 'key_pairs') shadow_key_pairs = oslodbutils.get_table(engine, 'shadow_key_pairs') self.assertIsInstance(key_pairs.c.type.type, sqlalchemy.types.String) self.assertIsInstance(shadow_key_pairs.c.type.type, sqlalchemy.types.String) # Make sure the keypair entry will have the type 'ssh' key_pairs = oslodbutils.get_table(engine, 'key_pairs') keypair = key_pairs.select( key_pairs.c.name == 'test-migr').execute().first() self.assertEqual('ssh', keypair.type) def _check_276(self, engine, data): self.assertColumnExists(engine, 'instance_extra', 'vcpu_model') self.assertColumnExists(engine, 'shadow_instance_extra', 'vcpu_model') instance_extra = oslodbutils.get_table(engine, 'instance_extra') shadow_instance_extra = oslodbutils.get_table( engine, 'shadow_instance_extra') self.assertIsInstance(instance_extra.c.vcpu_model.type, sqlalchemy.types.Text) self.assertIsInstance(shadow_instance_extra.c.vcpu_model.type, sqlalchemy.types.Text) def _check_277(self, engine, data): self.assertIndexMembers(engine, 'fixed_ips', 'fixed_ips_deleted_allocated_updated_at_idx', ['deleted', 'allocated', 'updated_at']) def _check_278(self, engine, data): compute_nodes = oslodbutils.get_table(engine, 'compute_nodes') self.assertEqual(0, len([fk for fk in compute_nodes.foreign_keys if fk.parent.name == 'service_id'])) self.assertTrue(compute_nodes.c.service_id.nullable) def _check_279(self, engine, data): inspector = reflection.Inspector.from_engine(engine) constraints = inspector.get_unique_constraints('compute_nodes') constraint_names = [constraint['name'] for constraint in constraints] self.assertNotIn('uniq_compute_nodes0host0hypervisor_hostname', constraint_names) self.assertIn('uniq_compute_nodes0host0hypervisor_hostname0deleted', constraint_names) def _check_280(self, engine, data): key_pairs = oslodbutils.get_table(engine, 'key_pairs') self.assertFalse(key_pairs.c.name.nullable) def _check_291(self, engine, data): # NOTE(danms): This is a dummy migration that just does a consistency # check pass def _check_292(self, engine, data): self.assertTableNotExists(engine, 'iscsi_targets') self.assertTableNotExists(engine, 'volumes') self.assertTableNotExists(engine, 'shadow_iscsi_targets') self.assertTableNotExists(engine, 'shadow_volumes') def _pre_upgrade_293(self, engine): migrations = oslodbutils.get_table(engine, 'migrations') fake_migration = {} migrations.insert().execute(fake_migration) def _check_293(self, engine, data): self.assertColumnExists(engine, 'migrations', 'migration_type') self.assertColumnExists(engine, 'shadow_migrations', 'migration_type') migrations = oslodbutils.get_table(engine, 'migrations') fake_migration = migrations.select().execute().first() self.assertIsNone(fake_migration.migration_type) self.assertFalse(fake_migration.hidden) def _check_294(self, engine, data): self.assertColumnExists(engine, 'services', 'last_seen_up') self.assertColumnExists(engine, 'shadow_services', 'last_seen_up') services = oslodbutils.get_table(engine, 'services') shadow_services = oslodbutils.get_table( engine, 'shadow_services') self.assertIsInstance(services.c.last_seen_up.type, sqlalchemy.types.DateTime) self.assertIsInstance(shadow_services.c.last_seen_up.type, sqlalchemy.types.DateTime) def 
_pre_upgrade_295(self, engine): self.assertIndexNotExists(engine, 'virtual_interfaces', 'virtual_interfaces_uuid_idx') def _check_295(self, engine, data): self.assertIndexMembers(engine, 'virtual_interfaces', 'virtual_interfaces_uuid_idx', ['uuid']) def _check_296(self, engine, data): pass def _check_297(self, engine, data): self.assertColumnExists(engine, 'services', 'forced_down') def _check_298(self, engine, data): # NOTE(nic): This is a MySQL-specific migration, and is a no-op from # the point-of-view of unit tests, since they use SQLite pass def filter_metadata_diff(self, diff): # Overriding the parent method to decide on certain attributes # that maybe present in the DB but not in the models.py def removed_column(element): # Define a whitelist of columns that would be removed from the # DB at a later release. # NOTE(Luyao) The vpmems column was added to the schema in train, # and removed from the model in train. column_whitelist = {'instances': ['internal_id'], 'instance_extra': ['vpmems']} if element[0] != 'remove_column': return False table_name, column = element[2], element[3] return (table_name in column_whitelist and column.name in column_whitelist[table_name]) return [ element for element in diff if not removed_column(element) ] def _check_299(self, engine, data): self.assertColumnExists(engine, 'services', 'version') def _check_300(self, engine, data): self.assertColumnExists(engine, 'instance_extra', 'migration_context') def _check_301(self, engine, data): self.assertColumnExists(engine, 'compute_nodes', 'cpu_allocation_ratio') self.assertColumnExists(engine, 'compute_nodes', 'ram_allocation_ratio') def _check_302(self, engine, data): self.assertIndexMembers(engine, 'instance_system_metadata', 'instance_uuid', ['instance_uuid']) def _check_313(self, engine, data): self.assertColumnExists(engine, 'pci_devices', 'parent_addr') self.assertColumnExists(engine, 'shadow_pci_devices', 'parent_addr') pci_devices = oslodbutils.get_table(engine, 'pci_devices') shadow_pci_devices = oslodbutils.get_table( engine, 'shadow_pci_devices') self.assertIsInstance(pci_devices.c.parent_addr.type, sqlalchemy.types.String) self.assertTrue(pci_devices.c.parent_addr.nullable) self.assertIsInstance(shadow_pci_devices.c.parent_addr.type, sqlalchemy.types.String) self.assertTrue(shadow_pci_devices.c.parent_addr.nullable) self.assertIndexMembers(engine, 'pci_devices', 'ix_pci_devices_compute_node_id_parent_addr_deleted', ['compute_node_id', 'parent_addr', 'deleted']) def _check_314(self, engine, data): self.assertColumnExists(engine, 'inventories', 'resource_class_id') self.assertColumnExists(engine, 'allocations', 'resource_class_id') self.assertColumnExists(engine, 'resource_providers', 'id') self.assertColumnExists(engine, 'resource_providers', 'uuid') self.assertColumnExists(engine, 'compute_nodes', 'uuid') self.assertColumnExists(engine, 'shadow_compute_nodes', 'uuid') self.assertIndexMembers(engine, 'allocations', 'allocations_resource_provider_class_id_idx', ['resource_provider_id', 'resource_class_id']) def _check_315(self, engine, data): self.assertColumnExists(engine, 'migrations', 'memory_total') self.assertColumnExists(engine, 'migrations', 'memory_processed') self.assertColumnExists(engine, 'migrations', 'memory_remaining') self.assertColumnExists(engine, 'migrations', 'disk_total') self.assertColumnExists(engine, 'migrations', 'disk_processed') self.assertColumnExists(engine, 'migrations', 'disk_remaining') def _check_316(self, engine, data): self.assertColumnExists(engine, 'compute_nodes', 
'disk_allocation_ratio') def _check_317(self, engine, data): self.assertColumnExists(engine, 'aggregates', 'uuid') self.assertColumnExists(engine, 'shadow_aggregates', 'uuid') def _check_318(self, engine, data): self.assertColumnExists(engine, 'resource_providers', 'name') self.assertColumnExists(engine, 'resource_providers', 'generation') self.assertColumnExists(engine, 'resource_providers', 'can_host') self.assertIndexMembers(engine, 'resource_providers', 'resource_providers_name_idx', ['name']) self.assertColumnExists(engine, 'resource_provider_aggregates', 'resource_provider_id') self.assertColumnExists(engine, 'resource_provider_aggregates', 'aggregate_id') self.assertIndexMembers(engine, 'resource_provider_aggregates', 'resource_provider_aggregates_aggregate_id_idx', ['aggregate_id']) self.assertIndexMembers(engine, 'resource_provider_aggregates', 'resource_provider_aggregates_aggregate_id_idx', ['aggregate_id']) self.assertIndexMembers(engine, 'inventories', 'inventories_resource_provider_resource_class_idx', ['resource_provider_id', 'resource_class_id']) def _check_319(self, engine, data): self.assertIndexMembers(engine, 'instances', 'instances_deleted_created_at_idx', ['deleted', 'created_at']) def _check_330(self, engine, data): # Just a sanity-check migration pass def _check_331(self, engine, data): self.assertColumnExists(engine, 'virtual_interfaces', 'tag') self.assertColumnExists(engine, 'block_device_mapping', 'tag') def _check_332(self, engine, data): self.assertColumnExists(engine, 'instance_extra', 'keypairs') def _check_333(self, engine, data): self.assertColumnExists(engine, 'console_auth_tokens', 'id') self.assertColumnExists(engine, 'console_auth_tokens', 'token_hash') self.assertColumnExists(engine, 'console_auth_tokens', 'console_type') self.assertColumnExists(engine, 'console_auth_tokens', 'host') self.assertColumnExists(engine, 'console_auth_tokens', 'port') self.assertColumnExists(engine, 'console_auth_tokens', 'internal_access_path') self.assertColumnExists(engine, 'console_auth_tokens', 'instance_uuid') self.assertColumnExists(engine, 'console_auth_tokens', 'expires') self.assertIndexMembers(engine, 'console_auth_tokens', 'console_auth_tokens_instance_uuid_idx', ['instance_uuid']) self.assertIndexMembers(engine, 'console_auth_tokens', 'console_auth_tokens_host_expires_idx', ['host', 'expires']) self.assertIndexMembers(engine, 'console_auth_tokens', 'console_auth_tokens_token_hash_idx', ['token_hash']) def _check_334(self, engine, data): self.assertColumnExists(engine, 'instance_extra', 'device_metadata') self.assertColumnExists(engine, 'shadow_instance_extra', 'device_metadata') def _check_345(self, engine, data): # NOTE(danms): Just a sanity-check migration pass def _check_346(self, engine, data): self.assertColumnNotExists(engine, 'instances', 'scheduled_at') self.assertColumnNotExists(engine, 'shadow_instances', 'scheduled_at') def _check_347(self, engine, data): self.assertIndexMembers(engine, 'instances', 'instances_project_id_idx', ['project_id']) self.assertIndexMembers(engine, 'instances', 'instances_updated_at_project_id_idx', ['updated_at', 'project_id']) def _check_358(self, engine, data): self.assertColumnExists(engine, 'block_device_mapping', 'attachment_id') def _check_359(self, engine, data): self.assertColumnExists(engine, 'services', 'uuid') self.assertIndexMembers(engine, 'services', 'services_uuid_idx', ['uuid']) def _check_360(self, engine, data): self.assertColumnExists(engine, 'compute_nodes', 'mapped') self.assertColumnExists(engine, 
'shadow_compute_nodes', 'mapped') def _check_361(self, engine, data): self.assertIndexMembers(engine, 'compute_nodes', 'compute_nodes_uuid_idx', ['uuid']) def _check_362(self, engine, data): self.assertColumnExists(engine, 'pci_devices', 'uuid') def _check_373(self, engine, data): self.assertColumnExists(engine, 'migrations', 'uuid') def _check_374(self, engine, data): self.assertColumnExists(engine, 'block_device_mapping', 'uuid') self.assertColumnExists(engine, 'shadow_block_device_mapping', 'uuid') inspector = reflection.Inspector.from_engine(engine) constraints = inspector.get_unique_constraints('block_device_mapping') constraint_names = [constraint['name'] for constraint in constraints] self.assertIn('uniq_block_device_mapping0uuid', constraint_names) def _check_375(self, engine, data): self.assertColumnExists(engine, 'console_auth_tokens', 'access_url_base') def _check_376(self, engine, data): self.assertIndexMembers( engine, 'console_auth_tokens', 'console_auth_tokens_token_hash_instance_uuid_idx', ['token_hash', 'instance_uuid']) def _check_377(self, engine, data): self.assertIndexMembers(engine, 'migrations', 'migrations_updated_at_idx', ['updated_at']) def _check_378(self, engine, data): self.assertIndexMembers( engine, 'instance_actions', 'instance_actions_instance_uuid_updated_at_idx', ['instance_uuid', 'updated_at']) def _check_389(self, engine, data): self.assertIndexMembers(engine, 'aggregate_metadata', 'aggregate_metadata_value_idx', ['value']) def _check_390(self, engine, data): self.assertColumnExists(engine, 'instance_extra', 'trusted_certs') self.assertColumnExists(engine, 'shadow_instance_extra', 'trusted_certs') def _check_391(self, engine, data): self.assertColumnExists(engine, 'block_device_mapping', 'volume_type') self.assertColumnExists(engine, 'shadow_block_device_mapping', 'volume_type') def _check_397(self, engine, data): for prefix in ('', 'shadow_'): self.assertColumnExists( engine, '%smigrations' % prefix, 'cross_cell_move') def _check_398(self, engine, data): self.assertColumnExists(engine, 'instance_extra', 'vpmems') self.assertColumnExists(engine, 'shadow_instance_extra', 'vpmems') def _check_399(self, engine, data): for prefix in ('', 'shadow_'): self.assertColumnExists( engine, '%sinstances' % prefix, 'hidden') def _check_400(self, engine, data): # NOTE(mriedem): This is a dummy migration that just does a consistency # check. The actual test for 400 is in TestServicesUUIDCheck. pass def _check_401(self, engine, data): for prefix in ('', 'shadow_'): self.assertColumnExists( engine, '%smigrations' % prefix, 'user_id') self.assertColumnExists( engine, '%smigrations' % prefix, 'project_id') def _check_402(self, engine, data): self.assertColumnExists(engine, 'instance_extra', 'resources') self.assertColumnExists(engine, 'shadow_instance_extra', 'resources') class TestNovaMigrationsSQLite(NovaMigrationsCheckers, test_fixtures.OpportunisticDBTestMixin, testtools.TestCase): pass class TestNovaMigrationsMySQL(NovaMigrationsCheckers, test_fixtures.OpportunisticDBTestMixin, testtools.TestCase): FIXTURE = test_fixtures.MySQLOpportunisticFixture def test_innodb_tables(self): with mock.patch.object(sa_migration, 'get_engine', return_value=self.migrate_engine): sa_migration.db_sync() total = self.migrate_engine.execute( "SELECT count(*) " "FROM information_schema.TABLES " "WHERE TABLE_SCHEMA = '%(database)s'" % {'database': self.migrate_engine.url.database}) self.assertGreater(total.scalar(), 0, "No tables found. 
Wrong schema?") noninnodb = self.migrate_engine.execute( "SELECT count(*) " "FROM information_schema.TABLES " "WHERE TABLE_SCHEMA='%(database)s' " "AND ENGINE != 'InnoDB' " "AND TABLE_NAME != 'migrate_version'" % {'database': self.migrate_engine.url.database}) count = noninnodb.scalar() self.assertEqual(count, 0, "%d non InnoDB tables created" % count) class TestNovaMigrationsPostgreSQL(NovaMigrationsCheckers, test_fixtures.OpportunisticDBTestMixin, testtools.TestCase): FIXTURE = test_fixtures.PostgresqlOpportunisticFixture class ProjectTestCase(test.NoDBTestCase): def test_no_migrations_have_downgrade(self): topdir = os.path.normpath(os.path.dirname(__file__) + '/../../../') # Walk both the nova_api and nova (cell) database migrations. includes_downgrade = [] for subdir in ('api_migrations', ''): py_glob = os.path.join(topdir, "db", "sqlalchemy", subdir, "migrate_repo", "versions", "*.py") for path in glob.iglob(py_glob): has_upgrade = False has_downgrade = False with open(path, "r") as f: for line in f: if 'def upgrade(' in line: has_upgrade = True if 'def downgrade(' in line: has_downgrade = True if has_upgrade and has_downgrade: fname = os.path.basename(path) includes_downgrade.append(fname) helpful_msg = ("The following migrations have a downgrade " "which is not supported:" "\n\t%s" % '\n\t'.join(sorted(includes_downgrade))) self.assertFalse(includes_downgrade, helpful_msg) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/db/test_models.py0000664000175000017500000000565000000000000021141 0ustar00zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.db.sqlalchemy import api_models from nova.db.sqlalchemy import models from nova import test class TestSoftDeletesDeprecated(test.NoDBTestCase): def test_no_new_soft_deletes(self): whitelist = [ 'agent_builds', 'aggregate_hosts', 'aggregate_metadata', 'aggregates', 'block_device_mapping', 'bw_usage_cache', 'cells', 'certificates', 'compute_nodes', 'console_pools', 'consoles', 'dns_domains', 'fixed_ips', 'floating_ips', 'instance_actions', 'instance_actions_events', 'instance_extra', 'instance_faults', 'instance_group_member', 'instance_group_policy', 'instance_groups', 'instance_id_mappings', 'instance_info_caches', 'instance_metadata', 'instance_system_metadata', 'instance_type_extra_specs', 'instance_type_projects', 'instance_types', 'instances', 'key_pairs', 'migrations', 'networks', 'pci_devices', 'project_user_quotas', 'provider_fw_rules', 'quota_classes', 'quota_usages', 'quotas', 'reservations', 's3_images', 'security_group_default_rules', 'security_group_instance_association', 'security_group_rules', 'security_groups', 'services', 'snapshot_id_mappings', 'snapshots', 'task_log', 'virtual_interfaces', 'volume_id_mappings', 'volume_usage_cache' ] # Soft deletes are deprecated. Whitelist the tables that currently # allow soft deletes. 
No new tables should be added to this whitelist. tables = [] for base in [models.BASE, api_models.API_BASE]: for table_name, table in base.metadata.tables.items(): columns = [column.name for column in table.columns] if 'deleted' in columns or 'deleted_at' in columns: tables.append(table_name) self.assertEqual(whitelist, sorted(tables)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/db/test_sqlalchemy_migration.py0000664000175000017500000005607400000000000024077 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import importlib from migrate import exceptions as versioning_exceptions from migrate import UniqueConstraint from migrate.versioning import api as versioning_api import mock from oslo_db.sqlalchemy import utils as db_utils from oslo_utils.fixture import uuidsentinel import six import sqlalchemy from nova import context from nova.db.sqlalchemy import api as db_api from nova.db.sqlalchemy import migration from nova.db.sqlalchemy import models from nova import exception from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures class TestNullInstanceUuidScanDB(test.TestCase): # NOTE(mriedem): Copied from the 267 database migration. def downgrade(self, migrate_engine): UniqueConstraint('uuid', table=db_utils.get_table(migrate_engine, 'instances'), name='uniq_instances0uuid').drop() for table_name in ('instances', 'shadow_instances'): table = db_utils.get_table(migrate_engine, table_name) table.columns.uuid.alter(nullable=True) def setUp(self): super(TestNullInstanceUuidScanDB, self).setUp() self.engine = db_api.get_engine() # When this test runs, we've already run the schema migration to make # instances.uuid non-nullable, so we have to alter the table here # so we can test against a real database. self.downgrade(self.engine) # Now create fake entries in the fixed_ips, consoles and # instances table where (instance_)uuid is None for testing. for table_name in ('fixed_ips', 'instances', 'consoles'): table = db_utils.get_table(self.engine, table_name) fake_record = {'id': 1} table.insert().execute(fake_record) def test_db_null_instance_uuid_scan_readonly(self): results = migration.db_null_instance_uuid_scan(delete=False) self.assertEqual(1, results.get('instances')) self.assertEqual(1, results.get('consoles')) # The fixed_ips table should be ignored. self.assertNotIn('fixed_ips', results) # Now pick a random table with an instance_uuid column and show it's # in the results but with 0 hits. self.assertEqual(0, results.get('instance_info_caches')) # Make sure nothing was deleted. 
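        # The read-only scan only *reports* per-table counts of rows with a
        # NULL (instance_)uuid; it must not delete anything.  The loop below
        # therefore re-reads the fake row in each seeded table, including
        # fixed_ips, which the scan skips entirely but whose row must still
        # be present.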
for table_name in ('fixed_ips', 'instances', 'consoles'): table = db_utils.get_table(self.engine, table_name) record = table.select(table.c.id == 1).execute().first() self.assertIsNotNone(record) def test_db_null_instance_uuid_scan_delete(self): results = migration.db_null_instance_uuid_scan(delete=True) self.assertEqual(1, results.get('instances')) self.assertEqual(1, results.get('consoles')) # The fixed_ips table should be ignored. self.assertNotIn('fixed_ips', results) # Now pick a random table with an instance_uuid column and show it's # in the results but with 0 hits. self.assertEqual(0, results.get('instance_info_caches')) # Make sure fixed_ips wasn't touched, but instances and instance_faults # records were deleted. fixed_ips = db_utils.get_table(self.engine, 'fixed_ips') record = fixed_ips.select(fixed_ips.c.id == 1).execute().first() self.assertIsNotNone(record) consoles = db_utils.get_table(self.engine, 'consoles') record = consoles.select(consoles.c.id == 1).execute().first() self.assertIsNone(record) instances = db_utils.get_table(self.engine, 'instances') record = instances.select(instances.c.id == 1).execute().first() self.assertIsNone(record) @mock.patch.object(migration, 'db_version', return_value=2) @mock.patch.object(migration, '_find_migrate_repo', return_value='repo') @mock.patch.object(versioning_api, 'upgrade') @mock.patch.object(versioning_api, 'downgrade') @mock.patch.object(migration, 'get_engine', return_value='engine') class TestDbSync(test.NoDBTestCase): def test_version_none(self, mock_get_engine, mock_downgrade, mock_upgrade, mock_find_repo, mock_version): database = 'fake' migration.db_sync(database=database) mock_version.assert_called_once_with(database, context=None) mock_find_repo.assert_called_once_with(database) mock_get_engine.assert_called_once_with(database, context=None) mock_upgrade.assert_called_once_with('engine', 'repo', None) self.assertFalse(mock_downgrade.called) def test_downgrade(self, mock_get_engine, mock_downgrade, mock_upgrade, mock_find_repo, mock_version): database = 'fake' migration.db_sync(1, database=database) mock_version.assert_called_once_with(database, context=None) mock_find_repo.assert_called_once_with(database) mock_get_engine.assert_called_once_with(database, context=None) mock_downgrade.assert_called_once_with('engine', 'repo', 1) self.assertFalse(mock_upgrade.called) @mock.patch.object(migration, '_find_migrate_repo', return_value='repo') @mock.patch.object(versioning_api, 'db_version') @mock.patch.object(migration, 'get_engine') class TestDbVersion(test.NoDBTestCase): def test_db_version(self, mock_get_engine, mock_db_version, mock_find_repo): database = 'fake' mock_get_engine.return_value = 'engine' migration.db_version(database) mock_find_repo.assert_called_once_with(database) mock_db_version.assert_called_once_with('engine', 'repo') def test_not_controlled(self, mock_get_engine, mock_db_version, mock_find_repo): database = 'api' mock_get_engine.side_effect = ['engine', 'engine', 'engine'] exc = versioning_exceptions.DatabaseNotControlledError() mock_db_version.side_effect = [exc, ''] metadata = mock.MagicMock() metadata.tables.return_value = [] with mock.patch.object(sqlalchemy, 'MetaData', metadata), mock.patch.object(migration, 'db_version_control') as mock_version_control: migration.db_version(database) mock_version_control.assert_called_once_with(0, database, context=None) db_version_calls = [mock.call('engine', 'repo')] * 2 self.assertEqual(db_version_calls, mock_db_version.call_args_list) engine_calls = 
[mock.call(database, context=None)] * 3 self.assertEqual(engine_calls, mock_get_engine.call_args_list) def test_db_version_init_race(self, mock_get_engine, mock_db_version, mock_find_repo): # This test exercises bug 1804652 by causing # versioning_api.version_contro() to raise an unhandleable error the # first time it is called. database = 'api' mock_get_engine.return_value = 'engine' exc = versioning_exceptions.DatabaseNotControlledError() mock_db_version.side_effect = [exc, ''] metadata = mock.MagicMock() metadata.tables.return_value = [] with mock.patch.object(sqlalchemy, 'MetaData', metadata), mock.patch.object(migration, 'db_version_control') as mock_version_control: # db_version_control raises an unhandleable error because we were # racing to initialise with another process. mock_version_control.side_effect = test.TestingException migration.db_version(database) mock_version_control.assert_called_once_with(0, database, context=None) db_version_calls = [mock.call('engine', 'repo')] * 2 self.assertEqual(db_version_calls, mock_db_version.call_args_list) engine_calls = [mock.call(database, context=None)] * 3 self.assertEqual(engine_calls, mock_get_engine.call_args_list) def test_db_version_raise_on_error(self, mock_get_engine, mock_db_version, mock_find_repo): # This test asserts that we will still raise a persistent error after # working around bug 1804652. database = 'api' mock_get_engine.return_value = 'engine' mock_db_version.side_effect = \ versioning_exceptions.DatabaseNotControlledError metadata = mock.MagicMock() metadata.tables.return_value = [] with mock.patch.object(sqlalchemy, 'MetaData', metadata), mock.patch.object(migration, 'db_version_control') as mock_version_control: # db_version_control raises an unhandleable error because we were # racing to initialise with another process. mock_version_control.side_effect = test.TestingException self.assertRaises(test.TestingException, migration.db_version, database) @mock.patch.object(migration, '_find_migrate_repo', return_value='repo') @mock.patch.object(migration, 'get_engine', return_value='engine') @mock.patch.object(versioning_api, 'version_control') class TestDbVersionControl(test.NoDBTestCase): def test_version_control(self, mock_version_control, mock_get_engine, mock_find_repo): database = 'fake' migration.db_version_control(database=database) mock_find_repo.assert_called_once_with(database) mock_version_control.assert_called_once_with('engine', 'repo', None) class TestGetEngine(test.NoDBTestCase): def test_get_main_engine(self): with mock.patch.object(db_api, 'get_engine', return_value='engine') as mock_get_engine: engine = migration.get_engine() self.assertEqual('engine', engine) mock_get_engine.assert_called_once_with(context=None) def test_get_api_engine(self): with mock.patch.object(db_api, 'get_api_engine', return_value='api_engine') as mock_get_engine: engine = migration.get_engine('api') self.assertEqual('api_engine', engine) mock_get_engine.assert_called_once_with() class TestFlavorCheck(test.TestCase): def setUp(self): super(TestFlavorCheck, self).setUp() self.context = context.get_admin_context() self.migration = importlib.import_module( 'nova.db.sqlalchemy.migrate_repo.versions.' 
'291_enforce_flavors_migrated') self.engine = db_api.get_engine() def test_upgrade_clean(self): inst = objects.Instance(context=self.context, uuid=uuidsentinel.fake, user_id=self.context.user_id, project_id=self.context.project_id, system_metadata={'foo': 'bar'}) inst.create() self.migration.upgrade(self.engine) def test_upgrade_dirty(self): inst = objects.Instance(context=self.context, uuid=uuidsentinel.fake, user_id=self.context.user_id, project_id=self.context.project_id, system_metadata={'foo': 'bar', 'instance_type_id': 'foo'}) inst.create() self.assertRaises(exception.ValidationError, self.migration.upgrade, self.engine) def test_upgrade_flavor_deleted_instances(self): inst = objects.Instance(context=self.context, uuid=uuidsentinel.fake, user_id=self.context.user_id, project_id=self.context.project_id, system_metadata={'foo': 'bar', 'instance_type_id': 'foo'}) inst.create() inst.destroy() self.migration.upgrade(self.engine) class TestNewtonCheck(test.TestCase): def setUp(self): super(TestNewtonCheck, self).setUp() self.useFixture(nova_fixtures.DatabaseAtVersion(329)) self.context = context.get_admin_context() self.migration = importlib.import_module( 'nova.db.sqlalchemy.migrate_repo.versions.' '330_enforce_mitaka_online_migrations') self.engine = db_api.get_engine() def setup_pci_device(self, dev_type): # NOTE(jaypipes): We cannot use db_api.pci_device_update() here because # newer models of PciDevice contain fields (uuid) that are not present # in the older Newton DB schema and pci_device_update() uses the # SQLAlchemy ORM model_query().update() form which will produce an # UPDATE SQL statement that contains those new fields, resulting in an # OperationalError about table pci_devices has no such column uuid. engine = db_api.get_engine() tbl = models.PciDevice.__table__ with engine.connect() as conn: ins_stmt = tbl.insert().values( address='foo:bar', compute_node_id=1, parent_addr=None, vendor_id='123', product_id='456', dev_type=dev_type, label='foobar', status='whatisthis?', ) conn.execute(ins_stmt) def test_pci_device_type_vf_not_migrated(self): self.setup_pci_device('type-VF') # type-VF devices should have a parent_addr self.assertRaises(exception.ValidationError, self.migration.upgrade, self.engine) def test_pci_device_type_pf_not_migrated(self): self.setup_pci_device('type-PF') # blocker should not block on type-PF devices self.migration.upgrade(self.engine) def test_pci_device_type_pci_not_migrated(self): self.setup_pci_device('type-PCI') # blocker should not block on type-PCI devices self.migration.upgrade(self.engine) class TestOcataCheck(test.TestCase): def setUp(self): super(TestOcataCheck, self).setUp() self.context = context.get_admin_context() self.migration = importlib.import_module( 'nova.db.sqlalchemy.migrate_repo.versions.' 
            '345_require_online_migration_completion')
        self.engine = db_api.get_engine()
        self.flavor_values = {
            'name': 'foo',
            'memory_mb': 256,
            'vcpus': 1,
            'root_gb': 10,
            'ephemeral_gb': 100,
            'flavorid': 'bar',
            'swap': 1,
            'rxtx_factor': 1.0,
            'vcpu_weight': 1,
            'disabled': False,
            'is_public': True,
            'deleted': 0
        }
        self.keypair_values = {
            'name': 'foo',
            'user_id': 'bar',
            'fingerprint': 'baz',
            'public_key': 'bat',
            'type': 'ssh',
        }
        self.aggregate_values = {
            'uuid': uuidsentinel.agg,
            'name': 'foo',
        }
        self.ig_values = {
            'user_id': 'foo',
            'project_id': 'bar',
            'uuid': uuidsentinel.ig,
            'name': 'baz',
            'deleted': 0
        }

    def test_upgrade_clean(self):
        self.migration.upgrade(self.engine)

    def test_upgrade_dirty_flavors(self):
        flavors = db_utils.get_table(self.engine, 'instance_types')
        flavors.insert().execute(self.flavor_values)
        self.assertRaises(exception.ValidationError,
                          self.migration.upgrade, self.engine)

    def test_upgrade_dirty_keypairs(self):
        db_api.key_pair_create(self.context, self.keypair_values)
        self.assertRaises(exception.ValidationError,
                          self.migration.upgrade, self.engine)

    def test_upgrade_with_deleted_keypairs(self):
        keypair = db_api.key_pair_create(self.context, self.keypair_values)
        db_api.key_pair_destroy(self.context,
                                keypair['user_id'], keypair['name'])
        self.migration.upgrade(self.engine)

    def test_upgrade_dirty_instance_groups(self):
        igs = db_utils.get_table(self.engine, 'instance_groups')
        igs.insert().execute(self.ig_values)
        self.assertRaises(exception.ValidationError,
                          self.migration.upgrade, self.engine)

    def test_upgrade_with_deleted_instance_groups(self):
        igs = db_utils.get_table(self.engine, 'instance_groups')
        group_id = igs.insert().execute(
            self.ig_values).inserted_primary_key[0]
        igs.update().where(igs.c.id == group_id).values(
            deleted=group_id).execute()
        self.migration.upgrade(self.engine)


class TestNewtonCellsCheck(test.NoDBTestCase):
    USES_DB_SELF = True

    def setUp(self):
        super(TestNewtonCellsCheck, self).setUp()
        self.useFixture(nova_fixtures.DatabaseAtVersion(28, 'api'))
        self.context = context.get_admin_context()
        self.migration = importlib.import_module(
            'nova.db.sqlalchemy.api_migrations.migrate_repo.versions.'
            '030_require_cell_setup')
        self.engine = db_api.get_api_engine()

    def _flavor_me(self):
        # We can't use the Flavor object or model to create the flavor
        # because the model and object have the description field now but at
        # this point we have not run the migration schema to add the
        # description column.
flavors = db_utils.get_table(self.engine, 'flavors') values = dict(name='foo', memory_mb=123, vcpus=1, root_gb=1, flavorid='m1.foo', swap=0) flavors.insert().execute(values) def _create_cell_mapping(self, **values): mappings = db_utils.get_table(self.engine, 'cell_mappings') return mappings.insert().execute(**values).inserted_primary_key[0] def _create_host_mapping(self, **values): mappings = db_utils.get_table(self.engine, 'host_mappings') return mappings.insert().execute(**values).inserted_primary_key[0] def test_upgrade_with_no_cell_mappings(self): self._flavor_me() self.assertRaisesRegex(exception.ValidationError, 'Cell mappings', self.migration.upgrade, self.engine) def test_upgrade_with_only_cell0(self): self._flavor_me() self._create_cell_mapping(uuid=objects.CellMapping.CELL0_UUID, name='cell0', transport_url='fake', database_connection='fake') self.assertRaisesRegex(exception.ValidationError, 'Cell mappings', self.migration.upgrade, self.engine) def test_upgrade_without_cell0(self): self._flavor_me() self._create_cell_mapping(uuid=uuidsentinel.cell1, name='cell1', transport_url='fake', database_connection='fake') self._create_cell_mapping(uuid=uuidsentinel.cell2, name='cell2', transport_url='fake', database_connection='fake') self.assertRaisesRegex(exception.ValidationError, 'Cell0', self.migration.upgrade, self.engine) def test_upgrade_with_no_host_mappings(self): self._flavor_me() self._create_cell_mapping(uuid=objects.CellMapping.CELL0_UUID, name='cell0', transport_url='fake', database_connection='fake') self._create_cell_mapping(uuid=uuidsentinel.cell1, name='cell1', transport_url='fake', database_connection='fake') with mock.patch.object(self.migration, 'LOG') as log: self.migration.upgrade(self.engine) self.assertTrue(log.warning.called) def test_upgrade_with_required_mappings(self): self._flavor_me() self._create_cell_mapping(uuid=objects.CellMapping.CELL0_UUID, name='cell0', transport_url='fake', database_connection='fake') cell1_id = self._create_cell_mapping(uuid=uuidsentinel.cell1, name='cell1', transport_url='fake', database_connection='fake') self._create_host_mapping(cell_id=cell1_id, host='foo') self.migration.upgrade(self.engine) def test_upgrade_new_deploy(self): self.migration.upgrade(self.engine) class TestServicesUUIDCheck(test.TestCase): """Tests the 400_enforce_service_uuid blocker migration.""" def setUp(self): super(TestServicesUUIDCheck, self).setUp() self.useFixture(nova_fixtures.DatabaseAtVersion(398)) self.context = context.get_admin_context() self.migration = importlib.import_module( 'nova.db.sqlalchemy.migrate_repo.versions.' '400_enforce_service_uuid') self.engine = db_api.get_engine() def test_upgrade_unmigrated_deleted_service(self): """Tests to make sure the 400 migration filters out deleted services""" services = db_utils.get_table(self.engine, 'services') service = { 'host': 'fake-host', 'binary': 'nova-compute', 'topic': 'compute', 'report_count': 514, 'version': 16, 'uuid': None, 'deleted': 1 } services.insert().execute(service) self.migration.upgrade(self.engine) def test_upgrade_unmigrated_service_validation_error(self): """Tests that the migration raises ValidationError when an unmigrated non-deleted service record is found. 
""" services = db_utils.get_table(self.engine, 'services') service = { 'host': 'fake-host', 'binary': 'nova-compute', 'topic': 'compute', 'report_count': 514, 'version': 16, 'uuid': None, 'deleted': 0 } services.insert().execute(service) ex = self.assertRaises(exception.ValidationError, self.migration.upgrade, self.engine) self.assertIn('There are still 1 unmigrated records in the ' 'services table.', six.text_type(ex)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/fake_block_device.py0000664000175000017500000000447300000000000021633 0ustar00zuulzuul00000000000000# Copyright 2013 Red Hat Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova import block_device from nova import objects def fake_bdm_object(context, bdm_dict): """Creates a BlockDeviceMapping object from the given bdm_dict :param context: nova request context :param bdm_dict: dict of block device mapping info :returns: nova.objects.block_device.BlockDeviceMapping """ # FakeDbBlockDeviceDict mutates the bdm_dict so make a copy of it. return objects.BlockDeviceMapping._from_db_object( context, objects.BlockDeviceMapping(), FakeDbBlockDeviceDict(bdm_dict.copy())) class FakeDbBlockDeviceDict(block_device.BlockDeviceDict): """Defaults db fields - useful for mocking database calls.""" def __init__(self, bdm_dict=None, anon=False, **kwargs): bdm_dict = bdm_dict or {} db_id = bdm_dict.pop('id', 1) db_uuid = bdm_dict.pop('uuid', uuids.bdm) instance_uuid = bdm_dict.pop('instance_uuid', uuids.fake) super(FakeDbBlockDeviceDict, self).__init__(bdm_dict=bdm_dict, **kwargs) fake_db_fields = {'instance_uuid': instance_uuid, 'deleted_at': None, 'deleted': 0} if not anon: fake_db_fields['id'] = db_id fake_db_fields['uuid'] = db_uuid fake_db_fields['attachment_id'] = None fake_db_fields['created_at'] = timeutils.utcnow() fake_db_fields['updated_at'] = timeutils.utcnow() self.update(fake_db_fields) def AnonFakeDbBlockDeviceDict(bdm_dict, **kwargs): return FakeDbBlockDeviceDict(bdm_dict=bdm_dict, anon=True, **kwargs) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/fake_build_request.py0000664000175000017500000000774400000000000022075 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import datetime from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import uuidutils from nova import context from nova import objects from nova.objects import fields from nova.tests.unit import fake_block_device from nova.tests.unit import fake_instance def fake_db_req(**updates): ctxt = context.RequestContext('fake-user', 'fake-project') instance_uuid = uuidutils.generate_uuid() instance = fake_instance.fake_instance_obj(ctxt, objects.Instance, uuid=instance_uuid) # This will always be set this way for an instance at build time instance.host = None block_devices = objects.BlockDeviceMappingList( objects=[fake_block_device.fake_bdm_object( context, fake_block_device.FakeDbBlockDeviceDict( source_type='blank', destination_type='local', guest_format='foo', device_type='disk', disk_bus='', boot_index=1, device_name='xvda', delete_on_termination=False, snapshot_id=None, volume_id=None, volume_size=0, image_id='bar', no_device=False, connection_info=None, tag='', instance_uuid=uuids.instance))]) tags = objects.TagList(objects=[objects.Tag(tag='tag1', resource_id=instance_uuid)]) db_build_request = { 'id': 1, 'project_id': 'fake-project', 'instance_uuid': instance_uuid, 'instance': jsonutils.dumps(instance.obj_to_primitive()), 'block_device_mappings': jsonutils.dumps( block_devices.obj_to_primitive()), 'tags': jsonutils.dumps(tags.obj_to_primitive()), 'created_at': datetime.datetime(2016, 1, 16), 'updated_at': datetime.datetime(2016, 1, 16), } for name, field in objects.BuildRequest.fields.items(): if name in db_build_request: continue if field.nullable: db_build_request[name] = None elif field.default != fields.UnspecifiedDefault: db_build_request[name] = field.default else: raise Exception('fake_db_req needs help with %s' % name) if updates: db_build_request.update(updates) return db_build_request def fake_req_obj(ctxt, db_req=None): if db_req is None: db_req = fake_db_req() req_obj = objects.BuildRequest(ctxt) for field in req_obj.fields: value = db_req[field] # create() can't be called if this is set if field == 'id': continue if isinstance(req_obj.fields[field], fields.ObjectField): value = value if field == 'instance': req_obj.instance = objects.Instance.obj_from_primitive( jsonutils.loads(value)) elif field == 'block_device_mappings': req_obj.block_device_mappings = ( objects.BlockDeviceMappingList.obj_from_primitive( jsonutils.loads(value))) elif field == 'tags': req_obj.tags = objects.TagList.obj_from_primitive( jsonutils.loads(value)) elif field == 'instance_metadata': setattr(req_obj, field, jsonutils.loads(value)) else: setattr(req_obj, field, value) # This should never be a changed field req_obj.obj_reset_changes(['id']) return req_obj ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/fake_console_auth_token.py0000664000175000017500000000231600000000000023077 0ustar00zuulzuul00000000000000# Copyright 2016 Intel Corp. # Copyright 2016 Hewlett Packard Enterprise Development Company LP # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils.fixture import uuidsentinel from nova import utils fake_token = uuidsentinel.token fake_token_hash = utils.get_sha256_str(fake_token) fake_instance_uuid = uuidsentinel.instance fake_token_dict = { 'created_at': None, 'updated_at': None, 'id': 123, 'token_hash': fake_token_hash, 'console_type': 'fake-type', 'host': 'fake-host', 'port': 1000, 'internal_access_path': 'fake-path', 'instance_uuid': fake_instance_uuid, 'expires': 100, 'access_url_base': 'http://fake.url.fake/root.html' } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/fake_crypto.py0000664000175000017500000000477200000000000020544 0ustar00zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. def get_x509_cert_and_fingerprint(): fingerprint = "a1:6f:6d:ea:a6:36:d0:3a:c6:eb:b6:ee:07:94:3e:2a:90:98:2b:c9" certif = ( "-----BEGIN CERTIFICATE-----\n" "MIIDIjCCAgqgAwIBAgIJAIE8EtWfZhhFMA0GCSqGSIb3DQEBCwUAMCQxIjAgBgNV\n" "BAMTGWNsb3VkYmFzZS1pbml0LXVzZXItMTM1NTkwHhcNMTUwMTI5MTgyMzE4WhcN\n" "MjUwMTI2MTgyMzE4WjAkMSIwIAYDVQQDExljbG91ZGJhc2UtaW5pdC11c2VyLTEz\n" "NTU5MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAv4lv95ofkXLIbALU\n" "UEb1f949TYNMUvMGNnLyLgGOY+D61TNG7RZn85cRg9GVJ7KDjSLN3e3LwH5rgv5q\n" "pU+nM/idSMhG0CQ1lZeExTsMEJVT3bG7LoU5uJ2fJSf5+hA0oih2M7/Kap5ggHgF\n" "h+h8MWvDC9Ih8x1aadkk/OEmJsTrziYm0C/V/FXPHEuXfZn8uDNKZ/tbyfI6hwEj\n" "nLz5Zjgg29n6tIPYMrnLNDHScCwtNZOcnixmWzsxCt1bxsAEA/y9gXUT7xWUf52t\n" "2+DGQbLYxo0PHjnPf3YnFXNavfTt+4c7ZdHhOQ6ZA8FGQ2LJHDHM1r2/8lK4ld2V\n" "qgNTcQIDAQABo1cwVTATBgNVHSUEDDAKBggrBgEFBQcDAjA+BgNVHREENzA1oDMG\n" "CisGAQQBgjcUAgOgJQwjY2xvdWRiYXNlLWluaXQtdXNlci0xMzU1OUBsb2NhbGhv\n" "c3QwDQYJKoZIhvcNAQELBQADggEBAHHX/ZUOMR0ZggQnfXuXLIHWlffVxxLOV/bE\n" "7JC/dtedHqi9iw6sRT5R6G1pJo0xKWr2yJVDH6nC7pfxCFkby0WgVuTjiu6iNRg2\n" "4zNJd8TGrTU+Mst+PPJFgsxrAY6vjwiaUtvZ/k8PsphHXu4ON+oLurtVDVgog7Vm\n" "fQCShx434OeJj1u8pb7o2WyYS5nDVrHBhlCAqVf2JPKu9zY+i9gOG2kimJwH7fJD\n" "xXpMIwAQ+flwlHR7OrE0L8TNcWwKPRAY4EPcXrT+cWo1k6aTqZDSK54ygW2iWtni\n" "ZBcstxwcB4GIwnp1DrPW9L2gw5eLe1Sl6wdz443TW8K/KPV9rWQ=\n" "-----END CERTIFICATE-----\n") return certif, fingerprint def get_ssh_public_key(): public_key = ("ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDx8nkQv/zgGg" "B4rMYmIf+6A4l6Rr+o/6lHBQdW5aYd44bd8JttDCE/F/pNRr0l" "RE+PiqSPO8nDPHw0010JeMH9gYgnnFlyY3/OcJ02RhIPyyxYpv" "9FhY+2YiUkpwFOcLImyrxEsYXpD/0d3ac30bNH6Sw9JD9UZHYc" "pSxsIbECHw== Generated-by-Nova") return public_key ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/fake_diagnostics.py0000664000175000017500000000246600000000000021531 0ustar00zuulzuul00000000000000# Copyright (c) 2017 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import objects def fake_diagnostics_obj(**updates): diag = objects.Diagnostics() cpu_details = updates.pop('cpu_details', []) nic_details = updates.pop('nic_details', []) disk_details = updates.pop('disk_details', []) memory_details = updates.pop('memory_details', {}) for field in objects.Diagnostics.fields: if field in updates: setattr(diag, field, updates[field]) for cpu in cpu_details: diag.add_cpu(**cpu) for nic in nic_details: diag.add_nic(**nic) for disk in disk_details: diag.add_disk(**disk) for k, v in memory_details.items(): setattr(diag.memory_details, k, v) return diag ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/fake_flavor.py0000664000175000017500000000330500000000000020504 0ustar00zuulzuul00000000000000# Copyright 2015 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import objects from nova.objects import fields def fake_db_flavor(**updates): db_flavor = { 'id': 1, 'name': 'fake_flavor', 'memory_mb': 1024, 'vcpus': 1, 'root_gb': 100, 'ephemeral_gb': 0, 'flavorid': 'abc', 'swap': 0, 'disabled': False, 'is_public': True, 'extra_specs': {}, 'projects': [], 'description': None } for name, field in objects.Flavor.fields.items(): if name in db_flavor: continue if field.nullable: db_flavor[name] = None elif field.default != fields.UnspecifiedDefault: db_flavor[name] = field.default else: raise Exception('fake_db_flavor needs help with %s' % name) if updates: db_flavor.update(updates) return db_flavor def fake_flavor_obj(context, **updates): expected_attrs = updates.pop('expected_attrs', None) return objects.Flavor._from_db_object(context, objects.Flavor(), fake_db_flavor(**updates), expected_attrs=expected_attrs) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/fake_hosts.py0000664000175000017500000000255700000000000020363 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
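# ---------------------------------------------------------------------------
# Illustrative aside (not part of the original file): the fake_flavor module
# above builds Flavor objects from a dict of fake DB defaults.  A typical use
# from a unit test, sketched here with hypothetical override values, looks
# like this; only fake_flavor_obj() comes from nova.tests.unit.fake_flavor.
from nova import context as _nova_context
from nova.tests.unit import fake_flavor as _fake_flavor


def _example_flavor_obj():
    """Return a Flavor object with a couple of overridden fields."""
    ctxt = _nova_context.get_admin_context()
    # Any keyword argument overrides the corresponding fake_db_flavor()
    # default; unspecified nullable fields are left as None.
    return _fake_flavor.fake_flavor_obj(ctxt, flavorid='m1.tiny', vcpus=2)
# ---------------------------------------------------------------------------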
""" Provides some fake hosts to test host and service related functions """ from nova.tests.unit.objects import test_service HOST_LIST = [ {"host_name": "host_c1", "service": "compute", "zone": "nova"}, {"host_name": "host_c2", "service": "compute", "zone": "nova"}] OS_API_HOST_LIST = {"hosts": HOST_LIST} HOST_LIST_NOVA_ZONE = [ {"host_name": "host_c1", "service": "compute", "zone": "nova"}, {"host_name": "host_c2", "service": "compute", "zone": "nova"}] service_base = test_service.fake_service SERVICES_LIST = [ dict(service_base, host='host_c1', topic='compute', binary='nova-compute'), dict(service_base, host='host_c2', topic='compute', binary='nova-compute')] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/fake_instance.py0000664000175000017500000001326600000000000021026 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime from oslo_serialization import jsonutils from oslo_utils import uuidutils from nova import objects from nova.objects import fields def fake_db_secgroups(instance, names): secgroups = [] for i, name in enumerate(names): group_name = 'secgroup-%i' % i if isinstance(name, dict) and name.get('name'): group_name = name.get('name') secgroups.append( {'id': i, 'instance_uuid': instance['uuid'], 'name': group_name, 'description': 'Fake secgroup', 'user_id': instance['user_id'], 'project_id': instance['project_id'], 'deleted': False, 'deleted_at': None, 'created_at': None, 'updated_at': None, }) return secgroups def fake_db_instance(**updates): if 'instance_type' in updates: if isinstance(updates['instance_type'], objects.Flavor): flavor = updates['instance_type'] else: flavor = objects.Flavor(**updates['instance_type']) flavorinfo = jsonutils.dumps({ 'cur': flavor.obj_to_primitive(), 'old': None, 'new': None, }) else: flavorinfo = None db_instance = { 'id': 1, 'deleted': False, 'uuid': uuidutils.generate_uuid(), 'user_id': 'fake-user', 'project_id': 'fake-project', 'host': 'fake-host', 'created_at': datetime.datetime(1955, 11, 5), 'pci_devices': [], 'security_groups': [], 'metadata': {}, 'system_metadata': {}, 'root_gb': 0, 'ephemeral_gb': 0, 'extra': {'pci_requests': None, 'flavor': flavorinfo, 'numa_topology': None, 'vcpu_model': None, 'device_metadata': None, 'trusted_certs': None, 'resources': None, }, 'tags': [], 'services': [] } for name, field in objects.Instance.fields.items(): if name in db_instance: continue if field.nullable: db_instance[name] = None elif field.default != fields.UnspecifiedDefault: db_instance[name] = field.default elif name in ['flavor', 'ec2_ids', 'keypairs']: pass else: raise Exception('fake_db_instance needs help with %s' % name) if updates: db_instance.update(updates) if db_instance.get('security_groups'): db_instance['security_groups'] = fake_db_secgroups( db_instance, db_instance['security_groups']) return db_instance def fake_instance_obj(context, obj_instance_class=None, **updates): if 
obj_instance_class is None: obj_instance_class = objects.Instance expected_attrs = updates.pop('expected_attrs', None) flavor = updates.pop('flavor', None) if not flavor: flavor = objects.Flavor(id=1, name='flavor1', memory_mb=256, vcpus=1, root_gb=1, ephemeral_gb=1, flavorid='1', swap=0, rxtx_factor=1.0, vcpu_weight=1, disabled=False, is_public=True, extra_specs={}, projects=[]) inst = obj_instance_class._from_db_object(context, obj_instance_class(), fake_db_instance(**updates), expected_attrs=expected_attrs) inst.keypairs = objects.KeyPairList(objects=[]) inst.tags = objects.TagList() if flavor: inst.flavor = flavor # This is needed for instance quota counting until we have the # ability to count allocations in placement. if 'vcpus' in flavor and 'vcpus' not in updates: inst.vcpus = flavor.vcpus if 'memory_mb' in flavor and 'memory_mb' not in updates: inst.memory_mb = flavor.memory_mb if ('instance_type_id' not in inst or inst.instance_type_id is None and 'id' in flavor): inst.instance_type_id = flavor.id inst.old_flavor = None inst.new_flavor = None inst.resources = None inst.migration_context = None inst.obj_reset_changes(recursive=True) return inst def fake_fault_obj(context, instance_uuid, code=404, message='HTTPNotFound', details='Stock details for test', **updates): fault = { 'id': 1, 'instance_uuid': instance_uuid, 'code': code, 'message': message, 'details': details, 'host': 'fake_host', 'deleted': False, 'created_at': datetime.datetime(2010, 10, 10, 12, 0, 0), 'updated_at': None, 'deleted_at': None } if updates: fault.update(updates) return objects.InstanceFault._from_db_object(context, objects.InstanceFault(), fault) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/fake_ldap.py0000664000175000017500000002203000000000000020127 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Fake LDAP server for test harness. This class does very little error checking, and knows nothing about ldap class definitions. It implements the minimum emulation of the python ldap library to work with nova. """ import fnmatch from oslo_serialization import jsonutils from six.moves import range class Store(object): def __init__(self): if hasattr(self.__class__, '_instance'): raise Exception('Attempted to instantiate singleton') @classmethod def instance(cls): if not hasattr(cls, '_instance'): cls._instance = _StorageDict() return cls._instance class _StorageDict(dict): def keys(self, pat=None): ret = super(_StorageDict, self).keys() if pat is not None: ret = fnmatch.filter(ret, pat) return ret def delete(self, key): try: del self[key] except KeyError: pass def flushdb(self): self.clear() def hgetall(self, key): """Returns the hash for the given key Creates the hash if the key doesn't exist. 
""" try: return self[key] except KeyError: self[key] = {} return self[key] def hget(self, key, field): hashdict = self.hgetall(key) try: return hashdict[field] except KeyError: hashdict[field] = {} return hashdict[field] def hset(self, key, field, val): hashdict = self.hgetall(key) hashdict[field] = val def hmset(self, key, value_dict): hashdict = self.hgetall(key) for field, val in value_dict.items(): hashdict[field] = val SCOPE_BASE = 0 SCOPE_ONELEVEL = 1 # Not implemented SCOPE_SUBTREE = 2 MOD_ADD = 0 MOD_DELETE = 1 MOD_REPLACE = 2 class NO_SUCH_OBJECT(Exception): """Duplicate exception class from real LDAP module.""" pass class OBJECT_CLASS_VIOLATION(Exception): """Duplicate exception class from real LDAP module.""" pass class SERVER_DOWN(Exception): """Duplicate exception class from real LDAP module.""" pass def initialize(_uri): """Opens a fake connection with an LDAP server.""" return FakeLDAP() def _match_query(query, attrs): """Match an ldap query to an attribute dictionary. The characters &, |, and ! are supported in the query. No syntax checking is performed, so malformed queries will not work correctly. """ # cut off the parentheses inner = query[1:-1] if inner.startswith('&'): # cut off the & l, r = _paren_groups(inner[1:]) return _match_query(l, attrs) and _match_query(r, attrs) if inner.startswith('|'): # cut off the | l, r = _paren_groups(inner[1:]) return _match_query(l, attrs) or _match_query(r, attrs) if inner.startswith('!'): # cut off the ! and the nested parentheses return not _match_query(query[2:-1], attrs) (k, _sep, v) = inner.partition('=') return _match(k, v, attrs) def _paren_groups(source): """Split a string into parenthesized groups.""" count = 0 start = 0 result = [] for pos in range(len(source)): if source[pos] == '(': if count == 0: start = pos count += 1 if source[pos] == ')': count -= 1 if count == 0: result.append(source[start:pos + 1]) return result def _match(key, value, attrs): """Match a given key and value against an attribute list.""" if key not in attrs: return False # This is a wild card search. Implemented as all or nothing for now. if value == "*": return True if key != "objectclass": return value in attrs[key] # it is an objectclass check, so check subclasses values = _subs(value) for v in values: if v in attrs[key]: return True return False def _subs(value): """Returns a list of subclass strings. The strings represent the ldap object class plus any subclasses that inherit from it. Fakeldap doesn't know about the ldap object structure, so subclasses need to be defined manually in the dictionary below. """ subs = {'groupOfNames': ['novaProject']} if value in subs: return [value] + subs[value] return [value] def _from_json(encoded): """Convert attribute values from json representation. Args: encoded -- a json encoded string Returns a list of strings """ return [str(x) for x in jsonutils.loads(encoded)] def _to_json(unencoded): """Convert attribute values into json representation. Args: unencoded -- an unencoded string or list of strings. If it is a single string, it will be converted into a list. 
Returns a json string """ return jsonutils.dumps(list(unencoded)) server_fail = False class FakeLDAP(object): """Fake LDAP connection.""" def simple_bind_s(self, dn, password): """This method is ignored, but provided for compatibility.""" if server_fail: raise SERVER_DOWN() pass def unbind_s(self): """This method is ignored, but provided for compatibility.""" if server_fail: raise SERVER_DOWN() pass def add_s(self, dn, attr): """Add an object with the specified attributes at dn.""" if server_fail: raise SERVER_DOWN() key = "%s%s" % (self.__prefix, dn) value_dict = {k: _to_json(v) for k, v in attr} Store.instance().hmset(key, value_dict) def delete_s(self, dn): """Remove the ldap object at specified dn.""" if server_fail: raise SERVER_DOWN() Store.instance().delete("%s%s" % (self.__prefix, dn)) def modify_s(self, dn, attrs): """Modify the object at dn using the attribute list. :param dn: a dn :param attrs: a list of tuples in the following form:: ([MOD_ADD | MOD_DELETE | MOD_REPACE], attribute, value) """ if server_fail: raise SERVER_DOWN() store = Store.instance() key = "%s%s" % (self.__prefix, dn) for cmd, k, v in attrs: values = _from_json(store.hget(key, k)) if cmd == MOD_ADD: values.append(v) elif cmd == MOD_REPLACE: values = [v] else: values.remove(v) store.hset(key, k, _to_json(values)) def modrdn_s(self, dn, newrdn): oldobj = self.search_s(dn, SCOPE_BASE) if not oldobj: raise NO_SUCH_OBJECT() newdn = "%s,%s" % (newrdn, dn.partition(',')[2]) newattrs = oldobj[0][1] modlist = [] for attrtype in newattrs.keys(): modlist.append((attrtype, newattrs[attrtype])) self.add_s(newdn, modlist) self.delete_s(dn) def search_s(self, dn, scope, query=None, fields=None): """Search for all matching objects under dn using the query. Args: dn -- dn to search under scope -- only SCOPE_BASE and SCOPE_SUBTREE are supported query -- query to filter objects by fields -- fields to return. Returns all fields if not specified """ if server_fail: raise SERVER_DOWN() if scope != SCOPE_BASE and scope != SCOPE_SUBTREE: raise NotImplementedError(str(scope)) store = Store.instance() if scope == SCOPE_BASE: pattern = "%s%s" % (self.__prefix, dn) keys = store.keys(pattern) else: keys = store.keys("%s*%s" % (self.__prefix, dn)) if not keys: raise NO_SUCH_OBJECT() objects = [] for key in keys: # get the attributes from the store attrs = store.hgetall(key) # turn the values from the store into lists attrs = {k: _from_json(v) for k, v in attrs.items()} # filter the objects by query if not query or _match_query(query, attrs): # filter the attributes by fields attrs = {k: v for k, v in attrs.items() if not fields or k in fields} objects.append((key[len(self.__prefix):], attrs)) return objects @property def __prefix(self): """Get the prefix to use for all keys.""" return 'ldap:' ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.542468 nova-21.2.4/nova/tests/unit/fake_loadables/0000775000175000017500000000000000000000000020566 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/fake_loadables/__init__.py0000664000175000017500000000154300000000000022702 0ustar00zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Fakes For Loadable class handling. """ from nova import loadables class FakeLoadable(object): pass class FakeLoader(loadables.BaseLoader): def __init__(self): super(FakeLoader, self).__init__(FakeLoadable) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/fake_loadables/fake_loadable1.py0000664000175000017500000000237000000000000023754 0ustar00zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Fake Loadable subclasses module #1 """ from nova.tests.unit import fake_loadables class FakeLoadableSubClass1(fake_loadables.FakeLoadable): pass class FakeLoadableSubClass2(fake_loadables.FakeLoadable): pass class _FakeLoadableSubClass3(fake_loadables.FakeLoadable): """Classes beginning with '_' will be ignored.""" pass class FakeLoadableSubClass4(object): """Not a correct subclass.""" def return_valid_classes(): return [FakeLoadableSubClass1, FakeLoadableSubClass2] def return_invalid_classes(): return [FakeLoadableSubClass1, _FakeLoadableSubClass3, FakeLoadableSubClass4] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/fake_loadables/fake_loadable2.py0000664000175000017500000000215000000000000023751 0ustar00zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
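# ---------------------------------------------------------------------------
# Illustrative aside (not part of the original file): the fake_loadables
# package above exists to exercise nova.loadables.BaseLoader.  A sketch of how
# a test might drive it (assuming BaseLoader.get_all_classes(), which the
# loadables tests rely on) follows; the helper name is hypothetical.
from nova.tests.unit import fake_loadables as _fake_loadables


def _example_load_classes():
    """Return every FakeLoadable subclass discoverable in the package."""
    loader = _fake_loadables.FakeLoader()
    # Classes whose names start with '_' and classes that do not subclass
    # FakeLoadable are skipped by the loader.
    return loader.get_all_classes()
# ---------------------------------------------------------------------------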
""" Fake Loadable subclasses module #2 """ from nova.tests.unit import fake_loadables class FakeLoadableSubClass5(fake_loadables.FakeLoadable): pass class FakeLoadableSubClass6(fake_loadables.FakeLoadable): pass class _FakeLoadableSubClass7(fake_loadables.FakeLoadable): """Classes beginning with '_' will be ignored.""" pass class FakeLoadableSubClass8(BaseException): """Not a correct subclass.""" def return_valid_class(): return [FakeLoadableSubClass6] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/fake_network.py0000664000175000017500000002010600000000000020702 0ustar00zuulzuul00000000000000# Copyright 2011 Rackspace # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from nova.compute import manager as compute_manager from nova.db import api as db from nova.network import model as network_model from nova import objects from nova.objects import base as obj_base from nova.tests.unit.objects import test_instance_info_cache from nova.tests.unit import utils def fake_get_instance_nw_info(test, num_networks=1): def update_cache_fake(*args, **kwargs): fake_info_cache = { 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': False, 'instance_uuid': uuids.vifs_1, 'network_info': '[]', } return fake_info_cache test.stub_out('nova.db.api.instance_info_cache_update', update_cache_fake) # TODO(stephenfin): This doesn't match the kind of object we would receive # from '_build_vif_model' and callers of same. We should fix that. 
nw_model = network_model.NetworkInfo() for network_id in range(1, num_networks + 1): network = network_model.Network( id=getattr(uuids, 'network%i' % network_id), bridge='fake_br%d' % network_id, label='test%d' % network_id, subnets=[ network_model.Subnet( cidr='192.168.%d.0/24' % network_id, dns=[ network_model.IP( address='192.168.%d.3' % network_id, type='dns', version=4, meta={}, ), network_model.IP( address='192.168.%d.4' % network_id, type='dns', version=4, meta={}, ), ], gateway=network_model.IP( address='192.168.%d.1' % network_id, type='gateway', version=4, meta={}, ), ips=[ network_model.FixedIP( address='192.168.%d.100' % network_id, version=4, meta={}, ), ], routes=[], version=4, meta={}, ), network_model.Subnet( cidr='2001:db8:0:%x::/64' % network_id, dns=[], gateway=network_model.IP( address='2001:db8:0:%x::1' % network_id, type='gateway', version=6, meta={}, ), ips=[ network_model.FixedIP( address='2001:db8:0:%x:dcad:beff:feef:1' % ( network_id), version=6, meta={}, ), ], routes=[], version=6, meta={} ), ], meta={ "tenant_id": "806e1f03-b36f-4fc6-be29-11a366f150eb" }, ) vif = network_model.VIF( id=getattr(uuids, 'vif%i' % network_id), address='DE:AD:BE:EF:00:%02x' % network_id, network=network, type='bridge', details={}, devname=None, ovs_interfaceid=None, qbh_params=None, qbg_params=None, active=False, vnic_type='normal', profile=None, preserve_on_delete=False, meta={'rxtx_cap': 30}, ) nw_model.append(vif) return nw_model _real_functions = {} def set_stub_network_methods(test): global _real_functions cm = compute_manager.ComputeManager if not _real_functions: _real_functions = { '_allocate_network': cm._allocate_network, '_deallocate_network': cm._deallocate_network} def fake_networkinfo(*args, **kwargs): return network_model.NetworkInfo() def fake_async_networkinfo(*args, **kwargs): return network_model.NetworkInfoAsyncWrapper(fake_networkinfo) test.stub_out('nova.compute.manager.ComputeManager._allocate_network', fake_async_networkinfo) test.stub_out('nova.compute.manager.ComputeManager._deallocate_network', lambda *args, **kwargs: None) def unset_stub_network_methods(test): global _real_functions if _real_functions: for name in _real_functions: test.stub_out('nova.compute.manager.ComputeManager.' 
+ name, _real_functions[name]) def _get_fake_cache(): def _ip(ip, fixed=True, floats=None): ip_dict = {'address': ip, 'type': 'fixed'} if not fixed: ip_dict['type'] = 'floating' if fixed and floats: ip_dict['floating_ips'] = [_ip(f, fixed=False) for f in floats] return ip_dict info = [{'address': 'aa:bb:cc:dd:ee:ff', 'id': utils.FAKE_NETWORK_UUID, 'network': {'bridge': 'br0', 'id': 1, 'label': 'private', 'subnets': [{'cidr': '192.168.0.0/24', 'ips': [_ip('192.168.0.3')]}]}}] ipv6_addr = 'fe80:b33f::a8bb:ccff:fedd:eeff' info[0]['network']['subnets'].append({'cidr': 'fe80:b33f::/64', 'ips': [_ip(ipv6_addr)]}) return jsonutils.dumps(info) def _get_instances_with_cached_ips(orig_func, *args, **kwargs): """Kludge the cache into instance(s) without having to create DB entries """ instances = orig_func(*args, **kwargs) context = args[0] fake_device = objects.PciDevice.get_by_dev_addr(context, 1, 'a') def _info_cache_for(instance): info_cache = dict(test_instance_info_cache.fake_info_cache, network_info=_get_fake_cache(), instance_uuid=instance['uuid']) if isinstance(instance, obj_base.NovaObject): _info_cache = objects.InstanceInfoCache(context) objects.InstanceInfoCache._from_db_object(context, _info_cache, info_cache) info_cache = _info_cache instance['info_cache'] = info_cache if isinstance(instances, (list, obj_base.ObjectListBase)): for instance in instances: _info_cache_for(instance) fake_device.claim(instance.uuid) fake_device.allocate(instance) else: _info_cache_for(instances) fake_device.claim(instances.uuid) fake_device.allocate(instances) return instances def _create_instances_with_cached_ips(orig_func, *args, **kwargs): """Kludge the above kludge so that the database doesn't get out of sync with the actual instance. """ instances, reservation_id = orig_func(*args, **kwargs) fake_cache = _get_fake_cache() for instance in instances: instance['info_cache'].network_info = fake_cache db.instance_info_cache_update(args[1], instance['uuid'], {'network_info': fake_cache}) return instances, reservation_id ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/fake_network_cache_model.py0000664000175000017500000000707400000000000023216 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
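# Illustrative sketch (not part of the archive) of how the fake_network
# helpers above are used from a test case. The test class and method names
# below are hypothetical; the only requirement is an object exposing
# stub_out(), as nova.test.NoDBTestCase does.
from nova import test
from nova.tests.unit import fake_network


class ExampleNetworkTest(test.NoDBTestCase):
    def test_two_fake_networks(self):
        # Build a NetworkInfo with two VIFs; each VIF carries one IPv4 and
        # one IPv6 subnet derived from the network index.
        nw_info = fake_network.fake_get_instance_nw_info(self, num_networks=2)
        self.assertEqual(2, len(nw_info))
        self.assertEqual('fake_br1', nw_info[0]['network']['bridge'])

    def test_stubbed_allocation(self):
        # Replace ComputeManager network (de)allocation with no-op fakes for
        # the duration of the test, then restore the real methods.
        fake_network.set_stub_network_methods(self)
        self.addCleanup(fake_network.unset_stub_network_methods, self)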
from nova.network import model def new_ip(ip_dict=None, version=4): if version == 6: new_ip = dict(address='fd00::1:100', version=6) elif version == 4: new_ip = dict(address='192.168.1.100') ip_dict = ip_dict or {} new_ip.update(ip_dict) return model.IP(**new_ip) def new_fixed_ip(ip_dict=None, version=4): if version == 6: new_fixed_ip = dict(address='fd00::1:100', version=6) elif version == 4: new_fixed_ip = dict(address='192.168.1.100') ip_dict = ip_dict or {} new_fixed_ip.update(ip_dict) return model.FixedIP(**new_fixed_ip) def new_route(route_dict=None, version=4): if version == 6: new_route = dict( cidr='::/48', gateway=new_ip(dict(address='fd00::1:1'), version=6), interface='eth0') elif version == 4: new_route = dict( cidr='0.0.0.0/24', gateway=new_ip(dict(address='192.168.1.1')), interface='eth0') route_dict = route_dict or {} new_route.update(route_dict) return model.Route(**new_route) def new_subnet(subnet_dict=None, version=4): if version == 6: new_subnet = dict( cidr='fd00::/48', dns=[new_ip(dict(address='1:2:3:4::'), version=6), new_ip(dict(address='2:3:4:5::'), version=6)], gateway=new_ip(dict(address='fd00::1'), version=6), ips=[new_fixed_ip(dict(address='fd00::2'), version=6), new_fixed_ip(dict(address='fd00::3'), version=6)], routes=[new_route(version=6)], version=6) elif version == 4: new_subnet = dict( cidr='10.10.0.0/24', dns=[new_ip(dict(address='1.2.3.4')), new_ip(dict(address='2.3.4.5'))], gateway=new_ip(dict(address='10.10.0.1')), ips=[new_fixed_ip(dict(address='10.10.0.2')), new_fixed_ip(dict(address='10.10.0.3'))], routes=[new_route()]) subnet_dict = subnet_dict or {} new_subnet.update(subnet_dict) return model.Subnet(**new_subnet) def new_network(network_dict=None, version=4): if version == 6: new_net = dict( id=1, bridge='br0', label='public', subnets=[new_subnet(version=6), new_subnet(dict(cidr='ffff:ffff:ffff:ffff::'), version=6)]) elif version == 4: new_net = dict( id=1, bridge='br0', label='public', subnets=[new_subnet(), new_subnet(dict(cidr='255.255.255.255'))]) network_dict = network_dict or {} new_net.update(network_dict) return model.Network(**new_net) def new_vif(vif_dict=None, version=4): vif = dict( id=1, address='aa:aa:aa:aa:aa:aa', type='bridge', network=new_network(version=version)) vif_dict = vif_dict or {} vif.update(vif_dict) return model.VIF(**vif) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/fake_notifier.py0000664000175000017500000001330100000000000021027 0ustar00zuulzuul00000000000000# Copyright 2013 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
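# Illustrative sketch (not part of the archive) showing how the
# fake_network_cache_model helpers above compose: each new_* function takes
# an optional override dict that is merged on top of fixed defaults before
# the nova.network.model object is built.
from nova.tests.unit import fake_network_cache_model


def build_example_vifs():
    # Default VIF: id=1, MAC aa:aa:aa:aa:aa:aa, bridge type, IPv4 network.
    vif = fake_network_cache_model.new_vif()
    assert vif['address'] == 'aa:aa:aa:aa:aa:aa'

    # Overrides are shallow merges: swap the MAC, keep everything else, and
    # ask for the IPv6 variant of the generated network.
    vif6 = fake_network_cache_model.new_vif(
        dict(address='bb:bb:bb:bb:bb:bb'), version=6)
    assert vif6['network']['subnets'][0]['cidr'] == 'fd00::/48'
    return vif, vif6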
import collections import functools import pprint import threading from oslo_log import log as logging import oslo_messaging as messaging from oslo_serialization import jsonutils from oslo_utils import excutils from oslo_utils import timeutils from nova import rpc LOG = logging.getLogger(__name__) class _Sub(object): """Allow a subscriber to efficiently wait for an event to occur, and retrieve events which have occured. """ def __init__(self): self._cond = threading.Condition() self._notifications = [] def received(self, notification): with self._cond: self._notifications.append(notification) self._cond.notifyAll() def wait_n(self, n, event, timeout): """Wait until at least n notifications have been received, and return them. May return less than n notifications if timeout is reached. """ with timeutils.StopWatch(timeout) as timer: with self._cond: while len(self._notifications) < n: if timer.expired(): notifications = pprint.pformat( {event: sub._notifications for event, sub in VERSIONED_SUBS.items()}) raise AssertionError( "Notification %(event)s hasn't been " "received. Received:\n%(notifications)s" % { 'event': event, 'notifications': notifications, }) self._cond.wait(timer.leftover()) # Return a copy of the notifications list return list(self._notifications) VERSIONED_SUBS = collections.defaultdict(_Sub) VERSIONED_NOTIFICATIONS = [] NOTIFICATIONS = [] def reset(): del NOTIFICATIONS[:] del VERSIONED_NOTIFICATIONS[:] VERSIONED_SUBS.clear() FakeMessage = collections.namedtuple('Message', ['publisher_id', 'priority', 'event_type', 'payload', 'context']) class FakeNotifier(object): def __init__(self, transport, publisher_id, serializer=None): self.transport = transport self.publisher_id = publisher_id self._serializer = serializer or messaging.serializer.NoOpSerializer() for priority in ['debug', 'info', 'warn', 'error', 'critical']: setattr(self, priority, functools.partial(self._notify, priority.upper())) def prepare(self, publisher_id=None): if publisher_id is None: publisher_id = self.publisher_id return self.__class__(self.transport, publisher_id, serializer=self._serializer) def _notify(self, priority, ctxt, event_type, payload): try: payload = self._serializer.serialize_entity(ctxt, payload) except Exception: with excutils.save_and_reraise_exception(): LOG.error('Error serializing payload: %s', payload) # NOTE(sileht): simulate the kombu serializer # this permit to raise an exception if something have not # been serialized correctly jsonutils.to_primitive(payload) # NOTE(melwitt): Try to serialize the context, as the rpc would. # An exception will be raised if something is wrong # with the context. 
self._serializer.serialize_context(ctxt) msg = FakeMessage(self.publisher_id, priority, event_type, payload, ctxt) NOTIFICATIONS.append(msg) def is_enabled(self): return True class FakeVersionedNotifier(FakeNotifier): def _notify(self, priority, ctxt, event_type, payload): payload = self._serializer.serialize_entity(ctxt, payload) notification = {'publisher_id': self.publisher_id, 'priority': priority, 'event_type': event_type, 'payload': payload} VERSIONED_NOTIFICATIONS.append(notification) VERSIONED_SUBS[event_type].received(notification) def stub_notifier(test): test.stub_out('oslo_messaging.Notifier', FakeNotifier) if rpc.LEGACY_NOTIFIER and rpc.NOTIFIER: test.stub_out('nova.rpc.LEGACY_NOTIFIER', FakeNotifier(rpc.LEGACY_NOTIFIER.transport, rpc.LEGACY_NOTIFIER.publisher_id, serializer=getattr(rpc.LEGACY_NOTIFIER, '_serializer', None))) test.stub_out('nova.rpc.NOTIFIER', FakeVersionedNotifier(rpc.NOTIFIER.transport, rpc.NOTIFIER.publisher_id, serializer=getattr(rpc.NOTIFIER, '_serializer', None))) def wait_for_versioned_notifications(event_type, n_events=1, timeout=10.0): return VERSIONED_SUBS[event_type].wait_n(n_events, event_type, timeout) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/fake_pci_device_pools.py0000664000175000017500000000266100000000000022525 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.objects import pci_device_pool # This represents the format that PCI device pool info was stored in the DB # before this info was made into objects. fake_pool_dict = { 'product_id': 'fake-product', 'vendor_id': 'fake-vendor', 'numa_node': 1, 't1': 'v1', 't2': 'v2', 'count': 2, } fake_pool = pci_device_pool.PciDevicePool(count=5, product_id='foo', vendor_id='bar', numa_node=0, tags={'t1': 'v1', 't2': 'v2'}) fake_pool_primitive = fake_pool.obj_to_primitive() fake_pool_list = pci_device_pool.PciDevicePoolList(objects=[fake_pool]) fake_pool_list_primitive = fake_pool_list.obj_to_primitive() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/fake_policy.py0000664000175000017500000002066700000000000020524 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
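# Illustrative sketch (not part of the archive) of the usual fake_notifier
# test pattern: stub the oslo.messaging notifiers, reset the captured lists
# between tests, then inspect NOTIFICATIONS directly or block on
# wait_for_versioned_notifications(). The event type below is only an
# example value, and the helper names are hypothetical.
from nova.tests.unit import fake_notifier


def setup_notifications(test):
    fake_notifier.stub_notifier(test)
    # Make sure earlier tests do not leak captured notifications.
    fake_notifier.reset()
    test.addCleanup(fake_notifier.reset)


def assert_instance_create_notified():
    # Blocks until one versioned notification of this event type arrives,
    # or raises AssertionError after the default 10 second timeout.
    notifications = fake_notifier.wait_for_versioned_notifications(
        'instance.create.end')
    assert notifications[0]['event_type'] == 'instance.create.end'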
policy_data = """ { "context_is_admin": "role:admin or role:administrator", "network:attach_external_network": "", "os_compute_api:servers:create": "", "os_compute_api:servers:create:attach_volume": "", "os_compute_api:servers:create:attach_network": "", "os_compute_api:servers:create:forced_host": "", "os_compute_api:servers:create:trusted_certs": "", "os_compute_api:servers:create_image": "", "os_compute_api:servers:create_image:allow_volume_backed": "", "os_compute_api:servers:update": "", "os_compute_api:servers:index": "", "os_compute_api:servers:index:get_all_tenants": "", "os_compute_api:servers:delete": "", "os_compute_api:servers:detail": "", "os_compute_api:servers:detail:get_all_tenants": "", "os_compute_api:servers:show": "", "os_compute_api:servers:rebuild": "", "os_compute_api:servers:rebuild:trusted_certs": "", "os_compute_api:servers:reboot": "", "os_compute_api:servers:resize": "", "os_compute_api:servers:revert_resize": "", "os_compute_api:servers:confirm_resize": "", "os_compute_api:servers:start": "", "os_compute_api:servers:stop": "", "os_compute_api:servers:trigger_crash_dump": "", "os_compute_api:servers:show:host_status": "", "os_compute_api:servers:show": "", "os_compute_api:servers:show:host_status:unknown-only": "", "os_compute_api:servers:allow_all_filters": "", "os_compute_api:servers:migrations:force_complete": "", "os_compute_api:servers:migrations:index": "", "os_compute_api:servers:migrations:show": "", "os_compute_api:servers:migrations:delete": "", "os_compute_api:os-admin-actions:inject_network_info": "", "os_compute_api:os-admin-actions:reset_network": "", "os_compute_api:os-admin-actions:reset_state": "", "os_compute_api:os-admin-password": "", "os_compute_api:os-agents:list": "", "os_compute_api:os-agents:create": "", "os_compute_api:os-agents:update": "", "os_compute_api:os-agents:delete": "", "os_compute_api:os-aggregates:set_metadata": "", "os_compute_api:os-aggregates:remove_host": "", "os_compute_api:os-aggregates:add_host": "", "os_compute_api:os-aggregates:create": "", "os_compute_api:os-aggregates:index": "", "os_compute_api:os-aggregates:update": "", "os_compute_api:os-aggregates:delete": "", "os_compute_api:os-aggregates:show": "", "compute:aggregates:images": "", "os_compute_api:os-attach-interfaces:list": "", "os_compute_api:os-attach-interfaces:show": "", "os_compute_api:os-attach-interfaces:create": "", "os_compute_api:os-attach-interfaces:delete": "", "os_compute_api:os-baremetal-nodes": "", "os_compute_api:os-console-auth-tokens": "", "os_compute_api:os-console-output": "", "os_compute_api:os-remote-consoles": "", "os_compute_api:os-create-backup": "", "os_compute_api:os-deferred-delete:restore": "", "os_compute_api:os-deferred-delete:force": "", "os_compute_api:os-extended-server-attributes": "", "os_compute_api:ips:index": "", "os_compute_api:ips:show": "", "os_compute_api:extensions": "", "os_compute_api:os-evacuate": "", "os_compute_api:os-flavor-access:remove_tenant_access": "", "os_compute_api:os-flavor-access:add_tenant_access": "", "os_compute_api:os-flavor-access": "", "os_compute_api:os-flavor-extra-specs:create": "", "os_compute_api:os-flavor-extra-specs:update": "", "os_compute_api:os-flavor-extra-specs:delete": "", "os_compute_api:os-flavor-extra-specs:index": "", "os_compute_api:os-flavor-extra-specs:show": "", "os_compute_api:os-flavor-manage:create": "", "os_compute_api:os-flavor-manage:update": "", "os_compute_api:os-flavor-manage:delete": "", "os_compute_api:os-floating-ip-pools": "", "os_compute_api:os-floating-ips": 
"", "os_compute_api:os-instance-actions:list": "", "os_compute_api:os-instance-actions:show": "", "os_compute_api:os-instance-actions:events": "", "os_compute_api:os-instance-actions:events:details": "", "os_compute_api:os-instance-usage-audit-log:list": "", "os_compute_api:os-instance-usage-audit-log:show": "", "os_compute_api:os-keypairs:index": "", "os_compute_api:os-keypairs:create": "", "os_compute_api:os-keypairs:show": "", "os_compute_api:os-keypairs:delete": "", "os_compute_api:os-hypervisors:list": "", "os_compute_api:os-hypervisors:list-detail": "", "os_compute_api:os-hypervisors:statistics": "", "os_compute_api:os-hypervisors:show": "", "os_compute_api:os-hypervisors:uptime": "", "os_compute_api:os-hypervisors:search": "", "os_compute_api:os-hypervisors:servers": "", "os_compute_api:os-lock-server:lock": "", "os_compute_api:os-lock-server:unlock": "", "os_compute_api:os-migrate-server:migrate": "", "os_compute_api:os-migrate-server:migrate_live": "", "os_compute_api:os-migrations:index": "", "os_compute_api:os-multinic": "", "os_compute_api:os-networks:view": "", "os_compute_api:os-tenant-networks": "", "os_compute_api:os-pause-server:pause": "", "os_compute_api:os-pause-server:unpause": "", "os_compute_api:os-quota-sets:show": "", "os_compute_api:os-quota-sets:update": "", "os_compute_api:os-quota-sets:delete": "", "os_compute_api:os-quota-sets:detail": "", "os_compute_api:os-quota-sets:defaults": "", "os_compute_api:os-quota-class-sets:update": "", "os_compute_api:os-quota-class-sets:show": "", "os_compute_api:os-rescue": "", "os_compute_api:os-unrescue": "", "os_compute_api:os-security-groups:list": "", "os_compute_api:os-security-groups:add": "", "os_compute_api:os-security-groups:remove": "", "os_compute_api:os-server-diagnostics": "", "os_compute_api:os-server-password:show": "", "os_compute_api:os-server-password:clear": "", "os_compute_api:os-server-external-events:create": "", "os_compute_api:os-server-tags:index": "", "os_compute_api:os-server-tags:show": "", "os_compute_api:os-server-tags:update": "", "os_compute_api:os-server-tags:update_all": "", "os_compute_api:os-server-tags:delete": "", "os_compute_api:os-server-tags:delete_all": "", "os_compute_api:os-server-groups:show": "", "os_compute_api:os-server-groups:index": "", "os_compute_api:os-server-groups:index:all_projects": "", "os_compute_api:os-server-groups:create": "", "os_compute_api:os-server-groups:delete": "", "os_compute_api:os-services:list": "", "os_compute_api:os-services:update": "", "os_compute_api:os-services:delete": "", "os_compute_api:os-shelve:shelve": "", "os_compute_api:os-shelve:shelve_offload": "", "os_compute_api:os-simple-tenant-usage:show": "", "os_compute_api:os-simple-tenant-usage:list": "", "os_compute_api:os-shelve:unshelve": "", "os_compute_api:os-suspend-server:suspend": "", "os_compute_api:os-suspend-server:resume": "", "os_compute_api:os-volumes": "", "os_compute_api:os-volumes-attachments:index": "", "os_compute_api:os-volumes-attachments:show": "", "os_compute_api:os-volumes-attachments:create": "", "os_compute_api:os-volumes-attachments:update": "", "os_compute_api:os-volumes-attachments:swap":"", "os_compute_api:os-volumes-attachments:delete": "", "os_compute_api:os-availability-zone:list": "", "os_compute_api:os-availability-zone:detail": "", "os_compute_api:limits": "", "os_compute_api:os-assisted-volume-snapshots:create": "", "os_compute_api:os-assisted-volume-snapshots:delete": "", "os_compute_api:server-metadata:create": "", "os_compute_api:server-metadata:update": "", 
"os_compute_api:server-metadata:update_all": "", "os_compute_api:server-metadata:delete": "", "os_compute_api:server-metadata:show": "", "os_compute_api:server-metadata:index": "", "compute:server:topology:index": "", "compute:server:topology:host:index": "is_admin:True" } """ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/fake_processutils.py0000664000175000017500000000665000000000000021760 0ustar00zuulzuul00000000000000# Copyright (c) 2011 Citrix Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """This modules stubs out functions in oslo_concurrency.processutils.""" import re from eventlet import greenthread from oslo_concurrency import processutils from oslo_log import log as logging import six LOG = logging.getLogger(__name__) _fake_execute_repliers = [] _fake_execute_log = [] def fake_execute_get_log(): return _fake_execute_log def fake_execute_clear_log(): global _fake_execute_log _fake_execute_log = [] def fake_execute_set_repliers(repliers): """Allows the client to configure replies to commands.""" global _fake_execute_repliers _fake_execute_repliers = repliers def fake_execute_default_reply_handler(*ignore_args, **ignore_kwargs): """A reply handler for commands that haven't been added to the reply list. Returns empty strings for stdout and stderr. """ return '', '' def fake_execute(*cmd_parts, **kwargs): """This function stubs out execute. It optionally executes a preconfigured function to return expected data. 
""" global _fake_execute_repliers process_input = kwargs.get('process_input', None) check_exit_code = kwargs.get('check_exit_code', 0) delay_on_retry = kwargs.get('delay_on_retry', True) attempts = kwargs.get('attempts', 1) run_as_root = kwargs.get('run_as_root', False) cmd_str = ' '.join(str(part) for part in cmd_parts) LOG.debug("Faking execution of cmd (subprocess): %s", cmd_str) _fake_execute_log.append(cmd_str) reply_handler = fake_execute_default_reply_handler for fake_replier in _fake_execute_repliers: if re.match(fake_replier[0], cmd_str): reply_handler = fake_replier[1] LOG.debug('Faked command matched %s', fake_replier[0]) break if isinstance(reply_handler, six.string_types): # If the reply handler is a string, return it as stdout reply = reply_handler, '' else: try: # Alternative is a function, so call it reply = reply_handler(cmd_parts, process_input=process_input, delay_on_retry=delay_on_retry, attempts=attempts, run_as_root=run_as_root, check_exit_code=check_exit_code) except processutils.ProcessExecutionError as e: LOG.debug('Faked command raised an exception %s', e) raise LOG.debug("Reply to faked command is stdout='%(stdout)s' " "stderr='%(stderr)s'", {'stdout': reply[0], 'stderr': reply[1]}) # Replicate the sleep call in the real function greenthread.sleep(0) return reply def stub_out_processutils_execute(test): fake_execute_set_repliers([]) fake_execute_clear_log() test.stub_out('oslo_concurrency.processutils.execute', fake_execute) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/fake_request_spec.py0000664000175000017500000000665500000000000021730 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import uuidutils from nova import context from nova import objects from nova.tests.unit import fake_flavor INSTANCE_NUMA_TOPOLOGY = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell(id=0, cpuset=set([1, 2]), memory=512), objects.InstanceNUMACell(id=1, cpuset=set([3, 4]), memory=512)]) INSTANCE_NUMA_TOPOLOGY.obj_reset_changes(recursive=True) IMAGE_META = objects.ImageMeta.from_dict( {'status': 'active', 'container_format': 'bare', 'min_ram': 0, 'updated_at': '2014-12-12T11:16:36.000000', 'min_disk': 0, 'owner': '2d8b9502858c406ebee60f0849486222', 'protected': 'yes', 'properties': { 'os_type': 'Linux', 'hw_video_model': 'vga', 'hw_video_ram': '512', 'hw_qemu_guest_agent': 'yes', 'hw_scsi_model': 'virtio-scsi', }, 'size': 213581824, 'name': 'f16-x86_64-openstack-sda', 'checksum': '755122332caeb9f661d5c978adb8b45f', 'created_at': '2014-12-10T16:23:14.000000', 'disk_format': 'qcow2', 'id': 'c8b1790e-a07d-4971-b137-44f2432936cd', } ) IMAGE_META.obj_reset_changes(recursive=True) PCI_REQUESTS = objects.InstancePCIRequests( requests=[objects.InstancePCIRequest(count=1), objects.InstancePCIRequest(count=2)]) PCI_REQUESTS.obj_reset_changes(recursive=True) def fake_db_spec(): req_obj = fake_spec_obj() # NOTE(takashin): There is not 'retry' information in the DB table. del req_obj.retry db_request_spec = { 'id': 1, 'instance_uuid': req_obj.instance_uuid, 'spec': jsonutils.dumps(req_obj.obj_to_primitive()), } return db_request_spec def fake_spec_obj(remove_id=False): ctxt = context.RequestContext('fake', 'fake') req_obj = objects.RequestSpec(ctxt) if not remove_id: req_obj.id = 42 req_obj.instance_uuid = uuidutils.generate_uuid() req_obj.image = IMAGE_META req_obj.numa_topology = INSTANCE_NUMA_TOPOLOGY req_obj.pci_requests = PCI_REQUESTS req_obj.flavor = fake_flavor.fake_flavor_obj(ctxt) req_obj.retry = objects.SchedulerRetries() req_obj.limits = objects.SchedulerLimits() req_obj.instance_group = objects.InstanceGroup(uuid=uuids.instgroup) req_obj.project_id = 'fake' req_obj.user_id = 'fake-user' req_obj.num_instances = 1 req_obj.availability_zone = None req_obj.ignore_hosts = ['host2', 'host4'] req_obj.force_hosts = ['host1', 'host3'] req_obj.force_nodes = ['node1', 'node2'] req_obj.scheduler_hints = {'hint': ['over-there']} req_obj.requested_destination = None # This should never be a changed field req_obj.obj_reset_changes(['id']) return req_obj ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/fake_requests.py0000664000175000017500000000320600000000000021066 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Fakes relating to the `requests` module.""" import requests class FakeResponse(requests.Response): def __init__(self, status_code, content=None, headers=None): """A requests.Response that can be used as a mock return_value. 
A key feature is that the instance will evaluate to True or False like a real Response, based on the status_code. Properties like ok, status_code, text, and content, and methods like json(), work as expected based on the inputs. :param status_code: Integer HTTP response code (200, 404, etc.) :param content: String supplying the payload content of the response. Using a json-encoded string will make the json() method behave as expected. :param headers: Dict of HTTP header values to set. """ super(FakeResponse, self).__init__() self.status_code = status_code if content: self._content = content.encode('utf-8') self.encoding = 'utf-8' if headers: self.headers = headers ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/fake_server_actions.py0000664000175000017500000001174700000000000022252 0ustar00zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime FAKE_UUID = 'b48316c5-71e8-45e4-9884-6c78055b9b13' FAKE_REQUEST_ID1 = 'req-3293a3f1-b44c-4609-b8d2-d81b105636b8' FAKE_REQUEST_ID2 = 'req-25517360-b757-47d3-be45-0e8d2a01b36a' FAKE_ACTION_ID1 = 123 FAKE_ACTION_ID2 = 456 FAKE_HOST_ID1 = '74824069503a752aaa3abf194f73200fcdd117ef70ab28b576e5bf7a' FAKE_HOST_ID2 = '858f5ed465b4967dd1306a38078e9b83b8705bdedfa7f16f898119b4' FAKE_ACTIONS = { FAKE_UUID: { FAKE_REQUEST_ID1: {'id': FAKE_ACTION_ID1, 'action': 'reboot', 'instance_uuid': FAKE_UUID, 'request_id': FAKE_REQUEST_ID1, 'project_id': '147', 'user_id': '789', 'start_time': datetime.datetime( 2012, 12, 5, 0, 0, 0, 0), 'finish_time': None, 'message': '', 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': False, }, FAKE_REQUEST_ID2: {'id': FAKE_ACTION_ID2, 'action': 'resize', 'instance_uuid': FAKE_UUID, 'request_id': FAKE_REQUEST_ID2, 'user_id': '789', 'project_id': '842', 'start_time': datetime.datetime( 2012, 12, 5, 1, 0, 0, 0), 'finish_time': None, 'message': '', 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': False, } } } FAKE_EVENTS = { FAKE_ACTION_ID1: [{'id': 1, 'action_id': FAKE_ACTION_ID1, 'event': 'schedule', 'start_time': datetime.datetime( 2012, 12, 5, 1, 0, 2, 0), 'finish_time': datetime.datetime( 2012, 12, 5, 1, 2, 0, 0), 'result': 'Success', 'traceback': '', 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': False, 'host': 'host1', 'hostId': FAKE_HOST_ID1, 'details': None }, {'id': 2, 'action_id': FAKE_ACTION_ID1, 'event': 'compute_create', 'start_time': datetime.datetime( 2012, 12, 5, 1, 3, 0, 0), 'finish_time': datetime.datetime( 2012, 12, 5, 1, 4, 0, 0), 'result': 'Success', 'traceback': '', 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': False, 'host': 'host1', 'hostId': FAKE_HOST_ID1, 'details': None } ], FAKE_ACTION_ID2: [{'id': 3, 'action_id': FAKE_ACTION_ID2, 'event': 'schedule', 'start_time': datetime.datetime( 2012, 12, 5, 3, 0, 0, 0), 'finish_time': datetime.datetime( 2012, 12, 5, 3, 2, 0, 0), 'result': 
'Error', 'traceback': '', 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': False, 'host': 'host2', 'hostId': FAKE_HOST_ID2, 'details': None } ] } def fake_action_event_start(*args): return FAKE_EVENTS[FAKE_ACTION_ID1][0] def fake_action_event_finish(*args): return FAKE_EVENTS[FAKE_ACTION_ID1][0] def stub_out_action_events(test): test.stub_out('nova.db.api.action_event_start', fake_action_event_start) test.stub_out('nova.db.api.action_event_finish', fake_action_event_finish) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/fake_volume.py0000664000175000017500000002264100000000000020526 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Implementation of a fake volume API.""" from oslo_log import log as logging from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils import nova.conf from nova import exception LOG = logging.getLogger(__name__) CONF = nova.conf.CONF class fake_volume(object): user_uuid = '4a3cd440-b9c2-11e1-afa6-0800200c9a66' instance_uuid = '4a3cd441-b9c2-11e1-afa6-0800200c9a66' def __init__(self, size, name, description, volume_id, snapshot, volume_type, metadata, availability_zone): snapshot_id = None if snapshot is not None: snapshot_id = snapshot['id'] if volume_id is None: volume_id = uuids.fake1 self.vol = { 'created_at': timeutils.utcnow(), 'deleted_at': None, 'updated_at': timeutils.utcnow(), 'uuid': 'WTF', 'deleted': False, 'id': volume_id, 'user_id': self.user_uuid, 'project_id': 'fake-project-id', 'snapshot_id': snapshot_id, 'host': None, 'size': size, 'availability_zone': availability_zone, 'instance_uuid': None, 'mountpoint': None, 'attach_time': timeutils.utcnow(), 'status': 'available', 'attach_status': 'detached', 'scheduled_at': None, 'launched_at': None, 'terminated_at': None, 'display_name': name, 'display_description': description, 'provider_location': 'fake-location', 'provider_auth': 'fake-auth', 'volume_type_id': 99, 'multiattach': False } def get(self, key, default=None): return self.vol[key] def __setitem__(self, key, value): self.vol[key] = value def __getitem__(self, key): return self.vol[key] class fake_snapshot(object): user_uuid = '4a3cd440-b9c2-11e1-afa6-0800200c9a66' instance_uuid = '4a3cd441-b9c2-11e1-afa6-0800200c9a66' def __init__(self, volume_id, size, name, desc, id=None): if id is None: id = uuids.fake2 self.snap = { 'created_at': timeutils.utcnow(), 'deleted_at': None, 'updated_at': timeutils.utcnow(), 'uuid': 'WTF', 'deleted': False, 'id': str(id), 'volume_id': volume_id, 'status': 'available', 'progress': '100%', 'volume_size': 1, 'display_name': name, 'display_description': desc, 'user_id': self.user_uuid, 'project_id': 'fake-project-id' } def get(self, key, default=None): return self.snap[key] def __setitem__(self, key, value): self.snap[key] = value def __getitem__(self, key): return self.snap[key] class API(object): volume_list = [] snapshot_list = [] _instance = None class 
Singleton(object): def __init__(self): self.API = None def __init__(self): if API._instance is None: API._instance = API.Singleton() self._EventHandler_instance = API._instance def create(self, context, size, name, description, snapshot=None, volume_type=None, metadata=None, availability_zone=None): v = fake_volume(size, name, description, None, snapshot, volume_type, metadata, availability_zone) self.volume_list.append(v.vol) LOG.info('creating volume %s', v.vol['id']) return v.vol def create_with_kwargs(self, context, **kwargs): volume_id = kwargs.get('volume_id', None) v = fake_volume(kwargs['size'], kwargs['name'], kwargs['description'], str(volume_id), None, None, None, None) if kwargs.get('status', None) is not None: v.vol['status'] = kwargs['status'] if kwargs['host'] is not None: v.vol['host'] = kwargs['host'] if kwargs['attach_status'] is not None: v.vol['attach_status'] = kwargs['attach_status'] if kwargs.get('snapshot_id', None) is not None: v.vol['snapshot_id'] = kwargs['snapshot_id'] self.volume_list.append(v.vol) return v.vol def get(self, context, volume_id): if str(volume_id) == '87654321': return {'id': volume_id, 'attach_time': '13:56:24', 'attach_status': 'attached', 'status': 'in-use'} for v in self.volume_list: if v['id'] == str(volume_id): return v raise exception.VolumeNotFound(volume_id=volume_id) def get_all(self, context): return self.volume_list def delete(self, context, volume_id): LOG.info('deleting volume %s', volume_id) self.volume_list = [v for v in self.volume_list if v['id'] != volume_id] def check_availability_zone(self, context, volume, instance=None): if instance and not CONF.cinder.cross_az_attach: if instance['availability_zone'] != volume['availability_zone']: msg = "Instance and volume not in same availability_zone" raise exception.InvalidVolume(reason=msg) def attach(self, context, volume_id, instance_uuid, mountpoint, mode='rw'): LOG.info('attaching volume %s', volume_id) volume = self.get(context, volume_id) volume['status'] = 'in-use' volume['attach_status'] = 'attached' volume['attach_time'] = timeutils.utcnow() volume['multiattach'] = True volume['attachments'] = {instance_uuid: {'attachment_id': uuids.fake3, 'mountpoint': mountpoint}} def reset_fake_api(self, context): del self.volume_list[:] del self.snapshot_list[:] def detach(self, context, volume_id, instance_uuid, attachment_id=None): LOG.info('detaching volume %s', volume_id) volume = self.get(context, volume_id) volume['status'] = 'available' volume['attach_status'] = 'detached' def initialize_connection(self, context, volume_id, connector): return {'driver_volume_type': 'iscsi', 'data': {}} def terminate_connection(self, context, volume_id, connector): return None def get_snapshot(self, context, snapshot_id): for snap in self.snapshot_list: if snap['id'] == str(snapshot_id): return snap def get_all_snapshots(self, context): return self.snapshot_list def create_snapshot(self, context, volume_id, name, description, id=None): volume = self.get(context, volume_id) snapshot = fake_snapshot(volume['id'], volume['size'], name, description, id) self.snapshot_list.append(snapshot.snap) return snapshot.snap def create_snapshot_with_kwargs(self, context, **kwargs): snapshot = fake_snapshot(kwargs.get('volume_id'), kwargs.get('volume_size'), kwargs.get('name'), kwargs.get('description'), kwargs.get('snap_id')) status = kwargs.get('status', None) snapshot.snap['status'] = status self.snapshot_list.append(snapshot.snap) return snapshot.snap def create_snapshot_force(self, context, volume_id, name, 
description, id=None): volume = self.get(context, volume_id) snapshot = fake_snapshot(volume['id'], volume['size'], name, description, id) self.snapshot_list.append(snapshot.snap) return snapshot.snap def delete_snapshot(self, context, snapshot_id): self.snapshot_list = [s for s in self.snapshot_list if s['id'] != snapshot_id] def reserve_volume(self, context, volume_id): LOG.info('reserving volume %s', volume_id) volume = self.get(context, volume_id) volume['status'] = 'attaching' def unreserve_volume(self, context, volume_id): LOG.info('unreserving volume %s', volume_id) volume = self.get(context, volume_id) volume['status'] = 'available' def begin_detaching(self, context, volume_id): LOG.info('begin detaching volume %s', volume_id) volume = self.get(context, volume_id) volume['status'] = 'detaching' def roll_detaching(self, context, volume_id): LOG.info('roll detaching volume %s', volume_id) volume = self.get(context, volume_id) volume['status'] = 'in-use' ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.542468 nova-21.2.4/nova/tests/unit/image/0000775000175000017500000000000000000000000016734 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/image/__init__.py0000664000175000017500000000000000000000000021033 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/image/abs.tar.gz0000664000175000017500000000023100000000000020624 0ustar00zuulzuul00000000000000NA 0=EnLSJFEhA"f30^%K y99z.'*mSL/kg]ͻ Kql7ok<+ڟ?rvyQ (././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/image/fake.py0000664000175000017500000003177500000000000020231 0ustar00zuulzuul00000000000000# Copyright 2011 Justin Santa Barbara # Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Implementation of a fake image service.""" import copy import datetime from oslo_log import log as logging from oslo_utils import uuidutils import nova.conf from nova import exception from nova import objects from nova.objects import fields as obj_fields from nova.tests import fixtures as nova_fixtures CONF = nova.conf.CONF LOG = logging.getLogger(__name__) AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID = '70a599e0-31e7-49b7-b260-868f441e862b' class _FakeImageService(object): """Mock (fake) image service for unit testing.""" def __init__(self): self.images = {} # NOTE(justinsb): The OpenStack API can't upload an image? # So, make sure we've got one.. 
timestamp = datetime.datetime(2011, 1, 1, 1, 2, 3) image1 = {'id': '155d900f-4e14-4e4c-a73d-069cbf4541e6', 'name': 'fakeimage123456', 'created_at': timestamp, 'updated_at': timestamp, 'deleted_at': None, 'deleted': False, 'status': 'active', 'is_public': False, 'container_format': 'raw', 'disk_format': 'raw', 'size': '25165824', 'min_ram': 0, 'min_disk': 0, 'protected': False, 'visibility': 'public', 'tags': ['tag1', 'tag2'], 'properties': { 'kernel_id': 'nokernel', 'ramdisk_id': 'nokernel', 'architecture': obj_fields.Architecture.X86_64}} image2 = {'id': 'a2459075-d96c-40d5-893e-577ff92e721c', 'name': 'fakeimage123456', 'created_at': timestamp, 'updated_at': timestamp, 'deleted_at': None, 'deleted': False, 'status': 'active', 'is_public': True, 'container_format': 'ami', 'disk_format': 'ami', 'size': '58145823', 'min_ram': 0, 'min_disk': 0, 'protected': False, 'visibility': 'public', 'tags': [], 'properties': {'kernel_id': 'nokernel', 'ramdisk_id': 'nokernel'}} image3 = {'id': '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6', 'name': 'fakeimage123456', 'created_at': timestamp, 'updated_at': timestamp, 'deleted_at': None, 'deleted': False, 'status': 'active', 'is_public': True, 'container_format': 'bare', 'disk_format': 'raw', 'size': '83594576', 'min_ram': 0, 'min_disk': 0, 'protected': False, 'visibility': 'public', 'tags': ['tag3', 'tag4'], 'properties': { 'kernel_id': 'nokernel', 'ramdisk_id': 'nokernel', 'architecture': obj_fields.Architecture.X86_64}} image4 = {'id': 'cedef40a-ed67-4d10-800e-17455edce175', 'name': 'fakeimage123456', 'created_at': timestamp, 'updated_at': timestamp, 'deleted_at': None, 'deleted': False, 'status': 'active', 'is_public': True, 'container_format': 'ami', 'disk_format': 'ami', 'size': '84035174', 'min_ram': 0, 'min_disk': 0, 'protected': False, 'visibility': 'public', 'tags': [], 'properties': {'kernel_id': 'nokernel', 'ramdisk_id': 'nokernel'}} image5 = {'id': 'c905cedb-7281-47e4-8a62-f26bc5fc4c77', 'name': 'fakeimage123456', 'created_at': timestamp, 'updated_at': timestamp, 'deleted_at': None, 'deleted': False, 'status': 'active', 'is_public': True, 'container_format': 'ami', 'disk_format': 'ami', 'size': '26360814', 'min_ram': 0, 'min_disk': 0, 'protected': False, 'visibility': 'public', 'tags': [], 'properties': {'kernel_id': '155d900f-4e14-4e4c-a73d-069cbf4541e6', 'ramdisk_id': None}} image6 = {'id': 'a440c04b-79fa-479c-bed1-0b816eaec379', 'name': 'fakeimage6', 'created_at': timestamp, 'updated_at': timestamp, 'deleted_at': None, 'deleted': False, 'status': 'active', 'is_public': False, 'container_format': 'ova', 'disk_format': 'vhd', 'size': '49163826', 'min_ram': 0, 'min_disk': 0, 'protected': False, 'visibility': 'public', 'tags': [], 'properties': { 'kernel_id': 'nokernel', 'ramdisk_id': 'nokernel', 'architecture': obj_fields.Architecture.X86_64, 'auto_disk_config': 'False'}} image7 = {'id': AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID, 'name': 'fakeimage7', 'created_at': timestamp, 'updated_at': timestamp, 'deleted_at': None, 'deleted': False, 'status': 'active', 'is_public': False, 'container_format': 'ova', 'disk_format': 'vhd', 'size': '74185822', 'min_ram': 0, 'min_disk': 0, 'protected': False, 'visibility': 'public', 'tags': [], 'properties': { 'kernel_id': 'nokernel', 'ramdisk_id': 'nokernel', 'architecture': obj_fields.Architecture.X86_64, 'auto_disk_config': 'True'}} self.create(None, image1) self.create(None, image2) self.create(None, image3) self.create(None, image4) self.create(None, image5) self.create(None, image6) self.create(None, image7) 
self._imagedata = {} super(_FakeImageService, self).__init__() # TODO(bcwaldon): implement optional kwargs such as limit, sort_dir def detail(self, context, **kwargs): """Return list of detailed image information.""" return copy.deepcopy(list(self.images.values())) def download(self, context, image_id, data=None, dst_path=None, trusted_certs=None): self.show(context, image_id) if data: data.write(self._imagedata.get(image_id, b'')) elif dst_path: with open(dst_path, 'wb') as data: data.write(self._imagedata.get(image_id, b'')) def show(self, context, image_id, include_locations=False, show_deleted=True): """Get data about specified image. Returns a dict containing image data for the given opaque image id. """ image = self.images.get(str(image_id)) if image: return copy.deepcopy(image) LOG.warning('Unable to find image id %s. Have images: %s', image_id, self.images) raise exception.ImageNotFound(image_id=image_id) def create(self, context, metadata, data=None): """Store the image data and return the new image id. :raises: Duplicate if the image already exist. """ image_id = str(metadata.get('id', uuidutils.generate_uuid())) metadata['id'] = image_id if image_id in self.images: raise exception.CouldNotUploadImage(image_id=image_id) image_meta = copy.deepcopy(metadata) # Glance sets the size value when an image is created, so we # need to do that here to fake things out if it's not provided # by the caller. This is needed to avoid a KeyError in the # image-size API. if 'size' not in image_meta: image_meta['size'] = None # Similarly, Glance provides the status on the image once it's created # and this is checked in the compute API when booting a server from # this image, so we just fake it out to be 'active' even though this # is mostly a lie on a newly created image. if 'status' not in metadata: image_meta['status'] = 'active' # The owner of the image is by default the request context project_id. if context and 'owner' not in image_meta.get('properties', {}): # Note that normally "owner" is a top-level field in an image # resource in glance but we have to fake this out for the images # proxy API by throwing it into the generic "properties" dict. image_meta.get('properties', {})['owner'] = context.project_id self.images[image_id] = image_meta if data: self._imagedata[image_id] = data.read() return self.images[image_id] def update(self, context, image_id, metadata, data=None, purge_props=False): """Replace the contents of the given image with the new data. :raises: ImageNotFound if the image does not exist. """ if not self.images.get(image_id): raise exception.ImageNotFound(image_id=image_id) if purge_props: self.images[image_id] = copy.deepcopy(metadata) else: image = self.images[image_id] try: image['properties'].update(metadata.pop('properties')) except KeyError: pass image.update(metadata) return self.images[image_id] def delete(self, context, image_id): """Delete the given image. :raises: ImageNotFound if the image does not exist. 
""" removed = self.images.pop(image_id, None) if not removed: raise exception.ImageNotFound(image_id=image_id) def get_location(self, context, image_id): if image_id in self.images: return 'fake_location' return None _fakeImageService = _FakeImageService() def FakeImageService(): return _fakeImageService def FakeImageService_reset(): global _fakeImageService _fakeImageService = _FakeImageService() def get_valid_image_id(): return AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID def stub_out_image_service(test): """Stubs out the image service for the test with the FakeImageService :param test: instance of nova.test.TestCase :returns: The stubbed out FakeImageService object """ image_service = FakeImageService() test.stub_out('nova.image.glance.get_remote_image_service', lambda x, y: (image_service, y)) test.stub_out('nova.image.glance.get_default_image_service', lambda: image_service) test.useFixture(nova_fixtures.ConfPatcher( group="glance", api_servers=['http://localhost:9292'])) return image_service def fake_image_obj(default_image_meta=None, default_image_props=None, variable_image_props=None): """Helper for constructing a test ImageMeta object with attributes and properties coming from a combination of (probably hard-coded) values within a test, and (optionally) variable values from the test's caller, if the test is actually a helper written to be reusable and run multiple times with different parameters from different "wrapper" tests. """ image_meta_props = default_image_props or {} if variable_image_props: image_meta_props.update(variable_image_props) test_image_meta = default_image_meta or {"disk_format": "raw"} if 'name' not in test_image_meta: # NOTE(aspiers): the name is specified here in case it's needed # by the logging in nova.virt.hardware.get_mem_encryption_constraint() test_image_meta['name'] = 'fake_image' test_image_meta.update({ 'properties': image_meta_props, }) return objects.ImageMeta.from_dict(test_image_meta) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/image/rel.tar.gz0000664000175000017500000000024500000000000020646 0ustar00zuulzuul00000000000000,NA 0=ENdj2=O]tW"1v!H$ eH%OnHəU6˾`lbTl)yG ?sa_b}1ݷ3B;<(././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/image/test_fake.py0000664000175000017500000001123200000000000021252 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import datetime from six.moves import StringIO from nova import context from nova import exception from nova import test import nova.tests.unit.image.fake class FakeImageServiceTestCase(test.NoDBTestCase): def setUp(self): super(FakeImageServiceTestCase, self).setUp() self.image_service = nova.tests.unit.image.fake.FakeImageService() self.context = context.get_admin_context() def tearDown(self): super(FakeImageServiceTestCase, self).tearDown() nova.tests.unit.image.fake.FakeImageService_reset() def test_detail(self): res = self.image_service.detail(self.context) for image in res: keys = set(image.keys()) self.assertEqual(keys, set(['id', 'name', 'created_at', 'updated_at', 'deleted_at', 'deleted', 'status', 'is_public', 'properties', 'disk_format', 'container_format', 'size', 'min_disk', 'min_ram', 'protected', 'tags', 'visibility'])) self.assertIsInstance(image['created_at'], datetime.datetime) self.assertIsInstance(image['updated_at'], datetime.datetime) if not (isinstance(image['deleted_at'], datetime.datetime) or image['deleted_at'] is None): self.fail('image\'s "deleted_at" attribute was neither a ' 'datetime object nor None') def check_is_bool(image, key): val = image.get('deleted') if not isinstance(val, bool): self.fail('image\'s "%s" attribute wasn\'t ' 'a bool: %r' % (key, val)) check_is_bool(image, 'deleted') check_is_bool(image, 'is_public') def test_show_raises_imagenotfound_for_invalid_id(self): self.assertRaises(exception.ImageNotFound, self.image_service.show, self.context, 'this image does not exist') def test_create_adds_id(self): index = self.image_service.detail(self.context) image_count = len(index) self.image_service.create(self.context, {}) index = self.image_service.detail(self.context) self.assertEqual(len(index), image_count + 1) self.assertTrue(index[0]['id']) def test_create_keeps_id(self): self.image_service.create(self.context, {'id': '34'}) self.image_service.show(self.context, '34') def test_create_rejects_duplicate_ids(self): self.image_service.create(self.context, {'id': '34'}) self.assertRaises(exception.CouldNotUploadImage, self.image_service.create, self.context, {'id': '34'}) # Make sure there's still one left self.image_service.show(self.context, '34') def test_update(self): self.image_service.create(self.context, {'id': '34', 'foo': 'bar'}) self.image_service.update(self.context, '34', {'id': '34', 'foo': 'baz'}) img = self.image_service.show(self.context, '34') self.assertEqual(img['foo'], 'baz') def test_delete(self): self.image_service.create(self.context, {'id': '34', 'foo': 'bar'}) self.image_service.delete(self.context, '34') self.assertRaises(exception.NotFound, self.image_service.show, self.context, '34') def test_create_then_get(self): blob = 'some data' s1 = StringIO(blob) self.image_service.create(self.context, {'id': '32', 'foo': 'bar'}, data=s1) s2 = StringIO() self.image_service.download(self.context, '32', data=s2) self.assertEqual(s2.getvalue(), blob, 'Did not get blob back intact') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/image/test_glance.py0000664000175000017500000025703400000000000021611 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import datetime import cryptography from cursive import exception as cursive_exception import ddt import glanceclient.exc from glanceclient.v1 import images from glanceclient.v2 import schemas from keystoneauth1 import loading as ks_loading import mock from oslo_utils.fixture import uuidsentinel as uuids import six from six.moves import StringIO import testtools import nova.conf from nova import context from nova import exception from nova.image import glance from nova import objects from nova import service_auth from nova import test CONF = nova.conf.CONF NOW_GLANCE_FORMAT = "2010-10-11T10:30:22.000000" class tzinfo(datetime.tzinfo): @staticmethod def utcoffset(*args, **kwargs): return datetime.timedelta() NOW_DATETIME = datetime.datetime(2010, 10, 11, 10, 30, 22, tzinfo=tzinfo()) class FakeSchema(object): def __init__(self, raw_schema): self.raw_schema = raw_schema self.base_props = ('checksum', 'container_format', 'created_at', 'direct_url', 'disk_format', 'file', 'id', 'locations', 'min_disk', 'min_ram', 'name', 'owner', 'protected', 'schema', 'self', 'size', 'status', 'tags', 'updated_at', 'virtual_size', 'visibility') def is_base_property(self, prop_name): return prop_name in self.base_props def raw(self): return copy.deepcopy(self.raw_schema) image_fixtures = { 'active_image_v1': { 'checksum': 'eb9139e4942121f22bbc2afc0400b2a4', 'container_format': 'ami', 'created_at': '2015-08-31T19:37:41Z', 'deleted': False, 'disk_format': 'ami', 'id': 'da8500d5-8b80-4b9c-8410-cc57fb8fb9d5', 'is_public': True, 'min_disk': 0, 'min_ram': 0, 'name': 'cirros-0.3.4-x86_64-uec', 'owner': 'ea583a4f34444a12bbe4e08c2418ba1f', 'properties': { 'kernel_id': 'f6ebd5f0-b110-4406-8c1e-67b28d4e85e7', 'ramdisk_id': '868efefc-4f2d-4ed8-82b1-7e35576a7a47'}, 'protected': False, 'size': 25165824, 'status': 'active', 'updated_at': '2015-08-31T19:37:45Z'}, 'active_image_v2': { 'checksum': 'eb9139e4942121f22bbc2afc0400b2a4', 'container_format': 'ami', 'created_at': '2015-08-31T19:37:41Z', 'direct_url': 'swift+config://ref1/glance/' 'da8500d5-8b80-4b9c-8410-cc57fb8fb9d5', 'disk_format': 'ami', 'file': '/v2/images/' 'da8500d5-8b80-4b9c-8410-cc57fb8fb9d5/file', 'id': 'da8500d5-8b80-4b9c-8410-cc57fb8fb9d5', 'kernel_id': 'f6ebd5f0-b110-4406-8c1e-67b28d4e85e7', 'locations': [ {'metadata': {}, 'url': 'swift+config://ref1/glance/' 'da8500d5-8b80-4b9c-8410-cc57fb8fb9d5'}], 'min_disk': 0, 'min_ram': 0, 'name': 'cirros-0.3.4-x86_64-uec', 'owner': 'ea583a4f34444a12bbe4e08c2418ba1f', 'protected': False, 'ramdisk_id': '868efefc-4f2d-4ed8-82b1-7e35576a7a47', 'schema': '/v2/schemas/image', 'size': 25165824, 'status': 'active', 'tags': [], 'updated_at': '2015-08-31T19:37:45Z', 'virtual_size': None, 'visibility': 'public'}, 'empty_image_v1': { 'created_at': '2015-09-01T22:37:32.000000', 'deleted': False, 'id': '885d1cb0-9f5c-4677-9d03-175be7f9f984', 'is_public': False, 'min_disk': 0, 'min_ram': 0, 'owner': 'ea583a4f34444a12bbe4e08c2418ba1f', 'properties': {}, 'protected': False, 'size': 0, 'status': 'queued', 'updated_at': '2015-09-01T22:37:32.000000' }, 'empty_image_v2': { 'checksum': None, 'container_format': None, 
'created_at': '2015-09-01T22:37:32Z', 'disk_format': None, 'file': '/v2/images/885d1cb0-9f5c-4677-9d03-175be7f9f984/file', 'id': '885d1cb0-9f5c-4677-9d03-175be7f9f984', 'locations': [], 'min_disk': 0, 'min_ram': 0, 'name': None, 'owner': 'ea583a4f34444a12bbe4e08c2418ba1f', 'protected': False, 'schema': '/v2/schemas/image', 'size': None, 'status': 'queued', 'tags': [], 'updated_at': '2015-09-01T22:37:32Z', 'virtual_size': None, 'visibility': 'private' }, 'custom_property_image_v1': { 'checksum': 'e533283e6aac072533d1d091a7d2e413', 'container_format': 'bare', 'created_at': '2015-09-02T00:31:16.000000', 'deleted': False, 'disk_format': 'qcow2', 'id': '10ca6b6b-48f4-43ac-8159-aa9e9353f5e4', 'is_public': False, 'min_disk': 0, 'min_ram': 0, 'name': 'fake_name', 'owner': 'ea583a4f34444a12bbe4e08c2418ba1f', 'properties': {'image_type': 'fake_image_type'}, 'protected': False, 'size': 616, 'status': 'active', 'updated_at': '2015-09-02T00:31:17.000000' }, 'custom_property_image_v2': { 'checksum': 'e533283e6aac072533d1d091a7d2e413', 'container_format': 'bare', 'created_at': '2015-09-02T00:31:16Z', 'disk_format': 'qcow2', 'file': '/v2/images/10ca6b6b-48f4-43ac-8159-aa9e9353f5e4/file', 'id': '10ca6b6b-48f4-43ac-8159-aa9e9353f5e4', 'image_type': 'fake_image_type', 'min_disk': 0, 'min_ram': 0, 'name': 'fake_name', 'owner': 'ea583a4f34444a12bbe4e08c2418ba1f', 'protected': False, 'schema': '/v2/schemas/image', 'size': 616, 'status': 'active', 'tags': [], 'updated_at': '2015-09-02T00:31:17Z', 'virtual_size': None, 'visibility': 'private' } } def fake_glance_response(data): with mock.patch('glanceclient.common.utils._extract_request_id'): return glanceclient.common.utils.RequestIdProxy([data, None]) class ImageV2(dict): # Wrapper class that is used to comply with dual nature of # warlock objects, that are inherited from dict and have 'schema' # attribute. schema = mock.MagicMock() class TestConversions(test.NoDBTestCase): def test_convert_timestamps_to_datetimes(self): fixture = {'name': None, 'properties': {}, 'status': None, 'is_public': None, 'created_at': NOW_GLANCE_FORMAT, 'updated_at': NOW_GLANCE_FORMAT, 'deleted_at': NOW_GLANCE_FORMAT} result = glance._convert_timestamps_to_datetimes(fixture) self.assertEqual(result['created_at'], NOW_DATETIME) self.assertEqual(result['updated_at'], NOW_DATETIME) self.assertEqual(result['deleted_at'], NOW_DATETIME) def _test_extracting_missing_attributes(self, include_locations): # Verify behavior from glance objects that are missing attributes # TODO(jaypipes): Find a better way of testing this crappy # glanceclient magic object stuff. 
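# The MyFakeGlanceImage class below stands in for those warlock-style
# objects: attribute access is proxied to an underlying dict of raw image
# metadata and unknown keys surface as AttributeError, which is exactly the
# behaviour _extract_attributes has to tolerate when optional fields are
# missing from a glance response.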
class MyFakeGlanceImage(object): def __init__(self, metadata): IMAGE_ATTRIBUTES = ['size', 'owner', 'id', 'created_at', 'updated_at', 'status', 'min_disk', 'min_ram', 'is_public'] raw = dict.fromkeys(IMAGE_ATTRIBUTES) raw.update(metadata) self.__dict__['raw'] = raw def __getattr__(self, key): try: return self.__dict__['raw'][key] except KeyError: raise AttributeError(key) def __setattr__(self, key, value): try: self.__dict__['raw'][key] = value except KeyError: raise AttributeError(key) metadata = { 'id': 1, 'created_at': NOW_DATETIME, 'updated_at': NOW_DATETIME, } image = MyFakeGlanceImage(metadata) observed = glance._extract_attributes( image, include_locations=include_locations) expected = { 'id': 1, 'name': None, 'is_public': None, 'size': 0, 'min_disk': None, 'min_ram': None, 'disk_format': None, 'container_format': None, 'checksum': None, 'created_at': NOW_DATETIME, 'updated_at': NOW_DATETIME, 'deleted_at': None, 'deleted': None, 'status': None, 'properties': {}, 'owner': None } if include_locations: expected['locations'] = None expected['direct_url'] = None self.assertEqual(expected, observed) def test_extracting_missing_attributes_include_locations(self): self._test_extracting_missing_attributes(include_locations=True) def test_extracting_missing_attributes_exclude_locations(self): self._test_extracting_missing_attributes(include_locations=False) class TestExceptionTranslations(test.NoDBTestCase): def test_client_forbidden_to_imagenotauthed(self): in_exc = glanceclient.exc.Forbidden('123') out_exc = glance._translate_image_exception('123', in_exc) self.assertIsInstance(out_exc, exception.ImageNotAuthorized) def test_client_httpforbidden_converts_to_imagenotauthed(self): in_exc = glanceclient.exc.HTTPForbidden('123') out_exc = glance._translate_image_exception('123', in_exc) self.assertIsInstance(out_exc, exception.ImageNotAuthorized) def test_client_notfound_converts_to_imagenotfound(self): in_exc = glanceclient.exc.NotFound('123') out_exc = glance._translate_image_exception('123', in_exc) self.assertIsInstance(out_exc, exception.ImageNotFound) def test_client_httpnotfound_converts_to_imagenotfound(self): in_exc = glanceclient.exc.HTTPNotFound('123') out_exc = glance._translate_image_exception('123', in_exc) self.assertIsInstance(out_exc, exception.ImageNotFound) def test_client_httpoverlimit_converts_to_imagequotaexceeded(self): in_exc = glanceclient.exc.HTTPOverLimit('123') out_exc = glance._translate_image_exception('123', in_exc) self.assertIsInstance(out_exc, exception.ImageQuotaExceeded) class TestGlanceSerializer(test.NoDBTestCase): def test_serialize(self): metadata = {'name': 'image1', 'is_public': True, 'foo': 'bar', 'properties': { 'prop1': 'propvalue1', 'mappings': [ {'virtual': 'aaa', 'device': 'bbb'}, {'virtual': 'xxx', 'device': 'yyy'}], 'block_device_mapping': [ {'virtual_device': 'fake', 'device_name': '/dev/fake'}, {'virtual_device': 'ephemeral0', 'device_name': '/dev/fake0'}]}} # NOTE(tdurakov): Assertion of serialized objects won't work # during using of random PYTHONHASHSEED. 
Assertion of # serialized/deserialized object and initial one is enough converted = glance._convert_to_string(metadata) self.assertEqual(glance._convert_from_string(converted), metadata) class TestGetImageService(test.NoDBTestCase): @mock.patch.object(glance.GlanceClientWrapper, '__init__', return_value=None) def test_get_remote_service_from_id(self, gcwi_mocked): id_or_uri = '123' _ignored, image_id = glance.get_remote_image_service( mock.sentinel.ctx, id_or_uri) self.assertEqual(id_or_uri, image_id) gcwi_mocked.assert_called_once_with() @mock.patch.object(glance.GlanceClientWrapper, '__init__', return_value=None) def test_get_remote_service_from_href(self, gcwi_mocked): id_or_uri = 'http://127.0.0.1/v1/images/123' _ignored, image_id = glance.get_remote_image_service( mock.sentinel.ctx, id_or_uri) self.assertEqual('123', image_id) gcwi_mocked.assert_called_once_with(context=mock.sentinel.ctx, endpoint='http://127.0.0.1') class TestCreateGlanceClient(test.NoDBTestCase): @mock.patch.object(service_auth, 'get_auth_plugin') @mock.patch.object(ks_loading, 'load_session_from_conf_options') @mock.patch('glanceclient.Client') def test_glanceclient_with_ks_session(self, mock_client, mock_load, mock_get_auth): session = "fake_session" mock_load.return_value = session auth = "fake_auth" mock_get_auth.return_value = auth ctx = context.RequestContext('fake', 'fake', global_request_id='reqid') endpoint = "fake_endpoint" mock_client.side_effect = ["a", "b"] # Reset the cache, so we know its empty before we start glance._SESSION = None result1 = glance._glanceclient_from_endpoint(ctx, endpoint, 2) result2 = glance._glanceclient_from_endpoint(ctx, endpoint, 2) # Ensure that session is only loaded once. mock_load.assert_called_once_with(glance.CONF, "glance") self.assertEqual(session, glance._SESSION) # Ensure new client created every time client_call = mock.call(2, auth="fake_auth", endpoint_override=endpoint, session=session, global_request_id='reqid') mock_client.assert_has_calls([client_call, client_call]) self.assertEqual("a", result1) self.assertEqual("b", result2) def test_generate_identity_headers(self): ctx = context.RequestContext('user', 'tenant', auth_token='token', roles=["a", "b"]) result = glance.generate_identity_headers(ctx, 'test') expected = { 'X-Auth-Token': 'token', 'X-User-Id': 'user', 'X-Tenant-Id': 'tenant', 'X-Roles': 'a,b', 'X-Identity-Status': 'test', } self.assertDictEqual(expected, result) class TestGlanceClientWrapperRetries(test.NoDBTestCase): def setUp(self): super(TestGlanceClientWrapperRetries, self).setUp() self.ctx = context.RequestContext('fake', 'fake') api_servers = [ 'http://host1:9292', 'https://host2:9293', 'http://host3:9294' ] self.flags(api_servers=api_servers, group='glance') def assert_retry_attempted(self, sleep_mock, client, expected_url): client.call(self.ctx, 1, 'get', args=('meow',)) sleep_mock.assert_called_once_with(1) self.assertEqual(str(client.api_server), expected_url) def assert_retry_not_attempted(self, sleep_mock, client): self.assertRaises(exception.GlanceConnectionFailed, client.call, self.ctx, 1, 'get', args=('meow',)) self.assertFalse(sleep_mock.called) @mock.patch('time.sleep') @mock.patch('nova.image.glance._glanceclient_from_endpoint') def test_static_client_without_retries(self, create_client_mock, sleep_mock): side_effect = glanceclient.exc.ServiceUnavailable self._mock_client_images_response(create_client_mock, side_effect) self.flags(num_retries=0, group='glance') client = self._get_static_client(create_client_mock) 
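# With num_retries=0 the wrapper should surface glanceclient's
# ServiceUnavailable error as GlanceConnectionFailed straight away;
# assert_retry_not_attempted() also checks that time.sleep() was never
# called, i.e. no retry/backoff was attempted.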
self.assert_retry_not_attempted(sleep_mock, client) @mock.patch('time.sleep') @mock.patch('nova.image.glance._glanceclient_from_endpoint') def test_static_client_with_retries(self, create_client_mock, sleep_mock): side_effect = [ glanceclient.exc.ServiceUnavailable, None ] self._mock_client_images_response(create_client_mock, side_effect) self.flags(num_retries=1, group='glance') client = self._get_static_client(create_client_mock) self.assert_retry_attempted(sleep_mock, client, 'http://host4:9295') @mock.patch('random.shuffle') @mock.patch('time.sleep') @mock.patch('nova.image.glance._glanceclient_from_endpoint') def test_default_client_with_retries(self, create_client_mock, sleep_mock, shuffle_mock): side_effect = [ glanceclient.exc.ServiceUnavailable, None ] self._mock_client_images_response(create_client_mock, side_effect) self.flags(num_retries=1, group='glance') client = glance.GlanceClientWrapper() self.assert_retry_attempted(sleep_mock, client, 'https://host2:9293') @mock.patch('random.shuffle') @mock.patch('time.sleep') @mock.patch('nova.image.glance._glanceclient_from_endpoint') def test_retry_works_with_generators(self, create_client_mock, sleep_mock, shuffle_mock): def some_generator(exception): if exception: raise glanceclient.exc.ServiceUnavailable('Boom!') yield 'something' side_effect = [ some_generator(exception=True), some_generator(exception=False), ] self._mock_client_images_response(create_client_mock, side_effect) self.flags(num_retries=1, group='glance') client = glance.GlanceClientWrapper() self.assert_retry_attempted(sleep_mock, client, 'https://host2:9293') @mock.patch('random.shuffle') @mock.patch('time.sleep') @mock.patch('nova.image.glance._glanceclient_from_endpoint') def test_default_client_without_retries(self, create_client_mock, sleep_mock, shuffle_mock): side_effect = glanceclient.exc.ServiceUnavailable self._mock_client_images_response(create_client_mock, side_effect) self.flags(num_retries=0, group='glance') client = glance.GlanceClientWrapper() # Here we are testing the behaviour that calling client.call() twice # when there are no retries will cycle through the api_servers and not # sleep (which would be an indication of a retry) self.assertRaises(exception.GlanceConnectionFailed, client.call, self.ctx, 1, 'get', args=('meow',)) self.assertEqual(str(client.api_server), 'http://host1:9292') self.assertFalse(sleep_mock.called) self.assertRaises(exception.GlanceConnectionFailed, client.call, self.ctx, 1, 'get', args=('meow',)) self.assertEqual(str(client.api_server), 'https://host2:9293') self.assertFalse(sleep_mock.called) def _get_static_client(self, create_client_mock): version = 2 url = 'http://host4:9295' client = glance.GlanceClientWrapper(context=self.ctx, endpoint=url) create_client_mock.assert_called_once_with(self.ctx, mock.ANY, version) return client def _mock_client_images_response(self, create_client_mock, side_effect): client_mock = mock.MagicMock(spec=glanceclient.Client) images_mock = mock.MagicMock(spec=images.ImageManager) images_mock.get.side_effect = side_effect type(client_mock).images = mock.PropertyMock(return_value=images_mock) create_client_mock.return_value = client_mock class TestCommonPropertyNameConflicts(test.NoDBTestCase): """Tests that images that have common property names like "version" don't cause an exception to be raised from the wacky GlanceClientWrapper magic call() method. 
:see https://bugs.launchpad.net/nova/+bug/1717547 """ @mock.patch('nova.image.glance.GlanceClientWrapper._create_onetime_client') def test_version_property_conflicts(self, mock_glance_client): client = mock.MagicMock() mock_glance_client.return_value = client ctx = mock.sentinel.ctx service = glance.GlanceImageServiceV2() # Simulate the process of snapshotting a server that was launched with # an image with the properties collection containing a (very # commonly-named) "version" property. image_meta = { 'id': 1, 'version': 'blows up', } # This call would blow up before the fix for 1717547 service.create(ctx, image_meta) class TestDownloadNoDirectUri(test.NoDBTestCase): """Tests the download method of the GlanceImageServiceV2 when the default of not allowing direct URI transfers is set. """ @mock.patch.object(six.moves.builtins, 'open') @mock.patch('nova.image.glance.GlanceImageServiceV2.show') def test_download_no_data_no_dest_path_v2(self, show_mock, open_mock): client = mock.MagicMock() client.call.return_value = fake_glance_response( mock.sentinel.image_chunks) ctx = mock.sentinel.ctx service = glance.GlanceImageServiceV2(client) res = service.download(ctx, mock.sentinel.image_id) self.assertFalse(show_mock.called) self.assertFalse(open_mock.called) client.call.assert_called_once_with( ctx, 2, 'data', args=(mock.sentinel.image_id,)) self.assertEqual(mock.sentinel.image_chunks, res) @mock.patch.object(six.moves.builtins, 'open') @mock.patch('nova.image.glance.GlanceImageServiceV2.show') def test_download_data_no_dest_path_v2(self, show_mock, open_mock): client = mock.MagicMock() client.call.return_value = fake_glance_response([1, 2, 3]) ctx = mock.sentinel.ctx data = mock.MagicMock() service = glance.GlanceImageServiceV2(client) res = service.download(ctx, mock.sentinel.image_id, data=data) self.assertFalse(show_mock.called) self.assertFalse(open_mock.called) client.call.assert_called_once_with( ctx, 2, 'data', args=(mock.sentinel.image_id,)) self.assertIsNone(res) data.write.assert_has_calls( [ mock.call(1), mock.call(2), mock.call(3) ] ) self.assertFalse(data.close.called) @mock.patch.object(six.moves.builtins, 'open') @mock.patch('nova.image.glance.GlanceImageServiceV2.show') @mock.patch('nova.image.glance.GlanceImageServiceV2._safe_fsync') def test_download_no_data_dest_path_v2(self, fsync_mock, show_mock, open_mock): client = mock.MagicMock() client.call.return_value = fake_glance_response([1, 2, 3]) ctx = mock.sentinel.ctx writer = mock.MagicMock() open_mock.return_value = writer service = glance.GlanceImageServiceV2(client) res = service.download(ctx, mock.sentinel.image_id, dst_path=mock.sentinel.dst_path) self.assertFalse(show_mock.called) client.call.assert_called_once_with( ctx, 2, 'data', args=(mock.sentinel.image_id,)) open_mock.assert_called_once_with(mock.sentinel.dst_path, 'wb') fsync_mock.assert_called_once_with(writer) self.assertIsNone(res) writer.write.assert_has_calls( [ mock.call(1), mock.call(2), mock.call(3) ] ) writer.close.assert_called_once_with() @mock.patch.object(six.moves.builtins, 'open') @mock.patch('nova.image.glance.GlanceImageServiceV2.show') def test_download_data_dest_path_v2(self, show_mock, open_mock): # NOTE(jaypipes): This really shouldn't be allowed, but because of the # horrible design of the download() method in GlanceImageServiceV2, no # error is raised, and the dst_path is ignored... # #TODO(jaypipes): Fix the aforementioned horrible design of # the download() method. 
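# The assertions below pin that behaviour down: the supplied data file-like
# object receives every image chunk while open() is never called for
# dst_path, i.e. dst_path is silently ignored when data is also passed.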
client = mock.MagicMock() client.call.return_value = fake_glance_response([1, 2, 3]) ctx = mock.sentinel.ctx data = mock.MagicMock() service = glance.GlanceImageServiceV2(client) res = service.download(ctx, mock.sentinel.image_id, data=data) self.assertFalse(show_mock.called) self.assertFalse(open_mock.called) client.call.assert_called_once_with( ctx, 2, 'data', args=(mock.sentinel.image_id,)) self.assertIsNone(res) data.write.assert_has_calls( [ mock.call(1), mock.call(2), mock.call(3) ] ) self.assertFalse(data.close.called) @mock.patch.object(six.moves.builtins, 'open') @mock.patch('nova.image.glance.GlanceImageServiceV2.show') def test_download_data_dest_path_write_fails_v2( self, show_mock, open_mock): client = mock.MagicMock() client.call.return_value = fake_glance_response([1, 2, 3]) ctx = mock.sentinel.ctx service = glance.GlanceImageServiceV2(client) # NOTE(mikal): data is a file like object, which in our case always # raises an exception when we attempt to write to the file. class FakeDiskException(Exception): pass class Exceptionator(StringIO): def write(self, _): raise FakeDiskException('Disk full!') self.assertRaises(FakeDiskException, service.download, ctx, mock.sentinel.image_id, data=Exceptionator()) @mock.patch.object(six.moves.builtins, 'open') @mock.patch('nova.image.glance.GlanceImageServiceV2.show') def test_download_no_returned_image_data_v2( self, show_mock, open_mock): """Verify images with no data are handled correctly.""" client = mock.MagicMock() client.call.return_value = fake_glance_response(None) ctx = mock.sentinel.ctx service = glance.GlanceImageServiceV2(client) with testtools.ExpectedException(exception.ImageUnacceptable): service.download(ctx, mock.sentinel.image_id) @mock.patch('nova.image.glance.GlanceImageServiceV2._get_transfer_module') @mock.patch('nova.image.glance.GlanceImageServiceV2.show') def test_download_direct_file_uri_v2(self, show_mock, get_tran_mock): self.flags(allowed_direct_url_schemes=['file'], group='glance') show_mock.return_value = { 'locations': [ { 'url': 'file:///files/image', 'metadata': mock.sentinel.loc_meta } ] } tran_mod = mock.MagicMock() get_tran_mock.return_value = tran_mod client = mock.MagicMock() ctx = mock.sentinel.ctx service = glance.GlanceImageServiceV2(client) res = service.download(ctx, mock.sentinel.image_id, dst_path=mock.sentinel.dst_path) self.assertIsNone(res) self.assertFalse(client.call.called) show_mock.assert_called_once_with(ctx, mock.sentinel.image_id, include_locations=True) get_tran_mock.assert_called_once_with('file') tran_mod.download.assert_called_once_with(ctx, mock.ANY, mock.sentinel.dst_path, mock.sentinel.loc_meta) @mock.patch('nova.image.glance.GlanceImageServiceV2._get_transfer_module') @mock.patch('nova.image.glance.GlanceImageServiceV2.show') @mock.patch('nova.image.glance.GlanceImageServiceV2._safe_fsync') def test_download_direct_exception_fallback_v2( self, fsync_mock, show_mock, get_tran_mock): # Test that we fall back to downloading to the dst_path # if the download method of the transfer module raised # an exception. 
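# Expected sequence, asserted below: the 'file' transfer module's download()
# raises, so the service falls back to streaming the image through
# client.call(..., 'data', ...), writing the chunks to dst_path and fsyncing
# the open file descriptor.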
self.flags(allowed_direct_url_schemes=['file'], group='glance') show_mock.return_value = { 'locations': [ { 'url': 'file:///files/image', 'metadata': mock.sentinel.loc_meta } ] } tran_mod = mock.MagicMock() tran_mod.download.side_effect = Exception get_tran_mock.return_value = tran_mod client = mock.MagicMock() client.call.return_value = fake_glance_response([1, 2, 3]) ctx = mock.sentinel.ctx writer = mock.MagicMock() with mock.patch.object(six.moves.builtins, 'open') as open_mock: open_mock.return_value = writer service = glance.GlanceImageServiceV2(client) res = service.download(ctx, mock.sentinel.image_id, dst_path=mock.sentinel.dst_path) self.assertIsNone(res) show_mock.assert_called_once_with(ctx, mock.sentinel.image_id, include_locations=True) get_tran_mock.assert_called_once_with('file') tran_mod.download.assert_called_once_with(ctx, mock.ANY, mock.sentinel.dst_path, mock.sentinel.loc_meta) client.call.assert_called_once_with( ctx, 2, 'data', args=(mock.sentinel.image_id,)) fsync_mock.assert_called_once_with(writer) # NOTE(jaypipes): log messages call open() in part of the # download path, so here, we just check that the last open() # call was done for the dst_path file descriptor. open_mock.assert_called_with(mock.sentinel.dst_path, 'wb') self.assertIsNone(res) writer.write.assert_has_calls( [ mock.call(1), mock.call(2), mock.call(3) ] ) @mock.patch('nova.image.glance.GlanceImageServiceV2._get_transfer_module') @mock.patch('nova.image.glance.GlanceImageServiceV2.show') @mock.patch('nova.image.glance.GlanceImageServiceV2._safe_fsync') def test_download_direct_no_mod_fallback( self, fsync_mock, show_mock, get_tran_mock): # Test that we fall back to downloading to the dst_path # if no appropriate transfer module is found... # an exception. self.flags(allowed_direct_url_schemes=['funky'], group='glance') show_mock.return_value = { 'locations': [ { 'url': 'file:///files/image', 'metadata': mock.sentinel.loc_meta } ] } get_tran_mock.return_value = None client = mock.MagicMock() client.call.return_value = fake_glance_response([1, 2, 3]) ctx = mock.sentinel.ctx writer = mock.MagicMock() with mock.patch.object(six.moves.builtins, 'open') as open_mock: open_mock.return_value = writer service = glance.GlanceImageServiceV2(client) res = service.download(ctx, mock.sentinel.image_id, dst_path=mock.sentinel.dst_path) self.assertIsNone(res) show_mock.assert_called_once_with(ctx, mock.sentinel.image_id, include_locations=True) get_tran_mock.assert_called_once_with('file') client.call.assert_called_once_with( ctx, 2, 'data', args=(mock.sentinel.image_id,)) fsync_mock.assert_called_once_with(writer) # NOTE(jaypipes): log messages call open() in part of the # download path, so here, we just check that the last open() # call was done for the dst_path file descriptor. open_mock.assert_called_with(mock.sentinel.dst_path, 'wb') self.assertIsNone(res) writer.write.assert_has_calls( [ mock.call(1), mock.call(2), mock.call(3) ] ) writer.close.assert_called_once_with() class TestDownloadSignatureVerification(test.NoDBTestCase): class MockVerifier(object): def update(self, data): return def verify(self): return True class BadVerifier(object): def update(self, data): return def verify(self): raise cryptography.exceptions.InvalidSignature( 'Invalid signature.' 
) def setUp(self): super(TestDownloadSignatureVerification, self).setUp() self.flags(verify_glance_signatures=True, group='glance') self.fake_img_props = { 'properties': { 'img_signature': 'signature', 'img_signature_hash_method': 'SHA-224', 'img_signature_certificate_uuid': uuids.img_sig_cert_uuid, 'img_signature_key_type': 'RSA-PSS', } } self.fake_img_data = ['A' * 256, 'B' * 256] self.client = mock.MagicMock() self.client.call.return_value = fake_glance_response( self.fake_img_data) @mock.patch('nova.image.glance.LOG') @mock.patch('nova.image.glance.GlanceImageServiceV2.show') @mock.patch('cursive.signature_utils.get_verifier') def test_download_with_signature_verification_v2(self, mock_get_verifier, mock_show, mock_log): service = glance.GlanceImageServiceV2(self.client) mock_get_verifier.return_value = self.MockVerifier() mock_show.return_value = self.fake_img_props image_id = None res = service.download(context=None, image_id=image_id, data=None, dst_path=None) self.assertEqual(self.fake_img_data, res) mock_get_verifier.assert_called_once_with( context=None, img_signature_certificate_uuid=uuids.img_sig_cert_uuid, img_signature_hash_method='SHA-224', img_signature='signature', img_signature_key_type='RSA-PSS' ) # trusted_certs is None and enable_certificate_validation is # false, which causes the below debug message to occur msg = ('Certificate validation was not performed. A list of ' 'trusted image certificate IDs must be provided in ' 'order to validate an image certificate.') mock_log.debug.assert_called_once_with(msg) msg = ('Image signature verification succeeded for image: %s') mock_log.info.assert_called_once_with(msg, image_id) @mock.patch.object(six.moves.builtins, 'open') @mock.patch('nova.image.glance.LOG') @mock.patch('nova.image.glance.GlanceImageServiceV2.show') @mock.patch('cursive.signature_utils.get_verifier') @mock.patch('nova.image.glance.GlanceImageServiceV2._safe_fsync') def test_download_dst_path_signature_verification_v2(self, mock_fsync, mock_get_verifier, mock_show, mock_log, mock_open): service = glance.GlanceImageServiceV2(self.client) mock_get_verifier.return_value = self.MockVerifier() mock_show.return_value = self.fake_img_props mock_dest = mock.MagicMock() fake_path = 'FAKE_PATH' mock_open.return_value = mock_dest service.download(context=None, image_id=None, data=None, dst_path=fake_path) mock_get_verifier.assert_called_once_with( context=None, img_signature_certificate_uuid=uuids.img_sig_cert_uuid, img_signature_hash_method='SHA-224', img_signature='signature', img_signature_key_type='RSA-PSS' ) msg = ('Certificate validation was not performed. A list of ' 'trusted image certificate IDs must be provided in ' 'order to validate an image certificate.') mock_log.debug.assert_called_once_with(msg) msg = ('Image signature verification succeeded for image %s') mock_log.info.assert_called_once_with(msg, None) self.assertEqual(len(self.fake_img_data), mock_dest.write.call_count) self.assertTrue(mock_dest.close.called) mock_fsync.assert_called_once_with(mock_dest) @mock.patch('nova.image.glance.LOG') @mock.patch('nova.image.glance.GlanceImageServiceV2.show') @mock.patch('cursive.signature_utils.get_verifier') def test_download_with_get_verifier_failure_v2(self, mock_get, mock_show, mock_log): service = glance.GlanceImageServiceV2(self.client) mock_get.side_effect = cursive_exception.SignatureVerificationError( reason='Signature verification failed.' 
) mock_show.return_value = self.fake_img_props self.assertRaises(cursive_exception.SignatureVerificationError, service.download, context=None, image_id=None, data=None, dst_path=None) mock_log.error.assert_called_once_with(mock.ANY, mock.ANY) @mock.patch('nova.image.glance.LOG') @mock.patch('nova.image.glance.GlanceImageServiceV2.show') @mock.patch('cursive.signature_utils.get_verifier') def test_download_with_invalid_signature_v2(self, mock_get_verifier, mock_show, mock_log): service = glance.GlanceImageServiceV2(self.client) mock_get_verifier.return_value = self.BadVerifier() mock_show.return_value = self.fake_img_props self.assertRaises(cryptography.exceptions.InvalidSignature, service.download, context=None, image_id=None, data=None, dst_path=None) mock_log.error.assert_called_once_with(mock.ANY, mock.ANY) @mock.patch('nova.image.glance.LOG') @mock.patch('nova.image.glance.GlanceImageServiceV2.show') def test_download_missing_signature_metadata_v2(self, mock_show, mock_log): service = glance.GlanceImageServiceV2(self.client) mock_show.return_value = {'properties': {}} self.assertRaisesRegex(cursive_exception.SignatureVerificationError, 'Required image properties for signature ' 'verification do not exist. Cannot verify ' 'signature. Missing property: .*', service.download, context=None, image_id=None, data=None, dst_path=None) @mock.patch.object(six.moves.builtins, 'open') @mock.patch('cursive.signature_utils.get_verifier') @mock.patch('nova.image.glance.LOG') @mock.patch('nova.image.glance.GlanceImageServiceV2.show') @mock.patch('nova.image.glance.GlanceImageServiceV2._safe_fsync') def test_download_dst_path_signature_fail_v2(self, mock_fsync, mock_show, mock_log, mock_get_verifier, mock_open): service = glance.GlanceImageServiceV2(self.client) mock_get_verifier.return_value = self.BadVerifier() mock_dest = mock.MagicMock() fake_path = 'FAKE_PATH' mock_open.return_value = mock_dest mock_show.return_value = self.fake_img_props self.assertRaises(cryptography.exceptions.InvalidSignature, service.download, context=None, image_id=None, data=None, dst_path=fake_path) mock_log.error.assert_called_once_with(mock.ANY, mock.ANY) mock_open.assert_called_once_with(fake_path, 'wb') mock_fsync.assert_called_once_with(mock_dest) mock_dest.truncate.assert_called_once_with(0) self.assertTrue(mock_dest.close.called) class TestDownloadCertificateValidation(test.NoDBTestCase): """Tests the download method of the GlanceImageServiceV2 when certificate validation is enabled. 
""" def setUp(self): super(TestDownloadCertificateValidation, self).setUp() self.flags(enable_certificate_validation=True, group='glance') self.fake_img_props = { 'properties': { 'img_signature': 'signature', 'img_signature_hash_method': 'SHA-224', 'img_signature_certificate_uuid': uuids.img_sig_cert_uuid, 'img_signature_key_type': 'RSA-PSS', } } self.fake_img_data = ['A' * 256, 'B' * 256] self.client = mock.MagicMock() self.client.call.return_value = fake_glance_response( self.fake_img_data) @mock.patch('nova.image.glance.LOG') @mock.patch('nova.image.glance.GlanceImageServiceV2.show') @mock.patch('cursive.certificate_utils.verify_certificate') @mock.patch('cursive.signature_utils.get_verifier') def test_download_with_certificate_validation_v2(self, mock_get_verifier, mock_verify_certificate, mock_show, mock_log): service = glance.GlanceImageServiceV2(self.client) mock_show.return_value = self.fake_img_props fake_cert = uuids.img_sig_cert_uuid fake_trusted_certs = objects.TrustedCerts(ids=[fake_cert]) res = service.download(context=None, image_id=None, data=None, dst_path=None, trusted_certs=fake_trusted_certs) self.assertEqual(self.fake_img_data, res) mock_get_verifier.assert_called_once_with( context=None, img_signature_certificate_uuid=uuids.img_sig_cert_uuid, img_signature_hash_method='SHA-224', img_signature='signature', img_signature_key_type='RSA-PSS' ) mock_verify_certificate.assert_called_once_with( context=None, certificate_uuid=uuids.img_sig_cert_uuid, trusted_certificate_uuids=[fake_cert] ) msg = ('Image signature certificate validation succeeded ' 'for certificate: %s') mock_log.debug.assert_called_once_with(msg, uuids.img_sig_cert_uuid) @mock.patch('nova.image.glance.LOG') @mock.patch('nova.image.glance.GlanceImageServiceV2.show') @mock.patch('cursive.certificate_utils.verify_certificate') @mock.patch('cursive.signature_utils.get_verifier') def test_download_with_trusted_certs_and_disabled_cert_validation_v2( self, mock_get_verifier, mock_verify_certificate, mock_show, mock_log): self.flags(enable_certificate_validation=False, group='glance') service = glance.GlanceImageServiceV2(self.client) mock_show.return_value = self.fake_img_props fake_cert = uuids.img_sig_cert_uuid fake_trusted_certs = objects.TrustedCerts(ids=[fake_cert]) res = service.download(context=None, image_id=None, data=None, dst_path=None, trusted_certs=fake_trusted_certs) self.assertEqual(self.fake_img_data, res) mock_get_verifier.assert_called_once_with( context=None, img_signature_certificate_uuid=uuids.img_sig_cert_uuid, img_signature_hash_method='SHA-224', img_signature='signature', img_signature_key_type='RSA-PSS' ) mock_verify_certificate.assert_called_once_with( context=None, certificate_uuid=uuids.img_sig_cert_uuid, trusted_certificate_uuids=[fake_cert] ) msg = ('Image signature certificate validation succeeded ' 'for certificate: %s') mock_log.debug.assert_called_once_with(msg, uuids.img_sig_cert_uuid) @mock.patch('nova.image.glance.LOG') @mock.patch('nova.image.glance.GlanceImageServiceV2.show') @mock.patch('cursive.certificate_utils.verify_certificate') @mock.patch('cursive.signature_utils.get_verifier') def test_download_with_certificate_validation_failure_v2( self, mock_get_verifier, mock_verify_certificate, mock_show, mock_log): service = glance.GlanceImageServiceV2(self.client) mock_verify_certificate.side_effect = \ cursive_exception.SignatureVerificationError( reason='Invalid certificate.' 
) mock_show.return_value = self.fake_img_props bad_trusted_certs = objects.TrustedCerts(ids=['bad_cert_id', 'other_bad_cert_id']) self.assertRaises(exception.CertificateValidationFailed, service.download, context=None, image_id=None, data=None, dst_path=None, trusted_certs=bad_trusted_certs) msg = ('Image signature certificate validation failed for ' 'certificate: %s') mock_log.warning.assert_called_once_with(msg, uuids.img_sig_cert_uuid) @mock.patch('nova.image.glance.LOG') @mock.patch('nova.image.glance.GlanceImageServiceV2.show') @mock.patch('cursive.signature_utils.get_verifier') def test_download_without_trusted_certs_failure_v2(self, mock_get_verifier, mock_show, mock_log): # Signature verification needs to be enabled in order to reach the # checkpoint for trusted_certs. Otherwise, all image signature # validation will be skipped. self.flags(verify_glance_signatures=True, group='glance') service = glance.GlanceImageServiceV2(self.client) mock_show.return_value = self.fake_img_props self.assertRaises(exception.CertificateValidationFailed, service.download, context=None, image_id=None, data=None, dst_path=None) msg = ('Image signature certificate validation enabled, but no ' 'trusted certificate IDs were provided. Unable to ' 'validate the certificate used to verify the image ' 'signature.') mock_log.warning.assert_called_once_with(msg) @mock.patch('nova.image.glance.LOG') @mock.patch('nova.image.glance.GlanceImageServiceV2.show') @mock.patch('cursive.signature_utils.get_verifier') @mock.patch('cursive.certificate_utils.verify_certificate') def test_get_verifier_without_trusted_certs_use_default_certs( self, mock_verify_certificate, mock_get_verifier, mock_show, mock_log): """Tests the scenario that trusted_certs is not provided, but signature and cert verification are enabled, and there are default certs to use. """ self.flags(verify_glance_signatures=True, group='glance') self.flags(default_trusted_certificate_ids=[uuids.img_sig_cert_uuid], group='glance') service = glance.GlanceImageServiceV2(self.client) mock_show.return_value = self.fake_img_props service._get_verifier( mock.sentinel.context, mock.sentinel.image_id, trusted_certs=None) mock_verify_certificate.assert_called_once_with( context=mock.sentinel.context, certificate_uuid=uuids.img_sig_cert_uuid, trusted_certificate_uuids=[uuids.img_sig_cert_uuid] ) msg = ('Image signature certificate validation succeeded ' 'for certificate: %s') mock_log.debug.assert_called_once_with(msg, uuids.img_sig_cert_uuid) class TestIsImageAvailable(test.NoDBTestCase): """Tests the internal _is_image_available function.""" class ImageSpecV2(object): visibility = None properties = None def test_auth_token_override(self): ctx = mock.MagicMock(auth_token=True) img = mock.MagicMock() res = glance._is_image_available(ctx, img) self.assertTrue(res) self.assertFalse(img.called) def test_admin_override(self): ctx = mock.MagicMock(auth_token=False, is_admin=True) img = mock.MagicMock() res = glance._is_image_available(ctx, img) self.assertTrue(res) self.assertFalse(img.called) def test_v2_visibility(self): ctx = mock.MagicMock(auth_token=False, is_admin=False) # We emulate warlock validation that throws an AttributeError # if you try to call is_public on an image model returned by # a call to V2 image.get(). Here, the ImageSpecV2 does not have # an is_public attribute and MagicMock will throw an AttributeError. 
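# The uppercase 'PUBLIC' value below additionally exercises case handling of
# the v2 visibility field; the image must still be reported as available.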
img = mock.MagicMock(visibility='PUBLIC', spec=TestIsImageAvailable.ImageSpecV2) res = glance._is_image_available(ctx, img) self.assertTrue(res) def test_project_is_owner(self): ctx = mock.MagicMock(auth_token=False, is_admin=False, project_id='123') props = { 'owner_id': '123' } img = mock.MagicMock(visibility='private', properties=props, spec=TestIsImageAvailable.ImageSpecV2) res = glance._is_image_available(ctx, img) self.assertTrue(res) def test_project_context_matches_project_prop(self): ctx = mock.MagicMock(auth_token=False, is_admin=False, project_id='123') props = { 'project_id': '123' } img = mock.MagicMock(visibility='private', properties=props, spec=TestIsImageAvailable.ImageSpecV2) res = glance._is_image_available(ctx, img) self.assertTrue(res) def test_no_user_in_props(self): ctx = mock.MagicMock(auth_token=False, is_admin=False, project_id='123') props = { } img = mock.MagicMock(visibility='private', properties=props, spec=TestIsImageAvailable.ImageSpecV2) res = glance._is_image_available(ctx, img) self.assertFalse(res) def test_user_matches_context(self): ctx = mock.MagicMock(auth_token=False, is_admin=False, user_id='123') props = { 'user_id': '123' } img = mock.MagicMock(visibility='private', properties=props, spec=TestIsImageAvailable.ImageSpecV2) res = glance._is_image_available(ctx, img) self.assertTrue(res) class TestShow(test.NoDBTestCase): """Tests the show method of the GlanceImageServiceV2.""" @mock.patch('nova.image.glance._translate_from_glance') @mock.patch('nova.image.glance._is_image_available') def test_show_success_v2(self, is_avail_mock, trans_from_mock): is_avail_mock.return_value = True trans_from_mock.return_value = {'mock': mock.sentinel.trans_from} client = mock.MagicMock() client.call.return_value = {} ctx = mock.sentinel.ctx service = glance.GlanceImageServiceV2(client) info = service.show(ctx, mock.sentinel.image_id) client.call.assert_called_once_with( ctx, 2, 'get', args=(mock.sentinel.image_id,)) is_avail_mock.assert_called_once_with(ctx, {}) trans_from_mock.assert_called_once_with({}, include_locations=False) self.assertIn('mock', info) self.assertEqual(mock.sentinel.trans_from, info['mock']) @mock.patch('nova.image.glance._translate_from_glance') @mock.patch('nova.image.glance._is_image_available') def test_show_not_available_v2(self, is_avail_mock, trans_from_mock): is_avail_mock.return_value = False client = mock.MagicMock() client.call.return_value = mock.sentinel.images_0 ctx = mock.sentinel.ctx service = glance.GlanceImageServiceV2(client) with testtools.ExpectedException(exception.ImageNotFound): service.show(ctx, mock.sentinel.image_id) client.call.assert_called_once_with( ctx, 2, 'get', args=(mock.sentinel.image_id,)) is_avail_mock.assert_called_once_with(ctx, mock.sentinel.images_0) self.assertFalse(trans_from_mock.called) @mock.patch('nova.image.glance._reraise_translated_image_exception') @mock.patch('nova.image.glance._translate_from_glance') @mock.patch('nova.image.glance._is_image_available') def test_show_client_failure_v2(self, is_avail_mock, trans_from_mock, reraise_mock): raised = exception.ImageNotAuthorized(image_id=123) client = mock.MagicMock() client.call.side_effect = glanceclient.exc.Forbidden ctx = mock.sentinel.ctx reraise_mock.side_effect = raised service = glance.GlanceImageServiceV2(client) with testtools.ExpectedException(exception.ImageNotAuthorized): service.show(ctx, mock.sentinel.image_id) client.call.assert_called_once_with( ctx, 2, 'get', args=(mock.sentinel.image_id,)) self.assertFalse(is_avail_mock.called) 
self.assertFalse(trans_from_mock.called) reraise_mock.assert_called_once_with(mock.sentinel.image_id) @mock.patch.object(schemas, 'Schema', side_effect=FakeSchema) @mock.patch('nova.image.glance._is_image_available') def test_show_queued_image_without_some_attrs_v2(self, is_avail_mock, mocked_schema): is_avail_mock.return_value = True client = mock.MagicMock() # fake image cls without disk_format, container_format, name attributes class fake_image_cls(dict): pass glance_image = fake_image_cls( id = 'b31aa5dd-f07a-4748-8f15-398346887584', deleted = False, protected = False, min_disk = 0, created_at = '2014-05-20T08:16:48', size = 0, status = 'queued', visibility = 'private', min_ram = 0, owner = '980ec4870033453ead65c0470a78b8a8', updated_at = '2014-05-20T08:16:48', schema = '') glance_image.id = glance_image['id'] glance_image.schema = '' client.call.return_value = glance_image ctx = mock.sentinel.ctx service = glance.GlanceImageServiceV2(client) image_info = service.show(ctx, glance_image.id) client.call.assert_called_once_with( ctx, 2, 'get', args=(glance_image.id,)) NOVA_IMAGE_ATTRIBUTES = set(['size', 'disk_format', 'owner', 'container_format', 'status', 'id', 'name', 'created_at', 'updated_at', 'deleted', 'deleted_at', 'checksum', 'min_disk', 'min_ram', 'is_public', 'properties']) self.assertEqual(NOVA_IMAGE_ATTRIBUTES, set(image_info.keys())) @mock.patch('nova.image.glance._translate_from_glance') @mock.patch('nova.image.glance._is_image_available') def test_include_locations_success_v2(self, avail_mock, trans_from_mock): locations = [mock.sentinel.loc1] avail_mock.return_value = True trans_from_mock.return_value = {'locations': locations} client = mock.Mock() client.call.return_value = mock.sentinel.image service = glance.GlanceImageServiceV2(client) ctx = mock.sentinel.ctx image_id = mock.sentinel.image_id info = service.show(ctx, image_id, include_locations=True) client.call.assert_called_once_with( ctx, 2, 'get', args=(image_id,)) avail_mock.assert_called_once_with(ctx, mock.sentinel.image) trans_from_mock.assert_called_once_with(mock.sentinel.image, include_locations=True) self.assertIn('locations', info) self.assertEqual(locations, info['locations']) @mock.patch('nova.image.glance._translate_from_glance') @mock.patch('nova.image.glance._is_image_available') def test_include_direct_uri_success_v2(self, avail_mock, trans_from_mock): locations = [mock.sentinel.loc1] avail_mock.return_value = True trans_from_mock.return_value = {'locations': locations, 'direct_uri': mock.sentinel.duri} client = mock.Mock() client.call.return_value = mock.sentinel.image service = glance.GlanceImageServiceV2(client) ctx = mock.sentinel.ctx image_id = mock.sentinel.image_id info = service.show(ctx, image_id, include_locations=True) client.call.assert_called_once_with( ctx, 2, 'get', args=(image_id,)) expected = locations expected.append({'url': mock.sentinel.duri, 'metadata': {}}) self.assertIn('locations', info) self.assertEqual(expected, info['locations']) @mock.patch('nova.image.glance._translate_from_glance') @mock.patch('nova.image.glance._is_image_available') def test_do_not_show_deleted_images_v2( self, is_avail_mock, trans_from_mock): class fake_image_cls(dict): id = 'b31aa5dd-f07a-4748-8f15-398346887584' deleted = True glance_image = fake_image_cls() client = mock.MagicMock() client.call.return_value = glance_image ctx = mock.sentinel.ctx service = glance.GlanceImageServiceV2(client) with testtools.ExpectedException(exception.ImageNotFound): service.show(ctx, glance_image.id, 
show_deleted=False) client.call.assert_called_once_with( ctx, 2, 'get', args=(glance_image.id,)) self.assertFalse(is_avail_mock.called) self.assertFalse(trans_from_mock.called) class TestDetail(test.NoDBTestCase): """Tests the detail method of the GlanceImageServiceV2.""" @mock.patch('nova.image.glance._extract_query_params_v2') @mock.patch('nova.image.glance._translate_from_glance') @mock.patch('nova.image.glance._is_image_available') def test_detail_success_available_v2(self, is_avail_mock, trans_from_mock, ext_query_mock): params = {} is_avail_mock.return_value = True ext_query_mock.return_value = params trans_from_mock.return_value = mock.sentinel.trans_from client = mock.MagicMock() client.call.return_value = [mock.sentinel.images_0] ctx = mock.sentinel.ctx service = glance.GlanceImageServiceV2(client) images = service.detail(ctx, **params) client.call.assert_called_once_with(ctx, 2, 'list', kwargs={}) is_avail_mock.assert_called_once_with(ctx, mock.sentinel.images_0) trans_from_mock.assert_called_once_with(mock.sentinel.images_0) self.assertEqual([mock.sentinel.trans_from], images) @mock.patch('nova.image.glance._extract_query_params_v2') @mock.patch('nova.image.glance._translate_from_glance') @mock.patch('nova.image.glance._is_image_available') def test_detail_success_unavailable_v2( self, is_avail_mock, trans_from_mock, ext_query_mock): params = {} is_avail_mock.return_value = False ext_query_mock.return_value = params trans_from_mock.return_value = mock.sentinel.trans_from client = mock.MagicMock() client.call.return_value = [mock.sentinel.images_0] ctx = mock.sentinel.ctx service = glance.GlanceImageServiceV2(client) images = service.detail(ctx, **params) client.call.assert_called_once_with(ctx, 2, 'list', kwargs={}) is_avail_mock.assert_called_once_with(ctx, mock.sentinel.images_0) self.assertFalse(trans_from_mock.called) self.assertEqual([], images) @mock.patch('nova.image.glance._translate_from_glance') @mock.patch('nova.image.glance._is_image_available') def test_detail_params_passed_v2(self, is_avail_mock, _trans_from_mock): client = mock.MagicMock() client.call.return_value = [mock.sentinel.images_0] ctx = mock.sentinel.ctx service = glance.GlanceImageServiceV2(client) service.detail(ctx, page_size=5, limit=10) client.call.assert_called_once_with( ctx, 2, 'list', kwargs=dict(filters={}, page_size=5, limit=10)) @mock.patch('nova.image.glance._reraise_translated_exception') @mock.patch('nova.image.glance._extract_query_params_v2') @mock.patch('nova.image.glance._translate_from_glance') @mock.patch('nova.image.glance._is_image_available') def test_detail_client_failure_v2(self, is_avail_mock, trans_from_mock, ext_query_mock, reraise_mock): params = {} ext_query_mock.return_value = params raised = exception.Forbidden() client = mock.MagicMock() client.call.side_effect = glanceclient.exc.Forbidden ctx = mock.sentinel.ctx reraise_mock.side_effect = raised service = glance.GlanceImageServiceV2(client) with testtools.ExpectedException(exception.Forbidden): service.detail(ctx, **params) client.call.assert_called_once_with(ctx, 2, 'list', kwargs={}) self.assertFalse(is_avail_mock.called) self.assertFalse(trans_from_mock.called) reraise_mock.assert_called_once_with() class TestCreate(test.NoDBTestCase): """Tests the create method of the GlanceImageServiceV2.""" @mock.patch('nova.image.glance._translate_from_glance') @mock.patch('nova.image.glance._translate_to_glance') def test_create_success_v2( self, trans_to_mock, trans_from_mock): translated = { 'name': mock.sentinel.name, } 
trans_to_mock.return_value = translated trans_from_mock.return_value = mock.sentinel.trans_from image_mock = {} client = mock.MagicMock() client.call.return_value = {'id': '123'} ctx = mock.sentinel.ctx service = glance.GlanceImageServiceV2(client) image_meta = service.create(ctx, image_mock) trans_to_mock.assert_called_once_with(image_mock) # Verify that the 'id' element has been removed as a kwarg to # the call to glanceclient's update (since the image ID is # supplied as a positional arg), and that the # purge_props default is True. client.call.assert_called_once_with( ctx, 2, 'create', kwargs=dict(name=mock.sentinel.name)) trans_from_mock.assert_called_once_with({'id': '123'}) self.assertEqual(mock.sentinel.trans_from, image_meta) # Now verify that if we supply image data to the call, # that the client is also called with the data kwarg client.reset_mock() client.call.return_value = {'id': mock.sentinel.image_id} service.create(ctx, {}, data=mock.sentinel.data) self.assertEqual(3, client.call.call_count) @mock.patch('nova.image.glance._translate_from_glance') @mock.patch('nova.image.glance._translate_to_glance') def test_create_success_v2_force_activate( self, trans_to_mock, trans_from_mock): """Tests that creating an image with the v2 API with a size of 0 will trigger a call to set the disk and container formats. """ translated = { 'name': mock.sentinel.name, } trans_to_mock.return_value = translated trans_from_mock.return_value = mock.sentinel.trans_from # size=0 will trigger force_activate=True image_mock = {'size': 0} client = mock.MagicMock() client.call.return_value = {'id': '123'} ctx = mock.sentinel.ctx service = glance.GlanceImageServiceV2(client) with mock.patch.object(service, '_get_image_create_disk_format_default', return_value='vdi'): image_meta = service.create(ctx, image_mock) trans_to_mock.assert_called_once_with(image_mock) # Verify that the disk_format and container_format kwargs are passed. create_call_kwargs = client.call.call_args_list[0][1]['kwargs'] self.assertEqual('vdi', create_call_kwargs['disk_format']) self.assertEqual('bare', create_call_kwargs['container_format']) trans_from_mock.assert_called_once_with({'id': '123'}) self.assertEqual(mock.sentinel.trans_from, image_meta) @mock.patch('nova.image.glance._translate_from_glance') @mock.patch('nova.image.glance._translate_to_glance') def test_create_success_v2_with_location( self, trans_to_mock, trans_from_mock): translated = { 'id': mock.sentinel.id, 'name': mock.sentinel.name, 'location': mock.sentinel.location } trans_to_mock.return_value = translated trans_from_mock.return_value = mock.sentinel.trans_from image_mock = {} client = mock.MagicMock() client.call.return_value = translated ctx = mock.sentinel.ctx service = glance.GlanceImageServiceV2(client) image_meta = service.create(ctx, image_mock) trans_to_mock.assert_called_once_with(image_mock) self.assertEqual(2, client.call.call_count) trans_from_mock.assert_called_once_with(translated) self.assertEqual(mock.sentinel.trans_from, image_meta) @mock.patch('nova.image.glance._translate_from_glance') @mock.patch('nova.image.glance._translate_to_glance') def test_create_success_v2_with_sharing( self, trans_to_mock, trans_from_mock): """Tests creating a snapshot image by one tenant that is shared with the owner of the instance. 
""" translated = { 'name': mock.sentinel.name, 'visibility': 'shared' } trans_to_mock.return_value = translated trans_from_mock.return_value = mock.sentinel.trans_from image_meta = { 'name': mock.sentinel.name, 'visibility': 'shared', 'properties': { # This triggers the image_members.create call to glance. 'instance_owner': uuids.instance_uuid } } client = mock.MagicMock() def fake_call(_ctxt, _version, method, controller=None, args=None, kwargs=None): if method == 'create': if controller is None: # Call to create the image. translated['id'] = uuids.image_id return translated if controller == 'image_members': self.assertIsNotNone(args) self.assertEqual( (uuids.image_id, uuids.instance_uuid), args) # Call to share the image with the instance owner. return mock.sentinel.member self.fail('Unexpected glanceclient call %s.%s' % (controller or 'images', method)) client.call.side_effect = fake_call ctx = mock.sentinel.ctx service = glance.GlanceImageServiceV2(client) ret_image = service.create(ctx, image_meta) translated_image_meta = copy.copy(image_meta) # The instance_owner property should have been popped off and not sent # to glance during the create() call. translated_image_meta['properties'].pop('instance_owner', None) trans_to_mock.assert_called_once_with(translated_image_meta) # glanceclient should be called twice: # - once for the image create # - once for sharing the image with the instance owner self.assertEqual(2, client.call.call_count) trans_from_mock.assert_called_once_with(translated) self.assertEqual(mock.sentinel.trans_from, ret_image) @mock.patch('nova.image.glance._reraise_translated_exception') @mock.patch('nova.image.glance._translate_from_glance') @mock.patch('nova.image.glance._translate_to_glance') def test_create_client_failure_v2(self, trans_to_mock, trans_from_mock, reraise_mock): translated = {} trans_to_mock.return_value = translated image_mock = mock.MagicMock(spec=dict) raised = exception.Invalid() client = mock.MagicMock() client.call.side_effect = glanceclient.exc.BadRequest ctx = mock.sentinel.ctx reraise_mock.side_effect = raised service = glance.GlanceImageServiceV2(client) self.assertRaises(exception.Invalid, service.create, ctx, image_mock) trans_to_mock.assert_called_once_with(image_mock) self.assertFalse(trans_from_mock.called) def _test_get_image_create_disk_format_default(self, test_schema, expected_disk_format): mock_client = mock.MagicMock() mock_client.call.return_value = test_schema service = glance.GlanceImageServiceV2(mock_client) disk_format = service._get_image_create_disk_format_default( mock.sentinel.ctx) self.assertEqual(expected_disk_format, disk_format) mock_client.call.assert_called_once_with( mock.sentinel.ctx, 2, 'get', args=('image',), controller='schemas') def test_get_image_create_disk_format_default_no_schema(self): """Tests that if there is no disk_format schema we default to qcow2. """ test_schema = FakeSchema({'properties': {}}) self._test_get_image_create_disk_format_default(test_schema, 'qcow2') def test_get_image_create_disk_format_default_single_entry(self): """Tests that if there is only a single supported disk_format then we use that. """ test_schema = FakeSchema({ 'properties': { 'disk_format': { 'enum': ['iso'], } } }) self._test_get_image_create_disk_format_default(test_schema, 'iso') def test_get_image_create_disk_format_default_multiple_entries(self): """Tests that if there are multiple supported disk_formats we look for one in a preferred order. 
""" test_schema = FakeSchema({ 'properties': { 'disk_format': { # For this test we want to skip qcow2 since that's primary. 'enum': ['vhd', 'raw'], } } }) self._test_get_image_create_disk_format_default(test_schema, 'vhd') def test_get_image_create_disk_format_default_multiple_entries_no_match( self): """Tests that if we can't match a supported disk_format to what we prefer then we take the first supported disk_format in the list. """ test_schema = FakeSchema({ 'properties': { 'disk_format': { # For this test we want to skip qcow2 since that's primary. 'enum': ['aki', 'ari', 'ami'], } } }) self._test_get_image_create_disk_format_default(test_schema, 'aki') class TestUpdate(test.NoDBTestCase): """Tests the update method of the GlanceImageServiceV2.""" @mock.patch('nova.image.glance.GlanceImageServiceV2.show') @mock.patch('nova.image.glance._translate_from_glance') @mock.patch('nova.image.glance._translate_to_glance') def test_update_success_v2( self, trans_to_mock, trans_from_mock, show_mock): image = { 'id': mock.sentinel.image_id, 'name': mock.sentinel.name, 'properties': {'prop_to_keep': '4'} } translated = { 'id': mock.sentinel.image_id, 'name': mock.sentinel.name, 'prop_to_keep': '4' } trans_to_mock.return_value = translated trans_from_mock.return_value = mock.sentinel.trans_from client = mock.MagicMock() client.call.return_value = mock.sentinel.image_meta ctx = mock.sentinel.ctx show_mock.return_value = { 'image_id': mock.sentinel.image_id, 'properties': {'prop_to_remove': '1', 'prop_to_keep': '3'} } service = glance.GlanceImageServiceV2(client) image_meta = service.update( ctx, mock.sentinel.image_id, image, purge_props=True) show_mock.assert_called_once_with( mock.sentinel.ctx, mock.sentinel.image_id) trans_to_mock.assert_called_once_with(image) # Verify that the 'id' element has been removed as a kwarg to # the call to glanceclient's update (since the image ID is # supplied as a positional arg), and that the # purge_props default is True. 
client.call.assert_called_once_with( ctx, 2, 'update', kwargs=dict( image_id=mock.sentinel.image_id, name=mock.sentinel.name, prop_to_keep='4', remove_props=['prop_to_remove'], )) trans_from_mock.assert_called_once_with(mock.sentinel.image_meta) self.assertEqual(mock.sentinel.trans_from, image_meta) # Now verify that if we supply image data to the call, # that the client is also called with the data kwarg client.reset_mock() client.call.return_value = {'id': mock.sentinel.image_id} service.update(ctx, mock.sentinel.image_id, {}, data=mock.sentinel.data) self.assertEqual(3, client.call.call_count) @mock.patch('nova.image.glance.GlanceImageServiceV2.show') @mock.patch('nova.image.glance._translate_from_glance') @mock.patch('nova.image.glance._translate_to_glance') def test_update_success_v2_with_location( self, trans_to_mock, trans_from_mock, show_mock): translated = { 'id': mock.sentinel.id, 'name': mock.sentinel.name, 'location': mock.sentinel.location } show_mock.return_value = {'image_id': mock.sentinel.image_id} trans_to_mock.return_value = translated trans_from_mock.return_value = mock.sentinel.trans_from image_mock = mock.MagicMock(spec=dict) client = mock.MagicMock() client.call.return_value = translated ctx = mock.sentinel.ctx service = glance.GlanceImageServiceV2(client) image_meta = service.update(ctx, mock.sentinel.image_id, image_mock, purge_props=False) trans_to_mock.assert_called_once_with(image_mock) self.assertEqual(2, client.call.call_count) trans_from_mock.assert_called_once_with(translated) self.assertEqual(mock.sentinel.trans_from, image_meta) @mock.patch('nova.image.glance.GlanceImageServiceV2.show') @mock.patch('nova.image.glance._reraise_translated_image_exception') @mock.patch('nova.image.glance._translate_from_glance') @mock.patch('nova.image.glance._translate_to_glance') def test_update_client_failure_v2(self, trans_to_mock, trans_from_mock, reraise_mock, show_mock): image = { 'id': mock.sentinel.image_id, 'name': mock.sentinel.name, 'properties': {'prop_to_keep': '4'} } translated = { 'id': mock.sentinel.image_id, 'name': mock.sentinel.name, 'prop_to_keep': '4' } trans_to_mock.return_value = translated trans_from_mock.return_value = mock.sentinel.trans_from raised = exception.ImageNotAuthorized(image_id=123) client = mock.MagicMock() client.call.side_effect = glanceclient.exc.Forbidden ctx = mock.sentinel.ctx reraise_mock.side_effect = raised show_mock.return_value = { 'image_id': mock.sentinel.image_id, 'properties': {'prop_to_remove': '1', 'prop_to_keep': '3'} } service = glance.GlanceImageServiceV2(client) self.assertRaises(exception.ImageNotAuthorized, service.update, ctx, mock.sentinel.image_id, image) client.call.assert_called_once_with( ctx, 2, 'update', kwargs=dict( image_id=mock.sentinel.image_id, name=mock.sentinel.name, prop_to_keep='4', remove_props=['prop_to_remove'], )) reraise_mock.assert_called_once_with(mock.sentinel.image_id) class TestDelete(test.NoDBTestCase): """Tests the delete method of the GlanceImageServiceV2.""" def test_delete_success_v2(self): client = mock.MagicMock() client.call.return_value = True ctx = mock.sentinel.ctx service = glance.GlanceImageServiceV2(client) service.delete(ctx, mock.sentinel.image_id) client.call.assert_called_once_with( ctx, 2, 'delete', args=(mock.sentinel.image_id,)) def test_delete_client_failure_v2(self): client = mock.MagicMock() client.call.side_effect = glanceclient.exc.NotFound ctx = mock.sentinel.ctx service = glance.GlanceImageServiceV2(client) self.assertRaises(exception.ImageNotFound, 
service.delete, ctx, mock.sentinel.image_id) def test_delete_client_conflict_failure_v2(self): client = mock.MagicMock() fake_details = 'Image %s is in use' % mock.sentinel.image_id client.call.side_effect = glanceclient.exc.HTTPConflict( details=fake_details) ctx = mock.sentinel.ctx service = glance.GlanceImageServiceV2(client) self.assertRaises(exception.ImageDeleteConflict, service.delete, ctx, mock.sentinel.image_id) @ddt.ddt class TestGlanceApiServers(test.NoDBTestCase): def test_get_api_servers_multiple(self): """Test get_api_servers via `api_servers` conf option.""" glance_servers = ['http://10.0.1.1:9292', 'https://10.0.0.1:9293', 'http://10.0.2.2:9294'] expected_servers = set(glance_servers) self.flags(api_servers=glance_servers, group='glance') api_servers = glance.get_api_servers('context') # In len(expected_servers) cycles, we should get all the endpoints self.assertEqual(expected_servers, {next(api_servers) for _ in expected_servers}) @ddt.data(['http://158.69.92.100/image/v2/', 'http://158.69.92.100/image/'], ['http://158.69.92.100/image/v2', 'http://158.69.92.100/image/'], ['http://158.69.92.100/image/v2.0/', 'http://158.69.92.100/image/'], ['http://158.69.92.100/image/', 'http://158.69.92.100/image/'], ['http://158.69.92.100/image', 'http://158.69.92.100/image'], ['http://158.69.92.100/v2', 'http://158.69.92.100/'], ['http://thing.novav2.0oh.v2.foo/image/v2/', 'http://thing.novav2.0oh.v2.foo/image/']) @ddt.unpack def test_get_api_servers_get_ksa_adapter(self, catalog_url, stripped): """Test get_api_servers via nova.utils.get_ksa_adapter().""" self.flags(api_servers=None, group='glance') with mock.patch('keystoneauth1.adapter.Adapter.' 'get_endpoint_data') as mock_epd: mock_epd.return_value.catalog_url = catalog_url api_servers = glance.get_api_servers(mock.Mock()) self.assertEqual(stripped, next(api_servers)) # Still get itertools.cycle behavior self.assertEqual(stripped, next(api_servers)) mock_epd.assert_called_once_with() @mock.patch('keystoneauth1.adapter.Adapter.get_endpoint_data') def test_get_api_servers_get_ksa_adapter_endpoint_override(self, mock_epd): self.flags(endpoint_override='foo', group='glance') api_servers = glance.get_api_servers(mock.Mock()) self.assertEqual('foo', next(api_servers)) self.assertEqual('foo', next(api_servers)) mock_epd.assert_not_called() class TestUpdateGlanceImage(test.NoDBTestCase): @mock.patch('nova.image.glance.GlanceImageServiceV2') def test_start(self, mock_glance_image_service): consumer = glance.UpdateGlanceImage( 'context', 'id', 'metadata', 'stream') with mock.patch.object(glance, 'get_remote_image_service') as a_mock: a_mock.return_value = (mock_glance_image_service, 'image_id') consumer.start() mock_glance_image_service.update.assert_called_with( 'context', 'image_id', 'metadata', 'stream', purge_props=False) class TestExtractAttributes(test.NoDBTestCase): @mock.patch.object(schemas, 'Schema', side_effect=FakeSchema) def test_extract_image_attributes_active_images_with_locations( self, mocked_schema): image_v2 = ImageV2(image_fixtures['active_image_v2']) image_v2_meta = glance._translate_from_glance( image_v2, include_locations=True) self.assertIn('locations', image_v2_meta) self.assertIn('direct_url', image_v2_meta) image_v2_meta = glance._translate_from_glance( image_v2, include_locations=False) self.assertNotIn('locations', image_v2_meta) self.assertNotIn('direct_url', image_v2_meta) class TestExtractQueryParams(test.NoDBTestCase): @mock.patch('nova.image.glance._translate_from_glance') 
@mock.patch('nova.image.glance._is_image_available') def test_detail_extract_query_params_v2( self, is_avail_mock, _trans_from_mock): client = mock.MagicMock() client.call.return_value = [mock.sentinel.images_0] ctx = mock.sentinel.ctx service = glance.GlanceImageServiceV2(client) input_filters = { 'property-kernel-id': 'some-id', 'changes-since': 'some-date', 'is_public': 'true', 'name': 'some-name' } service.detail(ctx, filters=input_filters, page_size=5, limit=10) expected_filters_v1 = {'visibility': 'public', 'name': 'some-name', 'kernel-id': 'some-id', 'updated_at': 'gte:some-date'} client.call.assert_called_once_with( ctx, 2, 'list', kwargs=dict( filters=expected_filters_v1, page_size=5, limit=10, )) class TestTranslateToGlance(test.NoDBTestCase): """Test that image was translated correct to be accepted by Glance""" def setUp(self): self.fixture = { 'checksum': 'fb10c6486390bec8414be90a93dfff3b', 'container_format': 'bare', 'created_at': "", 'deleted': False, 'deleted_at': None, 'disk_format': 'raw', 'id': 'f8116538-309f-449c-8d49-df252a97a48d', 'is_public': True, 'min_disk': '0', 'min_ram': '0', 'name': 'tempest-image-1294122904', 'owner': 'd76b51cf8a44427ea404046f4c1d82ab', 'properties': {'os_distro': 'value2', 'os_version': 'value1', 'base_image_ref': 'ea36315c-e527-4643-a46a-9fd61d027cc1', 'image_type': 'test', 'instance_uuid': 'ec1ea9c7-8c5e-498d-a753-6ccc2464123c', 'kernel_id': 'None', 'ramdisk_id': ' ', 'user_id': 'ca2ff78fd33042ceb45fbbe19012ef3f', 'boolean_prop': True}, 'size': 1024, 'status': 'active', 'updated_at': ""} super(TestTranslateToGlance, self).setUp() def test_convert_to_v2(self): expected_v2_image = { 'base_image_ref': 'ea36315c-e527-4643-a46a-9fd61d027cc1', 'boolean_prop': 'True', 'checksum': 'fb10c6486390bec8414be90a93dfff3b', 'container_format': 'bare', 'disk_format': 'raw', 'id': 'f8116538-309f-449c-8d49-df252a97a48d', 'image_type': 'test', 'instance_uuid': 'ec1ea9c7-8c5e-498d-a753-6ccc2464123c', 'min_disk': 0, 'min_ram': 0, 'name': 'tempest-image-1294122904', 'os_distro': 'value2', 'os_version': 'value1', 'owner': 'd76b51cf8a44427ea404046f4c1d82ab', 'user_id': 'ca2ff78fd33042ceb45fbbe19012ef3f', 'visibility': 'public'} nova_image_dict = self.fixture image_v2_dict = glance._translate_to_glance(nova_image_dict) self.assertEqual(expected_v2_image, image_v2_dict) @mock.patch('stat.S_ISSOCK') @mock.patch('stat.S_ISFIFO') @mock.patch('os.fsync') @mock.patch('os.fstat') class TestSafeFSync(test.NoDBTestCase): """Validate _safe_fsync.""" @staticmethod def common(mock_isfifo, isfifo, mock_issock, issock, mock_fstat): """Execution & assertions common to all test cases.""" fh = mock.Mock() mock_isfifo.return_value = isfifo mock_issock.return_value = issock glance.GlanceImageServiceV2._safe_fsync(fh) fh.fileno.assert_called_once_with() mock_fstat.assert_called_once_with(fh.fileno.return_value) mock_isfifo.assert_called_once_with(mock_fstat.return_value.st_mode) # Condition short-circuits, so S_ISSOCK is only called if !S_ISFIFO if isfifo: mock_issock.assert_not_called() else: mock_issock.assert_called_once_with( mock_fstat.return_value.st_mode) return fh def test_fsync(self, mock_fstat, mock_fsync, mock_isfifo, mock_issock): """Validate path where fsync is called.""" fh = self.common(mock_isfifo, False, mock_issock, False, mock_fstat) mock_fsync.assert_called_once_with(fh.fileno.return_value) def test_fifo(self, mock_fstat, mock_fsync, mock_isfifo, mock_issock): """Validate fsync not called for pipe/fifo.""" self.common(mock_isfifo, True, mock_issock, False, 
mock_fstat) mock_fsync.assert_not_called() def test_sock(self, mock_fstat, mock_fsync, mock_isfifo, mock_issock): """Validate fsync not called for socket.""" self.common(mock_isfifo, False, mock_issock, True, mock_fstat) mock_fsync.assert_not_called() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/image_fixtures.py0000664000175000017500000000576100000000000021250 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime # nova.image.glance._translate_from_glance() returns datetime # objects, not strings. NOW_DATE = datetime.datetime(2010, 10, 11, 10, 30, 22) def get_image_fixtures(): """Returns a set of image fixture dicts for use in unit tests. Returns a set of dicts representing images/snapshots of varying statuses that would be returned from a call to `glanceclient.client.Client.images.list`. The IDs of the images returned start at 123 and go to 131, with the following brief summary of image attributes: | ID Type Status Notes | ---------------------------------------------------------- | 123 Public image active | 124 Snapshot queued | 125 Snapshot saving | 126 Snapshot active | 127 Snapshot killed | 128 Snapshot deleted | 129 Snapshot pending_delete | 130 Public image active Has no name """ image_id = 123 fixtures = [] def add_fixture(**kwargs): kwargs.update(created_at=NOW_DATE, updated_at=NOW_DATE) fixtures.append(kwargs) # Public image add_fixture(id=str(image_id), name='public image', is_public=True, status='active', properties={'key1': 'value1'}, min_ram="128", min_disk="10", size=25165824) image_id += 1 # Snapshot for User 1 uuid = 'aa640691-d1a7-4a67-9d3c-d35ee6b3cc74' snapshot_properties = {'instance_uuid': uuid, 'user_id': 'fake'} for status in ('queued', 'saving', 'active', 'killed', 'deleted', 'pending_delete'): deleted = False if status != 'deleted' else True deleted_at = NOW_DATE if deleted else None add_fixture(id=str(image_id), name='%s snapshot' % status, is_public=False, status=status, properties=snapshot_properties, size=25165824, deleted=deleted, deleted_at=deleted_at) image_id += 1 # Image without a name add_fixture(id=str(image_id), is_public=True, status='active', properties={}, size=25165824) # Image for permission tests image_id += 1 add_fixture(id=str(image_id), is_public=True, status='active', properties={}, owner='authorized_fake', size=25165824) return fixtures ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.542468 nova-21.2.4/nova/tests/unit/keymgr/0000775000175000017500000000000000000000000017150 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/keymgr/__init__.py0000664000175000017500000000000000000000000021247 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/nova/tests/unit/keymgr/fake.py0000664000175000017500000000155400000000000020435 0ustar00zuulzuul00000000000000# Copyright 2011 Justin Santa Barbara # Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Implementation of a fake key manager.""" from castellan.tests.unit.key_manager import mock_key_manager def fake_api(configuration=None): return mock_key_manager.MockKeyManager(configuration) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/keymgr/test_conf_key_mgr.py0000664000175000017500000000751300000000000023231 0ustar00zuulzuul00000000000000# Copyright (c) 2013 The Johns Hopkins University/Applied Physics Laboratory # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Test cases for the conf key manager. 
""" import binascii import codecs from castellan.common.objects import symmetric_key as key import nova.conf from nova import context from nova import exception from nova.keymgr import conf_key_mgr from nova import test CONF = nova.conf.CONF decode_hex = codecs.getdecoder("hex_codec") class ConfKeyManagerTestCase(test.NoDBTestCase): def __init__(self, *args, **kwargs): super(ConfKeyManagerTestCase, self).__init__(*args, **kwargs) self._hex_key = '0' * 64 def _create_key_manager(self): CONF.set_default('fixed_key', default=self._hex_key, group='key_manager') return conf_key_mgr.ConfKeyManager(CONF) def setUp(self): super(ConfKeyManagerTestCase, self).setUp() self.ctxt = context.RequestContext('fake', 'fake') self.key_mgr = self._create_key_manager() encoded_key = bytes(binascii.unhexlify(self._hex_key)) self.key = key.SymmetricKey('AES', len(encoded_key) * 8, encoded_key) self.key_id = self.key_mgr.key_id def test_init(self): key_manager = self._create_key_manager() self.assertEqual(self._hex_key, key_manager._hex_key) def test_init_value_error(self): CONF.set_default('fixed_key', default=None, group='key_manager') self.assertRaises(ValueError, conf_key_mgr.ConfKeyManager, CONF) def test_create_key(self): key_id_1 = self.key_mgr.create_key(self.ctxt, 'AES', 256) key_id_2 = self.key_mgr.create_key(self.ctxt, 'AES', 256) # ensure that the UUIDs are the same self.assertEqual(key_id_1, key_id_2) def test_create_null_context(self): self.assertRaises(exception.Forbidden, self.key_mgr.create_key, None, 'AES', 256) def test_store_key(self): key_bytes = bytes(binascii.unhexlify('0' * 64)) _key = key.SymmetricKey('AES', len(key_bytes) * 8, key_bytes) key_id = self.key_mgr.store(self.ctxt, _key) actual_key = self.key_mgr.get(self.ctxt, key_id) self.assertEqual(_key, actual_key) def test_store_null_context(self): self.assertRaises(exception.Forbidden, self.key_mgr.store, None, self.key) def test_get_null_context(self): self.assertRaises(exception.Forbidden, self.key_mgr.get, None, None) def test_get_unknown_key(self): self.assertRaises(KeyError, self.key_mgr.get, self.ctxt, None) def test_get(self): self.assertEqual(self.key, self.key_mgr.get(self.ctxt, self.key_id)) def test_delete_key(self): key_id = self.key_mgr.create_key(self.ctxt, 'AES', 256) self.key_mgr.delete(self.ctxt, key_id) # key won't actually be deleted self.assertEqual(self.key, self.key_mgr.get(self.ctxt, key_id)) def test_delete_null_context(self): self.assertRaises(exception.Forbidden, self.key_mgr.delete, None, None) def test_delete_unknown_key(self): self.assertRaises(exception.KeyManagerError, self.key_mgr.delete, self.ctxt, None) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/matchers.py0000664000175000017500000005120200000000000020032 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2012 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """Matcher classes to be used inside of the testtools assertThat framework.""" import copy import pprint from lxml import etree import six from testtools import content import testtools.matchers class DictKeysMismatch(object): def __init__(self, d1only, d2only): self.d1only = d1only self.d2only = d2only def describe(self): return ('Keys in d1 and not d2: %(d1only)s.' ' Keys in d2 and not d1: %(d2only)s' % {'d1only': self.d1only, 'd2only': self.d2only}) def get_details(self): return {} class DictMismatch(object): def __init__(self, key, d1_value, d2_value): self.key = key self.d1_value = d1_value self.d2_value = d2_value def describe(self): return ("Dictionaries do not match at %(key)s." " d1: %(d1_value)s d2: %(d2_value)s" % {'key': self.key, 'd1_value': self.d1_value, 'd2_value': self.d2_value}) def get_details(self): return {} class DictMatches(object): def __init__(self, d1, approx_equal=False, tolerance=0.001): self.d1 = d1 self.approx_equal = approx_equal self.tolerance = tolerance def __str__(self): return 'DictMatches(%s)' % (pprint.pformat(self.d1)) # Useful assertions def match(self, d2): """Assert two dicts are equivalent. This is a 'deep' match in the sense that it handles nested dictionaries appropriately. NOTE: If you don't care (or don't know) a given value, you can specify the string DONTCARE as the value. This will cause that dict-item to be skipped. """ d1keys = set(self.d1.keys()) d2keys = set(d2.keys()) if d1keys != d2keys: d1only = sorted(d1keys - d2keys) d2only = sorted(d2keys - d1keys) return DictKeysMismatch(d1only, d2only) for key in d1keys: d1value = self.d1[key] d2value = d2[key] try: error = abs(float(d1value) - float(d2value)) within_tolerance = error <= self.tolerance except (ValueError, TypeError): # If both values aren't convertible to float, just ignore # ValueError if arg is a str, TypeError if it's something else # (like None) within_tolerance = False if hasattr(d1value, 'keys') and hasattr(d2value, 'keys'): matcher = DictMatches(d1value) did_match = matcher.match(d2value) if did_match is not None: return did_match elif 'DONTCARE' in (d1value, d2value): continue elif self.approx_equal and within_tolerance: continue elif d1value != d2value: return DictMismatch(key, d1value, d2value) class ListLengthMismatch(object): def __init__(self, len1, len2): self.len1 = len1 self.len2 = len2 def describe(self): return ('Length mismatch: len(L1)=%(len1)d != ' 'len(L2)=%(len2)d' % {'len1': self.len1, 'len2': self.len2}) def get_details(self): return {} class DictListMatches(object): def __init__(self, l1, approx_equal=False, tolerance=0.001): self.l1 = l1 self.approx_equal = approx_equal self.tolerance = tolerance def __str__(self): return 'DictListMatches(%s)' % (pprint.pformat(self.l1)) # Useful assertions def match(self, l2): """Assert a list of dicts are equivalent.""" l1count = len(self.l1) l2count = len(l2) if l1count != l2count: return ListLengthMismatch(l1count, l2count) for d1, d2 in zip(self.l1, l2): matcher = DictMatches(d2, approx_equal=self.approx_equal, tolerance=self.tolerance) did_match = matcher.match(d1) if did_match: return did_match class SubDictMismatch(object): def __init__(self, key=None, sub_value=None, super_value=None, keys=False): self.key = key self.sub_value = sub_value self.super_value = super_value self.keys = keys def describe(self): if self.keys: return "Keys between dictionaries did not match" else: return ("Dictionaries do not match 
at %s. d1: %s d2: %s" % (self.key, self.super_value, self.sub_value)) def get_details(self): return {} class IsSubDictOf(object): def __init__(self, super_dict): self.super_dict = super_dict def __str__(self): return 'IsSubDictOf(%s)' % (self.super_dict) def match(self, sub_dict): """Assert a sub_dict is subset of super_dict.""" if not set(sub_dict.keys()).issubset(set(self.super_dict.keys())): return SubDictMismatch(keys=True) for k, sub_value in sub_dict.items(): super_value = self.super_dict[k] if isinstance(sub_value, dict): matcher = IsSubDictOf(super_value) did_match = matcher.match(sub_value) if did_match is not None: return did_match elif 'DONTCARE' in (sub_value, super_value): continue else: if sub_value != super_value: return SubDictMismatch(k, sub_value, super_value) class FunctionCallMatcher(object): def __init__(self, expected_func_calls): self.expected_func_calls = expected_func_calls self.actual_func_calls = [] def call(self, *args, **kwargs): func_call = {'args': args, 'kwargs': kwargs} self.actual_func_calls.append(func_call) def match(self): dict_list_matcher = DictListMatches(self.expected_func_calls) return dict_list_matcher.match(self.actual_func_calls) class XMLMismatch(object): """Superclass for XML mismatch.""" def __init__(self, state): self.path = str(state) self.expected = state.expected self.actual = state.actual def describe(self): return "%(path)s: XML does not match" % {'path': self.path} def get_details(self): return { 'expected': content.text_content(self.expected), 'actual': content.text_content(self.actual), } class XMLDocInfoMismatch(XMLMismatch): """XML version or encoding doesn't match.""" def __init__(self, state, expected_doc_info, actual_doc_info): super(XMLDocInfoMismatch, self).__init__(state) self.expected_doc_info = expected_doc_info self.actual_doc_info = actual_doc_info def describe(self): return ("%(path)s: XML information mismatch(version, encoding) " "expected version %(expected_version)s, " "expected encoding %(expected_encoding)s; " "actual version %(actual_version)s, " "actual encoding %(actual_encoding)s" % {'path': self.path, 'expected_version': self.expected_doc_info['version'], 'expected_encoding': self.expected_doc_info['encoding'], 'actual_version': self.actual_doc_info['version'], 'actual_encoding': self.actual_doc_info['encoding']}) class XMLTagMismatch(XMLMismatch): """XML tags don't match.""" def __init__(self, state, idx, expected_tag, actual_tag): super(XMLTagMismatch, self).__init__(state) self.idx = idx self.expected_tag = expected_tag self.actual_tag = actual_tag def describe(self): return ("%(path)s: XML tag mismatch at index %(idx)d: " "expected tag <%(expected_tag)s>; " "actual tag <%(actual_tag)s>" % {'path': self.path, 'idx': self.idx, 'expected_tag': self.expected_tag, 'actual_tag': self.actual_tag}) class XMLAttrKeysMismatch(XMLMismatch): """XML attribute keys don't match.""" def __init__(self, state, expected_only, actual_only): super(XMLAttrKeysMismatch, self).__init__(state) self.expected_only = ', '.join(sorted(expected_only)) self.actual_only = ', '.join(sorted(actual_only)) def describe(self): return ("%(path)s: XML attributes mismatch: " "keys only in expected: %(expected_only)s; " "keys only in actual: %(actual_only)s" % {'path': self.path, 'expected_only': self.expected_only, 'actual_only': self.actual_only}) class XMLAttrValueMismatch(XMLMismatch): """XML attribute values don't match.""" def __init__(self, state, key, expected_value, actual_value): super(XMLAttrValueMismatch, self).__init__(state) self.key 
= key self.expected_value = expected_value self.actual_value = actual_value def describe(self): return ("%(path)s: XML attribute value mismatch: " "expected value of attribute %(key)s: %(expected_value)r; " "actual value: %(actual_value)r" % {'path': self.path, 'key': self.key, 'expected_value': self.expected_value, 'actual_value': self.actual_value}) class XMLTextValueMismatch(XMLMismatch): """XML text values don't match.""" def __init__(self, state, expected_text, actual_text): super(XMLTextValueMismatch, self).__init__(state) self.expected_text = expected_text self.actual_text = actual_text def describe(self): return ("%(path)s: XML text value mismatch: " "expected text value: %(expected_text)r; " "actual value: %(actual_text)r" % {'path': self.path, 'expected_text': self.expected_text, 'actual_text': self.actual_text}) class XMLUnexpectedChild(XMLMismatch): """Unexpected child present in XML.""" def __init__(self, state, tag, idx): super(XMLUnexpectedChild, self).__init__(state) self.tag = tag self.idx = idx def describe(self): return ("%(path)s: XML unexpected child element <%(tag)s> " "present at index %(idx)d" % {'path': self.path, 'tag': self.tag, 'idx': self.idx}) class XMLExpectedChild(XMLMismatch): """Expected child not present in XML. idx indicates at which position the child was expected. If idx is None, that indicates that strict ordering was not required. """ def __init__(self, state, tag, idx): super(XMLExpectedChild, self).__init__(state) self.tag = tag self.idx = idx def describe(self): s = ("%(path)s: XML expected child element <%(tag)s> " "not present" % {'path': self.path, 'tag': self.tag}) # If we are not requiring strict ordering then the child element # can be expected at any index, so don't claim that it is expected # at a particular one. if self.idx is not None: s += " at index %d" % self.idx return s class XMLMatchState(object): """Maintain some state for matching. Tracks the XML node path and saves the expected and actual full XML text, for use by the XMLMismatch subclasses. """ def __init__(self, expected, actual): self.path = [] self.expected = expected self.actual = actual def __str__(self): return '/' + '/'.join(self.path) def node(self, tag, idx): """Returns a new state based on the current one, with tag and idx appended to the path. We avoid appending in place and popping on exit from the context of the comparison at this level in the XML tree, because this would mutate state objects embedded in XMLMismatch objects which are bubbled up through recursive calls to _compare_nodes. This would result in a misleading error by the time the XMLMismatch object surfaced at the top of the assertThat() part of the stack. :param tag: The element tag :param idx: If not None, the integer index of the element within its parent. Not included in the path element if None. """ new_state = copy.deepcopy(self) if idx is not None: new_state.path.append("%s[%d]" % (tag, idx)) else: new_state.path.append(tag) return new_state class XMLMatches(object): """Compare XML strings. 
More complete than string comparison.""" SKIP_TAGS = (etree.Comment, etree.ProcessingInstruction) @staticmethod def _parse(text_or_bytes): if isinstance(text_or_bytes, six.text_type): text_or_bytes = text_or_bytes.encode("utf-8") parser = etree.XMLParser(encoding="UTF-8") return etree.parse(six.BytesIO(text_or_bytes), parser) def __init__(self, expected, allow_mixed_nodes=False, skip_empty_text_nodes=True, skip_values=('DONTCARE',)): self.expected_xml = expected self.expected = self._parse(expected) self.allow_mixed_nodes = allow_mixed_nodes self.skip_empty_text_nodes = skip_empty_text_nodes self.skip_values = set(skip_values) def __str__(self): return 'XMLMatches(%r)' % self.expected_xml def match(self, actual_xml): actual = self._parse(actual_xml) state = XMLMatchState(self.expected_xml, actual_xml) expected_doc_info = self._get_xml_docinfo(self.expected) actual_doc_info = self._get_xml_docinfo(actual) if expected_doc_info != actual_doc_info: return XMLDocInfoMismatch(state, expected_doc_info, actual_doc_info) result = self._compare_node(self.expected.getroot(), actual.getroot(), state, None) if result is False: return XMLMismatch(state) elif result is not True: return result @staticmethod def _get_xml_docinfo(xml_document): return {'version': xml_document.docinfo.xml_version, 'encoding': xml_document.docinfo.encoding} def _compare_text_nodes(self, expected, actual, state): expected_text = [expected.text] expected_text.extend(child.tail for child in expected) actual_text = [actual.text] actual_text.extend(child.tail for child in actual) if self.skip_empty_text_nodes: expected_text = [text for text in expected_text if text and not text.isspace()] actual_text = [text for text in actual_text if text and not text.isspace()] if self.skip_values.intersection( expected_text + actual_text): return if self.allow_mixed_nodes: # lets sort text nodes because they can be mixed expected_text = sorted(expected_text) actual_text = sorted(actual_text) if expected_text != actual_text: return XMLTextValueMismatch(state, expected_text, actual_text) def _compare_node(self, expected, actual, state, idx): """Recursively compares nodes within the XML tree.""" # Start by comparing the tags if expected.tag != actual.tag: return XMLTagMismatch(state, idx, expected.tag, actual.tag) new_state = state.node(expected.tag, idx) # Compare the attribute keys expected_attrs = set(expected.attrib.keys()) actual_attrs = set(actual.attrib.keys()) if expected_attrs != actual_attrs: expected_only = expected_attrs - actual_attrs actual_only = actual_attrs - expected_attrs return XMLAttrKeysMismatch(new_state, expected_only, actual_only) # Compare the attribute values for key in expected_attrs: expected_value = expected.attrib[key] actual_value = actual.attrib[key] if self.skip_values.intersection( [expected_value, actual_value]): continue elif expected_value != actual_value: return XMLAttrValueMismatch(new_state, key, expected_value, actual_value) # Compare text nodes text_nodes_mismatch = self._compare_text_nodes( expected, actual, new_state) if text_nodes_mismatch: return text_nodes_mismatch # Compare the contents of the node matched_actual_child_idxs = set() # first_actual_child_idx - pointer to next actual child # used with allow_mixed_nodes=False ONLY # prevent to visit actual child nodes twice first_actual_child_idx = 0 result = None for expected_child in expected: if expected_child.tag in self.SKIP_TAGS: continue related_actual_child_idx = None if self.allow_mixed_nodes: first_actual_child_idx = 0 for actual_child_idx in 
range( first_actual_child_idx, len(actual)): if actual[actual_child_idx].tag in self.SKIP_TAGS: first_actual_child_idx += 1 continue if actual_child_idx in matched_actual_child_idxs: continue # Compare the nodes result = self._compare_node(expected_child, actual[actual_child_idx], new_state, actual_child_idx) first_actual_child_idx += 1 if result is not True: if self.allow_mixed_nodes: continue else: return result else: # nodes match related_actual_child_idx = actual_child_idx break if related_actual_child_idx is not None: matched_actual_child_idxs.add(actual_child_idx) else: if isinstance(result, XMLExpectedChild) or \ isinstance(result, XMLUnexpectedChild): return result if self.allow_mixed_nodes: expected_child_idx = None else: expected_child_idx = first_actual_child_idx return XMLExpectedChild(new_state, expected_child.tag, expected_child_idx) # Make sure we consumed all nodes in actual for actual_child_idx, actual_child in enumerate(actual): if (actual_child.tag not in self.SKIP_TAGS and actual_child_idx not in matched_actual_child_idxs): return XMLUnexpectedChild(new_state, actual_child.tag, actual_child_idx) # The nodes match return True class EncodedByUTF8(object): def match(self, obj): if isinstance(obj, six.binary_type): if hasattr(obj, "decode"): try: obj.decode("utf-8") except UnicodeDecodeError: return testtools.matchers.Mismatch( "%s is not encoded in UTF-8." % obj) elif isinstance(obj, six.text_type): try: obj.encode("utf-8", "strict") except UnicodeDecodeError: return testtools.matchers.Mismatch( "%s cannot be encoded in UTF-8." % obj) else: reason = ("Type of '%(obj)s' is '%(obj_type)s', " "should be '%(correct_type)s'." % { "obj": obj, "obj_type": type(obj).__name__, "correct_type": six.binary_type.__name__ }) return testtools.matchers.Mismatch(reason) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5464683 nova-21.2.4/nova/tests/unit/network/0000775000175000017500000000000000000000000017343 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/network/__init__.py0000664000175000017500000000000000000000000021442 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/network/interfaces-override.template0000664000175000017500000000235000000000000025040 0ustar00zuulzuul00000000000000# Injected by Nova on instance boot # # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). 
# The loopback network interface auto lo iface lo inet loopback {% for ifc in interfaces %} auto {{ ifc.name }} iface {{ ifc.name }} inet static address {{ ifc.address }} netmask {{ ifc.netmask }} broadcast {{ ifc.broadcast }} {% if ifc.gateway %} gateway {{ ifc.gateway }} {% endif %} {% if ifc.dns %} dns-nameservers {{ ifc.dns }} {% endif %} {% for route in ifc.routes %} post-up ip route add {{ route.cidr }} via {{ route.gateway }} dev {{ ifc.name }} pre-down ip route del {{ route.cidr }} via {{ route.gateway }} dev {{ ifc.name }} {% endfor %} {% if use_ipv6 %} {% if libvirt_virt_type == 'lxc' %} {% if ifc.address_v6 %} post-up ip -6 addr add {{ ifc.address_v6 }}/{{ifc.netmask_v6 }} dev ${IFACE} {% endif %} {% if ifc.gateway_v6 %} post-up ip -6 route add default via {{ ifc.gateway_v6 }} dev ${IFACE} {% endif %} {% else %} iface {{ ifc.name }} inet6 static address {{ ifc.address_v6 }} netmask {{ ifc.netmask_v6 }} {% if ifc.gateway_v6 %} gateway {{ ifc.gateway_v6 }} {% endif %} {% endif %} {% endif %} {% endfor %} ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5464683 nova-21.2.4/nova/tests/unit/network/security_group/0000775000175000017500000000000000000000000022426 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/network/security_group/__init__.py0000664000175000017500000000000000000000000024525 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/network/test_network_info.py0000664000175000017500000014122200000000000023462 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
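# The tests in this module exercise the data structures in
# nova.network.model (Route, IP, FixedIP, Subnet, Network, VIF and
# NetworkInfo) as well as the /etc/network/interfaces template rendering
# performed by nova.virt.netutils.get_injected_network_template(). Most
# cases build their inputs from nova.tests.unit.fake_network_cache_model.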
from oslo_config import cfg from oslo_utils.fixture import uuidsentinel as uuids from nova import exception from nova.network import model from nova import objects from nova import test from nova.tests.unit import fake_network_cache_model from nova.virt import netutils class RouteTests(test.NoDBTestCase): def test_create_route_with_attrs(self): route = fake_network_cache_model.new_route() fake_network_cache_model.new_ip(dict(address='192.168.1.1')) self.assertEqual('0.0.0.0/24', route['cidr']) self.assertEqual('192.168.1.1', route['gateway']['address']) self.assertEqual('eth0', route['interface']) def test_routes_equal(self): route1 = model.Route() route2 = model.Route() self.assertEqual(route1, route2) def test_routes_not_equal(self): route1 = model.Route(cidr='1.1.1.0/24') route2 = model.Route(cidr='2.2.2.0/24') self.assertNotEqual(route1, route2) route1 = model.Route(cidr='1.1.1.1/24', gateway='1.1.1.1') route2 = model.Route(cidr='1.1.1.1/24', gateway='1.1.1.2') self.assertNotEqual(route1, route2) route1 = model.Route(cidr='1.1.1.1/24', interface='tap0') route2 = model.Route(cidr='1.1.1.1/24', interface='tap1') self.assertNotEqual(route1, route2) def test_hydrate(self): route = model.Route.hydrate( {'gateway': fake_network_cache_model.new_ip( dict(address='192.168.1.1'))}) self.assertIsNone(route['cidr']) self.assertEqual('192.168.1.1', route['gateway']['address']) self.assertIsNone(route['interface']) class IPTests(test.NoDBTestCase): def test_ip_equal(self): ip1 = model.IP(address='127.0.0.1') ip2 = model.IP(address='127.0.0.1') self.assertEqual(ip1, ip2) def test_ip_not_equal(self): ip1 = model.IP(address='127.0.0.1') ip2 = model.IP(address='172.0.0.3') self.assertNotEqual(ip1, ip2) ip1 = model.IP(address='127.0.0.1', type=1) ip2 = model.IP(address='172.0.0.1', type=2) self.assertNotEqual(ip1, ip2) ip1 = model.IP(address='127.0.0.1', version=4) ip2 = model.IP(address='172.0.0.1', version=6) self.assertNotEqual(ip1, ip2) class FixedIPTests(test.NoDBTestCase): def test_createnew_fixed_ip_with_attrs(self): fixed_ip = model.FixedIP(address='192.168.1.100') self.assertEqual('192.168.1.100', fixed_ip['address']) self.assertEqual([], fixed_ip['floating_ips']) self.assertEqual('fixed', fixed_ip['type']) self.assertEqual(4, fixed_ip['version']) def test_create_fixed_ipv6(self): fixed_ip = model.FixedIP(address='::1') self.assertEqual('::1', fixed_ip['address']) self.assertEqual([], fixed_ip['floating_ips']) self.assertEqual('fixed', fixed_ip['type']) self.assertEqual(6, fixed_ip['version']) def test_create_fixed_bad_ip_fails(self): self.assertRaises(exception.InvalidIpAddressError, model.FixedIP, address='picklespicklespickles') def test_equate_two_fixed_ips(self): fixed_ip = model.FixedIP(address='::1') fixed_ip2 = model.FixedIP(address='::1') self.assertEqual(fixed_ip, fixed_ip2) def test_equate_two_dissimilar_fixed_ips_fails(self): fixed_ip = model.FixedIP(address='::1') fixed_ip2 = model.FixedIP(address='::2') self.assertNotEqual(fixed_ip, fixed_ip2) fixed_ip = model.FixedIP(address='::1', type='1') fixed_ip2 = model.FixedIP(address='::1', type='2') self.assertNotEqual(fixed_ip, fixed_ip2) fixed_ip = model.FixedIP(address='::1', version='6') fixed_ip2 = model.FixedIP(address='::1', version='4') self.assertNotEqual(fixed_ip, fixed_ip2) fixed_ip = model.FixedIP(address='::1', floating_ips='1.1.1.1') fixed_ip2 = model.FixedIP(address='::1', floating_ips='8.8.8.8') self.assertNotEqual(fixed_ip, fixed_ip2) def test_hydrate(self): fixed_ip = model.FixedIP.hydrate({}) self.assertEqual([], 
fixed_ip['floating_ips']) self.assertIsNone(fixed_ip['address']) self.assertEqual('fixed', fixed_ip['type']) self.assertIsNone(fixed_ip['version']) def test_add_floating_ip(self): fixed_ip = model.FixedIP(address='192.168.1.100') fixed_ip.add_floating_ip('192.168.1.101') self.assertEqual(['192.168.1.101'], fixed_ip['floating_ips']) def test_add_floating_ip_repeatedly_only_one_instance(self): fixed_ip = model.FixedIP(address='192.168.1.100') for i in range(10): fixed_ip.add_floating_ip('192.168.1.101') self.assertEqual(['192.168.1.101'], fixed_ip['floating_ips']) class SubnetTests(test.NoDBTestCase): def test_create_subnet_with_attrs(self): subnet = fake_network_cache_model.new_subnet() route1 = fake_network_cache_model.new_route() self.assertEqual('10.10.0.0/24', subnet['cidr']) self.assertEqual( [fake_network_cache_model.new_ip(dict(address='1.2.3.4')), fake_network_cache_model.new_ip(dict(address='2.3.4.5'))], subnet['dns']) self.assertEqual('10.10.0.1', subnet['gateway']['address']) self.assertEqual( [fake_network_cache_model.new_fixed_ip( dict(address='10.10.0.2')), fake_network_cache_model.new_fixed_ip( dict(address='10.10.0.3'))], subnet['ips']) self.assertEqual([route1], subnet['routes']) self.assertEqual(4, subnet['version']) def test_subnet_equal(self): subnet1 = fake_network_cache_model.new_subnet() subnet2 = fake_network_cache_model.new_subnet() self.assertEqual(subnet1, subnet2) def test_subnet_not_equal(self): subnet1 = model.Subnet(cidr='1.1.1.0/24') subnet2 = model.Subnet(cidr='2.2.2.0/24') self.assertNotEqual(subnet1, subnet2) subnet1 = model.Subnet(dns='1.1.1.0/24') subnet2 = model.Subnet(dns='2.2.2.0/24') self.assertNotEqual(subnet1, subnet2) subnet1 = model.Subnet(gateway='1.1.1.1/24') subnet2 = model.Subnet(gateway='2.2.2.1/24') self.assertNotEqual(subnet1, subnet2) subnet1 = model.Subnet(ips='1.1.1.0/24') subnet2 = model.Subnet(ips='2.2.2.0/24') self.assertNotEqual(subnet1, subnet2) subnet1 = model.Subnet(routes='1.1.1.0/24') subnet2 = model.Subnet(routes='2.2.2.0/24') self.assertNotEqual(subnet1, subnet2) subnet1 = model.Subnet(version='4') subnet2 = model.Subnet(version='6') self.assertNotEqual(subnet1, subnet2) def test_add_route(self): subnet = fake_network_cache_model.new_subnet() route1 = fake_network_cache_model.new_route() route2 = fake_network_cache_model.new_route({'cidr': '1.1.1.1/24'}) subnet.add_route(route2) self.assertEqual([route1, route2], subnet['routes']) def test_add_route_a_lot(self): subnet = fake_network_cache_model.new_subnet() route1 = fake_network_cache_model.new_route() route2 = fake_network_cache_model.new_route({'cidr': '1.1.1.1/24'}) for i in range(10): subnet.add_route(route2) self.assertEqual([route1, route2], subnet['routes']) def test_add_dns(self): subnet = fake_network_cache_model.new_subnet() dns = fake_network_cache_model.new_ip(dict(address='9.9.9.9')) subnet.add_dns(dns) self.assertEqual( [fake_network_cache_model.new_ip(dict(address='1.2.3.4')), fake_network_cache_model.new_ip(dict(address='2.3.4.5')), fake_network_cache_model.new_ip(dict(address='9.9.9.9'))], subnet['dns']) def test_add_dns_a_lot(self): subnet = fake_network_cache_model.new_subnet() for i in range(10): subnet.add_dns(fake_network_cache_model.new_ip( dict(address='9.9.9.9'))) self.assertEqual( [fake_network_cache_model.new_ip(dict(address='1.2.3.4')), fake_network_cache_model.new_ip(dict(address='2.3.4.5')), fake_network_cache_model.new_ip(dict(address='9.9.9.9'))], subnet['dns']) def test_add_ip(self): subnet = fake_network_cache_model.new_subnet() 
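        # The fixture subnet from fake_network_cache_model starts out with
        # two fixed IPs (10.10.0.2 and 10.10.0.3); add_ip() should append
        # the new address to that list, as asserted below.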
subnet.add_ip(fake_network_cache_model.new_ip( dict(address='192.168.1.102'))) self.assertEqual( [fake_network_cache_model.new_fixed_ip( dict(address='10.10.0.2')), fake_network_cache_model.new_fixed_ip( dict(address='10.10.0.3')), fake_network_cache_model.new_ip( dict(address='192.168.1.102'))], subnet['ips']) def test_add_ip_a_lot(self): subnet = fake_network_cache_model.new_subnet() for i in range(10): subnet.add_ip(fake_network_cache_model.new_fixed_ip( dict(address='192.168.1.102'))) self.assertEqual( [fake_network_cache_model.new_fixed_ip( dict(address='10.10.0.2')), fake_network_cache_model.new_fixed_ip( dict(address='10.10.0.3')), fake_network_cache_model.new_fixed_ip( dict(address='192.168.1.102'))], subnet['ips']) def test_hydrate(self): subnet_dict = { 'cidr': '255.255.255.0', 'dns': [fake_network_cache_model.new_ip(dict(address='1.1.1.1'))], 'ips': [fake_network_cache_model.new_fixed_ip( dict(address='2.2.2.2'))], 'routes': [fake_network_cache_model.new_route()], 'version': 4, 'gateway': fake_network_cache_model.new_ip( dict(address='3.3.3.3'))} subnet = model.Subnet.hydrate(subnet_dict) self.assertEqual('255.255.255.0', subnet['cidr']) self.assertEqual([fake_network_cache_model.new_ip( dict(address='1.1.1.1'))], subnet['dns']) self.assertEqual('3.3.3.3', subnet['gateway']['address']) self.assertEqual([fake_network_cache_model.new_fixed_ip( dict(address='2.2.2.2'))], subnet['ips']) self.assertEqual([fake_network_cache_model.new_route()], subnet['routes']) self.assertEqual(4, subnet['version']) class NetworkTests(test.NoDBTestCase): def test_create_network(self): network = fake_network_cache_model.new_network() self.assertEqual(1, network['id']) self.assertEqual('br0', network['bridge']) self.assertEqual('public', network['label']) self.assertEqual( [fake_network_cache_model.new_subnet(), fake_network_cache_model.new_subnet( dict(cidr='255.255.255.255'))], network['subnets']) def test_add_subnet(self): network = fake_network_cache_model.new_network() network.add_subnet(fake_network_cache_model.new_subnet( dict(cidr='0.0.0.0'))) self.assertEqual( [fake_network_cache_model.new_subnet(), fake_network_cache_model.new_subnet( dict(cidr='255.255.255.255')), fake_network_cache_model.new_subnet(dict(cidr='0.0.0.0'))], network['subnets']) def test_add_subnet_a_lot(self): network = fake_network_cache_model.new_network() for i in range(10): network.add_subnet(fake_network_cache_model.new_subnet( dict(cidr='0.0.0.0'))) self.assertEqual( [fake_network_cache_model.new_subnet(), fake_network_cache_model.new_subnet( dict(cidr='255.255.255.255')), fake_network_cache_model.new_subnet(dict(cidr='0.0.0.0'))], network['subnets']) def test_network_equal(self): network1 = model.Network() network2 = model.Network() self.assertEqual(network1, network2) def test_network_not_equal(self): network1 = model.Network(id='1') network2 = model.Network(id='2') self.assertNotEqual(network1, network2) network1 = model.Network(bridge='br-int') network2 = model.Network(bridge='br0') self.assertNotEqual(network1, network2) network1 = model.Network(label='net1') network2 = model.Network(label='net2') self.assertNotEqual(network1, network2) network1 = model.Network(subnets='1.1.1.0/24') network2 = model.Network(subnets='2.2.2.0/24') self.assertNotEqual(network1, network2) def test_hydrate(self): fake_network_cache_model.new_subnet() fake_network_cache_model.new_subnet(dict(cidr='255.255.255.255')) network = model.Network.hydrate(fake_network_cache_model.new_network()) self.assertEqual(1, network['id']) 
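        # Network.hydrate() rebuilds a Network model from a plain dict, so
        # the bridge, label and subnets of the fixture should survive the
        # round trip, as the remaining assertions check.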
self.assertEqual('br0', network['bridge']) self.assertEqual('public', network['label']) self.assertEqual( [fake_network_cache_model.new_subnet(), fake_network_cache_model.new_subnet( dict(cidr='255.255.255.255'))], network['subnets']) class VIFTests(test.NoDBTestCase): def test_create_vif(self): vif = fake_network_cache_model.new_vif() self.assertEqual(1, vif['id']) self.assertEqual('aa:aa:aa:aa:aa:aa', vif['address']) self.assertEqual(fake_network_cache_model.new_network(), vif['network']) def test_vif_equal(self): vif1 = model.VIF() vif2 = model.VIF() self.assertEqual(vif1, vif2) def test_vif_not_equal(self): vif1 = model.VIF(id=1) vif2 = model.VIF(id=2) self.assertNotEqual(vif1, vif2) vif1 = model.VIF(address='00:00:00:00:00:11') vif2 = model.VIF(address='00:00:00:00:00:22') self.assertNotEqual(vif1, vif2) vif1 = model.VIF(network='net1') vif2 = model.VIF(network='net2') self.assertNotEqual(vif1, vif2) vif1 = model.VIF(type='ovs') vif2 = model.VIF(type='linuxbridge') self.assertNotEqual(vif1, vif2) vif1 = model.VIF(devname='ovs1234') vif2 = model.VIF(devname='linuxbridge1234') self.assertNotEqual(vif1, vif2) vif1 = model.VIF(qbh_params=1) vif2 = model.VIF(qbh_params=None) self.assertNotEqual(vif1, vif2) vif1 = model.VIF(qbg_params=1) vif2 = model.VIF(qbg_params=None) self.assertNotEqual(vif1, vif2) vif1 = model.VIF(active=True) vif2 = model.VIF(active=False) self.assertNotEqual(vif1, vif2) vif1 = model.VIF(vnic_type=model.VNIC_TYPE_NORMAL) vif2 = model.VIF(vnic_type=model.VNIC_TYPE_DIRECT) self.assertNotEqual(vif1, vif2) vif1 = model.VIF(profile={'pci_slot': '0000:0a:00.1'}) vif2 = model.VIF(profile={'pci_slot': '0000:0a:00.2'}) self.assertNotEqual(vif1, vif2) vif1 = model.VIF(preserve_on_delete=True) vif2 = model.VIF(preserve_on_delete=False) self.assertNotEqual(vif1, vif2) def test_create_vif_with_type(self): vif_dict = dict( id=1, address='aa:aa:aa:aa:aa:aa', network=fake_network_cache_model.new_network(), type='bridge') vif = fake_network_cache_model.new_vif(vif_dict) self.assertEqual(1, vif['id']) self.assertEqual('aa:aa:aa:aa:aa:aa', vif['address']) self.assertEqual('bridge', vif['type']) self.assertEqual(fake_network_cache_model.new_network(), vif['network']) def test_vif_get_fixed_ips(self): vif = fake_network_cache_model.new_vif() fixed_ips = vif.fixed_ips() ips = [ fake_network_cache_model.new_fixed_ip(dict(address='10.10.0.2')), fake_network_cache_model.new_fixed_ip(dict(address='10.10.0.3')) ] * 2 self.assertEqual(fixed_ips, ips) def test_vif_get_fixed_ips_network_is_none(self): vif = model.VIF() fixed_ips = vif.fixed_ips() self.assertEqual([], fixed_ips) def test_vif_get_floating_ips(self): vif = fake_network_cache_model.new_vif() vif['network']['subnets'][0]['ips'][0].add_floating_ip('192.168.1.1') floating_ips = vif.floating_ips() self.assertEqual(['192.168.1.1'], floating_ips) def test_vif_get_labeled_ips(self): vif = fake_network_cache_model.new_vif() labeled_ips = vif.labeled_ips() ip_dict = { 'network_id': 1, 'ips': [fake_network_cache_model.new_ip( {'address': '10.10.0.2', 'type': 'fixed'}), fake_network_cache_model.new_ip( {'address': '10.10.0.3', 'type': 'fixed'})] * 2, 'network_label': 'public'} self.assertEqual(ip_dict, labeled_ips) def test_hydrate(self): fake_network_cache_model.new_network() vif = model.VIF.hydrate(fake_network_cache_model.new_vif()) self.assertEqual(1, vif['id']) self.assertEqual('aa:aa:aa:aa:aa:aa', vif['address']) self.assertEqual(fake_network_cache_model.new_network(), vif['network']) def test_hydrate_vif_with_type(self): vif_dict = dict( 
id=1, address='aa:aa:aa:aa:aa:aa', network=fake_network_cache_model.new_network(), type='bridge') vif = model.VIF.hydrate(fake_network_cache_model.new_vif(vif_dict)) self.assertEqual(1, vif['id']) self.assertEqual('aa:aa:aa:aa:aa:aa', vif['address']) self.assertEqual('bridge', vif['type']) self.assertEqual(fake_network_cache_model.new_network(), vif['network']) class NetworkInfoTests(test.NoDBTestCase): def test_create_model(self): ninfo = model.NetworkInfo([fake_network_cache_model.new_vif(), fake_network_cache_model.new_vif( {'address': 'bb:bb:bb:bb:bb:bb'})]) self.assertEqual( [fake_network_cache_model.new_fixed_ip( {'address': '10.10.0.2'}), fake_network_cache_model.new_fixed_ip( {'address': '10.10.0.3'})] * 4, ninfo.fixed_ips()) def test_create_async_model(self): def async_wrapper(): return model.NetworkInfo( [fake_network_cache_model.new_vif(), fake_network_cache_model.new_vif( {'address': 'bb:bb:bb:bb:bb:bb'})]) ninfo = model.NetworkInfoAsyncWrapper(async_wrapper) self.assertEqual( [fake_network_cache_model.new_fixed_ip( {'address': '10.10.0.2'}), fake_network_cache_model.new_fixed_ip( {'address': '10.10.0.3'})] * 4, ninfo.fixed_ips()) def test_create_async_model_exceptions(self): def async_wrapper(): raise test.TestingException() ninfo = model.NetworkInfoAsyncWrapper(async_wrapper) self.assertRaises(test.TestingException, ninfo.wait) # 2nd one doesn't raise self.assertIsNone(ninfo.wait()) # Test that do_raise=False works on .wait() ninfo = model.NetworkInfoAsyncWrapper(async_wrapper) self.assertIsNone(ninfo.wait(do_raise=False)) # Test we also raise calling a method ninfo = model.NetworkInfoAsyncWrapper(async_wrapper) self.assertRaises(test.TestingException, ninfo.fixed_ips) def test_get_floating_ips(self): vif = fake_network_cache_model.new_vif() vif['network']['subnets'][0]['ips'][0].add_floating_ip('192.168.1.1') ninfo = model.NetworkInfo([vif, fake_network_cache_model.new_vif( {'address': 'bb:bb:bb:bb:bb:bb'})]) self.assertEqual(['192.168.1.1'], ninfo.floating_ips()) def test_hydrate(self): ninfo = model.NetworkInfo([fake_network_cache_model.new_vif(), fake_network_cache_model.new_vif( {'address': 'bb:bb:bb:bb:bb:bb'})]) model.NetworkInfo.hydrate(ninfo) self.assertEqual( [fake_network_cache_model.new_fixed_ip( {'address': '10.10.0.2'}), fake_network_cache_model.new_fixed_ip( {'address': '10.10.0.3'})] * 4, ninfo.fixed_ips()) def _setup_injected_network_scenario(self, should_inject=True, use_ipv4=True, use_ipv6=False, gateway=True, dns=True, two_interfaces=False, libvirt_virt_type=None): """Check that netutils properly decides whether to inject based on whether the supplied subnet is static or dynamic. 
""" network = fake_network_cache_model.new_network({'subnets': []}) subnet_dict = {} if not gateway: subnet_dict['gateway'] = None if not dns: subnet_dict['dns'] = None if not should_inject: subnet_dict['dhcp_server'] = '10.10.0.1' if use_ipv4: network.add_subnet( fake_network_cache_model.new_subnet(subnet_dict)) if should_inject and use_ipv6: gateway_ip = fake_network_cache_model.new_ip(dict( address='1234:567::1')) ip = fake_network_cache_model.new_ip(dict( address='1234:567::2')) ipv6_subnet_dict = dict( cidr='1234:567::/48', gateway=gateway_ip, dns=[fake_network_cache_model.new_ip( dict(address='2001:4860:4860::8888')), fake_network_cache_model.new_ip( dict(address='2001:4860:4860::8844'))], ips=[ip]) if not gateway: ipv6_subnet_dict['gateway'] = None network.add_subnet(fake_network_cache_model.new_subnet( ipv6_subnet_dict)) # Behave as though CONF.flat_injected is True network['meta']['injected'] = True vif = fake_network_cache_model.new_vif({'network': network}) vifs = [vif] if two_interfaces: vifs.append(vif) nwinfo = model.NetworkInfo(vifs) return netutils.get_injected_network_template( nwinfo, libvirt_virt_type=libvirt_virt_type) def test_injection_dynamic(self): expected = None template = self._setup_injected_network_scenario(should_inject=False) self.assertEqual(expected, template) def test_injection_static(self): expected = """\ # Injected by Nova on instance boot # # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback auto eth0 iface eth0 inet static hwaddress ether aa:aa:aa:aa:aa:aa address 10.10.0.2 netmask 255.255.255.0 broadcast 10.10.0.255 gateway 10.10.0.1 dns-nameservers 1.2.3.4 2.3.4.5 """ template = self._setup_injected_network_scenario() self.assertEqual(expected, template) def test_injection_static_no_gateway(self): expected = """\ # Injected by Nova on instance boot # # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback auto eth0 iface eth0 inet static hwaddress ether aa:aa:aa:aa:aa:aa address 10.10.0.2 netmask 255.255.255.0 broadcast 10.10.0.255 dns-nameservers 1.2.3.4 2.3.4.5 """ template = self._setup_injected_network_scenario(gateway=False) self.assertEqual(expected, template) def test_injection_static_no_dns(self): expected = """\ # Injected by Nova on instance boot # # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback auto eth0 iface eth0 inet static hwaddress ether aa:aa:aa:aa:aa:aa address 10.10.0.2 netmask 255.255.255.0 broadcast 10.10.0.255 gateway 10.10.0.1 """ template = self._setup_injected_network_scenario(dns=False) self.assertEqual(expected, template) def test_injection_static_overridden_template(self): cfg.CONF.set_override( 'injected_network_template', 'nova/tests/unit/network/interfaces-override.template') expected = """\ # Injected by Nova on instance boot # # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). 
# The loopback network interface auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 10.10.0.2 netmask 255.255.255.0 broadcast 10.10.0.255 gateway 10.10.0.1 dns-nameservers 1.2.3.4 2.3.4.5 post-up ip route add 0.0.0.0/24 via 192.168.1.1 dev eth0 pre-down ip route del 0.0.0.0/24 via 192.168.1.1 dev eth0 """ template = self._setup_injected_network_scenario() self.assertEqual(expected, template) def test_injection_static_ipv6(self): expected = """\ # Injected by Nova on instance boot # # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback auto eth0 iface eth0 inet static hwaddress ether aa:aa:aa:aa:aa:aa address 10.10.0.2 netmask 255.255.255.0 broadcast 10.10.0.255 gateway 10.10.0.1 dns-nameservers 1.2.3.4 2.3.4.5 iface eth0 inet6 static hwaddress ether aa:aa:aa:aa:aa:aa address 1234:567::2 netmask 48 gateway 1234:567::1 dns-nameservers 2001:4860:4860::8888 2001:4860:4860::8844 """ template = self._setup_injected_network_scenario(use_ipv6=True) self.assertEqual(expected, template) def test_injection_static_ipv6_no_gateway(self): expected = """\ # Injected by Nova on instance boot # # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback auto eth0 iface eth0 inet static hwaddress ether aa:aa:aa:aa:aa:aa address 10.10.0.2 netmask 255.255.255.0 broadcast 10.10.0.255 dns-nameservers 1.2.3.4 2.3.4.5 iface eth0 inet6 static hwaddress ether aa:aa:aa:aa:aa:aa address 1234:567::2 netmask 48 dns-nameservers 2001:4860:4860::8888 2001:4860:4860::8844 """ template = self._setup_injected_network_scenario(use_ipv6=True, gateway=False) self.assertEqual(expected, template) def test_injection_static_with_ipv4_off(self): expected = None template = self._setup_injected_network_scenario(use_ipv4=False) self.assertEqual(expected, template) def test_injection_ipv6_two_interfaces(self): expected = """\ # Injected by Nova on instance boot # # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback auto eth0 iface eth0 inet static hwaddress ether aa:aa:aa:aa:aa:aa address 10.10.0.2 netmask 255.255.255.0 broadcast 10.10.0.255 gateway 10.10.0.1 dns-nameservers 1.2.3.4 2.3.4.5 iface eth0 inet6 static hwaddress ether aa:aa:aa:aa:aa:aa address 1234:567::2 netmask 48 gateway 1234:567::1 dns-nameservers 2001:4860:4860::8888 2001:4860:4860::8844 auto eth1 iface eth1 inet static hwaddress ether aa:aa:aa:aa:aa:aa address 10.10.0.2 netmask 255.255.255.0 broadcast 10.10.0.255 gateway 10.10.0.1 dns-nameservers 1.2.3.4 2.3.4.5 iface eth1 inet6 static hwaddress ether aa:aa:aa:aa:aa:aa address 1234:567::2 netmask 48 gateway 1234:567::1 dns-nameservers 2001:4860:4860::8888 2001:4860:4860::8844 """ template = self._setup_injected_network_scenario(use_ipv6=True, two_interfaces=True) self.assertEqual(expected, template) def test_injection_ipv6_with_lxc(self): expected = """\ # Injected by Nova on instance boot # # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). 
# The loopback network interface auto lo iface lo inet loopback auto eth0 iface eth0 inet static hwaddress ether aa:aa:aa:aa:aa:aa address 10.10.0.2 netmask 255.255.255.0 broadcast 10.10.0.255 gateway 10.10.0.1 dns-nameservers 1.2.3.4 2.3.4.5 post-up ip -6 addr add 1234:567::2/48 dev ${IFACE} post-up ip -6 route add default via 1234:567::1 dev ${IFACE} auto eth1 iface eth1 inet static hwaddress ether aa:aa:aa:aa:aa:aa address 10.10.0.2 netmask 255.255.255.0 broadcast 10.10.0.255 gateway 10.10.0.1 dns-nameservers 1.2.3.4 2.3.4.5 post-up ip -6 addr add 1234:567::2/48 dev ${IFACE} post-up ip -6 route add default via 1234:567::1 dev ${IFACE} """ template = self._setup_injected_network_scenario( use_ipv6=True, two_interfaces=True, libvirt_virt_type='lxc') self.assertEqual(expected, template) def test_injection_ipv6_with_lxc_no_gateway(self): expected = """\ # Injected by Nova on instance boot # # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback auto eth0 iface eth0 inet static hwaddress ether aa:aa:aa:aa:aa:aa address 10.10.0.2 netmask 255.255.255.0 broadcast 10.10.0.255 dns-nameservers 1.2.3.4 2.3.4.5 post-up ip -6 addr add 1234:567::2/48 dev ${IFACE} auto eth1 iface eth1 inet static hwaddress ether aa:aa:aa:aa:aa:aa address 10.10.0.2 netmask 255.255.255.0 broadcast 10.10.0.255 dns-nameservers 1.2.3.4 2.3.4.5 post-up ip -6 addr add 1234:567::2/48 dev ${IFACE} """ template = self._setup_injected_network_scenario( use_ipv6=True, gateway=False, two_interfaces=True, libvirt_virt_type='lxc') self.assertEqual(expected, template) def test_get_events(self): network_info = model.NetworkInfo([ model.VIF( id=uuids.hybrid_vif, details={'ovs_hybrid_plug': True}), model.VIF( id=uuids.normal_vif, details={'ovs_hybrid_plug': False})]) same_host = objects.Migration(source_compute='fake-host', dest_compute='fake-host') diff_host = objects.Migration(source_compute='fake-host1', dest_compute='fake-host2') # Same-host migrations will have all events be plug-time. self.assertItemsEqual( [('network-vif-plugged', uuids.normal_vif), ('network-vif-plugged', uuids.hybrid_vif)], network_info.get_plug_time_events(same_host)) # Same-host migrations will have no bind-time events. 
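# As the assertions below show, bind-time events are only expected for OVS hybrid-plug VIFs migrating to a different host; everything else remains a plug-time event.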
self.assertEqual([], network_info.get_bind_time_events(same_host)) # Diff-host migration + OVS hybrid plug = bind-time events self.assertEqual( [('network-vif-plugged', uuids.hybrid_vif)], network_info.get_bind_time_events(diff_host)) # Diff-host migration + normal OVS = plug-time events self.assertEqual( [('network-vif-plugged', uuids.normal_vif)], network_info.get_plug_time_events(diff_host)) def test_has_port_with_allocation(self): network_info = model.NetworkInfo([]) self.assertFalse(network_info.has_port_with_allocation()) network_info.append( model.VIF(id=uuids.port_without_profile)) self.assertFalse(network_info.has_port_with_allocation()) network_info.append( model.VIF(id=uuids.port_no_allocation, profile={'foo': 'bar'})) self.assertFalse(network_info.has_port_with_allocation()) network_info.append( model.VIF( id=uuids.port_empty_alloc, profile={'allocation': None})) self.assertFalse(network_info.has_port_with_allocation()) network_info.append( model.VIF( id=uuids.port_with_alloc, profile={'allocation': uuids.rp})) self.assertTrue(network_info.has_port_with_allocation()) class TestNetworkMetadata(test.NoDBTestCase): def setUp(self): super(TestNetworkMetadata, self).setUp() self.netinfo = self._new_netinfo() def _new_netinfo(self, vif_type='ethernet'): netinfo = model.NetworkInfo([fake_network_cache_model.new_vif( {'type': vif_type})]) # Give this vif ipv4 and ipv6 dhcp subnets ipv4_subnet = fake_network_cache_model.new_subnet(version=4) ipv6_subnet = fake_network_cache_model.new_subnet(version=6) netinfo[0]['network']['subnets'][0] = ipv4_subnet netinfo[0]['network']['subnets'][1] = ipv6_subnet netinfo[0]['network']['meta']['mtu'] = 1500 return netinfo def test_get_network_metadata_json(self): net_metadata = netutils.get_network_metadata(self.netinfo) # Physical Ethernet self.assertEqual( { 'id': 'interface0', 'type': 'phy', 'ethernet_mac_address': 'aa:aa:aa:aa:aa:aa', 'vif_id': 1, 'mtu': 1500 }, net_metadata['links'][0]) # IPv4 Network self.assertEqual( { 'id': 'network0', 'link': 'interface0', 'type': 'ipv4', 'ip_address': '10.10.0.2', 'netmask': '255.255.255.0', 'routes': [ { 'network': '0.0.0.0', 'netmask': '0.0.0.0', 'gateway': '10.10.0.1' }, { 'network': '0.0.0.0', 'netmask': '255.255.255.0', 'gateway': '192.168.1.1' } ], 'services': [{'address': '1.2.3.4', 'type': 'dns'}, {'address': '2.3.4.5', 'type': 'dns'}], 'network_id': 1 }, net_metadata['networks'][0]) self.assertEqual( { 'id': 'network1', 'link': 'interface0', 'type': 'ipv6', 'ip_address': 'fd00::2', 'netmask': 'ffff:ffff:ffff::', 'routes': [ { 'network': '::', 'netmask': '::', 'gateway': 'fd00::1' }, { 'network': '::', 'netmask': 'ffff:ffff:ffff::', 'gateway': 'fd00::1:1' } ], 'services': [{'address': '1:2:3:4::', 'type': 'dns'}, {'address': '2:3:4:5::', 'type': 'dns'}], 'network_id': 1 }, net_metadata['networks'][1]) def test_get_network_metadata_json_dhcp(self): ipv4_subnet = fake_network_cache_model.new_subnet( subnet_dict=dict(dhcp_server='1.1.1.1'), version=4) ipv6_subnet = fake_network_cache_model.new_subnet( subnet_dict=dict(dhcp_server='1234:567::'), version=6) self.netinfo[0]['network']['subnets'][0] = ipv4_subnet self.netinfo[0]['network']['subnets'][1] = ipv6_subnet net_metadata = netutils.get_network_metadata(self.netinfo) # IPv4 Network self.assertEqual( { 'id': 'network0', 'link': 'interface0', 'type': 'ipv4_dhcp', 'network_id': 1 }, net_metadata['networks'][0]) # IPv6 Network self.assertEqual( { 'id': 'network1', 'link': 'interface0', 'type': 'ipv6_dhcp', 'network_id': 1 }, net_metadata['networks'][1]) 
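    # The helper below drives the ipv6_address_mode variants: a DHCP-enabled IPv6 subnet carrying an address mode should be reported with an 'ipv6_<mode>' network type (e.g. 'ipv6_slaac' or 'ipv6_dhcpv6-stateful') while keeping the address, routes and DNS service entries.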
def _test_get_network_metadata_json_ipv6_addr_mode(self, mode): ipv6_subnet = fake_network_cache_model.new_subnet( subnet_dict=dict(dhcp_server='1234:567::', ipv6_address_mode=mode), version=6) self.netinfo[0]['network']['subnets'][1] = ipv6_subnet net_metadata = netutils.get_network_metadata(self.netinfo) self.assertEqual( { 'id': 'network1', 'link': 'interface0', 'ip_address': 'fd00::2', 'netmask': 'ffff:ffff:ffff::', 'routes': [ { 'network': '::', 'netmask': '::', 'gateway': 'fd00::1' }, { 'network': '::', 'netmask': 'ffff:ffff:ffff::', 'gateway': 'fd00::1:1' } ], 'services': [ {'address': '1:2:3:4::', 'type': 'dns'}, {'address': '2:3:4:5::', 'type': 'dns'} ], 'type': 'ipv6_%s' % mode, 'network_id': 1 }, net_metadata['networks'][1]) def test_get_network_metadata_json_ipv6_addr_mode_slaac(self): self._test_get_network_metadata_json_ipv6_addr_mode('slaac') def test_get_network_metadata_json_ipv6_addr_mode_stateful(self): self._test_get_network_metadata_json_ipv6_addr_mode('dhcpv6-stateful') def test_get_network_metadata_json_ipv6_addr_mode_stateless(self): self._test_get_network_metadata_json_ipv6_addr_mode('dhcpv6-stateless') def test__get_nets(self): expected_net = { 'id': 'network0', 'ip_address': '10.10.0.2', 'link': 1, 'netmask': '255.255.255.0', 'network_id': 1, 'routes': [ { 'gateway': '10.10.0.1', 'netmask': '0.0.0.0', 'network': '0.0.0.0'}, { 'gateway': '192.168.1.1', 'netmask': '255.255.255.0', 'network': '0.0.0.0'}], 'services': [ {'address': '1.2.3.4', 'type': 'dns'}, {'address': '2.3.4.5', 'type': 'dns'} ], 'type': 'ipv4' } net = netutils._get_nets( self.netinfo[0], self.netinfo[0]['network']['subnets'][0], 4, 0, 1) self.assertEqual(expected_net, net) def test__get_eth_link(self): expected_link = { 'id': 'interface0', 'vif_id': 1, 'type': 'vif', 'ethernet_mac_address': 'aa:aa:aa:aa:aa:aa', 'mtu': 1500 } self.netinfo[0]['type'] = 'vif' link = netutils._get_eth_link(self.netinfo[0], 0) self.assertEqual(expected_link, link) def test__get_eth_link_physical(self): expected_link = { 'id': 'interface1', 'vif_id': 1, 'type': 'phy', 'ethernet_mac_address': 'aa:aa:aa:aa:aa:aa', 'mtu': 1500 } link = netutils._get_eth_link(self.netinfo[0], 1) self.assertEqual(expected_link, link) def test__get_default_route(self): v4_expected = [{ 'network': '0.0.0.0', 'netmask': '0.0.0.0', 'gateway': '10.10.0.1', }] v6_expected = [{ 'network': '::', 'netmask': '::', 'gateway': 'fd00::1' }] v4 = netutils._get_default_route( 4, self.netinfo[0]['network']['subnets'][0]) self.assertEqual(v4_expected, v4) v6 = netutils._get_default_route( 6, self.netinfo[0]['network']['subnets'][1]) self.assertEqual(v6_expected, v6) # Test for no gateway self.netinfo[0]['network']['subnets'][0]['gateway'] = None no_route = netutils._get_default_route( 4, self.netinfo[0]['network']['subnets'][0]) self.assertEqual([], no_route) def test__get_dns_services(self): expected_dns = [ {'type': 'dns', 'address': '1.2.3.4'}, {'type': 'dns', 'address': '2.3.4.5'}, {'type': 'dns', 'address': '3.4.5.6'} ] subnet = fake_network_cache_model.new_subnet(version=4) subnet['dns'].append(fake_network_cache_model.new_ip( {'address': '3.4.5.6'})) dns = netutils._get_dns_services(subnet) self.assertEqual(expected_dns, dns) def test_get_network_metadata(self): expected_json = { "links": [ { "ethernet_mac_address": "aa:aa:aa:aa:aa:aa", "id": "interface0", "type": "phy", "vif_id": 1, "mtu": 1500 }, { "ethernet_mac_address": "aa:aa:aa:aa:aa:ab", "id": "interface1", "type": "phy", "vif_id": 1, "mtu": 1500 }, ], "networks": [ { "id": "network0", 
"ip_address": "10.10.0.2", "link": "interface0", "netmask": "255.255.255.0", "network_id": "00000000-0000-0000-0000-000000000000", "routes": [ { "gateway": "10.10.0.1", "netmask": "0.0.0.0", "network": "0.0.0.0" }, { "gateway": "192.168.1.1", "netmask": "255.255.255.0", "network": "0.0.0.0" } ], 'services': [{'address': '1.2.3.4', 'type': 'dns'}, {'address': '2.3.4.5', 'type': 'dns'}], "type": "ipv4" }, { 'id': 'network1', 'ip_address': 'fd00::2', 'link': 'interface0', 'netmask': 'ffff:ffff:ffff::', 'network_id': '00000000-0000-0000-0000-000000000000', 'routes': [{'gateway': 'fd00::1', 'netmask': '::', 'network': '::'}, {'gateway': 'fd00::1:1', 'netmask': 'ffff:ffff:ffff::', 'network': '::'}], 'services': [{'address': '1:2:3:4::', 'type': 'dns'}, {'address': '2:3:4:5::', 'type': 'dns'}], 'type': 'ipv6' }, { "id": "network2", "ip_address": "192.168.0.2", "link": "interface1", "netmask": "255.255.255.0", "network_id": "11111111-1111-1111-1111-111111111111", "routes": [ { "gateway": "192.168.0.1", "netmask": "0.0.0.0", "network": "0.0.0.0" } ], 'services': [{'address': '1.2.3.4', 'type': 'dns'}, {'address': '2.3.4.5', 'type': 'dns'}], "type": "ipv4" } ], 'services': [ {'address': '1.2.3.4', 'type': 'dns'}, {'address': '2.3.4.5', 'type': 'dns'}, {'address': '1:2:3:4::', 'type': 'dns'}, {'address': '2:3:4:5::', 'type': 'dns'} ] } self.netinfo[0]['network']['id'] = ( '00000000-0000-0000-0000-000000000000') # Add a second NIC self.netinfo.append(fake_network_cache_model.new_vif({ 'type': 'ethernet', 'address': 'aa:aa:aa:aa:aa:ab'})) address = fake_network_cache_model.new_ip({'address': '192.168.0.2'}) gateway_address = fake_network_cache_model.new_ip( {'address': '192.168.0.1'}) ipv4_subnet = fake_network_cache_model.new_subnet( {'cidr': '192.168.0.0/24', 'gateway': gateway_address, 'ips': [address], 'routes': []}) self.netinfo[1]['network']['id'] = ( '11111111-1111-1111-1111-111111111111') self.netinfo[1]['network']['subnets'][0] = ipv4_subnet self.netinfo[1]['network']['meta']['mtu'] = 1500 network_json = netutils.get_network_metadata(self.netinfo) self.assertEqual(expected_json, network_json) def test_get_network_metadata_no_ipv4(self): expected_json = { "services": [ { "type": "dns", "address": "1:2:3:4::" }, { "type": "dns", "address": "2:3:4:5::" } ], "networks": [ { "network_id": 1, "type": "ipv6", "netmask": "ffff:ffff:ffff::", "link": "interface0", "routes": [ { "netmask": "::", "network": "::", "gateway": "fd00::1" }, { "netmask": "ffff:ffff:ffff::", "network": "::", "gateway": "fd00::1:1" } ], 'services': [{'address': '1:2:3:4::', 'type': 'dns'}, {'address': '2:3:4:5::', 'type': 'dns'}], "ip_address": "fd00::2", "id": "network0" } ], "links": [ { "ethernet_mac_address": "aa:aa:aa:aa:aa:aa", "mtu": 1500, "type": "phy", "id": "interface0", "vif_id": 1 } ] } # drop the ipv4 subnet self.netinfo[0]['network']['subnets'].pop(0) network_json = netutils.get_network_metadata(self.netinfo) self.assertEqual(expected_json, network_json) def test_legacy_vif_types_type_passed_through(self): legacy_types = [ model.VIF_TYPE_BRIDGE, model.VIF_TYPE_DVS, model.VIF_TYPE_HW_VEB, model.VIF_TYPE_HYPERV, model.VIF_TYPE_OVS, model.VIF_TYPE_TAP, model.VIF_TYPE_VHOSTUSER, model.VIF_TYPE_VIF, ] link_types = [] for vif_type in legacy_types: network_json = netutils.get_network_metadata( self._new_netinfo(vif_type=vif_type)) link_types.append(network_json["links"][0]["type"]) self.assertEqual(legacy_types, link_types) def test_new_vif_types_get_type_phy(self): new_types = ["whizbang_nvf", "vswitch9"] link_types = [] 
for vif_type in new_types: network_json = netutils.get_network_metadata( self._new_netinfo(vif_type=vif_type)) link_types.append(network_json["links"][0]["type"]) self.assertEqual(["phy"] * len(new_types), link_types) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/network/test_neutron.py0000664000175000017500000136517000000000000022463 0ustar00zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import collections import copy from keystoneauth1.fixture import V2Token from keystoneauth1 import loading as ks_loading from keystoneauth1 import service_token import mock from neutronclient.common import exceptions from neutronclient.v2_0 import client from oslo_config import cfg from oslo_config import fixture as config_fixture from oslo_policy import policy as oslo_policy from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from oslo_utils import uuidutils import requests_mock import six from six.moves import range from nova import context from nova.db.sqlalchemy import api as db_api from nova import exception from nova.network import constants from nova.network import model from nova.network import neutron as neutronapi from nova import objects from nova.objects import network_request as net_req_obj from nova.objects import virtual_interface as obj_vif from nova.pci import manager as pci_manager from nova.pci import request as pci_request from nova.pci import utils as pci_utils from nova.pci import whitelist as pci_whitelist from nova import policy from nova import service_auth from nova import test from nova.tests.unit import fake_instance from nova.tests.unit import fake_requests as fake_req CONF = cfg.CONF # NOTE: Neutron client raises Exception which is discouraged by HACKING. # We set this variable here and use it for assertions below to avoid # the hacking checks until we can make neutron client throw a custom # exception class instead. 
NEUTRON_CLIENT_EXCEPTION = Exception fake_info_cache = { 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': False, 'instance_uuid': uuids.instance, 'network_info': '[]', } class TestNeutronClient(test.NoDBTestCase): def setUp(self): super(TestNeutronClient, self).setUp() neutronapi.reset_state() self.addCleanup(service_auth.reset_globals) def test_ksa_adapter_loading_defaults(self): """No 'url' triggers ksa loading path with defaults.""" my_context = context.RequestContext('userid', uuids.my_tenant, auth_token='token') cl = neutronapi.get_client(my_context) self.assertEqual('network', cl.httpclient.service_type) self.assertIsNone(cl.httpclient.service_name) self.assertEqual(['internal', 'public'], cl.httpclient.interface) self.assertIsNone(cl.httpclient.region_name) self.assertIsNone(cl.httpclient.endpoint_override) self.assertIsNotNone(cl.httpclient.global_request_id) self.assertEqual(my_context.global_id, cl.httpclient.global_request_id) def test_ksa_adapter_loading(self): """Test ksa loading path with specified values.""" self.flags(group='neutron', service_type='st', service_name='sn', valid_interfaces='admin', region_name='RegionTwo', endpoint_override='eo') my_context = context.RequestContext('userid', uuids.my_tenant, auth_token='token') cl = neutronapi.get_client(my_context) self.assertEqual('st', cl.httpclient.service_type) self.assertEqual('sn', cl.httpclient.service_name) self.assertEqual(['admin'], cl.httpclient.interface) self.assertEqual('RegionTwo', cl.httpclient.region_name) self.assertEqual('eo', cl.httpclient.endpoint_override) def test_withtoken(self): self.flags(endpoint_override='http://anyhost/', group='neutron') self.flags(timeout=30, group='neutron') # Will use the token rather than load auth from config. 
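        # The context token should therefore show up on the client's auth plugin, while endpoint_override and timeout still come from the [neutron] config group, as asserted below.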
my_context = context.RequestContext('userid', uuids.my_tenant, auth_token='token') cl = neutronapi.get_client(my_context) self.assertEqual(CONF.neutron.endpoint_override, cl.httpclient.endpoint_override) self.assertEqual(CONF.neutron.region_name, cl.httpclient.region_name) self.assertEqual(my_context.auth_token, cl.httpclient.auth.auth_token) self.assertEqual(CONF.neutron.timeout, cl.httpclient.session.timeout) def test_withouttoken(self): my_context = context.RequestContext('userid', uuids.my_tenant) self.assertRaises(exception.Unauthorized, neutronapi.get_client, my_context) @mock.patch.object(ks_loading, 'load_auth_from_conf_options') def test_non_admin_with_service_token(self, mock_load): self.flags(send_service_user_token=True, group='service_user') my_context = context.RequestContext('userid', uuids.my_tenant, auth_token='token') cl = neutronapi.get_client(my_context) self.assertIsInstance(cl.httpclient.auth, service_token.ServiceTokenAuthWrapper) @mock.patch.object(client.Client, "list_networks", side_effect=exceptions.Unauthorized()) def test_Unauthorized_user(self, mock_list_networks): my_context = context.RequestContext('userid', uuids.my_tenant, auth_token='token', is_admin=False) client = neutronapi.get_client(my_context) self.assertRaises( exception.Unauthorized, client.list_networks) @mock.patch.object(client.Client, "list_networks", side_effect=exceptions.Unauthorized()) def test_Unauthorized_admin(self, mock_list_networks): my_context = context.RequestContext('userid', uuids.my_tenant, auth_token='token', is_admin=True) client = neutronapi.get_client(my_context) self.assertRaises( exception.NeutronAdminCredentialConfigurationInvalid, client.list_networks) @mock.patch.object(client.Client, "create_port", side_effect=exceptions.Forbidden()) def test_Forbidden(self, mock_create_port): my_context = context.RequestContext('userid', uuids.my_tenant, auth_token='token', is_admin=False) client = neutronapi.get_client(my_context) exc = self.assertRaises( exception.Forbidden, client.create_port) self.assertIsInstance(exc.format_message(), six.text_type) def test_withtoken_context_is_admin(self): self.flags(endpoint_override='http://anyhost/', group='neutron') self.flags(timeout=30, group='neutron') # No auth_token set but is_admin will load auth from config. 
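        # _load_auth_plugin is patched below, so the client is expected to carry the token produced by that (mocked) plugin rather than a context token.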
my_context = context.RequestContext('userid', uuids.my_tenant, is_admin=True) with mock.patch.object(neutronapi, '_load_auth_plugin') as mock_auth: cl = neutronapi.get_client(my_context) self.assertEqual(CONF.neutron.endpoint_override, cl.httpclient.endpoint_override) self.assertEqual(mock_auth.return_value.auth_token, cl.httpclient.auth.auth_token) self.assertEqual(CONF.neutron.timeout, cl.httpclient.session.timeout) def test_withouttoken_keystone_connection_error(self): self.flags(endpoint_override='http://anyhost/', group='neutron') my_context = context.RequestContext('userid', uuids.my_tenant) self.assertRaises(NEUTRON_CLIENT_EXCEPTION, neutronapi.get_client, my_context) @mock.patch('nova.network.neutron._ADMIN_AUTH') @mock.patch.object(client.Client, "list_networks", new=mock.Mock()) def test_reuse_admin_token(self, m): self.flags(endpoint_override='http://anyhost/', group='neutron') my_context = context.RequestContext('userid', uuids.my_tenant, auth_token='token') tokens = ['new_token2', 'new_token1'] def token_vals(*args, **kwargs): return tokens.pop() m.get_token.side_effect = token_vals client1 = neutronapi.get_client(my_context, True) client1.list_networks(retrieve_all=False) self.assertEqual('new_token1', client1.httpclient.auth.get_token(None)) client1 = neutronapi.get_client(my_context, True) client1.list_networks(retrieve_all=False) self.assertEqual('new_token2', client1.httpclient.auth.get_token(None)) @mock.patch('nova.network.neutron.LOG.error') @mock.patch.object(ks_loading, 'load_auth_from_conf_options') def test_load_auth_plugin_failed(self, mock_load_from_conf, mock_log_err): mock_load_from_conf.return_value = None from neutronclient.common import exceptions as neutron_client_exc self.assertRaises(neutron_client_exc.Unauthorized, neutronapi._load_auth_plugin, CONF) mock_log_err.assert_called() self.assertIn('The [neutron] section of your nova configuration file', mock_log_err.call_args[0][0]) @mock.patch.object(client.Client, "list_networks", side_effect=exceptions.Unauthorized()) def test_wrapper_exception_translation(self, m): my_context = context.RequestContext('userid', 'my_tenantid', auth_token='token') client = neutronapi.get_client(my_context) self.assertRaises( exception.Unauthorized, client.list_networks) def test_neutron_http_retries(self): retries = 42 self.flags(http_retries=retries, group='neutron') my_context = context.RequestContext('userid', uuids.my_tenant, auth_token='token') cl = neutronapi.get_client(my_context) self.assertEqual(retries, cl.httpclient.connect_retries) kcl = neutronapi._get_ksa_client(my_context) self.assertEqual(retries, kcl.connect_retries) class TestAPIBase(test.TestCase): def setUp(self): super(TestAPIBase, self).setUp() self.api = neutronapi.API() self.context = context.RequestContext( 'userid', uuids.my_tenant, auth_token='bff4a5a6b9eb4ea2a6efec6eefb77936') self.tenant_id = '9d049e4b60b64716978ab415e6fbd5c0' self.instance = {'project_id': self.tenant_id, 'uuid': uuids.fake, 'display_name': 'test_instance', 'hostname': 'test-instance', 'availability_zone': 'nova', 'host': 'some_host', 'info_cache': {'network_info': []}, 'security_groups': []} self.instance2 = {'project_id': self.tenant_id, 'uuid': uuids.fake, 'display_name': 'test_instance2', 'availability_zone': 'nova', 'info_cache': {'network_info': []}, 'security_groups': []} self.nets1 = [{'id': uuids.my_netid1, 'name': 'my_netname1', 'subnets': ['mysubnid1'], 'tenant_id': uuids.my_tenant}] self.nets2 = [] self.nets2.append(self.nets1[0]) self.nets2.append({'id': 
uuids.my_netid2, 'name': 'my_netname2', 'subnets': ['mysubnid2'], 'tenant_id': uuids.my_tenant}) self.nets3 = self.nets2 + [{'id': uuids.my_netid3, 'name': 'my_netname3', 'subnets': ['mysubnid3'], 'tenant_id': uuids.my_tenant}] self.nets4 = [{'id': 'his_netid4', 'name': 'his_netname4', 'tenant_id': 'his_tenantid'}] # A network request with external networks self.nets5 = self.nets1 + [{'id': 'the-external-one', 'name': 'out-of-this-world', 'subnets': ['mysubnid5'], 'router:external': True, 'tenant_id': 'should-be-an-admin'}] # A network request with a duplicate self.nets6 = [] self.nets6.append(self.nets1[0]) self.nets6.append(self.nets1[0]) # A network request with a combo self.nets7 = [] self.nets7.append(self.nets2[1]) self.nets7.append(self.nets1[0]) self.nets7.append(self.nets2[1]) self.nets7.append(self.nets1[0]) # A network request with only external network self.nets8 = [self.nets5[1]] # An empty network self.nets9 = [] # A network that is both shared and external self.nets10 = [{'id': 'net_id', 'name': 'net_name', 'router:external': True, 'shared': True, 'subnets': ['mysubnid10']}] # A network with non-blank dns_domain to test _update_port_dns_name self.nets11 = [{'id': uuids.my_netid1, 'name': 'my_netname1', 'subnets': ['mysubnid1'], 'tenant_id': uuids.my_tenant, 'dns_domain': 'my-domain.org.'}] self.nets = [self.nets1, self.nets2, self.nets3, self.nets4, self.nets5, self.nets6, self.nets7, self.nets8, self.nets9, self.nets10, self.nets11] self.port_address = '10.0.1.2' self.port_data1 = [{'network_id': uuids.my_netid1, 'device_id': self.instance2['uuid'], 'tenant_id': self.tenant_id, 'device_owner': 'compute:nova', 'id': uuids.portid_1, 'binding:vnic_type': model.VNIC_TYPE_NORMAL, 'status': 'DOWN', 'admin_state_up': True, 'fixed_ips': [{'ip_address': self.port_address, 'subnet_id': 'my_subid1'}], 'mac_address': 'my_mac1', }] self.float_data1 = [{'port_id': uuids.portid_1, 'fixed_ip_address': self.port_address, 'floating_ip_address': '172.0.1.2'}] self.dhcp_port_data1 = [{'fixed_ips': [{'ip_address': '10.0.1.9', 'subnet_id': 'my_subid1'}], 'status': 'ACTIVE', 'admin_state_up': True}] self.port_address2 = '10.0.2.2' self.port_data2 = [] self.port_data2.append(self.port_data1[0]) self.port_data2.append({'network_id': uuids.my_netid2, 'device_id': self.instance['uuid'], 'tenant_id': self.tenant_id, 'admin_state_up': True, 'status': 'ACTIVE', 'device_owner': 'compute:nova', 'id': uuids.portid_2, 'binding:vnic_type': model.VNIC_TYPE_NORMAL, 'fixed_ips': [{'ip_address': self.port_address2, 'subnet_id': 'my_subid2'}], 'mac_address': 'my_mac2', }) self.float_data2 = [] self.float_data2.append(self.float_data1[0]) self.float_data2.append({'port_id': uuids.portid_2, 'fixed_ip_address': '10.0.2.2', 'floating_ip_address': '172.0.2.2'}) self.port_data3 = [{'network_id': uuids.my_netid1, 'device_id': 'device_id3', 'tenant_id': self.tenant_id, 'status': 'DOWN', 'admin_state_up': True, 'device_owner': 'compute:nova', 'id': uuids.portid_3, 'binding:vnic_type': model.VNIC_TYPE_NORMAL, 'fixed_ips': [], # no fixed ip 'mac_address': 'my_mac3', }] self.subnet_data1 = [{'id': 'my_subid1', 'cidr': '10.0.1.0/24', 'network_id': uuids.my_netid1, 'gateway_ip': '10.0.1.1', 'dns_nameservers': ['8.8.1.1', '8.8.1.2']}] self.subnet_data2 = [] self.subnet_data_n = [{'id': 'my_subid1', 'cidr': '10.0.1.0/24', 'network_id': uuids.my_netid1, 'gateway_ip': '10.0.1.1', 'dns_nameservers': ['8.8.1.1', '8.8.1.2']}, {'id': 'my_subid2', 'cidr': '20.0.1.0/24', 'network_id': uuids.my_netid2, 'gateway_ip': '20.0.1.1', 
'dns_nameservers': ['8.8.1.1', '8.8.1.2']}] self.subnet_data2.append({'id': 'my_subid2', 'cidr': '10.0.2.0/24', 'network_id': uuids.my_netid2, 'gateway_ip': '10.0.2.1', 'dns_nameservers': ['8.8.2.1', '8.8.2.2']}) self.fip_pool = {'id': '4fdbfd74-eaf8-4884-90d9-00bd6f10c2d3', 'name': 'ext_net', 'router:external': True, 'tenant_id': 'admin_tenantid'} self.fip_pool_nova = {'id': '435e20c3-d9f1-4f1b-bee5-4611a1dd07db', 'name': 'nova', 'router:external': True, 'tenant_id': 'admin_tenantid'} self.fip_unassociated = {'tenant_id': uuids.my_tenant, 'id': uuids.fip_id1, 'floating_ip_address': '172.24.4.227', 'floating_network_id': self.fip_pool['id'], 'port_id': None, 'fixed_ip_address': None, 'router_id': None} fixed_ip_address = self.port_data2[1]['fixed_ips'][0]['ip_address'] self.fip_associated = {'tenant_id': uuids.my_tenant, 'id': uuids.fip_id2, 'floating_ip_address': '172.24.4.228', 'floating_network_id': self.fip_pool['id'], 'port_id': self.port_data2[1]['id'], 'fixed_ip_address': fixed_ip_address, 'router_id': 'router_id1'} self._returned_nw_info = [] def _fake_instance_object(self, instance): return fake_instance.fake_instance_obj(self.context, **instance) def _fake_instance_info_cache(self, nw_info, instance_uuid=None): info_cache = {} if instance_uuid is None: info_cache['instance_uuid'] = uuids.fake else: info_cache['instance_uuid'] = instance_uuid info_cache['deleted'] = False info_cache['created_at'] = timeutils.utcnow() info_cache['deleted_at'] = timeutils.utcnow() info_cache['updated_at'] = timeutils.utcnow() info_cache['network_info'] = model.NetworkInfo.hydrate(six.text_type( jsonutils.dumps(nw_info))) return info_cache def _fake_instance_object_with_info_cache(self, instance): expected_attrs = ['info_cache'] instance = objects.Instance._from_db_object(self.context, objects.Instance(), fake_instance.fake_db_instance(**instance), expected_attrs=expected_attrs) return instance def _test_allocate_for_instance_with_virtual_interface( self, net_idx=1, **kwargs): self._vifs_created = [] def _new_vif(*args): m = mock.MagicMock() self._vifs_created.append(m) return m with mock.patch('nova.objects.VirtualInterface') as mock_vif: mock_vif.side_effect = _new_vif requested_networks = kwargs.pop('requested_networks', None) return self._test_allocate_for_instance( net_idx=net_idx, requested_networks=requested_networks, **kwargs) @mock.patch.object(neutronapi.API, '_populate_neutron_extension_values') @mock.patch.object(neutronapi.API, '_refresh_neutron_extensions_cache') @mock.patch.object(neutronapi.API, 'get_instance_nw_info', return_value=None) @mock.patch.object(neutronapi, 'get_client') def _test_allocate_for_instance(self, mock_get_client, mock_get_nw, mock_refresh, mock_populate, net_idx=1, requested_networks=None, exception=None, context=None, **kwargs): ctxt = context or self.context self.instance = self._fake_instance_object(self.instance) self.instance2 = self._fake_instance_object(self.instance2) mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client bind_host_id = kwargs.get('bind_host_id') has_dns_extension = False if kwargs.get('dns_extension'): has_dns_extension = True self.api.extensions[constants.DNS_INTEGRATION] = 1 # Net idx is 1-based for compatibility with existing unit tests nets = self.nets[net_idx - 1] ports = {} fixed_ips = {} req_net_ids = [] ordered_networks = [] expected_show_port_calls = self._stub_allocate_for_instance_show_port( nets, ports, fixed_ips, req_net_ids, ordered_networks, requested_networks, mocked_client, 
**kwargs) populate_values = [] update_port_values = [] try: if kwargs.get('_break') == 'pre_list_networks': raise test.TestingException() expected_list_networks_calls = ( self._stub_allocate_for_instance_list_networks( req_net_ids, nets, mocked_client)) pre_create_port = ( kwargs.get('_break') == 'post_list_networks' or ((requested_networks is None or requested_networks.as_tuples() == [(None, None, None)]) and len(nets) > 1) or kwargs.get('_break') == 'post_list_extensions') if pre_create_port: raise test.TestingException() expected_create_port_calls = ( self._stub_allocate_for_instance_create_port( ordered_networks, fixed_ips, nets, mocked_client)) preexisting_port_ids = [] ports_in_requested_net_order = [] nets_in_requested_net_order = [] index = 0 expected_populate_calls = [] expected_update_port_calls = [] for request in ordered_networks: index += 1 port_req_body = { 'port': { 'device_id': self.instance.uuid, 'device_owner': 'compute:nova', }, } # Network lookup for available network_id network = None for net in nets: if net['id'] == request.network_id: network = net break # if net_id did not pass validate_networks() and not available # here then skip it safely not continuing with a None Network else: continue populate_values.append(None) expected_populate_calls.append( mock.call(mock.ANY, self.instance, mock.ANY, mock.ANY, network=network, neutron=mocked_client, bind_host_id=bind_host_id)) if not request.port_id: port_id = uuids.fake update_port_res = {'port': { 'id': port_id, 'mac_address': 'fakemac%i' % index}} ports_in_requested_net_order.append(port_id) if kwargs.get('_break') == 'mac' + request.network_id: if populate_values: mock_populate.side_effect = populate_values if update_port_values: mocked_client.update_port.side_effect = ( update_port_values) raise test.TestingException() else: ports_in_requested_net_order.append(request.port_id) preexisting_port_ids.append(request.port_id) port_id = request.port_id update_port_res = {'port': ports[port_id]} new_mac = port_req_body['port'].get('mac_address') if new_mac: update_port_res['port']['mac_address'] = new_mac update_port_values.append(update_port_res) expected_update_port_calls.append( mock.call(port_id, port_req_body)) if has_dns_extension: if net_idx == 11: port_req_body_dns = { 'port': { 'dns_name': self.instance.hostname } } res_port_dns = { 'port': { 'id': ports_in_requested_net_order[-1] } } update_port_values.append(res_port_dns) expected_update_port_calls.append( mock.call(ports_in_requested_net_order[-1], port_req_body_dns)) nets_in_requested_net_order.append(network) mock_get_nw.return_value = self._returned_nw_info except test.TestingException: pass mock_populate.side_effect = populate_values mocked_client.update_port.side_effect = update_port_values # Call the allocate_for_instance method nw_info = None if exception: self.assertRaises(exception, self.api.allocate_for_instance, ctxt, self.instance, False, requested_networks, bind_host_id=bind_host_id) else: nw_info = self.api.allocate_for_instance( ctxt, self.instance, False, requested_networks, bind_host_id=bind_host_id) mock_get_client.assert_has_calls([ mock.call(ctxt), mock.call(ctxt, admin=True)], any_order=True) mock_refresh.assert_not_called() if requested_networks: mocked_client.show_port.assert_has_calls(expected_show_port_calls) self.assertEqual(len(expected_show_port_calls), mocked_client.show_port.call_count) if kwargs.get('_break') == 'pre_list_networks': return nw_info, mocked_client mocked_client.list_networks.assert_has_calls( 
expected_list_networks_calls) self.assertEqual(len(expected_list_networks_calls), mocked_client.list_networks.call_count) if pre_create_port: return nw_info, mocked_client mocked_client.create_port.assert_has_calls( expected_create_port_calls) self.assertEqual(len(expected_create_port_calls), mocked_client.create_port.call_count) mocked_client.update_port.assert_has_calls( expected_update_port_calls) self.assertEqual(len(expected_update_port_calls), mocked_client.update_port.call_count) mock_populate.assert_has_calls(expected_populate_calls) self.assertEqual(len(expected_populate_calls), mock_populate.call_count) if mock_get_nw.return_value is None: mock_get_nw.assert_not_called() else: mock_get_nw.assert_called_once_with( mock.ANY, self.instance, networks=nets_in_requested_net_order, port_ids=ports_in_requested_net_order, admin_client=mocked_client, preexisting_port_ids=preexisting_port_ids) return nw_info, mocked_client def _stub_allocate_for_instance_show_port(self, nets, ports, fixed_ips, req_net_ids, ordered_networks, requested_networks, mocked_client, **kwargs): expected_show_port_calls = [] if requested_networks: show_port_values = [] for request in requested_networks: if request.port_id: if request.port_id == uuids.portid_3: show_port_values.append( {'port': {'id': uuids.portid_3, 'network_id': uuids.my_netid1, 'tenant_id': self.tenant_id, 'mac_address': 'my_mac1', 'device_id': kwargs.get('_device') and self.instance2.uuid or ''}}) ports[uuids.my_netid1] = [self.port_data1[0], self.port_data3[0]] ports[request.port_id] = self.port_data3[0] request.network_id = uuids.my_netid1 elif request.port_id == uuids.non_existent_uuid: show_port_values.append( exceptions.PortNotFoundClient(status_code=404)) else: show_port_values.append( {'port': {'id': uuids.portid_1, 'network_id': uuids.my_netid1, 'tenant_id': self.tenant_id, 'mac_address': 'my_mac1', 'device_id': kwargs.get('_device') and self.instance2.uuid or '', 'dns_name': kwargs.get('_dns_name') or ''}}) ports[request.port_id] = self.port_data1[0] request.network_id = uuids.my_netid1 expected_show_port_calls.append(mock.call(request.port_id)) else: fixed_ips[request.network_id] = request.address req_net_ids.append(request.network_id) ordered_networks.append(request) if show_port_values: mocked_client.show_port.side_effect = show_port_values else: for n in nets: ordered_networks.append( objects.NetworkRequest(network_id=n['id'])) return expected_show_port_calls def _stub_allocate_for_instance_list_networks(self, req_net_ids, nets, mocked_client): if req_net_ids: expected_list_networks_calls = [mock.call(id=req_net_ids)] mocked_client.list_networks.return_value = {'networks': nets} else: expected_list_networks_calls = [ mock.call(tenant_id=self.instance.project_id, shared=False), mock.call(shared=True)] mocked_client.list_networks.side_effect = [ {'networks': nets}, {'networks': []}] return expected_list_networks_calls def _stub_allocate_for_instance_create_port(self, ordered_networks, fixed_ips, nets, mocked_client): create_port_values = [] expected_create_port_calls = [] for request in ordered_networks: if not request.port_id: # Check network is available, skip if not network = None for net in nets: if net['id'] == request.network_id: network = net break if network is None: continue port_req_body_create = {'port': {'device_id': self.instance.uuid}} request.address = fixed_ips.get(request.network_id) if request.address: port_req_body_create['port']['fixed_ips'] = [ {'ip_address': str(request.address)}] 
port_req_body_create['port']['network_id'] = ( request.network_id) port_req_body_create['port']['admin_state_up'] = True port_req_body_create['port']['tenant_id'] = ( self.instance.project_id) res_port = {'port': {'id': uuids.fake}} expected_create_port_calls.append( mock.call(port_req_body_create)) create_port_values.append(res_port) mocked_client.create_port.side_effect = create_port_values return expected_create_port_calls class TestAPI(TestAPIBase): """Used to test Neutron V2 API.""" @mock.patch.object(db_api, 'instance_info_cache_get') @mock.patch.object(db_api, 'instance_info_cache_update', return_value=fake_info_cache) @mock.patch.object(neutronapi, 'get_client') def _test_get_instance_nw_info(self, number, mock_get_client, mock_cache_update, mock_cache_get): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client port_data = number == 1 and self.port_data1 or self.port_data2 net_info_cache = [] for port in port_data: net_info_cache.append({"network": {"id": port['network_id']}, "id": port['id']}) list_ports_values = [{'ports': port_data}] expected_list_ports_calls = [mock.call( tenant_id=self.instance['project_id'], device_id=self.instance['uuid'])] net_ids = [port['network_id'] for port in port_data] nets = number == 1 and self.nets1 or self.nets2 mocked_client.list_networks.return_value = {'networks': nets} list_subnets_values = [] expected_list_subnets_calls = [] list_floatingips_values = [] expected_list_floatingips_calls = [] for i in range(1, number + 1): float_data = number == 1 and self.float_data1 or self.float_data2 for ip in port_data[i - 1]['fixed_ips']: float_data = [x for x in float_data if x['fixed_ip_address'] == ip['ip_address']] list_floatingips_values.append({'floatingips': float_data}) expected_list_floatingips_calls.append( mock.call(fixed_ip_address=ip['ip_address'], port_id=port_data[i - 1]['id'])) subnet_data = i == 1 and self.subnet_data1 or self.subnet_data2 list_subnets_values.append({'subnets': subnet_data}) expected_list_subnets_calls.append( mock.call(id=['my_subid%s' % i])) list_ports_values.append({'ports': []}) expected_list_ports_calls.append(mock.call( network_id=subnet_data[0]['network_id'], device_owner='network:dhcp')) mocked_client.list_ports.side_effect = list_ports_values mocked_client.list_subnets.side_effect = list_subnets_values mocked_client.list_floatingips.side_effect = list_floatingips_values self.instance['info_cache'] = self._fake_instance_info_cache( net_info_cache, self.instance['uuid']) mock_cache_get.return_value = self.instance['info_cache'] instance = self._fake_instance_object_with_info_cache(self.instance) nw_inf = self.api.get_instance_nw_info(self.context, instance) mock_get_client.assert_any_call(mock.ANY, admin=True) mock_cache_update.assert_called_once_with( mock.ANY, self.instance['uuid'], mock.ANY) mock_cache_get.assert_called_once_with(mock.ANY, self.instance['uuid']) mocked_client.list_ports.assert_has_calls(expected_list_ports_calls) self.assertEqual(len(expected_list_ports_calls), mocked_client.list_ports.call_count) mocked_client.list_subnets.assert_has_calls( expected_list_subnets_calls) self.assertEqual(len(expected_list_subnets_calls), mocked_client.list_subnets.call_count) mocked_client.list_floatingips.assert_has_calls( expected_list_floatingips_calls) self.assertEqual(len(expected_list_floatingips_calls), mocked_client.list_floatingips.call_count) mocked_client.list_networks.assert_called_once_with(id=net_ids) for i in range(0, number): self._verify_nw_info(nw_inf, 
i) def _verify_nw_info(self, nw_inf, index=0): id_suffix = index + 1 self.assertEqual('10.0.%s.2' % id_suffix, nw_inf.fixed_ips()[index]['address']) self.assertEqual('172.0.%s.2' % id_suffix, nw_inf.fixed_ips()[index].floating_ip_addresses()[0]) self.assertEqual('my_netname%s' % id_suffix, nw_inf[index]['network']['label']) self.assertEqual(getattr(uuids, 'portid_%s' % id_suffix), nw_inf[index]['id']) self.assertEqual('my_mac%s' % id_suffix, nw_inf[index]['address']) self.assertEqual('10.0.%s.0/24' % id_suffix, nw_inf[index]['network']['subnets'][0]['cidr']) ip_addr = model.IP(address='8.8.%s.1' % id_suffix, version=4, type='dns') self.assertIn(ip_addr, nw_inf[index]['network']['subnets'][0]['dns']) def test_get_instance_nw_info_1(self): # Test to get one port in one network and subnet. self._test_get_instance_nw_info(1) def test_get_instance_nw_info_2(self): # Test to get one port in each of two networks and subnets. self._test_get_instance_nw_info(2) def test_get_instance_nw_info_with_nets_add_interface(self): # This tests that adding an interface to an instance does not # remove the first interface from the instance. network_model = model.Network(id='network_id', bridge='br-int', injected='injected', label='fake_network', tenant_id='fake_tenant') network_cache = {'info_cache': { 'network_info': [{'id': self.port_data2[0]['id'], 'address': 'mac_address', 'network': network_model, 'type': 'ovs', 'ovs_interfaceid': 'ovs_interfaceid', 'devname': 'devname'}]}} self._test_get_instance_nw_info_helper( network_cache, self.port_data2, networks=self.nets2, port_ids=[self.port_data2[1]['id']]) def test_get_instance_nw_info_remove_ports_from_neutron(self): # This tests that when a port is removed in neutron it # is also removed from nova's network info cache. network_model = model.Network(id=self.port_data2[0]['network_id'], bridge='br-int', injected='injected', label='fake_network', tenant_id='fake_tenant') network_cache = {'info_cache': { 'network_info': [{'id': 'network_id', 'address': 'mac_address', 'network': network_model, 'type': 'ovs', 'ovs_interfaceid': 'ovs_interfaceid', 'devname': 'devname'}]}} self._test_get_instance_nw_info_helper(network_cache, self.port_data2) def test_get_instance_nw_info_ignores_neutron_ports(self): # Tests that only ports in the network_cache are updated # and ports returned from neutron that match the same # instance_id/device_id are ignored. port_data2 = copy.copy(self.port_data2) # set device_id on the ports to be the same. port_data2[1]['device_id'] = port_data2[0]['device_id'] network_model = model.Network(id='network_id', bridge='br-int', injected='injected', label='fake_network', tenant_id='fake_tenant') network_cache = {'info_cache': { 'network_info': [{'id': 'network_id', 'address': 'mac_address', 'network': network_model, 'type': 'ovs', 'ovs_interfaceid': 'ovs_interfaceid', 'devname': 'devname'}]}} self._test_get_instance_nw_info_helper(network_cache, port_data2) def test_get_instance_nw_info_ignores_neutron_ports_empty_cache(self): # Tests that ports returned from neutron that match the same # instance_id/device_id are ignored when the instance info cache is # empty. port_data2 = copy.copy(self.port_data2) # set device_id on the ports to be the same. 
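        # With an empty info cache driving the refresh, neither of the duplicated neutron ports should end up in the returned network info.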
port_data2[1]['device_id'] = port_data2[0]['device_id'] network_cache = {'info_cache': {'network_info': []}} self._test_get_instance_nw_info_helper(network_cache, port_data2) @mock.patch.object(db_api, 'instance_info_cache_get') @mock.patch.object(db_api, 'instance_info_cache_update', return_value=fake_info_cache) @mock.patch.object(neutronapi, 'get_client') def _test_get_instance_nw_info_helper(self, network_cache, current_neutron_ports, mock_get_client, mock_cache_update, mock_cache_get, networks=None, port_ids=None): """Helper function to test get_instance_nw_info. :param network_cache - data already in the nova network cache. :param current_neutron_ports - updated list of ports from neutron. :param networks - networks of ports being added to instance. :param port_ids - new ports being added to instance. """ # keep a copy of the original ports/networks to pass to # get_instance_nw_info() as the code below changes them. original_port_ids = copy.copy(port_ids) original_networks = copy.copy(networks) api = neutronapi.API() mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client list_ports_values = [{'ports': current_neutron_ports}] expected_list_ports_calls = [ mock.call(tenant_id=self.instance['project_id'], device_id=self.instance['uuid'])] ifaces = network_cache['info_cache']['network_info'] if port_ids is None: port_ids = [iface['id'] for iface in ifaces] net_ids = [iface['network']['id'] for iface in ifaces] nets = [{'id': iface['network']['id'], 'name': iface['network']['label'], 'tenant_id': iface['network']['meta']['tenant_id']} for iface in ifaces] if networks is None: list_networks_values = [] expected_list_networks_calls = [] if ifaces: list_networks_values.append({'networks': nets}) expected_list_networks_calls.append(mock.call(id=net_ids)) else: non_shared_nets = [ {'id': iface['network']['id'], 'name': iface['network']['label'], 'tenant_id': iface['network']['meta']['tenant_id']} for iface in ifaces if not iface['shared']] shared_nets = [ {'id': iface['network']['id'], 'name': iface['network']['label'], 'tenant_id': iface['network']['meta']['tenant_id']} for iface in ifaces if iface['shared']] list_networks_values.extend([ {'networks': non_shared_nets}, {'networks': shared_nets}]) expected_list_networks_calls.extend([ mock.call(shared=False, tenant_id=self.instance['project_id']), mock.call(shared=True)]) mocked_client.list_networks.side_effect = list_networks_values else: port_ids = [iface['id'] for iface in ifaces] + port_ids index = 0 current_neutron_port_map = {} for current_neutron_port in current_neutron_ports: current_neutron_port_map[current_neutron_port['id']] = ( current_neutron_port) list_floatingips_values = [] expected_list_floatingips_calls = [] list_subnets_values = [] expected_list_subnets_calls = [] for port_id in port_ids: current_neutron_port = current_neutron_port_map.get(port_id) if current_neutron_port: for ip in current_neutron_port['fixed_ips']: list_floatingips_values.append( {'floatingips': [self.float_data2[index]]}) expected_list_floatingips_calls.append( mock.call(fixed_ip_address=ip['ip_address'], port_id=current_neutron_port['id'])) list_subnets_values.append( {'subnets': [self.subnet_data_n[index]]}) expected_list_subnets_calls.append( mock.call(id=[ip['subnet_id']])) list_ports_values.append({'ports': self.dhcp_port_data1}) expected_list_ports_calls.append( mock.call( network_id=current_neutron_port['network_id'], device_owner='network:dhcp')) index += 1 mocked_client.list_floatingips.side_effect = 
list_floatingips_values mocked_client.list_subnets.side_effect = list_subnets_values mocked_client.list_ports.side_effect = list_ports_values self.instance['info_cache'] = self._fake_instance_info_cache( network_cache['info_cache']['network_info'], self.instance['uuid']) mock_cache_get.return_value = self.instance['info_cache'] instance = self._fake_instance_object_with_info_cache(self.instance) nw_infs = api.get_instance_nw_info(self.context, instance, networks=original_networks, port_ids=original_port_ids) self.assertEqual(index, len(nw_infs)) # ensure that nic ordering is preserved for iface_index in range(index): self.assertEqual(port_ids[iface_index], nw_infs[iface_index]['id']) mock_get_client.assert_any_call(mock.ANY, admin=True) mock_cache_update.assert_called_once_with( mock.ANY, self.instance['uuid'], mock.ANY) mock_cache_get.assert_called_once_with(mock.ANY, self.instance['uuid']) if networks is None: mocked_client.list_networks.assert_has_calls( expected_list_networks_calls) self.assertEqual(len(expected_list_networks_calls), mocked_client.list_networks.call_count) mocked_client.list_floatingips.assert_has_calls( expected_list_floatingips_calls) self.assertEqual(len(expected_list_floatingips_calls), mocked_client.list_floatingips.call_count) mocked_client.list_subnets.assert_has_calls( expected_list_subnets_calls) self.assertEqual(len(expected_list_subnets_calls), mocked_client.list_subnets.call_count) mocked_client.list_ports.assert_has_calls( expected_list_ports_calls) self.assertEqual(len(expected_list_ports_calls), mocked_client.list_ports.call_count) @mock.patch.object(neutronapi.API, '_get_physnet_tunneled_info', return_value=(None, False)) @mock.patch.object(db_api, 'instance_info_cache_get') @mock.patch.object(db_api, 'instance_info_cache_update', return_value=fake_info_cache) @mock.patch.object(neutronapi, 'get_client') def test_get_instance_nw_info_without_subnet( self, mock_get_client, mock_cache_update, mock_cache_get, mock_get_physnet): # Test get instance_nw_info for a port without subnet. 
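        # port_data3 carries an empty fixed_ips list, so the resulting VIF should expose no fixed IPs and an empty subnet list.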
mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client mocked_client.list_ports.return_value = {'ports': self.port_data3} mocked_client.list_networks.return_value = {'networks': self.nets1} net_info_cache = [] for port in self.port_data3: net_info_cache.append({"network": {"id": port['network_id']}, "id": port['id']}) self.instance['info_cache'] = self._fake_instance_info_cache( net_info_cache, self.instance['uuid']) mock_cache_get.return_value = self.instance['info_cache'] instance = self._fake_instance_object_with_info_cache(self.instance) nw_inf = self.api.get_instance_nw_info(self.context, instance) id_suffix = 3 self.assertEqual(0, len(nw_inf.fixed_ips())) self.assertEqual('my_netname1', nw_inf[0]['network']['label']) self.assertEqual(uuids.portid_3, nw_inf[0]['id']) self.assertEqual('my_mac%s' % id_suffix, nw_inf[0]['address']) self.assertEqual(0, len(nw_inf[0]['network']['subnets'])) mock_get_client.assert_has_calls([mock.call(mock.ANY, admin=True)] * 2, any_order=True) mock_cache_update.assert_called_once_with( mock.ANY, self.instance['uuid'], mock.ANY) mock_cache_get.assert_called_once_with(mock.ANY, self.instance['uuid']) mocked_client.list_ports.assert_called_once_with( tenant_id=self.instance['project_id'], device_id=self.instance['uuid']) mocked_client.list_networks.assert_called_once_with( id=[self.port_data1[0]['network_id']]) mock_get_physnet.assert_called_once_with( mock.ANY, mock.ANY, self.port_data1[0]['network_id']) @mock.patch.object(neutronapi, 'get_client') def test_refresh_neutron_extensions_cache(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client mocked_client.list_extensions.return_value = { 'extensions': [{'name': constants.QOS_QUEUE}]} self.api._refresh_neutron_extensions_cache(self.context) self.assertEqual( {constants.QOS_QUEUE: {'name': constants.QOS_QUEUE}}, self.api.extensions) mock_get_client.assert_called_once_with(self.context) mocked_client.list_extensions.assert_called_once_with() @mock.patch.object(neutronapi, 'get_client') def test_populate_neutron_extension_values_rxtx_factor( self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client mocked_client.list_extensions.return_value = { 'extensions': [{'name': constants.QOS_QUEUE}]} flavor = objects.Flavor.get_by_name(self.context, 'm1.small') flavor['rxtx_factor'] = 1 instance = objects.Instance(system_metadata={}) instance.flavor = flavor port_req_body = {'port': {}} self.api._populate_neutron_extension_values(self.context, instance, None, port_req_body) self.assertEqual(1, port_req_body['port']['rxtx_factor']) mock_get_client.assert_called_once_with(self.context) mocked_client.list_extensions.assert_called_once_with() def test_allocate_for_instance_1(self): # Allocate one port in one network env. self._test_allocate_for_instance_with_virtual_interface(1) def test_allocate_for_instance_2(self): # Allocate one port in two networks env. 
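        # With two candidate networks and no explicit network requested, the allocation is ambiguous and NetworkAmbiguous is expected.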
self._test_allocate_for_instance( net_idx=2, exception=exception.NetworkAmbiguous) def test_allocate_for_instance_accepts_only_portid(self): # Make sure allocate_for_instance works when only a portid is provided self._returned_nw_info = self.port_data1 result, _ = self._test_allocate_for_instance_with_virtual_interface( requested_networks=objects.NetworkRequestList( objects=[objects.NetworkRequest(port_id=uuids.portid_1, tag='test')])) self.assertEqual(self.port_data1, result) self.assertEqual(1, len(self._vifs_created)) self.assertEqual('test', self._vifs_created[0].tag) self.assertEqual(self.instance.uuid, self._vifs_created[0].instance_uuid) self.assertEqual(uuids.portid_1, self._vifs_created[0].uuid) self.assertEqual('%s/%s' % (self.port_data1[0]['mac_address'], self.port_data1[0]['id']), self._vifs_created[0].address) def test_allocate_for_instance_without_requested_networks(self): self._test_allocate_for_instance( net_idx=3, exception=exception.NetworkAmbiguous) def test_allocate_for_instance_with_requested_non_available_network(self): """Verify that a non-available network is ignored. self.nets2 (net_idx=2) is composed of self.nets3[0] and self.nets3[1]. Do not create a port on the non-available network self.nets3[2]. """ requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id=net['id']) for net in (self.nets3[0], self.nets3[2], self.nets3[1])]) requested_networks[0].tag = 'foo' self._test_allocate_for_instance_with_virtual_interface( net_idx=2, requested_networks=requested_networks) self.assertEqual(2, len(self._vifs_created)) # NOTE(danms) nets3[2] is chosen above as one that won't validate, # so we never actually run create() on the VIF. vifs_really_created = [vif for vif in self._vifs_created if vif.create.called] self.assertEqual(2, len(vifs_really_created)) self.assertEqual([('foo', 'fakemac1/%s' % uuids.fake), (None, 'fakemac3/%s' % uuids.fake)], [(vif.tag, vif.address) for vif in vifs_really_created]) def test_allocate_for_instance_with_requested_networks(self): # request the networks in a non-default order requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id=net['id']) for net in (self.nets3[1], self.nets3[0], self.nets3[2])]) self._test_allocate_for_instance_with_virtual_interface( net_idx=3, requested_networks=requested_networks) def test_allocate_for_instance_with_no_subnet_defined(self): # net_id=4 does not specify subnet and does not set the option # port_security_disabled to True, so Neutron will not be # able to associate the default security group to the port # requested to be created. We expect an exception to be # raised. 
self._test_allocate_for_instance_with_virtual_interface( net_idx=4, exception=exception.SecurityGroupCannotBeApplied, _break='post_list_extensions') def test_allocate_for_instance_with_invalid_network_id(self): requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest( network_id=uuids.non_existent_uuid)]) self._test_allocate_for_instance(net_idx=9, requested_networks=requested_networks, exception=exception.NetworkNotFound, _break='post_list_networks') def test_allocate_for_instance_with_requested_networks_with_fixedip(self): # specify only first and last network requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id=self.nets1[0]['id'], address='10.0.1.0')]) self._test_allocate_for_instance_with_virtual_interface( net_idx=1, requested_networks=requested_networks) def test_allocate_for_instance_with_requested_networks_with_port(self): requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(port_id=uuids.portid_1)]) self._test_allocate_for_instance_with_virtual_interface( net_idx=1, requested_networks=requested_networks) @mock.patch.object(neutronapi, 'get_client') def test_allocate_for_instance_no_networks(self, mock_get_client): """verify the exception thrown when there are no networks defined.""" self.instance = fake_instance.fake_instance_obj(self.context, **self.instance) mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client mocked_client.list_networks.return_value = { 'networks': model.NetworkInfo([])} nwinfo = self.api.allocate_for_instance(self.context, self.instance, False, None) self.assertEqual(0, len(nwinfo)) mock_get_client.assert_has_calls([ mock.call(self.context), mock.call(self.context, admin=True)]) mocked_client.list_networks.assert_has_calls([ mock.call(tenant_id=self.instance.project_id, shared=False), mock.call(shared=True)]) self.assertEqual(2, mocked_client.list_networks.call_count) @mock.patch.object(neutronapi, 'get_client') @mock.patch('nova.network.neutron.API._populate_neutron_extension_values') @mock.patch('nova.network.neutron.API._create_ports_for_instance') @mock.patch('nova.network.neutron.API._unbind_ports') def test_allocate_for_instance_ex1(self, mock_unbind, mock_create_ports, mock_populate, mock_get_client): """Verify we will delete created ports if we fail to allocate all net resources. We mock to raise an exception when creating a second port. In this case, the code should delete the first created port. 
""" self.instance = fake_instance.fake_instance_obj(self.context, **self.instance) requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id=net['id']) for net in (self.nets2[0], self.nets2[1])]) mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client mocked_client.list_networks.return_value = {'networks': self.nets2} mock_create_ports.return_value = [ (request, (getattr(uuids, 'portid_%s' % request.network_id))) for request in requested_networks ] index = 0 update_port_values = [] expected_update_port_calls = [] for network in self.nets2: binding_port_req_body = { 'port': { 'device_id': self.instance.uuid, 'device_owner': 'compute:nova', }, } port_req_body = { 'port': { 'network_id': network['id'], 'admin_state_up': True, 'tenant_id': self.instance.project_id, }, } port_req_body['port'].update(binding_port_req_body['port']) port_id = getattr(uuids, 'portid_%s' % network['id']) port = {'id': port_id, 'mac_address': 'foo'} if index == 0: update_port_values.append({'port': port}) else: update_port_values.append(exceptions.MacAddressInUseClient()) expected_update_port_calls.append(mock.call( port_id, binding_port_req_body)) index += 1 mocked_client.update_port.side_effect = update_port_values self.assertRaises(exception.PortInUse, self.api.allocate_for_instance, self.context, self.instance, False, requested_networks=requested_networks) mock_unbind.assert_called_once_with(self.context, [], mocked_client, mock.ANY) mock_get_client.assert_has_calls([ mock.call(self.context), mock.call(self.context, admin=True)], any_order=True) mocked_client.list_networks.assert_called_once_with( id=[uuids.my_netid1, uuids.my_netid2]) mocked_client.update_port.assert_has_calls(expected_update_port_calls) self.assertEqual(len(expected_update_port_calls), mocked_client.update_port.call_count) mocked_client.delete_port.assert_has_calls([ mock.call(getattr(uuids, 'portid_%s' % self.nets2[0]['id'])), mock.call(getattr(uuids, 'portid_%s' % self.nets2[1]['id']))]) self.assertEqual(2, mocked_client.delete_port.call_count) @mock.patch.object(neutronapi, 'get_client') def test_allocate_for_instance_ex2(self, mock_get_client): """verify we have no port to delete if we fail to allocate the first net resource. Mock to raise exception when creating the first port. In this case, the code should not delete any ports. 
""" mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client self.instance = fake_instance.fake_instance_obj(self.context, **self.instance) requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id=net['id']) for net in (self.nets2[0], self.nets2[1])]) mocked_client.list_networks.return_value = {'networks': self.nets2} port_req_body = { 'port': { 'network_id': self.nets2[0]['id'], 'admin_state_up': True, 'device_id': self.instance.uuid, 'tenant_id': self.instance.project_id, }, } mocked_client.create_port.side_effect = Exception( "fail to create port") self.assertRaises(NEUTRON_CLIENT_EXCEPTION, self.api.allocate_for_instance, self.context, self.instance, False, requested_networks=requested_networks) mock_get_client.assert_has_calls([ mock.call(self.context), mock.call(self.context, admin=True)]) mocked_client.list_networks.assert_called_once_with( id=[uuids.my_netid1, uuids.my_netid2]) mocked_client.create_port.assert_called_once_with(port_req_body) @mock.patch.object(neutronapi.API, '_get_available_networks') @mock.patch.object(neutronapi, 'get_client') def test_allocate_for_instance_no_port_or_network( self, mock_get_client, mock_get_available): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client self.instance = fake_instance.fake_instance_obj(self.context, **self.instance) # Make sure we get an empty list and then bail out of the rest # of the function mock_get_available.side_effect = test.TestingException requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest()]) self.assertRaises(test.TestingException, self.api.allocate_for_instance, self.context, self.instance, False, requested_networks) mock_get_client.assert_has_calls([ mock.call(self.context), mock.call(self.context, admin=True)]) mock_get_available.assert_called_once_with( self.context, self.instance.project_id, [], neutron=mocked_client, auto_allocate=False) def test_allocate_for_instance_second_time(self): # Make sure that allocate_for_instance only returns ports that it # allocated during _that_ run. new_port = {'id': uuids.fake} self._returned_nw_info = self.port_data1 + [new_port] nw_info, _ = self._test_allocate_for_instance_with_virtual_interface() self.assertEqual([new_port], nw_info) def test_allocate_for_instance_port_in_use(self): # If a port is already in use, an exception should be raised. requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(port_id=uuids.portid_1)]) self._test_allocate_for_instance( requested_networks=requested_networks, exception=exception.PortInUse, _break='pre_list_networks', _device=True) def test_allocate_for_instance_port_not_found(self): # If a port is not found, an exception should be raised. 
requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(port_id=uuids.non_existent_uuid)]) self._test_allocate_for_instance( requested_networks=requested_networks, exception=exception.PortNotFound, _break='pre_list_networks') def test_allocate_for_instance_port_invalid_tenantid(self): self.tenant_id = 'invalid_id' requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(port_id=uuids.portid_1)]) self._test_allocate_for_instance( requested_networks=requested_networks, exception=exception.PortNotUsable, _break='pre_list_networks') @mock.patch.object(neutronapi, 'get_client') def test_allocate_for_instance_with_externalnet_forbidden( self, mock_get_client): """Only one network is available, it's external, and the client is unauthorized to use it. """ rules = {'network:attach_external_network': 'is_admin:True'} policy.set_rules(oslo_policy.Rules.from_dict(rules)) mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client self.instance = fake_instance.fake_instance_obj(self.context, **self.instance) mocked_client.list_networks.side_effect = [ # no networks in the tenant {'networks': model.NetworkInfo([])}, # external network is shared {'networks': self.nets8}] self.assertRaises(exception.ExternalNetworkAttachForbidden, self.api.allocate_for_instance, self.context, self.instance, False, None) mock_get_client.assert_has_calls([ mock.call(self.context), mock.call(self.context, admin=True)]) mocked_client.list_networks.assert_has_calls([ mock.call(tenant_id=self.instance.project_id, shared=False), mock.call(shared=True)]) self.assertEqual(2, mocked_client.list_networks.call_count) @mock.patch.object(neutronapi, 'get_client') def test_allocate_for_instance_with_externalnet_multiple( self, mock_get_client): """Multiple networks are available, one the client is authorized to use, and an external one the client is unauthorized to use. """ mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client self.instance = fake_instance.fake_instance_obj(self.context, **self.instance) mocked_client.list_networks.side_effect = [ # network found in the tenant {'networks': self.nets1}, # external network is shared {'networks': self.nets8}] self.assertRaises( exception.NetworkAmbiguous, self.api.allocate_for_instance, self.context, self.instance, False, None) mock_get_client.assert_has_calls([ mock.call(self.context), mock.call(self.context, admin=True)]) mocked_client.list_networks.assert_has_calls([ mock.call(tenant_id=self.instance.project_id, shared=False), mock.call(shared=True)]) self.assertEqual(2, mocked_client.list_networks.call_count) def test_allocate_for_instance_with_externalnet_admin_ctx(self): """Only one network is available, it's external, and the client is authorized. 
""" admin_ctx = context.RequestContext('userid', uuids.my_tenant, is_admin=True) self._test_allocate_for_instance(net_idx=8, context=admin_ctx) def test_allocate_for_instance_with_external_shared_net(self): """Only one network is available, it's external and shared.""" ctx = context.RequestContext('userid', uuids.my_tenant) self._test_allocate_for_instance(net_idx=10, context=ctx) @mock.patch.object(db_api, 'instance_info_cache_update', return_value=fake_info_cache) @mock.patch.object(neutronapi, 'get_client') def _test_deallocate_for_instance(self, number, mock_get_client, mock_cache_update, requested_networks=None): # TODO(mriedem): Remove this conversion when all neutronv2 APIs are # converted to handling instance objects. self.instance = fake_instance.fake_instance_obj(self.context, **self.instance) mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client port_data = number == 1 and self.port_data1 or self.port_data2 ports = {port['id'] for port in port_data} ret_data = copy.deepcopy(port_data) if requested_networks: if isinstance(requested_networks, objects.NetworkRequestList): requested_networks = requested_networks.as_tuples() for net, fip, port, request_id in requested_networks: ret_data.append({'network_id': net, 'device_id': self.instance.uuid, 'device_owner': 'compute:nova', 'id': port, 'status': 'DOWN', 'admin_state_up': True, 'fixed_ips': [], 'mac_address': 'fake_mac', }) mocked_client.list_ports.return_value = {'ports': ret_data} show_port_values = [] expected_show_port_calls = [] show_network_values = [] expected_show_network_calls = [] expected_update_port_calls = [] if requested_networks: for net, fip, port, request_id in requested_networks: expected_show_port_calls.append(mock.call( port, fields=['binding:profile', 'network_id'])) show_port_values.append({'port': ret_data[0]}) expected_show_network_calls.append(mock.call( ret_data[0]['network_id'], fields=['dns_domain'])) show_network_values.append( {'network': {'id': ret_data[0]['network_id']}}) expected_update_port_calls.append(mock.call( port, {'port': { 'device_owner': '', 'device_id': '', 'binding:host_id': None, 'binding:profile': {}}})) mocked_client.show_port.side_effect = show_port_values mocked_client.show_network.side_effect = show_network_values expected_delete_port_calls = [] for port in ports: expected_delete_port_calls.append(mock.call(port)) self.api.deallocate_for_instance(self.context, self.instance, requested_networks=requested_networks) mock_get_client.assert_has_calls([ mock.call(self.context), mock.call(self.context, admin=True)], any_order=True) mocked_client.list_ports.assert_called_once_with( device_id=self.instance.uuid) mocked_client.show_port.assert_has_calls(expected_show_port_calls) self.assertEqual(len(expected_show_port_calls), mocked_client.show_port.call_count) mocked_client.show_network.assert_has_calls( expected_show_network_calls) self.assertEqual(len(expected_show_network_calls), mocked_client.show_network.call_count) mocked_client.update_port.assert_has_calls(expected_update_port_calls) self.assertEqual(len(expected_update_port_calls), mocked_client.update_port.call_count) mocked_client.delete_port.assert_has_calls(expected_delete_port_calls, any_order=True) self.assertEqual(len(expected_delete_port_calls), mocked_client.delete_port.call_count) mock_cache_update.assert_called_once_with( self.context, self.instance.uuid, {'network_info': '[]'}) @mock.patch('nova.network.neutron.API._get_preexisting_port_ids') def 
    test_deallocate_for_instance_1_with_requested(self, mock_preexisting):
        mock_preexisting.return_value = []
        requested = objects.NetworkRequestList(
            objects=[objects.NetworkRequest(network_id='fake-net',
                                            address='1.2.3.4',
                                            port_id=uuids.portid_5)])
        # Test to deallocate in one port env.
        self._test_deallocate_for_instance(1, requested_networks=requested)
        mock_preexisting.assert_called_once_with(self.instance)

    @mock.patch('nova.network.neutron.API._get_preexisting_port_ids')
    def test_deallocate_for_instance_2_with_requested(self, mock_preexisting):
        mock_preexisting.return_value = []
        requested = objects.NetworkRequestList(
            objects=[objects.NetworkRequest(network_id='fake-net',
                                            address='1.2.3.4',
                                            port_id=uuids.portid_6)])
        # Test to deallocate in two ports env.
        self._test_deallocate_for_instance(2, requested_networks=requested)
        mock_preexisting.assert_called_once_with(self.instance)

    @mock.patch('nova.network.neutron.API._get_preexisting_port_ids')
    def test_deallocate_for_instance_1(self, mock_preexisting):
        mock_preexisting.return_value = []
        # Test to deallocate in one port env.
        self._test_deallocate_for_instance(1)
        mock_preexisting.assert_called_once_with(self.instance)

    @mock.patch('nova.network.neutron.API._get_preexisting_port_ids')
    def test_deallocate_for_instance_2(self, mock_preexisting):
        mock_preexisting.return_value = []
        # Test to deallocate in two ports env.
        self._test_deallocate_for_instance(2)
        mock_preexisting.assert_called_once_with(self.instance)

    @mock.patch.object(neutronapi, 'get_client')
    @mock.patch('nova.network.neutron.API._get_preexisting_port_ids')
    def test_deallocate_for_instance_port_not_found(self,
                                                    mock_preexisting,
                                                    mock_get_client):
        mocked_client = mock.create_autospec(client.Client)
        mock_get_client.return_value = mocked_client
        # TODO(mriedem): Remove this conversion when all neutronv2 APIs are
        # converted to handling instance objects.
        self.instance = fake_instance.fake_instance_obj(self.context,
                                                        **self.instance)
        mock_preexisting.return_value = []
        port_data = self.port_data1
        mocked_client.list_ports.return_value = {'ports': port_data}
        NeutronNotFound = exceptions.NeutronClientException(status_code=404)
        delete_port_values = []
        expected_delete_port_calls = []
        for port in reversed(port_data):
            delete_port_values.append(NeutronNotFound)
            expected_delete_port_calls.append(mock.call(port['id']))
        # Every delete_port call raises a 404; deallocate_for_instance is
        # expected to swallow it.
        mocked_client.delete_port.side_effect = delete_port_values
        self.api.deallocate_for_instance(self.context, self.instance)
        mock_preexisting.assert_called_once_with(self.instance)
        mock_get_client.assert_has_calls([
            mock.call(self.context),
            mock.call(self.context, admin=True)], any_order=True)
        mocked_client.list_ports.assert_called_once_with(
            device_id=self.instance.uuid)
        mocked_client.delete_port.assert_has_calls(expected_delete_port_calls)
        self.assertEqual(len(expected_delete_port_calls),
                         mocked_client.delete_port.call_count)

    @mock.patch.object(neutronapi.API, '_get_physnet_tunneled_info',
                       return_value=(None, False))
    @mock.patch.object(db_api, 'instance_info_cache_get')
    @mock.patch.object(neutronapi, 'get_client')
    def _test_deallocate_port_for_instance(self, number, mock_get_client,
                                           mock_cache_get, mock_get_physnet):
        mocked_client = mock.create_autospec(client.Client)
        mock_get_client.return_value = mocked_client
        port_data = number == 1 and self.port_data1 or self.port_data2
        nets = number == 1 and self.nets1 or self.nets2
        mocked_client.show_port.return_value = {'port': port_data[0]}
        net_info_cache = []
        for port in port_data:
            net_info_cache.append({"network": {"id": port['network_id']},
                                   "id": port['id']})
        self.instance['info_cache'] = self._fake_instance_info_cache(
            net_info_cache, self.instance['uuid'])
        mocked_client.list_ports.return_value = {'ports': port_data[1:]}
        net_ids = [port['network_id'] for port in port_data]
        mocked_client.list_networks.return_value = {'networks': nets}
        list_floatingips_values = []
        expected_list_floatingips_calls = []
        float_data = number == 1 and self.float_data1 or self.float_data2
        for data in port_data[1:]:
            for ip in data['fixed_ips']:
                list_floatingips_values.append(
                    {'floatingips': float_data[1:]})
                expected_list_floatingips_calls.append(
                    mock.call(fixed_ip_address=ip['ip_address'],
                              port_id=data['id']))
        mocked_client.list_floatingips.side_effect = list_floatingips_values
        mocked_client.list_subnets.return_value = {}
        expected_list_subnets_calls = []
        for port in port_data[1:]:
            expected_list_subnets_calls.append(mock.call(id=['my_subid2']))
        mock_cache_get.return_value = self.instance['info_cache']
        instance = self._fake_instance_object_with_info_cache(self.instance)
        nwinfo, port_allocation = self.api.deallocate_port_for_instance(
            self.context, instance, port_data[0]['id'])
        self.assertEqual(len(port_data[1:]), len(nwinfo))
        if len(port_data) > 1:
            self.assertEqual(uuids.my_netid2, nwinfo[0]['network']['id'])
        mocked_client.delete_port.assert_called_once_with(port_data[0]['id'])
        mocked_client.show_port.assert_called_once_with(port_data[0]['id'])
        expected_get_client_calls = [
            mock.call(self.context),
            mock.call(self.context, admin=True)]
        if number == 2:
            expected_get_client_calls.append(mock.call(self.context,
                                                       admin=True))
        mock_get_client.assert_has_calls(expected_get_client_calls,
                                         any_order=True)
        mocked_client.list_ports.assert_called_once_with(
            tenant_id=self.instance['project_id'],
            device_id=self.instance['uuid'])
        mocked_client.list_networks.assert_called_once_with(id=net_ids)
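        # NOTE: the remaining assertions for this helper continue below; only
        # the two-port case (number == 2) has to rebuild network info for the
        # surviving port, so the physnet/tunneled lookup is asserted there and
        # asserted as not called in the single-port case.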
mocked_client.list_floatingips.assert_has_calls( expected_list_floatingips_calls) self.assertEqual(len(expected_list_floatingips_calls), mocked_client.list_floatingips.call_count) mock_cache_get.assert_called_once_with(mock.ANY, self.instance['uuid']) mocked_client.list_subnets.assert_has_calls( expected_list_subnets_calls) self.assertEqual(len(expected_list_subnets_calls), mocked_client.list_subnets.call_count) if number == 2: mock_get_physnet.assert_called_once_with(mock.ANY, mock.ANY, mock.ANY) else: mock_get_physnet.assert_not_called() def test_deallocate_port_for_instance_1(self): # Test to deallocate the first and only port self._test_deallocate_port_for_instance(1) def test_deallocate_port_for_instance_2(self): # Test to deallocate the first port of two self._test_deallocate_port_for_instance(2) @mock.patch.object(neutronapi, 'get_client') def test_list_ports(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client search_opts = {'parm': 'value'} self.api.list_ports(self.context, **search_opts) mock_get_client.assert_called_once_with(self.context) mocked_client.list_ports.assert_called_once_with(**search_opts) @mock.patch.object(neutronapi, 'get_client') def test_show_port(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client mocked_client.show_port.return_value = {'port': self.port_data1[0]} self.api.show_port(self.context, 'foo') mock_get_client.assert_called_once_with(self.context) mocked_client.show_port.assert_called_once_with('foo') @mock.patch.object(neutronapi, 'get_client') def test_validate_networks(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client requested_networks = [(uuids.my_netid1, None, None, None), (uuids.my_netid2, None, None, None)] ids = [uuids.my_netid1, uuids.my_netid2] mocked_client.list_networks.return_value = {'networks': self.nets2} mocked_client.show_quota.return_value = {'quota': {'port': 50}} mocked_client.list_ports.return_value = {'ports': []} self.api.validate_networks(self.context, requested_networks, 1) mock_get_client.assert_called_once_with(self.context) mocked_client.list_networks.assert_called_once_with(id=ids) mocked_client.show_quota.assert_called_once_with(uuids.my_tenant) mocked_client.list_ports.assert_called_once_with( tenant_id=uuids.my_tenant, fields=['id']) @mock.patch.object(neutronapi, 'get_client') def test_validate_networks_without_port_quota_on_network_side( self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client requested_networks = [(uuids.my_netid1, None, None, None), (uuids.my_netid2, None, None, None)] ids = [uuids.my_netid1, uuids.my_netid2] mocked_client.list_networks.return_value = {'networks': self.nets2} mocked_client.show_quota.return_value = {'quota': {}} self.api.validate_networks(self.context, requested_networks, 1) mock_get_client.assert_called_once_with(self.context) mocked_client.list_networks.assert_called_once_with(id=ids) mocked_client.show_quota.assert_called_once_with(uuids.my_tenant) @mock.patch.object(neutronapi, 'get_client') def test_validate_networks_ex_1(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client requested_networks = [(uuids.my_netid1, None, None, None)] mocked_client.list_networks.return_value = {'networks': []} ex = self.assertRaises(exception.NetworkNotFound, 
self.api.validate_networks, self.context, requested_networks, 1) self.assertIn(uuids.my_netid1, six.text_type(ex)) mock_get_client.assert_called_once_with(self.context) mocked_client.list_networks.assert_called_once_with( id=[uuids.my_netid1]) @mock.patch.object(neutronapi, 'get_client') def test_validate_networks_ex_2(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client requested_networks = [(uuids.my_netid1, None, None, None), (uuids.my_netid2, None, None, None), (uuids.my_netid3, None, None, None)] ids = [uuids.my_netid1, uuids.my_netid2, uuids.my_netid3] mocked_client.list_networks.return_value = {'networks': self.nets1} ex = self.assertRaises(exception.NetworkNotFound, self.api.validate_networks, self.context, requested_networks, 1) self.assertIn(uuids.my_netid2, six.text_type(ex)) self.assertIn(uuids.my_netid3, six.text_type(ex)) mock_get_client.assert_called_once_with(self.context) mocked_client.list_networks.assert_called_once_with(id=ids) @mock.patch.object(neutronapi, 'get_client') def test_validate_networks_duplicate_enable(self, mock_get_client): # Verify that no duplicateNetworks exception is thrown when duplicate # network ids are passed to validate_networks. mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id=uuids.my_netid1), objects.NetworkRequest(network_id=uuids.my_netid1)]) ids = [uuids.my_netid1, uuids.my_netid1] mocked_client.list_networks.return_value = {'networks': self.nets1} mocked_client.show_quota.return_value = {'quota': {'port': 50}} mocked_client.list_ports.return_value = {'ports': []} self.api.validate_networks(self.context, requested_networks, 1) mock_get_client.assert_called_once_with(self.context) mocked_client.list_networks.assert_called_once_with(id=ids) mocked_client.show_quota.assert_called_once_with(uuids.my_tenant) mocked_client.list_ports.assert_called_once_with( tenant_id=uuids.my_tenant, fields=['id']) def test_allocate_for_instance_with_requested_networks_duplicates(self): # specify a duplicate network to allocate to instance requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id=net['id']) for net in (self.nets6[0], self.nets6[1])]) self._test_allocate_for_instance_with_virtual_interface( net_idx=6, requested_networks=requested_networks) def test_allocate_for_instance_requested_networks_duplicates_port(self): # specify first port and last port that are in same network requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(port_id=port['id']) for port in (self.port_data1[0], self.port_data3[0])]) self._test_allocate_for_instance_with_virtual_interface( net_idx=6, requested_networks=requested_networks) def test_allocate_for_instance_requested_networks_duplicates_combo(self): # specify a combo net_idx=7 : net2, port in net1, net2, port in net1 requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id=uuids.my_netid2), objects.NetworkRequest(port_id=self.port_data1[0]['id']), objects.NetworkRequest(network_id=uuids.my_netid2), objects.NetworkRequest(port_id=self.port_data3[0]['id'])]) self._test_allocate_for_instance_with_virtual_interface( net_idx=7, requested_networks=requested_networks) @mock.patch.object(neutronapi, 'get_client') def test_validate_networks_not_specified(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) 
        mock_get_client.return_value = mocked_client
        requested_networks = objects.NetworkRequestList(objects=[])
        mocked_client.list_networks.side_effect = [
            {'networks': self.nets1}, {'networks': self.nets2}]
        self.assertRaises(exception.NetworkAmbiguous,
                          self.api.validate_networks,
                          self.context, requested_networks, 1)
        mock_get_client.assert_called_once_with(self.context)
        mocked_client.list_networks.assert_has_calls([
            mock.call(tenant_id=self.context.project_id, shared=False),
            mock.call(shared=True)])
        self.assertEqual(2, mocked_client.list_networks.call_count)

    @mock.patch.object(neutronapi, 'get_client')
    def test_validate_networks_port_not_found(self, mock_get_client):
        # Verify that the correct exception is thrown when a non-existent
        # port is passed to validate_networks.
        mocked_client = mock.create_autospec(client.Client)
        mock_get_client.return_value = mocked_client
        requested_networks = objects.NetworkRequestList(
            objects=[objects.NetworkRequest(
                network_id=uuids.my_netid1,
                port_id=uuids.portid_1)])
        mocked_client.show_port.side_effect = exceptions.PortNotFoundClient
        self.assertRaises(exception.PortNotFound,
                          self.api.validate_networks,
                          self.context, requested_networks, 1)
        mock_get_client.assert_called_once_with(self.context)
        mocked_client.show_port.assert_called_once_with(
            requested_networks[0].port_id)

    @mock.patch.object(neutronapi, 'get_client')
    def test_validate_networks_port_show_raises_non404(self,
                                                       mock_get_client):
        # Verify that a non-404 error from showing a requested port is
        # reraised as a NovaException by validate_networks.
        mocked_client = mock.create_autospec(client.Client)
        mock_get_client.return_value = mocked_client
        fake_port_id = uuids.portid_1
        requested_networks = objects.NetworkRequestList(
            objects=[objects.NetworkRequest(
                network_id=uuids.my_netid1,
                port_id=fake_port_id)])
        mocked_client.show_port.side_effect = (
            exceptions.NeutronClientException(status_code=0))
        exc = self.assertRaises(exception.NovaException,
                                self.api.validate_networks,
                                self.context, requested_networks, 1)
        expected_exception_message = ('Failed to access port %(port_id)s: '
                                      'An unknown exception occurred.'
% {'port_id': fake_port_id}) self.assertEqual(expected_exception_message, str(exc)) mock_get_client.assert_called_once_with(self.context) mocked_client.show_port.assert_called_once_with( requested_networks[0].port_id) @mock.patch.object(neutronapi, 'get_client') def test_validate_networks_port_in_use(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(port_id=self.port_data3[0]['id'])]) mocked_client.show_port.return_value = {'port': self.port_data3[0]} self.assertRaises(exception.PortInUse, self.api.validate_networks, self.context, requested_networks, 1) mock_get_client.assert_called_once_with(self.context) mocked_client.show_port.assert_called_once_with( self.port_data3[0]['id']) @mock.patch.object(neutronapi, 'get_client') def test_validate_networks_port_no_subnet_id(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client port_a = self.port_data3[0] port_a['device_id'] = None port_a['device_owner'] = None requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(port_id=port_a['id'])]) mocked_client.show_port.return_value = {'port': port_a} self.assertRaises(exception.PortRequiresFixedIP, self.api.validate_networks, self.context, requested_networks, 1) mock_get_client.assert_called_once_with(self.context) mocked_client.show_port.assert_called_once_with(port_a['id']) @mock.patch.object(neutronapi, 'get_client') def test_validate_networks_no_subnet_id(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id='his_netid4')]) ids = ['his_netid4'] mocked_client.list_networks.return_value = {'networks': self.nets4} self.assertRaises(exception.NetworkRequiresSubnet, self.api.validate_networks, self.context, requested_networks, 1) mock_get_client.assert_called_once_with(self.context) mocked_client.list_networks.assert_called_once_with(id=ids) @mock.patch.object(neutronapi, 'get_client') def test_validate_networks_ports_in_same_network_enable(self, mock_get_client): # Verify that duplicateNetworks exception is not thrown when ports # on same duplicate network are passed to validate_networks. 
mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client port_a = self.port_data3[0] port_a['fixed_ips'] = {'ip_address': '10.0.0.2', 'subnet_id': 'subnet_id'} port_b = self.port_data1[0] self.assertEqual(port_a['network_id'], port_b['network_id']) for port in [port_a, port_b]: port['device_id'] = None port['device_owner'] = None requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(port_id=port_a['id']), objects.NetworkRequest(port_id=port_b['id'])]) mocked_client.show_port.side_effect = [{'port': port_a}, {'port': port_b}] self.api.validate_networks(self.context, requested_networks, 1) mock_get_client.assert_called_once_with(self.context) mocked_client.show_port.assert_has_calls([mock.call(port_a['id']), mock.call(port_b['id'])]) self.assertEqual(2, mocked_client.show_port.call_count) @mock.patch.object(neutronapi, 'get_client') def test_validate_networks_ports_not_in_same_network(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client port_a = self.port_data3[0] port_a['fixed_ips'] = {'ip_address': '10.0.0.2', 'subnet_id': 'subnet_id'} port_b = self.port_data2[1] self.assertNotEqual(port_a['network_id'], port_b['network_id']) for port in [port_a, port_b]: port['device_id'] = None port['device_owner'] = None requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(port_id=port_a['id']), objects.NetworkRequest(port_id=port_b['id'])]) mocked_client.show_port.side_effect = [{'port': port_a}, {'port': port_b}] self.api.validate_networks(self.context, requested_networks, 1) mock_get_client.assert_called_once_with(self.context) mocked_client.show_port.assert_has_calls([mock.call(port_a['id']), mock.call(port_b['id'])]) self.assertEqual(2, mocked_client.show_port.call_count) @mock.patch.object(neutronapi, 'get_client') def test_validate_networks_no_quota(self, mock_get_client): # Test validation for a request for one instance needing # two ports, where the quota is 2 and 2 ports are in use # => instances which can be created = 0 mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id=uuids.my_netid1), objects.NetworkRequest(network_id=uuids.my_netid2)]) ids = [uuids.my_netid1, uuids.my_netid2] mocked_client.list_networks.return_value = {'networks': self.nets2} mocked_client.show_quota.return_value = {'quota': {'port': 2}} mocked_client.list_ports.return_value = {'ports': self.port_data2} max_count = self.api.validate_networks(self.context, requested_networks, 1) self.assertEqual(0, max_count) mock_get_client.assert_called_once_with(self.context) mocked_client.list_networks.assert_called_once_with(id=ids) mocked_client.show_quota.assert_called_once_with(uuids.my_tenant) mocked_client.list_ports.assert_called_once_with( tenant_id=uuids.my_tenant, fields=['id']) @mock.patch.object(neutronapi, 'get_client') def test_validate_networks_with_ports_and_networks(self, mock_get_client): # Test validation for a request for one instance needing # one port allocated via nova with another port being passed in. 
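        # The port quota below is 5 with the two existing ports in use, so the
        # single extra port nova would create for the network request still
        # fits and the full request for one instance is allowed (max_count 1).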
mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client port_b = self.port_data2[1] port_b['device_id'] = None port_b['device_owner'] = None requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id=uuids.my_netid1), objects.NetworkRequest(port_id=port_b['id'])]) mocked_client.show_port.return_value = {'port': port_b} ids = [uuids.my_netid1] mocked_client.list_networks.return_value = {'networks': self.nets1} mocked_client.show_quota.return_value = {'quota': {'port': 5}} mocked_client.list_ports.return_value = {'ports': self.port_data2} max_count = self.api.validate_networks(self.context, requested_networks, 1) self.assertEqual(1, max_count) mock_get_client.assert_called_once_with(self.context) mocked_client.show_port.assert_called_once_with(port_b['id']) mocked_client.list_networks.assert_called_once_with(id=ids) mocked_client.show_quota.assert_called_once_with(uuids.my_tenant) mocked_client.list_ports.assert_called_once_with( tenant_id=uuids.my_tenant, fields=['id']) @mock.patch.object(neutronapi, 'get_client') def test_validate_networks_one_port_and_no_networks(self, mock_get_client): # Test that show quota is not called if no networks are # passed in and only ports. mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client port_b = self.port_data2[1] port_b['device_id'] = None port_b['device_owner'] = None requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(port_id=port_b['id'])]) mocked_client.show_port.return_value = {'port': port_b} max_count = self.api.validate_networks(self.context, requested_networks, 1) self.assertEqual(1, max_count) mock_get_client.assert_called_once_with(self.context) mocked_client.show_port.assert_called_once_with(port_b['id']) @mock.patch.object(neutronapi, 'get_client') def test_validate_networks_some_quota(self, mock_get_client): # Test validation for a request for two instance needing # two ports each, where the quota is 5 and 2 ports are in use # => instances which can be created = 1 mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id=uuids.my_netid1), objects.NetworkRequest(network_id=uuids.my_netid2)]) ids = [uuids.my_netid1, uuids.my_netid2] mocked_client.list_networks.return_value = {'networks': self.nets2} mocked_client.show_quota.return_value = {'quota': {'port': 5}} mocked_client.list_ports.return_value = {'ports': self.port_data2} max_count = self.api.validate_networks(self.context, requested_networks, 2) self.assertEqual(1, max_count) mock_get_client.assert_called_once_with(self.context) mocked_client.list_networks.assert_called_once_with(id=ids) mocked_client.show_quota.assert_called_once_with(uuids.my_tenant) mocked_client.list_ports.assert_called_once_with( tenant_id=uuids.my_tenant, fields=['id']) @mock.patch.object(neutronapi, 'get_client') def test_validate_networks_unlimited_quota(self, mock_get_client): # Test validation for a request for two instance needing # two ports each, where the quota is -1 (unlimited) # => instances which can be created = 1 mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id=uuids.my_netid1), objects.NetworkRequest(network_id=uuids.my_netid2)]) ids = [uuids.my_netid1, uuids.my_netid2] 
mocked_client.list_networks.return_value = {'networks': self.nets2} mocked_client.show_quota.return_value = {'quota': {'port': -1}} max_count = self.api.validate_networks(self.context, requested_networks, 2) self.assertEqual(2, max_count) mock_get_client.assert_called_once_with(self.context) mocked_client.list_networks.assert_called_once_with(id=ids) mocked_client.show_quota.assert_called_once_with(uuids.my_tenant) @mock.patch.object(neutronapi, 'get_client') def test_validate_networks_no_quota_but_ports_supplied(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client port_a = self.port_data3[0] port_a['fixed_ips'] = {'ip_address': '10.0.0.2', 'subnet_id': 'subnet_id'} port_b = self.port_data2[1] self.assertNotEqual(port_a['network_id'], port_b['network_id']) for port in [port_a, port_b]: port['device_id'] = None port['device_owner'] = None requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(port_id=port_a['id']), objects.NetworkRequest(port_id=port_b['id'])]) mocked_client.show_port.side_effect = [{'port': port_a}, {'port': port_b}] max_count = self.api.validate_networks(self.context, requested_networks, 1) self.assertEqual(1, max_count) mock_get_client.assert_called_once_with(self.context) mocked_client.show_port.assert_has_calls([mock.call(port_a['id']), mock.call(port_b['id'])]) self.assertEqual(2, mocked_client.show_port.call_count) @mock.patch.object(neutronapi, 'get_client') def _test_get_fixed_ip_by_address_with_exception(self, mock_get_client, port_data=None, exception=None): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client if port_data is None: port_data = self.port_data2 mocked_client.list_ports.return_value = {'ports': port_data} result = None if exception: self.assertRaises(exception, self.api.get_fixed_ip_by_address, self.context, self.port_address) else: result = self.api.get_fixed_ip_by_address(self.context, self.port_address) mock_get_client.assert_called_once_with(self.context) mocked_client.list_ports.assert_called_once_with( fixed_ips='ip_address=%s' % self.port_address) return result def test_get_fixed_ip_by_address_fails_for_no_ports(self): self._test_get_fixed_ip_by_address_with_exception( port_data=[], exception=exception.FixedIpNotFoundForAddress) def test_get_fixed_ip_by_address_succeeds_for_1_port(self): result = self._test_get_fixed_ip_by_address_with_exception( port_data=self.port_data1) self.assertEqual(self.instance2['uuid'], result['instance_uuid']) def test_get_fixed_ip_by_address_fails_for_more_than_1_port(self): self._test_get_fixed_ip_by_address_with_exception( exception=exception.FixedIpAssociatedWithMultipleInstances) @mock.patch.object(neutronapi, 'get_client') def _test_get_available_networks(self, prv_nets, pub_nets, mock_get_client, req_ids=None, context=None): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client nets = prv_nets + pub_nets if req_ids: mocked_client.list_networks.return_value = {'networks': nets} else: mocked_client.list_networks.side_effect = [{'networks': prv_nets}, {'networks': pub_nets}] rets = self.api._get_available_networks( context if context else self.context, self.instance['project_id'], req_ids) self.assertEqual(nets, rets) mock_get_client.assert_called_once_with(self.context) if req_ids: mocked_client.list_networks.assert_called_once_with(id=req_ids) else: mocked_client.list_networks.assert_has_calls([ 
mock.call(tenant_id=self.instance['project_id'], shared=False), mock.call(shared=True)]) self.assertEqual(2, mocked_client.list_networks.call_count) def test_get_available_networks_all_private(self): self._test_get_available_networks(self.nets2, []) def test_get_available_networks_all_public(self): self._test_get_available_networks([], self.nets2) def test_get_available_networks_private_and_public(self): self._test_get_available_networks(self.nets1, self.nets4) def test_get_available_networks_with_network_ids(self): prv_nets = [self.nets3[0]] pub_nets = [self.nets3[-1]] # specify only first and last network req_ids = [net['id'] for net in (self.nets3[0], self.nets3[-1])] self._test_get_available_networks(prv_nets, pub_nets, req_ids=req_ids) def test_get_available_networks_with_custom_policy(self): rules = {'network:attach_external_network': ''} policy.set_rules(oslo_policy.Rules.from_dict(rules)) req_ids = [net['id'] for net in self.nets5] self._test_get_available_networks(self.nets5, [], req_ids=req_ids) @mock.patch.object(neutronapi, 'get_client') def test_get_floating_ip_pools(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client search_opts = {'router:external': True} mocked_client.list_networks.return_value = { 'networks': [self.fip_pool, self.fip_pool_nova]} pools = self.api.get_floating_ip_pools(self.context) expected = [self.fip_pool, self.fip_pool_nova] self.assertEqual(expected, pools) mock_get_client.assert_called_once_with(self.context) mocked_client.list_networks.assert_called_once_with(**search_opts) @mock.patch.object(neutronapi, 'get_client') def test_get_floating_ip_by_address_not_found(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client address = self.fip_unassociated['floating_ip_address'] mocked_client.list_floatingips.return_value = {'floatingips': []} self.assertRaises(exception.FloatingIpNotFoundForAddress, self.api.get_floating_ip_by_address, self.context, address) mock_get_client.assert_called_once_with(self.context) mocked_client.list_floatingips.assert_called_once_with( floating_ip_address=address) @mock.patch.object(neutronapi, 'get_client') def test_get_floating_ip_by_id_not_found(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client floating_ip_id = self.fip_unassociated['id'] mocked_client.show_floatingip.side_effect = ( exceptions.NeutronClientException(status_code=404)) self.assertRaises(exception.FloatingIpNotFound, self.api.get_floating_ip, self.context, floating_ip_id) mock_get_client.assert_called_once_with(self.context) mocked_client.show_floatingip.assert_called_once_with(floating_ip_id) @mock.patch.object(neutronapi, 'get_client') def test_get_floating_ip_raises_non404(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client floating_ip_id = self.fip_unassociated['id'] mocked_client.show_floatingip.side_effect = ( exceptions.NeutronClientException(status_code=0)) self.assertRaises(exceptions.NeutronClientException, self.api.get_floating_ip, self.context, floating_ip_id) mock_get_client.assert_called_once_with(self.context) mocked_client.show_floatingip.assert_called_once_with(floating_ip_id) @mock.patch.object(neutronapi.API, '_refresh_neutron_extensions_cache') @mock.patch.object(neutronapi, 'get_client') def _test_get_floating_ip( self, fip_ext_enabled, has_port, mock_ntrn, 
mock_refresh): mock_nc = mock.Mock() mock_ntrn.return_value = mock_nc # NOTE(stephenfin): These are clearly not full responses mock_nc.show_floatingip.return_value = { 'floatingip': { 'id': uuids.fip_id, 'floating_network_id': uuids.fip_net_id, 'port_id': uuids.fip_port_id, } } mock_nc.show_network.return_value = { 'network': { 'id': uuids.fip_net_id, }, } if has_port: mock_nc.show_port.return_value = { 'port': { 'id': uuids.fip_port_id, }, } else: mock_nc.show_port.side_effect = exceptions.PortNotFoundClient if fip_ext_enabled: self.api.extensions = [constants.FIP_PORT_DETAILS] else: self.api.extensions = [] fip = self.api.get_floating_ip(self.context, uuids.fip_id) if fip_ext_enabled: mock_nc.show_port.assert_not_called() self.assertNotIn('port_details', fip) else: mock_nc.show_port.assert_called_once_with(uuids.fip_port_id) self.assertIn('port_details', fip) if has_port: self.assertIsNotNone(fip['port_details']) else: self.assertIsNone(fip['port_details']) def test_get_floating_ip_with_fip_port_details_ext(self): """Make sure we used embedded port details if available.""" self._test_get_floating_ip(True, True) def test_get_floating_ip_without_fip_port_details_ext(self): """Make sure we make a second request for port details if necessary.""" self._test_get_floating_ip(False, True) def test_get_floating_ip_without_port(self): """Make sure we don't fail for floating IPs without attached ports.""" self._test_get_floating_ip(False, False) @mock.patch.object(neutronapi, 'get_client') def test_get_floating_ip_by_address_multiple_found(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client address = self.fip_unassociated['floating_ip_address'] mocked_client.list_floatingips.return_value = { 'floatingips': [self.fip_unassociated] * 2} self.assertRaises(exception.FloatingIpMultipleFoundForAddress, self.api.get_floating_ip_by_address, self.context, address) mock_get_client.assert_called_once_with(self.context) mocked_client.list_floatingips.assert_called_once_with( floating_ip_address=address) @mock.patch.object(neutronapi.API, '_refresh_neutron_extensions_cache') @mock.patch.object(neutronapi, 'get_client') def _test_get_floating_ip_by_address( self, fip_ext_enabled, has_port, mock_ntrn, mock_refresh): mock_nc = mock.Mock() mock_ntrn.return_value = mock_nc # NOTE(stephenfin): These are clearly not full responses mock_nc.list_floatingips.return_value = { 'floatingips': [ { 'id': uuids.fip_id, 'floating_network_id': uuids.fip_net_id, 'port_id': uuids.fip_port_id, }, ] } mock_nc.show_network.return_value = { 'network': { 'id': uuids.fip_net_id, }, } if has_port: mock_nc.show_port.return_value = { 'port': { 'id': uuids.fip_port_id, }, } else: mock_nc.show_port.side_effect = exceptions.PortNotFoundClient if fip_ext_enabled: self.api.extensions = [constants.FIP_PORT_DETAILS] else: self.api.extensions = [] fip = self.api.get_floating_ip_by_address(self.context, '172.1.2.3') if fip_ext_enabled: mock_nc.show_port.assert_not_called() self.assertNotIn('port_details', fip) else: mock_nc.show_port.assert_called_once_with(uuids.fip_port_id) self.assertIn('port_details', fip) if has_port: self.assertIsNotNone(fip['port_details']) else: self.assertIsNone(fip['port_details']) def test_get_floating_ip_by_address_with_fip_port_details_ext(self): """Make sure we used embedded port details if available.""" self._test_get_floating_ip_by_address(True, True) def test_get_floating_ip_by_address_without_fip_port_details_ext(self): """Make sure we make a 
second request for port details if necessary.""" self._test_get_floating_ip_by_address(False, True) def test_get_floating_ip_by_address_without_port(self): """Make sure we don't fail for floating IPs without attached ports.""" self._test_get_floating_ip_by_address(False, False) @mock.patch.object(neutronapi, 'get_client') def _test_get_instance_id_by_floating_address(self, fip_data, mock_get_client, associated=False): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client address = fip_data['floating_ip_address'] mocked_client.list_floatingips.return_value = { 'floatingips': [fip_data]} if associated: mocked_client.show_port.return_value = {'port': self.port_data2[1]} expected = self.port_data2[1]['device_id'] else: expected = None fip = self.api.get_instance_id_by_floating_address(self.context, address) self.assertEqual(expected, fip) mock_get_client.assert_called_once_with(self.context) mocked_client.list_floatingips.assert_called_once_with( floating_ip_address=address) if associated: mocked_client.show_port.assert_called_once_with( fip_data['port_id']) def test_get_instance_id_by_floating_address(self): self._test_get_instance_id_by_floating_address(self.fip_unassociated) def test_get_instance_id_by_floating_address_associated(self): self._test_get_instance_id_by_floating_address(self.fip_associated, associated=True) @mock.patch.object(neutronapi, 'get_client') def test_allocate_floating_ip(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client pool_name = self.fip_pool['name'] pool_id = self.fip_pool['id'] search_opts = {'router:external': True, 'fields': 'id', 'name': pool_name} mocked_client.list_networks.return_value = { 'networks': [self.fip_pool]} mocked_client.create_floatingip.return_value = { 'floatingip': self.fip_unassociated} fip = self.api.allocate_floating_ip(self.context, 'ext_net') self.assertEqual(self.fip_unassociated['floating_ip_address'], fip) mock_get_client.assert_called_once_with(self.context) mocked_client.list_networks.assert_called_once_with(**search_opts) mocked_client.create_floatingip.assert_called_once_with( {'floatingip': {'floating_network_id': pool_id}}) @mock.patch.object(neutronapi, 'get_client') def test_allocate_floating_ip_addr_gen_fail(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client pool_name = self.fip_pool['name'] pool_id = self.fip_pool['id'] search_opts = {'router:external': True, 'fields': 'id', 'name': pool_name} mocked_client.list_networks.return_value = { 'networks': [self.fip_pool]} mocked_client.create_floatingip.side_effect = ( exceptions.IpAddressGenerationFailureClient) self.assertRaises(exception.NoMoreFloatingIps, self.api.allocate_floating_ip, self.context, 'ext_net') mock_get_client.assert_called_once_with(self.context) mocked_client.list_networks.assert_called_once_with(**search_opts) mocked_client.create_floatingip.assert_called_once_with( {'floatingip': {'floating_network_id': pool_id}}) @mock.patch.object(neutronapi, 'get_client') def test_allocate_floating_ip_exhausted_fail(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client pool_name = self.fip_pool['name'] pool_id = self.fip_pool['id'] search_opts = {'router:external': True, 'fields': 'id', 'name': pool_name} mocked_client.list_networks.return_value = { 'networks': [self.fip_pool]} mocked_client.create_floatingip.side_effect = ( 
exceptions.ExternalIpAddressExhaustedClient) self.assertRaises(exception.NoMoreFloatingIps, self.api.allocate_floating_ip, self.context, 'ext_net') mock_get_client.assert_called_once_with(self.context) mocked_client.list_networks.assert_called_once_with(**search_opts) mocked_client.create_floatingip.assert_called_once_with( {'floatingip': {'floating_network_id': pool_id}}) @mock.patch.object(neutronapi, 'get_client') def test_allocate_floating_ip_with_pool_id(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client pool_id = self.fip_pool['id'] search_opts = {'router:external': True, 'fields': 'id', 'id': pool_id} mocked_client.list_networks.return_value = { 'networks': [self.fip_pool]} mocked_client.create_floatingip.return_value = { 'floatingip': self.fip_unassociated} fip = self.api.allocate_floating_ip(self.context, pool_id) self.assertEqual(self.fip_unassociated['floating_ip_address'], fip) mock_get_client.assert_called_once_with(self.context) mocked_client.list_networks.assert_called_once_with(**search_opts) mocked_client.create_floatingip.assert_called_once_with( {'floatingip': {'floating_network_id': pool_id}}) @mock.patch.object(neutronapi, 'get_client') def test_allocate_floating_ip_with_default_pool(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client pool_name = self.fip_pool_nova['name'] pool_id = self.fip_pool_nova['id'] search_opts = {'router:external': True, 'fields': 'id', 'name': pool_name} mocked_client.list_networks.return_value = { 'networks': [self.fip_pool_nova]} mocked_client.create_floatingip.return_value = { 'floatingip': self.fip_unassociated} fip = self.api.allocate_floating_ip(self.context) self.assertEqual(self.fip_unassociated['floating_ip_address'], fip) mock_get_client.assert_called_once_with(self.context) mocked_client.list_networks.assert_called_once_with(**search_opts) mocked_client.create_floatingip.assert_called_once_with( {'floatingip': {'floating_network_id': pool_id}}) @mock.patch.object(neutronapi, 'get_client') def test_disassociate_and_release_floating_ip(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client address = self.fip_unassociated['floating_ip_address'] fip_id = self.fip_unassociated['id'] floating_ip = {'floating_ip_address': address} mocked_client.list_floatingips.return_value = { 'floatingips': [self.fip_unassociated]} self.api.disassociate_and_release_floating_ip(self.context, None, floating_ip) mock_get_client.assert_called_once_with(self.context) mocked_client.list_floatingips.assert_called_once_with( floating_ip_address=address) mocked_client.delete_floatingip.assert_called_once_with(fip_id) @mock.patch.object(neutronapi.API, '_get_instance_nw_info', return_value=model.NetworkInfo()) @mock.patch.object(db_api, 'instance_info_cache_update', return_value=fake_info_cache) @mock.patch.object(neutronapi, 'get_client') def test_disassociate_and_release_floating_ip_with_instance( self, mock_get_client, mock_cache_update, mock_get_nw): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client address = self.fip_unassociated['floating_ip_address'] fip_id = self.fip_unassociated['id'] floating_ip = {'floating_ip_address': address} instance = self._fake_instance_object(self.instance) mocked_client.list_floatingips.return_value = { 'floatingips': [self.fip_unassociated]} 
self.api.disassociate_and_release_floating_ip(self.context, instance, floating_ip) mock_get_client.assert_called_once_with(self.context) mocked_client.list_floatingips.assert_called_once_with( floating_ip_address=address) mocked_client.delete_floatingip.assert_called_once_with(fip_id) mock_cache_update.assert_called_once_with(mock.ANY, instance['uuid'], mock.ANY) mock_get_nw.assert_called_once_with(mock.ANY, instance) @mock.patch.object(neutronapi.API, '_get_instance_nw_info', return_value=model.NetworkInfo()) @mock.patch.object(db_api, 'instance_info_cache_update', return_value=fake_info_cache) @mock.patch.object(neutronapi, 'get_client') def test_associate_floating_ip(self, mock_get_client, mock_cache_update, mock_get_nw): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client address = self.fip_unassociated['floating_ip_address'] fixed_address = self.port_address2 fip_id = self.fip_unassociated['id'] instance = self._fake_instance_object(self.instance) mocked_client.list_ports.return_value = {'ports': [self.port_data2[1]]} mocked_client.list_floatingips.return_value = { 'floatingips': [self.fip_unassociated]} self.api.associate_floating_ip(self.context, instance, address, fixed_address) mock_get_client.assert_called_once_with(self.context) mocked_client.list_ports.assert_called_once_with( **{'device_owner': 'compute:nova', 'device_id': instance.uuid}) mocked_client.list_floatingips.assert_called_once_with( floating_ip_address=address) mocked_client.update_floatingip.assert_called_once_with( fip_id, {'floatingip': {'port_id': self.fip_associated['port_id'], 'fixed_ip_address': fixed_address}}) mock_cache_update.assert_called_once_with(mock.ANY, instance['uuid'], mock.ANY) mock_get_nw.assert_called_once_with(mock.ANY, instance) @mock.patch.object(neutronapi.API, '_get_instance_nw_info', return_value=model.NetworkInfo()) @mock.patch.object(db_api, 'instance_info_cache_update', return_value=fake_info_cache) @mock.patch.object(neutronapi, 'get_client') @mock.patch('nova.objects.Instance.get_by_uuid') def test_reassociate_floating_ip(self, mock_get, mock_get_client, mock_cache_update, mock_get_nw): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client address = self.fip_associated['floating_ip_address'] new_fixed_address = self.port_address fip_id = self.fip_associated['id'] mocked_client.list_ports.return_value = {'ports': [self.port_data2[0]]} mocked_client.list_floatingips.return_value = { 'floatingips': [self.fip_associated]} mocked_client.show_port.return_value = {'port': self.port_data2[1]} mock_get.return_value = fake_instance.fake_instance_obj( self.context, **self.instance) instance2 = self._fake_instance_object(self.instance2) self.api.associate_floating_ip(self.context, instance2, address, new_fixed_address) mock_get_client.assert_called_once_with(self.context) mocked_client.list_ports.assert_called_once_with( **{'device_owner': 'compute:nova', 'device_id': self.instance2['uuid']}) mocked_client.list_floatingips.assert_called_once_with( floating_ip_address=address) mocked_client.update_floatingip.assert_called_once_with( fip_id, {'floatingip': {'port_id': uuids.portid_1, 'fixed_ip_address': new_fixed_address}}) mocked_client.show_port.assert_called_once_with( self.fip_associated['port_id']) mock_cache_update.assert_has_calls([ mock.call(mock.ANY, mock_get.return_value['uuid'], mock.ANY), mock.call(mock.ANY, instance2['uuid'], mock.ANY)]) self.assertEqual(2, mock_cache_update.call_count) 
mock_get_nw.assert_has_calls([ mock.call(mock.ANY, mock_get.return_value), mock.call(mock.ANY, instance2)]) self.assertEqual(2, mock_get_nw.call_count) @mock.patch.object(neutronapi, 'get_client') def test_associate_floating_ip_not_found_fixed_ip(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client instance = self._fake_instance_object(self.instance) address = self.fip_associated['floating_ip_address'] fixed_address = self.fip_associated['fixed_ip_address'] mocked_client.list_ports.return_value = {'ports': [self.port_data2[0]]} self.assertRaises(exception.FixedIpNotFoundForAddress, self.api.associate_floating_ip, self.context, instance, address, fixed_address) mock_get_client.assert_called_once_with(self.context) mocked_client.list_ports.assert_called_once_with( **{'device_owner': 'compute:nova', 'device_id': self.instance['uuid']}) @mock.patch.object(neutronapi.API, '_get_instance_nw_info', return_value=model.NetworkInfo()) @mock.patch.object(db_api, 'instance_info_cache_update', return_value=fake_info_cache) @mock.patch.object(neutronapi, 'get_client') def test_disassociate_floating_ip(self, mock_get_client, mock_cache_update, mock_get_nw): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client instance = self._fake_instance_object(self.instance) address = self.fip_associated['floating_ip_address'] fip_id = self.fip_associated['id'] mocked_client.list_floatingips.return_value = { 'floatingips': [self.fip_associated]} self.api.disassociate_floating_ip(self.context, instance, address) mock_get_client.assert_called_once_with(self.context) mocked_client.list_floatingips.assert_called_once_with( floating_ip_address=address) mocked_client.update_floatingip.assert_called_once_with( fip_id, {'floatingip': {'port_id': None}}) mock_cache_update.assert_called_once_with(mock.ANY, instance['uuid'], mock.ANY) mock_get_nw.assert_called_once_with(mock.ANY, instance) @mock.patch.object(neutronapi.API, '_get_instance_nw_info', return_value=model.NetworkInfo()) @mock.patch.object(db_api, 'instance_info_cache_update', return_value=fake_info_cache) @mock.patch.object(neutronapi, 'get_client') def test_add_fixed_ip_to_instance(self, mock_get_client, mock_cache_update, mock_get_nw): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client instance = self._fake_instance_object(self.instance) network_id = uuids.my_netid1 mocked_client.list_subnets.return_value = { 'subnets': self.subnet_data_n} mocked_client.list_ports.return_value = {'ports': self.port_data1} port_req_body = { 'port': { 'fixed_ips': [{'subnet_id': 'my_subid1'}, {'subnet_id': 'my_subid1'}], }, } port = self.port_data1[0] port['fixed_ips'] = [{'subnet_id': 'my_subid1'}] mocked_client.update_port.return_value = {'port': port} self.api.add_fixed_ip_to_instance(self.context, instance, network_id) mock_get_client.assert_called_once_with(self.context) mocked_client.list_subnets.assert_called_once_with( network_id=network_id) mocked_client.list_ports.assert_called_once_with( device_id=instance.uuid, device_owner='compute:nova', network_id=network_id) mocked_client.update_port.assert_called_once_with(uuids.portid_1, port_req_body) mock_cache_update.assert_called_once_with(mock.ANY, instance['uuid'], mock.ANY) mock_get_nw.assert_called_once_with(mock.ANY, instance) @mock.patch.object(neutronapi.API, '_get_instance_nw_info', return_value=model.NetworkInfo()) @mock.patch.object(db_api, 
'instance_info_cache_update', return_value=fake_info_cache) @mock.patch.object(neutronapi, 'get_client') def test_remove_fixed_ip_from_instance(self, mock_get_client, mock_cache_update, mock_get_nw): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client instance = self._fake_instance_object(self.instance) address = '10.0.0.3' zone = 'compute:%s' % self.instance['availability_zone'] mocked_client.list_ports.return_value = {'ports': self.port_data1} port_req_body = { 'port': { 'fixed_ips': [], }, } port = self.port_data1[0] port['fixed_ips'] = [] mocked_client.update_port.return_value = {'port': port} self.api.remove_fixed_ip_from_instance(self.context, instance, address) mock_get_client.assert_called_once_with(self.context) mocked_client.list_ports.assert_called_once_with( device_id=self.instance['uuid'], device_owner=zone, fixed_ips='ip_address=%s' % address) mocked_client.update_port.assert_called_once_with(uuids.portid_1, port_req_body) mock_cache_update.assert_called_once_with(mock.ANY, instance['uuid'], mock.ANY) mock_get_nw.assert_called_once_with(mock.ANY, instance) def test_list_floating_ips_without_l3_support(self): mocked_client = mock.create_autospec(client.Client) mocked_client.list_floatingips.side_effect = exceptions.NotFound floatingips = self.api._get_floating_ips_by_fixed_and_port( mocked_client, '1.1.1.1', 1) self.assertEqual([], floatingips) mocked_client.list_floatingips.assert_called_once_with( fixed_ip_address='1.1.1.1', port_id=1) @mock.patch.object(neutronapi.API, '_get_floating_ips_by_fixed_and_port') def test_nw_info_get_ips(self, mock_get_floating): mocked_client = mock.create_autospec(client.Client) fake_port = { 'fixed_ips': [ {'ip_address': '1.1.1.1'}], 'id': 'port-id', } mock_get_floating.return_value = [{'floating_ip_address': '10.0.0.1'}] result = self.api._nw_info_get_ips(mocked_client, fake_port) self.assertEqual(1, len(result)) self.assertEqual('1.1.1.1', result[0]['address']) self.assertEqual('10.0.0.1', result[0]['floating_ips'][0]['address']) mock_get_floating.assert_called_once_with(mocked_client, '1.1.1.1', 'port-id') @mock.patch.object(neutronapi.API, '_get_subnets_from_port') def test_nw_info_get_subnets(self, mock_get_subnets): fake_port = { 'fixed_ips': [ {'ip_address': '1.1.1.1'}, {'ip_address': '2.2.2.2'}], 'id': 'port-id', } fake_subnet = model.Subnet(cidr='1.0.0.0/8') fake_ips = [model.IP(x['ip_address']) for x in fake_port['fixed_ips']] mock_get_subnets.return_value = [fake_subnet] subnets = self.api._nw_info_get_subnets(self.context, fake_port, fake_ips) self.assertEqual(1, len(subnets)) self.assertEqual(1, len(subnets[0]['ips'])) self.assertEqual('1.1.1.1', subnets[0]['ips'][0]['address']) mock_get_subnets.assert_called_once_with(self.context, fake_port, None) @mock.patch.object(neutronapi.API, '_get_physnet_tunneled_info', return_value=(None, False)) @mock.patch.object(neutronapi, 'get_client') def _test_nw_info_build_network(self, vif_type, mock_get_client, mock_get_physnet): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client fake_port = { 'fixed_ips': [{'ip_address': '1.1.1.1'}], 'id': 'port-id', 'network_id': 'net-id', 'binding:vif_type': vif_type, } fake_subnets = [model.Subnet(cidr='1.0.0.0/8')] fake_nets = [{'id': 'net-id', 'name': 'foo', 'tenant_id': 'tenant', 'mtu': 9000}] net, iid = self.api._nw_info_build_network(self.context, fake_port, fake_nets, fake_subnets) self.assertEqual(fake_subnets, net['subnets']) self.assertEqual('net-id', 
net['id']) self.assertEqual('foo', net['label']) self.assertEqual('tenant', net.get_meta('tenant_id')) self.assertEqual(9000, net.get_meta('mtu')) self.assertEqual(CONF.flat_injected, net.get_meta('injected')) mock_get_client.assert_called_once_with(mock.ANY, admin=True) mock_get_physnet.assert_called_once_with(self.context, mocked_client, 'net-id') return net, iid def test_nw_info_build_network_ovs(self): net, iid = self._test_nw_info_build_network(model.VIF_TYPE_OVS) self.assertEqual(CONF.neutron.ovs_bridge, net['bridge']) self.assertNotIn('should_create_bridge', net) self.assertEqual('port-id', iid) def test_nw_info_build_network_dvs(self): net, iid = self._test_nw_info_build_network(model.VIF_TYPE_DVS) self.assertEqual('net-id', net['bridge']) self.assertNotIn('should_create_bridge', net) self.assertNotIn('ovs_interfaceid', net) self.assertIsNone(iid) def test_nw_info_build_network_bridge(self): net, iid = self._test_nw_info_build_network(model.VIF_TYPE_BRIDGE) self.assertEqual('brqnet-id', net['bridge']) self.assertTrue(net['should_create_bridge']) self.assertIsNone(iid) def test_nw_info_build_network_tap(self): net, iid = self._test_nw_info_build_network(model.VIF_TYPE_TAP) self.assertIsNone(net['bridge']) self.assertNotIn('should_create_bridge', net) self.assertIsNone(iid) def test_nw_info_build_network_other(self): net, iid = self._test_nw_info_build_network(None) self.assertIsNone(net['bridge']) self.assertNotIn('should_create_bridge', net) self.assertIsNone(iid) @mock.patch.object(neutronapi.API, '_get_physnet_tunneled_info', return_value=(None, False)) @mock.patch.object(neutronapi, 'get_client') def test_nw_info_build_no_match(self, mock_get_client, mock_get_physnet): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client fake_port = { 'fixed_ips': [{'ip_address': '1.1.1.1'}], 'id': 'port-id', 'network_id': 'net-id1', 'tenant_id': 'tenant', 'binding:vif_type': model.VIF_TYPE_OVS, } fake_subnets = [model.Subnet(cidr='1.0.0.0/8')] fake_nets = [{'id': 'net-id2', 'name': 'foo', 'tenant_id': 'tenant'}] net, iid = self.api._nw_info_build_network(self.context, fake_port, fake_nets, fake_subnets) self.assertEqual(fake_subnets, net['subnets']) self.assertEqual('net-id1', net['id']) self.assertEqual('tenant', net['meta']['tenant_id']) mock_get_client.assert_called_once_with(mock.ANY, admin=True) mock_get_physnet.assert_called_once_with(self.context, mocked_client, 'net-id1') @mock.patch.object(neutronapi.API, '_get_physnet_tunneled_info', return_value=(None, False)) @mock.patch.object(neutronapi, 'get_client') def test_nw_info_build_network_vhostuser(self, mock_get_client, mock_get_physnet): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client fake_port = { 'fixed_ips': [{'ip_address': '1.1.1.1'}], 'id': 'port-id', 'network_id': 'net-id', 'binding:vif_type': model.VIF_TYPE_VHOSTUSER, 'binding:vif_details': { model.VIF_DETAILS_VHOSTUSER_OVS_PLUG: True } } fake_subnets = [model.Subnet(cidr='1.0.0.0/8')] fake_nets = [{'id': 'net-id', 'name': 'foo', 'tenant_id': 'tenant'}] net, iid = self.api._nw_info_build_network(self.context, fake_port, fake_nets, fake_subnets) self.assertEqual(fake_subnets, net['subnets']) self.assertEqual('net-id', net['id']) self.assertEqual('foo', net['label']) self.assertEqual('tenant', net.get_meta('tenant_id')) self.assertEqual(CONF.flat_injected, net.get_meta('injected')) self.assertEqual(CONF.neutron.ovs_bridge, net['bridge']) self.assertNotIn('should_create_bridge', net) 
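        # With VIF_DETAILS_VHOSTUSER_OVS_PLUG set, a vhostuser port appears to
        # be wired like an OVS VIF: the bridge comes from
        # CONF.neutron.ovs_bridge (asserted above) and the port id is used as
        # the OVS interface id (asserted below).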
self.assertEqual('port-id', iid) mock_get_client.assert_called_once_with(mock.ANY, admin=True) mock_get_physnet.assert_called_once_with(self.context, mocked_client, 'net-id') @mock.patch.object(neutronapi.API, '_get_physnet_tunneled_info', return_value=(None, False)) @mock.patch.object(neutronapi, 'get_client') def test_nw_info_build_network_vhostuser_fp(self, mock_get_client, mock_get_physnet): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client fake_port = { 'fixed_ips': [{'ip_address': '1.1.1.1'}], 'id': 'port-id', 'network_id': 'net-id', 'binding:vif_type': model.VIF_TYPE_VHOSTUSER, 'binding:vif_details': { model.VIF_DETAILS_VHOSTUSER_FP_PLUG: True, model.VIF_DETAILS_VHOSTUSER_OVS_PLUG: False, } } fake_subnets = [model.Subnet(cidr='1.0.0.0/8')] fake_nets = [{'id': 'net-id', 'name': 'foo', 'tenant_id': 'tenant'}] net, ovs_interfaceid = self.api._nw_info_build_network( self.context, fake_port, fake_nets, fake_subnets) self.assertEqual(fake_subnets, net['subnets']) self.assertEqual('net-id', net['id']) self.assertEqual('foo', net['label']) self.assertEqual('tenant', net.get_meta('tenant_id')) self.assertEqual('brqnet-id', net['bridge']) self.assertIsNone(ovs_interfaceid) mock_get_client.assert_called_once_with(mock.ANY, admin=True) mock_get_physnet.assert_called_once_with(self.context, mocked_client, 'net-id') @mock.patch.object(neutronapi.API, '_get_physnet_tunneled_info', return_value=(None, False)) @mock.patch.object(neutronapi, 'get_client') def _test_nw_info_build_custom_bridge(self, vif_type, mock_get_client, mock_get_physnet, extra_details=None): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client fake_port = { 'fixed_ips': [{'ip_address': '1.1.1.1'}], 'id': 'port-id', 'network_id': 'net-id', 'binding:vif_type': vif_type, 'binding:vif_details': { model.VIF_DETAILS_BRIDGE_NAME: 'custom-bridge', } } if extra_details: fake_port['binding:vif_details'].update(extra_details) fake_subnets = [model.Subnet(cidr='1.0.0.0/8')] fake_nets = [{'id': 'net-id', 'name': 'foo', 'tenant_id': 'tenant'}] net, iid = self.api._nw_info_build_network(self.context, fake_port, fake_nets, fake_subnets) self.assertNotEqual(CONF.neutron.ovs_bridge, net['bridge']) self.assertEqual('custom-bridge', net['bridge']) mock_get_client.assert_called_once_with(mock.ANY, admin=True) mock_get_physnet.assert_called_once_with(self.context, mocked_client, 'net-id') def test_nw_info_build_custom_ovs_bridge(self): self._test_nw_info_build_custom_bridge(model.VIF_TYPE_OVS) def test_nw_info_build_custom_ovs_bridge_vhostuser(self): self._test_nw_info_build_custom_bridge(model.VIF_TYPE_VHOSTUSER, extra_details={model.VIF_DETAILS_VHOSTUSER_OVS_PLUG: True}) def test_nw_info_build_custom_lb_bridge(self): self._test_nw_info_build_custom_bridge(model.VIF_TYPE_BRIDGE) @mock.patch.object(neutronapi.API, '_get_physnet_tunneled_info', return_value=(None, False)) @mock.patch.object(neutronapi.API, '_get_preexisting_port_ids', return_value=['port5']) @mock.patch.object(neutronapi.API, '_get_subnets_from_port', return_value=[model.Subnet(cidr='1.0.0.0/8')]) @mock.patch.object(neutronapi.API, '_get_floating_ips_by_fixed_and_port', return_value=[{'floating_ip_address': '10.0.0.1'}]) @mock.patch.object(neutronapi, 'get_client') def test_build_network_info_model(self, mock_get_client, mock_get_floating, mock_get_subnets, mock_get_preexisting, mock_get_physnet): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client 
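        # Fixture overview (see the per-port comments below): judging by the
        # assertions at the end of this test, a VIF is reported active when
        # the port status is ACTIVE or when admin_state_up is False, while an
        # admin-up but DOWN port is inactive; ports on networks that are not
        # passed in are ignored, and ports listed as pre-existing keep
        # preserve_on_delete=True.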
fake_inst = objects.Instance() fake_inst.project_id = uuids.fake fake_inst.uuid = uuids.instance fake_inst.info_cache = objects.InstanceInfoCache() fake_inst.info_cache.network_info = model.NetworkInfo() fake_ports = [ # admin_state_up=True and status='ACTIVE' thus vif.active=True {'id': 'port1', 'network_id': 'net-id', 'admin_state_up': True, 'status': 'ACTIVE', 'fixed_ips': [{'ip_address': '1.1.1.1'}], 'mac_address': 'de:ad:be:ef:00:01', 'binding:vif_type': model.VIF_TYPE_BRIDGE, 'binding:vnic_type': model.VNIC_TYPE_NORMAL, 'binding:vif_details': {}, }, # admin_state_up=False and status='DOWN' thus vif.active=True {'id': 'port2', 'network_id': 'net-id', 'admin_state_up': False, 'status': 'DOWN', 'fixed_ips': [{'ip_address': '1.1.1.1'}], 'mac_address': 'de:ad:be:ef:00:02', 'binding:vif_type': model.VIF_TYPE_BRIDGE, 'binding:vnic_type': model.VNIC_TYPE_NORMAL, 'binding:vif_details': {}, }, # admin_state_up=True and status='DOWN' thus vif.active=False {'id': 'port0', 'network_id': 'net-id', 'admin_state_up': True, 'status': 'DOWN', 'fixed_ips': [{'ip_address': '1.1.1.1'}], 'mac_address': 'de:ad:be:ef:00:03', 'binding:vif_type': model.VIF_TYPE_BRIDGE, 'binding:vnic_type': model.VNIC_TYPE_NORMAL, 'binding:vif_details': {}, }, # admin_state_up=True and status='ACTIVE' thus vif.active=True {'id': 'port3', 'network_id': 'net-id', 'admin_state_up': True, 'status': 'ACTIVE', 'fixed_ips': [{'ip_address': '1.1.1.1'}], 'mac_address': 'de:ad:be:ef:00:04', 'binding:vif_type': model.VIF_TYPE_HW_VEB, 'binding:vnic_type': model.VNIC_TYPE_DIRECT, constants.BINDING_PROFILE: {'pci_vendor_info': '1137:0047', 'pci_slot': '0000:0a:00.1', 'physical_network': 'physnet1'}, 'binding:vif_details': {model.VIF_DETAILS_PROFILEID: 'pfid'}, }, # admin_state_up=True and status='ACTIVE' thus vif.active=True {'id': 'port4', 'network_id': 'net-id', 'admin_state_up': True, 'status': 'ACTIVE', 'fixed_ips': [{'ip_address': '1.1.1.1'}], 'mac_address': 'de:ad:be:ef:00:05', 'binding:vif_type': model.VIF_TYPE_802_QBH, 'binding:vnic_type': model.VNIC_TYPE_MACVTAP, constants.BINDING_PROFILE: {'pci_vendor_info': '1137:0047', 'pci_slot': '0000:0a:00.2', 'physical_network': 'physnet1'}, 'binding:vif_details': {model.VIF_DETAILS_PROFILEID: 'pfid'}, }, # admin_state_up=True and status='ACTIVE' thus vif.active=True # This port has no binding:vnic_type to verify default is assumed {'id': 'port5', 'network_id': 'net-id', 'admin_state_up': True, 'status': 'ACTIVE', 'fixed_ips': [{'ip_address': '1.1.1.1'}], 'mac_address': 'de:ad:be:ef:00:06', 'binding:vif_type': model.VIF_TYPE_BRIDGE, # No binding:vnic_type 'binding:vif_details': {}, }, # This does not match the networks we provide below, # so it should be ignored (and is here to verify that) {'id': 'port6', 'network_id': 'other-net-id', 'admin_state_up': True, 'status': 'DOWN', 'binding:vnic_type': model.VNIC_TYPE_NORMAL, }, ] fake_nets = [ {'id': 'net-id', 'name': 'foo', 'tenant_id': uuids.fake, } ] mocked_client.list_ports.return_value = {'ports': fake_ports} requested_ports = [fake_ports[2], fake_ports[0], fake_ports[1], fake_ports[3], fake_ports[4], fake_ports[5]] expected_get_floating_calls = [] for requested_port in requested_ports: expected_get_floating_calls.append(mock.call(mocked_client, '1.1.1.1', requested_port['id'])) expected_get_subnets_calls = [] for requested_port in requested_ports: expected_get_subnets_calls.append( mock.call(self.context, requested_port, mocked_client)) fake_inst.info_cache = objects.InstanceInfoCache.new( self.context, uuids.instance) 
fake_inst.info_cache.network_info = model.NetworkInfo.hydrate([]) nw_infos = self.api._build_network_info_model( self.context, fake_inst, fake_nets, [fake_ports[2]['id'], fake_ports[0]['id'], fake_ports[1]['id'], fake_ports[3]['id'], fake_ports[4]['id'], fake_ports[5]['id']], preexisting_port_ids=['port3']) self.assertEqual(6, len(nw_infos)) index = 0 for nw_info in nw_infos: self.assertEqual(requested_ports[index]['mac_address'], nw_info['address']) self.assertEqual('tapport' + str(index), nw_info['devname']) self.assertIsNone(nw_info['ovs_interfaceid']) self.assertEqual(requested_ports[index]['binding:vif_type'], nw_info['type']) if nw_info['type'] == model.VIF_TYPE_BRIDGE: self.assertEqual('brqnet-id', nw_info['network']['bridge']) self.assertEqual(requested_ports[index].get('binding:vnic_type', model.VNIC_TYPE_NORMAL), nw_info['vnic_type']) self.assertEqual(requested_ports[index].get('binding:vif_details'), nw_info.get('details')) self.assertEqual( # If the requested port does not define a binding:profile, or # has it set to None, we default to an empty dict to avoid # NoneType errors. requested_ports[index].get( constants.BINDING_PROFILE) or {}, nw_info.get('profile')) index += 1 self.assertFalse(nw_infos[0]['active']) self.assertTrue(nw_infos[1]['active']) self.assertTrue(nw_infos[2]['active']) self.assertTrue(nw_infos[3]['active']) self.assertTrue(nw_infos[4]['active']) self.assertTrue(nw_infos[5]['active']) self.assertEqual('port0', nw_infos[0]['id']) self.assertEqual('port1', nw_infos[1]['id']) self.assertEqual('port2', nw_infos[2]['id']) self.assertEqual('port3', nw_infos[3]['id']) self.assertEqual('port4', nw_infos[4]['id']) self.assertEqual('port5', nw_infos[5]['id']) self.assertFalse(nw_infos[0]['preserve_on_delete']) self.assertFalse(nw_infos[1]['preserve_on_delete']) self.assertFalse(nw_infos[2]['preserve_on_delete']) self.assertTrue(nw_infos[3]['preserve_on_delete']) self.assertFalse(nw_infos[4]['preserve_on_delete']) self.assertTrue(nw_infos[5]['preserve_on_delete']) mock_get_client.assert_has_calls([ mock.call(self.context, admin=True)] * 7, any_order=True) mocked_client.list_ports.assert_called_once_with( tenant_id=uuids.fake, device_id=uuids.instance) mock_get_floating.assert_has_calls(expected_get_floating_calls) self.assertEqual(len(expected_get_floating_calls), mock_get_floating.call_count) mock_get_subnets.assert_has_calls(expected_get_subnets_calls) self.assertEqual(len(expected_get_subnets_calls), mock_get_subnets.call_count) mock_get_preexisting.assert_called_once_with(fake_inst) mock_get_physnet.assert_has_calls([ mock.call(self.context, mocked_client, 'net-id')] * 6) @mock.patch.object(neutronapi, 'get_client') @mock.patch('nova.network.neutron.API._nw_info_get_subnets') @mock.patch('nova.network.neutron.API._nw_info_get_ips') @mock.patch('nova.network.neutron.API._nw_info_build_network') @mock.patch('nova.network.neutron.API._get_preexisting_port_ids') @mock.patch('nova.network.neutron.API._gather_port_ids_and_networks') def test_build_network_info_model_empty( self, mock_gather_port_ids_and_networks, mock_get_preexisting_port_ids, mock_nw_info_build_network, mock_nw_info_get_ips, mock_nw_info_get_subnets, mock_get_client): # An empty instance info network cache should not be populated from # ports found in Neutron. 
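        # Even though list_ports below returns one port for the instance, the
        # expectation (asserted at the end) is an empty NetworkInfo: with
        # nothing requested and nothing already cached there is no VIF to
        # refresh, so the Neutron port alone does not repopulate the cache.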
mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client fake_inst = objects.Instance() fake_inst.project_id = uuids.fake fake_inst.uuid = uuids.instance fake_inst.info_cache = objects.InstanceInfoCache() fake_inst.info_cache.network_info = model.NetworkInfo() fake_ports = [ # admin_state_up=True and status='ACTIVE' thus vif.active=True {'id': 'port1', 'network_id': 'net-id', 'admin_state_up': True, 'status': 'ACTIVE', 'fixed_ips': [{'ip_address': '1.1.1.1'}], 'mac_address': 'de:ad:be:ef:00:01', 'binding:vif_type': model.VIF_TYPE_BRIDGE, 'binding:vnic_type': model.VNIC_TYPE_NORMAL, 'binding:vif_details': {}, }, ] fake_subnets = [model.Subnet(cidr='1.0.0.0/8')] mocked_client.list_ports.return_value = {'ports': fake_ports} mock_gather_port_ids_and_networks.return_value = ([], []) mock_get_preexisting_port_ids.return_value = [] mock_nw_info_build_network.return_value = (None, None) mock_nw_info_get_ips.return_value = [] mock_nw_info_get_subnets.return_value = fake_subnets nw_infos = self.api._build_network_info_model( self.context, fake_inst) self.assertEqual(0, len(nw_infos)) mock_get_client.assert_called_once_with(self.context, admin=True) mocked_client.list_ports.assert_called_once_with( tenant_id=uuids.fake, device_id=uuids.instance) @mock.patch.object(neutronapi, 'get_client') def test_get_subnets_from_port(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client port_data = copy.copy(self.port_data1[0]) # add another IP on the same subnet and verify the subnet is deduped port_data['fixed_ips'].append({'ip_address': '10.0.1.3', 'subnet_id': 'my_subid1'}) subnet_data1 = copy.copy(self.subnet_data1) subnet_data1[0]['host_routes'] = [ {'destination': '192.168.0.0/24', 'nexthop': '1.0.0.10'} ] mocked_client.list_subnets.return_value = {'subnets': subnet_data1} mocked_client.list_ports.return_value = {'ports': []} subnets = self.api._get_subnets_from_port(self.context, port_data) self.assertEqual(1, len(subnets)) self.assertEqual(1, len(subnets[0]['routes'])) self.assertEqual(subnet_data1[0]['host_routes'][0]['destination'], subnets[0]['routes'][0]['cidr']) self.assertEqual(subnet_data1[0]['host_routes'][0]['nexthop'], subnets[0]['routes'][0]['gateway']['address']) mock_get_client.assert_called_once_with(self.context) mocked_client.list_subnets.assert_called_once_with( id=[port_data['fixed_ips'][0]['subnet_id']]) mocked_client.list_ports.assert_called_once_with( network_id=subnet_data1[0]['network_id'], device_owner='network:dhcp') @mock.patch.object(neutronapi, 'get_client') def test_get_subnets_from_port_enabled_dhcp(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client port_data = copy.copy(self.port_data1[0]) # add another IP on the same subnet and verify the subnet is deduped port_data['fixed_ips'].append({'ip_address': '10.0.1.3', 'subnet_id': 'my_subid1'}) subnet_data1 = copy.copy(self.subnet_data1) subnet_data1[0]['enable_dhcp'] = True mocked_client.list_subnets.return_value = {'subnets': subnet_data1} mocked_client.list_ports.return_value = {'ports': self.dhcp_port_data1} subnets = self.api._get_subnets_from_port(self.context, port_data) self.assertEqual(self.dhcp_port_data1[0]['fixed_ips'][0]['ip_address'], subnets[0]['meta']['dhcp_server']) @mock.patch.object(neutronapi, 'get_client') def test_get_subnets_from_port_enabled_dhcp_no_dhcp_ports(self, mock_get_client): mocked_client = 
mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client port_data = copy.copy(self.port_data1[0]) # add another IP on the same subnet and verify the subnet is deduped port_data['fixed_ips'].append({'ip_address': '10.0.1.3', 'subnet_id': 'my_subid1'}) subnet_data1 = copy.copy(self.subnet_data1) subnet_data1[0]['enable_dhcp'] = True mocked_client.list_subnets.return_value = {'subnets': subnet_data1} mocked_client.list_ports.return_value = {'ports': []} subnets = self.api._get_subnets_from_port(self.context, port_data) self.assertEqual(subnet_data1[0]['gateway_ip'], subnets[0]['meta']['dhcp_server']) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_get_physnet_tunneled_info_multi_segment(self, mock_get_client): test_net = {'network': {'segments': [{'provider:physical_network': 'physnet10', 'provider:segmentation_id': 1000, 'provider:network_type': 'vlan'}, {'provider:physical_network': None, 'provider:segmentation_id': 153, 'provider:network_type': 'vxlan'}]}} test_ext_list = {'extensions': [{'name': 'Multi Provider Network', 'alias': 'multi-segments'}]} mock_client = mock_get_client.return_value mock_client.list_extensions.return_value = test_ext_list mock_client.show_network.return_value = test_net physnet_name, tunneled = self.api._get_physnet_tunneled_info( self.context, mock_client, 'test-net') mock_client.show_network.assert_called_once_with( 'test-net', fields='segments') self.assertEqual('physnet10', physnet_name) self.assertFalse(tunneled) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_get_physnet_tunneled_info_vlan_with_multi_segment_ext( self, mock_get_client): test_net = {'network': {'provider:physical_network': 'physnet10', 'provider:segmentation_id': 1000, 'provider:network_type': 'vlan'}} test_ext_list = {'extensions': [{'name': 'Multi Provider Network', 'alias': 'multi-segments'}]} mock_client = mock_get_client.return_value mock_client.list_extensions.return_value = test_ext_list mock_client.show_network.return_value = test_net physnet_name, tunneled = self.api._get_physnet_tunneled_info( self.context, mock_client, 'test-net') mock_client.show_network.assert_called_with( 'test-net', fields=['provider:physical_network', 'provider:network_type']) self.assertEqual('physnet10', physnet_name) self.assertFalse(tunneled) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_get_physnet_tunneled_info_multi_segment_no_physnet( self, mock_get_client): test_net = {'network': {'segments': [{'provider:physical_network': None, 'provider:segmentation_id': 1000, 'provider:network_type': 'vlan'}, {'provider:physical_network': None, 'provider:segmentation_id': 153, 'provider:network_type': 'vlan'}]}} test_ext_list = {'extensions': [{'name': 'Multi Provider Network', 'alias': 'multi-segments'}]} mock_client = mock_get_client.return_value mock_client.list_extensions.return_value = test_ext_list mock_client.show_network.return_value = test_net self.assertRaises(exception.NovaException, self.api._get_physnet_tunneled_info, self.context, mock_client, 'test-net') @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_get_physnet_tunneled_info_tunneled( self, mock_get_client): test_net = {'network': {'provider:network_type': 'vxlan'}} test_ext_list = {'extensions': []} mock_client = mock_get_client.return_value mock_client.list_extensions.return_value = test_ext_list mock_client.show_network.return_value = test_net physnet_name, tunneled = 
self.api._get_physnet_tunneled_info( self.context, mock_client, 'test-net') mock_client.show_network.assert_called_once_with( 'test-net', fields=['provider:physical_network', 'provider:network_type']) self.assertTrue(tunneled) self.assertIsNone(physnet_name) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_get_phynet_tunneled_info_non_tunneled( self, mock_get_client): test_net = {'network': {'provider:network_type': 'vlan'}} test_ext_list = {'extensions': []} mock_client = mock_get_client.return_value mock_client.list_extensions.return_value = test_ext_list mock_client.show_network.return_value = test_net physnet_name, tunneled = self.api._get_physnet_tunneled_info( self.context, mock_client, 'test-net') mock_client.show_network.assert_called_once_with( 'test-net', fields=['provider:physical_network', 'provider:network_type']) self.assertFalse(tunneled) self.assertIsNone(physnet_name) def _test_get_port_vnic_info(self, mock_get_client, binding_vnic_type, expected_vnic_type, port_resource_request=None): test_port = { 'port': {'id': 'my_port_id2', 'network_id': 'net-id', }, } if binding_vnic_type: test_port['port']['binding:vnic_type'] = binding_vnic_type if port_resource_request: test_port['port'][ constants.RESOURCE_REQUEST] = port_resource_request mock_get_client.reset_mock() mock_client = mock_get_client.return_value mock_client.show_port.return_value = test_port vnic_type, trusted, network_id, resource_request = ( self.api._get_port_vnic_info( self.context, mock_client, test_port['port']['id'])) mock_client.show_port.assert_called_once_with(test_port['port']['id'], fields=['binding:vnic_type', 'binding:profile', 'network_id', constants.RESOURCE_REQUEST]) self.assertEqual(expected_vnic_type, vnic_type) self.assertEqual('net-id', network_id) self.assertIsNone(trusted) self.assertEqual(port_resource_request, resource_request) @mock.patch.object(neutronapi, 'get_client', return_value=mock.MagicMock()) def test_get_port_vnic_info_1(self, mock_get_client): self._test_get_port_vnic_info(mock_get_client, model.VNIC_TYPE_DIRECT, model.VNIC_TYPE_DIRECT) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_get_port_vnic_info_2(self, mock_get_client): self._test_get_port_vnic_info(mock_get_client, model.VNIC_TYPE_NORMAL, model.VNIC_TYPE_NORMAL) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_get_port_vnic_info_3(self, mock_get_client): self._test_get_port_vnic_info(mock_get_client, None, model.VNIC_TYPE_NORMAL) @mock.patch.object(neutronapi, 'get_client') def test_get_port_vnic_info_requested_resources(self, mock_get_client): self._test_get_port_vnic_info( mock_get_client, None, model.VNIC_TYPE_NORMAL, port_resource_request={ "resources": { "NET_BW_EGR_KILOBIT_PER_SEC": 6000, "NET_BW_IGR_KILOBIT_PER_SEC": 6000, }, "required": [ "CUSTOM_PHYSNET_PHYSNET0", "CUSTOM_VNIC_TYPE_NORMAL" ] } ) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_get_port_vnic_info_trusted(self, mock_get_client): test_port = { 'port': {'id': 'my_port_id1', 'network_id': 'net-id', 'binding:vnic_type': model.VNIC_TYPE_DIRECT, 'binding:profile': {"trusted": "Yes"}, }, } test_ext_list = {'extensions': []} mock_client = mock_get_client.return_value mock_client.show_port.return_value = test_port mock_client.list_extensions.return_value = test_ext_list result = self.api._get_port_vnic_info( self.context, mock_client, test_port['port']['id']) vnic_type, trusted, network_id, resource_requests = result 
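        # The 'trusted' flag is taken from the port's binding:profile; the
        # string value "Yes" used above is evidently coerced to a boolean,
        # hence the assertTrue(trusted) below.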
mock_client.show_port.assert_called_once_with(test_port['port']['id'], fields=['binding:vnic_type', 'binding:profile', 'network_id', constants.RESOURCE_REQUEST]) self.assertEqual(model.VNIC_TYPE_DIRECT, vnic_type) self.assertEqual('net-id', network_id) self.assertTrue(trusted) self.assertIsNone(resource_requests) @mock.patch('nova.network.neutron.API._show_port') def test_deferred_ip_port_immediate_allocation(self, mock_show): port = {'network_id': 'my_netid1', 'device_id': None, 'id': uuids.port, 'fixed_ips': [], # no fixed ip 'ip_allocation': 'immediate', } mock_show.return_value = port requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(port_id=port['id'])]) self.assertRaises(exception.PortRequiresFixedIP, self.api.validate_networks, self.context, requested_networks, 1) @mock.patch('nova.network.neutron.API._show_port') def test_deferred_ip_port_deferred_allocation(self, mock_show): port = {'network_id': 'my_netid1', 'device_id': None, 'id': uuids.port, 'fixed_ips': [], # no fixed ip 'ip_allocation': 'deferred', } mock_show.return_value = port requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(port_id=port['id'])]) count = self.api.validate_networks(self.context, requested_networks, 1) self.assertEqual(1, count) @mock.patch('oslo_concurrency.lockutils.lock') def test_get_instance_nw_info_locks_per_instance(self, mock_lock): instance = objects.Instance(uuid=uuids.fake) api = neutronapi.API() mock_lock.side_effect = test.TestingException self.assertRaises(test.TestingException, api.get_instance_nw_info, 'context', instance) mock_lock.assert_called_once_with('refresh_cache-%s' % instance.uuid) @mock.patch('nova.network.neutron.LOG') def test_get_instance_nw_info_verify_duplicates_ignored(self, mock_log): """Test that the networks & port_ids returned from _gather_port_ids_and_networks don't contain any duplicates The test fakes an instance with two ports connected to two networks. The _gather_port_ids_and_networks method will be called with the instance and a list of port ids of which one port id is already configured on the instance (== duplicate #1) and a list of networks that already contains a network to which an instance port is connected (== duplicate #2). 
All-in-all, we expect the resulting port ids list to contain 3 items (["instance_port_1", "port_1", "port_2"]) and the resulting networks list to contain 3 items (["net_1", "net_2", "instance_network_1"]) while the warning message for duplicate items was executed twice (due to "duplicate #1" & "duplicate #2") """ networks = [model.Network(id="net_1"), model.Network(id="net_2")] port_ids = ["port_1", "port_2"] instance_networks = [{"id": "instance_network_1", "name": "fake_network", "tenant_id": "fake_tenant_id"}] instance_port_ids = ["instance_port_1"] network_info = model.NetworkInfo( [{'id': port_ids[0], 'network': networks[0]}, {'id': instance_port_ids[0], 'network': model.Network( id=instance_networks[0]["id"], label=instance_networks[0]["name"], meta={"tenant_id": instance_networks[0]["tenant_id"]})}] ) instance_uuid = uuids.fake instance = objects.Instance(uuid=instance_uuid, info_cache=objects.InstanceInfoCache( context=self.context, instance_uuid=instance_uuid, network_info=network_info)) new_networks, new_port_ids = self.api._gather_port_ids_and_networks( self.context, instance, networks, port_ids) self.assertEqual(new_networks, networks + instance_networks) self.assertEqual(new_port_ids, instance_port_ids + port_ids) self.assertEqual(2, mock_log.warning.call_count) @mock.patch('oslo_concurrency.lockutils.lock') @mock.patch.object(neutronapi.API, '_get_instance_nw_info') @mock.patch('nova.network.neutron.update_instance_cache_with_nw_info') def test_get_instance_nw_info(self, mock_update, mock_get, mock_lock): fake_result = mock.sentinel.get_nw_info_result mock_get.return_value = fake_result instance = fake_instance.fake_instance_obj(self.context) result = self.api.get_instance_nw_info(self.context, instance) mock_get.assert_called_once_with(self.context, instance) mock_update.assert_called_once_with(self.api, self.context, instance, nw_info=fake_result) self.assertEqual(fake_result, result) def _test_validate_networks_fixed_ip_no_dup(self, nets, requested_networks, ids, list_port_values): def _fake_list_ports(**search_opts): for args, return_value in list_port_values: if args == search_opts: return return_value self.fail('Unexpected call to list_ports %s' % search_opts) with test.nested( mock.patch.object(client.Client, 'list_ports', side_effect=_fake_list_ports), mock.patch.object(client.Client, 'list_networks', return_value={'networks': nets}), mock.patch.object(client.Client, 'show_quota', return_value={'quota': {'port': 50}})) as ( list_ports_mock, list_networks_mock, show_quota_mock): self.api.validate_networks(self.context, requested_networks, 1) self.assertEqual(len(list_port_values), len(list_ports_mock.call_args_list)) list_networks_mock.assert_called_once_with(id=ids) show_quota_mock.assert_called_once_with(uuids.my_tenant) def test_validate_networks_over_limit_quota(self): """Test validates that a relevant exception is being raised when there are more ports defined, than there is a quota for it. 
""" requested_networks = [(uuids.my_netid1, '10.0.1.2', None, None), (uuids.my_netid2, '10.0.1.3', None, None)] list_port_values = [({'network_id': uuids.my_netid1, 'fixed_ips': 'ip_address=10.0.1.2', 'fields': 'device_id'}, {'ports': []}), ({'network_id': uuids.my_netid2, 'fixed_ips': 'ip_address=10.0.1.3', 'fields': 'device_id'}, {'ports': []}), ({'tenant_id': uuids.my_tenant, 'fields': ['id']}, {'ports': [1, 2, 3, 4, 5]})] nets = [{'subnets': '1'}, {'subnets': '2'}] def _fake_list_ports(**search_opts): for args, return_value in list_port_values: if args == search_opts: return return_value with test.nested( mock.patch.object(self.api, '_get_available_networks', return_value=nets), mock.patch.object(client.Client, 'list_ports', side_effect=_fake_list_ports), mock.patch.object(client.Client, 'show_quota', return_value={'quota': {'port': 1}})): exc = self.assertRaises(exception.PortLimitExceeded, self.api.validate_networks, self.context, requested_networks, 1) expected_exception_msg = ('The number of defined ports: ' '%(ports)d is over the limit: ' '%(quota)d' % {'ports': 5, 'quota': 1}) self.assertEqual(expected_exception_msg, str(exc)) def test_validate_networks_fixed_ip_no_dup1(self): # Test validation for a request for a network with a # fixed ip that is not already in use because no fixed ips in use nets1 = [{'id': uuids.my_netid1, 'name': 'my_netname1', 'subnets': ['mysubnid1'], 'tenant_id': uuids.my_tenant}] requested_networks = [(uuids.my_netid1, '10.0.1.2', None, None)] ids = [uuids.my_netid1] list_port_values = [({'network_id': uuids.my_netid1, 'fixed_ips': 'ip_address=10.0.1.2', 'fields': 'device_id'}, {'ports': []}), ({'tenant_id': uuids.my_tenant, 'fields': ['id']}, {'ports': []})] self._test_validate_networks_fixed_ip_no_dup(nets1, requested_networks, ids, list_port_values) def test_validate_networks_fixed_ip_no_dup2(self): # Test validation for a request for a network with a # fixed ip that is not already in use because not used on this net id nets2 = [{'id': uuids.my_netid1, 'name': 'my_netname1', 'subnets': ['mysubnid1'], 'tenant_id': uuids.my_tenant}, {'id': uuids.my_netid2, 'name': 'my_netname2', 'subnets': ['mysubnid2'], 'tenant_id': uuids.my_tenant}] requested_networks = [(uuids.my_netid1, '10.0.1.2', None, None), (uuids.my_netid2, '10.0.1.3', None, None)] ids = [uuids.my_netid1, uuids.my_netid2] list_port_values = [({'network_id': uuids.my_netid1, 'fixed_ips': 'ip_address=10.0.1.2', 'fields': 'device_id'}, {'ports': []}), ({'network_id': uuids.my_netid2, 'fixed_ips': 'ip_address=10.0.1.3', 'fields': 'device_id'}, {'ports': []}), ({'tenant_id': uuids.my_tenant, 'fields': ['id']}, {'ports': []})] self._test_validate_networks_fixed_ip_no_dup(nets2, requested_networks, ids, list_port_values) def test_validate_networks_fixed_ip_dup(self): # Test validation for a request for a network with a # fixed ip that is already in use requested_networks = [(uuids.my_netid1, '10.0.1.2', None, None)] list_port_mock_params = {'network_id': uuids.my_netid1, 'fixed_ips': 'ip_address=10.0.1.2', 'fields': 'device_id'} list_port_mock_return = {'ports': [({'device_id': 'my_deviceid'})]} with mock.patch.object(client.Client, 'list_ports', return_value=list_port_mock_return) as ( list_ports_mock): self.assertRaises(exception.FixedIpAlreadyInUse, self.api.validate_networks, self.context, requested_networks, 1) list_ports_mock.assert_called_once_with(**list_port_mock_params) def test_allocate_floating_ip_exceed_limit(self): # Verify that the correct exception is thrown when quota exceed pool_name 
= 'dummy' api = neutronapi.API() with test.nested( mock.patch.object(client.Client, 'create_floatingip'), mock.patch.object(api, '_get_floating_ip_pool_id_by_name_or_id')) as ( create_mock, get_mock): create_mock.side_effect = exceptions.OverQuotaClient() self.assertRaises(exception.FloatingIpLimitExceeded, api.allocate_floating_ip, self.context, pool_name) def test_allocate_floating_ip_no_ipv4_subnet(self): api = neutronapi.API() net_id = uuids.fake error_msg = ('Bad floatingip request: Network %s does not contain ' 'any IPv4 subnet' % net_id) with test.nested( mock.patch.object(client.Client, 'create_floatingip'), mock.patch.object(api, '_get_floating_ip_pool_id_by_name_or_id')) as ( create_mock, get_mock): create_mock.side_effect = exceptions.BadRequest(error_msg) self.assertRaises(exception.FloatingIpBadRequest, api.allocate_floating_ip, self.context, 'ext_net') @mock.patch('nova.network.neutron.get_client') @mock.patch('nova.network.neutron.API._get_floating_ip_by_address', return_value={'port_id': None, 'id': 'abc'}) def test_release_floating_ip(self, mock_get_ip, mock_ntrn): """Validate default behavior.""" mock_nc = mock.Mock() mock_ntrn.return_value = mock_nc address = '172.24.4.227' self.api.release_floating_ip(self.context, address) mock_ntrn.assert_called_once_with(self.context) mock_get_ip.assert_called_once_with(mock_nc, address) mock_nc.delete_floatingip.assert_called_once_with('abc') @mock.patch('nova.network.neutron.get_client') @mock.patch('nova.network.neutron.API._get_floating_ip_by_address', return_value={'port_id': 'abc', 'id': 'abc'}) def test_release_floating_ip_associated(self, mock_get_ip, mock_ntrn): """Ensure release fails if a port is still associated with it. If the floating IP has a port associated with it, as indicated by a configured port_id, then any attempt to release should fail. """ mock_nc = mock.Mock() mock_ntrn.return_value = mock_nc address = '172.24.4.227' self.assertRaises(exception.FloatingIpAssociated, self.api.release_floating_ip, self.context, address) @mock.patch('nova.network.neutron.get_client') @mock.patch('nova.network.neutron.API._get_floating_ip_by_address', return_value={'port_id': None, 'id': 'abc'}) def test_release_floating_ip_not_found(self, mock_get_ip, mock_ntrn): """Ensure neutron's NotFound exception is correctly handled. Sometimes, trying to delete a floating IP multiple times in a short delay can trigger an exception because the operation is not atomic. If neutronclient's call to delete fails with a NotFound error, then we should correctly handle this. 
""" mock_nc = mock.Mock() mock_ntrn.return_value = mock_nc mock_nc.delete_floatingip.side_effect = exceptions.NotFound() address = '172.24.4.227' self.assertRaises(exception.FloatingIpNotFoundForAddress, self.api.release_floating_ip, self.context, address) @mock.patch.object(client.Client, 'create_port') def test_create_port_minimal_raise_no_more_ip(self, create_port_mock): instance = fake_instance.fake_instance_obj(self.context) create_port_mock.side_effect = \ exceptions.IpAddressGenerationFailureClient() self.assertRaises(exception.NoMoreFixedIps, self.api._create_port_minimal, neutronapi.get_client(self.context), instance, uuids.my_netid1) self.assertTrue(create_port_mock.called) @mock.patch.object(client.Client, 'update_port', side_effect=exceptions.MacAddressInUseClient()) def test_update_port_for_instance_mac_address_in_use(self, update_port_mock): port_uuid = uuids.port instance = objects.Instance(uuid=uuids.instance) port_req_body = {'port': { 'id': port_uuid, 'mac_address': 'XX:XX:XX:XX:XX:XX', 'network_id': uuids.network_id}} self.assertRaises(exception.PortInUse, self.api._update_port, neutronapi.get_client(self.context), instance, port_uuid, port_req_body) update_port_mock.assert_called_once_with(port_uuid, port_req_body) @mock.patch.object(client.Client, 'update_port', side_effect=exceptions.HostNotCompatibleWithFixedIpsClient()) def test_update_port_for_instance_fixed_ips_invalid(self, update_port_mock): port_uuid = uuids.port instance = objects.Instance(uuid=uuids.instance) port_req_body = {'port': { 'id': port_uuid, 'mac_address': 'XX:XX:XX:XX:XX:XX', 'network_id': uuids.network_id}} self.assertRaises(exception.FixedIpInvalidOnHost, self.api._update_port, neutronapi.get_client(self.context), instance, port_uuid, port_req_body) update_port_mock.assert_called_once_with(port_uuid, port_req_body) @mock.patch.object(client.Client, 'update_port') def test_update_port_for_instance_binding_failure(self, update_port_mock): port_uuid = uuids.port instance = objects.Instance(uuid=uuids.instance) port_req_body = {'port': { 'id': port_uuid, 'mac_address': 'XX:XX:XX:XX:XX:XX', 'network_id': uuids.network_id}} update_port_mock.return_value = {'port': { 'id': port_uuid, 'binding:vif_type': model.VIF_TYPE_BINDING_FAILED }} self.assertRaises(exception.PortBindingFailed, self.api._update_port, neutronapi.get_client(self.context), instance, port_uuid, port_req_body) @mock.patch.object(client.Client, 'create_port', side_effect=exceptions.IpAddressInUseClient()) def test_create_port_minimal_raise_ip_in_use(self, create_port_mock): instance = fake_instance.fake_instance_obj(self.context) fake_ip = '1.1.1.1' self.assertRaises(exception.FixedIpAlreadyInUse, self.api._create_port_minimal, neutronapi.get_client(self.context), instance, uuids.my_netid1, fixed_ip=fake_ip) self.assertTrue(create_port_mock.called) @mock.patch.object(client.Client, 'create_port', side_effect=exceptions.IpAddressAlreadyAllocatedClient()) def test_create_port_minimal_raise_ip_already_allocated(self, create_port_mock): instance = fake_instance.fake_instance_obj(self.context) fake_ip = '1.1.1.1' self.assertRaises(exception.FixedIpAlreadyInUse, self.api._create_port_minimal, neutronapi.get_client(self.context), instance, uuids.my_netid1, fixed_ip=fake_ip) self.assertTrue(create_port_mock.called) @mock.patch.object(client.Client, 'create_port', side_effect=exceptions.InvalidIpForNetworkClient()) def test_create_port_minimal_raise_invalid_ip(self, create_port_mock): instance = fake_instance.fake_instance_obj(self.context) 
fake_ip = '1.1.1.1' exc = self.assertRaises(exception.InvalidInput, self.api._create_port_minimal, neutronapi.get_client(self.context), instance, uuids.my_netid1, fixed_ip=fake_ip) expected_exception_msg = ('Invalid input received: Fixed IP %(ip)s is ' 'not a valid ip address for network ' '%(net_id)s.' % {'ip': fake_ip, 'net_id': uuids.my_netid1}) self.assertEqual(expected_exception_msg, str(exc)) self.assertTrue(create_port_mock.called) def test_create_port_minimal_raise_qos_not_supported(self): instance = fake_instance.fake_instance_obj(self.context) mock_client = mock.MagicMock() mock_client.create_port.return_value = {'port': { 'id': uuids.port_id, constants.RESOURCE_REQUEST: { 'resources': {'CUSTOM_RESOURCE_CLASS': 42}} }} exc = self.assertRaises(exception.NetworksWithQoSPolicyNotSupported, self.api._create_port_minimal, mock_client, instance, uuids.my_netid1) expected_exception_msg = ('Using networks with QoS policy is not ' 'supported for instance %(instance)s. ' '(Network ID is %(net_id)s)' % {'instance': instance.uuid, 'net_id': uuids.my_netid1}) self.assertEqual(expected_exception_msg, six.text_type(exc)) mock_client.delete_port.assert_called_once_with(uuids.port_id) @mock.patch('nova.network.neutron.LOG') def test_create_port_minimal_raise_qos_not_supported_cleanup_fails( self, mock_log): instance = fake_instance.fake_instance_obj(self.context) mock_client = mock.MagicMock() mock_client.create_port.return_value = {'port': { 'id': uuids.port_id, constants.RESOURCE_REQUEST: { 'resources': {'CUSTOM_RESOURCE_CLASS': 42}} }} mock_client.delete_port.side_effect = \ exceptions.NeutronClientException() exc = self.assertRaises(exception.NetworksWithQoSPolicyNotSupported, self.api._create_port_minimal, mock_client, instance, uuids.my_netid1) expected_exception_msg = ('Using networks with QoS policy is not ' 'supported for instance %(instance)s. ' '(Network ID is %(net_id)s)' % {'instance': instance.uuid, 'net_id': uuids.my_netid1}) self.assertEqual(expected_exception_msg, six.text_type(exc)) mock_client.delete_port.assert_called_once_with(uuids.port_id) self.assertTrue(mock_log.exception.called) def test_get_network_detail_not_found(self): api = neutronapi.API() expected_exc = exceptions.NetworkNotFoundClient() network_uuid = '02cacbca-7d48-4a2c-8011-43eecf8a9786' with mock.patch.object(client.Client, 'show_network', side_effect=expected_exc) as ( fake_show_network): self.assertRaises(exception.NetworkNotFound, api.get, self.context, network_uuid) fake_show_network.assert_called_once_with(network_uuid) @mock.patch('nova.network.neutron.API._get_preexisting_port_ids') @mock.patch('nova.network.neutron.API.' 
'_refresh_neutron_extensions_cache') def test_deallocate_for_instance_uses_delete_helper(self, mock_refresh, mock_preexisting): # setup fake data instance = fake_instance.fake_instance_obj(self.context) mock_preexisting.return_value = [] port_data = {'ports': [{'id': uuids.fake}]} ports = set([port['id'] for port in port_data.get('ports')]) api = neutronapi.API() # setup mocks mock_client = mock.Mock() mock_client.list_ports.return_value = port_data with test.nested( mock.patch.object(neutronapi, 'get_client', return_value=mock_client), mock.patch.object(api, '_delete_ports') ) as ( mock_get_client, mock_delete ): # run the code api.deallocate_for_instance(self.context, instance) # assert the calls mock_client.list_ports.assert_called_once_with( device_id=instance.uuid) mock_delete.assert_called_once_with( mock_client, instance, ports, raise_if_fail=True) def _test_delete_ports(self, expect_raise): results = [exceptions.NeutronClientException, None] mock_client = mock.Mock() with mock.patch.object(mock_client, 'delete_port', side_effect=results): api = neutronapi.API() api._delete_ports(mock_client, {'uuid': 'foo'}, ['port1', 'port2'], raise_if_fail=expect_raise) def test_delete_ports_raise(self): self.assertRaises(exceptions.NeutronClientException, self._test_delete_ports, True) def test_delete_ports_no_raise(self): self._test_delete_ports(False) def test_delete_ports_never_raise_404(self): mock_client = mock.Mock() mock_client.delete_port.side_effect = exceptions.PortNotFoundClient api = neutronapi.API() api._delete_ports(mock_client, {'uuid': 'foo'}, ['port1'], raise_if_fail=True) mock_client.delete_port.assert_called_once_with('port1') @mock.patch('nova.network.neutron.API._get_preexisting_port_ids') def test_deallocate_port_for_instance_fails(self, mock_preexisting): mock_preexisting.return_value = [] mock_client = mock.Mock() mock_client.show_port.side_effect = exceptions.Unauthorized() api = neutronapi.API() with test.nested( mock.patch.object(neutronapi, 'get_client', return_value=mock_client), mock.patch.object(api, 'get_instance_nw_info') ) as ( get_client, get_nw_info ): self.assertRaises(exceptions.Unauthorized, api.deallocate_port_for_instance, self.context, instance={'uuid': uuids.fake}, port_id=uuids.fake) # make sure that we didn't try to reload nw info self.assertFalse(get_nw_info.called) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def _test_show_port_exceptions(self, client_exc, expected_nova_exc, get_client_mock): show_port_mock = mock.Mock(side_effect=client_exc) get_client_mock.return_value.show_port = show_port_mock self.assertRaises(expected_nova_exc, self.api.show_port, self.context, 'fake_port_id') def test_show_port_not_found(self): self._test_show_port_exceptions(exceptions.PortNotFoundClient, exception.PortNotFound) def test_show_port_forbidden(self): self._test_show_port_exceptions(exceptions.Unauthorized, exception.Forbidden) def test_show_port_unknown_exception(self): self._test_show_port_exceptions(exceptions.NeutronClientException, exception.NovaException) def test_get_network(self): api = neutronapi.API() fake_network = { 'network': {'id': uuids.instance, 'name': 'fake-network'} } with mock.patch.object(client.Client, 'show_network') as mock_show: mock_show.return_value = fake_network rsp = api.get(self.context, uuids.instance) self.assertEqual(fake_network['network'], rsp) def test_get_all_networks(self): api = neutronapi.API() fake_networks = { 'networks': [ {'id': uuids.network_1, 'name': 'fake-network1'}, {'id': 
uuids.network_2, 'name': 'fake-network2'}, ], } with mock.patch.object(client.Client, 'list_networks') as mock_list: mock_list.return_value = fake_networks rsp = api.get_all(self.context) self.assertEqual(fake_networks['networks'], rsp) @mock.patch.object(neutronapi.API, "_refresh_neutron_extensions_cache") @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_update_instance_vnic_index(self, mock_get_client, mock_refresh_extensions): api = neutronapi.API() api.extensions = set([constants.VNIC_INDEX_EXT]) mock_client = mock_get_client.return_value mock_client.update_port.return_value = 'port' instance = {'project_id': '9d049e4b60b64716978ab415e6fbd5c0', 'uuid': uuids.fake, 'display_name': 'test_instance', 'availability_zone': 'nova', 'host': 'some_host'} instance = objects.Instance(**instance) vif = {'id': 'fake-port-id'} api.update_instance_vnic_index(self.context, instance, vif, 7) port_req_body = {'port': {'vnic_index': 7}} mock_client.update_port.assert_called_once_with('fake-port-id', port_req_body) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_update_port_bindings_for_instance_with_migration_profile( self, get_client_mock): instance = fake_instance.fake_instance_obj(self.context) self.api._has_port_binding_extension = mock.Mock(return_value=True) # We pass in a port profile which has a migration attribute and also # a second port profile attribute 'fake_profile' this can be # an sriov port profile attribute or a pci_slot attribute, but for # now we are just using a fake one to show that the code does not # remove the portbinding_profile if there is one. binding_profile = {'fake_profile': 'fake_data', constants.MIGRATING_ATTR: 'my-dest-host'} fake_ports = {'ports': [ {'id': 'fake-port-1', constants.BINDING_PROFILE: binding_profile, constants.BINDING_HOST_ID: instance.host}]} list_ports_mock = mock.Mock(return_value=fake_ports) get_client_mock.return_value.list_ports = list_ports_mock update_port_mock = mock.Mock() get_client_mock.return_value.update_port = update_port_mock self.api._update_port_binding_for_instance(self.context, instance, 'my-host') # Assert that update_port was called on the port with a # different host and also the migration profile from the port is # removed since it does not match with the current host. update_port_mock.assert_called_once_with( 'fake-port-1', {'port': { constants.BINDING_HOST_ID: 'my-host', 'device_owner': 'compute:%s' % instance.availability_zone, constants.BINDING_PROFILE: { 'fake_profile': 'fake_data'}}}) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_update_port_bindings_for_instance_binding_profile_none( self, get_client_mock): """Tests _update_port_binding_for_instance when the binding:profile value is None. """ instance = fake_instance.fake_instance_obj(self.context) self.api._has_port_binding_extension = mock.Mock(return_value=True) fake_ports = {'ports': [ {'id': uuids.portid, constants.BINDING_PROFILE: None, constants.BINDING_HOST_ID: instance.host}]} list_ports_mock = mock.Mock(return_value=fake_ports) get_client_mock.return_value.list_ports = list_ports_mock update_port_mock = mock.Mock() get_client_mock.return_value.update_port = update_port_mock self.api._update_port_binding_for_instance(self.context, instance, 'my-host') # Assert that update_port was called on the port with a # different host but with no binding profile. 
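        # For reference, the expected update body has roughly this shape:
        #     {'port': {constants.BINDING_HOST_ID: 'my-host',
        #               'device_owner': 'compute:<availability zone>'}}
        # i.e. no binding profile key is sent when the existing profile is
        # None.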
update_port_mock.assert_called_once_with( uuids.portid, {'port': { constants.BINDING_HOST_ID: 'my-host', 'device_owner': 'compute:%s' % instance.availability_zone}}) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_update_port_bindings_for_instance_same_host(self, get_client_mock): instance = fake_instance.fake_instance_obj(self.context) # We test two ports, one with the same host as the host passed in and # one where binding:host_id isn't set, so we update that port. fake_ports = {'ports': [ {'id': 'fake-port-1', constants.BINDING_HOST_ID: instance.host}, {'id': 'fake-port-2'}]} list_ports_mock = mock.Mock(return_value=fake_ports) get_client_mock.return_value.list_ports = list_ports_mock update_port_mock = mock.Mock() get_client_mock.return_value.update_port = update_port_mock self.api._update_port_binding_for_instance(self.context, instance, instance.host) # Assert that update_port was only called on the port without a host. update_port_mock.assert_called_once_with( 'fake-port-2', {'port': {constants.BINDING_HOST_ID: instance.host, 'device_owner': 'compute:%s' % instance.availability_zone}}) @mock.patch.object(pci_whitelist.Whitelist, 'get_devspec') @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_update_port_bindings_for_instance_with_pci(self, get_client_mock, get_pci_device_devspec_mock): devspec = mock.Mock() devspec.get_tags.return_value = {'physical_network': 'physnet1'} get_pci_device_devspec_mock.return_value = devspec instance = fake_instance.fake_instance_obj(self.context) instance.migration_context = objects.MigrationContext() instance.migration_context.old_pci_devices = objects.PciDeviceList( objects=[objects.PciDevice(vendor_id='1377', product_id='0047', address='0000:0a:00.1', compute_node_id=1, request_id='1234567890')]) instance.migration_context.new_pci_devices = objects.PciDeviceList( objects=[objects.PciDevice(vendor_id='1377', product_id='0047', address='0000:0b:00.1', compute_node_id=2, request_id='1234567890')]) instance.pci_devices = instance.migration_context.old_pci_devices # Validate that non-direct port aren't updated (fake-port-2). fake_ports = {'ports': [ {'id': 'fake-port-1', 'binding:vnic_type': 'direct', constants.BINDING_HOST_ID: 'fake-host-old', constants.BINDING_PROFILE: {'pci_slot': '0000:0a:00.1', 'physical_network': 'old_phys_net', 'pci_vendor_info': 'old_pci_vendor_info'}}, {'id': 'fake-port-2', constants.BINDING_HOST_ID: instance.host}]} migration = {'status': 'confirmed', 'migration_type': "migration"} list_ports_mock = mock.Mock(return_value=fake_ports) get_client_mock.return_value.list_ports = list_ports_mock update_port_mock = mock.Mock() get_client_mock.return_value.update_port = update_port_mock self.api._update_port_binding_for_instance(self.context, instance, instance.host, migration) # Assert that update_port is called with the binding:profile # corresponding to the PCI device specified. 
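        # The old pci_slot 0000:0a:00.1 from the port's binding profile is
        # looked up in the migration context and replaced with the address of
        # the corresponding new device (0000:0b:00.1), while physical_network
        # and pci_vendor_info are refreshed from the whitelist devspec and the
        # new device, as the assertion below shows.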
update_port_mock.assert_called_once_with( 'fake-port-1', {'port': { constants.BINDING_HOST_ID: 'fake-host', 'device_owner': 'compute:%s' % instance.availability_zone, constants.BINDING_PROFILE: {'pci_slot': '0000:0b:00.1', 'physical_network': 'physnet1', 'pci_vendor_info': '1377:0047'}}}) @mock.patch.object(pci_whitelist.Whitelist, 'get_devspec') @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_update_port_bindings_for_instance_with_pci_fail(self, get_client_mock, get_pci_device_devspec_mock): devspec = mock.Mock() devspec.get_tags.return_value = {'physical_network': 'physnet1'} get_pci_device_devspec_mock.return_value = devspec instance = fake_instance.fake_instance_obj(self.context) instance.migration_context = objects.MigrationContext() instance.migration_context.old_pci_devices = objects.PciDeviceList( objects=[objects.PciDevice(vendor_id='1377', product_id='0047', address='0000:0c:00.1', compute_node_id=1, request_id='1234567890')]) instance.migration_context.new_pci_devices = objects.PciDeviceList( objects=[objects.PciDevice(vendor_id='1377', product_id='0047', address='0000:0d:00.1', compute_node_id=2, request_id='1234567890')]) instance.pci_devices = instance.migration_context.old_pci_devices fake_ports = {'ports': [ {'id': 'fake-port-1', 'binding:vnic_type': 'direct', constants.BINDING_HOST_ID: 'fake-host-old', constants.BINDING_PROFILE: {'pci_slot': '0000:0a:00.1', 'physical_network': 'old_phys_net', 'pci_vendor_info': 'old_pci_vendor_info'}}]} migration = {'status': 'confirmed', 'migration_type': "migration"} list_ports_mock = mock.Mock(return_value=fake_ports) get_client_mock.return_value.list_ports = list_ports_mock # Assert exception is raised if the mapping is wrong. self.assertRaises(exception.PortUpdateFailed, self.api._update_port_binding_for_instance, self.context, instance, instance.host, migration) @mock.patch.object(pci_whitelist.Whitelist, 'get_devspec') @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_update_port_bindings_for_instance_with_pci_no_migration(self, get_client_mock, get_pci_device_devspec_mock): self.api._has_port_binding_extension = mock.Mock(return_value=True) devspec = mock.Mock() devspec.get_tags.return_value = {'physical_network': 'physnet1'} get_pci_device_devspec_mock.return_value = devspec instance = fake_instance.fake_instance_obj(self.context) instance.migration_context = objects.MigrationContext() instance.migration_context.old_pci_devices = objects.PciDeviceList( objects=[objects.PciDevice(vendor_id='1377', product_id='0047', address='0000:0a:00.1', compute_node_id=1, request_id='1234567890')]) instance.migration_context.new_pci_devices = objects.PciDeviceList( objects=[objects.PciDevice(vendor_id='1377', product_id='0047', address='0000:0b:00.1', compute_node_id=2, request_id='1234567890')]) instance.pci_devices = instance.migration_context.old_pci_devices fake_ports = {'ports': [ {'id': 'fake-port-1', 'binding:vnic_type': 'direct', constants.BINDING_HOST_ID: instance.host, constants.BINDING_PROFILE: {'pci_slot': '0000:0a:00.1', 'physical_network': 'phys_net', 'pci_vendor_info': 'pci_vendor_info'}}]} list_ports_mock = mock.Mock(return_value=fake_ports) get_client_mock.return_value.list_ports = list_ports_mock update_port_mock = mock.Mock() get_client_mock.return_value.update_port = update_port_mock # Try to update the port binding with no migration object. 
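# NOTE: the port is already bound to the instance's host and there is no
# migration object providing an old->new PCI mapping, so the existing
# PCI-related binding:profile is expected to be left untouched.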
self.api._update_port_binding_for_instance(self.context, instance, instance.host) # No ports should be updated if the port's pci binding did not change. update_port_mock.assert_not_called() @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_update_port_bindings_for_instance_with_same_host_failed_vif_type( self, get_client_mock): instance = fake_instance.fake_instance_obj(self.context) self.api._has_port_binding_extension = mock.Mock(return_value=True) list_ports_mock = mock.Mock() update_port_mock = mock.Mock() FAILED_VIF_TYPES = (model.VIF_TYPE_UNBOUND, model.VIF_TYPE_BINDING_FAILED) for vif_type in FAILED_VIF_TYPES: binding_profile = {'fake_profile': 'fake_data', constants.MIGRATING_ATTR: 'my-dest-host'} fake_ports = {'ports': [ {'id': 'fake-port-1', 'binding:vif_type': 'fake-vif-type', constants.BINDING_PROFILE: binding_profile, constants.BINDING_HOST_ID: instance.host}, {'id': 'fake-port-2', 'binding:vif_type': vif_type, constants.BINDING_PROFILE: binding_profile, constants.BINDING_HOST_ID: instance.host} ]} list_ports_mock.return_value = fake_ports get_client_mock.return_value.list_ports = list_ports_mock get_client_mock.return_value.update_port = update_port_mock update_port_mock.reset_mock() self.api._update_port_binding_for_instance(self.context, instance, instance.host) # Assert that update_port was called on the port with a # failed vif_type and MIGRATING_ATTR is removed update_port_mock.assert_called_once_with( 'fake-port-2', {'port': {constants.BINDING_HOST_ID: instance.host, constants.BINDING_PROFILE: { 'fake_profile': 'fake_data'}, 'device_owner': 'compute:%s' % instance.availability_zone }}) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_update_port_bindings_for_instance_with_diff_host_unbound_vif_type( self, get_client_mock): instance = fake_instance.fake_instance_obj(self.context) self.api._has_port_binding_extension = mock.Mock(return_value=True) binding_profile = {'fake_profile': 'fake_data', constants.MIGRATING_ATTR: 'my-dest-host'} fake_ports = {'ports': [ {'id': 'fake-port-1', 'binding:vif_type': model.VIF_TYPE_UNBOUND, constants.BINDING_PROFILE: binding_profile, constants.BINDING_HOST_ID: instance.host}, ]} list_ports_mock = mock.Mock(return_value=fake_ports) get_client_mock.return_value.list_ports = list_ports_mock update_port_mock = mock.Mock() get_client_mock.return_value.update_port = update_port_mock self.api._update_port_binding_for_instance(self.context, instance, 'my-host') # Assert that update_port was called on the port with a # 'unbound' vif_type, host updated and MIGRATING_ATTR is removed update_port_mock.assert_called_once_with( 'fake-port-1', {'port': { constants.BINDING_HOST_ID: 'my-host', constants.BINDING_PROFILE: { 'fake_profile': 'fake_data'}, 'device_owner': 'compute:%s' % instance.availability_zone }}) @mock.patch.object(neutronapi.API, '_get_pci_mapping_for_migration') @mock.patch.object(pci_whitelist.Whitelist, 'get_devspec') @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_update_port_bindings_for_instance_with_live_migration( self, get_client_mock, get_devspec_mock, get_pci_mapping_mock): devspec = mock.Mock() devspec.get_tags.return_value = {'physical_network': 'physnet1'} get_devspec_mock.return_value = devspec instance = fake_instance.fake_instance_obj(self.context) fake_ports = {'ports': [ {'id': 'fake-port-1', 'binding:vnic_type': 'direct', constants.BINDING_HOST_ID: 'old-host', constants.BINDING_PROFILE: {'pci_slot': '0000:0a:00.1', 
'physical_network': 'phys_net', 'pci_vendor_info': 'vendor_info'}}]} migration = {'status': 'confirmed', 'migration_type': "live-migration"} list_ports_mock = mock.Mock(return_value=fake_ports) get_client_mock.return_value.list_ports = list_ports_mock update_port_mock = mock.Mock() get_client_mock.return_value.update_port = update_port_mock self.api._update_port_binding_for_instance(self.context, instance, 'new-host', migration) # Assert _get_pci_mapping_for_migration was not called self.assertFalse(get_pci_mapping_mock.called) # Assert that update_port() does not update binding:profile # and that it updates host ID called_port_id = update_port_mock.call_args[0][0] called_port_attributes = update_port_mock.call_args[0][1] self.assertEqual(called_port_id, fake_ports['ports'][0]['id']) self.assertNotIn( constants.BINDING_PROFILE, called_port_attributes['port']) self.assertEqual( called_port_attributes['port'][ constants.BINDING_HOST_ID], 'new-host') @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_update_port_bindings_for_instance_with_resource_req( self, get_client_mock): instance = fake_instance.fake_instance_obj(self.context) fake_ports = {'ports': [ {'id': 'fake-port-1', 'binding:vnic_type': 'normal', constants.BINDING_HOST_ID: 'old-host', constants.BINDING_PROFILE: {'allocation': uuids.source_compute_rp}, 'resource_request': mock.sentinel.resource_request}]} migration = objects.Migration( status='confirmed', migration_type='migration') list_ports_mock = mock.Mock(return_value=fake_ports) get_client_mock.return_value.list_ports = list_ports_mock self.api._update_port_binding_for_instance( self.context, instance, 'new-host', migration, {'fake-port-1': [uuids.dest_compute_rp]}) get_client_mock.return_value.update_port.assert_called_once_with( 'fake-port-1', {'port': {'device_owner': 'compute:None', 'binding:profile': {'allocation': uuids.dest_compute_rp}, 'binding:host_id': 'new-host'}}) @mock.patch.object(neutronapi, 'get_client') def test_update_port_bindings_for_instance_with_resource_req_unshelve( self, get_client_mock): instance = fake_instance.fake_instance_obj(self.context) fake_ports = {'ports': [ {'id': 'fake-port-1', 'binding:vnic_type': 'normal', constants.BINDING_HOST_ID: 'old-host', constants.BINDING_PROFILE: { 'allocation': uuids.source_compute_rp}, 'resource_request': mock.sentinel.resource_request}]} list_ports_mock = mock.Mock(return_value=fake_ports) get_client_mock.return_value.list_ports = list_ports_mock # NOTE(gibi): during unshelve migration object is not created self.api._update_port_binding_for_instance( self.context, instance, 'new-host', None, {'fake-port-1': [uuids.dest_compute_rp]}) get_client_mock.return_value.update_port.assert_called_once_with( 'fake-port-1', {'port': {'device_owner': 'compute:None', 'binding:profile': {'allocation': uuids.dest_compute_rp}, 'binding:host_id': 'new-host'}}) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_update_port_bindings_for_instance_with_resource_req_no_mapping( self, get_client_mock): instance = fake_instance.fake_instance_obj(self.context) fake_ports = {'ports': [ {'id': 'fake-port-1', 'binding:vnic_type': 'normal', constants.BINDING_HOST_ID: 'old-host', constants.BINDING_PROFILE: {'allocation': uuids.source_compute_rp}, 'resource_request': mock.sentinel.resource_request}]} migration = objects.Migration( status='confirmed', migration_type='migration') list_ports_mock = mock.Mock(return_value=fake_ports) get_client_mock.return_value.list_ports = 
list_ports_mock ex = self.assertRaises( exception.PortUpdateFailed, self.api._update_port_binding_for_instance, self.context, instance, 'new-host', migration, provider_mappings=None) self.assertIn( "Provider mappings are not available to the compute service but " "are required for ports with a resource request.", six.text_type(ex)) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_update_port_bindings_for_instance_with_resource_req_live_mig( self, get_client_mock): instance = fake_instance.fake_instance_obj(self.context) fake_ports = {'ports': [ {'id': 'fake-port-1', 'binding:vnic_type': 'normal', constants.BINDING_HOST_ID: 'old-host', constants.BINDING_PROFILE: {'allocation': uuids.dest_compute_rp}, 'resource_request': mock.sentinel.resource_request}]} migration = objects.Migration( status='confirmed', migration_type='live-migration') list_ports_mock = mock.Mock(return_value=fake_ports) get_client_mock.return_value.list_ports = list_ports_mock # No mapping is passed in as during live migration the conductor # already created the binding and added the allocation key self.api._update_port_binding_for_instance( self.context, instance, 'new-host', migration, {}) # Note that binding:profile is not updated get_client_mock.return_value.update_port.assert_called_once_with( 'fake-port-1', {'port': {'device_owner': 'compute:None', 'binding:host_id': 'new-host'}}) def test_get_pci_mapping_for_migration(self): instance = fake_instance.fake_instance_obj(self.context) instance.migration_context = objects.MigrationContext() migration = {'status': 'confirmed'} with mock.patch.object(instance.migration_context, 'get_pci_mapping_for_migration') as map_func: self.api._get_pci_mapping_for_migration(instance, migration) map_func.assert_called_with(False) def test_get_pci_mapping_for_migration_reverted(self): instance = fake_instance.fake_instance_obj(self.context) instance.migration_context = objects.MigrationContext() migration = {'status': 'reverted'} with mock.patch.object(instance.migration_context, 'get_pci_mapping_for_migration') as map_func: self.api._get_pci_mapping_for_migration(instance, migration) map_func.assert_called_with(True) def test_get_pci_mapping_for_migration_no_migration_context(self): instance = fake_instance.fake_instance_obj(self.context) instance.migration_context = None pci_mapping = self.api._get_pci_mapping_for_migration( instance, None) self.assertDictEqual({}, pci_mapping) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_update_port_profile_for_migration_teardown_false( self, get_client_mock): instance = fake_instance.fake_instance_obj(self.context) self.api._has_port_binding_extension = mock.Mock(return_value=True) # We test with an instance host and destination_host where the # port will be moving. 
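# With teardown=False and a destination host ('my-new-host') different
# from the port's current binding:host_id, setup_networks_on_host is
# expected to add the 'migrating_to' attribute to the port's
# binding:profile via a single update_port call.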
get_ports = {'ports': [ {'id': uuids.port_id, constants.BINDING_HOST_ID: instance.host}]} self.api.list_ports = mock.Mock(return_value=get_ports) update_port_mock = mock.Mock() get_client_mock.return_value.update_port = update_port_mock migrate_profile = { constants.MIGRATING_ATTR: 'my-new-host'} port_data = {'port': { constants.BINDING_PROFILE: migrate_profile}} self.api.setup_networks_on_host(self.context, instance, host='my-new-host', teardown=False) update_port_mock.assert_called_once_with( uuids.port_id, port_data) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_update_port_profile_for_migration_teardown_false_none_profile( self, get_client_mock): """Tests setup_networks_on_host when migrating the port to the destination host and the binding:profile is None in the port. """ instance = fake_instance.fake_instance_obj(self.context) self.api._has_port_binding_extension = mock.Mock(return_value=True) # We test with an instance host and destination_host where the # port will be moving but with binding:profile set to None. get_ports = { 'ports': [ {'id': uuids.port_id, constants.BINDING_HOST_ID: instance.host, constants.BINDING_PROFILE: None} ] } self.api.list_ports = mock.Mock(return_value=get_ports) update_port_mock = mock.Mock() get_client_mock.return_value.update_port = update_port_mock migrate_profile = { constants.MIGRATING_ATTR: 'my-new-host'} port_data = { 'port': { constants.BINDING_PROFILE: migrate_profile } } self.api.setup_networks_on_host( self.context, instance, host='my-new-host', teardown=False) update_port_mock.assert_called_once_with( uuids.port_id, port_data) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test__setup_migration_port_profile_called_on_teardown_false( self, get_client_mock): instance = fake_instance.fake_instance_obj(self.context) self.api._has_port_binding_extension = mock.Mock(return_value=True) port_id = uuids.port_id get_ports = {'ports': [ {'id': port_id, constants.BINDING_HOST_ID: instance.host}]} self.api.list_ports = mock.Mock(return_value=get_ports) self.api._setup_migration_port_profile = mock.Mock() self.api.setup_networks_on_host(self.context, instance, host='my-new-host', teardown=False) self.api._setup_migration_port_profile.assert_called_once_with( self.context, instance, 'my-new-host', mock.ANY, get_ports['ports']) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test__setup_migration_port_profile_not_called_with_host_match( self, get_client_mock): instance = fake_instance.fake_instance_obj(self.context) self.api._has_port_binding_extension = mock.Mock(return_value=True) get_ports = {'ports': [ {'id': uuids.port_id, constants.BINDING_HOST_ID: instance.host}]} self.api.list_ports = mock.Mock(return_value=get_ports) self.api._setup_migration_port_profile = mock.Mock() self.api._clear_migration_port_profile = mock.Mock() self.api.setup_networks_on_host(self.context, instance, host=instance.host, teardown=False) self.api._setup_migration_port_profile.assert_not_called() self.api._clear_migration_port_profile.assert_not_called() def test__setup_migration_port_profile_no_update(self): """Tests the case that the port binding profile already has the "migrating_to" attribute set to the provided host so the port update call is skipped. 
""" ports = [{ constants.BINDING_HOST_ID: 'source-host', constants.BINDING_PROFILE: { constants.MIGRATING_ATTR: 'dest-host' } }] * 2 with mock.patch.object(self.api, '_update_port_with_migration_profile', new_callable=mock.NonCallableMock): self.api._setup_migration_port_profile( self.context, mock.sentinel.instance, 'dest-host', mock.sentinel.admin_client, ports) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_update_port_profile_for_migration_teardown_true_with_profile( self, get_client_mock): instance = fake_instance.fake_instance_obj(self.context) self.api._has_port_binding_extension = mock.Mock(return_value=True) migrate_profile = { constants.MIGRATING_ATTR: 'new-host'} # Pass a port with an migration porfile attribute. port_id = uuids.port_id get_ports = {'ports': [ {'id': port_id, constants.BINDING_PROFILE: migrate_profile, constants.BINDING_HOST_ID: instance.host}]} self.api.list_ports = mock.Mock(return_value=get_ports) update_port_mock = mock.Mock() get_client_mock.return_value.update_port = update_port_mock with mock.patch.object(self.api, 'delete_port_binding') as del_binding: with mock.patch.object(self.api, 'supports_port_binding_extension', return_value=True): self.api.setup_networks_on_host(self.context, instance, host='new-host', teardown=True) update_port_mock.assert_called_once_with( port_id, {'port': { constants.BINDING_PROFILE: migrate_profile}}) del_binding.assert_called_once_with( self.context, port_id, 'new-host') @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_update_port_profile_for_migration_teardown_true_with_profile_exc( self, get_client_mock): """Tests that delete_port_binding raises PortBindingDeletionFailed which is raised through to the caller. """ instance = fake_instance.fake_instance_obj(self.context) self.api._has_port_binding_extension = mock.Mock(return_value=True) migrate_profile = { constants.MIGRATING_ATTR: 'new-host'} # Pass a port with an migration porfile attribute. get_ports = { 'ports': [ {'id': uuids.port1, constants.BINDING_PROFILE: migrate_profile, constants.BINDING_HOST_ID: instance.host}, {'id': uuids.port2, constants.BINDING_PROFILE: migrate_profile, constants.BINDING_HOST_ID: instance.host}]} self.api.list_ports = mock.Mock(return_value=get_ports) self.api._clear_migration_port_profile = mock.Mock() with mock.patch.object( self.api, 'delete_port_binding', side_effect=exception.PortBindingDeletionFailed( port_id=uuids.port1, host='new-host')) as del_binding: with mock.patch.object(self.api, 'supports_port_binding_extension', return_value=True): ex = self.assertRaises( exception.PortBindingDeletionFailed, self.api.setup_networks_on_host, self.context, instance, host='new-host', teardown=True) # Make sure both ports show up in the exception message. self.assertIn(uuids.port1, six.text_type(ex)) self.assertIn(uuids.port2, six.text_type(ex)) self.api._clear_migration_port_profile.assert_called_once_with( self.context, instance, get_client_mock.return_value, get_ports['ports']) del_binding.assert_has_calls([ mock.call(self.context, uuids.port1, 'new-host'), mock.call(self.context, uuids.port2, 'new-host')]) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_update_port_profile_for_migration_teardown_true_no_profile( self, get_client_mock): instance = fake_instance.fake_instance_obj(self.context) self.api._has_port_binding_extension = mock.Mock(return_value=True) # Pass a port without any migration porfile attribute. 
get_ports = {'ports': [ {'id': uuids.port_id, constants.BINDING_HOST_ID: instance.host}]} self.api.list_ports = mock.Mock(return_value=get_ports) update_port_mock = mock.Mock() get_client_mock.return_value.update_port = update_port_mock with mock.patch.object(self.api, 'supports_port_binding_extension', return_value=False): self.api.setup_networks_on_host(self.context, instance, host=instance.host, teardown=True) update_port_mock.assert_not_called() def test__update_port_with_migration_profile_raise_exception(self): instance = fake_instance.fake_instance_obj(self.context) port_id = uuids.port_id migrate_profile = {'fake-attribute': 'my-new-host'} port_profile = {'port': { constants.BINDING_PROFILE: migrate_profile}} update_port_mock = mock.Mock(side_effect=test.TestingException()) admin_client = mock.Mock(update_port=update_port_mock) self.assertRaises(test.TestingException, self.api._update_port_with_migration_profile, instance, port_id, migrate_profile, admin_client) update_port_mock.assert_called_once_with(port_id, port_profile) @mock.patch('nova.objects.Instance.get_network_info') def test_get_preexisting_port_ids(self, mock_get_nw_info): instance = fake_instance.fake_instance_obj(self.context) mock_get_nw_info.return_value = [model.VIF( id='1', preserve_on_delete=False), model.VIF( id='2', preserve_on_delete=True), model.VIF( id='3', preserve_on_delete=True)] result = self.api._get_preexisting_port_ids(instance) self.assertEqual(['2', '3'], result, "Invalid preexisting ports") def _test_unbind_ports_get_client(self, mock_neutron): mock_ctx = mock.Mock(is_admin=False) ports = ["1", "2", "3"] self.api._unbind_ports(mock_ctx, ports, mock_neutron) get_client_calls = [] get_client_calls.append(mock.call(mock_ctx, admin=True)) self.assertEqual(1, mock_neutron.call_count) mock_neutron.assert_has_calls(get_client_calls, True) @mock.patch('nova.network.neutron.get_client') def test_unbind_ports_get_client_binding_extension(self, mock_neutron): self._test_unbind_ports_get_client(mock_neutron) @mock.patch('nova.network.neutron.get_client') def test_unbind_ports_get_client(self, mock_neutron): self._test_unbind_ports_get_client(mock_neutron) @mock.patch('nova.network.neutron.API._show_port') def _test_unbind_ports(self, mock_neutron, mock_show): mock_client = mock.Mock() mock_update_port = mock.Mock() mock_client.update_port = mock_update_port mock_ctx = mock.Mock(is_admin=False) ports = ["1", "2", "3"] mock_show.side_effect = [{"id": "1"}, {"id": "2"}, {"id": "3"}] api = neutronapi.API() api._unbind_ports(mock_ctx, ports, mock_neutron, mock_client) body = {'port': {'device_id': '', 'device_owner': ''}} body['port'][constants.BINDING_HOST_ID] = None body['port'][constants.BINDING_PROFILE] = {} update_port_calls = [] for p in ports: update_port_calls.append(mock.call(p, body)) self.assertEqual(3, mock_update_port.call_count) mock_update_port.assert_has_calls(update_port_calls) @mock.patch('nova.network.neutron.get_client') def test_unbind_ports_binding_ext(self, mock_neutron): self._test_unbind_ports(mock_neutron) @mock.patch('nova.network.neutron.get_client') def test_unbind_ports(self, mock_neutron): self._test_unbind_ports(mock_neutron) def test_unbind_ports_no_port_ids(self): # Tests that None entries in the ports list are filtered out. 
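# Only a None entry is passed in, so after filtering there is nothing to
# unbind and update_port must never be called.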
mock_client = mock.Mock() mock_update_port = mock.Mock() mock_client.update_port = mock_update_port mock_ctx = mock.Mock(is_admin=False) api = neutronapi.API() api._unbind_ports(mock_ctx, [None], mock_client, mock_client) self.assertFalse(mock_update_port.called) @mock.patch('nova.network.neutron.API.get_instance_nw_info') @mock.patch('nova.network.neutron.excutils') @mock.patch('nova.network.neutron.API._delete_ports') @mock.patch('nova.network.neutron.API._check_external_network_attach') @mock.patch('nova.network.neutron.LOG') @mock.patch('nova.network.neutron.API._unbind_ports') @mock.patch('nova.network.neutron.API._populate_neutron_extension_values') @mock.patch('nova.network.neutron.API._get_available_networks') @mock.patch('nova.network.neutron.get_client') @mock.patch('nova.objects.VirtualInterface') def test_allocate_for_instance_unbind(self, mock_vif, mock_ntrn, mock_avail_nets, mock_ext_vals, mock_unbind, mock_log, mock_cena, mock_del_ports, mock_exeu, mock_giwn): mock_nc = mock.Mock() def show_port(port_id): return {'port': {'network_id': 'net-1', 'id': port_id, 'mac_address': 'fakemac', 'tenant_id': 'proj-1'}} mock_nc.show_port = show_port mock_ntrn.return_value = mock_nc def update_port(port_id, body): if port_id == uuids.fail_port_id: raise Exception return {"port": {'mac_address': 'fakemac', 'id': port_id}} mock_nc.update_port.side_effect = update_port mock_inst = mock.Mock(project_id="proj-1", availability_zone='zone-1', uuid='inst-1') nw_req = objects.NetworkRequestList( objects = [objects.NetworkRequest(port_id=uuids.portid_1), objects.NetworkRequest(port_id=uuids.portid_2), objects.NetworkRequest(port_id=uuids.fail_port_id)]) mock_avail_nets.return_value = [{'id': 'net-1', 'subnets': ['subnet1']}] self.api.allocate_for_instance(mock.sentinel.ctx, mock_inst, False, requested_networks=nw_req) mock_unbind.assert_called_once_with(mock.sentinel.ctx, [uuids.portid_1, uuids.portid_2], mock.ANY, mock.ANY) @mock.patch('nova.network.neutron.API._validate_requested_port_ids') @mock.patch('nova.network.neutron.API._get_available_networks') @mock.patch('nova.network.neutron.get_client') def test_allocate_port_for_instance_no_networks(self, mock_getclient, mock_avail_nets, mock_validate_port_ids): """Tests that if no networks are requested and no networks are available, we fail with InterfaceAttachFailedNoNetwork. """ instance = fake_instance.fake_instance_obj(self.context, project_id=uuids.my_tenant) mock_validate_port_ids.return_value = ({}, []) mock_avail_nets.return_value = [] api = neutronapi.API() ex = self.assertRaises(exception.InterfaceAttachFailedNoNetwork, api.allocate_port_for_instance, self.context, instance, port_id=None) self.assertEqual( "No specific network was requested and none are available for " "project '%s'." 
% uuids.my_tenant, six.text_type(ex)) @mock.patch.object(neutronapi.API, 'allocate_for_instance') def test_allocate_port_for_instance_with_tag(self, mock_allocate): instance = fake_instance.fake_instance_obj(self.context) api = neutronapi.API() api.allocate_port_for_instance(self.context, instance, None, network_id=None, requested_ip=None, bind_host_id=None, tag='foo') req_nets_in_call = mock_allocate.call_args[1]['requested_networks'] self.assertEqual('foo', req_nets_in_call.objects[0].tag) @mock.patch('nova.network.neutron.LOG') @mock.patch('nova.network.neutron.API._delete_ports') @mock.patch('nova.network.neutron.API._unbind_ports') @mock.patch('nova.network.neutron.API._get_preexisting_port_ids') @mock.patch('nova.network.neutron.get_client') @mock.patch.object(objects.VirtualInterface, 'delete_by_instance_uuid') def test_preexisting_deallocate_for_instance(self, mock_delete_vifs, mock_ntrn, mock_gppids, mock_unbind, mock_deletep, mock_log): mock_inst = mock.Mock(project_id="proj-1", availability_zone='zone-1', uuid='inst-1') mock_nc = mock.Mock() mock_ntrn.return_value = mock_nc mock_nc.list_ports.return_value = {'ports': [ {'id': uuids.portid_1}, {'id': uuids.portid_2}, {'id': uuids.portid_3} ]} nw_req = objects.NetworkRequestList( objects = [objects.NetworkRequest(network_id='net-1', address='192.168.0.3', port_id=uuids.portid_1, pci_request_id=uuids.pci_1)]) mock_gppids.return_value = [uuids.portid_3] self.api.deallocate_for_instance(mock.sentinel.ctx, mock_inst, requested_networks=nw_req) mock_unbind.assert_called_once_with(mock.sentinel.ctx, set([uuids.portid_1, uuids.portid_3]), mock.ANY) mock_deletep.assert_called_once_with(mock_nc, mock_inst, set([uuids.portid_2]), raise_if_fail=True) mock_delete_vifs.assert_called_once_with(mock.sentinel.ctx, 'inst-1') @mock.patch('nova.network.neutron.API._delete_nic_metadata') @mock.patch('nova.network.neutron.API.get_instance_nw_info') @mock.patch('nova.network.neutron.API._unbind_ports') @mock.patch('nova.objects.Instance.get_network_info') @mock.patch('nova.network.neutron.get_client') @mock.patch.object(objects.VirtualInterface, 'get_by_uuid') def test_preexisting_deallocate_port_for_instance(self, mock_get_vif_by_uuid, mock_ntrn, mock_inst_get_nwinfo, mock_unbind, mock_netinfo, mock_del_nic_meta): mock_inst = mock.Mock(project_id="proj-1", availability_zone='zone-1', uuid='inst-1') mock_inst.get_network_info.return_value = [model.VIF( id='1', preserve_on_delete=False), model.VIF( id='2', preserve_on_delete=True), model.VIF( id='3', preserve_on_delete=True)] mock_client = mock.Mock() mock_client.show_port.return_value = {'port': {}} mock_ntrn.return_value = mock_client vif = objects.VirtualInterface() vif.tag = 'foo' vif.destroy = mock.MagicMock() mock_get_vif_by_uuid.return_value = vif _, port_allocation = self.api.deallocate_port_for_instance( mock.sentinel.ctx, mock_inst, '2') mock_unbind.assert_called_once_with(mock.sentinel.ctx, ['2'], mock_client) mock_get_vif_by_uuid.assert_called_once_with(mock.sentinel.ctx, '2') mock_del_nic_meta.assert_called_once_with(mock_inst, vif) vif.destroy.assert_called_once_with() self.assertEqual({}, port_allocation) @mock.patch('nova.network.neutron.API.get_instance_nw_info') @mock.patch('nova.network.neutron.API._delete_nic_metadata') @mock.patch.object(objects.VirtualInterface, 'get_by_uuid') @mock.patch('nova.network.neutron.get_client') def test_deallocate_port_for_instance_port_with_allocation( self, mock_get_client, mock_get_vif_by_uuid, mock_del_nic_meta, mock_netinfo): mock_inst = 
mock.Mock(project_id="proj-1", availability_zone='zone-1', uuid='inst-1') mock_inst.get_network_info.return_value = [ model.VIF(id=uuids.port_uid, preserve_on_delete=True) ] vif = objects.VirtualInterface() vif.tag = 'foo' vif.destroy = mock.MagicMock() mock_get_vif_by_uuid.return_value = vif mock_client = mock.Mock() mock_client.show_port.return_value = { 'port': { constants.RESOURCE_REQUEST: { 'resources': { 'NET_BW_EGR_KILOBIT_PER_SEC': 1000 } }, 'binding:profile': { 'allocation': uuids.rp1 } } } mock_get_client.return_value = mock_client _, port_allocation = self.api.deallocate_port_for_instance( mock.sentinel.ctx, mock_inst, uuids.port_id) self.assertEqual( { uuids.rp1: { 'NET_BW_EGR_KILOBIT_PER_SEC': 1000 } }, port_allocation) @mock.patch('nova.network.neutron.API.get_instance_nw_info') @mock.patch('nova.network.neutron.API._delete_nic_metadata') @mock.patch.object(objects.VirtualInterface, 'get_by_uuid') @mock.patch('nova.network.neutron.get_client') def test_deallocate_port_for_instance_port_already_deleted( self, mock_get_client, mock_get_vif_by_uuid, mock_del_nic_meta, mock_netinfo): mock_inst = mock.Mock(project_id="proj-1", availability_zone='zone-1', uuid='inst-1') network_info = [ model.VIF( id=uuids.port_id, preserve_on_delete=True, profile={'allocation': uuids.rp1}) ] mock_inst.get_network_info.return_value = network_info vif = objects.VirtualInterface() vif.tag = 'foo' vif.destroy = mock.MagicMock() mock_get_vif_by_uuid.return_value = vif mock_client = mock.Mock() mock_client.show_port.side_effect = exception.PortNotFound( port_id=uuids.port_id) mock_get_client.return_value = mock_client _, port_allocation = self.api.deallocate_port_for_instance( mock.sentinel.ctx, mock_inst, uuids.port_id) self.assertEqual({}, port_allocation) self.assertIn( 'Resource allocation for this port may be leaked', self.stdlog.logger.output) def test_delete_nic_metadata(self): vif = objects.VirtualInterface(address='aa:bb:cc:dd:ee:ff', tag='foo') instance = fake_instance.fake_instance_obj(self.context) instance.device_metadata = objects.InstanceDeviceMetadata( devices=[objects.NetworkInterfaceMetadata( mac='aa:bb:cc:dd:ee:ff', tag='foo')]) instance.save = mock.Mock() self.api._delete_nic_metadata(instance, vif) self.assertEqual(0, len(instance.device_metadata.devices)) instance.save.assert_called_once_with() @mock.patch('nova.network.neutron.API._check_external_network_attach') @mock.patch('nova.network.neutron.API._populate_neutron_extension_values') @mock.patch('nova.network.neutron.API._get_available_networks') @mock.patch('nova.network.neutron.get_client') def test_port_binding_failed_created_port(self, mock_ntrn, mock_avail_nets, mock_ext_vals, mock_cena): mock_nc = mock.Mock() mock_ntrn.return_value = mock_nc mock_inst = mock.Mock(project_id="proj-1", availability_zone='zone-1', uuid=uuids.inst_1) mock_avail_nets.return_value = [{'id': 'net-1', 'subnets': ['subnet1']}] mock_nc.create_port.return_value = {'port': {'id': uuids.portid_1}} port_response = {'port': {'id': uuids.portid_1, 'tenant_id': mock_inst.project_id, 'binding:vif_type': 'binding_failed'}} mock_nc.update_port.return_value = port_response self.assertRaises(exception.PortBindingFailed, self.api.allocate_for_instance, mock.sentinel.ctx, mock_inst, False, None) mock_nc.delete_port.assert_called_once_with(uuids.portid_1) @mock.patch('nova.network.neutron.API._show_port') @mock.patch('nova.network.neutron.get_client') def test_port_binding_failed_with_request(self, mock_ntrn, mock_show_port): mock_nc = mock.Mock() 
mock_ntrn.return_value = mock_nc mock_inst = mock.Mock(project_id="proj-1", availability_zone='zone-1', uuid='inst-1') mock_show_port.return_value = { 'id': uuids.portid_1, 'tenant_id': mock_inst.project_id, 'binding:vif_type': 'binding_failed'} nw_req = objects.NetworkRequestList( objects = [objects.NetworkRequest(port_id=uuids.portid_1)]) self.assertRaises(exception.PortBindingFailed, self.api.allocate_for_instance, mock.sentinel.ctx, mock_inst, False, requested_networks=nw_req) @mock.patch('nova.objects.virtual_interface.VirtualInterface.create') @mock.patch('nova.network.neutron.API._check_external_network_attach') @mock.patch('nova.network.neutron.API._show_port') @mock.patch('nova.network.neutron.API._update_port') @mock.patch('nova.network.neutron.get_client') def test_port_with_resource_request_has_allocation_in_binding( self, mock_get_client, mock_update_port, mock_show_port, mock_check_external, mock_vif_create): nw_req = objects.NetworkRequestList( objects = [objects.NetworkRequest(port_id=uuids.portid_1)]) mock_inst = mock.Mock( uuid=uuids.instance_uuid, project_id=uuids.project_id, availability_zone='nova', ) port = { 'id': uuids.portid_1, 'tenant_id': uuids.project_id, 'network_id': uuids.networkid_1, 'mac_address': 'fake-mac', constants.RESOURCE_REQUEST: 'fake-request' } mock_show_port.return_value = port mock_get_client.return_value.list_networks.return_value = { "networks": [{'id': uuids.networkid_1, 'port_security_enabled': False}]} mock_update_port.return_value = port with mock.patch.object(self.api, 'get_instance_nw_info'): self.api.allocate_for_instance( mock.sentinel.ctx, mock_inst, False, requested_networks=nw_req, resource_provider_mapping={uuids.portid_1: [uuids.rp1]}) mock_update_port.assert_called_once_with( mock_get_client.return_value, mock_inst, uuids.portid_1, { 'port': { 'binding:host_id': None, 'device_id': uuids.instance_uuid, 'binding:profile': { 'allocation': uuids.rp1}, 'device_owner': 'compute:nova'}}) mock_show_port.assert_called_once_with( mock.sentinel.ctx, uuids.portid_1, neutron_client=mock_get_client.return_value) @mock.patch('nova.network.neutron.get_client') def test_get_floating_ip_by_address_not_found_neutron_not_found(self, mock_ntrn): mock_nc = mock.Mock() mock_ntrn.return_value = mock_nc mock_nc.list_floatingips.side_effect = exceptions.NotFound() address = '172.24.4.227' self.assertRaises(exception.FloatingIpNotFoundForAddress, self.api.get_floating_ip_by_address, self.context, address) @mock.patch('nova.network.neutron.get_client') def test_get_floating_ip_by_address_not_found_neutron_raises_non404(self, mock_ntrn): mock_nc = mock.Mock() mock_ntrn.return_value = mock_nc mock_nc.list_floatingips.side_effect = exceptions.InternalServerError() address = '172.24.4.227' self.assertRaises(exceptions.InternalServerError, self.api.get_floating_ip_by_address, self.context, address) @mock.patch('nova.network.neutron.get_client') def test_get_floating_ips_by_project_not_found(self, mock_ntrn): mock_nc = mock.Mock() mock_ntrn.return_value = mock_nc mock_nc.list_floatingips.side_effect = exceptions.NotFound() fips = self.api.get_floating_ips_by_project(self.context) self.assertEqual([], fips) @mock.patch('nova.network.neutron.get_client') def test_get_floating_ips_by_project_not_found_legacy(self, mock_ntrn): # FIXME(danms): Remove this test along with the code path it tests # when bug 1513879 is fixed. 
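# As with the NotFound case above, a 404 wrapped in a generic
# NeutronClientException is expected to be treated as "no floating IPs
# for this project" and an empty list returned rather than raising.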
mock_nc = mock.Mock() mock_ntrn.return_value = mock_nc # neutronclient doesn't raise NotFound in this scenario, it raises a # NeutronClientException with status_code=404 notfound = exceptions.NeutronClientException(status_code=404) mock_nc.list_floatingips.side_effect = notfound fips = self.api.get_floating_ips_by_project(self.context) self.assertEqual([], fips) @mock.patch('nova.network.neutron.get_client') def test_get_floating_ips_by_project_raises_non404(self, mock_ntrn): mock_nc = mock.Mock() mock_ntrn.return_value = mock_nc mock_nc.list_floatingips.side_effect = exceptions.InternalServerError() self.assertRaises(exceptions.InternalServerError, self.api.get_floating_ips_by_project, self.context) @mock.patch.object(neutronapi.API, '_refresh_neutron_extensions_cache') @mock.patch.object(neutronapi, 'get_client') def _test_get_floating_ips_by_project( self, fip_ext_enabled, has_ports, mock_ntrn, mock_refresh): mock_nc = mock.Mock() mock_ntrn.return_value = mock_nc # NOTE(stephenfin): These are clearly not full responses mock_nc.list_floatingips.return_value = { 'floatingips': [ { 'id': uuids.fip_id, 'floating_network_id': uuids.fip_net_id, 'port_id': uuids.fip_port_id, } ] } mock_nc.list_networks.return_value = { 'networks': [ { 'id': uuids.fip_net_id, }, ], } if has_ports: mock_nc.list_ports.return_value = { 'ports': [ { 'id': uuids.fip_port_id, }, ], } else: mock_nc.list_ports.return_value = {'ports': []} if fip_ext_enabled: self.api.extensions = [constants.FIP_PORT_DETAILS] else: self.api.extensions = [] fips = self.api.get_floating_ips_by_project(self.context) mock_nc.list_networks.assert_called_once_with( id=[uuids.fip_net_id]) self.assertEqual(1, len(fips)) if fip_ext_enabled: mock_nc.list_ports.assert_not_called() self.assertNotIn('port_details', fips[0]) else: mock_nc.list_ports.assert_called_once_with( tenant_id=self.context.project_id) self.assertIn('port_details', fips[0]) if has_ports: self.assertIsNotNone(fips[0]['port_details']) else: self.assertIsNone(fips[0]['port_details']) def test_get_floating_ips_by_project_with_fip_port_details_ext(self): """Make sure we used embedded port details if available.""" self._test_get_floating_ips_by_project(True, True) def test_get_floating_ips_by_project_without_fip_port_details_ext(self): """Make sure we make a second request for port details if necessary.""" self._test_get_floating_ips_by_project(False, True) def test_get_floating_ips_by_project_without_ports(self): """Make sure we don't fail for floating IPs without attached ports.""" self._test_get_floating_ips_by_project(False, False) @mock.patch('nova.network.neutron.API._show_port') def test_unbind_ports_reset_dns_name_by_admin(self, mock_show): neutron = mock.Mock() neutron.show_network.return_value = { 'network': { 'id': 'net1', 'dns_domain': None } } port_client = mock.Mock() self.api.extensions = [constants.DNS_INTEGRATION] ports = [uuids.port_id] mock_show.return_value = {'id': uuids.port} self.api._unbind_ports(self.context, ports, neutron, port_client) port_req_body = {'port': {'binding:host_id': None, 'binding:profile': {}, 'device_id': '', 'device_owner': '', 'dns_name': ''}} port_client.update_port.assert_called_once_with( uuids.port_id, port_req_body) neutron.update_port.assert_not_called() @mock.patch('nova.network.neutron.API._show_port') def test_unbind_ports_reset_dns_name_by_non_admin(self, mock_show): neutron = mock.Mock() neutron.show_network.return_value = { 'network': { 'id': 'net1', 'dns_domain': 'test.domain' } } port_client = mock.Mock() self.api.extensions 
= [constants.DNS_INTEGRATION] ports = [uuids.port_id] mock_show.return_value = {'id': uuids.port} self.api._unbind_ports(self.context, ports, neutron, port_client) admin_port_req_body = {'port': {'binding:host_id': None, 'binding:profile': {}, 'device_id': '', 'device_owner': ''}} non_admin_port_req_body = {'port': {'dns_name': ''}} port_client.update_port.assert_called_once_with( uuids.port_id, admin_port_req_body) neutron.update_port.assert_called_once_with( uuids.port_id, non_admin_port_req_body) @mock.patch('nova.network.neutron.API._show_port') def test_unbind_ports_reset_allocation_in_port_binding(self, mock_show): neutron = mock.Mock() port_client = mock.Mock() ports = [uuids.port_id] mock_show.return_value = {'id': uuids.port, 'binding:profile': {'allocation': uuids.rp1}} self.api._unbind_ports(self.context, ports, neutron, port_client) port_req_body = {'port': {'binding:host_id': None, 'binding:profile': {}, 'device_id': '', 'device_owner': ''}} port_client.update_port.assert_called_once_with( uuids.port_id, port_req_body) @mock.patch('nova.network.neutron.API._show_port') def test_unbind_ports_reset_binding_profile(self, mock_show): neutron = mock.Mock() port_client = mock.Mock() ports = [uuids.port_id] mock_show.return_value = { 'id': uuids.port, 'binding:profile': {'pci_vendor_info': '1377:0047', 'pci_slot': '0000:0a:00.1', 'physical_network': 'physnet1', 'capabilities': ['switchdev']} } self.api._unbind_ports(self.context, ports, neutron, port_client) port_req_body = {'port': {'binding:host_id': None, 'binding:profile': {'physical_network': 'physnet1', 'capabilities': ['switchdev']}, 'device_id': '', 'device_owner': ''} } port_client.update_port.assert_called_once_with( uuids.port_id, port_req_body) @mock.patch('nova.network.neutron.API._populate_neutron_extension_values') @mock.patch('nova.network.neutron.API._update_port', # called twice, fails on the 2nd call and triggers the cleanup side_effect=(mock.MagicMock(), exception.PortInUse( port_id=uuids.created_port_id))) @mock.patch.object(objects.VirtualInterface, 'create') @mock.patch.object(objects.VirtualInterface, 'destroy') @mock.patch('nova.network.neutron.API._unbind_ports') @mock.patch('nova.network.neutron.API._delete_ports') def test_update_ports_for_instance_fails_rollback_ports_and_vifs(self, mock_delete_ports, mock_unbind_ports, mock_vif_destroy, mock_vif_create, mock_update_port, mock_populate_ext_values): """Makes sure we rollback ports and VIFs if we fail updating ports""" instance = fake_instance.fake_instance_obj(self.context) ntrn = mock.Mock(spec=client.Client) # we have two requests, one with a preexisting port and one where nova # created the port (on the same network) requests_and_created_ports = [ (objects.NetworkRequest(network_id=uuids.network_id, port_id=uuids.preexisting_port_id), None), # None means Nova didn't create this port (objects.NetworkRequest(network_id=uuids.network_id, port_id=uuids.created_port_id), uuids.created_port_id), ] network = {'id': uuids.network_id} nets = {uuids.network_id: network} self.assertRaises(exception.PortInUse, self.api._update_ports_for_instance, self.context, instance, ntrn, ntrn, requests_and_created_ports, nets, bind_host_id=None, requested_ports_dict=None) # assert the calls mock_update_port.assert_has_calls([ mock.call(ntrn, instance, uuids.preexisting_port_id, mock.ANY), mock.call(ntrn, instance, uuids.created_port_id, mock.ANY) ]) # we only got to create one vif since the 2nd _update_port call fails mock_vif_create.assert_called_once_with() # we only destroy 
one vif since we only created one mock_vif_destroy.assert_called_once_with() # we unbind the pre-existing port mock_unbind_ports.assert_called_once_with( self.context, [uuids.preexisting_port_id], ntrn, ntrn) # we delete the created port mock_delete_ports.assert_called_once_with( ntrn, instance, [uuids.created_port_id]) @mock.patch('nova.network.neutron.API._get_floating_ip_by_address', return_value={"port_id": "1"}) @mock.patch('nova.network.neutron.API._show_port', side_effect=exception.PortNotFound(port_id='1')) def test_get_instance_id_by_floating_address_port_not_found(self, mock_show, mock_get): api = neutronapi.API() fip = api.get_instance_id_by_floating_address(self.context, '172.24.4.227') self.assertIsNone(fip) @mock.patch('nova.network.neutron.API._show_port', side_effect=exception.PortNotFound(port_id=uuids.port)) @mock.patch.object(neutronapi.LOG, 'exception') def test_unbind_ports_port_show_portnotfound(self, mock_log, mock_show): api = neutronapi.API() neutron_client = mock.Mock() mock_show.return_value = {'id': uuids.port} api._unbind_ports(self.context, [uuids.port_id], neutron_client, neutron_client) mock_show.assert_called_once_with( mock.ANY, uuids.port_id, fields=['binding:profile', 'network_id'], neutron_client=mock.ANY) mock_log.assert_not_called() @mock.patch('nova.network.neutron.API._show_port', side_effect=Exception) @mock.patch.object(neutronapi.LOG, 'exception') def test_unbind_ports_port_show_unexpected_error(self, mock_log, mock_show): api = neutronapi.API() neutron_client = mock.Mock() mock_show.return_value = {'id': uuids.port} api._unbind_ports(self.context, [uuids.port_id], neutron_client, neutron_client) neutron_client.update_port.assert_called_once_with( uuids.port_id, {'port': { 'device_id': '', 'device_owner': '', 'binding:profile': {}, 'binding:host_id': None}}) self.assertTrue(mock_log.called) @mock.patch('nova.network.neutron.API._show_port') @mock.patch.object(neutronapi.LOG, 'exception') def test_unbind_ports_portnotfound(self, mock_log, mock_show): api = neutronapi.API() neutron_client = mock.Mock() neutron_client.update_port = mock.Mock( side_effect=exceptions.PortNotFoundClient) mock_show.return_value = {'id': uuids.port} api._unbind_ports(self.context, [uuids.port_id], neutron_client, neutron_client) neutron_client.update_port.assert_called_once_with( uuids.port_id, {'port': { 'device_id': '', 'device_owner': '', 'binding:profile': {}, 'binding:host_id': None}}) mock_log.assert_not_called() @mock.patch('nova.network.neutron.API._show_port') @mock.patch.object(neutronapi.LOG, 'exception') def test_unbind_ports_unexpected_error(self, mock_log, mock_show): api = neutronapi.API() neutron_client = mock.Mock() neutron_client.update_port = mock.Mock( side_effect=test.TestingException) mock_show.return_value = {'id': uuids.port} api._unbind_ports(self.context, [uuids.port_id], neutron_client, neutron_client) neutron_client.update_port.assert_called_once_with( uuids.port_id, {'port': { 'device_id': '', 'device_owner': '', 'binding:profile': {}, 'binding:host_id': None}}) self.assertTrue(mock_log.called) @mock.patch.object(neutronapi, 'get_client') def test_create_resource_requests_no_allocate(self, mock_get_client): """Ensure physnet info is not retrieved when networks are not to be allocated. 
""" requested_networks = objects.NetworkRequestList(objects=[ objects.NetworkRequest(network_id=net_req_obj.NETWORK_ID_NONE) ]) pci_requests = objects.InstancePCIRequests() api = neutronapi.API() result = api.create_resource_requests( self.context, requested_networks, pci_requests) network_metadata, port_resource_requests = result self.assertFalse(mock_get_client.called) self.assertIsNone(network_metadata) self.assertEqual([], port_resource_requests) @mock.patch.object(neutronapi.API, '_get_physnet_tunneled_info') @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_create_resource_requests_auto_allocated(self, mock_get_client, mock_get_physnet_tunneled_info): """Ensure physnet info is not retrieved for auto-allocated networks. This isn't possible so we shouldn't attempt to do it. """ requested_networks = objects.NetworkRequestList(objects=[ objects.NetworkRequest(network_id=net_req_obj.NETWORK_ID_AUTO) ]) pci_requests = objects.InstancePCIRequests() api = neutronapi.API() result = api.create_resource_requests( self.context, requested_networks, pci_requests) network_metadata, port_resource_requests = result mock_get_physnet_tunneled_info.assert_not_called() self.assertEqual(set(), network_metadata.physnets) self.assertFalse(network_metadata.tunneled) self.assertEqual([], port_resource_requests) @mock.patch('nova.objects.request_spec.RequestGroup.from_port_request') @mock.patch.object(neutronapi.API, '_get_physnet_tunneled_info') @mock.patch.object(neutronapi.API, "_get_port_vnic_info") @mock.patch.object(neutronapi, 'get_client') def test_create_resource_requests(self, getclient, mock_get_port_vnic_info, mock_get_physnet_tunneled_info, mock_from_port_request): requested_networks = objects.NetworkRequestList( objects = [ objects.NetworkRequest(port_id=uuids.portid_1), objects.NetworkRequest(network_id='net1'), objects.NetworkRequest(port_id=uuids.portid_2), objects.NetworkRequest(port_id=uuids.portid_3), objects.NetworkRequest(port_id=uuids.portid_4), objects.NetworkRequest(port_id=uuids.portid_5), objects.NetworkRequest(port_id=uuids.trusted_port)]) pci_requests = objects.InstancePCIRequests(requests=[]) # _get_port_vnic_info should be called for every NetworkRequest with a # port_id attribute (so six times) mock_get_port_vnic_info.side_effect = [ (model.VNIC_TYPE_DIRECT, None, 'netN', None), (model.VNIC_TYPE_NORMAL, None, 'netN', mock.sentinel.resource_request1), (model.VNIC_TYPE_MACVTAP, None, 'netN', None), (model.VNIC_TYPE_MACVTAP, None, 'netN', None), (model.VNIC_TYPE_DIRECT_PHYSICAL, None, 'netN', None), (model.VNIC_TYPE_DIRECT, True, 'netN', mock.sentinel.resource_request2), ] # _get_physnet_tunneled_info should be called for every NetworkRequest # (so seven times) mock_get_physnet_tunneled_info.side_effect = [ ('physnet1', False), ('physnet1', False), ('', True), ('physnet1', False), ('physnet2', False), ('physnet3', False), ('physnet4', False), ] api = neutronapi.API() mock_from_port_request.side_effect = [ mock.sentinel.request_group1, mock.sentinel.request_group2, ] result = api.create_resource_requests( self.context, requested_networks, pci_requests) network_metadata, port_resource_requests = result self.assertEqual([ mock.sentinel.request_group1, mock.sentinel.request_group2], port_resource_requests) self.assertEqual(5, len(pci_requests.requests)) has_pci_request_id = [net.pci_request_id is not None for net in requested_networks.objects] self.assertEqual(pci_requests.requests[3].spec[0]["dev_type"], "type-PF") expected_results = [True, False, False, 
True, True, True, True] self.assertEqual(expected_results, has_pci_request_id) # Make sure only the trusted VF has the 'trusted' tag set in the spec. for pci_req in pci_requests.requests: spec = pci_req.spec[0] if spec[pci_request.PCI_NET_TAG] == 'physnet4': # trusted should be true in the spec for this request self.assertIn(pci_request.PCI_TRUSTED_TAG, spec) self.assertEqual('True', spec[pci_request.PCI_TRUSTED_TAG]) else: self.assertNotIn(pci_request.PCI_TRUSTED_TAG, spec) # Only the port with a resource_request will have pci_req.requester_id. self.assertEqual( [None, None, None, None, uuids.trusted_port], [pci_req.requester_id for pci_req in pci_requests.requests]) self.assertItemsEqual( ['physnet1', 'physnet2', 'physnet3', 'physnet4'], network_metadata.physnets) self.assertTrue(network_metadata.tunneled) mock_from_port_request.assert_has_calls([ mock.call( context=None, port_uuid=uuids.portid_2, port_resource_request=mock.sentinel.resource_request1), mock.call( context=None, port_uuid=uuids.trusted_port, port_resource_request=mock.sentinel.resource_request2), ]) @mock.patch.object(neutronapi, 'get_client') def test_associate_floating_ip_conflict(self, mock_get_client): """Tests that if Neutron raises a Conflict we handle it and re-raise as a nova-specific exception. """ mock_get_client.return_value.update_floatingip.side_effect = ( exceptions.Conflict( "Cannot associate floating IP 172.24.5.15 " "(60a8f00b-4404-4518-ad66-00448a155904) with port " "95ee1ffb-6d41-447d-a90e-b6ce5d9c92fa using fixed IP " "10.1.0.9, as that fixed IP already has a floating IP on " "external network bdcda645-f612-40ab-a956-0d95af42cf7c.") ) with test.nested( mock.patch.object( self.api, '_get_port_id_by_fixed_address', return_value='95ee1ffb-6d41-447d-a90e-b6ce5d9c92fa'), mock.patch.object( self.api, '_get_floating_ip_by_address', return_value={'id': uuids.floating_ip_id}) ) as ( _get_floating_ip_by_address, _get_port_id_by_fixed_address ): instance = fake_instance.fake_instance_obj( self.context, uuid='2a2200ec-02fe-484e-885b-9bae7b21ecba') self.assertRaises(exception.FloatingIpAssociateFailed, self.api.associate_floating_ip, self.context, instance, '172.24.5.15', '10.1.0.9') @mock.patch('nova.network.neutron.get_client') @mock.patch('nova.network.neutron.LOG.warning') @mock.patch('nova.network.neutron.update_instance_cache_with_nw_info') def test_associate_floating_ip_refresh_error_trap(self, mock_update_cache, mock_log_warning, mock_get_client): """Tests that when _update_inst_info_cache_for_disassociated_fip raises an exception, associate_floating_ip traps and logs it but does not re-raise. """ ctxt = context.get_context() instance = fake_instance.fake_instance_obj(ctxt) floating_addr = '172.24.5.15' fixed_addr = '10.1.0.9' fip = {'id': uuids.floating_ip_id, 'port_id': uuids.old_port_id} # Setup the mocks. with test.nested( mock.patch.object(self.api, '_get_port_id_by_fixed_address', return_value=uuids.new_port_id), mock.patch.object(self.api, '_get_floating_ip_by_address', return_value=fip), mock.patch.object(self.api, '_update_inst_info_cache_for_disassociated_fip', side_effect=exception.PortNotFound( port_id=uuids.old_port_id)) ) as ( _get_port_id_by_fixed_address, _get_floating_ip_by_address, _update_inst_info_cache_for_disassociated_fip ): # Run the code. self.api.associate_floating_ip( ctxt, instance, floating_addr, fixed_addr) # Assert the calls. 
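# The PortNotFound raised while refreshing the old instance's info cache
# must be trapped and logged as a warning; the association itself
# succeeded, so the @refresh_cache decorator still updates this
# instance's cache.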
mock_get_client.assert_called_once_with(ctxt) mock_client = mock_get_client.return_value _get_port_id_by_fixed_address.assert_called_once_with( mock_client, instance, fixed_addr) _get_floating_ip_by_address.assert_called_once_with( mock_client, floating_addr) mock_client.update_floatingip.assert_called_once_with( uuids.floating_ip_id, test.MatchType(dict)) _update_inst_info_cache_for_disassociated_fip.assert_called_once_with( ctxt, instance, mock_client, fip) mock_log_warning.assert_called_once() self.assertIn('An error occurred while trying to refresh the ' 'network info cache for an instance associated ' 'with port', mock_log_warning.call_args[0][0]) mock_update_cache.assert_called_once_with( # from @refresh_cache self.api, ctxt, instance, nw_info=None) @mock.patch('nova.network.neutron.update_instance_cache_with_nw_info') def test_update_inst_info_cache_for_disassociated_fip_other_cell( self, mock_update_cache): """Tests a scenario where a floating IP is associated to an instance in another cell from the one in which it's currently associated and the network info cache on the original instance is refreshed. """ ctxt = context.get_context() mock_client = mock.Mock() new_instance = fake_instance.fake_instance_obj(ctxt) cctxt = context.get_context() old_instance = fake_instance.fake_instance_obj(cctxt) fip = {'id': uuids.floating_ip_id, 'port_id': uuids.old_port_id, 'floating_ip_address': '172.24.5.15'} # Setup the mocks. with test.nested( mock.patch.object(self.api, '_show_port', return_value={ 'device_id': old_instance.uuid}), mock.patch.object(self.api, '_get_instance_by_uuid_using_api_db', return_value=old_instance) ) as ( _show_port, _get_instance_by_uuid_using_api_db ): # Run the code. self.api._update_inst_info_cache_for_disassociated_fip( ctxt, new_instance, mock_client, fip) # Assert the calls. _show_port.assert_called_once_with( ctxt, uuids.old_port_id, neutron_client=mock_client) _get_instance_by_uuid_using_api_db.assert_called_once_with( ctxt, old_instance.uuid) mock_update_cache.assert_called_once_with( self.api, cctxt, old_instance) @mock.patch('nova.network.neutron.LOG.info') @mock.patch('nova.network.neutron.update_instance_cache_with_nw_info') def test_update_inst_info_cache_for_disassociated_fip_inst_not_found( self, mock_update_cache, mock_log_info): """Tests the case that a floating IP is re-associated to an instance in another cell but the original instance cannot be found. """ ctxt = context.get_context() mock_client = mock.Mock() new_instance = fake_instance.fake_instance_obj(ctxt) fip = {'id': uuids.floating_ip_id, 'port_id': uuids.old_port_id, 'floating_ip_address': '172.24.5.15'} # Setup the mocks. with test.nested( mock.patch.object(self.api, '_show_port', return_value={ 'device_id': uuids.original_inst_uuid}), mock.patch.object(self.api, '_get_instance_by_uuid_using_api_db', return_value=None) ) as ( _show_port, _get_instance_by_uuid_using_api_db ): # Run the code. self.api._update_inst_info_cache_for_disassociated_fip( ctxt, new_instance, mock_client, fip) # Assert the calls. 
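# The old port's device_id identifies the original instance, which lives
# in a different cell; it is that instance's info cache (under its own
# cell-targeted context) that gets refreshed, not the new instance's.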
_show_port.assert_called_once_with( ctxt, uuids.old_port_id, neutron_client=mock_client) _get_instance_by_uuid_using_api_db.assert_called_once_with( ctxt, uuids.original_inst_uuid) mock_update_cache.assert_not_called() self.assertEqual(2, mock_log_info.call_count) self.assertIn('If the instance still exists, its info cache may ' 'be healed automatically.', mock_log_info.call_args[0][0]) @mock.patch('nova.objects.Instance.get_by_uuid') @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid') def test_get_instance_by_uuid_using_api_db_current_cell( self, mock_get_map, mock_get_inst): """Tests that _get_instance_by_uuid_using_api_db finds the instance in the cell currently targeted by the context. """ ctxt = context.get_context() instance = fake_instance.fake_instance_obj(ctxt) mock_get_inst.return_value = instance inst = self.api._get_instance_by_uuid_using_api_db(ctxt, instance.uuid) self.assertIs(inst, instance) mock_get_inst.assert_called_once_with(ctxt, instance.uuid) mock_get_map.assert_not_called() @mock.patch('nova.objects.Instance.get_by_uuid') @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid') def test_get_instance_by_uuid_using_api_db_other_cell( self, mock_get_map, mock_get_inst): """Tests that _get_instance_by_uuid_using_api_db finds the instance in another cell different from the currently targeted context. """ ctxt = context.get_context() instance = fake_instance.fake_instance_obj(ctxt) mock_get_map.return_value = objects.InstanceMapping( cell_mapping=objects.CellMapping( uuid=uuids.cell_mapping_uuid, database_connection= self.cell_mappings['cell1'].database_connection, transport_url='none://fake')) # Mock get_by_uuid to not find the instance in the first call, but # do find it in the second. Target the instance context as well so # we can assert that we used a different context for the 2nd call. def stub_inst_get_by_uuid(_context, instance_uuid, *args, **kwargs): if not mock_get_map.called: # First call, raise InstanceNotFound. self.assertIs(_context, ctxt) raise exception.InstanceNotFound(instance_id=instance_uuid) # Else return the instance with a newly targeted context. self.assertIsNot(_context, ctxt) instance._context = _context return instance mock_get_inst.side_effect = stub_inst_get_by_uuid inst = self.api._get_instance_by_uuid_using_api_db(ctxt, instance.uuid) # The instance's internal context should still be targeted and not # the original context. self.assertIsNot(inst._context, ctxt) self.assertIsNotNone(inst._context.db_connection) mock_get_map.assert_called_once_with(ctxt, instance.uuid) mock_get_inst.assert_has_calls([ mock.call(ctxt, instance.uuid), mock.call(inst._context, instance.uuid)]) @mock.patch('nova.objects.Instance.get_by_uuid', side_effect=exception.InstanceNotFound(instance_id=uuids.inst)) @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid') def test_get_instance_by_uuid_using_api_db_other_cell_never_found( self, mock_get_map, mock_get_inst): """Tests that _get_instance_by_uuid_using_api_db does not find the instance in either the current cell or another cell. 
""" ctxt = context.get_context() instance = fake_instance.fake_instance_obj(ctxt, uuid=uuids.inst) mock_get_map.return_value = objects.InstanceMapping( cell_mapping=objects.CellMapping( uuid=uuids.cell_mapping_uuid, database_connection= self.cell_mappings['cell1'].database_connection, transport_url='none://fake')) self.assertIsNone( self.api._get_instance_by_uuid_using_api_db(ctxt, instance.uuid)) mock_get_map.assert_called_once_with(ctxt, instance.uuid) mock_get_inst.assert_has_calls([ mock.call(ctxt, instance.uuid), mock.call(test.MatchType(context.RequestContext), instance.uuid)]) @mock.patch('nova.objects.Instance.get_by_uuid', side_effect=exception.InstanceNotFound(instance_id=uuids.inst)) @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid', side_effect=exception.InstanceMappingNotFound(uuid=uuids.inst)) def test_get_instance_by_uuid_using_api_db_other_cell_map_not_found( self, mock_get_map, mock_get_inst): """Tests that _get_instance_by_uuid_using_api_db does not find an instance mapping for the instance. """ ctxt = context.get_context() instance = fake_instance.fake_instance_obj(ctxt, uuid=uuids.inst) self.assertIsNone( self.api._get_instance_by_uuid_using_api_db(ctxt, instance.uuid)) mock_get_inst.assert_called_once_with(ctxt, instance.uuid) mock_get_map.assert_called_once_with(ctxt, instance.uuid) @mock.patch('nova.network.neutron._get_ksa_client', new_callable=mock.NonCallableMock) # asserts not called def test_migrate_instance_start_no_binding_ext(self, get_client_mock): """Tests that migrate_instance_start exits early if neutron doesn't have the binding-extended API extension. """ with mock.patch.object(self.api, 'supports_port_binding_extension', return_value=False): self.api.migrate_instance_start( self.context, mock.sentinel.instance, {}) @mock.patch('nova.network.neutron._get_ksa_client') def test_migrate_instance_start_activate(self, get_client_mock): """Tests the happy path for migrate_instance_start where the binding for the port(s) attached to the instance are activated on the destination host. """ binding = {'binding': {'status': 'INACTIVE'}} resp = fake_req.FakeResponse(200, content=jsonutils.dumps(binding)) get_client_mock.return_value.get.return_value = resp # Just create a simple instance with a single port. instance = objects.Instance(info_cache=objects.InstanceInfoCache( network_info=model.NetworkInfo([model.VIF(uuids.port_id)]))) migration = {'source_compute': 'source', 'dest_compute': 'dest'} with mock.patch.object(self.api, 'activate_port_binding') as activate: with mock.patch.object(self.api, 'supports_port_binding_extension', return_value=True): self.api.migrate_instance_start( self.context, instance, migration) activate.assert_called_once_with(self.context, uuids.port_id, 'dest') get_client_mock.return_value.get.assert_called_once_with( '/v2.0/ports/%s/bindings/dest' % uuids.port_id, raise_exc=False, global_request_id=self.context.global_id) @mock.patch('nova.network.neutron._get_ksa_client') def test_migrate_instance_start_already_active(self, get_client_mock): """Tests the case that the destination host port binding is already ACTIVE when migrate_instance_start is called so we don't try to activate it again, which would result in a 409 from Neutron. """ binding = {'binding': {'status': 'ACTIVE'}} resp = fake_req.FakeResponse(200, content=jsonutils.dumps(binding)) get_client_mock.return_value.get.return_value = resp # Just create a simple instance with a single port. 
instance = objects.Instance(info_cache=objects.InstanceInfoCache( network_info=model.NetworkInfo([model.VIF(uuids.port_id)]))) migration = {'source_compute': 'source', 'dest_compute': 'dest'} with mock.patch.object(self.api, 'activate_port_binding', new_callable=mock.NonCallableMock): with mock.patch.object(self.api, 'supports_port_binding_extension', return_value=True): self.api.migrate_instance_start( self.context, instance, migration) get_client_mock.return_value.get.assert_called_once_with( '/v2.0/ports/%s/bindings/dest' % uuids.port_id, raise_exc=False, global_request_id=self.context.global_id) @mock.patch('nova.network.neutron._get_ksa_client') def test_migrate_instance_start_no_bindings(self, get_client_mock): """Tests the case that migrate_instance_start is running against new enough neutron for the binding-extended API but the ports don't have a binding resource against the destination host, so no activation happens. """ get_client_mock.return_value.get.return_value = ( fake_req.FakeResponse(404)) # Create an instance with two ports so we can test the short circuit # when we find that the first port doesn't have a dest host binding. instance = objects.Instance(info_cache=objects.InstanceInfoCache( network_info=model.NetworkInfo([ model.VIF(uuids.port1), model.VIF(uuids.port2)]))) migration = {'source_compute': 'source', 'dest_compute': 'dest'} with mock.patch.object(self.api, 'activate_port_binding', new_callable=mock.NonCallableMock): with mock.patch.object(self.api, 'supports_port_binding_extension', return_value=True): self.api.migrate_instance_start( self.context, instance, migration) get_client_mock.return_value.get.assert_called_once_with( '/v2.0/ports/%s/bindings/dest' % uuids.port1, raise_exc=False, global_request_id=self.context.global_id) @mock.patch('nova.network.neutron._get_ksa_client') def test_migrate_instance_start_get_error(self, get_client_mock): """Tests the case that migrate_instance_start is running against new enough neutron for the binding-extended API but getting the port binding information results in an error response from neutron. 
""" get_client_mock.return_value.get.return_value = ( fake_req.FakeResponse(500)) instance = objects.Instance(info_cache=objects.InstanceInfoCache( network_info=model.NetworkInfo([ model.VIF(uuids.port1), model.VIF(uuids.port2)]))) migration = {'source_compute': 'source', 'dest_compute': 'dest'} with mock.patch.object(self.api, 'activate_port_binding', new_callable=mock.NonCallableMock): with mock.patch.object(self.api, 'supports_port_binding_extension', return_value=True): self.api.migrate_instance_start( self.context, instance, migration) self.assertEqual(2, get_client_mock.return_value.get.call_count) get_client_mock.return_value.get.assert_has_calls([ mock.call( '/v2.0/ports/%s/bindings/dest' % uuids.port1, raise_exc=False, global_request_id=self.context.global_id), mock.call( '/v2.0/ports/%s/bindings/dest' % uuids.port2, raise_exc=False, global_request_id=self.context.global_id)]) @mock.patch('nova.network.neutron.get_client') def test_get_requested_resource_for_instance_no_resource_request( self, mock_get_client): mock_client = mock_get_client.return_value ports = {'ports': [ { 'id': uuids.port1, 'device_id': uuids.isnt1, } ]} mock_client.list_ports.return_value = ports request_groups = self.api.get_requested_resource_for_instance( self.context, uuids.inst1) mock_client.list_ports.assert_called_with( device_id=uuids.inst1, fields=['id', 'resource_request']) self.assertEqual([], request_groups) @mock.patch('nova.network.neutron.get_client') def test_get_requested_resource_for_instance_no_ports( self, mock_get_client): mock_client = mock_get_client.return_value ports = {'ports': []} mock_client.list_ports.return_value = ports request_groups = self.api.get_requested_resource_for_instance( self.context, uuids.inst1) mock_client.list_ports.assert_called_with( device_id=uuids.inst1, fields=['id', 'resource_request']) self.assertEqual([], request_groups) @mock.patch('nova.network.neutron.get_client') def test_get_requested_resource_for_instance_with_multiple_ports( self, mock_get_client): mock_client = mock_get_client.return_value ports = {'ports': [ { 'id': uuids.port1, 'device_id': uuids.isnt1, 'resource_request': { 'resources': {'NET_BW_EGR_KILOBIT_PER_SEC': 10000}} }, { 'id': uuids.port2, 'device_id': uuids.isnt1, 'resource_request': {} }, ]} mock_client.list_ports.return_value = ports request_groups = self.api.get_requested_resource_for_instance( self.context, uuids.inst1) mock_client.list_ports.assert_called_with( device_id=uuids.inst1, fields=['id', 'resource_request']) self.assertEqual(1, len(request_groups)) self.assertEqual( {'NET_BW_EGR_KILOBIT_PER_SEC': 10000}, request_groups[0].resources) self.assertEqual( uuids.port1, request_groups[0].requester_id) mock_get_client.assert_called_once_with(self.context, admin=True) class TestAPIModuleMethods(test.NoDBTestCase): def test_gather_port_ids_and_networks_wrong_params(self): api = neutronapi.API() # Test with networks not None and port_ids is None self.assertRaises(exception.NovaException, api._gather_port_ids_and_networks, 'fake_context', 'fake_instance', [{'network': {'name': 'foo'}}], None) # Test with networks is None and port_ids not None self.assertRaises(exception.NovaException, api._gather_port_ids_and_networks, 'fake_context', 'fake_instance', None, ['list', 'of', 'port_ids']) def test_ensure_requested_network_ordering_no_preference_ids(self): networks = [1, 2, 3] neutronapi._ensure_requested_network_ordering( lambda x: x, networks, None) def test_ensure_requested_network_ordering_no_preference_hashes(self): networks = 
[{'id': 3}, {'id': 1}, {'id': 2}] neutronapi._ensure_requested_network_ordering( lambda x: x['id'], networks, None) self.assertEqual(networks, [{'id': 3}, {'id': 1}, {'id': 2}]) def test_ensure_requested_network_ordering_with_preference(self): networks = [{'id': 3}, {'id': 1}, {'id': 2}] neutronapi._ensure_requested_network_ordering( lambda x: x['id'], networks, [1, 2, 3]) self.assertEqual(networks, [{'id': 1}, {'id': 2}, {'id': 3}]) class TestAPIPortbinding(TestAPIBase): def test_allocate_for_instance_portbinding(self): self._test_allocate_for_instance_with_virtual_interface( 1, bind_host_id=self.instance.get('host')) @mock.patch.object(neutronapi, 'get_client') def test_populate_neutron_extension_values_binding(self, mock_get_client): mocked_client = mock.create_autospec(client.Client) mock_get_client.return_value = mocked_client mocked_client.list_extensions.return_value = {'extensions': []} host_id = 'my_host_id' instance = {'host': host_id} port_req_body = {'port': {}} self.api._populate_neutron_extension_values( self.context, instance, None, port_req_body, bind_host_id=host_id) self.assertEqual(host_id, port_req_body['port'][ constants.BINDING_HOST_ID]) self.assertFalse(port_req_body['port'].get( constants.BINDING_PROFILE)) mock_get_client.assert_called_once_with(mock.ANY) mocked_client.list_extensions.assert_called_once_with() @mock.patch.object(pci_whitelist.Whitelist, 'get_devspec') @mock.patch.object(pci_manager, 'get_instance_pci_devs') def test_populate_neutron_extension_values_binding_sriov(self, mock_get_instance_pci_devs, mock_get_pci_device_devspec): host_id = 'my_host_id' instance = {'host': host_id} port_req_body = {'port': {}} pci_req_id = 'my_req_id' pci_dev = {'vendor_id': '1377', 'product_id': '0047', 'address': '0000:0a:00.1', } PciDevice = collections.namedtuple('PciDevice', ['vendor_id', 'product_id', 'address']) mydev = PciDevice(**pci_dev) profile = {'pci_vendor_info': '1377:0047', 'pci_slot': '0000:0a:00.1', 'physical_network': 'physnet1', } mock_get_instance_pci_devs.return_value = [mydev] devspec = mock.Mock() devspec.get_tags.return_value = {'physical_network': 'physnet1'} mock_get_pci_device_devspec.return_value = devspec self.api._populate_neutron_binding_profile(instance, pci_req_id, port_req_body) self.assertEqual(profile, port_req_body['port'][ constants.BINDING_PROFILE]) @mock.patch.object(pci_whitelist.Whitelist, 'get_devspec') @mock.patch.object(pci_manager, 'get_instance_pci_devs') def test_populate_neutron_extension_values_binding_sriov_with_cap(self, mock_get_instance_pci_devs, mock_get_pci_device_devspec): host_id = 'my_host_id' instance = {'host': host_id} port_req_body = {'port': { constants.BINDING_PROFILE: { 'capabilities': ['switchdev']}}} pci_req_id = 'my_req_id' pci_dev = {'vendor_id': '1377', 'product_id': '0047', 'address': '0000:0a:00.1', } PciDevice = collections.namedtuple('PciDevice', ['vendor_id', 'product_id', 'address']) mydev = PciDevice(**pci_dev) profile = {'pci_vendor_info': '1377:0047', 'pci_slot': '0000:0a:00.1', 'physical_network': 'physnet1', 'capabilities': ['switchdev'], } mock_get_instance_pci_devs.return_value = [mydev] devspec = mock.Mock() devspec.get_tags.return_value = {'physical_network': 'physnet1'} mock_get_pci_device_devspec.return_value = devspec self.api._populate_neutron_binding_profile(instance, pci_req_id, port_req_body) self.assertEqual(profile, port_req_body['port'][ constants.BINDING_PROFILE]) @mock.patch.object(pci_whitelist.Whitelist, 'get_devspec') @mock.patch.object(pci_manager, 
'get_instance_pci_devs') def test_populate_neutron_extension_values_binding_sriov_fail( self, mock_get_instance_pci_devs, mock_get_pci_device_devspec): host_id = 'my_host_id' instance = {'host': host_id} port_req_body = {'port': {}} pci_req_id = 'my_req_id' pci_objs = [objects.PciDevice(vendor_id='1377', product_id='0047', address='0000:0a:00.1', compute_node_id=1, request_id='1234567890')] mock_get_instance_pci_devs.return_value = pci_objs mock_get_pci_device_devspec.return_value = None self.assertRaises( exception.PciDeviceNotFound, self.api._populate_neutron_binding_profile, instance, pci_req_id, port_req_body) @mock.patch.object(pci_manager, 'get_instance_pci_devs', return_value=[]) def test_populate_neutron_binding_profile_pci_dev_not_found( self, mock_get_instance_pci_devs): api = neutronapi.API() instance = objects.Instance(pci_devices=objects.PciDeviceList()) port_req_body = {'port': {}} pci_req_id = 'my_req_id' self.assertRaises(exception.PciDeviceNotFound, api._populate_neutron_binding_profile, instance, pci_req_id, port_req_body) mock_get_instance_pci_devs.assert_called_once_with( instance, pci_req_id) @mock.patch.object(pci_manager, 'get_instance_pci_devs') def test_pci_parse_whitelist_called_once(self, mock_get_instance_pci_devs): white_list = [ '{"address":"0000:0a:00.1","physical_network":"default"}'] cfg.CONF.set_override('passthrough_whitelist', white_list, 'pci') # NOTE(takashin): neutronapi.API must be initialized # after the 'passthrough_whitelist' is set in this test case. api = neutronapi.API() host_id = 'my_host_id' instance = {'host': host_id} pci_req_id = 'my_req_id' port_req_body = {'port': {}} pci_dev = {'vendor_id': '1377', 'product_id': '0047', 'address': '0000:0a:00.1', } whitelist = pci_whitelist.Whitelist(CONF.pci.passthrough_whitelist) with mock.patch.object(pci_whitelist.Whitelist, '_parse_white_list_from_config', wraps=whitelist._parse_white_list_from_config ) as mock_parse_whitelist: for i in range(2): mydev = objects.PciDevice.create(None, pci_dev) mock_get_instance_pci_devs.return_value = [mydev] api._populate_neutron_binding_profile(instance, pci_req_id, port_req_body) self.assertEqual(0, mock_parse_whitelist.call_count) def _populate_pci_mac_address_fakes(self): instance = fake_instance.fake_instance_obj(self.context) pci_dev = {'vendor_id': '1377', 'product_id': '0047', 'address': '0000:0a:00.1', 'dev_type': 'type-PF'} pf = objects.PciDevice() vf = objects.PciDevice() pf.update_device(pci_dev) pci_dev['dev_type'] = 'type-VF' vf.update_device(pci_dev) return instance, pf, vf @mock.patch.object(pci_manager, 'get_instance_pci_devs') @mock.patch.object(pci_utils, 'get_mac_by_pci_address') def test_populate_pci_mac_address_pf(self, mock_get_mac_by_pci_address, mock_get_instance_pci_devs): instance, pf, vf = self._populate_pci_mac_address_fakes() port_req_body = {'port': {}} mock_get_instance_pci_devs.return_value = [pf] mock_get_mac_by_pci_address.return_value = 'fake-mac-address' expected_port_req_body = {'port': {'mac_address': 'fake-mac-address'}} req = port_req_body.copy() self.api._populate_pci_mac_address(instance, 0, req) self.assertEqual(expected_port_req_body, req) @mock.patch.object(pci_manager, 'get_instance_pci_devs') @mock.patch.object(pci_utils, 'get_mac_by_pci_address') def test_populate_pci_mac_address_vf(self, mock_get_mac_by_pci_address, mock_get_instance_pci_devs): instance, pf, vf = self._populate_pci_mac_address_fakes() port_req_body = {'port': {}} mock_get_instance_pci_devs.return_value = [vf] req = port_req_body.copy() 
self.api._populate_pci_mac_address(instance, 42, port_req_body) self.assertEqual(port_req_body, req) @mock.patch.object(pci_manager, 'get_instance_pci_devs') @mock.patch.object(pci_utils, 'get_mac_by_pci_address') def test_populate_pci_mac_address_vf_fail(self, mock_get_mac_by_pci_address, mock_get_instance_pci_devs): instance, pf, vf = self._populate_pci_mac_address_fakes() port_req_body = {'port': {}} mock_get_instance_pci_devs.return_value = [vf] mock_get_mac_by_pci_address.side_effect = ( exception.PciDeviceNotFoundById) req = port_req_body.copy() self.api._populate_pci_mac_address(instance, 42, port_req_body) self.assertEqual(port_req_body, req) @mock.patch.object(pci_manager, 'get_instance_pci_devs') @mock.patch('nova.network.neutron.LOG.error') def test_populate_pci_mac_address_no_device(self, mock_log_error, mock_get_instance_pci_devs): instance, pf, vf = self._populate_pci_mac_address_fakes() port_req_body = {'port': {}} mock_get_instance_pci_devs.return_value = [] req = port_req_body.copy() self.api._populate_pci_mac_address(instance, 42, port_req_body) self.assertEqual(port_req_body, req) self.assertEqual(42, mock_log_error.call_args[0][1]) def _test_update_port_binding_true(self, expected_bind_host, func_name, *args): func = getattr(self.api, func_name) search_opts = {'device_id': self.instance['uuid'], 'tenant_id': self.instance['project_id']} ports = {'ports': [{'id': 'test1'}]} port_req_body = {'port': {constants.BINDING_HOST_ID: expected_bind_host, 'device_owner': 'compute:%s' % self.instance['availability_zone']}} mocked_client = mock.create_autospec(client.Client) mocked_client.list_ports.return_value = ports mocked_client.update_port.return_value = None with mock.patch.object(neutronapi, 'get_client', return_value=mocked_client) as mock_get_client: func(*args) mock_get_client.assert_called_once_with(mock.ANY, admin=True) mocked_client.list_ports.assert_called_once_with(**search_opts) mocked_client.update_port.assert_called_once_with( 'test1', port_req_body) def _test_update_port_true_exception(self, expected_bind_host, func_name, *args): func = getattr(self.api, func_name) search_opts = {'device_id': self.instance['uuid'], 'tenant_id': self.instance['project_id']} ports = {'ports': [{'id': 'test1'}]} port_req_body = {'port': {constants.BINDING_HOST_ID: expected_bind_host, 'device_owner': 'compute:%s' % self.instance['availability_zone']}} mocked_client = mock.create_autospec(client.Client) mocked_client.list_ports.return_value = ports mocked_client.update_port.side_effect = Exception( "fail to update port") with mock.patch.object(neutronapi, 'get_client', return_value=mocked_client) as mock_get_client: self.assertRaises(NEUTRON_CLIENT_EXCEPTION, func, *args) mock_get_client.assert_called_once_with(mock.ANY, admin=True) mocked_client.list_ports.assert_called_once_with(**search_opts) mocked_client.update_port.assert_called_once_with( 'test1', port_req_body) def test_migrate_instance_finish_binding_true(self): migration = objects.Migration(source_compute=self.instance.get('host'), dest_compute='dest_host') instance = self._fake_instance_object(self.instance) self._test_update_port_binding_true('dest_host', 'migrate_instance_finish', self.context, instance, migration, {}) def test_migrate_instance_finish_binding_true_exception(self): migration = objects.Migration(source_compute=self.instance.get('host'), dest_compute='dest_host') instance = self._fake_instance_object(self.instance) self._test_update_port_true_exception('dest_host', 'migrate_instance_finish', self.context, 
                                              instance, migration, {})

    def test_setup_instance_network_on_host_true(self):
        instance = self._fake_instance_object(self.instance)
        self._test_update_port_binding_true('fake_host',
                                            'setup_instance_network_on_host',
                                            self.context, instance,
                                            'fake_host')

    def test_setup_instance_network_on_host_exception(self):
        instance = self._fake_instance_object(self.instance)
        self._test_update_port_true_exception(
            'fake_host', 'setup_instance_network_on_host',
            self.context, instance, 'fake_host')

    @mock.patch('nova.network.neutron._get_ksa_client',
                new_callable=mock.NonCallableMock)
    def test_bind_ports_to_host_no_ports(self, mock_client):
        self.assertDictEqual({}, self.api.bind_ports_to_host(
            mock.sentinel.context, objects.Instance(info_cache=None),
            'fake-host'))

    @mock.patch('nova.network.neutron._get_ksa_client')
    def test_bind_ports_to_host(self, mock_client):
        """Tests a single port happy path where everything is successful."""
        def post_side_effect(*args, **kwargs):
            self.assertDictEqual(binding, kwargs['json'])
            return mock.DEFAULT

        nwinfo = model.NetworkInfo([model.VIF(uuids.port)])
        inst = objects.Instance(
            info_cache=objects.InstanceInfoCache(network_info=nwinfo))
        ctxt = context.get_context()
        binding = {'binding': {'host': 'fake-host',
                               'vnic_type': 'normal',
                               'profile': {'foo': 'bar'}}}
        resp = fake_req.FakeResponse(200, content=jsonutils.dumps(binding))
        mock_client.return_value.post.return_value = resp
        mock_client.return_value.post.side_effect = post_side_effect
        result = self.api.bind_ports_to_host(
            ctxt, inst, 'fake-host', {uuids.port: 'normal'},
            {uuids.port: {'foo': 'bar'}})
        self.assertEqual(1, mock_client.return_value.post.call_count)
        self.assertDictEqual({uuids.port: binding['binding']}, result)

    @mock.patch('nova.network.neutron._get_ksa_client')
    def test_bind_ports_to_host_with_vif_profile_and_vnic(self, mock_client):
        """Tests bind_ports_to_host with default/non-default parameters."""
        def post_side_effect(*args, **kwargs):
            self.assertDictEqual(binding, kwargs['json'])
            return mock.DEFAULT

        ctxt = context.get_context()
        vif_profile = {'foo': 'default'}
        nwinfo = model.NetworkInfo([model.VIF(id=uuids.port,
                                              vnic_type="direct",
                                              profile=vif_profile)])
        inst = objects.Instance(
            info_cache=objects.InstanceInfoCache(network_info=nwinfo))
        binding = {'binding': {'host': 'fake-host',
                               'vnic_type': 'direct',
                               'profile': vif_profile}}
        resp = fake_req.FakeResponse(200, content=jsonutils.dumps(binding))
        mock_client.return_value.post.return_value = resp
        mock_client.return_value.post.side_effect = post_side_effect
        result = self.api.bind_ports_to_host(ctxt, inst, 'fake-host')
        self.assertEqual(1, mock_client.return_value.post.call_count)
        self.assertDictEqual({uuids.port: binding['binding']}, result)
        # assert that if vnic_type and profile are set in the VIF object,
        # the provided vnic_type and profile take precedence.
nwinfo = model.NetworkInfo([model.VIF(id=uuids.port, vnic_type='direct', profile=vif_profile)]) inst = objects.Instance( info_cache=objects.InstanceInfoCache(network_info=nwinfo)) vif_profile_per_port = {uuids.port: {'foo': 'overridden'}} vnic_type_per_port = {uuids.port: "direct-overridden"} binding = {'binding': {'host': 'fake-host', 'vnic_type': 'direct-overridden', 'profile': {'foo': 'overridden'}}} resp = fake_req.FakeResponse(200, content=jsonutils.dumps(binding)) mock_client.return_value.post.return_value = resp result = self.api.bind_ports_to_host( ctxt, inst, 'fake-host', vnic_type_per_port, vif_profile_per_port) self.assertEqual(2, mock_client.return_value.post.call_count) self.assertDictEqual({uuids.port: binding['binding']}, result) @mock.patch('nova.network.neutron._get_ksa_client') def test_bind_ports_to_host_rollback(self, mock_client): """Tests a scenario where an instance has two ports, and binding the first is successful but binding the second fails, so the code will rollback the binding for the first port. """ nwinfo = model.NetworkInfo([ model.VIF(uuids.ok), model.VIF(uuids.fail)]) inst = objects.Instance( info_cache=objects.InstanceInfoCache(network_info=nwinfo)) def fake_post(url, *args, **kwargs): if uuids.ok in url: mock_response = fake_req.FakeResponse( 200, content='{"binding": {"host": "fake-host"}}') else: mock_response = fake_req.FakeResponse(500, content='error') return mock_response mock_client.return_value.post.side_effect = fake_post with mock.patch.object(self.api, 'delete_port_binding', # This will be logged but not re-raised. side_effect=exception.PortBindingDeletionFailed( port_id=uuids.ok, host='fake-host' )) as mock_delete: self.assertRaises(exception.PortBindingFailed, self.api.bind_ports_to_host, self.context, inst, 'fake-host') # assert that post was called twice and delete once self.assertEqual(2, mock_client.return_value.post.call_count) mock_delete.assert_called_once_with(self.context, uuids.ok, 'fake-host') @mock.patch('nova.network.neutron._get_ksa_client') def test_delete_port_binding(self, mock_client): # Create three ports where: # - one is successfully unbound # - one is not found # - one fails to be unbound def fake_delete(url, *args, **kwargs): if uuids.ok in url: return fake_req.FakeResponse(204) else: status_code = 404 if uuids.notfound in url else 500 return fake_req.FakeResponse(status_code) mock_client.return_value.delete.side_effect = fake_delete for port_id in (uuids.ok, uuids.notfound, uuids.fail): if port_id == uuids.fail: self.assertRaises(exception.PortBindingDeletionFailed, self.api.delete_port_binding, self.context, port_id, 'fake-host') else: self.api.delete_port_binding(self.context, port_id, 'fake-host') @mock.patch('nova.network.neutron._get_ksa_client') def test_activate_port_binding(self, mock_client): """Tests the happy path of activating an inactive port binding.""" resp = fake_req.FakeResponse(200) mock_client.return_value.put.return_value = resp self.api.activate_port_binding(self.context, uuids.port_id, 'fake-host') mock_client.return_value.put.assert_called_once_with( '/v2.0/ports/%s/bindings/fake-host/activate' % uuids.port_id, raise_exc=False, global_request_id=self.context.global_id) @mock.patch('nova.network.neutron._get_ksa_client') @mock.patch('nova.network.neutron.LOG.warning') def test_activate_port_binding_already_active( self, mock_log_warning, mock_client): """Tests the 409 case of activating an already active port binding.""" mock_client.return_value.put.return_value = fake_req.FakeResponse(409) 
self.api.activate_port_binding(self.context, uuids.port_id, 'fake-host') mock_client.return_value.put.assert_called_once_with( '/v2.0/ports/%s/bindings/fake-host/activate' % uuids.port_id, raise_exc=False, global_request_id=self.context.global_id) self.assertEqual(1, mock_log_warning.call_count) self.assertIn('is already active', mock_log_warning.call_args[0][0]) @mock.patch('nova.network.neutron._get_ksa_client') def test_activate_port_binding_fails(self, mock_client): """Tests the unknown error case of binding activation.""" mock_client.return_value.put.return_value = fake_req.FakeResponse(500) self.assertRaises(exception.PortBindingActivationFailed, self.api.activate_port_binding, self.context, uuids.port_id, 'fake-host') mock_client.return_value.put.assert_called_once_with( '/v2.0/ports/%s/bindings/fake-host/activate' % uuids.port_id, raise_exc=False, global_request_id=self.context.global_id) class TestAllocateForInstance(test.NoDBTestCase): def setUp(self): super(TestAllocateForInstance, self).setUp() self.context = context.RequestContext('userid', uuids.my_tenant) self.instance = objects.Instance(uuid=uuids.instance, project_id=uuids.tenant_id, hostname="host") def test_allocate_for_instance_raises_invalid_input(self): api = neutronapi.API() self.instance.project_id = "" self.assertRaises(exception.InvalidInput, api.allocate_for_instance, self.context, self.instance, False, None) @mock.patch.object(neutronapi.API, 'get_instance_nw_info') @mock.patch.object(neutronapi.API, '_update_ports_for_instance') @mock.patch.object(neutronapi.API, '_create_ports_for_instance') @mock.patch.object(neutronapi.API, '_process_security_groups') @mock.patch.object(neutronapi.API, '_clean_security_groups') @mock.patch.object(neutronapi.API, '_validate_requested_network_ids') @mock.patch.object(neutronapi.API, '_validate_requested_port_ids') @mock.patch.object(neutronapi, 'get_client') def test_allocate_for_instance_minimal_args(self, mock_get_client, mock_validate_ports, mock_validate_nets, mock_clean_sg, mock_sg, mock_create_ports, mock_update_ports, mock_gni): api = neutronapi.API() mock_get_client.side_effect = ["user", "admin"] mock_validate_ports.return_value = ({}, "ordered_nets") mock_validate_nets.return_value = "nets" mock_clean_sg.return_value = "security_groups" mock_sg.return_value = "security_group_ids" mock_create_ports.return_value = "requests_and_created_ports" mock_update_ports.return_value = ( "nets", "ports", [uuids.preexist], [uuids.created]) mock_gni.return_value = [ {"id": uuids.created}, {"id": uuids.preexist}, {"id": "foo"} ] result = api.allocate_for_instance(self.context, self.instance, False, None) self.assertEqual(len(result), 2) self.assertEqual(result[0], {"id": uuids.created}) self.assertEqual(result[1], {"id": uuids.preexist}) mock_validate_ports.assert_called_once_with( self.context, self.instance, "admin", None, attach=False) def test_ensure_no_port_binding_failure_raises(self): port = { 'id': uuids.port_id, 'binding:vif_type': model.VIF_TYPE_BINDING_FAILED } self.assertRaises(exception.PortBindingFailed, neutronapi._ensure_no_port_binding_failure, port) def test_ensure_no_port_binding_failure_passes_if_no_binding(self): port = {'id': uuids.port_id} neutronapi._ensure_no_port_binding_failure(port) def test_validate_requested_port_ids_no_ports(self): api = neutronapi.API() mock_client = mock.Mock() network_list = [objects.NetworkRequest(network_id='net-1')] requested_networks = objects.NetworkRequestList(objects=network_list) ports, ordered_networks = 
api._validate_requested_port_ids( self.context, self.instance, mock_client, requested_networks) self.assertEqual({}, ports) self.assertEqual(network_list, ordered_networks) def test_validate_requested_port_ids_success(self): api = neutronapi.API() mock_client = mock.Mock() requested_networks = objects.NetworkRequestList(objects=[ objects.NetworkRequest(network_id='net-1'), objects.NetworkRequest(port_id=uuids.port_id)]) port = { "id": uuids.port_id, "tenant_id": uuids.tenant_id, "network_id": 'net-2' } mock_client.show_port.return_value = {"port": port} ports, ordered_networks = api._validate_requested_port_ids( self.context, self.instance, mock_client, requested_networks) mock_client.show_port.assert_called_once_with(uuids.port_id) self.assertEqual({uuids.port_id: port}, ports) self.assertEqual(2, len(ordered_networks)) self.assertEqual(requested_networks[0], ordered_networks[0]) self.assertEqual('net-2', ordered_networks[1].network_id) def _assert_validate_requested_port_ids_raises(self, exception, extras, attach=False): api = neutronapi.API() mock_client = mock.Mock() requested_networks = objects.NetworkRequestList(objects=[ objects.NetworkRequest(port_id=uuids.port_id)]) port = { "id": uuids.port_id, "tenant_id": uuids.tenant_id, "network_id": 'net-2' } port.update(extras) mock_client.show_port.return_value = {"port": port} self.assertRaises(exception, api._validate_requested_port_ids, self.context, self.instance, mock_client, requested_networks, attach=attach) def test_validate_requested_port_ids_raise_not_usable(self): self._assert_validate_requested_port_ids_raises( exception.PortNotUsable, {"tenant_id": "foo"}) def test_validate_requested_port_ids_raise_in_use(self): self._assert_validate_requested_port_ids_raises( exception.PortInUse, {"device_id": "foo"}) def test_validate_requested_port_ids_raise_dns(self): self._assert_validate_requested_port_ids_raises( exception.PortNotUsableDNS, {"dns_name": "foo"}) def test_validate_requested_port_ids_raise_binding(self): self._assert_validate_requested_port_ids_raises( exception.PortBindingFailed, {"binding:vif_type": model.VIF_TYPE_BINDING_FAILED}) def test_validate_requested_port_ids_raise_sriov(self): self._assert_validate_requested_port_ids_raises( exception.AttachSRIOVPortNotSupported, {"binding:vnic_type": model.VNIC_TYPE_DIRECT}, attach=True) def test_validate_requested_network_ids_success_auto_net(self): requested_networks = [] ordered_networks = [] api = neutronapi.API() mock_client = mock.Mock() nets = [{'id': "net1"}] mock_client.list_networks.side_effect = [{}, {"networks": nets}] result = api._validate_requested_network_ids(self.context, self.instance, mock_client, requested_networks, ordered_networks) self.assertEqual(nets, list(result.values())) expected_call_list = [ mock.call(shared=False, tenant_id=uuids.tenant_id), mock.call(shared=True) ] self.assertEqual(expected_call_list, mock_client.list_networks.call_args_list) def test_validate_requested_network_ids_success_found_net(self): ordered_networks = [objects.NetworkRequest(network_id="net1")] requested_networks = objects.NetworkRequestList(ordered_networks) api = neutronapi.API() mock_client = mock.Mock() nets = [{'id': "net1"}] mock_client.list_networks.return_value = {"networks": nets} result = api._validate_requested_network_ids(self.context, self.instance, mock_client, requested_networks, ordered_networks) self.assertEqual(nets, list(result.values())) mock_client.list_networks.assert_called_once_with(id=['net1']) def 
test_validate_requested_network_ids_success_no_nets(self): requested_networks = [] ordered_networks = [] api = neutronapi.API() mock_client = mock.Mock() mock_client.list_networks.side_effect = [{}, {"networks": []}] result = api._validate_requested_network_ids(self.context, self.instance, mock_client, requested_networks, ordered_networks) self.assertEqual({}, result) expected_call_list = [ mock.call(shared=False, tenant_id=uuids.tenant_id), mock.call(shared=True) ] self.assertEqual(expected_call_list, mock_client.list_networks.call_args_list) def _assert_validate_requested_network_ids_raises(self, exception, nets, requested_networks=None): ordered_networks = [] if requested_networks is None: requested_networks = objects.NetworkRequestList() api = neutronapi.API() mock_client = mock.Mock() mock_client.list_networks.side_effect = [{}, {"networks": nets}] self.assertRaises(exception, api._validate_requested_network_ids, self.context, self.instance, mock_client, requested_networks, ordered_networks) def test_validate_requested_network_ids_raises_forbidden(self): rules = {'network:attach_external_network': 'is_admin:True'} policy.set_rules(oslo_policy.Rules.from_dict(rules)) self._assert_validate_requested_network_ids_raises( exception.ExternalNetworkAttachForbidden, [{'id': "net1", 'router:external': True, 'shared': False}]) def test_validate_requested_network_ids_raises_net_not_found(self): requested_networks = objects.NetworkRequestList(objects=[ objects.NetworkRequest(network_id="1")]) self._assert_validate_requested_network_ids_raises( exception.NetworkNotFound, [], requested_networks=requested_networks) def test_validate_requested_network_ids_raises_too_many_nets(self): self._assert_validate_requested_network_ids_raises( exception.NetworkAmbiguous, [{'id': "net1"}, {'id': "net2"}]) def test_create_ports_for_instance_no_security(self): api = neutronapi.API() ordered_networks = [objects.NetworkRequest(network_id=uuids.net)] nets = {uuids.net: {"id": uuids.net, "port_security_enabled": False}} mock_client = mock.Mock() mock_client.create_port.return_value = {"port": {"id": uuids.port}} result = api._create_ports_for_instance(self.context, self.instance, ordered_networks, nets, mock_client, None) self.assertEqual([(ordered_networks[0], uuids.port)], result) mock_client.create_port.assert_called_once_with( {'port': { 'network_id': uuids.net, 'tenant_id': uuids.tenant_id, 'admin_state_up': True, 'device_id': self.instance.uuid}}) def test_create_ports_for_instance_with_security_groups(self): api = neutronapi.API() ordered_networks = [objects.NetworkRequest(network_id=uuids.net)] nets = {uuids.net: {"id": uuids.net, "subnets": [uuids.subnet]}} mock_client = mock.Mock() mock_client.create_port.return_value = {"port": {"id": uuids.port}} security_groups = [uuids.sg] result = api._create_ports_for_instance(self.context, self.instance, ordered_networks, nets, mock_client, security_groups) self.assertEqual([(ordered_networks[0], uuids.port)], result) mock_client.create_port.assert_called_once_with( {'port': { 'network_id': uuids.net, 'tenant_id': uuids.tenant_id, 'admin_state_up': True, 'security_groups': security_groups, 'device_id': self.instance.uuid}}) def test_create_ports_for_instance_with_cleanup_after_pc_failure(self): api = neutronapi.API() ordered_networks = [ objects.NetworkRequest(network_id=uuids.net1), objects.NetworkRequest(network_id=uuids.net2), objects.NetworkRequest(network_id=uuids.net3), objects.NetworkRequest(network_id=uuids.net4) ] nets = { uuids.net1: {"id": uuids.net1, 
"port_security_enabled": False}, uuids.net2: {"id": uuids.net2, "port_security_enabled": False}, uuids.net3: {"id": uuids.net3, "port_security_enabled": False}, uuids.net4: {"id": uuids.net4, "port_security_enabled": False} } error = exception.PortLimitExceeded() mock_client = mock.Mock() mock_client.create_port.side_effect = [ {"port": {"id": uuids.port1}}, {"port": {"id": uuids.port2}}, error ] self.assertRaises(exception.PortLimitExceeded, api._create_ports_for_instance, self.context, self.instance, ordered_networks, nets, mock_client, None) self.assertEqual([mock.call(uuids.port1), mock.call(uuids.port2)], mock_client.delete_port.call_args_list) self.assertEqual(3, mock_client.create_port.call_count) def test_create_ports_for_instance_with_cleanup_after_sg_failure(self): api = neutronapi.API() ordered_networks = [ objects.NetworkRequest(network_id=uuids.net1), objects.NetworkRequest(network_id=uuids.net2), objects.NetworkRequest(network_id=uuids.net3) ] nets = { uuids.net1: {"id": uuids.net1, "port_security_enabled": False}, uuids.net2: {"id": uuids.net2, "port_security_enabled": False}, uuids.net3: {"id": uuids.net3, "port_security_enabled": True} } mock_client = mock.Mock() mock_client.create_port.side_effect = [ {"port": {"id": uuids.port1}}, {"port": {"id": uuids.port2}} ] self.assertRaises(exception.SecurityGroupCannotBeApplied, api._create_ports_for_instance, self.context, self.instance, ordered_networks, nets, mock_client, None) self.assertEqual([mock.call(uuids.port1), mock.call(uuids.port2)], mock_client.delete_port.call_args_list) self.assertEqual(2, mock_client.create_port.call_count) def test_create_ports_for_instance_raises_subnets_missing(self): api = neutronapi.API() ordered_networks = [objects.NetworkRequest(network_id=uuids.net)] nets = {uuids.net: {"id": uuids.net, "port_security_enabled": True}} mock_client = mock.Mock() self.assertRaises(exception.SecurityGroupCannotBeApplied, api._create_ports_for_instance, self.context, self.instance, ordered_networks, nets, mock_client, None) self.assertFalse(mock_client.create_port.called) def test_create_ports_for_instance_raises_security_off(self): api = neutronapi.API() ordered_networks = [objects.NetworkRequest(network_id=uuids.net)] nets = {uuids.net: { "id": uuids.net, "port_security_enabled": False}} mock_client = mock.Mock() self.assertRaises(exception.SecurityGroupCannotBeApplied, api._create_ports_for_instance, self.context, self.instance, ordered_networks, nets, mock_client, [uuids.sg]) self.assertFalse(mock_client.create_port.called) @mock.patch.object(objects.VirtualInterface, "create") def test_update_ports_for_instance_with_portbinding(self, mock_create): api = neutronapi.API() self.instance.availability_zone = "test_az" mock_neutron = mock.Mock() mock_admin = mock.Mock() requests_and_created_ports = [ (objects.NetworkRequest( network_id=uuids.net1), uuids.port1), (objects.NetworkRequest( network_id=uuids.net2, port_id=uuids.port2), None)] net1 = {"id": uuids.net1} net2 = {"id": uuids.net2} nets = {uuids.net1: net1, uuids.net2: net2} bind_host_id = "bind_host_id" requested_ports_dict = {uuids.port1: {}, uuids.port2: {}} mock_neutron.list_extensions.return_value = {"extensions": [ {"name": "asdf"}]} port1 = {"port": {"id": uuids.port1, "mac_address": "mac1r"}} port2 = {"port": {"id": uuids.port2, "mac_address": "mac2r"}} mock_admin.update_port.side_effect = [port1, port2] ordered_nets, ordered_ports, preexisting_port_ids, \ created_port_ids = api._update_ports_for_instance( self.context, self.instance, 
mock_neutron, mock_admin, requests_and_created_ports, nets, bind_host_id, requested_ports_dict) self.assertEqual([net1, net2], ordered_nets, "ordered_nets") self.assertEqual([uuids.port1, uuids.port2], ordered_ports, "ordered_ports") self.assertEqual([uuids.port2], preexisting_port_ids, "preexisting") self.assertEqual([uuids.port1], created_port_ids, "created") mock_admin.update_port.assert_called_with(uuids.port2, {'port': { 'device_owner': 'compute:test_az', constants.BINDING_HOST_ID: bind_host_id, 'device_id': self.instance.uuid}}) class TestAPINeutronHostnameDNS(TestAPIBase): def test_allocate_for_instance_create_port(self): # The port's dns_name attribute should be set by the port create # request in allocate_for_instance self._test_allocate_for_instance_with_virtual_interface( 1, dns_extension=True) def test_allocate_for_instance_with_requested_port(self): # The port's dns_name attribute should be set by the port update # request in allocate_for_instance requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(port_id=uuids.portid_1)]) self._test_allocate_for_instance_with_virtual_interface( net_idx=1, dns_extension=True, requested_networks=requested_networks) def test_allocate_for_instance_port_dns_name_preset_equal_hostname(self): # The port's dns_name attribute should be set by the port update # request in allocate_for_instance. The port's dns_name was preset by # the user with a value equal to the instance's hostname requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(port_id=uuids.portid_1)]) self._test_allocate_for_instance_with_virtual_interface( net_idx=1, dns_extension=True, requested_networks=requested_networks, _dns_name='test-instance') def test_allocate_for_instance_port_dns_name_preset_noteq_hostname(self): # If a pre-existing port has dns_name set, an exception should be # raised if dns_name is not equal to the instance's hostname requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(port_id=uuids.portid_1)]) self._test_allocate_for_instance( requested_networks=requested_networks, exception=exception.PortNotUsableDNS, dns_extension=True, _break='pre_list_networks', _dns_name='my-instance') class TestAPINeutronHostnameDNSPortbinding(TestAPIBase): def test_allocate_for_instance_create_port(self): # The port's dns_name attribute should be set by the port create # request in allocate_for_instance self._test_allocate_for_instance_with_virtual_interface( 1, dns_extension=True, bind_host_id=self.instance.get('host')) def test_allocate_for_instance_with_requested_port(self): # The port's dns_name attribute should be set by the port update # request in allocate_for_instance requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(port_id=uuids.portid_1)]) self._test_allocate_for_instance_with_virtual_interface( net_idx=1, dns_extension=True, bind_host_id=self.instance.get('host'), requested_networks=requested_networks) def test_allocate_for_instance_create_port_with_dns_domain(self): # The port's dns_name attribute should be set by the port update # request in _update_port_dns_name. 
        # This should happen only when the
        # port binding extension is enabled and the port's network has a
        # non-blank dns_domain attribute.
        self._test_allocate_for_instance_with_virtual_interface(
            11, dns_extension=True, bind_host_id=self.instance.get('host'))

    def test_allocate_for_instance_with_requested_port_with_dns_domain(self):
        # The port's dns_name attribute should be set by the port update
        # request in _update_port_dns_name. This should happen only when the
        # port binding extension is enabled and the port's network has a
        # non-blank dns_domain attribute.
        requested_networks = objects.NetworkRequestList(
            objects=[objects.NetworkRequest(port_id=uuids.portid_1)])
        self._test_allocate_for_instance_with_virtual_interface(
            net_idx=11, dns_extension=True,
            bind_host_id=self.instance.get('host'),
            requested_networks=requested_networks)


class TestNeutronClientForAdminScenarios(test.NoDBTestCase):

    def setUp(self):
        super(TestNeutronClientForAdminScenarios, self).setUp()
        # NOTE(morganfainberg): The real configuration fixture here is used
        # instead of the already existing fixtures to ensure that the new
        # config options are automatically deregistered at the end of the
        # test run. Without the use of this fixture, the config options
        # from the plugin(s) would persist for all subsequent tests from when
        # these are run (due to the global conf object) and not be fully
        # representative of a "clean" slate at the start of a test.
        self.config_fixture = self.useFixture(config_fixture.Config())
        oslo_opts = ks_loading.get_auth_plugin_conf_options('v2password')
        self.config_fixture.register_opts(oslo_opts, 'neutron')

    @requests_mock.mock()
    def _test_get_client_for_admin(self, req_mock,
                                   use_id=False, admin_context=False):
        token_value = uuidutils.generate_uuid(dashed=False)
        auth_url = 'http://anyhost/auth'
        token_resp = V2Token(token_id=token_value)
        req_mock.post(auth_url + '/tokens', json=token_resp)

        self.flags(endpoint_override='http://anyhost/', group='neutron')
        self.flags(auth_type='v2password', group='neutron')
        self.flags(auth_url=auth_url, group='neutron')
        self.flags(timeout=30, group='neutron')
        if use_id:
            self.flags(tenant_id='tenant_id', group='neutron')
            self.flags(user_id='user_id', group='neutron')

        if admin_context:
            my_context = context.get_admin_context()
        else:
            my_context = context.RequestContext('userid', uuids.my_tenant,
                                                 auth_token='token')

        # clean global
        neutronapi.reset_state()

        if admin_context:
            # Note that the context does not contain a token but is
            # an admin context which will force an elevation to admin
            # credentials.
            context_client = neutronapi.get_client(my_context)
        else:
            # Note that the context is not elevated, but the True is passed in
            # which will force an elevation to admin credentials even though
            # the context has an auth_token.
context_client = neutronapi.get_client(my_context, True) admin_auth = neutronapi._ADMIN_AUTH self.assertEqual(CONF.neutron.auth_url, admin_auth.auth_url) self.assertEqual(CONF.neutron.password, admin_auth.password) if use_id: self.assertEqual(CONF.neutron.tenant_id, admin_auth.tenant_id) self.assertEqual(CONF.neutron.user_id, admin_auth.user_id) self.assertIsNone(admin_auth.tenant_name) self.assertIsNone(admin_auth.username) else: self.assertEqual(CONF.neutron.username, admin_auth.username) self.assertIsNone(admin_auth.tenant_id) self.assertIsNone(admin_auth.user_id) self.assertEqual(CONF.neutron.timeout, neutronapi._SESSION.timeout) self.assertEqual( token_value, context_client.httpclient.auth.get_token(neutronapi._SESSION)) self.assertEqual( CONF.neutron.endpoint_override, context_client.httpclient.get_endpoint()) def test_get_client_for_admin(self): self._test_get_client_for_admin() def test_get_client_for_admin_with_id(self): self._test_get_client_for_admin(use_id=True) def test_get_client_for_admin_context(self): self._test_get_client_for_admin(admin_context=True) def test_get_client_for_admin_context_with_id(self): self._test_get_client_for_admin(use_id=True, admin_context=True) class TestNeutronPortSecurity(test.NoDBTestCase): @mock.patch.object(neutronapi.API, 'get_instance_nw_info') @mock.patch.object(neutronapi.API, '_update_port_dns_name') @mock.patch.object(neutronapi.API, '_create_port_minimal') @mock.patch.object(neutronapi.API, '_populate_neutron_extension_values') @mock.patch.object(neutronapi.API, '_check_external_network_attach') @mock.patch.object(neutronapi.API, '_process_security_groups') @mock.patch.object(neutronapi.API, '_get_available_networks') @mock.patch.object(neutronapi.API, '_validate_requested_port_ids') @mock.patch.object(neutronapi, 'get_client') @mock.patch('nova.objects.VirtualInterface') def test_no_security_groups_requested( self, mock_vif, mock_get_client, mock_validate_requested_port_ids, mock_get_available_networks, mock_process_security_groups, mock_check_external_network_attach, mock_populate_neutron_extension_values, mock_create_port, mock_update_port_dns_name, mock_get_instance_nw_info): nets = [ {'id': 'net1', 'name': 'net_name1', 'subnets': ['mysubnid1'], 'port_security_enabled': True}, {'id': 'net2', 'name': 'net_name2', 'subnets': ['mysubnid2'], 'port_security_enabled': True}] onets = objects.NetworkRequestList(objects=[ objects.NetworkRequest(network_id='net1'), objects.NetworkRequest(network_id='net2')]) instance = objects.Instance( project_id=1, availability_zone='nova', uuid=uuids.instance) secgroups = ['default'] # Nova API provides the 'default' mock_validate_requested_port_ids.return_value = [{}, onets] mock_get_available_networks.return_value = nets mock_process_security_groups.return_value = [] api = neutronapi.API() mock_create_port.return_value = {'id': 'foo', 'mac_address': 'bar'} api.allocate_for_instance( 'context', instance, False, requested_networks=onets, security_groups=secgroups) mock_process_security_groups.assert_called_once_with( instance, mock.ANY, []) mock_create_port.assert_has_calls([ mock.call(mock.ANY, instance, u'net1', None, []), mock.call(mock.ANY, instance, u'net2', None, [])], any_order=True) @mock.patch.object(neutronapi.API, 'get_instance_nw_info') @mock.patch.object(neutronapi.API, '_update_port_dns_name') @mock.patch.object(neutronapi.API, '_create_port_minimal') @mock.patch.object(neutronapi.API, '_populate_neutron_extension_values') @mock.patch.object(neutronapi.API, '_check_external_network_attach') 
@mock.patch.object(neutronapi.API, '_process_security_groups') @mock.patch.object(neutronapi.API, '_get_available_networks') @mock.patch.object(neutronapi.API, '_validate_requested_port_ids') @mock.patch.object(neutronapi, 'get_client') @mock.patch('nova.objects.VirtualInterface') def test_security_groups_requested( self, mock_vif, mock_get_client, mock_validate_requested_port_ids, mock_get_available_networks, mock_process_security_groups, mock_check_external_network_attach, mock_populate_neutron_extension_values, mock_create_port, mock_update_port_dns_name, mock_get_instance_nw_info): nets = [ {'id': 'net1', 'name': 'net_name1', 'subnets': ['mysubnid1'], 'port_security_enabled': True}, {'id': 'net2', 'name': 'net_name2', 'subnets': ['mysubnid2'], 'port_security_enabled': True}] onets = objects.NetworkRequestList(objects=[ objects.NetworkRequest(network_id='net1'), objects.NetworkRequest(network_id='net2')]) instance = objects.Instance( project_id=1, availability_zone='nova', uuid=uuids.instance) secgroups = ['default', 'secgrp1', 'secgrp2'] mock_validate_requested_port_ids.return_value = [{}, onets] mock_get_available_networks.return_value = nets mock_process_security_groups.return_value = ['default-uuid', 'secgrp-uuid1', 'secgrp-uuid2'] api = neutronapi.API() mock_create_port.return_value = {'id': 'foo', 'mac_address': 'bar'} api.allocate_for_instance( 'context', instance, False, requested_networks=onets, security_groups=secgroups) mock_create_port.assert_has_calls([ mock.call(mock.ANY, instance, u'net1', None, ['default-uuid', 'secgrp-uuid1', 'secgrp-uuid2']), mock.call(mock.ANY, instance, u'net2', None, ['default-uuid', 'secgrp-uuid1', 'secgrp-uuid2'])], any_order=True) @mock.patch.object(neutronapi.API, 'get_instance_nw_info') @mock.patch.object(neutronapi.API, '_update_port_dns_name') @mock.patch.object(neutronapi.API, '_create_port_minimal') @mock.patch.object(neutronapi.API, '_populate_neutron_extension_values') @mock.patch.object(neutronapi.API, '_check_external_network_attach') @mock.patch.object(neutronapi.API, '_process_security_groups') @mock.patch.object(neutronapi.API, '_get_available_networks') @mock.patch.object(neutronapi.API, '_validate_requested_port_ids') @mock.patch.object(neutronapi, 'get_client') @mock.patch('nova.objects.VirtualInterface') def test_port_security_disabled_no_security_groups_requested( self, mock_vif, mock_get_client, mock_validate_requested_port_ids, mock_get_available_networks, mock_process_security_groups, mock_check_external_network_attach, mock_populate_neutron_extension_values, mock_create_port, mock_update_port_dns_name, mock_get_instance_nw_info): nets = [ {'id': 'net1', 'name': 'net_name1', 'subnets': ['mysubnid1'], 'port_security_enabled': False}, {'id': 'net2', 'name': 'net_name2', 'subnets': ['mysubnid2'], 'port_security_enabled': False}] onets = objects.NetworkRequestList(objects=[ objects.NetworkRequest(network_id='net1'), objects.NetworkRequest(network_id='net2')]) instance = objects.Instance( project_id=1, availability_zone='nova', uuid=uuids.instance) secgroups = ['default'] # Nova API provides the 'default' mock_validate_requested_port_ids.return_value = [{}, onets] mock_get_available_networks.return_value = nets mock_process_security_groups.return_value = [] api = neutronapi.API() mock_create_port.return_value = {'id': 'foo', 'mac_address': 'bar'} api.allocate_for_instance( 'context', instance, False, requested_networks=onets, security_groups=secgroups) mock_process_security_groups.assert_called_once_with( instance, mock.ANY, []) 
mock_create_port.assert_has_calls([ mock.call(mock.ANY, instance, u'net1', None, []), mock.call(mock.ANY, instance, u'net2', None, [])], any_order=True) @mock.patch.object(neutronapi.API, 'get_instance_nw_info') @mock.patch.object(neutronapi.API, '_update_port_dns_name') @mock.patch.object(neutronapi.API, '_create_port_minimal') @mock.patch.object(neutronapi.API, '_populate_neutron_extension_values') @mock.patch.object(neutronapi.API, '_check_external_network_attach') @mock.patch.object(neutronapi.API, '_process_security_groups') @mock.patch.object(neutronapi.API, '_get_available_networks') @mock.patch.object(neutronapi.API, '_validate_requested_port_ids') @mock.patch.object(neutronapi, 'get_client') @mock.patch('nova.objects.VirtualInterface') def test_port_security_disabled_and_security_groups_requested( self, mock_vif, mock_get_client, mock_validate_requested_port_ids, mock_get_available_networks, mock_process_security_groups, mock_check_external_network_attach, mock_populate_neutron_extension_values, mock_create_port, mock_update_port_dns_name, mock_get_instance_nw_info): nets = [ {'id': 'net1', 'name': 'net_name1', 'subnets': ['mysubnid1'], 'port_security_enabled': True}, {'id': 'net2', 'name': 'net_name2', 'subnets': ['mysubnid2'], 'port_security_enabled': False}] onets = objects.NetworkRequestList(objects=[ objects.NetworkRequest(network_id='net1'), objects.NetworkRequest(network_id='net2')]) instance = objects.Instance( project_id=1, availability_zone='nova', uuid=uuids.instance) secgroups = ['default', 'secgrp1', 'secgrp2'] mock_validate_requested_port_ids.return_value = [{}, onets] mock_get_available_networks.return_value = nets mock_process_security_groups.return_value = ['default-uuid', 'secgrp-uuid1', 'secgrp-uuid2'] api = neutronapi.API() self.assertRaises( exception.SecurityGroupCannotBeApplied, api.allocate_for_instance, 'context', instance, False, requested_networks=onets, security_groups=secgroups) mock_process_security_groups.assert_called_once_with( instance, mock.ANY, ['default', 'secgrp1', 'secgrp2']) class TestAPIAutoAllocateNetwork(test.NoDBTestCase): """Tests auto-allocation scenarios""" def setUp(self): super(TestAPIAutoAllocateNetwork, self).setUp() self.api = neutronapi.API() self.context = context.RequestContext(uuids.user_id, uuids.project_id) def test__can_auto_allocate_network_validation_conflict(self): # Tests that the dry-run validation with neutron fails (not ready). ntrn = mock.Mock() ntrn.validate_auto_allocated_topology_requirements.side_effect = \ exceptions.Conflict self.assertFalse(self.api._can_auto_allocate_network( self.context, ntrn)) validate = ntrn.validate_auto_allocated_topology_requirements validate.assert_called_once_with(uuids.project_id) def test__can_auto_allocate_network(self): # Tests the happy path. ntrn = mock.Mock() self.assertTrue(self.api._can_auto_allocate_network( self.context, ntrn)) validate = ntrn.validate_auto_allocated_topology_requirements validate.assert_called_once_with(uuids.project_id) def test__ports_needed_per_instance_no_reqs_no_nets(self): # Tests no requested_networks and no available networks. with mock.patch.object(self.api, '_get_available_networks', return_value=[]): self.assertEqual( 1, self.api._ports_needed_per_instance(self.context, mock.sentinel.neutron, None)) def test__ports_needed_per_instance_empty_reqs_no_nets(self): # Tests empty requested_networks and no available networks. 
requested_networks = objects.NetworkRequestList() with mock.patch.object(self.api, '_get_available_networks', return_value=[]): self.assertEqual( 1, self.api._ports_needed_per_instance(self.context, mock.sentinel.neutron, requested_networks)) def test__ports_needed_per_instance_auto_reqs_no_nets_not_ready(self): # Test for when there are no available networks and we're requested # to auto-allocate the network but auto-allocation is not available. net_req = objects.NetworkRequest( network_id=net_req_obj.NETWORK_ID_AUTO) requested_networks = objects.NetworkRequestList(objects=[net_req]) with mock.patch.object(self.api, '_get_available_networks', return_value=[]): with mock.patch.object(self.api, '_can_auto_allocate_network', spec=True, return_value=False) as can_alloc: self.assertRaises( exception.UnableToAutoAllocateNetwork, self.api._ports_needed_per_instance, self.context, mock.sentinel.neutron, requested_networks) can_alloc.assert_called_once_with( self.context, mock.sentinel.neutron) def test__ports_needed_per_instance_auto_reqs_no_nets_ok(self): # Test for when there are no available networks and we're requested # to auto-allocate the network and auto-allocation is available. net_req = objects.NetworkRequest( network_id=net_req_obj.NETWORK_ID_AUTO) requested_networks = objects.NetworkRequestList(objects=[net_req]) with mock.patch.object(self.api, '_get_available_networks', return_value=[]): with mock.patch.object(self.api, '_can_auto_allocate_network', spec=True, return_value=True) as can_alloc: self.assertEqual( 1, self.api._ports_needed_per_instance( self.context, mock.sentinel.neutron, requested_networks)) can_alloc.assert_called_once_with( self.context, mock.sentinel.neutron) def test__validate_requested_port_ids_auto_allocate(self): # Tests that _validate_requested_port_ids doesn't really do anything # if there is an auto-allocate network request. net_req = objects.NetworkRequest( network_id=net_req_obj.NETWORK_ID_AUTO) requested_networks = objects.NetworkRequestList(objects=[net_req]) self.assertEqual(({}, []), self.api._validate_requested_port_ids( self.context, mock.sentinel.instance, mock.sentinel.neutron_client, requested_networks)) def test__auto_allocate_network_conflict(self): # Tests that we handle a 409 from Neutron when auto-allocating topology instance = mock.Mock(project_id=self.context.project_id) ntrn = mock.Mock() ntrn.get_auto_allocated_topology = mock.Mock( side_effect=exceptions.Conflict) self.assertRaises(exception.UnableToAutoAllocateNetwork, self.api._auto_allocate_network, instance, ntrn) ntrn.get_auto_allocated_topology.assert_called_once_with( instance.project_id) def test__auto_allocate_network_network_not_found(self): # Tests that we handle a 404 from Neutron when auto-allocating topology instance = mock.Mock(project_id=self.context.project_id) ntrn = mock.Mock() ntrn.get_auto_allocated_topology.return_value = { 'auto_allocated_topology': { 'id': uuids.network_id } } ntrn.show_network = mock.Mock( side_effect=exceptions.NetworkNotFoundClient) self.assertRaises(exception.UnableToAutoAllocateNetwork, self.api._auto_allocate_network, instance, ntrn) ntrn.show_network.assert_called_once_with(uuids.network_id) def test__auto_allocate_network(self): # Tests the happy path. 
instance = mock.Mock(project_id=self.context.project_id) ntrn = mock.Mock() ntrn.get_auto_allocated_topology.return_value = { 'auto_allocated_topology': { 'id': uuids.network_id } } ntrn.show_network.return_value = {'network': mock.sentinel.network} self.assertEqual(mock.sentinel.network, self.api._auto_allocate_network(instance, ntrn)) def test_allocate_for_instance_auto_allocate(self): # Tests the happy path. ntrn = mock.Mock() # mock neutron.list_networks which is called from # _get_available_networks when net_ids is empty, which it will be # because _validate_requested_port_ids will return an empty list since # we requested 'auto' allocation. ntrn.list_networks.return_value = {} fake_network = { 'id': uuids.network_id, 'subnets': [ uuids.subnet_id, ] } def fake_get_instance_nw_info(context, instance, **kwargs): # assert the network and port are what was used in the test self.assertIn('networks', kwargs) self.assertEqual(1, len(kwargs['networks'])) self.assertEqual(uuids.network_id, kwargs['networks'][0]['id']) self.assertIn('port_ids', kwargs) self.assertEqual(1, len(kwargs['port_ids'])) self.assertEqual(uuids.port_id, kwargs['port_ids'][0]) # return a fake vif return [model.VIF(id=uuids.port_id)] @mock.patch('nova.network.neutron.get_client', return_value=ntrn) @mock.patch.object(self.api, '_auto_allocate_network', return_value=fake_network) @mock.patch.object(self.api, '_check_external_network_attach') @mock.patch.object(self.api, '_populate_neutron_extension_values') @mock.patch.object(self.api, '_create_port_minimal', spec=True, return_value={'id': uuids.port_id, 'mac_address': 'foo'}) @mock.patch.object(self.api, '_update_port') @mock.patch.object(self.api, '_update_port_dns_name') @mock.patch.object(self.api, 'get_instance_nw_info', fake_get_instance_nw_info) @mock.patch('nova.objects.VirtualInterface') def do_test(self, mock_vif, update_port_dsn_name_mock, update_port_mock, create_port_mock, populate_ext_values_mock, check_external_net_attach_mock, auto_allocate_mock, get_client_mock): instance = fake_instance.fake_instance_obj(self.context) net_req = objects.NetworkRequest( network_id=net_req_obj.NETWORK_ID_AUTO) requested_networks = objects.NetworkRequestList(objects=[net_req]) nw_info = self.api.allocate_for_instance( self.context, instance, False, requested_networks) self.assertEqual(1, len(nw_info)) self.assertEqual(uuids.port_id, nw_info[0]['id']) # assert that we filtered available networks on admin_state_up=True ntrn.list_networks.assert_has_calls([ mock.call(tenant_id=instance.project_id, shared=False, admin_state_up=True), mock.call(shared=True)]) # assert the calls to create the port are using the network that # was auto-allocated port_req_body = mock.ANY create_port_mock.assert_called_once_with( ntrn, instance, uuids.network_id, None, # request.address (fixed IP) [], # security_group_ids - we didn't request any ) update_port_mock.assert_called_once_with( ntrn, instance, uuids.port_id, port_req_body) do_test(self) class TestGetInstanceNetworkInfo(test.NoDBTestCase): """Tests rebuilding the network_info cache.""" def setUp(self): super(TestGetInstanceNetworkInfo, self).setUp() self.api = neutronapi.API() self.context = context.RequestContext(uuids.user_id, uuids.project_id) self.instance = fake_instance.fake_instance_obj(self.context) client_mock = mock.patch('nova.network.neutron.get_client') self.client = client_mock.start().return_value self.addCleanup(client_mock.stop) # This is a no-db set of tests and we don't care about refreshing the # info_cache from the 
database so just mock it out. refresh_info_cache_for_instance = mock.patch( 'nova.compute.utils.refresh_info_cache_for_instance') refresh_info_cache_for_instance.start() self.addCleanup(refresh_info_cache_for_instance.stop) @staticmethod def _get_vif_in_cache(info_cache, vif_id): for vif in info_cache: if vif['id'] == vif_id: return vif @staticmethod def _get_fake_info_cache(vif_ids, **kwargs): """Returns InstanceInfoCache based on the list of provided VIF IDs""" nwinfo = model.NetworkInfo( [model.VIF(vif_id, **kwargs) for vif_id in vif_ids]) return objects.InstanceInfoCache(network_info=nwinfo) @staticmethod def _get_fake_port(port_id, **kwargs): network_id = kwargs.get('network_id', uuids.network_id) return {'id': port_id, 'network_id': network_id} @staticmethod def _get_fake_vif(context, **kwargs): """Returns VirtualInterface based on provided VIF ID""" return obj_vif.VirtualInterface(context=context, **kwargs) def test_get_nw_info_refresh_vif_id_add_vif(self): """Tests that a network-changed event occurred on a single port which is not already in the cache so it's added. """ # The cache has one existing port. self.instance.info_cache = self._get_fake_info_cache([uuids.old_port]) # The instance has two ports, one old, one new. self.client.list_ports.return_value = { 'ports': [self._get_fake_port(uuids.old_port), self._get_fake_port(uuids.new_port)]} with test.nested( mock.patch.object(self.api, '_get_available_networks', return_value=[{'id': uuids.network_id}]), mock.patch.object(self.api, '_build_vif_model', return_value=model.VIF(uuids.new_port)), # We should not get as far as calling _gather_port_ids_and_networks mock.patch.object(self.api, '_gather_port_ids_and_networks', new_callable=mock.NonCallableMock) ) as ( get_nets, build_vif, gather_ports ): nwinfo = self.api._get_instance_nw_info( self.context, self.instance, refresh_vif_id=uuids.new_port) get_nets.assert_called_once_with( self.context, self.instance.project_id, [uuids.network_id], self.client) # Assert that the old and new ports are in the cache. for port_id in (uuids.old_port, uuids.new_port): self.assertIsNotNone(self._get_vif_in_cache(nwinfo, port_id)) def test_get_nw_info_refresh_vif_id_update_vif(self): """Tests that a network-changed event occurred on a single port which is already in the cache so it's updated. """ # The cache has two existing active VIFs. self.instance.info_cache = self._get_fake_info_cache( [uuids.old_port, uuids.new_port], active=True) # The instance has two ports, one old, one new. self.client.list_ports.return_value = { 'ports': [self._get_fake_port(uuids.old_port), self._get_fake_port(uuids.new_port)]} with test.nested( mock.patch.object(self.api, '_get_available_networks', return_value=[{'id': uuids.network_id}]), # Fake that the port is no longer active. mock.patch.object(self.api, '_build_vif_model', return_value=model.VIF( uuids.new_port, active=False)), # We should not get as far as calling _gather_port_ids_and_networks mock.patch.object(self.api, '_gather_port_ids_and_networks', new_callable=mock.NonCallableMock) ) as ( get_nets, build_vif, gather_ports ): nwinfo = self.api._get_instance_nw_info( self.context, self.instance, refresh_vif_id=uuids.new_port) get_nets.assert_called_once_with( self.context, self.instance.project_id, [uuids.network_id], self.client) # Assert that the old and new ports are in the cache and that the # old port is still active and the new port is not active. 
old_vif = self._get_vif_in_cache(nwinfo, uuids.old_port) self.assertIsNotNone(old_vif) self.assertTrue(old_vif['active']) new_vif = self._get_vif_in_cache(nwinfo, uuids.new_port) self.assertIsNotNone(new_vif) self.assertFalse(new_vif['active']) def test_get_nw_info_refresh_vif_id_remove_vif(self): """Tests that a network-changed event occurred on a single port which is already in the cache but not in the current list of ports for the instance, so it's removed from the cache. """ # The cache has two existing VIFs. self.instance.info_cache = self._get_fake_info_cache( [uuids.old_port, uuids.removed_port]) # The instance has one remaining port. self.client.list_ports.return_value = { 'ports': [self._get_fake_port(uuids.old_port)]} # We should not get as far as calling _gather_port_ids_and_networks with mock.patch.object( self.api, '_gather_port_ids_and_networks', new_callable=mock.NonCallableMock): nwinfo = self.api._get_instance_nw_info( self.context, self.instance, refresh_vif_id=uuids.removed_port) # Assert that only the old port is still in the cache. old_vif = self._get_vif_in_cache(nwinfo, uuids.old_port) self.assertIsNotNone(old_vif) removed_vif = self._get_vif_in_cache(nwinfo, uuids.removed_port) self.assertIsNone(removed_vif) def test_get_instance_nw_info_force_refresh(self): """Tests a full refresh of the instance info cache using information from neutron rather than the instance's current info cache data. """ # Fake out an empty cache. self.instance.info_cache = self._get_fake_info_cache([]) # The instance has one attached port in neutron. self.client.list_ports.return_value = { 'ports': [self._get_fake_port(uuids.port_id)]} ordered_port_list = [uuids.port_id] with test.nested( mock.patch.object(self.api, '_get_available_networks', return_value=[{'id': uuids.network_id}]), mock.patch.object(self.api, '_build_vif_model', return_value=model.VIF(uuids.port_id)), # We should not call _gather_port_ids_and_networks since that uses # the existing instance.info_cache when no ports/networks are # passed to _build_network_info_model and what we want is a full # refresh of the ports based on what neutron says is current. mock.patch.object(self.api, '_gather_port_ids_and_networks', new_callable=mock.NonCallableMock), mock.patch.object(self.api, '_get_ordered_port_list', return_value=ordered_port_list) ) as ( get_nets, build_vif, gather_ports, mock_port_map ): nwinfo = self.api._get_instance_nw_info( self.context, self.instance, force_refresh=True) get_nets.assert_called_once_with( self.context, self.instance.project_id, [uuids.network_id], self.client) # Assert that the port is in the cache now. self.assertIsNotNone(self._get_vif_in_cache(nwinfo, uuids.port_id)) def test__get_ordered_port_list(self): """This test if port_list is sorted by VirtualInterface id sequence. """ nova_vifs = [ self._get_fake_vif(self.context, uuid=uuids.port_id_1, id=0), self._get_fake_vif(self.context, uuid=uuids.port_id_2, id=1), self._get_fake_vif(self.context, uuid=uuids.port_id_3, id=2), ] # Random order. 
current_neutron_ports = [ self._get_fake_port(uuids.port_id_2), self._get_fake_port(uuids.port_id_1), self._get_fake_port(uuids.port_id_3), ] expected_port_list = [uuids.port_id_1, uuids.port_id_2, uuids.port_id_3] with mock.patch.object(self.api, 'get_vifs_by_instance', return_value=nova_vifs): port_list = self.api._get_ordered_port_list( self.context, self.instance, current_neutron_ports) self.assertEqual(expected_port_list, port_list) def test__get_ordered_port_list_new_port(self): """This test if port_list is sorted by VirtualInterface id sequence while new port appears. """ nova_vifs = [ self._get_fake_vif(self.context, uuid=uuids.port_id_1, id=0), self._get_fake_vif(self.context, uuid=uuids.port_id_3, id=2), ] # New port appears. current_neutron_ports = [ self._get_fake_port(uuids.port_id_1), self._get_fake_port(uuids.port_id_4), self._get_fake_port(uuids.port_id_3) ] expected_port_list = [uuids.port_id_1, uuids.port_id_3, uuids.port_id_4] with mock.patch.object(self.api, 'get_vifs_by_instance', return_value=nova_vifs): port_list = self.api._get_ordered_port_list( self.context, self.instance, current_neutron_ports) self.assertEqual(expected_port_list, port_list) def test__get_ordered_port_list_new_port_and_deleted_vif(self): """This test if port_list is sorted by VirtualInterface id sequence while new port appears along with deleted old VirtualInterface objects. """ # Display also deleted VirtualInterface. nova_vifs = [ self._get_fake_vif(self.context, uuid=uuids.port_id_1, id=0, deleted=True), self._get_fake_vif(self.context, uuid=uuids.port_id_2, id=3), self._get_fake_vif(self.context, uuid=uuids.port_id_3, id=5), ] # Random order and new port. current_neutron_ports = [ self._get_fake_port(uuids.port_id_4), self._get_fake_port(uuids.port_id_3), self._get_fake_port(uuids.port_id_2), ] expected_port_list = [uuids.port_id_2, uuids.port_id_3, uuids.port_id_4] with mock.patch.object(self.api, 'get_vifs_by_instance', return_value=nova_vifs): port_list = self.api._get_ordered_port_list( self.context, self.instance, current_neutron_ports) self.assertEqual(expected_port_list, port_list) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/network/test_os_vif_util.py0000664000175000017500000013327200000000000023306 0ustar00zuulzuul00000000000000# Copyright 2016 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
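# Editor's note: the sketch below is illustrative only and is not part of the
# original Nova test suite. It shows, under the same assumptions the tests in
# this module make, how a nova.network.model.VIF is translated into an os-vif
# object via os_vif_util.nova_to_osvif_vif() -- the conversion that
# OSVIFUtilTestCase exercises case by case. The helper is never invoked by any
# test; the UUIDs and MAC address simply mirror values used below.
def _example_nova_to_osvif_translation():
    """Hypothetical usage sketch; not called anywhere in this module."""
    from nova.network import model
    from nova.network import os_vif_util

    # Build the Nova-side network model for a Linux bridge VIF, as the
    # linux_bridge test case below does.
    vif = model.VIF(
        id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
        type=model.VIF_TYPE_BRIDGE,
        address="22:52:25:62:e2:aa",
        network=model.Network(
            id="b82c1929-051e-481d-8110-4669916c7915",
            label="Demo Net",
            subnets=[]),
        details={model.VIF_DETAILS_PORT_FILTER: True},
    )

    # nova_to_osvif_vif() returns the equivalent os_vif object
    # (osv_objects.vif.VIFBridge for a bridge-type VIF).
    return os_vif_util.nova_to_osvif_vif(vif)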
from os_vif import objects as osv_objects from os_vif.objects import fields as os_vif_fields import six from nova import exception from nova.network import model from nova.network import os_vif_util from nova import objects from nova import test class OSVIFUtilTestCase(test.NoDBTestCase): def setUp(self): super(OSVIFUtilTestCase, self).setUp() osv_objects.register_all() # Remove when all os-vif objects include the # ComparableVersionedObject mix-in def assertObjEqual(self, expect, actual): actual.obj_reset_changes(recursive=True) expect.obj_reset_changes(recursive=True) self.assertEqual(expect.obj_to_primitive(), actual.obj_to_primitive()) def test_nova_to_osvif_instance(self): inst = objects.Instance( id="1242", uuid="d5b1090c-9e00-4fa4-9504-4b1494857970", project_id="2f37d7f6-e51a-4a1f-8b6e-b0917ffc8390") info = os_vif_util.nova_to_osvif_instance(inst) expect = osv_objects.instance_info.InstanceInfo( uuid="d5b1090c-9e00-4fa4-9504-4b1494857970", name="instance-000004da", project_id="2f37d7f6-e51a-4a1f-8b6e-b0917ffc8390") self.assertObjEqual(info, expect) def test_nova_to_osvif_instance_minimal(self): inst = objects.Instance( id="1242", uuid="d5b1090c-9e00-4fa4-9504-4b1494857970") actual = os_vif_util.nova_to_osvif_instance(inst) expect = osv_objects.instance_info.InstanceInfo( uuid=inst.uuid, name=inst.name) self.assertObjEqual(expect, actual) def test_nova_to_osvif_ips(self): ips = [ model.FixedIP( address="192.168.122.24", floating_ips=[ model.IP(address="192.168.122.100", type="floating"), model.IP(address="192.168.122.101", type="floating"), model.IP(address="192.168.122.102", type="floating"), ], version=4), model.FixedIP( address="2001::beef", version=6), ] actual = os_vif_util._nova_to_osvif_ips(ips) expect = osv_objects.fixed_ip.FixedIPList( objects=[ osv_objects.fixed_ip.FixedIP( address="192.168.122.24", floating_ips=[ "192.168.122.100", "192.168.122.101", "192.168.122.102", ]), osv_objects.fixed_ip.FixedIP( address="2001::beef", floating_ips=[]), ], ) self.assertObjEqual(expect, actual) def test_nova_to_osvif_routes(self): routes = [ model.Route(cidr="192.168.1.0/24", gateway=model.IP( address="192.168.1.254", type='gateway'), interface="eth0"), model.Route(cidr="10.0.0.0/8", gateway=model.IP( address="10.0.0.1", type='gateway')), ] expect = osv_objects.route.RouteList( objects=[ osv_objects.route.Route( cidr="192.168.1.0/24", gateway="192.168.1.254", interface="eth0"), osv_objects.route.Route( cidr="10.0.0.0/8", gateway="10.0.0.1"), ]) actual = os_vif_util._nova_to_osvif_routes(routes) self.assertObjEqual(expect, actual) def test_nova_to_osvif_subnets(self): subnets = [ model.Subnet(cidr="192.168.1.0/24", dns=[ model.IP( address="192.168.1.1", type="dns"), model.IP( address="192.168.1.2", type="dns"), ], gateway=model.IP( address="192.168.1.254", type='gateway'), ips=[ model.FixedIP( address="192.168.1.100", ), model.FixedIP( address="192.168.1.101", ), ], routes=[ model.Route( cidr="10.0.0.1/24", gateway=model.IP( address="192.168.1.254", type="gateway"), interface="eth0"), ]), model.Subnet(dns=[ model.IP( address="192.168.1.1", type="dns"), model.IP( address="192.168.1.2", type="dns"), ], ips=[ model.FixedIP( address="192.168.1.100", ), model.FixedIP( address="192.168.1.101", ), ], routes=[ model.Route( cidr="10.0.0.1/24", gateway=model.IP( address="192.168.1.254", type="gateway"), interface="eth0"), ]), model.Subnet(dns=[ model.IP( address="192.168.1.1", type="dns"), model.IP( address="192.168.1.2", type="dns"), ], gateway=model.IP( type='gateway'), ips=[ model.FixedIP( 
address="192.168.1.100", ), model.FixedIP( address="192.168.1.101", ), ], routes=[ model.Route( cidr="10.0.0.1/24", gateway=model.IP( address="192.168.1.254", type="gateway"), interface="eth0"), ]), ] expect = osv_objects.subnet.SubnetList( objects=[ osv_objects.subnet.Subnet( cidr="192.168.1.0/24", dns=["192.168.1.1", "192.168.1.2"], gateway="192.168.1.254", ips=osv_objects.fixed_ip.FixedIPList( objects=[ osv_objects.fixed_ip.FixedIP( address="192.168.1.100", floating_ips=[]), osv_objects.fixed_ip.FixedIP( address="192.168.1.101", floating_ips=[]), ]), routes=osv_objects.route.RouteList( objects=[ osv_objects.route.Route( cidr="10.0.0.1/24", gateway="192.168.1.254", interface="eth0") ]), ), osv_objects.subnet.Subnet( dns=["192.168.1.1", "192.168.1.2"], ips=osv_objects.fixed_ip.FixedIPList( objects=[ osv_objects.fixed_ip.FixedIP( address="192.168.1.100", floating_ips=[]), osv_objects.fixed_ip.FixedIP( address="192.168.1.101", floating_ips=[]), ]), routes=osv_objects.route.RouteList( objects=[ osv_objects.route.Route( cidr="10.0.0.1/24", gateway="192.168.1.254", interface="eth0") ]), ), osv_objects.subnet.Subnet( dns=["192.168.1.1", "192.168.1.2"], ips=osv_objects.fixed_ip.FixedIPList( objects=[ osv_objects.fixed_ip.FixedIP( address="192.168.1.100", floating_ips=[]), osv_objects.fixed_ip.FixedIP( address="192.168.1.101", floating_ips=[]), ]), routes=osv_objects.route.RouteList( objects=[ osv_objects.route.Route( cidr="10.0.0.1/24", gateway="192.168.1.254", interface="eth0") ]), ), ]) actual = os_vif_util._nova_to_osvif_subnets(subnets) self.assertObjEqual(expect, actual) def test_nova_to_osvif_network(self): network = model.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge="br0", subnets=[ model.Subnet(cidr="192.168.1.0/24", gateway=model.IP( address="192.168.1.254", type='gateway')), ]) expect = osv_objects.network.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge="br0", bridge_interface=None, subnets=osv_objects.subnet.SubnetList( objects=[ osv_objects.subnet.Subnet( cidr="192.168.1.0/24", dns=[], gateway="192.168.1.254", ips=osv_objects.fixed_ip.FixedIPList( objects=[]), routes=osv_objects.route.RouteList( objects=[]), ) ])) actual = os_vif_util._nova_to_osvif_network(network) self.assertObjEqual(expect, actual) def test_nova_to_osvif_network_extra(self): network = model.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge="br0", multi_host=True, should_create_bridge=True, should_create_vlan=True, bridge_interface="eth0", vlan=1729, subnets=[ model.Subnet(cidr="192.168.1.0/24", gateway=model.IP( address="192.168.1.254", type='gateway')), ]) expect = osv_objects.network.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge="br0", multi_host=True, should_provide_bridge=True, should_provide_vlan=True, bridge_interface="eth0", vlan=1729, subnets=osv_objects.subnet.SubnetList( objects=[ osv_objects.subnet.Subnet( cidr="192.168.1.0/24", dns=[], gateway="192.168.1.254", ips=osv_objects.fixed_ip.FixedIPList( objects=[]), routes=osv_objects.route.RouteList( objects=[]), ) ])) actual = os_vif_util._nova_to_osvif_network(network) self.assertObjEqual(expect, actual) def test_nova_to_osvif_network_labeled_no_bridge(self): network = model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[ model.Subnet(cidr="192.168.1.0/24", gateway=model.IP( address="192.168.1.254", type='gateway')), ]) expect = osv_objects.network.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge_interface=None, label="Demo Net", subnets=osv_objects.subnet.SubnetList( 
objects=[ osv_objects.subnet.Subnet( cidr="192.168.1.0/24", dns=[], gateway="192.168.1.254", ips=osv_objects.fixed_ip.FixedIPList( objects=[]), routes=osv_objects.route.RouteList( objects=[]), ) ])) actual = os_vif_util._nova_to_osvif_network(network) self.assertObjEqual(expect, actual) def test_nova_to_osvif_network_labeled_no_vlan(self): network = model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", should_create_vlan=True, subnets=[ model.Subnet(cidr="192.168.1.0/24", gateway=model.IP( address="192.168.1.254", type='gateway')), ]) self.assertRaises(exception.NovaException, os_vif_util._nova_to_osvif_network, network) def test_nova_to_osvif_network_mtu(self): network = model.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge="br0", mtu=550, subnets=[]) osv_obj = os_vif_util._nova_to_osvif_network(network) self.assertEqual(550, osv_obj.mtu) def test_nova_to_osvif_vif_linux_bridge(self): vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type=model.VIF_TYPE_BRIDGE, address="22:52:25:62:e2:aa", network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[]), details={ model.VIF_DETAILS_PORT_FILTER: True, } ) actual = os_vif_util.nova_to_osvif_vif(vif) expect = osv_objects.vif.VIFBridge( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", active=False, address="22:52:25:62:e2:aa", has_traffic_filtering=True, plugin="linux_bridge", preserve_on_delete=False, vif_name="nicdc065497-3c", network=osv_objects.network.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge_interface=None, label="Demo Net", subnets=osv_objects.subnet.SubnetList( objects=[]))) self.assertObjEqual(expect, actual) def test_nova_to_osvif_vif_agilio_ovs_fallthrough(self): vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type=model.VIF_TYPE_AGILIO_OVS, address="22:52:25:62:e2:aa", network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[]), details={ model.VIF_DETAILS_PORT_FILTER: True, } ) actual = os_vif_util.nova_to_osvif_vif(vif) expect = osv_objects.vif.VIFOpenVSwitch( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", active=False, address="22:52:25:62:e2:aa", has_traffic_filtering=True, plugin="ovs", port_profile=osv_objects.vif.VIFPortProfileOpenVSwitch( interface_id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", datapath_type=None), preserve_on_delete=False, vif_name="nicdc065497-3c", network=osv_objects.network.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge_interface=None, label="Demo Net", subnets=osv_objects.subnet.SubnetList( objects=[]))) self.assertObjEqual(expect, actual) def test_nova_to_osvif_vif_agilio_ovs_direct(self): vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type=model.VIF_TYPE_AGILIO_OVS, address="22:52:25:62:e2:aa", profile={ "pci_slot": "0000:08:08.5", }, network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[]), vnic_type=model.VNIC_TYPE_DIRECT, ) actual = os_vif_util.nova_to_osvif_vif(vif) expect = osv_objects.vif.VIFHostDevice( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", active=False, has_traffic_filtering=False, address="22:52:25:62:e2:aa", dev_type=osv_objects.fields.VIFHostDeviceDevType.ETHERNET, dev_address="0000:08:08.5", plugin="agilio_ovs", port_profile=osv_objects.vif.VIFPortProfileOVSRepresentor( interface_id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", representor_name="nicdc065497-3c", representor_address="0000:08:08.5", datapath_offload=osv_objects.vif.DatapathOffloadRepresentor( representor_name="nicdc065497-3c", 
representor_address="0000:08:08.5")), preserve_on_delete=False, vif_name="nicdc065497-3c", network=osv_objects.network.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge_interface=None, label="Demo Net", subnets=osv_objects.subnet.SubnetList( objects=[]))) self.assertObjEqual(expect, actual) def test_nova_to_osvif_vif_agilio_ovs_forwarder(self): vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type=model.VIF_TYPE_AGILIO_OVS, address="22:52:25:62:e2:aa", profile={ "pci_slot": "0000:08:08.5", }, network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[]), vnic_type=model.VNIC_TYPE_VIRTIO_FORWARDER, details={ model.VIF_DETAILS_VHOSTUSER_MODE: 'client', model.VIF_DETAILS_VHOSTUSER_OVS_PLUG: True, model.VIF_DETAILS_VHOSTUSER_SOCKET: '/fake/socket', } ) actual = os_vif_util.nova_to_osvif_vif(vif) expect = osv_objects.vif.VIFVHostUser( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", active=False, address="22:52:25:62:e2:aa", has_traffic_filtering=False, plugin="agilio_ovs", port_profile=osv_objects.vif.VIFPortProfileOVSRepresentor( interface_id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", representor_address="0000:08:08.5", representor_name="nicdc065497-3c", datapath_offload=osv_objects.vif.DatapathOffloadRepresentor( representor_name="nicdc065497-3c", representor_address="0000:08:08.5")), preserve_on_delete=False, vif_name="nicdc065497-3c", path='/fake/socket', mode='client', network=osv_objects.network.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge_interface=None, label="Demo Net", subnets=osv_objects.subnet.SubnetList( objects=[]))) self.assertObjEqual(expect, actual) def test_nova_to_osvif_vif_ovs_plain(self): vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type=model.VIF_TYPE_OVS, address="22:52:25:62:e2:aa", network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[]), details={ model.VIF_DETAILS_PORT_FILTER: True, model.VIF_DETAILS_OVS_DATAPATH_TYPE: model.VIF_DETAILS_OVS_DATAPATH_SYSTEM }, ) actual = os_vif_util.nova_to_osvif_vif(vif) expect = osv_objects.vif.VIFOpenVSwitch( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", active=False, address="22:52:25:62:e2:aa", has_traffic_filtering=True, plugin="ovs", port_profile=osv_objects.vif.VIFPortProfileOpenVSwitch( interface_id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", datapath_type=model.VIF_DETAILS_OVS_DATAPATH_SYSTEM), preserve_on_delete=False, vif_name="nicdc065497-3c", network=osv_objects.network.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge_interface=None, label="Demo Net", subnets=osv_objects.subnet.SubnetList( objects=[]))) self.assertObjEqual(expect, actual) def test_nova_to_osvif_vif_ovs_hybrid(self): vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type=model.VIF_TYPE_OVS, address="22:52:25:62:e2:aa", network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[]), details={ model.VIF_DETAILS_PORT_FILTER: False, model.VIF_DETAILS_OVS_HYBRID_PLUG: True, model.VIF_DETAILS_OVS_DATAPATH_TYPE: model.VIF_DETAILS_OVS_DATAPATH_SYSTEM }, ) actual = os_vif_util.nova_to_osvif_vif(vif) expect = osv_objects.vif.VIFBridge( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", active=False, address="22:52:25:62:e2:aa", has_traffic_filtering=False, plugin="ovs", bridge_name="qbrdc065497-3c", port_profile=osv_objects.vif.VIFPortProfileOpenVSwitch( interface_id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", datapath_type="system"), preserve_on_delete=False, vif_name="nicdc065497-3c", 
network=osv_objects.network.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge_interface=None, label="Demo Net", subnets=osv_objects.subnet.SubnetList( objects=[]))) self.assertObjEqual(expect, actual) def test_nova_to_osvif_ovs_with_vnic_direct(self): vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type=model.VIF_TYPE_OVS, address="22:52:25:62:e2:aa", vnic_type=model.VNIC_TYPE_DIRECT, network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[]), profile={'pci_slot': '0000:0a:00.1'} ) actual = os_vif_util.nova_to_osvif_vif(vif) expect = osv_objects.vif.VIFHostDevice( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", active=False, address="22:52:25:62:e2:aa", dev_address='0000:0a:00.1', dev_type=os_vif_fields.VIFHostDeviceDevType.ETHERNET, plugin="ovs", port_profile=osv_objects.vif.VIFPortProfileOVSRepresentor( interface_id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", representor_name="nicdc065497-3c", representor_address="0000:0a:00.1", datapath_offload=osv_objects.vif.DatapathOffloadRepresentor( representor_name="nicdc065497-3c", representor_address="0000:0a:00.1")), has_traffic_filtering=False, preserve_on_delete=False, network=osv_objects.network.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge_interface=None, label="Demo Net", subnets=osv_objects.subnet.SubnetList( objects=[]))) self.assertObjEqual(expect, actual) def test_nova_to_osvif_vhostuser_ovs(self): vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type=model.VIF_TYPE_VHOSTUSER, address="22:52:25:62:e2:aa", network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[]), details={ model.VIF_DETAILS_VHOSTUSER_MODE: 'client', model.VIF_DETAILS_VHOSTUSER_OVS_PLUG: True, model.VIF_DETAILS_VHOSTUSER_SOCKET: '/fake/socket', model.VIF_DETAILS_PORT_FILTER: True, model.VIF_DETAILS_OVS_DATAPATH_TYPE: model.VIF_DETAILS_OVS_DATAPATH_SYSTEM }, ) actual = os_vif_util.nova_to_osvif_vif(vif) expect = osv_objects.vif.VIFVHostUser( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", active=False, address="22:52:25:62:e2:aa", plugin="ovs", port_profile=osv_objects.vif.VIFPortProfileOpenVSwitch( interface_id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", datapath_type=model.VIF_DETAILS_OVS_DATAPATH_SYSTEM), vif_name="vhudc065497-3c", path='/fake/socket', mode='client', has_traffic_filtering=True, preserve_on_delete=False, network=osv_objects.network.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge_interface=None, label="Demo Net", subnets=osv_objects.subnet.SubnetList( objects=[]))) self.assertObjEqual(expect, actual) def test_nova_to_osvif_vhostuser_ovs_no_socket_path(self): vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type=model.VIF_TYPE_VHOSTUSER, address="22:52:25:62:e2:aa", network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[]), details={ model.VIF_DETAILS_VHOSTUSER_MODE: 'client', model.VIF_DETAILS_VHOSTUSER_OVS_PLUG: True, model.VIF_DETAILS_PORT_FILTER: True } ) self.assertRaises(exception.VifDetailsMissingVhostuserSockPath, os_vif_util.nova_to_osvif_vif, vif) def test_nova_to_osvif_vhostuser_non_ovs(self): vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", active=False, type=model.VIF_TYPE_VHOSTUSER, address="22:52:25:62:e2:aa", network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[]), details={ model.VIF_DETAILS_VHOSTUSER_MODE: 'client', model.VIF_DETAILS_VHOSTUSER_OVS_PLUG: False, model.VIF_DETAILS_VHOSTUSER_SOCKET: '/fake/socket' } ) actual = 
os_vif_util.nova_to_osvif_vif(vif) expect = osv_objects.vif.VIFVHostUser( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", active=False, address="22:52:25:62:e2:aa", plugin="noop", vif_name="nicdc065497-3c", path='/fake/socket', mode='client', has_traffic_filtering=False, preserve_on_delete=False, network=osv_objects.network.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge_interface=None, label="Demo Net", mtu=None, subnets=osv_objects.subnet.SubnetList( objects=[]))) self.assertObjEqual(expect, actual) def test_nova_to_osvif_vhostuser_fp_ovs_hybrid(self): vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type=model.VIF_TYPE_VHOSTUSER, address="22:52:25:62:e2:aa", network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", mtu="1500", subnets=[]), details={ model.VIF_DETAILS_VHOSTUSER_MODE: 'client', model.VIF_DETAILS_VHOSTUSER_SOCKET: '/fake/socket', model.VIF_DETAILS_VHOSTUSER_FP_PLUG: True, model.VIF_DETAILS_VHOSTUSER_OVS_PLUG: True, model.VIF_DETAILS_OVS_HYBRID_PLUG: True, model.VIF_DETAILS_PORT_FILTER: False, model.VIF_DETAILS_OVS_DATAPATH_TYPE: model.VIF_DETAILS_OVS_DATAPATH_SYSTEM }, ) actual = os_vif_util.nova_to_osvif_vif(vif) expect = osv_objects.vif.VIFVHostUser( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", active=False, address="22:52:25:62:e2:aa", plugin="vhostuser_fp", port_profile=osv_objects.vif.VIFPortProfileFPOpenVSwitch( interface_id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", bridge_name="qbrdc065497-3c", hybrid_plug=True, datapath_type=model.VIF_DETAILS_OVS_DATAPATH_SYSTEM), vif_name="nicdc065497-3c", path='/fake/socket', mode='client', has_traffic_filtering=False, preserve_on_delete=False, network=osv_objects.network.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge_interface=None, label="Demo Net", mtu="1500", subnets=osv_objects.subnet.SubnetList( objects=[]))) self.assertObjEqual(expect, actual) def test_nova_to_osvif_vhostuser_fp_ovs_plain(self): vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type=model.VIF_TYPE_VHOSTUSER, address="22:52:25:62:e2:aa", network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", mtu="1500", bridge="br-int", subnets=[]), details={ model.VIF_DETAILS_VHOSTUSER_MODE: 'client', model.VIF_DETAILS_VHOSTUSER_SOCKET: '/fake/socket', model.VIF_DETAILS_VHOSTUSER_FP_PLUG: True, model.VIF_DETAILS_VHOSTUSER_OVS_PLUG: True, model.VIF_DETAILS_OVS_HYBRID_PLUG: False, model.VIF_DETAILS_PORT_FILTER: True, model.VIF_DETAILS_OVS_DATAPATH_TYPE: model.VIF_DETAILS_OVS_DATAPATH_SYSTEM }, ) actual = os_vif_util.nova_to_osvif_vif(vif) expect = osv_objects.vif.VIFVHostUser( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", active=False, address="22:52:25:62:e2:aa", plugin="vhostuser_fp", port_profile=osv_objects.vif.VIFPortProfileFPOpenVSwitch( interface_id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", bridge_name="br-int", hybrid_plug=False, datapath_type=model.VIF_DETAILS_OVS_DATAPATH_SYSTEM), vif_name="nicdc065497-3c", path='/fake/socket', mode='client', has_traffic_filtering=True, preserve_on_delete=False, network=osv_objects.network.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge_interface=None, label="Demo Net", mtu="1500", bridge="br-int", subnets=osv_objects.subnet.SubnetList( objects=[]))) self.assertObjEqual(expect, actual) def test_nova_to_osvif_vhostuser_fp_lb(self): vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type=model.VIF_TYPE_VHOSTUSER, address="22:52:25:62:e2:aa", network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", 
mtu="1500", bridge="brq12345", subnets=[]), details={ model.VIF_DETAILS_VHOSTUSER_MODE: 'client', model.VIF_DETAILS_VHOSTUSER_SOCKET: '/fake/socket', model.VIF_DETAILS_VHOSTUSER_FP_PLUG: True, model.VIF_DETAILS_VHOSTUSER_OVS_PLUG: False, } ) actual = os_vif_util.nova_to_osvif_vif(vif) expect = osv_objects.vif.VIFVHostUser( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", active=False, address="22:52:25:62:e2:aa", plugin="vhostuser_fp", port_profile=osv_objects.vif.VIFPortProfileFPBridge( bridge_name="brq12345"), vif_name="nicdc065497-3c", path='/fake/socket', mode='client', has_traffic_filtering=False, preserve_on_delete=False, network=osv_objects.network.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge_interface=None, label="Demo Net", mtu="1500", bridge="brq12345", subnets=osv_objects.subnet.SubnetList( objects=[]))) self.assertObjEqual(expect, actual) def test_nova_to_osvif_vhostuser_fp_no_socket_path(self): vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type=model.VIF_TYPE_VHOSTUSER, address="22:52:25:62:e2:aa", network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[]), details={ model.VIF_DETAILS_VHOSTUSER_MODE: 'client', model.VIF_DETAILS_VHOSTUSER_FP_PLUG: True, model.VIF_DETAILS_VHOSTUSER_OVS_PLUG: False, model.VIF_DETAILS_PORT_FILTER: True, } ) self.assertRaises(exception.VifDetailsMissingVhostuserSockPath, os_vif_util.nova_to_osvif_vif, vif) def test_nova_to_osvif_vif_ivs_plain(self): vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type=model.VIF_TYPE_IVS, address="22:52:25:62:e2:aa", network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[]), details={ model.VIF_DETAILS_PORT_FILTER: True, } ) actual = os_vif_util.nova_to_osvif_vif(vif) # expected vif_name is nic + vif_id, with total length 14 chars expected_vif_name = 'nicdc065497-3c' self.assertIsInstance(actual, osv_objects.vif.VIFGeneric) self.assertEqual(expected_vif_name, actual.vif_name) def test_nova_to_osvif_vif_ivs_bridged(self): vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type=model.VIF_TYPE_IVS, address="22:52:25:62:e2:aa", network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[]), details={ model.VIF_DETAILS_PORT_FILTER: True, model.VIF_DETAILS_OVS_HYBRID_PLUG: True, } ) actual = os_vif_util.nova_to_osvif_vif(vif) # expected vif_name is nic + vif_id, with total length 14 chars expected_vif_name = 'nicdc065497-3c' self.assertIsInstance(actual, osv_objects.vif.VIFBridge) self.assertEqual(expected_vif_name, actual.vif_name) def test_nova_to_osvif_vif_unknown(self): vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type="wibble", address="22:52:25:62:e2:aa", network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[]), ) ex = self.assertRaises(exception.NovaException, os_vif_util.nova_to_osvif_vif, vif) self.assertIn('Unsupported VIF type wibble', six.text_type(ex)) def test_nova_to_osvif_vif_binding_failed(self): vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type="binding_failed", address="22:52:25:62:e2:aa", network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[]),) self.assertIsNone(os_vif_util.nova_to_osvif_vif(vif)) def test_nova_to_osvif_vif_unbound(self): vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type="unbound", address="22:52:25:62:e2:aa", network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[]),) 
self.assertIsNone(os_vif_util.nova_to_osvif_vif(vif)) def test_nova_to_osvif_contrail_vrouter(self): """Test for the Contrail / Tungsten Fabric DPDK datapath.""" vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type=model.VIF_TYPE_VHOSTUSER, address="22:52:25:62:e2:aa", network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[]), details={ model.VIF_DETAILS_VHOSTUSER_MODE: 'client', model.VIF_DETAILS_VHOSTUSER_VROUTER_PLUG: True, model.VIF_DETAILS_VHOSTUSER_SOCKET: '/fake/socket', } ) actual = os_vif_util.nova_to_osvif_vif(vif) expect = osv_objects.vif.VIFVHostUser( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", active=False, address="22:52:25:62:e2:aa", plugin="contrail_vrouter", vif_name="nicdc065497-3c", path='/fake/socket', mode='client', has_traffic_filtering=False, preserve_on_delete=False, network=osv_objects.network.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge_interface=None, label="Demo Net", subnets=osv_objects.subnet.SubnetList( objects=[]))) self.assertObjEqual(expect, actual) def test_nova_to_osvif_contrail_vrouter_no_socket_path(self): """Test for the Contrail / Tungsten Fabric DPDK datapath.""" vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type=model.VIF_TYPE_VHOSTUSER, address="22:52:25:62:e2:aa", network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[]), details={ model.VIF_DETAILS_VHOSTUSER_MODE: 'client', model.VIF_DETAILS_VHOSTUSER_VROUTER_PLUG: True, } ) self.assertRaises(exception.VifDetailsMissingVhostuserSockPath, os_vif_util.nova_to_osvif_vif, vif) def test_nova_to_osvif_vrouter(self): """Test for the Contrail / Tungsten Fabric kernel datapath.""" vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type=model.VIF_TYPE_VROUTER, address="22:52:25:62:e2:aa", network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[]), ) actual = os_vif_util.nova_to_osvif_vif(vif) expect = osv_objects.vif.VIFGeneric( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", active=False, address="22:52:25:62:e2:aa", plugin="vrouter", vif_name="nicdc065497-3c", has_traffic_filtering=False, preserve_on_delete=False, network=osv_objects.network.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge_interface=None, label="Demo Net", subnets=osv_objects.subnet.SubnetList( objects=[]))) self.assertObjEqual(expect, actual) def test_nova_to_osvif_vrouter_direct(self): """Test for Contrail / Tungsten Fabric direct offloaded datapath.""" vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type=model.VIF_TYPE_VROUTER, address="22:52:25:62:e2:aa", profile={ "pci_slot": "0000:08:08.5", }, network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[]), vnic_type=model.VNIC_TYPE_DIRECT, ) actual = os_vif_util.nova_to_osvif_vif(vif) expect = osv_objects.vif.VIFHostDevice( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", active=False, has_traffic_filtering=False, address="22:52:25:62:e2:aa", dev_type=osv_objects.fields.VIFHostDeviceDevType.ETHERNET, dev_address="0000:08:08.5", plugin="vrouter", port_profile=osv_objects.vif.VIFPortProfileBase( datapath_offload=osv_objects.vif.DatapathOffloadRepresentor( representor_name="nicdc065497-3c", representor_address="0000:08:08.5") ), preserve_on_delete=False, vif_name="nicdc065497-3c", network=osv_objects.network.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge_interface=None, label="Demo Net", subnets=osv_objects.subnet.SubnetList( objects=[]))) self.assertObjEqual(expect, 
actual) def test_nova_to_osvif_vrouter_forwarder(self): """Test for Contrail / Tungsten Fabric indirect offloaded datapath.""" vif = model.VIF( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", type=model.VIF_TYPE_VROUTER, address="22:52:25:62:e2:aa", profile={ "pci_slot": "0000:08:08.5", }, network=model.Network( id="b82c1929-051e-481d-8110-4669916c7915", label="Demo Net", subnets=[]), details={ model.VIF_DETAILS_VHOSTUSER_MODE: 'client', model.VIF_DETAILS_VHOSTUSER_SOCKET: '/fake/socket', }, vnic_type=model.VNIC_TYPE_VIRTIO_FORWARDER, ) actual = os_vif_util.nova_to_osvif_vif(vif) expect = osv_objects.vif.VIFVHostUser( id="dc065497-3c8d-4f44-8fb4-e1d33c16a536", active=False, address="22:52:25:62:e2:aa", plugin="vrouter", vif_name="nicdc065497-3c", path='/fake/socket', mode='client', has_traffic_filtering=False, port_profile=osv_objects.vif.VIFPortProfileBase( datapath_offload=osv_objects.vif.DatapathOffloadRepresentor( representor_address="0000:08:08.5", representor_name="nicdc065497-3c") ), preserve_on_delete=False, network=osv_objects.network.Network( id="b82c1929-051e-481d-8110-4669916c7915", bridge_interface=None, label="Demo Net", subnets=osv_objects.subnet.SubnetList( objects=[]))) self.assertObjEqual(expect, actual) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/network/test_security_group.py0000664000175000017500000004377000000000000024052 0ustar00zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
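# Editor's note: illustrative sketch only; not part of the original Nova test
# suite and never invoked by it. It restates the pattern TestNeutronDriver
# below relies on: replace the Neutron client factory with a mock.Mock built
# against neutronclient's Client spec, call the nova security-group API shim,
# and assert the translated Neutron call. The project id here is a made-up
# value, and the sketch uses mock.patch() where the tests use self.stub_out().
def _example_security_group_list_pattern():
    """Hypothetical usage sketch; not called anywhere in this module."""
    import mock
    from neutronclient.v2_0 import client

    from nova import context
    from nova.network import security_group_api as sg_api

    ctxt = context.RequestContext('userid', 'my_tenantid')
    mocked_client = mock.Mock(spec=client.Client)
    mocked_client.list_security_groups.return_value = {'security_groups': []}

    # Route the Neutron client factory to the mock for the duration of the
    # call, then drive the API under test.
    with mock.patch('nova.network.neutron.get_client',
                    return_value=mocked_client):
        sg_api.list(ctxt, project='example-project-id')

    # sg_api.list() maps the 'project' keyword onto Neutron's tenant_id filter.
    mocked_client.list_security_groups.assert_called_once_with(
        tenant_id='example-project-id')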
# import mock from neutronclient.common import exceptions as n_exc from neutronclient.neutron import v2_0 as neutronv20 from neutronclient.v2_0 import client from oslo_utils.fixture import uuidsentinel as uuids from six.moves import range from nova import context from nova import exception from nova.network import security_group_api as sg_api from nova import objects from nova import test class TestNeutronDriver(test.NoDBTestCase): def setUp(self): super(TestNeutronDriver, self).setUp() self.context = context.RequestContext('userid', 'my_tenantid') setattr(self.context, 'auth_token', 'bff4a5a6b9eb4ea2a6efec6eefb77936') self.mocked_client = mock.Mock(spec=client.Client) self.stub_out('nova.network.neutron.get_client', lambda context: self.mocked_client) def test_list_with_project(self): project_id = '0af70a4d22cf4652824ddc1f2435dd85' security_groups_list = {'security_groups': []} self.mocked_client.list_security_groups.return_value = ( security_groups_list) sg_api.list(self.context, project=project_id) self.mocked_client.list_security_groups.assert_called_once_with( tenant_id = project_id) def test_list_with_all_tenants_and_admin_context(self): project_id = '0af70a4d22cf4652824ddc1f2435dd85' search_opts = {'all_tenants': 1} security_groups_list = {'security_groups': []} admin_context = context.RequestContext('user1', project_id, True) with mock.patch.object( self.mocked_client, 'list_security_groups', return_value=security_groups_list) as mock_list_secgroup: sg_api.list(admin_context, project=project_id, search_opts=search_opts) mock_list_secgroup.assert_called_once_with() def test_list_without_all_tenants_and_admin_context(self): project_id = '0af70a4d22cf4652824ddc1f2435dd85' security_groups_list = {'security_groups': []} admin_context = context.RequestContext('user1', project_id, True) with mock.patch.object( self.mocked_client, 'list_security_groups', return_value=security_groups_list) as mock_list_secgroup: sg_api.list(admin_context, project=project_id) mock_list_secgroup.assert_called_once_with(tenant_id=project_id) def test_list_with_all_tenants_not_admin(self): search_opts = {'all_tenants': 1} security_groups_list = {'security_groups': []} with mock.patch.object( self.mocked_client, 'list_security_groups', return_value=security_groups_list) as mock_list_secgroup: sg_api.list(self.context, project=self.context.project_id, search_opts=search_opts) mock_list_secgroup.assert_called_once_with( tenant_id=self.context.project_id) def test_create_security_group_with_bad_request(self): name = 'test-security-group' description = None body = {'security_group': {'name': name, 'description': description}} message = "Invalid input. Reason: 'None' is not a valid string." 
self.mocked_client.create_security_group.side_effect = ( n_exc.BadRequest(message=message)) self.assertRaises(exception.Invalid, sg_api.create_security_group, self.context, name, description) self.mocked_client.create_security_group.assert_called_once_with(body) def test_create_security_group_exceed_quota(self): name = 'test-security-group' description = 'test-security-group' body = {'security_group': {'name': name, 'description': description}} message = "Quota exceeded for resources: ['security_group']" self.mocked_client.create_security_group.side_effect = ( n_exc.NeutronClientException(status_code=409, message=message)) self.assertRaises(exception.SecurityGroupLimitExceeded, sg_api.create_security_group, self.context, name, description) self.mocked_client.create_security_group.assert_called_once_with(body) def test_create_security_group_rules_exceed_quota(self): vals = {'protocol': 'tcp', 'cidr': '0.0.0.0/0', 'parent_group_id': '7ae75663-277e-4a0e-8f87-56ea4e70cb47', 'group_id': None, 'from_port': 1025, 'to_port': 1025} body = {'security_group_rules': [{'remote_group_id': None, 'direction': 'ingress', 'protocol': 'tcp', 'ethertype': 'IPv4', 'port_range_max': 1025, 'port_range_min': 1025, 'security_group_id': '7ae75663-277e-4a0e-8f87-56ea4e70cb47', 'remote_ip_prefix': '0.0.0.0/0'}]} name = 'test-security-group' message = "Quota exceeded for resources: ['security_group_rule']" self.mocked_client.create_security_group_rule.side_effect = ( n_exc.NeutronClientException(status_code=409, message=message)) self.assertRaises(exception.SecurityGroupLimitExceeded, sg_api.add_rules, self.context, None, name, [vals]) self.mocked_client.create_security_group_rule.assert_called_once_with( body) def test_create_security_group_rules_bad_request(self): vals = {'protocol': 'icmp', 'cidr': '0.0.0.0/0', 'parent_group_id': '7ae75663-277e-4a0e-8f87-56ea4e70cb47', 'group_id': None, 'to_port': 255} body = {'security_group_rules': [{'remote_group_id': None, 'direction': 'ingress', 'protocol': 'icmp', 'ethertype': 'IPv4', 'port_range_max': 255, 'security_group_id': '7ae75663-277e-4a0e-8f87-56ea4e70cb47', 'remote_ip_prefix': '0.0.0.0/0'}]} name = 'test-security-group' message = "ICMP code (port-range-max) 255 is provided but ICMP type" \ " (port-range-min) is missing" self.mocked_client.create_security_group_rule.side_effect = ( n_exc.NeutronClientException(status_code=400, message=message)) self.assertRaises(exception.Invalid, sg_api.add_rules, self.context, None, name, [vals]) self.mocked_client.create_security_group_rule.assert_called_once_with( body) def test_list_security_group_with_no_port_range_and_not_tcp_udp_icmp(self): project_id = '0af70a4d22cf4652824ddc1f2435dd85' sg1 = {'description': 'default', 'id': '07f1362f-34f6-4136-819a-2dcde112269e', 'name': 'default', 'tenant_id': 'c166d9316f814891bcb66b96c4c891d6', 'security_group_rules': [{'direction': 'ingress', 'ethertype': 'IPv4', 'id': '0a4647f1-e1aa-488d-90e1-97a7d0293beb', 'port_range_max': None, 'port_range_min': None, 'protocol': '51', 'remote_group_id': None, 'remote_ip_prefix': None, 'security_group_id': '07f1362f-34f6-4136-819a-2dcde112269e', 'tenant_id': 'c166d9316f814891bcb66b96c4c891d6'}]} self.mocked_client.list_security_groups.return_value = ( {'security_groups': [sg1]}) result = sg_api.list(self.context, project=project_id) expected = [{'rules': [{'from_port': -1, 'protocol': '51', 'to_port': -1, 'parent_group_id': '07f1362f-34f6-4136-819a-2dcde112269e', 'cidr': '0.0.0.0/0', 'group_id': None, 'id': '0a4647f1-e1aa-488d-90e1-97a7d0293beb'}], 
'project_id': 'c166d9316f814891bcb66b96c4c891d6', 'id': '07f1362f-34f6-4136-819a-2dcde112269e', 'name': 'default', 'description': 'default'}] self.assertEqual(expected, result) self.mocked_client.list_security_groups.assert_called_once_with( tenant_id=project_id) def test_instances_security_group_bindings(self, detailed=False): server_id = 'c5a20e8d-c4b0-47cf-9dca-ebe4f758acb1' port1_id = '4c505aec-09aa-47bc-bcc0-940477e84dc0' port2_id = 'b3b31a53-6e29-479f-ae5c-00b7b71a6d44' sg1_id = '2f7ce969-1a73-4ef9-bbd6-c9a91780ecd4' sg2_id = '20c89ce5-9388-4046-896e-64ffbd3eb584' servers = [{'id': server_id}] ports = [{'id': port1_id, 'device_id': server_id, 'security_groups': [sg1_id]}, {'id': port2_id, 'device_id': server_id, 'security_groups': [sg2_id]}] port_list = {'ports': ports} sg1 = {'id': sg1_id, 'name': 'wol'} sg2 = {'id': sg2_id, 'name': 'eor'} security_groups_list = {'security_groups': [sg1, sg2]} self.mocked_client.list_ports.return_value = port_list self.mocked_client.list_security_groups.return_value = ( security_groups_list) with mock.patch.object( sg_api, '_convert_to_nova_security_group_format') as convert: result = sg_api.get_instances_security_groups_bindings( self.context, servers, detailed=detailed) if detailed: convert.assert_has_calls([mock.call(sg1), mock.call(sg2)], any_order=True) sg_bindings = {server_id: [ call() for call in convert.mock_calls ]} else: convert.assert_not_called() sg_bindings = {server_id: [{'name': 'wol'}, {'name': 'eor'}]} self.assertEqual(sg_bindings, result) self.mocked_client.list_ports.assert_called_once_with( device_id=[server_id]) expected_search_opts = {'id': mock.ANY} if not detailed: expected_search_opts['fields'] = ['id', 'name'] self.mocked_client.list_security_groups.assert_called_once_with( **expected_search_opts) self.assertEqual(sorted([sg1_id, sg2_id]), sorted(self.mocked_client.list_security_groups.call_args[1]['id'])) def test_instances_security_group_bindings_detailed(self): self.test_instances_security_group_bindings(detailed=True) def test_instances_security_group_bindings_port_not_found(self): server_id = 'c5a20e8d-c4b0-47cf-9dca-ebe4f758acb1' servers = [{'id': server_id}] self.mocked_client.list_ports.side_effect = n_exc.PortNotFoundClient() result = sg_api.get_instances_security_groups_bindings( self.context, servers) self.mocked_client.list_ports.assert_called_once_with( device_id=[server_id]) self.assertEqual({}, result) def _test_instances_security_group_bindings_scale(self, num_servers): max_query = 150 sg1_id = '2f7ce969-1a73-4ef9-bbd6-c9a91780ecd4' sg2_id = '20c89ce5-9388-4046-896e-64ffbd3eb584' sg1 = {'id': sg1_id, 'name': 'wol'} sg2 = {'id': sg2_id, 'name': 'eor'} security_groups_list = {'security_groups': [sg1, sg2]} servers = [] device_ids = [] ports = [] sg_bindings = {} for i in range(0, num_servers): server_id = "server-%d" % i port_id = "port-%d" % i servers.append({'id': server_id}) device_ids.append(server_id) ports.append({'id': port_id, 'device_id': server_id, 'security_groups': [sg1_id, sg2_id]}) sg_bindings[server_id] = [{'name': 'wol'}, {'name': 'eor'}] expected_args = [] return_values = [] for x in range(0, num_servers, max_query): expected_args.append( mock.call(device_id=device_ids[x:x + max_query])) return_values.append({'ports': ports[x:x + max_query]}) self.mocked_client.list_security_groups.return_value = ( security_groups_list) self.mocked_client.list_ports.side_effect = return_values result = sg_api.get_instances_security_groups_bindings( self.context, servers) self.assertEqual(sg_bindings, result) 
self.mocked_client.list_security_groups.assert_called_once_with( id=mock.ANY, fields=['id', 'name']) self.assertEqual(sorted([sg1_id, sg2_id]), sorted(self.mocked_client.list_security_groups.call_args[1]['id'])) self.assertEqual(expected_args, self.mocked_client.list_ports.call_args_list) def test_instances_security_group_bindings_less_than_max(self): self._test_instances_security_group_bindings_scale(100) def test_instances_security_group_bindings_max(self): self._test_instances_security_group_bindings_scale(150) def test_instances_security_group_bindings_more_then_max(self): self._test_instances_security_group_bindings_scale(300) def test_instances_security_group_bindings_with_hidden_sg(self): servers = [{'id': 'server_1'}] ports = [{'id': '1', 'device_id': 'dev_1', 'security_groups': ['1']}, {'id': '2', 'device_id': 'dev_1', 'security_groups': ['2']}] port_list = {'ports': ports} sg1 = {'id': '1', 'name': 'wol'} # User doesn't have access to sg2 security_groups_list = {'security_groups': [sg1]} sg_bindings = {'dev_1': [{'name': 'wol'}]} self.mocked_client.list_ports.return_value = port_list self.mocked_client.list_security_groups.return_value = ( security_groups_list) result = sg_api.get_instances_security_groups_bindings( self.context, servers) self.assertEqual(sg_bindings, result) self.mocked_client.list_ports.assert_called_once_with( device_id=['server_1']) self.mocked_client.list_security_groups.assert_called_once_with( id=mock.ANY, fields=['id', 'name']) self.assertEqual(['1', '2'], sorted(self.mocked_client.list_security_groups.call_args[1]['id'])) def test_instance_empty_security_groups(self): port_list = {'ports': [{'id': 1, 'device_id': uuids.instance, 'security_groups': []}]} self.mocked_client.list_ports.return_value = port_list result = sg_api.get_instance_security_groups( self.context, objects.Instance(uuid=uuids.instance)) self.assertEqual([], result) self.mocked_client.list_ports.assert_called_once_with( device_id=[uuids.instance]) def test_add_to_instance(self): sg_name = 'web_server' sg_id = '85cc3048-abc3-43cc-89b3-377341426ac5' port_id = 1 port_list = {'ports': [{'id': port_id, 'device_id': uuids.instance, 'fixed_ips': [{'ip_address': '10.0.0.1'}], 'port_security_enabled': True, 'security_groups': []}]} self.mocked_client.list_ports.return_value = port_list with mock.patch.object(neutronv20, 'find_resourceid_by_name_or_id', return_value=sg_id): sg_api.add_to_instance( self.context, objects.Instance(uuid=uuids.instance), sg_name) self.mocked_client.list_ports.assert_called_once_with( device_id=uuids.instance) self.mocked_client.update_port.assert_called_once_with( port_id, {'port': {'security_groups': [sg_id]}}) def test_add_to_instance_with_bad_request(self): sg_name = 'web_server' sg_id = '85cc3048-abc3-43cc-89b3-377341426ac5' port_id = 1 port_list = {'ports': [{'id': port_id, 'device_id': uuids.instance, 'fixed_ips': [{'ip_address': '10.0.0.1'}], 'port_security_enabled': True, 'security_groups': []}]} self.mocked_client.list_ports.return_value = port_list self.mocked_client.update_port.side_effect = ( n_exc.BadRequest(message='error')) with mock.patch.object(neutronv20, 'find_resourceid_by_name_or_id', return_value=sg_id): self.assertRaises(exception.SecurityGroupCannotBeApplied, sg_api.add_to_instance, self.context, objects.Instance(uuid=uuids.instance), sg_name) self.mocked_client.list_ports.assert_called_once_with( device_id=uuids.instance) self.mocked_client.update_port.assert_called_once_with( port_id, {'port': {'security_groups': [sg_id]}}) class 
TestNeutronDriverWithoutMock(test.NoDBTestCase): def test_validate_property(self): sg_api.validate_property('foo', 'name', None) sg_api.validate_property('', 'name', None) self.assertRaises(exception.Invalid, sg_api.validate_property, 'a' * 256, 'name', None) self.assertRaises(exception.Invalid, sg_api.validate_property, None, 'name', None) def test_populate_security_groups(self): r = sg_api.populate_security_groups(['default', uuids.secgroup_uuid]) self.assertIsInstance(r, objects.SecurityGroupList) self.assertEqual(2, len(r)) self.assertEqual('default', r[0].name) self.assertEqual(uuids.secgroup_uuid, r[1].uuid) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5464683 nova-21.2.4/nova/tests/unit/notifications/0000775000175000017500000000000000000000000020523 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/notifications/__init__.py0000664000175000017500000000000000000000000022622 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5464683 nova-21.2.4/nova/tests/unit/notifications/objects/0000775000175000017500000000000000000000000022154 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/notifications/objects/__init__.py0000664000175000017500000000000000000000000024253 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/notifications/objects/test_flavor.py0000664000175000017500000001425100000000000025061 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
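# populate_security_groups() is exercised above with a mix of a plain name
# ('default') and a UUID, and the assertions show the name landing in .name
# and the UUID in .uuid.  A rough, stdlib-only sketch of that name-vs-uuid
# split (an illustration of the behaviour the assertions pin down, not the
# driver's actual code):
import uuid as _uuid


def _split_names_and_uuids(values):
    """Return (names, uuid_strings) from a mixed list of group identifiers."""
    names, uuid_strings = [], []
    for value in values:
        try:
            _uuid.UUID(value)
            uuid_strings.append(value)
        except (TypeError, ValueError):
            names.append(value)
    return names, uuid_strings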
import copy import mock from nova import context from nova.notifications.objects import flavor as flavor_notification from nova import objects from nova.objects import fields from nova import test from nova.tests.unit.objects.test_flavor import fake_flavor PROJECTS_SENTINEL = object() class TestFlavorNotification(test.TestCase): def setUp(self): self.ctxt = context.get_admin_context() super(TestFlavorNotification, self).setUp() @mock.patch('nova.notifications.objects.flavor.FlavorNotification') def _verify_notification(self, flavor_obj, flavor, action, mock_notification, project_id=None, expected_projects=PROJECTS_SENTINEL): notification = mock_notification if action == "CREATE": flavor_obj.create() elif action == "DELETE": flavor_obj.destroy() elif action == "ADD_ACCESS": action = "UPDATE" flavor_obj.add_access(project_id) elif action == "REMOVE_ACCESS": action = "UPDATE" flavor_obj.remove_access(project_id) else: flavor_obj.save() self.assertTrue(notification.called) event_type = notification.call_args[1]['event_type'] priority = notification.call_args[1]['priority'] publisher = notification.call_args[1]['publisher'] payload = notification.call_args[1]['payload'] self.assertEqual("fake-mini", publisher.host) self.assertEqual("nova-api", publisher.source) self.assertEqual(fields.NotificationPriority.INFO, priority) self.assertEqual('flavor', event_type.object) self.assertEqual(getattr(fields.NotificationAction, action), event_type.action) notification.return_value.emit.assert_called_once_with(self.ctxt) schema = flavor_notification.FlavorPayload.SCHEMA for field in schema: if field == 'projects' and expected_projects != PROJECTS_SENTINEL: self.assertEqual(expected_projects, getattr(payload, field)) elif field in flavor_obj: self.assertEqual(flavor_obj[field], getattr(payload, field)) else: self.fail('Missing check for field %s in flavor_obj.' 
% field) @mock.patch('nova.objects.Flavor._flavor_create') def test_flavor_create_with_notification(self, mock_create): flavor = copy.deepcopy(fake_flavor) flavor_obj = objects.Flavor(context=self.ctxt) flavor_obj.extra_specs = flavor['extra_specs'] flavorid = '1' flavor['flavorid'] = flavorid flavor['id'] = flavorid mock_create.return_value = flavor self._verify_notification(flavor_obj, flavor, 'CREATE') @mock.patch('nova.objects.Flavor._flavor_extra_specs_del') def test_flavor_update_with_notification(self, mock_delete): flavor = copy.deepcopy(fake_flavor) flavorid = '1' flavor['flavorid'] = flavorid flavor['id'] = flavorid flavor_obj = objects.Flavor(context=self.ctxt, **flavor) flavor_obj.obj_reset_changes() del flavor_obj.extra_specs['foo'] del flavor['extra_specs']['foo'] self._verify_notification(flavor_obj, flavor, "UPDATE") projects = ['project-1', 'project-2'] flavor_obj.projects = projects flavor['projects'] = projects self._verify_notification(flavor_obj, flavor, "UPDATE") @mock.patch('nova.objects.Flavor._add_access') @mock.patch('nova.objects.Flavor._remove_access') def test_flavor_access_with_notification(self, mock_remove_access, mock_add_access): flavor = copy.deepcopy(fake_flavor) flavorid = '1' flavor['flavorid'] = flavorid flavor['id'] = flavorid flavor_obj = objects.Flavor(context=self.ctxt, **flavor) flavor_obj.obj_reset_changes() self._verify_notification(flavor_obj, flavor, "ADD_ACCESS", project_id="project1") self._verify_notification(flavor_obj, flavor, "REMOVE_ACCESS", project_id="project1") @mock.patch('nova.objects.Flavor._flavor_destroy') def test_flavor_destroy_with_notification(self, mock_destroy): flavor = copy.deepcopy(fake_flavor) flavorid = '1' flavor['flavorid'] = flavorid flavor['id'] = flavorid mock_destroy.return_value = flavor flavor_obj = objects.Flavor(context=self.ctxt, **flavor) flavor_obj.obj_reset_changes() self.assertNotIn('projects', flavor_obj) # We specifically expect there to not be any projects as we don't want # to try and lazy-load them from the main database and end up with []. self._verify_notification(flavor_obj, flavor, "DELETE", expected_projects=None) @mock.patch('nova.objects.Flavor._flavor_destroy') def test_flavor_destroy_with_notification_and_projects(self, mock_destroy): """Tests the flavor-delete notification with flavor.projects loaded.""" flavor = copy.deepcopy(fake_flavor) flavorid = '1' flavor['flavorid'] = flavorid flavor['id'] = flavorid mock_destroy.return_value = flavor flavor_obj = objects.Flavor( context=self.ctxt, projects=['foo'], **flavor) flavor_obj.obj_reset_changes() self.assertIn('projects', flavor_obj) self.assertEqual(['foo'], flavor_obj.projects) # Since projects is loaded we shouldn't try to lazy-load it. self._verify_notification(flavor_obj, flavor, "DELETE") ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/notifications/objects/test_instance.py0000664000175000017500000001643400000000000025401 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova import context as nova_context from nova.network import model as network_model from nova.notifications import base as notification_base from nova.notifications.objects import instance as instance_notification from nova import objects from nova import test from nova.tests.unit import fake_instance class TestInstanceNotification(test.NoDBTestCase): def setUp(self): super(TestInstanceNotification, self).setUp() self.test_keys = ['memory_mb', 'vcpus', 'root_gb', 'ephemeral_gb', 'swap'] self.flavor_values = {k: 123 for k in self.test_keys} instance_values = {k: 456 for k in self.test_keys} flavor = objects.Flavor(flavorid='test-flavor', name='test-flavor', disabled=False, projects=[], is_public=True, extra_specs={}, **self.flavor_values) info_cache = objects.InstanceInfoCache( network_info=network_model.NetworkInfo()) self.instance = objects.Instance( flavor=flavor, info_cache=info_cache, metadata={}, uuid=uuids.instance1, locked=False, auto_disk_config=False, system_metadata={}, **instance_values) self.payload = { 'bandwidth': {}, 'audit_period_ending': timeutils.utcnow(), 'audit_period_beginning': timeutils.utcnow(), } @mock.patch('nova.notifications.objects.instance.' 'InstanceUpdateNotification._emit') def test_send_version_instance_update_uses_flavor(self, mock_emit): # instance.update notification needs some tags value to avoid lazy-load self.instance.tags = objects.TagList() # Make sure that the notification payload chooses the values in # instance.flavor.$value instead of instance.$value mock_context = mock.MagicMock() mock_context.project_id = 'fake_project_id' mock_context.user_id = 'fake_user_id' mock_context.request_id = 'fake_req_id' notification_base._send_versioned_instance_update( mock_context, self.instance, self.payload, 'host', 'compute') payload = mock_emit.call_args_list[0][1]['payload']['nova_object.data'] flavor_payload = payload['flavor']['nova_object.data'] data = {k: flavor_payload[k] for k in self.test_keys} self.assertEqual(self.flavor_values, data) @mock.patch('nova.rpc.NOTIFIER') @mock.patch('nova.notifications.objects.instance.' 'InstanceUpdatePayload.__init__', return_value=None) @mock.patch('nova.notifications.objects.instance.' 'InstanceUpdateNotification.__init__', return_value=None) def test_send_versioned_instance_notification_is_not_called_disabled( self, mock_notification, mock_payload, mock_notifier): mock_notifier.is_enabled.return_value = False notification_base._send_versioned_instance_update( mock.MagicMock(), self.instance, self.payload, 'host', 'compute') self.assertFalse(mock_payload.called) self.assertFalse(mock_notification.called) @mock.patch('nova.notifications.objects.instance.' 'InstanceUpdatePayload.__init__', return_value=None) @mock.patch('nova.notifications.objects.instance.' 
'InstanceUpdateNotification.__init__', return_value=None) def test_send_versioned_instance_notification_is_not_called_unversioned( self, mock_notification, mock_payload): self.flags(notification_format='unversioned', group='notifications') notification_base._send_versioned_instance_update( mock.MagicMock(), self.instance, self.payload, 'host', 'compute') self.assertFalse(mock_payload.called) self.assertFalse(mock_notification.called) def test_instance_payload_request_id_periodic_task(self): """Tests that creating an InstancePayload from the type of request context used during a periodic task will not populate the payload request_id field since it is not an end user request. """ ctxt = nova_context.get_admin_context() instance = fake_instance.fake_instance_obj(ctxt) # Set some other fields otherwise populate_schema tries to hit the DB. instance.metadata = {} instance.system_metadata = {} instance.info_cache = objects.InstanceInfoCache( network_info=network_model.NetworkInfo([])) payload = instance_notification.InstancePayload(ctxt, instance) self.assertIsNone(payload.request_id) class TestBlockDevicePayload(test.NoDBTestCase): @mock.patch('nova.objects.instance.Instance.get_bdms') def test_payload_contains_volume_bdms_if_requested(self, mock_get_bdms): self.flags(bdms_in_notifications='True', group='notifications') context = mock.Mock() instance = objects.Instance(uuid=uuids.instance_uuid) image_bdm = objects.BlockDeviceMapping( **{'context': context, 'source_type': 'image', 'destination_type': 'local', 'image_id': uuids.image_id, 'volume_id': None, 'device_name': '/dev/vda', 'instance_uuid': instance.uuid}) volume_bdm = objects.BlockDeviceMapping( **{'context': context, 'source_type': 'volume', 'destination_type': 'volume', 'volume_id': uuids.volume_id, 'device_name': '/dev/vdb', 'instance_uuid': instance.uuid, 'boot_index': 0, 'delete_on_termination': True, 'tag': 'my-tag'}) mock_get_bdms.return_value = [image_bdm, volume_bdm] bdms = instance_notification.BlockDevicePayload.from_instance( instance) self.assertEqual(1, len(bdms)) bdm = bdms[0] self.assertIsInstance(bdm, instance_notification.BlockDevicePayload) self.assertEqual('/dev/vdb', bdm.device_name) self.assertEqual(0, bdm.boot_index) self.assertTrue(bdm.delete_on_termination) self.assertEqual('my-tag', bdm.tag) self.assertEqual(uuids.volume_id, bdm.volume_id) @mock.patch('nova.objects.instance.Instance.get_bdms', return_value=mock.NonCallableMock()) def test_bdms_are_skipped_by_default(self, mock_get_bdms): instance = objects.Instance(uuid=uuids.instance_uuid) bmds = instance_notification.BlockDevicePayload.from_instance( instance) self.assertIsNone(bmds) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/notifications/objects/test_notification.py0000664000175000017500000005113700000000000026262 0ustar00zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
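# The two BlockDevicePayload tests above assert that nothing is emitted
# unless the notifications.bdms_in_notifications option is enabled, and that
# only destination_type == 'volume' mappings make it into the payload (the
# image-backed BDM is dropped).  A minimal sketch of that filter over
# dict-like BDMs (field names mirror the fixtures above; this is an
# illustration, not the payload class itself):
def volume_bdms_for_payload(bdms, bdms_in_notifications):
    """Return the BDMs a notification payload would carry, or None."""
    if not bdms_in_notifications:
        return None
    return [bdm for bdm in bdms if bdm.get('destination_type') == 'volume']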
import collections import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from oslo_versionedobjects import fixture from nova import exception from nova.notifications.objects import base as notification from nova import objects from nova.objects import base from nova.objects import fields from nova import test from nova.tests.unit.objects import test_objects class TestNotificationBase(test.NoDBTestCase): @base.NovaObjectRegistry.register_if(False) class TestObject(base.NovaObject): VERSION = '1.0' fields = { 'field_1': fields.StringField(), 'field_2': fields.IntegerField(), 'not_important_field': fields.IntegerField(), 'lazy_field': fields.IntegerField() } def obj_load_attr(self, attrname): if attrname == 'lazy_field': self.lazy_field = 42 else: raise exception.ObjectActionError( action='obj_load_attr', reason='attribute %s not lazy-loadable' % attrname) def __init__(self, not_important_field): super(TestNotificationBase.TestObject, self).__init__() # field1 and field_2 simulates that some fields are initialized # outside of the object's ctor self.not_important_field = not_important_field @base.NovaObjectRegistry.register_if(False) class TestNotificationPayload(notification.NotificationPayloadBase): VERSION = '1.0' SCHEMA = { 'field_1': ('source_field', 'field_1'), 'field_2': ('source_field', 'field_2'), 'lazy_field': ('source_field', 'lazy_field') } fields = { 'extra_field': fields.StringField(), # filled by ctor # filled by the schema 'field_1': fields.StringField(nullable=True), 'field_2': fields.IntegerField(), # filled by the schema 'lazy_field': fields.IntegerField() # filled by the schema } def __init__(self, extra_field, source_field): super(TestNotificationBase.TestNotificationPayload, self).__init__() self.extra_field = extra_field self.populate_schema(source_field=source_field) @base.NovaObjectRegistry.register_if(False) class TestNotificationPayloadEmptySchema( notification.NotificationPayloadBase): VERSION = '1.0' fields = { 'extra_field': fields.StringField(), # filled by ctor } def __init__(self, extra_field): super(TestNotificationBase.TestNotificationPayloadEmptySchema, self).__init__() self.extra_field = extra_field @notification.notification_sample('test-update-1.json') @notification.notification_sample('test-update-2.json') @base.NovaObjectRegistry.register_if(False) class TestNotification(notification.NotificationBase): VERSION = '1.0' fields = { 'payload': fields.ObjectField('TestNotificationPayload') } @base.NovaObjectRegistry.register_if(False) class TestNotificationEmptySchema(notification.NotificationBase): VERSION = '1.0' fields = { 'payload': fields.ObjectField('TestNotificationPayloadEmptySchema') } fake_service = { 'created_at': timeutils.utcnow().replace(microsecond=0), 'updated_at': None, 'deleted_at': None, 'deleted': False, 'id': 123, 'uuid': uuids.service, 'host': 'fake-host', 'binary': 'nova-compute', 'topic': 'fake-service-topic', 'report_count': 1, 'forced_down': False, 'disabled': False, 'disabled_reason': None, 'last_seen_up': None, 'version': 1} expected_payload = { 'nova_object.name': 'TestNotificationPayload', 'nova_object.data': { 'extra_field': 'test string', 'field_1': 'test1', 'field_2': 15, 'lazy_field': 42}, 'nova_object.version': '1.0', 'nova_object.namespace': 'nova'} def setUp(self): super(TestNotificationBase, self).setUp() with mock.patch( 'nova.db.api.service_update') as mock_db_service_update: self.service_obj = objects.Service(context=mock.sentinel.context, id=self.fake_service['id']) 
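# TestNotificationPayload above is driven by populate_schema(): every SCHEMA
# entry maps a payload field to (constructor kwarg, attribute on that kwarg),
# so 'lazy_field': ('source_field', 'lazy_field') copies
# source_field.lazy_field into the payload, triggering obj_load_attr() when
# the attribute has not been set yet.  A stripped-down sketch of that copy
# loop (illustrative; the real NotificationPayloadBase also tolerates unset
# nullable source fields, as the tests further down show):
def populate_from_schema(payload, schema, **kwargs):
    """Copy the attributes named by schema from the source kwargs onto payload."""
    for payload_field, (kwarg_name, source_attr) in schema.items():
        source = kwargs[kwarg_name]
        setattr(payload, payload_field, getattr(source, source_attr))
    payload.populated = True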
self.service_obj.obj_reset_changes(['version']) mock_db_service_update.return_value = self.fake_service self.service_obj.save() self.my_obj = self.TestObject(not_important_field=13) self.my_obj.field_1 = 'test1' self.my_obj.field_2 = 15 self.payload = self.TestNotificationPayload( extra_field='test string', source_field=self.my_obj) self.notification = self.TestNotification( event_type=notification.EventType( object='test_object', action=fields.NotificationAction.UPDATE, phase=fields.NotificationPhase.START), publisher=notification.NotificationPublisher.from_service_obj( self.service_obj), priority=fields.NotificationPriority.INFO, payload=self.payload) def _verify_notification(self, mock_notifier, mock_context, expected_event_type, expected_payload): mock_notifier.prepare.assert_called_once_with( publisher_id='nova-compute:fake-host') mock_notify = mock_notifier.prepare.return_value.info self.assertTrue(mock_notify.called) self.assertEqual(mock_notify.call_args[0][0], mock_context) self.assertEqual(mock_notify.call_args[1]['event_type'], expected_event_type) actual_payload = mock_notify.call_args[1]['payload'] self.assertJsonEqual(expected_payload, actual_payload) @mock.patch('nova.rpc.LEGACY_NOTIFIER') @mock.patch('nova.rpc.NOTIFIER') def test_emit_notification(self, mock_notifier, mock_legacy): mock_context = mock.Mock() mock_context.to_dict.return_value = {} self.notification.emit(mock_context) self._verify_notification( mock_notifier, mock_context, expected_event_type='test_object.update.start', expected_payload=self.expected_payload) self.assertFalse(mock_legacy.called) @mock.patch('nova.rpc.NOTIFIER') def test_emit_with_host_and_binary_as_publisher(self, mock_notifier): noti = self.TestNotification( event_type=notification.EventType( object='test_object', action=fields.NotificationAction.UPDATE), publisher=notification.NotificationPublisher( host='fake-host', source='nova-compute'), priority=fields.NotificationPriority.INFO, payload=self.payload) mock_context = mock.Mock() mock_context.to_dict.return_value = {} noti.emit(mock_context) self._verify_notification( mock_notifier, mock_context, expected_event_type='test_object.update', expected_payload=self.expected_payload) @mock.patch('nova.rpc.LEGACY_NOTIFIER') @mock.patch('nova.rpc.NOTIFIER') def test_emit_event_type_without_phase(self, mock_notifier, mock_legacy): noti = self.TestNotification( event_type=notification.EventType( object='test_object', action=fields.NotificationAction.UPDATE), publisher=notification.NotificationPublisher.from_service_obj( self.service_obj), priority=fields.NotificationPriority.INFO, payload=self.payload) mock_context = mock.Mock() mock_context.to_dict.return_value = {} noti.emit(mock_context) self._verify_notification( mock_notifier, mock_context, expected_event_type='test_object.update', expected_payload=self.expected_payload) self.assertFalse(mock_legacy.called) @mock.patch('nova.rpc.NOTIFIER') def test_not_possible_to_emit_if_not_populated(self, mock_notifier): payload = self.TestNotificationPayload( extra_field='test string', source_field=self.my_obj) payload.populated = False noti = self.TestNotification( event_type=notification.EventType( object='test_object', action=fields.NotificationAction.UPDATE), publisher=notification.NotificationPublisher.from_service_obj( self.service_obj), priority=fields.NotificationPriority.INFO, payload=payload) mock_context = mock.Mock() self.assertRaises(AssertionError, noti.emit, mock_context) self.assertFalse(mock_notifier.called) def 
test_lazy_load_source_field(self): my_obj = self.TestObject(not_important_field=13) my_obj.field_1 = 'test1' my_obj.field_2 = 15 payload = self.TestNotificationPayload(extra_field='test string', source_field=my_obj) self.assertEqual(42, payload.lazy_field) def test_uninited_source_field_defaulted_to_none(self): my_obj = self.TestObject(not_important_field=13) # intentionally not initializing field_1 to simulate an uninited but # nullable field my_obj.field_2 = 15 payload = self.TestNotificationPayload(extra_field='test string', source_field=my_obj) self.assertIsNone(payload.field_1) def test_uninited_source_field_not_nullable_payload_field_fails(self): my_obj = self.TestObject(not_important_field=13) # intentionally not initializing field_2 to simulate an uninited no # nullable field my_obj.field_1 = 'test1' self.assertRaises(ValueError, self.TestNotificationPayload, extra_field='test string', source_field=my_obj) @mock.patch('nova.rpc.NOTIFIER') def test_empty_schema(self, mock_notifier): non_populated_payload = self.TestNotificationPayloadEmptySchema( extra_field='test string') noti = self.TestNotificationEmptySchema( event_type=notification.EventType( object='test_object', action=fields.NotificationAction.UPDATE), publisher=notification.NotificationPublisher.from_service_obj( self.service_obj), priority=fields.NotificationPriority.INFO, payload=non_populated_payload) mock_context = mock.Mock() mock_context.to_dict.return_value = {} noti.emit(mock_context) self._verify_notification( mock_notifier, mock_context, expected_event_type='test_object.update', expected_payload= {'nova_object.name': 'TestNotificationPayloadEmptySchema', 'nova_object.data': {'extra_field': u'test string'}, 'nova_object.version': '1.0', 'nova_object.namespace': 'nova'}) def test_sample_decorator(self): self.assertEqual(2, len(self.TestNotification.samples)) self.assertIn('test-update-1.json', self.TestNotification.samples) self.assertIn('test-update-2.json', self.TestNotification.samples) @mock.patch('nova.notifications.objects.base.NotificationBase._emit') @mock.patch('nova.rpc.NOTIFIER') def test_payload_is_not_generated_if_notifier_is_not_enabled( self, mock_notifier, mock_emit): mock_notifier.is_enabled.return_value = False payload = self.TestNotificationPayload( extra_field='test string', source_field=self.my_obj) noti = self.TestNotification( event_type=notification.EventType( object='test_object', action=fields.NotificationAction.UPDATE), publisher=notification.NotificationPublisher.from_service_obj( self.service_obj), priority=fields.NotificationPriority.INFO, payload=payload) mock_context = mock.Mock() noti.emit(mock_context) self.assertFalse(payload.populated) self.assertFalse(mock_emit.called) @mock.patch('nova.notifications.objects.base.NotificationBase._emit') def test_payload_is_not_generated_if_notification_format_is_unversioned( self, mock_emit): self.flags(notification_format='unversioned', group='notifications') payload = self.TestNotificationPayload( extra_field='test string', source_field=self.my_obj) noti = self.TestNotification( event_type=notification.EventType( object='test_object', action=fields.NotificationAction.UPDATE), publisher=notification.NotificationPublisher.from_service_obj( self.service_obj), priority=fields.NotificationPriority.INFO, payload=payload) mock_context = mock.Mock() noti.emit(mock_context) self.assertFalse(payload.populated) self.assertFalse(mock_emit.called) notification_object_data = { 'AggregateCacheNotification': '1.0-a73147b93b520ff0061865849d3dfa56', 
'AggregateCachePayload': '1.0-3f4dc002bed67d06eecb577242a43572', 'AggregateNotification': '1.0-a73147b93b520ff0061865849d3dfa56', 'AggregatePayload': '1.1-1eb9adcc4440d8627de6ec37c6398746', 'AuditPeriodPayload': '1.0-2b429dd307b8374636703b843fa3f9cb', 'BandwidthPayload': '1.0-ee2616a7690ab78406842a2b68e34130', 'BlockDevicePayload': '1.0-29751e1b6d41b1454e36768a1e764df8', 'CellMappingPayload': '2.0-8acd412eb4edff1cd2ecb9867feeb243', 'ComputeTaskNotification': '1.0-a73147b93b520ff0061865849d3dfa56', 'ComputeTaskPayload': '1.0-e3d34762c14d131c98337b72e8c600e1', 'DestinationPayload': '1.0-4ccf26318dd18c4377dada2b1e74ec2e', 'EventType': '1.21-6a5f57fafe478f354f66b81b4cb537ea', 'ExceptionNotification': '1.0-a73147b93b520ff0061865849d3dfa56', 'ExceptionPayload': '1.1-6c43008bd81885a63bc7f7c629f0793b', 'FlavorNotification': '1.0-a73147b93b520ff0061865849d3dfa56', 'FlavorPayload': '1.4-2e7011b8b4e59167fe8b7a0a81f0d452', 'ImageMetaPayload': '1.0-0e65beeacb3393beed564a57bc2bc989', # NOTE(efried): ImageMetaPropsPayload is built dynamically from # ImageMetaProps, so when you see a fail here for that reason, you must # *also* bump the version of ImageMetaPropsPayload. See its docstring for # more information. 'ImageMetaPropsPayload': '1.3-9c200c895932163a4e14e6bb385fa1e0', 'InstanceActionNotification': '1.0-a73147b93b520ff0061865849d3dfa56', 'InstanceActionPayload': '1.8-4fa3da9cbf0761f1f700ae578f36dc2f', 'InstanceActionRebuildNotification': '1.0-a73147b93b520ff0061865849d3dfa56', 'InstanceActionRebuildPayload': '1.9-10eebfbf6e944aaac43188173dff9e01', 'InstanceActionRescueNotification': '1.0-a73147b93b520ff0061865849d3dfa56', 'InstanceActionRescuePayload': '1.3-dbf4de42bc02ebc4cdbe42f90d343bfd', 'InstanceActionResizePrepNotification': '1.0-a73147b93b520ff0061865849d3dfa56', 'InstanceActionResizePrepPayload': '1.3-baca73cc450f72d4e1ce6b9aca2bbdf6', 'InstanceActionVolumeNotification': '1.0-a73147b93b520ff0061865849d3dfa56', 'InstanceActionVolumePayload': '1.6-0a30e870677e6166c50645623e287f78', 'InstanceActionVolumeSwapNotification': '1.0-a73147b93b520ff0061865849d3dfa56', 'InstanceActionVolumeSwapPayload': '1.8-d2255347cb2353cb12c174aad4dab93c', 'InstanceCreateNotification': '1.0-a73147b93b520ff0061865849d3dfa56', 'InstanceCreatePayload': '1.12-749f2da7c2435a0e55c076d6bf0ea81d', 'InstancePayload': '1.8-60d62df5a6b6aa7817ec5d09f4b8a3e5', 'InstanceActionSnapshotNotification': '1.0-a73147b93b520ff0061865849d3dfa56', 'InstanceActionSnapshotPayload': '1.9-c3e0bbaaefafdfa2f8e6e504c2c9b12c', 'InstanceExistsNotification': '1.0-a73147b93b520ff0061865849d3dfa56', 'InstanceExistsPayload': '1.2-e082c02438ee57164829afaeee3bf7f8', 'InstanceNUMACellPayload': '1.0-2f13614648bc46f2e29578a206561ef6', 'InstanceNUMATopologyPayload': '1.0-247361b152047c18ae9ad1da2544a3c9', 'InstancePCIRequestPayload': '1.0-12d0d61baf183daaafd93cbeeed2956f', 'InstancePCIRequestsPayload': '1.0-6751cffe0c0fabd212aad624f672429a', 'InstanceStateUpdatePayload': '1.0-07e111c0fa0f6db0f79b0726d593e3da', 'InstanceUpdateNotification': '1.0-a73147b93b520ff0061865849d3dfa56', 'InstanceUpdatePayload': '1.9-0295e45efc2c6ba98fbca77bbddf882d', 'IpPayload': '1.0-8ecf567a99e516d4af094439a7632d34', 'KeypairNotification': '1.0-a73147b93b520ff0061865849d3dfa56', 'KeypairPayload': '1.0-6daebbbde0e1bf35c1556b1ecd9385c1', 'LibvirtErrorNotification': '1.0-a73147b93b520ff0061865849d3dfa56', 'LibvirtErrorPayload': '1.0-9e7a8f0b895dd15531d5a6f3aa95d58e', 'MetricPayload': '1.0-bcdbe85048f335132e4c82a1b8fa3da8', 'MetricsNotification': 
'1.0-a73147b93b520ff0061865849d3dfa56', 'MetricsPayload': '1.0-65c69b15b4de5a8c01971cb5bb9ab650', 'NotificationPublisher': '2.2-b6ad48126247e10b46b6b0240e52e614', 'RequestSpecPayload': '1.1-64d30723a2e381d0cd6a16a877002c64', 'SchedulerRetriesPayload': '1.0-03a07d09575ef52cced5b1b24301d0b4', 'SelectDestinationsNotification': '1.0-a73147b93b520ff0061865849d3dfa56', 'ServerGroupNotification': '1.0-a73147b93b520ff0061865849d3dfa56', 'ServerGroupPayload': '1.1-4ded2997ea1b07038f7af33ef5c45f7f', 'ServiceStatusNotification': '1.0-a73147b93b520ff0061865849d3dfa56', 'ServiceStatusPayload': '1.1-7b6856bd879db7f3ecbcd0ca9f35f92f', 'VirtCPUTopologyPayload': '1.0-1b1600fe55465209682d96bbe3209f27', 'VolumeUsageNotification': '1.0-a73147b93b520ff0061865849d3dfa56', 'VolumeUsagePayload': '1.0-5f99d8b978a32040eecac0975e5a53e9', } class TestNotificationObjectVersions(test.NoDBTestCase): def setUp(self): super(TestNotificationObjectVersions, self).setUp() base.NovaObjectRegistry.register_notification_objects() def test_versions(self): checker = fixture.ObjectVersionChecker( test_objects.get_nova_objects()) notification_object_data.update(test_objects.object_data) expected, actual = checker.test_hashes(notification_object_data, extra_data_func=get_extra_data) self.assertEqual(expected, actual, 'Some notification objects have changed; please make ' 'sure the versions have been bumped, and then update ' 'their hashes here.') def test_notification_payload_version_depends_on_the_schema(self): @base.NovaObjectRegistry.register_if(False) class TestNotificationPayload(notification.NotificationPayloadBase): VERSION = '1.0' SCHEMA = { 'field_1': ('source_field', 'field_1'), 'field_2': ('source_field', 'field_2'), } fields = { 'extra_field': fields.StringField(), # filled by ctor 'field_1': fields.StringField(), # filled by the schema 'field_2': fields.IntegerField(), # filled by the schema } checker = fixture.ObjectVersionChecker( {'TestNotificationPayload': (TestNotificationPayload,)}) old_hash = checker.get_hashes(extra_data_func=get_extra_data) TestNotificationPayload.SCHEMA['field_3'] = ('source_field', 'field_3') new_hash = checker.get_hashes(extra_data_func=get_extra_data) self.assertNotEqual(old_hash, new_hash) def get_extra_data(obj_class): extra_data = tuple() # Get the SCHEMA items to add to the fingerprint # if we are looking at a notification if issubclass(obj_class, notification.NotificationPayloadBase): schema_data = collections.OrderedDict( sorted(obj_class.SCHEMA.items())) extra_data += (schema_data,) return extra_data ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/notifications/objects/test_service.py0000664000175000017500000001075500000000000025235 0ustar00zuulzuul00000000000000# Copyright (c) 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
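# The hash registry above changes whenever a payload's fields *or* its SCHEMA
# change, because get_extra_data() folds the sorted SCHEMA items into the
# fingerprint passed to ObjectVersionChecker.  A toy illustration of why an
# added SCHEMA entry alters the fingerprint (hashlib stands in for the real
# checker here; this is not how oslo.versionedobjects computes its hashes):
import hashlib


def toy_fingerprint(schema):
    """Hash a schema mapping in a key-order-independent way."""
    return hashlib.md5(repr(sorted(schema.items())).encode('utf-8')).hexdigest()

# toy_fingerprint({'field_1': ('src', 'field_1')}) differs from
# toy_fingerprint({'field_1': ('src', 'field_1'),
#                  'field_3': ('src', 'field_3')}),
# which is the effect test_notification_payload_version_depends_on_the_schema
# checks for.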
import copy import mock from oslo_utils import timeutils from nova import context from nova.notifications.objects import service as service_notification from nova import objects from nova.objects import fields from nova import test from nova.tests.unit.objects.test_service import fake_service class TestServiceStatusNotification(test.TestCase): def setUp(self): self.ctxt = context.get_admin_context() super(TestServiceStatusNotification, self).setUp() @mock.patch('nova.notifications.objects.service.ServiceStatusNotification') def _verify_notification(self, service_obj, action, mock_notification): if action == fields.NotificationAction.CREATE: service_obj.create() elif action == fields.NotificationAction.UPDATE: service_obj.save() elif action == fields.NotificationAction.DELETE: service_obj.destroy() else: raise Exception('Unsupported action: %s' % action) self.assertTrue(mock_notification.called) event_type = mock_notification.call_args[1]['event_type'] priority = mock_notification.call_args[1]['priority'] publisher = mock_notification.call_args[1]['publisher'] payload = mock_notification.call_args[1]['payload'] self.assertEqual(service_obj.host, publisher.host) self.assertEqual(service_obj.binary, publisher.source) self.assertEqual(fields.NotificationPriority.INFO, priority) self.assertEqual('service', event_type.object) self.assertEqual(action, event_type.action) for field in service_notification.ServiceStatusPayload.SCHEMA: if field in fake_service: self.assertEqual(fake_service[field], getattr(payload, field)) mock_notification.return_value.emit.assert_called_once_with(self.ctxt) @mock.patch('nova.db.api.service_update') def test_service_update_with_notification(self, mock_db_service_update): service_obj = objects.Service(context=self.ctxt, id=fake_service['id']) mock_db_service_update.return_value = fake_service for key, value in {'disabled': True, 'disabled_reason': 'my reason', 'forced_down': True}.items(): setattr(service_obj, key, value) self._verify_notification(service_obj, fields.NotificationAction.UPDATE) @mock.patch('nova.notifications.objects.service.ServiceStatusNotification') @mock.patch('nova.db.api.service_update') def test_service_update_without_notification(self, mock_db_service_update, mock_notification): service_obj = objects.Service(context=self.ctxt, id=fake_service['id']) mock_db_service_update.return_value = fake_service for key, value in {'report_count': 13, 'last_seen_up': timeutils.utcnow()}.items(): setattr(service_obj, key, value) service_obj.save() self.assertFalse(mock_notification.called) @mock.patch('nova.db.api.service_create') def test_service_create_with_notification(self, mock_db_service_create): service_obj = objects.Service(context=self.ctxt) service_obj["uuid"] = fake_service["uuid"] mock_db_service_create.return_value = fake_service self._verify_notification(service_obj, fields.NotificationAction.CREATE) @mock.patch('nova.db.api.service_destroy') def test_service_destroy_with_notification(self, mock_db_service_destroy): service = copy.deepcopy(fake_service) service.pop("version") service_obj = objects.Service(context=self.ctxt, **service) self._verify_notification(service_obj, fields.NotificationAction.DELETE) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/notifications/test_base.py0000664000175000017500000001637100000000000023056 0ustar00zuulzuul00000000000000# Copyright (c) 2017 OpenStack Foundation # All Rights Reserved. 
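# The two service-update tests above encode which Service fields matter for
# status notifications: changing disabled, disabled_reason or forced_down
# emits a service.update notification, while bookkeeping fields such as
# report_count and last_seen_up do not.  A rough sketch of that gate (the
# field set comes straight from the tests; the real check lives in the
# Service object's save path and may be implemented differently):
_SERVICE_NOTIFY_FIELDS = {'disabled', 'disabled_reason', 'forced_down'}


def should_emit_service_update(changed_fields):
    """Return True if the changed fields warrant a service.update notification."""
    return bool(_SERVICE_NOTIFY_FIELDS & set(changed_fields))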
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime from keystoneauth1 import exceptions as ks_exc import mock from oslo_utils.fixture import uuidsentinel as uuids from nova import context as nova_context from nova.notifications import base from nova import test from nova.tests.unit import fake_instance from nova import utils class TestNullSafeUtils(test.NoDBTestCase): def test_null_safe_isotime(self): dt = None self.assertEqual('', base.null_safe_isotime(dt)) dt = datetime.datetime(second=1, minute=1, hour=1, day=1, month=1, year=2017) self.assertEqual(utils.strtime(dt), base.null_safe_isotime(dt)) def test_null_safe_str(self): line = None self.assertEqual('', base.null_safe_str(line)) line = 'test' self.assertEqual(line, base.null_safe_str(line)) class TestSendInstanceUpdateNotification(test.NoDBTestCase): @mock.patch('nova.notifications.objects.base.NotificationBase.emit', new_callable=mock.NonCallableMock) # asserts not called # TODO(mriedem): Rather than mock is_enabled, it would be better to # configure oslo_messaging_notifications.driver=['noop'] @mock.patch('nova.rpc.NOTIFIER.is_enabled', return_value=False) def test_send_versioned_instance_update_notification_disabled(self, mock_enabled, mock_info): """Tests the case that versioned notifications are disabled which makes _send_versioned_instance_update_notification a noop. """ base._send_versioned_instance_update(mock.sentinel.ctxt, mock.sentinel.instance, mock.sentinel.payload, mock.sentinel.host, mock.sentinel.service) @mock.patch.object(base, 'bandwidth_usage') @mock.patch.object(base, '_compute_states_payload') @mock.patch('nova.rpc.get_notifier') @mock.patch.object(base, 'info_from_instance') def test_send_legacy_instance_update_notification(self, mock_info, mock_get_notifier, mock_states, mock_bw): """Tests the case that versioned notifications are disabled and assert that this does not prevent sending the unversioned instance.update notification. """ self.flags(notification_format='unversioned', group='notifications') base.send_instance_update_notification(mock.sentinel.ctxt, mock.sentinel.instance) mock_get_notifier.return_value.info.assert_called_once_with( mock.sentinel.ctxt, 'compute.instance.update', mock.ANY) mock_info.assert_called_once_with( mock.sentinel.ctxt, mock.sentinel.instance, None, populate_image_ref_url=True) @mock.patch('nova.image.glance.API.generate_image_url', side_effect=ks_exc.EndpointNotFound) def test_info_from_instance_image_api_endpoint_not_found_no_token( self, mock_gen_image_url): """Tests the case that we fail to generate the image ref url because CONF.glance.api_servers isn't set and we have a context without an auth token, like in the case of a periodic task using an admin context. In this case, we expect the payload field 'image_ref_url' to just be the instance.image_ref (image ID for a non-volume-backed server). 
""" ctxt = nova_context.get_admin_context() instance = fake_instance.fake_instance_obj(ctxt, image_ref=uuids.image) instance.system_metadata = {} instance.metadata = {} payload = base.info_from_instance(ctxt, instance, network_info=None, populate_image_ref_url=True) self.assertEqual(instance.image_ref, payload['image_ref_url']) mock_gen_image_url.assert_called_once_with(instance.image_ref, ctxt) @mock.patch('nova.image.glance.API.generate_image_url', side_effect=ks_exc.EndpointNotFound) def test_info_from_instance_image_api_endpoint_not_found_with_token( self, mock_gen_image_url): """Tests the case that we fail to generate the image ref url because an EndpointNotFound error is raised up from the image API but the context does have a token so we pass the error through. """ ctxt = nova_context.RequestContext( 'fake-user', 'fake-project', auth_token='fake-token') instance = fake_instance.fake_instance_obj(ctxt, image_ref=uuids.image) self.assertRaises(ks_exc.EndpointNotFound, base.info_from_instance, ctxt, instance, network_info=None, populate_image_ref_url=True) mock_gen_image_url.assert_called_once_with(instance.image_ref, ctxt) @mock.patch('nova.image.glance.API.generate_image_url') def test_info_from_instance_not_call_generate_image_url( self, mock_gen_image_url): ctxt = nova_context.get_admin_context() instance = fake_instance.fake_instance_obj(ctxt, image_ref=uuids.image) instance.system_metadata = {} instance.metadata = {} base.info_from_instance(ctxt, instance, network_info=None, populate_image_ref_url=False) mock_gen_image_url.assert_not_called() class TestBandwidthUsage(test.NoDBTestCase): @mock.patch('nova.context.RequestContext.elevated') @mock.patch('nova.network.neutron.API') @mock.patch('nova.objects.BandwidthUsageList.get_by_uuids') def test_context_elevated(self, mock_get_bw_usage, mock_nw_api, mock_elevated): context = nova_context.RequestContext('fake', 'fake') # We need this to not be a NovaObject so the old school # get_instance_nw_info will run. instance = {'uuid': uuids.instance} audit_start = 'fake' base.bandwidth_usage(context, instance, audit_start) network_api = mock_nw_api.return_value network_api.get_instance_nw_info.assert_called_once_with( mock_elevated.return_value, instance) mock_get_bw_usage.assert_called_once_with( mock_elevated.return_value, [uuids.instance], audit_start) mock_elevated.assert_called_once_with(read_deleted='yes') ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.558468 nova-21.2.4/nova/tests/unit/objects/0000775000175000017500000000000000000000000017303 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/objects/__init__.py0000664000175000017500000000000000000000000021402 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_agent.py0000664000175000017500000000732700000000000022023 0ustar00zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova import exception from nova.objects import agent as agent_obj from nova.tests.unit.objects import test_objects fake_agent = { 'id': 1, 'hypervisor': 'novavm', 'os': 'linux', 'architecture': 'DISC', 'version': '1.0', 'url': 'http://openstack.org/novavm/agents/novavm_agent_v1.0.rpm', 'md5hash': '8cb151f3adc23a92db8ddbe084796823', 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': False, } class _TestAgent(object): @staticmethod def _compare(test, db, obj): for field, value in db.items(): test.assertEqual(db[field], getattr(obj, field)) @mock.patch('nova.db.api.agent_build_get_by_triple') def test_get_by_triple(self, mock_get): mock_get.return_value = fake_agent agent = agent_obj.Agent.get_by_triple(self.context, 'novavm', 'linux', 'DISC') self._compare(self, fake_agent, agent) @mock.patch('nova.db.api.agent_build_get_by_triple') def test_get_by_triple_none(self, mock_get): mock_get.return_value = None agent = agent_obj.Agent.get_by_triple(self.context, 'novavm', 'linux', 'DISC') self.assertIsNone(agent) @mock.patch('nova.db.api.agent_build_create') def test_create(self, mock_create): mock_create.return_value = fake_agent agent = agent_obj.Agent(context=self.context) agent.hypervisor = 'novavm' agent.create() mock_create.assert_called_once_with(self.context, {'hypervisor': 'novavm'}) self._compare(self, fake_agent, agent) @mock.patch('nova.db.api.agent_build_create') def test_create_with_id(self, mock_create): agent = agent_obj.Agent(context=self.context, id=123) self.assertRaises(exception.ObjectActionError, agent.create) self.assertFalse(mock_create.called) @mock.patch('nova.db.api.agent_build_destroy') def test_destroy(self, mock_destroy): agent = agent_obj.Agent(context=self.context, id=123) agent.destroy() mock_destroy.assert_called_once_with(self.context, 123) @mock.patch('nova.db.api.agent_build_update') def test_save(self, mock_update): mock_update.return_value = fake_agent agent = agent_obj.Agent(context=self.context, id=123) agent.obj_reset_changes() agent.hypervisor = 'novavm' agent.save() mock_update.assert_called_once_with(self.context, 123, {'hypervisor': 'novavm'}) @mock.patch('nova.db.api.agent_build_get_all') def test_get_all(self, mock_get_all): mock_get_all.return_value = [fake_agent] agents = agent_obj.AgentList.get_all(self.context, hypervisor='novavm') self.assertEqual(1, len(agents)) self._compare(self, fake_agent, agents[0]) mock_get_all.assert_called_once_with(self.context, hypervisor='novavm') class TestAgent(test_objects._LocalTest, _TestAgent): pass class TestAgentRemote(test_objects._RemoteTest, _TestAgent): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_aggregate.py0000664000175000017500000002665600000000000022661 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
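# test_create_with_id() above leans on the usual Nova object convention that
# create() must not be called on an object that already carries a database
# id.  A bare-bones illustration of that guard (RuntimeError stands in for
# nova's ObjectActionError; the real check lives in the versioned object
# itself):
class _CreateOnce(object):
    id = None

    def create(self):
        if self.id is not None:
            raise RuntimeError('create() called twice on the same object')
        self.id = 123  # stand-in for the id the DB layer would assign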
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils.fixture import uuidsentinel from oslo_utils import timeutils from nova import exception from nova.objects import aggregate from nova import test from nova.tests.unit import fake_notifier from nova.tests.unit.objects import test_objects NOW = timeutils.utcnow().replace(microsecond=0) fake_aggregate = { 'deleted': 0, 'deleted_at': None, 'created_at': NOW, 'updated_at': None, 'id': 123, 'uuid': uuidsentinel.fake_aggregate, 'name': 'fake-aggregate', 'hosts': ['foo', 'bar'], 'metadetails': {'this': 'that'}, } SUBS = {'metadata': 'metadetails'} class _TestAggregateObject(object): @mock.patch('nova.objects.aggregate._aggregate_get_from_db') def test_get_by_id_from_api(self, mock_get_api): mock_get_api.return_value = fake_aggregate agg = aggregate.Aggregate.get_by_id(self.context, 123) self.compare_obj(agg, fake_aggregate, subs=SUBS) mock_get_api.assert_called_once_with(self.context, 123) @mock.patch('nova.objects.aggregate._aggregate_get_from_db_by_uuid') def test_get_by_uuid_from_api(self, get_by_uuid_api): get_by_uuid_api.return_value = fake_aggregate agg = aggregate.Aggregate.get_by_uuid(self.context, uuidsentinel.fake_aggregate) self.assertEqual(uuidsentinel.fake_aggregate, agg.uuid) self.assertEqual(fake_aggregate['id'], agg.id) @mock.patch('nova.objects.aggregate._aggregate_create_in_db') def test_create(self, api_create_mock): api_create_mock.return_value = fake_aggregate agg = aggregate.Aggregate(context=self.context) agg.name = 'foo' agg.metadata = {'one': 'two'} agg.uuid = uuidsentinel.fake_agg agg.create() api_create_mock.assert_called_once_with( self.context, {'name': 'foo', 'uuid': uuidsentinel.fake_agg}, metadata={'one': 'two'}) self.compare_obj(agg, fake_aggregate, subs=SUBS) api_create_mock.assert_called_once_with(self.context, {'name': 'foo', 'uuid': uuidsentinel.fake_agg}, metadata={'one': 'two'}) @mock.patch('nova.objects.aggregate._aggregate_create_in_db') def test_recreate_fails(self, api_create_mock): api_create_mock.return_value = fake_aggregate agg = aggregate.Aggregate(context=self.context) agg.name = 'foo' agg.metadata = {'one': 'two'} agg.uuid = uuidsentinel.fake_agg agg.create() self.assertRaises(exception.ObjectActionError, agg.create) api_create_mock.assert_called_once_with(self.context, {'name': 'foo', 'uuid': uuidsentinel.fake_agg}, metadata={'one': 'two'}) @mock.patch('nova.objects.aggregate._aggregate_delete_from_db') def test_destroy(self, api_delete_mock): agg = aggregate.Aggregate(context=self.context) agg.id = 123 agg.destroy() api_delete_mock.assert_called_with(self.context, 123) @mock.patch('nova.compute.utils.notify_about_aggregate_action') @mock.patch('nova.objects.aggregate._aggregate_update_to_db') def test_save_to_api(self, api_update_mock, mock_notify): api_update_mock.return_value = fake_aggregate agg = aggregate.Aggregate(context=self.context) agg.id = 123 agg.name = 'fake-api-aggregate' agg.save() self.compare_obj(agg, fake_aggregate, subs=SUBS) api_update_mock.assert_called_once_with(self.context, 123, {'name': 'fake-api-aggregate'}) api_update_mock.assert_called_once_with(self.context, 123, {'name': 
'fake-api-aggregate'}) mock_notify.assert_has_calls([ mock.call(context=self.context, aggregate=test.MatchType(aggregate.Aggregate), action='update_prop', phase='start'), mock.call(context=self.context, aggregate=test.MatchType(aggregate.Aggregate), action='update_prop', phase='end')]) self.assertEqual(2, mock_notify.call_count) def test_save_and_create_no_hosts(self): agg = aggregate.Aggregate(context=self.context) agg.id = 123 agg.hosts = ['foo', 'bar'] self.assertRaises(exception.ObjectActionError, agg.create) self.assertRaises(exception.ObjectActionError, agg.save) @mock.patch('nova.objects.aggregate._metadata_delete_from_db') @mock.patch('nova.objects.aggregate._metadata_add_to_db') @mock.patch('nova.compute.utils.notify_about_aggregate_action') @mock.patch('oslo_versionedobjects.base.VersionedObject.' 'obj_from_primitive') def test_update_metadata_api(self, mock_obj_from_primitive, mock_notify, mock_api_metadata_add, mock_api_metadata_delete): fake_notifier.NOTIFICATIONS = [] agg = aggregate.Aggregate() agg._context = self.context agg.id = 123 agg.metadata = {'foo': 'bar'} agg.obj_reset_changes() mock_obj_from_primitive.return_value = agg agg.update_metadata({'todelete': None, 'toadd': 'myval'}) self.assertEqual(2, len(fake_notifier.NOTIFICATIONS)) msg = fake_notifier.NOTIFICATIONS[0] self.assertEqual('aggregate.updatemetadata.start', msg.event_type) self.assertEqual({'todelete': None, 'toadd': 'myval'}, msg.payload['meta_data']) msg = fake_notifier.NOTIFICATIONS[1] self.assertEqual('aggregate.updatemetadata.end', msg.event_type) mock_notify.assert_has_calls([ mock.call(context=self.context, aggregate=agg, action='update_metadata', phase='start'), mock.call(context=self.context, aggregate=agg, action='update_metadata', phase='end')]) self.assertEqual({'todelete': None, 'toadd': 'myval'}, msg.payload['meta_data']) self.assertEqual({'foo': 'bar', 'toadd': 'myval'}, agg.metadata) mock_api_metadata_delete.assert_called_once_with(self.context, 123, 'todelete') mock_api_metadata_add.assert_called_once_with(self.context, 123, {'toadd': 'myval'}) mock_api_metadata_delete.assert_called_once_with(self.context, 123, 'todelete') mock_api_metadata_add.assert_called_once_with(self.context, 123, {'toadd': 'myval'}) @mock.patch('nova.objects.aggregate._host_add_to_db') def test_add_host_api(self, mock_host_add_api): mock_host_add_api.return_value = {'host': 'bar'} agg = aggregate.Aggregate() agg.id = 123 agg.hosts = ['foo'] agg._context = self.context agg.add_host('bar') self.assertEqual(agg.hosts, ['foo', 'bar']) mock_host_add_api.assert_called_once_with(self.context, 123, 'bar') @mock.patch('nova.objects.aggregate._host_delete_from_db') def test_delete_host_api(self, mock_host_delete_api): agg = aggregate.Aggregate() agg.id = 123 agg.hosts = ['foo', 'bar'] agg._context = self.context agg.delete_host('foo') self.assertEqual(agg.hosts, ['bar']) mock_host_delete_api.assert_called_once_with(self.context, 123, 'foo') def test_availability_zone(self): agg = aggregate.Aggregate() agg.metadata = {'availability_zone': 'foo'} self.assertEqual('foo', agg.availability_zone) @mock.patch('nova.objects.aggregate._get_all_from_db') def test_get_all(self, mock_api_get_all): mock_api_get_all.return_value = [fake_aggregate] aggs = aggregate.AggregateList.get_all(self.context) self.assertEqual(1, len(aggs)) self.compare_obj(aggs[0], fake_aggregate, subs=SUBS) @mock.patch('nova.objects.aggregate._get_by_host_from_db') def test_by_host(self, mock_api_get_by_host): mock_api_get_by_host.return_value = [fake_aggregate] 
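# update_metadata() as exercised above treats a None value as "delete this
# key" and any other value as an upsert, leaving untouched keys alone.  A
# sketch of that merge over plain dicts (illustrative only; the real code
# also issues the DB calls and notifications asserted above):
def merge_aggregate_metadata(current, updates):
    """Apply aggregate-style metadata updates: None deletes, others upsert."""
    to_delete = {k for k, v in updates.items() if v is None}
    merged = {k: v for k, v in current.items() if k not in to_delete}
    merged.update((k, v) for k, v in updates.items() if v is not None)
    return merged

# merge_aggregate_metadata({'foo': 'bar'}, {'todelete': None, 'toadd': 'myval'})
# returns {'foo': 'bar', 'toadd': 'myval'}, matching the final metadata the
# test asserts.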
aggs = aggregate.AggregateList.get_by_host(self.context, 'fake-host') self.assertEqual(1, len(aggs)) self.compare_obj(aggs[0], fake_aggregate, subs=SUBS) @mock.patch('nova.objects.aggregate._get_by_metadata_from_db') def test_get_by_metadata_key(self, mock_api_get_by_metadata_key): mock_api_get_by_metadata_key.return_value = [fake_aggregate] aggs = aggregate.AggregateList.get_by_metadata_key( self.context, 'this') self.assertEqual(1, len(aggs)) self.compare_obj(aggs[0], fake_aggregate, subs=SUBS) @mock.patch('nova.objects.aggregate._get_by_metadata_from_db') def test_get_by_metadata_key_and_hosts_no_match(self, get_by_metadata_key): get_by_metadata_key.return_value = [fake_aggregate] aggs = aggregate.AggregateList.get_by_metadata_key( self.context, 'this', hosts=['baz']) self.assertEqual(0, len(aggs)) @mock.patch('nova.objects.aggregate._get_by_metadata_from_db') def test_get_by_metadata_key_and_hosts_match(self, get_by_metadata_key): get_by_metadata_key.return_value = [fake_aggregate] aggs = aggregate.AggregateList.get_by_metadata_key( self.context, 'this', hosts=['foo', 'bar']) self.assertEqual(1, len(aggs)) self.compare_obj(aggs[0], fake_aggregate, subs=SUBS) @mock.patch('nova.objects.aggregate.' '_get_non_matching_by_metadata_keys_from_db') def test_get_non_matching_by_metadata_keys( self, get_non_matching_by_metadata_keys): get_non_matching_by_metadata_keys.return_value = [fake_aggregate] aggs = aggregate.AggregateList.get_non_matching_by_metadata_keys( self.context, ['abc'], 'th', value='that') self.assertEqual('that', aggs[0].metadata['this']) @mock.patch('nova.objects.aggregate.' '_get_non_matching_by_metadata_keys_from_db') def test_get_non_matching_by_metadata_keys_and_hosts_no_match( self, get_non_matching_by_metadata_keys): get_non_matching_by_metadata_keys.return_value = [] aggs = aggregate.AggregateList.get_non_matching_by_metadata_keys( self.context, ['this'], 'th', value='that') self.assertEqual(0, len(aggs)) class TestAggregateObject(test_objects._LocalTest, _TestAggregateObject): pass class TestRemoteAggregateObject(test_objects._RemoteTest, _TestAggregateObject): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_bandwidth_usage.py0000664000175000017500000001376300000000000024056 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
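# The hosts_no_match / hosts_match pair above shows the host filter on
# get_by_metadata_key(): an aggregate whose host list has no overlap with the
# requested hosts is dropped, while one sharing hosts is kept.  One plausible
# post-filter consistent with both assertions (illustrative; the real
# filtering lives in AggregateList):
def filter_aggregates_by_hosts(aggregates, hosts):
    """Keep aggregates whose hosts overlap the requested host list."""
    if not hosts:
        return list(aggregates)
    wanted = set(hosts)
    return [agg for agg in aggregates if wanted & set(agg.get('hosts', []))]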
import datetime import iso8601 import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova import context from nova.db import api as db from nova.objects import bandwidth_usage from nova import test from nova.tests.unit.objects import test_objects class _TestBandwidthUsage(test.TestCase): def setUp(self): super(_TestBandwidthUsage, self).setUp() self.user_id = 'fake_user' self.project_id = 'fake_project' self.context = context.RequestContext(self.user_id, self.project_id) now, start_period = self._time_now_and_start_period() self.expected_bw_usage = self._fake_bw_usage( time=now, start_period=start_period) @staticmethod def _compare(test, db, obj, ignored_fields=None): if ignored_fields is None: ignored_fields = [] for field, value in db.items(): if field in ignored_fields: continue obj_field = field if obj_field == 'uuid': obj_field = 'instance_uuid' test.assertEqual(db[field], getattr(obj, obj_field), 'Field %s is not equal' % field) @staticmethod def _fake_bw_usage(time=None, start_period=None, bw_in=100, bw_out=200, last_ctr_in=12345, last_ctr_out=67890): fake_bw_usage = { 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': 0, 'uuid': uuids.instance, 'mac': 'fake_mac1', 'start_period': start_period, 'bw_in': bw_in, 'bw_out': bw_out, 'last_ctr_in': last_ctr_in, 'last_ctr_out': last_ctr_out, 'last_refreshed': time } return fake_bw_usage @staticmethod def _time_now_and_start_period(): now = timeutils.utcnow().replace(tzinfo=iso8601.UTC, microsecond=0) start_period = now - datetime.timedelta(seconds=10) return now, start_period @mock.patch.object(db, 'bw_usage_get') def test_get_by_instance_uuid_and_mac(self, mock_get): mock_get.return_value = self.expected_bw_usage bw_usage = bandwidth_usage.BandwidthUsage.get_by_instance_uuid_and_mac( self.context, uuids.instance, 'fake_mac', start_period=self.expected_bw_usage['start_period']) self._compare(self, self.expected_bw_usage, bw_usage) @mock.patch.object(db, 'bw_usage_get_by_uuids') def test_get_by_uuids(self, mock_get_by_uuids): mock_get_by_uuids.return_value = [self.expected_bw_usage] bw_usages = bandwidth_usage.BandwidthUsageList.get_by_uuids( self.context, [uuids.instance], start_period=self.expected_bw_usage['start_period']) self.assertEqual(1, len(bw_usages)) self._compare(self, self.expected_bw_usage, bw_usages[0]) @mock.patch.object(db, 'bw_usage_update') def test_create(self, mock_create): mock_create.return_value = self.expected_bw_usage bw_usage = bandwidth_usage.BandwidthUsage(context=self.context) bw_usage.create(uuids.instance, 'fake_mac', 100, 200, 12345, 67890, start_period=self.expected_bw_usage['start_period']) self._compare(self, self.expected_bw_usage, bw_usage) def test_update_with_db(self): expected_bw_usage1 = self._fake_bw_usage( time=self.expected_bw_usage['last_refreshed'], start_period=self.expected_bw_usage['start_period'], last_ctr_in=42, last_ctr_out=42) bw_usage = bandwidth_usage.BandwidthUsage(context=self.context) bw_usage.create(uuids.instance, 'fake_mac1', 100, 200, 42, 42, start_period=self.expected_bw_usage['start_period']) self._compare(self, expected_bw_usage1, bw_usage, ignored_fields=['last_refreshed', 'created_at']) bw_usage.create(uuids.instance, 'fake_mac1', 100, 200, 12345, 67890, start_period=self.expected_bw_usage['start_period']) self._compare(self, self.expected_bw_usage, bw_usage, ignored_fields=['last_refreshed', 'created_at', 'updated_at']) @mock.patch.object(db, 'bw_usage_update') def test_update(self, mock_update): 
expected_bw_usage1 = self._fake_bw_usage( time=self.expected_bw_usage['last_refreshed'], start_period=self.expected_bw_usage['start_period'], last_ctr_in=42, last_ctr_out=42) mock_update.side_effect = [expected_bw_usage1, self.expected_bw_usage] bw_usage = bandwidth_usage.BandwidthUsage(context=self.context) bw_usage.create('fake_uuid1', 'fake_mac1', 100, 200, 42, 42, start_period=self.expected_bw_usage['start_period']) self._compare(self, expected_bw_usage1, bw_usage) bw_usage.create('fake_uuid1', 'fake_mac1', 100, 200, 12345, 67890, start_period=self.expected_bw_usage['start_period']) self._compare(self, self.expected_bw_usage, bw_usage) class TestBandwidthUsageObject(test_objects._LocalTest, _TestBandwidthUsage): pass class TestRemoteBandwidthUsageObject(test_objects._RemoteTest, _TestBandwidthUsage): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_block_device.py0000664000175000017500000006075100000000000023336 0ustar00zuulzuul00000000000000# Copyright 2013 Red Hat Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils.fixture import uuidsentinel as uuids from nova import context from nova.db import api as db from nova.db.sqlalchemy import api as db_api from nova.db.sqlalchemy import models as db_models from nova import exception from nova import objects from nova.objects import block_device as block_device_obj from nova import test from nova.tests.unit import fake_block_device from nova.tests.unit import fake_instance from nova.tests.unit.objects import test_objects class _TestBlockDeviceMappingObject(object): def fake_bdm(self, instance=None): instance = instance or {} fake_bdm = fake_block_device.FakeDbBlockDeviceDict({ 'id': 123, 'uuid': uuids.bdm, 'instance_uuid': instance.get('uuid') or uuids.instance, 'attachment_id': None, 'device_name': '/dev/sda2', 'source_type': 'snapshot', 'destination_type': 'volume', 'connection_info': "{'fake': 'connection_info'}", 'snapshot_id': 'fake-snapshot-id-1', 'boot_index': -1 }) if instance: fake_bdm['instance'] = instance return fake_bdm def test_save(self): fake_bdm = self.fake_bdm() with mock.patch.object(db, 'block_device_mapping_update', return_value=fake_bdm) as bdm_update_mock: bdm_object = objects.BlockDeviceMapping(context=self.context) bdm_object.id = 123 bdm_object.volume_id = 'fake_volume_id' bdm_object.save() bdm_update_mock.assert_called_once_with( self.context, 123, {'volume_id': 'fake_volume_id'}, legacy=False) def test_save_instance_changed(self): bdm_object = objects.BlockDeviceMapping(context=self.context) bdm_object.instance = objects.Instance() self.assertRaises(exception.ObjectActionError, bdm_object.save) @mock.patch.object(db, 'block_device_mapping_update', return_value=None) def test_save_not_found(self, bdm_update): bdm_object = objects.BlockDeviceMapping(context=self.context) bdm_object.id = 123 self.assertRaises(exception.BDMNotFound, bdm_object.save) @mock.patch.object(db, 
'block_device_mapping_get_all_by_volume_id') def test_get_by_volume_id(self, get_by_vol_id): # NOTE(danms): Include two results to make sure the first was picked. # An invalid second item shouldn't be touched -- if it is, it'll # fail from_db_object(). get_by_vol_id.return_value = [self.fake_bdm(), None] vol_bdm = objects.BlockDeviceMapping.get_by_volume_id( self.context, 'fake-volume-id') for attr in block_device_obj.BLOCK_DEVICE_OPTIONAL_ATTRS: self.assertFalse(vol_bdm.obj_attr_is_set(attr)) @mock.patch.object(db, 'block_device_mapping_get_all_by_volume_id') def test_get_by_volume_id_not_found(self, get_by_vol_id): get_by_vol_id.return_value = None self.assertRaises(exception.VolumeBDMNotFound, objects.BlockDeviceMapping.get_by_volume_id, self.context, 'fake-volume-id') @mock.patch.object(db, 'block_device_mapping_get_all_by_volume_id') def test_get_by_volume_instance_uuid_mismatch(self, get_by_vol_id): fake_bdm_vol = self.fake_bdm(instance={'uuid': 'other-fake-instance'}) get_by_vol_id.return_value = [fake_bdm_vol] self.assertRaises(exception.InvalidVolume, objects.BlockDeviceMapping.get_by_volume_id, self.context, 'fake-volume-id', instance_uuid='fake-instance') @mock.patch.object(db, 'block_device_mapping_get_all_by_volume_id') def test_get_by_volume_id_with_expected(self, get_by_vol_id): get_by_vol_id.return_value = [self.fake_bdm( fake_instance.fake_db_instance())] vol_bdm = objects.BlockDeviceMapping.get_by_volume_id( self.context, 'fake-volume-id', expected_attrs=['instance']) for attr in block_device_obj.BLOCK_DEVICE_OPTIONAL_ATTRS: self.assertTrue(vol_bdm.obj_attr_is_set(attr)) get_by_vol_id.assert_called_once_with(self.context, 'fake-volume-id', ['instance']) @mock.patch.object(db, 'block_device_mapping_get_all_by_volume_id') def test_get_by_volume_returned_single(self, get_all): fake_bdm_vol = self.fake_bdm() get_all.return_value = [fake_bdm_vol] vol_bdm = objects.BlockDeviceMapping.get_by_volume( self.context, 'fake-volume-id') self.assertEqual(fake_bdm_vol['id'], vol_bdm.id) @mock.patch.object(db, 'block_device_mapping_get_all_by_volume_id') def test_get_by_volume_returned_multiple(self, get_all): fake_bdm_vol1 = self.fake_bdm() fake_bdm_vol2 = self.fake_bdm() get_all.return_value = [fake_bdm_vol1, fake_bdm_vol2] self.assertRaises(exception.VolumeBDMIsMultiAttach, objects.BlockDeviceMapping.get_by_volume, self.context, 'fake-volume-id') @mock.patch.object(db, 'block_device_mapping_get_by_instance_and_volume_id') def test_get_by_instance_and_volume_id(self, mock_get): fake_inst = fake_instance.fake_db_instance() mock_get.return_value = self.fake_bdm(fake_inst) obj_bdm = objects.BlockDeviceMapping vol_bdm = obj_bdm.get_by_volume_and_instance( self.context, 'fake-volume-id', 'fake-instance-id') for attr in block_device_obj.BLOCK_DEVICE_OPTIONAL_ATTRS: self.assertFalse(vol_bdm.obj_attr_is_set(attr)) @mock.patch.object(db, 'block_device_mapping_get_by_instance_and_volume_id') def test_test_get_by_instance_and_volume_id_with_expected(self, mock_get): fake_inst = fake_instance.fake_db_instance() mock_get.return_value = self.fake_bdm(fake_inst) obj_bdm = objects.BlockDeviceMapping vol_bdm = obj_bdm.get_by_volume_and_instance( self.context, 'fake-volume-id', fake_inst['uuid'], expected_attrs=['instance']) for attr in block_device_obj.BLOCK_DEVICE_OPTIONAL_ATTRS: self.assertTrue(vol_bdm.obj_attr_is_set(attr)) mock_get.assert_called_once_with(self.context, 'fake-volume-id', fake_inst['uuid'], ['instance']) @mock.patch.object(db, 'block_device_mapping_get_by_instance_and_volume_id') def 
test_get_by_instance_and_volume_id_not_found(self, mock_get): mock_get.return_value = None obj_bdm = objects.BlockDeviceMapping self.assertRaises(exception.VolumeBDMNotFound, obj_bdm.get_by_volume_and_instance, self.context, 'fake-volume-id', 'fake-instance-id') def _test_create_mocked(self, update_or_create=False): values = {'source_type': 'volume', 'volume_id': 'fake-vol-id', 'destination_type': 'volume', 'instance_uuid': uuids.instance, 'attachment_id': None} fake_bdm = fake_block_device.FakeDbBlockDeviceDict(values) with test.nested( mock.patch.object( db, 'block_device_mapping_create', return_value=fake_bdm), mock.patch.object( db, 'block_device_mapping_update_or_create', return_value=fake_bdm), ) as (bdm_create_mock, bdm_update_or_create_mock): bdm = objects.BlockDeviceMapping(context=self.context, **values) if update_or_create: method = bdm.update_or_create else: method = bdm.create method() if update_or_create: bdm_update_or_create_mock.assert_called_once_with( self.context, values, legacy=False) else: bdm_create_mock.assert_called_once_with( self.context, values, legacy=False) def test_create(self): self._test_create_mocked() def test_update_or_create(self): self._test_create_mocked(update_or_create=True) def test_create_fails(self): values = {'source_type': 'volume', 'volume_id': 'fake-vol-id', 'destination_type': 'volume', 'instance_uuid': uuids.instance} bdm = objects.BlockDeviceMapping(context=self.context, **values) bdm.create() self.assertRaises(exception.ObjectActionError, bdm.create) def test_create_fails_instance(self): values = {'source_type': 'volume', 'volume_id': 'fake-vol-id', 'destination_type': 'volume', 'instance_uuid': uuids.instance, 'instance': objects.Instance()} bdm = objects.BlockDeviceMapping(context=self.context, **values) self.assertRaises(exception.ObjectActionError, bdm.create) def test_destroy(self): values = {'source_type': 'volume', 'volume_id': 'fake-vol-id', 'destination_type': 'volume', 'id': 1, 'instance_uuid': uuids.instance, 'device_name': 'fake'} with mock.patch.object(db, 'block_device_mapping_destroy') as bdm_del: bdm = objects.BlockDeviceMapping(context=self.context, **values) bdm.destroy() bdm_del.assert_called_once_with(self.context, values['id']) def test_is_image_true(self): bdm = objects.BlockDeviceMapping(context=self.context, source_type='image') self.assertTrue(bdm.is_image) def test_is_image_false(self): bdm = objects.BlockDeviceMapping(context=self.context, source_type='snapshot') self.assertFalse(bdm.is_image) def test_is_volume_true(self): bdm = objects.BlockDeviceMapping(context=self.context, destination_type='volume') self.assertTrue(bdm.is_volume) def test_is_volume_false(self): bdm = objects.BlockDeviceMapping(context=self.context, destination_type='local') self.assertFalse(bdm.is_volume) def test_obj_load_attr_not_instance(self): """Tests that lazy-loading something other than the instance field results in an error. """ bdm = objects.BlockDeviceMapping(self.context, **self.fake_bdm()) self.assertRaises(exception.ObjectActionError, bdm.obj_load_attr, 'invalid') def test_obj_load_attr_orphaned(self): """Tests that lazy-loading the instance field on an orphaned BDM results in an error. 
""" bdm = objects.BlockDeviceMapping(context=None, **self.fake_bdm()) self.assertRaises(exception.OrphanedObjectError, bdm.obj_load_attr, 'instance') @mock.patch.object(objects.Instance, 'get_by_uuid', return_value=objects.Instance(uuid=uuids.instance)) def test_obj_load_attr_instance(self, mock_inst_get_by_uuid): """Tests lazy-loading the instance field.""" bdm = objects.BlockDeviceMapping(self.context, **self.fake_bdm()) self.assertEqual(mock_inst_get_by_uuid.return_value, bdm.instance) mock_inst_get_by_uuid.assert_called_once_with( self.context, bdm.instance_uuid) def test_obj_make_compatible_pre_1_17(self): values = {'source_type': 'volume', 'volume_id': 'fake-vol-id', 'destination_type': 'volume', 'instance_uuid': uuids.instance, 'tag': 'fake-tag'} bdm = objects.BlockDeviceMapping(context=self.context, **values) data = lambda x: x['nova_object.data'] primitive = data(bdm.obj_to_primitive(target_version='1.17')) self.assertIn('tag', primitive) primitive = data(bdm.obj_to_primitive(target_version='1.16')) self.assertNotIn('tag', primitive) self.assertIn('volume_id', primitive) def test_obj_make_compatible_pre_1_18(self): values = {'source_type': 'volume', 'volume_id': 'fake-vol-id', 'destination_type': 'volume', 'instance_uuid': uuids.instance, 'attachment_id': uuids.attachment_id} bdm = objects.BlockDeviceMapping(context=self.context, **values) data = lambda x: x['nova_object.data'] primitive = data(bdm.obj_to_primitive(target_version='1.17')) self.assertNotIn('attachment_id', primitive) self.assertIn('volume_id', primitive) def test_obj_make_compatible_pre_1_19(self): values = {'source_type': 'volume', 'volume_id': 'fake-vol-id', 'destination_type': 'volume', 'instance_uuid': uuids.instance, 'uuid': uuids.bdm} bdm = objects.BlockDeviceMapping(context=self.context, **values) data = lambda x: x['nova_object.data'] primitive = data(bdm.obj_to_primitive(target_version='1.18')) self.assertNotIn('uuid', primitive) self.assertIn('volume_id', primitive) def test_obj_make_compatible_pre_1_20(self): values = {'source_type': 'volume', 'volume_id': 'fake-vol-id', 'destination_type': 'volume', 'instance_uuid': uuids.instance, 'volume_type': 'fake-lvm-1'} bdm = objects.BlockDeviceMapping(context=self.context, **values) data = lambda x: x['nova_object.data'] primitive = data(bdm.obj_to_primitive(target_version='1.19')) self.assertNotIn('volume_type', primitive) self.assertIn('volume_id', primitive) class TestBlockDeviceMappingUUIDMigration(test.TestCase): def setUp(self): super(TestBlockDeviceMappingUUIDMigration, self).setUp() self.context = context.RequestContext('fake-user-id', 'fake-project-id') self.orig_create_uuid = \ objects.BlockDeviceMapping._create_uuid @staticmethod @db_api.pick_context_manager_writer def _create_legacy_bdm(context, deleted=False): # Create a BDM with no uuid values = {'instance_uuid': uuids.instance_uuid} bdm_ref = db_models.BlockDeviceMapping() bdm_ref.update(values) bdm_ref.save(context.session) if deleted: bdm_ref.soft_delete(context.session) return bdm_ref @mock.patch.object(objects.BlockDeviceMapping, '_create_uuid') def test_populate_uuid(self, mock_create_uuid): mock_create_uuid.side_effect = self.orig_create_uuid self._create_legacy_bdm(self.context) bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( self.context, uuids.instance_uuid) # UUID should have been populated uuid = bdms[0].uuid self.assertIsNotNone(uuid) bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( self.context, uuids.instance_uuid) # UUID should not have changed 
self.assertEqual(uuid, bdms[0].uuid) self.assertEqual(1, mock_create_uuid.call_count) def test_create_uuid_race(self): # If threads read a legacy BDM object concurrently, we can end up # calling _create_uuid multiple times. Assert that calling _create_uuid # multiple times yields the same uuid. # NOTE(mdbooth): _create_uuid handles all forms of race, including any # amount of overlapping. I have not attempted to write unit tests for # all potential execution orders. This test is sufficient to # demonstrate that the compare-and-swap works correctly, and we trust # the correctness of the database for the rest. db_bdm = self._create_legacy_bdm(self.context) uuid1 = objects.BlockDeviceMapping._create_uuid(self.context, db_bdm['id']) bdm = objects.BlockDeviceMappingList.get_by_instance_uuid( self.context, uuids.instance_uuid)[0] self.assertEqual(uuid1, bdm.uuid) # We would only ever call this twice if we raced # This is also testing that the compare-and-swap doesn't overwrite an # existing uuid if we hit that race. uuid2 = objects.BlockDeviceMapping._create_uuid(self.context, bdm['id']) self.assertEqual(uuid1, uuid2) def _assert_online_migration(self, expected_total, expected_done, limit=10): total, done = objects.BlockDeviceMapping.populate_uuids( self.context, limit) self.assertEqual(expected_total, total) self.assertEqual(expected_done, done) def test_online_migration(self): self._assert_online_migration(0, 0) # Create 2 BDMs, one with a uuid and one without self._create_legacy_bdm(self.context) db_api.block_device_mapping_create(self.context, {'uuid': uuids.bdm2, 'instance_uuid': uuids.instance_uuid}, legacy=False) # Run the online migration. We should find 1 and update 1 self._assert_online_migration(1, 1) # Fetch the BDMs and check we didn't modify the uuid of bdm2 bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( self.context, uuids.instance_uuid) bdm_uuids = [bdm.uuid for bdm in bdms] self.assertIn(uuids.bdm2, bdm_uuids) self.assertNotIn(None, bdm_uuids) # Run the online migration again to see nothing was processed self._assert_online_migration(0, 0) # Assert that we assign a uuid to a deleted bdm. 
self._create_legacy_bdm(self.context, deleted=True) self._assert_online_migration(1, 1) # Test that we don't migrate more than the limit for i in range(0, 3): self._create_legacy_bdm(self.context) self._assert_online_migration(2, 2, limit=2) class TestBlockDeviceMappingObject(test_objects._LocalTest, _TestBlockDeviceMappingObject): pass class TestRemoteBlockDeviceMappingObject(test_objects._RemoteTest, _TestBlockDeviceMappingObject): pass class _TestBlockDeviceMappingListObject(object): def fake_bdm(self, bdm_id, boot_index=-1, instance_uuid=uuids.instance): fake_bdm = fake_block_device.FakeDbBlockDeviceDict({ 'id': bdm_id, 'boot_index': boot_index, 'instance_uuid': instance_uuid, 'attachment_id': None, 'device_name': '/dev/sda2', 'source_type': 'snapshot', 'destination_type': 'volume', 'connection_info': "{'fake': 'connection_info'}", 'snapshot_id': 'fake-snapshot-id-1', }) return fake_bdm @mock.patch.object(db, 'block_device_mapping_get_all_by_instance_uuids') def test_bdms_by_instance_uuid(self, get_all_by_inst_uuids): fakes = [self.fake_bdm(123), self.fake_bdm(456)] get_all_by_inst_uuids.return_value = fakes bdms_by_uuid = objects.BlockDeviceMappingList.bdms_by_instance_uuid( self.context, [uuids.instance]) self.assertEqual([uuids.instance], list(bdms_by_uuid.keys())) self.assertIsInstance( bdms_by_uuid[uuids.instance], objects.BlockDeviceMappingList) for faked, got in zip(fakes, bdms_by_uuid[uuids.instance]): self.assertIsInstance(got, objects.BlockDeviceMapping) self.assertEqual(faked['id'], got.id) @mock.patch.object(db, 'block_device_mapping_get_all_by_instance_uuids') def test_bdms_by_instance_uuid_no_result(self, get_all_by_inst_uuids): get_all_by_inst_uuids.return_value = None bdms_by_uuid = objects.BlockDeviceMappingList.bdms_by_instance_uuid( self.context, [uuids.instance]) self.assertEqual({}, bdms_by_uuid) @mock.patch.object(db, 'block_device_mapping_get_all_by_instance_uuids') def test_get_by_instance_uuids(self, get_all_by_inst_uuids): fakes = [self.fake_bdm(123), self.fake_bdm(456)] get_all_by_inst_uuids.return_value = fakes bdm_list = objects.BlockDeviceMappingList.get_by_instance_uuids( self.context, [uuids.instance]) for faked, got in zip(fakes, bdm_list): self.assertIsInstance(got, objects.BlockDeviceMapping) self.assertEqual(faked['id'], got.id) @mock.patch.object(db, 'block_device_mapping_get_all_by_instance_uuids') def test_get_by_instance_uuids_no_result(self, get_all_by_inst_uuids): get_all_by_inst_uuids.return_value = None bdm_list = objects.BlockDeviceMappingList.get_by_instance_uuids( self.context, [uuids.instance]) self.assertEqual(0, len(bdm_list)) @mock.patch.object(db, 'block_device_mapping_get_all_by_instance') def test_get_by_instance_uuid(self, get_all_by_inst): fakes = [self.fake_bdm(123), self.fake_bdm(456)] get_all_by_inst.return_value = fakes bdm_list = objects.BlockDeviceMappingList.get_by_instance_uuid( self.context, uuids.instance) for faked, got in zip(fakes, bdm_list): self.assertIsInstance(got, objects.BlockDeviceMapping) self.assertEqual(faked['id'], got.id) @mock.patch.object(db, 'block_device_mapping_get_all_by_instance') def test_get_by_instance_uuid_no_result(self, get_all_by_inst): get_all_by_inst.return_value = None bdm_list = objects.BlockDeviceMappingList.get_by_instance_uuid( self.context, uuids.instance) self.assertEqual(0, len(bdm_list)) @mock.patch.object(db, 'block_device_mapping_get_all_by_instance') def test_root_bdm(self, get_all_by_inst): fakes = [self.fake_bdm(123), self.fake_bdm(456, boot_index=0)] 
        get_all_by_inst.return_value = fakes
        bdm_list = objects.BlockDeviceMappingList.get_by_instance_uuid(
            self.context, uuids.instance)
        self.assertEqual(456, bdm_list.root_bdm().id)

    @mock.patch.object(db, 'block_device_mapping_get_all_by_instance')
    def test_root_bdm_empty_bdm_list(self, get_all_by_inst):
        get_all_by_inst.return_value = None
        bdm_list = objects.BlockDeviceMappingList.get_by_instance_uuid(
            self.context, uuids.instance)
        self.assertIsNone(bdm_list.root_bdm())

    @mock.patch.object(db, 'block_device_mapping_get_all_by_instance')
    def test_root_bdm_undefined(self, get_all_by_inst):
        fakes = [
            self.fake_bdm(123, instance_uuid=uuids.instance_1),
            self.fake_bdm(456, instance_uuid=uuids.instance_2)
        ]
        get_all_by_inst.return_value = fakes
        bdm_list = objects.BlockDeviceMappingList.get_by_instance_uuid(
            self.context, uuids.bdm_instance)
        self.assertRaises(exception.UndefinedRootBDM, bdm_list.root_bdm)


class TestBlockDeviceMappingListObject(test_objects._LocalTest,
                                       _TestBlockDeviceMappingListObject):
    pass


class TestRemoteBlockDeviceMappingListObject(
        test_objects._RemoteTest,
        _TestBlockDeviceMappingListObject):
    pass


class TestBlockDeviceUtils(test.NoDBTestCase):
    def test_make_list_from_dicts(self):
        ctx = context.get_admin_context()
        dicts = [{'id': 1}, {'id': 2}]
        objs = block_device_obj.block_device_make_list_from_dicts(ctx, dicts)
        self.assertIsInstance(objs, block_device_obj.BlockDeviceMappingList)
        self.assertEqual(2, len(objs))
        self.assertEqual(1, objs[0].id)
        self.assertEqual(2, objs[1].id)

    def test_make_list_from_dicts_empty(self):
        ctx = context.get_admin_context()
        objs = block_device_obj.block_device_make_list_from_dicts(ctx, [])
        self.assertIsInstance(objs, block_device_obj.BlockDeviceMappingList)
        self.assertEqual(0, len(objs))

nova-21.2.4/nova/tests/unit/objects/test_build_request.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
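# NOTE: A BuildRequest row stores the requested Instance serialized as a
# versioned-object primitive; the fixtures used below build that value with
# jsonutils.dumps(instance.obj_to_primitive()). The helper below is a small,
# never-called sketch of that round trip, added purely for illustration.
def _example_instance_roundtrip(instance):
    """Sketch: serialize an Instance the way BuildRequest persists it."""
    from oslo_serialization import jsonutils
    from nova import objects

    # Serialize to the JSON form stored in the build_requests table.
    serialized = jsonutils.dumps(instance.obj_to_primitive())
    # Reading it back yields an Instance object again.
    primitive = jsonutils.loads(serialized)
    return objects.Instance.obj_from_primitive(primitive)
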
import mock from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from oslo_versionedobjects import base as o_vo_base from nova import exception from nova import objects from nova.objects import base from nova.objects import build_request from nova.tests.unit import fake_build_request from nova.tests.unit import fake_instance from nova.tests.unit.objects import test_objects class _TestBuildRequestObject(object): @mock.patch.object(build_request.BuildRequest, '_get_by_instance_uuid_from_db') def test_get_by_instance_uuid(self, get_by_uuid): fake_req = fake_build_request.fake_db_req() get_by_uuid.return_value = fake_req req_obj = build_request.BuildRequest.get_by_instance_uuid(self.context, fake_req['instance_uuid']) self.assertEqual(fake_req['instance_uuid'], req_obj.instance_uuid) self.assertEqual(fake_req['project_id'], req_obj.project_id) self.assertIsInstance(req_obj.instance, objects.Instance) get_by_uuid.assert_called_once_with(self.context, fake_req['instance_uuid']) @mock.patch.object(build_request.BuildRequest, '_get_by_instance_uuid_from_db') def test_get_by_instance_uuid_instance_none(self, get_by_uuid): fake_req = fake_build_request.fake_db_req() fake_req['instance'] = None get_by_uuid.return_value = fake_req self.assertRaises(exception.BuildRequestNotFound, build_request.BuildRequest.get_by_instance_uuid, self.context, fake_req['instance_uuid']) @mock.patch.object(build_request.BuildRequest, '_get_by_instance_uuid_from_db') def test_get_by_instance_uuid_instance_version_too_new(self, get_by_uuid): fake_req = fake_build_request.fake_db_req() instance = fake_instance.fake_instance_obj(self.context, objects.Instance, uuid=fake_req['instance_uuid']) instance.VERSION = '99' fake_req['instance'] = jsonutils.dumps(instance.obj_to_primitive()) get_by_uuid.return_value = fake_req self.assertRaises(exception.BuildRequestNotFound, build_request.BuildRequest.get_by_instance_uuid, self.context, fake_req['instance_uuid']) @mock.patch.object(build_request.BuildRequest, '_get_by_instance_uuid_from_db') def test_get_by_instance_uuid_do_not_override_locked_by(self, get_by_uuid): fake_req = fake_build_request.fake_db_req() instance = fake_instance.fake_instance_obj(self.context, objects.Instance, uuid=fake_req['instance_uuid']) instance.locked_by = 'admin' fake_req['instance'] = jsonutils.dumps(instance.obj_to_primitive()) get_by_uuid.return_value = fake_req req_obj = build_request.BuildRequest.get_by_instance_uuid(self.context, fake_req['instance_uuid']) self.assertIsInstance(req_obj.instance, objects.Instance) self.assertEqual('admin', req_obj.instance.locked_by) def test_create(self): fake_req = fake_build_request.fake_db_req() req_obj = fake_build_request.fake_req_obj(self.context, fake_req) def _test_create_args(self2, context, changes): for field in ['instance_uuid', 'project_id']: self.assertEqual(fake_req[field], changes[field]) self.assertEqual( jsonutils.dumps(req_obj.instance.obj_to_primitive()), changes['instance']) return fake_req with mock.patch.object(build_request.BuildRequest, '_create_in_db', _test_create_args): req_obj.create() self.assertEqual({}, req_obj.obj_get_changes()) def test_create_id_set(self): req_obj = build_request.BuildRequest(self.context) req_obj.id = 3 self.assertRaises(exception.ObjectActionError, req_obj.create) def test_create_uuid_set(self): req_obj = build_request.BuildRequest(self.context) self.assertRaises(exception.ObjectActionError, req_obj.create) @mock.patch.object(build_request.BuildRequest, '_destroy_in_db') 
def test_destroy(self, destroy_in_db): req_obj = build_request.BuildRequest(self.context) req_obj.instance_uuid = uuids.instance req_obj.destroy() destroy_in_db.assert_called_once_with(self.context, req_obj.instance_uuid) @mock.patch.object(build_request.BuildRequest, '_save_in_db') def test_save(self, save_in_db): fake_req = fake_build_request.fake_db_req() save_in_db.return_value = fake_req req_obj = fake_build_request.fake_req_obj(self.context, fake_req) # We need to simulate the BuildRequest object being persisted before # that call. req_obj.id = 1 req_obj.obj_reset_changes(recursive=True) req_obj.project_id = 'foo' req_obj.save() save_in_db.assert_called_once_with(self.context, req_obj.id, {'project_id': 'foo'}) def test_get_new_instance_show_changed_fields(self): # Assert that we create a very dirty object from the cleaned one # on build_request fake_req = fake_build_request.fake_db_req() fields = jsonutils.loads(fake_req['instance'])['nova_object.data'] # TODO(Kevin Zheng): clean up this workaround once # build_request.get_new_instance() can handle tags. fields.pop('tags', None) build_request = objects.BuildRequest._from_db_object( self.context, objects.BuildRequest(), fake_req) self.assertEqual(0, len(build_request.instance.obj_what_changed())) instance = build_request.get_new_instance(self.context) for field in fields: self.assertIn(field, instance.obj_what_changed()) self.assertEqual(getattr(build_request.instance, field), getattr(instance, field)) def test_from_db_object_set_deleted_hidden(self): # Assert that if we persisted an instance not yet having the deleted # or hidden field being set, we still return a value for that field. fake_req = fake_build_request.fake_db_req() with mock.patch.object(o_vo_base.VersionedObject, 'obj_set_defaults') as mock_obj_set_defaults: build_request = objects.BuildRequest._from_db_object( self.context, objects.BuildRequest(), fake_req) mock_obj_set_defaults.assert_called_once_with('deleted', 'hidden') self.assertFalse(build_request.instance.deleted) self.assertFalse(build_request.instance.hidden) def test_obj_make_compatible_pre_1_3(self): obj = fake_build_request.fake_req_obj(self.context) build_request_obj = objects.BuildRequest(self.context) data = lambda x: x['nova_object.data'] obj_primitive = data(obj.obj_to_primitive()) self.assertIn('tags', obj_primitive) build_request_obj.obj_make_compatible(obj_primitive, '1.2') self.assertIn('instance_uuid', obj_primitive) self.assertNotIn('tags', obj_primitive) def test_create_with_tags_set(self): # Test that when we set tags on the build request, # create it and reload it from the database that the # build_request.instance.tags is the same thing. build_request_obj = fake_build_request.fake_req_obj(self.context) self.assertEqual(1, len(build_request_obj.tags)) build_request_obj.create() self.assertEqual(1, len(build_request_obj.tags)) self.assertEqual(len(build_request_obj.tags), len(build_request_obj.instance.tags)) # Can't compare list objects directly, just compare the single # item they contain. 
        self.assertTrue(base.obj_equal_prims(
            build_request_obj.tags[0],
            build_request_obj.instance.tags[0]))


class TestBuildRequestObject(test_objects._LocalTest,
                             _TestBuildRequestObject):
    pass


class TestRemoteBuildRequestObject(test_objects._RemoteTest,
                                   _TestBuildRequestObject):
    pass


class _TestBuildRequestListObject(object):
    @mock.patch.object(build_request.BuildRequestList, '_get_all_from_db')
    def test_get_all(self, get_all):
        fake_reqs = [fake_build_request.fake_db_req() for x in range(2)]
        get_all.return_value = fake_reqs
        req_objs = build_request.BuildRequestList.get_all(self.context)
        self.assertEqual(2, len(req_objs))
        for i in range(2):
            self.assertEqual(fake_reqs[i]['instance_uuid'],
                             req_objs[i].instance_uuid)
            self.assertEqual(fake_reqs[i]['project_id'],
                             req_objs[i].project_id)
            self.assertIsInstance(req_objs[i].instance, objects.Instance)


class TestBuildRequestListObject(test_objects._LocalTest,
                                 _TestBuildRequestListObject):
    pass


class TestRemoteBuildRequestListObject(test_objects._RemoteTest,
                                       _TestBuildRequestListObject):
    pass

nova-21.2.4/nova/tests/unit/objects/test_cell_mapping.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
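# NOTE: CellMapping supports templated transport_url / database_connection
# values such as '{scheme}://{username}:{password}@{hostname}:{port}/...',
# which are expanded against the configured base URLs (see the
# test_formatted_* cases below). The helper below is a rough, never-called
# sketch of that expansion using str.format(); it is an illustration only --
# the real logic lives in CellMapping._format_url and also handles multiple
# netlocs and IPv6 hosts.
def _example_expand_url_template(template, base_url):
    """Sketch: expand a templated cell URL from a parsed base URL."""
    import urllib.parse as urlparse

    parsed = urlparse.urlparse(base_url)
    return template.format(
        scheme=parsed.scheme, username=parsed.username or '',
        password=parsed.password or '', hostname=parsed.hostname or '',
        port=parsed.port or '', path=parsed.path.lstrip('/'),
        query=parsed.query, fragment=parsed.fragment)
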
import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import uuidutils from nova import exception from nova import objects from nova.objects import cell_mapping from nova.tests.unit.objects import test_objects def get_db_mapping(**updates): db_mapping = { 'id': 1, 'uuid': uuidutils.generate_uuid(), 'name': 'cell1', 'transport_url': 'rabbit://', 'database_connection': 'sqlite:///', 'created_at': None, 'updated_at': None, 'disabled': False, } db_mapping.update(updates) return db_mapping class _TestCellMappingObject(object): @mock.patch.object(cell_mapping.CellMapping, '_get_by_uuid_from_db') def test_get_by_uuid(self, uuid_from_db): db_mapping = get_db_mapping() uuid_from_db.return_value = db_mapping mapping_obj = objects.CellMapping().get_by_uuid(self.context, db_mapping['uuid']) uuid_from_db.assert_called_once_with(self.context, db_mapping['uuid']) self.compare_obj(mapping_obj, db_mapping) @mock.patch.object(cell_mapping.CellMapping, '_get_by_uuid_from_db', side_effect=exception.CellMappingNotFound(uuid='fake')) def test_get_by_uuid_invalid(self, uuid_from_db): db_mapping = get_db_mapping() self.assertRaises(exception.CellMappingNotFound, objects.CellMapping().get_by_uuid, self.context, db_mapping['uuid']) uuid_from_db.assert_called_once_with(self.context, db_mapping['uuid']) @mock.patch.object(cell_mapping.CellMapping, '_create_in_db') def test_create(self, create_in_db): uuid = uuidutils.generate_uuid() db_mapping = get_db_mapping(uuid=uuid, name='test', database_connection='mysql+pymysql:///') create_in_db.return_value = db_mapping mapping_obj = objects.CellMapping(self.context) mapping_obj.uuid = uuid mapping_obj.name = 'test' mapping_obj.database_connection = 'mysql+pymysql:///' mapping_obj.create() create_in_db.assert_called_once_with(self.context, {'uuid': uuid, 'name': 'test', 'database_connection': 'mysql+pymysql:///'}) self.compare_obj(mapping_obj, db_mapping) @mock.patch.object(cell_mapping.CellMapping, '_save_in_db') def test_save(self, save_in_db): uuid = uuidutils.generate_uuid() db_mapping = get_db_mapping(database_connection='mysql+pymysql:///') save_in_db.return_value = db_mapping mapping_obj = objects.CellMapping(self.context) mapping_obj.uuid = uuid mapping_obj.database_connection = 'mysql+pymysql:///' mapping_obj.save() save_in_db.assert_called_once_with(self.context, uuid, {'uuid': uuid, 'database_connection': 'mysql+pymysql:///'}) self.compare_obj(mapping_obj, db_mapping) @mock.patch.object(cell_mapping.CellMapping, '_destroy_in_db') def test_destroy(self, destroy_in_db): uuid = uuidutils.generate_uuid() mapping_obj = objects.CellMapping(self.context) mapping_obj.uuid = uuid mapping_obj.destroy() destroy_in_db.assert_called_once_with(self.context, uuid) def test_is_cell0(self): self.assertFalse(objects.CellMapping().is_cell0()) self.assertFalse(objects.CellMapping( uuid=uuidutils.generate_uuid()).is_cell0()) self.assertTrue(objects.CellMapping( uuid=objects.CellMapping.CELL0_UUID).is_cell0()) def test_identity_no_name_set(self): cm = objects.CellMapping(uuid=uuids.cell1) self.assertEqual(uuids.cell1, cm.identity) def test_identity_name_is_none(self): cm = objects.CellMapping(uuid=uuids.cell1) self.assertEqual(uuids.cell1, cm.identity) def test_identity_with_name(self): cm = objects.CellMapping(uuid=uuids.cell1, name='foo') self.assertEqual('%s(foo)' % uuids.cell1, cm.identity) def test_obj_make_compatible(self): cell_mapping_obj = cell_mapping.CellMapping(context=self.context) fake_cell_mapping_obj = 
cell_mapping.CellMapping(context=self.context, uuid=uuids.cell, disabled=False) obj_primitive = fake_cell_mapping_obj.obj_to_primitive('1.0') obj = cell_mapping_obj.obj_from_primitive(obj_primitive) self.assertIn('uuid', obj) self.assertEqual(uuids.cell, obj.uuid) self.assertNotIn('disabled', obj) @mock.patch.object(cell_mapping.CellMapping, '_get_by_uuid_from_db') def test_formatted_db_url(self, mock_get): url = 'sqlite://bob:s3kret@localhost:123/nova?munchies=doritos#baz' varurl = ('{scheme}://not{username}:{password}@' '{hostname}:1{port}/{path}?{query}&flavor=coolranch' '#{fragment}') self.flags(connection=url, group='database') db_mapping = get_db_mapping(database_connection=varurl) mock_get.return_value = db_mapping mapping_obj = objects.CellMapping().get_by_uuid(self.context, db_mapping['uuid']) self.assertEqual(('sqlite://notbob:s3kret@localhost:1123/nova?' 'munchies=doritos&flavor=coolranch#baz'), mapping_obj.database_connection) @mock.patch.object(cell_mapping.CellMapping, '_get_by_uuid_from_db') def test_formatted_mq_url(self, mock_get): url = 'rabbit://bob:s3kret@localhost:123/nova?munchies=doritos#baz' varurl = ('{scheme}://not{username}:{password}@' '{hostname}:1{port}/{path}?{query}&flavor=coolranch' '#{fragment}') self.flags(transport_url=url) db_mapping = get_db_mapping(transport_url=varurl) mock_get.return_value = db_mapping mapping_obj = objects.CellMapping().get_by_uuid(self.context, db_mapping['uuid']) self.assertEqual(('rabbit://notbob:s3kret@localhost:1123/nova?' 'munchies=doritos&flavor=coolranch#baz'), mapping_obj.transport_url) @mock.patch.object(cell_mapping.CellMapping, '_get_by_uuid_from_db') def test_formatted_mq_url_multi_netloc1(self, mock_get): # Multiple netlocs, each with all parameters url = ('rabbit://alice:n0ts3kret@otherhost:456,' 'bob:s3kret@localhost:123' '/nova?munchies=doritos#baz') varurl = ('{scheme}://not{username2}:{password1}@' '{hostname2}:1{port1}/{path}?{query}&flavor=coolranch' '#{fragment}') self.flags(transport_url=url) db_mapping = get_db_mapping(transport_url=varurl) mock_get.return_value = db_mapping mapping_obj = objects.CellMapping().get_by_uuid(self.context, db_mapping['uuid']) self.assertEqual(('rabbit://notbob:n0ts3kret@localhost:1456/nova?' 'munchies=doritos&flavor=coolranch#baz'), mapping_obj.transport_url) @mock.patch.object(cell_mapping.CellMapping, '_get_by_uuid_from_db') def test_formatted_mq_url_multi_netloc1_but_ipv6(self, mock_get): # Multiple netlocs, each with all parameters url = ('rabbit://alice:n0ts3kret@otherhost:456,' 'bob:s3kret@[1:2::7]:123' '/nova?munchies=doritos#baz') varurl = ('{scheme}://not{username2}:{password1}@' '[{hostname2}]:1{port1}/{path}?{query}&flavor=coolranch' '#{fragment}') self.flags(transport_url=url) db_mapping = get_db_mapping(transport_url=varurl) mock_get.return_value = db_mapping mapping_obj = objects.CellMapping().get_by_uuid(self.context, db_mapping['uuid']) self.assertEqual(('rabbit://notbob:n0ts3kret@[1:2::7]:1456/nova?' 
'munchies=doritos&flavor=coolranch#baz'), mapping_obj.transport_url) @mock.patch.object(cell_mapping.CellMapping, '_get_by_uuid_from_db') def test_formatted_mq_url_multi_netloc2(self, mock_get): # Multiple netlocs, without optional password and port url = ('rabbit://alice@otherhost,' 'bob:s3kret@localhost:123' '/nova?munchies=doritos#baz') varurl = ('{scheme}://not{username1}:{password2}@' '{hostname2}:1{port2}/{path}?{query}&flavor=coolranch' '#{fragment}') self.flags(transport_url=url) db_mapping = get_db_mapping(transport_url=varurl) mock_get.return_value = db_mapping mapping_obj = objects.CellMapping().get_by_uuid(self.context, db_mapping['uuid']) self.assertEqual(('rabbit://notalice:s3kret@localhost:1123/nova?' 'munchies=doritos&flavor=coolranch#baz'), mapping_obj.transport_url) @mock.patch.object(cell_mapping.CellMapping, '_get_by_uuid_from_db') def test_formatted_mq_url_multi_netloc3(self, mock_get): # Multiple netlocs, without optional args url = ('rabbit://otherhost,' 'bob:s3kret@localhost:123' '/nova?munchies=doritos#baz') varurl = ('{scheme}://not{username2}:{password2}@' '{hostname1}:1{port2}/{path}?{query}&flavor=coolranch' '#{fragment}') self.flags(transport_url=url) db_mapping = get_db_mapping(transport_url=varurl) mock_get.return_value = db_mapping mapping_obj = objects.CellMapping().get_by_uuid(self.context, db_mapping['uuid']) self.assertEqual(('rabbit://notbob:s3kret@otherhost:1123/nova?' 'munchies=doritos&flavor=coolranch#baz'), mapping_obj.transport_url) @mock.patch.object(cell_mapping.CellMapping, '_get_by_uuid_from_db') def test_formatted_url_without_base_set(self, mock_get): # Make sure we just pass through the template URL if the base # URLs are not set varurl = ('{scheme}://not{username2}:{password2}@' '{hostname1}:1{port2}/{path}?{query}&flavor=coolranch' '#{fragment}') self.flags(transport_url=None) self.flags(connection=None, group='database') db_mapping = get_db_mapping(transport_url=varurl, database_connection=varurl) mock_get.return_value = db_mapping mapping_obj = objects.CellMapping().get_by_uuid(self.context, db_mapping['uuid']) self.assertEqual(varurl, mapping_obj.database_connection) self.assertEqual(varurl, mapping_obj.transport_url) @mock.patch.object(cell_mapping.CellMapping, '_get_by_uuid_from_db') @mock.patch.object(cell_mapping.CellMapping, '_format_url') def test_non_formatted_url_with_no_base(self, mock_format, mock_get): # Make sure we just pass through the template URL if the base # URLs are not set, i.e. we don't try to format the URL to a template. 
url = 'foo' self.flags(transport_url=None) self.flags(connection=None, group='database') db_mapping = get_db_mapping(transport_url=url, database_connection=url) mock_get.return_value = db_mapping mapping_obj = objects.CellMapping().get_by_uuid(self.context, db_mapping['uuid']) self.assertEqual(url, mapping_obj.database_connection) self.assertEqual(url, mapping_obj.transport_url) mock_format.assert_not_called() class TestCellMappingObject(test_objects._LocalTest, _TestCellMappingObject): pass class TestRemoteCellMappingObject(test_objects._RemoteTest, _TestCellMappingObject): pass class _TestCellMappingListObject(object): @mock.patch.object(cell_mapping.CellMappingList, '_get_all_from_db') def test_get_all(self, get_all_from_db): db_mapping = get_db_mapping() get_all_from_db.return_value = [db_mapping] mapping_obj = objects.CellMappingList.get_all(self.context) get_all_from_db.assert_called_once_with(self.context) self.compare_obj(mapping_obj.objects[0], db_mapping) def test_get_all_sorted(self): for ident in (10, 3): cm = objects.CellMapping(context=self.context, id=ident, uuid=getattr(uuids, 'cell%i' % ident), transport_url='fake://%i' % ident, database_connection='fake://%i' % ident) cm.create() obj = objects.CellMappingList.get_all(self.context) ids = [c.id for c in obj] # Find the two normal cells, plus the two we created, but in the right # order self.assertEqual([1, 2, 3, 10], ids) def test_get_by_disabled(self): for ident in (4, 3): # We start with id's 4 and 3 because we already have 2 enabled cell # mappings in the base test case setup. 4 is before 3 to simply # verify the sorting mechanism. cm = objects.CellMapping(context=self.context, id=ident, uuid=getattr(uuids, 'cell%i' % ident), transport_url='fake://%i' % ident, database_connection='fake://%i' % ident, disabled=True) cm.create() obj = objects.CellMappingList.get_all(self.context) ids = [c.id for c in obj] # Returns all the cells self.assertEqual([1, 2, 3, 4], ids) obj = objects.CellMappingList.get_by_disabled(self.context, disabled=False) ids = [c.id for c in obj] # Returns only the enabled ones self.assertEqual([1, 2], ids) obj = objects.CellMappingList.get_by_disabled(self.context, disabled=True) ids = [c.id for c in obj] # Returns only the disabled ones self.assertEqual([3, 4], ids) class TestCellMappingListObject(test_objects._LocalTest, _TestCellMappingListObject): pass class TestRemoteCellMappingListObject(test_objects._RemoteTest, _TestCellMappingListObject): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_compute_node.py0000664000175000017500000010062200000000000023376 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
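# NOTE: Many of the tests below exercise versioned-object backlevelling:
# obj_to_primitive(target_version=...) must drop fields that did not exist in
# older ComputeNode versions (for example numa_topology before 1.5 and host
# before 1.7). The helper below is a minimal, never-called sketch of that
# call, added for illustration; the version shown is one example taken from
# the tests, not an exhaustive map.
def _example_backlevel_compute_node(compute):
    """Sketch: downgrade a ComputeNode primitive for an older consumer."""
    from oslo_versionedobjects import base as ovo_base

    versions = ovo_base.obj_tree_get_versions('ComputeNode')
    primitive = compute.obj_to_primitive(target_version='1.5',
                                         version_manifest=versions)
    # 'numa_topology' is present at 1.5 but would be stripped at 1.4.
    return primitive['nova_object.data']
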
import copy import mock import netaddr from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel from oslo_utils import timeutils from oslo_versionedobjects import base as ovo_base from oslo_versionedobjects import exception as ovo_exc from nova import conf from nova.db import api as db from nova import exception from nova import objects from nova.objects import base from nova.objects import compute_node from nova.objects import hv_spec from nova.objects import service from nova.tests.unit import fake_pci_device_pools from nova.tests.unit.objects import test_objects NOW = timeutils.utcnow().replace(microsecond=0) fake_stats = {'num_foo': '10'} fake_stats_db_format = jsonutils.dumps(fake_stats) # host_ip is coerced from a string to an IPAddress # but needs to be converted to a string for the database format fake_host_ip = '127.0.0.1' fake_numa_topology = objects.NUMATopology(cells=[ objects.NUMACell( id=0, cpuset=set([1, 2]), pcpuset=set(), memory=512, cpu_usage=0, memory_usage=0, mempages=[], pinned_cpus=set(), siblings=[set([1]), set([2])]), objects.NUMACell( id=1, cpuset=set([3, 4]), pcpuset=set(), memory=512, cpu_usage=0, memory_usage=0, mempages=[], pinned_cpus=set(), siblings=[set([3]), set([4])])]) fake_numa_topology_db_format = fake_numa_topology._to_json() fake_supported_instances = [('x86_64', 'kvm', 'hvm')] fake_hv_spec = hv_spec.HVSpec(arch=fake_supported_instances[0][0], hv_type=fake_supported_instances[0][1], vm_mode=fake_supported_instances[0][2]) fake_supported_hv_specs = [fake_hv_spec] # for backward compatibility, each supported instance object # is stored as a list in the database fake_supported_hv_specs_db_format = jsonutils.dumps([fake_hv_spec.to_list()]) fake_pci = jsonutils.dumps(fake_pci_device_pools.fake_pool_list_primitive) fake_compute_node = { 'created_at': NOW, 'updated_at': None, 'deleted_at': None, 'deleted': False, 'id': 123, 'uuid': uuidsentinel.fake_compute_node, 'service_id': None, 'host': 'fake', 'vcpus': 4, 'memory_mb': 4096, 'local_gb': 1024, 'vcpus_used': 2, 'memory_mb_used': 2048, 'local_gb_used': 512, 'hypervisor_type': 'Hyper-Dan-VM-ware', 'hypervisor_version': 1001, 'hypervisor_hostname': 'vm.danplanet.com', 'free_ram_mb': 1024, 'free_disk_gb': 256, 'current_workload': 100, 'running_vms': 2013, 'cpu_info': 'Schmintel i786', 'disk_available_least': 256, 'metrics': '', 'stats': fake_stats_db_format, 'host_ip': fake_host_ip, 'numa_topology': fake_numa_topology_db_format, 'supported_instances': fake_supported_hv_specs_db_format, 'pci_stats': fake_pci, 'cpu_allocation_ratio': 16.0, 'ram_allocation_ratio': 1.5, 'disk_allocation_ratio': 1.0, 'mapped': 0, } # FIXME(sbauza) : For compatibility checking, to be removed once we are sure # that all computes are running latest DB version with host field in it. 
fake_old_compute_node = fake_compute_node.copy() del fake_old_compute_node['host'] # resources are passed from the virt drivers and copied into the compute_node fake_resources = { 'vcpus': 2, 'memory_mb': 1024, 'local_gb': 10, 'cpu_info': 'fake-info', 'vcpus_used': 1, 'memory_mb_used': 512, 'local_gb_used': 4, 'numa_topology': fake_numa_topology_db_format, 'hypervisor_type': 'fake-type', 'hypervisor_version': 1, 'hypervisor_hostname': 'fake-host', 'disk_available_least': 256, 'host_ip': fake_host_ip, 'supported_instances': fake_supported_instances } fake_compute_with_resources = objects.ComputeNode( vcpus=fake_resources['vcpus'], memory_mb=fake_resources['memory_mb'], local_gb=fake_resources['local_gb'], cpu_info=fake_resources['cpu_info'], vcpus_used=fake_resources['vcpus_used'], memory_mb_used=fake_resources['memory_mb_used'], local_gb_used =fake_resources['local_gb_used'], numa_topology=fake_resources['numa_topology'], hypervisor_type=fake_resources['hypervisor_type'], hypervisor_version=fake_resources['hypervisor_version'], hypervisor_hostname=fake_resources['hypervisor_hostname'], disk_available_least=fake_resources['disk_available_least'], host_ip=netaddr.IPAddress(fake_resources['host_ip']), supported_hv_specs=fake_supported_hv_specs, ) CONF = conf.CONF class _TestComputeNodeObject(object): def supported_hv_specs_comparator(self, expected, obj_val): obj_val = [inst.to_list() for inst in obj_val] self.assertJsonEqual(expected, obj_val) def pci_device_pools_comparator(self, expected, obj_val): if obj_val is not None: obj_val = obj_val.obj_to_primitive() self.assertJsonEqual(expected, obj_val) else: self.assertEqual(expected, obj_val) def comparators(self): return {'stats': self.assertJsonEqual, 'host_ip': self.str_comparator, 'supported_hv_specs': self.supported_hv_specs_comparator, 'pci_device_pools': self.pci_device_pools_comparator, } def subs(self): return {'supported_hv_specs': 'supported_instances', 'pci_device_pools': 'pci_stats'} @mock.patch.object(db, 'compute_node_get') def test_get_by_id(self, get_mock): get_mock.return_value = fake_compute_node compute = compute_node.ComputeNode.get_by_id(self.context, 123) self.compare_obj(compute, fake_compute_node, subs=self.subs(), comparators=self.comparators()) self.assertNotIn('uuid', compute.obj_what_changed()) get_mock.assert_called_once_with(self.context, 123) @mock.patch.object(compute_node.ComputeNodeList, 'get_all_by_uuids') def test_get_by_uuid(self, get_all_by_uuids): fake_node = copy.copy(fake_compute_node) fake_node['stats'] = None get_all_by_uuids.return_value = objects.ComputeNodeList( objects=[objects.ComputeNode(**fake_node)]) compute = compute_node.ComputeNode.get_by_uuid( self.context, uuidsentinel.fake_compute_node) self.assertEqual(uuidsentinel.fake_compute_node, compute.uuid) get_all_by_uuids.assert_called_once_with( self.context, [uuidsentinel.fake_compute_node]) @mock.patch.object(compute_node.ComputeNodeList, 'get_all_by_uuids') def test_get_by_uuid_not_found(self, get_all_by_uuids): get_all_by_uuids.return_value = objects.ComputeNodeList() self.assertRaises(exception.ComputeHostNotFound, compute_node.ComputeNode.get_by_uuid, self.context, uuidsentinel.fake_compute_node) get_all_by_uuids.assert_called_once_with( self.context, [uuidsentinel.fake_compute_node]) @mock.patch.object(db, 'compute_node_get') def test_get_without_mapped(self, get_mock): fake_node = copy.copy(fake_compute_node) fake_node['mapped'] = None get_mock.return_value = fake_node compute = compute_node.ComputeNode.get_by_id(self.context, 123) 
self.compare_obj(compute, fake_compute_node, subs=self.subs(), comparators=self.comparators()) self.assertIn('mapped', compute) self.assertEqual(0, compute.mapped) @mock.patch.object(objects.Service, 'get_by_id') @mock.patch.object(db, 'compute_node_get') def test_get_by_id_with_host_field_not_in_db(self, mock_cn_get, mock_obj_svc_get): fake_compute_node_with_svc_id = fake_compute_node.copy() fake_compute_node_with_svc_id['service_id'] = 123 fake_compute_node_with_no_host = fake_compute_node_with_svc_id.copy() host = fake_compute_node_with_no_host.pop('host') fake_service = service.Service(id=123) fake_service.host = host mock_cn_get.return_value = fake_compute_node_with_no_host mock_obj_svc_get.return_value = fake_service compute = compute_node.ComputeNode.get_by_id(self.context, 123) self.compare_obj(compute, fake_compute_node_with_svc_id, subs=self.subs(), comparators=self.comparators()) @mock.patch.object(db, 'compute_nodes_get_by_service_id') def test_get_by_service_id(self, get_mock): get_mock.return_value = [fake_compute_node] compute = compute_node.ComputeNode.get_by_service_id(self.context, 456) self.compare_obj(compute, fake_compute_node, subs=self.subs(), comparators=self.comparators()) get_mock.assert_called_once_with(self.context, 456) @mock.patch.object(db, 'compute_node_get_by_host_and_nodename') def test_get_by_host_and_nodename(self, cn_get_by_h_and_n): cn_get_by_h_and_n.return_value = fake_compute_node compute = compute_node.ComputeNode.get_by_host_and_nodename( self.context, 'fake', 'vm.danplanet.com') self.compare_obj(compute, fake_compute_node, subs=self.subs(), comparators=self.comparators()) @mock.patch.object(db, 'compute_node_get_by_nodename') def test_get_by_nodename(self, cn_get_by_n): cn_get_by_n.return_value = fake_compute_node compute = compute_node.ComputeNode.get_by_nodename( self.context, 'vm.danplanet.com') self.compare_obj(compute, fake_compute_node, subs=self.subs(), comparators=self.comparators()) @mock.patch('nova.db.api.compute_node_get_all_by_host') def test_get_first_node_by_host_for_old_compat( self, cn_get_all_by_host): another_node = fake_compute_node.copy() another_node['hypervisor_hostname'] = 'neverland' cn_get_all_by_host.return_value = [fake_compute_node, another_node] compute = ( compute_node.ComputeNode.get_first_node_by_host_for_old_compat( self.context, 'fake') ) self.compare_obj(compute, fake_compute_node, subs=self.subs(), comparators=self.comparators()) @mock.patch('nova.objects.ComputeNodeList.get_all_by_host') def test_get_first_node_by_host_for_old_compat_not_found( self, cn_get_all_by_host): cn_get_all_by_host.side_effect = exception.ComputeHostNotFound( host='fake') self.assertRaises( exception.ComputeHostNotFound, compute_node.ComputeNode.get_first_node_by_host_for_old_compat, self.context, 'fake') @mock.patch.object(db, 'compute_node_create') @mock.patch('nova.db.api.compute_node_get', return_value=fake_compute_node) def test_create(self, mock_get, mock_create): mock_create.return_value = fake_compute_node compute = compute_node.ComputeNode(context=self.context) compute.service_id = 456 compute.uuid = uuidsentinel.fake_compute_node compute.stats = fake_stats # NOTE (pmurray): host_ip is coerced to an IPAddress compute.host_ip = fake_host_ip compute.supported_hv_specs = fake_supported_hv_specs with mock.patch('oslo_utils.uuidutils.generate_uuid') as mock_gu: compute.create() self.assertFalse(mock_gu.called) self.compare_obj(compute, fake_compute_node, subs=self.subs(), comparators=self.comparators()) param_dict = { 'service_id': 
456, 'stats': fake_stats_db_format, 'host_ip': fake_host_ip, 'supported_instances': fake_supported_hv_specs_db_format, 'uuid': uuidsentinel.fake_compute_node } mock_create.assert_called_once_with(self.context, param_dict) @mock.patch('nova.db.api.compute_node_create') @mock.patch('oslo_utils.uuidutils.generate_uuid') @mock.patch('nova.db.api.compute_node_get', return_value=fake_compute_node) def test_create_allocates_uuid(self, mock_get, mock_gu, mock_create): mock_create.return_value = fake_compute_node mock_gu.return_value = fake_compute_node['uuid'] obj = objects.ComputeNode(context=self.context) obj.create() mock_gu.assert_called_once_with() mock_create.assert_called_once_with( self.context, {'uuid': fake_compute_node['uuid']}) @mock.patch('nova.db.api.compute_node_create') @mock.patch('nova.db.api.compute_node_get', return_value=fake_compute_node) def test_recreate_fails(self, mock_get, mock_create): mock_create.return_value = fake_compute_node compute = compute_node.ComputeNode(context=self.context) compute.service_id = 456 compute.uuid = uuidsentinel.fake_compute_node compute.create() self.assertRaises(exception.ObjectActionError, compute.create) param_dict = {'service_id': 456, 'uuid': uuidsentinel.fake_compute_node} mock_create.assert_called_once_with(self.context, param_dict) @mock.patch.object(db, 'compute_node_update') @mock.patch('nova.db.api.compute_node_get', return_value=fake_compute_node) def test_save(self, mock_get, mock_update): mock_update.return_value = fake_compute_node compute = compute_node.ComputeNode(context=self.context) compute.id = 123 compute.vcpus_used = 3 compute.stats = fake_stats compute.uuid = uuidsentinel.fake_compute_node # NOTE (pmurray): host_ip is coerced to an IPAddress compute.host_ip = fake_host_ip compute.supported_hv_specs = fake_supported_hv_specs compute.save() self.compare_obj(compute, fake_compute_node, subs=self.subs(), comparators=self.comparators()) param_dict = { 'vcpus_used': 3, 'stats': fake_stats_db_format, 'host_ip': fake_host_ip, 'supported_instances': fake_supported_hv_specs_db_format, 'uuid': uuidsentinel.fake_compute_node, } mock_update.assert_called_once_with(self.context, 123, param_dict) @mock.patch('nova.db.api.compute_node_update') def test_save_pci_device_pools_empty(self, mock_update): fake_pci = jsonutils.dumps( objects.PciDevicePoolList(objects=[]).obj_to_primitive()) compute_dict = fake_compute_node.copy() compute_dict['pci_stats'] = fake_pci mock_update.return_value = compute_dict compute = compute_node.ComputeNode(context=self.context) compute.id = 123 compute.pci_device_pools = objects.PciDevicePoolList(objects=[]) compute.save() self.compare_obj(compute, compute_dict, subs=self.subs(), comparators=self.comparators()) mock_update.assert_called_once_with( self.context, 123, {'pci_stats': fake_pci}) @mock.patch('nova.db.api.compute_node_update') def test_save_pci_device_pools_null(self, mock_update): compute_dict = fake_compute_node.copy() compute_dict['pci_stats'] = None mock_update.return_value = compute_dict compute = compute_node.ComputeNode(context=self.context) compute.id = 123 compute.pci_device_pools = None compute.save() self.compare_obj(compute, compute_dict, subs=self.subs(), comparators=self.comparators()) mock_update.assert_called_once_with( self.context, 123, {'pci_stats': None}) @mock.patch.object(db, 'compute_node_create', return_value=fake_compute_node) @mock.patch.object(db, 'compute_node_get', return_value=fake_compute_node) def test_set_id_failure(self, mock_get, db_mock): compute = 
compute_node.ComputeNode(context=self.context, uuid=fake_compute_node['uuid']) compute.create() self.assertRaises(ovo_exc.ReadOnlyFieldError, setattr, compute, 'id', 124) @mock.patch.object(db, 'compute_node_delete') def test_destroy(self, mock_delete): compute = compute_node.ComputeNode(context=self.context) compute.id = 123 compute.destroy() mock_delete.assert_called_once_with(self.context, 123) @mock.patch.object(db, 'compute_node_get_all') def test_get_all(self, mock_get_all): mock_get_all.return_value = [fake_compute_node] computes = compute_node.ComputeNodeList.get_all(self.context) self.assertEqual(1, len(computes)) self.compare_obj(computes[0], fake_compute_node, subs=self.subs(), comparators=self.comparators()) mock_get_all.assert_called_once_with(self.context) @mock.patch.object(db, 'compute_node_search_by_hypervisor') def test_get_by_hypervisor(self, mock_search): mock_search.return_value = [fake_compute_node] computes = compute_node.ComputeNodeList.get_by_hypervisor(self.context, 'hyper') self.assertEqual(1, len(computes)) self.compare_obj(computes[0], fake_compute_node, subs=self.subs(), comparators=self.comparators()) mock_search.assert_called_once_with(self.context, 'hyper') @mock.patch('nova.db.api.compute_node_get_all_by_pagination', return_value=[fake_compute_node]) def test_get_by_pagination(self, fake_get_by_pagination): computes = compute_node.ComputeNodeList.get_by_pagination( self.context, limit=1, marker=1) self.assertEqual(1, len(computes)) self.compare_obj(computes[0], fake_compute_node, subs=self.subs(), comparators=self.comparators()) @mock.patch('nova.db.api.compute_nodes_get_by_service_id') def test__get_by_service(self, cn_get_by_svc_id): cn_get_by_svc_id.return_value = [fake_compute_node] computes = compute_node.ComputeNodeList._get_by_service(self.context, 123) self.assertEqual(1, len(computes)) self.compare_obj(computes[0], fake_compute_node, subs=self.subs(), comparators=self.comparators()) @mock.patch('nova.db.api.compute_node_get_all_by_host') def test_get_all_by_host(self, cn_get_all_by_host): cn_get_all_by_host.return_value = [fake_compute_node] computes = compute_node.ComputeNodeList.get_all_by_host(self.context, 'fake') self.assertEqual(1, len(computes)) self.compare_obj(computes[0], fake_compute_node, subs=self.subs(), comparators=self.comparators()) def test_compat_numa_topology(self): compute = compute_node.ComputeNode(numa_topology='fake-numa-topology') versions = ovo_base.obj_tree_get_versions('ComputeNode') primitive = compute.obj_to_primitive(target_version='1.4', version_manifest=versions) self.assertNotIn('numa_topology', primitive['nova_object.data']) primitive = compute.obj_to_primitive(target_version='1.5', version_manifest=versions) self.assertIn('numa_topology', primitive['nova_object.data']) def test_compat_supported_hv_specs(self): compute = compute_node.ComputeNode() compute.supported_hv_specs = fake_supported_hv_specs versions = ovo_base.obj_tree_get_versions('ComputeNode') primitive = compute.obj_to_primitive(target_version='1.5', version_manifest=versions) self.assertNotIn('supported_hv_specs', primitive['nova_object.data']) primitive = compute.obj_to_primitive(target_version='1.6', version_manifest=versions) self.assertIn('supported_hv_specs', primitive['nova_object.data']) @mock.patch('nova.objects.service.Service.get_by_compute_host') def test_compat_host(self, mock_get_compute): compute = compute_node.ComputeNode(host='fake-host') primitive = compute.obj_to_primitive(target_version='1.6') self.assertNotIn('host', 
primitive['nova_object.data']) primitive = compute.obj_to_primitive(target_version='1.7') self.assertIn('host', primitive['nova_object.data']) def test_compat_pci_device_pools(self): compute = compute_node.ComputeNode() compute.pci_device_pools = fake_pci_device_pools.fake_pool_list versions = ovo_base.obj_tree_get_versions('ComputeNode') primitive = compute.obj_to_primitive(target_version='1.8', version_manifest=versions) self.assertNotIn('pci_device_pools', primitive['nova_object.data']) primitive = compute.obj_to_primitive(target_version='1.9', version_manifest=versions) self.assertIn('pci_device_pools', primitive['nova_object.data']) @mock.patch('nova.objects.Service.get_by_compute_host') def test_compat_service_id(self, mock_get): mock_get.return_value = objects.Service(id=1) compute = objects.ComputeNode(host='fake-host', service_id=None) primitive = compute.obj_to_primitive(target_version='1.12') self.assertEqual(1, primitive['nova_object.data']['service_id']) @mock.patch('nova.objects.Service.get_by_compute_host') def test_compat_service_id_compute_host_not_found(self, mock_get): mock_get.side_effect = exception.ComputeHostNotFound(host='fake-host') compute = objects.ComputeNode(host='fake-host', service_id=None) primitive = compute.obj_to_primitive(target_version='1.12') self.assertEqual(-1, primitive['nova_object.data']['service_id']) def test_update_from_virt_driver(self): # copy in case the update has a side effect resources = copy.deepcopy(fake_resources) # Emulate the ironic driver which adds a uuid field. resources['uuid'] = uuidsentinel.node_uuid compute = compute_node.ComputeNode() compute.update_from_virt_driver(resources) expected = fake_compute_with_resources.obj_clone() expected.uuid = uuidsentinel.node_uuid self.assertTrue(base.obj_equal_prims(expected, compute)) def test_update_from_virt_driver_uuid_already_set(self): """Tests update_from_virt_driver where the compute node object already has a uuid value so the uuid from the virt driver is ignored. """ # copy in case the update has a side effect resources = copy.deepcopy(fake_resources) # Emulate the ironic driver which adds a uuid field. resources['uuid'] = uuidsentinel.node_uuid compute = compute_node.ComputeNode(uuid=uuidsentinel.something_else) compute.update_from_virt_driver(resources) expected = fake_compute_with_resources.obj_clone() expected.uuid = uuidsentinel.something_else self.assertTrue(base.obj_equal_prims(expected, compute)) def test_update_from_virt_driver_missing_field(self): # NOTE(pmurray): update_from_virt_driver does not require # all fields to be present in resources. Validation of the # resources data structure would be done in a different method. 
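# For example, deleting the 'vcpus' key below simply leaves the
# corresponding ComputeNode field unset rather than raising an error.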
resources = copy.deepcopy(fake_resources) del resources['vcpus'] compute = compute_node.ComputeNode() compute.update_from_virt_driver(resources) expected = fake_compute_with_resources.obj_clone() del expected.vcpus self.assertTrue(base.obj_equal_prims(expected, compute)) def test_update_from_virt_driver_extra_field(self): # copy in case the update has a side effect resources = copy.deepcopy(fake_resources) resources['extra_field'] = 'nonsense' compute = compute_node.ComputeNode() compute.update_from_virt_driver(resources) expected = fake_compute_with_resources self.assertTrue(base.obj_equal_prims(expected, compute)) def test_update_from_virt_driver_bad_value(self): # copy in case the update has a side effect resources = copy.deepcopy(fake_resources) resources['vcpus'] = 'nonsense' compute = compute_node.ComputeNode() self.assertRaises(ValueError, compute.update_from_virt_driver, resources) def test_compat_allocation_ratios(self): compute = compute_node.ComputeNode( cpu_allocation_ratio=1.0, ram_allocation_ratio=1.0) primitive = compute.obj_to_primitive(target_version='1.13') self.assertNotIn('cpu_allocation_ratio', primitive['nova_object.data']) self.assertNotIn('ram_allocation_ratio', primitive['nova_object.data']) primitive = compute.obj_to_primitive(target_version='1.14') self.assertIn('cpu_allocation_ratio', primitive['nova_object.data']) self.assertIn('ram_allocation_ratio', primitive['nova_object.data']) def test_compat_disk_allocation_ratio(self): compute = compute_node.ComputeNode(disk_allocation_ratio=1.0) primitive = compute.obj_to_primitive(target_version='1.15') self.assertNotIn( 'disk_allocation_ratio', primitive['nova_object.data']) primitive = compute.obj_to_primitive(target_version='1.16') self.assertIn('disk_allocation_ratio', primitive['nova_object.data']) @mock.patch('nova.db.api.compute_node_update') def test_compat_allocation_ratios_old_compute(self, mock_update): """Tests the scenario that allocation ratios are overridden in config and the legacy compute node record from the database has None set for the allocation ratio values. The result is that the migrated record allocation ratios should reflect the config overrides. """ self.flags(cpu_allocation_ratio=2.0, ram_allocation_ratio=3.0, disk_allocation_ratio=0.9) compute_dict = fake_compute_node.copy() # old computes don't provide allocation ratios to the table compute_dict['cpu_allocation_ratio'] = None compute_dict['ram_allocation_ratio'] = None compute_dict['disk_allocation_ratio'] = None cls = objects.ComputeNode compute = cls._from_db_object(self.context, cls(), compute_dict) self.assertEqual(2.0, compute.cpu_allocation_ratio) self.assertEqual(3.0, compute.ram_allocation_ratio) self.assertEqual(0.9, compute.disk_allocation_ratio) mock_update.assert_called_once_with( self.context, 123, {'cpu_allocation_ratio': 2.0, 'ram_allocation_ratio': 3.0, 'disk_allocation_ratio': 0.9}) @mock.patch('nova.db.api.compute_node_update') def test_compat_allocation_ratios_zero_conf(self, mock_update): """Tests that the override allocation ratios are set to 0.0 for whatever reason (maybe an old nova.conf sample file is being used) and the legacy compute node record has None for allocation ratios, so the resulting data migration makes the record allocation ratios use the CONF.initial_*_allocation_ratio values. 
""" self.flags(cpu_allocation_ratio=0.0, ram_allocation_ratio=0.0, disk_allocation_ratio=0.0) compute_dict = fake_compute_node.copy() # the computes provide allocation ratios None compute_dict['cpu_allocation_ratio'] = None compute_dict['ram_allocation_ratio'] = None compute_dict['disk_allocation_ratio'] = None cls = objects.ComputeNode compute = cls._from_db_object(self.context, cls(), compute_dict) self.assertEqual( CONF.initial_cpu_allocation_ratio, compute.cpu_allocation_ratio) self.assertEqual( CONF.initial_ram_allocation_ratio, compute.ram_allocation_ratio) self.assertEqual( CONF.initial_disk_allocation_ratio, compute.disk_allocation_ratio) mock_update.assert_called_once_with( self.context, 123, {'cpu_allocation_ratio': 16.0, 'ram_allocation_ratio': 1.5, 'disk_allocation_ratio': 1.0}) @mock.patch('nova.db.api.compute_node_update') def test_compat_allocation_ratios_None_conf_zero_values(self, mock_update): """Tests the scenario that the CONF.*_allocation_ratio overrides are left to the default (None) and the compute node record allocation ratio values in the DB are 0.0, so they will be migrated to the CONF.initial_*_allocation_ratio values. """ # the CONF.x_allocation_ratio is None by default compute_dict = fake_compute_node.copy() # the computes provide allocation ratios 0.0 compute_dict['cpu_allocation_ratio'] = 0.0 compute_dict['ram_allocation_ratio'] = 0.0 compute_dict['disk_allocation_ratio'] = 0.0 cls = objects.ComputeNode compute = cls._from_db_object(self.context, cls(), compute_dict) self.assertEqual( CONF.initial_cpu_allocation_ratio, compute.cpu_allocation_ratio) self.assertEqual( CONF.initial_ram_allocation_ratio, compute.ram_allocation_ratio) self.assertEqual( CONF.initial_disk_allocation_ratio, compute.disk_allocation_ratio) mock_update.assert_called_once_with( self.context, 123, {'cpu_allocation_ratio': 16.0, 'ram_allocation_ratio': 1.5, 'disk_allocation_ratio': 1.0}) @mock.patch('nova.db.api.compute_node_update') def test_compat_allocation_ratios_None_conf_None_values(self, mock_update): """Tests the scenario that the override CONF.*_allocation_ratio options are the default values (None), the compute node record from the DB has None values for allocation ratios, so the resulting migrated record will have the CONF.initial_*_allocation_ratio values. 
""" # the CONF.x_allocation_ratio is None by default compute_dict = fake_compute_node.copy() # # the computes provide allocation ratios None compute_dict['cpu_allocation_ratio'] = None compute_dict['ram_allocation_ratio'] = None compute_dict['disk_allocation_ratio'] = None cls = objects.ComputeNode compute = cls._from_db_object(self.context, cls(), compute_dict) self.assertEqual( CONF.initial_cpu_allocation_ratio, compute.cpu_allocation_ratio) self.assertEqual( CONF.initial_ram_allocation_ratio, compute.ram_allocation_ratio) self.assertEqual( CONF.initial_disk_allocation_ratio, compute.disk_allocation_ratio) mock_update.assert_called_once_with( self.context, 123, {'cpu_allocation_ratio': 16.0, 'ram_allocation_ratio': 1.5, 'disk_allocation_ratio': 1.0}) def test_get_all_by_not_mapped(self): for mapped in (1, 0, 1, 3): compute = fake_compute_with_resources.obj_clone() compute._context = self.context compute.mapped = mapped compute.create() nodes = compute_node.ComputeNodeList.get_all_by_not_mapped( self.context, 2) self.assertEqual(3, len(nodes)) self.assertEqual([0, 1, 1], sorted([x.mapped for x in nodes])) class TestComputeNodeObject(test_objects._LocalTest, _TestComputeNodeObject): pass class TestRemoteComputeNodeObject(test_objects._RemoteTest, _TestComputeNodeObject): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_console_auth_token.py0000664000175000017500000001664600000000000024614 0ustar00zuulzuul00000000000000# Copyright 2016 Intel Corp. # Copyright 2016 Hewlett Packard Enterprise Development Company LP # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mock from oslo_db.exception import DBDuplicateEntry from oslo_utils.fixture import uuidsentinel from oslo_utils import timeutils import six.moves.urllib.parse as urlparse from nova import exception from nova.objects import console_auth_token as token_obj from nova.tests.unit import fake_console_auth_token as fakes from nova.tests.unit.objects import test_objects class _TestConsoleAuthToken(object): @mock.patch('nova.db.api.console_auth_token_create') def _test_authorize(self, console_type, mock_create): # the expires time is calculated from the current time and # a ttl value in the object. 
Fix the current time so we can # test expires is calculated correctly as expected self.addCleanup(timeutils.clear_time_override) timeutils.set_time_override() ttl = 10 expires = timeutils.utcnow_ts() + ttl db_dict = copy.deepcopy(fakes.fake_token_dict) db_dict['expires'] = expires db_dict['console_type'] = console_type mock_create.return_value = db_dict create_dict = copy.deepcopy(fakes.fake_token_dict) create_dict['expires'] = expires create_dict['console_type'] = console_type del create_dict['id'] del create_dict['created_at'] del create_dict['updated_at'] expected = copy.deepcopy(fakes.fake_token_dict) del expected['token_hash'] del expected['expires'] expected['token'] = fakes.fake_token expected['console_type'] = console_type obj = token_obj.ConsoleAuthToken( context=self.context, console_type=console_type, host=fakes.fake_token_dict['host'], port=fakes.fake_token_dict['port'], internal_access_path=fakes.fake_token_dict['internal_access_path'], instance_uuid=fakes.fake_token_dict['instance_uuid'], access_url_base=fakes.fake_token_dict['access_url_base'], ) with mock.patch('uuid.uuid4', return_value=fakes.fake_token): token = obj.authorize(ttl) mock_create.assert_called_once_with(self.context, create_dict) self.assertEqual(token, fakes.fake_token) self.compare_obj(obj, expected) url = obj.access_url if console_type != 'novnc': expected_url = '%s?token=%s' % ( fakes.fake_token_dict['access_url_base'], fakes.fake_token) else: path = urlparse.urlencode({'path': '?token=%s' % fakes.fake_token}) expected_url = '%s?%s' % ( fakes.fake_token_dict['access_url_base'], path) self.assertEqual(expected_url, url) def test_authorize(self): self._test_authorize(fakes.fake_token_dict['console_type']) def test_authorize_novnc(self): self._test_authorize('novnc') @mock.patch('nova.db.api.console_auth_token_create') def test_authorize_duplicate_token(self, mock_create): mock_create.side_effect = DBDuplicateEntry() obj = token_obj.ConsoleAuthToken( context=self.context, console_type=fakes.fake_token_dict['console_type'], host=fakes.fake_token_dict['host'], port=fakes.fake_token_dict['port'], internal_access_path=fakes.fake_token_dict['internal_access_path'], instance_uuid=fakes.fake_token_dict['instance_uuid'], access_url_base=fakes.fake_token_dict['access_url_base'], ) self.assertRaises(exception.TokenInUse, obj.authorize, 100) @mock.patch('nova.db.api.console_auth_token_create') def test_authorize_instance_not_found(self, mock_create): mock_create.side_effect = exception.InstanceNotFound( instance_id=fakes.fake_token_dict['instance_uuid']) obj = token_obj.ConsoleAuthToken( context=self.context, console_type=fakes.fake_token_dict['console_type'], host=fakes.fake_token_dict['host'], port=fakes.fake_token_dict['port'], internal_access_path=fakes.fake_token_dict['internal_access_path'], instance_uuid=fakes.fake_token_dict['instance_uuid'], access_url_base=fakes.fake_token_dict['access_url_base'], ) self.assertRaises(exception.InstanceNotFound, obj.authorize, 100) @mock.patch('nova.db.api.console_auth_token_create') def test_authorize_object_already_created(self, mock_create): # the expires time is calculated from the current time and # a ttl value in the object. 
Fix the current time so we can # test expires is calculated correctly as expected self.addCleanup(timeutils.clear_time_override) timeutils.set_time_override() ttl = 10 expires = timeutils.utcnow_ts() + ttl db_dict = copy.deepcopy(fakes.fake_token_dict) db_dict['expires'] = expires mock_create.return_value = db_dict obj = token_obj.ConsoleAuthToken( context=self.context, console_type=fakes.fake_token_dict['console_type'], host=fakes.fake_token_dict['host'], port=fakes.fake_token_dict['port'], internal_access_path=fakes.fake_token_dict['internal_access_path'], instance_uuid=fakes.fake_token_dict['instance_uuid'], access_url_base=fakes.fake_token_dict['access_url_base'], ) obj.authorize(100) self.assertRaises(exception.ObjectActionError, obj.authorize, 100) @mock.patch('nova.db.api.console_auth_token_destroy_all_by_instance') def test_clean_console_auths_for_instance(self, mock_destroy): token_obj.ConsoleAuthToken.clean_console_auths_for_instance( self.context, uuidsentinel.instance) mock_destroy.assert_called_once_with( self.context, uuidsentinel.instance) @mock.patch('nova.db.api.console_auth_token_destroy_expired') def test_clean_expired_console_auths(self, mock_destroy): token_obj.ConsoleAuthToken.clean_expired_console_auths(self.context) mock_destroy.assert_called_once_with(self.context) @mock.patch('nova.db.api.console_auth_token_destroy_expired_by_host') def test_clean_expired_console_auths_for_host(self, mock_destroy): token_obj.ConsoleAuthToken.clean_expired_console_auths_for_host( self.context, 'fake-host') mock_destroy.assert_called_once_with( self.context, 'fake-host') class TestConsoleAuthToken(test_objects._LocalTest, _TestConsoleAuthToken): pass class TestRemoteConsoleAuthToken(test_objects._RemoteTest, _TestConsoleAuthToken): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/objects/test_diagnostics.py0000664000175000017500000001473600000000000023236 0ustar00zuulzuul00000000000000# Copyright (c) 2014 VMware, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.objects import diagnostics from nova import test class DiagnosticsComparisonMixin(object): def assertDiagnosticsEqual(self, expected, actual): expected.obj_reset_changes(recursive=True) actual.obj_reset_changes(recursive=True) # NOTE(snikitin): Fields 'cpu_details', 'disk_details' and # 'nic_details' are list objects. They wouldn't be marked as 'changed' # if new item will be added by 'append()' method # (like # Diagnostics.add_*** methods do). So we have to reset these # objects manually. 
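# Reset each list entry explicitly so stale 'changed' markers do not
# leak into the obj_to_primitive() comparison below.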
for obj in [expected, actual]: for field in ['cpu_details', 'disk_details', 'nic_details']: for item in getattr(obj, field): item.obj_reset_changes() self.assertEqual(expected.obj_to_primitive(), actual.obj_to_primitive()) class DiagnosticsTests(test.NoDBTestCase): def test_cpu_diagnostics(self): cpu = diagnostics.CpuDiagnostics(id=1, time=7, utilisation=15) self.assertEqual(1, cpu.id) self.assertEqual(7, cpu.time) self.assertEqual(15, cpu.utilisation) def test_nic_diagnostics(self): nic = diagnostics.NicDiagnostics( mac_address='00:00:ca:fe:00:00', rx_octets=1, rx_errors=2, rx_drop=3, rx_packets=4, rx_rate=5, tx_octets=6, tx_errors=7, tx_drop=8, tx_packets=9, tx_rate=10) self.assertEqual('00:00:ca:fe:00:00', nic.mac_address) self.assertEqual(1, nic.rx_octets) self.assertEqual(2, nic.rx_errors) self.assertEqual(3, nic.rx_drop) self.assertEqual(4, nic.rx_packets) self.assertEqual(5, nic.rx_rate) self.assertEqual(6, nic.tx_octets) self.assertEqual(7, nic.tx_errors) self.assertEqual(8, nic.tx_drop) self.assertEqual(9, nic.tx_packets) self.assertEqual(10, nic.tx_rate) def test_disk_diagnostics(self): disk = diagnostics.DiskDiagnostics( read_bytes=1, read_requests=2, write_bytes=3, write_requests=4, errors_count=5) self.assertEqual(1, disk.read_bytes) self.assertEqual(2, disk.read_requests) self.assertEqual(3, disk.write_bytes) self.assertEqual(4, disk.write_requests) self.assertEqual(5, disk.errors_count) def test_memory_diagnostics(self): memory = diagnostics.MemoryDiagnostics(maximum=1, used=2) self.assertEqual(1, memory.maximum) self.assertEqual(2, memory.used) def test_diagnostics(self): cpu_details = [diagnostics.CpuDiagnostics()] nic_details = [diagnostics.NicDiagnostics()] disk_details = [diagnostics.DiskDiagnostics()] memory_details = diagnostics.MemoryDiagnostics(maximum=1, used=1) diags = diagnostics.Diagnostics( state='running', driver='libvirt', hypervisor_os='fake-os', uptime=1, cpu_details=cpu_details, nic_details=nic_details, disk_details=disk_details, config_drive=True, memory_details=memory_details) self.assertEqual('running', diags.state) self.assertEqual('libvirt', diags.driver) self.assertEqual('fake-os', diags.hypervisor_os) self.assertEqual(1, diags.uptime) self.assertTrue(diags.config_drive) self.assertEqual(1, len(diags.cpu_details)) self.assertEqual(1, len(diags.nic_details)) self.assertEqual(1, len(diags.disk_details)) self.assertEqual(1, diags.num_cpus) self.assertEqual(1, diags.num_disks) self.assertEqual(1, diags.num_nics) self.assertEqual(1, diags.memory_details.maximum) self.assertEqual(1, diags.memory_details.used) self.assertEqual('1.0', diags.VERSION) def test_add_cpu(self): diags = diagnostics.Diagnostics() self.assertEqual([], diags.cpu_details) diags.add_cpu(id=1, time=7, utilisation=15) self.assertEqual(1, len(diags.cpu_details)) self.assertEqual(7, diags.cpu_details[0].time) self.assertEqual(1, diags.cpu_details[0].id) self.assertEqual(15, diags.cpu_details[0].utilisation) self.assertEqual(1, diags.num_cpus) def test_add_nic(self): diags = diagnostics.Diagnostics() self.assertEqual([], diags.nic_details) diags.add_nic(mac_address='00:00:ca:fe:00:00', rx_octets=1, rx_errors=2, rx_drop=3, rx_packets=4, rx_rate=5, tx_octets=6, tx_errors=7, tx_drop=8, tx_packets=9, tx_rate=10) self.assertEqual(1, len(diags.nic_details)) self.assertEqual('00:00:ca:fe:00:00', diags.nic_details[0].mac_address) self.assertEqual(1, diags.nic_details[0].rx_octets) self.assertEqual(2, diags.nic_details[0].rx_errors) self.assertEqual(3, diags.nic_details[0].rx_drop) 
self.assertEqual(4, diags.nic_details[0].rx_packets) self.assertEqual(5, diags.nic_details[0].rx_rate) self.assertEqual(6, diags.nic_details[0].tx_octets) self.assertEqual(7, diags.nic_details[0].tx_errors) self.assertEqual(8, diags.nic_details[0].tx_drop) self.assertEqual(9, diags.nic_details[0].tx_packets) self.assertEqual(10, diags.nic_details[0].tx_rate) self.assertEqual(1, diags.num_nics) def test_add_disk(self): diags = diagnostics.Diagnostics() self.assertEqual([], diags.disk_details) diags.add_disk(read_bytes=1, read_requests=2, write_bytes=3, write_requests=4, errors_count=5) self.assertEqual(1, len(diags.disk_details)) self.assertEqual(1, diags.disk_details[0].read_bytes) self.assertEqual(2, diags.disk_details[0].read_requests) self.assertEqual(3, diags.disk_details[0].write_bytes) self.assertEqual(4, diags.disk_details[0].write_requests) self.assertEqual(5, diags.disk_details[0].errors_count) self.assertEqual(1, diags.num_disks) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_ec2.py0000664000175000017500000001714100000000000021371 0ustar00zuulzuul00000000000000# Copyright (C) 2014, Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils.fixture import uuidsentinel as uuids from nova.db import api as db from nova import objects from nova.objects import ec2 as ec2_obj from nova.tests.unit.objects import test_objects fake_map = { 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': 0, 'id': 1, 'uuid': uuids.ec2_map_uuid, } class _TestEC2InstanceMapping(object): @staticmethod def _compare(test, db, obj): for field, value in db.items(): test.assertEqual(db[field], getattr(obj, field)) def test_create(self): imap = ec2_obj.EC2InstanceMapping(context=self.context) imap.uuid = uuids.ec2_map_uuid with mock.patch.object(db, 'ec2_instance_create') as create: create.return_value = fake_map imap.create() self.assertEqual(self.context, imap._context) imap._context = None self._compare(self, fake_map, imap) def test_get_by_uuid(self): with mock.patch.object(db, 'ec2_instance_get_by_uuid') as get: get.return_value = fake_map imap = ec2_obj.EC2InstanceMapping.get_by_uuid(self.context, uuids.ec2_map_uuid) self._compare(self, fake_map, imap) def test_get_by_ec2_id(self): with mock.patch.object(db, 'ec2_instance_get_by_id') as get: get.return_value = fake_map imap = ec2_obj.EC2InstanceMapping.get_by_id(self.context, 1) self._compare(self, fake_map, imap) class TestEC2InstanceMapping(test_objects._LocalTest, _TestEC2InstanceMapping): pass class TestRemoteEC2InstanceMapping(test_objects._RemoteTest, _TestEC2InstanceMapping): pass class _TestS3ImageMapping(object): @staticmethod def _compare(test, db, obj): for field, value in db.items(): test.assertEqual(db[field], getattr(obj, field)) def test_create(self): s3imap = ec2_obj.S3ImageMapping(context=self.context) s3imap.uuid = uuids.ec2_map_uuid with mock.patch.object(db, 's3_image_create') as create: 
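# The DB create call is stubbed out so create() can be exercised
# without touching a real database.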
create.return_value = fake_map s3imap.create() self.assertEqual(self.context, s3imap._context) s3imap._context = None self._compare(self, fake_map, s3imap) def test_get_by_uuid(self): with mock.patch.object(db, 's3_image_get_by_uuid') as get: get.return_value = fake_map s3imap = ec2_obj.S3ImageMapping.get_by_uuid(self.context, uuids.ec2_map_uuid) self._compare(self, fake_map, s3imap) def test_get_by_s3_id(self): with mock.patch.object(db, 's3_image_get') as get: get.return_value = fake_map s3imap = ec2_obj.S3ImageMapping.get_by_id(self.context, 1) self._compare(self, fake_map, s3imap) class TestS3ImageMapping(test_objects._LocalTest, _TestS3ImageMapping): pass class TestRemoteS3ImageMapping(test_objects._RemoteTest, _TestS3ImageMapping): pass class _TestEC2Ids(object): @mock.patch('nova.objects.ec2.glance_type_to_ec2_type') @mock.patch('nova.objects.ec2.glance_id_to_ec2_id') @mock.patch('nova.objects.ec2.id_to_ec2_inst_id') def test_get_by_instance(self, mock_inst, mock_glance, mock_type): mock_inst.return_value = 'fake-ec2-inst-id' mock_glance.side_effect = ['fake-ec2-ami-id', 'fake-ec2-kernel-id', 'fake-ec2-ramdisk-id'] mock_type.side_effect = [mock.sentinel.ec2_kernel_type, mock.sentinel.ec2_ramdisk_type] inst = objects.Instance(uuid=uuids.instance, image_ref='fake-image-id', kernel_id='fake-kernel-id', ramdisk_id='fake-ramdisk-id') result = ec2_obj.EC2Ids.get_by_instance(self.context, inst) self.assertEqual('fake-ec2-inst-id', result.instance_id) self.assertEqual('fake-ec2-ami-id', result.ami_id) self.assertEqual('fake-ec2-kernel-id', result.kernel_id) self.assertEqual('fake-ec2-ramdisk-id', result.ramdisk_id) @mock.patch('nova.objects.ec2.glance_id_to_ec2_id') @mock.patch('nova.objects.ec2.id_to_ec2_inst_id') def test_get_by_instance_no_image_ref(self, mock_inst, mock_glance): mock_inst.return_value = 'fake-ec2-inst-id' mock_glance.return_value = None inst = objects.Instance(uuid=uuids.instance, image_ref=None, kernel_id=None, ramdisk_id=None) result = ec2_obj.EC2Ids.get_by_instance(self.context, inst) self.assertEqual('fake-ec2-inst-id', result.instance_id) self.assertIsNone(result.ami_id) self.assertIsNone(result.kernel_id) self.assertIsNone(result.ramdisk_id) @mock.patch('nova.objects.ec2.glance_type_to_ec2_type') @mock.patch('nova.objects.ec2.glance_id_to_ec2_id') @mock.patch('nova.objects.ec2.id_to_ec2_inst_id') def test_get_by_instance_no_kernel_id(self, mock_inst, mock_glance, mock_type): mock_inst.return_value = 'fake-ec2-inst-id' mock_glance.side_effect = ['fake-ec2-ami-id', 'fake-ec2-ramdisk-id'] mock_type.return_value = mock.sentinel.ec2_ramdisk_type inst = objects.Instance(uuid=uuids.instance, image_ref='fake-image-id', kernel_id=None, ramdisk_id='fake-ramdisk-id') result = ec2_obj.EC2Ids.get_by_instance(self.context, inst) self.assertEqual('fake-ec2-inst-id', result.instance_id) self.assertEqual('fake-ec2-ami-id', result.ami_id) self.assertIsNone(result.kernel_id) self.assertEqual('fake-ec2-ramdisk-id', result.ramdisk_id) @mock.patch('nova.objects.ec2.glance_type_to_ec2_type') @mock.patch('nova.objects.ec2.glance_id_to_ec2_id') @mock.patch('nova.objects.ec2.id_to_ec2_inst_id') def test_get_by_instance_no_ramdisk_id(self, mock_inst, mock_glance, mock_type): mock_inst.return_value = 'fake-ec2-inst-id' mock_glance.side_effect = ['fake-ec2-ami-id', 'fake-ec2-kernel-id'] mock_type.return_value = mock.sentinel.ec2_kernel_type inst = objects.Instance(uuid=uuids.instance, image_ref='fake-image-id', kernel_id='fake-kernel-id', ramdisk_id=None) result = 
ec2_obj.EC2Ids.get_by_instance(self.context, inst) self.assertEqual('fake-ec2-inst-id', result.instance_id) self.assertEqual('fake-ec2-ami-id', result.ami_id) self.assertEqual('fake-ec2-kernel-id', result.kernel_id) self.assertIsNone(result.ramdisk_id) class TestEC2Ids(test_objects._LocalTest, _TestEC2Ids): pass class TestRemoteEC2Ids(test_objects._RemoteTest, _TestEC2Ids): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/objects/test_external_event.py0000664000175000017500000000404200000000000023737 0ustar00zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.objects import external_event as external_event_obj from nova.tests.unit.objects import test_objects class _TestInstanceExternalEventObject(object): def test_make_key(self): key = external_event_obj.InstanceExternalEvent.make_key('foo', 'bar') self.assertEqual('foo-bar', key) def test_make_key_no_tag(self): key = external_event_obj.InstanceExternalEvent.make_key('foo') self.assertEqual('foo', key) def test_key(self): event = external_event_obj.InstanceExternalEvent( name='network-changed', tag='bar') with mock.patch.object(event, 'make_key') as make_key: make_key.return_value = 'key' self.assertEqual('key', event.key) make_key.assert_called_once_with('network-changed', 'bar') def test_event_names(self): for event in external_event_obj.EVENT_NAMES: external_event_obj.InstanceExternalEvent(name=event, tag='bar') self.assertRaises(ValueError, external_event_obj.InstanceExternalEvent, name='foo', tag='bar') class TestInstanceExternalEventObject(test_objects._LocalTest, _TestInstanceExternalEventObject): pass class TestRemoteInstanceExternalEventObject(test_objects._RemoteTest, _TestInstanceExternalEventObject): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_fields.py0000664000175000017500000006444700000000000022201 0ustar00zuulzuul00000000000000# Copyright 2013 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
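# Tests for the custom versioned-object field types defined in
# nova.objects.fields (enums, addresses, network models and so on).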
import datetime import os import iso8601 import mock from oslo_serialization import jsonutils from oslo_versionedobjects import exception as ovo_exc import six from nova import exception from nova.network import model as network_model from nova.objects import fields from nova import test from nova.tests.unit import fake_instance from nova import utils class FakeFieldType(fields.FieldType): def coerce(self, obj, attr, value): return '*%s*' % value def to_primitive(self, obj, attr, value): return '!%s!' % value def from_primitive(self, obj, attr, value): return value[1:-1] class FakeEnum(fields.Enum): FROG = "frog" PLATYPUS = "platypus" ALLIGATOR = "alligator" ALL = (FROG, PLATYPUS, ALLIGATOR) def __init__(self, **kwargs): super(FakeEnum, self).__init__(valid_values=FakeEnum.ALL, **kwargs) class FakeEnumAlt(fields.Enum): FROG = "frog" PLATYPUS = "platypus" AARDVARK = "aardvark" ALL = (FROG, PLATYPUS, AARDVARK) def __init__(self, **kwargs): super(FakeEnumAlt, self).__init__(valid_values=FakeEnumAlt.ALL, **kwargs) class FakeAddress(fields.AddressBase): PATTERN = '[a-z]+[0-9]+' class FakeAddressField(fields.AutoTypedField): AUTO_TYPE = FakeAddress() class FakeEnumField(fields.BaseEnumField): AUTO_TYPE = FakeEnum() class FakeEnumAltField(fields.BaseEnumField): AUTO_TYPE = FakeEnumAlt() class TestField(test.NoDBTestCase): def setUp(self): super(TestField, self).setUp() self.field = fields.Field(FakeFieldType()) self.coerce_good_values = [('foo', '*foo*')] self.coerce_bad_values = [] self.to_primitive_values = [('foo', '!foo!')] self.from_primitive_values = [('!foo!', 'foo')] def test_coerce_good_values(self): for in_val, out_val in self.coerce_good_values: self.assertEqual(out_val, self.field.coerce('obj', 'attr', in_val)) def test_coerce_bad_values(self): for in_val in self.coerce_bad_values: self.assertRaises((TypeError, ValueError), self.field.coerce, 'obj', 'attr', in_val) def test_to_primitive(self): for in_val, prim_val in self.to_primitive_values: self.assertEqual(prim_val, self.field.to_primitive('obj', 'attr', in_val)) def test_from_primitive(self): class ObjectLikeThing(object): _context = 'context' for prim_val, out_val in self.from_primitive_values: self.assertEqual(out_val, self.field.from_primitive( ObjectLikeThing, 'attr', prim_val)) def test_stringify(self): self.assertEqual('123', self.field.stringify(123)) class TestString(TestField): def setUp(self): super(TestString, self).setUp() self.field = fields.StringField() self.coerce_good_values = [('foo', 'foo'), (1, '1'), (True, 'True')] if six.PY2: self.coerce_good_values.append((int(1), '1')) self.coerce_bad_values = [None] self.to_primitive_values = self.coerce_good_values[0:1] self.from_primitive_values = self.coerce_good_values[0:1] def test_stringify(self): self.assertEqual("'123'", self.field.stringify(123)) class TestBaseEnum(TestField): def setUp(self): super(TestBaseEnum, self).setUp() self.field = FakeEnumField() self.coerce_good_values = [('frog', 'frog'), ('platypus', 'platypus'), ('alligator', 'alligator')] self.coerce_bad_values = ['aardvark', 'wookie'] self.to_primitive_values = self.coerce_good_values[0:1] self.from_primitive_values = self.coerce_good_values[0:1] def test_stringify(self): self.assertEqual("'platypus'", self.field.stringify('platypus')) def test_stringify_invalid(self): self.assertRaises(ValueError, self.field.stringify, 'aardvark') def test_fingerprint(self): # Notes(yjiang5): make sure changing valid_value will be detected # in test_objects.test_versions field1 = FakeEnumField() field2 = 
FakeEnumAltField() self.assertNotEqual(str(field1), str(field2)) class TestEnum(TestField): def setUp(self): super(TestEnum, self).setUp() self.field = fields.EnumField( valid_values=['foo', 'bar', 1, 1, True]) self.coerce_good_values = [('foo', 'foo'), (1, '1'), (True, 'True')] if six.PY2: self.coerce_good_values.append((int(1), '1')) self.coerce_bad_values = ['boo', 2, False] self.to_primitive_values = self.coerce_good_values[0:1] self.from_primitive_values = self.coerce_good_values[0:1] def test_stringify(self): self.assertEqual("'foo'", self.field.stringify('foo')) def test_stringify_invalid(self): self.assertRaises(ValueError, self.field.stringify, '123') def test_fingerprint(self): # Notes(yjiang5): make sure changing valid_value will be detected # in test_objects.test_versions field1 = fields.EnumField(valid_values=['foo', 'bar']) field2 = fields.EnumField(valid_values=['foo', 'bar1']) self.assertNotEqual(str(field1), str(field2)) def test_without_valid_values(self): self.assertRaises(ovo_exc.EnumValidValuesInvalidError, fields.EnumField, 1) def test_with_empty_values(self): self.assertRaises(ovo_exc.EnumRequiresValidValuesError, fields.EnumField, []) class TestArchitecture(TestField): @mock.patch.object(os, 'uname') def test_host(self, mock_uname): mock_uname.return_value = ( 'Linux', 'localhost.localdomain', '3.14.8-200.fc20.x86_64', '#1 SMP Mon Jun 16 21:57:53 UTC 2014', 'i686' ) self.assertEqual(fields.Architecture.I686, fields.Architecture.from_host()) def test_valid_string(self): self.assertTrue(fields.Architecture.is_valid('x86_64')) def test_valid_constant(self): self.assertTrue(fields.Architecture.is_valid( fields.Architecture.X86_64)) def test_valid_bogus(self): self.assertFalse(fields.Architecture.is_valid('x86_64wibble')) def test_canonicalize_i386(self): self.assertEqual(fields.Architecture.I686, fields.Architecture.canonicalize('i386')) def test_canonicalize_amd64(self): self.assertEqual(fields.Architecture.X86_64, fields.Architecture.canonicalize('amd64')) def test_canonicalize_case(self): self.assertEqual(fields.Architecture.X86_64, fields.Architecture.canonicalize('X86_64')) def test_canonicalize_compat_xen1(self): self.assertEqual(fields.Architecture.I686, fields.Architecture.canonicalize('x86_32')) def test_canonicalize_compat_xen2(self): self.assertEqual(fields.Architecture.I686, fields.Architecture.canonicalize('x86_32p')) def test_canonicalize_invalid(self): self.assertRaises(exception.InvalidArchitectureName, fields.Architecture.canonicalize, 'x86_64wibble') class TestHVType(TestField): def test_valid_string(self): self.assertTrue(fields.HVType.is_valid('vmware')) def test_valid_constant(self): self.assertTrue(fields.HVType.is_valid(fields.HVType.QEMU)) def test_valid_docker(self): self.assertTrue(fields.HVType.is_valid('docker')) def test_valid_lxd(self): self.assertTrue(fields.HVType.is_valid('lxd')) def test_valid_vz(self): self.assertTrue(fields.HVType.is_valid( fields.HVType.VIRTUOZZO)) def test_valid_bogus(self): self.assertFalse(fields.HVType.is_valid('acmehypervisor')) def test_canonicalize_none(self): self.assertIsNone(fields.HVType.canonicalize(None)) def test_canonicalize_case(self): self.assertEqual(fields.HVType.QEMU, fields.HVType.canonicalize('QeMu')) def test_canonicalize_xapi(self): self.assertEqual(fields.HVType.XEN, fields.HVType.canonicalize('xapi')) def test_canonicalize_invalid(self): self.assertRaises(exception.InvalidHypervisorVirtType, fields.HVType.canonicalize, 'wibble') class TestVMMode(TestField): def _fake_object(self, updates): 
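# Helper: build a minimal fake Instance carrying the vm_mode value
# under test.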
return fake_instance.fake_instance_obj(None, **updates) def test_case(self): inst = self._fake_object(dict(vm_mode='HVM')) mode = fields.VMMode.get_from_instance(inst) self.assertEqual(mode, 'hvm') def test_legacy_pv(self): inst = self._fake_object(dict(vm_mode='pv')) mode = fields.VMMode.get_from_instance(inst) self.assertEqual(mode, 'xen') def test_legacy_hv(self): inst = self._fake_object(dict(vm_mode='hv')) mode = fields.VMMode.get_from_instance(inst) self.assertEqual(mode, 'hvm') def test_bogus(self): inst = self._fake_object(dict(vm_mode='wibble')) self.assertRaises(exception.Invalid, fields.VMMode.get_from_instance, inst) def test_good(self): inst = self._fake_object(dict(vm_mode='hvm')) mode = fields.VMMode.get_from_instance(inst) self.assertEqual(mode, 'hvm') def test_canonicalize_pv_compat(self): mode = fields.VMMode.canonicalize('pv') self.assertEqual(fields.VMMode.XEN, mode) def test_canonicalize_hv_compat(self): mode = fields.VMMode.canonicalize('hv') self.assertEqual(fields.VMMode.HVM, mode) def test_canonicalize_baremetal_compat(self): mode = fields.VMMode.canonicalize('baremetal') self.assertEqual(fields.VMMode.HVM, mode) def test_canonicalize_hvm(self): mode = fields.VMMode.canonicalize('hvm') self.assertEqual(fields.VMMode.HVM, mode) def test_canonicalize_none(self): mode = fields.VMMode.canonicalize(None) self.assertIsNone(mode) def test_canonicalize_invalid(self): self.assertRaises(exception.InvalidVirtualMachineMode, fields.VMMode.canonicalize, 'invalid') class TestInteger(TestField): def setUp(self): super(TestInteger, self).setUp() self.field = fields.IntegerField() self.coerce_good_values = [(1, 1), ('1', 1)] self.coerce_bad_values = ['foo', None] self.to_primitive_values = self.coerce_good_values[0:1] self.from_primitive_values = self.coerce_good_values[0:1] class TestNonNegativeInteger(TestInteger): def setUp(self): super(TestNonNegativeInteger, self).setUp() self.field = fields.NonNegativeIntegerField() self.coerce_bad_values.extend(['-2', '4.2']) class TestFloat(TestField): def setUp(self): super(TestFloat, self).setUp() self.field = fields.FloatField() self.coerce_good_values = [(1.1, 1.1), ('1.1', 1.1)] self.coerce_bad_values = ['foo', None] self.to_primitive_values = self.coerce_good_values[0:1] self.from_primitive_values = self.coerce_good_values[0:1] class TestNonNegativeFloat(TestFloat): def setUp(self): super(TestNonNegativeFloat, self).setUp() self.field = fields.NonNegativeFloatField() self.coerce_bad_values.extend(['-4.2']) class TestBoolean(TestField): def setUp(self): super(TestBoolean, self).setUp() self.field = fields.BooleanField() self.coerce_good_values = [(True, True), (False, False), (1, True), ('foo', True), (0, False), ('', False)] self.coerce_bad_values = [] self.to_primitive_values = self.coerce_good_values[0:2] self.from_primitive_values = self.coerce_good_values[0:2] class TestDateTime(TestField): def setUp(self): super(TestDateTime, self).setUp() self.dt = datetime.datetime(1955, 11, 5, tzinfo=iso8601.UTC) self.field = fields.DateTimeField() self.coerce_good_values = [(self.dt, self.dt), (utils.isotime(self.dt), self.dt)] self.coerce_bad_values = [1, 'foo'] self.to_primitive_values = [(self.dt, utils.isotime(self.dt))] self.from_primitive_values = [(utils.isotime(self.dt), self.dt)] def test_stringify(self): self.assertEqual( '1955-11-05T18:00:00Z', self.field.stringify( datetime.datetime(1955, 11, 5, 18, 0, 0, tzinfo=iso8601.UTC))) class TestDict(TestField): def setUp(self): super(TestDict, self).setUp() self.field = 
fields.Field(fields.Dict(FakeFieldType())) self.coerce_good_values = [({'foo': 'bar'}, {'foo': '*bar*'}), ({'foo': 1}, {'foo': '*1*'})] self.coerce_bad_values = [{1: 'bar'}, 'foo'] self.to_primitive_values = [({'foo': 'bar'}, {'foo': '!bar!'})] self.from_primitive_values = [({'foo': '!bar!'}, {'foo': 'bar'})] def test_stringify(self): self.assertEqual("{key=val}", self.field.stringify({'key': 'val'})) class TestDictOfStrings(TestField): def setUp(self): super(TestDictOfStrings, self).setUp() self.field = fields.DictOfStringsField() self.coerce_good_values = [({'foo': 'bar'}, {'foo': 'bar'}), ({'foo': 1}, {'foo': '1'})] self.coerce_bad_values = [{1: 'bar'}, {'foo': None}, 'foo'] self.to_primitive_values = [({'foo': 'bar'}, {'foo': 'bar'})] self.from_primitive_values = [({'foo': 'bar'}, {'foo': 'bar'})] def test_stringify(self): self.assertEqual("{key='val'}", self.field.stringify({'key': 'val'})) class TestDictOfIntegers(TestField): def setUp(self): super(TestDictOfIntegers, self).setUp() self.field = fields.DictOfIntegersField() self.coerce_good_values = [({'foo': '42'}, {'foo': 42}), ({'foo': 4.2}, {'foo': 4})] self.coerce_bad_values = [{1: 'bar'}, {'foo': 'boo'}, 'foo', {'foo': None}] self.to_primitive_values = [({'foo': 42}, {'foo': 42})] self.from_primitive_values = [({'foo': 42}, {'foo': 42})] def test_stringify(self): self.assertEqual("{key=42}", self.field.stringify({'key': 42})) class TestDictOfStringsNone(TestField): def setUp(self): super(TestDictOfStringsNone, self).setUp() self.field = fields.DictOfNullableStringsField() self.coerce_good_values = [({'foo': 'bar'}, {'foo': 'bar'}), ({'foo': 1}, {'foo': '1'}), ({'foo': None}, {'foo': None})] self.coerce_bad_values = [{1: 'bar'}, 'foo'] self.to_primitive_values = [({'foo': 'bar'}, {'foo': 'bar'})] self.from_primitive_values = [({'foo': 'bar'}, {'foo': 'bar'})] def test_stringify(self): self.assertEqual("{k2=None,key='val'}", self.field.stringify({'k2': None, 'key': 'val'})) class TestListOfDictOfNullableStringsField(TestField): def setUp(self): super(TestListOfDictOfNullableStringsField, self).setUp() self.field = fields.ListOfDictOfNullableStringsField() self.coerce_good_values = [([{'f': 'b', 'f1': 'b1'}, {'f2': 'b2'}], [{'f': 'b', 'f1': 'b1'}, {'f2': 'b2'}]), ([{'f': 1}, {'f1': 'b1'}], [{'f': '1'}, {'f1': 'b1'}]), ([{'foo': None}], [{'foo': None}])] self.coerce_bad_values = [[{1: 'a'}], ['ham', 1], ['eggs']] self.to_primitive_values = [([{'f': 'b'}, {'f1': 'b1'}, {'f2': None}], [{'f': 'b'}, {'f1': 'b1'}, {'f2': None}])] self.from_primitive_values = [([{'f': 'b'}, {'f1': 'b1'}, {'f2': None}], [{'f': 'b'}, {'f1': 'b1'}, {'f2': None}])] def test_stringify(self): self.assertEqual("[{f=None,f1='b1'},{f2='b2'}]", self.field.stringify( [{'f': None, 'f1': 'b1'}, {'f2': 'b2'}])) class TestList(TestField): def setUp(self): super(TestList, self).setUp() self.field = fields.Field(fields.List(FakeFieldType())) self.coerce_good_values = [(['foo', 'bar'], ['*foo*', '*bar*'])] self.coerce_bad_values = ['foo'] self.to_primitive_values = [(['foo'], ['!foo!'])] self.from_primitive_values = [(['!foo!'], ['foo'])] def test_stringify(self): self.assertEqual('[123]', self.field.stringify([123])) class TestListOfStrings(TestField): def setUp(self): super(TestListOfStrings, self).setUp() self.field = fields.ListOfStringsField() self.coerce_good_values = [(['foo', 'bar'], ['foo', 'bar'])] self.coerce_bad_values = ['foo'] self.to_primitive_values = [(['foo'], ['foo'])] self.from_primitive_values = [(['foo'], ['foo'])] def test_stringify(self): 
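# List elements are rendered in their quoted (repr-style) form.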
self.assertEqual("['abc']", self.field.stringify(['abc'])) class TestSet(TestField): def setUp(self): super(TestSet, self).setUp() self.field = fields.Field(fields.Set(FakeFieldType())) self.coerce_good_values = [(set(['foo', 'bar']), set(['*foo*', '*bar*']))] self.coerce_bad_values = [['foo'], {'foo': 'bar'}] self.to_primitive_values = [(set(['foo']), tuple(['!foo!']))] self.from_primitive_values = [(tuple(['!foo!']), set(['foo']))] def test_stringify(self): self.assertEqual('set([123])', self.field.stringify(set([123]))) class TestSetOfIntegers(TestField): def setUp(self): super(TestSetOfIntegers, self).setUp() self.field = fields.SetOfIntegersField() self.coerce_good_values = [(set(['1', 2]), set([1, 2]))] self.coerce_bad_values = [set(['foo'])] self.to_primitive_values = [(set([1]), tuple([1]))] self.from_primitive_values = [(tuple([1]), set([1]))] def test_stringify(self): self.assertEqual('set([1,2])', self.field.stringify(set([1, 2]))) class TestListOfSetsOfIntegers(TestField): def setUp(self): super(TestListOfSetsOfIntegers, self).setUp() self.field = fields.ListOfSetsOfIntegersField() self.coerce_good_values = [([set(['1', 2]), set([3, '4'])], [set([1, 2]), set([3, 4])])] self.coerce_bad_values = [[set(['foo'])]] self.to_primitive_values = [([set([1])], [tuple([1])])] self.from_primitive_values = [([tuple([1])], [set([1])])] def test_stringify(self): self.assertEqual('[set([1,2])]', self.field.stringify([set([1, 2])])) class TestNetworkModel(TestField): def setUp(self): super(TestNetworkModel, self).setUp() model = network_model.NetworkInfo() self.field = fields.Field(fields.NetworkModel()) self.coerce_good_values = [(model, model), (model.json(), model)] self.coerce_bad_values = [[], 'foo'] self.to_primitive_values = [(model, model.json())] self.from_primitive_values = [(model.json(), model)] def test_stringify(self): networkinfo = network_model.NetworkInfo() networkinfo.append(network_model.VIF(id=123)) networkinfo.append(network_model.VIF(id=456)) self.assertEqual('NetworkModel(123,456)', self.field.stringify(networkinfo)) class TestNetworkVIFModel(TestField): def setUp(self): super(TestNetworkVIFModel, self).setUp() model = network_model.VIF('6c197bc7-820c-40d5-8aff-7116b993e793') primitive = jsonutils.dumps(model) self.field = fields.Field(fields.NetworkVIFModel()) self.coerce_good_values = [(model, model), (primitive, model)] self.coerce_bad_values = [[], 'foo'] self.to_primitive_values = [(model, primitive)] self.from_primitive_values = [(primitive, model)] class TestNotificationPriority(TestField): def setUp(self): super(TestNotificationPriority, self).setUp() self.field = fields.NotificationPriorityField() self.coerce_good_values = [('audit', 'audit'), ('critical', 'critical'), ('debug', 'debug'), ('error', 'error'), ('sample', 'sample'), ('warn', 'warn')] self.coerce_bad_values = ['warning'] self.to_primitive_values = self.coerce_good_values[0:1] self.from_primitive_values = self.coerce_good_values[0:1] def test_stringify(self): self.assertEqual("'warn'", self.field.stringify('warn')) def test_stringify_invalid(self): self.assertRaises(ValueError, self.field.stringify, 'warning') class TestNotificationPhase(TestField): def setUp(self): super(TestNotificationPhase, self).setUp() self.field = fields.NotificationPhaseField() self.coerce_good_values = [('start', 'start'), ('end', 'end'), ('error', 'error')] self.coerce_bad_values = ['begin'] self.to_primitive_values = self.coerce_good_values[0:1] self.from_primitive_values = self.coerce_good_values[0:1] def 
test_stringify(self): self.assertEqual("'error'", self.field.stringify('error')) def test_stringify_invalid(self): self.assertRaises(ValueError, self.field.stringify, 'begin') class TestNotificationAction(TestField): def setUp(self): super(TestNotificationAction, self).setUp() self.field = fields.NotificationActionField() self.coerce_good_values = [('update', 'update')] self.coerce_bad_values = ['magic'] self.to_primitive_values = self.coerce_good_values[0:1] self.from_primitive_values = self.coerce_good_values[0:1] def test_stringify(self): self.assertEqual("'update'", self.field.stringify('update')) def test_stringify_invalid(self): self.assertRaises(ValueError, self.field.stringify, 'magic') class TestUSBAddress(TestField): def setUp(self): super(TestUSBAddress, self).setUp() self.field = fields.Field(fields.USBAddressField()) self.coerce_good_values = [('0:0', '0:0')] self.coerce_bad_values = [ '00', '0:', '0.0', '-.0', ] self.to_primitive_values = self.coerce_good_values self.from_primitive_values = self.coerce_good_values class TestSCSIAddress(TestField): def setUp(self): super(TestSCSIAddress, self).setUp() self.field = fields.Field(fields.SCSIAddressField()) self.coerce_good_values = [('1:0:2:0', '1:0:2:0')] self.coerce_bad_values = [ '1:0:2', '-:0:2:0', '1:-:2:0', '1:0:-:0', '1:0:2:-', ] self.to_primitive_values = self.coerce_good_values self.from_primitive_values = self.coerce_good_values class TestIDEAddress(TestField): def setUp(self): super(TestIDEAddress, self).setUp() self.field = fields.Field(fields.IDEAddressField()) self.coerce_good_values = [('0:0', '0:0')] self.coerce_bad_values = [ '0:2', '00', '0', ] self.to_primitive_values = self.coerce_good_values self.from_primitive_values = self.coerce_good_values class TestXenAddress(TestField): def setUp(self): super(TestXenAddress, self).setUp() self.field = fields.Field(fields.XenAddressField()) self.coerce_good_values = [('000100', '000100'), ('768', '768')] self.coerce_bad_values = [ '1', '00100', ] self.to_primitive_values = self.coerce_good_values self.from_primitive_values = self.coerce_good_values class TestSecureBoot(TestField): def setUp(self): super(TestSecureBoot, self).setUp() self.field = fields.SecureBoot() self.coerce_good_values = [('required', 'required'), ('disabled', 'disabled'), ('optional', 'optional')] self.coerce_bad_values = ['enabled'] self.to_primitive_values = self.coerce_good_values[0:1] self.from_primitive_values = self.coerce_good_values[0:1] def test_stringify(self): self.assertEqual("'required'", self.field.stringify('required')) def test_stringify_invalid(self): self.assertRaises(ValueError, self.field.stringify, 'enabled') class TestSchemaGeneration(test.NoDBTestCase): def test_address_base_get_schema(self): field = FakeAddressField() expected = {'type': ['string'], 'pattern': '[a-z]+[0-9]+', 'readonly': False} self.assertEqual(expected, field.get_schema()) class TestNotificationSource(test.NoDBTestCase): def test_get_source_by_binary(self): self.assertEqual('nova-api', fields.NotificationSource.get_source_by_binary( 'nova-osapi_compute')) self.assertEqual('nova-metadata', fields.NotificationSource.get_source_by_binary( 'nova-metadata')) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_flavor.py0000664000175000017500000004707600000000000022223 0ustar00zuulzuul00000000000000# Copyright 2013 Red Hat, Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import mock from oslo_db import exception as db_exc from oslo_utils import uuidutils from nova import context as nova_context from nova.db.sqlalchemy import api as db_api from nova.db.sqlalchemy import api_models from nova import exception from nova import objects from nova.objects import flavor as flavor_obj from nova.tests.unit.objects import test_objects fake_flavor = { 'created_at': None, 'updated_at': None, 'id': 1, 'name': 'm1.foo', 'memory_mb': 1024, 'vcpus': 4, 'root_gb': 20, 'ephemeral_gb': 0, 'flavorid': 'm1.foo', 'swap': 0, 'rxtx_factor': 1.0, 'vcpu_weight': 1, 'disabled': False, 'is_public': True, 'extra_specs': {'foo': 'bar'}, 'description': None } # We mock out _get_projects_from_db globally since it raises FlavorNotFound if # the flavor is not found in the database, which is going to happen in a lot of # tests when the flavor isn't actually created. Individual tests can override # this mock as necessary. @mock.patch('nova.objects.Flavor._get_projects_from_db', lambda *a, **kw: []) class _TestFlavor(object): @staticmethod def _compare(test, db, obj): for field, value in db.items(): # NOTE(danms): The datetimes on SQLA models are tz-unaware, # but the object has tz-aware datetimes. If we're comparing # a model to an object (as opposed to a fake dict), just # ignore the datetimes in the comparison. 
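# Plain dict fixtures such as fake_flavor above still get an exact
# field-by-field comparison.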
if (isinstance(db, api_models.API_BASE) and isinstance(value, datetime.datetime)): continue test.assertEqual(db[field], obj[field]) @mock.patch('nova.objects.Flavor._flavor_get_from_db') def test_api_get_by_id_from_api(self, mock_get): mock_get.return_value = fake_flavor flavor = flavor_obj.Flavor.get_by_id(self.context, 1) self._compare(self, fake_flavor, flavor) mock_get.assert_called_once_with(self.context, 1) @mock.patch('nova.objects.Flavor._flavor_get_by_name_from_db') def test_get_by_name_from_api(self, mock_get): mock_get.return_value = fake_flavor flavor = flavor_obj.Flavor.get_by_name(self.context, 'm1.foo') self._compare(self, fake_flavor, flavor) mock_get.assert_called_once_with(self.context, 'm1.foo') @mock.patch('nova.objects.Flavor._flavor_get_by_flavor_id_from_db') def test_get_by_flavor_id_from_api(self, mock_get): mock_get.return_value = fake_flavor flavor = flavor_obj.Flavor.get_by_flavor_id(self.context, 'm1.foo') self._compare(self, fake_flavor, flavor) mock_get.assert_called_once_with(self.context, 'm1.foo') @staticmethod @db_api.api_context_manager.writer def _create_api_flavor(context, altid=None): fake_db_flavor = dict(fake_flavor) del fake_db_flavor['extra_specs'] del fake_db_flavor['id'] flavor = api_models.Flavors() flavor.update(fake_db_flavor) if altid: flavor.update({'flavorid': altid, 'name': altid}) flavor.save(context.session) fake_db_extra_spec = {'flavor_id': flavor['id'], 'key': 'foo', 'value': 'bar'} flavor_es = api_models.FlavorExtraSpecs() flavor_es.update(fake_db_extra_spec) flavor_es.save(context.session) return flavor def test_get_by_id_from_db(self): db_flavor = self._create_api_flavor(self.context) flavor = objects.Flavor.get_by_id(self.context, db_flavor['id']) self._compare(self, db_flavor, flavor) def test_get_by_name_from_db(self): db_flavor = self._create_api_flavor(self.context) flavor = objects.Flavor.get_by_name(self.context, db_flavor['name']) self._compare(self, db_flavor, flavor) def test_get_by_flavor_id_from_db(self): db_flavor = self._create_api_flavor(self.context) flavor = objects.Flavor.get_by_flavor_id(self.context, db_flavor['flavorid']) self._compare(self, db_flavor, flavor) @mock.patch('nova.objects.Flavor._send_notification') @mock.patch('nova.objects.Flavor._flavor_add_project') def test_add_access_api(self, mock_api_add, mock_notify): elevated = self.context.elevated() flavor = flavor_obj.Flavor(context=elevated, id=12345, flavorid='123') flavor.add_access('456') mock_api_add.assert_called_once_with(elevated, 12345, '456') def test_add_access_with_dirty_projects(self): flavor = flavor_obj.Flavor(context=self.context, projects=['1']) self.assertRaises(exception.ObjectActionError, flavor.add_access, '2') @mock.patch('nova.objects.Flavor._send_notification') @mock.patch('nova.objects.Flavor._flavor_del_project') def test_remove_access_api(self, mock_api_del, mock_notify): elevated = self.context.elevated() flavor = flavor_obj.Flavor(context=elevated, id=12345, flavorid='123') flavor.remove_access('456') mock_api_del.assert_called_once_with(elevated, 12345, '456') @mock.patch('nova.objects.Flavor._flavor_create') def test_create(self, mock_create): mock_create.return_value = fake_flavor flavor = flavor_obj.Flavor(context=self.context) flavor.name = 'm1.foo' flavor.extra_specs = fake_flavor['extra_specs'] flavor.create() self.assertEqual(self.context, flavor._context) # NOTE(danms): Orphan this to avoid lazy-loads flavor._context = None self._compare(self, fake_flavor, flavor) @mock.patch('nova.objects.Flavor._flavor_create') 
def test_create_with_projects(self, mock_create): context = self.context.elevated() flavor = flavor_obj.Flavor(context=context) flavor.name = 'm1.foo' flavor.extra_specs = fake_flavor['extra_specs'] flavor.projects = ['project-1', 'project-2'] db_flavor = dict(fake_flavor, projects=[{'project_id': pid} for pid in flavor.projects]) mock_create.return_value = db_flavor flavor.create() mock_create.assert_called_once_with( context, {'name': 'm1.foo', 'extra_specs': fake_flavor['extra_specs'], 'projects': ['project-1', 'project-2']}) self.assertEqual(context, flavor._context) # NOTE(danms): Orphan this to avoid lazy-loads flavor._context = None self._compare(self, fake_flavor, flavor) self.assertEqual(['project-1', 'project-2'], flavor.projects) def test_create_with_id(self): flavor = flavor_obj.Flavor(context=self.context, id=123) self.assertRaises(exception.ObjectActionError, flavor.create) @mock.patch('nova.db.sqlalchemy.api_models.Flavors') def test_create_duplicate(self, mock_flavors): mock_flavors.return_value.save.side_effect = db_exc.DBDuplicateEntry fields = dict(fake_flavor) del fields['id'] flavor = objects.Flavor(self.context, **fields) self.assertRaises(exception.FlavorExists, flavor.create) @mock.patch('nova.objects.Flavor._send_notification') @mock.patch('nova.objects.Flavor._flavor_add_project') @mock.patch('nova.objects.Flavor._flavor_del_project') @mock.patch('nova.objects.Flavor._flavor_extra_specs_del') @mock.patch('nova.objects.Flavor._flavor_extra_specs_add') def test_save_api(self, mock_update, mock_delete, mock_remove, mock_add, mock_notify): extra_specs = {'key1': 'value1', 'key2': 'value2'} projects = ['project-1', 'project-2'] flavor = flavor_obj.Flavor(context=self.context, flavorid='foo', id=123, extra_specs=extra_specs, projects=projects) flavor.obj_reset_changes() # Test deleting an extra_specs key and project del flavor.extra_specs['key1'] del flavor.projects[-1] self.assertEqual(set(['extra_specs', 'projects']), flavor.obj_what_changed()) flavor.save() self.assertEqual({'key2': 'value2'}, flavor.extra_specs) mock_delete.assert_called_once_with(self.context, 123, 'key1') self.assertEqual(['project-1'], flavor.projects) mock_remove.assert_called_once_with(self.context, 123, 'project-2') # Test updating an extra_specs key value flavor.extra_specs['key2'] = 'foobar' self.assertEqual(set(['extra_specs']), flavor.obj_what_changed()) flavor.save() self.assertEqual({'key2': 'foobar'}, flavor.extra_specs) mock_update.assert_called_with(self.context, 123, {'key2': 'foobar'}) # Test adding an extra_specs and project flavor.extra_specs['key3'] = 'value3' flavor.projects.append('project-3') self.assertEqual(set(['extra_specs', 'projects']), flavor.obj_what_changed()) flavor.save() self.assertEqual({'key2': 'foobar', 'key3': 'value3'}, flavor.extra_specs) mock_update.assert_called_with(self.context, 123, {'key2': 'foobar', 'key3': 'value3'}) self.assertEqual(['project-1', 'project-3'], flavor.projects) mock_add.assert_called_once_with(self.context, 123, 'project-3') @mock.patch('nova.objects.Flavor._flavor_create') @mock.patch('nova.objects.Flavor._flavor_extra_specs_del') @mock.patch('nova.objects.Flavor._flavor_extra_specs_add') def test_save_deleted_extra_specs(self, mock_add, mock_delete, mock_create): mock_create.return_value = dict(fake_flavor, extra_specs={'key1': 'value1'}) flavor = flavor_obj.Flavor(context=self.context) flavor.flavorid = 'test' flavor.extra_specs = {'key1': 'value1'} flavor.create() flavor.extra_specs = {} flavor.save() 
mock_delete.assert_called_once_with(self.context, flavor.id, 'key1') self.assertFalse(mock_add.called) def test_save_invalid_fields(self): flavor = flavor_obj.Flavor(id=123) self.assertRaises(exception.ObjectActionError, flavor.save) @mock.patch('nova.objects.Flavor._flavor_destroy') def test_destroy_api_by_id(self, mock_destroy): mock_destroy.return_value = dict(fake_flavor, id=123) flavor = flavor_obj.Flavor(context=self.context, id=123) flavor.destroy() mock_destroy.assert_called_once_with(self.context, flavor_id=flavor.id) @mock.patch('nova.objects.Flavor._flavor_destroy') def test_destroy_api_by_flavorid(self, mock_destroy): mock_destroy.return_value = dict(fake_flavor, flavorid='foo') flavor = flavor_obj.Flavor(context=self.context, flavorid='foo') flavor.destroy() mock_destroy.assert_called_once_with(self.context, flavorid=flavor.flavorid) def test_load_projects_from_api(self): mock_get_projects = mock.Mock(return_value=['a', 'b']) objects.Flavor._get_projects_from_db = mock_get_projects flavor = objects.Flavor(context=self.context, flavorid='m1.foo') self.assertEqual(['a', 'b'], flavor.projects) mock_get_projects.assert_called_once_with(self.context, 'm1.foo') self.assertNotIn('projects', flavor.obj_what_changed()) def test_from_db_loads_projects(self): fake = dict(fake_flavor, projects=[{'project_id': 'foo'}]) obj = objects.Flavor._from_db_object(self.context, objects.Flavor(), fake, expected_attrs=['projects']) self.assertIn('projects', obj) self.assertEqual(['foo'], obj.projects) def test_load_anything_else(self): flavor = flavor_obj.Flavor() self.assertRaises(exception.ObjectActionError, getattr, flavor, 'name') class TestFlavor(test_objects._LocalTest, _TestFlavor): # NOTE(danms): Run this test local-only because we would otherwise # have to do a bunch of change-resetting to handle the way we do # our change tracking for special attributes like projects. There is # nothing remotely-concerning (see what I did there?) so this is fine. def test_projects_in_db(self): db_flavor = self._create_api_flavor(self.context) flavor = objects.Flavor.get_by_id(self.context, db_flavor['id']) flavor.add_access('project1') flavor.add_access('project2') flavor.add_access('project3') flavor.remove_access('project2') flavor = flavor.get_by_id(self.context, db_flavor['id']) self.assertEqual(['project1', 'project3'], flavor.projects) self.assertRaises(exception.FlavorAccessExists, flavor.add_access, 'project1') self.assertRaises(exception.FlavorAccessNotFound, flavor.remove_access, 'project2') def test_extra_specs_in_db(self): db_flavor = self._create_api_flavor(self.context) flavor = objects.Flavor.get_by_id(self.context, db_flavor['id']) flavor.extra_specs['marty'] = 'mcfly' del flavor.extra_specs['foo'] flavor.save() flavor = objects.Flavor.get_by_id(self.context, db_flavor['id']) self.assertEqual({'marty': 'mcfly'}, flavor.extra_specs) # NOTE(mriedem): There is no remotable method for updating the description # in a flavor so we test this local-only. @mock.patch('nova.objects.Flavor._send_notification') def test_description(self, mock_notify): # Create a flavor with a description. ctxt = nova_context.get_admin_context() flavorid = uuidutils.generate_uuid() dict_flavor = dict(fake_flavor, name=flavorid, flavorid=flavorid) del dict_flavor['id'] flavor = flavor_obj.Flavor(ctxt, **dict_flavor) flavor.description = 'rainbows and unicorns' flavor.create() mock_notify.assert_called_once_with('create') # Lookup the flavor to make sure the description is set. 
flavor = flavor_obj.Flavor.get_by_flavor_id(ctxt, flavorid) self.assertEqual('rainbows and unicorns', flavor.description) # Now reset the flavor.description since it's nullable=True. mock_notify.reset_mock() self.assertEqual(0, len(flavor.obj_what_changed()), flavor.obj_what_changed()) flavor.description = None self.assertEqual(['description'], list(flavor.obj_what_changed()), flavor.obj_what_changed()) old_updated_at = flavor.updated_at flavor.save() # Make sure we reloaded the flavor from the database. self.assertNotEqual(old_updated_at, flavor.updated_at) mock_notify.assert_called_once_with('update') self.assertEqual(0, len(flavor.obj_what_changed()), flavor.obj_what_changed()) # Lookup the flavor to make sure the description is gone. flavor = flavor_obj.Flavor.get_by_flavor_id(ctxt, flavorid) self.assertIsNone(flavor.description) # Test compatibility. flavor.description = 'flavor descriptions are not backward compatible' data = lambda x: x['nova_object.data'] flavor_primitive = data(flavor.obj_to_primitive()) self.assertIn('description', flavor_primitive) flavor.obj_make_compatible(flavor_primitive, '1.1') self.assertIn('name', flavor_primitive) self.assertNotIn('description', flavor_primitive) class TestFlavorRemote(test_objects._RemoteTest, _TestFlavor): pass class _TestFlavorList(object): def test_get_all_from_db(self): # Get a list of the actual flavors in the API DB api_flavors = flavor_obj._flavor_get_all_from_db(self.context, False, None, 'flavorid', 'asc', None, None) flavors = objects.FlavorList.get_all(self.context) # Make sure we're getting all flavors from the api self.assertEqual(len(api_flavors), len(flavors)) def test_get_all_from_db_with_limit(self): flavors = objects.FlavorList.get_all(self.context, limit=1) self.assertEqual(1, len(flavors)) @mock.patch('nova.objects.flavor._flavor_get_all_from_db') def test_get_all(self, mock_api_get): _fake_flavor = dict(fake_flavor, id=2, name='m1.bar', flavorid='m1.bar') mock_api_get.return_value = [_fake_flavor] filters = {'min_memory_mb': 4096} flavors = flavor_obj.FlavorList.get_all(self.context, inactive=False, filters=filters, sort_key='id', sort_dir='asc') self.assertEqual(1, len(flavors)) _TestFlavor._compare(self, _fake_flavor, flavors[0]) mock_api_get.assert_called_once_with(self.context, inactive=False, filters=filters, sort_key='id', sort_dir='asc', limit=None, marker=None) @mock.patch('nova.objects.flavor._flavor_get_all_from_db') def test_get_all_limit_applied_to_api(self, mock_api_get): _fake_flavor = dict(fake_flavor, id=2, name='m1.bar', flavorid='m1.bar') mock_api_get.return_value = [_fake_flavor] filters = {'min_memory_mb': 4096} flavors = flavor_obj.FlavorList.get_all(self.context, inactive=False, filters=filters, limit=1, sort_key='id', sort_dir='asc') self.assertEqual(1, len(flavors)) _TestFlavor._compare(self, _fake_flavor, flavors[0]) mock_api_get.assert_called_once_with(self.context, inactive=False, filters=filters, sort_key='id', sort_dir='asc', limit=1, marker=None) def test_get_no_marker_in_api(self): self.assertRaises(exception.MarkerNotFound, flavor_obj.FlavorList.get_all, self.context, inactive=False, filters=None, limit=1, marker='foo', sort_key='id', sort_dir='asc') class TestFlavorList(test_objects._LocalTest, _TestFlavorList): pass class TestFlavorListRemote(test_objects._RemoteTest, _TestFlavorList): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 
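# ---------------------------------------------------------------------------
# Illustrative sketch (editor addition, not part of the nova-21.2.4 tree):
# the backlevel pattern that test_description above exercises through
# Flavor.obj_make_compatible(). A field introduced in a newer object version
# ('description', assumed here to have been added in version 1.2) is dropped
# from the serialized primitive when a peer asks for an older version such
# as '1.1'. Names below are hypothetical; only oslo_utils.versionutils is a
# real dependency already used throughout nova.
from oslo_utils import versionutils


def _drop_newer_fields_sketch(primitive, target_version):
    """Remove 'description' when backleveling below the assumed 1.2."""
    target = versionutils.convert_version_to_tuple(target_version)
    if target < (1, 2):
        primitive.pop('description', None)
    return primitive


# Usage: a 1.1 consumer never sees the newer field.
# _drop_newer_fields_sketch({'name': 'm1.foo', 'description': 'x'}, '1.1')
# returns {'name': 'm1.foo'}
# ---------------------------------------------------------------------------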
nova-21.2.4/nova/tests/unit/objects/test_host_mapping.py0000664000175000017500000003540300000000000023411 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_db import exception as db_exc from oslo_utils.fixture import uuidsentinel as uuids import six from nova import context from nova import exception from nova import objects from nova.objects import host_mapping from nova import test from nova.tests.unit.objects import test_cell_mapping from nova.tests.unit.objects import test_objects def get_db_mapping(mapped_cell=None, **updates): db_mapping = { 'id': 1, 'cell_id': None, 'host': 'fake-host', 'created_at': None, 'updated_at': None, } if mapped_cell: db_mapping["cell_mapping"] = mapped_cell else: db_mapping["cell_mapping"] = test_cell_mapping.get_db_mapping(id=42) db_mapping['cell_id'] = db_mapping["cell_mapping"]["id"] db_mapping.update(updates) return db_mapping class _TestHostMappingObject(object): def _check_cell_map_value(self, db_val, cell_obj): self.assertEqual(db_val, cell_obj.id) @mock.patch.object(host_mapping.HostMapping, '_get_by_host_from_db') def test_get_by_host(self, host_from_db): fake_cell = test_cell_mapping.get_db_mapping(id=1) db_mapping = get_db_mapping(mapped_cell=fake_cell) host_from_db.return_value = db_mapping mapping_obj = objects.HostMapping().get_by_host( self.context, db_mapping['host']) host_from_db.assert_called_once_with(self.context, db_mapping['host']) with mock.patch.object( host_mapping.HostMapping, '_get_cell_mapping') as mock_load: self.compare_obj(mapping_obj, db_mapping, subs={'cell_mapping': 'cell_id'}, comparators={ 'cell_mapping': self._check_cell_map_value}) # Check that lazy loading isn't happening self.assertFalse(mock_load.called) def test_from_db_object_no_cell_map(self): """Test when db object does not have cell_mapping""" fake_cell = test_cell_mapping.get_db_mapping(id=1) db_mapping = get_db_mapping(mapped_cell=fake_cell) # If db object has no cell_mapping, lazy loading should occur db_mapping.pop("cell_mapping") fake_cell_obj = objects.CellMapping(self.context, **fake_cell) mapping_obj = objects.HostMapping()._from_db_object( self.context, objects.HostMapping(), db_mapping) with mock.patch.object( host_mapping.HostMapping, '_get_cell_mapping') as mock_load: mock_load.return_value = fake_cell_obj self.compare_obj(mapping_obj, db_mapping, subs={'cell_mapping': 'cell_id'}, comparators={ 'cell_mapping': self._check_cell_map_value}) # Check that cell_mapping is lazy loaded mock_load.assert_called_once_with() @mock.patch.object(host_mapping.HostMapping, '_create_in_db') def test_create(self, create_in_db): fake_cell = test_cell_mapping.get_db_mapping(id=1) db_mapping = get_db_mapping(mapped_cell=fake_cell) db_mapping.pop("cell_mapping") host = db_mapping['host'] create_in_db.return_value = db_mapping fake_cell_obj = objects.CellMapping(self.context, **fake_cell) mapping_obj = objects.HostMapping(self.context) mapping_obj.host = host mapping_obj.cell_mapping = fake_cell_obj mapping_obj.create() 
create_in_db.assert_called_once_with(self.context, {'host': host, 'cell_id': fake_cell["id"]}) self.compare_obj(mapping_obj, db_mapping, subs={'cell_mapping': 'cell_id'}, comparators={ 'cell_mapping': self._check_cell_map_value}) @mock.patch.object(host_mapping.HostMapping, '_save_in_db') def test_save(self, save_in_db): db_mapping = get_db_mapping() # This isn't needed here db_mapping.pop("cell_mapping") host = db_mapping['host'] mapping_obj = objects.HostMapping(self.context) mapping_obj.host = host mapping_obj.id = db_mapping['id'] new_fake_cell = test_cell_mapping.get_db_mapping(id=10) fake_cell_obj = objects.CellMapping(self.context, **new_fake_cell) mapping_obj.cell_mapping = fake_cell_obj db_mapping.update({"cell_id": new_fake_cell["id"]}) save_in_db.return_value = db_mapping mapping_obj.save() save_in_db.assert_called_once_with(self.context, test.MatchType(host_mapping.HostMapping), {'cell_id': new_fake_cell["id"], 'host': host, 'id': db_mapping['id']}) self.compare_obj(mapping_obj, db_mapping, subs={'cell_mapping': 'cell_id'}, comparators={ 'cell_mapping': self._check_cell_map_value}) @mock.patch.object(host_mapping.HostMapping, '_destroy_in_db') def test_destroy(self, destroy_in_db): mapping_obj = objects.HostMapping(self.context) mapping_obj.host = "fake-host2" mapping_obj.destroy() destroy_in_db.assert_called_once_with(self.context, "fake-host2") class TestHostMappingObject(test_objects._LocalTest, _TestHostMappingObject): pass class TestRemoteHostMappingObject(test_objects._RemoteTest, _TestHostMappingObject): pass class _TestHostMappingListObject(object): def _check_cell_map_value(self, db_val, cell_obj): self.assertEqual(db_val, cell_obj.id) @mock.patch.object(host_mapping.HostMappingList, '_get_from_db') def test_get_all(self, get_from_db): fake_cell = test_cell_mapping.get_db_mapping(id=1) db_mapping = get_db_mapping(mapped_cell=fake_cell) get_from_db.return_value = [db_mapping] mapping_obj = objects.HostMappingList.get_all(self.context) get_from_db.assert_called_once_with(self.context) self.compare_obj(mapping_obj.objects[0], db_mapping, subs={'cell_mapping': 'cell_id'}, comparators={ 'cell_mapping': self._check_cell_map_value}) class TestCellMappingListObject(test_objects._LocalTest, _TestHostMappingListObject): pass class TestRemoteCellMappingListObject(test_objects._RemoteTest, _TestHostMappingListObject): pass class TestHostMappingDiscovery(test.NoDBTestCase): @mock.patch('nova.objects.CellMappingList.get_all') @mock.patch('nova.objects.HostMapping.create') @mock.patch('nova.objects.HostMapping.get_by_host') @mock.patch('nova.objects.ComputeNodeList.get_all_by_not_mapped') def test_discover_hosts_all(self, mock_cn_get, mock_hm_get, mock_hm_create, mock_cm): def _hm_get(context, host): if host in ['a', 'b', 'c']: return objects.HostMapping() raise exception.HostMappingNotFound(name=host) mock_hm_get.side_effect = _hm_get mock_cn_get.side_effect = [[objects.ComputeNode(host='d', uuid=uuids.cn1)], [objects.ComputeNode(host='e', uuid=uuids.cn2)]] cell_mappings = [objects.CellMapping(name='foo', uuid=uuids.cm1), objects.CellMapping(name='bar', uuid=uuids.cm2)] mock_cm.return_value = cell_mappings ctxt = context.get_admin_context() with mock.patch('nova.objects.ComputeNode.save') as mock_save: hms = host_mapping.discover_hosts(ctxt) mock_save.assert_has_calls([mock.call(), mock.call()]) self.assertEqual(2, len(hms)) self.assertTrue(mock_hm_create.called) self.assertEqual(['d', 'e'], [hm.host for hm in hms]) @mock.patch('nova.objects.CellMapping.get_by_uuid') 
@mock.patch('nova.objects.HostMapping.create') @mock.patch('nova.objects.HostMapping.get_by_host') @mock.patch('nova.objects.ComputeNodeList.get_all_by_not_mapped') def test_discover_hosts_one(self, mock_cn_get, mock_hm_get, mock_hm_create, mock_cm): def _hm_get(context, host): if host in ['a', 'b', 'c']: return objects.HostMapping() raise exception.HostMappingNotFound(name=host) mock_hm_get.side_effect = _hm_get # NOTE(danms): Provide both side effects, but expect it to only # be called once if we provide a cell mock_cn_get.side_effect = [[objects.ComputeNode(host='d', uuid=uuids.cn1)], [objects.ComputeNode(host='e', uuid=uuids.cn2)]] mock_cm.return_value = objects.CellMapping(name='foo', uuid=uuids.cm1) ctxt = context.get_admin_context() with mock.patch('nova.objects.ComputeNode.save') as mock_save: hms = host_mapping.discover_hosts(ctxt, uuids.cm1) mock_save.assert_called_once_with() self.assertEqual(1, len(hms)) self.assertTrue(mock_hm_create.called) self.assertEqual(['d'], [hm.host for hm in hms]) @mock.patch('nova.objects.CellMappingList.get_all') @mock.patch('nova.objects.HostMapping.get_by_host') @mock.patch('nova.objects.HostMapping.create') @mock.patch('nova.objects.ServiceList.get_by_binary') def test_discover_services(self, mock_srv, mock_hm_create, mock_hm_get, mock_cm): mock_cm.return_value = [ objects.CellMapping(uuid=uuids.cell1), objects.CellMapping(uuid=uuids.cell2), ] mock_srv.side_effect = [ [objects.Service(host='host1'), objects.Service(host='host2')], [objects.Service(host='host3')], ] def fake_get_host_mapping(ctxt, host): if host == 'host2': return else: raise exception.HostMappingNotFound(name=host) mock_hm_get.side_effect = fake_get_host_mapping ctxt = context.get_admin_context() mappings = host_mapping.discover_hosts(ctxt, by_service=True) self.assertEqual(2, len(mappings)) self.assertEqual(['host1', 'host3'], sorted([m.host for m in mappings])) @mock.patch('nova.objects.CellMapping.get_by_uuid') @mock.patch('nova.objects.HostMapping.get_by_host') @mock.patch('nova.objects.HostMapping.create') @mock.patch('nova.objects.ServiceList.get_by_binary') def test_discover_services_one_cell(self, mock_srv, mock_hm_create, mock_hm_get, mock_cm): mock_cm.return_value = objects.CellMapping(uuid=uuids.cell1) mock_srv.return_value = [ objects.Service(host='host1'), objects.Service(host='host2'), ] def fake_get_host_mapping(ctxt, host): if host == 'host2': return else: raise exception.HostMappingNotFound(name=host) mock_hm_get.side_effect = fake_get_host_mapping lines = [] def fake_status(msg): lines.append(msg) ctxt = context.get_admin_context() mappings = host_mapping.discover_hosts(ctxt, cell_uuid=uuids.cell1, status_fn=fake_status, by_service=True) self.assertEqual(1, len(mappings)) self.assertEqual(['host1'], sorted([m.host for m in mappings])) expected = """\ Getting computes from cell: %(cell)s Creating host mapping for service host1 Found 1 unmapped computes in cell: %(cell)s""" % {'cell': uuids.cell1} self.assertEqual(expected, '\n'.join(lines)) @mock.patch('nova.objects.CellMappingList.get_all') @mock.patch('nova.objects.HostMapping.create') @mock.patch('nova.objects.HostMapping.get_by_host') @mock.patch('nova.objects.ComputeNodeList.get_all_by_not_mapped') def test_discover_hosts_duplicate(self, mock_cn_get, mock_hm_get, mock_hm_create, mock_cm): mock_cm.return_value = [objects.CellMapping(name='foo', uuid=uuids.cm)] mock_cn_get.return_value = [objects.ComputeNode(host='bar', uuid=uuids.cn)] mock_hm_get.side_effect = exception.HostMappingNotFound(name='bar') 
mock_hm_create.side_effect = db_exc.DBDuplicateEntry() ctxt = context.get_admin_context() exp = self.assertRaises(exception.HostMappingExists, host_mapping.discover_hosts, ctxt) expected = "Host 'bar' mapping already exists" self.assertIn(expected, six.text_type(exp)) @mock.patch('nova.objects.CellMappingList.get_all') @mock.patch('nova.objects.HostMapping.get_by_host') @mock.patch('nova.objects.HostMapping.create') @mock.patch('nova.objects.ServiceList.get_by_binary') def test_discover_services_duplicate(self, mock_srv, mock_hm_create, mock_hm_get, mock_cm): mock_cm.return_value = [objects.CellMapping(name='foo', uuid=uuids.cm)] mock_srv.return_value = [objects.Service(host='bar')] mock_hm_get.side_effect = exception.HostMappingNotFound(name='bar') mock_hm_create.side_effect = db_exc.DBDuplicateEntry() ctxt = context.get_admin_context() exp = self.assertRaises(exception.HostMappingExists, host_mapping.discover_hosts, ctxt, by_service=True) expected = "Host 'bar' mapping already exists" self.assertIn(expected, six.text_type(exp)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/objects/test_hv_spec.py0000664000175000017500000000462500000000000022352 0ustar00zuulzuul00000000000000# Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova import objects from nova.objects import fields as obj_fields from nova.tests.unit.objects import test_objects spec_dict = { 'arch': obj_fields.Architecture.I686, 'hv_type': obj_fields.HVType.KVM, 'vm_mode': obj_fields.VMMode.HVM } spec_list = [ obj_fields.Architecture.I686, obj_fields.HVType.KVM, obj_fields.VMMode.HVM ] spec_dict_vz = { 'arch': obj_fields.Architecture.I686, 'hv_type': obj_fields.HVType.VIRTUOZZO, 'vm_mode': obj_fields.VMMode.HVM } spec_dict_parallels = { 'arch': obj_fields.Architecture.I686, 'hv_type': obj_fields.HVType.PARALLELS, 'vm_mode': obj_fields.VMMode.HVM } class _TestHVSpecObject(object): def test_hv_spec_from_list(self): spec_obj = objects.HVSpec.from_list(spec_list) self.compare_obj(spec_obj, spec_dict) def test_hv_spec_to_list(self): spec_obj = objects.HVSpec() spec_obj.arch = obj_fields.Architecture.I686 spec_obj.hv_type = obj_fields.HVType.KVM spec_obj.vm_mode = obj_fields.VMMode.HVM spec = spec_obj.to_list() self.assertEqual(spec_list, spec) def test_hv_spec_obj_make_compatible(self): spec_dict_vz_copy = spec_dict_vz.copy() # check 1.1->1.0 compatibility objects.HVSpec().obj_make_compatible(spec_dict_vz_copy, '1.0') self.assertEqual(spec_dict_parallels, spec_dict_vz_copy) # check that nothing changed objects.HVSpec().obj_make_compatible(spec_dict_vz_copy, '1.1') self.assertEqual(spec_dict_parallels, spec_dict_vz_copy) class TestHVSpecObject(test_objects._LocalTest, _TestHVSpecObject): pass class TestRemoteHVSpecObject(test_objects._RemoteTest, _TestHVSpecObject): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_image_meta.py0000664000175000017500000004234300000000000023012 0ustar00zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import datetime import six from nova import exception from nova import objects from nova.objects import fields from nova import test class TestImageMeta(test.NoDBTestCase): def test_basic_attrs(self): image = {'status': 'active', 'container_format': 'bare', 'min_ram': 0, 'updated_at': '2014-12-12T11:16:36.000000', # Testing string -> int conversion 'min_disk': '0', 'owner': '2d8b9502858c406ebee60f0849486222', # Testing string -> bool conversion 'protected': 'yes', 'properties': { 'os_type': 'Linux', 'hw_video_model': 'vga', 'hw_video_ram': '512', 'hw_qemu_guest_agent': 'yes', 'hw_scsi_model': 'virtio-scsi', }, 'size': 213581824, 'name': 'f16-x86_64-openstack-sda', 'checksum': '755122332caeb9f661d5c978adb8b45f', 'created_at': '2014-12-10T16:23:14.000000', 'disk_format': 'qcow2', 'id': 'c8b1790e-a07d-4971-b137-44f2432936cd' } image_meta = objects.ImageMeta.from_dict(image) self.assertEqual('active', image_meta.status) self.assertEqual('bare', image_meta.container_format) self.assertEqual(0, image_meta.min_ram) self.assertIsInstance(image_meta.updated_at, datetime.datetime) self.assertEqual(0, image_meta.min_disk) self.assertEqual('2d8b9502858c406ebee60f0849486222', image_meta.owner) self.assertTrue(image_meta.protected) self.assertEqual(213581824, image_meta.size) self.assertEqual('f16-x86_64-openstack-sda', image_meta.name) self.assertEqual('755122332caeb9f661d5c978adb8b45f', image_meta.checksum) self.assertIsInstance(image_meta.created_at, datetime.datetime) self.assertEqual('qcow2', image_meta.disk_format) self.assertEqual('c8b1790e-a07d-4971-b137-44f2432936cd', image_meta.id) self.assertIsInstance(image_meta.properties, objects.ImageMetaProps) def test_no_props(self): image_meta = objects.ImageMeta.from_dict({}) self.assertIsInstance(image_meta.properties, objects.ImageMetaProps) def test_volume_backed_image(self): image = {'container_format': None, 'size': 0, 'checksum': None, 'disk_format': None, } image_meta = objects.ImageMeta.from_dict(image) self.assertEqual('', image_meta.container_format) self.assertEqual(0, image_meta.size) self.assertEqual('', image_meta.checksum) self.assertEqual('', image_meta.disk_format) def test_null_substitution(self): image = {'name': None, 'checksum': None, 'owner': None, 'size': None, 'virtual_size': None, 'container_format': None, 'disk_format': None, } image_meta = objects.ImageMeta.from_dict(image) self.assertEqual('', image_meta.name) self.assertEqual('', image_meta.checksum) self.assertEqual('', image_meta.owner) self.assertEqual(0, image_meta.size) self.assertEqual(0, image_meta.virtual_size) self.assertEqual('', image_meta.container_format) self.assertEqual('', image_meta.disk_format) class TestImageMetaProps(test.NoDBTestCase): def test_normal_props(self): props = {'os_type': 'windows', 'hw_video_model': 'vga', 'hw_video_ram': '512', 'hw_qemu_guest_agent': 'yes', 'trait:CUSTOM_TRUSTED': 'required', # Fill sane values for the rest here } virtprops = objects.ImageMetaProps.from_dict(props) self.assertEqual('windows', virtprops.os_type) self.assertEqual('vga', virtprops.hw_video_model) self.assertEqual(512, virtprops.hw_video_ram) self.assertTrue(virtprops.hw_qemu_guest_agent) self.assertIsNotNone(virtprops.traits_required) self.assertIn('CUSTOM_TRUSTED', virtprops.traits_required) def test_default_props(self): props = {} virtprops = objects.ImageMetaProps.from_dict(props) for prop in virtprops.fields: self.assertIsNone(virtprops.get(prop)) def test_default_prop_value(self): props = {} virtprops = objects.ImageMetaProps.from_dict(props) 
self.assertEqual("hvm", virtprops.get("hw_vm_mode", "hvm")) self.assertIsNone(virtprops.get("traits_required")) def test_non_existent_prop(self): props = {} virtprops = objects.ImageMetaProps.from_dict(props) self.assertRaises(AttributeError, virtprops.get, "doesnotexist") def test_legacy_compat(self): legacy_props = { 'architecture': 'x86_64', 'owner_id': '123', 'vmware_adaptertype': 'lsiLogic', 'vmware_disktype': 'preallocated', 'vmware_image_version': '2', 'vmware_ostype': 'rhel3_64Guest', 'auto_disk_config': 'yes', 'ipxe_boot': 'yes', 'xenapi_device_id': '3', 'xenapi_image_compression_level': '2', 'vmware_linked_clone': 'false', 'xenapi_use_agent': 'yes', 'xenapi_skip_agent_inject_ssh': 'no', 'xenapi_skip_agent_inject_files_at_boot': 'no', 'cache_in_nova': 'yes', 'vm_mode': 'hvm', 'bittorrent': 'yes', 'mappings': [], 'block_device_mapping': [], 'bdm_v2': 'yes', 'root_device_name': '/dev/vda', 'hypervisor_version_requires': '>=1.5.3', 'hypervisor_type': 'qemu', } image_meta = objects.ImageMetaProps.from_dict(legacy_props) self.assertEqual('x86_64', image_meta.hw_architecture) self.assertEqual('123', image_meta.img_owner_id) self.assertEqual('lsilogic', image_meta.hw_scsi_model) self.assertEqual('preallocated', image_meta.hw_disk_type) self.assertEqual(2, image_meta.img_version) self.assertEqual('rhel3_64Guest', image_meta.os_distro) self.assertTrue(image_meta.hw_auto_disk_config) self.assertTrue(image_meta.hw_ipxe_boot) self.assertEqual(3, image_meta.hw_device_id) self.assertEqual(2, image_meta.img_compression_level) self.assertFalse(image_meta.img_linked_clone) self.assertTrue(image_meta.img_use_agent) self.assertFalse(image_meta.os_skip_agent_inject_ssh) self.assertFalse(image_meta.os_skip_agent_inject_files_at_boot) self.assertTrue(image_meta.img_cache_in_nova) self.assertTrue(image_meta.img_bittorrent) self.assertEqual([], image_meta.img_mappings) self.assertEqual([], image_meta.img_block_device_mapping) self.assertTrue(image_meta.img_bdm_v2) self.assertEqual("/dev/vda", image_meta.img_root_device_name) self.assertEqual('>=1.5.3', image_meta.img_hv_requested_version) self.assertEqual('qemu', image_meta.img_hv_type) def test_legacy_compat_vmware_adapter_types(self): legacy_types = ['lsiLogic', 'busLogic', 'ide', 'lsiLogicsas', 'paraVirtual', None, ''] for legacy_type in legacy_types: legacy_props = { 'vmware_adaptertype': legacy_type, } image_meta = objects.ImageMetaProps.from_dict(legacy_props) if legacy_type == 'ide': self.assertEqual('ide', image_meta.hw_disk_bus) elif not legacy_type: self.assertFalse(image_meta.obj_attr_is_set('hw_disk_bus')) self.assertFalse(image_meta.obj_attr_is_set('hw_scsi_model')) else: self.assertEqual('scsi', image_meta.hw_disk_bus) if legacy_type == 'lsiLogicsas': expected = 'lsisas1068' elif legacy_type == 'paraVirtual': expected = 'vmpvscsi' else: expected = legacy_type.lower() self.assertEqual(expected, image_meta.hw_scsi_model) def test_duplicate_legacy_and_normal_props(self): # Both keys are referring to the same object field props = {'hw_scsi_model': 'virtio-scsi', 'vmware_adaptertype': 'lsiLogic', } virtprops = objects.ImageMetaProps.from_dict(props) # The normal property always wins vs. 
the legacy field since # _set_attr_from_current_names is called finally self.assertEqual('virtio-scsi', virtprops.hw_scsi_model) def test_get(self): props = objects.ImageMetaProps(os_distro='linux') self.assertEqual('linux', props.get('os_distro')) self.assertIsNone(props.get('img_version')) self.assertEqual(1, props.get('img_version', 1)) def test_set_numa_mem(self): props = {'hw_numa_nodes': 2, 'hw_numa_mem.0': "2048", 'hw_numa_mem.1': "4096"} virtprops = objects.ImageMetaProps.from_dict(props) self.assertEqual(2, virtprops.hw_numa_nodes) self.assertEqual([2048, 4096], virtprops.hw_numa_mem) def test_set_numa_mem_sparse(self): props = {'hw_numa_nodes': 2, 'hw_numa_mem.0': "2048", 'hw_numa_mem.1': "1024", 'hw_numa_mem.3': "4096"} virtprops = objects.ImageMetaProps.from_dict(props) self.assertEqual(2, virtprops.hw_numa_nodes) self.assertEqual([2048, 1024], virtprops.hw_numa_mem) def test_set_numa_mem_no_count(self): props = {'hw_numa_mem.0': "2048", 'hw_numa_mem.3': "4096"} virtprops = objects.ImageMetaProps.from_dict(props) self.assertIsNone(virtprops.get("hw_numa_nodes")) self.assertEqual([2048], virtprops.hw_numa_mem) def test_set_numa_cpus(self): props = {'hw_numa_nodes': 2, 'hw_numa_cpus.0': "0-3", 'hw_numa_cpus.1': "4-7"} virtprops = objects.ImageMetaProps.from_dict(props) self.assertEqual(2, virtprops.hw_numa_nodes) self.assertEqual([set([0, 1, 2, 3]), set([4, 5, 6, 7])], virtprops.hw_numa_cpus) def test_set_numa_cpus_sparse(self): props = {'hw_numa_nodes': 4, 'hw_numa_cpus.0': "0-3", 'hw_numa_cpus.1': "4,5", 'hw_numa_cpus.3': "6-7"} virtprops = objects.ImageMetaProps.from_dict(props) self.assertEqual(4, virtprops.hw_numa_nodes) self.assertEqual([set([0, 1, 2, 3]), set([4, 5])], virtprops.hw_numa_cpus) def test_set_numa_cpus_no_count(self): props = {'hw_numa_cpus.0': "0-3", 'hw_numa_cpus.3': "4-7"} virtprops = objects.ImageMetaProps.from_dict(props) self.assertIsNone(virtprops.get("hw_numa_nodes")) self.assertEqual([set([0, 1, 2, 3])], virtprops.hw_numa_cpus) def test_get_unnumbered_trait_fields(self): """Tests that only valid un-numbered required traits are parsed from the properties. """ props = {'trait:HW_CPU_X86_AVX2': 'required', 'trait:CUSTOM_TRUSTED': 'required', 'trait1:CUSTOM_FPGA': 'required', 'trai:CUSTOM_FOO': 'required', 'trait:CUSTOM_XYZ': 'xyz'} virtprops = objects.ImageMetaProps.from_dict(props) self.assertIn('CUSTOM_TRUSTED', virtprops.traits_required) self.assertIn('HW_CPU_X86_AVX2', virtprops.traits_required) # numbered traits are ignored self.assertNotIn('CUSTOM_FPGA', virtprops.traits_required) # property key does not start with `trait:` exactly self.assertNotIn('CUSTOM_FOO', virtprops.traits_required) # property value is not required self.assertNotIn('CUSTOM_XYZ', virtprops.traits_required) def test_traits_required_initialized_as_list(self): """Tests that traits_required field is set as a list even if the same property is set on the image metadata. 
""" props = {'trait:HW_CPU_X86_AVX2': 'required', 'trait:CUSTOM_TRUSTED': 'required', 'traits_required': 'foo'} virtprops = objects.ImageMetaProps.from_dict(props) self.assertIsInstance(virtprops.traits_required, list) self.assertIn('CUSTOM_TRUSTED', virtprops.traits_required) self.assertIn('HW_CPU_X86_AVX2', virtprops.traits_required) self.assertEqual(2, len(virtprops.traits_required)) def test_obj_make_compatible(self): props = { 'hw_firmware_type': 'uefi', 'hw_cpu_realtime_mask': '^0-1', 'hw_cpu_thread_policy': 'prefer', 'img_config_drive': 'mandatory', 'os_admin_user': 'root', 'hw_vif_multiqueue_enabled': True, 'img_hv_type': 'kvm', 'img_hv_requested_version': '>= 1.0', 'os_require_quiesce': True, 'os_secure_boot': 'required', 'hw_rescue_bus': 'ide', 'hw_rescue_device': 'disk', 'hw_watchdog_action': fields.WatchdogAction.DISABLED, } obj = objects.ImageMetaProps(**props) primitive = obj.obj_to_primitive('1.0') self.assertFalse(any([x in primitive['nova_object.data'] for x in props])) for bus in ('lxc', 'uml'): obj.hw_disk_bus = bus self.assertRaises(exception.ObjectActionError, obj.obj_to_primitive, '1.0') def test_obj_make_compatible_video_model(self): # assert that older video models are preserved. obj = objects.ImageMetaProps( hw_video_model=objects.fields.VideoModel.QXL, hw_disk_bus=objects.fields.DiskBus.VIRTIO ) primitive = obj.obj_to_primitive('1.21') self.assertIn("hw_video_model", primitive['nova_object.data']) self.assertEqual(objects.fields.VideoModel.QXL, primitive['nova_object.data']['hw_video_model']) self.assertIn("hw_disk_bus", primitive['nova_object.data']) self.assertEqual(objects.fields.DiskBus.VIRTIO, primitive['nova_object.data']['hw_disk_bus']) # Virtio, GOP and None were added in 1.22 and should raise an # exception when backleveling. models = [objects.fields.VideoModel.VIRTIO, objects.fields.VideoModel.GOP, objects.fields.VideoModel.NONE] for model in models: obj = objects.ImageMetaProps(hw_video_model=model) ex = self.assertRaises(exception.ObjectActionError, obj.obj_to_primitive, '1.21') self.assertIn('hw_video_model', six.text_type(ex)) def test_obj_make_compatible_watchdog_action_not_disabled(self): """Tests that we don't pop the hw_watchdog_action if the value is not 'disabled'. 
""" obj = objects.ImageMetaProps( hw_watchdog_action=fields.WatchdogAction.PAUSE) primitive = obj.obj_to_primitive('1.0') self.assertIn('hw_watchdog_action', primitive['nova_object.data']) self.assertEqual(fields.WatchdogAction.PAUSE, primitive['nova_object.data']['hw_watchdog_action']) def test_set_os_secure_boot(self): props = {'os_secure_boot': "required"} secure_props = objects.ImageMetaProps.from_dict(props) self.assertEqual("required", secure_props.os_secure_boot) def test_obj_make_compatible_img_hide_hypervisor_id(self): """Tests that checks if we pop img_hide_hypervisor_id.""" obj = objects.ImageMetaProps(img_hide_hypervisor_id=True) primitive = obj.obj_to_primitive('1.0') self.assertNotIn('img_hide_hypervisor_id', primitive['nova_object.data']) def test_obj_make_compatible_trait_fields(self): """Tests that checks if we pop traits_required.""" obj = objects.ImageMetaProps(traits_required=['CUSTOM_TRUSTED']) primitive = obj.obj_to_primitive('1.19') self.assertNotIn('traits_required', primitive['nova_object.data']) def test_obj_make_compatible_pmu(self): """Tests that checks if we pop hw_pmu.""" obj = objects.ImageMetaProps(hw_pmu=True) primitive = obj.obj_to_primitive() old_primitive = obj.obj_to_primitive('1.22') self.assertIn('hw_pmu', primitive['nova_object.data']) self.assertNotIn('hw_pmu', old_primitive['nova_object.data']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_instance.py0000664000175000017500000027332200000000000022531 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import collections import datetime import six import mock import netaddr from oslo_db import exception as db_exc from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from oslo_versionedobjects import base as ovo_base from nova.compute import task_states from nova.compute import vm_states from nova.db import api as db from nova.db.sqlalchemy import api as sql_api from nova.db.sqlalchemy import models as sql_models from nova import exception from nova.network import model as network_model from nova import notifications from nova import objects from nova.objects import fields from nova.objects import instance from nova.objects import instance_info_cache from nova.objects import pci_device from nova.objects import security_group from nova import test from nova.tests.unit import fake_instance from nova.tests.unit.objects import test_instance_device_metadata from nova.tests.unit.objects import test_instance_fault from nova.tests.unit.objects import test_instance_info_cache from nova.tests.unit.objects import test_instance_numa from nova.tests.unit.objects import test_instance_pci_requests from nova.tests.unit.objects import test_migration_context as test_mig_ctxt from nova.tests.unit.objects import test_objects from nova.tests.unit.objects import test_security_group from nova.tests.unit.objects import test_vcpu_model from nova import utils class _TestInstanceObject(object): @property def fake_instance(self): db_inst = fake_instance.fake_db_instance(id=2, access_ip_v4='1.2.3.4', access_ip_v6='::1') db_inst['uuid'] = uuids.db_instance db_inst['cell_name'] = 'api!child' db_inst['terminated_at'] = None db_inst['deleted_at'] = None db_inst['created_at'] = None db_inst['updated_at'] = None db_inst['launched_at'] = datetime.datetime(1955, 11, 12, 22, 4, 0) db_inst['deleted'] = False db_inst['security_groups'] = [] db_inst['pci_devices'] = [] db_inst['user_id'] = self.context.user_id db_inst['project_id'] = self.context.project_id db_inst['tags'] = [] db_inst['trusted_certs'] = [] db_inst['info_cache'] = dict(test_instance_info_cache.fake_info_cache, instance_uuid=db_inst['uuid']) db_inst['system_metadata'] = { 'image_name': 'os2-warp', 'image_min_ram': 100, 'image_hw_disk_bus': 'ide', 'image_hw_vif_model': 'ne2k_pci', } return db_inst def test_datetime_deserialization(self): red_letter_date = timeutils.parse_isotime( utils.isotime(datetime.datetime(1955, 11, 5))) inst = objects.Instance(uuid=uuids.instance, launched_at=red_letter_date) primitive = inst.obj_to_primitive() expected = {'nova_object.name': 'Instance', 'nova_object.namespace': 'nova', 'nova_object.version': inst.VERSION, 'nova_object.data': {'uuid': uuids.instance, 'launched_at': '1955-11-05T00:00:00Z'}, 'nova_object.changes': ['launched_at', 'uuid']} self.assertJsonEqual(primitive, expected) inst2 = objects.Instance.obj_from_primitive(primitive) self.assertIsInstance(inst2.launched_at, datetime.datetime) self.assertEqual(red_letter_date, inst2.launched_at) def test_ip_deserialization(self): inst = objects.Instance(uuid=uuids.instance, access_ip_v4='1.2.3.4', access_ip_v6='::1') primitive = inst.obj_to_primitive() expected = {'nova_object.name': 'Instance', 'nova_object.namespace': 'nova', 'nova_object.version': inst.VERSION, 'nova_object.data': {'uuid': uuids.instance, 'access_ip_v4': '1.2.3.4', 'access_ip_v6': '::1'}, 'nova_object.changes': ['uuid', 'access_ip_v6', 'access_ip_v4']} self.assertJsonEqual(primitive, expected) inst2 = 
objects.Instance.obj_from_primitive(primitive) self.assertIsInstance(inst2.access_ip_v4, netaddr.IPAddress) self.assertIsInstance(inst2.access_ip_v6, netaddr.IPAddress) self.assertEqual(netaddr.IPAddress('1.2.3.4'), inst2.access_ip_v4) self.assertEqual(netaddr.IPAddress('::1'), inst2.access_ip_v6) @mock.patch.object(db, 'instance_get_by_uuid') def test_get_without_expected(self, mock_get): mock_get.return_value = self.fake_instance inst = objects.Instance.get_by_uuid(self.context, 'uuid', expected_attrs=[]) for attr in instance.INSTANCE_OPTIONAL_ATTRS: self.assertFalse(inst.obj_attr_is_set(attr)) mock_get.assert_called_once_with(self.context, 'uuid', columns_to_join=[]) @mock.patch.object(db, 'instance_extra_get_by_instance_uuid') @mock.patch.object(db, 'instance_fault_get_by_instance_uuids') @mock.patch.object(db, 'instance_get_by_uuid') def test_get_with_expected(self, mock_get, mock_fault_get, mock_extra_get): exp_cols = instance.INSTANCE_OPTIONAL_ATTRS[:] exp_cols.remove('numa_topology') exp_cols.remove('pci_requests') exp_cols.remove('vcpu_model') exp_cols.remove('ec2_ids') exp_cols.remove('migration_context') exp_cols.remove('keypairs') exp_cols.remove('device_metadata') exp_cols.remove('trusted_certs') exp_cols.remove('resources') exp_cols = [exp_col for exp_col in exp_cols if 'flavor' not in exp_col] exp_cols.extend(['extra', 'extra.numa_topology', 'extra.pci_requests', 'extra.flavor', 'extra.vcpu_model', 'extra.migration_context', 'extra.keypairs', 'extra.device_metadata', 'extra.trusted_certs', 'extra.resources']) fake_topology = test_instance_numa.fake_db_topology['numa_topology'] fake_requests = jsonutils.dumps(test_instance_pci_requests. fake_pci_requests) fake_devices_metadata = \ test_instance_device_metadata.fake_devices_metadata fake_flavor = jsonutils.dumps( {'cur': objects.Flavor().obj_to_primitive(), 'old': None, 'new': None}) fake_vcpu_model = jsonutils.dumps( test_vcpu_model.fake_vcpumodel.obj_to_primitive()) fake_mig_context = jsonutils.dumps( test_mig_ctxt.fake_migration_context_obj.obj_to_primitive()) fake_keypairlist = objects.KeyPairList(objects=[ objects.KeyPair(name='foo')]) fake_keypairs = jsonutils.dumps( fake_keypairlist.obj_to_primitive()) fake_trusted_certs = jsonutils.dumps( objects.TrustedCerts(ids=['123foo']).obj_to_primitive()) fake_resource = objects.Resource( provider_uuid=uuids.rp, resource_class='CUSTOM_FOO', identifier='foo') fake_resources = jsonutils.dumps(objects.ResourceList( objects=[fake_resource]).obj_to_primitive()) fake_service = {'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': False, 'id': 123, 'host': 'fake-host', 'binary': 'nova-compute', 'topic': 'fake-service-topic', 'report_count': 1, 'forced_down': False, 'disabled': False, 'disabled_reason': None, 'last_seen_up': None, 'version': 1, 'uuid': uuids.service, } fake_instance = dict(self.fake_instance, services=[fake_service], extra={ 'numa_topology': fake_topology, 'pci_requests': fake_requests, 'device_metadata': fake_devices_metadata, 'flavor': fake_flavor, 'vcpu_model': fake_vcpu_model, 'migration_context': fake_mig_context, 'keypairs': fake_keypairs, 'trusted_certs': fake_trusted_certs, 'resources': fake_resources, }) mock_get.return_value = fake_instance fake_faults = test_instance_fault.fake_faults mock_fault_get.return_value = fake_faults inst = objects.Instance.get_by_uuid( self.context, 'uuid', expected_attrs=instance.INSTANCE_OPTIONAL_ATTRS) for attr in instance.INSTANCE_OPTIONAL_ATTRS: self.assertTrue(inst.obj_attr_is_set(attr)) self.assertEqual(123, 
inst.services[0].id) self.assertEqual('foo', inst.keypairs[0].name) self.assertEqual(['123foo'], inst.trusted_certs.ids) self.assertEqual(fake_resource.identifier, inst.resources[0].identifier) mock_get.assert_called_once_with(self.context, 'uuid', columns_to_join=exp_cols) mock_fault_get.assert_called_once_with(self.context, [fake_instance['uuid']]) self.assertFalse(mock_extra_get.called) def test_lazy_load_services_on_deleted_instance(self): # We should avoid trying to hit the database to reload the instance # and just set the services attribute to an empty list. instance = objects.Instance(self.context, uuid=uuids.instance, deleted=True) self.assertEqual(0, len(instance.services)) def test_lazy_load_tags_on_deleted_instance(self): # We should avoid trying to hit the database to reload the instance # and just set the tags attribute to an empty list. instance = objects.Instance(self.context, uuid=uuids.instance, deleted=True) self.assertEqual(0, len(instance.tags)) def test_lazy_load_generic_on_deleted_instance(self): # For generic fields, we try to load the deleted record from the # database. instance = objects.Instance(self.context, uuid=uuids.instance, user_id=self.context.user_id, project_id=self.context.project_id) instance.create() instance.destroy() # Re-create our local object to make sure it doesn't have sysmeta # filled in by create() instance = objects.Instance(self.context, uuid=uuids.instance, user_id=self.context.user_id, project_id=self.context.project_id) self.assertNotIn('system_metadata', instance) self.assertEqual(0, len(instance.system_metadata)) def test_lazy_load_flavor_on_deleted_instance(self): # For something like a flavor, we should be reading from the DB # with read_deleted='yes' flavor = objects.Flavor(name='testflavor') instance = objects.Instance(self.context, uuid=uuids.instance, flavor=flavor, user_id=self.context.user_id, project_id=self.context.project_id) instance.create() instance.destroy() # Re-create our local object to make sure it doesn't have sysmeta # filled in by create() instance = objects.Instance(self.context, uuid=uuids.instance, user_id=self.context.user_id, project_id=self.context.project_id) self.assertNotIn('flavor', instance) self.assertEqual('testflavor', instance.flavor.name) def test_lazy_load_tags(self): instance = objects.Instance(self.context, uuid=uuids.instance, user_id=self.context.user_id, project_id=self.context.project_id) instance.create() tag = objects.Tag(self.context, resource_id=instance.uuid, tag='foo') tag.create() self.assertNotIn('tags', instance) self.assertEqual(1, len(instance.tags)) self.assertEqual('foo', instance.tags[0].tag) @mock.patch('nova.objects.instance.LOG.exception') def test_save_does_not_log_exception_after_tags_loaded(self, mock_log): instance = objects.Instance(self.context, uuid=uuids.instance, user_id=self.context.user_id, project_id=self.context.project_id) instance.create() tag = objects.Tag(self.context, resource_id=instance.uuid, tag='foo') tag.create() # this will lazy load tags so instance.tags will be set self.assertEqual(1, len(instance.tags)) # instance.save will try to find a way to save tags but is should not # spam the log with errors instance.display_name = 'foobar' instance.save() self.assertFalse(mock_log.called) @mock.patch.object(db, 'instance_get') def test_get_by_id(self, mock_get): mock_get.return_value = self.fake_instance inst = objects.Instance.get_by_id(self.context, 'instid') self.assertEqual(self.fake_instance['uuid'], inst.uuid) 
mock_get.assert_called_once_with(self.context, 'instid', columns_to_join=['info_cache', 'security_groups']) @mock.patch.object(db, 'instance_get_by_uuid') def test_load(self, mock_get): fake_uuid = self.fake_instance['uuid'] fake_inst2 = dict(self.fake_instance, metadata=[{'key': 'foo', 'value': 'bar'}]) mock_get.side_effect = [self.fake_instance, fake_inst2] inst = objects.Instance.get_by_uuid(self.context, fake_uuid) self.assertFalse(hasattr(inst, '_obj_metadata')) meta = inst.metadata self.assertEqual({'foo': 'bar'}, meta) self.assertTrue(hasattr(inst, '_obj_metadata')) # Make sure we don't run load again meta2 = inst.metadata self.assertEqual({'foo': 'bar'}, meta2) call_list = [mock.call(self.context, fake_uuid, columns_to_join=['info_cache', 'security_groups']), mock.call(self.context, fake_uuid, columns_to_join=['metadata']), ] mock_get.assert_has_calls(call_list, any_order=False) def test_load_invalid(self): inst = objects.Instance(context=self.context, uuid=uuids.instance) self.assertRaises(exception.ObjectActionError, inst.obj_load_attr, 'foo') def test_create_and_load_keypairs_from_extra(self): inst = objects.Instance(context=self.context, user_id=self.context.user_id, project_id=self.context.project_id) inst.keypairs = objects.KeyPairList(objects=[ objects.KeyPair(name='foo')]) inst.create() inst = objects.Instance.get_by_uuid(self.context, inst.uuid, expected_attrs=['keypairs']) self.assertEqual('foo', inst.keypairs[0].name) def test_lazy_load_keypairs_from_extra(self): inst = objects.Instance(context=self.context, user_id=self.context.user_id, project_id=self.context.project_id) inst.keypairs = objects.KeyPairList(objects=[ objects.KeyPair(name='foo')]) inst.create() inst = objects.Instance.get_by_uuid(self.context, inst.uuid) self.assertNotIn('keypairs', inst) self.assertEqual('foo', inst.keypairs[0].name) self.assertNotIn('keypairs', inst.obj_what_changed()) @mock.patch.object(db, 'instance_get_by_uuid') def test_lazy_load_flavor_from_extra(self, mock_get): inst = objects.Instance(context=self.context, uuid=uuids.instance) # These disabled values are only here for logic testing purposes to # make sure we default the "new" flavor's disabled value to False on # load from the database. fake_flavor = jsonutils.dumps( {'cur': objects.Flavor(disabled=False, is_public=True).obj_to_primitive(), 'old': objects.Flavor(disabled=True, is_public=False).obj_to_primitive(), 'new': objects.Flavor().obj_to_primitive()}) fake_inst = dict(self.fake_instance, extra={'flavor': fake_flavor}) mock_get.return_value = fake_inst # Assert the disabled values on the flavors. 
self.assertFalse(inst.flavor.disabled) self.assertTrue(inst.old_flavor.disabled) self.assertFalse(inst.new_flavor.disabled) # Assert the is_public values on the flavors self.assertTrue(inst.flavor.is_public) self.assertFalse(inst.old_flavor.is_public) self.assertTrue(inst.new_flavor.is_public) @mock.patch.object(db, 'instance_get_by_uuid') def test_get_remote(self, mock_get): # isotime doesn't have microseconds and is always UTC fake_instance = self.fake_instance mock_get.return_value = fake_instance inst = objects.Instance.get_by_uuid(self.context, uuids.instance) self.assertEqual(fake_instance['id'], inst.id) self.assertEqual(fake_instance['launched_at'], inst.launched_at.replace(tzinfo=None)) self.assertEqual(fake_instance['access_ip_v4'], str(inst.access_ip_v4)) self.assertEqual(fake_instance['access_ip_v6'], str(inst.access_ip_v6)) mock_get.assert_called_once_with(self.context, uuids.instance, columns_to_join=['info_cache', 'security_groups']) @mock.patch.object(instance_info_cache.InstanceInfoCache, 'refresh') @mock.patch.object(db, 'instance_get_by_uuid') def test_refresh(self, mock_get, mock_refresh): fake_uuid = self.fake_instance['uuid'] fake_inst = dict(self.fake_instance, host='orig-host') fake_inst2 = dict(self.fake_instance, host='new-host') mock_get.side_effect = [fake_inst, fake_inst2] inst = objects.Instance.get_by_uuid(self.context, fake_uuid) self.assertEqual('orig-host', inst.host) inst.refresh() self.assertEqual('new-host', inst.host) self.assertEqual(set([]), inst.obj_what_changed()) get_call_list = [mock.call(self.context, fake_uuid, columns_to_join=['info_cache', 'security_groups']), mock.call(self.context, fake_uuid, columns_to_join=['info_cache', 'security_groups']), ] mock_get.assert_has_calls(get_call_list, any_order=False) mock_refresh.assert_called_once_with() @mock.patch.object(objects.Instance, 'get_by_uuid') def test_refresh_does_not_recurse(self, mock_get): inst = objects.Instance(context=self.context, uuid=uuids.instance, metadata={}) inst_copy = objects.Instance() inst_copy.uuid = inst.uuid mock_get.return_value = inst_copy inst.refresh() mock_get.assert_called_once_with(self.context, uuid=inst.uuid, expected_attrs=['metadata'], use_slave=False) @mock.patch.object(notifications, 'send_update') @mock.patch.object(db, 'instance_info_cache_update') @mock.patch.object(db, 'instance_update_and_get_original') @mock.patch.object(db, 'instance_get_by_uuid') def _save_test_helper(self, save_kwargs, mock_db_instance_get_by_uuid, mock_db_instance_update_and_get_original, mock_db_instance_info_cache_update, mock_notifications_send_update): """Common code for testing save() for cells/non-cells.""" old_ref = dict(self.fake_instance, host='oldhost', user_data='old', vm_state='old', task_state='old') fake_uuid = old_ref['uuid'] expected_updates = dict(vm_state='meow', task_state='wuff', user_data='new') new_ref = dict(old_ref, host='newhost', **expected_updates) exp_vm_state = save_kwargs.get('expected_vm_state') exp_task_state = save_kwargs.get('expected_task_state') if exp_vm_state: expected_updates['expected_vm_state'] = exp_vm_state if exp_task_state: if (exp_task_state == 'image_snapshot' and 'instance_version' in save_kwargs and save_kwargs['instance_version'] == '1.9'): expected_updates['expected_task_state'] = [ 'image_snapshot', 'image_snapshot_pending'] else: expected_updates['expected_task_state'] = exp_task_state mock_db_instance_get_by_uuid.return_value = old_ref mock_db_instance_update_and_get_original\ .return_value = (old_ref, new_ref) inst = 
objects.Instance.get_by_uuid(self.context, old_ref['uuid']) if 'instance_version' in save_kwargs: inst.VERSION = save_kwargs.pop('instance_version') self.assertEqual('old', inst.task_state) self.assertEqual('old', inst.vm_state) self.assertEqual('old', inst.user_data) inst.vm_state = 'meow' inst.task_state = 'wuff' inst.user_data = 'new' save_kwargs.pop('context', None) inst.save(**save_kwargs) self.assertEqual('newhost', inst.host) self.assertEqual('meow', inst.vm_state) self.assertEqual('wuff', inst.task_state) self.assertEqual('new', inst.user_data) # NOTE(danms): Ignore flavor migrations for the moment self.assertEqual(set([]), inst.obj_what_changed() - set(['flavor'])) mock_db_instance_get_by_uuid.assert_called_once_with( self.context, fake_uuid, columns_to_join=['info_cache', 'security_groups']) mock_db_instance_update_and_get_original.assert_called_once_with( self.context, fake_uuid, expected_updates, columns_to_join=['info_cache', 'security_groups', 'system_metadata'] ) mock_notifications_send_update.assert_called_with(self.context, mock.ANY, mock.ANY) def test_save(self): self._save_test_helper({}) def test_save_exp_vm_state(self): self._save_test_helper({'expected_vm_state': ['meow']}) def test_save_exp_task_state(self): self._save_test_helper({'expected_task_state': ['meow']}) @mock.patch.object(db, 'instance_update_and_get_original') @mock.patch.object(db, 'instance_get_by_uuid') @mock.patch.object(notifications, 'send_update') def test_save_rename_sends_notification(self, mock_send, mock_get, mock_update_and_get): old_ref = dict(self.fake_instance, display_name='hello') fake_uuid = old_ref['uuid'] expected_updates = dict(display_name='goodbye') new_ref = dict(old_ref, **expected_updates) mock_get.return_value = old_ref mock_update_and_get.return_value = (old_ref, new_ref) inst = objects.Instance.get_by_uuid(self.context, old_ref['uuid'], use_slave=False) self.assertEqual('hello', inst.display_name) inst.display_name = 'goodbye' inst.save() self.assertEqual('goodbye', inst.display_name) # NOTE(danms): Ignore flavor migrations for the moment self.assertEqual(set([]), inst.obj_what_changed() - set(['flavor'])) mock_get.assert_called_once_with(self.context, fake_uuid, columns_to_join=['info_cache', 'security_groups']) mock_update_and_get.assert_called_once_with(self.context, fake_uuid, expected_updates, columns_to_join=['info_cache', 'security_groups', 'system_metadata']) mock_send.assert_called_once_with(self.context, mock.ANY, mock.ANY) @mock.patch('nova.db.api.instance_extra_update_by_uuid') def test_save_object_pci_requests(self, mock_instance_extra_update): expected_json = ('[{"count": 1, "alias_name": null, "is_new": false,' '"request_id": null, "requester_id": null,' '"spec": [{"vendor_id": "8086", ' '"product_id": "1502"}], "numa_policy": null}]') inst = objects.Instance() inst = objects.Instance._from_db_object(self.context, inst, self.fake_instance) inst.obj_reset_changes() pci_req_obj = objects.InstancePCIRequest( count=1, spec=[{'vendor_id': '8086', 'product_id': '1502'}]) inst.pci_requests = ( objects.InstancePCIRequests(requests=[pci_req_obj])) inst.pci_requests.instance_uuid = inst.uuid inst.save() mock_instance_extra_update.assert_called_once_with( self.context, inst.uuid, mock.ANY) actual_args = ( mock_instance_extra_update.call_args[0][2]['pci_requests']) mock_instance_extra_update.reset_mock() self.assertJsonEqual(expected_json, actual_args) inst.pci_requests = None inst.save() mock_instance_extra_update.assert_called_once_with( self.context, inst.uuid, 
{'pci_requests': None}) mock_instance_extra_update.reset_mock() inst.obj_reset_changes() inst.save() self.assertFalse(mock_instance_extra_update.called) @mock.patch('nova.db.api.instance_update_and_get_original') @mock.patch.object(instance.Instance, '_from_db_object') def test_save_does_not_refresh_pci_devices(self, mock_fdo, mock_update): # NOTE(danms): This tests that we don't update the pci_devices # field from the contents of the database. This is not because we # don't necessarily want to, but because the way pci_devices is # currently implemented it causes versioning issues. When that is # resolved, this test should go away. mock_update.return_value = None, None inst = objects.Instance(context=self.context, id=123) inst.uuid = uuids.test_instance_not_refresh inst.pci_devices = pci_device.PciDeviceList() inst.save() self.assertNotIn('pci_devices', mock_fdo.call_args_list[0][1]['expected_attrs']) @mock.patch('nova.db.api.instance_extra_update_by_uuid') @mock.patch('nova.db.api.instance_update_and_get_original') @mock.patch.object(instance.Instance, '_from_db_object') def test_save_updates_numa_topology(self, mock_fdo, mock_update, mock_extra_update): fake_obj_numa_topology = objects.InstanceNUMATopology(cells=[ objects.InstanceNUMACell(id=0, cpuset=set([0]), memory=128), objects.InstanceNUMACell(id=1, cpuset=set([1]), memory=128)]) fake_obj_numa_topology.instance_uuid = uuids.instance jsonified = fake_obj_numa_topology._to_json() mock_update.return_value = None, None inst = objects.Instance( context=self.context, id=123, uuid=uuids.instance) inst.numa_topology = fake_obj_numa_topology inst.save() # NOTE(sdague): the json representation of nova object for # NUMA isn't stable from a string comparison # perspective. There are sets which get converted to lists, # and based on platform differences may show up in different # orders. So we can't have mock do the comparison. Instead # manually compare the final parameter using our json equality # operator which does the right thing here. 
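# An illustrative sketch of the ordering problem described in the NOTE
# above (not part of the test; the literal strings are made up for the
# example): InstanceNUMACell.cpuset is a Python set, so serializing the
# same topology twice may legitimately produce, say,
#     '{"cpuset": [0, 1], ...}'   on one run/platform, and
#     '{"cpuset": [1, 0], ...}'   on another.
# A byte-for-byte string assertion against mock's recorded call argument
# would therefore be flaky, which is why the call below only pins the
# argument with mock.ANY and the JSON payload is compared separately via
# assertJsonEqual on the decoded structures.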
mock_extra_update.assert_called_once_with( self.context, inst.uuid, mock.ANY) called_arg = mock_extra_update.call_args_list[0][0][2]['numa_topology'] self.assertJsonEqual(called_arg, jsonified) mock_extra_update.reset_mock() inst.numa_topology = None inst.save() mock_extra_update.assert_called_once_with( self.context, inst.uuid, {'numa_topology': None}) @mock.patch('nova.db.api.instance_extra_update_by_uuid') def test_save_vcpu_model(self, mock_update): inst = fake_instance.fake_instance_obj(self.context) inst.vcpu_model = test_vcpu_model.fake_vcpumodel inst.save() self.assertTrue(mock_update.called) self.assertEqual(1, mock_update.call_count) actual_args = mock_update.call_args self.assertEqual(self.context, actual_args[0][0]) self.assertEqual(inst.uuid, actual_args[0][1]) self.assertEqual(['vcpu_model'], list(actual_args[0][2].keys())) self.assertJsonEqual(jsonutils.dumps( test_vcpu_model.fake_vcpumodel.obj_to_primitive()), actual_args[0][2]['vcpu_model']) mock_update.reset_mock() inst.vcpu_model = None inst.save() mock_update.assert_called_once_with( self.context, inst.uuid, {'vcpu_model': None}) @mock.patch('nova.db.api.instance_extra_update_by_uuid') def test_save_migration_context_model(self, mock_update): inst = fake_instance.fake_instance_obj(self.context) inst.migration_context = test_mig_ctxt.get_fake_migration_context_obj( self.context) inst.save() self.assertTrue(mock_update.called) self.assertEqual(1, mock_update.call_count) actual_args = mock_update.call_args self.assertEqual(self.context, actual_args[0][0]) self.assertEqual(inst.uuid, actual_args[0][1]) self.assertEqual(['migration_context'], list(actual_args[0][2].keys())) self.assertIsInstance( objects.MigrationContext.obj_from_db_obj( actual_args[0][2]['migration_context']), objects.MigrationContext) mock_update.reset_mock() inst.migration_context = None inst.save() mock_update.assert_called_once_with( self.context, inst.uuid, {'migration_context': None}) def test_save_flavor_skips_unchanged_flavors(self): inst = objects.Instance(context=self.context, flavor=objects.Flavor()) inst.obj_reset_changes() with mock.patch( 'nova.db.api.instance_extra_update_by_uuid') as mock_upd: inst.save() self.assertFalse(mock_upd.called) @mock.patch('nova.db.api.instance_extra_update_by_uuid') def test_save_multiple_extras_updates_once(self, mock_update): inst = fake_instance.fake_instance_obj(self.context) inst.numa_topology = None inst.migration_context = None inst.vcpu_model = test_vcpu_model.fake_vcpumodel inst.keypairs = objects.KeyPairList( objects=[objects.KeyPair(name='foo')]) json_vcpu_model = jsonutils.dumps( test_vcpu_model.fake_vcpumodel.obj_to_primitive()) json_keypairs = jsonutils.dumps(inst.keypairs.obj_to_primitive()) # Check changed fields in the instance object self.assertIn('keypairs', inst.obj_what_changed()) self.assertEqual({'objects'}, inst.keypairs.obj_what_changed()) inst.save() expected_vals = { 'numa_topology': None, 'migration_context': None, 'vcpu_model': json_vcpu_model, 'keypairs': json_keypairs, } mock_update.assert_called_once_with(self.context, inst.uuid, expected_vals) # Verify that the record of changed fields has been cleared self.assertNotIn('keypairs', inst.obj_what_changed()) self.assertEqual(set(), inst.keypairs.obj_what_changed()) @mock.patch.object(db, 'instance_get_by_uuid') def test_get_deleted(self, mock_get): fake_inst = dict(self.fake_instance, id=123, deleted=123) fake_uuid = fake_inst['uuid'] mock_get.return_value = fake_inst inst = objects.Instance.get_by_uuid(self.context, fake_uuid) # 
NOTE(danms): Make sure it's actually a bool self.assertTrue(inst.deleted) mock_get.assert_called_once_with(self.context, fake_uuid, columns_to_join=['info_cache', 'security_groups']) @mock.patch.object(db, 'instance_get_by_uuid') def test_get_not_cleaned(self, mock_get): fake_inst = dict(self.fake_instance, id=123, cleaned=None) fake_uuid = fake_inst['uuid'] mock_get.return_value = fake_inst inst = objects.Instance.get_by_uuid(self.context, fake_uuid) # NOTE(mikal): Make sure it's actually a bool self.assertFalse(inst.cleaned) mock_get.assert_called_once_with(self.context, fake_uuid, columns_to_join=['info_cache', 'security_groups']) @mock.patch.object(db, 'instance_get_by_uuid') def test_get_cleaned(self, mock_get): fake_inst = dict(self.fake_instance, id=123, cleaned=1) fake_uuid = fake_inst['uuid'] mock_get.return_value = fake_inst inst = objects.Instance.get_by_uuid(self.context, fake_uuid) # NOTE(mikal): Make sure it's actually a bool self.assertTrue(inst.cleaned) mock_get.assert_called_once_with(self.context, fake_uuid, columns_to_join=['info_cache', 'security_groups']) @mock.patch.object(db, 'instance_update_and_get_original') @mock.patch.object(db, 'instance_info_cache_update') @mock.patch.object(db, 'instance_get_by_uuid') def test_with_info_cache(self, mock_get, mock_upd_cache, mock_upd_and_get): fake_inst = dict(self.fake_instance) fake_uuid = fake_inst['uuid'] nwinfo1 = network_model.NetworkInfo.hydrate([{'address': 'foo'}]) nwinfo2 = network_model.NetworkInfo.hydrate([{'address': 'bar'}]) nwinfo1_json = nwinfo1.json() nwinfo2_json = nwinfo2.json() fake_info_cache = test_instance_info_cache.fake_info_cache fake_inst['info_cache'] = dict( fake_info_cache, network_info=nwinfo1_json, instance_uuid=fake_uuid) mock_get.return_value = fake_inst mock_upd_cache.return_value = fake_info_cache inst = objects.Instance.get_by_uuid(self.context, fake_uuid) self.assertEqual(nwinfo1, inst.info_cache.network_info) self.assertEqual(fake_uuid, inst.info_cache.instance_uuid) inst.info_cache.network_info = nwinfo2 inst.save() mock_get.assert_called_once_with(self.context, fake_uuid, columns_to_join=['info_cache', 'security_groups']) mock_upd_cache.assert_called_once_with(self.context, fake_uuid, {'network_info': nwinfo2_json}) self.assertFalse(mock_upd_and_get.called) @mock.patch.object(db, 'instance_get_by_uuid') def test_with_info_cache_none(self, mock_get): fake_inst = dict(self.fake_instance, info_cache=None) fake_uuid = fake_inst['uuid'] mock_get.return_value = fake_inst inst = objects.Instance.get_by_uuid(self.context, fake_uuid, ['info_cache']) self.assertIsNone(inst.info_cache) mock_get.assert_called_once_with(self.context, fake_uuid, columns_to_join=['info_cache']) def test_get_network_info_with_cache(self): info_cache = instance_info_cache.InstanceInfoCache() nwinfo = network_model.NetworkInfo.hydrate([{'address': 'foo'}]) info_cache.network_info = nwinfo inst = objects.Instance(context=self.context, info_cache=info_cache) self.assertEqual(nwinfo, inst.get_network_info()) def test_get_network_info_without_cache(self): inst = objects.Instance(context=self.context, info_cache=None) self.assertEqual(network_model.NetworkInfo.hydrate([]), inst.get_network_info()) @mock.patch.object(db, 'security_group_update') @mock.patch.object(db, 'instance_update_and_get_original') @mock.patch.object(db, 'instance_get_by_uuid') def test_with_security_groups(self, mock_get, mock_upd_and_get, mock_upd_secgrp): fake_inst = dict(self.fake_instance) fake_uuid = fake_inst['uuid'] fake_inst['security_groups'] = 
[ {'id': 1, 'name': 'secgroup1', 'description': 'fake-desc', 'user_id': 'fake-user', 'project_id': 'fake_project', 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': False}, {'id': 2, 'name': 'secgroup2', 'description': 'fake-desc', 'user_id': 'fake-user', 'project_id': 'fake_project', 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': False}, ] mock_get.return_value = fake_inst mock_upd_secgrp.return_value = fake_inst['security_groups'][0] inst = objects.Instance.get_by_uuid(self.context, fake_uuid) self.assertEqual(2, len(inst.security_groups)) for index, group in enumerate(fake_inst['security_groups']): for key in group: self.assertEqual(group[key], getattr(inst.security_groups[index], key)) self.assertIsInstance(inst.security_groups[index], security_group.SecurityGroup) self.assertEqual(set(), inst.security_groups.obj_what_changed()) inst.security_groups[0].description = 'changed' inst.save() self.assertEqual(set(), inst.security_groups.obj_what_changed()) mock_get.assert_called_once_with(self.context, fake_uuid, columns_to_join=['info_cache', 'security_groups']) mock_upd_secgrp.assert_called_once_with(self.context, 1, {'description': 'changed'}) self.assertFalse(mock_upd_and_get.called) @mock.patch.object(db, 'instance_get_by_uuid') def test_with_empty_security_groups(self, mock_get): fake_inst = dict(self.fake_instance, security_groups=[]) fake_uuid = fake_inst['uuid'] mock_get.return_value = fake_inst inst = objects.Instance.get_by_uuid(self.context, fake_uuid) self.assertEqual(0, len(inst.security_groups)) mock_get.assert_called_once_with(self.context, fake_uuid, columns_to_join=['info_cache', 'security_groups']) @mock.patch.object(db, 'instance_get_by_uuid') def test_with_empty_pci_devices(self, mock_get): fake_inst = dict(self.fake_instance, pci_devices=[]) fake_uuid = fake_inst['uuid'] mock_get.return_value = fake_inst inst = objects.Instance.get_by_uuid(self.context, fake_uuid, ['pci_devices']) self.assertEqual(0, len(inst.pci_devices)) mock_get.assert_called_once_with(self.context, fake_uuid, columns_to_join=['pci_devices']) @mock.patch.object(db, 'instance_get_by_uuid') def test_with_pci_devices(self, mock_get): fake_inst = dict(self.fake_instance) fake_uuid = fake_inst['uuid'] fake_inst['pci_devices'] = [ {'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': None, 'id': 2, 'uuid': uuids.pci_device2, 'compute_node_id': 1, 'address': 'a1', 'vendor_id': 'v1', 'numa_node': 0, 'product_id': 'p1', 'dev_type': fields.PciDeviceType.STANDARD, 'status': fields.PciDeviceStatus.ALLOCATED, 'dev_id': 'i', 'label': 'l', 'instance_uuid': fake_uuid, 'request_id': None, 'parent_addr': None, 'extra_info': '{}'}, { 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': None, 'id': 1, 'uuid': uuids.pci_device1, 'compute_node_id': 1, 'address': 'a', 'vendor_id': 'v', 'numa_node': 1, 'product_id': 'p', 'dev_type': fields.PciDeviceType.STANDARD, 'status': fields.PciDeviceStatus.ALLOCATED, 'dev_id': 'i', 'label': 'l', 'instance_uuid': fake_uuid, 'request_id': None, 'parent_addr': 'a1', 'extra_info': '{}'}, ] mock_get.return_value = fake_inst inst = objects.Instance.get_by_uuid(self.context, fake_uuid, ['pci_devices']) self.assertEqual(2, len(inst.pci_devices)) self.assertEqual(fake_uuid, inst.pci_devices[0].instance_uuid) self.assertEqual(fake_uuid, inst.pci_devices[1].instance_uuid) mock_get.assert_called_once_with(self.context, fake_uuid, columns_to_join=['pci_devices']) @mock.patch.object(db, 
'instance_fault_get_by_instance_uuids') @mock.patch.object(db, 'instance_get_by_uuid') def test_with_fault(self, mock_get, mock_fault_get): fake_inst = dict(self.fake_instance) fake_uuid = fake_inst['uuid'] fake_faults = [dict(x, instance_uuid=fake_uuid) for x in test_instance_fault.fake_faults['fake-uuid']] mock_get.return_value = self.fake_instance mock_fault_get.return_value = {fake_uuid: fake_faults} inst = objects.Instance.get_by_uuid(self.context, fake_uuid, expected_attrs=['fault']) self.assertEqual(fake_faults[0], dict(inst.fault.items())) mock_get.assert_called_once_with(self.context, fake_uuid, columns_to_join=['fault']) mock_fault_get.assert_called_once_with(self.context, [fake_uuid]) @mock.patch('nova.objects.EC2Ids.get_by_instance') @mock.patch('nova.db.api.instance_get_by_uuid') def test_with_ec2_ids(self, mock_get, mock_ec2): fake_inst = dict(self.fake_instance) fake_uuid = fake_inst['uuid'] mock_get.return_value = fake_inst fake_ec2_ids = objects.EC2Ids(instance_id='fake-inst', ami_id='fake-ami') mock_ec2.return_value = fake_ec2_ids inst = objects.Instance.get_by_uuid(self.context, fake_uuid, expected_attrs=['ec2_ids']) mock_ec2.assert_called_once_with(self.context, mock.ANY) self.assertEqual(fake_ec2_ids.instance_id, inst.ec2_ids.instance_id) @mock.patch('nova.db.api.instance_get_by_uuid') def test_with_image_meta(self, mock_get): fake_inst = dict(self.fake_instance) mock_get.return_value = fake_inst inst = instance.Instance.get_by_uuid(self.context, fake_inst['uuid'], expected_attrs=['image_meta']) image_meta = inst.image_meta self.assertIsInstance(image_meta, objects.ImageMeta) self.assertEqual(100, image_meta.min_ram) self.assertEqual('ide', image_meta.properties.hw_disk_bus) self.assertEqual('ne2k_pci', image_meta.properties.hw_vif_model) def test_iteritems_with_extra_attrs(self): self.stub_out('nova.objects.Instance.name', 'foo') inst = objects.Instance(uuid=uuids.instance) self.assertEqual(sorted({'uuid': uuids.instance, 'name': 'foo', }.items()), sorted(inst.items())) def _test_metadata_change_tracking(self, which): inst = objects.Instance(uuid=uuids.instance) setattr(inst, which, {}) inst.obj_reset_changes() getattr(inst, which)['foo'] = 'bar' self.assertEqual(set([which]), inst.obj_what_changed()) inst.obj_reset_changes() self.assertEqual(set(), inst.obj_what_changed()) @mock.patch.object(db, 'instance_create') def test_create_skip_scheduled_at(self, mock_create): vals = {'host': 'foo-host', 'deleted': 0, 'memory_mb': 128, 'system_metadata': {'foo': 'bar'}, 'extra': { 'vcpu_model': None, 'numa_topology': None, 'pci_requests': None, 'device_metadata': None, 'trusted_certs': None, 'resources': None, }} fake_inst = fake_instance.fake_db_instance(**vals) mock_create.return_value = fake_inst inst = objects.Instance(context=self.context, host='foo-host', memory_mb=128, scheduled_at=None, system_metadata={'foo': 'bar'}) inst.create() self.assertEqual('foo-host', inst.host) mock_create.assert_called_once_with(self.context, vals) def test_metadata_change_tracking(self): self._test_metadata_change_tracking('metadata') def test_system_metadata_change_tracking(self): self._test_metadata_change_tracking('system_metadata') @mock.patch.object(db, 'instance_create') def test_create_stubbed(self, mock_create): vals = {'host': 'foo-host', 'deleted': 0, 'memory_mb': 128, 'system_metadata': {'foo': 'bar'}, 'extra': { 'vcpu_model': None, 'numa_topology': None, 'pci_requests': None, 'device_metadata': None, 'trusted_certs': None, 'resources': None, }} fake_inst = 
fake_instance.fake_db_instance(**vals) mock_create.return_value = fake_inst inst = objects.Instance(context=self.context, host='foo-host', memory_mb=128, system_metadata={'foo': 'bar'}) inst.create() mock_create.assert_called_once_with(self.context, vals) @mock.patch.object(db, 'instance_create') def test_create(self, mock_create): extras = {'vcpu_model': None, 'numa_topology': None, 'pci_requests': None, 'device_metadata': None, 'trusted_certs': None, 'resources': None, } mock_create.return_value = self.fake_instance inst = objects.Instance(context=self.context) inst.create() self.assertEqual(self.fake_instance['id'], inst.id) self.assertIsNotNone(inst.ec2_ids) mock_create.assert_called_once_with(self.context, {'deleted': 0, 'extra': extras}) def test_create_with_values(self): inst1 = objects.Instance(context=self.context, user_id=self.context.user_id, project_id=self.context.project_id, host='foo-host') inst1.create() self.assertEqual('foo-host', inst1.host) inst2 = objects.Instance.get_by_uuid(self.context, inst1.uuid) self.assertEqual('foo-host', inst2.host) def test_create_deleted(self): inst1 = objects.Instance(context=self.context, user_id=self.context.user_id, project_id=self.context.project_id, deleted=True) self.assertRaises(exception.ObjectActionError, inst1.create) def test_create_with_extras(self): inst = objects.Instance(context=self.context, uuid=self.fake_instance['uuid'], numa_topology=test_instance_numa.fake_obj_numa_topology, pci_requests=objects.InstancePCIRequests( requests=[ objects.InstancePCIRequest(count=123, spec=[])]), vcpu_model=test_vcpu_model.fake_vcpumodel, trusted_certs=objects.TrustedCerts(ids=['123foo']), resources=objects.ResourceList(objects=[objects.Resource( provider_uuid=uuids.rp, resource_class='CUSTOM_FOO', identifier='foo')]) ) inst.create() self.assertIsNotNone(inst.numa_topology) self.assertIsNotNone(inst.pci_requests) self.assertEqual(1, len(inst.pci_requests.requests)) self.assertIsNotNone(inst.vcpu_model) got_numa_topo = objects.InstanceNUMATopology.get_by_instance_uuid( self.context, inst.uuid) self.assertEqual(inst.numa_topology.instance_uuid, got_numa_topo.instance_uuid) got_pci_requests = objects.InstancePCIRequests.get_by_instance_uuid( self.context, inst.uuid) self.assertEqual(123, got_pci_requests.requests[0].count) vcpu_model = objects.VirtCPUModel.get_by_instance_uuid( self.context, inst.uuid) self.assertEqual('fake-model', vcpu_model.model) self.assertEqual(['123foo'], inst.trusted_certs.ids) self.assertEqual('foo', inst.resources[0].identifier) def test_recreate_fails(self): inst = objects.Instance(context=self.context, user_id=self.context.user_id, project_id=self.context.project_id, host='foo-host') inst.create() self.assertRaises(exception.ObjectActionError, inst.create) @mock.patch.object(db, 'instance_create') def test_create_with_special_things(self, mock_create): fake_inst = fake_instance.fake_db_instance() mock_create.return_value = fake_inst secgroups = security_group.SecurityGroupList() secgroups.objects = [] for name in ('foo', 'bar'): secgroup = security_group.SecurityGroup() secgroup.name = name secgroups.objects.append(secgroup) info_cache = instance_info_cache.InstanceInfoCache() info_cache.network_info = network_model.NetworkInfo() inst = objects.Instance(context=self.context, host='foo-host', security_groups=secgroups, info_cache=info_cache) inst.create() mock_create.assert_called_once_with(self.context, {'host': 'foo-host', 'deleted': 0, 'security_groups': ['foo', 'bar'], 'info_cache': {'network_info': '[]'}, 
'extra': { 'vcpu_model': None, 'numa_topology': None, 'pci_requests': None, 'device_metadata': None, 'trusted_certs': None, 'resources': None, }, }) @mock.patch.object(db, 'instance_destroy') def test_destroy_stubbed(self, mock_destroy): deleted_at = datetime.datetime(1955, 11, 6) fake_inst = fake_instance.fake_db_instance(deleted_at=deleted_at, deleted=True) mock_destroy.return_value = fake_inst inst = objects.Instance(context=self.context, id=1, uuid=uuids.instance, host='foo') inst.destroy() self.assertEqual(timeutils.normalize_time(deleted_at), timeutils.normalize_time(inst.deleted_at)) self.assertTrue(inst.deleted) mock_destroy.assert_called_once_with(self.context, uuids.instance, constraint=None, hard_delete=False) def test_destroy(self): values = {'user_id': self.context.user_id, 'project_id': self.context.project_id} db_inst = db.instance_create(self.context, values) inst = objects.Instance(context=self.context, id=db_inst['id'], uuid=db_inst['uuid']) inst.destroy() self.assertRaises(exception.InstanceNotFound, db.instance_get_by_uuid, self.context, db_inst['uuid']) def test_destroy_host_constraint(self): values = {'user_id': self.context.user_id, 'project_id': self.context.project_id, 'host': 'foo'} db_inst = db.instance_create(self.context, values) inst = objects.Instance.get_by_uuid(self.context, db_inst['uuid']) inst.host = None self.assertRaises(exception.ObjectActionError, inst.destroy) def test_destroy_hard(self): values = {'user_id': self.context.user_id, 'project_id': self.context.project_id} db_inst = db.instance_create(self.context, values) inst = objects.Instance(context=self.context, id=db_inst['id'], uuid=db_inst['uuid']) inst.destroy(hard_delete=True) elevated = self.context.elevated(read_deleted="yes") self.assertRaises(exception.InstanceNotFound, objects.Instance.get_by_uuid, elevated, db_inst['uuid']) def test_destroy_hard_host_constraint(self): values = {'user_id': self.context.user_id, 'project_id': self.context.project_id, 'host': 'foo'} db_inst = db.instance_create(self.context, values) inst = objects.Instance.get_by_uuid(self.context, db_inst['uuid']) inst.host = None ex = self.assertRaises(exception.ObjectActionError, inst.destroy, hard_delete=True) self.assertIn('host changed', six.text_type(ex)) def test_name_does_not_trigger_lazy_loads(self): values = {'user_id': self.context.user_id, 'project_id': self.context.project_id, 'host': 'foo'} db_inst = db.instance_create(self.context, values) inst = objects.Instance.get_by_uuid(self.context, db_inst['uuid']) self.assertFalse(inst.obj_attr_is_set('fault')) self.flags(instance_name_template='foo-%(uuid)s') self.assertEqual('foo-%s' % db_inst['uuid'], inst.name) self.assertFalse(inst.obj_attr_is_set('fault')) def test_name_blank_if_no_id_pre_scheduling(self): # inst.id is not set and can't be lazy loaded inst = objects.Instance(context=self.context, vm_state=vm_states.BUILDING, task_state=task_states.SCHEDULING) self.assertEqual('', inst.name) def test_name_uuid_if_no_id_post_scheduling(self): # inst.id is not set and can't be lazy loaded inst = objects.Instance(context=self.context, uuid=uuids.instance, vm_state=vm_states.ACTIVE, task_state=None) self.assertEqual(uuids.instance, inst.name) def test_from_db_object_not_overwrite_info_cache(self): info_cache = instance_info_cache.InstanceInfoCache() inst = objects.Instance(context=self.context, info_cache=info_cache) db_inst = fake_instance.fake_db_instance() db_inst['info_cache'] = dict( test_instance_info_cache.fake_info_cache) inst._from_db_object(self.context, 
inst, db_inst, expected_attrs=['info_cache']) self.assertIs(info_cache, inst.info_cache) def test_from_db_object_info_cache_not_set(self): inst = instance.Instance(context=self.context, info_cache=None) db_inst = fake_instance.fake_db_instance() db_inst.pop('info_cache') inst._from_db_object(self.context, inst, db_inst, expected_attrs=['info_cache']) self.assertIsNone(inst.info_cache) def test_from_db_object_security_groups_net_set(self): inst = instance.Instance(context=self.context, info_cache=None) db_inst = fake_instance.fake_db_instance() db_inst.pop('security_groups') inst._from_db_object(self.context, inst, db_inst, expected_attrs=['security_groups']) self.assertEqual([], inst.security_groups.objects) @mock.patch('nova.db.api.instance_extra_get_by_instance_uuid', return_value=None) def test_from_db_object_no_extra_db_calls(self, mock_get): db_inst = fake_instance.fake_db_instance() instance.Instance._from_db_object( self.context, objects.Instance(), db_inst, expected_attrs=instance._INSTANCE_EXTRA_FIELDS) self.assertEqual(0, mock_get.call_count) @mock.patch('nova.objects.InstancePCIRequests.get_by_instance_uuid') def test_get_with_pci_requests(self, mock_get): mock_get.return_value = objects.InstancePCIRequests() db_instance = db.instance_create(self.context, { 'user_id': self.context.user_id, 'project_id': self.context.project_id}) instance = objects.Instance.get_by_uuid( self.context, db_instance['uuid'], expected_attrs=['pci_requests']) self.assertTrue(instance.obj_attr_is_set('pci_requests')) self.assertIsNotNone(instance.pci_requests) def test_get_flavor(self): db_flavor = objects.Flavor.get_by_name(self.context, 'm1.small') inst = objects.Instance(flavor=db_flavor) self.assertEqual(db_flavor['flavorid'], inst.get_flavor().flavorid) def test_get_flavor_namespace(self): db_flavor = objects.Flavor.get_by_name(self.context, 'm1.small') inst = objects.Instance(old_flavor=db_flavor) self.assertEqual(db_flavor['flavorid'], inst.get_flavor('old').flavorid) @mock.patch.object(db, 'instance_metadata_delete') def test_delete_metadata_key(self, db_delete): inst = objects.Instance(context=self.context, id=1, uuid=uuids.instance) inst.metadata = {'foo': '1', 'bar': '2'} inst.obj_reset_changes() inst.delete_metadata_key('foo') self.assertEqual({'bar': '2'}, inst.metadata) self.assertEqual({}, inst.obj_get_changes()) db_delete.assert_called_once_with(self.context, inst.uuid, 'foo') def test_reset_changes(self): inst = objects.Instance() inst.metadata = {'1985': 'present'} inst.system_metadata = {'1955': 'past'} self.assertEqual({}, inst._orig_metadata) inst.obj_reset_changes(['metadata']) self.assertEqual({'1985': 'present'}, inst._orig_metadata) self.assertEqual({}, inst._orig_system_metadata) def test_load_generic_calls_handler(self): inst = objects.Instance(context=self.context, uuid=uuids.instance) with mock.patch.object(inst, '_load_generic') as mock_load: def fake_load(name): inst.system_metadata = {} mock_load.side_effect = fake_load inst.system_metadata mock_load.assert_called_once_with('system_metadata') def test_load_fault_calls_handler(self): inst = objects.Instance(context=self.context, uuid=uuids.instance) with mock.patch.object(inst, '_load_fault') as mock_load: def fake_load(): inst.fault = None mock_load.side_effect = fake_load inst.fault mock_load.assert_called_once_with() def test_load_ec2_ids_calls_handler(self): inst = objects.Instance(context=self.context, uuid=uuids.instance) with mock.patch.object(inst, '_load_ec2_ids') as mock_load: def fake_load(): inst.ec2_ids = 
objects.EC2Ids(instance_id='fake-inst', ami_id='fake-ami') mock_load.side_effect = fake_load inst.ec2_ids mock_load.assert_called_once_with() def test_load_migration_context(self): inst = instance.Instance(context=self.context, uuid=uuids.instance) with mock.patch.object( objects.MigrationContext, 'get_by_instance_uuid', return_value=test_mig_ctxt.fake_migration_context_obj ) as mock_get: inst.migration_context mock_get.assert_called_once_with(self.context, inst.uuid) def test_load_migration_context_no_context(self): inst = instance.Instance(context=self.context, uuid=uuids.instance) with mock.patch.object( objects.MigrationContext, 'get_by_instance_uuid', side_effect=exception.MigrationContextNotFound( instance_uuid=inst.uuid) ) as mock_get: mig_ctxt = inst.migration_context mock_get.assert_called_once_with(self.context, inst.uuid) self.assertIsNone(mig_ctxt) def test_load_migration_context_no_data(self): inst = instance.Instance(context=self.context, uuid=uuids.instance) with mock.patch.object( objects.MigrationContext, 'get_by_instance_uuid') as mock_get: loaded_ctxt = inst._load_migration_context(db_context=None) self.assertFalse(mock_get.called) self.assertIsNone(loaded_ctxt) def test_apply_revert_migration_context(self): inst = instance.Instance(context=self.context, uuid=uuids.instance, numa_topology=None, pci_requests=None, pci_devices=None) inst.migration_context = test_mig_ctxt.get_fake_migration_context_obj( self.context) inst.apply_migration_context() attrs_type = {'numa_topology': objects.InstanceNUMATopology, 'pci_requests': objects.InstancePCIRequests, 'pci_devices': objects.PciDeviceList, 'resources': objects.ResourceList} for attr_name in instance._MIGRATION_CONTEXT_ATTRS: value = getattr(inst, attr_name) self.assertIsInstance(value, attrs_type[attr_name]) inst.revert_migration_context() for attr_name in instance._MIGRATION_CONTEXT_ATTRS: value = getattr(inst, attr_name) self.assertIsNone(value) def test_drop_migration_context(self): inst = instance.Instance(context=self.context, uuid=uuids.instance) inst.migration_context = test_mig_ctxt.get_fake_migration_context_obj( self.context) inst.migration_context.instance_uuid = inst.uuid inst.migration_context.id = 7 with mock.patch( 'nova.db.api.instance_extra_update_by_uuid') as update_extra: inst.drop_migration_context() self.assertIsNone(inst.migration_context) update_extra.assert_called_once_with(self.context, inst.uuid, {"migration_context": None}) def test_mutated_migration_context(self): numa_topology = (test_instance_numa. 
fake_obj_numa_topology.obj_clone()) numa_topology.cells[0].memory = 1024 numa_topology.cells[1].memory = 1024 pci_requests = objects.InstancePCIRequests(requests=[ objects.InstancePCIRequest(count=1, spec=[])]) pci_devices = pci_device.PciDeviceList() resources = objects.ResourceList() inst = instance.Instance(context=self.context, uuid=uuids.instance, numa_topology=numa_topology, pci_requests=pci_requests, pci_devices=pci_devices, resources=resources) expected_objs = {'numa_topology': numa_topology, 'pci_requests': pci_requests, 'pci_devices': pci_devices, 'resources': resources} inst.migration_context = test_mig_ctxt.get_fake_migration_context_obj( self.context) with inst.mutated_migration_context(): for attr_name in instance._MIGRATION_CONTEXT_ATTRS: inst_value = getattr(inst, attr_name) migration_context_value = ( getattr(inst.migration_context, 'new_' + attr_name)) self.assertIs(inst_value, migration_context_value) for attr_name in instance._MIGRATION_CONTEXT_ATTRS: inst_value = getattr(inst, attr_name) self.assertIs(expected_objs[attr_name], inst_value) @mock.patch('nova.objects.Instance.obj_load_attr', new_callable=mock.NonCallableMock) # asserts not called def test_mutated_migration_context_early_exit(self, obj_load_attr): """Tests that we exit early from mutated_migration_context if the migration_context attribute is set to None meaning this instance is not being migrated. """ inst = instance.Instance(context=self.context, migration_context=None) for attr in instance._MIGRATION_CONTEXT_ATTRS: self.assertNotIn(attr, inst) with inst.mutated_migration_context(): for attr in instance._MIGRATION_CONTEXT_ATTRS: self.assertNotIn(attr, inst) def test_clear_numa_topology(self): numa_topology = test_instance_numa.fake_obj_numa_topology.obj_clone() numa_topology.cells[0].id = 42 numa_topology.cells[1].id = 43 inst = instance.Instance(context=self.context, uuid=uuids.instance, numa_topology=numa_topology) inst.obj_reset_changes() inst.clear_numa_topology() self.assertIn('numa_topology', inst.obj_what_changed()) self.assertEqual(-1, numa_topology.cells[0].id) self.assertEqual(-1, numa_topology.cells[1].id) @mock.patch.object(objects.Instance, 'get_by_uuid') def test_load_generic(self, mock_get): inst2 = instance.Instance(metadata={'foo': 'bar'}) mock_get.return_value = inst2 inst = instance.Instance(context=self.context, uuid=uuids.instance) inst.metadata @mock.patch.object(objects.Instance, 'get_by_uuid') def test_load_something_unspecial(self, mock_get): inst2 = objects.Instance(vm_state=vm_states.ACTIVE, task_state=task_states.SCHEDULING) mock_get.return_value = inst2 inst = instance.Instance(context=self.context, uuid=uuids.instance) self.assertEqual(vm_states.ACTIVE, inst.vm_state) self.assertEqual(task_states.SCHEDULING, inst.task_state) mock_get.assert_called_once_with(self.context, uuid=uuids.instance, expected_attrs=['vm_state']) @mock.patch('nova.db.api.instance_fault_get_by_instance_uuids') def test_load_fault(self, mock_get): fake_fault = test_instance_fault.fake_faults['fake-uuid'][0] mock_get.return_value = {uuids.load_fault_instance: [fake_fault]} inst = objects.Instance(context=self.context, uuid=uuids.load_fault_instance) fault = inst.fault mock_get.assert_called_once_with(self.context, [uuids.load_fault_instance]) self.assertEqual(fake_fault['id'], fault.id) self.assertNotIn('metadata', inst.obj_what_changed()) @mock.patch('nova.objects.EC2Ids.get_by_instance') def test_load_ec2_ids(self, mock_get): fake_ec2_ids = objects.EC2Ids(instance_id='fake-inst', ami_id='fake-ami') 
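# Illustrative note on the lazy-load mechanism these test_load_* cases
# exercise (an inference from the tests here, not an authoritative
# statement of the implementation): reading an attribute that was never
# set -- e.g. the plain attribute read "inst.ec2_ids" just below -- goes
# through obj_load_attr(), which Instance dispatches to a per-field
# helper such as _load_ec2_ids(); that helper is what ends up calling the
# mocked EC2Ids.get_by_instance, so each test only needs to mock the
# loader's data source and assert it was called once.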
mock_get.return_value = fake_ec2_ids inst = objects.Instance(context=self.context, uuid=uuids.instance) ec2_ids = inst.ec2_ids mock_get.assert_called_once_with(self.context, inst) self.assertEqual(fake_ec2_ids, ec2_ids) @mock.patch('nova.objects.SecurityGroupList.get_by_instance') def test_load_security_groups(self, mock_get): secgroups = [] for name in ('foo', 'bar'): secgroup = security_group.SecurityGroup() secgroup.name = name secgroups.append(secgroup) fake_secgroups = security_group.SecurityGroupList(objects=secgroups) mock_get.return_value = fake_secgroups inst = objects.Instance(context=self.context, uuid=uuids.instance) secgroups = inst.security_groups mock_get.assert_called_once_with(self.context, inst) self.assertEqual(fake_secgroups, secgroups) @mock.patch('nova.objects.PciDeviceList.get_by_instance_uuid') def test_load_pci_devices(self, mock_get): fake_pci_devices = pci_device.PciDeviceList() mock_get.return_value = fake_pci_devices inst = objects.Instance(context=self.context, uuid=uuids.pci_devices) pci_devices = inst.pci_devices mock_get.assert_called_once_with(self.context, uuids.pci_devices) self.assertEqual(fake_pci_devices, pci_devices) @mock.patch('nova.objects.ResourceList.get_by_instance_uuid') def test_load_resources(self, mock_get): fake_resources = objects.ResourceList() mock_get.return_value = fake_resources inst = objects.Instance(context=self.context, uuid=uuids.resources) resources = inst.resources mock_get.assert_called_once_with(self.context, uuids.resources) self.assertEqual(fake_resources, resources) def test_get_with_extras(self): pci_requests = objects.InstancePCIRequests(requests=[ objects.InstancePCIRequest(count=123, spec=[])]) inst = objects.Instance(context=self.context, user_id=self.context.user_id, project_id=self.context.project_id, pci_requests=pci_requests) inst.create() uuid = inst.uuid inst = objects.Instance.get_by_uuid(self.context, uuid) self.assertFalse(inst.obj_attr_is_set('pci_requests')) inst = objects.Instance.get_by_uuid( self.context, uuid, expected_attrs=['pci_requests']) self.assertTrue(inst.obj_attr_is_set('pci_requests')) def test_obj_clone(self): # Make sure clone shows no changes when no metadata is set inst1 = objects.Instance(uuid=uuids.instance) inst1.obj_reset_changes() inst1 = inst1.obj_clone() self.assertEqual(len(inst1.obj_what_changed()), 0) # Make sure clone shows no changes when metadata is set inst1 = objects.Instance(uuid=uuids.instance) inst1.metadata = dict(key1='val1') inst1.system_metadata = dict(key1='val1') inst1.obj_reset_changes() inst1 = inst1.obj_clone() self.assertEqual(len(inst1.obj_what_changed()), 0) def test_obj_make_compatible(self): inst_obj = objects.Instance( # trusted_certs were added in 2.4 trusted_certs=objects.TrustedCerts(ids=[uuids.cert1]), # hidden was added in 2.6 hidden=True) versions = ovo_base.obj_tree_get_versions('Instance') data = lambda x: x['nova_object.data'] primitive = data(inst_obj.obj_to_primitive( target_version='2.5', version_manifest=versions)) self.assertIn('trusted_certs', primitive) self.assertNotIn('hidden', primitive) class TestInstanceObject(test_objects._LocalTest, _TestInstanceObject): def _test_save_objectfield_fk_constraint_fails(self, foreign_key, expected_exception): # NOTE(danms): Do this here and not in the remote test because # we're mocking out obj_attr_is_set() without the thing actually # being set, which confuses the heck out of the serialization # stuff. 
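# Illustrative sketch of the behaviour driven below (a hedged assumption
# about Instance.save() internals, not a verbatim copy of them): a
# DBReferenceError raised while persisting one of the per-field extras is
# expected to be translated when its foreign key points at the instance
# row, roughly along the lines of
#     try:
#         self._save_pci_requests(context)   # or any other _save_<field>
#     except db_exc.DBReferenceError as exc:
#         if exc.key == 'instance_uuid':
#             raise exception.InstanceNotFound(instance_id=self.uuid)
#         raise
# The loop below forces that error out of every _save_<field> method and
# checks that the expected exception surfaces from save().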
error = db_exc.DBReferenceError('table', 'constraint', foreign_key, 'key_table') # Prevent lazy-loading any fields, results in InstanceNotFound attrs = objects.instance.INSTANCE_OPTIONAL_ATTRS instance = fake_instance.fake_instance_obj(self.context, expected_attrs=attrs) fields_with_save_methods = [field for field in instance.fields if hasattr(instance, '_save_%s' % field)] for field in fields_with_save_methods: @mock.patch.object(instance, '_save_%s' % field) @mock.patch.object(instance, 'obj_attr_is_set') def _test(mock_is_set, mock_save_field): mock_is_set.return_value = True mock_save_field.side_effect = error instance.obj_reset_changes(fields=[field]) instance._changed_fields.add(field) self.assertRaises(expected_exception, instance.save) instance.obj_reset_changes(fields=[field]) _test() def test_save_objectfield_missing_instance_row(self): self._test_save_objectfield_fk_constraint_fails( 'instance_uuid', exception.InstanceNotFound) def test_save_objectfield_reraises_if_not_instance_related(self): self._test_save_objectfield_fk_constraint_fails( 'other_foreign_key', db_exc.DBReferenceError) class TestRemoteInstanceObject(test_objects._RemoteTest, _TestInstanceObject): pass class _TestInstanceListObject(object): def fake_instance(self, id, updates=None): db_inst = fake_instance.fake_db_instance(id=2, access_ip_v4='1.2.3.4', access_ip_v6='::1') db_inst['terminated_at'] = None db_inst['deleted_at'] = None db_inst['created_at'] = None db_inst['updated_at'] = None db_inst['launched_at'] = datetime.datetime(1955, 11, 12, 22, 4, 0) db_inst['security_groups'] = [] db_inst['deleted'] = 0 db_inst['info_cache'] = dict(test_instance_info_cache.fake_info_cache, instance_uuid=db_inst['uuid']) if updates: db_inst.update(updates) return db_inst @mock.patch.object(db, 'instance_get_all_by_filters') def test_get_all_by_filters(self, mock_get_all): fakes = [self.fake_instance(1), self.fake_instance(2)] mock_get_all.return_value = fakes inst_list = objects.InstanceList.get_by_filters( self.context, {'foo': 'bar'}, 'uuid', 'asc', expected_attrs=['metadata'], use_slave=False) for i in range(0, len(fakes)): self.assertIsInstance(inst_list.objects[i], instance.Instance) self.assertEqual(fakes[i]['uuid'], inst_list.objects[i].uuid) mock_get_all.assert_called_once_with(self.context, {'foo': 'bar'}, 'uuid', 'asc', limit=None, marker=None, columns_to_join=['metadata']) @mock.patch.object(db, 'instance_get_all_by_filters_sort') def test_get_all_by_filters_sorted(self, mock_get_all): fakes = [self.fake_instance(1), self.fake_instance(2)] mock_get_all.return_value = fakes inst_list = objects.InstanceList.get_by_filters( self.context, {'foo': 'bar'}, expected_attrs=['metadata'], use_slave=False, sort_keys=['uuid'], sort_dirs=['asc']) for i in range(0, len(fakes)): self.assertIsInstance(inst_list.objects[i], instance.Instance) self.assertEqual(fakes[i]['uuid'], inst_list.objects[i].uuid) mock_get_all.assert_called_once_with(self.context, {'foo': 'bar'}, limit=None, marker=None, columns_to_join=['metadata'], sort_keys=['uuid'], sort_dirs=['asc']) @mock.patch.object(db, 'instance_get_all_by_filters_sort') @mock.patch.object(db, 'instance_get_all_by_filters') def test_get_all_by_filters_calls_non_sort(self, mock_get_by_filters, mock_get_by_filters_sort): '''Verifies InstanceList.get_by_filters calls correct DB function.''' # Single sort key/direction is set, call non-sorted DB function objects.InstanceList.get_by_filters( self.context, {'foo': 'bar'}, sort_key='key', sort_dir='dir', limit=100, marker='uuid', 
use_slave=True) mock_get_by_filters.assert_called_once_with( self.context, {'foo': 'bar'}, 'key', 'dir', limit=100, marker='uuid', columns_to_join=None) self.assertEqual(0, mock_get_by_filters_sort.call_count) @mock.patch.object(db, 'instance_get_all_by_filters_sort') @mock.patch.object(db, 'instance_get_all_by_filters') def test_get_all_by_filters_calls_sort(self, mock_get_by_filters, mock_get_by_filters_sort): '''Verifies InstanceList.get_by_filters calls correct DB function.''' # Multiple sort keys/directions are set, call sorted DB function objects.InstanceList.get_by_filters( self.context, {'foo': 'bar'}, limit=100, marker='uuid', use_slave=True, sort_keys=['key1', 'key2'], sort_dirs=['dir1', 'dir2']) mock_get_by_filters_sort.assert_called_once_with( self.context, {'foo': 'bar'}, limit=100, marker='uuid', columns_to_join=None, sort_keys=['key1', 'key2'], sort_dirs=['dir1', 'dir2']) self.assertEqual(0, mock_get_by_filters.call_count) @mock.patch.object(db, 'instance_get_all_by_filters') def test_get_all_by_filters_works_for_cleaned(self, mock_get_all): fakes = [self.fake_instance(1), self.fake_instance(2, updates={'deleted': 2, 'cleaned': None})] self.context.read_deleted = 'yes' mock_get_all.return_value = [fakes[1]] inst_list = objects.InstanceList.get_by_filters( self.context, {'deleted': True, 'cleaned': False}, 'uuid', 'asc', expected_attrs=['metadata'], use_slave=False) self.assertEqual(1, len(inst_list)) self.assertIsInstance(inst_list.objects[0], instance.Instance) self.assertEqual(fakes[1]['uuid'], inst_list.objects[0].uuid) mock_get_all.assert_called_once_with( self.context, {'deleted': True, 'cleaned': False}, 'uuid', 'asc', limit=None, marker=None, columns_to_join=['metadata']) @mock.patch.object(db, 'instance_get_all_by_host') def test_get_by_host(self, mock_get_all): fakes = [self.fake_instance(1), self.fake_instance(2)] mock_get_all.return_value = fakes inst_list = objects.InstanceList.get_by_host(self.context, 'foo') for i in range(0, len(fakes)): self.assertIsInstance(inst_list.objects[i], instance.Instance) self.assertEqual(fakes[i]['uuid'], inst_list.objects[i].uuid) self.assertEqual(self.context, inst_list.objects[i]._context) self.assertEqual(set(), inst_list.obj_what_changed()) mock_get_all.assert_called_once_with(self.context, 'foo', columns_to_join=None) @mock.patch.object(db, 'instance_get_all_by_host_and_node') def test_get_by_host_and_node(self, mock_get_all): fakes = [self.fake_instance(1), self.fake_instance(2)] mock_get_all.return_value = fakes inst_list = objects.InstanceList.get_by_host_and_node(self.context, 'foo', 'bar') for i in range(0, len(fakes)): self.assertIsInstance(inst_list.objects[i], instance.Instance) self.assertEqual(fakes[i]['uuid'], inst_list.objects[i].uuid) mock_get_all.assert_called_once_with(self.context, 'foo', 'bar', columns_to_join=None) @mock.patch.object(db, 'instance_get_all_by_host_and_not_type') def test_get_by_host_and_not_type(self, mock_get_all): fakes = [self.fake_instance(1), self.fake_instance(2)] mock_get_all.return_value = fakes inst_list = objects.InstanceList.get_by_host_and_not_type( self.context, 'foo', 'bar') for i in range(0, len(fakes)): self.assertIsInstance(inst_list.objects[i], instance.Instance) self.assertEqual(fakes[i]['uuid'], inst_list.objects[i].uuid) mock_get_all.assert_called_once_with(self.context, 'foo', type_id='bar') @mock.patch('nova.objects.instance._expected_cols') @mock.patch('nova.db.api.instance_get_all') def test_get_all(self, mock_get_all, mock_exp): fakes = [self.fake_instance(1), 
self.fake_instance(2)] mock_get_all.return_value = fakes mock_exp.return_value = mock.sentinel.exp_att inst_list = objects.InstanceList.get_all( self.context, expected_attrs='fake') mock_get_all.assert_called_once_with( self.context, columns_to_join=mock.sentinel.exp_att) for i in range(0, len(fakes)): self.assertIsInstance(inst_list.objects[i], instance.Instance) self.assertEqual(fakes[i]['uuid'], inst_list.objects[i].uuid) @mock.patch.object(db, 'instance_get_all_hung_in_rebooting') def test_get_hung_in_rebooting(self, mock_get_all): fakes = [self.fake_instance(1), self.fake_instance(2)] dt = utils.isotime() mock_get_all.return_value = fakes inst_list = objects.InstanceList.get_hung_in_rebooting(self.context, dt) for i in range(0, len(fakes)): self.assertIsInstance(inst_list.objects[i], instance.Instance) self.assertEqual(fakes[i]['uuid'], inst_list.objects[i].uuid) mock_get_all.assert_called_once_with(self.context, dt) def test_get_active_by_window_joined(self): fakes = [self.fake_instance(1), self.fake_instance(2)] # NOTE(mriedem): Send in a timezone-naive datetime since the # InstanceList.get_active_by_window_joined method should convert it # to tz-aware for the DB API call, which we'll assert with our stub. dt = timeutils.utcnow() def fake_instance_get_active_by_window_joined(context, begin, end, project_id, host, columns_to_join, limit=None, marker=None): # make sure begin is tz-aware self.assertIsNotNone(begin.utcoffset()) self.assertIsNone(end) self.assertEqual(['metadata'], columns_to_join) return fakes with mock.patch.object(db, 'instance_get_active_by_window_joined', fake_instance_get_active_by_window_joined): inst_list = objects.InstanceList.get_active_by_window_joined( self.context, dt, expected_attrs=['metadata']) for fake, obj in zip(fakes, inst_list.objects): self.assertIsInstance(obj, instance.Instance) self.assertEqual(fake['uuid'], obj.uuid) @mock.patch.object(db, 'instance_fault_get_by_instance_uuids') @mock.patch.object(db, 'instance_get_all_by_host') def test_with_fault(self, mock_get_all, mock_fault_get): fake_insts = [ fake_instance.fake_db_instance(uuid=uuids.faults_instance, host='host'), fake_instance.fake_db_instance(uuid=uuids.faults_instance_nonexist, host='host'), ] fake_faults = test_instance_fault.fake_faults mock_get_all.return_value = fake_insts mock_fault_get.return_value = fake_faults instances = objects.InstanceList.get_by_host(self.context, 'host', expected_attrs=['fault'], use_slave=False) self.assertEqual(2, len(instances)) self.assertEqual(fake_faults['fake-uuid'][0], dict(instances[0].fault)) self.assertIsNone(instances[1].fault) mock_get_all.assert_called_once_with(self.context, 'host', columns_to_join=['fault']) mock_fault_get.assert_called_once_with(self.context, [x['uuid'] for x in fake_insts]) @mock.patch.object(db, 'instance_fault_get_by_instance_uuids') def test_fill_faults(self, mock_fault_get): inst1 = objects.Instance(uuid=uuids.db_fault_1) inst2 = objects.Instance(uuid=uuids.db_fault_2) insts = [inst1, inst2] for inst in insts: inst.obj_reset_changes() db_faults = { 'uuid1': [{'id': 123, 'instance_uuid': uuids.db_fault_1, 'code': 456, 'message': 'Fake message', 'details': 'No details', 'host': 'foo', 'deleted': False, 'deleted_at': None, 'updated_at': None, 'created_at': None, } ]} mock_fault_get.return_value = db_faults inst_list = objects.InstanceList() inst_list._context = self.context inst_list.objects = insts faulty = inst_list.fill_faults() self.assertEqual([uuids.db_fault_1], list(faulty)) 
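# Illustrative note (inferred from the surrounding assertions rather than
# stated by the implementation): fill_faults() is expected to make a
# single instance_fault_get_by_instance_uuids() query covering every
# instance in the list, attach the newest matching fault to each
# instance's .fault attribute (None when there is no fault), clear the
# change tracking so 'fault' is not reported as dirty, and return only
# the uuids that actually had faults -- hence the single
# uuids.db_fault_1 entry asserted above.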
self.assertEqual(db_faults['uuid1'][0]['message'], inst_list[0].fault.message) self.assertIsNone(inst_list[1].fault) for inst in inst_list: self.assertEqual(set(), inst.obj_what_changed()) mock_fault_get.assert_called_once_with(self.context, [x.uuid for x in insts], latest=True) @mock.patch('nova.objects.instance.Instance.obj_make_compatible') def test_get_by_security_group(self, mock_compat): fake_secgroup = dict(test_security_group.fake_secgroup) fake_secgroup['instances'] = [ fake_instance.fake_db_instance(id=1, system_metadata={'foo': 'bar'}), fake_instance.fake_db_instance(id=2), ] with mock.patch.object(db, 'security_group_get') as sgg: sgg.return_value = fake_secgroup secgroup = security_group.SecurityGroup() secgroup.id = fake_secgroup['id'] instances = instance.InstanceList.get_by_security_group( self.context, secgroup) self.assertEqual(2, len(instances)) self.assertEqual([1, 2], [x.id for x in instances]) self.assertTrue(instances[0].obj_attr_is_set('system_metadata')) self.assertEqual({'foo': 'bar'}, instances[0].system_metadata) def test_get_by_security_group_after_destroy(self): db_sg = db.security_group_create( self.context, {'name': 'foo', 'description': 'test group', 'user_id': self.context.user_id, 'project_id': self.context.project_id}) self.assertFalse(db.security_group_in_use(self.context, db_sg.id)) inst = objects.Instance( context=self.context, user_id=self.context.user_id, project_id=self.context.project_id) inst.create() db.instance_add_security_group(self.context, inst.uuid, db_sg.id) self.assertTrue(db.security_group_in_use(self.context, db_sg.id)) inst.destroy() self.assertFalse(db.security_group_in_use(self.context, db_sg.id)) @mock.patch('nova.db.api.instance_get_all_uuids_by_hosts') def test_get_uuids_by_host_no_match(self, mock_get_all): mock_get_all.return_value = collections.defaultdict(list) actual_uuids = objects.InstanceList.get_uuids_by_host( self.context, 'b') self.assertEqual([], actual_uuids) mock_get_all.assert_called_once_with(self.context, ['b']) @mock.patch('nova.db.api.instance_get_all_uuids_by_hosts') def test_get_uuids_by_host(self, mock_get_all): fake_instances = [uuids.inst1, uuids.inst2] mock_get_all.return_value = { 'b': fake_instances } actual_uuids = objects.InstanceList.get_uuids_by_host( self.context, 'b') self.assertEqual(fake_instances, actual_uuids) mock_get_all.assert_called_once_with(self.context, ['b']) @mock.patch('nova.db.api.instance_get_all_uuids_by_hosts') def test_get_uuids_by_hosts(self, mock_get_all): fake_instances_a = [uuids.inst1, uuids.inst2] fake_instances_b = [uuids.inst3, uuids.inst4] fake_instances = { 'a': fake_instances_a, 'b': fake_instances_b } mock_get_all.return_value = fake_instances actual_uuids = objects.InstanceList.get_uuids_by_hosts( self.context, ['a', 'b']) self.assertEqual(fake_instances, actual_uuids) mock_get_all.assert_called_once_with(self.context, ['a', 'b']) class TestInstanceListObject(test_objects._LocalTest, _TestInstanceListObject): # No point in doing this db-specific test twice for remote def test_hidden_filter_query(self): """Check that our instance_get_by_filters() honors hidden properly As reported in bug #1862205, we need to properly handle instances with the hidden field set to NULL and not expect SQLAlchemy to translate those values on SELECT. 
""" values = {'user_id': self.context.user_id, 'project_id': self.context.project_id, 'host': 'foo'} for hidden_value in (True, False): db.instance_create(self.context, dict(values, hidden=hidden_value)) # NOTE(danms): Because the model has default=False, we can not use # it to create an instance with a hidden value of NULL. So, do it # manually here. engine = sql_api.get_engine() table = sql_models.Instance.__table__ with engine.connect() as conn: update = table.insert().values(user_id=self.context.user_id, project_id=self.context.project_id, uuid=uuids.nullinst, host='foo', hidden=None) conn.execute(update) insts = objects.InstanceList.get_by_filters(self.context, {'hidden': True}) # We created one hidden instance above, so expect only that one # to come out of this query. self.assertEqual(1, len(insts)) # We created one unhidden instance above, and one specifically # with a NULL value to represent an unmigrated instance, which # defaults to hidden=False, so expect both of those here. insts = objects.InstanceList.get_by_filters(self.context, {'hidden': False}) self.assertEqual(2, len(insts)) # Do the same check as above, but make sure hidden=False is the # default behavior. insts = objects.InstanceList.get_by_filters(self.context, {}) self.assertEqual(2, len(insts)) class TestRemoteInstanceListObject(test_objects._RemoteTest, _TestInstanceListObject): pass class TestInstanceObjectMisc(test.NoDBTestCase): def test_expected_cols(self): self.stub_out('nova.objects.instance._INSTANCE_OPTIONAL_JOINED_FIELDS', ['bar']) self.assertEqual(['bar'], instance._expected_cols(['foo', 'bar'])) self.assertIsNone(instance._expected_cols(None)) def test_expected_cols_extra(self): self.assertEqual(['metadata', 'extra', 'extra.numa_topology'], instance._expected_cols(['metadata', 'numa_topology'])) def test_expected_cols_no_duplicates(self): expected_attr = ['metadata', 'system_metadata', 'info_cache', 'security_groups', 'info_cache', 'metadata', 'pci_devices', 'tags', 'extra', 'flavor'] result_list = instance._expected_cols(expected_attr) self.assertEqual(len(result_list), len(set(expected_attr))) self.assertEqual(['metadata', 'system_metadata', 'info_cache', 'security_groups', 'pci_devices', 'tags', 'extra', 'extra.flavor'], result_list) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_instance_action.py0000664000175000017500000004725500000000000024072 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import copy import traceback import mock from oslo_utils import fixture as utils_fixture from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils import six from nova.db import api as db from nova import exception from nova import objects from nova.objects import instance_action from nova import test from nova.tests.unit.objects import test_objects NOW = timeutils.utcnow().replace(microsecond=0) fake_action = { 'created_at': NOW, 'deleted_at': None, 'updated_at': None, 'deleted': False, 'id': 123, 'action': 'fake-action', 'instance_uuid': uuids.instance, 'request_id': 'fake-request', 'user_id': 'fake-user', 'project_id': 'fake-project', 'start_time': NOW, 'finish_time': None, 'message': 'foo', } fake_event = { 'created_at': NOW, 'deleted_at': None, 'updated_at': None, 'deleted': False, 'id': 123, 'event': 'fake-event', 'action_id': 123, 'start_time': NOW, 'finish_time': None, 'result': 'fake-result', 'traceback': 'fake-tb', 'host': 'fake-host', 'details': None } class _TestInstanceActionObject(object): @mock.patch.object(db, 'action_get_by_request_id') def test_get_by_request_id(self, mock_get): context = self.context mock_get.return_value = fake_action action = instance_action.InstanceAction.get_by_request_id( context, 'fake-uuid', 'fake-request') self.compare_obj(action, fake_action) mock_get.assert_called_once_with(context, 'fake-uuid', 'fake-request') def test_pack_action_start(self): values = instance_action.InstanceAction.pack_action_start( self.context, 'fake-uuid', 'fake-action') self.assertEqual(values['request_id'], self.context.request_id) self.assertEqual(values['user_id'], self.context.user_id) self.assertEqual(values['project_id'], self.context.project_id) self.assertEqual(values['instance_uuid'], 'fake-uuid') self.assertEqual(values['action'], 'fake-action') self.assertEqual(values['start_time'].replace(tzinfo=None), self.context.timestamp) def test_pack_action_finish(self): self.useFixture(utils_fixture.TimeFixture(NOW)) values = instance_action.InstanceAction.pack_action_finish( self.context, 'fake-uuid') self.assertEqual(values['request_id'], self.context.request_id) self.assertEqual(values['instance_uuid'], 'fake-uuid') self.assertEqual(values['finish_time'].replace(tzinfo=None), NOW) @mock.patch.object(db, 'action_start') def test_action_start(self, mock_start): test_class = instance_action.InstanceAction expected_packed_values = test_class.pack_action_start( self.context, 'fake-uuid', 'fake-action') mock_start.return_value = fake_action action = instance_action.InstanceAction.action_start( self.context, 'fake-uuid', 'fake-action', want_result=True) mock_start.assert_called_once_with(self.context, expected_packed_values) self.compare_obj(action, fake_action) @mock.patch.object(db, 'action_start') def test_action_start_no_result(self, mock_start): test_class = instance_action.InstanceAction expected_packed_values = test_class.pack_action_start( self.context, 'fake-uuid', 'fake-action') mock_start.return_value = fake_action action = instance_action.InstanceAction.action_start( self.context, 'fake-uuid', 'fake-action', want_result=False) mock_start.assert_called_once_with(self.context, expected_packed_values) self.assertIsNone(action) @mock.patch.object(db, 'action_finish') def test_action_finish(self, mock_finish): self.useFixture(utils_fixture.TimeFixture(NOW)) test_class = instance_action.InstanceAction expected_packed_values = test_class.pack_action_finish( self.context, 'fake-uuid') mock_finish.return_value = fake_action action = 
instance_action.InstanceAction.action_finish( self.context, 'fake-uuid', want_result=True) mock_finish.assert_called_once_with(self.context, expected_packed_values) self.compare_obj(action, fake_action) @mock.patch.object(db, 'action_finish') def test_action_finish_no_result(self, mock_finish): self.useFixture(utils_fixture.TimeFixture(NOW)) test_class = instance_action.InstanceAction expected_packed_values = test_class.pack_action_finish( self.context, 'fake-uuid') mock_finish.return_value = fake_action action = instance_action.InstanceAction.action_finish( self.context, 'fake-uuid', want_result=False) mock_finish.assert_called_once_with(self.context, expected_packed_values) self.assertIsNone(action) @mock.patch.object(db, 'action_finish') @mock.patch.object(db, 'action_start') def test_finish(self, mock_start, mock_finish): self.useFixture(utils_fixture.TimeFixture(NOW)) expected_packed_action_start = { 'request_id': self.context.request_id, 'user_id': self.context.user_id, 'project_id': self.context.project_id, 'instance_uuid': uuids.instance, 'action': 'fake-action', 'start_time': self.context.timestamp, 'updated_at': self.context.timestamp, } expected_packed_action_finish = { 'request_id': self.context.request_id, 'instance_uuid': uuids.instance, 'finish_time': NOW, 'updated_at': NOW, } mock_start.return_value = fake_action mock_finish.return_value = fake_action action = instance_action.InstanceAction.action_start( self.context, uuids.instance, 'fake-action') action.finish() mock_start.assert_called_once_with(self.context, expected_packed_action_start) mock_finish.assert_called_once_with(self.context, expected_packed_action_finish) self.compare_obj(action, fake_action) @mock.patch.object(db, 'actions_get') def test_get_list(self, mock_get): fake_actions = [dict(fake_action, id=1234), dict(fake_action, id=5678)] mock_get.return_value = fake_actions obj_list = instance_action.InstanceActionList.get_by_instance_uuid( self.context, 'fake-uuid') for index, action in enumerate(obj_list): self.compare_obj(action, fake_actions[index]) mock_get.assert_called_once_with(self.context, 'fake-uuid', None, None, None) def test_create_id_in_updates_error(self): action = instance_action.InstanceAction(self.context, id=1) ex = self.assertRaises(exception.ObjectActionError, action.create) self.assertIn('already created', six.text_type(ex)) @mock.patch('nova.db.api.action_start') def test_create(self, mock_action_start): mock_action_start.return_value = fake_action action = instance_action.InstanceAction(self.context) expected_updates = action.obj_get_changes() action.create() mock_action_start.assert_called_once_with( self.context, expected_updates) self.compare_obj(action, fake_action) class TestInstanceActionObject(test_objects._LocalTest, _TestInstanceActionObject): pass class TestRemoteInstanceActionObject(test_objects._RemoteTest, _TestInstanceActionObject): pass class _TestInstanceActionEventObject(object): @mock.patch.object(db, 'action_event_get_by_id') def test_get_by_id(self, mock_get): mock_get.return_value = fake_event event = instance_action.InstanceActionEvent.get_by_id( self.context, 'fake-action-id', 'fake-event-id') self.compare_obj(event, fake_event) mock_get.assert_called_once_with(self.context, 'fake-action-id', 'fake-event-id') @mock.patch.object(db, 'action_event_start') def test_event_start(self, mock_start): self.useFixture(utils_fixture.TimeFixture(NOW)) test_class = instance_action.InstanceActionEvent expected_packed_values = test_class.pack_action_event_start( self.context, 
'fake-uuid', 'fake-event') mock_start.return_value = fake_event event = instance_action.InstanceActionEvent.event_start( self.context, 'fake-uuid', 'fake-event', want_result=True) mock_start.assert_called_once_with(self.context, expected_packed_values) self.compare_obj(event, fake_event) @mock.patch.object(db, 'action_event_start') def test_event_start_no_result(self, mock_start): self.useFixture(utils_fixture.TimeFixture(NOW)) test_class = instance_action.InstanceActionEvent expected_packed_values = test_class.pack_action_event_start( self.context, 'fake-uuid', 'fake-event') mock_start.return_value = fake_event event = instance_action.InstanceActionEvent.event_start( self.context, 'fake-uuid', 'fake-event', want_result=False) mock_start.assert_called_once_with(self.context, expected_packed_values) self.assertIsNone(event) @mock.patch.object(db, 'action_event_finish') def test_event_finish(self, mock_finish): self.useFixture(utils_fixture.TimeFixture(NOW)) test_class = instance_action.InstanceActionEvent expected_packed_values = test_class.pack_action_event_finish( self.context, 'fake-uuid', 'fake-event') expected_packed_values['finish_time'] = NOW self.assertNotIn('details', expected_packed_values) mock_finish.return_value = fake_event event = instance_action.InstanceActionEvent.event_finish( self.context, 'fake-uuid', 'fake-event', want_result=True) mock_finish.assert_called_once_with(self.context, expected_packed_values) self.compare_obj(event, fake_event) @mock.patch.object(db, 'action_event_finish') def test_event_finish_no_result(self, mock_finish): self.useFixture(utils_fixture.TimeFixture(NOW)) test_class = instance_action.InstanceActionEvent expected_packed_values = test_class.pack_action_event_finish( self.context, 'fake-uuid', 'fake-event') expected_packed_values['finish_time'] = NOW mock_finish.return_value = fake_event event = instance_action.InstanceActionEvent.event_finish( self.context, 'fake-uuid', 'fake-event', want_result=False) mock_finish.assert_called_once_with(self.context, expected_packed_values) self.assertIsNone(event) @mock.patch.object(traceback, 'format_tb') @mock.patch.object(db, 'action_event_finish') def test_event_finish_with_failure(self, mock_finish, mock_tb): self.useFixture(utils_fixture.TimeFixture(NOW)) test_class = instance_action.InstanceActionEvent # The NovaException message will get formatted for the 'details' field. exc_val = exception.NoValidHost(reason='some error') expected_packed_values = test_class.pack_action_event_finish( self.context, 'fake-uuid', 'fake-event', exc_val, 'fake-tb') expected_packed_values['finish_time'] = NOW self.assertEqual(exc_val.format_message(), expected_packed_values['details']) fake_event_with_details = copy.deepcopy(fake_event) fake_event_with_details['details'] = expected_packed_values['details'] mock_finish.return_value = fake_event_with_details event = test_class.event_finish_with_failure( self.context, 'fake-uuid', 'fake-event', exc_val, 'fake-tb', want_result=True) mock_finish.assert_called_once_with(self.context, expected_packed_values) self.compare_obj(event, fake_event_with_details) mock_tb.assert_not_called() @mock.patch.object(traceback, 'format_tb') @mock.patch.object(db, 'action_event_finish') def test_event_finish_with_failure_legacy(self, mock_finish, mock_tb): # Tests that exc_tb is serialized when it's not a string type. 
mock_tb.return_value = 'fake-tb' self.useFixture(utils_fixture.TimeFixture(NOW)) test_class = instance_action.InstanceActionEvent # A non-NovaException will use the exception class name for # the 'details' field. exc_val = test.TestingException('non-nova-error') expected_packed_values = test_class.pack_action_event_finish( self.context, 'fake-uuid', 'fake-event', exc_val, 'fake-tb') expected_packed_values['finish_time'] = NOW self.assertEqual('TestingException', expected_packed_values['details']) fake_event_with_details = copy.deepcopy(fake_event) fake_event_with_details['details'] = expected_packed_values['details'] mock_finish.return_value = fake_event_with_details fake_tb = mock.sentinel.fake_tb event = test_class.event_finish_with_failure( self.context, 'fake-uuid', 'fake-event', exc_val=exc_val, exc_tb=fake_tb, want_result=True) # When calling event_finish_with_failure and using exc_val as a kwarg # serialize_args will convert exc_val to non-nova exception class name # form before it reaches event_finish_with_failure. mock_finish.assert_called_once_with(self.context, expected_packed_values) self.compare_obj(event, fake_event_with_details) mock_tb.assert_called_once_with(fake_tb) @mock.patch.object(db, 'action_event_finish') def test_event_finish_with_failure_legacy_unicode(self, mock_finish): # Tests that traceback.format_tb is not called when exc_tb is unicode. self.useFixture(utils_fixture.TimeFixture(NOW)) test_class = instance_action.InstanceActionEvent expected_packed_values = test_class.pack_action_event_finish( self.context, 'fake-uuid', 'fake-event', 'val', six.text_type('fake-tb')) expected_packed_values['finish_time'] = NOW mock_finish.return_value = fake_event event = test_class.event_finish_with_failure( self.context, 'fake-uuid', 'fake-event', exc_val='val', exc_tb=six.text_type('fake-tb'), want_result=True) mock_finish.assert_called_once_with(self.context, expected_packed_values) self.compare_obj(event, fake_event) @mock.patch.object(traceback, 'format_tb') @mock.patch.object(db, 'action_event_finish') def test_event_finish_with_failure_no_result(self, mock_finish, mock_tb): # Tests that traceback.format_tb is not called when exc_tb is a str # and want_result is False, so no event should come back. mock_tb.return_value = 'fake-tb' self.useFixture(utils_fixture.TimeFixture(NOW)) test_class = instance_action.InstanceActionEvent expected_packed_values = test_class.pack_action_event_finish( self.context, 'fake-uuid', 'fake-event', 'val', 'fake-tb') expected_packed_values['finish_time'] = NOW mock_finish.return_value = fake_event event = test_class.event_finish_with_failure( self.context, 'fake-uuid', 'fake-event', 'val', 'fake-tb', want_result=False) mock_finish.assert_called_once_with(self.context, expected_packed_values) self.assertIsNone(event) self.assertFalse(mock_tb.called) @mock.patch.object(db, 'action_events_get') def test_get_by_action(self, mock_get): fake_events = [dict(fake_event, id=1234), dict(fake_event, id=5678)] mock_get.return_value = fake_events obj_list = instance_action.InstanceActionEventList.get_by_action( self.context, 'fake-action-id') for index, event in enumerate(obj_list): self.compare_obj(event, fake_events[index]) mock_get.assert_called_once_with(self.context, 'fake-action-id') @mock.patch('nova.objects.instance_action.InstanceActionEvent.' 
'pack_action_event_finish') @mock.patch('traceback.format_tb') def test_event_finish_with_failure_serialized(self, mock_format, mock_pack): mock_format.return_value = 'traceback' mock_pack.side_effect = test.TestingException exc = exception.NotFound() self.assertRaises( test.TestingException, instance_action.InstanceActionEvent.event_finish_with_failure, self.context, 'fake-uuid', 'fake-event', exc_val=exc, exc_tb=mock.sentinel.exc_tb) mock_pack.assert_called_once_with(self.context, 'fake-uuid', 'fake-event', exc_val=exc.format_message(), exc_tb='traceback') mock_format.assert_called_once_with(mock.sentinel.exc_tb) def test_create_id_in_updates_error(self): event = instance_action.InstanceActionEvent(self.context, id=1) ex = self.assertRaises( exception.ObjectActionError, event.create, fake_action['instance_uuid'], fake_action['request_id']) self.assertIn('already created', six.text_type(ex)) @mock.patch('nova.db.api.action_event_start') def test_create(self, mock_action_event_start): mock_action_event_start.return_value = fake_event event = instance_action.InstanceActionEvent(self.context) expected_updates = event.obj_get_changes() expected_updates['instance_uuid'] = fake_action['instance_uuid'] expected_updates['request_id'] = fake_action['request_id'] event.create(fake_action['instance_uuid'], fake_action['request_id']) mock_action_event_start.assert_called_once_with( self.context, expected_updates) self.compare_obj(event, fake_event) def test_obj_make_compatible(self): action_event_obj = objects.InstanceActionEvent( details=None, # added in 1.4 host='fake-host' # added in 1.2 ) data = lambda x: x['nova_object.data'] primitive = data(action_event_obj.obj_to_primitive( target_version='1.3')) self.assertIn('host', primitive) self.assertNotIn('details', primitive) class TestInstanceActionEventObject(test_objects._LocalTest, _TestInstanceActionEventObject): pass class TestRemoteInstanceActionEventObject(test_objects._RemoteTest, _TestInstanceActionEventObject): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_instance_device_metadata.py0000664000175000017500000001025100000000000025676 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
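# NOTE: illustrative sketch only -- the device-metadata object graph and JSON
# round trip exercised by the tests below; the values simply mirror the
# fake_* fixtures defined later in this module.
#
#     from nova import objects
#
#     nic = objects.NetworkInterfaceMetadata(
#         mac='52:54:00:f6:35:8f', tags=['mytag1'],
#         bus=objects.PCIDeviceBus(address='0000:00:03.0'), vlan=1000)
#     disk = objects.DiskMetadata(
#         bus=objects.PCIDeviceBus(address='0000:00:09.0'), tags=['nfvfunc3'])
#     dev_meta = objects.InstanceDeviceMetadata(devices=[nic, disk])
#
#     # Serialized form as stored in instance_extra.device_metadata ...
#     db_json = dev_meta._to_json()
#     # ... and rehydrated the way get_by_instance_uuid() does it.
#     restored = objects.InstanceDeviceMetadata.obj_from_db(None, db_json)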
import mock from oslo_serialization import jsonutils from nova import objects from nova.tests.unit.objects import test_objects fake_net_interface_meta = objects.NetworkInterfaceMetadata( mac='52:54:00:f6:35:8f', tags=['mytag1'], bus=objects.PCIDeviceBus(address='0000:00:03.0'), vlan=1000) fake_pci_disk_meta = objects.DiskMetadata( bus=objects.PCIDeviceBus(address='0000:00:09.0'), tags=['nfvfunc3']) fake_obj_devices_metadata = objects.InstanceDeviceMetadata( devices=[fake_net_interface_meta, fake_pci_disk_meta]) fake_devices_metadata = fake_obj_devices_metadata._to_json() fake_db_metadata = { 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': 0, 'id': 1, 'device_metadata': fake_obj_devices_metadata._to_json() } fake_old_db_metadata = dict(fake_db_metadata) # copy fake_old_db_metadata['device_metadata'] = jsonutils.dumps( fake_devices_metadata) def get_fake_obj_device_metadata(context): fake_obj_devices_metadata_cpy = fake_obj_devices_metadata.obj_clone() fake_obj_devices_metadata_cpy._context = context return fake_obj_devices_metadata_cpy class _TestInstanceDeviceMetadata(object): def _check_object(self, obj_meta): self.assertTrue(isinstance(obj_meta, objects.NetworkInterfaceMetadata) or isinstance(obj_meta, objects.DiskMetadata)) if isinstance(obj_meta, objects.NetworkInterfaceMetadata): self.assertEqual(obj_meta.mac, '52:54:00:f6:35:8f') self.assertEqual(obj_meta.tags, ['mytag1']) self.assertIsInstance(obj_meta.bus, objects.PCIDeviceBus) self.assertEqual(obj_meta.bus.address, '0000:00:03.0') self.assertEqual(obj_meta.vlan, 1000) elif isinstance(obj_meta, objects.DiskMetadata): self.assertIsInstance(obj_meta.bus, objects.PCIDeviceBus) self.assertEqual(obj_meta.bus.address, '0000:00:09.0') self.assertEqual(obj_meta.tags, ['nfvfunc3']) @mock.patch('nova.db.api.instance_extra_get_by_instance_uuid') def test_get_by_instance_uuid(self, mock_get): mock_get.return_value = fake_db_metadata inst_meta = objects.InstanceDeviceMetadata dev_meta = inst_meta.get_by_instance_uuid( self.context, 'fake_uuid') for obj_meta, fake_meta in zip( dev_meta.devices, fake_obj_devices_metadata.devices): self._check_object(obj_meta) def test_obj_from_db(self): db_meta = fake_db_metadata['device_metadata'] metadata = objects.InstanceDeviceMetadata.obj_from_db(None, db_meta) for obj_meta in metadata.devices: self._check_object(obj_meta) def test_net_if_compatible_pre_1_1(self): vif_obj = objects.NetworkInterfaceMetadata(mac='52:54:00:f6:35:8f') vif_obj.tags = ['test'] vif_obj.vlan = 1000 primitive = vif_obj.obj_to_primitive() self.assertIn('vlan', primitive['nova_object.data']) vif_obj.obj_make_compatible(primitive['nova_object.data'], '1.0') self.assertNotIn('vlan', primitive['nova_object.data']) class TestInstanceDeviceMetadata(test_objects._LocalTest, _TestInstanceDeviceMetadata): pass class TestInstanceDeviceMetadataRemote(test_objects._RemoteTest, _TestInstanceDeviceMetadata): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_instance_fault.py0000664000175000017500000001037200000000000023716 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils.fixture import uuidsentinel as uuids from nova.db import api as db from nova import exception from nova.objects import instance_fault from nova.tests.unit.objects import test_objects fake_faults = { 'fake-uuid': [ {'id': 1, 'instance_uuid': uuids.faults_instance, 'code': 123, 'message': 'msg1', 'details': 'details', 'host': 'host', 'deleted': False, 'created_at': None, 'updated_at': None, 'deleted_at': None}, {'id': 2, 'instance_uuid': uuids.faults_instance, 'code': 456, 'message': 'msg2', 'details': 'details', 'host': 'host', 'deleted': False, 'created_at': None, 'updated_at': None, 'deleted_at': None}, ] } class _TestInstanceFault(object): @mock.patch.object(db, 'instance_fault_get_by_instance_uuids', return_value=fake_faults) def test_get_latest_for_instance(self, get_mock): fault = instance_fault.InstanceFault.get_latest_for_instance( self.context, 'fake-uuid') for key in fake_faults['fake-uuid'][0]: self.assertEqual(fake_faults['fake-uuid'][0][key], fault[key]) get_mock.assert_called_once_with(self.context, ['fake-uuid']) @mock.patch.object(db, 'instance_fault_get_by_instance_uuids', return_value={}) def test_get_latest_for_instance_with_none(self, get_mock): fault = instance_fault.InstanceFault.get_latest_for_instance( self.context, 'fake-uuid') self.assertIsNone(fault) get_mock.assert_called_once_with(self.context, ['fake-uuid']) @mock.patch.object(db, 'instance_fault_get_by_instance_uuids', return_value=fake_faults) def test_get_by_instance(self, get_mock): faults = instance_fault.InstanceFaultList.get_by_instance_uuids( self.context, ['fake-uuid']) for index, db_fault in enumerate(fake_faults['fake-uuid']): for key in db_fault: self.assertEqual(fake_faults['fake-uuid'][index][key], faults[index][key]) get_mock.assert_called_once_with(self.context, ['fake-uuid']) @mock.patch.object(db, 'instance_fault_get_by_instance_uuids', return_value={}) def test_get_by_instance_with_none(self, get_mock): faults = instance_fault.InstanceFaultList.get_by_instance_uuids( self.context, ['fake-uuid']) self.assertEqual(0, len(faults)) get_mock.assert_called_once_with(self.context, ['fake-uuid']) @mock.patch('nova.db.api.instance_fault_create') def test_create(self, mock_create): mock_create.return_value = fake_faults['fake-uuid'][1] fault = instance_fault.InstanceFault(context=self.context) fault.instance_uuid = uuids.faults_instance fault.code = 456 fault.message = 'foo' fault.details = 'you screwed up' fault.host = 'myhost' fault.create() self.assertEqual(2, fault.id) mock_create.assert_called_once_with(self.context, {'instance_uuid': uuids.faults_instance, 'code': 456, 'message': 'foo', 'details': 'you screwed up', 'host': 'myhost'}) def test_create_already_created(self): fault = instance_fault.InstanceFault(context=self.context) fault.id = 1 self.assertRaises(exception.ObjectActionError, fault.create) class TestInstanceFault(test_objects._LocalTest, _TestInstanceFault): pass class TestInstanceFaultRemote(test_objects._RemoteTest, _TestInstanceFault): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/nova/tests/unit/objects/test_instance_group.py0000664000175000017500000004413700000000000023745 0ustar00zuulzuul00000000000000# Copyright (c) 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mock from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from oslo_versionedobjects import exception as ovo_exc from nova import exception from nova import objects from nova.tests.unit.objects import test_objects from nova.tests.unit import utils as test_utils _TS_NOW = timeutils.utcnow(with_timezone=True) # o.vo.fields.DateTimeField converts to tz-aware and # in process we lose microsecond resolution. _TS_NOW = _TS_NOW.replace(microsecond=0) _DB_UUID = uuids.fake _INST_GROUP_POLICY_DB = { 'policy': 'policy1', 'rules': jsonutils.dumps({'max_server_per_host': '2'}), } _INST_GROUP_DB = { 'id': 1, 'uuid': _DB_UUID, 'user_id': 'fake_user', 'project_id': 'fake_project', 'name': 'fake_name', # a group can only have 1 policy associated with it 'policy': _INST_GROUP_POLICY_DB, '_policies': [_INST_GROUP_POLICY_DB], 'members': ['instance_id1', 'instance_id2'], 'created_at': _TS_NOW, 'updated_at': _TS_NOW, } _INST_GROUP_OBJ_VALS = dict( {k: v for k, v in _INST_GROUP_DB.items() if k not in ('policy', '_policies')}, policy=_INST_GROUP_POLICY_DB['policy'], rules=jsonutils.loads(_INST_GROUP_POLICY_DB['rules'])) class _TestInstanceGroupObject(object): @mock.patch('nova.objects.InstanceGroup._get_from_db_by_uuid', return_value=_INST_GROUP_DB) def test_get_by_uuid(self, mock_api_get): obj = objects.InstanceGroup.get_by_uuid(self.context, _DB_UUID) mock_api_get.assert_called_once_with(self.context, _DB_UUID) self.assertEqual(_INST_GROUP_DB['members'], obj.members) self.assertEqual([_INST_GROUP_POLICY_DB['policy']], obj.policies) self.assertEqual(_DB_UUID, obj.uuid) self.assertEqual(_INST_GROUP_DB['project_id'], obj.project_id) self.assertEqual(_INST_GROUP_DB['user_id'], obj.user_id) self.assertEqual(_INST_GROUP_DB['name'], obj.name) self.assertEqual(_INST_GROUP_POLICY_DB['policy'], obj.policy) self.assertEqual({'max_server_per_host': 2}, obj.rules) def test_rules_helper(self): obj = objects.InstanceGroup() self.assertEqual({}, obj.rules) self.assertNotIn('_rules', obj) obj._rules = {} self.assertEqual({}, obj.rules) self.assertIn('_rules', obj) @mock.patch('nova.objects.InstanceGroup._get_from_db_by_instance', return_value=_INST_GROUP_DB) def test_get_by_instance_uuid(self, mock_api_get): objects.InstanceGroup.get_by_instance_uuid( self.context, mock.sentinel.instance_uuid) mock_api_get.assert_called_once_with( self.context, mock.sentinel.instance_uuid) @mock.patch('nova.objects.InstanceGroup._get_from_db_by_uuid') def test_refresh(self, mock_db_get): changed_group = copy.deepcopy(_INST_GROUP_DB) changed_group['name'] = 'new_name' mock_db_get.side_effect = [_INST_GROUP_DB, changed_group] obj = objects.InstanceGroup.get_by_uuid(self.context, _DB_UUID) self.assertEqual(_INST_GROUP_DB['name'], obj.name) 
obj.refresh() self.assertEqual('new_name', obj.name) self.assertEqual(set([]), obj.obj_what_changed()) @mock.patch('nova.compute.utils.notify_about_server_group_update') @mock.patch('nova.objects.InstanceGroup._get_from_db_by_uuid') @mock.patch('nova.objects.instance_group._instance_group_members_add') def test_save(self, mock_members_add, mock_db_get, mock_notify): changed_group = copy.deepcopy(_INST_GROUP_DB) changed_group['name'] = 'new_name' db_group = copy.deepcopy(_INST_GROUP_DB) mock_db_get.return_value = db_group obj = objects.InstanceGroup(self.context, **_INST_GROUP_OBJ_VALS) self.assertEqual(obj.name, 'fake_name') obj.obj_reset_changes() self.assertEqual(set([]), obj.obj_what_changed()) obj.name = 'new_name' obj.members = ['instance_id1'] # Remove member 2 obj.save() self.assertEqual(set([]), obj.obj_what_changed()) mock_members_add.assert_called_once_with( self.context, mock_db_get.return_value, ['instance_id1']) mock_notify.assert_called_once_with(self.context, "update", {'name': 'new_name', 'members': ['instance_id1'], 'server_group_id': _DB_UUID}) @mock.patch('nova.compute.utils.notify_about_server_group_update') @mock.patch('nova.objects.InstanceGroup._get_from_db_by_uuid') def test_save_without_hosts(self, mock_db_get, mock_notify): mock_db_get.return_value = _INST_GROUP_DB obj = objects.InstanceGroup(self.context, **_INST_GROUP_OBJ_VALS) obj.obj_reset_changes() obj.hosts = ['fake-host1'] self.assertRaises(exception.InstanceGroupSaveException, obj.save) # make sure that we can save by removing hosts from what is updated obj.obj_reset_changes(['hosts']) obj.save() # since hosts was the only update, there is no actual call self.assertFalse(mock_notify.called) def test_set_policies_failure(self): group_obj = objects.InstanceGroup(context=self.context, policies=['affinity']) self.assertRaises(ovo_exc.ReadOnlyFieldError, setattr, group_obj, 'policies', ['anti-affinity']) def test_save_policies(self): group_obj = objects.InstanceGroup(context=self.context) group_obj.policies = ['fake-host1'] self.assertRaises(exception.InstanceGroupSaveException, group_obj.save) @mock.patch('nova.compute.utils.notify_about_server_group_action') @mock.patch('nova.compute.utils.notify_about_server_group_update') @mock.patch('nova.objects.InstanceGroup._create_in_db', return_value=_INST_GROUP_DB) def test_create(self, mock_db_create, mock_notify, mock_notify_action): obj = objects.InstanceGroup(context=self.context) obj.uuid = _DB_UUID obj.name = _INST_GROUP_DB['name'] obj.user_id = _INST_GROUP_DB['user_id'] obj.project_id = _INST_GROUP_DB['project_id'] obj.members = _INST_GROUP_DB['members'] obj.policies = [_INST_GROUP_DB['policy']['policy']] obj.updated_at = _TS_NOW obj.created_at = _TS_NOW obj.create() mock_db_create.assert_called_once_with( self.context, {'uuid': _DB_UUID, 'name': _INST_GROUP_DB['name'], 'user_id': _INST_GROUP_DB['user_id'], 'project_id': _INST_GROUP_DB['project_id'], 'created_at': _TS_NOW, 'updated_at': _TS_NOW, }, members=_INST_GROUP_DB['members'], policies=[_INST_GROUP_DB['policy']['policy']], policy=None, rules=None) mock_notify.assert_called_once_with( self.context, "create", {'uuid': _DB_UUID, 'name': _INST_GROUP_DB['name'], 'user_id': _INST_GROUP_DB['user_id'], 'project_id': _INST_GROUP_DB['project_id'], 'created_at': _TS_NOW, 'updated_at': _TS_NOW, 'members': _INST_GROUP_DB['members'], 'policies': [_INST_GROUP_DB['policy']['policy']], 'server_group_id': _DB_UUID}) def _group_matcher(group): """Custom mock call matcher method.""" return (group.uuid == _DB_UUID and 
group.name == _INST_GROUP_DB['name'] and group.user_id == _INST_GROUP_DB['user_id'] and group.project_id == _INST_GROUP_DB['project_id'] and group.created_at == _TS_NOW and group.updated_at == _TS_NOW and group.members == _INST_GROUP_DB['members'] and group.policies == [_INST_GROUP_DB['policy']['policy']] and group.id == 1) group_matcher = test_utils.CustomMockCallMatcher(_group_matcher) self.assertRaises(exception.ObjectActionError, obj.create) mock_notify_action.assert_called_once_with(context=self.context, group=group_matcher, action='create') @mock.patch('nova.compute.utils.notify_about_server_group_action') @mock.patch('nova.compute.utils.notify_about_server_group_update') @mock.patch('nova.objects.InstanceGroup._destroy_in_db') def test_destroy(self, mock_db_delete, mock_notify, mock_notify_action): obj = objects.InstanceGroup(context=self.context) obj.uuid = _DB_UUID obj.destroy() group_matcher = test_utils.CustomMockCallMatcher( lambda group: group.uuid == _DB_UUID) mock_notify_action.assert_called_once_with(context=obj._context, group=group_matcher, action='delete') mock_db_delete.assert_called_once_with(self.context, _DB_UUID) mock_notify.assert_called_once_with(self.context, "delete", {'server_group_id': _DB_UUID}) @mock.patch('nova.compute.utils.notify_about_server_group_add_member') @mock.patch('nova.compute.utils.notify_about_server_group_update') @mock.patch('nova.objects.InstanceGroup._add_members_in_db') def test_add_members(self, mock_members_add_db, mock_notify, mock_notify_add_member): fake_member_models = [{'instance_uuid': mock.sentinel.uuid}] fake_member_uuids = [mock.sentinel.uuid] mock_members_add_db.return_value = fake_member_models members = objects.InstanceGroup.add_members(self.context, _DB_UUID, fake_member_uuids) self.assertEqual(fake_member_uuids, members) mock_members_add_db.assert_called_once_with( self.context, _DB_UUID, fake_member_uuids) mock_notify.assert_called_once_with( self.context, "addmember", {'instance_uuids': fake_member_uuids, 'server_group_id': _DB_UUID}) mock_notify_add_member.assert_called_once_with(self.context, _DB_UUID) @mock.patch('nova.objects.InstanceList.get_by_filters') @mock.patch('nova.objects.InstanceGroup._get_from_db_by_uuid', return_value=_INST_GROUP_DB) def test_count_members_by_user(self, mock_get_db, mock_il_get): mock_il_get.return_value = [mock.ANY] obj = objects.InstanceGroup.get_by_uuid(self.context, _DB_UUID) expected_filters = { 'uuid': ['instance_id1', 'instance_id2'], 'user_id': 'fake_user', 'deleted': False } self.assertEqual(1, obj.count_members_by_user('fake_user')) mock_il_get.assert_called_once_with(self.context, filters=expected_filters) @mock.patch('nova.objects.InstanceList.get_by_filters') @mock.patch('nova.objects.InstanceGroup._get_from_db_by_uuid', return_value=_INST_GROUP_DB) def test_get_hosts(self, mock_get_db, mock_il_get): mock_il_get.return_value = [objects.Instance(host='host1'), objects.Instance(host='host2'), objects.Instance(host=None)] obj = objects.InstanceGroup.get_by_uuid(self.context, _DB_UUID) hosts = obj.get_hosts() self.assertEqual(['instance_id1', 'instance_id2'], obj.members) expected_filters = { 'uuid': ['instance_id1', 'instance_id2'], 'deleted': False } mock_il_get.assert_called_once_with(self.context, filters=expected_filters, expected_attrs=[]) self.assertEqual(2, len(hosts)) self.assertIn('host1', hosts) self.assertIn('host2', hosts) # Test manual exclusion mock_il_get.reset_mock() hosts = obj.get_hosts(exclude=['instance_id1']) expected_filters = { 'uuid': 
set(['instance_id2']), 'deleted': False } mock_il_get.assert_called_once_with(self.context, filters=expected_filters, expected_attrs=[]) def test_obj_make_compatible(self): obj = objects.InstanceGroup(self.context, **_INST_GROUP_OBJ_VALS) data = lambda x: x['nova_object.data'] obj_primitive = data(obj.obj_to_primitive()) self.assertNotIn('metadetails', obj_primitive) obj.obj_make_compatible(obj_primitive, '1.6') self.assertEqual({}, obj_primitive['metadetails']) def test_obj_make_compatible_pre_1_11(self): none_policy_group = copy.deepcopy(_INST_GROUP_DB) none_policy_group['policy'] = None dbs = [_INST_GROUP_DB, none_policy_group] data = lambda x: x['nova_object.data'] for db in dbs: ig = objects.InstanceGroup() obj = ig._from_db_object(self.context, ig, db) # Latest version obj has policy and policies obj_primitive = obj.obj_to_primitive() self.assertIn('policy', data(obj_primitive)) self.assertIn('policies', data(obj_primitive)) # Before 1.10, only has polices which is the list of policy name obj_primitive = obj.obj_to_primitive('1.10') self.assertNotIn('policy', data(obj_primitive)) self.assertIn('policies', data(obj_primitive)) self.assertEqual([db['policy']['policy']] if db['policy'] else [], data(obj_primitive)['policies']) @mock.patch.object(objects.InstanceList, 'get_by_filters') def test_load_hosts(self, mock_get_by_filt): mock_get_by_filt.return_value = [objects.Instance(host='host1'), objects.Instance(host='host2')] obj = objects.InstanceGroup(self.context, members=['uuid1'], uuid=uuids.group) self.assertEqual(2, len(obj.hosts)) self.assertIn('host1', obj.hosts) self.assertIn('host2', obj.hosts) self.assertNotIn('hosts', obj.obj_what_changed()) def test_load_anything_else_but_hosts(self): obj = objects.InstanceGroup(self.context) self.assertRaises(exception.ObjectActionError, getattr, obj, 'members') @mock.patch('nova.objects.InstanceGroup._get_from_db_by_name') def test_get_by_name(self, mock_api_get): db_group = copy.deepcopy(_INST_GROUP_DB) mock_api_get.side_effect = [ db_group, exception.InstanceGroupNotFound(group_uuid='unknown')] ig = objects.InstanceGroup.get_by_name(self.context, 'fake_name') mock_api_get.assert_called_once_with(self.context, 'fake_name') self.assertEqual('fake_name', ig.name) self.assertRaises(exception.InstanceGroupNotFound, objects.InstanceGroup.get_by_name, self.context, 'unknown') @mock.patch('nova.objects.InstanceGroup.get_by_uuid') @mock.patch('nova.objects.InstanceGroup.get_by_name') def test_get_by_hint(self, mock_name, mock_uuid): objects.InstanceGroup.get_by_hint(self.context, _DB_UUID) mock_uuid.assert_called_once_with(self.context, _DB_UUID) objects.InstanceGroup.get_by_hint(self.context, 'name') mock_name.assert_called_once_with(self.context, 'name') class TestInstanceGroupObject(test_objects._LocalTest, _TestInstanceGroupObject): pass class TestRemoteInstanceGroupObject(test_objects._RemoteTest, _TestInstanceGroupObject): pass def _mock_db_list_get(*args, **kwargs): instances = [(uuids.f1, 'f1', 'p1'), (uuids.f2, 'f2', 'p1'), (uuids.f3, 'f3', 'p2'), (uuids.f4, 'f4', 'p2')] result = [] for instance in instances: values = copy.deepcopy(_INST_GROUP_DB) values['uuid'] = instance[0] values['name'] = instance[1] values['project_id'] = instance[2] result.append(values) return result class _TestInstanceGroupListObject(object): @mock.patch('nova.objects.InstanceGroupList._get_from_db') def test_list_all(self, mock_api_get): mock_api_get.side_effect = _mock_db_list_get inst_list = objects.InstanceGroupList.get_all(self.context) self.assertEqual(4, 
len(inst_list.objects)) mock_api_get.assert_called_once_with(self.context) @mock.patch('nova.objects.InstanceGroupList._get_from_db') def test_list_by_project_id(self, mock_api_get): mock_api_get.side_effect = _mock_db_list_get objects.InstanceGroupList.get_by_project_id( self.context, mock.sentinel.project_id) mock_api_get.assert_called_once_with( self.context, project_id=mock.sentinel.project_id) class TestInstanceGroupListObject(test_objects._LocalTest, _TestInstanceGroupListObject): pass class TestRemoteInstanceGroupListObject(test_objects._RemoteTest, _TestInstanceGroupListObject): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_instance_info_cache.py0000664000175000017500000001033600000000000024661 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.db import api as db from nova import exception from nova.network import model as network_model from nova.objects import instance_info_cache from nova.tests.unit.objects import test_objects fake_info_cache = { 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': False, 'instance_uuid': uuids.info_instance, 'network_info': '[]', } class _TestInstanceInfoCacheObject(object): @mock.patch.object(db, 'instance_info_cache_get') def test_get_by_instance_uuid(self, mock_get): nwinfo = network_model.NetworkInfo.hydrate([{'address': 'foo'}]) mock_get.return_value = dict(fake_info_cache, network_info=nwinfo.json()) obj = instance_info_cache.InstanceInfoCache.get_by_instance_uuid( self.context, uuids.info_instance) self.assertEqual(uuids.info_instance, obj.instance_uuid) self.assertEqual(nwinfo, obj.network_info) mock_get.assert_called_once_with(self.context, uuids.info_instance) @mock.patch.object(db, 'instance_info_cache_get', return_value=None) def test_get_by_instance_uuid_no_entries(self, mock_get): self.assertRaises( exception.InstanceInfoCacheNotFound, instance_info_cache.InstanceInfoCache.get_by_instance_uuid, self.context, uuids.info_instance) mock_get.assert_called_once_with(self.context, uuids.info_instance) def test_new(self): obj = instance_info_cache.InstanceInfoCache.new(self.context, uuids.info_instance) self.assertEqual(set(['instance_uuid', 'network_info']), obj.obj_what_changed()) self.assertEqual(uuids.info_instance, obj.instance_uuid) self.assertIsNone(obj.network_info) @mock.patch.object(db, 'instance_info_cache_update') def test_save_updates_self(self, mock_update): fake_updated_at = datetime.datetime(2015, 1, 1) nwinfo = network_model.NetworkInfo.hydrate([{'address': 'foo'}]) nwinfo_json = nwinfo.json() new_info_cache = fake_info_cache.copy() new_info_cache['id'] = 1 new_info_cache['updated_at'] = fake_updated_at new_info_cache['network_info'] = nwinfo_json mock_update.return_value = new_info_cache obj = 
instance_info_cache.InstanceInfoCache(context=self.context) obj.instance_uuid = uuids.info_instance obj.network_info = nwinfo_json obj.save() mock_update.assert_called_once_with(self.context, uuids.info_instance, {'network_info': nwinfo_json}) self.assertEqual(timeutils.normalize_time(fake_updated_at), timeutils.normalize_time(obj.updated_at)) @mock.patch.object(db, 'instance_info_cache_get', return_value=fake_info_cache) def test_refresh(self, mock_get): obj = instance_info_cache.InstanceInfoCache.new(self.context, uuids.info_instance_1) obj.refresh() self.assertEqual(fake_info_cache['instance_uuid'], obj.instance_uuid) mock_get.assert_called_once_with(self.context, uuids.info_instance_1) class TestInstanceInfoCacheObject(test_objects._LocalTest, _TestInstanceInfoCacheObject): pass class TestInstanceInfoCacheObjectRemote(test_objects._RemoteTest, _TestInstanceInfoCacheObject): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/objects/test_instance_mapping.py0000664000175000017500000002624200000000000024241 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils import uuidutils from sqlalchemy.orm import exc as orm_exc from nova import exception from nova import objects from nova.objects import instance_mapping from nova.tests.unit.objects import test_cell_mapping from nova.tests.unit.objects import test_objects def get_db_mapping(**updates): db_mapping = { 'id': 1, 'instance_uuid': uuidutils.generate_uuid(), 'cell_id': None, 'project_id': 'fake-project', 'user_id': 'fake-user', 'created_at': None, 'updated_at': None, 'queued_for_delete': False, } db_mapping["cell_mapping"] = test_cell_mapping.get_db_mapping(id=42) db_mapping['cell_id'] = db_mapping["cell_mapping"]["id"] db_mapping.update(updates) return db_mapping class _TestInstanceMappingObject(object): def _check_cell_map_value(self, db_val, cell_obj): self.assertEqual(db_val, cell_obj.id) @mock.patch.object(instance_mapping.InstanceMapping, '_get_by_instance_uuid_from_db') def test_get_by_instance_uuid(self, uuid_from_db): db_mapping = get_db_mapping() uuid_from_db.return_value = db_mapping mapping_obj = objects.InstanceMapping().get_by_instance_uuid( self.context, db_mapping['instance_uuid']) uuid_from_db.assert_called_once_with(self.context, db_mapping['instance_uuid']) self.compare_obj(mapping_obj, db_mapping, subs={'cell_mapping': 'cell_id'}, comparators={ 'cell_mapping': self._check_cell_map_value}) @mock.patch.object(instance_mapping.InstanceMapping, '_get_by_instance_uuid_from_db') def test_get_by_instance_uuid_cell_mapping_none(self, uuid_from_db): db_mapping = get_db_mapping(cell_mapping=None, cell_id=None) uuid_from_db.return_value = db_mapping mapping_obj = objects.InstanceMapping().get_by_instance_uuid( self.context, db_mapping['instance_uuid']) uuid_from_db.assert_called_once_with(self.context, db_mapping['instance_uuid']) self.compare_obj(mapping_obj, db_mapping, subs={'cell_mapping': 'cell_id'}) 
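    # NOTE: illustrative sketch only -- the typical lookup flow these mapping
    # tests model, assuming ``ctxt`` is a valid RequestContext and ``uuid`` is
    # an instance UUID already recorded in the API database:
    #
    #     im = objects.InstanceMapping.get_by_instance_uuid(ctxt, uuid)
    #     if im.cell_mapping is None:
    #         # the instance has not yet been scheduled into a cell
    #         ...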
@mock.patch.object(instance_mapping.InstanceMapping, '_create_in_db') def test_create(self, create_in_db): db_mapping = get_db_mapping() uuid = db_mapping['instance_uuid'] create_in_db.return_value = db_mapping mapping_obj = objects.InstanceMapping(self.context) mapping_obj.instance_uuid = uuid mapping_obj.cell_mapping = objects.CellMapping(self.context, id=db_mapping['cell_mapping']['id']) mapping_obj.project_id = db_mapping['project_id'] mapping_obj.user_id = db_mapping['user_id'] mapping_obj.create() create_in_db.assert_called_once_with(self.context, {'instance_uuid': uuid, 'queued_for_delete': False, 'cell_id': db_mapping['cell_mapping']['id'], 'project_id': db_mapping['project_id'], 'user_id': db_mapping['user_id']}) self.compare_obj(mapping_obj, db_mapping, subs={'cell_mapping': 'cell_id'}, comparators={ 'cell_mapping': self._check_cell_map_value}) @mock.patch.object(instance_mapping.InstanceMapping, '_create_in_db') def test_create_cell_mapping_none(self, create_in_db): db_mapping = get_db_mapping(cell_mapping=None, cell_id=None) uuid = db_mapping['instance_uuid'] create_in_db.return_value = db_mapping mapping_obj = objects.InstanceMapping(self.context) mapping_obj.instance_uuid = uuid mapping_obj.cell_mapping = None mapping_obj.project_id = db_mapping['project_id'] mapping_obj.user_id = db_mapping['user_id'] mapping_obj.create() create_in_db.assert_called_once_with(self.context, {'instance_uuid': uuid, 'queued_for_delete': False, 'project_id': db_mapping['project_id'], 'user_id': db_mapping['user_id']}) self.compare_obj(mapping_obj, db_mapping, subs={'cell_mapping': 'cell_id'}) self.assertIsNone(mapping_obj.cell_mapping) @mock.patch.object(instance_mapping.InstanceMapping, '_create_in_db') def test_create_cell_mapping_with_qfd_true(self, create_in_db): db_mapping = get_db_mapping(cell_mapping=None, cell_id=None) create_in_db.return_value = db_mapping mapping_obj = objects.InstanceMapping(self.context) mapping_obj.instance_uuid = db_mapping['instance_uuid'] mapping_obj.cell_mapping = None mapping_obj.project_id = db_mapping['project_id'] mapping_obj.user_id = db_mapping['user_id'] mapping_obj.queued_for_delete = True mapping_obj.create() create_in_db.assert_called_once_with(self.context, {'instance_uuid': db_mapping['instance_uuid'], 'queued_for_delete': True, 'project_id': db_mapping['project_id'], 'user_id': db_mapping['user_id']}) @mock.patch.object(instance_mapping.InstanceMapping, '_save_in_db') def test_save(self, save_in_db): db_mapping = get_db_mapping() uuid = db_mapping['instance_uuid'] save_in_db.return_value = db_mapping mapping_obj = objects.InstanceMapping(self.context) mapping_obj.instance_uuid = uuid mapping_obj.cell_mapping = objects.CellMapping(self.context, id=42) mapping_obj.save() save_in_db.assert_called_once_with(self.context, db_mapping['instance_uuid'], {'cell_id': mapping_obj.cell_mapping.id, 'instance_uuid': uuid}) self.compare_obj(mapping_obj, db_mapping, subs={'cell_mapping': 'cell_id'}, comparators={ 'cell_mapping': self._check_cell_map_value}) @mock.patch.object(instance_mapping.InstanceMapping, '_save_in_db') def test_save_stale_data_error(self, save_in_db): save_in_db.side_effect = orm_exc.StaleDataError mapping_obj = objects.InstanceMapping(self.context) mapping_obj.instance_uuid = uuidutils.generate_uuid() self.assertRaises(exception.InstanceMappingNotFound, mapping_obj.save) @mock.patch.object(instance_mapping.InstanceMapping, '_destroy_in_db') def test_destroy(self, destroy_in_db): uuid = uuidutils.generate_uuid() mapping_obj = 
objects.InstanceMapping(self.context) mapping_obj.instance_uuid = uuid mapping_obj.destroy() destroy_in_db.assert_called_once_with(self.context, uuid) def test_cell_mapping_nullable(self): mapping_obj = objects.InstanceMapping(self.context) # Just ensure this doesn't raise an exception mapping_obj.cell_mapping = None def test_obj_make_compatible(self): uuid = uuidutils.generate_uuid() im_obj = instance_mapping.InstanceMapping(context=self.context) fake_im_obj = instance_mapping.InstanceMapping(context=self.context, instance_uuid=uuid, queued_for_delete=False, user_id='fake-user') obj_primitive = fake_im_obj.obj_to_primitive('1.1') obj = im_obj.obj_from_primitive(obj_primitive) self.assertIn('queued_for_delete', obj) self.assertNotIn('user_id', obj) obj_primitive = fake_im_obj.obj_to_primitive('1.0') obj = im_obj.obj_from_primitive(obj_primitive) self.assertIn('instance_uuid', obj) self.assertEqual(uuid, obj.instance_uuid) self.assertNotIn('queued_for_delete', obj) @mock.patch('nova.objects.instance_mapping.LOG.error') def test_obj_load_attr(self, mock_log): im_obj = instance_mapping.InstanceMapping() # Access of unset user_id should have special handling self.assertRaises(exception.ObjectActionError, im_obj.obj_load_attr, 'user_id') msg = ('The unset user_id attribute of an unmigrated instance mapping ' 'should not be accessed.') mock_log.assert_called_once_with(msg) # Access of any other unset attribute should fall back to base class self.assertRaises(NotImplementedError, im_obj.obj_load_attr, 'project_id') class TestInstanceMappingObject(test_objects._LocalTest, _TestInstanceMappingObject): pass class TestRemoteInstanceMappingObject(test_objects._RemoteTest, _TestInstanceMappingObject): pass class _TestInstanceMappingListObject(object): def _check_cell_map_value(self, db_val, cell_obj): self.assertEqual(db_val, cell_obj.id) @mock.patch.object(instance_mapping.InstanceMappingList, '_get_by_project_id_from_db') def test_get_by_project_id(self, project_id_from_db): db_mapping = get_db_mapping() project_id_from_db.return_value = [db_mapping] mapping_obj = objects.InstanceMappingList().get_by_project_id( self.context, db_mapping['project_id']) project_id_from_db.assert_called_once_with(self.context, db_mapping['project_id']) self.compare_obj(mapping_obj.objects[0], db_mapping, subs={'cell_mapping': 'cell_id'}, comparators={ 'cell_mapping': self._check_cell_map_value}) @mock.patch.object(instance_mapping.InstanceMappingList, '_destroy_bulk_in_db') def test_destroy_bulk(self, destroy_bulk_in_db): uuids_to_be_deleted = [] for i in range(0, 5): uuid = uuidutils.generate_uuid() uuids_to_be_deleted.append(uuid) destroy_bulk_in_db.return_value = 5 result = objects.InstanceMappingList.destroy_bulk(self.context, uuids_to_be_deleted) destroy_bulk_in_db.assert_called_once_with(self.context, uuids_to_be_deleted) self.assertEqual(5, result) class TestInstanceMappingListObject(test_objects._LocalTest, _TestInstanceMappingListObject): pass class TestRemoteInstanceMappingListObject(test_objects._RemoteTest, _TestInstanceMappingListObject): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_instance_numa.py0000664000175000017500000002213400000000000023542 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_versionedobjects import base as ovo_base from nova import exception from nova import objects from nova.objects import fields from nova.tests.unit.objects import test_objects fake_instance_uuid = uuids.fake fake_obj_numa_topology = objects.InstanceNUMATopology( instance_uuid = fake_instance_uuid, cells=[ objects.InstanceNUMACell( id=0, cpuset=set([1, 2]), memory=512, pagesize=2048), objects.InstanceNUMACell( id=1, cpuset=set([3, 4]), memory=512, pagesize=2048) ]) fake_db_topology = { 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': 0, 'id': 1, 'instance_uuid': fake_instance_uuid, 'numa_topology': fake_obj_numa_topology._to_json() } def get_fake_obj_numa_topology(context): fake_obj_numa_topology_cpy = fake_obj_numa_topology.obj_clone() fake_obj_numa_topology_cpy._context = context return fake_obj_numa_topology_cpy class _TestInstanceNUMATopology(object): @mock.patch('nova.db.api.instance_extra_update_by_uuid') def test_create(self, mock_update): topo_obj = get_fake_obj_numa_topology(self.context) topo_obj.instance_uuid = fake_db_topology['instance_uuid'] topo_obj.create() self.assertEqual(1, len(mock_update.call_args_list)) def _test_get_by_instance_uuid(self): numa_topology = objects.InstanceNUMATopology.get_by_instance_uuid( self.context, fake_db_topology['instance_uuid']) self.assertEqual(fake_db_topology['instance_uuid'], numa_topology.instance_uuid) for obj_cell, topo_cell in zip( numa_topology.cells, fake_obj_numa_topology['cells']): self.assertIsInstance(obj_cell, objects.InstanceNUMACell) self.assertEqual(topo_cell.id, obj_cell.id) self.assertEqual(topo_cell.cpuset, obj_cell.cpuset) self.assertEqual(topo_cell.memory, obj_cell.memory) self.assertEqual(topo_cell.pagesize, obj_cell.pagesize) @mock.patch('nova.db.api.instance_extra_get_by_instance_uuid') def test_get_by_instance_uuid(self, mock_get): mock_get.return_value = fake_db_topology self._test_get_by_instance_uuid() @mock.patch('nova.db.api.instance_extra_get_by_instance_uuid') def test_get_by_instance_uuid_missing(self, mock_get): mock_get.return_value = None self.assertRaises( exception.NumaTopologyNotFound, objects.InstanceNUMATopology.get_by_instance_uuid, self.context, 'fake_uuid') def test_siblings(self): inst_cell = objects.InstanceNUMACell( cpuset=set([0, 1, 2])) self.assertEqual([], inst_cell.siblings) topo = objects.VirtCPUTopology(sockets=1, cores=3, threads=0) inst_cell = objects.InstanceNUMACell( cpuset=set([0, 1, 2]), cpu_topology=topo) self.assertEqual([], inst_cell.siblings) # One thread actually means no threads topo = objects.VirtCPUTopology(sockets=1, cores=3, threads=1) inst_cell = objects.InstanceNUMACell( cpuset=set([0, 1, 2]), cpu_topology=topo) self.assertEqual([], inst_cell.siblings) topo = objects.VirtCPUTopology(sockets=1, cores=2, threads=2) inst_cell = objects.InstanceNUMACell( cpuset=set([0, 1, 2, 3]), cpu_topology=topo) self.assertEqual([set([0, 1]), set([2, 3])], inst_cell.siblings) topo = objects.VirtCPUTopology(sockets=1, cores=1, threads=4) inst_cell = objects.InstanceNUMACell( 
cpuset=set([0, 1, 2, 3]), cpu_topology=topo) self.assertEqual([set([0, 1, 2, 3])], inst_cell.siblings) def test_pin(self): inst_cell = objects.InstanceNUMACell(cpuset=set([0, 1, 2, 3]), cpu_pinning=None) inst_cell.pin(0, 14) self.assertEqual({0: 14}, inst_cell.cpu_pinning) inst_cell.pin(12, 14) self.assertEqual({0: 14}, inst_cell.cpu_pinning) inst_cell.pin(1, 16) self.assertEqual({0: 14, 1: 16}, inst_cell.cpu_pinning) def test_pin_vcpus(self): inst_cell = objects.InstanceNUMACell(cpuset=set([0, 1, 2, 3]), cpu_pinning=None) inst_cell.pin_vcpus((0, 14), (1, 15), (2, 16), (3, 17)) self.assertEqual({0: 14, 1: 15, 2: 16, 3: 17}, inst_cell.cpu_pinning) def test_cpu_pinning_requested_cell(self): inst_cell = objects.InstanceNUMACell(cpuset=set([0, 1, 2, 3]), cpu_pinning=None) self.assertFalse(inst_cell.cpu_pinning_requested) inst_cell.cpu_policy = fields.CPUAllocationPolicy.DEDICATED self.assertTrue(inst_cell.cpu_pinning_requested) def test_cpu_pinning(self): topo_obj = get_fake_obj_numa_topology(self.context) self.assertEqual(set(), topo_obj.cpu_pinning) topo_obj.cells[0].pin_vcpus((1, 10), (2, 11)) self.assertEqual(set([10, 11]), topo_obj.cpu_pinning) topo_obj.cells[1].pin_vcpus((3, 0), (4, 1)) self.assertEqual(set([0, 1, 10, 11]), topo_obj.cpu_pinning) def test_cpu_pinning_requested(self): fake_topo_obj = copy.deepcopy(fake_obj_numa_topology) self.assertFalse(fake_topo_obj.cpu_pinning_requested) for cell in fake_topo_obj.cells: cell.cpu_policy = fields.CPUAllocationPolicy.DEDICATED self.assertTrue(fake_topo_obj.cpu_pinning_requested) def test_clear_host_pinning(self): topo_obj = get_fake_obj_numa_topology(self.context) topo_obj.cells[0].pin_vcpus((1, 10), (2, 11)) topo_obj.cells[0].id = 3 topo_obj.cells[1].pin_vcpus((3, 0), (4, 1)) topo_obj.cells[1].id = 0 topo_obj.clear_host_pinning() self.assertEqual({}, topo_obj.cells[0].cpu_pinning) self.assertEqual(-1, topo_obj.cells[0].id) self.assertEqual({}, topo_obj.cells[1].cpu_pinning) self.assertEqual(-1, topo_obj.cells[1].id) def test_emulator_threads_policy(self): topo_obj = get_fake_obj_numa_topology(self.context) self.assertFalse(topo_obj.emulator_threads_isolated) topo_obj.emulator_threads_policy = ( fields.CPUEmulatorThreadsPolicy.ISOLATE) self.assertTrue(topo_obj.emulator_threads_isolated) def test_obj_make_compatible_numa_pre_1_3(self): topo_obj = objects.InstanceNUMATopology( emulator_threads_policy=( fields.CPUEmulatorThreadsPolicy.ISOLATE)) versions = ovo_base.obj_tree_get_versions('InstanceNUMATopology') primitive = topo_obj.obj_to_primitive(target_version='1.2', version_manifest=versions) self.assertNotIn( 'emulator_threads_policy', primitive['nova_object.data']) topo_obj = objects.InstanceNUMATopology.obj_from_primitive(primitive) self.assertFalse(topo_obj.emulator_threads_isolated) def test_cpuset_reserved(self): topology = objects.InstanceNUMATopology( instance_uuid = fake_instance_uuid, cells=[ objects.InstanceNUMACell( id=0, cpuset=set([1, 2]), memory=512, pagesize=2048, cpuset_reserved=set([3, 7])), objects.InstanceNUMACell( id=1, cpuset=set([3, 4]), memory=512, pagesize=2048, cpuset_reserved=set([9, 12])) ]) self.assertEqual(set([3, 7]), topology.cells[0].cpuset_reserved) self.assertEqual(set([9, 12]), topology.cells[1].cpuset_reserved) def test_obj_make_compatible_numa_cell_pre_1_4(self): topo_obj = objects.InstanceNUMACell( cpuset_reserved=set([1, 2])) versions = ovo_base.obj_tree_get_versions('InstanceNUMACell') data = lambda x: x['nova_object.data'] primitive = data(topo_obj.obj_to_primitive(target_version='1.4', 
version_manifest=versions)) self.assertIn('cpuset_reserved', primitive) primitive = data(topo_obj.obj_to_primitive(target_version='1.3', version_manifest=versions)) self.assertNotIn('cpuset_reserved', primitive) class TestInstanceNUMATopology(test_objects._LocalTest, _TestInstanceNUMATopology): pass class TestInstanceNUMATopologyRemote(test_objects._RemoteTest, _TestInstanceNUMATopology): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_instance_pci_requests.py0000664000175000017500000002102200000000000025303 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from oslo_versionedobjects import base as ovo_base from nova import objects from nova.objects import fields from nova.tests.unit.objects import test_objects FAKE_UUID = '79a53d6b-0893-4838-a971-15f4f382e7c2' FAKE_REQUEST_UUID = '69b53d6b-0793-4839-c981-f5c4f382e7d2' # NOTE(danms): Yes, these are the same right now, but going forward, # we have changes to make which will be reflected in the format # in instance_extra, but not in system_metadata. fake_pci_requests = [ {'count': 2, 'spec': [{'vendor_id': '8086', 'device_id': '1502'}], 'alias_name': 'alias_1', 'is_new': False, 'numa_policy': 'preferred', 'request_id': FAKE_REQUEST_UUID}, {'count': 2, 'spec': [{'vendor_id': '6502', 'device_id': '07B5'}], 'alias_name': 'alias_2', 'is_new': True, 'numa_policy': 'preferred', 'request_id': FAKE_REQUEST_UUID, 'requester_id': uuids.requester_id}, ] fake_legacy_pci_requests = [ {'count': 2, 'spec': [{'vendor_id': '8086', 'device_id': '1502'}], 'alias_name': 'alias_1'}, {'count': 1, 'spec': [{'vendor_id': '6502', 'device_id': '07B5'}], 'alias_name': 'alias_2'}, ] class _TestInstancePCIRequests(object): @mock.patch('nova.db.api.instance_extra_get_by_instance_uuid') def test_get_by_instance_uuid(self, mock_get): mock_get.return_value = { 'instance_uuid': FAKE_UUID, 'pci_requests': jsonutils.dumps(fake_pci_requests), } requests = objects.InstancePCIRequests.get_by_instance_uuid( self.context, FAKE_UUID) self.assertEqual(2, len(requests.requests)) for index, request in enumerate(requests.requests): self.assertEqual(fake_pci_requests[index]['alias_name'], request.alias_name) self.assertEqual(fake_pci_requests[index]['count'], request.count) self.assertEqual(fake_pci_requests[index]['spec'], [dict(x.items()) for x in request.spec]) self.assertEqual(fake_pci_requests[index]['numa_policy'], request.numa_policy) @mock.patch('nova.objects.InstancePCIRequests.get_by_instance_uuid') def test_get_by_instance_current(self, mock_get): instance = objects.Instance(uuid=uuids.instance, system_metadata={}) objects.InstancePCIRequests.get_by_instance(self.context, instance) mock_get.assert_called_once_with(self.context, uuids.instance) def test_get_by_instance_legacy(self): fakesysmeta = { 'pci_requests': 
jsonutils.dumps([fake_legacy_pci_requests[0]]), 'new_pci_requests': jsonutils.dumps([fake_legacy_pci_requests[1]]), } instance = objects.Instance(uuid=uuids.instance, system_metadata=fakesysmeta) requests = objects.InstancePCIRequests.get_by_instance(self.context, instance) self.assertEqual(2, len(requests.requests)) self.assertEqual('alias_1', requests.requests[0].alias_name) self.assertFalse(requests.requests[0].is_new) self.assertEqual('alias_2', requests.requests[1].alias_name) self.assertTrue(requests.requests[1].is_new) def test_obj_from_db(self): req = objects.InstancePCIRequests.obj_from_db(None, FAKE_UUID, None) self.assertEqual(FAKE_UUID, req.instance_uuid) self.assertEqual(0, len(req.requests)) db_req = jsonutils.dumps(fake_pci_requests) req = objects.InstancePCIRequests.obj_from_db(None, FAKE_UUID, db_req) self.assertEqual(FAKE_UUID, req.instance_uuid) self.assertEqual(2, len(req.requests)) self.assertEqual('alias_1', req.requests[0].alias_name) self.assertEqual('preferred', req.requests[0].numa_policy) self.assertIsNone(req.requests[0].requester_id) self.assertEqual(uuids.requester_id, req.requests[1].requester_id) def test_from_request_spec_instance_props(self): requests = objects.InstancePCIRequests( requests=[objects.InstancePCIRequest(count=1, request_id=FAKE_UUID, spec=[{'vendor_id': '8086', 'device_id': '1502'}]) ], instance_uuid=FAKE_UUID) result = jsonutils.to_primitive(requests) result = objects.InstancePCIRequests.from_request_spec_instance_props( result) self.assertEqual(1, len(result.requests)) self.assertEqual(1, result.requests[0].count) self.assertEqual(FAKE_UUID, result.requests[0].request_id) self.assertEqual([{'vendor_id': '8086', 'device_id': '1502'}], result.requests[0].spec) def test_obj_make_compatible_pre_1_2(self): topo_obj = objects.InstancePCIRequest( count=1, spec=[{'vendor_id': '8086', 'device_id': '1502'}], request_id=uuids.pci_request_id, numa_policy=fields.PCINUMAAffinityPolicy.PREFERRED) versions = ovo_base.obj_tree_get_versions('InstancePCIRequest') primitive = topo_obj.obj_to_primitive(target_version='1.1', version_manifest=versions) self.assertNotIn('numa_policy', primitive['nova_object.data']) self.assertIn('request_id', primitive['nova_object.data']) def test_obj_make_compatible_pre_1_1(self): topo_obj = objects.InstancePCIRequest( count=1, spec=[{'vendor_id': '8086', 'device_id': '1502'}], request_id=uuids.pci_request_id) versions = ovo_base.obj_tree_get_versions('InstancePCIRequest') primitive = topo_obj.obj_to_primitive(target_version='1.0', version_manifest=versions) self.assertNotIn('request_id', primitive['nova_object.data']) def test_obj_make_compatible_pre_1_3(self): topo_obj = objects.InstancePCIRequest( count=1, spec=[{'vendor_id': '8086', 'device_id': '1502'}], request_id=uuids.pci_request_id, requester_id=uuids.requester_id, numa_policy=fields.PCINUMAAffinityPolicy.PREFERRED) versions = ovo_base.obj_tree_get_versions('InstancePCIRequest') primitive = topo_obj.obj_to_primitive(target_version='1.2', version_manifest=versions) self.assertNotIn('requester_id', primitive['nova_object.data']) self.assertIn('numa_policy', primitive['nova_object.data']) def test_source_property(self): neutron_port_pci_req = objects.InstancePCIRequest( count=1, spec=[{'vendor_id': '15b3', 'device_id': '1018'}], request_id=uuids.pci_request_id1, requester_id=uuids.requester_id1, alias_name = None) flavor_alias_pci_req = objects.InstancePCIRequest( count=1, spec=[{'vendor_id': '15b3', 'device_id': '1810'}], request_id=uuids.pci_request_id2, 
requester_id=uuids.requester_id2, alias_name = 'alias_1') self.assertEqual(neutron_port_pci_req.source, objects.InstancePCIRequest.NEUTRON_PORT) self.assertEqual(flavor_alias_pci_req.source, objects.InstancePCIRequest.FLAVOR_ALIAS) class TestInstancePCIRequests(test_objects._LocalTest, _TestInstancePCIRequests): pass class TestRemoteInstancePCIRequests(test_objects._RemoteTest, _TestInstancePCIRequests): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_keypair.py0000664000175000017500000002406000000000000022362 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mock from oslo_utils import timeutils from nova import exception from nova.objects import keypair from nova.tests.unit.objects import test_objects NOW = timeutils.utcnow().replace(microsecond=0) fake_keypair = { 'created_at': NOW, 'updated_at': None, 'deleted_at': None, 'deleted': False, 'id': 123, 'name': 'foo-keypair', 'type': 'ssh', 'user_id': 'fake-user', 'fingerprint': 'fake-fingerprint', 'public_key': 'fake\npublic\nkey', } class _TestKeyPairObject(object): @mock.patch('nova.db.api.key_pair_get') @mock.patch('nova.objects.KeyPair._get_from_db') def test_get_by_name_main(self, mock_api_get, mock_kp_get): mock_api_get.side_effect = exception.KeypairNotFound(user_id='foo', name='foo') mock_kp_get.return_value = fake_keypair keypair_obj = keypair.KeyPair.get_by_name(self.context, 'fake-user', 'foo-keypair') self.compare_obj(keypair_obj, fake_keypair) mock_kp_get.assert_called_once_with(self.context, 'fake-user', 'foo-keypair') mock_api_get.assert_called_once_with(self.context, 'fake-user', 'foo-keypair') @mock.patch('nova.objects.KeyPair._create_in_db') def test_create(self, mock_kp_create): mock_kp_create.return_value = fake_keypair keypair_obj = keypair.KeyPair(context=self.context) keypair_obj.name = 'foo-keypair' keypair_obj.public_key = 'keydata' keypair_obj.user_id = 'fake-user' keypair_obj.create() self.compare_obj(keypair_obj, fake_keypair) mock_kp_create.assert_called_once_with(self.context, {'name': 'foo-keypair', 'public_key': 'keydata', 'user_id': 'fake-user'}) @mock.patch('nova.objects.KeyPair._create_in_db') def test_recreate_fails(self, mock_kp_create): mock_kp_create.return_value = fake_keypair keypair_obj = keypair.KeyPair(context=self.context) keypair_obj.name = 'foo-keypair' keypair_obj.public_key = 'keydata' keypair_obj.user_id = 'fake-user' keypair_obj.create() self.assertRaises(exception.ObjectActionError, keypair_obj.create) mock_kp_create.assert_called_once_with(self.context, {'name': 'foo-keypair', 'public_key': 'keydata', 'user_id': 'fake-user'}) @mock.patch('nova.objects.KeyPair._destroy_in_db') def test_destroy(self, mock_kp_destroy): keypair_obj = keypair.KeyPair(context=self.context) keypair_obj.id = 123 keypair_obj.user_id = 'fake-user' keypair_obj.name = 'foo-keypair' keypair_obj.destroy() mock_kp_destroy.assert_called_once_with( 
self.context, 'fake-user', 'foo-keypair') @mock.patch('nova.objects.KeyPair._destroy_in_db') def test_destroy_by_name(self, mock_kp_destroy): keypair.KeyPair.destroy_by_name(self.context, 'fake-user', 'foo-keypair') mock_kp_destroy.assert_called_once_with( self.context, 'fake-user', 'foo-keypair') @mock.patch('nova.db.api.key_pair_get_all_by_user') @mock.patch('nova.db.api.key_pair_count_by_user') @mock.patch('nova.objects.KeyPairList._get_from_db') @mock.patch('nova.objects.KeyPairList._get_count_from_db') def test_get_by_user(self, mock_api_count, mock_api_get, mock_kp_count, mock_kp_get): mock_kp_get.return_value = [fake_keypair] mock_kp_count.return_value = 1 mock_api_get.return_value = [fake_keypair] mock_api_count.return_value = 1 keypairs = keypair.KeyPairList.get_by_user(self.context, 'fake-user') self.assertEqual(2, len(keypairs)) self.compare_obj(keypairs[0], fake_keypair) self.compare_obj(keypairs[1], fake_keypair) self.assertEqual(2, keypair.KeyPairList.get_count_by_user(self.context, 'fake-user')) mock_kp_get.assert_called_once_with(self.context, 'fake-user', limit=None, marker=None) mock_kp_count.assert_called_once_with(self.context, 'fake-user') mock_api_get.assert_called_once_with(self.context, 'fake-user', limit=None, marker=None) mock_api_count.assert_called_once_with(self.context, 'fake-user') def test_obj_make_compatible(self): keypair_obj = keypair.KeyPair(context=self.context) fake_keypair_copy = dict(fake_keypair) keypair_obj.obj_make_compatible(fake_keypair_copy, '1.1') self.assertNotIn('type', fake_keypair_copy) @mock.patch('nova.db.api.key_pair_get_all_by_user') @mock.patch('nova.objects.KeyPairList._get_from_db') def test_get_by_user_limit(self, mock_api_get, mock_kp_get): api_keypair = copy.deepcopy(fake_keypair) api_keypair['name'] = 'api_kp' mock_api_get.return_value = [api_keypair] mock_kp_get.return_value = [fake_keypair] keypairs = keypair.KeyPairList.get_by_user(self.context, 'fake-user', limit=1) self.assertEqual(1, len(keypairs)) self.compare_obj(keypairs[0], api_keypair) mock_api_get.assert_called_once_with(self.context, 'fake-user', limit=1, marker=None) self.assertFalse(mock_kp_get.called) @mock.patch('nova.db.api.key_pair_get_all_by_user') @mock.patch('nova.objects.KeyPairList._get_from_db') def test_get_by_user_marker(self, mock_api_get, mock_kp_get): api_kp_name = 'api_kp' mock_api_get.side_effect = exception.MarkerNotFound(marker=api_kp_name) mock_kp_get.return_value = [fake_keypair] keypairs = keypair.KeyPairList.get_by_user(self.context, 'fake-user', marker=api_kp_name) self.assertEqual(1, len(keypairs)) self.compare_obj(keypairs[0], fake_keypair) mock_api_get.assert_called_once_with(self.context, 'fake-user', limit=None, marker=api_kp_name) mock_kp_get.assert_called_once_with(self.context, 'fake-user', limit=None, marker=api_kp_name) @mock.patch('nova.db.api.key_pair_get_all_by_user') @mock.patch('nova.objects.KeyPairList._get_from_db') def test_get_by_user_limit_and_marker_api(self, mock_api_get, mock_kp_get): first_api_kp_name = 'first_api_kp' api_keypair = copy.deepcopy(fake_keypair) api_keypair['name'] = 'api_kp' mock_api_get.return_value = [api_keypair] mock_kp_get.return_value = [fake_keypair] keypairs = keypair.KeyPairList.get_by_user(self.context, 'fake-user', limit=5, marker=first_api_kp_name) self.assertEqual(2, len(keypairs)) self.compare_obj(keypairs[0], api_keypair) self.compare_obj(keypairs[1], fake_keypair) mock_api_get.assert_called_once_with(self.context, 'fake-user', limit=5, marker=first_api_kp_name) 
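# One keypair came back from the API database above, so only 4 of the
# requested 5 remain for the main database; the marker has already been
# located in the API DB, which is why the second lookup below starts from
# the beginning (marker=None).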
mock_kp_get.assert_called_once_with(self.context, 'fake-user', limit=4, marker=None) @mock.patch('nova.db.api.key_pair_get_all_by_user') @mock.patch('nova.objects.KeyPairList._get_from_db') def test_get_by_user_limit_and_marker_main(self, mock_api_get, mock_kp_get): first_main_kp_name = 'first_main_kp' mock_api_get.side_effect = exception.MarkerNotFound( marker=first_main_kp_name) mock_kp_get.return_value = [fake_keypair] keypairs = keypair.KeyPairList.get_by_user(self.context, 'fake-user', limit=5, marker=first_main_kp_name) self.assertEqual(1, len(keypairs)) self.compare_obj(keypairs[0], fake_keypair) mock_api_get.assert_called_once_with(self.context, 'fake-user', limit=5, marker=first_main_kp_name) mock_kp_get.assert_called_once_with(self.context, 'fake-user', limit=5, marker=first_main_kp_name) @mock.patch('nova.db.api.key_pair_get_all_by_user') @mock.patch('nova.objects.KeyPairList._get_from_db') def test_get_by_user_limit_and_marker_invalid_marker( self, mock_api_get, mock_kp_get): kp_name = 'unknown_kp' mock_api_get.side_effect = exception.MarkerNotFound(marker=kp_name) mock_kp_get.side_effect = exception.MarkerNotFound(marker=kp_name) self.assertRaises(exception.MarkerNotFound, keypair.KeyPairList.get_by_user, self.context, 'fake-user', limit=5, marker=kp_name) class TestMigrationObject(test_objects._LocalTest, _TestKeyPairObject): pass class TestRemoteMigrationObject(test_objects._RemoteTest, _TestKeyPairObject): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_migrate_data.py0000664000175000017500000003502400000000000023341 0ustar00zuulzuul00000000000000# Copyright 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
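# A minimal, illustrative sketch (assuming only that nova and
# oslo.versionedobjects are importable, as the tests below already require)
# of the obj_to_primitive() back-levelling these tests exercise: an object is
# serialized for an older RPC peer and every field introduced after the
# requested target version is dropped from the primitive.
#
#     from oslo_versionedobjects import base as ovo_base
#     from nova.objects import migrate_data
#
#     obj = migrate_data.LibvirtLiveMigrateData(serial_listen_addr='127.0.0.1')
#     manifest = ovo_base.obj_tree_get_versions(obj.obj_name())
#     primitive = obj.obj_to_primitive(target_version='1.0',
#                                      version_manifest=manifest)
#     # primitive['nova_object.data'] now carries only fields that existed
#     # in version 1.0 of LibvirtLiveMigrateData.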
from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from oslo_versionedobjects import base as ovo_base from nova import exception from nova.network import model as network_model from nova import objects from nova.objects import migrate_data from nova import test from nova.tests.unit.objects import test_objects class _TestLiveMigrateData(object): def test_obj_make_compatible(self): props = { 'serial_listen_addr': '127.0.0.1', 'serial_listen_ports': [1000, 10001, 10002, 10003], 'wait_for_vif_plugged': True } obj = migrate_data.LibvirtLiveMigrateData(**props) primitive = obj.obj_to_primitive() self.assertIn('serial_listen_ports', primitive['nova_object.data']) self.assertIn('wait_for_vif_plugged', primitive['nova_object.data']) obj.obj_make_compatible(primitive['nova_object.data'], '1.5') self.assertNotIn('wait_for_vif_plugged', primitive['nova_object.data']) obj.obj_make_compatible(primitive['nova_object.data'], '1.1') self.assertNotIn('serial_listen_ports', primitive['nova_object.data']) class TestLiveMigrateData(test_objects._LocalTest, _TestLiveMigrateData): pass class TestRemoteLiveMigrateData(test_objects._RemoteTest, _TestLiveMigrateData): pass class _TestLibvirtLiveMigrateData(object): def test_bdm_to_disk_info(self): obj = migrate_data.LibvirtLiveMigrateBDMInfo( serial='foo', bus='scsi', dev='sda', type='disk') expected_info = { 'dev': 'sda', 'bus': 'scsi', 'type': 'disk', } self.assertEqual(expected_info, obj.as_disk_info()) obj.format = 'raw' obj.boot_index = 1 expected_info['format'] = 'raw' expected_info['boot_index'] = '1' self.assertEqual(expected_info, obj.as_disk_info()) def test_numa_migrate_data(self): data = lambda x: x['nova_object.data'] obj = migrate_data.LibvirtLiveMigrateNUMAInfo( cpu_pins={'0': set([1])}, cell_pins={'2': set([3])}, emulator_pins=set([4]), sched_vcpus=set([5]), sched_priority=6) manifest = ovo_base.obj_tree_get_versions(obj.obj_name()) primitive = data(obj.obj_to_primitive(target_version='1.0', version_manifest=manifest)) self.assertEqual({'0': (1,)}, primitive['cpu_pins']) self.assertEqual({'2': (3,)}, primitive['cell_pins']) self.assertEqual((4,), primitive['emulator_pins']) self.assertEqual((5,), primitive['sched_vcpus']) self.assertEqual(6, primitive['sched_priority']) def test_obj_make_compatible(self): obj = migrate_data.LibvirtLiveMigrateData( src_supports_native_luks=True, old_vol_attachment_ids={uuids.volume: uuids.attachment}, supported_perf_events=[], serial_listen_addr='127.0.0.1', target_connect_addr='127.0.0.1', dst_wants_file_backed_memory=False, file_backed_memory_discard=False, src_supports_numa_live_migraton=True, dst_supports_numa_live_migraton=True, dst_numa_info=migrate_data.LibvirtLiveMigrateNUMAInfo()) manifest = ovo_base.obj_tree_get_versions(obj.obj_name()) data = lambda x: x['nova_object.data'] primitive = data(obj.obj_to_primitive()) self.assertIn('file_backed_memory_discard', primitive) primitive = data(obj.obj_to_primitive(target_version='1.0', version_manifest=manifest)) self.assertNotIn('target_connect_addr', primitive) self.assertNotIn('supported_perf_events', primitive) self.assertNotIn('old_vol_attachment_ids', primitive) self.assertNotIn('src_supports_native_luks', primitive) self.assertNotIn('dst_wants_file_backed_memory', primitive) primitive = data(obj.obj_to_primitive(target_version='1.1', version_manifest=manifest)) self.assertNotIn('serial_listen_ports', primitive) primitive = data(obj.obj_to_primitive(target_version='1.2', version_manifest=manifest)) 
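# supported_perf_events did not exist until after 1.2, so it must be absent
# from the 1.2 primitive checked below; the same pattern repeats for every
# later field and target version in this test.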
self.assertNotIn('supported_perf_events', primitive) primitive = data(obj.obj_to_primitive(target_version='1.3', version_manifest=manifest)) self.assertNotIn('old_vol_attachment_ids', primitive) primitive = data(obj.obj_to_primitive(target_version='1.4', version_manifest=manifest)) self.assertNotIn('src_supports_native_luks', primitive) primitive = data(obj.obj_to_primitive(target_version='1.6', version_manifest=manifest)) self.assertNotIn('dst_wants_file_backed_memory', primitive) primitive = data(obj.obj_to_primitive(target_version='1.7', version_manifest=manifest)) self.assertNotIn('file_backed_memory_discard', primitive) primitive = data(obj.obj_to_primitive(target_version='1.9', version_manifest=manifest)) self.assertNotIn('dst_numa_info', primitive) self.assertNotIn('src_supports_numa_live_migration', primitive) self.assertNotIn('dst_supports_numa_live_migration', primitive) def test_bdm_obj_make_compatible(self): obj = migrate_data.LibvirtLiveMigrateBDMInfo( encryption_secret_uuid=uuids.encryption_secret_uuid) primitive = obj.obj_to_primitive(target_version='1.0') self.assertNotIn( 'encryption_secret_uuid', primitive['nova_object.data']) primitive = obj.obj_to_primitive(target_version='1.1') self.assertIn( 'encryption_secret_uuid', primitive['nova_object.data']) def test_vif_migrate_data(self): source_vif = network_model.VIF( id=uuids.port_id, network=network_model.Network(id=uuids.network_id), type=network_model.VIF_TYPE_OVS, vnic_type=network_model.VNIC_TYPE_NORMAL, active=True, profile={'migrating_to': 'dest-host'}) vif_details_dict = {'port_filter': True} profile_dict = {'trusted': False} vif_data = objects.VIFMigrateData( port_id=uuids.port_id, vnic_type=network_model.VNIC_TYPE_NORMAL, vif_type=network_model.VIF_TYPE_BRIDGE, vif_details=vif_details_dict, profile=profile_dict, host='dest-host', source_vif=source_vif) # Make sure the vif_details and profile fields are converted and # stored properly. 
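# VIFMigrateData keeps vif_details and profile internally as JSON-encoded
# strings (vif_details_json / profile_json) and hands them back as dicts,
# which is what the assertions below verify.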
self.assertEqual( jsonutils.dumps(vif_details_dict), vif_data.vif_details_json) self.assertEqual( jsonutils.dumps(profile_dict), vif_data.profile_json) self.assertDictEqual(vif_details_dict, vif_data.vif_details) self.assertDictEqual(profile_dict, vif_data.profile) obj = migrate_data.LibvirtLiveMigrateData( file_backed_memory_discard=False) obj.vifs = [vif_data] manifest = ovo_base.obj_tree_get_versions(obj.obj_name()) primitive = obj.obj_to_primitive(target_version='1.8', version_manifest=manifest) self.assertIn( 'file_backed_memory_discard', primitive['nova_object.data']) self.assertNotIn('vifs', primitive['nova_object.data']) class TestLibvirtLiveMigrateData(test_objects._LocalTest, _TestLibvirtLiveMigrateData): pass class TestRemoteLibvirtLiveMigrateData(test_objects._RemoteTest, _TestLibvirtLiveMigrateData): pass class _TestXenapiLiveMigrateData(object): def test_obj_make_compatible(self): obj = migrate_data.XenapiLiveMigrateData( is_volume_backed=False, block_migration=False, destination_sr_ref='foo', migrate_send_data={'key': 'val'}, sr_uuid_map={'apple': 'banana'}, vif_uuid_map={'orange': 'lemon'}, old_vol_attachment_ids={uuids.volume: uuids.attachment}, wait_for_vif_plugged=True) primitive = obj.obj_to_primitive('1.0') self.assertNotIn('vif_uuid_map', primitive['nova_object.data']) primitive2 = obj.obj_to_primitive('1.1') self.assertIn('vif_uuid_map', primitive2['nova_object.data']) self.assertNotIn('old_vol_attachment_ids', primitive2) primitive3 = obj.obj_to_primitive('1.2')['nova_object.data'] self.assertNotIn('wait_for_vif_plugged', primitive3) class TestXenapiLiveMigrateData(test_objects._LocalTest, _TestXenapiLiveMigrateData): pass class TestRemoteXenapiLiveMigrateData(test_objects._RemoteTest, _TestXenapiLiveMigrateData): pass class _TestHyperVLiveMigrateData(object): def test_obj_make_compatible(self): obj = migrate_data.HyperVLiveMigrateData( is_shared_instance_path=True, old_vol_attachment_ids={'yes': 'no'}, wait_for_vif_plugged=True) data = lambda x: x['nova_object.data'] primitive = data(obj.obj_to_primitive()) self.assertIn('is_shared_instance_path', primitive) primitive = data(obj.obj_to_primitive(target_version='1.0')) self.assertNotIn('is_shared_instance_path', primitive) primitive = data(obj.obj_to_primitive(target_version='1.1')) self.assertNotIn('old_vol_attachment_ids', primitive) primitive = data(obj.obj_to_primitive(target_version='1.2')) self.assertNotIn('wait_for_vif_plugged', primitive) class TestHyperVLiveMigrateData(test_objects._LocalTest, _TestHyperVLiveMigrateData): pass class TestRemoteHyperVLiveMigrateData(test_objects._RemoteTest, _TestHyperVLiveMigrateData): pass class _TestPowerVMLiveMigrateData(object): @staticmethod def _mk_obj(): return migrate_data.PowerVMLiveMigrateData( host_mig_data=dict(one=2), dest_ip='1.2.3.4', dest_user_id='a_user', dest_sys_name='a_sys', public_key='a_key', dest_proc_compat='POWER7', vol_data=dict(three=4), vea_vlan_mappings=dict(five=6), old_vol_attachment_ids=dict(seven=8), wait_for_vif_plugged=True) @staticmethod def _mk_leg(): return { 'host_mig_data': {'one': '2'}, 'dest_ip': '1.2.3.4', 'dest_user_id': 'a_user', 'dest_sys_name': 'a_sys', 'public_key': 'a_key', 'dest_proc_compat': 'POWER7', 'vol_data': {'three': '4'}, 'vea_vlan_mappings': {'five': '6'}, 'old_vol_attachment_ids': {'seven': '8'}, 'wait_for_vif_plugged': True } def test_migrate_data(self): obj = self._mk_obj() self.assertEqual('a_key', obj.public_key) obj.public_key = 'key2' self.assertEqual('key2', obj.public_key) def test_obj_make_compatible(self): 
obj = self._mk_obj() data = lambda x: x['nova_object.data'] primitive = data(obj.obj_to_primitive()) self.assertIn('vea_vlan_mappings', primitive) primitive = data(obj.obj_to_primitive(target_version='1.0')) self.assertNotIn('vea_vlan_mappings', primitive) primitive = data(obj.obj_to_primitive(target_version='1.1')) self.assertNotIn('old_vol_attachment_ids', primitive) primitive = data(obj.obj_to_primitive(target_version='1.2')) self.assertNotIn('wait_for_vif_plugged', primitive) class TestPowerVMLiveMigrateData(test_objects._LocalTest, _TestPowerVMLiveMigrateData): pass class TestRemotePowerVMLiveMigrateData(test_objects._RemoteTest, _TestPowerVMLiveMigrateData): pass class TestVIFMigrateData(test.NoDBTestCase): def test_get_dest_vif_source_vif_not_set(self): migrate_vif = objects.VIFMigrateData( port_id=uuids.port_id, vnic_type=network_model.VNIC_TYPE_NORMAL, vif_type=network_model.VIF_TYPE_OVS, vif_details={}, profile={}, host='fake-dest-host') self.assertRaises( exception.ObjectActionError, migrate_vif.get_dest_vif) def test_get_dest_vif(self): source_vif = network_model.VIF( id=uuids.port_id, type=network_model.VIF_TYPE_OVS, details={}, vnic_type=network_model.VNIC_TYPE_DIRECT, profile={'foo': 'bar'}, ovs_interfaceid=uuids.ovs_interfaceid) migrate_vif = objects.VIFMigrateData( port_id=uuids.port_id, vnic_type=network_model.VNIC_TYPE_NORMAL, vif_type=network_model.VIF_TYPE_BRIDGE, vif_details={'bar': 'baz'}, profile={}, host='fake-dest-host', source_vif=source_vif) dest_vif = migrate_vif.get_dest_vif() self.assertEqual(migrate_vif.port_id, dest_vif['id']) self.assertEqual(migrate_vif.vnic_type, dest_vif['vnic_type']) self.assertEqual(migrate_vif.vif_type, dest_vif['type']) self.assertEqual(migrate_vif.vif_details, dest_vif['details']) self.assertEqual(migrate_vif.profile, dest_vif['profile']) self.assertEqual(uuids.ovs_interfaceid, dest_vif['ovs_interfaceid']) def test_create_skeleton_migrate_vifs(self): vifs = [ network_model.VIF(id=uuids.port1), network_model.VIF(id=uuids.port2)] mig_vifs = migrate_data.VIFMigrateData.create_skeleton_migrate_vifs( vifs) self.assertEqual(len(vifs), len(mig_vifs)) self.assertEqual([vif['id'] for vif in vifs], [mig_vif.port_id for mig_vif in mig_vifs]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_migration.py0000664000175000017500000004071100000000000022710 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock from oslo_utils.fixture import uuidsentinel from oslo_utils import timeutils from nova import context from nova.db import api as db from nova import exception from nova import objects from nova.objects import migration from nova.tests.unit import fake_instance from nova.tests.unit.objects import test_objects NOW = timeutils.utcnow().replace(microsecond=0) def fake_db_migration(**updates): db_instance = { 'created_at': NOW, 'updated_at': None, 'deleted_at': None, 'deleted': False, 'id': 123, 'uuid': uuidsentinel.migration, 'source_compute': 'compute-source', 'dest_compute': 'compute-dest', 'source_node': 'node-source', 'dest_node': 'node-dest', 'dest_host': 'host-dest', 'old_instance_type_id': 42, 'new_instance_type_id': 84, 'instance_uuid': 'fake-uuid', 'status': 'migrating', 'migration_type': 'resize', 'hidden': False, 'memory_total': 123456, 'memory_processed': 12345, 'memory_remaining': 111111, 'disk_total': 234567, 'disk_processed': 23456, 'disk_remaining': 211111, 'cross_cell_move': False, 'user_id': None, 'project_id': None, } if updates: db_instance.update(updates) return db_instance class _TestMigrationObject(object): @mock.patch.object(db, 'migration_get') def test_get_by_id(self, mock_get): ctxt = context.get_admin_context() fake_migration = fake_db_migration() mock_get.return_value = fake_migration mig = migration.Migration.get_by_id(ctxt, fake_migration['id']) self.compare_obj(mig, fake_migration) mock_get.assert_called_once_with(ctxt, fake_migration['id']) @mock.patch.object(db, 'migration_get_by_instance_and_status') def test_get_by_instance_and_status(self, mock_get): ctxt = context.get_admin_context() fake_migration = fake_db_migration() mock_get.return_value = fake_migration mig = migration.Migration.get_by_instance_and_status( ctxt, fake_migration['id'], 'migrating') self.compare_obj(mig, fake_migration) mock_get.assert_called_once_with(ctxt, fake_migration['id'], 'migrating') @mock.patch('nova.db.api.migration_get_in_progress_by_instance') def test_get_in_progress_by_instance(self, m_get_mig): ctxt = context.get_admin_context() fake_migration = fake_db_migration() db_migrations = [fake_migration, dict(fake_migration, id=456)] m_get_mig.return_value = db_migrations migrations = migration.MigrationList.get_in_progress_by_instance( ctxt, fake_migration['instance_uuid']) self.assertEqual(2, len(migrations)) for index, db_migration in enumerate(db_migrations): self.compare_obj(migrations[index], db_migration) @mock.patch.object(db, 'migration_create') def test_create(self, mock_create): ctxt = context.get_admin_context() fake_migration = fake_db_migration() mock_create.return_value = fake_migration mig = migration.Migration(context=ctxt) mig.source_compute = 'foo' mig.migration_type = 'resize' mig.uuid = uuidsentinel.migration mig.user_id = 'fake-user' mig.project_id = 'fake-project' mig.create() self.assertEqual(fake_migration['dest_compute'], mig.dest_compute) self.assertIn('uuid', mig) self.assertFalse(mig.cross_cell_move) self.assertIn('user_id', mig) self.assertIn('project_id', mig) mock_create.assert_called_once_with(ctxt, {'source_compute': 'foo', 'migration_type': 'resize', 'uuid': uuidsentinel.migration, 'user_id': 'fake-user', 'project_id': 'fake-project'}) @mock.patch.object(db, 'migration_create') def test_create_with_user_id_and_project_id_set_in_ctxt(self, mock_create): ctxt = context.get_admin_context() ctxt.user_id = 'fake-user' ctxt.project_id = 'fake-project' fake_migration = fake_db_migration(user_id='fake-user', project_id='fake-project') 
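# user_id and project_id are deliberately left unset on the Migration object
# in this test; create() is expected to fall back to the values carried by
# the request context.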
mock_create.return_value = fake_migration mig = migration.Migration(context=ctxt) mig.source_compute = 'foo' mig.migration_type = 'resize' mig.uuid = uuidsentinel.migration mig.create() self.assertEqual(fake_migration['dest_compute'], mig.dest_compute) self.assertIn('uuid', mig) self.assertFalse(mig.cross_cell_move) self.assertEqual('fake-user', mig.user_id) self.assertEqual('fake-project', mig.project_id) mock_create.assert_called_once_with(ctxt, {'source_compute': 'foo', 'migration_type': 'resize', 'uuid': uuidsentinel.migration, 'user_id': 'fake-user', 'project_id': 'fake-project'}) @mock.patch.object(db, 'migration_create') def test_recreate_fails(self, mock_create): ctxt = context.get_admin_context() fake_migration = fake_db_migration() mock_create.return_value = fake_migration mig = migration.Migration(context=ctxt) mig.source_compute = 'foo' mig.migration_type = 'resize' mig.uuid = uuidsentinel.migration mig.user_id = 'fake-user' mig.project_id = 'fake-project' mig.create() self.assertRaises(exception.ObjectActionError, mig.create) mock_create.assert_called_once_with(ctxt, {'source_compute': 'foo', 'migration_type': 'resize', 'uuid': uuidsentinel.migration, 'user_id': 'fake-user', 'project_id': 'fake-project'}) def test_create_fails_migration_type(self): ctxt = context.get_admin_context() mig = migration.Migration(context=ctxt, old_instance_type_id=42, new_instance_type_id=84) mig.source_compute = 'foo' self.assertRaises(exception.ObjectActionError, mig.create) @mock.patch.object(db, 'migration_update') def test_save(self, mock_update): ctxt = context.get_admin_context() fake_migration = fake_db_migration() mock_update.return_value = fake_migration mig = migration.Migration(context=ctxt) mig.id = 123 mig.source_compute = 'foo' mig.save() self.assertEqual(fake_migration['dest_compute'], mig.dest_compute) mock_update.assert_called_once_with(ctxt, 123, {'source_compute': 'foo'}) @mock.patch.object(db, 'instance_get_by_uuid') def test_instance(self, mock_get): ctxt = context.get_admin_context() fake_migration = fake_db_migration() fake_inst = fake_instance.fake_db_instance() mock_get.return_value = fake_inst mig = migration.Migration._from_db_object(ctxt, migration.Migration(), fake_migration) mig._context = ctxt self.assertEqual(mig.instance.host, fake_inst['host']) mock_get.assert_called_once_with( ctxt, fake_migration['instance_uuid'], columns_to_join=['extra', 'extra.flavor', 'extra.migration_context']) def test_instance_setter(self): migration = objects.Migration(instance_uuid=uuidsentinel.instance) inst = objects.Instance(uuid=uuidsentinel.instance) with mock.patch('nova.objects.Instance.get_by_uuid') as mock_get: migration.instance = inst migration.instance self.assertFalse(mock_get.called) self.assertEqual(inst, migration._cached_instance) self.assertEqual(inst, migration.instance) @mock.patch.object(db, 'migration_get_unconfirmed_by_dest_compute') def test_get_unconfirmed_by_dest_compute(self, mock_get): ctxt = context.get_admin_context() fake_migration = fake_db_migration() db_migrations = [fake_migration, dict(fake_migration, id=456)] mock_get.return_value = db_migrations migrations = ( migration.MigrationList.get_unconfirmed_by_dest_compute( ctxt, 'window', 'foo', use_slave=False)) self.assertEqual(2, len(migrations)) for index, db_migration in enumerate(db_migrations): self.compare_obj(migrations[index], db_migration) mock_get.assert_called_once_with(ctxt, 'window', 'foo') @mock.patch.object(db, 'migration_get_in_progress_by_host_and_node') def 
test_get_in_progress_by_host_and_node(self, mock_get): ctxt = context.get_admin_context() fake_migration = fake_db_migration() db_migrations = [fake_migration, dict(fake_migration, id=456)] mock_get.return_value = db_migrations migrations = ( migration.MigrationList.get_in_progress_by_host_and_node( ctxt, 'host', 'node')) self.assertEqual(2, len(migrations)) for index, db_migration in enumerate(db_migrations): self.compare_obj(migrations[index], db_migration) mock_get.assert_called_once_with(ctxt, 'host', 'node') @mock.patch.object(db, 'migration_get_all_by_filters') def test_get_by_filters(self, mock_get): ctxt = context.get_admin_context() fake_migration = fake_db_migration() db_migrations = [fake_migration, dict(fake_migration, id=456)] filters = {'foo': 'bar'} mock_get.return_value = db_migrations migrations = migration.MigrationList.get_by_filters(ctxt, filters) self.assertEqual(2, len(migrations)) for index, db_migration in enumerate(db_migrations): self.compare_obj(migrations[index], db_migration) mock_get.assert_called_once_with(ctxt, filters, sort_dirs=None, sort_keys=None, limit=None, marker=None) def test_migrate_old_resize_record(self): db_migration = dict(fake_db_migration(), migration_type=None) with mock.patch('nova.db.api.migration_get') as fake_get: fake_get.return_value = db_migration mig = objects.Migration.get_by_id(context.get_admin_context(), 1) self.assertTrue(mig.obj_attr_is_set('migration_type')) self.assertEqual('resize', mig.migration_type) def test_migrate_old_migration_record(self): db_migration = dict( fake_db_migration(), migration_type=None, old_instance_type_id=1, new_instance_type_id=1) with mock.patch('nova.db.api.migration_get') as fake_get: fake_get.return_value = db_migration mig = objects.Migration.get_by_id(context.get_admin_context(), 1) self.assertTrue(mig.obj_attr_is_set('migration_type')) self.assertEqual('migration', mig.migration_type) def test_migrate_unset_type_resize(self): mig = objects.Migration(old_instance_type_id=1, new_instance_type_id=2) self.assertEqual('resize', mig.migration_type) self.assertTrue(mig.obj_attr_is_set('migration_type')) def test_migrate_unset_type_migration(self): mig = objects.Migration(old_instance_type_id=1, new_instance_type_id=1) self.assertEqual('migration', mig.migration_type) self.assertTrue(mig.obj_attr_is_set('migration_type')) def test_obj_load_attr_hidden(self): mig = objects.Migration() self.assertFalse(mig.hidden) self.assertIn('hidden', mig) def test_obj_load_attr_cross_cell_move(self): mig = objects.Migration() self.assertFalse(mig.cross_cell_move) self.assertIn('cross_cell_move', mig) @mock.patch('nova.db.api.migration_get_by_id_and_instance') def test_get_by_id_and_instance(self, fake_get): ctxt = context.get_admin_context() fake_migration = fake_db_migration() fake_get.return_value = fake_migration migration = objects.Migration.get_by_id_and_instance(ctxt, '1', '1') self.compare_obj(migration, fake_migration) def test_create_uuid_on_load(self): values = {'source_compute': 'src', 'dest_compute': 'dst', 'source_node': 'srcnode', 'dest_node': 'dstnode', 'instance_uuid': 'fake', 'status': 'faking', 'migration_type': 'migration', 'created_at': None, 'deleted_at': None, 'updated_at': None} db_mig = db.migration_create(self.context, values) mig = objects.Migration.get_by_id(self.context, db_mig.id) self.assertIn('uuid', mig) uuid = mig.uuid # Make sure that it was saved and we get the same one back mig = objects.Migration.get_by_id(self.context, db_mig.id) self.assertEqual(uuid, mig.uuid) def 
test_obj_make_compatible(self): mig = objects.Migration( cross_cell_move=True, # added in 1.6 uuid=uuidsentinel.migration, # added in 1.5 memory_total=1024, memory_processed=0, memory_remaining=0, # 1.4 disk_total=20, disk_processed=0, disk_remaining=0, migration_type='resize', hidden=False, # added in 1.2 source_compute='fake-host' # added in 1.0 ) data = lambda x: x['nova_object.data'] primitive = data(mig.obj_to_primitive(target_version='1.6')) self.assertIn('cross_cell_move', primitive) self.assertNotIn('user_id', primitive) self.assertNotIn('project_id', primitive) primitive = data(mig.obj_to_primitive(target_version='1.5')) self.assertIn('uuid', primitive) self.assertNotIn('cross_cell_move', primitive) primitive = data(mig.obj_to_primitive(target_version='1.4')) self.assertIn('memory_total', primitive) self.assertNotIn('uuid', primitive) primitive = data(mig.obj_to_primitive(target_version='1.3')) self.assertIn('migration_type', primitive) self.assertNotIn('memory_total', primitive) primitive = data(mig.obj_to_primitive(target_version='1.1')) self.assertIn('source_compute', primitive) self.assertNotIn('migration_type', primitive) @mock.patch('nova.db.api.migration_get_by_uuid') def test_get_by_uuid(self, mock_db_get): mock_db_get.return_value = fake_db_migration(uuid=uuidsentinel.mig) mig = objects.Migration.get_by_uuid(self.context, uuidsentinel.mig) self.assertEqual(uuidsentinel.mig, mig.uuid) def test_is_same_host(self): same_host = objects.Migration(source_compute='fake-host', dest_compute='fake-host') diff_host = objects.Migration(source_compute='fake-host1', dest_compute='fake-host2') self.assertTrue(same_host.is_same_host()) self.assertFalse(diff_host.is_same_host()) class TestMigrationObject(test_objects._LocalTest, _TestMigrationObject): pass class TestRemoteMigrationObject(test_objects._RemoteTest, _TestMigrationObject): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_migration_context.py0000664000175000017500000001544100000000000024456 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
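# The migration_context column of instance_extra holds a JSON-serialized
# MigrationContext primitive; fake_db_context below mirrors that layout so
# obj_from_db_obj() can be exercised without a real database row.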
import mock from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from nova import context from nova import exception from nova import objects from nova.tests.unit.objects import test_instance_numa from nova.tests.unit.objects import test_objects fake_instance_uuid = uuids.fake fake_migration_context_obj = objects.MigrationContext() fake_migration_context_obj.instance_uuid = fake_instance_uuid fake_migration_context_obj.migration_id = 42 fake_migration_context_obj.new_numa_topology = ( test_instance_numa.fake_obj_numa_topology.obj_clone()) fake_migration_context_obj.old_numa_topology = None fake_migration_context_obj.new_pci_devices = objects.PciDeviceList() fake_migration_context_obj.old_pci_devices = None fake_migration_context_obj.new_pci_requests = ( objects.InstancePCIRequests(requests=[ objects.InstancePCIRequest(count=123, spec=[])])) fake_migration_context_obj.old_pci_requests = None fake_migration_context_obj.new_resources = objects.ResourceList() fake_migration_context_obj.old_resources = None fake_db_context = { 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': 0, 'instance_uuid': fake_instance_uuid, 'migration_context': jsonutils.dumps( fake_migration_context_obj.obj_to_primitive()), } def get_fake_migration_context_obj(ctxt): obj = fake_migration_context_obj.obj_clone() obj._context = ctxt return obj def get_fake_migration_context_with_pci_devs(ctxt=None): obj = get_fake_migration_context_obj(ctxt) obj.old_pci_devices = objects.PciDeviceList( objects=[objects.PciDevice(vendor_id='1377', product_id='0047', address='0000:0a:00.1', compute_node_id=1, request_id=uuids.pcidev)]) obj.new_pci_devices = objects.PciDeviceList( objects=[objects.PciDevice(vendor_id='1377', product_id='0047', address='0000:0b:00.1', compute_node_id=2, request_id=uuids.pcidev)]) return obj class _TestMigrationContext(object): def _test_get_by_instance_uuid(self, db_data): mig_context = objects.MigrationContext.get_by_instance_uuid( self.context, fake_db_context['instance_uuid']) if mig_context: self.assertEqual(fake_db_context['instance_uuid'], mig_context.instance_uuid) expected_mig_context = db_data and db_data.get('migration_context') expected_mig_context = objects.MigrationContext.obj_from_db_obj( expected_mig_context) self.assertEqual(expected_mig_context.instance_uuid, mig_context.instance_uuid) self.assertEqual(expected_mig_context.migration_id, mig_context.migration_id) self.assertIsInstance(expected_mig_context.new_numa_topology, mig_context.new_numa_topology.__class__) self.assertIsInstance(expected_mig_context.old_numa_topology, mig_context.old_numa_topology.__class__) self.assertIsInstance(expected_mig_context.new_pci_devices, mig_context.new_pci_devices.__class__) self.assertIsInstance(expected_mig_context.old_pci_devices, mig_context.old_pci_devices.__class__) self.assertIsInstance(expected_mig_context.new_pci_requests, mig_context.new_pci_requests.__class__) self.assertIsInstance(expected_mig_context.old_pci_requests, mig_context.old_pci_requests.__class__) self.assertIsInstance(expected_mig_context. 
new_resources, mig_context.new_resources.__class__) self.assertIsInstance(expected_mig_context.old_resources, mig_context.old_resources.__class__) else: self.assertIsNone(mig_context) @mock.patch('nova.db.api.instance_extra_get_by_instance_uuid') def test_get_by_instance_uuid(self, mock_get): mock_get.return_value = fake_db_context self._test_get_by_instance_uuid(fake_db_context) @mock.patch('nova.db.api.instance_extra_get_by_instance_uuid') def test_get_by_instance_uuid_none(self, mock_get): db_context = fake_db_context.copy() db_context['migration_context'] = None mock_get.return_value = db_context self._test_get_by_instance_uuid(db_context) @mock.patch('nova.db.api.instance_extra_get_by_instance_uuid') def test_get_by_instance_uuid_missing(self, mock_get): mock_get.return_value = None self.assertRaises( exception.MigrationContextNotFound, objects.MigrationContext.get_by_instance_uuid, self.context, 'fake_uuid') @mock.patch('nova.objects.Migration.get_by_id', return_value=objects.Migration(cross_cell_move=True)) def test_is_cross_cell_move(self, mock_get_by_id): ctxt = context.get_admin_context() mig_ctx = get_fake_migration_context_obj(ctxt) self.assertTrue(mig_ctx.is_cross_cell_move()) mock_get_by_id.assert_called_once_with(ctxt, mig_ctx.migration_id) class TestMigrationContext(test_objects._LocalTest, _TestMigrationContext): def test_pci_mapping_for_migration(self): mig_ctx = get_fake_migration_context_with_pci_devs() pci_mapping = mig_ctx.get_pci_mapping_for_migration(False) self.assertDictEqual( {mig_ctx.old_pci_devices[0].address: mig_ctx.new_pci_devices[0]}, pci_mapping) def test_pci_mapping_for_migration_revert(self): mig_ctx = get_fake_migration_context_with_pci_devs() pci_mapping = mig_ctx.get_pci_mapping_for_migration(True) self.assertDictEqual( {mig_ctx.new_pci_devices[0].address: mig_ctx.old_pci_devices[0]}, pci_mapping) class TestMigrationContextRemote(test_objects._RemoteTest, _TestMigrationContext): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/objects/test_monitor_metric.py0000664000175000017500000001046200000000000023751 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
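# Percentage-type metrics are stored on MonitorMetric as whole numbers
# (e.g. 17 for 17%); to_dict() divides such values by 100.0 and
# MonitorMetricList.from_json() multiplies them back, which is what the
# *_perc_spec fixture and the conversion test below rely on.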
from oslo_serialization import jsonutils from oslo_utils import timeutils from nova import objects from nova.objects import fields from nova.tests.unit.objects import test_objects _ts_now = timeutils.utcnow() _monitor_metric_spec = { 'name': fields.MonitorMetricType.CPU_FREQUENCY, 'value': 1000, 'timestamp': _ts_now.isoformat(), 'source': 'nova.virt.libvirt.driver' } _monitor_metric_perc_spec = { 'name': fields.MonitorMetricType.CPU_PERCENT, 'value': 0.17, 'timestamp': _ts_now.isoformat(), 'source': 'nova.virt.libvirt.driver' } _monitor_numa_metric_spec = { 'name': fields.MonitorMetricType.NUMA_MEM_BW_CURRENT, 'numa_membw_values': {"0": 10, "1": 43}, 'timestamp': _ts_now.isoformat(), 'source': 'nova.virt.libvirt.driver' } _monitor_metric_list_spec = [_monitor_metric_spec] class _TestMonitorMetricObject(object): def test_monitor_metric_to_dict(self): obj = objects.MonitorMetric(name='cpu.frequency', value=1000, timestamp=_ts_now, source='nova.virt.libvirt.driver') self.assertEqual(_monitor_metric_spec, obj.to_dict()) def test_monitor_metric_perc_to_dict(self): """Test to ensure division by 100.0 occurs on percentage value.""" obj = objects.MonitorMetric(name='cpu.percent', value=17, timestamp=_ts_now, source='nova.virt.libvirt.driver') self.assertEqual(_monitor_metric_perc_spec, obj.to_dict()) def test_monitor_metric_list_to_list(self): obj = objects.MonitorMetric(name='cpu.frequency', value=1000, timestamp=_ts_now, source='nova.virt.libvirt.driver') list_obj = objects.MonitorMetricList(objects=[obj]) self.assertEqual(_monitor_metric_list_spec, list_obj.to_list()) def test_monitor_NUMA_metric_to_dict(self): obj = objects.MonitorMetric(name='numa.membw.current', numa_membw_values={"0": 10, "1": 43}, timestamp=_ts_now, source='nova.virt.libvirt.driver') self.assertEqual(_monitor_numa_metric_spec, obj.to_dict()) def test_conversion_in_monitor_metric_list_from_json(self): spec_list = [_monitor_metric_spec, _monitor_metric_perc_spec] metrics = objects.MonitorMetricList.from_json( jsonutils.dumps(spec_list)) for metric, spec in zip(metrics, spec_list): exp = spec['value'] if (spec['name'] in objects.monitor_metric.FIELDS_REQUIRING_CONVERSION): exp = spec['value'] * 100 self.assertEqual(exp, metric.value) def test_obj_make_compatible(self): monitormetric_obj = objects.MonitorMetric( name=fields.MonitorMetricType.NUMA_MEM_BW_CURRENT, numa_membw_values={"0": 10, "1": 43}, timestamp=_ts_now.isoformat(), source='nova.virt.libvirt.driver') primitive = monitormetric_obj.obj_to_primitive() self.assertIn('numa_membw_values', primitive['nova_object.data']) monitormetric_obj.obj_make_compatible(primitive['nova_object.data'], '1.0') self.assertNotIn('numa_membw_values', primitive['nova_object.data']) class TestMonitorMetricObject(test_objects._LocalTest, _TestMonitorMetricObject): pass class TestRemoteMonitorMetricObject(test_objects._RemoteTest, _TestMonitorMetricObject): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_network_request.py0000664000175000017500000001220100000000000024151 0ustar00zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import objects from nova.objects import network_request from nova.tests.unit.objects import test_objects FAKE_UUID = '0C5C9AD2-F967-4E92-A7F3-24410F697440' class _TestNetworkRequestObject(object): def test_basic(self): request = objects.NetworkRequest() request.network_id = '456' request.address = '1.2.3.4' request.port_id = FAKE_UUID self.assertFalse(request.auto_allocate) self.assertFalse(request.no_allocate) def test_load(self): request = objects.NetworkRequest() self.assertIsNone(request.port_id) self.assertFalse(request.auto_allocate) self.assertFalse(request.no_allocate) def test_to_tuple(self): request = objects.NetworkRequest(network_id='123', address='1.2.3.4', port_id=FAKE_UUID, ) self.assertEqual(('123', '1.2.3.4', FAKE_UUID, None), request.to_tuple()) def test_from_tuples(self): requests = objects.NetworkRequestList.from_tuples( [('123', '1.2.3.4', FAKE_UUID, None)]) self.assertEqual(1, len(requests)) self.assertEqual('123', requests[0].network_id) self.assertEqual('1.2.3.4', str(requests[0].address)) self.assertEqual(FAKE_UUID, requests[0].port_id) self.assertIsNone(requests[0].pci_request_id) def test_list_as_tuples(self): requests = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id='123'), objects.NetworkRequest(network_id='456')]) self.assertEqual( [('123', None, None, None), ('456', None, None, None)], requests.as_tuples()) def test_is_single_unspecified(self): requests = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id='123')]) self.assertFalse(requests.is_single_unspecified) requests = objects.NetworkRequestList( objects=[objects.NetworkRequest(), objects.NetworkRequest()]) self.assertFalse(requests.is_single_unspecified) requests = objects.NetworkRequestList( objects=[objects.NetworkRequest()]) self.assertTrue(requests.is_single_unspecified) def test_auto_allocate(self): # no objects requests = objects.NetworkRequestList() self.assertFalse(requests.auto_allocate) # single object with network uuid requests = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id=FAKE_UUID)]) self.assertFalse(requests.auto_allocate) # multiple objects requests = objects.NetworkRequestList( objects=[objects.NetworkRequest(), objects.NetworkRequest()]) self.assertFalse(requests.auto_allocate) # single object, 'auto' case requests = objects.NetworkRequestList( objects=[objects.NetworkRequest( network_id=network_request.NETWORK_ID_AUTO)]) self.assertTrue(requests.auto_allocate) def test_no_allocate(self): # no objects requests = objects.NetworkRequestList() self.assertFalse(requests.no_allocate) # single object with network uuid requests = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id=FAKE_UUID)]) self.assertFalse(requests.no_allocate) # multiple objects requests = objects.NetworkRequestList( objects=[objects.NetworkRequest(), objects.NetworkRequest()]) self.assertFalse(requests.no_allocate) # single object, 'none' case requests = objects.NetworkRequestList( objects=[objects.NetworkRequest( network_id=network_request.NETWORK_ID_NONE)]) self.assertTrue(requests.no_allocate) def 
test_obj_make_compatible_pre_1_2(self): net_req = objects.NetworkRequest() net_req.tag = 'foo' data = lambda x: x['nova_object.data'] primitive = data(net_req.obj_to_primitive(target_version='1.2')) self.assertIn('tag', primitive) primitive = data(net_req.obj_to_primitive(target_version='1.1')) self.assertNotIn('tag', primitive) class TestNetworkRequestObject(test_objects._LocalTest, _TestNetworkRequestObject): pass class TestNetworkRequestRemoteObject(test_objects._RemoteTest, _TestNetworkRequestObject): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_numa.py0000664000175000017500000003650600000000000021666 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_versionedobjects import base as ovo_base import testtools from nova import exception from nova import objects from nova.tests.unit.objects import test_objects fake_obj_numa = objects.NUMATopology(cells=[ objects.NUMACell( id=0, cpuset=set([1, 2]), pcpuset=set(), memory=512, cpu_usage=2, memory_usage=256, mempages=[], pinned_cpus=set(), siblings=[set([1]), set([2])]), objects.NUMACell( id=1, cpuset=set([3, 4]), pcpuset=set(), memory=512, cpu_usage=1, memory_usage=128, mempages=[], pinned_cpus=set(), siblings=[set([3]), set([4])])]) class _TestNUMACell(object): def test_free_cpus(self): cell_a = objects.NUMACell( id=0, cpuset=set(), pcpuset=set([1, 2]), memory=512, cpu_usage=2, memory_usage=256, pinned_cpus=set([1]), siblings=[set([1]), set([2])], mempages=[]) cell_b = objects.NUMACell( id=1, cpuset=set(), pcpuset=set([3, 4]), memory=512, cpu_usage=1, memory_usage=128, pinned_cpus=set(), siblings=[set([3]), set([4])], mempages=[]) self.assertEqual(set([2]), cell_a.free_pcpus) self.assertEqual(set([3, 4]), cell_b.free_pcpus) def test_pinning_logic(self): numacell = objects.NUMACell( id=0, cpuset=set(), pcpuset=set([1, 2, 3, 4]), memory=512, cpu_usage=2, memory_usage=256, pinned_cpus=set([1]), siblings=[set([1]), set([2]), set([3]), set([4])], mempages=[]) numacell.pin_cpus(set([2, 3])) self.assertEqual(set([4]), numacell.free_pcpus) expect_msg = exception.CPUPinningUnknown.msg_fmt % { 'requested': r'\[1, 55\]', 'available': r'\[1, 2, 3, 4\]'} with testtools.ExpectedException(exception.CPUPinningUnknown, expect_msg): numacell.pin_cpus(set([1, 55])) self.assertRaises(exception.CPUPinningInvalid, numacell.pin_cpus, set([1, 4])) expect_msg = exception.CPUUnpinningUnknown.msg_fmt % { 'requested': r'\[1, 55\]', 'available': r'\[1, 2, 3, 4\]'} with testtools.ExpectedException(exception.CPUUnpinningUnknown, expect_msg): numacell.unpin_cpus(set([1, 55])) self.assertRaises(exception.CPUUnpinningInvalid, numacell.unpin_cpus, set([1, 4])) numacell.unpin_cpus(set([1, 2, 3])) self.assertEqual(set([1, 2, 3, 4]), numacell.free_pcpus) def test_pinning_with_siblings(self): numacell = objects.NUMACell( id=0, cpuset=set(), pcpuset=set([1, 2, 3, 4]), memory=512, cpu_usage=2, memory_usage=256, pinned_cpus=set(), siblings=[set([1, 3]), 
set([2, 4])], mempages=[]) numacell.pin_cpus_with_siblings(set([1, 2])) self.assertEqual(set(), numacell.free_pcpus) numacell.unpin_cpus_with_siblings(set([1])) self.assertEqual(set([1, 3]), numacell.free_pcpus) self.assertRaises(exception.CPUUnpinningInvalid, numacell.unpin_cpus_with_siblings, set([3])) self.assertRaises(exception.CPUPinningInvalid, numacell.pin_cpus_with_siblings, set([4])) self.assertRaises(exception.CPUUnpinningInvalid, numacell.unpin_cpus_with_siblings, set([3, 4])) self.assertEqual(set([1, 3]), numacell.free_pcpus) numacell.unpin_cpus_with_siblings(set([4])) self.assertEqual(set([1, 2, 3, 4]), numacell.free_pcpus) def test_pinning_with_siblings_no_host_siblings(self): numacell = objects.NUMACell( id=0, cpuset=set(), pcpuset=set([1, 2, 3, 4]), memory=512, cpu_usage=0, memory_usage=256, pinned_cpus=set(), siblings=[set([1]), set([2]), set([3]), set([4])], mempages=[]) numacell.pin_cpus_with_siblings(set([1, 2])) self.assertEqual(set([1, 2]), numacell.pinned_cpus) numacell.unpin_cpus_with_siblings(set([1])) self.assertEqual(set([2]), numacell.pinned_cpus) self.assertRaises(exception.CPUUnpinningInvalid, numacell.unpin_cpus_with_siblings, set([1])) self.assertRaises(exception.CPUPinningInvalid, numacell.pin_cpus_with_siblings, set([2])) self.assertEqual(set([2]), numacell.pinned_cpus) def test_can_fit_pagesize(self): # NOTE(stephenfin): '**' is Python's "power of" symbol cell = objects.NUMACell( id=0, cpuset=set([1, 2]), pcpuset=set(), memory=1024, siblings=[set([1]), set([2])], pinned_cpus=set(), mempages=[ objects.NUMAPagesTopology( size_kb=4, total=1548736, used=0), objects.NUMAPagesTopology( size_kb=2048, total=513, used=0), objects.NUMAPagesTopology( size_kb=1048576, total=4, used=1, reserved=1)]) pagesize = 2048 self.assertTrue(cell.can_fit_pagesize(pagesize, 2 ** 20)) self.assertFalse(cell.can_fit_pagesize(pagesize, 2 ** 21)) self.assertFalse(cell.can_fit_pagesize(pagesize, 2 ** 19 + 1)) pagesize = 1048576 self.assertTrue(cell.can_fit_pagesize(pagesize, 2 ** 20)) self.assertTrue(cell.can_fit_pagesize(pagesize, 2 ** 20 * 2)) self.assertFalse(cell.can_fit_pagesize(pagesize, 2 ** 20 * 3)) self.assertRaises( exception.MemoryPageSizeNotSupported, cell.can_fit_pagesize, 12345, 2 ** 20) def test_can_fit_pagesize_oversubscription(self): """Validate behavior when using page oversubscription. While hugepages aren't themselves oversubscribable, we also track small pages which are. 
""" # NOTE(stephenfin): '**' is Python's "power of" symbol cell = objects.NUMACell( id=0, cpuset=set([1, 2]), pcpuset=set(), memory=1024, siblings=[set([1]), set([2])], pinned_cpus=set(), mempages=[ # 1 GiB total, all used objects.NUMAPagesTopology( size_kb=4, total=2 ** 18, used=2 ** 18), ]) pagesize = 4 # request 2^20 KiB (so 1 GiB) self.assertTrue(cell.can_fit_pagesize( pagesize, 2 ** 20, use_free=False)) # request 2^20 + 1 KiB (so # > 1 GiB) self.assertFalse(cell.can_fit_pagesize( pagesize, 2 ** 20 + 1, use_free=False)) def test_default_behavior(self): inst_cell = objects.NUMACell() self.assertEqual(0, len(inst_cell.obj_get_changes())) def test_equivalent(self): cell1 = objects.NUMACell( id=1, cpuset=set([1, 2]), pcpuset=set(), memory=32, cpu_usage=10, pinned_cpus=set([3, 4]), siblings=[set([5, 6])]) cell2 = objects.NUMACell( id=1, cpuset=set([1, 2]), pcpuset=set(), memory=32, cpu_usage=10, pinned_cpus=set([3, 4]), siblings=[set([5, 6])]) self.assertEqual(cell1, cell2) def test_not_equivalent(self): cell1 = objects.NUMACell( id=1, cpuset=set([1, 2]), pcpuset=set(), memory=32, cpu_usage=10, pinned_cpus=set([3, 4]), siblings=[set([5, 6])]) cell2 = objects.NUMACell( id=2, cpuset=set([1, 2]), pcpuset=set(), memory=32, cpu_usage=10, pinned_cpus=set([3, 4]), siblings=[set([5, 6])]) self.assertNotEqual(cell1, cell2) def test_not_equivalent_missing_a(self): cell1 = objects.NUMACell( id=1, cpuset=set([1, 2]), pcpuset=set(), memory=32, pinned_cpus=set([3, 4]), siblings=[set([5, 6])]) cell2 = objects.NUMACell( id=2, cpuset=set([1, 2]), pcpuset=set(), memory=32, cpu_usage=10, pinned_cpus=set([3, 4]), siblings=[set([5, 6])]) self.assertNotEqual(cell1, cell2) def test_not_equivalent_missing_b(self): cell1 = objects.NUMACell( id=1, cpuset=set([1, 2]), pcpuset=set(), memory=32, cpu_usage=10, pinned_cpus=set([3, 4]), siblings=[set([5, 6])]) cell2 = objects.NUMACell( id=2, cpuset=set([1, 2]), pcpuset=set(), memory=32, pinned_cpus=set([3, 4]), siblings=[set([5, 6])]) self.assertNotEqual(cell1, cell2) def test_equivalent_with_pages(self): pt1 = objects.NUMAPagesTopology(size_kb=1024, total=32, used=0) pt2 = objects.NUMAPagesTopology(size_kb=1024, total=32, used=0) cell1 = objects.NUMACell( id=1, cpuset=set([1, 2]), pcpuset=set(), memory=32, cpu_usage=10, pinned_cpus=set([3, 4]), siblings=[set([5, 6])], mempages=[pt1]) cell2 = objects.NUMACell( id=1, cpuset=set([1, 2]), pcpuset=set(), memory=32, cpu_usage=10, pinned_cpus=set([3, 4]), siblings=[set([5, 6])], mempages=[pt2]) self.assertEqual(cell1, cell2) def test_not_equivalent_with_pages(self): pt1 = objects.NUMAPagesTopology(size_kb=1024, total=32, used=0) pt2 = objects.NUMAPagesTopology(size_kb=1024, total=32, used=1) cell1 = objects.NUMACell( id=1, cpuset=set([1, 2]), pcpuset=set(), memory=32, cpu_usage=10, pinned_cpus=set([3, 4]), siblings=[set([5, 6])], mempages=[pt1]) cell2 = objects.NUMACell( id=1, cpuset=set([1, 2]), pcpuset=set(), memory=32, cpu_usage=10, pinned_cpus=set([3, 4]), siblings=[set([5, 6])], mempages=[pt2]) self.assertNotEqual(cell1, cell2) def test_obj_make_compatible(self): network_metadata = objects.NetworkMetadata( physnets=set(['foo', 'bar']), tunneled=True) cell = objects.NUMACell( id=0, cpuset=set([1, 2]), pcpuset=set([3, 4]), memory=32, cpu_usage=10, pinned_cpus=set([3, 4]), siblings=[set([5, 6])], network_metadata=network_metadata) versions = ovo_base.obj_tree_get_versions('NUMACell') primitive = cell.obj_to_primitive(target_version='1.4', version_manifest=versions) self.assertIn('pcpuset', primitive['nova_object.data']) 
primitive = cell.obj_to_primitive(target_version='1.3', version_manifest=versions) self.assertNotIn('pcpuset', primitive['nova_object.data']) self.assertIn('network_metadata', primitive['nova_object.data']) primitive = cell.obj_to_primitive(target_version='1.2', version_manifest=versions) self.assertNotIn('network_metadata', primitive['nova_object.data']) class TestNUMACell(test_objects._LocalTest, _TestNUMACell): pass class TestNUMACellRemote(test_objects._RemoteTest, _TestNUMACell): pass class _TestNUMAPagesTopology(object): def test_wipe(self): pages_topology = objects.NUMAPagesTopology( size_kb=2048, total=1024, used=512) self.assertEqual(2048, pages_topology.size_kb) self.assertEqual(1024, pages_topology.total) self.assertEqual(512, pages_topology.used) self.assertEqual(512, pages_topology.free) self.assertEqual(1048576, pages_topology.free_kb) def test_equivalent(self): pt1 = objects.NUMAPagesTopology(size_kb=1024, total=32, used=0) pt2 = objects.NUMAPagesTopology(size_kb=1024, total=32, used=0) self.assertEqual(pt1, pt2) def test_not_equivalent(self): pt1 = objects.NUMAPagesTopology(size_kb=1024, total=32, used=0) pt2 = objects.NUMAPagesTopology(size_kb=1024, total=33, used=0) self.assertNotEqual(pt1, pt2) def test_not_equivalent_missing_a(self): pt1 = objects.NUMAPagesTopology(size_kb=1024, used=0) pt2 = objects.NUMAPagesTopology(size_kb=1024, total=32, used=0) self.assertNotEqual(pt1, pt2) def test_not_equivalent_missing_b(self): pt1 = objects.NUMAPagesTopology(size_kb=1024, total=32, used=0) pt2 = objects.NUMAPagesTopology(size_kb=1024, used=0) self.assertNotEqual(pt1, pt2) def test_reserved_property_not_set(self): p = objects.NUMAPagesTopology( # To have reserved not set is similar than to have receive # a NUMAPageTopology version 1.0 size_kb=1024, total=64, used=32) self.assertEqual(32, p.free) class TestNUMAPagesTopology(test_objects._LocalTest, _TestNUMACell): pass class TestNUMAPagesTopologyRemote(test_objects._RemoteTest, _TestNUMACell): pass class _TestNUMATopologyLimits(object): def test_obj_make_compatible(self): network_meta = objects.NetworkMetadata( physnets=set(['foo', 'bar']), tunneled=True) limits = objects.NUMATopologyLimits( cpu_allocation_ratio=1.0, ram_allocation_ratio=1.0, network_metadata=network_meta) versions = ovo_base.obj_tree_get_versions('NUMATopologyLimits') primitive = limits.obj_to_primitive(target_version='1.1', version_manifest=versions) self.assertIn('network_metadata', primitive['nova_object.data']) primitive = limits.obj_to_primitive(target_version='1.0', version_manifest=versions) self.assertNotIn('network_metadata', primitive['nova_object.data']) class TestNUMA(test_objects._LocalTest, _TestNUMATopologyLimits): pass class TestNUMARemote(test_objects._RemoteTest, _TestNUMATopologyLimits): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_objects.py0000664000175000017500000015677300000000000022370 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import collections import contextlib import copy import datetime import os import pprint import fixtures import mock from oslo_utils import timeutils from oslo_versionedobjects import base as ovo_base from oslo_versionedobjects import exception as ovo_exc from oslo_versionedobjects import fixture import six from nova import context from nova import exception from nova import objects from nova.objects import base from nova.objects import fields from nova.objects import virt_device_metadata from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.unit import fake_notifier from nova import utils class MyOwnedObject(base.NovaPersistentObject, base.NovaObject): VERSION = '1.0' fields = {'baz': fields.IntegerField()} class MyObj(base.NovaPersistentObject, base.NovaObject, base.NovaObjectDictCompat): VERSION = '1.6' fields = {'foo': fields.IntegerField(default=1), 'bar': fields.StringField(), 'missing': fields.StringField(), 'readonly': fields.IntegerField(read_only=True), 'rel_object': fields.ObjectField('MyOwnedObject', nullable=True), 'rel_objects': fields.ListOfObjectsField('MyOwnedObject', nullable=True), 'mutable_default': fields.ListOfStringsField(default=[]), } @staticmethod def _from_db_object(context, obj, db_obj): self = MyObj() self.foo = db_obj['foo'] self.bar = db_obj['bar'] self.missing = db_obj['missing'] self.readonly = 1 self._context = context return self def obj_load_attr(self, attrname): setattr(self, attrname, 'loaded!') @base.remotable_classmethod def query(cls, context): obj = cls(context=context, foo=1, bar='bar') obj.obj_reset_changes() return obj @base.remotable def marco(self): return 'polo' @base.remotable def _update_test(self): self.bar = 'updated' @base.remotable def save(self): self.obj_reset_changes() @base.remotable def refresh(self): self.foo = 321 self.bar = 'refreshed' self.obj_reset_changes() @base.remotable def modify_save_modify(self): self.bar = 'meow' self.save() self.foo = 42 self.rel_object = MyOwnedObject(baz=42) def obj_make_compatible(self, primitive, target_version): super(MyObj, self).obj_make_compatible(primitive, target_version) # NOTE(danms): Simulate an older version that had a different # format for the 'bar' attribute if target_version == '1.1' and 'bar' in primitive: primitive['bar'] = 'old%s' % primitive['bar'] class RandomMixInWithNoFields(object): """Used to test object inheritance using a mixin that has no fields.""" pass @base.NovaObjectRegistry.register_if(False) class SubclassedObject(RandomMixInWithNoFields, MyObj): fields = {'new_field': fields.StringField()} class TestObjToPrimitive(test.NoDBTestCase): def test_obj_to_primitive_list(self): @base.NovaObjectRegistry.register_if(False) class MyObjElement(base.NovaObject): fields = {'foo': fields.IntegerField()} def __init__(self, foo): super(MyObjElement, self).__init__() self.foo = foo @base.NovaObjectRegistry.register_if(False) class MyList(base.ObjectListBase, base.NovaObject): fields = {'objects': fields.ListOfObjectsField('MyObjElement')} mylist = MyList() mylist.objects = [MyObjElement(1), MyObjElement(2), MyObjElement(3)] self.assertEqual([1, 2, 3], [x['foo'] for x in base.obj_to_primitive(mylist)]) def test_obj_to_primitive_dict(self): base.NovaObjectRegistry.register(MyObj) myobj = MyObj(foo=1, bar='foo') self.assertEqual({'foo': 1, 'bar': 'foo'}, base.obj_to_primitive(myobj)) def test_obj_to_primitive_recursive(self): 
base.NovaObjectRegistry.register(MyObj) class MyList(base.ObjectListBase, base.NovaObject): fields = {'objects': fields.ListOfObjectsField('MyObj')} mylist = MyList(objects=[MyObj(), MyObj()]) for i, value in enumerate(mylist): value.foo = i self.assertEqual([{'foo': 0}, {'foo': 1}], base.obj_to_primitive(mylist)) def test_obj_to_primitive_with_ip_addr(self): @base.NovaObjectRegistry.register_if(False) class TestObject(base.NovaObject): fields = {'addr': fields.IPAddressField(), 'cidr': fields.IPNetworkField()} obj = TestObject(addr='1.2.3.4', cidr='1.1.1.1/16') self.assertEqual({'addr': '1.2.3.4', 'cidr': '1.1.1.1/16'}, base.obj_to_primitive(obj)) def compare_obj(test, obj, db_obj, subs=None, allow_missing=None, comparators=None): """Compare a NovaObject and a dict-like database object. This automatically converts TZ-aware datetimes and iterates over the fields of the object. :param:test: The TestCase doing the comparison :param:obj: The NovaObject to examine :param:db_obj: The dict-like database object to use as reference :param:subs: A dict of objkey=dbkey field substitutions :param:allow_missing: A list of fields that may not be in db_obj :param:comparators: Map of comparator functions to use for certain fields """ if subs is None: subs = {} if allow_missing is None: allow_missing = [] if comparators is None: comparators = {} for key in obj.fields: if key in allow_missing and not obj.obj_attr_is_set(key): continue obj_val = getattr(obj, key) db_key = subs.get(key, key) db_val = db_obj[db_key] if isinstance(obj_val, datetime.datetime): obj_val = obj_val.replace(tzinfo=None) if key in comparators: comparator = comparators[key] comparator(db_val, obj_val) else: test.assertEqual(db_val, obj_val) class _BaseTestCase(test.TestCase): def setUp(self): super(_BaseTestCase, self).setUp() self.user_id = 'fake-user' self.project_id = 'fake-project' self.context = context.RequestContext(self.user_id, self.project_id) fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) # NOTE(danms): register these here instead of at import time # so that they're not always present base.NovaObjectRegistry.register(MyObj) base.NovaObjectRegistry.register(MyOwnedObject) def compare_obj(self, obj, db_obj, subs=None, allow_missing=None, comparators=None): compare_obj(self, obj, db_obj, subs=subs, allow_missing=allow_missing, comparators=comparators) def str_comparator(self, expected, obj_val): """Compare an object field to a string in the db by performing a simple coercion on the object field value. """ self.assertEqual(expected, str(obj_val)) class _LocalTest(_BaseTestCase): def setUp(self): super(_LocalTest, self).setUp() # Just in case self.useFixture(nova_fixtures.IndirectionAPIFixture(None)) @contextlib.contextmanager def things_temporarily_local(): # Temporarily go non-remote so the conductor handles # this request directly _api = base.NovaObject.indirection_api base.NovaObject.indirection_api = None yield base.NovaObject.indirection_api = _api # FIXME(danms): We shouldn't be overriding any of this, but need to # for the moment because of the mocks in the base fixture that don't # hit our registry subclass. 
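# In effect, FakeIndirectionHack below plays the conductor side of a
# remotable call: the target object, args and kwargs are round-tripped
# through the NovaObjectSerializer, the method is invoked with
# indirection_api patched to None so it executes locally, and the result is
# re-primitivized at the version the "client" asked for. The _RemoteTest
# cases therefore exercise the same serialize/deserialize/backport path a
# real RPC round trip would, without needing a running conductor service.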
class FakeIndirectionHack(fixture.FakeIndirectionAPI): def object_action(self, context, objinst, objmethod, args, kwargs): objinst = self._ser.deserialize_entity( context, self._ser.serialize_entity( context, objinst)) objmethod = six.text_type(objmethod) args = self._ser.deserialize_entity( None, self._ser.serialize_entity(None, args)) kwargs = self._ser.deserialize_entity( None, self._ser.serialize_entity(None, kwargs)) original = objinst.obj_clone() with mock.patch('nova.objects.base.NovaObject.' 'indirection_api', new=None): result = getattr(objinst, objmethod)(*args, **kwargs) updates = self._get_changes(original, objinst) updates['obj_what_changed'] = objinst.obj_what_changed() return updates, result def object_class_action(self, context, objname, objmethod, objver, args, kwargs): objname = six.text_type(objname) objmethod = six.text_type(objmethod) objver = six.text_type(objver) args = self._ser.deserialize_entity( None, self._ser.serialize_entity(None, args)) kwargs = self._ser.deserialize_entity( None, self._ser.serialize_entity(None, kwargs)) cls = base.NovaObject.obj_class_from_name(objname, objver) with mock.patch('nova.objects.base.NovaObject.' 'indirection_api', new=None): result = getattr(cls, objmethod)(context, *args, **kwargs) manifest = ovo_base.obj_tree_get_versions(objname) return (base.NovaObject.obj_from_primitive( result.obj_to_primitive(target_version=objver, version_manifest=manifest), context=context) if isinstance(result, base.NovaObject) else result) def object_class_action_versions(self, context, objname, objmethod, object_versions, args, kwargs): objname = six.text_type(objname) objmethod = six.text_type(objmethod) object_versions = {six.text_type(o): six.text_type(v) for o, v in object_versions.items()} args, kwargs = self._canonicalize_args(context, args, kwargs) objver = object_versions[objname] cls = base.NovaObject.obj_class_from_name(objname, objver) with mock.patch('nova.objects.base.NovaObject.' 'indirection_api', new=None): result = getattr(cls, objmethod)(context, *args, **kwargs) return (base.NovaObject.obj_from_primitive( result.obj_to_primitive(target_version=objver), context=context) if isinstance(result, base.NovaObject) else result) class IndirectionFixture(fixtures.Fixture): def setUp(self): super(IndirectionFixture, self).setUp() ser = base.NovaObjectSerializer() self.indirection_api = FakeIndirectionHack(serializer=ser) self.useFixture(fixtures.MonkeyPatch( 'nova.objects.base.NovaObject.indirection_api', self.indirection_api)) class _RemoteTest(_BaseTestCase): def setUp(self): super(_RemoteTest, self).setUp() self.useFixture(IndirectionFixture()) class _TestObject(object): def test_object_attrs_in_init(self): # Spot check a few objects.Instance objects.InstanceInfoCache objects.SecurityGroup # Now check the test one in this file. 
Should be newest version self.assertEqual('1.6', objects.MyObj.VERSION) def test_hydration_type_error(self): primitive = {'nova_object.name': 'MyObj', 'nova_object.namespace': 'nova', 'nova_object.version': '1.5', 'nova_object.data': {'foo': 'a'}} self.assertRaises(ValueError, MyObj.obj_from_primitive, primitive) def test_hydration(self): primitive = {'nova_object.name': 'MyObj', 'nova_object.namespace': 'nova', 'nova_object.version': '1.5', 'nova_object.data': {'foo': 1}} real_method = MyObj._obj_from_primitive def _obj_from_primitive(*args): return real_method(*args) with mock.patch.object(MyObj, '_obj_from_primitive') as ofp: ofp.side_effect = _obj_from_primitive obj = MyObj.obj_from_primitive(primitive) ofp.assert_called_once_with(None, '1.5', primitive) self.assertEqual(obj.foo, 1) def test_hydration_version_different(self): primitive = {'nova_object.name': 'MyObj', 'nova_object.namespace': 'nova', 'nova_object.version': '1.2', 'nova_object.data': {'foo': 1}} obj = MyObj.obj_from_primitive(primitive) self.assertEqual(obj.foo, 1) self.assertEqual('1.2', obj.VERSION) def test_hydration_bad_ns(self): primitive = {'nova_object.name': 'MyObj', 'nova_object.namespace': 'foo', 'nova_object.version': '1.5', 'nova_object.data': {'foo': 1}} self.assertRaises(ovo_exc.UnsupportedObjectError, MyObj.obj_from_primitive, primitive) def test_hydration_additional_unexpected_stuff(self): primitive = {'nova_object.name': 'MyObj', 'nova_object.namespace': 'nova', 'nova_object.version': '1.5.1', 'nova_object.data': { 'foo': 1, 'unexpected_thing': 'foobar'}} obj = MyObj.obj_from_primitive(primitive) self.assertEqual(1, obj.foo) self.assertFalse(hasattr(obj, 'unexpected_thing')) # NOTE(danms): If we call obj_from_primitive() directly # with a version containing .z, we'll get that version # in the resulting object. 
In reality, when using the # serializer, we'll get that snipped off (tested # elsewhere) self.assertEqual('1.5.1', obj.VERSION) def test_dehydration(self): expected = {'nova_object.name': 'MyObj', 'nova_object.namespace': 'nova', 'nova_object.version': '1.6', 'nova_object.data': {'foo': 1}} obj = MyObj(foo=1) obj.obj_reset_changes() self.assertEqual(obj.obj_to_primitive(), expected) def test_object_property(self): obj = MyObj(foo=1) self.assertEqual(obj.foo, 1) def test_object_property_type_error(self): obj = MyObj() def fail(): obj.foo = 'a' self.assertRaises(ValueError, fail) def test_load(self): obj = MyObj() self.assertEqual(obj.bar, 'loaded!') def test_load_in_base(self): @base.NovaObjectRegistry.register_if(False) class Foo(base.NovaObject): fields = {'foobar': fields.IntegerField()} obj = Foo() with self.assertRaisesRegex(NotImplementedError, ".*foobar.*"): obj.foobar def test_loaded_in_primitive(self): obj = MyObj(foo=1) obj.obj_reset_changes() self.assertEqual(obj.bar, 'loaded!') expected = {'nova_object.name': 'MyObj', 'nova_object.namespace': 'nova', 'nova_object.version': '1.6', 'nova_object.changes': ['bar'], 'nova_object.data': {'foo': 1, 'bar': 'loaded!'}} self.assertEqual(obj.obj_to_primitive(), expected) def test_changes_in_primitive(self): obj = MyObj(foo=123) self.assertEqual(obj.obj_what_changed(), set(['foo'])) primitive = obj.obj_to_primitive() self.assertIn('nova_object.changes', primitive) obj2 = MyObj.obj_from_primitive(primitive) self.assertEqual(obj2.obj_what_changed(), set(['foo'])) obj2.obj_reset_changes() self.assertEqual(obj2.obj_what_changed(), set()) def test_orphaned_object(self): obj = MyObj.query(self.context) obj._context = None self.assertRaises(ovo_exc.OrphanedObjectError, obj._update_test) def test_changed_1(self): obj = MyObj.query(self.context) obj.foo = 123 self.assertEqual(obj.obj_what_changed(), set(['foo'])) obj._update_test() self.assertEqual(obj.obj_what_changed(), set(['foo', 'bar'])) self.assertEqual(obj.foo, 123) def test_changed_2(self): obj = MyObj.query(self.context) obj.foo = 123 self.assertEqual(obj.obj_what_changed(), set(['foo'])) obj.save() self.assertEqual(obj.obj_what_changed(), set([])) self.assertEqual(obj.foo, 123) def test_changed_3(self): obj = MyObj.query(self.context) obj.foo = 123 self.assertEqual(obj.obj_what_changed(), set(['foo'])) obj.refresh() self.assertEqual(obj.obj_what_changed(), set([])) self.assertEqual(obj.foo, 321) self.assertEqual(obj.bar, 'refreshed') def test_changed_4(self): obj = MyObj.query(self.context) obj.bar = 'something' self.assertEqual(obj.obj_what_changed(), set(['bar'])) obj.modify_save_modify() self.assertEqual(obj.obj_what_changed(), set(['foo', 'rel_object'])) self.assertEqual(obj.foo, 42) self.assertEqual(obj.bar, 'meow') self.assertIsInstance(obj.rel_object, MyOwnedObject) def test_changed_with_sub_object(self): @base.NovaObjectRegistry.register_if(False) class ParentObject(base.NovaObject): fields = {'foo': fields.IntegerField(), 'bar': fields.ObjectField('MyObj'), } obj = ParentObject() self.assertEqual(set(), obj.obj_what_changed()) obj.foo = 1 self.assertEqual(set(['foo']), obj.obj_what_changed()) bar = MyObj() obj.bar = bar self.assertEqual(set(['foo', 'bar']), obj.obj_what_changed()) obj.obj_reset_changes() self.assertEqual(set(), obj.obj_what_changed()) bar.foo = 1 self.assertEqual(set(['bar']), obj.obj_what_changed()) def test_static_result(self): obj = MyObj.query(self.context) self.assertEqual(obj.bar, 'bar') result = obj.marco() self.assertEqual(result, 'polo') def 
test_updates(self): obj = MyObj.query(self.context) self.assertEqual(obj.foo, 1) obj._update_test() self.assertEqual(obj.bar, 'updated') def test_base_attributes(self): dt = datetime.datetime(1955, 11, 5) obj = MyObj(created_at=dt, updated_at=dt, deleted_at=None, deleted=False) expected = {'nova_object.name': 'MyObj', 'nova_object.namespace': 'nova', 'nova_object.version': '1.6', 'nova_object.changes': ['deleted', 'created_at', 'deleted_at', 'updated_at'], 'nova_object.data': {'created_at': utils.isotime(dt), 'updated_at': utils.isotime(dt), 'deleted_at': None, 'deleted': False, } } actual = obj.obj_to_primitive() self.assertJsonEqual(actual, expected) def test_contains(self): obj = MyObj() self.assertNotIn('foo', obj) obj.foo = 1 self.assertIn('foo', obj) self.assertNotIn('does_not_exist', obj) def test_obj_attr_is_set(self): obj = MyObj(foo=1) self.assertTrue(obj.obj_attr_is_set('foo')) self.assertFalse(obj.obj_attr_is_set('bar')) self.assertRaises(AttributeError, obj.obj_attr_is_set, 'bang') def test_get(self): obj = MyObj(foo=1) # Foo has value, should not get the default self.assertEqual(obj.get('foo', 2), 1) # Foo has value, should return the value without error self.assertEqual(obj.get('foo'), 1) # Bar is not loaded, so we should get the default self.assertEqual(obj.get('bar', 'not-loaded'), 'not-loaded') # Bar without a default should lazy-load self.assertEqual(obj.get('bar'), 'loaded!') # Bar now has a default, but loaded value should be returned self.assertEqual(obj.get('bar', 'not-loaded'), 'loaded!') # Invalid attribute should raise AttributeError self.assertRaises(AttributeError, obj.get, 'nothing') # ...even with a default self.assertRaises(AttributeError, obj.get, 'nothing', 3) def test_object_inheritance(self): base_fields = base.NovaPersistentObject.fields.keys() myobj_fields = (['foo', 'bar', 'missing', 'readonly', 'rel_object', 'rel_objects', 'mutable_default'] + list(base_fields)) myobj3_fields = ['new_field'] self.assertTrue(issubclass(SubclassedObject, MyObj)) self.assertEqual(len(myobj_fields), len(MyObj.fields)) self.assertEqual(set(myobj_fields), set(MyObj.fields.keys())) self.assertEqual(len(myobj_fields) + len(myobj3_fields), len(SubclassedObject.fields)) self.assertEqual(set(myobj_fields) | set(myobj3_fields), set(SubclassedObject.fields.keys())) def test_obj_alternate_context(self): obj = MyObj(context=self.context) with obj.obj_alternate_context(mock.sentinel.alt_ctx): self.assertEqual(mock.sentinel.alt_ctx, obj._context) self.assertEqual(self.context, obj._context) def test_get_changes(self): obj = MyObj() self.assertEqual({}, obj.obj_get_changes()) obj.foo = 123 self.assertEqual({'foo': 123}, obj.obj_get_changes()) obj.bar = 'test' self.assertEqual({'foo': 123, 'bar': 'test'}, obj.obj_get_changes()) obj.obj_reset_changes() self.assertEqual({}, obj.obj_get_changes()) def test_obj_fields(self): @base.NovaObjectRegistry.register_if(False) class TestObj(base.NovaObject): fields = {'foo': fields.IntegerField()} obj_extra_fields = ['bar'] @property def bar(self): return 'this is bar' obj = TestObj() self.assertEqual(['foo', 'bar'], obj.obj_fields) def test_obj_constructor(self): obj = MyObj(context=self.context, foo=123, bar='abc') self.assertEqual(123, obj.foo) self.assertEqual('abc', obj.bar) self.assertEqual(set(['foo', 'bar']), obj.obj_what_changed()) def test_obj_read_only(self): obj = MyObj(context=self.context, foo=123, bar='abc') obj.readonly = 1 self.assertRaises(ovo_exc.ReadOnlyFieldError, setattr, obj, 'readonly', 2) def test_obj_mutable_default(self): 
obj = MyObj(context=self.context, foo=123, bar='abc') obj.mutable_default = None obj.mutable_default.append('s1') self.assertEqual(obj.mutable_default, ['s1']) obj1 = MyObj(context=self.context, foo=123, bar='abc') obj1.mutable_default = None obj1.mutable_default.append('s2') self.assertEqual(obj1.mutable_default, ['s2']) def test_obj_mutable_default_set_default(self): obj1 = MyObj(context=self.context, foo=123, bar='abc') obj1.obj_set_defaults('mutable_default') self.assertEqual(obj1.mutable_default, []) obj1.mutable_default.append('s1') self.assertEqual(obj1.mutable_default, ['s1']) obj2 = MyObj(context=self.context, foo=123, bar='abc') obj2.obj_set_defaults('mutable_default') self.assertEqual(obj2.mutable_default, []) obj2.mutable_default.append('s2') self.assertEqual(obj2.mutable_default, ['s2']) def test_obj_repr(self): obj = MyObj(foo=123) self.assertEqual('MyObj(bar=,created_at=,deleted=,' 'deleted_at=,foo=123,missing=,' 'mutable_default=,readonly=,rel_object=,' 'rel_objects=,updated_at=)', repr(obj)) def test_obj_make_obj_compatible(self): subobj = MyOwnedObject(baz=1) subobj.VERSION = '1.2' obj = MyObj(rel_object=subobj) obj.obj_relationships = { 'rel_object': [('1.5', '1.1'), ('1.7', '1.2')], } orig_primitive = obj.obj_to_primitive()['nova_object.data'] with mock.patch.object(subobj, 'obj_make_compatible') as mock_compat: primitive = copy.deepcopy(orig_primitive) obj._obj_make_obj_compatible(primitive, '1.8', 'rel_object') self.assertFalse(mock_compat.called) with mock.patch.object(subobj, 'obj_make_compatible') as mock_compat: primitive = copy.deepcopy(orig_primitive) obj._obj_make_obj_compatible(primitive, '1.7', 'rel_object') mock_compat.assert_called_once_with( primitive['rel_object']['nova_object.data'], '1.2') with mock.patch.object(subobj, 'obj_make_compatible') as mock_compat: primitive = copy.deepcopy(orig_primitive) obj._obj_make_obj_compatible(primitive, '1.6', 'rel_object') mock_compat.assert_called_once_with( primitive['rel_object']['nova_object.data'], '1.1') self.assertEqual('1.1', primitive['rel_object']['nova_object.version']) with mock.patch.object(subobj, 'obj_make_compatible') as mock_compat: primitive = copy.deepcopy(orig_primitive) obj._obj_make_obj_compatible(primitive, '1.5', 'rel_object') mock_compat.assert_called_once_with( primitive['rel_object']['nova_object.data'], '1.1') self.assertEqual('1.1', primitive['rel_object']['nova_object.version']) with mock.patch.object(subobj, 'obj_make_compatible') as mock_compat: primitive = copy.deepcopy(orig_primitive) obj._obj_make_obj_compatible(primitive, '1.4', 'rel_object') self.assertFalse(mock_compat.called) self.assertNotIn('rel_object', primitive) def test_obj_make_compatible_hits_sub_objects(self): subobj = MyOwnedObject(baz=1) obj = MyObj(foo=123, rel_object=subobj) obj.obj_relationships = {'rel_object': [('1.0', '1.0')]} with mock.patch.object(obj, '_obj_make_obj_compatible') as mock_compat: obj.obj_make_compatible({'rel_object': 'foo'}, '1.10') mock_compat.assert_called_once_with({'rel_object': 'foo'}, '1.10', 'rel_object') def test_obj_make_compatible_skips_unset_sub_objects(self): obj = MyObj(foo=123) obj.obj_relationships = {'rel_object': [('1.0', '1.0')]} with mock.patch.object(obj, '_obj_make_obj_compatible') as mock_compat: obj.obj_make_compatible({'rel_object': 'foo'}, '1.10') self.assertFalse(mock_compat.called) def test_obj_make_compatible_doesnt_skip_falsey_sub_objects(self): @base.NovaObjectRegistry.register_if(False) class MyList(base.ObjectListBase, base.NovaObject): VERSION = '1.2' fields = 
{'objects': fields.ListOfObjectsField('MyObjElement')} obj_relationships = { 'objects': [('1.1', '1.1'), ('1.2', '1.2')], } mylist = MyList(objects=[]) @base.NovaObjectRegistry.register_if(False) class MyOwner(base.NovaObject): VERSION = '1.2' fields = {'mylist': fields.ObjectField('MyList')} obj_relationships = { 'mylist': [('1.1', '1.1')], } myowner = MyOwner(mylist=mylist) primitive = myowner.obj_to_primitive('1.1') self.assertIn('mylist', primitive['nova_object.data']) def test_obj_make_compatible_handles_list_of_objects(self): subobj = MyOwnedObject(baz=1) obj = MyObj(rel_objects=[subobj]) obj.obj_relationships = {'rel_objects': [('1.0', '1.123')]} def fake_make_compat(primitive, version): self.assertEqual('1.123', version) self.assertIn('baz', primitive) with mock.patch.object(subobj, 'obj_make_compatible') as mock_mc: mock_mc.side_effect = fake_make_compat obj.obj_to_primitive('1.0') self.assertTrue(mock_mc.called) def test_delattr(self): obj = MyObj(bar='foo') del obj.bar # Should appear unset now self.assertFalse(obj.obj_attr_is_set('bar')) # Make sure post-delete, references trigger lazy loads self.assertEqual('loaded!', getattr(obj, 'bar')) def test_delattr_unset(self): obj = MyObj() self.assertRaises(AttributeError, delattr, obj, 'bar') class TestObject(_LocalTest, _TestObject): def test_set_defaults(self): obj = MyObj() obj.obj_set_defaults('foo') self.assertTrue(obj.obj_attr_is_set('foo')) self.assertEqual(1, obj.foo) def test_set_defaults_no_default(self): obj = MyObj() self.assertRaises(ovo_exc.ObjectActionError, obj.obj_set_defaults, 'bar') def test_set_all_defaults(self): obj = MyObj() obj.obj_set_defaults() self.assertEqual(set(['deleted', 'foo', 'mutable_default']), obj.obj_what_changed()) self.assertEqual(1, obj.foo) def test_set_defaults_not_overwrite(self): # NOTE(danms): deleted defaults to False, so verify that it does # not get reset by obj_set_defaults() obj = MyObj(deleted=True) obj.obj_set_defaults() self.assertEqual(1, obj.foo) self.assertTrue(obj.deleted) class TestObjectSerializer(_BaseTestCase): def test_serialize_entity_primitive(self): ser = base.NovaObjectSerializer() for thing in (1, 'foo', [1, 2], {'foo': 'bar'}): self.assertEqual(thing, ser.serialize_entity(None, thing)) def test_deserialize_entity_primitive(self): ser = base.NovaObjectSerializer() for thing in (1, 'foo', [1, 2], {'foo': 'bar'}): self.assertEqual(thing, ser.deserialize_entity(None, thing)) def test_serialize_set_to_list(self): ser = base.NovaObjectSerializer() self.assertEqual([1, 2], ser.serialize_entity(None, set([1, 2]))) def _test_deserialize_entity_newer(self, obj_version, backported_to, my_version='1.6'): ser = base.NovaObjectSerializer() ser._conductor = mock.Mock() ser._conductor.object_backport_versions.return_value = 'backported' class MyTestObj(MyObj): VERSION = my_version base.NovaObjectRegistry.register(MyTestObj) obj = MyTestObj() obj.VERSION = obj_version primitive = obj.obj_to_primitive() result = ser.deserialize_entity(self.context, primitive) if backported_to is None: self.assertFalse(ser._conductor.object_backport_versions.called) else: self.assertEqual('backported', result) versions = ovo_base.obj_tree_get_versions('MyTestObj') ser._conductor.object_backport_versions.assert_called_with( self.context, primitive, versions) def test_deserialize_entity_newer_version_backports(self): self._test_deserialize_entity_newer('1.25', '1.6') def test_deserialize_entity_newer_revision_does_not_backport_zero(self): self._test_deserialize_entity_newer('1.6.0', None) def 
test_deserialize_entity_newer_revision_does_not_backport(self): self._test_deserialize_entity_newer('1.6.1', None) def test_deserialize_entity_newer_version_passes_revision(self): self._test_deserialize_entity_newer('1.7', '1.6.1', '1.6.1') def test_deserialize_dot_z_with_extra_stuff(self): primitive = {'nova_object.name': 'MyObj', 'nova_object.namespace': 'nova', 'nova_object.version': '1.6.1', 'nova_object.data': { 'foo': 1, 'unexpected_thing': 'foobar'}} ser = base.NovaObjectSerializer() obj = ser.deserialize_entity(self.context, primitive) self.assertEqual(1, obj.foo) self.assertFalse(hasattr(obj, 'unexpected_thing')) # NOTE(danms): The serializer is where the logic lives that # avoids backports for cases where only a .z difference in # the received object version is detected. As a result, we # end up with a version of what we expected, effectively the # .0 of the object. self.assertEqual('1.6', obj.VERSION) @mock.patch('oslo_versionedobjects.base.obj_tree_get_versions') def test_object_tree_backport(self, mock_get_versions): # Test the full client backport path all the way from the serializer # to the conductor and back. self.start_service('conductor', manager='nova.conductor.manager.ConductorManager') # NOTE(danms): Actually register a complex set of objects, # two versions of the same parent object which contain a # child sub object. @base.NovaObjectRegistry.register class Child(base.NovaObject): VERSION = '1.10' @base.NovaObjectRegistry.register class Parent(base.NovaObject): VERSION = '1.0' fields = { 'child': fields.ObjectField('Child'), } @base.NovaObjectRegistry.register # noqa class Parent(base.NovaObject): VERSION = '1.1' fields = { 'child': fields.ObjectField('Child'), } # NOTE(danms): Since we're on the same node as conductor, # return a fake version manifest so that we confirm that it # actually honors what the client asked for and not just what # it sees in the local machine state. mock_get_versions.return_value = { 'Parent': '1.0', 'Child': '1.5', } call_context = {} real_ofp = base.NovaObject.obj_from_primitive def fake_obj_from_primitive(*a, **k): # NOTE(danms): We need the first call to this to report an # incompatible object version, but subsequent calls must # succeed. Since we're testing the backport path all the # way through conductor and RPC, we can't fully break this # method, we just need it to fail once to trigger the # backport. if 'run' in call_context: return real_ofp(*a, **k) else: call_context['run'] = True raise ovo_exc.IncompatibleObjectVersion('foo') child = Child() parent = Parent(child=child) prim = parent.obj_to_primitive() ser = base.NovaObjectSerializer() with mock.patch('nova.objects.base.NovaObject.' 
'obj_from_primitive') as mock_ofp: mock_ofp.side_effect = fake_obj_from_primitive result = ser.deserialize_entity(self.context, prim) # Our newest version (and what we passed back) of Parent # is 1.1, make sure that the manifest version is honored self.assertEqual('1.0', result.VERSION) # Our newest version (and what we passed back) of Child # is 1.10, make sure that the manifest version is honored self.assertEqual('1.5', result.child.VERSION) def test_object_serialization(self): ser = base.NovaObjectSerializer() obj = MyObj() primitive = ser.serialize_entity(self.context, obj) self.assertIn('nova_object.name', primitive) obj2 = ser.deserialize_entity(self.context, primitive) self.assertIsInstance(obj2, MyObj) self.assertEqual(self.context, obj2._context) def test_object_serialization_iterables(self): ser = base.NovaObjectSerializer() obj = MyObj() for iterable in (list, tuple, set): thing = iterable([obj]) primitive = ser.serialize_entity(self.context, thing) self.assertEqual(1, len(primitive)) for item in primitive: self.assertNotIsInstance(item, base.NovaObject) thing2 = ser.deserialize_entity(self.context, primitive) self.assertEqual(1, len(thing2)) for item in thing2: self.assertIsInstance(item, MyObj) # dict case thing = {'key': obj} primitive = ser.serialize_entity(self.context, thing) self.assertEqual(1, len(primitive)) for item in six.itervalues(primitive): self.assertNotIsInstance(item, base.NovaObject) thing2 = ser.deserialize_entity(self.context, primitive) self.assertEqual(1, len(thing2)) for item in six.itervalues(thing2): self.assertIsInstance(item, MyObj) # object-action updates dict case thing = {'foo': obj.obj_to_primitive()} primitive = ser.serialize_entity(self.context, thing) self.assertEqual(thing, primitive) thing2 = ser.deserialize_entity(self.context, thing) self.assertIsInstance(thing2['foo'], base.NovaObject) class TestArgsSerializer(test.NoDBTestCase): def setUp(self): super(TestArgsSerializer, self).setUp() self.now = timeutils.utcnow() self.str_now = utils.strtime(self.now) self.exc = exception.NotFound() @base.serialize_args def _test_serialize_args(self, *args, **kwargs): expected_args = ('untouched', self.str_now, self.str_now) for index, val in enumerate(args): self.assertEqual(expected_args[index], val) expected_kwargs = {'a': 'untouched', 'b': self.str_now, 'c': self.str_now} nonnova = kwargs.pop('nonnova', None) if nonnova: expected_kwargs['exc_val'] = 'TestingException' else: expected_kwargs['exc_val'] = self.exc.format_message() for key, val in kwargs.items(): self.assertEqual(expected_kwargs[key], val) def test_serialize_args(self): self._test_serialize_args('untouched', self.now, self.now, a='untouched', b=self.now, c=self.now, exc_val=self.exc) def test_serialize_args_non_nova_exception(self): self._test_serialize_args('untouched', self.now, self.now, a='untouched', b=self.now, c=self.now, exc_val=test.TestingException('foo'), nonnova=True) class TestRegistry(test.NoDBTestCase): @mock.patch('nova.objects.base.objects') def test_hook_chooses_newer_properly(self, mock_objects): reg = base.NovaObjectRegistry() reg.registration_hook(MyObj, 0) class MyNewerObj(object): VERSION = '1.123' @classmethod def obj_name(cls): return 'MyObj' self.assertEqual(MyObj, mock_objects.MyObj) reg.registration_hook(MyNewerObj, 0) self.assertEqual(MyNewerObj, mock_objects.MyObj) @mock.patch('nova.objects.base.objects') def test_hook_keeps_newer_properly(self, mock_objects): reg = base.NovaObjectRegistry() reg.registration_hook(MyObj, 0) class MyOlderObj(object): VERSION = 
'1.1' @classmethod def obj_name(cls): return 'MyObj' self.assertEqual(MyObj, mock_objects.MyObj) reg.registration_hook(MyOlderObj, 0) self.assertEqual(MyObj, mock_objects.MyObj) # NOTE(danms): The hashes in this list should only be changed if # they come with a corresponding version bump in the affected # objects object_data = { 'Agent': '1.0-c0c092abaceb6f51efe5d82175f15eba', 'AgentList': '1.0-5a7380d02c3aaf2a32fc8115ae7ca98c', 'Aggregate': '1.3-f315cb68906307ca2d1cca84d4753585', 'AggregateList': '1.3-3ea55a050354e72ef3306adefa553957', 'BandwidthUsage': '1.2-c6e4c779c7f40f2407e3d70022e3cd1c', 'BandwidthUsageList': '1.2-5fe7475ada6fe62413cbfcc06ec70746', 'BlockDeviceMapping': '1.20-45a6ad666ddf14bbbedece2293af77e2', 'BlockDeviceMappingList': '1.17-1e568eecb91d06d4112db9fd656de235', 'BuildRequest': '1.3-077dee42bed93f8a5b62be77657b7152', 'BuildRequestList': '1.0-cd95608eccb89fbc702c8b52f38ec738', 'CellMapping': '1.1-5d652928000a5bc369d79d5bde7e497d', 'CellMappingList': '1.1-496ef79bb2ab41041fff8bcb57996352', 'ComputeNode': '1.19-af6bd29a6c3b225da436a0d8487096f2', 'ComputeNodeList': '1.17-52f3b0962b1c86b98590144463ebb192', 'ConsoleAuthToken': '1.1-8da320fb065080eb4d3c2e5c59f8bf52', 'CpuDiagnostics': '1.0-d256f2e442d1b837735fd17dfe8e3d47', 'Destination': '1.4-3b440d29459e2c98987ad5b25ad1cb2c', 'DeviceBus': '1.0-77509ea1ea0dd750d5864b9bd87d3f9d', 'DeviceMetadata': '1.0-04eb8fd218a49cbc3b1e54b774d179f7', 'Diagnostics': '1.0-38ad3e9b1a59306253fc03f97936db95', 'DiskDiagnostics': '1.0-dfd0892b5924af1a585f3fed8c9899ca', 'DiskMetadata': '1.0-e7a0f1ccccf10d26a76b28e7492f3788', 'EC2Ids': '1.0-474ee1094c7ec16f8ce657595d8c49d9', 'EC2InstanceMapping': '1.0-a4556eb5c5e94c045fe84f49cf71644f', 'Flavor': '1.2-4ce99b41327bb230262e5a8f45ff0ce3', 'FlavorList': '1.1-52b5928600e7ca973aa4fc1e46f3934c', 'HVSpec': '1.2-de06bcec472a2f04966b855a49c46b41', 'HostMapping': '1.0-1a3390a696792a552ab7bd31a77ba9ac', 'HostMappingList': '1.1-18ac2bfb8c1eb5545bed856da58a79bc', 'HyperVLiveMigrateData': '1.4-e265780e6acfa631476c8170e8d6fce0', 'IDEDeviceBus': '1.0-29d4c9f27ac44197f01b6ac1b7e16502', 'ImageMeta': '1.8-642d1b2eb3e880a367f37d72dd76162d', 'ImageMetaProps': '1.25-66fc973af215eb5701ed4034bb6f0685', 'Instance': '2.7-d187aec68cad2e4d8b8a03a68e4739ce', 'InstanceAction': '1.2-9a5abc87fdd3af46f45731960651efb5', 'InstanceActionEvent': '1.4-5b1f361bd81989f8bb2c20bb7e8a4cb4', 'InstanceActionEventList': '1.1-13d92fb953030cdbfee56481756e02be', 'InstanceActionList': '1.1-a2b2fb6006b47c27076d3a1d48baa759', 'InstanceDeviceMetadata': '1.0-74d78dd36aa32d26d2769a1b57caf186', 'InstanceExternalEvent': '1.4-06c2dfcf2d2813c24cd37ee728524f1a', 'InstanceFault': '1.2-7ef01f16f1084ad1304a513d6d410a38', 'InstanceFaultList': '1.2-6bb72de2872fe49ded5eb937a93f2451', 'InstanceGroup': '1.11-852ac511d30913ee88f3c3a869a8f30a', 'InstanceGroupList': '1.8-90f8f1a445552bb3bbc9fa1ae7da27d4', 'InstanceInfoCache': '1.5-cd8b96fefe0fc8d4d337243ba0bf0e1e', 'InstanceList': '2.6-238f125650c25d6d12722340d726f723', 'InstanceMapping': '1.2-3bd375e65c8eb9c45498d2f87b882e03', 'InstanceMappingList': '1.3-d34b6ebb076d542ae0f8b440534118da', 'InstanceNUMACell': '1.4-b68e13eacba363ae8f196abf0ffffb5b', 'InstanceNUMATopology': '1.3-ec0030cb0402a49c96da7051c037082a', 'InstancePCIRequest': '1.3-f6d324f1c337fad4f34892ed5f484c9a', 'InstancePCIRequests': '1.1-65e38083177726d806684cb1cc0136d2', 'KeyPair': '1.4-1244e8d1b103cc69d038ed78ab3a8cc6', 'KeyPairList': '1.3-94aad3ac5c938eef4b5e83da0212f506', 'LibvirtLiveMigrateBDMInfo': '1.1-5f4a68873560b6f834b74e7861d71aaf', 
'LibvirtLiveMigrateData': '1.10-348cf70ea44d3b985f45f64725d6f6a7', 'LibvirtLiveMigrateNUMAInfo': '1.0-0e777677f3459d0ed1634eabbdb6c22f', 'MemoryDiagnostics': '1.0-2c995ae0f2223bb0f8e523c5cc0b83da', 'Migration': '1.7-b77066a88d08bdb0b05d7bc18780c55a', 'MigrationContext': '1.2-89f10a83999f852a489962ae37d8a026', 'MigrationList': '1.4-983a9c29d4f1e747ce719dc9063b729b', 'MonitorMetric': '1.1-53b1db7c4ae2c531db79761e7acc52ba', 'MonitorMetricList': '1.1-15ecf022a68ddbb8c2a6739cfc9f8f5e', 'NUMACell': '1.4-7695303e820fa855d76954be2eb2680e', 'NUMAPagesTopology': '1.1-edab9fa2dc43c117a38d600be54b4542', 'NUMATopology': '1.2-c63fad38be73b6afd04715c9c1b29220', 'NUMATopologyLimits': '1.1-4235c5da7a76c7e36075f0cd2f5cf922', 'NetworkInterfaceMetadata': '1.2-6f3d480b40fe339067b1c0dd4d656716', 'NetworkMetadata': '1.0-2cb8d21b34f87b0261d3e1d1ae5cf218', 'NetworkRequest': '1.2-af1ff2d986999fbb79377712794d82aa', 'NetworkRequestList': '1.1-15ecf022a68ddbb8c2a6739cfc9f8f5e', 'NicDiagnostics': '1.0-895e9ad50e0f56d5258585e3e066aea5', 'PCIDeviceBus': '1.0-2b891cb77e42961044689f3dc2718995', 'PciDevice': '1.6-25ca0542a22bc25386a72c0065a79c01', 'PciDeviceList': '1.3-52ff14355491c8c580bdc0ba34c26210', 'PciDevicePool': '1.1-3f5ddc3ff7bfa14da7f6c7e9904cc000', 'PciDevicePoolList': '1.1-15ecf022a68ddbb8c2a6739cfc9f8f5e', 'PowerVMLiveMigrateData': '1.4-a745f4eda16b45e1bc5686a0c498f27e', 'Quotas': '1.3-3b2b91371f60e788035778fc5f87797d', 'QuotasNoOp': '1.3-d1593cf969c81846bc8192255ea95cce', 'RequestGroup': '1.3-0458d350a8ec9d0673f9be5640a990ce', 'RequestLevelParams': '1.0-1e5c8c18bd44cd233c8b32509c99d06f', 'RequestSpec': '1.13-e1aa38b2bf3f8547474ee9e4c0aa2745', 'Resource': '1.0-d8a2abbb380da583b995fd118f6a8953', 'ResourceList': '1.0-4a53826625cc280e15fae64a575e0879', 'ResourceMetadata': '1.0-77509ea1ea0dd750d5864b9bd87d3f9d', 'S3ImageMapping': '1.0-7dd7366a890d82660ed121de9092276e', 'SCSIDeviceBus': '1.0-61c1e89a00901069ab1cf2991681533b', 'SchedulerLimits': '1.0-249c4bd8e62a9b327b7026b7f19cc641', 'SchedulerRetries': '1.1-3c9c8b16143ebbb6ad7030e999d14cc0', 'SecurityGroup': '1.2-86d67d8d3ab0c971e1dc86e02f9524a8', 'SecurityGroupList': '1.1-c655ed13298e630f4d398152f7d08d71', 'Selection': '1.1-548e3c2f04da2a61ceaf9c4e1589f264', 'Service': '1.22-8a740459ab9bf258a19c8fcb875c2d9a', 'ServiceList': '1.19-5325bce13eebcbf22edc9678285270cc', 'Tag': '1.1-8b8d7d5b48887651a0e01241672e2963', 'TagList': '1.1-55231bdb671ecf7641d6a2e9109b5d8e', 'TaskLog': '1.0-78b0534366f29aa3eebb01860fbe18fe', 'TaskLogList': '1.0-cc8cce1af8a283b9d28b55fcd682e777', 'TrustedCerts': '1.0-dcf528851e0f868c77ee47e90563cda7', 'USBDeviceBus': '1.0-e4c7dd6032e46cd74b027df5eb2d4750', 'VIFMigrateData': '1.0-cb15282b25a039ab35046ed705eb931d', 'VMwareLiveMigrateData': '1.0-a3cc858a2bf1d3806d6f57cfaa1fb98a', 'VirtCPUFeature': '1.0-ea2464bdd09084bd388e5f61d5d4fc86', 'VirtCPUModel': '1.0-5e1864af9227f698326203d7249796b5', 'VirtCPUTopology': '1.0-fc694de72e20298f7c6bab1083fd4563', 'VirtualInterface': '1.3-efd3ca8ebcc5ce65fff5a25f31754c54', 'VirtualInterfaceList': '1.0-9750e2074437b3077e46359102779fc6', 'VolumeUsage': '1.0-6c8190c46ce1469bb3286a1f21c2e475', 'XenDeviceBus': '1.0-272a4f899b24e31e42b2b9a7ed7e9194', 'XenapiLiveMigrateData': '1.4-7dc9417e921b2953faa6751f18785f3f', # TODO(efried): re-alphabetize this 'LibvirtVPMEMDevice': '1.0-17ffaf47585199eeb9a2b83d6bde069f', } def get_nova_objects(): """Get Nova versioned objects This returns a dict of versioned objects which are in the Nova project namespace only. 
ie excludes objects from os-vif and other 3rd party modules :return: a dict mapping class names to lists of versioned objects """ all_classes = base.NovaObjectRegistry.obj_classes() nova_classes = {} for name in all_classes: objclasses = all_classes[name] # NOTE(danms): All object registries that inherit from the # base VersionedObjectRegistry share a common list of classes. # That means even things like os_vif objects will be in our # registry, and for any of them that share the same name # (i.e. Network), we need to keep ours and exclude theirs. our_ns = [cls for cls in objclasses if (cls.OBJ_PROJECT_NAMESPACE == base.NovaObject.OBJ_PROJECT_NAMESPACE)] if our_ns: nova_classes[name] = our_ns return nova_classes class TestObjectVersions(test.NoDBTestCase): def test_versions(self): checker = fixture.ObjectVersionChecker( get_nova_objects()) fingerprints = checker.get_hashes() if os.getenv('GENERATE_HASHES'): open('object_hashes.txt', 'w').write( pprint.pformat(fingerprints)) raise test.TestingException( 'Generated hashes in object_hashes.txt') expected, actual = checker.test_hashes(object_data) self.assertEqual(expected, actual, 'Some objects have changed; please make sure the ' 'versions have been bumped, and then update their ' 'hashes here.') def test_obj_make_compatible(self): # NOTE(danms): This is normally not registered because it is just a # base class. However, the test fixture below requires it to be # in the registry so that it can verify backports based on its # children. So, register it here, which will be reverted after the # cleanUp for this (and all) tests is run. base.NovaObjectRegistry.register(virt_device_metadata.DeviceBus) # Iterate all object classes and verify that we can run # obj_make_compatible with every older version than current. # This doesn't actually test the data conversions, but it at least # makes sure the method doesn't blow up on something basic like # expecting the wrong version format. # Hold a dictionary of args/kwargs that need to get passed into # __init__() for specific classes. The key in the dictionary is # the obj_class that needs the init args/kwargs. init_args = {} init_kwargs = {} checker = fixture.ObjectVersionChecker(get_nova_objects()) checker.test_compatibility_routines(use_manifest=True, init_args=init_args, init_kwargs=init_kwargs) def test_list_obj_make_compatible(self): @base.NovaObjectRegistry.register_if(False) class TestObj(base.NovaObject): VERSION = '1.4' fields = {'foo': fields.IntegerField()} @base.NovaObjectRegistry.register_if(False) class TestListObj(base.ObjectListBase, base.NovaObject): VERSION = '1.5' fields = {'objects': fields.ListOfObjectsField('TestObj')} obj_relationships = { 'objects': [('1.0', '1.1'), ('1.1', '1.2'), ('1.3', '1.3'), ('1.5', '1.4')] } my_list = TestListObj() my_obj = TestObj(foo=1) my_list.objects = [my_obj] primitive = my_list.obj_to_primitive(target_version='1.5') primitive_data = primitive['nova_object.data'] obj_primitive = my_obj.obj_to_primitive(target_version='1.4') obj_primitive_data = obj_primitive['nova_object.data'] with mock.patch.object(TestObj, 'obj_make_compatible') as comp: my_list.obj_make_compatible(primitive_data, '1.1') comp.assert_called_with(obj_primitive_data, '1.2') def test_list_obj_make_compatible_when_no_objects(self): # Test to make sure obj_make_compatible works with no 'objects' # If a List object ever has a version that did not contain the # 'objects' key, we need to make sure converting back to that version # doesn't cause backporting problems. 
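        # As a reminder of the obj_relationships format used in these tests:
        # each entry maps a version of the list object to the version of the
        # child object it carried, so ('1.1', '1.1') below means the
        # 'objects' field first appeared in list version 1.1 holding
        # TestObj 1.1. Backporting the list to anything older must then drop
        # the field entirely, which is what this test verifies.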
@base.NovaObjectRegistry.register_if(False) class TestObj(base.NovaObject): VERSION = '1.1' fields = {'foo': fields.IntegerField()} @base.NovaObjectRegistry.register_if(False) class TestListObj(base.ObjectListBase, base.NovaObject): VERSION = '1.1' fields = {'objects': fields.ListOfObjectsField('TestObj')} # pretend that version 1.0 didn't have 'objects' obj_relationships = { 'objects': [('1.1', '1.1')] } my_list = TestListObj() my_list.objects = [TestObj(foo=1)] primitive = my_list.obj_to_primitive(target_version='1.1') primitive_data = primitive['nova_object.data'] my_list.obj_make_compatible(primitive_data, target_version='1.0') self.assertNotIn('objects', primitive_data, "List was backported to before 'objects' existed." " 'objects' should not be in the primitive.") class TestObjEqualPrims(_BaseTestCase): def test_object_equal(self): obj1 = MyObj(foo=1, bar='goodbye') obj1.obj_reset_changes() obj2 = MyObj(foo=1, bar='goodbye') obj2.obj_reset_changes() obj2.bar = 'goodbye' # obj2 will be marked with field 'three' updated self.assertTrue(base.obj_equal_prims(obj1, obj2), "Objects that differ only because one a is marked " "as updated should be equal") def test_object_not_equal(self): obj1 = MyObj(foo=1, bar='goodbye') obj1.obj_reset_changes() obj2 = MyObj(foo=1, bar='hello') obj2.obj_reset_changes() self.assertFalse(base.obj_equal_prims(obj1, obj2), "Objects that differ in any field " "should not be equal") def test_object_ignore_equal(self): obj1 = MyObj(foo=1, bar='goodbye') obj1.obj_reset_changes() obj2 = MyObj(foo=1, bar='hello') obj2.obj_reset_changes() self.assertTrue(base.obj_equal_prims(obj1, obj2, ['bar']), "Objects that only differ in an ignored field " "should be equal") class TestObjMethodOverrides(test.NoDBTestCase): def test_obj_reset_changes(self): args = utils.getargspec(base.NovaObject.obj_reset_changes) obj_classes = base.NovaObjectRegistry.obj_classes() for obj_name in obj_classes: obj_class = obj_classes[obj_name][0] self.assertEqual(args, utils.getargspec(obj_class.obj_reset_changes)) class TestObjectsDefaultingOnInit(test.NoDBTestCase): def test_init_behavior_policy(self): all_objects = get_nova_objects() violations = collections.defaultdict(list) # NOTE(danms): Do not add things to this list! # # There is one known exception to this init policy, and that # is the Service object because of the special behavior of the # version field. We *want* to counteract the usual non-clobber # behavior of that field specifically. See the comments in # Service.__init__ for more details. This will likely never # apply to any other non-ephemeral object, so this list should # never grow. exceptions = [objects.Service] for name, objclasses in all_objects.items(): for objcls in objclasses: if objcls in exceptions: continue key = '%s-%s' % (name, objcls.VERSION) obj = objcls() if isinstance(obj, base.NovaEphemeralObject): # Skip ephemeral objects, which are allowed to # set fields at init time continue for field in objcls.fields: if field in obj: violations[key].append(field) self.assertEqual({}, violations, 'Some non-ephemeral objects set fields during ' 'initialization; This is not allowed.') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_pci_device.py0000664000175000017500000007204700000000000023020 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mock from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova import context from nova.db import api as db from nova import exception from nova import objects from nova.objects import fields from nova.objects import instance from nova.objects import pci_device from nova.tests.unit.objects import test_objects dev_dict = { 'compute_node_id': 1, 'address': 'a', 'product_id': 'p', 'vendor_id': 'v', 'numa_node': 0, 'dev_type': fields.PciDeviceType.STANDARD, 'parent_addr': None, 'status': fields.PciDeviceStatus.AVAILABLE} fake_db_dev = { 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': None, 'parent_addr': None, 'id': 1, 'uuid': uuids.pci_dev1, 'compute_node_id': 1, 'address': 'a', 'vendor_id': 'v', 'product_id': 'p', 'numa_node': 0, 'dev_type': fields.PciDeviceType.STANDARD, 'status': fields.PciDeviceStatus.AVAILABLE, 'dev_id': 'i', 'label': 'l', 'instance_uuid': None, 'extra_info': '{}', 'request_id': None, } fake_db_dev_1 = { 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': None, 'id': 2, 'uuid': uuids.pci_dev2, 'parent_addr': 'a', 'compute_node_id': 1, 'address': 'a1', 'vendor_id': 'v1', 'product_id': 'p1', 'numa_node': 1, 'dev_type': fields.PciDeviceType.STANDARD, 'status': fields.PciDeviceStatus.AVAILABLE, 'dev_id': 'i', 'label': 'l', 'instance_uuid': None, 'extra_info': '{}', 'request_id': None, } fake_db_dev_old = { 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': None, 'id': 2, 'uuid': uuids.pci_dev2, 'parent_addr': None, 'compute_node_id': 1, 'address': 'a1', 'vendor_id': 'v1', 'product_id': 'p1', 'numa_node': 1, 'dev_type': fields.PciDeviceType.SRIOV_VF, 'status': fields.PciDeviceStatus.AVAILABLE, 'dev_id': 'i', 'label': 'l', 'instance_uuid': None, 'extra_info': '{"phys_function": "blah"}', 'request_id': None, } class _TestPciDeviceObject(object): def _create_fake_instance(self): self.inst = instance.Instance() self.inst.uuid = uuids.instance self.inst.pci_devices = pci_device.PciDeviceList() @mock.patch.object(db, 'pci_device_get_by_addr') def _create_fake_pci_device(self, mock_get, ctxt=None): if not ctxt: ctxt = context.get_admin_context() mock_get.return_value = fake_db_dev self.pci_device = pci_device.PciDevice.get_by_dev_addr(ctxt, 1, 'a') mock_get.assert_called_once_with(ctxt, 1, 'a') def test_create_pci_device(self): self.pci_device = pci_device.PciDevice.create(None, dev_dict) self.assertEqual(self.pci_device.product_id, 'p') self.assertEqual(self.pci_device.obj_what_changed(), set(['compute_node_id', 'product_id', 'vendor_id', 'numa_node', 'status', 'address', 'extra_info', 'dev_type', 'parent_addr', 'uuid'])) def test_pci_device_extra_info(self): self.dev_dict = copy.copy(dev_dict) self.dev_dict['k1'] = 'v1' self.dev_dict['k2'] = 'v2' self.pci_device = pci_device.PciDevice.create(None, self.dev_dict) extra_value = self.pci_device.extra_info self.assertEqual(extra_value.get('k1'), 
self.dev_dict['k1']) self.assertEqual(set(extra_value.keys()), set(('k1', 'k2'))) self.assertEqual(self.pci_device.obj_what_changed(), set(['compute_node_id', 'address', 'product_id', 'vendor_id', 'numa_node', 'status', 'uuid', 'extra_info', 'dev_type', 'parent_addr'])) def test_pci_device_extra_info_with_dict(self): self.dev_dict = copy.copy(dev_dict) self.dev_dict['k1'] = {'sub_k1': ['val1', 'val2']} self.pci_device = pci_device.PciDevice.create(None, self.dev_dict) extra_value = self.pci_device.extra_info self.assertEqual(jsonutils.loads(extra_value.get('k1')), self.dev_dict['k1']) self.assertEqual(set(extra_value.keys()), set(['k1'])) self.assertEqual(self.pci_device.obj_what_changed(), set(['compute_node_id', 'address', 'product_id', 'vendor_id', 'numa_node', 'status', 'uuid', 'extra_info', 'dev_type', 'parent_addr'])) def test_update_device(self): self.pci_device = pci_device.PciDevice.create(None, dev_dict) self.pci_device.obj_reset_changes() changes = {'product_id': 'p2', 'vendor_id': 'v2'} self.pci_device.update_device(changes) self.assertEqual(self.pci_device.vendor_id, 'v2') self.assertEqual(self.pci_device.obj_what_changed(), set(['vendor_id', 'product_id', 'parent_addr'])) def test_update_device_same_value(self): self.pci_device = pci_device.PciDevice.create(None, dev_dict) self.pci_device.obj_reset_changes() changes = {'product_id': 'p', 'vendor_id': 'v2'} self.pci_device.update_device(changes) self.assertEqual(self.pci_device.product_id, 'p') self.assertEqual(self.pci_device.vendor_id, 'v2') self.assertEqual(self.pci_device.obj_what_changed(), set(['vendor_id', 'product_id', 'parent_addr'])) @mock.patch.object(db, 'pci_device_get_by_addr') def test_get_by_dev_addr(self, mock_get): ctxt = context.get_admin_context() mock_get.return_value = fake_db_dev self.pci_device = pci_device.PciDevice.get_by_dev_addr(ctxt, 1, 'a') self.assertEqual(self.pci_device.product_id, 'p') self.assertEqual(self.pci_device.obj_what_changed(), set()) mock_get.assert_called_once_with(ctxt, 1, 'a') @mock.patch.object(db, 'pci_device_get_by_id') def test_get_by_dev_id(self, mock_get): ctxt = context.get_admin_context() mock_get.return_value = fake_db_dev self.pci_device = pci_device.PciDevice.get_by_dev_id(ctxt, 1) self.assertEqual(self.pci_device.product_id, 'p') self.assertEqual(self.pci_device.obj_what_changed(), set()) mock_get.assert_called_once_with(ctxt, 1) @mock.patch.object(db, 'pci_device_get_by_id') @mock.patch.object(objects.PciDevice, 'save') @mock.patch('oslo_utils.uuidutils.generate_uuid') def test_get_by_dev_id_auto_generate_uuid(self, mock_uuid, mock_save, mock_get): """Tests loading an old db record which doesn't have a uuid set so the object code auto-generates one and saves the update. """ fake_db_dev_no_uuid = copy.deepcopy(fake_db_dev) fake_db_dev_no_uuid['uuid'] = None ctxt = context.get_admin_context() mock_get.return_value = fake_db_dev_no_uuid fake_uuid = '3afad0d9-d2db-46fd-b56b-79f90043de5e' mock_uuid.return_value = fake_uuid obj_dev = pci_device.PciDevice.get_by_dev_id(ctxt, 1) self.assertEqual(fake_uuid, obj_dev.uuid) # The obj_what_changed is still dirty from _from_db_object because we # are mocking out save() which would eventually update the pci device # in the database and call _from_db_object again on the updated record, # and _from_db_object would reset the changed fields. 
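        # Since save() is mocked out above, that second _from_db_object()
        # pass never runs here, which is why 'uuid' is still expected to be
        # reported as changed in the assertion below.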
self.assertEqual(set(['uuid']), obj_dev.obj_what_changed()) mock_get.assert_called_once_with(ctxt, 1) mock_save.assert_called_once_with() mock_uuid.assert_called_once_with() def test_from_db_obj_pre_1_5_format(self): ctxt = context.get_admin_context() fake_dev_pre_1_5 = copy.deepcopy(fake_db_dev_old) fake_dev_pre_1_5['status'] = fields.PciDeviceStatus.UNAVAILABLE dev = pci_device.PciDevice._from_db_object( ctxt, pci_device.PciDevice(), fake_dev_pre_1_5) self.assertRaises(exception.ObjectActionError, dev.obj_to_primitive, '1.4') def test_save_empty_parent_addr(self): ctxt = context.get_admin_context() dev = pci_device.PciDevice._from_db_object( ctxt, pci_device.PciDevice(), fake_db_dev) dev.parent_addr = None with mock.patch.object(db, 'pci_device_update', return_value=fake_db_dev): dev.save() self.assertIsNone(dev.parent_addr) self.assertEqual({}, dev.extra_info) @mock.patch.object(db, 'pci_device_update') def test_save(self, mock_update): ctxt = context.get_admin_context() self._create_fake_pci_device(ctxt=ctxt) return_dev = dict(fake_db_dev, status=fields.PciDeviceStatus.AVAILABLE, instance_uuid=uuids.instance3) self.pci_device.status = fields.PciDeviceStatus.ALLOCATED self.pci_device.instance_uuid = uuids.instance2 expected_updates = dict(status=fields.PciDeviceStatus.ALLOCATED, instance_uuid=uuids.instance2) mock_update.return_value = return_dev self.pci_device.save() self.assertEqual(self.pci_device.status, fields.PciDeviceStatus.AVAILABLE) self.assertEqual(self.pci_device.instance_uuid, uuids.instance3) mock_update.assert_called_once_with(ctxt, 1, 'a', expected_updates) def test_save_no_extra_info(self): return_dev = dict(fake_db_dev, status=fields.PciDeviceStatus.AVAILABLE, instance_uuid=uuids.instance3) def _fake_update(ctxt, node_id, addr, updates): self.extra_info = updates.get('extra_info') return return_dev ctxt = context.get_admin_context() self.stub_out('nova.db.api.pci_device_update', _fake_update) self.pci_device = pci_device.PciDevice.create(None, dev_dict) self.pci_device._context = ctxt self.pci_device.save() self.assertEqual(self.extra_info, '{}') @mock.patch.object(db, 'pci_device_destroy') def test_save_removed(self, mock_destroy): ctxt = context.get_admin_context() self._create_fake_pci_device(ctxt=ctxt) self.pci_device.status = fields.PciDeviceStatus.REMOVED self.pci_device.save() self.assertEqual(self.pci_device.status, fields.PciDeviceStatus.DELETED) mock_destroy.assert_called_once_with(ctxt, 1, 'a') def test_save_deleted(self): def _fake_destroy(ctxt, node_id, addr): self.called = True def _fake_update(ctxt, node_id, addr, updates): self.called = True self.stub_out('nova.db.api.pci_device_destroy', _fake_destroy) self.stub_out('nova.db.api.pci_device_update', _fake_update) self._create_fake_pci_device() self.pci_device.status = fields.PciDeviceStatus.DELETED self.called = False self.pci_device.save() self.assertFalse(self.called) def test_update_numa_node(self): self.pci_device = pci_device.PciDevice.create(None, dev_dict) self.assertEqual(0, self.pci_device.numa_node) self.dev_dict = copy.copy(dev_dict) self.dev_dict['numa_node'] = '1' self.pci_device = pci_device.PciDevice.create(None, self.dev_dict) self.assertEqual(1, self.pci_device.numa_node) @mock.patch('oslo_utils.uuidutils.generate_uuid', return_value=uuids.pci_dev1) def test_pci_device_equivalent(self, mock_uuid): pci_device1 = pci_device.PciDevice.create(None, dev_dict) pci_device2 = pci_device.PciDevice.create(None, dev_dict) self.assertEqual(pci_device1, pci_device2) 
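# ---------------------------------------------------------------------------
# Illustrative sketch only (standalone, not used by the tests): the save()
# behaviour that test_save, test_save_removed and test_save_deleted assert
# above can be pictured as a small dispatch on the device status.  This is a
# simplified stand-in for PciDevice.save(), with the database layer passed in
# explicitly, written here purely as documentation of the expected contract.
def _sketch_pci_device_save(dev, ctxt, db_api):
    if dev.status == fields.PciDeviceStatus.REMOVED:
        # A removed device is destroyed in the database and the object is
        # left in the DELETED state (test_save_removed).
        db_api.pci_device_destroy(ctxt, dev.compute_node_id, dev.address)
        dev.status = fields.PciDeviceStatus.DELETED
    elif dev.status != fields.PciDeviceStatus.DELETED:
        # Any other status is persisted via pci_device_update (test_save).
        # Devices already in the DELETED state are skipped entirely, which is
        # why test_save_deleted expects no database call at all.
        db_api.pci_device_update(ctxt, dev.compute_node_id, dev.address,
                                 dev.obj_get_changes())
# ---------------------------------------------------------------------------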
@mock.patch('oslo_utils.uuidutils.generate_uuid', return_value=uuids.pci_dev1) def test_pci_device_equivalent_with_ignore_field(self, mock_uuid): pci_device1 = pci_device.PciDevice.create(None, dev_dict) pci_device2 = pci_device.PciDevice.create(None, dev_dict) pci_device2.updated_at = timeutils.utcnow() self.assertEqual(pci_device1, pci_device2) @mock.patch('oslo_utils.uuidutils.generate_uuid', return_value=uuids.pci_dev1) def test_pci_device_not_equivalent1(self, mock_uuid): pci_device1 = pci_device.PciDevice.create(None, dev_dict) dev_dict2 = copy.copy(dev_dict) dev_dict2['address'] = 'b' pci_device2 = pci_device.PciDevice.create(None, dev_dict2) self.assertNotEqual(pci_device1, pci_device2) @mock.patch('oslo_utils.uuidutils.generate_uuid', return_value=uuids.pci_dev1) def test_pci_device_not_equivalent2(self, mock_uuid): pci_device1 = pci_device.PciDevice.create(None, dev_dict) pci_device2 = pci_device.PciDevice.create(None, dev_dict) delattr(pci_device2, 'address') self.assertNotEqual(pci_device1, pci_device2) @mock.patch('oslo_utils.uuidutils.generate_uuid', return_value=uuids.pci_dev1) def test_pci_device_not_equivalent_with_none(self, mock_uuid): pci_device1 = pci_device.PciDevice.create(None, dev_dict) pci_device2 = pci_device.PciDevice.create(None, dev_dict) pci_device1.instance_uuid = 'aaa' pci_device2.instance_uuid = None self.assertNotEqual(pci_device1, pci_device2) @mock.patch('oslo_utils.uuidutils.generate_uuid', return_value=uuids.pci_dev1) def test_pci_device_not_equivalent_with_not_pci_device(self, mock_uuid): pci_device1 = pci_device.PciDevice.create(None, dev_dict) self.assertIsNotNone(pci_device1) self.assertNotEqual(pci_device1, 'foo') self.assertNotEqual(pci_device1, 1) self.assertNotEqual(pci_device1, objects.PciDeviceList()) def test_claim_device(self): self._create_fake_instance() devobj = pci_device.PciDevice.create(None, dev_dict) devobj.claim(self.inst.uuid) self.assertEqual(devobj.status, fields.PciDeviceStatus.CLAIMED) self.assertEqual(devobj.instance_uuid, self.inst.uuid) self.assertEqual(len(self.inst.pci_devices), 0) def test_claim_device_fail(self): self._create_fake_instance() devobj = pci_device.PciDevice.create(None, dev_dict) devobj.status = fields.PciDeviceStatus.ALLOCATED self.assertRaises(exception.PciDeviceInvalidStatus, devobj.claim, self.inst) def test_allocate_device(self): self._create_fake_instance() devobj = pci_device.PciDevice.create(None, dev_dict) devobj.claim(self.inst.uuid) devobj.allocate(self.inst) self.assertEqual(devobj.status, fields.PciDeviceStatus.ALLOCATED) self.assertEqual(devobj.instance_uuid, uuids.instance) self.assertEqual(len(self.inst.pci_devices), 1) self.assertEqual(self.inst.pci_devices[0].vendor_id, 'v') self.assertEqual(self.inst.pci_devices[0].status, fields.PciDeviceStatus.ALLOCATED) def test_allocate_device_fail_status(self): self._create_fake_instance() devobj = pci_device.PciDevice.create(None, dev_dict) devobj.status = 'removed' self.assertRaises(exception.PciDeviceInvalidStatus, devobj.allocate, self.inst) def test_allocate_device_fail_owner(self): self._create_fake_instance() inst_2 = instance.Instance() inst_2.uuid = uuids.instance_2 devobj = pci_device.PciDevice.create(None, dev_dict) devobj.claim(self.inst.uuid) self.assertRaises(exception.PciDeviceInvalidOwner, devobj.allocate, inst_2) def test_free_claimed_device(self): self._create_fake_instance() devobj = pci_device.PciDevice.create(None, dev_dict) devobj.claim(self.inst.uuid) devobj.free(self.inst) self.assertEqual(devobj.status, 
fields.PciDeviceStatus.AVAILABLE) self.assertIsNone(devobj.instance_uuid) def test_free_allocated_device(self): self._create_fake_instance() ctx = context.get_admin_context() devobj = pci_device.PciDevice._from_db_object( ctx, pci_device.PciDevice(), fake_db_dev) devobj.claim(self.inst.uuid) devobj.allocate(self.inst) self.assertEqual(len(self.inst.pci_devices), 1) devobj.free(self.inst) self.assertEqual(len(self.inst.pci_devices), 0) self.assertEqual(devobj.status, fields.PciDeviceStatus.AVAILABLE) self.assertIsNone(devobj.instance_uuid) def test_free_device_fail(self): self._create_fake_instance() devobj = pci_device.PciDevice.create(None, dev_dict) devobj.status = fields.PciDeviceStatus.REMOVED self.assertRaises(exception.PciDeviceInvalidStatus, devobj.free) def test_remove_device(self): self._create_fake_instance() devobj = pci_device.PciDevice.create(None, dev_dict) devobj.remove() self.assertEqual(devobj.status, fields.PciDeviceStatus.REMOVED) self.assertIsNone(devobj.instance_uuid) def test_remove_device_fail(self): self._create_fake_instance() devobj = pci_device.PciDevice.create(None, dev_dict) devobj.claim(self.inst.uuid) self.assertRaises(exception.PciDeviceInvalidStatus, devobj.remove) class TestPciDeviceObject(test_objects._LocalTest, _TestPciDeviceObject): pass class TestPciDeviceObjectRemote(test_objects._RemoteTest, _TestPciDeviceObject): pass fake_pci_devs = [fake_db_dev, fake_db_dev_1] class _TestPciDeviceListObject(object): def test_create_pci_device_list(self): ctxt = context.get_admin_context() devobj = pci_device.PciDevice.create(ctxt, dev_dict) pci_device_list = objects.PciDeviceList( context=ctxt, objects=[devobj]) self.assertEqual(1, len(pci_device_list)) self.assertIsInstance(pci_device_list[0], pci_device.PciDevice) @mock.patch.object(db, 'pci_device_get_all_by_node') def test_get_by_compute_node(self, mock_get): ctxt = context.get_admin_context() mock_get.return_value = fake_pci_devs devs = pci_device.PciDeviceList.get_by_compute_node(ctxt, 1) for i in range(len(fake_pci_devs)): self.assertIsInstance(devs[i], pci_device.PciDevice) self.assertEqual(fake_pci_devs[i]['vendor_id'], devs[i].vendor_id) mock_get.assert_called_once_with(ctxt, 1) @mock.patch.object(db, 'pci_device_get_all_by_instance_uuid') def test_get_by_instance_uuid(self, mock_get): ctxt = context.get_admin_context() fake_db_1 = dict(fake_db_dev, address='a1', status=fields.PciDeviceStatus.ALLOCATED, instance_uuid='1') fake_db_2 = dict(fake_db_dev, address='a2', status=fields.PciDeviceStatus.ALLOCATED, instance_uuid='1') mock_get.return_value = [fake_db_1, fake_db_2] devs = pci_device.PciDeviceList.get_by_instance_uuid(ctxt, '1') self.assertEqual(len(devs), 2) for i in range(len(fake_pci_devs)): self.assertIsInstance(devs[i], pci_device.PciDevice) self.assertEqual(devs[0].vendor_id, 'v') self.assertEqual(devs[1].vendor_id, 'v') mock_get.assert_called_once_with(ctxt, '1') class TestPciDeviceListObject(test_objects._LocalTest, _TestPciDeviceListObject): pass class TestPciDeviceListObjectRemote(test_objects._RemoteTest, _TestPciDeviceListObject): pass class _TestSRIOVPciDeviceObject(object): def _create_pci_devices(self, vf_product_id=1515, pf_product_id=1528, num_pfs=2, num_vfs=8): self.sriov_pf_devices = [] for dev in range(num_pfs): pci_dev = {'compute_node_id': 1, 'address': '0000:81:00.%d' % dev, 'vendor_id': '8086', 'product_id': '%d' % pf_product_id, 'status': 'available', 'request_id': None, 'dev_type': fields.PciDeviceType.SRIOV_PF, 'parent_addr': None, 'numa_node': 0} pci_dev_obj = 
objects.PciDevice.create(None, pci_dev) pci_dev_obj.id = dev + 81 pci_dev_obj.child_devices = [] self.sriov_pf_devices.append(pci_dev_obj) self.sriov_vf_devices = [] for dev in range(num_vfs): pci_dev = {'compute_node_id': 1, 'address': '0000:81:10.%d' % dev, 'vendor_id': '8086', 'product_id': '%d' % vf_product_id, 'status': 'available', 'request_id': None, 'dev_type': fields.PciDeviceType.SRIOV_VF, 'parent_addr': '0000:81:00.%d' % int(dev / 4), 'numa_node': 0} pci_dev_obj = objects.PciDevice.create(None, pci_dev) pci_dev_obj.id = dev + 1 pci_dev_obj.parent_device = self.sriov_pf_devices[int(dev / 4)] pci_dev_obj.parent_device.child_devices.append(pci_dev_obj) self.sriov_vf_devices.append(pci_dev_obj) def _create_fake_instance(self): self.inst = instance.Instance() self.inst.uuid = uuids.instance self.inst.pci_devices = pci_device.PciDeviceList() @mock.patch.object(db, 'pci_device_get_by_addr') def _create_fake_pci_device(self, mock_get, ctxt=None): if not ctxt: ctxt = context.get_admin_context() mock_get.return_value = fake_db_dev self.pci_device = pci_device.PciDevice.get_by_dev_addr(ctxt, 1, 'a') mock_get.assert_called_once_with(ctxt, 1, 'a') def _get_children_by_parent_address(self, addr): vf_devs = [] for dev in self.sriov_vf_devices: if dev.parent_addr == addr: vf_devs.append(dev) return vf_devs def _get_parent_by_address(self, addr): for dev in self.sriov_pf_devices: if dev.address == addr: return dev def test_claim_PF(self): self._create_fake_instance() self._create_pci_devices() devobj = self.sriov_pf_devices[0] devobj.claim(self.inst.uuid) self.assertEqual(devobj.status, fields.PciDeviceStatus.CLAIMED) self.assertEqual(devobj.instance_uuid, self.inst.uuid) self.assertEqual(len(self.inst.pci_devices), 0) # check if the all the dependants are UNCLAIMABLE self.assertTrue(all( [dev.status == fields.PciDeviceStatus.UNCLAIMABLE for dev in self._get_children_by_parent_address( self.sriov_pf_devices[0].address)])) def test_claim_VF(self): self._create_fake_instance() self._create_pci_devices() devobj = self.sriov_vf_devices[0] devobj.claim(self.inst.uuid) self.assertEqual(devobj.status, fields.PciDeviceStatus.CLAIMED) self.assertEqual(devobj.instance_uuid, self.inst.uuid) self.assertEqual(len(self.inst.pci_devices), 0) # check if parent device status has been changed to UNCLAIMABLE parent = self._get_parent_by_address(devobj.parent_addr) self.assertEqual(fields.PciDeviceStatus.UNCLAIMABLE, parent.status) def test_allocate_PF(self): self._create_fake_instance() self._create_pci_devices() devobj = self.sriov_pf_devices[0] devobj.claim(self.inst.uuid) devobj.allocate(self.inst) self.assertEqual(devobj.status, fields.PciDeviceStatus.ALLOCATED) self.assertEqual(devobj.instance_uuid, self.inst.uuid) self.assertEqual(len(self.inst.pci_devices), 1) # check if the all the dependants are UNAVAILABLE self.assertTrue(all( [dev.status == fields.PciDeviceStatus.UNAVAILABLE for dev in self._get_children_by_parent_address( self.sriov_pf_devices[0].address)])) def test_allocate_VF(self): self._create_fake_instance() self._create_pci_devices() devobj = self.sriov_vf_devices[0] devobj.claim(self.inst.uuid) devobj.allocate(self.inst) self.assertEqual(devobj.status, fields.PciDeviceStatus.ALLOCATED) self.assertEqual(devobj.instance_uuid, self.inst.uuid) self.assertEqual(len(self.inst.pci_devices), 1) # check if parent device status has been changed to UNAVAILABLE parent = self._get_parent_by_address(devobj.parent_addr) self.assertEqual(fields.PciDeviceStatus.UNAVAILABLE, parent.status) def 
test_claim_PF_fail(self): self._create_fake_instance() self._create_pci_devices() devobj = self.sriov_pf_devices[0] self.sriov_vf_devices[0].status = fields.PciDeviceStatus.CLAIMED self.assertRaises(exception.PciDeviceVFInvalidStatus, devobj.claim, self.inst) def test_claim_VF_fail(self): self._create_fake_instance() self._create_pci_devices() devobj = self.sriov_vf_devices[0] parent = self._get_parent_by_address(devobj.parent_addr) parent.status = fields.PciDeviceStatus.CLAIMED self.assertRaises(exception.PciDevicePFInvalidStatus, devobj.claim, self.inst) def test_allocate_PF_fail(self): self._create_fake_instance() self._create_pci_devices() devobj = self.sriov_pf_devices[0] self.sriov_vf_devices[0].status = fields.PciDeviceStatus.CLAIMED self.assertRaises(exception.PciDeviceVFInvalidStatus, devobj.allocate, self.inst) def test_allocate_VF_fail(self): self._create_fake_instance() self._create_pci_devices() devobj = self.sriov_vf_devices[0] parent = self._get_parent_by_address(devobj.parent_addr) parent.status = fields.PciDeviceStatus.CLAIMED self.assertRaises(exception.PciDevicePFInvalidStatus, devobj.allocate, self.inst) def test_free_allocated_PF(self): self._create_fake_instance() self._create_pci_devices() devobj = self.sriov_pf_devices[0] devobj.claim(self.inst.uuid) devobj.allocate(self.inst) devobj.free(self.inst) self.assertEqual(devobj.status, fields.PciDeviceStatus.AVAILABLE) self.assertIsNone(devobj.instance_uuid) # check if the all the dependants are AVAILABLE self.assertTrue(all( [dev.status == fields.PciDeviceStatus.AVAILABLE for dev in self._get_children_by_parent_address( self.sriov_pf_devices[0].address)])) def test_free_allocated_VF(self): self._create_fake_instance() self._create_pci_devices() vf = self.sriov_vf_devices[0] dependents = self._get_children_by_parent_address(vf.parent_addr) for devobj in dependents: devobj.claim(self.inst.uuid) devobj.allocate(self.inst) self.assertEqual(devobj.status, fields.PciDeviceStatus.ALLOCATED) for devobj in dependents[:-1]: devobj.free(self.inst) # check if parent device status is still UNAVAILABLE parent = self._get_parent_by_address(devobj.parent_addr) self.assertEqual(fields.PciDeviceStatus.UNAVAILABLE, parent.status) devobj = dependents[-1] devobj.free(self.inst) # check if parent device status is now AVAILABLE parent = self._get_parent_by_address(devobj.parent_addr) self.assertEqual(fields.PciDeviceStatus.AVAILABLE, parent.status) class TestSRIOVPciDeviceListObject(test_objects._LocalTest, _TestSRIOVPciDeviceObject): pass class TestSRIOVPciDeviceListObjectRemote(test_objects._RemoteTest, _TestSRIOVPciDeviceObject): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/objects/test_pci_device_pool.py0000664000175000017500000001056100000000000024042 0ustar00zuulzuul00000000000000# Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
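# Illustrative usage sketch for the PciDevicePool round trip exercised by the
# tests in this module.  The dict keys mirror the fake_pci_device_pools
# fixtures used below; the function is never called by the tests and only
# documents the expected behaviour.
def _example_pool_round_trip():
    from nova import objects as nova_objects

    pool = nova_objects.PciDevicePool.from_dict(
        {'product_id': 'fake-product', 'vendor_id': 'fake-vendor',
         'numa_node': 1, 'count': 2, 't1': 'v1', 't2': 'v2'})
    # Keys that are not regular pool fields ('t1', 't2') are collected into
    # pool.tags, and to_dict() flattens them back out next to the real
    # fields, e.g. pool.tags == {'t1': 'v1', 't2': 'v2'} and
    # pool.to_dict()['t1'] == 'v1'.
    return pool.to_dict()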
import copy from nova import objects from nova.objects import pci_device_pool from nova import test from nova.tests.unit import fake_pci_device_pools as fake_pci from nova.tests.unit.objects import test_objects class _TestPciDevicePoolObject(object): def test_pci_pool_from_dict_not_distructive(self): test_dict = copy.copy(fake_pci.fake_pool_dict) objects.PciDevicePool.from_dict(test_dict) self.assertEqual(fake_pci.fake_pool_dict, test_dict) def test_pci_pool_from_dict(self): pool_obj = objects.PciDevicePool.from_dict(fake_pci.fake_pool_dict) self.assertEqual(pool_obj.product_id, 'fake-product') self.assertEqual(pool_obj.vendor_id, 'fake-vendor') self.assertEqual(pool_obj.numa_node, 1) self.assertEqual(pool_obj.tags, {'t1': 'v1', 't2': 'v2'}) self.assertEqual(pool_obj.count, 2) def test_pci_pool_from_dict_bad_tags(self): bad_dict = copy.deepcopy(fake_pci.fake_pool_dict) bad_dict['bad'] = {'foo': 'bar'} self.assertRaises(ValueError, objects.PciDevicePool.from_dict, value=bad_dict) def test_pci_pool_from_dict_no_tags(self): dict_notag = copy.copy(fake_pci.fake_pool_dict) dict_notag.pop('t1') dict_notag.pop('t2') pool_obj = objects.PciDevicePool.from_dict(dict_notag) self.assertEqual(pool_obj.tags, {}) def test_pci_pool_to_dict(self): tags = {'t1': 'foo', 't2': 'bar'} pool_obj = objects.PciDevicePool(product_id='pid', tags=tags) pool_dict = pool_obj.to_dict() self.assertEqual({'product_id': 'pid', 't1': 'foo', 't2': 'bar'}, pool_dict) def test_pci_pool_to_dict_no_tags(self): pool_obj = objects.PciDevicePool(product_id='pid', tags={}) pool_dict = pool_obj.to_dict() self.assertEqual({'product_id': 'pid'}, pool_dict) def test_pci_pool_to_dict_with_tags_unset(self): pool_obj = objects.PciDevicePool(product_id='pid') pool_dict = pool_obj.to_dict() self.assertEqual({'product_id': 'pid'}, pool_dict) def test_obj_make_compatible(self): pool_obj = objects.PciDevicePool(product_id='pid', numa_node=1) primitive = pool_obj.obj_to_primitive() self.assertIn('numa_node', primitive['nova_object.data']) pool_obj.obj_make_compatible(primitive['nova_object.data'], '1.0') self.assertNotIn('numa_node', primitive['nova_object.data']) class TestPciDevicePoolObject(test_objects._LocalTest, _TestPciDevicePoolObject): pass class TestRemotePciDevicePoolObject(test_objects._RemoteTest, _TestPciDevicePoolObject): pass class TestConvertPciStats(test.NoDBTestCase): def test_from_pci_stats_obj(self): prim = fake_pci.fake_pool_list_primitive pools = pci_device_pool.from_pci_stats(prim) self.assertIsInstance(pools, pci_device_pool.PciDevicePoolList) self.assertEqual(len(pools), 1) def test_from_pci_stats_dict(self): prim = fake_pci.fake_pool_dict pools = pci_device_pool.from_pci_stats(prim) self.assertIsInstance(pools, pci_device_pool.PciDevicePoolList) self.assertEqual(len(pools), 1) def test_from_pci_stats_list_of_dicts(self): prim = fake_pci.fake_pool_dict pools = pci_device_pool.from_pci_stats([prim, prim]) self.assertIsInstance(pools, pci_device_pool.PciDevicePoolList) self.assertEqual(len(pools), 2) def test_from_pci_stats_bad(self): prim = "not a valid json string for an object" pools = pci_device_pool.from_pci_stats(prim) self.assertEqual(len(pools), 0) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_quotas.py0000664000175000017500000005121400000000000022233 0ustar00zuulzuul00000000000000# Copyright 2013 Rackspace Hosting. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova import context from nova.db.sqlalchemy import api as db_api from nova import exception from nova.objects import quotas as quotas_obj from nova import quota from nova import test from nova.tests.unit import fake_instance from nova.tests.unit.objects import test_objects QUOTAS = quota.QUOTAS class TestQuotasModule(test.NoDBTestCase): def setUp(self): super(TestQuotasModule, self).setUp() self.context = context.RequestContext('fake_user1', 'fake_proj1') self.instance = fake_instance.fake_db_instance( project_id='fake_proj2', user_id='fake_user2') def test_ids_from_instance_non_admin(self): project_id, user_id = quotas_obj.ids_from_instance( self.context, self.instance) self.assertEqual('fake_user2', user_id) self.assertEqual('fake_proj1', project_id) def test_ids_from_instance_admin(self): project_id, user_id = quotas_obj.ids_from_instance( self.context.elevated(), self.instance) self.assertEqual('fake_user2', user_id) self.assertEqual('fake_proj2', project_id) class _TestQuotasObject(object): def setUp(self): super(_TestQuotasObject, self).setUp() self.context = context.RequestContext('fake_user1', 'fake_proj1') self.instance = fake_instance.fake_db_instance( project_id='fake_proj2', user_id='fake_user2') @mock.patch('nova.db.api.quota_get', side_effect=exception.QuotaNotFound) @mock.patch('nova.objects.Quotas._create_limit_in_db') def test_create_limit(self, mock_create, mock_get): quotas_obj.Quotas.create_limit(self.context, 'fake-project', 'foo', 10, user_id='user') mock_create.assert_called_once_with(self.context, 'fake-project', 'foo', 10, user_id='user') @mock.patch('nova.db.api.quota_get') @mock.patch('nova.objects.Quotas._create_limit_in_db') def test_create_limit_exists_in_main(self, mock_create, mock_get): self.assertRaises(exception.QuotaExists, quotas_obj.Quotas.create_limit, self.context, 'fake-project', 'foo', 10, user_id='user') @mock.patch('nova.objects.Quotas._update_limit_in_db') def test_update_limit(self, mock_update): quotas_obj.Quotas.update_limit(self.context, 'fake-project', 'foo', 10, user_id='user') mock_update.assert_called_once_with(self.context, 'fake-project', 'foo', 10, user_id='user') @mock.patch.object(QUOTAS, 'count_as_dict') def test_count(self, mock_count): # key_pairs can't actually be counted across a project, this is just # for testing. mock_count.return_value = {'project': {'key_pairs': 5}, 'user': {'key_pairs': 4}} count = quotas_obj.Quotas.count(self.context, 'key_pairs', 'a-user') self.assertEqual(4, count) # key_pairs can't actually be counted across a project, this is just # for testing. 
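        # (Aside, a minimal sketch of the selection logic these assertions
        # exercise: Quotas.count() prefers the per-user figure returned by
        # count_as_dict() and only falls back to the per-project figure when
        # no user-scoped count is available, e.g.
        #
        #     counts = {'project': {'key_pairs': 5}, 'user': {'key_pairs': 4}}
        #     counts.get('user', counts['project'])['key_pairs']   # -> 4
        # )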
mock_count.return_value = {'project': {'key_pairs': 5}} count = quotas_obj.Quotas.count(self.context, 'key_pairs', 'a-user') self.assertEqual(5, count) mock_count.return_value = {'user': {'key_pairs': 3}} count = quotas_obj.Quotas.count(self.context, 'key_pairs', 'a-user') self.assertEqual(3, count) @mock.patch('nova.objects.Quotas.count_as_dict') def test_check_deltas(self, mock_count): self.flags(key_pairs=3, group='quota') self.flags(server_group_members=3, group='quota') def fake_count(context, resource): if resource in ('key_pairs', 'server_group_members'): return {'project': {'key_pairs': 2, 'server_group_members': 2}, 'user': {'key_pairs': 1, 'server_group_members': 2}} else: return {'user': {resource: 2}} mock_count.side_effect = fake_count deltas = {'key_pairs': 1, 'server_group_members': 1, 'security_group_rules': 1} project_id = 'fake-other-project' user_id = 'fake-other-user' quotas_obj.Quotas.check_deltas(self.context, deltas, check_project_id=project_id, check_user_id=user_id) # Should be called twice: once for key_pairs/server_group_members, # once for security_group_rules. self.assertEqual(2, mock_count.call_count) call1 = mock.call(self.context, 'key_pairs') call2 = mock.call(self.context, 'server_group_members') call3 = mock.call(self.context, 'security_group_rules') self.assertTrue(call1 in mock_count.mock_calls or call2 in mock_count.mock_calls) self.assertIn(call3, mock_count.mock_calls) @mock.patch('nova.objects.Quotas.count_as_dict') def test_check_deltas_zero(self, mock_count): # This will test that we will raise OverQuota if given a zero delta if # an object creation has put us over the allowed quota. # This is for the scenario where we recheck quota and delete an object # if we have gone over quota during a race. self.flags(key_pairs=3, group='quota') self.flags(server_group_members=3, group='quota') def fake_count(context, resource): return {'user': {resource: 4}} mock_count.side_effect = fake_count deltas = {'key_pairs': 0, 'server_group_members': 0} project_id = 'fake-other-project' user_id = 'fake-other-user' self.assertRaises(exception.OverQuota, quotas_obj.Quotas.check_deltas, self.context, deltas, check_project_id=project_id, check_user_id=user_id) # Should be called twice, once for key_pairs, once for # server_group_members self.assertEqual(2, mock_count.call_count) call1 = mock.call(self.context, 'key_pairs') call2 = mock.call(self.context, 'server_group_members') mock_count.assert_has_calls([call1, call2], any_order=True) @mock.patch('nova.objects.Quotas.count_as_dict') def test_check_deltas_negative(self, mock_count): """Test check_deltas with a negative delta. Negative deltas probably won't be used going forward for countable resources because there are no usage records to decrement and there won't be quota operations done when deleting resources. When resources are deleted, they will no longer be reflected in the count. 
""" self.flags(key_pairs=3, group='quota') mock_count.return_value = {'user': {'key_pairs': 4}} deltas = {'key_pairs': -1} # Should pass because the delta makes 3 key_pairs quotas_obj.Quotas.check_deltas(self.context, deltas, 'a-user', something='something') # args for the count function should get passed along mock_count.assert_called_once_with(self.context, 'key_pairs', 'a-user', something='something') @mock.patch('nova.objects.Quotas.count_as_dict') @mock.patch('nova.objects.Quotas.limit_check_project_and_user') def test_check_deltas_limit_check_scoping(self, mock_check, mock_count): # check_project_id and check_user_id kwargs should get passed along to # limit_check_project_and_user() mock_count.return_value = {'project': {'foo': 5}, 'user': {'foo': 1}} deltas = {'foo': 1} quotas_obj.Quotas.check_deltas(self.context, deltas, 'a-project') mock_check.assert_called_once_with(self.context, project_values={'foo': 6}, user_values={'foo': 2}) mock_check.reset_mock() quotas_obj.Quotas.check_deltas(self.context, deltas, 'a-project', check_project_id='a-project') mock_check.assert_called_once_with(self.context, project_values={'foo': 6}, user_values={'foo': 2}, project_id='a-project') mock_check.reset_mock() quotas_obj.Quotas.check_deltas(self.context, deltas, 'a-project', check_user_id='a-user') mock_check.assert_called_once_with(self.context, project_values={'foo': 6}, user_values={'foo': 2}, user_id='a-user') @mock.patch('nova.objects.Quotas._update_limit_in_db', side_effect=exception.QuotaNotFound) @mock.patch('nova.db.api.quota_update') def test_update_limit_main(self, mock_update_main, mock_update): quotas_obj.Quotas.update_limit(self.context, 'fake-project', 'foo', 10, user_id='user') mock_update.assert_called_once_with(self.context, 'fake-project', 'foo', 10, user_id='user') mock_update_main.assert_called_once_with(self.context, 'fake-project', 'foo', 10, user_id='user') @mock.patch('nova.objects.Quotas._get_from_db') def test_get(self, mock_get): qclass = quotas_obj.Quotas.get(self.context, 'fake-project', 'foo', user_id='user') mock_get.assert_called_once_with(self.context, 'fake-project', 'foo', user_id='user') self.assertEqual(mock_get.return_value, qclass) @mock.patch('nova.objects.Quotas._get_from_db', side_effect=exception.QuotaNotFound) @mock.patch('nova.db.api.quota_get') def test_get_main(self, mock_get_main, mock_get): quotas_obj.Quotas.get(self.context, 'fake-project', 'foo', user_id='user') mock_get.assert_called_once_with(self.context, 'fake-project', 'foo', user_id='user') mock_get_main.assert_called_once_with(self.context, 'fake-project', 'foo', user_id='user') @mock.patch('nova.objects.Quotas._get_all_from_db') @mock.patch('nova.db.api.quota_get_all') def test_get_all(self, mock_get_all_main, mock_get_all): mock_get_all.return_value = ['api1'] mock_get_all_main.return_value = ['main1', 'main2'] quotas = quotas_obj.Quotas.get_all(self.context, 'fake-project') mock_get_all.assert_called_once_with(self.context, 'fake-project') mock_get_all_main.assert_called_once_with(self.context, 'fake-project') self.assertEqual(['api1', 'main1', 'main2'], quotas) @mock.patch('nova.objects.Quotas._get_all_from_db_by_project') @mock.patch('nova.db.api.quota_get_all_by_project') def test_get_all_by_project(self, mock_get_all_main, mock_get_all): mock_get_all.return_value = {'project_id': 'fake-project', 'fixed_ips': 20, 'floating_ips': 5} mock_get_all_main.return_value = {'project_id': 'fake-project', 'fixed_ips': 10} quotas_dict = quotas_obj.Quotas.get_all_by_project(self.context, 
'fake-project') mock_get_all.assert_called_once_with(self.context, 'fake-project') mock_get_all_main.assert_called_once_with(self.context, 'fake-project') expected = {'project_id': 'fake-project', 'fixed_ips': 20, 'floating_ips': 5} self.assertEqual(expected, quotas_dict) @mock.patch('nova.objects.Quotas._get_all_from_db_by_project_and_user') @mock.patch('nova.db.api.quota_get_all_by_project_and_user') def test_get_all_by_project_and_user(self, mock_get_all_main, mock_get_all): mock_get_all.return_value = {'project_id': 'fake-project', 'user_id': 'user', 'instances': 5, 'cores': 10} mock_get_all_main.return_value = {'project_id': 'fake-project', 'user_id': 'user', 'instances': 10, 'ram': 8192} quotas_dict = quotas_obj.Quotas.get_all_by_project_and_user( self.context, 'fake-project', 'user') mock_get_all.assert_called_once_with(self.context, 'fake-project', 'user') mock_get_all_main.assert_called_once_with(self.context, 'fake-project', 'user') expected = {'project_id': 'fake-project', 'user_id': 'user', 'instances': 5, 'cores': 10, 'ram': 8192} self.assertEqual(expected, quotas_dict) @mock.patch('nova.objects.Quotas._destroy_all_in_db_by_project') def test_destroy_all_by_project(self, mock_destroy_all): quotas_obj.Quotas.destroy_all_by_project(self.context, 'fake-project') mock_destroy_all.assert_called_once_with(self.context, 'fake-project') @mock.patch('nova.objects.Quotas._destroy_all_in_db_by_project', side_effect=exception.ProjectQuotaNotFound( project_id='fake-project')) @mock.patch('nova.db.api.quota_destroy_all_by_project') def test_destroy_all_by_project_main(self, mock_destroy_all_main, mock_destroy_all): quotas_obj.Quotas.destroy_all_by_project(self.context, 'fake-project') mock_destroy_all.assert_called_once_with(self.context, 'fake-project') mock_destroy_all_main.assert_called_once_with(self.context, 'fake-project') @mock.patch('nova.objects.Quotas._destroy_all_in_db_by_project_and_user') def test_destroy_all_by_project_and_user(self, mock_destroy_all): quotas_obj.Quotas.destroy_all_by_project_and_user(self.context, 'fake-project', 'user') mock_destroy_all.assert_called_once_with(self.context, 'fake-project', 'user') @mock.patch('nova.objects.Quotas._destroy_all_in_db_by_project_and_user', side_effect=exception.ProjectUserQuotaNotFound( user_id='user', project_id='fake-project')) @mock.patch('nova.db.api.quota_destroy_all_by_project_and_user') def test_destroy_all_by_project_and_user_main(self, mock_destroy_all_main, mock_destroy_all): quotas_obj.Quotas.destroy_all_by_project_and_user(self.context, 'fake-project', 'user') mock_destroy_all.assert_called_once_with(self.context, 'fake-project', 'user') mock_destroy_all_main.assert_called_once_with(self.context, 'fake-project', 'user') @mock.patch('nova.objects.Quotas._get_class_from_db') def test_get_class(self, mock_get): qclass = quotas_obj.Quotas.get_class(self.context, 'class', 'resource') mock_get.assert_called_once_with(self.context, 'class', 'resource') self.assertEqual(mock_get.return_value, qclass) @mock.patch('nova.objects.Quotas._get_class_from_db', side_effect=exception.QuotaClassNotFound(class_name='class')) @mock.patch('nova.db.api.quota_class_get') def test_get_class_main(self, mock_get_main, mock_get): qclass = quotas_obj.Quotas.get_class(self.context, 'class', 'resource') mock_get.assert_called_once_with(self.context, 'class', 'resource') mock_get_main.assert_called_once_with(self.context, 'class', 'resource') self.assertEqual(mock_get_main.return_value, qclass) 
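# ---------------------------------------------------------------------------
# Illustrative sketch only (standalone, not used by the tests): the *_main
# tests above all exercise the same "API database first, legacy main database
# as fallback" pattern.  A simplified stand-in, with both lookups passed in
# explicitly, looks like this:
def _sketch_get_with_fallback(context, project_id, resource,
                              api_get, main_get):
    try:
        # Newer deployments keep quota records in the API database.
        return api_get(context, project_id, resource)
    except exception.QuotaNotFound:
        # Records that were never migrated still live in the main (cell)
        # database, so fall back to it.
        return main_get(context, project_id, resource)
# For the dict-returning variants (get_all_by_project and friends) the two
# results are merged with the API database taking precedence, roughly
# dict(main_result, **api_result), which is why fixed_ips comes back as 20
# rather than 10 in test_get_all_by_project above.
# ---------------------------------------------------------------------------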
@mock.patch('nova.objects.Quotas._get_all_class_from_db_by_name') def test_get_default_class(self, mock_get_all): qclass = quotas_obj.Quotas.get_default_class(self.context) mock_get_all.assert_called_once_with(self.context, db_api._DEFAULT_QUOTA_NAME) self.assertEqual(mock_get_all.return_value, qclass) @mock.patch('nova.objects.Quotas._get_all_class_from_db_by_name', side_effect=exception.QuotaClassNotFound(class_name='class')) @mock.patch('nova.db.api.quota_class_get_default') def test_get_default_class_main(self, mock_get_default_main, mock_get_all): qclass = quotas_obj.Quotas.get_default_class(self.context) mock_get_all.assert_called_once_with(self.context, db_api._DEFAULT_QUOTA_NAME) mock_get_default_main.assert_called_once_with(self.context) self.assertEqual(mock_get_default_main.return_value, qclass) @mock.patch('nova.objects.Quotas._get_all_class_from_db_by_name') @mock.patch('nova.db.api.quota_class_get_all_by_name') def test_get_class_by_name(self, mock_get_all_main, mock_get_all): mock_get_all.return_value = {'class_name': 'foo', 'cores': 10, 'instances': 5} mock_get_all_main.return_value = {'class_name': 'foo', 'cores': 20, 'fixed_ips': 10} quotas_dict = quotas_obj.Quotas.get_all_class_by_name(self.context, 'foo') mock_get_all.assert_called_once_with(self.context, 'foo') mock_get_all_main.assert_called_once_with(self.context, 'foo') expected = {'class_name': 'foo', 'cores': 10, 'instances': 5, 'fixed_ips': 10} self.assertEqual(expected, quotas_dict) @mock.patch('nova.db.api.quota_class_get', side_effect=exception.QuotaClassNotFound(class_name='class')) @mock.patch('nova.objects.Quotas._create_class_in_db') def test_create_class(self, mock_create, mock_get): quotas_obj.Quotas.create_class(self.context, 'class', 'resource', 'limit') mock_create.assert_called_once_with(self.context, 'class', 'resource', 'limit') @mock.patch('nova.db.api.quota_class_get') @mock.patch('nova.objects.Quotas._create_class_in_db') def test_create_class_exists_in_main(self, mock_create, mock_get): self.assertRaises(exception.QuotaClassExists, quotas_obj.Quotas.create_class, self.context, 'class', 'resource', 'limit') @mock.patch('nova.objects.Quotas._update_class_in_db') def test_update_class(self, mock_update): quotas_obj.Quotas.update_class(self.context, 'class', 'resource', 'limit') mock_update.assert_called_once_with(self.context, 'class', 'resource', 'limit') @mock.patch('nova.objects.Quotas._update_class_in_db', side_effect=exception.QuotaClassNotFound(class_name='class')) @mock.patch('nova.db.api.quota_class_update') def test_update_class_main(self, mock_update_main, mock_update): quotas_obj.Quotas.update_class(self.context, 'class', 'resource', 'limit') mock_update.assert_called_once_with(self.context, 'class', 'resource', 'limit') mock_update_main.assert_called_once_with(self.context, 'class', 'resource', 'limit') class TestQuotasObject(_TestQuotasObject, test_objects._LocalTest): pass class TestRemoteQuotasObject(_TestQuotasObject, test_objects._RemoteTest): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_request_spec.py0000664000175000017500000020700100000000000023416 0ustar00zuulzuul00000000000000# Copyright 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import mock from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import uuidutils from oslo_versionedobjects import base as ovo_base from nova import context from nova import exception from nova.network import model as network_model from nova import objects from nova.objects import base from nova.objects import request_spec from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_flavor from nova.tests.unit import fake_instance from nova.tests.unit import fake_network_cache_model from nova.tests.unit import fake_request_spec from nova.tests.unit.objects import test_objects class _TestRequestSpecObject(object): def test_image_meta_from_image_as_object(self): # Just isolating the test for the from_dict() method image_meta = objects.ImageMeta(name='foo') spec = objects.RequestSpec() spec._image_meta_from_image(image_meta) self.assertEqual(image_meta, spec.image) @mock.patch.object(objects.ImageMeta, 'from_dict') def test_image_meta_from_image_as_dict(self, from_dict): # Just isolating the test for the from_dict() method image_meta = objects.ImageMeta(name='foo') from_dict.return_value = image_meta spec = objects.RequestSpec() spec._image_meta_from_image({'name': 'foo'}) self.assertEqual(image_meta, spec.image) def test_image_meta_from_image_as_none(self): # just add a dumb check to have a full coverage spec = objects.RequestSpec() spec._image_meta_from_image(None) self.assertIsNone(spec.image) @mock.patch.object(base, 'obj_to_primitive') def test_to_legacy_image(self, obj_to_primitive): spec = objects.RequestSpec(image=objects.ImageMeta()) fake_dict = mock.Mock() obj_to_primitive.return_value = fake_dict self.assertEqual(fake_dict, spec._to_legacy_image()) obj_to_primitive.assert_called_once_with(spec.image) @mock.patch.object(base, 'obj_to_primitive') def test_to_legacy_image_with_none(self, obj_to_primitive): spec = objects.RequestSpec(image=None) self.assertEqual({}, spec._to_legacy_image()) self.assertFalse(obj_to_primitive.called) def test_from_instance_as_object(self): instance = objects.Instance() instance.uuid = uuidutils.generate_uuid() instance.numa_topology = None instance.pci_requests = None instance.project_id = fakes.FAKE_PROJECT_ID instance.user_id = fakes.FAKE_USER_ID instance.availability_zone = 'nova' spec = objects.RequestSpec() spec._from_instance(instance) instance_fields = ['numa_topology', 'pci_requests', 'uuid', 'project_id', 'user_id', 'availability_zone'] for field in instance_fields: if field == 'uuid': self.assertEqual(getattr(instance, field), getattr(spec, 'instance_uuid')) else: self.assertEqual(getattr(instance, field), getattr(spec, field)) def test_from_instance_as_dict(self): instance = dict(uuid=uuidutils.generate_uuid(), numa_topology=None, pci_requests=None, project_id=fakes.FAKE_PROJECT_ID, user_id=fakes.FAKE_USER_ID, availability_zone='nova') spec = objects.RequestSpec() spec._from_instance(instance) instance_fields = ['numa_topology', 'pci_requests', 'uuid', 'project_id', 'user_id', 'availability_zone'] for field in instance_fields: 
if field == 'uuid': self.assertEqual(instance.get(field), getattr(spec, 'instance_uuid')) else: self.assertEqual(instance.get(field), getattr(spec, field)) @mock.patch.object(objects.InstancePCIRequests, 'from_request_spec_instance_props') def test_from_instance_with_pci_requests(self, pci_from_spec): fake_pci_requests = objects.InstancePCIRequests() pci_from_spec.return_value = fake_pci_requests instance = dict( uuid=uuidutils.generate_uuid(), root_gb=10, ephemeral_gb=0, memory_mb=10, vcpus=1, numa_topology=None, project_id=fakes.FAKE_PROJECT_ID, user_id=fakes.FAKE_USER_ID, availability_zone='nova', pci_requests={ 'instance_uuid': 'fakeid', 'requests': [{'count': 1, 'spec': [{'vendor_id': '8086'}]}]}) spec = objects.RequestSpec() spec._from_instance(instance) pci_from_spec.assert_called_once_with(instance['pci_requests']) self.assertEqual(fake_pci_requests, spec.pci_requests) def test_from_instance_with_numa_stuff(self): instance = dict( uuid=uuidutils.generate_uuid(), root_gb=10, ephemeral_gb=0, memory_mb=10, vcpus=1, project_id=fakes.FAKE_PROJECT_ID, user_id=fakes.FAKE_USER_ID, availability_zone='nova', pci_requests=None, numa_topology=fake_request_spec.INSTANCE_NUMA_TOPOLOGY, ) spec = objects.RequestSpec() spec._from_instance(instance) self.assertIsInstance(spec.numa_topology, objects.InstanceNUMATopology) cells = spec.numa_topology.cells self.assertEqual(2, len(cells)) self.assertIsInstance(cells[0], objects.InstanceNUMACell) def test_from_flavor_as_object(self): flavor = objects.Flavor() spec = objects.RequestSpec() spec._from_flavor(flavor) self.assertEqual(flavor, spec.flavor) def test_from_flavor_as_dict(self): flavor_dict = dict(id=1) ctxt = context.RequestContext('fake', 'fake') spec = objects.RequestSpec(ctxt) spec._from_flavor(flavor_dict) self.assertIsInstance(spec.flavor, objects.Flavor) self.assertEqual({'id': 1}, spec.flavor.obj_get_changes()) def test_to_legacy_instance(self): spec = objects.RequestSpec() spec.flavor = objects.Flavor(root_gb=10, ephemeral_gb=0, memory_mb=10, vcpus=1) spec.numa_topology = None spec.pci_requests = None spec.project_id = fakes.FAKE_PROJECT_ID spec.user_id = fakes.FAKE_USER_ID spec.availability_zone = 'nova' instance = spec._to_legacy_instance() self.assertEqual({'root_gb': 10, 'ephemeral_gb': 0, 'memory_mb': 10, 'vcpus': 1, 'numa_topology': None, 'pci_requests': None, 'project_id': fakes.FAKE_PROJECT_ID, 'user_id': fakes.FAKE_USER_ID, 'availability_zone': 'nova'}, instance) def test_to_legacy_instance_with_unset_values(self): spec = objects.RequestSpec() self.assertEqual({}, spec._to_legacy_instance()) def test_from_retry(self): retry_dict = {'num_attempts': 1, 'hosts': [['fake1', 'node1']]} ctxt = context.RequestContext('fake', 'fake') spec = objects.RequestSpec(ctxt) spec._from_retry(retry_dict) self.assertIsInstance(spec.retry, objects.SchedulerRetries) self.assertEqual(1, spec.retry.num_attempts) self.assertIsInstance(spec.retry.hosts, objects.ComputeNodeList) self.assertEqual(1, len(spec.retry.hosts)) self.assertEqual('fake1', spec.retry.hosts[0].host) self.assertEqual('node1', spec.retry.hosts[0].hypervisor_hostname) def test_from_retry_missing_values(self): retry_dict = {} ctxt = context.RequestContext('fake', 'fake') spec = objects.RequestSpec(ctxt) spec._from_retry(retry_dict) self.assertIsNone(spec.retry) def test_populate_group_info(self): filt_props = {} filt_props['group_updated'] = True filt_props['group_policies'] = set(['affinity']) filt_props['group_hosts'] = set(['fake1']) filt_props['group_members'] = 
set(['fake-instance1']) # Make sure it can handle group uuid not being present. for group_uuid in (None, uuids.group_uuid): if group_uuid: filt_props['group_uuid'] = group_uuid spec = objects.RequestSpec() spec._populate_group_info(filt_props) self.assertIsInstance(spec.instance_group, objects.InstanceGroup) self.assertEqual('affinity', spec.instance_group.policy) self.assertEqual(['fake1'], spec.instance_group.hosts) self.assertEqual(['fake-instance1'], spec.instance_group.members) if group_uuid: self.assertEqual(uuids.group_uuid, spec.instance_group.uuid) def test_populate_group_info_missing_values(self): filt_props = {} spec = objects.RequestSpec() spec._populate_group_info(filt_props) self.assertIsNone(spec.instance_group) def test_from_limits(self): limits_dict = {'numa_topology': None, 'vcpu': 1.0, 'disk_gb': 1.0, 'memory_mb': 1.0} spec = objects.RequestSpec() spec._from_limits(limits_dict) self.assertIsInstance(spec.limits, objects.SchedulerLimits) self.assertIsNone(spec.limits.numa_topology) self.assertEqual(1, spec.limits.vcpu) self.assertEqual(1, spec.limits.disk_gb) self.assertEqual(1, spec.limits.memory_mb) def test_from_limits_missing_values(self): limits_dict = {} spec = objects.RequestSpec() spec._from_limits(limits_dict) self.assertIsInstance(spec.limits, objects.SchedulerLimits) self.assertIsNone(spec.limits.numa_topology) self.assertIsNone(spec.limits.vcpu) self.assertIsNone(spec.limits.disk_gb) self.assertIsNone(spec.limits.memory_mb) def test_from_hints(self): hints_dict = {'foo_str': '1', 'bar_list': ['2']} spec = objects.RequestSpec() spec._from_hints(hints_dict) expected = {'foo_str': ['1'], 'bar_list': ['2']} self.assertEqual(expected, spec.scheduler_hints) def test_from_hints_with_no_hints(self): spec = objects.RequestSpec() spec._from_hints(None) self.assertIsNone(spec.scheduler_hints) @mock.patch.object(objects.SchedulerLimits, 'from_dict') def test_from_primitives(self, mock_limits): spec_dict = {'instance_type': objects.Flavor(), 'instance_properties': objects.Instance( uuid=uuidutils.generate_uuid(), numa_topology=None, pci_requests=None, project_id=1, user_id=2, availability_zone='nova')} filt_props = {} # We seriously don't care about the return values, we just want to make # sure that all the fields are set mock_limits.return_value = None ctxt = context.RequestContext('fake', 'fake') spec = objects.RequestSpec.from_primitives(ctxt, spec_dict, filt_props) mock_limits.assert_called_once_with({}) # Make sure that all fields are set using that helper method skip = ['id', 'security_groups', 'network_metadata', 'is_bfv', 'request_level_params'] for field in [f for f in spec.obj_fields if f not in skip]: self.assertTrue(spec.obj_attr_is_set(field), 'Field: %s is not set' % field) # just making sure that the context is set by the method self.assertEqual(ctxt, spec._context) def test_from_primitives_with_requested_destination(self): destination = objects.Destination(host='foo') spec_dict = {} filt_props = {'requested_destination': destination} ctxt = context.RequestContext('fake', 'fake') spec = objects.RequestSpec.from_primitives(ctxt, spec_dict, filt_props) self.assertEqual(destination, spec.requested_destination) def test_from_components(self): ctxt = context.RequestContext('fake-user', 'fake-project') destination = objects.Destination(host='foo') self.assertFalse(destination.allow_cross_cell_move) instance = fake_instance.fake_instance_obj(ctxt) image = {'id': uuids.image_id, 'properties': {'mappings': []}, 'status': 'fake-status', 'location': 'far-away'} flavor 
= fake_flavor.fake_flavor_obj(ctxt) filter_properties = {'requested_destination': destination} instance_group = None spec = objects.RequestSpec.from_components(ctxt, instance.uuid, image, flavor, instance.numa_topology, instance.pci_requests, filter_properties, instance_group, instance.availability_zone, objects.SecurityGroupList()) # Make sure that all fields are set using that helper method skip = ['id', 'network_metadata', 'is_bfv', 'request_level_params'] for field in [f for f in spec.obj_fields if f not in skip]: self.assertTrue(spec.obj_attr_is_set(field), 'Field: %s is not set' % field) # just making sure that the context is set by the method self.assertEqual(ctxt, spec._context) self.assertEqual(destination, spec.requested_destination) self.assertFalse(spec.requested_destination.allow_cross_cell_move) @mock.patch('nova.objects.RequestSpec._populate_group_info') def test_from_components_with_instance_group(self, mock_pgi): # This test makes sure that we don't overwrite instance group passed # to from_components ctxt = context.RequestContext('fake-user', 'fake-project') instance = fake_instance.fake_instance_obj(ctxt) image = {'id': uuids.image_id, 'properties': {'mappings': []}, 'status': 'fake-status', 'location': 'far-away'} flavor = fake_flavor.fake_flavor_obj(ctxt) filter_properties = {'fake': 'property'} instance_group = objects.InstanceGroup() objects.RequestSpec.from_components(ctxt, instance.uuid, image, flavor, instance.numa_topology, instance.pci_requests, filter_properties, instance_group, instance.availability_zone) self.assertFalse(mock_pgi.called) @mock.patch('nova.objects.RequestSpec._populate_group_info') def test_from_components_without_instance_group(self, mock_pgi): # This test makes sure that we populate instance group if not # present ctxt = context.RequestContext(fakes.FAKE_USER_ID, fakes.FAKE_PROJECT_ID) instance = fake_instance.fake_instance_obj(ctxt) image = {'id': uuids.image_id, 'properties': {'mappings': []}, 'status': 'fake-status', 'location': 'far-away'} flavor = fake_flavor.fake_flavor_obj(ctxt) filter_properties = {'fake': 'property'} objects.RequestSpec.from_components(ctxt, instance.uuid, image, flavor, instance.numa_topology, instance.pci_requests, filter_properties, None, instance.availability_zone) mock_pgi.assert_called_once_with(filter_properties) @mock.patch('nova.objects.RequestSpec._populate_group_info') def test_from_components_without_security_groups(self, mock_pgi): # This test makes sure that we populate instance group if not # present ctxt = context.RequestContext(fakes.FAKE_USER_ID, fakes.FAKE_PROJECT_ID) instance = fake_instance.fake_instance_obj(ctxt) image = {'id': uuids.image_id, 'properties': {'mappings': []}, 'status': 'fake-status', 'location': 'far-away'} flavor = fake_flavor.fake_flavor_obj(ctxt) filter_properties = {'fake': 'property'} spec = objects.RequestSpec.from_components(ctxt, instance.uuid, image, flavor, instance.numa_topology, instance.pci_requests, filter_properties, None, instance.availability_zone) self.assertNotIn('security_groups', spec) def test_from_components_with_port_resource_request(self, ): ctxt = context.RequestContext(fakes.FAKE_USER_ID, fakes.FAKE_PROJECT_ID) instance = fake_instance.fake_instance_obj(ctxt) image = {'id': uuids.image_id, 'properties': {'mappings': []}, 'status': 'fake-status', 'location': 'far-away'} flavor = fake_flavor.fake_flavor_obj(ctxt) filter_properties = {'fake': 'property'} rg = request_spec.RequestGroup() spec = objects.RequestSpec.from_components(ctxt, instance.uuid, 
image, flavor, instance.numa_topology, instance.pci_requests, filter_properties, None, instance.availability_zone, port_resource_requests=[rg]) self.assertListEqual([rg], spec.requested_resources) def test_get_scheduler_hint(self): spec_obj = objects.RequestSpec(scheduler_hints={'foo_single': ['1'], 'foo_mul': ['1', '2']}) self.assertEqual('1', spec_obj.get_scheduler_hint('foo_single')) self.assertEqual(['1', '2'], spec_obj.get_scheduler_hint('foo_mul')) self.assertIsNone(spec_obj.get_scheduler_hint('oops')) self.assertEqual('bar', spec_obj.get_scheduler_hint('oops', default='bar')) def test_get_scheduler_hint_with_no_hints(self): spec_obj = objects.RequestSpec() self.assertEqual('bar', spec_obj.get_scheduler_hint('oops', default='bar')) @mock.patch.object(objects.RequestSpec, '_to_legacy_instance') @mock.patch.object(base, 'obj_to_primitive') def test_to_legacy_request_spec_dict(self, image_to_primitive, spec_to_legacy_instance): fake_image_dict = mock.Mock() image_to_primitive.return_value = fake_image_dict fake_instance = {'root_gb': 1.0, 'ephemeral_gb': 1.0, 'memory_mb': 1.0, 'vcpus': 1, 'numa_topology': None, 'pci_requests': None, 'project_id': fakes.FAKE_PROJECT_ID, 'availability_zone': 'nova', 'uuid': '1'} spec_to_legacy_instance.return_value = fake_instance fake_flavor = objects.Flavor(root_gb=10, ephemeral_gb=0, memory_mb=512, vcpus=1) spec = objects.RequestSpec(num_instances=1, image=objects.ImageMeta(), # instance properties numa_topology=None, pci_requests=None, project_id=1, availability_zone='nova', instance_uuid=uuids.instance, flavor=fake_flavor) spec_dict = spec.to_legacy_request_spec_dict() expected = {'num_instances': 1, 'image': fake_image_dict, 'instance_properties': fake_instance, 'instance_type': fake_flavor} self.assertEqual(expected, spec_dict) def test_to_legacy_request_spec_dict_with_unset_values(self): spec = objects.RequestSpec() self.assertEqual({'num_instances': 1, 'image': {}, 'instance_properties': {}, 'instance_type': {}}, spec.to_legacy_request_spec_dict()) def test_to_legacy_filter_properties_dict(self): fake_numa_limits = objects.NUMATopologyLimits() fake_computes_obj = objects.ComputeNodeList( objects=[objects.ComputeNode(host='fake1', hypervisor_hostname='node1')]) fake_dest = objects.Destination(host='fakehost') spec = objects.RequestSpec( ignore_hosts=['ignoredhost'], force_hosts=['fakehost'], force_nodes=['fakenode'], retry=objects.SchedulerRetries(num_attempts=1, hosts=fake_computes_obj), limits=objects.SchedulerLimits(numa_topology=fake_numa_limits, vcpu=1.0, disk_gb=10.0, memory_mb=8192.0), instance_group=objects.InstanceGroup(hosts=['fake1'], policy='affinity', members=['inst1', 'inst2'], uuid=uuids.group_uuid), scheduler_hints={'foo': ['bar']}, requested_destination=fake_dest) expected = {'ignore_hosts': ['ignoredhost'], 'force_hosts': ['fakehost'], 'force_nodes': ['fakenode'], 'retry': {'num_attempts': 1, 'hosts': [['fake1', 'node1']]}, 'limits': {'numa_topology': fake_numa_limits, 'vcpu': 1.0, 'disk_gb': 10.0, 'memory_mb': 8192.0}, 'group_updated': True, 'group_hosts': set(['fake1']), 'group_policies': set(['affinity']), 'group_members': set(['inst1', 'inst2']), 'group_uuid': uuids.group_uuid, 'scheduler_hints': {'foo': 'bar'}, 'requested_destination': fake_dest} self.assertEqual(expected, spec.to_legacy_filter_properties_dict()) def test_to_legacy_filter_properties_dict_with_nullable_values(self): spec = objects.RequestSpec(force_hosts=None, force_nodes=None, retry=None, limits=None, instance_group=None, scheduler_hints=None) 
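        # (Aside: as the assertion below shows, fields that are None or unset
        # are simply dropped when converting to the legacy filter_properties
        # dict.  For populated specs, test_to_legacy_filter_properties_dict
        # above also shows the legacy flattening of hints, roughly
        #
        #     {k: v[0] if len(v) == 1 else v
        #      for k, v in spec.scheduler_hints.items()}
        #
        # so {'foo': ['bar']} becomes {'foo': 'bar'}, matching what
        # get_scheduler_hint() returns for single-element hints.)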
self.assertEqual({}, spec.to_legacy_filter_properties_dict()) def test_to_legacy_filter_properties_dict_with_unset_values(self): spec = objects.RequestSpec() self.assertEqual({}, spec.to_legacy_filter_properties_dict()) def test_ensure_network_metadata(self): network_a = fake_network_cache_model.new_network({ 'physical_network': 'foo', 'tunneled': False}) vif_a = fake_network_cache_model.new_vif({'network': network_a}) network_b = fake_network_cache_model.new_network({ 'physical_network': 'foo', 'tunneled': False}) vif_b = fake_network_cache_model.new_vif({'network': network_b}) network_c = fake_network_cache_model.new_network({ 'physical_network': 'bar', 'tunneled': False}) vif_c = fake_network_cache_model.new_vif({'network': network_c}) network_d = fake_network_cache_model.new_network({ 'physical_network': None, 'tunneled': True}) vif_d = fake_network_cache_model.new_vif({'network': network_d}) nw_info = network_model.NetworkInfo([vif_a, vif_b, vif_c, vif_d]) info_cache = objects.InstanceInfoCache(network_info=nw_info, instance_uuid=uuids.instance) instance = objects.Instance(id=3, uuid=uuids.instance, info_cache=info_cache) spec = objects.RequestSpec() self.assertNotIn('network_metadata', spec) spec.ensure_network_metadata(instance) self.assertIn('network_metadata', spec) self.assertIsInstance(spec.network_metadata, objects.NetworkMetadata) self.assertEqual(spec.network_metadata.physnets, set(['foo', 'bar'])) self.assertTrue(spec.network_metadata.tunneled) def test_ensure_network_metadata_missing(self): nw_info = network_model.NetworkInfo([]) info_cache = objects.InstanceInfoCache(network_info=nw_info, instance_uuid=uuids.instance) instance = objects.Instance(id=3, uuid=uuids.instance, info_cache=info_cache) spec = objects.RequestSpec() self.assertNotIn('network_metadata', spec) spec.ensure_network_metadata(instance) self.assertNotIn('network_metadata', spec) @mock.patch.object(request_spec.RequestSpec, '_get_by_instance_uuid_from_db') @mock.patch('nova.objects.InstanceGroup.get_by_uuid') def test_get_by_instance_uuid(self, mock_get_ig, get_by_uuid): fake_spec = fake_request_spec.fake_db_spec() get_by_uuid.return_value = fake_spec mock_get_ig.return_value = objects.InstanceGroup(name='fresh') req_obj = request_spec.RequestSpec.get_by_instance_uuid(self.context, fake_spec['instance_uuid']) self.assertEqual(1, req_obj.num_instances) # ignore_hosts is not persisted self.assertIsNone(req_obj.ignore_hosts) self.assertEqual('fake', req_obj.project_id) self.assertEqual({'hint': ['over-there']}, req_obj.scheduler_hints) self.assertEqual(['host1', 'host3'], req_obj.force_hosts) self.assertIsNone(req_obj.availability_zone) self.assertEqual(['node1', 'node2'], req_obj.force_nodes) self.assertIsInstance(req_obj.image, objects.ImageMeta) self.assertIsInstance(req_obj.numa_topology, objects.InstanceNUMATopology) self.assertIsInstance(req_obj.pci_requests, objects.InstancePCIRequests) self.assertIsInstance(req_obj.flavor, objects.Flavor) # The 'retry' field is not persistent. 
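        # (Aside: 'retry' is one of several per-request fields that never
        # reach the database.  Besides ignore_hosts, checked a few lines
        # above, the create/save tests below also cover requested_destination,
        # requested_resources and network_metadata, all of which are stripped
        # from the serialized spec but left intact on the in-memory object.)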
self.assertIsNone(req_obj.retry) self.assertIsInstance(req_obj.limits, objects.SchedulerLimits) self.assertIsInstance(req_obj.instance_group, objects.InstanceGroup) self.assertEqual('fresh', req_obj.instance_group.name) def _check_update_primitive(self, req_obj, changes): self.assertEqual(req_obj.instance_uuid, changes['instance_uuid']) serialized_obj = objects.RequestSpec.obj_from_primitive( jsonutils.loads(changes['spec'])) # primitive fields for field in ['instance_uuid', 'num_instances', 'project_id', 'scheduler_hints', 'force_hosts', 'availability_zone', 'force_nodes']: self.assertEqual(getattr(req_obj, field), getattr(serialized_obj, field)) # object fields for field in ['image', 'numa_topology', 'pci_requests', 'flavor', 'limits', 'network_metadata']: self.assertEqual( getattr(req_obj, field).obj_to_primitive(), getattr(serialized_obj, field).obj_to_primitive()) self.assertIsNone(serialized_obj.instance_group.members) self.assertIsNone(serialized_obj.instance_group.hosts) self.assertIsNone(serialized_obj.retry) self.assertIsNone(serialized_obj.requested_destination) self.assertIsNone(serialized_obj.ignore_hosts) def test_create(self): req_obj = fake_request_spec.fake_spec_obj(remove_id=True) def _test_create_args(self2, context, changes): self._check_update_primitive(req_obj, changes) # DB creation would have set an id changes['id'] = 42 return changes with mock.patch.object(request_spec.RequestSpec, '_create_in_db', _test_create_args): req_obj.create() def test_create_id_set(self): req_obj = request_spec.RequestSpec(self.context) req_obj.id = 3 self.assertRaises(exception.ObjectActionError, req_obj.create) def test_create_does_not_persist_requested_fields(self): req_obj = fake_request_spec.fake_spec_obj(remove_id=True) expected_network_metadata = objects.NetworkMetadata( physnets=set(['foo', 'bar']), tunneled=True) req_obj.network_metadata = expected_network_metadata expected_destination = request_spec.Destination(host='sample-host') req_obj.requested_destination = expected_destination rg = request_spec.RequestGroup(resources={'fake-rc': 13}) req_obj.requested_resources = [rg] expected_retry = objects.SchedulerRetries( num_attempts=2, hosts=objects.ComputeNodeList(objects=[ objects.ComputeNode(host='host1', hypervisor_hostname='node1'), objects.ComputeNode(host='host2', hypervisor_hostname='node2'), ])) req_obj.retry = expected_retry orig_create_in_db = request_spec.RequestSpec._create_in_db with mock.patch.object(request_spec.RequestSpec, '_create_in_db') \ as mock_create_in_db: mock_create_in_db.side_effect = orig_create_in_db req_obj.create() mock_create_in_db.assert_called_once() updates = mock_create_in_db.mock_calls[0][1][1] # assert that the following fields are not stored in the db # 1. network_metadata # 2. requested_destination # 3. requested_resources # 4. retry data = jsonutils.loads(updates['spec'])['nova_object.data'] self.assertNotIn('network_metadata', data) self.assertIsNone(data['requested_destination']) self.assertIsNone(data['requested_resources']) self.assertIsNone(data['retry']) self.assertIsNotNone(data['instance_uuid']) # also we expect that the following fields are not reset after create # 1. network_metadata # 2. requested_destination # 3. requested_resources # 4. 
retry self.assertIsNotNone(req_obj.network_metadata) self.assertJsonEqual(expected_network_metadata.obj_to_primitive(), req_obj.network_metadata.obj_to_primitive()) self.assertIsNotNone(req_obj.requested_destination) self.assertJsonEqual(expected_destination.obj_to_primitive(), req_obj.requested_destination.obj_to_primitive()) self.assertIsNotNone(req_obj.requested_resources) self.assertEqual( 13, req_obj.requested_resources[0].resources['fake-rc']) self.assertIsNotNone(req_obj.retry) self.assertJsonEqual(expected_retry.obj_to_primitive(), req_obj.retry.obj_to_primitive()) def test_save_does_not_persist_requested_fields(self): req_obj = fake_request_spec.fake_spec_obj(remove_id=True) req_obj.create() # change something to make sure _save_in_db is called expected_network_metadata = objects.NetworkMetadata( physnets=set(['foo', 'bar']), tunneled=True) req_obj.network_metadata = expected_network_metadata expected_destination = request_spec.Destination(host='sample-host') req_obj.requested_destination = expected_destination rg = request_spec.RequestGroup(resources={'fake-rc': 13}) req_obj.requested_resources = [rg] expected_retry = objects.SchedulerRetries( num_attempts=2, hosts=objects.ComputeNodeList(objects=[ objects.ComputeNode(host='host1', hypervisor_hostname='node1'), objects.ComputeNode(host='host2', hypervisor_hostname='node2'), ])) req_obj.retry = expected_retry req_obj.num_instances = 2 req_obj.ignore_hosts = [uuids.ignored_host] orig_save_in_db = request_spec.RequestSpec._save_in_db with mock.patch.object(request_spec.RequestSpec, '_save_in_db') \ as mock_save_in_db: mock_save_in_db.side_effect = orig_save_in_db req_obj.save() mock_save_in_db.assert_called_once() updates = mock_save_in_db.mock_calls[0][1][2] # assert that the following fields are not stored in the db # 1. network_metadata # 2. requested_destination # 3. requested_resources # 4. retry # 5. ignore_hosts data = jsonutils.loads(updates['spec'])['nova_object.data'] self.assertNotIn('network_metadata', data) self.assertIsNone(data['requested_destination']) self.assertIsNone(data['requested_resources']) self.assertIsNone(data['retry']) self.assertIsNone(data['ignore_hosts']) self.assertIsNotNone(data['instance_uuid']) # also we expect that the following fields are not reset after save # 1. network_metadata # 2. requested_destination # 3. requested_resources # 4. retry # 5. ignore_hosts self.assertIsNotNone(req_obj.network_metadata) self.assertJsonEqual(expected_network_metadata.obj_to_primitive(), req_obj.network_metadata.obj_to_primitive()) self.assertIsNotNone(req_obj.requested_destination) self.assertJsonEqual(expected_destination.obj_to_primitive(), req_obj.requested_destination.obj_to_primitive()) self.assertIsNotNone(req_obj.requested_resources) self.assertEqual(13, req_obj.requested_resources[0].resources ['fake-rc']) self.assertIsNotNone(req_obj.retry) self.assertJsonEqual(expected_retry.obj_to_primitive(), req_obj.retry.obj_to_primitive()) self.assertIsNotNone(req_obj.ignore_hosts) self.assertEqual([uuids.ignored_host], req_obj.ignore_hosts) def test_save(self): req_obj = fake_request_spec.fake_spec_obj() # Make sure the requested_destination is not persisted since it is # only valid per request/operation. 
req_obj.requested_destination = objects.Destination(host='fake') def _test_save_args(self2, context, instance_uuid, changes): self._check_update_primitive(req_obj, changes) # DB creation would have set an id changes['id'] = 42 return changes with mock.patch.object(request_spec.RequestSpec, '_save_in_db', _test_save_args): req_obj.save() @mock.patch.object(request_spec.RequestSpec, '_destroy_in_db') def test_destroy(self, destroy_in_db): req_obj = fake_request_spec.fake_spec_obj() req_obj.destroy() destroy_in_db.assert_called_once_with(req_obj._context, req_obj.instance_uuid) @mock.patch.object(request_spec.RequestSpec, '_destroy_bulk_in_db') def test_destroy_bulk(self, destroy_bulk_in_db): uuids_to_be_deleted = [] for i in range(0, 5): uuid = uuidutils.generate_uuid() uuids_to_be_deleted.append(uuid) destroy_bulk_in_db.return_value = 5 result = objects.RequestSpec.destroy_bulk(self.context, uuids_to_be_deleted) destroy_bulk_in_db.assert_called_once_with(self.context, uuids_to_be_deleted) self.assertEqual(5, result) def test_reset_forced_destinations(self): req_obj = fake_request_spec.fake_spec_obj() # Making sure the fake object has forced hosts and nodes self.assertIsNotNone(req_obj.force_hosts) self.assertIsNotNone(req_obj.force_nodes) with mock.patch.object(req_obj, 'obj_reset_changes') as mock_reset: req_obj.reset_forced_destinations() self.assertIsNone(req_obj.force_hosts) self.assertIsNone(req_obj.force_nodes) mock_reset.assert_called_once_with(['force_hosts', 'force_nodes']) def test_compat_requested_destination(self): req_obj = objects.RequestSpec( requested_destination=objects.Destination()) versions = ovo_base.obj_tree_get_versions('RequestSpec') primitive = req_obj.obj_to_primitive(target_version='1.5', version_manifest=versions) self.assertNotIn( 'requested_destination', primitive['nova_object.data']) primitive = req_obj.obj_to_primitive(target_version='1.6', version_manifest=versions) self.assertIn('requested_destination', primitive['nova_object.data']) def test_compat_security_groups(self): sgl = objects.SecurityGroupList(objects=[]) req_obj = objects.RequestSpec(security_groups=sgl) versions = ovo_base.obj_tree_get_versions('RequestSpec') primitive = req_obj.obj_to_primitive(target_version='1.7', version_manifest=versions) self.assertNotIn('security_groups', primitive['nova_object.data']) primitive = req_obj.obj_to_primitive(target_version='1.8', version_manifest=versions) self.assertIn('security_groups', primitive['nova_object.data']) def test_compat_user_id(self): req_obj = objects.RequestSpec(project_id=fakes.FAKE_PROJECT_ID, user_id=fakes.FAKE_USER_ID) versions = ovo_base.obj_tree_get_versions('RequestSpec') primitive = req_obj.obj_to_primitive(target_version='1.8', version_manifest=versions) primitive = primitive['nova_object.data'] self.assertNotIn('user_id', primitive) self.assertIn('project_id', primitive) def test_compat_network_metadata(self): network_metadata = objects.NetworkMetadata(physnets=set(), tunneled=False) req_obj = objects.RequestSpec(network_metadata=network_metadata, user_id=fakes.FAKE_USER_ID) versions = ovo_base.obj_tree_get_versions('RequestSpec') primitive = req_obj.obj_to_primitive(target_version='1.9', version_manifest=versions) primitive = primitive['nova_object.data'] self.assertNotIn('network_metadata', primitive) self.assertIn('user_id', primitive) def test_compat_requested_resources(self): req_obj = objects.RequestSpec(requested_resources=[], instance_uuid=uuids.instance) versions = ovo_base.obj_tree_get_versions('RequestSpec') primitive = 
req_obj.obj_to_primitive(target_version='1.11', version_manifest=versions) primitive = primitive['nova_object.data'] self.assertNotIn('requested_resources', primitive) self.assertIn('instance_uuid', primitive) def test_default_requested_destination(self): req_obj = objects.RequestSpec() self.assertIsNone(req_obj.requested_destination) def test_security_groups_load(self): req_obj = objects.RequestSpec() self.assertNotIn('security_groups', req_obj) self.assertIsInstance(req_obj.security_groups, objects.SecurityGroupList) self.assertIn('security_groups', req_obj) def test_network_requests_load(self): req_obj = objects.RequestSpec() self.assertNotIn('network_metadata', req_obj) self.assertIsInstance(req_obj.network_metadata, objects.NetworkMetadata) self.assertIn('network_metadata', req_obj) def test_create_raises_on_unchanged_object(self): ctxt = context.RequestContext(uuids.user_id, uuids.project_id) req_obj = request_spec.RequestSpec(context=ctxt) self.assertRaises(exception.ObjectActionError, req_obj.create) def test_save_can_be_called_on_unchanged_object(self): req_obj = fake_request_spec.fake_spec_obj(remove_id=True) req_obj.create() req_obj.save() def test_get_request_group_mapping_no_request(self): req_obj = request_spec.RequestSpec() self.assertIsNone(req_obj.get_request_group_mapping()) def test_get_request_group_mapping(self): req_obj = request_spec.RequestSpec( requested_resources=[ request_spec.RequestGroup( requester_id='requester1', provider_uuids=[uuids.pr1, uuids.pr2]), request_spec.RequestGroup( requester_id='requester2', provider_uuids=[]), ]) self.assertEqual( {'requester1': [uuids.pr1, uuids.pr2], 'requester2': []}, req_obj.get_request_group_mapping()) class TestRequestSpecObject(test_objects._LocalTest, _TestRequestSpecObject): pass class TestRemoteRequestSpecObject(test_objects._RemoteTest, _TestRequestSpecObject): pass class TestRequestGroupObject(test.NoDBTestCase): def setUp(self): super(TestRequestGroupObject, self).setUp() self.user_id = uuids.user_id self.project_id = uuids.project_id self.context = context.RequestContext(uuids.user_id, uuids.project_id) def test_fields_defaulted_at_create(self): rg = request_spec.RequestGroup(self.context) self.assertTrue(rg.use_same_provider) self.assertEqual({}, rg.resources) self.assertEqual(set(), rg.required_traits) self.assertEqual(set(), rg.forbidden_traits) self.assertEqual([], rg.aggregates) self.assertIsNone(rg.requester_id) self.assertEqual([], rg.provider_uuids) self.assertIsNone(rg.in_tree) def test_from_port_request(self): port_resource_request = { "resources": { "NET_BW_IGR_KILOBIT_PER_SEC": 1000, "NET_BW_EGR_KILOBIT_PER_SEC": 1000}, "required": ["CUSTOM_PHYSNET_2", "CUSTOM_VNIC_TYPE_NORMAL"] } rg = request_spec.RequestGroup.from_port_request( self.context, uuids.port_id, port_resource_request) self.assertTrue(rg.use_same_provider) self.assertEqual( {"NET_BW_IGR_KILOBIT_PER_SEC": 1000, "NET_BW_EGR_KILOBIT_PER_SEC": 1000}, rg.resources) self.assertEqual({"CUSTOM_PHYSNET_2", "CUSTOM_VNIC_TYPE_NORMAL"}, rg.required_traits) self.assertEqual(uuids.port_id, rg.requester_id) # and the rest is defaulted self.assertEqual(set(), rg.forbidden_traits) self.assertEqual([], rg.aggregates) self.assertEqual([], rg.provider_uuids) def test_from_port_request_without_traits(self): port_resource_request = { "resources": { "NET_BW_IGR_KILOBIT_PER_SEC": 1000, "NET_BW_EGR_KILOBIT_PER_SEC": 1000}} rg = request_spec.RequestGroup.from_port_request( self.context, uuids.port_id, port_resource_request) self.assertTrue(rg.use_same_provider) 
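# NOTE (illustrative sketch, not part of the original tests): per the
# test_from_port_request assertions above, a Neutron port's resource_request
# is translated into a RequestGroup roughly as follows; the names below only
# mirror the test data and are not real port values:
#
#     port_resource_request = {
#         "resources": {"NET_BW_EGR_KILOBIT_PER_SEC": 1000},
#         "required": ["CUSTOM_PHYSNET_2", "CUSTOM_VNIC_TYPE_NORMAL"],
#     }
#     rg = request_spec.RequestGroup.from_port_request(
#         context, port_uuid, port_resource_request)
#     # rg.resources       <- the "resources" dict
#     # rg.required_traits <- set of the "required" traits (empty if omitted)
#     # rg.requester_id    <- the port UUID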
self.assertEqual( {"NET_BW_IGR_KILOBIT_PER_SEC": 1000, "NET_BW_EGR_KILOBIT_PER_SEC": 1000}, rg.resources) self.assertEqual(uuids.port_id, rg.requester_id) # and the rest is defaulted self.assertEqual(set(), rg.required_traits) self.assertEqual(set(), rg.forbidden_traits) self.assertEqual([], rg.aggregates) self.assertEqual([], rg.provider_uuids) def test_compat_requester_and_provider(self): req_obj = objects.RequestGroup( requester_id=uuids.requester, provider_uuids=[uuids.rp1], required_traits=set(['CUSTOM_PHYSNET_2']), forbidden_aggregates=set(['agg3', 'agg4'])) versions = ovo_base.obj_tree_get_versions('RequestGroup') primitive = req_obj.obj_to_primitive( target_version='1.3', version_manifest=versions)['nova_object.data'] self.assertIn('forbidden_aggregates', primitive) self.assertIn('in_tree', primitive) self.assertIn('requester_id', primitive) self.assertIn('provider_uuids', primitive) self.assertIn('required_traits', primitive) self.assertItemsEqual( primitive['forbidden_aggregates'], set(['agg3', 'agg4'])) primitive = req_obj.obj_to_primitive( target_version='1.2', version_manifest=versions)['nova_object.data'] self.assertNotIn('forbidden_aggregates', primitive) self.assertIn('in_tree', primitive) self.assertIn('requester_id', primitive) self.assertIn('provider_uuids', primitive) self.assertIn('required_traits', primitive) primitive = req_obj.obj_to_primitive( target_version='1.1', version_manifest=versions)['nova_object.data'] self.assertNotIn('forbidden_aggregates', primitive) self.assertNotIn('in_tree', primitive) self.assertIn('requester_id', primitive) self.assertIn('provider_uuids', primitive) self.assertIn('required_traits', primitive) primitive = req_obj.obj_to_primitive( target_version='1.0', version_manifest=versions)['nova_object.data'] self.assertNotIn('forbidden_aggregates', primitive) self.assertNotIn('in_tree', primitive) self.assertNotIn('requester_id', primitive) self.assertNotIn('provider_uuids', primitive) self.assertIn('required_traits', primitive) class TestDestinationObject(test.NoDBTestCase): def setUp(self): super(TestDestinationObject, self).setUp() self.user_id = uuids.user_id self.project_id = uuids.project_id self.context = context.RequestContext(uuids.user_id, uuids.project_id) def test_destination_aggregates_default(self): destination = objects.Destination() self.assertIsNone(destination.aggregates) def test_destination_require_aggregates(self): destination = objects.Destination() destination.require_aggregates(['foo', 'bar']) destination.require_aggregates(['baz']) self.assertEqual(['foo,bar', 'baz'], destination.aggregates) def test_destination_forbidden_aggregates_default(self): destination = objects.Destination() self.assertIsNone(destination.forbidden_aggregates) def test_destination_append_forbidden_aggregates(self): destination = objects.Destination() destination.append_forbidden_aggregates(set(['foo', 'bar'])) self.assertEqual( set(['foo', 'bar']), destination.forbidden_aggregates) destination.append_forbidden_aggregates(set(['bar', 'baz'])) self.assertEqual( set(['foo', 'bar', 'baz']), destination.forbidden_aggregates) def test_obj_make_compatible(self): values = { 'host': 'fake_host', 'node': 'fake_node', 'cell': objects.CellMapping(uuid=uuids.cell1), 'aggregates': ['agg1', 'agg2'], 'allow_cross_cell_move': False, 'forbidden_aggregates': set(['agg3', 'agg4'])} obj = objects.Destination(self.context, **values) data = lambda x: x['nova_object.data'] manifest = ovo_base.obj_tree_get_versions(obj.obj_name()) obj_primitive = 
data(obj.obj_to_primitive(target_version='1.4', version_manifest=manifest)) self.assertIn('forbidden_aggregates', obj_primitive) self.assertItemsEqual(obj_primitive['forbidden_aggregates'], set(['agg3', 'agg4'])) self.assertIn('aggregates', obj_primitive) obj_primitive = data(obj.obj_to_primitive(target_version='1.3', version_manifest=manifest)) self.assertNotIn('forbidden_aggregates', obj_primitive) self.assertIn('allow_cross_cell_move', obj_primitive) obj_primitive = data(obj.obj_to_primitive(target_version='1.2', version_manifest=manifest)) self.assertIn('aggregates', obj_primitive) self.assertNotIn('allow_cross_cell_move', obj_primitive) obj_primitive = data(obj.obj_to_primitive(target_version='1.1', version_manifest=manifest)) self.assertIn('cell', obj_primitive) self.assertNotIn('aggregates', obj_primitive) obj_primitive = data(obj.obj_to_primitive(target_version='1.0', version_manifest=manifest)) self.assertNotIn('forbidden_aggregates', obj_primitive) self.assertNotIn('cell', obj_primitive) self.assertEqual('fake_host', obj_primitive['host']) class TestMappingRequestGroupsToProviders(test.NoDBTestCase): def setUp(self): super(TestMappingRequestGroupsToProviders, self).setUp() self.spec = request_spec.RequestSpec() def test_no_groups(self): allocations = None provider_traits = {} self.spec.map_requested_resources_to_providers( allocations, provider_traits) # we cannot assert much, at least we see that the above call doesn't # blow self.assertIsNone(self.spec.requested_resources) def test_unnumbered_group_not_supported(self): allocations = {} provider_traits = {} group1 = request_spec.RequestGroup( use_same_provider=False) self.spec.requested_resources = [group1] self.assertRaises( NotImplementedError, self.spec.map_requested_resources_to_providers, allocations, provider_traits) def test_forbidden_traits_not_supported(self): allocations = {} provider_traits = {} group1 = request_spec.RequestGroup( forbidden_traits={'STORAGE_DISK_HDD'}) self.spec.requested_resources = [group1] self.assertRaises( NotImplementedError, self.spec.map_requested_resources_to_providers, allocations, provider_traits) def test_aggregates_not_supported(self): allocations = {} provider_traits = {} group1 = request_spec.RequestGroup( aggregates=[[uuids.agg1]]) self.spec.requested_resources = [group1] self.assertRaises( NotImplementedError, self.spec.map_requested_resources_to_providers, allocations, provider_traits) def test_one_group(self): allocations = { uuids.compute1_rp: { "resources": { 'VCPU': 1 } }, uuids.net_dev1_rp: { "resources": { 'NET_BW_IGR_KILOBIT_PER_SEC': 1, 'NET_BW_EGR_KILOBIT_PER_SEC': 1, } } } provider_traits = { uuids.compute1_rp: [], uuids.net_dev1_rp: [ 'CUSTOM_PHYSNET_PHYSNET0', 'CUSTOM_VNIC_TYPE_NORMAL' ], } group1 = request_spec.RequestGroup( resources={ "NET_BW_IGR_KILOBIT_PER_SEC": 1, "NET_BW_EGR_KILOBIT_PER_SEC": 1, }, required_traits={ "CUSTOM_PHYSNET_PHYSNET0", "CUSTOM_VNIC_TYPE_NORMAL", }) self.spec.requested_resources = [group1] self.spec.map_requested_resources_to_providers( allocations, provider_traits) self.assertEqual([uuids.net_dev1_rp], group1.provider_uuids) def test_one_group_no_matching_allocation(self): # NOTE(gibi): This negative test scenario should not happen in real # end to end test as we assume that placement only returns candidates # that are valid. 
But still we want to cover the error case in our # implementation allocations = { uuids.compute1_rp: { "resources": { 'VCPU': 1 } }, uuids.net_dev1_rp: { "resources": { 'NET_BW_IGR_KILOBIT_PER_SEC': 1, } } } provider_traits = { uuids.compute1_rp: [], uuids.net_dev1_rp: [ 'CUSTOM_PHYSNET_PHYSNET0', 'CUSTOM_VNIC_TYPE_NORMAL' ], } group1 = request_spec.RequestGroup( resources={ "NET_BW_IGR_KILOBIT_PER_SEC": 1, "NET_BW_EGR_KILOBIT_PER_SEC": 1, }, required_traits={ "CUSTOM_PHYSNET_PHYSNET0", "CUSTOM_VNIC_TYPE_NORMAL", }) self.spec.requested_resources = [group1] self.assertRaises( ValueError, self.spec.map_requested_resources_to_providers, allocations, provider_traits) def test_one_group_no_matching_trait(self): # NOTE(gibi): This negative test scenario should not happen in real # end to end test as we assume that placement only returns candidates # that are valid. But still we want to cover the error case in our # implementation allocations = { uuids.compute1_rp: { "resources": { 'VCPU': 1 } }, uuids.net_dev1_rp: { "resources": { 'NET_BW_IGR_KILOBIT_PER_SEC': 1, 'NET_BW_EGR_KILOBIT_PER_SEC': 1, } } } provider_traits = { uuids.compute1_rp: [], uuids.net_dev1_rp: [ 'CUSTOM_PHYSNET_PHYSNET1', 'CUSTOM_VNIC_TYPE_NORMAL' ], } group1 = request_spec.RequestGroup( resources={ "NET_BW_IGR_KILOBIT_PER_SEC": 1, "NET_BW_EGR_KILOBIT_PER_SEC": 1, }, required_traits={ "CUSTOM_PHYSNET_PHYSNET0", "CUSTOM_VNIC_TYPE_NORMAL", }) self.spec.requested_resources = [group1] self.assertRaises( ValueError, self.spec.map_requested_resources_to_providers, allocations, provider_traits) def test_two_groups_same_provider(self): allocations = { uuids.compute1_rp: { "resources": { 'VCPU': 1 } }, uuids.net_dev1_rp: { "resources": { 'NET_BW_IGR_KILOBIT_PER_SEC': 3, 'NET_BW_EGR_KILOBIT_PER_SEC': 3, } } } provider_traits = { uuids.compute1_rp: [], uuids.net_dev1_rp: [ 'CUSTOM_PHYSNET_PHYSNET0', 'CUSTOM_VNIC_TYPE_NORMAL' ], } group1 = request_spec.RequestGroup( resources={ "NET_BW_IGR_KILOBIT_PER_SEC": 1, "NET_BW_EGR_KILOBIT_PER_SEC": 1, }, required_traits={ "CUSTOM_PHYSNET_PHYSNET0", "CUSTOM_VNIC_TYPE_NORMAL", }) group2 = request_spec.RequestGroup( resources={ "NET_BW_IGR_KILOBIT_PER_SEC": 2, "NET_BW_EGR_KILOBIT_PER_SEC": 2, }, required_traits={ "CUSTOM_PHYSNET_PHYSNET0", "CUSTOM_VNIC_TYPE_NORMAL", }) self.spec.requested_resources = [group1, group2] self.spec.map_requested_resources_to_providers( allocations, provider_traits) self.assertEqual([uuids.net_dev1_rp], group1.provider_uuids) self.assertEqual([uuids.net_dev1_rp], group2.provider_uuids) def test_two_groups_different_providers(self): # NOTE(gibi): we use OrderedDict here to make the test deterministic allocations = collections.OrderedDict() allocations[uuids.compute1_rp] = { "resources": { 'VCPU': 1 } } allocations[uuids.net_dev1_rp] = { "resources": { 'NET_BW_IGR_KILOBIT_PER_SEC': 2, 'NET_BW_EGR_KILOBIT_PER_SEC': 2, } } allocations[uuids.net_dev2_rp] = { "resources": { 'NET_BW_IGR_KILOBIT_PER_SEC': 1, 'NET_BW_EGR_KILOBIT_PER_SEC': 1, } } provider_traits = { uuids.compute1_rp: [], uuids.net_dev1_rp: [ 'CUSTOM_PHYSNET_PHYSNET0', 'CUSTOM_VNIC_TYPE_NORMAL' ], uuids.net_dev2_rp: [ 'CUSTOM_PHYSNET_PHYSNET0', 'CUSTOM_VNIC_TYPE_NORMAL' ], } group1 = request_spec.RequestGroup( resources={ "NET_BW_IGR_KILOBIT_PER_SEC": 1, "NET_BW_EGR_KILOBIT_PER_SEC": 1, }, required_traits={ "CUSTOM_PHYSNET_PHYSNET0", "CUSTOM_VNIC_TYPE_NORMAL", }) group2 = request_spec.RequestGroup( resources={ "NET_BW_IGR_KILOBIT_PER_SEC": 2, "NET_BW_EGR_KILOBIT_PER_SEC": 2, }, required_traits={ 
"CUSTOM_PHYSNET_PHYSNET0", "CUSTOM_VNIC_TYPE_NORMAL", }) self.spec.requested_resources = [group1, group2] self.spec.map_requested_resources_to_providers( allocations, provider_traits) self.assertEqual([uuids.net_dev2_rp], group1.provider_uuids) self.assertEqual([uuids.net_dev1_rp], group2.provider_uuids) def test_two_groups_different_providers_reverse(self): """Similar as test_two_groups_different_providers but reorder the groups to exercises another code path """ # NOTE(gibi): we use OrderedDict here to make the test deterministic allocations = collections.OrderedDict() allocations[uuids.compute1_rp] = { "resources": { 'VCPU': 1 } } allocations[uuids.net_dev1_rp] = { "resources": { 'NET_BW_IGR_KILOBIT_PER_SEC': 2, 'NET_BW_EGR_KILOBIT_PER_SEC': 2, } } allocations[uuids.net_dev2_rp] = { "resources": { 'NET_BW_IGR_KILOBIT_PER_SEC': 1, 'NET_BW_EGR_KILOBIT_PER_SEC': 1, } } provider_traits = { uuids.compute1_rp: [], uuids.net_dev1_rp: [ 'CUSTOM_PHYSNET_PHYSNET0', 'CUSTOM_VNIC_TYPE_NORMAL' ], uuids.net_dev2_rp: [ 'CUSTOM_PHYSNET_PHYSNET0', 'CUSTOM_VNIC_TYPE_NORMAL' ], } group1 = request_spec.RequestGroup( resources={ "NET_BW_IGR_KILOBIT_PER_SEC": 2, "NET_BW_EGR_KILOBIT_PER_SEC": 2, }, required_traits={ "CUSTOM_PHYSNET_PHYSNET0", "CUSTOM_VNIC_TYPE_NORMAL", }) group2 = request_spec.RequestGroup( resources={ "NET_BW_IGR_KILOBIT_PER_SEC": 1, "NET_BW_EGR_KILOBIT_PER_SEC": 1, }, required_traits={ "CUSTOM_PHYSNET_PHYSNET0", "CUSTOM_VNIC_TYPE_NORMAL", }) self.spec.requested_resources = [group1, group2] self.spec.map_requested_resources_to_providers( allocations, provider_traits) self.assertEqual([uuids.net_dev1_rp], group1.provider_uuids) self.assertEqual([uuids.net_dev2_rp], group2.provider_uuids) def test_two_groups_different_providers_different_traits(self): allocations = collections.OrderedDict() allocations[uuids.compute1_rp] = { "resources": { 'VCPU': 1 } } allocations[uuids.net_dev1_physnet1_rp] = { "resources": { 'NET_BW_IGR_KILOBIT_PER_SEC': 1, 'NET_BW_EGR_KILOBIT_PER_SEC': 1, } } allocations[uuids.net_dev2_physnet0_rp] = { "resources": { 'NET_BW_IGR_KILOBIT_PER_SEC': 1, 'NET_BW_EGR_KILOBIT_PER_SEC': 1, } } provider_traits = { uuids.compute1_rp: [], uuids.net_dev1_physnet1_rp: [ 'CUSTOM_PHYSNET_PHYSNET1', 'CUSTOM_VNIC_TYPE_NORMAL' ], uuids.net_dev2_physnet0_rp: [ 'CUSTOM_PHYSNET_PHYSNET0', 'CUSTOM_VNIC_TYPE_NORMAL' ], } group1 = request_spec.RequestGroup( resources={ "NET_BW_IGR_KILOBIT_PER_SEC": 1, "NET_BW_EGR_KILOBIT_PER_SEC": 1, }, required_traits={ "CUSTOM_PHYSNET_PHYSNET0", "CUSTOM_VNIC_TYPE_NORMAL", }) group2 = request_spec.RequestGroup( resources={ "NET_BW_IGR_KILOBIT_PER_SEC": 1, "NET_BW_EGR_KILOBIT_PER_SEC": 1, }, required_traits={ "CUSTOM_PHYSNET_PHYSNET1", "CUSTOM_VNIC_TYPE_NORMAL", }) self.spec.requested_resources = [group1, group2] self.spec.map_requested_resources_to_providers( allocations, provider_traits) self.assertEqual([uuids.net_dev2_physnet0_rp], group1.provider_uuids) self.assertEqual([uuids.net_dev1_physnet1_rp], group2.provider_uuids) def test_three_groups(self): """A complex example where a lot of mappings are tried before the solution is found. 
""" allocations = collections.OrderedDict() allocations[uuids.compute1_rp] = { "resources": { 'VCPU': 1 } } allocations[uuids.net_dev1_rp] = { "resources": { 'NET_BW_IGR_KILOBIT_PER_SEC': 3, 'NET_BW_EGR_KILOBIT_PER_SEC': 3, } } allocations[uuids.net_dev2_rp] = { "resources": { 'NET_BW_IGR_KILOBIT_PER_SEC': 2, 'NET_BW_EGR_KILOBIT_PER_SEC': 2, } } allocations[uuids.net_dev3_rp] = { "resources": { 'NET_BW_IGR_KILOBIT_PER_SEC': 1, 'NET_BW_EGR_KILOBIT_PER_SEC': 3, } } provider_traits = { uuids.compute1_rp: [], uuids.net_dev1_rp: [ 'CUSTOM_PHYSNET_PHYSNET0', 'CUSTOM_VNIC_TYPE_NORMAL' ], uuids.net_dev2_rp: [ 'CUSTOM_PHYSNET_PHYSNET0', 'CUSTOM_VNIC_TYPE_NORMAL' ], uuids.net_dev3_rp: [ 'CUSTOM_PHYSNET_PHYSNET0', 'CUSTOM_VNIC_TYPE_NORMAL' ], } # this fits to 2 RPs group1 = request_spec.RequestGroup( resources={ "NET_BW_IGR_KILOBIT_PER_SEC": 1, "NET_BW_EGR_KILOBIT_PER_SEC": 3, }, required_traits={ "CUSTOM_PHYSNET_PHYSNET0", "CUSTOM_VNIC_TYPE_NORMAL", }) # this fits to 2 RPs group2 = request_spec.RequestGroup( resources={ "NET_BW_IGR_KILOBIT_PER_SEC": 2, "NET_BW_EGR_KILOBIT_PER_SEC": 2, }, required_traits={ "CUSTOM_PHYSNET_PHYSNET0", "CUSTOM_VNIC_TYPE_NORMAL", }) # this fits to only one RPs group3 = request_spec.RequestGroup( resources={ "NET_BW_IGR_KILOBIT_PER_SEC": 3, "NET_BW_EGR_KILOBIT_PER_SEC": 3, }, required_traits={ "CUSTOM_PHYSNET_PHYSNET0", "CUSTOM_VNIC_TYPE_NORMAL", }) self.spec.requested_resources = [group1, group2, group3] orig_validator = self.spec._is_valid_group_rp_mapping with mock.patch.object( self.spec, '_is_valid_group_rp_mapping', side_effect=orig_validator ) as mock_validator: self.spec.map_requested_resources_to_providers( allocations, provider_traits) self.assertEqual([uuids.net_dev3_rp], group1.provider_uuids) self.assertEqual([uuids.net_dev2_rp], group2.provider_uuids) self.assertEqual([uuids.net_dev1_rp], group3.provider_uuids) # the algorithm tried out many possible mappings before found the # the solution self.assertEqual(58, mock_validator.call_count) @mock.patch.object(request_spec.LOG, 'debug') def test_two_groups_matches_but_allocation_leftover(self, mock_debug): # NOTE(gibi): This negative test scenario should not happen in real # end to end test as we assume that placement only returns candidates # that are valid and this candidate is not valid as it provides more # resources than the ports are requesting. 
Still we want to cover the # error case in our implementation allocations = collections.OrderedDict() allocations[uuids.compute1_rp] = { "resources": { 'VCPU': 1 } } allocations[uuids.net_dev1_physnet0_rp] = { "resources": { 'NET_BW_IGR_KILOBIT_PER_SEC': 2, 'NET_BW_EGR_KILOBIT_PER_SEC': 2, } } allocations[uuids.net_dev2_physnet0_rp] = { "resources": { 'NET_BW_IGR_KILOBIT_PER_SEC': 1, 'NET_BW_EGR_KILOBIT_PER_SEC': 1, } } provider_traits = { uuids.compute1_rp: [], uuids.net_dev1_physnet0_rp: [ 'CUSTOM_PHYSNET_PHYSNET0', 'CUSTOM_VNIC_TYPE_NORMAL' ], uuids.net_dev2_physnet0_rp: [ 'CUSTOM_PHYSNET_PHYSNET0', 'CUSTOM_VNIC_TYPE_NORMAL' ], } group1 = request_spec.RequestGroup( resources={ "NET_BW_IGR_KILOBIT_PER_SEC": 1, "NET_BW_EGR_KILOBIT_PER_SEC": 1, }, required_traits={ "CUSTOM_PHYSNET_PHYSNET0", "CUSTOM_VNIC_TYPE_NORMAL", }) group2 = request_spec.RequestGroup( resources={ "NET_BW_IGR_KILOBIT_PER_SEC": 1, "NET_BW_EGR_KILOBIT_PER_SEC": 1, }, required_traits={ "CUSTOM_PHYSNET_PHYSNET0", "CUSTOM_VNIC_TYPE_NORMAL", }) self.spec.requested_resources = [group1, group2] self.assertRaises( ValueError, self.spec.map_requested_resources_to_providers, allocations, provider_traits) self.assertIn('allocations leftover', mock_debug.mock_calls[3][1][0]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_resource.py0000664000175000017500000001152100000000000022543 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
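# NOTE (illustrative, not from the original source): the fixtures defined
# below mirror how per-instance resources are persisted -- a ResourceList is
# serialized to its versioned-object primitive and stored as JSON under the
# 'resources' key of instance_extra, i.e. roughly:
#
#     extras = {'resources': jsonutils.dumps(resource_list.obj_to_primitive())}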
import mock from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids import six from nova.objects import resource from nova.tests.unit.objects import test_objects fake_resources = resource.ResourceList(objects=[ resource.Resource(provider_uuid=uuids.rp, resource_class='CUSTOM_RESOURCE', identifier='foo'), resource.Resource(provider_uuid=uuids.rp, resource_class='CUSTOM_RESOURCE', identifier='bar')]) fake_vpmems = [ resource.LibvirtVPMEMDevice( label='4GB', name='ns_0', devpath='/dev/dax0.0', size=4292870144, align=2097152), resource.LibvirtVPMEMDevice( label='4GB', name='ns_1', devpath='/dev/dax0.0', size=4292870144, align=2097152)] fake_instance_extras = { 'resources': jsonutils.dumps(fake_resources.obj_to_primitive()) } class TestResourceObject(test_objects._LocalTest): def _create_resource(self, metadata=None): fake_resource = resource.Resource(provider_uuid=uuids.rp, resource_class='bar', identifier='foo') if metadata: fake_resource.metadata = metadata return fake_resource def _test_set_malformed_resource_class(self, rc): try: resource.Resource(provider_uuid=uuids.rp, resource_class=rc, identifier='foo') except ValueError as e: self.assertEqual('Malformed Resource Class %s' % rc, six.text_type(e)) else: self.fail('Check malformed resource class failed.') def _test_set_formed_resource_class(self, rc): resource.Resource(provider_uuid=uuids.rp, resource_class=rc, identifier='foo') def test_set_malformed_resource_classes(self): malformed_resource_classes = ['!', ';', ' '] for rc in malformed_resource_classes: self._test_set_malformed_resource_class(rc) def test_set_formed_resource_classes(self): formed_resource_classes = ['resource', 'RESOURCE', '0123'] for rc in formed_resource_classes: self._test_set_formed_resource_class(rc) def test_equal_without_metadata(self): resource_0 = resource.Resource(provider_uuid=uuids.rp, resource_class='bar', identifier='foo') resource_1 = resource.Resource(provider_uuid=uuids.rp, resource_class='bar', identifier='foo') self.assertEqual(resource_0, resource_1) def test_not_equal_without_matadata(self): self.assertNotEqual(fake_resources[0], fake_resources[1]) def test_equal_with_vpmem_metadata(self): resource_0 = self._create_resource(metadata=fake_vpmems[0]) resource_1 = self._create_resource(metadata=fake_vpmems[0]) self.assertEqual(resource_0, resource_1) def test_not_equal_with_vpmem_metadata(self): resource_0 = self._create_resource(metadata=fake_vpmems[0]) resource_1 = self._create_resource(metadata=fake_vpmems[1]) self.assertNotEqual(resource_0, resource_1) def test_not_equal_with_and_without_metadata(self): # one resource has metadata, another one has not metadata resource_0 = self._create_resource(metadata=fake_vpmems[0]) resource_1 = self._create_resource() self.assertNotEqual(resource_0, resource_1) class _TestResourceListObject(object): @mock.patch('nova.db.api.instance_extra_get_by_instance_uuid') def test_get_by_instance_uuid(self, mock_get): mock_get.return_value = fake_instance_extras resources = resource.ResourceList.get_by_instance_uuid( self.context, 'fake_uuid') for i in range(len(resources)): self.assertEqual(resources[i].identifier, fake_resources[i].identifier) class TestResourceListObject(test_objects._LocalTest, _TestResourceListObject): pass class TestRemoteResourceListObject(test_objects._RemoteTest, _TestResourceListObject): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 
nova-21.2.4/nova/tests/unit/objects/test_security_group.py0000664000175000017500000002006300000000000024000 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_versionedobjects import fixture as ovo_fixture from nova.db import api as db from nova.objects import instance from nova.objects import security_group from nova.tests.unit.objects import test_objects fake_secgroup = { 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': None, 'id': 1, 'name': 'fake-name', 'description': 'fake-desc', 'user_id': 'fake-user', 'project_id': 'fake-project', } class _TestSecurityGroupObject(object): def _fix_deleted(self, db_secgroup): # NOTE(danms): Account for the difference in 'deleted' return dict(db_secgroup.items(), deleted=False) @mock.patch.object(db, 'security_group_get', return_value=fake_secgroup) def test_get(self, mock_get): secgroup = security_group.SecurityGroup.get(self.context, 1) ovo_fixture.compare_obj(self, secgroup, self._fix_deleted(fake_secgroup)) self.assertEqual(secgroup.obj_what_changed(), set()) mock_get.assert_called_once_with(self.context, 1) @mock.patch.object(db, 'security_group_get_by_name', return_value=fake_secgroup) def test_get_by_name(self, mock_get): secgroup = security_group.SecurityGroup.get_by_name(self.context, 'fake-project', 'fake-name') ovo_fixture.compare_obj(self, secgroup, self._fix_deleted(fake_secgroup)) self.assertEqual(secgroup.obj_what_changed(), set()) mock_get.assert_called_once_with(self.context, 'fake-project', 'fake-name') @mock.patch.object(db, 'security_group_in_use', return_value=True) def test_in_use(self, mock_inuse): secgroup = security_group.SecurityGroup(context=self.context) secgroup.id = 123 self.assertTrue(secgroup.in_use()) mock_inuse.assert_called_once_with(self.context, 123) @mock.patch.object(db, 'security_group_update') def test_save(self, mock_update): updated_secgroup = dict(fake_secgroup, project_id='changed') mock_update.return_value = updated_secgroup secgroup = security_group.SecurityGroup._from_db_object( self.context, security_group.SecurityGroup(), fake_secgroup) secgroup.description = 'foobar' secgroup.save() ovo_fixture.compare_obj(self, secgroup, self._fix_deleted(updated_secgroup)) self.assertEqual(secgroup.obj_what_changed(), set()) mock_update.assert_called_once_with(self.context, 1, {'description': 'foobar'}) @mock.patch.object(db, 'security_group_update') def test_save_no_changes(self, mock_update): secgroup = security_group.SecurityGroup._from_db_object( self.context, security_group.SecurityGroup(), fake_secgroup) secgroup.save() self.assertFalse(mock_update.called) @mock.patch.object(db, 'security_group_get') def test_refresh(self, mock_get): updated_secgroup = dict(fake_secgroup, description='changed') mock_get.return_value = updated_secgroup secgroup = security_group.SecurityGroup._from_db_object( self.context, security_group.SecurityGroup(self.context), fake_secgroup) 
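# NOTE (illustrative, not from the original source): refresh() below is
# expected to re-read the row from the DB (mocked here to return the changed
# description), update the object in place and leave obj_what_changed()
# empty, as the following assertions verify.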
secgroup.refresh() ovo_fixture.compare_obj(self, secgroup, self._fix_deleted(updated_secgroup)) self.assertEqual(secgroup.obj_what_changed(), set()) mock_get.assert_called_once_with(self.context, 1) @mock.patch.object(db, 'security_group_update') def test_with_uuid(self, mock_db_update): """Tests that we can set a uuid but not save it and it's removed when backporting to an older version of the object. """ # Test set/get. secgroup = security_group.SecurityGroup( self.context, uuid=uuids.neutron_id) self.assertEqual(uuids.neutron_id, secgroup.uuid) # Test backport. primitive = secgroup.obj_to_primitive(target_version='1.2') self.assertIn('uuid', primitive['nova_object.data']) primitive = secgroup.obj_to_primitive(target_version='1.1') self.assertNotIn('uuid', primitive['nova_object.data']) # Make sure the uuid is still set before we save(). self.assertIn('uuid', secgroup) secgroup.save() self.assertFalse(mock_db_update.called) def test_identifier(self): secgroup = security_group.SecurityGroup(name='foo') self.assertEqual('foo', secgroup.identifier) secgroup.uuid = uuids.secgroup self.assertEqual(uuids.secgroup, secgroup.identifier) class TestSecurityGroupObject(test_objects._LocalTest, _TestSecurityGroupObject): pass class TestSecurityGroupObjectRemote(test_objects._RemoteTest, _TestSecurityGroupObject): pass fake_secgroups = [ dict(fake_secgroup, id=1, name='secgroup1'), dict(fake_secgroup, id=2, name='secgroup2'), ] class _TestSecurityGroupListObject(object): @mock.patch.object(db, 'security_group_get_all', return_value=fake_secgroups) def test_get_all(self, mock_get): secgroup_list = security_group.SecurityGroupList.get_all(self.context) for i in range(len(fake_secgroups)): self.assertIsInstance(secgroup_list[i], security_group.SecurityGroup) self.assertEqual(fake_secgroups[i]['id'], secgroup_list[i].id) self.assertEqual(secgroup_list[i]._context, self.context) mock_get.assert_called_once_with(self.context) @mock.patch.object(db, 'security_group_get_by_project', return_value=fake_secgroups) def test_get_by_project(self, mock_get): secgroup_list = security_group.SecurityGroupList.get_by_project( self.context, 'fake-project') for i in range(len(fake_secgroups)): self.assertIsInstance(secgroup_list[i], security_group.SecurityGroup) self.assertEqual(fake_secgroups[i]['id'], secgroup_list[i].id) mock_get.assert_called_once_with(self.context, 'fake-project') @mock.patch.object(db, 'security_group_get_by_instance', return_value=fake_secgroups) def test_get_by_instance(self, mock_get): inst = instance.Instance() inst.uuid = uuids.instance secgroup_list = security_group.SecurityGroupList.get_by_instance( self.context, inst) for i in range(len(fake_secgroups)): self.assertIsInstance(secgroup_list[i], security_group.SecurityGroup) self.assertEqual(fake_secgroups[i]['id'], secgroup_list[i].id) mock_get.assert_called_once_with(self.context, inst.uuid) class TestSecurityGroupListObject(test_objects._LocalTest, _TestSecurityGroupListObject): pass class TestSecurityGroupListObjectRemote(test_objects._RemoteTest, _TestSecurityGroupListObject): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/objects/test_selection.py0000664000175000017500000002147400000000000022711 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from nova import conf from nova import objects from nova.objects import numa from nova.scheduler import host_manager from nova.tests.unit.objects import test_objects CONF = conf.CONF fake_numa_limit1 = numa.NUMATopologyLimits(cpu_allocation_ratio=1.0, ram_allocation_ratio=1.0) fake_limit1 = {"memory_mb": 1024, "disk_gb": 100, "vcpus": 2, "numa_topology": fake_numa_limit1} fake_limit_obj1 = objects.SchedulerLimits.from_dict(fake_limit1) fake_host1 = { "uuid": uuids.host1, "host": "host1", "nodename": "node1", "cell_uuid": uuids.cell, "limits": fake_limit1, } fake_host_state1 = host_manager.HostState("host1", "node1", uuids.cell) fake_host_state1.uuid = uuids.host1 fake_host_state1.limits = fake_limit1.copy() fake_alloc1 = {"allocations": [ {"resource_provider": {"uuid": uuids.host1}, "resources": {"VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100} }]} fake_alloc_version = "1.23" class _TestSelectionObject(object): def test_create_with_values(self): json_alloc = jsonutils.dumps(fake_alloc1) dest = objects.Selection(service_host="host", nodename="node", compute_node_uuid=uuids.host1, cell_uuid=uuids.cell, limits=fake_limit_obj1, allocation_request=json_alloc, allocation_request_version=fake_alloc_version) self.assertEqual("host", dest.service_host) self.assertEqual(uuids.host1, dest.compute_node_uuid) self.assertEqual("node", dest.nodename) self.assertEqual(uuids.cell, dest.cell_uuid) self.assertEqual(fake_limit_obj1, dest.limits) self.assertEqual(json_alloc, dest.allocation_request) self.assertEqual(fake_alloc_version, dest.allocation_request_version) def test_passing_dict_allocation_fails(self): self.assertRaises(ValueError, objects.Selection, service_host="host", compute_node_uuid=uuids.host, nodename="node", cell_uuid=uuids.cell, allocation_request=fake_alloc1, allocation_request_version=fake_alloc_version) def test_passing_numeric_allocation_version_converts(self): json_alloc = jsonutils.dumps(fake_alloc1) dest = objects.Selection(service_host="host", compute_node_uuid=uuids.host, nodename="node", cell_uuid=uuids.cell, allocation_request=json_alloc, allocation_request_version=1.23) self.assertEqual("1.23", dest.allocation_request_version) def test_from_host_state(self): dest = objects.Selection.from_host_state(fake_host_state1, fake_alloc1, fake_alloc_version) self.assertEqual(dest.service_host, fake_host_state1.host) expected_alloc = jsonutils.dumps(fake_alloc1) self.assertEqual(dest.allocation_request, expected_alloc) self.assertEqual(dest.allocation_request_version, fake_alloc_version) def test_from_host_state_no_alloc_info(self): dest = objects.Selection.from_host_state(fake_host_state1) self.assertEqual(dest.service_host, fake_host_state1.host) expected_alloc = jsonutils.dumps(None) self.assertEqual(expected_alloc, dest.allocation_request) self.assertIsNone(dest.allocation_request_version) def test_selection_obj_to_dict(self): """Tests that to_dict() method properly converts a Selection object to the corresponding dict. 
""" fake_network_metadata = objects.NetworkMetadata( physnets=set(['foo', 'bar']), tunneled=True) fake_numa_limit = objects.numa.NUMATopologyLimits( cpu_allocation_ratio=1.0, ram_allocation_ratio=1.0, network_metadata=fake_network_metadata) fake_limit = {"memory_mb": 1024, "disk_gb": 100, "vcpus": 2, "numa_topology": fake_numa_limit} fake_limit_obj = objects.SchedulerLimits.from_dict(fake_limit) sel_obj = objects.Selection(service_host="fakehost", nodename="fakenode", compute_node_uuid=uuids.host, cell_uuid=uuids.cell, limits=fake_limit_obj, allocation_request="fake", allocation_request_version="99.9") result = sel_obj.to_dict() self.assertEqual(['host', 'limits', 'nodename'], sorted(result.keys())) self.assertEqual('fakehost', result['host']) self.assertEqual('fakenode', result['nodename']) limits = result['limits'] self.assertEqual(['disk_gb', 'memory_mb', 'numa_topology'], sorted(limits.keys())) self.assertEqual(100, limits['disk_gb']) self.assertEqual(1024, limits['memory_mb']) numa_topology = limits['numa_topology']['nova_object.data'] self.assertEqual(1.0, numa_topology['cpu_allocation_ratio']) self.assertEqual(1.0, numa_topology['ram_allocation_ratio']) network_meta = numa_topology['network_metadata']['nova_object.data'] # sets are unordered so we need to convert to a list self.assertEqual(['bar', 'foo'], sorted(network_meta['physnets'])) self.assertTrue(network_meta['tunneled']) def test_selection_obj_to_dict_no_numa(self): """Tests that to_dict() method properly converts a Selection object to the corresponding dict when the numa_topology field is None. """ fake_limit = {"memory_mb": 1024, "disk_gb": 100, "vcpus": 2, "numa_topology": None} fake_limit_obj = objects.SchedulerLimits.from_dict(fake_limit) sel_obj = objects.Selection(service_host="fakehost", nodename="fakenode", compute_node_uuid=uuids.host, cell_uuid=uuids.cell, limits=fake_limit_obj, allocation_request="fake", allocation_request_version="99.9") expected = {"host": "fakehost", "nodename": "fakenode", "limits": { "disk_gb": 100, "memory_mb": 1024}} result = sel_obj.to_dict() self.assertDictEqual(expected, result) class TestSelectionObject(test_objects._LocalTest, _TestSelectionObject): # NOTE(mriedem): The tests below are for methods which are not remotable # so they can go in the local-only test class rather than the mixin above. def test_obj_make_compatible(self): selection = objects.Selection(service_host='host1', availability_zone='zone1') primitive = selection.obj_to_primitive( target_version='1.1')['nova_object.data'] self.assertIn('availability_zone', primitive) primitive = selection.obj_to_primitive( target_version='1.0')['nova_object.data'] self.assertNotIn('availability_zone', primitive) self.assertIn('service_host', primitive) def test_from_host_state_az_via_aggregate_metadata(self): """Tests the scenario that the host is in multiple aggregates and one has the availability_zone aggregate metadata key which is used on the selection object. """ host_state = host_manager.HostState('host', 'node', uuids.cell_uuid) host_state.uuid = uuids.compute_node_uuid host_state.limits = {} host_state.aggregates = [ objects.Aggregate(metadata={'foo': 'bar'}), objects.Aggregate(metadata={'availability_zone': 'zone1'}) ] selection = objects.Selection.from_host_state(host_state) self.assertEqual('zone1', selection.availability_zone) def test_from_host_state_az_via_config(self): """Tests the scenario that the host is not in an aggregate with the availability_zone metadata key so the AZ comes from config. 
""" host_state = host_manager.HostState('host', 'node', uuids.cell_uuid) host_state.uuid = uuids.compute_node_uuid host_state.limits = {} host_state.aggregates = [] selection = objects.Selection.from_host_state(host_state) self.assertEqual(CONF.default_availability_zone, selection.availability_zone) class TestRemoteSelectionObject(test_objects._RemoteTest, _TestSelectionObject): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_service.py0000664000175000017500000006244200000000000022364 0ustar00zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils.fixture import uuidsentinel from oslo_utils import timeutils from oslo_versionedobjects import base as ovo_base from oslo_versionedobjects import exception as ovo_exc import six from nova.compute import manager as compute_manager from nova import context from nova.db import api as db from nova import exception from nova import objects from nova.objects import aggregate from nova.objects import service from nova import test from nova.tests import fixtures from nova.tests.unit.objects import test_compute_node from nova.tests.unit.objects import test_objects NOW = timeutils.utcnow().replace(microsecond=0) def _fake_service(**kwargs): fake_service = { 'created_at': NOW, 'updated_at': None, 'deleted_at': None, 'deleted': False, 'id': 123, 'uuid': uuidsentinel.service, 'host': 'fake-host', 'binary': 'nova-compute', 'topic': 'fake-service-topic', 'report_count': 1, 'forced_down': False, 'disabled': False, 'disabled_reason': None, 'last_seen_up': None, 'version': service.SERVICE_VERSION, } fake_service.update(kwargs) return fake_service fake_service = _fake_service() OPTIONAL = ['availability_zone', 'compute_node'] class _TestServiceObject(object): def supported_hv_specs_comparator(self, expected, obj_val): obj_val = [inst.to_list() for inst in obj_val] self.assertJsonEqual(expected, obj_val) def pci_device_pools_comparator(self, expected, obj_val): obj_val = obj_val.obj_to_primitive() self.assertJsonEqual(expected, obj_val) def comparators(self): return {'stats': self.assertJsonEqual, 'host_ip': self.assertJsonEqual, 'supported_hv_specs': self.supported_hv_specs_comparator, 'pci_device_pools': self.pci_device_pools_comparator} def subs(self): return {'supported_hv_specs': 'supported_instances', 'pci_device_pools': 'pci_stats'} def _test_query(self, db_method, obj_method, *args, **kwargs): db_exception = kwargs.pop('db_exception', None) if db_exception: with mock.patch.object(db, db_method, side_effect=db_exception) \ as mock_db_method: obj = getattr(service.Service, obj_method)(self.context, *args, **kwargs) self.assertIsNone(obj) mock_db_method.assert_called_once_with(self.context, *args, **kwargs) else: with mock.patch.object(db, db_method, return_value=fake_service) \ as mock_db_method: obj = getattr(service.Service, obj_method)(self.context, *args, **kwargs) self.compare_obj(obj, fake_service, 
allow_missing=OPTIONAL) mock_db_method.assert_called_once_with(self.context, *args, **kwargs) def test_get_by_id(self): self._test_query('service_get', 'get_by_id', 123) def test_get_by_uuid(self): self._test_query('service_get_by_uuid', 'get_by_uuid', uuidsentinel.service_uuid) def test_get_by_host_and_topic(self): self._test_query('service_get_by_host_and_topic', 'get_by_host_and_topic', 'fake-host', 'fake-topic') def test_get_by_host_and_binary(self): self._test_query('service_get_by_host_and_binary', 'get_by_host_and_binary', 'fake-host', 'fake-binary') def test_get_by_host_and_binary_raises(self): self._test_query('service_get_by_host_and_binary', 'get_by_host_and_binary', 'fake-host', 'fake-binary', db_exception=exception.HostBinaryNotFound( host='fake-host', binary='fake-binary')) def test_get_by_compute_host(self): self._test_query('service_get_by_compute_host', 'get_by_compute_host', 'fake-host') def test_get_by_args(self): self._test_query('service_get_by_host_and_binary', 'get_by_args', 'fake-host', 'fake-binary') @mock.patch.object(db, 'service_create', return_value=fake_service) def test_create(self, mock_service_create): service_obj = service.Service(context=self.context) service_obj.host = 'fake-host' service_obj.uuid = uuidsentinel.service2 service_obj.create() self.assertEqual(fake_service['id'], service_obj.id) self.assertEqual(service.SERVICE_VERSION, service_obj.version) mock_service_create.assert_called_once_with( self.context, {'host': 'fake-host', 'uuid': uuidsentinel.service2, 'version': fake_service['version']}) @mock.patch('nova.objects.service.uuidutils.generate_uuid', return_value=uuidsentinel.service3) @mock.patch.object(db, 'service_create', return_value=fake_service) def test_create_without_uuid_generates_one( self, mock_service_create, generate_uuid): service_obj = service.Service(context=self.context) service_obj.create() create_args = mock_service_create.call_args[0][1] self.assertEqual(generate_uuid.return_value, create_args['uuid']) @mock.patch.object(db, 'service_create', return_value=fake_service) def test_recreate_fails(self, mock_service_create): service_obj = service.Service(context=self.context) service_obj.host = 'fake-host' service_obj.create() self.assertRaises(exception.ObjectActionError, service_obj.create) mock_service_create(self.context, {'host': 'fake-host', 'version': fake_service['version']}) @mock.patch('nova.objects.Service._send_notification') @mock.patch.object(db, 'service_update', return_value=fake_service) def test_save(self, mock_service_update, mock_notify): service_obj = service.Service(context=self.context) service_obj.id = 123 service_obj.host = 'fake-host' service_obj.save() self.assertEqual(service.SERVICE_VERSION, service_obj.version) mock_service_update.assert_called_once_with( self.context, 123, {'host': 'fake-host', 'version': fake_service['version']}) @mock.patch.object(db, 'service_create', return_value=fake_service) def test_set_id_failure(self, db_mock): service_obj = service.Service(context=self.context, binary='nova-compute') service_obj.create() self.assertRaises(ovo_exc.ReadOnlyFieldError, setattr, service_obj, 'id', 124) @mock.patch('nova.objects.Service._send_notification') @mock.patch.object(db, 'service_destroy') def _test_destroy(self, mock_service_destroy, mock_notify): service_obj = service.Service(context=self.context) service_obj.id = 123 service_obj.destroy() mock_service_destroy.assert_called_once_with(self.context, 123) def test_destroy(self): # The test harness needs db.service_destroy to work, # so 
avoid leaving it broken here after we're done orig_service_destroy = db.service_destroy try: self._test_destroy() finally: db.service_destroy = orig_service_destroy @mock.patch.object(db, 'service_get_all_by_topic', return_value=[fake_service]) def test_get_by_topic(self, mock_service_get): services = service.ServiceList.get_by_topic(self.context, 'fake-topic') self.assertEqual(1, len(services)) self.compare_obj(services[0], fake_service, allow_missing=OPTIONAL) mock_service_get.assert_called_once_with(self.context, 'fake-topic') @mock.patch('nova.db.api.service_get_all_by_binary') def test_get_by_binary(self, mock_get): mock_get.return_value = [fake_service] services = service.ServiceList.get_by_binary(self.context, 'fake-binary') self.assertEqual(1, len(services)) mock_get.assert_called_once_with(self.context, 'fake-binary', include_disabled=False) @mock.patch('nova.db.api.service_get_all_by_binary') def test_get_by_binary_disabled(self, mock_get): mock_get.return_value = [_fake_service(disabled=True)] services = service.ServiceList.get_by_binary(self.context, 'fake-binary', include_disabled=True) self.assertEqual(1, len(services)) mock_get.assert_called_once_with(self.context, 'fake-binary', include_disabled=True) @mock.patch('nova.db.api.service_get_all_by_binary') def test_get_by_binary_both(self, mock_get): mock_get.return_value = [_fake_service(), _fake_service(disabled=True)] services = service.ServiceList.get_by_binary(self.context, 'fake-binary', include_disabled=True) self.assertEqual(2, len(services)) mock_get.assert_called_once_with(self.context, 'fake-binary', include_disabled=True) @mock.patch.object(db, 'service_get_all_by_host', return_value=[fake_service]) def test_get_by_host(self, mock_service_get): services = service.ServiceList.get_by_host(self.context, 'fake-host') self.assertEqual(1, len(services)) self.compare_obj(services[0], fake_service, allow_missing=OPTIONAL) mock_service_get.assert_called_once_with(self.context, 'fake-host') @mock.patch.object(db, 'service_get_all', return_value=[fake_service]) def test_get_all(self, mock_get_all): services = service.ServiceList.get_all(self.context, disabled=False) self.assertEqual(1, len(services)) self.compare_obj(services[0], fake_service, allow_missing=OPTIONAL) mock_get_all.assert_called_once_with(self.context, disabled=False) @mock.patch.object(db, 'service_get_all') @mock.patch.object(aggregate.AggregateList, 'get_by_metadata_key') def test_get_all_with_az(self, mock_get_by_key, mock_get_all): agg = aggregate.Aggregate(context=self.context) agg.name = 'foo' agg.metadata = {'availability_zone': 'test-az'} agg.create() agg.hosts = [fake_service['host']] mock_get_by_key.return_value = [agg] mock_get_all.return_value = [dict(fake_service, topic='compute')] services = service.ServiceList.get_all(self.context, set_zones=True) self.assertEqual(1, len(services)) self.assertEqual('test-az', services[0].availability_zone) mock_get_all.assert_called_once_with(self.context, disabled=None) mock_get_by_key.assert_called_once_with(self.context, 'availability_zone', hosts=set(agg.hosts)) @mock.patch.object(objects.ComputeNodeList, 'get_all_by_host') def test_compute_node(self, mock_get): fake_compute_node = objects.ComputeNode._from_db_object( self.context, objects.ComputeNode(), test_compute_node.fake_compute_node) mock_get.return_value = [fake_compute_node] service_obj = service.Service(id=123, host="fake-host", binary="nova-compute") service_obj._context = self.context self.assertEqual(service_obj.compute_node, fake_compute_node) 
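# NOTE (illustrative, not from the original source): the check above relies
# on Service.compute_node being lazy-loaded through
# objects.ComputeNodeList.get_all_by_host() and then cached on the object, so
# the second attribute access below must not issue another lookup -- which is
# why a single assert_called_once_with() suffices.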
# Make sure it doesn't re-fetch this service_obj.compute_node mock_get.assert_called_once_with(self.context, 'fake-host') @mock.patch.object(db, 'service_get_all_computes_by_hv_type') def test_get_all_computes_by_hv_type(self, mock_get_all): mock_get_all.return_value = [fake_service] services = service.ServiceList.get_all_computes_by_hv_type( self.context, 'hv-type') self.assertEqual(1, len(services)) self.compare_obj(services[0], fake_service, allow_missing=OPTIONAL) mock_get_all.assert_called_once_with(self.context, 'hv-type', include_disabled=False) def test_load_when_orphaned(self): service_obj = service.Service() service_obj.id = 123 self.assertRaises(exception.OrphanedObjectError, getattr, service_obj, 'compute_node') @mock.patch.object(objects.ComputeNodeList, 'get_all_by_host') def test_obj_make_compatible_for_compute_node(self, get_all_by_host): service_obj = objects.Service(context=self.context) fake_service_dict = fake_service.copy() fake_compute_obj = objects.ComputeNode(host=fake_service['host'], service_id=fake_service['id']) get_all_by_host.return_value = [fake_compute_obj] versions = ovo_base.obj_tree_get_versions('Service') versions['ComputeNode'] = '1.10' service_obj.obj_make_compatible_from_manifest(fake_service_dict, '1.9', versions) self.assertEqual( fake_compute_obj.obj_to_primitive(target_version='1.10', version_manifest=versions), fake_service_dict['compute_node']) @mock.patch('nova.db.api.service_get_minimum_version') def test_get_minimum_version_none(self, mock_get): mock_get.return_value = None self.assertEqual(0, objects.Service.get_minimum_version(self.context, 'nova-compute')) mock_get.assert_called_once_with(self.context, ['nova-compute']) @mock.patch('nova.db.api.service_get_minimum_version') def test_get_minimum_version(self, mock_get): mock_get.return_value = {'nova-compute': 123} self.assertEqual(123, objects.Service.get_minimum_version(self.context, 'nova-compute')) mock_get.assert_called_once_with(self.context, ['nova-compute']) @mock.patch('nova.db.api.service_get_minimum_version') @mock.patch('nova.objects.service.LOG') def test_get_minimum_version_checks_binary(self, mock_log, mock_get): mock_get.return_value = None self.assertEqual(0, objects.Service.get_minimum_version(self.context, 'nova-compute')) self.assertFalse(mock_log.warning.called) self.assertRaises(exception.ObjectActionError, objects.Service.get_minimum_version, self.context, 'compute') self.assertTrue(mock_log.warning.called) @mock.patch('nova.db.api.service_get_minimum_version') def test_get_minimum_version_with_caching(self, mock_get): objects.Service.enable_min_version_cache() mock_get.return_value = {'nova-compute': 123} self.assertEqual(123, objects.Service.get_minimum_version(self.context, 'nova-compute')) self.assertEqual({"nova-compute": 123}, objects.Service._MIN_VERSION_CACHE) self.assertEqual(123, objects.Service.get_minimum_version(self.context, 'nova-compute')) mock_get.assert_called_once_with(self.context, ['nova-compute']) objects.Service._SERVICE_VERSION_CACHING = False objects.Service.clear_min_version_cache() @mock.patch('nova.db.api.service_get_minimum_version') def test_get_min_version_multiple_with_old(self, mock_gmv): mock_gmv.return_value = {'nova-api': None, 'nova-scheduler': 2, 'nova-conductor': 3} binaries = ['nova-api', 'nova-api', 'nova-conductor', 'nova-conductor', 'nova-api'] minimum = objects.Service.get_minimum_version_multi(self.context, binaries) self.assertEqual(0, minimum) @mock.patch('nova.db.api.service_get_minimum_version') def 
test_get_min_version_multiple(self, mock_gmv): mock_gmv.return_value = {'nova-api': 1, 'nova-scheduler': 2, 'nova-conductor': 3} binaries = ['nova-api', 'nova-api', 'nova-conductor', 'nova-conductor', 'nova-api'] minimum = objects.Service.get_minimum_version_multi(self.context, binaries) self.assertEqual(1, minimum) @mock.patch('nova.objects.Service._send_notification') @mock.patch('nova.db.api.service_get_minimum_version', return_value={'nova-compute': 2}) def test_create_above_minimum(self, mock_get, mock_notify): with mock.patch('nova.objects.service.SERVICE_VERSION', new=3): objects.Service(context=self.context, binary='nova-compute').create() @mock.patch('nova.objects.Service._send_notification') @mock.patch('nova.db.api.service_get_minimum_version', return_value={'nova-compute': 2}) def test_create_equal_to_minimum(self, mock_get, mock_notify): with mock.patch('nova.objects.service.SERVICE_VERSION', new=2): objects.Service(context=self.context, binary='nova-compute').create() @mock.patch('nova.db.api.service_get_minimum_version', return_value={'nova-compute': 2}) def test_create_below_minimum(self, mock_get): with mock.patch('nova.objects.service.SERVICE_VERSION', new=1): self.assertRaises(exception.ServiceTooOld, objects.Service(context=self.context, binary='nova-compute', ).create) @mock.patch('nova.objects.base.NovaObject' '.obj_make_compatible_from_manifest', new=mock.Mock()) def test_obj_make_compatible_from_manifest_strips_uuid(self): s = service.Service() primitive = {'uuid': uuidsentinel.service} s.obj_make_compatible_from_manifest(primitive, '1.20', mock.Mock()) self.assertNotIn('uuid', primitive) class TestServiceObject(test_objects._LocalTest, _TestServiceObject): pass class TestRemoteServiceObject(test_objects._RemoteTest, _TestServiceObject): pass class TestServiceVersion(test.NoDBTestCase): def setUp(self): self.ctxt = context.get_admin_context() super(TestServiceVersion, self).setUp() def _collect_things(self): data = { 'compute_rpc': compute_manager.ComputeManager.target.version, } return data def test_version(self): calculated = self._collect_things() self.assertEqual( len(service.SERVICE_VERSION_HISTORY), service.SERVICE_VERSION + 1, 'Service version %i has no history. Please update ' 'nova.objects.service.SERVICE_VERSION_HISTORY ' 'and add %s to it' % (service.SERVICE_VERSION, repr(calculated))) current = service.SERVICE_VERSION_HISTORY[service.SERVICE_VERSION] self.assertEqual( current, calculated, 'Changes detected that require a SERVICE_VERSION change. 
Please ' 'increment nova.objects.service.SERVICE_VERSION, and make sure it ' 'is equal to nova.compute.manager.ComputeManager.target.version.') def test_version_in_init(self): self.assertRaises(exception.ObjectActionError, objects.Service, version=123) def test_version_set_on_init(self): self.assertEqual(service.SERVICE_VERSION, objects.Service().version) def test_version_loaded_from_db(self): fake_version = fake_service['version'] + 1 fake_different_service = dict(fake_service) fake_different_service['version'] = fake_version obj = objects.Service() obj._from_db_object(self.ctxt, obj, fake_different_service) self.assertEqual(fake_version, obj.version) class TestServiceVersionCells(test.TestCase): def setUp(self): self.context = context.get_admin_context() super(TestServiceVersionCells, self).setUp() def _setup_cells(self): # NOTE(danms): Override the base class's cell setup so we can have two self.cells = fixtures.CellDatabases() self.cells.add_cell_database(uuidsentinel.cell1, default=True) self.cells.add_cell_database(uuidsentinel.cell2) self.useFixture(self.cells) cm = objects.CellMapping(context=self.context, uuid=uuidsentinel.cell1, name='cell1', transport_url='fake://nowhere/', database_connection=uuidsentinel.cell1) cm.create() cm = objects.CellMapping(context=self.context, uuid=uuidsentinel.cell2, name='cell2', transport_url='fake://nowhere/', database_connection=uuidsentinel.cell2) cm.create() def _create_services(self, *versions): cells = objects.CellMappingList.get_all(self.context) index = 0 for version in versions: service = objects.Service(context=self.context, binary='nova-compute') service.version = version cell = cells[index % len(cells)] with context.target_cell(self.context, cell): service.create() index += 1 @mock.patch('nova.objects.Service._send_notification') @mock.patch('nova.objects.Service._check_minimum_version') def test_version_all_cells(self, mock_check, mock_notify): self._create_services(16, 16, 13, 16) self.assertEqual(13, service.get_minimum_version_all_cells( self.context, ['nova-compute'])) @mock.patch('nova.objects.service.LOG') def test_get_minimum_version_checks_binary(self, mock_log): ex = self.assertRaises(exception.ObjectActionError, service.get_minimum_version_all_cells, self.context, ['compute']) self.assertIn('Invalid binary prefix', six.text_type(ex)) self.assertTrue(mock_log.warning.called) @mock.patch('nova.context.scatter_gather_all_cells') def test_version_all_cells_with_fail(self, mock_scatter): mock_scatter.return_value = { 'foo': {'nova-compute': 13}, 'bar': exception.ServiceNotFound(service_id='fake'), } self.assertEqual(13, service.get_minimum_version_all_cells( self.context, ['nova-compute'])) self.assertRaises(exception.CellTimeout, service.get_minimum_version_all_cells, self.context, ['nova-compute'], require_all=True) @mock.patch('nova.context.scatter_gather_all_cells') def test_version_all_cells_with_timeout(self, mock_scatter): mock_scatter.return_value = { 'foo': {'nova-compute': 13}, 'bar': context.did_not_respond_sentinel, } self.assertEqual(13, service.get_minimum_version_all_cells( self.context, ['nova-compute'])) self.assertRaises(exception.CellTimeout, service.get_minimum_version_all_cells, self.context, ['nova-compute'], require_all=True) @mock.patch('nova.context.scatter_gather_all_cells') def test_version_all_cells_exclude_zero_service(self, mock_scatter): mock_scatter.return_value = { 'foo': {'nova-compute': 13}, 'bar': {'nova-compute': 0}, } self.assertEqual(13, service.get_minimum_version_all_cells( self.context, 
['nova-compute'])) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_tag.py0000664000175000017500000000730600000000000021475 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.objects import tag from nova.tests.unit.objects import test_objects RESOURCE_ID = '123' TAG_NAME1 = 'fake-tag1' TAG_NAME2 = 'fake-tag2' fake_tag1 = { 'resource_id': RESOURCE_ID, 'tag': TAG_NAME1, } fake_tag2 = { 'resource_id': RESOURCE_ID, 'tag': TAG_NAME1, } fake_tag_list = [fake_tag1, fake_tag2] def _get_tag(resource_id, tag_name, context=None): t = tag.Tag(context=context) t.resource_id = resource_id t.tag = tag_name return t class _TestTagObject(object): @mock.patch('nova.db.api.instance_tag_add') def test_create(self, tag_add): tag_add.return_value = fake_tag1 tag_obj = _get_tag(RESOURCE_ID, TAG_NAME1, context=self.context) tag_obj.create() tag_add.assert_called_once_with(self.context, RESOURCE_ID, TAG_NAME1) self.compare_obj(tag_obj, fake_tag1) @mock.patch('nova.db.api.instance_tag_delete') def test_destroy(self, tag_delete): tag.Tag.destroy(self.context, RESOURCE_ID, TAG_NAME1) tag_delete.assert_called_once_with(self.context, RESOURCE_ID, TAG_NAME1) @mock.patch('nova.db.api.instance_tag_exists') def test_exists(self, instance_tag_exists): tag.Tag.exists(self.context, RESOURCE_ID, TAG_NAME1) instance_tag_exists.assert_called_once_with( self.context, RESOURCE_ID, TAG_NAME1) class TestMigrationObject(test_objects._LocalTest, _TestTagObject): pass class TestRemoteMigrationObject(test_objects._RemoteTest, _TestTagObject): pass class _TestTagList(object): def _compare_tag_list(self, tag_list, tag_list_obj): self.assertEqual(len(tag_list), len(tag_list_obj)) for obj, fake in zip(tag_list_obj, tag_list): self.assertIsInstance(obj, tag.Tag) self.assertEqual(obj.tag, fake['tag']) self.assertEqual(obj.resource_id, fake['resource_id']) @mock.patch('nova.db.api.instance_tag_get_by_instance_uuid') def test_get_by_resource_id(self, get_by_inst): get_by_inst.return_value = fake_tag_list tag_list_obj = tag.TagList.get_by_resource_id( self.context, RESOURCE_ID) get_by_inst.assert_called_once_with(self.context, RESOURCE_ID) self._compare_tag_list(fake_tag_list, tag_list_obj) @mock.patch('nova.db.api.instance_tag_set') def test_create(self, tag_set): tag_set.return_value = fake_tag_list tag_list_obj = tag.TagList.create( self.context, RESOURCE_ID, [TAG_NAME1, TAG_NAME2]) tag_set.assert_called_once_with(self.context, RESOURCE_ID, [TAG_NAME1, TAG_NAME2]) self._compare_tag_list(fake_tag_list, tag_list_obj) @mock.patch('nova.db.api.instance_tag_delete_all') def test_destroy(self, tag_delete_all): tag.TagList.destroy(self.context, RESOURCE_ID) tag_delete_all.assert_called_once_with(self.context, RESOURCE_ID) class TestTagList(test_objects._LocalTest, _TestTagList): pass class TestTagListRemote(test_objects._RemoteTest, _TestTagList): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 
xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_task_log.py0000664000175000017500000001213400000000000022520 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import iso8601 import mock from oslo_utils import timeutils from nova import objects from nova.tests.unit.objects import test_objects from nova import utils NOW = timeutils.utcnow().replace(microsecond=0) fake_task_log = { 'created_at': NOW, 'updated_at': None, 'deleted_at': None, 'deleted': 0, 'id': 1, 'task_name': 'fake-name', 'state': 'fake-state', 'host': 'fake-host', 'period_beginning': NOW - datetime.timedelta(seconds=10), 'period_ending': NOW, 'message': 'fake-message', 'task_items': 1, 'errors': 0, } class _TestTaskLog(object): @mock.patch('nova.db.api.task_log_get', return_value=fake_task_log) def test_get(self, mock_get): task_log = objects.TaskLog.get(self.context, fake_task_log['task_name'], fake_task_log['period_beginning'], fake_task_log['period_ending'], fake_task_log['host'], state=fake_task_log['state']) mock_get.assert_called_once_with( self.context, fake_task_log['task_name'], utils.strtime(fake_task_log['period_beginning']), utils.strtime(fake_task_log['period_ending']), fake_task_log['host'], state=fake_task_log['state']) self.compare_obj(task_log, fake_task_log) @mock.patch('nova.db.api.task_log_begin_task') def test_begin_task(self, mock_begin_task): task_log = objects.TaskLog(self.context) task_log.task_name = fake_task_log['task_name'] task_log.period_beginning = fake_task_log['period_beginning'] task_log.period_ending = fake_task_log['period_ending'] task_log.host = fake_task_log['host'] task_log.task_items = fake_task_log['task_items'] task_log.message = fake_task_log['message'] task_log.begin_task() mock_begin_task.assert_called_once_with( self.context, fake_task_log['task_name'], fake_task_log['period_beginning'].replace( tzinfo=iso8601.UTC), fake_task_log['period_ending'].replace( tzinfo=iso8601.UTC), fake_task_log['host'], task_items=fake_task_log['task_items'], message=fake_task_log['message']) @mock.patch('nova.db.api.task_log_end_task') def test_end_task(self, mock_end_task): task_log = objects.TaskLog(self.context) task_log.task_name = fake_task_log['task_name'] task_log.period_beginning = fake_task_log['period_beginning'] task_log.period_ending = fake_task_log['period_ending'] task_log.host = fake_task_log['host'] task_log.errors = fake_task_log['errors'] task_log.message = fake_task_log['message'] task_log.end_task() mock_end_task.assert_called_once_with( self.context, fake_task_log['task_name'], fake_task_log['period_beginning'].replace( tzinfo=iso8601.UTC), fake_task_log['period_ending'].replace( tzinfo=iso8601.UTC), fake_task_log['host'], errors=fake_task_log['errors'], message=fake_task_log['message']) class TestTaskLog(test_objects._LocalTest, _TestTaskLog): pass class TestRemoteTaskLog(test_objects._RemoteTest, _TestTaskLog): pass class _TestTaskLogList(object): @mock.patch('nova.db.api.task_log_get_all') def test_get_all(self, 
mock_get_all): fake_task_logs = [dict(fake_task_log, id=1), dict(fake_task_log, id=2)] mock_get_all.return_value = fake_task_logs task_logs = objects.TaskLogList.get_all( self.context, fake_task_log['task_name'], fake_task_log['period_beginning'], fake_task_log['period_ending'], host=fake_task_log['host'], state=fake_task_log['state']) mock_get_all.assert_called_once_with( self.context, fake_task_log['task_name'], utils.strtime(fake_task_log['period_beginning']), utils.strtime(fake_task_log['period_ending']), host=fake_task_log['host'], state=fake_task_log['state']) for index, task_log in enumerate(task_logs): self.compare_obj(task_log, fake_task_logs[index]) class TestTaskLogList(test_objects._LocalTest, _TestTaskLogList): pass class TestRemoteTaskLogList(test_objects._RemoteTest, _TestTaskLogList): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_trusted_certs.py0000664000175000017500000000310300000000000023603 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.objects import trusted_certs from nova.tests.unit.objects import test_objects from oslo_serialization import jsonutils fake_trusted_certs = trusted_certs.TrustedCerts(ids=['fake-trusted-cert-1', 'fake-trusted-cert-2']) fake_instance_extras = { 'trusted_certs': jsonutils.dumps(fake_trusted_certs.obj_to_primitive()) } class _TestTrustedCertsObject(object): @mock.patch('nova.db.api.instance_extra_get_by_instance_uuid') def test_get_by_instance_uuid(self, mock_get): mock_get.return_value = fake_instance_extras certs = trusted_certs.TrustedCerts.get_by_instance_uuid( self.context, 'fake_uuid') self.assertEqual(certs.ids, fake_trusted_certs.ids) class TestTrustedCertsObject(test_objects._LocalTest, _TestTrustedCertsObject): pass class TestRemoteTrustedCertsObject(test_objects._RemoteTest, _TestTrustedCertsObject): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/objects/test_vcpu_model.py0000664000175000017500000000721200000000000023053 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
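# Unit tests for the VirtCPUFeature and VirtCPUModel versioned objects:
# field validation (policy, arch), default values, and the JSON round-trip
# serialization via to_json()/from_json().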
from nova import objects from nova.objects import fields as obj_fields from nova.tests.unit.objects import test_objects fake_cpu_model_feature = { 'policy': obj_fields.CPUFeaturePolicy.REQUIRE, 'name': 'sse2', } fake_cpu_model_feature_obj = objects.VirtCPUFeature( **fake_cpu_model_feature) fake_vcpumodel_dict = { 'arch': obj_fields.Architecture.I686, 'vendor': 'fake-vendor', 'match': obj_fields.CPUMatch.EXACT, 'topology': objects.VirtCPUTopology(sockets=1, cores=1, threads=1), 'features': [fake_cpu_model_feature_obj], 'mode': obj_fields.CPUMode.HOST_MODEL, 'model': 'fake-model', } fake_vcpumodel = objects.VirtCPUModel(**fake_vcpumodel_dict) class _TestVirtCPUFeatureObj(object): def test_policy_limitation(self): obj = objects.VirtCPUFeature() self.assertRaises(ValueError, setattr, obj, 'policy', 'foo') class TestVirtCPUFeatureObj(test_objects._LocalTest, _TestVirtCPUFeatureObj): pass class TestRemoteVirtCPUFeatureObj(test_objects._LocalTest, _TestVirtCPUFeatureObj): pass class _TestVirtCPUModel(object): def test_create(self): model = objects.VirtCPUModel(**fake_vcpumodel_dict) self.assertEqual(fake_vcpumodel_dict['model'], model.model) self.assertEqual(fake_vcpumodel_dict['topology'].sockets, model.topology.sockets) feature = model.features[0] self.assertEqual(fake_cpu_model_feature['policy'], feature.policy) def test_defaults(self): model = objects.VirtCPUModel() self.assertIsNone(model.mode) self.assertIsNone(model.model) self.assertIsNone(model.vendor) self.assertIsNone(model.arch) self.assertIsNone(model.match) self.assertEqual([], model.features) self.assertIsNone(model.topology) def test_arch_field(self): model = objects.VirtCPUModel(**fake_vcpumodel_dict) self.assertRaises(ValueError, setattr, model, 'arch', 'foo') def test_serialize(self): modelin = objects.VirtCPUModel(**fake_vcpumodel_dict) modelout = objects.VirtCPUModel.from_json(modelin.to_json()) self.assertEqual(modelin.mode, modelout.mode) self.assertEqual(modelin.model, modelout.model) self.assertEqual(modelin.vendor, modelout.vendor) self.assertEqual(modelin.arch, modelout.arch) self.assertEqual(modelin.match, modelout.match) self.assertEqual(modelin.features[0].policy, modelout.features[0].policy) self.assertEqual(modelin.features[0].name, modelout.features[0].name) self.assertEqual(modelin.topology.sockets, modelout.topology.sockets) self.assertEqual(modelin.topology.cores, modelout.topology.cores) self.assertEqual(modelin.topology.threads, modelout.topology.threads) class TestVirtCPUModel(test_objects._LocalTest, _TestVirtCPUModel): pass class TestRemoteVirtCPUModel(test_objects._LocalTest, _TestVirtCPUModel): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/objects/test_virt_cpu_topology.py0000664000175000017500000000256500000000000024513 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
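# Unit tests for the VirtCPUTopology versioned object: conversion between
# the object and its plain-dict form via from_dict()/to_dict().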
from nova import objects from nova.tests.unit.objects import test_objects _top_dict = { 'sockets': 2, 'cores': 4, 'threads': 8 } class _TestVirtCPUTopologyObject(object): def test_object_from_dict(self): top_obj = objects.VirtCPUTopology.from_dict(_top_dict) self.compare_obj(top_obj, _top_dict) def test_object_to_dict(self): top_obj = objects.VirtCPUTopology() top_obj.sockets = 2 top_obj.cores = 4 top_obj.threads = 8 spec = top_obj.to_dict() self.assertEqual(_top_dict, spec) class TestVirtCPUTopologyObject(test_objects._LocalTest, _TestVirtCPUTopologyObject): pass class TestRemoteVirtCPUTopologyObject(test_objects._RemoteTest, _TestVirtCPUTopologyObject): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_virtual_interface.py0000664000175000017500000001560600000000000024432 0ustar00zuulzuul00000000000000# Copyright (C) 2014, Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils.fixture import uuidsentinel as uuids from nova.db import api as db from nova.objects import virtual_interface as vif_obj from nova.tests.unit.objects import test_objects fake_vif = { 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': 0, 'id': 1, 'address': '00:00:00:00:00:00', 'network_id': 123, 'instance_uuid': uuids.instance, 'uuid': uuids.uuid, 'tag': 'fake-tag', } class _TestVirtualInterface(object): @staticmethod def _compare(test, db, obj): for field, value in db.items(): test.assertEqual(db[field], getattr(obj, field)) def test_get_by_id(self): with mock.patch.object(db, 'virtual_interface_get') as get: get.return_value = fake_vif vif = vif_obj.VirtualInterface.get_by_id(self.context, 1) self._compare(self, fake_vif, vif) def test_get_by_uuid(self): with mock.patch.object(db, 'virtual_interface_get_by_uuid') as get: get.return_value = fake_vif vif = vif_obj.VirtualInterface.get_by_uuid(self.context, 'fake-uuid-2') self._compare(self, fake_vif, vif) def test_get_by_address(self): with mock.patch.object(db, 'virtual_interface_get_by_address') as get: get.return_value = fake_vif vif = vif_obj.VirtualInterface.get_by_address(self.context, '00:00:00:00:00:00') self._compare(self, fake_vif, vif) def test_get_by_instance_and_network(self): with mock.patch.object(db, 'virtual_interface_get_by_instance_and_network') as get: get.return_value = fake_vif vif = vif_obj.VirtualInterface.get_by_instance_and_network( self.context, 'fake-uuid', 123) self._compare(self, fake_vif, vif) def test_create(self): vif = vif_obj.VirtualInterface(context=self.context) vif.address = '00:00:00:00:00:00' vif.network_id = 123 vif.instance_uuid = uuids.instance vif.uuid = uuids.uuid vif.tag = 'fake-tag' with mock.patch.object(db, 'virtual_interface_create') as create: create.return_value = fake_vif vif.create() self.assertEqual(self.context, vif._context) vif._context = None self._compare(self, fake_vif, vif) def test_create_neutron_styyyyle(self): vif = vif_obj.VirtualInterface(context=self.context) 
vif.address = '00:00:00:00:00:00/%s' % uuids.port vif.instance_uuid = uuids.instance vif.uuid = uuids.uuid vif.tag = 'fake-tag' with mock.patch.object(db, 'virtual_interface_create') as create: create.return_value = dict(fake_vif, address=vif.address) vif.create() self.assertEqual(self.context, vif._context) vif._context = None # NOTE(danms): The actual vif should now have the namespace # stripped out self._compare(self, fake_vif, vif) def test_save(self): vif = vif_obj.VirtualInterface(context=self.context) vif.address = '00:00:00:00:00:00' vif.network_id = 123 vif.instance_uuid = uuids.instance_uuid vif.uuid = uuids.vif_uuid vif.tag = 'foo' vif.create() with mock.patch.object(db, 'virtual_interface_update') as update: update.return_value = fake_vif vif.tag = 'bar' vif.save() update.assert_called_once_with(self.context, '00:00:00:00:00:00', {'tag': 'bar'}) def test_delete_by_instance_uuid(self): with mock.patch.object(db, 'virtual_interface_delete_by_instance') as delete: vif_obj.VirtualInterface.delete_by_instance_uuid(self.context, 'fake-uuid') delete.assert_called_with(self.context, 'fake-uuid') def test_destroy(self): vif = vif_obj.VirtualInterface(context=self.context) vif.address = '00:00:00:00:00:00' vif.network_id = 123 vif.instance_uuid = uuids.instance_uuid vif.uuid = uuids.vif_uuid vif.tag = 'foo' vif.create() vif = vif_obj.VirtualInterface.get_by_id(self.context, vif.id) vif.destroy() self.assertIsNone(vif_obj.VirtualInterface.get_by_id(self.context, vif.id)) def test_obj_make_compatible_pre_1_1(self): vif = vif_obj.VirtualInterface(context=self.context) vif.address = '00:00:00:00:00:00' vif.network_id = 123 vif.instance_uuid = uuids.instance vif.uuid = uuids.uuid vif.tag = 'fake-tag' data = lambda x: x['nova_object.data'] primitive = data(vif.obj_to_primitive(target_version='1.0')) self.assertNotIn('tag', primitive) self.assertIn('uuid', primitive) class TestVirtualInterfaceObject(test_objects._LocalTest, _TestVirtualInterface): pass class TestRemoteVirtualInterfaceObject(test_objects._RemoteTest, _TestVirtualInterface): pass class _TestVirtualInterfaceList(object): def test_get_all(self): with mock.patch.object(db, 'virtual_interface_get_all') as get: get.return_value = [fake_vif] vifs = vif_obj.VirtualInterfaceList.get_all(self.context) self.assertEqual(1, len(vifs)) _TestVirtualInterface._compare(self, fake_vif, vifs[0]) def test_get_by_instance_uuid(self): with mock.patch.object(db, 'virtual_interface_get_by_instance') as get: get.return_value = [fake_vif] vifs = vif_obj.VirtualInterfaceList.get_by_instance_uuid( self.context, 'fake-uuid') self.assertEqual(1, len(vifs)) _TestVirtualInterface._compare(self, fake_vif, vifs[0]) class TestVirtualInterfaceList(test_objects._LocalTest, _TestVirtualInterfaceList): pass class TestRemoteVirtualInterfaceList(test_objects._RemoteTest, _TestVirtualInterfaceList): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/objects/test_volume_usage.py0000664000175000017500000000620100000000000023406 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova import objects from nova.tests.unit.objects import test_objects NOW = timeutils.utcnow().replace(microsecond=0) fake_vol_usage = { 'created_at': NOW, 'updated_at': None, 'deleted_at': None, 'deleted': 0, 'id': 1, 'volume_id': uuids.volume_id, 'instance_uuid': uuids.instance, 'project_id': 'fake-project-id', 'user_id': 'fake-user-id', 'availability_zone': None, 'tot_last_refreshed': None, 'tot_reads': 0, 'tot_read_bytes': 0, 'tot_writes': 0, 'tot_write_bytes': 0, 'curr_last_refreshed': NOW, 'curr_reads': 10, 'curr_read_bytes': 20, 'curr_writes': 30, 'curr_write_bytes': 40, } class _TestVolumeUsage(object): @mock.patch('nova.db.api.vol_usage_update', return_value=fake_vol_usage) def test_save(self, mock_upd): vol_usage = objects.VolumeUsage(self.context) vol_usage.volume_id = uuids.volume_id vol_usage.instance_uuid = uuids.instance vol_usage.project_id = 'fake-project-id' vol_usage.user_id = 'fake-user-id' vol_usage.availability_zone = None vol_usage.curr_reads = 10 vol_usage.curr_read_bytes = 20 vol_usage.curr_writes = 30 vol_usage.curr_write_bytes = 40 vol_usage.save() mock_upd.assert_called_once_with( self.context, uuids.volume_id, 10, 20, 30, 40, uuids.instance, 'fake-project-id', 'fake-user-id', None, update_totals=False) self.compare_obj(vol_usage, fake_vol_usage) @mock.patch('nova.db.api.vol_usage_update', return_value=fake_vol_usage) def test_save_update_totals(self, mock_upd): vol_usage = objects.VolumeUsage(self.context) vol_usage.volume_id = uuids.volume_id vol_usage.instance_uuid = uuids.instance vol_usage.project_id = 'fake-project-id' vol_usage.user_id = 'fake-user-id' vol_usage.availability_zone = None vol_usage.curr_reads = 10 vol_usage.curr_read_bytes = 20 vol_usage.curr_writes = 30 vol_usage.curr_write_bytes = 40 vol_usage.save(update_totals=True) mock_upd.assert_called_once_with( self.context, uuids.volume_id, 10, 20, 30, 40, uuids.instance, 'fake-project-id', 'fake-user-id', None, update_totals=True) self.compare_obj(vol_usage, fake_vol_usage) class TestVolumeUsage(test_objects._LocalTest, _TestVolumeUsage): pass class TestRemoteVolumeUsage(test_objects._RemoteTest, _TestVolumeUsage): pass ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.558468 nova-21.2.4/nova/tests/unit/pci/0000775000175000017500000000000000000000000016425 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/pci/__init__.py0000664000175000017500000000000000000000000020524 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/pci/fakes.py0000664000175000017500000000225100000000000020070 0ustar00zuulzuul00000000000000# Copyright (c) 2014 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import functools import mock from nova.pci import whitelist def fake_pci_whitelist(): devspec = mock.Mock() devspec.get_tags.return_value = None patcher = mock.patch.object(whitelist.Whitelist, 'get_devspec', return_value=devspec) patcher.start() return patcher def patch_pci_whitelist(f): @functools.wraps(f) def wrapper(self, *args, **kwargs): patcher = fake_pci_whitelist() try: f(self, *args, **kwargs) finally: patcher.stop() return wrapper ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/pci/test_devspec.py0000664000175000017500000004626400000000000021503 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import six from nova import exception from nova import objects from nova.pci import devspec from nova import test dev = {"vendor_id": "8086", "product_id": "5057", "address": "0000:0a:00.5", "parent_addr": "0000:0a:00.0"} class PciAddressSpecTestCase(test.NoDBTestCase): def test_pci_address_spec_abstact_instance_fail(self): self.assertRaises(TypeError, devspec.PciAddressSpec) class PhysicalPciAddressTestCase(test.NoDBTestCase): pci_addr = {"domain": "0000", "bus": "0a", "slot": "00", "function": "5"} def test_init_by_dict(self): phys_addr = devspec.PhysicalPciAddress(self.pci_addr) self.assertEqual(phys_addr.domain, self.pci_addr['domain']) self.assertEqual(phys_addr.bus, self.pci_addr['bus']) self.assertEqual(phys_addr.slot, self.pci_addr['slot']) self.assertEqual(phys_addr.func, self.pci_addr['function']) def test_init_by_dict_invalid_address_values(self): invalid_val_addr = {"domain": devspec.MAX_DOMAIN + 1, "bus": devspec.MAX_BUS + 1, "slot": devspec.MAX_SLOT + 1, "function": devspec.MAX_FUNC + 1} for component in invalid_val_addr: address = dict(self.pci_addr) address[component] = str(invalid_val_addr[component]) self.assertRaises(exception.PciConfigInvalidWhitelist, devspec.PhysicalPciAddress, address) def test_init_by_dict_missing_values(self): for component in self.pci_addr: address = dict(self.pci_addr) del address[component] self.assertRaises(exception.PciDeviceWrongAddressFormat, devspec.PhysicalPciAddress, address) def test_init_by_string(self): address_str = "0000:0a:00.5" phys_addr = devspec.PhysicalPciAddress(address_str) self.assertEqual(phys_addr.domain, "0000") self.assertEqual(phys_addr.bus, "0a") self.assertEqual(phys_addr.slot, "00") self.assertEqual(phys_addr.func, "5") def test_init_by_string_invalid_values(self): invalid_addresses = [str(devspec.MAX_DOMAIN + 1) + ":0a:00.5", "0000:" + str(devspec.MAX_BUS + 1) + ":00.5", "0000:0a:" + str(devspec.MAX_SLOT + 
1) + ".5", "0000:0a:00." + str(devspec.MAX_FUNC + 1)] for address in invalid_addresses: self.assertRaises(exception.PciConfigInvalidWhitelist, devspec.PhysicalPciAddress, address) def test_init_by_string_missing_values(self): invalid_addresses = ["00:0000:0a:00.5", "0a:00.5", "0000:00.5"] for address in invalid_addresses: self.assertRaises(exception.PciDeviceWrongAddressFormat, devspec.PhysicalPciAddress, address) def test_match(self): address_str = "0000:0a:00.5" phys_addr1 = devspec.PhysicalPciAddress(address_str) phys_addr2 = devspec.PhysicalPciAddress(address_str) self.assertTrue(phys_addr1.match(phys_addr2)) def test_false_match(self): address_str = "0000:0a:00.5" phys_addr1 = devspec.PhysicalPciAddress(address_str) addresses = ["0010:0a:00.5", "0000:0b:00.5", "0000:0a:01.5", "0000:0a:00.4"] for address in addresses: phys_addr2 = devspec.PhysicalPciAddress(address) self.assertFalse(phys_addr1.match(phys_addr2)) class PciAddressGlobSpecTestCase(test.NoDBTestCase): def test_init(self): address_str = "0000:0a:00.5" phys_addr = devspec.PciAddressGlobSpec(address_str) self.assertEqual(phys_addr.domain, "0000") self.assertEqual(phys_addr.bus, "0a") self.assertEqual(phys_addr.slot, "00") self.assertEqual(phys_addr.func, "5") def test_init_invalid_address(self): invalid_addresses = ["00:0000:0a:00.5"] for address in invalid_addresses: self.assertRaises(exception.PciDeviceWrongAddressFormat, devspec.PciAddressGlobSpec, address) def test_init_invalid_values(self): invalid_addresses = [str(devspec.MAX_DOMAIN + 1) + ":0a:00.5", "0000:" + str(devspec.MAX_BUS + 1) + ":00.5", "0000:0a:" + str(devspec.MAX_SLOT + 1) + ".5", "0000:0a:00." + str(devspec.MAX_FUNC + 1)] for address in invalid_addresses: self.assertRaises(exception.PciConfigInvalidWhitelist, devspec.PciAddressGlobSpec, address) def test_match(self): address_str = "0000:0a:00.5" phys_addr = devspec.PhysicalPciAddress(address_str) addresses = ["0000:0a:00.5", "*:0a:00.5", "0000:*:00.5", "0000:0a:*.5", "0000:0a:00.*"] for address in addresses: glob_addr = devspec.PciAddressGlobSpec(address) self.assertTrue(glob_addr.match(phys_addr)) def test_false_match(self): address_str = "0000:0a:00.5" phys_addr = devspec.PhysicalPciAddress(address_str) addresses = ["0010:0a:00.5", "0000:0b:00.5", "*:0a:01.5", "0000:0a:*.4"] for address in addresses: glob_addr = devspec.PciAddressGlobSpec(address) self.assertFalse(phys_addr.match(glob_addr)) class PciAddressRegexSpecTestCase(test.NoDBTestCase): def test_init(self): address_regex = {"domain": ".*", "bus": "02", "slot": "01", "function": "[0-2]"} phys_addr = devspec.PciAddressRegexSpec(address_regex) self.assertEqual(phys_addr.domain, ".*") self.assertEqual(phys_addr.bus, "02") self.assertEqual(phys_addr.slot, "01") self.assertEqual(phys_addr.func, "[0-2]") def test_init_invalid_address(self): invalid_addresses = [{"domain": "*", "bus": "02", "slot": "01", "function": "[0-2]"}] for address in invalid_addresses: self.assertRaises(exception.PciDeviceWrongAddressFormat, devspec.PciAddressRegexSpec, address) def test_match(self): address_str = "0000:0a:00.5" phys_addr = devspec.PhysicalPciAddress(address_str) addresses = [{"domain": ".*", "bus": "0a", "slot": "00", "function": "[5-6]"}, {"domain": ".*", "bus": "0a", "slot": ".*", "function": "[4-5]"}, {"domain": ".*", "bus": "0a", "slot": "[0-3]", "function": ".*"}] for address in addresses: regex_addr = devspec.PciAddressRegexSpec(address) self.assertTrue(regex_addr.match(phys_addr)) def test_false_match(self): address_str = "0000:0b:00.5" phys_addr = 
devspec.PhysicalPciAddress(address_str) addresses = [{"domain": ".*", "bus": "0a", "slot": "00", "function": "[5-6]"}, {"domain": ".*", "bus": "02", "slot": ".*", "function": "[4-5]"}, {"domain": ".*", "bus": "02", "slot": "[0-3]", "function": ".*"}] for address in addresses: regex_addr = devspec.PciAddressRegexSpec(address) self.assertFalse(regex_addr.match(phys_addr)) class PciAddressTestCase(test.NoDBTestCase): def test_wrong_address(self): pci_info = {"vendor_id": "8086", "address": "*: *: *.6", "product_id": "5057", "physical_network": "hr_net"} pci = devspec.PciDeviceSpec(pci_info) self.assertFalse(pci.match(dev)) def test_address_too_big(self): pci_info = {"address": "0000:0a:0b:00.5", "physical_network": "hr_net"} self.assertRaises(exception.PciDeviceWrongAddressFormat, devspec.PciDeviceSpec, pci_info) def test_address_invalid_character(self): pci_info = {"address": "0000:h4.12:6", "physical_network": "hr_net"} exc = self.assertRaises(exception.PciConfigInvalidWhitelist, devspec.PciDeviceSpec, pci_info) msg = ("Invalid PCI devices Whitelist config: property func ('12:6') " "does not parse as a hex number.") self.assertEqual(msg, six.text_type(exc)) def test_max_func(self): pci_info = {"address": "0000:0a:00.%s" % (devspec.MAX_FUNC + 1), "physical_network": "hr_net"} exc = self.assertRaises(exception.PciConfigInvalidWhitelist, devspec.PciDeviceSpec, pci_info) msg = ('Invalid PCI devices Whitelist config: property func (%x) is ' 'greater than the maximum allowable value (%x).' % (devspec.MAX_FUNC + 1, devspec.MAX_FUNC)) self.assertEqual(msg, six.text_type(exc)) def test_max_domain(self): pci_info = {"address": "%x:0a:00.5" % (devspec.MAX_DOMAIN + 1), "physical_network": "hr_net"} exc = self.assertRaises(exception.PciConfigInvalidWhitelist, devspec.PciDeviceSpec, pci_info) msg = ('Invalid PCI devices Whitelist config: property domain (%X) ' 'is greater than the maximum allowable value (%X).' % (devspec.MAX_DOMAIN + 1, devspec.MAX_DOMAIN)) self.assertEqual(msg, six.text_type(exc)) def test_max_bus(self): pci_info = {"address": "0000:%x:00.5" % (devspec.MAX_BUS + 1), "physical_network": "hr_net"} exc = self.assertRaises(exception.PciConfigInvalidWhitelist, devspec.PciDeviceSpec, pci_info) msg = ('Invalid PCI devices Whitelist config: property bus (%X) is ' 'greater than the maximum allowable value (%X).' % (devspec.MAX_BUS + 1, devspec.MAX_BUS)) self.assertEqual(msg, six.text_type(exc)) def test_max_slot(self): pci_info = {"address": "0000:0a:%x.5" % (devspec.MAX_SLOT + 1), "physical_network": "hr_net"} exc = self.assertRaises(exception.PciConfigInvalidWhitelist, devspec.PciDeviceSpec, pci_info) msg = ('Invalid PCI devices Whitelist config: property slot (%X) is ' 'greater than the maximum allowable value (%X).' 
% (devspec.MAX_SLOT + 1, devspec.MAX_SLOT)) self.assertEqual(msg, six.text_type(exc)) def test_address_is_undefined(self): pci_info = {"vendor_id": "8086", "product_id": "5057"} pci = devspec.PciDeviceSpec(pci_info) self.assertTrue(pci.match(dev)) def test_partial_address(self): pci_info = {"address": ":0a:00.", "physical_network": "hr_net"} pci = devspec.PciDeviceSpec(pci_info) dev = {"vendor_id": "1137", "product_id": "0071", "address": "0000:0a:00.5", "parent_addr": "0000:0a:00.0"} self.assertTrue(pci.match(dev)) def test_partial_address_func(self): pci_info = {"address": ".5", "physical_network": "hr_net"} pci = devspec.PciDeviceSpec(pci_info) dev = {"vendor_id": "1137", "product_id": "0071", "address": "0000:0a:00.5", "phys_function": "0000:0a:00.0"} self.assertTrue(pci.match(dev)) @mock.patch('nova.pci.utils.is_physical_function', return_value=True) def test_address_is_pf(self, mock_is_physical_function): pci_info = {"address": "0000:0a:00.0", "physical_network": "hr_net"} pci = devspec.PciDeviceSpec(pci_info) self.assertTrue(pci.match(dev)) @mock.patch('nova.pci.utils.is_physical_function', return_value=True) def test_address_pf_no_parent_addr(self, mock_is_physical_function): _dev = dev.copy() _dev.pop('parent_addr') pci_info = {"address": "0000:0a:00.5", "physical_network": "hr_net"} pci = devspec.PciDeviceSpec(pci_info) self.assertTrue(pci.match(_dev)) def test_spec_regex_match(self): pci_info = {"address": {"domain": ".*", "bus": ".*", "slot": "00", "function": "[5-6]" }, "physical_network": "hr_net"} pci = devspec.PciDeviceSpec(pci_info) self.assertTrue(pci.match(dev)) def test_spec_regex_no_match(self): pci_info = {"address": {"domain": ".*", "bus": ".*", "slot": "00", "function": "[6-7]" }, "physical_network": "hr_net"} pci = devspec.PciDeviceSpec(pci_info) self.assertFalse(pci.match(dev)) def test_spec_invalid_regex(self): pci_info = {"address": {"domain": ".*", "bus": ".*", "slot": "00", "function": "[6[-7]" }, "physical_network": "hr_net"} self.assertRaises(exception.PciDeviceWrongAddressFormat, devspec.PciDeviceSpec, pci_info) def test_spec_invalid_regex2(self): pci_info = {"address": {"domain": "*", "bus": "*", "slot": "00", "function": "[6-7]" }, "physical_network": "hr_net"} self.assertRaises(exception.PciDeviceWrongAddressFormat, devspec.PciDeviceSpec, pci_info) def test_spec_partial_bus_regex(self): pci_info = {"address": {"domain": ".*", "slot": "00", "function": "[5-6]" }, "physical_network": "hr_net"} pci = devspec.PciDeviceSpec(pci_info) self.assertTrue(pci.match(dev)) def test_spec_partial_address_regex(self): pci_info = {"address": {"domain": ".*", "bus": ".*", "slot": "00", }, "physical_network": "hr_net"} pci = devspec.PciDeviceSpec(pci_info) self.assertTrue(pci.match(dev)) def test_spec_invalid_address(self): pci_info = {"address": [".*", ".*", "00", "[6-7]"], "physical_network": "hr_net"} self.assertRaises(exception.PciDeviceWrongAddressFormat, devspec.PciDeviceSpec, pci_info) @mock.patch('nova.pci.utils.is_physical_function', return_value=True) def test_address_is_pf_regex(self, mock_is_physical_function): pci_info = {"address": {"domain": "0000", "bus": "0a", "slot": "00", "function": "0" }, "physical_network": "hr_net"} pci = devspec.PciDeviceSpec(pci_info) self.assertTrue(pci.match(dev)) class PciDevSpecTestCase(test.NoDBTestCase): def test_spec_match(self): pci_info = {"vendor_id": "8086", "address": "*: *: *.5", "product_id": "5057", "physical_network": "hr_net"} pci = devspec.PciDeviceSpec(pci_info) self.assertTrue(pci.match(dev)) def 
test_invalid_vendor_id(self): pci_info = {"vendor_id": "8087", "address": "*: *: *.5", "product_id": "5057", "physical_network": "hr_net"} pci = devspec.PciDeviceSpec(pci_info) self.assertFalse(pci.match(dev)) def test_vendor_id_out_of_range(self): pci_info = {"vendor_id": "80860", "address": "*:*:*.5", "product_id": "5057", "physical_network": "hr_net"} exc = self.assertRaises(exception.PciConfigInvalidWhitelist, devspec.PciDeviceSpec, pci_info) self.assertEqual( "Invalid PCI devices Whitelist config: property vendor_id (80860) " "is greater than the maximum allowable value (FFFF).", six.text_type(exc)) def test_invalid_product_id(self): pci_info = {"vendor_id": "8086", "address": "*: *: *.5", "product_id": "5056", "physical_network": "hr_net"} pci = devspec.PciDeviceSpec(pci_info) self.assertFalse(pci.match(dev)) def test_product_id_out_of_range(self): pci_info = {"vendor_id": "8086", "address": "*:*:*.5", "product_id": "50570", "physical_network": "hr_net"} exc = self.assertRaises(exception.PciConfigInvalidWhitelist, devspec.PciDeviceSpec, pci_info) self.assertEqual( "Invalid PCI devices Whitelist config: property product_id " "(50570) is greater than the maximum allowable value (FFFF).", six.text_type(exc)) def test_devname_and_address(self): pci_info = {"devname": "eth0", "vendor_id": "8086", "address": "*:*:*.5", "physical_network": "hr_net"} self.assertRaises(exception.PciDeviceInvalidDeviceName, devspec.PciDeviceSpec, pci_info) def test_blank_devname(self): pci_info = {"devname": "", "physical_network": "hr_net"} pci = devspec.PciDeviceSpec(pci_info) for field in ['domain', 'bus', 'slot', 'func']: self.assertEqual('*', getattr( pci.address.pci_address_spec, field)) @mock.patch('nova.pci.utils.get_function_by_ifname', return_value = ("0000:0a:00.0", True)) def test_by_name(self, mock_get_function_by_ifname): pci_info = {"devname": "eth0", "physical_network": "hr_net"} pci = devspec.PciDeviceSpec(pci_info) self.assertTrue(pci.match(dev)) @mock.patch('nova.pci.utils.get_function_by_ifname', return_value = (None, False)) def test_invalid_name(self, mock_get_function_by_ifname): pci_info = {"devname": "lo", "physical_network": "hr_net"} pci = devspec.PciDeviceSpec(pci_info) self.assertFalse(pci.match(dev)) def test_pci_obj(self): pci_info = {"vendor_id": "8086", "address": "*:*:*.5", "product_id": "5057", "physical_network": "hr_net"} pci = devspec.PciDeviceSpec(pci_info) pci_dev = { 'compute_node_id': 1, 'address': '0000:00:00.5', 'product_id': '5057', 'vendor_id': '8086', 'status': 'available', 'parent_addr': None, 'extra_k1': 'v1', } pci_obj = objects.PciDevice.create(None, pci_dev) self.assertTrue(pci.match_pci_obj(pci_obj)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/pci/test_manager.py0000664000175000017500000007151600000000000021462 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
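# Unit tests for nova.pci.manager.PciDevTracker: building the PCI device
# tree (SR-IOV PFs and their child VFs), syncing tracked devices against the
# set reported by the hypervisor, and claiming/allocating devices for
# instance PCI requests.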
import copy import mock from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel import nova from nova.compute import vm_states from nova import context from nova import objects from nova.objects import fields from nova.pci import manager from nova import test from nova.tests.unit.pci import fakes as pci_fakes fake_pci = { 'compute_node_id': 1, 'address': '0000:00:00.1', 'product_id': 'p', 'vendor_id': 'v', 'request_id': None, 'status': fields.PciDeviceStatus.AVAILABLE, 'dev_type': fields.PciDeviceType.STANDARD, 'parent_addr': None, 'numa_node': 0} fake_pci_1 = dict(fake_pci, address='0000:00:00.2', product_id='p1', vendor_id='v1') fake_pci_2 = dict(fake_pci, address='0000:00:00.3') fake_pci_3 = dict(fake_pci, address='0000:00:01.1', dev_type=fields.PciDeviceType.SRIOV_PF, vendor_id='v2', product_id='p2', numa_node=None) fake_pci_4 = dict(fake_pci, address='0000:00:02.1', dev_type=fields.PciDeviceType.SRIOV_VF, parent_addr='0000:00:01.1', vendor_id='v2', product_id='p2', numa_node=None) fake_pci_5 = dict(fake_pci, address='0000:00:02.2', dev_type=fields.PciDeviceType.SRIOV_VF, parent_addr='0000:00:01.1', vendor_id='v2', product_id='p2', numa_node=None) fake_db_dev = { 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': None, 'id': 1, 'uuid': uuidsentinel.pci_device1, 'compute_node_id': 1, 'address': '0000:00:00.1', 'vendor_id': 'v', 'product_id': 'p', 'numa_node': 1, 'dev_type': fields.PciDeviceType.STANDARD, 'status': fields.PciDeviceStatus.AVAILABLE, 'dev_id': 'i', 'label': 'l', 'instance_uuid': None, 'extra_info': '{}', 'request_id': None, 'parent_addr': None, } fake_db_dev_1 = dict(fake_db_dev, vendor_id='v1', uuid=uuidsentinel.pci_device1, product_id='p1', id=2, address='0000:00:00.2', numa_node=0) fake_db_dev_2 = dict(fake_db_dev, id=3, address='0000:00:00.3', uuid=uuidsentinel.pci_device2, numa_node=None, parent_addr='0000:00:00.1') fake_db_devs = [fake_db_dev, fake_db_dev_1, fake_db_dev_2] fake_db_dev_3 = dict(fake_db_dev, id=4, address='0000:00:01.1', uuid=uuidsentinel.pci_device3, vendor_id='v2', product_id='p2', numa_node=None, dev_type=fields.PciDeviceType.SRIOV_PF) fake_db_dev_4 = dict(fake_db_dev, id=5, address='0000:00:02.1', uuid=uuidsentinel.pci_device4, numa_node=None, dev_type=fields.PciDeviceType.SRIOV_VF, vendor_id='v2', product_id='p2', parent_addr='0000:00:01.1') fake_db_dev_5 = dict(fake_db_dev, id=6, address='0000:00:02.2', uuid=uuidsentinel.pci_device5, numa_node=None, dev_type=fields.PciDeviceType.SRIOV_VF, vendor_id='v2', product_id='p2', parent_addr='0000:00:01.1') fake_db_devs_tree = [fake_db_dev_3, fake_db_dev_4, fake_db_dev_5] fake_pci_requests = [ {'count': 1, 'spec': [{'vendor_id': 'v'}]}, {'count': 1, 'spec': [{'vendor_id': 'v1'}]}] class PciDevTrackerTestCase(test.NoDBTestCase): def _create_fake_instance(self): self.inst = objects.Instance() self.inst.uuid = uuidsentinel.instance1 self.inst.pci_devices = objects.PciDeviceList() self.inst.vm_state = vm_states.ACTIVE self.inst.task_state = None self.inst.numa_topology = None def _fake_get_pci_devices(self, ctxt, node_id): return self.fake_devs def _fake_pci_device_update(self, ctxt, node_id, address, value): self.update_called += 1 self.called_values = value fake_return = copy.deepcopy(fake_db_dev) return fake_return def _fake_pci_device_destroy(self, ctxt, node_id, address): self.destroy_called += 1 def _create_pci_requests_object(self, requests, instance_uuid=None): instance_uuid = instance_uuid or uuidsentinel.instance1 pci_reqs = [] for request in 
requests: pci_req_obj = objects.InstancePCIRequest(count=request['count'], spec=request['spec']) pci_reqs.append(pci_req_obj) return objects.InstancePCIRequests( instance_uuid=instance_uuid, requests=pci_reqs) def _create_tracker(self, fake_devs): self.fake_devs = fake_devs self.tracker = manager.PciDevTracker(self.fake_context, 1) def setUp(self): super(PciDevTrackerTestCase, self).setUp() self.fake_context = context.get_admin_context() self.fake_devs = fake_db_devs[:] self.stub_out('nova.db.api.pci_device_get_all_by_node', self._fake_get_pci_devices) # The fake_pci_whitelist must be called before creating the fake # devices patcher = pci_fakes.fake_pci_whitelist() self.addCleanup(patcher.stop) self._create_fake_instance() self._create_tracker(fake_db_devs[:]) def test_pcidev_tracker_create(self): self.assertEqual(len(self.tracker.pci_devs), 3) free_devs = self.tracker.pci_stats.get_free_devs() self.assertEqual(len(free_devs), 3) self.assertEqual(list(self.tracker.stale), []) self.assertEqual(len(self.tracker.stats.pools), 3) self.assertEqual(self.tracker.node_id, 1) for dev in self.tracker.pci_devs: self.assertIsNone(dev.parent_device) self.assertEqual(dev.child_devices, []) def test_pcidev_tracker_create_device_tree(self): self._create_tracker(fake_db_devs_tree) self.assertEqual(len(self.tracker.pci_devs), 3) free_devs = self.tracker.pci_stats.get_free_devs() self.assertEqual(len(free_devs), 3) self.assertEqual(list(self.tracker.stale), []) self.assertEqual(len(self.tracker.stats.pools), 2) self.assertEqual(self.tracker.node_id, 1) pf = [dev for dev in self.tracker.pci_devs if dev.dev_type == fields.PciDeviceType.SRIOV_PF].pop() vfs = [dev for dev in self.tracker.pci_devs if dev.dev_type == fields.PciDeviceType.SRIOV_VF] self.assertEqual(2, len(vfs)) # Assert we build the device tree correctly self.assertEqual(vfs, pf.child_devices) for vf in vfs: self.assertEqual(vf.parent_device, pf) def test_pcidev_tracker_create_device_tree_pf_only(self): self._create_tracker([fake_db_dev_3]) self.assertEqual(len(self.tracker.pci_devs), 1) free_devs = self.tracker.pci_stats.get_free_devs() self.assertEqual(len(free_devs), 1) self.assertEqual(list(self.tracker.stale), []) self.assertEqual(len(self.tracker.stats.pools), 1) self.assertEqual(self.tracker.node_id, 1) pf = self.tracker.pci_devs[0] self.assertIsNone(pf.parent_device) self.assertEqual([], pf.child_devices) def test_pcidev_tracker_create_device_tree_vf_only(self): self._create_tracker([fake_db_dev_4]) self.assertEqual(len(self.tracker.pci_devs), 1) free_devs = self.tracker.pci_stats.get_free_devs() self.assertEqual(len(free_devs), 1) self.assertEqual(list(self.tracker.stale), []) self.assertEqual(len(self.tracker.stats.pools), 1) self.assertEqual(self.tracker.node_id, 1) vf = self.tracker.pci_devs[0] self.assertIsNone(vf.parent_device) self.assertEqual([], vf.child_devices) @mock.patch.object(nova.objects.PciDeviceList, 'get_by_compute_node') def test_pcidev_tracker_create_no_nodeid(self, mock_get_cn): self.tracker = manager.PciDevTracker(self.fake_context) self.assertEqual(len(self.tracker.pci_devs), 0) self.assertFalse(mock_get_cn.called) @mock.patch.object(nova.objects.PciDeviceList, 'get_by_compute_node') def test_pcidev_tracker_create_with_nodeid(self, mock_get_cn): self.tracker = manager.PciDevTracker(self.fake_context, node_id=1) mock_get_cn.assert_called_once_with(self.fake_context, 1) @mock.patch('nova.pci.whitelist.Whitelist.device_assignable', return_value=True) def test_update_devices_from_hypervisor_resources(self, 
_mock_dev_assign): fake_pci_devs = [copy.deepcopy(fake_pci), copy.deepcopy(fake_pci_2)] fake_pci_devs_json = jsonutils.dumps(fake_pci_devs) tracker = manager.PciDevTracker(self.fake_context) tracker.update_devices_from_hypervisor_resources(fake_pci_devs_json) self.assertEqual(2, len(tracker.pci_devs)) @mock.patch("nova.pci.manager.LOG.debug") def test_update_devices_from_hypervisor_resources_32bit_domain( self, mock_debug): self.flags( group='pci', passthrough_whitelist=[ '{"product_id":"2032", "vendor_id":"8086"}']) # There are systems where 32 bit PCI domain is used. See bug 1897528 # for example. While nova (and qemu) does not support assigning such # devices but the existence of such device in the system should not # lead to an error. fake_pci = { 'compute_node_id': 1, 'address': '10000:00:02.0', 'product_id': '2032', 'vendor_id': '8086', 'request_id': None, 'status': fields.PciDeviceStatus.AVAILABLE, 'dev_type': fields.PciDeviceType.STANDARD, 'parent_addr': None, 'numa_node': 0} fake_pci_devs = [fake_pci] fake_pci_devs_json = jsonutils.dumps(fake_pci_devs) tracker = manager.PciDevTracker(self.fake_context) # We expect that the device with 32bit PCI domain is ignored tracker.update_devices_from_hypervisor_resources(fake_pci_devs_json) self.assertEqual(0, len(tracker.pci_devs)) mock_debug.assert_called_once_with( 'Skipping PCI device %s reported by the hypervisor: %s', {'address': '10000:00:02.0', 'parent_addr': None}, 'The property domain (10000) is greater than the maximum ' 'allowable value (FFFF).') def test_set_hvdev_new_dev(self): fake_pci_3 = dict(fake_pci, address='0000:00:00.4', vendor_id='v2') fake_pci_devs = [copy.deepcopy(fake_pci), copy.deepcopy(fake_pci_1), copy.deepcopy(fake_pci_2), copy.deepcopy(fake_pci_3)] self.tracker._set_hvdevs(fake_pci_devs) self.assertEqual(len(self.tracker.pci_devs), 4) self.assertEqual(set([dev.address for dev in self.tracker.pci_devs]), set(['0000:00:00.1', '0000:00:00.2', '0000:00:00.3', '0000:00:00.4'])) self.assertEqual(set([dev.vendor_id for dev in self.tracker.pci_devs]), set(['v', 'v1', 'v2'])) def test_set_hvdev_new_dev_tree_maintained(self): # Make sure the device tree is properly maintained when there are new # devices reported by the driver self._create_tracker(fake_db_devs_tree) fake_new_device = dict(fake_pci_5, id=12, address='0000:00:02.3') fake_pci_devs = [copy.deepcopy(fake_pci_3), copy.deepcopy(fake_pci_4), copy.deepcopy(fake_pci_5), copy.deepcopy(fake_new_device)] self.tracker._set_hvdevs(fake_pci_devs) self.assertEqual(len(self.tracker.pci_devs), 4) pf = [dev for dev in self.tracker.pci_devs if dev.dev_type == fields.PciDeviceType.SRIOV_PF].pop() vfs = [dev for dev in self.tracker.pci_devs if dev.dev_type == fields.PciDeviceType.SRIOV_VF] self.assertEqual(3, len(vfs)) # Assert we build the device tree correctly self.assertEqual(vfs, pf.child_devices) for vf in vfs: self.assertEqual(vf.parent_device, pf) def test_set_hvdev_changed(self): fake_pci_v2 = dict(fake_pci, address='0000:00:00.2', vendor_id='v1') fake_pci_devs = [copy.deepcopy(fake_pci), copy.deepcopy(fake_pci_2), copy.deepcopy(fake_pci_v2)] self.tracker._set_hvdevs(fake_pci_devs) self.assertEqual(set([dev.vendor_id for dev in self.tracker.pci_devs]), set(['v', 'v1'])) def test_set_hvdev_remove(self): self.tracker._set_hvdevs([fake_pci]) self.assertEqual( len([dev for dev in self.tracker.pci_devs if dev.status == fields.PciDeviceStatus.REMOVED]), 2) def test_set_hvdev_remove_tree_maintained(self): # Make sure the device tree is properly maintained when there are # 
devices removed from the system (not reported by the driver but known # from previous scans) self._create_tracker(fake_db_devs_tree) fake_pci_devs = [copy.deepcopy(fake_pci_3), copy.deepcopy(fake_pci_4)] self.tracker._set_hvdevs(fake_pci_devs) self.assertEqual( 2, len([dev for dev in self.tracker.pci_devs if dev.status != fields.PciDeviceStatus.REMOVED])) pf = [dev for dev in self.tracker.pci_devs if dev.dev_type == fields.PciDeviceType.SRIOV_PF].pop() vfs = [dev for dev in self.tracker.pci_devs if (dev.dev_type == fields.PciDeviceType.SRIOV_VF and dev.status != fields.PciDeviceStatus.REMOVED)] self.assertEqual(1, len(vfs)) self.assertEqual(vfs, pf.child_devices) self.assertEqual(vfs[0].parent_device, pf) def test_set_hvdev_remove_tree_maintained_with_allocations(self): # Make sure the device tree is properly maintained when there are # devices removed from the system that are allocated to vms. all_devs = fake_db_devs_tree[:] self._create_tracker(all_devs) # we start with 3 devices self.assertEqual( 3, len([dev for dev in self.tracker.pci_devs if dev.status != fields.PciDeviceStatus.REMOVED])) # we then allocate one device pci_requests_obj = self._create_pci_requests_object( [{'count': 1, 'spec': [{'vendor_id': 'v2'}]}]) # NOTE(sean-k-mooney): context, pci request, numa topology claimed_dev = self.tracker.claim_instance( mock.sentinel.context, pci_requests_obj, None)[0] self.tracker._set_hvdevs(all_devs) # and assert that no devices were removed self.assertEqual( 0, len([dev for dev in self.tracker.pci_devs if dev.status == fields.PciDeviceStatus.REMOVED])) # we then try to remove the allocated device from the set reported # by the driver. fake_pci_devs = [dev for dev in all_devs if dev['address'] != claimed_dev.address] with mock.patch("nova.pci.manager.LOG.warning") as log: self.tracker._set_hvdevs(fake_pci_devs) log.assert_called_once() args = log.call_args_list[0][0] # args of first call self.assertIn('Unable to remove device with', args[0]) # and assert no devices are removed from the tracker self.assertEqual( 0, len([dev for dev in self.tracker.pci_devs if dev.status == fields.PciDeviceStatus.REMOVED])) # free the device that was allocated and update tracker again self.tracker._free_device(claimed_dev) self.tracker._set_hvdevs(fake_pci_devs) # and assert that one device is removed from the tracker self.assertEqual( 1, len([dev for dev in self.tracker.pci_devs if dev.status == fields.PciDeviceStatus.REMOVED])) def test_set_hvdev_changed_stal(self): pci_requests_obj = self._create_pci_requests_object( [{'count': 1, 'spec': [{'vendor_id': 'v1'}]}]) self.tracker.claim_instance(mock.sentinel.context, pci_requests_obj, None) fake_pci_3 = dict(fake_pci, address='0000:00:00.2', vendor_id='v2') fake_pci_devs = [copy.deepcopy(fake_pci), copy.deepcopy(fake_pci_2), copy.deepcopy(fake_pci_3)] self.tracker._set_hvdevs(fake_pci_devs) self.assertEqual(len(self.tracker.stale), 1) self.assertEqual(self.tracker.stale['0000:00:00.2']['vendor_id'], 'v2') def test_update_pci_for_instance_active(self): pci_requests_obj = self._create_pci_requests_object(fake_pci_requests) self.tracker.claim_instance(mock.sentinel.context, pci_requests_obj, None) self.assertEqual(len(self.tracker.claims[self.inst['uuid']]), 2) self.tracker.update_pci_for_instance(None, self.inst, sign=1) self.assertEqual(len(self.tracker.allocations[self.inst['uuid']]), 2) free_devs = self.tracker.pci_stats.get_free_devs() self.assertEqual(len(free_devs), 1) self.assertEqual(free_devs[0].vendor_id, 'v') def 
test_update_pci_for_instance_fail(self): pci_requests = copy.deepcopy(fake_pci_requests) pci_requests[0]['count'] = 4 pci_requests_obj = self._create_pci_requests_object(pci_requests) self.tracker.claim_instance(mock.sentinel.context, pci_requests_obj, None) self.assertEqual(len(self.tracker.claims[self.inst['uuid']]), 0) devs = self.tracker.update_pci_for_instance(None, self.inst, sign=1) self.assertEqual(len(self.tracker.allocations[self.inst['uuid']]), 0) self.assertIsNone(devs) def test_pci_claim_instance_with_numa(self): fake_db_dev_3 = dict(fake_db_dev_1, id=4, address='0000:00:00.4') fake_devs_numa = copy.deepcopy(fake_db_devs) fake_devs_numa.append(fake_db_dev_3) self.tracker = manager.PciDevTracker(1) self.tracker._set_hvdevs(fake_devs_numa) pci_requests = copy.deepcopy(fake_pci_requests)[:1] pci_requests[0]['count'] = 2 pci_requests_obj = self._create_pci_requests_object(pci_requests) self.inst.numa_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( id=1, cpuset=set([1, 2]), memory=512)]) self.tracker.claim_instance(mock.sentinel.context, pci_requests_obj, self.inst.numa_topology) free_devs = self.tracker.pci_stats.get_free_devs() self.assertEqual(2, len(free_devs)) self.assertEqual('v1', free_devs[0].vendor_id) self.assertEqual('v1', free_devs[1].vendor_id) def test_pci_claim_instance_with_numa_fail(self): pci_requests_obj = self._create_pci_requests_object(fake_pci_requests) self.inst.numa_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( id=1, cpuset=set([1, 2]), memory=512)]) self.assertIsNone(self.tracker.claim_instance( mock.sentinel.context, pci_requests_obj, self.inst.numa_topology)) def test_update_pci_for_instance_deleted(self): pci_requests_obj = self._create_pci_requests_object(fake_pci_requests) self.tracker.claim_instance(mock.sentinel.context, pci_requests_obj, None) free_devs = self.tracker.pci_stats.get_free_devs() self.assertEqual(len(free_devs), 1) self.inst.vm_state = vm_states.DELETED self.tracker.update_pci_for_instance(None, self.inst, -1) free_devs = self.tracker.pci_stats.get_free_devs() self.assertEqual(len(free_devs), 3) self.assertEqual(set([dev.vendor_id for dev in self.tracker.pci_devs]), set(['v', 'v1'])) @mock.patch.object(objects.PciDevice, 'should_migrate_data', return_value=False) def test_save(self, migrate_mock): self.stub_out( 'nova.db.api.pci_device_update', self._fake_pci_device_update) fake_pci_v3 = dict(fake_pci, address='0000:00:00.2', vendor_id='v3') fake_pci_devs = [copy.deepcopy(fake_pci), copy.deepcopy(fake_pci_2), copy.deepcopy(fake_pci_v3)] self.tracker._set_hvdevs(fake_pci_devs) self.update_called = 0 self.tracker.save(self.fake_context) self.assertEqual(self.update_called, 3) def test_save_removed(self): self.stub_out( 'nova.db.api.pci_device_update', self._fake_pci_device_update) self.stub_out( 'nova.db.api.pci_device_destroy', self._fake_pci_device_destroy) self.destroy_called = 0 self.assertEqual(len(self.tracker.pci_devs), 3) dev = self.tracker.pci_devs[0] self.update_called = 0 dev.remove() self.tracker.save(self.fake_context) self.assertEqual(len(self.tracker.pci_devs), 2) self.assertEqual(self.destroy_called, 1) def test_clean_usage(self): inst_2 = copy.copy(self.inst) inst_2.uuid = uuidsentinel.instance2 migr = {'instance_uuid': 'uuid2', 'vm_state': vm_states.BUILDING} orph = {'uuid': 'uuid3', 'vm_state': vm_states.BUILDING} pci_requests_obj = self._create_pci_requests_object( [{'count': 1, 'spec': [{'vendor_id': 'v'}]}]) self.tracker.claim_instance(mock.sentinel.context, 
pci_requests_obj, None) self.tracker.update_pci_for_instance(None, self.inst, sign=1) pci_requests_obj = self._create_pci_requests_object( [{'count': 1, 'spec': [{'vendor_id': 'v1'}]}], instance_uuid=inst_2.uuid) self.tracker.claim_instance(mock.sentinel.context, pci_requests_obj, None) self.tracker.update_pci_for_instance(None, inst_2, sign=1) free_devs = self.tracker.pci_stats.get_free_devs() self.assertEqual(len(free_devs), 1) self.assertEqual(free_devs[0].vendor_id, 'v') self.tracker.clean_usage([self.inst], [migr], [orph]) free_devs = self.tracker.pci_stats.get_free_devs() self.assertEqual(len(free_devs), 2) self.assertEqual( set([dev.vendor_id for dev in free_devs]), set(['v', 'v1'])) def test_clean_usage_no_request_match_no_claims(self): # Tests the case that there is no match for the request so the # claims mapping is set to None for the instance when the tracker # calls clean_usage. self.tracker.update_pci_for_instance(None, self.inst, sign=1) free_devs = self.tracker.pci_stats.get_free_devs() self.assertEqual(3, len(free_devs)) self.tracker.clean_usage([], [], []) free_devs = self.tracker.pci_stats.get_free_devs() self.assertEqual(3, len(free_devs)) self.assertEqual( set([dev.address for dev in free_devs]), set(['0000:00:00.1', '0000:00:00.2', '0000:00:00.3'])) def test_free_devices(self): pci_requests_obj = self._create_pci_requests_object( [{'count': 1, 'spec': [{'vendor_id': 'v'}]}]) self.tracker.claim_instance(mock.sentinel.context, pci_requests_obj, None) self.tracker.update_pci_for_instance(None, self.inst, sign=1) free_devs = self.tracker.pci_stats.get_free_devs() self.assertEqual(len(free_devs), 2) self.tracker.free_instance(None, self.inst) free_devs = self.tracker.pci_stats.get_free_devs() self.assertEqual(len(free_devs), 3) def test_free_device(self): pci_requests_obj = self._create_pci_requests_object( [{'count': 1, 'spec': [{'vendor_id': 'v'}]}]) self.tracker.claim_instance(mock.sentinel.context, pci_requests_obj, None) self.tracker.update_pci_for_instance(None, self.inst, sign=1) free_pci_device_ids = ( [dev.id for dev in self.tracker.pci_stats.get_free_devs()]) self.assertEqual(2, len(free_pci_device_ids)) allocated_devs = manager.get_instance_pci_devs(self.inst) pci_device = allocated_devs[0] self.assertNotIn(pci_device.id, free_pci_device_ids) instance_uuid = self.inst['uuid'] self.assertIn(pci_device, self.tracker.allocations[instance_uuid]) self.tracker.free_device(pci_device, self.inst) free_pci_device_ids = ( [dev.id for dev in self.tracker.pci_stats.get_free_devs()]) self.assertEqual(3, len(free_pci_device_ids)) self.assertIn(pci_device.id, free_pci_device_ids) self.assertIsNone(self.tracker.allocations.get(instance_uuid)) def test_free_instance_claims(self): # Create an InstancePCIRequest object pci_requests_obj = self._create_pci_requests_object( [{'count': 1, 'spec': [{'vendor_id': 'v'}]}]) # Claim a single PCI device claimed_devs = self.tracker.claim_instance(mock.sentinel.context, pci_requests_obj, None) # Assert we have exactly one claimed device for the given instance. 
claimed_dev = claimed_devs[0] instance_uuid = self.inst['uuid'] self.assertEqual(1, len(self.tracker.claims.get(instance_uuid))) self.assertIn(claimed_dev.id, [pci_dev.id for pci_dev in self.tracker.claims.get(instance_uuid)]) self.assertIsNone(self.tracker.allocations.get(instance_uuid)) # Free instance claims self.tracker.free_instance_claims(mock.sentinel.context, self.inst) # Assert no claims for instance and all PCI devices are free self.assertIsNone(self.tracker.claims.get(instance_uuid)) free_devs = self.tracker.pci_stats.get_free_devs() self.assertEqual(len(fake_db_devs), len(free_devs)) def test_free_instance_allocations(self): # Create an InstancePCIRequest object pci_requests_obj = self._create_pci_requests_object( [{'count': 1, 'spec': [{'vendor_id': 'v'}]}]) # Allocate a single PCI device allocated_devs = self.tracker.claim_instance(mock.sentinel.context, pci_requests_obj, None) self.tracker.allocate_instance(self.inst) # Assert we have exactly one allocated device for the given instance. allocated_dev = allocated_devs[0] instance_uuid = self.inst['uuid'] self.assertIsNone(self.tracker.claims.get(instance_uuid)) self.assertEqual(1, len(self.tracker.allocations.get(instance_uuid))) self.assertIn(allocated_dev.id, [pci_dev.id for pci_dev in self.tracker.allocations.get(instance_uuid)]) # Free instance allocations and assert claims did not change self.tracker.free_instance_allocations(mock.sentinel.context, self.inst) # Assert all PCI devices are free. self.assertIsNone(self.tracker.allocations.get(instance_uuid)) free_devs = self.tracker.pci_stats.get_free_devs() self.assertEqual(len(fake_db_devs), len(free_devs)) class PciGetInstanceDevs(test.NoDBTestCase): def test_get_devs_object(self): def _fake_obj_load_attr(foo, attrname): if attrname == 'pci_devices': self.load_attr_called = True foo.pci_devices = objects.PciDeviceList() self.stub_out( 'nova.objects.Instance.obj_load_attr', _fake_obj_load_attr) self.load_attr_called = False manager.get_instance_pci_devs(objects.Instance()) self.assertTrue(self.load_attr_called) def test_get_devs_no_pci_devices(self): inst = objects.Instance(pci_devices=None) self.assertEqual([], manager.get_instance_pci_devs(inst)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/pci/test_request.py0000664000175000017500000004403100000000000021530 0ustar00zuulzuul00000000000000# Copyright 2013 Intel Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
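# ---------------------------------------------------------------------------
# Editor's note: the sketch below is NOT part of the upstream test module. It
# only illustrates the [pci] alias configuration that the tests in this file
# exercise. The alias JSON keys and the helper names
# (request._get_alias_from_config, request._translate_alias_to_requests) are
# taken from the tests themselves; the wrapper function and the "ExampleNIC"
# alias are hypothetical.
# ---------------------------------------------------------------------------
def _editor_example_alias_translation(test_case):
    """Sketch: turn a [pci]alias config entry into InstancePCIRequests."""
    from nova.pci import request

    # A single alias entry, as it would appear in nova.conf under [pci].
    alias = """{
        "name": "ExampleNIC",
        "capability_type": "pci",
        "product_id": "1111",
        "vendor_id": "8086",
        "device_type": "type-PF"
        }"""
    # self.flags(...) is how the tests below inject configuration.
    test_case.flags(alias=[alias], group='pci')
    # The parsed form maps the alias name to (numa_policy, [spec dicts]).
    parsed = request._get_alias_from_config()
    # "alias_name:count" strings (e.g. from a flavor extra spec) expand into
    # InstancePCIRequest objects.
    requests = request._translate_alias_to_requests("ExampleNIC: 2")
    return parsed, requests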
"""Tests for PCI request.""" import mock from oslo_utils.fixture import uuidsentinel from nova import context from nova import exception from nova.network import model from nova import objects from nova.objects import fields from nova.pci import request from nova import test from nova.tests.unit.api.openstack import fakes _fake_alias1 = """{ "name": "QuicAssist", "capability_type": "pci", "product_id": "4443", "vendor_id": "8086", "device_type": "type-PCI", "numa_policy": "legacy" }""" _fake_alias11 = """{ "name": "QuicAssist", "capability_type": "pci", "product_id": "4444", "vendor_id": "8086", "device_type": "type-PCI" }""" _fake_alias2 = """{ "name": "xxx", "capability_type": "pci", "product_id": "1111", "vendor_id": "1111", "device_type": "N" }""" _fake_alias3 = """{ "name": "IntelNIC", "capability_type": "pci", "product_id": "1111", "vendor_id": "8086", "device_type": "type-PF" }""" _fake_alias4 = """{ "name": " Cirrus Logic ", "capability_type": "pci", "product_id": "0ff2", "vendor_id": "10de", "device_type": "type-PCI" }""" class PciRequestTestCase(test.NoDBTestCase): @staticmethod def _create_fake_inst_with_pci_devs(pci_req_list, pci_dev_list): """Create a fake Instance object with the provided InstancePciRequests and PciDevices. :param pci_req_list: a list of InstancePCIRequest objects. :param pci_dev_list: a list of PciDevice objects, each element associated (via request_id attribute)with a corresponding element from pci_req_list. :return: A fake Instance object associated with the provided PciRequests and PciDevices. """ inst = objects.Instance() inst.uuid = uuidsentinel.instance1 inst.pci_requests = objects.InstancePCIRequests( requests=pci_req_list) inst.pci_devices = objects.PciDeviceList(objects=pci_dev_list) inst.host = 'fake-host' inst.node = 'fake-node' return inst def setUp(self): super(PciRequestTestCase, self).setUp() self.context = context.RequestContext(fakes.FAKE_USER_ID, fakes.FAKE_PROJECT_ID) self.mock_inst_cn = mock.Mock() def test_valid_alias(self): self.flags(alias=[_fake_alias1], group='pci') result = request._get_alias_from_config() expected_result = ( 'legacy', [{ "capability_type": "pci", "product_id": "4443", "vendor_id": "8086", "dev_type": "type-PCI", }]) self.assertEqual(expected_result, result['QuicAssist']) def test_valid_multispec_alias(self): self.flags(alias=[_fake_alias1, _fake_alias11], group='pci') result = request._get_alias_from_config() expected_result = ( 'legacy', [{ "capability_type": "pci", "product_id": "4443", "vendor_id": "8086", "dev_type": "type-PCI" }, { "capability_type": "pci", "product_id": "4444", "vendor_id": "8086", "dev_type": "type-PCI" }]) self.assertEqual(expected_result, result['QuicAssist']) def test_invalid_type_alias(self): self.flags(alias=[_fake_alias2], group='pci') self.assertRaises(exception.PciInvalidAlias, request._get_alias_from_config) def test_invalid_product_id_alias(self): self.flags(alias=[ """{ "name": "xxx", "capability_type": "pci", "product_id": "g111", "vendor_id": "1111", "device_type": "NIC" }"""], group='pci') self.assertRaises(exception.PciInvalidAlias, request._get_alias_from_config) def test_invalid_vendor_id_alias(self): self.flags(alias=[ """{ "name": "xxx", "capability_type": "pci", "product_id": "1111", "vendor_id": "0xg111", "device_type": "NIC" }"""], group='pci') self.assertRaises(exception.PciInvalidAlias, request._get_alias_from_config) def test_invalid_cap_type_alias(self): self.flags(alias=[ """{ "name": "xxx", "capability_type": "usb", "product_id": "1111", "vendor_id": "8086", 
"device_type": "NIC" }"""], group='pci') self.assertRaises(exception.PciInvalidAlias, request._get_alias_from_config) def test_invalid_numa_policy(self): self.flags(alias=[ """{ "name": "xxx", "capability_type": "pci", "product_id": "1111", "vendor_id": "8086", "device_type": "NIC", "numa_policy": "derp" }"""], group='pci') self.assertRaises(exception.PciInvalidAlias, request._get_alias_from_config) def test_valid_numa_policy(self): for policy in fields.PCINUMAAffinityPolicy.ALL: self.flags(alias=[ """{ "name": "xxx", "capability_type": "pci", "product_id": "1111", "vendor_id": "8086", "device_type": "type-PCI", "numa_policy": "%s" }""" % policy], group='pci') aliases = request._get_alias_from_config() self.assertIsNotNone(aliases) self.assertIn("xxx", aliases) self.assertEqual(policy, aliases["xxx"][0]) def test_conflicting_device_type(self): """Check behavior when device_type conflicts occur.""" self.flags(alias=[ """{ "name": "xxx", "capability_type": "pci", "product_id": "1111", "vendor_id": "8086", "device_type": "NIC" }""", """{ "name": "xxx", "capability_type": "pci", "product_id": "1111", "vendor_id": "8086", "device_type": "type-PCI" }"""], group='pci') self.assertRaises( exception.PciInvalidAlias, request._get_alias_from_config) def test_conflicting_numa_policy(self): """Check behavior when numa_policy conflicts occur.""" self.flags(alias=[ """{ "name": "xxx", "capability_type": "pci", "product_id": "1111", "vendor_id": "8086", "numa_policy": "required", }""", """{ "name": "xxx", "capability_type": "pci", "product_id": "1111", "vendor_id": "8086", "numa_policy": "legacy", }"""], group='pci') self.assertRaises( exception.PciInvalidAlias, request._get_alias_from_config) def _verify_result(self, expected, real): exp_real = zip(expected, real) for exp, real in exp_real: self.assertEqual(exp['count'], real.count) self.assertEqual(exp['alias_name'], real.alias_name) self.assertEqual(exp['spec'], real.spec) def test_alias_2_request(self): self.flags(alias=[_fake_alias1, _fake_alias3], group='pci') expect_request = [ {'count': 3, 'requester_id': None, 'spec': [{'vendor_id': '8086', 'product_id': '4443', 'dev_type': 'type-PCI', 'capability_type': 'pci'}], 'alias_name': 'QuicAssist'}, {'count': 1, 'requester_id': None, 'spec': [{'vendor_id': '8086', 'product_id': '1111', 'dev_type': "type-PF", 'capability_type': 'pci'}], 'alias_name': 'IntelNIC'}, ] requests = request._translate_alias_to_requests( "QuicAssist : 3, IntelNIC: 1") self.assertEqual(set([p['count'] for p in requests]), set([1, 3])) self._verify_result(expect_request, requests) def test_alias_2_request_invalid(self): self.flags(alias=[_fake_alias1, _fake_alias3], group='pci') self.assertRaises(exception.PciRequestAliasNotDefined, request._translate_alias_to_requests, "QuicAssistX : 3") def test_alias_2_request_affinity_policy(self): # _fake_alias1 requests the legacy policy and _fake_alias3 # has no numa_policy set so it will default to legacy. self.flags(alias=[_fake_alias1, _fake_alias3], group='pci') # so to test that the flavor/image policy takes precedence # set use the preferred policy. 
policy = fields.PCINUMAAffinityPolicy.PREFERRED expect_request = [ {'count': 3, 'requester_id': None, 'spec': [{'vendor_id': '8086', 'product_id': '4443', 'dev_type': 'type-PCI', 'capability_type': 'pci'}], 'alias_name': 'QuicAssist', 'numa_policy': policy }, {'count': 1, 'requester_id': None, 'spec': [{'vendor_id': '8086', 'product_id': '1111', 'dev_type': "type-PF", 'capability_type': 'pci'}], 'alias_name': 'IntelNIC', 'numa_policy': policy }, ] requests = request._translate_alias_to_requests( "QuicAssist : 3, IntelNIC: 1", affinity_policy=policy) self.assertEqual(set([p['count'] for p in requests]), set([1, 3])) self._verify_result(expect_request, requests) @mock.patch.object(objects.compute_node.ComputeNode, 'get_by_host_and_nodename') def test_get_instance_pci_request_from_vif_invalid( self, cn_get_by_host_and_node): # Basically make sure we raise an exception if an instance # has an allocated PCI device without having the its corresponding # PCIRequest object in instance.pci_requests self.mock_inst_cn.id = 1 cn_get_by_host_and_node.return_value = self.mock_inst_cn # Create a fake instance with PCI request and allocated PCI devices pci_dev1 = objects.PciDevice(request_id=uuidsentinel.pci_req_id1, address='0000:04:00.0', compute_node_id=1) pci_req2 = objects.InstancePCIRequest( request_id=uuidsentinel.pci_req_id2) pci_dev2 = objects.PciDevice(request_id=uuidsentinel.pci_req_id2, address='0000:05:00.0', compute_node_id=1) pci_request_list = [pci_req2] pci_device_list = [pci_dev1, pci_dev2] inst = PciRequestTestCase._create_fake_inst_with_pci_devs( pci_request_list, pci_device_list) # Create a VIF with pci_dev1 that has no corresponding PCI request pci_vif = model.VIF(vnic_type=model.VNIC_TYPE_DIRECT, profile={'pci_slot': '0000:04:00.0'}) self.assertRaises(exception.PciRequestFromVIFNotFound, request.get_instance_pci_request_from_vif, self.context, inst, pci_vif) @mock.patch.object(objects.compute_node.ComputeNode, 'get_by_host_and_nodename') def test_get_instance_pci_request_from_vif(self, cn_get_by_host_and_node): self.mock_inst_cn.id = 1 cn_get_by_host_and_node.return_value = self.mock_inst_cn # Create a fake instance with PCI request and allocated PCI devices pci_req1 = objects.InstancePCIRequest( request_id=uuidsentinel.pci_req_id1) pci_dev1 = objects.PciDevice(request_id=uuidsentinel.pci_req_id1, address='0000:04:00.0', compute_node_id = 1) pci_req2 = objects.InstancePCIRequest( request_id=uuidsentinel.pci_req_id2) pci_dev2 = objects.PciDevice(request_id=uuidsentinel.pci_req_id2, address='0000:05:00.0', compute_node_id=1) pci_request_list = [pci_req1, pci_req2] pci_device_list = [pci_dev1, pci_dev2] inst = PciRequestTestCase._create_fake_inst_with_pci_devs( pci_request_list, pci_device_list) # Create a vif with normal port and make sure no PCI request returned normal_vif = model.VIF(vnic_type=model.VNIC_TYPE_NORMAL) self.assertIsNone(request.get_instance_pci_request_from_vif( self.context, inst, normal_vif)) # Create a vif with PCI address under profile, make sure the correct # PCI request is returned pci_vif = model.VIF(vnic_type=model.VNIC_TYPE_DIRECT, profile={'pci_slot': '0000:05:00.0'}) self.assertEqual(uuidsentinel.pci_req_id2, request.get_instance_pci_request_from_vif( self.context, inst, pci_vif).request_id) # Create a vif with PCI under profile which is not claimed # for the instance, i.e no matching pci device in instance.pci_devices nonclaimed_pci_vif = model.VIF(vnic_type=model.VNIC_TYPE_DIRECT, profile={'pci_slot': '0000:08:00.0'}) 
self.assertIsNone(request.get_instance_pci_request_from_vif( self.context, inst, nonclaimed_pci_vif)) # "Move" the instance to another compute node, make sure that no # matching PCI request against the new compute. self.mock_inst_cn.id = 2 self.assertIsNone(request.get_instance_pci_request_from_vif( self.context, inst, pci_vif)) def test_get_pci_requests_from_flavor(self): self.flags(alias=[_fake_alias1, _fake_alias3], group='pci') expect_request = [ {'count': 3, 'spec': [{'vendor_id': '8086', 'product_id': '4443', 'dev_type': "type-PCI", 'capability_type': 'pci'}], 'alias_name': 'QuicAssist'}, {'count': 1, 'spec': [{'vendor_id': '8086', 'product_id': '1111', 'dev_type': "type-PF", 'capability_type': 'pci'}], 'alias_name': 'IntelNIC'}, ] flavor = {'extra_specs': {"pci_passthrough:alias": "QuicAssist:3, IntelNIC: 1"}} requests = request.get_pci_requests_from_flavor(flavor) self.assertEqual(set([1, 3]), set([p.count for p in requests.requests])) self._verify_result(expect_request, requests.requests) def test_get_pci_requests_from_flavor_including_space(self): self.flags(alias=[_fake_alias3, _fake_alias4], group='pci') expect_request = [ {'count': 4, 'spec': [{'vendor_id': '10de', 'product_id': '0ff2', 'dev_type': "type-PCI", 'capability_type': 'pci'}], 'alias_name': 'Cirrus Logic'}, {'count': 3, 'spec': [{'vendor_id': '8086', 'product_id': '1111', 'dev_type': "type-PF", 'capability_type': 'pci'}], 'alias_name': 'IntelNIC'}, ] flavor = {'extra_specs': {"pci_passthrough:alias": " Cirrus Logic : 4, IntelNIC: 3"}} requests = request.get_pci_requests_from_flavor(flavor) self.assertEqual(set([3, 4]), set([p.count for p in requests.requests])) self._verify_result(expect_request, requests.requests) def test_get_pci_requests_from_flavor_no_extra_spec(self): self.flags(alias=[_fake_alias1, _fake_alias3], group='pci') flavor = {} requests = request.get_pci_requests_from_flavor(flavor) self.assertEqual([], requests.requests) @mock.patch.object( request, "_translate_alias_to_requests", return_value=[]) def test_get_pci_requests_from_flavor_affinity_policy( self, mock_translate): self.flags(alias=[_fake_alias1, _fake_alias3], group='pci') flavor = {'extra_specs': {"pci_passthrough:alias": "QuicAssist:3, IntelNIC: 1"}} policy = fields.PCINUMAAffinityPolicy.PREFERRED request.get_pci_requests_from_flavor(flavor, affinity_policy=policy) mock_translate.assert_called_with(mock.ANY, affinity_policy=policy) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/pci/test_stats.py0000664000175000017500000007112500000000000021202 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
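# ---------------------------------------------------------------------------
# Editor's note: the sketch below is NOT part of the upstream test module. It
# is a minimal illustration of how the PciDeviceStats pools exercised by this
# file are fed and consumed. All calls used here (PciDeviceStats, add_device,
# consume_requests, InstancePCIRequest, InstanceNUMACell,
# PciDevice.create) appear in the tests that follow; the wrapper function is
# hypothetical.
# ---------------------------------------------------------------------------
def _editor_example_consume_with_numa():
    """Sketch: pool a device and consume it with a NUMA-affine request."""
    from nova import objects
    from nova.objects import fields
    from nova.pci import stats

    pci_stats = stats.PciDeviceStats()
    # The tests below install a whitelist fixture (fakes.fake_pci_whitelist)
    # before adding devices; without a matching passthrough_whitelist entry a
    # device is not pooled.
    dev = objects.PciDevice.create(None, {
        'compute_node_id': 1,
        'address': '0000:00:00.1',
        'vendor_id': 'v1',
        'product_id': 'p1',
        'status': 'available',
        'request_id': None,
        'numa_node': 0,
        'dev_type': fields.PciDeviceType.STANDARD,
        'parent_addr': None,
    })
    pci_stats.add_device(dev)

    requests = [objects.InstancePCIRequest(
        count=1, spec=[{'vendor_id': 'v1'}],
        numa_policy=fields.PCINUMAAffinityPolicy.PREFERRED)]
    cells = [objects.InstanceNUMACell(id=0, cpuset=set(), memory=0)]
    # Returns the consumed devices, or None if the request cannot be met.
    return pci_stats.consume_requests(requests, cells)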
import mock
from oslo_config import cfg

from nova import exception
from nova import objects
from nova.objects import fields
from nova.pci import stats
from nova.pci import whitelist
from nova import test
from nova.tests.unit.pci import fakes

CONF = cfg.CONF

fake_pci_1 = {
    'compute_node_id': 1,
    'address': '0000:00:00.1',
    'product_id': 'p1',
    'vendor_id': 'v1',
    'status': 'available',
    'extra_k1': 'v1',
    'request_id': None,
    'numa_node': 0,
    'dev_type': fields.PciDeviceType.STANDARD,
    'parent_addr': None,
    }

fake_pci_2 = dict(fake_pci_1, vendor_id='v2',
                  product_id='p2',
                  address='0000:00:00.2',
                  numa_node=1)

fake_pci_3 = dict(fake_pci_1, address='0000:00:00.3')

fake_pci_4 = dict(fake_pci_1, vendor_id='v3',
                  product_id='p3',
                  address='0000:00:00.3',
                  numa_node=None)

pci_requests = [objects.InstancePCIRequest(count=1,
                    spec=[{'vendor_id': 'v1'}]),
                objects.InstancePCIRequest(count=1,
                    spec=[{'vendor_id': 'v2'}])]

pci_requests_multiple = [objects.InstancePCIRequest(count=1,
                             spec=[{'vendor_id': 'v1'}]),
                         objects.InstancePCIRequest(count=3,
                             spec=[{'vendor_id': 'v2'}])]


class PciDeviceStatsTestCase(test.NoDBTestCase):
    @staticmethod
    def _get_fake_requests(vendor_ids=None, numa_policy=None, count=1):
        if not vendor_ids:
            vendor_ids = ['v1', 'v2']

        specs = [{'vendor_id': vendor_id} for vendor_id in vendor_ids]

        return [objects.InstancePCIRequest(count=count, spec=[spec],
                numa_policy=numa_policy) for spec in specs]

    def _create_fake_devs(self):
        self.fake_dev_1 = objects.PciDevice.create(None, fake_pci_1)
        self.fake_dev_2 = objects.PciDevice.create(None, fake_pci_2)
        self.fake_dev_3 = objects.PciDevice.create(None, fake_pci_3)
        self.fake_dev_4 = objects.PciDevice.create(None, fake_pci_4)

        for dev in [self.fake_dev_1,
                    self.fake_dev_2,
                    self.fake_dev_3,
                    self.fake_dev_4]:
            self.pci_stats.add_device(dev)

    def _add_fake_devs_with_numa(self):
        fake_pci = dict(fake_pci_1, vendor_id='v4', product_id='pr4')
        devs = [dict(fake_pci, product_id='pr0', numa_node=0),
                dict(fake_pci, product_id='pr1', numa_node=1),
                dict(fake_pci, product_id='pr_none', numa_node=None)]
        for dev in devs:
            self.pci_stats.add_device(objects.PciDevice.create(None, dev))

    def setUp(self):
        super(PciDeviceStatsTestCase, self).setUp()
        self.pci_stats = stats.PciDeviceStats()
        # The following two calls need to be made before adding the devices.
patcher = fakes.fake_pci_whitelist() self.addCleanup(patcher.stop) self._create_fake_devs() def test_add_device(self): self.assertEqual(len(self.pci_stats.pools), 3) self.assertEqual(set([d['vendor_id'] for d in self.pci_stats]), set(['v1', 'v2', 'v3'])) self.assertEqual(set([d['count'] for d in self.pci_stats]), set([1, 2])) def test_remove_device(self): self.pci_stats.remove_device(self.fake_dev_2) self.assertEqual(len(self.pci_stats.pools), 2) self.assertEqual(self.pci_stats.pools[0]['count'], 2) self.assertEqual(self.pci_stats.pools[0]['vendor_id'], 'v1') def test_remove_device_exception(self): self.pci_stats.remove_device(self.fake_dev_2) self.assertRaises(exception.PciDevicePoolEmpty, self.pci_stats.remove_device, self.fake_dev_2) def test_pci_stats_equivalent(self): pci_stats2 = stats.PciDeviceStats() for dev in [self.fake_dev_1, self.fake_dev_2, self.fake_dev_3, self.fake_dev_4]: pci_stats2.add_device(dev) self.assertEqual(self.pci_stats, pci_stats2) def test_pci_stats_not_equivalent(self): pci_stats2 = stats.PciDeviceStats() for dev in [self.fake_dev_1, self.fake_dev_2, self.fake_dev_3]: pci_stats2.add_device(dev) self.assertNotEqual(self.pci_stats, pci_stats2) def test_object_create(self): m = self.pci_stats.to_device_pools_obj() new_stats = stats.PciDeviceStats(m) self.assertEqual(len(new_stats.pools), 3) self.assertEqual(set([d['count'] for d in new_stats]), set([1, 2])) self.assertEqual(set([d['vendor_id'] for d in new_stats]), set(['v1', 'v2', 'v3'])) def test_apply_requests(self): self.pci_stats.apply_requests(pci_requests) self.assertEqual(len(self.pci_stats.pools), 2) self.assertEqual(self.pci_stats.pools[0]['vendor_id'], 'v1') self.assertEqual(self.pci_stats.pools[0]['count'], 1) def test_apply_requests_failed(self): self.assertRaises(exception.PciDeviceRequestFailed, self.pci_stats.apply_requests, pci_requests_multiple) def test_support_requests(self): self.assertTrue(self.pci_stats.support_requests(pci_requests)) self.assertEqual(len(self.pci_stats.pools), 3) self.assertEqual(set([d['count'] for d in self.pci_stats]), set((1, 2))) def test_support_requests_failed(self): self.assertFalse( self.pci_stats.support_requests(pci_requests_multiple)) self.assertEqual(len(self.pci_stats.pools), 3) self.assertEqual(set([d['count'] for d in self.pci_stats]), set([1, 2])) def test_support_requests_numa(self): cells = [objects.InstanceNUMACell(id=0, cpuset=set(), memory=0), objects.InstanceNUMACell(id=1, cpuset=set(), memory=0)] self.assertTrue(self.pci_stats.support_requests(pci_requests, cells)) def test_support_requests_numa_failed(self): cells = [objects.InstanceNUMACell(id=0, cpuset=set(), memory=0)] self.assertFalse(self.pci_stats.support_requests(pci_requests, cells)) def test_support_requests_no_numa_info(self): cells = [objects.InstanceNUMACell(id=0, cpuset=set(), memory=0)] pci_requests = self._get_fake_requests(vendor_ids=['v3']) self.assertTrue(self.pci_stats.support_requests(pci_requests, cells)) # 'legacy' is the default numa_policy so the result must be same pci_requests = self._get_fake_requests(vendor_ids=['v3'], numa_policy = fields.PCINUMAAffinityPolicy.LEGACY) self.assertTrue(self.pci_stats.support_requests(pci_requests, cells)) def test_support_requests_numa_pci_numa_policy_preferred(self): # numa node 0 has 2 devices with vendor_id 'v1' # numa node 1 has 1 device with vendor_id 'v2' # we request two devices with vendor_id 'v1' and 'v2'. 
# pci_numa_policy is 'preferred' so we can ignore numa affinity cells = [objects.InstanceNUMACell(id=0, cpuset=set(), memory=0)] pci_requests = self._get_fake_requests( numa_policy=fields.PCINUMAAffinityPolicy.PREFERRED) self.assertTrue(self.pci_stats.support_requests(pci_requests, cells)) def test_support_requests_no_numa_info_pci_numa_policy_required(self): # pci device with vendor_id 'v3' has numa_node=None. # pci_numa_policy is 'required' so we can't use this device cells = [objects.InstanceNUMACell(id=0, cpuset=set(), memory=0)] pci_requests = self._get_fake_requests(vendor_ids=['v3'], numa_policy=fields.PCINUMAAffinityPolicy.REQUIRED) self.assertFalse(self.pci_stats.support_requests(pci_requests, cells)) def test_consume_requests(self): devs = self.pci_stats.consume_requests(pci_requests) self.assertEqual(2, len(devs)) self.assertEqual(set(['v1', 'v2']), set([dev.vendor_id for dev in devs])) def test_consume_requests_empty(self): devs = self.pci_stats.consume_requests([]) self.assertEqual(0, len(devs)) def test_consume_requests_failed(self): self.assertIsNone(self.pci_stats.consume_requests( pci_requests_multiple)) def test_consume_requests_numa(self): cells = [objects.InstanceNUMACell(id=0, cpuset=set(), memory=0), objects.InstanceNUMACell(id=1, cpuset=set(), memory=0)] devs = self.pci_stats.consume_requests(pci_requests, cells) self.assertEqual(2, len(devs)) self.assertEqual(set(['v1', 'v2']), set([dev.vendor_id for dev in devs])) def test_consume_requests_numa_failed(self): cells = [objects.InstanceNUMACell(id=0, cpuset=set(), memory=0)] self.assertIsNone(self.pci_stats.consume_requests(pci_requests, cells)) def test_consume_requests_no_numa_info(self): cells = [objects.InstanceNUMACell(id=0, cpuset=set(), memory=0)] pci_request = [objects.InstancePCIRequest(count=1, spec=[{'vendor_id': 'v3'}])] devs = self.pci_stats.consume_requests(pci_request, cells) self.assertEqual(1, len(devs)) self.assertEqual(set(['v3']), set([dev.vendor_id for dev in devs])) def _test_consume_requests_numa_policy(self, cell_ids, policy, expected, vendor_id='v4', count=1): """Base test for 'consume_requests' function. Create three devices with vendor_id of 'v4': 'pr0' in NUMA node 0, 'pr1' in NUMA node 1, 'pr_none' without NUMA affinity info. Attempt to consume a PCI request with a single device with ``vendor_id`` using the provided ``cell_ids`` and ``policy``. Compare result against ``expected``. """ self._add_fake_devs_with_numa() cells = [objects.InstanceNUMACell(id=id, cpuset=set(), memory=0) for id in cell_ids] pci_requests = self._get_fake_requests(vendor_ids=[vendor_id], numa_policy=policy, count=count) devs = self.pci_stats.consume_requests(pci_requests, cells) if expected is None: self.assertIsNone(devs) else: self.assertEqual(set(expected), set([dev.product_id for dev in devs])) def test_consume_requests_numa_policy_required(self): """Ensure REQUIRED policy will ensure NUMA affinity. Policy is 'required' which means we must use a device with strict NUMA affinity. Request a device from NUMA node 0, which contains such a device, and ensure it's used. """ self._test_consume_requests_numa_policy( [0], fields.PCINUMAAffinityPolicy.REQUIRED, ['pr0']) def test_consume_requests_numa_policy_required_fail(self): """Ensure REQUIRED policy will *only* provide NUMA affinity. Policy is 'required' which means we must use a device with strict NUMA affinity. Request a device from NUMA node 999, which does not contain any suitable devices, and ensure nothing is returned. 
""" self._test_consume_requests_numa_policy( [999], fields.PCINUMAAffinityPolicy.REQUIRED, None) def test_consume_requests_numa_policy_legacy(self): """Ensure LEGACY policy will ensure NUMA affinity if possible. Policy is 'legacy' which means we must use a device with strict NUMA affinity or no provided NUMA affinity. Request a device from NUMA node 0, which contains such a device, and ensure it's used. """ self._test_consume_requests_numa_policy( [0], fields.PCINUMAAffinityPolicy.LEGACY, ['pr0']) def test_consume_requests_numa_policy_legacy_fallback(self): """Ensure LEGACY policy will fallback to no NUMA affinity. Policy is 'legacy' which means we must use a device with strict NUMA affinity or no provided NUMA affinity. Request a device from NUMA node 999, which contains no such device, and ensure we fallback to the device without any NUMA affinity. """ self._test_consume_requests_numa_policy( [999], fields.PCINUMAAffinityPolicy.LEGACY, ['pr_none']) def test_consume_requests_numa_policy_legacy_multiple(self): """Ensure LEGACY policy will use best policy for multiple devices. Policy is 'legacy' which means we must use a device with strict NUMA affinity or no provided NUMA affinity. Request two devices from NUMA node 0, which contains only one such device, and ensure we use that device and the next best thing for the second device. """ self._test_consume_requests_numa_policy( [0], fields.PCINUMAAffinityPolicy.PREFERRED, ['pr0', 'pr_none'], count=2) def test_consume_requests_numa_policy_legacy_fail(self): """Ensure REQUIRED policy will *not* provide NUMA non-affinity. Policy is 'legacy' which means we must use a device with strict NUMA affinity or no provided NUMA affinity. Request a device with ``vendor_id`` of ``v2``, which can only be found in NUMA node 1, from NUMA node 0, and ensure nothing is returned. """ self._test_consume_requests_numa_policy( [0], fields.PCINUMAAffinityPolicy.LEGACY, None, vendor_id='v2') def test_consume_requests_numa_policy_preferred(self): """Ensure PREFERRED policy will ensure NUMA affinity if possible. Policy is 'preferred' which means we must use a device with any level of NUMA affinity. Request a device from NUMA node 0, which contains an affined device, and ensure it's used. """ self._test_consume_requests_numa_policy( [0], fields.PCINUMAAffinityPolicy.PREFERRED, ['pr0']) def test_consume_requests_numa_policy_preferred_fallback_a(self): """Ensure PREFERRED policy will fallback to no NUMA affinity. Policy is 'preferred' which means we must use a device with any level of NUMA affinity. Request a device from NUMA node 999, which contains no such device, and ensure we fallback to the device without any NUMA affinity. """ self._test_consume_requests_numa_policy( [999], fields.PCINUMAAffinityPolicy.PREFERRED, ['pr_none']) def test_consume_requests_numa_policy_preferred_fallback_b(self): """Ensure PREFERRED policy will fallback to different NUMA affinity. Policy is 'preferred' which means we must use a device with any level of NUMA affinity. Request a device with ``vendor_id`` of ``v2``, which can only be found in NUMA node 1, from NUMA node 0, and ensure we fallback to this device. """ self._test_consume_requests_numa_policy( [0], fields.PCINUMAAffinityPolicy.PREFERRED, ['p2'], vendor_id='v2') def test_consume_requests_numa_policy_preferred_multiple_a(self): """Ensure PREFERRED policy will use best policy for multiple devices. Policy is 'preferred' which means we must use a device with any level of NUMA affinity. 
Request two devices from NUMA node 0, which contains only one such device, and ensure we use that device and gracefully degrade for the other device. """ self._test_consume_requests_numa_policy( [0], fields.PCINUMAAffinityPolicy.PREFERRED, ['pr0', 'pr_none'], count=2) def test_consume_requests_numa_policy_preferred_multiple_b(self): """Ensure PREFERRED policy will use best policy for multiple devices. Policy is 'preferred' which means we must use a device with any level of NUMA affinity. Request three devices from NUMA node 0, which contains only one such device, and ensure we use that device and gracefully degrade for the other devices. """ self._test_consume_requests_numa_policy( [0], fields.PCINUMAAffinityPolicy.PREFERRED, ['pr0', 'pr_none', 'pr1'], count=3) @mock.patch( 'nova.pci.whitelist.Whitelist._parse_white_list_from_config') def test_white_list_parsing(self, mock_whitelist_parse): white_list = '{"product_id":"0001", "vendor_id":"8086"}' CONF.set_override('passthrough_whitelist', white_list, 'pci') pci_stats = stats.PciDeviceStats() pci_stats.add_device(self.fake_dev_2) pci_stats.remove_device(self.fake_dev_2) self.assertEqual(1, mock_whitelist_parse.call_count) class PciDeviceStatsWithTagsTestCase(test.NoDBTestCase): def setUp(self): super(PciDeviceStatsWithTagsTestCase, self).setUp() white_list = ['{"vendor_id":"1137","product_id":"0071",' '"address":"*:0a:00.*","physical_network":"physnet1"}', '{"vendor_id":"1137","product_id":"0072"}'] self.flags(passthrough_whitelist=white_list, group='pci') dev_filter = whitelist.Whitelist(white_list) self.pci_stats = stats.PciDeviceStats(dev_filter=dev_filter) def _create_pci_devices(self): self.pci_tagged_devices = [] for dev in range(4): pci_dev = {'compute_node_id': 1, 'address': '0000:0a:00.%d' % dev, 'vendor_id': '1137', 'product_id': '0071', 'status': 'available', 'request_id': None, 'dev_type': 'type-PCI', 'parent_addr': None, 'numa_node': 0} self.pci_tagged_devices.append(objects.PciDevice.create(None, pci_dev)) self.pci_untagged_devices = [] for dev in range(3): pci_dev = {'compute_node_id': 1, 'address': '0000:0b:00.%d' % dev, 'vendor_id': '1137', 'product_id': '0072', 'status': 'available', 'request_id': None, 'dev_type': 'type-PCI', 'parent_addr': None, 'numa_node': 0} self.pci_untagged_devices.append(objects.PciDevice.create(None, pci_dev)) for dev in self.pci_tagged_devices: self.pci_stats.add_device(dev) for dev in self.pci_untagged_devices: self.pci_stats.add_device(dev) def _assertPoolContent(self, pool, vendor_id, product_id, count, **tags): self.assertEqual(vendor_id, pool['vendor_id']) self.assertEqual(product_id, pool['product_id']) self.assertEqual(count, pool['count']) if tags: for k, v in tags.items(): self.assertEqual(v, pool[k]) def _assertPools(self): # Pools are ordered based on the number of keys. 'product_id', # 'vendor_id' are always part of the keys. When tags are present, # they are also part of the keys. 
In this test class, we have # two pools with the second one having the tag 'physical_network' # and the value 'physnet1' self.assertEqual(2, len(self.pci_stats.pools)) self._assertPoolContent(self.pci_stats.pools[0], '1137', '0072', len(self.pci_untagged_devices)) self.assertEqual(self.pci_untagged_devices, self.pci_stats.pools[0]['devices']) self._assertPoolContent(self.pci_stats.pools[1], '1137', '0071', len(self.pci_tagged_devices), physical_network='physnet1') self.assertEqual(self.pci_tagged_devices, self.pci_stats.pools[1]['devices']) def test_add_devices(self): self._create_pci_devices() self._assertPools() def test_consume_requests(self): self._create_pci_devices() pci_requests = [objects.InstancePCIRequest(count=1, spec=[{'physical_network': 'physnet1'}]), objects.InstancePCIRequest(count=1, spec=[{'vendor_id': '1137', 'product_id': '0072'}])] devs = self.pci_stats.consume_requests(pci_requests) self.assertEqual(2, len(devs)) self.assertEqual(set(['0071', '0072']), set([dev.product_id for dev in devs])) self._assertPoolContent(self.pci_stats.pools[0], '1137', '0072', 2) self._assertPoolContent(self.pci_stats.pools[1], '1137', '0071', 3, physical_network='physnet1') def test_add_device_no_devspec(self): self._create_pci_devices() pci_dev = {'compute_node_id': 1, 'address': '0000:0c:00.1', 'vendor_id': '2345', 'product_id': '0172', 'status': 'available', 'parent_addr': None, 'request_id': None} pci_dev_obj = objects.PciDevice.create(None, pci_dev) self.pci_stats.add_device(pci_dev_obj) # There should be no change self.assertIsNone( self.pci_stats._create_pool_keys_from_dev(pci_dev_obj)) self._assertPools() def test_remove_device_no_devspec(self): self._create_pci_devices() pci_dev = {'compute_node_id': 1, 'address': '0000:0c:00.1', 'vendor_id': '2345', 'product_id': '0172', 'status': 'available', 'parent_addr': None, 'request_id': None} pci_dev_obj = objects.PciDevice.create(None, pci_dev) self.pci_stats.remove_device(pci_dev_obj) # There should be no change self.assertIsNone( self.pci_stats._create_pool_keys_from_dev(pci_dev_obj)) self._assertPools() def test_remove_device(self): self._create_pci_devices() dev1 = self.pci_untagged_devices.pop() self.pci_stats.remove_device(dev1) dev2 = self.pci_tagged_devices.pop() self.pci_stats.remove_device(dev2) self._assertPools() def test_update_device(self): # Update device type of one of the device from type-PCI to # type-PF. Verify if the existing pool is updated and a new # pool is created with dev_type type-PF. 
self._create_pci_devices() dev1 = self.pci_tagged_devices.pop() dev1.dev_type = 'type-PF' self.pci_stats.update_device(dev1) self.assertEqual(3, len(self.pci_stats.pools)) self._assertPoolContent(self.pci_stats.pools[0], '1137', '0072', len(self.pci_untagged_devices)) self.assertEqual(self.pci_untagged_devices, self.pci_stats.pools[0]['devices']) self._assertPoolContent(self.pci_stats.pools[1], '1137', '0071', len(self.pci_tagged_devices), physical_network='physnet1') self.assertEqual(self.pci_tagged_devices, self.pci_stats.pools[1]['devices']) self._assertPoolContent(self.pci_stats.pools[2], '1137', '0071', 1, physical_network='physnet1') self.assertEqual(dev1, self.pci_stats.pools[2]['devices'][0]) class PciDeviceVFPFStatsTestCase(test.NoDBTestCase): def setUp(self): super(PciDeviceVFPFStatsTestCase, self).setUp() white_list = ['{"vendor_id":"8086","product_id":"1528"}', '{"vendor_id":"8086","product_id":"1515"}'] self.flags(passthrough_whitelist=white_list, group='pci') self.pci_stats = stats.PciDeviceStats() def _create_pci_devices(self, vf_product_id=1515, pf_product_id=1528): self.sriov_pf_devices = [] for dev in range(2): pci_dev = {'compute_node_id': 1, 'address': '0000:81:00.%d' % dev, 'vendor_id': '8086', 'product_id': '%d' % pf_product_id, 'status': 'available', 'request_id': None, 'dev_type': fields.PciDeviceType.SRIOV_PF, 'parent_addr': None, 'numa_node': 0} dev_obj = objects.PciDevice.create(None, pci_dev) dev_obj.child_devices = [] self.sriov_pf_devices.append(dev_obj) self.sriov_vf_devices = [] for dev in range(8): pci_dev = {'compute_node_id': 1, 'address': '0000:81:10.%d' % dev, 'vendor_id': '8086', 'product_id': '%d' % vf_product_id, 'status': 'available', 'request_id': None, 'dev_type': fields.PciDeviceType.SRIOV_VF, 'parent_addr': '0000:81:00.%d' % int(dev / 4), 'numa_node': 0} dev_obj = objects.PciDevice.create(None, pci_dev) dev_obj.parent_device = self.sriov_pf_devices[int(dev / 4)] dev_obj.parent_device.child_devices.append(dev_obj) self.sriov_vf_devices.append(dev_obj) list(map(self.pci_stats.add_device, self.sriov_pf_devices)) list(map(self.pci_stats.add_device, self.sriov_vf_devices)) def test_consume_VF_requests(self): self._create_pci_devices() pci_requests = [objects.InstancePCIRequest(count=2, spec=[{'product_id': '1515'}])] devs = self.pci_stats.consume_requests(pci_requests) self.assertEqual(2, len(devs)) self.assertEqual(set(['1515']), set([dev.product_id for dev in devs])) free_devs = self.pci_stats.get_free_devs() # Validate that the parents of these VFs has been removed # from pools. for dev in devs: self.assertNotIn(dev.parent_addr, [free_dev.address for free_dev in free_devs]) def test_consume_PF_requests(self): self._create_pci_devices() pci_requests = [objects.InstancePCIRequest(count=2, spec=[{'product_id': '1528', 'dev_type': 'type-PF'}])] devs = self.pci_stats.consume_requests(pci_requests) self.assertEqual(2, len(devs)) self.assertEqual(set(['1528']), set([dev.product_id for dev in devs])) free_devs = self.pci_stats.get_free_devs() # Validate that there are no free devices left, as when allocating # both available PFs, its VFs should not be available. 
self.assertEqual(0, len(free_devs)) def test_consume_VF_and_PF_requests(self): self._create_pci_devices() pci_requests = [objects.InstancePCIRequest(count=2, spec=[{'product_id': '1515'}]), objects.InstancePCIRequest(count=1, spec=[{'product_id': '1528', 'dev_type': 'type-PF'}])] devs = self.pci_stats.consume_requests(pci_requests) self.assertEqual(3, len(devs)) self.assertEqual(set(['1528', '1515']), set([dev.product_id for dev in devs])) def test_consume_VF_and_PF_requests_failed(self): self._create_pci_devices() pci_requests = [objects.InstancePCIRequest(count=5, spec=[{'product_id': '1515'}]), objects.InstancePCIRequest(count=1, spec=[{'product_id': '1528', 'dev_type': 'type-PF'}])] self.assertIsNone(self.pci_stats.consume_requests(pci_requests)) def test_consume_VF_and_PF_same_prodict_id_failed(self): self._create_pci_devices(pf_product_id=1515) pci_requests = [objects.InstancePCIRequest(count=9, spec=[{'product_id': '1515'}])] self.assertIsNone(self.pci_stats.consume_requests(pci_requests)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/pci/test_utils.py0000664000175000017500000003041600000000000021202 0ustar00zuulzuul00000000000000# Copyright (c) 2013 Intel, Inc. # Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
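# ---------------------------------------------------------------------------
# Editor's note: the sketch below is NOT part of the upstream test module. It
# only illustrates the nova.pci.utils helpers that the tests in this file
# cover. The helper names and the address format come from those tests; the
# wrapper function is hypothetical, and the commented-out calls need a real
# host because they read sysfs (the tests mock os/readlink/open instead).
# ---------------------------------------------------------------------------
def _editor_example_pci_address_helpers(pci_address='0000:04:12.6'):
    """Sketch: the address-parsing and matching helpers under test here."""
    from nova.pci import utils

    # A PCI address splits into (domain, bus, slot, function).
    domain, bus, slot, func = utils.parse_address(pci_address)

    # Spec matching compares device properties against a list of candidate
    # specs; per the tests it is case-insensitive.
    matches = utils.pci_device_prop_match(
        {'vendor_id': 'v1', 'device_id': 'd1'},
        [{'vendor_id': 'V1', 'device_id': 'D1'}])

    # Sysfs-backed helpers exercised further down in this file:
    # mac = utils.get_mac_by_pci_address(pci_address)
    # vf_num = utils.get_vf_num_by_pci_address(pci_address)
    # net_name = utils.get_net_name_by_vf_pci_address(pci_address)
    return (domain, bus, slot, func), matches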
import glob import os import fixtures import mock from six.moves import builtins from nova import exception from nova.pci import utils from nova import test class PciDeviceMatchTestCase(test.NoDBTestCase): def setUp(self): super(PciDeviceMatchTestCase, self).setUp() self.fake_pci_1 = {'vendor_id': 'v1', 'device_id': 'd1', 'capabilities_network': ['cap1', 'cap2', 'cap3']} def test_single_spec_match(self): self.assertTrue(utils.pci_device_prop_match( self.fake_pci_1, [{'vendor_id': 'v1', 'device_id': 'd1'}])) self.assertTrue(utils.pci_device_prop_match( self.fake_pci_1, [{'vendor_id': 'V1', 'device_id': 'D1'}])) def test_multiple_spec_match(self): self.assertTrue(utils.pci_device_prop_match( self.fake_pci_1, [{'vendor_id': 'v1', 'device_id': 'd1'}, {'vendor_id': 'v3', 'device_id': 'd3'}])) def test_spec_dismatch(self): self.assertFalse(utils.pci_device_prop_match( self.fake_pci_1, [{'vendor_id': 'v4', 'device_id': 'd4'}, {'vendor_id': 'v3', 'device_id': 'd3'}])) def test_spec_extra_key(self): self.assertFalse(utils.pci_device_prop_match( self.fake_pci_1, [{'vendor_id': 'v1', 'device_id': 'd1', 'wrong_key': 'k1'}])) def test_spec_list(self): self.assertTrue(utils.pci_device_prop_match( self.fake_pci_1, [{'vendor_id': 'v1', 'device_id': 'd1', 'capabilities_network': ['cap1', 'cap2', 'cap3']}])) self.assertTrue(utils.pci_device_prop_match( self.fake_pci_1, [{'vendor_id': 'v1', 'device_id': 'd1', 'capabilities_network': ['cap3', 'cap1']}])) def test_spec_list_no_matching(self): self.assertFalse(utils.pci_device_prop_match( self.fake_pci_1, [{'vendor_id': 'v1', 'device_id': 'd1', 'capabilities_network': ['cap1', 'cap33']}])) def test_spec_list_wrong_type(self): self.assertFalse(utils.pci_device_prop_match( self.fake_pci_1, [{'vendor_id': 'v1', 'device_id': ['d1']}])) class PciDeviceAddressParserTestCase(test.NoDBTestCase): def test_parse_address(self): self.parse_result = utils.parse_address("0000:04:12.6") self.assertEqual(self.parse_result, ('0000', '04', '12', '6')) def test_parse_address_wrong(self): self.assertRaises(exception.PciDeviceWrongAddressFormat, utils.parse_address, "0000:04.12:6") def test_parse_address_invalid_character(self): self.assertRaises(exception.PciDeviceWrongAddressFormat, utils.parse_address, "0000:h4.12:6") class GetFunctionByIfnameTestCase(test.NoDBTestCase): @mock.patch('os.path.isdir', return_value=True) @mock.patch.object(os, 'readlink') def test_virtual_function(self, mock_readlink, *args): mock_readlink.return_value = '../../../0000.00.00.1' with mock.patch.object( builtins, 'open', side_effect=IOError()): address, physical_function = utils.get_function_by_ifname('eth0') self.assertEqual(address, '0000.00.00.1') self.assertFalse(physical_function) @mock.patch('os.path.isdir', return_value=True) @mock.patch.object(os, 'readlink') def test_physical_function(self, mock_readlink, *args): ifname = 'eth0' totalvf_path = "/sys/class/net/%s/device/%s" % (ifname, utils._SRIOV_TOTALVFS) mock_readlink.return_value = '../../../0000:00:00.1' with self.patch_open(totalvf_path, '4') as mock_open: address, physical_function = utils.get_function_by_ifname('eth0') self.assertEqual(address, '0000:00:00.1') self.assertTrue(physical_function) mock_open.assert_called_once_with(totalvf_path) @mock.patch('os.path.isdir', return_value=False) def test_exception(self, *args): address, physical_function = utils.get_function_by_ifname('lo') self.assertIsNone(address) self.assertFalse(physical_function) class IsPhysicalFunctionTestCase(test.NoDBTestCase): def setUp(self): 
super(IsPhysicalFunctionTestCase, self).setUp() self.pci_args = utils.get_pci_address_fields('0000:00:00.1') @mock.patch('os.path.isdir', return_value=True) def test_virtual_function(self, *args): with mock.patch.object( builtins, 'open', side_effect=IOError()): self.assertFalse(utils.is_physical_function(*self.pci_args)) @mock.patch('os.path.isdir', return_value=True) def test_physical_function(self, *args): with mock.patch.object( builtins, 'open', mock.mock_open(read_data='4')): self.assertTrue(utils.is_physical_function(*self.pci_args)) @mock.patch('os.path.isdir', return_value=False) def test_exception(self, *args): self.assertFalse(utils.is_physical_function(*self.pci_args)) class GetIfnameByPciAddressTestCase(test.NoDBTestCase): def setUp(self): super(GetIfnameByPciAddressTestCase, self).setUp() self.pci_address = '0000:00:00.1' @mock.patch.object(os, 'listdir') def test_physical_function_inferface_name(self, mock_listdir): mock_listdir.return_value = ['foo', 'bar'] ifname = utils.get_ifname_by_pci_address( self.pci_address, pf_interface=True) self.assertEqual(ifname, 'bar') @mock.patch.object(os, 'listdir') def test_virtual_function_inferface_name(self, mock_listdir): mock_listdir.return_value = ['foo', 'bar'] ifname = utils.get_ifname_by_pci_address( self.pci_address, pf_interface=False) self.assertEqual(ifname, 'bar') @mock.patch.object(os, 'listdir') def test_exception(self, mock_listdir): mock_listdir.side_effect = OSError('No such file or directory') self.assertRaises( exception.PciDeviceNotFoundById, utils.get_ifname_by_pci_address, self.pci_address ) class GetMacByPciAddressTestCase(test.NoDBTestCase): def setUp(self): super(GetMacByPciAddressTestCase, self).setUp() self.pci_address = '0000:07:00.1' self.if_name = 'enp7s0f1' self.tmpdir = self.useFixture(fixtures.TempDir()) self.fake_file = os.path.join(self.tmpdir.path, "address") with open(self.fake_file, "w") as f: f.write("a0:36:9f:72:00:00\n") @mock.patch.object(os, 'listdir') @mock.patch.object(os.path, 'join') def test_get_mac(self, mock_join, mock_listdir): mock_listdir.return_value = [self.if_name] mock_join.return_value = self.fake_file mac = utils.get_mac_by_pci_address(self.pci_address) mock_join.assert_called_once_with( "/sys/bus/pci/devices/%s/net" % self.pci_address, self.if_name, "address") self.assertEqual("a0:36:9f:72:00:00", mac) @mock.patch.object(os, 'listdir') @mock.patch.object(os.path, 'join') def test_get_mac_fails(self, mock_join, mock_listdir): os.unlink(self.fake_file) mock_listdir.return_value = [self.if_name] mock_join.return_value = self.fake_file self.assertRaises( exception.PciDeviceNotFoundById, utils.get_mac_by_pci_address, self.pci_address) @mock.patch.object(os, 'listdir') @mock.patch.object(os.path, 'join') def test_get_mac_fails_empty(self, mock_join, mock_listdir): with open(self.fake_file, "w") as f: f.truncate(0) mock_listdir.return_value = [self.if_name] mock_join.return_value = self.fake_file self.assertRaises( exception.PciDeviceNotFoundById, utils.get_mac_by_pci_address, self.pci_address) @mock.patch.object(os, 'listdir') @mock.patch.object(os.path, 'join') def test_get_physical_function_mac(self, mock_join, mock_listdir): mock_listdir.return_value = [self.if_name] mock_join.return_value = self.fake_file mac = utils.get_mac_by_pci_address(self.pci_address, pf_interface=True) mock_join.assert_called_once_with( "/sys/bus/pci/devices/%s/physfn/net" % self.pci_address, self.if_name, "address") self.assertEqual("a0:36:9f:72:00:00", mac) class 
GetVfNumByPciAddressTestCase(test.NoDBTestCase): def setUp(self): super(GetVfNumByPciAddressTestCase, self).setUp() self.pci_address = '0000:00:00.1' self.paths = [ '/sys/bus/pci/devices/0000:00:00.1/physfn/virtfn3', ] @mock.patch.object(os, 'readlink') @mock.patch.object(glob, 'iglob') def test_vf_number_found(self, mock_iglob, mock_readlink): mock_iglob.return_value = self.paths mock_readlink.return_value = '../../0000:00:00.1' vf_num = utils.get_vf_num_by_pci_address(self.pci_address) self.assertEqual(vf_num, '3') @mock.patch.object(os, 'readlink') @mock.patch.object(glob, 'iglob') def test_vf_number_not_found(self, mock_iglob, mock_readlink): mock_iglob.return_value = self.paths mock_readlink.return_value = '../../0000:00:00.2' self.assertRaises( exception.PciDeviceNotFoundById, utils.get_vf_num_by_pci_address, self.pci_address ) @mock.patch.object(os, 'readlink') @mock.patch.object(glob, 'iglob') def test_exception(self, mock_iglob, mock_readlink): mock_iglob.return_value = self.paths mock_readlink.side_effect = OSError('No such file or directory') self.assertRaises( exception.PciDeviceNotFoundById, utils.get_vf_num_by_pci_address, self.pci_address ) class GetNetNameByVfPciAddressTestCase(test.NoDBTestCase): def setUp(self): super(GetNetNameByVfPciAddressTestCase, self).setUp() self._get_mac = mock.patch.object(utils, 'get_mac_by_pci_address') self.mock_get_mac = self._get_mac.start() self._get_ifname = mock.patch.object( utils, 'get_ifname_by_pci_address') self.mock_get_ifname = self._get_ifname.start() self.addCleanup(self._get_mac.stop) self.addCleanup(self._get_ifname.stop) self.mac = 'ca:fe:ca:fe:ca:fe' self.if_name = 'enp7s0f0' self.pci_address = '0000:07:02.1' def test_correct_behaviour(self): ref_net_name = 'net_enp7s0f0_ca_fe_ca_fe_ca_fe' self.mock_get_mac.return_value = self.mac self.mock_get_ifname.return_value = self.if_name net_name = utils.get_net_name_by_vf_pci_address(self.pci_address) self.assertEqual(ref_net_name, net_name) self.mock_get_mac.assert_called_once_with(self.pci_address) self.mock_get_ifname.assert_called_once_with(self.pci_address) def test_wrong_mac(self): self.mock_get_mac.side_effect = ( exception.PciDeviceNotFoundById(self.pci_address)) net_name = utils.get_net_name_by_vf_pci_address(self.pci_address) self.assertIsNone(net_name) self.mock_get_mac.assert_called_once_with(self.pci_address) self.mock_get_ifname.assert_not_called() def test_wrong_ifname(self): self.mock_get_mac.return_value = self.mac self.mock_get_ifname.side_effect = ( exception.PciDeviceNotFoundById(self.pci_address)) net_name = utils.get_net_name_by_vf_pci_address(self.pci_address) self.assertIsNone(net_name) self.mock_get_mac.assert_called_once_with(self.pci_address) self.mock_get_ifname.assert_called_once_with(self.pci_address) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/pci/test_whitelist.py0000664000175000017500000000620400000000000022054 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from nova.pci import whitelist from nova import test dev_dict = { 'compute_node_id': 1, 'address': '0000:00:0a.1', 'product_id': '0001', 'vendor_id': '8086', 'status': 'available', 'phys_function': '0000:00:0a.0', } class WhitelistTestCase(test.NoDBTestCase): def test_whitelist(self): white_list = '{"product_id":"0001", "vendor_id":"8086"}' parsed = whitelist.Whitelist([white_list]) self.assertEqual(1, len(parsed.specs)) def test_whitelist_list_format(self): white_list = '[{"product_id":"0001", "vendor_id":"8086"},'\ '{"product_id":"0002", "vendor_id":"8086"}]' parsed = whitelist.Whitelist([white_list]) self.assertEqual(2, len(parsed.specs)) def test_whitelist_empty(self): parsed = whitelist.Whitelist() self.assertFalse(parsed.device_assignable(dev_dict)) def test_whitelist_multiple(self): wl1 = '{"product_id":"0001", "vendor_id":"8086"}' wl2 = '{"product_id":"0002", "vendor_id":"8087"}' parsed = whitelist.Whitelist([wl1, wl2]) self.assertEqual(2, len(parsed.specs)) def test_device_assignable_glob(self): white_list = '{"address":"*:00:0a.*", "physical_network":"hr_net"}' parsed = whitelist.Whitelist( [white_list]) self.assertTrue(parsed.device_assignable(dev_dict)) def test_device_not_assignable_glob(self): white_list = '{"address":"*:00:0b.*", "physical_network":"hr_net"}' parsed = whitelist.Whitelist( [white_list]) self.assertFalse(parsed.device_assignable(dev_dict)) def test_device_assignable_regex(self): white_list = ('{"address":{"domain": ".*", "bus": ".*", ' '"slot": "0a", "function": "[0-1]"}, ' '"physical_network":"hr_net"}') parsed = whitelist.Whitelist( [white_list]) self.assertTrue(parsed.device_assignable(dev_dict)) def test_device_not_assignable_regex(self): white_list = ('{"address":{"domain": ".*", "bus": ".*", ' '"slot": "0a", "function": "[2-3]"}, ' '"physical_network":"hr_net"}') parsed = whitelist.Whitelist( [white_list]) self.assertFalse(parsed.device_assignable(dev_dict)) def test_device_assignable(self): white_list = '{"product_id":"0001", "vendor_id":"8086"}' parsed = whitelist.Whitelist([white_list]) self.assertTrue(parsed.device_assignable(dev_dict)) ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.566468 nova-21.2.4/nova/tests/unit/policies/0000775000175000017500000000000000000000000017461 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/policies/__init__.py0000664000175000017500000000000000000000000021560 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/base.py0000664000175000017500000002232500000000000020751 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import copy from oslo_log import log as logging from oslo_utils.fixture import uuidsentinel as uuids from nova import context as nova_context from nova import exception from nova import test from nova.tests.unit import policy_fixture LOG = logging.getLogger(__name__) class BasePolicyTest(test.TestCase): # NOTE(gmann): Set this flag to True if you would like to tests the # new behaviour of policy without deprecated rules. # This means you can simulate the phase when policies completely # switch to new behaviour by removing the support of old rules. without_deprecated_rules = False # Add rules here other than base rules which need to override # to remove the deprecated rules. # For Example: # rules_without_deprecation{ # "os_compute_api:os-deferred-delete:restore": # "rule:system_admin_or_owner"} rules_without_deprecation = {} def setUp(self): super(BasePolicyTest, self).setUp() self.policy = self.useFixture(policy_fixture.RealPolicyFixture()) self.admin_project_id = uuids.admin_project_id self.project_id = uuids.project_id self.project_id_other = uuids.project_id_other # all context are with implied roles. self.legacy_admin_context = nova_context.RequestContext( user_id="legacy_admin", project_id=self.admin_project_id, roles=['admin', 'member', 'reader']) # system scoped users self.system_admin_context = nova_context.RequestContext( user_id="admin", roles=['admin', 'member', 'reader'], system_scope='all') self.system_member_context = nova_context.RequestContext( user_id="member", roles=['member', 'reader'], system_scope='all') self.system_reader_context = nova_context.RequestContext( user_id="reader", roles=['reader'], system_scope='all') self.system_foo_context = nova_context.RequestContext( user_id="foo", roles=['foo'], system_scope='all') # project scoped users self.project_admin_context = nova_context.RequestContext( user_id="project_admin", project_id=self.project_id, roles=['admin', 'member', 'reader']) self.project_member_context = nova_context.RequestContext( user_id="project_member", project_id=self.project_id, roles=['member', 'reader']) self.project_reader_context = nova_context.RequestContext( user_id="project_reader", project_id=self.project_id, roles=['reader']) self.project_foo_context = nova_context.RequestContext( user_id="project_foo", project_id=self.project_id, roles=['foo']) self.other_project_member_context = nova_context.RequestContext( user_id="other_project_member", project_id=self.project_id_other, roles=['member', 'reader']) self.other_project_reader_context = nova_context.RequestContext( user_id="other_project_member", project_id=self.project_id_other, roles=['reader']) self.all_contexts = [ self.legacy_admin_context, self.system_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.other_project_member_context, self.project_foo_context, self.other_project_reader_context ] if self.without_deprecated_rules: # To simulate the new world, remove deprecations by overriding # rules which has the deprecated rules. 
            self.rules_without_deprecation.update({
                "system_admin_or_owner":
                    "rule:system_admin_api or rule:project_member_api",
                "system_or_project_reader":
                    "rule:system_reader_api or rule:project_reader_api",
                "system_admin_api": "role:admin and system_scope:all",
                "system_reader_api": "role:reader and system_scope:all",
                "project_member_api":
                    "role:member and project_id:%(project_id)s",
            })
            self.policy.set_rules(self.rules_without_deprecation,
                                  overwrite=False)

    def common_policy_check(self, authorized_contexts,
                            unauthorized_contexts, rule_name,
                            func, req, *arg, **kwarg):

        # NOTE(brinzhang): When fatal=False is passed as a parameter
        # in context.can(), we cannot get the desired ensure_raises().
        # In that case, we can call ensure_return() to assert the func's
        # response and ensure that the changes are right.
        fatal = kwarg.pop('fatal', True)
        authorized_response = []
        unauthorize_response = []

        # TODO(gmann): we need to add the new context
        # self.other_project_reader_context in all tests and then remove
        # this conditional adjustment.
        test_context = authorized_contexts + unauthorized_contexts
        test_context_len = len(test_context)
        if self.other_project_reader_context not in test_context:
            test_context_len += 1
        self.assertEqual(len(self.all_contexts), test_context_len,
                         "Expected testing contexts mismatch. Check that "
                         "all contexts mentioned in self.all_contexts are "
                         "tested.")

        def ensure_return(req, *args, **kwargs):
            return func(req, *arg, **kwargs)

        def ensure_raises(req, *args, **kwargs):
            exc = self.assertRaises(
                exception.PolicyNotAuthorized, func, req, *arg, **kwarg)
            # NOTE(gmann): In the case of multi-policy APIs, the
            # PolicyNotAuthorized exception can be raised from either of
            # the policies, so checking the error message, which includes
            # the rule name, can mismatch. Tests verifying multiple
            # policies can pass rule_name as None to skip the error
            # message assert.
            if rule_name is not None:
                self.assertEqual(
                    "Policy doesn't allow %s to be performed." % rule_name,
                    exc.format_message())

        # Verify that all the contexts having the allowed scope and roles
        # pass the policy check.
        for context in authorized_contexts:
            LOG.info("Testing authorized context: %s", context)
            req.environ['nova.context'] = context
            args1 = copy.deepcopy(arg)
            kwargs1 = copy.deepcopy(kwarg)
            if not fatal:
                authorized_response.append(
                    ensure_return(req, *args1, **kwargs1))
            else:
                func(req, *args1, **kwargs1)

        # Verify that all the contexts not having the allowed scope or
        # roles fail the policy check.
        for context in unauthorized_contexts:
            LOG.info("Testing unauthorized context: %s", context)
            req.environ['nova.context'] = context
            args1 = copy.deepcopy(arg)
            kwargs1 = copy.deepcopy(kwarg)
            if not fatal:
                try:
                    unauthorize_response.append(
                        ensure_return(req, *args1, **kwargs1))
                    # NOTE(gmann): We need to ignore the PolicyNotAuthorized
                    # exception here so that we can add the correct response
                    # in unauthorize_response for the case of fatal=False.
                    # This handles the case of multi-policy checks where
                    # tests verify the second policy via the response of
                    # fatal=False and ignore the response checks where the
                    # first policy itself fails to pass (even if a test
                    # overrides the first policy to allow everyone, scope
                    # checks can still lead to a PolicyNotAuthorized error).
                    # For example: the flavor extra specs policy for the GET
                    # flavor API. In that case, the flavor extra specs policy
                    # is checked after the GET flavor policy, so any context
                    # failing on GET flavor will raise PolicyNotAuthorized,
                    # and for that case we have no way to verify the flavor
                    # extra specs, so we skip that context in the test.
except exception.PolicyNotAuthorized: continue else: ensure_raises(req, *args1, **kwargs1) return authorized_response, unauthorize_response ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_admin_actions.py0000664000175000017500000001275500000000000023714 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.api.openstack.compute import admin_actions from nova.compute import vm_states from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class AdminActionsPolicyTest(base.BasePolicyTest): """Test Admin Actions APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(AdminActionsPolicyTest, self).setUp() self.controller = admin_actions.AdminActionsController() self.req = fakes.HTTPRequest.blank('') self.mock_get = self.useFixture( fixtures.MockPatch('nova.compute.api.API.get')).mock uuid = uuids.fake_id self.instance = fake_instance.fake_instance_obj( self.project_member_context, id=1, uuid=uuid, vm_state=vm_states.ACTIVE, task_state=None, launched_at=timeutils.utcnow()) self.mock_get.return_value = self.instance # Check that admin is able to change the service self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context] # Check that non-admin is not able to change the service self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] @mock.patch('nova.objects.Instance.save') def test_reset_state_policy(self, mock_save): rule_name = "os_compute_api:os-admin-actions:reset_state" self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller._reset_state, self.req, self.instance.uuid, body={'os-resetState': {'state': 'active'}}) def test_inject_network_info_policy(self): rule_name = "os_compute_api:os-admin-actions:inject_network_info" with mock.patch.object(self.controller.compute_api, "inject_network_info"): self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller._inject_network_info, self.req, self.instance.uuid, body={}) def test_reset_network_policy(self): rule_name = "os_compute_api:os-admin-actions:reset_network" with mock.patch.object(self.controller.compute_api, "reset_network"): self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller._reset_network, 
self.req, self.instance.uuid, body={}) class AdminActionsScopeTypePolicyTest(AdminActionsPolicyTest): """Test Admin Actions APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scopped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(AdminActionsScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class AdminActionsNoLegacyPolicyTest(AdminActionsScopeTypePolicyTest): """Test Admin Actions APIs policies with system scope enabled, and no more deprecated rules. """ without_deprecated_rules = True def setUp(self): super(AdminActionsScopeTypePolicyTest, self).setUp() # Check that system admin is able to perform the system level actions # on server. self.admin_authorized_contexts = [ self.system_admin_context] # Check that non-system or non-admin is not able to perform the system # level actions on server. self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_admin_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_admin_password.py0000664000175000017500000001403300000000000024105 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.api.openstack.compute import admin_password from nova.compute import vm_states from nova import exception from nova.policies import admin_password as ap_policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class AdminPasswordPolicyTest(base.BasePolicyTest): """Test Admin Password APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. 
""" def setUp(self): super(AdminPasswordPolicyTest, self).setUp() self.controller = admin_password.AdminPasswordController() self.req = fakes.HTTPRequest.blank('') user_id = self.req.environ['nova.context'].user_id self.rule_name = ap_policies.BASE_POLICY_NAME self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock uuid = uuids.fake_id self.instance = fake_instance.fake_instance_obj( self.project_member_context, id=1, uuid=uuid, project_id=self.project_id, user_id=user_id, vm_state=vm_states.ACTIVE, task_state=None, launched_at=timeutils.utcnow()) self.mock_get.return_value = self.instance # Check that admin or and server owner is able to change the password self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context] # Check that non-admin is not able to change the password self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context ] @mock.patch('nova.compute.api.API.set_admin_password') def test_change_paassword_policy(self, mock_password): self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, self.rule_name, self.controller.change_password, self.req, self.instance.uuid, body={'changePassword': { 'adminPass': '1234pass'}}) def test_change_password_overridden_policy_failed_with_other_user(self): # Change the user_id in request context. req = fakes.HTTPRequest.blank('') req.environ['nova.context'].user_id = 'other-user' body = {'changePassword': {'adminPass': '1234pass'}} self.policy.set_rules({self.rule_name: "user_id:%(user_id)s"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.change_password, req, fakes.FAKE_UUID, body=body) self.assertEqual( "Policy doesn't allow %s to be performed." % self.rule_name, exc.format_message()) @mock.patch('nova.compute.api.API.set_admin_password') def test_change_password_overridden_policy_pass_with_same_user( self, password_mock): self.policy.set_rules({self.rule_name: "user_id:%(user_id)s"}) body = {'changePassword': {'adminPass': '1234pass'}} self.controller.change_password(self.req, fakes.FAKE_UUID, body=body) password_mock.assert_called_once_with(self.req.environ['nova.context'], mock.ANY, '1234pass') class AdminPasswordScopeTypePolicyTest(AdminPasswordPolicyTest): """Test Admin Password APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(AdminPasswordScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class AdminPasswordNoLegacyPolicyTest(AdminPasswordPolicyTest): """Test Admin Password APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system_admin_or_owner APIs. """ without_deprecated_rules = True def setUp(self): super(AdminPasswordNoLegacyPolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system or projct admin or owner is able to change # the password. 
self.admin_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context] # Check that non-system and non-admin/owner is not able to change the # password. self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.project_reader_context, self.project_foo_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_agents.py0000664000175000017500000002264300000000000022362 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.api.openstack.compute import agents from nova.db.sqlalchemy import models from nova import exception from nova.policies import base as base_policy from nova.tests.unit.api.openstack import fakes from nova.tests.unit.policies import base from nova.tests.unit import policy_fixture class AgentsPolicyTest(base.BasePolicyTest): """Test os-agents APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(AgentsPolicyTest, self).setUp() self.controller = agents.AgentController() self.req = fakes.HTTPRequest.blank('') # Check that admin is able to perform the CRUD operation # on agents. self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context] # Check that non-admin is not able to perform the CRUD operation # on agents. self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] # Check that system scoped admin, member and reader are able to # read the agent data. # NOTE(gmann): Until old default rule which is admin_api is # deprecated and not removed, project admin and legacy admin # will be able to read the agent data. This make sure that existing # tokens will keep working even we have changed this policy defaults # to reader role. 
self.reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context, self.legacy_admin_context, self.project_admin_context] # Check that non-system-reader are not able to read the agent # data self.reader_unauthorized_contexts = [ self.system_foo_context, self.other_project_member_context, self.project_foo_context, self.project_member_context, self.project_reader_context] @mock.patch('nova.db.api.agent_build_destroy') def test_delete_agent_policy(self, mock_delete): rule_name = "os_compute_api:os-agents:delete" self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.delete, self.req, 1) @mock.patch('nova.db.api.agent_build_get_all') def test_index_agents_policy(self, mock_get): rule_name = "os_compute_api:os-agents:list" self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.index, self.req) @mock.patch('nova.db.api.agent_build_update') def test_update_agent_policy(self, mock_update): rule_name = "os_compute_api:os-agents:update" body = {'para': {'version': '7.0', 'url': 'http://example.com/path/to/resource', 'md5hash': 'add6bb58e139be103324d04d82d8f545'}} self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.update, self.req, 1, body=body) def test_create_agent_policy(self): rule_name = "os_compute_api:os-agents:create" body = {'agent': {'hypervisor': 'kvm', 'os': 'win', 'architecture': 'x86', 'version': '7.0', 'url': 'http://example.com/path/to/resource', 'md5hash': 'add6bb58e139be103324d04d82d8f545'}} def fake_agent_build_create(context, values): values['id'] = 1 agent_build_ref = models.AgentBuild() agent_build_ref.update(values) return agent_build_ref self.stub_out("nova.db.api.agent_build_create", fake_agent_build_create) self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.create, self.req, body=body) class AgentsScopeTypePolicyTest(AgentsPolicyTest): """Test os-agents APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(AgentsScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system admin is able to perform the CRUD operation # on agents. self.admin_authorized_contexts = [ self.system_admin_context] # Check that non-system or non-admin is not able to perform the CRUD # operation on agents. 
self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.project_admin_context, self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] # Check that system admin, member and reader are able to read the # agent data self.reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context] # Check that non-system or non-reader are not able to read the agent # data self.reader_unauthorized_contexts = [ self.system_foo_context, self.legacy_admin_context, self.project_admin_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] class AgentsDeprecatedPolicyTest(base.BasePolicyTest): """Test os-agents APIs Deprecated policies. This class checks if deprecated policy rules are overridden by user on policy.json file then they still work because oslo.policy add deprecated rules in logical OR condition and enforce them for policy checks if overridden. """ def setUp(self): super(AgentsDeprecatedPolicyTest, self).setUp() self.controller = agents.AgentController() self.admin_req = fakes.HTTPRequest.blank('') self.admin_req.environ['nova.context'] = self.project_admin_context self.reader_req = fakes.HTTPRequest.blank('') self.reader_req.environ['nova.context'] = self.project_reader_context self.deprecated_policy = "os_compute_api:os-agents" # Overridde rule with different checks than defaults so that we can # verify the rule overridden case. override_rules = {self.deprecated_policy: base_policy.RULE_ADMIN_API} # NOTE(gmann): Only override the deprecated rule in policy file so # that we can verify if overridden checks are considered by # oslo.policy. Oslo.policy will consider the overridden rules if: # 1. overridden deprecated rule's checks are different than defaults # 2. new rules are not present in policy file self.policy = self.useFixture(policy_fixture.OverridePolicyFixture( rules_in_file=override_rules)) def test_deprecated_policy_overridden_rule_is_checked(self): # Test to verify if deprecatd overridden policy is working. # check for success as admin role. Deprecated rule # has been overridden with admin checks in policy.json # If admin role pass it means overridden rule is enforced by # olso.policy because new default is system reader and the old # default is admin. with mock.patch('nova.db.api.agent_build_get_all'): self.controller.index(self.admin_req) # check for failure with reader context. exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller.index, self.reader_req) self.assertEqual( "Policy doesn't allow os_compute_api:os-agents:list to be" " performed.", exc.format_message()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_aggregates.py0000664000175000017500000002234700000000000023213 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils.fixture import uuidsentinel as uuids from nova.api.openstack.compute import aggregates from nova import objects from nova.tests.unit.api.openstack import fakes from nova.tests.unit.policies import base class AggregatesPolicyTest(base.BasePolicyTest): """Test Aggregates APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(AggregatesPolicyTest, self).setUp() self.controller = aggregates.AggregateController() self.req = fakes.HTTPRequest.blank('') # Check that admin is able to perform Aggregate Operations self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context] # Check that non-admin is not able to perform Aggregate Operations self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] # Check that system reader is able to get Aggregate self.system_reader_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context] # Check that non-admin is not able to get Aggregate self.system_reader_unauthorized_contexts = [ self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] @mock.patch('nova.compute.api.AggregateAPI.get_aggregate_list') def test_list_aggregate_policy(self, mock_list): rule_name = "os_compute_api:os-aggregates:index" self.common_policy_check(self.system_reader_authorized_contexts, self.system_reader_unauthorized_contexts, rule_name, self.controller.index, self.req) @mock.patch('nova.compute.api.AggregateAPI.create_aggregate') def test_create_aggregate_policy(self, mock_create): rule_name = "os_compute_api:os-aggregates:create" mock_create.return_value = objects.Aggregate(**{"name": "aggregate1", "id": "1", "metadata": {'availability_zone': 'nova1'}, "hosts": ["host1", "host2"]}) body = {"aggregate": {"name": "test", "availability_zone": "nova1"}} self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.create, self.req, body=body) @mock.patch('nova.compute.api.AggregateAPI.update_aggregate') def test_update_aggregate_policy(self, mock_update): rule_name = "os_compute_api:os-aggregates:update" self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.update, self.req, 1, body={"aggregate": {"name": "new_name"}}) @mock.patch('nova.compute.api.AggregateAPI.delete_aggregate') def test_delete_aggregate_policy(self, mock_delete): rule_name = "os_compute_api:os-aggregates:delete" self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.delete, self.req, 1) @mock.patch('nova.compute.api.AggregateAPI.get_aggregate') def test_show_aggregate_policy(self, mock_show): rule_name = "os_compute_api:os-aggregates:show" self.common_policy_check(self.system_reader_authorized_contexts, self.system_reader_unauthorized_contexts, rule_name, 
self.controller.show, self.req, 1) @mock.patch('nova.compute.api.AggregateAPI.update_aggregate_metadata') def test_set_metadata_aggregate_policy(self, mock_metadata): rule_name = "os_compute_api:os-aggregates:set_metadata" body = {"set_metadata": {"metadata": {"foo": "bar"}}} self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller._set_metadata, self.req, 1, body=body) @mock.patch('nova.compute.api.AggregateAPI.add_host_to_aggregate') def test_add_host_aggregate_policy(self, mock_add): rule_name = "os_compute_api:os-aggregates:add_host" self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller._add_host, self.req, 1, body={"add_host": {"host": "host1"}}) @mock.patch('nova.compute.api.AggregateAPI.remove_host_from_aggregate') def test_remove_host_aggregate_policy(self, mock_remove): rule_name = "os_compute_api:os-aggregates:remove_host" self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller._remove_host, self.req, 1, body={"remove_host": {"host": "host1"}}) @mock.patch('nova.compute.api.AggregateAPI.get_aggregate') def test_images_aggregate_policy(self, mock_get): rule_name = "compute:aggregates:images" mock_get.return_value = {"name": "aggregate1", "id": "1", "hosts": ["host1", "host2"]} body = {'cache': [{'id': uuids.fake_id}]} req = fakes.HTTPRequest.blank('', version='2.81') with mock.patch('nova.conductor.api.ComputeTaskAPI.cache_images'): self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.images, req, 1, body=body) class AggregatesScopeTypePolicyTest(AggregatesPolicyTest): """Test Aggregates APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(AggregatesScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system admin is able to perform Aggregate Operations. self.admin_authorized_contexts = [ self.system_admin_context] # Check that non-system or non-admin is not able to perform # Aggregate Operations. 
self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_admin_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] # Check that system reader is able to get Aggregate self.system_reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context] # Check that non-admin is not able to get Aggregate self.system_reader_unauthorized_contexts = [ self.legacy_admin_context, self.project_admin_context, self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_assisted_volume_snapshots.py0000664000175000017500000001105500000000000026404 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids import urllib from nova.api.openstack.compute import assisted_volume_snapshots as snapshots from nova.tests.unit.api.openstack import fakes from nova.tests.unit.policies import base class AssistedVolumeSnapshotPolicyTest(base.BasePolicyTest): """Test Assisted Volume Snapshots APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(AssistedVolumeSnapshotPolicyTest, self).setUp() self.controller = snapshots.AssistedVolumeSnapshotsController() self.req = fakes.HTTPRequest.blank('') # Check that admin is able to take volume snapshot. self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context] # Check that non-admin is not able to take volume snapshot. 
self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] @mock.patch('nova.compute.api.API.volume_snapshot_create') def test_assisted_create_policy(self, mock_create): rule_name = "os_compute_api:os-assisted-volume-snapshots:create" body = {'snapshot': {'volume_id': uuids.fake_id, 'create_info': {'type': 'qcow2', 'new_file': 'new_file', 'snapshot_id': 'snapshot_id'}}} self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.create, self.req, body=body) @mock.patch('nova.compute.api.API.volume_snapshot_delete') def test_assisted_delete_policy(self, mock_delete): rule_name = "os_compute_api:os-assisted-volume-snapshots:delete" params = { 'delete_info': jsonutils.dumps({'volume_id': '1'}), } req = fakes.HTTPRequest.blank('?%s' % urllib.parse.urlencode(params)) self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.delete, req, 1) class AssistedSnapshotScopeTypePolicyTest(AssistedVolumeSnapshotPolicyTest): """Test Assisted Volume Snapshots APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scopped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(AssistedSnapshotScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system admin is able to take volume snapshot. self.admin_authorized_contexts = [ self.system_admin_context] # Check that non-system or non-admin is not able to take volume # snapshot. self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_admin_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_attach_interfaces.py0000664000175000017500000002557600000000000024560 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.api.openstack.compute import attach_interfaces from nova.compute import vm_states from nova import exception from nova.policies import attach_interfaces as ai_policies from nova.policies import base as base_policy from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base from nova.tests.unit import policy_fixture class AttachInterfacesPolicyTest(base.BasePolicyTest): """Test Attach Interfaces APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(AttachInterfacesPolicyTest, self).setUp() self.controller = attach_interfaces.InterfaceAttachmentController() self.req = fakes.HTTPRequest.blank('') self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock uuid = uuids.fake_id self.instance = fake_instance.fake_instance_obj( self.project_member_context, id=1, uuid=uuid, project_id=self.project_id, vm_state=vm_states.ACTIVE, task_state=None, launched_at=timeutils.utcnow()) self.mock_get.return_value = self.instance self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_foo_context, self.project_reader_context, self.project_member_context ] self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context ] self.reader_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context, self.project_reader_context, self.project_member_context, self.project_foo_context ] self.reader_unauthorized_contexts = [ self.system_foo_context, self.other_project_member_context ] @mock.patch('nova.compute.api.API.get') @mock.patch('nova.network.neutron.API.list_ports') def test_index_interfaces_policy(self, mock_port, mock_get): rule_name = "os_compute_api:os-attach-interfaces:list" self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.index, self.req, uuids.fake_id) @mock.patch('nova.compute.api.API.get') @mock.patch('nova.network.neutron.API.show_port') def test_show_interface_policy(self, mock_port, mock_get): rule_name = "os_compute_api:os-attach-interfaces:show" server_id = uuids.fake_id port_id = uuids.fake_id mock_port.return_value = {'port': { "id": port_id, "network_id": uuids.fake_id, "admin_state_up": True, "status": "ACTIVE", "mac_address": "bb:bb:bb:bb:bb:bb", "fixed_ips": ["10.0.2.2"], "device_id": server_id, }} self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.show, self.req, server_id, port_id) @mock.patch('nova.compute.api.API.get') @mock.patch('nova.api.openstack.compute.attach_interfaces' '.InterfaceAttachmentController.show') @mock.patch('nova.compute.api.API.attach_interface') def test_attach_interface(self, mock_interface, mock_port, mock_get): rule_name = "os_compute_api:os-attach-interfaces:create" body = {'interfaceAttachment': {'net_id': uuids.fake_id}} self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, 
                                 rule_name, self.controller.create,
                                 self.req, uuids.fake_id, body=body)

    @mock.patch('nova.compute.api.API.get')
    @mock.patch('nova.compute.api.API.detach_interface')
    def test_delete_interface(self, mock_detach, mock_get):
        rule_name = "os_compute_api:os-attach-interfaces:delete"
        self.common_policy_check(self.admin_authorized_contexts,
                                 self.admin_unauthorized_contexts,
                                 rule_name, self.controller.delete,
                                 self.req, uuids.fake_id, uuids.fake_id)


class AttachInterfacesScopeTypePolicyTest(AttachInterfacesPolicyTest):
    """Test Attach Interfaces APIs policies with system scope enabled.

    This class sets the nova.conf [oslo_policy] enforce_scope to True
    so that we can switch on the scope checking on the oslo.policy side.
    It defines the set of contexts with scoped tokens which are allowed
    and not allowed to pass the policy checks. With those contexts, it
    will run the API operation and verify the expected behaviour.
    """

    def setUp(self):
        super(AttachInterfacesScopeTypePolicyTest, self).setUp()
        self.flags(enforce_scope=True, group="oslo_policy")


class AttachInterfacesDeprecatedPolicyTest(base.BasePolicyTest):
    """Test Attach Interfaces APIs deprecated policies.

    This class checks that if deprecated policy rules are overridden
    by the user in the policy.json file, they still work because
    oslo.policy adds deprecated rules in a logical OR condition and
    enforces them for policy checks if overridden.
    """

    def setUp(self):
        super(AttachInterfacesDeprecatedPolicyTest, self).setUp()
        self.controller = attach_interfaces.InterfaceAttachmentController()
        self.admin_req = fakes.HTTPRequest.blank('')
        self.admin_req.environ['nova.context'] = self.project_admin_context
        self.reader_req = fakes.HTTPRequest.blank('')
        self.reader_req.environ['nova.context'] = self.project_reader_context
        self.deprecated_policy = "os_compute_api:os-attach-interfaces"
        # Override the rule with different checks than the defaults so that
        # we can verify the rule-overridden case.
        override_rules = {self.deprecated_policy: base_policy.RULE_ADMIN_API}
        # NOTE(gmann): Only override the deprecated rule in the policy file
        # so that we can verify if overridden checks are considered by
        # oslo.policy. Oslo.policy will consider the overridden rules if:
        # 1. the overridden deprecated rule's checks are different than the
        #    defaults
        # 2. the new rules are not present in the policy file
        self.policy = self.useFixture(policy_fixture.OverridePolicyFixture(
                                      rules_in_file=override_rules))

    @mock.patch('nova.compute.api.API.get')
    @mock.patch('nova.network.neutron.API.list_ports')
    def test_deprecated_policy_overridden_rule_is_checked(self, mock_port,
                                                          mock_get):
        # Test to verify that the deprecated overridden policy is working.

        # Check for success as the admin role. The deprecated rule has been
        # overridden with admin checks in policy.json. If the admin role
        # passes, it means the overridden rule is enforced by oslo.policy
        # because the new default is system or project reader and the old
        # default is admin.
        self.controller.index(self.admin_req, uuids.fake_id)

        # Check for failure with the reader context.
        exc = self.assertRaises(exception.PolicyNotAuthorized,
                                self.controller.index, self.reader_req,
                                uuids.fake_id)
        self.assertEqual(
            "Policy doesn't allow os_compute_api:os-attach-interfaces:list"
            " to be performed.", exc.format_message())


class AttachInterfacesNoLegacyPolicyTest(AttachInterfacesPolicyTest):
    """Test Attach Interfaces APIs policies with system scope enabled,
    and no more deprecated rules that allow the legacy admin API to
    access system_admin_or_owner APIs.
""" without_deprecated_rules = True rules_without_deprecation = { ai_policies.POLICY_ROOT % 'list': base_policy.PROJECT_READER_OR_SYSTEM_READER, ai_policies.POLICY_ROOT % 'show': base_policy.PROJECT_READER_OR_SYSTEM_READER, ai_policies.POLICY_ROOT % 'create': base_policy.PROJECT_MEMBER_OR_SYSTEM_ADMIN, ai_policies.POLICY_ROOT % 'delete': base_policy.PROJECT_MEMBER_OR_SYSTEM_ADMIN} def setUp(self): super(AttachInterfacesNoLegacyPolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system or projct admin or owner is able to # create or delete interfaces. self.admin_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context] # Check that non-system and non-admin/owner is not able to # create or delete interfaces. self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.project_reader_context, self.project_foo_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context] # Check that system reader or projct is able to # create or delete interfaces. self.reader_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context, self.project_reader_context, self.project_member_context, ] # Check that non-system reader nd non-admin/owner is not able to # create or delete interfaces. self.reader_unauthorized_contexts = [ self.legacy_admin_context, self.project_foo_context, self.system_foo_context, self.other_project_member_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_availability_zone.py0000664000175000017500000001111700000000000024600 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.api.openstack.compute import availability_zone from nova.tests.unit.api.openstack import fakes from nova.tests.unit.policies import base class AvailabilityZonePolicyTest(base.BasePolicyTest): """Test Availability Zone APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. 
""" def setUp(self): super(AvailabilityZonePolicyTest, self).setUp() self.controller = availability_zone.AvailabilityZoneController() self.req = fakes.HTTPRequest.blank('') # Check that everyone is able to list the AZ self.everyone_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context] self.everyone_unauthorized_contexts = [] # Check that system reader is able to list the AZ Detail # NOTE(gmann): Until old default rule which is admin_api is # deprecated and not removed, project admin and legacy admin # will be able to list the AZ. This make sure that existing # tokens will keep working even we have changed this policy defaults # to reader role. self.reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context, self.legacy_admin_context, self.project_admin_context] # Check that non-system-reader are not able to list the AZ. self.reader_unauthorized_contexts = [ self.system_foo_context, self.other_project_member_context, self.project_foo_context, self.project_member_context, self.project_reader_context] @mock.patch('nova.objects.Instance.save') def test_availability_zone_list_policy(self, mock_save): rule_name = "os_compute_api:os-availability-zone:list" self.common_policy_check(self.everyone_authorized_contexts, self.everyone_unauthorized_contexts, rule_name, self.controller.index, self.req) def test_availability_zone_detail_policy(self): rule_name = "os_compute_api:os-availability-zone:detail" self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.detail, self.req) class AvailabilityZoneScopeTypePolicyTest(AvailabilityZonePolicyTest): """Test Availability Zone APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(AvailabilityZoneScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system reader is able to list the AZ. self.reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context] # Check that non-system-reader is not able to list AZ. self.reader_unauthorized_contexts = [ self.system_foo_context, self.legacy_admin_context, self.project_admin_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_console_auth_tokens.py0000664000175000017500000000765500000000000025155 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.api.openstack.compute import console_auth_tokens from nova.tests.unit.api.openstack import fakes from nova.tests.unit.policies import base class ConsoleAuthTokensPolicyTest(base.BasePolicyTest): """Test Console Auth Tokens APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(ConsoleAuthTokensPolicyTest, self).setUp() self.controller = console_auth_tokens.ConsoleAuthTokensController() self.req = fakes.HTTPRequest.blank('', version='2.31') # Check that system reader is able to get console connection # information. # NOTE(gmann): Until old default rule which is admin_api is # deprecated and not removed, project admin and legacy admin # will be able to get console. This make sure that existing # tokens will keep working even we have changed this policy defaults # to reader role. self.reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context, self.legacy_admin_context, self.project_admin_context] # Check that non-admin is not able to get console connection # information. self.reader_unauthorized_contexts = [ self.system_foo_context, self.other_project_member_context, self.project_foo_context, self.project_member_context, self.project_reader_context] @mock.patch('nova.objects.ConsoleAuthToken.validate') def test_console_connect_info_token_policy(self, mock_validate): rule_name = "os_compute_api:os-console-auth-tokens" self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.show, self.req, fakes.FAKE_UUID) class ConsoleAuthTokensScopeTypePolicyTest(ConsoleAuthTokensPolicyTest): """Test Console Auth Tokens APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(ConsoleAuthTokensScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system reader is able to get console connection # information. self.reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context] # Check that non-system-reader is not able to get console connection # information. 
self.reader_unauthorized_contexts = [ self.legacy_admin_context, self.system_foo_context, self.project_admin_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_console_output.py0000664000175000017500000001136200000000000024157 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.api.openstack.compute import console_output from nova.compute import vm_states from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class ConsoleOutputPolicyTest(base.BasePolicyTest): """Test Console Output APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(ConsoleOutputPolicyTest, self).setUp() self.controller = console_output.ConsoleOutputController() self.req = fakes.HTTPRequest.blank('') self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock uuid = uuids.fake_id self.instance = fake_instance.fake_instance_obj( self.project_member_context, project_id=self.project_id, id=1, uuid=uuid, vm_state=vm_states.ACTIVE, task_state=None, launched_at=timeutils.utcnow()) self.mock_get.return_value = self.instance # Check that admin or owner is able to get the server console. self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context ] # Check that non-admin and non-owner is not able to get the server # console. self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context ] @mock.patch('nova.compute.api.API.get_console_output') def test_console_output_policy(self, mock_console): mock_console.return_value = '\n'.join([str(i) for i in range(2)]) rule_name = "os_compute_api:os-console-output" self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.get_console_output, self.req, self.instance.uuid, body={'os-getConsoleOutput': {}}) class ConsoleOutputScopeTypePolicyTest(ConsoleOutputPolicyTest): """Test Console Output APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. 
With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(ConsoleOutputScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class ConsoleOutputNoLegacyPolicyTest(ConsoleOutputPolicyTest): """Test Console Output APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system_admin_or_owner APIs. """ without_deprecated_rules = True def setUp(self): super(ConsoleOutputNoLegacyPolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system or projct admin or owner is able to # get the server console. self.admin_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context] # Check that non-system and non-admin/owner is not able to # get the server console. self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.project_reader_context, self.project_foo_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_create_backup.py0000664000175000017500000001143300000000000023664 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.api.openstack.compute import create_backup from nova.compute import vm_states from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class CreateBackupPolicyTest(base.BasePolicyTest): """Test Create Backup APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(CreateBackupPolicyTest, self).setUp() self.controller = create_backup.CreateBackupController() self.req = fakes.HTTPRequest.blank('') self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock uuid = uuids.fake_id self.instance = fake_instance.fake_instance_obj( self.project_member_context, project_id=self.project_id, id=1, uuid=uuid, vm_state=vm_states.ACTIVE, task_state=None, launched_at=timeutils.utcnow()) self.mock_get.return_value = self.instance # Check that admin or owner is able to create server backup. self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context ] # Check that non-admin and non-owner is not able to create server # backup. 
self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context ] @mock.patch('nova.compute.api.API.backup') def test_create_backup_policy(self, mock_backup): rule_name = "os_compute_api:os-create-backup" body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': 1, }, } self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller._create_backup, self.req, self.instance.uuid, body=body) class CreateBackupScopeTypePolicyTest(CreateBackupPolicyTest): """Test Create Backup APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(CreateBackupScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class CreateBackupNoLegacyPolicyTest(CreateBackupPolicyTest): """Test Create Backup APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system_admin_or_owner APIs. """ without_deprecated_rules = True def setUp(self): super(CreateBackupNoLegacyPolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system or projct admin or owner is able to create # server backup. self.admin_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context] # Check that non-system and non-admin/owner is not able to # create server backup. self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.project_reader_context, self.project_foo_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_deferred_delete.py0000664000175000017500000001527300000000000024204 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.api.openstack.compute import deferred_delete from nova.compute import vm_states from nova import exception from nova.policies import base as base_policy from nova.policies import deferred_delete as dd_policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class DeferredDeletePolicyTest(base.BasePolicyTest): """Test Deferred Delete APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. 
With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(DeferredDeletePolicyTest, self).setUp() self.controller = deferred_delete.DeferredDeleteController() self.req = fakes.HTTPRequest.blank('') user_id = self.req.environ['nova.context'].user_id self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock uuid = uuids.fake_id self.instance = fake_instance.fake_instance_obj( self.project_member_context, project_id=self.project_id, id=1, uuid=uuid, user_id=user_id, vm_state=vm_states.ACTIVE, task_state=None, launched_at=timeutils.utcnow()) self.mock_get.return_value = self.instance # Check that admin or owner is able to force delete or restore server. self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context ] # Check that non-admin and non-owner is not able to force delete or # restore server. self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context ] @mock.patch('nova.compute.api.API.restore') def test_restore_server_policy(self, mock_restore): rule_name = dd_policies.BASE_POLICY_NAME % 'restore' self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller._restore, self.req, self.instance.uuid, body={'restore': {}}) def test_force_delete_server_policy(self): rule_name = dd_policies.BASE_POLICY_NAME % 'force' self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller._force_delete, self.req, self.instance.uuid, body={'forceDelete': {}}) def test_force_delete_server_policy_failed_with_other_user(self): rule_name = dd_policies.BASE_POLICY_NAME % 'force' # Change the user_id in request context. req = fakes.HTTPRequest.blank('') req.environ['nova.context'].user_id = 'other-user' self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._force_delete, req, self.instance.uuid, body={'forceDelete': {}}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) @mock.patch('nova.compute.api.API.force_delete') def test_force_delete_server_policy_pass_with_same_user( self, force_delete_mock): rule_name = dd_policies.BASE_POLICY_NAME % 'force' self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) self.controller._force_delete(self.req, self.instance.uuid, body={'forceDelete': {}}) force_delete_mock.assert_called_once_with( self.req.environ['nova.context'], self.instance) class DeferredDeleteScopeTypePolicyTest(DeferredDeletePolicyTest): """Test Deferred Delete APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. 
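Scope checking here corresponds to setting ``enforce_scope = True`` in the ``[oslo_policy]`` section of nova.conf, which ``setUp`` applies via ``self.flags``.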
""" def setUp(self): super(DeferredDeleteScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class DeferredDeleteNoLegacyPolicyTest(DeferredDeletePolicyTest): """Test Deferred Delete APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system_admin_or_owner APIs. """ without_deprecated_rules = True rules_without_deprecation = { dd_policies.BASE_POLICY_NAME % 'restore': base_policy.PROJECT_MEMBER_OR_SYSTEM_ADMIN, dd_policies.BASE_POLICY_NAME % 'force': base_policy.PROJECT_MEMBER_OR_SYSTEM_ADMIN} def setUp(self): super(DeferredDeleteNoLegacyPolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system or projct admin or owner is able to # force delete or restore server. self.admin_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context] # Check that non-system and non-admin/owner is not able to # force delete or restore server. self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.project_reader_context, self.project_foo_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_evacuate.py0000664000175000017500000001515300000000000022674 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.api.openstack.compute import evacuate from nova.compute import vm_states from nova import exception from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base def fake_service_get_by_compute_host(self, context, host): return {'host_name': host, 'service': 'compute', 'zone': 'nova' } class EvacuatePolicyTest(base.BasePolicyTest): """Test Evacuate APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(EvacuatePolicyTest, self).setUp() self.controller = evacuate.EvacuateController() self.req = fakes.HTTPRequest.blank('') self.user_req = fakes.HTTPRequest.blank('') user_id = self.user_req.environ['nova.context'].user_id self.stub_out('nova.compute.api.HostAPI.service_get_by_compute_host', fake_service_get_by_compute_host) self.stub_out( 'nova.api.openstack.common.' 
'instance_has_port_with_resource_request', lambda *args, **kwargs: False) self.mock_get = self.useFixture( fixtures.MockPatch('nova.compute.api.API.get')).mock uuid = uuids.fake_id self.instance = fake_instance.fake_instance_obj( self.project_member_context, project_id=self.project_id, id=1, uuid=uuid, user_id=user_id, vm_state=vm_states.ACTIVE, task_state=None, launched_at=timeutils.utcnow()) self.mock_get.return_value = self.instance # Check that admin is able to evacuate the server self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context] # Check that non-admin is not able to evacuate the server self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] @mock.patch('nova.compute.api.API.evacuate') def test_evacuate_policy(self, mock_evacuate): rule_name = "os_compute_api:os-evacuate" body = {'evacuate': {'host': 'my-host', 'onSharedStorage': 'False', 'adminPass': 'admin_pass'} } self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller._evacuate, self.req, uuids.fake_id, body=body) def test_evacuate_policy_failed_with_other_user(self): rule_name = "os_compute_api:os-evacuate" # Change the user_id in request context. self.user_req.environ['nova.context'].user_id = 'other-user' self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) body = {'evacuate': {'host': 'my-host', 'onSharedStorage': 'False', 'adminPass': 'MyNewPass' }} exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller._evacuate, self.user_req, fakes.FAKE_UUID, body=body) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) @mock.patch('nova.compute.api.API.evacuate') def test_evacuate_policy_pass_with_same_user(self, evacuate_mock): rule_name = "os_compute_api:os-evacuate" self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) body = {'evacuate': {'host': 'my-host', 'onSharedStorage': 'False', 'adminPass': 'MyNewPass' }} self.controller._evacuate(self.user_req, fakes.FAKE_UUID, body=body) evacuate_mock.assert_called_once_with( self.user_req.environ['nova.context'], mock.ANY, 'my-host', False, 'MyNewPass', None) class EvacuateScopeTypePolicyTest(EvacuatePolicyTest): """Test Evacuate APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scopped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(EvacuateScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class EvacuateNoLegacyPolicyTest(EvacuateScopeTypePolicyTest): """Test Evacuate APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system APIs. """ without_deprecated_rules = True def setUp(self): super(EvacuateNoLegacyPolicyTest, self).setUp() # Check that system admin is able to evacuate server. self.admin_authorized_contexts = [ self.system_admin_context] # Check that non-system or non-admin is not able to evacuate # server. 
self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_admin_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_flavor_access.py0000664000175000017500000002101200000000000023700 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from nova.api.openstack.compute import flavor_access from nova.policies import base as base_policy from nova.policies import flavor_access as fa_policy from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_flavor from nova.tests.unit.policies import base class FlavorAccessPolicyTest(base.BasePolicyTest): """Test Flavor Access APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(FlavorAccessPolicyTest, self).setUp() self.controller = flavor_access.FlavorActionController() self.controller_index = flavor_access.FlavorAccessController() self.req = fakes.HTTPRequest.blank('') self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_flavor')).mock uuid = uuids.fake_id self.flavor = fake_flavor.fake_flavor_obj( self.project_member_context, id=1, uuid=uuid, project_id=self.project_id, is_public=False) self.mock_get.return_value = self.flavor self.stub_out('nova.api.openstack.identity.verify_project_id', lambda ctx, project_id: True) self.stub_out('nova.objects.flavor._get_projects_from_db', lambda context, flavorid: []) # Check that admin is able to add/remove flavor access # to a tenant. self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context] # Check that non-admin is not able to add/remove flavor access # to a tenant. self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] # Check that everyone is able to list flavor access # information which is nothing but bug#1867840. 
self.reader_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context] self.reader_unauthorized_contexts = [ ] def test_list_flavor_access_policy(self): rule_name = fa_policy.BASE_POLICY_NAME self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller_index.index, self.req, '1') @mock.patch('nova.objects.Flavor.add_access') def test_add_tenant_access_policy(self, mock_add): rule_name = fa_policy.POLICY_ROOT % "add_tenant_access" self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller._add_tenant_access, self.req, '1', body={'addTenantAccess': {'tenant': 't1'}}) @mock.patch('nova.objects.Flavor.remove_access') def test_remove_tenant_access_policy(self, mock_remove): rule_name = fa_policy.POLICY_ROOT % "remove_tenant_access" self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller._remove_tenant_access, self.req, '1', body={'removeTenantAccess': {'tenant': 't1'}}) class FlavorAccessScopeTypePolicyTest(FlavorAccessPolicyTest): """Test Flavor Access APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(FlavorAccessScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system admin is able to add/remove flavor access # to a tenant. self.admin_authorized_contexts = [ self.system_admin_context] # Check that non-system-admin is not able to add/remove flavor access # to a tenant. self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.project_admin_context, self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] # Check that system user is able to list flavor access # information. self.reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context] # Check that non-system is not able to list flavor access # information. self.reader_unauthorized_contexts = [ self.legacy_admin_context, self.other_project_member_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context] class FlavorAccessNoLegacyPolicyTest(FlavorAccessPolicyTest): """Test FlavorAccess APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system_redear APIs. 
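The overridden defaults, SYSTEM_ADMIN for adding/removing tenant access and SYSTEM_READER for listing access, are declared in ``rules_without_deprecation`` below.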
""" without_deprecated_rules = True rules_without_deprecation = { fa_policy.POLICY_ROOT % "add_tenant_access": base_policy.SYSTEM_ADMIN, fa_policy.POLICY_ROOT % "remove_tenant_access": base_policy.SYSTEM_ADMIN, fa_policy.BASE_POLICY_NAME: base_policy.SYSTEM_READER} def setUp(self): super(FlavorAccessNoLegacyPolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system admin is able to add/remove flavor access # to a tenant. self.admin_authorized_contexts = [ self.system_admin_context] # Check that non-system-admin is not able to add/remove flavor access # to a tenant. self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.project_admin_context, self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] # Check that system reader is able to list flavor access # information. self.reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context] # Check that non-system-reader is not able to list flavor access # information. self.reader_unauthorized_contexts = [ self.legacy_admin_context, self.other_project_member_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.system_foo_context] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/policies/test_flavor_extra_specs.py0000664000175000017500000004631500000000000024774 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from nova.api.openstack.compute import flavor_manage from nova.api.openstack.compute import flavors from nova.api.openstack.compute import flavors_extraspecs from nova.api.openstack.compute import servers from nova.compute import vm_states from nova import objects from nova.policies import flavor_extra_specs as policies from nova.policies import flavor_manage as fm_policies from nova.policies import servers as s_policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_flavor from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class FlavorExtraSpecsPolicyTest(base.BasePolicyTest): """Test Flavor Extra Specs APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. 
""" def setUp(self): super(FlavorExtraSpecsPolicyTest, self).setUp() self.controller = flavors_extraspecs.FlavorExtraSpecsController() self.flavor_ctrl = flavors.FlavorsController() self.fm_ctrl = flavor_manage.FlavorManageController() self.server_ctrl = servers.ServersController() self.req = fakes.HTTPRequest.blank('') self.server_ctrl._view_builder._add_security_grps = mock.MagicMock() self.server_ctrl._view_builder._get_metadata = mock.MagicMock() self.server_ctrl._view_builder._get_addresses = mock.MagicMock() self.server_ctrl._view_builder._get_host_id = mock.MagicMock() self.server_ctrl._view_builder._get_fault = mock.MagicMock() self.server_ctrl._view_builder._add_host_status = mock.MagicMock() self.instance = fake_instance.fake_instance_obj( self.project_member_context, id=1, uuid=uuids.fake_id, project_id=self.project_id, vm_state=vm_states.ACTIVE) self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock self.mock_get.return_value = self.instance fakes.stub_out_secgroup_api( self, security_groups=[{'name': 'default'}]) self.mock_get_all = self.useFixture(fixtures.MockPatchObject( self.server_ctrl.compute_api, 'get_all')).mock self.mock_get_all.return_value = objects.InstanceList( objects=[self.instance]) def get_flavor_extra_specs(context, flavor_id): return fake_flavor.fake_flavor_obj( self.project_member_context, id=1, uuid=uuids.fake_id, project_id=self.project_id, is_public=False, extra_specs={'hw:cpu_policy': 'shared'}, expected_attrs='extra_specs') self.stub_out('nova.api.openstack.common.get_flavor', get_flavor_extra_specs) # Check that all are able to get flavor extra specs. self.all_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context, self.other_project_reader_context ] self.all_unauthorized_contexts = [] # Check that all system scoped are able to get flavor extra specs. self.all_system_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context, self.other_project_reader_context ] self.all_system_unauthorized_contexts = [] # Check that admin is able to create, update and delete flavor # extra specs. self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context] # Check that non-admin is not able to create, update and # delete flavor extra specs. 
self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context, self.other_project_reader_context ] @mock.patch('nova.objects.Flavor.save') def test_create_flavor_extra_specs_policy(self, mock_save): body = {'extra_specs': {'hw:numa_nodes': '1'}} rule_name = policies.POLICY_ROOT % 'create' self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.create, self.req, '1234', body=body) @mock.patch('nova.objects.Flavor._flavor_extra_specs_del') @mock.patch('nova.objects.Flavor.save') def test_delete_flavor_extra_specs_policy(self, mock_save, mock_delete): rule_name = policies.POLICY_ROOT % 'delete' self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.delete, self.req, '1234', 'hw:cpu_policy') @mock.patch('nova.objects.Flavor.save') def test_update_flavor_extra_specs_policy(self, mock_save): body = {'hw:cpu_policy': 'shared'} rule_name = policies.POLICY_ROOT % 'update' self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.update, self.req, '1234', 'hw:cpu_policy', body=body) def test_show_flavor_extra_specs_policy(self): rule_name = policies.POLICY_ROOT % 'show' self.common_policy_check(self.all_authorized_contexts, self.all_unauthorized_contexts, rule_name, self.controller.show, self.req, '1234', 'hw:cpu_policy') def test_index_flavor_extra_specs_policy(self): rule_name = policies.POLICY_ROOT % 'index' self.common_policy_check(self.all_authorized_contexts, self.all_unauthorized_contexts, rule_name, self.controller.index, self.req, '1234') def test_flavor_detail_with_extra_specs_policy(self): fakes.stub_out_flavor_get_all(self) rule_name = policies.POLICY_ROOT % 'index' req = fakes.HTTPRequest.blank('', version='2.61') authorize_res, unauthorize_res = self.common_policy_check( self.all_authorized_contexts, self.all_unauthorized_contexts, rule_name, self.flavor_ctrl.detail, req, fatal=False) for resp in authorize_res: self.assertIn('extra_specs', resp['flavors'][0]) for resp in unauthorize_res: self.assertNotIn('extra_specs', resp['flavors'][0]) def test_flavor_show_with_extra_specs_policy(self): fakes.stub_out_flavor_get_by_flavor_id(self) rule_name = policies.POLICY_ROOT % 'index' req = fakes.HTTPRequest.blank('', version='2.61') authorize_res, unauthorize_res = self.common_policy_check( self.all_authorized_contexts, self.all_unauthorized_contexts, rule_name, self.flavor_ctrl.show, req, '1', fatal=False) for resp in authorize_res: self.assertIn('extra_specs', resp['flavor']) for resp in unauthorize_res: self.assertNotIn('extra_specs', resp['flavor']) def test_flavor_create_with_extra_specs_policy(self): rule_name = policies.POLICY_ROOT % 'index' # 'create' policy is checked before flavor extra specs 'index' policy # so we have to allow it for everyone otherwise it will fail first # for unauthorized contexts. 
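# In oslo.policy the check string '@' always passes, so pointing a rule at '@' # effectively disables that policy check for this test. For illustration only # (rule name shown as an assumption), an operator could open up a rule the same # way in a policy file with an entry such as "os_compute_api:os-flavor-manage:create": "@".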
rule = fm_policies.POLICY_ROOT % 'create' self.policy.set_rules({rule: "@"}, overwrite=False) req = fakes.HTTPRequest.blank('', version='2.61') def fake_create(newflavor): newflavor['flavorid'] = uuids.fake_id newflavor["name"] = 'test' newflavor["memory_mb"] = 512 newflavor["vcpus"] = 2 newflavor["root_gb"] = 1 newflavor["ephemeral_gb"] = 1 newflavor["swap"] = 512 newflavor["rxtx_factor"] = 1.0 newflavor["is_public"] = True newflavor["disabled"] = False newflavor["extra_specs"] = {} self.stub_out("nova.objects.Flavor.create", fake_create) body = { "flavor": { "name": "test", "ram": 512, "vcpus": 2, "disk": 1, } } authorize_res, unauthorize_res = self.common_policy_check( self.all_system_authorized_contexts, self.all_system_unauthorized_contexts, rule_name, self.fm_ctrl._create, req, body=body, fatal=False) for resp in authorize_res: self.assertIn('extra_specs', resp['flavor']) for resp in unauthorize_res: self.assertNotIn('extra_specs', resp['flavor']) @mock.patch('nova.objects.Flavor.save') def test_flavor_update_with_extra_specs_policy(self, mock_save): fakes.stub_out_flavor_get_by_flavor_id(self) rule_name = policies.POLICY_ROOT % 'index' # 'update' policy is checked before flavor extra specs 'index' policy # so we have to allow it for everyone otherwise it will fail first # for unauthorized contexts. rule = fm_policies.POLICY_ROOT % 'update' self.policy.set_rules({rule: "@"}, overwrite=False) req = fakes.HTTPRequest.blank('', version='2.61') authorize_res, unauthorize_res = self.common_policy_check( self.all_system_authorized_contexts, self.all_system_unauthorized_contexts, rule_name, self.fm_ctrl._update, req, '1', body={'flavor': {'description': None}}, fatal=False) for resp in authorize_res: self.assertIn('extra_specs', resp['flavor']) for resp in unauthorize_res: self.assertNotIn('extra_specs', resp['flavor']) def test_server_detail_with_extra_specs_policy(self): rule = s_policies.SERVERS % 'detail' # server 'detail' policy is checked before flavor extra specs 'index' # policy so we have to allow it for everyone otherwise it will fail # first for unauthorized contexts. self.policy.set_rules({rule: "@"}, overwrite=False) req = fakes.HTTPRequest.blank('', version='2.47') rule_name = policies.POLICY_ROOT % 'index' authorize_res, unauthorize_res = self.common_policy_check( self.all_authorized_contexts, self.all_unauthorized_contexts, rule_name, self.server_ctrl.detail, req, fatal=False) for resp in authorize_res: self.assertIn('extra_specs', resp['servers'][0]['flavor']) for resp in unauthorize_res: self.assertNotIn('extra_specs', resp['servers'][0]['flavor']) @mock.patch('nova.objects.BlockDeviceMappingList.bdms_by_instance_uuid') @mock.patch('nova.compute.api.API.get_instance_host_status') def test_server_show_with_extra_specs_policy(self, mock_get, mock_block): rule = s_policies.SERVERS % 'show' # server 'show' policy is checked before flavor extra specs 'index' # policy so we have to allow it for everyone otherwise it will fail # first for unauthorized contexts. 
self.policy.set_rules({rule: "@"}, overwrite=False) req = fakes.HTTPRequest.blank('', version='2.47') rule_name = policies.POLICY_ROOT % 'index' authorize_res, unauthorize_res = self.common_policy_check( self.all_authorized_contexts, self.all_unauthorized_contexts, rule_name, self.server_ctrl.show, req, 'fake', fatal=False) for resp in authorize_res: self.assertIn('extra_specs', resp['server']['flavor']) for resp in unauthorize_res: self.assertNotIn('extra_specs', resp['server']['flavor']) @mock.patch('nova.objects.BlockDeviceMappingList.bdms_by_instance_uuid') @mock.patch('nova.compute.api.API.get_instance_host_status') @mock.patch('nova.compute.api.API.rebuild') def test_server_rebuild_with_extra_specs_policy(self, mock_rebuild, mock_get, mock_bdm): rule = s_policies.SERVERS % 'rebuild' # server 'rebuild' policy is checked before flavor extra specs 'index' # policy so we have to allow it for everyone otherwise it will fail # first for unauthorized contexts. self.policy.set_rules({rule: "@"}, overwrite=False) req = fakes.HTTPRequest.blank('', version='2.47') rule_name = policies.POLICY_ROOT % 'index' authorize_res, unauthorize_res = self.common_policy_check( self.all_authorized_contexts, self.all_unauthorized_contexts, rule_name, self.server_ctrl._action_rebuild, req, self.instance.uuid, body={'rebuild': {"imageRef": uuids.fake_id}}, fatal=False) for resp in authorize_res: self.assertIn('extra_specs', resp.obj['server']['flavor']) for resp in unauthorize_res: self.assertNotIn('extra_specs', resp.obj['server']['flavor']) @mock.patch('nova.compute.api.API.update_instance') def test_server_update_with_extra_specs_policy(self, mock_update): rule = s_policies.SERVERS % 'update' # server 'update' policy is checked before flavor extra specs 'index' # policy so we have to allow it for everyone otherwise it will fail # first for unauthorized contexts. self.policy.set_rules({rule: "@"}, overwrite=False) req = fakes.HTTPRequest.blank('', version='2.47') rule_name = policies.POLICY_ROOT % 'index' authorize_res, unauthorize_res = self.common_policy_check( self.all_authorized_contexts, self.all_unauthorized_contexts, rule_name, self.server_ctrl.update, req, self.instance.uuid, body={'server': {'name': 'test'}}, fatal=False) for resp in authorize_res: self.assertIn('extra_specs', resp['server']['flavor']) for resp in unauthorize_res: self.assertNotIn('extra_specs', resp['server']['flavor']) class FlavorExtraSpecsScopeTypePolicyTest(FlavorExtraSpecsPolicyTest): """Test Flavor Extra Specs APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(FlavorExtraSpecsScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that all system scoped are able to get flavor extra specs. 
self.all_system_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context ] self.all_system_unauthorized_contexts = [ self.legacy_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context, self.other_project_reader_context ] # Check that system admin is able to create, update and delete flavor # extra specs. self.admin_authorized_contexts = [ self.system_admin_context] # Check that non-system admin is not able to create, update and # delete flavor extra specs. self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context, self.other_project_reader_context ] class FlavorExtraSpecsNoLegacyPolicyTest(FlavorExtraSpecsScopeTypePolicyTest): """Test Flavor Extra Specs APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system_admin_or_owner APIs. """ without_deprecated_rules = True def setUp(self): super(FlavorExtraSpecsNoLegacyPolicyTest, self).setUp() # Check that system or project reader are able to get flavor # extra specs. self.all_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.system_member_context, self.system_reader_context, self.other_project_member_context, self.other_project_reader_context ] self.all_unauthorized_contexts = [ self.project_foo_context, self.system_foo_context ] # Check that all system scoped reader are able to get flavor # extra specs. self.all_system_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context ] self.all_system_unauthorized_contexts = [ self.legacy_admin_context, self.system_foo_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context, self.other_project_reader_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_flavor_manage.py0000664000175000017500000001264400000000000023702 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils.fixture import uuidsentinel as uuids from nova.api.openstack.compute import flavor_manage from nova.policies import flavor_manage as fm_policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit.policies import base class FlavorManagePolicyTest(base.BasePolicyTest): """Test os-flavor-manage APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. 
With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(FlavorManagePolicyTest, self).setUp() self.controller = flavor_manage.FlavorManageController() self.req = fakes.HTTPRequest.blank('') # Check that admin is able to manage the flavors. self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context] # Check that non-admin is not able to manage the flavors. self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] def test_create_flavor_policy(self): rule_name = fm_policies.POLICY_ROOT % 'create' def fake_create(newflavor): newflavor['flavorid'] = uuids.fake_id newflavor["name"] = 'test' newflavor["memory_mb"] = 512 newflavor["vcpus"] = 2 newflavor["root_gb"] = 1 newflavor["ephemeral_gb"] = 1 newflavor["swap"] = 512 newflavor["rxtx_factor"] = 1.0 newflavor["is_public"] = True newflavor["disabled"] = False self.stub_out("nova.objects.Flavor.create", fake_create) body = { "flavor": { "name": "test", "ram": 512, "vcpus": 2, "disk": 1, } } self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller._create, self.req, body=body) @mock.patch('nova.objects.Flavor.get_by_flavor_id') @mock.patch('nova.objects.Flavor.save') def test_update_flavor_policy(self, mock_save, mock_get): rule_name = fm_policies.POLICY_ROOT % 'update' req = fakes.HTTPRequest.blank('', version='2.55') self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller._update, req, uuids.fake_id, body={'flavor': {'description': None}}) @mock.patch('nova.objects.Flavor.destroy') def test_delete_flavor_policy(self, mock_delete): rule_name = fm_policies.POLICY_ROOT % 'delete' self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller._delete, self.req, uuids.fake_id) class FlavorManageScopeTypePolicyTest(FlavorManagePolicyTest): """Test os-flavor-manage APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(FlavorManageScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system admin is able to manage the flavors. self.admin_authorized_contexts = [ self.system_admin_context] # Check that non-system-admin is not able to manage the flavors. self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.project_admin_context, self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] class FlavorManageNoLegacyPolicyTest(FlavorManageScopeTypePolicyTest): """Test Flavor Manage APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system_admin_or_owner APIs. 
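No authorized/unauthorized context lists are overridden here; the scope-type parent class already restricts flavor management to the system admin context.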
""" without_deprecated_rules = True ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_hypervisors.py0000664000175000017500000001644300000000000023477 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.api.openstack.compute import hypervisors from nova.policies import base as base_policy from nova.policies import hypervisors as hv_policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit.policies import base class HypervisorsPolicyTest(base.BasePolicyTest): """Test os-hypervisors APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(HypervisorsPolicyTest, self).setUp() self.controller = hypervisors.HypervisorsController() self.req = fakes.HTTPRequest.blank('') self.controller._get_compute_nodes_by_name_pattern = mock.MagicMock() self.controller.host_api.compute_node_get_all = mock.MagicMock() self.controller.host_api.service_get_by_compute_host = mock.MagicMock() self.controller.host_api.compute_node_get = mock.MagicMock() # Check that system scoped admin, member and reader are able to # perform operations on hypervisors. # NOTE(gmann): Until old default rule which is admin_api is # deprecated and not removed, project admin and legacy admin # will be able to get hypervisors. This make sure that existing # tokens will keep working even we have changed this policy defaults # to reader role. 
self.reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context, self.legacy_admin_context, self.project_admin_context] # Check that non-system-reader are not able to perform operations # on hypervisors self.reader_unauthorized_contexts = [ self.system_foo_context, self.other_project_member_context, self.project_foo_context, self.project_member_context, self.project_reader_context] def test_list_hypervisors_policy(self): rule_name = hv_policies.BASE_POLICY_NAME % 'list' self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.index, self.req) def test_list_details_hypervisors_policy(self): rule_name = hv_policies.BASE_POLICY_NAME % 'list-detail' self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.detail, self.req) def test_show_hypervisors_policy(self): rule_name = hv_policies.BASE_POLICY_NAME % 'show' self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.show, self.req, 11111) @mock.patch('nova.compute.api.HostAPI.get_host_uptime') def test_uptime_hypervisors_policy(self, mock_uptime): rule_name = hv_policies.BASE_POLICY_NAME % 'uptime' self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.uptime, self.req, 11111) def test_search_hypervisors_policy(self): rule_name = hv_policies.BASE_POLICY_NAME % 'search' self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.search, self.req, 11111) def test_servers_hypervisors_policy(self): rule_name = hv_policies.BASE_POLICY_NAME % 'servers' self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.servers, self.req, 11111) @mock.patch('nova.compute.api.HostAPI.compute_node_statistics') def test_statistics_hypervisors_policy(self, mock_statistics): rule_name = hv_policies.BASE_POLICY_NAME % 'statistics' self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.statistics, self.req) class HypervisorsScopeTypePolicyTest(HypervisorsPolicyTest): """Test os-hypervisors APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(HypervisorsScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system reader is able to perform operations # on hypervisors. self.reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context] # Check that non-system-reader is not able to perform operations # on hypervisors. self.reader_unauthorized_contexts = [ self.legacy_admin_context, self.project_admin_context, self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] class HypervisorsNoLegacyPolicyTest(HypervisorsScopeTypePolicyTest): """Test Hypervisors APIs policies with system scope enabled, and no more deprecated rules. 
""" without_deprecated_rules = True rules_without_deprecation = { hv_policies.BASE_POLICY_NAME % 'list': base_policy.SYSTEM_READER, hv_policies.BASE_POLICY_NAME % 'list-detail': base_policy.SYSTEM_READER, hv_policies.BASE_POLICY_NAME % 'show': base_policy.SYSTEM_READER, hv_policies.BASE_POLICY_NAME % 'statistics': base_policy.SYSTEM_READER, hv_policies.BASE_POLICY_NAME % 'uptime': base_policy.SYSTEM_READER, hv_policies.BASE_POLICY_NAME % 'search': base_policy.SYSTEM_READER, hv_policies.BASE_POLICY_NAME % 'servers': base_policy.SYSTEM_READER, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_instance_actions.py0000664000175000017500000003415000000000000024421 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import fixtures import mock from nova.api.openstack import api_version_request from oslo_policy import policy as oslo_policy from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.api.openstack.compute import instance_actions as instance_actions_v21 from nova.compute import vm_states from nova import exception from nova.policies import base as base_policy from nova.policies import instance_actions as ia_policies from nova import policy from nova.tests.unit.api.openstack.compute import test_instance_actions from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit import fake_server_actions from nova.tests.unit.policies import base from nova.tests.unit import policy_fixture FAKE_UUID = fake_server_actions.FAKE_UUID FAKE_REQUEST_ID = fake_server_actions.FAKE_REQUEST_ID1 class InstanceActionsPolicyTest(base.BasePolicyTest): """Test os-instance-actions APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(InstanceActionsPolicyTest, self).setUp() self.controller = instance_actions_v21.InstanceActionsController() self.req = fakes.HTTPRequest.blank('') self.fake_actions = copy.deepcopy(fake_server_actions.FAKE_ACTIONS) self.fake_events = copy.deepcopy(fake_server_actions.FAKE_EVENTS) self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock uuid = uuids.fake_id self.instance = fake_instance.fake_instance_obj( self.project_member_context, id=1, uuid=uuid, project_id=self.project_id, vm_state=vm_states.ACTIVE, task_state=None, launched_at=timeutils.utcnow()) self.mock_get.return_value = self.instance # Check that system reader are able to show the instance # actions events. 
self.system_reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context, self.legacy_admin_context, self.project_admin_context] # Check that non-system-reader are not able to show the instance # actions events. self.system_reader_unauthorized_contexts = [ self.system_foo_context, self.other_project_member_context, self.project_foo_context, self.project_member_context, self.project_reader_context] self.project_or_system_reader_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context, self.project_reader_context, self.project_member_context, self.project_foo_context ] self.project_or_system_reader_unauthorized_contexts = [ self.system_foo_context, self.other_project_member_context ] def _set_policy_rules(self, overwrite=True): rules = {ia_policies.BASE_POLICY_NAME % 'show': '@'} policy.set_rules(oslo_policy.Rules.from_dict(rules), overwrite=overwrite) def test_index_instance_action_policy(self): rule_name = ia_policies.BASE_POLICY_NAME % "list" self.common_policy_check( self.project_or_system_reader_authorized_contexts, self.project_or_system_reader_unauthorized_contexts, rule_name, self.controller.index, self.req, self.instance['uuid']) @mock.patch('nova.compute.api.InstanceActionAPI.action_get_by_request_id') def test_show_instance_action_policy(self, mock_action_get): fake_action = self.fake_actions[FAKE_UUID][FAKE_REQUEST_ID] mock_action_get.return_value = fake_action rule_name = ia_policies.BASE_POLICY_NAME % "show" self.common_policy_check( self.project_or_system_reader_authorized_contexts, self.project_or_system_reader_unauthorized_contexts, rule_name, self.controller.show, self.req, self.instance['uuid'], fake_action['request_id']) @mock.patch('nova.objects.InstanceActionEventList.get_by_action') @mock.patch('nova.objects.InstanceAction.get_by_request_id') def test_show_instance_action_policy_with_events( self, mock_get_action, mock_get_events): """Test to ensure skip checking policy rule 'os_compute_api:os-instance-actions:show'. """ fake_action = self.fake_actions[FAKE_UUID][FAKE_REQUEST_ID] mock_get_action.return_value = fake_action fake_events = self.fake_events[fake_action['id']] fake_action['events'] = fake_events mock_get_events.return_value = fake_events fake_action_fmt = test_instance_actions.format_action( copy.deepcopy(fake_action)) self._set_policy_rules(overwrite=False) rule_name = ia_policies.BASE_POLICY_NAME % "events" authorize_res, unauthorize_res = self.common_policy_check( self.system_reader_authorized_contexts, self.system_reader_unauthorized_contexts, rule_name, self.controller.show, self.req, self.instance['uuid'], fake_action['request_id'], fatal=False) for action in authorize_res: # In order to unify the display forms of 'start_time' and # 'finish_time', format the results returned by the show api. res_fmt = test_instance_actions.format_action( action['instanceAction']) self.assertEqual(fake_action_fmt['events'], res_fmt['events']) for action in unauthorize_res: self.assertNotIn('events', action['instanceAction']) class InstanceActionsDeprecatedPolicyTest(base.BasePolicyTest): """Test os-instance-actions APIs Deprecated policies. This class checks if deprecated policy rules are overridden by user on policy.json file then they still work because oslo.policy add deprecated rules in logical OR condition and enforces them for policy checks if overridden. 
""" def setUp(self): super(InstanceActionsDeprecatedPolicyTest, self).setUp() self.controller = instance_actions_v21.InstanceActionsController() self.admin_or_owner_req = fakes.HTTPRequest.blank('') self.admin_or_owner_req.environ[ 'nova.context'] = self.project_admin_context self.reader_req = fakes.HTTPRequest.blank('') self.reader_req.environ['nova.context'] = self.project_reader_context self.deprecated_policy = ia_policies.ROOT_POLICY # Overridde rule with different checks than defaults so that we can # verify the rule overridden case. override_rules = { self.deprecated_policy: base_policy.RULE_ADMIN_OR_OWNER, } # NOTE(brinzhang): Only override the deprecated rule in policy file # so that we can verify if overridden checks are considered by # oslo.policy. # Oslo.policy will consider the overridden rules if: # 1. overridden deprecated rule's checks are different than defaults # 2. new rules are not present in policy file self.policy = self.useFixture(policy_fixture.OverridePolicyFixture( rules_in_file=override_rules)) @mock.patch('nova.compute.api.InstanceActionAPI.actions_get') @mock.patch('nova.api.openstack.common.get_instance') def test_deprecated_policy_overridden_rule_is_checked( self, mock_instance_get, mock_actions_get): # Test to verify if deprecatd overridden policy is working. instance = fake_instance.fake_instance_obj( self.admin_or_owner_req.environ['nova.context']) # Check for success as admin_or_owner role. Deprecated rule # has been overridden with admin checks in policy.json # If admin role pass it means overridden rule is enforced by # olso.policy because new default is system reader and the old # default is admin. self.controller.index(self.admin_or_owner_req, instance['uuid']) # check for failure with reader context. exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller.index, self.reader_req, instance['uuid']) self.assertEqual( "Policy doesn't allow os_compute_api:os-instance-actions:list " "to be performed.", exc.format_message()) class InstanceActionsScopeTypePolicyTest(InstanceActionsPolicyTest): """Test os-instance-actions APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True, so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(InstanceActionsScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") @mock.patch('nova.objects.InstanceActionEventList.get_by_action') @mock.patch('nova.objects.InstanceAction.get_by_request_id') def test_show_instance_action_policy_with_show_details( self, mock_get_action, mock_get_events): """Test to ensure skip checking policy rule 'os_compute_api:os-instance-actions:show'. 
""" self.req.api_version_request = api_version_request.APIVersionRequest( '2.84') fake_action = self.fake_actions[FAKE_UUID][FAKE_REQUEST_ID] mock_get_action.return_value = fake_action fake_events = self.fake_events[fake_action['id']] fake_action['events'] = fake_events mock_get_events.return_value = fake_events fake_action_fmt = test_instance_actions.format_action( copy.deepcopy(fake_action)) self._set_policy_rules(overwrite=False) rule_name = ia_policies.BASE_POLICY_NAME % "events:details" authorize_res, unauthorize_res = self.common_policy_check( self.system_reader_authorized_contexts, self.system_reader_unauthorized_contexts, rule_name, self.controller.show, self.req, self.instance['uuid'], fake_action['request_id'], fatal=False) for action in authorize_res: # Ensure the 'details' field in the action events for event in action['instanceAction']['events']: self.assertIn('details', event) # In order to unify the display forms of 'start_time' and # 'finish_time', format the results returned by the show api. res_fmt = test_instance_actions.format_action( action['instanceAction']) self.assertEqual(fake_action_fmt['events'], res_fmt['events']) # Because of the microversion > '2.51', that will be contain # 'events' in the os-instance-actions show api response, but the # 'details' should not contain in the action events. for action in unauthorize_res: # Ensure the 'details' field not in the action events for event in action['instanceAction']['events']: self.assertNotIn('details', event) class InstanceActionsNoLegacyPolicyTest(InstanceActionsPolicyTest): """Test os-instance-actions APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system_admin_or_owner APIs. """ without_deprecated_rules = True rules_without_deprecation = { ia_policies.BASE_POLICY_NAME % 'list': base_policy.PROJECT_READER_OR_SYSTEM_READER, ia_policies.BASE_POLICY_NAME % 'show': base_policy.PROJECT_READER_OR_SYSTEM_READER, ia_policies.BASE_POLICY_NAME % 'events': base_policy.SYSTEM_READER, } def setUp(self): super(InstanceActionsNoLegacyPolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system reader are able to get the # instance action events. self.system_reader_authorized_contexts = [ self.system_admin_context, self.system_reader_context, self.system_member_context] # Check that non-system-reader are not able to # get the instance action events self.system_reader_unauthorized_contexts = [ self.project_admin_context, self.system_foo_context, self.legacy_admin_context, self.other_project_member_context, self.project_foo_context, self.project_member_context, self.project_reader_context] # Check that system or projct reader is able to # show the instance actions events. self.project_or_system_reader_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context, self.project_reader_context, self.project_member_context, ] # Check that non-system or non-project reader is not able to # show the instance actions events. 
self.project_or_system_reader_unauthorized_contexts = [ self.legacy_admin_context, self.project_foo_context, self.system_foo_context, self.other_project_member_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_instance_usage_audit_log.py0000664000175000017500000001150700000000000026115 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.api.openstack.compute import instance_usage_audit_log as iual from nova.policies import base as base_policy from nova.policies import instance_usage_audit_log as iual_policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit.policies import base class InstanceUsageAuditLogPolicyTest(base.BasePolicyTest): """Test os-instance-usage-audit-log APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(InstanceUsageAuditLogPolicyTest, self).setUp() self.controller = iual.InstanceUsageAuditLogController() self.req = fakes.HTTPRequest.blank('') self.controller.host_api.task_log_get_all = mock.MagicMock() self.controller.host_api.service_get_all = mock.MagicMock() # Check that admin is able to get instance usage audit log. # NOTE(gmann): Until old default rule which is admin_api is # deprecated and not removed, project admin and legacy admin # will be able to get instance usage audit log. This make sure # that existing tokens will keep working even we have changed # this policy defaults to reader role. self.reader_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context] # Check that non-admin is not able to get instance usage audit log. self.reader_unauthorized_contexts = [ self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] def test_show_policy(self): rule_name = iual_policies.BASE_POLICY_NAME % 'show' self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.show, self.req, '2020-03-25 14:40:00') def test_index_policy(self): rule_name = iual_policies.BASE_POLICY_NAME % 'list' self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.index, self.req) class InstanceUsageScopeTypePolicyTest(InstanceUsageAuditLogPolicyTest): """Test os-instance-usage-audit-log APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. 
With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(InstanceUsageScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system reader is able to get instance usage audit log. self.reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context] # Check that non-system-admin is not able to get instance # usage audit log. self.reader_unauthorized_contexts = [ self.legacy_admin_context, self.project_admin_context, self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] class InstanceUsageNoLegacyPolicyTest(InstanceUsageScopeTypePolicyTest): """Test Instance Usage Audit Log APIs policies with system scope enabled, and no more deprecated rules. """ without_deprecated_rules = True rules_without_deprecation = { iual_policies.BASE_POLICY_NAME % 'list': base_policy.SYSTEM_READER, iual_policies.BASE_POLICY_NAME % 'show': base_policy.SYSTEM_READER, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_keypairs.py0000664000175000017500000002315400000000000022726 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.policies import keypairs as policies from nova.api.openstack.compute import keypairs from nova.tests.unit.api.openstack import fakes from nova.tests.unit.objects import test_keypair from nova.tests.unit.policies import base class KeypairsPolicyTest(base.BasePolicyTest): """Test Keypairs APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(KeypairsPolicyTest, self).setUp() self.controller = keypairs.KeypairController() self.req = fakes.HTTPRequest.blank('') # Check that everyone is able to create, delete and get # their keypairs. self.everyone_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context ] self.everyone_unauthorized_contexts = [] # Check that admin is able to create, delete and get # other users keypairs. self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context] # Check that non-admin is not able to create, delete and get # other users keypairs. 
self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context ] # Check that system reader is able to get # other users keypairs. self.system_reader_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context] # Check that non-system reader is not able to get # other users keypairs. self.system_reader_unauthorized_contexts = [ self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context ] @mock.patch('nova.compute.api.KeypairAPI.get_key_pairs') def test_index_keypairs_policy(self, mock_get): rule_name = policies.POLICY_ROOT % 'index' self.common_policy_check(self.everyone_authorized_contexts, self.everyone_unauthorized_contexts, rule_name, self.controller.index, self.req) @mock.patch('nova.compute.api.KeypairAPI.get_key_pairs') def test_index_others_keypairs_policy(self, mock_get): req = fakes.HTTPRequest.blank('?user_id=user2', version='2.10') rule_name = policies.POLICY_ROOT % 'index' self.common_policy_check(self.system_reader_authorized_contexts, self.system_reader_unauthorized_contexts, rule_name, self.controller.index, req) @mock.patch('nova.compute.api.KeypairAPI.get_key_pair') def test_show_keypairs_policy(self, mock_get): rule_name = policies.POLICY_ROOT % 'show' self.common_policy_check(self.everyone_authorized_contexts, self.everyone_unauthorized_contexts, rule_name, self.controller.show, self.req, fakes.FAKE_UUID) @mock.patch('nova.compute.api.KeypairAPI.get_key_pair') def test_show_others_keypairs_policy(self, mock_get): # Change the user_id in request context. req = fakes.HTTPRequest.blank('?user_id=user2', version='2.10') rule_name = policies.POLICY_ROOT % 'show' self.common_policy_check(self.system_reader_authorized_contexts, self.system_reader_unauthorized_contexts, rule_name, self.controller.show, req, fakes.FAKE_UUID) @mock.patch('nova.compute.api.KeypairAPI.create_key_pair') def test_create_keypairs_policy(self, mock_create): rule_name = policies.POLICY_ROOT % 'create' mock_create.return_value = (test_keypair.fake_keypair, 'FAKE_KEY') self.common_policy_check(self.everyone_authorized_contexts, self.everyone_unauthorized_contexts, rule_name, self.controller.create, self.req, body={'keypair': {'name': 'create_test'}}) @mock.patch('nova.compute.api.KeypairAPI.create_key_pair') def test_create_others_keypairs_policy(self, mock_create): # Change the user_id in request context. req = fakes.HTTPRequest.blank('', version='2.10') rule_name = policies.POLICY_ROOT % 'create' mock_create.return_value = (test_keypair.fake_keypair, 'FAKE_KEY') body = {'keypair': {'name': 'test2', 'user_id': 'user2'}} self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.create, req, body=body) @mock.patch('nova.compute.api.KeypairAPI.delete_key_pair') def test_delete_keypairs_policy(self, mock_delete): rule_name = policies.POLICY_ROOT % 'delete' self.common_policy_check(self.everyone_authorized_contexts, self.everyone_unauthorized_contexts, rule_name, self.controller.delete, self.req, fakes.FAKE_UUID) @mock.patch('nova.compute.api.KeypairAPI.delete_key_pair') def test_delete_others_keypairs_policy(self, mock_delete): # Change the user_id in request context. 
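        # Acting on another user's keypair relies on API microversion 2.10,
        # which added the optional user_id parameter to the keypairs API;
        # hence the '?user_id=user2' query string and version='2.10' on the
        # request below.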
req = fakes.HTTPRequest.blank('?user_id=user2', version='2.10') rule_name = policies.POLICY_ROOT % 'delete' self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.delete, req, fakes.FAKE_UUID) class KeypairsScopeTypePolicyTest(KeypairsPolicyTest): """Test Keypairs APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(KeypairsScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class KeypairsNoLegacyPolicyTest(KeypairsScopeTypePolicyTest): """Test Keypairs APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system APIs. """ without_deprecated_rules = True def setUp(self): super(KeypairsNoLegacyPolicyTest, self).setUp() # Check that system admin is able to create, delete and get # other users keypairs. self.admin_authorized_contexts = [ self.system_admin_context] # Check that system non-admin is not able to create, delete and get # other users keypairs. self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_admin_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] # Check that system reader is able to get # other users keypairs. self.system_reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context] # Check that non-system reader is not able to get # other users keypairs. self.system_reader_unauthorized_contexts = [ self.legacy_admin_context, self.project_admin_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_limits.py0000664000175000017500000001321700000000000022377 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.api.openstack.compute import limits from nova.policies import base as base_policy from nova.policies import limits as limits_policies from nova import quota from nova.tests.unit.api.openstack import fakes from nova.tests.unit.policies import base class LimitsPolicyTest(base.BasePolicyTest): """Test Limits APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. 
""" def setUp(self): super(LimitsPolicyTest, self).setUp() self.controller = limits.LimitsController() self.req = fakes.HTTPRequest.blank('') self.absolute_limits = { 'ram': 512, 'instances': 5, 'cores': 21, 'key_pairs': 10, 'floating_ips': 10, 'security_groups': 10, 'security_group_rules': 20, } def stub_get_project_quotas(context, project_id, usages=True): return {k: dict(limit=v, in_use=v // 2) for k, v in self.absolute_limits.items()} mock_get_project_quotas = mock.patch.object( quota.QUOTAS, "get_project_quotas", side_effect = stub_get_project_quotas) mock_get_project_quotas.start() # Check that everyone is able to get their limits self.everyone_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context] self.everyone_unauthorized_contexts = [] # Check that system reader is able to get other projects limit. # NOTE(gmann): Until old default rule which is admin_api is # deprecated and not removed, project admin and legacy admin # will be able to get limit. This make sure that existing # tokens will keep working even we have changed this policy defaults # to reader role. self.reader_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context] # Check that non-admin is not able to get other projects limit. self.reader_unauthorized_contexts = [ self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] def test_get_limits_policy(self): rule_name = limits_policies.BASE_POLICY_NAME self.common_policy_check(self.everyone_authorized_contexts, self.everyone_unauthorized_contexts, rule_name, self.controller.index, self.req) def test_get_other_limits_policy(self): req = fakes.HTTPRequest.blank('/?tenant_id=faketenant') rule_name = limits_policies.OTHER_PROJECT_LIMIT_POLICY_NAME self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.index, req) class LimitsScopeTypePolicyTest(LimitsPolicyTest): """Test Limits APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(LimitsScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system reader is able to get other projects limit. self.reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context] # Check that non-system reader is not able toget other # projects limit. self.reader_unauthorized_contexts = [ self.legacy_admin_context, self.system_foo_context, self.project_admin_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] class LimitsNoLegacyPolicyTest(LimitsScopeTypePolicyTest): """Test Limits APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system APIs. 
""" without_deprecated_rules = True rules_without_deprecation = { limits_policies.OTHER_PROJECT_LIMIT_POLICY_NAME: base_policy.SYSTEM_READER} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_lock_server.py0000664000175000017500000002354000000000000023414 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.api.openstack.compute import lock_server from nova.compute import vm_states from nova import exception from nova.policies import base as base_policy from nova.policies import lock_server as ls_policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class LockServerPolicyTest(base.BasePolicyTest): """Test Lock server APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(LockServerPolicyTest, self).setUp() self.controller = lock_server.LockServerController() self.req = fakes.HTTPRequest.blank('') user_id = self.req.environ['nova.context'].user_id self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock uuid = uuids.fake_id self.instance = fake_instance.fake_instance_obj( self.project_member_context, id=1, uuid=uuid, project_id=self.project_id, user_id=user_id, vm_state=vm_states.ACTIVE, task_state=None, launched_at=timeutils.utcnow()) self.mock_get.return_value = self.instance # Check that admin or and server owner is able to lock/unlock # the server self.admin_or_owner_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context] # Check that non-admin/owner is not able to lock/unlock # the server self.admin_or_owner_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context ] # Check that admin is able to unlock the server which is # locked by other self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context] # Check that non-admin is not able to unlock the server # which is locked by other self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context ] @mock.patch('nova.compute.api.API.lock') def test_lock_server_policy(self, mock_lock): rule_name = ls_policies.POLICY_ROOT % 'lock' self.common_policy_check(self.admin_or_owner_authorized_contexts, 
self.admin_or_owner_unauthorized_contexts, rule_name, self.controller._lock, self.req, self.instance.uuid, body={'lock': {}}) @mock.patch('nova.compute.api.API.unlock') def test_unlock_server_policy(self, mock_unlock): rule_name = ls_policies.POLICY_ROOT % 'unlock' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller._unlock, self.req, self.instance.uuid, body={'unlock': {}}) @mock.patch('nova.compute.api.API.unlock') @mock.patch('nova.compute.api.API.is_expected_locked_by') def test_unlock_override_server_policy(self, mock_expected, mock_unlock): mock_expected.return_value = False rule = ls_policies.POLICY_ROOT % 'unlock' self.policy.set_rules({rule: "@"}, overwrite=False) rule_name = ls_policies.POLICY_ROOT % 'unlock:unlock_override' self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller._unlock, self.req, self.instance.uuid, body={'unlock': {}}) def test_lock_server_policy_failed_with_other_user(self): # Change the user_id in request context. req = fakes.HTTPRequest.blank('') req.environ['nova.context'].user_id = 'other-user' rule_name = ls_policies.POLICY_ROOT % 'lock' self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._lock, req, fakes.FAKE_UUID, body={'lock': {}}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) @mock.patch('nova.compute.api.API.lock') def test_lock_sevrer_overridden_policy_pass_with_same_user( self, mock_lock): rule_name = ls_policies.POLICY_ROOT % 'lock' self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) self.controller._lock(self.req, fakes.FAKE_UUID, body={'lock': {}}) class LockServerScopeTypePolicyTest(LockServerPolicyTest): """Test Lock Server APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(LockServerScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class LockServerNoLegacyPolicyTest(LockServerScopeTypePolicyTest): """Test Lock Server APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system APIs. 
""" without_deprecated_rules = True def setUp(self): super(LockServerNoLegacyPolicyTest, self).setUp() # Check that system admin or and server owner is able to lock/unlock # the server self.admin_or_owner_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context] # Check that non-system/admin/owner is not able to lock/unlock # the server self.admin_or_owner_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context, self.project_reader_context, self.project_foo_context ] # Check that system admin is able to unlock the server which is # locked by other self.admin_authorized_contexts = [ self.system_admin_context] # Check that system non-admin is not able to unlock the server # which is locked by other self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_admin_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] class LockServerOverridePolicyTest(LockServerNoLegacyPolicyTest): """Test Lock Server APIs policies with system and project scoped but default to system roles only are allowed for project roles if override by operators. This test is with system scope enable and no more deprecated rules. """ def setUp(self): super(LockServerOverridePolicyTest, self).setUp() # Check that system admin or project scoped role as override above # is able to unlock the server which is locked by other self.admin_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context] # Check that non-system admin or project role is not able to # unlock the server which is locked by other self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] def test_unlock_override_server_policy(self): rule = ls_policies.POLICY_ROOT % 'unlock:unlock_override' self.policy.set_rules({ # make unlock allowed for everyone so that we can check unlock # override policy. ls_policies.POLICY_ROOT % 'unlock': "@", rule: base_policy.PROJECT_MEMBER_OR_SYSTEM_ADMIN}, overwrite=False) super(LockServerOverridePolicyTest, self).test_unlock_override_server_policy() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_migrate_server.py0000664000175000017500000001601300000000000024111 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.api.openstack.compute import migrate_server from nova.compute import vm_states from nova.policies import base as base_policy from nova.policies import migrate_server as ms_policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class MigrateServerPolicyTest(base.BasePolicyTest): """Test Migrate Server APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(MigrateServerPolicyTest, self).setUp() self.controller = migrate_server.MigrateServerController() self.req = fakes.HTTPRequest.blank('') user_id = self.req.environ['nova.context'].user_id self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock uuid = uuids.fake_id self.instance = fake_instance.fake_instance_obj( self.project_member_context, project_id=self.project_id, id=1, uuid=uuid, user_id=user_id, vm_state=vm_states.ACTIVE, task_state=None, launched_at=timeutils.utcnow()) self.mock_get.return_value = self.instance # Check that admin is able to migrate the server. self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context ] # Check that non-admin is not able to migrate the server self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context ] @mock.patch('nova.compute.api.API.resize') @mock.patch('nova.api.openstack.common.' 'instance_has_port_with_resource_request', return_value=False) def test_migrate_server_policy(self, mock_port, mock_resize): rule_name = ms_policies.POLICY_ROOT % 'migrate' self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller._migrate, self.req, self.instance.uuid, body={'migrate': None}) @mock.patch('nova.compute.api.API.live_migrate') @mock.patch('nova.api.openstack.common.' 'instance_has_port_with_resource_request', return_value=False) def test_migrate_live_server_policy(self, mock_port, mock_live_migrate): rule_name = ms_policies.POLICY_ROOT % 'migrate_live' body = {'os-migrateLive': { 'host': 'hostname', 'block_migration': "False", 'disk_over_commit': "False"} } self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller._migrate_live, self.req, self.instance.uuid, body=body) class MigrateServerScopeTypePolicyTest(MigrateServerPolicyTest): """Test Migrate Server APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. 
""" def setUp(self): super(MigrateServerScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class MigrateServerNoLegacyPolicyTest(MigrateServerScopeTypePolicyTest): """Test Migrate Server APIs policies with system scope enabled, and no more deprecated rules. """ without_deprecated_rules = True def setUp(self): super(MigrateServerNoLegacyPolicyTest, self).setUp() # Check that system admin is able to migrate the server. self.admin_authorized_contexts = [ self.system_admin_context ] # Check that non system admin is not able to migrate the server self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context ] class MigrateServerOverridePolicyTest(MigrateServerNoLegacyPolicyTest): """Test Migrate Server APIs policies with system and project scoped but default to system roles only are allowed for project roles if override by operators. This test is with system scope enable and no more deprecated rules. """ def setUp(self): super(MigrateServerOverridePolicyTest, self).setUp() rule_migrate = ms_policies.POLICY_ROOT % 'migrate' rule_live_migrate = ms_policies.POLICY_ROOT % 'migrate_live' # NOTE(gmann): override the rule to project member and verify it # work as policy is system and projct scoped. self.policy.set_rules({ rule_migrate: base_policy.PROJECT_MEMBER_OR_SYSTEM_ADMIN, rule_live_migrate: base_policy.PROJECT_MEMBER_OR_SYSTEM_ADMIN}, overwrite=False) # Check that system admin or project scoped role as override above # is able to migrate the server self.admin_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context] # Check that non-system admin or project role is not able to # migrate the server self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_migrations.py0000664000175000017500000000672200000000000023255 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.api.openstack.compute import migrations from nova.policies import migrations as migrations_policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit.policies import base class MigrationsPolicyTest(base.BasePolicyTest): """Test Migrations APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. 
""" def setUp(self): super(MigrationsPolicyTest, self).setUp() self.controller = migrations.MigrationsController() self.req = fakes.HTTPRequest.blank('') # Check that admin is able to list migrations. self.reader_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context ] # Check that non-admin is not able to list migrations. self.reader_unauthorized_contexts = [ self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context ] @mock.patch('nova.compute.api.API.get_migrations') def test_list_migrations_policy(self, mock_migration): rule_name = migrations_policies.POLICY_ROOT % 'index' self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.index, self.req) class MigrationsScopeTypePolicyTest(MigrationsPolicyTest): """Test Migrations APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(MigrationsScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system reader is able to list migrations. self.reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context] # Check that non system reader is not able to list migrations. self.reader_unauthorized_contexts = [ self.legacy_admin_context, self.project_admin_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_pause_server.py0000664000175000017500000001444300000000000023603 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.api.openstack.compute import pause_server from nova.compute import vm_states from nova import exception from nova.policies import pause_server as ps_policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class PauseServerPolicyTest(base.BasePolicyTest): """Test Pause server APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. 
""" def setUp(self): super(PauseServerPolicyTest, self).setUp() self.controller = pause_server.PauseServerController() self.req = fakes.HTTPRequest.blank('') user_id = self.req.environ['nova.context'].user_id self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock uuid = uuids.fake_id self.instance = fake_instance.fake_instance_obj( self.project_member_context, id=1, uuid=uuid, project_id=self.project_id, user_id=user_id, vm_state=vm_states.ACTIVE, task_state=None, launched_at=timeutils.utcnow()) self.mock_get.return_value = self.instance # Check that admin or and server owner is able to pause/unpause # the server self.admin_or_owner_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context] # Check that non-admin/owner is not able to pause/unpause # the server self.admin_or_owner_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context ] @mock.patch('nova.compute.api.API.pause') def test_pause_server_policy(self, mock_pause): rule_name = ps_policies.POLICY_ROOT % 'pause' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller._pause, self.req, self.instance.uuid, body={'pause': {}}) @mock.patch('nova.compute.api.API.unpause') def test_unpause_server_policy(self, mock_unpause): rule_name = ps_policies.POLICY_ROOT % 'unpause' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller._unpause, self.req, self.instance.uuid, body={'unpause': {}}) def test_pause_server_policy_failed_with_other_user(self): # Change the user_id in request context. req = fakes.HTTPRequest.blank('') req.environ['nova.context'].user_id = 'other-user' rule_name = ps_policies.POLICY_ROOT % 'pause' self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._pause, req, fakes.FAKE_UUID, body={'pause': {}}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) @mock.patch('nova.compute.api.API.pause') def test_pause_server_overridden_policy_pass_with_same_user( self, mock_pause): rule_name = ps_policies.POLICY_ROOT % 'pause' self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) self.controller._pause(self.req, fakes.FAKE_UUID, body={'pause': {}}) class PauseServerScopeTypePolicyTest(PauseServerPolicyTest): """Test Pause Server APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(PauseServerScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class PauseServerNoLegacyPolicyTest(PauseServerScopeTypePolicyTest): """Test Pause Server APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system APIs. 
""" without_deprecated_rules = True def setUp(self): super(PauseServerNoLegacyPolicyTest, self).setUp() # Check that system admin or server owner is able to pause/unpause # the server self.admin_or_owner_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context] # Check that non-system/admin/owner is not able to pause/unpause # the server self.admin_or_owner_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context, self.project_reader_context, self.project_foo_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_quota_class_sets.py0000664000175000017500000001347500000000000024460 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.api.openstack.compute import quota_classes from nova.policies import quota_class_sets as policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit.policies import base class QuotaClassSetsPolicyTest(base.BasePolicyTest): """Test Quota Class Set APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. 
""" def setUp(self): super(QuotaClassSetsPolicyTest, self).setUp() self.controller = quota_classes.QuotaClassSetsController() self.req = fakes.HTTPRequest.blank('') # Check that admin is able to update quota class self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context] # Check that non-admin is not able to update quota class self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context ] # Check that system reader is able to get quota class self.system_reader_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context] # Check that non-system reader is not able to get quota class self.system_reader_unauthorized_contexts = [ self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context ] @mock.patch('nova.objects.Quotas.update_class') def test_update_quota_class_sets_policy(self, mock_update): rule_name = policies.POLICY_ROOT % 'update' body = {'quota_class_set': {'metadata_items': 128, 'ram': 51200, 'floating_ips': -1, 'fixed_ips': -1, 'instances': 10, 'injected_files': 5, 'cores': 20}} self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.update, self.req, 'test_class', body=body) @mock.patch('nova.quota.QUOTAS.get_class_quotas') def test_show_quota_class_sets_policy(self, mock_get): rule_name = policies.POLICY_ROOT % 'show' self.common_policy_check(self.system_reader_authorized_contexts, self.system_reader_unauthorized_contexts, rule_name, self.controller.show, self.req, 'test_class') class QuotaClassSetsScopeTypePolicyTest(QuotaClassSetsPolicyTest): """Test Quota Class Sets APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. 
""" def setUp(self): super(QuotaClassSetsScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system admin is able to update and get quota class self.admin_authorized_contexts = [ self.system_admin_context] # Check that non-system/admin is not able to update and get quota class self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.project_admin_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context ] # Check that system reader is able to get quota class self.system_reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context] # Check that non-system reader is not able to get quota class self.system_reader_unauthorized_contexts = [ self.legacy_admin_context, self.project_admin_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context ] class QuotaClassSetsNoLegacyPolicyTest(QuotaClassSetsScopeTypePolicyTest): """Test Quota Class Sets APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system APIs. """ without_deprecated_rules = True def setUp(self): super(QuotaClassSetsNoLegacyPolicyTest, self).setUp() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/policies/test_quota_sets.py0000664000175000017500000002325200000000000023265 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.api.openstack.compute import quota_sets from nova.policies import quota_sets as policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit.policies import base class QuotaSetsPolicyTest(base.BasePolicyTest): """Test Quota Sets APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(QuotaSetsPolicyTest, self).setUp() self.controller = quota_sets.QuotaSetsController() self.controller._validate_quota_limit = mock.MagicMock() self.req = fakes.HTTPRequest.blank('') self.project_id = self.req.environ['nova.context'].project_id # Check that admin is able to update or revert quota # to default. self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context] # Check that non-admin is not able to update or revert # quota to default. 
self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context, self.other_project_reader_context ] # Check that system reader is able to get another project's quota. self.system_reader_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context] # Check that non-system reader is not able to get another # project's quota. self.system_reader_unauthorized_contexts = [ self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context, self.other_project_reader_context ] # Check that everyone is able to get the default quota or # their own quota. self.everyone_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context, self.other_project_reader_context ] self.everyone_unauthorized_contexts = [] # Check that system reader or owner is able to get their own quota. self.system_reader_or_owner_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context, self.other_project_reader_context ] @mock.patch('nova.quota.QUOTAS.get_project_quotas') @mock.patch('nova.quota.QUOTAS.get_settable_quotas') def test_update_quota_sets_policy(self, mock_update, mock_get): rule_name = policies.POLICY_ROOT % 'update' body = {'quota_set': { 'instances': 50, 'cores': 50} } self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.update, self.req, self.project_id, body=body) @mock.patch('nova.objects.Quotas.destroy_all_by_project') def test_delete_quota_sets_policy(self, mock_delete): rule_name = policies.POLICY_ROOT % 'delete' self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.delete, self.req, self.project_id) @mock.patch('nova.quota.QUOTAS.get_defaults') def test_default_quota_sets_policy(self, mock_default): rule_name = policies.POLICY_ROOT % 'defaults' self.common_policy_check(self.everyone_authorized_contexts, self.everyone_unauthorized_contexts, rule_name, self.controller.defaults, self.req, self.project_id) @mock.patch('nova.quota.QUOTAS.get_project_quotas') def test_detail_quota_sets_policy(self, mock_get): rule_name = policies.POLICY_ROOT % 'detail' self.common_policy_check(self.system_reader_authorized_contexts, self.system_reader_unauthorized_contexts, rule_name, self.controller.detail, self.req, 'try-other-project') # Check if everyone (owner) is able to get their own quota for cxtx in self.system_reader_or_owner_authorized_contexts: req = fakes.HTTPRequest.blank('') req.environ['nova.context'] = cxtx self.controller.detail(req, cxtx.project_id) @mock.patch('nova.quota.QUOTAS.get_project_quotas') def test_show_quota_sets_policy(self, mock_get): rule_name = policies.POLICY_ROOT % 'show' self.common_policy_check(self.system_reader_authorized_contexts, 
self.system_reader_unauthorized_contexts, rule_name, self.controller.show, self.req, 'try-other-project') # Check if everyone (owner) is able to get their own quota for cxtx in self.system_reader_or_owner_authorized_contexts: req = fakes.HTTPRequest.blank('') req.environ['nova.context'] = cxtx self.controller.show(req, cxtx.project_id) class QuotaSetsScopeTypePolicyTest(QuotaSetsPolicyTest): """Test Quota Sets APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(QuotaSetsScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system admin is able to update or revert quota # to default. self.admin_authorized_contexts = [ self.system_admin_context] # Check that non-system admin is not able to update or revert # quota to default. self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.project_admin_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context, self.other_project_reader_context ] class QuotaSetsNoLegacyPolicyTest(QuotaSetsScopeTypePolicyTest): """Test Quota Sets APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system APIs. """ without_deprecated_rules = True def setUp(self): super(QuotaSetsNoLegacyPolicyTest, self).setUp() # Check that system reader is able to get another project's quota. self.system_reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context] # Check that non-system reader is not able to get anotherproject's # quota. self.system_reader_unauthorized_contexts = [ self.legacy_admin_context, self.project_admin_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context, self.other_project_reader_context ] # Check that everyone is able to get their own quota. self.system_reader_or_owner_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context, self.project_member_context, self.project_reader_context, self.other_project_member_context, self.other_project_reader_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_remote_consoles.py0000664000175000017500000001174700000000000024304 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import fixtures import mock from nova.policies import remote_consoles as rc_policies from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.api.openstack.compute import remote_consoles from nova.compute import vm_states from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class RemoteConsolesPolicyTest(base.BasePolicyTest): """Test Remote Consoles APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(RemoteConsolesPolicyTest, self).setUp() self.controller = remote_consoles.RemoteConsolesController() mock_handler = mock.MagicMock() mock_handler.return_value = {'url': "http://fake"} self.controller.handlers['vnc'] = mock_handler self.req = fakes.HTTPRequest.blank('', version='2.6') user_id = self.req.environ['nova.context'].user_id self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock uuid = uuids.fake_id self.instance = fake_instance.fake_instance_obj( self.project_member_context, id=1, uuid=uuid, project_id=self.project_id, user_id=user_id, vm_state=vm_states.ACTIVE, task_state=None, launched_at=timeutils.utcnow()) self.mock_get.return_value = self.instance # Check that admin or and server owner is able to get server # remote consoles. self.admin_or_owner_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context] # Check that non-admin/owner is not able to get server # remote consoles. self.admin_or_owner_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context ] def test_create_console_policy(self): rule_name = rc_policies.BASE_POLICY_NAME body = {'remote_console': {'protocol': 'vnc', 'type': 'novnc'}} self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller.create, self.req, self.instance.uuid, body=body) class RemoteConsolesScopeTypePolicyTest(RemoteConsolesPolicyTest): """Test Remote Consoles APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(RemoteConsolesScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class RemoteConsolesNoLegacyPolicyTest(RemoteConsolesScopeTypePolicyTest): """Test Remote Consoles APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system APIs. """ without_deprecated_rules = True def setUp(self): super(RemoteConsolesNoLegacyPolicyTest, self).setUp() # Check that system admin or and server owner is able to get server # remote consoles. self.admin_or_owner_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context] # Check that non-system/admin/owner is not able to get server # remote consoles. 
self.admin_or_owner_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context, self.project_reader_context, self.project_foo_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_rescue.py0000664000175000017500000001506300000000000022365 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from nova.policies import base as base_policy from nova.policies import rescue as rs_policies from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.api.openstack.compute import rescue from nova.compute import vm_states from nova import exception from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class RescueServerPolicyTest(base.BasePolicyTest): """Test Rescue Server APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. 
""" def setUp(self): super(RescueServerPolicyTest, self).setUp() self.controller = rescue.RescueController() self.req = fakes.HTTPRequest.blank('') user_id = self.req.environ['nova.context'].user_id self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock uuid = uuids.fake_id self.instance = fake_instance.fake_instance_obj( self.project_member_context, id=1, uuid=uuid, project_id=self.project_id, user_id=user_id, vm_state=vm_states.ACTIVE, task_state=None, launched_at=timeutils.utcnow()) self.mock_get.return_value = self.instance # Check that admin or and server owner is able to rescue/unrescue # the sevrer self.admin_or_owner_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context] # Check that non-admin/owner is not able to rescue/unrescue # the server self.admin_or_owner_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context ] @mock.patch('nova.compute.api.API.rescue') def test_rescue_server_policy(self, mock_rescue): rule_name = rs_policies.BASE_POLICY_NAME self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller._rescue, self.req, self.instance.uuid, body={'rescue': {}}) @mock.patch('nova.compute.api.API.unrescue') def test_unrescue_server_policy(self, mock_unrescue): rule_name = rs_policies.UNRESCUE_POLICY_NAME self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller._unrescue, self.req, self.instance.uuid, body={'unrescue': {}}) def test_rescue_server_policy_failed_with_other_user(self): # Change the user_id in request context. req = fakes.HTTPRequest.blank('') req.environ['nova.context'].user_id = 'other-user' rule_name = rs_policies.BASE_POLICY_NAME self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._rescue, req, fakes.FAKE_UUID, body={'rescue': {}}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) @mock.patch('nova.compute.api.API.rescue') def test_rescue_sevrer_overridden_policy_pass_with_same_user( self, mock_rescue): rule_name = rs_policies.BASE_POLICY_NAME self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) self.controller._rescue(self.req, fakes.FAKE_UUID, body={'rescue': {}}) class RescueServerScopeTypePolicyTest(RescueServerPolicyTest): """Test Rescue Server APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(RescueServerScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class RescueServerNoLegacyPolicyTest(RescueServerScopeTypePolicyTest): """Test Rescue Server APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system APIs. 
""" without_deprecated_rules = True rules_without_deprecation = { rs_policies.UNRESCUE_POLICY_NAME: base_policy.PROJECT_MEMBER_OR_SYSTEM_ADMIN, rs_policies.BASE_POLICY_NAME: base_policy.PROJECT_MEMBER_OR_SYSTEM_ADMIN} def setUp(self): super(RescueServerNoLegacyPolicyTest, self).setUp() # Check that system admin or and server owner is able to # rescue/unrescue the sevrer self.admin_or_owner_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context] # Check that non-system/admin/owner is not able to rescue/unrescue # the server self.admin_or_owner_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context, self.project_reader_context, self.project_foo_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_security_groups.py0000664000175000017500000001735000000000000024346 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.api.openstack.compute import security_groups from nova.compute import vm_states from nova.policies import base as base_policy from nova.policies import security_groups as policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class SecurityGroupsPolicyTest(base.BasePolicyTest): """Test Security Groups APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(SecurityGroupsPolicyTest, self).setUp() self.controller = security_groups.ServerSecurityGroupController() self.action_ctr = security_groups.SecurityGroupActionController() self.req = fakes.HTTPRequest.blank('') user_id = self.req.environ['nova.context'].user_id self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock uuid = uuids.fake_id self.instance = fake_instance.fake_instance_obj( self.project_member_context, id=1, uuid=uuid, project_id=self.project_id, user_id=user_id, vm_state=vm_states.ACTIVE, task_state=None, launched_at=timeutils.utcnow()) self.mock_get.return_value = self.instance # Check that admin or and server owner is able to operate # server security groups. self.admin_or_owner_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context] # Check that non-admin/owner is not able to operate # server security groups. 
self.admin_or_owner_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context, self.other_project_reader_context ] self.reader_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context, self.project_reader_context, self.project_member_context, self.project_foo_context ] self.reader_unauthorized_contexts = [ self.system_foo_context, self.other_project_member_context, self.other_project_reader_context ] @mock.patch('nova.network.security_group_api.get_instance_security_groups') def test_get_security_groups_policy(self, mock_get): rule_name = policies.POLICY_NAME % 'list' self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.index, self.req, self.instance.uuid) @mock.patch('nova.network.security_group_api.add_to_instance') def test_add_security_groups_policy(self, mock_add): rule_name = policies.POLICY_NAME % 'add' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.action_ctr._addSecurityGroup, self.req, self.instance.uuid, body={'addSecurityGroup': {'name': 'fake'}}) @mock.patch('nova.network.security_group_api.remove_from_instance') def test_remove_security_groups_policy(self, mock_remove): rule_name = policies.POLICY_NAME % 'remove' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.action_ctr._removeSecurityGroup, self.req, self.instance.uuid, body={'removeSecurityGroup': {'name': 'fake'}}) class SecurityGroupsScopeTypePolicyTest(SecurityGroupsPolicyTest): """Test Security Groups APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(SecurityGroupsScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class SecurityGroupsNoLegacyPolicyTest(SecurityGroupsScopeTypePolicyTest): """Test Security Groups APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system_admin_or_owner APIs. """ without_deprecated_rules = True rules_without_deprecation = { policies.POLICY_NAME % 'list': base_policy.PROJECT_READER_OR_SYSTEM_READER, policies.POLICY_NAME % 'add': base_policy.PROJECT_MEMBER_OR_SYSTEM_ADMIN, policies.POLICY_NAME % 'remove': base_policy.PROJECT_MEMBER_OR_SYSTEM_ADMIN} def setUp(self): super(SecurityGroupsNoLegacyPolicyTest, self).setUp() # Check that system or projct admin or owner is able to operate # server security groups. self.admin_or_owner_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context] # Check that non-system and non-admin/owner is not able to operate # server security groups. 
self.admin_or_owner_unauthorized_contexts = [ self.legacy_admin_context, self.project_reader_context, self.project_foo_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context, self.other_project_reader_context] # Check that system reader or owner is able to get # server security groups. self.reader_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context, self.project_reader_context, self.project_member_context, ] # Check that non-system reader and non-admin/owner is not able to get # server security groups. self.reader_unauthorized_contexts = [ self.legacy_admin_context, self.project_foo_context, self.system_foo_context, self.other_project_member_context, self.other_project_reader_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_server_diagnostics.py0000664000175000017500000001363300000000000024775 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.api.openstack.compute import server_diagnostics from nova.compute import vm_states from nova.policies import base as base_policy from nova.policies import server_diagnostics as policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class ServerDiagnosticsPolicyTest(base.BasePolicyTest): """Test Server Diagnostics APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(ServerDiagnosticsPolicyTest, self).setUp() self.controller = server_diagnostics.ServerDiagnosticsController() self.req = fakes.HTTPRequest.blank('', version='2.48') self.controller.compute_api.get_instance_diagnostics = mock.MagicMock() self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock self.instance = fake_instance.fake_instance_obj( self.project_member_context, project_id=self.project_id, id=1, uuid=uuids.fake_id, vm_state=vm_states.ACTIVE, task_state=None, launched_at=timeutils.utcnow()) self.mock_get.return_value = self.instance # Check that admin is able to get server diagnostics. self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context ] # Check that non-admin is not able to get server diagnostics.
self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context ] def test_server_diagnostics_policy(self): rule_name = policies.BASE_POLICY_NAME self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.index, self.req, self.instance.uuid) class ServerDiagnosticsScopeTypePolicyTest(ServerDiagnosticsPolicyTest): """Test Server Diagnostics APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(ServerDiagnosticsScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class ServerDiagnosticsNoLegacyPolicyTest( ServerDiagnosticsScopeTypePolicyTest): """Test Server Diagnostics APIs policies with system scope enabled, and no more deprecated rules. """ without_deprecated_rules = True def setUp(self): super(ServerDiagnosticsNoLegacyPolicyTest, self).setUp() # Check that system admin is able to get server diagnostics. self.admin_authorized_contexts = [ self.system_admin_context ] # Check that non system admin is not able to get server diagnostics. self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context ] class ServerDiagnosticsOverridePolicyTest(ServerDiagnosticsNoLegacyPolicyTest): """Test Server Diagnostics APIs policies with system and project scoped but default to system roles only are allowed for project roles if override by operators. This test is with system scope enable and no more deprecated rules. """ def setUp(self): super(ServerDiagnosticsOverridePolicyTest, self).setUp() rule = policies.BASE_POLICY_NAME # NOTE(gmann): override the rule to project member and verify it # work as policy is system and projct scoped. self.policy.set_rules({ rule: base_policy.PROJECT_MEMBER_OR_SYSTEM_ADMIN}, overwrite=False) # Check that system admin or project scoped role as override above # is able to get server diagnostics. self.admin_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context] # Check that non-system admin or project role is not able to # get server diagnostics. self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_server_external_events.py0000664000175000017500000001050000000000000025662 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils.fixture import uuidsentinel as uuids from nova.api.openstack.compute import server_external_events as ev from nova.policies import server_external_events as policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit.policies import base class ServerExternalEventsPolicyTest(base.BasePolicyTest): """Test Server External Events APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(ServerExternalEventsPolicyTest, self).setUp() self.controller = ev.ServerExternalEventsController() self.req = fakes.HTTPRequest.blank('') # Check that admin is able to create the server external events. self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context ] # Check that non-admin is not able to create the server # external events. self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context ] @mock.patch('nova.compute.api.API.external_instance_event') @mock.patch('nova.objects.InstanceMappingList.get_by_instance_uuids') @mock.patch('nova.objects.InstanceList.get_by_filters') def test_server_external_events_policy(self, mock_event, mock_get, mock_filter): rule_name = policies.POLICY_ROOT % 'create' body = {'events': [{'name': 'network-vif-plugged', 'server_uuid': uuids.fake_id, 'status': 'completed'}] } self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.create, self.req, body=body) class ServerExternalEventsScopeTypePolicyTest(ServerExternalEventsPolicyTest): """Test Server External Events APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(ServerExternalEventsScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that admin is able to create the server external events. self.admin_authorized_contexts = [ self.system_admin_context, ] # Check that non-admin is not able to create the server # external events. 
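# (Editorial note, not part of the original test: the
# self.flags(enforce_scope=True, group="oslo_policy") call above is the test
# equivalent of an operator setting
#     [oslo_policy]
#     enforce_scope = True
# in nova.conf, which is what the scope-type docstrings in this module refer
# to.)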
self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context ] class ServerExternalEventsNoLegacyPolicyTest( ServerExternalEventsScopeTypePolicyTest): """Test Server External Events APIs policies with system scope enabled, and no more deprecated rules. """ without_deprecated_rules = True ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_server_groups.py0000664000175000017500000002666100000000000024012 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from nova.api.openstack.compute import server_groups from nova import objects from nova.policies import server_groups as policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit.policies import base class ServerGroupPolicyTest(base.BasePolicyTest): """Test Server Groups APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(ServerGroupPolicyTest, self).setUp() self.controller = server_groups.ServerGroupController() self.req = fakes.HTTPRequest.blank('') self.mock_get = self.useFixture( fixtures.MockPatch('nova.objects.InstanceGroup.get_by_uuid')).mock self.sg = [objects.InstanceGroup( uuid=uuids.fake_id, name='fake', project_id=self.project_id, user_id='u1', policies=[], members=[]), objects.InstanceGroup( uuid=uuids.fake_id, name='fake2', project_id='proj2', user_id='u2', policies=[], members=[])] self.mock_get.return_value = self.sg[0] # Check that admin or and owner is able to delete # the server group. self.admin_or_owner_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context] # Check that non-admin/owner is not able to delete # the server group. self.admin_or_owner_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context ] # Check that system reader or owner is able to get # the server group. Due to old default everyone # is allowed to perform this operation. 
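# (Editorial note, assumption for illustration: the "old default" mentioned
# above is the legacy admin-or-owner style rule, roughly the oslo.policy
# check string "is_admin:True or project_id:%(project_id)s", which remains
# in effect as a deprecated fallback unless without_deprecated_rules is set,
# as it is in the NoLegacy test classes below.)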
self.system_reader_or_owner_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.system_member_context, self.system_reader_context, self.project_foo_context ] self.system_reader_or_owner_unauthorized_contexts = [ self.system_foo_context, self.other_project_member_context ] # Check that everyone is able to list # theie own server group. Due to old defaults everyone # is able to list their server groups. self.everyone_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context] self.everyone_unauthorized_contexts = [ ] # Check that project member is able to create server group. # Due to old defaults everyone is able to list their server groups. self.project_member_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.system_member_context, self.project_reader_context, self.project_foo_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context] self.project_member_unauthorized_contexts = [] @mock.patch('nova.objects.InstanceGroupList.get_by_project_id') def test_index_server_groups_policy(self, mock_get): rule_name = policies.POLICY_ROOT % 'index' self.common_policy_check(self.everyone_authorized_contexts, self.everyone_unauthorized_contexts, rule_name, self.controller.index, self.req) @mock.patch('nova.objects.InstanceGroupList.get_all') def test_index_all_project_server_groups_policy(self, mock_get_all): mock_get_all.return_value = objects.InstanceGroupList(objects=self.sg) # 'index' policy is checked before 'index:all_projects' so # we have to allow it for everyone otherwise it will fail for # unauthorized contexts here. rule = policies.POLICY_ROOT % 'index' self.policy.set_rules({rule: "@"}, overwrite=False) admin_req = fakes.HTTPRequest.blank( '/os-server-groups?all_projects=True', version='2.13', use_admin_context=True) # Check admin user get all projects server groups. resp = self.controller.index(admin_req) projs = [sg['project_id'] for sg in resp['server_groups']] self.assertEqual(2, len(projs)) self.assertIn('proj2', projs) # Check non-admin user does not get all projects server groups. 
req = fakes.HTTPRequest.blank('/os-server-groups?all_projects=True', version='2.13') resp = self.controller.index(req) projs = [sg['project_id'] for sg in resp['server_groups']] self.assertNotIn('proj2', projs) def test_show_server_groups_policy(self): rule_name = policies.POLICY_ROOT % 'show' self.common_policy_check( self.system_reader_or_owner_authorized_contexts, self.system_reader_or_owner_unauthorized_contexts, rule_name, self.controller.show, self.req, uuids.fake_id) @mock.patch('nova.objects.Quotas.check_deltas') def test_create_server_groups_policy(self, mock_quota): rule_name = policies.POLICY_ROOT % 'create' body = {'server_group': {'name': 'fake', 'policies': ['affinity']}} self.common_policy_check(self.project_member_authorized_contexts, self.project_member_unauthorized_contexts, rule_name, self.controller.create, self.req, body=body) @mock.patch('nova.objects.InstanceGroup.destroy') def test_delete_server_groups_policy(self, mock_destroy): rule_name = policies.POLICY_ROOT % 'delete' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller.delete, self.req, uuids.fake_id) class ServerGroupScopeTypePolicyTest(ServerGroupPolicyTest): """Test Server Groups APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(ServerGroupScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check if project scoped can create the server group. self.project_member_authorized_contexts = [ self.legacy_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context ] # Check if non-project scoped cannot create the server group. self.project_member_unauthorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context ] # TODO(gmann): Test this with system scope once we remove # the hardcoded admin check def test_index_all_project_server_groups_policy(self): pass class ServerGroupNoLegacyPolicyTest(ServerGroupScopeTypePolicyTest): """Test Server Group APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system APIs. """ without_deprecated_rules = True def setUp(self): super(ServerGroupNoLegacyPolicyTest, self).setUp() # Check that system admin or and owner is able to delete # the server group. self.admin_or_owner_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context, ] # Check that non-system admin/owner is not able to delete # the server group. self.admin_or_owner_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context ] # Check that system reader or owner is able to get # the server group. 
self.system_reader_or_owner_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.system_member_context, self.system_reader_context ] self.system_reader_or_owner_unauthorized_contexts = [ self.legacy_admin_context, self.system_foo_context, self.other_project_member_context, self.project_foo_context ] self.everyone_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.system_member_context, self.system_reader_context, self.other_project_member_context ] self.everyone_unauthorized_contexts = [ self.project_foo_context, self.system_foo_context ] # Check if project member can create the server group. self.project_member_authorized_contexts = [ self.legacy_admin_context, self.project_admin_context, self.project_member_context, self.other_project_member_context ] # Check if non-project member cannot create the server group. self.project_member_unauthorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_reader_context, self.project_foo_context, ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_server_ips.py0000664000175000017500000001231100000000000023251 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.api.openstack.compute import ips from nova.compute import vm_states from nova.policies import ips as ips_policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class ServerIpsPolicyTest(base.BasePolicyTest): """Test Server IPs APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(ServerIpsPolicyTest, self).setUp() self.controller = ips.IPsController() self.req = fakes.HTTPRequest.blank('') self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock uuid = uuids.fake_id self.instance = fake_instance.fake_instance_obj( self.project_member_context, id=1, uuid=uuid, project_id=self.project_id, vm_state=vm_states.ACTIVE, task_state=None, launched_at=timeutils.utcnow()) self.mock_get.return_value = self.instance self.mock_get_network = self.useFixture( fixtures.MockPatch('nova.api.openstack.common' '.get_networks_for_instance')).mock self.mock_get_network.return_value = {'net1': {'ips': '', 'floating_ips': ''}} # Check that admin or and server owner is able to get server # IP addresses. 
self.reader_or_owner_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.system_member_context, self.system_reader_context] # Check that non-admin/owner is not able to get the server IP # addresses self.reader_or_owner_unauthorized_contexts = [ self.system_foo_context, self.other_project_member_context ] def test_index_ips_policy(self): rule_name = ips_policies.POLICY_ROOT % 'index' self.common_policy_check(self.reader_or_owner_authorized_contexts, self.reader_or_owner_unauthorized_contexts, rule_name, self.controller.index, self.req, self.instance.uuid) def test_show_ips_policy(self): rule_name = ips_policies.POLICY_ROOT % 'show' self.common_policy_check(self.reader_or_owner_authorized_contexts, self.reader_or_owner_unauthorized_contexts, rule_name, self.controller.show, self.req, self.instance.uuid, 'net1') class ServerIpsScopeTypePolicyTest(ServerIpsPolicyTest): """Test Server IPs APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(ServerIpsScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class ServerIpsNoLegacyPolicyTest(ServerIpsScopeTypePolicyTest): """Test Server IPs APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system_admin_or_owner APIs. """ without_deprecated_rules = True def setUp(self): super(ServerIpsNoLegacyPolicyTest, self).setUp() # Check that system reader or owner is able to # get the server IP addresses. self.reader_or_owner_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context, self.project_admin_context, self.project_member_context, self.project_reader_context] # Check that non-system and non-owner is not able to # get the server IP addresses. self.reader_or_owner_unauthorized_contexts = [ self.legacy_admin_context, self.project_foo_context, self.system_foo_context, self.other_project_member_context] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/policies/test_server_metadata.py0000664000175000017500000002135200000000000024243 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
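# ---------------------------------------------------------------------------
# Editorial sketch, not part of the original nova source: every policy test
# in this package reduces to the loop that BasePolicyTest.common_policy_check
# drives, i.e. call the controller once per authorized context and expect
# success, then once per unauthorized context and expect PolicyNotAuthorized.
# A minimal standalone version of that loop, with illustrative names only
# (check_policy, call_api, not_auth_exc), would be:
#
#     def check_policy(authorized, unauthorized, call_api, not_auth_exc):
#         for ctxt in authorized:
#             call_api(ctxt)  # must not raise
#         for ctxt in unauthorized:
#             try:
#                 call_api(ctxt)
#             except not_auth_exc:
#                 continue
#             raise AssertionError('%s was unexpectedly authorized' % ctxt)
# ---------------------------------------------------------------------------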
import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from nova.api.openstack.compute import server_metadata from nova.policies import server_metadata as policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class ServerMetadataPolicyTest(base.BasePolicyTest): """Test Server Metadata APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(ServerMetadataPolicyTest, self).setUp() self.controller = server_metadata.ServerMetadataController() self.req = fakes.HTTPRequest.blank('') self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock self.instance = fake_instance.fake_instance_obj( self.project_member_context, id=1, uuid=uuids.fake_id, project_id=self.project_id) self.mock_get.return_value = self.instance # Check that admin or and server owner is able to CRUD # the server metadata. self.admin_or_owner_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context] # Check that non-admin/owner is not able to CRUD # the server metadata self.admin_or_owner_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context, self.other_project_reader_context ] # Check that admin or and server owner is able to get # the server metadata. self.reader_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.system_member_context, self.system_reader_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context] # Check that non-admin/owner is not able to get # the server metadata. 
self.reader_unauthorized_contexts = [ self.system_foo_context, self.other_project_member_context, self.other_project_reader_context ] @mock.patch('nova.compute.api.API.get_instance_metadata') def test_index_server_Metadata_policy(self, mock_get): rule_name = policies.POLICY_ROOT % 'index' self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.index, self.req, self.instance.uuid) @mock.patch('nova.compute.api.API.get_instance_metadata') def test_show_server_Metadata_policy(self, mock_get): rule_name = policies.POLICY_ROOT % 'show' mock_get.return_value = {'key9': 'value'} self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.show, self.req, self.instance.uuid, 'key9') @mock.patch('nova.compute.api.API.update_instance_metadata') def test_create_server_Metadata_policy(self, mock_quota): rule_name = policies.POLICY_ROOT % 'create' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller.create, self.req, self.instance.uuid, body={"metadata": {"key9": "value9"}}) @mock.patch('nova.compute.api.API.update_instance_metadata') def test_update_server_Metadata_policy(self, mock_quota): rule_name = policies.POLICY_ROOT % 'update' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller.update, self.req, self.instance.uuid, 'key9', body={"meta": {"key9": "value9"}}) @mock.patch('nova.compute.api.API.update_instance_metadata') def test_update_all_server_Metadata_policy(self, mock_quota): rule_name = policies.POLICY_ROOT % 'update_all' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller.update_all, self.req, self.instance.uuid, body={"metadata": {"key9": "value9"}}) @mock.patch('nova.compute.api.API.get_instance_metadata') @mock.patch('nova.compute.api.API.delete_instance_metadata') def test_delete_server_Metadata_policy(self, mock_delete, mock_get): rule_name = policies.POLICY_ROOT % 'delete' mock_get.return_value = {'key9': 'value'} self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller.delete, self.req, self.instance.uuid, 'key9') class ServerMetadataScopeTypePolicyTest(ServerMetadataPolicyTest): """Test Server Metadata APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(ServerMetadataScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class ServerMetadataNoLegacyPolicyTest(ServerMetadataScopeTypePolicyTest): """Test Server Metadata APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system APIs. """ without_deprecated_rules = True def setUp(self): super(ServerMetadataNoLegacyPolicyTest, self).setUp() # Check that system admin or project member is able to create, update # and delete the server metadata. 
self.admin_or_owner_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context] # Check that non-system/admin/member is not able to create, update # and delete the server metadata. self.admin_or_owner_unauthorized_contexts = [ self.legacy_admin_context, self.system_reader_context, self.system_foo_context, self.system_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context, self.other_project_reader_context ] # Check that system admin or project member is able to # get the server metadata. self.reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context, self.project_admin_context, self.project_member_context, self.project_reader_context] # Check that non-system/admin/member is not able to # get the server metadata. self.reader_unauthorized_contexts = [ self.legacy_admin_context, self.system_foo_context, self.project_foo_context, self.other_project_member_context, self.other_project_reader_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_server_migrations.py0000664000175000017500000002345300000000000024643 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from nova.api.openstack.compute import server_migrations from nova.compute import vm_states from nova.policies import base as base_policy from nova.policies import servers_migrations as policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class ServerMigrationsPolicyTest(base.BasePolicyTest): """Test Server Migrations APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(ServerMigrationsPolicyTest, self).setUp() self.controller = server_migrations.ServerMigrationsController() self.req = fakes.HTTPRequest.blank('', version='2.24') self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock self.instance = fake_instance.fake_instance_obj( self.project_member_context, id=1, uuid=uuids.fake_id, project_id=self.project_id, vm_state=vm_states.ACTIVE) self.mock_get.return_value = self.instance # Check that admin is able to perform operations # for server migrations. self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context] # Check that non-admin is not able to perform operations # for server migrations. 
self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] # Check that system-reader are able to perform operations # for server migrations. self.reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context, self.legacy_admin_context, self.project_admin_context] # Check that non-system-reader are not able to perform operations # for server migrations. self.reader_unauthorized_contexts = [ self.system_foo_context, self.other_project_member_context, self.project_foo_context, self.project_member_context, self.project_reader_context] @mock.patch('nova.compute.api.API.get_migrations_in_progress_by_instance') def test_list_server_migrations_policy(self, mock_get): rule_name = policies.POLICY_ROOT % 'index' self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.index, self.req, self.instance.uuid) @mock.patch('nova.api.openstack.compute.server_migrations.output') @mock.patch('nova.compute.api.API.get_migration_by_id_and_instance') def test_show_server_migrations_policy(self, mock_show, mock_output): rule_name = policies.POLICY_ROOT % 'show' mock_show.return_value = {"migration_type": "live-migration", "status": "running"} self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.show, self.req, self.instance.uuid, 11111) @mock.patch('nova.compute.api.API.live_migrate_abort') def test_delete_server_migrations_policy(self, mock_delete): rule_name = policies.POLICY_ROOT % 'delete' self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.delete, self.req, self.instance.uuid, 11111) @mock.patch('nova.compute.api.API.live_migrate_force_complete') def test_force_delete_server_migrations_policy(self, mock_force): rule_name = policies.POLICY_ROOT % 'force_complete' self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller._force_complete, self.req, self.instance.uuid, 11111, body={"force_complete": None}) class ServerMigrationsScopeTypePolicyTest(ServerMigrationsPolicyTest): """Test Server Migrations APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(ServerMigrationsScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class ServerMigrationsNoLegacyPolicyTest(ServerMigrationsScopeTypePolicyTest): """Test Server Migrations APIs policies with system scope enabled, and no more deprecated rules. """ without_deprecated_rules = True def setUp(self): super(ServerMigrationsNoLegacyPolicyTest, self).setUp() # Check that admin is able to perform operations # for server migrations. self.admin_authorized_contexts = [ self.system_admin_context ] # Check that non-admin is not able to perform operations # for server migrations. 
        self.admin_unauthorized_contexts = [
            self.legacy_admin_context,
            self.project_admin_context,
            self.system_member_context, self.system_reader_context,
            self.system_foo_context,
            self.project_member_context, self.project_reader_context,
            self.project_foo_context, self.other_project_member_context
        ]
        # Check that system reader is able to perform operations
        # for server migrations.
        self.reader_authorized_contexts = [
            self.system_admin_context, self.system_member_context,
            self.system_reader_context]
        # Check that non-system-reader is not able to perform operations
        # for server migrations.
        self.reader_unauthorized_contexts = [
            self.legacy_admin_context, self.project_admin_context,
            self.system_foo_context, self.project_member_context,
            self.other_project_member_context,
            self.project_foo_context, self.project_reader_context
        ]


class ServerMigrationsOverridePolicyTest(ServerMigrationsNoLegacyPolicyTest):
    """Test Server Migrations APIs policies with system and project scoped
    tokens. By default only system roles are allowed; project roles are
    allowed only if operators override the policies. This test is with
    system scope enabled and no more deprecated rules.
    """

    def setUp(self):
        super(ServerMigrationsOverridePolicyTest, self).setUp()
        rule_show = policies.POLICY_ROOT % 'show'
        rule_list = policies.POLICY_ROOT % 'index'
        rule_force = policies.POLICY_ROOT % 'force_complete'
        rule_delete = policies.POLICY_ROOT % 'delete'
        # NOTE(gmann): override the rules to project member and verify they
        # work as the policies are system and project scoped.
        self.policy.set_rules({
            rule_show: base_policy.PROJECT_READER_OR_SYSTEM_READER,
            rule_list: base_policy.PROJECT_READER_OR_SYSTEM_READER,
            rule_force: base_policy.PROJECT_MEMBER_OR_SYSTEM_ADMIN,
            rule_delete: base_policy.PROJECT_MEMBER_OR_SYSTEM_ADMIN},
            overwrite=False)
        # Check that system admin or the project roles overridden above
        # are able to migrate the server.
        self.admin_authorized_contexts = [
            self.system_admin_context, self.project_admin_context,
            self.project_member_context]
        # Check that non-system admin or project role is not able to
        # migrate the server.
        self.admin_unauthorized_contexts = [
            self.legacy_admin_context, self.system_member_context,
            self.system_reader_context, self.system_foo_context,
            self.other_project_member_context,
            self.project_foo_context, self.project_reader_context
        ]
        # Check that system reader is able to perform operations
        # for server migrations.
        self.reader_authorized_contexts = [
            self.system_admin_context, self.system_member_context,
            self.system_reader_context, self.project_admin_context,
            self.project_member_context, self.project_reader_context]
        # Check that non-system-reader is not able to perform operations
        # for server migrations.
        self.reader_unauthorized_contexts = [
            self.legacy_admin_context, self.system_foo_context,
            self.other_project_member_context, self.project_foo_context
        ]
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0
nova-21.2.4/nova/tests/unit/policies/test_server_password.py0000664000175000017500000001540300000000000024325 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from nova.api.openstack.compute import server_password from nova.policies import base as base_policy from nova.policies import server_password as policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class ServerPasswordPolicyTest(base.BasePolicyTest): """Test Server Password APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(ServerPasswordPolicyTest, self).setUp() self.controller = server_password.ServerPasswordController() self.req = fakes.HTTPRequest.blank('') self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock self.instance = fake_instance.fake_instance_obj( self.project_member_context, id=1, uuid=uuids.fake_id, project_id=self.project_id, system_metadata={}, expected_attrs=['system_metadata']) self.mock_get.return_value = self.instance # Check that admin or and server owner is able to # delete the server password. self.admin_or_owner_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context] # Check that non-admin/owner is not able to delete # the server password. self.admin_or_owner_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context, self.other_project_reader_context ] # Check that admin or and server owner is able to get # the server password. self.reader_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.system_member_context, self.system_reader_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context] # Check that non-admin/owner is not able to get # the server password. self.reader_unauthorized_contexts = [ self.system_foo_context, self.other_project_member_context, self.other_project_reader_context ] @mock.patch('nova.api.metadata.password.extract_password') def test_index_server_password_policy(self, mock_pass): rule_name = policies.BASE_POLICY_NAME % 'show' self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.index, self.req, self.instance.uuid) @mock.patch('nova.api.metadata.password.convert_password') def test_clear_server_password_policy(self, mock_pass): rule_name = policies.BASE_POLICY_NAME % 'clear' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller.clear, self.req, self.instance.uuid) class ServerPasswordScopeTypePolicyTest(ServerPasswordPolicyTest): """Test Server Password APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. 
""" def setUp(self): super(ServerPasswordScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class ServerPasswordNoLegacyPolicyTest(ServerPasswordScopeTypePolicyTest): """Test Server Password APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system_admin_or_owner APIs. """ without_deprecated_rules = True rules_without_deprecation = { policies.BASE_POLICY_NAME % 'show': base_policy.PROJECT_READER_OR_SYSTEM_READER, policies.BASE_POLICY_NAME % 'clear': base_policy.PROJECT_MEMBER_OR_SYSTEM_ADMIN} def setUp(self): super(ServerPasswordNoLegacyPolicyTest, self).setUp() # Check that system or projct admin or owner is able to clear # server password. self.admin_or_owner_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context] # Check that non-system and non-admin/owner is not able to clear # server password. self.admin_or_owner_unauthorized_contexts = [ self.legacy_admin_context, self.project_reader_context, self.project_foo_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context, self.other_project_reader_context] # Check that system reader or projct owner is able to get # server password. self.reader_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context, self.project_reader_context, self.project_member_context, ] # Check that non-system reader nd non-admin/owner is not able to get # server password. self.reader_unauthorized_contexts = [ self.legacy_admin_context, self.project_foo_context, self.system_foo_context, self.other_project_member_context, self.other_project_reader_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_server_tags.py0000664000175000017500000002257000000000000023424 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from nova.api.openstack.compute import server_tags from nova.compute import vm_states from nova import context from nova import objects from nova.policies import server_tags as policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class ServerTagsPolicyTest(base.BasePolicyTest): """Test Server Tags APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. 
""" def setUp(self): super(ServerTagsPolicyTest, self).setUp() self.controller = server_tags.ServerTagsController() self.req = fakes.HTTPRequest.blank('', version='2.26') self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock self.instance = fake_instance.fake_instance_obj( self.project_member_context, id=1, uuid=uuids.fake_id, vm_state=vm_states.ACTIVE, project_id=self.project_id) self.mock_get.return_value = self.instance inst_map = objects.InstanceMapping( project_id=self.project_id, cell_mapping=objects.CellMappingList.get_all( context.get_admin_context())[1]) self.stub_out('nova.objects.InstanceMapping.get_by_instance_uuid', lambda s, c, u: inst_map) # Check that admin or and server owner is able to perform # operations on server tags. self.admin_or_owner_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context ] # Check that non-admin/owner is not able to perform operations # on server tags self.admin_or_owner_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context, self.other_project_reader_context ] # Check that reader or and server owner is able to perform operations # on server tags. self.reader_or_owner_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.system_member_context, self.system_reader_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context] # Check that non-reader/owner is not able to perform operations # on server tags. self.reader_or_owner_unauthorized_contexts = [ self.system_foo_context, self.other_project_member_context, self.other_project_reader_context ] @mock.patch('nova.objects.TagList.get_by_resource_id') def test_index_server_tags_policy(self, mock_tag): rule_name = policies.POLICY_ROOT % 'index' self.common_policy_check(self.reader_or_owner_authorized_contexts, self.reader_or_owner_unauthorized_contexts, rule_name, self.controller.index, self.req, self.instance.uuid) @mock.patch('nova.objects.Tag.exists') def test_show_server_tags_policy(self, mock_exists): rule_name = policies.POLICY_ROOT % 'show' self.common_policy_check(self.reader_or_owner_authorized_contexts, self.reader_or_owner_unauthorized_contexts, rule_name, self.controller.show, self.req, self.instance.uuid, uuids.fake_id) @mock.patch('nova.notifications.base.send_instance_update_notification') @mock.patch('nova.db.api.instance_tag_get_by_instance_uuid') @mock.patch('nova.objects.Tag.create') def test_update_server_tags_policy(self, mock_create, mock_tag, mock_notf): rule_name = policies.POLICY_ROOT % 'update' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller.update, self.req, self.instance.uuid, uuids.fake_id, body=None) @mock.patch('nova.notifications.base.send_instance_update_notification') @mock.patch('nova.db.api.instance_tag_set') def test_update_all_server_tags_policy(self, mock_set, mock_notf): rule_name = policies.POLICY_ROOT % 'update_all' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller.update_all, self.req, self.instance.uuid, body={'tags': ['tag1', 'tag2']}) @mock.patch('nova.notifications.base.send_instance_update_notification') 
@mock.patch('nova.objects.TagList.destroy') def test_delete_all_server_tags_policy(self, mock_destroy, mock_notf): rule_name = policies.POLICY_ROOT % 'delete_all' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller.delete_all, self.req, self.instance.uuid) @mock.patch('nova.notifications.base.send_instance_update_notification') @mock.patch('nova.db.api.instance_tag_get_by_instance_uuid') @mock.patch('nova.objects.Tag.destroy') def test_delete_server_tags_policy(self, mock_destroy, mock_get, mock_notf): rule_name = policies.POLICY_ROOT % 'delete' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller.delete, self.req, self.instance.uuid, uuids.fake_id) class ServerTagsScopeTypePolicyTest(ServerTagsPolicyTest): """Test Server Tags APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(ServerTagsScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class ServerTagsNoLegacyPolicyTest(ServerTagsScopeTypePolicyTest): """Test Server Tags APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system APIs. """ without_deprecated_rules = True def setUp(self): super(ServerTagsNoLegacyPolicyTest, self).setUp() # Check that system admin or project member is able to # perform operations on server tags. self.admin_or_owner_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context] # Check that non-system/admin/member is not able to # perform operations on server tags. self.admin_or_owner_unauthorized_contexts = [ self.legacy_admin_context, self.system_reader_context, self.system_foo_context, self.system_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context, self.other_project_reader_context ] # Check that system reader or owner is able to # perform operations on server tags. self.reader_or_owner_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context, self.project_admin_context, self.project_member_context, self.project_reader_context] # Check that non-system/reader/owner is not able to # perform operations on server tags. self.reader_or_owner_unauthorized_contexts = [ self.legacy_admin_context, self.system_foo_context, self.project_foo_context, self.other_project_member_context, self.other_project_reader_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_server_topology.py0000664000175000017500000001647500000000000024351 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures from oslo_utils.fixture import uuidsentinel as uuids from nova.api.openstack.compute import server_topology from nova.compute import vm_states from nova import objects from nova.policies import server_topology as policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class ServerTopologyPolicyTest(base.BasePolicyTest): """Test Server Topology APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(ServerTopologyPolicyTest, self).setUp() self.controller = server_topology.ServerTopologyController() self.req = fakes.HTTPRequest.blank('', version='2.78') self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock self.instance = fake_instance.fake_instance_obj( self.project_member_context, id=1, uuid=uuids.fake_id, project_id=self.project_id, vm_state=vm_states.ACTIVE) self.mock_get.return_value = self.instance self.instance.numa_topology = objects.InstanceNUMATopology( instance_uuid = self.instance.uuid, cells=[objects.InstanceNUMACell( node=0, memory=1024, pagesize=4, id=123, cpu_topology=None, cpu_pinning={}, cpuset=set([0, 1]))]) # Check that system reader or and server owner is able to get # the server topology. self.system_reader_or_owner_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.system_member_context, self.system_reader_context] # Check that non-stem reader/owner is not able to get # the server topology. self.system_reader_or_owner_unauthorized_contexts = [ self.system_foo_context, self.other_project_member_context, self.other_project_reader_context, ] # Check that system reader is able to get the server topology # host information. self.system_reader_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context] # Check that non-system reader is not able to get the server topology # host information. self.system_reader_unauthorized_contexts = [ self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context, self.other_project_reader_context ] def test_index_server_topology_policy(self): rule_name = policies.BASE_POLICY_NAME % 'index' self.common_policy_check( self.system_reader_or_owner_authorized_contexts, self.system_reader_or_owner_unauthorized_contexts, rule_name, self.controller.index, self.req, self.instance.uuid) def test_index_host_server_topology_policy(self): rule_name = policies.BASE_POLICY_NAME % 'host:index' # 'index' policy is checked before 'host:index' so # we have to allow it for everyone otherwise it will # fail first for unauthorized contexts. 
rule = policies.BASE_POLICY_NAME % 'index' self.policy.set_rules({rule: "@"}, overwrite=False) authorize_res, unauthorize_res = self.common_policy_check( self.system_reader_authorized_contexts, self.system_reader_unauthorized_contexts, rule_name, self.controller.index, self.req, self.instance.uuid, fatal=False) for resp in authorize_res: self.assertEqual(123, resp['nodes'][0]['host_node']) self.assertEqual({}, resp['nodes'][0]['cpu_pinning']) for resp in unauthorize_res: self.assertNotIn('host_node', resp['nodes'][0]) self.assertNotIn('cpu_pinning', resp['nodes'][0]) class ServerTopologyScopeTypePolicyTest(ServerTopologyPolicyTest): """Test Server Topology APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(ServerTopologyScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system reader is able to get the server topology # host information. self.system_reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context] # Check that non-system/reader is not able to get the server topology # host information. self.system_reader_unauthorized_contexts = [ self.legacy_admin_context, self.system_foo_context, self.project_admin_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context, self.other_project_reader_context, ] class ServerTopologyNoLegacyPolicyTest(ServerTopologyScopeTypePolicyTest): """Test Server Topology APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system APIs. """ without_deprecated_rules = True def setUp(self): super(ServerTopologyNoLegacyPolicyTest, self).setUp() # Check that system reader/owner is able to get # the server topology. self.system_reader_or_owner_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context, self.system_member_context, self.system_reader_context, self.project_reader_context] # Check that non-system/reader/owner is not able to get # the server topology. self.system_reader_or_owner_unauthorized_contexts = [ self.legacy_admin_context, self.system_foo_context, self.other_project_member_context, self.project_foo_context, self.other_project_reader_context, ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_servers.py0000664000175000017500000021236200000000000022571 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
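# The test classes in these policy modules all follow the same pattern: the
# controller method is invoked once per request context, contexts in the
# "authorized" list must pass the policy check and contexts in the
# "unauthorized" list must be rejected. The snippet below is a minimal,
# self-contained sketch of that pattern only; it is NOT the real
# BasePolicyTest.common_policy_check implementation, and the names
# PolicyNotAuthorized (a stand-in for nova.exception.PolicyNotAuthorized) and
# check_policy are illustrative assumptions.

class PolicyNotAuthorized(Exception):
    """Stand-in for the policy rejection raised by an enforced rule."""


def check_policy(authorized_contexts, unauthorized_contexts,
                 controller_method, req, *args, **kwargs):
    """Call the API method once per context and verify the policy outcome."""
    for ctx in authorized_contexts:
        req.environ['nova.context'] = ctx
        # Authorized contexts must pass the policy check without raising.
        controller_method(req, *args, **kwargs)
    for ctx in unauthorized_contexts:
        req.environ['nova.context'] = ctx
        try:
            controller_method(req, *args, **kwargs)
        except PolicyNotAuthorized:
            # Unauthorized contexts are expected to be rejected.
            continue
        raise AssertionError(
            'context %r unexpectedly passed the policy check' % ctx)

# A call such as
#   check_policy(self.admin_or_owner_authorized_contexts,
#                self.admin_or_owner_unauthorized_contexts,
#                self.controller.delete, self.req, self.instance.uuid)
# mirrors how the tests below exercise a single policy rule.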
import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.api.openstack.compute import migrate_server from nova.api.openstack.compute import servers from nova.compute import api as compute from nova.compute import vm_states from nova import exception from nova.network import neutron from nova import objects from nova.objects import fields from nova.objects.instance_group import InstanceGroup from nova.policies import extended_server_attributes as ea_policies from nova.policies import servers as policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_flavor from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class ServersPolicyTest(base.BasePolicyTest): """Test Servers APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(ServersPolicyTest, self).setUp() self.controller = servers.ServersController() self.m_controller = migrate_server.MigrateServerController() self.rule_trusted_certs = policies.SERVERS % 'create:trusted_certs' self.rule_attach_network = policies.SERVERS % 'create:attach_network' self.rule_attach_volume = policies.SERVERS % 'create:attach_volume' self.rule_requested_destination = policies.REQUESTED_DESTINATION self.rule_forced_host = policies.SERVERS % 'create:forced_host' self.req = fakes.HTTPRequest.blank('') user_id = self.req.environ['nova.context'].user_id self.controller._view_builder._add_security_grps = mock.MagicMock() self.controller._view_builder._get_metadata = mock.MagicMock() self.controller._view_builder._get_addresses = mock.MagicMock() self.controller._view_builder._get_host_id = mock.MagicMock() self.controller._view_builder._get_fault = mock.MagicMock() self.instance = fake_instance.fake_instance_obj( self.project_member_context, id=1, uuid=uuids.fake_id, project_id=self.project_id, user_id=user_id, vm_state=vm_states.ACTIVE, system_metadata={}, expected_attrs=['system_metadata']) self.mock_flavor = self.useFixture( fixtures.MockPatch('nova.compute.flavors.get_flavor_by_flavor_id' )).mock self.mock_flavor.return_value = fake_flavor.fake_flavor_obj( self.req.environ['nova.context'], flavorid='1') self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock self.mock_get.return_value = self.instance self.mock_get_instance = self.useFixture(fixtures.MockPatchObject( self.controller, '_get_instance')).mock self.mock_get_instance.return_value = self.instance self.servers = [fakes.stub_instance_obj( 1, vm_state=vm_states.ACTIVE, uuid=uuids.fake, project_id=self.project_id, user_id='user1'), fakes.stub_instance_obj( 2, vm_state=vm_states.ACTIVE, uuid=uuids.fake, project_id='proj2', user_id='user2')] fakes.stub_out_secgroup_api( self, security_groups=[{'name': 'default'}]) self.mock_get_all = self.useFixture(fixtures.MockPatchObject( self.controller.compute_api, 'get_all')).mock self.body = { 'server': { 'name': 'server_test', 'imageRef': uuids.fake_id, 'flavorRef': uuids.fake_id, }, } self.extended_attr = ['OS-EXT-SRV-ATTR:host', 'OS-EXT-SRV-ATTR:hypervisor_hostname', 'OS-EXT-SRV-ATTR:instance_name', 'OS-EXT-SRV-ATTR:hostname', 'OS-EXT-SRV-ATTR:kernel_id', 'OS-EXT-SRV-ATTR:launch_index', 'OS-EXT-SRV-ATTR:ramdisk_id', 'OS-EXT-SRV-ATTR:reservation_id', 
            'OS-EXT-SRV-ATTR:root_device_name',
            'OS-EXT-SRV-ATTR:user_data'
        ]

        # Check that admin or owner is able to update, delete
        # or perform server action.
        self.admin_or_owner_authorized_contexts = [
            self.legacy_admin_context, self.system_admin_context,
            self.project_admin_context, self.project_member_context,
            self.project_reader_context, self.project_foo_context]
        # Check that non-admin/owner is not able to update, delete
        # or perform server action.
        self.admin_or_owner_unauthorized_contexts = [
            self.system_member_context, self.system_reader_context,
            self.system_foo_context,
            self.other_project_member_context,
            self.other_project_reader_context
        ]
        # Check that system reader or owner is able to get
        # the server.
        self.system_reader_or_owner_authorized_contexts = [
            self.legacy_admin_context, self.system_admin_context,
            self.project_admin_context, self.project_member_context,
            self.project_reader_context, self.system_member_context,
            self.system_reader_context, self.project_foo_context
        ]
        self.system_reader_or_owner_unauthorized_contexts = [
            self.system_foo_context,
            self.other_project_member_context,
            self.other_project_reader_context
        ]
        # Check that everyone is able to list their own server.
        self.everyone_authorized_contexts = [
            self.legacy_admin_context, self.system_admin_context,
            self.project_admin_context, self.project_member_context,
            self.project_reader_context, self.project_foo_context,
            self.system_member_context, self.system_reader_context,
            self.system_foo_context,
            self.other_project_member_context,
            self.other_project_reader_context]
        self.everyone_unauthorized_contexts = [
        ]
        # Check that admin is able to create server with host request
        # and get server extended attributes or host status.
        self.admin_authorized_contexts = [
            self.legacy_admin_context, self.system_admin_context,
            self.project_admin_context]
        # Check that non-admin is not able to create server with host request
        # and get server extended attributes or host status.
        self.admin_unauthorized_contexts = [
            self.system_member_context, self.system_reader_context,
            self.system_foo_context, self.project_member_context,
            self.project_reader_context, self.project_foo_context,
            self.other_project_member_context,
            self.other_project_reader_context
        ]
        # Check that system reader is able to list the servers
        # for all projects.
        self.system_reader_authorized_contexts = [
            self.legacy_admin_context, self.system_admin_context,
            self.project_admin_context, self.system_member_context,
            self.system_reader_context]
        # Check that non-system reader is not able to list the servers
        # for all projects.
        self.system_reader_unauthorized_contexts = [
            self.system_foo_context, self.project_member_context,
            self.project_reader_context, self.project_foo_context,
            self.other_project_member_context,
            self.other_project_reader_context
        ]
        # Check that project member is able to create a server.
        self.project_member_authorized_contexts = [
            self.legacy_admin_context, self.system_admin_context,
            self.project_admin_context, self.project_member_context,
            self.system_member_context, self.system_reader_context,
            self.other_project_member_context,
            self.system_foo_context, self.project_reader_context,
            self.project_foo_context, self.other_project_reader_context]
        # Check that non-project member is not able to create a server.
        self.project_member_unauthorized_contexts = [
        ]
        # Check that project admin is able to create server with requested
        # destination.
self.project_admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context] # Check that non-project admin is not able to create server with # requested destination self.project_admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context, self.other_project_reader_context ] # Check that no one is able to resize cross cell. self.cross_cell_authorized_contexts = [] self.cross_cell_unauthorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context, self.other_project_reader_context] # Check that admin is able to access the zero disk flavor # and external network policies. self.zero_disk_external_net_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context] # Check that non-admin is not able to caccess the zero disk flavor # and external network policies. self.zero_disk_external_net_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context, self.other_project_reader_context ] # Check that admin is able to get server extended attributes # or host status. self.server_attr_admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context] # Check that non-admin is not able to get server extended attributes # or host status. self.server_attr_admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context, self.other_project_reader_context ] def test_index_server_policy(self): def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): self.assertIsNotNone(search_opts) if 'project_id' in search_opts or 'user_id' in search_opts: return objects.InstanceList(objects=self.servers) else: raise self.mock_get_all.side_effect = fake_get_all rule_name = policies.SERVERS % 'index' self.common_policy_check( self.everyone_authorized_contexts, self.everyone_unauthorized_contexts, rule_name, self.controller.index, self.req) def test_index_all_project_server_policy(self): # 'index' policy is checked before 'index:get_all_tenants' so # we have to allow it for everyone otherwise it will # fail for unauthorized contexts here. 
rule = policies.SERVERS % 'index' self.policy.set_rules({rule: "@"}, overwrite=False) rule_name = policies.SERVERS % 'index:get_all_tenants' req = fakes.HTTPRequest.blank('/servers?all_tenants') def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): self.assertIsNotNone(search_opts) self.assertNotIn('project_id', search_opts) return objects.InstanceList(objects=self.servers) self.mock_get_all.side_effect = fake_get_all self.common_policy_check(self.system_reader_authorized_contexts, self.system_reader_unauthorized_contexts, rule_name, self.controller.index, req) @mock.patch('nova.compute.api.API.get_all') def test_detail_list_server_policy(self, mock_get): def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): self.assertIsNotNone(search_opts) if 'project_id' in search_opts or 'user_id' in search_opts: return objects.InstanceList(objects=self.servers) else: raise self.mock_get_all.side_effect = fake_get_all rule_name = policies.SERVERS % 'detail' self.common_policy_check( self.everyone_authorized_contexts, self.everyone_unauthorized_contexts, rule_name, self.controller.detail, self.req) def test_detail_list_all_project_server_policy(self): # 'detail' policy is checked before 'detail:get_all_tenants' so # we have to allow it for everyone otherwise it will # fail for unauthorized contexts here. rule = policies.SERVERS % 'detail' self.policy.set_rules({rule: "@"}, overwrite=False) rule_name = policies.SERVERS % 'detail:get_all_tenants' req = fakes.HTTPRequest.blank('/servers?all_tenants') def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): self.assertIsNotNone(search_opts) self.assertNotIn('project_id', search_opts) return objects.InstanceList(objects=self.servers) self.mock_get_all.side_effect = fake_get_all self.common_policy_check(self.system_reader_authorized_contexts, self.system_reader_unauthorized_contexts, rule_name, self.controller.detail, req) def test_index_server_allow_all_filters_policy(self): # 'index' policy is checked before 'allow_all_filters' so # we have to allow it for everyone otherwise it will # fail for unauthorized contexts here. rule = policies.SERVERS % 'index' self.policy.set_rules({rule: "@"}, overwrite=False) def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): self.assertIsNotNone(search_opts) if context in self.system_reader_unauthorized_contexts: self.assertNotIn('host', search_opts) if context in self.system_reader_authorized_contexts: self.assertIn('host', search_opts) return objects.InstanceList(objects=self.servers) self.mock_get_all.side_effect = fake_get_all req = fakes.HTTPRequest.blank('/servers?host=1') rule_name = policies.SERVERS % 'allow_all_filters' self.common_policy_check( self.system_reader_authorized_contexts, self.system_reader_unauthorized_contexts, rule_name, self.controller.index, req, fatal=False) def test_detail_server_allow_all_filters_policy(self): # 'detail' policy is checked before 'allow_all_filters' so # we have to allow it for everyone otherwise it will # fail for unauthorized contexts here. 
rule = policies.SERVERS % 'detail' self.policy.set_rules({rule: "@"}, overwrite=False) def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): self.assertIsNotNone(search_opts) if context in self.system_reader_unauthorized_contexts: self.assertNotIn('host', search_opts) if context in self.system_reader_authorized_contexts: self.assertIn('host', search_opts) return objects.InstanceList(objects=self.servers) self.mock_get_all.side_effect = fake_get_all req = fakes.HTTPRequest.blank('/servers?host=1') rule_name = policies.SERVERS % 'allow_all_filters' self.common_policy_check( self.system_reader_authorized_contexts, self.system_reader_unauthorized_contexts, rule_name, self.controller.detail, req, fatal=False) @mock.patch('nova.objects.BlockDeviceMappingList.bdms_by_instance_uuid') def test_show_server_policy(self, mock_bdm): rule_name = policies.SERVERS % 'show' self.common_policy_check( self.system_reader_or_owner_authorized_contexts, self.system_reader_or_owner_unauthorized_contexts, rule_name, self.controller.show, self.req, self.instance.uuid) @mock.patch('nova.compute.api.API.create') def test_create_server_policy(self, mock_create): mock_create.return_value = ([self.instance], '') rule_name = policies.SERVERS % 'create' self.common_policy_check(self.project_member_authorized_contexts, self.project_member_unauthorized_contexts, rule_name, self.controller.create, self.req, body=self.body) @mock.patch('nova.compute.api.API.create') @mock.patch('nova.compute.api.API.parse_availability_zone') def test_create_forced_host_server_policy(self, mock_az, mock_create): # 'create' policy is checked before 'create:forced_host' so # we have to allow it for everyone otherwise it will # fail for unauthorized contexts here. rule = policies.SERVERS % 'create' self.policy.set_rules({rule: "@"}, overwrite=False) mock_create.return_value = ([self.instance], '') mock_az.return_value = ('test', 'host', None) self.common_policy_check(self.project_admin_authorized_contexts, self.project_admin_unauthorized_contexts, self.rule_forced_host, self.controller.create, self.req, body=self.body) @mock.patch('nova.compute.api.API.create') def test_create_attach_volume_server_policy(self, mock_create): # 'create' policy is checked before 'create:attach_volume' so # we have to allow it for everyone otherwise it will # fail for unauthorized contexts here. rule = policies.SERVERS % 'create' self.policy.set_rules({rule: "@"}, overwrite=False) mock_create.return_value = ([self.instance], '') body = { 'server': { 'name': 'server_test', 'imageRef': uuids.fake_id, 'flavorRef': uuids.fake_id, 'block_device_mapping': [{'device_name': 'foo'}], }, } self.common_policy_check(self.project_member_authorized_contexts, self.project_member_unauthorized_contexts, self.rule_attach_volume, self.controller.create, self.req, body=body) @mock.patch('nova.compute.api.API.create') def test_create_attach_network_server_policy(self, mock_create): # 'create' policy is checked before 'create:attach_network' so # we have to allow it for everyone otherwise it will # fail for unauthorized contexts here. 
rule = policies.SERVERS % 'create' self.policy.set_rules({rule: "@"}, overwrite=False) mock_create.return_value = ([self.instance], '') body = { 'server': { 'name': 'server_test', 'imageRef': uuids.fake_id, 'flavorRef': uuids.fake_id, 'networks': [{ 'uuid': uuids.fake_id }], }, } self.common_policy_check(self.project_member_authorized_contexts, self.project_member_unauthorized_contexts, self.rule_attach_network, self.controller.create, self.req, body=body) @mock.patch('nova.compute.api.API.create') def test_create_trusted_certs_server_policy(self, mock_create): # 'create' policy is checked before 'create:trusted_certs' so # we have to allow it for everyone otherwise it will # fail for unauthorized contexts here. rule = policies.SERVERS % 'create' self.policy.set_rules({rule: "@"}, overwrite=False) req = fakes.HTTPRequest.blank('', version='2.63') mock_create.return_value = ([self.instance], '') body = { 'server': { 'name': 'server_test', 'imageRef': uuids.fake_id, 'flavorRef': uuids.fake_id, 'trusted_image_certificates': [uuids.fake_id], 'networks': [{ 'uuid': uuids.fake_id }], }, } self.common_policy_check(self.project_member_authorized_contexts, self.project_member_unauthorized_contexts, self.rule_trusted_certs, self.controller.create, req, body=body) @mock.patch('nova.compute.api.API.delete') def test_delete_server_policy(self, mock_delete): rule_name = policies.SERVERS % 'delete' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller.delete, self.req, self.instance.uuid) def test_delete_server_policy_failed_with_other_user(self): # Change the user_id in request context. req = fakes.HTTPRequest.blank('') req.environ['nova.context'].user_id = 'other-user' rule_name = policies.SERVERS % 'delete' self.policy.set_rules({rule_name: "user_id:%(user_id)s"}, overwrite=False) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.delete, req, self.instance.uuid) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) @mock.patch('nova.compute.api.API.delete') def test_delete_server_overridden_policy_pass_with_same_user( self, mock_delete): rule_name = policies.SERVERS % 'delete' self.policy.set_rules({rule_name: "user_id:%(user_id)s"}, overwrite=False) self.controller.delete(self.req, self.instance.uuid) @mock.patch('nova.compute.api.API.update_instance') def test_update_server_policy(self, mock_update): rule_name = policies.SERVERS % 'update' body = {'server': {'name': 'test'}} self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller.update, self.req, self.instance.uuid, body=body) def test_update_server_policy_failed_with_other_user(self): # Change the user_id in request context. req = fakes.HTTPRequest.blank('') req.environ['nova.context'].user_id = 'other-user' rule_name = policies.SERVERS % 'update' body = {'server': {'name': 'test'}} self.policy.set_rules({rule_name: "user_id:%(user_id)s"}, overwrite=False) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.update, req, self.instance.uuid, body=body) self.assertEqual( "Policy doesn't allow %s to be performed." 
% rule_name, exc.format_message()) @mock.patch('nova.compute.api.API.update_instance') def test_update_server_overridden_policy_pass_with_same_user( self, mock_update): rule_name = policies.SERVERS % 'update' self.policy.set_rules({rule_name: "user_id:%(user_id)s"}, overwrite=False) body = {'server': {'name': 'test'}} self.controller.update(self.req, self.instance.uuid, body=body) @mock.patch('nova.compute.api.API.confirm_resize') def test_confirm_resize_server_policy(self, mock_confirm_resize): rule_name = policies.SERVERS % 'confirm_resize' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller._action_confirm_resize, self.req, self.instance.uuid, body={'confirmResize': 'null'}) @mock.patch('nova.compute.api.API.revert_resize') def test_revert_resize_server_policy(self, mock_revert_resize): rule_name = policies.SERVERS % 'revert_resize' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller._action_revert_resize, self.req, self.instance.uuid, body={'revertResize': 'null'}) @mock.patch('nova.compute.api.API.reboot') def test_reboot_server_policy(self, mock_reboot): rule_name = policies.SERVERS % 'reboot' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller._action_reboot, self.req, self.instance.uuid, body={'reboot': {'type': 'soft'}}) @mock.patch('nova.api.openstack.common.' 'instance_has_port_with_resource_request') @mock.patch('nova.compute.api.API.resize') def test_resize_server_policy(self, mock_resize, mock_port): rule_name = policies.SERVERS % 'resize' mock_port.return_value = False self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller._action_resize, self.req, self.instance.uuid, body={'resize': {'flavorRef': 'f1'}}) def test_resize_server_policy_failed_with_other_user(self): # Change the user_id in request context. req = fakes.HTTPRequest.blank('') req.environ['nova.context'].user_id = 'other-user' rule_name = policies.SERVERS % 'resize' body = {'resize': {'flavorRef': 'f1'}} self.policy.set_rules({rule_name: "user_id:%(user_id)s"}, overwrite=False) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._action_resize, req, self.instance.uuid, body=body) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) @mock.patch('nova.api.openstack.common.' 
'instance_has_port_with_resource_request') @mock.patch('nova.compute.api.API.resize') def test_resize_server_overridden_policy_pass_with_same_user( self, mock_resize, mock_port): rule_name = policies.SERVERS % 'resize' mock_port.return_value = False self.policy.set_rules({rule_name: "user_id:%(user_id)s"}, overwrite=False) body = {'resize': {'flavorRef': 'f1'}} self.controller._action_resize(self.req, self.instance.uuid, body=body) @mock.patch('nova.compute.api.API.start') def test_start_server_policy(self, mock_start): rule_name = policies.SERVERS % 'start' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller._start_server, self.req, self.instance.uuid, body={'os-start': 'null'}) @mock.patch('nova.compute.api.API.stop') def test_stop_server_policy(self, mock_stop): rule_name = policies.SERVERS % 'stop' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller._stop_server, self.req, self.instance.uuid, body={'os-stop': 'null'}) def test_stop_server_policy_failed_with_other_user(self): # Change the user_id in request context. req = fakes.HTTPRequest.blank('') req.environ['nova.context'].user_id = 'other-user' rule_name = policies.SERVERS % 'stop' body = {'os-stop': 'null'} self.policy.set_rules({rule_name: "user_id:%(user_id)s"}, overwrite=False) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._stop_server, req, self.instance.uuid, body=body) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) @mock.patch('nova.compute.api.API.stop') def test_stop_server_overridden_policy_pass_with_same_user( self, mock_stop): rule_name = policies.SERVERS % 'stop' self.policy.set_rules({rule_name: "user_id:%(user_id)s"}, overwrite=False) body = {'os-stop': 'null'} self.controller._stop_server(self.req, self.instance.uuid, body=body) @mock.patch('nova.compute.api.API.rebuild') def test_rebuild_server_policy(self, mock_rebuild): rule_name = policies.SERVERS % 'rebuild' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller._action_rebuild, self.req, self.instance.uuid, body={'rebuild': {"imageRef": uuids.fake_id}}) def test_rebuild_server_policy_failed_with_other_user(self): # Change the user_id in request context. req = fakes.HTTPRequest.blank('') req.environ['nova.context'].user_id = 'other-user' rule_name = policies.SERVERS % 'rebuild' body = {'rebuild': {"imageRef": uuids.fake_id}} self.policy.set_rules({rule_name: "user_id:%(user_id)s"}, overwrite=False) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._action_rebuild, req, self.instance.uuid, body=body) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) @mock.patch('nova.compute.api.API.rebuild') def test_rebuild_server_overridden_policy_pass_with_same_user( self, mock_rebuild): rule_name = policies.SERVERS % 'rebuild' self.policy.set_rules({rule_name: "user_id:%(user_id)s"}, overwrite=False) body = {'rebuild': {"imageRef": uuids.fake_id}} self.controller._action_rebuild(self.req, self.instance.uuid, body=body) @mock.patch('nova.compute.api.API.rebuild') def test_rebuild_trusted_certs_server_policy(self, mock_rebuild): # 'rebuild' policy is checked before 'rebuild:trusted_certs' so # we have to allow it for everyone otherwise it will # fail for unauthorized contexts here. 
rule = policies.SERVERS % 'rebuild' self.policy.set_rules({rule: "@"}, overwrite=False) req = fakes.HTTPRequest.blank('', version='2.63') rule_name = policies.SERVERS % 'rebuild:trusted_certs' body = { 'rebuild': { 'imageRef': uuids.fake_id, 'trusted_image_certificates': [uuids.fake_id], }, } self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller._action_rebuild, req, self.instance.uuid, body=body) def test_rebuild_trusted_certs_policy_failed_with_other_user(self): # Change the user_id in request context. req = fakes.HTTPRequest.blank('', version='2.63') req.environ['nova.context'].user_id = 'other-user' rule = policies.SERVERS % 'rebuild' rule_name = policies.SERVERS % 'rebuild:trusted_certs' body = { 'rebuild': { 'imageRef': uuids.fake_id, 'trusted_image_certificates': [uuids.fake_id], }, } self.policy.set_rules( {rule: "@", rule_name: "user_id:%(user_id)s"}, overwrite=False) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._action_rebuild, req, self.instance.uuid, body=body) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) @mock.patch('nova.compute.api.API.rebuild') def test_rebuild_trusted_certs_overridden_policy_pass_with_same_user( self, mock_rebuild): req = fakes.HTTPRequest.blank('', version='2.63') rule = policies.SERVERS % 'rebuild' rule_name = policies.SERVERS % 'rebuild:trusted_certs' body = { 'rebuild': { 'imageRef': uuids.fake_id, 'trusted_image_certificates': [uuids.fake_id], }, } self.policy.set_rules( {rule: "@", rule_name: "user_id:%(user_id)s"}, overwrite=False) self.controller._action_rebuild(req, self.instance.uuid, body=body) @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') @mock.patch('nova.image.glance.API.generate_image_url') @mock.patch('nova.compute.api.API.snapshot_volume_backed') def test_create_image_server_policy(self, mock_snapshot, mock_image, mock_bdm): rule_name = policies.SERVERS % 'create_image' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller._action_create_image, self.req, self.instance.uuid, body={'createImage': {"name": 'test'}}) @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') @mock.patch('nova.image.glance.API.generate_image_url') @mock.patch('nova.compute.api.API.snapshot_volume_backed') def test_create_image_allow_volume_backed_server_policy(self, mock_snapshot, mock_image, mock_bdm): # 'create_image' policy is checked before # 'create_image:allow_volume_backed' so # we have to allow it for everyone otherwise it will # fail for unauthorized contexts here. 
rule = policies.SERVERS % 'create_image' self.policy.set_rules({rule: "@"}, overwrite=False) rule_name = policies.SERVERS % 'create_image:allow_volume_backed' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller._action_create_image, self.req, self.instance.uuid, body={'createImage': {"name": 'test'}}) @mock.patch('nova.compute.api.API.trigger_crash_dump') def test_trigger_crash_dump_server_policy(self, mock_crash): rule_name = policies.SERVERS % 'trigger_crash_dump' req = fakes.HTTPRequest.blank('', version='2.17') self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller._action_trigger_crash_dump, req, self.instance.uuid, body={'trigger_crash_dump': None}) def test_trigger_crash_dump_policy_failed_with_other_user(self): # Change the user_id in request context. req = fakes.HTTPRequest.blank('', version='2.17') req.environ['nova.context'].user_id = 'other-user' rule_name = policies.SERVERS % 'trigger_crash_dump' body = {'trigger_crash_dump': None} self.policy.set_rules({rule_name: "user_id:%(user_id)s"}, overwrite=False) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._action_trigger_crash_dump, req, self.instance.uuid, body=body) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) @mock.patch('nova.compute.api.API.trigger_crash_dump') def test_trigger_crash_dump_overridden_policy_pass_with_same_user( self, mock_crash): req = fakes.HTTPRequest.blank('', version='2.17') rule_name = policies.SERVERS % 'trigger_crash_dump' self.policy.set_rules({rule_name: "user_id:%(user_id)s"}, overwrite=False) body = {'trigger_crash_dump': None} self.controller._action_trigger_crash_dump(req, self.instance.uuid, body=body) def test_server_detail_with_extended_attr_policy(self): def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): return objects.InstanceList(objects=self.servers) self.mock_get_all.side_effect = fake_get_all rule = policies.SERVERS % 'detail' # server 'detail' policy is checked before extended attributes # policy so we have to allow it for everyone otherwise it will fail # first for unauthorized contexts. self.policy.set_rules({rule: "@"}, overwrite=False) req = fakes.HTTPRequest.blank('', version='2.3') rule_name = ea_policies.BASE_POLICY_NAME authorize_res, unauthorize_res = self.common_policy_check( self.server_attr_admin_authorized_contexts, self.server_attr_admin_unauthorized_contexts, rule_name, self.controller.detail, req, fatal=False) for attr in self.extended_attr: for resp in authorize_res: self.assertIn(attr, resp['servers'][0]) for resp in unauthorize_res: self.assertNotIn(attr, resp['servers'][0]) @mock.patch('nova.objects.BlockDeviceMappingList.bdms_by_instance_uuid') @mock.patch('nova.compute.api.API.get_instance_host_status') def test_server_show_with_extended_attr_policy(self, mock_get, mock_block): rule = policies.SERVERS % 'show' # server 'show' policy is checked before extended attributes # policy so we have to allow it for everyone otherwise it will fail # first for unauthorized contexts. 
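        # NOTE: overwrite=False merges this override into the rules the
        # policy fixture has already loaded, so only the 'show' rule is
        # opened up; with the default overwrite=True the dict would
        # replace the whole rule set and every other policy check in the
        # request would start failing.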
self.policy.set_rules({rule: "@"}, overwrite=False) req = fakes.HTTPRequest.blank('', version='2.3') rule_name = ea_policies.BASE_POLICY_NAME authorize_res, unauthorize_res = self.common_policy_check( self.server_attr_admin_authorized_contexts, self.server_attr_admin_unauthorized_contexts, rule_name, self.controller.show, req, 'fake', fatal=False) for attr in self.extended_attr: for resp in authorize_res: self.assertIn(attr, resp['server']) for resp in unauthorize_res: self.assertNotIn(attr, resp['server']) @mock.patch('nova.objects.BlockDeviceMappingList.bdms_by_instance_uuid') @mock.patch('nova.compute.api.API.get_instance_host_status') @mock.patch('nova.compute.api.API.rebuild') def test_server_rebuild_with_extended_attr_policy(self, mock_rebuild, mock_get, mock_bdm): rule = policies.SERVERS % 'rebuild' # server 'rebuild' policy is checked before extended attributes # policy so we have to allow it for everyone otherwise it will fail # first for unauthorized contexts. self.policy.set_rules({rule: "@"}, overwrite=False) req = fakes.HTTPRequest.blank('', version='2.75') rule_name = ea_policies.BASE_POLICY_NAME authorize_res, unauthorize_res = self.common_policy_check( self.server_attr_admin_authorized_contexts, self.server_attr_admin_unauthorized_contexts, rule_name, self.controller._action_rebuild, req, self.instance.uuid, body={'rebuild': {"imageRef": uuids.fake_id}}, fatal=False) for attr in self.extended_attr: # NOTE(gmann): user_data attribute is always present in # rebuild response since 2.47. if attr == 'OS-EXT-SRV-ATTR:user_data': continue for resp in authorize_res: self.assertIn(attr, resp.obj['server']) for resp in unauthorize_res: self.assertNotIn(attr, resp.obj['server']) @mock.patch('nova.objects.BlockDeviceMappingList.bdms_by_instance_uuid') @mock.patch.object(InstanceGroup, 'get_by_instance_uuid') @mock.patch('nova.compute.api.API.update_instance') def test_server_update_with_extended_attr_policy(self, mock_update, mock_group, mock_bdm): rule = policies.SERVERS % 'update' # server 'update' policy is checked before extended attributes # policy so we have to allow it for everyone otherwise it will fail # first for unauthorized contexts. self.policy.set_rules({rule: "@"}, overwrite=False) req = fakes.HTTPRequest.blank('', version='2.75') rule_name = ea_policies.BASE_POLICY_NAME authorize_res, unauthorize_res = self.common_policy_check( self.server_attr_admin_authorized_contexts, self.server_attr_admin_unauthorized_contexts, rule_name, self.controller.update, req, self.instance.uuid, body={'server': {'name': 'test'}}, fatal=False) for attr in self.extended_attr: for resp in authorize_res: self.assertIn(attr, resp['server']) for resp in unauthorize_res: self.assertNotIn(attr, resp['server']) def test_server_detail_with_host_status_policy(self): def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): return objects.InstanceList(objects=self.servers) self.mock_get_all.side_effect = fake_get_all rule = policies.SERVERS % 'detail' # server 'detail' policy is checked before host_status # policy so we have to allow it for everyone otherwise it will fail # first for unauthorized contexts. 
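        # NOTE: 'show:host_status' gates the host_status field that
        # microversion 2.16 added to server show/detail responses
        # (values such as UP, DOWN, MAINTENANCE or UNKNOWN), which is
        # why the requests in these tests ask for version 2.16 or newer.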
self.policy.set_rules({rule: "@"}, overwrite=False) req = fakes.HTTPRequest.blank('', version='2.16') rule_name = policies.SERVERS % 'show:host_status' authorize_res, unauthorize_res = self.common_policy_check( self.server_attr_admin_authorized_contexts, self.server_attr_admin_unauthorized_contexts, rule_name, self.controller.detail, req, fatal=False) for resp in authorize_res: self.assertIn('host_status', resp['servers'][0]) for resp in unauthorize_res: self.assertNotIn('host_status', resp['servers'][0]) @mock.patch('nova.objects.BlockDeviceMappingList.bdms_by_instance_uuid') @mock.patch('nova.compute.api.API.get_instance_host_status') def test_server_show_with_host_status_policy(self, mock_status, mock_block): rule = policies.SERVERS % 'show' # server 'show' policy is checked before host_status # policy so we have to allow it for everyone otherwise it will fail # first for unauthorized contexts. self.policy.set_rules({rule: "@"}, overwrite=False) req = fakes.HTTPRequest.blank('', version='2.16') rule_name = policies.SERVERS % 'show:host_status' authorize_res, unauthorize_res = self.common_policy_check( self.server_attr_admin_authorized_contexts, self.server_attr_admin_unauthorized_contexts, rule_name, self.controller.show, req, 'fake', fatal=False) for resp in authorize_res: self.assertIn('host_status', resp['server']) for resp in unauthorize_res: self.assertNotIn('host_status', resp['server']) @mock.patch('nova.objects.BlockDeviceMappingList.bdms_by_instance_uuid') @mock.patch('nova.compute.api.API.get_instance_host_status') @mock.patch('nova.compute.api.API.rebuild') def test_server_rebuild_with_host_status_policy(self, mock_rebuild, mock_status, mock_bdm): rule = policies.SERVERS % 'rebuild' # server 'rebuild' policy is checked before host_status # policy so we have to allow it for everyone otherwise it will fail # first for unauthorized contexts. self.policy.set_rules({rule: "@"}, overwrite=False) req = fakes.HTTPRequest.blank('', version='2.75') rule_name = policies.SERVERS % 'show:host_status' authorize_res, unauthorize_res = self.common_policy_check( self.server_attr_admin_authorized_contexts, self.server_attr_admin_unauthorized_contexts, rule_name, self.controller._action_rebuild, req, self.instance.uuid, body={'rebuild': {"imageRef": uuids.fake_id}}, fatal=False) for resp in authorize_res: self.assertIn('host_status', resp.obj['server']) for resp in unauthorize_res: self.assertNotIn('host_status', resp.obj['server']) @mock.patch('nova.objects.BlockDeviceMappingList.bdms_by_instance_uuid') @mock.patch.object(InstanceGroup, 'get_by_instance_uuid') @mock.patch('nova.compute.api.API.update_instance') def test_server_update_with_host_status_policy(self, mock_update, mock_group, mock_bdm): rule = policies.SERVERS % 'update' # server 'update' policy is checked before host_status # policy so we have to allow it for everyone otherwise it will fail # first for unauthorized contexts. 
self.policy.set_rules({rule: "@"}, overwrite=False) req = fakes.HTTPRequest.blank('', version='2.75') rule_name = policies.SERVERS % 'show:host_status' authorize_res, unauthorize_res = self.common_policy_check( self.server_attr_admin_authorized_contexts, self.server_attr_admin_unauthorized_contexts, rule_name, self.controller.update, req, self.instance.uuid, body={'server': {'name': 'test'}}, fatal=False) for resp in authorize_res: self.assertIn('host_status', resp['server']) for resp in unauthorize_res: self.assertNotIn('host_status', resp['server']) @mock.patch('nova.compute.api.API.get_instances_host_statuses') def test_server_detail_with_unknown_host_status_policy(self, mock_status): def fake_get_all(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None, cell_down_support=False, all_tenants=False): return objects.InstanceList(objects=self.servers) self.mock_get_all.side_effect = fake_get_all host_statuses = {} for server in self.servers: host_statuses.update({server.uuid: fields.HostStatus.UNKNOWN}) mock_status.return_value = host_statuses rule = policies.SERVERS % 'detail' # server 'detail' policy is checked before unknown host_status # policy so we have to allow it for everyone otherwise it will fail # first for unauthorized contexts. To verify the unknown host_status # policy we need to disallow host_status policy for everyone. rule_host_status = policies.SERVERS % 'show:host_status' self.policy.set_rules({ rule: "@", rule_host_status: "!"}, overwrite=False) req = fakes.HTTPRequest.blank('', version='2.16') rule_name = policies.SERVERS % 'show:host_status:unknown-only' authorize_res, unauthorize_res = self.common_policy_check( self.server_attr_admin_authorized_contexts, self.server_attr_admin_unauthorized_contexts, rule_name, self.controller.detail, req, fatal=False) for resp in authorize_res: self.assertIn('host_status', resp['servers'][0]) self.assertEqual(fields.HostStatus.UNKNOWN, resp['servers'][0]['host_status']) for resp in unauthorize_res: self.assertNotIn('host_status', resp['servers'][0]) @mock.patch('nova.objects.BlockDeviceMappingList.bdms_by_instance_uuid') @mock.patch('nova.compute.api.API.get_instance_host_status') def test_server_show_with_unknown_host_status_policy(self, mock_status, mock_block): mock_status.return_value = fields.HostStatus.UNKNOWN rule = policies.SERVERS % 'show' # server 'show' policy is checked before unknown host_status # policy so we have to allow it for everyone otherwise it will fail # first for unauthorized contexts. To verify the unknown host_status # policy we need to disallow host_status policy for everyone. 
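        # NOTE: "!" is the oslo.policy check that never passes; denying
        # the plain 'show:host_status' rule for everyone while leaving
        # 'show:host_status:unknown-only' at its default is what lets
        # these tests exercise the unknown-only rule in isolation, so
        # only an UNKNOWN host_status can still be exposed.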
rule_host_status = policies.SERVERS % 'show:host_status' self.policy.set_rules({ rule: "@", rule_host_status: "!"}, overwrite=False) req = fakes.HTTPRequest.blank('', version='2.16') rule_name = policies.SERVERS % 'show:host_status:unknown-only' authorize_res, unauthorize_res = self.common_policy_check( self.server_attr_admin_authorized_contexts, self.server_attr_admin_unauthorized_contexts, rule_name, self.controller.show, req, 'fake', fatal=False) for resp in authorize_res: self.assertIn('host_status', resp['server']) self.assertEqual( fields.HostStatus.UNKNOWN, resp['server']['host_status']) for resp in unauthorize_res: self.assertNotIn('host_status', resp['server']) @mock.patch('nova.objects.BlockDeviceMappingList.bdms_by_instance_uuid') @mock.patch('nova.compute.api.API.get_instance_host_status') @mock.patch('nova.compute.api.API.rebuild') def test_server_rebuild_with_unknown_host_status_policy(self, mock_rebuild, mock_status, mock_bdm): mock_status.return_value = fields.HostStatus.UNKNOWN rule = policies.SERVERS % 'rebuild' # server 'rebuild' policy is checked before unknown host_status # policy so we have to allow it for everyone otherwise it will fail # first for unauthorized contexts. To verify the unknown host_status # policy we need to disallow host_status policy for everyone. rule_host_status = policies.SERVERS % 'show:host_status' self.policy.set_rules({ rule: "@", rule_host_status: "!"}, overwrite=False) req = fakes.HTTPRequest.blank('', version='2.75') rule_name = policies.SERVERS % 'show:host_status:unknown-only' authorize_res, unauthorize_res = self.common_policy_check( self.server_attr_admin_authorized_contexts, self.server_attr_admin_unauthorized_contexts, rule_name, self.controller._action_rebuild, req, self.instance.uuid, body={'rebuild': {"imageRef": uuids.fake_id}}, fatal=False) for resp in authorize_res: self.assertIn('host_status', resp.obj['server']) self.assertEqual( fields.HostStatus.UNKNOWN, resp.obj['server']['host_status']) for resp in unauthorize_res: self.assertNotIn('host_status', resp.obj['server']) @mock.patch('nova.objects.BlockDeviceMappingList.bdms_by_instance_uuid') @mock.patch('nova.compute.api.API.get_instance_host_status') @mock.patch.object(InstanceGroup, 'get_by_instance_uuid') @mock.patch('nova.compute.api.API.update_instance') def test_server_update_with_unknown_host_status_policy(self, mock_update, mock_group, mock_status, mock_bdm): mock_status.return_value = fields.HostStatus.UNKNOWN rule = policies.SERVERS % 'update' # server 'update' policy is checked before unknown host_status # policy so we have to allow it for everyone otherwise it will fail # first for unauthorized contexts. To verify the unknown host_status # policy we need to disallow host_status policy for everyone. 
rule_host_status = policies.SERVERS % 'show:host_status' self.policy.set_rules({ rule: "@", rule_host_status: "!"}, overwrite=False) req = fakes.HTTPRequest.blank('', version='2.75') rule_name = policies.SERVERS % 'show:host_status:unknown-only' authorize_res, unauthorize_res = self.common_policy_check( self.server_attr_admin_authorized_contexts, self.server_attr_admin_unauthorized_contexts, rule_name, self.controller.update, req, self.instance.uuid, body={'server': {'name': 'test'}}, fatal=False) for resp in authorize_res: self.assertIn('host_status', resp['server']) self.assertEqual( fields.HostStatus.UNKNOWN, resp['server']['host_status']) for resp in unauthorize_res: self.assertNotIn('host_status', resp['server']) @mock.patch('nova.compute.api.API.create') def test_create_requested_destination_server_policy(self, mock_create): # 'create' policy is checked before 'create:requested_destination' so # we have to allow it for everyone otherwise it will # fail for unauthorized contexts here. rule = policies.SERVERS % 'create' self.policy.set_rules({rule: "@"}, overwrite=False) req = fakes.HTTPRequest.blank('', version='2.74') def fake_create(context, *args, **kwargs): for attr in ['requested_host', 'requested_hypervisor_hostname']: if context in self.project_admin_authorized_contexts: self.assertIn(attr, kwargs) if context in self.project_admin_unauthorized_contexts: self.assertNotIn(attr, kwargs) return ([self.instance], '') mock_create.side_effect = fake_create body = { 'server': { 'name': 'server_test', 'imageRef': uuids.fake_id, 'flavorRef': uuids.fake_id, 'networks': [{ 'uuid': uuids.fake_id }], 'host': 'fake', 'hypervisor_hostname': 'fake' }, } self.common_policy_check(self.project_admin_authorized_contexts, self.project_admin_unauthorized_contexts, self.rule_requested_destination, self.controller.create, req, body=body) @mock.patch('nova.compute.api.API._check_requested_networks') @mock.patch('nova.compute.api.API._allow_resize_to_same_host') @mock.patch('nova.objects.RequestSpec.get_by_instance_uuid') @mock.patch('nova.objects.Instance.save') @mock.patch('nova.api.openstack.common.get_instance') @mock.patch('nova.api.openstack.common.' 'instance_has_port_with_resource_request') @mock.patch('nova.conductor.ComputeTaskAPI.resize_instance') def test_cross_cell_resize_server_policy(self, mock_resize, mock_port, mock_get, mock_save, mock_rs, mock_allow, m_net): self.stub_out('nova.compute.api.API.get_instance_host_status', lambda x, y: "UP") # 'migrate' policy is checked before 'resize:cross_cell' so # we have to allow it for everyone otherwise it will # fail for unauthorized contexts here. 
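        # NOTE: the stubbed _validate_host_for_cold_migrate below is
        # where the policy result becomes observable: contexts that pass
        # CROSS_CELL_RESIZE should reach it with
        # allow_cross_cell_resize=True and everyone else with False,
        # which is exactly what fake_validate asserts.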
rule = 'os_compute_api:os-migrate-server:migrate' self.policy.set_rules({rule: "@"}, overwrite=False) rule_name = policies.CROSS_CELL_RESIZE mock_port.return_value = False req = fakes.HTTPRequest.blank('', version='2.56') def fake_get(*args, **kwargs): return fake_instance.fake_instance_obj( self.project_member_context, id=1, uuid=uuids.fake_id, project_id=self.project_id, user_id='fake-user', vm_state=vm_states.ACTIVE, launched_at=timeutils.utcnow()) mock_get.side_effect = fake_get def fake_validate(context, instance, host_name, allow_cross_cell_resize): if context in self.cross_cell_authorized_contexts: self.assertTrue(allow_cross_cell_resize) if context in self.cross_cell_unauthorized_contexts: self.assertFalse(allow_cross_cell_resize) return objects.ComputeNode(host=1, hypervisor_hostname=2) self.stub_out( 'nova.compute.api.API._validate_host_for_cold_migrate', fake_validate) self.common_policy_check(self.cross_cell_authorized_contexts, self.cross_cell_unauthorized_contexts, rule_name, self.m_controller._migrate, req, self.instance.uuid, body={'migrate': {'host': 'fake'}}, fatal=False) def test_network_attach_external_network_policy(self): # NOTE(gmann): Testing policy 'network:attach_external_network' # which raise different error then PolicyNotAuthorized # if not allowed. neutron_api = neutron.API() for context in self.zero_disk_external_net_authorized_contexts: neutron_api._check_external_network_attach(context, [{'id': 1, 'router:external': 'ext'}]) for context in self.zero_disk_external_net_unauthorized_contexts: self.assertRaises(exception.ExternalNetworkAttachForbidden, neutron_api._check_external_network_attach, context, [{'id': 1, 'router:external': 'ext'}]) def test_zero_disk_flavor_policy(self): # NOTE(gmann): Testing policy 'create:zero_disk_flavor' # which raise different error then PolicyNotAuthorized # if not allowed. image = {'id': uuids.image_id, 'status': 'foo'} flavor = objects.Flavor( vcpus=1, memory_mb=512, root_gb=0, extra_specs={'hw:pmu': "true"}) compute_api = compute.API() for context in self.zero_disk_external_net_authorized_contexts: compute_api._validate_flavor_image_nostatus(context, image, flavor, None) for context in self.zero_disk_external_net_unauthorized_contexts: self.assertRaises( exception.BootFromVolumeRequiredForZeroDiskFlavor, compute_api._validate_flavor_image_nostatus, context, image, flavor, None) class ServersScopeTypePolicyTest(ServersPolicyTest): """Test Servers APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(ServersScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # These policy are project scoped only and 'create' policy is checked # first so even we allow it for everyone the system scoped context # cannot validate these as they fail on 'create' policy due to # scope_type. So we need to set rule name as None to skip the policy # error message assertion in base class. These rule name are only used # for error message assertion. 
self.rule_trusted_certs = None self.rule_attach_network = None self.rule_attach_volume = None self.rule_requested_destination = None self.rule_forced_host = None # Check that system admin is able to create server with host request # and get server extended attributes or host status. self.admin_authorized_contexts = [ self.system_admin_context ] # Check that non-system/admin is not able to create server with # host request and get server extended attributes or host status. self.admin_unauthorized_contexts = [ self.project_admin_context, self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context, self.other_project_reader_context ] # Check that system reader is able to list the server # for all projects. self.system_reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context] # Check that non-system reader is not able to list the server # for all projects. self.system_reader_unauthorized_contexts = [ self.legacy_admin_context, self.project_admin_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context, self.other_project_reader_context ] # Check if project member can create the server. self.project_member_authorized_contexts = [ self.legacy_admin_context, self.project_admin_context, self.project_member_context, self.other_project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_reader_context ] # Check if non-project member cannot create the server. self.project_member_unauthorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context ] # Check that project admin is able to create server with requested # destination. self.project_admin_authorized_contexts = [ self.legacy_admin_context, self.project_admin_context] # Check that non-project admin is not able to create server with # requested destination self.project_admin_unauthorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context, self.other_project_reader_context ] class ServersNoLegacyPolicyTest(ServersScopeTypePolicyTest): """Test Servers APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system_admin_or_owner APIs. """ without_deprecated_rules = True def setUp(self): super(ServersNoLegacyPolicyTest, self).setUp() # Check that system admin or owner is able to update, delete # or perform server action. self.admin_or_owner_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context, ] # Check that non-system and non-admin/owner is not able to update, # delete or perform server action. self.admin_or_owner_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.project_reader_context, self.project_foo_context, self.system_foo_context, self.other_project_member_context, self.other_project_reader_context] # Check that system reader or projct owner is able to get # server. 
self.system_reader_or_owner_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context, self.project_reader_context, self.project_member_context, ] # Check that non-system reader nd non-admin/owner is not able to get # server. self.system_reader_or_owner_unauthorized_contexts = [ self.legacy_admin_context, self.project_foo_context, self.system_foo_context, self.other_project_member_context, self.other_project_reader_context ] self.everyone_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.system_member_context, self.system_reader_context, self.other_project_member_context ] self.everyone_unauthorized_contexts = [ self.project_foo_context, self.system_foo_context ] # Check if project member can create the server. self.project_member_authorized_contexts = [ self.legacy_admin_context, self.project_admin_context, self.project_member_context, self.other_project_member_context ] # Check if non-project member cannot create the server. self.project_member_unauthorized_contexts = [ self.system_admin_context, self.system_member_context, self.project_reader_context, self.project_foo_context, self.other_project_reader_context, self.system_reader_context, self.system_foo_context ] # Check that system admin is able to get server extended attributes # or host status. self.server_attr_admin_authorized_contexts = [ self.system_admin_context] # Check that non-system admin is not able to get server extended # attributes or host status. self.server_attr_admin_unauthorized_contexts = [ self.legacy_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context, self.other_project_reader_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_services.py0000664000175000017500000002214200000000000022716 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.api.openstack.compute import services as services_v21 from nova import exception from nova.policies import base as base_policy from nova.tests.unit.api.openstack import fakes from nova.tests.unit.policies import base from nova.tests.unit import policy_fixture class ServicesPolicyTest(base.BasePolicyTest): """Test os-services APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. 
""" def setUp(self): super(ServicesPolicyTest, self).setUp() self.controller = services_v21.ServiceController() self.req = fakes.HTTPRequest.blank('/services') # Check that admin is able to change the service self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context] # Check that non-admin is not able to change the service self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] # Check that system scoped admin, member and reader are able to # read the service data. # NOTE(gmann): Until old default rule which is admin_api is # deprecated and not removed, project admin and legacy admin # will be able to read the service data. This make sure that existing # tokens will keep working even we have changed this policy defaults # to reader role. self.reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context, self.legacy_admin_context, self.project_admin_context] # Check that non-system-reader are not able to read the service # data self.reader_unauthorized_contexts = [ self.system_foo_context, self.other_project_member_context, self.project_foo_context, self.project_member_context, self.project_reader_context] def test_delete_service_policy(self): rule_name = "os_compute_api:os-services:delete" with mock.patch('nova.compute.api.HostAPI.service_get_by_id'): self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.delete, self.req, 1) def test_index_service_policy(self): rule_name = "os_compute_api:os-services:list" with mock.patch('nova.compute.api.HostAPI.service_get_all'): self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.index, self.req) def test_old_update_service_policy(self): rule_name = "os_compute_api:os-services:update" body = {'host': 'host1', 'binary': 'nova-compute'} update = 'nova.compute.api.HostAPI.service_update_by_host_and_binary' with mock.patch(update): self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.update, self.req, 'enable', body=body) def test_update_service_policy(self): rule_name = "os_compute_api:os-services:update" req = fakes.HTTPRequest.blank( '', version=services_v21.UUID_FOR_ID_MIN_VERSION) service = self.start_service( 'compute', 'fake-compute-host').service_ref with mock.patch('nova.compute.api.HostAPI.service_update'): self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.update, req, service.uuid, body={'status': 'enabled'}) class ServicesScopeTypePolicyTest(ServicesPolicyTest): """Test os-services APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scopped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. 
""" def setUp(self): super(ServicesScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system admin is able to change the service self.admin_authorized_contexts = [ self.system_admin_context] # Check that non-system or non-admin is not able to change the service self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_admin_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] # Check that system admin, member and reader are able to read the # service data self.reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context] # Check that non-system or non-reader are not able to read the service # data self.reader_unauthorized_contexts = [ self.system_foo_context, self.legacy_admin_context, self.project_admin_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] class ServicesDeprecatedPolicyTest(base.BasePolicyTest): """Test os-services APIs Deprecated policies. This class checks if deprecated policy rules are overridden by user on policy.json file then they still work because oslo.policy add deprecated rules in logical OR condition and enforce them for policy checks if overridden. """ def setUp(self): super(ServicesDeprecatedPolicyTest, self).setUp() self.controller = services_v21.ServiceController() self.member_req = fakes.HTTPRequest.blank('') self.member_req.environ['nova.context'] = self.system_reader_context self.reader_req = fakes.HTTPRequest.blank('') self.reader_req.environ['nova.context'] = self.project_reader_context self.deprecated_policy = "os_compute_api:os-services" # Overridde rule with different checks than defaults so that we can # verify the rule overridden case. override_rules = {self.deprecated_policy: base_policy.SYSTEM_READER} # NOTE(gmann): Only override the deprecated rule in policy file so # that # we can verify if overridden checks are considered by oslo.policy. # Oslo.policy will consider the overridden rules if: # 1. overridden deprecated rule's checks are different than defaults # 2. new rules are not present in policy file self.policy = self.useFixture(policy_fixture.OverridePolicyFixture( rules_in_file=override_rules)) def test_deprecated_policy_overridden_rule_is_checked(self): # Test to verify if deprecatd overridden policy is working. # check for success as member role. Deprecated rule # has been overridden with member checks in policy.json # If member role pass it means overridden rule is enforced by # olso.policy because new default is system admin and the old # default is admin. with mock.patch('nova.compute.api.HostAPI.service_get_by_id'): self.controller.index(self.member_req) # check for failure with reader context. exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller.index, self.reader_req) self.assertEqual( "Policy doesn't allow os_compute_api:os-services:list to be" " performed.", exc.format_message()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_shelve.py0000664000175000017500000001747700000000000022400 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from nova.api.openstack.compute import shelve from nova.compute import vm_states from nova import exception from nova.policies import shelve as policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class ShelveServerPolicyTest(base.BasePolicyTest): """Test Shelve server APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(ShelveServerPolicyTest, self).setUp() self.controller = shelve.ShelveController() self.req = fakes.HTTPRequest.blank('') user_id = self.req.environ['nova.context'].user_id self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock self.instance = fake_instance.fake_instance_obj( self.project_member_context, id=1, uuid=uuids.fake_id, project_id=self.project_id, user_id=user_id, vm_state=vm_states.ACTIVE) self.mock_get.return_value = self.instance # Check that admin or and server owner is able to shelve/unshelve # the server self.admin_or_owner_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context] # Check that non-admin/owner is not able to shelve/unshelve # the server self.admin_or_owner_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context ] # Check that admin is able to shelve offload the server. self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context] # Check that non-admin is not able to shelve offload the server. 
self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context ] @mock.patch('nova.compute.api.API.shelve') def test_shelve_server_policy(self, mock_shelve): rule_name = policies.POLICY_ROOT % 'shelve' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller._shelve, self.req, self.instance.uuid, body={'shelve': {}}) @mock.patch('nova.compute.api.API.unshelve') def test_unshelve_server_policy(self, mock_unshelve): rule_name = policies.POLICY_ROOT % 'unshelve' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller._unshelve, self.req, self.instance.uuid, body={'unshelve': {}}) @mock.patch('nova.compute.api.API.shelve_offload') def test_shelve_offload_server_policy(self, mock_offload): rule_name = policies.POLICY_ROOT % 'shelve_offload' self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller._shelve_offload, self.req, self.instance.uuid, body={'shelveOffload': {}}) def test_shelve_server_policy_failed_with_other_user(self): # Change the user_id in request context. req = fakes.HTTPRequest.blank('') req.environ['nova.context'].user_id = 'other-user' rule_name = policies.POLICY_ROOT % 'shelve' self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._shelve, req, fakes.FAKE_UUID, body={'shelve': {}}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) @mock.patch('nova.compute.api.API.shelve') def test_shelve_sevrer_overridden_policy_pass_with_same_user( self, mock_shelve): rule_name = policies.POLICY_ROOT % 'shelve' self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) self.controller._shelve(self.req, fakes.FAKE_UUID, body={'shelve': {}}) class ShelveServerScopeTypePolicyTest(ShelveServerPolicyTest): """Test Shelve Server APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(ShelveServerScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class ShelveServerNoLegacyPolicyTest(ShelveServerScopeTypePolicyTest): """Test Shelve Server APIs policies with system scope enabled, and no more deprecated rules. """ without_deprecated_rules = True def setUp(self): super(ShelveServerNoLegacyPolicyTest, self).setUp() # Check that system admin or and owner is able to shelve/unshelve # the server. self.admin_or_owner_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context] # Check that non-system/admin/owner is not able to shelve/unshelve # the server. self.admin_or_owner_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context, self.project_reader_context, self.project_foo_context ] # Check that system admin is able to shelve offload the server. 
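        # NOTE: the NoLegacy subclasses only override these context
        # lists; the test methods themselves are inherited unchanged, so
        # the same shelve, unshelve and shelve_offload checks are re-run
        # against the new defaults without the deprecated
        # admin-or-owner fallback.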
self.admin_authorized_contexts = [ self.system_admin_context ] # Check that non system admin is not able to shelve offload the server self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.other_project_member_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/policies/test_simple_tenant_usage.py0000664000175000017500000001376200000000000025131 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.api.openstack.compute import simple_tenant_usage from nova.policies import simple_tenant_usage as policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit.policies import base class SimpleTenantUsagePolicyTest(base.BasePolicyTest): """Test Simple Tenant Usage APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(SimpleTenantUsagePolicyTest, self).setUp() self.controller = simple_tenant_usage.SimpleTenantUsageController() self.req = fakes.HTTPRequest.blank('') self.controller._get_instances_all_cells = mock.MagicMock() # Check that reader(legacy admin) or and owner is able to get # the tenant usage statistics for a specific tenant. self.reader_or_owner_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context, self.system_member_context, self.system_reader_context] # Check that non-reader(legacy non-admin) or owner is not able to get # the tenant usage statistics for a specific tenant. self.reader_or_owner_unauthorized_contexts = [ self.system_foo_context, self.other_project_member_context, self.other_project_reader_context ] # Check that reader is able to get the tenant usage statistics. self.reader_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.system_member_context, self.system_reader_context] # Check that non-reader is not able to get the tenant usage statistics. 
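        # NOTE: 'list' reports usage across every tenant and is
        # therefore reader-only, while 'show' targets a single tenant
        # and is also granted to that tenant's own users, which is the
        # split the two pairs of context lists express.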
self.reader_unauthorized_contexts = [ self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context, self.other_project_reader_context ] def test_index_simple_tenant_usage_policy(self): rule_name = policies.POLICY_ROOT % 'list' self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.index, self.req) def test_show_simple_tenant_usage_policy(self): rule_name = policies.POLICY_ROOT % 'show' self.common_policy_check(self.reader_or_owner_authorized_contexts, self.reader_or_owner_unauthorized_contexts, rule_name, self.controller.show, self.req, self.project_id) class SimpleTenantUsageScopeTypePolicyTest(SimpleTenantUsagePolicyTest): """Test Simple Tenant Usage APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(SimpleTenantUsageScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system reader is able to get the tenant usage statistics. self.reader_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context] # Check that non-system/reader is not able to get the tenant usage # statistics. self.reader_unauthorized_contexts = [ self.legacy_admin_context, self.system_foo_context, self.project_admin_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context, self.other_project_reader_context ] class SimpleTenantUsageNoLegacyPolicyTest( SimpleTenantUsageScopeTypePolicyTest): """Test Simple Tenant Usage APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system APIs. """ without_deprecated_rules = True def setUp(self): super(SimpleTenantUsageNoLegacyPolicyTest, self).setUp() # Check that system reader or owner is able to get # the tenant usage statistics for a specific tenant. self.reader_or_owner_authorized_contexts = [ self.system_admin_context, self.system_member_context, self.system_reader_context, self.project_admin_context, self.project_member_context, self.project_reader_context] # Check that non-system reader/owner is not able to get # the tenant usage statistics for a specific tenant. self.reader_or_owner_unauthorized_contexts = [ self.legacy_admin_context, self.system_foo_context, self.other_project_member_context, self.project_foo_context, self.other_project_reader_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_suspend_server.py0000664000175000017500000001435400000000000024150 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from nova.api.openstack.compute import suspend_server from nova.compute import vm_states from nova import exception from nova.policies import suspend_server as policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.policies import base class SuspendServerPolicyTest(base.BasePolicyTest): """Test Suspend Server APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(SuspendServerPolicyTest, self).setUp() self.controller = suspend_server.SuspendServerController() self.req = fakes.HTTPRequest.blank('') user_id = self.req.environ['nova.context'].user_id self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock self.instance = fake_instance.fake_instance_obj( self.project_member_context, id=1, uuid=uuids.fake_id, project_id=self.project_id, user_id=user_id, vm_state=vm_states.ACTIVE) self.mock_get.return_value = self.instance # Check that admin or and server owner is able to suspend/resume # the server self.admin_or_owner_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.project_reader_context, self.project_foo_context] # Check that non-admin/owner is not able to suspend/resume # the server self.admin_or_owner_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context ] @mock.patch('nova.compute.api.API.suspend') def test_suspend_server_policy(self, mock_suspend): rule_name = policies.POLICY_ROOT % 'suspend' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller._suspend, self.req, self.instance.uuid, body={'suspend': {}}) @mock.patch('nova.compute.api.API.resume') def test_resume_server_policy(self, mock_resume): rule_name = policies.POLICY_ROOT % 'resume' self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller._resume, self.req, self.instance.uuid, body={'resume': {}}) def test_suspend_server_policy_failed_with_other_user(self): # Change the user_id in request context. req = fakes.HTTPRequest.blank('') req.environ['nova.context'].user_id = 'other-user' rule_name = policies.POLICY_ROOT % 'suspend' self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._suspend, req, self.instance.uuid, body={'suspend': {}}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) @mock.patch('nova.compute.api.API.suspend') def test_suspend_server_overridden_policy_pass_with_same_user( self, mock_suspend): rule_name = policies.POLICY_ROOT % 'suspend' self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) self.controller._suspend(self.req, self.instance.uuid, body={'suspend': {}}) class SuspendServerScopeTypePolicyTest(SuspendServerPolicyTest): """Test Suspend Server APIs policies with system scope enabled. 
This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(SuspendServerScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") class SuspendServerNoLegacyPolicyTest(SuspendServerScopeTypePolicyTest): """Test Suspend Server APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system APIs. """ without_deprecated_rules = True def setUp(self): super(SuspendServerNoLegacyPolicyTest, self).setUp() # Check that system admin or and server owner is able to # suspend/resume the server. self.admin_or_owner_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context] # Check that non-system/admin/owner is not able to suspend/resume # the server. self.admin_or_owner_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context, self.project_reader_context, self.project_foo_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policies/test_volumes.py0000664000175000017500000003313000000000000022564 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.api.openstack.compute import volumes as volumes_v21 from nova.compute import vm_states from nova import exception from nova import objects from nova.objects import block_device as block_device_obj from nova.policies import volumes_attachments as va_policies from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_block_device from nova.tests.unit import fake_instance from nova.tests.unit.policies import base # This is the server ID. FAKE_UUID = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' # This is the old volume ID (to swap from). FAKE_UUID_A = '00000000-aaaa-aaaa-aaaa-000000000000' # This is the new volume ID (to swap to). 
FAKE_UUID_B = 'bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb' def fake_bdm_get_by_volume_and_instance(cls, ctxt, volume_id, instance_uuid): if volume_id != FAKE_UUID_A: raise exception.VolumeBDMNotFound(volume_id=volume_id) db_bdm = fake_block_device.FakeDbBlockDeviceDict( {'id': 1, 'instance_uuid': instance_uuid, 'device_name': '/dev/fake0', 'delete_on_termination': 'False', 'source_type': 'volume', 'destination_type': 'volume', 'snapshot_id': None, 'volume_id': volume_id, 'volume_size': 1}) return objects.BlockDeviceMapping._from_db_object( ctxt, objects.BlockDeviceMapping(), db_bdm) def fake_get_volume(self, context, id): if id == FAKE_UUID_A: status = 'in-use' attach_status = 'attached' elif id == FAKE_UUID_B: status = 'available' attach_status = 'detached' else: raise exception.VolumeNotFound(volume_id=id) return {'id': id, 'status': status, 'attach_status': attach_status} class VolumeAttachPolicyTest(base.BasePolicyTest): """Test os-volumes-attachments APIs policies with all possible context. This class defines the set of context with different roles which are allowed and not allowed to pass the policy checks. With those set of context, it will call the API operation and verify the expected behaviour. """ def setUp(self): super(VolumeAttachPolicyTest, self).setUp() self.controller = volumes_v21.VolumeAttachmentController() self.req = fakes.HTTPRequest.blank('') self.policy_root = va_policies.POLICY_ROOT self.stub_out('nova.objects.BlockDeviceMapping' '.get_by_volume_and_instance', fake_bdm_get_by_volume_and_instance) self.stub_out('nova.volume.cinder.API.get', fake_get_volume) self.mock_get = self.useFixture( fixtures.MockPatch('nova.api.openstack.common.get_instance')).mock uuid = uuids.fake_id self.instance = fake_instance.fake_instance_obj( self.project_member_context, id=1, uuid=uuid, project_id=self.project_id, vm_state=vm_states.ACTIVE, task_state=None, launched_at=timeutils.utcnow()) self.mock_get.return_value = self.instance # Check that admin or owner is able to list/create/show/delete # the attached volume. 
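        # NOTE: project_reader_context and project_foo_context sit in
        # the authorized list because the legacy admin-or-owner check
        # only matches the project_id, not the caller's role, so any
        # user in the owning project passes until the no-legacy
        # defaults further down are in effect.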
self.admin_or_owner_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_foo_context, self.project_reader_context, self.project_member_context ] self.admin_or_owner_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.other_project_member_context ] # Check that admin is able to update the attached volume self.admin_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.project_admin_context ] # Check that non-admin is not able to update the attached # volume self.admin_unauthorized_contexts = [ self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] self.reader_authorized_contexts = [ self.legacy_admin_context, self.system_admin_context, self.system_reader_context, self.system_member_context, self.project_admin_context, self.project_reader_context, self.project_member_context, self.project_foo_context ] self.reader_unauthorized_contexts = [ self.system_foo_context, self.other_project_member_context ] @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') def test_index_volume_attach_policy(self, mock_get_instance): rule_name = self.policy_root % "index" self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.index, self.req, FAKE_UUID) def test_show_volume_attach_policy(self): rule_name = self.policy_root % "show" self.common_policy_check(self.reader_authorized_contexts, self.reader_unauthorized_contexts, rule_name, self.controller.show, self.req, FAKE_UUID, FAKE_UUID_A) @mock.patch('nova.compute.api.API.attach_volume') def test_create_volume_attach_policy(self, mock_attach_volume): rule_name = self.policy_root % "create" body = {'volumeAttachment': {'volumeId': FAKE_UUID_B, 'device': '/dev/fake'}} self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller.create, self.req, FAKE_UUID, body=body) @mock.patch.object(block_device_obj.BlockDeviceMapping, 'save') def test_update_volume_attach_policy(self, mock_bdm_save): rule_name = self.policy_root % "update" req = fakes.HTTPRequest.blank('', version='2.85') body = {'volumeAttachment': { 'volumeId': FAKE_UUID_A, 'delete_on_termination': True}} self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller.update, req, FAKE_UUID, FAKE_UUID_A, body=body) @mock.patch('nova.compute.api.API.detach_volume') def test_delete_volume_attach_policy(self, mock_detach_volume): rule_name = self.policy_root % "delete" self.common_policy_check(self.admin_or_owner_authorized_contexts, self.admin_or_owner_unauthorized_contexts, rule_name, self.controller.delete, self.req, FAKE_UUID, FAKE_UUID_A) @mock.patch('nova.compute.api.API.swap_volume') def test_swap_volume_attach_policy(self, mock_swap_volume): rule_name = self.policy_root % "swap" body = {'volumeAttachment': {'volumeId': FAKE_UUID_B}} self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.update, self.req, FAKE_UUID, FAKE_UUID_A, body=body) @mock.patch.object(block_device_obj.BlockDeviceMapping, 'save') @mock.patch('nova.compute.api.API.swap_volume') def test_swap_volume_attach_policy_failed(self, mock_swap_volume, 
mock_bdm_save): """Policy check fails for swap + update due to swap policy failure. """ rule_name = self.policy_root % "swap" req = fakes.HTTPRequest.blank('', version='2.85') req.environ['nova.context'].user_id = 'other-user' self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) body = {'volumeAttachment': {'volumeId': FAKE_UUID_B, 'delete_on_termination': True}} exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.update, req, FAKE_UUID, FAKE_UUID_A, body=body) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) mock_swap_volume.assert_not_called() mock_bdm_save.assert_not_called() @mock.patch.object(block_device_obj.BlockDeviceMapping, 'save') @mock.patch('nova.compute.api.API.swap_volume') def test_pass_swap_and_update_volume_attach_policy(self, mock_swap_volume, mock_bdm_save): rule_name = self.policy_root % "swap" req = fakes.HTTPRequest.blank('', version='2.85') body = {'volumeAttachment': {'volumeId': FAKE_UUID_B, 'delete_on_termination': True}} self.common_policy_check(self.admin_authorized_contexts, self.admin_unauthorized_contexts, rule_name, self.controller.update, req, FAKE_UUID, FAKE_UUID_A, body=body) mock_swap_volume.assert_called() mock_bdm_save.assert_called() class VolumeAttachScopeTypePolicyTest(VolumeAttachPolicyTest): """Test os-volume-attachments APIs policies with system scope enabled. This class set the nova.conf [oslo_policy] enforce_scope to True so that we can switch on the scope checking on oslo policy side. It defines the set of context with scoped token which are allowed and not allowed to pass the policy checks. With those set of context, it will run the API operation and verify the expected behaviour. """ def setUp(self): super(VolumeAttachScopeTypePolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system admin is able to update the attached volume self.admin_authorized_contexts = [ self.system_admin_context] # Check that non-system or non-admin is not able to update # the attached volume. self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_admin_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] class VolumeAttachNoLegacyPolicyTest(VolumeAttachPolicyTest): """Test os-volume-attachments APIs policies with system scope enabled, and no more deprecated rules that allow the legacy admin API to access system_admin_or_owner APIs. """ without_deprecated_rules = True def setUp(self): super(VolumeAttachNoLegacyPolicyTest, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") # Check that system or projct admin or owner is able to # list/create/show/delete the attached volume. self.admin_or_owner_authorized_contexts = [ self.system_admin_context, self.project_admin_context, self.project_member_context ] # Check that non-system and non-admin/owner is not able to # list/create/show/delete the attached volume. 
self.admin_or_owner_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.project_reader_context, self.project_foo_context, self.system_foo_context, self.other_project_member_context ] # Check that admin is able to update the attached volume self.admin_authorized_contexts = [ self.system_admin_context ] # Check that non-admin is not able to update the attached # volume self.admin_unauthorized_contexts = [ self.legacy_admin_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_admin_context, self.project_member_context, self.other_project_member_context, self.project_foo_context, self.project_reader_context ] self.reader_authorized_contexts = [ self.system_admin_context, self.system_reader_context, self.system_member_context, self.project_admin_context, self.project_reader_context, self.project_member_context ] self.reader_unauthorized_contexts = [ self.legacy_admin_context, self.system_foo_context, self.project_foo_context, self.other_project_member_context ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/policy_fixture.py0000664000175000017500000001571200000000000021277 0ustar00zuulzuul00000000000000# Copyright 2012 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import fixtures from oslo_policy import policy as oslo_policy from oslo_serialization import jsonutils import nova.conf from nova.conf import paths from nova import policies import nova.policy from nova.tests.unit import fake_policy CONF = nova.conf.CONF class RealPolicyFixture(fixtures.Fixture): """Load the live policy for tests. A base policy fixture that starts with the assumption that you'd like to load and enforce the shipped default policy in tests. Provides interfaces to tinker with both the contents and location of the policy file before loading to allow overrides. To do this implement ``_prepare_policy`` in the subclass, and adjust the ``policy_file`` accordingly. """ def _prepare_policy(self): """Allow changing of the policy before we get started""" pass def setUp(self): super(RealPolicyFixture, self).setUp() # policy_file can be overridden by subclasses self.policy_file = paths.state_path_def('etc/nova/policy.json') self._prepare_policy() CONF.set_override('policy_file', self.policy_file, group='oslo_policy') nova.policy.reset() # NOTE(gmann): Logging all the deprecation warning for every unit # test will overflow the log files. Suppress the deprecation warnings # for tests. nova.policy.init(suppress_deprecation_warnings=True) self.addCleanup(nova.policy.reset) def set_rules(self, rules, overwrite=True): policy = nova.policy._ENFORCER policy.set_rules(oslo_policy.Rules.from_dict(rules), overwrite=overwrite) def add_missing_default_rules(self, rules): """Adds default rules and their values to the given rules dict. The given rulen dict may have an incomplete set of policy rules. 
This method will add the default policy rules and their values to the dict. It will not override the existing rules. """ for rule in policies.list_rules(): # NOTE(lbragstad): Only write the rule if it isn't already in the # rule set and if it isn't deprecated. Otherwise we're just going # to spam test runs with deprecate policy warnings. if rule.name not in rules and not rule.deprecated_for_removal: rules[rule.name] = rule.check_str class PolicyFixture(RealPolicyFixture): """Load a fake policy from nova.tests.unit.fake_policy This overrides the policy with a completely fake and synthetic policy file. NOTE(sdague): the use of this is deprecated, and we should unwind the tests so that they can function with the real policy. This is mostly legacy because our default test instances and default test contexts don't match up. It appears that in many cases fake_policy was just modified to whatever makes tests pass, which makes it dangerous to be used in tree. Long term a NullPolicy fixture might be better in those cases. """ def _prepare_policy(self): self.policy_dir = self.useFixture(fixtures.TempDir()) self.policy_file = os.path.join(self.policy_dir.path, 'policy.json') # load the fake_policy data and add the missing default rules. policy_rules = jsonutils.loads(fake_policy.policy_data) self.add_missing_default_rules(policy_rules) with open(self.policy_file, 'w') as f: jsonutils.dump(policy_rules, f) CONF.set_override('policy_dirs', [], group='oslo_policy') class RoleBasedPolicyFixture(RealPolicyFixture): """Load a modified policy which allows all actions only by a single role. This fixture can be used for testing role based permissions as it provides a version of the policy which stomps over all previous declaration and makes every action only available to a single role. """ def __init__(self, role="admin", *args, **kwargs): super(RoleBasedPolicyFixture, self).__init__(*args, **kwargs) self.role = role def _prepare_policy(self): # Convert all actions to require the specified role policy = {} for rule in policies.list_rules(): policy[rule.name] = 'role:%s' % self.role self.policy_dir = self.useFixture(fixtures.TempDir()) self.policy_file = os.path.join(self.policy_dir.path, 'policy.json') with open(self.policy_file, 'w') as f: jsonutils.dump(policy, f) class OverridePolicyFixture(RealPolicyFixture): """Load the set of requested rules into policy file This overrides the policy with the requested rules only into policy file. This fixture is to verify the use case where operator has overridden the policy rules in policy file means default policy not used. One example is when policy rules are deprecated. In that case tests can use this fixture and verify if deprecated rules are overridden then does nova code enforce the overridden rules not only defaults. As per oslo.policy deprecattion feature, if deprecated rule is overridden in policy file then, overridden check is used to verify the policy. Example of usage: self.deprecated_policy = "os_compute_api:os-services" # set check_str as different than defaults to verify the # rule overridden case. override_rules = {self.deprecated_policy: 'is_admin:True'} # NOTE(gmann): Only override the deprecated rule in policy file so that # we can verify if overridden checks are considered by oslo.policy. # Oslo.policy will consider the overridden rules if: # 1. overridden checks are different than defaults # 2. 
new rules for deprecated rules are not present in policy file self.policy = self.useFixture(policy_fixture.OverridePolicyFixture( rules_in_file=override_rules)) """ def __init__(self, rules_in_file, *args, **kwargs): self.rules_in_file = rules_in_file super(OverridePolicyFixture, self).__init__(*args, **kwargs) def _prepare_policy(self): self.policy_dir = self.useFixture(fixtures.TempDir()) self.policy_file = os.path.join(self.policy_dir.path, 'policy.json') with open(self.policy_file, 'w') as f: jsonutils.dump(self.rules_in_file, f) CONF.set_override('policy_dirs', [], group='oslo_policy') ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1636736378.570468 nova-21.2.4/nova/tests/unit/privsep/0000775000175000017500000000000000000000000017342 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/privsep/__init__.py0000664000175000017500000000000000000000000021441 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/privsep/test_fs.py0000664000175000017500000003650500000000000021374 0ustar00zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # Copyright 2019 Aptira Pty Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
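# The filesystem privsep tests in this module all rely on one technique:
# patch oslo_concurrency.processutils.execute so that no real command is
# spawned, call the helper under test, then assert on the exact argv it
# would have executed. A minimal self-contained sketch of that technique
# follows; the fake_umount helper is invented here purely for illustration
# (it is not a nova API), and oslo.concurrency is assumed to be installed,
# as it is a nova dependency.

from unittest import mock as sketch_mock

from oslo_concurrency import processutils


def fake_umount(path):
    # stand-in for a nova.privsep.fs style helper
    processutils.execute('umount', path, attempts=3, delay_on_retry=True)


@sketch_mock.patch('oslo_concurrency.processutils.execute')
def run_sketch(mock_execute):
    fake_umount('/fake/path')
    mock_execute.assert_called_with('umount', '/fake/path', attempts=3,
                                    delay_on_retry=True)

# Calling run_sketch() exercises the pattern without touching the host:
# the decorator injects the mock, so nothing is actually executed.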
import mock import nova.privsep.fs from nova import test from nova.tests import fixtures class PrivsepFilesystemHelpersTestCase(test.NoDBTestCase): """Test filesystem related utility methods.""" def setUp(self): super(PrivsepFilesystemHelpersTestCase, self).setUp() self.useFixture(fixtures.PrivsepFixture()) @mock.patch('oslo_concurrency.processutils.execute') def test_mount_simple(self, mock_execute): nova.privsep.fs.mount(None, '/dev/nosuch', '/fake/path', None) mock_execute.assert_called_with('mount', '/dev/nosuch', '/fake/path') @mock.patch('oslo_concurrency.processutils.execute') def test_mount_less_simple(self, mock_execute): nova.privsep.fs.mount('ext4', '/dev/nosuch', '/fake/path', ['-o', 'remount']) mock_execute.assert_called_with('mount', '-t', 'ext4', '-o', 'remount', '/dev/nosuch', '/fake/path') @mock.patch('oslo_concurrency.processutils.execute') def test_umount(self, mock_execute): nova.privsep.fs.umount('/fake/path') mock_execute.assert_called_with('umount', '/fake/path', attempts=3, delay_on_retry=True) @mock.patch('oslo_concurrency.processutils.execute') def test_lvcreate_simple(self, mock_execute): nova.privsep.fs.lvcreate(1024, 'lv', 'vg') mock_execute.assert_called_with('lvcreate', '-L', '1024b', '-n', 'lv', 'vg', attempts=3) @mock.patch('oslo_concurrency.processutils.execute') def test_lvcreate_preallocated(self, mock_execute): nova.privsep.fs.lvcreate(1024, 'lv', 'vg', preallocated=512) mock_execute.assert_called_with('lvcreate', '-L', '512b', '--virtualsize', '1024b', '-n', 'lv', 'vg', attempts=3) @mock.patch('oslo_concurrency.processutils.execute') def test_vginfo(self, mock_execute): nova.privsep.fs.vginfo('vg') mock_execute.assert_called_with('vgs', '--noheadings', '--nosuffix', '--separator', '|', '--units', 'b', '-o', 'vg_size,vg_free', 'vg') @mock.patch('oslo_concurrency.processutils.execute') def test_lvlist(self, mock_execute): nova.privsep.fs.lvlist('vg') mock_execute.assert_called_with('lvs', '--noheadings', '-o', 'lv_name', 'vg') @mock.patch('oslo_concurrency.processutils.execute') def test_lvinfo(self, mock_execute): nova.privsep.fs.lvinfo('/path/to/lv') mock_execute.assert_called_with('lvs', '-o', 'vg_all,lv_all', '--separator', '|', '/path/to/lv') @mock.patch('oslo_concurrency.processutils.execute') def test_lvremove(self, mock_execute): nova.privsep.fs.lvremove('/path/to/lv') mock_execute.assert_called_with('lvremove', '-f', '/path/to/lv', attempts=3) @mock.patch('oslo_concurrency.processutils.execute') def test_blockdev_size(self, mock_execute): nova.privsep.fs.blockdev_size('/dev/nosuch') mock_execute.assert_called_with('blockdev', '--getsize64', '/dev/nosuch') @mock.patch('oslo_concurrency.processutils.execute') def test_blockdev_flush(self, mock_execute): nova.privsep.fs.blockdev_flush('/dev/nosuch') mock_execute.assert_called_with('blockdev', '--flushbufs', '/dev/nosuch') @mock.patch('oslo_concurrency.processutils.execute') def test_clear_simple(self, mock_execute): nova.privsep.fs.clear('/dev/nosuch', 1024) mock_execute.assert_called_with('shred', '-n0', '-z', '-s1024', '/dev/nosuch') @mock.patch('oslo_concurrency.processutils.execute') def test_clear_with_shred(self, mock_execute): nova.privsep.fs.clear('/dev/nosuch', 1024, shred=True) mock_execute.assert_called_with('shred', '-n3', '-s1024', '/dev/nosuch') @mock.patch('oslo_concurrency.processutils.execute') def test_loopsetup(self, mock_execute): nova.privsep.fs.loopsetup('/dev/nosuch') mock_execute.assert_called_with('losetup', '--find', '--show', '/dev/nosuch') 
@mock.patch('oslo_concurrency.processutils.execute') def test_loopremove(self, mock_execute): nova.privsep.fs.loopremove('/dev/nosuch') mock_execute.assert_called_with('losetup', '--detach', '/dev/nosuch', attempts=3) @mock.patch('oslo_concurrency.processutils.execute') def test_nbd_connect(self, mock_execute): nova.privsep.fs.nbd_connect('/dev/nosuch', '/fake/path') mock_execute.assert_called_with('qemu-nbd', '-c', '/dev/nosuch', '/fake/path') @mock.patch('oslo_concurrency.processutils.execute') def test_nbd_disconnect(self, mock_execute): nova.privsep.fs.nbd_disconnect('/dev/nosuch') mock_execute.assert_called_with('qemu-nbd', '-d', '/dev/nosuch') @mock.patch('oslo_concurrency.processutils.execute') def test_create_device_maps(self, mock_execute): nova.privsep.fs.create_device_maps('/dev/nosuch') mock_execute.assert_called_with('kpartx', '-a', '/dev/nosuch') @mock.patch('oslo_concurrency.processutils.execute') def test_remove_device_maps(self, mock_execute): nova.privsep.fs.remove_device_maps('/dev/nosuch') mock_execute.assert_called_with('kpartx', '-d', '/dev/nosuch') @mock.patch('oslo_concurrency.processutils.execute') def test_get_filesystem_type(self, mock_execute): nova.privsep.fs.get_filesystem_type('/dev/nosuch') mock_execute.assert_called_with('blkid', '-o', 'value', '-s', 'TYPE', '/dev/nosuch', check_exit_code=[0, 2]) @mock.patch('oslo_concurrency.processutils.execute') def test_privileged_e2fsck(self, mock_execute): nova.privsep.fs.e2fsck('/path/nosuch') mock_execute.assert_called_with('e2fsck', '-fp', '/path/nosuch', check_exit_code=[0, 1, 2]) @mock.patch('oslo_concurrency.processutils.execute') def test_privileged_e2fsck_with_flags(self, mock_execute): nova.privsep.fs.e2fsck('/path/nosuch', flags='festive') mock_execute.assert_called_with('e2fsck', 'festive', '/path/nosuch', check_exit_code=[0, 1, 2]) @mock.patch('oslo_concurrency.processutils.execute') def test_unprivileged_e2fsck(self, mock_execute): nova.privsep.fs.unprivileged_e2fsck('/path/nosuch') mock_execute.assert_called_with('e2fsck', '-fp', '/path/nosuch', check_exit_code=[0, 1, 2]) @mock.patch('oslo_concurrency.processutils.execute') def test_unprivileged_e2fsck_with_flags(self, mock_execute): nova.privsep.fs.unprivileged_e2fsck('/path/nosuch', flags='festive') mock_execute.assert_called_with('e2fsck', 'festive', '/path/nosuch', check_exit_code=[0, 1, 2]) @mock.patch('oslo_concurrency.processutils.execute') def test_privileged_resize2fs(self, mock_execute): nova.privsep.fs.resize2fs('/path/nosuch', [0, 1, 2]) mock_execute.assert_called_with('resize2fs', '/path/nosuch', check_exit_code=[0, 1, 2]) @mock.patch('oslo_concurrency.processutils.execute') def test_privileged_resize2fs_with_size(self, mock_execute): nova.privsep.fs.resize2fs('/path/nosuch', [0, 1, 2], 1024) mock_execute.assert_called_with('resize2fs', '/path/nosuch', 1024, check_exit_code=[0, 1, 2]) @mock.patch('oslo_concurrency.processutils.execute') def test_unprivileged_resize2fs(self, mock_execute): nova.privsep.fs.unprivileged_resize2fs('/path/nosuch', [0, 1, 2]) mock_execute.assert_called_with('resize2fs', '/path/nosuch', check_exit_code=[0, 1, 2]) @mock.patch('oslo_concurrency.processutils.execute') def test_unprivileged_resize2fs_with_size(self, mock_execute): nova.privsep.fs.unprivileged_resize2fs('/path/nosuch', [0, 1, 2], 1024) mock_execute.assert_called_with('resize2fs', '/path/nosuch', 1024, check_exit_code=[0, 1, 2]) @mock.patch('oslo_concurrency.processutils.execute') def test_create_partition_table(self, mock_execute): 
nova.privsep.fs.create_partition_table('/dev/nosuch', 'style') mock_execute.assert_called_with('parted', '--script', '/dev/nosuch', 'mklabel', 'style', check_exit_code=True) @mock.patch('oslo_concurrency.processutils.execute') def test_create_partition(self, mock_execute): nova.privsep.fs.create_partition('/dev/nosuch', 'style', 0, 100) mock_execute.assert_called_with('parted', '--script', '/dev/nosuch', '--', 'mkpart', 'style', 0, 100, check_exit_code=True) @mock.patch('oslo_concurrency.processutils.execute') def _test_list_partitions(self, meth, mock_execute): parted_return = "BYT;\n...\n" parted_return += "1:2s:11s:10s:ext3::boot;\n" parted_return += "2:20s:11s:10s::bob:;\n" mock_execute.return_value = (parted_return, None) partitions = meth("abc") self.assertEqual(2, len(partitions)) self.assertEqual((1, 2, 10, "ext3", "", "boot"), partitions[0]) self.assertEqual((2, 20, 10, "", "bob", ""), partitions[1]) def test_privileged_list_partitions(self): self._test_list_partitions(nova.privsep.fs.list_partitions) def test_unprivileged_list_partitions(self): self._test_list_partitions( nova.privsep.fs.unprivileged_list_partitions) @mock.patch('oslo_concurrency.processutils.execute') def test_resize_partition(self, mock_execute): nova.privsep.fs.resize_partition('/dev/nosuch', 0, 100, True) mock_execute.assert_has_calls([ mock.call('parted', '--script', '/dev/nosuch', 'rm', '1'), mock.call('parted', '--script', '/dev/nosuch', 'mkpart', 'primary', '0s', '100s'), mock.call('parted', '--script', '/dev/nosuch', 'set', '1', 'boot', 'on')]) class MkfsTestCase(test.NoDBTestCase): @mock.patch('oslo_concurrency.processutils.execute') def test_mkfs_ext4(self, mock_execute): nova.privsep.fs.unprivileged_mkfs('ext4', '/my/block/dev') mock_execute.assert_called_once_with('mkfs', '-t', 'ext4', '-F', '/my/block/dev') @mock.patch('oslo_concurrency.processutils.execute') def test_mkfs_msdos(self, mock_execute): nova.privsep.fs.unprivileged_mkfs('msdos', '/my/msdos/block/dev') mock_execute.assert_called_once_with('mkfs', '-t', 'msdos', '/my/msdos/block/dev') @mock.patch('oslo_concurrency.processutils.execute') def test_mkfs_swap(self, mock_execute): nova.privsep.fs.unprivileged_mkfs('swap', '/my/swap/block/dev') mock_execute.assert_called_once_with('mkswap', '/my/swap/block/dev') @mock.patch('oslo_concurrency.processutils.execute') def test_mkfs_ext4_withlabel(self, mock_execute): nova.privsep.fs.unprivileged_mkfs('ext4', '/my/block/dev', 'ext4-vol') mock_execute.assert_called_once_with( 'mkfs', '-t', 'ext4', '-F', '-L', 'ext4-vol', '/my/block/dev') @mock.patch('oslo_concurrency.processutils.execute') def test_mkfs_msdos_withlabel(self, mock_execute): nova.privsep.fs.unprivileged_mkfs( 'msdos', '/my/msdos/block/dev', 'msdos-vol') mock_execute.assert_called_once_with( 'mkfs', '-t', 'msdos', '-n', 'msdos-vol', '/my/msdos/block/dev') @mock.patch('oslo_concurrency.processutils.execute') def test_mkfs_swap_withlabel(self, mock_execute): nova.privsep.fs.unprivileged_mkfs( 'swap', '/my/swap/block/dev', 'swap-vol') mock_execute.assert_called_once_with( 'mkswap', '-L', 'swap-vol', '/my/swap/block/dev') HASH_VFAT = nova.privsep.fs._get_hash_str( nova.privsep.fs.FS_FORMAT_VFAT)[:7] HASH_EXT4 = nova.privsep.fs._get_hash_str( nova.privsep.fs.FS_FORMAT_EXT4)[:7] HASH_NTFS = nova.privsep.fs._get_hash_str( nova.privsep.fs.FS_FORMAT_NTFS)[:7] def test_get_file_extension_for_os_type(self): self.assertEqual(self.HASH_VFAT, nova.privsep.fs.get_file_extension_for_os_type( None, None)) self.assertEqual(self.HASH_EXT4, 
nova.privsep.fs.get_file_extension_for_os_type( 'linux', None)) self.assertEqual(self.HASH_NTFS, nova.privsep.fs.get_file_extension_for_os_type( 'windows', None)) def test_get_file_extension_for_os_type_with_overrides(self): with mock.patch('nova.privsep.fs._DEFAULT_MKFS_COMMAND', 'custom mkfs command'): self.assertEqual("a74d253", nova.privsep.fs.get_file_extension_for_os_type( 'linux', None)) self.assertEqual("a74d253", nova.privsep.fs.get_file_extension_for_os_type( 'windows', None)) self.assertEqual("a74d253", nova.privsep.fs.get_file_extension_for_os_type( 'osx', None)) with mock.patch.dict(nova.privsep.fs._MKFS_COMMAND, {'osx': 'custom mkfs command'}, clear=True): self.assertEqual(self.HASH_VFAT, nova.privsep.fs.get_file_extension_for_os_type( None, None)) self.assertEqual(self.HASH_EXT4, nova.privsep.fs.get_file_extension_for_os_type( 'linux', None)) self.assertEqual(self.HASH_NTFS, nova.privsep.fs.get_file_extension_for_os_type( 'windows', None)) self.assertEqual("a74d253", nova.privsep.fs.get_file_extension_for_os_type( 'osx', None)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/privsep/test_idmapshift.py0000664000175000017500000003733300000000000023114 0ustar00zuulzuul00000000000000# Copyright 2014 Rackspace, Andrew Melton # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
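# The uid/gid maps exercised below are (start, target, count) triples: a
# guest id in [start, start + count) shifts to target + (id - start), and
# any id outside every range collapses to the "nobody" id. The standalone
# sketch below (names invented here, independent of nova.privsep.idmapshift)
# reproduces that arithmetic and checks it against the same values the unit
# tests use.

SKETCH_NOBODY_ID = 65534


def sketch_find_target_id(guest_id, maps, nobody=SKETCH_NOBODY_ID):
    # maps is a list of (start, target, count) triples
    for start, target, count in maps:
        if start <= guest_id < start + count:
            return target + (guest_id - start)
    return nobody


_sketch_maps = [(0, 10000, 10), (10, 20000, 1000)]
assert sketch_find_target_id(2, _sketch_maps) == 10002
assert sketch_find_target_id(100, _sketch_maps) == 20090
assert sketch_find_target_id(10000, _sketch_maps) == SKETCH_NOBODY_ID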
import fixtures import mock from six.moves import StringIO import nova.privsep.idmapshift from nova import test def join_side_effect(root, *args): path = root if root != '/': path += '/' path += '/'.join(args) return path class FakeStat(object): def __init__(self, uid, gid): self.st_uid = uid self.st_gid = gid class BaseTestCase(test.NoDBTestCase): def setUp(self): super(BaseTestCase, self).setUp() self.useFixture(fixtures.MonkeyPatch('sys.stdout', StringIO())) self.uid_maps = [(0, 10000, 10), (10, 20000, 1000)] self.gid_maps = [(0, 10000, 10), (10, 20000, 1000)] class FindTargetIDTestCase(BaseTestCase): def test_find_target_id_range_1_first(self): actual_target = nova.privsep.idmapshift.find_target_id( 0, self.uid_maps, nova.privsep.idmapshift.NOBODY_ID, dict()) self.assertEqual(10000, actual_target) def test_find_target_id_inside_range_1(self): actual_target = nova.privsep.idmapshift.find_target_id( 2, self.uid_maps, nova.privsep.idmapshift.NOBODY_ID, dict()) self.assertEqual(10002, actual_target) def test_find_target_id_range_2_first(self): actual_target = nova.privsep.idmapshift.find_target_id( 10, self.uid_maps, nova.privsep.idmapshift.NOBODY_ID, dict()) self.assertEqual(20000, actual_target) def test_find_target_id_inside_range_2(self): actual_target = nova.privsep.idmapshift.find_target_id( 100, self.uid_maps, nova.privsep.idmapshift.NOBODY_ID, dict()) self.assertEqual(20090, actual_target) def test_find_target_id_outside_range(self): actual_target = nova.privsep.idmapshift.find_target_id( 10000, self.uid_maps, nova.privsep.idmapshift.NOBODY_ID, dict()) self.assertEqual(nova.privsep.idmapshift.NOBODY_ID, actual_target) def test_find_target_id_no_mappings(self): actual_target = nova.privsep.idmapshift.find_target_id( 0, [], nova.privsep.idmapshift.NOBODY_ID, dict()) self.assertEqual(nova.privsep.idmapshift.NOBODY_ID, actual_target) def test_find_target_id_updates_memo(self): memo = dict() nova.privsep.idmapshift.find_target_id( 0, self.uid_maps, nova.privsep.idmapshift.NOBODY_ID, memo) self.assertIn(0, memo) self.assertEqual(10000, memo[0]) def test_find_target_guest_id_greater_than_count(self): uid_maps = [(500, 10000, 10)] # Below range actual_target = nova.privsep.idmapshift.find_target_id( 499, uid_maps, nova.privsep.idmapshift.NOBODY_ID, dict()) self.assertEqual(nova.privsep.idmapshift.NOBODY_ID, actual_target) # Match actual_target = nova.privsep.idmapshift.find_target_id( 501, uid_maps, nova.privsep.idmapshift.NOBODY_ID, dict()) self.assertEqual(10001, actual_target) # Beyond range actual_target = nova.privsep.idmapshift.find_target_id( 510, uid_maps, nova.privsep.idmapshift.NOBODY_ID, dict()) self.assertEqual(nova.privsep.idmapshift.NOBODY_ID, actual_target) class ShiftPathTestCase(BaseTestCase): @mock.patch('os.lchown') @mock.patch('os.lstat') def test_shift_path(self, mock_lstat, mock_lchown): mock_lstat.return_value = FakeStat(0, 0) nova.privsep.idmapshift.shift_path( '/test/path', self.uid_maps, self.gid_maps, nova.privsep.idmapshift.NOBODY_ID, dict(), dict()) mock_lstat.assert_has_calls([mock.call('/test/path')]) mock_lchown.assert_has_calls([mock.call('/test/path', 10000, 10000)]) class ShiftDirTestCase(BaseTestCase): @mock.patch('nova.privsep.idmapshift.shift_path') @mock.patch('os.path.join') @mock.patch('os.walk') def test_shift_dir(self, mock_walk, mock_join, mock_shift_path): mock_walk.return_value = [('/', ['a', 'b'], ['c', 'd'])] mock_join.side_effect = join_side_effect nova.privsep.idmapshift.shift_dir('/', self.uid_maps, self.gid_maps, 
nova.privsep.idmapshift.NOBODY_ID) files = ['a', 'b', 'c', 'd'] mock_walk.assert_has_calls([mock.call('/')]) mock_join_calls = [mock.call('/', x) for x in files] mock_join.assert_has_calls(mock_join_calls) args = (self.uid_maps, self.gid_maps, nova.privsep.idmapshift.NOBODY_ID) kwargs = dict(uid_memo=dict(), gid_memo=dict()) shift_path_calls = [mock.call('/', *args, **kwargs)] shift_path_calls += [mock.call('/' + x, *args, **kwargs) for x in files] mock_shift_path.assert_has_calls(shift_path_calls) class ConfirmPathTestCase(test.NoDBTestCase): @mock.patch('os.lstat') def test_confirm_path(self, mock_lstat): uid_ranges = [(1000, 1999)] gid_ranges = [(300, 399)] mock_lstat.return_value = FakeStat(1000, 301) result = nova.privsep.idmapshift.confirm_path( '/test/path', uid_ranges, gid_ranges, 50000) mock_lstat.assert_has_calls([mock.call('/test/path')]) self.assertTrue(result) @mock.patch('os.lstat') def test_confirm_path_nobody(self, mock_lstat): uid_ranges = [(1000, 1999)] gid_ranges = [(300, 399)] mock_lstat.return_value = FakeStat(50000, 50000) result = nova.privsep.idmapshift.confirm_path( '/test/path', uid_ranges, gid_ranges, 50000) mock_lstat.assert_has_calls([mock.call('/test/path')]) self.assertTrue(result) @mock.patch('os.lstat') def test_confirm_path_uid_mismatch(self, mock_lstat): uid_ranges = [(1000, 1999)] gid_ranges = [(300, 399)] mock_lstat.return_value = FakeStat(0, 301) result = nova.privsep.idmapshift.confirm_path( '/test/path', uid_ranges, gid_ranges, 50000) mock_lstat.assert_has_calls([mock.call('/test/path')]) self.assertFalse(result) @mock.patch('os.lstat') def test_confirm_path_gid_mismatch(self, mock_lstat): uid_ranges = [(1000, 1999)] gid_ranges = [(300, 399)] mock_lstat.return_value = FakeStat(1000, 0) result = nova.privsep.idmapshift.confirm_path( '/test/path', uid_ranges, gid_ranges, 50000) mock_lstat.assert_has_calls([mock.call('/test/path')]) self.assertFalse(result) @mock.patch('os.lstat') def test_confirm_path_uid_nobody(self, mock_lstat): uid_ranges = [(1000, 1999)] gid_ranges = [(300, 399)] mock_lstat.return_value = FakeStat(50000, 301) result = nova.privsep.idmapshift.confirm_path( '/test/path', uid_ranges, gid_ranges, 50000) mock_lstat.assert_has_calls([mock.call('/test/path')]) self.assertTrue(result) @mock.patch('os.lstat') def test_confirm_path_gid_nobody(self, mock_lstat): uid_ranges = [(1000, 1999)] gid_ranges = [(300, 399)] mock_lstat.return_value = FakeStat(1000, 50000) result = nova.privsep.idmapshift.confirm_path( '/test/path', uid_ranges, gid_ranges, 50000) mock_lstat.assert_has_calls([mock.call('/test/path')]) self.assertTrue(result) class ConfirmDirTestCase(BaseTestCase): def setUp(self): super(ConfirmDirTestCase, self).setUp() self.uid_map_ranges = nova.privsep.idmapshift.get_ranges(self.uid_maps) self.gid_map_ranges = nova.privsep.idmapshift.get_ranges(self.gid_maps) @mock.patch('nova.privsep.idmapshift.confirm_path') @mock.patch('os.path.join') @mock.patch('os.walk') def test_confirm_dir(self, mock_walk, mock_join, mock_confirm_path): mock_walk.return_value = [('/', ['a', 'b'], ['c', 'd'])] mock_join.side_effect = join_side_effect mock_confirm_path.return_value = True nova.privsep.idmapshift.confirm_dir('/', self.uid_maps, self.gid_maps, nova.privsep.idmapshift.NOBODY_ID) files = ['a', 'b', 'c', 'd'] mock_walk.assert_has_calls([mock.call('/')]) mock_join_calls = [mock.call('/', x) for x in files] mock_join.assert_has_calls(mock_join_calls) args = (self.uid_map_ranges, self.gid_map_ranges, nova.privsep.idmapshift.NOBODY_ID) confirm_path_calls 
= [mock.call('/', *args)] confirm_path_calls += [mock.call('/' + x, *args) for x in files] mock_confirm_path.assert_has_calls(confirm_path_calls) @mock.patch('nova.privsep.idmapshift.confirm_path') @mock.patch('os.path.join') @mock.patch('os.walk') def test_confirm_dir_short_circuit_root(self, mock_walk, mock_join, mock_confirm_path): mock_walk.return_value = [('/', ['a', 'b'], ['c', 'd'])] mock_join.side_effect = join_side_effect mock_confirm_path.return_value = False nova.privsep.idmapshift.confirm_dir('/', self.uid_maps, self.gid_maps, nova.privsep.idmapshift.NOBODY_ID) args = (self.uid_map_ranges, self.gid_map_ranges, nova.privsep.idmapshift.NOBODY_ID) confirm_path_calls = [mock.call('/', *args)] mock_confirm_path.assert_has_calls(confirm_path_calls) @mock.patch('nova.privsep.idmapshift.confirm_path') @mock.patch('os.path.join') @mock.patch('os.walk') def test_confirm_dir_short_circuit_file(self, mock_walk, mock_join, mock_confirm_path): mock_walk.return_value = [('/', ['a', 'b'], ['c', 'd'])] mock_join.side_effect = join_side_effect def confirm_path_side_effect(path, *args): if 'a' in path: return False return True mock_confirm_path.side_effect = confirm_path_side_effect nova.privsep.idmapshift.confirm_dir('/', self.uid_maps, self.gid_maps, nova.privsep.idmapshift.NOBODY_ID) mock_walk.assert_has_calls([mock.call('/')]) mock_join.assert_has_calls([mock.call('/', 'a')]) args = (self.uid_map_ranges, self.gid_map_ranges, nova.privsep.idmapshift.NOBODY_ID) confirm_path_calls = [mock.call('/', *args), mock.call('/' + 'a', *args)] mock_confirm_path.assert_has_calls(confirm_path_calls) @mock.patch('nova.privsep.idmapshift.confirm_path') @mock.patch('os.path.join') @mock.patch('os.walk') def test_confirm_dir_short_circuit_dir(self, mock_walk, mock_join, mock_confirm_path): mock_walk.return_value = [('/', ['a', 'b'], ['c', 'd'])] mock_join.side_effect = join_side_effect def confirm_path_side_effect(path, *args): if 'c' in path: return False return True mock_confirm_path.side_effect = confirm_path_side_effect nova.privsep.idmapshift.confirm_dir('/', self.uid_maps, self.gid_maps, nova.privsep.idmapshift.NOBODY_ID) files = ['a', 'b', 'c'] mock_walk.assert_has_calls([mock.call('/')]) mock_join_calls = [mock.call('/', x) for x in files] mock_join.assert_has_calls(mock_join_calls) args = (self.uid_map_ranges, self.gid_map_ranges, nova.privsep.idmapshift.NOBODY_ID) confirm_path_calls = [mock.call('/', *args)] confirm_path_calls += [mock.call('/' + x, *args) for x in files] mock_confirm_path.assert_has_calls(confirm_path_calls) class IntegrationTestCase(BaseTestCase): @mock.patch('os.lchown') @mock.patch('os.lstat') @mock.patch('os.path.join') @mock.patch('os.walk') def test_integrated_shift_dir(self, mock_walk, mock_join, mock_lstat, mock_lchown): mock_walk.return_value = [('/tmp/test', ['a', 'b', 'c'], ['d']), ('/tmp/test/d', ['1', '2'], [])] mock_join.side_effect = join_side_effect def lstat(path): stats = { 't': FakeStat(0, 0), 'a': FakeStat(0, 0), 'b': FakeStat(0, 2), 'c': FakeStat(30000, 30000), 'd': FakeStat(100, 100), '1': FakeStat(0, 100), '2': FakeStat(100, 100), } return stats[path[-1]] mock_lstat.side_effect = lstat nova.privsep.idmapshift.shift_dir('/tmp/test', self.uid_maps, self.gid_maps, nova.privsep.idmapshift.NOBODY_ID) lchown_calls = [ mock.call('/tmp/test', 10000, 10000), mock.call('/tmp/test/a', 10000, 10000), mock.call('/tmp/test/b', 10000, 10002), mock.call('/tmp/test/c', nova.privsep.idmapshift.NOBODY_ID, nova.privsep.idmapshift.NOBODY_ID), mock.call('/tmp/test/d', 20090, 
20090), mock.call('/tmp/test/d/1', 10000, 20090), mock.call('/tmp/test/d/2', 20090, 20090), ] mock_lchown.assert_has_calls(lchown_calls) @mock.patch('os.lstat') @mock.patch('os.path.join') @mock.patch('os.walk') def test_integrated_confirm_dir_shifted(self, mock_walk, mock_join, mock_lstat): mock_walk.return_value = [('/tmp/test', ['a', 'b', 'c'], ['d']), ('/tmp/test/d', ['1', '2'], [])] mock_join.side_effect = join_side_effect def lstat(path): stats = { 't': FakeStat(10000, 10000), 'a': FakeStat(10000, 10000), 'b': FakeStat(10000, 10002), 'c': FakeStat(nova.privsep.idmapshift.NOBODY_ID, nova.privsep.idmapshift.NOBODY_ID), 'd': FakeStat(20090, 20090), '1': FakeStat(10000, 20090), '2': FakeStat(20090, 20090), } return stats[path[-1]] mock_lstat.side_effect = lstat result = nova.privsep.idmapshift.confirm_dir( '/tmp/test', self.uid_maps, self.gid_maps, nova.privsep.idmapshift.NOBODY_ID) self.assertTrue(result) @mock.patch('os.lstat') @mock.patch('os.path.join') @mock.patch('os.walk') def test_integrated_confirm_dir_unshifted(self, mock_walk, mock_join, mock_lstat): mock_walk.return_value = [('/tmp/test', ['a', 'b', 'c'], ['d']), ('/tmp/test/d', ['1', '2'], [])] mock_join.side_effect = join_side_effect def lstat(path): stats = { 't': FakeStat(0, 0), 'a': FakeStat(0, 0), 'b': FakeStat(0, 2), 'c': FakeStat(30000, 30000), 'd': FakeStat(100, 100), '1': FakeStat(0, 100), '2': FakeStat(100, 100), } return stats[path[-1]] mock_lstat.side_effect = lstat result = nova.privsep.idmapshift.confirm_dir( '/tmp/test', self.uid_maps, self.gid_maps, nova.privsep.idmapshift.NOBODY_ID) self.assertFalse(result) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/privsep/test_libvirt.py0000664000175000017500000002435700000000000022441 0ustar00zuulzuul00000000000000# Copyright 2019 OpenStack Foundation # Copyright 2019 Aptira Pty Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
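# Several helpers tested in this module hand secrets to a child process on
# stdin instead of on the command line; the cryptsetup test below asserts
# that the hex-encoded key is supplied via process_input rather than argv.
# A standalone sketch of that idea using only the standard library follows.
# It assumes a POSIX host with 'cat' available ('cat' simply echoes stdin
# back, standing in for a real cryptsetup invocation with --key-file=-).

import binascii
import subprocess

sketch_key = b'I am a fish'
sketch_hex_key = binascii.hexlify(sketch_key).decode('utf-8')
# round-trip check: decoding the hex string recovers the original key bytes
assert binascii.unhexlify(sketch_hex_key) == sketch_key

# feed the encoded key through the child's stdin so it never appears in the
# process arguments (and therefore never in `ps` output)
result = subprocess.run(['cat'], input=sketch_hex_key.encode('utf-8'),
                        stdout=subprocess.PIPE, check=True)
assert result.stdout == sketch_hex_key.encode('utf-8')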
import binascii import ddt import mock import os import six import nova.privsep.libvirt from nova import test from nova.tests import fixtures from oslo_utils import units class LibvirtTestCase(test.NoDBTestCase): """Test libvirt related utility methods.""" def setUp(self): super(LibvirtTestCase, self).setUp() self.useFixture(fixtures.PrivsepFixture()) @mock.patch('oslo_concurrency.processutils.execute') def test_dmcrypt_create_volume(self, mock_execute): nova.privsep.libvirt.dmcrypt_create_volume( '/fake/path', '/dev/nosuch', 'LUKS1', 1024, b'I am a fish') mock_execute.assert_called_with( 'cryptsetup', 'create', '/fake/path', '/dev/nosuch', '--cipher=LUKS1', '--key-size=1024', '--key-file=-', process_input=binascii.hexlify(b'I am a fish').decode('utf-8')) @mock.patch('oslo_concurrency.processutils.execute') def test_dmcrypt_delete_volume(self, mock_execute): nova.privsep.libvirt.dmcrypt_delete_volume('/fake/path') mock_execute.assert_called_with('cryptsetup', 'remove', '/fake/path') @mock.patch('oslo_concurrency.processutils.execute') @mock.patch('os.stat') @mock.patch('os.chmod') def test_ploop_init(self, mock_chmod, mock_stat, mock_execute): nova.privsep.libvirt.ploop_init(1024, 'raw', 'ext4', '/fake/path') mock_execute.assert_called_with( 'ploop', 'init', '-s', 1024, '-f', 'raw', '-t', 'ext4', '/fake/path', check_exit_code=True) mock_stat.assert_called_with('/fake/path') mock_chmod.assert_called_with('/fake/path', mock.ANY) @mock.patch('oslo_concurrency.processutils.execute') def test_ploop_resize(self, mock_execute): nova.privsep.libvirt.ploop_resize( '/fake/path', 2048 * units.Mi) mock_execute.assert_called_with('prl_disk_tool', 'resize', '--size', '2048M', '--resize_partition', '--hdd', '/fake/path', check_exit_code=True) @mock.patch('oslo_concurrency.processutils.execute') def test_ploop_restore_descriptor(self, mock_execute): nova.privsep.libvirt.ploop_restore_descriptor( '/img/dir', 'imagefile', 'raw') mock_execute.assert_called_with( 'ploop', 'restore-descriptor', '-f', 'raw', '/img/dir', 'imagefile', check_exit_code=True) @mock.patch('oslo_concurrency.processutils.execute') def test_plug_infiniband_vif(self, mock_execute): nova.privsep.libvirt.plug_infiniband_vif('fakemac', 'devid', 'fabric', 'netmodel', 'pcislot') mock_execute.assert_called_with( 'ebrctl', 'add-port', 'fakemac', 'devid', 'fabric', 'netmodel', 'pcislot') @mock.patch('oslo_concurrency.processutils.execute') def test_unplug_infiniband_vif(self, mock_execute): nova.privsep.libvirt.unplug_infiniband_vif('fabric', 'fakemac') mock_execute.assert_called_with( 'ebrctl', 'del-port', 'fabric', 'fakemac') @mock.patch('oslo_concurrency.processutils.execute') def test_plug_midonet_vif(self, mock_execute): nova.privsep.libvirt.plug_midonet_vif('portid', 'dev') mock_execute.assert_called_with( 'mm-ctl', '--bind-port', 'portid', 'dev') @mock.patch('oslo_concurrency.processutils.execute') def test_unplug_midonet_vif(self, mock_execute): nova.privsep.libvirt.unplug_midonet_vif('portid') mock_execute.assert_called_with( 'mm-ctl', '--unbind-port', 'portid') @mock.patch('oslo_concurrency.processutils.execute') def test_plug_plumgrid_vif(self, mock_execute): nova.privsep.libvirt.plug_plumgrid_vif( 'dev', 'iface', 'addr', 'netid', 'tenantid') mock_execute.assert_has_calls( [ mock.call('ifc_ctl', 'gateway', 'add_port', 'dev'), mock.call('ifc_ctl', 'gateway', 'ifup', 'dev', 'access_vm', 'iface', 'addr', 'pgtag2=netid', 'pgtag1=tenantid') ]) @mock.patch('oslo_concurrency.processutils.execute') def test_unplug_plumgrid_vif(self, 
mock_execute): nova.privsep.libvirt.unplug_plumgrid_vif('dev') mock_execute.assert_has_calls( [ mock.call('ifc_ctl', 'gateway', 'ifdown', 'dev'), mock.call('ifc_ctl', 'gateway', 'del_port', 'dev') ]) def test_readpty(self): # Conditionally mock `import` orig_import = __import__ mock_fcntl = mock.Mock(fcntl=mock.Mock(return_value=32769)) def fake_import(module, *args): if module == 'fcntl': return mock_fcntl return orig_import(module, *args) with test.nested( mock.patch.object(six.moves.builtins, 'open', new=mock.mock_open()), mock.patch.object(six.moves.builtins, '__import__', side_effect=fake_import), ) as (mock_open, mock_import): nova.privsep.libvirt.readpty('/fake/path') mock_fileno = mock_open.return_value.fileno.return_value # NOTE(efried): The fact that we see fcntl's mocked return value in # here proves that `import fcntl` was called within the method. mock_fcntl.fcntl.assert_has_calls( [mock.call(mock_fileno, mock_fcntl.F_GETFL), mock.call(mock_fileno, mock_fcntl.F_SETFL, 32769 | os.O_NONBLOCK)]) self.assertIn(mock.call('/fake/path', 'r'), mock_open.mock_calls) @mock.patch('oslo_concurrency.processutils.execute') def test_xend_probe(self, mock_execute): nova.privsep.libvirt.xend_probe() mock_execute.assert_called_with('xend', 'status', check_exit_code=True) def test_create_nmdev(self): mock_open = mock.mock_open() with mock.patch.object(six.moves.builtins, 'open', new=mock_open) as mock_open: nova.privsep.libvirt.create_mdev('phys', 'mdevtype', uuid='fakeuuid') handle = mock_open() self.assertTrue(mock.call('/sys/class/mdev_bus/phys/' 'mdev_supported_types/mdevtype/create', 'w') in mock_open.mock_calls) handle.write.assert_called_with('fakeuuid') @mock.patch('oslo_concurrency.processutils.execute') def test_umount(self, mock_execute): nova.privsep.libvirt.umount('/fake/path') mock_execute.assert_called_with('umount', '/fake/path') @mock.patch('oslo_concurrency.processutils.execute') def test_unprivileged_umount(self, mock_execute): nova.privsep.libvirt.unprivileged_umount('/fake/path') mock_execute.assert_called_with('umount', '/fake/path') @ddt.ddt class PrivsepLibvirtMountTestCase(test.NoDBTestCase): QB_BINARY = "mount.quobyte" QB_FIXED_OPT_1 = "--disable-xattrs" FAKE_VOLUME = "fake_volume" FAKE_MOUNT_BASE = "/fake/mount/base" def setUp(self): super(PrivsepLibvirtMountTestCase, self).setUp() self.useFixture(test.nova_fixtures.PrivsepFixture()) @ddt.data(None, "/FAKE/CFG/PATH.cfg") @mock.patch('oslo_concurrency.processutils.execute') def test_systemd_run_qb_mount(self, cfg_file, mock_execute): sysd_bin = "systemd-run" sysd_opt_1 = "--scope" nova.privsep.libvirt.systemd_run_qb_mount( self.FAKE_VOLUME, self.FAKE_MOUNT_BASE, cfg_file=cfg_file) if cfg_file: mock_execute.assert_called_once_with(sysd_bin, sysd_opt_1, self.QB_BINARY, self.QB_FIXED_OPT_1, self.FAKE_VOLUME, self.FAKE_MOUNT_BASE, "-c", cfg_file) else: mock_execute.assert_called_once_with(sysd_bin, sysd_opt_1, self.QB_BINARY, self.QB_FIXED_OPT_1, self.FAKE_VOLUME, self.FAKE_MOUNT_BASE) @ddt.data(None, "/FAKE/CFG/PATH.cfg") @mock.patch('oslo_concurrency.processutils.execute') def test_unprivileged_qb_mount(self, cfg_file, mock_execute): nova.privsep.libvirt.unprivileged_qb_mount(self.FAKE_VOLUME, self.FAKE_MOUNT_BASE, cfg_file=cfg_file) if cfg_file: mock_execute.assert_called_once_with(self.QB_BINARY, self.QB_FIXED_OPT_1, self.FAKE_VOLUME, self.FAKE_MOUNT_BASE, "-c", cfg_file) else: mock_execute.assert_called_once_with(self.QB_BINARY, self.QB_FIXED_OPT_1, self.FAKE_VOLUME, self.FAKE_MOUNT_BASE) 
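# The Quobyte mount tests above use the ddt library to run a single test
# body against several inputs (here: with and without a config file). A
# minimal self-contained illustration of that mechanism, with no nova code
# involved; the class name and values are invented for the example.

import unittest

import ddt


@ddt.ddt
class DataDrivenSketch(unittest.TestCase):

    @ddt.data(None, '/FAKE/CFG/PATH.cfg')
    def test_cfg_file_variants(self, cfg_file):
        # ddt generates one test case per value handed to @ddt.data
        argv = ['mount.quobyte', 'fake_volume', '/fake/mount/base']
        if cfg_file:
            argv += ['-c', cfg_file]
        self.assertEqual(cfg_file is not None, '-c' in argv)

# Running this class through any unittest runner produces two generated
# test cases, one per value passed to @ddt.data.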
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/privsep/test_linux_net.py0000664000175000017500000001175500000000000022771 0ustar00zuulzuul00000000000000# Copyright 2016 Red Hat, Inc # Copyright 2017 Rackspace Australia # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_concurrency import processutils import nova.privsep.linux_net from nova import test from nova.tests import fixtures @mock.patch('oslo_concurrency.processutils.execute') class LinuxNetTestCase(test.NoDBTestCase): """Test networking helpers.""" def setUp(self): super(LinuxNetTestCase, self).setUp() self.useFixture(fixtures.PrivsepFixture()) @mock.patch('os.path.exists') def test_device_exists(self, mock_exists, mock_execute): nova.privsep.linux_net.device_exists('eth0') mock_exists('/sys/class/net/eth0') def test_set_device_mtu_default(self, mock_execute): mock_execute.return_value = ('', '') nova.privsep.linux_net.set_device_mtu('fake-dev', None) mock_execute.assert_has_calls([]) def test_set_device_mtu_actual(self, mock_execute): mock_execute.return_value = ('', '') nova.privsep.linux_net.set_device_mtu('fake-dev', 1500) mock_execute.assert_has_calls([ mock.call('ip', 'link', 'set', 'fake-dev', 'mtu', 1500, check_exit_code=[0, 2, 254])]) @mock.patch('nova.privsep.linux_net._set_device_enabled_inner') def test_create_tap_dev(self, mock_enabled, mock_execute): nova.privsep.linux_net.create_tap_dev('tap42') mock_execute.assert_has_calls([ mock.call('ip', 'tuntap', 'add', 'tap42', 'mode', 'tap', check_exit_code=[0, 2, 254]) ]) mock_enabled.assert_called_once_with('tap42') @mock.patch('os.path.exists', return_value=True) def test_create_tap_skipped_when_exists(self, mock_exists, mock_execute): nova.privsep.linux_net.create_tap_dev('tap42') mock_exists.assert_called_once_with('/sys/class/net/tap42') mock_execute.assert_not_called() @mock.patch('nova.privsep.linux_net._set_device_enabled_inner') @mock.patch('nova.privsep.linux_net._set_device_macaddr_inner') def test_create_tap_dev_mac(self, mock_set_macaddr, mock_enabled, mock_execute): nova.privsep.linux_net.create_tap_dev( 'tap42', '00:11:22:33:44:55') mock_execute.assert_has_calls([ mock.call('ip', 'tuntap', 'add', 'tap42', 'mode', 'tap', check_exit_code=[0, 2, 254]) ]) mock_enabled.assert_called_once_with('tap42') mock_set_macaddr.assert_has_calls([ mock.call('tap42', '00:11:22:33:44:55')]) @mock.patch('nova.privsep.linux_net._set_device_enabled_inner') def test_create_tap_dev_fallback_to_tunctl(self, mock_enabled, mock_execute): # ip failed, fall back to tunctl mock_execute.side_effect = [processutils.ProcessExecutionError, 0, 0] nova.privsep.linux_net.create_tap_dev('tap42') mock_execute.assert_has_calls([ mock.call('ip', 'tuntap', 'add', 'tap42', 'mode', 'tap', check_exit_code=[0, 2, 254]), mock.call('tunctl', '-b', '-t', 'tap42') ]) mock_enabled.assert_called_once_with('tap42') @mock.patch('nova.privsep.linux_net._set_device_enabled_inner') def test_create_tap_dev_multiqueue(self, 
mock_enabled, mock_execute): nova.privsep.linux_net.create_tap_dev( 'tap42', multiqueue=True) mock_execute.assert_has_calls([ mock.call('ip', 'tuntap', 'add', 'tap42', 'mode', 'tap', 'multi_queue', check_exit_code=[0, 2, 254]) ]) mock_enabled.assert_called_once_with('tap42') def test_create_tap_dev_multiqueue_tunctl_raises(self, mock_execute): # if creation of a tap by the means of ip command fails, # create_tap_dev() will try to do that by the means of tunctl mock_execute.side_effect = processutils.ProcessExecutionError # but tunctl can't create multiqueue taps, so the failure is expected self.assertRaises(processutils.ProcessExecutionError, nova.privsep.linux_net.create_tap_dev, 'tap42', multiqueue=True) def test_add_vlan(self, mock_execute): nova.privsep.linux_net.add_vlan('eth0', 'vlan_name', 1) cmd = ['ip', 'link', 'add', 'link', 'eth0', 'name', 'vlan_name', 'type', 'vlan', 'id', 1] mock_execute.assert_called_once_with(*cmd, check_exit_code=[0, 2, 254]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/privsep/test_path.py0000664000175000017500000001462300000000000021715 0ustar00zuulzuul00000000000000# Copyright 2016 Red Hat, Inc # Copyright 2017 Rackspace Australia # Copyright 2019 Aptira Pty Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
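# The path helpers below are tested without touching the real filesystem by
# replacing the built-in open() with mock.mock_open(). A standalone sketch
# of that technique follows; the read_text helper is invented for the
# example, and the patch target is plain 'builtins.open' for brevity (the
# real tests patch it via six.moves.builtins for py2/py3 compatibility).

from unittest import mock as sketch_mock


def read_text(path):
    # trivial stand-in for a file-reading helper
    with open(path) as f:
        return f.read()


with sketch_mock.patch('builtins.open',
                       sketch_mock.mock_open(read_data='hello world')):
    assert read_text('/fake/path') == 'hello world'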
import mock import os import six import tempfile from nova import exception import nova.privsep.path from nova import test from nova.tests import fixtures class FileTestCase(test.NoDBTestCase): """Test file related utility methods.""" def setUp(self): super(FileTestCase, self).setUp() self.useFixture(fixtures.PrivsepFixture()) @mock.patch('os.path.exists', return_value=True) def test_readfile(self, mock_exists): mock_open = mock.mock_open(read_data='hello world') with mock.patch.object(six.moves.builtins, 'open', new=mock_open): self.assertEqual('hello world', nova.privsep.path.readfile('/fake/path')) @mock.patch('os.path.exists', return_value=False) def test_readfile_file_not_found(self, mock_exists): self.assertRaises(exception.FileNotFound, nova.privsep.path.readfile, '/fake/path') @mock.patch('os.path.exists', return_value=True) def test_write(self, mock_exists): mock_open = mock.mock_open() with mock.patch.object(six.moves.builtins, 'open', new=mock_open): nova.privsep.path.writefile('/fake/path/file', 'w', 'foo') handle = mock_open() mock_exists.assert_called_with('/fake/path') self.assertTrue(mock.call('/fake/path/file', 'w') in mock_open.mock_calls) handle.write.assert_called_with('foo') @mock.patch('os.path.exists', return_value=False) def test_write_dir_missing(self, mock_exists): self.assertRaises(exception.FileNotFound, nova.privsep.path.writefile, '/fake/path', 'w', 'foo') @mock.patch('os.path.exists', return_value=True) @mock.patch('os.readlink') def test_readlink(self, mock_readlink, mock_exists): nova.privsep.path.readlink('/fake/path') mock_exists.assert_called_with('/fake/path') mock_readlink.assert_called_with('/fake/path') @mock.patch('os.path.exists', return_value=False) def test_readlink_file_not_found(self, mock_exists): self.assertRaises(exception.FileNotFound, nova.privsep.path.readlink, '/fake/path') @mock.patch('os.path.exists', return_value=True) @mock.patch('os.chown') def test_chown(self, mock_chown, mock_exists): nova.privsep.path.chown('/fake/path', uid=42, gid=43) mock_exists.assert_called_with('/fake/path') mock_chown.assert_called_with('/fake/path', 42, 43) @mock.patch('os.path.exists', return_value=False) def test_chown_file_not_found(self, mock_exists): self.assertRaises(exception.FileNotFound, nova.privsep.path.chown, '/fake/path') @mock.patch('oslo_utils.fileutils.ensure_tree') def test_makedirs(self, mock_ensure_tree): nova.privsep.path.makedirs('/fake/path') mock_ensure_tree.assert_called_with('/fake/path') @mock.patch('os.path.exists', return_value=True) @mock.patch('os.chmod') def test_chmod(self, mock_chmod, mock_exists): nova.privsep.path.chmod('/fake/path', 0x666) mock_exists.assert_called_with('/fake/path') mock_chmod.assert_called_with('/fake/path', 0x666) @mock.patch('os.path.exists', return_value=False) def test_chmod_file_not_found(self, mock_exists): self.assertRaises(exception.FileNotFound, nova.privsep.path.chmod, '/fake/path', 0x666) @mock.patch('os.path.exists', return_value=True) @mock.patch('os.utime') def test_utime(self, mock_utime, mock_exists): nova.privsep.path.utime('/fake/path') mock_exists.assert_called_with('/fake/path') mock_utime.assert_called_with('/fake/path', None) @mock.patch('os.path.exists', return_value=False) def test_utime_file_not_found(self, mock_exists): self.assertRaises(exception.FileNotFound, nova.privsep.path.utime, '/fake/path') @mock.patch('os.path.exists', return_value=True) @mock.patch('os.rmdir') def test_rmdir(self, mock_rmdir, mock_exists): nova.privsep.path.rmdir('/fake/path') 
mock_exists.assert_called_with('/fake/path') mock_rmdir.assert_called_with('/fake/path') @mock.patch('os.path.exists', return_value=False) def test_rmdir_file_not_found(self, mock_exists): self.assertRaises(exception.FileNotFound, nova.privsep.path.rmdir, '/fake/path') @mock.patch('os.path.exists', return_value=True) def test_exists(self, mock_exists): nova.privsep.path.path.exists('/fake/path') mock_exists.assert_called_with('/fake/path') class LastBytesTestCase(test.NoDBTestCase): """Test the last_bytes() utility method.""" def setUp(self): super(LastBytesTestCase, self).setUp() self.useFixture(fixtures.PrivsepFixture()) def test_truncated(self): try: fd, path = tempfile.mkstemp() os.write(fd, b'1234567890') os.close(fd) out, remaining = nova.privsep.path.last_bytes(path, 5) self.assertEqual(out, b'67890') self.assertGreater(remaining, 0) finally: os.unlink(path) def test_read_all(self): try: fd, path = tempfile.mkstemp() os.write(fd, b'1234567890') os.close(fd) out, remaining = nova.privsep.path.last_bytes(path, 1000) self.assertEqual(out, b'1234567890') self.assertFalse(remaining > 0) finally: os.unlink(path) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/privsep/test_qemu.py0000664000175000017500000000622000000000000021722 0ustar00zuulzuul00000000000000# Copyright 2019 Aptira Pty Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
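# The LastBytesTestCase above pins down the contract of last_bytes(path, n):
# return the final n bytes of the file together with how many earlier bytes
# were left unread. One plausible standalone implementation of that contract
# (an illustration only, not the actual nova.privsep.path code):

import os
import tempfile


def sketch_last_bytes(path, num):
    with open(path, 'rb') as f:
        size = os.fstat(f.fileno()).st_size
        f.seek(max(size - num, 0))
        return f.read(), max(size - num, 0)


# quick self-check mirroring the values used by the unit test above
fd, tmp_path = tempfile.mkstemp()
os.write(fd, b'1234567890')
os.close(fd)
try:
    assert sketch_last_bytes(tmp_path, 5) == (b'67890', 5)
    assert sketch_last_bytes(tmp_path, 1000) == (b'1234567890', 0)
finally:
    os.unlink(tmp_path)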
import mock import nova.privsep.qemu from nova import test from nova.tests import fixtures class QemuTestCase(test.NoDBTestCase): """Test qemu related utility methods.""" def setUp(self): super(QemuTestCase, self).setUp() self.useFixture(fixtures.PrivsepFixture()) @mock.patch('oslo_concurrency.processutils.execute') @mock.patch('nova.privsep.utils.supports_direct_io') def _test_convert_image(self, meth, mock_supports_direct_io, mock_execute): mock_supports_direct_io.return_value = True meth('/fake/source', '/fake/destination', 'informat', 'outformat', '/fake/instances/path', compress=True) mock_execute.assert_called_with( 'qemu-img', 'convert', '-t', 'none', '-O', 'outformat', '-f', 'informat', '-c', '/fake/source', '/fake/destination') mock_supports_direct_io.reset_mock() mock_execute.reset_mock() mock_supports_direct_io.return_value = False meth('/fake/source', '/fake/destination', 'informat', 'outformat', '/fake/instances/path', compress=True) mock_execute.assert_called_with( 'qemu-img', 'convert', '-t', 'writeback', '-O', 'outformat', '-f', 'informat', '-c', '/fake/source', '/fake/destination') def test_convert_image(self): self._test_convert_image(nova.privsep.qemu.convert_image) def test_convert_image_unprivileged(self): self._test_convert_image(nova.privsep.qemu.unprivileged_convert_image) @mock.patch('oslo_concurrency.processutils.execute') @mock.patch('os.path.isdir') def _test_qemu_img_info(self, method, mock_isdir, mock_execute): mock_isdir.return_value = False mock_execute.return_value = (mock.sentinel.out, None) expected_cmd = ( 'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info', mock.sentinel.path, '--force-share', '--output=json', '-f', mock.sentinel.format) # Assert that the output from processutils is returned self.assertEqual( mock.sentinel.out, method(mock.sentinel.path, format=mock.sentinel.format)) # Assert that the expected command is used mock_execute.assert_called_once_with( *expected_cmd, prlimit=nova.privsep.qemu.QEMU_IMG_LIMITS) def test_privileged_qemu_img_info(self): self._test_qemu_img_info(nova.privsep.qemu.privileged_qemu_img_info) def test_unprivileged_qemu_img_info(self): self._test_qemu_img_info(nova.privsep.qemu.unprivileged_qemu_img_info) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/privsep/test_utils.py0000664000175000017500000001246000000000000022116 0ustar00zuulzuul00000000000000# Copyright 2011 Justin Santa Barbara # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import errno import mock import os import nova.privsep.utils from nova import test class SupportDirectIOTestCase(test.NoDBTestCase): def setUp(self): super(SupportDirectIOTestCase, self).setUp() # O_DIRECT is not supported on all Python runtimes, so on platforms # where it's not supported (e.g. Mac), we can still test the code-path # by stubbing out the value. 
if not hasattr(os, 'O_DIRECT'): # `mock` seems to have trouble stubbing an attr that doesn't # originally exist, so falling back to stubbing out the attribute # directly. os.O_DIRECT = 16384 self.addCleanup(delattr, os, 'O_DIRECT') self.einval = OSError() self.einval.errno = errno.EINVAL self.enoent = OSError() self.enoent.errno = errno.ENOENT self.test_path = os.path.join('.', '.directio.test.123') self.io_flags = os.O_CREAT | os.O_WRONLY | os.O_DIRECT open_patcher = mock.patch('os.open') write_patcher = mock.patch('os.write') close_patcher = mock.patch('os.close') unlink_patcher = mock.patch('os.unlink') random_string_patcher = mock.patch( 'nova.privsep.utils.generate_random_string', return_value='123') self.addCleanup(open_patcher.stop) self.addCleanup(write_patcher.stop) self.addCleanup(close_patcher.stop) self.addCleanup(unlink_patcher.stop) self.addCleanup(random_string_patcher.stop) self.mock_open = open_patcher.start() self.mock_write = write_patcher.start() self.mock_close = close_patcher.start() self.mock_unlink = unlink_patcher.start() random_string_patcher.start() def test_supports_direct_io(self): self.mock_open.return_value = 3 self.assertTrue(nova.privsep.utils.supports_direct_io('.')) self.mock_open.assert_called_once_with(self.test_path, self.io_flags) self.mock_write.assert_called_once_with(3, mock.ANY) # ensure unlink(filepath) will actually remove the file by deleting # the remaining link to it in close(fd) self.mock_close.assert_called_once_with(3) self.mock_unlink.assert_called_once_with(self.test_path) def test_supports_direct_io_with_exception_in_write(self): self.mock_open.return_value = 3 self.mock_write.side_effect = ValueError() self.assertRaises(ValueError, nova.privsep.utils.supports_direct_io, '.') self.mock_open.assert_called_once_with(self.test_path, self.io_flags) self.mock_write.assert_called_once_with(3, mock.ANY) # ensure unlink(filepath) will actually remove the file by deleting # the remaining link to it in close(fd) self.mock_close.assert_called_once_with(3) self.mock_unlink.assert_called_once_with(self.test_path) def test_supports_direct_io_with_exception_in_open(self): self.mock_open.side_effect = ValueError() self.assertRaises(ValueError, nova.privsep.utils.supports_direct_io, '.') self.mock_open.assert_called_once_with(self.test_path, self.io_flags) self.mock_write.assert_not_called() self.mock_close.assert_not_called() self.mock_unlink.assert_called_once_with(self.test_path) def test_supports_direct_io_with_oserror_in_write(self): self.mock_open.return_value = 3 self.mock_write.side_effect = self.einval self.assertFalse(nova.privsep.utils.supports_direct_io('.')) self.mock_open.assert_called_once_with(self.test_path, self.io_flags) self.mock_write.assert_called_once_with(3, mock.ANY) # ensure unlink(filepath) will actually remove the file by deleting # the remaining link to it in close(fd) self.mock_close.assert_called_once_with(3) self.mock_unlink.assert_called_once_with(self.test_path) def test_supports_direct_io_with_oserror_in_open_einval(self): self.mock_open.side_effect = self.einval self.assertFalse(nova.privsep.utils.supports_direct_io('.')) self.mock_open.assert_called_once_with(self.test_path, self.io_flags) self.mock_write.assert_not_called() self.mock_close.assert_not_called() self.mock_unlink.assert_called_once_with(self.test_path) def test_supports_direct_io_with_oserror_in_open_enoent(self): self.mock_open.side_effect = self.enoent self.assertFalse(nova.privsep.utils.supports_direct_io('.')) 
        self.mock_open.assert_called_once_with(self.test_path, self.io_flags)
        self.mock_write.assert_not_called()
        self.mock_close.assert_not_called()
        self.mock_unlink.assert_called_once_with(self.test_path)

nova-21.2.4/nova/tests/unit/scheduler/__init__.py
nova-21.2.4/nova/tests/unit/scheduler/client/__init__.py
nova-21.2.4/nova/tests/unit/scheduler/client/test_query.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
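# Illustrative sketch: SchedulerQueryClientTestCase below verifies that the
# query client is a thin pass-through to the scheduler RPC API, with
# select_destinations forwarding its arguments positionally (the
# return_objects/return_alternates flags defaulting to False) and the
# aggregate calls forwarded unchanged.  The class below is only a sketch of
# that delegation pattern, an assumption written for this note rather than
# the real nova.scheduler.client.query module.


class _SketchQueryClient(object):
    """Forward scheduler queries to an injected RPC API object."""

    def __init__(self, scheduler_rpcapi):
        self.scheduler_rpcapi = scheduler_rpcapi

    def select_destinations(self, context, spec_obj, instance_uuids,
                            return_objects=False, return_alternates=False):
        # The tests assert both booleans are passed positionally and default
        # to False when the caller omits them.
        return self.scheduler_rpcapi.select_destinations(
            context, spec_obj, instance_uuids, return_objects,
            return_alternates)

    def update_aggregates(self, context, aggregates):
        self.scheduler_rpcapi.update_aggregates(context, aggregates)

    def delete_aggregate(self, context, aggregate):
        self.scheduler_rpcapi.delete_aggregate(context, aggregate)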
import mock from oslo_utils.fixture import uuidsentinel as uuids from nova import context from nova import objects from nova.scheduler.client import query from nova import test class SchedulerQueryClientTestCase(test.NoDBTestCase): def setUp(self): super(SchedulerQueryClientTestCase, self).setUp() self.context = context.get_admin_context() self.client = query.SchedulerQueryClient() def test_constructor(self): self.assertIsNotNone(self.client.scheduler_rpcapi) @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations') def test_select_destinations(self, mock_select_destinations): fake_spec = objects.RequestSpec() fake_spec.instance_uuid = uuids.instance self.client.select_destinations( context=self.context, spec_obj=fake_spec, instance_uuids=[fake_spec.instance_uuid], return_objects=True, return_alternates=True, ) mock_select_destinations.assert_called_once_with(self.context, fake_spec, [fake_spec.instance_uuid], True, True) @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations') def test_select_destinations_old_call(self, mock_select_destinations): fake_spec = objects.RequestSpec() fake_spec.instance_uuid = uuids.instance self.client.select_destinations( context=self.context, spec_obj=fake_spec, instance_uuids=[fake_spec.instance_uuid] ) mock_select_destinations.assert_called_once_with(self.context, fake_spec, [fake_spec.instance_uuid], False, False) @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.update_aggregates') def test_update_aggregates(self, mock_update_aggs): aggregates = [objects.Aggregate(id=1)] self.client.update_aggregates( context=self.context, aggregates=aggregates) mock_update_aggs.assert_called_once_with( self.context, aggregates) @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.delete_aggregate') def test_delete_aggregate(self, mock_delete_agg): aggregate = objects.Aggregate(id=1) self.client.delete_aggregate( context=self.context, aggregate=aggregate) mock_delete_agg.assert_called_once_with( self.context, aggregate) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/scheduler/client/test_report.py0000664000175000017500000052462400000000000024047 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
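# Illustrative sketch: SafeConnectedTestCase below checks that transient
# keystoneauth1 failures (EndpointNotFound, MissingAuthPlugin, Unauthorized,
# ConnectFailure, DiscoveryFailure) do not raise out of the report client,
# and that EndpointNotFound triggers _create_client so a later call can
# rediscover the endpoint.  The decorator below is only a sketch written
# under those assumptions; it is not nova's actual @safe_connect
# implementation.

import functools

from keystoneauth1 import exceptions as ks_exc


def _sketch_safe_connect(func):
    """Swallow transient placement/keystone errors instead of raising."""

    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        try:
            return func(self, *args, **kwargs)
        except ks_exc.EndpointNotFound:
            # Rebuild the client so the endpoint can be rediscovered on a
            # later call, as the tests below expect.
            self._create_client()
        except (ks_exc.MissingAuthPlugin, ks_exc.Unauthorized,
                ks_exc.ConnectFailure, ks_exc.DiscoveryFailure):
            # Treat these as transient or configuration problems: give up
            # quietly so future calls can still be attempted.
            return None
    return wrapper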
import copy import time import fixtures from keystoneauth1 import exceptions as ks_exc import mock import os_resource_classes as orc from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids import six from six.moves.urllib import parse import nova.conf from nova import context from nova import exception from nova import objects from nova.scheduler.client import report from nova.scheduler import utils as scheduler_utils from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.unit import fake_requests CONF = nova.conf.CONF class SafeConnectedTestCase(test.NoDBTestCase): """Test the safe_connect decorator for the scheduler client.""" def setUp(self): super(SafeConnectedTestCase, self).setUp() self.context = context.get_admin_context() with mock.patch('keystoneauth1.loading.load_auth_from_conf_options'): self.client = report.SchedulerReportClient() @mock.patch('keystoneauth1.session.Session.request') def test_missing_endpoint(self, req): """Test EndpointNotFound behavior. A missing endpoint entry should not explode. """ req.side_effect = ks_exc.EndpointNotFound() self.client._get_resource_provider(self.context, "fake") # reset the call count to demonstrate that future calls still # work req.reset_mock() self.client._get_resource_provider(self.context, "fake") self.assertTrue(req.called) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_create_client') @mock.patch('keystoneauth1.session.Session.request') def test_missing_endpoint_create_client(self, req, create_client): """Test EndpointNotFound retry behavior. A missing endpoint should cause _create_client to be called. """ req.side_effect = ks_exc.EndpointNotFound() self.client._get_resource_provider(self.context, "fake") # This is the second time _create_client is called, but the first since # the mock was created. self.assertTrue(create_client.called) @mock.patch('keystoneauth1.session.Session.request') def test_missing_auth(self, req): """Test Missing Auth handled correctly. A missing auth configuration should not explode. """ req.side_effect = ks_exc.MissingAuthPlugin() self.client._get_resource_provider(self.context, "fake") # reset the call count to demonstrate that future calls still # work req.reset_mock() self.client._get_resource_provider(self.context, "fake") self.assertTrue(req.called) @mock.patch('keystoneauth1.session.Session.request') def test_unauthorized(self, req): """Test Unauthorized handled correctly. An unauthorized configuration should not explode. """ req.side_effect = ks_exc.Unauthorized() self.client._get_resource_provider(self.context, "fake") # reset the call count to demonstrate that future calls still # work req.reset_mock() self.client._get_resource_provider(self.context, "fake") self.assertTrue(req.called) @mock.patch('keystoneauth1.session.Session.request') def test_connect_fail(self, req): """Test Connect Failure handled correctly. If we get a connect failure, this is transient, and we expect that this will end up working correctly later. 
""" req.side_effect = ks_exc.ConnectFailure() self.client._get_resource_provider(self.context, "fake") # reset the call count to demonstrate that future calls do # work req.reset_mock() self.client._get_resource_provider(self.context, "fake") self.assertTrue(req.called) @mock.patch.object(report, 'LOG') def test_warning_limit(self, mock_log): # Assert that __init__ initializes _warn_count as we expect self.assertEqual(0, self.client._warn_count) mock_self = mock.MagicMock() mock_self._warn_count = 0 for i in range(0, report.WARN_EVERY + 3): report.warn_limit(mock_self, 'warning') mock_log.warning.assert_has_calls([mock.call('warning'), mock.call('warning')]) @mock.patch('keystoneauth1.session.Session.request') def test_failed_discovery(self, req): """Test DiscoveryFailure behavior. Failed discovery should not blow up. """ req.side_effect = ks_exc.DiscoveryFailure() self.client._get_resource_provider(self.context, "fake") # reset the call count to demonstrate that future calls still # work req.reset_mock() self.client._get_resource_provider(self.context, "fake") self.assertTrue(req.called) class TestConstructor(test.NoDBTestCase): def setUp(self): super(TestConstructor, self).setUp() ksafx = self.useFixture(nova_fixtures.KSAFixture()) self.load_auth_mock = ksafx.mock_load_auth self.load_sess_mock = ksafx.mock_load_sess def test_constructor(self): client = report.SchedulerReportClient() self.load_auth_mock.assert_called_once_with(CONF, 'placement') self.load_sess_mock.assert_called_once_with( CONF, 'placement', auth=self.load_auth_mock.return_value) self.assertEqual(['internal', 'public'], client._client.interface) self.assertEqual({'accept': 'application/json'}, client._client.additional_headers) def test_constructor_admin_interface(self): self.flags(valid_interfaces='admin', group='placement') client = report.SchedulerReportClient() self.load_auth_mock.assert_called_once_with(CONF, 'placement') self.load_sess_mock.assert_called_once_with( CONF, 'placement', auth=self.load_auth_mock.return_value) self.assertEqual(['admin'], client._client.interface) self.assertEqual({'accept': 'application/json'}, client._client.additional_headers) class SchedulerReportClientTestCase(test.NoDBTestCase): def setUp(self): super(SchedulerReportClientTestCase, self).setUp() self.context = context.get_admin_context() self.useFixture(nova_fixtures.KSAFixture()) self.ks_adap_mock = mock.Mock() self.compute_node = objects.ComputeNode( uuid=uuids.compute_node, hypervisor_hostname='foo', vcpus=8, cpu_allocation_ratio=16.0, memory_mb=1024, ram_allocation_ratio=1.5, local_gb=10, disk_allocation_ratio=1.0, ) self.client = report.SchedulerReportClient(self.ks_adap_mock) def _init_provider_tree(self, generation_override=None, resources_override=None): cn = self.compute_node resources = resources_override if resources_override is None: resources = { 'VCPU': { 'total': cn.vcpus, 'reserved': 0, 'min_unit': 1, 'max_unit': cn.vcpus, 'step_size': 1, 'allocation_ratio': cn.cpu_allocation_ratio, }, 'MEMORY_MB': { 'total': cn.memory_mb, 'reserved': 512, 'min_unit': 1, 'max_unit': cn.memory_mb, 'step_size': 1, 'allocation_ratio': cn.ram_allocation_ratio, }, 'DISK_GB': { 'total': cn.local_gb, 'reserved': 0, 'min_unit': 1, 'max_unit': cn.local_gb, 'step_size': 1, 'allocation_ratio': cn.disk_allocation_ratio, }, } generation = generation_override or 1 rp_uuid = self.client._provider_tree.new_root( cn.hypervisor_hostname, cn.uuid, generation=generation, ) self.client._provider_tree.update_inventory(rp_uuid, resources) def 
_validate_provider(self, name_or_uuid, **kwargs): """Validates existence and values of a provider in this client's _provider_tree. :param name_or_uuid: The name or UUID of the provider to validate. :param kwargs: Optional keyword arguments of ProviderData attributes whose values are to be validated. """ found = self.client._provider_tree.data(name_or_uuid) # If kwargs provided, their names indicate ProviderData attributes for attr, expected in kwargs.items(): try: self.assertEqual(getattr(found, attr), expected) except AttributeError: self.fail("Provider with name or UUID %s doesn't have " "attribute %s (expected value: %s)" % (name_or_uuid, attr, expected)) class TestPutAllocations(SchedulerReportClientTestCase): @mock.patch('nova.scheduler.client.report.SchedulerReportClient.put') def test_put_allocations(self, mock_put): mock_put.return_value.status_code = 204 mock_put.return_value.text = "cool" rp_uuid = mock.sentinel.rp consumer_uuid = mock.sentinel.consumer data = {"MEMORY_MB": 1024} expected_url = "/allocations/%s" % consumer_uuid payload = { "allocations": { rp_uuid: {"resources": data} }, "project_id": mock.sentinel.project_id, "user_id": mock.sentinel.user_id, "consumer_generation": mock.sentinel.consumer_generation } resp = self.client.put_allocations( self.context, consumer_uuid, payload) self.assertTrue(resp) mock_put.assert_called_once_with( expected_url, payload, version='1.28', global_request_id=self.context.global_id) @mock.patch.object(report.LOG, 'warning') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.put') def test_put_allocations_fail(self, mock_put, mock_warn): mock_put.return_value.status_code = 400 mock_put.return_value.text = "not cool" rp_uuid = mock.sentinel.rp consumer_uuid = mock.sentinel.consumer data = {"MEMORY_MB": 1024} expected_url = "/allocations/%s" % consumer_uuid payload = { "allocations": { rp_uuid: {"resources": data} }, "project_id": mock.sentinel.project_id, "user_id": mock.sentinel.user_id, "consumer_generation": mock.sentinel.consumer_generation } resp = self.client.put_allocations( self.context, consumer_uuid, payload) self.assertFalse(resp) mock_put.assert_called_once_with( expected_url, payload, version='1.28', global_request_id=self.context.global_id) log_msg = mock_warn.call_args[0][0] self.assertIn("Failed to save allocation for", log_msg) def test_put_allocations_fail_connection_error(self): self.ks_adap_mock.put.side_effect = ks_exc.EndpointNotFound() self.assertRaises( exception.PlacementAPIConnectFailure, self.client.put_allocations, self.context, mock.sentinel.consumer, mock.sentinel.payload) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.put') def test_put_allocations_fail_due_to_consumer_generation_conflict( self, mock_put): mock_put.return_value = fake_requests.FakeResponse( status_code=409, content=jsonutils.dumps( {'errors': [{'code': 'placement.concurrent_update', 'detail': 'consumer generation conflict'}]})) rp_uuid = mock.sentinel.rp consumer_uuid = mock.sentinel.consumer data = {"MEMORY_MB": 1024} expected_url = "/allocations/%s" % consumer_uuid payload = { "allocations": { rp_uuid: {"resources": data} }, "project_id": mock.sentinel.project_id, "user_id": mock.sentinel.user_id, "consumer_generation": mock.sentinel.consumer_generation } self.assertRaises(exception.AllocationUpdateFailed, self.client.put_allocations, self.context, consumer_uuid, payload) mock_put.assert_called_once_with( expected_url, mock.ANY, version='1.28', global_request_id=self.context.global_id) @mock.patch('time.sleep', 
new=mock.Mock()) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.put') def test_put_allocations_retries_conflict(self, mock_put): failed = fake_requests.FakeResponse( status_code=409, content=jsonutils.dumps( {'errors': [{'code': 'placement.concurrent_update', 'detail': ''}]})) succeeded = mock.MagicMock() succeeded.status_code = 204 mock_put.side_effect = (failed, succeeded) rp_uuid = mock.sentinel.rp consumer_uuid = mock.sentinel.consumer data = {"MEMORY_MB": 1024} expected_url = "/allocations/%s" % consumer_uuid payload = { "allocations": { rp_uuid: {"resources": data} }, "project_id": mock.sentinel.project_id, "user_id": mock.sentinel.user_id, "consumer_generation": mock.sentinel.consumer_generation } resp = self.client.put_allocations( self.context, consumer_uuid, payload) self.assertTrue(resp) mock_put.assert_has_calls([ mock.call(expected_url, payload, version='1.28', global_request_id=self.context.global_id)] * 2) @mock.patch('time.sleep', new=mock.Mock()) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.put') def test_put_allocations_retry_gives_up(self, mock_put): failed = fake_requests.FakeResponse( status_code=409, content=jsonutils.dumps( {'errors': [{'code': 'placement.concurrent_update', 'detail': ''}]})) mock_put.return_value = failed rp_uuid = mock.sentinel.rp consumer_uuid = mock.sentinel.consumer data = {"MEMORY_MB": 1024} expected_url = "/allocations/%s" % consumer_uuid payload = { "allocations": { rp_uuid: {"resources": data} }, "project_id": mock.sentinel.project_id, "user_id": mock.sentinel.user_id, "consumer_generation": mock.sentinel.consumer_generation } resp = self.client.put_allocations( self.context, consumer_uuid, payload) self.assertFalse(resp) mock_put.assert_has_calls([ mock.call(expected_url, payload, version='1.28', global_request_id=self.context.global_id)] * 3) def test_claim_resources_success(self): get_resp_mock = mock.Mock(status_code=200) get_resp_mock.json.return_value = { 'allocations': {}, # build instance, not move } self.ks_adap_mock.get.return_value = get_resp_mock resp_mock = mock.Mock(status_code=204) self.ks_adap_mock.put.return_value = resp_mock consumer_uuid = uuids.consumer_uuid alloc_req = { 'allocations': { uuids.cn1: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, } }, }, } project_id = uuids.project_id user_id = uuids.user_id res = self.client.claim_resources(self.context, consumer_uuid, alloc_req, project_id, user_id, allocation_request_version='1.12') expected_url = "/allocations/%s" % consumer_uuid expected_payload = {'allocations': { rp_uuid: alloc for rp_uuid, alloc in alloc_req['allocations'].items()}} expected_payload['project_id'] = project_id expected_payload['user_id'] = user_id self.ks_adap_mock.put.assert_called_once_with( expected_url, microversion='1.12', json=expected_payload, global_request_id=self.context.global_id) self.assertTrue(res) def test_claim_resources_older_alloc_req(self): """Test the case when a stale allocation request is sent to the report client to claim """ get_resp_mock = mock.Mock(status_code=200) get_resp_mock.json.return_value = { 'allocations': {}, # build instance, not move } self.ks_adap_mock.get.return_value = get_resp_mock resp_mock = mock.Mock(status_code=204) self.ks_adap_mock.put.return_value = resp_mock consumer_uuid = uuids.consumer_uuid alloc_req = { 'allocations': { uuids.cn1: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, } }, }, } project_id = uuids.project_id user_id = uuids.user_id res = self.client.claim_resources(self.context, consumer_uuid, 
alloc_req, project_id, user_id, allocation_request_version='1.12') expected_url = "/allocations/%s" % consumer_uuid expected_payload = { 'allocations': { rp_uuid: res for rp_uuid, res in alloc_req['allocations'].items()}, # no consumer generation in the payload as the caller requested # older microversion to be used 'project_id': project_id, 'user_id': user_id} self.ks_adap_mock.put.assert_called_once_with( expected_url, microversion='1.12', json=expected_payload, global_request_id=self.context.global_id) self.assertTrue(res) def test_claim_resources_success_resize_to_same_host_no_shared(self): """Tests resize to the same host operation. In this case allocation exists against the same host RP but with the migration_uuid. """ get_current_allocations_resp_mock = mock.Mock(status_code=200) # source host allocation held by the migration_uuid so it is not # not returned to the claim code as that asks for the instance_uuid # consumer get_current_allocations_resp_mock.json.return_value = { 'allocations': {}, "consumer_generation": 1, "project_id": uuids.project_id, "user_id": uuids.user_id } self.ks_adap_mock.get.return_value = get_current_allocations_resp_mock put_allocations_resp_mock = mock.Mock(status_code=204) self.ks_adap_mock.put.return_value = put_allocations_resp_mock consumer_uuid = uuids.consumer_uuid # This is the resize-up allocation where VCPU, MEMORY_MB and DISK_GB # are all being increased but on the same host. We also throw a custom # resource class in the new allocation to make sure it's not lost alloc_req = { 'allocations': { uuids.same_host: { 'resources': { 'VCPU': 2, 'MEMORY_MB': 2048, 'DISK_GB': 40, 'CUSTOM_FOO': 1 } }, }, # this allocation request comes from the scheduler therefore it # does not have consumer_generation in it. "project_id": uuids.project_id, "user_id": uuids.user_id } project_id = uuids.project_id user_id = uuids.user_id res = self.client.claim_resources(self.context, consumer_uuid, alloc_req, project_id, user_id, allocation_request_version='1.28') expected_url = "/allocations/%s" % consumer_uuid expected_payload = { 'allocations': { uuids.same_host: { 'resources': { 'VCPU': 2, 'MEMORY_MB': 2048, 'DISK_GB': 40, 'CUSTOM_FOO': 1 } }, }, # report client assumes a new consumer in this case 'consumer_generation': None, 'project_id': project_id, 'user_id': user_id} self.ks_adap_mock.put.assert_called_once_with( expected_url, microversion='1.28', json=mock.ANY, global_request_id=self.context.global_id) # We have to pull the json body from the mock call_args to validate # it separately otherwise hash seed issues get in the way. actual_payload = self.ks_adap_mock.put.call_args[1]['json'] self.assertEqual(expected_payload, actual_payload) self.assertTrue(res) def test_claim_resources_success_resize_to_same_host_with_shared(self): """Tests resize to the same host operation. In this case allocation exists against the same host RP and the shared RP but with the migration_uuid. 
""" get_current_allocations_resp_mock = mock.Mock(status_code=200) # source host allocation held by the migration_uuid so it is not # not returned to the claim code as that asks for the instance_uuid # consumer get_current_allocations_resp_mock.json.return_value = { 'allocations': {}, "consumer_generation": 1, "project_id": uuids.project_id, "user_id": uuids.user_id } self.ks_adap_mock.get.return_value = get_current_allocations_resp_mock put_allocations_resp_mock = mock.Mock(status_code=204) self.ks_adap_mock.put.return_value = put_allocations_resp_mock consumer_uuid = uuids.consumer_uuid # This is the resize-up allocation where VCPU, MEMORY_MB and DISK_GB # are all being increased but on the same host. We also throw a custom # resource class in the new allocation to make sure it's not lost alloc_req = { 'allocations': { uuids.same_host: { 'resources': { 'VCPU': 2, 'MEMORY_MB': 2048, 'CUSTOM_FOO': 1 } }, uuids.shared_storage: { 'resources': { 'DISK_GB': 40, } }, }, # this allocation request comes from the scheduler therefore it # does not have consumer_generation in it. "project_id": uuids.project_id, "user_id": uuids.user_id } project_id = uuids.project_id user_id = uuids.user_id res = self.client.claim_resources(self.context, consumer_uuid, alloc_req, project_id, user_id, allocation_request_version='1.28') expected_url = "/allocations/%s" % consumer_uuid expected_payload = { 'allocations': { uuids.same_host: { 'resources': { 'VCPU': 2, 'MEMORY_MB': 2048, 'CUSTOM_FOO': 1 } }, uuids.shared_storage: { 'resources': { 'DISK_GB': 40, } }, }, # report client assumes a new consumer in this case 'consumer_generation': None, 'project_id': project_id, 'user_id': user_id} self.ks_adap_mock.put.assert_called_once_with( expected_url, microversion='1.28', json=mock.ANY, global_request_id=self.context.global_id) # We have to pull the json body from the mock call_args to validate # it separately otherwise hash seed issues get in the way. actual_payload = self.ks_adap_mock.put.call_args[1]['json'] self.assertEqual(expected_payload, actual_payload) self.assertTrue(res) def test_claim_resources_success_evacuate_no_shared(self): """Tests non-forced evacuate. In this case both the source and the dest allocation are held by the instance_uuid in placement. So the claim code needs to merge allocations. The second claim comes from the scheduler and therefore it does not have consumer_generation in it. """ # the source allocation is also held by the instance_uuid so report # client will see it. current_allocs = { 'allocations': { uuids.source_host: { 'generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 20 }, }, }, "consumer_generation": 1, "project_id": uuids.project_id, "user_id": uuids.user_id } self.ks_adap_mock.get.return_value = fake_requests.FakeResponse( status_code=200, content=jsonutils.dumps(current_allocs)) put_allocations_resp_mock = fake_requests.FakeResponse(status_code=204) self.ks_adap_mock.put.return_value = put_allocations_resp_mock consumer_uuid = uuids.consumer_uuid # this is an evacuate so we have the same resources request towards the # dest host alloc_req = { 'allocations': { uuids.dest_host: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 20, } }, }, # this allocation request comes from the scheduler therefore it # does not have consumer_generation in it. 
"project_id": uuids.project_id, "user_id": uuids.user_id } project_id = uuids.project_id user_id = uuids.user_id res = self.client.claim_resources(self.context, consumer_uuid, alloc_req, project_id, user_id, allocation_request_version='1.28') expected_url = "/allocations/%s" % consumer_uuid # we expect that both the source and dest allocations are here expected_payload = { 'allocations': { uuids.source_host: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 20 }, }, uuids.dest_host: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 20, } }, }, # report client uses the consumer_generation that it got from # placement when asked for the existing allocations 'consumer_generation': 1, 'project_id': project_id, 'user_id': user_id} self.ks_adap_mock.put.assert_called_once_with( expected_url, microversion='1.28', json=mock.ANY, global_request_id=self.context.global_id) # We have to pull the json body from the mock call_args to validate # it separately otherwise hash seed issues get in the way. actual_payload = self.ks_adap_mock.put.call_args[1]['json'] self.assertEqual(expected_payload, actual_payload) self.assertTrue(res) def test_claim_resources_success_evacuate_with_shared(self): """Similar test that test_claim_resources_success_evacuate_no_shared but adds shared disk into the mix. """ # the source allocation is also held by the instance_uuid so report # client will see it. current_allocs = { 'allocations': { uuids.source_host: { 'generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, uuids.shared_storage: { 'generation': 42, 'resources': { 'DISK_GB': 20, }, }, }, "consumer_generation": 1, "project_id": uuids.project_id, "user_id": uuids.user_id } self.ks_adap_mock.get.return_value = fake_requests.FakeResponse( status_code=200, content = jsonutils.dumps(current_allocs)) self.ks_adap_mock.put.return_value = fake_requests.FakeResponse( status_code=204) consumer_uuid = uuids.consumer_uuid # this is an evacuate so we have the same resources request towards the # dest host alloc_req = { 'allocations': { uuids.dest_host: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, uuids.shared_storage: { 'generation': 42, 'resources': { 'DISK_GB': 20, }, }, }, # this allocation request comes from the scheduler therefore it # does not have consumer_generation in it. "project_id": uuids.project_id, "user_id": uuids.user_id } project_id = uuids.project_id user_id = uuids.user_id res = self.client.claim_resources(self.context, consumer_uuid, alloc_req, project_id, user_id, allocation_request_version='1.28') expected_url = "/allocations/%s" % consumer_uuid # we expect that both the source and dest allocations are here plus the # shared storage allocation expected_payload = { 'allocations': { uuids.source_host: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, uuids.dest_host: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, } }, uuids.shared_storage: { 'resources': { 'DISK_GB': 20, }, }, }, # report client uses the consumer_generation that got from # placement when asked for the existing allocations 'consumer_generation': 1, 'project_id': project_id, 'user_id': user_id} self.ks_adap_mock.put.assert_called_once_with( expected_url, microversion='1.28', json=mock.ANY, global_request_id=self.context.global_id) # We have to pull the json body from the mock call_args to validate # it separately otherwise hash seed issues get in the way. 
actual_payload = self.ks_adap_mock.put.call_args[1]['json'] self.assertEqual(expected_payload, actual_payload) self.assertTrue(res) def test_claim_resources_success_force_evacuate_no_shared(self): """Tests forced evacuate. In this case both the source and the dest allocation are held by the instance_uuid in placement. So the claim code needs to merge allocations. The second claim comes from the conductor and therefore it does have consumer_generation in it. """ # the source allocation is also held by the instance_uuid so report # client will see it. current_allocs = { 'allocations': { uuids.source_host: { 'generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 20 }, }, }, "consumer_generation": 1, "project_id": uuids.project_id, "user_id": uuids.user_id } self.ks_adap_mock.get.return_value = fake_requests.FakeResponse( status_code=200, content=jsonutils.dumps(current_allocs)) self.ks_adap_mock.put.return_value = fake_requests.FakeResponse( status_code=204) consumer_uuid = uuids.consumer_uuid # this is an evacuate so we have the same resources request towards the # dest host alloc_req = { 'allocations': { uuids.dest_host: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 20, } }, }, # this allocation request comes from the conductor that read the # allocation from placement therefore it has consumer_generation in # it. "consumer_generation": 1, "project_id": uuids.project_id, "user_id": uuids.user_id } project_id = uuids.project_id user_id = uuids.user_id res = self.client.claim_resources(self.context, consumer_uuid, alloc_req, project_id, user_id, allocation_request_version='1.28') expected_url = "/allocations/%s" % consumer_uuid # we expect that both the source and dest allocations are here expected_payload = { 'allocations': { uuids.source_host: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 20 }, }, uuids.dest_host: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 20, } }, }, # report client uses the consumer_generation that it got in the # allocation request 'consumer_generation': 1, 'project_id': project_id, 'user_id': user_id} self.ks_adap_mock.put.assert_called_once_with( expected_url, microversion='1.28', json=mock.ANY, global_request_id=self.context.global_id) # We have to pull the json body from the mock call_args to validate # it separately otherwise hash seed issues get in the way. actual_payload = self.ks_adap_mock.put.call_args[1]['json'] self.assertEqual(expected_payload, actual_payload) self.assertTrue(res) def test_claim_resources_success_force_evacuate_with_shared(self): """Similar test that test_claim_resources_success_force_evacuate_no_shared but adds shared disk into the mix. """ # the source allocation is also held by the instance_uuid so report # client will see it. 
current_allocs = { 'allocations': { uuids.source_host: { 'generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, uuids.shared_storage: { 'generation': 42, 'resources': { 'DISK_GB': 20, }, }, }, "consumer_generation": 1, "project_id": uuids.project_id, "user_id": uuids.user_id } self.ks_adap_mock.get.return_value = fake_requests.FakeResponse( status_code=200, content=jsonutils.dumps(current_allocs)) self.ks_adap_mock.put.return_value = fake_requests.FakeResponse( status_code=204) consumer_uuid = uuids.consumer_uuid # this is an evacuate so we have the same resources request towards the # dest host alloc_req = { 'allocations': { uuids.dest_host: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, uuids.shared_storage: { 'generation': 42, 'resources': { 'DISK_GB': 20, }, }, }, # this allocation request comes from the conductor that read the # allocation from placement therefore it has consumer_generation in # it. "consumer_generation": 1, "project_id": uuids.project_id, "user_id": uuids.user_id } project_id = uuids.project_id user_id = uuids.user_id res = self.client.claim_resources(self.context, consumer_uuid, alloc_req, project_id, user_id, allocation_request_version='1.28') expected_url = "/allocations/%s" % consumer_uuid # we expect that both the source and dest allocations are here plus the # shared storage allocation expected_payload = { 'allocations': { uuids.source_host: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, uuids.dest_host: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, } }, uuids.shared_storage: { 'resources': { 'DISK_GB': 20, }, }, }, # report client uses the consumer_generation that it got in the # allocation request 'consumer_generation': 1, 'project_id': project_id, 'user_id': user_id} self.ks_adap_mock.put.assert_called_once_with( expected_url, microversion='1.28', json=mock.ANY, global_request_id=self.context.global_id) # We have to pull the json body from the mock call_args to validate # it separately otherwise hash seed issues get in the way. 
actual_payload = self.ks_adap_mock.put.call_args[1]['json'] self.assertEqual(expected_payload, actual_payload) self.assertTrue(res) @mock.patch('time.sleep', new=mock.Mock()) def test_claim_resources_fail_due_to_rp_generation_retry_success(self): get_resp_mock = mock.Mock(status_code=200) get_resp_mock.json.return_value = { 'allocations': {}, # build instance, not move } self.ks_adap_mock.get.return_value = get_resp_mock resp_mocks = [ fake_requests.FakeResponse( 409, jsonutils.dumps( {'errors': [ {'code': 'placement.concurrent_update', 'detail': ''}]})), fake_requests.FakeResponse(204) ] self.ks_adap_mock.put.side_effect = resp_mocks consumer_uuid = uuids.consumer_uuid alloc_req = { 'allocations': { uuids.cn1: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, } }, }, } project_id = uuids.project_id user_id = uuids.user_id res = self.client.claim_resources(self.context, consumer_uuid, alloc_req, project_id, user_id, allocation_request_version='1.28') expected_url = "/allocations/%s" % consumer_uuid expected_payload = { 'allocations': {rp_uuid: res for rp_uuid, res in alloc_req['allocations'].items()} } expected_payload['project_id'] = project_id expected_payload['user_id'] = user_id expected_payload['consumer_generation'] = None # We should have exactly two calls to the placement API that look # identical since we're retrying the same HTTP request expected_calls = [ mock.call(expected_url, microversion='1.28', json=expected_payload, global_request_id=self.context.global_id)] * 2 self.assertEqual(len(expected_calls), self.ks_adap_mock.put.call_count) self.ks_adap_mock.put.assert_has_calls(expected_calls) self.assertTrue(res) @mock.patch.object(report.LOG, 'warning') def test_claim_resources_failure(self, mock_log): get_resp_mock = mock.Mock(status_code=200) get_resp_mock.json.return_value = { 'allocations': {}, # build instance, not move } self.ks_adap_mock.get.return_value = get_resp_mock resp_mock = fake_requests.FakeResponse( 409, jsonutils.dumps( {'errors': [ {'code': 'something else', 'detail': 'not cool'}]})) self.ks_adap_mock.put.return_value = resp_mock consumer_uuid = uuids.consumer_uuid alloc_req = { 'allocations': { uuids.cn1: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, } }, }, } project_id = uuids.project_id user_id = uuids.user_id res = self.client.claim_resources(self.context, consumer_uuid, alloc_req, project_id, user_id, allocation_request_version='1.28') expected_url = "/allocations/%s" % consumer_uuid expected_payload = { 'allocations': {rp_uuid: res for rp_uuid, res in alloc_req['allocations'].items()} } expected_payload['project_id'] = project_id expected_payload['user_id'] = user_id expected_payload['consumer_generation'] = None self.ks_adap_mock.put.assert_called_once_with( expected_url, microversion='1.28', json=expected_payload, global_request_id=self.context.global_id) self.assertFalse(res) self.assertTrue(mock_log.called) def test_claim_resources_consumer_generation_failure(self): get_resp_mock = mock.Mock(status_code=200) get_resp_mock.json.return_value = { 'allocations': {}, # build instance, not move } self.ks_adap_mock.get.return_value = get_resp_mock resp_mock = fake_requests.FakeResponse( 409, jsonutils.dumps( {'errors': [ {'code': 'placement.concurrent_update', 'detail': 'consumer generation conflict'}]})) self.ks_adap_mock.put.return_value = resp_mock consumer_uuid = uuids.consumer_uuid alloc_req = { 'allocations': { uuids.cn1: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, } }, }, } project_id = uuids.project_id user_id = uuids.user_id 
self.assertRaises(exception.AllocationUpdateFailed, self.client.claim_resources, self.context, consumer_uuid, alloc_req, project_id, user_id, allocation_request_version='1.28') expected_url = "/allocations/%s" % consumer_uuid expected_payload = { 'allocations': { rp_uuid: res for rp_uuid, res in alloc_req['allocations'].items()}, 'project_id': project_id, 'user_id': user_id, 'consumer_generation': None} self.ks_adap_mock.put.assert_called_once_with( expected_url, microversion='1.28', json=expected_payload, global_request_id=self.context.global_id) def test_remove_provider_from_inst_alloc_no_shared(self): """Tests that the method which manipulates an existing doubled-up allocation for a move operation to remove the source host results in sending placement the proper payload to PUT /allocations/{consumer_uuid} call. """ get_resp_mock = mock.Mock(status_code=200) get_resp_mock.json.side_effect = [ { 'allocations': { uuids.source: { 'resource_provider_generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, uuids.destination: { 'resource_provider_generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, }, 'consumer_generation': 1, 'project_id': uuids.project_id, 'user_id': uuids.user_id, }, # the second get is for resource providers in the compute tree, # return just the compute { "resource_providers": [ { "uuid": uuids.source, }, ] }, ] self.ks_adap_mock.get.return_value = get_resp_mock resp_mock = mock.Mock(status_code=204) self.ks_adap_mock.put.return_value = resp_mock consumer_uuid = uuids.consumer_uuid project_id = uuids.project_id user_id = uuids.user_id res = self.client.remove_provider_tree_from_instance_allocation( self.context, consumer_uuid, uuids.source) expected_url = "/allocations/%s" % consumer_uuid # New allocations should only include the destination... expected_payload = { 'allocations': { uuids.destination: { 'resource_provider_generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, }, 'consumer_generation': 1, 'project_id': project_id, 'user_id': user_id } # We have to pull the json body from the mock call_args to validate # it separately otherwise hash seed issues get in the way. actual_payload = self.ks_adap_mock.put.call_args[1]['json'] self.assertEqual(expected_payload, actual_payload) self.ks_adap_mock.put.assert_called_once_with( expected_url, microversion='1.28', json=mock.ANY, global_request_id=self.context.global_id) self.assertTrue(res) def test_remove_provider_from_inst_alloc_with_shared(self): """Tests that the method which manipulates an existing doubled-up allocation with DISK_GB being consumed from a shared storage provider for a move operation to remove the source host results in sending placement the proper payload to PUT /allocations/{consumer_uuid} call. 
""" get_resp_mock = mock.Mock(status_code=200) get_resp_mock.json.side_effect = [ { 'allocations': { uuids.source: { 'resource_provider_generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, uuids.shared_storage: { 'resource_provider_generation': 42, 'resources': { 'DISK_GB': 100, }, }, uuids.destination: { 'resource_provider_generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, }, 'consumer_generation': 1, 'project_id': uuids.project_id, 'user_id': uuids.user_id, }, # the second get is for resource providers in the compute tree, # return just the compute { "resource_providers": [ { "uuid": uuids.source, }, ] }, ] self.ks_adap_mock.get.return_value = get_resp_mock resp_mock = mock.Mock(status_code=204) self.ks_adap_mock.put.return_value = resp_mock consumer_uuid = uuids.consumer_uuid project_id = uuids.project_id user_id = uuids.user_id res = self.client.remove_provider_tree_from_instance_allocation( self.context, consumer_uuid, uuids.source) expected_url = "/allocations/%s" % consumer_uuid # New allocations should only include the destination... expected_payload = { 'allocations': { uuids.shared_storage: { 'resource_provider_generation': 42, 'resources': { 'DISK_GB': 100, }, }, uuids.destination: { 'resource_provider_generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, }, 'consumer_generation': 1, 'project_id': project_id, 'user_id': user_id } # We have to pull the json body from the mock call_args to validate # it separately otherwise hash seed issues get in the way. actual_payload = self.ks_adap_mock.put.call_args[1]['json'] self.assertEqual(expected_payload, actual_payload) self.ks_adap_mock.put.assert_called_once_with( expected_url, microversion='1.28', json=mock.ANY, global_request_id=self.context.global_id) self.assertTrue(res) def test_remove_provider_from_inst_alloc_no_source(self): """Tests that if remove_provider_tree_from_instance_allocation() fails to find any allocations for the source host, it just returns True and does not attempt to rewrite the allocation for the consumer. 
""" get_resp_mock = mock.Mock(status_code=200) get_resp_mock.json.side_effect = [ # Act like the allocations already did not include the source host # for some reason { 'allocations': { uuids.shared_storage: { 'resource_provider_generation': 42, 'resources': { 'DISK_GB': 100, }, }, uuids.destination: { 'resource_provider_generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, }, 'consumer_generation': 1, 'project_id': uuids.project_id, 'user_id': uuids.user_id, }, # the second get is for resource providers in the compute tree, # return just the compute { "resource_providers": [ { "uuid": uuids.source, }, ] }, ] self.ks_adap_mock.get.return_value = get_resp_mock consumer_uuid = uuids.consumer_uuid res = self.client.remove_provider_tree_from_instance_allocation( self.context, consumer_uuid, uuids.source) self.ks_adap_mock.get.assert_called() self.ks_adap_mock.put.assert_not_called() self.assertTrue(res) def test_remove_provider_from_inst_alloc_fail_get_allocs(self): self.ks_adap_mock.get.return_value = fake_requests.FakeResponse( status_code=500) consumer_uuid = uuids.consumer_uuid self.assertRaises( exception.ConsumerAllocationRetrievalFailed, self.client.remove_provider_tree_from_instance_allocation, self.context, consumer_uuid, uuids.source) self.ks_adap_mock.get.assert_called() self.ks_adap_mock.put.assert_not_called() def test_remove_provider_from_inst_alloc_consumer_gen_conflict(self): get_resp_mock = mock.Mock(status_code=200) get_resp_mock.json.side_effect = [ { 'allocations': { uuids.source: { 'resource_provider_generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, uuids.destination: { 'resource_provider_generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, }, 'consumer_generation': 1, 'project_id': uuids.project_id, 'user_id': uuids.user_id, }, # the second get is for resource providers in the compute tree, # return just the compute { "resource_providers": [ { "uuid": uuids.source, }, ] }, ] self.ks_adap_mock.get.return_value = get_resp_mock resp_mock = mock.Mock(status_code=409) self.ks_adap_mock.put.return_value = resp_mock consumer_uuid = uuids.consumer_uuid res = self.client.remove_provider_tree_from_instance_allocation( self.context, consumer_uuid, uuids.source) self.assertFalse(res) def test_remove_provider_tree_from_inst_alloc_nested(self): self.ks_adap_mock.get.side_effect = [ fake_requests.FakeResponse( status_code=200, content=jsonutils.dumps( { 'allocations': { uuids.source_compute: { 'resource_provider_generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, uuids.source_nested: { 'resource_provider_generation': 42, 'resources': { 'CUSTOM_MAGIC': 1 }, }, uuids.destination: { 'resource_provider_generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, }, 'consumer_generation': 1, 'project_id': uuids.project_id, 'user_id': uuids.user_id, })), # the second get is for resource providers in the compute tree, # return both RPs in the source compute tree fake_requests.FakeResponse( status_code=200, content=jsonutils.dumps( { "resource_providers": [ { "uuid": uuids.source_compute, }, { "uuid": uuids.source_nested, }, ] })) ] self.ks_adap_mock.put.return_value = fake_requests.FakeResponse( status_code=204) consumer_uuid = uuids.consumer_uuid project_id = uuids.project_id user_id = uuids.user_id res = self.client.remove_provider_tree_from_instance_allocation( self.context, consumer_uuid, uuids.source_compute) expected_url = "/allocations/%s" % consumer_uuid # New allocations should only include the destination... 
expected_payload = { 'allocations': { uuids.destination: { 'resource_provider_generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, }, 'consumer_generation': 1, 'project_id': project_id, 'user_id': user_id } self.assertEqual( [ mock.call( '/allocations/%s' % consumer_uuid, global_request_id=self.context.global_id, microversion='1.28' ), mock.call( '/resource_providers?in_tree=%s' % uuids.source_compute, global_request_id=self.context.global_id, microversion='1.14' ) ], self.ks_adap_mock.get.mock_calls) # We have to pull the json body from the mock call_args to validate # it separately otherwise hash seed issues get in the way. actual_payload = self.ks_adap_mock.put.call_args[1]['json'] self.assertEqual(expected_payload, actual_payload) self.ks_adap_mock.put.assert_called_once_with( expected_url, microversion='1.28', json=mock.ANY, global_request_id=self.context.global_id) self.assertTrue(res) class TestMoveAllocations(SchedulerReportClientTestCase): def setUp(self): super(TestMoveAllocations, self).setUp() # We want to reuse the mock throughout the class, but with # different return values. patcher = mock.patch( 'nova.scheduler.client.report.SchedulerReportClient.post') self.mock_post = patcher.start() self.addCleanup(patcher.stop) self.mock_post.return_value.status_code = 204 self.rp_uuid = mock.sentinel.rp self.consumer_uuid = mock.sentinel.consumer self.data = {"MEMORY_MB": 1024} patcher = mock.patch( 'nova.scheduler.client.report.SchedulerReportClient.get') self.mock_get = patcher.start() self.addCleanup(patcher.stop) self.project_id = mock.sentinel.project_id self.user_id = mock.sentinel.user_id self.mock_post.return_value.status_code = 204 self.rp_uuid = mock.sentinel.rp self.source_consumer_uuid = mock.sentinel.source_consumer self.target_consumer_uuid = mock.sentinel.target_consumer self.source_consumer_data = { "allocations": { self.rp_uuid: { "generation": 1, "resources": { "MEMORY_MB": 1024 } } }, "consumer_generation": 2, "project_id": self.project_id, "user_id": self.user_id } self.source_rsp = mock.Mock() self.source_rsp.json.return_value = self.source_consumer_data self.target_consumer_data = { "allocations": { self.rp_uuid: { "generation": 1, "resources": { "MEMORY_MB": 2048 } } }, "consumer_generation": 1, "project_id": self.project_id, "user_id": self.user_id } self.target_rsp = mock.Mock() self.target_rsp.json.return_value = self.target_consumer_data self.mock_get.side_effect = [self.source_rsp, self.target_rsp] self.expected_url = '/allocations' self.expected_microversion = '1.28' def test_url_microversion(self): resp = self.client.move_allocations( self.context, self.source_consumer_uuid, self.target_consumer_uuid) self.assertTrue(resp) self.mock_post.assert_called_once_with( self.expected_url, mock.ANY, version=self.expected_microversion, global_request_id=self.context.global_id) def test_move_to_empty_target(self): self.target_consumer_data = {"allocations": {}} target_rsp = mock.Mock() target_rsp.json.return_value = self.target_consumer_data self.mock_get.side_effect = [self.source_rsp, target_rsp] expected_payload = { self.target_consumer_uuid: { "allocations": { self.rp_uuid: { "resources": { "MEMORY_MB": 1024 }, "generation": 1 } }, "consumer_generation": None, "project_id": self.project_id, "user_id": self.user_id, }, self.source_consumer_uuid: { "allocations": {}, "consumer_generation": 2, "project_id": self.project_id, "user_id": self.user_id, } } resp = self.client.move_allocations( self.context, self.source_consumer_uuid, 
self.target_consumer_uuid) self.assertTrue(resp) self.mock_post.assert_called_once_with( self.expected_url, expected_payload, version=self.expected_microversion, global_request_id=self.context.global_id) @mock.patch('nova.scheduler.client.report.LOG.info') def test_move_from_empty_source(self, mock_info): """Tests the case that the target has allocations but the source does not so the move_allocations method assumes the allocations were already moved and returns True without trying to POST /allocations. """ source_consumer_data = {"allocations": {}} source_rsp = mock.Mock() source_rsp.json.return_value = source_consumer_data self.mock_get.side_effect = [source_rsp, self.target_rsp] resp = self.client.move_allocations( self.context, self.source_consumer_uuid, self.target_consumer_uuid) self.assertTrue(resp) self.mock_post.assert_not_called() mock_info.assert_called_once() self.assertIn('Allocations not found for consumer', mock_info.call_args[0][0]) def test_move_to_non_empty_target(self): self.mock_get.side_effect = [self.source_rsp, self.target_rsp] expected_payload = { self.target_consumer_uuid: { "allocations": { self.rp_uuid: { "resources": { "MEMORY_MB": 1024 }, "generation": 1 } }, "consumer_generation": 1, "project_id": self.project_id, "user_id": self.user_id, }, self.source_consumer_uuid: { "allocations": {}, "consumer_generation": 2, "project_id": self.project_id, "user_id": self.user_id, } } with fixtures.EnvironmentVariable('OS_DEBUG', '1'): with nova_fixtures.StandardLogging() as stdlog: resp = self.client.move_allocations( self.context, self.source_consumer_uuid, self.target_consumer_uuid) self.assertTrue(resp) self.mock_post.assert_called_once_with( self.expected_url, expected_payload, version=self.expected_microversion, global_request_id=self.context.global_id) self.assertIn('Overwriting current allocation', stdlog.logger.output) @mock.patch('time.sleep') def test_409_concurrent_provider_update(self, mock_sleep): # there will be 1 normal call and 3 retries self.mock_get.side_effect = [self.source_rsp, self.target_rsp, self.source_rsp, self.target_rsp, self.source_rsp, self.target_rsp, self.source_rsp, self.target_rsp] rsp = fake_requests.FakeResponse( 409, jsonutils.dumps( {'errors': [ {'code': 'placement.concurrent_update', 'detail': ''}]})) self.mock_post.return_value = rsp resp = self.client.move_allocations( self.context, self.source_consumer_uuid, self.target_consumer_uuid) self.assertFalse(resp) # Post was attempted four times. 
self.assertEqual(4, self.mock_post.call_count) @mock.patch('nova.scheduler.client.report.LOG.warning') def test_not_409_failure(self, mock_log): error_message = 'placement not there' self.mock_post.return_value.status_code = 503 self.mock_post.return_value.text = error_message resp = self.client.move_allocations( self.context, self.source_consumer_uuid, self.target_consumer_uuid) self.assertFalse(resp) args, kwargs = mock_log.call_args log_message = args[0] log_args = args[1] self.assertIn('Unable to post allocations', log_message) self.assertEqual(error_message, log_args['text']) def test_409_concurrent_consumer_update(self): self.mock_post.return_value = fake_requests.FakeResponse( status_code=409, content=jsonutils.dumps( {'errors': [{'code': 'placement.concurrent_update', 'detail': 'consumer generation conflict'}]})) self.assertRaises(exception.AllocationMoveFailed, self.client.move_allocations, self.context, self.source_consumer_uuid, self.target_consumer_uuid) class TestProviderOperations(SchedulerReportClientTestCase): @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_create_resource_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_inventory') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_provider_aggregates') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_provider_traits') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_sharing_providers') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_providers_in_tree') def test_ensure_resource_provider_get(self, get_rpt_mock, get_shr_mock, get_trait_mock, get_agg_mock, get_inv_mock, create_rp_mock): # No resource provider exists in the client's cache, so validate that # if we get the resource provider from the placement API that we don't # try to create the resource provider. get_rpt_mock.return_value = [{ 'uuid': uuids.compute_node, 'name': mock.sentinel.name, 'generation': 1, }] get_inv_mock.return_value = None get_agg_mock.return_value = report.AggInfo( aggregates=set([uuids.agg1]), generation=42) get_trait_mock.return_value = report.TraitInfo( traits=set(['CUSTOM_GOLD']), generation=43) get_shr_mock.return_value = [] def assert_cache_contents(): self.assertTrue( self.client._provider_tree.exists(uuids.compute_node)) self.assertTrue( self.client._provider_tree.in_aggregates(uuids.compute_node, [uuids.agg1])) self.assertFalse( self.client._provider_tree.in_aggregates(uuids.compute_node, [uuids.agg2])) self.assertTrue( self.client._provider_tree.has_traits(uuids.compute_node, ['CUSTOM_GOLD'])) self.assertFalse( self.client._provider_tree.has_traits(uuids.compute_node, ['CUSTOM_SILVER'])) data = self.client._provider_tree.data(uuids.compute_node) self.assertEqual(43, data.generation) self.client._ensure_resource_provider(self.context, uuids.compute_node) assert_cache_contents() get_rpt_mock.assert_called_once_with(self.context, uuids.compute_node) get_agg_mock.assert_called_once_with(self.context, uuids.compute_node) get_trait_mock.assert_called_once_with(self.context, uuids.compute_node) get_shr_mock.assert_called_once_with(self.context, set([uuids.agg1])) self.assertFalse(create_rp_mock.called) # Now that the cache is populated, a subsequent call should be a no-op. 
get_rpt_mock.reset_mock() get_agg_mock.reset_mock() get_trait_mock.reset_mock() get_shr_mock.reset_mock() self.client._ensure_resource_provider(self.context, uuids.compute_node) assert_cache_contents() get_rpt_mock.assert_not_called() get_agg_mock.assert_not_called() get_trait_mock.assert_not_called() get_shr_mock.assert_not_called() create_rp_mock.assert_not_called() @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_create_resource_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_refresh_associations') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_providers_in_tree') def test_ensure_resource_provider_create_fail(self, get_rpt_mock, refresh_mock, create_rp_mock): # No resource provider exists in the client's cache, and # _create_provider raises, indicating there was an error with the # create call. Ensure we don't populate the resource provider cache get_rpt_mock.return_value = [] create_rp_mock.side_effect = exception.ResourceProviderCreationFailed( name=uuids.compute_node) self.assertRaises( exception.ResourceProviderCreationFailed, self.client._ensure_resource_provider, self.context, uuids.compute_node) get_rpt_mock.assert_called_once_with(self.context, uuids.compute_node) create_rp_mock.assert_called_once_with( self.context, uuids.compute_node, uuids.compute_node, parent_provider_uuid=None) self.assertFalse(self.client._provider_tree.exists(uuids.compute_node)) self.assertFalse(refresh_mock.called) self.assertRaises( ValueError, self.client._provider_tree.in_aggregates, uuids.compute_node, []) self.assertRaises( ValueError, self.client._provider_tree.has_traits, uuids.compute_node, []) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_create_resource_provider', return_value=None) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_refresh_associations') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_providers_in_tree') def test_ensure_resource_provider_create_no_placement(self, get_rpt_mock, refresh_mock, create_rp_mock): # No resource provider exists in the client's cache, and # @safe_connect on _create_resource_provider returns None because # Placement isn't running yet. Ensure we don't populate the resource # provider cache. get_rpt_mock.return_value = [] self.assertRaises( exception.ResourceProviderCreationFailed, self.client._ensure_resource_provider, self.context, uuids.compute_node) get_rpt_mock.assert_called_once_with(self.context, uuids.compute_node) create_rp_mock.assert_called_once_with( self.context, uuids.compute_node, uuids.compute_node, parent_provider_uuid=None) self.assertFalse(self.client._provider_tree.exists(uuids.compute_node)) refresh_mock.assert_not_called() self.assertRaises( ValueError, self.client._provider_tree.in_aggregates, uuids.compute_node, []) self.assertRaises( ValueError, self.client._provider_tree.has_traits, uuids.compute_node, []) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_create_resource_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_refresh_and_get_inventory') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_refresh_associations') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'get_providers_in_tree') def test_ensure_resource_provider_create(self, get_rpt_mock, refresh_inv_mock, refresh_assoc_mock, create_rp_mock): # No resource provider exists in the client's cache and no resource # provider was returned from the placement API, so verify that in this # case we try to create the resource provider via the placement API. get_rpt_mock.return_value = [] create_rp_mock.return_value = { 'uuid': uuids.compute_node, 'name': 'compute-name', 'generation': 1, } self.assertEqual( uuids.compute_node, self.client._ensure_resource_provider(self.context, uuids.compute_node)) self._validate_provider(uuids.compute_node, name='compute-name', generation=1, parent_uuid=None, aggregates=set(), traits=set()) # We don't refresh for a just-created provider refresh_inv_mock.assert_not_called() refresh_assoc_mock.assert_not_called() get_rpt_mock.assert_called_once_with(self.context, uuids.compute_node) create_rp_mock.assert_called_once_with( self.context, uuids.compute_node, uuids.compute_node, # name param defaults to UUID if None parent_provider_uuid=None, ) self.assertTrue(self.client._provider_tree.exists(uuids.compute_node)) create_rp_mock.reset_mock() # Validate the path where we specify a name (don't default to the UUID) self.client._ensure_resource_provider( self.context, uuids.cn2, 'a-name') create_rp_mock.assert_called_once_with( self.context, uuids.cn2, 'a-name', parent_provider_uuid=None) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_refresh_associations') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_create_resource_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_providers_in_tree') def test_ensure_resource_provider_tree(self, get_rpt_mock, create_rp_mock, refresh_mock): """Test _ensure_resource_provider with a tree of providers.""" def _create_resource_provider(context, uuid, name, parent_provider_uuid=None): """Mock side effect for creating the RP with the specified args.""" return { 'uuid': uuid, 'name': name, 'generation': 0, 'parent_provider_uuid': parent_provider_uuid } create_rp_mock.side_effect = _create_resource_provider # We at least have to simulate the part of _refresh_associations that # marks a provider as 'seen' def mocked_refresh(context, rp_uuid, **kwargs): self.client._association_refresh_time[rp_uuid] = time.time() refresh_mock.side_effect = mocked_refresh # Not initially in the placement database, so we have to create it. get_rpt_mock.return_value = [] # Create the root root = self.client._ensure_resource_provider(self.context, uuids.root) self.assertEqual(uuids.root, root) # Now create a child child1 = self.client._ensure_resource_provider( self.context, uuids.child1, name='junior', parent_provider_uuid=uuids.root) self.assertEqual(uuids.child1, child1) # If we re-ensure the child, we get the object from the tree, not a # newly-created one - i.e. the early .find() works like it should. 
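        # Hedged note: assertIs (identity, not mere equality) is used below
        # because a cache hit should hand back the same object that the
        # first call returned; a second trip to placement would instead be
        # caught by the assert_not_called() checks at the end of this test.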
self.assertIs(child1, self.client._ensure_resource_provider(self.context, uuids.child1)) # Make sure we can create a grandchild grandchild = self.client._ensure_resource_provider( self.context, uuids.grandchild, parent_provider_uuid=uuids.child1) self.assertEqual(uuids.grandchild, grandchild) # Now create a second child of the root and make sure it doesn't wind # up in some crazy wrong place like under child1 or grandchild child2 = self.client._ensure_resource_provider( self.context, uuids.child2, parent_provider_uuid=uuids.root) self.assertEqual(uuids.child2, child2) all_rp_uuids = [uuids.root, uuids.child1, uuids.child2, uuids.grandchild] # At this point we should get all the providers. self.assertEqual( set(all_rp_uuids), set(self.client._provider_tree.get_provider_uuids())) # And now _ensure is a no-op because everything is cached get_rpt_mock.reset_mock() create_rp_mock.reset_mock() refresh_mock.reset_mock() for rp_uuid in all_rp_uuids: self.client._ensure_resource_provider(self.context, rp_uuid) get_rpt_mock.assert_not_called() create_rp_mock.assert_not_called() refresh_mock.assert_not_called() @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_providers_in_tree') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_refresh_associations') def test_ensure_resource_provider_refresh_fetch(self, mock_ref_assoc, mock_gpit): """Make sure refreshes are called with the appropriate UUIDs and flags when we fetch the provider tree from placement. """ tree_uuids = set([uuids.root, uuids.one, uuids.two]) mock_gpit.return_value = [{'uuid': u, 'name': u, 'generation': 42} for u in tree_uuids] self.assertEqual(uuids.root, self.client._ensure_resource_provider(self.context, uuids.root)) mock_gpit.assert_called_once_with(self.context, uuids.root) mock_ref_assoc.assert_has_calls( [mock.call(self.context, uuid, force=True) for uuid in tree_uuids]) self.assertEqual(tree_uuids, set(self.client._provider_tree.get_provider_uuids())) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_providers_in_tree') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_create_resource_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'_refresh_associations') def test_ensure_resource_provider_refresh_create(self, mock_refresh, mock_create, mock_gpit): """Make sure refresh is not called when we create the RP.""" mock_gpit.return_value = [] mock_create.return_value = {'name': 'cn', 'uuid': uuids.cn, 'generation': 42} self.assertEqual(uuids.root, self.client._ensure_resource_provider(self.context, uuids.root)) mock_gpit.assert_called_once_with(self.context, uuids.root) mock_create.assert_called_once_with(self.context, uuids.root, uuids.root, parent_provider_uuid=None) mock_refresh.assert_not_called() self.assertEqual([uuids.cn], self.client._provider_tree.get_provider_uuids()) def test_get_allocation_candidates(self): resp_mock = mock.Mock(status_code=200) json_data = { 'allocation_requests': mock.sentinel.alloc_reqs, 'provider_summaries': mock.sentinel.p_sums, } flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={ 'resources:VCPU': '1', 'resources:MEMORY_MB': '1024', 'trait:HW_CPU_X86_AVX': 'required', 'trait:CUSTOM_TRAIT1': 'required', 'trait:CUSTOM_TRAIT2': 'preferred', 'trait:CUSTOM_TRAIT3': 'forbidden', 'trait:CUSTOM_TRAIT4': 'forbidden', 'resources_DISK:DISK_GB': '30', 'trait_DISK:STORAGE_DISK_SSD': 'required', 'resources2:VGPU': '2', 'trait2:HW_GPU_RESOLUTION_W2560H1600': 'required', 'trait2:HW_GPU_API_VULKAN': 'required', 'resources_NET:SRIOV_NET_VF': '1', 'resources_NET:CUSTOM_NET_EGRESS_BYTES_SEC': '125000', 'group_policy': 'isolate', # These are ignored because misspelled, bad value, etc. 'resources*2:CUSTOM_WIDGET': '123', 'trait:HW_NIC_OFFLOAD_LRO': 'preferred', 'group_policy3': 'none', }) req_spec = objects.RequestSpec(flavor=flavor, is_bfv=False) resources = scheduler_utils.ResourceRequest(req_spec) resources.get_request_group(None).aggregates = [ ['agg1', 'agg2', 'agg3'], ['agg1', 'agg2']] forbidden_aggs = set(['agg1', 'agg5', 'agg6']) resources.get_request_group(None).forbidden_aggregates = forbidden_aggs expected_path = '/allocation_candidates' expected_query = [ ('group_policy', 'isolate'), ('limit', '1000'), ('member_of', '!in:agg1,agg5,agg6'), ('member_of', 'in:agg1,agg2'), ('member_of', 'in:agg1,agg2,agg3'), ('required', 'CUSTOM_TRAIT1,HW_CPU_X86_AVX,!CUSTOM_TRAIT3,' '!CUSTOM_TRAIT4'), ('required2', 'HW_GPU_API_VULKAN,HW_GPU_RESOLUTION_W2560H1600'), ('required_DISK', 'STORAGE_DISK_SSD'), ('resources', 'MEMORY_MB:1024,VCPU:1'), ('resources2', 'VGPU:2'), ('resources_DISK', 'DISK_GB:30'), ('resources_NET', 'CUSTOM_NET_EGRESS_BYTES_SEC:125000,SRIOV_NET_VF:1') ] resp_mock.json.return_value = json_data self.ks_adap_mock.get.return_value = resp_mock alloc_reqs, p_sums, allocation_request_version = ( self.client.get_allocation_candidates(self.context, resources)) url = self.ks_adap_mock.get.call_args[0][0] split_url = parse.urlsplit(url) query = parse.parse_qsl(split_url.query) self.assertEqual(expected_path, split_url.path) self.assertEqual(expected_query, query) expected_url = '/allocation_candidates?%s' % parse.urlencode( expected_query) self.ks_adap_mock.get.assert_called_once_with( expected_url, microversion='1.35', global_request_id=self.context.global_id) self.assertEqual(mock.sentinel.alloc_reqs, alloc_reqs) self.assertEqual(mock.sentinel.p_sums, p_sums) def test_get_ac_no_trait_bogus_group_policy_custom_limit(self): self.flags(max_placement_results=42, group='scheduler') resp_mock = mock.Mock(status_code=200) json_data = { 'allocation_requests': mock.sentinel.alloc_reqs, 'provider_summaries': mock.sentinel.p_sums, } flavor = objects.Flavor( vcpus=1, 
memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={ 'resources:VCPU': '1', 'resources:MEMORY_MB': '1024', 'resources1:DISK_GB': '30', 'group_policy': 'bogus', }) req_spec = objects.RequestSpec(flavor=flavor, is_bfv=False) resources = scheduler_utils.ResourceRequest(req_spec) expected_path = '/allocation_candidates' expected_query = [ ('limit', '42'), ('resources', 'MEMORY_MB:1024,VCPU:1'), ('resources1', 'DISK_GB:30'), ] resp_mock.json.return_value = json_data self.ks_adap_mock.get.return_value = resp_mock alloc_reqs, p_sums, allocation_request_version = ( self.client.get_allocation_candidates(self.context, resources)) url = self.ks_adap_mock.get.call_args[0][0] split_url = parse.urlsplit(url) query = parse.parse_qsl(split_url.query) self.assertEqual(expected_path, split_url.path) self.assertEqual(expected_query, query) expected_url = '/allocation_candidates?%s' % parse.urlencode( expected_query) self.assertEqual(mock.sentinel.alloc_reqs, alloc_reqs) self.ks_adap_mock.get.assert_called_once_with( expected_url, microversion='1.35', global_request_id=self.context.global_id) self.assertEqual(mock.sentinel.p_sums, p_sums) def test_get_allocation_candidates_not_found(self): # Ensure _get_resource_provider() just returns None when the placement # API doesn't find a resource provider matching a UUID resp_mock = mock.Mock(status_code=404) self.ks_adap_mock.get.return_value = resp_mock expected_path = '/allocation_candidates' expected_query = { 'resources': ['DISK_GB:15,MEMORY_MB:1024,VCPU:1'], 'limit': ['100'] } # Make sure we're also honoring the configured limit self.flags(max_placement_results=100, group='scheduler') flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0) req_spec = objects.RequestSpec(flavor=flavor, is_bfv=False) resources = scheduler_utils.ResourceRequest(req_spec) res = self.client.get_allocation_candidates(self.context, resources) self.ks_adap_mock.get.assert_called_once_with( mock.ANY, microversion='1.35', global_request_id=self.context.global_id) url = self.ks_adap_mock.get.call_args[0][0] split_url = parse.urlsplit(url) query = parse.parse_qs(split_url.query) self.assertEqual(expected_path, split_url.path) self.assertEqual(expected_query, query) self.assertIsNone(res[0]) def test_get_resource_provider_found(self): # Ensure _get_resource_provider() returns a dict of resource provider # if it finds a resource provider record from the placement API uuid = uuids.compute_node resp_mock = mock.Mock(status_code=200) json_data = { 'uuid': uuid, 'name': uuid, 'generation': 42, 'parent_provider_uuid': None, } resp_mock.json.return_value = json_data self.ks_adap_mock.get.return_value = resp_mock result = self.client._get_resource_provider(self.context, uuid) expected_provider_dict = dict( uuid=uuid, name=uuid, generation=42, parent_provider_uuid=None, ) expected_url = '/resource_providers/' + uuid self.ks_adap_mock.get.assert_called_once_with( expected_url, microversion='1.14', global_request_id=self.context.global_id) self.assertEqual(expected_provider_dict, result) def test_get_resource_provider_not_found(self): # Ensure _get_resource_provider() just returns None when the placement # API doesn't find a resource provider matching a UUID resp_mock = mock.Mock(status_code=404) self.ks_adap_mock.get.return_value = resp_mock uuid = uuids.compute_node result = self.client._get_resource_provider(self.context, uuid) expected_url = '/resource_providers/' + uuid self.ks_adap_mock.get.assert_called_once_with( expected_url, microversion='1.14', 
global_request_id=self.context.global_id) self.assertIsNone(result) @mock.patch.object(report.LOG, 'error') def test_get_resource_provider_error(self, logging_mock): # Ensure _get_resource_provider() sets the error flag when trying to # communicate with the placement API and not getting an error we can # deal with resp_mock = mock.Mock(status_code=503) self.ks_adap_mock.get.return_value = resp_mock self.ks_adap_mock.get.return_value.headers = { 'x-openstack-request-id': uuids.request_id} uuid = uuids.compute_node self.assertRaises( exception.ResourceProviderRetrievalFailed, self.client._get_resource_provider, self.context, uuid) expected_url = '/resource_providers/' + uuid self.ks_adap_mock.get.assert_called_once_with( expected_url, microversion='1.14', global_request_id=self.context.global_id) # A 503 Service Unavailable should trigger an error log that # includes the placement request id and return None # from _get_resource_provider() self.assertTrue(logging_mock.called) self.assertEqual(uuids.request_id, logging_mock.call_args[0][1]['placement_req_id']) def test_get_sharing_providers(self): resp_mock = mock.Mock(status_code=200) rpjson = [ { 'uuid': uuids.sharing1, 'name': 'bandwidth_provider', 'generation': 42, 'parent_provider_uuid': None, 'root_provider_uuid': None, 'links': [], }, { 'uuid': uuids.sharing2, 'name': 'storage_provider', 'generation': 42, 'parent_provider_uuid': None, 'root_provider_uuid': None, 'links': [], }, ] resp_mock.json.return_value = {'resource_providers': rpjson} self.ks_adap_mock.get.return_value = resp_mock result = self.client._get_sharing_providers( self.context, [uuids.agg1, uuids.agg2]) expected_url = ('/resource_providers?member_of=in:' + ','.join((uuids.agg1, uuids.agg2)) + '&required=MISC_SHARES_VIA_AGGREGATE') self.ks_adap_mock.get.assert_called_once_with( expected_url, microversion='1.18', global_request_id=self.context.global_id) self.assertEqual(rpjson, result) def test_get_sharing_providers_emptylist(self): self.assertEqual( [], self.client._get_sharing_providers(self.context, [])) self.ks_adap_mock.get.assert_not_called() @mock.patch.object(report.LOG, 'error') def test_get_sharing_providers_error(self, logging_mock): # Ensure _get_sharing_providers() logs an error and raises if the # placement API call doesn't respond 200 resp_mock = mock.Mock(status_code=503) self.ks_adap_mock.get.return_value = resp_mock self.ks_adap_mock.get.return_value.headers = { 'x-openstack-request-id': uuids.request_id} uuid = uuids.agg self.assertRaises(exception.ResourceProviderRetrievalFailed, self.client._get_sharing_providers, self.context, [uuid]) expected_url = ('/resource_providers?member_of=in:' + uuid + '&required=MISC_SHARES_VIA_AGGREGATE') self.ks_adap_mock.get.assert_called_once_with( expected_url, microversion='1.18', global_request_id=self.context.global_id) # A 503 Service Unavailable should trigger an error log that # includes the placement request id self.assertTrue(logging_mock.called) self.assertEqual(uuids.request_id, logging_mock.call_args[0][1]['placement_req_id']) def test_get_providers_in_tree(self): # Ensure get_providers_in_tree() returns a list of resource # provider dicts if it finds a resource provider record from the # placement API root = uuids.compute_node child = uuids.child resp_mock = mock.Mock(status_code=200) rpjson = [ { 'uuid': root, 'name': 'daddy', 'generation': 42, 'parent_provider_uuid': None, }, { 'uuid': child, 'name': 'junior', 'generation': 42, 'parent_provider_uuid': root, }, ] resp_mock.json.return_value = 
{'resource_providers': rpjson} self.ks_adap_mock.get.return_value = resp_mock result = self.client.get_providers_in_tree(self.context, root) expected_url = '/resource_providers?in_tree=' + root self.ks_adap_mock.get.assert_called_once_with( expected_url, microversion='1.14', global_request_id=self.context.global_id) self.assertEqual(rpjson, result) @mock.patch.object(report.LOG, 'error') def test_get_providers_in_tree_error(self, logging_mock): # Ensure get_providers_in_tree() logs an error and raises if the # placement API call doesn't respond 200 resp_mock = mock.Mock(status_code=503) self.ks_adap_mock.get.return_value = resp_mock self.ks_adap_mock.get.return_value.headers = { 'x-openstack-request-id': 'req-' + uuids.request_id} uuid = uuids.compute_node self.assertRaises(exception.ResourceProviderRetrievalFailed, self.client.get_providers_in_tree, self.context, uuid) expected_url = '/resource_providers?in_tree=' + uuid self.ks_adap_mock.get.assert_called_once_with( expected_url, microversion='1.14', global_request_id=self.context.global_id) # A 503 Service Unavailable should trigger an error log that includes # the placement request id self.assertTrue(logging_mock.called) self.assertEqual('req-' + uuids.request_id, logging_mock.call_args[0][1]['placement_req_id']) def test_get_providers_in_tree_ksa_exc(self): self.ks_adap_mock.get.side_effect = ks_exc.EndpointNotFound() self.assertRaises( ks_exc.ClientException, self.client.get_providers_in_tree, self.context, uuids.whatever) def test_create_resource_provider(self): """Test that _create_resource_provider() sends a dict of resource provider information without a parent provider UUID. """ uuid = uuids.compute_node name = 'computehost' resp_mock = mock.Mock(status_code=200) self.ks_adap_mock.post.return_value = resp_mock self.assertEqual( resp_mock.json.return_value, self.client._create_resource_provider(self.context, uuid, name)) expected_payload = { 'uuid': uuid, 'name': name, } expected_url = '/resource_providers' self.ks_adap_mock.post.assert_called_once_with( expected_url, json=expected_payload, microversion='1.20', global_request_id=self.context.global_id) def test_create_resource_provider_with_parent(self): """Test that when specifying a parent provider UUID, that the parent_provider_uuid part of the payload is properly specified. """ parent_uuid = uuids.parent uuid = uuids.compute_node name = 'computehost' resp_mock = mock.Mock(status_code=200) self.ks_adap_mock.post.return_value = resp_mock self.assertEqual( resp_mock.json.return_value, self.client._create_resource_provider( self.context, uuid, name, parent_provider_uuid=parent_uuid, ) ) expected_payload = { 'uuid': uuid, 'name': name, 'parent_provider_uuid': parent_uuid, } expected_url = '/resource_providers' self.ks_adap_mock.post.assert_called_once_with( expected_url, json=expected_payload, microversion='1.20', global_request_id=self.context.global_id) @mock.patch.object(report.LOG, 'info') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_resource_provider') def test_create_resource_provider_concurrent_create(self, get_rp_mock, logging_mock): # Ensure _create_resource_provider() returns a dict of resource # provider gotten from _get_resource_provider() if the call to create # the resource provider in the placement API returned a 409 Conflict, # indicating another thread concurrently created the resource provider # record. 
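        # Hedged note: the 409 body used below deliberately does not look
        # like a name conflict; the name-conflict flavour of a 409 (which
        # raises instead of falling back to _get_resource_provider()) is
        # covered separately by test_create_resource_provider_name_conflict.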
        uuid = uuids.compute_node
        name = 'computehost'
        self.ks_adap_mock.post.return_value = fake_requests.FakeResponse(
            409, content='not a name conflict',
            headers={'x-openstack-request-id': uuids.request_id})
        get_rp_mock.return_value = mock.sentinel.get_rp

        result = self.client._create_resource_provider(self.context, uuid,
                                                       name)

        expected_payload = {
            'uuid': uuid,
            'name': name,
        }
        expected_url = '/resource_providers'
        self.ks_adap_mock.post.assert_called_once_with(
            expected_url, json=expected_payload, microversion='1.20',
            global_request_id=self.context.global_id)
        self.assertEqual(mock.sentinel.get_rp, result)
        # The 409 response will produce a message to the info log.
        self.assertTrue(logging_mock.called)
        self.assertEqual(uuids.request_id,
                         logging_mock.call_args[0][1]['placement_req_id'])

    def test_create_resource_provider_name_conflict(self):
        # When the API call to create the resource provider fails 409 with a
        # name conflict, we raise an exception.
        self.ks_adap_mock.post.return_value = fake_requests.FakeResponse(
            409, content='Conflicting resource provider name: foo '
                         'already exists.')

        self.assertRaises(
            exception.ResourceProviderCreationFailed,
            self.client._create_resource_provider, self.context,
            uuids.compute_node, 'foo')

    @mock.patch.object(report.LOG, 'error')
    def test_create_resource_provider_error(self, logging_mock):
        # Ensure _create_resource_provider() raises when the call to the
        # placement API returns an error we cannot deal with.
        uuid = uuids.compute_node
        name = 'computehost'
        self.ks_adap_mock.post.return_value = fake_requests.FakeResponse(
            503, headers={'x-openstack-request-id': uuids.request_id})

        self.assertRaises(
            exception.ResourceProviderCreationFailed,
            self.client._create_resource_provider, self.context, uuid, name)

        expected_payload = {
            'uuid': uuid,
            'name': name,
        }
        expected_url = '/resource_providers'
        self.ks_adap_mock.post.assert_called_once_with(
            expected_url, json=expected_payload, microversion='1.20',
            global_request_id=self.context.global_id)
        # A 503 Service Unavailable should log an error that includes the
        # placement request id, and _create_resource_provider() should raise
        # ResourceProviderCreationFailed.
        self.assertTrue(logging_mock.called)
        self.assertEqual(uuids.request_id,
                         logging_mock.call_args[0][1]['placement_req_id'])

    def test_put_empty(self):
        # A simple put with an empty (not None) payload should send the empty
        # payload through.
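        # Hedged note: an empty list is a meaningful payload here (for
        # example, clearing a provider's aggregates) and must be serialized
        # as json=[] rather than being dropped; only a payload of None is
        # assumed to mean "send no body".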
# Bug #1744786 url = '/resource_providers/%s/aggregates' % uuids.foo self.client.put(url, []) self.ks_adap_mock.put.assert_called_once_with( url, json=[], microversion=None, global_request_id=None) def test_delete_provider(self): delete_mock = fake_requests.FakeResponse(None) self.ks_adap_mock.delete.return_value = delete_mock for status_code in (204, 404): delete_mock.status_code = status_code # Seed the caches self.client._provider_tree.new_root('compute', uuids.root, generation=0) self.client._association_refresh_time[uuids.root] = 1234 self.client._delete_provider(uuids.root, global_request_id='gri') self.ks_adap_mock.delete.assert_called_once_with( '/resource_providers/' + uuids.root, global_request_id='gri', microversion=None) self.assertFalse(self.client._provider_tree.exists(uuids.root)) self.assertNotIn(uuids.root, self.client._association_refresh_time) self.ks_adap_mock.delete.reset_mock() def test_delete_provider_fail(self): delete_mock = fake_requests.FakeResponse(None) self.ks_adap_mock.delete.return_value = delete_mock resp_exc_map = {409: exception.ResourceProviderInUse, 503: exception.ResourceProviderDeletionFailed} for status_code, exc in resp_exc_map.items(): delete_mock.status_code = status_code self.assertRaises(exc, self.client._delete_provider, uuids.root) self.ks_adap_mock.delete.assert_called_once_with( '/resource_providers/' + uuids.root, microversion=None, global_request_id=None) self.ks_adap_mock.delete.reset_mock() def test_set_aggregates_for_provider(self): aggs = [uuids.agg1, uuids.agg2] self.ks_adap_mock.put.return_value = fake_requests.FakeResponse( 200, content=jsonutils.dumps({ 'aggregates': aggs, 'resource_provider_generation': 1})) # Prime the provider tree cache self.client._provider_tree.new_root('rp', uuids.rp, generation=0) self.assertEqual(set(), self.client._provider_tree.data(uuids.rp).aggregates) self.client.set_aggregates_for_provider(self.context, uuids.rp, aggs) exp_payload = {'aggregates': aggs, 'resource_provider_generation': 0} self.ks_adap_mock.put.assert_called_once_with( '/resource_providers/%s/aggregates' % uuids.rp, json=exp_payload, microversion='1.19', global_request_id=self.context.global_id) # Cache was updated ptree_data = self.client._provider_tree.data(uuids.rp) self.assertEqual(set(aggs), ptree_data.aggregates) self.assertEqual(1, ptree_data.generation) def test_set_aggregates_for_provider_bad_args(self): self.assertRaises(ValueError, self.client.set_aggregates_for_provider, self.context, uuids.rp, {}, use_cache=False) self.assertRaises(ValueError, self.client.set_aggregates_for_provider, self.context, uuids.rp, {}, use_cache=False, generation=None) def test_set_aggregates_for_provider_fail(self): self.ks_adap_mock.put.return_value = fake_requests.FakeResponse(503) # Prime the provider tree cache self.client._provider_tree.new_root('rp', uuids.rp, generation=0) self.assertRaises( exception.ResourceProviderUpdateFailed, self.client.set_aggregates_for_provider, self.context, uuids.rp, [uuids.agg]) # The cache wasn't updated self.assertEqual(set(), self.client._provider_tree.data(uuids.rp).aggregates) def test_set_aggregates_for_provider_conflict(self): # Prime the provider tree cache self.client._provider_tree.new_root('rp', uuids.rp, generation=0) self.ks_adap_mock.put.return_value = fake_requests.FakeResponse(409) self.assertRaises( exception.ResourceProviderUpdateConflict, self.client.set_aggregates_for_provider, self.context, uuids.rp, [uuids.agg]) # The cache was invalidated self.assertNotIn(uuids.rp, 
self.client._provider_tree.get_provider_uuids()) self.assertNotIn(uuids.rp, self.client._association_refresh_time) def test_set_aggregates_for_provider_short_circuit(self): """No-op when aggregates have not changed.""" # Prime the provider tree cache self.client._provider_tree.new_root('rp', uuids.rp, generation=7) self.client.set_aggregates_for_provider(self.context, uuids.rp, []) self.ks_adap_mock.put.assert_not_called() def test_set_aggregates_for_provider_no_short_circuit(self): """Don't short-circuit if generation doesn't match, even if aggs have not changed. """ # Prime the provider tree cache self.client._provider_tree.new_root('rp', uuids.rp, generation=2) self.ks_adap_mock.put.return_value = fake_requests.FakeResponse( 200, content=jsonutils.dumps({ 'aggregates': [], 'resource_provider_generation': 5})) self.client.set_aggregates_for_provider(self.context, uuids.rp, [], generation=4) exp_payload = {'aggregates': [], 'resource_provider_generation': 4} self.ks_adap_mock.put.assert_called_once_with( '/resource_providers/%s/aggregates' % uuids.rp, json=exp_payload, microversion='1.19', global_request_id=self.context.global_id) # Cache was updated ptree_data = self.client._provider_tree.data(uuids.rp) self.assertEqual(set(), ptree_data.aggregates) self.assertEqual(5, ptree_data.generation) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_resource_provider', return_value=mock.NonCallableMock) def test_get_resource_provider_name_from_cache(self, mock_placement_get): expected_name = 'rp' self.client._provider_tree.new_root( expected_name, uuids.rp, generation=0) actual_name = self.client.get_resource_provider_name( self.context, uuids.rp) self.assertEqual(expected_name, actual_name) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_resource_provider') def test_get_resource_provider_name_from_placement( self, mock_placement_get): expected_name = 'rp' mock_placement_get.return_value = { 'uuid': uuids.rp, 'name': expected_name } actual_name = self.client.get_resource_provider_name( self.context, uuids.rp) self.assertEqual(expected_name, actual_name) mock_placement_get.assert_called_once_with(self.context, uuids.rp) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_resource_provider') def test_get_resource_provider_name_rp_not_found_in_placement( self, mock_placement_get): mock_placement_get.side_effect = \ exception.ResourceProviderNotFound(uuids.rp) self.assertRaises( exception.ResourceProviderNotFound, self.client.get_resource_provider_name, self.context, uuids.rp) mock_placement_get.assert_called_once_with(self.context, uuids.rp) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'_get_resource_provider') def test_get_resource_provider_name_placement_unavailable( self, mock_placement_get): mock_placement_get.side_effect = \ exception.ResourceProviderRetrievalFailed(uuid=uuids.rp) self.assertRaises( exception.ResourceProviderRetrievalFailed, self.client.get_resource_provider_name, self.context, uuids.rp) class TestAggregates(SchedulerReportClientTestCase): def test_get_provider_aggregates_found(self): uuid = uuids.compute_node resp_mock = mock.Mock(status_code=200) aggs = [ uuids.agg1, uuids.agg2, ] resp_mock.json.return_value = {'aggregates': aggs, 'resource_provider_generation': 42} self.ks_adap_mock.get.return_value = resp_mock result, gen = self.client._get_provider_aggregates(self.context, uuid) expected_url = '/resource_providers/' + uuid + '/aggregates' self.ks_adap_mock.get.assert_called_once_with( expected_url, microversion='1.19', global_request_id=self.context.global_id) self.assertEqual(set(aggs), result) self.assertEqual(42, gen) @mock.patch.object(report.LOG, 'error') def test_get_provider_aggregates_error(self, log_mock): """Test that when the placement API returns any error when looking up a provider's aggregates, we raise an exception. """ uuid = uuids.compute_node resp_mock = mock.Mock(headers={ 'x-openstack-request-id': uuids.request_id}) self.ks_adap_mock.get.return_value = resp_mock for status_code in (400, 404, 503): resp_mock.status_code = status_code self.assertRaises( exception.ResourceProviderAggregateRetrievalFailed, self.client._get_provider_aggregates, self.context, uuid) expected_url = '/resource_providers/' + uuid + '/aggregates' self.ks_adap_mock.get.assert_called_once_with( expected_url, microversion='1.19', global_request_id=self.context.global_id) self.assertTrue(log_mock.called) self.assertEqual(uuids.request_id, log_mock.call_args[0][1]['placement_req_id']) self.ks_adap_mock.get.reset_mock() log_mock.reset_mock() class TestTraits(SchedulerReportClientTestCase): trait_api_kwargs = {'microversion': '1.6'} def test_get_provider_traits_found(self): uuid = uuids.compute_node resp_mock = mock.Mock(status_code=200) traits = [ 'CUSTOM_GOLD', 'CUSTOM_SILVER', ] resp_mock.json.return_value = {'traits': traits, 'resource_provider_generation': 42} self.ks_adap_mock.get.return_value = resp_mock result, gen = self.client.get_provider_traits(self.context, uuid) expected_url = '/resource_providers/' + uuid + '/traits' self.ks_adap_mock.get.assert_called_once_with( expected_url, global_request_id=self.context.global_id, **self.trait_api_kwargs) self.assertEqual(set(traits), result) self.assertEqual(42, gen) @mock.patch.object(report.LOG, 'error') def test_get_provider_traits_error(self, log_mock): """Test that when the placement API returns any error when looking up a provider's traits, we raise an exception. 
""" uuid = uuids.compute_node resp_mock = mock.Mock(headers={ 'x-openstack-request-id': uuids.request_id}) self.ks_adap_mock.get.return_value = resp_mock for status_code in (400, 404, 503): resp_mock.status_code = status_code self.assertRaises( exception.ResourceProviderTraitRetrievalFailed, self.client.get_provider_traits, self.context, uuid) expected_url = '/resource_providers/' + uuid + '/traits' self.ks_adap_mock.get.assert_called_once_with( expected_url, global_request_id=self.context.global_id, **self.trait_api_kwargs) self.assertTrue(log_mock.called) self.assertEqual(uuids.request_id, log_mock.call_args[0][1]['placement_req_id']) self.ks_adap_mock.get.reset_mock() log_mock.reset_mock() def test_get_provider_traits_placement_comm_error(self): """ksa ClientException raises through.""" uuid = uuids.compute_node self.ks_adap_mock.get.side_effect = ks_exc.EndpointNotFound() self.assertRaises(ks_exc.ClientException, self.client.get_provider_traits, self.context, uuid) expected_url = '/resource_providers/' + uuid + '/traits' self.ks_adap_mock.get.assert_called_once_with( expected_url, global_request_id=self.context.global_id, **self.trait_api_kwargs) def test_ensure_traits(self): """Successful paths, various permutations of traits existing or needing to be created. """ standard_traits = ['HW_NIC_OFFLOAD_UCS', 'HW_NIC_OFFLOAD_RDMA'] custom_traits = ['CUSTOM_GOLD', 'CUSTOM_SILVER'] all_traits = standard_traits + custom_traits get_mock = mock.Mock(status_code=200) self.ks_adap_mock.get.return_value = get_mock # Request all traits; custom traits need to be created get_mock.json.return_value = {'traits': standard_traits} self.client._ensure_traits(self.context, all_traits) self.ks_adap_mock.get.assert_called_once_with( '/traits?name=in:' + ','.join(all_traits), global_request_id=self.context.global_id, **self.trait_api_kwargs) self.ks_adap_mock.put.assert_has_calls( [mock.call('/traits/' + trait, global_request_id=self.context.global_id, json=None, **self.trait_api_kwargs) for trait in custom_traits], any_order=True) self.ks_adap_mock.reset_mock() # Request standard traits; no traits need to be created get_mock.json.return_value = {'traits': standard_traits} self.client._ensure_traits(self.context, standard_traits) self.ks_adap_mock.get.assert_called_once_with( '/traits?name=in:' + ','.join(standard_traits), global_request_id=self.context.global_id, **self.trait_api_kwargs) self.ks_adap_mock.put.assert_not_called() self.ks_adap_mock.reset_mock() # Request no traits - short circuit self.client._ensure_traits(self.context, None) self.client._ensure_traits(self.context, []) self.ks_adap_mock.get.assert_not_called() self.ks_adap_mock.put.assert_not_called() def test_ensure_traits_fail_retrieval(self): self.ks_adap_mock.get.return_value = mock.Mock(status_code=400) self.assertRaises(exception.TraitRetrievalFailed, self.client._ensure_traits, self.context, ['FOO']) self.ks_adap_mock.get.assert_called_once_with( '/traits?name=in:FOO', global_request_id=self.context.global_id, **self.trait_api_kwargs) self.ks_adap_mock.put.assert_not_called() def test_ensure_traits_fail_creation(self): get_mock = mock.Mock(status_code=200) get_mock.json.return_value = {'traits': []} self.ks_adap_mock.get.return_value = get_mock self.ks_adap_mock.put.return_value = fake_requests.FakeResponse(400) self.assertRaises(exception.TraitCreationFailed, self.client._ensure_traits, self.context, ['FOO']) self.ks_adap_mock.get.assert_called_once_with( '/traits?name=in:FOO', global_request_id=self.context.global_id, 
**self.trait_api_kwargs) self.ks_adap_mock.put.assert_called_once_with( '/traits/FOO', global_request_id=self.context.global_id, json=None, **self.trait_api_kwargs) def test_set_traits_for_provider(self): traits = ['HW_NIC_OFFLOAD_UCS', 'HW_NIC_OFFLOAD_RDMA'] # Make _ensure_traits succeed without PUTting get_mock = mock.Mock(status_code=200) get_mock.json.return_value = {'traits': traits} self.ks_adap_mock.get.return_value = get_mock # Prime the provider tree cache self.client._provider_tree.new_root('rp', uuids.rp, generation=0) # Mock the /rp/{u}/traits PUT to succeed put_mock = mock.Mock(status_code=200) put_mock.json.return_value = {'traits': traits, 'resource_provider_generation': 1} self.ks_adap_mock.put.return_value = put_mock # Invoke self.client.set_traits_for_provider(self.context, uuids.rp, traits) # Verify API calls self.ks_adap_mock.get.assert_called_once_with( '/traits?name=in:' + ','.join(traits), global_request_id=self.context.global_id, **self.trait_api_kwargs) self.ks_adap_mock.put.assert_called_once_with( '/resource_providers/%s/traits' % uuids.rp, json={'traits': traits, 'resource_provider_generation': 0}, global_request_id=self.context.global_id, **self.trait_api_kwargs) # And ensure the provider tree cache was updated appropriately self.assertFalse( self.client._provider_tree.have_traits_changed(uuids.rp, traits)) # Validate the generation self.assertEqual( 1, self.client._provider_tree.data(uuids.rp).generation) def test_set_traits_for_provider_fail(self): traits = ['HW_NIC_OFFLOAD_UCS', 'HW_NIC_OFFLOAD_RDMA'] get_mock = mock.Mock() self.ks_adap_mock.get.return_value = get_mock # Prime the provider tree cache self.client._provider_tree.new_root('rp', uuids.rp, generation=0) # _ensure_traits exception bubbles up get_mock.status_code = 400 self.assertRaises( exception.TraitRetrievalFailed, self.client.set_traits_for_provider, self.context, uuids.rp, traits) self.ks_adap_mock.put.assert_not_called() get_mock.status_code = 200 get_mock.json.return_value = {'traits': traits} # Conflict self.ks_adap_mock.put.return_value = mock.Mock(status_code=409) self.assertRaises( exception.ResourceProviderUpdateConflict, self.client.set_traits_for_provider, self.context, uuids.rp, traits) # Other error self.ks_adap_mock.put.return_value = mock.Mock(status_code=503) self.assertRaises( exception.ResourceProviderUpdateFailed, self.client.set_traits_for_provider, self.context, uuids.rp, traits) class TestAssociations(SchedulerReportClientTestCase): def setUp(self): super(TestAssociations, self).setUp() self.mock_get_inv = self.useFixture(fixtures.MockPatch( 'nova.scheduler.client.report.SchedulerReportClient.' '_get_inventory')).mock self.inv = { 'VCPU': {'total': 16}, 'MEMORY_MB': {'total': 1024}, 'DISK_GB': {'total': 10}, } self.mock_get_inv.return_value = { 'resource_provider_generation': 41, 'inventories': self.inv, } self.mock_get_aggs = self.useFixture(fixtures.MockPatch( 'nova.scheduler.client.report.SchedulerReportClient.' '_get_provider_aggregates')).mock self.mock_get_aggs.return_value = report.AggInfo( aggregates=set([uuids.agg1]), generation=42) self.mock_get_traits = self.useFixture(fixtures.MockPatch( 'nova.scheduler.client.report.SchedulerReportClient.' 'get_provider_traits')).mock self.mock_get_traits.return_value = report.TraitInfo( traits=set(['CUSTOM_GOLD']), generation=43) self.mock_get_sharing = self.useFixture(fixtures.MockPatch( 'nova.scheduler.client.report.SchedulerReportClient.' 
'_get_sharing_providers')).mock def assert_getters_were_called(self, uuid, sharing=True): self.mock_get_inv.assert_called_once_with(self.context, uuid) self.mock_get_aggs.assert_called_once_with(self.context, uuid) self.mock_get_traits.assert_called_once_with(self.context, uuid) if sharing: self.mock_get_sharing.assert_called_once_with( self.context, self.mock_get_aggs.return_value[0]) self.assertIn(uuid, self.client._association_refresh_time) self.assertFalse( self.client._provider_tree.has_inventory_changed(uuid, self.inv)) self.assertTrue( self.client._provider_tree.in_aggregates(uuid, [uuids.agg1])) self.assertFalse( self.client._provider_tree.in_aggregates(uuid, [uuids.agg2])) self.assertTrue( self.client._provider_tree.has_traits(uuid, ['CUSTOM_GOLD'])) self.assertFalse( self.client._provider_tree.has_traits(uuid, ['CUSTOM_SILVER'])) self.assertEqual(43, self.client._provider_tree.data(uuid).generation) def assert_getters_not_called(self, timer_entry=None): self.mock_get_inv.assert_not_called() self.mock_get_aggs.assert_not_called() self.mock_get_traits.assert_not_called() self.mock_get_sharing.assert_not_called() if timer_entry is None: self.assertFalse(self.client._association_refresh_time) else: self.assertIn(timer_entry, self.client._association_refresh_time) def reset_getter_mocks(self): self.mock_get_inv.reset_mock() self.mock_get_aggs.reset_mock() self.mock_get_traits.reset_mock() self.mock_get_sharing.reset_mock() def test_refresh_associations_no_last(self): """Test that associations are refreshed when stale.""" uuid = uuids.compute_node # Seed the provider tree so _refresh_associations finds the provider self.client._provider_tree.new_root('compute', uuid, generation=1) self.client._refresh_associations(self.context, uuid) self.assert_getters_were_called(uuid) def test_refresh_associations_no_refresh_sharing(self): """Test refresh_sharing=False.""" uuid = uuids.compute_node # Seed the provider tree so _refresh_associations finds the provider self.client._provider_tree.new_root('compute', uuid, generation=1) self.client._refresh_associations(self.context, uuid, refresh_sharing=False) self.assert_getters_were_called(uuid, sharing=False) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_associations_stale') def test_refresh_associations_not_stale(self, mock_stale): """Test that refresh associations is not called when the map is not stale. """ mock_stale.return_value = False uuid = uuids.compute_node self.client._refresh_associations(self.context, uuid) self.assert_getters_not_called() @mock.patch.object(report.LOG, 'debug') def test_refresh_associations_time(self, log_mock): """Test that refresh associations is called when the map is stale.""" uuid = uuids.compute_node # Seed the provider tree so _refresh_associations finds the provider self.client._provider_tree.new_root('compute', uuid, generation=1) # Called a first time because association_refresh_time is empty. now = time.time() self.client._refresh_associations(self.context, uuid) self.assert_getters_were_called(uuid) log_mock.assert_has_calls([ mock.call('Refreshing inventories for resource provider %s', uuid), mock.call('Updating ProviderTree inventory for provider %s from ' '_refresh_and_get_inventory using data: %s', uuid, self.inv), mock.call('Refreshing aggregate associations for resource ' 'provider %s, aggregates: %s', uuid, uuids.agg1), mock.call('Refreshing trait associations for resource ' 'provider %s, traits: %s', uuid, 'CUSTOM_GOLD') ]) # Clear call count. 
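        # Hedged sketch of the staleness rule the mocked clock exercises
        # below (assuming the option is expressed in seconds):
        #     stale = (now - last_refresh) > \
        #         CONF.compute.resource_provider_association_refresh
        # so advancing time by half the interval must not trigger a refresh,
        # while interval + 1 must.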
self.reset_getter_mocks() with mock.patch('time.time') as mock_future: # Not called a second time because not enough time has passed. mock_future.return_value = (now + CONF.compute.resource_provider_association_refresh / 2) self.client._refresh_associations(self.context, uuid) self.assert_getters_not_called(timer_entry=uuid) # Called because time has passed. mock_future.return_value = (now + CONF.compute.resource_provider_association_refresh + 1) self.client._refresh_associations(self.context, uuid) self.assert_getters_were_called(uuid) def test_refresh_associations_disabled(self): """Test that refresh associations can be disabled.""" self.flags(resource_provider_association_refresh=0, group='compute') uuid = uuids.compute_node # Seed the provider tree so _refresh_associations finds the provider self.client._provider_tree.new_root('compute', uuid, generation=1) # Called a first time because association_refresh_time is empty. now = time.time() self.client._refresh_associations(self.context, uuid) self.assert_getters_were_called(uuid) # Clear call count. self.reset_getter_mocks() with mock.patch('time.time') as mock_future: # A lot of time passes mock_future.return_value = now + 10000000000000 self.client._refresh_associations(self.context, uuid) self.assert_getters_not_called(timer_entry=uuid) self.reset_getter_mocks() # Forever passes mock_future.return_value = float('inf') self.client._refresh_associations(self.context, uuid) self.assert_getters_not_called(timer_entry=uuid) self.reset_getter_mocks() # Even if no time passes, clearing the counter triggers refresh mock_future.return_value = now del self.client._association_refresh_time[uuid] self.client._refresh_associations(self.context, uuid) self.assert_getters_were_called(uuid) class TestAllocations(SchedulerReportClientTestCase): @mock.patch("nova.scheduler.client.report.SchedulerReportClient." "delete") @mock.patch("nova.scheduler.client.report.SchedulerReportClient." "delete_allocation_for_instance") @mock.patch("nova.objects.InstanceList.get_uuids_by_host_and_node") def test_delete_resource_provider_cascade(self, mock_by_host, mock_del_alloc, mock_delete): self.client._provider_tree.new_root(uuids.cn, uuids.cn, generation=1) cn = objects.ComputeNode(uuid=uuids.cn, host="fake_host", hypervisor_hostname="fake_hostname", ) mock_by_host.return_value = [uuids.inst1, uuids.inst2] resp_mock = mock.Mock(status_code=204) mock_delete.return_value = resp_mock self.client.delete_resource_provider(self.context, cn, cascade=True) mock_by_host.assert_called_once_with( self.context, cn.host, cn.hypervisor_hostname) self.assertEqual(2, mock_del_alloc.call_count) exp_url = "/resource_providers/%s" % uuids.cn mock_delete.assert_called_once_with( exp_url, global_request_id=self.context.global_id) self.assertFalse(self.client._provider_tree.exists(uuids.cn)) @mock.patch("nova.scheduler.client.report.SchedulerReportClient." "delete") @mock.patch("nova.scheduler.client.report.SchedulerReportClient." 
"delete_allocation_for_instance") @mock.patch("nova.objects.InstanceList.get_uuids_by_host_and_node") def test_delete_resource_provider_no_cascade(self, mock_by_host, mock_del_alloc, mock_delete): self.client._provider_tree.new_root(uuids.cn, uuids.cn, generation=1) self.client._association_refresh_time[uuids.cn] = mock.Mock() cn = objects.ComputeNode(uuid=uuids.cn, host="fake_host", hypervisor_hostname="fake_hostname", ) mock_by_host.return_value = [uuids.inst1, uuids.inst2] resp_mock = mock.Mock(status_code=204) mock_delete.return_value = resp_mock self.client.delete_resource_provider(self.context, cn) mock_del_alloc.assert_not_called() exp_url = "/resource_providers/%s" % uuids.cn mock_delete.assert_called_once_with( exp_url, global_request_id=self.context.global_id) self.assertNotIn(uuids.cn, self.client._association_refresh_time) @mock.patch("nova.scheduler.client.report.SchedulerReportClient." "delete") @mock.patch('nova.scheduler.client.report.LOG') def test_delete_resource_provider_log_calls(self, mock_log, mock_delete): # First, check a successful call self.client._provider_tree.new_root(uuids.cn, uuids.cn, generation=1) cn = objects.ComputeNode(uuid=uuids.cn, host="fake_host", hypervisor_hostname="fake_hostname", ) resp_mock = fake_requests.FakeResponse(204) mock_delete.return_value = resp_mock self.client.delete_resource_provider(self.context, cn) # With a 204, only the info should be called self.assertEqual(1, mock_log.info.call_count) self.assertEqual(0, mock_log.warning.call_count) # Now check a 404 response mock_log.reset_mock() resp_mock.status_code = 404 self.client.delete_resource_provider(self.context, cn) # With a 404, neither log message should be called self.assertEqual(0, mock_log.info.call_count) self.assertEqual(0, mock_log.warning.call_count) # Finally, check a 409 response mock_log.reset_mock() resp_mock.status_code = 409 self.client.delete_resource_provider(self.context, cn) # With a 409, only the error should be called self.assertEqual(0, mock_log.info.call_count) self.assertEqual(1, mock_log.error.call_count) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.delete', new=mock.Mock(side_effect=ks_exc.EndpointNotFound())) def test_delete_resource_provider_placement_exception(self): """Ensure that a ksa exception in delete_resource_provider raises through. 
""" self.client._provider_tree.new_root(uuids.cn, uuids.cn, generation=1) cn = objects.ComputeNode(uuid=uuids.cn, host="fake_host", hypervisor_hostname="fake_hostname", ) self.assertRaises( ks_exc.ClientException, self.client.delete_resource_provider, self.context, cn) @mock.patch("nova.scheduler.client.report.SchedulerReportClient.get") def test_get_allocations_for_resource_provider(self, mock_get): mock_get.return_value = fake_requests.FakeResponse( 200, content=jsonutils.dumps( {'allocations': 'fake', 'resource_provider_generation': 42})) ret = self.client.get_allocations_for_resource_provider( self.context, 'rpuuid') self.assertEqual('fake', ret.allocations) mock_get.assert_called_once_with( '/resource_providers/rpuuid/allocations', global_request_id=self.context.global_id) @mock.patch("nova.scheduler.client.report.SchedulerReportClient.get") def test_get_allocations_for_resource_provider_fail(self, mock_get): mock_get.return_value = fake_requests.FakeResponse(400, content="ouch") self.assertRaises(exception.ResourceProviderAllocationRetrievalFailed, self.client.get_allocations_for_resource_provider, self.context, 'rpuuid') mock_get.assert_called_once_with( '/resource_providers/rpuuid/allocations', global_request_id=self.context.global_id) @mock.patch("nova.scheduler.client.report.SchedulerReportClient.get") def test_get_allocs_for_consumer(self, mock_get): mock_get.return_value = fake_requests.FakeResponse( 200, content=jsonutils.dumps({'foo': 'bar'})) ret = self.client.get_allocs_for_consumer(self.context, 'consumer') self.assertEqual({'foo': 'bar'}, ret) mock_get.assert_called_once_with( '/allocations/consumer', version='1.28', global_request_id=self.context.global_id) @mock.patch("nova.scheduler.client.report.SchedulerReportClient.get") def test_get_allocs_for_consumer_fail(self, mock_get): mock_get.return_value = fake_requests.FakeResponse(400, content='err') self.assertRaises(exception.ConsumerAllocationRetrievalFailed, self.client.get_allocs_for_consumer, self.context, 'consumer') mock_get.assert_called_once_with( '/allocations/consumer', version='1.28', global_request_id=self.context.global_id) @mock.patch("nova.scheduler.client.report.SchedulerReportClient.get") def test_get_allocs_for_consumer_safe_connect_fail(self, mock_get): mock_get.side_effect = ks_exc.EndpointNotFound() self.assertRaises(ks_exc.ClientException, self.client.get_allocs_for_consumer, self.context, 'consumer') mock_get.assert_called_once_with( '/allocations/consumer', version='1.28', global_request_id=self.context.global_id) def _test_remove_res_from_alloc( self, current_allocations, resources_to_remove, updated_allocations): with test.nested( mock.patch( "nova.scheduler.client.report.SchedulerReportClient.get"), mock.patch( "nova.scheduler.client.report.SchedulerReportClient.put") ) as (mock_get, mock_put): mock_get.return_value = fake_requests.FakeResponse( 200, content=jsonutils.dumps(current_allocations)) self.client.remove_resources_from_instance_allocation( self.context, uuids.consumer_uuid, resources_to_remove) mock_get.assert_called_once_with( '/allocations/%s' % uuids.consumer_uuid, version='1.28', global_request_id=self.context.global_id) mock_put.assert_called_once_with( '/allocations/%s' % uuids.consumer_uuid, updated_allocations, version='1.28', global_request_id=self.context.global_id) def test_remove_res_from_alloc(self): current_allocations = { "allocations": { uuids.rp1: { "generation": 13, "resources": { 'VCPU': 10, 'MEMORY_MB': 4096, }, }, uuids.rp2: { "generation": 42, "resources": { 
'NET_BW_EGR_KILOBIT_PER_SEC': 200, 'NET_BW_IGR_KILOBIT_PER_SEC': 300, }, }, }, "consumer_generation": 2, "project_id": uuids.project_id, "user_id": uuids.user_id, } resources_to_remove = { uuids.rp1: { 'VCPU': 1 }, uuids.rp2: { 'NET_BW_EGR_KILOBIT_PER_SEC': 100, 'NET_BW_IGR_KILOBIT_PER_SEC': 200, } } updated_allocations = { "allocations": { uuids.rp1: { "generation": 13, "resources": { 'VCPU': 9, 'MEMORY_MB': 4096, }, }, uuids.rp2: { "generation": 42, "resources": { 'NET_BW_EGR_KILOBIT_PER_SEC': 100, 'NET_BW_IGR_KILOBIT_PER_SEC': 100, }, }, }, "consumer_generation": 2, "project_id": uuids.project_id, "user_id": uuids.user_id, } self._test_remove_res_from_alloc( current_allocations, resources_to_remove, updated_allocations) def test_remove_res_from_alloc_remove_rc_when_value_dropped_to_zero(self): current_allocations = { "allocations": { uuids.rp1: { "generation": 42, "resources": { 'NET_BW_EGR_KILOBIT_PER_SEC': 200, 'NET_BW_IGR_KILOBIT_PER_SEC': 300, }, }, }, "consumer_generation": 2, "project_id": uuids.project_id, "user_id": uuids.user_id, } # this will remove all of NET_BW_EGR_KILOBIT_PER_SEC resources from # the allocation so the whole resource class will be removed resources_to_remove = { uuids.rp1: { 'NET_BW_EGR_KILOBIT_PER_SEC': 200, 'NET_BW_IGR_KILOBIT_PER_SEC': 200, } } updated_allocations = { "allocations": { uuids.rp1: { "generation": 42, "resources": { 'NET_BW_IGR_KILOBIT_PER_SEC': 100, }, }, }, "consumer_generation": 2, "project_id": uuids.project_id, "user_id": uuids.user_id, } self._test_remove_res_from_alloc( current_allocations, resources_to_remove, updated_allocations) def test_remove_res_from_alloc_remove_rp_when_all_rc_removed(self): current_allocations = { "allocations": { uuids.rp1: { "generation": 42, "resources": { 'NET_BW_EGR_KILOBIT_PER_SEC': 200, 'NET_BW_IGR_KILOBIT_PER_SEC': 300, }, }, }, "consumer_generation": 2, "project_id": uuids.project_id, "user_id": uuids.user_id, } resources_to_remove = { uuids.rp1: { 'NET_BW_EGR_KILOBIT_PER_SEC': 200, 'NET_BW_IGR_KILOBIT_PER_SEC': 300, } } updated_allocations = { "allocations": {}, "consumer_generation": 2, "project_id": uuids.project_id, "user_id": uuids.user_id, } self._test_remove_res_from_alloc( current_allocations, resources_to_remove, updated_allocations) @mock.patch("nova.scheduler.client.report.SchedulerReportClient.get") def test_remove_res_from_alloc_failed_to_get_alloc( self, mock_get): mock_get.side_effect = ks_exc.EndpointNotFound() resources_to_remove = { uuids.rp1: { 'NET_BW_EGR_KILOBIT_PER_SEC': 200, 'NET_BW_IGR_KILOBIT_PER_SEC': 200, } } self.assertRaises( ks_exc.ClientException, self.client.remove_resources_from_instance_allocation, self.context, uuids.consumer_uuid, resources_to_remove) def test_remove_res_from_alloc_empty_alloc(self): resources_to_remove = { uuids.rp1: { 'NET_BW_EGR_KILOBIT_PER_SEC': 200, 'NET_BW_IGR_KILOBIT_PER_SEC': 200, } } current_allocations = { "allocations": {}, "consumer_generation": 0, "project_id": uuids.project_id, "user_id": uuids.user_id, } ex = self.assertRaises( exception.AllocationUpdateFailed, self._test_remove_res_from_alloc, current_allocations, resources_to_remove, None) self.assertIn('The allocation is empty', six.text_type(ex)) @mock.patch("nova.scheduler.client.report.SchedulerReportClient.put") @mock.patch("nova.scheduler.client.report.SchedulerReportClient.get") def test_remove_res_from_alloc_no_resource_to_remove( self, mock_get, mock_put): self.client.remove_resources_from_instance_allocation( self.context, uuids.consumer_uuid, {}) 
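        # Hedged note: an empty "resources to remove" mapping is expected to
        # short-circuit before any placement I/O, which is why neither the
        # GET nor the PUT mock may be touched below.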
mock_get.assert_not_called() mock_put.assert_not_called() def test_remove_res_from_alloc_missing_rc(self): current_allocations = { "allocations": { uuids.rp1: { "generation": 42, "resources": { 'NET_BW_EGR_KILOBIT_PER_SEC': 200, }, }, }, "consumer_generation": 2, "project_id": uuids.project_id, "user_id": uuids.user_id, } resources_to_remove = { uuids.rp1: { 'VCPU': 1, } } ex = self.assertRaises( exception.AllocationUpdateFailed, self._test_remove_res_from_alloc, current_allocations, resources_to_remove, None) self.assertIn( "Key 'VCPU' is missing from the allocation", six.text_type(ex)) def test_remove_res_from_alloc_missing_rp(self): current_allocations = { "allocations": { uuids.rp1: { "generation": 42, "resources": { 'NET_BW_EGR_KILOBIT_PER_SEC': 200, }, }, }, "consumer_generation": 2, "project_id": uuids.project_id, "user_id": uuids.user_id, } resources_to_remove = { uuids.other_rp: { 'NET_BW_EGR_KILOBIT_PER_SEC': 200, } } ex = self.assertRaises( exception.AllocationUpdateFailed, self._test_remove_res_from_alloc, current_allocations, resources_to_remove, None) self.assertIn( "Key '%s' is missing from the allocation" % uuids.other_rp, six.text_type(ex)) def test_remove_res_from_alloc_not_enough_resource_to_remove(self): current_allocations = { "allocations": { uuids.rp1: { "generation": 42, "resources": { 'NET_BW_EGR_KILOBIT_PER_SEC': 200, }, }, }, "consumer_generation": 2, "project_id": uuids.project_id, "user_id": uuids.user_id, } resources_to_remove = { uuids.rp1: { 'NET_BW_EGR_KILOBIT_PER_SEC': 400, } } ex = self.assertRaises( exception.AllocationUpdateFailed, self._test_remove_res_from_alloc, current_allocations, resources_to_remove, None) self.assertIn( 'There are not enough allocated resources left on %s resource ' 'provider to remove 400 amount of NET_BW_EGR_KILOBIT_PER_SEC ' 'resources' % uuids.rp1, six.text_type(ex)) @mock.patch('time.sleep', new=mock.Mock()) @mock.patch("nova.scheduler.client.report.SchedulerReportClient.put") @mock.patch("nova.scheduler.client.report.SchedulerReportClient.get") def test_remove_res_from_alloc_retry_succeed( self, mock_get, mock_put): current_allocations = { "allocations": { uuids.rp1: { "generation": 42, "resources": { 'NET_BW_EGR_KILOBIT_PER_SEC': 200, }, }, }, "consumer_generation": 2, "project_id": uuids.project_id, "user_id": uuids.user_id, } current_allocations_2 = copy.deepcopy(current_allocations) current_allocations_2['consumer_generation'] = 3 resources_to_remove = { uuids.rp1: { 'NET_BW_EGR_KILOBIT_PER_SEC': 200, } } updated_allocations = { "allocations": {}, "consumer_generation": 2, "project_id": uuids.project_id, "user_id": uuids.user_id, } updated_allocations_2 = copy.deepcopy(updated_allocations) updated_allocations_2['consumer_generation'] = 3 mock_get.side_effect = [ fake_requests.FakeResponse( 200, content=jsonutils.dumps(current_allocations)), fake_requests.FakeResponse( 200, content=jsonutils.dumps(current_allocations_2)) ] mock_put.side_effect = [ fake_requests.FakeResponse( status_code=409, content=jsonutils.dumps( {'errors': [{'code': 'placement.concurrent_update', 'detail': ''}]})), fake_requests.FakeResponse( status_code=204) ] self.client.remove_resources_from_instance_allocation( self.context, uuids.consumer_uuid, resources_to_remove) self.assertEqual( [ mock.call( '/allocations/%s' % uuids.consumer_uuid, version='1.28', global_request_id=self.context.global_id), mock.call( '/allocations/%s' % uuids.consumer_uuid, version='1.28', global_request_id=self.context.global_id) ], mock_get.mock_calls) self.assertEqual( [ 
mock.call( '/allocations/%s' % uuids.consumer_uuid, updated_allocations, version='1.28', global_request_id=self.context.global_id), mock.call( '/allocations/%s' % uuids.consumer_uuid, updated_allocations_2, version='1.28', global_request_id=self.context.global_id), ], mock_put.mock_calls) @mock.patch('time.sleep', new=mock.Mock()) @mock.patch("nova.scheduler.client.report.SchedulerReportClient.put") @mock.patch("nova.scheduler.client.report.SchedulerReportClient.get") def test_remove_res_from_alloc_run_out_of_retries( self, mock_get, mock_put): current_allocations = { "allocations": { uuids.rp1: { "generation": 42, "resources": { 'NET_BW_EGR_KILOBIT_PER_SEC': 200, }, }, }, "consumer_generation": 2, "project_id": uuids.project_id, "user_id": uuids.user_id, } resources_to_remove = { uuids.rp1: { 'NET_BW_EGR_KILOBIT_PER_SEC': 200, } } updated_allocations = { "allocations": {}, "consumer_generation": 2, "project_id": uuids.project_id, "user_id": uuids.user_id, } get_rsp = fake_requests.FakeResponse( 200, content=jsonutils.dumps(current_allocations)) mock_get.side_effect = [get_rsp] * 4 put_rsp = fake_requests.FakeResponse( status_code=409, content=jsonutils.dumps( {'errors': [{'code': 'placement.concurrent_update', 'detail': ''}]})) mock_put.side_effect = [put_rsp] * 4 ex = self.assertRaises( exception.AllocationUpdateFailed, self.client.remove_resources_from_instance_allocation, self.context, uuids.consumer_uuid, resources_to_remove) self.assertIn( 'due to multiple successive generation conflicts', six.text_type(ex)) get_call = mock.call( '/allocations/%s' % uuids.consumer_uuid, version='1.28', global_request_id=self.context.global_id) mock_get.assert_has_calls([get_call] * 4) put_call = mock.call( '/allocations/%s' % uuids.consumer_uuid, updated_allocations, version='1.28', global_request_id=self.context.global_id) mock_put.assert_has_calls([put_call] * 4) class TestResourceClass(SchedulerReportClientTestCase): def setUp(self): super(TestResourceClass, self).setUp() _put_patch = mock.patch( "nova.scheduler.client.report.SchedulerReportClient.put") self.addCleanup(_put_patch.stop) self.mock_put = _put_patch.start() def test_ensure_resource_classes(self): rcs = ['VCPU', 'CUSTOM_FOO', 'MEMORY_MB', 'CUSTOM_BAR'] self.client._ensure_resource_classes(self.context, rcs) self.mock_put.assert_has_calls([ mock.call('/resource_classes/%s' % rc, None, version='1.7', global_request_id=self.context.global_id) for rc in ('CUSTOM_FOO', 'CUSTOM_BAR') ], any_order=True) def test_ensure_resource_classes_none(self): for empty in ([], (), set(), {}): self.client._ensure_resource_classes(self.context, empty) self.mock_put.assert_not_called() def test_ensure_resource_classes_put_fail(self): self.mock_put.return_value = fake_requests.FakeResponse(503) rcs = ['VCPU', 'MEMORY_MB', 'CUSTOM_BAD'] self.assertRaises( exception.InvalidResourceClass, self.client._ensure_resource_classes, self.context, rcs) # Only called with the "bad" one self.mock_put.assert_called_once_with( '/resource_classes/CUSTOM_BAD', None, version='1.7', global_request_id=self.context.global_id) class TestAggregateAddRemoveHost(SchedulerReportClientTestCase): """Unit tests for the methods of the report client which look up providers by name and add/remove host aggregates to providers. These methods do not access the SchedulerReportClient provider_tree attribute and are called from the nova API, not the nova compute manager/resource tracker. 
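    A rough sketch of the call pattern exercised here (illustrative only;
    details such as client construction are elided)::

        client.aggregate_add_host(context, agg_uuid, host_name='cn1')
        client.aggregate_remove_host(context, agg_uuid, 'cn1')

    where the provider is resolved through get_provider_by_name and the
    aggregate set is written back with set_aggregates_for_provider, retrying
    on ResourceProviderUpdateConflict generation conflicts.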
""" def setUp(self): super(TestAggregateAddRemoveHost, self).setUp() self.mock_get = self.useFixture( fixtures.MockPatch('nova.scheduler.client.report.' 'SchedulerReportClient.get')).mock self.mock_put = self.useFixture( fixtures.MockPatch('nova.scheduler.client.report.' 'SchedulerReportClient.put')).mock def test_get_provider_by_name_success(self): get_resp = mock.Mock() get_resp.status_code = 200 get_resp.json.return_value = { "resource_providers": [ mock.sentinel.expected, ] } self.mock_get.return_value = get_resp name = 'cn1' res = self.client.get_provider_by_name(self.context, name) exp_url = "/resource_providers?name=%s" % name self.mock_get.assert_called_once_with( exp_url, global_request_id=self.context.global_id) self.assertEqual(mock.sentinel.expected, res) @mock.patch.object(report.LOG, 'warning') def test_get_provider_by_name_multiple_results(self, mock_log): """Test that if we find multiple resource providers with the same name, that a ResourceProviderNotFound is raised (the reason being that >1 resource provider with a name should never happen...) """ get_resp = mock.Mock() get_resp.status_code = 200 get_resp.json.return_value = { "resource_providers": [ {'uuid': uuids.cn1a}, {'uuid': uuids.cn1b}, ] } self.mock_get.return_value = get_resp name = 'cn1' self.assertRaises( exception.ResourceProviderNotFound, self.client.get_provider_by_name, self.context, name) mock_log.assert_called_once() @mock.patch.object(report.LOG, 'warning') def test_get_provider_by_name_500(self, mock_log): get_resp = mock.Mock() get_resp.status_code = 500 self.mock_get.return_value = get_resp name = 'cn1' self.assertRaises( exception.ResourceProviderNotFound, self.client.get_provider_by_name, self.context, name) mock_log.assert_called_once() @mock.patch.object(report.LOG, 'warning') def test_get_provider_by_name_404(self, mock_log): get_resp = mock.Mock() get_resp.status_code = 404 self.mock_get.return_value = get_resp name = 'cn1' self.assertRaises( exception.ResourceProviderNotFound, self.client.get_provider_by_name, self.context, name) mock_log.assert_not_called() @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'set_aggregates_for_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_provider_aggregates') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_provider_by_name') def test_aggregate_add_host_success_no_existing( self, mock_get_by_name, mock_get_aggs, mock_set_aggs): mock_get_by_name.return_value = { 'uuid': uuids.cn1, 'generation': 1, } agg_uuid = uuids.agg1 mock_get_aggs.return_value = report.AggInfo(aggregates=set([]), generation=42) name = 'cn1' self.client.aggregate_add_host(self.context, agg_uuid, host_name=name) mock_set_aggs.assert_called_once_with( self.context, uuids.cn1, set([agg_uuid]), use_cache=False, generation=42) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'set_aggregates_for_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_provider_aggregates') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'get_provider_by_name', new=mock.NonCallableMock()) def test_aggregate_add_host_rp_uuid(self, mock_get_aggs, mock_set_aggs): mock_get_aggs.return_value = report.AggInfo( aggregates=set([]), generation=42) self.client.aggregate_add_host( self.context, uuids.agg1, rp_uuid=uuids.cn1) mock_set_aggs.assert_called_once_with( self.context, uuids.cn1, set([uuids.agg1]), use_cache=False, generation=42) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'set_aggregates_for_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_provider_aggregates') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_provider_by_name') def test_aggregate_add_host_success_already_existing( self, mock_get_by_name, mock_get_aggs, mock_set_aggs): mock_get_by_name.return_value = { 'uuid': uuids.cn1, 'generation': 1, } agg1_uuid = uuids.agg1 agg2_uuid = uuids.agg2 agg3_uuid = uuids.agg3 mock_get_aggs.return_value = report.AggInfo( aggregates=set([agg1_uuid]), generation=42) name = 'cn1' self.client.aggregate_add_host(self.context, agg1_uuid, host_name=name) mock_set_aggs.assert_not_called() mock_get_aggs.reset_mock() mock_set_aggs.reset_mock() mock_get_aggs.return_value = report.AggInfo( aggregates=set([agg1_uuid, agg3_uuid]), generation=43) self.client.aggregate_add_host(self.context, agg2_uuid, host_name=name) mock_set_aggs.assert_called_once_with( self.context, uuids.cn1, set([agg1_uuid, agg2_uuid, agg3_uuid]), use_cache=False, generation=43) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_provider_by_name', side_effect=exception.PlacementAPIConnectFailure) def test_aggregate_add_host_no_placement(self, mock_get_by_name): """Tests that PlacementAPIConnectFailure will be raised up from aggregate_add_host if get_provider_by_name raises that error. """ name = 'cn1' agg_uuid = uuids.agg1 self.assertRaises( exception.PlacementAPIConnectFailure, self.client.aggregate_add_host, self.context, agg_uuid, host_name=name) self.mock_get.assert_not_called() @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'set_aggregates_for_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_provider_aggregates') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_provider_by_name') def test_aggregate_add_host_retry_success( self, mock_get_by_name, mock_get_aggs, mock_set_aggs): mock_get_by_name.return_value = { 'uuid': uuids.cn1, 'generation': 1, } gens = (42, 43, 44) mock_get_aggs.side_effect = ( report.AggInfo(aggregates=set([]), generation=gen) for gen in gens) mock_set_aggs.side_effect = ( exception.ResourceProviderUpdateConflict( uuid='uuid', generation=42, error='error'), exception.ResourceProviderUpdateConflict( uuid='uuid', generation=43, error='error'), None, ) self.client.aggregate_add_host(self.context, uuids.agg1, host_name='cn1') mock_set_aggs.assert_has_calls([mock.call( self.context, uuids.cn1, set([uuids.agg1]), use_cache=False, generation=gen) for gen in gens]) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'set_aggregates_for_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_provider_aggregates') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'get_provider_by_name') def test_aggregate_add_host_retry_raises( self, mock_get_by_name, mock_get_aggs, mock_set_aggs): mock_get_by_name.return_value = { 'uuid': uuids.cn1, 'generation': 1, } gens = (42, 43, 44, 45) mock_get_aggs.side_effect = ( report.AggInfo(aggregates=set([]), generation=gen) for gen in gens) mock_set_aggs.side_effect = ( exception.ResourceProviderUpdateConflict( uuid='uuid', generation=gen, error='error') for gen in gens) self.assertRaises( exception.ResourceProviderUpdateConflict, self.client.aggregate_add_host, self.context, uuids.agg1, host_name='cn1') mock_set_aggs.assert_has_calls([mock.call( self.context, uuids.cn1, set([uuids.agg1]), use_cache=False, generation=gen) for gen in gens]) def test_aggregate_add_host_no_host_name_or_rp_uuid(self): self.assertRaises( ValueError, self.client.aggregate_add_host, self.context, uuids.agg1) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_provider_by_name', side_effect=exception.PlacementAPIConnectFailure) def test_aggregate_remove_host_no_placement(self, mock_get_by_name): """Tests that PlacementAPIConnectFailure will be raised up from aggregate_remove_host if get_provider_by_name raises that error. """ name = 'cn1' agg_uuid = uuids.agg1 self.assertRaises( exception.PlacementAPIConnectFailure, self.client.aggregate_remove_host, self.context, agg_uuid, name) self.mock_get.assert_not_called() @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'set_aggregates_for_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_provider_aggregates') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_provider_by_name') def test_aggregate_remove_host_success_already_existing( self, mock_get_by_name, mock_get_aggs, mock_set_aggs): mock_get_by_name.return_value = { 'uuid': uuids.cn1, 'generation': 1, } agg_uuid = uuids.agg1 mock_get_aggs.return_value = report.AggInfo(aggregates=set([agg_uuid]), generation=42) name = 'cn1' self.client.aggregate_remove_host(self.context, agg_uuid, name) mock_set_aggs.assert_called_once_with( self.context, uuids.cn1, set([]), use_cache=False, generation=42) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'set_aggregates_for_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_provider_aggregates') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_provider_by_name') def test_aggregate_remove_host_success_no_existing( self, mock_get_by_name, mock_get_aggs, mock_set_aggs): mock_get_by_name.return_value = { 'uuid': uuids.cn1, 'generation': 1, } agg1_uuid = uuids.agg1 agg2_uuid = uuids.agg2 agg3_uuid = uuids.agg3 mock_get_aggs.return_value = report.AggInfo(aggregates=set([]), generation=42) name = 'cn1' self.client.aggregate_remove_host(self.context, agg2_uuid, name) mock_set_aggs.assert_not_called() mock_get_aggs.reset_mock() mock_set_aggs.reset_mock() mock_get_aggs.return_value = report.AggInfo( aggregates=set([agg1_uuid, agg2_uuid, agg3_uuid]), generation=43) self.client.aggregate_remove_host(self.context, agg2_uuid, name) mock_set_aggs.assert_called_once_with( self.context, uuids.cn1, set([agg1_uuid, agg3_uuid]), use_cache=False, generation=43) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'set_aggregates_for_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_provider_aggregates') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'get_provider_by_name') def test_aggregate_remove_host_retry_success( self, mock_get_by_name, mock_get_aggs, mock_set_aggs): mock_get_by_name.return_value = { 'uuid': uuids.cn1, 'generation': 1, } gens = (42, 43, 44) mock_get_aggs.side_effect = ( report.AggInfo(aggregates=set([uuids.agg1]), generation=gen) for gen in gens) mock_set_aggs.side_effect = ( exception.ResourceProviderUpdateConflict( uuid='uuid', generation=42, error='error'), exception.ResourceProviderUpdateConflict( uuid='uuid', generation=43, error='error'), None, ) self.client.aggregate_remove_host(self.context, uuids.agg1, 'cn1') mock_set_aggs.assert_has_calls([mock.call( self.context, uuids.cn1, set([]), use_cache=False, generation=gen) for gen in gens]) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'set_aggregates_for_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_provider_aggregates') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_provider_by_name') def test_aggregate_remove_host_retry_raises( self, mock_get_by_name, mock_get_aggs, mock_set_aggs): mock_get_by_name.return_value = { 'uuid': uuids.cn1, 'generation': 1, } gens = (42, 43, 44, 45) mock_get_aggs.side_effect = ( report.AggInfo(aggregates=set([uuids.agg1]), generation=gen) for gen in gens) mock_set_aggs.side_effect = ( exception.ResourceProviderUpdateConflict( uuid='uuid', generation=gen, error='error') for gen in gens) self.assertRaises( exception.ResourceProviderUpdateConflict, self.client.aggregate_remove_host, self.context, uuids.agg1, 'cn1') mock_set_aggs.assert_has_calls([mock.call( self.context, uuids.cn1, set([]), use_cache=False, generation=gen) for gen in gens]) class TestUsages(SchedulerReportClientTestCase): @mock.patch('nova.scheduler.client.report.SchedulerReportClient.get') def test_get_usages_counts_for_quota_fail(self, mock_get): # First call with project fails mock_get.return_value = fake_requests.FakeResponse(500, content='err') self.assertRaises(exception.UsagesRetrievalFailed, self.client.get_usages_counts_for_quota, self.context, 'fake-project') mock_get.assert_called_once_with( '/usages?project_id=fake-project', version='1.9', global_request_id=self.context.global_id) # Second call with project + user fails mock_get.reset_mock() fake_good_response = fake_requests.FakeResponse( 200, content=jsonutils.dumps( {'usages': {orc.VCPU: 2, orc.MEMORY_MB: 512}})) mock_get.side_effect = [fake_good_response, fake_requests.FakeResponse(500, content='err')] self.assertRaises(exception.UsagesRetrievalFailed, self.client.get_usages_counts_for_quota, self.context, 'fake-project', user_id='fake-user') self.assertEqual(2, mock_get.call_count) call1 = mock.call( '/usages?project_id=fake-project', version='1.9', global_request_id=self.context.global_id) call2 = mock.call( '/usages?project_id=fake-project&user_id=fake-user', version='1.9', global_request_id=self.context.global_id) mock_get.assert_has_calls([call1, call2]) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.get') def test_get_usages_counts_for_quota_retries(self, mock_get): # Two attempts have a ConnectFailure and the third succeeds fake_project_response = fake_requests.FakeResponse( 200, content=jsonutils.dumps( {'usages': {orc.VCPU: 2, orc.MEMORY_MB: 512}})) mock_get.side_effect = [ks_exc.ConnectFailure, ks_exc.ConnectFailure, fake_project_response] counts = self.client.get_usages_counts_for_quota(self.context, 'fake-project') self.assertEqual(3, mock_get.call_count) expected = {'project': {'cores': 2, 
'ram': 512}} self.assertDictEqual(expected, counts) # Project query succeeds, first project + user query has a # ConnectFailure, second project + user query succeeds mock_get.reset_mock() fake_user_response = fake_requests.FakeResponse( 200, content=jsonutils.dumps( {'usages': {orc.VCPU: 1, orc.MEMORY_MB: 256}})) mock_get.side_effect = [fake_project_response, ks_exc.ConnectFailure, fake_user_response] counts = self.client.get_usages_counts_for_quota( self.context, 'fake-project', user_id='fake-user') self.assertEqual(3, mock_get.call_count) expected['user'] = {'cores': 1, 'ram': 256} self.assertDictEqual(expected, counts) # Three attempts in a row have a ConnectFailure mock_get.reset_mock() mock_get.side_effect = [ks_exc.ConnectFailure] * 4 self.assertRaises(ks_exc.ConnectFailure, self.client.get_usages_counts_for_quota, self.context, 'fake-project') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.get') def test_get_usages_counts_default_zero(self, mock_get): # A project and user are not yet consuming any resources. fake_response = fake_requests.FakeResponse( 200, content=jsonutils.dumps({'usages': {}})) mock_get.side_effect = [fake_response, fake_response] counts = self.client.get_usages_counts_for_quota( self.context, 'fake-project', user_id='fake-user') self.assertEqual(2, mock_get.call_count) expected = {'project': {'cores': 0, 'ram': 0}, 'user': {'cores': 0, 'ram': 0}} self.assertDictEqual(expected, counts) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.get') def test_get_usages_count_with_pcpu(self, mock_get): fake_responses = fake_requests.FakeResponse( 200, content=jsonutils.dumps({'usages': {orc.VCPU: 2, orc.PCPU: 2}})) mock_get.return_value = fake_responses counts = self.client.get_usages_counts_for_quota( self.context, 'fake-project', user_id='fake-user') self.assertEqual(2, mock_get.call_count) expected = {'project': {'cores': 4, 'ram': 0}, 'user': {'cores': 4, 'ram': 0}} self.assertDictEqual(expected, counts) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/scheduler/fakes.py0000664000175000017500000001753400000000000021305 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Fakes For Scheduler tests. 
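The module provides canned NUMA topologies, ComputeNode, resource provider
and Service fixtures plus the FakeHostState and FakeScheduler helpers used
throughout the scheduler unit tests, e.g.::

    host = FakeHostState('host1', 'node1', {'free_ram_mb': 1024})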
""" import datetime from oslo_utils.fixture import uuidsentinel from nova import objects from nova.scheduler import driver from nova.scheduler import host_manager # TODO(stephenfin): Rework these so they're functions instead of global # variables that can be mutated NUMA_TOPOLOGY = objects.NUMATopology(cells=[ objects.NUMACell( id=0, cpuset=set([0, 1]), pcpuset=set([2, 3]), memory=512, cpu_usage=0, memory_usage=0, pinned_cpus=set(), mempages=[ objects.NUMAPagesTopology(size_kb=16, total=387184, used=0), objects.NUMAPagesTopology(size_kb=2048, total=512, used=0)], siblings=[set([0]), set([1]), set([2]), set([3])]), objects.NUMACell( id=1, cpuset=set([4, 5]), pcpuset=set([6, 7]), memory=512, cpu_usage=0, memory_usage=0, pinned_cpus=set(), mempages=[ objects.NUMAPagesTopology(size_kb=4, total=1548736, used=0), objects.NUMAPagesTopology(size_kb=2048, total=512, used=0)], siblings=[set([4]), set([5]), set([6]), set([7])])]) NUMA_TOPOLOGIES_W_HT = [ objects.NUMATopology(cells=[ objects.NUMACell( id=0, cpuset=set(), pcpuset=set([1, 2, 5, 6]), memory=512, cpu_usage=0, memory_usage=0, pinned_cpus=set(), mempages=[], siblings=[set([1, 5]), set([2, 6])]), objects.NUMACell( id=1, cpuset=set(), pcpuset=set([3, 4, 7, 8]), memory=512, cpu_usage=0, memory_usage=0, pinned_cpus=set(), mempages=[], siblings=[set([3, 4]), set([7, 8])]) ]), objects.NUMATopology(cells=[ objects.NUMACell( id=0, cpuset=set(), pcpuset=set(), memory=512, cpu_usage=0, memory_usage=0, pinned_cpus=set(), mempages=[], siblings=[]), objects.NUMACell( id=1, cpuset=set(), pcpuset=set([1, 2, 5, 6]), memory=512, cpu_usage=0, memory_usage=0, pinned_cpus=set(), mempages=[], siblings=[set([1, 5]), set([2, 6])]), objects.NUMACell( id=2, cpuset=set(), pcpuset=set([3, 4, 7, 8]), memory=512, cpu_usage=0, memory_usage=0, pinned_cpus=set(), mempages=[], siblings=[set([3, 4]), set([7, 8])]), ]), ] COMPUTE_NODES = [ objects.ComputeNode( uuid=uuidsentinel.cn1, id=1, local_gb=1024, memory_mb=1024, vcpus=1, disk_available_least=None, free_ram_mb=512, vcpus_used=1, free_disk_gb=512, local_gb_used=0, updated_at=datetime.datetime(2015, 11, 11, 11, 0, 0), host='host1', hypervisor_hostname='node1', host_ip='127.0.0.1', hypervisor_version=0, numa_topology=None, hypervisor_type='foo', supported_hv_specs=[], pci_device_pools=None, cpu_info=None, stats=None, metrics=None, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0), objects.ComputeNode( uuid=uuidsentinel.cn2, id=2, local_gb=2048, memory_mb=2048, vcpus=2, disk_available_least=1024, free_ram_mb=1024, vcpus_used=2, free_disk_gb=1024, local_gb_used=0, updated_at=datetime.datetime(2015, 11, 11, 11, 0, 0), host='host2', hypervisor_hostname='node2', host_ip='127.0.0.1', hypervisor_version=0, numa_topology=None, hypervisor_type='foo', supported_hv_specs=[], pci_device_pools=None, cpu_info=None, stats=None, metrics=None, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0), objects.ComputeNode( uuid=uuidsentinel.cn3, id=3, local_gb=4096, memory_mb=4096, vcpus=4, disk_available_least=3333, free_ram_mb=3072, vcpus_used=1, free_disk_gb=3072, local_gb_used=0, updated_at=datetime.datetime(2015, 11, 11, 11, 0, 0), host='host3', hypervisor_hostname='node3', host_ip='127.0.0.1', hypervisor_version=0, numa_topology=NUMA_TOPOLOGY._to_json(), hypervisor_type='foo', supported_hv_specs=[], pci_device_pools=None, cpu_info=None, stats=None, metrics=None, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0), objects.ComputeNode( 
uuid=uuidsentinel.cn4, id=4, local_gb=8192, memory_mb=8192, vcpus=8, disk_available_least=8192, free_ram_mb=8192, vcpus_used=0, free_disk_gb=8888, local_gb_used=0, updated_at=datetime.datetime(2015, 11, 11, 11, 0, 0), host='host4', hypervisor_hostname='node4', host_ip='127.0.0.1', hypervisor_version=0, numa_topology=None, hypervisor_type='foo', supported_hv_specs=[], pci_device_pools=None, cpu_info=None, stats=None, metrics=None, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0), # Broken entry objects.ComputeNode( uuid=uuidsentinel.cn5, id=5, local_gb=1024, memory_mb=1024, vcpus=1, host='fake', hypervisor_hostname='fake-hyp'), ] def get_fake_alloc_reqs(): return [ { 'allocations': { cn.uuid: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 512, 'DISK_GB': 512, }, } } } for cn in COMPUTE_NODES ] RESOURCE_PROVIDERS = [ dict( uuid=uuidsentinel.rp1, name='host1', generation=1), dict( uuid=uuidsentinel.rp2, name='host2', generation=1), dict( uuid=uuidsentinel.rp3, name='host3', generation=1), dict( uuid=uuidsentinel.rp4, name='host4', generation=1), ] SERVICES = [ objects.Service(host='host1', disabled=False), objects.Service(host='host2', disabled=True), objects.Service(host='host3', disabled=False), objects.Service(host='host4', disabled=False), ] def get_service_by_host(host): services = [service for service in SERVICES if service.host == host] return services[0] class FakeHostState(host_manager.HostState): def __init__(self, host, node, attribute_dict, instances=None): super(FakeHostState, self).__init__(host, node, None) if instances: self.instances = {inst.uuid: inst for inst in instances} else: self.instances = {} for (key, val) in attribute_dict.items(): setattr(self, key, val) class FakeScheduler(driver.Scheduler): def select_destinations(self, context, spec_obj, instance_uuids, alloc_reqs_by_rp_uuid, provider_summaries, allocation_request_version=None, return_alternates=False): return [] ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5784678 nova-21.2.4/nova/tests/unit/scheduler/filters/0000775000175000017500000000000000000000000021300 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/filters/__init__.py0000664000175000017500000000000000000000000023377 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/filters/test_affinity_filters.py0000664000175000017500000002521400000000000026256 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
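# The filter tests below share one recipe: build a fakes.FakeHostState
# (optionally seeded with instances), build an objects.RequestSpec carrying
# the relevant scheduler_hints or instance_group, and assert on the filter's
# host_passes() result. A minimal sketch of that recipe (illustrative only,
# mirroring the DifferentHostFilter failure case further down):
#
#     filt = affinity_filter.DifferentHostFilter()
#     inst1 = objects.Instance(uuid=uuids.instance)
#     host = fakes.FakeHostState('host1', 'node1', {})
#     host.instances = {inst1.uuid: inst1}
#     spec_obj = objects.RequestSpec(
#         context=mock.sentinel.ctx,
#         scheduler_hints=dict(different_host=[uuids.instance]))
#     # The hinted instance already lives on this host, so the filter rejects it.
#     assert not filt.host_passes(host, spec_obj)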
import mock from oslo_utils.fixture import uuidsentinel as uuids from nova import objects from nova.scheduler.filters import affinity_filter from nova import test from nova.tests.unit.scheduler import fakes class TestDifferentHostFilter(test.NoDBTestCase): def setUp(self): super(TestDifferentHostFilter, self).setUp() self.filt_cls = affinity_filter.DifferentHostFilter() def test_affinity_different_filter_passes(self): host = fakes.FakeHostState('host1', 'node1', {}) inst1 = objects.Instance(uuid=uuids.instance) host.instances = {inst1.uuid: inst1} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, scheduler_hints=dict(different_host=['same'])) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_affinity_different_filter_fails(self): inst1 = objects.Instance(uuid=uuids.instance) host = fakes.FakeHostState('host1', 'node1', {}) host.instances = {inst1.uuid: inst1} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, scheduler_hints=dict(different_host=[uuids.instance])) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_affinity_different_filter_handles_none(self): inst1 = objects.Instance(uuid=uuids.instance) host = fakes.FakeHostState('host1', 'node1', {}) host.instances = {inst1.uuid: inst1} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, scheduler_hints=None) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) class TestSameHostFilter(test.NoDBTestCase): def setUp(self): super(TestSameHostFilter, self).setUp() self.filt_cls = affinity_filter.SameHostFilter() def test_affinity_same_filter_passes(self): inst1 = objects.Instance(uuid=uuids.instance) host = fakes.FakeHostState('host1', 'node1', {}) host.instances = {inst1.uuid: inst1} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, scheduler_hints=dict(same_host=[uuids.instance])) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_affinity_same_filter_no_list_passes(self): host = fakes.FakeHostState('host1', 'node1', {}) host.instances = {} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, scheduler_hints=dict(same_host=['same'])) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_affinity_same_filter_fails(self): inst1 = objects.Instance(uuid=uuids.instance) host = fakes.FakeHostState('host1', 'node1', {}) host.instances = {inst1.uuid: inst1} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, scheduler_hints=dict(same_host=['same'])) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_affinity_same_filter_handles_none(self): inst1 = objects.Instance(uuid=uuids.instance) host = fakes.FakeHostState('host1', 'node1', {}) host.instances = {inst1.uuid: inst1} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, scheduler_hints=None) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) class TestSimpleCIDRAffinityFilter(test.NoDBTestCase): def setUp(self): super(TestSimpleCIDRAffinityFilter, self).setUp() self.filt_cls = affinity_filter.SimpleCIDRAffinityFilter() def test_affinity_simple_cidr_filter_passes(self): host = fakes.FakeHostState('host1', 'node1', {}) host.host_ip = '10.8.1.1' affinity_ip = "10.8.1.100" spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, scheduler_hints=dict( cidr=['/24'], build_near_host_ip=[affinity_ip])) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_affinity_simple_cidr_filter_fails(self): host = fakes.FakeHostState('host1', 'node1', {}) host.host_ip = '10.8.1.1' affinity_ip = "10.8.1.100" spec_obj = objects.RequestSpec( 
context=mock.sentinel.ctx, scheduler_hints=dict( cidr=['/32'], build_near_host_ip=[affinity_ip])) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_affinity_simple_cidr_filter_handles_none(self): host = fakes.FakeHostState('host1', 'node1', {}) spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, scheduler_hints=None) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) class TestGroupAffinityFilter(test.NoDBTestCase): def _test_group_anti_affinity_filter_passes(self, filt_cls, policy): host = fakes.FakeHostState('host1', 'node1', {}) spec_obj = objects.RequestSpec(instance_group=None) self.assertTrue(filt_cls.host_passes(host, spec_obj)) spec_obj = objects.RequestSpec(instance_group=objects.InstanceGroup( policy='affinity')) self.assertTrue(filt_cls.host_passes(host, spec_obj)) spec_obj = objects.RequestSpec(instance_group=objects.InstanceGroup( policy=policy, members=[]), instance_uuid=uuids.fake) spec_obj.instance_group.hosts = [] self.assertTrue(filt_cls.host_passes(host, spec_obj)) spec_obj.instance_group.hosts = ['host2'] self.assertTrue(filt_cls.host_passes(host, spec_obj)) def test_group_anti_affinity_filter_passes(self): self._test_group_anti_affinity_filter_passes( affinity_filter.ServerGroupAntiAffinityFilter(), 'anti-affinity') def _test_group_anti_affinity_filter_fails(self, filt_cls, policy): inst1 = objects.Instance(uuid=uuids.inst1) # We already have an inst1 on host1 host = fakes.FakeHostState('host1', 'node1', {}, instances=[inst1]) spec_obj = objects.RequestSpec( instance_group=objects.InstanceGroup(policy=policy, hosts=['host1'], members=[uuids.inst1], rules={}), instance_uuid=uuids.fake) self.assertFalse(filt_cls.host_passes(host, spec_obj)) def test_group_anti_affinity_filter_fails(self): self._test_group_anti_affinity_filter_fails( affinity_filter.ServerGroupAntiAffinityFilter(), 'anti-affinity') def _test_group_anti_affinity_filter_with_rules(self, rules, members): filt_cls = affinity_filter.ServerGroupAntiAffinityFilter() inst1 = objects.Instance(uuid=uuids.inst1) inst2 = objects.Instance(uuid=uuids.inst2) spec_obj = objects.RequestSpec( instance_group=objects.InstanceGroup(policy='anti-affinity', hosts=['host1'], members=members, rules=rules), instance_uuid=uuids.fake) # 2 instances on same host host_wit_2_inst = fakes.FakeHostState( 'host1', 'node1', {}, instances=[inst1, inst2]) return filt_cls.host_passes(host_wit_2_inst, spec_obj) def test_group_anti_affinity_filter_with_rules_fail(self): # the members of this group on the host already reach to max, # create one more servers would be failed. result = self._test_group_anti_affinity_filter_with_rules( {"max_server_per_host": 1}, [uuids.inst1]) self.assertFalse(result) result = self._test_group_anti_affinity_filter_with_rules( {"max_server_per_host": 2}, [uuids.inst1, uuids.inst2]) self.assertFalse(result) def test_group_anti_affinity_filter_with_rules_pass(self): result = self._test_group_anti_affinity_filter_with_rules( {"max_server_per_host": 1}, []) self.assertTrue(result) # we can have at most 2 members from the same group on the same host. 
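        # (max_server_per_host=2 with only one group member currently on the
        # host leaves room for this instance, so the filter passes.)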
result = self._test_group_anti_affinity_filter_with_rules( {"max_server_per_host": 2}, [uuids.inst1]) self.assertTrue(result) def test_group_anti_affinity_filter_allows_instance_to_same_host(self): fake_uuid = uuids.fake mock_instance = objects.Instance(uuid=fake_uuid) host_state = fakes.FakeHostState('host1', 'node1', {}, instances=[mock_instance]) spec_obj = objects.RequestSpec(instance_group=objects.InstanceGroup( policy='anti-affinity', hosts=['host1', 'host2'], members=[]), instance_uuid=mock_instance.uuid) self.assertTrue(affinity_filter.ServerGroupAntiAffinityFilter(). host_passes(host_state, spec_obj)) def _test_group_affinity_filter_passes(self, filt_cls, policy): host = fakes.FakeHostState('host1', 'node1', {}) spec_obj = objects.RequestSpec(instance_group=None) self.assertTrue(filt_cls.host_passes(host, spec_obj)) spec_obj = objects.RequestSpec(instance_group=objects.InstanceGroup( policies=['anti-affinity'])) self.assertTrue(filt_cls.host_passes(host, spec_obj)) spec_obj = objects.RequestSpec(instance_group=objects.InstanceGroup( policies=['affinity'], hosts=['host1'])) self.assertTrue(filt_cls.host_passes(host, spec_obj)) def test_group_affinity_filter_passes(self): self._test_group_affinity_filter_passes( affinity_filter.ServerGroupAffinityFilter(), 'affinity') def _test_group_affinity_filter_fails(self, filt_cls, policy): host = fakes.FakeHostState('host1', 'node1', {}) spec_obj = objects.RequestSpec(instance_group=objects.InstanceGroup( policies=[policy], hosts=['host2'])) self.assertFalse(filt_cls.host_passes(host, spec_obj)) def test_group_affinity_filter_fails(self): self._test_group_affinity_filter_fails( affinity_filter.ServerGroupAffinityFilter(), 'affinity') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/filters/test_aggregate_image_properties_isolation_filters.py0000664000175000017500000001571000000000000034072 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
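# These tests patch nova.scheduler.filters.utils.aggregate_metadata_get_by_host
# so the aggregate metadata seen by the filter can be controlled directly.
# A minimal sketch of the pattern (illustrative only, mirroring the basic
# passing case below):
#
#     agg_mock.return_value = {'hw_vm_mode': set(['hvm'])}
#     spec_obj = objects.RequestSpec(
#         context=mock.sentinel.ctx,
#         image=objects.ImageMeta(properties=objects.ImageMetaProps(
#             hw_vm_mode='hvm')))
#     host = fakes.FakeHostState('host1', 'compute', {})
#     filt_cls = aipi.AggregateImagePropertiesIsolation()
#     assert filt_cls.host_passes(host, spec_obj)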
import mock from nova import objects from nova.scheduler.filters import aggregate_image_properties_isolation as aipi from nova import test from nova.tests.unit.scheduler import fakes @mock.patch('nova.scheduler.filters.utils.aggregate_metadata_get_by_host') class TestAggImagePropsIsolationFilter(test.NoDBTestCase): def setUp(self): super(TestAggImagePropsIsolationFilter, self).setUp() self.filt_cls = aipi.AggregateImagePropertiesIsolation() def test_aggregate_image_properties_isolation_passes(self, agg_mock): agg_mock.return_value = {'hw_vm_mode': set(['hvm'])} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, image=objects.ImageMeta(properties=objects.ImageMetaProps( hw_vm_mode='hvm'))) host = fakes.FakeHostState('host1', 'compute', {}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_aggregate_image_properties_isolation_passes_comma(self, agg_mock): agg_mock.return_value = {'hw_vm_mode': set(['hvm', 'xen'])} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, image=objects.ImageMeta(properties=objects.ImageMetaProps( hw_vm_mode='hvm'))) host = fakes.FakeHostState('host1', 'compute', {}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_aggregate_image_properties_isolation_props_bad_comma(self, agg_mock): agg_mock.return_value = {'os_distro': set(['windows', 'linux'])} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, image=objects.ImageMeta(properties=objects.ImageMetaProps( os_distro='windows,'))) host = fakes.FakeHostState('host1', 'compute', {}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_aggregate_image_properties_isolation_multi_props_passes(self, agg_mock): agg_mock.return_value = {'hw_vm_mode': set(['hvm']), 'hw_cpu_cores': set(['2'])} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, image=objects.ImageMeta(properties=objects.ImageMetaProps( hw_vm_mode='hvm', hw_cpu_cores=2))) host = fakes.FakeHostState('host1', 'compute', {}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_aggregate_image_properties_isolation_props_with_meta_passes(self, agg_mock): agg_mock.return_value = {'hw_vm_mode': set(['hvm'])} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, image=objects.ImageMeta(properties=objects.ImageMetaProps())) host = fakes.FakeHostState('host1', 'compute', {}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_aggregate_image_properties_isolation_props_imgprops_passes(self, agg_mock): agg_mock.return_value = {} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, image=objects.ImageMeta(properties=objects.ImageMetaProps( hw_vm_mode='hvm'))) host = fakes.FakeHostState('host1', 'compute', {}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_aggregate_image_properties_isolation_props_not_match_fails(self, agg_mock): agg_mock.return_value = {'hw_vm_mode': set(['hvm'])} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, image=objects.ImageMeta(properties=objects.ImageMetaProps( hw_vm_mode='xen'))) host = fakes.FakeHostState('host1', 'compute', {}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_aggregate_image_properties_isolation_props_not_match2_fails(self, agg_mock): agg_mock.return_value = {'hw_vm_mode': set(['hvm']), 'hw_cpu_cores': set(['1'])} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, image=objects.ImageMeta(properties=objects.ImageMetaProps( hw_vm_mode='hvm', hw_cpu_cores=2))) host = fakes.FakeHostState('host1', 'compute', {}) 
self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_aggregate_image_properties_isolation_props_namespace(self, agg_mock): self.flags(aggregate_image_properties_isolation_namespace='hw', group='filter_scheduler') self.flags(aggregate_image_properties_isolation_separator='_', group='filter_scheduler') agg_mock.return_value = {'hw_vm_mode': set(['hvm']), 'img_owner_id': set(['foo'])} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, image=objects.ImageMeta(properties=objects.ImageMetaProps( hw_vm_mode='hvm', img_owner_id='wrong'))) host = fakes.FakeHostState('host1', 'compute', {}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_aggregate_image_properties_iso_props_with_custom_meta(self, agg_mock): agg_mock.return_value = {'os': set(['linux'])} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, image=objects.ImageMeta(properties=objects.ImageMetaProps( os_type='linux'))) host = fakes.FakeHostState('host1', 'compute', {}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_aggregate_image_properties_iso_props_with_matching_meta_pass(self, agg_mock): agg_mock.return_value = {'os_type': set(['linux'])} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, image=objects.ImageMeta(properties=objects.ImageMetaProps( os_type='linux'))) host = fakes.FakeHostState('host1', 'compute', {}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_aggregate_image_properties_iso_props_with_matching_meta_fail( self, agg_mock): agg_mock.return_value = {'os_type': set(['windows'])} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, image=objects.ImageMeta(properties=objects.ImageMetaProps( os_type='linux'))) host = fakes.FakeHostState('host1', 'compute', {}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/filters/test_aggregate_instance_extra_specs_filters.py0000664000175000017500000000756700000000000032672 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
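# The AggregateInstanceExtraSpecsFilter tests below compare flavor extra
# specs against the aggregate metadata returned by the mocked
# aggregate_metadata_get_by_host(). Both un-scoped keys and keys scoped to
# this filter are exercised, e.g.:
#
#     especs = {
#         'opt1': '1',                                  # un-scoped
#         'aggregate_instance_extra_specs:opt2': '2',   # scoped to filter
#     }
#
# Flavors without extra specs are expected to pass without consulting the
# aggregate metadata at all.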
import mock from nova import objects from nova.scheduler.filters import aggregate_instance_extra_specs as agg_specs from nova import test from nova.tests.unit.scheduler import fakes @mock.patch('nova.scheduler.filters.utils.aggregate_metadata_get_by_host') class TestAggregateInstanceExtraSpecsFilter(test.NoDBTestCase): def setUp(self): super(TestAggregateInstanceExtraSpecsFilter, self).setUp() self.filt_cls = agg_specs.AggregateInstanceExtraSpecsFilter() def test_aggregate_filter_passes_no_extra_specs(self, agg_mock): capabilities = {'opt1': 1, 'opt2': 2} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor(memory_mb=1024)) host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) self.assertFalse(agg_mock.called) def test_aggregate_filter_passes_empty_extra_specs(self, agg_mock): capabilities = {'opt1': 1, 'opt2': 2} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor(memory_mb=1024, extra_specs={})) host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) self.assertFalse(agg_mock.called) def _do_test_aggregate_filter_extra_specs(self, especs, passes): spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor(memory_mb=1024, extra_specs=especs)) host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 1024}) assertion = self.assertTrue if passes else self.assertFalse assertion(self.filt_cls.host_passes(host, spec_obj)) def test_aggregate_filter_passes_extra_specs_simple(self, agg_mock): agg_mock.return_value = {'opt1': set(['1']), 'opt2': set(['2'])} especs = { # Un-scoped extra spec 'opt1': '1', # Scoped extra spec that applies to this filter 'aggregate_instance_extra_specs:opt2': '2', } self._do_test_aggregate_filter_extra_specs(especs, passes=True) def test_aggregate_filter_passes_extra_specs_simple_comma(self, agg_mock): agg_mock.return_value = {'opt1': set(['1', '3']), 'opt2': set(['2'])} especs = { # Un-scoped extra spec 'opt1': '1', # Scoped extra spec that applies to this filter 'aggregate_instance_extra_specs:opt1': '3', } self._do_test_aggregate_filter_extra_specs(especs, passes=True) def test_aggregate_filter_passes_with_key_same_as_scope(self, agg_mock): agg_mock.return_value = {'aggregate_instance_extra_specs': set(['1'])} especs = { # Un-scoped extra spec, make sure we don't blow up if it # happens to match our scope. 'aggregate_instance_extra_specs': '1', } self._do_test_aggregate_filter_extra_specs(especs, passes=True) def test_aggregate_filter_fails_extra_specs_simple(self, agg_mock): agg_mock.return_value = {'opt1': set(['1']), 'opt2': set(['2'])} especs = { 'opt1': '1', 'opt2': '222' } self._do_test_aggregate_filter_extra_specs(especs, passes=False) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/filters/test_aggregate_multitenancy_isolation_filters.py0000664000175000017500000000577500000000000033262 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import mock from nova import objects from nova.scheduler.filters import aggregate_multitenancy_isolation as ami from nova import test from nova.tests.unit.scheduler import fakes @mock.patch('nova.scheduler.filters.utils.aggregate_metadata_get_by_host') class TestAggregateMultitenancyIsolationFilter(test.NoDBTestCase): def setUp(self): super(TestAggregateMultitenancyIsolationFilter, self).setUp() self.filt_cls = ami.AggregateMultiTenancyIsolation() def test_aggregate_multi_tenancy_isolation_with_meta_passes(self, agg_mock): agg_mock.return_value = {'filter_tenant_id': set(['my_tenantid'])} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, project_id='my_tenantid') host = fakes.FakeHostState('host1', 'compute', {}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_aggregate_multi_tenancy_isolation_with_meta_passes_comma(self, agg_mock): agg_mock.return_value = {'filter_tenant_id': set(['my_tenantid', 'mytenantid2'])} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, project_id='my_tenantid') host = fakes.FakeHostState('host1', 'compute', {}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_aggregate_multi_tenancy_isolation_fails(self, agg_mock): agg_mock.return_value = {'filter_tenant_id': set(['other_tenantid'])} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, project_id='my_tenantid') host = fakes.FakeHostState('host1', 'compute', {}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_aggregate_multi_tenancy_isolation_fails_comma(self, agg_mock): agg_mock.return_value = {'filter_tenant_id': set(['other_tenantid', 'other_tenantid2'])} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, project_id='my_tenantid') host = fakes.FakeHostState('host1', 'compute', {}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_aggregate_multi_tenancy_isolation_no_meta_passes(self, agg_mock): agg_mock.return_value = {} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, project_id='my_tenantid') host = fakes.FakeHostState('host1', 'compute', {}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/filters/test_availability_zone_filters.py0000664000175000017500000000410000000000000030141 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
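# The AvailabilityZoneFilter tests drive the filter with a RequestSpec whose
# availability_zone is matched against the 'availability_zone' key of the
# (mocked) aggregate metadata; a multi-valued set behaves like a
# comma-separated list, so 'nova' matches {'nova', 'nova2'}. Sketch of the
# request construction used below:
#
#     request = objects.RequestSpec(
#         context=mock.sentinel.ctx, availability_zone='nova')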
import mock from nova import objects from nova.scheduler.filters import availability_zone_filter from nova import test from nova.tests.unit.scheduler import fakes @mock.patch('nova.scheduler.filters.utils.aggregate_metadata_get_by_host') class TestAvailabilityZoneFilter(test.NoDBTestCase): def setUp(self): super(TestAvailabilityZoneFilter, self).setUp() self.filt_cls = availability_zone_filter.AvailabilityZoneFilter() @staticmethod def _make_zone_request(zone): return objects.RequestSpec( context=mock.sentinel.ctx, availability_zone=zone) def test_availability_zone_filter_same(self, agg_mock): agg_mock.return_value = {'availability_zone': set(['nova'])} request = self._make_zone_request('nova') host = fakes.FakeHostState('host1', 'node1', {}) self.assertTrue(self.filt_cls.host_passes(host, request)) def test_availability_zone_filter_same_comma(self, agg_mock): agg_mock.return_value = {'availability_zone': set(['nova', 'nova2'])} request = self._make_zone_request('nova') host = fakes.FakeHostState('host1', 'node1', {}) self.assertTrue(self.filt_cls.host_passes(host, request)) def test_availability_zone_filter_different(self, agg_mock): agg_mock.return_value = {'availability_zone': set(['nova'])} request = self._make_zone_request('bad') host = fakes.FakeHostState('host1', 'node1', {}) self.assertFalse(self.filt_cls.host_passes(host, request)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/scheduler/filters/test_compute_capabilities_filters.py0000664000175000017500000001637200000000000030637 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from nova import objects from nova.scheduler.filters import compute_capabilities_filter from nova import test from nova.tests.unit.scheduler import fakes class TestComputeCapabilitiesFilter(test.NoDBTestCase): def setUp(self): super(TestComputeCapabilitiesFilter, self).setUp() self.filt_cls = compute_capabilities_filter.ComputeCapabilitiesFilter() def _do_test_compute_filter_extra_specs(self, ecaps, especs, passes): # In real OpenStack runtime environment,compute capabilities # value may be number, so we should use number to do unit test. 
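        # The helper merges the given capabilities into a fake host state and
        # checks the flavor extra_specs against it, asserting host_passes()
        # according to the expected 'passes' flag.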
capabilities = {} capabilities.update(ecaps) spec_obj = objects.RequestSpec( flavor=objects.Flavor(memory_mb=1024, extra_specs=especs)) host_state = {'free_ram_mb': 1024} host_state.update(capabilities) host = fakes.FakeHostState('host1', 'node1', host_state) assertion = self.assertTrue if passes else self.assertFalse assertion(self.filt_cls.host_passes(host, spec_obj)) def test_compute_filter_passes_without_extra_specs(self): spec_obj = objects.RequestSpec( flavor=objects.Flavor(memory_mb=1024)) host_state = {'free_ram_mb': 1024} host = fakes.FakeHostState('host1', 'node1', host_state) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_compute_filter_fails_without_host_state(self): especs = {'capabilities:opts': '1'} spec_obj = objects.RequestSpec( flavor=objects.Flavor(memory_mb=1024, extra_specs=especs)) self.assertFalse(self.filt_cls.host_passes(None, spec_obj)) def test_compute_filter_fails_without_capabilites(self): cpu_info = """ { } """ cpu_info = six.text_type(cpu_info) self._do_test_compute_filter_extra_specs( ecaps={'cpu_info': cpu_info}, especs={'capabilities:cpu_info:vendor': 'Intel'}, passes=False) def test_compute_filter_pass_cpu_info_as_text_type(self): cpu_info = """ { "vendor": "Intel", "model": "core2duo", "arch": "i686","features": ["lahf_lm", "rdtscp"], "topology": {"cores": 1, "threads":1, "sockets": 1}} """ cpu_info = six.text_type(cpu_info) self._do_test_compute_filter_extra_specs( ecaps={'cpu_info': cpu_info}, especs={'capabilities:cpu_info:vendor': 'Intel'}, passes=True) def test_compute_filter_pass_cpu_info_with_backward_compatibility(self): cpu_info = """ { "vendor": "Intel", "model": "core2duo", "arch": "i686","features": ["lahf_lm", "rdtscp"], "topology": {"cores": 1, "threads":1, "sockets": 1}} """ cpu_info = six.text_type(cpu_info) self._do_test_compute_filter_extra_specs( ecaps={'cpu_info': cpu_info}, especs={'cpu_info': cpu_info}, passes=True) def test_compute_filter_fail_cpu_info_with_backward_compatibility(self): cpu_info = """ { "vendor": "Intel", "model": "core2duo", "arch": "i686","features": ["lahf_lm", "rdtscp"], "topology": {"cores": 1, "threads":1, "sockets": 1}} """ cpu_info = six.text_type(cpu_info) self._do_test_compute_filter_extra_specs( ecaps={'cpu_info': cpu_info}, especs={'cpu_info': ''}, passes=False) def test_compute_filter_fail_cpu_info_as_text_type_not_valid(self): cpu_info = "cpu_info" cpu_info = six.text_type(cpu_info) self._do_test_compute_filter_extra_specs( ecaps={'cpu_info': cpu_info}, especs={'capabilities:cpu_info:vendor': 'Intel'}, passes=False) def test_compute_filter_passes_extra_specs_simple(self): self._do_test_compute_filter_extra_specs( ecaps={'stats': {'opt1': 1, 'opt2': 2}}, especs={'opt1': '1', 'opt2': '2'}, passes=True) def test_compute_filter_fails_extra_specs_simple(self): self._do_test_compute_filter_extra_specs( ecaps={'stats': {'opt1': 1, 'opt2': 2}}, especs={'opt1': '1', 'opt2': '222'}, passes=False) def test_compute_filter_pass_extra_specs_simple_with_scope(self): self._do_test_compute_filter_extra_specs( ecaps={'stats': {'opt1': 1, 'opt2': 2}}, especs={'capabilities:opt1': '1'}, passes=True) def test_compute_filter_pass_extra_specs_same_as_scope(self): # Make sure this still works even if the key is the same as the scope self._do_test_compute_filter_extra_specs( ecaps={'capabilities': 1}, especs={'capabilities': '1'}, passes=True) def test_compute_filter_pass_self_defined_specs(self): # Make sure this will not reject user's self-defined,irrelevant specs 
self._do_test_compute_filter_extra_specs( ecaps={'opt1': 1, 'opt2': 2}, especs={'XXYY': '1'}, passes=True) def test_compute_filter_extra_specs_simple_with_wrong_scope(self): self._do_test_compute_filter_extra_specs( ecaps={'opt1': 1, 'opt2': 2}, especs={'wrong_scope:opt1': '1'}, passes=True) def test_compute_filter_extra_specs_pass_multi_level_with_scope(self): self._do_test_compute_filter_extra_specs( ecaps={'stats': {'opt1': {'a': 1, 'b': {'aa': 2}}, 'opt2': 2}}, especs={'opt1:a': '1', 'capabilities:opt1:b:aa': '2'}, passes=True) def test_compute_filter_pass_ram_with_backward_compatibility(self): self._do_test_compute_filter_extra_specs( ecaps={}, especs={'free_ram_mb': '>= 300'}, passes=True) def test_compute_filter_fail_ram_with_backward_compatibility(self): self._do_test_compute_filter_extra_specs( ecaps={}, especs={'free_ram_mb': '<= 300'}, passes=False) def test_compute_filter_pass_cpu_with_backward_compatibility(self): self._do_test_compute_filter_extra_specs( ecaps={}, especs={'vcpus_used': '<= 20'}, passes=True) def test_compute_filter_fail_cpu_with_backward_compatibility(self): self._do_test_compute_filter_extra_specs( ecaps={}, especs={'vcpus_used': '>= 20'}, passes=False) def test_compute_filter_pass_disk_with_backward_compatibility(self): self._do_test_compute_filter_extra_specs( ecaps={}, especs={'free_disk_mb': 0}, passes=True) def test_compute_filter_fail_disk_with_backward_compatibility(self): self._do_test_compute_filter_extra_specs( ecaps={}, especs={'free_disk_mb': 1}, passes=False) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/filters/test_compute_filters.py0000664000175000017500000000446400000000000026125 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
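# Illustrative sketch (hypothetical helper, not from the nova source tree):
# the ComputeFilter tests below all reduce to one rule -- a host whose compute
# service record is administratively disabled is rejected without consulting
# the servicegroup API, and an enabled host passes only if the servicegroup
# API reports the service as up.  Assuming a plain-dict service record and any
# callable liveness probe, that rule can be written stand-alone as:


def _illustrative_compute_filter_passes(service, service_is_up):
    """Hypothetical helper mirroring the behaviour asserted below."""
    if service.get('disabled'):
        # Mirrors test_compute_filter_manual_disable: fail fast and never
        # call the liveness probe.
        return False
    return service_is_up(service)


# For example, _illustrative_compute_filter_passes({'disabled': True},
# lambda svc: True) returns False without invoking the probe.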
import mock from nova import objects from nova.scheduler.filters import compute_filter from nova import test from nova.tests.unit.scheduler import fakes @mock.patch('nova.servicegroup.API.service_is_up') class TestComputeFilter(test.NoDBTestCase): def test_compute_filter_manual_disable(self, service_up_mock): filt_cls = compute_filter.ComputeFilter() spec_obj = objects.RequestSpec( flavor=objects.Flavor(memory_mb=1024)) service = {'disabled': True} host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 1024, 'service': service}) self.assertFalse(filt_cls.host_passes(host, spec_obj)) self.assertFalse(service_up_mock.called) def test_compute_filter_sgapi_passes(self, service_up_mock): filt_cls = compute_filter.ComputeFilter() spec_obj = objects.RequestSpec( flavor=objects.Flavor(memory_mb=1024)) service = {'disabled': False} host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 1024, 'service': service}) service_up_mock.return_value = True self.assertTrue(filt_cls.host_passes(host, spec_obj)) service_up_mock.assert_called_once_with(service) def test_compute_filter_sgapi_fails(self, service_up_mock): filt_cls = compute_filter.ComputeFilter() spec_obj = objects.RequestSpec( flavor=objects.Flavor(memory_mb=1024)) service = {'disabled': False, 'updated_at': 'now'} host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 1024, 'service': service}) service_up_mock.return_value = False self.assertFalse(filt_cls.host_passes(host, spec_obj)) service_up_mock.assert_called_once_with(service) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/scheduler/filters/test_core_filters.py0000664000175000017500000000575200000000000025402 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock from nova import objects from nova.scheduler.filters import core_filter from nova import test from nova.tests.unit.scheduler import fakes class TestAggregateCoreFilter(test.NoDBTestCase): @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key') def test_aggregate_core_filter_value_error(self, agg_mock): self.filt_cls = core_filter.AggregateCoreFilter() spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor(vcpus=1)) host = fakes.FakeHostState('host1', 'node1', {'vcpus_total': 4, 'vcpus_used': 7, 'cpu_allocation_ratio': 2}) agg_mock.return_value = set(['XXX']) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) agg_mock.assert_called_once_with(host, 'cpu_allocation_ratio') self.assertEqual(4 * 2, host.limits['vcpu']) @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key') def test_aggregate_core_filter_default_value(self, agg_mock): self.filt_cls = core_filter.AggregateCoreFilter() spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor(vcpus=1)) host = fakes.FakeHostState('host1', 'node1', {'vcpus_total': 4, 'vcpus_used': 8, 'cpu_allocation_ratio': 2}) agg_mock.return_value = set([]) # False: fallback to default flag w/o aggregates self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) agg_mock.assert_called_once_with(host, 'cpu_allocation_ratio') # True: use ratio from aggregates agg_mock.return_value = set(['3']) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) self.assertEqual(4 * 3, host.limits['vcpu']) @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key') def test_aggregate_core_filter_conflict_values(self, agg_mock): self.filt_cls = core_filter.AggregateCoreFilter() spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor(vcpus=1)) host = fakes.FakeHostState('host1', 'node1', {'vcpus_total': 4, 'vcpus_used': 8, 'cpu_allocation_ratio': 1}) agg_mock.return_value = set(['2', '3']) # use the minimum ratio from aggregates self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) self.assertEqual(4 * 2, host.limits['vcpu']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/scheduler/filters/test_disk_filters.py0000664000175000017500000000465600000000000025406 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
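# Illustrative sketch (hypothetical helper, not from the nova source tree):
# the AggregateCoreFilter tests above and the AggregateDiskFilter tests below
# exercise the same resolution pattern for an allocation ratio -- read
# candidate values from the host's aggregate metadata, fall back to the
# configured default when the metadata is absent or not numeric, and take the
# smallest value when several aggregates disagree.  Under those assumptions:


def _illustrative_allocation_ratio(aggregate_values, config_default):
    """Hypothetical helper mirroring the ratio selection asserted here."""
    ratios = []
    for value in aggregate_values:
        try:
            ratios.append(float(value))
        except ValueError:
            # e.g. agg_mock.return_value = set(['XXX']) in the tests:
            # unusable metadata falls back to the configured ratio.
            return config_default
    return min(ratios) if ratios else config_default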
import mock from nova import objects from nova.scheduler.filters import disk_filter from nova import test from nova.tests.unit.scheduler import fakes class TestAggregateDiskFilter(test.NoDBTestCase): @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key') def test_aggregate_disk_filter_value_error(self, agg_mock): filt_cls = disk_filter.AggregateDiskFilter() spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor( root_gb=1, ephemeral_gb=1, swap=1024)) host = fakes.FakeHostState('host1', 'node1', {'free_disk_mb': 3 * 1024, 'total_usable_disk_gb': 4, 'disk_allocation_ratio': 1.0}) agg_mock.return_value = set(['XXX']) self.assertTrue(filt_cls.host_passes(host, spec_obj)) agg_mock.assert_called_once_with(host, 'disk_allocation_ratio') @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key') def test_aggregate_disk_filter_default_value(self, agg_mock): filt_cls = disk_filter.AggregateDiskFilter() spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor( root_gb=2, ephemeral_gb=1, swap=1024)) host = fakes.FakeHostState('host1', 'node1', {'free_disk_mb': 3 * 1024, 'total_usable_disk_gb': 4, 'disk_allocation_ratio': 1.0}) # Uses global conf. agg_mock.return_value = set([]) self.assertFalse(filt_cls.host_passes(host, spec_obj)) agg_mock.assert_called_once_with(host, 'disk_allocation_ratio') agg_mock.return_value = set(['2']) self.assertTrue(filt_cls.host_passes(host, spec_obj)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/filters/test_extra_specs_ops.py0000664000175000017500000001566400000000000026126 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova.scheduler.filters import extra_specs_ops
from nova import test


class ExtraSpecsOpsTestCase(test.NoDBTestCase):

    def _do_extra_specs_ops_test(self, value, req, matches):
        assertion = self.assertTrue if matches else self.assertFalse
        assertion(extra_specs_ops.match(value, req))

    def test_extra_specs_matches_simple(self):
        self._do_extra_specs_ops_test(
            value='1', req='1', matches=True)

    def test_extra_specs_fails_simple(self):
        self._do_extra_specs_ops_test(
            value='', req='1', matches=False)

    def test_extra_specs_fails_simple2(self):
        self._do_extra_specs_ops_test(
            value='3', req='1', matches=False)

    def test_extra_specs_fails_simple3(self):
        self._do_extra_specs_ops_test(
            value='222', req='2', matches=False)

    def test_extra_specs_fails_with_bogus_ops(self):
        self._do_extra_specs_ops_test(
            value='4', req='> 2', matches=False)

    def test_extra_specs_matches_with_op_eq(self):
        self._do_extra_specs_ops_test(
            value='123', req='= 123', matches=True)

    def test_extra_specs_matches_with_op_eq2(self):
        self._do_extra_specs_ops_test(
            value='124', req='= 123', matches=True)

    def test_extra_specs_fails_with_op_eq(self):
        self._do_extra_specs_ops_test(
            value='34', req='= 234', matches=False)

    def test_extra_specs_fails_with_op_eq3(self):
        self._do_extra_specs_ops_test(
            value='34', req='=', matches=False)

    def test_extra_specs_matches_with_op_seq(self):
        self._do_extra_specs_ops_test(
            value='123', req='s== 123', matches=True)

    def test_extra_specs_fails_with_op_seq(self):
        self._do_extra_specs_ops_test(
            value='1234', req='s== 123', matches=False)

    def test_extra_specs_matches_with_op_sneq(self):
        self._do_extra_specs_ops_test(
            value='1234', req='s!= 123', matches=True)

    def test_extra_specs_fails_with_op_sneq(self):
        self._do_extra_specs_ops_test(
            value='123', req='s!= 123', matches=False)

    def test_extra_specs_fails_with_op_sge(self):
        self._do_extra_specs_ops_test(
            value='1000', req='s>= 234', matches=False)

    def test_extra_specs_fails_with_op_sle(self):
        self._do_extra_specs_ops_test(
            value='1234', req='s<= 1000', matches=False)

    def test_extra_specs_fails_with_op_sl(self):
        self._do_extra_specs_ops_test(
            value='2', req='s< 12', matches=False)

    def test_extra_specs_fails_with_op_sg(self):
        self._do_extra_specs_ops_test(
            value='12', req='s> 2', matches=False)

    def test_extra_specs_matches_with_op_in(self):
        self._do_extra_specs_ops_test(
            value='12311321', req='<in> 11', matches=True)

    def test_extra_specs_matches_with_op_in2(self):
        self._do_extra_specs_ops_test(
            value='12311321', req='<in> 12311321', matches=True)

    def test_extra_specs_matches_with_op_in3(self):
        self._do_extra_specs_ops_test(
            value='12311321', req='<in> 12311321 <in>', matches=True)

    def test_extra_specs_fails_with_op_in(self):
        self._do_extra_specs_ops_test(
            value='12310321', req='<in> 11', matches=False)

    def test_extra_specs_fails_with_op_in2(self):
        self._do_extra_specs_ops_test(
            value='12310321', req='<in> 11 <in>', matches=False)

    def test_extra_specs_matches_with_op_or(self):
        self._do_extra_specs_ops_test(
            value='12', req='<or> 11 <or> 12', matches=True)

    def test_extra_specs_matches_with_op_or2(self):
        self._do_extra_specs_ops_test(
            value='12', req='<or> 11 <or> 12 <or>', matches=True)

    def test_extra_specs_fails_with_op_or(self):
        self._do_extra_specs_ops_test(
            value='13', req='<or> 11 <or> 12', matches=False)

    def test_extra_specs_fails_with_op_or2(self):
        self._do_extra_specs_ops_test(
            value='13', req='<or> 11 <or> 12 <or>', matches=False)

    def test_extra_specs_matches_with_op_le(self):
        self._do_extra_specs_ops_test(
            value='2', req='<= 10', matches=True)

    def test_extra_specs_fails_with_op_le(self):
        self._do_extra_specs_ops_test(
            value='3', req='<= 2', matches=False)

    def test_extra_specs_matches_with_op_ge(self):
        self._do_extra_specs_ops_test(
            value='3', req='>= 1', matches=True)

    def test_extra_specs_fails_with_op_ge(self):
        self._do_extra_specs_ops_test(
            value='2', req='>= 3', matches=False)

    def test_extra_specs_matches_all_with_op_allin(self):
        values = ['aes', 'mmx', 'aux']
        self._do_extra_specs_ops_test(
            value=str(values), req='<all-in> aes mmx', matches=True)

    def test_extra_specs_matches_one_with_op_allin(self):
        values = ['aes', 'mmx', 'aux']
        self._do_extra_specs_ops_test(
            value=str(values), req='<all-in> mmx', matches=True)

    def test_extra_specs_fails_with_op_allin(self):
        values = ['aes', 'mmx', 'aux']
        self._do_extra_specs_ops_test(
            value=str(values), req='<all-in> txt', matches=False)

    def test_extra_specs_fails_all_with_op_allin(self):
        values = ['aes', 'mmx', 'aux']
        self._do_extra_specs_ops_test(
            value=str(values), req='<all-in> txt 3dnow', matches=False)

    def test_extra_specs_fails_match_one_with_op_allin(self):
        values = ['aes', 'mmx', 'aux']
        self._do_extra_specs_ops_test(
            value=str(values), req='<all-in> txt aes', matches=False)

nova-21.2.4/nova/tests/unit/scheduler/filters/test_image_props_filters.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
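# Illustrative sketch (hypothetical table, not from the nova source tree):
# several ImagePropertiesFilter tests further down cover backward
# compatibility for legacy image metadata spellings -- an old 'pv' vm_mode is
# matched against XEN capabilities, 'hv' and 'baremetal' against HVM, the
# 'x86_32' architecture against i686, and the 'xapi' hypervisor type against
# xen.  Collected into one mapping, the aliases those tests exercise are:

_ILLUSTRATIVE_LEGACY_ALIASES = {
    'pv': 'xen',          # hw_vm_mode
    'hv': 'hvm',          # hw_vm_mode
    'baremetal': 'hvm',   # hw_vm_mode
    'x86_32': 'i686',     # hw_architecture
    'xapi': 'xen',        # img_hv_type
}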
from oslo_utils import versionutils from nova import objects from nova.objects import fields as obj_fields from nova.scheduler.filters import image_props_filter from nova import test from nova.tests.unit.scheduler import fakes class TestImagePropsFilter(test.NoDBTestCase): def setUp(self): super(TestImagePropsFilter, self).setUp() self.filt_cls = image_props_filter.ImagePropertiesFilter() def test_image_properties_filter_passes_same_inst_props_and_version(self): img_props = objects.ImageMeta( properties=objects.ImageMetaProps( hw_architecture=obj_fields.Architecture.X86_64, img_hv_type=obj_fields.HVType.KVM, hw_vm_mode=obj_fields.VMMode.HVM, img_hv_requested_version='>=6.0,<6.2')) spec_obj = objects.RequestSpec(image=img_props) hypervisor_version = versionutils.convert_version_to_int('6.0.0') capabilities = { 'supported_instances': [( obj_fields.Architecture.X86_64, obj_fields.HVType.KVM, obj_fields.VMMode.HVM)], 'hypervisor_version': hypervisor_version} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_image_properties_filter_uses_default_conf_value(self): self.flags(image_properties_default_architecture='x86_64', group='filter_scheduler') img_props = objects.ImageMeta(properties=objects.ImageMetaProps()) hypervisor_version = versionutils.convert_version_to_int('6.0.0') spec_obj = objects.RequestSpec(image=img_props) capabilities = { 'supported_instances': [( obj_fields.Architecture.AARCH64, obj_fields.HVType.KVM, obj_fields.VMMode.HVM)], 'hypervisor_version': hypervisor_version} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_image_properties_filter_fails_different_inst_props(self): img_props = objects.ImageMeta( properties=objects.ImageMetaProps( hw_architecture=obj_fields.Architecture.ARMV7, img_hv_type=obj_fields.HVType.QEMU, hw_vm_mode=obj_fields.VMMode.HVM)) hypervisor_version = versionutils.convert_version_to_int('6.0.0') spec_obj = objects.RequestSpec(image=img_props) capabilities = { 'supported_instances': [( obj_fields.Architecture.X86_64, obj_fields.HVType.KVM, obj_fields.VMMode.HVM)], 'hypervisor_version': hypervisor_version} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_image_properties_filter_fails_different_hyper_version(self): img_props = objects.ImageMeta( properties=objects.ImageMetaProps( hw_architecture=obj_fields.Architecture.X86_64, img_hv_type=obj_fields.HVType.KVM, hw_vm_mode=obj_fields.VMMode.HVM, img_hv_requested_version='>=6.2')) hypervisor_version = versionutils.convert_version_to_int('6.0.0') spec_obj = objects.RequestSpec(image=img_props) capabilities = { 'enabled': True, 'supported_instances': [( obj_fields.Architecture.X86_64, obj_fields.HVType.KVM, obj_fields.VMMode.HVM)], 'hypervisor_version': hypervisor_version} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_image_properties_filter_passes_partial_inst_props(self): img_props = objects.ImageMeta( properties=objects.ImageMetaProps( hw_architecture=obj_fields.Architecture.X86_64, hw_vm_mode=obj_fields.VMMode.HVM)) hypervisor_version = versionutils.convert_version_to_int('6.0.0') spec_obj = objects.RequestSpec(image=img_props) capabilities = { 'supported_instances': [( obj_fields.Architecture.X86_64, obj_fields.HVType.KVM, obj_fields.VMMode.HVM)], 'hypervisor_version': hypervisor_version} host = 
fakes.FakeHostState('host1', 'node1', capabilities) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_image_properties_filter_fails_partial_inst_props(self): img_props = objects.ImageMeta( properties=objects.ImageMetaProps( hw_architecture=obj_fields.Architecture.X86_64, hw_vm_mode=obj_fields.VMMode.HVM)) hypervisor_version = versionutils.convert_version_to_int('6.0.0') spec_obj = objects.RequestSpec(image=img_props) capabilities = { 'supported_instances': [( obj_fields.Architecture.X86_64, obj_fields.HVType.XEN, obj_fields.VMMode.XEN)], 'hypervisor_version': hypervisor_version} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_image_properties_filter_passes_without_inst_props(self): spec_obj = objects.RequestSpec(image=None) hypervisor_version = versionutils.convert_version_to_int('6.0.0') capabilities = { 'supported_instances': [( obj_fields.Architecture.X86_64, obj_fields.HVType.KVM, obj_fields.VMMode.HVM)], 'hypervisor_version': hypervisor_version} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_image_properties_filter_fails_without_host_props(self): img_props = objects.ImageMeta( properties=objects.ImageMetaProps( hw_architecture=obj_fields.Architecture.X86_64, img_hv_type=obj_fields.HVType.KVM, hw_vm_mode=obj_fields.VMMode.HVM)) hypervisor_version = versionutils.convert_version_to_int('6.0.0') spec_obj = objects.RequestSpec(image=img_props) capabilities = { 'enabled': True, 'hypervisor_version': hypervisor_version} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_image_properties_filter_passes_without_hyper_version(self): img_props = objects.ImageMeta( properties=objects.ImageMetaProps( hw_architecture=obj_fields.Architecture.X86_64, img_hv_type=obj_fields.HVType.KVM, hw_vm_mode=obj_fields.VMMode.HVM, img_hv_requested_version='>=6.0')) spec_obj = objects.RequestSpec(image=img_props) capabilities = { 'enabled': True, 'supported_instances': [( obj_fields.Architecture.X86_64, obj_fields.HVType.KVM, obj_fields.VMMode.HVM)]} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_image_properties_filter_fails_with_unsupported_hyper_ver(self): img_props = objects.ImageMeta( properties=objects.ImageMetaProps( hw_architecture=obj_fields.Architecture.X86_64, img_hv_type=obj_fields.HVType.KVM, hw_vm_mode=obj_fields.VMMode.HVM, img_hv_requested_version='>=6.0')) spec_obj = objects.RequestSpec(image=img_props) capabilities = { 'enabled': True, 'supported_instances': [( obj_fields.Architecture.X86_64, obj_fields.HVType.KVM, obj_fields.VMMode.HVM)], 'hypervisor_version': 5000} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_image_properties_filter_pv_mode_compat(self): # if an old image has 'pv' for a vm_mode it should be treated as xen img_props = objects.ImageMeta( properties=objects.ImageMetaProps( hw_vm_mode='pv')) hypervisor_version = versionutils.convert_version_to_int('6.0.0') spec_obj = objects.RequestSpec(image=img_props) capabilities = { 'supported_instances': [( obj_fields.Architecture.X86_64, obj_fields.HVType.XEN, obj_fields.VMMode.XEN)], 'hypervisor_version': hypervisor_version} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertTrue(self.filt_cls.host_passes(host, 
spec_obj)) def test_image_properties_filter_hvm_mode_compat(self): # if an old image has 'hv' for a vm_mode it should be treated as xen img_props = objects.ImageMeta( properties=objects.ImageMetaProps( hw_vm_mode='hv')) hypervisor_version = versionutils.convert_version_to_int('6.0.0') spec_obj = objects.RequestSpec(image=img_props) capabilities = { 'supported_instances': [( obj_fields.Architecture.X86_64, obj_fields.HVType.KVM, obj_fields.VMMode.HVM)], 'hypervisor_version': hypervisor_version} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_image_properties_filter_xen_arch_compat(self): # if an old image has 'x86_32' for arch it should be treated as i686 img_props = objects.ImageMeta( properties=objects.ImageMetaProps( hw_architecture='x86_32')) hypervisor_version = versionutils.convert_version_to_int('6.0.0') spec_obj = objects.RequestSpec(image=img_props) capabilities = { 'supported_instances': [( obj_fields.Architecture.I686, obj_fields.HVType.KVM, obj_fields.VMMode.HVM)], 'hypervisor_version': hypervisor_version} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_image_properties_filter_xen_hv_type_compat(self): # if an old image has 'xapi' for hv_type it should be treated as xen img_props = objects.ImageMeta( properties=objects.ImageMetaProps( img_hv_type='xapi')) hypervisor_version = versionutils.convert_version_to_int('6.0.0') spec_obj = objects.RequestSpec(image=img_props) capabilities = { 'supported_instances': [( obj_fields.Architecture.I686, obj_fields.HVType.XEN, obj_fields.VMMode.HVM)], 'hypervisor_version': hypervisor_version} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_image_properties_filter_baremetal_vmmode_compat(self): # if an old image has 'baremetal' for vmmode it should be # treated as hvm img_props = objects.ImageMeta( properties=objects.ImageMetaProps( hw_vm_mode='baremetal')) hypervisor_version = versionutils.convert_version_to_int('6.0.0') spec_obj = objects.RequestSpec(image=img_props) capabilities = { 'supported_instances': [( obj_fields.Architecture.I686, obj_fields.HVType.BAREMETAL, obj_fields.VMMode.HVM)], 'hypervisor_version': hypervisor_version} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/filters/test_io_ops_filters.py0000664000175000017500000000550300000000000025734 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
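# Illustrative sketch (hypothetical helper, not from the nova source tree):
# the IoOpsFilter tests below assert a strict threshold -- with
# max_io_ops_per_host set to 8, a host already running 7 I/O-intensive
# operations still passes, while a host running exactly 8 is rejected.
# Reduced to a stand-alone predicate, that behaviour is simply:


def _illustrative_io_ops_filter_passes(num_io_ops, max_io_ops_per_host):
    """Hypothetical helper: pass only while strictly under the limit."""
    return num_io_ops < max_io_ops_per_host


# _illustrative_io_ops_filter_passes(7, 8) -> True
# _illustrative_io_ops_filter_passes(8, 8) -> False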
import mock from nova import objects from nova.scheduler.filters import io_ops_filter from nova import test from nova.tests.unit.scheduler import fakes class TestIoOpsFilter(test.NoDBTestCase): def test_filter_num_iops_passes(self): self.flags(max_io_ops_per_host=8, group='filter_scheduler') self.filt_cls = io_ops_filter.IoOpsFilter() host = fakes.FakeHostState('host1', 'node1', {'num_io_ops': 7}) spec_obj = objects.RequestSpec() self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_filter_num_iops_fails(self): self.flags(max_io_ops_per_host=8, group='filter_scheduler') self.filt_cls = io_ops_filter.IoOpsFilter() host = fakes.FakeHostState('host1', 'node1', {'num_io_ops': 8}) spec_obj = objects.RequestSpec() self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key') def test_aggregate_filter_num_iops_value(self, agg_mock): self.flags(max_io_ops_per_host=7, group='filter_scheduler') self.filt_cls = io_ops_filter.AggregateIoOpsFilter() host = fakes.FakeHostState('host1', 'node1', {'num_io_ops': 7}) spec_obj = objects.RequestSpec(context=mock.sentinel.ctx) agg_mock.return_value = set([]) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) agg_mock.assert_called_once_with(host, 'max_io_ops_per_host') agg_mock.return_value = set(['8']) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key') def test_aggregate_filter_num_iops_value_error(self, agg_mock): self.flags(max_io_ops_per_host=8, group='filter_scheduler') self.filt_cls = io_ops_filter.AggregateIoOpsFilter() host = fakes.FakeHostState('host1', 'node1', {'num_io_ops': 7}) agg_mock.return_value = set(['XXX']) spec_obj = objects.RequestSpec(context=mock.sentinel.ctx) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) agg_mock.assert_called_once_with(host, 'max_io_ops_per_host') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/filters/test_isolated_hosts_filter.py0000664000175000017500000001102600000000000027302 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_utils.fixture import uuidsentinel as uuids from nova import objects from nova.scheduler.filters import isolated_hosts_filter from nova import test from nova.tests.unit.scheduler import fakes class TestIsolatedHostsFilter(test.NoDBTestCase): def setUp(self): super(TestIsolatedHostsFilter, self).setUp() self.filt_cls = isolated_hosts_filter.IsolatedHostsFilter() def _do_test_isolated_hosts(self, host_in_list, image_in_list, set_flags=True, restrict_isolated_hosts_to_isolated_images=True): if set_flags: self.flags(isolated_images=[uuids.image_ref], isolated_hosts=['isolated_host'], restrict_isolated_hosts_to_isolated_images= restrict_isolated_hosts_to_isolated_images, group='filter_scheduler') host_name = 'isolated_host' if host_in_list else 'free_host' image_ref = uuids.image_ref if image_in_list else uuids.fake_image_ref spec_obj = objects.RequestSpec(image=objects.ImageMeta(id=image_ref)) host = fakes.FakeHostState(host_name, 'node', {}) return self.filt_cls.host_passes(host, spec_obj) def test_isolated_hosts_fails_isolated_on_non_isolated(self): self.assertFalse(self._do_test_isolated_hosts(False, True)) def test_isolated_hosts_fails_non_isolated_on_isolated(self): self.assertFalse(self._do_test_isolated_hosts(True, False)) def test_isolated_hosts_passes_isolated_on_isolated(self): self.assertTrue(self._do_test_isolated_hosts(True, True)) def test_isolated_hosts_passes_non_isolated_on_non_isolated(self): self.assertTrue(self._do_test_isolated_hosts(False, False)) def test_isolated_hosts_no_config(self): # If there are no hosts nor isolated images in the config, it should # not filter at all. This is the default config. self.assertTrue(self._do_test_isolated_hosts(False, True, False)) self.assertTrue(self._do_test_isolated_hosts(True, False, False)) self.assertTrue(self._do_test_isolated_hosts(True, True, False)) self.assertTrue(self._do_test_isolated_hosts(False, False, False)) def test_isolated_hosts_no_hosts_config(self): self.flags(isolated_images=[uuids.image_ref], group='filter_scheduler') # If there are no hosts in the config, it should only filter out # images that are listed self.assertFalse(self._do_test_isolated_hosts(False, True, False)) self.assertTrue(self._do_test_isolated_hosts(True, False, False)) self.assertFalse(self._do_test_isolated_hosts(True, True, False)) self.assertTrue(self._do_test_isolated_hosts(False, False, False)) def test_isolated_hosts_no_images_config(self): self.flags(isolated_hosts=['isolated_host'], group='filter_scheduler') # If there are no images in the config, it should only filter out # isolated_hosts self.assertTrue(self._do_test_isolated_hosts(False, True, False)) self.assertFalse(self._do_test_isolated_hosts(True, False, False)) self.assertFalse(self._do_test_isolated_hosts(True, True, False)) self.assertTrue(self._do_test_isolated_hosts(False, False, False)) def test_isolated_hosts_less_restrictive(self): # If there are isolated hosts and non isolated images self.assertTrue(self._do_test_isolated_hosts(True, False, True, False)) # If there are isolated hosts and isolated images self.assertTrue(self._do_test_isolated_hosts(True, True, True, False)) # If there are non isolated hosts and non isolated images self.assertTrue(self._do_test_isolated_hosts(False, False, True, False)) # If there are non isolated hosts and isolated images self.assertFalse(self._do_test_isolated_hosts(False, True, True, False)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/nova/tests/unit/scheduler/filters/test_json_filters.py0000664000175000017500000002576200000000000025426 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils from nova import objects from nova.scheduler.filters import json_filter from nova import test from nova.tests.unit.scheduler import fakes class TestJsonFilter(test.NoDBTestCase): def setUp(self): super(TestJsonFilter, self).setUp() self.filt_cls = json_filter.JsonFilter() self.json_query = jsonutils.dumps( ['and', ['>=', '$free_ram_mb', 1024], ['>=', '$free_disk_mb', 200 * 1024]]) def test_json_filter_passes(self): spec_obj = objects.RequestSpec( flavor=objects.Flavor(memory_mb=1024, root_gb=200, ephemeral_gb=0), scheduler_hints=dict(query=[self.json_query])) host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 1024, 'free_disk_mb': 200 * 1024}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_json_filter_passes_with_no_query(self): spec_obj = objects.RequestSpec( flavor=objects.Flavor(memory_mb=1024, root_gb=200, ephemeral_gb=0), scheduler_hints=None) host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 0, 'free_disk_mb': 0}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_json_filter_fails_on_memory(self): spec_obj = objects.RequestSpec( flavor=objects.Flavor(memory_mb=1024, root_gb=200, ephemeral_gb=0), scheduler_hints=dict(query=[self.json_query])) host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 1023, 'free_disk_mb': 200 * 1024}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_json_filter_fails_on_disk(self): spec_obj = objects.RequestSpec( flavor=objects.Flavor(memory_mb=1024, root_gb=200, ephemeral_gb=0), scheduler_hints=dict(query=[self.json_query])) host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 1024, 'free_disk_mb': (200 * 1024) - 1}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_json_filter_fails_on_service_disabled(self): json_query = jsonutils.dumps( ['and', ['>=', '$free_ram_mb', 1024], ['>=', '$free_disk_mb', 200 * 1024], ['not', '$service.disabled']]) spec_obj = objects.RequestSpec( flavor=objects.Flavor(memory_mb=1024, local_gb=200), scheduler_hints=dict(query=[json_query])) host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 1024, 'free_disk_mb': 200 * 1024}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_json_filter_happy_day(self): # Test json filter more thoroughly. 
raw = ['and', '$capabilities.enabled', ['=', '$capabilities.opt1', 'match'], ['or', ['and', ['<', '$free_ram_mb', 30], ['<', '$free_disk_mb', 300]], ['and', ['>', '$free_ram_mb', 30], ['>', '$free_disk_mb', 300]]]] spec_obj = objects.RequestSpec( scheduler_hints=dict(query=[jsonutils.dumps(raw)])) # Passes capabilities = {'opt1': 'match'} service = {'disabled': False} host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 10, 'free_disk_mb': 200, 'capabilities': capabilities, 'service': service}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) # Passes capabilities = {'opt1': 'match'} service = {'disabled': False} host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 40, 'free_disk_mb': 400, 'capabilities': capabilities, 'service': service}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) # Fails due to capabilities being disabled capabilities = {'enabled': False, 'opt1': 'match'} service = {'disabled': False} host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 40, 'free_disk_mb': 400, 'capabilities': capabilities, 'service': service}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) # Fails due to being exact memory/disk we don't want capabilities = {'enabled': True, 'opt1': 'match'} service = {'disabled': False} host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 30, 'free_disk_mb': 300, 'capabilities': capabilities, 'service': service}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) # Fails due to memory lower but disk higher capabilities = {'enabled': True, 'opt1': 'match'} service = {'disabled': False} host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 20, 'free_disk_mb': 400, 'capabilities': capabilities, 'service': service}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) # Fails due to capabilities 'opt1' not equal capabilities = {'enabled': True, 'opt1': 'no-match'} service = {'enabled': True} host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 20, 'free_disk_mb': 400, 'capabilities': capabilities, 'service': service}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_json_filter_basic_operators(self): host = fakes.FakeHostState('host1', 'node1', {}) # (operator, arguments, expected_result) ops_to_test = [ ['=', [1, 1], True], ['=', [1, 2], False], ['<', [1, 2], True], ['<', [1, 1], False], ['<', [2, 1], False], ['>', [2, 1], True], ['>', [2, 2], False], ['>', [2, 3], False], ['<=', [1, 2], True], ['<=', [1, 1], True], ['<=', [2, 1], False], ['>=', [2, 1], True], ['>=', [2, 2], True], ['>=', [2, 3], False], ['in', [1, 1], True], ['in', [1, 1, 2, 3], True], ['in', [4, 1, 2, 3], False], ['not', [True], False], ['not', [False], True], ['or', [True, False], True], ['or', [False, False], False], ['and', [True, True], True], ['and', [False, False], False], ['and', [True, False], False], # Nested ((True or False) and (2 > 1)) == Passes ['and', [['or', True, False], ['>', 2, 1]], True]] for (op, args, expected) in ops_to_test: raw = [op] + args spec_obj = objects.RequestSpec( scheduler_hints=dict( query=[jsonutils.dumps(raw)])) self.assertEqual(expected, self.filt_cls.host_passes(host, spec_obj)) # This results in [False, True, False, True] and if any are True # then it passes... 
raw = ['not', True, False, True, False] spec_obj = objects.RequestSpec( scheduler_hints=dict( query=[jsonutils.dumps(raw)])) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) # This results in [False, False, False] and if any are True # then it passes...which this doesn't raw = ['not', True, True, True] spec_obj = objects.RequestSpec( scheduler_hints=dict( query=[jsonutils.dumps(raw)])) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_json_filter_unknown_operator_raises(self): raw = ['!=', 1, 2] spec_obj = objects.RequestSpec( scheduler_hints=dict( query=[jsonutils.dumps(raw)])) host = fakes.FakeHostState('host1', 'node1', {}) self.assertRaises(KeyError, self.filt_cls.host_passes, host, spec_obj) def test_json_filter_empty_filters_pass(self): host = fakes.FakeHostState('host1', 'node1', {}) raw = [] spec_obj = objects.RequestSpec( scheduler_hints=dict( query=[jsonutils.dumps(raw)])) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) raw = {} spec_obj = objects.RequestSpec( scheduler_hints=dict( query=[jsonutils.dumps(raw)])) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_json_filter_invalid_num_arguments_fails(self): host = fakes.FakeHostState('host1', 'node1', {}) raw = ['>', ['and', ['or', ['not', ['<', ['>=', ['<=', ['in', ]]]]]]]] spec_obj = objects.RequestSpec( scheduler_hints=dict( query=[jsonutils.dumps(raw)])) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) raw = ['>', 1] spec_obj = objects.RequestSpec( scheduler_hints=dict( query=[jsonutils.dumps(raw)])) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_json_filter_unknown_variable_ignored(self): host = fakes.FakeHostState('host1', 'node1', {}) raw = ['=', '$........', 1, 1] spec_obj = objects.RequestSpec( scheduler_hints=dict( query=[jsonutils.dumps(raw)])) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) raw = ['=', '$foo', 2, 2] spec_obj = objects.RequestSpec( scheduler_hints=dict( query=[jsonutils.dumps(raw)])) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/filters/test_metrics_filters.py0000664000175000017500000000473200000000000026115 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
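# Illustrative sketch (hypothetical helper, not from the nova source tree):
# the MetricsFilter tests below check a single property -- every metric named
# in the [metrics] weight_setting option must actually be reported by the
# host.  A host reporting cpu.frequency and numa.membw.current satisfies
# 'cpu.frequency=1, numa.membw.current=2', while a host missing the
# configured 'foo' and 'bar' metrics is rejected.  Expressed as a set
# relation:


def _illustrative_metrics_filter_passes(configured_names, host_metric_names):
    """Hypothetical helper: every configured metric must be present."""
    return set(configured_names).issubset(set(host_metric_names))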
import datetime from nova import objects from nova.scheduler.filters import metrics_filter from nova import test from nova.tests.unit.scheduler import fakes class TestMetricsFilter(test.NoDBTestCase): def test_metrics_filter_pass(self): _ts_now = datetime.datetime(2015, 11, 11, 11, 0, 0) obj1 = objects.MonitorMetric(name='cpu.frequency', value=1000, timestamp=_ts_now, source='nova.virt.libvirt.driver') obj2 = objects.MonitorMetric(name='numa.membw.current', numa_membw_values={"0": 10, "1": 43}, timestamp=_ts_now, source='nova.virt.libvirt.driver') metrics_list = objects.MonitorMetricList(objects=[obj1, obj2]) self.flags(weight_setting=[ 'cpu.frequency=1', 'numa.membw.current=2'], group='metrics') filt_cls = metrics_filter.MetricsFilter() host = fakes.FakeHostState('host1', 'node1', attribute_dict={'metrics': metrics_list}) self.assertTrue(filt_cls.host_passes(host, None)) def test_metrics_filter_missing_metrics(self): _ts_now = datetime.datetime(2015, 11, 11, 11, 0, 0) obj1 = objects.MonitorMetric(name='cpu.frequency', value=1000, timestamp=_ts_now, source='nova.virt.libvirt.driver') metrics_list = objects.MonitorMetricList(objects=[obj1]) self.flags(weight_setting=['foo=1', 'bar=2'], group='metrics') filt_cls = metrics_filter.MetricsFilter() host = fakes.FakeHostState('host1', 'node1', attribute_dict={'metrics': metrics_list}) self.assertFalse(filt_cls.host_passes(host, None)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/filters/test_num_instances_filters.py0000664000175000017500000000573600000000000027322 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova import objects from nova.scheduler.filters import num_instances_filter from nova import test from nova.tests.unit.scheduler import fakes class TestNumInstancesFilter(test.NoDBTestCase): def test_filter_num_instances_passes(self): self.flags(max_instances_per_host=5, group='filter_scheduler') self.filt_cls = num_instances_filter.NumInstancesFilter() host = fakes.FakeHostState('host1', 'node1', {'num_instances': 4}) spec_obj = objects.RequestSpec() self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_filter_num_instances_fails(self): self.flags(max_instances_per_host=5, group='filter_scheduler') self.filt_cls = num_instances_filter.NumInstancesFilter() host = fakes.FakeHostState('host1', 'node1', {'num_instances': 5}) spec_obj = objects.RequestSpec() self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key') def test_filter_aggregate_num_instances_value(self, agg_mock): self.flags(max_instances_per_host=4, group='filter_scheduler') self.filt_cls = num_instances_filter.AggregateNumInstancesFilter() host = fakes.FakeHostState('host1', 'node1', {'num_instances': 5}) spec_obj = objects.RequestSpec(context=mock.sentinel.ctx) agg_mock.return_value = set([]) # No aggregate defined for that host. 
self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) agg_mock.assert_called_once_with(host, 'max_instances_per_host') agg_mock.return_value = set(['6']) # Aggregate defined for that host. self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key') def test_filter_aggregate_num_instances_value_error(self, agg_mock): self.flags(max_instances_per_host=6, group='filter_scheduler') self.filt_cls = num_instances_filter.AggregateNumInstancesFilter() host = fakes.FakeHostState('host1', 'node1', {}) spec_obj = objects.RequestSpec(context=mock.sentinel.ctx) agg_mock.return_value = set(['XXX']) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) agg_mock.assert_called_once_with(host, 'max_instances_per_host') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/scheduler/filters/test_numa_topology_filters.py0000664000175000017500000003432000000000000027337 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import itertools import mock from oslo_utils.fixture import uuidsentinel as uuids from nova import objects from nova.objects import fields from nova.scheduler.filters import numa_topology_filter from nova import test from nova.tests.unit.scheduler import fakes class TestNUMATopologyFilter(test.NoDBTestCase): def setUp(self): super(TestNUMATopologyFilter, self).setUp() self.filt_cls = numa_topology_filter.NUMATopologyFilter() def _get_spec_obj(self, numa_topology, network_metadata=None): image_meta = objects.ImageMeta(properties=objects.ImageMetaProps()) spec_obj = objects.RequestSpec(numa_topology=numa_topology, pci_requests=None, instance_uuid=uuids.fake, flavor=objects.Flavor(extra_specs={}), image=image_meta) if network_metadata: spec_obj.network_metadata = network_metadata return spec_obj def test_numa_topology_filter_pass(self): instance_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell(id=0, cpuset=set([1]), memory=512), objects.InstanceNUMACell(id=1, cpuset=set([3]), memory=512) ]) spec_obj = self._get_spec_obj(numa_topology=instance_topology) host = fakes.FakeHostState('host1', 'node1', {'numa_topology': fakes.NUMA_TOPOLOGY, 'pci_stats': None, 'cpu_allocation_ratio': 16.0, 'ram_allocation_ratio': 1.5}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_numa_topology_filter_numa_instance_no_numa_host_fail(self): instance_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell(id=0, cpuset=set([1]), memory=512), objects.InstanceNUMACell(id=1, cpuset=set([3]), memory=512) ]) spec_obj = self._get_spec_obj(numa_topology=instance_topology) host = fakes.FakeHostState('host1', 'node1', {'pci_stats': None}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_numa_topology_filter_numa_host_no_numa_instance_pass(self): spec_obj = self._get_spec_obj(numa_topology=None) host = fakes.FakeHostState('host1', 'node1', {'numa_topology': fakes.NUMA_TOPOLOGY}) 
self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_numa_topology_filter_fail_fit(self): instance_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell(id=0, cpuset=set([1]), memory=512), objects.InstanceNUMACell(id=1, cpuset=set([2]), memory=512), objects.InstanceNUMACell(id=2, cpuset=set([3]), memory=512) ]) spec_obj = self._get_spec_obj(numa_topology=instance_topology) host = fakes.FakeHostState('host1', 'node1', {'numa_topology': fakes.NUMA_TOPOLOGY, 'pci_stats': None, 'cpu_allocation_ratio': 16.0, 'ram_allocation_ratio': 1.5}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_numa_topology_filter_fail_memory(self): instance_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell(id=0, cpuset=set([1]), memory=1024), objects.InstanceNUMACell(id=1, cpuset=set([3]), memory=512) ]) spec_obj = self._get_spec_obj(numa_topology=instance_topology) host = fakes.FakeHostState('host1', 'node1', {'numa_topology': fakes.NUMA_TOPOLOGY, 'pci_stats': None, 'cpu_allocation_ratio': 16.0, 'ram_allocation_ratio': 1}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_numa_topology_filter_fail_cpu(self): instance_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell(id=0, cpuset=set([1]), memory=512), objects.InstanceNUMACell(id=1, cpuset=set([3, 4, 5]), memory=512)]) spec_obj = self._get_spec_obj(numa_topology=instance_topology) host = fakes.FakeHostState('host1', 'node1', {'numa_topology': fakes.NUMA_TOPOLOGY, 'pci_stats': None, 'cpu_allocation_ratio': 1, 'ram_allocation_ratio': 1.5}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_numa_topology_filter_pass_set_limit(self): instance_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell(id=0, cpuset=set([1]), memory=512), objects.InstanceNUMACell(id=1, cpuset=set([3]), memory=512) ]) spec_obj = self._get_spec_obj(numa_topology=instance_topology) host = fakes.FakeHostState('host1', 'node1', {'numa_topology': fakes.NUMA_TOPOLOGY, 'pci_stats': None, 'cpu_allocation_ratio': 21, 'ram_allocation_ratio': 1.3}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) limits = host.limits['numa_topology'] self.assertEqual(limits.cpu_allocation_ratio, 21) self.assertEqual(limits.ram_allocation_ratio, 1.3) @mock.patch('nova.objects.instance_numa.InstanceNUMACell' '.cpu_pinning_requested', return_value=True) def _do_test_numa_topology_filter_cpu_policy( self, numa_topology, cpu_policy, cpu_thread_policy, passes, mock_pinning_requested): instance_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell(id=0, cpuset=set([1]), memory=512), objects.InstanceNUMACell(id=1, cpuset=set([3]), memory=512) ]) spec_obj = objects.RequestSpec(numa_topology=instance_topology, pci_requests=None, instance_uuid=uuids.fake) extra_specs = [ {}, { 'hw:cpu_policy': cpu_policy, 'hw:cpu_thread_policy': cpu_thread_policy, } ] image_props = [ {}, { 'hw_cpu_policy': cpu_policy, 'hw_cpu_thread_policy': cpu_thread_policy, } ] host = fakes.FakeHostState('host1', 'node1', { 'numa_topology': numa_topology, 'pci_stats': None, 'cpu_allocation_ratio': 1, 'ram_allocation_ratio': 1.5}) assertion = self.assertTrue if passes else self.assertFalse # test combinations of image properties and extra specs for specs, props in itertools.product(extra_specs, image_props): # ...except for the one where no policy is specified if specs == props == {}: continue fake_flavor = objects.Flavor(memory_mb=1024, extra_specs=specs) fake_image_props = 
objects.ImageMetaProps(**props) fake_image = objects.ImageMeta(properties=fake_image_props) spec_obj.image = fake_image spec_obj.flavor = fake_flavor assertion(self.filt_cls.host_passes(host, spec_obj)) self.assertIsNone(spec_obj.numa_topology.cells[0].cpu_pinning) def test_numa_topology_filter_fail_cpu_thread_policy_require(self): cpu_policy = fields.CPUAllocationPolicy.DEDICATED cpu_thread_policy = fields.CPUThreadAllocationPolicy.REQUIRE numa_topology = fakes.NUMA_TOPOLOGY self._do_test_numa_topology_filter_cpu_policy( numa_topology, cpu_policy, cpu_thread_policy, False) def test_numa_topology_filter_pass_cpu_thread_policy_require(self): cpu_policy = fields.CPUAllocationPolicy.DEDICATED cpu_thread_policy = fields.CPUThreadAllocationPolicy.REQUIRE for numa_topology in fakes.NUMA_TOPOLOGIES_W_HT: self._do_test_numa_topology_filter_cpu_policy( numa_topology, cpu_policy, cpu_thread_policy, True) def test_numa_topology_filter_pass_cpu_thread_policy_others(self): cpu_policy = fields.CPUAllocationPolicy.DEDICATED numa_topology = fakes.NUMA_TOPOLOGY for cpu_thread_policy in [ fields.CPUThreadAllocationPolicy.PREFER, fields.CPUThreadAllocationPolicy.ISOLATE]: self._do_test_numa_topology_filter_cpu_policy( numa_topology, cpu_policy, cpu_thread_policy, True) def test_numa_topology_filter_pass_mempages(self): instance_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell(id=0, cpuset=set([3]), memory=128, pagesize=4), objects.InstanceNUMACell(id=1, cpuset=set([1]), memory=128, pagesize=16) ]) spec_obj = self._get_spec_obj(numa_topology=instance_topology) host = fakes.FakeHostState('host1', 'node1', {'numa_topology': fakes.NUMA_TOPOLOGY, 'pci_stats': None, 'cpu_allocation_ratio': 16.0, 'ram_allocation_ratio': 1.5}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_numa_topology_filter_fail_mempages(self): instance_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell(id=0, cpuset=set([3]), memory=128, pagesize=8), objects.InstanceNUMACell(id=1, cpuset=set([1]), memory=128, pagesize=16) ]) spec_obj = self._get_spec_obj(numa_topology=instance_topology) host = fakes.FakeHostState('host1', 'node1', {'numa_topology': fakes.NUMA_TOPOLOGY, 'pci_stats': None, 'cpu_allocation_ratio': 16.0, 'ram_allocation_ratio': 1.5}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def _get_fake_host_state_with_networks(self): network_a = objects.NetworkMetadata(physnets=set(['foo', 'bar']), tunneled=False) network_b = objects.NetworkMetadata(physnets=set(), tunneled=True) host_topology = objects.NUMATopology(cells=[ objects.NUMACell( id=1, cpuset=set([1, 2]), pcpuset=set(), memory=2048, cpu_usage=2, memory_usage=2048, mempages=[], siblings=[set([1]), set([2])], pinned_cpus=set(), network_metadata=network_a), objects.NUMACell( id=2, cpuset=set([3, 4]), pcpuset=set(), memory=2048, cpu_usage=2, memory_usage=2048, mempages=[], siblings=[set([3]), set([4])], pinned_cpus=set(), network_metadata=network_b)]) return fakes.FakeHostState('host1', 'node1', { 'numa_topology': host_topology, 'pci_stats': None, 'cpu_allocation_ratio': 16.0, 'ram_allocation_ratio': 1.5}) def test_numa_topology_filter_pass_networks(self): host = self._get_fake_host_state_with_networks() instance_topology = objects.InstanceNUMATopology(cells=[ objects.InstanceNUMACell(id=0, cpuset=set([1]), memory=512), objects.InstanceNUMACell(id=1, cpuset=set([3]), memory=512)]) network_metadata = objects.NetworkMetadata( physnets=set(['foo']), tunneled=False) spec_obj = 
self._get_spec_obj(numa_topology=instance_topology, network_metadata=network_metadata) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) # this should pass because while the networks are affined to different # host NUMA nodes, our guest itself has multiple NUMA nodes network_metadata = objects.NetworkMetadata( physnets=set(['foo', 'bar']), tunneled=True) spec_obj = self._get_spec_obj(numa_topology=instance_topology, network_metadata=network_metadata) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_numa_topology_filter_fail_networks(self): host = self._get_fake_host_state_with_networks() instance_topology = objects.InstanceNUMATopology(cells=[ objects.InstanceNUMACell(id=0, cpuset=set([1]), memory=512)]) # this should fail because the networks are affined to different host # NUMA nodes but our guest only has a single NUMA node network_metadata = objects.NetworkMetadata( physnets=set(['foo']), tunneled=True) spec_obj = self._get_spec_obj(numa_topology=instance_topology, network_metadata=network_metadata) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/scheduler/filters/test_pci_passthrough_filters.py0000664000175000017500000000720300000000000027645 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
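# Illustrative sketch (hypothetical helper, not from the nova source tree):
# the PciPassthroughFilter tests below assert three outcomes -- a request
# without PCI devices always passes, a request for PCI devices fails when the
# host has no usable pci_stats, and otherwise the verdict is delegated to
# pci_stats.support_requests().  Assuming duck-typed request and stats
# objects, that decision can be sketched as:


def _illustrative_pci_filter_passes(pci_requests, pci_stats):
    """Hypothetical helper mirroring the delegation asserted below."""
    if not pci_requests or not pci_requests.requests:
        return True
    if not pci_stats:
        # Covers both a missing and an empty stats object.
        return False
    return pci_stats.support_requests(pci_requests.requests)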
import mock from nova import objects from nova.pci import stats from nova.scheduler.filters import pci_passthrough_filter from nova import test from nova.tests.unit.scheduler import fakes class TestPCIPassthroughFilter(test.NoDBTestCase): def setUp(self): super(TestPCIPassthroughFilter, self).setUp() self.filt_cls = pci_passthrough_filter.PciPassthroughFilter() def test_pci_passthrough_pass(self): pci_stats_mock = mock.MagicMock() pci_stats_mock.support_requests.return_value = True request = objects.InstancePCIRequest(count=1, spec=[{'vendor_id': '8086'}]) requests = objects.InstancePCIRequests(requests=[request]) spec_obj = objects.RequestSpec(pci_requests=requests) host = fakes.FakeHostState( 'host1', 'node1', attribute_dict={'pci_stats': pci_stats_mock}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) pci_stats_mock.support_requests.assert_called_once_with( requests.requests) def test_pci_passthrough_fail(self): pci_stats_mock = mock.MagicMock() pci_stats_mock.support_requests.return_value = False request = objects.InstancePCIRequest(count=1, spec=[{'vendor_id': '8086'}]) requests = objects.InstancePCIRequests(requests=[request]) spec_obj = objects.RequestSpec(pci_requests=requests) host = fakes.FakeHostState( 'host1', 'node1', attribute_dict={'pci_stats': pci_stats_mock}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) pci_stats_mock.support_requests.assert_called_once_with( requests.requests) def test_pci_passthrough_no_pci_request(self): spec_obj = objects.RequestSpec(pci_requests=None) host = fakes.FakeHostState('h1', 'n1', {}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_pci_passthrough_empty_pci_request_obj(self): requests = objects.InstancePCIRequests(requests=[]) spec_obj = objects.RequestSpec(pci_requests=requests) host = fakes.FakeHostState('h1', 'n1', {}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_pci_passthrough_no_pci_stats(self): request = objects.InstancePCIRequest(count=1, spec=[{'vendor_id': '8086'}]) requests = objects.InstancePCIRequests(requests=[request]) spec_obj = objects.RequestSpec(pci_requests=requests) host = fakes.FakeHostState('host1', 'node1', attribute_dict={'pci_stats': stats.PciDeviceStats()}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_pci_passthrough_with_pci_stats_none(self): request = objects.InstancePCIRequest(count=1, spec=[{'vendor_id': '8086'}]) requests = objects.InstancePCIRequests(requests=[request]) spec_obj = objects.RequestSpec(pci_requests=requests) host = fakes.FakeHostState('host1', 'node1', attribute_dict={'pci_stats': None}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/scheduler/filters/test_ram_filters.py0000664000175000017500000000544000000000000025223 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
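# ---------------------------------------------------------------------------
# Editor's note (illustrative sketch, not part of the upstream Nova source):
# the AggregateRamFilter tests in this file assert three things: the RAM
# allocation ratio comes from aggregate metadata when present (the smallest
# value wins when aggregates disagree), it falls back to the host's
# configured ram_allocation_ratio when the metadata is missing or not a
# number, and the resulting limit is recorded as host.limits['memory_mb'].
# A self-contained approximation of that check, using a plain dict instead
# of HostState/RequestSpec objects, is sketched below.
# ---------------------------------------------------------------------------
def _sketch_ram_filter_host_passes(host, requested_mb, aggregate_ratios=()):
    """Simplified stand-in for the (Aggregate)RamFilter host check.

    `host` is a dict with 'free_ram_mb', 'total_usable_ram_mb',
    'ram_allocation_ratio' and a 'limits' dict; `aggregate_ratios` is the set
    of ram_allocation_ratio values found in the host's aggregate metadata.
    """
    ratio = host['ram_allocation_ratio']
    try:
        if aggregate_ratios:
            # When several aggregates define a ratio, use the most
            # conservative (smallest) one.
            ratio = min(float(value) for value in aggregate_ratios)
    except ValueError:
        # Non-numeric metadata such as 'XXX': keep the configured ratio.
        pass
    memory_mb_limit = host['total_usable_ram_mb'] * ratio
    used_ram_mb = host['total_usable_ram_mb'] - host['free_ram_mb']
    usable_ram_mb = memory_mb_limit - used_ram_mb
    # Record the oversubscription limit, which the tests read back via
    # host.limits['memory_mb'].
    host['limits']['memory_mb'] = memory_mb_limit
    return usable_ram_mb >= requested_mb

# Example mirroring test_aggregate_ram_filter_conflict_values: ratios
# {'1.5', '2.0'} give a 1536 MB limit, so a 1024 MB flavor still fits:
#   host = {'free_ram_mb': 1023, 'total_usable_ram_mb': 1024,
#           'ram_allocation_ratio': 1.0, 'limits': {}}
#   assert _sketch_ram_filter_host_passes(host, 1024, {'1.5', '2.0'})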
import mock from nova import objects from nova.scheduler.filters import ram_filter from nova import test from nova.tests.unit.scheduler import fakes @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key') class TestAggregateRamFilter(test.NoDBTestCase): def setUp(self): super(TestAggregateRamFilter, self).setUp() self.filt_cls = ram_filter.AggregateRamFilter() def test_aggregate_ram_filter_value_error(self, agg_mock): spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor(memory_mb=1024)) host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 1024, 'total_usable_ram_mb': 1024, 'ram_allocation_ratio': 1.0}) agg_mock.return_value = set(['XXX']) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) self.assertEqual(1024 * 1.0, host.limits['memory_mb']) def test_aggregate_ram_filter_default_value(self, agg_mock): spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor(memory_mb=1024)) host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 1023, 'total_usable_ram_mb': 1024, 'ram_allocation_ratio': 1.0}) # False: fallback to default flag w/o aggregates agg_mock.return_value = set() self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) agg_mock.return_value = set(['2.0']) # True: use ratio from aggregates self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) self.assertEqual(1024 * 2.0, host.limits['memory_mb']) def test_aggregate_ram_filter_conflict_values(self, agg_mock): spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor(memory_mb=1024)) host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 1023, 'total_usable_ram_mb': 1024, 'ram_allocation_ratio': 1.0}) agg_mock.return_value = set(['1.5', '2.0']) # use the minimum ratio from aggregates self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) self.assertEqual(1024 * 1.5, host.limits['memory_mb']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/scheduler/filters/test_retry_filters.py0000664000175000017500000000446600000000000025620 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import objects from nova.scheduler.filters import retry_filter from nova import test from nova.tests.unit.scheduler import fakes class TestRetryFilter(test.NoDBTestCase): def setUp(self): super(TestRetryFilter, self).setUp() self.filt_cls = retry_filter.RetryFilter() self.assertIn('The RetryFilter is deprecated', self.stdlog.logger.output) def test_retry_filter_disabled(self): # Test case where retry/re-scheduling is disabled. host = fakes.FakeHostState('host1', 'node1', {}) spec_obj = objects.RequestSpec(retry=None) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_retry_filter_pass(self): # Node not previously tried. 
host = fakes.FakeHostState('host1', 'nodeX', {}) retry = objects.SchedulerRetries( num_attempts=2, hosts=objects.ComputeNodeList(objects=[ # same host, different node objects.ComputeNode(host='host1', hypervisor_hostname='node1'), # different host and node objects.ComputeNode(host='host2', hypervisor_hostname='node2'), ])) spec_obj = objects.RequestSpec(retry=retry) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_retry_filter_fail(self): # Node was already tried. host = fakes.FakeHostState('host1', 'node1', {}) retry = objects.SchedulerRetries( num_attempts=1, hosts=objects.ComputeNodeList(objects=[ objects.ComputeNode(host='host1', hypervisor_hostname='node1') ])) spec_obj = objects.RequestSpec(retry=retry) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/filters/test_type_filters.py0000664000175000017500000001132100000000000025420 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova import objects from nova.scheduler.filters import type_filter from nova import test from nova.tests.unit.scheduler import fakes class TestTypeFilter(test.NoDBTestCase): @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key') def test_aggregate_type_filter_no_metadata(self, agg_mock): self.filt_cls = type_filter.AggregateTypeAffinityFilter() spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor(name='fake1')) host = fakes.FakeHostState('fake_host', 'fake_node', {}) # tests when no instance_type is defined for aggregate agg_mock.return_value = set([]) # True as no instance_type set for aggregate self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) agg_mock.assert_called_once_with(host, 'instance_type') @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key') def test_aggregate_type_filter_single_instance_type(self, agg_mock): self.filt_cls = type_filter.AggregateTypeAffinityFilter() spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor(name='fake1')) spec_obj2 = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor(name='fake2')) host = fakes.FakeHostState('fake_host', 'fake_node', {}) # tests when a single instance_type is defined for an aggregate # using legacy single value syntax agg_mock.return_value = set(['fake1']) # True as instance_type is allowed for aggregate self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) # False as instance_type is not allowed for aggregate self.assertFalse(self.filt_cls.host_passes(host, spec_obj2)) @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key') def test_aggregate_type_filter_multi_aggregate(self, agg_mock): self.filt_cls = type_filter.AggregateTypeAffinityFilter() spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor(name='fake1')) spec_obj2 = objects.RequestSpec( context=mock.sentinel.ctx, 
flavor=objects.Flavor(name='fake2')) spec_obj3 = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor(name='fake3')) host = fakes.FakeHostState('fake_host', 'fake_node', {}) # tests when a single instance_type is defined for multiple aggregates # using legacy single value syntax agg_mock.return_value = set(['fake1', 'fake2']) # True as instance_type is allowed for first aggregate self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) # True as instance_type is allowed for second aggregate self.assertTrue(self.filt_cls.host_passes(host, spec_obj2)) # False as instance_type is not allowed for aggregates self.assertFalse(self.filt_cls.host_passes(host, spec_obj3)) @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key') def test_aggregate_type_filter_multi_instance_type(self, agg_mock): self.filt_cls = type_filter.AggregateTypeAffinityFilter() spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor(name='fake1')) spec_obj2 = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor(name='fake2')) spec_obj3 = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor(name='fake3')) host = fakes.FakeHostState('fake_host', 'fake_node', {}) # tests when multiple instance_types are defined for aggregate agg_mock.return_value = set(['fake1,fake2']) # True as instance_type is allowed for aggregate self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) # True as instance_type is allowed for aggregate self.assertTrue(self.filt_cls.host_passes(host, spec_obj2)) # False as instance_type is not allowed for aggregate self.assertFalse(self.filt_cls.host_passes(host, spec_obj3)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/filters/test_utils.py0000664000175000017500000000754100000000000024060 0ustar00zuulzuul00000000000000# Copyright 2015 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
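# ---------------------------------------------------------------------------
# Editor's note (illustrative sketch, not part of the upstream Nova source):
# the utils tests in this file pivot on two helpers.
# aggregate_values_from_key() collects the raw metadata value each of the
# host's aggregates stores under a key (so 'k1' yields {'1', '3', '6,7'}),
# while aggregate_metadata_get_by_host() additionally splits comma-separated
# values and strips whitespace (so 'k1' yields {'1', '3', '6', '7'}). A
# dictionary-based approximation, assuming each aggregate is represented as
# {'metadata': {...}} rather than an objects.Aggregate, is sketched below.
# ---------------------------------------------------------------------------
import collections


def _sketch_aggregate_values_from_key(aggregates, key_name):
    """Return the raw values stored under key_name across all aggregates."""
    return {agg['metadata'][key_name]
            for agg in aggregates if key_name in agg['metadata']}


def _sketch_aggregate_metadata_get_by_host(aggregates, key=None):
    """Return {key: set(values)}, splitting comma-separated entries."""
    metadata = collections.defaultdict(set)
    for agg in aggregates:
        for meta_key, value in agg['metadata'].items():
            if key and meta_key != key:
                continue
            metadata[meta_key].update(v.strip() for v in value.split(','))
    return dict(metadata)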
from oslo_utils.fixture import uuidsentinel as uuids from nova import objects from nova.scheduler.filters import utils from nova import test from nova.tests.unit.scheduler import fakes _AGGREGATE_FIXTURES = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'k1': '1', 'k2': '2'}, ), objects.Aggregate( id=2, name='bar', hosts=['fake-host'], metadata={'k1': '3', 'k2': '4'}, ), objects.Aggregate( id=3, name='bar', hosts=['fake-host'], metadata={'k1': '6,7', 'k2': '8, 9'}, ), ] class TestUtils(test.NoDBTestCase): def test_aggregate_values_from_key(self): host_state = fakes.FakeHostState( 'fake', 'node', {'aggregates': _AGGREGATE_FIXTURES}) values = utils.aggregate_values_from_key(host_state, key_name='k1') self.assertEqual(set(['1', '3', '6,7']), values) def test_aggregate_values_from_key_with_wrong_key(self): host_state = fakes.FakeHostState( 'fake', 'node', {'aggregates': _AGGREGATE_FIXTURES}) values = utils.aggregate_values_from_key(host_state, key_name='k3') self.assertEqual(set(), values) def test_aggregate_metadata_get_by_host_no_key(self): host_state = fakes.FakeHostState( 'fake', 'node', {'aggregates': _AGGREGATE_FIXTURES}) metadata = utils.aggregate_metadata_get_by_host(host_state) self.assertIn('k1', metadata) self.assertEqual(set(['1', '3', '7', '6']), metadata['k1']) self.assertIn('k2', metadata) self.assertEqual(set(['9', '8', '2', '4']), metadata['k2']) def test_aggregate_metadata_get_by_host_with_key(self): host_state = fakes.FakeHostState( 'fake', 'node', {'aggregates': _AGGREGATE_FIXTURES}) metadata = utils.aggregate_metadata_get_by_host(host_state, 'k1') self.assertIn('k1', metadata) self.assertEqual(set(['1', '3', '7', '6']), metadata['k1']) def test_aggregate_metadata_get_by_host_empty_result(self): host_state = fakes.FakeHostState( 'fake', 'node', {'aggregates': []}) metadata = utils.aggregate_metadata_get_by_host(host_state, 'k3') self.assertEqual({}, metadata) def test_validate_num_values(self): f = utils.validate_num_values self.assertEqual("x", f(set(), default="x")) self.assertEqual(1, f(set(["1"]), cast_to=int)) self.assertEqual(1.0, f(set(["1"]), cast_to=float)) self.assertEqual(1, f(set([1, 2]), based_on=min)) self.assertEqual(2, f(set([1, 2]), based_on=max)) self.assertEqual(9, f(set(['10', '9']), based_on=min)) def test_instance_uuids_overlap(self): inst1 = objects.Instance(uuid=uuids.instance_1) inst2 = objects.Instance(uuid=uuids.instance_2) instances = [inst1, inst2] host_state = fakes.FakeHostState('host1', 'node1', {}) host_state.instances = {instance.uuid: instance for instance in instances} self.assertTrue(utils.instance_uuids_overlap(host_state, [uuids.instance_1])) self.assertFalse(utils.instance_uuids_overlap(host_state, ['zz'])) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/ironic_fakes.py0000664000175000017500000001133200000000000022636 0ustar00zuulzuul00000000000000# Copyright 2014 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """ Fake nodes for Ironic host manager tests. """ from oslo_utils.fixture import uuidsentinel as uuids from nova import objects COMPUTE_NODES = [ objects.ComputeNode( id=1, local_gb=10, memory_mb=1024, vcpus=1, vcpus_used=0, local_gb_used=0, memory_mb_used=0, updated_at=None, cpu_info='baremetal cpu', host='host1', hypervisor_hostname='node1uuid', host_ip='127.0.0.1', hypervisor_version=1, hypervisor_type='ironic', stats=dict(ironic_driver= "nova.virt.ironic.driver.IronicDriver", cpu_arch='i386'), supported_hv_specs=[objects.HVSpec.from_list( ["i386", "baremetal", "baremetal"])], free_disk_gb=10, free_ram_mb=1024, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0, uuid=uuids.compute_node_1), objects.ComputeNode( id=2, local_gb=20, memory_mb=2048, vcpus=1, vcpus_used=0, local_gb_used=0, memory_mb_used=0, updated_at=None, cpu_info='baremetal cpu', host='host2', hypervisor_hostname='node2uuid', host_ip='127.0.0.1', hypervisor_version=1, hypervisor_type='ironic', stats=dict(ironic_driver= "nova.virt.ironic.driver.IronicDriver", cpu_arch='i386'), supported_hv_specs=[objects.HVSpec.from_list( ["i386", "baremetal", "baremetal"])], free_disk_gb=20, free_ram_mb=2048, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0, uuid=uuids.compute_node_2), objects.ComputeNode( id=3, local_gb=30, memory_mb=3072, vcpus=1, vcpus_used=0, local_gb_used=0, memory_mb_used=0, updated_at=None, cpu_info='baremetal cpu', host='host3', hypervisor_hostname='node3uuid', host_ip='127.0.0.1', hypervisor_version=1, hypervisor_type='ironic', stats=dict(ironic_driver= "nova.virt.ironic.driver.IronicDriver", cpu_arch='i386'), supported_hv_specs=[objects.HVSpec.from_list( ["i386", "baremetal", "baremetal"])], free_disk_gb=30, free_ram_mb=3072, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0, uuid=uuids.compute_node_3), objects.ComputeNode( id=4, local_gb=40, memory_mb=4096, vcpus=1, vcpus_used=0, local_gb_used=0, memory_mb_used=0, updated_at=None, cpu_info='baremetal cpu', host='host4', hypervisor_hostname='node4uuid', host_ip='127.0.0.1', hypervisor_version=1, hypervisor_type='ironic', stats=dict(ironic_driver= "nova.virt.ironic.driver.IronicDriver", cpu_arch='i386'), supported_hv_specs=[objects.HVSpec.from_list( ["i386", "baremetal", "baremetal"])], free_disk_gb=40, free_ram_mb=4096, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0, uuid=uuids.compute_node_4), # Broken entry objects.ComputeNode( id=5, local_gb=50, memory_mb=5120, vcpus=1, host='fake', cpu_info='baremetal cpu', stats=dict(ironic_driver= "nova.virt.ironic.driver.IronicDriver", cpu_arch='i386'), supported_hv_specs=[objects.HVSpec.from_list( ["i386", "baremetal", "baremetal"])], free_disk_gb=50, free_ram_mb=5120, hypervisor_hostname='fake-hyp', uuid=uuids.compute_node_5), ] SERVICES = [ objects.Service(host='host1', disabled=False), objects.Service(host='host2', disabled=True), objects.Service(host='host3', disabled=False), objects.Service(host='host4', disabled=False), ] def get_service_by_host(host): services = [service for service in SERVICES if service.host == host] return services[0] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/scheduler/test_filter_scheduler.py0000664000175000017500000015147000000000000024574 0ustar00zuulzuul00000000000000# Copyright 
2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Filter Scheduler. """ import mock from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from nova import context from nova import exception from nova import objects from nova.scheduler import filter_scheduler from nova.scheduler import host_manager from nova.scheduler import utils as scheduler_utils from nova.scheduler import weights from nova import servicegroup from nova import test # noqa fake_numa_limit = objects.NUMATopologyLimits(cpu_allocation_ratio=1.0, ram_allocation_ratio=1.0) fake_limit = {"memory_mb": 1024, "disk_gb": 100, "vcpus": 2, "numa_topology": fake_numa_limit} fake_limit_obj = objects.SchedulerLimits.from_dict(fake_limit) fake_alloc = {"allocations": [ {"resource_provider": {"uuid": uuids.compute_node}, "resources": {"VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100} }]} fake_alloc_version = "1.23" json_alloc = jsonutils.dumps(fake_alloc) fake_selection = objects.Selection(service_host="fake_host", nodename="fake_node", compute_node_uuid=uuids.compute_node, cell_uuid=uuids.cell, limits=fake_limit_obj, allocation_request=json_alloc, allocation_request_version=fake_alloc_version) class FilterSchedulerTestCase(test.NoDBTestCase): """Test case for Filter Scheduler.""" @mock.patch.object(host_manager.HostManager, '_init_instance_info', new=mock.Mock()) @mock.patch.object(host_manager.HostManager, '_init_aggregates', new=mock.Mock()) @mock.patch('nova.scheduler.client.report.SchedulerReportClient', autospec=True, new=mock.Mock()) @mock.patch('nova.scheduler.client.query.SchedulerQueryClient', autospec=True, new=mock.Mock()) def setUp(self): super(FilterSchedulerTestCase, self).setUp() self.driver = filter_scheduler.FilterScheduler() self.context = context.RequestContext('fake_user', 'fake_project') self.topic = 'fake_topic' self.servicegroup_api = servicegroup.API() @mock.patch('nova.objects.ServiceList.get_by_topic') @mock.patch('nova.servicegroup.API.service_is_up') def test_hosts_up(self, mock_service_is_up, mock_get_by_topic): service1 = objects.Service(host='host1') service2 = objects.Service(host='host2') services = objects.ServiceList(objects=[service1, service2]) mock_get_by_topic.return_value = services mock_service_is_up.side_effect = [False, True] result = self.driver.hosts_up(self.context, self.topic) self.assertEqual(result, ['host2']) mock_get_by_topic.assert_called_once_with(self.context, self.topic) calls = [mock.call(service1), mock.call(service2)] self.assertEqual(calls, mock_service_is_up.call_args_list) @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_get_all_host_states') @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' 
'_get_sorted_hosts') def test_schedule_placement_bad_comms(self, mock_get_hosts, mock_get_all_states, mock_claim): """If there was a problem communicating with the Placement service, alloc_reqs_by_rp_uuid will be None and we need to avoid trying to claim in the Placement API. """ spec_obj = objects.RequestSpec( num_instances=1, flavor=objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1, disabled=False, is_public=True, name="small_flavor"), project_id=uuids.project_id, instance_group=None, instance_uuid=uuids.instance) # Reset the RequestSpec changes so they don't interfere with the # assertion at the end of the test. spec_obj.obj_reset_changes(recursive=True) host_state = mock.Mock(spec=host_manager.HostState, host="fake_host", uuid=uuids.cn1, cell_uuid=uuids.cell, nodename="fake_node", limits={}, aggregates=[]) all_host_states = [host_state] mock_get_all_states.return_value = all_host_states visited_instances = set([]) def fake_get_sorted_hosts(_spec_obj, host_states, index): # Keep track of which instances are passed to the filters. visited_instances.add(_spec_obj.instance_uuid) return all_host_states mock_get_hosts.side_effect = fake_get_sorted_hosts instance_uuids = [uuids.instance] ctx = mock.Mock() selected_hosts = self.driver._schedule(ctx, spec_obj, instance_uuids, None, mock.sentinel.provider_summaries) expected_hosts = [[objects.Selection.from_host_state(host_state)]] mock_get_all_states.assert_called_once_with( ctx.elevated.return_value, spec_obj, mock.sentinel.provider_summaries) mock_get_hosts.assert_called_once_with(spec_obj, all_host_states, 0) self.assertEqual(len(selected_hosts), 1) self.assertEqual(expected_hosts, selected_hosts) # Ensure that we have consumed the resources on the chosen host states host_state.consume_from_request.assert_called_once_with(spec_obj) # And ensure we never called claim_resources() self.assertFalse(mock_claim.called) # Make sure that the RequestSpec.instance_uuid is not dirty. self.assertEqual(sorted(instance_uuids), sorted(visited_instances)) self.assertEqual(0, len(spec_obj.obj_what_changed()), spec_obj.obj_what_changed()) @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_get_all_host_states') @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_get_sorted_hosts') def test_schedule_old_conductor(self, mock_get_hosts, mock_get_all_states, mock_claim): """Old conductor can call scheduler without the instance_uuids parameter. When this happens, we need to ensure we do not attempt to claim resources in the placement API since obviously we need instance UUIDs to perform those claims. 
""" group = objects.InstanceGroup(hosts=[]) spec_obj = objects.RequestSpec( num_instances=1, flavor=objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1, disabled=False, is_public=True, name="small_flavor"), project_id=uuids.project_id, instance_group=group) host_state = mock.Mock(spec=host_manager.HostState, host="fake_host", nodename="fake_node", uuid=uuids.cn1, limits={}, cell_uuid=uuids.cell, instances={}, aggregates=[]) all_host_states = [host_state] mock_get_all_states.return_value = all_host_states mock_get_hosts.return_value = all_host_states instance_uuids = None ctx = mock.Mock() selected_hosts = self.driver._schedule(ctx, spec_obj, instance_uuids, mock.sentinel.alloc_reqs_by_rp_uuid, mock.sentinel.provider_summaries) mock_get_all_states.assert_called_once_with( ctx.elevated.return_value, spec_obj, mock.sentinel.provider_summaries) mock_get_hosts.assert_called_once_with(spec_obj, all_host_states, 0) self.assertEqual(len(selected_hosts), 1) expected_host = objects.Selection.from_host_state(host_state) self.assertEqual([[expected_host]], selected_hosts) # Ensure that we have consumed the resources on the chosen host states host_state.consume_from_request.assert_called_once_with(spec_obj) # And ensure we never called claim_resources() self.assertFalse(mock_claim.called) # And that the host is added to the server group but there are no # instances tracked in the host_state. self.assertIn(host_state.host, group.hosts) self.assertEqual(0, len(host_state.instances)) @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_get_all_host_states') @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_get_sorted_hosts') def _test_schedule_successful_claim(self, mock_get_hosts, mock_get_all_states, mock_claim, num_instances=1): spec_obj = objects.RequestSpec( num_instances=num_instances, flavor=objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1, disabled=False, is_public=True, name="small_flavor"), project_id=uuids.project_id, instance_group=None) host_state = mock.Mock(spec=host_manager.HostState, host="fake_host", nodename="fake_node", uuid=uuids.cn1, cell_uuid=uuids.cell1, limits={}, aggregates=[]) all_host_states = [host_state] mock_get_all_states.return_value = all_host_states mock_get_hosts.return_value = all_host_states mock_claim.return_value = True instance_uuids = [uuids.instance] fake_alloc = {"allocations": [ {"resource_provider": {"uuid": uuids.cn1}, "resources": {"VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100} }]} alloc_reqs_by_rp_uuid = {uuids.cn1: [fake_alloc]} ctx = mock.Mock() selected_hosts = self.driver._schedule(ctx, spec_obj, instance_uuids, alloc_reqs_by_rp_uuid, mock.sentinel.provider_summaries) sel_obj = objects.Selection.from_host_state(host_state, allocation_request=fake_alloc) expected_selection = [[sel_obj]] mock_get_all_states.assert_called_once_with( ctx.elevated.return_value, spec_obj, mock.sentinel.provider_summaries) mock_get_hosts.assert_called() mock_claim.assert_called_once_with(ctx.elevated.return_value, self.driver.placement_client, spec_obj, uuids.instance, alloc_reqs_by_rp_uuid[uuids.cn1][0], allocation_request_version=None) self.assertEqual(len(selected_hosts), 1) self.assertEqual(expected_selection, selected_hosts) # Ensure that we have consumed the resources on the chosen host states host_state.consume_from_request.assert_called_once_with(spec_obj) def test_schedule_successful_claim(self): self._test_schedule_successful_claim() def 
test_schedule_old_reqspec_and_move_operation(self): """This test is for verifying that in case of a move operation with an original RequestSpec created for 3 concurrent instances, we only verify the instance that is moved. """ self._test_schedule_successful_claim(num_instances=3) @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_cleanup_allocations') @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_get_all_host_states') @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_get_sorted_hosts') def test_schedule_unsuccessful_claim(self, mock_get_hosts, mock_get_all_states, mock_claim, mock_cleanup): """Tests that we return an empty list if we are unable to successfully claim resources for the instance """ spec_obj = objects.RequestSpec( num_instances=1, flavor=objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1, disabled=False, is_public=True, name="small_flavor"), project_id=uuids.project_id, instance_group=None) host_state = mock.Mock(spec=host_manager.HostState, host=mock.sentinel.host, uuid=uuids.cn1, cell_uuid=uuids.cell1) all_host_states = [host_state] mock_get_all_states.return_value = all_host_states mock_get_hosts.return_value = all_host_states mock_claim.return_value = False instance_uuids = [uuids.instance] alloc_reqs_by_rp_uuid = { uuids.cn1: [{"allocations": mock.sentinel.alloc_req}], } ctx = mock.Mock() fake_version = "1.99" self.assertRaises(exception.NoValidHost, self.driver._schedule, ctx, spec_obj, instance_uuids, alloc_reqs_by_rp_uuid, mock.sentinel.provider_summaries, allocation_request_version=fake_version) mock_get_all_states.assert_called_once_with( ctx.elevated.return_value, spec_obj, mock.sentinel.provider_summaries) mock_get_hosts.assert_called_once_with(spec_obj, all_host_states, 0) mock_claim.assert_called_once_with(ctx.elevated.return_value, self.driver.placement_client, spec_obj, uuids.instance, alloc_reqs_by_rp_uuid[uuids.cn1][0], allocation_request_version=fake_version) mock_cleanup.assert_not_called() # Ensure that we have consumed the resources on the chosen host states self.assertFalse(host_state.consume_from_request.called) @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_cleanup_allocations') @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_get_all_host_states') @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' 
'_get_sorted_hosts') def test_schedule_not_all_instance_clean_claimed(self, mock_get_hosts, mock_get_all_states, mock_claim, mock_cleanup): """Tests that we clean up previously-allocated instances if not all instances could be scheduled """ spec_obj = objects.RequestSpec( num_instances=2, flavor=objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1, disabled=False, is_public=True, name="small_flavor"), project_id=uuids.project_id, instance_group=None) host_state = mock.Mock(spec=host_manager.HostState, host="fake_host", nodename="fake_node", uuid=uuids.cn1, cell_uuid=uuids.cell1, limits={}, updated='fake') all_host_states = [host_state] mock_get_all_states.return_value = all_host_states mock_get_hosts.side_effect = [ all_host_states, # first instance: return all the hosts (only one) [], # second: act as if no more hosts that meet criteria all_host_states, # the final call when creating alternates ] mock_claim.return_value = True instance_uuids = [uuids.instance1, uuids.instance2] fake_alloc = {"allocations": [ {"resource_provider": {"uuid": uuids.cn1}, "resources": {"VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100} }]} alloc_reqs_by_rp_uuid = {uuids.cn1: [fake_alloc]} ctx = mock.Mock() self.assertRaises(exception.NoValidHost, self.driver._schedule, ctx, spec_obj, instance_uuids, alloc_reqs_by_rp_uuid, mock.sentinel.provider_summaries) # Ensure we cleaned up the first successfully-claimed instance mock_cleanup.assert_called_once_with(ctx, [uuids.instance1]) @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_get_all_host_states') @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_get_sorted_hosts') def test_selection_alloc_requests_for_alts(self, mock_get_hosts, mock_get_all_states, mock_claim): spec_obj = objects.RequestSpec( num_instances=1, flavor=objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1), project_id=uuids.project_id, instance_group=None) host_state0 = mock.Mock(spec=host_manager.HostState, host="fake_host0", nodename="fake_node0", uuid=uuids.cn0, cell_uuid=uuids.cell, limits={}, aggregates=[]) host_state1 = mock.Mock(spec=host_manager.HostState, host="fake_host1", nodename="fake_node1", uuid=uuids.cn1, cell_uuid=uuids.cell, limits={}, aggregates=[]) host_state2 = mock.Mock(spec=host_manager.HostState, host="fake_host2", nodename="fake_node2", uuid=uuids.cn2, cell_uuid=uuids.cell, limits={}, aggregates=[]) all_host_states = [host_state0, host_state1, host_state2] mock_get_all_states.return_value = all_host_states mock_get_hosts.return_value = all_host_states mock_claim.return_value = True instance_uuids = [uuids.instance0] fake_alloc0 = {"allocations": [ {"resource_provider": {"uuid": uuids.cn0}, "resources": {"VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100} }]} fake_alloc1 = {"allocations": [ {"resource_provider": {"uuid": uuids.cn1}, "resources": {"VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100} }]} fake_alloc2 = {"allocations": [ {"resource_provider": {"uuid": uuids.cn2}, "resources": {"VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100} }]} alloc_reqs_by_rp_uuid = {uuids.cn0: [fake_alloc0], uuids.cn1: [fake_alloc1], uuids.cn2: [fake_alloc2]} ctx = mock.Mock() selected_hosts = self.driver._schedule(ctx, spec_obj, instance_uuids, alloc_reqs_by_rp_uuid, mock.sentinel.provider_summaries, return_alternates=True) sel0 = objects.Selection.from_host_state(host_state0, allocation_request=fake_alloc0) sel1 = objects.Selection.from_host_state(host_state1, 
allocation_request=fake_alloc1) sel2 = objects.Selection.from_host_state(host_state2, allocation_request=fake_alloc2) expected_selection = [[sel0, sel1, sel2]] self.assertEqual(expected_selection, selected_hosts) @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_get_all_host_states') @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_get_sorted_hosts') def test_selection_alloc_requests_no_alts(self, mock_get_hosts, mock_get_all_states, mock_claim): spec_obj = objects.RequestSpec( num_instances=1, flavor=objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1), project_id=uuids.project_id, instance_group=None) host_state0 = mock.Mock(spec=host_manager.HostState, host="fake_host0", nodename="fake_node0", uuid=uuids.cn0, cell_uuid=uuids.cell, limits={}, aggregates=[]) host_state1 = mock.Mock(spec=host_manager.HostState, host="fake_host1", nodename="fake_node1", uuid=uuids.cn1, cell_uuid=uuids.cell, limits={}, aggregates=[]) host_state2 = mock.Mock(spec=host_manager.HostState, host="fake_host2", nodename="fake_node2", uuid=uuids.cn2, cell_uuid=uuids.cell, limits={}, aggregates=[]) all_host_states = [host_state0, host_state1, host_state2] mock_get_all_states.return_value = all_host_states mock_get_hosts.return_value = all_host_states mock_claim.return_value = True instance_uuids = [uuids.instance0] fake_alloc0 = {"allocations": [ {"resource_provider": {"uuid": uuids.cn0}, "resources": {"VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100} }]} fake_alloc1 = {"allocations": [ {"resource_provider": {"uuid": uuids.cn1}, "resources": {"VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100} }]} fake_alloc2 = {"allocations": [ {"resource_provider": {"uuid": uuids.cn2}, "resources": {"VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100} }]} alloc_reqs_by_rp_uuid = {uuids.cn0: [fake_alloc0], uuids.cn1: [fake_alloc1], uuids.cn2: [fake_alloc2]} ctx = mock.Mock() selected_hosts = self.driver._schedule(ctx, spec_obj, instance_uuids, alloc_reqs_by_rp_uuid, mock.sentinel.provider_summaries, return_alternates=False) sel0 = objects.Selection.from_host_state(host_state0, allocation_request=fake_alloc0) expected_selection = [[sel0]] self.assertEqual(expected_selection, selected_hosts) @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_get_all_host_states') @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_get_sorted_hosts') def test_schedule_instance_group(self, mock_get_hosts, mock_get_all_states, mock_claim): """Test that since the request spec object contains an instance group object, that upon choosing a host in the primary schedule loop, that we update the request spec's instance group information """ num_instances = 2 ig = objects.InstanceGroup(hosts=[]) spec_obj = objects.RequestSpec( num_instances=num_instances, flavor=objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1, disabled=False, is_public=True, name="small_flavor"), project_id=uuids.project_id, instance_group=ig, instance_uuid=uuids.instance0) # Reset the RequestSpec changes so they don't interfere with the # assertion at the end of the test. 
spec_obj.obj_reset_changes(recursive=True) hs1 = mock.Mock(spec=host_manager.HostState, host='host1', nodename="node1", limits={}, uuid=uuids.cn1, cell_uuid=uuids.cell1, instances={}, aggregates=[]) hs2 = mock.Mock(spec=host_manager.HostState, host='host2', nodename="node2", limits={}, uuid=uuids.cn2, cell_uuid=uuids.cell2, instances={}, aggregates=[]) all_host_states = [hs1, hs2] mock_get_all_states.return_value = all_host_states mock_claim.return_value = True alloc_reqs_by_rp_uuid = { uuids.cn1: [{"allocations": "fake_cn1_alloc"}], uuids.cn2: [{"allocations": "fake_cn2_alloc"}], } # Simulate host 1 and host 2 being randomly returned first by # _get_sorted_hosts() in the two iterations for each instance in # num_instances visited_instances = set([]) def fake_get_sorted_hosts(_spec_obj, host_states, index): # Keep track of which instances are passed to the filters. visited_instances.add(_spec_obj.instance_uuid) if index % 2: return [hs1, hs2] return [hs2, hs1] mock_get_hosts.side_effect = fake_get_sorted_hosts instance_uuids = [ getattr(uuids, 'instance%d' % x) for x in range(num_instances) ] ctx = mock.Mock() self.driver._schedule(ctx, spec_obj, instance_uuids, alloc_reqs_by_rp_uuid, mock.sentinel.provider_summaries) # Check that we called claim_resources() for both the first and second # host state claim_calls = [ mock.call(ctx.elevated.return_value, self.driver.placement_client, spec_obj, uuids.instance0, alloc_reqs_by_rp_uuid[uuids.cn2][0], allocation_request_version=None), mock.call(ctx.elevated.return_value, self.driver.placement_client, spec_obj, uuids.instance1, alloc_reqs_by_rp_uuid[uuids.cn1][0], allocation_request_version=None), ] mock_claim.assert_has_calls(claim_calls) # Check that _get_sorted_hosts() is called twice and that the # second time, we pass it the hosts that were returned from # _get_sorted_hosts() the first time sorted_host_calls = [ mock.call(spec_obj, all_host_states, 0), mock.call(spec_obj, [hs2, hs1], 1), ] mock_get_hosts.assert_has_calls(sorted_host_calls) # The instance group object should have both host1 and host2 in its # instance group hosts list and there should not be any "changes" to # save in the instance group object self.assertEqual(['host2', 'host1'], ig.hosts) self.assertEqual({}, ig.obj_get_changes()) # Assert that we updated HostState.instances for each host. self.assertIn(uuids.instance0, hs2.instances) self.assertIn(uuids.instance1, hs1.instances) # Make sure that the RequestSpec.instance_uuid is not dirty. 
self.assertEqual(sorted(instance_uuids), sorted(visited_instances)) self.assertEqual(0, len(spec_obj.obj_what_changed()), spec_obj.obj_what_changed()) @mock.patch('nova.scheduler.filter_scheduler.LOG.debug') @mock.patch('random.choice', side_effect=lambda x: x[1]) @mock.patch('nova.scheduler.host_manager.HostManager.get_weighed_hosts') @mock.patch('nova.scheduler.host_manager.HostManager.get_filtered_hosts') def test_get_sorted_hosts(self, mock_filt, mock_weighed, mock_rand, debug): """Tests the call that returns a sorted list of hosts by calling the host manager's filtering and weighing routines """ self.flags(host_subset_size=2, group='filter_scheduler') hs1 = mock.Mock(spec=host_manager.HostState, host='host1', cell_uuid=uuids.cell1) hs2 = mock.Mock(spec=host_manager.HostState, host='host2', cell_uuid=uuids.cell2) all_host_states = [hs1, hs2] mock_weighed.return_value = [ weights.WeighedHost(hs1, 1.0), weights.WeighedHost(hs2, 1.0), ] # Make sure that when logging the weighed hosts we are logging them # with the WeighedHost wrapper class rather than the HostState objects. def fake_debug(message, *args, **kwargs): if message.startswith('Weighed'): self.assertEqual(1, len(args)) for weighed_host in args[0]['hosts']: self.assertIsInstance(weighed_host, weights.WeighedHost) debug.side_effect = fake_debug results = self.driver._get_sorted_hosts(mock.sentinel.spec, all_host_states, mock.sentinel.index) debug.assert_called() mock_filt.assert_called_once_with(all_host_states, mock.sentinel.spec, mock.sentinel.index) mock_weighed.assert_called_once_with(mock_filt.return_value, mock.sentinel.spec) # We override random.choice() to pick the **second** element of the # returned weighed hosts list, which is the host state #2. This tests # the code path that combines the randomly-chosen host with the # remaining list of weighed host state objects self.assertEqual([hs2, hs1], results) @mock.patch('random.choice', side_effect=lambda x: x[0]) @mock.patch('nova.scheduler.host_manager.HostManager.get_weighed_hosts') @mock.patch('nova.scheduler.host_manager.HostManager.get_filtered_hosts') def test_get_sorted_hosts_subset_less_than_num_weighed(self, mock_filt, mock_weighed, mock_rand): """Tests that when we have >1 weighed hosts but a host subset size of 1, that we always pick the first host in the weighed host """ self.flags(host_subset_size=1, group='filter_scheduler') hs1 = mock.Mock(spec=host_manager.HostState, host='host1', cell_uuid=uuids.cell1) hs2 = mock.Mock(spec=host_manager.HostState, host='host2', cell_uuid=uuids.cell2) all_host_states = [hs1, hs2] mock_weighed.return_value = [ weights.WeighedHost(hs1, 1.0), weights.WeighedHost(hs2, 1.0), ] results = self.driver._get_sorted_hosts(mock.sentinel.spec, all_host_states, mock.sentinel.index) mock_filt.assert_called_once_with(all_host_states, mock.sentinel.spec, mock.sentinel.index) mock_weighed.assert_called_once_with(mock_filt.return_value, mock.sentinel.spec) # We should be randomly selecting only from a list of one host state mock_rand.assert_called_once_with([hs1]) self.assertEqual([hs1, hs2], results) @mock.patch('random.choice', side_effect=lambda x: x[0]) @mock.patch('nova.scheduler.host_manager.HostManager.get_weighed_hosts') @mock.patch('nova.scheduler.host_manager.HostManager.get_filtered_hosts') def test_get_sorted_hosts_subset_greater_than_num_weighed(self, mock_filt, mock_weighed, mock_rand): """Hosts should still be chosen if host subset size is larger than number of weighed hosts. 
""" self.flags(host_subset_size=20, group='filter_scheduler') hs1 = mock.Mock(spec=host_manager.HostState, host='host1', cell_uuid=uuids.cell1) hs2 = mock.Mock(spec=host_manager.HostState, host='host2', cell_uuid=uuids.cell2) all_host_states = [hs1, hs2] mock_weighed.return_value = [ weights.WeighedHost(hs1, 1.0), weights.WeighedHost(hs2, 1.0), ] results = self.driver._get_sorted_hosts(mock.sentinel.spec, all_host_states, mock.sentinel.index) mock_filt.assert_called_once_with(all_host_states, mock.sentinel.spec, mock.sentinel.index) mock_weighed.assert_called_once_with(mock_filt.return_value, mock.sentinel.spec) # We overrode random.choice() to return the first element in the list, # so even though we had a host_subset_size greater than the number of # weighed hosts (2), we just random.choice() on the entire set of # weighed hosts and thus return [hs1, hs2] self.assertEqual([hs1, hs2], results) @mock.patch('random.shuffle', side_effect=lambda x: x.reverse()) @mock.patch('nova.scheduler.host_manager.HostManager.get_weighed_hosts') @mock.patch('nova.scheduler.host_manager.HostManager.get_filtered_hosts') def test_get_sorted_hosts_shuffle_top_equal(self, mock_filt, mock_weighed, mock_shuffle): """Tests that top best weighed hosts are shuffled when enabled. """ self.flags(host_subset_size=1, group='filter_scheduler') self.flags(shuffle_best_same_weighed_hosts=True, group='filter_scheduler') hs1 = mock.Mock(spec=host_manager.HostState, host='host1') hs2 = mock.Mock(spec=host_manager.HostState, host='host2') hs3 = mock.Mock(spec=host_manager.HostState, host='host3') hs4 = mock.Mock(spec=host_manager.HostState, host='host4') all_host_states = [hs1, hs2, hs3, hs4] mock_weighed.return_value = [ weights.WeighedHost(hs1, 1.0), weights.WeighedHost(hs2, 1.0), weights.WeighedHost(hs3, 0.5), weights.WeighedHost(hs4, 0.5), ] results = self.driver._get_sorted_hosts(mock.sentinel.spec, all_host_states, mock.sentinel.index) mock_filt.assert_called_once_with(all_host_states, mock.sentinel.spec, mock.sentinel.index) mock_weighed.assert_called_once_with(mock_filt.return_value, mock.sentinel.spec) # We override random.shuffle() to reverse the list, thus the # head of the list should become [host#2, host#1] # (as the host_subset_size is 1) and the tail should stay the same. self.assertEqual([hs2, hs1, hs3, hs4], results) def test_cleanup_allocations(self): instance_uuids = [] # Check we don't do anything if there's no instance UUIDs to cleanup # allocations for pc = self.driver.placement_client self.driver._cleanup_allocations(self.context, instance_uuids) self.assertFalse(pc.delete_allocation_for_instance.called) instance_uuids = [uuids.instance1, uuids.instance2] self.driver._cleanup_allocations(self.context, instance_uuids) exp_calls = [mock.call(self.context, uuids.instance1), mock.call(self.context, uuids.instance2)] pc.delete_allocation_for_instance.assert_has_calls(exp_calls) def test_add_retry_host(self): retry = dict(num_attempts=1, hosts=[]) filter_properties = dict(retry=retry) host = "fakehost" node = "fakenode" scheduler_utils._add_retry_host(filter_properties, host, node) hosts = filter_properties['retry']['hosts'] self.assertEqual(1, len(hosts)) self.assertEqual([host, node], hosts[0]) def test_post_select_populate(self): # Test addition of certain filter props after a node is selected. 
retry = {'hosts': [], 'num_attempts': 1} filter_properties = {'retry': retry} selection = objects.Selection(service_host="host", nodename="node", cell_uuid=uuids.cell) scheduler_utils.populate_filter_properties(filter_properties, selection) self.assertEqual(['host', 'node'], filter_properties['retry']['hosts'][0]) @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_schedule') def test_select_destinations_match_num_instances(self, mock_schedule): """Tests that the select_destinations() method returns the list of hosts from the _schedule() method when the number of returned hosts equals the number of instance UUIDs passed in. """ spec_obj = objects.RequestSpec( flavor=objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1, disabled=False, is_public=True, name="small_flavor"), project_id=uuids.project_id, num_instances=1, image=None, numa_topology=None, pci_requests=None, instance_uuid=uuids.instance_id) mock_schedule.return_value = [[fake_selection]] dests = self.driver.select_destinations(self.context, spec_obj, [mock.sentinel.instance_uuid], mock.sentinel.alloc_reqs_by_rp_uuid, mock.sentinel.p_sums, mock.sentinel.ar_version) mock_schedule.assert_called_once_with(self.context, spec_obj, [mock.sentinel.instance_uuid], mock.sentinel.alloc_reqs_by_rp_uuid, mock.sentinel.p_sums, mock.sentinel.ar_version, False) self.assertEqual([[fake_selection]], dests) @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_schedule') def test_select_destinations_for_move_ops(self, mock_schedule): """Tests that the select_destinations() method verifies the number of hosts returned from the _schedule() method against the number of instance UUIDs passed as a parameter and not against the RequestSpec num_instances field since the latter could be wrong in case of a move operation. """ spec_obj = objects.RequestSpec( flavor=objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1, disabled=False, is_public=True, name="small_flavor"), project_id=uuids.project_id, num_instances=2, image=None, numa_topology=None, pci_requests=None, instance_uuid=uuids.instance_id) mock_schedule.return_value = [[fake_selection]] dests = self.driver.select_destinations(self.context, spec_obj, [mock.sentinel.instance_uuid], mock.sentinel.alloc_reqs_by_rp_uuid, mock.sentinel.p_sums, mock.sentinel.ar_version) mock_schedule.assert_called_once_with(self.context, spec_obj, [mock.sentinel.instance_uuid], mock.sentinel.alloc_reqs_by_rp_uuid, mock.sentinel.p_sums, mock.sentinel.ar_version, False) self.assertEqual([[fake_selection]], dests) @mock.patch('nova.scheduler.utils.claim_resources', return_value=True) @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_get_all_host_states') @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_get_sorted_hosts') def test_schedule_fewer_num_instances(self, mock_get_hosts, mock_get_all_states, mock_claim): """Tests that the _schedule() method properly handles resetting host state objects and raising NoValidHost when there are not enough hosts available. 
""" spec_obj = objects.RequestSpec( num_instances=2, flavor=objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1, disabled=False, is_public=True, name="small_flavor"), project_id=uuids.project_id, instance_uuid=uuids.instance_id, instance_group=None) host_state = mock.Mock(spec=host_manager.HostState, host="fake_host", uuid=uuids.cn1, cell_uuid=uuids.cell, nodename="fake_node", limits={}, updated="Not None") all_host_states = [host_state] mock_get_all_states.return_value = all_host_states mock_get_hosts.side_effect = [all_host_states, []] instance_uuids = [uuids.inst1, uuids.inst2] fake_allocs_by_rp = {uuids.cn1: [{}]} self.assertRaises(exception.NoValidHost, self.driver._schedule, self.context, spec_obj, instance_uuids, fake_allocs_by_rp, mock.sentinel.p_sums) self.assertIsNone(host_state.updated) @mock.patch("nova.scheduler.host_manager.HostState.consume_from_request") @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch("nova.scheduler.filter_scheduler.FilterScheduler." "_get_sorted_hosts") @mock.patch("nova.scheduler.filter_scheduler.FilterScheduler." "_get_all_host_states") def _test_alternates_returned(self, mock_get_all_hosts, mock_sorted, mock_claim, mock_consume, num_instances=2, num_alternates=2): all_host_states = [] alloc_reqs = {} for num in range(10): host_name = "host%s" % num hs = host_manager.HostState(host_name, "node%s" % num, uuids.cell) hs.uuid = getattr(uuids, host_name) all_host_states.append(hs) alloc_reqs[hs.uuid] = [{}] mock_get_all_hosts.return_value = all_host_states mock_sorted.return_value = all_host_states mock_claim.return_value = True total_returned = num_alternates + 1 self.flags(max_attempts=total_returned, group="scheduler") instance_uuids = [getattr(uuids, "inst%s" % num) for num in range(num_instances)] spec_obj = objects.RequestSpec( num_instances=num_instances, flavor=objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1), project_id=uuids.project_id, instance_group=None) dests = self.driver._schedule(self.context, spec_obj, instance_uuids, alloc_reqs, None, return_alternates=True) self.assertEqual(num_instances, len(dests)) # Filtering and weighing hosts should be called num_instances + 1 times # unless we're not getting alternates, and then just num_instances self.assertEqual(num_instances + 1 if num_alternates > 0 and num_instances > 1 else num_instances, mock_sorted.call_count, 'Unexpected number of calls to filter hosts for %s ' 'instances.' % num_instances) selected_hosts = [dest[0] for dest in dests] for dest in dests: self.assertEqual(total_returned, len(dest)) # Verify that there are no duplicates among a destination self.assertEqual(len(dest), len(set(dest))) # Verify that none of the selected hosts appear in the alternates. for alt in dest[1:]: self.assertNotIn(alt, selected_hosts) def test_alternates_returned(self): self._test_alternates_returned(num_instances=1, num_alternates=1) self._test_alternates_returned(num_instances=3, num_alternates=0) self._test_alternates_returned(num_instances=1, num_alternates=4) self._test_alternates_returned(num_instances=2, num_alternates=3) self._test_alternates_returned(num_instances=8, num_alternates=8) @mock.patch("nova.scheduler.host_manager.HostState.consume_from_request") @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch("nova.scheduler.filter_scheduler.FilterScheduler." "_get_sorted_hosts") @mock.patch("nova.scheduler.filter_scheduler.FilterScheduler." 
"_get_all_host_states") def test_alternates_same_cell(self, mock_get_all_hosts, mock_sorted, mock_claim, mock_consume): """Tests getting alternates plus claims where the hosts are spread across two cells. """ all_host_states = [] alloc_reqs = {} for num in range(10): host_name = "host%s" % num cell_uuid = uuids.cell1 if num % 2 else uuids.cell2 hs = host_manager.HostState(host_name, "node%s" % num, cell_uuid) hs.uuid = getattr(uuids, host_name) all_host_states.append(hs) alloc_reqs[hs.uuid] = [{}] mock_get_all_hosts.return_value = all_host_states # There are two instances so _get_sorted_hosts is called once per # instance and then once again before picking alternates. mock_sorted.side_effect = [all_host_states, list(reversed(all_host_states)), all_host_states] mock_claim.return_value = True total_returned = 3 self.flags(max_attempts=total_returned, group="scheduler") instance_uuids = [uuids.inst1, uuids.inst2] num_instances = len(instance_uuids) spec_obj = objects.RequestSpec( num_instances=num_instances, flavor=objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1), project_id=uuids.project_id, instance_group=None) dests = self.driver._schedule(self.context, spec_obj, instance_uuids, alloc_reqs, None, return_alternates=True) # There should be max_attempts hosts per instance (1 selected, 2 alts) self.assertEqual(total_returned, len(dests[0])) self.assertEqual(total_returned, len(dests[1])) # Verify that the two selected hosts are not in the same cell. self.assertNotEqual(dests[0][0].cell_uuid, dests[1][0].cell_uuid) for dest in dests: selected_host = dest[0] selected_cell_uuid = selected_host.cell_uuid for alternate in dest[1:]: self.assertEqual(alternate.cell_uuid, selected_cell_uuid) @mock.patch("nova.scheduler.host_manager.HostState.consume_from_request") @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch("nova.scheduler.filter_scheduler.FilterScheduler." "_get_sorted_hosts") @mock.patch("nova.scheduler.filter_scheduler.FilterScheduler." "_get_all_host_states") def _test_not_enough_alternates(self, mock_get_all_hosts, mock_sorted, mock_claim, mock_consume, num_hosts, max_attempts): all_host_states = [] alloc_reqs = {} for num in range(num_hosts): host_name = "host%s" % num hs = host_manager.HostState(host_name, "node%s" % num, uuids.cell) hs.uuid = getattr(uuids, host_name) all_host_states.append(hs) alloc_reqs[hs.uuid] = [{}] mock_get_all_hosts.return_value = all_host_states mock_sorted.return_value = all_host_states mock_claim.return_value = True # Set the total returned to more than the number of available hosts self.flags(max_attempts=max_attempts, group="scheduler") instance_uuids = [uuids.inst1, uuids.inst2] num_instances = len(instance_uuids) spec_obj = objects.RequestSpec( num_instances=num_instances, flavor=objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1), project_id=uuids.project_id, instance_group=None) dests = self.driver._schedule(self.context, spec_obj, instance_uuids, alloc_reqs, None, return_alternates=True) self.assertEqual(num_instances, len(dests)) selected_hosts = [dest[0] for dest in dests] # The number returned for each destination should be the less of the # number of available host and the max_attempts setting. expected_number = min(num_hosts, max_attempts) for dest in dests: self.assertEqual(expected_number, len(dest)) # Verify that there are no duplicates among a destination self.assertEqual(len(dest), len(set(dest))) # Verify that none of the selected hosts appear in the alternates. 
for alt in dest[1:]: self.assertNotIn(alt, selected_hosts) def test_not_enough_alternates(self): self._test_not_enough_alternates(num_hosts=100, max_attempts=5) self._test_not_enough_alternates(num_hosts=5, max_attempts=5) self._test_not_enough_alternates(num_hosts=3, max_attempts=5) self._test_not_enough_alternates(num_hosts=20, max_attempts=5) @mock.patch('nova.compute.utils.notify_about_scheduler_action') @mock.patch.object(filter_scheduler.FilterScheduler, '_schedule') def test_select_destinations_notifications(self, mock_schedule, mock_notify): mock_schedule.return_value = ([[mock.Mock()]], [[mock.Mock()]]) with mock.patch.object(self.driver.notifier, 'info') as mock_info: flavor = objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1, disabled=False, is_public=True, name="small_flavor") expected = {'num_instances': 1, 'instance_properties': { 'uuid': uuids.instance, 'ephemeral_gb': 0, 'memory_mb': 512, 'vcpus': 1, 'root_gb': 512}, 'instance_type': flavor, 'image': {}} spec_obj = objects.RequestSpec(num_instances=1, flavor=flavor, instance_uuid=uuids.instance) self.driver.select_destinations(self.context, spec_obj, [uuids.instance], {}, None) expected = [ mock.call(self.context, 'scheduler.select_destinations.start', dict(request_spec=expected)), mock.call(self.context, 'scheduler.select_destinations.end', dict(request_spec=expected))] self.assertEqual(expected, mock_info.call_args_list) mock_notify.assert_has_calls([ mock.call(context=self.context, request_spec=spec_obj, action='select_destinations', phase='start'), mock.call(context=self.context, request_spec=spec_obj, action='select_destinations', phase='end')]) def test_get_all_host_states_provider_summaries_is_none(self): """Tests that HostManager.get_host_states_by_uuids is called with compute_uuids being None when the incoming provider_summaries is None. """ with mock.patch.object(self.driver.host_manager, 'get_host_states_by_uuids') as get_host_states: self.driver._get_all_host_states( mock.sentinel.ctxt, mock.sentinel.spec_obj, None) # Make sure get_host_states_by_uuids was called with # compute_uuids being None. get_host_states.assert_called_once_with( mock.sentinel.ctxt, None, mock.sentinel.spec_obj) def test_get_all_host_states_provider_summaries_is_empty(self): """Tests that HostManager.get_host_states_by_uuids is called with compute_uuids being [] when the incoming provider_summaries is {}. """ with mock.patch.object(self.driver.host_manager, 'get_host_states_by_uuids') as get_host_states: self.driver._get_all_host_states( mock.sentinel.ctxt, mock.sentinel.spec_obj, {}) # Make sure get_host_states_by_uuids was called with # compute_uuids being []. get_host_states.assert_called_once_with( mock.sentinel.ctxt, [], mock.sentinel.spec_obj) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/scheduler/test_filters.py0000664000175000017500000002312700000000000022716 0ustar00zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Scheduler Host Filters. """ import inspect import mock from oslo_utils.fixture import uuidsentinel as uuids from six.moves import range from nova import filters from nova import loadables from nova import objects from nova import test class Filter1(filters.BaseFilter): """Test Filter class #1.""" pass class Filter2(filters.BaseFilter): """Test Filter class #2.""" pass class FiltersTestCase(test.NoDBTestCase): def setUp(self): super(FiltersTestCase, self).setUp() with mock.patch.object(loadables.BaseLoader, "__init__") as mock_load: mock_load.return_value = None self.filter_handler = filters.BaseFilterHandler(filters.BaseFilter) @mock.patch('nova.filters.BaseFilter._filter_one') def test_filter_all(self, mock_filter_one): mock_filter_one.side_effect = [True, False, True] filter_obj_list = ['obj1', 'obj2', 'obj3'] spec_obj = objects.RequestSpec() base_filter = filters.BaseFilter() result = base_filter.filter_all(filter_obj_list, spec_obj) self.assertTrue(inspect.isgenerator(result)) self.assertEqual(['obj1', 'obj3'], list(result)) @mock.patch('nova.filters.BaseFilter._filter_one') def test_filter_all_recursive_yields(self, mock_filter_one): # Test filter_all() allows generators from previous filter_all()s. # filter_all() yields results. We want to make sure that we can # call filter_all() with generators returned from previous calls # to filter_all(). filter_obj_list = ['obj1', 'obj2', 'obj3'] spec_obj = objects.RequestSpec() base_filter = filters.BaseFilter() # The order that _filter_one is going to get called gets # confusing because we will be recursively yielding things.. # We are going to simulate the first call to filter_all() # returning False for 'obj2'. So, 'obj1' will get yielded # 'total_iterations' number of times before the first filter_all() # call gets to processing 'obj2'. We then return 'False' for it. # After that, 'obj3' gets yielded 'total_iterations' number of # times. mock_results = [] total_iterations = 200 for x in range(total_iterations): mock_results.append(True) mock_results.append(False) for x in range(total_iterations): mock_results.append(True) mock_filter_one.side_effect = mock_results objs = iter(filter_obj_list) for x in range(total_iterations): # Pass in generators returned from previous calls. 
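            # Each pass wraps the previous generator in a new filter_all()
            # generator; nothing is consumed until list(objs) below drives
            # the whole chain and pulls every object through each layer.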
objs = base_filter.filter_all(objs, spec_obj) self.assertTrue(inspect.isgenerator(objs)) self.assertEqual(['obj1', 'obj3'], list(objs)) def test_get_filtered_objects(self): filter_objs_initial = ['initial', 'filter1', 'objects1'] filter_objs_second = ['second', 'filter2', 'objects2'] filter_objs_last = ['last', 'filter3', 'objects3'] spec_obj = objects.RequestSpec() def _fake_base_loader_init(*args, **kwargs): pass self.stub_out('nova.loadables.BaseLoader.__init__', _fake_base_loader_init) filt1_mock = mock.Mock(Filter1) filt1_mock.run_filter_for_index.return_value = True filt1_mock.filter_all.return_value = filter_objs_second filt2_mock = mock.Mock(Filter2) filt2_mock.run_filter_for_index.return_value = True filt2_mock.filter_all.return_value = filter_objs_last filter_handler = filters.BaseFilterHandler(filters.BaseFilter) filter_mocks = [filt1_mock, filt2_mock] result = filter_handler.get_filtered_objects(filter_mocks, filter_objs_initial, spec_obj) self.assertEqual(filter_objs_last, result) filt1_mock.filter_all.assert_called_once_with(filter_objs_initial, spec_obj) filt2_mock.filter_all.assert_called_once_with(filter_objs_second, spec_obj) def test_get_filtered_objects_for_index(self): """Test that we don't call a filter when its run_filter_for_index() method returns false """ filter_objs_initial = ['initial', 'filter1', 'objects1'] filter_objs_second = ['second', 'filter2', 'objects2'] spec_obj = objects.RequestSpec() def _fake_base_loader_init(*args, **kwargs): pass self.stub_out('nova.loadables.BaseLoader.__init__', _fake_base_loader_init) filt1_mock = mock.Mock(Filter1) filt1_mock.run_filter_for_index.return_value = True filt1_mock.filter_all.return_value = filter_objs_second filt2_mock = mock.Mock(Filter2) filt2_mock.run_filter_for_index.return_value = False filter_handler = filters.BaseFilterHandler(filters.BaseFilter) filter_mocks = [filt1_mock, filt2_mock] result = filter_handler.get_filtered_objects(filter_mocks, filter_objs_initial, spec_obj) self.assertEqual(filter_objs_second, result) filt1_mock.filter_all.assert_called_once_with(filter_objs_initial, spec_obj) filt2_mock.filter_all.assert_not_called() def test_get_filtered_objects_none_response(self): filter_objs_initial = ['initial', 'filter1', 'objects1'] spec_obj = objects.RequestSpec() def _fake_base_loader_init(*args, **kwargs): pass self.stub_out('nova.loadables.BaseLoader.__init__', _fake_base_loader_init) filt1_mock = mock.Mock(Filter1) filt1_mock.run_filter_for_index.return_value = True filt1_mock.filter_all.return_value = None filt2_mock = mock.Mock(Filter2) filter_handler = filters.BaseFilterHandler(filters.BaseFilter) filter_mocks = [filt1_mock, filt2_mock] result = filter_handler.get_filtered_objects(filter_mocks, filter_objs_initial, spec_obj) self.assertIsNone(result) filt1_mock.filter_all.assert_called_once_with(filter_objs_initial, spec_obj) filt2_mock.filter_all.assert_not_called() def test_get_filtered_objects_info_log_none_returned(self): LOG = filters.LOG class FilterA(filters.BaseFilter): def filter_all(self, list_objs, spec_obj): # return all but the first object return list_objs[1:] class FilterB(filters.BaseFilter): def filter_all(self, list_objs, spec_obj): # return an empty list return [] filter_a = FilterA() filter_b = FilterB() all_filters = [filter_a, filter_b] hosts = ["Host0", "Host1", "Host2"] fake_uuid = uuids.instance spec_obj = objects.RequestSpec(instance_uuid=fake_uuid) with mock.patch.object(LOG, "info") as mock_log: result = self.filter_handler.get_filtered_objects( all_filters, 
hosts, spec_obj) self.assertFalse(result) # FilterA should leave Host1 and Host2; FilterB should leave None. exp_output = ("['FilterA: (start: 3, end: 2)', " "'FilterB: (start: 2, end: 0)']") cargs = mock_log.call_args[0][0] self.assertIn("with instance ID '%s'" % fake_uuid, cargs) self.assertIn(exp_output, cargs) def test_get_filtered_objects_debug_log_none_returned(self): LOG = filters.LOG class FilterA(filters.BaseFilter): def filter_all(self, list_objs, spec_obj): # return all but the first object return list_objs[1:] class FilterB(filters.BaseFilter): def filter_all(self, list_objs, spec_obj): # return an empty list return [] filter_a = FilterA() filter_b = FilterB() all_filters = [filter_a, filter_b] hosts = ["Host0", "Host1", "Host2"] fake_uuid = uuids.instance spec_obj = objects.RequestSpec(instance_uuid=fake_uuid) with mock.patch.object(LOG, "debug") as mock_log: result = self.filter_handler.get_filtered_objects( all_filters, hosts, spec_obj) self.assertFalse(result) # FilterA should leave Host1 and Host2; FilterB should leave None. exp_output = ("[('FilterA', [('Host1', ''), ('Host2', '')]), " + "('FilterB', None)]") cargs = mock_log.call_args[0][0] self.assertIn("with instance ID '%s'" % fake_uuid, cargs) self.assertIn(exp_output, cargs) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/test_host_filters.py0000664000175000017500000000276100000000000023754 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Scheduler Host Filters. """ from nova.scheduler import filters from nova.scheduler.filters import all_hosts_filter from nova.scheduler.filters import compute_filter from nova import test from nova.tests.unit.scheduler import fakes class HostFiltersTestCase(test.NoDBTestCase): def test_filter_handler(self): # Double check at least a couple of known filters exist filter_handler = filters.HostFilterHandler() classes = filter_handler.get_matching_classes( ['nova.scheduler.filters.all_filters']) self.assertIn(all_hosts_filter.AllHostsFilter, classes) self.assertIn(compute_filter.ComputeFilter, classes) def test_all_host_filter(self): filt_cls = all_hosts_filter.AllHostsFilter() host = fakes.FakeHostState('host1', 'node1', {}) self.assertTrue(filt_cls.host_passes(host, {})) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/scheduler/test_host_manager.py0000664000175000017500000023266400000000000023725 0ustar00zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For HostManager """ import collections import contextlib import datetime import mock from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import versionutils import nova from nova.compute import task_states from nova.compute import vm_states from nova import context as nova_context from nova import exception from nova import objects from nova.objects import base as obj_base from nova.pci import stats as pci_stats from nova.scheduler import filters from nova.scheduler import host_manager from nova import test from nova.tests import fixtures from nova.tests.unit import fake_instance from nova.tests.unit.scheduler import fakes class FakeFilterClass1(filters.BaseHostFilter): def host_passes(self, host_state, filter_properties): pass class FakeFilterClass2(filters.BaseHostFilter): def host_passes(self, host_state, filter_properties): pass class HostManagerTestCase(test.NoDBTestCase): """Test case for HostManager class.""" @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(host_manager.HostManager, '_init_aggregates') def setUp(self, mock_init_agg, mock_init_inst): super(HostManagerTestCase, self).setUp() self.flags(available_filters=[ __name__ + '.FakeFilterClass1', __name__ + '.FakeFilterClass2'], group='filter_scheduler') self.flags(enabled_filters=['FakeFilterClass1'], group='filter_scheduler') self.host_manager = host_manager.HostManager() cell = uuids.cell self.fake_hosts = [host_manager.HostState('fake_host%s' % x, 'fake-node', cell) for x in range(1, 5)] self.fake_hosts += [host_manager.HostState('fake_multihost', 'fake-node%s' % x, cell) for x in range(1, 5)] self.useFixture(fixtures.SpawnIsSynchronousFixture()) def test_load_filters(self): filters = self.host_manager._load_filters() self.assertEqual(filters, ['FakeFilterClass1']) def test_refresh_cells_caches(self): ctxt = nova_context.RequestContext('fake', 'fake') # Loading the non-cell0 mapping from the base test class. self.assertEqual(1, len(self.host_manager.enabled_cells)) self.assertEqual(1, len(self.host_manager.cells)) # Creating cell mappings for mocking the list of cell_mappings obtained # so that the refreshing mechanism can be properly tested. This will in # turn ignore the loaded cell mapping from the base test case setup. cell_uuid1 = uuids.cell1 cell_mapping1 = objects.CellMapping(context=ctxt, uuid=cell_uuid1, database_connection='fake:///db1', transport_url='fake:///mq1', disabled=False) cell_uuid2 = uuids.cell2 cell_mapping2 = objects.CellMapping(context=ctxt, uuid=cell_uuid2, database_connection='fake:///db2', transport_url='fake:///mq2', disabled=True) cell_uuid3 = uuids.cell3 cell_mapping3 = objects.CellMapping(context=ctxt, uuid=cell_uuid3, database_connection='fake:///db3', transport_url='fake:///mq3', disabled=False) cells = [cell_mapping1, cell_mapping2, cell_mapping3] # Throw a random host-to-cell in that cache to make sure it gets reset. 
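        # (host_to_cell_uuid caches the cell uuid each compute host was last
        # mapped to; refresh_cells_caches() is expected to clear it while
        # rebuilding the cell caches from the mappings above.)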
self.host_manager.host_to_cell_uuid['fake-host'] = cell_uuid1 with mock.patch('nova.objects.CellMappingList.get_all', return_value=cells) as mock_cm: self.host_manager.refresh_cells_caches() mock_cm.assert_called_once() # Cell2 is not in the enabled list. self.assertEqual(2, len(self.host_manager.enabled_cells)) self.assertEqual(cell_uuid3, self.host_manager.enabled_cells[1].uuid) # But it is still in the full list. self.assertEqual(3, len(self.host_manager.cells)) self.assertIn(cell_uuid2, self.host_manager.cells) # The host_to_cell_uuid cache should have been reset. self.assertEqual({}, self.host_manager.host_to_cell_uuid) def test_refresh_cells_caches_except_cell0(self): ctxt = nova_context.RequestContext('fake-user', 'fake_project') cell_uuid0 = objects.CellMapping.CELL0_UUID cell_mapping0 = objects.CellMapping(context=ctxt, uuid=cell_uuid0, database_connection='fake:///db1', transport_url='fake:///mq1') cells = objects.CellMappingList(cell_mapping0) # Mocking the return value of get_all cell_mappings to return only # the cell0 mapping to check if its filtered or not. with mock.patch('nova.objects.CellMappingList.get_all', return_value=cells) as mock_cm: self.host_manager.refresh_cells_caches() mock_cm.assert_called_once() self.assertEqual(0, len(self.host_manager.cells)) @mock.patch('nova.objects.HostMapping.get_by_host', return_value=objects.HostMapping( cell_mapping=objects.CellMapping(uuid=uuids.cell1))) def test_get_cell_mapping_for_host(self, mock_get_by_host): # Starting with an empty cache, assert that the HostMapping is looked # up and the result is cached. ctxt = nova_context.get_admin_context() host = 'fake-host' self.assertEqual({}, self.host_manager.host_to_cell_uuid) cm = self.host_manager._get_cell_mapping_for_host(ctxt, host) self.assertIs(cm, mock_get_by_host.return_value.cell_mapping) self.assertIn(host, self.host_manager.host_to_cell_uuid) self.assertEqual( uuids.cell1, self.host_manager.host_to_cell_uuid[host]) mock_get_by_host.assert_called_once_with(ctxt, host) # Reset the mock and do it again, assert we do not query the DB. mock_get_by_host.reset_mock() self.host_manager._get_cell_mapping_for_host(ctxt, host) mock_get_by_host.assert_not_called() # Mix up the cache such that the host is mapped to a cell that # is not in the cache which will make us query the DB. Also make the # HostMapping query raise HostMappingNotFound to make sure that comes # up to the caller. mock_get_by_host.reset_mock() self.host_manager.host_to_cell_uuid[host] = uuids.random_cell mock_get_by_host.side_effect = exception.HostMappingNotFound(name=host) with mock.patch('nova.scheduler.host_manager.LOG.warning') as warning: self.assertRaises(exception.HostMappingNotFound, self.host_manager._get_cell_mapping_for_host, ctxt, host) # We should have logged a warning because the host is cached to # a cell uuid but that cell uuid is not in the cells cache. 
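        # (The stale cache entry points at uuids.random_cell, which is not in
        # the cells cache, so a warning is logged and the fresh DB lookup then
        # raises HostMappingNotFound.)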
warning.assert_called_once() self.assertIn('Host %s is expected to be in cell %s', warning.call_args[0][0]) # And we should have also tried to lookup the HostMapping in the DB mock_get_by_host.assert_called_once_with(ctxt, host) @mock.patch.object(nova.objects.InstanceList, 'get_by_filters') @mock.patch.object(nova.objects.ComputeNodeList, 'get_all') def test_init_instance_info_batches(self, mock_get_all, mock_get_by_filters): cn_list = objects.ComputeNodeList() for num in range(22): host_name = 'host_%s' % num cn_list.objects.append(objects.ComputeNode(host=host_name)) mock_get_all.return_value = cn_list self.host_manager._init_instance_info() self.assertEqual(mock_get_by_filters.call_count, 3) @mock.patch.object(nova.objects.InstanceList, 'get_by_filters') @mock.patch.object(nova.objects.ComputeNodeList, 'get_all') def test_init_instance_info(self, mock_get_all, mock_get_by_filters): cn1 = objects.ComputeNode(host='host1') cn2 = objects.ComputeNode(host='host2') inst1 = objects.Instance(host='host1', uuid=uuids.instance_1) inst2 = objects.Instance(host='host1', uuid=uuids.instance_2) inst3 = objects.Instance(host='host2', uuid=uuids.instance_3) mock_get_all.return_value = objects.ComputeNodeList(objects=[cn1, cn2]) mock_get_by_filters.return_value = objects.InstanceList( objects=[inst1, inst2, inst3]) hm = self.host_manager hm._instance_info = {} hm._init_instance_info() self.assertEqual(len(hm._instance_info), 2) fake_info = hm._instance_info['host1'] self.assertIn(uuids.instance_1, fake_info['instances']) self.assertIn(uuids.instance_2, fake_info['instances']) self.assertNotIn(uuids.instance_3, fake_info['instances']) exp_filters = {'deleted': False, 'host': [u'host1', u'host2']} mock_get_by_filters.assert_called_once_with(mock.ANY, exp_filters) @mock.patch.object(nova.objects.InstanceList, 'get_by_filters') @mock.patch.object(nova.objects.ComputeNodeList, 'get_all') def test_init_instance_info_compute_nodes(self, mock_get_all, mock_get_by_filters): cn1 = objects.ComputeNode(host='host1') cn2 = objects.ComputeNode(host='host2') inst1 = objects.Instance(host='host1', uuid=uuids.instance_1) inst2 = objects.Instance(host='host1', uuid=uuids.instance_2) inst3 = objects.Instance(host='host2', uuid=uuids.instance_3) cell = objects.CellMapping(database_connection='', target_url='', uuid=uuids.cell_uuid) mock_get_by_filters.return_value = objects.InstanceList( objects=[inst1, inst2, inst3]) hm = self.host_manager hm._instance_info = {} hm._init_instance_info({cell: [cn1, cn2]}) self.assertEqual(len(hm._instance_info), 2) fake_info = hm._instance_info['host1'] self.assertIn(uuids.instance_1, fake_info['instances']) self.assertIn(uuids.instance_2, fake_info['instances']) self.assertNotIn(uuids.instance_3, fake_info['instances']) exp_filters = {'deleted': False, 'host': [u'host1', u'host2']} mock_get_by_filters.assert_called_once_with(mock.ANY, exp_filters) # should not be called if the list of nodes was passed explicitly self.assertFalse(mock_get_all.called) def test_enabled_filters(self): enabled_filters = self.host_manager.enabled_filters self.assertEqual(1, len(enabled_filters)) self.assertIsInstance(enabled_filters[0], FakeFilterClass1) @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(objects.AggregateList, 'get_all') def test_init_aggregates_no_aggs(self, agg_get_all, mock_init_info): agg_get_all.return_value = [] self.host_manager = host_manager.HostManager() self.assertEqual({}, self.host_manager.aggs_by_id) self.assertEqual({}, 
self.host_manager.host_aggregates_map) @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(objects.AggregateList, 'get_all') def test_init_aggregates_one_agg_no_hosts(self, agg_get_all, mock_init_info): fake_agg = objects.Aggregate(id=1, hosts=[]) agg_get_all.return_value = [fake_agg] self.host_manager = host_manager.HostManager() self.assertEqual({1: fake_agg}, self.host_manager.aggs_by_id) self.assertEqual({}, self.host_manager.host_aggregates_map) @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(objects.AggregateList, 'get_all') def test_init_aggregates_one_agg_with_hosts(self, agg_get_all, mock_init_info): fake_agg = objects.Aggregate(id=1, hosts=['fake-host']) agg_get_all.return_value = [fake_agg] self.host_manager = host_manager.HostManager() self.assertEqual({1: fake_agg}, self.host_manager.aggs_by_id) self.assertEqual({'fake-host': set([1])}, self.host_manager.host_aggregates_map) def test_update_aggregates(self): fake_agg = objects.Aggregate(id=1, hosts=['fake-host']) self.host_manager.update_aggregates([fake_agg]) self.assertEqual({1: fake_agg}, self.host_manager.aggs_by_id) self.assertEqual({'fake-host': set([1])}, self.host_manager.host_aggregates_map) def test_update_aggregates_remove_hosts(self): fake_agg = objects.Aggregate(id=1, hosts=['fake-host']) self.host_manager.update_aggregates([fake_agg]) self.assertEqual({1: fake_agg}, self.host_manager.aggs_by_id) self.assertEqual({'fake-host': set([1])}, self.host_manager.host_aggregates_map) # Let's remove the host from the aggregate and update again fake_agg.hosts = [] self.host_manager.update_aggregates([fake_agg]) self.assertEqual({1: fake_agg}, self.host_manager.aggs_by_id) self.assertEqual({'fake-host': set([])}, self.host_manager.host_aggregates_map) def test_delete_aggregate(self): fake_agg = objects.Aggregate(id=1, hosts=['fake-host']) self.host_manager.host_aggregates_map = collections.defaultdict( set, {'fake-host': set([1])}) self.host_manager.aggs_by_id = {1: fake_agg} self.host_manager.delete_aggregate(fake_agg) self.assertEqual({}, self.host_manager.aggs_by_id) self.assertEqual({'fake-host': set([])}, self.host_manager.host_aggregates_map) def test_choose_host_filters_not_found(self): self.assertRaises(exception.SchedulerHostFilterNotFound, self.host_manager._choose_host_filters, 'FakeFilterClass3') def test_choose_host_filters(self): # Test we return 1 correct filter object host_filters = self.host_manager._choose_host_filters( ['FakeFilterClass2']) self.assertEqual(1, len(host_filters)) self.assertIsInstance(host_filters[0], FakeFilterClass2) def _mock_get_filtered_hosts(self, info): info['got_objs'] = [] info['got_fprops'] = [] def fake_filter_one(_self, obj, filter_props): info['got_objs'].append(obj) info['got_fprops'].append(filter_props) return True self.stub_out(__name__ + '.FakeFilterClass1._filter_one', fake_filter_one) def _verify_result(self, info, result, filters=True): for x in info['got_fprops']: self.assertEqual(x, info['expected_fprops']) if filters: self.assertEqual(set(info['expected_objs']), set(info['got_objs'])) self.assertEqual(set(info['expected_objs']), set(result)) def test_get_filtered_hosts(self): fake_properties = objects.RequestSpec(ignore_hosts=[], instance_uuid=uuids.instance, force_hosts=[], force_nodes=[]) info = {'expected_objs': self.fake_hosts, 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) 
self._verify_result(info, result) def test_get_filtered_hosts_with_requested_destination(self): dest = objects.Destination(host='fake_host1', node='fake-node') fake_properties = objects.RequestSpec(requested_destination=dest, ignore_hosts=[], instance_uuid=uuids.fake_uuid1, force_hosts=[], force_nodes=[]) info = {'expected_objs': [self.fake_hosts[0]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result) def test_get_filtered_hosts_with_wrong_requested_destination(self): dest = objects.Destination(host='dummy', node='fake-node') fake_properties = objects.RequestSpec(requested_destination=dest, ignore_hosts=[], instance_uuid=uuids.fake_uuid1, force_hosts=[], force_nodes=[]) info = {'expected_objs': [], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result) def test_get_filtered_hosts_with_ignore(self): fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=['fake_host1', 'fake_host3', 'fake_host5', 'fake_multihost'], force_hosts=[], force_nodes=[]) # [1] and [3] are host2 and host4 info = {'expected_objs': [self.fake_hosts[1], self.fake_hosts[3]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result) def test_get_filtered_hosts_with_ignore_case_insensitive(self): fake_properties = objects.RequestSpec( instance_uuids=uuids.fakehost, ignore_hosts=['FAKE_HOST1', 'FaKe_HoSt3', 'Fake_Multihost'], force_hosts=[], force_nodes=[]) # [1] and [3] are host2 and host4 info = {'expected_objs': [self.fake_hosts[1], self.fake_hosts[3]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result) def test_get_filtered_hosts_with_force_hosts(self): fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=[], force_hosts=['fake_host1', 'fake_host3', 'fake_host5'], force_nodes=[]) # [0] and [2] are host1 and host3 info = {'expected_objs': [self.fake_hosts[0], self.fake_hosts[2]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_force_case_insensitive(self): fake_properties = objects.RequestSpec( instance_uuids=uuids.fakehost, ignore_hosts=[], force_hosts=['FAKE_HOST1', 'FaKe_HoSt3', 'fake_host4', 'faKe_host5'], force_nodes=[]) # [1] and [3] are host2 and host4 info = {'expected_objs': [self.fake_hosts[0], self.fake_hosts[2], self.fake_hosts[3]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_no_matching_force_hosts(self): fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=[], force_hosts=['fake_host5', 'fake_host6'], force_nodes=[]) info = {'expected_objs': [], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) with mock.patch.object(self.host_manager.filter_handler, 'get_filtered_objects') as fake_filter: result = 
self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self.assertFalse(fake_filter.called) self._verify_result(info, result, False) def test_get_filtered_hosts_with_ignore_and_force_hosts(self): # Ensure ignore_hosts processed before force_hosts in host filters. fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=['fake_host1'], force_hosts=['fake_host3', 'fake_host1'], force_nodes=[]) # only fake_host3 should be left. info = {'expected_objs': [self.fake_hosts[2]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_force_host_and_many_nodes(self): # Ensure all nodes returned for a host with many nodes fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=[], force_hosts=['fake_multihost'], force_nodes=[]) info = {'expected_objs': [self.fake_hosts[4], self.fake_hosts[5], self.fake_hosts[6], self.fake_hosts[7]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_force_nodes(self): fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=[], force_hosts=[], force_nodes=['fake-node2', 'fake-node4', 'fake-node9']) # [5] is fake-node2, [7] is fake-node4 info = {'expected_objs': [self.fake_hosts[5], self.fake_hosts[7]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_force_hosts_and_nodes(self): # Ensure only overlapping results if both force host and node fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=[], force_hosts=['fake-host1', 'fake_multihost'], force_nodes=['fake-node2', 'fake-node9']) # [5] is fake-node2 info = {'expected_objs': [self.fake_hosts[5]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_force_hosts_and_wrong_nodes(self): # Ensure non-overlapping force_node and force_host yield no result fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=[], force_hosts=['fake_multihost'], force_nodes=['fake-node']) info = {'expected_objs': [], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_ignore_hosts_and_force_nodes(self): # Ensure ignore_hosts can coexist with force_nodes fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=['fake_host1', 'fake_host2'], force_hosts=[], force_nodes=['fake-node4', 'fake-node2']) info = {'expected_objs': [self.fake_hosts[5], self.fake_hosts[7]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_ignore_hosts_and_force_same_nodes(self): # Ensure ignore_hosts is processed before force_nodes fake_properties = objects.RequestSpec( 
instance_uuid=uuids.instance, ignore_hosts=['fake_multihost'], force_hosts=[], force_nodes=['fake_node4', 'fake_node2']) info = {'expected_objs': [], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) @mock.patch('nova.scheduler.host_manager.LOG') @mock.patch('nova.objects.ServiceList.get_by_binary') @mock.patch('nova.objects.ComputeNodeList.get_all') @mock.patch('nova.objects.InstanceList.get_uuids_by_host') def test_get_host_states(self, mock_get_by_host, mock_get_all, mock_get_by_binary, mock_log): mock_get_by_host.return_value = [] mock_get_all.return_value = fakes.COMPUTE_NODES mock_get_by_binary.return_value = fakes.SERVICES context = nova_context.get_admin_context() compute_nodes, services = self.host_manager._get_computes_for_cells( context, self.host_manager.enabled_cells) # _get_host_states returns a generator, so make a map from it host_states_map = {(state.host, state.nodename): state for state in self.host_manager._get_host_states( context, compute_nodes, services)} self.assertEqual(4, len(host_states_map)) calls = [ mock.call( "Host %(hostname)s has more disk space than database " "expected (%(physical)s GB > %(database)s GB)", {'physical': 3333, 'database': 3072, 'hostname': 'node3'} ), mock.call( "No compute service record found for host %(host)s", {'host': 'fake'} ) ] self.assertEqual(calls, mock_log.warning.call_args_list) # Check that .service is set properly for i in range(4): compute_node = fakes.COMPUTE_NODES[i] host = compute_node.host node = compute_node.hypervisor_hostname state_key = (host, node) self.assertEqual(host_states_map[state_key].service, obj_base.obj_to_primitive(fakes.get_service_by_host(host))) self.assertEqual(host_states_map[('host1', 'node1')].free_ram_mb, 512) # 511GB self.assertEqual(host_states_map[('host1', 'node1')].free_disk_mb, 524288) self.assertEqual(host_states_map[('host2', 'node2')].free_ram_mb, 1024) # 1023GB self.assertEqual(host_states_map[('host2', 'node2')].free_disk_mb, 1048576) self.assertEqual(host_states_map[('host3', 'node3')].free_ram_mb, 3072) # 3071GB self.assertEqual(host_states_map[('host3', 'node3')].free_disk_mb, 3145728) self.assertEqual(host_states_map[('host4', 'node4')].free_ram_mb, 8192) # 8191GB self.assertEqual(host_states_map[('host4', 'node4')].free_disk_mb, 8388608) @mock.patch.object(nova.objects.InstanceList, 'get_uuids_by_host') @mock.patch.object(host_manager.HostState, '_update_from_compute_node') @mock.patch.object(objects.ComputeNodeList, 'get_all') @mock.patch.object(objects.ServiceList, 'get_by_binary') def test_get_host_states_with_no_aggs(self, svc_get_by_binary, cn_get_all, update_from_cn, mock_get_by_host): svc_get_by_binary.return_value = [objects.Service(host='fake')] cn_get_all.return_value = [ objects.ComputeNode(host='fake', hypervisor_hostname='fake')] mock_get_by_host.return_value = [] self.host_manager.host_aggregates_map = collections.defaultdict(set) context = nova_context.get_admin_context() compute_nodes, services = self.host_manager._get_computes_for_cells( context, self.host_manager.enabled_cells) hosts = self.host_manager._get_host_states( context, compute_nodes, services) # _get_host_states returns a generator, so make a map from it host_states_map = {(state.host, state.nodename): state for state in hosts} host_state = host_states_map[('fake', 'fake')] self.assertEqual([], host_state.aggregates) @mock.patch.object(nova.objects.InstanceList, 
'get_uuids_by_host') @mock.patch.object(host_manager.HostState, '_update_from_compute_node') @mock.patch.object(objects.ComputeNodeList, 'get_all') @mock.patch.object(objects.ServiceList, 'get_by_binary') def test_get_host_states_with_matching_aggs(self, svc_get_by_binary, cn_get_all, update_from_cn, mock_get_by_host): svc_get_by_binary.return_value = [objects.Service(host='fake')] cn_get_all.return_value = [ objects.ComputeNode(host='fake', hypervisor_hostname='fake')] mock_get_by_host.return_value = [] fake_agg = objects.Aggregate(id=1) self.host_manager.host_aggregates_map = collections.defaultdict( set, {'fake': set([1])}) self.host_manager.aggs_by_id = {1: fake_agg} context = nova_context.get_admin_context() compute_nodes, services = self.host_manager._get_computes_for_cells( context, self.host_manager.enabled_cells) hosts = self.host_manager._get_host_states( context, compute_nodes, services) # _get_host_states returns a generator, so make a map from it host_states_map = {(state.host, state.nodename): state for state in hosts} host_state = host_states_map[('fake', 'fake')] self.assertEqual([fake_agg], host_state.aggregates) @mock.patch.object(nova.objects.InstanceList, 'get_uuids_by_host') @mock.patch.object(host_manager.HostState, '_update_from_compute_node') @mock.patch.object(objects.ComputeNodeList, 'get_all') @mock.patch.object(objects.ServiceList, 'get_by_binary') def test_get_host_states_with_not_matching_aggs(self, svc_get_by_binary, cn_get_all, update_from_cn, mock_get_by_host): svc_get_by_binary.return_value = [objects.Service(host='fake'), objects.Service(host='other')] cn_get_all.return_value = [ objects.ComputeNode(host='fake', hypervisor_hostname='fake'), objects.ComputeNode(host='other', hypervisor_hostname='other')] mock_get_by_host.return_value = [] fake_agg = objects.Aggregate(id=1) self.host_manager.host_aggregates_map = collections.defaultdict( set, {'other': set([1])}) self.host_manager.aggs_by_id = {1: fake_agg} context = nova_context.get_admin_context() compute_nodes, services = self.host_manager._get_computes_for_cells( context, self.host_manager.enabled_cells) hosts = self.host_manager._get_host_states( context, compute_nodes, services) # _get_host_states returns a generator, so make a map from it host_states_map = {(state.host, state.nodename): state for state in hosts} host_state = host_states_map[('fake', 'fake')] self.assertEqual([], host_state.aggregates) @mock.patch.object(nova.objects.InstanceList, 'get_uuids_by_host', return_value=[]) @mock.patch.object(host_manager.HostState, '_update_from_compute_node') @mock.patch.object(objects.ComputeNodeList, 'get_all') @mock.patch.object(objects.ServiceList, 'get_by_binary') def test_get_host_states_corrupt_aggregates_info(self, svc_get_by_binary, cn_get_all, update_from_cn, mock_get_by_host): """Regression test for bug 1605804 A host can be in multiple host-aggregates at the same time. When a host gets removed from an aggregate in thread A and this aggregate gets deleted in thread B, there can be a race-condition where the mapping data in the host_manager can get out of sync for a moment. This test simulates this condition for the bug-fix. 
""" host_a = 'host_a' host_b = 'host_b' svc_get_by_binary.return_value = [objects.Service(host=host_a), objects.Service(host=host_b)] cn_get_all.return_value = [ objects.ComputeNode(host=host_a, hypervisor_hostname=host_a), objects.ComputeNode(host=host_b, hypervisor_hostname=host_b)] aggregate = objects.Aggregate(id=1) aggregate.hosts = [host_a, host_b] aggr_list = objects.AggregateList() aggr_list.objects = [aggregate] self.host_manager.update_aggregates(aggr_list) aggregate.hosts = [host_a] self.host_manager.delete_aggregate(aggregate) context = nova_context.get_admin_context() compute_nodes, services = self.host_manager._get_computes_for_cells( context, self.host_manager.enabled_cells) self.host_manager._get_host_states(context, compute_nodes, services) @mock.patch('nova.objects.InstanceList.get_by_host') def test_host_state_update(self, mock_get_by_host): context = 'fake_context' hm = self.host_manager inst1 = objects.Instance(uuid=uuids.instance) cn1 = objects.ComputeNode(host='host1') hm._instance_info = {'host1': {'instances': {uuids.instance: inst1}, 'updated': True}} host_state = host_manager.HostState('host1', cn1, uuids.cell) self.assertFalse(host_state.instances) mock_get_by_host.return_value = None host_state.update( inst_dict=hm._get_instance_info(context, cn1)) self.assertFalse(mock_get_by_host.called) self.assertTrue(host_state.instances) self.assertEqual(host_state.instances[uuids.instance], inst1) @mock.patch('nova.objects.InstanceList.get_uuids_by_host') def test_host_state_not_updated(self, mock_get_by_host): context = nova_context.get_admin_context() hm = self.host_manager inst1 = objects.Instance(uuid=uuids.instance) cn1 = objects.ComputeNode(host='host1') hm._instance_info = {'host1': {'instances': {uuids.instance: inst1}, 'updated': False}} host_state = host_manager.HostState('host1', cn1, uuids.cell) self.assertFalse(host_state.instances) mock_get_by_host.return_value = [uuids.instance] host_state.update( inst_dict=hm._get_instance_info(context, cn1)) mock_get_by_host.assert_called_once_with(context, cn1.host) self.assertTrue(host_state.instances) self.assertIn(uuids.instance, host_state.instances) inst = host_state.instances[uuids.instance] self.assertEqual(uuids.instance, inst.uuid) self.assertIsNotNone(inst._context, 'Instance is orphaned.') @mock.patch('nova.objects.InstanceList.get_uuids_by_host') def test_recreate_instance_info(self, mock_get_by_host): context = nova_context.get_admin_context() host_name = 'fake_host' inst1 = fake_instance.fake_instance_obj(context, uuid=uuids.instance_1) inst2 = fake_instance.fake_instance_obj(context, uuid=uuids.instance_2) orig_inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2} mock_get_by_host.return_value = [uuids.instance_1, uuids.instance_2] self.host_manager._instance_info = { host_name: { 'instances': orig_inst_dict, 'updated': True, }} self.host_manager._recreate_instance_info(context, host_name) new_info = self.host_manager._instance_info[host_name] self.assertEqual(len(new_info['instances']), len(mock_get_by_host.return_value)) self.assertFalse(new_info['updated']) def test_update_instance_info(self): host_name = 'fake_host' inst1 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_1, host=host_name) inst2 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_2, host=host_name) orig_inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2} self.host_manager._instance_info = { host_name: { 'instances': orig_inst_dict, 'updated': False, }} inst3 = 
fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_3, host=host_name) inst4 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_4, host=host_name) update = objects.InstanceList(objects=[inst3, inst4]) self.host_manager.update_instance_info('fake_context', host_name, update) new_info = self.host_manager._instance_info[host_name] self.assertEqual(len(new_info['instances']), 4) self.assertTrue(new_info['updated']) def test_update_instance_info_unknown_host(self): self.host_manager._recreate_instance_info = mock.MagicMock() host_name = 'fake_host' inst1 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_1, host=host_name) inst2 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_2, host=host_name) orig_inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2} self.host_manager._instance_info = { host_name: { 'instances': orig_inst_dict, 'updated': False, }} bad_host = 'bad_host' inst3 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_3, host=bad_host) inst_list3 = objects.InstanceList(objects=[inst3]) self.host_manager.update_instance_info('fake_context', bad_host, inst_list3) new_info = self.host_manager._instance_info[host_name] self.host_manager._recreate_instance_info.assert_called_once_with( 'fake_context', bad_host) self.assertEqual(len(new_info['instances']), len(orig_inst_dict)) self.assertFalse(new_info['updated']) @mock.patch('nova.objects.HostMapping.get_by_host', side_effect=exception.HostMappingNotFound(name='host1')) def test_update_instance_info_unknown_host_mapping_not_found(self, get_by_host): """Tests that case that update_instance_info is called with an unregistered host so the host manager attempts to recreate the instance list, but there is no host mapping found for the given host (it might have just started not be discovered for cells v2 yet). 
""" ctxt = nova_context.RequestContext() instance_info = objects.InstanceList() self.host_manager.update_instance_info(ctxt, 'host1', instance_info) self.assertDictEqual( {}, self.host_manager._instance_info['host1']['instances']) get_by_host.assert_called_once_with(ctxt, 'host1') def test_delete_instance_info(self): host_name = 'fake_host' inst1 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_1, host=host_name) inst2 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_2, host=host_name) orig_inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2} self.host_manager._instance_info = { host_name: { 'instances': orig_inst_dict, 'updated': False, }} self.host_manager.delete_instance_info('fake_context', host_name, inst1.uuid) new_info = self.host_manager._instance_info[host_name] self.assertEqual(len(new_info['instances']), 1) self.assertTrue(new_info['updated']) def test_delete_instance_info_unknown_host(self): self.host_manager._recreate_instance_info = mock.MagicMock() host_name = 'fake_host' inst1 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_1, host=host_name) inst2 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_2, host=host_name) orig_inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2} self.host_manager._instance_info = { host_name: { 'instances': orig_inst_dict, 'updated': False, }} bad_host = 'bad_host' self.host_manager.delete_instance_info('fake_context', bad_host, uuids.instance_1) new_info = self.host_manager._instance_info[host_name] self.host_manager._recreate_instance_info.assert_called_once_with( 'fake_context', bad_host) self.assertEqual(len(new_info['instances']), len(orig_inst_dict)) self.assertFalse(new_info['updated']) def test_sync_instance_info(self): self.host_manager._recreate_instance_info = mock.MagicMock() host_name = 'fake_host' inst1 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_1, host=host_name) inst2 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_2, host=host_name) orig_inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2} self.host_manager._instance_info = { host_name: { 'instances': orig_inst_dict, 'updated': False, }} self.host_manager.sync_instance_info('fake_context', host_name, [uuids.instance_2, uuids.instance_1]) new_info = self.host_manager._instance_info[host_name] self.assertFalse(self.host_manager._recreate_instance_info.called) self.assertTrue(new_info['updated']) def test_sync_instance_info_fail(self): self.host_manager._recreate_instance_info = mock.MagicMock() host_name = 'fake_host' inst1 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_1, host=host_name) inst2 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_2, host=host_name) orig_inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2} self.host_manager._instance_info = { host_name: { 'instances': orig_inst_dict, 'updated': False, }} self.host_manager.sync_instance_info('fake_context', host_name, [uuids.instance_2, uuids.instance_1, 'new']) new_info = self.host_manager._instance_info[host_name] self.host_manager._recreate_instance_info.assert_called_once_with( 'fake_context', host_name) self.assertFalse(new_info['updated']) @mock.patch('nova.objects.CellMappingList.get_all') @mock.patch('nova.objects.ComputeNodeList.get_all') @mock.patch('nova.objects.ServiceList.get_by_binary') def test_get_computes_for_cells(self, mock_sl, mock_cn, mock_cm): cells = [ objects.CellMapping(uuid=uuids.cell1, db_connection='none://1', 
transport_url='none://'), objects.CellMapping(uuid=uuids.cell2, db_connection='none://2', transport_url='none://'), ] mock_cm.return_value = cells mock_sl.side_effect = [ [objects.ServiceList(host='foo')], [objects.ServiceList(host='bar')], ] mock_cn.side_effect = [ [objects.ComputeNode(host='foo')], [objects.ComputeNode(host='bar')], ] context = nova_context.RequestContext('fake', 'fake') cns, srv = self.host_manager._get_computes_for_cells(context, cells) self.assertEqual({uuids.cell1: ['foo'], uuids.cell2: ['bar']}, {cell: [cn.host for cn in computes] for cell, computes in cns.items()}) self.assertEqual(['bar', 'foo'], sorted(list(srv.keys()))) @mock.patch('nova.objects.CellMappingList.get_all') @mock.patch('nova.objects.ComputeNodeList.get_all_by_uuids') @mock.patch('nova.objects.ServiceList.get_by_binary') def test_get_computes_for_cells_uuid(self, mock_sl, mock_cn, mock_cm): cells = [ objects.CellMapping(uuid=uuids.cell1, db_connection='none://1', transport_url='none://'), objects.CellMapping(uuid=uuids.cell2, db_connection='none://2', transport_url='none://'), ] mock_cm.return_value = cells mock_sl.side_effect = [ [objects.ServiceList(host='foo')], [objects.ServiceList(host='bar')], ] mock_cn.side_effect = [ [objects.ComputeNode(host='foo')], [objects.ComputeNode(host='bar')], ] context = nova_context.RequestContext('fake', 'fake') cns, srv = self.host_manager._get_computes_for_cells(context, cells, []) self.assertEqual({uuids.cell1: ['foo'], uuids.cell2: ['bar']}, {cell: [cn.host for cn in computes] for cell, computes in cns.items()}) self.assertEqual(['bar', 'foo'], sorted(list(srv.keys()))) @mock.patch('nova.context.target_cell') @mock.patch('nova.objects.CellMappingList.get_all') @mock.patch('nova.objects.ComputeNodeList.get_all') @mock.patch('nova.objects.ServiceList.get_by_binary') def test_get_computes_for_cells_limit_to_cell(self, mock_sl, mock_cn, mock_cm, mock_target): host_manager.LOG.debug = host_manager.LOG.error cells = [ objects.CellMapping(uuid=uuids.cell1, database_connection='none://1', transport_url='none://'), objects.CellMapping(uuid=uuids.cell2, database_connection='none://2', transport_url='none://'), ] mock_sl.return_value = [objects.ServiceList(host='foo')] mock_cn.return_value = [objects.ComputeNode(host='foo')] mock_cm.return_value = cells @contextlib.contextmanager def fake_set_target(context, cell): yield mock.sentinel.cctxt mock_target.side_effect = fake_set_target context = nova_context.RequestContext('fake', 'fake') cns, srv = self.host_manager._get_computes_for_cells( context, cells=cells[1:]) self.assertEqual({uuids.cell2: ['foo']}, {cell: [cn.host for cn in computes] for cell, computes in cns.items()}) self.assertEqual(['foo'], list(srv.keys())) # NOTE(danms): We have two cells, but we should only have # targeted one if we honored the only-cell destination requirement, # and only looked up services and compute nodes in one mock_target.assert_called_once_with(context, cells[1]) mock_cn.assert_called_once_with(mock.sentinel.cctxt) mock_sl.assert_called_once_with(mock.sentinel.cctxt, 'nova-compute', include_disabled=True) @mock.patch('nova.context.scatter_gather_cells') def test_get_computes_for_cells_failures(self, mock_sg): mock_sg.return_value = { uuids.cell1: ([mock.MagicMock(host='a'), mock.MagicMock(host='b')], [mock.sentinel.c1n1, mock.sentinel.c1n2]), uuids.cell2: nova_context.did_not_respond_sentinel, uuids.cell3: exception.ComputeHostNotFound(host='c'), } context = nova_context.RequestContext('fake', 'fake') cns, srv = 
self.host_manager._get_computes_for_cells(context, []) self.assertEqual({uuids.cell1: [mock.sentinel.c1n1, mock.sentinel.c1n2]}, cns) self.assertEqual(['a', 'b'], sorted(srv.keys())) @mock.patch('nova.objects.HostMapping.get_by_host') @mock.patch('nova.objects.ComputeNode.get_by_nodename') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.objects.ComputeNodeList.get_all_by_host') def test_get_compute_nodes_by_host_or_node(self, mock_get_all, mock_get_host_node, mock_get_node, mock_get_hm): def _varify_result(expected, result): self.assertEqual(len(expected), len(result)) for expected_cn, result_cn in zip(expected, result): self.assertEqual(expected_cn.host, result_cn.host) self.assertEqual(expected_cn.node, result_cn.node) context = nova_context.RequestContext('fake', 'fake') cn1 = objects.ComputeNode(host='fake_multihost', node='fake_node1') cn2 = objects.ComputeNode(host='fake_multihost', node='fake_node2') cn3 = objects.ComputeNode(host='fake_host1', node='fake_node') mock_get_all.return_value = objects.ComputeNodeList(objects=[cn1, cn2]) mock_get_host_node.return_value = cn1 mock_get_node.return_value = cn3 mock_get_hm.return_value = objects.HostMapping( context=context, host='fake_multihost', cell_mapping=objects.CellMapping(uuid=uuids.cell1, db_connection='none://1', transport_url='none://')) # Case1: call it with host host = 'fake_multihost' node = None result = self.host_manager.get_compute_nodes_by_host_or_node( context, host, node) expected = objects.ComputeNodeList(objects=[cn1, cn2]) _varify_result(expected, result) mock_get_all.assert_called_once_with(context, 'fake_multihost') mock_get_host_node.assert_not_called() mock_get_node.assert_not_called() mock_get_hm.assert_called_once_with(context, 'fake_multihost') mock_get_all.reset_mock() mock_get_hm.reset_mock() # Case2: call it with host and node host = 'fake_multihost' node = 'fake_node1' result = self.host_manager.get_compute_nodes_by_host_or_node( context, host, node) expected = objects.ComputeNodeList(objects=[cn1]) _varify_result(expected, result) mock_get_all.assert_not_called() mock_get_host_node.assert_called_once_with( context, 'fake_multihost', 'fake_node1') mock_get_node.assert_not_called() mock_get_hm.assert_called_once_with(context, 'fake_multihost') mock_get_host_node.reset_mock() mock_get_hm.reset_mock() # Case3: call it with node host = None node = 'fake_node' result = self.host_manager.get_compute_nodes_by_host_or_node( context, host, node) expected = objects.ComputeNodeList(objects=[cn3]) _varify_result(expected, result) mock_get_all.assert_not_called() mock_get_host_node.assert_not_called() mock_get_node.assert_called_once_with(context, 'fake_node') mock_get_hm.assert_not_called() @mock.patch('nova.objects.HostMapping.get_by_host') @mock.patch('nova.objects.ComputeNodeList.get_all_by_host') def test_get_compute_nodes_by_host_or_node_empty_list( self, mock_get_all, mock_get_hm): mock_get_all.side_effect = exception.ComputeHostNotFound(host='fake') mock_get_hm.side_effect = exception.HostMappingNotFound(name='fake') context = nova_context.RequestContext('fake', 'fake') host = 'fake' node = None result = self.host_manager.get_compute_nodes_by_host_or_node( context, host, node) self.assertEqual(0, len(result)) @mock.patch('nova.context.scatter_gather_cells', side_effect=( # called twice, different return values {uuids.cell1: test.TestingException('conn fail')}, {uuids.cell1: nova_context.did_not_respond_sentinel})) def test_get_compute_nodes_by_host_or_node_filter_errors(self, 
mock_sgc): """Make sure get_compute_nodes_by_host_or_node filters out exception and cell timeout results from scatter_gather_cells. """ ctxt = nova_context.get_context() cell1 = objects.CellMapping(uuid=uuids.cell1) for x in range(2): # run twice because we have two side effects nodes = self.host_manager.get_compute_nodes_by_host_or_node( ctxt, 'host1', None, cell=cell1) self.assertEqual(0, len(nodes), nodes) self.assertEqual(2, mock_sgc.call_count, mock_sgc.mock_calls) mock_sgc.assert_has_calls([mock.call( ctxt, [cell1], nova_context.CELL_TIMEOUT, mock.ANY)] * 2) class HostManagerChangedNodesTestCase(test.NoDBTestCase): """Test case for HostManager class.""" @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(host_manager.HostManager, '_init_aggregates') def setUp(self, mock_init_agg, mock_init_inst): super(HostManagerChangedNodesTestCase, self).setUp() self.host_manager = host_manager.HostManager() @mock.patch('nova.objects.ServiceList.get_by_binary') @mock.patch('nova.objects.ComputeNodeList.get_all') @mock.patch('nova.objects.InstanceList.get_uuids_by_host') def test_get_host_states(self, mock_get_by_host, mock_get_all, mock_get_by_binary): mock_get_by_host.return_value = [] mock_get_all.return_value = fakes.COMPUTE_NODES mock_get_by_binary.return_value = fakes.SERVICES context = nova_context.get_admin_context() compute_nodes, services = self.host_manager._get_computes_for_cells( context, self.host_manager.enabled_cells) # _get_host_states returns a generator, so make a map from it host_states_map = {(state.host, state.nodename): state for state in self.host_manager._get_host_states( context, compute_nodes, services)} self.assertEqual(len(host_states_map), 4) @mock.patch('nova.objects.ServiceList.get_by_binary') @mock.patch('nova.objects.ComputeNodeList.get_all') @mock.patch('nova.objects.InstanceList.get_uuids_by_host') def test_get_host_states_after_delete_one(self, mock_get_by_host, mock_get_all, mock_get_by_binary): getter = (lambda n: n.hypervisor_hostname if 'hypervisor_hostname' in n else None) running_nodes = [n for n in fakes.COMPUTE_NODES if getter(n) != 'node4'] mock_get_by_host.return_value = [] mock_get_all.side_effect = [fakes.COMPUTE_NODES, running_nodes] mock_get_by_binary.side_effect = [fakes.SERVICES, fakes.SERVICES] context = nova_context.get_admin_context() # first call: all nodes compute_nodes, services = self.host_manager._get_computes_for_cells( context, self.host_manager.enabled_cells) hosts = self.host_manager._get_host_states( context, compute_nodes, services) # _get_host_states returns a generator, so make a map from it host_states_map = {(state.host, state.nodename): state for state in hosts} self.assertEqual(len(host_states_map), 4) # second call: just running nodes compute_nodes, services = self.host_manager._get_computes_for_cells( context, self.host_manager.enabled_cells) hosts = self.host_manager._get_host_states( context, compute_nodes, services) host_states_map = {(state.host, state.nodename): state for state in hosts} self.assertEqual(len(host_states_map), 3) @mock.patch('nova.objects.ServiceList.get_by_binary') @mock.patch('nova.objects.ComputeNodeList.get_all') @mock.patch('nova.objects.InstanceList.get_uuids_by_host') def test_get_host_states_after_delete_all(self, mock_get_by_host, mock_get_all, mock_get_by_binary): mock_get_by_host.return_value = [] mock_get_all.side_effect = [fakes.COMPUTE_NODES, []] mock_get_by_binary.side_effect = [fakes.SERVICES, fakes.SERVICES] context = nova_context.get_admin_context() # 
first call: all nodes compute_nodes, services = self.host_manager._get_computes_for_cells( context, self.host_manager.enabled_cells) hosts = self.host_manager._get_host_states( context, compute_nodes, services) # _get_host_states returns a generator, so make a map from it host_states_map = {(state.host, state.nodename): state for state in hosts} self.assertEqual(len(host_states_map), 4) # second call: no nodes compute_nodes, services = self.host_manager._get_computes_for_cells( context, self.host_manager.enabled_cells) hosts = self.host_manager._get_host_states( context, compute_nodes, services) host_states_map = {(state.host, state.nodename): state for state in hosts} self.assertEqual(len(host_states_map), 0) @mock.patch('nova.objects.ServiceList.get_by_binary') @mock.patch('nova.objects.ComputeNodeList.get_all_by_uuids') @mock.patch('nova.objects.InstanceList.get_uuids_by_host') def test_get_host_states_by_uuids(self, mock_get_by_host, mock_get_all, mock_get_by_binary): mock_get_by_host.return_value = [] mock_get_all.side_effect = [fakes.COMPUTE_NODES, []] mock_get_by_binary.side_effect = [fakes.SERVICES, fakes.SERVICES] # Request 1: all nodes can satisfy the request hosts1 = self.host_manager.get_host_states_by_uuids( mock.sentinel.ctxt1, mock.sentinel.uuids1, objects.RequestSpec()) # get_host_states_by_uuids returns a generator so convert the values # into an iterator host_states1 = iter(hosts1) # Request 2: no nodes can satisfy the request hosts2 = self.host_manager.get_host_states_by_uuids( mock.sentinel.ctxt2, mock.sentinel.uuids2, objects.RequestSpec()) host_states2 = iter(hosts2) # Fake a concurrent request that is still processing the first result # to make sure all nodes are still available candidates to Request 1. num_hosts1 = len(list(host_states1)) self.assertEqual(4, num_hosts1) # Verify that no nodes are available to Request 2. num_hosts2 = len(list(host_states2)) self.assertEqual(0, num_hosts2) @mock.patch('nova.scheduler.host_manager.HostManager.' '_get_computes_for_cells', return_value=(mock.sentinel.compute_nodes, mock.sentinel.services)) @mock.patch('nova.scheduler.host_manager.HostManager._get_host_states') def test_get_host_states_by_uuids_allow_cross_cell_move( self, mock_get_host_states, mock_get_computes): """Tests that get_host_states_by_uuids will not restrict to a given cell if allow_cross_cell_move=True in the request spec. 
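        In that case the manager is expected to query all enabled cells
        rather than only the cell named in requested_destination, which the
        assertion on _get_computes_for_cells below verifies.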
""" ctxt = nova_context.get_admin_context() compute_uuids = [uuids.compute_node_uuid] spec_obj = objects.RequestSpec( requested_destination=objects.Destination( cell=objects.CellMapping(uuid=uuids.cell1), allow_cross_cell_move=True)) self.host_manager.get_host_states_by_uuids( ctxt, compute_uuids, spec_obj) mock_get_computes.assert_called_once_with( ctxt, self.host_manager.enabled_cells, compute_uuids=compute_uuids) mock_get_host_states.assert_called_once_with( ctxt, mock.sentinel.compute_nodes, mock.sentinel.services) class HostStateTestCase(test.NoDBTestCase): """Test case for HostState class.""" # update_from_compute_node() and consume_from_request() are tested # in HostManagerTestCase.test_get_host_states() @mock.patch('nova.utils.synchronized', side_effect=lambda a: lambda f: lambda *args: f(*args)) def test_stat_consumption_from_compute_node(self, sync_mock): stats = { 'num_instances': '5', 'num_proj_12345': '3', 'num_proj_23456': '1', 'num_vm_%s' % vm_states.BUILDING: '2', 'num_vm_%s' % vm_states.SUSPENDED: '1', 'num_task_%s' % task_states.RESIZE_MIGRATING: '1', 'num_task_%s' % task_states.MIGRATING: '2', 'num_os_type_linux': '4', 'num_os_type_windoze': '1', 'io_workload': '42', } hyper_ver_int = versionutils.convert_version_to_int('6.0.0') compute = objects.ComputeNode( uuid=uuids.cn1, stats=stats, memory_mb=1, free_disk_gb=0, local_gb=0, local_gb_used=0, free_ram_mb=0, vcpus=0, vcpus_used=0, disk_available_least=None, updated_at=datetime.datetime(2015, 11, 11, 11, 0, 0), host_ip='127.0.0.1', hypervisor_type='htype', hypervisor_hostname='hostname', cpu_info='cpu_info', supported_hv_specs=[], hypervisor_version=hyper_ver_int, numa_topology=None, pci_device_pools=None, metrics=None, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0) host = host_manager.HostState("fakehost", "fakenode", uuids.cell) host.update(compute=compute) sync_mock.assert_called_once_with(("fakehost", "fakenode")) self.assertEqual(5, host.num_instances) self.assertEqual(42, host.num_io_ops) self.assertEqual(10, len(host.stats)) self.assertEqual('127.0.0.1', str(host.host_ip)) self.assertEqual('htype', host.hypervisor_type) self.assertEqual('hostname', host.hypervisor_hostname) self.assertEqual('cpu_info', host.cpu_info) self.assertEqual([], host.supported_instances) self.assertEqual(hyper_ver_int, host.hypervisor_version) def test_stat_consumption_from_compute_node_non_pci(self): stats = { 'num_instances': '5', 'num_proj_12345': '3', 'num_proj_23456': '1', 'num_vm_%s' % vm_states.BUILDING: '2', 'num_vm_%s' % vm_states.SUSPENDED: '1', 'num_task_%s' % task_states.RESIZE_MIGRATING: '1', 'num_task_%s' % task_states.MIGRATING: '2', 'num_os_type_linux': '4', 'num_os_type_windoze': '1', 'io_workload': '42', } hyper_ver_int = versionutils.convert_version_to_int('6.0.0') compute = objects.ComputeNode( uuid=uuids.cn1, stats=stats, memory_mb=0, free_disk_gb=0, local_gb=0, local_gb_used=0, free_ram_mb=0, vcpus=0, vcpus_used=0, disk_available_least=None, updated_at=datetime.datetime(2015, 11, 11, 11, 0, 0), host_ip='127.0.0.1', hypervisor_type='htype', hypervisor_hostname='hostname', cpu_info='cpu_info', supported_hv_specs=[], hypervisor_version=hyper_ver_int, numa_topology=None, pci_device_pools=None, metrics=None, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0) host = host_manager.HostState("fakehost", "fakenode", uuids.cell) host.update(compute=compute) self.assertEqual([], host.pci_stats.pools) self.assertEqual(hyper_ver_int, host.hypervisor_version) def 
test_stat_consumption_from_compute_node_rescue_unshelving(self): stats = { 'num_instances': '5', 'num_proj_12345': '3', 'num_proj_23456': '1', 'num_vm_%s' % vm_states.BUILDING: '2', 'num_vm_%s' % vm_states.SUSPENDED: '1', 'num_task_%s' % task_states.UNSHELVING: '1', 'num_task_%s' % task_states.RESCUING: '2', 'num_os_type_linux': '4', 'num_os_type_windoze': '1', 'io_workload': '42', } hyper_ver_int = versionutils.convert_version_to_int('6.0.0') compute = objects.ComputeNode( uuid=uuids.cn1, stats=stats, memory_mb=0, free_disk_gb=0, local_gb=0, local_gb_used=0, free_ram_mb=0, vcpus=0, vcpus_used=0, disk_available_least=None, updated_at=datetime.datetime(2015, 11, 11, 11, 0, 0), host_ip='127.0.0.1', hypervisor_type='htype', hypervisor_hostname='hostname', cpu_info='cpu_info', supported_hv_specs=[], hypervisor_version=hyper_ver_int, numa_topology=None, pci_device_pools=None, metrics=None, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0) host = host_manager.HostState("fakehost", "fakenode", uuids.cell) host.update(compute=compute) self.assertEqual(5, host.num_instances) self.assertEqual(42, host.num_io_ops) self.assertEqual(10, len(host.stats)) self.assertEqual([], host.pci_stats.pools) self.assertEqual(hyper_ver_int, host.hypervisor_version) @mock.patch('nova.utils.synchronized', side_effect=lambda a: lambda f: lambda *args: f(*args)) @mock.patch('nova.virt.hardware.numa_usage_from_instance_numa') @mock.patch('nova.objects.Instance') @mock.patch('nova.virt.hardware.numa_fit_instance_to_host') def test_stat_consumption_from_instance(self, numa_fit_mock, instance_init_mock, numa_usage_mock, sync_mock): fake_numa_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell()]) fake_host_numa_topology = mock.Mock() fake_instance = objects.Instance(numa_topology=fake_numa_topology) numa_usage_mock.return_value = fake_host_numa_topology numa_fit_mock.return_value = fake_numa_topology instance_init_mock.return_value = fake_instance spec_obj = objects.RequestSpec( instance_uuid=uuids.instance, flavor=objects.Flavor(root_gb=0, ephemeral_gb=0, memory_mb=0, vcpus=0), numa_topology=fake_numa_topology, pci_requests=objects.InstancePCIRequests(requests=[])) host = host_manager.HostState("fakehost", "fakenode", uuids.cell) host.numa_topology = fake_host_numa_topology self.assertIsNone(host.updated) host.consume_from_request(spec_obj) numa_fit_mock.assert_called_once_with(fake_host_numa_topology, fake_numa_topology, limits=None, pci_requests=None, pci_stats=None) numa_usage_mock.assert_called_once_with(fake_host_numa_topology, fake_numa_topology) sync_mock.assert_called_once_with(("fakehost", "fakenode")) self.assertEqual(fake_host_numa_topology, host.numa_topology) self.assertIsNotNone(host.updated) numa_fit_mock.reset_mock() numa_usage_mock.reset_mock() spec_obj = objects.RequestSpec( instance_uuid=uuids.instance, flavor=objects.Flavor(root_gb=0, ephemeral_gb=0, memory_mb=0, vcpus=0), numa_topology=None, pci_requests=objects.InstancePCIRequests(requests=[])) host.consume_from_request(spec_obj) numa_fit_mock.assert_not_called() numa_usage_mock.assert_not_called() self.assertEqual(2, host.num_instances) self.assertEqual(2, host.num_io_ops) self.assertIsNotNone(host.updated) def test_stat_consumption_from_instance_pci(self): inst_topology = objects.InstanceNUMATopology( cells = [objects.InstanceNUMACell( cpuset=set([0]), memory=512, id=0)]) fake_requests = [{'request_id': uuids.request_id, 'count': 1, 'spec': [{'vendor_id': '8086'}]}] fake_requests_obj = 
objects.InstancePCIRequests( requests=[objects.InstancePCIRequest(**r) for r in fake_requests], instance_uuid=uuids.instance) req_spec = objects.RequestSpec( instance_uuid=uuids.instance, project_id='12345', numa_topology=inst_topology, pci_requests=fake_requests_obj, flavor=objects.Flavor(root_gb=0, ephemeral_gb=0, memory_mb=512, vcpus=1)) host = host_manager.HostState("fakehost", "fakenode", uuids.cell) self.assertIsNone(host.updated) host.pci_stats = pci_stats.PciDeviceStats( [objects.PciDevicePool(vendor_id='8086', product_id='15ed', numa_node=1, count=1)]) host.numa_topology = fakes.NUMA_TOPOLOGY host.consume_from_request(req_spec) self.assertIsInstance(req_spec.numa_topology, objects.InstanceNUMATopology) self.assertEqual(512, host.numa_topology.cells[1].memory_usage) self.assertEqual(1, host.numa_topology.cells[1].cpu_usage) self.assertEqual(0, len(host.pci_stats.pools)) self.assertIsNotNone(host.updated) def test_stat_consumption_from_instance_with_pci_exception(self): fake_requests = [{'request_id': uuids.request_id, 'count': 3, 'spec': [{'vendor_id': '8086'}]}] fake_requests_obj = objects.InstancePCIRequests( requests=[objects.InstancePCIRequest(**r) for r in fake_requests], instance_uuid=uuids.instance) req_spec = objects.RequestSpec( instance_uuid=uuids.instance, project_id='12345', numa_topology=None, pci_requests=fake_requests_obj, flavor=objects.Flavor(root_gb=0, ephemeral_gb=0, memory_mb=1024, vcpus=1)) host = host_manager.HostState("fakehost", "fakenode", uuids.cell) self.assertIsNone(host.updated) fake_updated = mock.sentinel.fake_updated host.updated = fake_updated host.pci_stats = pci_stats.PciDeviceStats() with mock.patch.object(host.pci_stats, 'apply_requests', side_effect=exception.PciDeviceRequestFailed): host.consume_from_request(req_spec) self.assertEqual(fake_updated, host.updated) def test_resources_consumption_from_compute_node(self): _ts_now = datetime.datetime(2015, 11, 11, 11, 0, 0) metrics = [ dict(name='cpu.frequency', value=1.0, source='source1', timestamp=_ts_now), dict(name='numa.membw.current', numa_membw_values={"0": 10, "1": 43}, source='source2', timestamp=_ts_now), ] hyper_ver_int = versionutils.convert_version_to_int('6.0.0') compute = objects.ComputeNode( uuid=uuids.cn1, metrics=jsonutils.dumps(metrics), memory_mb=0, free_disk_gb=0, local_gb=0, local_gb_used=0, free_ram_mb=0, vcpus=0, vcpus_used=0, disk_available_least=None, updated_at=datetime.datetime(2015, 11, 11, 11, 0, 0), host_ip='127.0.0.1', hypervisor_type='htype', hypervisor_hostname='hostname', cpu_info='cpu_info', supported_hv_specs=[], hypervisor_version=hyper_ver_int, numa_topology=fakes.NUMA_TOPOLOGY._to_json(), stats=None, pci_device_pools=None, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0) host = host_manager.HostState("fakehost", "fakenode", uuids.cell) host.update(compute=compute) self.assertEqual(len(host.metrics), 2) self.assertEqual(1.0, host.metrics.to_list()[0]['value']) self.assertEqual('source1', host.metrics[0].source) self.assertEqual('cpu.frequency', host.metrics[0].name) self.assertEqual('numa.membw.current', host.metrics[1].name) self.assertEqual('source2', host.metrics.to_list()[1]['source']) self.assertEqual({'0': 10, '1': 43}, host.metrics[1].numa_membw_values) self.assertIsInstance(host.numa_topology, objects.NUMATopology) def test_stat_consumption_from_compute_node_not_ready(self): compute = objects.ComputeNode(free_ram_mb=100, uuid=uuids.compute_node_uuid) host = host_manager.HostState("fakehost", "fakenode", uuids.cell) 
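# NOTE: the ComputeNode above only has uuid and free_ram_mb set; the update
# path is expected to treat such a partially populated record as not ready
# and skip the update, so the HostState keeps its zeroed defaults (asserted
# below).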
host._update_from_compute_node(compute) # Because compute record not ready, the update of free ram # will not happen and the value will still be 0 self.assertEqual(0, host.free_ram_mb) # same with failed_builds self.assertEqual(0, host.failed_builds) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/scheduler/test_manager.py0000664000175000017500000005344100000000000022662 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Scheduler """ import mock import oslo_messaging as messaging from oslo_utils.fixture import uuidsentinel as uuids from nova import context from nova import exception from nova import objects from nova.scheduler import filter_scheduler from nova.scheduler import host_manager from nova.scheduler import manager from nova import test from nova.tests.unit import fake_server_actions from nova.tests.unit.scheduler import fakes class SchedulerManagerInitTestCase(test.NoDBTestCase): """Test case for scheduler manager initiation.""" manager_cls = manager.SchedulerManager @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(host_manager.HostManager, '_init_aggregates') def test_init_using_default_schedulerdriver(self, mock_init_agg, mock_init_inst): driver = self.manager_cls().driver self.assertIsInstance(driver, filter_scheduler.FilterScheduler) @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(host_manager.HostManager, '_init_aggregates') def test_init_nonexist_schedulerdriver(self, mock_init_agg, mock_init_inst): self.flags(driver='nonexist_scheduler', group='scheduler') # The entry point has to be defined in setup.cfg and nova-scheduler has # to be deployed again before using a custom value. self.assertRaises(RuntimeError, self.manager_cls) class SchedulerManagerTestCase(test.NoDBTestCase): """Test case for scheduler manager.""" manager_cls = manager.SchedulerManager @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(host_manager.HostManager, '_init_aggregates') def setUp(self, mock_init_agg, mock_init_inst): super(SchedulerManagerTestCase, self).setUp() self.manager = self.manager_cls() self.context = context.RequestContext('fake_user', 'fake_project') self.topic = 'fake_topic' self.fake_args = (1, 2, 3) self.fake_kwargs = {'cat': 'meow', 'dog': 'woof'} fake_server_actions.stub_out_action_events(self) @mock.patch('nova.scheduler.request_filter.process_reqspec') @mock.patch('nova.scheduler.utils.resources_from_request_spec') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'get_allocation_candidates') def test_select_destination(self, mock_get_ac, mock_rfrs, mock_process): fake_spec = objects.RequestSpec() fake_spec.instance_uuid = uuids.instance fake_version = "9.42" mock_p_sums = mock.Mock() fake_alloc_reqs = fakes.get_fake_alloc_reqs() place_res = (fake_alloc_reqs, mock_p_sums, fake_version) mock_get_ac.return_value = place_res mock_rfrs.return_value.cpu_pinning_requested = False expected_alloc_reqs_by_rp_uuid = { cn.uuid: [fake_alloc_reqs[x]] for x, cn in enumerate(fakes.COMPUTE_NODES) } with mock.patch.object(self.manager.driver, 'select_destinations' ) as select_destinations: self.manager.select_destinations(self.context, spec_obj=fake_spec, instance_uuids=[fake_spec.instance_uuid]) mock_process.assert_called_once_with(self.context, fake_spec) select_destinations.assert_called_once_with( self.context, fake_spec, [fake_spec.instance_uuid], expected_alloc_reqs_by_rp_uuid, mock_p_sums, fake_version, False) mock_get_ac.assert_called_once_with( self.context, mock_rfrs.return_value) # Now call select_destinations() with True values for the params # introduced in RPC version 4.5 select_destinations.reset_mock() self.manager.select_destinations(None, spec_obj=fake_spec, instance_uuids=[fake_spec.instance_uuid], return_objects=True, return_alternates=True) select_destinations.assert_called_once_with(None, fake_spec, [fake_spec.instance_uuid], expected_alloc_reqs_by_rp_uuid, mock_p_sums, fake_version, True) @mock.patch('nova.scheduler.request_filter.process_reqspec') @mock.patch('nova.scheduler.utils.resources_from_request_spec') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocation_candidates') def test_select_destination_return_objects(self, mock_get_ac, mock_rfrs, mock_process): fake_spec = objects.RequestSpec() fake_spec.instance_uuid = uuids.instance fake_version = "9.42" mock_p_sums = mock.Mock() fake_alloc_reqs = fakes.get_fake_alloc_reqs() place_res = (fake_alloc_reqs, mock_p_sums, fake_version) mock_get_ac.return_value = place_res mock_rfrs.return_value.cpu_pinning_requested = False expected_alloc_reqs_by_rp_uuid = { cn.uuid: [fake_alloc_reqs[x]] for x, cn in enumerate(fakes.COMPUTE_NODES) } with mock.patch.object(self.manager.driver, 'select_destinations' ) as select_destinations: sel_obj = objects.Selection(service_host="fake_host", nodename="fake_node", compute_node_uuid=uuids.compute_node, cell_uuid=uuids.cell, limits=None) select_destinations.return_value = [[sel_obj]] # Pass True; should get the Selection object back. dests = self.manager.select_destinations(None, spec_obj=fake_spec, instance_uuids=[fake_spec.instance_uuid], return_objects=True, return_alternates=True) sel_host = dests[0][0] self.assertIsInstance(sel_host, objects.Selection) mock_process.assert_called_once_with(None, fake_spec) # Since both return_objects and return_alternates are True, the # driver should have been called with True for return_alternates. select_destinations.assert_called_once_with(None, fake_spec, [fake_spec.instance_uuid], expected_alloc_reqs_by_rp_uuid, mock_p_sums, fake_version, True) # Now pass False for return objects, but keep return_alternates as # True. Verify that the manager converted the Selection object back # to a dict. 
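# Illustrative sketch (helper name assumed; nothing in this test calls it)
# of the grouping mirrored by expected_alloc_reqs_by_rp_uuid above: the
# manager is expected to index each allocation candidate returned by
# placement under every resource provider uuid appearing in its
# 'allocations' mapping, so the driver can look up the candidates that
# target a given compute node.
def _sketch_group_alloc_reqs_by_rp_uuid(alloc_reqs):
    grouped = {}
    for alloc_req in alloc_reqs:
        for rp_uuid in alloc_req['allocations']:
            grouped.setdefault(rp_uuid, []).append(alloc_req)
    return grouped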
select_destinations.reset_mock() dests = self.manager.select_destinations(None, spec_obj=fake_spec, instance_uuids=[fake_spec.instance_uuid], return_objects=False, return_alternates=True) sel_host = dests[0] self.assertIsInstance(sel_host, dict) # Even though return_alternates was passed as True, since # return_objects was False, the driver should have been called with # return_alternates as False. select_destinations.assert_called_once_with(None, fake_spec, [fake_spec.instance_uuid], expected_alloc_reqs_by_rp_uuid, mock_p_sums, fake_version, False) @mock.patch('nova.scheduler.request_filter.process_reqspec') @mock.patch('nova.scheduler.utils.resources_from_request_spec') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocation_candidates') def _test_select_destination(self, get_allocation_candidates_response, mock_get_ac, mock_rfrs, mock_process): fake_spec = objects.RequestSpec() fake_spec.instance_uuid = uuids.instance place_res = get_allocation_candidates_response mock_get_ac.return_value = place_res mock_rfrs.return_value.cpu_pinning_requested = False with mock.patch.object(self.manager.driver, 'select_destinations' ) as select_destinations: self.assertRaises(messaging.rpc.dispatcher.ExpectedException, self.manager.select_destinations, self.context, spec_obj=fake_spec, instance_uuids=[fake_spec.instance_uuid]) select_destinations.assert_not_called() mock_process.assert_called_once_with(self.context, fake_spec) mock_get_ac.assert_called_once_with( self.context, mock_rfrs.return_value) def test_select_destination_old_placement(self): """Tests that we will raise NoValidhost when the scheduler report client's get_allocation_candidates() returns None, None as it would if placement service hasn't been upgraded before scheduler. """ place_res = (None, None, None) self._test_select_destination(place_res) def test_select_destination_placement_connect_fails(self): """Tests that we will raise NoValidHost when the scheduler report client's get_allocation_candidates() returns None, which it would if the connection to Placement failed and the safe_connect decorator returns None. """ place_res = None self._test_select_destination(place_res) def test_select_destination_no_candidates(self): """Tests that we will raise NoValidHost when the scheduler report client's get_allocation_candidates() returns [], {} which it would if placement service hasn't yet had compute nodes populate inventory. """ place_res = ([], {}, None) self._test_select_destination(place_res) @mock.patch('nova.scheduler.request_filter.process_reqspec') @mock.patch('nova.scheduler.utils.resources_from_request_spec') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocation_candidates') def test_select_destination_is_rebuild(self, mock_get_ac, mock_rfrs, mock_process): fake_spec = objects.RequestSpec( scheduler_hints={'_nova_check_type': ['rebuild']}) fake_spec.instance_uuid = uuids.instance with mock.patch.object(self.manager.driver, 'select_destinations' ) as select_destinations: self.manager.select_destinations(self.context, spec_obj=fake_spec, instance_uuids=[fake_spec.instance_uuid]) select_destinations.assert_called_once_with( self.context, fake_spec, [fake_spec.instance_uuid], None, None, None, False) mock_get_ac.assert_not_called() mock_process.assert_not_called() @mock.patch('nova.scheduler.request_filter.process_reqspec') @mock.patch('nova.scheduler.utils.resources_from_request_spec') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'get_allocation_candidates') def test_select_destination_with_4_3_client( self, mock_get_ac, mock_rfrs, mock_process, cpu_pinning_requested=False): fake_spec = objects.RequestSpec() mock_p_sums = mock.Mock() fake_alloc_reqs = fakes.get_fake_alloc_reqs() place_res = (fake_alloc_reqs, mock_p_sums, "42.0") mock_get_ac.return_value = place_res mock_rfrs.return_value.cpu_pinning_requested = cpu_pinning_requested expected_alloc_reqs_by_rp_uuid = { cn.uuid: [fake_alloc_reqs[x]] for x, cn in enumerate(fakes.COMPUTE_NODES) } with mock.patch.object(self.manager.driver, 'select_destinations' ) as select_destinations: self.manager.select_destinations(self.context, spec_obj=fake_spec) mock_process.assert_called_once_with(self.context, fake_spec) select_destinations.assert_called_once_with(self.context, fake_spec, None, expected_alloc_reqs_by_rp_uuid, mock_p_sums, "42.0", False) mock_rfrs.assert_called_once_with( self.context, fake_spec, mock.ANY, enable_pinning_translate=True) mock_get_ac.assert_called_once_with( self.context, mock_rfrs.return_value) @mock.patch('nova.scheduler.manager.LOG.debug') @mock.patch('nova.scheduler.request_filter.process_reqspec') @mock.patch('nova.scheduler.utils.resources_from_request_spec') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocation_candidates') def test_select_destination_with_pcpu_fallback( self, mock_get_ac, mock_rfrs, mock_process, mock_log): """Check that we make a second request to placement if we've got a PCPU request. """ self.flags(disable_fallback_pcpu_query=False, group='workarounds') # mock the result from placement. In reality, the two calls we expect # would return two different results, but we don't care about that. All # we want to check is that it _was_ called twice fake_spec = objects.RequestSpec() mock_p_sums = mock.Mock() fake_alloc_reqs = fakes.get_fake_alloc_reqs() place_res = (fake_alloc_reqs, mock_p_sums, "42.0") mock_get_ac.return_value = place_res pcpu_rreq = mock.Mock() pcpu_rreq.cpu_pinning_requested = True vcpu_rreq = mock.Mock() mock_rfrs.side_effect = [pcpu_rreq, vcpu_rreq] # as above, the two allocation requests against each compute node would # be different in reality, and not all compute nodes might have two # allocation requests, but that doesn't matter for this simple test expected_alloc_reqs_by_rp_uuid = { cn.uuid: [fake_alloc_reqs[x], fake_alloc_reqs[x]] for x, cn in enumerate(fakes.COMPUTE_NODES) } with mock.patch.object(self.manager.driver, 'select_destinations' ) as select_destinations: self.manager.select_destinations(self.context, spec_obj=fake_spec) select_destinations.assert_called_once_with(self.context, fake_spec, None, expected_alloc_reqs_by_rp_uuid, mock_p_sums, "42.0", False) mock_process.assert_called_once_with(self.context, fake_spec) mock_log.assert_called_with( 'Requesting fallback allocation candidates with VCPU instead of ' 'PCPU') mock_rfrs.assert_has_calls([ mock.call(self.context, fake_spec, mock.ANY, enable_pinning_translate=True), mock.call(self.context, fake_spec, mock.ANY, enable_pinning_translate=False), ]) mock_get_ac.assert_has_calls([ mock.call(self.context, pcpu_rreq), mock.call(self.context, vcpu_rreq), ]) def test_select_destination_with_pcpu_fallback_disabled(self): """Check that we do not make a second request to placement if we've been told not to, even though we've got a PCPU instance. 
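        The [workarounds]/disable_fallback_pcpu_query option set below is
        what suppresses the second, VCPU-translated query, so placement
        should only be asked once for the PCPU-based candidates.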
""" self.flags(disable_fallback_pcpu_query=True, group='workarounds') self.test_select_destination_with_4_3_client( cpu_pinning_requested=True) # TODO(sbauza): Remove that test once the API v4 is removed @mock.patch('nova.scheduler.request_filter.process_reqspec') @mock.patch('nova.scheduler.utils.resources_from_request_spec') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocation_candidates') @mock.patch.object(objects.RequestSpec, 'from_primitives') def test_select_destination_with_old_client(self, from_primitives, mock_get_ac, mock_rfrs, mock_process): fake_spec = objects.RequestSpec() fake_spec.instance_uuid = uuids.instance from_primitives.return_value = fake_spec mock_p_sums = mock.Mock() fake_alloc_reqs = fakes.get_fake_alloc_reqs() place_res = (fake_alloc_reqs, mock_p_sums, "42.0") mock_get_ac.return_value = place_res mock_rfrs.return_value.cpu_pinning_requested = False expected_alloc_reqs_by_rp_uuid = { cn.uuid: [fake_alloc_reqs[x]] for x, cn in enumerate(fakes.COMPUTE_NODES) } with mock.patch.object(self.manager.driver, 'select_destinations' ) as select_destinations: self.manager.select_destinations( self.context, request_spec='fake_spec', filter_properties='fake_props', instance_uuids=[fake_spec.instance_uuid]) mock_process.assert_called_once_with(self.context, fake_spec) select_destinations.assert_called_once_with( self.context, fake_spec, [fake_spec.instance_uuid], expected_alloc_reqs_by_rp_uuid, mock_p_sums, "42.0", False) mock_get_ac.assert_called_once_with( self.context, mock_rfrs.return_value) def test_update_aggregates(self): with mock.patch.object(self.manager.driver.host_manager, 'update_aggregates' ) as update_aggregates: self.manager.update_aggregates(None, aggregates='agg') update_aggregates.assert_called_once_with('agg') def test_delete_aggregate(self): with mock.patch.object(self.manager.driver.host_manager, 'delete_aggregate' ) as delete_aggregate: self.manager.delete_aggregate(None, aggregate='agg') delete_aggregate.assert_called_once_with('agg') def test_update_instance_info(self): with mock.patch.object(self.manager.driver.host_manager, 'update_instance_info') as mock_update: self.manager.update_instance_info(mock.sentinel.context, mock.sentinel.host_name, mock.sentinel.instance_info) mock_update.assert_called_once_with(mock.sentinel.context, mock.sentinel.host_name, mock.sentinel.instance_info) def test_delete_instance_info(self): with mock.patch.object(self.manager.driver.host_manager, 'delete_instance_info') as mock_delete: self.manager.delete_instance_info(mock.sentinel.context, mock.sentinel.host_name, mock.sentinel.instance_uuid) mock_delete.assert_called_once_with(mock.sentinel.context, mock.sentinel.host_name, mock.sentinel.instance_uuid) def test_sync_instance_info(self): with mock.patch.object(self.manager.driver.host_manager, 'sync_instance_info') as mock_sync: self.manager.sync_instance_info(mock.sentinel.context, mock.sentinel.host_name, mock.sentinel.instance_uuids) mock_sync.assert_called_once_with(mock.sentinel.context, mock.sentinel.host_name, mock.sentinel.instance_uuids) def test_reset(self): with mock.patch.object(self.manager.driver.host_manager, 'refresh_cells_caches') as mock_refresh: self.manager.reset() mock_refresh.assert_called_once_with() @mock.patch('nova.objects.host_mapping.discover_hosts') def test_discover_hosts(self, mock_discover): cm1 = objects.CellMapping(name='cell1') cm2 = objects.CellMapping(name='cell2') mock_discover.return_value = [objects.HostMapping(host='a', cell_mapping=cm1), 
objects.HostMapping(host='b', cell_mapping=cm2)] self.manager._discover_hosts_in_cells(mock.sentinel.context) @mock.patch('nova.scheduler.manager.LOG.debug') @mock.patch('nova.scheduler.manager.LOG.warning') @mock.patch('nova.objects.host_mapping.discover_hosts') def test_discover_hosts_duplicate_host_mapping(self, mock_discover, mock_log_warning, mock_log_debug): # This tests the scenario of multiple schedulers running discover_hosts # at the same time. mock_discover.side_effect = exception.HostMappingExists(name='a') self.manager._discover_hosts_in_cells(mock.sentinel.context) msg = ("This periodic task should only be enabled on a single " "scheduler to prevent collisions between multiple " "schedulers: Host 'a' mapping already exists") mock_log_warning.assert_called_once_with(msg) mock_log_debug.assert_not_called() # Second collision should log at debug, not warning. mock_log_warning.reset_mock() self.manager._discover_hosts_in_cells(mock.sentinel.context) mock_log_warning.assert_not_called() mock_log_debug.assert_called_once_with(msg) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/scheduler/test_request_filter.py0000664000175000017500000005316600000000000024311 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
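# Illustrative sketch of the request filter contract exercised throughout
# this module; the function and trait names here are assumptions used purely
# for illustration. A filter receives (context, request_spec), may mutate
# fields such as root_required or requested_destination, and returns a
# truthy value when it applied (test_log_timer below shows that the
# trace_request_filter decorator only logs timing for truthy returns).
def _example_require_gpu_trait(ctxt, request_spec):
    if request_spec.flavor.extra_specs.get('trait:CUSTOM_EXAMPLE_GPU'):
        request_spec.root_required.add('CUSTOM_EXAMPLE_GPU')
        return True
    return False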
import mock import os_traits as ot from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova import context as nova_context from nova import exception from nova.network import model as network_model from nova import objects from nova.scheduler import request_filter from nova import test from nova.tests.unit import utils class TestRequestFilter(test.NoDBTestCase): def setUp(self): super(TestRequestFilter, self).setUp() self.context = nova_context.RequestContext(user_id=uuids.user, project_id=uuids.project) self.flags(limit_tenants_to_placement_aggregate=True, group='scheduler') self.flags(query_placement_for_availability_zone=True, group='scheduler') self.flags(enable_isolated_aggregate_filtering=True, group='scheduler') def test_process_reqspec(self): fake_filters = [mock.MagicMock(), mock.MagicMock()] with mock.patch('nova.scheduler.request_filter.ALL_REQUEST_FILTERS', new=fake_filters): request_filter.process_reqspec(mock.sentinel.context, mock.sentinel.reqspec) for filter in fake_filters: filter.assert_called_once_with(mock.sentinel.context, mock.sentinel.reqspec) @mock.patch.object(timeutils, 'now') def test_log_timer(self, mock_now): mock_now.return_value = 123 result = False @request_filter.trace_request_filter def tester(c, r): mock_now.return_value += 2 return result with mock.patch.object(request_filter, 'LOG') as log: # With a False return, no log should be emitted tester(None, None) log.debug.assert_not_called() # With a True return, elapsed time should be logged result = True tester(None, None) log.debug.assert_called_once_with('Request filter %r' ' took %.1f seconds', 'tester', 2.0) @mock.patch('nova.objects.AggregateList.get_by_metadata') def test_require_tenant_aggregate_disabled(self, getmd): self.flags(limit_tenants_to_placement_aggregate=False, group='scheduler') reqspec = mock.MagicMock() request_filter.require_tenant_aggregate(self.context, reqspec) self.assertFalse(getmd.called) @mock.patch('nova.objects.AggregateList.get_by_metadata') @mock.patch.object(request_filter, 'LOG') def test_require_tenant_aggregate(self, mock_log, getmd): getmd.return_value = [ objects.Aggregate( uuid=uuids.agg1, metadata={'filter_tenant_id': 'owner'}), objects.Aggregate( uuid=uuids.agg2, metadata={'filter_tenant_id:12': 'owner'}), objects.Aggregate( uuid=uuids.agg3, metadata={'other_key': 'owner'}), ] reqspec = objects.RequestSpec(project_id='owner') request_filter.require_tenant_aggregate(self.context, reqspec) self.assertEqual( ','.join(sorted([uuids.agg1, uuids.agg2])), ','.join(sorted( reqspec.requested_destination.aggregates[0].split(',')))) # Make sure we called with the request spec's tenant and not # necessarily just the one from context. getmd.assert_called_once_with(self.context, value='owner') log_lines = [c[0][0] for c in mock_log.debug.call_args_list] self.assertIn('filter added aggregates', log_lines[0]) self.assertIn('took %.1f seconds', log_lines[1]) @mock.patch('nova.objects.aggregate.AggregateList.' 
'get_non_matching_by_metadata_keys') def test_isolate_aggregates(self, mock_getnotmd): agg4_traits = {'trait:HW_GPU_API_DXVA': 'required', 'trait:HW_NIC_DCB_ETS': 'required'} mock_getnotmd.return_value = [ objects.Aggregate( uuid=uuids.agg1, metadata={'trait:CUSTOM_WINDOWS_LICENSED_TRAIT': 'required'}), objects.Aggregate( uuid=uuids.agg2, metadata={'trait:CUSTOM_WINDOWS_LICENSED_TRAIT': 'required', 'trait:CUSTOM_XYZ_TRAIT': 'required'}), objects.Aggregate( uuid=uuids.agg4, metadata=agg4_traits), ] fake_flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs=agg4_traits) fake_image = objects.ImageMeta( properties=objects.ImageMetaProps( traits_required=[])) reqspec = objects.RequestSpec(flavor=fake_flavor, image=fake_image) result = request_filter.isolate_aggregates(self.context, reqspec) self.assertTrue(result) self.assertItemsEqual( set([uuids.agg1, uuids.agg2, uuids.agg4]), reqspec.requested_destination.forbidden_aggregates) mock_getnotmd.assert_called_once_with( self.context, utils.ItemsMatcher(agg4_traits), 'trait:', value='required') @mock.patch('nova.objects.aggregate.AggregateList.' 'get_non_matching_by_metadata_keys') def test_isolate_aggregates_union(self, mock_getnotmd): agg_traits = {'trait:HW_GPU_API_DXVA': 'required', 'trait:CUSTOM_XYZ_TRAIT': 'required'} mock_getnotmd.return_value = [ objects.Aggregate( uuid=uuids.agg2, metadata={'trait:CUSTOM_WINDOWS_LICENSED_TRAIT': 'required', 'trait:CUSTOM_XYZ_TRAIT': 'required'}), objects.Aggregate( uuid=uuids.agg4, metadata={'trait:HW_GPU_API_DXVA': 'required', 'trait:HW_NIC_DCB_ETS': 'required'}), ] fake_flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs=agg_traits) fake_image = objects.ImageMeta( properties=objects.ImageMetaProps( traits_required=[])) reqspec = objects.RequestSpec(flavor=fake_flavor, image=fake_image) reqspec.requested_destination = objects.Destination( forbidden_aggregates={uuids.agg1}) result = request_filter.isolate_aggregates(self.context, reqspec) self.assertTrue(result) self.assertEqual( ','.join(sorted([uuids.agg1, uuids.agg2, uuids.agg4])), ','.join(sorted( reqspec.requested_destination.forbidden_aggregates))) mock_getnotmd.assert_called_once_with( self.context, utils.ItemsMatcher(agg_traits), 'trait:', value='required') @mock.patch('nova.objects.aggregate.AggregateList.' 'get_non_matching_by_metadata_keys') def test_isolate_agg_trait_on_flavor_destination_not_set(self, mock_getnotmd): mock_getnotmd.return_value = [] traits = set(['HW_GPU_API_DXVA', 'HW_NIC_DCB_ETS']) fake_flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={'trait:' + trait: 'required' for trait in traits}) fake_image = objects.ImageMeta( properties=objects.ImageMetaProps( traits_required=[])) reqspec = objects.RequestSpec(flavor=fake_flavor, image=fake_image) result = request_filter.isolate_aggregates(self.context, reqspec) self.assertTrue(result) self.assertNotIn('requested_destination', reqspec) keys = ['trait:%s' % trait for trait in traits] mock_getnotmd.assert_called_once_with( self.context, utils.ItemsMatcher(keys), 'trait:', value='required') @mock.patch('nova.objects.aggregate.AggregateList.' 
'get_non_matching_by_metadata_keys') def test_isolate_agg_trait_on_flv_img_destination_not_set(self, mock_getnotmd): mock_getnotmd.return_value = [] flavor_traits = set(['HW_GPU_API_DXVA']) image_traits = set(['HW_NIC_DCB_ETS']) fake_flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={ 'trait:' + trait: 'required' for trait in flavor_traits}) fake_image = objects.ImageMeta( properties=objects.ImageMetaProps( traits_required=[trait for trait in image_traits])) reqspec = objects.RequestSpec(flavor=fake_flavor, image=fake_image) result = request_filter.isolate_aggregates(self.context, reqspec) self.assertTrue(result) self.assertNotIn('requested_destination', reqspec) keys = ['trait:%s' % trait for trait in flavor_traits.union( image_traits)] mock_getnotmd.assert_called_once_with( self.context, utils.ItemsMatcher(keys), 'trait:', value='required') @mock.patch('nova.objects.AggregateList.get_by_metadata') def test_require_tenant_aggregate_no_match(self, getmd): self.flags(placement_aggregate_required_for_tenants=True, group='scheduler') getmd.return_value = [] self.assertRaises(exception.RequestFilterFailed, request_filter.require_tenant_aggregate, self.context, mock.MagicMock()) @mock.patch('nova.objects.AggregateList.get_by_metadata') def test_require_tenant_aggregate_no_match_not_required(self, getmd): getmd.return_value = [] request_filter.require_tenant_aggregate( self.context, mock.MagicMock()) @mock.patch('nova.objects.AggregateList.get_by_metadata') @mock.patch.object(request_filter, 'LOG') def test_map_az(self, mock_log, getmd): getmd.return_value = [objects.Aggregate(uuid=uuids.agg1)] reqspec = objects.RequestSpec(availability_zone='fooaz') request_filter.map_az_to_placement_aggregate(self.context, reqspec) self.assertEqual([uuids.agg1], reqspec.requested_destination.aggregates) log_lines = [c[0][0] for c in mock_log.debug.call_args_list] self.assertIn('filter added aggregates', log_lines[0]) self.assertIn('took %.1f seconds', log_lines[1]) @mock.patch('nova.objects.AggregateList.get_by_metadata') def test_map_az_no_hint(self, getmd): reqspec = objects.RequestSpec(availability_zone=None) request_filter.map_az_to_placement_aggregate(self.context, reqspec) self.assertNotIn('requested_destination', reqspec) self.assertFalse(getmd.called) @mock.patch('nova.objects.AggregateList.get_by_metadata') def test_map_az_no_aggregates(self, getmd): getmd.return_value = [] reqspec = objects.RequestSpec(availability_zone='fooaz') request_filter.map_az_to_placement_aggregate(self.context, reqspec) self.assertNotIn('requested_destination', reqspec) getmd.assert_called_once_with(self.context, key='availability_zone', value='fooaz') @mock.patch('nova.objects.AggregateList.get_by_metadata') def test_map_az_disabled(self, getmd): self.flags(query_placement_for_availability_zone=False, group='scheduler') reqspec = objects.RequestSpec(availability_zone='fooaz') request_filter.map_az_to_placement_aggregate(self.context, reqspec) getmd.assert_not_called() @mock.patch('nova.objects.aggregate.AggregateList.' 
'get_non_matching_by_metadata_keys') @mock.patch('nova.objects.AggregateList.get_by_metadata') def test_with_tenant_and_az_and_traits(self, mock_getmd, mock_getnotmd): mock_getmd.side_effect = [ # Tenant filter [objects.Aggregate( uuid=uuids.agg1, metadata={'filter_tenant_id': 'owner'}), objects.Aggregate( uuid=uuids.agg2, metadata={'filter_tenant_id:12': 'owner'}), objects.Aggregate( uuid=uuids.agg3, metadata={'other_key': 'owner'})], # AZ filter [objects.Aggregate( uuid=uuids.agg4, metadata={'availability_zone': 'myaz'})], ] mock_getnotmd.side_effect = [ # isolate_aggregates filter [objects.Aggregate( uuid=uuids.agg1, metadata={'trait:CUSTOM_WINDOWS_LICENSED_TRAIT': 'required'}), objects.Aggregate( uuid=uuids.agg2, metadata={'trait:CUSTOM_WINDOWS_LICENSED_TRAIT': 'required', 'trait:CUSTOM_XYZ_TRAIT': 'required'}), objects.Aggregate( uuid=uuids.agg3, metadata={'trait:CUSTOM_XYZ_TRAIT': 'required'}), ], ] traits = set(['HW_GPU_API_DXVA', 'HW_NIC_DCB_ETS']) fake_flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={'trait:' + trait: 'required' for trait in traits}) fake_image = objects.ImageMeta( properties=objects.ImageMetaProps( traits_required=[])) reqspec = objects.RequestSpec(project_id='owner', availability_zone='myaz', flavor=fake_flavor, image=fake_image) request_filter.process_reqspec(self.context, reqspec) self.assertEqual( ','.join(sorted([uuids.agg1, uuids.agg2])), ','.join(sorted( reqspec.requested_destination.aggregates[0].split(',')))) self.assertEqual( ','.join(sorted([uuids.agg4])), ','.join(sorted( reqspec.requested_destination.aggregates[1].split(',')))) self.assertItemsEqual( set([uuids.agg1, uuids.agg2, uuids.agg3]), reqspec.requested_destination.forbidden_aggregates) mock_getmd.assert_has_calls([ mock.call(self.context, value='owner'), mock.call(self.context, key='availability_zone', value='myaz')]) keys = ['trait:%s' % trait for trait in traits] mock_getnotmd.assert_called_once_with( self.context, utils.ItemsMatcher(keys), 'trait:', value='required') def test_require_image_type_support_disabled(self): self.flags(query_placement_for_image_type_support=False, group='scheduler') reqspec = objects.RequestSpec(flavor=objects.Flavor(extra_specs={}), is_bfv=False) # Assert that we completely skip the filter if disabled request_filter.require_image_type_support(self.context, reqspec) self.assertEqual(0, len(reqspec.root_required)) self.assertEqual(0, len(reqspec.root_forbidden)) def test_require_image_type_support_volume_backed(self): self.flags(query_placement_for_image_type_support=True, group='scheduler') reqspec = objects.RequestSpec(flavor=objects.Flavor(extra_specs={}), is_bfv=True) # Assert that we completely skip the filter if no image request_filter.require_image_type_support(self.context, reqspec) self.assertEqual(0, len(reqspec.root_required)) self.assertEqual(0, len(reqspec.root_forbidden)) def test_require_image_type_support_unknown(self): self.flags(query_placement_for_image_type_support=True, group='scheduler') reqspec = objects.RequestSpec( image=objects.ImageMeta(disk_format='danformat'), flavor=objects.Flavor(extra_specs={}), is_bfv=False) # Assert that we completely skip the filter if no matching trait request_filter.require_image_type_support(self.context, reqspec) self.assertEqual(0, len(reqspec.root_required)) self.assertEqual(0, len(reqspec.root_forbidden)) @mock.patch.object(request_filter, 'LOG') def test_require_image_type_support_adds_trait(self, mock_log): 
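# The prefilter is expected to translate the image's disk_format into the
# matching os-traits COMPUTE_IMAGE_TYPE_* trait and add it to root_required,
# which is what the assertions on COMPUTE_IMAGE_TYPE_RAW below verify.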
self.flags(query_placement_for_image_type_support=True, group='scheduler') reqspec = objects.RequestSpec( image=objects.ImageMeta(disk_format='raw'), flavor=objects.Flavor(extra_specs={}), is_bfv=False) self.assertEqual(0, len(reqspec.root_required)) self.assertEqual(0, len(reqspec.root_forbidden)) # Request filter puts the trait into the request spec request_filter.require_image_type_support(self.context, reqspec) self.assertEqual({ot.COMPUTE_IMAGE_TYPE_RAW}, reqspec.root_required) self.assertEqual(0, len(reqspec.root_forbidden)) log_lines = [c[0][0] for c in mock_log.debug.call_args_list] self.assertIn('added required trait', log_lines[0]) self.assertIn('took %.1f seconds', log_lines[1]) @mock.patch.object(request_filter, 'LOG') def test_compute_status_filter(self, mock_log): reqspec = objects.RequestSpec(flavor=objects.Flavor(extra_specs={})) self.assertEqual(0, len(reqspec.root_required)) self.assertEqual(0, len(reqspec.root_forbidden)) # Request filter puts the trait into the request spec request_filter.compute_status_filter(self.context, reqspec) self.assertEqual(0, len(reqspec.root_required)) self.assertEqual({ot.COMPUTE_STATUS_DISABLED}, reqspec.root_forbidden) # Assert both the in-method logging and trace decorator. log_lines = [c[0][0] for c in mock_log.debug.call_args_list] self.assertIn('added forbidden trait', log_lines[0]) self.assertIn('took %.1f seconds', log_lines[1]) @mock.patch.object(request_filter, 'LOG', new=mock.Mock()) def test_transform_image_metadata(self): self.flags(image_metadata_prefilter=True, group='scheduler') properties = objects.ImageMetaProps( hw_disk_bus=objects.fields.DiskBus.SATA, hw_cdrom_bus=objects.fields.DiskBus.IDE, hw_video_model=objects.fields.VideoModel.QXL, hw_vif_model=network_model.VIF_MODEL_VIRTIO ) reqspec = objects.RequestSpec( image=objects.ImageMeta(properties=properties), flavor=objects.Flavor(extra_specs={}), ) self.assertTrue( request_filter.transform_image_metadata(None, reqspec) ) expected = { 'COMPUTE_GRAPHICS_MODEL_QXL', 'COMPUTE_NET_VIF_MODEL_VIRTIO', 'COMPUTE_STORAGE_BUS_IDE', 'COMPUTE_STORAGE_BUS_SATA', } self.assertEqual(expected, reqspec.root_required) def test_transform_image_metadata__disabled(self): self.flags(image_metadata_prefilter=False, group='scheduler') reqspec = objects.RequestSpec(flavor=objects.Flavor(extra_specs={})) # Assert that we completely skip the filter if disabled self.assertFalse( request_filter.transform_image_metadata(self.context, reqspec) ) self.assertEqual(set(), reqspec.root_required) @mock.patch.object(request_filter, 'LOG') def test_accelerators_filter_with_device_profile(self, mock_log): # First ensure that accelerators_filter is included self.assertIn(request_filter.accelerators_filter, request_filter.ALL_REQUEST_FILTERS) es = {'accel:device_profile': 'mydp'} reqspec = objects.RequestSpec(flavor=objects.Flavor(extra_specs=es)) self.assertEqual(set(), reqspec.root_required) self.assertEqual(set(), reqspec.root_forbidden) # Request filter puts the trait into the request spec request_filter.accelerators_filter(self.context, reqspec) self.assertEqual({ot.COMPUTE_ACCELERATORS}, reqspec.root_required) self.assertEqual(set(), reqspec.root_forbidden) # Assert both the in-method logging and trace decorator. 
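# The 'took %.1f seconds' entry comes from the trace_request_filter
# decorator (see test_log_timer above); the 'added required trait' entry is
# expected to be logged inside accelerators_filter itself, which is why it
# appears first in the captured log lines.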
log_lines = [c[0][0] for c in mock_log.debug.call_args_list] self.assertIn('added required trait', log_lines[0]) self.assertIn('took %.1f seconds', log_lines[1]) @mock.patch.object(request_filter, 'LOG') def test_accelerators_filter_no_device_profile(self, mock_log): # First ensure that accelerators_filter is included self.assertIn(request_filter.accelerators_filter, request_filter.ALL_REQUEST_FILTERS) reqspec = objects.RequestSpec(flavor=objects.Flavor(extra_specs={})) self.assertEqual(set(), reqspec.root_required) self.assertEqual(set(), reqspec.root_forbidden) # Request filter puts the trait into the request spec request_filter.accelerators_filter(self.context, reqspec) self.assertEqual(set(), reqspec.root_required) self.assertEqual(set(), reqspec.root_forbidden) # Assert about logging mock_log.assert_not_called() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/test_rpcapi.py0000664000175000017500000001611300000000000022521 0ustar00zuulzuul00000000000000# Copyright 2013 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Unit Tests for nova.scheduler.rpcapi """ import mock from oslo_utils.fixture import uuidsentinel as uuids from nova import conf from nova import context from nova import exception as exc from nova import objects from nova.scheduler import rpcapi as scheduler_rpcapi from nova import test CONF = conf.CONF class SchedulerRpcAPITestCase(test.NoDBTestCase): def _test_scheduler_api(self, method, rpc_method, expected_args=None, **kwargs): ctxt = context.RequestContext('fake_user', 'fake_project') rpcapi = scheduler_rpcapi.SchedulerAPI() self.assertIsNotNone(rpcapi.client) self.assertEqual(rpcapi.client.target.topic, scheduler_rpcapi.RPC_TOPIC) expected_retval = 'foo' if rpc_method == 'call' else None expected_version = kwargs.pop('version', None) expected_fanout = kwargs.pop('fanout', None) expected_kwargs = kwargs.copy() if expected_args: expected_kwargs = expected_args prepare_kwargs = {} if method == 'select_destinations': prepare_kwargs.update({ 'call_monitor_timeout': CONF.rpc_response_timeout, 'timeout': CONF.long_rpc_timeout }) if expected_fanout: prepare_kwargs['fanout'] = True if expected_version: prepare_kwargs['version'] = expected_version # NOTE(sbauza): We need to persist the method before mocking it orig_prepare = rpcapi.client.prepare def fake_can_send_version(version=None): return orig_prepare(version=version).can_send_version() @mock.patch.object(rpcapi.client, rpc_method, return_value=expected_retval) @mock.patch.object(rpcapi.client, 'prepare', return_value=rpcapi.client) @mock.patch.object(rpcapi.client, 'can_send_version', side_effect=fake_can_send_version) def do_test(mock_csv, mock_prepare, mock_rpc_method): retval = getattr(rpcapi, method)(ctxt, **kwargs) self.assertEqual(retval, expected_retval) mock_prepare.assert_called_once_with(**prepare_kwargs) mock_rpc_method.assert_called_once_with(ctxt, method, **expected_kwargs) do_test() def 
test_select_destinations(self): fake_spec = objects.RequestSpec() self._test_scheduler_api('select_destinations', rpc_method='call', expected_args={'spec_obj': fake_spec, 'instance_uuids': [uuids.instance], 'return_objects': True, 'return_alternates': True}, spec_obj=fake_spec, instance_uuids=[uuids.instance], return_objects=True, return_alternates=True, version='4.5') def test_select_destinations_4_4(self): self.flags(scheduler='4.4', group='upgrade_levels') fake_spec = objects.RequestSpec() self._test_scheduler_api('select_destinations', rpc_method='call', expected_args={'spec_obj': fake_spec, 'instance_uuids': [uuids.instance]}, spec_obj=fake_spec, instance_uuids=[uuids.instance], return_objects=False, return_alternates=False, version='4.4') def test_select_destinations_4_3(self): self.flags(scheduler='4.3', group='upgrade_levels') fake_spec = objects.RequestSpec() self._test_scheduler_api('select_destinations', rpc_method='call', expected_args={'spec_obj': fake_spec}, spec_obj=fake_spec, instance_uuids=[uuids.instance], return_alternates=False, version='4.3') def test_select_destinations_old_with_new_params(self): self.flags(scheduler='4.4', group='upgrade_levels') fake_spec = objects.RequestSpec() ctxt = context.RequestContext('fake_user', 'fake_project') rpcapi = scheduler_rpcapi.SchedulerAPI() self.assertRaises(exc.SelectionObjectsWithOldRPCVersionNotSupported, rpcapi.select_destinations, ctxt, fake_spec, ['fake_uuids'], return_objects=True, return_alternates=True) self.assertRaises(exc.SelectionObjectsWithOldRPCVersionNotSupported, rpcapi.select_destinations, ctxt, fake_spec, ['fake_uuids'], return_objects=False, return_alternates=True) self.assertRaises(exc.SelectionObjectsWithOldRPCVersionNotSupported, rpcapi.select_destinations, ctxt, fake_spec, ['fake_uuids'], return_objects=True, return_alternates=False) @mock.patch.object(objects.RequestSpec, 'to_legacy_filter_properties_dict') @mock.patch.object(objects.RequestSpec, 'to_legacy_request_spec_dict') def test_select_destinations_with_old_manager(self, to_spec, to_props): self.flags(scheduler='4.0', group='upgrade_levels') to_spec.return_value = 'fake_request_spec' to_props.return_value = 'fake_prop' fake_spec = objects.RequestSpec() self._test_scheduler_api('select_destinations', rpc_method='call', expected_args={'request_spec': 'fake_request_spec', 'filter_properties': 'fake_prop'}, spec_obj=fake_spec, instance_uuids=[uuids.instance], version='4.0') def test_update_aggregates(self): self._test_scheduler_api('update_aggregates', rpc_method='cast', aggregates='aggregates', version='4.1', fanout=True) def test_delete_aggregate(self): self._test_scheduler_api('delete_aggregate', rpc_method='cast', aggregate='aggregate', version='4.1', fanout=True) def test_update_instance_info(self): self._test_scheduler_api('update_instance_info', rpc_method='cast', host_name='fake_host', instance_info='fake_instance', fanout=True, version='4.2') def test_delete_instance_info(self): self._test_scheduler_api('delete_instance_info', rpc_method='cast', host_name='fake_host', instance_uuid='fake_uuid', fanout=True, version='4.2') def test_sync_instance_info(self): self._test_scheduler_api('sync_instance_info', rpc_method='cast', host_name='fake_host', instance_uuids=['fake1', 'fake2'], fanout=True, version='4.2') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/scheduler/test_scheduler_utils.py0000664000175000017500000004465200000000000024452 
0ustar00zuulzuul00000000000000# Copyright (c) 2013 Rackspace Hosting # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Scheduler Utils """ import mock from oslo_utils.fixture import uuidsentinel as uuids import six from nova.compute import flavors from nova.compute import utils as compute_utils from nova import context as nova_context from nova import exception from nova import objects from nova.scheduler import utils as scheduler_utils from nova import test from nova.tests.unit import fake_instance from nova.tests.unit.objects import test_flavor class SchedulerUtilsTestCase(test.NoDBTestCase): """Test case for scheduler utils methods.""" def setUp(self): super(SchedulerUtilsTestCase, self).setUp() self.context = nova_context.get_context() def test_build_request_spec_without_image(self): instance = {'uuid': uuids.instance} instance_type = objects.Flavor(**test_flavor.fake_flavor) with mock.patch.object(flavors, 'extract_flavor') as mock_extract: mock_extract.return_value = instance_type request_spec = scheduler_utils.build_request_spec(None, [instance]) mock_extract.assert_called_once_with({'uuid': uuids.instance}) self.assertEqual({}, request_spec['image']) def test_build_request_spec_with_object(self): instance_type = objects.Flavor() instance = fake_instance.fake_instance_obj(self.context) with mock.patch.object(instance, 'get_flavor') as mock_get: mock_get.return_value = instance_type request_spec = scheduler_utils.build_request_spec(None, [instance]) mock_get.assert_called_once_with() self.assertIsInstance(request_spec['instance_properties'], dict) @mock.patch('nova.compute.utils.notify_about_compute_task_error') @mock.patch('nova.rpc.LegacyValidatingNotifier') @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') @mock.patch.object(objects.Instance, 'save') def _test_set_vm_state_and_notify(self, mock_save, mock_add, mock_notifier, mock_notify_task, request_spec, payload_request_spec): expected_uuid = uuids.instance updates = dict(vm_state='fake-vm-state') service = 'fake-service' method = 'fake-method' exc_info = 'exc_info' payload = dict(request_spec=payload_request_spec, instance_properties=payload_request_spec.get( 'instance_properties', {}), instance_id=expected_uuid, state='fake-vm-state', method=method, reason=exc_info) event_type = '%s.%s' % (service, method) scheduler_utils.set_vm_state_and_notify(self.context, expected_uuid, service, method, updates, exc_info, request_spec) mock_save.assert_called_once_with() mock_add.assert_called_once_with(self.context, mock.ANY, exc_info, mock.ANY) self.assertIsInstance(mock_add.call_args[0][1], objects.Instance) self.assertIsInstance(mock_add.call_args[0][3], tuple) mock_notifier.return_value.error.assert_called_once_with(self.context, event_type, payload) mock_notify_task.assert_called_once_with( self.context, method, expected_uuid, payload_request_spec, updates['vm_state'], exc_info, test.MatchType(str)) def test_set_vm_state_and_notify_request_spec_dict(self): """Tests passing a legacy dict format 
request spec to set_vm_state_and_notify. """ request_spec = dict(instance_properties=dict(uuid=uuids.instance)) # The request_spec in the notification payload should be unchanged. self._test_set_vm_state_and_notify( request_spec=request_spec, payload_request_spec=request_spec) def test_set_vm_state_and_notify_request_spec_object(self): """Tests passing a RequestSpec object to set_vm_state_and_notify.""" request_spec = objects.RequestSpec.from_primitives( self.context, dict(instance_properties=dict(uuid=uuids.instance)), filter_properties=dict()) # The request_spec in the notification payload should be converted # to the legacy format. self._test_set_vm_state_and_notify( request_spec=request_spec, payload_request_spec=request_spec.to_legacy_request_spec_dict()) def test_set_vm_state_and_notify_request_spec_none(self): """Tests passing None for the request_spec to set_vm_state_and_notify. """ # The request_spec in the notification payload should be changed to # just an empty dict. self._test_set_vm_state_and_notify( request_spec=None, payload_request_spec={}) def test_build_filter_properties(self): sched_hints = {'hint': ['over-there']} forced_host = 'forced-host1' forced_node = 'forced-node1' instance_type = objects.Flavor() filt_props = scheduler_utils.build_filter_properties(sched_hints, forced_host, forced_node, instance_type) self.assertEqual(sched_hints, filt_props['scheduler_hints']) self.assertEqual([forced_host], filt_props['force_hosts']) self.assertEqual([forced_node], filt_props['force_nodes']) self.assertEqual(instance_type, filt_props['instance_type']) def test_build_filter_properties_no_forced_host_no_force_node(self): sched_hints = {'hint': ['over-there']} forced_host = None forced_node = None instance_type = objects.Flavor() filt_props = scheduler_utils.build_filter_properties(sched_hints, forced_host, forced_node, instance_type) self.assertEqual(sched_hints, filt_props['scheduler_hints']) self.assertEqual(instance_type, filt_props['instance_type']) self.assertNotIn('forced_host', filt_props) self.assertNotIn('forced_node', filt_props) def _test_populate_filter_props(self, with_retry=True, force_hosts=None, force_nodes=None, no_limits=None): if force_hosts is None: force_hosts = [] if force_nodes is None: force_nodes = [] if with_retry: if ((len(force_hosts) == 1 and len(force_nodes) <= 1) or (len(force_nodes) == 1 and len(force_hosts) <= 1)): filter_properties = dict(force_hosts=force_hosts, force_nodes=force_nodes) elif len(force_hosts) > 1 or len(force_nodes) > 1: filter_properties = dict(retry=dict(hosts=[]), force_hosts=force_hosts, force_nodes=force_nodes) else: filter_properties = dict(retry=dict(hosts=[])) else: filter_properties = dict() if no_limits: fake_limits = None else: fake_limits = objects.SchedulerLimits(vcpu=1, disk_gb=2, memory_mb=3, numa_topology=None) selection = objects.Selection(service_host="fake-host", nodename="fake-node", limits=fake_limits) scheduler_utils.populate_filter_properties(filter_properties, selection) enable_retry_force_hosts = not force_hosts or len(force_hosts) > 1 enable_retry_force_nodes = not force_nodes or len(force_nodes) > 1 if with_retry or enable_retry_force_hosts or enable_retry_force_nodes: # So we can check for 2 hosts scheduler_utils.populate_filter_properties(filter_properties, selection) if force_hosts: expected_limits = None elif no_limits: expected_limits = {} elif isinstance(fake_limits, objects.SchedulerLimits): expected_limits = fake_limits.to_dict() else: expected_limits = fake_limits 
self.assertEqual(expected_limits, filter_properties.get('limits')) if (with_retry and enable_retry_force_hosts and enable_retry_force_nodes): self.assertEqual([['fake-host', 'fake-node'], ['fake-host', 'fake-node']], filter_properties['retry']['hosts']) else: self.assertNotIn('retry', filter_properties) def test_populate_filter_props(self): self._test_populate_filter_props() def test_populate_filter_props_no_retry(self): self._test_populate_filter_props(with_retry=False) def test_populate_filter_props_force_hosts_no_retry(self): self._test_populate_filter_props(force_hosts=['force-host']) def test_populate_filter_props_force_nodes_no_retry(self): self._test_populate_filter_props(force_nodes=['force-node']) def test_populate_filter_props_multi_force_hosts_with_retry(self): self._test_populate_filter_props(force_hosts=['force-host1', 'force-host2']) def test_populate_filter_props_multi_force_nodes_with_retry(self): self._test_populate_filter_props(force_nodes=['force-node1', 'force-node2']) def test_populate_filter_props_no_limits(self): self._test_populate_filter_props(no_limits=True) def test_populate_retry_exception_at_max_attempts(self): self.flags(max_attempts=2, group='scheduler') msg = 'The exception text was preserved!' filter_properties = dict(retry=dict(num_attempts=2, hosts=[], exc_reason=[msg])) nvh = self.assertRaises(exception.MaxRetriesExceeded, scheduler_utils.populate_retry, filter_properties, uuids.instance) # make sure 'msg' is a substring of the complete exception text self.assertIn(msg, six.text_type(nvh)) def _check_parse_options(self, opts, sep, converter, expected): good = scheduler_utils.parse_options(opts, sep=sep, converter=converter) for item in expected: self.assertIn(item, good) def test_parse_options(self): # check normal self._check_parse_options(['foo=1', 'bar=-2.1'], '=', float, [('foo', 1.0), ('bar', -2.1)]) # check convert error self._check_parse_options(['foo=a1', 'bar=-2.1'], '=', float, [('bar', -2.1)]) # check separator missing self._check_parse_options(['foo', 'bar=-2.1'], '=', float, [('bar', -2.1)]) # check key missing self._check_parse_options(['=5', 'bar=-2.1'], '=', float, [('bar', -2.1)]) def test_validate_filters_configured(self): self.flags(enabled_filters='FakeFilter1,FakeFilter2', group='filter_scheduler') self.assertTrue(scheduler_utils.validate_filter('FakeFilter1')) self.assertTrue(scheduler_utils.validate_filter('FakeFilter2')) self.assertFalse(scheduler_utils.validate_filter('FakeFilter3')) def test_validate_weighers_configured(self): self.flags(weight_classes=[ 'ServerGroupSoftAntiAffinityWeigher', 'FakeFilter1'], group='filter_scheduler') self.assertTrue(scheduler_utils.validate_weigher( 'ServerGroupSoftAntiAffinityWeigher')) self.assertTrue(scheduler_utils.validate_weigher('FakeFilter1')) self.assertFalse(scheduler_utils.validate_weigher( 'ServerGroupSoftAffinityWeigher')) def test_validate_weighers_configured_all_weighers(self): self.assertTrue(scheduler_utils.validate_weigher( 'ServerGroupSoftAffinityWeigher')) self.assertTrue(scheduler_utils.validate_weigher( 'ServerGroupSoftAntiAffinityWeigher')) def _create_server_group(self, policy='anti-affinity'): instance = fake_instance.fake_instance_obj(self.context, params={'host': 'hostA'}) group = objects.InstanceGroup() group.name = 'pele' group.uuid = uuids.fake group.members = [instance.uuid] group.policy = policy return group def _get_group_details(self, group, policy=None): group_hosts = ['hostB'] with test.nested( mock.patch.object(objects.InstanceGroup, 'get_by_instance_uuid', 
return_value=group), mock.patch.object(objects.InstanceGroup, 'get_hosts', return_value=['hostA']), ) as (get_group, get_hosts): scheduler_utils._SUPPORTS_ANTI_AFFINITY = None scheduler_utils._SUPPORTS_AFFINITY = None group_info = scheduler_utils._get_group_details( self.context, 'fake_uuid', group_hosts) self.assertEqual( (set(['hostA', 'hostB']), policy, group.members), group_info) def test_get_group_details(self): for policy in ['affinity', 'anti-affinity', 'soft-affinity', 'soft-anti-affinity']: group = self._create_server_group(policy) self._get_group_details(group, policy=policy) def test_get_group_details_with_no_instance_uuid(self): group_info = scheduler_utils._get_group_details(self.context, None) self.assertIsNone(group_info) def _get_group_details_with_filter_not_configured(self, policy): self.flags(enabled_filters=['fake'], group='filter_scheduler') self.flags(weight_classes=['fake'], group='filter_scheduler') instance = fake_instance.fake_instance_obj(self.context, params={'host': 'hostA'}) group = objects.InstanceGroup() group.uuid = uuids.fake group.members = [instance.uuid] group.policy = policy with test.nested( mock.patch.object(objects.InstanceGroup, 'get_by_instance_uuid', return_value=group), ) as (get_group,): scheduler_utils._SUPPORTS_ANTI_AFFINITY = None scheduler_utils._SUPPORTS_AFFINITY = None scheduler_utils._SUPPORTS_SOFT_AFFINITY = None scheduler_utils._SUPPORTS_SOFT_ANTI_AFFINITY = None self.assertRaises(exception.UnsupportedPolicyException, scheduler_utils._get_group_details, self.context, uuids.instance) def test_get_group_details_with_filter_not_configured(self): policies = ['anti-affinity', 'affinity', 'soft-affinity', 'soft-anti-affinity'] for policy in policies: self._get_group_details_with_filter_not_configured(policy) @mock.patch.object(scheduler_utils, '_get_group_details') def test_setup_instance_group_in_request_spec(self, mock_ggd): mock_ggd.return_value = scheduler_utils.GroupDetails( hosts=set(['hostA', 'hostB']), policy='policy', members=['instance1']) spec = objects.RequestSpec(instance_uuid=uuids.instance) spec.instance_group = objects.InstanceGroup(hosts=['hostC']) scheduler_utils.setup_instance_group(self.context, spec) mock_ggd.assert_called_once_with(self.context, uuids.instance, ['hostC']) # Given it returns a list from a set, make sure it's sorted. self.assertEqual(['hostA', 'hostB'], sorted(spec.instance_group.hosts)) self.assertEqual('policy', spec.instance_group.policy) self.assertEqual(['instance1'], spec.instance_group.members) @mock.patch.object(scheduler_utils, '_get_group_details') def test_setup_instance_group_with_no_group(self, mock_ggd): mock_ggd.return_value = None spec = objects.RequestSpec(instance_uuid=uuids.instance) spec.instance_group = objects.InstanceGroup(hosts=['hostC']) scheduler_utils.setup_instance_group(self.context, spec) mock_ggd.assert_called_once_with(self.context, uuids.instance, ['hostC']) # Make sure the field isn't touched by the caller. 
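# When _get_group_details returns None there is nothing to merge back,
# so the pre-existing instance_group on the spec keeps exactly what it
# started with: no 'policies' attribute set and hosts still ['hostC'].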
self.assertFalse(spec.instance_group.obj_attr_is_set('policies')) self.assertEqual(['hostC'], spec.instance_group.hosts) @mock.patch.object(scheduler_utils, '_get_group_details') def test_setup_instance_group_with_filter_not_configured(self, mock_ggd): mock_ggd.side_effect = exception.NoValidHost(reason='whatever') spec = {'instance_properties': {'uuid': uuids.instance}} spec = objects.RequestSpec(instance_uuid=uuids.instance) spec.instance_group = objects.InstanceGroup(hosts=['hostC']) self.assertRaises(exception.NoValidHost, scheduler_utils.setup_instance_group, self.context, spec) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/scheduler/test_utils.py0000664000175000017500000022254600000000000022414 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import ddt import mock import os_resource_classes as orc from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids import six from nova import context as nova_context from nova import exception from nova import objects from nova.scheduler.client import report from nova.scheduler import utils from nova import test from nova.tests.unit import fake_instance from nova.tests.unit.scheduler import fakes class FakeResourceRequest(object): """A fake of ``nova.scheduler.utils.ResourceRequest``. Allows us to assert that various properties of a real ResourceRequest object are set as we'd like them to be. 
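Only the private attributes compared by assertResourceRequestsEqual
(_rg_by_id, _group_policy and _limit) are modelled here.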
""" def __init__(self): self._rg_by_id = {} self._group_policy = None self._limit = 1000 class TestUtilsBase(test.NoDBTestCase): def setUp(self): super(TestUtilsBase, self).setUp() self.context = nova_context.get_admin_context() self.mock_host_manager = mock.Mock() def assertResourceRequestsEqual(self, expected, observed): self.assertEqual(expected._limit, observed._limit) self.assertEqual(expected._group_policy, observed._group_policy) ex_by_id = expected._rg_by_id ob_by_id = observed._rg_by_id self.assertEqual(set(ex_by_id), set(ob_by_id)) for ident in ex_by_id: self.assertEqual(vars(ex_by_id[ident]), vars(ob_by_id[ident])) @ddt.ddt class TestUtils(TestUtilsBase): def _test_resources_from_request_spec(self, expected, flavor, image=None): if image is None: image = objects.ImageMeta(properties=objects.ImageMetaProps()) fake_spec = objects.RequestSpec(flavor=flavor, image=image) resources = utils.resources_from_request_spec( self.context, fake_spec, self.mock_host_manager) self.assertResourceRequestsEqual(expected, resources) return resources def test_resources_from_request_spec_flavor_only(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0) expected_resources = FakeResourceRequest() expected_resources._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 15, } ) self._test_resources_from_request_spec(expected_resources, flavor) def test_resources_from_request_spec_flavor_req_traits(self): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={'trait:CUSTOM_FLAVOR_TRAIT': 'required'}) expected_resources = FakeResourceRequest() expected_resources._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 15, }, required_traits=set(['CUSTOM_FLAVOR_TRAIT']) ) resources = self._test_resources_from_request_spec( expected_resources, flavor) expected_result = set(['CUSTOM_FLAVOR_TRAIT']) self.assertEqual(expected_result, resources.all_required_traits) def test_resources_from_request_spec_flavor_and_image_traits(self): image = objects.ImageMeta.from_dict({ 'properties': { 'trait:CUSTOM_IMAGE_TRAIT1': 'required', 'trait:CUSTOM_IMAGE_TRAIT2': 'required', }, 'id': 'c8b1790e-a07d-4971-b137-44f2432936cd', }) flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={ 'trait:CUSTOM_FLAVOR_TRAIT': 'required', 'trait:CUSTOM_IMAGE_TRAIT2': 'required'}) expected_resources = FakeResourceRequest() expected_resources._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 15, }, required_traits={ # trait:CUSTOM_IMAGE_TRAIT2 is defined in both extra_specs and # image metadata. We get a union of both. 
'CUSTOM_IMAGE_TRAIT1', 'CUSTOM_IMAGE_TRAIT2', 'CUSTOM_FLAVOR_TRAIT', } ) self._test_resources_from_request_spec(expected_resources, flavor, image) def test_resources_from_request_spec_flavor_forbidden_trait(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={ 'trait:CUSTOM_FLAVOR_TRAIT': 'forbidden'}) expected_resources = FakeResourceRequest() expected_resources._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 15, }, forbidden_traits={ 'CUSTOM_FLAVOR_TRAIT', } ) self._test_resources_from_request_spec(expected_resources, flavor) def test_resources_from_request_spec_with_no_disk(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=0, ephemeral_gb=0, swap=0) expected_resources = FakeResourceRequest() expected_resources._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ 'VCPU': 1, 'MEMORY_MB': 1024, } ) self._test_resources_from_request_spec(expected_resources, flavor) def test_get_resources_from_request_spec_custom_resource_class(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={"resources:CUSTOM_TEST_CLASS": 1}) expected_resources = FakeResourceRequest() expected_resources._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ "VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 15, "CUSTOM_TEST_CLASS": 1, } ) self._test_resources_from_request_spec(expected_resources, flavor) def test_get_resources_from_request_spec_override_flavor_amounts(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={ "resources:VCPU": 99, "resources:MEMORY_MB": 99, "resources:DISK_GB": 99}) expected_resources = FakeResourceRequest() expected_resources._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ "VCPU": 99, "MEMORY_MB": 99, "DISK_GB": 99, } ) self._test_resources_from_request_spec(expected_resources, flavor) def test_get_resources_from_request_spec_remove_flavor_amounts(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={ "resources:VCPU": 0, "resources:DISK_GB": 0}) expected_resources = FakeResourceRequest() expected_resources._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ "MEMORY_MB": 1024, } ) self._test_resources_from_request_spec(expected_resources, flavor) def test_get_resources_from_request_spec_vgpu(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=0, swap=0, extra_specs={ "resources:VGPU": 1, "resources:VGPU_DISPLAY_HEAD": 1}) expected_resources = FakeResourceRequest() expected_resources._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ "VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 10, "VGPU": 1, "VGPU_DISPLAY_HEAD": 1, } ) self._test_resources_from_request_spec(expected_resources, flavor) def test_get_resources_from_request_spec_bad_std_resource_class(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={ "resources:DOESNT_EXIST": 0}) fake_spec = objects.RequestSpec(flavor=flavor) with mock.patch("nova.objects.request_spec.LOG.warning") as mock_log: utils.resources_from_request_spec( self.context, fake_spec, self.mock_host_manager) mock_log.assert_called_once() args = mock_log.call_args[0] self.assertEqual(args[0], "Received an invalid ResourceClass " "'%(key)s' in extra_specs.") self.assertEqual(args[1], {"key": 
"DOESNT_EXIST"}) def test_get_resources_from_request_spec_granular(self): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=0, swap=0, extra_specs={'resources1:VGPU': '1', 'resources1:VGPU_DISPLAY_HEAD': '2', # Replace 'resources3:VCPU': '2', # Stay separate (don't sum) 'resources42:SRIOV_NET_VF': '1', 'resources24:SRIOV_NET_VF': '2', # Ignore 'some:bogus': 'value', # Custom in the unnumbered group (merge with DISK_GB) 'resources:CUSTOM_THING': '123', # Traits make it through 'trait3:CUSTOM_SILVER': 'required', 'trait3:CUSTOM_GOLD': 'required', # Delete standard 'resources86:MEMORY_MB': '0', # Standard and custom zeroes don't make it through 'resources:IPV4_ADDRESS': '0', 'resources:CUSTOM_FOO': '0', # Bogus values don't make it through 'resources1:MEMORY_MB': 'bogus', 'group_policy': 'none'}) expected_resources = FakeResourceRequest() expected_resources._group_policy = 'none' expected_resources._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ 'DISK_GB': 10, 'CUSTOM_THING': 123, } ) expected_resources._rg_by_id['1'] = objects.RequestGroup( requester_id='1', resources={ 'VGPU': 1, 'VGPU_DISPLAY_HEAD': 2, } ) expected_resources._rg_by_id['3'] = objects.RequestGroup( requester_id='3', resources={ 'VCPU': 2, }, required_traits={ 'CUSTOM_GOLD', 'CUSTOM_SILVER', } ) expected_resources._rg_by_id['24'] = objects.RequestGroup( requester_id='24', resources={ 'SRIOV_NET_VF': 2, }, ) expected_resources._rg_by_id['42'] = objects.RequestGroup( requester_id='42', resources={ 'SRIOV_NET_VF': 1, } ) rr = self._test_resources_from_request_spec(expected_resources, flavor) expected_querystring = ( 'group_policy=none&' 'limit=1000&' 'required3=CUSTOM_GOLD%2CCUSTOM_SILVER&' 'resources=CUSTOM_THING%3A123%2CDISK_GB%3A10&' 'resources1=VGPU%3A1%2CVGPU_DISPLAY_HEAD%3A2&' 'resources24=SRIOV_NET_VF%3A2&' 'resources3=VCPU%3A2&' 'resources42=SRIOV_NET_VF%3A1' ) self.assertEqual(expected_querystring, rr.to_querystring()) def test_all_required_traits(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={ 'trait:HW_CPU_X86_SSE': 'required', 'trait:HW_CPU_X86_AVX': 'required', 'trait:HW_CPU_X86_AVX2': 'forbidden'}) expected_resources = FakeResourceRequest() expected_resources._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 15, }, required_traits={ 'HW_CPU_X86_SSE', 'HW_CPU_X86_AVX' }, forbidden_traits={ 'HW_CPU_X86_AVX2' } ) resource = self._test_resources_from_request_spec(expected_resources, flavor) expected_result = {'HW_CPU_X86_SSE', 'HW_CPU_X86_AVX'} self.assertEqual(expected_result, resource.all_required_traits) def test_resources_from_request_spec_aggregates(self): destination = objects.Destination() flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=1, ephemeral_gb=0, swap=0) reqspec = objects.RequestSpec(flavor=flavor, requested_destination=destination) destination.require_aggregates(['foo', 'bar']) req = utils.resources_from_request_spec( self.context, reqspec, self.mock_host_manager) self.assertEqual([['foo', 'bar']], req.get_request_group(None).aggregates) destination.require_aggregates(['baz']) req = utils.resources_from_request_spec( self.context, reqspec, self.mock_host_manager) self.assertEqual([['foo', 'bar'], ['baz']], req.get_request_group(None).aggregates) def test_resources_from_request_spec_no_aggregates(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=1, ephemeral_gb=0, swap=0) reqspec = 
objects.RequestSpec(flavor=flavor) req = utils.resources_from_request_spec( self.context, reqspec, self.mock_host_manager) self.assertEqual([], req.get_request_group(None).aggregates) reqspec.requested_destination = None req = utils.resources_from_request_spec( self.context, reqspec, self.mock_host_manager) self.assertEqual([], req.get_request_group(None).aggregates) reqspec.requested_destination = objects.Destination() req = utils.resources_from_request_spec( self.context, reqspec, self.mock_host_manager) self.assertEqual([], req.get_request_group(None).aggregates) reqspec.requested_destination.aggregates = None req = utils.resources_from_request_spec( self.context, reqspec, self.mock_host_manager) self.assertEqual([], req.get_request_group(None).aggregates) def test_resources_from_request_spec_forbidden_aggregates(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=1, ephemeral_gb=0, swap=0) reqspec = objects.RequestSpec( flavor=flavor, requested_destination=objects.Destination( forbidden_aggregates=set(['foo', 'bar']))) req = utils.resources_from_request_spec(self.context, reqspec, self.mock_host_manager) self.assertEqual(set(['foo', 'bar']), req.get_request_group(None).forbidden_aggregates) def test_resources_from_request_spec_no_forbidden_aggregates(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=1, ephemeral_gb=0, swap=0) reqspec = objects.RequestSpec(flavor=flavor) req = utils.resources_from_request_spec( self.context, reqspec, self.mock_host_manager) self.assertEqual(set([]), req.get_request_group(None). forbidden_aggregates) reqspec.requested_destination = None req = utils.resources_from_request_spec( self.context, reqspec, self.mock_host_manager) self.assertEqual(set([]), req.get_request_group(None). forbidden_aggregates) reqspec.requested_destination = objects.Destination() req = utils.resources_from_request_spec( self.context, reqspec, self.mock_host_manager) self.assertEqual(set([]), req.get_request_group(None). forbidden_aggregates) reqspec.requested_destination.forbidden_aggregates = None req = utils.resources_from_request_spec( self.context, reqspec, self.mock_host_manager) self.assertEqual(set([]), req.get_request_group(None). 
forbidden_aggregates) def test_process_extra_specs_granular_called(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={"resources:CUSTOM_TEST_CLASS": 1}) fake_spec = objects.RequestSpec(flavor=flavor) # just call this to make sure things don't explode utils.resources_from_request_spec( self.context, fake_spec, self.mock_host_manager) def test_process_extra_specs_granular_not_called(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0) fake_spec = objects.RequestSpec(flavor=flavor) # just call this to make sure things don't explode utils.resources_from_request_spec( self.context, fake_spec, self.mock_host_manager) def test_process_missing_extra_specs_value(self): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={"resources:CUSTOM_TEST_CLASS": ""}) fake_spec = objects.RequestSpec(flavor=flavor) # just call this to make sure things don't explode utils.resources_from_request_spec( self.context, fake_spec, self.mock_host_manager) def test_process_no_force_hosts_or_force_nodes(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=15, ephemeral_gb=0, swap=0) expected = FakeResourceRequest() expected._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 15, }, ) rr = self._test_resources_from_request_spec(expected, flavor) expected_querystring = ( 'limit=1000&' 'resources=DISK_GB%3A15%2CMEMORY_MB%3A1024%2CVCPU%3A1' ) self.assertEqual(expected_querystring, rr.to_querystring()) def test_process_use_force_nodes(self): fake_nodes = objects.ComputeNodeList(objects=[ objects.ComputeNode(host='fake-host', uuid='12345678-1234-1234-1234-123456789012', hypervisor_hostname='test')]) self.mock_host_manager.get_compute_nodes_by_host_or_node.\ return_value = fake_nodes flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=15, ephemeral_gb=0, swap=0) fake_spec = objects.RequestSpec(flavor=flavor, force_nodes=['test']) expected = FakeResourceRequest() expected._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 15, }, in_tree='12345678-1234-1234-1234-123456789012', ) resources = utils.resources_from_request_spec( self.context, fake_spec, self.mock_host_manager) self.assertResourceRequestsEqual(expected, resources) expected_querystring = ( 'in_tree=12345678-1234-1234-1234-123456789012&' 'limit=1000&resources=DISK_GB%3A15%2CMEMORY_MB%3A1024%2CVCPU%3A1') self.assertEqual(expected_querystring, resources.to_querystring()) self.mock_host_manager.get_compute_nodes_by_host_or_node.\ assert_called_once_with(self.context, None, 'test', cell=None) def test_process_use_force_hosts(self): fake_nodes = objects.ComputeNodeList(objects=[ objects.ComputeNode(host='test', uuid='12345678-1234-1234-1234-123456789012') ]) self.mock_host_manager.get_compute_nodes_by_host_or_node.\ return_value = fake_nodes flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=15, ephemeral_gb=0, swap=0) fake_spec = objects.RequestSpec(flavor=flavor, force_hosts=['test']) expected = FakeResourceRequest() expected._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 15, }, in_tree='12345678-1234-1234-1234-123456789012', ) resources = utils.resources_from_request_spec( self.context, fake_spec, self.mock_host_manager) self.assertResourceRequestsEqual(expected, resources) expected_querystring = ( 
'in_tree=12345678-1234-1234-1234-123456789012&' 'limit=1000&resources=DISK_GB%3A15%2CMEMORY_MB%3A1024%2CVCPU%3A1') self.assertEqual(expected_querystring, resources.to_querystring()) self.mock_host_manager.get_compute_nodes_by_host_or_node.\ assert_called_once_with(self.context, 'test', None, cell=None) def test_process_use_force_hosts_multinodes_found(self): fake_nodes = objects.ComputeNodeList(objects=[ objects.ComputeNode(host='test', uuid='12345678-1234-1234-1234-123456789012'), objects.ComputeNode(host='test', uuid='87654321-4321-4321-4321-210987654321'), ]) self.mock_host_manager.get_compute_nodes_by_host_or_node.\ return_value = fake_nodes flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=15, ephemeral_gb=0, swap=0) fake_spec = objects.RequestSpec(flavor=flavor, force_hosts=['test']) expected = FakeResourceRequest() expected._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 15, }, ) # Validate that the limit is unset expected._limit = None resources = utils.resources_from_request_spec( self.context, fake_spec, self.mock_host_manager) self.assertResourceRequestsEqual(expected, resources) # Validate that the limit is unset expected_querystring = ( 'resources=DISK_GB%3A15%2CMEMORY_MB%3A1024%2CVCPU%3A1') self.assertEqual(expected_querystring, resources.to_querystring()) self.mock_host_manager.get_compute_nodes_by_host_or_node.\ assert_called_once_with(self.context, 'test', None, cell=None) def test_process_use_requested_destination(self): fake_cell = objects.CellMapping(uuid=uuids.cell1, name='foo') destination = objects.Destination( host='fake-host', node='fake-node', cell=fake_cell) fake_nodes = objects.ComputeNodeList(objects=[ objects.ComputeNode(host='fake-host', uuid='12345678-1234-1234-1234-123456789012', hypervisor_hostname='fake-node') ]) self.mock_host_manager.get_compute_nodes_by_host_or_node.\ return_value = fake_nodes flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=15, ephemeral_gb=0, swap=0) fake_spec = objects.RequestSpec( flavor=flavor, requested_destination=destination) expected = FakeResourceRequest() expected._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 15, }, in_tree='12345678-1234-1234-1234-123456789012', ) resources = utils.resources_from_request_spec( self.context, fake_spec, self.mock_host_manager) self.assertResourceRequestsEqual(expected, resources) expected_querystring = ( 'in_tree=12345678-1234-1234-1234-123456789012&' 'limit=1000&resources=DISK_GB%3A15%2CMEMORY_MB%3A1024%2CVCPU%3A1') self.assertEqual(expected_querystring, resources.to_querystring()) self.mock_host_manager.get_compute_nodes_by_host_or_node.\ assert_called_once_with( self.context, 'fake-host', 'fake-node', cell=fake_cell) def test_resources_from_request_spec_having_requested_resources(self): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0) rg1 = objects.RequestGroup( resources={'CUSTOM_FOO': 1}, requester_id='The-first-group') # Leave requester_id out to trigger ValueError rg2 = objects.RequestGroup(required_traits={'CUSTOM_BAR'}) reqspec = objects.RequestSpec(flavor=flavor, requested_resources=[rg1, rg2]) self.assertRaises( ValueError, utils.resources_from_request_spec, self.context, reqspec, self.mock_host_manager) # Set conflicting requester_id rg2.requester_id = 'The-first-group' self.assertRaises( exception.RequestGroupSuffixConflict, utils.resources_from_request_spec, self.context, reqspec, 
self.mock_host_manager) # Good path: nonempty non-conflicting requester_id rg2.requester_id = 'The-second-group' req = utils.resources_from_request_spec( self.context, reqspec, self.mock_host_manager) self.assertEqual({'MEMORY_MB': 1024, 'DISK_GB': 15, 'VCPU': 1}, req.get_request_group(None).resources) self.assertIs(rg1, req.get_request_group('The-first-group')) self.assertIs(rg2, req.get_request_group('The-second-group')) # Make sure those ended up as suffixes correctly qs = req.to_querystring() self.assertIn('resourcesThe-first-group=CUSTOM_FOO%3A1', qs) self.assertIn('requiredThe-second-group=CUSTOM_BAR', qs) def test_resources_from_request_spec_requested_resources_unfilled(self): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0) reqspec = objects.RequestSpec(flavor=flavor) req = utils.resources_from_request_spec( self.context, reqspec, self.mock_host_manager) self.assertEqual({'MEMORY_MB': 1024, 'DISK_GB': 15, 'VCPU': 1}, req.get_request_group(None).resources) self.assertEqual(1, len(list(req._rg_by_id))) reqspec = objects.RequestSpec(flavor=flavor, requested_resources=[]) req = utils.resources_from_request_spec( self.context, reqspec, self.mock_host_manager) self.assertEqual({'MEMORY_MB': 1024, 'DISK_GB': 15, 'VCPU': 1}, req.get_request_group(None).resources) self.assertEqual(1, len(list(req._rg_by_id))) @ddt.data( # Test single hint that we are checking for. {'group': [uuids.fake]}, # Test hint we care about and some other random hint. {'same_host': [uuids.fake], 'fake-hint': ['fake-value']}, # Test multiple hints we are checking for. {'same_host': [uuids.server1], 'different_host': [uuids.server2]}) def test_resources_from_request_spec_no_limit_based_on_hint(self, hints): """Tests that there is no limit applied to the GET /allocation_candidates query string if a given scheduler hint is in the request spec. """ flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=15, ephemeral_gb=0, swap=0) fake_spec = objects.RequestSpec( flavor=flavor, scheduler_hints=hints) expected = FakeResourceRequest() expected._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 15, }, ) expected._limit = None resources = utils.resources_from_request_spec( self.context, fake_spec, self.mock_host_manager) self.assertResourceRequestsEqual(expected, resources) expected_querystring = ( 'resources=DISK_GB%3A15%2CMEMORY_MB%3A1024%2CVCPU%3A1' ) self.assertEqual(expected_querystring, resources.to_querystring()) @mock.patch('nova.compute.utils.is_volume_backed_instance', return_value=False) def test_resources_from_flavor_no_bfv(self, mock_is_bfv): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=1024, extra_specs={}) instance = objects.Instance() expected = { 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 16, } actual = utils.resources_from_flavor(instance, flavor) self.assertEqual(expected, actual) @mock.patch('nova.compute.utils.is_volume_backed_instance', return_value=True) def test_resources_from_flavor_bfv(self, mock_is_bfv): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=1024, extra_specs={}) instance = objects.Instance() expected = { 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 6, # No root disk... 
} actual = utils.resources_from_flavor(instance, flavor) self.assertEqual(expected, actual) @mock.patch('nova.compute.utils.is_volume_backed_instance', new=mock.Mock(return_value=False)) def test_resources_from_flavor_with_override(self): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=1024, extra_specs={ # Replace 'resources:VCPU': '2', # Sum up 'resources42:SRIOV_NET_VF': '1', 'resources24:SRIOV_NET_VF': '2', # Ignore 'some:bogus': 'value', # Custom 'resources:CUSTOM_THING': '123', # Ignore 'trait:CUSTOM_GOLD': 'required', # Delete standard 'resources86:MEMORY_MB': 0, # Standard and custom zeroes don't make it through 'resources:IPV4_ADDRESS': 0, 'resources:CUSTOM_FOO': 0, 'group_policy': 'none'}) instance = objects.Instance() expected = { 'VCPU': 2, 'DISK_GB': 16, 'CUSTOM_THING': 123, 'SRIOV_NET_VF': 3, } actual = utils.resources_from_flavor(instance, flavor) self.assertEqual(expected, actual) def test_resource_request_init(self): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0) expected = FakeResourceRequest() expected._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 15, }, ) rs = objects.RequestSpec(flavor=flavor, is_bfv=False) rr = utils.ResourceRequest(rs) self.assertResourceRequestsEqual(expected, rr) def test_resource_request_init_with_extra_specs(self): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={ 'resources:VCPU': '2', 'resources:MEMORY_MB': '2048', 'trait:HW_CPU_X86_AVX': 'required', # Key skipped because no colons 'nocolons': '42', 'trait:CUSTOM_MAGIC': 'required', 'trait:CUSTOM_BRONZE': 'forbidden', # Resource skipped because invalid resource class name 'resources86:CUTSOM_MISSPELLED': '86', 'resources1:SRIOV_NET_VF': '1', # Resource skipped because non-int-able value 'resources86:CUSTOM_FOO': 'seven', # Resource skipped because negative value 'resources86:CUSTOM_NEGATIVE': '-7', 'resources1:IPV4_ADDRESS': '1', # Trait skipped because unsupported value 'trait86:CUSTOM_GOLD': 'preferred', 'trait1:CUSTOM_PHYSNET_NET1': 'required', 'trait1:CUSTOM_PHYSNET_NET2': 'forbidden', 'resources2:SRIOV_NET_VF': '1', 'resources2:IPV4_ADDRESS': '2', 'trait2:CUSTOM_PHYSNET_NET2': 'required', 'trait2:HW_NIC_ACCEL_SSL': 'required', # Groupings that don't quite match the patterns are ignored 'resources_*5:SRIOV_NET_VF': '7', 'traitFoo$:HW_NIC_ACCEL_SSL': 'required', # Solo resource, no corresponding traits 'resources3:DISK_GB': '5', 'group_policy': 'isolate', }) expected = FakeResourceRequest() expected._group_policy = 'isolate' expected._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ 'VCPU': 2, 'MEMORY_MB': 2048, }, required_traits={ 'HW_CPU_X86_AVX', 'CUSTOM_MAGIC', }, forbidden_traits={ 'CUSTOM_BRONZE', }, ) expected._rg_by_id['1'] = objects.RequestGroup( requester_id='1', resources={ 'SRIOV_NET_VF': 1, 'IPV4_ADDRESS': 1, }, required_traits={ 'CUSTOM_PHYSNET_NET1', }, forbidden_traits={ 'CUSTOM_PHYSNET_NET2', }, ) expected._rg_by_id['2'] = objects.RequestGroup( requester_id='2', resources={ 'SRIOV_NET_VF': 1, 'IPV4_ADDRESS': 2, }, required_traits={ 'CUSTOM_PHYSNET_NET2', 'HW_NIC_ACCEL_SSL', } ) expected._rg_by_id['3'] = objects.RequestGroup( requester_id='3', resources={ 'DISK_GB': 5, } ) rs = objects.RequestSpec(flavor=flavor, is_bfv=False) rr = utils.ResourceRequest(rs) self.assertResourceRequestsEqual(expected, rr) expected_querystring = ( 'group_policy=isolate&' 
'limit=1000&' 'required=CUSTOM_MAGIC%2CHW_CPU_X86_AVX%2C%21CUSTOM_BRONZE&' 'required1=CUSTOM_PHYSNET_NET1%2C%21CUSTOM_PHYSNET_NET2&' 'required2=CUSTOM_PHYSNET_NET2%2CHW_NIC_ACCEL_SSL&' 'resources=MEMORY_MB%3A2048%2CVCPU%3A2&' 'resources1=IPV4_ADDRESS%3A1%2CSRIOV_NET_VF%3A1&' 'resources2=IPV4_ADDRESS%3A2%2CSRIOV_NET_VF%3A1&' 'resources3=DISK_GB%3A5' ) self.assertEqual(expected_querystring, rr.to_querystring()) def _test_resource_request_init_with_legacy_extra_specs(self): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={ 'hw:cpu_policy': 'dedicated', 'hw:cpu_thread_policy': 'isolate', 'hw:emulator_threads_policy': 'isolate', }) return objects.RequestSpec(flavor=flavor, is_bfv=False) def test_resource_request_init_with_legacy_extra_specs(self): expected = FakeResourceRequest() expected._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ # we should have two PCPUs, one due to hw:cpu_policy and the # other due to hw:cpu_thread_policy 'PCPU': 2, 'MEMORY_MB': 1024, 'DISK_GB': 15, }, forbidden_traits={ # we should forbid hyperthreading due to hw:cpu_thread_policy 'HW_CPU_HYPERTHREADING', }, ) rs = self._test_resource_request_init_with_legacy_extra_specs() rr = utils.ResourceRequest(rs) self.assertResourceRequestsEqual(expected, rr) self.assertTrue(rr.cpu_pinning_requested) def test_resource_request_init_with_legacy_extra_specs_no_translate(self): expected = FakeResourceRequest() expected._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ # we should have a VCPU despite hw:cpu_policy because # enable_pinning_translate=False 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 15, }, # we should not require hyperthreading despite hw:cpu_thread_policy # because enable_pinning_translate=False forbidden_traits=set(), ) rs = self._test_resource_request_init_with_legacy_extra_specs() rr = utils.ResourceRequest(rs, enable_pinning_translate=False) self.assertResourceRequestsEqual(expected, rr) self.assertFalse(rr.cpu_pinning_requested) def test_resource_request_init_with_image_props(self): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0) image = objects.ImageMeta.from_dict({ 'properties': { 'trait:CUSTOM_TRUSTED': 'required', }, 'id': 'c8b1790e-a07d-4971-b137-44f2432936cd' }) expected = FakeResourceRequest() expected._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 15, }, required_traits={ 'CUSTOM_TRUSTED', } ) rs = objects.RequestSpec(flavor=flavor, image=image, is_bfv=False) rr = utils.ResourceRequest(rs) self.assertResourceRequestsEqual(expected, rr) def _test_resource_request_init_with_legacy_image_props(self): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0) image = objects.ImageMeta.from_dict({ 'properties': { 'hw_cpu_policy': 'dedicated', 'hw_cpu_thread_policy': 'require', }, 'id': 'c8b1790e-a07d-4971-b137-44f2432936cd', }) return objects.RequestSpec(flavor=flavor, image=image, is_bfv=False) def test_resource_request_init_with_legacy_image_props(self): expected = FakeResourceRequest() expected._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ # we should have a PCPU due to hw_cpu_policy 'PCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 15, }, required_traits={ # we should require hyperthreading due to hw_cpu_thread_policy 'HW_CPU_HYPERTHREADING', }, ) rs = self._test_resource_request_init_with_legacy_image_props() rr = utils.ResourceRequest(rs) 
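# With pinning translation left enabled (the default), the legacy
# hw_cpu_policy/hw_cpu_thread_policy image properties should have been
# converted into a PCPU resource and a required HW_CPU_HYPERTHREADING
# trait, so cpu_pinning_requested is expected to report True below.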
self.assertResourceRequestsEqual(expected, rr) self.assertTrue(rr.cpu_pinning_requested) def test_resource_request_init_with_legacy_image_props_no_translate(self): expected = FakeResourceRequest() expected._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ # we should have a VCPU despite hw_cpu_policy because # enable_pinning_translate=False 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 15, }, # we should not require hyperthreading despite hw_cpu_thread_policy # because enable_pinning_translate=False required_traits=set(), ) rs = self._test_resource_request_init_with_legacy_image_props() rr = utils.ResourceRequest(rs, enable_pinning_translate=False) self.assertResourceRequestsEqual(expected, rr) self.assertFalse(rr.cpu_pinning_requested) def test_resource_request_init_is_bfv(self): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=1555) expected = FakeResourceRequest() expected._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ 'VCPU': 1, 'MEMORY_MB': 1024, # this should only include the ephemeral and swap disk, and the # latter should be converted from MB to GB and rounded up 'DISK_GB': 7, }, ) rs = objects.RequestSpec(flavor=flavor, is_bfv=True) rr = utils.ResourceRequest(rs) self.assertResourceRequestsEqual(expected, rr) def test_resource_request_with_vpmems(self): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={'hw:pmem': '4GB, 4GB,SMALL'}) expected = FakeResourceRequest() expected._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources={ 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 15, 'CUSTOM_PMEM_NAMESPACE_4GB': 2, 'CUSTOM_PMEM_NAMESPACE_SMALL': 1 }, ) rs = objects.RequestSpec(flavor=flavor, is_bfv=False) rr = utils.ResourceRequest(rs) self.assertResourceRequestsEqual(expected, rr) def test_resource_request_add_group_inserts_the_group(self): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0) rs = objects.RequestSpec(flavor=flavor, is_bfv=False) req = utils.ResourceRequest(rs) rg1 = objects.RequestGroup(requester_id='foo', required_traits={'CUSTOM_FOO'}) req._add_request_group(rg1) rg2 = objects.RequestGroup(requester_id='bar', forbidden_traits={'CUSTOM_BAR'}) req._add_request_group(rg2) self.assertIs(rg1, req.get_request_group('foo')) self.assertIs(rg2, req.get_request_group('bar')) def test_empty_groups_forbidden(self): """Not allowed to add premade RequestGroup without resources/traits/ aggregates. """ flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0) rs = objects.RequestSpec(flavor=flavor, is_bfv=False) req = utils.ResourceRequest(rs) rg = objects.RequestGroup(requester_id='foo') self.assertRaises(ValueError, req._add_request_group, rg) def test_claim_resources_on_destination_no_source_allocations(self): """Tests the negative scenario where the instance does not have allocations in Placement on the source compute node so no claim is attempted on the destination compute node. 
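claim_resources is mocked as a NonCallableMock, so the test would also
fail if a claim were attempted despite the missing source allocations.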
""" reportclient = report.SchedulerReportClient() instance = fake_instance.fake_instance_obj(self.context) source_node = objects.ComputeNode( uuid=uuids.source_node, host=instance.host) dest_node = objects.ComputeNode(uuid=uuids.dest_node, host='dest-host') @mock.patch.object(reportclient, 'get_allocs_for_consumer', return_value={}) @mock.patch.object(reportclient, 'claim_resources', new_callable=mock.NonCallableMock) def test(mock_claim, mock_get_allocs): ex = self.assertRaises( exception.ConsumerAllocationRetrievalFailed, utils.claim_resources_on_destination, self.context, reportclient, instance, source_node, dest_node) mock_get_allocs.assert_called_once_with( self.context, instance.uuid) self.assertIn( 'Expected to find allocations for source node resource ' 'provider %s' % source_node.uuid, six.text_type(ex)) test() def test_claim_resources_on_destination_claim_fails(self): """Tests the negative scenario where the resource allocation claim on the destination compute node fails, resulting in an error. """ reportclient = report.SchedulerReportClient() instance = fake_instance.fake_instance_obj(self.context) source_node = objects.ComputeNode( uuid=uuids.source_node, host=instance.host) dest_node = objects.ComputeNode(uuid=uuids.dest_node, host='dest-host') source_res_allocs = { 'allocations': { uuids.source_node: { 'resources': { 'VCPU': instance.vcpus, 'MEMORY_MB': instance.memory_mb, # This would really include ephemeral and swap too but # we're lazy. 'DISK_GB': instance.root_gb } } }, 'consumer_generation': 1, 'project_id': uuids.project_id, 'user_id': uuids.user_id } dest_alloc_request = { 'allocations': { uuids.dest_node: { 'resources': { 'VCPU': instance.vcpus, 'MEMORY_MB': instance.memory_mb, 'DISK_GB': instance.root_gb } } }, } @mock.patch.object(reportclient, 'get_allocs_for_consumer', return_value=source_res_allocs) @mock.patch.object(reportclient, 'claim_resources', return_value=False) def test(mock_claim, mock_get_allocs): # NOTE(danms): Don't pass source_node_allocations here to test # that they are fetched if needed. self.assertRaises(exception.NoValidHost, utils.claim_resources_on_destination, self.context, reportclient, instance, source_node, dest_node) mock_get_allocs.assert_called_once_with( self.context, instance.uuid) mock_claim.assert_called_once_with( self.context, instance.uuid, dest_alloc_request, instance.project_id, instance.user_id, allocation_request_version='1.28', consumer_generation=1) test() def test_claim_resources_on_destination(self): """Happy path test where everything is successful.""" reportclient = report.SchedulerReportClient() instance = fake_instance.fake_instance_obj(self.context) source_node = objects.ComputeNode( uuid=uuids.source_node, host=instance.host) dest_node = objects.ComputeNode(uuid=uuids.dest_node, host='dest-host') source_res_allocs = { uuids.source_node: { 'resources': { 'VCPU': instance.vcpus, 'MEMORY_MB': instance.memory_mb, # This would really include ephemeral and swap too but # we're lazy. 
'DISK_GB': instance.root_gb } } } dest_alloc_request = { 'allocations': { uuids.dest_node: { 'resources': { 'VCPU': instance.vcpus, 'MEMORY_MB': instance.memory_mb, 'DISK_GB': instance.root_gb } } }, } @mock.patch.object(reportclient, 'get_allocs_for_consumer') @mock.patch.object(reportclient, 'claim_resources', return_value=True) def test(mock_claim, mock_get_allocs): utils.claim_resources_on_destination( self.context, reportclient, instance, source_node, dest_node, source_res_allocs, consumer_generation=None) self.assertFalse(mock_get_allocs.called) mock_claim.assert_called_once_with( self.context, instance.uuid, dest_alloc_request, instance.project_id, instance.user_id, allocation_request_version='1.28', consumer_generation=None) test() @mock.patch('nova.scheduler.client.report.SchedulerReportClient') @mock.patch('nova.scheduler.utils.request_is_rebuild') def test_claim_resources(self, mock_is_rebuild, mock_client): """Tests that when claim_resources() is called, that we appropriately call the placement client to claim resources for the instance. """ mock_is_rebuild.return_value = False ctx = nova_context.RequestContext(user_id=uuids.user_id) spec_obj = objects.RequestSpec(project_id=uuids.project_id) instance_uuid = uuids.instance alloc_req = mock.sentinel.alloc_req mock_client.claim_resources.return_value = True res = utils.claim_resources(ctx, mock_client, spec_obj, instance_uuid, alloc_req) mock_client.claim_resources.assert_called_once_with( ctx, uuids.instance, mock.sentinel.alloc_req, uuids.project_id, uuids.user_id, allocation_request_version=None, consumer_generation=None) self.assertTrue(res) # Now do it again but with RequestSpec.user_id set. spec_obj.user_id = uuids.spec_user_id mock_client.reset_mock() utils.claim_resources(ctx, mock_client, spec_obj, instance_uuid, alloc_req) mock_client.claim_resources.assert_called_once_with( ctx, uuids.instance, mock.sentinel.alloc_req, uuids.project_id, uuids.spec_user_id, allocation_request_version=None, consumer_generation=None) @mock.patch('nova.scheduler.client.report.SchedulerReportClient') @mock.patch('nova.scheduler.utils.request_is_rebuild') def test_claim_resources_for_policy_check(self, mock_is_rebuild, mock_client): mock_is_rebuild.return_value = True ctx = mock.Mock(user_id=uuids.user_id) res = utils.claim_resources(ctx, None, mock.sentinel.spec_obj, mock.sentinel.instance_uuid, []) self.assertTrue(res) mock_is_rebuild.assert_called_once_with(mock.sentinel.spec_obj) self.assertFalse(mock_client.claim_resources.called) def test_get_weight_multiplier(self): host_attr = {'vcpus_total': 4, 'vcpus_used': 6, 'cpu_allocation_ratio': 1.0} host1 = fakes.FakeHostState('fake-host', 'node', host_attr) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'cpu_weight_multiplier': 'invalid'}, )] # Get value from default given value if the agg meta is invalid. 
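# The three cases exercised here: an unparsable aggregate value falls
# back to the supplied default (1.0), a single valid override wins
# (1.9), and with several aggregates the smallest multiplier (1.8) is
# used.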
self.assertEqual( 1.0, utils.get_weight_multiplier(host1, 'cpu_weight_multiplier', 1.0) ) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'cpu_weight_multiplier': '1.9'}, )] # Get value from aggregate metadata self.assertEqual( 1.9, utils.get_weight_multiplier(host1, 'cpu_weight_multiplier', 1.0) ) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'cpu_weight_multiplier': '1.9'}), objects.Aggregate( id=2, name='foo', hosts=['fake-host'], metadata={'cpu_weight_multiplier': '1.8'}), ] # Get min value from aggregate metadata self.assertEqual( 1.8, utils.get_weight_multiplier(host1, 'cpu_weight_multiplier', 1.0) ) def _set_up_and_fill_provider_mapping(self, requested_resources): request_spec = objects.RequestSpec() request_spec.requested_resources = requested_resources allocs = { uuids.rp_uuid1: { 'resources': { 'NET_BW_EGR_KILOBIT_PER_SEC': 1, } }, uuids.rp_uuid2: { 'resources': { 'NET_BW_INGR_KILOBIT_PER_SEC': 1, } } } mappings = { uuids.port_id1: [uuids.rp_uuid2], uuids.port_id2: [uuids.rp_uuid1], } allocation_req = {'allocations': allocs, 'mappings': mappings} selection = objects.Selection( allocation_request=jsonutils.dumps(allocation_req)) # Unmapped initially for rg in requested_resources: self.assertEqual([], rg.provider_uuids) utils.fill_provider_mapping(request_spec, selection) def test_fill_provider_mapping(self): rg1 = objects.RequestGroup(requester_id=uuids.port_id1) rg2 = objects.RequestGroup(requester_id=uuids.port_id2) self._set_up_and_fill_provider_mapping([rg1, rg2]) # Validate the mappings self.assertEqual([uuids.rp_uuid2], rg1.provider_uuids) self.assertEqual([uuids.rp_uuid1], rg2.provider_uuids) def test_fill_provider_mapping_no_op(self): # This just proves that having 'mappings' in the allocation request # doesn't break anything. 
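# With an empty requested_resources list there are no request groups to
# map, so fill_provider_mapping is expected to return without touching
# anything even though the Selection carries a 'mappings' key.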
self._set_up_and_fill_provider_mapping([]) @mock.patch.object(objects.RequestSpec, 'map_requested_resources_to_providers') def test_fill_provider_mapping_based_on_allocation_returns_early( self, mock_map): context = nova_context.RequestContext() request_spec = objects.RequestSpec() # set up the request that there is nothing to do request_spec.requested_resources = [] report_client = mock.sentinel.report_client allocation = mock.sentinel.allocation utils.fill_provider_mapping_based_on_allocation( context, report_client, request_spec, allocation) mock_map.assert_not_called() @mock.patch('nova.scheduler.client.report.SchedulerReportClient') @mock.patch.object(objects.RequestSpec, 'map_requested_resources_to_providers') def test_fill_provider_mapping_based_on_allocation( self, mock_map, mock_report_client): context = nova_context.RequestContext() request_spec = objects.RequestSpec() # set up the request that there is nothing to do request_spec.requested_resources = [objects.RequestGroup()] allocation = { uuids.rp_uuid: { 'resources': { 'NET_BW_EGR_KILOBIT_PER_SEC': 1, } } } traits = ['CUSTOM_PHYSNET1', 'CUSTOM_VNIC_TYPE_NORMAL'] mock_report_client.get_provider_traits.return_value = report.TraitInfo( traits=['CUSTOM_PHYSNET1', 'CUSTOM_VNIC_TYPE_NORMAL'], generation=0) utils.fill_provider_mapping_based_on_allocation( context, mock_report_client, request_spec, allocation) mock_map.assert_called_once_with(allocation, {uuids.rp_uuid: traits}) class TestEncryptedMemoryTranslation(TestUtilsBase): flavor_name = 'm1.test' image_name = 'cirros' def _get_request_spec(self, extra_specs, image): flavor = objects.Flavor(name=self.flavor_name, vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs=extra_specs) # NOTE(aspiers): RequestSpec.flavor is not nullable, but # RequestSpec.image is. 
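# Hence the flavor is always attached while the image attribute is only
# set when a test actually supplies one.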
reqspec = objects.RequestSpec(flavor=flavor) if image: reqspec.image = image return reqspec def _get_resource_request(self, extra_specs, image): reqspec = self._get_request_spec(extra_specs, image) return utils.ResourceRequest(reqspec) def _get_expected_resource_request(self, mem_encryption_context): expected_resources = { 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 15, } if mem_encryption_context: expected_resources[orc.MEM_ENCRYPTION_CONTEXT] = 1 expected = FakeResourceRequest() expected._rg_by_id[None] = objects.RequestGroup( use_same_provider=False, resources=expected_resources) return expected def _test_encrypted_memory_support_not_required(self, extra_specs, image=None): resreq = self._get_resource_request(extra_specs, image) expected = self._get_expected_resource_request(False) self.assertResourceRequestsEqual(expected, resreq) def test_encrypted_memory_support_empty_extra_specs(self): self._test_encrypted_memory_support_not_required(extra_specs={}) def test_encrypted_memory_support_false_extra_spec(self): for extra_spec in ('0', 'false', 'False'): self._test_encrypted_memory_support_not_required( extra_specs={'hw:mem_encryption': extra_spec}) def test_encrypted_memory_support_empty_image_props(self): self._test_encrypted_memory_support_not_required( extra_specs={}, image=objects.ImageMeta(properties=objects.ImageMetaProps())) def test_encrypted_memory_support_false_image_prop(self): for image_prop in ('0', 'false', 'False'): self._test_encrypted_memory_support_not_required( extra_specs={}, image=objects.ImageMeta( properties=objects.ImageMetaProps( hw_mem_encryption=image_prop)) ) def test_encrypted_memory_support_both_false(self): for extra_spec in ('0', 'false', 'False'): for image_prop in ('0', 'false', 'False'): self._test_encrypted_memory_support_not_required( extra_specs={'hw:mem_encryption': extra_spec}, image=objects.ImageMeta( properties=objects.ImageMetaProps( hw_mem_encryption=image_prop)) ) def _test_encrypted_memory_support_conflict(self, extra_spec, image_prop_in, image_prop_out): # NOTE(aspiers): hw_mem_encryption image property is a # FlexibleBooleanField, so the result should always be coerced # to a boolean. self.assertIsInstance(image_prop_out, bool) image = objects.ImageMeta( name=self.image_name, properties=objects.ImageMetaProps( hw_mem_encryption=image_prop_in) ) reqspec = self._get_request_spec( extra_specs={'hw:mem_encryption': extra_spec}, image=image) # Sanity check that our test request spec has an extra_specs # dict, which is needed in order for there to be a conflict. 
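# A conflict is only possible when both sources are present: the
# flavor's hw:mem_encryption extra spec and the image's
# hw_mem_encryption property must disagree for FlavorImageConflict to
# be raised below.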
self.assertIn('flavor', reqspec) self.assertIn('extra_specs', reqspec.flavor) error = ( "Flavor %(flavor_name)s has hw:mem_encryption extra spec " "explicitly set to %(flavor_val)s, conflicting with " "image %(image_name)s which has hw_mem_encryption property " "explicitly set to %(image_val)s" ) exc = self.assertRaises( exception.FlavorImageConflict, utils.ResourceRequest, reqspec ) error_data = { 'flavor_name': self.flavor_name, 'flavor_val': extra_spec, 'image_name': self.image_name, 'image_val': image_prop_out, } self.assertEqual(error % error_data, str(exc)) def test_encrypted_memory_support_conflict1(self): for extra_spec in ('0', 'false', 'False'): for image_prop_in in ('1', 'true', 'True'): self._test_encrypted_memory_support_conflict( extra_spec, image_prop_in, True ) def test_encrypted_memory_support_conflict2(self): for extra_spec in ('1', 'true', 'True'): for image_prop_in in ('0', 'false', 'False'): self._test_encrypted_memory_support_conflict( extra_spec, image_prop_in, False ) @mock.patch.object(utils, 'LOG') def _test_encrypted_memory_support_required(self, requesters, extra_specs, mock_log, image=None): resreq = self._get_resource_request(extra_specs, image) expected = self._get_expected_resource_request(True) self.assertResourceRequestsEqual(expected, resreq) mock_log.debug.assert_has_calls([ mock.call('Added %s=1 to requested resources', orc.MEM_ENCRYPTION_CONTEXT) ]) def test_encrypted_memory_support_extra_spec(self): for extra_spec in ('1', 'true', 'True'): self._test_encrypted_memory_support_required( 'hw:mem_encryption extra spec', {'hw:mem_encryption': extra_spec}, image=objects.ImageMeta( id='005249be-3c2f-4351-9df7-29bb13c21b14', properties=objects.ImageMetaProps( hw_machine_type='q35', hw_firmware_type='uefi')) ) def test_encrypted_memory_support_image_prop(self): for image_prop in ('1', 'true', 'True'): self._test_encrypted_memory_support_required( 'hw_mem_encryption image property', {}, image=objects.ImageMeta( id='005249be-3c2f-4351-9df7-29bb13c21b14', name=self.image_name, properties=objects.ImageMetaProps( hw_machine_type='q35', hw_firmware_type='uefi', hw_mem_encryption=image_prop)) ) def test_encrypted_memory_support_both_required(self): for extra_spec in ('1', 'true', 'True'): for image_prop in ('1', 'true', 'True'): self._test_encrypted_memory_support_required( 'hw:mem_encryption extra spec and ' 'hw_mem_encryption image property', {'hw:mem_encryption': extra_spec}, image=objects.ImageMeta( id='005249be-3c2f-4351-9df7-29bb13c21b14', name=self.image_name, properties=objects.ImageMetaProps( hw_machine_type='q35', hw_firmware_type='uefi', hw_mem_encryption=image_prop)) ) class TestResourcesFromRequestGroupDefaultPolicy(test.NoDBTestCase): """These test cases assert what happens when the group policy is missing from the flavor but more than one numbered request group is requested from various sources. Note that while image can provide required traits for the resource request those traits are always added to the unnumbered group so image cannot be a source of additional numbered groups. 
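Each case below asserts both the warning that is (or is not) logged and
whether group_policy=none ends up in the generated querystring.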
""" def setUp(self): super(TestResourcesFromRequestGroupDefaultPolicy, self).setUp() self.context = nova_context.get_admin_context() self.port_group1 = objects.RequestGroup.from_port_request( self.context, uuids.port1, port_resource_request={ "resources": { "NET_BW_IGR_KILOBIT_PER_SEC": 1000, "NET_BW_EGR_KILOBIT_PER_SEC": 1000}, "required": ["CUSTOM_PHYSNET_2", "CUSTOM_VNIC_TYPE_NORMAL"] }) self.port_group2 = objects.RequestGroup.from_port_request( self.context, uuids.port2, port_resource_request={ "resources": { "NET_BW_IGR_KILOBIT_PER_SEC": 2000, "NET_BW_EGR_KILOBIT_PER_SEC": 2000}, "required": ["CUSTOM_PHYSNET_3", "CUSTOM_VNIC_TYPE_DIRECT"] }) self.image = objects.ImageMeta(properties=objects.ImageMetaProps()) def test_one_group_from_flavor_dont_warn(self): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={ 'resources1:CUSTOM_BAR': '2', }) request_spec = objects.RequestSpec( flavor=flavor, image=self.image, requested_resources=[]) rr = utils.resources_from_request_spec( self.context, request_spec, host_manager=mock.Mock()) log = self.stdlog.logger.output self.assertNotIn( "There is more than one numbered request group in the allocation " "candidate query but the flavor did not specify any group policy.", log) self.assertNotIn( "To avoid the placement failure nova defaults the group policy to " "'none'.", log) self.assertIsNone(rr.group_policy) self.assertNotIn('group_policy=none', rr.to_querystring()) def test_one_group_from_port_dont_warn(self): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={}) request_spec = objects.RequestSpec( flavor=flavor, image=self.image, requested_resources=[self.port_group1]) rr = utils.resources_from_request_spec( self.context, request_spec, host_manager=mock.Mock()) log = self.stdlog.logger.output self.assertNotIn( "There is more than one numbered request group in the allocation " "candidate query but the flavor did not specify any group policy.", log) self.assertNotIn( "To avoid the placement failure nova defaults the group policy to " "'none'.", log) self.assertIsNone(rr.group_policy) self.assertNotIn('group_policy=none', rr.to_querystring()) def test_two_groups_from_flavor_only_warns(self): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={ 'resources1:CUSTOM_BAR': '2', 'resources2:CUSTOM_FOO': '1' }) request_spec = objects.RequestSpec( flavor=flavor, image=self.image, requested_resources=[]) rr = utils.resources_from_request_spec( self.context, request_spec, host_manager=mock.Mock()) log = self.stdlog.logger.output self.assertIn( "There is more than one numbered request group in the allocation " "candidate query but the flavor did not specify any group policy.", log) self.assertNotIn( "To avoid the placement failure nova defaults the group policy to " "'none'.", log) self.assertIsNone(rr.group_policy) self.assertNotIn('group_policy', rr.to_querystring()) def test_one_group_from_flavor_one_from_port_policy_defaulted(self): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={ 'resources1:CUSTOM_BAR': '2', }) request_spec = objects.RequestSpec( flavor=flavor, image=self.image, requested_resources=[self.port_group1]) rr = utils.resources_from_request_spec( self.context, request_spec, host_manager=mock.Mock()) log = self.stdlog.logger.output self.assertIn( "There is more than one numbered request group in the allocation " "candidate query but the flavor did not 
specify any group policy.", log) self.assertIn( "To avoid the placement failure nova defaults the group policy to " "'none'.", log) self.assertEqual('none', rr.group_policy) self.assertIn('group_policy=none', rr.to_querystring()) def test_two_groups_from_ports_policy_defaulted(self): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={}) request_spec = objects.RequestSpec( flavor=flavor, image=self.image, requested_resources=[self.port_group1, self.port_group2]) rr = utils.resources_from_request_spec( self.context, request_spec, host_manager=mock.Mock()) log = self.stdlog.logger.output self.assertIn( "There is more than one numbered request group in the allocation " "candidate query but the flavor did not specify any group policy.", log) self.assertIn( "To avoid the placement failure nova defaults the group policy to " "'none'.", log) self.assertEqual('none', rr.group_policy) self.assertIn('group_policy=none', rr.to_querystring()) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5784678 nova-21.2.4/nova/tests/unit/scheduler/weights/0000775000175000017500000000000000000000000021302 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/weights/__init__.py0000664000175000017500000000000000000000000023401 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/scheduler/weights/test_cross_cell.py0000664000175000017500000001446700000000000025057 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils.fixture import uuidsentinel as uuids from nova import conf from nova import objects from nova.scheduler import weights from nova.scheduler.weights import cross_cell from nova import test from nova.tests.unit.scheduler import fakes CONF = conf.CONF class CrossCellWeigherTestCase(test.NoDBTestCase): """Tests for the FilterScheduler CrossCellWeigher.""" def setUp(self): super(CrossCellWeigherTestCase, self).setUp() self.weight_handler = weights.HostWeightHandler() self.weighers = [cross_cell.CrossCellWeigher()] def _get_weighed_hosts(self, request_spec): hosts = self._get_all_hosts() return self.weight_handler.get_weighed_objects( self.weighers, hosts, request_spec) @staticmethod def _get_all_hosts(): """Provides two hosts, one in cell1 and one in cell2.""" host_values = [ ('host1', 'node1', {'cell_uuid': uuids.cell1}), ('host2', 'node2', {'cell_uuid': uuids.cell2}), ] return [fakes.FakeHostState(host, node, values) for host, node, values in host_values] def test_get_weighed_hosts_no_requested_destination_or_cell(self): """Weights should all be 0.0 given there is no requested_destination or source cell in the RequestSpec, e.g. initial server create scenario. """ # Test the requested_destination field not being set. 
request_spec = objects.RequestSpec() weighed_hosts = self._get_weighed_hosts(request_spec) self.assertTrue(all([wh.weight == 0.0 for wh in weighed_hosts])) # Test the requested_destination field being set to None. request_spec.requested_destination = None weighed_hosts = self._get_weighed_hosts(request_spec) self.assertTrue(all([wh.weight == 0.0 for wh in weighed_hosts])) # Test the requested_destination field being set but without the # cell field set. request_spec.requested_destination = objects.Destination() weighed_hosts = self._get_weighed_hosts(request_spec) self.assertTrue(all([wh.weight == 0.0 for wh in weighed_hosts])) # Test the requested_destination field being set with the cell field # set but to None. request_spec.requested_destination = objects.Destination(cell=None) weighed_hosts = self._get_weighed_hosts(request_spec) self.assertTrue(all([wh.weight == 0.0 for wh in weighed_hosts])) def test_get_weighed_hosts_allow_cross_cell_move_false(self): """Tests the scenario that the source cell is set in the requested destination but it's not a cross cell move so the weights should all be 0.0. """ request_spec = objects.RequestSpec( requested_destination=objects.Destination( cell=objects.CellMapping(uuid=uuids.cell1))) weighed_hosts = self._get_weighed_hosts(request_spec) self.assertTrue(all([wh.weight == 0.0 for wh in weighed_hosts])) def test_get_weighed_hosts_allow_cross_cell_move_true_positive(self): """Tests a cross-cell move where the host in the source (preferred) cell should be weighed higher than the host in the other cell based on the default configuration. """ request_spec = objects.RequestSpec( requested_destination=objects.Destination( cell=objects.CellMapping(uuid=uuids.cell1), allow_cross_cell_move=True)) weighed_hosts = self._get_weighed_hosts(request_spec) multiplier = CONF.filter_scheduler.cross_cell_move_weight_multiplier self.assertEqual([multiplier, 0.0], [wh.weight for wh in weighed_hosts]) # host1 should be preferred since it's in cell1 preferred_host = weighed_hosts[0] self.assertEqual('host1', preferred_host.obj.host) def test_get_weighed_hosts_allow_cross_cell_move_true_negative(self): """Tests a cross-cell move where the host in another cell should be weighed higher than the host in the source cell because the weight value is negative. """ self.flags(cross_cell_move_weight_multiplier=-1000, group='filter_scheduler') request_spec = objects.RequestSpec( requested_destination=objects.Destination( # cell1 is the source cell cell=objects.CellMapping(uuid=uuids.cell1), allow_cross_cell_move=True)) weighed_hosts = self._get_weighed_hosts(request_spec) multiplier = CONF.filter_scheduler.cross_cell_move_weight_multiplier self.assertEqual([0.0, multiplier], [wh.weight for wh in weighed_hosts]) # host2 should be preferred since it's *not* in cell1 preferred_host = weighed_hosts[0] self.assertEqual('host2', preferred_host.obj.host) def test_multiplier(self): weigher = self.weighers[0] host1 = fakes.FakeHostState('fake-host', 'node', {}) # By default, return the cross_cell_move_weight_multiplier # configuration directly since the host is not in an aggregate. self.assertEqual( CONF.filter_scheduler.cross_cell_move_weight_multiplier, weigher.weight_multiplier(host1)) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'cross_cell_move_weight_multiplier': '-1.0'}, )] # Read the weight multiplier from aggregate metadata to override the # config. 
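# (Illustrative note, not part of the original test: operators would typically
# set this override through aggregate metadata, e.g. something like
#   openstack aggregate set --property cross_cell_move_weight_multiplier=-1.0 <aggregate>
# which is what the fake Aggregate metadata above models.)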
self.assertEqual(-1.0, weigher.weight_multiplier(host1)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/scheduler/weights/test_weights_affinity.py0000664000175000017500000002271400000000000026264 0ustar00zuulzuul00000000000000# Copyright (c) 2015 Ericsson AB # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova import objects from nova.scheduler import weights from nova.scheduler.weights import affinity from nova import test from nova.tests.unit.scheduler import fakes class SoftWeigherTestBase(test.NoDBTestCase): def setUp(self): super(SoftWeigherTestBase, self).setUp() self.weight_handler = weights.HostWeightHandler() self.weighers = [] def _get_weighed_host(self, hosts, policy): request_spec = objects.RequestSpec( instance_group=objects.InstanceGroup( policy=policy, members=['member1', 'member2', 'member3', 'member4', 'member5', 'member6', 'member7'])) return self.weight_handler.get_weighed_objects(self.weighers, hosts, request_spec)[0] def _get_all_hosts(self): host_values = [ ('host1', 'node1', {'instances': { 'member1': mock.sentinel, 'instance13': mock.sentinel }}), ('host2', 'node2', {'instances': { 'member2': mock.sentinel, 'member3': mock.sentinel, 'member4': mock.sentinel, 'member5': mock.sentinel, 'instance14': mock.sentinel }}), ('host3', 'node3', {'instances': { 'instance15': mock.sentinel }}), ('host4', 'node4', {'instances': { 'member6': mock.sentinel, 'member7': mock.sentinel, 'instance16': mock.sentinel }})] return [fakes.FakeHostState(host, node, values) for host, node, values in host_values] def _do_test(self, policy, expected_weight, expected_host): hostinfo_list = self._get_all_hosts() weighed_host = self._get_weighed_host(hostinfo_list, policy) self.assertEqual(expected_weight, weighed_host.weight) if expected_host: self.assertEqual(expected_host, weighed_host.obj.host) class SoftAffinityWeigherTestCase(SoftWeigherTestBase): def setUp(self): super(SoftAffinityWeigherTestCase, self).setUp() self.weighers = [affinity.ServerGroupSoftAffinityWeigher()] self.softaffin_weigher = affinity.ServerGroupSoftAffinityWeigher() def test_soft_affinity_weight_multiplier_by_default(self): self._do_test(policy='soft-affinity', expected_weight=1.0, expected_host='host2') def test_soft_affinity_weight_multiplier_zero_value(self): # We do not know the host, all have same weight. 
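# (Added clarification: with the multiplier forced to 0.0 every host's weight
# is 0.0, so the winning host is arbitrary; that is why _do_test below is
# called with expected_host=None and only the weight is asserted.)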
self.flags(soft_affinity_weight_multiplier=0.0, group='filter_scheduler') self._do_test(policy='soft-affinity', expected_weight=0.0, expected_host=None) def test_soft_affinity_weight_multiplier_positive_value(self): self.flags(soft_affinity_weight_multiplier=2.0, group='filter_scheduler') self._do_test(policy='soft-affinity', expected_weight=2.0, expected_host='host2') def test_soft_affinity_weight_multiplier(self): self.flags(soft_affinity_weight_multiplier=0.0, group='filter_scheduler') host_attr = {'instances': {'instance1': mock.sentinel}} host1 = fakes.FakeHostState('fake-host', 'node', host_attr) # By default, return the weight_multiplier configuration directly self.assertEqual(0.0, self.softaffin_weigher.weight_multiplier(host1)) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'soft_affinity_weight_multiplier': '2'}, )] # read the weight multiplier from metadata to override the config self.assertEqual(2.0, self.softaffin_weigher.weight_multiplier(host1)) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'soft_affinity_weight_multiplier': '2'}, ), objects.Aggregate( id=2, name='foo', hosts=['fake-host'], metadata={'soft_affinity_weight_multiplier': '1.5'}, )] # If the host is in multiple aggs and there are conflict weight values # in the metadata, we will use the min value among them self.assertEqual(1.5, self.softaffin_weigher.weight_multiplier(host1)) def test_host_with_agg(self): self.flags(soft_affinity_weight_multiplier=0.0, group='filter_scheduler') hostinfo_list = self._get_all_hosts() aggs = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'soft_affinity_weight_multiplier': '1.5'}, )] for h in hostinfo_list: h.aggregates = aggs weighed_host = self._get_weighed_host(hostinfo_list, 'soft-affinity') self.assertEqual(1.5, weighed_host.weight) self.assertEqual('host2', weighed_host.obj.host) class SoftAntiAffinityWeigherTestCase(SoftWeigherTestBase): def setUp(self): super(SoftAntiAffinityWeigherTestCase, self).setUp() self.weighers = [affinity.ServerGroupSoftAntiAffinityWeigher()] self.antiaffin_weigher = affinity.ServerGroupSoftAntiAffinityWeigher() def test_soft_anti_affinity_weight_multiplier_by_default(self): self._do_test(policy='soft-anti-affinity', expected_weight=1.0, expected_host='host3') def test_soft_anti_affinity_weight_multiplier_zero_value(self): # We do not know the host, all have same weight. 
self.flags(soft_anti_affinity_weight_multiplier=0.0, group='filter_scheduler') self._do_test(policy='soft-anti-affinity', expected_weight=0.0, expected_host=None) def test_soft_anti_affinity_weight_multiplier_positive_value(self): self.flags(soft_anti_affinity_weight_multiplier=2.0, group='filter_scheduler') self._do_test(policy='soft-anti-affinity', expected_weight=2.0, expected_host='host3') def test_soft_anti_affinity_weight_multiplier(self): self.flags(soft_anti_affinity_weight_multiplier=0.0, group='filter_scheduler') host_attr = {'instances': {'instance1': mock.sentinel}} host1 = fakes.FakeHostState('fake-host', 'node', host_attr) # By default, return the weight_multiplier configuration directly self.assertEqual(0.0, self.antiaffin_weigher.weight_multiplier(host1)) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'soft_anti_affinity_weight_multiplier': '2'}, )] # read the weight multiplier from metadata to override the config self.assertEqual(2.0, self.antiaffin_weigher.weight_multiplier(host1)) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'soft_anti_affinity_weight_multiplier': '2'}, ), objects.Aggregate( id=2, name='foo', hosts=['fake-host'], metadata={'soft_anti_affinity_weight_multiplier': '1.5'}, )] # If the host is in multiple aggs and there are conflict weight values # in the metadata, we will use the min value among them self.assertEqual(1.5, self.antiaffin_weigher.weight_multiplier(host1)) def test_host_with_agg(self): self.flags(soft_anti_affinity_weight_multiplier=0.0, group='filter_scheduler') hostinfo_list = self._get_all_hosts() aggs = [ objects.Aggregate( id=1, name='foo', hosts=['host1', 'host2', 'host3', 'host4'], metadata={'soft_anti_affinity_weight_multiplier': '1.5'}, )] for h in hostinfo_list: h.aggregates = aggs weighed_host = self._get_weighed_host(hostinfo_list, 'soft-anti-affinity') self.assertEqual(1.5, weighed_host.weight) self.assertEqual('host3', weighed_host.obj.host) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/weights/test_weights_compute.py0000664000175000017500000001103400000000000026120 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Scheduler build failure weights. 
""" from nova import objects from nova.scheduler import weights from nova.scheduler.weights import compute from nova import test from nova.tests.unit.scheduler import fakes class BuildFailureWeigherTestCase(test.NoDBTestCase): def setUp(self): super(BuildFailureWeigherTestCase, self).setUp() self.weight_handler = weights.HostWeightHandler() self.weighers = [compute.BuildFailureWeigher()] self.buildfailure_weigher = compute.BuildFailureWeigher() def _get_weighed_host(self, hosts): return self.weight_handler.get_weighed_objects(self.weighers, hosts, {}) def _get_all_hosts(self): host_values = [ ('host1', 'node1', {'failed_builds': 0}), ('host2', 'node2', {'failed_builds': 1}), ('host3', 'node3', {'failed_builds': 10}), ('host4', 'node4', {'failed_builds': 100}) ] return [fakes.FakeHostState(host, node, values) for host, node, values in host_values] def test_build_failure_weigher_disabled(self): self.flags(build_failure_weight_multiplier=0.0, group='filter_scheduler') hosts = self._get_all_hosts() weighed_hosts = self._get_weighed_host(hosts) self.assertTrue(all([wh.weight == 0.0 for wh in weighed_hosts])) def test_build_failure_weigher_scaled(self): self.flags(build_failure_weight_multiplier=1000.0, group='filter_scheduler') hosts = self._get_all_hosts() weighed_hosts = self._get_weighed_host(hosts) self.assertEqual([0, -10, -100, -1000], [wh.weight for wh in weighed_hosts]) def test_build_failure_weight_multiplier(self): self.flags(build_failure_weight_multiplier=0.0, group='filter_scheduler') host_attr = {'failed_builds': 1} host1 = fakes.FakeHostState('fake-host', 'node', host_attr) # By default, return the weight_multiplier configuration directly self.assertEqual(0.0, self.buildfailure_weigher.weight_multiplier(host1)) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'build_failure_weight_multiplier': '1000.0'}, )] # read the weight multiplier from metadata to override the config self.assertEqual(-1000, self.buildfailure_weigher.weight_multiplier(host1)) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'build_failure_weight_multiplier': '500'}, ), objects.Aggregate( id=2, name='foo', hosts=['fake-host'], metadata={'build_failure_weight_multiplier': '1000'}, )] # If the host is in multiple aggs and there are conflict weight values # in the metadata, we will use the min value among them self.assertEqual(-500, self.buildfailure_weigher.weight_multiplier(host1)) def test_host_with_agg(self): self.flags(build_failure_weight_multiplier=0.0, group='filter_scheduler') hostinfo_list = self._get_all_hosts() aggs = [ objects.Aggregate( id=1, name='foo', hosts=['host1', 'host2', 'host3', 'host4'], metadata={'build_failure_weight_multiplier': '1000'}, )] for h in hostinfo_list: h.aggregates = aggs weights = self.weight_handler.get_weighed_objects(self.weighers, hostinfo_list, {}) self.assertEqual([0, -10, -100, -1000], [wh.weight for wh in weights]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/weights/test_weights_cpu.py0000664000175000017500000001612200000000000025236 0ustar00zuulzuul00000000000000# Copyright 2016, Red Hat Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Scheduler CPU weights. """ from nova import objects from nova.scheduler import weights from nova.scheduler.weights import cpu from nova import test from nova.tests.unit.scheduler import fakes class CPUWeigherTestCase(test.NoDBTestCase): def setUp(self): super(CPUWeigherTestCase, self).setUp() self.weight_handler = weights.HostWeightHandler() self.weighers = [cpu.CPUWeigher()] self.cpu_weigher = cpu.CPUWeigher() def _get_weighed_host(self, hosts, weight_properties=None): if weight_properties is None: weight_properties = {} return self.weight_handler.get_weighed_objects(self.weighers, hosts, weight_properties)[0] def _get_all_hosts(self): host_values = [ ('host1', 'node1', {'vcpus_total': 8, 'vcpus_used': 8, 'cpu_allocation_ratio': 1.0}), # 0 free ('host2', 'node2', {'vcpus_total': 4, 'vcpus_used': 2, 'cpu_allocation_ratio': 1.0}), # 2 free ('host3', 'node3', {'vcpus_total': 6, 'vcpus_used': 0, 'cpu_allocation_ratio': 1.0}), # 6 free ('host4', 'node4', {'vcpus_total': 8, 'vcpus_used': 0, 'cpu_allocation_ratio': 2.0}), # 16 free ] return [fakes.FakeHostState(host, node, values) for host, node, values in host_values] def test_multiplier_default(self): hostinfo_list = self._get_all_hosts() # host1: vcpus_free=0 # host2: vcpus_free=2 # host3: vcpus_free=6 # host4: vcpus_free=16 # so, host4 should win: weighed_host = self._get_weighed_host(hostinfo_list) self.assertEqual(1.0, weighed_host.weight) self.assertEqual('host4', weighed_host.obj.host) def test_multiplier_none(self): self.flags(cpu_weight_multiplier=0.0, group='filter_scheduler') hostinfo_list = self._get_all_hosts() # host1: vcpus_free=0 # host2: vcpus_free=2 # host3: vcpus_free=6 # host4: vcpus_free=16 # We do not know the host, all have same weight. 
weighed_host = self._get_weighed_host(hostinfo_list) self.assertEqual(0.0, weighed_host.weight) def test_multiplier_positive(self): self.flags(cpu_weight_multiplier=2.0, group='filter_scheduler') hostinfo_list = self._get_all_hosts() # host1: vcpus_free=0 # host2: vcpus_free=2 # host3: vcpus_free=6 # host4: vcpus_free=16 # so, host4 should win: weighed_host = self._get_weighed_host(hostinfo_list) self.assertEqual(1.0 * 2, weighed_host.weight) self.assertEqual('host4', weighed_host.obj.host) def test_multiplier_negative(self): self.flags(cpu_weight_multiplier=-1.0, group='filter_scheduler') hostinfo_list = self._get_all_hosts() # host1: vcpus_free=0 # host2: vcpus_free=2 # host3: vcpus_free=6 # host4: vcpus_free=16 # so, host1 should win: weighed_host = self._get_weighed_host(hostinfo_list) self.assertEqual('host1', weighed_host.obj.host) def test_negative_host(self): self.flags(cpu_weight_multiplier=1.0, group='filter_scheduler') hostinfo_list = self._get_all_hosts() host_attr = {'vcpus_total': 4, 'vcpus_used': 6, 'cpu_allocation_ratio': 1.0} host_state = fakes.FakeHostState('negative', 'negative', host_attr) hostinfo_list = list(hostinfo_list) + [host_state] # host1: vcpus_free=0 # host2: vcpus_free=2 # host3: vcpus_free=6 # host4: vcpus_free=16 # negative: vcpus_free=-2 # so, host4 should win weights = self.weight_handler.get_weighed_objects(self.weighers, hostinfo_list, {}) weighed_host = weights[0] self.assertEqual(1, weighed_host.weight) self.assertEqual('host4', weighed_host.obj.host) # and negativehost should lose weighed_host = weights[-1] self.assertEqual(0, weighed_host.weight) self.assertEqual('negative', weighed_host.obj.host) def test_cpu_weigher_multiplier(self): self.flags(cpu_weight_multiplier=-1.0, group='filter_scheduler') host_attr = {'vcpus_total': 4, 'vcpus_used': 6, 'cpu_allocation_ratio': 1.0} host1 = fakes.FakeHostState('fake-host', 'node', host_attr) # By default, return the cpu_weight_multiplier configuration directly self.assertEqual(-1, self.cpu_weigher.weight_multiplier(host1)) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'cpu_weight_multiplier': '2'}, )] # read the weight multiplier from metadata to override the config self.assertEqual(2.0, self.cpu_weigher.weight_multiplier(host1)) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'cpu_weight_multiplier': '2'}, ), objects.Aggregate( id=2, name='foo', hosts=['fake-host'], metadata={'cpu_weight_multiplier': '1.5'}, )] # If the host is in multiple aggs and there are conflict weight values # in the metadata, we will use the min value among them self.assertEqual(1.5, self.cpu_weigher.weight_multiplier(host1)) def test_host_with_agg(self): self.flags(cpu_weight_multiplier=-1.0, group='filter_scheduler') hostinfo_list = self._get_all_hosts() aggs = [ objects.Aggregate( id=1, name='foo', hosts=['host1', 'host2', 'host3', 'host4'], metadata={'cpu_weight_multiplier': '1.5'}, )] for h in hostinfo_list: h.aggregates = aggs # host1: vcpus_free=0 # host2: vcpus_free=2 # host3: vcpus_free=6 # host4: vcpus_free=8 # so, host4 should win weights = self.weight_handler.get_weighed_objects(self.weighers, hostinfo_list, {}) weighed_host = weights[0] self.assertEqual(1.5, weighed_host.weight) self.assertEqual('host4', weighed_host.obj.host) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/nova/tests/unit/scheduler/weights/test_weights_disk.py0000664000175000017500000001465200000000000025407 0ustar00zuulzuul00000000000000# Copyright 2011-2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Scheduler disk weights. """ from nova import objects from nova.scheduler import weights from nova.scheduler.weights import disk from nova import test from nova.tests.unit.scheduler import fakes class DiskWeigherTestCase(test.NoDBTestCase): def setUp(self): super(DiskWeigherTestCase, self).setUp() self.weight_handler = weights.HostWeightHandler() self.weighers = [disk.DiskWeigher()] self.disk_weigher = disk.DiskWeigher() def _get_weighed_host(self, hosts, weight_properties=None): if weight_properties is None: weight_properties = {} return self.weight_handler.get_weighed_objects(self.weighers, hosts, weight_properties)[0] def _get_all_hosts(self): host_values = [ ('host1', 'node1', {'free_disk_mb': 5120}), ('host2', 'node2', {'free_disk_mb': 10240}), ('host3', 'node3', {'free_disk_mb': 30720}), ('host4', 'node4', {'free_disk_mb': 81920}) ] return [fakes.FakeHostState(host, node, values) for host, node, values in host_values] def test_default_of_spreading_first(self): hostinfo_list = self._get_all_hosts() # host1: free_disk_mb=5120 # host2: free_disk_mb=10240 # host3: free_disk_mb=30720 # host4: free_disk_mb=81920 # so, host4 should win: weighed_host = self._get_weighed_host(hostinfo_list) self.assertEqual(1.0, weighed_host.weight) self.assertEqual('host4', weighed_host.obj.host) def test_disk_filter_multiplier1(self): self.flags(disk_weight_multiplier=0.0, group='filter_scheduler') hostinfo_list = self._get_all_hosts() # host1: free_disk_mb=5120 # host2: free_disk_mb=10240 # host3: free_disk_mb=30720 # host4: free_disk_mb=81920 # We do not know the host, all have same weight. 
weighed_host = self._get_weighed_host(hostinfo_list) self.assertEqual(0.0, weighed_host.weight) def test_disk_filter_multiplier2(self): self.flags(disk_weight_multiplier=2.0, group='filter_scheduler') hostinfo_list = self._get_all_hosts() # host1: free_disk_mb=5120 # host2: free_disk_mb=10240 # host3: free_disk_mb=30720 # host4: free_disk_mb=81920 # so, host4 should win: weighed_host = self._get_weighed_host(hostinfo_list) self.assertEqual(1.0 * 2, weighed_host.weight) self.assertEqual('host4', weighed_host.obj.host) def test_disk_filter_negative(self): self.flags(disk_weight_multiplier=1.0, group='filter_scheduler') hostinfo_list = self._get_all_hosts() host_attr = {'id': 100, 'disk_mb': 81920, 'free_disk_mb': -5120} host_state = fakes.FakeHostState('negative', 'negative', host_attr) hostinfo_list = list(hostinfo_list) + [host_state] # host1: free_disk_mb=5120 # host2: free_disk_mb=10240 # host3: free_disk_mb=30720 # host4: free_disk_mb=81920 # negativehost: free_disk_mb=-5120 # so, host4 should win weights = self.weight_handler.get_weighed_objects(self.weighers, hostinfo_list, {}) weighed_host = weights[0] self.assertEqual(1, weighed_host.weight) self.assertEqual('host4', weighed_host.obj.host) # and negativehost should lose weighed_host = weights[-1] self.assertEqual(0, weighed_host.weight) self.assertEqual('negative', weighed_host.obj.host) def test_disk_weigher_multiplier(self): self.flags(disk_weight_multiplier=-1.0, group='filter_scheduler') host_attr = {'free_disk_mb': 5120} host1 = fakes.FakeHostState('fake-host', 'node', host_attr) # By default, return the weight_multiplier configuration directly self.assertEqual(-1, self.disk_weigher.weight_multiplier(host1)) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'disk_weight_multiplier': '2'}, )] # read the weight multiplier from metadata to override the config self.assertEqual(2.0, self.disk_weigher.weight_multiplier(host1)) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'disk_weight_multiplier': '2'}, ), objects.Aggregate( id=2, name='foo', hosts=['fake-host'], metadata={'disk_weight_multiplier': '1.5'}, )] # If the host is in multiple aggs and there are conflict weight values # in the metadata, we will use the min value among them self.assertEqual(1.5, self.disk_weigher.weight_multiplier(host1)) def test_host_with_agg(self): self.flags(disk_weight_multiplier=-1.0, group='filter_scheduler') hostinfo_list = self._get_all_hosts() aggs = [ objects.Aggregate( id=1, name='foo', hosts=['host1', 'host2', 'host3', 'host4'], metadata={'disk_weight_multiplier': '1.5'}, )] for h in hostinfo_list: h.aggregates = aggs # host1: free_disk_mb=5120 # host2: free_disk_mb=10240 # host3: free_disk_mb=30720 # host4: free_disk_mb=81920 # so, host4 should win: weights = self.weight_handler.get_weighed_objects(self.weighers, hostinfo_list, {}) weighed_host = weights[0] self.assertEqual(1.0 * 1.5, weighed_host.weight) self.assertEqual('host4', weighed_host.obj.host) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/weights/test_weights_hosts.py0000664000175000017500000000325000000000000025605 0ustar00zuulzuul00000000000000# Copyright 2011-2014 IBM # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Scheduler weights. """ from nova.scheduler import weights from nova.scheduler.weights import affinity from nova.scheduler.weights import io_ops from nova.scheduler.weights import metrics from nova.scheduler.weights import ram from nova import test from nova.tests.unit import matchers from nova.tests.unit.scheduler import fakes class TestWeighedHost(test.NoDBTestCase): def test_dict_conversion(self): host_state = fakes.FakeHostState('somehost', None, {}) host = weights.WeighedHost(host_state, 'someweight') expected = {'weight': 'someweight', 'host': 'somehost'} self.assertThat(host.to_dict(), matchers.DictMatches(expected)) def test_all_weighers(self): classes = weights.all_weighers() self.assertIn(ram.RAMWeigher, classes) self.assertIn(metrics.MetricsWeigher, classes) self.assertIn(io_ops.IoOpsWeigher, classes) self.assertIn(affinity.ServerGroupSoftAffinityWeigher, classes) self.assertIn(affinity.ServerGroupSoftAntiAffinityWeigher, classes) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/weights/test_weights_ioopsweight.py0000664000175000017500000001173600000000000027016 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Tests For Scheduler IoOpsWeigher weights """ from nova import objects from nova.scheduler import weights from nova.scheduler.weights import io_ops from nova import test from nova.tests.unit.scheduler import fakes class IoOpsWeigherTestCase(test.NoDBTestCase): def setUp(self): super(IoOpsWeigherTestCase, self).setUp() self.weight_handler = weights.HostWeightHandler() self.weighers = [io_ops.IoOpsWeigher()] self.ioops_weigher = io_ops.IoOpsWeigher() def _get_weighed_host(self, hosts, io_ops_weight_multiplier): if io_ops_weight_multiplier is not None: self.flags(io_ops_weight_multiplier=io_ops_weight_multiplier, group='filter_scheduler') return self.weight_handler.get_weighed_objects(self.weighers, hosts, {})[0] def _get_all_hosts(self): host_values = [ ('host1', 'node1', {'num_io_ops': 1}), ('host2', 'node2', {'num_io_ops': 2}), ('host3', 'node3', {'num_io_ops': 0}), ('host4', 'node4', {'num_io_ops': 4}) ] return [fakes.FakeHostState(host, node, values) for host, node, values in host_values] def _do_test(self, io_ops_weight_multiplier, expected_weight, expected_host): hostinfo_list = self._get_all_hosts() weighed_host = self._get_weighed_host(hostinfo_list, io_ops_weight_multiplier) self.assertEqual(weighed_host.weight, expected_weight) if expected_host: self.assertEqual(weighed_host.obj.host, expected_host) def test_io_ops_weight_multiplier_by_default(self): self._do_test(io_ops_weight_multiplier=None, expected_weight=0.0, expected_host='host3') def test_io_ops_weight_multiplier_zero_value(self): # We do not know the host, all have same weight. self._do_test(io_ops_weight_multiplier=0.0, expected_weight=0.0, expected_host=None) def test_io_ops_weight_multiplier_positive_value(self): self._do_test(io_ops_weight_multiplier=2.0, expected_weight=2.0, expected_host='host4') def test_io_ops_weight_multiplier(self): self.flags(io_ops_weight_multiplier=0.0, group='filter_scheduler') host_attr = {'num_io_ops': 1} host1 = fakes.FakeHostState('fake-host', 'node', host_attr) # By default, return the weight_multiplier configuration directly self.assertEqual(0.0, self.ioops_weigher.weight_multiplier(host1)) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'io_ops_weight_multiplier': '1'}, )] # read the weight multiplier from metadata to override the config self.assertEqual(1.0, self.ioops_weigher.weight_multiplier(host1)) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'io_ops_weight_multiplier': '1'}, ), objects.Aggregate( id=2, name='foo', hosts=['fake-host'], metadata={'io_ops_weight_multiplier': '0.5'}, )] # If the host is in multiple aggs and there are conflict weight values # in the metadata, we will use the min value among them self.assertEqual(0.5, self.ioops_weigher.weight_multiplier(host1)) def test_host_with_agg(self): self.flags(io_ops_weight_multiplier=0.0, group='filter_scheduler') hostinfo_list = self._get_all_hosts() aggs = [ objects.Aggregate( id=1, name='foo', hosts=['host1', 'host2', 'host3', 'host4'], metadata={'io_ops_weight_multiplier': '1.5'}, )] for h in hostinfo_list: h.aggregates = aggs weights = self.weight_handler.get_weighed_objects(self.weighers, hostinfo_list, {}) weighed_host = weights[0] self.assertEqual(1.0 * 1.5, weighed_host.weight) self.assertEqual('host4', weighed_host.obj.host) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/nova/tests/unit/scheduler/weights/test_weights_metrics.py0000664000175000017500000002310600000000000026115 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Scheduler metrics weights. """ from nova import exception from nova import objects from nova.objects import fields from nova.objects import monitor_metric from nova.scheduler import weights from nova.scheduler.weights import metrics from nova import test from nova.tests.unit.scheduler import fakes idle = fields.MonitorMetricType.CPU_IDLE_TIME kernel = fields.MonitorMetricType.CPU_KERNEL_TIME user = fields.MonitorMetricType.CPU_USER_TIME def fake_metric(name, value): return monitor_metric.MonitorMetric(name=name, value=value) def fake_list(objs): m_list = [fake_metric(name, val) for name, val in objs] return monitor_metric.MonitorMetricList(objects=m_list) class MetricsWeigherTestCase(test.NoDBTestCase): def setUp(self): super(MetricsWeigherTestCase, self).setUp() self.weight_handler = weights.HostWeightHandler() self.weighers = [metrics.MetricsWeigher()] self.metrics_weigher = metrics.MetricsWeigher() def _get_weighed_host(self, hosts, setting, weight_properties=None): if not weight_properties: weight_properties = {} self.flags(weight_setting=setting, group='metrics') self.weighers[0]._parse_setting() return self.weight_handler.get_weighed_objects(self.weighers, hosts, weight_properties)[0] def _get_all_hosts(self): host_values = [ ('host1', 'node1', {'metrics': fake_list([(idle, 512), (kernel, 1)])}), ('host2', 'node2', {'metrics': fake_list([(idle, 1024), (kernel, 2)])}), ('host3', 'node3', {'metrics': fake_list([(idle, 3072), (kernel, 1)])}), ('host4', 'node4', {'metrics': fake_list([(idle, 8192), (kernel, 0)])}), ('host5', 'node5', {'metrics': fake_list([(idle, 768), (kernel, 0), (user, 1)])}), ('host6', 'node6', {'metrics': fake_list([(idle, 2048), (kernel, 0), (user, 2)])}), ] return [fakes.FakeHostState(host, node, values) for host, node, values in host_values] def _do_test(self, settings, expected_weight, expected_host): hostinfo_list = self._get_all_hosts() weighed_host = self._get_weighed_host(hostinfo_list, settings) self.assertEqual(expected_weight, weighed_host.weight) self.assertEqual(expected_host, weighed_host.obj.host) def test_single_resource_no_metrics(self): setting = [idle + '=1'] hostinfo_list = [fakes.FakeHostState('host1', 'node1', {'metrics': None}), fakes.FakeHostState('host2', 'node2', {'metrics': None})] self.assertRaises(exception.ComputeHostMetricNotFound, self._get_weighed_host, hostinfo_list, setting) def test_single_resource(self): # host1: idle=512 # host2: idle=1024 # host3: idle=3072 # host4: idle=8192 # so, host4 should win: setting = [idle + '=1'] self._do_test(setting, 1.0, 'host4') def test_multiple_resource(self): # host1: idle=512, kernel=1 # host2: idle=1024, kernel=2 # host3: idle=3072, kernel=1 # host4: idle=8192, kernel=0 # so, host2 should win: setting = [idle + '=0.0001', kernel + '=1'] self._do_test(setting, 1.0, 'host2') def 
test_single_resource_duplicate_setting(self): # host1: idle=512 # host2: idle=1024 # host3: idle=3072 # host4: idle=8192 # so, host1 should win (sum of settings is negative): setting = [idle + '=-2', idle + '=1'] self._do_test(setting, 1.0, 'host1') def test_single_resourcenegtive_ratio(self): # host1: idle=512 # host2: idle=1024 # host3: idle=3072 # host4: idle=8192 # so, host1 should win: setting = [idle + '=-1'] self._do_test(setting, 1.0, 'host1') def test_multiple_resource_missing_ratio(self): # host1: idle=512, kernel=1 # host2: idle=1024, kernel=2 # host3: idle=3072, kernel=1 # host4: idle=8192, kernel=0 # so, host4 should win: setting = [idle + '=0.0001', kernel] self._do_test(setting, 1.0, 'host4') def test_multiple_resource_wrong_ratio(self): # host1: idle=512, kernel=1 # host2: idle=1024, kernel=2 # host3: idle=3072, kernel=1 # host4: idle=8192, kernel=0 # so, host4 should win: setting = [idle + '=0.0001', kernel + ' = 2.0t'] self._do_test(setting, 1.0, 'host4') def _check_parsing_result(self, weigher, setting, results): self.flags(weight_setting=setting, group='metrics') weigher._parse_setting() self.assertEqual(len(weigher.setting), len(results)) for item in results: self.assertIn(item, weigher.setting) def test_parse_setting(self): weigher = self.weighers[0] self._check_parsing_result(weigher, [idle + '=1'], [(idle, 1.0)]) self._check_parsing_result(weigher, [idle + '=1', kernel + '=-2.1'], [(idle, 1.0), (kernel, -2.1)]) self._check_parsing_result(weigher, [idle + '=a1', kernel + '=-2.1'], [(kernel, -2.1)]) self._check_parsing_result(weigher, [idle, kernel + '=-2.1'], [(kernel, -2.1)]) self._check_parsing_result(weigher, ['=5', kernel + '=-2.1'], [(kernel, -2.1)]) def test_metric_not_found_required(self): setting = [idle + '=1', user + '=2'] self.assertRaises(exception.ComputeHostMetricNotFound, self._do_test, setting, 8192, 'host4') def test_metric_not_found_non_required(self): # host1: idle=512, kernel=1 # host2: idle=1024, kernel=2 # host3: idle=3072, kernel=1 # host4: idle=8192, kernel=0 # host5: idle=768, kernel=0, user=1 # host6: idle=2048, kernel=0, user=2 # so, host5 should win: self.flags(required=False, group='metrics') setting = [idle + '=0.0001', user + '=-1'] self._do_test(setting, 1.0, 'host5') def test_metrics_weigher_multiplier(self): self.flags(weight_multiplier=-1.0, group='metrics') host_attr = {'metrics': fake_list([(idle, 512), (kernel, 1)])} host1 = fakes.FakeHostState('fake-host', 'node', host_attr) # By default, return the weight_multiplier configuration directly self.assertEqual(-1, self.metrics_weigher.weight_multiplier(host1)) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'metrics_weight_multiplier': '2'}, )] # read the weight multiplier from metadata to override the config self.assertEqual(2.0, self.metrics_weigher.weight_multiplier(host1)) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'metrics_weight_multiplier': '2'}, ), objects.Aggregate( id=2, name='foo', hosts=['fake-host'], metadata={'metrics_weight_multiplier': '1.5'}, )] # If the host is in multiple aggs and there are conflict weight values # in the metadata, we will use the min value among them self.assertEqual(1.5, self.metrics_weigher.weight_multiplier(host1)) def test_host_with_agg(self): # host1: idle=512, kernel=1 # host2: idle=1024, kernel=2 # host3: idle=3072, kernel=1 # host4: idle=8192, kernel=0 # so, host4 should win: setting = [idle + '=0.0001', kernel] hostinfo_list = self._get_all_hosts() 
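# (Added note describing the behaviour exercised here: the aggregate metadata
# key mirrors the weigher name, 'metrics_weight_multiplier', and when present
# it overrides the [metrics]/weight_multiplier option for every host in the
# aggregate, as already shown in test_metrics_weigher_multiplier above.)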
aggs = [ objects.Aggregate( id=1, name='foo', hosts=['host1', 'host2', 'host3', 'host4', 'host5', 'host6'], metadata={'metrics_weight_multiplier': '1.5'}, )] for h in hostinfo_list: h.aggregates = aggs weighed_host = self._get_weighed_host(hostinfo_list, setting) self.assertEqual(1.5, weighed_host.weight) self.assertEqual('host4', weighed_host.obj.host) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/scheduler/weights/test_weights_pci.py0000664000175000017500000002142300000000000025222 0ustar00zuulzuul00000000000000# Copyright (c) 2016, Red Hat Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for Scheduler PCI weights.""" import copy from nova import objects from nova.pci import stats from nova.scheduler import weights from nova.scheduler.weights import pci from nova import test from nova.tests.unit import fake_pci_device_pools as fake_pci from nova.tests.unit.scheduler import fakes def _create_pci_pool(count): test_dict = copy.copy(fake_pci.fake_pool_dict) test_dict['count'] = count return objects.PciDevicePool.from_dict(test_dict) def _create_pci_stats(counts): if counts is None: # the pci_stats column is nullable return None pools = [_create_pci_pool(count) for count in counts] return stats.PciDeviceStats(pools) class PCIWeigherTestCase(test.NoDBTestCase): def setUp(self): super(PCIWeigherTestCase, self).setUp() self.weight_handler = weights.HostWeightHandler() self.weighers = [pci.PCIWeigher()] self.pci_weigher = pci.PCIWeigher() def _get_weighed_hosts(self, hosts, request_spec): return self.weight_handler.get_weighed_objects(self.weighers, hosts, request_spec) def _get_all_hosts(self, host_values): return [fakes.FakeHostState( host, node, {'pci_stats': _create_pci_stats(values)}) for host, node, values in host_values] def test_multiplier_no_pci_empty_hosts(self): """Test weigher with a no PCI device instance on no PCI device hosts. Ensure that the host with no PCI devices receives the highest weighting. """ hosts = [ ('host1', 'node1', [3, 1]), # 4 devs ('host2', 'node2', []), # 0 devs ] hostinfo_list = self._get_all_hosts(hosts) # we don't request PCI devices spec_obj = objects.RequestSpec(pci_requests=None) # host2, which has the least PCI devices, should win weighed_host = self._get_weighed_hosts(hostinfo_list, spec_obj)[0] self.assertEqual(1.0, weighed_host.weight) self.assertEqual('host2', weighed_host.obj.host) def test_multiplier_no_pci_non_empty_hosts(self): """Test weigher with a no PCI device instance on PCI device hosts. Ensure that the host with the least PCI devices receives the highest weighting. 
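(Added rationale, paraphrasing the weigher's documented intent: preferring the PCI-poor host for instances that do not request PCI devices keeps PCI-rich hosts free for instances that do, i.e. the stacking behaviour described in test_multiplier_with_pci below.)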
""" hosts = [ ('host1', 'node1', [2, 2, 2]), # 6 devs ('host2', 'node2', [3, 1]), # 4 devs ] hostinfo_list = self._get_all_hosts(hosts) # we don't request PCI devices spec_obj = objects.RequestSpec(pci_requests=None) # host2, which has the least free PCI devices, should win weighed_host = self._get_weighed_hosts(hostinfo_list, spec_obj)[0] self.assertEqual(1.0, weighed_host.weight) self.assertEqual('host2', weighed_host.obj.host) def test_multiplier_with_pci(self): """Test weigher with a PCI device instance and a multiplier. Ensure that the host with the smallest number of free PCI devices capable of meeting the requirements of the instance is chosen, enforcing a stacking (rather than spreading) behavior. """ # none of the hosts will have less than the number of devices required # by the instance: the NUMATopologyFilter takes care of this for us hosts = [ ('host1', 'node1', [4, 1]), # 5 devs ('host2', 'node2', [10]), # 10 devs ('host3', 'node3', [1, 1, 1, 1]), # 4 devs ] hostinfo_list = self._get_all_hosts(hosts) # we request PCI devices request = objects.InstancePCIRequest(count=4, spec=[{'vendor_id': '8086'}]) requests = objects.InstancePCIRequests(requests=[request]) spec_obj = objects.RequestSpec(pci_requests=requests) # host3, which has the least free PCI devices, should win weighed_host = self._get_weighed_hosts(hostinfo_list, spec_obj)[0] self.assertEqual(1.0, weighed_host.weight) self.assertEqual('host3', weighed_host.obj.host) def test_multiplier_with_many_pci(self): """Test weigher with a PCI device instance and huge hosts. Ensure that the weigher gracefully degrades when the number of PCI devices on the host exceeeds MAX_DEVS. """ hosts = [ ('host1', 'node1', [500]), # 500 devs ('host2', 'node2', [2000]), # 2000 devs ] hostinfo_list = self._get_all_hosts(hosts) # we request PCI devices request = objects.InstancePCIRequest(count=4, spec=[{'vendor_id': '8086'}]) requests = objects.InstancePCIRequests(requests=[request]) spec_obj = objects.RequestSpec(pci_requests=requests) # we do not know the host as all have same weight weighed_hosts = self._get_weighed_hosts(hostinfo_list, spec_obj) for weighed_host in weighed_hosts: # the weigher normalizes all weights to 0 if they're all equal self.assertEqual(0.0, weighed_host.weight) def test_multiplier_none(self): """Test weigher with a PCI device instance and a 0.0 multiplier. Ensure that the 0.0 multiplier disables the weigher entirely. 
""" self.flags(pci_weight_multiplier=0.0, group='filter_scheduler') hosts = [ ('host1', 'node1', [4, 1]), # 5 devs ('host2', 'node2', [10]), # 10 devs ('host3', 'node3', [1, 1, 1, 1]), # 4 devs ] hostinfo_list = self._get_all_hosts(hosts) request = objects.InstancePCIRequest(count=1, spec=[{'vendor_id': '8086'}]) requests = objects.InstancePCIRequests(requests=[request]) spec_obj = objects.RequestSpec(pci_requests=requests) # we do not know the host as all have same weight weighed_hosts = self._get_weighed_hosts(hostinfo_list, spec_obj) for weighed_host in weighed_hosts: # the weigher normalizes all weights to 0 if they're all equal self.assertEqual(0.0, weighed_host.weight) def test_pci_weigher_multiplier(self): self.flags(pci_weight_multiplier=0.0, group='filter_scheduler') hosts = [500] host1 = fakes.FakeHostState( 'fake-host', 'node', {'pci_stats': _create_pci_stats(hosts)}) # By default, return the weight_multiplier configuration directly self.assertEqual(0.0, self.pci_weigher.weight_multiplier(host1)) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'pci_weight_multiplier': '2'}, )] # read the weight multiplier from metadata to override the config self.assertEqual(2.0, self.pci_weigher.weight_multiplier(host1)) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'pci_weight_multiplier': '2'}, ), objects.Aggregate( id=2, name='foo', hosts=['fake-host'], metadata={'pci_weight_multiplier': '1.5'}, )] # If the host is in multiple aggs and there are conflict weight values # in the metadata, we will use the min value among them self.assertEqual(1.5, self.pci_weigher.weight_multiplier(host1)) def test_host_with_agg(self): self.flags(pci_weight_multiplier=0.0, group='filter_scheduler') hosts = [ ('host1', 'node1', [2, 2, 2]), # 6 devs ('host2', 'node2', [3, 1]), # 4 devs ] hostinfo_list = self._get_all_hosts(hosts) aggs = [ objects.Aggregate( id=1, name='foo', hosts=['host1', 'host2'], metadata={'pci_weight_multiplier': '1.5'}, )] for h in hostinfo_list: h.aggregates = aggs spec_obj = objects.RequestSpec(pci_requests=None) # host2, which has the least free PCI devices, should win weighed_host = self._get_weighed_hosts(hostinfo_list, spec_obj)[0] self.assertEqual(1.5, weighed_host.weight) self.assertEqual('host2', weighed_host.obj.host) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/scheduler/weights/test_weights_ram.py0000664000175000017500000001453500000000000025234 0ustar00zuulzuul00000000000000# Copyright 2011-2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Scheduler RAM weights. 
""" from nova import objects from nova.scheduler import weights from nova.scheduler.weights import ram from nova import test from nova.tests.unit.scheduler import fakes class RamWeigherTestCase(test.NoDBTestCase): def setUp(self): super(RamWeigherTestCase, self).setUp() self.weight_handler = weights.HostWeightHandler() self.weighers = [ram.RAMWeigher()] self.ram_weigher = ram.RAMWeigher() def _get_weighed_host(self, hosts, weight_properties=None): if weight_properties is None: weight_properties = {} return self.weight_handler.get_weighed_objects(self.weighers, hosts, weight_properties)[0] def _get_all_hosts(self): host_values = [ ('host1', 'node1', {'free_ram_mb': 512}), ('host2', 'node2', {'free_ram_mb': 1024}), ('host3', 'node3', {'free_ram_mb': 3072}), ('host4', 'node4', {'free_ram_mb': 8192}) ] return [fakes.FakeHostState(host, node, values) for host, node, values in host_values] def test_default_of_spreading_first(self): hostinfo_list = self._get_all_hosts() # host1: free_ram_mb=512 # host2: free_ram_mb=1024 # host3: free_ram_mb=3072 # host4: free_ram_mb=8192 # so, host4 should win: weighed_host = self._get_weighed_host(hostinfo_list) self.assertEqual(1.0, weighed_host.weight) self.assertEqual('host4', weighed_host.obj.host) def test_ram_filter_multiplier1(self): self.flags(ram_weight_multiplier=0.0, group='filter_scheduler') hostinfo_list = self._get_all_hosts() # host1: free_ram_mb=512 # host2: free_ram_mb=1024 # host3: free_ram_mb=3072 # host4: free_ram_mb=8192 # We do not know the host, all have same weight. weighed_host = self._get_weighed_host(hostinfo_list) self.assertEqual(0.0, weighed_host.weight) def test_ram_filter_multiplier2(self): self.flags(ram_weight_multiplier=2.0, group='filter_scheduler') hostinfo_list = self._get_all_hosts() # host1: free_ram_mb=512 # host2: free_ram_mb=1024 # host3: free_ram_mb=3072 # host4: free_ram_mb=8192 # so, host4 should win: weighed_host = self._get_weighed_host(hostinfo_list) self.assertEqual(1.0 * 2, weighed_host.weight) self.assertEqual('host4', weighed_host.obj.host) def test_ram_filter_negative(self): self.flags(ram_weight_multiplier=1.0, group='filter_scheduler') hostinfo_list = self._get_all_hosts() host_attr = {'id': 100, 'memory_mb': 8192, 'free_ram_mb': -512} host_state = fakes.FakeHostState('negative', 'negative', host_attr) hostinfo_list = list(hostinfo_list) + [host_state] # host1: free_ram_mb=512 # host2: free_ram_mb=1024 # host3: free_ram_mb=3072 # host4: free_ram_mb=8192 # negativehost: free_ram_mb=-512 # so, host4 should win weights = self.weight_handler.get_weighed_objects(self.weighers, hostinfo_list, {}) weighed_host = weights[0] self.assertEqual(1, weighed_host.weight) self.assertEqual('host4', weighed_host.obj.host) # and negativehost should lose weighed_host = weights[-1] self.assertEqual(0, weighed_host.weight) self.assertEqual('negative', weighed_host.obj.host) def test_ram_weigher_multiplier(self): self.flags(ram_weight_multiplier=-1.0, group='filter_scheduler') host_attr = {'free_ram_mb': 5120} host1 = fakes.FakeHostState('fake-host', 'node', host_attr) # By default, return the weight_multiplier configuration directly self.assertEqual(-1, self.ram_weigher.weight_multiplier(host1)) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'ram_weight_multiplier': '2'}, )] # read the weight multiplier from metadata to override the config self.assertEqual(2.0, self.ram_weigher.weight_multiplier(host1)) host1.aggregates = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], 
metadata={'ram_weight_multiplier': '2'}, ), objects.Aggregate( id=2, name='foo', hosts=['fake-host'], metadata={'ram_weight_multiplier': '1.5'}, )] # If the host is in multiple aggs and there are conflict weight values # in the metadata, we will use the min value among them self.assertEqual(1.5, self.ram_weigher.weight_multiplier(host1)) def test_host_with_agg(self): self.flags(ram_weight_multiplier=-1.0, group='filter_scheduler') hostinfo_list = self._get_all_hosts() aggs = [ objects.Aggregate( id=1, name='foo', hosts=['host1', 'host2', 'host3', 'host4'], metadata={'ram_weight_multiplier': '1.5'}, )] for h in hostinfo_list: h.aggregates = aggs # host1: free_ram_mb=512 # host2: free_ram_mb=1024 # host3: free_ram_mb=3072 # host4: free_ram_mb=8192 # so, host4 should win: weights = self.weight_handler.get_weighed_objects(self.weighers, hostinfo_list, {}) weighed_host = weights[0] self.assertEqual(1.0 * 1.5, weighed_host.weight) self.assertEqual('host4', weighed_host.obj.host) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5784678 nova-21.2.4/nova/tests/unit/servicegroup/0000775000175000017500000000000000000000000020367 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/servicegroup/__init__.py0000664000175000017500000000000000000000000022466 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/servicegroup/test_api.py0000664000175000017500000000465100000000000022557 0ustar00zuulzuul00000000000000# Copyright 2015 Intel Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
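# NOTE: nova.servicegroup.API is a thin facade over a pluggable driver.  The
# sketch below is illustrative only (hypothetical names, not the actual Nova
# implementation); it models the two behaviours the tests in this module
# assert: join() delegates straight to the driver, and a forced-down service
# is reported down without the driver ever being consulted.

class _IllustrativeServiceGroupAPI(object):
    """Minimal stand-in for the servicegroup API facade (sketch only)."""

    def __init__(self, driver):
        self._driver = driver

    def join(self, member, group, service=None):
        # Membership registration is delegated to the configured driver.
        return self._driver.join(member, group, service)

    def service_is_up(self, member):
        # A service an operator has forced down is always reported as down;
        # the driver's is_up() is never called in that case, which is what
        # test_service_is_up asserts with assert_not_called().
        if member.get('forced_down'):
            return False
        return self._driver.is_up(member)
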
""" Test the base class for the servicegroup API """ import mock from nova import servicegroup from nova import test class ServiceGroupApiTestCase(test.NoDBTestCase): def setUp(self): super(ServiceGroupApiTestCase, self).setUp() self.flags(servicegroup_driver='db') self.servicegroup_api = servicegroup.API() self.driver = self.servicegroup_api._driver def test_join(self): """""" member = {'host': "fake-host", "topic": "compute"} group = "group" self.driver.join = mock.MagicMock(return_value=None) result = self.servicegroup_api.join(member, group) self.assertIsNone(result) self.driver.join.assert_called_with(member, group, None) def test_service_is_up(self): """""" member = {"host": "fake-host", "topic": "compute", "forced_down": False} for retval in (True, False): driver = self.servicegroup_api._driver driver.is_up = mock.MagicMock(return_value=retval) result = self.servicegroup_api.service_is_up(member) self.assertIs(result, retval) driver.is_up.assert_called_with(member) member["forced_down"] = True driver = self.servicegroup_api._driver driver.is_up = mock.MagicMock() result = self.servicegroup_api.service_is_up(member) self.assertIs(result, False) driver.is_up.assert_not_called() def test_get_updated_time(self): member = {"host": "fake-host", "topic": "compute", "forced_down": False} retval = "2016-11-02T22:40:31.000000" driver = self.servicegroup_api._driver driver.updated_time = mock.MagicMock(return_value=retval) result = self.servicegroup_api.get_updated_time(member) self.assertEqual(retval, result) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/servicegroup/test_db_servicegroup.py0000664000175000017500000001154400000000000025167 0ustar00zuulzuul00000000000000# Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import oslo_messaging as messaging from oslo_utils import fixture as utils_fixture from oslo_utils import timeutils from nova import exception from nova import objects from nova import servicegroup from nova import test class DBServiceGroupTestCase(test.NoDBTestCase): def setUp(self): super(DBServiceGroupTestCase, self).setUp() self.down_time = 15 self.flags(service_down_time=self.down_time, servicegroup_driver='db') self.servicegroup_api = servicegroup.API() def test_is_up(self): now = timeutils.utcnow() service = objects.Service( host='fake-host', topic='compute', binary='nova-compute', created_at=now, updated_at=now, last_seen_up=now, forced_down=False, ) time_fixture = self.useFixture(utils_fixture.TimeFixture(now)) # Up (equal) result = self.servicegroup_api.service_is_up(service) self.assertTrue(result) # Up time_fixture.advance_time_seconds(self.down_time) result = self.servicegroup_api.service_is_up(service) self.assertTrue(result) # Down time_fixture.advance_time_seconds(1) result = self.servicegroup_api.service_is_up(service) self.assertFalse(result) # "last_seen_up" says down, "updated_at" says up. 
# This can happen if we do a service disable/enable while it's down. service.updated_at = timeutils.utcnow() result = self.servicegroup_api.service_is_up(service) self.assertFalse(result) # "last_seen_up" is none before compute node reports its status, # just use 'created_at' as last_heartbeat. service.last_seen_up = None service.created_at = timeutils.utcnow() result = self.servicegroup_api.service_is_up(service) self.assertTrue(result) def test_join(self): service = mock.MagicMock(report_interval=1) self.servicegroup_api.join('fake-host', 'fake-topic', service) fn = self.servicegroup_api._driver._report_state service.tg.add_timer_args.assert_called_once_with( 1, fn, args=[service], initial_delay=5) @mock.patch.object(objects.Service, 'save') def test_report_state(self, upd_mock): service_ref = objects.Service(host='fake-host', topic='compute', report_count=10) service = mock.MagicMock(model_disconnected=False, service_ref=service_ref) fn = self.servicegroup_api._driver._report_state fn(service) upd_mock.assert_called_once_with() self.assertEqual(11, service_ref.report_count) self.assertFalse(service.model_disconnected) @mock.patch.object(objects.Service, 'save') def _test_report_state_error(self, exc_cls, upd_mock): upd_mock.side_effect = exc_cls("service save failed") service_ref = objects.Service(host='fake-host', topic='compute', report_count=10, binary='nova-compute') service = mock.MagicMock(model_disconnected=False, service_ref=service_ref) fn = self.servicegroup_api._driver._report_state fn(service) # fail if exception not caught self.assertTrue(service.model_disconnected) return service_ref def test_report_state_error_handling_timeout(self): self._test_report_state_error(messaging.MessagingTimeout) def test_report_state_unexpected_error(self): self._test_report_state_error(RuntimeError) def test_report_state_service_not_found(self): service_ref = self._test_report_state_error(exception.ServiceNotFound) self.assertIn('The services table record for the %s service on ' 'host %s is gone.' % (service_ref.binary, service_ref.host), self.stdlog.logger.output) def test_get_updated_time(self): retval = "2016-11-02T22:40:31.000000" service_ref = { 'host': 'fake-host', 'topic': 'compute', 'updated_at': retval } result = self.servicegroup_api.get_updated_time(service_ref) self.assertEqual(retval, result) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/servicegroup/test_mc_servicegroup.py0000664000175000017500000001052400000000000025176 0ustar00zuulzuul00000000000000# Copyright (c) 2013 Akira Yoshiyama # # This is derived from test_db_servicegroup.py. # Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
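# NOTE: the 'mc' servicegroup driver tested below tracks liveness with
# memcached keys of the form "<topic>:<host>" (e.g. 'compute:fake-host').
# The sketch below is illustrative only (hypothetical names, not the real
# driver): a periodic heartbeat rewrites the key and is_up() simply checks
# whether the key is still present; the real driver additionally relies on
# key expiry so a dead service eventually drops out of the cache.

class _IllustrativeMemcachedDriver(object):
    """Minimal stand-in for the memcached servicegroup driver (sketch)."""

    def __init__(self, mc_client):
        self._mc = mc_client

    @staticmethod
    def _key(service_ref):
        # {'host': 'fake-host', 'topic': 'compute'} -> 'compute:fake-host'
        return '%(topic)s:%(host)s' % service_ref

    def _report_state(self, service):
        # Heartbeat: (re)write the membership key for this service.
        self._mc.set(self._key(service.service_ref), True)

    def is_up(self, service_ref):
        # Up as long as the membership key is still present in the cache.
        return self._mc.get(self._key(service_ref)) is not None
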
import iso8601 import mock from nova import servicegroup from nova import test from oslo_utils import timeutils class MemcachedServiceGroupTestCase(test.NoDBTestCase): @mock.patch('nova.cache_utils.get_memcached_client') def setUp(self, mgc_mock): super(MemcachedServiceGroupTestCase, self).setUp() self.mc_client = mock.MagicMock() mgc_mock.return_value = self.mc_client self.flags(servicegroup_driver='mc') self.servicegroup_api = servicegroup.API() def test_is_up(self): service_ref = { 'host': 'fake-host', 'topic': 'compute' } self.mc_client.get.return_value = None self.assertFalse(self.servicegroup_api.service_is_up(service_ref)) self.mc_client.get.assert_called_once_with('compute:fake-host') self.mc_client.reset_mock() self.mc_client.get.return_value = True self.assertTrue(self.servicegroup_api.service_is_up(service_ref)) self.mc_client.get.assert_called_once_with('compute:fake-host') def test_join(self): service = mock.MagicMock(report_interval=1) self.servicegroup_api.join('fake-host', 'fake-topic', service) fn = self.servicegroup_api._driver._report_state service.tg.add_timer_args.assert_called_once_with( 1, fn, args=[service], initial_delay=5) def test_report_state(self): service_ref = { 'host': 'fake-host', 'topic': 'compute' } service = mock.MagicMock(model_disconnected=False, service_ref=service_ref) fn = self.servicegroup_api._driver._report_state fn(service) self.mc_client.set.assert_called_once_with('compute:fake-host', mock.ANY) def test_get_updated_time(self): updated_at_time = timeutils.parse_strtime("2016-04-18T02:56:25.198871") service_ref = { 'host': 'fake-host', 'topic': 'compute', 'updated_at': updated_at_time.replace(tzinfo=iso8601.UTC) } # If no record returned from the mc, return record from DB self.mc_client.get.return_value = None self.assertEqual(service_ref['updated_at'], self.servicegroup_api.get_updated_time(service_ref)) self.mc_client.get.assert_called_once_with('compute:fake-host') # If the record in mc is newer than DB, return record from mc self.mc_client.reset_mock() retval = timeutils.utcnow() self.mc_client.get.return_value = retval self.assertEqual(retval.replace(tzinfo=iso8601.UTC), self.servicegroup_api.get_updated_time(service_ref)) self.mc_client.get.assert_called_once_with('compute:fake-host') # If the record in DB is newer than mc, return record from DB self.mc_client.reset_mock() service_ref['updated_at'] = \ retval.replace(tzinfo=iso8601.UTC) self.mc_client.get.return_value = updated_at_time self.assertEqual(service_ref['updated_at'], self.servicegroup_api.get_updated_time(service_ref)) self.mc_client.get.assert_called_once_with('compute:fake-host') # If no record returned from the DB, return the record from mc self.mc_client.reset_mock() service_ref['updated_at'] = None self.mc_client.get.return_value = updated_at_time self.assertEqual(updated_at_time.replace(tzinfo=iso8601.UTC), self.servicegroup_api.get_updated_time(service_ref)) self.mc_client.get.assert_called_once_with('compute:fake-host') ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5824678 nova-21.2.4/nova/tests/unit/ssl_cert/0000775000175000017500000000000000000000000017470 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/ssl_cert/ca.crt0000664000175000017500000000467000000000000020574 0ustar00zuulzuul00000000000000-----BEGIN CERTIFICATE----- MIIHADCCBOigAwIBAgIJAOjPGLL9VDhjMA0GCSqGSIb3DQEBDQUAMIGwMQswCQYD 
VQQGEwJVUzEOMAwGA1UECBMFVGV4YXMxDzANBgNVBAcTBkF1c3RpbjEdMBsGA1UE ChMUT3BlblN0YWNrIEZvdW5kYXRpb24xHTAbBgNVBAsTFE9wZW5TdGFjayBEZXZl bG9wZXJzMRAwDgYDVQQDEwdUZXN0IENBMTAwLgYJKoZIhvcNAQkBFiFvcGVuc3Rh Y2stZGV2QGxpc3RzLm9wZW5zdGFjay5vcmcwHhcNMTUwMTA4MDIyOTEzWhcNMjUw MTA4MDIyOTEzWjCBsDELMAkGA1UEBhMCVVMxDjAMBgNVBAgTBVRleGFzMQ8wDQYD VQQHEwZBdXN0aW4xHTAbBgNVBAoTFE9wZW5TdGFjayBGb3VuZGF0aW9uMR0wGwYD VQQLExRPcGVuU3RhY2sgRGV2ZWxvcGVyczEQMA4GA1UEAxMHVGVzdCBDQTEwMC4G CSqGSIb3DQEJARYhb3BlbnN0YWNrLWRldkBsaXN0cy5vcGVuc3RhY2sub3JnMIIC IjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAwILIMebpHYK1E1zhyi6713GG TQ9DFeLOE1T25+XTJqAkO7efQzZfB8QwCXy/8bmbhmKgQQ7APuuDci8SKCkYeWCx qJRGmg0tZVlj5gCfrV2u+olwS+XyaOGCFkYScs6D34BaE2rGD2GDryoSPc2feAt6 X4+ZkDPZnvaHQP6j9Ofq/4WmsECEas0IO5X8SDF8afA47U9ZXFkcgQK6HCHDcokL aaZxEyZFSaPex6ZAESNthkGOxEThRPxAkJhqYCeMl3Hff98XEUcFNzuAOmcnQJJg RemwJO2hS5KS3Y3p9/nBRlh3tSAG1nbY5kXSpyaq296D9x/esnXlt+9JUmn1rKyv maFBC/SbzyyQoO3MT5r8rKte0bulLw1bZOZNlhxSv2KCg5RD6vlNrnpsZszw4nj2 8fBroeFp0JMeT8jcqGs3qdm8sXLcBgiTalLYtiCNV9wZjOduQotuFN6mDwZvfa6h zZjcBNfqeLyTEnFb5k6pIla0wydWx/jvBAzoxOkEcVjak747A+p/rriD5hVUBH0B uNaWcEgKe9jcHnLvU8hUxFtgPxUHOOR+eMa+FS3ApKf9sJ/zVUq0uxyA9hUnsvnq v/CywLSvaNKBiKQTL0QLEXnw6EQb7g/XuwC5mmt+l30wGh9M1U/QMaU/+YzT4sVL TXIHJ7ExRTbEecbNbjsCAwEAAaOCARkwggEVMB0GA1UdDgQWBBQTWz2WEB0sJg9c xfM5JeJMIAJq0jCB5QYDVR0jBIHdMIHagBQTWz2WEB0sJg9cxfM5JeJMIAJq0qGB tqSBszCBsDELMAkGA1UEBhMCVVMxDjAMBgNVBAgTBVRleGFzMQ8wDQYDVQQHEwZB dXN0aW4xHTAbBgNVBAoTFE9wZW5TdGFjayBGb3VuZGF0aW9uMR0wGwYDVQQLExRP cGVuU3RhY2sgRGV2ZWxvcGVyczEQMA4GA1UEAxMHVGVzdCBDQTEwMC4GCSqGSIb3 DQEJARYhb3BlbnN0YWNrLWRldkBsaXN0cy5vcGVuc3RhY2sub3JnggkA6M8Ysv1U OGMwDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQ0FAAOCAgEAIfAD6uVorT5WomG1 2DWRm3kuwa+EDimgVF6VRvxCzyHx7e/6KJQj149KpMQ6e0ZPjqQw+pZ+jJSgq6TP MEjCHgIDwdKhi9LmQWIlo8xdzgfZW2VQkVLvwkqAnWWhCy9oGc/Ypk8pjiZfCx+/ DSJBbFnopI9f8epAKMq7N3jJyEMoTctzmI0KckrZnJ1Gq4MZpoxGmkJiGhWoUk8p r8apXZ6B1DzO1XxpGw2BIcrUC3bQS/vPrg5/XbyaAu2BSgu6iF7ULqkBsEd0yK/L i2gO9eTacaX3zJBQOlMJFsIAgIiVw6Rq6BuhU9zxDoopY4feta/NDOpk1OjY3MV7 4rcLTU6XYaItMDRe+dmjBOK+xspsaCU4kHEkA7mHL5YZhEEWLHj6QY8tAiIQMVQZ RuTpQIbNkjLW8Ls+CbwL2LkUFB19rKu9tFpzEJ1IIeFmt5HZsL5ri6W2qkSPIbIe Qq15kl/a45jgBbgn2VNA5ecjW20hhXyaS9AKWXK+AeFBaFIFDUrB2UP4YSDbJWUJ 0LKe+QuumXdl+iRdkgb1Tll7qme8gXAeyzVGHK2AsaBg+gkEeSyVLRKIixceyy+3 6yqlKJhk2qeV3ceOfVm9ZdvRlzWyVctaTcGIpDFqf4y8YyVhL1e2KGKcmYtbLq+m rtku4CM3HldxcM4wqSB1VcaTX8o= -----END CERTIFICATE----- ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/ssl_cert/ca.key0000664000175000017500000000625300000000000020573 0ustar00zuulzuul00000000000000-----BEGIN RSA PRIVATE KEY----- MIIJJwIBAAKCAgEAwILIMebpHYK1E1zhyi6713GGTQ9DFeLOE1T25+XTJqAkO7ef QzZfB8QwCXy/8bmbhmKgQQ7APuuDci8SKCkYeWCxqJRGmg0tZVlj5gCfrV2u+olw S+XyaOGCFkYScs6D34BaE2rGD2GDryoSPc2feAt6X4+ZkDPZnvaHQP6j9Ofq/4Wm sECEas0IO5X8SDF8afA47U9ZXFkcgQK6HCHDcokLaaZxEyZFSaPex6ZAESNthkGO xEThRPxAkJhqYCeMl3Hff98XEUcFNzuAOmcnQJJgRemwJO2hS5KS3Y3p9/nBRlh3 tSAG1nbY5kXSpyaq296D9x/esnXlt+9JUmn1rKyvmaFBC/SbzyyQoO3MT5r8rKte 0bulLw1bZOZNlhxSv2KCg5RD6vlNrnpsZszw4nj28fBroeFp0JMeT8jcqGs3qdm8 sXLcBgiTalLYtiCNV9wZjOduQotuFN6mDwZvfa6hzZjcBNfqeLyTEnFb5k6pIla0 wydWx/jvBAzoxOkEcVjak747A+p/rriD5hVUBH0BuNaWcEgKe9jcHnLvU8hUxFtg PxUHOOR+eMa+FS3ApKf9sJ/zVUq0uxyA9hUnsvnqv/CywLSvaNKBiKQTL0QLEXnw 6EQb7g/XuwC5mmt+l30wGh9M1U/QMaU/+YzT4sVLTXIHJ7ExRTbEecbNbjsCAwEA AQKCAgA0ySd/l2NANkDUaFl5CMt0zaoXoyGv9Jqw7lEtUPVO2AZXYYgH8/amuIK7 dztiWpRsisqKTDMmjYljW8jMvkf5sCvGn7GkOAzEh3g+7tjZvqBmDh1+kjSf0YXL 
+bbBSCMcu6L3RAW+3ewvsYeC7sjVL8CER2nCApWfYtW/WpM2agkju0/zcB1e841Y WU3ttbP5kGbrmyBTlBOexFKnuBJRa4Z3l63VpF7HTGmfsNRMXrx/XaZ55rEmK0zA 2SoB55ZDSHQSKee3UxP5CxWj7fjzWa+QO/2Sgp4BjNU8btdCqXb3hPZ98aQuVjQv H+Ic9xtOYnso3dJAeNdeUfx23psAHhUqYruD+xrjwTJV5viGO05AHjp/i4dKjOaD CMFKP/AGUcGAsL/Mjq5oMbWovbqhGaaOw4I0Xl/JuB0XQXWwr5D2cLUjMaCS9bLq WV8lfEitoCVihAi21s8MIyQWHvl4m4d/aD5KNh0MJYo3vYCrs6A256dhbmlEmGBr DY1++4yxz4YkY07jYbQYkDlCtwu51g+YE8lKAE9+Mz+PDgbRB7dgw7K3Q9SsXp1P ui7/vnrgqppnYm4aaHvXEZ1qwwt2hpoumhQo/k1xrSzVKQ83vjzjXoDc9o84Vsv2 dmcLGKPpu+cm2ks8q6x2EI09dfkJjb/7N9SpU0AOjU7CgDye0QKCAQEA5/mosLuC vXwh5FkJuV/wpipwqkS4vu+KNQiN83wdz+Yxw6siAz6/SIjr0sRmROop6CNCaBNq 887+mgm62rEe5eU4vHRlBOlYQD0qa+il09uwYPU0JunSOabxUCBhSuW/LXZyq7rA ywGB7OVSTWwgb6Y0X1pUcOXK5qYaWJUdUEi2oVrU160phbDAcZNH+vAyl+IRJmVJ LP7f1QwVrnIvIBgpIvPLRigagn84ecXPITClq4KjGNy2Qq/iarEwY7llFG10xHmK xbzQ8v5XfPZ4Swmp+35kwNhfp6HRVWV3RftX4ftFArcFGYEIActItIz10rbLJ+42 fc8oZKq/MB9NlwKCAQEA1HLOuODXrFsKtLaQQzupPLpdyfYWR7A6tbghH5paKkIg A+BSO/b91xOVx0jN2lkxe0Ns1QCpHZU8BXZ9MFCaZgr75z0+vhIRjrMTXXirlray 1mptar018j79sDJLFBF8VQFfi7Edd3OwB2dbdDFJhzNUbNJIVkVo+bXYfuWGlotG EVWxX/CnPgnKknl6vX/8YSg6qJCwcUTmQRoqermd02VtrMrGgytcOG6QdKYTT/ct b3zDNXdeLOJKyLZS1eW4V2Pcl4Njbaxq/U7KYkjWWZzVVsiCjWA8H0RXGf+Uk9Gu cUg5hm5zxXcOGdI6yRVxHEU7CKc25Ks5xw4xPkhA/QKCAQBd7yC6ABQe+qcWul9P q2PdRY49xHozBvimJQKmN/oyd3prS18IhV4b1yX3QQRQn6m8kJqRXluOwqEiaxI5 AEQMv9dLqK5HYN4VlS8aZyjPM0Sm3mPx5fj0038f/RyooYPauv4QQB1VlxSvguTi 6QfxbhIDEqbi2Ipi/5vnhupJ2kfp6sgJVdtcgYhL9WHOYXl7O1XKgHUzPToSIUSe USp4CpCN0L7dd9vUQAP0e382Z2aOnuXAaY98TZCXt4xqtWYS8Ye5D6Z8D8tkuk1f Esb/S7iDWFkgJf4F+Wa099NmiTK7FW6KfOYZv8AoSdL1GadpXg/B6ZozM7Gdoe6t Y9+dAoIBABH2Rv4gnHuJEwWmbdoRYESvKSDbOpUDFGOq1roaTcdG4fgR7kH9pwaZ NE+uGyF76xAV6ky0CphitrlrhDgiiHtaMGQjrHtbgbqD7342pqNOfR5dzzR4HOiH ZOGRzwE6XT2+qPphljE0SczGc1gGlsXklB3DRbRtl+uM8WoBM/jke58ZlK6c5Tb8 kvEBblw5Rvhb82GvIgvhnGoisTbBHNPzvmseldwfPWPUDUifhgB70I6diM+rcP3w gAwqRiSpkIVq/wqcZDqwmjcigz/+EolvFiaJO2iCm3K1T3v2PPSmhM41Ig/4pLcs UrfiK3A27OJMBCq+IIkC5RasX4N5jm0CggEAXT9oyIO+a7ggpfijuba0xuhFwf+r NY49hx3YshWXX5T3LfKZpTh+A1vjGcj57MZacRcTkFQgHVcyu+haA9lI4vsFMesU 9GqenrJNvxsV4i3avIxGjjx7d0Ok/7UuawTDuRea8m13se/oJOl5ftQK+ZoVqtO8 SzeNNpakiuCxmIEqaD8HUwWvgfA6n0HPJNc0vFAqu6Y5oOr8GDHd5JoKA8Sb15N9 AdFqwCbW9SqUVsvHDuiOKXy8lCr3OiuyjgBfbIyuuWbaU0PqIiKW++lTluXkl7Uz vUawgfgX85sY6A35g1O/ydEQw2+h2tzDvQdhhyTYpMZjZwzIIPjCQMgHPA== -----END RSA PRIVATE KEY----- ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/ssl_cert/certificate.cnf0000664000175000017500000000242400000000000022444 0ustar00zuulzuul00000000000000[ req ] default_md = sha512 default_bits = 4096 distinguished_name = req-dn req_extensions = req_ext x509_extensions = x509_ext string_mask = utf8only prompt = no # Section x509_ext is used when generating a self-signed certificate. I.e., openssl req -x509 ... [ x509_ext ] subjectKeyIdentifier = hash authorityKeyIdentifier = keyid,issuer basicConstraints = CA:FALSE keyUsage = digitalSignature, keyEncipherment extendedKeyUsage = serverAuth subjectAltName = @alternate_names # Section req_ext is used when generating a certificate signing request. I.e., openssl req ... 
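# For a concrete example, new_cert.sh in this directory exercises both
# sections: the CSR is generated with
#   openssl req -new -out certificate.csr -key privatekey.key \
#       -config certificate.cnf
# and then signed by the test CA using the x509_ext extensions with
#   openssl x509 -extfile certificate.cnf -extensions x509_ext \
#       -req -sha512 -days 3650 -set_serial 1 \
#       -CA ca.crt -CAkey ca.key -in certificate.csr -out certificate.crt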
[ req_ext ] subjectKeyIdentifier = hash basicConstraints = CA:FALSE keyUsage = digitalSignature, keyEncipherment extendedKeyUsage = serverAuth subjectAltName = @alternate_names [ req-dn ] C = US ST = Texas L = Austin O = OpenStack Foundation OU = OpenStack Developers CN = * [ alternate_names ] DNS.1 = localhost DNS.2 = ip6-localhost IP.1 = 127.0.0.1 IP.2 = ::1 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/ssl_cert/certificate.crt0000664000175000017500000000431400000000000022466 0ustar00zuulzuul00000000000000-----BEGIN CERTIFICATE----- MIIGUjCCBDqgAwIBAgIBATANBgkqhkiG9w0BAQ0FADCBsDELMAkGA1UEBhMCVVMx DjAMBgNVBAgTBVRleGFzMQ8wDQYDVQQHEwZBdXN0aW4xHTAbBgNVBAoTFE9wZW5T dGFjayBGb3VuZGF0aW9uMR0wGwYDVQQLExRPcGVuU3RhY2sgRGV2ZWxvcGVyczEQ MA4GA1UEAxMHVGVzdCBDQTEwMC4GCSqGSIb3DQEJARYhb3BlbnN0YWNrLWRldkBs aXN0cy5vcGVuc3RhY2sub3JnMB4XDTE3MDczMTAzMDg1MFoXDTI3MDcyOTAzMDg1 MFoweDELMAkGA1UEBhMCVVMxDjAMBgNVBAgMBVRleGFzMQ8wDQYDVQQHDAZBdXN0 aW4xHTAbBgNVBAoMFE9wZW5TdGFjayBGb3VuZGF0aW9uMR0wGwYDVQQLDBRPcGVu U3RhY2sgRGV2ZWxvcGVyczEKMAgGA1UEAwwBKjCCAiIwDQYJKoZIhvcNAQEBBQAD ggIPADCCAgoCggIBANBJtvyhMKBn397hE7x9Ce/Ny+4ENQfr9VrHuvGNCR3W/uUb QafdNdZCYNAGPrq2T3CEYK0IJxZjr2HuTcSK9StBMFauTeIPqVUVkO3Tjq1Rkv+L np/e6DhHkjCU6Eq/jIw3ic0QoxLygTybGxXgJgVoBzGsJufzOQ14tfkzGeGyE3L5 z5DpCNQqWLWF7soMx3kM5hBm+LWeoiBPjmsEXQY+UYiDlSLW/6I855X/wwDW5+Ot P6/1lWUfcyAyIqj3t0pmxZeY7xQnabWjhXT2dTK+dlwRjb77w665AgeF1R5lpTvU yT1aQwgH1kd9GeQbkBDwWSVLH9boPPgdMLtX2ipUgQAAEhIOUWXOYZVHVNXhV6Cr jAgvfdF39c9hmuXovPP24ikW4L+d5RPE7Vq9KJ4Uzijw9Ghu4lQQCRZ8SCNZIYJn Tz53+6fs93WwnnEPto9tFRKeNWt3jx/wjluDFhhBTZO4snNIq9xnCYSEQAIsRBVW Ahv7LqWLigUy7a9HMIyi3tQEZN9NCDy4BNuJDu33XWLLVMwNrIiR5mdCUFoRKt/E +YPj7bNlzZMTSGLoBFPM71Lnfym9HazHDE1KxvT4gzYMubK4Y07meybiL4QNvU08 ITgFU6DAGob+y/GHqw+bmez5y0F/6FlyV+SiSrbVEEtzp9Ewyrxb85OJFK0tAgMB AAGjga0wgaowHQYDVR0OBBYEFKNbEPSp52P154ZRbIamHLk9Trt0MB8GA1UdIwQY MBaAFBNbPZYQHSwmD1zF8zkl4kwgAmrSMAkGA1UdEwQCMAAwCwYDVR0PBAQDAgWg MBMGA1UdJQQMMAoGCCsGAQUFBwMBMDsGA1UdEQQ0MDKCCWxvY2FsaG9zdIINaXA2 LWxvY2FsaG9zdIcEfwAAAYcQAAAAAAAAAAAAAAAAAAAAATANBgkqhkiG9w0BAQ0F AAOCAgEAcua03v47pQUueiwM5nRxt1Tcbi79JNuGy/JL9c2bedzGumBd/dPQIEJM sieO/oUBxw3BSmwt+mRgLhTf6QYPVfCSZnL4YMHqayjzq4fmOOQaMPaPEy9rY1Re Rp/AlxpTLg2i3wQMJIR9sZ0+wcmr1yJsXn6xnLQHKJNkk+gSw3o13noZ8cEnzUtR C8qSvU4aXeGoBmpX/sPqpc+S82MKCdR/tjEOTDVGdKYsXW8OIDJePNhiHnuJ5M95 vmiFlxJ6Td30ufXscXhFfmmsRoDmxamCZcS4sfQvhe1/hYBGSeJcYkaiTlqENjwJ BpclLBfm9sw1l8gvSd2GtNPO6tHRalIUgaBCTBmyzShEGB7SEBTuBpVAbJMz0K+W 9FCLd7tgTxcSknz51iL3ybKZnVuwFKOm63k50GqGdUDC0zDuXoszNSLeWliQ+d/C mStfQ9ItFe6snbFDbTHPGk9YtYUZ48zs+8sa0QZWemMlQUjxjWHv5DIABFWVZ8ZD HLdAbzbmCHPP3uT1oeXqoZjhPCwCd6xoL5dlU0H/6bdfH5wUhnO1R029w+M2gzXV cNrmsB+Qfiiywqf5DdCVHP9yOBh83Fs82YTt+BhkWSBD4cfAm2PJ4CwOY/S+d6rD Vn50DqL6tYJoTlm/CjjXh3TP4hof7i9lfJjeZfoB5AFaWAVdL7s= -----END CERTIFICATE----- ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/ssl_cert/new_cert.sh0000664000175000017500000000107500000000000021635 0ustar00zuulzuul00000000000000#!/usr/bin/env bash openssl req -new -out certificate.csr -key privatekey.key \ -config certificate.cnf openssl x509 -extfile certificate.cnf -extensions x509_ext \ -req -sha512 -days 3650 -set_serial 1 \ -CA ca.crt -CAkey ca.key \ -in certificate.csr -out certificate.crt if [ "$1" == "--dump" ] ; then openssl req -in certificate.csr -text -noout > /tmp/csr.txt openssl x509 -in ca.crt -text -noout > /tmp/ca.txt openssl x509 -in certificate.crt -text -noout > 
/tmp/certificate.txt fi rm certificate.csr ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/ssl_cert/privatekey.key0000664000175000017500000000625300000000000022373 0ustar00zuulzuul00000000000000-----BEGIN RSA PRIVATE KEY----- MIIJKAIBAAKCAgEA0Em2/KEwoGff3uETvH0J783L7gQ1B+v1Wse68Y0JHdb+5RtB p9011kJg0AY+urZPcIRgrQgnFmOvYe5NxIr1K0EwVq5N4g+pVRWQ7dOOrVGS/4ue n97oOEeSMJToSr+MjDeJzRCjEvKBPJsbFeAmBWgHMawm5/M5DXi1+TMZ4bITcvnP kOkI1CpYtYXuygzHeQzmEGb4tZ6iIE+OawRdBj5RiIOVItb/ojznlf/DANbn460/ r/WVZR9zIDIiqPe3SmbFl5jvFCdptaOFdPZ1Mr52XBGNvvvDrrkCB4XVHmWlO9TJ PVpDCAfWR30Z5BuQEPBZJUsf1ug8+B0wu1faKlSBAAASEg5RZc5hlUdU1eFXoKuM CC990Xf1z2Ga5ei88/biKRbgv53lE8TtWr0onhTOKPD0aG7iVBAJFnxII1khgmdP Pnf7p+z3dbCecQ+2j20VEp41a3ePH/COW4MWGEFNk7iyc0ir3GcJhIRAAixEFVYC G/supYuKBTLtr0cwjKLe1ARk300IPLgE24kO7fddYstUzA2siJHmZ0JQWhEq38T5 g+Pts2XNkxNIYugEU8zvUud/Kb0drMcMTUrG9PiDNgy5srhjTuZ7JuIvhA29TTwh OAVToMAahv7L8YerD5uZ7PnLQX/oWXJX5KJKttUQS3On0TDKvFvzk4kUrS0CAwEA AQKCAgAkdpMrPMi3fBfL+9kpqTYhHgTyYRgrj9o/DzIh8U/EQowS7aebzHUNUkeC g2Vd6GaVywblo8S7/a2JVl+U5cKv1NSyiAcoaRd6xrC9gci7fMlgJUAauroqiBUG njrgQxJGxb5BAQWbXorTYk/mj3v4fFKuFnYlKwY03on020ZPpY4UFbmJo9Ig2lz3 QkAgbQZKocBw5KXrnZ7CS0siXvwuCKDbZjWoiLzt2P2t2712myizSfQZSMPjlRLh cwVwURVsV/uFY4ePHqs52iuV40N3I7KywXvwEEEciFTbnklF7gN0Kvcj33ZWpJCV qUfsEAsze/APQEyNodBymyGZ2nJdn9PqaQYnVhE9xpjiXejQHZsuMnrA3jYr8Mtx j0EZiX4ICI4Njt9oI/EtWhQtcDt86hTEtBlyFRU6jhW8O5Ai7hzxCYgUJ7onWVOE PtCC9FoOwumXWgdZNz/hMqQSn91O8trferccdUGIfx8N/G4QkyzOLI0Hc6Mubby7 +GGRwVXnLsIGxpFc+VBHY/J6offCkXx3MPbfn57x0LGZu1GtHoep391yLUrBs9jx nJrUI9OuwaeOG0iesTuGT+PbZWxDrJEtA7DRM1FBMNMvn5BTTg7yx8EqUM35hnFf 5J1XEf0DW5nUPH1Qadgi1LZjCAhiD5OuNooFsTmN7dSdleF+PQKCAQEA7jq7drTu O1ePCO+dQeECauy1qv9SO2LIHfLZ/L4OwcEtEnE8xBbvrZfUqkbUITCS6rR8UITp 6ru0MyhUEsRsk4FHIJV2P1pB2Zy+8tV4Dm3aHh4bCoECqAPHMgXUkP+9kIOn2QsE uRXnsEiQAl0SxSTcduy5F+WIWLVl4A72ry3cSvrEGwMEz0sjaEMmCZ2B8X8EJt64 uWUSHDaAMSg80bADy3p+OhmWMGZTDl/KRCz9pJLyICMxsotfbvE0BadAZr+UowSe ldqKlgRYlYL3pAhwjeMO/QxmMfRxjvG09romqe0Bcs8BDNII/ShAjjHQUwxcEszQ P14g8QwmTQVm5wKCAQEA39M3GveyIhX6vmyR4DUlxE5+yloTACdlCZu6wvFlRka8 3FEw8DWKVfnmYYFt/RPukYeBRmXwqLciGSly7PnaBXeNFqNXiykKETzS2UISZoqT Dur06GmcI+Lk1my9v5gLB1LT/D8XWjwmjA5hNO1J1UYmp+X4dgaYxWzOKBsTTJ8j SVaEaxBUwLHy58ehoQm+G5+QqL5yU/n1hPwXx1XYvd33OscSGQRbALrH2ZxsqxMZ yvNa2NYt3TnihXcF36Df5861DTNI7NDqpY72C4U8RwaqgTdDkD+t8zrk/r3LUa5d NGkGQF+59spBcb64IPZ4DuJ9//GaEsyj0jPF/FTMywKCAQEA1DiB83eumjKf+yfq AVv/GV2RYKleigSvnO5QfrSY1MXP7xPtPAnqrcwJ6T57jq2E04zBCcG92BwqpUAR 1T4iMy0BPeenlTxEWSUnfY/pCYGWwymykSLoSOBEvS0wdZM9PdXq2pDUPkVjRkj9 8P0U0YbK1y5+nOkfE1dVT8pEuz2xdyH5PM7to/SdsC3RXtNvhMDP5AiYqp99CKEM hb4AoBOa7dNLS1qrzqX4618uApnJwqgdBcAUb6d09pHs8/RQjLeyI57j3z72Ijnw 6A/pp7jU+7EAEzDOgUXvO5Xazch61PmLRsldeBxLYapQB9wcZz8lbqICCdFCqzlV jVt4lQKCAQA9CYxtfj7FrNjENTdSvSufbQiGhinIUPXsuNslbk7/6yp1qm5+Exu2 dn+s927XJShZ52oJmKMYX1idJACDP1+FPiTrl3+4I2jranrVZH9AF2ojF0/SUXqT Drz4/I6CQSRAywWkNFBZ+y1H5GP92vfXgVnpT32CMipXLGTL6xZIPt2QkldqGvoB 0oU7T+Vz1QRS5CC+47Cp1fBuY5DYe0CwBmf1T3RP/jAS8tytK0s3G+5cuiB8IWxA eBid7OddJLHqtSQKhYHNkutqWqIeYicd92Nn+XojTDpTqivojDl1/ObN9BYQWAqO knlmW2w7EPuMk5doxKoPll7WY+gJ99YhAoIBAHf5HYRh4ZuYkx+R1ow8/Ahp7N4u BGFRNnCpMG358Zws95wvBg5dkW8VU0M3256M0kFkw2AOyyyNsHqIhMNakzHesGo/ TWhqCh23p1xBLY5p14K8K6iOc1Jfa1LqGsL2TZ06TeNNyONMGqq0yOyD62CdLRDj 0ACL/z2j494LmfqhV45hYuqjQbrLizjrr6ln75g2WJ32U+zwl7KUHnBL7IEwb4Be KOl1bfVwZAs0GtHuaiScBYRLUaSC/Qq7YPjTh1nmg48DQC/HUCNGMqhoZ950kp9k 76HX+MpwUi5y49moFmn/3qDvefGFpX1td8vYMokx+eyKTXGFtxBUwPnMUSQ= -----END RSA PRIVATE KEY----- 
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_api_validation.py0000664000175000017500000013555300000000000022262 0ustar00zuulzuul00000000000000# Copyright 2013 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import re import fixtures from jsonschema import exceptions as jsonschema_exc import six from nova.api.openstack import api_version_request as api_version from nova.api import validation from nova.api.validation import parameter_types from nova.api.validation import validators from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes query_schema = { 'type': 'object', 'properties': { 'foo': parameter_types.single_param({'type': 'string', 'format': 'uuid'}), 'foos': parameter_types.multi_params({'type': 'string'}) }, 'patternProperties': { "^_": parameter_types.multi_params({'type': 'string'})}, 'additionalProperties': True } class FakeQueryParametersController(object): @validation.query_schema(query_schema, '2.3') def get(self, req): return list(set(req.GET.keys())) class RegexFormatFakeController(object): schema = { 'type': 'object', 'properties': { 'foo': { 'format': 'regex', }, }, } @validation.schema(request_body_schema=schema) def post(self, req, body): return 'Validation succeeded.' class FakeRequest(object): api_version_request = api_version.APIVersionRequest("2.1") environ = {} legacy_v2 = False def is_legacy_v2(self): return self.legacy_v2 class ValidationRegex(test.NoDBTestCase): def test_build_regex_range(self): # this is much easier to think about if we only use the ascii # subset because it's a printable range we can think # about. The algorithm works for all ranges. def _get_all_chars(): for i in range(0x7F): yield six.unichr(i) self.useFixture(fixtures.MonkeyPatch( 'nova.api.validation.parameter_types._get_all_chars', _get_all_chars)) # note that since we use only the ascii range in the tests # we have to clear the cache to recompute them. parameter_types._reset_cache() r = parameter_types._build_regex_range(ws=False) self.assertEqual(r, re.escape('!') + '-' + re.escape('~')) # if we allow whitespace the range starts earlier r = parameter_types._build_regex_range(ws=True) self.assertEqual(r, re.escape(' ') + '-' + re.escape('~')) # excluding a character will give us 2 ranges r = parameter_types._build_regex_range(ws=True, exclude=['A']) self.assertEqual(r, re.escape(' ') + '-' + re.escape('@') + 'B' + '-' + re.escape('~')) # inverting which gives us all the initial unprintable characters. r = parameter_types._build_regex_range(ws=False, invert=True) self.assertEqual(r, re.escape('\x00') + '-' + re.escape(' ')) # excluding characters that create a singleton. Naively this would be: # ' -@B-BD-~' which seems to work, but ' -@BD-~' is more natural. 
r = parameter_types._build_regex_range(ws=True, exclude=['A', 'C']) self.assertEqual(r, re.escape(' ') + '-' + re.escape('@') + 'B' + 'D' + '-' + re.escape('~')) # ws=True means the positive regex has printable whitespaces, # so the inverse will not. The inverse will include things we # exclude. r = parameter_types._build_regex_range( ws=True, exclude=['A', 'B', 'C', 'Z'], invert=True) self.assertEqual(r, re.escape('\x00') + '-' + re.escape('\x1f') + 'A-CZ') class APIValidationTestCase(test.NoDBTestCase): post_schema = None def setUp(self): super(APIValidationTestCase, self).setUp() self.post = None if self.post_schema is not None: @validation.schema(request_body_schema=self.post_schema) def post(req, body): return 'Validation succeeded.' self.post = post def check_validation_error(self, method, body, expected_detail, req=None): if not req: req = FakeRequest() try: method(body=body, req=req) except exception.ValidationError as ex: self.assertEqual(400, ex.kwargs['code']) if isinstance(expected_detail, list): self.assertIn(ex.kwargs['detail'], expected_detail, 'Exception details did not match expected') elif not re.match(expected_detail, ex.kwargs['detail']): self.assertEqual(expected_detail, ex.kwargs['detail'], 'Exception details did not match expected') except Exception as ex: self.fail('An unexpected exception happens: %s' % ex) else: self.fail('Any exception does not happen.') class FormatCheckerTestCase(test.NoDBTestCase): def _format_checker(self, format, value, error_message): format_checker = validators.FormatChecker() exc = self.assertRaises(jsonschema_exc.FormatError, format_checker.check, value, format) self.assertIsInstance(exc.cause, exception.InvalidName) self.assertEqual(error_message, exc.cause.format_message()) def test_format_checker_failed_with_non_string_name(self): error_message = ("An invalid 'name' value was provided. The name must " "be: printable characters. " "Can not start or end with whitespace.") self._format_checker("name", " ", error_message) self._format_checker("name", None, error_message) def test_format_checker_failed_name_with_leading_trailing_spaces(self): error_message = ("An invalid 'name' value was provided. " "The name must be: printable characters with at " "least one non space character") self._format_checker("name_with_leading_trailing_spaces", None, error_message) class MicroversionsSchemaTestCase(APIValidationTestCase): def setUp(self): super(MicroversionsSchemaTestCase, self).setUp() schema_v21_int = { 'type': 'object', 'properties': { 'foo': { 'type': 'integer', } } } schema_v20_str = copy.deepcopy(schema_v21_int) schema_v20_str['properties']['foo'] = {'type': 'string'} @validation.schema(schema_v20_str, '2.0', '2.0') @validation.schema(schema_v21_int, '2.1') def post(req, body): return 'Validation succeeded.' self.post = post def test_validate_v2compatible_request(self): req = FakeRequest() req.legacy_v2 = True self.assertEqual(self.post(body={'foo': 'bar'}, req=req), 'Validation succeeded.') detail = ("Invalid input for field/attribute foo. Value: 1. " "1 is not of type 'string'") self.check_validation_error(self.post, body={'foo': 1}, expected_detail=detail, req=req) def test_validate_v21_request(self): req = FakeRequest() self.assertEqual(self.post(body={'foo': 1}, req=req), 'Validation succeeded.') detail = ("Invalid input for field/attribute foo. Value: bar. 
" "'bar' is not of type 'integer'") self.check_validation_error(self.post, body={'foo': 'bar'}, expected_detail=detail, req=req) def test_validate_v2compatible_request_with_none_min_version(self): schema_none = { 'type': 'object', 'properties': { 'foo': { 'type': 'integer' } } } @validation.schema(schema_none) def post(req, body): return 'Validation succeeded.' req = FakeRequest() req.legacy_v2 = True self.assertEqual('Validation succeeded.', post(body={'foo': 1}, req=req)) detail = ("Invalid input for field/attribute foo. Value: bar. " "'bar' is not of type 'integer'") self.check_validation_error(post, body={'foo': 'bar'}, expected_detail=detail, req=req) class QueryParamsSchemaTestCase(test.NoDBTestCase): def setUp(self): super(QueryParamsSchemaTestCase, self).setUp() self.controller = FakeQueryParametersController() def test_validate_request(self): req = fakes.HTTPRequest.blank("/tests?foo=%s" % fakes.FAKE_UUID) req.api_version_request = api_version.APIVersionRequest("2.3") self.assertEqual(['foo'], self.controller.get(req)) def test_validate_request_failed(self): # parameter 'foo' expect a UUID req = fakes.HTTPRequest.blank("/tests?foo=abc") req.api_version_request = api_version.APIVersionRequest("2.3") ex = self.assertRaises(exception.ValidationError, self.controller.get, req) if six.PY3: self.assertEqual("Invalid input for query parameters foo. Value: " "abc. 'abc' is not a 'uuid'", six.text_type(ex)) else: self.assertEqual("Invalid input for query parameters foo. Value: " "abc. u'abc' is not a 'uuid'", six.text_type(ex)) def test_validate_request_with_multiple_values(self): req = fakes.HTTPRequest.blank("/tests?foos=abc") req.api_version_request = api_version.APIVersionRequest("2.3") self.assertEqual(['foos'], self.controller.get(req)) req = fakes.HTTPRequest.blank("/tests?foos=abc&foos=def") self.assertEqual(['foos'], self.controller.get(req)) def test_validate_request_with_multiple_values_fails(self): req = fakes.HTTPRequest.blank( "/tests?foo=%s&foo=%s" % (fakes.FAKE_UUID, fakes.FAKE_UUID)) req.api_version_request = api_version.APIVersionRequest("2.3") self.assertRaises(exception.ValidationError, self.controller.get, req) def test_validate_request_unicode_decode_failure(self): req = fakes.HTTPRequest.blank("/tests?foo=%88") req.api_version_request = api_version.APIVersionRequest("2.1") ex = self.assertRaises( exception.ValidationError, self.controller.get, req) self.assertIn("Query string is not UTF-8 encoded", six.text_type(ex)) def test_strip_out_additional_properties(self): req = fakes.HTTPRequest.blank( "/tests?foos=abc&foo=%s&bar=123&-bar=456" % fakes.FAKE_UUID) req.api_version_request = api_version.APIVersionRequest("2.3") res = self.controller.get(req) res.sort() self.assertEqual(['foo', 'foos'], res) def test_no_strip_out_additional_properties_when_not_match_version(self): req = fakes.HTTPRequest.blank( "/tests?foos=abc&foo=%s&bar=123&bar=456" % fakes.FAKE_UUID) # The JSON-schema matches to the API version 2.3 and above. Request # with version 2.1 to ensure there isn't no strip out for additional # parameters when schema didn't match the request version. 
req.api_version_request = api_version.APIVersionRequest("2.1") res = self.controller.get(req) res.sort() self.assertEqual(['bar', 'foo', 'foos'], res) def test_strip_out_correct_pattern_retained(self): req = fakes.HTTPRequest.blank( "/tests?foos=abc&foo=%s&bar=123&_foo_=456" % fakes.FAKE_UUID) req.api_version_request = api_version.APIVersionRequest("2.3") res = self.controller.get(req) res.sort() self.assertEqual(['_foo_', 'foo', 'foos'], res) class RequiredDisableTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'integer', }, }, } def test_validate_required_disable(self): self.assertEqual(self.post(body={'foo': 1}, req=FakeRequest()), 'Validation succeeded.') self.assertEqual(self.post(body={'abc': 1}, req=FakeRequest()), 'Validation succeeded.') class RequiredEnableTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'integer', }, }, 'required': ['foo'] } def test_validate_required_enable(self): self.assertEqual(self.post(body={'foo': 1}, req=FakeRequest()), 'Validation succeeded.') def test_validate_required_enable_fails(self): detail = "'foo' is a required property" self.check_validation_error(self.post, body={'abc': 1}, expected_detail=detail) class AdditionalPropertiesEnableTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'integer', }, }, 'required': ['foo'], } def test_validate_additionalProperties_enable(self): self.assertEqual(self.post(body={'foo': 1}, req=FakeRequest()), 'Validation succeeded.') self.assertEqual(self.post(body={'foo': 1, 'ext': 1}, req=FakeRequest()), 'Validation succeeded.') class AdditionalPropertiesDisableTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'integer', }, }, 'required': ['foo'], 'additionalProperties': False, } def test_validate_additionalProperties_disable(self): self.assertEqual(self.post(body={'foo': 1}, req=FakeRequest()), 'Validation succeeded.') def test_validate_additionalProperties_disable_fails(self): detail = "Additional properties are not allowed ('ext' was unexpected)" self.check_validation_error(self.post, body={'foo': 1, 'ext': 1}, expected_detail=detail) class PatternPropertiesTestCase(APIValidationTestCase): post_schema = { 'patternProperties': { '^[a-zA-Z0-9]{1,10}$': { 'type': 'string' }, }, 'additionalProperties': False, } def test_validate_patternProperties(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': 'bar'}, req=FakeRequest())) def test_validate_patternProperties_fails(self): details = [ "Additional properties are not allowed ('__' was unexpected)", "'__' does not match any of the regexes: '^[a-zA-Z0-9]{1,10}$'" ] self.check_validation_error(self.post, body={'__': 'bar'}, expected_detail=details) details = [ "'' does not match any of the regexes: '^[a-zA-Z0-9]{1,10}$'", "Additional properties are not allowed ('' was unexpected)" ] self.check_validation_error(self.post, body={'': 'bar'}, expected_detail=details) details = [ ("'0123456789a' does not match any of the regexes: " "'^[a-zA-Z0-9]{1,10}$'"), ("Additional properties are not allowed ('0123456789a' was" " unexpected)") ] self.check_validation_error(self.post, body={'0123456789a': 'bar'}, expected_detail=details) # Note(jrosenboom): This is referencing an internal python error # string, which is no stable interface. We need a patch in the # jsonschema library in order to fix this properly. 
if six.PY3: detail = "expected string or bytes-like object" else: detail = "expected string or buffer" self.check_validation_error(self.post, body={None: 'bar'}, expected_detail=detail) class StringTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'string', }, }, } def test_validate_string(self): self.assertEqual(self.post(body={'foo': 'abc'}, req=FakeRequest()), 'Validation succeeded.') self.assertEqual(self.post(body={'foo': '0'}, req=FakeRequest()), 'Validation succeeded.') self.assertEqual(self.post(body={'foo': ''}, req=FakeRequest()), 'Validation succeeded.') def test_validate_string_fails(self): detail = ("Invalid input for field/attribute foo. Value: 1." " 1 is not of type 'string'") self.check_validation_error(self.post, body={'foo': 1}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 1.5." " 1.5 is not of type 'string'") self.check_validation_error(self.post, body={'foo': 1.5}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: True." " True is not of type 'string'") self.check_validation_error(self.post, body={'foo': True}, expected_detail=detail) class StringLengthTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'string', 'minLength': 1, 'maxLength': 10, }, }, } def test_validate_string_length(self): self.assertEqual(self.post(body={'foo': '0'}, req=FakeRequest()), 'Validation succeeded.') self.assertEqual(self.post(body={'foo': '0123456789'}, req=FakeRequest()), 'Validation succeeded.') def test_validate_string_length_fails(self): detail = ("Invalid input for field/attribute foo. Value: ." " '' is too short") self.check_validation_error(self.post, body={'foo': ''}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 0123456789a." " '0123456789a' is too long") self.check_validation_error(self.post, body={'foo': '0123456789a'}, expected_detail=detail) class IntegerTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': ['integer', 'string'], 'pattern': '^[0-9]+$', }, }, } def test_validate_integer(self): self.assertEqual(self.post(body={'foo': 1}, req=FakeRequest()), 'Validation succeeded.') self.assertEqual(self.post(body={'foo': '1'}, req=FakeRequest()), 'Validation succeeded.') self.assertEqual(self.post(body={'foo': '0123456789'}, req=FakeRequest()), 'Validation succeeded.') def test_validate_integer_fails(self): detail = ("Invalid input for field/attribute foo. Value: abc." " 'abc' does not match '^[0-9]+$'") self.check_validation_error(self.post, body={'foo': 'abc'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: True." " True is not of type 'integer', 'string'") self.check_validation_error(self.post, body={'foo': True}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 0xffff." " '0xffff' does not match '^[0-9]+$'") self.check_validation_error(self.post, body={'foo': '0xffff'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 1.0." " 1.0 is not of type 'integer', 'string'") self.check_validation_error(self.post, body={'foo': 1.0}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 1.0." 
" '1.0' does not match '^[0-9]+$'") self.check_validation_error(self.post, body={'foo': '1.0'}, expected_detail=detail) class IntegerRangeTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': ['integer', 'string'], 'pattern': '^[0-9]+$', 'minimum': 1, 'maximum': 10, }, }, } def test_validate_integer_range(self): self.assertEqual(self.post(body={'foo': 1}, req=FakeRequest()), 'Validation succeeded.') self.assertEqual(self.post(body={'foo': 10}, req=FakeRequest()), 'Validation succeeded.') self.assertEqual(self.post(body={'foo': '1'}, req=FakeRequest()), 'Validation succeeded.') def test_validate_integer_range_fails(self): detail = ("Invalid input for field/attribute foo. Value: 0." " 0(.0)? is less than the minimum of 1") self.check_validation_error(self.post, body={'foo': 0}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 11." " 11(.0)? is greater than the maximum of 10") self.check_validation_error(self.post, body={'foo': 11}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 0." " 0(.0)? is less than the minimum of 1") self.check_validation_error(self.post, body={'foo': '0'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 11." " 11(.0)? is greater than the maximum of 10") self.check_validation_error(self.post, body={'foo': '11'}, expected_detail=detail) class BooleanTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': parameter_types.boolean, }, } def test_validate_boolean(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': True}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': False}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'True'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'False'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': '1'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': '0'}, req=FakeRequest())) def test_validate_boolean_fails(self): enum_boolean = ("[True, 'True', 'TRUE', 'true', '1', 'ON', 'On'," " 'on', 'YES', 'Yes', 'yes'," " False, 'False', 'FALSE', 'false', '0', 'OFF', 'Off'," " 'off', 'NO', 'No', 'no']") detail = ("Invalid input for field/attribute foo. Value: bar." " 'bar' is not one of %s") % enum_boolean self.check_validation_error(self.post, body={'foo': 'bar'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 2." " '2' is not one of %s") % enum_boolean self.check_validation_error(self.post, body={'foo': '2'}, expected_detail=detail) class HostnameTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': parameter_types.hostname, }, } def test_validate_hostname(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': 'localhost'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'localhost.localdomain.com'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'my-host'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'my_host'}, req=FakeRequest())) def test_validate_hostname_fails(self): detail = ("Invalid input for field/attribute foo. Value: True." 
" True is not of type 'string'") self.check_validation_error(self.post, body={'foo': True}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 1." " 1 is not of type 'string'") self.check_validation_error(self.post, body={'foo': 1}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: my$host." " 'my$host' does not match '^[a-zA-Z0-9-._]*$'") self.check_validation_error(self.post, body={'foo': 'my$host'}, expected_detail=detail) class HostnameIPaddressTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': parameter_types.hostname_or_ip_address, }, } def test_validate_hostname_or_ip_address(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': 'localhost'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'localhost.localdomain.com'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'my-host'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'my_host'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': '192.168.10.100'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': '2001:db8::9abc'}, req=FakeRequest())) def test_validate_hostname_or_ip_address_fails(self): detail = ("Invalid input for field/attribute foo. Value: True." " True is not of type 'string'") self.check_validation_error(self.post, body={'foo': True}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 1." " 1 is not of type 'string'") self.check_validation_error(self.post, body={'foo': 1}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: my$host." " 'my$host' does not match '^[a-zA-Z0-9-_.:]*$'") self.check_validation_error(self.post, body={'foo': 'my$host'}, expected_detail=detail) class NameTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': parameter_types.name, }, } def test_validate_name(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': 'm1.small'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'my server'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'a'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': u'\u0434'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': u'\u0434\u2006\ufffd'}, req=FakeRequest())) def test_validate_name_fails(self): error = ("An invalid 'name' value was provided. The name must be: " "printable characters. 
" "Can not start or end with whitespace.") should_fail = (' ', ' server', 'server ', u'a\xa0', # trailing unicode space u'\uffff', # non-printable unicode ) for item in should_fail: self.check_validation_error(self.post, body={'foo': item}, expected_detail=error) # four-byte unicode, if supported by this python build try: self.check_validation_error(self.post, body={'foo': u'\U00010000'}, expected_detail=error) except ValueError: pass class NameWithLeadingTrailingSpacesTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': parameter_types.name_with_leading_trailing_spaces, }, } def test_validate_name(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': 'm1.small'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'my server'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'a'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': u'\u0434'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': u'\u0434\u2006\ufffd'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': ' abc '}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'abc abc abc'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': ' abc abc abc '}, req=FakeRequest())) # leading unicode space self.assertEqual('Validation succeeded.', self.post(body={'foo': '\xa0abc'}, req=FakeRequest())) def test_validate_name_fails(self): error = ("An invalid 'name' value was provided. The name must be: " "printable characters with at least one non space character") should_fail = ( ' ', u'\xa0', # unicode space u'\uffff', # non-printable unicode ) for item in should_fail: self.check_validation_error(self.post, body={'foo': item}, expected_detail=error) # four-byte unicode, if supported by this python build try: self.check_validation_error(self.post, body={'foo': u'\U00010000'}, expected_detail=error) except ValueError: pass class NoneTypeTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': parameter_types.none } } def test_validate_none(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': 'None'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': None}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': {}}, req=FakeRequest())) def test_validate_none_fails(self): detail = ("Invalid input for field/attribute foo. Value: ." " '' is not one of ['None', None, {}]") self.check_validation_error(self.post, body={'foo': ''}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: " "{'key': 'val'}. {'key': 'val'} is not one of " "['None', None, {}]") self.check_validation_error(self.post, body={'foo': {'key': 'val'}}, expected_detail=detail) class NameOrNoneTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': parameter_types.name_or_none } } def test_valid(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': None}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': '1'}, req=FakeRequest())) def test_validate_fails(self): detail = ("Invalid input for field/attribute foo. Value: 1234. 
1234 " "is not valid under any of the given schemas") self.check_validation_error(self.post, body={'foo': 1234}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: . '' " "is not valid under any of the given schemas") self.check_validation_error(self.post, body={'foo': ''}, expected_detail=detail) too_long_name = 256 * "k" detail = ("Invalid input for field/attribute foo. Value: %s. " "'%s' is not valid under any of the " "given schemas") % (too_long_name, too_long_name) self.check_validation_error(self.post, body={'foo': too_long_name}, expected_detail=detail) class TcpUdpPortTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': parameter_types.tcp_udp_port, }, } def test_validate_tcp_udp_port(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': 1024}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': '1024'}, req=FakeRequest())) def test_validate_tcp_udp_port_fails(self): detail = ("Invalid input for field/attribute foo. Value: True." " True is not of type 'integer', 'string'") self.check_validation_error(self.post, body={'foo': True}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 65536." " 65536(.0)? is greater than the maximum of 65535") self.check_validation_error(self.post, body={'foo': 65536}, expected_detail=detail) class CidrFormatTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'string', 'format': 'cidr', }, }, } def test_validate_cidr(self): self.assertEqual('Validation succeeded.', self.post( body={'foo': '192.168.10.0/24'}, req=FakeRequest() )) def test_validate_cidr_fails(self): detail = ("Invalid input for field/attribute foo." " Value: bar." " 'bar' is not a 'cidr'") self.check_validation_error(self.post, body={'foo': 'bar'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo." " Value: . '' is not a 'cidr'") self.check_validation_error(self.post, body={'foo': ''}, expected_detail=detail) detail = ("Invalid input for field/attribute foo." " Value: 192.168.1.0. '192.168.1.0' is not a 'cidr'") self.check_validation_error(self.post, body={'foo': '192.168.1.0'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo." " Value: 192.168.1.0 /24." " '192.168.1.0 /24' is not a 'cidr'") self.check_validation_error(self.post, body={'foo': '192.168.1.0 /24'}, expected_detail=detail) class DatetimeTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'string', 'format': 'date-time', }, }, } def test_validate_datetime(self): self.assertEqual('Validation succeeded.', self.post( body={'foo': '2014-01-14T01:00:00Z'}, req=FakeRequest() )) def test_validate_datetime_fails(self): detail = ("Invalid input for field/attribute foo." " Value: 2014-13-14T01:00:00Z." " '2014-13-14T01:00:00Z' is not a 'date-time'") self.check_validation_error(self.post, body={'foo': '2014-13-14T01:00:00Z'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo." " Value: bar. 'bar' is not a 'date-time'") self.check_validation_error(self.post, body={'foo': 'bar'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 1." 
" '1' is not a 'date-time'") self.check_validation_error(self.post, body={'foo': '1'}, expected_detail=detail) class UuidTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'string', 'format': 'uuid', }, }, } def test_validate_uuid(self): self.assertEqual('Validation succeeded.', self.post( body={'foo': '70a599e0-31e7-49b7-b260-868f441e862b'}, req=FakeRequest() )) def test_validate_uuid_fails(self): detail = ("Invalid input for field/attribute foo." " Value: 70a599e031e749b7b260868f441e862." " '70a599e031e749b7b260868f441e862' is not a 'uuid'") self.check_validation_error(self.post, body={'foo': '70a599e031e749b7b260868f441e862'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 1." " '1' is not a 'uuid'") self.check_validation_error(self.post, body={'foo': '1'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: abc." " 'abc' is not a 'uuid'") self.check_validation_error(self.post, body={'foo': 'abc'}, expected_detail=detail) class UriTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'string', 'format': 'uri', }, }, } def test_validate_uri(self): self.assertEqual('Validation succeeded.', self.post( body={'foo': 'http://localhost:8774/v2/servers'}, req=FakeRequest() )) self.assertEqual('Validation succeeded.', self.post( body={'foo': 'http://[::1]:8774/v2/servers'}, req=FakeRequest() )) def test_validate_uri_fails(self): base_detail = ("Invalid input for field/attribute foo. Value: {0}. " "'{0}' is not a 'uri'") invalid_uri = 'http://localhost:8774/v2/servers##' self.check_validation_error(self.post, body={'foo': invalid_uri}, expected_detail=base_detail.format( invalid_uri)) invalid_uri = 'http://[fdf8:01]:8774/v2/servers' self.check_validation_error(self.post, body={'foo': invalid_uri}, expected_detail=base_detail.format( invalid_uri)) invalid_uri = '1' self.check_validation_error(self.post, body={'foo': invalid_uri}, expected_detail=base_detail.format( invalid_uri)) invalid_uri = 'abc' self.check_validation_error(self.post, body={'foo': invalid_uri}, expected_detail=base_detail.format( invalid_uri)) class Ipv4TestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'string', 'format': 'ipv4', }, }, } def test_validate_ipv4(self): self.assertEqual('Validation succeeded.', self.post( body={'foo': '192.168.0.100'}, req=FakeRequest() )) def test_validate_ipv4_fails(self): detail = ("Invalid input for field/attribute foo. Value: abc." " 'abc' is not a 'ipv4'") self.check_validation_error(self.post, body={'foo': 'abc'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: localhost." " 'localhost' is not a 'ipv4'") self.check_validation_error(self.post, body={'foo': 'localhost'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo." " Value: 2001:db8::1234:0:0:9abc." " '2001:db8::1234:0:0:9abc' is not a 'ipv4'") self.check_validation_error(self.post, body={'foo': '2001:db8::1234:0:0:9abc'}, expected_detail=detail) class Ipv6TestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'string', 'format': 'ipv6', }, }, } def test_validate_ipv6(self): self.assertEqual('Validation succeeded.', self.post( body={'foo': '2001:db8::1234:0:0:9abc'}, req=FakeRequest() )) def test_validate_ipv6_fails(self): detail = ("Invalid input for field/attribute foo. Value: abc." 
" 'abc' is not a 'ipv6'") self.check_validation_error(self.post, body={'foo': 'abc'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: localhost." " 'localhost' is not a 'ipv6'") self.check_validation_error(self.post, body={'foo': 'localhost'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo." " Value: 192.168.0.100. '192.168.0.100' is not a 'ipv6'") self.check_validation_error(self.post, body={'foo': '192.168.0.100'}, expected_detail=detail) class Base64TestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'string', 'format': 'base64', }, }, } def test_validate_base64(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': 'aGVsbG8gd29ybGQ='}, req=FakeRequest())) # 'aGVsbG8gd29ybGQ=' is the base64 code of 'hello world' def test_validate_base64_fails(self): value = 'A random string' detail = ("Invalid input for field/attribute foo. " "Value: %s. '%s' is not a 'base64'") % (value, value) self.check_validation_error(self.post, body={'foo': value}, expected_detail=detail) class RegexFormatTestCase(APIValidationTestCase): def setUp(self): super(RegexFormatTestCase, self).setUp() self.controller = RegexFormatFakeController() def test_validate_regex(self): req = fakes.HTTPRequest.blank("") self.assertEqual('Validation succeeded.', self.controller.post(req, body={'foo': u'Myserver'})) def test_validate_regex_fails(self): value = 1 req = fakes.HTTPRequest.blank("") detail = ("Invalid input for field/attribute foo. " "Value: %s. %s is not a 'regex'") % (value, value) self.check_validation_error(self.controller.post, req=req, body={'foo': value}, expected_detail=detail) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_availability_zones.py0000664000175000017500000003121100000000000023151 0ustar00zuulzuul00000000000000# Copyright 2013 Netease Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Tests for availability zones """ import mock from oslo_utils.fixture import uuidsentinel import six from nova import availability_zones as az from nova.compute import api as compute_api import nova.conf from nova import context from nova.db import api as db from nova import objects from nova import test CONF = nova.conf.CONF class AvailabilityZoneTestCases(test.TestCase): """Test case for aggregate based availability zone.""" def setUp(self): super(AvailabilityZoneTestCases, self).setUp() self.host = 'me' self.availability_zone = 'nova-test' self.default_az = CONF.default_availability_zone self.default_in_az = CONF.internal_service_availability_zone self.context = context.get_admin_context() self.agg = self._create_az('az_agg', self.availability_zone) def tearDown(self): self.agg.destroy() super(AvailabilityZoneTestCases, self).tearDown() def _create_az(self, agg_name, az_name): agg_meta = {'name': agg_name, 'uuid': uuidsentinel.agg_uuid, 'metadata': {'availability_zone': az_name}} agg = objects.Aggregate(self.context, **agg_meta) agg.create() agg = objects.Aggregate.get_by_id(self.context, agg.id) return agg def _update_az(self, aggregate, az_name): aggregate.update_metadata({'availability_zone': az_name}) def _create_service_with_topic(self, topic, host, disabled=False): values = { 'binary': 'nova-bin', 'host': host, 'topic': topic, 'disabled': disabled, } return db.service_create(self.context, values) def _destroy_service(self, service): return db.service_destroy(self.context, service['id']) def _add_to_aggregate(self, service, aggregate): aggregate.add_host(service['host']) def _delete_from_aggregate(self, service, aggregate): aggregate.delete_host(service['host']) def test_rest_availability_zone_reset_cache(self): az._get_cache().region.get_or_create('cache', lambda: 'fake_value') az.reset_cache() self.assertIsNone(az._get_cache().get('cache')) def test_update_host_availability_zone_cache(self): """Test availability zone cache could be update.""" service = self._create_service_with_topic('compute', self.host) # Create a new aggregate with an AZ and add the host to the AZ az_name = 'az1' cache_key = az._make_cache_key(self.host) agg_az1 = self._create_az('agg-az1', az_name) self._add_to_aggregate(service, agg_az1) az.update_host_availability_zone_cache(self.context, self.host) self.assertEqual('az1', az._get_cache().get(cache_key)) az.update_host_availability_zone_cache(self.context, self.host, 'az2') self.assertEqual('az2', az._get_cache().get(cache_key)) def test_set_availability_zone_compute_service(self): """Test for compute service get right availability zone.""" service = self._create_service_with_topic('compute', self.host) services = db.service_get_all(self.context) # The service is not add into aggregate, so confirm it is default # availability zone. new_service = az.set_availability_zones(self.context, services)[0] self.assertEqual(self.default_az, new_service['availability_zone']) # The service is added into aggregate, confirm return the aggregate # availability zone. 
self._add_to_aggregate(service, self.agg) new_service = az.set_availability_zones(self.context, services)[0] self.assertEqual(self.availability_zone, new_service['availability_zone']) self._destroy_service(service) def test_set_availability_zone_unicode_key(self): """Test set availability zone cache key is unicode.""" service = self._create_service_with_topic('network', self.host) services = db.service_get_all(self.context) az.set_availability_zones(self.context, services) self.assertIsInstance(services[0]['host'], six.text_type) cached_key = az._make_cache_key(services[0]['host']) self.assertIsInstance(cached_key, str) self._destroy_service(service) def test_set_availability_zone_not_compute_service(self): """Test not compute service get right availability zone.""" service = self._create_service_with_topic('network', self.host) services = db.service_get_all(self.context) new_service = az.set_availability_zones(self.context, services)[0] self.assertEqual(self.default_in_az, new_service['availability_zone']) self._destroy_service(service) def test_get_host_availability_zone(self): """Test get right availability zone by given host.""" self.assertEqual(self.default_az, az.get_host_availability_zone(self.context, self.host)) service = self._create_service_with_topic('compute', self.host) self._add_to_aggregate(service, self.agg) self.assertEqual(self.availability_zone, az.get_host_availability_zone(self.context, self.host)) def test_update_host_availability_zone(self): """Test availability zone could be update by given host.""" service = self._create_service_with_topic('compute', self.host) # Create a new aggregate with an AZ and add the host to the AZ az_name = 'az1' agg_az1 = self._create_az('agg-az1', az_name) self._add_to_aggregate(service, agg_az1) self.assertEqual(az_name, az.get_host_availability_zone(self.context, self.host)) # Update AZ new_az_name = 'az2' self._update_az(agg_az1, new_az_name) self.assertEqual(new_az_name, az.get_host_availability_zone(self.context, self.host)) def test_delete_host_availability_zone(self): """Test availability zone could be deleted successfully.""" service = self._create_service_with_topic('compute', self.host) # Create a new aggregate with an AZ and add the host to the AZ az_name = 'az1' agg_az1 = self._create_az('agg-az1', az_name) self._add_to_aggregate(service, agg_az1) self.assertEqual(az_name, az.get_host_availability_zone(self.context, self.host)) # Delete the AZ via deleting the aggregate self._delete_from_aggregate(service, agg_az1) self.assertEqual(self.default_az, az.get_host_availability_zone(self.context, self.host)) def test_get_availability_zones(self): """Test get_availability_zones.""" # When the param get_only_available of get_availability_zones is set # to default False, it returns two lists, zones with at least one # enabled services, and zones with no enabled services, # when get_only_available is set to True, only return a list of zones # with at least one enabled services. 
# Use the following test data: # # zone host enabled # nova-test host1 Yes # nova-test host2 No # nova-test2 host3 Yes # nova-test3 host4 No # host5 No agg2 = self._create_az('agg-az2', 'nova-test2') agg3 = self._create_az('agg-az3', 'nova-test3') service1 = self._create_service_with_topic('compute', 'host1', disabled=False) service2 = self._create_service_with_topic('compute', 'host2', disabled=True) service3 = self._create_service_with_topic('compute', 'host3', disabled=False) service4 = self._create_service_with_topic('compute', 'host4', disabled=True) self._create_service_with_topic('compute', 'host5', disabled=True) self._add_to_aggregate(service1, self.agg) self._add_to_aggregate(service2, self.agg) self._add_to_aggregate(service3, agg2) self._add_to_aggregate(service4, agg3) host_api = compute_api.HostAPI() zones, not_zones = az.get_availability_zones(self.context, host_api) self.assertEqual(['nova-test', 'nova-test2'], zones) self.assertEqual(['nova', 'nova-test3'], not_zones) zones = az.get_availability_zones(self.context, host_api, get_only_available=True) self.assertEqual(['nova-test', 'nova-test2'], zones) zones, not_zones = az.get_availability_zones(self.context, host_api, with_hosts=True) self.assertJsonEqual(zones, [(u'nova-test2', set([u'host3'])), (u'nova-test', set([u'host1']))]) self.assertJsonEqual(not_zones, [(u'nova-test3', set([u'host4'])), (u'nova', set([u'host5']))]) def test_get_instance_availability_zone_default_value(self): """Test get right availability zone by given an instance.""" fake_inst = objects.Instance(host=self.host, availability_zone=None) self.assertEqual(self.default_az, az.get_instance_availability_zone(self.context, fake_inst)) def test_get_instance_availability_zone_from_aggregate(self): """Test get availability zone from aggregate by given an instance.""" host = 'host170' service = self._create_service_with_topic('compute', host) self._add_to_aggregate(service, self.agg) fake_inst = objects.Instance(host=host, availability_zone=self.availability_zone) self.assertEqual(self.availability_zone, az.get_instance_availability_zone(self.context, fake_inst)) @mock.patch.object(az._get_cache(), 'get') def test_get_instance_availability_zone_cache_differs(self, cache_get): host = 'host170' service = self._create_service_with_topic('compute', host) self._add_to_aggregate(service, self.agg) cache_get.return_value = self.default_az fake_inst = objects.Instance(host=host, availability_zone=self.availability_zone) self.assertEqual( self.availability_zone, az.get_instance_availability_zone(self.context, fake_inst)) def test_get_instance_availability_zone_no_host(self): """Test get availability zone from instance if host is None.""" fake_inst = objects.Instance(host=None, availability_zone='inst-az') result = az.get_instance_availability_zone(self.context, fake_inst) self.assertEqual('inst-az', result) def test_get_instance_availability_zone_no_host_set(self): """Test get availability zone from instance if host not set. This is testing the case in the compute API where the Instance object does not have the host attribute set because it's just the object that goes into the BuildRequest, it wasn't actually pulled from the DB. The instance in this case doesn't actually get inserted into the DB until it reaches conductor. So the host attribute may not be set but we expect availability_zone always will in the API because of _validate_and_build_base_options setting a default value which goes into the object. 
""" fake_inst = objects.Instance(availability_zone='inst-az') result = az.get_instance_availability_zone(self.context, fake_inst) self.assertEqual('inst-az', result) def test_get_instance_availability_zone_no_host_no_az(self): """Test get availability zone if neither host nor az is set.""" fake_inst = objects.Instance(host=None, availability_zone=None) result = az.get_instance_availability_zone(self.context, fake_inst) self.assertIsNone(result) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/test_baserpc.py0000664000175000017500000000276700000000000020716 0ustar00zuulzuul00000000000000# # Copyright 2013 - Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # """ Test the base rpc API. """ from nova import baserpc from nova.compute import rpcapi as compute_rpcapi from nova import context from nova import test class BaseAPITestCase(test.TestCase): def setUp(self): super(BaseAPITestCase, self).setUp() self.user_id = 'fake' self.project_id = 'fake' self.context = context.RequestContext(self.user_id, self.project_id) self.compute = self.start_service('compute') self.base_rpcapi = baserpc.BaseAPI(compute_rpcapi.RPC_TOPIC) def test_ping(self): res = self.base_rpcapi.ping(self.context, 'foo') self.assertEqual({'service': 'compute', 'arg': 'foo'}, res) def test_get_backdoor_port(self): res = self.base_rpcapi.get_backdoor_port(self.context, self.compute.host) self.assertEqual(self.compute.backdoor_port, res) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_block_device.py0000664000175000017500000007537100000000000021711 0ustar00zuulzuul00000000000000# Copyright 2011 Isaku Yamahata # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests for Block Device utility functions. 
""" from oslo_utils.fixture import uuidsentinel as uuids import six from nova import block_device from nova import exception from nova import objects from nova import test from nova.tests.unit import fake_block_device from nova.tests.unit import matchers class BlockDeviceTestCase(test.NoDBTestCase): def setUp(self): super(BlockDeviceTestCase, self).setUp() BDM = block_device.BlockDeviceDict self.new_mapping = [ BDM({'id': 1, 'instance_uuid': uuids.instance, 'device_name': '/dev/sdb1', 'source_type': 'blank', 'destination_type': 'local', 'delete_on_termination': True, 'volume_size': 1, 'guest_format': 'swap', 'boot_index': -1}), BDM({'id': 2, 'instance_uuid': uuids.instance, 'device_name': '/dev/sdc1', 'source_type': 'blank', 'destination_type': 'local', 'volume_size': 10, 'delete_on_termination': True, 'boot_index': -1}), BDM({'id': 3, 'instance_uuid': uuids.instance, 'device_name': '/dev/sda1', 'source_type': 'volume', 'destination_type': 'volume', 'volume_id': 'fake-volume-id-1', 'connection_info': "{'fake': 'connection_info'}", 'boot_index': 0}), BDM({'id': 4, 'instance_uuid': uuids.instance, 'device_name': '/dev/sda2', 'source_type': 'snapshot', 'destination_type': 'volume', 'connection_info': "{'fake': 'connection_info'}", 'snapshot_id': 'fake-snapshot-id-1', 'volume_id': 'fake-volume-id-2', 'boot_index': -1}), BDM({'id': 5, 'instance_uuid': uuids.instance, 'no_device': True, 'device_name': '/dev/vdc'}), ] def test_properties(self): root_device0 = '/dev/sda' root_device1 = '/dev/sdb' mappings = [{'virtual': 'root', 'device': root_device0}] properties0 = {'mappings': mappings} properties1 = {'mappings': mappings, 'root_device_name': root_device1} self.assertIsNone(block_device.properties_root_device_name({})) self.assertEqual(root_device0, block_device.properties_root_device_name(properties0)) self.assertEqual(root_device1, block_device.properties_root_device_name(properties1)) def test_ephemeral(self): self.assertFalse(block_device.is_ephemeral('ephemeral')) self.assertTrue(block_device.is_ephemeral('ephemeral0')) self.assertTrue(block_device.is_ephemeral('ephemeral1')) self.assertTrue(block_device.is_ephemeral('ephemeral11')) self.assertFalse(block_device.is_ephemeral('root')) self.assertFalse(block_device.is_ephemeral('swap')) self.assertFalse(block_device.is_ephemeral('/dev/sda1')) self.assertEqual(0, block_device.ephemeral_num('ephemeral0')) self.assertEqual(1, block_device.ephemeral_num('ephemeral1')) self.assertEqual(11, block_device.ephemeral_num('ephemeral11')) self.assertFalse(block_device.is_swap_or_ephemeral('ephemeral')) self.assertTrue(block_device.is_swap_or_ephemeral('ephemeral0')) self.assertTrue(block_device.is_swap_or_ephemeral('ephemeral1')) self.assertTrue(block_device.is_swap_or_ephemeral('swap')) self.assertFalse(block_device.is_swap_or_ephemeral('root')) self.assertFalse(block_device.is_swap_or_ephemeral('/dev/sda1')) def test_mappings_prepend_dev(self): mapping = [ {'virtual': 'ami', 'device': '/dev/sda'}, {'virtual': 'root', 'device': 'sda'}, {'virtual': 'ephemeral0', 'device': 'sdb'}, {'virtual': 'swap', 'device': 'sdc'}, {'virtual': 'ephemeral1', 'device': 'sdd'}, {'virtual': 'ephemeral2', 'device': 'sde'}] expected = [ {'virtual': 'ami', 'device': '/dev/sda'}, {'virtual': 'root', 'device': 'sda'}, {'virtual': 'ephemeral0', 'device': '/dev/sdb'}, {'virtual': 'swap', 'device': '/dev/sdc'}, {'virtual': 'ephemeral1', 'device': '/dev/sdd'}, {'virtual': 'ephemeral2', 'device': '/dev/sde'}] prepended = block_device.mappings_prepend_dev(mapping) 
self.assertEqual(sorted(expected, key=lambda v: v['virtual']), sorted(prepended, key=lambda v: v['virtual'])) def test_strip_dev(self): self.assertEqual('sda', block_device.strip_dev('/dev/sda')) self.assertEqual('sda', block_device.strip_dev('sda')) self.assertIsNone(block_device.strip_dev(None)) def test_strip_prefix(self): self.assertEqual('a', block_device.strip_prefix('/dev/sda')) self.assertEqual('a', block_device.strip_prefix('a')) self.assertEqual('a', block_device.strip_prefix('xvda')) self.assertEqual('a', block_device.strip_prefix('vda')) self.assertEqual('a', block_device.strip_prefix('hda')) self.assertIsNone(block_device.strip_prefix(None)) def test_get_device_letter(self): self.assertEqual('', block_device.get_device_letter('')) self.assertEqual('a', block_device.get_device_letter('/dev/sda1')) self.assertEqual('b', block_device.get_device_letter('/dev/xvdb')) self.assertEqual('d', block_device.get_device_letter('/dev/d')) self.assertEqual('a', block_device.get_device_letter('a')) self.assertEqual('b', block_device.get_device_letter('sdb2')) self.assertEqual('c', block_device.get_device_letter('vdc')) self.assertEqual('c', block_device.get_device_letter('hdc')) self.assertIsNone(block_device.get_device_letter(None)) def test_generate_device_name(self): expected = ( ('vda', ("vd", 0)), ('vdaa', ("vd", 26)), ('vdabc', ("vd", 730)), ('vdidpok', ("vd", 4194304)), ('sdc', ("sd", 2)), ('sdaa', ("sd", 26)), ('sdiw', ("sd", 256)), ('hdzz', ("hd", 701)) ) for res, args in expected: self.assertEqual(res, block_device.generate_device_name(*args)) def test_volume_in_mapping(self): swap = {'device_name': '/dev/sdb', 'swap_size': 1} ephemerals = [{'num': 0, 'virtual_name': 'ephemeral0', 'device_name': '/dev/sdc1', 'size': 1}, {'num': 2, 'virtual_name': 'ephemeral2', 'device_name': '/dev/sdd', 'size': 1}] block_device_mapping = [{'mount_device': '/dev/sde', 'device_path': 'fake_device'}, {'mount_device': '/dev/sdf', 'device_path': 'fake_device'}] block_device_info = { 'root_device_name': '/dev/sda', 'swap': swap, 'ephemerals': ephemerals, 'block_device_mapping': block_device_mapping} def _assert_volume_in_mapping(device_name, true_or_false): in_mapping = block_device.volume_in_mapping( device_name, block_device_info) self.assertEqual(true_or_false, in_mapping) _assert_volume_in_mapping('sda', False) _assert_volume_in_mapping('sdb', True) _assert_volume_in_mapping('sdc1', True) _assert_volume_in_mapping('sdd', True) _assert_volume_in_mapping('sde', True) _assert_volume_in_mapping('sdf', True) _assert_volume_in_mapping('sdg', False) _assert_volume_in_mapping('sdh1', False) def test_get_root_bdm(self): root_bdm = {'device_name': 'vda', 'boot_index': 0} bdms = [root_bdm, {'device_name': 'vdb', 'boot_index': 1}, {'device_name': 'vdc', 'boot_index': -1}, {'device_name': 'vdd'}] self.assertEqual(root_bdm, block_device.get_root_bdm(bdms)) self.assertEqual(root_bdm, block_device.get_root_bdm([bdms[0]])) self.assertIsNone(block_device.get_root_bdm(bdms[1:])) self.assertIsNone(block_device.get_root_bdm(bdms[2:])) self.assertIsNone(block_device.get_root_bdm(bdms[3:])) self.assertIsNone(block_device.get_root_bdm([])) def test_get_bdm_ephemeral_disk_size(self): size = block_device.get_bdm_ephemeral_disk_size(self.new_mapping) self.assertEqual(10, size) def test_get_bdm_swap_list(self): swap_list = block_device.get_bdm_swap_list(self.new_mapping) self.assertEqual(1, len(swap_list)) self.assertEqual(1, swap_list[0].get('id')) def test_get_bdm_local_disk_num(self): size = 
block_device.get_bdm_local_disk_num(self.new_mapping) self.assertEqual(2, size) def test_new_format_is_swap(self): expected_results = [True, False, False, False, False] for expected, bdm in zip(expected_results, self.new_mapping): res = block_device.new_format_is_swap(bdm) self.assertEqual(expected, res) def test_new_format_is_ephemeral(self): expected_results = [False, True, False, False, False] for expected, bdm in zip(expected_results, self.new_mapping): res = block_device.new_format_is_ephemeral(bdm) self.assertEqual(expected, res) def test_validate_device_name(self): for value in [' ', 10, None, 'a' * 260]: self.assertRaises(exception.InvalidBDMFormat, block_device.validate_device_name, value) def test_validate_and_default_volume_size(self): bdm = {} for value in [-1, 'a', 2.5]: bdm['volume_size'] = value self.assertRaises(exception.InvalidBDMFormat, block_device.validate_and_default_volume_size, bdm) def test_get_bdms_to_connect(self): root_bdm = {'device_name': 'vda', 'boot_index': 0} bdms = [root_bdm, {'device_name': 'vdb', 'boot_index': 1}, {'device_name': 'vdc', 'boot_index': -1}, {'device_name': 'vde', 'boot_index': None}, {'device_name': 'vdd'}] self.assertNotIn(root_bdm, block_device.get_bdms_to_connect(bdms, exclude_root_mapping=True)) self.assertIn(root_bdm, block_device.get_bdms_to_connect(bdms)) class TestBlockDeviceDict(test.NoDBTestCase): def setUp(self): super(TestBlockDeviceDict, self).setUp() BDM = block_device.BlockDeviceDict self.api_mapping = [ {'id': 1, 'instance_uuid': uuids.instance, 'device_name': '/dev/sdb1', 'source_type': 'blank', 'destination_type': 'local', 'delete_on_termination': True, 'guest_format': 'swap', 'boot_index': -1}, {'id': 2, 'instance_uuid': uuids.instance, 'device_name': '/dev/sdc1', 'source_type': 'blank', 'destination_type': 'local', 'delete_on_termination': True, 'boot_index': -1}, {'id': 3, 'instance_uuid': uuids.instance, 'device_name': '/dev/sda1', 'source_type': 'volume', 'destination_type': 'volume', 'uuid': 'fake-volume-id-1', 'boot_index': 0}, {'id': 4, 'instance_uuid': uuids.instance, 'device_name': '/dev/sda2', 'source_type': 'snapshot', 'destination_type': 'volume', 'uuid': 'fake-snapshot-id-1', 'boot_index': -1}, {'id': 5, 'instance_uuid': uuids.instance, 'no_device': True, 'device_name': '/dev/vdc'}, ] self.new_mapping = [ BDM({'id': 1, 'instance_uuid': uuids.instance, 'device_name': '/dev/sdb1', 'source_type': 'blank', 'destination_type': 'local', 'delete_on_termination': True, 'guest_format': 'swap', 'boot_index': -1}), BDM({'id': 2, 'instance_uuid': uuids.instance, 'device_name': '/dev/sdc1', 'source_type': 'blank', 'destination_type': 'local', 'delete_on_termination': True, 'boot_index': -1}), BDM({'id': 3, 'instance_uuid': uuids.instance, 'device_name': '/dev/sda1', 'source_type': 'volume', 'destination_type': 'volume', 'volume_id': 'fake-volume-id-1', 'connection_info': "{'fake': 'connection_info'}", 'boot_index': 0}), BDM({'id': 4, 'instance_uuid': uuids.instance, 'device_name': '/dev/sda2', 'source_type': 'snapshot', 'destination_type': 'volume', 'connection_info': "{'fake': 'connection_info'}", 'snapshot_id': 'fake-snapshot-id-1', 'volume_id': 'fake-volume-id-2', 'boot_index': -1}), BDM({'id': 5, 'instance_uuid': uuids.instance, 'no_device': True, 'device_name': '/dev/vdc'}), ] self.legacy_mapping = [ {'id': 1, 'instance_uuid': uuids.instance, 'device_name': '/dev/sdb1', 'delete_on_termination': True, 'virtual_name': 'swap'}, {'id': 2, 'instance_uuid': uuids.instance, 'device_name': '/dev/sdc1', 
'delete_on_termination': True, 'virtual_name': 'ephemeral0'}, {'id': 3, 'instance_uuid': uuids.instance, 'device_name': '/dev/sda1', 'volume_id': 'fake-volume-id-1', 'connection_info': "{'fake': 'connection_info'}"}, {'id': 4, 'instance_uuid': uuids.instance, 'device_name': '/dev/sda2', 'connection_info': "{'fake': 'connection_info'}", 'snapshot_id': 'fake-snapshot-id-1', 'volume_id': 'fake-volume-id-2'}, {'id': 5, 'instance_uuid': uuids.instance, 'no_device': True, 'device_name': '/dev/vdc'}, ] self.new_mapping_source_image = [ BDM({'id': 6, 'instance_uuid': uuids.instance, 'device_name': '/dev/sda3', 'source_type': 'image', 'destination_type': 'volume', 'connection_info': "{'fake': 'connection_info'}", 'volume_id': 'fake-volume-id-3', 'boot_index': -1}), BDM({'id': 7, 'instance_uuid': uuids.instance, 'device_name': '/dev/sda4', 'source_type': 'image', 'destination_type': 'local', 'connection_info': "{'fake': 'connection_info'}", 'image_id': 'fake-image-id-2', 'boot_index': -1}), ] self.legacy_mapping_source_image = [ {'id': 6, 'instance_uuid': uuids.instance, 'device_name': '/dev/sda3', 'connection_info': "{'fake': 'connection_info'}", 'volume_id': 'fake-volume-id-3'}, ] def test_init(self): def fake_validate(obj, dct): pass self.stub_out('nova.block_device.BlockDeviceDict._fields', set(['field1', 'field2'])) self.stub_out('nova.block_device.BlockDeviceDict._db_only_fields', set(['db_field1', 'db_field2'])) self.stub_out('nova.block_device.BlockDeviceDict._validate', fake_validate) # Make sure db fields are not picked up if they are not # in the original dict dev_dict = block_device.BlockDeviceDict({'field1': 'foo', 'field2': 'bar', 'db_field1': 'baz'}) self.assertIn('field1', dev_dict) self.assertIn('field2', dev_dict) self.assertIn('db_field1', dev_dict) self.assertNotIn('db_field2', dev_dict) # Make sure all expected fields are defaulted dev_dict = block_device.BlockDeviceDict({'field1': 'foo'}) self.assertIn('field1', dev_dict) self.assertIn('field2', dev_dict) self.assertIsNone(dev_dict['field2']) self.assertNotIn('db_field1', dev_dict) self.assertNotIn('db_field2', dev_dict) # Unless they are not meant to be dev_dict = block_device.BlockDeviceDict({'field1': 'foo'}, do_not_default=set(['field2'])) self.assertIn('field1', dev_dict) self.assertNotIn('field2', dev_dict) self.assertNotIn('db_field1', dev_dict) self.assertNotIn('db_field2', dev_dict) # Passing kwargs to constructor works dev_dict = block_device.BlockDeviceDict(field1='foo') self.assertIn('field1', dev_dict) self.assertIn('field2', dev_dict) self.assertIsNone(dev_dict['field2']) dev_dict = block_device.BlockDeviceDict( {'field1': 'foo'}, field2='bar') self.assertEqual('foo', dev_dict['field1']) self.assertEqual('bar', dev_dict['field2']) def test_init_prepend_dev_to_device_name(self): bdm = {'id': 3, 'instance_uuid': uuids.instance, 'device_name': 'vda', 'source_type': 'volume', 'destination_type': 'volume', 'volume_id': 'fake-volume-id-1', 'boot_index': 0} bdm_dict = block_device.BlockDeviceDict(bdm) self.assertEqual('/dev/vda', bdm_dict['device_name']) bdm['device_name'] = '/dev/vdb' bdm_dict = block_device.BlockDeviceDict(bdm) self.assertEqual('/dev/vdb', bdm_dict['device_name']) bdm['device_name'] = None bdm_dict = block_device.BlockDeviceDict(bdm) self.assertIsNone(bdm_dict['device_name']) def test_init_boolify_delete_on_termination(self): # Make sure that when delete_on_termination is not passed it's # still set to False and not None bdm = {'id': 3, 'instance_uuid': uuids.instance, 'device_name': 'vda', 
'source_type': 'volume', 'destination_type': 'volume', 'volume_id': 'fake-volume-id-1', 'boot_index': 0} bdm_dict = block_device.BlockDeviceDict(bdm) self.assertFalse(bdm_dict['delete_on_termination']) def test_validate(self): self.assertRaises(exception.InvalidBDMFormat, block_device.BlockDeviceDict, {'bogus_field': 'lame_val'}) lame_bdm = dict(self.new_mapping[2]) del lame_bdm['source_type'] self.assertRaises(exception.InvalidBDMFormat, block_device.BlockDeviceDict, lame_bdm) lame_bdm['no_device'] = True block_device.BlockDeviceDict(lame_bdm) lame_dev_bdm = dict(self.new_mapping[2]) lame_dev_bdm['device_name'] = "not a valid name" self.assertRaises(exception.InvalidBDMFormat, block_device.BlockDeviceDict, lame_dev_bdm) lame_dev_bdm['device_name'] = "" self.assertRaises(exception.InvalidBDMFormat, block_device.BlockDeviceDict, lame_dev_bdm) cool_volume_size_bdm = dict(self.new_mapping[2]) cool_volume_size_bdm['volume_size'] = '42' cool_volume_size_bdm = block_device.BlockDeviceDict( cool_volume_size_bdm) self.assertEqual(42, cool_volume_size_bdm['volume_size']) lame_volume_size_bdm = dict(self.new_mapping[2]) lame_volume_size_bdm['volume_size'] = 'some_non_int_string' self.assertRaises(exception.InvalidBDMFormat, block_device.BlockDeviceDict, lame_volume_size_bdm) truthy_bdm = dict(self.new_mapping[2]) truthy_bdm['delete_on_termination'] = '1' truthy_bdm = block_device.BlockDeviceDict(truthy_bdm) self.assertTrue(truthy_bdm['delete_on_termination']) verbose_bdm = dict(self.new_mapping[2]) verbose_bdm['boot_index'] = 'first' self.assertRaises(exception.InvalidBDMFormat, block_device.BlockDeviceDict, verbose_bdm) def test_from_legacy(self): for legacy, new in zip(self.legacy_mapping, self.new_mapping): self.assertThat( block_device.BlockDeviceDict.from_legacy(legacy), matchers.IsSubDictOf(new)) def test_from_legacy_mapping(self): def _get_image_bdms(bdms): return [bdm for bdm in bdms if bdm['source_type'] == 'image'] def _get_bootable_bdms(bdms): return [bdm for bdm in bdms if (bdm['boot_index'] is not None and bdm['boot_index'] >= 0)] new_no_img = block_device.from_legacy_mapping(self.legacy_mapping) self.assertEqual(0, len(_get_image_bdms(new_no_img))) for new, expected in zip(new_no_img, self.new_mapping): self.assertThat(new, matchers.IsSubDictOf(expected)) new_with_img = block_device.from_legacy_mapping( self.legacy_mapping, 'fake_image_ref') image_bdms = _get_image_bdms(new_with_img) boot_bdms = _get_bootable_bdms(new_with_img) self.assertEqual(1, len(image_bdms)) self.assertEqual(1, len(boot_bdms)) self.assertEqual(0, image_bdms[0]['boot_index']) self.assertEqual('image', boot_bdms[0]['source_type']) new_with_img_and_root = block_device.from_legacy_mapping( self.legacy_mapping, 'fake_image_ref', 'sda1') image_bdms = _get_image_bdms(new_with_img_and_root) boot_bdms = _get_bootable_bdms(new_with_img_and_root) self.assertEqual(0, len(image_bdms)) self.assertEqual(1, len(boot_bdms)) self.assertEqual(0, boot_bdms[0]['boot_index']) self.assertEqual('volume', boot_bdms[0]['source_type']) new_no_root = block_device.from_legacy_mapping( self.legacy_mapping, 'fake_image_ref', 'sda1', no_root=True) self.assertEqual(0, len(_get_image_bdms(new_no_root))) self.assertEqual(0, len(_get_bootable_bdms(new_no_root))) def test_from_api(self): for api, new in zip(self.api_mapping, self.new_mapping): new['connection_info'] = None if new['snapshot_id']: new['volume_id'] = None self.assertThat( block_device.BlockDeviceDict.from_api(api, False), matchers.IsSubDictOf(new)) def 
test_from_api_invalid_blank_id(self): api_dict = {'id': 1, 'source_type': 'blank', 'destination_type': 'volume', 'uuid': 'fake-volume-id-1', 'delete_on_termination': True, 'boot_index': -1} self.assertRaises(exception.InvalidBDMFormat, block_device.BlockDeviceDict.from_api, api_dict, False) def test_from_api_invalid_source_to_local_mapping(self): api_dict = {'id': 1, 'source_type': 'image', 'destination_type': 'local', 'uuid': 'fake-volume-id-1'} self.assertRaises(exception.InvalidBDMFormat, block_device.BlockDeviceDict.from_api, api_dict, False) def test_from_api_valid_source_to_local_mapping(self): api_dict = {'id': 1, 'source_type': 'image', 'destination_type': 'local', 'volume_id': 'fake-volume-id-1', 'uuid': 1, 'boot_index': 0} retexp = block_device.BlockDeviceDict( {'id': 1, 'source_type': 'image', 'image_id': 1, 'destination_type': 'local', 'volume_id': 'fake-volume-id-1', 'boot_index': 0}) self.assertEqual(retexp, block_device.BlockDeviceDict.from_api(api_dict, True)) def test_from_api_valid_source_to_local_mapping_with_string_bi(self): api_dict = {'id': 1, 'source_type': 'image', 'destination_type': 'local', 'volume_id': 'fake-volume-id-1', 'uuid': 1, 'boot_index': '0'} retexp = block_device.BlockDeviceDict( {'id': 1, 'source_type': 'image', 'image_id': 1, 'destination_type': 'local', 'volume_id': 'fake-volume-id-1', 'boot_index': 0}) self.assertEqual(retexp, block_device.BlockDeviceDict.from_api(api_dict, True)) def test_from_api_invalid_image_to_destination_local_mapping(self): api_dict = {'id': 1, 'source_type': 'image', 'destination_type': 'local', 'uuid': 'fake-volume-id-1', 'volume_type': 'fake-lvm-1', 'boot_index': 1} ex = self.assertRaises(exception.InvalidBDMFormat, block_device.BlockDeviceDict.from_api, api_dict, False) self.assertIn('Mapping image to local is not supported', six.text_type(ex)) def test_from_api_invalid_volume_type_to_destination_local_mapping(self): api_dict = {'id': 1, 'source_type': 'volume', 'destination_type': 'local', 'uuid': 'fake-volume-id-1', 'volume_type': 'fake-lvm-1'} ex = self.assertRaises(exception.InvalidBDMFormat, block_device.BlockDeviceDict.from_api, api_dict, False) self.assertIn('Specifying a volume_type with destination_type=local ' 'is not supported', six.text_type(ex)) def test_from_api_invalid_specify_volume_type_with_source_volume_mapping( self): api_dict = {'id': 1, 'source_type': 'volume', 'destination_type': 'volume', 'uuid': 'fake-volume-id-1', 'volume_type': 'fake-lvm-1'} ex = self.assertRaises(exception.InvalidBDMFormat, block_device.BlockDeviceDict.from_api, api_dict, False) self.assertIn('Specifying volume type to existing volume is ' 'not supported', six.text_type(ex)) def test_legacy(self): for legacy, new in zip(self.legacy_mapping, self.new_mapping): self.assertThat( legacy, matchers.IsSubDictOf(new.legacy())) def test_legacy_mapping(self): got_legacy = block_device.legacy_mapping(self.new_mapping) for legacy, expected in zip(got_legacy, self.legacy_mapping): self.assertThat(expected, matchers.IsSubDictOf(legacy)) def test_legacy_source_image(self): for legacy, new in zip(self.legacy_mapping_source_image, self.new_mapping_source_image): if new['destination_type'] == 'volume': self.assertThat(legacy, matchers.IsSubDictOf(new.legacy())) else: self.assertRaises(exception.InvalidBDMForLegacy, new.legacy) def test_legacy_mapping_source_image(self): got_legacy = block_device.legacy_mapping(self.new_mapping) for legacy, expected in zip(got_legacy, self.legacy_mapping): self.assertThat(expected, matchers.IsSubDictOf(legacy)) 
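The from_legacy/from_legacy_mapping assertions above rely on a fixed translation between legacy and new-style block device mappings. A rough standalone illustration of that translation (a hypothetical helper, not nova's block_device.from_legacy; it only covers the cases present in the fixtures above):

def sketch_from_legacy(legacy):
    # Hypothetical helper mirroring the fixture pairs in setUp():
    # 'swap'/'ephemeralN' virtual names become blank/local mappings, while
    # snapshot- or volume-backed entries become volume mappings.
    new = {k: v for k, v in legacy.items() if k != 'virtual_name'}
    virtual = legacy.get('virtual_name') or ''
    if virtual == 'swap' or virtual.startswith('ephemeral'):
        new.update(source_type='blank', destination_type='local',
                   boot_index=-1)
        if virtual == 'swap':
            new['guest_format'] = 'swap'
    elif legacy.get('snapshot_id'):
        new.update(source_type='snapshot', destination_type='volume')
    elif legacy.get('volume_id'):
        new.update(source_type='volume', destination_type='volume')
    return new


assert sketch_from_legacy(
    {'device_name': '/dev/sdb1', 'virtual_name': 'swap',
     'delete_on_termination': True})['guest_format'] == 'swap'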
def test_legacy_mapping_from_object_list(self): bdm1 = objects.BlockDeviceMapping() bdm1 = objects.BlockDeviceMapping._from_db_object( None, bdm1, fake_block_device.FakeDbBlockDeviceDict( self.new_mapping[0])) bdm2 = objects.BlockDeviceMapping() bdm2 = objects.BlockDeviceMapping._from_db_object( None, bdm2, fake_block_device.FakeDbBlockDeviceDict( self.new_mapping[1])) bdmlist = objects.BlockDeviceMappingList() bdmlist.objects = [bdm1, bdm2] block_device.legacy_mapping(bdmlist) def test_image_mapping(self): removed_fields = ['id', 'instance_uuid', 'connection_info', 'created_at', 'updated_at', 'deleted_at', 'deleted'] for bdm in self.new_mapping: mapping_bdm = fake_block_device.FakeDbBlockDeviceDict( bdm).get_image_mapping() for fld in removed_fields: self.assertNotIn(fld, mapping_bdm) def _test_snapshot_from_bdm(self, template): snapshot = block_device.snapshot_from_bdm('new-snapshot-id', template) self.assertEqual('new-snapshot-id', snapshot['snapshot_id']) self.assertEqual('snapshot', snapshot['source_type']) self.assertEqual('volume', snapshot['destination_type']) self.assertEqual(template.volume_size, snapshot['volume_size']) self.assertEqual(template.delete_on_termination, snapshot['delete_on_termination']) self.assertEqual(template.device_name, snapshot['device_name']) for key in ['disk_bus', 'device_type', 'boot_index']: self.assertEqual(template[key], snapshot[key]) def test_snapshot_from_bdm(self): for bdm in self.new_mapping: self._test_snapshot_from_bdm(objects.BlockDeviceMapping(**bdm)) def test_snapshot_from_object(self): for bdm in self.new_mapping[:-1]: obj = objects.BlockDeviceMapping() obj = objects.BlockDeviceMapping._from_db_object( None, obj, fake_block_device.FakeDbBlockDeviceDict( bdm)) self._test_snapshot_from_bdm(obj) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/test_cache.py0000664000175000017500000000504100000000000020326 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
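For the snapshot_from_bdm checks that close test_block_device.py above, the expected result shape can be summarised as a standalone sketch (a hypothetical helper, not nova's block_device.snapshot_from_bdm): the new mapping is snapshot-backed but keeps the template's per-device attributes.

def sketch_snapshot_from_bdm(snapshot_id, template):
    # Hypothetical helper showing the shape _test_snapshot_from_bdm asserts.
    return {
        'snapshot_id': snapshot_id,
        'source_type': 'snapshot',
        'destination_type': 'volume',
        'volume_size': template.get('volume_size'),
        'delete_on_termination': template.get('delete_on_termination'),
        'device_name': template.get('device_name'),
        'disk_bus': template.get('disk_bus'),
        'device_type': template.get('device_type'),
        'boot_index': template.get('boot_index'),
    }


snap = sketch_snapshot_from_bdm(
    'new-snapshot-id',
    {'device_name': '/dev/sda1', 'volume_size': 1, 'boot_index': 0})
assert snap['source_type'] == 'snapshot' and snap['boot_index'] == 0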
import mock

from nova import cache_utils
from nova import test


class TestOsloCache(test.NoDBTestCase):
    def test_get_default_cache_region(self):
        region = cache_utils._get_default_cache_region(expiration_time=60)
        self.assertEqual(60, region.expiration_time)
        self.assertIsNotNone(region)

    def test_get_default_cache_region_default_expiration_time(self):
        region = cache_utils._get_default_cache_region(expiration_time=0)
        # default oslo.cache expiration_time value 600 was taken
        self.assertEqual(600, region.expiration_time)
        self.assertIsNotNone(region)

    @mock.patch('dogpile.cache.region.CacheRegion.configure')
    def test_get_custom_cache_region(self, mock_cacheregion):
        self.assertRaises(RuntimeError,
                          cache_utils._get_custom_cache_region)
        self.assertIsNotNone(
            cache_utils._get_custom_cache_region(
                backend='oslo_cache.dict'))
        self.assertIsNotNone(
            cache_utils._get_custom_cache_region(
                backend='dogpile.cache.memcached',
                url=['localhost:11211']))
        mock_cacheregion.assert_has_calls(
            [mock.call('oslo_cache.dict',
                       arguments={'expiration_time': 604800},
                       expiration_time=604800),
             mock.call('dogpile.cache.memcached',
                       arguments={'url': ['localhost:11211']},
                       expiration_time=604800)]
        )

    @mock.patch('dogpile.cache.region.CacheRegion.configure')
    def test_get_memcached_client(self, mock_cacheregion):
        self.flags(group='cache', enabled=True)
        self.flags(group='cache', memcache_servers=['localhost:11211'])
        self.assertIsNotNone(
            cache_utils.get_memcached_client(expiration_time=60))
        methods_called = [a[0] for n, a, k in mock_cacheregion.mock_calls]
        self.assertEqual(['dogpile.cache.null'], methods_called)
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/test_cinder.py0000664000175000017500000002367000000000000020537 0ustar00zuulzuul00000000000000
# Copyright 2011 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
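TestOsloCache above drives nova.cache_utils, which wraps dogpile.cache regions. A minimal sketch of the underlying region pattern (illustrative only, using the in-memory backend; the expiration_time mirrors the default asserted above):

from dogpile.cache import make_region

# Configure a region with a backend and an expiration time, then populate
# it lazily with get_or_create(); the creator only runs on a cache miss.
region = make_region().configure('dogpile.cache.memory', expiration_time=600)
value = region.get_or_create('cache', lambda: 'fake_value')
assert value == 'fake_value'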
import collections from cinderclient.v3 import client as cinder_client_v3 import mock from requests_mock.contrib import fixture import nova.conf from nova import context from nova import exception from nova import test from nova.volume import cinder CONF = nova.conf.CONF _image_metadata = { 'kernel_id': 'fake', 'ramdisk_id': 'fake' } _volume_id = "6edbc2f4-1507-44f8-ac0d-eed1d2608d38" _instance_uuid = "f4fda93b-06e0-4743-8117-bc8bcecd651b" _instance_uuid_2 = "f4fda93b-06e0-4743-8117-bc8bcecd651c" _attachment_id = "3b4db356-253d-4fab-bfa0-e3626c0b8405" _attachment_id_2 = "3b4db356-253d-4fab-bfa0-e3626c0b8406" _device = "/dev/vdb" _device_2 = "/dev/vdc" _volume_attachment = \ [{"server_id": _instance_uuid, "attachment_id": _attachment_id, "host_name": "", "volume_id": _volume_id, "device": _device, "id": _volume_id }] _volume_attachment_2 = _volume_attachment _volume_attachment_2.append({"server_id": _instance_uuid_2, "attachment_id": _attachment_id_2, "host_name": "", "volume_id": _volume_id, "device": _device_2, "id": _volume_id}) exp_volume_attachment = collections.OrderedDict() exp_volume_attachment[_instance_uuid] = {'attachment_id': _attachment_id, 'mountpoint': _device} exp_volume_attachment_2 = exp_volume_attachment exp_volume_attachment_2[_instance_uuid_2] = {'attachment_id': _attachment_id_2, 'mountpoint': _device_2} class BaseCinderTestCase(object): def setUp(self): super(BaseCinderTestCase, self).setUp() cinder.reset_globals() self.requests = self.useFixture(fixture.Fixture()) self.api = cinder.API() self.context = context.RequestContext('username', 'project_id', auth_token='token', service_catalog=self.CATALOG) def flags(self, *args, **kwargs): super(BaseCinderTestCase, self).flags(*args, **kwargs) cinder.reset_globals() def create_client(self): return cinder.cinderclient(self.context) def test_context_with_catalog(self): self.assertEqual(self.URL, self.create_client().client.get_endpoint()) def test_cinder_http_retries(self): retries = 42 self.flags(http_retries=retries, group='cinder') self.assertEqual(retries, self.create_client().client.connect_retries) def test_cinder_api_insecure(self): # The True/False negation is awkward, but better for the client # to pass us insecure=True and we check verify_cert == False self.flags(insecure=True, group='cinder') self.assertFalse(self.create_client().client.session.verify) def test_cinder_http_timeout(self): timeout = 123 self.flags(timeout=timeout, group='cinder') self.assertEqual(timeout, self.create_client().client.session.timeout) def test_cinder_api_cacert_file(self): cacert = "/etc/ssl/certs/ca-certificates.crt" self.flags(cafile=cacert, group='cinder') self.assertEqual(cacert, self.create_client().client.session.verify) # NOTE(mriedem): This does not extend BaseCinderTestCase because Cinder v1 is # no longer supported, this is just to test that trying to use v1 fails. 
class CinderV1TestCase(test.NoDBTestCase): @mock.patch('nova.volume.cinder.cinder_client.get_volume_api_from_url', return_value='1') def test_cinderclient_unsupported_v1(self, get_api_version): """Tests that we fail if trying to use Cinder v1.""" self.flags(catalog_info='volume:cinder:publicURL', group='cinder') # setup mocks get_endpoint = mock.Mock( return_value='http://localhost:8776/v1/%(project_id)s') fake_session = mock.Mock(get_endpoint=get_endpoint) ctxt = context.get_context() with mock.patch.object(cinder, '_SESSION', fake_session): self.assertRaises(exception.UnsupportedCinderAPIVersion, cinder.cinderclient, ctxt) get_api_version.assert_called_once_with(get_endpoint.return_value) # NOTE(mriedem): This does not extend BaseCinderTestCase because Cinder v2 is # no longer supported, this is just to test that trying to use v2 fails. class CinderV2TestCase(test.NoDBTestCase): @mock.patch('nova.volume.cinder.cinder_client.get_volume_api_from_url', return_value='2') def test_cinderclient_unsupported_v2(self, get_api_version): """Tests that we fail if trying to use Cinder v2.""" self.flags(catalog_info='volumev2:cinderv2:publicURL', group='cinder') # setup mocks get_endpoint = mock.Mock( return_value='http://localhost:8776/v2/%(project_id)s') fake_session = mock.Mock(get_endpoint=get_endpoint) ctxt = context.get_context() with mock.patch.object(cinder, '_SESSION', fake_session): self.assertRaises(exception.UnsupportedCinderAPIVersion, cinder.cinderclient, ctxt) get_api_version.assert_called_once_with(get_endpoint.return_value) class CinderV3TestCase(BaseCinderTestCase, test.NoDBTestCase): """Test case for cinder volume v3 api.""" URL = "http://localhost:8776/v3/project_id" CATALOG = [{ "type": "volumev3", "name": "cinderv3", "endpoints": [{"publicURL": URL}] }] def setUp(self): super(CinderV3TestCase, self).setUp() self.addCleanup(CONF.reset) def create_client(self): c = super(CinderV3TestCase, self).create_client() self.assertIsInstance(c, cinder_client_v3.Client) self.assertEqual('3.0', c.api_version.get_string()) return c def stub_volume(self, **kwargs): volume = { 'name': None, 'description': None, "attachments": [], "availability_zone": "cinderv2", "created_at": "2013-08-10T00:00:00.000000", "id": _volume_id, "metadata": {}, "size": 1, "snapshot_id": None, "status": "available", "volume_type": "None", "bootable": "true", "multiattach": "true" } volume.update(kwargs) return volume def test_cinder_endpoint_template(self): endpoint = 'http://other_host:8776/v3/%(project_id)s' self.flags(endpoint_template=endpoint, group='cinder') self.assertEqual('http://other_host:8776/v3/project_id', self.create_client().client.endpoint_override) def test_get_non_existing_volume(self): self.requests.get(self.URL + '/volumes/nonexisting', status_code=404) self.assertRaises(exception.VolumeNotFound, self.api.get, self.context, 'nonexisting') def test_volume_with_image_metadata(self): v = self.stub_volume(id='1234', volume_image_metadata=_image_metadata) self.requests.get(self.URL + '/volumes/5678', json={'volume': v}) volume = self.api.get(self.context, '5678') self.assertIn('volume_image_metadata', volume) self.assertEqual(_image_metadata, volume['volume_image_metadata']) def test_volume_without_attachment(self): v = self.stub_volume(id='1234') self.requests.get(self.URL + '/volumes/5678', json={'volume': v}) volume = self.api.get(self.context, '5678') self.assertIsNone(volume.get('attachments')) def test_volume_with_one_attachment(self): v = self.stub_volume(id='1234', attachments=_volume_attachment) 
self.requests.get(self.URL + '/volumes/5678', json={'volume': v}) volume = self.api.get(self.context, '5678') self.assertIn('attachments', volume) self.assertEqual(exp_volume_attachment, volume['attachments']) def test_volume_with_two_attachments(self): v = self.stub_volume(id='1234', attachments=_volume_attachment_2) self.requests.get(self.URL + '/volumes/5678', json={'volume': v}) volume = self.api.get(self.context, '5678') self.assertIn('attachments', volume) self.assertEqual(exp_volume_attachment_2, volume['attachments']) def test_create_client_with_no_service_name(self): """Tests that service_name is not required and not passed through when constructing the cinder client Client object if it's not configured. """ self.flags(catalog_info='volumev3::public', group='cinder') with mock.patch('cinderclient.client.Client') as mock_client: # We don't use self.create_client() because that has additional # assertions that we don't care about in this test. We just care # about how the client is created, not what is returned. cinder.cinderclient(self.context) self.assertEqual(1, len(mock_client.call_args_list)) call_kwargs = mock_client.call_args_list[0][1] # Make sure service_type and interface are passed through. self.assertEqual('volumev3', call_kwargs['service_type']) self.assertEqual('public', call_kwargs['interface']) # And service_name is not passed through. self.assertNotIn('service_name', call_kwargs) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_conf.py0000664000175000017500000000771700000000000020224 0ustar00zuulzuul00000000000000# Copyright 2016 HPE, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
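The CinderV3TestCase methods above stub the volume API over HTTP through the requests_mock fixture rather than talking to a real endpoint. A standalone sketch of that stubbing pattern (illustrative only; the URL and payload are made up for the example):

import requests
import requests_mock

with requests_mock.Mocker() as m:
    # Register a canned response for the volume GET the client would issue.
    m.get('http://localhost:8776/v3/project_id/volumes/5678',
          json={'volume': {'id': '1234', 'status': 'available'}})
    resp = requests.get('http://localhost:8776/v3/project_id/volumes/5678')
    assert resp.json()['volume']['status'] == 'available'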
import os import tempfile import mock from oslo_config import cfg import nova.conf.compute from nova import config from nova import test class ConfTest(test.NoDBTestCase): """This is a test and pattern for parsing tricky options.""" class TestConfigOpts(cfg.ConfigOpts): def __call__(self, args=None, default_config_files=None): if default_config_files is None: default_config_files = [] return cfg.ConfigOpts.__call__( self, args=args, prog='test', version='1.0', usage='%(prog)s FOO BAR', default_config_files=default_config_files, validate_default_values=True) def setUp(self): super(ConfTest, self).setUp() self.conf = self.TestConfigOpts() self.tempdirs = [] def create_tempfiles(self, files, ext='.conf'): tempfiles = [] for (basename, contents) in files: if not os.path.isabs(basename): (fd, path) = tempfile.mkstemp(prefix=basename, suffix=ext) else: path = basename + ext fd = os.open(path, os.O_CREAT | os.O_WRONLY) tempfiles.append(path) try: os.write(fd, contents.encode('utf-8')) finally: os.close(fd) return tempfiles def test_reserved_huge_page(self): nova.conf.compute.register_opts(self.conf) paths = self.create_tempfiles( [('1', '[DEFAULT]\n' 'reserved_huge_pages = node:0,size:2048,count:64\n')]) self.conf(['--config-file', paths[0]]) # NOTE(sdague): In oslo.config if you specify a parameter # incorrectly, it silently drops it from the conf. Which means # the attr doesn't exist at all. The first attr test here is # for an unrelated boolean option that is using defaults (so # will always work. It's a basic control that *anything* is working. self.assertTrue(hasattr(self.conf, 'force_raw_images')) self.assertTrue(hasattr(self.conf, 'reserved_huge_pages'), "Parse error with reserved_huge_pages") # NOTE(sdague): Yes, this actually parses as an array holding # a dict. actual = [{'node': '0', 'size': '2048', 'count': '64'}] self.assertEqual(actual, self.conf.reserved_huge_pages) class TestParseArgs(test.NoDBTestCase): def setUp(self): super(TestParseArgs, self).setUp() m = mock.patch('nova.db.sqlalchemy.api.configure') self.nova_db_config_mock = m.start() self.addCleanup(self.nova_db_config_mock.stop) @mock.patch.object(config.log, 'register_options') def test_parse_args_glance_debug_false(self, register_options): self.flags(debug=False, group='glance') config.parse_args([], configure_db=False, init_rpc=False) self.assertIn('glanceclient=WARN', config.CONF.default_log_levels) self.nova_db_config_mock.assert_not_called() @mock.patch.object(config.log, 'register_options') def test_parse_args_glance_debug_true(self, register_options): self.flags(debug=True, group='glance') config.parse_args([], configure_db=True, init_rpc=False) self.assertIn('glanceclient=DEBUG', config.CONF.default_log_levels) self.nova_db_config_mock.assert_called_once_with(config.CONF) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/test_configdrive2.py0000664000175000017500000001161400000000000021647 0ustar00zuulzuul00000000000000# Copyright 2012 Michael Still and Canonical Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import os import tempfile import mock from oslo_config import cfg from oslo_utils import fileutils from nova import context from nova import test from nova.tests.unit import fake_instance from nova import utils from nova.virt import configdrive CONF = cfg.CONF class FakeInstanceMD(object): def metadata_for_config_drive(self): yield ('this/is/a/path/hello', 'This is some content') yield ('this/is/a/path/hi', b'This is some other content') class ConfigDriveTestCase(test.NoDBTestCase): @mock.patch('oslo_concurrency.processutils.execute', return_value=None) def test_create_configdrive_iso(self, mock_execute): CONF.set_override('config_drive_format', 'iso9660') imagefile = None try: with configdrive.ConfigDriveBuilder(FakeInstanceMD()) as c: (fd, imagefile) = tempfile.mkstemp(prefix='cd_iso_') os.close(fd) c.make_drive(imagefile) mock_execute.assert_called_once_with('genisoimage', '-o', mock.ANY, '-ldots', '-allow-lowercase', '-allow-multidot', '-l', '-publisher', mock.ANY, '-quiet', '-J', '-r', '-V', 'config-2', mock.ANY, attempts=1, run_as_root=False) finally: if imagefile: fileutils.delete_if_exists(imagefile) @mock.patch('nova.privsep.fs.unprivileged_mkfs', return_value=None) @mock.patch('nova.privsep.fs.mount', return_value=('', '')) @mock.patch('nova.privsep.fs.umount', return_value=None) def test_create_configdrive_vfat( self, mock_umount, mock_mount, mock_mkfs): CONF.set_override('config_drive_format', 'vfat') imagefile = None try: with configdrive.ConfigDriveBuilder(FakeInstanceMD()) as c: (fd, imagefile) = tempfile.mkstemp(prefix='cd_vfat_') os.close(fd) c.make_drive(imagefile) mock_mkfs.assert_called_once_with('vfat', mock.ANY, label='config-2') mock_mount.assert_called_once() mock_umount.assert_called_once() # NOTE(mikal): we can't check for a VFAT output here because the # filesystem creation stuff has been mocked out because it # requires root permissions finally: if imagefile: fileutils.delete_if_exists(imagefile) def test_config_drive_required_by_image_property(self): inst = fake_instance.fake_instance_obj(context.get_admin_context()) inst.config_drive = '' inst.system_metadata = { utils.SM_IMAGE_PROP_PREFIX + 'img_config_drive': 'mandatory'} self.assertTrue(configdrive.required_by(inst)) inst.system_metadata = { utils.SM_IMAGE_PROP_PREFIX + 'img_config_drive': 'optional'} self.assertFalse(configdrive.required_by(inst)) @mock.patch.object(configdrive, 'required_by', return_value=False) def test_config_drive_update_instance_required_by_false(self, mock_required): inst = fake_instance.fake_instance_obj(context.get_admin_context()) inst.config_drive = '' configdrive.update_instance(inst) self.assertEqual('', inst.config_drive) inst.config_drive = True configdrive.update_instance(inst) self.assertTrue(inst.config_drive) @mock.patch.object(configdrive, 'required_by', return_value=True) def test_config_drive_update_instance(self, mock_required): inst = fake_instance.fake_instance_obj(context.get_admin_context()) inst.config_drive = '' configdrive.update_instance(inst) self.assertTrue(inst.config_drive) inst.config_drive = True configdrive.update_instance(inst) self.assertTrue(inst.config_drive) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_context.py0000664000175000017500000005412000000000000020751 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # # Licensed under the 
Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_context import context as o_context from oslo_context import fixture as o_fixture from oslo_utils.fixture import uuidsentinel as uuids from nova import context from nova import exception from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures class ContextTestCase(test.NoDBTestCase): # NOTE(danms): Avoid any cells setup by claiming we will # do things ourselves. USES_DB_SELF = True def setUp(self): super(ContextTestCase, self).setUp() self.useFixture(o_fixture.ClearRequestContext()) def test_request_context_elevated(self): user_ctxt = context.RequestContext('111', '222', is_admin=False) self.assertFalse(user_ctxt.is_admin) admin_ctxt = user_ctxt.elevated() self.assertTrue(admin_ctxt.is_admin) self.assertIn('admin', admin_ctxt.roles) self.assertFalse(user_ctxt.is_admin) self.assertNotIn('admin', user_ctxt.roles) def test_request_context_sets_is_admin(self): ctxt = context.RequestContext('111', '222', roles=['admin', 'weasel']) self.assertTrue(ctxt.is_admin) def test_request_context_sets_is_admin_by_role(self): ctxt = context.RequestContext('111', '222', roles=['administrator']) self.assertTrue(ctxt.is_admin) def test_request_context_sets_is_admin_upcase(self): ctxt = context.RequestContext('111', '222', roles=['Admin', 'weasel']) self.assertTrue(ctxt.is_admin) def test_request_context_read_deleted(self): ctxt = context.RequestContext('111', '222', read_deleted='yes') self.assertEqual('yes', ctxt.read_deleted) ctxt.read_deleted = 'no' self.assertEqual('no', ctxt.read_deleted) def test_request_context_read_deleted_invalid(self): self.assertRaises(ValueError, context.RequestContext, '111', '222', read_deleted=True) ctxt = context.RequestContext('111', '222') self.assertRaises(ValueError, setattr, ctxt, 'read_deleted', True) def test_service_catalog_default(self): ctxt = context.RequestContext('111', '222') self.assertEqual([], ctxt.service_catalog) ctxt = context.RequestContext('111', '222', service_catalog=[]) self.assertEqual([], ctxt.service_catalog) ctxt = context.RequestContext('111', '222', service_catalog=None) self.assertEqual([], ctxt.service_catalog) def test_service_catalog_filter(self): service_catalog = [ {u'type': u'compute', u'name': u'nova'}, {u'type': u's3', u'name': u's3'}, {u'type': u'image', u'name': u'glance'}, {u'type': u'volumev3', u'name': u'cinderv3'}, {u'type': u'network', u'name': u'neutron'}, {u'type': u'ec2', u'name': u'ec2'}, {u'type': u'object-store', u'name': u'swift'}, {u'type': u'identity', u'name': u'keystone'}, {u'type': u'block-storage', u'name': u'cinder'}, {u'type': None, u'name': u'S_withouttype'}, {u'type': u'vo', u'name': u'S_partofvolume'}] volume_catalog = [{u'type': u'image', u'name': u'glance'}, {u'type': u'volumev3', u'name': u'cinderv3'}, {u'type': u'network', u'name': u'neutron'}, {u'type': u'block-storage', u'name': u'cinder'}] ctxt = context.RequestContext('111', '222', service_catalog=service_catalog) self.assertEqual(volume_catalog, ctxt.service_catalog) def 
test_to_dict_from_dict_no_log(self): warns = [] def stub_warn(msg, *a, **kw): if (a and len(a) == 1 and isinstance(a[0], dict) and a[0]): a = a[0] warns.append(str(msg) % a) self.stub_out('nova.context.LOG.warning', stub_warn) ctxt = context.RequestContext('111', '222', roles=['admin', 'weasel']) context.RequestContext.from_dict(ctxt.to_dict()) self.assertEqual(0, len(warns), warns) def test_store_when_no_overwrite(self): # If no context exists we store one even if overwrite is false # (since we are not overwriting anything). ctx = context.RequestContext('111', '222', overwrite=False) self.assertIs(o_context.get_current(), ctx) def test_no_overwrite(self): # If there is already a context in the cache a new one will # not overwrite it if overwrite=False. ctx1 = context.RequestContext('111', '222', overwrite=True) context.RequestContext('333', '444', overwrite=False) self.assertIs(o_context.get_current(), ctx1) def test_get_context_no_overwrite(self): # If there is already a context in the cache creating another context # should not overwrite it. ctx1 = context.RequestContext('111', '222', overwrite=True) context.get_context() self.assertIs(ctx1, o_context.get_current()) def test_admin_no_overwrite(self): # If there is already a context in the cache creating an admin # context will not overwrite it. ctx1 = context.RequestContext('111', '222', overwrite=True) context.get_admin_context() self.assertIs(o_context.get_current(), ctx1) def test_convert_from_rc_to_dict(self): ctx = context.RequestContext( 111, 222, request_id='req-679033b7-1755-4929-bf85-eb3bfaef7e0b', timestamp='2015-03-02T22:31:56.641629') values2 = ctx.to_dict() expected_values = {'auth_token': None, 'domain': None, 'is_admin': False, 'is_admin_project': True, 'project_id': 222, 'project_domain': None, 'project_name': None, 'quota_class': None, 'read_deleted': 'no', 'read_only': False, 'remote_address': None, 'request_id': 'req-679033b7-1755-4929-bf85-eb3bfaef7e0b', 'resource_uuid': None, 'roles': [], 'service_catalog': [], 'show_deleted': False, 'tenant': 222, 'timestamp': '2015-03-02T22:31:56.641629', 'user': 111, 'user_domain': None, 'user_id': 111, 'user_identity': '111 222 - - -', 'user_name': None} for k, v in expected_values.items(): self.assertIn(k, values2) self.assertEqual(values2[k], v) @mock.patch.object(context.policy, 'authorize') def test_can(self, mock_authorize): mock_authorize.return_value = True ctxt = context.RequestContext('111', '222') result = ctxt.can(mock.sentinel.rule) self.assertTrue(result) mock_authorize.assert_called_once_with( ctxt, mock.sentinel.rule, None) @mock.patch.object(context.policy, 'authorize') def test_can_fatal(self, mock_authorize): mock_authorize.side_effect = exception.Forbidden ctxt = context.RequestContext('111', '222') self.assertRaises(exception.Forbidden, ctxt.can, mock.sentinel.rule) @mock.patch.object(context.policy, 'authorize') def test_can_non_fatal(self, mock_authorize): mock_authorize.side_effect = exception.Forbidden ctxt = context.RequestContext('111', '222') result = ctxt.can(mock.sentinel.rule, mock.sentinel.target, fatal=False) self.assertFalse(result) mock_authorize.assert_called_once_with(ctxt, mock.sentinel.rule, mock.sentinel.target) @mock.patch('nova.rpc.create_transport') @mock.patch('nova.db.api.create_context_manager') def test_target_cell(self, mock_create_ctxt_mgr, mock_rpc): mock_create_ctxt_mgr.return_value = mock.sentinel.cdb mock_rpc.return_value = mock.sentinel.cmq ctxt = context.RequestContext('111', '222', roles=['admin', 'weasel']) # Verify the 
existing db_connection, if any, is restored ctxt.db_connection = mock.sentinel.db_conn ctxt.mq_connection = mock.sentinel.mq_conn mapping = objects.CellMapping(database_connection='fake://', transport_url='fake://', uuid=uuids.cell) with context.target_cell(ctxt, mapping) as cctxt: self.assertEqual(cctxt.db_connection, mock.sentinel.cdb) self.assertEqual(cctxt.mq_connection, mock.sentinel.cmq) self.assertEqual(cctxt.cell_uuid, mapping.uuid) self.assertEqual(mock.sentinel.db_conn, ctxt.db_connection) self.assertEqual(mock.sentinel.mq_conn, ctxt.mq_connection) self.assertIsNone(ctxt.cell_uuid) # Test again now that we have populated the cache with context.target_cell(ctxt, mapping) as cctxt: self.assertEqual(cctxt.db_connection, mock.sentinel.cdb) self.assertEqual(cctxt.mq_connection, mock.sentinel.cmq) self.assertEqual(cctxt.cell_uuid, mapping.uuid) @mock.patch('nova.rpc.create_transport') @mock.patch('nova.db.api.create_context_manager') def test_target_cell_unset(self, mock_create_ctxt_mgr, mock_rpc): """Tests that passing None as the mapping will temporarily untarget any previously set cell context. """ mock_create_ctxt_mgr.return_value = mock.sentinel.cdb mock_rpc.return_value = mock.sentinel.cmq ctxt = context.RequestContext('111', '222', roles=['admin', 'weasel']) ctxt.db_connection = mock.sentinel.db_conn ctxt.mq_connection = mock.sentinel.mq_conn with context.target_cell(ctxt, None) as cctxt: self.assertIsNone(cctxt.db_connection) self.assertIsNone(cctxt.mq_connection) self.assertEqual(mock.sentinel.db_conn, ctxt.db_connection) self.assertEqual(mock.sentinel.mq_conn, ctxt.mq_connection) @mock.patch('nova.context.set_target_cell') def test_target_cell_regenerates(self, mock_set): ctxt = context.RequestContext('fake', 'fake') # Set a non-tracked property on the context to make sure it # does not make it to the targeted one (like a copy would do) ctxt.sentinel = mock.sentinel.parent with context.target_cell(ctxt, mock.sentinel.cm) as cctxt: # Should be a different object self.assertIsNot(cctxt, ctxt) # Should not have inherited the non-tracked property self.assertFalse(hasattr(cctxt, 'sentinel'), 'Targeted context was copied from original') # Set another non-tracked property cctxt.sentinel = mock.sentinel.child # Make sure we didn't pollute the original context self.assertNotEqual(ctxt.sentinel, mock.sentinel.child) def test_get_context(self): ctxt = context.get_context() self.assertIsNone(ctxt.user_id) self.assertIsNone(ctxt.project_id) self.assertFalse(ctxt.is_admin) @mock.patch('nova.rpc.create_transport') @mock.patch('nova.db.api.create_context_manager') def test_target_cell_caching(self, mock_create_cm, mock_create_tport): mock_create_cm.return_value = mock.sentinel.db_conn_obj mock_create_tport.return_value = mock.sentinel.mq_conn_obj ctxt = context.get_context() mapping = objects.CellMapping(database_connection='fake://db', transport_url='fake://mq', uuid=uuids.cell) # First call should create new connection objects. with context.target_cell(ctxt, mapping) as cctxt: self.assertEqual(mock.sentinel.db_conn_obj, cctxt.db_connection) self.assertEqual(mock.sentinel.mq_conn_obj, cctxt.mq_connection) mock_create_cm.assert_called_once_with('fake://db') mock_create_tport.assert_called_once_with('fake://mq') # Second call should use cached objects. 
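        # Reset the factory mocks first so the asserts below show that the
        # cached connection objects are reused instead of being recreated.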
mock_create_cm.reset_mock() mock_create_tport.reset_mock() with context.target_cell(ctxt, mapping) as cctxt: self.assertEqual(mock.sentinel.db_conn_obj, cctxt.db_connection) self.assertEqual(mock.sentinel.mq_conn_obj, cctxt.mq_connection) mock_create_cm.assert_not_called() mock_create_tport.assert_not_called() def test_is_cell_failure_sentinel(self): record = context.did_not_respond_sentinel self.assertTrue(context.is_cell_failure_sentinel(record)) record = TypeError() self.assertTrue(context.is_cell_failure_sentinel(record)) record = objects.Instance() self.assertFalse(context.is_cell_failure_sentinel(record)) @mock.patch('nova.context.target_cell') @mock.patch('nova.objects.InstanceList.get_by_filters') def test_scatter_gather_cells(self, mock_get_inst, mock_target_cell): self.useFixture(nova_fixtures.SpawnIsSynchronousFixture()) ctxt = context.get_context() mapping = objects.CellMapping(database_connection='fake://db', transport_url='fake://mq', uuid=uuids.cell) mappings = objects.CellMappingList(objects=[mapping]) filters = {'deleted': False} context.scatter_gather_cells( ctxt, mappings, 60, objects.InstanceList.get_by_filters, filters, sort_dir='foo') mock_get_inst.assert_called_once_with( mock_target_cell.return_value.__enter__.return_value, filters, sort_dir='foo') @mock.patch('nova.context.LOG.warning') @mock.patch('eventlet.timeout.Timeout') @mock.patch('eventlet.queue.LightQueue.get') @mock.patch('nova.objects.InstanceList.get_by_filters') def test_scatter_gather_cells_timeout(self, mock_get_inst, mock_get_result, mock_timeout, mock_log_warning): # This is needed because we're mocking get_by_filters. self.useFixture(nova_fixtures.SpawnIsSynchronousFixture()) ctxt = context.get_context() mapping0 = objects.CellMapping(database_connection='fake://db0', transport_url='none:///', uuid=objects.CellMapping.CELL0_UUID) mapping1 = objects.CellMapping(database_connection='fake://db1', transport_url='fake://mq1', uuid=uuids.cell1) mappings = objects.CellMappingList(objects=[mapping0, mapping1]) # Simulate cell1 not responding. mock_get_result.side_effect = [(mapping0.uuid, mock.sentinel.instances), exception.CellTimeout()] results = context.scatter_gather_cells( ctxt, mappings, 30, objects.InstanceList.get_by_filters) self.assertEqual(2, len(results)) self.assertIn(mock.sentinel.instances, results.values()) self.assertIn(context.did_not_respond_sentinel, results.values()) mock_timeout.assert_called_once_with(30, exception.CellTimeout) self.assertTrue(mock_log_warning.called) @mock.patch('nova.context.LOG.exception') @mock.patch('nova.objects.InstanceList.get_by_filters') def test_scatter_gather_cells_exception(self, mock_get_inst, mock_log_exception): # This is needed because we're mocking get_by_filters. self.useFixture(nova_fixtures.SpawnIsSynchronousFixture()) ctxt = context.get_context() mapping0 = objects.CellMapping(database_connection='fake://db0', transport_url='none:///', uuid=objects.CellMapping.CELL0_UUID) mapping1 = objects.CellMapping(database_connection='fake://db1', transport_url='fake://mq1', uuid=uuids.cell1) mappings = objects.CellMappingList(objects=[mapping0, mapping1]) # Simulate cell1 raising an exception. 
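        # The side_effect entries map to the cells in order: cell0 returns
        # instances, cell1 raises a non-Nova exception.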
mock_get_inst.side_effect = [mock.sentinel.instances, test.TestingException()] filters = {'deleted': False} results = context.scatter_gather_cells( ctxt, mappings, 30, objects.InstanceList.get_by_filters, filters) self.assertEqual(2, len(results)) self.assertIn(mock.sentinel.instances, results.values()) self.assertIsInstance(results[mapping1.uuid], Exception) # non-NovaException gets logged self.assertTrue(mock_log_exception.called) # Now run it again with a NovaException to see it's not logged. mock_log_exception.reset_mock() mock_get_inst.side_effect = [mock.sentinel.instances, exception.NotFound()] results = context.scatter_gather_cells( ctxt, mappings, 30, objects.InstanceList.get_by_filters, filters) self.assertEqual(2, len(results)) self.assertIn(mock.sentinel.instances, results.values()) self.assertIsInstance(results[mapping1.uuid], exception.NovaException) # NovaExceptions are not logged, the caller should handle them. mock_log_exception.assert_not_called() @mock.patch('nova.context.scatter_gather_cells') @mock.patch('nova.objects.CellMappingList.get_all') def test_scatter_gather_all_cells(self, mock_get_all, mock_scatter): ctxt = context.get_context() mapping0 = objects.CellMapping(database_connection='fake://db0', transport_url='none:///', uuid=objects.CellMapping.CELL0_UUID) mapping1 = objects.CellMapping(database_connection='fake://db1', transport_url='fake://mq1', uuid=uuids.cell1) mock_get_all.return_value = objects.CellMappingList( objects=[mapping0, mapping1]) filters = {'deleted': False} context.scatter_gather_all_cells( ctxt, objects.InstanceList.get_by_filters, filters, sort_dir='foo') mock_scatter.assert_called_once_with( ctxt, mock_get_all.return_value, 60, objects.InstanceList.get_by_filters, filters, sort_dir='foo') @mock.patch('nova.context.scatter_gather_cells') @mock.patch('nova.objects.CellMappingList.get_all') def test_scatter_gather_skip_cell0(self, mock_get_all, mock_scatter): ctxt = context.get_context() mapping0 = objects.CellMapping(database_connection='fake://db0', transport_url='none:///', uuid=objects.CellMapping.CELL0_UUID) mapping1 = objects.CellMapping(database_connection='fake://db1', transport_url='fake://mq1', uuid=uuids.cell1) mock_get_all.return_value = objects.CellMappingList( objects=[mapping0, mapping1]) filters = {'deleted': False} context.scatter_gather_skip_cell0( ctxt, objects.InstanceList.get_by_filters, filters, sort_dir='foo') mock_scatter.assert_called_once_with( ctxt, [mapping1], 60, objects.InstanceList.get_by_filters, filters, sort_dir='foo') @mock.patch('nova.context.scatter_gather_cells') def test_scatter_gather_single_cell(self, mock_scatter): ctxt = context.get_context() mapping0 = objects.CellMapping(database_connection='fake://db0', transport_url='none:///', uuid=objects.CellMapping.CELL0_UUID) filters = {'deleted': False} context.scatter_gather_single_cell(ctxt, mapping0, objects.InstanceList.get_by_filters, filters, sort_dir='foo') mock_scatter.assert_called_once_with( ctxt, [mapping0], context.CELL_TIMEOUT, objects.InstanceList.get_by_filters, filters, sort_dir='foo') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_crypto.py0000664000175000017500000002475100000000000020614 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests for Crypto module. """ import os from cryptography.hazmat import backends from cryptography.hazmat.primitives import serialization import mock from oslo_concurrency import processutils import paramiko import six from nova import crypto from nova import exception from nova import test from nova import utils class EncryptionTests(test.NoDBTestCase): pubkey = ("ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDArtgrfBu/g2o28o+H2ng/crv" "zgES91i/NNPPFTOutXelrJ9QiPTPTm+B8yspLsXifmbsmXztNOlBQgQXs6usxb4" "fnJKNUZ84Vkp5esbqK/L7eyRqwPvqo7btKBMoAMVX/kUyojMpxb7Ssh6M6Y8cpi" "goi+MSDPD7+5yRJ9z4mH9h7MCY6Ejv8KTcNYmVHvRhsFUcVhWcIISlNWUGiG7rf" "oki060F5myQN3AXcL8gHG5/Qb1RVkQFUKZ5geQ39/wSyYA1Q65QTba/5G2QNbl2" "0eAIBTyKZhN6g88ak+yARa6BLLDkrlP7L4WctHQMLsuXHohQsUO9AcOlVMARgrg" "uF test@test") prikey = """-----BEGIN RSA PRIVATE KEY----- MIIEpQIBAAKCAQEAwK7YK3wbv4NqNvKPh9p4P3K784BEvdYvzTTzxUzrrV3payfU Ij0z05vgfMrKS7F4n5m7Jl87TTpQUIEF7OrrMW+H5ySjVGfOFZKeXrG6ivy+3ska sD76qO27SgTKADFV/5FMqIzKcW+0rIejOmPHKYoKIvjEgzw+/uckSfc+Jh/YezAm OhI7/Ck3DWJlR70YbBVHFYVnCCEpTVlBohu636JItOtBeZskDdwF3C/IBxuf0G9U VZEBVCmeYHkN/f8EsmANUOuUE22v+RtkDW5dtHgCAU8imYTeoPPGpPsgEWugSyw5 K5T+y+FnLR0DC7Llx6IULFDvQHDpVTAEYK4LhQIDAQABAoIBAF9ibrrgHnBpItx+ qVUMbriiGK8LUXxUmqdQTljeolDZi6KzPc2RVKWtpazBSvG7skX3+XCediHd+0JP DNri1HlNiA6B0aUIGjoNsf6YpwsE4YwyK9cR5k5YGX4j7se3pKX2jOdngxQyw1Mh dkmCeWZz4l67nbSFz32qeQlwrsB56THJjgHB7elDoGCXTX/9VJyjFlCbfxVCsIng inrNgT0uMSYMNpAjTNOjguJt/DtXpwzei5eVpsERe0TRRVH23ycS0fuq/ancYwI/ MDr9KSB8r+OVGeVGj3popCxECxYLBxhqS1dAQyJjhQXKwajJdHFzidjXO09hLBBz FiutpYUCgYEA6OFikTrPlCMGMJjSj+R9woDAOPfvCDbVZWfNo8iupiECvei88W28 RYFnvUQRjSC0pHe//mfUSmiEaE+SjkNCdnNR+vsq9q+htfrADm84jl1mfeWatg/g zuGz2hAcZnux3kQMI7ufOwZNNpM2bf5B4yKamvG8tZRRxSkkAL1NV48CgYEA08/Z Ty9g9XPKoLnUWStDh1zwG+c0q14l2giegxzaUAG5DOgOXbXcw0VQ++uOWD5ARELG g9wZcbBsXxJrRpUqx+GAlv2Y1bkgiPQS1JIyhsWEUtwfAC/G+uZhCX53aI3Pbsjh QmkPCSp5DuOuW2PybMaw+wVe+CaI/gwAWMYDAasCgYEA4Fzkvc7PVoU33XIeywr0 LoQkrb4QyPUrOvt7H6SkvuFm5thn0KJMlRpLfAksb69m2l2U1+HooZd4mZawN+eN DNmlzgxWJDypq83dYwq8jkxmBj1DhMxfZnIE+L403nelseIVYAfPLOqxUTcbZXVk vRQFp+nmSXqQHUe5rAy1ivkCgYEAqLu7cclchCxqDv/6mc5NTVhMLu5QlvO5U6fq HqitgW7d69oxF5X499YQXZ+ZFdMBf19ypTiBTIAu1M3nh6LtIa4SsjXzus5vjKpj FdQhTBus/hU83Pkymk1MoDOPDEtsI+UDDdSDldmv9pyKGWPVi7H86vusXCLWnwsQ e6fCXWECgYEAqgpGvva5kJ1ISgNwnJbwiNw0sOT9BMOsdNZBElf0kJIIy6FMPvap 6S1ziw+XWfdQ83VIUOCL5DrwmcYzLIogS0agmnx/monfDx0Nl9+OZRxy6+AI9vkK 86A1+DXdo+IgX3grFK1l1gPhAZPRWJZ+anrEkyR4iLq6ZoPZ3BQn97U= -----END RSA PRIVATE KEY-----""" text = "Some text! 
%$*" def _ssh_decrypt_text(self, ssh_private_key, text): with utils.tempdir() as tmpdir: sshkey = os.path.abspath(os.path.join(tmpdir, 'ssh.key')) with open(sshkey, 'w') as f: f.write(ssh_private_key) try: dec, _err = processutils.execute('openssl', 'rsautl', '-decrypt', '-inkey', sshkey, process_input=text, binary=True) return dec except processutils.ProcessExecutionError as exc: raise exception.DecryptionFailure(reason=exc.stderr) def test_ssh_encrypt_decrypt_text(self): self._test_ssh_encrypt_decrypt_text(self.pubkey) key_with_spaces_in_comment = self.pubkey.replace('test@test', 'Generated by Nova') self._test_ssh_encrypt_decrypt_text(key_with_spaces_in_comment) def _test_ssh_encrypt_decrypt_text(self, key): enc = crypto.ssh_encrypt_text(self.pubkey, self.text) self.assertIsInstance(enc, bytes) # Comparison between bytes and str raises a TypeError # when using python3 -bb if six.PY2: self.assertNotEqual(enc, self.text) result = self._ssh_decrypt_text(self.prikey, enc) self.assertIsInstance(result, bytes) if six.PY3: result = result.decode('utf-8') self.assertEqual(result, self.text) def test_ssh_encrypt_failure(self): self.assertRaises(exception.EncryptionFailure, crypto.ssh_encrypt_text, '', self.text) class KeyPairTest(test.NoDBTestCase): rsa_prv = ( "-----BEGIN RSA PRIVATE KEY-----\n" "MIIEowIBAAKCAQEA5G44D6lEgMj6cRwCPydsMl1VRN2B9DVyV5lmwssGeJClywZM\n" "WcKlSZBaWPbwbt20/r74eMGZPlqtEi9Ro+EHj4/n5+3A2Mh11h0PGSt53PSPfWwo\n" "ZhEg9hQ1w1ZxfBMCx7eG2YdGFQocMgR0zQasJGjjt8hruCnWRB3pNH9DhEwKhgET\n" "H0/CFzxSh0eZWs/O4GSf4upwmRG/1Yu90vnVZq3AanwvvW5UBk6g4uWb6FTES867\n" "kAy4b5EcH6WR3lLE09omuG/NqtH+qkgIdQconDkmkuK3xf5go6GSwEod0erM1G1v\n" "e+C4w/MD98KZ4Zlon9hy7oE2rcqHXf58gZtOTQIDAQABAoIBAQCnkeM2Oemyv7xY\n" "dT+ArJ7GY4lFt2i5iOuUL0ge5Wid0R6OTNR9lDhEOszMLno6GhHIPrdvfjW4dDQ5\n" "/tRY757oRZzNmq+5V3R52V9WC3qeCBmq3EjWdwJDAphd72/YoOmNMKiPsphKntwI\n" "JRS5wodNPlSuYSwEMUypM3f7ttAEn5CASgYgribBDapm7EqkVa2AqSvpFzNvN3/e\n" "Sc36/XlxJin7AkKVOnRksuVOOj504VUQfXgVWZkfTeZqAROgA1FSnjUAffcubJmq\n" "pDL/JSgOqN4S+sJkkTrb19MuM9M/IdXteloynF+GUKZx6FdVQQc8xCiXgeupeeSD\n" "fNMAP7DRAoGBAP0JRFm3fCAavBREKVOyZm20DpeR6zMrVP7ht0SykkT/bw/kiRG+\n" "FH1tNioj9uyixt5SiKhH3ZVAunjsKvrwET8i3uz1M2Gk+ovWdLXurBogYNNWafjQ\n" "hRhFHpyExoZYRsn58bvYvjFXTO6JxuNS2b59DGBRkQ5mpsOhxarfbZnXAoGBAOcb\n" "K+qoPDeDicnQZ8+ygYYHxY3fy1nvm1F19jBiWd26bAUOHeZNPPKGvTSlrGWJgEyA\n" "FjZIlHJOY2s0dhukiytOiXzdA5iqK1NvlF+QTUI4tCeNMVejWC+n6sKR9ADZkX8D\n" "NOHaLkDzc/ukus59aKyjxP53I6SV6y6m5NeyvDx7AoGAaUji1MXA8wbMvU4DOB0h\n" "+4GRFMYVbEwaaJd4jzASJn12M9GuquBBXFMF15DxXFL6lmUXEZYdf83YCRqTY6hi\n" "NLgIs+XuxDFGQssv8sdletWAFE9/dpUk3A1eiFfC1wGCKuZCDBxKPvOJQjO3uryt\n" "d1JGxQkLZ0eVGg+E1O10iC8CgYB4w2QRfNPqllu8D6EPkVHJfeonltgmKOTajm+V\n" "HO+kw7OKeLP7EkVU3j+kcSZC8LUQRKZWu1qG2Jtu+7zz+OmYObPygXNNpS56rQW1\n" "Yixc/FB3knpEN2DvlilAfxAoGYjD/CL4GhCtdAoZZx0Opc262OEpr4v6hzSb7i4K\n" "4KUoXQKBgHfbiaSilxx9guUqvSaexpHmtiUwx05a05fD6tu8Cofl6AM9wGpw3xOT\n" "tfo4ehvS13tTz2RDE2xKuetMmkya7UgifcxUmBzqkOlgr0oOi2rp+eDKXnzUUqsH\n" "V7E96Dj36K8q2+gZIXcNqjN7PzfkF8pA0G+E1veTi8j5dnvIsy1x\n" "-----END RSA PRIVATE KEY-----\n" ) rsa_pub = ( "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDkbjgPqUSAyPpxHAI/J2wyXVVE" "3YH0NXJXmWbCywZ4kKXLBkxZwqVJkFpY9vBu3bT+vvh4wZk+Wq0SL1Gj4QePj+fn" "7cDYyHXWHQ8ZK3nc9I99bChmESD2FDXDVnF8EwLHt4bZh0YVChwyBHTNBqwkaOO3" "yGu4KdZEHek0f0OETAqGARMfT8IXPFKHR5laz87gZJ/i6nCZEb/Vi73S+dVmrcBq" "fC+9blQGTqDi5ZvoVMRLzruQDLhvkRwfpZHeUsTT2ia4b82q0f6qSAh1ByicOSaS" "4rfF/mCjoZLASh3R6szUbW974LjD8wP3wpnhmWif2HLugTatyodd/nyBm05N Gen" "erated-by-Nova" ) rsa_fp = "e7:66:a1:2c:4f:90:6e:11:19:da:ac:c2:69:e1:ad:89" 
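    # rsa_fp above is the expected fingerprint of rsa_pub; the DSS and ECDSA
    # keys below follow the same pattern of key blob plus expected fingerprint.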
dss_pub = ( "ssh-dss AAAAB3NzaC1kc3MAAACBAKWFW2++pDxJWObkADbSXw8KfZ4VupkRKEXF" "SPN2kV0v+FgdnBEcrEJPExaOTMhmxIuc82ktTv76wHSEpbbsLuI7IDbB6KJJwHs2" "y356yB28Q9rin7X0VMYKkPxvAcbIUSrEbQtyPMihlOaaQ2dGSsEQGQSpjm3f3RU6" "OWux0w/NAAAAFQCgzWF2zxQmi/Obd11z9Im6gY02gwAAAIAHCDLjipVwMLXIqNKO" "MktiPex+ewRQxBi80dzZ3mJzARqzLPYI9hJFUU0LiMtLuypV/djpUWN0cQpmgTQf" "TfuZx9ipC6Mtiz66NQqjkQuoihzdk+9KlOTo03UsX5uBGwuZ09Dnf1VTF8ZsW5Hg" "HyOk6qD71QBajkcFJAKOT3rFfgAAAIAy8trIzqEps9/n37Nli1TvNPLbFQAXl1LN" "wUFmFDwBCGTLl8puVZv7VSu1FG8ko+mzqNebqcN4RMC26NxJqe+RRubn5KtmLoIa" "7tRe74hvQ1HTLLuGxugwa4CewNbwzzEDEs8U79WDhGKzDkJR4nLPVimj5WLAWV70" "RNnRX7zj5w== Generated-by-Nova" ) dss_fp = "b9:dc:ac:57:df:2a:2b:cf:65:a8:c3:4e:9d:4a:82:3c" ecdsa_pub = ( "ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAy" "NTYAAABBBG1r4wzPTIjSo78POCq+u/czb8gYK0KvqlmCvcRPrnDWxgLw7y6BX51t" "uYREz7iLRCP7BwUt8R+ZWzFZDeOLIWU= Generated-by-Nova" ) ecdsa_pub_with_spaces = ( "ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAy" "NTYAAABBBG1r4wzPTIjSo78POCq+u/czb8gYK0KvqlmCvcRPrnDWxgLw7y6BX51t" "uYREz7iLRCP7BwUt8R+ZWzFZDeOLIWU= Generated by Nova" ) ecdsa_fp = "16:6a:c9:ec:80:4d:17:3e:d5:3b:6f:c0:d7:15:04:40" def test_generate_fingerprint(self): fingerprint = crypto.generate_fingerprint(self.rsa_pub) self.assertEqual(self.rsa_fp, fingerprint) fingerprint = crypto.generate_fingerprint(self.dss_pub) self.assertEqual(self.dss_fp, fingerprint) fingerprint = crypto.generate_fingerprint(self.ecdsa_pub) self.assertEqual(self.ecdsa_fp, fingerprint) fingerprint = crypto.generate_fingerprint(self.ecdsa_pub_with_spaces) self.assertEqual(self.ecdsa_fp, fingerprint) def test_generate_key_pair_2048_bits(self): (private_key, public_key, fingerprint) = crypto.generate_key_pair() pub_bytes = public_key.encode('utf-8') pkey = serialization.load_ssh_public_key( pub_bytes, backends.default_backend()) self.assertEqual(2048, pkey.key_size) def test_generate_key_pair_1024_bits(self): bits = 1024 (private_key, public_key, fingerprint) = crypto.generate_key_pair(bits) pub_bytes = public_key.encode('utf-8') pkey = serialization.load_ssh_public_key( pub_bytes, backends.default_backend()) self.assertEqual(bits, pkey.key_size) def test_generate_key_pair_mocked_private_key(self): keyin = six.StringIO() keyin.write(self.rsa_prv) keyin.seek(0) key = paramiko.RSAKey.from_private_key(keyin) with mock.patch.object(paramiko.RSAKey, 'generate') as mock_generate: mock_generate.return_value = key (private_key, public_key, fingerprint) = crypto.generate_key_pair() self.assertEqual(self.rsa_pub, public_key) self.assertEqual(self.rsa_fp, fingerprint) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_exception.py0000664000175000017500000002734600000000000021275 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
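# Unit tests for exception_wrapper.wrap_exception notifications and for
# NovaException message formatting.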
import inspect import fixtures import mock import six from webob.util import status_reasons from nova import context from nova import exception from nova import exception_wrapper from nova import rpc from nova import test from nova.tests.unit import fake_notifier def good_function(self, context): return 99 def bad_function_exception(self, context, extra, blah="a", boo="b", zoo=None): raise test.TestingException('bad things happened') def bad_function_unknown_module(self, context): """Example traceback that points to a module that getmodule() can't find. Traceback (most recent call last): File "", line 1, in File "src/lxml/lxml.etree.pyx", line 2402, in lxml.etree._Attrib.__setitem__ (src/lxml/lxml.etree.c:67548) File "src/lxml/apihelpers.pxi", line 570, in lxml.etree._setAttributeValue (src/lxml/lxml.etree.c:21551) File "src/lxml/apihelpers.pxi", line 1437, in lxml.etree._utf8 (src/lxml/lxml.etree.c:30194) TypeError: Argument must be bytes or unicode, got 'NoneType' """ from lxml import etree x = etree.fromstring('') x.attrib['foo'] = None class WrapExceptionTestCase(test.NoDBTestCase): def setUp(self): super(WrapExceptionTestCase, self).setUp() fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) def test_wrap_exception_good_return(self): wrapped = exception_wrapper.wrap_exception(rpc.get_notifier('fake')) self.assertEqual(99, wrapped(good_function)(1, 2)) self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(0, len(fake_notifier.VERSIONED_NOTIFICATIONS)) def test_wrap_exception_unknown_module(self): ctxt = context.get_admin_context() wrapped = exception_wrapper.wrap_exception( rpc.get_notifier('fake'), binary='nova-compute') self.assertRaises( TypeError, wrapped(bad_function_unknown_module), None, ctxt) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) notification = fake_notifier.VERSIONED_NOTIFICATIONS[0] payload = notification['payload']['nova_object.data'] self.assertEqual('unknown', payload['module_name']) def test_wrap_exception_with_notifier(self): wrapped = exception_wrapper.wrap_exception(rpc.get_notifier('fake'), binary='nova-compute') ctxt = context.get_admin_context() self.assertRaises(test.TestingException, wrapped(bad_function_exception), 1, ctxt, 3, zoo=3) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) notification = fake_notifier.NOTIFICATIONS[0] self.assertEqual('bad_function_exception', notification.event_type) self.assertEqual(ctxt, notification.context) self.assertEqual(3, notification.payload['args']['extra']) for key in ['exception', 'args']: self.assertIn(key, notification.payload.keys()) self.assertNotIn('context', notification.payload['args'].keys()) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) notification = fake_notifier.VERSIONED_NOTIFICATIONS[0] self.assertEqual('compute.exception', notification['event_type']) self.assertEqual('nova-compute:fake-mini', notification['publisher_id']) self.assertEqual('ERROR', notification['priority']) payload = notification['payload'] self.assertEqual('ExceptionPayload', payload['nova_object.name']) self.assertEqual('1.1', payload['nova_object.version']) payload = payload['nova_object.data'] self.assertEqual('TestingException', payload['exception']) self.assertEqual('bad things happened', payload['exception_message']) self.assertEqual('bad_function_exception', payload['function_name']) self.assertEqual('nova.tests.unit.test_exception', payload['module_name']) self.assertIn('bad_function_exception', payload['traceback']) @mock.patch('nova.rpc.NOTIFIER') 
@mock.patch('nova.notifications.objects.exception.' 'ExceptionNotification.__init__') def test_wrap_exception_notification_not_emitted_if_disabled( self, mock_notification, mock_notifier): mock_notifier.is_enabled.return_value = False wrapped = exception_wrapper.wrap_exception(rpc.get_notifier('fake'), binary='nova-compute') ctxt = context.get_admin_context() self.assertRaises(test.TestingException, wrapped(bad_function_exception), 1, ctxt, 3, zoo=3) self.assertFalse(mock_notification.called) @mock.patch('nova.notifications.objects.exception.' 'ExceptionNotification.__init__') def test_wrap_exception_notification_not_emitted_if_unversioned( self, mock_notifier): self.flags(notification_format='unversioned', group='notifications') wrapped = exception_wrapper.wrap_exception(rpc.get_notifier('fake'), binary='nova-compute') ctxt = context.get_admin_context() self.assertRaises(test.TestingException, wrapped(bad_function_exception), 1, ctxt, 3, zoo=3) self.assertFalse(mock_notifier.called) class NovaExceptionTestCase(test.NoDBTestCase): def test_default_error_msg(self): class FakeNovaException(exception.NovaException): msg_fmt = "default message" exc = FakeNovaException() self.assertEqual('default message', six.text_type(exc)) def test_error_msg(self): self.assertEqual('test', six.text_type(exception.NovaException('test'))) self.assertEqual('test', exception.NovaException(Exception('test')).message) def test_default_error_msg_with_kwargs(self): class FakeNovaException(exception.NovaException): msg_fmt = "default message: %(code)s" exc = FakeNovaException(code=500) self.assertEqual('default message: 500', six.text_type(exc)) self.assertEqual('default message: 500', exc.message) def test_error_msg_exception_with_kwargs(self): class FakeNovaException(exception.NovaException): msg_fmt = "default message: %(misspelled_code)s" exc = FakeNovaException(code=500, misspelled_code='blah') self.assertEqual('default message: blah', six.text_type(exc)) self.assertEqual('default message: blah', exc.message) def test_default_error_code(self): class FakeNovaException(exception.NovaException): code = 404 exc = FakeNovaException() self.assertEqual(404, exc.kwargs['code']) def test_error_code_from_kwarg(self): class FakeNovaException(exception.NovaException): code = 500 exc = FakeNovaException(code=404) self.assertEqual(exc.kwargs['code'], 404) def test_cleanse_dict(self): kwargs = {'foo': 1, 'blah_pass': 2, 'zoo_password': 3, '_pass': 4} self.assertEqual({'foo': 1}, exception_wrapper._cleanse_dict(kwargs)) kwargs = {} self.assertEqual({}, exception_wrapper._cleanse_dict(kwargs)) def test_format_message_local(self): class FakeNovaException(exception.NovaException): msg_fmt = "some message" exc = FakeNovaException() self.assertEqual(six.text_type(exc), exc.format_message()) def test_format_message_remote(self): class FakeNovaException_Remote(exception.NovaException): msg_fmt = "some message" if six.PY2: def __unicode__(self): return u"print the whole trace" else: def __str__(self): return "print the whole trace" exc = FakeNovaException_Remote() self.assertEqual(u"print the whole trace", six.text_type(exc)) self.assertEqual("some message", exc.format_message()) def test_format_message_remote_error(self): # NOTE(melwitt): This test checks that errors are formatted as expected # in a real environment where format errors are caught and not # reraised, so we patch in the real implementation. 
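        # The exception below is built with the wrong kwarg on purpose, so
        # formatting fails and format_message() falls back to the raw msg_fmt.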
self.useFixture(fixtures.MonkeyPatch( 'nova.exception.NovaException._log_exception', test.NovaExceptionReraiseFormatError.real_log_exception)) class FakeNovaException_Remote(exception.NovaException): msg_fmt = "some message %(somearg)s" def __unicode__(self): return u"print the whole trace" exc = FakeNovaException_Remote(lame_arg='lame') self.assertEqual("some message %(somearg)s", exc.format_message()) def test_repr(self): class FakeNovaException(exception.NovaException): msg_fmt = "some message" mock_exc = FakeNovaException(code=500) exc_repr = repr(mock_exc) eval_repr = eval(exc_repr) exc_kwargs = eval_repr.get('kwargs') self.assertIsNotNone(exc_kwargs) self.assertEqual(500, exc_kwargs.get('code')) self.assertEqual('some message', eval_repr.get('message')) self.assertEqual('FakeNovaException', eval_repr.get('class')) class ConvertedExceptionTestCase(test.NoDBTestCase): def test_instantiate(self): exc = exception.ConvertedException(400, 'Bad Request', 'reason') self.assertEqual(exc.code, 400) self.assertEqual(exc.title, 'Bad Request') self.assertEqual(exc.explanation, 'reason') def test_instantiate_without_title_known_code(self): exc = exception.ConvertedException(500) self.assertEqual(exc.title, status_reasons[500]) def test_instantiate_without_title_unknown_code(self): exc = exception.ConvertedException(499) self.assertEqual(exc.title, 'Unknown Client Error') def test_instantiate_bad_code(self): self.assertRaises(KeyError, exception.ConvertedException, 10) class ExceptionTestCase(test.NoDBTestCase): @staticmethod def _raise_exc(exc): raise exc(500) def test_exceptions_raise(self): # NOTE(dprince): disable format errors since we are not passing kwargs for name in dir(exception): exc = getattr(exception, name) if isinstance(exc, type): self.assertRaises(exc, self._raise_exc, exc) class ExceptionValidMessageTestCase(test.NoDBTestCase): def test_messages(self): failures = [] for name, obj in inspect.getmembers(exception): if name in ['NovaException', 'InstanceFaultRollback']: continue if not inspect.isclass(obj): continue if not issubclass(obj, exception.NovaException): continue e = obj if e.msg_fmt == "An unknown exception occurred.": failures.append('%s needs a more specific msg_fmt' % name) if failures: self.fail('\n'.join(failures)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_fake_notifier.py0000664000175000017500000000615000000000000022072 0ustar00zuulzuul00000000000000# Copyright 2018 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
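# Unit tests for the fake_notifier test helper, in particular
# wait_for_versioned_notifications().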
import functools from nova import context from nova import exception_wrapper from nova import rpc from nova import test from nova.tests.unit import fake_notifier class FakeVersionedNotifierTestCase(test.NoDBTestCase): def setUp(self): super(FakeVersionedNotifierTestCase, self).setUp() fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) self.context = context.RequestContext() _get_notifier = functools.partial(rpc.get_notifier, 'compute') @exception_wrapper.wrap_exception(get_notifier=_get_notifier, binary='nova-compute') def _raise_exception(self, context): raise test.TestingException def _generate_exception_notification(self): self.assertRaises(test.TestingException, self._raise_exception, self.context) def test_wait_for_versioned_notifications(self): # Wait for a single notification which we emitted first self._generate_exception_notification() notifications = fake_notifier.wait_for_versioned_notifications( 'compute.exception') self.assertEqual(1, len(notifications)) def test_wait_for_versioned_notifications_fail(self): # Wait for a single notification which is never sent self.assertRaises( AssertionError, fake_notifier.wait_for_versioned_notifications, 'compute.exception', timeout=0.1) def test_wait_for_versioned_notifications_n(self): # Wait for 2 notifications which we emitted first self._generate_exception_notification() self._generate_exception_notification() notifications = fake_notifier.wait_for_versioned_notifications( 'compute.exception', 2) self.assertEqual(2, len(notifications)) def test_wait_for_versioned_notifications_n_fail(self): # Wait for 2 notifications when we only emitted one self._generate_exception_notification() self.assertRaises( AssertionError, fake_notifier.wait_for_versioned_notifications, 'compute.exception', 2, timeout=0.1) def test_wait_for_versioned_notifications_too_many(self): # Wait for a single notification when there are 2 in the queue self._generate_exception_notification() self._generate_exception_notification() notifications = fake_notifier.wait_for_versioned_notifications( 'compute.exception') self.assertEqual(2, len(notifications)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_fixtures.py0000664000175000017500000005517500000000000021151 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
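# Unit tests for the reusable test fixtures in nova.tests.fixtures.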
import copy import fixtures as fx import futurist import mock from oslo_config import cfg from oslo_db import exception as db_exc from oslo_log import log as logging from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from oslo_utils import uuidutils from oslotest import output import sqlalchemy import testtools from nova.compute import rpcapi as compute_rpcapi from nova import conductor from nova import context from nova.db.sqlalchemy import api as session from nova import exception from nova.network import neutron as neutron_api from nova import objects from nova.objects import base as obj_base from nova.objects import service as service_obj from nova import test from nova.tests import fixtures from nova.tests.unit import conf_fixture from nova.tests.unit import fake_instance from nova import utils CONF = cfg.CONF class TestLogging(testtools.TestCase): def test_default_logging(self): stdlog = self.useFixture(fixtures.StandardLogging()) root = logging.getLogger() # there should be a null handler as well at DEBUG self.assertEqual(2, len(root.handlers), root.handlers) log = logging.getLogger(__name__) log.info("at info") log.debug("at debug") self.assertIn("at info", stdlog.logger.output) self.assertNotIn("at debug", stdlog.logger.output) # broken debug messages should still explode, even though we # aren't logging them in the regular handler self.assertRaises(TypeError, log.debug, "this is broken %s %s", "foo") # and, ensure that one of the terrible log messages isn't # output at info warn_log = logging.getLogger('migrate.versioning.api') warn_log.info("warn_log at info, should be skipped") warn_log.error("warn_log at error") self.assertIn("warn_log at error", stdlog.logger.output) self.assertNotIn("warn_log at info", stdlog.logger.output) def test_debug_logging(self): self.useFixture(fx.EnvironmentVariable('OS_DEBUG', '1')) stdlog = self.useFixture(fixtures.StandardLogging()) root = logging.getLogger() # there should no longer be a null handler self.assertEqual(1, len(root.handlers), root.handlers) log = logging.getLogger(__name__) log.info("at info") log.debug("at debug") self.assertIn("at info", stdlog.logger.output) self.assertIn("at debug", stdlog.logger.output) class TestOSAPIFixture(testtools.TestCase): @mock.patch('nova.objects.Service.get_by_host_and_binary') @mock.patch('nova.objects.Service.create') @mock.patch('nova.utils.raise_if_old_compute', new=mock.Mock()) def test_responds_to_version(self, mock_service_create, mock_get): """Ensure the OSAPI server responds to calls sensibly.""" self.useFixture(output.CaptureOutput()) self.useFixture(fixtures.StandardLogging()) self.useFixture(conf_fixture.ConfFixture()) self.useFixture(fixtures.RPCFixture('nova.test')) api = self.useFixture(fixtures.OSAPIFixture()).api # request the API root, which provides us the versions of the API resp = api.api_request('/', strip_version=True) self.assertEqual(200, resp.status_code, resp.content) # request a bad root url, should be a 404 # # NOTE(sdague): this currently fails, as it falls into the 300 # dispatcher instead. This is a bug. The test case is left in # here, commented out until we can address it. 
# # resp = api.api_request('/foo', strip_version=True) # self.assertEqual(resp.status_code, 400, resp.content) # request a known bad url, and we should get a 404 resp = api.api_request('/foo') self.assertEqual(404, resp.status_code, resp.content) class TestDatabaseFixture(testtools.TestCase): def test_fixture_reset(self): # because this sets up reasonable db connection strings self.useFixture(conf_fixture.ConfFixture()) self.useFixture(fixtures.Database()) engine = session.get_engine() conn = engine.connect() result = conn.execute("select * from instance_types") rows = result.fetchall() self.assertEqual(0, len(rows), "Rows %s" % rows) # insert a 6th instance type, column 5 below is an int id # which has a constraint on it, so if new standard instance # types are added you have to bump it. conn.execute("insert into instance_types VALUES " "(NULL, NULL, NULL, 't1.test', 6, 4096, 2, 0, NULL, '87'" ", 1.0, 40, 0, 0, 1, 0)") result = conn.execute("select * from instance_types") rows = result.fetchall() self.assertEqual(1, len(rows), "Rows %s" % rows) # reset by invoking the fixture again # # NOTE(sdague): it's important to reestablish the db # connection because otherwise we have a reference to the old # in mem db. self.useFixture(fixtures.Database()) conn = engine.connect() result = conn.execute("select * from instance_types") rows = result.fetchall() self.assertEqual(0, len(rows), "Rows %s" % rows) def test_api_fixture_reset(self): # This sets up reasonable db connection strings self.useFixture(conf_fixture.ConfFixture()) self.useFixture(fixtures.Database(database='api')) engine = session.get_api_engine() conn = engine.connect() result = conn.execute("select * from cell_mappings") rows = result.fetchall() self.assertEqual(0, len(rows), "Rows %s" % rows) uuid = uuidutils.generate_uuid() conn.execute("insert into cell_mappings (uuid, name) VALUES " "('%s', 'fake-cell')" % (uuid,)) result = conn.execute("select * from cell_mappings") rows = result.fetchall() self.assertEqual(1, len(rows), "Rows %s" % rows) # reset by invoking the fixture again # # NOTE(sdague): it's important to reestablish the db # connection because otherwise we have a reference to the old # in mem db. 
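        # Re-applying the fixture rebuilds the in-memory API database, so the
        # fresh connection below should see an empty cell_mappings table.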
self.useFixture(fixtures.Database(database='api')) conn = engine.connect() result = conn.execute("select * from cell_mappings") rows = result.fetchall() self.assertEqual(0, len(rows), "Rows %s" % rows) def test_fixture_cleanup(self): # because this sets up reasonable db connection strings self.useFixture(conf_fixture.ConfFixture()) fix = fixtures.Database() self.useFixture(fix) # manually do the cleanup that addCleanup will do fix.cleanup() # ensure the db contains nothing engine = session.get_engine() conn = engine.connect() schema = "".join(line for line in conn.connection.iterdump()) self.assertEqual(schema, "BEGIN TRANSACTION;COMMIT;") def test_api_fixture_cleanup(self): # This sets up reasonable db connection strings self.useFixture(conf_fixture.ConfFixture()) fix = fixtures.Database(database='api') self.useFixture(fix) # No data inserted by migrations so we need to add a row engine = session.get_api_engine() conn = engine.connect() uuid = uuidutils.generate_uuid() conn.execute("insert into cell_mappings (uuid, name) VALUES " "('%s', 'fake-cell')" % (uuid,)) result = conn.execute("select * from cell_mappings") rows = result.fetchall() self.assertEqual(1, len(rows), "Rows %s" % rows) # Manually do the cleanup that addCleanup will do fix.cleanup() # Ensure the db contains nothing engine = session.get_api_engine() conn = engine.connect() schema = "".join(line for line in conn.connection.iterdump()) self.assertEqual("BEGIN TRANSACTION;COMMIT;", schema) class TestDatabaseAtVersionFixture(testtools.TestCase): def test_fixture_schema_version(self): self.useFixture(conf_fixture.ConfFixture()) # In/after 317 aggregates did have uuid self.useFixture(fixtures.DatabaseAtVersion(318)) engine = session.get_engine() engine.connect() meta = sqlalchemy.MetaData(engine) aggregate = sqlalchemy.Table('aggregates', meta, autoload=True) self.assertTrue(hasattr(aggregate.c, 'uuid')) # Before 317, aggregates had no uuid self.useFixture(fixtures.DatabaseAtVersion(316)) engine = session.get_engine() engine.connect() meta = sqlalchemy.MetaData(engine) aggregate = sqlalchemy.Table('aggregates', meta, autoload=True) self.assertFalse(hasattr(aggregate.c, 'uuid')) engine.dispose() def test_fixture_after_database_fixture(self): self.useFixture(conf_fixture.ConfFixture()) self.useFixture(fixtures.Database()) self.useFixture(fixtures.DatabaseAtVersion(318)) class TestDefaultFlavorsFixture(testtools.TestCase): @mock.patch("nova.objects.flavor.Flavor._send_notification") def test_flavors(self, mock_send_notification): self.useFixture(conf_fixture.ConfFixture()) self.useFixture(fixtures.Database()) self.useFixture(fixtures.Database(database='api')) engine = session.get_api_engine() conn = engine.connect() result = conn.execute("select * from flavors") rows = result.fetchall() self.assertEqual(0, len(rows), "Rows %s" % rows) self.useFixture(fixtures.DefaultFlavorsFixture()) result = conn.execute("select * from flavors") rows = result.fetchall() self.assertEqual(6, len(rows), "Rows %s" % rows) class TestIndirectionAPIFixture(testtools.TestCase): def test_indirection_api(self): # Should initially be None self.assertIsNone(obj_base.NovaObject.indirection_api) # make sure the fixture correctly sets the value fix = fixtures.IndirectionAPIFixture('foo') self.useFixture(fix) self.assertEqual('foo', obj_base.NovaObject.indirection_api) # manually do the cleanup that addCleanup will do fix.cleanup() # ensure the initial value is restored self.assertIsNone(obj_base.NovaObject.indirection_api) class 
TestSpawnIsSynchronousFixture(testtools.TestCase): def test_spawn_patch(self): orig_spawn = utils.spawn_n fix = fixtures.SpawnIsSynchronousFixture() self.useFixture(fix) self.assertNotEqual(orig_spawn, utils.spawn_n) def test_spawn_passes_through(self): self.useFixture(fixtures.SpawnIsSynchronousFixture()) tester = mock.MagicMock() utils.spawn_n(tester.function, 'foo', bar='bar') tester.function.assert_called_once_with('foo', bar='bar') def test_spawn_return_has_wait(self): self.useFixture(fixtures.SpawnIsSynchronousFixture()) gt = utils.spawn(lambda x: '%s' % x, 'foo') foo = gt.wait() self.assertEqual('foo', foo) def test_spawn_n_return_has_wait(self): self.useFixture(fixtures.SpawnIsSynchronousFixture()) gt = utils.spawn_n(lambda x: '%s' % x, 'foo') foo = gt.wait() self.assertEqual('foo', foo) def test_spawn_has_link(self): self.useFixture(fixtures.SpawnIsSynchronousFixture()) gt = utils.spawn(mock.MagicMock) passed_arg = 'test' call_count = [] def fake(thread, param): self.assertEqual(gt, thread) self.assertEqual(passed_arg, param) call_count.append(1) gt.link(fake, passed_arg) self.assertEqual(1, len(call_count)) def test_spawn_n_has_link(self): self.useFixture(fixtures.SpawnIsSynchronousFixture()) gt = utils.spawn_n(mock.MagicMock) passed_arg = 'test' call_count = [] def fake(thread, param): self.assertEqual(gt, thread) self.assertEqual(passed_arg, param) call_count.append(1) gt.link(fake, passed_arg) self.assertEqual(1, len(call_count)) class TestSynchronousThreadPoolExecutorFixture(testtools.TestCase): def test_submit_passes_through(self): self.useFixture(fixtures.SynchronousThreadPoolExecutorFixture()) tester = mock.MagicMock() executor = futurist.GreenThreadPoolExecutor() future = executor.submit(tester.function, 'foo', bar='bar') tester.function.assert_called_once_with('foo', bar='bar') result = future.result() self.assertEqual(tester.function.return_value, result) class TestBannedDBSchemaOperations(testtools.TestCase): def test_column(self): column = sqlalchemy.Column() with fixtures.BannedDBSchemaOperations(['Column']): self.assertRaises(exception.DBNotAllowed, column.drop) self.assertRaises(exception.DBNotAllowed, column.alter) def test_table(self): table = sqlalchemy.Table() with fixtures.BannedDBSchemaOperations(['Table']): self.assertRaises(exception.DBNotAllowed, table.drop) self.assertRaises(exception.DBNotAllowed, table.alter) class TestAllServicesCurrentFixture(testtools.TestCase): @mock.patch('nova.objects.Service._db_service_get_minimum_version') def test_services_current(self, mock_db): mock_db.return_value = {'nova-compute': 123} self.assertEqual(123, service_obj.Service.get_minimum_version( None, 'nova-compute')) mock_db.assert_called_once_with(None, ['nova-compute'], use_slave=False) mock_db.reset_mock() compute_rpcapi.LAST_VERSION = 123 self.useFixture(fixtures.AllServicesCurrent()) self.assertIsNone(compute_rpcapi.LAST_VERSION) self.assertEqual(service_obj.SERVICE_VERSION, service_obj.Service.get_minimum_version( None, 'nova-compute')) self.assertFalse(mock_db.called) class TestNoopConductorFixture(testtools.TestCase): @mock.patch('nova.conductor.api.ComputeTaskAPI.resize_instance') def test_task_api_not_called(self, mock_resize): self.useFixture(fixtures.NoopConductorFixture()) conductor.ComputeTaskAPI().resize_instance() self.assertFalse(mock_resize.called) @mock.patch('nova.conductor.api.API.wait_until_ready') def test_api_not_called(self, mock_wait): self.useFixture(fixtures.NoopConductorFixture()) conductor.API().wait_until_ready() 
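        # The noop conductor fixture swallows the call, so the patched real
        # API method must not have been invoked.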
self.assertFalse(mock_wait.called) class TestSingleCellSimpleFixture(testtools.TestCase): def test_single_cell(self): self.useFixture(fixtures.SingleCellSimple()) cml = objects.CellMappingList.get_all(None) self.assertEqual(1, len(cml)) def test_target_cell(self): self.useFixture(fixtures.SingleCellSimple()) with context.target_cell(mock.sentinel.context, None) as c: self.assertIs(mock.sentinel.context, c) class TestWarningsFixture(test.TestCase): def test_invalid_uuid_errors(self): """Creating an oslo.versionedobject with an invalid UUID value for a UUIDField should raise an exception. """ valid_migration_kwargs = { "created_at": timeutils.utcnow().replace(microsecond=0), "updated_at": None, "deleted_at": None, "deleted": False, "id": 123, "uuid": uuids.migration, "source_compute": "compute-source", "dest_compute": "compute-dest", "source_node": "node-source", "dest_node": "node-dest", "dest_host": "host-dest", "old_instance_type_id": 42, "new_instance_type_id": 84, "instance_uuid": "fake-uuid", "status": "migrating", "migration_type": "resize", "hidden": False, "memory_total": 123456, "memory_processed": 12345, "memory_remaining": 111111, "disk_total": 234567, "disk_processed": 23456, "disk_remaining": 211111, } # this shall not throw FutureWarning objects.migration.Migration(**valid_migration_kwargs) invalid_migration_kwargs = copy.deepcopy(valid_migration_kwargs) invalid_migration_kwargs["uuid"] = "fake_id" self.assertRaises(FutureWarning, objects.migration.Migration, **invalid_migration_kwargs) class TestDownCellFixture(test.TestCase): def test_fixture(self): # The test setup creates two cell mappings (cell0 and cell1) by # default. Let's first list servers across all cells while they are # "up" to make sure that works as expected. We'll create a single # instance in cell1. ctxt = context.get_admin_context() cell1 = self.cell_mappings[test.CELL1_NAME] with context.target_cell(ctxt, cell1) as cctxt: inst = fake_instance.fake_instance_obj(cctxt) if 'id' in inst: delattr(inst, 'id') inst.create() # Now list all instances from all cells (should get one back). results = context.scatter_gather_all_cells( ctxt, objects.InstanceList.get_all) self.assertEqual(2, len(results)) self.assertEqual(0, len(results[objects.CellMapping.CELL0_UUID])) self.assertEqual(1, len(results[cell1.uuid])) # Now do the same but with the DownCellFixture which should result # in exception results from both cells. with fixtures.DownCellFixture(): results = context.scatter_gather_all_cells( ctxt, objects.InstanceList.get_all) self.assertEqual(2, len(results)) for result in results.values(): self.assertIsInstance(result, db_exc.DBError) def test_fixture_when_explicitly_passing_down_cell_mappings(self): # The test setup creates two cell mappings (cell0 and cell1) by # default. We'll create one instance per cell and pass cell0 as # the down cell. We should thus get db_exc.DBError for cell0 and # correct InstanceList object from cell1. 
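        # Drop any pre-set 'id' from the fake instances below so each one can
        # be created fresh in its target cell.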
ctxt = context.get_admin_context() cell0 = self.cell_mappings['cell0'] cell1 = self.cell_mappings['cell1'] with context.target_cell(ctxt, cell0) as cctxt: inst1 = fake_instance.fake_instance_obj(cctxt) if 'id' in inst1: delattr(inst1, 'id') inst1.create() with context.target_cell(ctxt, cell1) as cctxt: inst2 = fake_instance.fake_instance_obj(cctxt) if 'id' in inst2: delattr(inst2, 'id') inst2.create() with fixtures.DownCellFixture([cell0]): results = context.scatter_gather_all_cells( ctxt, objects.InstanceList.get_all) self.assertEqual(2, len(results)) for cell_uuid, result in results.items(): if cell_uuid == cell0.uuid: self.assertIsInstance(result, db_exc.DBError) else: self.assertIsInstance(result, objects.InstanceList) self.assertEqual(1, len(result)) self.assertEqual(inst2.uuid, result[0].uuid) def test_fixture_for_an_individual_down_cell_targeted_call(self): # We have cell0 and cell1 by default in the setup. We try targeting # both the cells. We should get a db error for the down cell and # the correct result for the up cell. ctxt = context.get_admin_context() cell0 = self.cell_mappings['cell0'] cell1 = self.cell_mappings['cell1'] with context.target_cell(ctxt, cell0) as cctxt: inst1 = fake_instance.fake_instance_obj(cctxt) if 'id' in inst1: delattr(inst1, 'id') inst1.create() with context.target_cell(ctxt, cell1) as cctxt: inst2 = fake_instance.fake_instance_obj(cctxt) if 'id' in inst2: delattr(inst2, 'id') inst2.create() def dummy_tester(ctxt, cell_mapping, uuid): with context.target_cell(ctxt, cell_mapping) as cctxt: return objects.Instance.get_by_uuid(cctxt, uuid) # Scenario A: We do not pass any down cells, fixture automatically # assumes the targeted cell is down whether its cell0 or cell1. with fixtures.DownCellFixture(): self.assertRaises( db_exc.DBError, dummy_tester, ctxt, cell1, inst2.uuid) # Scenario B: We pass cell0 as the down cell. with fixtures.DownCellFixture([cell0]): self.assertRaises( db_exc.DBError, dummy_tester, ctxt, cell0, inst1.uuid) # Scenario C: We get the correct result from the up cell # when targeted. result = dummy_tester(ctxt, cell1, inst2.uuid) self.assertEqual(inst2.uuid, result.uuid) class TestNeutronFixture(test.NoDBTestCase): def setUp(self): super(TestNeutronFixture, self).setUp() self.neutron = self.useFixture(fixtures.NeutronFixture(self)) def test_list_ports_with_resource_request_non_admin_client(self): ctxt = context.get_context() client = neutron_api.get_client(ctxt) ports = client.list_ports(ctxt)['ports'] port_id = self.neutron.port_with_resource_request['id'] ports = [port for port in ports if port_id == port['id']] self.assertIsNone(ports[0]['resource_request']) def test_list_ports_with_resource_request_admin_client(self): ctxt = context.get_admin_context() client = neutron_api.get_client(ctxt) ports = client.list_ports(ctxt)['ports'] port_id = self.neutron.port_with_resource_request['id'] ports = [port for port in ports if port_id == port['id']] self.assertIsNotNone(ports[0]['resource_request']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_flavors.py0000664000175000017500000003047400000000000020747 0ustar00zuulzuul00000000000000# Copyright 2011 Ken Pepple # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Unit Tests for flavors code """ from nova.compute import flavors from nova import context from nova.db import api as db from nova import exception from nova import objects from nova.objects import base as obj_base from nova import test class InstanceTypeTestCase(test.TestCase): """Test cases for flavor code.""" def test_will_not_get_instance_by_unknown_flavor_id(self): # Ensure get by flavor raises error with wrong flavorid. self.assertRaises(exception.FlavorNotFound, flavors.get_flavor_by_flavor_id, 'unknown_flavor') def test_will_get_instance_by_flavor_id(self): default_instance_type = objects.Flavor.get_by_name( context.get_admin_context(), 'm1.small') flavorid = default_instance_type.flavorid fetched = flavors.get_flavor_by_flavor_id(flavorid) self.assertIsInstance(fetched, objects.Flavor) self.assertEqual(default_instance_type.flavorid, fetched.flavorid) class InstanceTypeToolsTest(test.TestCase): def setUp(self): super(InstanceTypeToolsTest, self).setUp() self.context = context.get_admin_context() def _dict_to_metadata(self, data): return [{'key': key, 'value': value} for key, value in data.items()] def _test_extract_flavor(self, prefix): instance_type = objects.Flavor.get_by_name(self.context, 'm1.small') instance_type_p = obj_base.obj_to_primitive(instance_type) metadata = {} flavors.save_flavor_info(metadata, instance_type, prefix) instance = {'system_metadata': self._dict_to_metadata(metadata)} _instance_type = flavors.extract_flavor(instance, prefix) _instance_type_p = obj_base.obj_to_primitive(_instance_type) props = flavors.system_metadata_flavor_props.keys() for key in list(instance_type_p.keys()): if key not in props: del instance_type_p[key] self.assertEqual(instance_type_p, _instance_type_p) def test_extract_flavor(self): self._test_extract_flavor('') def test_extract_flavor_no_sysmeta(self): instance = {} prefix = '' result = flavors.extract_flavor(instance, prefix) self.assertIsNone(result) def test_extract_flavor_prefix(self): self._test_extract_flavor('foo_') def test_save_flavor_info(self): instance_type = objects.Flavor.get_by_name(self.context, 'm1.small') example = {} example_prefix = {} for key in flavors.system_metadata_flavor_props.keys(): example['instance_type_%s' % key] = instance_type[key] example_prefix['fooinstance_type_%s' % key] = instance_type[key] metadata = {} flavors.save_flavor_info(metadata, instance_type) self.assertEqual(example, metadata) metadata = {} flavors.save_flavor_info(metadata, instance_type, 'foo') self.assertEqual(example_prefix, metadata) def test_flavor_numa_extras_are_saved(self): instance_type = objects.Flavor.get_by_name(self.context, 'm1.small') instance_type['extra_specs'] = { 'hw:numa_mem.0': '123', 'hw:numa_cpus.0': '456', 'hw:numa_mem.1': '789', 'hw:numa_cpus.1': 'ABC', 'foo': 'bar', } sysmeta = flavors.save_flavor_info({}, instance_type) _instance_type = flavors.extract_flavor({'system_metadata': sysmeta}) expected_extra_specs = { 'hw:numa_mem.0': '123', 'hw:numa_cpus.0': '456', 'hw:numa_mem.1': '789', 'hw:numa_cpus.1': 'ABC', } self.assertEqual(expected_extra_specs, _instance_type['extra_specs']) class 
InstanceTypeFilteringTest(test.TestCase): """Test cases for the filter option available for instance_type_get_all.""" def setUp(self): super(InstanceTypeFilteringTest, self).setUp() self.context = context.get_admin_context() def assertFilterResults(self, filters, expected): inst_types = objects.FlavorList.get_all( self.context, filters=filters) inst_names = [i.name for i in inst_types] self.assertEqual(inst_names, expected) def test_no_filters(self): filters = None expected = ['m1.tiny', 'm1.small', 'm1.medium', 'm1.large', 'm1.xlarge', 'm1.tiny.specs'] self.assertFilterResults(filters, expected) def test_min_memory_mb_filter(self): # Exclude tiny instance which is 512 MB. filters = dict(min_memory_mb=513) expected = ['m1.small', 'm1.medium', 'm1.large', 'm1.xlarge'] self.assertFilterResults(filters, expected) def test_min_root_gb_filter(self): # Exclude everything but large and xlarge which have >= 80 GB. filters = dict(min_root_gb=80) expected = ['m1.large', 'm1.xlarge'] self.assertFilterResults(filters, expected) def test_min_memory_mb_AND_root_gb_filter(self): # Exclude everything but large and xlarge which have >= 80 GB. filters = dict(min_memory_mb=16384, min_root_gb=80) expected = ['m1.xlarge'] self.assertFilterResults(filters, expected) class CreateInstanceTypeTest(test.TestCase): def assertInvalidInput(self, *create_args, **create_kwargs): self.assertRaises(exception.InvalidInput, flavors.create, *create_args, **create_kwargs) def test_memory_must_be_positive_db_integer(self): self.assertInvalidInput('flavor1', 'foo', 1, 120) self.assertInvalidInput('flavor1', -1, 1, 120) self.assertInvalidInput('flavor1', 0, 1, 120) self.assertInvalidInput('flavor1', db.MAX_INT + 1, 1, 120) flavors.create('flavor1', 1, 1, 120) def test_vcpus_must_be_positive_db_integer(self): self.assertInvalidInput('flavor`', 64, 'foo', 120) self.assertInvalidInput('flavor1', 64, -1, 120) self.assertInvalidInput('flavor1', 64, 0, 120) self.assertInvalidInput('flavor1', 64, db.MAX_INT + 1, 120) flavors.create('flavor1', 64, 1, 120) def test_root_gb_must_be_nonnegative_db_integer(self): self.assertInvalidInput('flavor1', 64, 1, 'foo') self.assertInvalidInput('flavor1', 64, 1, -1) self.assertInvalidInput('flavor1', 64, 1, db.MAX_INT + 1) flavors.create('flavor1', 64, 1, 0) flavors.create('flavor2', 64, 1, 120) def test_ephemeral_gb_must_be_nonnegative_db_integer(self): self.assertInvalidInput('flavor1', 64, 1, 120, ephemeral_gb='foo') self.assertInvalidInput('flavor1', 64, 1, 120, ephemeral_gb=-1) self.assertInvalidInput('flavor1', 64, 1, 120, ephemeral_gb=db.MAX_INT + 1) flavors.create('flavor1', 64, 1, 120, ephemeral_gb=0) flavors.create('flavor2', 64, 1, 120, ephemeral_gb=120) def test_swap_must_be_nonnegative_db_integer(self): self.assertInvalidInput('flavor1', 64, 1, 120, swap='foo') self.assertInvalidInput('flavor1', 64, 1, 120, swap=-1) self.assertInvalidInput('flavor1', 64, 1, 120, swap=db.MAX_INT + 1) flavors.create('flavor1', 64, 1, 120, swap=0) flavors.create('flavor2', 64, 1, 120, swap=1) def test_rxtx_factor_must_be_positive_float(self): self.assertInvalidInput('flavor1', 64, 1, 120, rxtx_factor='foo') self.assertInvalidInput('flavor1', 64, 1, 120, rxtx_factor=-1.0) self.assertInvalidInput('flavor1', 64, 1, 120, rxtx_factor=0.0) flavor = flavors.create('flavor1', 64, 1, 120, rxtx_factor=1.0) self.assertEqual(1.0, flavor.rxtx_factor) flavor = flavors.create('flavor2', 64, 1, 120, rxtx_factor=1.1) self.assertEqual(1.1, flavor.rxtx_factor) def test_rxtx_factor_must_be_within_sql_float_range(self): # We do 
* 10 since this is an approximation and we need to make sure
        # the difference is noticeable.
        over_rxtx_factor = db.SQL_SP_FLOAT_MAX * 10
        self.assertInvalidInput('flavor1', 64, 1, 120,
                                rxtx_factor=over_rxtx_factor)
        flavor = flavors.create('flavor2', 64, 1, 120,
                                rxtx_factor=db.SQL_SP_FLOAT_MAX)
        self.assertEqual(db.SQL_SP_FLOAT_MAX, flavor.rxtx_factor)

    def test_is_public_must_be_valid_bool_string(self):
        self.assertInvalidInput('flavor1', 64, 1, 120, is_public='foo')
        flavors.create('flavor1', 64, 1, 120, is_public='TRUE')
        flavors.create('flavor2', 64, 1, 120, is_public='False')
        flavors.create('flavor3', 64, 1, 120, is_public='Yes')
        flavors.create('flavor4', 64, 1, 120, is_public='No')
        flavors.create('flavor5', 64, 1, 120, is_public='Y')
        flavors.create('flavor6', 64, 1, 120, is_public='N')
        flavors.create('flavor7', 64, 1, 120, is_public='1')
        flavors.create('flavor8', 64, 1, 120, is_public='0')
        flavors.create('flavor9', 64, 1, 120, is_public='true')

    def test_flavorid_populated(self):
        flavor1 = flavors.create('flavor1', 64, 1, 120)
        self.assertIsNotNone(flavor1.flavorid)
        flavor2 = flavors.create('flavor2', 64, 1, 120, flavorid='')
        self.assertIsNotNone(flavor2.flavorid)
        flavor3 = flavors.create('flavor3', 64, 1, 120, flavorid='foo')
        self.assertEqual('foo', flavor3.flavorid)

    def test_default_values(self):
        flavor1 = flavors.create('flavor1', 64, 1, 120)
        self.assertIsNotNone(flavor1.flavorid)
        self.assertEqual(flavor1.ephemeral_gb, 0)
        self.assertEqual(flavor1.swap, 0)
        self.assertEqual(flavor1.rxtx_factor, 1.0)

    def test_basic_create(self):
        # Ensure instance types can be created.
        ctxt = context.get_admin_context()
        original_list = objects.FlavorList.get_all(ctxt)

        # Create new type and make sure values stick
        flavor = flavors.create('flavor', 64, 1, 120)
        self.assertEqual(flavor.name, 'flavor')
        self.assertEqual(flavor.memory_mb, 64)
        self.assertEqual(flavor.vcpus, 1)
        self.assertEqual(flavor.root_gb, 120)

        # Ensure new type shows up in list
        new_list = objects.FlavorList.get_all(ctxt)
        self.assertNotEqual(len(original_list), len(new_list),
                            'flavor was not created')

    def test_create_then_delete(self):
        ctxt = context.get_admin_context()
        original_list = objects.FlavorList.get_all(ctxt)
        flavor = flavors.create('flavor', 64, 1, 120)

        # Ensure new type shows up in list
        new_list = objects.FlavorList.get_all(ctxt)
        self.assertNotEqual(len(original_list), len(new_list),
                            'instance type was not created')

        flavor.destroy()
        self.assertRaises(exception.FlavorNotFound,
                          objects.Flavor.get_by_name, ctxt, flavor.name)

        # Deleted instance should not be in list anymore
        new_list = objects.FlavorList.get_all(ctxt)
        self.assertEqual(len(original_list), len(new_list))
        for i, f in enumerate(original_list):
            self.assertIsInstance(f, objects.Flavor)
            self.assertEqual(f.flavorid, new_list[i].flavorid)

    def test_duplicate_names_fail(self):
        # Ensures that name duplicates raise FlavorExists
        flavors.create('flavor', 256, 1, 120, 200, 'flavor1')
        self.assertRaises(exception.FlavorExists,
                          flavors.create,
                          'flavor', 64, 1, 120)

    def test_duplicate_flavorids_fail(self):
        # Ensures that flavorid duplicates raise FlavorExists
        flavors.create('flavor1', 64, 1, 120, flavorid='flavorid')
        self.assertRaises(exception.FlavorIdExists,
                          flavors.create,
                          'flavor2', 64, 1, 120,
                          flavorid='flavorid')

././@PaxHeader
nova-21.2.4/nova/tests/unit/test_hacking.py
# Copyright 2014 Red Hat, Inc.
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import textwrap import mock import pycodestyle from nova.hacking import checks from nova import test class HackingTestCase(test.NoDBTestCase): """This class tests the hacking checks in nova.hacking.checks by passing strings to the check methods like the pycodestyle/flake8 parser would. The parser loops over each line in the file and then passes the parameters to the check method. The parameter names in the check method dictate what type of object is passed to the check method. The parameter types are:: logical_line: A processed line with the following modifications: - Multi-line statements converted to a single line. - Stripped left and right. - Contents of strings replaced with "xxx" of same length. - Comments removed. physical_line: Raw line of text from the input file. lines: a list of the raw lines from the input file tokens: the tokens that contribute to this logical line line_number: line number in the input file total_lines: number of lines in the input file blank_lines: blank lines before this one indent_char: indentation character in this file (" " or "\t") indent_level: indentation (with tabs expanded to multiples of 8) previous_indent_level: indentation on previous line previous_logical: previous logical line filename: Path of the file being run through pycodestyle When running a test on a check method the return will be False/None if there is no violation in the sample input. If there is an error a tuple is returned with a position in the line, and a message. So to check the result just assertTrue if the check is expected to fail and assertFalse if it should pass. 
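
    A minimal, hypothetical check following that protocol (for illustration
    only; it is not one of the real nova.hacking.checks functions) could look
    like::

        def check_no_print_statements(logical_line):
            if 'print(' in logical_line:
                return 0, "NXXX: use LOG.* rather than print()"

    Returning nothing means the line is clean; returning the tuple flags a
    violation at column 0 with the given message.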
""" def test_virt_driver_imports(self): expect = (0, "N311: importing code from other virt drivers forbidden") self.assertEqual(expect, checks.import_no_virt_driver_import_deps( "from nova.virt.libvirt import utils as libvirt_utils", "./nova/virt/xenapi/driver.py")) self.assertEqual(expect, checks.import_no_virt_driver_import_deps( "import nova.virt.libvirt.utils as libvirt_utils", "./nova/virt/xenapi/driver.py")) self.assertIsNone(checks.import_no_virt_driver_import_deps( "from nova.virt.libvirt import utils as libvirt_utils", "./nova/virt/libvirt/driver.py")) def test_virt_driver_config_vars(self): self.assertIsInstance(checks.import_no_virt_driver_config_deps( "CONF.import_opt('volume_drivers', " "'nova.virt.libvirt.driver', group='libvirt')", "./nova/virt/xenapi/driver.py"), tuple) self.assertIsNone(checks.import_no_virt_driver_config_deps( "CONF.import_opt('volume_drivers', " "'nova.virt.libvirt.driver', group='libvirt')", "./nova/virt/libvirt/volume.py")) def test_assert_true_instance(self): self.assertEqual(len(list(checks.assert_true_instance( "self.assertTrue(isinstance(e, " "exception.BuildAbortException))"))), 1) self.assertEqual( len(list(checks.assert_true_instance("self.assertTrue()"))), 0) def test_assert_equal_type(self): self.assertEqual(len(list(checks.assert_equal_type( "self.assertEqual(type(als['QuicAssist']), list)"))), 1) self.assertEqual( len(list(checks.assert_equal_type("self.assertTrue()"))), 0) def test_assert_equal_in(self): self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual(a in b, True)"))), 1) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual('str' in 'string', True)"))), 1) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual(any(a==1 for a in b), True)"))), 0) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual(True, a in b)"))), 1) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual(True, 'str' in 'string')"))), 1) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual(True, any(a==1 for a in b))"))), 0) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual(a in b, False)"))), 1) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual('str' in 'string', False)"))), 1) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual(any(a==1 for a in b), False)"))), 0) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual(False, a in b)"))), 1) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual(False, 'str' in 'string')"))), 1) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual(False, any(a==1 for a in b))"))), 0) def test_assert_true_or_false_with_in_or_not_in(self): self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertTrue(A in B)"))), 1) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertFalse(A in B)"))), 1) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertTrue(A not in B)"))), 1) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertFalse(A not in B)"))), 1) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertTrue(A in B, 'some message')"))), 1) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertFalse(A in B, 'some message')"))), 1) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertTrue(A not in B, 'some message')"))), 1) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertFalse(A not in B, 'some 
message')"))), 1) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertTrue(A in 'some string with spaces')"))), 1) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertTrue(A in ['1', '2', '3'])"))), 1) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertTrue(A in [1, 2, 3])"))), 1) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertTrue(any(A > 5 for A in B))"))), 0) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertTrue(any(A > 5 for A in B), 'some message')"))), 0) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertFalse(some in list1 and some2 in list2)"))), 0) def test_no_translate_debug_logs(self): self.assertEqual(len(list(checks.no_translate_debug_logs( "LOG.debug(_('foo'))", "nova/scheduler/foo.py"))), 1) self.assertEqual(len(list(checks.no_translate_debug_logs( "LOG.debug('foo')", "nova/scheduler/foo.py"))), 0) self.assertEqual(len(list(checks.no_translate_debug_logs( "LOG.info(_('foo'))", "nova/scheduler/foo.py"))), 0) def test_no_setting_conf_directly_in_tests(self): self.assertEqual(len(list(checks.no_setting_conf_directly_in_tests( "CONF.option = 1", "nova/tests/test_foo.py"))), 1) self.assertEqual(len(list(checks.no_setting_conf_directly_in_tests( "CONF.group.option = 1", "nova/tests/test_foo.py"))), 1) self.assertEqual(len(list(checks.no_setting_conf_directly_in_tests( "CONF.option = foo = 1", "nova/tests/test_foo.py"))), 1) # Shouldn't fail with comparisons self.assertEqual(len(list(checks.no_setting_conf_directly_in_tests( "CONF.option == 'foo'", "nova/tests/test_foo.py"))), 0) self.assertEqual(len(list(checks.no_setting_conf_directly_in_tests( "CONF.option != 1", "nova/tests/test_foo.py"))), 0) # Shouldn't fail since not in nova/tests/ self.assertEqual(len(list(checks.no_setting_conf_directly_in_tests( "CONF.option = 1", "nova/compute/foo.py"))), 0) def test_no_mutable_default_args(self): self.assertEqual(1, len(list(checks.no_mutable_default_args( "def get_info_from_bdm(virt_type, bdm, mapping=[])")))) self.assertEqual(0, len(list(checks.no_mutable_default_args( "defined = []")))) self.assertEqual(0, len(list(checks.no_mutable_default_args( "defined, undefined = [], {}")))) def test_check_explicit_underscore_import(self): self.assertEqual(len(list(checks.check_explicit_underscore_import( "LOG.info(_('My info message'))", "cinder/tests/other_files.py"))), 1) self.assertEqual(len(list(checks.check_explicit_underscore_import( "msg = _('My message')", "cinder/tests/other_files.py"))), 1) self.assertEqual(len(list(checks.check_explicit_underscore_import( "from cinder.i18n import _", "cinder/tests/other_files.py"))), 0) self.assertEqual(len(list(checks.check_explicit_underscore_import( "LOG.info(_('My info message'))", "cinder/tests/other_files.py"))), 0) self.assertEqual(len(list(checks.check_explicit_underscore_import( "msg = _('My message')", "cinder/tests/other_files.py"))), 0) self.assertEqual(len(list(checks.check_explicit_underscore_import( "from cinder.i18n import _, _LW", "cinder/tests/other_files2.py"))), 0) self.assertEqual(len(list(checks.check_explicit_underscore_import( "msg = _('My message')", "cinder/tests/other_files2.py"))), 0) self.assertEqual(len(list(checks.check_explicit_underscore_import( "_ = translations.ugettext", "cinder/tests/other_files3.py"))), 0) self.assertEqual(len(list(checks.check_explicit_underscore_import( "msg = _('My message')", "cinder/tests/other_files3.py"))), 0) def test_use_jsonutils(self): def 
__get_msg(fun): msg = ("N324: jsonutils.%(fun)s must be used instead of " "json.%(fun)s" % {'fun': fun}) return [(0, msg)] for method in ('dump', 'dumps', 'load', 'loads'): self.assertEqual( __get_msg(method), list(checks.use_jsonutils("json.%s(" % method, "./nova/virt/xenapi/driver.py"))) self.assertEqual(0, len(list(checks.use_jsonutils("json.%s(" % method, "./plugins/xenserver/script.py")))) self.assertEqual(0, len(list(checks.use_jsonutils("jsonx.%s(" % method, "./nova/virt/xenapi/driver.py")))) self.assertEqual(0, len(list(checks.use_jsonutils("json.dumb", "./nova/virt/xenapi/driver.py")))) # We are patching pycodestyle so that only the check under test is actually # installed. @mock.patch('pycodestyle._checks', {'physical_line': {}, 'logical_line': {}, 'tree': {}}) def _run_check(self, code, checker, filename=None): pycodestyle.register_check(checker) lines = textwrap.dedent(code).lstrip().splitlines(True) checker = pycodestyle.Checker(filename=filename, lines=lines) # NOTE(sdague): the standard reporter has printing to stdout # as a normal part of check_all, which bleeds through to the # test output stream in an unhelpful way. This blocks that printing. with mock.patch('pycodestyle.StandardReport.get_file_results'): checker.check_all() checker.report._deferred_print.sort() return checker.report._deferred_print def _assert_has_errors(self, code, checker, expected_errors=None, filename=None): actual_errors = [e[:3] for e in self._run_check(code, checker, filename)] self.assertEqual(expected_errors or [], actual_errors) def _assert_has_no_errors(self, code, checker, filename=None): self._assert_has_errors(code, checker, filename=filename) def test_str_unicode_exception(self): checker = checks.CheckForStrUnicodeExc code = """ def f(a, b): try: p = str(a) + str(b) except ValueError as e: p = str(e) return p """ errors = [(5, 16, 'N325')] self._assert_has_errors(code, checker, expected_errors=errors) code = """ def f(a, b): try: p = unicode(a) + str(b) except ValueError as e: p = e return p """ self._assert_has_no_errors(code, checker) code = """ def f(a, b): try: p = str(a) + str(b) except ValueError as e: p = unicode(e) return p """ errors = [(5, 20, 'N325')] self._assert_has_errors(code, checker, expected_errors=errors) code = """ def f(a, b): try: p = str(a) + str(b) except ValueError as e: try: p = unicode(a) + unicode(b) except ValueError as ve: p = str(e) + str(ve) p = e return p """ errors = [(8, 20, 'N325'), (8, 29, 'N325')] self._assert_has_errors(code, checker, expected_errors=errors) code = """ def f(a, b): try: p = str(a) + str(b) except ValueError as e: try: p = unicode(a) + unicode(b) except ValueError as ve: p = str(e) + unicode(ve) p = str(e) return p """ errors = [(8, 20, 'N325'), (8, 33, 'N325'), (9, 16, 'N325')] self._assert_has_errors(code, checker, expected_errors=errors) def test_api_version_decorator_check(self): code = """ @some_other_decorator @wsgi.api_version("2.5") def my_method(): pass """ self._assert_has_errors(code, checks.check_api_version_decorator, expected_errors=[(2, 0, "N332")]) def test_oslo_assert_raises_regexp(self): code = """ self.assertRaisesRegexp(ValueError, "invalid literal for.*XYZ'$", int, 'XYZ') """ self._assert_has_errors(code, checks.assert_raises_regexp, expected_errors=[(1, 0, "N335")]) def test_api_version_decorator_check_no_errors(self): codelist = [ """ class ControllerClass(): @wsgi.api_version("2.5") def my_method(): pass """, """ @some_other_decorator @mock.patch('foo', return_value=api_versions.APIVersion("2.5")) def 
my_method(): pass """] for code in codelist: self._assert_has_no_errors( code, checks.check_api_version_decorator) def test_trans_add(self): checker = checks.CheckForTransAdd code = """ def fake_tran(msg): return msg _ = fake_tran _LI = _ _LW = _ _LE = _ _LC = _ def f(a, b): msg = _('test') + 'add me' msg = _LI('test') + 'add me' msg = _LW('test') + 'add me' msg = _LE('test') + 'add me' msg = _LC('test') + 'add me' msg = 'add to me' + _('test') return msg """ errors = [(13, 10, 'N326'), (14, 10, 'N326'), (15, 10, 'N326'), (16, 10, 'N326'), (17, 10, 'N326'), (18, 24, 'N326')] self._assert_has_errors(code, checker, expected_errors=errors) code = """ def f(a, b): msg = 'test' + 'add me' return msg """ self._assert_has_no_errors(code, checker) def test_dict_constructor_with_list_copy(self): self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dict([(i, connect_info[i])")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " attrs = dict([(k, _from_json(v))")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " type_names = dict((value, key) for key, value in")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dict((value, key) for key, value in")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( "foo(param=dict((k, v) for k, v in bar.items()))")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dict([[i,i] for i in range(3)])")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dd = dict([i,i] for i in range(3))")))) self.assertEqual(0, len(list(checks.dict_constructor_with_list_copy( " create_kwargs = dict(snapshot=snapshot,")))) self.assertEqual(0, len(list(checks.dict_constructor_with_list_copy( " self._render_dict(xml, data_el, data.__dict__)")))) def test_check_http_not_implemented(self): code = """ except NotImplementedError: common.raise_http_not_implemented_error() """ filename = "nova/api/openstack/compute/v21/test.py" self._assert_has_no_errors(code, checks.check_http_not_implemented, filename=filename) code = """ except NotImplementedError: msg = _("Unable to set password on instance") raise exc.HTTPNotImplemented(explanation=msg) """ errors = [(3, 4, 'N339')] self._assert_has_errors(code, checks.check_http_not_implemented, expected_errors=errors, filename=filename) def test_check_contextlib_use(self): code = """ with test.nested( mock.patch.object(network_model.NetworkInfo, 'hydrate'), mock.patch.object(objects.InstanceInfoCache, 'save'), ) as ( hydrate_mock, save_mock ) """ filename = "nova/api/openstack/compute/v21/test.py" self._assert_has_no_errors(code, checks.check_no_contextlib_nested, filename=filename) code = """ with contextlib.nested( mock.patch.object(network_model.NetworkInfo, 'hydrate'), mock.patch.object(objects.InstanceInfoCache, 'save'), ) as ( hydrate_mock, save_mock ) """ filename = "nova/api/openstack/compute/legacy_v2/test.py" errors = [(1, 0, 'N341')] self._assert_has_errors(code, checks.check_no_contextlib_nested, expected_errors=errors, filename=filename) def test_check_greenthread_spawns(self): errors = [(1, 0, "N340")] code = "greenthread.spawn(func, arg1, kwarg1=kwarg1)" self._assert_has_errors(code, checks.check_greenthread_spawns, expected_errors=errors) code = "greenthread.spawn_n(func, arg1, kwarg1=kwarg1)" self._assert_has_errors(code, checks.check_greenthread_spawns, expected_errors=errors) code = "eventlet.greenthread.spawn(func, arg1, kwarg1=kwarg1)" self._assert_has_errors(code, 
checks.check_greenthread_spawns, expected_errors=errors) code = "eventlet.spawn(func, arg1, kwarg1=kwarg1)" self._assert_has_errors(code, checks.check_greenthread_spawns, expected_errors=errors) code = "eventlet.spawn_n(func, arg1, kwarg1=kwarg1)" self._assert_has_errors(code, checks.check_greenthread_spawns, expected_errors=errors) code = "nova.utils.spawn(func, arg1, kwarg1=kwarg1)" self._assert_has_no_errors(code, checks.check_greenthread_spawns) code = "nova.utils.spawn_n(func, arg1, kwarg1=kwarg1)" self._assert_has_no_errors(code, checks.check_greenthread_spawns) def test_config_option_regex_match(self): def should_match(code): self.assertTrue(checks.cfg_opt_re.match(code)) def should_not_match(code): self.assertFalse(checks.cfg_opt_re.match(code)) should_match("opt = cfg.StrOpt('opt_name')") should_match("opt = cfg.IntOpt('opt_name')") should_match("opt = cfg.DictOpt('opt_name')") should_match("opt = cfg.Opt('opt_name')") should_match("opts=[cfg.Opt('opt_name')]") should_match(" cfg.Opt('opt_name')") should_not_match("opt_group = cfg.OptGroup('opt_group_name')") def test_check_config_option_in_central_place(self): errors = [(1, 0, "N342")] code = """ opts = [ cfg.StrOpt('random_opt', default='foo', help='I am here to do stuff'), ] """ # option at the right place in the tree self._assert_has_no_errors(code, checks.check_config_option_in_central_place, filename="nova/conf/serial_console.py") # option at the wrong place in the tree self._assert_has_errors(code, checks.check_config_option_in_central_place, filename="nova/cmd/serialproxy.py", expected_errors=errors) # option at a location which is marked as an exception # TODO(macsz) remove testing exceptions as they are removed from # check_config_option_in_central_place self._assert_has_no_errors(code, checks.check_config_option_in_central_place, filename="nova/cmd/manage.py") self._assert_has_no_errors(code, checks.check_config_option_in_central_place, filename="nova/tests/dummy_test.py") def test_check_doubled_words(self): errors = [(1, 0, "N343")] # Explicit addition of line-ending here and below since this isn't a # block comment and without it we trigger #1804062. 
Artificial break is # necessary to stop flake8 detecting the test code = "'This is the" + " the best comment'\n" self._assert_has_errors(code, checks.check_doubled_words, expected_errors=errors) code = "'This is the then best comment'\n" self._assert_has_no_errors(code, checks.check_doubled_words) def test_dict_iteritems(self): self.assertEqual(1, len(list(checks.check_python3_no_iteritems( "obj.iteritems()")))) self.assertEqual(0, len(list(checks.check_python3_no_iteritems( "six.iteritems(ob))")))) def test_dict_iterkeys(self): self.assertEqual(1, len(list(checks.check_python3_no_iterkeys( "obj.iterkeys()")))) self.assertEqual(0, len(list(checks.check_python3_no_iterkeys( "six.iterkeys(ob))")))) def test_dict_itervalues(self): self.assertEqual(1, len(list(checks.check_python3_no_itervalues( "obj.itervalues()")))) self.assertEqual(0, len(list(checks.check_python3_no_itervalues( "six.itervalues(ob))")))) def test_no_os_popen(self): code = """ import os foobar_cmd = "foobar -get -beer" answer = os.popen(foobar_cmd).read() if answer == nok": try: os.popen(os.popen('foobar -beer -please')).read() except ValueError: go_home() """ errors = [(4, 0, 'N348'), (8, 8, 'N348')] self._assert_has_errors(code, checks.no_os_popen, expected_errors=errors) def test_no_log_warn(self): code = """ LOG.warn("LOG.warn is deprecated") """ errors = [(1, 0, 'N352')] self._assert_has_errors(code, checks.no_log_warn, expected_errors=errors) code = """ LOG.warning("LOG.warn is deprecated") """ self._assert_has_no_errors(code, checks.no_log_warn) def test_uncalled_closures(self): checker = checks.CheckForUncalledTestClosure code = """ def test_fake_thing(): def _test(): pass """ self._assert_has_errors(code, checker, expected_errors=[(1, 0, 'N349')]) code = """ def test_fake_thing(): def _test(): pass _test() """ self._assert_has_no_errors(code, checker) code = """ def test_fake_thing(): def _test(): pass self.assertRaises(FakeExcepion, _test) """ self._assert_has_no_errors(code, checker) def test_check_policy_registration_in_central_place(self): errors = [(3, 0, "N350")] code = """ from nova import policy policy.RuleDefault('context_is_admin', 'role:admin') """ # registration in the proper place self._assert_has_no_errors( code, checks.check_policy_registration_in_central_place, filename="nova/policies/base.py") # option at a location which is not in scope right now self._assert_has_errors( code, checks.check_policy_registration_in_central_place, filename="nova/api/openstack/compute/non_existent.py", expected_errors=errors) def test_check_policy_enforce(self): errors = [(3, 0, "N351")] code = """ from nova import policy policy._ENFORCER.enforce('context_is_admin', target, credentials) """ self._assert_has_errors(code, checks.check_policy_enforce, expected_errors=errors) def test_check_policy_enforce_does_not_catch_other_enforce(self): # Simulate a different enforce method defined in Nova code = """ from nova import foo foo.enforce() """ self._assert_has_no_errors(code, checks.check_policy_enforce) def test_check_python3_xrange(self): func = checks.check_python3_xrange self.assertEqual(1, len(list(func('for i in xrange(10)')))) self.assertEqual(1, len(list(func('for i in xrange (10)')))) self.assertEqual(0, len(list(func('for i in range(10)')))) self.assertEqual(0, len(list(func('for i in six.moves.range(10)')))) def test_log_context(self): code = """ LOG.info(_LI("Rebooting instance"), context=context, instance=instance) """ errors = [(1, 0, 'N353')] self._assert_has_errors(code, checks.check_context_log, 
expected_errors=errors) code = """ LOG.info(_LI("Rebooting instance"), context=admin_context, instance=instance) """ errors = [(1, 0, 'N353')] self._assert_has_errors(code, checks.check_context_log, expected_errors=errors) code = """ LOG.info(_LI("Rebooting instance"), instance=instance) """ self._assert_has_no_errors(code, checks.check_context_log) def test_no_assert_equal_true_false(self): code = """ self.assertEqual(context_is_admin, True) self.assertEqual(context_is_admin, False) self.assertEqual(True, context_is_admin) self.assertEqual(False, context_is_admin) self.assertNotEqual(context_is_admin, True) self.assertNotEqual(context_is_admin, False) self.assertNotEqual(True, context_is_admin) self.assertNotEqual(False, context_is_admin) """ errors = [(1, 0, 'N355'), (2, 0, 'N355'), (3, 0, 'N355'), (4, 0, 'N355'), (5, 0, 'N355'), (6, 0, 'N355'), (7, 0, 'N355'), (8, 0, 'N355')] self._assert_has_errors(code, checks.no_assert_equal_true_false, expected_errors=errors) code = """ self.assertEqual(context_is_admin, stuff) self.assertNotEqual(context_is_admin, stuff) """ self._assert_has_no_errors(code, checks.no_assert_equal_true_false) def test_no_assert_true_false_is_not(self): code = """ self.assertTrue(test is None) self.assertTrue(False is my_variable) self.assertFalse(None is test) self.assertFalse(my_variable is False) """ errors = [(1, 0, 'N356'), (2, 0, 'N356'), (3, 0, 'N356'), (4, 0, 'N356')] self._assert_has_errors(code, checks.no_assert_true_false_is_not, expected_errors=errors) def test_check_uuid4(self): code = """ fake_uuid = uuid.uuid4() hex_uuid = uuid.uuid4().hex """ errors = [(1, 0, 'N357'), (2, 0, 'N357')] self._assert_has_errors(code, checks.check_uuid4, expected_errors=errors) code = """ int_uuid = uuid.uuid4().int urn_uuid = uuid.uuid4().urn variant_uuid = uuid.uuid4().variant version_uuid = uuid.uuid4().version """ self._assert_has_no_errors(code, checks.check_uuid4) def test_return_followed_by_space(self): code = """ return(42) return(a, b) return(' some string ') """ errors = [(1, 0, 'N358'), (2, 0, 'N358'), (3, 0, 'N358')] self._assert_has_errors(code, checks.return_followed_by_space, expected_errors=errors) code = """ return return 42 return (a, b) return a, b return ' some string ' """ self._assert_has_no_errors(code, checks.return_followed_by_space) def test_no_redundant_import_alias(self): code = """ from x import y as y import x as x import x.y.z as z import x.y.z as y.z """ errors = [(x + 1, 0, 'N359') for x in range(4)] self._assert_has_errors(code, checks.no_redundant_import_alias, expected_errors=errors) code = """ from x import y import x from x.y import z from a import bcd as cd import ab.cd.efg as fg import ab.cd.efg as d.efg """ self._assert_has_no_errors(code, checks.no_redundant_import_alias) def test_yield_followed_by_space(self): code = """ yield(x, y) yield{"type": "test"} yield[a, b, c] yield"test" yield'test' """ errors = [(x + 1, 0, 'N360') for x in range(5)] self._assert_has_errors(code, checks.yield_followed_by_space, expected_errors=errors) code = """ yield x yield (x, y) yield {"type": "test"} yield [a, b, c] yield "test" yield 'test' yieldx_func(a, b) """ self._assert_has_no_errors(code, checks.yield_followed_by_space) def test_assert_regexpmatches(self): code = """ self.assertRegexpMatches("Test", output) self.assertNotRegexpMatches("Notmatch", output) """ errors = [(x + 1, 0, 'N361') for x in range(2)] self._assert_has_errors(code, checks.assert_regexpmatches, expected_errors=errors) code = """ self.assertRegexpMatchesfoo("Test", output) 
self.assertNotRegexpMatchesbar("Notmatch", output) """ self._assert_has_no_errors(code, checks.assert_regexpmatches) def test_import_alias_privsep(self): code = """ from nova import privsep import nova.privsep as nova_privsep from nova.privsep import linux_net import nova.privsep.linux_net as privsep_linux_net """ errors = [(x + 1, 0, 'N362') for x in range(4)] bad_filenames = ('nova/foo/bar.py', 'nova/foo/privsep.py', 'nova/privsep_foo/bar.py') for filename in bad_filenames: self._assert_has_errors( code, checks.privsep_imports_not_aliased, expected_errors=errors, filename=filename) good_filenames = ('nova/privsep.py', 'nova/privsep/__init__.py', 'nova/privsep/foo.py') for filename in good_filenames: self._assert_has_no_errors( code, checks.privsep_imports_not_aliased, filename=filename) code = """ import nova.privsep import nova.privsep.foo import nova.privsep.foo.bar import nova.foo.privsep import nova.foo.privsep.bar import nova.tests.unit.whatever """ for filename in (good_filenames + bad_filenames): self._assert_has_no_errors( code, checks.privsep_imports_not_aliased, filename=filename) def test_did_you_mean_tuple(self): code = """ if foo in (bar): if foo in ('bar'): if foo in (path.to.CONST_1234): if foo in ( bar): """ errors = [(x + 1, 0, 'N363') for x in range(4)] self._assert_has_errors( code, checks.did_you_mean_tuple, expected_errors=errors) code = """ def in(this_would_be_weird): # A match in (any) comment doesn't count if foo in (bar,) or foo in ('bar',) or foo in ("bar",) or foo in (set1 + set2) or foo in ("string continuations " "are probably okay") or foo in (method_call_should_this_work()): """ self._assert_has_no_errors(code, checks.did_you_mean_tuple) def test_nonexistent_assertion_methods_and_attributes(self): code = """ mock_sample.called_once() mock_sample.called_once_with(a, "TEST") mock_sample.has_calls([mock.call(x), mock.call(y)]) mock_sample.mock_assert_called() mock_sample.mock_assert_called_once() mock_sample.mock_assert_called_with(a, "xxxx") mock_sample.mock_assert_called_once_with(a, b) mock_sample.mock_assert_any_call(1, 2, instance=instance) sample.mock_assert_has_calls([mock.call(x), mock.call(y)]) mock_sample.mock_assert_not_called() mock_sample.asser_called() mock_sample.asser_called_once() mock_sample.asser_called_with(a, "xxxx") mock_sample.asser_called_once_with(a, b) mock_sample.asser_any_call(1, 2, instance=instance) mock_sample.asser_has_calls([mock.call(x), mock.call(y)]) mock_sample.asser_not_called() mock_sample.asset_called() mock_sample.asset_called_once() mock_sample.asset_called_with(a, "xxxx") mock_sample.asset_called_once_with(a, b) mock_sample.asset_any_call(1, 2, instance=instance) mock_sample.asset_has_calls([mock.call(x), mock.call(y)]) mock_sample.asset_not_called() mock_sample.asssert_called() mock_sample.asssert_called_once() mock_sample.asssert_called_with(a, "xxxx") mock_sample.asssert_called_once_with(a, b) mock_sample.asssert_any_call(1, 2, instance=instance) mock_sample.asssert_has_calls([mock.call(x), mock.call(y)]) mock_sample.asssert_not_called() mock_sample.assset_called() mock_sample.assset_called_once() mock_sample.assset_called_with(a, "xxxx") mock_sample.assset_called_once_with(a, b) mock_sample.assset_any_call(1, 2, instance=instance) mock_sample.assset_has_calls([mock.call(x), mock.call(y)]) mock_sample.assset_not_called() mock_sample.retrun_value = 100 sample.call_method(mock_sample.retrun_value, 100) mock.Mock(retrun_value=100) """ errors = [(x + 1, 0, 'N364') for x in range(41)] # Check errors in 'nova/tests' 
directory. self._assert_has_errors( code, checks.nonexistent_assertion_methods_and_attributes, expected_errors=errors, filename="nova/tests/unit/test_context.py") # Check no errors in other than 'nova/tests' directory. self._assert_has_no_errors( code, checks.nonexistent_assertion_methods_and_attributes, filename="nova/compute/api.py") code = """ mock_sample.assert_called() mock_sample.assert_called_once() mock_sample.assert_called_with(a, "xxxx") mock_sample.assert_called_once_with(a, b) mock_sample.assert_any_call(1, 2, instance=instance) mock_sample.assert_has_calls([mock.call(x), mock.call(y)]) mock_sample.assert_not_called() mock_sample.return_value = 100 sample.call_method(mock_sample.return_value, 100) mock.Mock(return_value=100) sample.has_other_calls([1, 2, 3, 4]) sample.get_called_once("TEST") sample.check_called_once_with("TEST") mock_assert_method.assert_called() test.asset_has_all(test) test_retrun_value = 99 test.retrun_values = 100 """ self._assert_has_no_errors( code, checks.nonexistent_assertion_methods_and_attributes, filename="nova/tests/unit/test_context.py") def test_useless_assertion(self): code = """ self.assertIsNone(None, status) self.assertTrue(True, flag) self.assertTrue(10, count) self.assertTrue('active', status) self.assertTrue("building", status) """ errors = [(x + 1, 0, 'N365') for x in range(5)] # Check errors in 'nova/tests' directory. self._assert_has_errors( code, checks.useless_assertion, expected_errors=errors, filename="nova/tests/unit/test_context.py") # Check no errors in other than 'nova/tests' directory. self._assert_has_no_errors( code, checks.useless_assertion, filename="nova/compute/api.py") code = """ self.assertIsNone(None_test_var, "Fails") self.assertTrue(True_test_var, 'Fails') self.assertTrue(var2, "Fails") self.assertTrue(test_class.is_active('active'), 'Fails') self.assertTrue(check_status("building"), 'Fails') """ self._assert_has_no_errors( code, checks.useless_assertion, filename="nova/tests/unit/test_context.py") ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_hooks.py0000664000175000017500000001354600000000000020417 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
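# NOTE: illustrative sketch only, not part of the nova source tree. It shows
# why the N364 check exercised above exists: unittest.mock auto-creates any
# attribute you access, so a misspelled assertion method silently does
# nothing instead of failing the test. The names below are invented purely
# for the demonstration and this helper is never called by the test suite.
def _demo_misspelled_mock_assertion():
    from unittest import mock

    client = mock.Mock()
    client.connect()
    # Typo for assert_called_once(): Mock simply creates and returns a new
    # child mock, so this "assertion" can never fail, even if connect() was
    # never called.
    client.connect.called_once()
    # The real assertion raises AssertionError when connect() was not called.
    client.connect.assert_called_once()
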
"""Tests for hook customization.""" import stevedore from nova import hooks from nova import test class SampleHookA(object): name = "a" def _add_called(self, op, kwargs): called = kwargs.get('called', None) if called is not None: called.append(op + self.name) def pre(self, *args, **kwargs): self._add_called("pre", kwargs) class SampleHookB(SampleHookA): name = "b" def post(self, rv, *args, **kwargs): self._add_called("post", kwargs) class SampleHookC(SampleHookA): name = "c" def pre(self, f, *args, **kwargs): self._add_called("pre" + f.__name__, kwargs) def post(self, f, rv, *args, **kwargs): self._add_called("post" + f.__name__, kwargs) class SampleHookExceptionPre(SampleHookA): name = "epre" exception = Exception() def pre(self, f, *args, **kwargs): raise self.exception class SampleHookExceptionPost(SampleHookA): name = "epost" exception = Exception() def post(self, f, rv, *args, **kwargs): raise self.exception class MockEntryPoint(object): def __init__(self, cls): self.cls = cls def load(self): return self.cls class MockedHookTestCase(test.BaseHookTestCase): PLUGINS = [] def setUp(self): super(MockedHookTestCase, self).setUp() hooks.reset() hook_manager = hooks.HookManager.make_test_instance(self.PLUGINS) self.stub_out('nova.hooks.HookManager', lambda x: hook_manager) class HookTestCase(MockedHookTestCase): PLUGINS = [ stevedore.extension.Extension('test_hook', MockEntryPoint(SampleHookA), SampleHookA, SampleHookA()), stevedore.extension.Extension('test_hook', MockEntryPoint(SampleHookB), SampleHookB, SampleHookB()), ] def setUp(self): super(HookTestCase, self).setUp() hooks.reset() @hooks.add_hook('test_hook') def _hooked(self, a, b=1, c=2, called=None): return 42 def test_basic(self): self.assertEqual(42, self._hooked(1)) mgr = hooks._HOOKS['test_hook'] self.assert_has_hook('test_hook', self._hooked) self.assertEqual(2, len(mgr.extensions)) self.assertEqual(SampleHookA, mgr.extensions[0].plugin) self.assertEqual(SampleHookB, mgr.extensions[1].plugin) def test_order_of_execution(self): called_order = [] self._hooked(42, called=called_order) self.assertEqual(['prea', 'preb', 'postb'], called_order) class HookTestCaseWithFunction(MockedHookTestCase): PLUGINS = [ stevedore.extension.Extension('function_hook', MockEntryPoint(SampleHookC), SampleHookC, SampleHookC()), ] @hooks.add_hook('function_hook', pass_function=True) def _hooked(self, a, b=1, c=2, called=None): return 42 def test_basic(self): self.assertEqual(42, self._hooked(1)) mgr = hooks._HOOKS['function_hook'] self.assert_has_hook('function_hook', self._hooked) self.assertEqual(1, len(mgr.extensions)) self.assertEqual(SampleHookC, mgr.extensions[0].plugin) def test_order_of_execution(self): called_order = [] self._hooked(42, called=called_order) self.assertEqual(['pre_hookedc', 'post_hookedc'], called_order) class HookFailPreTestCase(MockedHookTestCase): PLUGINS = [ stevedore.extension.Extension('fail_pre', MockEntryPoint(SampleHookExceptionPre), SampleHookExceptionPre, SampleHookExceptionPre()), ] @hooks.add_hook('fail_pre', pass_function=True) def _hooked(self, a, b=1, c=2, called=None): return 42 def test_hook_fail_should_still_return(self): self.assertEqual(42, self._hooked(1)) mgr = hooks._HOOKS['fail_pre'] self.assert_has_hook('fail_pre', self._hooked) self.assertEqual(1, len(mgr.extensions)) self.assertEqual(SampleHookExceptionPre, mgr.extensions[0].plugin) def test_hook_fail_should_raise_fatal(self): self.stub_out('nova.tests.unit.test_hooks.' 
                      'SampleHookExceptionPre.exception',
                      hooks.FatalHookException())
        self.assertRaises(hooks.FatalHookException, self._hooked, 1)


class HookFailPostTestCase(MockedHookTestCase):
    PLUGINS = [
        stevedore.extension.Extension('fail_post',
                                      MockEntryPoint(SampleHookExceptionPost),
                                      SampleHookExceptionPost,
                                      SampleHookExceptionPost()),
    ]

    @hooks.add_hook('fail_post', pass_function=True)
    def _hooked(self, a, b=1, c=2, called=None):
        return 42

    def test_hook_fail_should_still_return(self):
        self.assertEqual(42, self._hooked(1))

        mgr = hooks._HOOKS['fail_post']
        self.assert_has_hook('fail_post', self._hooked)
        self.assertEqual(1, len(mgr.extensions))
        self.assertEqual(SampleHookExceptionPost, mgr.extensions[0].plugin)

    def test_hook_fail_should_raise_fatal(self):
        self.stub_out('nova.tests.unit.test_hooks.'
                      'SampleHookExceptionPost.exception',
                      hooks.FatalHookException())
        self.assertRaises(hooks.FatalHookException, self._hooked, 1)

././@PaxHeader
nova-21.2.4/nova/tests/unit/test_identity.py
# Copyright 2017 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from keystoneauth1.adapter import Adapter
from keystoneauth1 import exceptions as kse
import webob

from nova.api.openstack import identity
from nova import test
from nova.tests.unit import fake_requests


class IdentityValidationTest(test.NoDBTestCase):
    """Unit tests for our validation of keystone projects.

    There are times when Nova stores keystone project_id and user_id in
    our database as strings. Until the Pike release none of this data was
    validated, so it was very easy for administrators to think they were
    adjusting quota for a project (by name) when instead they were just
    inserting keys in a database that would not get used.

    This is only tested in unit tests through mocking out keystoneauth
    responses because a functional test would need a real keystone or
    keystone simulator.

    The functional code works by using the existing keystone credentials
    and trying to make a /v3/projects/{id} get call. It will return a 403
    if the user doesn't have enough permissions to ask about other
    projects, a 404 if it does and that project does not exist.
    """

    def setUp(self):
        super(IdentityValidationTest, self).setUp()
        get_adap_p = mock.patch('nova.utils.get_ksa_adapter')
        self.addCleanup(get_adap_p.stop)
        self.mock_get_adap = get_adap_p.start()
        self.mock_adap = mock.create_autospec(Adapter)
        self.mock_get_adap.return_value = self.mock_adap

    def validate_common(self):
        self.mock_get_adap.assert_called_once_with(
            'identity', ksa_auth=mock.ANY,
            min_version=(3, 0), max_version=(3, 'latest'))
        self.mock_adap.get.assert_called_once_with('/projects/foo')

    def test_good_id(self):
        """Test response 200.

        This indicates we have permissions, and we have definitively
        found the project exists.
""" self.mock_adap.get.return_value = fake_requests.FakeResponse(200) self.assertTrue(identity.verify_project_id(mock.MagicMock(), "foo")) self.validate_common() def test_no_project(self): """Test response 404. This indicates that we have permissions, and we have definitively found the project does not exist. """ self.mock_adap.get.return_value = fake_requests.FakeResponse(404) self.assertRaises(webob.exc.HTTPBadRequest, identity.verify_project_id, mock.MagicMock(), "foo") self.validate_common() def test_unknown_id(self): """Test response 403. This indicates we don't have permissions. We fail open here and assume the project exists. """ self.mock_adap.get.return_value = fake_requests.FakeResponse(403) self.assertTrue(identity.verify_project_id(mock.MagicMock(), "foo")) self.validate_common() def test_unknown_error(self): """Test some other return from keystone. If we got anything else, something is wrong on the keystone side. We don't want to fail on our side. """ self.mock_adap.get.return_value = fake_requests.FakeResponse( 500, content="Oh noes!") self.assertTrue(identity.verify_project_id(mock.MagicMock(), "foo")) self.validate_common() def test_early_fail(self): """Test if we get a keystoneauth exception. If we get a random keystoneauth exception, fall back and assume the project exists. """ self.mock_adap.get.side_effect = kse.ConnectionError() self.assertTrue(identity.verify_project_id(mock.MagicMock(), "foo")) self.validate_common() def test_wrong_version(self): """Test endpoint not found. EndpointNotFound will be made when the keystone v3 API is not found in the service catalog, or if the v2.0 endpoint was registered as the root endpoint. We treat this the same as 404. """ self.mock_adap.get.side_effect = kse.EndpointNotFound() self.assertRaises(webob.exc.HTTPBadRequest, identity.verify_project_id, mock.MagicMock(), "foo") self.validate_common() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_instance_types_extra_specs.py0000664000175000017500000001061200000000000024713 0ustar00zuulzuul00000000000000# Copyright 2011 University of Southern California # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Unit Tests for instance types extra specs code """ from nova import context from nova import objects from nova.objects import fields from nova import test class InstanceTypeExtraSpecsTestCase(test.TestCase): def setUp(self): super(InstanceTypeExtraSpecsTestCase, self).setUp() self.context = context.get_admin_context() flavor = objects.Flavor(context=self.context, name="cg1.4xlarge", memory_mb=22000, vcpus=8, root_gb=1690, ephemeral_gb=2000, flavorid=105) self.specs = dict(cpu_arch=fields.Architecture.X86_64, cpu_model="Nehalem", xpu_arch="fermi", xpus="2", xpu_model="Tesla 2050") flavor.extra_specs = self.specs flavor.create() self.flavor = flavor self.instance_type_id = flavor.id self.flavorid = flavor.flavorid def tearDown(self): # Remove the instance type from the database self.flavor.destroy() super(InstanceTypeExtraSpecsTestCase, self).tearDown() def test_instance_type_specs_get(self): flavor = objects.Flavor.get_by_flavor_id(self.context, self.flavorid) self.assertEqual(self.specs, flavor.extra_specs) def test_flavor_extra_specs_delete(self): del self.specs["xpu_model"] del self.flavor.extra_specs['xpu_model'] self.flavor.save() flavor = objects.Flavor.get_by_flavor_id(self.context, self.flavorid) self.assertEqual(self.specs, flavor.extra_specs) def test_instance_type_extra_specs_update(self): self.specs["cpu_model"] = "Sandy Bridge" self.flavor.extra_specs["cpu_model"] = "Sandy Bridge" self.flavor.save() flavor = objects.Flavor.get_by_flavor_id(self.context, self.flavorid) self.assertEqual(self.specs, flavor.extra_specs) def test_instance_type_extra_specs_create(self): net_attrs = { "net_arch": "ethernet", "net_mbps": "10000" } self.specs.update(net_attrs) self.flavor.extra_specs.update(net_attrs) self.flavor.save() flavor = objects.Flavor.get_by_flavor_id(self.context, self.flavorid) self.assertEqual(self.specs, flavor.extra_specs) def test_instance_type_get_with_extra_specs(self): flavor = objects.Flavor.get_by_id(self.context, 5) self.assertEqual(flavor.extra_specs, {}) def test_instance_type_get_by_name_with_extra_specs(self): flavor = objects.Flavor.get_by_name(self.context, "cg1.4xlarge") self.assertEqual(flavor.extra_specs, self.specs) flavor = objects.Flavor.get_by_name(self.context, "m1.small") self.assertEqual(flavor.extra_specs, {}) def test_instance_type_get_by_flavor_id_with_extra_specs(self): flavor = objects.Flavor.get_by_flavor_id(self.context, 105) self.assertEqual(flavor.extra_specs, self.specs) flavor = objects.Flavor.get_by_flavor_id(self.context, 2) self.assertEqual(flavor.extra_specs, {}) def test_instance_type_get_all(self): flavors = objects.FlavorList.get_all(self.context) name2specs = {flavor.name: flavor.extra_specs for flavor in flavors} self.assertEqual(name2specs['cg1.4xlarge'], self.specs) self.assertEqual(name2specs['m1.small'], {}) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_json_ref.py0000664000175000017500000001510400000000000021071 0ustar00zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import copy import mock from nova import test from nova.tests import json_ref class TestJsonRef(test.NoDBTestCase): def test_update_dict_recursively(self): input = {'foo': 1, 'bar': 13, 'a list': [], 'nesting': { 'baz': 42, 'foobar': 121 }} d = copy.deepcopy(input) json_ref._update_dict_recursively(d, {}) self.assertDictEqual(input, d) d = copy.deepcopy(input) json_ref._update_dict_recursively(d, {'foo': 111, 'new_key': 1, 'nesting': { 'baz': 142, 'new_nested': 1 }}) expected = copy.deepcopy(input) expected['foo'] = 111 expected['new_key'] = 1 expected['nesting']['baz'] = 142 expected['nesting']['new_nested'] = 1 self.assertDictEqual(expected, d) d = copy.deepcopy(input) json_ref._update_dict_recursively(d, {'nesting': 1}) expected = copy.deepcopy(input) expected['nesting'] = 1 self.assertDictEqual(expected, d) d = copy.deepcopy(input) @mock.patch('oslo_serialization.jsonutils.load') @mock.patch('six.moves.builtins.open', new_callable=mock.mock_open()) def test_resolve_ref(self, mock_open, mock_json_load): mock_json_load.return_value = {'baz': 13} actual = json_ref.resolve_refs( {'foo': 1, 'bar': {'$ref': 'another.json#'}}, 'some/base/path/') self.assertDictEqual({'foo': 1, 'bar': {'baz': 13}}, actual) mock_open.assert_called_once_with('some/base/path/another.json', 'r+b') @mock.patch('oslo_serialization.jsonutils.load') @mock.patch('six.moves.builtins.open', new_callable=mock.mock_open()) def test_resolve_ref_recursively(self, mock_open, mock_json_load): mock_json_load.side_effect = [ # this is the content of direct_ref.json {'baz': 13, 'nesting': {'$ref': 'subdir/nested_ref.json#'}}, # this is the content of subdir/nested_ref.json {'a deep key': 'happiness'}] actual = json_ref.resolve_refs( {'foo': 1, 'bar': {'$ref': 'direct_ref.json#'}}, 'some/base/path/') self.assertDictEqual({'foo': 1, 'bar': {'baz': 13, 'nesting': {'a deep key': 'happiness'}}}, actual) mock_open.assert_any_call('some/base/path/direct_ref.json', 'r+b') mock_open.assert_any_call('some/base/path/subdir/nested_ref.json', 'r+b') @mock.patch('oslo_serialization.jsonutils.load') @mock.patch('six.moves.builtins.open', new_callable=mock.mock_open()) def test_resolve_ref_with_override(self, mock_open, mock_json_load): mock_json_load.return_value = {'baz': 13, 'boo': 42} actual = json_ref.resolve_refs( {'foo': 1, 'bar': {'$ref': 'another.json#', 'boo': 0}}, 'some/base/path/') self.assertDictEqual({'foo': 1, 'bar': {'baz': 13, 'boo': 0}}, actual) mock_open.assert_called_once_with('some/base/path/another.json', 'r+b') @mock.patch('oslo_serialization.jsonutils.load') @mock.patch('six.moves.builtins.open', new_callable=mock.mock_open()) def test_resolve_ref_with_nested_override(self, mock_open, mock_json_load): mock_json_load.return_value = {'baz': 13, 'boo': {'a': 1, 'b': 2}} actual = json_ref.resolve_refs( {'foo': 1, 'bar': {'$ref': 'another.json#', 'boo': {'b': 3, 'c': 13}}}, 'some/base/path/') self.assertDictEqual({'foo': 1, 'bar': {'baz': 13, 'boo': {'a': 1, 'b': 3, 'c': 13}}}, actual) mock_open.assert_called_once_with('some/base/path/another.json', 'r+b') @mock.patch('oslo_serialization.jsonutils.load') @mock.patch('six.moves.builtins.open', new_callable=mock.mock_open()) def test_resolve_ref_with_override_having_refs(self, mock_open, mock_json_load): mock_json_load.side_effect = [ {'baz': 13, 'boo': 42}, {'something': 0}] actual = json_ref.resolve_refs( {'foo': 1, 'bar': {'$ref': 'another.json#', 'boo': {'$ref': 
'override_ref.json#'}}}, 'some/base/path/') self.assertDictEqual({'foo': 1, 'bar': {'baz': 13, 'boo': {'something': 0}}}, actual) self.assertEqual(2, mock_open.call_count) # any_order=True is needed as context manager calls also done on open mock_open.assert_has_calls( [mock.call('some/base/path/another.json', 'r+b'), mock.call('some/base/path/override_ref.json', 'r+b')], any_order=True) def test_ref_with_json_path_not_supported(self): self.assertRaises( NotImplementedError, json_ref.resolve_refs, {'foo': 1, 'bar': {'$ref': 'another.json#/key-in-another', 'boo': {'b': 3, 'c': 13}}}, 'some/base/path/') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/test_loadables.py0000664000175000017500000001246100000000000021215 0ustar00zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Loadable class handling. """ from nova import exception from nova import test from nova.tests.unit import fake_loadables class LoadablesTestCase(test.NoDBTestCase): def setUp(self): super(LoadablesTestCase, self).setUp() self.fake_loader = fake_loadables.FakeLoader() # The name that we imported above for testing self.test_package = 'nova.tests.unit.fake_loadables' def test_loader_init(self): self.assertEqual(self.fake_loader.package, self.test_package) # Test the path of the module ending_path = '/' + self.test_package.replace('.', '/') self.assertTrue(self.fake_loader.path.endswith(ending_path)) self.assertEqual(self.fake_loader.loadable_cls_type, fake_loadables.FakeLoadable) def _compare_classes(self, classes, expected): class_names = [cls.__name__ for cls in classes] self.assertEqual(set(class_names), set(expected)) def test_get_all_classes(self): classes = self.fake_loader.get_all_classes() expected_class_names = ['FakeLoadableSubClass1', 'FakeLoadableSubClass2', 'FakeLoadableSubClass5', 'FakeLoadableSubClass6'] self._compare_classes(classes, expected_class_names) def test_get_matching_classes(self): prefix = self.test_package test_classes = [prefix + '.fake_loadable1.FakeLoadableSubClass1', prefix + '.fake_loadable2.FakeLoadableSubClass5'] classes = self.fake_loader.get_matching_classes(test_classes) expected_class_names = ['FakeLoadableSubClass1', 'FakeLoadableSubClass5'] self._compare_classes(classes, expected_class_names) def test_get_matching_classes_with_underscore(self): prefix = self.test_package test_classes = [prefix + '.fake_loadable1.FakeLoadableSubClass1', prefix + '.fake_loadable2._FakeLoadableSubClass7'] self.assertRaises(exception.ClassNotFound, self.fake_loader.get_matching_classes, test_classes) def test_get_matching_classes_with_wrong_type1(self): prefix = self.test_package test_classes = [prefix + '.fake_loadable1.FakeLoadableSubClass4', prefix + '.fake_loadable2.FakeLoadableSubClass5'] self.assertRaises(exception.ClassNotFound, self.fake_loader.get_matching_classes, test_classes) def test_get_matching_classes_with_wrong_type2(self): 
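        # Same shape as the wrong_type1 case above, but here the class that
        # should be rejected (FakeLoadableSubClass8) comes from the second
        # fake module; presumably it is not a FakeLoadable subclass, so
        # get_matching_classes is again expected to raise ClassNotFound.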
prefix = self.test_package test_classes = [prefix + '.fake_loadable1.FakeLoadableSubClass1', prefix + '.fake_loadable2.FakeLoadableSubClass8'] self.assertRaises(exception.ClassNotFound, self.fake_loader.get_matching_classes, test_classes) def test_get_matching_classes_with_one_function(self): prefix = self.test_package test_classes = [prefix + '.fake_loadable1.return_valid_classes', prefix + '.fake_loadable2.FakeLoadableSubClass5'] classes = self.fake_loader.get_matching_classes(test_classes) expected_class_names = ['FakeLoadableSubClass1', 'FakeLoadableSubClass2', 'FakeLoadableSubClass5'] self._compare_classes(classes, expected_class_names) def test_get_matching_classes_with_two_functions(self): prefix = self.test_package test_classes = [prefix + '.fake_loadable1.return_valid_classes', prefix + '.fake_loadable2.return_valid_class'] classes = self.fake_loader.get_matching_classes(test_classes) expected_class_names = ['FakeLoadableSubClass1', 'FakeLoadableSubClass2', 'FakeLoadableSubClass6'] self._compare_classes(classes, expected_class_names) def test_get_matching_classes_with_function_including_invalids(self): # When using a method, no checking is done on valid classes. prefix = self.test_package test_classes = [prefix + '.fake_loadable1.return_invalid_classes', prefix + '.fake_loadable2.return_valid_class'] classes = self.fake_loader.get_matching_classes(test_classes) expected_class_names = ['FakeLoadableSubClass1', '_FakeLoadableSubClass3', 'FakeLoadableSubClass4', 'FakeLoadableSubClass6'] self._compare_classes(classes, expected_class_names) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/test_matchers.py0000664000175000017500000004062300000000000021076 0ustar00zuulzuul00000000000000# Copyright 2012 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from collections import OrderedDict import pprint import testtools from testtools.tests.matchers import helpers from nova.tests.unit import matchers class TestDictMatches(testtools.TestCase, helpers.TestMatchersInterface): matches_dict = OrderedDict(sorted({'foo': 'bar', 'baz': 'DONTCARE', 'cat': {'tabby': True, 'fluffy': False}}.items())) matches_matcher = matchers.DictMatches( matches_dict ) matches_matches = [ {'foo': 'bar', 'baz': 'noox', 'cat': {'tabby': True, 'fluffy': False}}, {'foo': 'bar', 'baz': 'quux', 'cat': {'tabby': True, 'fluffy': False}}, ] matches_mismatches = [ {}, {'foo': 'bar', 'baz': 'qux'}, {'foo': 'bop', 'baz': 'qux', 'cat': {'tabby': True, 'fluffy': False}}, {'foo': 'bar', 'baz': 'quux', 'cat': {'tabby': True, 'fluffy': True}}, {'foo': 'bar', 'cat': {'tabby': True, 'fluffy': False}}, ] str_examples = [ ('DictMatches(%s)' % (pprint.pformat(matches_dict)), matches_matcher), ] describe_examples = [ ("Keys in d1 and not d2: {0}. Keys in d2 and not d1: []" .format(str(sorted(matches_dict.keys()))), {}, matches_matcher), ("Dictionaries do not match at fluffy. 
d1: False d2: True", {'foo': 'bar', 'baz': 'quux', 'cat': {'tabby': True, 'fluffy': True}}, matches_matcher), ("Dictionaries do not match at foo. d1: bar d2: bop", {'foo': 'bop', 'baz': 'quux', 'cat': {'tabby': True, 'fluffy': False}}, matches_matcher), ] class TestDictListMatches(testtools.TestCase, helpers.TestMatchersInterface): matches_matcher = matchers.DictListMatches( [{'foo': 'bar', 'baz': 'DONTCARE', 'cat': {'tabby': True, 'fluffy': False}}, {'dog': 'yorkie'}, ]) matches_matches = [ [{'foo': 'bar', 'baz': 'qoox', 'cat': {'tabby': True, 'fluffy': False}}, {'dog': 'yorkie'}], [{'foo': 'bar', 'baz': False, 'cat': {'tabby': True, 'fluffy': False}}, {'dog': 'yorkie'}], ] matches_mismatches = [ [], {}, [{'foo': 'bar', 'baz': 'qoox', 'cat': {'tabby': True, 'fluffy': True}}, {'dog': 'yorkie'}], [{'foo': 'bar', 'baz': False, 'cat': {'tabby': True, 'fluffy': False}}, {'cat': 'yorkie'}], [{'foo': 'bop', 'baz': False, 'cat': {'tabby': True, 'fluffy': False}}, {'dog': 'yorkie'}], ] str_examples = [ ("DictListMatches([{'baz': 'DONTCARE', 'cat':" " {'fluffy': False, 'tabby': True}, 'foo': 'bar'},\n" " {'dog': 'yorkie'}])", matches_matcher), ] describe_examples = [ ("Length mismatch: len(L1)=2 != len(L2)=0", {}, matches_matcher), ("Dictionaries do not match at fluffy. d1: True d2: False", [{'foo': 'bar', 'baz': 'qoox', 'cat': {'tabby': True, 'fluffy': True}}, {'dog': 'yorkie'}], matches_matcher), ] class TestIsSubDictOf(testtools.TestCase, helpers.TestMatchersInterface): matches_matcher = matchers.IsSubDictOf( OrderedDict(sorted({'foo': 'bar', 'baz': 'DONTCARE', 'cat': {'tabby': True, 'fluffy': False}}.items())) ) matches_matches = [ {'foo': 'bar', 'baz': 'noox', 'cat': {'tabby': True, 'fluffy': False}}, {'foo': 'bar', 'baz': 'quux'} ] matches_mismatches = [ {'foo': 'bop', 'baz': 'qux', 'cat': {'tabby': True, 'fluffy': False}}, {'foo': 'bar', 'baz': 'quux', 'cat': {'tabby': True, 'fluffy': True}}, {'foo': 'bar', 'cat': {'tabby': True, 'fluffy': False}, 'dog': None}, ] str_examples = [ ("IsSubDictOf({0})".format( str(OrderedDict(sorted({'foo': 'bar', 'baz': 'DONTCARE', 'cat': {'tabby': True, 'fluffy': False}}.items())))), matches_matcher), ] describe_examples = [ ("Dictionaries do not match at fluffy. d1: False d2: True", {'foo': 'bar', 'baz': 'quux', 'cat': {'tabby': True, 'fluffy': True}}, matches_matcher), ("Dictionaries do not match at foo. 
d1: bar d2: bop", {'foo': 'bop', 'baz': 'quux', 'cat': {'tabby': True, 'fluffy': False}}, matches_matcher), ] class TestXMLMatches(testtools.TestCase, helpers.TestMatchersInterface): matches_matcher = matchers.XMLMatches(""" some text here some other text here child 1 child 2 DONTCARE """, allow_mixed_nodes=False) matches_matches = [""" some text here some other text here child 1 child 2 child 3 """, """ some text here some other text here child 1 child 2 blah """, ] matches_mismatches = [""" some text here mismatch text child 1 child 2 child 3 """, """ some text here some other text here child 1 child 2 child 3 """, """ some text here some other text here child 1 child 2 child 3 """, """ some text here some other text here child 1 child 4 child 2 child 3 """, """ some text here some other text here child 1 child 2 """, """ some text here some other text here child 1 child 2 child 3 child 4 """, """ some text here some other text here child 2 child 1 DONTCARE """, """ some text here some other text here child 1 child 2 DONTCARE """, ] str_examples = [ ("XMLMatches('\\n" "\\n" " some text here\\n" " some other text here\\n" " \\n" " \\n" " \\n" " child 1\\n" " child 2\\n" " DONTCARE\\n" " \\n" " \\n" "')", matches_matcher), ] describe_examples = [ ("/root/text[1]: XML text value mismatch: expected text value: " "['some other text here']; actual value: ['mismatch text']", """ some text here mismatch text child 1 child 2 child 3 """, matches_matcher), ("/root/attrs[2]: XML attributes mismatch: keys only in expected: " "key2; keys only in actual: key3", """ some text here some other text here child 1 child 2 child 3 """, matches_matcher), ("/root/attrs[2]: XML attribute value mismatch: expected value of " "attribute key1: 'spam'; actual value: 'quux'", """ some text here some other text here child 1 child 2 child 3 """, matches_matcher), ("/root/children[3]: XML tag mismatch at index 1: expected tag " "; actual tag ", """ some text here some other text here child 1 child 4 child 2 child 3 """, matches_matcher), ("/root/children[3]: XML expected child element not " "present at index 2", """ some text here some other text here child 1 child 2 """, matches_matcher), ("/root/children[3]: XML unexpected child element " "present at index 3", """ some text here some other text here child 1 child 2 child 3 child 4 """, matches_matcher), ("/root/children[3]: XML tag mismatch at index 0: " "expected tag ; actual tag ", """ some text here some other text here child 2 child 1 child 3 """, matches_matcher), ("/: XML information mismatch(version, encoding) " "expected version 1.0, expected encoding UTF-8; " "actual version 1.1, actual encoding UTF-8", """ some text here some other text here child 1 child 2 DONTCARE """, matches_matcher), ] class TestXMLMatchesUnorderedNodes(testtools.TestCase, helpers.TestMatchersInterface): matches_matcher = matchers.XMLMatches(""" some text here some other text here DONTCARE child 2 child 1 """, allow_mixed_nodes=True) matches_matches = [""" some text here child 1 child 2 child 3 some other text here """, ] matches_mismatches = [""" some text here mismatch text child 1 child 2 child 3 """, ] describe_examples = [ ("/root: XML expected child element not present", """ some text here mismatch text child 1 child 2 child 3 """, matches_matcher), ] str_examples = [] class TestXMLMatchesOrderedMissingChildren(testtools.TestCase, helpers.TestMatchersInterface): matches_matcher = matchers.XMLMatches(""" subchild """, allow_mixed_nodes=False) matches_matches = [] matches_mismatches 
= [] describe_examples = [ ("/root/children[0]/child2[1]: XML expected child element not " "present at index 0", """ """, matches_matcher), ] str_examples = [] class TestXMLMatchesUnorderedMissingChildren(testtools.TestCase, helpers.TestMatchersInterface): matches_matcher = matchers.XMLMatches(""" subchild """, allow_mixed_nodes=True) matches_matches = [] matches_mismatches = [] describe_examples = [ ("/root/children[0]/child2[1]: XML expected child element not " "present", """ """, matches_matcher), ] str_examples = [] class TestXMLMatchesOrderedExtraChildren(testtools.TestCase, helpers.TestMatchersInterface): matches_matcher = matchers.XMLMatches(""" """, allow_mixed_nodes=False) matches_matches = [] matches_mismatches = [] describe_examples = [ ("/root/children[0]/child2[1]: XML unexpected child element " "present at index 0", """ subchild """, matches_matcher), ] str_examples = [] class TestXMLMatchesUnorderedExtraChildren(testtools.TestCase, helpers.TestMatchersInterface): matches_matcher = matchers.XMLMatches(""" """, allow_mixed_nodes=True) matches_matches = [] matches_mismatches = [] describe_examples = [ ("/root/children[0]/child2[1]: XML unexpected child element " "present at index 0", """ subchild """, matches_matcher), ] str_examples = [] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_metadata.py0000664000175000017500000022522700000000000021055 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
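# NOTE: the tests in this module exercise both metadata trees served by the
# nova metadata API: the EC2-compatible tree (e.g. /2009-04-04/meta-data/
# and /2009-04-04/user-data) and the OpenStack tree (e.g.
# /openstack/<version>/meta_data.json, user_data, network_data.json and
# vendor_data.json / vendor_data2.json). The dated version directories gate
# which keys are exposed; see the _check_os_version assertions below.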
"""Tests for metadata service.""" import copy import hashlib import hmac import os import re try: # python 2 import pickle except ImportError: # python 3 import cPickle as pickle from keystoneauth1 import exceptions as ks_exceptions from keystoneauth1 import session import mock from oslo_config import cfg from oslo_serialization import base64 from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils.fixture import uuidsentinel as uuids import requests import six import webob from nova.api.metadata import base from nova.api.metadata import handler from nova.api.metadata import password from nova.api.metadata import vendordata_dynamic from nova import block_device from nova import context from nova import exception from nova.network import model as network_model from nova.network import neutron as neutronapi from nova import objects from nova.objects import virt_device_metadata as metadata_obj from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_block_device from nova.tests.unit import fake_network from nova.tests.unit import fake_requests from nova import utils from nova.virt import netutils CONF = cfg.CONF USER_DATA_STRING = (b"This is an encoded string") ENCODE_USER_DATA_STRING = base64.encode_as_text(USER_DATA_STRING) FAKE_SEED = '7qtD24mpMR2' def fake_inst_obj(context): inst = objects.Instance( context=context, id=1, user_id='fake_user', uuid='b65cee2f-8c69-4aeb-be2f-f79742548fc2', project_id='test', key_name="key", key_data="ssh-rsa AAAAB3Nzai....N3NtHw== someuser@somehost", host='test', launch_index=1, reservation_id='r-xxxxxxxx', user_data=ENCODE_USER_DATA_STRING, image_ref=uuids.image_ref, kernel_id=None, ramdisk_id=None, vcpus=1, fixed_ips=[], root_device_name='/dev/sda1', hostname='test', display_name='my_displayname', metadata={}, device_metadata=fake_metadata_objects(), default_ephemeral_device=None, default_swap_device=None, system_metadata={}, security_groups=objects.SecurityGroupList(), availability_zone='fake-az') inst.keypairs = objects.KeyPairList(objects=[ fake_keypair_obj(inst.key_name, inst.key_data)]) nwinfo = network_model.NetworkInfo([]) inst.info_cache = objects.InstanceInfoCache(context=context, instance_uuid=inst.uuid, network_info=nwinfo) inst.flavor = objects.Flavor.get_by_name(context, 'm1.small') return inst def fake_keypair_obj(name, data): return objects.KeyPair(name=name, type='fake_type', public_key=data) def fake_InstanceMetadata(testcase, inst_data, address=None, sgroups=None, content=None, extra_md=None, network_info=None, network_metadata=None): content = content or [] extra_md = extra_md or {} if sgroups is None: sgroups = [{'name': 'default'}] fakes.stub_out_secgroup_api(testcase, security_groups=sgroups) return base.InstanceMetadata(inst_data, address=address, content=content, extra_md=extra_md, network_info=network_info, network_metadata=network_metadata) def fake_request(testcase, mdinst, relpath, address="127.0.0.1", fake_get_metadata=None, headers=None, fake_get_metadata_by_instance_id=None, app=None): def get_metadata_by_remote_address(self, address): return mdinst if app is None: app = handler.MetadataRequestHandler() if fake_get_metadata is None: fake_get_metadata = get_metadata_by_remote_address if testcase: testcase.stub_out( '%(module)s.%(class)s.get_metadata_by_remote_address' % {'module': app.__module__, 'class': app.__class__.__name__}, fake_get_metadata) if fake_get_metadata_by_instance_id: testcase.stub_out( 
'%(module)s.%(class)s.get_metadata_by_instance_id' % {'module': app.__module__, 'class': app.__class__.__name__}, fake_get_metadata_by_instance_id) request = webob.Request.blank(relpath) request.remote_addr = address if headers is not None: request.headers.update(headers) response = request.get_response(app) return response def fake_metadata_objects(): nic_obj = metadata_obj.NetworkInterfaceMetadata( bus=metadata_obj.PCIDeviceBus(address='0000:00:01.0'), mac='00:00:00:00:00:00', tags=['foo'] ) nic_vf_trusted_obj = metadata_obj.NetworkInterfaceMetadata( bus=metadata_obj.PCIDeviceBus(address='0000:00:02.0'), mac='00:11:22:33:44:55', vf_trusted=True, tags=['trusted'] ) nic_vlans_obj = metadata_obj.NetworkInterfaceMetadata( bus=metadata_obj.PCIDeviceBus(address='0000:80:01.0'), mac='e3:a0:d0:12:c5:10', vlan=1000, ) ide_disk_obj = metadata_obj.DiskMetadata( bus=metadata_obj.IDEDeviceBus(address='0:0'), serial='disk-vol-2352423', path='/dev/sda', tags=['baz'], ) scsi_disk_obj = metadata_obj.DiskMetadata( bus=metadata_obj.SCSIDeviceBus(address='05c8:021e:04a7:011b'), serial='disk-vol-2352423', path='/dev/sda', tags=['baz'], ) usb_disk_obj = metadata_obj.DiskMetadata( bus=metadata_obj.USBDeviceBus(address='05c8:021e'), serial='disk-vol-2352423', path='/dev/sda', tags=['baz'], ) fake_device_obj = metadata_obj.DeviceMetadata() device_with_fake_bus_obj = metadata_obj.NetworkInterfaceMetadata( bus=metadata_obj.DeviceBus(), mac='00:00:00:00:00:00', tags=['foo'] ) mdlist = metadata_obj.InstanceDeviceMetadata( instance_uuid='b65cee2f-8c69-4aeb-be2f-f79742548fc2', devices=[nic_obj, ide_disk_obj, scsi_disk_obj, usb_disk_obj, fake_device_obj, device_with_fake_bus_obj, nic_vlans_obj, nic_vf_trusted_obj]) return mdlist def fake_metadata_dicts(include_vlan=False, include_vf_trusted=False): nic_meta = { 'type': 'nic', 'bus': 'pci', 'address': '0000:00:01.0', 'mac': '00:00:00:00:00:00', 'tags': ['foo'], } vlan_nic_meta = { 'type': 'nic', 'bus': 'pci', 'address': '0000:80:01.0', 'mac': 'e3:a0:d0:12:c5:10', 'vlan': 1000, } vf_trusted_nic_meta = { 'type': 'nic', 'bus': 'pci', 'address': '0000:00:02.0', 'mac': '00:11:22:33:44:55', 'tags': ['trusted'], } ide_disk_meta = { 'type': 'disk', 'bus': 'ide', 'address': '0:0', 'serial': 'disk-vol-2352423', 'path': '/dev/sda', 'tags': ['baz'], } scsi_disk_meta = copy.copy(ide_disk_meta) scsi_disk_meta['bus'] = 'scsi' scsi_disk_meta['address'] = '05c8:021e:04a7:011b' usb_disk_meta = copy.copy(ide_disk_meta) usb_disk_meta['bus'] = 'usb' usb_disk_meta['address'] = '05c8:021e' dicts = [nic_meta, ide_disk_meta, scsi_disk_meta, usb_disk_meta, vf_trusted_nic_meta] if include_vlan: # NOTE(artom) Yeah, the order is important. dicts.insert(len(dicts) - 1, vlan_nic_meta) if include_vf_trusted: nic_meta['vf_trusted'] = False vlan_nic_meta['vf_trusted'] = False vf_trusted_nic_meta['vf_trusted'] = True return dicts class MetadataTestCase(test.TestCase): def setUp(self): super(MetadataTestCase, self).setUp() self.context = context.RequestContext('fake', 'fake') self.instance = fake_inst_obj(self.context) self.keypair = fake_keypair_obj(self.instance.key_name, self.instance.key_data) fakes.stub_out_secgroup_api(self) def test_can_pickle_metadata(self): # Make sure that InstanceMetadata is possible to pickle. This is # required for memcache backend to work correctly. 
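        # (The cache backend is assumed to serialize cached values with
        # pickle, so an unpicklable attribute on InstanceMetadata -- an open
        # socket, a lambda, etc. -- would break metadata caching.
        # protocol=0, the oldest ASCII-based pickle protocol, is used below
        # presumably as a lowest-common-denominator choice.)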
md = fake_InstanceMetadata(self, self.instance.obj_clone()) pickle.dumps(md, protocol=0) def test_user_data(self): inst = self.instance.obj_clone() inst['user_data'] = base64.encode_as_text("happy") md = fake_InstanceMetadata(self, inst) self.assertEqual( md.get_ec2_metadata(version='2009-04-04')['user-data'], b"happy") def test_no_user_data(self): inst = self.instance.obj_clone() inst.user_data = None md = fake_InstanceMetadata(self, inst) obj = object() self.assertEqual( md.get_ec2_metadata(version='2009-04-04').get('user-data', obj), obj) def test_security_groups(self): inst = self.instance.obj_clone() sgroups = [{'name': name} for name in ('default', 'other')] expected = ['default', 'other'] md = fake_InstanceMetadata(self, inst, sgroups=sgroups) data = md.get_ec2_metadata(version='2009-04-04') self.assertEqual(data['meta-data']['security-groups'], expected) def test_local_hostname(self): self.flags(dhcp_domain=None, group='api') md = fake_InstanceMetadata(self, self.instance.obj_clone()) data = md.get_ec2_metadata(version='2009-04-04') self.assertEqual(data['meta-data']['local-hostname'], self.instance['hostname']) def test_local_hostname_fqdn(self): self.flags(dhcp_domain='fakedomain', group='api') md = fake_InstanceMetadata(self, self.instance.obj_clone()) data = md.get_ec2_metadata(version='2009-04-04') self.assertEqual('%s.fakedomain' % self.instance['hostname'], data['meta-data']['local-hostname']) def test_format_instance_mapping(self): # Make sure that _format_instance_mappings works. instance_ref0 = objects.Instance(**{'id': 0, 'uuid': 'e5fe5518-0288-4fa3-b0c4-c79764101b85', 'root_device_name': None, 'default_ephemeral_device': None, 'default_swap_device': None, 'context': self.context}) instance_ref1 = objects.Instance(**{'id': 0, 'uuid': 'b65cee2f-8c69-4aeb-be2f-f79742548fc2', 'root_device_name': '/dev/sda1', 'default_ephemeral_device': None, 'default_swap_device': None, 'context': self.context}) def fake_bdm_get(ctxt, uuid): return [fake_block_device.FakeDbBlockDeviceDict( {'volume_id': 87654321, 'snapshot_id': None, 'no_device': None, 'source_type': 'volume', 'destination_type': 'volume', 'delete_on_termination': True, 'device_name': '/dev/sdh'}), fake_block_device.FakeDbBlockDeviceDict( {'volume_id': None, 'snapshot_id': None, 'no_device': None, 'source_type': 'blank', 'destination_type': 'local', 'guest_format': 'swap', 'delete_on_termination': None, 'device_name': '/dev/sdc'}), fake_block_device.FakeDbBlockDeviceDict( {'volume_id': None, 'snapshot_id': None, 'no_device': None, 'source_type': 'blank', 'destination_type': 'local', 'guest_format': None, 'delete_on_termination': None, 'device_name': '/dev/sdb'})] self.stub_out('nova.db.api.block_device_mapping_get_all_by_instance', fake_bdm_get) expected = {'ami': 'sda1', 'root': '/dev/sda1', 'ephemeral0': '/dev/sdb', 'swap': '/dev/sdc', 'ebs0': '/dev/sdh'} self.assertEqual( base._format_instance_mapping(instance_ref0), block_device._DEFAULT_MAPPINGS) self.assertEqual( base._format_instance_mapping(instance_ref1), expected) def test_pubkey(self): md = fake_InstanceMetadata(self, self.instance.obj_clone()) pubkey_ent = md.lookup("/2009-04-04/meta-data/public-keys") self.assertEqual(base.ec2_md_print(pubkey_ent), "0=%s" % self.instance['key_name']) self.assertEqual(base.ec2_md_print(pubkey_ent['0']['openssh-key']), self.instance['key_data']) def test_image_type_ramdisk(self): inst = self.instance.obj_clone() inst['ramdisk_id'] = uuids.ramdisk_id md = fake_InstanceMetadata(self, inst) data = 
md.lookup("/latest/meta-data/ramdisk-id") self.assertIsNotNone(data) self.assertTrue(re.match('ari-[0-9a-f]{8}', data)) def test_image_type_kernel(self): inst = self.instance.obj_clone() inst['kernel_id'] = uuids.kernel_id md = fake_InstanceMetadata(self, inst) data = md.lookup("/2009-04-04/meta-data/kernel-id") self.assertTrue(re.match('aki-[0-9a-f]{8}', data)) self.assertEqual( md.lookup("/ec2/2009-04-04/meta-data/kernel-id"), data) def test_image_type_no_kernel_raises(self): inst = self.instance.obj_clone() md = fake_InstanceMetadata(self, inst) self.assertRaises(base.InvalidMetadataPath, md.lookup, "/2009-04-04/meta-data/kernel-id") def test_instance_is_sanitized(self): inst = self.instance.obj_clone() # The instance already has some fake device_metadata stored on it, # and we want to test to see it gets lazy-loaded, so save off the # original attribute value and delete the attribute from the instance, # then we can assert it gets loaded up later. original_device_meta = inst.device_metadata delattr(inst, 'device_metadata') def fake_obj_load_attr(attrname): if attrname == 'device_metadata': inst.device_metadata = original_device_meta elif attrname == 'ec2_ids': inst.ec2_ids = objects.EC2Ids() else: self.fail('Unexpected instance lazy-load: %s' % attrname) inst._will_not_pass = True with mock.patch.object( inst, 'obj_load_attr', side_effect=fake_obj_load_attr) as mock_obj_load_attr: md = fake_InstanceMetadata(self, inst) self.assertFalse(hasattr(md.instance, '_will_not_pass')) self.assertEqual(2, mock_obj_load_attr.call_count) mock_obj_load_attr.assert_has_calls( [mock.call('device_metadata'), mock.call('ec2_ids')], any_order=True) self.assertIs(original_device_meta, inst.device_metadata) def test_check_version(self): inst = self.instance.obj_clone() md = fake_InstanceMetadata(self, inst) self.assertTrue(md._check_version('1.0', '2009-04-04')) self.assertFalse(md._check_version('2009-04-04', '1.0')) self.assertFalse(md._check_version('2009-04-04', '2008-09-01')) self.assertTrue(md._check_version('2008-09-01', '2009-04-04')) self.assertTrue(md._check_version('2009-04-04', '2009-04-04')) @mock.patch('nova.virt.netutils.get_injected_network_template') def test_InstanceMetadata_uses_passed_network_info(self, mock_get): network_info = [] mock_get.return_value = False base.InstanceMetadata(fake_inst_obj(self.context), network_info=network_info) mock_get.assert_called_once_with(network_info) @mock.patch.object(netutils, "get_network_metadata", autospec=True) def test_InstanceMetadata_gets_network_metadata(self, mock_netutils): network_data = {'links': [], 'networks': [], 'services': []} mock_netutils.return_value = network_data md = base.InstanceMetadata(fake_inst_obj(self.context)) self.assertEqual(network_data, md.network_metadata) def test_InstanceMetadata_invoke_metadata_for_config_drive(self): fakes.stub_out_key_pair_funcs(self) inst = self.instance.obj_clone() inst_md = base.InstanceMetadata(inst) expected_paths = [ 'ec2/2009-04-04/user-data', 'ec2/2009-04-04/meta-data.json', 'ec2/latest/user-data', 'ec2/latest/meta-data.json', 'openstack/2012-08-10/meta_data.json', 'openstack/2012-08-10/user_data', 'openstack/2013-04-04/meta_data.json', 'openstack/2013-04-04/user_data', 'openstack/2013-10-17/meta_data.json', 'openstack/2013-10-17/user_data', 'openstack/2013-10-17/vendor_data.json', 'openstack/2015-10-15/meta_data.json', 'openstack/2015-10-15/user_data', 'openstack/2015-10-15/vendor_data.json', 'openstack/2015-10-15/network_data.json', 'openstack/2016-06-30/meta_data.json', 
'openstack/2016-06-30/user_data', 'openstack/2016-06-30/vendor_data.json', 'openstack/2016-06-30/network_data.json', 'openstack/2016-10-06/meta_data.json', 'openstack/2016-10-06/user_data', 'openstack/2016-10-06/vendor_data.json', 'openstack/2016-10-06/network_data.json', 'openstack/2016-10-06/vendor_data2.json', 'openstack/2017-02-22/meta_data.json', 'openstack/2017-02-22/user_data', 'openstack/2017-02-22/vendor_data.json', 'openstack/2017-02-22/network_data.json', 'openstack/2017-02-22/vendor_data2.json', 'openstack/2018-08-27/meta_data.json', 'openstack/2018-08-27/user_data', 'openstack/2018-08-27/vendor_data.json', 'openstack/2018-08-27/network_data.json', 'openstack/2018-08-27/vendor_data2.json', 'openstack/latest/meta_data.json', 'openstack/latest/user_data', 'openstack/latest/vendor_data.json', 'openstack/latest/network_data.json', 'openstack/latest/vendor_data2.json', ] actual_paths = [] for (path, value) in inst_md.metadata_for_config_drive(): actual_paths.append(path) self.assertIsNotNone(path) self.assertEqual(expected_paths, actual_paths) @mock.patch('nova.virt.netutils.get_injected_network_template') def test_InstanceMetadata_queries_network_API_when_needed(self, mock_get): network_info_from_api = [] mock_get.return_value = False base.InstanceMetadata(fake_inst_obj(self.context)) mock_get.assert_called_once_with(network_info_from_api) def test_local_ipv4(self): nw_info = fake_network.fake_get_instance_nw_info(self) expected_local = "192.168.1.100" md = fake_InstanceMetadata(self, self.instance, network_info=nw_info, address="fake") data = md.get_ec2_metadata(version='2009-04-04') self.assertEqual(expected_local, data['meta-data']['local-ipv4']) def test_local_ipv4_from_nw_info(self): nw_info = fake_network.fake_get_instance_nw_info(self) expected_local = "192.168.1.100" md = fake_InstanceMetadata(self, self.instance, network_info=nw_info) data = md.get_ec2_metadata(version='2009-04-04') self.assertEqual(data['meta-data']['local-ipv4'], expected_local) def test_local_ipv4_from_address(self): expected_local = "fake" md = fake_InstanceMetadata(self, self.instance, network_info=[], address="fake") data = md.get_ec2_metadata(version='2009-04-04') self.assertEqual(data['meta-data']['local-ipv4'], expected_local) @mock.patch('oslo_serialization.base64.encode_as_text', return_value=FAKE_SEED) @mock.patch.object(jsonutils, 'dump_as_bytes') def _test_as_json_with_options(self, mock_json_dump_as_bytes, mock_base64, os_version=base.GRIZZLY): instance = self.instance keypair = self.keypair md = fake_InstanceMetadata(self, instance) expected_metadata = { 'uuid': md.uuid, 'hostname': md._get_hostname(), 'name': instance.display_name, 'launch_index': instance.launch_index, 'availability_zone': md.availability_zone, } if md.launch_metadata: expected_metadata['meta'] = md.launch_metadata if md.files: expected_metadata['files'] = md.files if md.extra_md: expected_metadata['extra_md'] = md.extra_md if md.network_config: expected_metadata['network_config'] = md.network_config if instance.key_name: expected_metadata['public_keys'] = { keypair.name: keypair.public_key } expected_metadata['keys'] = [{'type': keypair.type, 'data': keypair.public_key, 'name': keypair.name}] if md._check_os_version(base.GRIZZLY, os_version): expected_metadata['random_seed'] = FAKE_SEED if md._check_os_version(base.LIBERTY, os_version): expected_metadata['project_id'] = instance.project_id if md._check_os_version(base.NEWTON_ONE, os_version): expose_vlan = md._check_os_version(base.OCATA, os_version) 
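        # The 'devices' key first appears in the NEWTON_ONE metadata version;
        # the per-NIC 'vlan' field is only exposed from OCATA on and
        # 'vf_trusted' only from ROCKY on, which is why the expected fixture
        # is rebuilt via fake_metadata_dicts() with the matching include_*
        # flags for newer versions.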
expected_metadata['devices'] = fake_metadata_dicts(expose_vlan) if md._check_os_version(base.OCATA, os_version): expose_trusted = md._check_os_version(base.ROCKY, os_version) expected_metadata['devices'] = fake_metadata_dicts( True, expose_trusted) md._metadata_as_json(os_version, 'non useless path parameter') self.assertEqual(md.md_mimetype, base.MIME_TYPE_APPLICATION_JSON) mock_json_dump_as_bytes.assert_called_once_with(expected_metadata) def test_as_json(self): for os_version in base.OPENSTACK_VERSIONS: self._test_as_json_with_options(os_version=os_version) @mock.patch.object(objects.Instance, 'get_by_uuid') def test_metadata_as_json_deleted_keypair(self, mock_inst_get_by_uuid): """Tests that we handle missing instance keypairs. """ instance = self.instance.obj_clone() # we want to make sure that key_name is set but not keypairs so it has # to be lazy-loaded from the database delattr(instance, 'keypairs') mock_inst_get_by_uuid.return_value = instance md = fake_InstanceMetadata(self, instance) meta = md._metadata_as_json(base.OPENSTACK_VERSIONS[-1], path=None) meta = jsonutils.loads(meta) self.assertNotIn('keys', meta) self.assertNotIn('public_keys', meta) class OpenStackMetadataTestCase(test.TestCase): def setUp(self): super(OpenStackMetadataTestCase, self).setUp() self.context = context.RequestContext('fake', 'fake') self.instance = fake_inst_obj(self.context) def test_empty_device_metadata(self): fakes.stub_out_key_pair_funcs(self) inst = self.instance.obj_clone() inst.device_metadata = None mdinst = fake_InstanceMetadata(self, inst) mdjson = mdinst.lookup("/openstack/latest/meta_data.json") mddict = jsonutils.loads(mdjson) self.assertEqual([], mddict['devices']) def test_device_metadata(self): # Because we handle a list of devices, we have only one test and in it # include the various devices types that we have to test, as well as a # couple of fake device types and bus types that should be silently # ignored fakes.stub_out_key_pair_funcs(self) inst = self.instance.obj_clone() mdinst = fake_InstanceMetadata(self, inst) mdjson = mdinst.lookup("/openstack/latest/meta_data.json") mddict = jsonutils.loads(mdjson) self.assertEqual(fake_metadata_dicts(True, True), mddict['devices']) def test_top_level_listing(self): # request for /openstack// should show metadata.json inst = self.instance.obj_clone() mdinst = fake_InstanceMetadata(self, inst) result = mdinst.lookup("/openstack") # trailing / should not affect anything self.assertEqual(result, mdinst.lookup("/openstack/")) # the 'content' should not show up in directory listing self.assertNotIn(base.CONTENT_DIR, result) self.assertIn('2012-08-10', result) self.assertIn('latest', result) def test_version_content_listing(self): # request for /openstack// should show metadata.json inst = self.instance.obj_clone() mdinst = fake_InstanceMetadata(self, inst) listing = mdinst.lookup("/openstack/2012-08-10") self.assertIn("meta_data.json", listing) def test_returns_apis_supported_in_liberty_version(self): mdinst = fake_InstanceMetadata(self, self.instance) liberty_supported_apis = mdinst.lookup("/openstack/2015-10-15") self.assertEqual([base.MD_JSON_NAME, base.UD_NAME, base.PASS_NAME, base.VD_JSON_NAME, base.NW_JSON_NAME], liberty_supported_apis) def test_returns_apis_supported_in_havana_version(self): mdinst = fake_InstanceMetadata(self, self.instance) havana_supported_apis = mdinst.lookup("/openstack/2013-10-17") self.assertEqual([base.MD_JSON_NAME, base.UD_NAME, base.PASS_NAME, base.VD_JSON_NAME], havana_supported_apis) def 
test_returns_apis_supported_in_folsom_version(self): mdinst = fake_InstanceMetadata(self, self.instance) folsom_supported_apis = mdinst.lookup("/openstack/2012-08-10") self.assertEqual([base.MD_JSON_NAME, base.UD_NAME], folsom_supported_apis) def test_returns_apis_supported_in_grizzly_version(self): mdinst = fake_InstanceMetadata(self, self.instance) grizzly_supported_apis = mdinst.lookup("/openstack/2013-04-04") self.assertEqual([base.MD_JSON_NAME, base.UD_NAME, base.PASS_NAME], grizzly_supported_apis) def test_metadata_json(self): fakes.stub_out_key_pair_funcs(self) inst = self.instance.obj_clone() content = [ ('/etc/my.conf', "content of my.conf"), ('/root/hello', "content of /root/hello"), ] mdinst = fake_InstanceMetadata(self, inst, content=content) mdjson = mdinst.lookup("/openstack/2012-08-10/meta_data.json") mdjson = mdinst.lookup("/openstack/latest/meta_data.json") mddict = jsonutils.loads(mdjson) self.assertEqual(mddict['uuid'], self.instance['uuid']) self.assertIn('files', mddict) self.assertIn('public_keys', mddict) self.assertEqual(mddict['public_keys'][self.instance['key_name']], self.instance['key_data']) self.assertIn('launch_index', mddict) self.assertEqual(mddict['launch_index'], self.instance['launch_index']) # verify that each of the things we put in content # resulted in an entry in 'files', that their content # there is as expected, and that /content lists them. for (path, content) in content: fent = [f for f in mddict['files'] if f['path'] == path] self.assertEqual(1, len(fent)) fent = fent[0] found = mdinst.lookup("/openstack%s" % fent['content_path']) self.assertEqual(found, content) def test_x509_keypair(self): inst = self.instance.obj_clone() expected = {'name': self.instance['key_name'], 'type': 'x509', 'data': 'public_key'} inst.keypairs[0].name = expected['name'] inst.keypairs[0].type = expected['type'] inst.keypairs[0].public_key = expected['data'] mdinst = fake_InstanceMetadata(self, inst) mdjson = mdinst.lookup("/openstack/2012-08-10/meta_data.json") mddict = jsonutils.loads(mdjson) self.assertEqual([expected], mddict['keys']) def test_extra_md(self): # make sure extra_md makes it through to metadata fakes.stub_out_key_pair_funcs(self) inst = self.instance.obj_clone() extra = {'foo': 'bar', 'mylist': [1, 2, 3], 'mydict': {"one": 1, "two": 2}} mdinst = fake_InstanceMetadata(self, inst, extra_md=extra) mdjson = mdinst.lookup("/openstack/2012-08-10/meta_data.json") mddict = jsonutils.loads(mdjson) for key, val in extra.items(): self.assertEqual(mddict[key], val) def test_password(self): # make sure extra_md makes it through to metadata inst = self.instance.obj_clone() mdinst = fake_InstanceMetadata(self, inst) result = mdinst.lookup("/openstack/latest/password") self.assertEqual(result, password.handle_password) def test_userdata(self): inst = self.instance.obj_clone() mdinst = fake_InstanceMetadata(self, inst) userdata_found = mdinst.lookup("/openstack/2012-08-10/user_data") self.assertEqual(USER_DATA_STRING, userdata_found) # since we had user-data in this instance, it should be in listing self.assertIn('user_data', mdinst.lookup("/openstack/2012-08-10")) inst.user_data = None mdinst = fake_InstanceMetadata(self, inst) # since this instance had no user-data it should not be there. 
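        # ... and a direct lookup of the now-absent user_data path should
        # raise InvalidMetadataPath rather than return an empty value, which
        # is what the assertRaises below verifies.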
self.assertNotIn('user_data', mdinst.lookup("/openstack/2012-08-10")) self.assertRaises(base.InvalidMetadataPath, mdinst.lookup, "/openstack/2012-08-10/user_data") def test_random_seed(self): fakes.stub_out_key_pair_funcs(self) inst = self.instance.obj_clone() mdinst = fake_InstanceMetadata(self, inst) # verify that 2013-04-04 has the 'random' field mdjson = mdinst.lookup("/openstack/2013-04-04/meta_data.json") mddict = jsonutils.loads(mdjson) self.assertIn("random_seed", mddict) self.assertEqual(len(base64.decode_as_bytes(mddict["random_seed"])), 512) # verify that older version do not have it mdjson = mdinst.lookup("/openstack/2012-08-10/meta_data.json") self.assertNotIn("random_seed", jsonutils.loads(mdjson)) def test_project_id(self): fakes.stub_out_key_pair_funcs(self) mdinst = fake_InstanceMetadata(self, self.instance) # verify that 2015-10-15 has the 'project_id' field mdjson = mdinst.lookup("/openstack/2015-10-15/meta_data.json") mddict = jsonutils.loads(mdjson) self.assertIn("project_id", mddict) self.assertEqual(mddict["project_id"], self.instance.project_id) # verify that older version do not have it mdjson = mdinst.lookup("/openstack/2013-10-17/meta_data.json") self.assertNotIn("project_id", jsonutils.loads(mdjson)) def test_no_dashes_in_metadata(self): # top level entries in meta_data should not contain '-' in their name fakes.stub_out_key_pair_funcs(self) inst = self.instance.obj_clone() mdinst = fake_InstanceMetadata(self, inst) mdjson = jsonutils.loads( mdinst.lookup("/openstack/latest/meta_data.json")) self.assertEqual([], [k for k in mdjson.keys() if k.find("-") != -1]) def test_vendor_data_presence(self): inst = self.instance.obj_clone() mdinst = fake_InstanceMetadata(self, inst) # verify that 2013-10-17 has the vendor_data.json file result = mdinst.lookup("/openstack/2013-10-17") self.assertIn('vendor_data.json', result) # verify that older version do not have it result = mdinst.lookup("/openstack/2013-04-04") self.assertNotIn('vendor_data.json', result) # verify that 2016-10-06 has the vendor_data2.json file result = mdinst.lookup("/openstack/2016-10-06") self.assertIn('vendor_data2.json', result) # assert that we never created a ksa session for dynamic vendordata if # we didn't make a request self.assertIsNone(mdinst.vendordata_providers['DynamicJSON'].session) def _test_vendordata2_response_inner(self, request_mock, response_code, include_rest_result=True): content = None if include_rest_result: content = '{"color": "blue"}' fake_response = fake_requests.FakeResponse(response_code, content=content) request_mock.return_value = fake_response with utils.tempdir() as tmpdir: jsonfile = os.path.join(tmpdir, 'test.json') with open(jsonfile, 'w') as f: f.write(jsonutils.dumps({'ldap': '10.0.0.1', 'ad': '10.0.0.2'})) self.flags(vendordata_providers=['StaticJSON', 'DynamicJSON'], vendordata_jsonfile_path=jsonfile, vendordata_dynamic_targets=[ 'web@http://fake.com/foobar'], group='api' ) inst = self.instance.obj_clone() mdinst = fake_InstanceMetadata(self, inst) # verify that 2013-10-17 has the vendor_data.json file vdpath = "/openstack/2013-10-17/vendor_data.json" vd = jsonutils.loads(mdinst.lookup(vdpath)) self.assertEqual('10.0.0.1', vd.get('ldap')) self.assertEqual('10.0.0.2', vd.get('ad')) # verify that 2016-10-06 works as well vdpath = "/openstack/2016-10-06/vendor_data.json" vd = jsonutils.loads(mdinst.lookup(vdpath)) self.assertEqual('10.0.0.1', vd.get('ldap')) self.assertEqual('10.0.0.2', vd.get('ad')) # verify the new format as well vdpath = 
"/openstack/2016-10-06/vendor_data2.json" with mock.patch( 'nova.api.metadata.vendordata_dynamic.LOG.warning') as wrn: vd = jsonutils.loads(mdinst.lookup(vdpath)) # We don't have vendordata_dynamic_auth credentials configured # so we expect to see a warning logged about making an insecure # connection. warning_calls = wrn.call_args_list self.assertEqual(1, len(warning_calls)) # Verify the warning message is the one we expect which is the # first and only arg to the first and only call to the warning. self.assertIn('Passing insecure dynamic vendordata requests', six.text_type(warning_calls[0][0])) self.assertEqual('10.0.0.1', vd['static'].get('ldap')) self.assertEqual('10.0.0.2', vd['static'].get('ad')) if include_rest_result: self.assertEqual('blue', vd['web'].get('color')) else: self.assertEqual({}, vd['web']) @mock.patch.object(session.Session, 'request') def test_vendor_data_response_vendordata2_ok(self, request_mock): self._test_vendordata2_response_inner(request_mock, requests.codes.OK) @mock.patch.object(session.Session, 'request') def test_vendor_data_response_vendordata2_created(self, request_mock): self._test_vendordata2_response_inner(request_mock, requests.codes.CREATED) @mock.patch.object(session.Session, 'request') def test_vendor_data_response_vendordata2_accepted(self, request_mock): self._test_vendordata2_response_inner(request_mock, requests.codes.ACCEPTED) @mock.patch.object(session.Session, 'request') def test_vendor_data_response_vendordata2_no_content(self, request_mock): # Make it a failure if no content was returned and we don't handle it. self.flags(vendordata_dynamic_failure_fatal=True, group='api') self._test_vendordata2_response_inner(request_mock, requests.codes.NO_CONTENT, include_rest_result=False) def _test_vendordata2_response_inner_exceptional( self, request_mock, log_mock, exc): request_mock.side_effect = exc('Ta da!') with utils.tempdir() as tmpdir: jsonfile = os.path.join(tmpdir, 'test.json') with open(jsonfile, 'w') as f: f.write(jsonutils.dumps({'ldap': '10.0.0.1', 'ad': '10.0.0.2'})) self.flags(vendordata_providers=['StaticJSON', 'DynamicJSON'], vendordata_jsonfile_path=jsonfile, vendordata_dynamic_targets=[ 'web@http://fake.com/foobar'], group='api' ) inst = self.instance.obj_clone() mdinst = fake_InstanceMetadata(self, inst) # verify the new format as well vdpath = "/openstack/2016-10-06/vendor_data2.json" vd = jsonutils.loads(mdinst.lookup(vdpath)) self.assertEqual('10.0.0.1', vd['static'].get('ldap')) self.assertEqual('10.0.0.2', vd['static'].get('ad')) # and exception should result in nothing being added, but no error self.assertEqual({}, vd['web']) self.assertTrue(log_mock.called) @mock.patch.object(vendordata_dynamic.LOG, 'warning') @mock.patch.object(session.Session, 'request') def test_vendor_data_response_vendordata2_type_error(self, request_mock, log_mock): self._test_vendordata2_response_inner_exceptional( request_mock, log_mock, TypeError) @mock.patch.object(vendordata_dynamic.LOG, 'warning') @mock.patch.object(session.Session, 'request') def test_vendor_data_response_vendordata2_value_error(self, request_mock, log_mock): self._test_vendordata2_response_inner_exceptional( request_mock, log_mock, ValueError) @mock.patch.object(vendordata_dynamic.LOG, 'warning') @mock.patch.object(session.Session, 'request') def test_vendor_data_response_vendordata2_request_error(self, request_mock, log_mock): self._test_vendordata2_response_inner_exceptional( request_mock, log_mock, ks_exceptions.BadRequest) @mock.patch.object(vendordata_dynamic.LOG, 
'warning') @mock.patch.object(session.Session, 'request') def test_vendor_data_response_vendordata2_ssl_error(self, request_mock, log_mock): self._test_vendordata2_response_inner_exceptional( request_mock, log_mock, ks_exceptions.SSLError) @mock.patch.object(vendordata_dynamic.LOG, 'warning') @mock.patch.object(session.Session, 'request') def test_vendor_data_response_vendordata2_ssl_error_fatal(self, request_mock, log_mock): self.flags(vendordata_dynamic_failure_fatal=True, group='api') self.assertRaises(ks_exceptions.SSLError, self._test_vendordata2_response_inner_exceptional, request_mock, log_mock, ks_exceptions.SSLError) def test_network_data_presence(self): inst = self.instance.obj_clone() mdinst = fake_InstanceMetadata(self, inst) # verify that 2015-10-15 has the network_data.json file result = mdinst.lookup("/openstack/2015-10-15") self.assertIn('network_data.json', result) # verify that older version do not have it result = mdinst.lookup("/openstack/2013-10-17") self.assertNotIn('network_data.json', result) def test_network_data_response(self): inst = self.instance.obj_clone() nw_data = { "links": [{"ethernet_mac_address": "aa:aa:aa:aa:aa:aa", "id": "nic0", "type": "ethernet", "vif_id": 1, "mtu": 1500}], "networks": [{"id": "network0", "ip_address": "10.10.0.2", "link": "nic0", "netmask": "255.255.255.0", "network_id": "00000000-0000-0000-0000-000000000000", "routes": [], "type": "ipv4"}], "services": [{'address': '1.2.3.4', 'type': 'dns'}]} mdinst = fake_InstanceMetadata(self, inst, network_metadata=nw_data) # verify that 2015-10-15 has the network_data.json file nwpath = "/openstack/2015-10-15/network_data.json" nw = jsonutils.loads(mdinst.lookup(nwpath)) # check the other expected values for k, v in nw_data.items(): self.assertEqual(nw[k], v) class MetadataHandlerTestCase(test.TestCase): """Test that metadata is returning proper values.""" def setUp(self): super(MetadataHandlerTestCase, self).setUp() self.context = context.RequestContext('fake', 'fake') self.instance = fake_inst_obj(self.context) self.mdinst = fake_InstanceMetadata(self, self.instance, address=None, sgroups=None) def test_callable(self): def verify(req, meta_data): self.assertIsInstance(meta_data, CallableMD) return "foo" class CallableMD(object): def lookup(self, path_info): return verify response = fake_request(self, CallableMD(), "/bar") self.assertEqual(response.status_int, 200) self.assertEqual(response.text, "foo") def test_root(self): expected = "\n".join(base.VERSIONS) + "\nlatest" response = fake_request(self, self.mdinst, "/") self.assertEqual(response.text, expected) response = fake_request(self, self.mdinst, "/foo/../") self.assertEqual(response.text, expected) def test_root_metadata_proxy_enabled(self): self.flags(service_metadata_proxy=True, group='neutron') expected = "\n".join(base.VERSIONS) + "\nlatest" response = fake_request(self, self.mdinst, "/") self.assertEqual(response.text, expected) response = fake_request(self, self.mdinst, "/foo/../") self.assertEqual(response.text, expected) def test_version_root(self): response = fake_request(self, self.mdinst, "/2009-04-04") response_ctype = response.headers['Content-Type'] self.assertTrue(response_ctype.startswith("text/plain")) self.assertEqual(response.text, 'meta-data/\nuser-data') response = fake_request(self, self.mdinst, "/9999-99-99") self.assertEqual(response.status_int, 404) def test_json_data(self): fakes.stub_out_key_pair_funcs(self) response = fake_request(self, self.mdinst, "/openstack/latest/meta_data.json") response_ctype = 
response.headers['Content-Type'] self.assertTrue(response_ctype.startswith("application/json")) response = fake_request(self, self.mdinst, "/openstack/latest/vendor_data.json") response_ctype = response.headers['Content-Type'] self.assertTrue(response_ctype.startswith("application/json")) @mock.patch('nova.network.neutron.API') def test_user_data_non_existing_fixed_address(self, mock_network_api): mock_network_api.return_value.get_fixed_ip_by_address.side_effect = ( exception.NotFound()) response = fake_request(None, self.mdinst, "/2009-04-04/user-data", "127.1.1.1") self.assertEqual(response.status_int, 404) def test_fixed_address_none(self): response = fake_request(None, self.mdinst, relpath="/2009-04-04/user-data", address=None) self.assertEqual(response.status_int, 500) def test_invalid_path_is_404(self): response = fake_request(self, self.mdinst, relpath="/2009-04-04/user-data-invalid") self.assertEqual(response.status_int, 404) def test_user_data_with_use_forwarded_header(self): expected_addr = "192.192.192.2" def fake_get_metadata(self_gm, address): if address == expected_addr: return self.mdinst else: raise Exception("Expected addr of %s, got %s" % (expected_addr, address)) self.flags(use_forwarded_for=True, group='api') response = fake_request(self, self.mdinst, relpath="/2009-04-04/user-data", address="168.168.168.1", fake_get_metadata=fake_get_metadata, headers={'X-Forwarded-For': expected_addr}) self.assertEqual(response.status_int, 200) response_ctype = response.headers['Content-Type'] self.assertTrue(response_ctype.startswith("text/plain")) self.assertEqual(response.body, base64.decode_as_bytes(self.instance['user_data'])) response = fake_request(self, self.mdinst, relpath="/2009-04-04/user-data", address="168.168.168.1", fake_get_metadata=fake_get_metadata, headers=None) self.assertEqual(response.status_int, 500) @mock.patch('oslo_utils.secretutils.constant_time_compare') def test_by_instance_id_uses_constant_time_compare(self, mock_compare): mock_compare.side_effect = test.TestingException req = webob.Request.blank('/') hnd = handler.MetadataRequestHandler() req.headers['X-Instance-ID'] = 'fake-inst' req.headers['X-Instance-ID-Signature'] = 'fake-sig' req.headers['X-Tenant-ID'] = 'fake-proj' self.assertRaises(test.TestingException, hnd._handle_instance_id_request, req) self.assertEqual(1, mock_compare.call_count) def _fake_x_get_metadata(self, self_app, instance_id, remote_address): if remote_address is None: raise Exception('Expected X-Forwared-For header') if encodeutils.to_utf8(instance_id) == self.expected_instance_id: return self.mdinst # raise the exception to aid with 500 response code test raise Exception("Expected instance_id of %r, got %r" % (self.expected_instance_id, instance_id)) def test_user_data_with_neutron_instance_id(self): self.expected_instance_id = b'a-b-c-d' signed = hmac.new( encodeutils.to_utf8(CONF.neutron.metadata_proxy_shared_secret), self.expected_instance_id, hashlib.sha256).hexdigest() # try a request with service disabled response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", headers={'X-Instance-ID': 'a-b-c-d', 'X-Tenant-ID': 'test', 'X-Instance-ID-Signature': signed}) self.assertEqual(response.status_int, 200) # now enable the service self.flags(service_metadata_proxy=True, group='neutron') response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", fake_get_metadata_by_instance_id=self._fake_x_get_metadata, headers={'X-Forwarded-For': '192.192.192.2', 
'X-Instance-ID': 'a-b-c-d', 'X-Tenant-ID': 'test', 'X-Instance-ID-Signature': signed}) self.assertEqual(response.status_int, 200) response_ctype = response.headers['Content-Type'] self.assertTrue(response_ctype.startswith("text/plain")) self.assertEqual(response.body, base64.decode_as_bytes(self.instance['user_data'])) # mismatched signature response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", fake_get_metadata_by_instance_id=self._fake_x_get_metadata, headers={'X-Forwarded-For': '192.192.192.2', 'X-Instance-ID': 'a-b-c-d', 'X-Tenant-ID': 'test', 'X-Instance-ID-Signature': ''}) self.assertEqual(response.status_int, 403) # missing X-Tenant-ID from request response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", fake_get_metadata_by_instance_id=self._fake_x_get_metadata, headers={'X-Forwarded-For': '192.192.192.2', 'X-Instance-ID': 'a-b-c-d', 'X-Instance-ID-Signature': signed}) self.assertEqual(response.status_int, 400) # mismatched X-Tenant-ID response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", fake_get_metadata_by_instance_id=self._fake_x_get_metadata, headers={'X-Forwarded-For': '192.192.192.2', 'X-Instance-ID': 'a-b-c-d', 'X-Tenant-ID': 'FAKE', 'X-Instance-ID-Signature': signed}) self.assertEqual(response.status_int, 404) # without X-Forwarded-For response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", fake_get_metadata_by_instance_id=self._fake_x_get_metadata, headers={'X-Instance-ID': 'a-b-c-d', 'X-Tenant-ID': 'test', 'X-Instance-ID-Signature': signed}) self.assertEqual(response.status_int, 500) # unexpected Instance-ID signed = hmac.new( encodeutils.to_utf8(CONF.neutron.metadata_proxy_shared_secret), b'z-z-z-z', hashlib.sha256).hexdigest() response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", fake_get_metadata_by_instance_id=self._fake_x_get_metadata, headers={'X-Forwarded-For': '192.192.192.2', 'X-Instance-ID': 'z-z-z-z', 'X-Tenant-ID': 'test', 'X-Instance-ID-Signature': signed}) self.assertEqual(response.status_int, 500) def test_get_metadata(self): def _test_metadata_path(relpath): # recursively confirm a http 200 from all meta-data elements # available at relpath. 
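            # Directory-style entries in the listing end with a trailing '/'
            # and are descended into recursively; public-keys entries are
            # listed as '<index>=<key name>' but must be fetched by index
            # only, hence the split on '=' below.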
response = fake_request(self, self.mdinst, relpath=relpath) for item in response.text.split('\n'): if 'public-keys' in relpath: # meta-data/public-keys/0=keyname refers to # meta-data/public-keys/0 item = item.split('=')[0] if item.endswith('/'): path = relpath + '/' + item _test_metadata_path(path) continue path = relpath + '/' + item response = fake_request(self, self.mdinst, relpath=path) self.assertEqual(response.status_int, 200, message=path) _test_metadata_path('/2009-04-04/meta-data') def _metadata_handler_with_instance_id(self, hnd): expected_instance_id = b'a-b-c-d' signed = hmac.new( encodeutils.to_utf8(CONF.neutron.metadata_proxy_shared_secret), expected_instance_id, hashlib.sha256).hexdigest() self.flags(service_metadata_proxy=True, group='neutron') response = fake_request( None, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", fake_get_metadata=False, app=hnd, headers={'X-Forwarded-For': '192.192.192.2', 'X-Instance-ID': 'a-b-c-d', 'X-Tenant-ID': 'test', 'X-Instance-ID-Signature': signed}) self.assertEqual(200, response.status_int) self.assertEqual(base64.decode_as_bytes(self.instance['user_data']), response.body) @mock.patch.object(base, 'get_metadata_by_instance_id') def test_metadata_handler_with_instance_id(self, get_by_uuid): # test twice to ensure that the cache works get_by_uuid.return_value = self.mdinst self.flags(metadata_cache_expiration=15, group='api') hnd = handler.MetadataRequestHandler() self._metadata_handler_with_instance_id(hnd) self._metadata_handler_with_instance_id(hnd) self.assertEqual(1, get_by_uuid.call_count) @mock.patch.object(base, 'get_metadata_by_instance_id') def test_metadata_handler_with_instance_id_no_cache(self, get_by_uuid): # test twice to ensure that disabling the cache works get_by_uuid.return_value = self.mdinst self.flags(metadata_cache_expiration=0, group='api') hnd = handler.MetadataRequestHandler() self._metadata_handler_with_instance_id(hnd) self._metadata_handler_with_instance_id(hnd) self.assertEqual(2, get_by_uuid.call_count) def _metadata_handler_with_remote_address(self, hnd): response = fake_request( None, self.mdinst, fake_get_metadata=False, app=hnd, relpath="/2009-04-04/user-data", address="192.192.192.2") self.assertEqual(200, response.status_int) self.assertEqual(base64.decode_as_bytes(self.instance.user_data), response.body) @mock.patch.object(base, 'get_metadata_by_address') def test_metadata_handler_with_remote_address(self, get_by_uuid): # test twice to ensure that the cache works get_by_uuid.return_value = self.mdinst self.flags(metadata_cache_expiration=15, group='api') hnd = handler.MetadataRequestHandler() self._metadata_handler_with_remote_address(hnd) self._metadata_handler_with_remote_address(hnd) self.assertEqual(1, get_by_uuid.call_count) @mock.patch.object(base, 'get_metadata_by_address') def test_metadata_handler_with_remote_address_no_cache(self, get_by_uuid): # test twice to ensure that disabling the cache works get_by_uuid.return_value = self.mdinst self.flags(metadata_cache_expiration=0, group='api') hnd = handler.MetadataRequestHandler() self._metadata_handler_with_remote_address(hnd) self._metadata_handler_with_remote_address(hnd) self.assertEqual(2, get_by_uuid.call_count) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_metadata_lb_proxy(self, mock_get_client): self.flags(service_metadata_proxy=True, group='neutron') self.expected_instance_id = b'a-b-c-d' # with X-Metadata-Provider proxy_lb_id = 'edge-x' mock_client = mock_get_client.return_value 
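        # The mocked neutron client simulates resolving the forwarded client
        # address: list_subnets returns the network(s) reachable through the
        # 'edge-x' metadata provider and list_ports maps the address on those
        # networks to the instance ('a-b-c-d') and its project ('test'),
        # which the handler then uses in place of the X-Instance-ID /
        # X-Tenant-ID headers.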
mock_client.list_ports.return_value = { 'ports': [{'device_id': 'a-b-c-d', 'tenant_id': 'test'}]} mock_client.list_subnets.return_value = { 'subnets': [{'network_id': 'f-f-f-f'}]} response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", fake_get_metadata_by_instance_id=self._fake_x_get_metadata, headers={'X-Forwarded-For': '192.192.192.2', 'X-Metadata-Provider': proxy_lb_id}) self.assertEqual(200, response.status_int) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_metadata_lb_proxy_many_networks(self, mock_get_client): def fake_list_ports(context, fixed_ips, network_id, fields): if 'f-f-f-f' in network_id: return {'ports': [{'device_id': 'a-b-c-d', 'tenant_id': 'test'}]} return {'ports': []} self.flags(service_metadata_proxy=True, group='neutron') handler.MAX_QUERY_NETWORKS = 10 self.expected_instance_id = b'a-b-c-d' # with X-Metadata-Provider proxy_lb_id = 'edge-x' mock_client = mock_get_client.return_value subnet_list = [{'network_id': 'f-f-f-' + chr(c)} for c in range(ord('a'), ord('z'))] mock_client.list_subnets.return_value = { 'subnets': subnet_list} with mock.patch.object( mock_client, 'list_ports', side_effect=fake_list_ports) as mock_list_ports: response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", fake_get_metadata_by_instance_id=self._fake_x_get_metadata, headers={'X-Forwarded-For': '192.192.192.2', 'X-Metadata-Provider': proxy_lb_id}) self.assertEqual(3, mock_list_ports.call_count) self.assertEqual(200, response.status_int) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def _metadata_handler_with_provider_id(self, hnd, mock_get_client): # with X-Metadata-Provider proxy_lb_id = 'edge-x' mock_client = mock_get_client.return_value mock_client.list_ports.return_value = { 'ports': [{'device_id': 'a-b-c-d', 'tenant_id': 'test'}]} mock_client.list_subnets.return_value = { 'subnets': [{'network_id': 'f-f-f-f'}]} response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", app=hnd, headers={'X-Forwarded-For': '192.192.192.2', 'X-Metadata-Provider': proxy_lb_id}) self.assertEqual(200, response.status_int) self.assertEqual(base64.decode_as_bytes(self.instance['user_data']), response.body) @mock.patch.object(base, 'get_metadata_by_instance_id') def _test__handler_with_provider_id(self, expected_calls, get_by_uuid): get_by_uuid.return_value = self.mdinst hnd = handler.MetadataRequestHandler() with mock.patch.object(hnd, '_get_instance_id_from_lb', return_value=('a-b-c-d', 'test')) as _get_id_from_lb: self._metadata_handler_with_provider_id(hnd) self._metadata_handler_with_provider_id(hnd) self.assertEqual(expected_calls, _get_id_from_lb.call_count) def test_metadata_handler_with_provider_id(self): self.flags(service_metadata_proxy=True, group='neutron') self.flags(metadata_cache_expiration=15, group='api') self._test__handler_with_provider_id(1) def test_metadata_handler_with_provider_id_no_cache(self): self.flags(service_metadata_proxy=True, group='neutron') self.flags(metadata_cache_expiration=0, group='api') self._test__handler_with_provider_id(2) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_metadata_lb_proxy_chain(self, mock_get_client): self.flags(service_metadata_proxy=True, group='neutron') self.expected_instance_id = b'a-b-c-d' # with X-Metadata-Provider proxy_lb_id = 'edge-x' def fake_list_ports(ctx, **kwargs): if kwargs.get('fixed_ips') == 
'ip_address=192.192.192.2': return { 'ports': [{ 'device_id': 'a-b-c-d', 'tenant_id': 'test'}]} else: return {'ports': []} mock_client = mock_get_client.return_value mock_client.list_ports.side_effect = fake_list_ports mock_client.list_subnets.return_value = { 'subnets': [{'network_id': 'f-f-f-f'}]} response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="10.10.10.10", fake_get_metadata_by_instance_id=self._fake_x_get_metadata, headers={'X-Forwarded-For': '192.192.192.2, 10.10.10.10', 'X-Metadata-Provider': proxy_lb_id}) self.assertEqual(200, response.status_int) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_metadata_lb_proxy_signed(self, mock_get_client): shared_secret = "testing1234" self.flags( metadata_proxy_shared_secret=shared_secret, service_metadata_proxy=True, group='neutron') self.expected_instance_id = b'a-b-c-d' # with X-Metadata-Provider proxy_lb_id = 'edge-x' signature = hmac.new( encodeutils.to_utf8(shared_secret), encodeutils.to_utf8(proxy_lb_id), hashlib.sha256).hexdigest() mock_client = mock_get_client.return_value mock_client.list_ports.return_value = { 'ports': [{'device_id': 'a-b-c-d', 'tenant_id': 'test'}]} mock_client.list_subnets.return_value = { 'subnets': [{'network_id': 'f-f-f-f'}]} response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", fake_get_metadata_by_instance_id=self._fake_x_get_metadata, headers={'X-Forwarded-For': '192.192.192.2', 'X-Metadata-Provider': proxy_lb_id, 'X-Metadata-Provider-Signature': signature}) self.assertEqual(200, response.status_int) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_metadata_lb_proxy_not_signed(self, mock_get_client): shared_secret = "testing1234" self.flags( metadata_proxy_shared_secret=shared_secret, service_metadata_proxy=True, group='neutron') self.expected_instance_id = b'a-b-c-d' # with X-Metadata-Provider proxy_lb_id = 'edge-x' mock_client = mock_get_client.return_value mock_client.list_ports.return_value = { 'ports': [{'device_id': 'a-b-c-d', 'tenant_id': 'test'}]} mock_client.list_subnets.return_value = { 'subnets': [{'network_id': 'f-f-f-f'}]} response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", fake_get_metadata_by_instance_id=self._fake_x_get_metadata, headers={'X-Forwarded-For': '192.192.192.2', 'X-Metadata-Provider': proxy_lb_id}) self.assertEqual(403, response.status_int) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_metadata_lb_proxy_signed_fail(self, mock_get_client): shared_secret = "testing1234" bad_secret = "testing3468" self.flags( metadata_proxy_shared_secret=shared_secret, service_metadata_proxy=True, group='neutron') self.expected_instance_id = b'a-b-c-d' # with X-Metadata-Provider proxy_lb_id = 'edge-x' signature = hmac.new( encodeutils.to_utf8(bad_secret), encodeutils.to_utf8(proxy_lb_id), hashlib.sha256).hexdigest() mock_client = mock_get_client.return_value mock_client.list_ports.return_value = { 'ports': [{'device_id': 'a-b-c-d', 'tenant_id': 'test'}]} mock_client.list_subnets.return_value = { 'subnets': [{'network_id': 'f-f-f-f'}]} response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", fake_get_metadata_by_instance_id=self._fake_x_get_metadata, headers={'X-Forwarded-For': '192.192.192.2', 'X-Metadata-Provider': proxy_lb_id, 'X-Metadata-Provider-Signature': signature}) self.assertEqual(403, response.status_int) 
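# The three signature tests above all rely on the same construction of the
# X-Metadata-Provider-Signature header. A minimal, self-contained sketch of
# that derivation (standard library only; the secret and provider id are the
# placeholder values used by the tests, not real configuration):

import hashlib
import hmac


def sign_provider_id(shared_secret, provider_id):
    # HMAC-SHA256 of the provider id, keyed with the shared secret and
    # hex-encoded -- the value the proxy is expected to send in the header.
    return hmac.new(shared_secret.encode('utf-8'),
                    provider_id.encode('utf-8'),
                    hashlib.sha256).hexdigest()


# e.g. sign_provider_id('testing1234', 'edge-x') reproduces the header value
# accepted in test_metadata_lb_proxy_signed above.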
    @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock())
    def test_metadata_lb_net_not_found(self, mock_get_client):
        self.flags(service_metadata_proxy=True, group='neutron')

        # with X-Metadata-Provider
        proxy_lb_id = 'edge-x'

        mock_client = mock_get_client.return_value
        mock_client.list_ports.return_value = {
            'ports': [{'device_id': 'a-b-c-d', 'tenant_id': 'test'}]}
        mock_client.list_subnets.return_value = {
            'subnets': []}

        response = fake_request(
            self, self.mdinst,
            relpath="/2009-04-04/user-data",
            address="192.192.192.2",
            fake_get_metadata_by_instance_id=self._fake_x_get_metadata,
            headers={'X-Forwarded-For': '192.192.192.2',
                     'X-Metadata-Provider': proxy_lb_id})

        self.assertEqual(400, response.status_int)

    def _test_metadata_lb_incorrect_port_count(self, mock_get_client, ports):
        self.flags(service_metadata_proxy=True, group='neutron')

        # with X-Metadata-Provider
        proxy_lb_id = 'edge-x'

        mock_client = mock_get_client.return_value
        mock_client.list_ports.return_value = {'ports': ports}
        mock_client.list_subnets.return_value = {
            'subnets': [{'network_id': 'f-f-f-f'}]}

        response = fake_request(
            self, self.mdinst,
            relpath="/2009-04-04/user-data",
            address="192.192.192.2",
            fake_get_metadata_by_instance_id=self._fake_x_get_metadata,
            headers={'X-Forwarded-For': '192.192.192.2',
                     'X-Metadata-Provider': proxy_lb_id})

        self.assertEqual(400, response.status_int)

    @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock())
    def test_metadata_lb_too_many_ports(self, mock_get_client):
        self._test_metadata_lb_incorrect_port_count(
            mock_get_client,
            [{'device_id': 'a-b-c-d', 'tenant_id': 'test'},
             {'device_id': 'x-y-z', 'tenant_id': 'test'}])

    @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock())
    def test_metadata_no_ports_found(self, mock_get_client):
        self._test_metadata_lb_incorrect_port_count(
            mock_get_client, [])

    @mock.patch.object(context, 'get_admin_context')
    @mock.patch('nova.network.neutron.API.get_fixed_ip_by_address')
    def test_get_metadata_by_address(self, mock_get_fip, mock_get_context):
        mock_get_context.return_value = 'CONTEXT'
        mock_get_fip.return_value = {'instance_uuid': 'bar'}

        with mock.patch.object(base, 'get_metadata_by_instance_id') as gmd:
            base.get_metadata_by_address('foo')

        mock_get_fip.assert_called_once_with('CONTEXT', 'foo')
        gmd.assert_called_once_with('bar', 'foo', 'CONTEXT')

    @mock.patch.object(context, 'get_admin_context')
    @mock.patch.object(objects.Instance, 'get_by_uuid')
    def test_get_metadata_by_instance_id(self, mock_uuid, mock_context):
        inst = objects.Instance()
        mock_uuid.return_value = inst
        ctxt = context.RequestContext()

        with mock.patch.object(base, 'InstanceMetadata') as imd:
            base.get_metadata_by_instance_id('foo', 'bar', ctxt=ctxt)

        self.assertFalse(mock_context.called,
                         "get_admin_context() should not "
                         "have been called, the context was given")
        mock_uuid.assert_called_once_with(
            ctxt, 'foo',
            expected_attrs=['ec2_ids', 'flavor', 'info_cache', 'metadata',
                            'system_metadata', 'security_groups', 'keypairs',
                            'device_metadata'])
        imd.assert_called_once_with(inst, 'bar')

    @mock.patch.object(context, 'get_admin_context')
    @mock.patch.object(objects.Instance, 'get_by_uuid')
    def test_get_metadata_by_instance_id_null_context(self, mock_uuid,
                                                      mock_context):
        inst = objects.Instance()
        mock_uuid.return_value = inst
        mock_context.return_value = context.RequestContext()

        with mock.patch.object(base, 'InstanceMetadata') as imd:
            base.get_metadata_by_instance_id('foo', 'bar')

        mock_context.assert_called_once_with()
        mock_uuid.assert_called_once_with(
            mock_context.return_value, 'foo',
            expected_attrs=['ec2_ids', 'flavor', 'info_cache', 'metadata',
                            'system_metadata', 'security_groups', 'keypairs',
                            'device_metadata'])
        imd.assert_called_once_with(inst, 'bar')

    @mock.patch.object(objects.Instance, 'get_by_uuid')
    @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid')
    def test_get_metadata_by_instance_id_with_cell_mapping(self, mock_get_im,
                                                           mock_get_inst):
        ctxt = context.RequestContext()
        inst = objects.Instance()
        im = objects.InstanceMapping(cell_mapping=objects.CellMapping())
        mock_get_inst.return_value = inst
        mock_get_im.return_value = im

        with mock.patch.object(base, 'InstanceMetadata') as imd:
            with mock.patch('nova.context.target_cell') as mock_tc:
                base.get_metadata_by_instance_id('foo', 'bar', ctxt=ctxt)

        mock_tc.assert_called_once_with(ctxt, im.cell_mapping)
        mock_get_im.assert_called_once_with(ctxt, 'foo')
        imd.assert_called_once_with(inst, 'bar')

    @mock.patch.object(objects.Instance, 'get_by_uuid')
    @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid')
    def test_get_metadata_by_instance_id_with_local_meta(self, mock_get_im,
                                                         mock_get_inst):
        # Test that if local_metadata_per_cell is set to True, we don't
        # query API DB for instance mapping.
        self.flags(local_metadata_per_cell=True, group='api')
        ctxt = context.RequestContext()
        inst = objects.Instance()
        mock_get_inst.return_value = inst

        with mock.patch.object(base, 'InstanceMetadata') as imd:
            base.get_metadata_by_instance_id('foo', 'bar', ctxt=ctxt)

        mock_get_im.assert_not_called()
        imd.assert_called_once_with(inst, 'bar')


class MetadataPasswordTestCase(test.TestCase):
    def setUp(self):
        super(MetadataPasswordTestCase, self).setUp()
        self.context = context.RequestContext('fake', 'fake')
        self.instance = fake_inst_obj(self.context)
        self.mdinst = fake_InstanceMetadata(self, self.instance,
                                            address=None, sgroups=None)

    def test_get_password(self):
        request = webob.Request.blank('')
        self.mdinst.password = 'foo'
        result = password.handle_password(request, self.mdinst)
        self.assertEqual(result, 'foo')

    @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid',
                       return_value=objects.InstanceMapping(cell_mapping=None))
    @mock.patch.object(objects.Instance, 'get_by_uuid')
    def test_set_password_instance_not_found(self, get_by_uuid, get_mapping):
        """Tests that a 400 is returned if the instance can not be found."""
        get_by_uuid.side_effect = exception.InstanceNotFound(
            instance_id=self.instance.uuid)
        request = webob.Request.blank('')
        request.method = 'POST'
        request.body = b'foo'
        request.content_length = len(request.body)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          password.handle_password, request, self.mdinst)

    def test_bad_method(self):
        request = webob.Request.blank('')
        request.method = 'PUT'
        self.assertRaises(webob.exc.HTTPBadRequest,
                          password.handle_password, request, self.mdinst)

    @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid')
    @mock.patch('nova.objects.Instance.get_by_uuid')
    def _try_set_password(self, get_by_uuid, get_mapping, val=b'bar',
                          use_local_meta=False):
        if use_local_meta:
            self.flags(local_metadata_per_cell=True, group='api')
        request = webob.Request.blank('')
        request.method = 'POST'
        request.body = val

        get_mapping.return_value = objects.InstanceMapping(cell_mapping=None)
        get_by_uuid.return_value = self.instance
        with mock.patch.object(self.instance, 'save') as save:
            password.handle_password(request, self.mdinst)
            save.assert_called_once_with()
self.assertIn('password_0', self.instance.system_metadata) if use_local_meta: get_mapping.assert_not_called() else: get_mapping.assert_called_once_with(mock.ANY, self.instance.uuid) def test_set_password(self): self.mdinst.password = '' self._try_set_password() def test_set_password_local_meta(self): self.mdinst.password = '' self._try_set_password(use_local_meta=True) def test_conflict(self): self.mdinst.password = 'foo' self.assertRaises(webob.exc.HTTPConflict, self._try_set_password) def test_too_large(self): self.mdinst.password = '' self.assertRaises(webob.exc.HTTPBadRequest, self._try_set_password, val=(b'a' * (password.MAX_SIZE + 1))) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_notifications.py0000664000175000017500000006111100000000000022134 0ustar00zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for common notifications.""" import copy import datetime import mock from oslo_config import cfg from oslo_context import fixture as o_fixture from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from nova.compute import task_states from nova.compute import vm_states from nova import context from nova import exception from nova.notifications import base as notifications from nova import objects from nova.objects import base as obj_base from nova import test from nova.tests.unit import fake_network from nova.tests.unit import fake_notifier CONF = cfg.CONF class NotificationsTestCase(test.TestCase): def setUp(self): super(NotificationsTestCase, self).setUp() self.fixture = self.useFixture(o_fixture.ClearRequestContext()) self.net_info = fake_network.fake_get_instance_nw_info(self) fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) self.flags(host='testhost') self.flags(notify_on_state_change="vm_and_task_state", group='notifications') self.flags(api_servers=['http://localhost:9292'], group='glance') self.user_id = 'fake' self.project_id = 'fake' self.context = context.RequestContext(self.user_id, self.project_id) self.fake_time = datetime.datetime(2017, 2, 2, 16, 45, 0) timeutils.set_time_override(self.fake_time) self.instance = self._wrapped_create() self.decorated_function_called = False def _wrapped_create(self, params=None): instance_type = objects.Flavor.get_by_name(self.context, 'm1.tiny') inst = objects.Instance(image_ref=uuids.image_ref, user_id=self.user_id, project_id=self.project_id, instance_type_id=instance_type['id'], root_gb=0, ephemeral_gb=0, access_ip_v4='1.2.3.4', access_ip_v6='feed::5eed', display_name='test_instance', hostname='test_instance_hostname', node='test_instance_node', system_metadata={}) inst._context = self.context if params: inst.update(params) inst.flavor = instance_type inst.create() return inst def test_notif_disabled(self): # test config disable of the notifications self.flags(notify_on_state_change=None, group='notifications') 
        old = copy.copy(self.instance)
        self.instance.vm_state = vm_states.ACTIVE

        old_vm_state = old['vm_state']
        new_vm_state = self.instance.vm_state
        old_task_state = old['task_state']
        new_task_state = self.instance.task_state

        notifications.send_update_with_states(self.context, self.instance,
                old_vm_state, new_vm_state, old_task_state, new_task_state,
                verify_states=True)
        notifications.send_update(self.context, old, self.instance)

        self.assertEqual(0, len(fake_notifier.NOTIFICATIONS))
        self.assertEqual(0, len(fake_notifier.VERSIONED_NOTIFICATIONS))

    def test_task_notif(self):
        # test config disable of just the task state notifications
        self.flags(notify_on_state_change="vm_state", group='notifications')

        # we should not get a notification on task state change now
        old = copy.copy(self.instance)
        self.instance.task_state = task_states.SPAWNING

        old_vm_state = old['vm_state']
        new_vm_state = self.instance.vm_state
        old_task_state = old['task_state']
        new_task_state = self.instance.task_state

        notifications.send_update_with_states(self.context, self.instance,
                old_vm_state, new_vm_state, old_task_state, new_task_state,
                verify_states=True)

        self.assertEqual(0, len(fake_notifier.NOTIFICATIONS))
        self.assertEqual(0, len(fake_notifier.VERSIONED_NOTIFICATIONS))

        # ok now enable task state notifications and re-try
        self.flags(notify_on_state_change="vm_and_task_state",
                   group='notifications')

        notifications.send_update(self.context, old, self.instance)
        self.assertEqual(1, len(fake_notifier.NOTIFICATIONS))
        self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS))
        self.assertEqual(
            'instance.update',
            fake_notifier.VERSIONED_NOTIFICATIONS[0]['event_type'])

    def test_send_no_notif(self):
        # test notification on send no initial vm state:
        old_vm_state = self.instance.vm_state
        new_vm_state = self.instance.vm_state
        old_task_state = self.instance.task_state
        new_task_state = self.instance.task_state

        notifications.send_update_with_states(self.context, self.instance,
                old_vm_state, new_vm_state, old_task_state, new_task_state,
                service="compute", host=None, verify_states=True)

        self.assertEqual(0, len(fake_notifier.NOTIFICATIONS))
        self.assertEqual(0, len(fake_notifier.VERSIONED_NOTIFICATIONS))

    def test_send_on_vm_change(self):
        old = obj_base.obj_to_primitive(self.instance)
        old['vm_state'] = None
        # pretend we just transitioned to ACTIVE:
        self.instance.vm_state = vm_states.ACTIVE
        notifications.send_update(self.context, old, self.instance)

        self.assertEqual(1, len(fake_notifier.NOTIFICATIONS))
        # service name should default to 'compute'
        notif = fake_notifier.NOTIFICATIONS[0]
        self.assertEqual('compute.testhost', notif.publisher_id)

        self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS))
        self.assertEqual(
            'nova-compute:testhost',
            fake_notifier.VERSIONED_NOTIFICATIONS[0]['publisher_id'])
        self.assertEqual(
            'instance.update',
            fake_notifier.VERSIONED_NOTIFICATIONS[0]['event_type'])

    def test_send_on_task_change(self):
        old = obj_base.obj_to_primitive(self.instance)
        old['task_state'] = None
        # pretend we just transitioned to task SPAWNING:
        self.instance.task_state = task_states.SPAWNING
        notifications.send_update(self.context, old, self.instance)

        self.assertEqual(1, len(fake_notifier.NOTIFICATIONS))
        self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS))
        self.assertEqual(
            'instance.update',
            fake_notifier.VERSIONED_NOTIFICATIONS[0]['event_type'])

    def test_no_update_with_states(self):
        notifications.send_update_with_states(
            self.context, self.instance, vm_states.BUILDING,
            vm_states.BUILDING, task_states.SPAWNING, task_states.SPAWNING,
verify_states=True) self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(0, len(fake_notifier.VERSIONED_NOTIFICATIONS)) def get_fake_bandwidth(self): usage = objects.BandwidthUsage(context=self.context) usage.create( self.instance.uuid, mac='DE:AD:BE:EF:00:01', bw_in=1, bw_out=2, last_ctr_in=0, last_ctr_out=0, start_period='2012-10-29T13:42:11Z') return usage @mock.patch.object(objects.BandwidthUsageList, 'get_by_uuids') def test_vm_update_with_states(self, mock_bandwidth_list): mock_bandwidth_list.return_value = [self.get_fake_bandwidth()] fake_net_info = fake_network.fake_get_instance_nw_info(self) self.instance.info_cache.network_info = fake_net_info notifications.send_update_with_states(self.context, self.instance, vm_states.BUILDING, vm_states.ACTIVE, task_states.SPAWNING, task_states.SPAWNING, verify_states=True) self._verify_notification() def _verify_notification(self, expected_state=vm_states.ACTIVE, expected_new_task_state=task_states.SPAWNING): self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self.assertEqual( 'instance.update', fake_notifier.VERSIONED_NOTIFICATIONS[0]['event_type']) access_ip_v4 = str(self.instance.access_ip_v4) access_ip_v6 = str(self.instance.access_ip_v6) display_name = self.instance.display_name hostname = self.instance.hostname node = self.instance.node payload = fake_notifier.NOTIFICATIONS[0].payload self.assertEqual(vm_states.BUILDING, payload["old_state"]) self.assertEqual(expected_state, payload["state"]) self.assertEqual(task_states.SPAWNING, payload["old_task_state"]) self.assertEqual(expected_new_task_state, payload["new_task_state"]) self.assertEqual(payload["access_ip_v4"], access_ip_v4) self.assertEqual(payload["access_ip_v6"], access_ip_v6) self.assertEqual(payload["display_name"], display_name) self.assertEqual(payload["hostname"], hostname) self.assertEqual(payload["node"], node) self.assertEqual("2017-02-01T00:00:00.000000", payload["audit_period_beginning"]) self.assertEqual("2017-02-02T16:45:00.000000", payload["audit_period_ending"]) payload = fake_notifier.VERSIONED_NOTIFICATIONS[0][ 'payload']['nova_object.data'] state_update = payload['state_update']['nova_object.data'] self.assertEqual(vm_states.BUILDING, state_update['old_state']) self.assertEqual(expected_state, state_update["state"]) self.assertEqual(task_states.SPAWNING, state_update["old_task_state"]) self.assertEqual(expected_new_task_state, state_update["new_task_state"]) self.assertEqual(payload["display_name"], display_name) self.assertEqual(payload["host_name"], hostname) self.assertEqual(payload["node"], node) flavor = payload['flavor']['nova_object.data'] self.assertEqual(flavor['flavorid'], '1') self.assertEqual(payload['image_uuid'], uuids.image_ref) net_info = self.instance.info_cache.network_info vif = net_info[0] ip_addresses = payload['ip_addresses'] self.assertEqual(len(ip_addresses), 2) for actual_ip, expected_ip in zip(ip_addresses, vif.fixed_ips()): actual_ip = actual_ip['nova_object.data'] self.assertEqual(actual_ip['label'], vif['network']['label']) self.assertEqual(actual_ip['mac'], vif['address'].lower()) self.assertEqual(actual_ip['port_uuid'], vif['id']) self.assertEqual(actual_ip['device_name'], vif['devname']) self.assertEqual(actual_ip['version'], expected_ip['version']) self.assertEqual(actual_ip['address'], expected_ip['address']) bandwidth = payload['bandwidth'] self.assertEqual(len(bandwidth), 1) bandwidth = bandwidth[0]['nova_object.data'] 
self.assertEqual(bandwidth['in_bytes'], 1) self.assertEqual(bandwidth['out_bytes'], 2) self.assertEqual(bandwidth['network_name'], 'test1') @mock.patch.object(objects.BandwidthUsageList, 'get_by_uuids') def test_task_update_with_states(self, mock_bandwidth_list): self.flags(notify_on_state_change="vm_and_task_state", group='notifications') mock_bandwidth_list.return_value = [self.get_fake_bandwidth()] fake_net_info = fake_network.fake_get_instance_nw_info(self) self.instance.info_cache.network_info = fake_net_info notifications.send_update_with_states(self.context, self.instance, vm_states.BUILDING, vm_states.BUILDING, task_states.SPAWNING, None, verify_states=True) self._verify_notification(expected_state=vm_states.BUILDING, expected_new_task_state=None) def test_update_no_service_name(self): notifications.send_update_with_states(self.context, self.instance, vm_states.BUILDING, vm_states.BUILDING, task_states.SPAWNING, None) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) # service name should default to 'compute' notif = fake_notifier.NOTIFICATIONS[0] self.assertEqual('compute.testhost', notif.publisher_id) # in the versioned notification it defaults to nova-compute notif = fake_notifier.VERSIONED_NOTIFICATIONS[0] self.assertEqual('nova-compute:testhost', notif['publisher_id']) def test_update_with_service_name(self): notifications.send_update_with_states(self.context, self.instance, vm_states.BUILDING, vm_states.BUILDING, task_states.SPAWNING, None, service="nova-compute") self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) # service name should default to 'compute' notif = fake_notifier.NOTIFICATIONS[0] self.assertEqual('nova-compute.testhost', notif.publisher_id) notif = fake_notifier.VERSIONED_NOTIFICATIONS[0] self.assertEqual('nova-compute:testhost', notif['publisher_id']) def test_update_with_host_name(self): notifications.send_update_with_states(self.context, self.instance, vm_states.BUILDING, vm_states.BUILDING, task_states.SPAWNING, None, host="someotherhost") self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) # service name should default to 'compute' notif = fake_notifier.NOTIFICATIONS[0] self.assertEqual('compute.someotherhost', notif.publisher_id) notif = fake_notifier.VERSIONED_NOTIFICATIONS[0] self.assertEqual('nova-compute:someotherhost', notif['publisher_id']) def test_payload_has_fixed_ip_labels(self): info = notifications.info_from_instance(self.context, self.instance, self.net_info) self.assertIn("fixed_ips", info) self.assertEqual(info["fixed_ips"][0]["label"], "test1") def test_payload_has_vif_mac_address(self): info = notifications.info_from_instance(self.context, self.instance, self.net_info) self.assertIn("fixed_ips", info) self.assertEqual(self.net_info[0]['address'], info["fixed_ips"][0]["vif_mac"]) def test_payload_has_cell_name_empty(self): info = notifications.info_from_instance(self.context, self.instance, self.net_info) self.assertIn("cell_name", info) self.assertIsNone(self.instance.cell_name) self.assertEqual("", info["cell_name"]) def test_payload_has_cell_name(self): self.instance.cell_name = "cell1" info = notifications.info_from_instance(self.context, self.instance, self.net_info) self.assertIn("cell_name", info) self.assertEqual("cell1", info["cell_name"]) def test_payload_has_progress_empty(self): info = 
notifications.info_from_instance(self.context, self.instance, self.net_info) self.assertIn("progress", info) self.assertIsNone(self.instance.progress) self.assertEqual("", info["progress"]) def test_payload_has_progress(self): self.instance.progress = 50 info = notifications.info_from_instance(self.context, self.instance, self.net_info) self.assertIn("progress", info) self.assertEqual(50, info["progress"]) def test_payload_has_flavor_attributes(self): # Zero these to make sure they are not used self.instance.vcpus = self.instance.memory_mb = 0 self.instance.root_gb = self.instance.ephemeral_gb = 0 # Set flavor values and make sure _these_ are present in the output self.instance.flavor.vcpus = 10 self.instance.flavor.root_gb = 20 self.instance.flavor.memory_mb = 30 self.instance.flavor.ephemeral_gb = 40 info = notifications.info_from_instance(self.context, self.instance, self.net_info) self.assertEqual(10, info['vcpus']) self.assertEqual(20, info['root_gb']) self.assertEqual(30, info['memory_mb']) self.assertEqual(40, info['ephemeral_gb']) self.assertEqual(60, info['disk_gb']) def test_payload_has_timestamp_fields(self): time = datetime.datetime(2017, 2, 2, 16, 45, 0) # do not define deleted_at to test that missing value is handled # properly self.instance.terminated_at = time self.instance.launched_at = time info = notifications.info_from_instance(self.context, self.instance, self.net_info) self.assertEqual('2017-02-02T16:45:00.000000', info['terminated_at']) self.assertEqual('2017-02-02T16:45:00.000000', info['launched_at']) self.assertEqual('', info['deleted_at']) def test_send_access_ip_update(self): notifications.send_update(self.context, self.instance, self.instance) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) notif = fake_notifier.NOTIFICATIONS[0] payload = notif.payload access_ip_v4 = str(self.instance.access_ip_v4) access_ip_v6 = str(self.instance.access_ip_v6) self.assertEqual(payload["access_ip_v4"], access_ip_v4) self.assertEqual(payload["access_ip_v6"], access_ip_v6) def test_send_name_update(self): param = {"display_name": "new_display_name"} new_name_inst = self._wrapped_create(params=param) notifications.send_update(self.context, self.instance, new_name_inst) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) old_display_name = self.instance.display_name new_display_name = new_name_inst.display_name for payload in [ fake_notifier.NOTIFICATIONS[0].payload, fake_notifier.VERSIONED_NOTIFICATIONS[0][ 'payload']['nova_object.data']]: self.assertEqual(payload["old_display_name"], old_display_name) self.assertEqual(payload["display_name"], new_display_name) def test_send_versioned_tags_update(self): objects.TagList.create(self.context, self.instance.uuid, [u'tag1', u'tag2']) notifications.send_update(self.context, self.instance, self.instance) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self.assertEqual([u'tag1', u'tag2'], fake_notifier.VERSIONED_NOTIFICATIONS[0] ['payload']['nova_object.data']['tags']) def test_send_versioned_action_initiator_update(self): notifications.send_update(self.context, self.instance, self.instance) action_initiator_user = self.context.user_id action_initiator_project = self.context.project_id self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self.assertEqual(action_initiator_user, fake_notifier.VERSIONED_NOTIFICATIONS[0] ['payload']['nova_object.data'] ['action_initiator_user']) self.assertEqual(action_initiator_project, 
fake_notifier.VERSIONED_NOTIFICATIONS[0] ['payload']['nova_object.data'] ['action_initiator_project']) def test_send_no_state_change(self): called = [False] def sending_no_state_change(context, instance, **kwargs): called[0] = True self.stub_out('nova.notifications.base.' 'send_instance_update_notification', sending_no_state_change) notifications.send_update(self.context, self.instance, self.instance) self.assertTrue(called[0]) def test_fail_sending_update(self): def fail_sending(context, instance, **kwargs): raise Exception('failed to notify') self.stub_out('nova.notifications.base.' 'send_instance_update_notification', fail_sending) notifications.send_update(self.context, self.instance, self.instance) self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) @mock.patch.object(notifications.LOG, 'exception') def test_fail_sending_update_instance_not_found(self, mock_log_exception): # Tests that InstanceNotFound is handled as an expected exception and # not logged as an error. notfound = exception.InstanceNotFound(instance_id=self.instance.uuid) with mock.patch.object(notifications, 'send_instance_update_notification', side_effect=notfound): notifications.send_update( self.context, self.instance, self.instance) self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(0, mock_log_exception.call_count) @mock.patch.object(notifications.LOG, 'exception') def test_fail_send_update_with_states_inst_not_found(self, mock_log_exception): # Tests that InstanceNotFound is handled as an expected exception and # not logged as an error. notfound = exception.InstanceNotFound(instance_id=self.instance.uuid) with mock.patch.object(notifications, 'send_instance_update_notification', side_effect=notfound): notifications.send_update_with_states( self.context, self.instance, vm_states.BUILDING, vm_states.ERROR, task_states.NETWORKING, new_task_state=None) self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(0, mock_log_exception.call_count) def _decorated_function(self, arg1, arg2): self.decorated_function_called = True class NotificationsFormatTestCase(test.NoDBTestCase): def test_state_computation(self): instance = {'vm_state': mock.sentinel.vm_state, 'task_state': mock.sentinel.task_state} states = notifications._compute_states_payload(instance) self.assertEqual(mock.sentinel.vm_state, states['state']) self.assertEqual(mock.sentinel.vm_state, states['old_state']) self.assertEqual(mock.sentinel.task_state, states['old_task_state']) self.assertEqual(mock.sentinel.task_state, states['new_task_state']) states = notifications._compute_states_payload( instance, old_vm_state=mock.sentinel.old_vm_state, ) self.assertEqual(mock.sentinel.vm_state, states['state']) self.assertEqual(mock.sentinel.old_vm_state, states['old_state']) self.assertEqual(mock.sentinel.task_state, states['old_task_state']) self.assertEqual(mock.sentinel.task_state, states['new_task_state']) states = notifications._compute_states_payload( instance, old_vm_state=mock.sentinel.old_vm_state, old_task_state=mock.sentinel.old_task_state, new_vm_state=mock.sentinel.new_vm_state, new_task_state=mock.sentinel.new_task_state, ) self.assertEqual(mock.sentinel.new_vm_state, states['state']) self.assertEqual(mock.sentinel.old_vm_state, states['old_state']) self.assertEqual(mock.sentinel.old_task_state, states['old_task_state']) self.assertEqual(mock.sentinel.new_task_state, states['new_task_state']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/nova/tests/unit/test_notifier.py0000664000175000017500000000442100000000000021103 0ustar00zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova import rpc from nova import test class TestNotifier(test.NoDBTestCase): @mock.patch('oslo_messaging.get_rpc_transport') @mock.patch('oslo_messaging.get_notification_transport') @mock.patch('oslo_messaging.Notifier') def test_notification_format_affects_notification_driver(self, mock_notifier, mock_noti_trans, mock_transport): conf = mock.Mock() conf.notifications.versioned_notifications_topics = [ 'versioned_notifications'] cases = { 'unversioned': [ mock.call(mock.ANY, serializer=mock.ANY), mock.call(mock.ANY, serializer=mock.ANY, driver='noop')], 'both': [ mock.call(mock.ANY, serializer=mock.ANY), mock.call(mock.ANY, serializer=mock.ANY, topics=['versioned_notifications'])], 'versioned': [ mock.call(mock.ANY, serializer=mock.ANY, driver='noop'), mock.call(mock.ANY, serializer=mock.ANY, topics=['versioned_notifications'])]} for config in cases: mock_notifier.reset_mock() mock_notifier.side_effect = ['first', 'second'] conf.notifications.notification_format = config rpc.init(conf) self.assertEqual(cases[config], mock_notifier.call_args_list) self.assertEqual('first', rpc.LEGACY_NOTIFIER) self.assertEqual('second', rpc.NOTIFIER) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_policy.py0000664000175000017500000005365100000000000020574 0ustar00zuulzuul00000000000000# Copyright 2011 Piston Cloud Computing, Inc. # All Rights Reserved. # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
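# The policy test cases below follow the same register-then-check pattern:
# a rule's default has to be registered with the enforcer before it can be
# authorized. A minimal standalone sketch of that pattern, using oslo.policy
# directly rather than nova's policy wrapper (the rule names here are
# placeholders, not real nova policies):

from oslo_config import cfg as _cfg
from oslo_policy import policy as _oslo_policy

_enforcer = _oslo_policy.Enforcer(_cfg.CONF)
_enforcer.register_defaults([
    _oslo_policy.RuleDefault("example:allowed", "@"),  # "@" always passes
    _oslo_policy.RuleDefault("example:denied", "!"),   # "!" never passes
])
# authorize() raises PolicyNotRegistered for unknown rules; for registered
# rules it evaluates the check string against the target and credentials.
_enforcer.authorize("example:allowed", {}, {})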
"""Test of Policy Engine For Nova.""" import os.path import mock from oslo_policy import policy as oslo_policy from oslo_serialization import jsonutils import requests_mock from nova import context from nova import exception from nova.policies import servers as servers_policy from nova import policy from nova import test from nova.tests.unit import fake_policy from nova.tests.unit import policy_fixture from nova import utils class PolicyFileTestCase(test.NoDBTestCase): def setUp(self): super(PolicyFileTestCase, self).setUp() self.context = context.RequestContext('fake', 'fake') self.target = {} def test_modified_policy_reloads(self): with utils.tempdir() as tmpdir: tmpfilename = os.path.join(tmpdir, 'policy') self.flags(policy_file=tmpfilename, group='oslo_policy') # NOTE(uni): context construction invokes policy check to determine # is_admin or not. As a side-effect, policy reset is needed here # to flush existing policy cache. policy.reset() policy.init() rule = oslo_policy.RuleDefault('example:test', "") policy._ENFORCER.register_defaults([rule]) action = "example:test" with open(tmpfilename, "w") as policyfile: policyfile.write('{"example:test": ""}') policy.authorize(self.context, action, self.target) with open(tmpfilename, "w") as policyfile: policyfile.write('{"example:test": "!"}') policy._ENFORCER.load_rules(True) self.assertRaises(exception.PolicyNotAuthorized, policy.authorize, self.context, action, self.target) class PolicyTestCase(test.NoDBTestCase): def setUp(self): super(PolicyTestCase, self).setUp() rules = [ oslo_policy.RuleDefault("true", '@'), oslo_policy.RuleDefault("example:allowed", '@'), oslo_policy.RuleDefault("example:denied", "!"), oslo_policy.RuleDefault("old_action_not_default", "@"), oslo_policy.RuleDefault("new_action", "@"), oslo_policy.RuleDefault("old_action_default", "rule:admin_api"), oslo_policy.RuleDefault("example:get_http", "http://www.example.com"), oslo_policy.RuleDefault("example:my_file", "role:compute_admin or " "project_id:%(project_id)s"), oslo_policy.RuleDefault("example:early_and_fail", "! and @"), oslo_policy.RuleDefault("example:early_or_success", "@ or !"), oslo_policy.RuleDefault("example:lowercase_admin", "role:admin or role:sysadmin"), oslo_policy.RuleDefault("example:uppercase_admin", "role:ADMIN or role:sysadmin"), ] policy.reset() policy.init() # before a policy rule can be used, its default has to be registered. 
policy._ENFORCER.register_defaults(rules) self.context = context.RequestContext('fake', 'fake', roles=['member']) self.target = {} def test_authorize_nonexistent_action_throws(self): action = "example:noexist" self.assertRaises(oslo_policy.PolicyNotRegistered, policy.authorize, self.context, action, self.target) def test_authorize_bad_action_throws(self): action = "example:denied" self.assertRaises(exception.PolicyNotAuthorized, policy.authorize, self.context, action, self.target) def test_authorize_bad_action_noraise(self): action = "example:denied" result = policy.authorize(self.context, action, self.target, False) self.assertFalse(result) def test_authorize_good_action(self): action = "example:allowed" result = policy.authorize(self.context, action, self.target) self.assertTrue(result) @requests_mock.mock() def test_authorize_http_true(self, req_mock): req_mock.post('http://www.example.com/', text='True') action = "example:get_http" target = {} result = policy.authorize(self.context, action, target) self.assertTrue(result) @requests_mock.mock() def test_authorize_http_false(self, req_mock): req_mock.post('http://www.example.com/', text='False') action = "example:get_http" target = {} self.assertRaises(exception.PolicyNotAuthorized, policy.authorize, self.context, action, target) def test_templatized_authorization(self): target_mine = {'project_id': 'fake'} target_not_mine = {'project_id': 'another'} action = "example:my_file" policy.authorize(self.context, action, target_mine) # check we fallback to context.project_id # TODO(johngarbutt): longer term we need to remove this and make # the target a required param. policy.authorize(self.context, action) self.assertRaises(exception.PolicyNotAuthorized, policy.authorize, self.context, action, target_not_mine) def test_early_AND_authorization(self): action = "example:early_and_fail" self.assertRaises(exception.PolicyNotAuthorized, policy.authorize, self.context, action, self.target) def test_early_OR_authorization(self): action = "example:early_or_success" policy.authorize(self.context, action, self.target) def test_ignore_case_role_check(self): lowercase_action = "example:lowercase_admin" uppercase_action = "example:uppercase_admin" # NOTE(dprince) we mix case in the Admin role here to ensure # case is ignored admin_context = context.RequestContext('admin', 'fake', roles=['AdMiN']) policy.authorize(admin_context, lowercase_action, self.target) policy.authorize(admin_context, uppercase_action, self.target) @mock.patch.object(policy.LOG, 'warning') def test_warning_when_deprecated_user_based_rule_used(self, mock_warning): policy._warning_for_deprecated_user_based_rules( [("os_compute_api:servers:index", "project_id:%(project_id)s or user_id:%(user_id)s")]) mock_warning.assert_called_once_with( u"The user_id attribute isn't supported in the rule " "'%s'. 
All the user_id based policy enforcement will be removed " "in the future.", "os_compute_api:servers:index") @mock.patch.object(policy.LOG, 'warning') def test_no_warning_for_user_based_resource(self, mock_warning): policy._warning_for_deprecated_user_based_rules( [("os_compute_api:os-keypairs:index", "user_id:%(user_id)s")]) mock_warning.assert_not_called() @mock.patch.object(policy.LOG, 'warning') def test_no_warning_for_no_user_based_rule(self, mock_warning): policy._warning_for_deprecated_user_based_rules( [("os_compute_api:servers:index", "project_id:%(project_id)s")]) mock_warning.assert_not_called() @requests_mock.mock() def test_authorize_raise_invalid_scope(self, req_mock): req_mock.post('http://www.example.com/', text='False') action = "example:get_http" target = {} with mock.patch('oslo_policy.policy.Enforcer.authorize') as auth_mock: auth_mock.side_effect = oslo_policy.InvalidScope( action, self.context.system_scope, 'invalid_scope') exc = self.assertRaises(exception.PolicyNotAuthorized, policy.authorize, self.context, action, target) self.assertEqual( "Policy doesn't allow %s to be performed." % action, exc.format_message()) @mock.patch.object(policy.LOG, 'warning') def test_verify_deprecated_policy_using_old_action(self, mock_warning): old_policy = "old_action_not_default" new_policy = "new_action" default_rule = "rule:admin_api" using_old_action = policy.verify_deprecated_policy( old_policy, new_policy, default_rule, self.context) mock_warning.assert_called_once_with("Start using the new " "action '%(new_policy)s'. The existing action '%(old_policy)s' " "is being deprecated and will be removed in future release.", {'new_policy': new_policy, 'old_policy': old_policy}) self.assertTrue(using_old_action) def test_verify_deprecated_policy_using_new_action(self): old_policy = "old_action_default" new_policy = "new_action" default_rule = "rule:admin_api" using_old_action = policy.verify_deprecated_policy( old_policy, new_policy, default_rule, self.context) self.assertFalse(using_old_action) class IsAdminCheckTestCase(test.NoDBTestCase): def setUp(self): super(IsAdminCheckTestCase, self).setUp() policy.init() def test_init_true(self): check = policy.IsAdminCheck('is_admin', 'True') self.assertEqual(check.kind, 'is_admin') self.assertEqual(check.match, 'True') self.assertTrue(check.expected) def test_init_false(self): check = policy.IsAdminCheck('is_admin', 'nottrue') self.assertEqual(check.kind, 'is_admin') self.assertEqual(check.match, 'False') self.assertFalse(check.expected) def test_call_true(self): check = policy.IsAdminCheck('is_admin', 'True') self.assertTrue(check('target', dict(is_admin=True), policy._ENFORCER)) self.assertFalse(check('target', dict(is_admin=False), policy._ENFORCER)) def test_call_false(self): check = policy.IsAdminCheck('is_admin', 'False') self.assertFalse(check('target', dict(is_admin=True), policy._ENFORCER)) self.assertTrue(check('target', dict(is_admin=False), policy._ENFORCER)) def test_check_is_admin(self): ctxt = context.RequestContext( user_id='fake-user', project_id='fake-project') with mock.patch('oslo_policy.policy.Enforcer.authorize') as mock_auth: result = policy.check_is_admin(ctxt) self.assertEqual(mock_auth.return_value, result) mock_auth.assert_called_once_with( 'context_is_admin', {'user_id': 'fake-user', 'project_id': 'fake-project'}, ctxt) class AdminRolePolicyTestCase(test.NoDBTestCase): def setUp(self): super(AdminRolePolicyTestCase, self).setUp() self.policy = self.useFixture(policy_fixture.RoleBasedPolicyFixture()) self.context = 
context.RequestContext('fake', 'fake', roles=['member']) self.actions = policy.get_rules().keys() self.target = {} def test_authorize_admin_actions_with_nonadmin_context_throws(self): """Check if non-admin context passed to admin actions throws Policy not authorized exception """ for action in self.actions: self.assertRaises(exception.PolicyNotAuthorized, policy.authorize, self.context, action, self.target) class RealRolePolicyTestCase(test.NoDBTestCase): def setUp(self): super(RealRolePolicyTestCase, self).setUp() self.policy = self.useFixture(policy_fixture.RealPolicyFixture()) self.non_admin_context = context.RequestContext('fake', 'fake', roles=['member']) self.admin_context = context.RequestContext('fake', 'fake', True, roles=['member']) self.target = {} self.fake_policy = jsonutils.loads(fake_policy.policy_data) self.admin_only_rules = ( "compute:aggregates:images", "compute:server:topology:host:index", "network:attach_external_network", "os_compute_api:servers:create:forced_host", "compute:servers:create:requested_destination", "os_compute_api:servers:detail:get_all_tenants", "os_compute_api:servers:index:get_all_tenants", "os_compute_api:servers:allow_all_filters", "os_compute_api:servers:show:host_status", "os_compute_api:servers:show:host_status:unknown-only", "os_compute_api:servers:migrations:force_complete", "os_compute_api:servers:migrations:delete", "os_compute_api:os-admin-actions:reset_network", "os_compute_api:os-admin-actions:inject_network_info", "os_compute_api:os-admin-actions:reset_state", "os_compute_api:os-aggregates:index", "os_compute_api:os-aggregates:create", "os_compute_api:os-aggregates:show", "os_compute_api:os-aggregates:update", "os_compute_api:os-aggregates:delete", "os_compute_api:os-aggregates:add_host", "os_compute_api:os-aggregates:remove_host", "os_compute_api:os-aggregates:set_metadata", "os_compute_api:os-agents:create", "os_compute_api:os-agents:update", "os_compute_api:os-agents:delete", "os_compute_api:os-baremetal-nodes", "os_compute_api:os-evacuate", "os_compute_api:os-extended-server-attributes", "os_compute_api:os-flavor-access:remove_tenant_access", "os_compute_api:os-flavor-access:add_tenant_access", "os_compute_api:os-flavor-extra-specs:create", "os_compute_api:os-flavor-extra-specs:update", "os_compute_api:os-flavor-extra-specs:delete", "os_compute_api:os-flavor-manage:create", "os_compute_api:os-flavor-manage:update", "os_compute_api:os-flavor-manage:delete", "os_compute_api:os-hosts", "os_compute_api:os-instance-actions:events", "os_compute_api:os-lock-server:unlock:unlock_override", "os_compute_api:os-migrate-server:migrate", "os_compute_api:os-migrate-server:migrate_live", "os_compute_api:os-quota-sets:update", "os_compute_api:os-quota-sets:delete", "os_compute_api:os-server-diagnostics", "os_compute_api:os-server-groups:index:all_projects", "os_compute_api:os-services:update", "os_compute_api:os-services:delete", "os_compute_api:os-shelve:shelve_offload", "os_compute_api:os-availability-zone:detail", "os_compute_api:os-assisted-volume-snapshots:create", "os_compute_api:os-assisted-volume-snapshots:delete", "os_compute_api:os-console-auth-tokens", "os_compute_api:os-quota-class-sets:update", "os_compute_api:os-server-external-events:create", "os_compute_api:os-volumes-attachments:swap", "os_compute_api:servers:create:zero_disk_flavor", ) self.admin_or_owner_rules = ( "compute:server:topology:index", "os_compute_api:servers:start", "os_compute_api:servers:stop", "os_compute_api:servers:trigger_crash_dump", 
"os_compute_api:os-create-backup", "os_compute_api:ips:index", "os_compute_api:ips:show", "os_compute_api:os-keypairs:create", "os_compute_api:os-keypairs:delete", "os_compute_api:os-keypairs:index", "os_compute_api:os-keypairs:show", "os_compute_api:os-lock-server:lock", "os_compute_api:os-lock-server:unlock", "os_compute_api:os-pause-server:pause", "os_compute_api:os-pause-server:unpause", "os_compute_api:os-quota-sets:show", "os_compute_api:os-quota-sets:detail", "os_compute_api:server-metadata:index", "os_compute_api:server-metadata:show", "os_compute_api:server-metadata:delete", "os_compute_api:server-metadata:create", "os_compute_api:server-metadata:update", "os_compute_api:server-metadata:update_all", "os_compute_api:os-suspend-server:suspend", "os_compute_api:os-suspend-server:resume", "os_compute_api:os-tenant-networks", "os_compute_api:extensions", "os_compute_api:servers:confirm_resize", "os_compute_api:servers:create", "os_compute_api:servers:create:attach_network", "os_compute_api:servers:create:attach_volume", "os_compute_api:servers:create:trusted_certs", "os_compute_api:servers:create_image", "os_compute_api:servers:delete", "os_compute_api:servers:detail", "os_compute_api:servers:index", "os_compute_api:servers:reboot", "os_compute_api:servers:rebuild", "os_compute_api:servers:rebuild:trusted_certs", "os_compute_api:servers:resize", "os_compute_api:servers:revert_resize", "os_compute_api:servers:show", "os_compute_api:servers:update", "os_compute_api:servers:create_image:allow_volume_backed", "os_compute_api:os-admin-password", "os_compute_api:os-attach-interfaces:create", "os_compute_api:os-attach-interfaces:delete", "os_compute_api:os-console-output", "os_compute_api:os-remote-consoles", "os_compute_api:os-deferred-delete:restore", "os_compute_api:os-deferred-delete:force", "os_compute_api:os-flavor-access", "os_compute_api:os-flavor-extra-specs:index", "os_compute_api:os-flavor-extra-specs:show", "os_compute_api:os-floating-ip-pools", "os_compute_api:os-floating-ips", "os_compute_api:os-multinic", "os_compute_api:os-networks:view", "os_compute_api:os-rescue", "os_compute_api:os-unrescue", "os_compute_api:os-security-groups", "os_compute_api:os-security-groups:add", "os_compute_api:os-security-groups:remove", "os_compute_api:os-server-password:clear", "os_compute_api:os-server-tags:delete", "os_compute_api:os-server-tags:delete_all", "os_compute_api:os-server-tags:update", "os_compute_api:os-server-tags:update_all", "os_compute_api:os-server-groups:index", "os_compute_api:os-server-groups:show", "os_compute_api:os-server-groups:create", "os_compute_api:os-server-groups:delete", "os_compute_api:os-shelve:shelve", "os_compute_api:os-shelve:unshelve", "os_compute_api:os-volumes", "os_compute_api:os-volumes-attachments:create", "os_compute_api:os-volumes-attachments:delete", "os_compute_api:os-volumes-attachments:update", ) self.allow_all_rules = ( "os_compute_api:os-quota-sets:defaults", "os_compute_api:os-availability-zone:list", "os_compute_api:limits", ) self.system_reader_rules = ( "os_compute_api:servers:migrations:index", "os_compute_api:servers:migrations:show", "os_compute_api:os-simple-tenant-usage:list", "os_compute_api:os-migrations:index", "os_compute_api:os-services:list", "os_compute_api:os-instance-actions:events:details", "os_compute_api:os-instance-usage-audit-log:list", "os_compute_api:os-instance-usage-audit-log:show", "os_compute_api:os-agents:list", "os_compute_api:os-hypervisors:list", "os_compute_api:os-hypervisors:list-detail", 
"os_compute_api:os-hypervisors:show", "os_compute_api:os-hypervisors:statistics", "os_compute_api:os-hypervisors:uptime", "os_compute_api:os-hypervisors:search", "os_compute_api:os-hypervisors:servers", "os_compute_api:limits:other_project", ) self.system_reader_or_owner_rules = ( "os_compute_api:os-simple-tenant-usage:show", "os_compute_api:os-security-groups:list", "os_compute_api:os-volumes-attachments:index", "os_compute_api:os-volumes-attachments:show", "os_compute_api:os-attach-interfaces:list", "os_compute_api:os-attach-interfaces:show", "os_compute_api:os-instance-actions:list", "os_compute_api:os-instance-actions:show", "os_compute_api:os-server-password:show", "os_compute_api:os-server-tags:index", "os_compute_api:os-server-tags:show", ) self.allow_nobody_rules = ( servers_policy.CROSS_CELL_RESIZE, ) def test_all_rules_in_sample_file(self): special_rules = ["context_is_admin", "admin_or_owner", "default"] for (name, rule) in self.fake_policy.items(): if name in special_rules: continue self.assertIn(name, policy.get_rules()) def test_admin_only_rules(self): for rule in self.admin_only_rules: self.assertRaises(exception.PolicyNotAuthorized, policy.authorize, self.non_admin_context, rule, {'project_id': 'fake', 'user_id': 'fake'}) policy.authorize(self.admin_context, rule, self.target) def test_admin_or_owner_rules(self): for rule in self.admin_or_owner_rules: self.assertRaises(exception.PolicyNotAuthorized, policy.authorize, self.non_admin_context, rule, self.target) policy.authorize(self.non_admin_context, rule, {'project_id': 'fake', 'user_id': 'fake'}) def test_allow_all_rules(self): for rule in self.allow_all_rules: policy.authorize(self.non_admin_context, rule, self.target) def test_allow_nobody_rules(self): """No one can perform these operations, not even admin.""" for rule in self.allow_nobody_rules: self.assertRaises(exception.PolicyNotAuthorized, policy.authorize, self.admin_context, rule, self.target) def test_rule_missing(self): rules = policy.get_rules() # eliqiao os_compute_api:os-quota-class-sets:show requires # admin=True or quota_class match, this rule won't belong to # admin_only, non_admin, admin_or_user, empty_rule special_rules = ('admin_api', 'admin_or_owner', 'context_is_admin', 'os_compute_api:os-quota-class-sets:show', 'system_admin_api', 'system_reader_api', 'project_admin_api', 'project_member_api', 'project_reader_api', 'system_admin_or_owner', 'system_or_project_reader') result = set(rules.keys()) - set(self.admin_only_rules + self.admin_or_owner_rules + self.allow_all_rules + self.system_reader_rules + self.system_reader_or_owner_rules + self.allow_nobody_rules + special_rules) self.assertEqual(set([]), result) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_profiler.py0000664000175000017500000000711400000000000021110 0ustar00zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import inspect
import os

from oslo_utils import importutils
import osprofiler.opts as profiler
import six.moves as six

from nova import conf
from nova import test


class TestProfiler(test.NoDBTestCase):
    def test_all_public_methods_are_traced(self):
        # NOTE(rpodolyaka): osprofiler only wraps class methods when option
        # CONF.profiler.enabled is set to True and the default value is False,
        # which means in our usual test run we use original, not patched
        # classes. In order to test that we actually properly wrap the methods
        # we are interested in, this test case sets CONF.profiler.enabled to
        # True and reloads all the affected Python modules (application of
        # decorators and metaclasses is performed at module import time).
        # Unfortunately, this leads to subtle failures of other test cases
        # (e.g. super() is performed on a "new" version of a class instance
        # created after a module reload while the class name is a reference to
        # an "old" version of the class). Thus, this test is run in isolation.
        if not os.getenv('TEST_OSPROFILER', False):
            self.skipTest('TEST_OSPROFILER env variable is not set. '
                          'Skipping osprofiler tests...')

        # reinitialize the metaclass after enabling osprofiler
        profiler.set_defaults(conf.CONF)
        self.flags(enabled=True, group='profiler')
        six.reload_module(importutils.import_module('nova.manager'))

        classes = [
            'nova.compute.api.API',
            'nova.compute.manager.ComputeManager',
            'nova.compute.rpcapi.ComputeAPI',
            'nova.conductor.manager.ComputeTaskManager',
            'nova.conductor.manager.ConductorManager',
            'nova.conductor.rpcapi.ComputeTaskAPI',
            'nova.conductor.rpcapi.ConductorAPI',
            'nova.image.glance.API',
            'nova.network.neutron.ClientWrapper',
            'nova.scheduler.manager.SchedulerManager',
            'nova.scheduler.rpcapi.SchedulerAPI',
            'nova.virt.libvirt.vif.LibvirtGenericVIFDriver',
            'nova.virt.libvirt.volume.volume.LibvirtBaseVolumeDriver',
        ]
        for clsname in classes:
            # give the metaclass and trace_cls() decorator a chance to patch
            # methods of the classes above
            six.reload_module(
                importutils.import_module(clsname.rsplit('.', 1)[0]))
            cls = importutils.import_class(clsname)

            for attr, obj in cls.__dict__.items():
                # only public methods are traced
                if attr.startswith('_'):
                    continue
                # only checks callables
                if not (inspect.ismethod(obj) or inspect.isfunction(obj)):
                    continue
                # osprofiler skips static methods
                if isinstance(obj, staticmethod):
                    continue

                self.assertTrue(getattr(obj, '__traced__', False), obj)
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_quota.py0000664000175000017500000024101300000000000020415 0ustar00zuulzuul00000000000000
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
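# ---------------------------------------------------------------------------
# Editor's note (illustrative sketch, not part of the original module): the
# quota tests in this file stub out resource counting rather than querying a
# real database or placement.  The stubbed count functions return separate
# per-project and per-user totals; the name EXAMPLE_FAKE_COUNTS below is made
# up purely for this note.

EXAMPLE_FAKE_COUNTS = {
    'project': {'instances': 2, 'cores': 4, 'ram': 1024},
    'user': {'instances': 1, 'cores': 2, 'ram': 512},
}

# A test typically injects such data with
# ``mock.patch('<count function>', side_effect=lambda *a, **kw:
# EXAMPLE_FAKE_COUNTS)`` so the driver under test sees deterministic usage
# numbers instead of live counts.
# ---------------------------------------------------------------------------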
import ddt import mock from oslo_db.sqlalchemy import enginefacade from six.moves import range from nova.compute import api as compute import nova.conf from nova import context from nova.db.sqlalchemy import models as sqa_models from nova import exception from nova import objects from nova import quota from nova import test import nova.tests.unit.image.fake CONF = nova.conf.CONF def _get_fake_get_usages(updates=None): # These values are not realistic (they should all be 0) and are # only for testing that countable usages get included in the # results. usages = {'key_pairs': {'in_use': 2}, 'server_group_members': {'in_use': 3}, 'instances': {'in_use': 2}, 'cores': {'in_use': 4}, 'ram': {'in_use': 10 * 1024}} if updates: usages.update(updates) def fake_get_usages(*a, **k): return usages return fake_get_usages class QuotaIntegrationTestCase(test.TestCase): REQUIRES_LOCKING = True def setUp(self): super(QuotaIntegrationTestCase, self).setUp() self.flags(instances=2, cores=4, group='quota') self.user_id = 'admin' self.project_id = 'admin' self.context = context.RequestContext(self.user_id, self.project_id, is_admin=True) self.inst_type = objects.Flavor.get_by_name(self.context, 'm1.small') nova.tests.unit.image.fake.stub_out_image_service(self) self.compute_api = compute.API() def fake_validate_networks(context, requested_networks, num_instances): return num_instances # we aren't testing network quota in these tests when creating a server # so just mock that out and assume network (port) quota is OK self.compute_api.network_api.validate_networks = ( mock.Mock(side_effect=fake_validate_networks)) def tearDown(self): super(QuotaIntegrationTestCase, self).tearDown() nova.tests.unit.image.fake.FakeImageService_reset() def _create_instance(self, flavor_name='m1.large'): """Create a test instance in cell1 with an instance mapping.""" cell1 = self.cell_mappings[test.CELL1_NAME] with context.target_cell(self.context, cell1) as cctxt: inst = objects.Instance(context=cctxt) inst.image_id = 'cedef40a-ed67-4d10-800e-17455edce175' inst.reservation_id = 'r-fakeres' inst.user_id = self.user_id inst.project_id = self.project_id inst.flavor = objects.Flavor.get_by_name(cctxt, flavor_name) # This is needed for instance quota counting until we have the # ability to count allocations in placement. inst.vcpus = inst.flavor.vcpus inst.memory_mb = inst.flavor.memory_mb inst.create() # Create the related instance mapping which will be used in # _instances_cores_ram_count(). 
inst_map = objects.InstanceMapping( self.context, instance_uuid=inst.uuid, project_id=inst.project_id, cell_mapping=cell1) inst_map.create() return inst def test_too_many_instances(self): for i in range(CONF.quota.instances): self._create_instance() image_uuid = 'cedef40a-ed67-4d10-800e-17455edce175' try: self.compute_api.create(self.context, min_count=1, max_count=1, instance_type=self.inst_type, image_href=image_uuid) except exception.QuotaError as e: expected_kwargs = {'code': 413, 'req': '1, 1', 'used': '8, 2', 'allowed': '4, 2', 'overs': 'cores, instances'} self.assertEqual(expected_kwargs, e.kwargs) else: self.fail('Expected QuotaError exception') def test_too_many_cores(self): self._create_instance() image_uuid = 'cedef40a-ed67-4d10-800e-17455edce175' try: self.compute_api.create(self.context, min_count=1, max_count=1, instance_type=self.inst_type, image_href=image_uuid) except exception.QuotaError as e: expected_kwargs = {'code': 413, 'req': '1', 'used': '4', 'allowed': '4', 'overs': 'cores'} self.assertEqual(expected_kwargs, e.kwargs) else: self.fail('Expected QuotaError exception') def test_many_cores_with_unlimited_quota(self): # Setting cores quota to unlimited: self.flags(cores=-1, group='quota') # Default is 20 cores, so create 3x m1.xlarge with # 8 cores each. for i in range(3): self._create_instance(flavor_name='m1.xlarge') def test_too_many_metadata_items(self): metadata = {} for i in range(CONF.quota.metadata_items + 1): metadata['key%s' % i] = 'value%s' % i image_uuid = 'cedef40a-ed67-4d10-800e-17455edce175' self.assertRaises(exception.QuotaError, self.compute_api.create, self.context, min_count=1, max_count=1, instance_type=self.inst_type, image_href=image_uuid, metadata=metadata) def _create_with_injected_files(self, files): api = self.compute_api image_uuid = 'cedef40a-ed67-4d10-800e-17455edce175' api.create(self.context, min_count=1, max_count=1, instance_type=self.inst_type, image_href=image_uuid, injected_files=files) def test_no_injected_files(self): api = self.compute_api image_uuid = 'cedef40a-ed67-4d10-800e-17455edce175' api.create(self.context, instance_type=self.inst_type, image_href=image_uuid) def test_max_injected_files(self): files = [] for i in range(CONF.quota.injected_files): files.append(('/my/path%d' % i, 'config = test\n')) self._create_with_injected_files(files) # no QuotaError def test_too_many_injected_files(self): files = [] for i in range(CONF.quota.injected_files + 1): files.append(('/my/path%d' % i, 'my\ncontent%d\n' % i)) self.assertRaises(exception.QuotaError, self._create_with_injected_files, files) def test_max_injected_file_content_bytes(self): max = CONF.quota.injected_file_content_bytes content = ''.join(['a' for i in range(max)]) files = [('/test/path', content)] self._create_with_injected_files(files) # no QuotaError def test_too_many_injected_file_content_bytes(self): max = CONF.quota.injected_file_content_bytes content = ''.join(['a' for i in range(max + 1)]) files = [('/test/path', content)] self.assertRaises(exception.QuotaError, self._create_with_injected_files, files) def test_max_injected_file_path_bytes(self): max = CONF.quota.injected_file_path_length path = ''.join(['a' for i in range(max)]) files = [(path, 'config = quotatest')] self._create_with_injected_files(files) # no QuotaError def test_too_many_injected_file_path_bytes(self): max = CONF.quota.injected_file_path_length path = ''.join(['a' for i in range(max + 1)]) files = [(path, 'config = quotatest')] self.assertRaises(exception.QuotaError, 
self._create_with_injected_files, files) @enginefacade.transaction_context_provider class FakeContext(context.RequestContext): def __init__(self, project_id, quota_class): super(FakeContext, self).__init__(project_id=project_id, quota_class=quota_class) self.is_admin = False self.user_id = 'fake_user' self.project_id = project_id self.quota_class = quota_class self.read_deleted = 'no' def elevated(self): elevated = self.__class__(self.project_id, self.quota_class) elevated.is_admin = True return elevated class FakeDriver(object): def __init__(self, by_project=None, by_user=None, by_class=None, reservations=None): self.called = [] self.by_project = by_project or {} self.by_user = by_user or {} self.by_class = by_class or {} self.reservations = reservations or [] def get_defaults(self, context, resources): self.called.append(('get_defaults', context, resources)) return resources def get_class_quotas(self, context, resources, quota_class): self.called.append(('get_class_quotas', context, resources, quota_class)) return resources def get_user_quotas(self, context, resources, project_id, user_id, quota_class=None, usages=True): self.called.append(('get_user_quotas', context, resources, project_id, user_id, quota_class, usages)) return resources def get_project_quotas(self, context, resources, project_id, quota_class=None, usages=True, remains=False): self.called.append(('get_project_quotas', context, resources, project_id, quota_class, usages, remains)) return resources def limit_check(self, context, resources, values, project_id=None, user_id=None): self.called.append(('limit_check', context, resources, values, project_id, user_id)) def limit_check_project_and_user(self, context, resources, project_values=None, user_values=None, project_id=None, user_id=None): self.called.append(('limit_check_project_and_user', context, resources, project_values, user_values, project_id, user_id)) class BaseResourceTestCase(test.TestCase): def test_no_flag(self): resource = quota.BaseResource('test_resource') self.assertEqual(resource.name, 'test_resource') self.assertIsNone(resource.flag) self.assertEqual(resource.default, -1) def test_with_flag(self): # We know this flag exists, so use it... 
self.flags(instances=10, group='quota') resource = quota.BaseResource('test_resource', 'instances') self.assertEqual(resource.name, 'test_resource') self.assertEqual(resource.flag, 'instances') self.assertEqual(resource.default, 10) def test_with_flag_no_quota(self): self.flags(instances=-1, group='quota') resource = quota.BaseResource('test_resource', 'instances') self.assertEqual(resource.name, 'test_resource') self.assertEqual(resource.flag, 'instances') self.assertEqual(resource.default, -1) def test_valid_method_call_check_invalid_input(self): resources = {'dummy': 1} self.assertRaises(exception.InvalidQuotaMethodUsage, quota._valid_method_call_check_resources, resources, 'limit', quota.QUOTAS._resources) def test_valid_method_call_check_invalid_method(self): resources = {'key_pairs': 1} self.assertRaises(exception.InvalidQuotaMethodUsage, quota._valid_method_call_check_resources, resources, 'dummy', quota.QUOTAS._resources) def test_valid_method_call_check_multiple(self): resources = {'key_pairs': 1, 'dummy': 2} self.assertRaises(exception.InvalidQuotaMethodUsage, quota._valid_method_call_check_resources, resources, 'check', quota.QUOTAS._resources) resources = {'key_pairs': 1, 'instances': 2, 'dummy': 3} self.assertRaises(exception.InvalidQuotaMethodUsage, quota._valid_method_call_check_resources, resources, 'check', quota.QUOTAS._resources) def test_valid_method_call_check_wrong_method(self): resources = {'key_pairs': 1} engine_resources = {'key_pairs': quota.CountableResource('key_pairs', None, 'key_pairs')} self.assertRaises(exception.InvalidQuotaMethodUsage, quota._valid_method_call_check_resources, resources, 'bogus', engine_resources) class QuotaEngineTestCase(test.TestCase): def test_init(self): quota_obj = quota.QuotaEngine() self.assertIsInstance(quota_obj._driver, quota.DbQuotaDriver) def test_init_override_obj(self): quota_obj = quota.QuotaEngine(quota_driver=FakeDriver) self.assertEqual(quota_obj._driver, FakeDriver) def _get_quota_engine(self, driver, resources=None): resources = resources or [ quota.AbsoluteResource('test_resource4'), quota.AbsoluteResource('test_resource3'), quota.AbsoluteResource('test_resource2'), quota.AbsoluteResource('test_resource1'), ] return quota.QuotaEngine(quota_driver=driver, resources=resources) def test_get_defaults(self): context = FakeContext(None, None) driver = FakeDriver() quota_obj = self._get_quota_engine(driver) result = quota_obj.get_defaults(context) self.assertEqual(driver.called, [ ('get_defaults', context, quota_obj._resources), ]) self.assertEqual(result, quota_obj._resources) def test_get_class_quotas(self): context = FakeContext(None, None) driver = FakeDriver() quota_obj = self._get_quota_engine(driver) result1 = quota_obj.get_class_quotas(context, 'test_class') self.assertEqual(driver.called, [ ('get_class_quotas', context, quota_obj._resources, 'test_class'), ]) self.assertEqual(result1, quota_obj._resources) def test_get_user_quotas(self): context = FakeContext(None, None) driver = FakeDriver() quota_obj = self._get_quota_engine(driver) result1 = quota_obj.get_user_quotas(context, 'test_project', 'fake_user') result2 = quota_obj.get_user_quotas(context, 'test_project', 'fake_user', quota_class='test_class', usages=False) self.assertEqual(driver.called, [ ('get_user_quotas', context, quota_obj._resources, 'test_project', 'fake_user', None, True), ('get_user_quotas', context, quota_obj._resources, 'test_project', 'fake_user', 'test_class', False), ]) self.assertEqual(result1, quota_obj._resources) 
self.assertEqual(result2, quota_obj._resources) def test_get_project_quotas(self): context = FakeContext(None, None) driver = FakeDriver() quota_obj = self._get_quota_engine(driver) result1 = quota_obj.get_project_quotas(context, 'test_project') result2 = quota_obj.get_project_quotas(context, 'test_project', quota_class='test_class', usages=False) self.assertEqual(driver.called, [ ('get_project_quotas', context, quota_obj._resources, 'test_project', None, True, False), ('get_project_quotas', context, quota_obj._resources, 'test_project', 'test_class', False, False), ]) self.assertEqual(result1, quota_obj._resources) self.assertEqual(result2, quota_obj._resources) def test_count_as_dict_no_resource(self): context = FakeContext(None, None) driver = FakeDriver() quota_obj = self._get_quota_engine(driver) self.assertRaises(exception.QuotaResourceUnknown, quota_obj.count_as_dict, context, 'test_resource5', True, foo='bar') def test_count_as_dict_wrong_resource(self): context = FakeContext(None, None) driver = FakeDriver() quota_obj = self._get_quota_engine(driver) self.assertRaises(exception.QuotaResourceUnknown, quota_obj.count_as_dict, context, 'test_resource1', True, foo='bar') def test_count_as_dict(self): def fake_count_as_dict(context, *args, **kwargs): self.assertEqual(args, (True,)) self.assertEqual(kwargs, dict(foo='bar')) return {'project': {'test_resource5': 5}} context = FakeContext(None, None) driver = FakeDriver() resources = [ quota.CountableResource('test_resource5', fake_count_as_dict), ] quota_obj = self._get_quota_engine(driver, resources) result = quota_obj.count_as_dict(context, 'test_resource5', True, foo='bar') self.assertEqual({'project': {'test_resource5': 5}}, result) def test_limit_check(self): context = FakeContext(None, None) driver = FakeDriver() quota_obj = self._get_quota_engine(driver) quota_obj.limit_check(context, test_resource1=4, test_resource2=3, test_resource3=2, test_resource4=1) self.assertEqual(driver.called, [ ('limit_check', context, quota_obj._resources, dict( test_resource1=4, test_resource2=3, test_resource3=2, test_resource4=1, ), None, None), ]) def test_limit_check_project_and_user(self): context = FakeContext(None, None) driver = FakeDriver() quota_obj = self._get_quota_engine(driver) project_values = dict(test_resource1=4, test_resource2=3) user_values = dict(test_resource3=2, test_resource4=1) quota_obj.limit_check_project_and_user(context, project_values=project_values, user_values=user_values) self.assertEqual([('limit_check_project_and_user', context, quota_obj._resources, dict(test_resource1=4, test_resource2=3), dict(test_resource3=2, test_resource4=1), None, None)], driver.called) def test_resources(self): quota_obj = self._get_quota_engine(None) self.assertEqual(quota_obj.resources, ['test_resource1', 'test_resource2', 'test_resource3', 'test_resource4']) class DbQuotaDriverTestCase(test.TestCase): def setUp(self): super(DbQuotaDriverTestCase, self).setUp() self.flags(instances=10, cores=20, ram=50 * 1024, metadata_items=128, injected_files=5, injected_file_content_bytes=10 * 1024, injected_file_path_length=255, server_groups=10, server_group_members=10, group='quota' ) self.driver = quota.DbQuotaDriver() self.calls = [] self.useFixture(test.TimeOverride()) def test_get_defaults(self): # Use our pre-defined resources self._stub_quota_class_get_default() result = self.driver.get_defaults(None, quota.QUOTAS._resources) self.assertEqual(result, dict( instances=5, cores=20, ram=25 * 1024, metadata_items=64, injected_files=5, 
injected_file_content_bytes=5 * 1024, injected_file_path_bytes=255, key_pairs=100, server_groups=10, server_group_members=10, security_groups=-1, security_group_rules=-1, fixed_ips=-1, floating_ips=-1, )) def _stub_quota_class_get_default(self): # Stub out quota_class_get_default def fake_qcgd(cls, context): self.calls.append('quota_class_get_default') return dict( instances=5, ram=25 * 1024, metadata_items=64, injected_file_content_bytes=5 * 1024, ) self.stub_out('nova.objects.Quotas.get_default_class', fake_qcgd) def _stub_quota_class_get_all_by_name(self): # Stub out quota_class_get_all_by_name def fake_qcgabn(cls, context, quota_class): self.calls.append('quota_class_get_all_by_name') self.assertEqual(quota_class, 'test_class') return dict( instances=5, ram=25 * 1024, metadata_items=64, injected_file_content_bytes=5 * 1024, ) self.stub_out('nova.objects.Quotas.get_all_class_by_name', fake_qcgabn) def test_get_class_quotas(self): self._stub_quota_class_get_all_by_name() result = self.driver.get_class_quotas(None, quota.QUOTAS._resources, 'test_class') self.assertEqual(self.calls, ['quota_class_get_all_by_name']) self.assertEqual(result, dict( instances=5, cores=20, ram=25 * 1024, metadata_items=64, injected_files=5, injected_file_content_bytes=5 * 1024, injected_file_path_bytes=255, key_pairs=100, server_groups=10, server_group_members=10, floating_ips=-1, fixed_ips=-1, security_groups=-1, security_group_rules=-1, )) def _stub_get_by_project_and_user(self): def fake_qgabpau(context, project_id, user_id): self.calls.append('quota_get_all_by_project_and_user') self.assertEqual(project_id, 'test_project') self.assertEqual(user_id, 'fake_user') return dict( cores=10, injected_files=2, injected_file_path_bytes=127, ) def fake_qgabp(context, project_id): self.calls.append('quota_get_all_by_project') self.assertEqual(project_id, 'test_project') return { 'cores': 10, 'injected_files': 2, 'injected_file_path_bytes': 127, } self.stub_out('nova.db.api.quota_get_all_by_project_and_user', fake_qgabpau) self.stub_out('nova.db.api.quota_get_all_by_project', fake_qgabp) self._stub_quota_class_get_all_by_name() def _get_fake_countable_resources(self): # Create several countable resources with fake count functions def fake_instances_cores_ram_count(*a, **k): return {'project': {'instances': 2, 'cores': 4, 'ram': 1024}, 'user': {'instances': 1, 'cores': 2, 'ram': 512}} def fake_server_group_count(*a, **k): return {'project': {'server_groups': 5}, 'user': {'server_groups': 3}} resources = {} resources['key_pairs'] = quota.CountableResource( 'key_pairs', lambda *a, **k: {'user': {'key_pairs': 1}}, 'key_pairs') resources['instances'] = quota.CountableResource( 'instances', fake_instances_cores_ram_count, 'instances') resources['cores'] = quota.CountableResource( 'cores', fake_instances_cores_ram_count, 'cores') resources['ram'] = quota.CountableResource( 'ram', fake_instances_cores_ram_count, 'ram') resources['server_groups'] = quota.CountableResource( 'server_groups', fake_server_group_count, 'server_groups') resources['server_group_members'] = quota.CountableResource( 'server_group_members', lambda *a, **k: {'user': {'server_group_members': 7}}, 'server_group_members') resources['floating_ips'] = quota.AbsoluteResource('floating_ips') resources['fixed_ips'] = quota.AbsoluteResource('fixed_ips') resources['security_groups'] = quota.AbsoluteResource( 'security_groups') resources['security_group_rules'] = quota.AbsoluteResource( 'security_group_rules') return resources def test_get_usages_for_project(self): 
resources = self._get_fake_countable_resources() actual = self.driver._get_usages( FakeContext('test_project', 'test_class'), resources, 'test_project') # key_pairs, server_group_members, and security_group_rules are never # counted as a usage. Their counts are only for quota limit checking. expected = {'key_pairs': {'in_use': 0}, 'instances': {'in_use': 2}, 'cores': {'in_use': 4}, 'ram': {'in_use': 1024}, 'server_groups': {'in_use': 5}, 'server_group_members': {'in_use': 0}} self.assertEqual(expected, actual) def test_get_usages_for_user(self): resources = self._get_fake_countable_resources() actual = self.driver._get_usages( FakeContext('test_project', 'test_class'), resources, 'test_project', user_id='fake_user') # key_pairs, server_group_members, and security_group_rules are never # counted as a usage. Their counts are only for quota limit checking. expected = {'key_pairs': {'in_use': 0}, 'instances': {'in_use': 1}, 'cores': {'in_use': 2}, 'ram': {'in_use': 512}, 'server_groups': {'in_use': 3}, 'server_group_members': {'in_use': 0}} self.assertEqual(expected, actual) @mock.patch('nova.quota.DbQuotaDriver._get_usages', side_effect=_get_fake_get_usages()) def test_get_user_quotas(self, mock_get_usages): self.maxDiff = None self._stub_get_by_project_and_user() ctxt = FakeContext('test_project', 'test_class') result = self.driver.get_user_quotas( ctxt, quota.QUOTAS._resources, 'test_project', 'fake_user') self.assertEqual(self.calls, [ 'quota_get_all_by_project_and_user', 'quota_get_all_by_project', 'quota_class_get_all_by_name', ]) mock_get_usages.assert_called_once_with(ctxt, quota.QUOTAS._resources, 'test_project', user_id='fake_user') self.assertEqual(result, dict( instances=dict( limit=5, in_use=2, ), cores=dict( limit=10, in_use=4, ), ram=dict( limit=25 * 1024, in_use=10 * 1024, ), floating_ips=dict( limit=-1, in_use=0, ), fixed_ips=dict( limit=-1, in_use=0, ), metadata_items=dict( limit=64, in_use=0, ), injected_files=dict( limit=2, in_use=0, ), injected_file_content_bytes=dict( limit=5 * 1024, in_use=0, ), injected_file_path_bytes=dict( limit=127, in_use=0, ), security_groups=dict( limit=-1, in_use=0, ), security_group_rules=dict( limit=-1, in_use=0, ), key_pairs=dict( limit=100, in_use=2, ), server_groups=dict( limit=10, in_use=0, ), server_group_members=dict( limit=10, in_use=3, ), )) def _stub_get_by_project_and_user_specific(self): def fake_quota_get(context, project_id, resource, user_id=None): self.calls.append('quota_get') self.assertEqual(project_id, 'test_project') self.assertEqual(user_id, 'fake_user') self.assertEqual(resource, 'test_resource') return dict( test_resource=dict(in_use=20), ) self.stub_out('nova.db.api.quota_get', fake_quota_get) def _stub_get_by_project(self): def fake_qgabp(context, project_id): self.calls.append('quota_get_all_by_project') self.assertEqual(project_id, 'test_project') return dict( cores=10, injected_files=2, injected_file_path_bytes=127, ) def fake_quota_get_all(context, project_id): self.calls.append('quota_get_all') self.assertEqual(project_id, 'test_project') return [sqa_models.ProjectUserQuota(resource='instances', hard_limit=5), sqa_models.ProjectUserQuota(resource='cores', hard_limit=2)] self.stub_out('nova.db.api.quota_get_all_by_project', fake_qgabp) self.stub_out('nova.db.api.quota_get_all', fake_quota_get_all) self._stub_quota_class_get_all_by_name() self._stub_quota_class_get_default() @mock.patch('nova.quota.DbQuotaDriver._get_usages', side_effect=_get_fake_get_usages()) def test_get_project_quotas(self, mock_get_usages): 
self.maxDiff = None self._stub_get_by_project() ctxt = FakeContext('test_project', 'test_class') result = self.driver.get_project_quotas( ctxt, quota.QUOTAS._resources, 'test_project') self.assertEqual(self.calls, [ 'quota_get_all_by_project', 'quota_class_get_all_by_name', 'quota_class_get_default', ]) mock_get_usages.assert_called_once_with(ctxt, quota.QUOTAS._resources, 'test_project') self.assertEqual(result, dict( instances=dict( limit=5, in_use=2, ), cores=dict( limit=10, in_use=4, ), ram=dict( limit=25 * 1024, in_use=10 * 1024, ), floating_ips=dict( limit=-1, in_use=0, ), fixed_ips=dict( limit=-1, in_use=0, ), metadata_items=dict( limit=64, in_use=0, ), injected_files=dict( limit=2, in_use=0, ), injected_file_content_bytes=dict( limit=5 * 1024, in_use=0, ), injected_file_path_bytes=dict( limit=127, in_use=0, ), security_groups=dict( limit=-1, in_use=0, ), security_group_rules=dict( limit=-1, in_use=0, ), key_pairs=dict( limit=100, in_use=2, ), server_groups=dict( limit=10, in_use=0, ), server_group_members=dict( limit=10, in_use=3, ), )) @mock.patch('nova.quota.DbQuotaDriver._get_usages', side_effect=_get_fake_get_usages()) def test_get_project_quotas_with_remains(self, mock_get_usages): self.maxDiff = None self._stub_get_by_project() ctxt = FakeContext('test_project', 'test_class') result = self.driver.get_project_quotas( ctxt, quota.QUOTAS._resources, 'test_project', remains=True) self.assertEqual(self.calls, [ 'quota_get_all_by_project', 'quota_class_get_all_by_name', 'quota_class_get_default', 'quota_get_all', ]) mock_get_usages.assert_called_once_with(ctxt, quota.QUOTAS._resources, 'test_project') self.assertEqual(result, dict( instances=dict( limit=5, in_use=2, remains=0, ), cores=dict( limit=10, in_use=4, remains=8, ), ram=dict( limit=25 * 1024, in_use=10 * 1024, remains=25 * 1024, ), floating_ips=dict( limit=-1, in_use=0, remains=-1, ), fixed_ips=dict( limit=-1, in_use=0, remains=-1, ), metadata_items=dict( limit=64, in_use=0, remains=64, ), injected_files=dict( limit=2, in_use=0, remains=2, ), injected_file_content_bytes=dict( limit=5 * 1024, in_use=0, remains=5 * 1024, ), injected_file_path_bytes=dict( limit=127, in_use=0, remains=127, ), security_groups=dict( limit=-1, in_use=0, remains=-1, ), security_group_rules=dict( limit=-1, in_use=0, remains=-1, ), key_pairs=dict( limit=100, in_use=2, remains=100, ), server_groups=dict( limit=10, in_use=0, remains=10, ), server_group_members=dict( limit=10, in_use=3, remains=10, ), )) @mock.patch('nova.quota.DbQuotaDriver._get_usages', side_effect=_get_fake_get_usages()) def test_get_user_quotas_alt_context_no_class(self, mock_get_usages): self.maxDiff = None self._stub_get_by_project_and_user() ctxt = FakeContext('other_project', None) result = self.driver.get_user_quotas( ctxt, quota.QUOTAS._resources, 'test_project', 'fake_user') self.assertEqual(self.calls, [ 'quota_get_all_by_project_and_user', 'quota_get_all_by_project', ]) mock_get_usages.assert_called_once_with(ctxt, quota.QUOTAS._resources, 'test_project', user_id='fake_user') self.assertEqual(result, dict( instances=dict( limit=10, in_use=2, ), cores=dict( limit=10, in_use=4, ), ram=dict( limit=50 * 1024, in_use=10 * 1024, ), floating_ips=dict( limit=-1, in_use=0, ), fixed_ips=dict( limit=-1, in_use=0, ), metadata_items=dict( limit=128, in_use=0, ), injected_files=dict( limit=2, in_use=0, ), injected_file_content_bytes=dict( limit=10 * 1024, in_use=0, ), injected_file_path_bytes=dict( limit=127, in_use=0, ), security_groups=dict( limit=-1, in_use=0, ), 
security_group_rules=dict( limit=-1, in_use=0, ), key_pairs=dict( limit=100, in_use=2, ), server_groups=dict( limit=10, in_use=0, ), server_group_members=dict( limit=10, in_use=3, ), )) @mock.patch('nova.quota.DbQuotaDriver._get_usages', side_effect=_get_fake_get_usages()) def test_get_project_quotas_alt_context_no_class(self, mock_get_usages): self.maxDiff = None self._stub_get_by_project() ctxt = FakeContext('other_project', None) result = self.driver.get_project_quotas( ctxt, quota.QUOTAS._resources, 'test_project') self.assertEqual(self.calls, [ 'quota_get_all_by_project', 'quota_class_get_default', ]) mock_get_usages.assert_called_once_with(ctxt, quota.QUOTAS._resources, 'test_project') self.assertEqual(result, dict( instances=dict( limit=5, in_use=2, ), cores=dict( limit=10, in_use=4, ), ram=dict( limit=25 * 1024, in_use=10 * 1024, ), floating_ips=dict( limit=-1, in_use=0, ), fixed_ips=dict( limit=-1, in_use=0, ), metadata_items=dict( limit=64, in_use=0, ), injected_files=dict( limit=2, in_use=0, ), injected_file_content_bytes=dict( limit=5 * 1024, in_use=0, ), injected_file_path_bytes=dict( limit=127, in_use=0, ), security_groups=dict( limit=-1, in_use=0, ), security_group_rules=dict( limit=-1, in_use=0, ), key_pairs=dict( limit=100, in_use=2, ), server_groups=dict( limit=10, in_use=0, ), server_group_members=dict( limit=10, in_use=3, ), )) @mock.patch('nova.quota.DbQuotaDriver._get_usages', side_effect=_get_fake_get_usages()) def test_get_user_quotas_alt_context_with_class(self, mock_get_usages): self.maxDiff = None self._stub_get_by_project_and_user() ctxt = FakeContext('other_project', 'other_class') result = self.driver.get_user_quotas( ctxt, quota.QUOTAS._resources, 'test_project', 'fake_user', quota_class='test_class') self.assertEqual(self.calls, [ 'quota_get_all_by_project_and_user', 'quota_get_all_by_project', 'quota_class_get_all_by_name', ]) mock_get_usages.assert_called_once_with(ctxt, quota.QUOTAS._resources, 'test_project', user_id='fake_user') self.assertEqual(result, dict( instances=dict( limit=5, in_use=2, ), cores=dict( limit=10, in_use=4, ), ram=dict( limit=25 * 1024, in_use=10 * 1024, ), floating_ips=dict( limit=-1, in_use=0, ), fixed_ips=dict( limit=-1, in_use=0, ), metadata_items=dict( limit=64, in_use=0, ), injected_files=dict( limit=2, in_use=0, ), injected_file_content_bytes=dict( limit=5 * 1024, in_use=0, ), injected_file_path_bytes=dict( limit=127, in_use=0, ), security_groups=dict( limit=-1, in_use=0, ), security_group_rules=dict( limit=-1, in_use=0, ), key_pairs=dict( limit=100, in_use=2, ), server_groups=dict( limit=10, in_use=0, ), server_group_members=dict( limit=10, in_use=3, ), )) @mock.patch('nova.quota.DbQuotaDriver._get_usages', side_effect=_get_fake_get_usages()) def test_get_project_quotas_alt_context_with_class(self, mock_get_usages): self.maxDiff = None self._stub_get_by_project() ctxt = FakeContext('other_project', 'other_class') result = self.driver.get_project_quotas( ctxt, quota.QUOTAS._resources, 'test_project', quota_class='test_class') self.assertEqual(self.calls, [ 'quota_get_all_by_project', 'quota_class_get_all_by_name', 'quota_class_get_default', ]) mock_get_usages.assert_called_once_with(ctxt, quota.QUOTAS._resources, 'test_project') self.assertEqual(result, dict( instances=dict( limit=5, in_use=2, ), cores=dict( limit=10, in_use=4, ), ram=dict( limit=25 * 1024, in_use=10 * 1024, ), floating_ips=dict( limit=-1, in_use=0, ), fixed_ips=dict( limit=-1, in_use=0, ), metadata_items=dict( limit=64, in_use=0, ), injected_files=dict( 
limit=2, in_use=0, ), injected_file_content_bytes=dict( limit=5 * 1024, in_use=0, ), injected_file_path_bytes=dict( limit=127, in_use=0, ), security_groups=dict( limit=-1, in_use=0, ), security_group_rules=dict( limit=-1, in_use=0, ), key_pairs=dict( limit=100, in_use=2, ), server_groups=dict( limit=10, in_use=0, ), server_group_members=dict( limit=10, in_use=3, ), )) def test_get_user_quotas_no_usages(self): self._stub_get_by_project_and_user() result = self.driver.get_user_quotas( FakeContext('test_project', 'test_class'), quota.QUOTAS._resources, 'test_project', 'fake_user', usages=False) self.assertEqual(self.calls, [ 'quota_get_all_by_project_and_user', 'quota_get_all_by_project', 'quota_class_get_all_by_name', ]) self.assertEqual(result, dict( instances=dict( limit=5, ), cores=dict( limit=10, ), ram=dict( limit=25 * 1024, ), floating_ips=dict( limit=-1, ), fixed_ips=dict( limit=-1, ), metadata_items=dict( limit=64, ), injected_files=dict( limit=2, ), injected_file_content_bytes=dict( limit=5 * 1024, ), injected_file_path_bytes=dict( limit=127, ), security_groups=dict( limit=-1, ), security_group_rules=dict( limit=-1, ), key_pairs=dict( limit=100, ), server_groups=dict( limit=10, ), server_group_members=dict( limit=10, ), )) def test_get_project_quotas_no_usages(self): self._stub_get_by_project() result = self.driver.get_project_quotas( FakeContext('test_project', 'test_class'), quota.QUOTAS._resources, 'test_project', usages=False) self.assertEqual(self.calls, [ 'quota_get_all_by_project', 'quota_class_get_all_by_name', 'quota_class_get_default', ]) self.assertEqual(result, dict( instances=dict( limit=5, ), cores=dict( limit=10, ), ram=dict( limit=25 * 1024, ), floating_ips=dict( limit=-1, ), fixed_ips=dict( limit=-1, ), metadata_items=dict( limit=64, ), injected_files=dict( limit=2, ), injected_file_content_bytes=dict( limit=5 * 1024, ), injected_file_path_bytes=dict( limit=127, ), security_groups=dict( limit=-1, ), security_group_rules=dict( limit=-1, ), key_pairs=dict( limit=100, ), server_groups=dict( limit=10, ), server_group_members=dict( limit=10, ), )) def _stub_get_settable_quotas(self): def fake_quota_get_all_by_project(context, project_id): self.calls.append('quota_get_all_by_project') return {'floating_ips': -1} def fake_get_project_quotas(dbdrv, context, resources, project_id, quota_class=None, usages=True, remains=False, project_quotas=None): self.calls.append('get_project_quotas') result = {} for k, v in resources.items(): limit = v.default if k == 'instances': remains = v.default - 5 in_use = 1 elif k == 'cores': remains = -1 in_use = 5 limit = -1 elif k == 'floating_ips': remains = -1 in_use = 0 limit = -1 else: remains = v.default in_use = 0 result[k] = {'limit': limit, 'in_use': in_use, 'remains': remains} return result def fake_process_quotas_in_get_user_quotas(dbdrv, context, resources, project_id, quotas, quota_class=None, usages=None, remains=False): self.calls.append('_process_quotas') result = {} for k, v in resources.items(): if k == 'instances': in_use = 1 elif k == 'cores': in_use = 15 else: in_use = 0 result[k] = {'limit': v.default, 'in_use': in_use} return result def fake_qgabpau(context, project_id, user_id): self.calls.append('quota_get_all_by_project_and_user') return {'instances': 2, 'cores': -1} self.stub_out('nova.db.api.quota_get_all_by_project', fake_quota_get_all_by_project) self.stub_out('nova.quota.DbQuotaDriver.get_project_quotas', fake_get_project_quotas) self.stub_out('nova.quota.DbQuotaDriver._process_quotas', 
fake_process_quotas_in_get_user_quotas) self.stub_out('nova.db.api.quota_get_all_by_project_and_user', fake_qgabpau) def test_get_settable_quotas_with_user(self): self._stub_get_settable_quotas() result = self.driver.get_settable_quotas( FakeContext('test_project', 'test_class'), quota.QUOTAS._resources, 'test_project', user_id='test_user') self.assertEqual(self.calls, [ 'quota_get_all_by_project', 'get_project_quotas', 'quota_get_all_by_project_and_user', '_process_quotas', ]) self.assertEqual(result, { 'instances': { 'minimum': 1, 'maximum': 7, }, 'cores': { 'minimum': 15, 'maximum': -1, }, 'ram': { 'minimum': 0, 'maximum': 50 * 1024, }, 'floating_ips': { 'minimum': 0, 'maximum': -1, }, 'fixed_ips': { 'minimum': 0, 'maximum': -1, }, 'metadata_items': { 'minimum': 0, 'maximum': 128, }, 'injected_files': { 'minimum': 0, 'maximum': 5, }, 'injected_file_content_bytes': { 'minimum': 0, 'maximum': 10 * 1024, }, 'injected_file_path_bytes': { 'minimum': 0, 'maximum': 255, }, 'security_groups': { 'minimum': 0, 'maximum': -1, }, 'security_group_rules': { 'minimum': 0, 'maximum': -1, }, 'key_pairs': { 'minimum': 0, 'maximum': 100, }, 'server_groups': { 'minimum': 0, 'maximum': 10, }, 'server_group_members': { 'minimum': 0, 'maximum': 10, }, }) def test_get_settable_quotas_without_user(self): self._stub_get_settable_quotas() result = self.driver.get_settable_quotas( FakeContext('test_project', 'test_class'), quota.QUOTAS._resources, 'test_project') self.assertEqual(self.calls, [ 'quota_get_all_by_project', 'get_project_quotas', ]) self.assertEqual(result, { 'instances': { 'minimum': 5, 'maximum': -1, }, 'cores': { 'minimum': 5, 'maximum': -1, }, 'ram': { 'minimum': 0, 'maximum': -1, }, 'floating_ips': { 'minimum': 0, 'maximum': -1, }, 'fixed_ips': { 'minimum': 0, 'maximum': -1, }, 'metadata_items': { 'minimum': 0, 'maximum': -1, }, 'injected_files': { 'minimum': 0, 'maximum': -1, }, 'injected_file_content_bytes': { 'minimum': 0, 'maximum': -1, }, 'injected_file_path_bytes': { 'minimum': 0, 'maximum': -1, }, 'security_groups': { 'minimum': 0, 'maximum': -1, }, 'security_group_rules': { 'minimum': 0, 'maximum': -1, }, 'key_pairs': { 'minimum': 0, 'maximum': -1, }, 'server_groups': { 'minimum': 0, 'maximum': -1, }, 'server_group_members': { 'minimum': 0, 'maximum': -1, }, }) def test_get_settable_quotas_by_user_with_unlimited_value(self): self._stub_get_settable_quotas() result = self.driver.get_settable_quotas( FakeContext('test_project', 'test_class'), quota.QUOTAS._resources, 'test_project', user_id='test_user') self.assertEqual(self.calls, [ 'quota_get_all_by_project', 'get_project_quotas', 'quota_get_all_by_project_and_user', '_process_quotas', ]) self.assertEqual(result, { 'instances': { 'minimum': 1, 'maximum': 7, }, 'cores': { 'minimum': 15, 'maximum': -1, }, 'ram': { 'minimum': 0, 'maximum': 50 * 1024, }, 'floating_ips': { 'minimum': 0, 'maximum': -1, }, 'fixed_ips': { 'minimum': 0, 'maximum': -1, }, 'metadata_items': { 'minimum': 0, 'maximum': 128, }, 'injected_files': { 'minimum': 0, 'maximum': 5, }, 'injected_file_content_bytes': { 'minimum': 0, 'maximum': 10 * 1024, }, 'injected_file_path_bytes': { 'minimum': 0, 'maximum': 255, }, 'security_groups': { 'minimum': 0, 'maximum': -1, }, 'security_group_rules': { 'minimum': 0, 'maximum': -1, }, 'key_pairs': { 'minimum': 0, 'maximum': 100, }, 'server_groups': { 'minimum': 0, 'maximum': 10, }, 'server_group_members': { 'minimum': 0, 'maximum': 10, }, }) def _stub_get_project_quotas(self): def fake_get_project_quotas(dbdrv, context, resources, 
project_id, quota_class=None, usages=True, remains=False, project_quotas=None): self.calls.append('get_project_quotas') return {k: dict(limit=v.default) for k, v in resources.items()} self.stub_out('nova.quota.DbQuotaDriver.get_project_quotas', fake_get_project_quotas) def test_get_quotas_unknown(self): self._stub_get_project_quotas() self.assertRaises(exception.QuotaResourceUnknown, self.driver._get_quotas, None, quota.QUOTAS._resources, ['unknown']) self.assertEqual(self.calls, []) def test_limit_check_under(self): self._stub_get_project_quotas() self.assertRaises(exception.InvalidQuotaValue, self.driver.limit_check, FakeContext('test_project', 'test_class'), quota.QUOTAS._resources, dict(metadata_items=-1)) def test_limit_check_over(self): self._stub_get_project_quotas() self.assertRaises(exception.OverQuota, self.driver.limit_check, FakeContext('test_project', 'test_class'), quota.QUOTAS._resources, dict(metadata_items=129)) def test_limit_check_project_overs(self): self._stub_get_project_quotas() self.assertRaises(exception.OverQuota, self.driver.limit_check, FakeContext('test_project', 'test_class'), quota.QUOTAS._resources, dict(injected_file_content_bytes=10241, injected_file_path_bytes=256)) def test_limit_check_unlimited(self): self.flags(metadata_items=-1, group='quota') self._stub_get_project_quotas() self.driver.limit_check(FakeContext('test_project', 'test_class'), quota.QUOTAS._resources, dict(metadata_items=32767)) def test_limit_check(self): self._stub_get_project_quotas() self.driver.limit_check(FakeContext('test_project', 'test_class'), quota.QUOTAS._resources, dict(metadata_items=128)) def test_limit_check_project_and_user_no_values(self): self.assertRaises(exception.Invalid, self.driver.limit_check_project_and_user, FakeContext('test_project', 'test_class'), quota.QUOTAS._resources) def test_limit_check_project_and_user_under(self): self._stub_get_project_quotas() ctxt = FakeContext('test_project', 'test_class') resources = self._get_fake_countable_resources() # Check: only project_values, only user_values, and then both. kwargs = [{'project_values': {'fixed_ips': -1}}, {'user_values': {'key_pairs': -1}}, {'project_values': {'instances': -1}, 'user_values': {'instances': -1}}] for kwarg in kwargs: self.assertRaises(exception.InvalidQuotaValue, self.driver.limit_check_project_and_user, ctxt, resources, **kwarg) def test_limit_check_project_and_user_over_project(self): # Check the case where user_values pass user quota but project_values # exceed project quota. self.flags(instances=5, group='quota') self._stub_get_project_quotas() resources = self._get_fake_countable_resources() self.assertRaises(exception.OverQuota, self.driver.limit_check_project_and_user, FakeContext('test_project', 'test_class'), resources, project_values=dict(instances=6), user_values=dict(instances=5)) def test_limit_check_project_and_user_over_user(self): self.flags(instances=5, group='quota') self._stub_get_project_quotas() resources = self._get_fake_countable_resources() # It's not realistic for user_values to be higher than project_values, # but this is just for testing the fictional case where project_values # pass project quota but user_values exceed user quota. 
self.assertRaises(exception.OverQuota, self.driver.limit_check_project_and_user, FakeContext('test_project', 'test_class'), resources, project_values=dict(instances=5), user_values=dict(instances=6)) def test_limit_check_project_and_user_overs(self): self._stub_get_project_quotas() ctxt = FakeContext('test_project', 'test_class') resources = self._get_fake_countable_resources() # Check: only project_values, only user_values, and then both. kwargs = [{'project_values': {'instances': 512}}, {'user_values': {'key_pairs': 256}}, {'project_values': {'instances': 512}, 'user_values': {'instances': 256}}] for kwarg in kwargs: self.assertRaises(exception.OverQuota, self.driver.limit_check_project_and_user, ctxt, resources, **kwarg) def test_limit_check_project_and_user_unlimited(self): self.flags(key_pairs=-1, group='quota') self.flags(instances=-1, group='quota') self._stub_get_project_quotas() ctxt = FakeContext('test_project', 'test_class') resources = self._get_fake_countable_resources() # Check: only project_values, only user_values, and then both. kwargs = [{'project_values': {'fixed_ips': 32767}}, {'user_values': {'key_pairs': 32767}}, {'project_values': {'instances': 32767}, 'user_values': {'instances': 32767}}] for kwarg in kwargs: self.driver.limit_check_project_and_user(ctxt, resources, **kwarg) def test_limit_check_project_and_user(self): self._stub_get_project_quotas() ctxt = FakeContext('test_project', 'test_class') resources = self._get_fake_countable_resources() # Check: only project_values, only user_values, and then both. kwargs = [{'project_values': {'fixed_ips': 5}}, {'user_values': {'key_pairs': 5}}, {'project_values': {'instances': 5}, 'user_values': {'instances': 5}}] for kwarg in kwargs: self.driver.limit_check_project_and_user(ctxt, resources, **kwarg) def test_limit_check_project_and_user_zero_values(self): """Tests to make sure that we don't compare 0 to None and fail with a TypeError in python 3 when calculating merged_values between project_values and user_values. """ self._stub_get_project_quotas() ctxt = FakeContext('test_project', 'test_class') resources = self._get_fake_countable_resources() # Check: only project_values, only user_values, and then both. 
kwargs = [{'project_values': {'fixed_ips': 0}}, {'user_values': {'key_pairs': 0}}, {'project_values': {'instances': 0}, 'user_values': {'instances': 0}}] for kwarg in kwargs: self.driver.limit_check_project_and_user(ctxt, resources, **kwarg) class NoopQuotaDriverTestCase(test.TestCase): def setUp(self): super(NoopQuotaDriverTestCase, self).setUp() self.flags(instances=10, cores=20, ram=50 * 1024, metadata_items=128, injected_files=5, injected_file_content_bytes=10 * 1024, injected_file_path_length=255, group='quota' ) self.expected_with_usages = {} self.expected_without_usages = {} self.expected_without_dict = {} self.expected_settable_quotas = {} for r in quota.QUOTAS._resources: self.expected_with_usages[r] = dict(limit=-1, in_use=-1) self.expected_without_usages[r] = dict(limit=-1) self.expected_without_dict[r] = -1 self.expected_settable_quotas[r] = dict(minimum=0, maximum=-1) self.driver = quota.NoopQuotaDriver() def test_get_defaults(self): # Use our pre-defined resources result = self.driver.get_defaults(None, quota.QUOTAS._resources) self.assertEqual(self.expected_without_dict, result) def test_get_class_quotas(self): result = self.driver.get_class_quotas(None, quota.QUOTAS._resources, 'test_class') self.assertEqual(self.expected_without_dict, result) def test_get_project_quotas(self): result = self.driver.get_project_quotas(None, quota.QUOTAS._resources, 'test_project') self.assertEqual(self.expected_with_usages, result) def test_get_user_quotas(self): result = self.driver.get_user_quotas(None, quota.QUOTAS._resources, 'test_project', 'fake_user') self.assertEqual(self.expected_with_usages, result) def test_get_project_quotas_no_usages(self): result = self.driver.get_project_quotas(None, quota.QUOTAS._resources, 'test_project', usages=False) self.assertEqual(self.expected_without_usages, result) def test_get_user_quotas_no_usages(self): result = self.driver.get_user_quotas(None, quota.QUOTAS._resources, 'test_project', 'fake_user', usages=False) self.assertEqual(self.expected_without_usages, result) def test_get_settable_quotas_with_user(self): result = self.driver.get_settable_quotas(None, quota.QUOTAS._resources, 'test_project', 'fake_user') self.assertEqual(self.expected_settable_quotas, result) def test_get_settable_quotas_without_user(self): result = self.driver.get_settable_quotas(None, quota.QUOTAS._resources, 'test_project') self.assertEqual(self.expected_settable_quotas, result) @ddt.ddt class QuotaCountTestCase(test.NoDBTestCase): @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_usages_counts_for_quota') def test_cores_ram_count_placement(self, mock_get_usages): usages = quota._cores_ram_count_placement( mock.sentinel.context, mock.sentinel.project_id, user_id=mock.sentinel.user_id) mock_get_usages.assert_called_once_with( mock.sentinel.context, mock.sentinel.project_id, user_id=mock.sentinel.user_id) self.assertEqual(mock_get_usages.return_value, usages) @mock.patch('nova.objects.InstanceMappingList.get_counts') @mock.patch('nova.quota._cores_ram_count_placement') def test_instances_cores_ram_count_api_db_placement( self, mock_placement_count, mock_get_im_count): # Fake response from placement with project and user usages of cores # and ram. mock_placement_count.return_value = {'project': {'cores': 2, 'ram': 4}, 'user': {'cores': 1, 'ram': 2}} # Fake count of instances based on instance mappings in the API DB. 
mock_get_im_count.return_value = {'project': {'instances': 2}, 'user': {'instances': 1}} counts = quota._instances_cores_ram_count_api_db_placement( mock.sentinel.context, mock.sentinel.project_id, user_id=mock.sentinel.user_id) mock_get_im_count.assert_called_once_with( mock.sentinel.context, mock.sentinel.project_id, user_id=mock.sentinel.user_id) mock_placement_count.assert_called_once_with( mock.sentinel.context, mock.sentinel.project_id, user_id=mock.sentinel.user_id) expected = {'project': {'instances': 2, 'cores': 2, 'ram': 4}, 'user': {'instances': 1, 'cores': 1, 'ram': 2}} self.assertDictEqual(expected, counts) @ddt.data((True, True), (True, False), (False, True), (False, False)) @ddt.unpack @mock.patch('nova.quota.LOG.warning') @mock.patch('nova.quota._user_id_queued_for_delete_populated') @mock.patch('nova.quota._instances_cores_ram_count_legacy') @mock.patch('nova.quota._instances_cores_ram_count_api_db_placement') def test_instances_cores_ram_count(self, quota_from_placement, uid_qfd_populated, mock_api_db_placement_count, mock_legacy_count, mock_uid_qfd_populated, mock_warn_log): # Check that all the combinations of # [quota]count_usage_from_placement (True/False) and # user_id_queued_for_delete_populated (True/False) do the right things. # Fake count of instances, cores, and ram. expected = {'project': {'instances': 2, 'cores': 2, 'ram': 4}, 'user': {'instances': 1, 'cores': 1, 'ram': 2}} mock_api_db_placement_count.return_value = expected mock_legacy_count.return_value = expected # user_id and queued_for_delete populated/migrated (True/False) mock_uid_qfd_populated.return_value = uid_qfd_populated # Counting quota usage from placement enabled (True/False) self.flags(count_usage_from_placement=quota_from_placement, group='quota') counts = quota._instances_cores_ram_count( mock.sentinel.context, mock.sentinel.project_id, user_id=mock.sentinel.user_id) if quota_from_placement and uid_qfd_populated: # If we are counting quota usage from placement and user_id and # queued_for_delete data has all been migrated, we should count # instances from the API DB using instance mappings and count # cores and ram from placement. mock_api_db_placement_count.assert_called_once_with( mock.sentinel.context, mock.sentinel.project_id, user_id=mock.sentinel.user_id) # We should not have called the legacy counting method. mock_legacy_count.assert_not_called() # We should not have logged a warn message saying we were falling # back to the legacy counting method. mock_warn_log.assert_not_called() else: # If counting quota usage from placement is not enabled or if # user_id or queued_for_delete data has not all been migrated yet, # we should use the legacy counting method. mock_legacy_count.assert_called_once_with( mock.sentinel.context, mock.sentinel.project_id, user_id=mock.sentinel.user_id) # We should have logged a warn message saying we were falling back # to the legacy counting method. if quota_from_placement: # We only log the message if someone has opted in to counting # from placement. mock_warn_log.assert_called_once() else: mock_warn_log.assert_not_called() # We should not have called the API DB and placement counting # method. 
mock_api_db_placement_count.assert_not_called() self.assertDictEqual(expected, counts) @mock.patch('nova.quota._user_id_queued_for_delete_populated') @mock.patch('nova.quota._instances_cores_ram_count_legacy') @mock.patch('nova.quota._instances_cores_ram_count_api_db_placement') def test_user_id_queued_for_delete_populated_cache_by_project( self, mock_api_db_placement_count, mock_legacy_count, mock_uid_qfd_populated): # We need quota usage from placement enabled to test this. For legacy # counting, the cache is not used. self.flags(count_usage_from_placement=True, group='quota') # Fake count of instances, cores, and ram. fake_counts = {'project': {'instances': 2, 'cores': 2, 'ram': 4}, 'user': {'instances': 1, 'cores': 1, 'ram': 2}} mock_api_db_placement_count.return_value = fake_counts mock_legacy_count.return_value = fake_counts # First, check the case where user_id and queued_for_delete are found # not to be migrated. mock_uid_qfd_populated.return_value = False quota._instances_cores_ram_count(mock.sentinel.context, mock.sentinel.project_id, user_id=mock.sentinel.user_id) mock_uid_qfd_populated.assert_called_once() # The second call should check for unmigrated records again, since the # project was found not to be completely migrated last time. quota._instances_cores_ram_count(mock.sentinel.context, mock.sentinel.project_id, user_id=mock.sentinel.user_id) self.assertEqual(2, mock_uid_qfd_populated.call_count) # Now check the case where the data migration was found to be complete. mock_uid_qfd_populated.reset_mock() mock_uid_qfd_populated.return_value = True # The first call will check whether there are any unmigrated records. quota._instances_cores_ram_count(mock.sentinel.context, mock.sentinel.project_id, user_id=mock.sentinel.user_id) mock_uid_qfd_populated.assert_called_once() # Second call should skip the check for user_id and queued_for_delete # migrated because the result was cached. mock_uid_qfd_populated.reset_mock() quota._instances_cores_ram_count(mock.sentinel.context, mock.sentinel.project_id, user_id=mock.sentinel.user_id) mock_uid_qfd_populated.assert_not_called() @mock.patch('nova.quota._user_id_queued_for_delete_populated') @mock.patch('nova.quota._server_group_count_members_by_user_legacy') @mock.patch('nova.objects.InstanceMappingList.get_count_by_uuids_and_user') @mock.patch('nova.quota._instances_cores_ram_count_legacy') @mock.patch('nova.quota._instances_cores_ram_count_api_db_placement') def test_user_id_queued_for_delete_populated_cache_all( self, mock_api_db_placement_count, mock_legacy_icr_count, mock_api_db_sgm_count, mock_legacy_sgm_count, mock_uid_qfd_populated): # Check the case where the data migration was found to be complete by a # server group members count not scoped to a project. mock_uid_qfd_populated.return_value = True # Server group members call will check whether there are any unmigrated # records. fake_group = mock.Mock() quota._server_group_count_members_by_user(mock.sentinel.context, fake_group, mock.sentinel.user_id) mock_uid_qfd_populated.assert_called_once() # Second server group members call should skip the check for user_id # and queued_for_delete migrated because the result was cached. 
mock_uid_qfd_populated.reset_mock() quota._server_group_count_members_by_user(mock.sentinel.context, fake_group, mock.sentinel.user_id) mock_uid_qfd_populated.assert_not_called() # A call to count instances, cores, and ram should skip the check for # user_id and queued_for_delete migrated because the result was cached # during the call to count server group members. mock_uid_qfd_populated.reset_mock() quota._instances_cores_ram_count(mock.sentinel.context, mock.sentinel.project_id) mock_uid_qfd_populated.assert_not_called() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_rpc.py0000664000175000017500000004723600000000000020063 0ustar00zuulzuul00000000000000# Copyright 2016 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import oslo_messaging as messaging from oslo_messaging.rpc import dispatcher from oslo_serialization import jsonutils import six import nova.conf from nova import context from nova import rpc from nova import test CONF = nova.conf.CONF class TestRPC(test.NoDBTestCase): # We're testing the rpc code so we can't use the RPCFixture. STUB_RPC = False @mock.patch.object(rpc, 'TRANSPORT') @mock.patch.object(rpc, 'NOTIFICATION_TRANSPORT') @mock.patch.object(rpc, 'LEGACY_NOTIFIER') @mock.patch.object(rpc, 'NOTIFIER') @mock.patch.object(rpc, 'get_allowed_exmods') @mock.patch.object(rpc, 'RequestContextSerializer') @mock.patch.object(messaging, 'get_notification_transport') @mock.patch.object(messaging, 'Notifier') def _test_init(self, notification_format, expected_driver_topic_kwargs, mock_notif, mock_noti_trans, mock_ser, mock_exmods, mock_NOTIFIER, mock_LEGACY_NOTIFIER, mock_NOTIFICATION_TRANSPORT, mock_TRANSPORT, versioned_notifications_topics=None): if not versioned_notifications_topics: versioned_notifications_topics = ['versioned_notifications'] self.flags( notification_format=notification_format, versioned_notifications_topics=versioned_notifications_topics, group='notifications') legacy_notifier = mock.Mock() notifier = mock.Mock() notif_transport = mock.Mock() transport = mock.Mock() serializer = mock.Mock() mock_exmods.return_value = ['foo'] mock_noti_trans.return_value = notif_transport mock_ser.return_value = serializer mock_notif.side_effect = [legacy_notifier, notifier] with mock.patch.object(rpc, 'create_transport') as create_transport, \ mock.patch.object(rpc, 'get_transport_url') as get_url: create_transport.return_value = transport rpc.init(CONF) create_transport.assert_called_once_with(get_url.return_value) self.assertTrue(mock_exmods.called) self.assertIsNotNone(mock_TRANSPORT) self.assertIsNotNone(mock_LEGACY_NOTIFIER) self.assertIsNotNone(mock_NOTIFIER) self.assertEqual(legacy_notifier, rpc.LEGACY_NOTIFIER) self.assertEqual(notifier, rpc.NOTIFIER) expected_calls = [] for kwargs in expected_driver_topic_kwargs: expected_kwargs = {'serializer': serializer} expected_kwargs.update(kwargs) expected_calls.append(((notif_transport,), expected_kwargs)) 
self.assertEqual(expected_calls, mock_notif.call_args_list, "The calls to messaging.Notifier() did not create " "the legacy and versioned notifiers properly.") def test_init_unversioned(self): # The expected call to get the legacy notifier will require no new # kwargs, and we expect the new notifier will need the noop driver expected_driver_topic_kwargs = [{}, {'driver': 'noop'}] self._test_init('unversioned', expected_driver_topic_kwargs) def test_init_both(self): expected_driver_topic_kwargs = [ {}, {'topics': ['versioned_notifications']}] self._test_init('both', expected_driver_topic_kwargs) def test_init_versioned(self): expected_driver_topic_kwargs = [ {'driver': 'noop'}, {'topics': ['versioned_notifications']}] self._test_init('versioned', expected_driver_topic_kwargs) def test_init_versioned_with_custom_topics(self): expected_driver_topic_kwargs = [ {'driver': 'noop'}, {'topics': ['custom_topic1', 'custom_topic2']}] versioned_notifications_topics = ['custom_topic1', 'custom_topic2'] self._test_init('versioned', expected_driver_topic_kwargs, versioned_notifications_topics=versioned_notifications_topics) @mock.patch.object(rpc, 'NOTIFICATION_TRANSPORT', new=mock.Mock()) @mock.patch.object(rpc, 'LEGACY_NOTIFIER', new=mock.Mock()) @mock.patch.object(rpc, 'NOTIFIER', new=mock.Mock()) def test_cleanup_transport_null(self): """Ensure cleanup fails if 'rpc.TRANSPORT' wasn't set.""" self.assertRaises(AssertionError, rpc.cleanup) @mock.patch.object(rpc, 'TRANSPORT', new=mock.Mock()) @mock.patch.object(rpc, 'LEGACY_NOTIFIER', new=mock.Mock()) @mock.patch.object(rpc, 'NOTIFIER', new=mock.Mock()) def test_cleanup_notification_transport_null(self): """Ensure cleanup fails if 'rpc.NOTIFICATION_TRANSPORT' wasn't set.""" self.assertRaises(AssertionError, rpc.cleanup) @mock.patch.object(rpc, 'TRANSPORT', new=mock.Mock()) @mock.patch.object(rpc, 'NOTIFICATION_TRANSPORT', new=mock.Mock()) @mock.patch.object(rpc, 'NOTIFIER', new=mock.Mock()) def test_cleanup_legacy_notifier_null(self): """Ensure cleanup fails if 'rpc.LEGACY_NOTIFIER' wasn't set.""" self.assertRaises(AssertionError, rpc.cleanup) @mock.patch.object(rpc, 'TRANSPORT', new=mock.Mock()) @mock.patch.object(rpc, 'NOTIFICATION_TRANSPORT', new=mock.Mock()) @mock.patch.object(rpc, 'LEGACY_NOTIFIER', new=mock.Mock()) def test_cleanup_notifier_null(self): """Ensure cleanup fails if 'rpc.NOTIFIER' wasn't set.""" self.assertRaises(AssertionError, rpc.cleanup) @mock.patch.object(rpc, 'TRANSPORT') @mock.patch.object(rpc, 'NOTIFICATION_TRANSPORT') @mock.patch.object(rpc, 'LEGACY_NOTIFIER') @mock.patch.object(rpc, 'NOTIFIER') def test_cleanup(self, mock_NOTIFIER, mock_LEGACY_NOTIFIER, mock_NOTIFICATION_TRANSPORT, mock_TRANSPORT): rpc.cleanup() mock_TRANSPORT.cleanup.assert_called_once_with() mock_NOTIFICATION_TRANSPORT.cleanup.assert_called_once_with() self.assertIsNone(rpc.TRANSPORT) self.assertIsNone(rpc.NOTIFICATION_TRANSPORT) self.assertIsNone(rpc.LEGACY_NOTIFIER) self.assertIsNone(rpc.NOTIFIER) @mock.patch.object(messaging, 'set_transport_defaults') def test_set_defaults(self, mock_set): control_exchange = mock.Mock() rpc.set_defaults(control_exchange) mock_set.assert_called_once_with(control_exchange) def test_add_extra_exmods(self): extra_exmods = [] with mock.patch.object( rpc, 'EXTRA_EXMODS', extra_exmods) as mock_EXTRA_EXMODS: rpc.add_extra_exmods('foo', 'bar') self.assertEqual(['foo', 'bar'], mock_EXTRA_EXMODS) def test_clear_extra_exmods(self): extra_exmods = ['foo', 'bar'] with mock.patch.object( rpc, 'EXTRA_EXMODS', extra_exmods) as 
mock_EXTRA_EXMODS: rpc.clear_extra_exmods() self.assertEqual([], mock_EXTRA_EXMODS) def test_get_allowed_exmods(self): allowed_exmods = ['foo'] extra_exmods = ['bar'] with test.nested( mock.patch.object(rpc, 'EXTRA_EXMODS', extra_exmods), mock.patch.object(rpc, 'ALLOWED_EXMODS', allowed_exmods) ) as (mock_EXTRA_EXMODS, mock_ALLOWED_EXMODS): exmods = rpc.get_allowed_exmods() self.assertEqual(['foo', 'bar'], exmods) @mock.patch.object(messaging, 'TransportURL') def test_get_transport_url(self, mock_url): mock_url.parse.return_value = 'foo' url = rpc.get_transport_url(url_str='bar') self.assertEqual('foo', url) mock_url.parse.assert_called_once_with(rpc.CONF, 'bar') @mock.patch.object(messaging, 'TransportURL') def test_get_transport_url_null(self, mock_url): mock_url.parse.return_value = 'foo' url = rpc.get_transport_url() self.assertEqual('foo', url) mock_url.parse.assert_called_once_with(rpc.CONF, None) @mock.patch.object(rpc, 'TRANSPORT') @mock.patch.object(rpc, 'profiler', None) @mock.patch.object(rpc, 'RequestContextSerializer') @mock.patch.object(messaging, 'RPCClient') def test_get_client(self, mock_client, mock_ser, mock_TRANSPORT): tgt = mock.Mock() ser = mock.Mock() mock_client.return_value = 'client' mock_ser.return_value = ser client = rpc.get_client(tgt, version_cap='1.0', serializer='foo') mock_ser.assert_called_once_with('foo') mock_client.assert_called_once_with(mock_TRANSPORT, tgt, version_cap='1.0', call_monitor_timeout=None, serializer=ser) self.assertEqual('client', client) @mock.patch.object(rpc, 'TRANSPORT') @mock.patch.object(rpc, 'profiler', None) @mock.patch.object(rpc, 'RequestContextSerializer') @mock.patch.object(messaging, 'get_rpc_server') def test_get_server(self, mock_get, mock_ser, mock_TRANSPORT): ser = mock.Mock() tgt = mock.Mock() ends = mock.Mock() mock_ser.return_value = ser mock_get.return_value = 'server' server = rpc.get_server(tgt, ends, serializer='foo') mock_ser.assert_called_once_with('foo') access_policy = dispatcher.DefaultRPCAccessPolicy mock_get.assert_called_once_with(mock_TRANSPORT, tgt, ends, executor='eventlet', serializer=ser, access_policy=access_policy) self.assertEqual('server', server) @mock.patch.object(rpc, 'TRANSPORT') @mock.patch.object(rpc, 'profiler', mock.Mock()) @mock.patch.object(rpc, 'ProfilerRequestContextSerializer') @mock.patch.object(messaging, 'RPCClient') def test_get_client_profiler_enabled(self, mock_client, mock_ser, mock_TRANSPORT): tgt = mock.Mock() ser = mock.Mock() mock_client.return_value = 'client' mock_ser.return_value = ser client = rpc.get_client(tgt, version_cap='1.0', serializer='foo') mock_ser.assert_called_once_with('foo') mock_client.assert_called_once_with(mock_TRANSPORT, tgt, version_cap='1.0', call_monitor_timeout=None, serializer=ser) self.assertEqual('client', client) @mock.patch.object(rpc, 'TRANSPORT') @mock.patch.object(rpc, 'profiler', mock.Mock()) @mock.patch.object(rpc, 'profiler', mock.Mock()) @mock.patch.object(rpc, 'ProfilerRequestContextSerializer') @mock.patch.object(messaging, 'get_rpc_server') def test_get_server_profiler_enabled(self, mock_get, mock_ser, mock_TRANSPORT): ser = mock.Mock() tgt = mock.Mock() ends = mock.Mock() mock_ser.return_value = ser mock_get.return_value = 'server' server = rpc.get_server(tgt, ends, serializer='foo') mock_ser.assert_called_once_with('foo') access_policy = dispatcher.DefaultRPCAccessPolicy mock_get.assert_called_once_with(mock_TRANSPORT, tgt, ends, executor='eventlet', serializer=ser, access_policy=access_policy) self.assertEqual('server', server) 
@mock.patch.object(rpc, 'LEGACY_NOTIFIER') def test_get_notifier(self, mock_LEGACY_NOTIFIER): mock_prep = mock.Mock() mock_prep.return_value = 'notifier' mock_LEGACY_NOTIFIER.prepare = mock_prep notifier = rpc.get_notifier('service', publisher_id='foo') mock_prep.assert_called_once_with(publisher_id='foo') self.assertIsInstance(notifier, rpc.LegacyValidatingNotifier) self.assertEqual('notifier', notifier.notifier) @mock.patch.object(rpc, 'LEGACY_NOTIFIER') def test_get_notifier_null_publisher(self, mock_LEGACY_NOTIFIER): mock_prep = mock.Mock() mock_prep.return_value = 'notifier' mock_LEGACY_NOTIFIER.prepare = mock_prep notifier = rpc.get_notifier('service', host='bar') mock_prep.assert_called_once_with(publisher_id='service.bar') self.assertIsInstance(notifier, rpc.LegacyValidatingNotifier) self.assertEqual('notifier', notifier.notifier) @mock.patch.object(rpc, 'NOTIFIER') def test_get_versioned_notifier(self, mock_NOTIFIER): mock_prep = mock.Mock() mock_prep.return_value = 'notifier' mock_NOTIFIER.prepare = mock_prep notifier = rpc.get_versioned_notifier('service.foo') mock_prep.assert_called_once_with(publisher_id='service.foo') self.assertEqual('notifier', notifier) @mock.patch.object(rpc, 'get_allowed_exmods') @mock.patch.object(messaging, 'get_rpc_transport') def test_create_transport(self, mock_transport, mock_exmods): exmods = mock_exmods.return_value transport = rpc.create_transport(mock.sentinel.url) self.assertEqual(mock_transport.return_value, transport) mock_exmods.assert_called_once_with() mock_transport.assert_called_once_with(rpc.CONF, url=mock.sentinel.url, allowed_remote_exmods=exmods) class TestJsonPayloadSerializer(test.NoDBTestCase): def test_serialize_entity(self): serializer = rpc.JsonPayloadSerializer() with mock.patch.object(jsonutils, 'to_primitive') as mock_prim: serializer.serialize_entity('context', 'entity') mock_prim.assert_called_once_with('entity', convert_instances=True, fallback=serializer.fallback) def test_fallback(self): # Convert RequestContext, should get a dict. primitive = rpc.JsonPayloadSerializer.fallback(context.get_context()) self.assertIsInstance(primitive, dict) # Convert anything else, should get a string. 
primitive = rpc.JsonPayloadSerializer.fallback(mock.sentinel.entity) self.assertIsInstance(primitive, six.text_type) class TestRequestContextSerializer(test.NoDBTestCase): def setUp(self): super(TestRequestContextSerializer, self).setUp() self.mock_base = mock.Mock() self.ser = rpc.RequestContextSerializer(self.mock_base) self.ser_null = rpc.RequestContextSerializer(None) def test_serialize_entity(self): self.mock_base.serialize_entity.return_value = 'foo' ser_ent = self.ser.serialize_entity('context', 'entity') self.mock_base.serialize_entity.assert_called_once_with('context', 'entity') self.assertEqual('foo', ser_ent) def test_serialize_entity_null_base(self): ser_ent = self.ser_null.serialize_entity('context', 'entity') self.assertEqual('entity', ser_ent) def test_deserialize_entity(self): self.mock_base.deserialize_entity.return_value = 'foo' deser_ent = self.ser.deserialize_entity('context', 'entity') self.mock_base.deserialize_entity.assert_called_once_with('context', 'entity') self.assertEqual('foo', deser_ent) def test_deserialize_entity_null_base(self): deser_ent = self.ser_null.deserialize_entity('context', 'entity') self.assertEqual('entity', deser_ent) def test_serialize_context(self): context = mock.Mock() self.ser.serialize_context(context) context.to_dict.assert_called_once_with() @mock.patch.object(context, 'RequestContext') def test_deserialize_context(self, mock_req): self.ser.deserialize_context('context') mock_req.from_dict.assert_called_once_with('context') class TestProfilerRequestContextSerializer(test.NoDBTestCase): def setUp(self): super(TestProfilerRequestContextSerializer, self).setUp() self.ser = rpc.ProfilerRequestContextSerializer(mock.Mock()) @mock.patch('nova.rpc.profiler') def test_serialize_context(self, mock_profiler): prof = mock_profiler.get.return_value prof.hmac_key = 'swordfish' prof.get_base_id.return_value = 'baseid' prof.get_id.return_value = 'parentid' context = mock.Mock() context.to_dict.return_value = {'project_id': 'test'} self.assertEqual({'project_id': 'test', 'trace_info': { 'hmac_key': 'swordfish', 'base_id': 'baseid', 'parent_id': 'parentid'}}, self.ser.serialize_context(context)) @mock.patch('nova.rpc.profiler') def test_deserialize_context(self, mock_profiler): serialized = {'project_id': 'test', 'trace_info': { 'hmac_key': 'swordfish', 'base_id': 'baseid', 'parent_id': 'parentid'}} context = self.ser.deserialize_context(serialized) self.assertEqual('test', context.project_id) mock_profiler.init.assert_called_once_with( hmac_key='swordfish', base_id='baseid', parent_id='parentid') class TestClientRouter(test.NoDBTestCase): @mock.patch('oslo_messaging.RPCClient') def test_by_instance(self, mock_rpcclient): default_client = mock.Mock() cell_client = mock.Mock() mock_rpcclient.return_value = cell_client ctxt = mock.Mock() ctxt.mq_connection = mock.sentinel.transport router = rpc.ClientRouter(default_client) client = router.client(ctxt) # verify a client was created by ClientRouter mock_rpcclient.assert_called_once_with( mock.sentinel.transport, default_client.target, version_cap=default_client.version_cap, call_monitor_timeout=default_client.call_monitor_timeout, serializer=default_client.serializer) # verify cell client was returned self.assertEqual(cell_client, client) @mock.patch('oslo_messaging.RPCClient') def test_by_instance_untargeted(self, mock_rpcclient): default_client = mock.Mock() cell_client = mock.Mock() mock_rpcclient.return_value = cell_client ctxt = mock.Mock() ctxt.mq_connection = None router = 
rpc.ClientRouter(default_client) client = router.client(ctxt) self.assertEqual(router.default_client, client) self.assertFalse(mock_rpcclient.called) class TestIsNotificationsEnabledDecorator(test.NoDBTestCase): def setUp(self): super(TestIsNotificationsEnabledDecorator, self).setUp() self.f = mock.Mock() self.f.__name__ = 'f' self.decorated = rpc.if_notifications_enabled(self.f) def test_call_func_if_needed(self): self.decorated() self.f.assert_called_once_with() @mock.patch('nova.rpc.NOTIFIER.is_enabled', return_value=False) def test_not_call_func_if_notifier_disabled(self, mock_is_enabled): self.decorated() self.assertEqual(0, len(self.f.mock_calls)) def test_not_call_func_if_only_unversioned_notifications_requested(self): self.flags(notification_format='unversioned', group='notifications') self.decorated() self.assertEqual(0, len(self.f.mock_calls)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/test_safeutils.py0000664000175000017500000000602500000000000021265 0ustar00zuulzuul00000000000000# Copyright 2011 Justin Santa Barbara # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import functools from nova import safe_utils from nova import test def get_closure(): x = 1 def wrapper(self, instance, red=None, blue=None): return x return wrapper class WrappedCodeTestCase(test.NoDBTestCase): """Test the get_wrapped_function utility method.""" def _wrapper(self, function): @functools.wraps(function) def decorated_function(self, *args, **kwargs): function(self, *args, **kwargs) return decorated_function def test_single_wrapped(self): @self._wrapper def wrapped(self, instance, red=None, blue=None): pass func = safe_utils.get_wrapped_function(wrapped) func_code = func.__code__ self.assertEqual(4, len(func_code.co_varnames)) self.assertIn('self', func_code.co_varnames) self.assertIn('instance', func_code.co_varnames) self.assertIn('red', func_code.co_varnames) self.assertIn('blue', func_code.co_varnames) def test_double_wrapped(self): @self._wrapper @self._wrapper def wrapped(self, instance, red=None, blue=None): pass func = safe_utils.get_wrapped_function(wrapped) func_code = func.__code__ self.assertEqual(4, len(func_code.co_varnames)) self.assertIn('self', func_code.co_varnames) self.assertIn('instance', func_code.co_varnames) self.assertIn('red', func_code.co_varnames) self.assertIn('blue', func_code.co_varnames) def test_triple_wrapped(self): @self._wrapper @self._wrapper @self._wrapper def wrapped(self, instance, red=None, blue=None): pass func = safe_utils.get_wrapped_function(wrapped) func_code = func.__code__ self.assertEqual(4, len(func_code.co_varnames)) self.assertIn('self', func_code.co_varnames) self.assertIn('instance', func_code.co_varnames) self.assertIn('red', func_code.co_varnames) self.assertIn('blue', func_code.co_varnames) def test_closure(self): closure = get_closure() func = safe_utils.get_wrapped_function(closure) func_code = func.__code__ self.assertEqual(4, len(func_code.co_varnames)) 
self.assertIn('self', func_code.co_varnames) self.assertIn('instance', func_code.co_varnames) self.assertIn('red', func_code.co_varnames) self.assertIn('blue', func_code.co_varnames) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_service.py0000664000175000017500000003712100000000000020727 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Unit Tests for remote procedure calls using queue """ import mock from oslo_concurrency import processutils from oslo_config import cfg from oslo_service import service as _service import testtools from nova import exception from nova import manager from nova import objects from nova.objects import base as obj_base from nova import rpc from nova import service from nova import test from nova.tests.unit import utils test_service_opts = [ cfg.HostAddressOpt("test_service_listen", default='127.0.0.1', help="Host to bind test service to"), cfg.IntOpt("test_service_listen_port", default=0, help="Port number to bind test service to"), ] CONF = cfg.CONF CONF.register_opts(test_service_opts) class FakeManager(manager.Manager): """Fake manager for tests.""" def test_method(self): return 'manager' class ExtendedService(service.Service): def test_method(self): return 'service' class ServiceManagerTestCase(test.NoDBTestCase): """Test cases for Services.""" def test_message_gets_to_manager(self): serv = service.Service('test', 'test', 'test', 'nova.tests.unit.test_service.FakeManager') self.assertEqual('manager', serv.test_method()) def test_override_manager_method(self): serv = ExtendedService('test', 'test', 'test', 'nova.tests.unit.test_service.FakeManager') self.assertEqual('service', serv.test_method()) def test_service_with_min_down_time(self): # TODO(hanlind): This really tests code in the servicegroup api. self.flags(service_down_time=10, report_interval=10) service.Service('test', 'test', 'test', 'nova.tests.unit.test_service.FakeManager') self.assertEqual(25, CONF.service_down_time) class ServiceTestCase(test.NoDBTestCase): """Test cases for Services.""" def setUp(self): super(ServiceTestCase, self).setUp() self.host = 'foo' self.binary = 'nova-compute' self.topic = 'fake' def test_create(self): app = service.Service.create(host=self.host, binary=self.binary, topic=self.topic, manager='nova.tests.unit.test_service.FakeManager') self.assertTrue(app) def test_repr(self): # Test if a Service object is correctly represented, for example in # log files. 
serv = service.Service(self.host, self.binary, self.topic, 'nova.tests.unit.test_service.FakeManager') exp = "" self.assertEqual(exp, repr(serv)) @mock.patch.object(objects.Service, 'create') @mock.patch.object(objects.Service, 'get_by_host_and_binary') def test_init_and_start_hooks(self, mock_get_by_host_and_binary, mock_create): mock_get_by_host_and_binary.return_value = None mock_manager = mock.Mock(target=None) serv = service.Service(self.host, self.binary, self.topic, 'nova.tests.unit.test_service.FakeManager') serv.manager = mock_manager serv.manager.service_name = self.topic serv.manager.additional_endpoints = [] serv.start() # init_host is called before any service record is created serv.manager.init_host.assert_called_once_with() mock_get_by_host_and_binary.assert_called_once_with(mock.ANY, self.host, self.binary) mock_create.assert_called_once_with() # pre_start_hook is called after service record is created, # but before RPC consumer is created serv.manager.pre_start_hook.assert_called_once_with() # post_start_hook is called after RPC consumer is created. serv.manager.post_start_hook.assert_called_once_with() @mock.patch('nova.conductor.api.API.wait_until_ready') def test_init_with_indirection_api_waits(self, mock_wait): obj_base.NovaObject.indirection_api = mock.MagicMock() with mock.patch.object(FakeManager, '__init__') as init: def check(*a, **k): self.assertTrue(mock_wait.called) init.side_effect = check service.Service(self.host, self.binary, self.topic, 'nova.tests.unit.test_service.FakeManager') self.assertTrue(init.called) mock_wait.assert_called_once_with(mock.ANY) @mock.patch('nova.objects.service.Service.get_by_host_and_binary') def test_start_updates_version(self, mock_get_by_host_and_binary): # test that the service version gets updated on services startup service_obj = mock.Mock() service_obj.binary = 'fake-binary' service_obj.host = 'fake-host' service_obj.version = -42 mock_get_by_host_and_binary.return_value = service_obj serv = service.Service(self.host, self.binary, self.topic, 'nova.tests.unit.test_service.FakeManager') serv.start() # test service version got updated and saved: self.assertEqual(1, service_obj.save.call_count) self.assertEqual(objects.service.SERVICE_VERSION, service_obj.version) @mock.patch.object(objects.Service, 'create') @mock.patch.object(objects.Service, 'get_by_host_and_binary') def _test_service_check_create_race(self, ex, mock_get_by_host_and_binary, mock_create): mock_manager = mock.Mock() serv = service.Service(self.host, self.binary, self.topic, 'nova.tests.unit.test_service.FakeManager') mock_get_by_host_and_binary.side_effect = [None, test.TestingException()] mock_create.side_effect = ex serv.manager = mock_manager self.assertRaises(test.TestingException, serv.start) serv.manager.init_host.assert_called_with() mock_get_by_host_and_binary.assert_has_calls([ mock.call(mock.ANY, self.host, self.binary), mock.call(mock.ANY, self.host, self.binary)]) mock_create.assert_called_once_with() def test_service_check_create_race_topic_exists(self): ex = exception.ServiceTopicExists(host='foo', topic='bar') self._test_service_check_create_race(ex) def test_service_check_create_race_binary_exists(self): ex = exception.ServiceBinaryExists(host='foo', binary='bar') self._test_service_check_create_race(ex) @mock.patch.object(objects.Service, 'create') @mock.patch.object(objects.Service, 'get_by_host_and_binary') @mock.patch.object(_service.Service, 'stop') def test_parent_graceful_shutdown(self, mock_stop, mock_get_by_host_and_binary, 
mock_create): mock_get_by_host_and_binary.return_value = None mock_manager = mock.Mock(target=None) serv = service.Service(self.host, self.binary, self.topic, 'nova.tests.unit.test_service.FakeManager') serv.manager = mock_manager serv.manager.service_name = self.topic serv.manager.additional_endpoints = [] serv.start() serv.manager.init_host.assert_called_once_with() mock_get_by_host_and_binary.assert_called_once_with(mock.ANY, self.host, self.binary) mock_create.assert_called_once_with() serv.manager.pre_start_hook.assert_called_once_with() serv.manager.post_start_hook.assert_called_once_with() serv.stop() mock_stop.assert_called_once_with() @mock.patch('nova.servicegroup.API') @mock.patch('nova.objects.service.Service.get_by_host_and_binary') def test_parent_graceful_shutdown_with_cleanup_host( self, mock_svc_get_by_host_and_binary, mock_API): mock_manager = mock.Mock(target=None) serv = service.Service(self.host, self.binary, self.topic, 'nova.tests.unit.test_service.FakeManager') serv.manager = mock_manager serv.manager.additional_endpoints = [] serv.start() serv.manager.init_host.assert_called_with() serv.stop() serv.manager.cleanup_host.assert_called_with() @mock.patch('nova.servicegroup.API') @mock.patch('nova.objects.service.Service.get_by_host_and_binary') @mock.patch.object(rpc, 'get_server') def test_service_stop_waits_for_rpcserver( self, mock_rpc, mock_svc_get_by_host_and_binary, mock_API): serv = service.Service(self.host, self.binary, self.topic, 'nova.tests.unit.test_service.FakeManager') serv.start() serv.stop() serv.rpcserver.start.assert_called_once_with() serv.rpcserver.stop.assert_called_once_with() serv.rpcserver.wait.assert_called_once_with() def test_reset(self): serv = service.Service(self.host, self.binary, self.topic, 'nova.tests.unit.test_service.FakeManager') with mock.patch.object(serv.manager, 'reset') as mock_reset: serv.reset() mock_reset.assert_called_once_with() @mock.patch('nova.conductor.api.API.wait_until_ready') @mock.patch('nova.utils.raise_if_old_compute') def test_old_compute_version_check_happens_after_wait_for_conductor( self, mock_check_old, mock_wait): obj_base.NovaObject.indirection_api = mock.MagicMock() def fake_wait(*args, **kwargs): mock_check_old.assert_not_called() mock_wait.side_effect = fake_wait service.Service.create( self.host, self.binary, self.topic, 'nova.tests.unit.test_service.FakeManager') mock_check_old.assert_called_once_with() mock_wait.assert_called_once_with(mock.ANY) class TestWSGIService(test.NoDBTestCase): def setUp(self): super(TestWSGIService, self).setUp() self.stub_out('nova.api.wsgi.Loader.load_app', lambda *a, **kw: mock.MagicMock()) @mock.patch('nova.objects.Service.get_by_host_and_binary') @mock.patch('nova.objects.Service.create') def test_service_start_creates_record(self, mock_create, mock_get): mock_get.return_value = None test_service = service.WSGIService("test_service") test_service.start() self.assertTrue(mock_create.called) @mock.patch('nova.objects.Service.get_by_host_and_binary') @mock.patch('nova.objects.Service.create') def test_service_start_does_not_create_record(self, mock_create, mock_get): test_service = service.WSGIService("test_service") test_service.start() self.assertFalse(mock_create.called) @mock.patch('nova.objects.Service.get_by_host_and_binary') def test_service_random_port(self, mock_get): test_service = service.WSGIService("test_service") test_service.start() self.assertNotEqual(0, test_service.port) test_service.stop() def test_workers_set_default(self): test_service = 
service.WSGIService("osapi_compute") self.assertEqual(test_service.workers, processutils.get_worker_count()) def test_workers_set_good_user_setting(self): CONF.set_override('osapi_compute_workers', 8) test_service = service.WSGIService("osapi_compute") self.assertEqual(test_service.workers, 8) def test_openstack_compute_api_workers_set_default(self): test_service = service.WSGIService("openstack_compute_api_v2") self.assertEqual(test_service.workers, processutils.get_worker_count()) def test_openstack_compute_api_workers_set_good_user_setting(self): CONF.set_override('osapi_compute_workers', 8) test_service = service.WSGIService("openstack_compute_api_v2") self.assertEqual(test_service.workers, 8) @testtools.skipIf(not utils.is_ipv6_supported(), "no ipv6 support") @mock.patch('nova.objects.Service.get_by_host_and_binary') def test_service_random_port_with_ipv6(self, mock_get): CONF.set_default("test_service_listen", "::1") test_service = service.WSGIService("test_service") test_service.start() self.assertEqual("::1", test_service.host) self.assertNotEqual(0, test_service.port) test_service.stop() @mock.patch('nova.objects.Service.get_by_host_and_binary') def test_reset_pool_size_to_default(self, mock_get): test_service = service.WSGIService("test_service") test_service.start() # Stopping the service, which in turn sets pool size to 0 test_service.stop() self.assertEqual(test_service.server._pool.size, 0) # Resetting pool size to default test_service.reset() test_service.start() self.assertEqual(test_service.server._pool.size, CONF.wsgi.default_pool_size) class TestLauncher(test.NoDBTestCase): @mock.patch.object(_service, 'launch') def test_launch_app(self, mock_launch): service._launcher = None service.serve(mock.sentinel.service) mock_launch.assert_called_once_with(mock.ANY, mock.sentinel.service, workers=None, restart_method='mutate') @mock.patch.object(_service, 'launch') def test_launch_app_with_workers(self, mock_launch): service._launcher = None service.serve(mock.sentinel.service, workers=mock.sentinel.workers) mock_launch.assert_called_once_with(mock.ANY, mock.sentinel.service, workers=mock.sentinel.workers, restart_method='mutate') @mock.patch.object(_service, 'launch') def test_launch_app_more_than_once_raises(self, mock_launch): service._launcher = None service.serve(mock.sentinel.service) self.assertRaises(RuntimeError, service.serve, mock.sentinel.service) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/test_service_auth.py0000664000175000017500000000437100000000000021751 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from keystoneauth1 import loading as ks_loading from keystoneauth1 import service_token import mock from nova import context from nova import service_auth from nova import test class ServiceAuthTestCase(test.NoDBTestCase): def setUp(self): super(ServiceAuthTestCase, self).setUp() self.ctx = context.RequestContext('fake', 'fake') self.addCleanup(service_auth.reset_globals) @mock.patch.object(ks_loading, 'load_auth_from_conf_options') def test_get_auth_plugin_no_wraps(self, mock_load): context = mock.MagicMock() context.get_auth_plugin.return_value = "fake" result = service_auth.get_auth_plugin(context) self.assertEqual("fake", result) mock_load.assert_not_called() @mock.patch.object(ks_loading, 'load_auth_from_conf_options') def test_get_auth_plugin_wraps(self, mock_load): self.flags(send_service_user_token=True, group='service_user') result = service_auth.get_auth_plugin(self.ctx) self.assertIsInstance(result, service_token.ServiceTokenAuthWrapper) @mock.patch.object(ks_loading, 'load_auth_from_conf_options', return_value=None) def test_get_auth_plugin_wraps_bad_config(self, mock_load): """Tests the case that send_service_user_token is True but there is some misconfiguration with the [service_user] section which makes KSA return None for the service user auth. """ self.flags(send_service_user_token=True, group='service_user') result = service_auth.get_auth_plugin(self.ctx) self.assertEqual(1, mock_load.call_count) self.assertNotIsInstance(result, service_token.ServiceTokenAuthWrapper) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_test.py0000664000175000017500000003527600000000000020257 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for the testing base code.""" import os.path import tempfile import uuid import mock from oslo_log import log as logging import oslo_messaging as messaging import six import nova.conf from nova import exception from nova import rpc from nova import test from nova.tests import fixtures LOG = logging.getLogger(__name__) CONF = nova.conf.CONF class IsolationTestCase(test.TestCase): """Ensure that things are cleaned up after failed tests. These tests don't really do much here, but if isolation fails a bunch of other tests should fail. """ def test_service_isolation(self): self.useFixture(fixtures.ServiceFixture('compute')) def test_rpc_consumer_isolation(self): class NeverCalled(object): def __getattribute__(self, name): if name == 'target': # oslo.messaging 5.31.0 explicitly looks for 'target' # on the endpoint and checks it's type, so we can't avoid # it here, just ignore it if that's the case. return assert False, "I should never get called. 
name: %s" % name server = rpc.get_server(messaging.Target(topic='compute', server=CONF.host), endpoints=[NeverCalled()]) server.start() class JsonTestCase(test.NoDBTestCase): def test_compare_dict_string(self): expected = { "employees": [ {"firstName": "Anna", "lastName": "Smith"}, {"firstName": "John", "lastName": "Doe"}, {"firstName": "Peter", "lastName": "Jones"} ], "locations": set(['Boston', 'Mumbai', 'Beijing', 'Perth']) } actual = """{ "employees": [ { "lastName": "Doe", "firstName": "John" }, { "lastName": "Smith", "firstName": "Anna" }, { "lastName": "Jones", "firstName": "Peter" } ], "locations": [ "Perth", "Boston", "Mumbai", "Beijing" ] }""" self.assertJsonEqual(expected, actual) def test_fail_on_list_length(self): expected = { 'top': { 'l1': { 'l2': ['a', 'b', 'c'] } } } actual = { 'top': { 'l1': { 'l2': ['c', 'a', 'b', 'd'] } } } try: self.assertJsonEqual(expected, actual) except Exception as e: # error reported is going to be a cryptic length failure # on the level2 structure. self.assertEqual( ("3 != 4: path: root.top.l1.l2. Different list items\n" "expected=['a', 'b', 'c']\n" "observed=['a', 'b', 'c', 'd']\n" "difference=['d']"), e.difference) self.assertIn( "actual:\n{'top': {'l1': {'l2': ['c', 'a', 'b', 'd']}}}", six.text_type(e)) self.assertIn( "expected:\n{'top': {'l1': {'l2': ['a', 'b', 'c']}}}", six.text_type(e)) else: self.fail("This should have raised a mismatch exception") def test_fail_on_dict_length(self): expected = { 'top': { 'l1': { 'l2': {'a': 1, 'b': 2, 'c': 3} } } } actual = { 'top': { 'l1': { 'l2': {'a': 1, 'b': 2} } } } try: self.assertJsonEqual(expected, actual) except Exception as e: self.assertEqual( ("3 != 2: path: root.top.l1.l2. Different dict key sets\n" "expected=['a', 'b', 'c']\n" "observed=['a', 'b']\n" "difference=['c']"), e.difference) else: self.fail("This should have raised a mismatch exception") def test_fail_on_dict_keys(self): expected = { 'top': { 'l1': { 'l2': {'a': 1, 'b': 2, 'c': 3} } } } actual = { 'top': { 'l1': { 'l2': {'a': 1, 'b': 2, 'd': 3} } } } try: self.assertJsonEqual(expected, actual) except Exception as e: self.assertIn( "path: root.top.l1.l2. 
Dict keys are not equal", e.difference) else: self.fail("This should have raised a mismatch exception") def test_fail_on_list_value(self): expected = { 'top': { 'l1': { 'l2': ['a', 'b', 'c'] } } } actual = { 'top': { 'l1': { 'l2': ['c', 'a', 'd'] } } } try: self.assertJsonEqual(expected, actual) except Exception as e: self.assertEqual( "'b' != 'c': path: root.top.l1.l2[1]", e.difference) self.assertIn( "actual:\n{'top': {'l1': {'l2': ['c', 'a', 'd']}}}", six.text_type(e)) self.assertIn( "expected:\n{'top': {'l1': {'l2': ['a', 'b', 'c']}}}", six.text_type(e)) else: self.fail("This should have raised a mismatch exception") def test_fail_on_dict_value(self): expected = { 'top': { 'l1': { 'l2': {'a': 1, 'b': 2, 'c': 3} } } } actual = { 'top': { 'l1': { 'l2': {'a': 1, 'b': 2, 'c': 4} } } } try: self.assertJsonEqual(expected, actual, 'test message') except Exception as e: self.assertEqual( "3 != 4: path: root.top.l1.l2.c", e.difference) self.assertIn("actual:\n{'top': {'l1': {'l2': {", six.text_type(e)) self.assertIn( "expected:\n{'top': {'l1': {'l2': {", six.text_type(e)) self.assertIn( "message: test message\n", six.text_type(e)) else: self.fail("This should have raised a mismatch exception") def test_compare_scalars(self): with self.assertRaisesRegex(AssertionError, 'True != False'): self.assertJsonEqual(True, False) class BadLogTestCase(test.NoDBTestCase): """Make sure a mis-formatted debug log will get caught.""" def test_bad_debug_log(self): self.assertRaises(KeyError, LOG.debug, "this is a misformated %(log)s", {'nothing': 'nothing'}) class MatchTypeTestCase(test.NoDBTestCase): def test_match_type_simple(self): matcher = test.MatchType(dict) self.assertEqual(matcher, {}) self.assertEqual(matcher, {"hello": "world"}) self.assertEqual(matcher, {"hello": ["world"]}) self.assertNotEqual(matcher, []) self.assertNotEqual(matcher, [{"hello": "world"}]) self.assertNotEqual(matcher, 123) self.assertNotEqual(matcher, "foo") def test_match_type_object(self): class Hello(object): pass class World(object): pass matcher = test.MatchType(Hello) self.assertEqual(matcher, Hello()) self.assertNotEqual(matcher, World()) self.assertNotEqual(matcher, 123) self.assertNotEqual(matcher, "foo") class ContainKeyValueTestCase(test.NoDBTestCase): def test_contain_key_value_normal(self): matcher = test.ContainKeyValue('foo', 'bar') self.assertEqual(matcher, {123: 'nova', 'foo': 'bar'}) self.assertNotEqual(matcher, {'foo': 123}) self.assertNotEqual(matcher, {}) def test_contain_key_value_exception(self): matcher = test.ContainKeyValue('foo', 'bar') # Raise TypeError self.assertNotEqual(matcher, 123) self.assertNotEqual(matcher, 'foo') # Raise KeyError self.assertNotEqual(matcher, {1: 2, '3': 4, 5: '6'}) self.assertNotEqual(matcher, {'bar': 'foo'}) class NovaExceptionReraiseFormatErrorTestCase(test.NoDBTestCase): """Test that format errors are reraised in tests.""" def test_format_error_in_nova_exception(self): class FakeImageException(exception.NovaException): msg_fmt = 'Image %(image_id)s has wrong type %(type)s.' 
# wrong kwarg ex = self.assertRaises(KeyError, FakeImageException, bogus='wrongkwarg') self.assertIn('image_id', six.text_type(ex)) # no kwarg ex = self.assertRaises(KeyError, FakeImageException) self.assertIn('image_id', six.text_type(ex)) # not enough kwargs ex = self.assertRaises(KeyError, FakeImageException, image_id='image') self.assertIn('type', six.text_type(ex)) class PatchExistsTestCase(test.NoDBTestCase): def test_with_patch_exists_true(self): """Test that "with patch_exists" can fake the existence of a file without changing other file existence checks, and that calls can be asserted on the mocked method. """ self.assertFalse(os.path.exists('fake_file')) with self.patch_exists('fake_file', True) as mock_exists: self.assertTrue(os.path.exists('fake_file')) self.assertTrue(os.path.exists(__file__)) self.assertFalse(os.path.exists('non-existent/file')) self.assertIn(mock.call('fake_file'), mock_exists.mock_calls) def test_with_patch_exists_false(self): """Test that "with patch_exists" can fake the non-existence of a file without changing other file existence checks, and that calls can be asserted on the mocked method. """ self.assertTrue(os.path.exists(__file__)) with self.patch_exists(__file__, False) as mock_exists: self.assertFalse(os.path.exists(__file__)) self.assertTrue(os.path.exists(os.path.dirname(__file__))) self.assertFalse(os.path.exists('non-existent/file')) self.assertIn(mock.call(__file__), mock_exists.mock_calls) @test.patch_exists('fake_file', True) def test_patch_exists_decorator_true(self): """Test that @patch_exists can fake the existence of a file without changing other file existence checks. """ self.assertTrue(os.path.exists('fake_file')) self.assertTrue(os.path.exists(__file__)) self.assertFalse(os.path.exists('non-existent/file')) @test.patch_exists(__file__, False) def test_patch_exists_decorator_false(self): """Test that @patch_exists can fake the non-existence of a file without changing other file existence checks. """ self.assertFalse(os.path.exists(__file__)) self.assertTrue(os.path.exists(os.path.dirname(__file__))) self.assertFalse(os.path.exists('non-existent/file')) @test.patch_exists('fake_file1', True) @test.patch_exists('fake_file2', True) @test.patch_exists(__file__, False) def test_patch_exists_multiple_decorators(self): """Test that @patch_exists can be used multiple times on the same method. """ self.assertTrue(os.path.exists('fake_file1')) self.assertTrue(os.path.exists('fake_file2')) self.assertFalse(os.path.exists(__file__)) # Check non-patched parameters self.assertTrue(os.path.exists(os.path.dirname(__file__))) self.assertFalse(os.path.exists('non-existent/file')) class PatchOpenTestCase(test.NoDBTestCase): fake_contents = "These file contents don't really exist" def _test_patched_open(self): """Test that a selectively patched open can fake the contents of a file while still allowing normal, real file operations. """ self.assertFalse(os.path.exists('fake_file')) with open('fake_file') as f: self.assertEqual(self.fake_contents, f.read()) # Test we can still open and read this file from within the # same context. NOTE: We have to make sure we open the .py # file not the corresponding .pyc file. with open(__file__.rstrip('c')) as f: this_file_contents = f.read() self.assertIn("class %s(" % self.__class__.__name__, this_file_contents) self.assertNotIn("magic concatenated" "string", this_file_contents) # Test we can still create, write to, and then read from a # temporary file, from within the same context. 
tmp = tempfile.NamedTemporaryFile() tmp_contents = str(uuid.uuid1()) with open(tmp.name, 'w') as f: f.write(tmp_contents) with open(tmp.name) as f: self.assertEqual(tmp_contents, f.read()) return tmp.name def test_with_patch_open(self): """Test that "with patch_open" can fake the contents of a file without changing other file operations, and that calls can be asserted on the mocked method. """ with self.patch_open('fake_file', self.fake_contents) as mock_open: tmp_name = self._test_patched_open() # Test we can make assertions about how the mock_open was called. self.assertIn(mock.call('fake_file'), mock_open.mock_calls) # The mock_open should get bypassed for non-patched path values: self.assertNotIn(mock.call(__file__), mock_open.mock_calls) self.assertNotIn(mock.call(tmp_name), mock_open.mock_calls) @test.patch_open('fake_file', fake_contents) def test_patch_open_decorator(self): """Test that @patch_open can fake the contents of a file without changing other file operations. """ self._test_patched_open() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_test_utils.py0000664000175000017500000000445300000000000021470 0ustar00zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import errno import socket import tempfile import fixtures from nova.db import api as db from nova import test from nova.tests.unit import utils as test_utils class TestUtilsTestCase(test.TestCase): def test_get_test_admin_context(self): # get_test_admin_context's return value behaves like admin context. ctxt = test_utils.get_test_admin_context() # TODO(soren): This should verify the full interface context # objects expose. self.assertTrue(ctxt.is_admin) def test_get_test_instance(self): # get_test_instance's return value looks like an instance_ref. 
instance_ref = test_utils.get_test_instance() ctxt = test_utils.get_test_admin_context() db.instance_get(ctxt, instance_ref['id']) def test_ipv6_supported(self): self.assertIn(test_utils.is_ipv6_supported(), (False, True)) def fake_open(path): raise IOError def fake_socket_fail(x, y): e = socket.error() e.errno = errno.EAFNOSUPPORT raise e def fake_socket_ok(x, y): return tempfile.TemporaryFile() with fixtures.MonkeyPatch('socket.socket', fake_socket_fail): self.assertFalse(test_utils.is_ipv6_supported()) with fixtures.MonkeyPatch('socket.socket', fake_socket_ok): with fixtures.MonkeyPatch('sys.platform', 'windows'): self.assertTrue(test_utils.is_ipv6_supported()) with fixtures.MonkeyPatch('sys.platform', 'linux2'): with fixtures.MonkeyPatch('six.moves.builtins.open', fake_open): self.assertFalse(test_utils.is_ipv6_supported()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_utils.py0000664000175000017500000016167100000000000020437 0ustar00zuulzuul00000000000000# Copyright 2011 Justin Santa Barbara # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import hashlib import os import os.path import tempfile import eventlet import fixtures from keystoneauth1 import adapter as ks_adapter from keystoneauth1.identity import base as ks_identity from keystoneauth1 import session as ks_session import mock import netaddr from openstack import exceptions as sdk_exc from oslo_config import cfg from oslo_context import context as common_context from oslo_context import fixture as context_fixture from oslo_utils import encodeutils from oslo_utils import fixture as utils_fixture from oslo_utils import units import six from nova import context from nova import exception from nova.objects import base as obj_base from nova.objects import instance as instance_obj from nova.objects import service as service_obj from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.unit.objects import test_objects from nova.tests.unit import utils as test_utils from nova import utils CONF = cfg.CONF class GenericUtilsTestCase(test.NoDBTestCase): def test_parse_server_string(self): result = utils.parse_server_string('::1') self.assertEqual(('::1', ''), result) result = utils.parse_server_string('[::1]:8773') self.assertEqual(('::1', '8773'), result) result = utils.parse_server_string('2001:db8::192.168.1.1') self.assertEqual(('2001:db8::192.168.1.1', ''), result) result = utils.parse_server_string('[2001:db8::192.168.1.1]:8773') self.assertEqual(('2001:db8::192.168.1.1', '8773'), result) result = utils.parse_server_string('192.168.1.1') self.assertEqual(('192.168.1.1', ''), result) result = utils.parse_server_string('192.168.1.2:8773') self.assertEqual(('192.168.1.2', '8773'), result) result = utils.parse_server_string('192.168.1.3') self.assertEqual(('192.168.1.3', ''), result) result = utils.parse_server_string('www.example.com:8443') self.assertEqual(('www.example.com', '8443'), result) result = 
utils.parse_server_string('www.example.com') self.assertEqual(('www.example.com', ''), result) # error case result = utils.parse_server_string('www.exa:mple.com:8443') self.assertEqual(('', ''), result) result = utils.parse_server_string('') self.assertEqual(('', ''), result) def test_hostname_unicode_sanitization(self): hostname = u"\u7684.test.example.com" self.assertEqual("test.example.com", utils.sanitize_hostname(hostname)) def test_hostname_sanitize_periods(self): hostname = "....test.example.com..." self.assertEqual("test.example.com", utils.sanitize_hostname(hostname)) def test_hostname_sanitize_dashes(self): hostname = "----test.example.com---" self.assertEqual("test.example.com", utils.sanitize_hostname(hostname)) def test_hostname_sanitize_characters(self): hostname = "(#@&$!(@*--#&91)(__=+--test-host.example!!.com-0+" self.assertEqual("91----test-host.example.com-0", utils.sanitize_hostname(hostname)) def test_hostname_translate(self): hostname = "<}\x1fh\x10e\x08l\x02l\x05o\x12!{>" self.assertEqual("hello", utils.sanitize_hostname(hostname)) def test_hostname_has_default(self): hostname = u"\u7684hello" defaultname = "Server-1" self.assertEqual("hello", utils.sanitize_hostname(hostname, defaultname)) def test_hostname_empty_has_default(self): hostname = u"\u7684" defaultname = "Server-1" self.assertEqual(defaultname, utils.sanitize_hostname(hostname, defaultname)) def test_hostname_empty_has_default_too_long(self): hostname = u"\u7684" defaultname = "a" * 64 self.assertEqual("a" * 63, utils.sanitize_hostname(hostname, defaultname)) def test_hostname_empty_no_default(self): hostname = u"\u7684" self.assertEqual("", utils.sanitize_hostname(hostname)) def test_hostname_empty_minus_period(self): hostname = "---..." self.assertEqual("", utils.sanitize_hostname(hostname)) def test_hostname_with_space(self): hostname = " a b c " self.assertEqual("a-b-c", utils.sanitize_hostname(hostname)) def test_hostname_too_long(self): hostname = "a" * 64 self.assertEqual(63, len(utils.sanitize_hostname(hostname))) def test_hostname_truncated_no_hyphen(self): hostname = "a" * 62 hostname = hostname + '-' + 'a' res = utils.sanitize_hostname(hostname) # we trim to 63 and then trim the trailing dash self.assertEqual(62, len(res)) self.assertFalse(res.endswith('-'), 'The hostname ends with a -') def test_generate_password(self): password = utils.generate_password() self.assertTrue([c for c in password if c in '0123456789']) self.assertTrue([c for c in password if c in 'abcdefghijklmnopqrstuvwxyz']) self.assertTrue([c for c in password if c in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ']) @mock.patch('nova.privsep.path.chown') def test_temporary_chown(self, mock_chown): with tempfile.NamedTemporaryFile() as f: with utils.temporary_chown(f.name, owner_uid=2): mock_chown.assert_called_once_with(f.name, uid=2) mock_chown.reset_mock() mock_chown.assert_called_once_with(f.name, uid=os.getuid()) def test_get_shortened_ipv6(self): self.assertEqual("abcd:ef01:2345:6789:abcd:ef01:c0a8:fefe", utils.get_shortened_ipv6( "abcd:ef01:2345:6789:abcd:ef01:192.168.254.254")) self.assertEqual("::1", utils.get_shortened_ipv6( "0000:0000:0000:0000:0000:0000:0000:0001")) self.assertEqual("caca::caca:0:babe:201:102", utils.get_shortened_ipv6( "caca:0000:0000:caca:0000:babe:0201:0102")) self.assertRaises(netaddr.AddrFormatError, utils.get_shortened_ipv6, "127.0.0.1") self.assertRaises(netaddr.AddrFormatError, utils.get_shortened_ipv6, "failure") def test_get_shortened_ipv6_cidr(self): self.assertEqual("2600::/64", 
utils.get_shortened_ipv6_cidr( "2600:0000:0000:0000:0000:0000:0000:0000/64")) self.assertEqual("2600::/64", utils.get_shortened_ipv6_cidr( "2600::1/64")) self.assertRaises(netaddr.AddrFormatError, utils.get_shortened_ipv6_cidr, "127.0.0.1") self.assertRaises(netaddr.AddrFormatError, utils.get_shortened_ipv6_cidr, "failure") def test_safe_ip_format(self): self.assertEqual("[::1]", utils.safe_ip_format("::1")) self.assertEqual("127.0.0.1", utils.safe_ip_format("127.0.0.1")) self.assertEqual("[::ffff:127.0.0.1]", utils.safe_ip_format( "::ffff:127.0.0.1")) self.assertEqual("localhost", utils.safe_ip_format("localhost")) def test_format_remote_path(self): self.assertEqual("[::1]:/foo/bar", utils.format_remote_path("::1", "/foo/bar")) self.assertEqual("127.0.0.1:/foo/bar", utils.format_remote_path("127.0.0.1", "/foo/bar")) self.assertEqual("[::ffff:127.0.0.1]:/foo/bar", utils.format_remote_path("::ffff:127.0.0.1", "/foo/bar")) self.assertEqual("localhost:/foo/bar", utils.format_remote_path("localhost", "/foo/bar")) self.assertEqual("/foo/bar", utils.format_remote_path(None, "/foo/bar")) def test_get_hash_str(self): base_str = b"foo" base_unicode = u"foo" value = hashlib.md5(base_str).hexdigest() self.assertEqual( value, utils.get_hash_str(base_str)) self.assertEqual( value, utils.get_hash_str(base_unicode)) def test_get_obj_repr_unicode(self): instance = instance_obj.Instance() instance.display_name = u'\u00CD\u00F1st\u00E1\u00F1c\u00E9' # should be a bytes string if python2 before conversion self.assertIs(str, type(repr(instance))) self.assertIs(six.text_type, type(utils.get_obj_repr_unicode(instance))) @mock.patch('oslo_concurrency.processutils.execute') def test_ssh_execute(self, mock_execute): expected_args = ('ssh', '-o', 'BatchMode=yes', 'remotehost', 'ls', '-l') utils.ssh_execute('remotehost', 'ls', '-l') mock_execute.assert_called_once_with(*expected_args) def test_generate_hostid(self): host = 'host' project_id = '9b9e3c847e904b0686e8ffb20e4c6381' hostId = 'fa123c6f74efd4aad95f84096f9e187caa0625925a9e7837b2b46792' self.assertEqual(hostId, utils.generate_hostid(host, project_id)) def test_generate_hostid_with_none_host(self): project_id = '9b9e3c847e904b0686e8ffb20e4c6381' self.assertEqual('', utils.generate_hostid(None, project_id)) class TestCachedFile(test.NoDBTestCase): @mock.patch('os.path.getmtime', return_value=1) def test_read_cached_file(self, getmtime): utils._FILE_CACHE = { '/this/is/a/fake': {"data": 1123, "mtime": 1} } fresh, data = utils.read_cached_file("/this/is/a/fake") fdata = utils._FILE_CACHE['/this/is/a/fake']["data"] self.assertEqual(fdata, data) @mock.patch('os.path.getmtime', return_value=2) def test_read_modified_cached_file(self, getmtime): utils._FILE_CACHE = { '/this/is/a/fake': {"data": 1123, "mtime": 1} } fake_contents = "lorem ipsum" with mock.patch('six.moves.builtins.open', mock.mock_open(read_data=fake_contents)): fresh, data = utils.read_cached_file("/this/is/a/fake") self.assertEqual(data, fake_contents) self.assertTrue(fresh) def test_delete_cached_file(self): filename = '/this/is/a/fake/deletion/of/cached/file' utils._FILE_CACHE = { filename: {"data": 1123, "mtime": 1} } self.assertIn(filename, utils._FILE_CACHE) utils.delete_cached_file(filename) self.assertNotIn(filename, utils._FILE_CACHE) def test_delete_cached_file_not_exist(self): # We expect that if cached file does not exist no Exception raised. 
filename = '/this/is/a/fake/deletion/attempt/of/not/cached/file' self.assertNotIn(filename, utils._FILE_CACHE) utils.delete_cached_file(filename) self.assertNotIn(filename, utils._FILE_CACHE) class AuditPeriodTest(test.NoDBTestCase): def setUp(self): super(AuditPeriodTest, self).setUp() # a fairly random time to test with self.useFixture(utils_fixture.TimeFixture( datetime.datetime(second=23, minute=12, hour=8, day=5, month=3, year=2012))) def test_hour(self): begin, end = utils.last_completed_audit_period(unit='hour') self.assertEqual(begin, datetime.datetime( hour=7, day=5, month=3, year=2012)) self.assertEqual(end, datetime.datetime( hour=8, day=5, month=3, year=2012)) def test_hour_with_offset_before_current(self): begin, end = utils.last_completed_audit_period(unit='hour@10') self.assertEqual(begin, datetime.datetime( minute=10, hour=7, day=5, month=3, year=2012)) self.assertEqual(end, datetime.datetime( minute=10, hour=8, day=5, month=3, year=2012)) def test_hour_with_offset_after_current(self): begin, end = utils.last_completed_audit_period(unit='hour@30') self.assertEqual(begin, datetime.datetime( minute=30, hour=6, day=5, month=3, year=2012)) self.assertEqual(end, datetime.datetime( minute=30, hour=7, day=5, month=3, year=2012)) def test_day(self): begin, end = utils.last_completed_audit_period(unit='day') self.assertEqual(begin, datetime.datetime( day=4, month=3, year=2012)) self.assertEqual(end, datetime.datetime( day=5, month=3, year=2012)) def test_day_with_offset_before_current(self): begin, end = utils.last_completed_audit_period(unit='day@6') self.assertEqual(begin, datetime.datetime( hour=6, day=4, month=3, year=2012)) self.assertEqual(end, datetime.datetime( hour=6, day=5, month=3, year=2012)) def test_day_with_offset_after_current(self): begin, end = utils.last_completed_audit_period(unit='day@10') self.assertEqual(begin, datetime.datetime( hour=10, day=3, month=3, year=2012)) self.assertEqual(end, datetime.datetime( hour=10, day=4, month=3, year=2012)) def test_month(self): begin, end = utils.last_completed_audit_period(unit='month') self.assertEqual(begin, datetime.datetime( day=1, month=2, year=2012)) self.assertEqual(end, datetime.datetime( day=1, month=3, year=2012)) def test_month_with_offset_before_current(self): begin, end = utils.last_completed_audit_period(unit='month@2') self.assertEqual(begin, datetime.datetime( day=2, month=2, year=2012)) self.assertEqual(end, datetime.datetime( day=2, month=3, year=2012)) def test_month_with_offset_after_current(self): begin, end = utils.last_completed_audit_period(unit='month@15') self.assertEqual(begin, datetime.datetime( day=15, month=1, year=2012)) self.assertEqual(end, datetime.datetime( day=15, month=2, year=2012)) def test_year(self): begin, end = utils.last_completed_audit_period(unit='year') self.assertEqual(begin, datetime.datetime( day=1, month=1, year=2011)) self.assertEqual(end, datetime.datetime( day=1, month=1, year=2012)) def test_year_with_offset_before_current(self): begin, end = utils.last_completed_audit_period(unit='year@2') self.assertEqual(begin, datetime.datetime( day=1, month=2, year=2011)) self.assertEqual(end, datetime.datetime( day=1, month=2, year=2012)) def test_year_with_offset_after_current(self): begin, end = utils.last_completed_audit_period(unit='year@6') self.assertEqual(begin, datetime.datetime( day=1, month=6, year=2010)) self.assertEqual(end, datetime.datetime( day=1, month=6, year=2011)) class MetadataToDictTestCase(test.NoDBTestCase): def test_metadata_to_dict(self): 
self.assertEqual(utils.metadata_to_dict( [{'key': 'foo1', 'value': 'bar'}, {'key': 'foo2', 'value': 'baz'}]), {'foo1': 'bar', 'foo2': 'baz'}) def test_metadata_to_dict_with_include_deleted(self): metadata = [{'key': 'foo1', 'value': 'bar', 'deleted': 1442875429, 'other': 'stuff'}, {'key': 'foo2', 'value': 'baz', 'deleted': 0, 'other': 'stuff2'}] self.assertEqual({'foo1': 'bar', 'foo2': 'baz'}, utils.metadata_to_dict(metadata, include_deleted=True)) self.assertEqual({'foo2': 'baz'}, utils.metadata_to_dict(metadata, include_deleted=False)) # verify correct default behavior self.assertEqual(utils.metadata_to_dict(metadata), utils.metadata_to_dict(metadata, include_deleted=False)) def test_metadata_to_dict_empty(self): self.assertEqual({}, utils.metadata_to_dict([])) self.assertEqual({}, utils.metadata_to_dict([], include_deleted=True)) self.assertEqual({}, utils.metadata_to_dict([], include_deleted=False)) def test_dict_to_metadata(self): def sort_key(adict): return sorted(adict.items()) metadata = utils.dict_to_metadata(dict(foo1='bar1', foo2='bar2')) expected = [{'key': 'foo1', 'value': 'bar1'}, {'key': 'foo2', 'value': 'bar2'}] self.assertEqual(sorted(metadata, key=sort_key), sorted(expected, key=sort_key)) def test_dict_to_metadata_empty(self): self.assertEqual(utils.dict_to_metadata({}), []) class ExpectedArgsTestCase(test.NoDBTestCase): def test_passes(self): @utils.expects_func_args('foo', 'baz') def dec(f): return f @dec def func(foo, bar, baz="lol"): pass # Call to ensure nothing errors func(None, None) def test_raises(self): @utils.expects_func_args('foo', 'baz') def dec(f): return f def func(bar, baz): pass self.assertRaises(TypeError, dec, func) def test_var_no_of_args(self): @utils.expects_func_args('foo') def dec(f): return f @dec def func(bar, *args, **kwargs): pass # Call to ensure nothing errors func(None) def test_more_layers(self): @utils.expects_func_args('foo', 'baz') def dec(f): return f def dec_2(f): def inner_f(*a, **k): return f() return inner_f @dec_2 def func(bar, baz): pass self.assertRaises(TypeError, dec, func) class StringLengthTestCase(test.NoDBTestCase): def test_check_string_length(self): self.assertIsNone(utils.check_string_length( 'test', 'name', max_length=255)) self.assertRaises(exception.InvalidInput, utils.check_string_length, 11, 'name', max_length=255) self.assertRaises(exception.InvalidInput, utils.check_string_length, '', 'name', min_length=1) self.assertRaises(exception.InvalidInput, utils.check_string_length, 'a' * 256, 'name', max_length=255) def test_check_string_length_noname(self): self.assertIsNone(utils.check_string_length( 'test', max_length=255)) self.assertRaises(exception.InvalidInput, utils.check_string_length, 11, max_length=255) self.assertRaises(exception.InvalidInput, utils.check_string_length, '', min_length=1) self.assertRaises(exception.InvalidInput, utils.check_string_length, 'a' * 256, max_length=255) class ValidateIntegerTestCase(test.NoDBTestCase): def test_exception_converted(self): self.assertRaises(exception.InvalidInput, utils.validate_integer, "im-not-an-int", "not-an-int") self.assertRaises(exception.InvalidInput, utils.validate_integer, 3.14, "Pie") self.assertRaises(exception.InvalidInput, utils.validate_integer, "299", "Sparta no-show", min_value=300, max_value=300) self.assertRaises(exception.InvalidInput, utils.validate_integer, 55, "doing 55 in a 54", max_value=54) self.assertRaises(exception.InvalidInput, utils.validate_integer, six.unichr(129), "UnicodeError", max_value=1000) class 
AutoDiskConfigUtilTestCase(test.NoDBTestCase): def test_is_auto_disk_config_disabled(self): self.assertTrue(utils.is_auto_disk_config_disabled("Disabled ")) def test_is_auto_disk_config_disabled_none(self): self.assertFalse(utils.is_auto_disk_config_disabled(None)) def test_is_auto_disk_config_disabled_false(self): self.assertFalse(utils.is_auto_disk_config_disabled("false")) class GetSystemMetadataFromImageTestCase(test.NoDBTestCase): def get_image(self): image_meta = { "id": "fake-image", "name": "fake-name", "min_ram": 1, "min_disk": 1, "disk_format": "raw", "container_format": "bare", } return image_meta def get_flavor(self): flavor = { "id": "fake.flavor", "root_gb": 10, } return flavor def test_base_image_properties(self): image = self.get_image() # Verify that we inherit all the needed keys sys_meta = utils.get_system_metadata_from_image(image) for key in utils.SM_INHERITABLE_KEYS: sys_key = "%s%s" % (utils.SM_IMAGE_PROP_PREFIX, key) self.assertEqual(image[key], sys_meta.get(sys_key)) # Verify that everything else is ignored self.assertEqual(len(sys_meta), len(utils.SM_INHERITABLE_KEYS)) def test_inherit_image_properties(self): image = self.get_image() image["properties"] = {"foo1": "bar", "foo2": "baz"} sys_meta = utils.get_system_metadata_from_image(image) # Verify that we inherit all the image properties for key, expected in image["properties"].items(): sys_key = "%s%s" % (utils.SM_IMAGE_PROP_PREFIX, key) self.assertEqual(sys_meta[sys_key], expected) def test_skip_image_properties(self): image = self.get_image() image["properties"] = { "foo1": "bar", "foo2": "baz", "mappings": "wizz", "img_block_device_mapping": "eek", } sys_meta = utils.get_system_metadata_from_image(image) # Verify that we inherit all the image properties for key, expected in image["properties"].items(): sys_key = "%s%s" % (utils.SM_IMAGE_PROP_PREFIX, key) if key in utils.SM_SKIP_KEYS: self.assertNotIn(sys_key, sys_meta) else: self.assertEqual(sys_meta[sys_key], expected) def test_vhd_min_disk_image(self): image = self.get_image() flavor = self.get_flavor() image["disk_format"] = "vhd" sys_meta = utils.get_system_metadata_from_image(image, flavor) # Verify that the min_disk property is taken from # flavor's root_gb when using vhd disk format sys_key = "%s%s" % (utils.SM_IMAGE_PROP_PREFIX, "min_disk") self.assertEqual(sys_meta[sys_key], flavor["root_gb"]) def test_dont_inherit_empty_values(self): image = self.get_image() for key in utils.SM_INHERITABLE_KEYS: image[key] = None sys_meta = utils.get_system_metadata_from_image(image) # Verify that the empty properties have not been inherited for key in utils.SM_INHERITABLE_KEYS: sys_key = "%s%s" % (utils.SM_IMAGE_PROP_PREFIX, key) self.assertNotIn(sys_key, sys_meta) class GetImageFromSystemMetadataTestCase(test.NoDBTestCase): def get_system_metadata(self): sys_meta = { "image_min_ram": 1, "image_min_disk": 1, "image_disk_format": "raw", "image_container_format": "bare", } return sys_meta def test_image_from_system_metadata(self): sys_meta = self.get_system_metadata() sys_meta["%soo1" % utils.SM_IMAGE_PROP_PREFIX] = "bar" sys_meta["%soo2" % utils.SM_IMAGE_PROP_PREFIX] = "baz" sys_meta["%simg_block_device_mapping" % utils.SM_IMAGE_PROP_PREFIX] = "eek" image = utils.get_image_from_system_metadata(sys_meta) # Verify that we inherit all the needed keys for key in utils.SM_INHERITABLE_KEYS: sys_key = "%s%s" % (utils.SM_IMAGE_PROP_PREFIX, key) self.assertEqual(image[key], sys_meta.get(sys_key)) # Verify that we inherit the rest of metadata as properties 
self.assertIn("properties", image) for key in image["properties"]: sys_key = "%s%s" % (utils.SM_IMAGE_PROP_PREFIX, key) self.assertEqual(image["properties"][key], sys_meta[sys_key]) self.assertNotIn("img_block_device_mapping", image["properties"]) def test_dont_inherit_empty_values(self): sys_meta = self.get_system_metadata() for key in utils.SM_INHERITABLE_KEYS: sys_key = "%s%s" % (utils.SM_IMAGE_PROP_PREFIX, key) sys_meta[sys_key] = None image = utils.get_image_from_system_metadata(sys_meta) # Verify that the empty properties have not been inherited for key in utils.SM_INHERITABLE_KEYS: self.assertNotIn(key, image) class GetImageMetadataFromVolumeTestCase(test.NoDBTestCase): def test_inherit_image_properties(self): properties = {"fake_prop": "fake_value"} volume = {"volume_image_metadata": properties} image_meta = utils.get_image_metadata_from_volume(volume) self.assertEqual(properties, image_meta["properties"]) def test_image_size(self): volume = {"size": 10} image_meta = utils.get_image_metadata_from_volume(volume) self.assertEqual(10 * units.Gi, image_meta["size"]) def test_image_status(self): volume = {} image_meta = utils.get_image_metadata_from_volume(volume) self.assertEqual("active", image_meta["status"]) def test_values_conversion(self): properties = {"min_ram": "5", "min_disk": "7"} volume = {"volume_image_metadata": properties} image_meta = utils.get_image_metadata_from_volume(volume) self.assertEqual(5, image_meta["min_ram"]) self.assertEqual(7, image_meta["min_disk"]) def test_suppress_not_image_properties(self): properties = {"min_ram": "256", "min_disk": "128", "image_id": "fake_id", "image_name": "fake_name", "container_format": "ami", "disk_format": "ami", "size": "1234", "checksum": "fake_checksum"} volume = {"volume_image_metadata": properties} image_meta = utils.get_image_metadata_from_volume(volume) self.assertEqual({}, image_meta["properties"]) self.assertEqual(0, image_meta["size"]) # volume's properties should not be touched self.assertNotEqual({}, properties) class SafeTruncateTestCase(test.NoDBTestCase): def test_exception_to_dict_with_long_message_3_bytes(self): # Generate Chinese byte string whose length is 300. This Chinese UTF-8 # character occupies 3 bytes. After truncating, the byte string length # should be 255. msg = u'\u8d75' * 100 truncated_msg = utils.safe_truncate(msg, 255) byte_message = encodeutils.safe_encode(truncated_msg) self.assertEqual(255, len(byte_message)) def test_exception_to_dict_with_long_message_2_bytes(self): # Generate Russian byte string whose length is 300. This Russian UTF-8 # character occupies 2 bytes. After truncating, the byte string length # should be 254. 
msg = encodeutils.safe_decode('\xd0\x92' * 150) truncated_msg = utils.safe_truncate(msg, 255) byte_message = encodeutils.safe_encode(truncated_msg) self.assertEqual(254, len(byte_message)) class SpawnNTestCase(test.NoDBTestCase): def setUp(self): super(SpawnNTestCase, self).setUp() self.useFixture(context_fixture.ClearRequestContext()) self.spawn_name = 'spawn_n' def test_spawn_n_no_context(self): self.assertIsNone(common_context.get_current()) def _fake_spawn(func, *args, **kwargs): # call the method to ensure no error is raised func(*args, **kwargs) self.assertEqual('test', args[0]) def fake(arg): pass with mock.patch.object(eventlet, self.spawn_name, _fake_spawn): getattr(utils, self.spawn_name)(fake, 'test') self.assertIsNone(common_context.get_current()) def test_spawn_n_context(self): self.assertIsNone(common_context.get_current()) ctxt = context.RequestContext('user', 'project') def _fake_spawn(func, *args, **kwargs): # call the method to ensure no error is raised func(*args, **kwargs) self.assertEqual(ctxt, args[0]) self.assertEqual('test', kwargs['kwarg1']) def fake(context, kwarg1=None): pass with mock.patch.object(eventlet, self.spawn_name, _fake_spawn): getattr(utils, self.spawn_name)(fake, ctxt, kwarg1='test') self.assertEqual(ctxt, common_context.get_current()) def test_spawn_n_context_different_from_passed(self): self.assertIsNone(common_context.get_current()) ctxt = context.RequestContext('user', 'project') ctxt_passed = context.RequestContext('user', 'project', overwrite=False) self.assertEqual(ctxt, common_context.get_current()) def _fake_spawn(func, *args, **kwargs): # call the method to ensure no error is raised func(*args, **kwargs) self.assertEqual(ctxt_passed, args[0]) self.assertEqual('test', kwargs['kwarg1']) def fake(context, kwarg1=None): pass with mock.patch.object(eventlet, self.spawn_name, _fake_spawn): getattr(utils, self.spawn_name)(fake, ctxt_passed, kwarg1='test') self.assertEqual(ctxt, common_context.get_current()) class SpawnTestCase(SpawnNTestCase): def setUp(self): super(SpawnTestCase, self).setUp() self.spawn_name = 'spawn' class UT8TestCase(test.NoDBTestCase): def test_none_value(self): self.assertIsInstance(utils.utf8(None), type(None)) def test_bytes_value(self): some_value = b"fake data" return_value = utils.utf8(some_value) # check that type of returned value doesn't changed self.assertIsInstance(return_value, type(some_value)) self.assertEqual(some_value, return_value) def test_not_text_type(self): return_value = utils.utf8(1) self.assertEqual(b"1", return_value) self.assertIsInstance(return_value, six.binary_type) def test_text_type_with_encoding(self): some_value = 'test\u2026config' self.assertEqual(some_value, utils.utf8(some_value).decode("utf-8")) class TestObjectCallHelpers(test.NoDBTestCase): def test_with_primitives(self): tester = mock.Mock() tester.foo(1, 'two', three='four') self.assertTrue( test_utils.obj_called_with(tester.foo, 1, 'two', three='four')) self.assertFalse( test_utils.obj_called_with(tester.foo, 42, 'two', three='four')) def test_with_object(self): obj_base.NovaObjectRegistry.register(test_objects.MyObj) obj = test_objects.MyObj(foo=1, bar='baz') tester = mock.Mock() tester.foo(1, obj) self.assertTrue( test_utils.obj_called_with( tester.foo, 1, test_objects.MyObj(foo=1, bar='baz'))) self.assertFalse( test_utils.obj_called_with( tester.foo, 1, test_objects.MyObj(foo=2, bar='baz'))) def test_with_object_multiple(self): obj_base.NovaObjectRegistry.register(test_objects.MyObj) obj1 = test_objects.MyObj(foo=1, bar='baz') 
obj2 = test_objects.MyObj(foo=3, bar='baz') tester = mock.Mock() tester.foo(1, obj1) tester.foo(1, obj1) tester.foo(3, obj2) # Called at all self.assertTrue( test_utils.obj_called_with( tester.foo, 1, test_objects.MyObj(foo=1, bar='baz'))) # Called once (not true) self.assertFalse( test_utils.obj_called_once_with( tester.foo, 1, test_objects.MyObj(foo=1, bar='baz'))) # Not called with obj.foo=2 self.assertFalse( test_utils.obj_called_with( tester.foo, 1, test_objects.MyObj(foo=2, bar='baz'))) # Called with obj.foo.3 self.assertTrue( test_utils.obj_called_with( tester.foo, 3, test_objects.MyObj(foo=3, bar='baz'))) # Called once with obj.foo.3 self.assertTrue( test_utils.obj_called_once_with( tester.foo, 3, test_objects.MyObj(foo=3, bar='baz'))) class GetKSAAdapterTestCase(test.NoDBTestCase): """Tests for nova.utils.get_endpoint_data().""" def setUp(self): super(GetKSAAdapterTestCase, self).setUp() self.sess = mock.create_autospec(ks_session.Session, instance=True) self.auth = mock.create_autospec(ks_identity.BaseIdentityPlugin, instance=True) load_adap_p = mock.patch( 'keystoneauth1.loading.load_adapter_from_conf_options') self.addCleanup(load_adap_p.stop) self.load_adap = load_adap_p.start() ksa_fixture = self.useFixture(nova_fixtures.KSAFixture()) self.mock_ksa_load_auth = ksa_fixture.mock_load_auth self.mock_ksa_load_sess = ksa_fixture.mock_load_sess self.mock_ksa_session = ksa_fixture.mock_session self.mock_ksa_load_auth.return_value = self.auth self.mock_ksa_load_sess.return_value = self.sess def test_bogus_service_type(self): self.assertRaises(exception.ConfGroupForServiceTypeNotFound, utils.get_ksa_adapter, 'bogus') self.mock_ksa_load_auth.assert_not_called() self.mock_ksa_load_sess.assert_not_called() self.load_adap.assert_not_called() def test_all_params(self): ret = utils.get_ksa_adapter( 'image', ksa_auth='auth', ksa_session='sess', min_version='min', max_version='max') # Returned the result of load_adapter_from_conf_options self.assertEqual(self.load_adap.return_value, ret) # Because we supplied ksa_auth, load_auth* not called self.mock_ksa_load_auth.assert_not_called() # Ditto ksa_session/load_session* self.mock_ksa_load_sess.assert_not_called() # load_adapter* called with what we passed in (and the right group) self.load_adap.assert_called_once_with( utils.CONF, 'glance', session='sess', auth='auth', min_version='min', max_version='max', raise_exc=False) def test_auth_from_session(self): self.sess.auth = 'auth' ret = utils.get_ksa_adapter('baremetal', ksa_session=self.sess) # Returned the result of load_adapter_from_conf_options self.assertEqual(self.load_adap.return_value, ret) # Because ksa_auth found in ksa_session, load_auth* not called self.mock_ksa_load_auth.assert_not_called() # Because we supplied ksa_session, load_session* not called self.mock_ksa_load_sess.assert_not_called() # load_adapter* called with the auth from the session self.load_adap.assert_called_once_with( utils.CONF, 'ironic', session=self.sess, auth='auth', min_version=None, max_version=None, raise_exc=False) def test_load_auth_and_session(self): ret = utils.get_ksa_adapter('volumev3') # Returned the result of load_adapter_from_conf_options self.assertEqual(self.load_adap.return_value, ret) # Had to load the auth self.mock_ksa_load_auth.assert_called_once_with(utils.CONF, 'cinder') # Had to load the session, passed in the loaded auth self.mock_ksa_load_sess.assert_called_once_with(utils.CONF, 'cinder', auth=self.auth) # load_adapter* called with the loaded auth & session 
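# ('volumev3' resolves to the [cinder] conf group here, just as 'image'
# and 'baremetal' above resolved to [glance] and [ironic].)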
self.load_adap.assert_called_once_with( utils.CONF, 'cinder', session=self.sess, auth=self.auth, min_version=None, max_version=None, raise_exc=False) class GetEndpointTestCase(test.NoDBTestCase): def setUp(self): super(GetEndpointTestCase, self).setUp() self.adap = mock.create_autospec(ks_adapter.Adapter, instance=True) self.adap.endpoint_override = None self.adap.service_type = 'stype' self.adap.interface = ['admin', 'public'] def test_endpoint_override(self): self.adap.endpoint_override = 'foo' self.assertEqual('foo', utils.get_endpoint(self.adap)) self.adap.get_endpoint_data.assert_not_called() self.adap.get_endpoint.assert_not_called() def test_image_good(self): self.adap.service_type = 'image' self.adap.get_endpoint_data.return_value.catalog_url = 'url' self.assertEqual('url', utils.get_endpoint(self.adap)) self.adap.get_endpoint_data.assert_called_once_with() self.adap.get_endpoint.assert_not_called() def test_image_bad(self): self.adap.service_type = 'image' self.adap.get_endpoint_data.side_effect = AttributeError self.adap.get_endpoint.return_value = 'url' self.assertEqual('url', utils.get_endpoint(self.adap)) self.adap.get_endpoint_data.assert_called_once_with() self.adap.get_endpoint.assert_called_once_with() def test_nonimage_good(self): self.adap.get_endpoint.return_value = 'url' self.assertEqual('url', utils.get_endpoint(self.adap)) self.adap.get_endpoint_data.assert_not_called() self.adap.get_endpoint.assert_called_once_with() class TestResourceClassNormalize(test.NoDBTestCase): def test_normalize_name(self): values = [ ("foo", "CUSTOM_FOO"), ("VCPU", "CUSTOM_VCPU"), ("CUSTOM_BOB", "CUSTOM_CUSTOM_BOB"), ("CUSTM_BOB", "CUSTOM_CUSTM_BOB"), ] for test_value, expected in values: result = utils.normalize_rc_name(test_value) self.assertEqual(expected, result) def test_normalize_name_bug_1762789(self): """The .upper() builtin treats sharp S (\xdf) differently in py2 vs. py3. Make sure normalize_name handles it properly. 
""" name = u'Fu\xdfball' self.assertEqual(u'CUSTOM_FU_BALL', utils.normalize_rc_name(name)) class TestGetConfGroup(test.NoDBTestCase): """Tests for nova.utils._get_conf_group""" @mock.patch('nova.utils.CONF') @mock.patch('nova.utils._SERVICE_TYPES.get_project_name') def test__get_conf_group(self, mock_get_project_name, mock_conf): test_conf_grp = 'test_confgrp' test_service_type = 'test_service_type' mock_get_project_name.return_value = test_conf_grp # happy path mock_conf.test_confgrp = None actual_conf_grp = utils._get_conf_group(test_service_type) self.assertEqual(test_conf_grp, actual_conf_grp) mock_get_project_name.assert_called_once_with(test_service_type) # service type as the conf group del mock_conf.test_confgrp mock_conf.test_service_type = None actual_conf_grp = utils._get_conf_group(test_service_type) self.assertEqual(test_service_type, actual_conf_grp) @mock.patch('nova.utils._SERVICE_TYPES.get_project_name') def test__get_conf_group_fail(self, mock_get_project_name): test_service_type = 'test_service_type' # not confgrp mock_get_project_name.return_value = None self.assertRaises(exception.ConfGroupForServiceTypeNotFound, utils._get_conf_group, None) # not hasattr mock_get_project_name.return_value = 'test_fail' self.assertRaises(exception.ConfGroupForServiceTypeNotFound, utils._get_conf_group, test_service_type) class TestGetAuthAndSession(test.NoDBTestCase): """Tests for nova.utils._get_auth_and_session""" def setUp(self): super(TestGetAuthAndSession, self).setUp() self.test_auth = 'test_auth' self.test_session = 'test_session' self.test_session_auth = 'test_session_auth' self.test_confgrp = 'test_confgrp' self.mock_session = mock.Mock() self.mock_session.auth = self.test_session_auth @mock.patch('nova.utils.ks_loading.load_auth_from_conf_options') @mock.patch('nova.utils.ks_loading.load_session_from_conf_options') def test_auth_and_session(self, mock_load_session, mock_load_auth): # yes auth, yes session actual = utils._get_auth_and_session(self.test_confgrp, ksa_auth=self.test_auth, ksa_session=self.test_session) self.assertEqual(actual, (self.test_auth, self.test_session)) mock_load_session.assert_not_called() mock_load_auth.assert_not_called() @mock.patch('nova.utils.ks_loading.load_auth_from_conf_options') @mock.patch('nova.utils.ks_loading.load_session_from_conf_options') @mock.patch('nova.utils.CONF') def test_no_session(self, mock_CONF, mock_load_session, mock_load_auth): # yes auth, no session mock_load_session.return_value = self.test_session actual = utils._get_auth_and_session(self.test_confgrp, ksa_auth=self.test_auth, ksa_session=None) self.assertEqual(actual, (self.test_auth, self.test_session)) mock_load_session.assert_called_once_with(mock_CONF, self.test_confgrp, auth=self.test_auth) mock_load_auth.assert_not_called() @mock.patch('nova.utils.ks_loading.load_auth_from_conf_options') @mock.patch('nova.utils.ks_loading.load_session_from_conf_options') def test_no_auth(self, mock_load_session, mock_load_auth): # no auth, yes session, yes session.auth actual = utils._get_auth_and_session(self.test_confgrp, ksa_auth=None, ksa_session=self.mock_session) self.assertEqual(actual, (self.test_session_auth, self.mock_session)) mock_load_session.assert_not_called() mock_load_auth.assert_not_called() @mock.patch('nova.utils.ks_loading.load_auth_from_conf_options') @mock.patch('nova.utils.ks_loading.load_session_from_conf_options') @mock.patch('nova.utils.CONF') def test_no_auth_no_sauth(self, mock_CONF, mock_load_session, mock_load_auth): # no auth, yes session, no 
session.auth mock_load_auth.return_value = self.test_auth self.mock_session.auth = None actual = utils._get_auth_and_session(self.test_confgrp, ksa_auth=None, ksa_session=self.mock_session) self.assertEqual(actual, (self.test_auth, self.mock_session)) mock_load_session.assert_not_called() mock_load_auth.assert_called_once_with(mock_CONF, self.test_confgrp) @mock.patch('nova.utils.ks_loading.load_auth_from_conf_options') @mock.patch('nova.utils.ks_loading.load_session_from_conf_options') @mock.patch('nova.utils.CONF') def test__get_auth_and_session(self, mock_CONF, mock_load_session, mock_load_auth): # no auth, no session mock_load_auth.return_value = self.test_auth mock_load_session.return_value = self.test_session actual = utils._get_auth_and_session(self.test_confgrp, ksa_auth=None, ksa_session=None) self.assertEqual(actual, (self.test_auth, self.test_session)) mock_load_session.assert_called_once_with(mock_CONF, self.test_confgrp, auth=self.test_auth) mock_load_auth.assert_called_once_with(mock_CONF, self.test_confgrp) class TestGetSDKAdapter(test.NoDBTestCase): """Tests for nova.utils.get_sdk_adapter""" def setUp(self): super(TestGetSDKAdapter, self).setUp() self.mock_get_confgrp = self.useFixture(fixtures.MockPatch( 'nova.utils._get_conf_group')).mock self.mock_get_auth_sess = self.useFixture(fixtures.MockPatch( 'nova.utils._get_auth_and_session')).mock self.mock_get_auth_sess.return_value = (None, mock.sentinel.session) self.service_type = 'test_service' self.mock_connection = self.useFixture(fixtures.MockPatch( 'nova.utils.connection.Connection')).mock self.mock_connection.return_value = mock.Mock( test_service=mock.sentinel.proxy) # We need to stub the CONF global in nova.utils to assert that the # Connection constructor picks it up. self.mock_conf = self.useFixture(fixtures.MockPatch( 'nova.utils.CONF')).mock def _test_get_sdk_adapter(self, strict=False): actual = utils.get_sdk_adapter(self.service_type, check_service=strict) self.mock_get_confgrp.assert_called_once_with(self.service_type) self.mock_get_auth_sess.assert_called_once_with( self.mock_get_confgrp.return_value) self.mock_connection.assert_called_once_with( session=mock.sentinel.session, oslo_conf=self.mock_conf, service_types={'test_service'}, strict_proxies=strict) return actual def test_get_sdk_adapter(self): self.assertEqual(self._test_get_sdk_adapter(), mock.sentinel.proxy) def test_get_sdk_adapter_strict(self): self.assertEqual( self._test_get_sdk_adapter(strict=True), mock.sentinel.proxy) def test_get_sdk_adapter_strict_fail(self): self.mock_connection.side_effect = sdk_exc.ServiceDiscoveryException() self.assertRaises( exception.ServiceUnavailable, self._test_get_sdk_adapter, strict=True) def test_get_sdk_adapter_conf_group_fail(self): self.mock_get_confgrp.side_effect = ( exception.ConfGroupForServiceTypeNotFound(stype=self.service_type)) self.assertRaises(exception.ConfGroupForServiceTypeNotFound, utils.get_sdk_adapter, self.service_type) self.mock_get_confgrp.assert_called_once_with(self.service_type) self.mock_connection.assert_not_called() self.mock_get_auth_sess.assert_not_called() class TestOldComputeCheck(test.NoDBTestCase): @mock.patch('nova.objects.service.get_minimum_version_all_cells') def test_no_compute(self, mock_get_min_service): mock_get_min_service.return_value = 0 utils.raise_if_old_compute() mock_get_min_service.assert_called_once_with( mock.ANY, ['nova-compute']) @mock.patch('nova.objects.service.get_minimum_version_all_cells') def test_old_but_supported_compute(self, mock_get_min_service): 
oldest = service_obj.SERVICE_VERSION_ALIASES[ service_obj.OLDEST_SUPPORTED_SERVICE_VERSION] mock_get_min_service.return_value = oldest utils.raise_if_old_compute() mock_get_min_service.assert_called_once_with( mock.ANY, ['nova-compute']) @mock.patch('nova.objects.service.get_minimum_version_all_cells') def test_new_compute(self, mock_get_min_service): mock_get_min_service.return_value = service_obj.SERVICE_VERSION utils.raise_if_old_compute() mock_get_min_service.assert_called_once_with( mock.ANY, ['nova-compute']) @mock.patch('nova.objects.service.Service.get_minimum_version') def test_too_old_compute_cell(self, mock_get_min_service): self.flags(group='api_database', connection=None) oldest = service_obj.SERVICE_VERSION_ALIASES[ service_obj.OLDEST_SUPPORTED_SERVICE_VERSION] mock_get_min_service.return_value = oldest - 1 ex = self.assertRaises( exception.TooOldComputeService, utils.raise_if_old_compute) self.assertIn('cell', str(ex)) mock_get_min_service.assert_called_once_with(mock.ANY, 'nova-compute') @mock.patch('nova.objects.service.get_minimum_version_all_cells') def test_too_old_compute_top_level(self, mock_get_min_service): self.flags(group='api_database', connection='fake db connection') oldest = service_obj.SERVICE_VERSION_ALIASES[ service_obj.OLDEST_SUPPORTED_SERVICE_VERSION] mock_get_min_service.return_value = oldest - 1 ex = self.assertRaises( exception.TooOldComputeService, utils.raise_if_old_compute) self.assertIn('system', str(ex)) mock_get_min_service.assert_called_once_with( mock.ANY, ['nova-compute']) @mock.patch.object(utils.LOG, 'warning') @mock.patch('nova.objects.service.Service.get_minimum_version') @mock.patch('nova.objects.service.get_minimum_version_all_cells') def test_api_db_is_configured_but_the_service_cannot_access_db( self, mock_get_all, mock_get, mock_warn): # This is the case when the nova-compute service is wrongly configured # with db credentials but nova-compute is never allowed to access the # db directly. mock_get_all.side_effect = exception.DBNotAllowed( binary='nova-compute') oldest = service_obj.SERVICE_VERSION_ALIASES[ service_obj.OLDEST_SUPPORTED_SERVICE_VERSION] mock_get.return_value = oldest - 1 ex = self.assertRaises( exception.TooOldComputeService, utils.raise_if_old_compute) self.assertIn('cell', str(ex)) mock_get_all.assert_called_once_with(mock.ANY, ['nova-compute']) mock_get.assert_called_once_with(mock.ANY, 'nova-compute') mock_warn.assert_called_once_with( 'This service is configured for access to the API database but is ' 'not allowed to directly access the database. You should run this ' 'service without the [api_database]/connection config option. The ' 'service version check will only query the local cell.') class RunOnceTests(test.NoDBTestCase): fake_logger = mock.MagicMock() @utils.run_once("already ran once", fake_logger) def dummy_test_func(self, fail=False): if fail: raise ValueError() return True def setUp(self): super(RunOnceTests, self).setUp() self.dummy_test_func.reset() RunOnceTests.fake_logger.reset_mock() def test_wrapped_funtions_called_once(self): self.assertFalse(self.dummy_test_func.called) result = self.dummy_test_func() self.assertTrue(result) self.assertTrue(self.dummy_test_func.called) # assert that on second invocation no result # is returned and that the logger is invoked. 
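# (run_once remembers only that the wrapped function already ran, not its
# return value, so the repeat call yields None and simply invokes the
# supplied logger.)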
result = self.dummy_test_func() RunOnceTests.fake_logger.assert_called_once() self.assertIsNone(result) def test_wrapped_funtions_called_once_raises(self): self.assertFalse(self.dummy_test_func.called) self.assertRaises(ValueError, self.dummy_test_func, fail=True) self.assertTrue(self.dummy_test_func.called) # assert that on second invocation no result # is returned and that the logger is invoked. result = self.dummy_test_func() RunOnceTests.fake_logger.assert_called_once() self.assertIsNone(result) def test_wrapped_funtions_can_be_reset(self): # assert we start with a clean state self.assertFalse(self.dummy_test_func.called) result = self.dummy_test_func() self.assertTrue(result) self.dummy_test_func.reset() # assert we restored a clean state self.assertFalse(self.dummy_test_func.called) result = self.dummy_test_func() self.assertTrue(result) # assert that we never called the logger RunOnceTests.fake_logger.assert_not_called() def test_reset_calls_cleanup(self): mock_clean = mock.Mock() @utils.run_once("already ran once", self.fake_logger, cleanup=mock_clean) def f(): pass f() self.assertTrue(f.called) f.reset() self.assertFalse(f.called) mock_clean.assert_called_once_with() def test_clean_is_not_called_at_reset_if_wrapped_not_called(self): mock_clean = mock.Mock() @utils.run_once("already ran once", self.fake_logger, cleanup=mock_clean) def f(): pass self.assertFalse(f.called) f.reset() self.assertFalse(f.called) self.assertFalse(mock_clean.called) def test_reset_works_even_if_cleanup_raises(self): mock_clean = mock.Mock(side_effect=ValueError()) @utils.run_once("already ran once", self.fake_logger, cleanup=mock_clean) def f(): pass f() self.assertTrue(f.called) self.assertRaises(ValueError, f.reset) self.assertFalse(f.called) mock_clean.assert_called_once_with() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_versions.py0000664000175000017500000000360300000000000021135 0ustar00zuulzuul00000000000000# Copyright 2011 Ken Pepple # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_config import cfg from nova import test from nova import version class VersionTestCase(test.NoDBTestCase): """Test cases for Versions code.""" def test_version_string_with_package_is_good(self): """Ensure uninstalled code get version string.""" self.stub_out('nova.version.version_info.version_string', lambda: '5.5.5.5') self.stub_out('nova.version.NOVA_PACKAGE', 'g9ec3421') self.assertEqual("5.5.5.5-g9ec3421", version.version_string_with_package()) @test.patch_open("/etc/nova/release", read_data= "[Nova]\n" "vendor = ACME Corporation\n" "product = ACME Nova\n" "package = 1337\n") def test_release_file(self): version.loaded = False real_find_file = cfg.CONF.find_file def fake_find_file(self, name): if name == "release": return "/etc/nova/release" return real_find_file(self, name) self.stub_out('oslo_config.cfg.ConfigOpts.find_file', fake_find_file) self.assertEqual(version.vendor_string(), "ACME Corporation") self.assertEqual(version.product_string(), "ACME Nova") self.assertEqual(version.package_string(), "1337") ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/test_weights.py0000664000175000017500000000526000000000000020740 0ustar00zuulzuul00000000000000# Copyright 2011-2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For weights. 
""" import mock from nova.scheduler import weights as scheduler_weights from nova.scheduler.weights import ram from nova import test from nova.tests.unit.scheduler import fakes from nova import weights class TestWeigher(test.NoDBTestCase): def test_no_multiplier(self): class FakeWeigher(weights.BaseWeigher): def _weigh_object(self, *args, **kwargs): pass self.assertEqual(1.0, FakeWeigher().weight_multiplier(None)) def test_no_weight_object(self): class FakeWeigher(weights.BaseWeigher): def weight_multiplier(self, *args, **kwargs): pass self.assertRaises(TypeError, FakeWeigher) def test_normalization(self): # weight_list, expected_result, minval, maxval map_ = ( ((), (), None, None), ((0.0, 0.0), (0.0, 0.0), None, None), ((1.0, 1.0), (0.0, 0.0), None, None), ((20.0, 50.0), (0.0, 1.0), None, None), ((20.0, 50.0), (0.0, 0.375), None, 100.0), ((20.0, 50.0), (0.4, 1.0), 0.0, None), ((20.0, 50.0), (0.2, 0.5), 0.0, 100.0), ) for seq, result, minval, maxval in map_: ret = weights.normalize(seq, minval=minval, maxval=maxval) self.assertEqual(tuple(ret), result) @mock.patch('nova.weights.BaseWeigher.weigh_objects') def test_only_one_host(self, mock_weigh): host_values = [ ('host1', 'node1', {'free_ram_mb': 512}), ] hostinfo = [fakes.FakeHostState(host, node, values) for host, node, values in host_values] weight_handler = scheduler_weights.HostWeightHandler() weighers = [ram.RAMWeigher()] weighed_host = weight_handler.get_weighed_objects(weighers, hostinfo, {}) self.assertEqual(1, len(weighed_host)) self.assertEqual('host1', weighed_host[0].obj.host) self.assertFalse(mock_weigh.called) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/test_wsgi.py0000664000175000017500000002765400000000000020252 0ustar00zuulzuul00000000000000# Copyright 2011 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Unit tests for `nova.wsgi`.""" import os.path import socket import tempfile import eventlet import eventlet.wsgi import mock from oslo_config import cfg import requests import six import testtools import webob import nova.api.wsgi import nova.exception from nova import test from nova.tests.unit import utils import nova.wsgi SSL_CERT_DIR = os.path.normpath(os.path.join( os.path.dirname(os.path.abspath(__file__)), 'ssl_cert')) CONF = cfg.CONF class TestLoaderNothingExists(test.NoDBTestCase): """Loader tests where os.path.exists always returns False.""" def setUp(self): super(TestLoaderNothingExists, self).setUp() self.stub_out('os.path.exists', lambda _: False) def test_relpath_config_not_found(self): self.flags(api_paste_config='api-paste.ini', group='wsgi') self.assertRaises( nova.exception.ConfigNotFound, nova.api.wsgi.Loader, ) def test_asbpath_config_not_found(self): self.flags(api_paste_config='/etc/nova/api-paste.ini', group='wsgi') self.assertRaises( nova.exception.ConfigNotFound, nova.api.wsgi.Loader, ) class TestLoaderNormalFilesystem(test.NoDBTestCase): """Loader tests with normal filesystem (unmodified os.path module).""" _paste_config = """ [app:test_app] use = egg:Paste#static document_root = /tmp """ def setUp(self): super(TestLoaderNormalFilesystem, self).setUp() self.config = tempfile.NamedTemporaryFile(mode="w+t") self.config.write(self._paste_config.lstrip()) self.config.seek(0) self.config.flush() self.loader = nova.api.wsgi.Loader(self.config.name) def test_config_found(self): self.assertEqual(self.config.name, self.loader.config_path) def test_app_not_found(self): self.assertRaises( nova.exception.PasteAppNotFound, self.loader.load_app, "nonexistent app", ) def test_app_found(self): url_parser = self.loader.load_app("test_app") self.assertEqual("/tmp", url_parser.directory) def tearDown(self): self.config.close() super(TestLoaderNormalFilesystem, self).tearDown() class TestWSGIServer(test.NoDBTestCase): """WSGI server tests.""" def test_no_app(self): server = nova.wsgi.Server("test_app", None) self.assertEqual("test_app", server.name) def test_custom_max_header_line(self): self.flags(max_header_line=4096, group='wsgi') # Default is 16384 nova.wsgi.Server("test_custom_max_header_line", None) self.assertEqual(CONF.wsgi.max_header_line, eventlet.wsgi.MAX_HEADER_LINE) def test_start_random_port(self): server = nova.wsgi.Server("test_random_port", None, host="127.0.0.1", port=0) server.start() self.assertNotEqual(0, server.port) server.stop() server.wait() @testtools.skipIf(not utils.is_ipv6_supported(), "no ipv6 support") def test_start_random_port_with_ipv6(self): server = nova.wsgi.Server("test_random_port", None, host="::1", port=0) server.start() self.assertEqual("::1", server.host) self.assertNotEqual(0, server.port) server.stop() server.wait() @testtools.skipIf(not utils.is_linux(), 'SO_REUSEADDR behaves differently ' 'on OSX and BSD, see bugs ' '1436895 and 1467145') def test_socket_options_for_simple_server(self): # test normal socket options has set properly self.flags(tcp_keepidle=500, group='wsgi') server = nova.wsgi.Server("test_socket_options", None, host="127.0.0.1", port=0) server.start() sock = server._socket self.assertEqual(1, sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)) self.assertEqual(1, sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)) if hasattr(socket, 'TCP_KEEPIDLE'): self.assertEqual(CONF.wsgi.tcp_keepidle, sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE)) server.stop() server.wait() def test_server_pool_waitall(self): 
# test pools waitall method gets called while stopping server server = nova.wsgi.Server("test_server", None, host="127.0.0.1") server.start() with mock.patch.object(server._pool, 'waitall') as mock_waitall: server.stop() server.wait() mock_waitall.assert_called_once_with() def test_uri_length_limit(self): server = nova.wsgi.Server("test_uri_length_limit", None, host="127.0.0.1", max_url_len=16384) server.start() uri = "http://127.0.0.1:%d/%s" % (server.port, 10000 * 'x') resp = requests.get(uri, proxies={"http": ""}) eventlet.sleep(0) self.assertNotEqual(resp.status_code, requests.codes.REQUEST_URI_TOO_LARGE) uri = "http://127.0.0.1:%d/%s" % (server.port, 20000 * 'x') resp = requests.get(uri, proxies={"http": ""}) eventlet.sleep(0) self.assertEqual(resp.status_code, requests.codes.REQUEST_URI_TOO_LARGE) server.stop() server.wait() def test_reset_pool_size_to_default(self): server = nova.wsgi.Server("test_resize", None, host="127.0.0.1", max_url_len=16384) server.start() # Stopping the server, which in turn sets pool size to 0 server.stop() self.assertEqual(server._pool.size, 0) # Resetting pool size to default server.reset() server.start() self.assertEqual(server._pool.size, CONF.wsgi.default_pool_size) def test_client_socket_timeout(self): self.flags(client_socket_timeout=5, group='wsgi') # mocking eventlet spawn method to check it is called with # configured 'client_socket_timeout' value. with mock.patch.object(eventlet, 'spawn') as mock_spawn: server = nova.wsgi.Server("test_app", None, host="127.0.0.1", port=0) server.start() _, kwargs = mock_spawn.call_args self.assertEqual(CONF.wsgi.client_socket_timeout, kwargs['socket_timeout']) server.stop() def test_keep_alive(self): self.flags(keep_alive=False, group='wsgi') # mocking eventlet spawn method to check it is called with # configured 'keep_alive' value. 
with mock.patch.object(eventlet, 'spawn') as mock_spawn: server = nova.wsgi.Server("test_app", None, host="127.0.0.1", port=0) server.start() _, kwargs = mock_spawn.call_args self.assertEqual(CONF.wsgi.keep_alive, kwargs['keepalive']) server.stop() @testtools.skipIf(six.PY3, "bug/1482633: test hangs on Python 3") class TestWSGIServerWithSSL(test.NoDBTestCase): """WSGI server with SSL tests.""" def setUp(self): super(TestWSGIServerWithSSL, self).setUp() self.flags(enabled_ssl_apis=['fake_ssl']) self.flags( ssl_cert_file=os.path.join(SSL_CERT_DIR, 'certificate.crt'), ssl_key_file=os.path.join(SSL_CERT_DIR, 'privatekey.key'), group='wsgi') def test_ssl_server(self): def test_app(env, start_response): start_response('200 OK', {}) return ['PONG'] fake_ssl_server = nova.wsgi.Server("fake_ssl", test_app, host="127.0.0.1", port=0, use_ssl=True) fake_ssl_server.start() self.assertNotEqual(0, fake_ssl_server.port) response = requests.post( 'https://127.0.0.1:%s/' % fake_ssl_server.port, verify=os.path.join(SSL_CERT_DIR, 'ca.crt'), data='PING') self.assertEqual(response.text, 'PONG') fake_ssl_server.stop() fake_ssl_server.wait() def test_two_servers(self): def test_app(env, start_response): start_response('200 OK', {}) return ['PONG'] fake_ssl_server = nova.wsgi.Server("fake_ssl", test_app, host="127.0.0.1", port=0, use_ssl=True) fake_ssl_server.start() self.assertNotEqual(0, fake_ssl_server.port) fake_server = nova.wsgi.Server("fake", test_app, host="127.0.0.1", port=0) fake_server.start() self.assertNotEqual(0, fake_server.port) response = requests.post( 'https://127.0.0.1:%s/' % fake_ssl_server.port, verify=os.path.join(SSL_CERT_DIR, 'ca.crt'), data='PING') self.assertEqual(response.text, 'PONG') response = requests.post('http://127.0.0.1:%s/' % fake_server.port, data='PING') self.assertEqual(response.text, 'PONG') fake_ssl_server.stop() fake_ssl_server.wait() fake_server.stop() fake_server.wait() @testtools.skipIf(not utils.is_linux(), 'SO_REUSEADDR behaves differently ' 'on OSX and BSD, see bugs ' '1436895 and 1467145') def test_socket_options_for_ssl_server(self): # test normal socket options has set properly self.flags(tcp_keepidle=500, group='wsgi') server = nova.wsgi.Server("test_socket_options", None, host="127.0.0.1", port=0, use_ssl=True) server.start() sock = server._socket self.assertEqual(1, sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)) self.assertEqual(1, sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)) if hasattr(socket, 'TCP_KEEPIDLE'): self.assertEqual(CONF.wsgi.tcp_keepidle, sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE)) server.stop() server.wait() @testtools.skipIf(not utils.is_ipv6_supported(), "no ipv6 support") def test_app_using_ipv6_and_ssl(self): greetings = 'Hello, World!!!' @webob.dec.wsgify def hello_world(req): return greetings server = nova.wsgi.Server("fake_ssl", hello_world, host="::1", port=0, use_ssl=True) server.start() response = requests.get('https://[::1]:%d/' % server.port, verify=os.path.join(SSL_CERT_DIR, 'ca.crt')) self.assertEqual(greetings, response.text) server.stop() server.wait() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/utils.py0000664000175000017500000002676000000000000017377 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import errno import platform import socket import sys import mock from six.moves import range from nova.compute import flavors import nova.conf import nova.context from nova.db import api as db from nova import exception from nova.image import glance from nova.network import model as network_model from nova import objects from nova.objects import base as obj_base import nova.utils CONF = nova.conf.CONF def get_test_admin_context(): return nova.context.get_admin_context() def get_test_image_object(context, instance_ref): if not context: context = get_test_admin_context() image_ref = instance_ref['image_ref'] image_service, image_id = glance.get_remote_image_service(context, image_ref) return objects.ImageMeta.from_dict( image_service.show(context, image_id)) def get_test_flavor(context=None, options=None): options = options or {} if not context: context = get_test_admin_context() test_flavor = {'name': 'kinda.big', 'flavorid': 'someid', 'memory_mb': 2048, 'vcpus': 4, 'root_gb': 40, 'ephemeral_gb': 80, 'swap': 1024} test_flavor.update(options) flavor_ref = objects.Flavor(context, **test_flavor) try: flavor_ref.create() except (exception.FlavorExists, exception.FlavorIdExists): flavor_ref = objects.Flavor.get_by_flavor_id( context, test_flavor['flavorid']) return flavor_ref def get_test_instance(context=None, flavor=None, obj=False): if not context: context = get_test_admin_context() if not flavor: flavor = get_test_flavor(context) test_instance = {'memory_kb': '2048000', 'basepath': '/some/path', 'bridge_name': 'br100', 'vcpus': 4, 'root_gb': 40, 'bridge': 'br101', 'image_ref': 'cedef40a-ed67-4d10-800e-17455edce175', 'instance_type_id': flavor['id'], 'system_metadata': {}, 'extra_specs': {}, 'user_id': context.user_id, 'project_id': context.project_id, } if obj: instance = objects.Instance(context, **test_instance) instance.flavor = objects.Flavor.get_by_id(context, flavor['id']) instance.create() else: flavors.save_flavor_info(test_instance['system_metadata'], flavor, '') instance = db.instance_create(context, test_instance) return instance FAKE_NETWORK_VLAN = 100 FAKE_NETWORK_BRIDGE = 'br0' FAKE_NETWORK_INTERFACE = 'eth0' FAKE_NETWORK_IP4_ADDR1 = '10.0.0.73' FAKE_NETWORK_IP4_ADDR2 = '10.0.0.74' FAKE_NETWORK_IP6_ADDR1 = '2001:b105:f00d::1' FAKE_NETWORK_IP6_ADDR2 = '2001:b105:f00d::2' FAKE_NETWORK_IP6_ADDR3 = '2001:b105:f00d::3' FAKE_NETWORK_IP4_GATEWAY = '10.0.0.254' FAKE_NETWORK_IP6_GATEWAY = '2001:b105:f00d::ff' FAKE_NETWORK_IP4_CIDR = '10.0.0.0/24' FAKE_NETWORK_IP6_CIDR = '2001:b105:f00d::0/64' FAKE_NETWORK_DNS_IP4_ADDR1 = '192.168.122.1' FAKE_NETWORK_DNS_IP4_ADDR2 = '192.168.122.2' FAKE_NETWORK_DNS_IP6_ADDR1 = '2001:dead:beef::1' FAKE_NETWORK_DNS_IP6_ADDR2 = '2001:dead:beef::2' FAKE_NETWORK_DHCP_IP4_ADDR = '192.168.122.253' FAKE_NETWORK_UUID = '4587c867-a2e6-4356-8c5b-bc077dcb8620' FAKE_VIF_UUID = '51a9642b-1414-4bd6-9a92-1320ddc55a63' FAKE_VIF_MAC = 'de:ad:be:ef:ca:fe' def get_test_network_info(count=1): def current(): subnet_4 = network_model.Subnet( cidr=FAKE_NETWORK_IP4_CIDR, dns=[network_model.IP(FAKE_NETWORK_DNS_IP4_ADDR1), 
network_model.IP(FAKE_NETWORK_DNS_IP4_ADDR2)], gateway=network_model.IP(FAKE_NETWORK_IP4_GATEWAY), ips=[network_model.IP(FAKE_NETWORK_IP4_ADDR1), network_model.IP(FAKE_NETWORK_IP4_ADDR2)], routes=None, dhcp_server=FAKE_NETWORK_DHCP_IP4_ADDR) subnet_6 = network_model.Subnet( cidr=FAKE_NETWORK_IP6_CIDR, gateway=network_model.IP(FAKE_NETWORK_IP6_GATEWAY), ips=[network_model.IP(FAKE_NETWORK_IP6_ADDR1), network_model.IP(FAKE_NETWORK_IP6_ADDR2), network_model.IP(FAKE_NETWORK_IP6_ADDR3)], routes=None, version=6) subnets = [subnet_4, subnet_6] network = network_model.Network( id=FAKE_NETWORK_UUID, bridge=FAKE_NETWORK_BRIDGE, label=None, subnets=subnets, vlan=FAKE_NETWORK_VLAN, bridge_interface=FAKE_NETWORK_INTERFACE, injected=False) vif = network_model.VIF( id=FAKE_VIF_UUID, address=FAKE_VIF_MAC, network=network, type=network_model.VIF_TYPE_OVS, devname=None, ovs_interfaceid=None) return vif return network_model.NetworkInfo([current() for x in range(0, count)]) def is_osx(): return platform.mac_ver()[0] != '' def is_linux(): return platform.system() == 'Linux' def killer_xml_body(): return ((""" ]> %(d)s """) % { 'a': 'A' * 10, 'b': '&a;' * 10, 'c': '&b;' * 10, 'd': '&c;' * 9999, }).strip() def is_ipv6_supported(): has_ipv6_support = socket.has_ipv6 try: s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM) s.close() except socket.error as e: if e.errno == errno.EAFNOSUPPORT: has_ipv6_support = False else: raise # check if there is at least one interface with ipv6 if has_ipv6_support and sys.platform.startswith('linux'): try: with open('/proc/net/if_inet6') as f: if not f.read(): has_ipv6_support = False except IOError: has_ipv6_support = False return has_ipv6_support def get_api_version(request): if request.path[2:3].isdigit(): return int(request.path[2:3]) def compare_obj_primitive(thing1, thing2): if isinstance(thing1, obj_base.NovaObject): return thing1.obj_to_primitive() == thing2.obj_to_primitive() else: return thing1 == thing2 def _compare_args(args1, args2, cmp): return all(cmp(*pair) for pair in zip(args1, args2)) def _compare_kwargs(kwargs1, kwargs2, cmp): return all(cmp(kwargs1[k], kwargs2[k]) for k in set(list(kwargs1.keys()) + list(kwargs2.keys()))) def _obj_called_with(the_mock, *args, **kwargs): if 'obj_cmp' in kwargs: cmp = kwargs.pop('obj_cmp') else: cmp = compare_obj_primitive count = 0 for call in the_mock.call_args_list: if (_compare_args(call[0], args, cmp) and _compare_kwargs(call[1], kwargs, cmp)): count += 1 return count def obj_called_with(the_mock, *args, **kwargs): return _obj_called_with(the_mock, *args, **kwargs) != 0 def obj_called_once_with(the_mock, *args, **kwargs): return _obj_called_with(the_mock, *args, **kwargs) == 1 class CustomMockCallMatcher(object): def __init__(self, comparator): self.comparator = comparator def __eq__(self, other): return self.comparator(other) class ItemsMatcher(CustomMockCallMatcher): """Convenience matcher for iterables (mainly lists) where the order is unpredictable. Usage:: my_mock.assert_called_once_with( ..., listy_kwarg=ItemsMatcher(['foo', 'bar', 'baz']), ...) Will pass if the mock is called with a listy_kwarg containing 'foo', 'bar', 'baz' in any order. E.g. the following will all pass:: # Called with a list in some other order my_mock(..., listy_kwarg=['baz', 'foo', 'bar'], ...) # Called with a set my_mock(..., listy_kwarg={'baz', 'foo', 'bar'}, ...) # Called with a tuple in some other order my_mock(..., listy_kwarg=('baz', 'foo', 'bar'), ...) But the following will fail:: my_mock(..., listy_kwarg=['foo', 'bar'], ...) 
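Note that the comparison is multiset-based (it is implemented with collections.Counter), so duplicates must occur the same number of times on both sides; for the matcher above, my_mock(..., listy_kwarg=['foo', 'foo', 'bar', 'baz'], ...) would fail as well.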
""" def __init__(self, iterable): # NOTE(gibi): we need the extra iter() call as Counter handles dicts # directly to initialize item count. However if a dict passed to # ItemMatcher we want to use the keys of such dict as an iterable to # initialize the Counter instead. self.bag = collections.Counter(iter(iterable)) super(ItemsMatcher, self).__init__( lambda other: self.bag == collections.Counter(iter(other))) def __repr__(self): """This exists so a failed assertion prints something useful.""" return 'ItemsMatcher(%r)' % list(self.bag.elements()) def assert_instance_delete_notification_by_uuid( mock_legacy_notify, mock_notify, expected_instance_uuid, expected_notifier, expected_context, expect_targeted_context=False, expected_source='nova-api', expected_host='fake-mini'): match_by_instance_uuid = CustomMockCallMatcher( lambda instance: instance.uuid == expected_instance_uuid) assert_legacy_instance_delete_notification_by_uuid( mock_legacy_notify, match_by_instance_uuid, expected_notifier, expected_context, expect_targeted_context) assert_versioned_instance_delete_notification_by_uuid( mock_notify, match_by_instance_uuid, expected_context, expected_source, expected_host=expected_host) def assert_versioned_instance_delete_notification_by_uuid( mock_notify, instance_matcher, expected_context, expected_source, expected_host): mock_notify.assert_has_calls([ mock.call(expected_context, instance_matcher, host=expected_host, source=expected_source, action='delete', phase='start'), mock.call(expected_context, instance_matcher, host=expected_host, source=expected_source, action='delete', phase='end')]) def assert_legacy_instance_delete_notification_by_uuid( mock_notify, instance_matcher, expected_notifier, expected_context, expect_targeted_context): mock_notify.assert_has_calls([ mock.call(expected_notifier, expected_context, instance_matcher, 'delete.start'), mock.call(expected_notifier, expected_context, instance_matcher, 'delete.end')]) for call in mock_notify.call_args_list: if expect_targeted_context: assert call[0][1].db_connection is not None else: assert call[0][1].db_connection is None ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5824678 nova-21.2.4/nova/tests/unit/virt/0000775000175000017500000000000000000000000016636 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/__init__.py0000664000175000017500000000000000000000000020735 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5864677 nova-21.2.4/nova/tests/unit/virt/disk/0000775000175000017500000000000000000000000017570 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/disk/__init__.py0000664000175000017500000000000000000000000021667 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5864677 nova-21.2.4/nova/tests/unit/virt/disk/mount/0000775000175000017500000000000000000000000020732 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/disk/mount/__init__.py0000664000175000017500000000000000000000000023031 
0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/disk/mount/test_api.py0000664000175000017500000002041300000000000023114 0ustar00zuulzuul00000000000000# Copyright 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_service import fixture as service_fixture from nova import test from nova.virt.disk.mount import api from nova.virt.disk.mount import block from nova.virt.disk.mount import loop from nova.virt.disk.mount import nbd from nova.virt.image import model as imgmodel PARTITION = 77 ORIG_DEVICE = "/dev/null" AUTOMAP_PARTITION = "/dev/nullp77" MAP_PARTITION = "/dev/mapper/nullp77" class MountTestCase(test.NoDBTestCase): def setUp(self): super(MountTestCase, self).setUp() # Make RetryDecorator not actually sleep on retries self.useFixture(service_fixture.SleepFixture()) def _test_map_dev(self, partition): mount = api.Mount(mock.sentinel.image, mock.sentinel.mount_dir) mount.device = ORIG_DEVICE mount.partition = partition mount.map_dev() return mount def _exists_effect(self, data): def exists_effect(filename): try: v = data[filename] if isinstance(v, list): if len(v) > 0: return v.pop(0) self.fail("Out of items for: %s" % filename) return v except KeyError: self.fail("Unexpected call with: %s" % filename) return exists_effect def _check_calls(self, exists, filenames, trailing=0): self.assertEqual([mock.call(x) for x in filenames], exists.call_args_list[:len(filenames)]) self.assertEqual([mock.call(MAP_PARTITION)] * trailing, exists.call_args_list[len(filenames):]) @mock.patch('os.path.exists') def test_map_dev_partition_search(self, exists): exists.side_effect = self._exists_effect({ ORIG_DEVICE: True}) mount = self._test_map_dev(-1) self._check_calls(exists, [ORIG_DEVICE]) self.assertNotEqual("", mount.error) self.assertFalse(mount.mapped) @mock.patch('os.path.exists') @mock.patch('nova.privsep.fs.create_device_maps', return_value=(None, None)) def test_map_dev_good(self, mock_create_maps, mock_exists): mock_exists.side_effect = self._exists_effect({ ORIG_DEVICE: True, AUTOMAP_PARTITION: False, MAP_PARTITION: [False, True]}) mount = self._test_map_dev(PARTITION) self._check_calls(mock_exists, [ORIG_DEVICE, AUTOMAP_PARTITION], 2) self.assertEqual("", mount.error) self.assertTrue(mount.mapped) @mock.patch('os.path.exists') @mock.patch('nova.privsep.fs.create_device_maps', return_value=(None, None)) def test_map_dev_error(self, mock_create_maps, mock_exists): mock_exists.side_effect = self._exists_effect({ ORIG_DEVICE: True, AUTOMAP_PARTITION: False, MAP_PARTITION: False}) mount = self._test_map_dev(PARTITION) self._check_calls(mock_exists, [ORIG_DEVICE, AUTOMAP_PARTITION], api.MAX_FILE_CHECKS + 1) self.assertNotEqual("", mount.error) self.assertFalse(mount.mapped) @mock.patch('os.path.exists') @mock.patch('nova.privsep.fs.create_device_maps', return_value=(None, None)) def test_map_dev_error_then_pass(self, 
mock_create_maps, mock_exists): mock_exists.side_effect = self._exists_effect({ ORIG_DEVICE: True, AUTOMAP_PARTITION: False, MAP_PARTITION: [False, False, True]}) mount = self._test_map_dev(PARTITION) self._check_calls(mock_exists, [ORIG_DEVICE, AUTOMAP_PARTITION], 3) self.assertEqual("", mount.error) self.assertTrue(mount.mapped) @mock.patch('os.path.exists') def test_map_dev_automap(self, exists): exists.side_effect = self._exists_effect({ ORIG_DEVICE: True, AUTOMAP_PARTITION: True}) mount = self._test_map_dev(PARTITION) self._check_calls(exists, [ORIG_DEVICE, AUTOMAP_PARTITION, AUTOMAP_PARTITION]) self.assertEqual(AUTOMAP_PARTITION, mount.mapped_device) self.assertTrue(mount.automapped) self.assertTrue(mount.mapped) @mock.patch('os.path.exists') def test_map_dev_else(self, exists): exists.side_effect = self._exists_effect({ ORIG_DEVICE: True, AUTOMAP_PARTITION: True}) mount = self._test_map_dev(None) self._check_calls(exists, [ORIG_DEVICE]) self.assertEqual(ORIG_DEVICE, mount.mapped_device) self.assertFalse(mount.automapped) self.assertTrue(mount.mapped) def test_instance_for_format_raw(self): image = imgmodel.LocalFileImage("/some/file.raw", imgmodel.FORMAT_RAW) mount_dir = '/mount/dir' partition = -1 inst = api.Mount.instance_for_format(image, mount_dir, partition) self.assertIsInstance(inst, loop.LoopMount) def test_instance_for_format_qcow2(self): image = imgmodel.LocalFileImage("/some/file.qcows", imgmodel.FORMAT_QCOW2) mount_dir = '/mount/dir' partition = -1 inst = api.Mount.instance_for_format(image, mount_dir, partition) self.assertIsInstance(inst, nbd.NbdMount) def test_instance_for_format_block(self): image = imgmodel.LocalBlockImage( "/dev/mapper/instances--instance-0000001_disk",) mount_dir = '/mount/dir' partition = -1 inst = api.Mount.instance_for_format(image, mount_dir, partition) self.assertIsInstance(inst, block.BlockMount) def test_instance_for_device_loop(self): image = mock.MagicMock() mount_dir = '/mount/dir' partition = -1 device = '/dev/loop0' inst = api.Mount.instance_for_device(image, mount_dir, partition, device) self.assertIsInstance(inst, loop.LoopMount) def test_instance_for_device_loop_partition(self): image = mock.MagicMock() mount_dir = '/mount/dir' partition = 1 device = '/dev/mapper/loop0p1' inst = api.Mount.instance_for_device(image, mount_dir, partition, device) self.assertIsInstance(inst, loop.LoopMount) def test_instance_for_device_nbd(self): image = mock.MagicMock() mount_dir = '/mount/dir' partition = -1 device = '/dev/nbd0' inst = api.Mount.instance_for_device(image, mount_dir, partition, device) self.assertIsInstance(inst, nbd.NbdMount) def test_instance_for_device_nbd_partition(self): image = mock.MagicMock() mount_dir = '/mount/dir' partition = 1 device = '/dev/mapper/nbd0p1' inst = api.Mount.instance_for_device(image, mount_dir, partition, device) self.assertIsInstance(inst, nbd.NbdMount) def test_instance_for_device_block(self): image = mock.MagicMock() mount_dir = '/mount/dir' partition = -1 device = '/dev/mapper/instances--instance-0000001_disk' inst = api.Mount.instance_for_device(image, mount_dir, partition, device) self.assertIsInstance(inst, block.BlockMount) def test_instance_for_device_block_partiton(self,): image = mock.MagicMock() mount_dir = '/mount/dir' partition = 1 device = '/dev/mapper/instances--instance-0000001_diskp1' inst = api.Mount.instance_for_device(image, mount_dir, partition, device) self.assertIsInstance(inst, block.BlockMount) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 
mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/disk/mount/test_block.py0000664000175000017500000000272600000000000023444 0ustar00zuulzuul00000000000000# Copyright 2015 Rackspace Hosting, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures from nova import test from nova.virt.disk.mount import block from nova.virt.image import model as imgmodel class LoopTestCase(test.NoDBTestCase): def setUp(self): super(LoopTestCase, self).setUp() device_path = '/dev/mapper/instances--instance-0000001_disk' self.image = imgmodel.LocalBlockImage(device_path) def test_get_dev(self): tempdir = self.useFixture(fixtures.TempDir()).path b = block.BlockMount(self.image, tempdir) self.assertTrue(b.get_dev()) self.assertTrue(b.linked) self.assertEqual(self.image.path, b.device) def test_unget_dev(self): tempdir = self.useFixture(fixtures.TempDir()).path b = block.BlockMount(self.image, tempdir) b.unget_dev() self.assertIsNone(b.device) self.assertFalse(b.linked) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/disk/mount/test_loop.py0000664000175000017500000000635500000000000023325 0ustar00zuulzuul00000000000000# Copyright 2012 Michael Still # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
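# The LoopMount tests that follow never touch a real loop device:
# nova.privsep.fs.loopsetup/loopremove are mocked out, and the timeout case
# forces failure by monkeypatching time.sleep and MAX_DEVICE_WAIT.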
import fixtures import mock from nova import test from nova.virt.disk.mount import loop from nova.virt.image import model as imgmodel def _fake_noop(*args, **kwargs): return class LoopTestCase(test.NoDBTestCase): def setUp(self): super(LoopTestCase, self).setUp() self.file = imgmodel.LocalFileImage("/some/file.qcow2", imgmodel.FORMAT_QCOW2) @mock.patch('nova.privsep.fs.loopsetup', return_value=('/dev/loop0', '')) @mock.patch('nova.privsep.fs.loopremove') def test_get_dev(self, mock_loopremove, mock_loopsetup): tempdir = self.useFixture(fixtures.TempDir()).path mount = loop.LoopMount(self.file, tempdir) # No error logged, device consumed self.assertTrue(mount.get_dev()) self.assertTrue(mount.linked) self.assertEqual('', mount.error) self.assertEqual('/dev/loop0', mount.device) # Free mount.unget_dev() self.assertFalse(mount.linked) self.assertEqual('', mount.error) self.assertIsNone(mount.device) @mock.patch('nova.privsep.fs.loopsetup', return_value=('', 'doh')) def test_inner_get_dev_fails(self, mock_loopsetup): tempdir = self.useFixture(fixtures.TempDir()).path mount = loop.LoopMount(self.file, tempdir) # No error logged, device consumed self.assertFalse(mount._inner_get_dev()) self.assertFalse(mount.linked) self.assertNotEqual('', mount.error) self.assertIsNone(mount.device) # Free mount.unget_dev() self.assertFalse(mount.linked) self.assertIsNone(mount.device) @mock.patch('nova.privsep.fs.loopsetup', return_value=('', 'doh')) def test_get_dev_timeout(self, mock_loopsetup): tempdir = self.useFixture(fixtures.TempDir()).path mount = loop.LoopMount(self.file, tempdir) self.useFixture(fixtures.MonkeyPatch('time.sleep', _fake_noop)) self.useFixture(fixtures.MonkeyPatch(('nova.virt.disk.mount.api.' 'MAX_DEVICE_WAIT'), -10)) # Always fail to get a device def fake_get_dev_fails(): return False mount._inner_get_dev = fake_get_dev_fails # Fail to get a device self.assertFalse(mount.get_dev()) @mock.patch('nova.privsep.fs.loopremove') def test_unget_dev(self, mock_loopremove): tempdir = self.useFixture(fixtures.TempDir()).path mount = loop.LoopMount(self.file, tempdir) # This just checks that a free of something we don't have doesn't # throw an exception mount.unget_dev() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/virt/disk/mount/test_nbd.py0000664000175000017500000003144700000000000023117 0ustar00zuulzuul00000000000000# Copyright 2012 Michael Still and Canonical Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
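# The NbdMount tests that follow avoid real /dev/nbd* devices entirely:
# /sys/block probing, the qemu-nbd connect/disconnect privsep helpers and
# time.sleep are faked or mocked, including a two-green-thread check for the
# device allocation race described in bug 1207422.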
import mock import os import tempfile import time import eventlet import fixtures import six from nova import test from nova.virt.disk.mount import nbd from nova.virt.image import model as imgmodel ORIG_EXISTS = os.path.exists ORIG_LISTDIR = os.listdir def _fake_exists_no_users(path): if path.startswith('/sys/block/nbd'): if path.endswith('pid'): return False return True return ORIG_EXISTS(path) def _fake_listdir_nbd_devices(path): if isinstance(path, six.string_types) and path.startswith('/sys/block'): return ['nbd0', 'nbd1'] return ORIG_LISTDIR(path) def _fake_exists_all_used(path): if path.startswith('/sys/block/nbd'): return True return ORIG_EXISTS(path) def _fake_detect_nbd_devices_none(): return [] def _fake_detect_nbd_devices(): return ['nbd0', 'nbd1'] def _fake_noop(*args, **kwargs): return class NbdTestCaseNoStub(test.NoDBTestCase): @mock.patch('os.listdir') def test_detect_nbd_devices(self, list_dir_mock): list_dir_mock.return_value = _fake_detect_nbd_devices() result = nbd.NbdMount._detect_nbd_devices() self.assertIsNotNone(result) self.assertIsInstance(result, list) self.assertEqual(len(list_dir_mock.return_value), len(result)) for path in list_dir_mock.return_value: self.assertIn(path, result) @mock.patch('os.listdir') def test_detect_nbd_devices_empty(self, list_dir_mock): list_dir_mock.return_value = [ "nbdz", "fake0", "not-nbd1"] result = nbd.NbdMount._detect_nbd_devices() self.assertIsNotNone(result) self.assertIsInstance(result, list) self.assertEqual(0, len(result)) class NbdTestCase(test.NoDBTestCase): def setUp(self): super(NbdTestCase, self).setUp() self.stub_out('nova.virt.disk.mount.nbd.NbdMount._detect_nbd_devices', _fake_detect_nbd_devices) self.useFixture(fixtures.MonkeyPatch('os.listdir', _fake_listdir_nbd_devices)) self.file = imgmodel.LocalFileImage("/some/file.qcow2", imgmodel.FORMAT_QCOW2) def test_nbd_no_devices(self): tempdir = self.useFixture(fixtures.TempDir()).path self.stub_out('nova.virt.disk.mount.nbd.NbdMount._detect_nbd_devices', _fake_detect_nbd_devices_none) n = nbd.NbdMount(self.file, tempdir) self.assertIsNone(n._allocate_nbd()) def test_nbd_no_free_devices(self): tempdir = self.useFixture(fixtures.TempDir()).path n = nbd.NbdMount(self.file, tempdir) self.useFixture(fixtures.MonkeyPatch('os.path.exists', _fake_exists_all_used)) self.assertIsNone(n._allocate_nbd()) def test_nbd_not_loaded(self): tempdir = self.useFixture(fixtures.TempDir()).path n = nbd.NbdMount(self.file, tempdir) # Fake out os.path.exists def fake_exists(path): if path.startswith('/sys/block/nbd'): return False return ORIG_EXISTS(path) self.useFixture(fixtures.MonkeyPatch('os.path.exists', fake_exists)) # This should fail, as we don't have the module "loaded" # TODO(mikal): work out how to force english as the gettext language # so that the error check always passes self.assertIsNone(n._allocate_nbd()) self.assertEqual('nbd unavailable: module not loaded', n.error) def test_nbd_allocation(self): tempdir = self.useFixture(fixtures.TempDir()).path n = nbd.NbdMount(self.file, tempdir) self.useFixture(fixtures.MonkeyPatch('os.path.exists', _fake_exists_no_users)) self.useFixture(fixtures.MonkeyPatch('random.shuffle', _fake_noop)) # Allocate a nbd device self.assertEqual('/dev/nbd0', n._allocate_nbd()) def test_nbd_allocation_one_in_use(self): tempdir = self.useFixture(fixtures.TempDir()).path n = nbd.NbdMount(self.file, tempdir) self.useFixture(fixtures.MonkeyPatch('random.shuffle', _fake_noop)) # Fake out os.path.exists def fake_exists(path): if path.startswith('/sys/block/nbd'): 
if path == '/sys/block/nbd0/pid': return True if path.endswith('pid'): return False return True return ORIG_EXISTS(path) self.useFixture(fixtures.MonkeyPatch('os.path.exists', fake_exists)) # Allocate a nbd device, should not be the in use one # TODO(mikal): Note that there is a leak here, as the in use nbd device # is removed from the list, but not returned so it will never be # re-added. I will fix this in a later patch. self.assertEqual('/dev/nbd1', n._allocate_nbd()) def test_inner_get_dev_no_devices(self): tempdir = self.useFixture(fixtures.TempDir()).path self.stub_out('nova.virt.disk.mount.nbd.NbdMount._detect_nbd_devices', _fake_detect_nbd_devices_none) n = nbd.NbdMount(self.file, tempdir) self.assertFalse(n._inner_get_dev()) @mock.patch('nova.privsep.fs.nbd_connect', return_value=('', 'broken')) def test_inner_get_dev_qemu_fails(self, mock_nbd_connect): tempdir = self.useFixture(fixtures.TempDir()).path n = nbd.NbdMount(self.file, tempdir) self.useFixture(fixtures.MonkeyPatch('os.path.exists', _fake_exists_no_users)) # Error logged, no device consumed self.assertFalse(n._inner_get_dev()) self.assertTrue(n.error.startswith('qemu-nbd error')) @mock.patch('random.shuffle') @mock.patch('os.listdir', return_value=['nbd0', 'nbd1', 'loop0']) @mock.patch('nova.privsep.fs.nbd_connect', return_value=('', '')) @mock.patch('nova.privsep.fs.nbd_disconnect', return_value=('', '')) @mock.patch('time.sleep') def test_inner_get_dev_qemu_timeout(self, mock_sleep, mock_nbd_disconnct, mock_nbd_connect, mock_listdir, mock_shuffle): self.flags(timeout_nbd=3) tempdir = self.useFixture(fixtures.TempDir()).path with mock.patch('os.path.exists', side_effect=[ True, False, False, False, False, False, False, False]): n = nbd.NbdMount(self.file, tempdir) # Error logged, no device consumed self.assertFalse(n._inner_get_dev()) self.assertTrue(n.error.endswith('did not show up')) @mock.patch('time.sleep', new=mock.Mock()) @mock.patch('random.shuffle') @mock.patch('os.path.exists', side_effect=[True, False, False, False, False, True]) @mock.patch('os.listdir', return_value=['nbd0', 'nbd1', 'loop0']) @mock.patch('nova.privsep.fs.nbd_connect', return_value=('', '')) @mock.patch('nova.privsep.fs.nbd_disconnect') def test_inner_get_dev_works(self, mock_nbd_disconnect, mock_nbd_connect, mock_exists, mock_listdir, mock_shuffle): tempdir = self.useFixture(fixtures.TempDir()).path n = nbd.NbdMount(self.file, tempdir) # No error logged, device consumed self.assertTrue(n._inner_get_dev()) self.assertTrue(n.linked) self.assertEqual('', n.error) self.assertEqual('/dev/nbd0', n.device) # Free n.unget_dev() self.assertFalse(n.linked) self.assertEqual('', n.error) self.assertIsNone(n.device) def test_unget_dev_simple(self): # This test is just checking we don't get an exception when we unget # something we don't have tempdir = self.useFixture(fixtures.TempDir()).path n = nbd.NbdMount(self.file, tempdir) n.unget_dev() @mock.patch('time.sleep', new=mock.Mock()) @mock.patch('random.shuffle') @mock.patch('os.path.exists', side_effect=[True, False, False, False, False, True]) @mock.patch('os.listdir', return_value=['nbd0', 'nbd1', 'loop0']) @mock.patch('nova.privsep.fs.nbd_connect', return_value=('', '')) @mock.patch('nova.privsep.fs.nbd_disconnect') def test_get_dev(self, mock_nbd_disconnect, mock_nbd_connect, mock_exists, mock_listdir, mock_shuffle): tempdir = self.useFixture(fixtures.TempDir()).path n = nbd.NbdMount(self.file, tempdir) # No error logged, device consumed self.assertTrue(n.get_dev()) self.assertTrue(n.linked) 
self.assertEqual('', n.error) self.assertEqual('/dev/nbd0', n.device) # Free n.unget_dev() self.assertFalse(n.linked) self.assertEqual('', n.error) self.assertIsNone(n.device) @mock.patch('random.shuffle') @mock.patch('time.sleep') @mock.patch('nova.privsep.fs.nbd_connect') @mock.patch('nova.privsep.fs.nbd_disconnect') @mock.patch('os.path.exists', return_value=True) @mock.patch('nova.virt.disk.mount.nbd.NbdMount._inner_get_dev', return_value=False) def test_get_dev_timeout(self, mock_get_dev, mock_exists, mock_nbd_disconnect, mock_nbd_connect, mock_sleep, mock_shuffle): tempdir = self.useFixture(fixtures.TempDir()).path n = nbd.NbdMount(self.file, tempdir) self.useFixture(fixtures.MonkeyPatch(('nova.virt.disk.mount.api.' 'MAX_DEVICE_WAIT'), -10)) # No error logged, device consumed self.assertFalse(n.get_dev()) @mock.patch('nova.privsep.fs.mount', return_value=('', 'broken')) def test_do_mount_need_to_specify_fs_type(self, mock_mount): # NOTE(mikal): Bug 1094373 saw a regression where we failed to # communicate a failed mount properly. imgfile = tempfile.NamedTemporaryFile() self.addCleanup(imgfile.close) tempdir = self.useFixture(fixtures.TempDir()).path mount = nbd.NbdMount(imgfile.name, tempdir) def fake_returns_true(*args, **kwargs): return True mount.get_dev = fake_returns_true mount.map_dev = fake_returns_true self.assertFalse(mount.do_mount()) mock_mount.assert_called_once_with( None, None, tempdir, None) @mock.patch('nova.privsep.fs.nbd_connect') @mock.patch('nova.privsep.fs.nbd_disconnect') @mock.patch('os.path.exists') def test_device_creation_race(self, mock_exists, mock_nbd_disconnect, mock_nbd_connect): # Make sure that even if two threads create instances at the same time # they cannot choose the same nbd number (see bug 1207422) tempdir = self.useFixture(fixtures.TempDir()).path free_devices = _fake_detect_nbd_devices()[:] chosen_devices = [] def fake_find_unused(self): return os.path.join('/dev', free_devices[-1]) def delay_and_remove_device(*args, **kwargs): # Ensure that context switch happens before the device is marked # as used. This will cause a failure without nbd-allocation-lock # in place. time.sleep(0.1) # We always choose the top device in find_unused - remove it now. free_devices.pop() return '', '' def pid_exists(pidfile): return pidfile not in [os.path.join('/sys/block', dev, 'pid') for dev in free_devices] self.stub_out('nova.virt.disk.mount.nbd.NbdMount._allocate_nbd', fake_find_unused) mock_nbd_connect.side_effect = delay_and_remove_device mock_exists.side_effect = pid_exists def get_a_device(): n = nbd.NbdMount(self.file, tempdir) n.get_dev() chosen_devices.append(n.device) thread1 = eventlet.spawn(get_a_device) thread2 = eventlet.spawn(get_a_device) thread1.wait() thread2.wait() self.assertEqual(2, len(chosen_devices)) self.assertNotEqual(chosen_devices[0], chosen_devices[1]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/virt/disk/test_api.py0000664000175000017500000001746200000000000021764 0ustar00zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import tempfile import mock from oslo_concurrency import processutils from oslo_utils import units from nova import test from nova.virt.disk import api from nova.virt.disk.mount import api as mount from nova.virt.disk.vfs import localfs from nova.virt.image import model as imgmodel class FakeMount(object): device = None @staticmethod def instance_for_format(image, mountdir, partition): return FakeMount() def get_dev(self): pass def unget_dev(self): pass class APITestCase(test.NoDBTestCase): @mock.patch.object(localfs.VFSLocalFS, 'get_image_fs', autospec=True, return_value='') def test_can_resize_need_fs_type_specified(self, mock_image_fs): imgfile = tempfile.NamedTemporaryFile() self.addCleanup(imgfile.close) image = imgmodel.LocalFileImage(imgfile.name, imgmodel.FORMAT_QCOW2) self.assertFalse(api.is_image_extendable(image)) self.assertTrue(mock_image_fs.called) @mock.patch('oslo_concurrency.processutils.execute', autospec=True) def test_is_image_extendable_raw(self, mock_exec): imgfile = tempfile.NamedTemporaryFile() image = imgmodel.LocalFileImage(imgfile, imgmodel.FORMAT_RAW) self.addCleanup(imgfile.close) self.assertTrue(api.is_image_extendable(image)) mock_exec.assert_called_once_with('e2label', imgfile) @mock.patch('oslo_concurrency.processutils.execute', autospec=True) def test_resize2fs_success(self, mock_exec): imgfile = tempfile.NamedTemporaryFile() self.addCleanup(imgfile.close) api.resize2fs(imgfile) mock_exec.assert_has_calls( [mock.call('e2fsck', '-fp', imgfile, check_exit_code=[0, 1, 2]), mock.call('resize2fs', imgfile, check_exit_code=False)]) @mock.patch('oslo_concurrency.processutils.execute') @mock.patch('nova.privsep.fs.resize2fs') @mock.patch('nova.privsep.fs.e2fsck') def test_resize2fs_success_as_root(self, mock_fsck, mock_resize, mock_exec): imgfile = tempfile.NamedTemporaryFile() self.addCleanup(imgfile.close) api.resize2fs(imgfile, run_as_root=True) mock_exec.assert_not_called() mock_resize.assert_called() mock_fsck.assert_called() @mock.patch('oslo_concurrency.processutils.execute', autospec=True, side_effect=processutils.ProcessExecutionError("fs error")) def test_resize2fs_e2fsck_fails(self, mock_exec): imgfile = tempfile.NamedTemporaryFile() self.addCleanup(imgfile.close) api.resize2fs(imgfile) mock_exec.assert_called_once_with('e2fsck', '-fp', imgfile, check_exit_code=[0, 1, 2]) @mock.patch.object(api, 'can_resize_image', autospec=True, return_value=True) @mock.patch.object(api, 'is_image_extendable', autospec=True, return_value=True) @mock.patch.object(api, 'resize2fs', autospec=True) @mock.patch.object(mount.Mount, 'instance_for_format') @mock.patch('oslo_concurrency.processutils.execute', autospec=True) def test_extend_qcow_success(self, mock_exec, mock_inst, mock_resize, mock_extendable, mock_can_resize): imgfile = tempfile.NamedTemporaryFile() self.addCleanup(imgfile.close) imgsize = 10 device = "/dev/sdh" image = imgmodel.LocalFileImage(imgfile, imgmodel.FORMAT_QCOW2) self.flags(resize_fs_using_block_device=True) mounter = FakeMount.instance_for_format( image, None, None) mounter.device = device mock_inst.return_value = mounter with test.nested( 
mock.patch.object(mounter, 'get_dev', autospec=True, return_value=True), mock.patch.object(mounter, 'unget_dev', autospec=True), ) as (mock_get_dev, mock_unget_dev): api.extend(image, imgsize) mock_can_resize.assert_called_once_with(imgfile, imgsize) mock_exec.assert_called_once_with('qemu-img', 'resize', imgfile, imgsize) mock_extendable.assert_called_once_with(image) mock_inst.assert_called_once_with(image, None, None) mock_resize.assert_called_once_with(mounter.device, run_as_root=True, check_exit_code=[0]) mock_get_dev.assert_called_once_with() mock_unget_dev.assert_called_once_with() @mock.patch.object(api, 'can_resize_image', autospec=True, return_value=True) @mock.patch.object(api, 'is_image_extendable', autospec=True) @mock.patch('oslo_concurrency.processutils.execute', autospec=True) def test_extend_qcow_no_resize(self, mock_execute, mock_extendable, mock_can_resize_image): imgfile = tempfile.NamedTemporaryFile() self.addCleanup(imgfile.close) imgsize = 10 image = imgmodel.LocalFileImage(imgfile, imgmodel.FORMAT_QCOW2) self.flags(resize_fs_using_block_device=False) api.extend(image, imgsize) mock_can_resize_image.assert_called_once_with(imgfile, imgsize) mock_execute.assert_called_once_with('qemu-img', 'resize', imgfile, imgsize) self.assertFalse(mock_extendable.called) @mock.patch.object(api, 'can_resize_image', autospec=True, return_value=True) @mock.patch('nova.privsep.libvirt.ploop_resize') def test_extend_ploop(self, mock_ploop_resize, mock_can_resize_image): imgfile = tempfile.NamedTemporaryFile() self.addCleanup(imgfile.close) imgsize = 10 * units.Gi image = imgmodel.LocalFileImage(imgfile, imgmodel.FORMAT_PLOOP) api.extend(image, imgsize) mock_can_resize_image.assert_called_once_with(image.path, imgsize) mock_ploop_resize.assert_called_once_with(imgfile, imgsize) @mock.patch.object(api, 'can_resize_image', autospec=True, return_value=True) @mock.patch.object(api, 'resize2fs', autospec=True) @mock.patch('oslo_concurrency.processutils.execute', autospec=True) def test_extend_raw_success(self, mock_exec, mock_resize, mock_can_resize): imgfile = tempfile.NamedTemporaryFile() self.addCleanup(imgfile.close) imgsize = 10 image = imgmodel.LocalFileImage(imgfile, imgmodel.FORMAT_RAW) api.extend(image, imgsize) mock_exec.assert_has_calls( [mock.call('qemu-img', 'resize', imgfile, imgsize), mock.call('e2label', image.path)]) mock_resize.assert_called_once_with(imgfile, run_as_root=False, check_exit_code=[0]) mock_can_resize.assert_called_once_with(imgfile, imgsize) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/disk/test_inject.py0000664000175000017500000002723300000000000022464 0ustar00zuulzuul00000000000000# Copyright (C) 2012 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
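# The tests that follow exercise the nova.virt.disk.api injection helpers
# (ssh key, network config, metadata, admin password and arbitrary files)
# against the in-memory fake guestfs module, so no real image is modified.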
from collections import OrderedDict import os import fixtures from nova import exception from nova import test from nova.tests.unit.virt.disk.vfs import fakeguestfs from nova.virt.disk import api as diskapi from nova.virt.disk.vfs import guestfs as vfsguestfs from nova.virt.image import model as imgmodel class VirtDiskTest(test.NoDBTestCase): def setUp(self): super(VirtDiskTest, self).setUp() self.useFixture( fixtures.MonkeyPatch('nova.virt.disk.vfs.guestfs.guestfs', fakeguestfs)) self.file = imgmodel.LocalFileImage("/some/file", imgmodel.FORMAT_QCOW2) def test_inject_data(self): self.assertTrue(diskapi.inject_data( imgmodel.LocalFileImage("/some/file", imgmodel.FORMAT_QCOW2))) self.assertTrue(diskapi.inject_data( imgmodel.LocalFileImage("/some/file", imgmodel.FORMAT_RAW), mandatory=('files',))) self.assertTrue(diskapi.inject_data( imgmodel.LocalFileImage("/some/file", imgmodel.FORMAT_RAW), key="mysshkey", mandatory=('key',))) os_name = os.name os.name = 'nt' # Cause password injection to fail self.assertRaises(exception.NovaException, diskapi.inject_data, imgmodel.LocalFileImage("/some/file", imgmodel.FORMAT_RAW), admin_password="p", mandatory=('admin_password',)) self.assertFalse(diskapi.inject_data( imgmodel.LocalFileImage("/some/file", imgmodel.FORMAT_RAW), admin_password="p")) os.name = os_name self.assertFalse(diskapi.inject_data( imgmodel.LocalFileImage("/some/fail/file", imgmodel.FORMAT_RAW), key="mysshkey")) def test_inject_data_key(self): vfs = vfsguestfs.VFSGuestFS(self.file) vfs.setup() diskapi._inject_key_into_fs("mysshkey", vfs) self.assertIn("/root/.ssh", vfs.handle.files) self.assertEqual(vfs.handle.files["/root/.ssh"], {'isdir': True, 'gid': 0, 'uid': 0, 'mode': 0o700}) self.assertIn("/root/.ssh/authorized_keys", vfs.handle.files) self.assertEqual(vfs.handle.files["/root/.ssh/authorized_keys"], {'isdir': False, 'content': "Hello World\n# The following ssh " + "key was injected by Nova\nmysshkey\n", 'gid': 100, 'uid': 100, 'mode': 0o600}) vfs.teardown() def test_inject_data_key_with_selinux(self): vfs = vfsguestfs.VFSGuestFS(self.file) vfs.setup() vfs.make_path("etc/selinux") vfs.make_path("etc/rc.d") diskapi._inject_key_into_fs("mysshkey", vfs) self.assertIn("/etc/rc.d/rc.local", vfs.handle.files) self.assertEqual(vfs.handle.files["/etc/rc.d/rc.local"], {'isdir': False, 'content': "Hello World#!/bin/sh\n# Added by " + "Nova to ensure injected ssh keys " + "have the right context\nrestorecon " + "-RF root/.ssh 2>/dev/null || :\n", 'gid': 100, 'uid': 100, 'mode': 0o700}) self.assertIn("/root/.ssh", vfs.handle.files) self.assertEqual(vfs.handle.files["/root/.ssh"], {'isdir': True, 'gid': 0, 'uid': 0, 'mode': 0o700}) self.assertIn("/root/.ssh/authorized_keys", vfs.handle.files) self.assertEqual(vfs.handle.files["/root/.ssh/authorized_keys"], {'isdir': False, 'content': "Hello World\n# The following ssh " + "key was injected by Nova\nmysshkey\n", 'gid': 100, 'uid': 100, 'mode': 0o600}) vfs.teardown() def test_inject_data_key_with_selinux_append_with_newline(self): vfs = vfsguestfs.VFSGuestFS(self.file) vfs.setup() vfs.replace_file("/etc/rc.d/rc.local", "#!/bin/sh\necho done") vfs.make_path("etc/selinux") vfs.make_path("etc/rc.d") diskapi._inject_key_into_fs("mysshkey", vfs) self.assertIn("/etc/rc.d/rc.local", vfs.handle.files) self.assertEqual(vfs.handle.files["/etc/rc.d/rc.local"], {'isdir': False, 'content': "#!/bin/sh\necho done\n# Added " "by Nova to ensure injected ssh keys have " "the right context\nrestorecon -RF " "root/.ssh 2>/dev/null || :\n", 'gid': 100, 'uid': 100, 
'mode': 0o700}) vfs.teardown() def test_inject_net(self): vfs = vfsguestfs.VFSGuestFS(self.file) vfs.setup() diskapi._inject_net_into_fs("mynetconfig", vfs) self.assertIn("/etc/network/interfaces", vfs.handle.files) self.assertEqual(vfs.handle.files["/etc/network/interfaces"], {'content': 'mynetconfig', 'gid': 100, 'isdir': False, 'mode': 0o700, 'uid': 100}) vfs.teardown() def test_inject_metadata(self): vfs = vfsguestfs.VFSGuestFS(self.file) vfs.setup() metadata = {"foo": "bar", "eek": "wizz"} metadata = OrderedDict(sorted(metadata.items())) diskapi._inject_metadata_into_fs(metadata, vfs) self.assertIn("/meta.js", vfs.handle.files) self.assertEqual({'content': '{"eek": "wizz", ' + '"foo": "bar"}', 'gid': 100, 'isdir': False, 'mode': 0o700, 'uid': 100}, vfs.handle.files["/meta.js"]) vfs.teardown() def test_inject_admin_password(self): vfs = vfsguestfs.VFSGuestFS(self.file) vfs.setup() def fake_salt(): return "1234567890abcdef" self.stub_out('nova.virt.disk.api._generate_salt', fake_salt) vfs.handle.write("/etc/shadow", "root:$1$12345678$xxxxx:14917:0:99999:7:::\n" + "bin:*:14495:0:99999:7:::\n" + "daemon:*:14495:0:99999:7:::\n") vfs.handle.write("/etc/passwd", "root:x:0:0:root:/root:/bin/bash\n" + "bin:x:1:1:bin:/bin:/sbin/nologin\n" + "daemon:x:2:2:daemon:/sbin:/sbin/nologin\n") diskapi._inject_admin_password_into_fs("123456", vfs) self.assertEqual(vfs.handle.files["/etc/passwd"], {'content': "root:x:0:0:root:/root:/bin/bash\n" + "bin:x:1:1:bin:/bin:/sbin/nologin\n" + "daemon:x:2:2:daemon:/sbin:" + "/sbin/nologin\n", 'gid': 100, 'isdir': False, 'mode': 0o700, 'uid': 100}) shadow = vfs.handle.files["/etc/shadow"] # if the encrypted password is only 13 characters long, then # nova.virt.disk.api:_set_password fell back to DES. if len(shadow['content']) == 91: self.assertEqual(shadow, {'content': "root:12tir.zIbWQ3c" + ":14917:0:99999:7:::\n" + "bin:*:14495:0:99999:7:::\n" + "daemon:*:14495:0:99999:7:::\n", 'gid': 100, 'isdir': False, 'mode': 0o700, 'uid': 100}) else: self.assertEqual(shadow, {'content': "root:$1$12345678$a4ge4d5iJ5vw" + "vbFS88TEN0:14917:0:99999:7:::\n" + "bin:*:14495:0:99999:7:::\n" + "daemon:*:14495:0:99999:7:::\n", 'gid': 100, 'isdir': False, 'mode': 0o700, 'uid': 100}) vfs.teardown() def test_inject_files_into_fs(self): vfs = vfsguestfs.VFSGuestFS(self.file) vfs.setup() diskapi._inject_files_into_fs([("/path/to/not/exists/file", "inject-file-contents")], vfs) self.assertIn("/path/to/not/exists", vfs.handle.files) shadow_dir = vfs.handle.files["/path/to/not/exists"] self.assertEqual(shadow_dir, {"isdir": True, "gid": 0, "uid": 0, "mode": 0o744}) shadow_file = vfs.handle.files["/path/to/not/exists/file"] self.assertEqual(shadow_file, {"isdir": False, "content": "inject-file-contents", "gid": 100, "uid": 100, "mode": 0o700}) vfs.teardown() def test_inject_files_into_fs_dir_exists(self): vfs = vfsguestfs.VFSGuestFS(self.file) vfs.setup() called = {'make_path': False} def fake_has_file(*args, **kwargs): return True def fake_make_path(*args, **kwargs): called['make_path'] = True self.stub_out('nova.virt.disk.vfs.guestfs.VFSGuestFS.has_file', fake_has_file) self.stub_out('nova.virt.disk.vfs.guestfs.VFSGuestFS.make_path', fake_make_path) # test for already exists dir diskapi._inject_files_into_fs([("/path/to/exists/file", "inject-file-contents")], vfs) self.assertIn("/path/to/exists/file", vfs.handle.files) self.assertFalse(called['make_path']) # test for root dir diskapi._inject_files_into_fs([("/inject-file", "inject-file-contents")], vfs) self.assertIn("/inject-file", 
vfs.handle.files) self.assertFalse(called['make_path']) # test for null dir vfs.handle.files.pop("/inject-file") diskapi._inject_files_into_fs([("inject-file", "inject-file-contents")], vfs) self.assertIn("/inject-file", vfs.handle.files) self.assertFalse(called['make_path']) vfs.teardown() ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5864677 nova-21.2.4/nova/tests/unit/virt/disk/vfs/0000775000175000017500000000000000000000000020366 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/disk/vfs/__init__.py0000664000175000017500000000000000000000000022465 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/virt/disk/vfs/fakeguestfs.py0000664000175000017500000001375500000000000023262 0ustar00zuulzuul00000000000000# Copyright 2012 Red Hat, Inc # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. EVENT_APPLIANCE = 0x1 EVENT_LIBRARY = 0x2 EVENT_WARNING = 0x3 EVENT_TRACE = 0x4 class GuestFS(object): SUPPORT_CLOSE_ON_EXIT = True SUPPORT_RETURN_DICT = True CAN_SET_OWNERSHIP = True def __init__(self, **kwargs): if not self.SUPPORT_CLOSE_ON_EXIT and 'close_on_exit' in kwargs: raise TypeError('close_on_exit') if not self.SUPPORT_RETURN_DICT and 'python_return_dict' in kwargs: raise TypeError('python_return_dict') self._python_return_dict = kwargs.get('python_return_dict', False) self.kwargs = kwargs self.drives = [] self.running = False self.closed = False self.mounts = [] self.files = {} self.auginit = False self.root_mounted = False self.backend_settings = None self.trace_enabled = False self.verbose_enabled = False self.event_callback = None def launch(self): self.running = True def shutdown(self): self.running = False self.mounts = [] self.drives = [] def set_backend_settings(self, settings): assert isinstance(settings, list), 'set_backend_settings takes a list' self.backend_settings = settings def close(self): self.closed = True def add_drive_opts(self, file, *args, **kwargs): if file == "/some/fail/file": raise RuntimeError("%s: No such file or directory", file) self.drives.append((file, kwargs)) def add_drive(self, file, format=None, *args, **kwargs): self.add_drive_opts(file, format=None, *args, **kwargs) def inspect_os(self): return ["/dev/guestvgf/lv_root"] def inspect_get_mountpoints(self, dev): mountpoints = [("/home", "/dev/mapper/guestvgf-lv_home"), ("/", "/dev/mapper/guestvgf-lv_root"), ("/boot", "/dev/vda1")] if self.SUPPORT_RETURN_DICT and self._python_return_dict: return dict(mountpoints) else: return mountpoints def mount_options(self, options, device, mntpoint): if mntpoint == "/": self.root_mounted = True else: if not self.root_mounted: raise RuntimeError( "mount: %s: No such file or directory" % mntpoint) self.mounts.append((options, device, mntpoint)) def mkdir_p(self, path): if path not in self.files: 
self.files[path] = { "isdir": True, "gid": 100, "uid": 100, "mode": 0o700 } def read_file(self, path): if path not in self.files: self.files[path] = { "isdir": False, "content": "Hello World", "gid": 100, "uid": 100, "mode": 0o700 } return bytes(self.files[path]["content"].encode()) def write(self, path, content): if path not in self.files: self.files[path] = { "isdir": False, "content": "Hello World", "gid": 100, "uid": 100, "mode": 0o700 } self.files[path]["content"] = content def write_append(self, path, content): if path not in self.files: self.files[path] = { "isdir": False, "content": "Hello World", "gid": 100, "uid": 100, "mode": 0o700 } self.files[path]["content"] = self.files[path]["content"] + content def stat(self, path): if path not in self.files: raise RuntimeError("No such file: " + path) return self.files[path]["mode"] def chown(self, uid, gid, path): if path not in self.files: raise RuntimeError("No such file: " + path) if uid != -1: self.files[path]["uid"] = uid if gid != -1: self.files[path]["gid"] = gid def chmod(self, mode, path): if path not in self.files: raise RuntimeError("No such file: " + path) self.files[path]["mode"] = mode def aug_init(self, root, flags): self.auginit = True def aug_close(self): self.auginit = False def aug_get(self, cfgpath): if not self.auginit: raise RuntimeError("Augeus not initialized") if ((cfgpath.startswith("/files/etc/passwd") or cfgpath.startswith("/files/etc/group")) and not self.CAN_SET_OWNERSHIP): raise RuntimeError("Node not found %s", cfgpath) if cfgpath == "/files/etc/passwd/root/uid": return 0 elif cfgpath == "/files/etc/passwd/fred/uid": return 105 elif cfgpath == "/files/etc/passwd/joe/uid": return 110 elif cfgpath == "/files/etc/group/root/gid": return 0 elif cfgpath == "/files/etc/group/users/gid": return 500 elif cfgpath == "/files/etc/group/admins/gid": return 600 raise RuntimeError("Unknown path %s", cfgpath) def set_trace(self, enabled): self.trace_enabled = enabled def set_verbose(self, enabled): self.verbose_enabled = enabled def set_event_callback(self, func, events): self.event_callback = (func, events) def vfs_type(self, dev): return 'ext3' ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/virt/disk/vfs/test_guestfs.py0000664000175000017500000003340600000000000023465 0ustar00zuulzuul00000000000000# Copyright (C) 2012 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
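# The VFSGuestFS tests that follow run entirely against the fake guestfs
# module above; they cover appliance setup and OS inspection for qcow2, raw,
# LVM and RBD images as well as the file, permission and ownership helpers.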
import fixtures import mock from nova import exception from nova import test from nova.tests.unit.virt.disk.vfs import fakeguestfs from nova.virt.disk.vfs import guestfs as vfsimpl from nova.virt.image import model as imgmodel class VirtDiskVFSGuestFSTest(test.NoDBTestCase): def setUp(self): super(VirtDiskVFSGuestFSTest, self).setUp() self.useFixture( fixtures.MonkeyPatch('nova.virt.disk.vfs.guestfs.guestfs', fakeguestfs)) self.qcowfile = imgmodel.LocalFileImage("/dummy.qcow2", imgmodel.FORMAT_QCOW2) self.rawfile = imgmodel.LocalFileImage("/dummy.img", imgmodel.FORMAT_RAW) self.lvmfile = imgmodel.LocalBlockImage("/dev/volgroup/myvol") self.rbdfile = imgmodel.RBDImage("myvol", "mypool", "cthulu", "arrrrrgh", ["server1:123", "server2:123"]) def _do_test_appliance_setup_inspect(self, image, drives, forcetcg): if forcetcg: vfsimpl.force_tcg() else: vfsimpl.force_tcg(False) vfs = vfsimpl.VFSGuestFS( image, partition=-1) vfs.setup() if forcetcg: self.assertEqual(["force_tcg"], vfs.handle.backend_settings) vfsimpl.force_tcg(False) else: self.assertIsNone(vfs.handle.backend_settings) self.assertTrue(vfs.handle.running) self.assertEqual(drives, vfs.handle.drives) self.assertEqual(3, len(vfs.handle.mounts)) self.assertEqual("/dev/mapper/guestvgf-lv_root", vfs.handle.mounts[0][1]) self.assertEqual("/dev/vda1", vfs.handle.mounts[1][1]) self.assertEqual("/dev/mapper/guestvgf-lv_home", vfs.handle.mounts[2][1]) self.assertEqual("/", vfs.handle.mounts[0][2]) self.assertEqual("/boot", vfs.handle.mounts[1][2]) self.assertEqual("/home", vfs.handle.mounts[2][2]) handle = vfs.handle vfs.teardown() self.assertIsNone(vfs.handle) self.assertFalse(handle.running) self.assertTrue(handle.closed) self.assertEqual(0, len(handle.mounts)) def test_appliance_setup_inspect_auto(self): drives = [("/dummy.qcow2", {"format": "qcow2"})] self._do_test_appliance_setup_inspect(self.qcowfile, drives, False) def test_appliance_setup_inspect_tcg(self): drives = [("/dummy.qcow2", {"format": "qcow2"})] self._do_test_appliance_setup_inspect(self.qcowfile, drives, True) def test_appliance_setup_inspect_raw(self): drives = [("/dummy.img", {"format": "raw"})] self._do_test_appliance_setup_inspect(self.rawfile, drives, True) def test_appliance_setup_inspect_lvm(self): drives = [("/dev/volgroup/myvol", {"format": "raw"})] self._do_test_appliance_setup_inspect(self.lvmfile, drives, True) def test_appliance_setup_inspect_rbd(self): drives = [("mypool/myvol", {"format": "raw", "protocol": "rbd", "username": "cthulu", "secret": "arrrrrgh", "server": ["server1:123", "server2:123"]})] self._do_test_appliance_setup_inspect(self.rbdfile, drives, True) def test_appliance_setup_inspect_no_root_raises(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile, partition=-1) # call setup to init the handle so we can stub it vfs.setup() self.assertIsNone(vfs.handle.backend_settings) with mock.patch.object( vfs.handle, 'inspect_os', return_value=[]) as mock_inspect_os: self.assertRaises(exception.NovaException, vfs.setup_os_inspect) mock_inspect_os.assert_called_once_with() def test_appliance_setup_inspect_multi_boots_raises(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile, partition=-1) # call setup to init the handle so we can stub it vfs.setup() self.assertIsNone(vfs.handle.backend_settings) with mock.patch.object( vfs.handle, 'inspect_os', return_value=['fake1', 'fake2']) as mock_inspect_os: self.assertRaises(exception.NovaException, vfs.setup_os_inspect) mock_inspect_os.assert_called_once_with() def test_appliance_setup_static_nopart(self): vfs = 
vfsimpl.VFSGuestFS(self.qcowfile, partition=None) vfs.setup() self.assertIsNone(vfs.handle.backend_settings) self.assertTrue(vfs.handle.running) self.assertEqual(1, len(vfs.handle.mounts)) self.assertEqual("/dev/sda", vfs.handle.mounts[0][1]) self.assertEqual("/", vfs.handle.mounts[0][2]) handle = vfs.handle vfs.teardown() self.assertIsNone(vfs.handle) self.assertFalse(handle.running) self.assertTrue(handle.closed) self.assertEqual(0, len(handle.mounts)) def test_appliance_setup_static_part(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile, partition=2) vfs.setup() self.assertIsNone(vfs.handle.backend_settings) self.assertTrue(vfs.handle.running) self.assertEqual(1, len(vfs.handle.mounts)) self.assertEqual("/dev/sda2", vfs.handle.mounts[0][1]) self.assertEqual("/", vfs.handle.mounts[0][2]) handle = vfs.handle vfs.teardown() self.assertIsNone(vfs.handle) self.assertFalse(handle.running) self.assertTrue(handle.closed) self.assertEqual(0, len(handle.mounts)) def test_makepath(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() vfs.make_path("/some/dir") vfs.make_path("/other/dir") self.assertIn("/some/dir", vfs.handle.files) self.assertIn("/other/dir", vfs.handle.files) self.assertTrue(vfs.handle.files["/some/dir"]["isdir"]) self.assertTrue(vfs.handle.files["/other/dir"]["isdir"]) vfs.teardown() def test_append_file(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() vfs.append_file("/some/file", " Goodbye") self.assertIn("/some/file", vfs.handle.files) self.assertEqual("Hello World Goodbye", vfs.handle.files["/some/file"]["content"]) vfs.teardown() def test_replace_file(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() vfs.replace_file("/some/file", "Goodbye") self.assertIn("/some/file", vfs.handle.files) self.assertEqual("Goodbye", vfs.handle.files["/some/file"]["content"]) vfs.teardown() def test_read_file(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() self.assertEqual("Hello World", vfs.read_file("/some/file")) vfs.teardown() def test_has_file(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() vfs.read_file("/some/file") self.assertTrue(vfs.has_file("/some/file")) self.assertFalse(vfs.has_file("/other/file")) vfs.teardown() def test_set_permissions(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() vfs.read_file("/some/file") self.assertEqual(0o700, vfs.handle.files["/some/file"]["mode"]) vfs.set_permissions("/some/file", 0o7777) self.assertEqual(0o7777, vfs.handle.files["/some/file"]["mode"]) vfs.teardown() def test_set_ownership(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() vfs.read_file("/some/file") self.assertEqual(100, vfs.handle.files["/some/file"]["uid"]) self.assertEqual(100, vfs.handle.files["/some/file"]["gid"]) vfs.set_ownership("/some/file", "fred", None) self.assertEqual(105, vfs.handle.files["/some/file"]["uid"]) self.assertEqual(100, vfs.handle.files["/some/file"]["gid"]) vfs.set_ownership("/some/file", None, "users") self.assertEqual(105, vfs.handle.files["/some/file"]["uid"]) self.assertEqual(500, vfs.handle.files["/some/file"]["gid"]) vfs.set_ownership("/some/file", "joe", "admins") self.assertEqual(110, vfs.handle.files["/some/file"]["uid"]) self.assertEqual(600, vfs.handle.files["/some/file"]["gid"]) vfs.teardown() def test_set_ownership_not_supported(self): # NOTE(andreaf) Setting ownership relies on /etc/passwd and/or # /etc/group being available in the image, which is not always the # case - e.g. CirrOS image before boot. 
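        # In that case the fake GuestFS raises RuntimeError from aug_get()
        # once CAN_SET_OWNERSHIP is stubbed to False, and the assertions
        # below expect set_ownership() to surface that as a NovaException.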
vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() self.stub_out('nova.tests.unit.virt.disk.vfs.fakeguestfs.GuestFS.' 'CAN_SET_OWNERSHIP', False) self.assertRaises(exception.NovaException, vfs.set_ownership, "/some/file", "fred", None) self.assertRaises(exception.NovaException, vfs.set_ownership, "/some/file", None, "users") def test_close_on_error(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() self.assertFalse(vfs.handle.kwargs['close_on_exit']) vfs.teardown() self.stub_out('nova.tests.unit.virt.disk.vfs.fakeguestfs.GuestFS.' 'SUPPORT_CLOSE_ON_EXIT', False) vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() self.assertNotIn('close_on_exit', vfs.handle.kwargs) vfs.teardown() def test_python_return_dict(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() self.assertFalse(vfs.handle.kwargs['python_return_dict']) vfs.teardown() self.stub_out('nova.tests.unit.virt.disk.vfs.fakeguestfs.GuestFS.' 'SUPPORT_RETURN_DICT', False) vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() self.assertNotIn('python_return_dict', vfs.handle.kwargs) vfs.teardown() def test_setup_debug_disable(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() self.assertFalse(vfs.handle.trace_enabled) self.assertFalse(vfs.handle.verbose_enabled) self.assertIsNone(vfs.handle.event_callback) def test_setup_debug_enabled(self): self.flags(debug=True, group='guestfs') vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() self.assertTrue(vfs.handle.trace_enabled) self.assertTrue(vfs.handle.verbose_enabled) self.assertIsNotNone(vfs.handle.event_callback) def test_get_format_fs(self): vfs = vfsimpl.VFSGuestFS(self.rawfile) vfs.setup() self.assertIsNotNone(vfs.handle) self.assertEqual('ext3', vfs.get_image_fs()) vfs.teardown() @mock.patch.object(vfsimpl.VFSGuestFS, 'setup_os') def test_setup_mount(self, setup_os): vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() self.assertTrue(setup_os.called) @mock.patch.object(vfsimpl.VFSGuestFS, 'setup_os') def test_setup_mount_false(self, setup_os): vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup(mount=False) self.assertFalse(setup_os.called) @mock.patch('os.access') @mock.patch('os.uname', return_value=('Linux', '', 'kernel_name')) def test_appliance_setup_inspect_capabilties_fail_with_ubuntu(self, mock_uname, mock_access): # In ubuntu os will default host kernel as 600 permission m = mock.MagicMock() m.launch.side_effect = Exception vfs = vfsimpl.VFSGuestFS(self.qcowfile) mock_access.return_value = False self.flags(debug=False, group='guestfs') with mock.patch('eventlet.tpool.Proxy', return_value=m) as tpool_mock: self.assertRaises(exception.LibguestfsCannotReadKernel, vfs.inspect_capabilities) m.add_drive.assert_called_once_with('/dev/null') m.launch.assert_called_once_with() mock_access.assert_called_once_with('/boot/vmlinuz-kernel_name', mock.ANY) mock_uname.assert_called_once_with() self.assertEqual(1, tpool_mock.call_count) def test_appliance_setup_inspect_capabilties_debug_mode(self): """Asserts that we do not use an eventlet thread pool when guestfs debug logging is enabled. """ # We can't actually mock guestfs.GuestFS because it's an optional # native package import. All we really care about here is that # eventlet isn't used. 
self.flags(debug=True, group='guestfs') vfs = vfsimpl.VFSGuestFS(self.qcowfile) with mock.patch('eventlet.tpool.Proxy', new_callable=mock.NonCallableMock): vfs.inspect_capabilities() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/virt/disk/vfs/test_localfs.py0000664000175000017500000001761500000000000023434 0ustar00zuulzuul00000000000000# Copyright (C) 2012 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import grp import pwd import tempfile from collections import namedtuple import mock from nova import exception from nova import test import nova.utils from nova.virt.disk.mount import nbd from nova.virt.disk.vfs import localfs as vfsimpl from nova.virt.image import model as imgmodel class VirtDiskVFSLocalFSTestPaths(test.NoDBTestCase): def setUp(self): super(VirtDiskVFSLocalFSTestPaths, self).setUp() self.rawfile = imgmodel.LocalFileImage('/dummy.img', imgmodel.FORMAT_RAW) # NOTE(mikal): mocking a decorator is non-trivial, so this is the # best we can do. @mock.patch.object(nova.privsep.path, 'readlink') def test_check_safe_path(self, read_link): vfs = vfsimpl.VFSLocalFS(self.rawfile) vfs.imgdir = '/foo' read_link.return_value = '/foo/etc/something.conf' ret = vfs._canonical_path('etc/something.conf') self.assertEqual(ret, '/foo/etc/something.conf') @mock.patch.object(nova.privsep.path, 'readlink') def test_check_unsafe_path(self, read_link): vfs = vfsimpl.VFSLocalFS(self.rawfile) vfs.imgdir = '/foo' read_link.return_value = '/etc/something.conf' self.assertRaises(exception.Invalid, vfs._canonical_path, 'etc/../../../something.conf') class VirtDiskVFSLocalFSTest(test.NoDBTestCase): def setUp(self): super(VirtDiskVFSLocalFSTest, self).setUp() self.qcowfile = imgmodel.LocalFileImage('/dummy.qcow2', imgmodel.FORMAT_QCOW2) self.rawfile = imgmodel.LocalFileImage('/dummy.img', imgmodel.FORMAT_RAW) @mock.patch.object(nova.privsep.path, 'readlink') @mock.patch.object(nova.privsep.path, 'makedirs') def test_makepath(self, mkdir, read_link): vfs = vfsimpl.VFSLocalFS(self.qcowfile) vfs.imgdir = '/scratch/dir' vfs.make_path('/some/dir') read_link.assert_called() mkdir.assert_called_with(read_link.return_value) read_link.reset_mock() mkdir.reset_mock() vfs.make_path('/other/dir') read_link.assert_called() mkdir.assert_called_with(read_link.return_value) @mock.patch.object(nova.privsep.path, 'readlink') @mock.patch.object(nova.privsep.path, 'writefile') def test_append_file(self, write_file, read_link): vfs = vfsimpl.VFSLocalFS(self.qcowfile) vfs.imgdir = '/scratch/dir' vfs.append_file('/some/file', ' Goodbye') read_link.assert_called() write_file.assert_called_with(read_link.return_value, 'a', ' Goodbye') @mock.patch.object(nova.privsep.path, 'readlink') @mock.patch.object(nova.privsep.path, 'writefile') def test_replace_file(self, write_file, read_link): vfs = vfsimpl.VFSLocalFS(self.qcowfile) vfs.imgdir = '/scratch/dir' vfs.replace_file('/some/file', 'Goodbye') read_link.assert_called() 
write_file.assert_called_with(read_link.return_value, 'w', 'Goodbye') @mock.patch.object(nova.privsep.path, 'readlink') @mock.patch.object(nova.privsep.path, 'readfile') def test_read_file(self, read_file, read_link): vfs = vfsimpl.VFSLocalFS(self.qcowfile) vfs.imgdir = '/scratch/dir' self.assertEqual(read_file.return_value, vfs.read_file('/some/file')) read_link.assert_called() read_file.assert_called() @mock.patch.object(nova.privsep.path.path, 'exists') def test_has_file(self, exists): vfs = vfsimpl.VFSLocalFS(self.qcowfile) vfs.imgdir = '/scratch/dir' has = vfs.has_file('/some/file') self.assertEqual(exists.return_value, has) @mock.patch.object(nova.privsep.path, 'readlink') @mock.patch.object(nova.privsep.path, 'chmod') def test_set_permissions(self, chmod, read_link): vfs = vfsimpl.VFSLocalFS(self.qcowfile) vfs.imgdir = '/scratch/dir' vfs.set_permissions('/some/file', 0o777) read_link.assert_called() chmod.assert_called_with(read_link.return_value, 0o777) @mock.patch.object(nova.privsep.path, 'readlink') @mock.patch.object(nova.privsep.path, 'chown') @mock.patch.object(pwd, 'getpwnam') @mock.patch.object(grp, 'getgrnam') def test_set_ownership(self, getgrnam, getpwnam, chown, read_link): vfs = vfsimpl.VFSLocalFS(self.qcowfile) vfs.imgdir = '/scratch/dir' fake_passwd = namedtuple('fake_passwd', 'pw_uid') getpwnam.return_value(fake_passwd(pw_uid=100)) fake_group = namedtuple('fake_group', 'gr_gid') getgrnam.return_value(fake_group(gr_gid=101)) vfs.set_ownership('/some/file', 'fred', None) read_link.assert_called() chown.assert_called_with(read_link.return_value, uid=getpwnam.return_value.pw_uid) read_link.reset_mock() chown.reset_mock() vfs.set_ownership('/some/file', None, 'users') read_link.assert_called() chown.assert_called_with(read_link.return_value, gid=getgrnam.return_value.gr_gid) read_link.reset_mock() chown.reset_mock() vfs.set_ownership('/some/file', 'joe', 'admins') read_link.assert_called() chown.assert_called_with(read_link.return_value, uid=getpwnam.return_value.pw_uid, gid=getgrnam.return_value.gr_gid) @mock.patch('nova.privsep.fs.get_filesystem_type', return_value=('ext3\n', '')) def test_get_format_fs(self, mock_type): vfs = vfsimpl.VFSLocalFS(self.rawfile) vfs.setup = mock.MagicMock() vfs.teardown = mock.MagicMock() def fake_setup(): vfs.mount = mock.MagicMock() vfs.mount.device = None vfs.mount.get_dev.side_effect = fake_get_dev def fake_teardown(): vfs.mount.device = None def fake_get_dev(): vfs.mount.device = '/dev/xyz' return True vfs.setup.side_effect = fake_setup vfs.teardown.side_effect = fake_teardown vfs.setup() self.assertEqual('ext3', vfs.get_image_fs()) vfs.teardown() vfs.mount.get_dev.assert_called_once_with() mock_type.assert_called_once_with('/dev/xyz') @mock.patch.object(tempfile, 'mkdtemp') @mock.patch.object(nbd, 'NbdMount') def test_setup_mount(self, NbdMount, mkdtemp): vfs = vfsimpl.VFSLocalFS(self.qcowfile) mounter = mock.MagicMock() mkdtemp.return_value = 'tmp/' NbdMount.return_value = mounter vfs.setup() self.assertTrue(mkdtemp.called) NbdMount.assert_called_once_with(self.qcowfile, 'tmp/', None) mounter.do_mount.assert_called_once_with() @mock.patch.object(tempfile, 'mkdtemp') @mock.patch.object(nbd, 'NbdMount') def test_setup_mount_false(self, NbdMount, mkdtemp): vfs = vfsimpl.VFSLocalFS(self.qcowfile) mounter = mock.MagicMock() mkdtemp.return_value = 'tmp/' NbdMount.return_value = mounter vfs.setup(mount=False) self.assertTrue(mkdtemp.called) NbdMount.assert_called_once_with(self.qcowfile, 'tmp/', None) 
self.assertFalse(mounter.do_mount.called) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/fakelibosinfo.py0000664000175000017500000001001500000000000022020 0ustar00zuulzuul00000000000000# Copyright 2016 Red Hat, Inc # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. def match_item(obj, fltr): key, val = list(fltr._filter.items())[0] if key == 'class': key = '_class' elif key == 'short-id': key = 'short_id' return getattr(obj, key, None) == val class Loader(object): def process_default_path(self): pass def get_db(self): return Db() class Db(object): def _get_fedora19(self): devs = [] net = Device() net._class = 'net' net.name = 'rtl8139' devs.append(net) net = Device() net._class = 'block' net.name = 'ide' devs.append(net) devlist = DeviceList() devlist.devices = devs fedora = Os() fedora.name = 'Fedora 19' fedora.id = 'http://fedoraproject.org/fedora/19' fedora.short_id = 'fedora19' fedora.dev_list = devlist return fedora def _get_fedora22(self): devs = [] net = Device() net._class = 'net' net.name = 'virtio-net' devs.append(net) net = Device() net._class = 'block' net.name = 'virtio-block' devs.append(net) devlist = DeviceList() devlist.devices = devs fedora = Os() fedora.name = 'Fedora 22' fedora.id = 'http://fedoraproject.org/fedora/22' fedora.short_id = 'fedora22' fedora.dev_list = devlist return fedora def _get_fedora23(self): devs = [] net = Device() net._class = 'net' net.name = 'virtio1.0-net' devs.append(net) net = Device() net._class = 'block' net.name = 'virtio1.0-block' devs.append(net) devlist = DeviceList() devlist.devices = devs fedora = Os() fedora.name = 'Fedora 23' fedora.id = 'http://fedoraproject.org/fedora/23' fedora.short_id = 'fedora23' fedora.dev_list = devlist return fedora def __init__(self): self.oslist = OsList() self.oslist.os_list = [ self._get_fedora19(), self._get_fedora22(), self._get_fedora23(), ] def get_os_list(self): return self.oslist class Filter(object): def __init__(self): self._filter = {} @classmethod def new(cls): return cls() def add_constraint(self, flt_key, val): self._filter[flt_key] = val class OsList(object): def __init__(self): self.os_list = [] def new_filtered(self, fltr): new_list = OsList() new_list.os_list = [os for os in self.os_list if match_item(os, fltr)] return new_list def get_length(self): return len(self.os_list) def get_nth(self, index): return self.os_list[index] class Os(object): def __init__(self): self.name = None self.short_id = None self.id = None self.dev_list = None def get_all_devices(self, fltr): new_list = DeviceList() new_list.devices = [dev for dev in self.dev_list.devices if match_item(dev, fltr)] return new_list def get_name(self): return self.name class DeviceList(object): def __init__(self): self.devices = [] def get_length(self): return len(self.devices) def get_nth(self, index): return self.devices[index] class Device(object): def __init__(self): self.name = None self._class = None def get_name(self): return self.name 
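
# NOTE: a minimal usage sketch of the fakes above, assuming they are driven
# the same way nova's libvirt/osinfo code drives the real libosinfo API:
# load a Db, filter the OS list by 'short-id', then filter that OS's devices
# by 'class'.  The 'fedora22' and 'virtio-net' values come from
# Db._get_fedora22() defined earlier in this file; the function name below is
# illustrative only.
def _example_usage():
    loader = Loader()
    loader.process_default_path()
    db = loader.get_db()

    # Narrow the OS list down to a single record by its short id.
    os_fltr = Filter.new()
    os_fltr.add_constraint('short-id', 'fedora22')
    fedora22 = db.get_os_list().new_filtered(os_fltr).get_nth(0)

    # Pick that OS's network devices, as is done when choosing a preferred
    # device model for a guest.
    dev_fltr = Filter.new()
    dev_fltr.add_constraint('class', 'net')
    return fedora22.get_all_devices(dev_fltr).get_nth(0).get_name()
    # returns 'virtio-net'
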
././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5904677
nova-21.2.4/nova/tests/unit/virt/hyperv/0000775000175000017500000000000000000000000020153 5ustar00zuulzuul00000000000000
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0
nova-21.2.4/nova/tests/unit/virt/hyperv/__init__.py0000664000175000017500000000000000000000000022252 0ustar00zuulzuul00000000000000
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0
nova-21.2.4/nova/tests/unit/virt/hyperv/test_base.py0000664000175000017500000000260000000000000022474 0ustar00zuulzuul00000000000000
# Copyright 2014 Cloudbase Solutions Srl
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
from os_win import utilsfactory
from six.moves import builtins

from nova import test


class HyperVBaseTestCase(test.NoDBTestCase):
    def setUp(self):
        super(HyperVBaseTestCase, self).setUp()

        self._mock_wmi = mock.MagicMock()
        wmi_patcher = mock.patch.object(builtins, 'wmi', create=True,
                                        new=self._mock_wmi)
        platform_patcher = mock.patch('sys.platform', 'win32')
        utilsfactory_patcher = mock.patch.object(utilsfactory, '_get_class')

        platform_patcher.start()
        wmi_patcher.start()
        utilsfactory_patcher.start()

        self.addCleanup(wmi_patcher.stop)
        self.addCleanup(platform_patcher.stop)
        self.addCleanup(utilsfactory_patcher.stop)
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0
nova-21.2.4/nova/tests/unit/virt/hyperv/test_block_device_manager.py0000664000175000017500000004765000000000000025673 0ustar00zuulzuul00000000000000
# Copyright (c) 2016 Cloudbase Solutions Srl
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
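
# NOTE: a minimal sketch of the membership-aware MagicMock pattern used by
# the _bdm_mock() helper further down in this test case.  A plain MagicMock
# answers False to every ``in`` check, so the helper wires __contains__ to
# report which attributes were given a non-None value.  The function name
# here is hypothetical and only illustrates the pattern in isolation.
def _example_contains_aware_bdm():
    import mock

    bdm = mock.MagicMock(device_name='/dev/sda', tag=None)
    bdm.__contains__.side_effect = (
        lambda attr: getattr(bdm, attr, None) is not None)

    assert 'device_name' in bdm   # set to a real value above
    assert 'tag' not in bdm       # explicitly set to None
    return bdm
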
import mock from os_win import constants as os_win_const from nova import exception from nova.tests.unit.virt.hyperv import test_base from nova.virt.hyperv import block_device_manager from nova.virt.hyperv import constants class BlockDeviceManagerTestCase(test_base.HyperVBaseTestCase): """Unit tests for the Hyper-V BlockDeviceInfoManager class.""" def setUp(self): super(BlockDeviceManagerTestCase, self).setUp() self._bdman = block_device_manager.BlockDeviceInfoManager() def test_get_device_bus_scsi(self): bdm = {'disk_bus': constants.CTRL_TYPE_SCSI, 'drive_addr': 0, 'ctrl_disk_addr': 2} bus = self._bdman._get_device_bus(bdm) self.assertEqual('0:0:0:2', bus.address) def test_get_device_bus_ide(self): bdm = {'disk_bus': constants.CTRL_TYPE_IDE, 'drive_addr': 0, 'ctrl_disk_addr': 1} bus = self._bdman._get_device_bus(bdm) self.assertEqual('0:1', bus.address) @staticmethod def _bdm_mock(**kwargs): bdm = mock.MagicMock(**kwargs) bdm.__contains__.side_effect = ( lambda attr: getattr(bdm, attr, None) is not None) return bdm @mock.patch.object(block_device_manager.objects, 'DiskMetadata') @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_get_device_bus') @mock.patch.object(block_device_manager.objects.BlockDeviceMappingList, 'get_by_instance_uuid') def test_get_bdm_metadata(self, mock_get_by_inst_uuid, mock_get_device_bus, mock_DiskMetadata): mock_instance = mock.MagicMock() root_disk = {'mount_device': mock.sentinel.dev0} ephemeral = {'device_name': mock.sentinel.dev1} block_device_info = { 'root_disk': root_disk, 'block_device_mapping': [ {'mount_device': mock.sentinel.dev2}, {'mount_device': mock.sentinel.dev3}, ], 'ephemerals': [ephemeral], } bdm = self._bdm_mock(device_name=mock.sentinel.dev0, tag='taggy', volume_id=mock.sentinel.uuid1) eph = self._bdm_mock(device_name=mock.sentinel.dev1, tag='ephy', volume_id=mock.sentinel.uuid2) mock_get_by_inst_uuid.return_value = [ bdm, eph, self._bdm_mock(device_name=mock.sentinel.dev2, tag=None), ] bdm_metadata = self._bdman.get_bdm_metadata(mock.sentinel.context, mock_instance, block_device_info) mock_get_by_inst_uuid.assert_called_once_with(mock.sentinel.context, mock_instance.uuid) mock_get_device_bus.assert_has_calls( [mock.call(root_disk), mock.call(ephemeral)], any_order=True) mock_DiskMetadata.assert_has_calls( [mock.call(bus=mock_get_device_bus.return_value, serial=bdm.volume_id, tags=[bdm.tag]), mock.call(bus=mock_get_device_bus.return_value, serial=eph.volume_id, tags=[eph.tag])], any_order=True) self.assertEqual([mock_DiskMetadata.return_value] * 2, bdm_metadata) @mock.patch('nova.virt.configdrive.required_by') def test_init_controller_slot_counter_gen1_no_configdrive( self, mock_cfg_drive_req): mock_cfg_drive_req.return_value = False slot_map = self._bdman._initialize_controller_slot_counter( mock.sentinel.FAKE_INSTANCE, constants.VM_GEN_1) self.assertEqual(slot_map[constants.CTRL_TYPE_IDE][0], os_win_const.IDE_CONTROLLER_SLOTS_NUMBER) self.assertEqual(slot_map[constants.CTRL_TYPE_IDE][1], os_win_const.IDE_CONTROLLER_SLOTS_NUMBER) self.assertEqual(slot_map[constants.CTRL_TYPE_SCSI][0], os_win_const.SCSI_CONTROLLER_SLOTS_NUMBER) @mock.patch('nova.virt.configdrive.required_by') def test_init_controller_slot_counter_gen1(self, mock_cfg_drive_req): slot_map = self._bdman._initialize_controller_slot_counter( mock.sentinel.FAKE_INSTANCE, constants.VM_GEN_1) self.assertEqual(slot_map[constants.CTRL_TYPE_IDE][1], os_win_const.IDE_CONTROLLER_SLOTS_NUMBER - 1) @mock.patch.object(block_device_manager.configdrive, 'required_by') 
@mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_initialize_controller_slot_counter') @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_check_and_update_root_device') @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_check_and_update_ephemerals') @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_check_and_update_volumes') def _check_validate_and_update_bdi(self, mock_check_and_update_vol, mock_check_and_update_eph, mock_check_and_update_root, mock_init_ctrl_cntr, mock_required_by, available_slots=1): mock_required_by.return_value = True slot_map = {constants.CTRL_TYPE_SCSI: [available_slots]} mock_init_ctrl_cntr.return_value = slot_map if available_slots: self._bdman.validate_and_update_bdi(mock.sentinel.FAKE_INSTANCE, mock.sentinel.IMAGE_META, constants.VM_GEN_2, mock.sentinel.BLOCK_DEV_INFO) else: self.assertRaises(exception.InvalidBDMFormat, self._bdman.validate_and_update_bdi, mock.sentinel.FAKE_INSTANCE, mock.sentinel.IMAGE_META, constants.VM_GEN_2, mock.sentinel.BLOCK_DEV_INFO) mock_init_ctrl_cntr.assert_called_once_with( mock.sentinel.FAKE_INSTANCE, constants.VM_GEN_2) mock_check_and_update_root.assert_called_once_with( constants.VM_GEN_2, mock.sentinel.IMAGE_META, mock.sentinel.BLOCK_DEV_INFO, slot_map) mock_check_and_update_eph.assert_called_once_with( constants.VM_GEN_2, mock.sentinel.BLOCK_DEV_INFO, slot_map) mock_check_and_update_vol.assert_called_once_with( constants.VM_GEN_2, mock.sentinel.BLOCK_DEV_INFO, slot_map) mock_required_by.assert_called_once_with(mock.sentinel.FAKE_INSTANCE) def test_validate_and_update_bdi(self): self._check_validate_and_update_bdi() def test_validate_and_update_bdi_insufficient_slots(self): self._check_validate_and_update_bdi(available_slots=0) @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_get_available_controller_slot') @mock.patch.object(block_device_manager.BlockDeviceInfoManager, 'is_boot_from_volume') def _test_check_and_update_root_device(self, mock_is_boot_from_vol, mock_get_avail_ctrl_slot, disk_format, vm_gen=constants.VM_GEN_1, boot_from_volume=False): image_meta = mock.MagicMock(disk_format=disk_format) bdi = {'root_device': '/dev/sda', 'block_device_mapping': [ {'mount_device': '/dev/sda', 'connection_info': mock.sentinel.FAKE_CONN_INFO}]} mock_is_boot_from_vol.return_value = boot_from_volume mock_get_avail_ctrl_slot.return_value = (0, 0) self._bdman._check_and_update_root_device(vm_gen, image_meta, bdi, mock.sentinel.SLOT_MAP) root_disk = bdi['root_disk'] if boot_from_volume: self.assertEqual(root_disk['type'], constants.VOLUME) self.assertIsNone(root_disk['path']) self.assertEqual(root_disk['connection_info'], mock.sentinel.FAKE_CONN_INFO) else: image_type = self._bdman._TYPE_FOR_DISK_FORMAT.get( image_meta.disk_format) self.assertEqual(root_disk['type'], image_type) self.assertIsNone(root_disk['path']) self.assertIsNone(root_disk['connection_info']) disk_bus = (constants.CTRL_TYPE_IDE if vm_gen == constants.VM_GEN_1 else constants.CTRL_TYPE_SCSI) self.assertEqual(root_disk['disk_bus'], disk_bus) self.assertEqual(root_disk['drive_addr'], 0) self.assertEqual(root_disk['ctrl_disk_addr'], 0) self.assertEqual(root_disk['boot_index'], 0) self.assertEqual(root_disk['mount_device'], bdi['root_device']) mock_get_avail_ctrl_slot.assert_called_once_with( root_disk['disk_bus'], mock.sentinel.SLOT_MAP) @mock.patch.object(block_device_manager.BlockDeviceInfoManager, 'is_boot_from_volume', return_value=False) def test_check_and_update_root_device_exception(self, 
mock_is_boot_vol): bdi = {} image_meta = mock.MagicMock(disk_format=mock.sentinel.fake_format) self.assertRaises(exception.InvalidImageFormat, self._bdman._check_and_update_root_device, constants.VM_GEN_1, image_meta, bdi, mock.sentinel.SLOT_MAP) def test_check_and_update_root_device_gen1(self): self._test_check_and_update_root_device(disk_format='vhd') def test_check_and_update_root_device_gen1_vhdx(self): self._test_check_and_update_root_device(disk_format='vhdx') def test_check_and_update_root_device_gen1_iso(self): self._test_check_and_update_root_device(disk_format='iso') def test_check_and_update_root_device_gen2(self): self._test_check_and_update_root_device(disk_format='vhd', vm_gen=constants.VM_GEN_2) def test_check_and_update_root_device_boot_from_vol_gen1(self): self._test_check_and_update_root_device(disk_format='vhd', boot_from_volume=True) def test_check_and_update_root_device_boot_from_vol_gen2(self): self._test_check_and_update_root_device(disk_format='vhd', vm_gen=constants.VM_GEN_2, boot_from_volume=True) @mock.patch('nova.virt.configdrive.required_by', return_value=True) def _test_get_available_controller_slot(self, mock_config_drive_req, bus=constants.CTRL_TYPE_IDE, fail=False): slot_map = self._bdman._initialize_controller_slot_counter( mock.sentinel.FAKE_VM, constants.VM_GEN_1) if fail: slot_map[constants.CTRL_TYPE_IDE][0] = 0 slot_map[constants.CTRL_TYPE_IDE][1] = 0 self.assertRaises(exception.InvalidBDMFormat, self._bdman._get_available_controller_slot, constants.CTRL_TYPE_IDE, slot_map) else: (disk_addr, ctrl_disk_addr) = self._bdman._get_available_controller_slot( bus, slot_map) self.assertEqual(0, disk_addr) self.assertEqual(0, ctrl_disk_addr) def test_get_available_controller_slot(self): self._test_get_available_controller_slot() def test_get_available_controller_slot_scsi_ctrl(self): self._test_get_available_controller_slot(bus=constants.CTRL_TYPE_SCSI) def test_get_available_controller_slot_exception(self): self._test_get_available_controller_slot(fail=True) def test_is_boot_from_volume_true(self): vol = {'mount_device': self._bdman._DEFAULT_ROOT_DEVICE} block_device_info = {'block_device_mapping': [vol]} ret = self._bdman.is_boot_from_volume(block_device_info) self.assertTrue(ret) def test_is_boot_from_volume_false(self): block_device_info = {'block_device_mapping': []} ret = self._bdman.is_boot_from_volume(block_device_info) self.assertFalse(ret) def test_get_root_device_bdm(self): mount_device = '/dev/sda' bdm1 = {'mount_device': None} bdm2 = {'mount_device': mount_device} bdi = {'block_device_mapping': [bdm1, bdm2]} ret = self._bdman._get_root_device_bdm(bdi, mount_device) self.assertEqual(bdm2, ret) @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_check_and_update_bdm') def test_check_and_update_ephemerals(self, mock_check_and_update_bdm): fake_ephemerals = [mock.sentinel.eph1, mock.sentinel.eph2, mock.sentinel.eph3] fake_bdi = {'ephemerals': fake_ephemerals} expected_calls = [] for eph in fake_ephemerals: expected_calls.append(mock.call(mock.sentinel.fake_slot_map, mock.sentinel.fake_vm_gen, eph)) self._bdman._check_and_update_ephemerals(mock.sentinel.fake_vm_gen, fake_bdi, mock.sentinel.fake_slot_map) mock_check_and_update_bdm.assert_has_calls(expected_calls) @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_check_and_update_bdm') @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_get_root_device_bdm') def test_check_and_update_volumes(self, mock_get_root_dev_bdm, mock_check_and_update_bdm): fake_vol1 = 
{'mount_device': '/dev/sda'} fake_vol2 = {'mount_device': '/dev/sdb'} fake_volumes = [fake_vol1, fake_vol2] fake_bdi = {'block_device_mapping': fake_volumes, 'root_disk': {'mount_device': '/dev/sda'}} mock_get_root_dev_bdm.return_value = fake_vol1 self._bdman._check_and_update_volumes(mock.sentinel.fake_vm_gen, fake_bdi, mock.sentinel.fake_slot_map) mock_get_root_dev_bdm.assert_called_once_with(fake_bdi, '/dev/sda') mock_check_and_update_bdm.assert_called_once_with( mock.sentinel.fake_slot_map, mock.sentinel.fake_vm_gen, fake_vol2) self.assertNotIn(fake_vol1, fake_bdi) @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_get_available_controller_slot') def test_check_and_update_bdm_with_defaults(self, mock_get_ctrl_slot): mock_get_ctrl_slot.return_value = ((mock.sentinel.DRIVE_ADDR, mock.sentinel.CTRL_DISK_ADDR)) bdm = {'device_type': None, 'disk_bus': None, 'boot_index': None} self._bdman._check_and_update_bdm(mock.sentinel.FAKE_SLOT_MAP, constants.VM_GEN_1, bdm) mock_get_ctrl_slot.assert_called_once_with( bdm['disk_bus'], mock.sentinel.FAKE_SLOT_MAP) self.assertEqual(mock.sentinel.DRIVE_ADDR, bdm['drive_addr']) self.assertEqual(mock.sentinel.CTRL_DISK_ADDR, bdm['ctrl_disk_addr']) self.assertEqual('disk', bdm['device_type']) self.assertEqual(self._bdman._DEFAULT_BUS, bdm['disk_bus']) self.assertIsNone(bdm['boot_index']) def test_check_and_update_bdm_exception_device_type(self): bdm = {'device_type': 'cdrom', 'disk_bus': 'IDE'} self.assertRaises(exception.InvalidDiskInfo, self._bdman._check_and_update_bdm, mock.sentinel.FAKE_SLOT_MAP, constants.VM_GEN_1, bdm) def test_check_and_update_bdm_exception_disk_bus(self): bdm = {'device_type': 'disk', 'disk_bus': 'fake_bus'} self.assertRaises(exception.InvalidDiskInfo, self._bdman._check_and_update_bdm, mock.sentinel.FAKE_SLOT_MAP, constants.VM_GEN_1, bdm) def test_sort_by_boot_order(self): original = [{'boot_index': 2}, {'boot_index': None}, {'boot_index': 1}] expected = [original[2], original[0], original[1]] self._bdman._sort_by_boot_order(original) self.assertEqual(expected, original) @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_get_boot_order_gen1') def test_get_boot_order_gen1_vm(self, mock_get_boot_order): self._bdman.get_boot_order(constants.VM_GEN_1, mock.sentinel.BLOCK_DEV_INFO) mock_get_boot_order.assert_called_once_with( mock.sentinel.BLOCK_DEV_INFO) @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_get_boot_order_gen2') def test_get_boot_order_gen2_vm(self, mock_get_boot_order): self._bdman.get_boot_order(constants.VM_GEN_2, mock.sentinel.BLOCK_DEV_INFO) mock_get_boot_order.assert_called_once_with( mock.sentinel.BLOCK_DEV_INFO) def test_get_boot_order_gen1_iso(self): fake_bdi = {'root_disk': {'type': 'iso'}} expected = [os_win_const.BOOT_DEVICE_CDROM, os_win_const.BOOT_DEVICE_HARDDISK, os_win_const.BOOT_DEVICE_NETWORK, os_win_const.BOOT_DEVICE_FLOPPY] res = self._bdman._get_boot_order_gen1(fake_bdi) self.assertEqual(expected, res) def test_get_boot_order_gen1_vhd(self): fake_bdi = {'root_disk': {'type': 'vhd'}} expected = [os_win_const.BOOT_DEVICE_HARDDISK, os_win_const.BOOT_DEVICE_CDROM, os_win_const.BOOT_DEVICE_NETWORK, os_win_const.BOOT_DEVICE_FLOPPY] res = self._bdman._get_boot_order_gen1(fake_bdi) self.assertEqual(expected, res) @mock.patch('nova.virt.hyperv.volumeops.VolumeOps.get_disk_resource_path') def test_get_boot_order_gen2(self, mock_get_disk_path): fake_root_disk = {'boot_index': 0, 'path': mock.sentinel.FAKE_ROOT_PATH} fake_eph1 = {'boot_index': 2, 'path': 
mock.sentinel.FAKE_EPH_PATH1} fake_eph2 = {'boot_index': 3, 'path': mock.sentinel.FAKE_EPH_PATH2} fake_bdm = {'boot_index': 1, 'connection_info': mock.sentinel.FAKE_CONN_INFO} fake_bdi = {'root_disk': fake_root_disk, 'ephemerals': [fake_eph1, fake_eph2], 'block_device_mapping': [fake_bdm]} mock_get_disk_path.return_value = fake_bdm['connection_info'] expected_res = [mock.sentinel.FAKE_ROOT_PATH, mock.sentinel.FAKE_CONN_INFO, mock.sentinel.FAKE_EPH_PATH1, mock.sentinel.FAKE_EPH_PATH2] res = self._bdman._get_boot_order_gen2(fake_bdi) self.assertEqual(expected_res, res) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/hyperv/test_driver.py0000664000175000017500000005170300000000000023065 0ustar00zuulzuul00000000000000# Copyright 2015 Cloudbase Solutions SRL # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Unit tests for the Hyper-V Driver. """ import platform import sys import mock from os_win import exceptions as os_win_exc from nova import exception from nova import safe_utils from nova.tests.unit import fake_instance from nova.tests.unit.virt.hyperv import test_base from nova.virt import driver as base_driver from nova.virt.hyperv import driver class HyperVDriverTestCase(test_base.HyperVBaseTestCase): FAKE_WIN_2008R2_VERSION = '6.0.0' @mock.patch.object(driver.HyperVDriver, '_check_minimum_windows_version') def setUp(self, mock_check_minimum_windows_version): super(HyperVDriverTestCase, self).setUp() self.context = 'context' self.driver = driver.HyperVDriver(mock.sentinel.virtapi) self.driver._hostops = mock.MagicMock() self.driver._volumeops = mock.MagicMock() self.driver._vmops = mock.MagicMock() self.driver._snapshotops = mock.MagicMock() self.driver._livemigrationops = mock.MagicMock() self.driver._migrationops = mock.MagicMock() self.driver._rdpconsoleops = mock.MagicMock() self.driver._serialconsoleops = mock.MagicMock() self.driver._imagecache = mock.MagicMock() @mock.patch.object(driver.LOG, 'warning') @mock.patch.object(driver.utilsfactory, 'get_hostutils') def test_check_minimum_windows_version(self, mock_get_hostutils, mock_warning): mock_hostutils = mock_get_hostutils.return_value mock_hostutils.check_min_windows_version.return_value = False self.assertRaises(exception.HypervisorTooOld, self.driver._check_minimum_windows_version) mock_hostutils.check_min_windows_version.side_effect = [True, False] self.driver._check_minimum_windows_version() self.assertTrue(mock_warning.called) def test_public_api_signatures(self): # NOTE(claudiub): wrapped functions do not keep the same signature in # Python 2.7, which causes this test to fail. Instead, we should # compare the public API signatures of the unwrapped methods. 
for attr in driver.HyperVDriver.__dict__: class_member = getattr(driver.HyperVDriver, attr) if callable(class_member): mocked_method = mock.patch.object( driver.HyperVDriver, attr, safe_utils.get_wrapped_function(class_member)) mocked_method.start() self.addCleanup(mocked_method.stop) self.assertPublicAPISignatures(base_driver.ComputeDriver, driver.HyperVDriver) def test_converted_exception(self): self.driver._vmops.get_info.side_effect = ( os_win_exc.OSWinException) self.assertRaises(exception.NovaException, self.driver.get_info, mock.sentinel.instance) self.driver._vmops.get_info.side_effect = os_win_exc.HyperVException self.assertRaises(exception.NovaException, self.driver.get_info, mock.sentinel.instance) self.driver._vmops.get_info.side_effect = ( os_win_exc.HyperVVMNotFoundException(vm_name='foofoo')) self.assertRaises(exception.InstanceNotFound, self.driver.get_info, mock.sentinel.instance) def test_assert_original_traceback_maintained(self): def bar(self): foo = "foofoo" raise os_win_exc.HyperVVMNotFoundException(vm_name=foo) self.driver._vmops.get_info.side_effect = bar try: self.driver.get_info(mock.sentinel.instance) self.fail("Test expected exception, but it was not raised.") except exception.InstanceNotFound: # exception has been raised as expected. _, _, trace = sys.exc_info() while trace.tb_next: # iterate until the original exception source, bar. trace = trace.tb_next # original frame will contain the 'foo' variable. self.assertEqual('foofoo', trace.tb_frame.f_locals['foo']) @mock.patch.object(driver.eventhandler, 'InstanceEventHandler') def test_init_host(self, mock_InstanceEventHandler): self.driver.init_host(mock.sentinel.host) mock_start_console_handlers = ( self.driver._serialconsoleops.start_console_handlers) mock_start_console_handlers.assert_called_once_with() mock_InstanceEventHandler.assert_called_once_with( state_change_callback=self.driver.emit_event) fake_event_handler = mock_InstanceEventHandler.return_value fake_event_handler.start_listener.assert_called_once_with() def test_list_instance_uuids(self): self.driver.list_instance_uuids() self.driver._vmops.list_instance_uuids.assert_called_once_with() def test_list_instances(self): self.driver.list_instances() self.driver._vmops.list_instances.assert_called_once_with() def test_spawn(self): self.driver.spawn( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.image_meta, mock.sentinel.injected_files, mock.sentinel.admin_password, mock.sentinel.allocations, mock.sentinel.network_info, mock.sentinel.block_device_info) self.driver._vmops.spawn.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.image_meta, mock.sentinel.injected_files, mock.sentinel.admin_password, mock.sentinel.network_info, mock.sentinel.block_device_info) def test_reboot(self): self.driver.reboot( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.reboot_type, mock.sentinel.block_device_info, mock.sentinel.bad_vol_callback) self.driver._vmops.reboot.assert_called_once_with( mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.reboot_type) def test_destroy(self): self.driver.destroy( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_device_info, mock.sentinel.destroy_disks) self.driver._vmops.destroy.assert_called_once_with( mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_device_info, mock.sentinel.destroy_disks) def test_cleanup(self): self.driver.cleanup( mock.sentinel.context, 
mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_device_info, mock.sentinel.destroy_disks, mock.sentinel.migrate_data, mock.sentinel.destroy_vifs) self.driver._vmops.unplug_vifs.assert_called_once_with( mock.sentinel.instance, mock.sentinel.network_info) def test_get_info(self): self.driver.get_info(mock.sentinel.instance) self.driver._vmops.get_info.assert_called_once_with( mock.sentinel.instance) def test_attach_volume(self): mock_instance = fake_instance.fake_instance_obj(self.context) self.driver.attach_volume( mock.sentinel.context, mock.sentinel.connection_info, mock_instance, mock.sentinel.mountpoint, mock.sentinel.disk_bus, mock.sentinel.device_type, mock.sentinel.encryption) self.driver._volumeops.attach_volume.assert_called_once_with( mock.sentinel.connection_info, mock_instance.name) def test_detach_volume(self): mock_instance = fake_instance.fake_instance_obj(self.context) self.driver.detach_volume( mock.sentinel.context, mock.sentinel.connection_info, mock_instance, mock.sentinel.mountpoint, mock.sentinel.encryption) self.driver._volumeops.detach_volume.assert_called_once_with( mock.sentinel.connection_info, mock_instance.name) def test_get_volume_connector(self): self.driver.get_volume_connector(mock.sentinel.instance) self.driver._volumeops.get_volume_connector.assert_called_once_with() def test_get_available_resource(self): self.driver.get_available_resource(mock.sentinel.nodename) self.driver._hostops.get_available_resource.assert_called_once_with() def test_get_available_nodes(self): response = self.driver.get_available_nodes(mock.sentinel.refresh) self.assertEqual([platform.node()], response) def test_host_power_action(self): self.driver.host_power_action(mock.sentinel.action) self.driver._hostops.host_power_action.assert_called_once_with( mock.sentinel.action) def test_snapshot(self): self.driver.snapshot( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.image_id, mock.sentinel.update_task_state) self.driver._snapshotops.snapshot.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.image_id, mock.sentinel.update_task_state) def test_pause(self): self.driver.pause(mock.sentinel.instance) self.driver._vmops.pause.assert_called_once_with( mock.sentinel.instance) def test_unpause(self): self.driver.unpause(mock.sentinel.instance) self.driver._vmops.unpause.assert_called_once_with( mock.sentinel.instance) def test_suspend(self): self.driver.suspend(mock.sentinel.context, mock.sentinel.instance) self.driver._vmops.suspend.assert_called_once_with( mock.sentinel.instance) def test_resume(self): self.driver.resume( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_device_info) self.driver._vmops.resume.assert_called_once_with( mock.sentinel.instance) def test_power_off(self): self.driver.power_off( mock.sentinel.instance, mock.sentinel.timeout, mock.sentinel.retry_interval) self.driver._vmops.power_off.assert_called_once_with( mock.sentinel.instance, mock.sentinel.timeout, mock.sentinel.retry_interval) def test_power_on(self): self.driver.power_on( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_device_info) self.driver._vmops.power_on.assert_called_once_with( mock.sentinel.instance, mock.sentinel.block_device_info, mock.sentinel.network_info) def test_resume_state_on_host_boot(self): self.driver.resume_state_on_host_boot( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, 
mock.sentinel.block_device_info) self.driver._vmops.resume_state_on_host_boot.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_device_info) def test_live_migration(self): self.driver.live_migration( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.dest, mock.sentinel.post_method, mock.sentinel.recover_method, mock.sentinel.block_migration, mock.sentinel.migrate_data) self.driver._livemigrationops.live_migration.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.dest, mock.sentinel.post_method, mock.sentinel.recover_method, mock.sentinel.block_migration, mock.sentinel.migrate_data) @mock.patch.object(driver.HyperVDriver, 'destroy') def test_rollback_live_migration_at_destination(self, mock_destroy): self.driver.rollback_live_migration_at_destination( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_device_info, mock.sentinel.destroy_disks, mock.sentinel.migrate_data) mock_destroy.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_device_info, destroy_disks=mock.sentinel.destroy_disks) def test_pre_live_migration(self): migrate_data = self.driver.pre_live_migration( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.block_device_info, mock.sentinel.network_info, mock.sentinel.disk_info, mock.sentinel.migrate_data) self.assertEqual(mock.sentinel.migrate_data, migrate_data) pre_live_migration = self.driver._livemigrationops.pre_live_migration pre_live_migration.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.block_device_info, mock.sentinel.network_info) def test_post_live_migration(self): self.driver.post_live_migration( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.block_device_info, mock.sentinel.migrate_data) post_live_migration = self.driver._livemigrationops.post_live_migration post_live_migration.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.block_device_info, mock.sentinel.migrate_data) def test_post_live_migration_at_destination(self): self.driver.post_live_migration_at_destination( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_migration, mock.sentinel.block_device_info) mtd = self.driver._livemigrationops.post_live_migration_at_destination mtd.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_migration) def test_check_can_live_migrate_destination(self): self.driver.check_can_live_migrate_destination( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.src_compute_info, mock.sentinel.dst_compute_info, mock.sentinel.block_migration, mock.sentinel.disk_over_commit) mtd = self.driver._livemigrationops.check_can_live_migrate_destination mtd.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.src_compute_info, mock.sentinel.dst_compute_info, mock.sentinel.block_migration, mock.sentinel.disk_over_commit) def test_cleanup_live_migration_destination_check(self): self.driver.cleanup_live_migration_destination_check( mock.sentinel.context, mock.sentinel.dest_check_data) _livemigrops = self.driver._livemigrationops method = _livemigrops.cleanup_live_migration_destination_check method.assert_called_once_with( mock.sentinel.context, mock.sentinel.dest_check_data) def test_check_can_live_migrate_source(self): 
self.driver.check_can_live_migrate_source( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.dest_check_data, mock.sentinel.block_device_info) method = self.driver._livemigrationops.check_can_live_migrate_source method.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.dest_check_data) def test_plug_vifs(self): self.driver.plug_vifs( mock.sentinel.instance, mock.sentinel.network_info) self.driver._vmops.plug_vifs.assert_called_once_with( mock.sentinel.instance, mock.sentinel.network_info) def test_unplug_vifs(self): self.driver.unplug_vifs( mock.sentinel.instance, mock.sentinel.network_info) self.driver._vmops.unplug_vifs.assert_called_once_with( mock.sentinel.instance, mock.sentinel.network_info) def test_migrate_disk_and_power_off(self): self.driver.migrate_disk_and_power_off( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.dest, mock.sentinel.flavor, mock.sentinel.network_info, mock.sentinel.block_device_info, mock.sentinel.timeout, mock.sentinel.retry_interval) migr_power_off = self.driver._migrationops.migrate_disk_and_power_off migr_power_off.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.dest, mock.sentinel.flavor, mock.sentinel.network_info, mock.sentinel.block_device_info, mock.sentinel.timeout, mock.sentinel.retry_interval) def test_confirm_migration(self): self.driver.confirm_migration( mock.sentinel.context, mock.sentinel.migration, mock.sentinel.instance, mock.sentinel.network_info) self.driver._migrationops.confirm_migration.assert_called_once_with( mock.sentinel.context, mock.sentinel.migration, mock.sentinel.instance, mock.sentinel.network_info) def test_finish_revert_migration(self): self.driver.finish_revert_migration( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.migration, mock.sentinel.block_device_info, mock.sentinel.power_on) finish_revert_migr = self.driver._migrationops.finish_revert_migration finish_revert_migr.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_device_info, mock.sentinel.power_on) def test_finish_migration(self): self.driver.finish_migration( mock.sentinel.context, mock.sentinel.migration, mock.sentinel.instance, mock.sentinel.disk_info, mock.sentinel.network_info, mock.sentinel.image_meta, mock.sentinel.resize_instance, mock.sentinel.allocations, mock.sentinel.block_device_info, mock.sentinel.power_on) self.driver._migrationops.finish_migration.assert_called_once_with( mock.sentinel.context, mock.sentinel.migration, mock.sentinel.instance, mock.sentinel.disk_info, mock.sentinel.network_info, mock.sentinel.image_meta, mock.sentinel.resize_instance, mock.sentinel.block_device_info, mock.sentinel.power_on) def test_get_host_ip_addr(self): self.driver.get_host_ip_addr() self.driver._hostops.get_host_ip_addr.assert_called_once_with() def test_get_host_uptime(self): self.driver.get_host_uptime() self.driver._hostops.get_host_uptime.assert_called_once_with() def test_get_rdp_console(self): self.driver.get_rdp_console( mock.sentinel.context, mock.sentinel.instance) self.driver._rdpconsoleops.get_rdp_console.assert_called_once_with( mock.sentinel.instance) def test_get_console_output(self): mock_instance = fake_instance.fake_instance_obj(self.context) self.driver.get_console_output(self.context, mock_instance) mock_get_console_output = ( self.driver._serialconsoleops.get_console_output) mock_get_console_output.assert_called_once_with( 
mock_instance.name) def test_get_serial_console(self): mock_instance = fake_instance.fake_instance_obj(self.context) self.driver.get_console_output(self.context, mock_instance) mock_get_serial_console = ( self.driver._serialconsoleops.get_console_output) mock_get_serial_console.assert_called_once_with( mock_instance.name) def test_manage_image_cache(self): self.driver.manage_image_cache(mock.sentinel.context, mock.sentinel.all_instances) self.driver._imagecache.update.assert_called_once_with( mock.sentinel.context, mock.sentinel.all_instances) @mock.patch.object(driver.HyperVDriver, '_get_allocation_ratios') def test_update_provider_tree(self, mock_get_alloc_ratios): mock_ptree = mock.Mock() mock_inventory = mock_ptree.data.return_value.inventory self.driver.update_provider_tree( mock_ptree, mock.sentinel.nodename, mock.sentinel.allocations) mock_get_alloc_ratios.assert_called_once_with(mock_inventory) self.driver._hostops.update_provider_tree.assert_called_once_with( mock_ptree, mock.sentinel.nodename, mock_get_alloc_ratios.return_value, mock.sentinel.allocations) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/hyperv/test_eventhandler.py0000664000175000017500000001345000000000000024246 0ustar00zuulzuul00000000000000# Copyright 2015 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
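
# NOTE: a minimal sketch, assuming the shape of the handler under test: the
# InstanceEventHandler exercised below maps Hyper-V power states to nova
# lifecycle transitions via its _TRANSITION_MAP and wraps the result in
# nova.virt.event.LifecycleEvent before invoking the driver's state-change
# callback (see test_get_virt_event).  The two-entry map and function below
# are illustrative only, not the upstream implementation.
def _example_get_virt_event(instance_uuid, instance_state):
    from os_win import constants
    from nova.virt import event as virtevent

    transition_map = {
        constants.HYPERV_VM_STATE_ENABLED: virtevent.EVENT_LIFECYCLE_STARTED,
        constants.HYPERV_VM_STATE_DISABLED: virtevent.EVENT_LIFECYCLE_STOPPED,
    }
    return virtevent.LifecycleEvent(
        uuid=instance_uuid, transition=transition_map[instance_state])
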
import mock from os_win import constants from os_win import exceptions as os_win_exc from os_win import utilsfactory from nova.tests.unit.virt.hyperv import test_base from nova import utils from nova.virt.hyperv import eventhandler class EventHandlerTestCase(test_base.HyperVBaseTestCase): _FAKE_POLLING_INTERVAL = 3 _FAKE_EVENT_CHECK_TIMEFRAME = 15 @mock.patch.object(utilsfactory, 'get_vmutils') def setUp(self, mock_get_vmutils): super(EventHandlerTestCase, self).setUp() self._state_change_callback = mock.Mock() self.flags( power_state_check_timeframe=self._FAKE_EVENT_CHECK_TIMEFRAME, group='hyperv') self.flags( power_state_event_polling_interval=self._FAKE_POLLING_INTERVAL, group='hyperv') self._event_handler = eventhandler.InstanceEventHandler( self._state_change_callback) self._event_handler._serial_console_ops = mock.Mock() @mock.patch.object(eventhandler.InstanceEventHandler, '_get_instance_uuid') @mock.patch.object(eventhandler.InstanceEventHandler, '_emit_event') def _test_event_callback(self, mock_emit_event, mock_get_uuid, missing_uuid=False): mock_get_uuid.return_value = ( mock.sentinel.instance_uuid if not missing_uuid else None) self._event_handler._vmutils.get_vm_power_state.return_value = ( mock.sentinel.power_state) self._event_handler._event_callback(mock.sentinel.instance_name, mock.sentinel.power_state) if not missing_uuid: mock_emit_event.assert_called_once_with( mock.sentinel.instance_name, mock.sentinel.instance_uuid, mock.sentinel.power_state) else: self.assertFalse(mock_emit_event.called) def test_event_callback_uuid_present(self): self._test_event_callback() def test_event_callback_missing_uuid(self): self._test_event_callback(missing_uuid=True) @mock.patch.object(eventhandler.InstanceEventHandler, '_get_virt_event') @mock.patch.object(utils, 'spawn_n') def test_emit_event(self, mock_spawn, mock_get_event): self._event_handler._emit_event(mock.sentinel.instance_name, mock.sentinel.instance_uuid, mock.sentinel.instance_state) virt_event = mock_get_event.return_value mock_spawn.assert_has_calls( [mock.call(self._state_change_callback, virt_event), mock.call(self._event_handler._handle_serial_console_workers, mock.sentinel.instance_name, mock.sentinel.instance_state)]) def test_handle_serial_console_instance_running(self): self._event_handler._handle_serial_console_workers( mock.sentinel.instance_name, constants.HYPERV_VM_STATE_ENABLED) serialops = self._event_handler._serial_console_ops serialops.start_console_handler.assert_called_once_with( mock.sentinel.instance_name) def test_handle_serial_console_instance_stopped(self): self._event_handler._handle_serial_console_workers( mock.sentinel.instance_name, constants.HYPERV_VM_STATE_DISABLED) serialops = self._event_handler._serial_console_ops serialops.stop_console_handler.assert_called_once_with( mock.sentinel.instance_name) def _test_get_instance_uuid(self, instance_found=True, missing_uuid=False): if instance_found: side_effect = (mock.sentinel.instance_uuid if not missing_uuid else None, ) else: side_effect = os_win_exc.HyperVVMNotFoundException( vm_name=mock.sentinel.instance_name) mock_get_uuid = self._event_handler._vmutils.get_instance_uuid mock_get_uuid.side_effect = side_effect instance_uuid = self._event_handler._get_instance_uuid( mock.sentinel.instance_name) expected_uuid = (mock.sentinel.instance_uuid if instance_found and not missing_uuid else None) self.assertEqual(expected_uuid, instance_uuid) def test_get_nova_created_instance_uuid(self): self._test_get_instance_uuid() def 
test_get_deleted_instance_uuid(self): self._test_get_instance_uuid(instance_found=False) def test_get_instance_uuid_missing_notes(self): self._test_get_instance_uuid(missing_uuid=True) @mock.patch('nova.virt.event.LifecycleEvent') def test_get_virt_event(self, mock_lifecycle_event): instance_state = constants.HYPERV_VM_STATE_ENABLED expected_transition = self._event_handler._TRANSITION_MAP[ instance_state] virt_event = self._event_handler._get_virt_event( mock.sentinel.instance_uuid, instance_state) self.assertEqual(mock_lifecycle_event.return_value, virt_event) mock_lifecycle_event.assert_called_once_with( uuid=mock.sentinel.instance_uuid, transition=expected_transition) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/hyperv/test_hostops.py0000664000175000017500000003305700000000000023273 0ustar00zuulzuul00000000000000# Copyright 2014 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import mock import os_resource_classes as orc from os_win import constants as os_win_const from oslo_config import cfg from oslo_serialization import jsonutils from oslo_utils import units from nova.objects import fields as obj_fields from nova.tests.unit.virt.hyperv import test_base from nova.virt.hyperv import constants from nova.virt.hyperv import hostops CONF = cfg.CONF class HostOpsTestCase(test_base.HyperVBaseTestCase): """Unit tests for the Hyper-V HostOps class.""" FAKE_ARCHITECTURE = 0 FAKE_NAME = 'fake_name' FAKE_MANUFACTURER = 'FAKE_MANUFACTURER' FAKE_NUM_CPUS = 1 FAKE_INSTANCE_DIR = "C:/fake/dir" FAKE_LOCAL_IP = '10.11.12.13' FAKE_TICK_COUNT = 1000000 def setUp(self): super(HostOpsTestCase, self).setUp() self._hostops = hostops.HostOps() self._hostops._hostutils = mock.MagicMock() self._hostops._pathutils = mock.MagicMock() self._hostops._diskutils = mock.MagicMock() def test_get_cpu_info(self): mock_processors = mock.MagicMock() info = {'Architecture': self.FAKE_ARCHITECTURE, 'Name': self.FAKE_NAME, 'Manufacturer': self.FAKE_MANUFACTURER, 'NumberOfCores': self.FAKE_NUM_CPUS, 'NumberOfLogicalProcessors': self.FAKE_NUM_CPUS} def getitem(key): return info[key] mock_processors.__getitem__.side_effect = getitem self._hostops._hostutils.get_cpus_info.return_value = [mock_processors] response = self._hostops._get_cpu_info() self._hostops._hostutils.get_cpus_info.assert_called_once_with() expected = [mock.call(fkey) for fkey in os_win_const.PROCESSOR_FEATURE.keys()] self._hostops._hostutils.is_cpu_feature_present.assert_has_calls( expected, any_order=True) expected_response = self._get_mock_cpu_info() self.assertEqual(expected_response, response) def _get_mock_cpu_info(self): return {'vendor': self.FAKE_MANUFACTURER, 'model': self.FAKE_NAME, 'arch': constants.WMI_WIN32_PROCESSOR_ARCHITECTURE[ self.FAKE_ARCHITECTURE], 'features': list(os_win_const.PROCESSOR_FEATURE.values()), 'topology': {'cores': self.FAKE_NUM_CPUS, 'threads': self.FAKE_NUM_CPUS, 'sockets': 
self.FAKE_NUM_CPUS}} def _get_mock_gpu_info(self): return {'remotefx_total_video_ram': 4096, 'remotefx_available_video_ram': 2048, 'remotefx_gpu_info': mock.sentinel.FAKE_GPU_INFO} def test_get_memory_info(self): self._hostops._hostutils.get_memory_info.return_value = (2 * units.Ki, 1 * units.Ki) response = self._hostops._get_memory_info() self._hostops._hostutils.get_memory_info.assert_called_once_with() self.assertEqual((2, 1, 1), response) def test_get_storage_info_gb(self): self._hostops._pathutils.get_instances_dir.return_value = '' self._hostops._diskutils.get_disk_capacity.return_value = ( 2 * units.Gi, 1 * units.Gi) response = self._hostops._get_storage_info_gb() self._hostops._pathutils.get_instances_dir.assert_called_once_with() self._hostops._diskutils.get_disk_capacity.assert_called_once_with('') self.assertEqual((2, 1, 1), response) def test_get_hypervisor_version(self): self._hostops._hostutils.get_windows_version.return_value = '6.3.9600' response_lower = self._hostops._get_hypervisor_version() self._hostops._hostutils.get_windows_version.return_value = '10.1.0' response_higher = self._hostops._get_hypervisor_version() self.assertEqual(6003, response_lower) self.assertEqual(10001, response_higher) def test_get_remotefx_gpu_info(self): self.flags(enable_remotefx=True, group='hyperv') fake_gpus = [{'total_video_ram': '2048', 'available_video_ram': '1024'}, {'total_video_ram': '1024', 'available_video_ram': '1024'}] self._hostops._hostutils.get_remotefx_gpu_info.return_value = fake_gpus ret_val = self._hostops._get_remotefx_gpu_info() self.assertEqual(3072, ret_val['total_video_ram']) self.assertEqual(1024, ret_val['used_video_ram']) def test_get_remotefx_gpu_info_disabled(self): self.flags(enable_remotefx=False, group='hyperv') ret_val = self._hostops._get_remotefx_gpu_info() self.assertEqual(0, ret_val['total_video_ram']) self.assertEqual(0, ret_val['used_video_ram']) self._hostops._hostutils.get_remotefx_gpu_info.assert_not_called() @mock.patch.object(hostops.objects, 'NUMACell') @mock.patch.object(hostops.objects, 'NUMATopology') def test_get_host_numa_topology(self, mock_NUMATopology, mock_NUMACell): numa_node = {'id': mock.sentinel.id, 'memory': mock.sentinel.memory, 'memory_usage': mock.sentinel.memory_usage, 'cpuset': mock.sentinel.cpuset, 'cpu_usage': mock.sentinel.cpu_usage} self._hostops._hostutils.get_numa_nodes.return_value = [ numa_node.copy()] result = self._hostops._get_host_numa_topology() self.assertEqual(mock_NUMATopology.return_value, result) mock_NUMACell.assert_called_once_with( pinned_cpus=set([]), mempages=[], siblings=[], **numa_node) mock_NUMATopology.assert_called_once_with( cells=[mock_NUMACell.return_value]) @mock.patch.object(hostops.HostOps, '_get_pci_passthrough_devices') @mock.patch.object(hostops.HostOps, '_get_host_numa_topology') @mock.patch.object(hostops.HostOps, '_get_remotefx_gpu_info') @mock.patch.object(hostops.HostOps, '_get_cpu_info') @mock.patch.object(hostops.HostOps, '_get_memory_info') @mock.patch.object(hostops.HostOps, '_get_hypervisor_version') @mock.patch.object(hostops.HostOps, '_get_storage_info_gb') @mock.patch('platform.node') def test_get_available_resource(self, mock_node, mock_get_storage_info_gb, mock_get_hypervisor_version, mock_get_memory_info, mock_get_cpu_info, mock_get_gpu_info, mock_get_numa_topology, mock_get_pci_devices): mock_get_storage_info_gb.return_value = (mock.sentinel.LOCAL_GB, mock.sentinel.LOCAL_GB_FREE, mock.sentinel.LOCAL_GB_USED) mock_get_memory_info.return_value = (mock.sentinel.MEMORY_MB, 
mock.sentinel.MEMORY_MB_FREE, mock.sentinel.MEMORY_MB_USED) mock_cpu_info = self._get_mock_cpu_info() mock_get_cpu_info.return_value = mock_cpu_info mock_get_hypervisor_version.return_value = mock.sentinel.VERSION mock_get_numa_topology.return_value._to_json.return_value = ( mock.sentinel.numa_topology_json) mock_get_pci_devices.return_value = mock.sentinel.pcis mock_gpu_info = self._get_mock_gpu_info() mock_get_gpu_info.return_value = mock_gpu_info response = self._hostops.get_available_resource() mock_get_memory_info.assert_called_once_with() mock_get_cpu_info.assert_called_once_with() mock_get_hypervisor_version.assert_called_once_with() mock_get_pci_devices.assert_called_once_with() expected = {'supported_instances': [("i686", "hyperv", "hvm"), ("x86_64", "hyperv", "hvm")], 'hypervisor_hostname': mock_node(), 'cpu_info': jsonutils.dumps(mock_cpu_info), 'hypervisor_version': mock.sentinel.VERSION, 'memory_mb': mock.sentinel.MEMORY_MB, 'memory_mb_used': mock.sentinel.MEMORY_MB_USED, 'local_gb': mock.sentinel.LOCAL_GB, 'local_gb_used': mock.sentinel.LOCAL_GB_USED, 'disk_available_least': mock.sentinel.LOCAL_GB_FREE, 'vcpus': self.FAKE_NUM_CPUS, 'vcpus_used': 0, 'hypervisor_type': 'hyperv', 'numa_topology': mock.sentinel.numa_topology_json, 'remotefx_available_video_ram': 2048, 'remotefx_gpu_info': mock.sentinel.FAKE_GPU_INFO, 'remotefx_total_video_ram': 4096, 'pci_passthrough_devices': mock.sentinel.pcis, } self.assertEqual(expected, response) @mock.patch.object(hostops.jsonutils, 'dumps') def test_get_pci_passthrough_devices(self, mock_jsonutils_dumps): mock_pci_dev = {'vendor_id': 'fake_vendor_id', 'product_id': 'fake_product_id', 'dev_id': 'fake_dev_id', 'address': 'fake_address'} mock_get_pcis = self._hostops._hostutils.get_pci_passthrough_devices mock_get_pcis.return_value = [mock_pci_dev] expected_label = 'label_%(vendor_id)s_%(product_id)s' % { 'vendor_id': mock_pci_dev['vendor_id'], 'product_id': mock_pci_dev['product_id']} expected_pci_dev = mock_pci_dev.copy() expected_pci_dev.update(dev_type=obj_fields.PciDeviceType.STANDARD, label= expected_label, numa_node=None) result = self._hostops._get_pci_passthrough_devices() self.assertEqual(mock_jsonutils_dumps.return_value, result) mock_jsonutils_dumps.assert_called_once_with([expected_pci_dev]) def _test_host_power_action(self, action): self._hostops._hostutils.host_power_action = mock.Mock() self._hostops.host_power_action(action) self._hostops._hostutils.host_power_action.assert_called_with( action) def test_host_power_action_shutdown(self): self._test_host_power_action(constants.HOST_POWER_ACTION_SHUTDOWN) def test_host_power_action_reboot(self): self._test_host_power_action(constants.HOST_POWER_ACTION_REBOOT) def test_host_power_action_exception(self): self.assertRaises(NotImplementedError, self._hostops.host_power_action, constants.HOST_POWER_ACTION_STARTUP) def test_get_host_ip_addr(self): CONF.set_override('my_ip', None) self._hostops._hostutils.get_local_ips.return_value = [ self.FAKE_LOCAL_IP] response = self._hostops.get_host_ip_addr() self._hostops._hostutils.get_local_ips.assert_called_once_with() self.assertEqual(self.FAKE_LOCAL_IP, response) @mock.patch('time.strftime') def test_get_host_uptime(self, mock_time): self._hostops._hostutils.get_host_tick_count64.return_value = ( self.FAKE_TICK_COUNT) response = self._hostops.get_host_uptime() tdelta = datetime.timedelta(milliseconds=int(self.FAKE_TICK_COUNT)) expected = "%s up %s, 0 users, load average: 0, 0, 0" % ( str(mock_time()), str(tdelta)) self.assertEqual(expected, 
response) @mock.patch.object(hostops.HostOps, 'get_available_resource') def test_update_provider_tree(self, mock_get_avail_res): resources = mock.MagicMock() allocation_ratios = mock.MagicMock() provider_tree = mock.Mock() mock_get_avail_res.return_value = resources self.flags(reserved_host_disk_mb=1) exp_inventory = { orc.VCPU: { 'total': resources['vcpus'], 'min_unit': 1, 'max_unit': resources['vcpus'], 'step_size': 1, 'allocation_ratio': allocation_ratios[orc.VCPU], 'reserved': CONF.reserved_host_cpus, }, orc.MEMORY_MB: { 'total': resources['memory_mb'], 'min_unit': 1, 'max_unit': resources['memory_mb'], 'step_size': 1, 'allocation_ratio': allocation_ratios[orc.MEMORY_MB], 'reserved': CONF.reserved_host_memory_mb, }, orc.DISK_GB: { 'total': resources['local_gb'], 'min_unit': 1, 'max_unit': resources['local_gb'], 'step_size': 1, 'allocation_ratio': allocation_ratios[orc.DISK_GB], 'reserved': 1, }, } self._hostops.update_provider_tree( provider_tree, mock.sentinel.node_name, allocation_ratios, mock.sentinel.allocations) provider_tree.update_inventory.assert_called_once_with( mock.sentinel.node_name, exp_inventory) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/hyperv/test_imagecache.py0000664000175000017500000003211000000000000023627 0ustar00zuulzuul00000000000000# Copyright 2014 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import ddt import fixtures import mock from oslo_config import cfg from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import units from nova import exception from nova import objects from nova.tests.unit import fake_instance from nova.tests.unit.objects import test_flavor from nova.tests.unit.virt.hyperv import test_base from nova.virt.hyperv import constants from nova.virt.hyperv import imagecache CONF = cfg.CONF @ddt.ddt class ImageCacheTestCase(test_base.HyperVBaseTestCase): """Unit tests for the Hyper-V ImageCache class.""" FAKE_FORMAT = 'fake_format' FAKE_IMAGE_REF = 'fake_image_ref' FAKE_VHD_SIZE_GB = 1 def setUp(self): super(ImageCacheTestCase, self).setUp() self.context = 'fake-context' self.instance = fake_instance.fake_instance_obj(self.context) # utilsfactory will check the host OS version via get_hostutils, # in order to return the proper Utils Class, so it must be mocked. 
patched_get_hostutils = mock.patch.object(imagecache.utilsfactory, "get_hostutils") patched_get_vhdutils = mock.patch.object(imagecache.utilsfactory, "get_vhdutils") patched_get_hostutils.start() patched_get_vhdutils.start() self.addCleanup(patched_get_hostutils.stop) self.addCleanup(patched_get_vhdutils.stop) self.imagecache = imagecache.ImageCache() self.imagecache._pathutils = mock.MagicMock() self.imagecache._vhdutils = mock.MagicMock() self.tmpdir = self.useFixture(fixtures.TempDir()).path def _test_get_root_vhd_size_gb(self, old_flavor=True): if old_flavor: mock_flavor = objects.Flavor(**test_flavor.fake_flavor) self.instance.old_flavor = mock_flavor else: self.instance.old_flavor = None return self.imagecache._get_root_vhd_size_gb(self.instance) def test_get_root_vhd_size_gb_old_flavor(self): ret_val = self._test_get_root_vhd_size_gb() self.assertEqual(test_flavor.fake_flavor['root_gb'], ret_val) def test_get_root_vhd_size_gb(self): ret_val = self._test_get_root_vhd_size_gb(old_flavor=False) self.assertEqual(self.instance.flavor.root_gb, ret_val) @mock.patch.object(imagecache.ImageCache, '_get_root_vhd_size_gb') def test_resize_and_cache_vhd_smaller(self, mock_get_vhd_size_gb): self.imagecache._vhdutils.get_vhd_size.return_value = { 'VirtualSize': (self.FAKE_VHD_SIZE_GB + 1) * units.Gi } mock_get_vhd_size_gb.return_value = self.FAKE_VHD_SIZE_GB mock_internal_vhd_size = ( self.imagecache._vhdutils.get_internal_vhd_size_by_file_size) mock_internal_vhd_size.return_value = self.FAKE_VHD_SIZE_GB * units.Gi self.assertRaises(exception.FlavorDiskSmallerThanImage, self.imagecache._resize_and_cache_vhd, mock.sentinel.instance, mock.sentinel.vhd_path) self.imagecache._vhdutils.get_vhd_size.assert_called_once_with( mock.sentinel.vhd_path) mock_get_vhd_size_gb.assert_called_once_with(mock.sentinel.instance) mock_internal_vhd_size.assert_called_once_with( mock.sentinel.vhd_path, self.FAKE_VHD_SIZE_GB * units.Gi) def _prepare_get_cached_image(self, path_exists=False, use_cow=False, rescue_image_id=None): self.instance.image_ref = self.FAKE_IMAGE_REF self.imagecache._pathutils.get_base_vhd_dir.return_value = ( self.tmpdir) self.imagecache._pathutils.exists.return_value = path_exists self.imagecache._vhdutils.get_vhd_format.return_value = ( constants.DISK_FORMAT_VHD) CONF.set_override('use_cow_images', use_cow) image_file_name = rescue_image_id or self.FAKE_IMAGE_REF expected_path = os.path.join(self.tmpdir, image_file_name) expected_vhd_path = "%s.%s" % (expected_path, constants.DISK_FORMAT_VHD.lower()) return (expected_path, expected_vhd_path) @mock.patch.object(imagecache.images, 'fetch') def test_get_cached_image_with_fetch(self, mock_fetch): (expected_path, expected_vhd_path) = self._prepare_get_cached_image(False, False) result = self.imagecache.get_cached_image(self.context, self.instance) self.assertEqual(expected_vhd_path, result) mock_fetch.assert_called_once_with(self.context, self.FAKE_IMAGE_REF, expected_path) self.imagecache._vhdutils.get_vhd_format.assert_called_once_with( expected_path) self.imagecache._pathutils.rename.assert_called_once_with( expected_path, expected_vhd_path) @mock.patch.object(imagecache.images, 'fetch') def test_get_cached_image_with_fetch_exception(self, mock_fetch): (expected_path, expected_vhd_path) = self._prepare_get_cached_image(False, False) # path doesn't exist until fetched. 
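        # Passing a list as side_effect makes the mock return those values
        # on successive calls: exists() reports False before and during the
        # fetch and True afterwards, so the failed fetch is cleaned up via
        # the remove() call asserted below.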
self.imagecache._pathutils.exists.side_effect = [False, False, True] mock_fetch.side_effect = exception.InvalidImageRef( image_href=self.FAKE_IMAGE_REF) self.assertRaises(exception.InvalidImageRef, self.imagecache.get_cached_image, self.context, self.instance) self.imagecache._pathutils.remove.assert_called_once_with( expected_path) @mock.patch.object(imagecache.ImageCache, '_resize_and_cache_vhd') def test_get_cached_image_use_cow(self, mock_resize): (expected_path, expected_vhd_path) = self._prepare_get_cached_image(True, True) expected_resized_vhd_path = expected_vhd_path + 'x' mock_resize.return_value = expected_resized_vhd_path result = self.imagecache.get_cached_image(self.context, self.instance) self.assertEqual(expected_resized_vhd_path, result) mock_resize.assert_called_once_with(self.instance, expected_vhd_path) @mock.patch.object(imagecache.images, 'fetch') def test_cache_rescue_image_bigger_than_flavor(self, mock_fetch): fake_rescue_image_id = 'fake_rescue_image_id' self.imagecache._vhdutils.get_vhd_info.return_value = { 'VirtualSize': (self.instance.flavor.root_gb + 1) * units.Gi} (expected_path, expected_vhd_path) = self._prepare_get_cached_image( rescue_image_id=fake_rescue_image_id) self.assertRaises(exception.ImageUnacceptable, self.imagecache.get_cached_image, self.context, self.instance, fake_rescue_image_id) mock_fetch.assert_called_once_with(self.context, fake_rescue_image_id, expected_path) self.imagecache._vhdutils.get_vhd_info.assert_called_once_with( expected_vhd_path) @ddt.data(True, False) def test_age_and_verify_cached_images(self, remove_unused_base_images): self.flags(remove_unused_base_images=remove_unused_base_images, group='image_cache') fake_images = [mock.sentinel.FAKE_IMG1, mock.sentinel.FAKE_IMG2] fake_used_images = [mock.sentinel.FAKE_IMG1] self.imagecache.originals = fake_images self.imagecache.used_images = fake_used_images self.imagecache._update_image_timestamp = mock.Mock() self.imagecache._remove_if_old_image = mock.Mock() self.imagecache._age_and_verify_cached_images( mock.sentinel.FAKE_CONTEXT, mock.sentinel.all_instances, mock.sentinel.tmpdir) self.imagecache._update_image_timestamp.assert_called_once_with( mock.sentinel.FAKE_IMG1) if remove_unused_base_images: self.imagecache._remove_if_old_image.assert_called_once_with( mock.sentinel.FAKE_IMG2) else: self.imagecache._remove_if_old_image.assert_not_called() @mock.patch.object(imagecache.os, 'utime') @mock.patch.object(imagecache.ImageCache, '_get_image_backing_files') def test_update_image_timestamp(self, mock_get_backing_files, mock_utime): mock_get_backing_files.return_value = [mock.sentinel.backing_file, mock.sentinel.resized_file] self.imagecache._update_image_timestamp(mock.sentinel.image) mock_get_backing_files.assert_called_once_with(mock.sentinel.image) mock_utime.assert_has_calls([ mock.call(mock.sentinel.backing_file, None), mock.call(mock.sentinel.resized_file, None)]) def test_get_image_backing_files(self): image = 'fake-img' self.imagecache.unexplained_images = ['%s_42' % image, 'unexplained-img'] self.imagecache._pathutils.get_image_path.side_effect = [ mock.sentinel.base_file, mock.sentinel.resized_file] backing_files = self.imagecache._get_image_backing_files(image) self.assertEqual([mock.sentinel.base_file, mock.sentinel.resized_file], backing_files) self.imagecache._pathutils.get_image_path.assert_has_calls( [mock.call(image), mock.call('%s_42' % image)]) @mock.patch.object(imagecache.ImageCache, '_get_image_backing_files') def test_remove_if_old_image(self, 
mock_get_backing_files): mock_get_backing_files.return_value = [mock.sentinel.backing_file, mock.sentinel.resized_file] self.imagecache._pathutils.get_age_of_file.return_value = 3600 self.imagecache._remove_if_old_image(mock.sentinel.image) calls = [mock.call(mock.sentinel.backing_file), mock.call(mock.sentinel.resized_file)] self.imagecache._pathutils.get_age_of_file.assert_has_calls(calls) mock_get_backing_files.assert_called_once_with(mock.sentinel.image) def test_remove_old_image(self): fake_img_path = os.path.join(self.tmpdir, self.FAKE_IMAGE_REF) self.imagecache._remove_old_image(fake_img_path) self.imagecache._pathutils.remove.assert_called_once_with( fake_img_path) @mock.patch.object(imagecache.ImageCache, '_age_and_verify_cached_images') @mock.patch.object(imagecache.ImageCache, '_list_base_images') @mock.patch.object(imagecache.ImageCache, '_list_running_instances') def test_update(self, mock_list_instances, mock_list_images, mock_age_cached_images): base_vhd_dir = self.imagecache._pathutils.get_base_vhd_dir.return_value mock_list_instances.return_value = { 'used_images': {mock.sentinel.image: mock.sentinel.instances}} mock_list_images.return_value = { 'originals': [mock.sentinel.original_image], 'unexplained_images': [mock.sentinel.unexplained_image]} self.imagecache.update(mock.sentinel.context, mock.sentinel.all_instances) self.assertEqual([mock.sentinel.image], list(self.imagecache.used_images)) self.assertEqual([mock.sentinel.original_image], self.imagecache.originals) self.assertEqual([mock.sentinel.unexplained_image], self.imagecache.unexplained_images) mock_list_instances.assert_called_once_with( mock.sentinel.context, mock.sentinel.all_instances) mock_list_images.assert_called_once_with(base_vhd_dir) mock_age_cached_images.assert_called_once_with( mock.sentinel.context, mock.sentinel.all_instances, base_vhd_dir) @mock.patch.object(imagecache.os, 'listdir') def test_list_base_images(self, mock_listdir): original_image = uuids.fake unexplained_image = 'just-an-image' ignored_file = 'foo.bar' mock_listdir.return_value = ['%s.VHD' % original_image, '%s.vhdx' % unexplained_image, ignored_file] images = self.imagecache._list_base_images(mock.sentinel.base_dir) self.assertEqual([original_image], images['originals']) self.assertEqual([unexplained_image], images['unexplained_images']) mock_listdir.assert_called_once_with(mock.sentinel.base_dir) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/hyperv/test_livemigrationops.py0000664000175000017500000002533600000000000025170 0ustar00zuulzuul00000000000000# Copyright 2014 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock from os_win import exceptions as os_win_exc from oslo_config import cfg from nova import exception from nova.objects import migrate_data as migrate_data_obj from nova.tests.unit import fake_instance from nova.tests.unit.virt.hyperv import test_base from nova.virt.hyperv import livemigrationops from nova.virt.hyperv import serialconsoleops CONF = cfg.CONF class LiveMigrationOpsTestCase(test_base.HyperVBaseTestCase): """Unit tests for the Hyper-V LiveMigrationOps class.""" def setUp(self): super(LiveMigrationOpsTestCase, self).setUp() self.context = 'fake_context' self._livemigrops = livemigrationops.LiveMigrationOps() self._livemigrops._livemigrutils = mock.MagicMock() self._livemigrops._pathutils = mock.MagicMock() self._livemigrops._block_dev_man = mock.MagicMock() self._pathutils = self._livemigrops._pathutils @mock.patch.object(serialconsoleops.SerialConsoleOps, 'stop_console_handler') @mock.patch('nova.virt.hyperv.vmops.VMOps.copy_vm_dvd_disks') def _test_live_migration(self, mock_copy_dvd_disk, mock_stop_console_handler, side_effect=None, shared_storage=False, migrate_data_received=True, migrate_data_version='1.1'): mock_instance = fake_instance.fake_instance_obj(self.context) mock_post = mock.MagicMock() mock_recover = mock.MagicMock() mock_copy_logs = self._livemigrops._pathutils.copy_vm_console_logs fake_dest = mock.sentinel.DESTINATION mock_check_shared_inst_dir = ( self._pathutils.check_remote_instances_dir_shared) mock_check_shared_inst_dir.return_value = shared_storage self._livemigrops._livemigrutils.live_migrate_vm.side_effect = [ side_effect] if migrate_data_received: migrate_data = migrate_data_obj.HyperVLiveMigrateData() if migrate_data_version != '1.0': migrate_data.is_shared_instance_path = shared_storage else: migrate_data = None self._livemigrops.live_migration(context=self.context, instance_ref=mock_instance, dest=fake_dest, post_method=mock_post, recover_method=mock_recover, block_migration=( mock.sentinel.block_migr), migrate_data=migrate_data) if side_effect is os_win_exc.HyperVException: mock_recover.assert_called_once_with(self.context, mock_instance, fake_dest, migrate_data) mock_post.assert_not_called() else: post_call_args = mock_post.call_args_list self.assertEqual(1, len(post_call_args)) post_call_args_list = post_call_args[0][0] self.assertEqual((self.context, mock_instance, fake_dest, mock.sentinel.block_migr), post_call_args_list[:-1]) # The last argument, the migrate_data object, should be created # by the callee if not received. 
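            # Taking the last positional argument lets the test verify that
            # the driver built a HyperVLiveMigrateData object itself when the
            # caller passed none in (or passed an old, pre-1.1 version).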
migrate_data_arg = post_call_args_list[-1] self.assertIsInstance( migrate_data_arg, migrate_data_obj.HyperVLiveMigrateData) self.assertEqual(shared_storage, migrate_data_arg.is_shared_instance_path) if not migrate_data_received or migrate_data_version == '1.0': mock_check_shared_inst_dir.assert_called_once_with(fake_dest) else: self.assertFalse(mock_check_shared_inst_dir.called) mock_stop_console_handler.assert_called_once_with(mock_instance.name) if not shared_storage: mock_copy_logs.assert_called_once_with(mock_instance.name, fake_dest) mock_copy_dvd_disk.assert_called_once_with(mock_instance.name, fake_dest) else: self.assertFalse(mock_copy_logs.called) self.assertFalse(mock_copy_dvd_disk.called) mock_live_migr = self._livemigrops._livemigrutils.live_migrate_vm mock_live_migr.assert_called_once_with( mock_instance.name, fake_dest, migrate_disks=not shared_storage) def test_live_migration(self): self._test_live_migration(migrate_data_received=False) def test_live_migration_old_migrate_data_version(self): self._test_live_migration(migrate_data_version='1.0') def test_live_migration_exception(self): self._test_live_migration(side_effect=os_win_exc.HyperVException) def test_live_migration_shared_storage(self): self._test_live_migration(shared_storage=True) @mock.patch('nova.virt.hyperv.volumeops.VolumeOps.get_disk_path_mapping') @mock.patch('nova.virt.hyperv.imagecache.ImageCache.get_cached_image') @mock.patch('nova.virt.hyperv.volumeops.VolumeOps.connect_volumes') def _test_pre_live_migration(self, mock_initialize_connection, mock_get_cached_image, mock_get_disk_path_mapping, phys_disks_attached=True): mock_instance = fake_instance.fake_instance_obj(self.context) mock_instance.image_ref = "fake_image_ref" mock_get_disk_path_mapping.return_value = ( mock.sentinel.disk_path_mapping if phys_disks_attached else None) bdman = self._livemigrops._block_dev_man mock_is_boot_from_vol = bdman.is_boot_from_volume mock_is_boot_from_vol.return_value = None CONF.set_override('use_cow_images', True) self._livemigrops.pre_live_migration( self.context, mock_instance, block_device_info=mock.sentinel.BLOCK_INFO, network_info=mock.sentinel.NET_INFO) check_config = ( self._livemigrops._livemigrutils.check_live_migration_config) check_config.assert_called_once_with() mock_is_boot_from_vol.assert_called_once_with( mock.sentinel.BLOCK_INFO) mock_get_cached_image.assert_called_once_with(self.context, mock_instance) mock_initialize_connection.assert_called_once_with( mock.sentinel.BLOCK_INFO) mock_get_disk_path_mapping.assert_called_once_with( mock.sentinel.BLOCK_INFO) if phys_disks_attached: livemigrutils = self._livemigrops._livemigrutils livemigrutils.create_planned_vm.assert_called_once_with( mock_instance.name, mock_instance.host, mock.sentinel.disk_path_mapping) def test_pre_live_migration(self): self._test_pre_live_migration() def test_pre_live_migration_invalid_disk_mapping(self): self._test_pre_live_migration(phys_disks_attached=False) @mock.patch('nova.virt.hyperv.volumeops.VolumeOps.disconnect_volumes') def _test_post_live_migration(self, mock_disconnect_volumes, shared_storage=False): migrate_data = migrate_data_obj.HyperVLiveMigrateData( is_shared_instance_path=shared_storage) self._livemigrops.post_live_migration( self.context, mock.sentinel.instance, mock.sentinel.block_device_info, migrate_data) mock_disconnect_volumes.assert_called_once_with( mock.sentinel.block_device_info) mock_get_inst_dir = self._pathutils.get_instance_dir if not shared_storage: mock_get_inst_dir.assert_called_once_with( 
mock.sentinel.instance.name, create_dir=False, remove_dir=True) else: self.assertFalse(mock_get_inst_dir.called) def test_post_block_migration(self): self._test_post_live_migration() def test_post_live_migration_shared_storage(self): self._test_post_live_migration(shared_storage=True) @mock.patch.object(migrate_data_obj, 'HyperVLiveMigrateData') def test_check_can_live_migrate_destination(self, mock_migr_data_cls): mock_instance = fake_instance.fake_instance_obj(self.context) migr_data = self._livemigrops.check_can_live_migrate_destination( mock.sentinel.context, mock_instance, mock.sentinel.src_comp_info, mock.sentinel.dest_comp_info) mock_check_shared_inst_dir = ( self._pathutils.check_remote_instances_dir_shared) mock_check_shared_inst_dir.assert_called_once_with(mock_instance.host) self.assertEqual(mock_migr_data_cls.return_value, migr_data) self.assertEqual(mock_check_shared_inst_dir.return_value, migr_data.is_shared_instance_path) @mock.patch('nova.virt.hyperv.vmops.VMOps.plug_vifs') def test_post_live_migration_at_destination(self, mock_plug_vifs): self._livemigrops.post_live_migration_at_destination( self.context, mock.sentinel.instance, network_info=mock.sentinel.NET_INFO, block_migration=mock.sentinel.BLOCK_INFO) mock_plug_vifs.assert_called_once_with(mock.sentinel.instance, mock.sentinel.NET_INFO) def test_check_can_live_migrate_destination_exception(self): mock_instance = fake_instance.fake_instance_obj(self.context) mock_check = self._pathutils.check_remote_instances_dir_shared mock_check.side_effect = exception.FileNotFound(file_path='C:\\baddir') self.assertRaises( exception.MigrationPreCheckError, self._livemigrops.check_can_live_migrate_destination, mock.sentinel.context, mock_instance, mock.sentinel.src_comp_info, mock.sentinel.dest_comp_info) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/hyperv/test_migrationops.py0000664000175000017500000006256300000000000024313 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import os import mock from os_win import exceptions as os_win_exc from oslo_utils import units from nova import exception from nova import objects from nova import test from nova.tests.unit import fake_instance from nova.tests.unit.virt.hyperv import test_base from nova.virt.hyperv import constants from nova.virt.hyperv import migrationops class MigrationOpsTestCase(test_base.HyperVBaseTestCase): """Unit tests for the Hyper-V MigrationOps class.""" _FAKE_DISK = 'fake_disk' _FAKE_TIMEOUT = 10 _FAKE_RETRY_INTERVAL = 5 def setUp(self): super(MigrationOpsTestCase, self).setUp() self.context = 'fake-context' self._migrationops = migrationops.MigrationOps() self._migrationops._hostutils = mock.MagicMock() self._migrationops._vmops = mock.MagicMock() self._migrationops._vmutils = mock.MagicMock() self._migrationops._pathutils = mock.Mock() self._migrationops._vhdutils = mock.MagicMock() self._migrationops._pathutils = mock.MagicMock() self._migrationops._volumeops = mock.MagicMock() self._migrationops._imagecache = mock.MagicMock() self._migrationops._block_dev_man = mock.MagicMock() def _check_migrate_disk_files(self, shared_storage=False): instance_path = 'fake/instance/path' dest_instance_path = 'remote/instance/path' self._migrationops._pathutils.get_instance_dir.side_effect = ( instance_path, dest_instance_path) get_revert_dir = ( self._migrationops._pathutils.get_instance_migr_revert_dir) check_shared_storage = ( self._migrationops._pathutils.check_dirs_shared_storage) check_shared_storage.return_value = shared_storage self._migrationops._pathutils.exists.return_value = True fake_disk_files = [os.path.join(instance_path, disk_name) for disk_name in ['root.vhd', 'configdrive.vhd', 'configdrive.iso', 'eph0.vhd', 'eph1.vhdx']] expected_get_dir = [mock.call(mock.sentinel.instance_name), mock.call(mock.sentinel.instance_name, mock.sentinel.dest_path)] expected_move_calls = [mock.call(instance_path, get_revert_dir.return_value)] self._migrationops._migrate_disk_files( instance_name=mock.sentinel.instance_name, disk_files=fake_disk_files, dest=mock.sentinel.dest_path) self._migrationops._pathutils.exists.assert_called_once_with( dest_instance_path) check_shared_storage.assert_called_once_with( instance_path, dest_instance_path) get_revert_dir.assert_called_with(mock.sentinel.instance_name, remove_dir=True, create_dir=True) if shared_storage: fake_dest_path = '%s_tmp' % instance_path expected_move_calls.append(mock.call(fake_dest_path, instance_path)) self._migrationops._pathutils.rmtree.assert_called_once_with( fake_dest_path) else: fake_dest_path = dest_instance_path self._migrationops._pathutils.makedirs.assert_called_once_with( fake_dest_path) check_remove_dir = self._migrationops._pathutils.check_remove_dir check_remove_dir.assert_called_once_with(fake_dest_path) self._migrationops._pathutils.get_instance_dir.assert_has_calls( expected_get_dir) self._migrationops._pathutils.copy.assert_has_calls( mock.call(fake_disk_file, fake_dest_path) for fake_disk_file in fake_disk_files) self.assertEqual(len(fake_disk_files), self._migrationops._pathutils.copy.call_count) self._migrationops._pathutils.move_folder_files.assert_has_calls( expected_move_calls) def test_migrate_disk_files(self): self._check_migrate_disk_files() def test_migrate_disk_files_same_host(self): self._check_migrate_disk_files(shared_storage=True) @mock.patch.object(migrationops.MigrationOps, '_cleanup_failed_disk_migration') def test_migrate_disk_files_exception(self, mock_cleanup): instance_path = 'fake/instance/path' 
fake_dest_path = '%s_tmp' % instance_path self._migrationops._pathutils.get_instance_dir.return_value = ( instance_path) get_revert_dir = ( self._migrationops._pathutils.get_instance_migr_revert_dir) self._migrationops._hostutils.get_local_ips.return_value = [ mock.sentinel.dest_path] self._migrationops._pathutils.copy.side_effect = IOError( "Expected exception.") self.assertRaises(IOError, self._migrationops._migrate_disk_files, instance_name=mock.sentinel.instance_name, disk_files=[self._FAKE_DISK], dest=mock.sentinel.dest_path) mock_cleanup.assert_called_once_with(instance_path, get_revert_dir.return_value, fake_dest_path) def test_cleanup_failed_disk_migration(self): self._migrationops._pathutils.exists.return_value = True self._migrationops._cleanup_failed_disk_migration( instance_path=mock.sentinel.instance_path, revert_path=mock.sentinel.revert_path, dest_path=mock.sentinel.dest_path) expected = [mock.call(mock.sentinel.dest_path), mock.call(mock.sentinel.revert_path)] self._migrationops._pathutils.exists.assert_has_calls(expected) move_folder_files = self._migrationops._pathutils.move_folder_files move_folder_files.assert_called_once_with( mock.sentinel.revert_path, mock.sentinel.instance_path) self._migrationops._pathutils.rmtree.assert_has_calls([ mock.call(mock.sentinel.dest_path), mock.call(mock.sentinel.revert_path)]) def test_check_target_flavor(self): mock_instance = fake_instance.fake_instance_obj(self.context) mock_instance.flavor.root_gb = 1 mock_flavor = mock.MagicMock(root_gb=0) self.assertRaises(exception.InstanceFaultRollback, self._migrationops._check_target_flavor, mock_instance, mock_flavor) def test_check_and_attach_config_drive(self): mock_instance = fake_instance.fake_instance_obj( self.context, expected_attrs=['system_metadata']) mock_instance.config_drive = 'True' self._migrationops._check_and_attach_config_drive( mock_instance, mock.sentinel.vm_gen) self._migrationops._vmops.attach_config_drive.assert_called_once_with( mock_instance, self._migrationops._pathutils.lookup_configdrive_path.return_value, mock.sentinel.vm_gen) def test_check_and_attach_config_drive_unknown_path(self): instance = fake_instance.fake_instance_obj( self.context, expected_attrs=['system_metadata']) instance.config_drive = 'True' self._migrationops._pathutils.lookup_configdrive_path.return_value = ( None) self.assertRaises(exception.ConfigDriveNotFound, self._migrationops._check_and_attach_config_drive, instance, mock.sentinel.FAKE_VM_GEN) @mock.patch.object(migrationops.MigrationOps, '_migrate_disk_files') @mock.patch.object(migrationops.MigrationOps, '_check_target_flavor') def test_migrate_disk_and_power_off(self, mock_check_flavor, mock_migrate_disk_files): instance = fake_instance.fake_instance_obj(self.context) flavor = mock.MagicMock() network_info = mock.MagicMock() disk_files = [mock.MagicMock()] volume_drives = [mock.MagicMock()] mock_get_vm_st_path = self._migrationops._vmutils.get_vm_storage_paths mock_get_vm_st_path.return_value = (disk_files, volume_drives) self._migrationops.migrate_disk_and_power_off( self.context, instance, mock.sentinel.FAKE_DEST, flavor, network_info, mock.sentinel.bdi, self._FAKE_TIMEOUT, self._FAKE_RETRY_INTERVAL) mock_check_flavor.assert_called_once_with(instance, flavor) self._migrationops._vmops.power_off.assert_called_once_with( instance, self._FAKE_TIMEOUT, self._FAKE_RETRY_INTERVAL) mock_get_vm_st_path.assert_called_once_with(instance.name) mock_migrate_disk_files.assert_called_once_with( instance.name, disk_files, mock.sentinel.FAKE_DEST) 
self._migrationops._vmops.destroy.assert_called_once_with( instance, network_info, mock.sentinel.bdi, destroy_disks=False) def test_confirm_migration(self): mock_instance = fake_instance.fake_instance_obj(self.context) self._migrationops.confirm_migration( context=self.context, migration=mock.sentinel.migration, instance=mock_instance, network_info=mock.sentinel.network_info) get_instance_migr_revert_dir = ( self._migrationops._pathutils.get_instance_migr_revert_dir) get_instance_migr_revert_dir.assert_called_with(mock_instance.name, remove_dir=True) def test_revert_migration_files(self): instance_path = ( self._migrationops._pathutils.get_instance_dir.return_value) get_revert_dir = ( self._migrationops._pathutils.get_instance_migr_revert_dir) self._migrationops._revert_migration_files( instance_name=mock.sentinel.instance_name) self._migrationops._pathutils.get_instance_dir.assert_called_once_with( mock.sentinel.instance_name, create_dir=False, remove_dir=True) get_revert_dir.assert_called_with(mock.sentinel.instance_name) self._migrationops._pathutils.rename.assert_called_once_with( get_revert_dir.return_value, instance_path) @mock.patch.object(migrationops.MigrationOps, '_check_and_attach_config_drive') @mock.patch.object(migrationops.MigrationOps, '_revert_migration_files') @mock.patch.object(migrationops.MigrationOps, '_check_ephemeral_disks') @mock.patch.object(objects.ImageMeta, "from_instance") def _check_finish_revert_migration(self, mock_image, mock_check_eph_disks, mock_revert_migration_files, mock_check_attach_config_drive, disk_type=constants.DISK): mock_image.return_value = objects.ImageMeta.from_dict({}) mock_instance = fake_instance.fake_instance_obj(self.context) root_device = {'type': disk_type} block_device_info = {'root_disk': root_device, 'ephemerals': []} self._migrationops.finish_revert_migration( context=self.context, instance=mock_instance, network_info=mock.sentinel.network_info, block_device_info=block_device_info, power_on=True) mock_revert_migration_files.assert_called_once_with( mock_instance.name) if root_device['type'] == constants.DISK: lookup_root_vhd = ( self._migrationops._pathutils.lookup_root_vhd_path) lookup_root_vhd.assert_called_once_with(mock_instance.name) self.assertEqual(lookup_root_vhd.return_value, root_device['path']) get_image_vm_gen = self._migrationops._vmops.get_image_vm_generation get_image_vm_gen.assert_called_once_with( mock_instance.uuid, test.MatchType(objects.ImageMeta)) self._migrationops._vmops.create_instance.assert_called_once_with( mock_instance, mock.sentinel.network_info, root_device, block_device_info, get_image_vm_gen.return_value, mock_image.return_value) mock_check_attach_config_drive.assert_called_once_with( mock_instance, get_image_vm_gen.return_value) self._migrationops._vmops.set_boot_order.assert_called_once_with( mock_instance.name, get_image_vm_gen.return_value, block_device_info) self._migrationops._vmops.power_on.assert_called_once_with( mock_instance, network_info=mock.sentinel.network_info) def test_finish_revert_migration_boot_from_volume(self): self._check_finish_revert_migration(disk_type=constants.VOLUME) def test_finish_revert_migration_boot_from_disk(self): self._check_finish_revert_migration(disk_type=constants.DISK) @mock.patch.object(objects.ImageMeta, "from_instance") def test_finish_revert_migration_no_root_vhd(self, mock_image): mock_instance = fake_instance.fake_instance_obj(self.context) self._migrationops._pathutils.lookup_root_vhd_path.return_value = None bdi = {'root_disk': {'type': 
constants.DISK}, 'ephemerals': []} self.assertRaises( exception.DiskNotFound, self._migrationops.finish_revert_migration, self.context, mock_instance, mock.sentinel.network_info, bdi, True) def test_merge_base_vhd(self): fake_diff_vhd_path = 'fake/diff/path' fake_base_vhd_path = 'fake/base/path' base_vhd_copy_path = os.path.join( os.path.dirname(fake_diff_vhd_path), os.path.basename(fake_base_vhd_path)) self._migrationops._merge_base_vhd(diff_vhd_path=fake_diff_vhd_path, base_vhd_path=fake_base_vhd_path) self._migrationops._pathutils.copyfile.assert_called_once_with( fake_base_vhd_path, base_vhd_copy_path) recon_parent_vhd = self._migrationops._vhdutils.reconnect_parent_vhd recon_parent_vhd.assert_called_once_with(fake_diff_vhd_path, base_vhd_copy_path) self._migrationops._vhdutils.merge_vhd.assert_called_once_with( fake_diff_vhd_path) self._migrationops._pathutils.rename.assert_called_once_with( base_vhd_copy_path, fake_diff_vhd_path) def test_merge_base_vhd_exception(self): fake_diff_vhd_path = 'fake/diff/path' fake_base_vhd_path = 'fake/base/path' base_vhd_copy_path = os.path.join( os.path.dirname(fake_diff_vhd_path), os.path.basename(fake_base_vhd_path)) self._migrationops._vhdutils.reconnect_parent_vhd.side_effect = ( os_win_exc.HyperVException) self._migrationops._pathutils.exists.return_value = True self.assertRaises(os_win_exc.HyperVException, self._migrationops._merge_base_vhd, fake_diff_vhd_path, fake_base_vhd_path) self._migrationops._pathutils.exists.assert_called_once_with( base_vhd_copy_path) self._migrationops._pathutils.remove.assert_called_once_with( base_vhd_copy_path) @mock.patch.object(migrationops.MigrationOps, '_resize_vhd') def test_check_resize_vhd(self, mock_resize_vhd): self._migrationops._check_resize_vhd( vhd_path=mock.sentinel.vhd_path, vhd_info={'VirtualSize': 1}, new_size=2) mock_resize_vhd.assert_called_once_with(mock.sentinel.vhd_path, 2) def test_check_resize_vhd_exception(self): self.assertRaises(exception.CannotResizeDisk, self._migrationops._check_resize_vhd, mock.sentinel.vhd_path, {'VirtualSize': 1}, 0) @mock.patch.object(migrationops.MigrationOps, '_merge_base_vhd') def test_resize_vhd(self, mock_merge_base_vhd): fake_vhd_path = 'fake/path.vhd' new_vhd_size = 2 self._migrationops._resize_vhd(vhd_path=fake_vhd_path, new_size=new_vhd_size) get_vhd_parent_path = self._migrationops._vhdutils.get_vhd_parent_path get_vhd_parent_path.assert_called_once_with(fake_vhd_path) mock_merge_base_vhd.assert_called_once_with( fake_vhd_path, self._migrationops._vhdutils.get_vhd_parent_path.return_value) self._migrationops._vhdutils.resize_vhd.assert_called_once_with( fake_vhd_path, new_vhd_size) def test_check_base_disk(self): mock_instance = fake_instance.fake_instance_obj(self.context) fake_src_vhd_path = 'fake/src/path' fake_base_vhd = 'fake/vhd' get_cached_image = self._migrationops._imagecache.get_cached_image get_cached_image.return_value = fake_base_vhd self._migrationops._check_base_disk( context=self.context, instance=mock_instance, diff_vhd_path=mock.sentinel.diff_vhd_path, src_base_disk_path=fake_src_vhd_path) get_cached_image.assert_called_once_with(self.context, mock_instance) recon_parent_vhd = self._migrationops._vhdutils.reconnect_parent_vhd recon_parent_vhd.assert_called_once_with( mock.sentinel.diff_vhd_path, fake_base_vhd) @mock.patch.object(migrationops.MigrationOps, '_check_and_attach_config_drive') @mock.patch.object(migrationops.MigrationOps, '_check_base_disk') @mock.patch.object(migrationops.MigrationOps, '_check_resize_vhd') 
@mock.patch.object(migrationops.MigrationOps, '_check_ephemeral_disks') def _check_finish_migration(self, mock_check_eph_disks, mock_check_resize_vhd, mock_check_base_disk, mock_check_attach_config_drive, disk_type=constants.DISK): mock_instance = fake_instance.fake_instance_obj(self.context) mock_instance.flavor.ephemeral_gb = 1 root_device = {'type': disk_type} block_device_info = {'root_disk': root_device, 'ephemerals': []} lookup_root_vhd = self._migrationops._pathutils.lookup_root_vhd_path get_vhd_info = self._migrationops._vhdutils.get_vhd_info mock_vhd_info = get_vhd_info.return_value expected_check_resize = [] expected_get_info = [] self._migrationops.finish_migration( context=self.context, migration=mock.sentinel.migration, instance=mock_instance, disk_info=mock.sentinel.disk_info, network_info=mock.sentinel.network_info, image_meta=mock.sentinel.image_meta, resize_instance=True, block_device_info=block_device_info) if root_device['type'] == constants.DISK: root_device_path = lookup_root_vhd.return_value lookup_root_vhd.assert_called_with(mock_instance.name) expected_get_info = [mock.call(root_device_path)] mock_vhd_info.get.assert_called_once_with("ParentPath") mock_check_base_disk.assert_called_once_with( self.context, mock_instance, root_device_path, mock_vhd_info.get.return_value) expected_check_resize.append( mock.call(root_device_path, mock_vhd_info, mock_instance.flavor.root_gb * units.Gi)) ephemerals = block_device_info['ephemerals'] mock_check_eph_disks.assert_called_once_with( mock_instance, ephemerals, True) mock_check_resize_vhd.assert_has_calls(expected_check_resize) self._migrationops._vhdutils.get_vhd_info.assert_has_calls( expected_get_info) get_image_vm_gen = self._migrationops._vmops.get_image_vm_generation get_image_vm_gen.assert_called_once_with(mock_instance.uuid, mock.sentinel.image_meta) self._migrationops._vmops.create_instance.assert_called_once_with( mock_instance, mock.sentinel.network_info, root_device, block_device_info, get_image_vm_gen.return_value, mock.sentinel.image_meta) mock_check_attach_config_drive.assert_called_once_with( mock_instance, get_image_vm_gen.return_value) self._migrationops._vmops.set_boot_order.assert_called_once_with( mock_instance.name, get_image_vm_gen.return_value, block_device_info) self._migrationops._vmops.power_on.assert_called_once_with( mock_instance, network_info=mock.sentinel.network_info) def test_finish_migration(self): self._check_finish_migration(disk_type=constants.DISK) def test_finish_migration_boot_from_volume(self): self._check_finish_migration(disk_type=constants.VOLUME) def test_finish_migration_no_root(self): mock_instance = fake_instance.fake_instance_obj(self.context) self._migrationops._pathutils.lookup_root_vhd_path.return_value = None bdi = {'root_disk': {'type': constants.DISK}, 'ephemerals': []} self.assertRaises(exception.DiskNotFound, self._migrationops.finish_migration, self.context, mock.sentinel.migration, mock_instance, mock.sentinel.disk_info, mock.sentinel.network_info, mock.sentinel.image_meta, True, bdi, True) @mock.patch.object(migrationops.MigrationOps, '_check_resize_vhd') @mock.patch.object(migrationops.LOG, 'warning') def test_check_ephemeral_disks_multiple_eph_warn(self, mock_warn, mock_check_resize_vhd): mock_instance = fake_instance.fake_instance_obj(self.context) mock_instance.ephemeral_gb = 3 mock_ephemerals = [{'size': 1}, {'size': 1}] self._migrationops._check_ephemeral_disks(mock_instance, mock_ephemerals, True) mock_warn.assert_called_once_with( "Cannot resize multiple 
ephemeral disks for instance.", instance=mock_instance) def test_check_ephemeral_disks_exception(self): mock_instance = fake_instance.fake_instance_obj(self.context) mock_ephemerals = [dict()] lookup_eph_path = ( self._migrationops._pathutils.lookup_ephemeral_vhd_path) lookup_eph_path.return_value = None self.assertRaises(exception.DiskNotFound, self._migrationops._check_ephemeral_disks, mock_instance, mock_ephemerals) @mock.patch.object(migrationops.MigrationOps, '_check_resize_vhd') def _test_check_ephemeral_disks(self, mock_check_resize_vhd, existing_eph_path=None, new_eph_size=42): mock_instance = fake_instance.fake_instance_obj(self.context) mock_instance.ephemeral_gb = new_eph_size eph = {} mock_ephemerals = [eph] mock_pathutils = self._migrationops._pathutils lookup_eph_path = mock_pathutils.lookup_ephemeral_vhd_path lookup_eph_path.return_value = existing_eph_path mock_get_eph_vhd_path = mock_pathutils.get_ephemeral_vhd_path mock_get_eph_vhd_path.return_value = mock.sentinel.get_path mock_vhdutils = self._migrationops._vhdutils mock_get_vhd_format = mock_vhdutils.get_best_supported_vhd_format mock_get_vhd_format.return_value = mock.sentinel.vhd_format self._migrationops._check_ephemeral_disks(mock_instance, mock_ephemerals, True) self.assertEqual(mock_instance.ephemeral_gb, eph['size']) if not existing_eph_path: mock_vmops = self._migrationops._vmops mock_vmops.create_ephemeral_disk.assert_called_once_with( mock_instance.name, eph) self.assertEqual(mock.sentinel.vhd_format, eph['format']) self.assertEqual(mock.sentinel.get_path, eph['path']) elif new_eph_size: mock_check_resize_vhd.assert_called_once_with( existing_eph_path, self._migrationops._vhdutils.get_vhd_info.return_value, mock_instance.ephemeral_gb * units.Gi) self.assertEqual(existing_eph_path, eph['path']) else: self._migrationops._pathutils.remove.assert_called_once_with( existing_eph_path) def test_check_ephemeral_disks_create(self): self._test_check_ephemeral_disks() def test_check_ephemeral_disks_resize(self): self._test_check_ephemeral_disks(existing_eph_path=mock.sentinel.path) def test_check_ephemeral_disks_remove(self): self._test_check_ephemeral_disks(existing_eph_path=mock.sentinel.path, new_eph_size=0) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/virt/hyperv/test_pathutils.py0000664000175000017500000002302400000000000023602 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import os import time import mock from six.moves import builtins from nova import exception from nova.tests.unit.virt.hyperv import test_base from nova.virt.hyperv import constants from nova.virt.hyperv import pathutils class PathUtilsTestCase(test_base.HyperVBaseTestCase): """Unit tests for the Hyper-V PathUtils class.""" def setUp(self): super(PathUtilsTestCase, self).setUp() self.fake_instance_dir = os.path.join('C:', 'fake_instance_dir') self.fake_instance_name = 'fake_instance_name' self._pathutils = pathutils.PathUtils() def _mock_lookup_configdrive_path(self, ext, rescue=False): self._pathutils.get_instance_dir = mock.MagicMock( return_value=self.fake_instance_dir) def mock_exists(*args, **kwargs): path = args[0] return True if path[(path.rfind('.') + 1):] == ext else False self._pathutils.exists = mock_exists configdrive_path = self._pathutils.lookup_configdrive_path( self.fake_instance_name, rescue) return configdrive_path def _test_lookup_configdrive_path(self, rescue=False): configdrive_name = 'configdrive' if rescue: configdrive_name += '-rescue' for format_ext in constants.DISK_FORMAT_MAP: configdrive_path = self._mock_lookup_configdrive_path(format_ext, rescue) expected_path = os.path.join(self.fake_instance_dir, configdrive_name + '.' + format_ext) self.assertEqual(expected_path, configdrive_path) def test_lookup_configdrive_path(self): self._test_lookup_configdrive_path() def test_lookup_rescue_configdrive_path(self): self._test_lookup_configdrive_path(rescue=True) def test_lookup_configdrive_path_non_exist(self): self._pathutils.get_instance_dir = mock.MagicMock( return_value=self.fake_instance_dir) self._pathutils.exists = mock.MagicMock(return_value=False) configdrive_path = self._pathutils.lookup_configdrive_path( self.fake_instance_name) self.assertIsNone(configdrive_path) def test_get_instances_dir_local(self): self.flags(instances_path=self.fake_instance_dir) instances_dir = self._pathutils.get_instances_dir() self.assertEqual(self.fake_instance_dir, instances_dir) def test_get_instances_dir_remote_instance_share(self): # The Hyper-V driver allows using a pre-configured share exporting # the instances dir. The same share name should be used across nodes. 
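        # In that case the remote instance directory resolves to a UNC path
        # of the form \\<remote_server>\<instances_path_share>, which is
        # exactly what expected_instance_dir is built from below.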
fake_instances_dir_share = 'fake_instances_dir_share' fake_remote = 'fake_remote' expected_instance_dir = r'\\%s\%s' % (fake_remote, fake_instances_dir_share) self.flags(instances_path_share=fake_instances_dir_share, group='hyperv') instances_dir = self._pathutils.get_instances_dir( remote_server=fake_remote) self.assertEqual(expected_instance_dir, instances_dir) def test_get_instances_dir_administrative_share(self): self.flags(instances_path=r'C:\fake_instance_dir') fake_remote = 'fake_remote' expected_instance_dir = r'\\fake_remote\C$\fake_instance_dir' instances_dir = self._pathutils.get_instances_dir( remote_server=fake_remote) self.assertEqual(expected_instance_dir, instances_dir) def test_get_instances_dir_unc_path(self): fake_instance_dir = r'\\fake_addr\fake_share\fake_instance_dir' self.flags(instances_path=fake_instance_dir) fake_remote = 'fake_remote' instances_dir = self._pathutils.get_instances_dir( remote_server=fake_remote) self.assertEqual(fake_instance_dir, instances_dir) @mock.patch('os.path.join') def test_get_instances_sub_dir(self, fake_path_join): class WindowsError(Exception): def __init__(self, winerror=None): self.winerror = winerror fake_dir_name = "fake_dir_name" fake_windows_error = WindowsError self._pathutils.check_create_dir = mock.MagicMock( side_effect=WindowsError(pathutils.ERROR_INVALID_NAME)) with mock.patch.object(builtins, 'WindowsError', fake_windows_error, create=True): self.assertRaises(exception.AdminRequired, self._pathutils._get_instances_sub_dir, fake_dir_name) def test_copy_vm_console_logs(self): fake_local_logs = [mock.sentinel.log_path, mock.sentinel.archived_log_path] fake_remote_logs = [mock.sentinel.remote_log_path, mock.sentinel.remote_archived_log_path] self._pathutils.exists = mock.Mock(return_value=True) self._pathutils.copy = mock.Mock() self._pathutils.get_vm_console_log_paths = mock.Mock( side_effect=[fake_local_logs, fake_remote_logs]) self._pathutils.copy_vm_console_logs(mock.sentinel.instance_name, mock.sentinel.dest_host) self._pathutils.get_vm_console_log_paths.assert_has_calls( [mock.call(mock.sentinel.instance_name), mock.call(mock.sentinel.instance_name, remote_server=mock.sentinel.dest_host)]) self._pathutils.copy.assert_has_calls([ mock.call(mock.sentinel.log_path, mock.sentinel.remote_log_path), mock.call(mock.sentinel.archived_log_path, mock.sentinel.remote_archived_log_path)]) @mock.patch.object(pathutils.PathUtils, 'get_base_vhd_dir') @mock.patch.object(pathutils.PathUtils, 'exists') def test_get_image_path(self, mock_exists, mock_get_base_vhd_dir): fake_image_name = 'fake_image_name' mock_exists.side_effect = [True, False] mock_get_base_vhd_dir.return_value = 'fake_base_dir' res = self._pathutils.get_image_path(fake_image_name) mock_get_base_vhd_dir.assert_called_once_with() self.assertEqual(res, os.path.join('fake_base_dir', 'fake_image_name.vhd')) @mock.patch('os.path.getmtime') @mock.patch.object(pathutils, 'time') def test_get_age_of_file(self, mock_time, mock_getmtime): mock_time.time.return_value = time.time() mock_getmtime.return_value = mock_time.time.return_value - 42 actual_age = self._pathutils.get_age_of_file(mock.sentinel.filename) self.assertEqual(42, actual_age) mock_time.time.assert_called_once_with() mock_getmtime.assert_called_once_with(mock.sentinel.filename) @mock.patch('os.path.exists') @mock.patch('tempfile.NamedTemporaryFile') def test_check_dirs_shared_storage(self, mock_named_tempfile, mock_exists): fake_src_dir = 'fake_src_dir' fake_dest_dir = 'fake_dest_dir' mock_exists.return_value = True 
mock_tmpfile = mock_named_tempfile.return_value.__enter__.return_value mock_tmpfile.name = 'fake_tmp_fname' expected_src_tmp_path = os.path.join(fake_src_dir, mock_tmpfile.name) self._pathutils.check_dirs_shared_storage( fake_src_dir, fake_dest_dir) mock_named_tempfile.assert_called_once_with(dir=fake_dest_dir) mock_exists.assert_called_once_with(expected_src_tmp_path) @mock.patch('os.path.exists') @mock.patch('tempfile.NamedTemporaryFile') def test_check_dirs_shared_storage_exception(self, mock_named_tempfile, mock_exists): fake_src_dir = 'fake_src_dir' fake_dest_dir = 'fake_dest_dir' mock_exists.return_value = True mock_named_tempfile.side_effect = OSError('not exist') self.assertRaises(exception.FileNotFound, self._pathutils.check_dirs_shared_storage, fake_src_dir, fake_dest_dir) @mock.patch.object(pathutils.PathUtils, 'check_dirs_shared_storage') @mock.patch.object(pathutils.PathUtils, 'get_instances_dir') def test_check_remote_instances_shared(self, mock_get_instances_dir, mock_check_dirs_shared_storage): mock_get_instances_dir.side_effect = [mock.sentinel.local_inst_dir, mock.sentinel.remote_inst_dir] shared_storage = self._pathutils.check_remote_instances_dir_shared( mock.sentinel.dest) self.assertEqual(mock_check_dirs_shared_storage.return_value, shared_storage) mock_get_instances_dir.assert_has_calls( [mock.call(), mock.call(mock.sentinel.dest)]) mock_check_dirs_shared_storage.assert_called_once_with( mock.sentinel.local_inst_dir, mock.sentinel.remote_inst_dir) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/hyperv/test_rdpconsoleops.py0000664000175000017500000000335600000000000024465 0ustar00zuulzuul00000000000000# Copyright 2015 Cloudbase Solutions SRL # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Unit tests for the Hyper-V RDPConsoleOps. 
""" import mock from nova.tests.unit.virt.hyperv import test_base from nova.virt.hyperv import rdpconsoleops class RDPConsoleOpsTestCase(test_base.HyperVBaseTestCase): def setUp(self): super(RDPConsoleOpsTestCase, self).setUp() self.rdpconsoleops = rdpconsoleops.RDPConsoleOps() self.rdpconsoleops._hostops = mock.MagicMock() self.rdpconsoleops._vmutils = mock.MagicMock() self.rdpconsoleops._rdpconsoleutils = mock.MagicMock() def test_get_rdp_console(self): mock_get_host_ip = self.rdpconsoleops._hostops.get_host_ip_addr mock_get_rdp_port = ( self.rdpconsoleops._rdpconsoleutils.get_rdp_console_port) mock_get_vm_id = self.rdpconsoleops._vmutils.get_vm_id connect_info = self.rdpconsoleops.get_rdp_console(mock.DEFAULT) self.assertEqual(mock_get_host_ip.return_value, connect_info.host) self.assertEqual(mock_get_rdp_port.return_value, connect_info.port) self.assertEqual(mock_get_vm_id.return_value, connect_info.internal_access_path) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/hyperv/test_serialconsolehandler.py0000664000175000017500000002426300000000000025773 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock from nova import exception from nova.tests.unit.virt.hyperv import test_base from nova.virt.hyperv import constants from nova.virt.hyperv import pathutils from nova.virt.hyperv import serialconsolehandler from nova.virt.hyperv import serialproxy class SerialConsoleHandlerTestCase(test_base.HyperVBaseTestCase): @mock.patch.object(pathutils.PathUtils, 'get_vm_console_log_paths') def setUp(self, mock_get_log_paths): super(SerialConsoleHandlerTestCase, self).setUp() mock_get_log_paths.return_value = [mock.sentinel.log_path] self._consolehandler = serialconsolehandler.SerialConsoleHandler( mock.sentinel.instance_name) self._consolehandler._log_path = mock.sentinel.log_path self._consolehandler._pathutils = mock.Mock() self._consolehandler._vmutils = mock.Mock() @mock.patch.object(serialconsolehandler.SerialConsoleHandler, '_setup_handlers') def test_start(self, mock_setup_handlers): mock_workers = [mock.Mock(), mock.Mock()] self._consolehandler._workers = mock_workers self._consolehandler.start() mock_setup_handlers.assert_called_once_with() for worker in mock_workers: worker.start.assert_called_once_with() @mock.patch('nova.console.serial.release_port') def test_stop(self, mock_release_port): mock_serial_proxy = mock.Mock() mock_workers = [mock_serial_proxy, mock.Mock()] self._consolehandler._serial_proxy = mock_serial_proxy self._consolehandler._listen_host = mock.sentinel.host self._consolehandler._listen_port = mock.sentinel.port self._consolehandler._workers = mock_workers self._consolehandler.stop() mock_release_port.assert_called_once_with(mock.sentinel.host, mock.sentinel.port) for worker in mock_workers: worker.stop.assert_called_once_with() @mock.patch.object(serialconsolehandler.SerialConsoleHandler, '_setup_named_pipe_handlers') @mock.patch.object(serialconsolehandler.SerialConsoleHandler, '_setup_serial_proxy_handler') def _test_setup_handlers(self, mock_setup_proxy, mock_setup_pipe_handlers, serial_console_enabled=True): self.flags(enabled=serial_console_enabled, group='serial_console') self._consolehandler._setup_handlers() self.assertEqual(serial_console_enabled, mock_setup_proxy.called) mock_setup_pipe_handlers.assert_called_once_with() def test_setup_handlers(self): self._test_setup_handlers() def test_setup_handlers_console_disabled(self): self._test_setup_handlers(serial_console_enabled=False) @mock.patch.object(serialproxy, 'SerialProxy') @mock.patch('nova.console.serial.acquire_port') @mock.patch.object(serialconsolehandler.threading, 'Event') @mock.patch.object(serialconsolehandler.ioutils, 'IOQueue') def test_setup_serial_proxy_handler(self, mock_io_queue, mock_event, mock_acquire_port, mock_serial_proxy_class): mock_input_queue = mock.sentinel.input_queue mock_output_queue = mock.sentinel.output_queue mock_client_connected = mock_event.return_value mock_io_queue.side_effect = [mock_input_queue, mock_output_queue] mock_serial_proxy = mock_serial_proxy_class.return_value mock_acquire_port.return_value = mock.sentinel.port self.flags(proxyclient_address='127.0.0.3', group='serial_console') self._consolehandler._setup_serial_proxy_handler() mock_serial_proxy_class.assert_called_once_with( mock.sentinel.instance_name, '127.0.0.3', mock.sentinel.port, mock_input_queue, mock_output_queue, mock_client_connected) self.assertIn(mock_serial_proxy, self._consolehandler._workers) @mock.patch.object(serialconsolehandler.SerialConsoleHandler, '_get_named_pipe_handler') @mock.patch.object(serialconsolehandler.SerialConsoleHandler, '_get_vm_serial_port_mapping') def 
_mock_setup_named_pipe_handlers(self, mock_get_port_mapping, mock_get_pipe_handler, serial_port_mapping=None): mock_get_port_mapping.return_value = serial_port_mapping self._consolehandler._setup_named_pipe_handlers() expected_workers = [mock_get_pipe_handler.return_value for port in serial_port_mapping] self.assertEqual(expected_workers, self._consolehandler._workers) return mock_get_pipe_handler def test_setup_ro_pipe_handler(self): serial_port_mapping = { constants.SERIAL_PORT_TYPE_RW: mock.sentinel.pipe_path } mock_get_handler = self._mock_setup_named_pipe_handlers( serial_port_mapping=serial_port_mapping) mock_get_handler.assert_called_once_with( mock.sentinel.pipe_path, pipe_type=constants.SERIAL_PORT_TYPE_RW, enable_logging=True) def test_setup_pipe_handlers(self): serial_port_mapping = { constants.SERIAL_PORT_TYPE_RO: mock.sentinel.ro_pipe_path, constants.SERIAL_PORT_TYPE_RW: mock.sentinel.rw_pipe_path } mock_get_handler = self._mock_setup_named_pipe_handlers( serial_port_mapping=serial_port_mapping) expected_calls = [mock.call(mock.sentinel.ro_pipe_path, pipe_type=constants.SERIAL_PORT_TYPE_RO, enable_logging=True), mock.call(mock.sentinel.rw_pipe_path, pipe_type=constants.SERIAL_PORT_TYPE_RW, enable_logging=False)] mock_get_handler.assert_has_calls(expected_calls, any_order=True) @mock.patch.object(serialconsolehandler.utilsfactory, 'get_named_pipe_handler') def _test_get_named_pipe_handler(self, mock_get_pipe_handler, pipe_type=None, enable_logging=False): expected_args = {} if pipe_type == constants.SERIAL_PORT_TYPE_RW: self._consolehandler._input_queue = mock.sentinel.input_queue self._consolehandler._output_queue = mock.sentinel.output_queue self._consolehandler._client_connected = ( mock.sentinel.connect_event) expected_args.update({ 'input_queue': mock.sentinel.input_queue, 'output_queue': mock.sentinel.output_queue, 'connect_event': mock.sentinel.connect_event}) if enable_logging: expected_args['log_file'] = mock.sentinel.log_path ret_val = self._consolehandler._get_named_pipe_handler( mock.sentinel.pipe_path, pipe_type, enable_logging) self.assertEqual(mock_get_pipe_handler.return_value, ret_val) mock_get_pipe_handler.assert_called_once_with( mock.sentinel.pipe_path, **expected_args) def test_get_ro_named_pipe_handler(self): self._test_get_named_pipe_handler( pipe_type=constants.SERIAL_PORT_TYPE_RO, enable_logging=True) def test_get_rw_named_pipe_handler(self): self._test_get_named_pipe_handler( pipe_type=constants.SERIAL_PORT_TYPE_RW, enable_logging=False) def _mock_get_port_connections(self, port_connections): get_port_connections = ( self._consolehandler._vmutils.get_vm_serial_port_connections) get_port_connections.return_value = port_connections def test_get_vm_serial_port_mapping_having_tagged_pipes(self): ro_pipe_path = 'fake_pipe_ro' rw_pipe_path = 'fake_pipe_rw' self._mock_get_port_connections([ro_pipe_path, rw_pipe_path]) ret_val = self._consolehandler._get_vm_serial_port_mapping() expected_mapping = { constants.SERIAL_PORT_TYPE_RO: ro_pipe_path, constants.SERIAL_PORT_TYPE_RW: rw_pipe_path } self.assertEqual(expected_mapping, ret_val) def test_get_vm_serial_port_mapping_untagged_pipe(self): pipe_path = 'fake_pipe_path' self._mock_get_port_connections([pipe_path]) ret_val = self._consolehandler._get_vm_serial_port_mapping() expected_mapping = {constants.SERIAL_PORT_TYPE_RW: pipe_path} self.assertEqual(expected_mapping, ret_val) def test_get_vm_serial_port_mapping_exception(self): self._mock_get_port_connections([]) self.assertRaises(exception.NovaException, 
self._consolehandler._get_vm_serial_port_mapping) @mock.patch('nova.console.type.ConsoleSerial') def test_get_serial_console(self, mock_serial_console): self.flags(enabled=True, group='serial_console') self._consolehandler._listen_host = mock.sentinel.host self._consolehandler._listen_port = mock.sentinel.port ret_val = self._consolehandler.get_serial_console() self.assertEqual(mock_serial_console.return_value, ret_val) mock_serial_console.assert_called_once_with( host=mock.sentinel.host, port=mock.sentinel.port) def test_get_serial_console_disabled(self): self.flags(enabled=False, group='serial_console') self.assertRaises(exception.ConsoleTypeUnavailable, self._consolehandler.get_serial_console) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/virt/hyperv/test_serialconsoleops.py0000664000175000017500000001123400000000000025151 0ustar00zuulzuul00000000000000# Copyright 2016 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from six.moves import builtins from nova import exception from nova.tests.unit.virt.hyperv import test_base from nova.virt.hyperv import serialconsolehandler from nova.virt.hyperv import serialconsoleops class SerialConsoleOpsTestCase(test_base.HyperVBaseTestCase): def setUp(self): super(SerialConsoleOpsTestCase, self).setUp() serialconsoleops._console_handlers = {} self._serialops = serialconsoleops.SerialConsoleOps() self._serialops._pathutils = mock.MagicMock() def _setup_console_handler_mock(self): mock_console_handler = mock.Mock() serialconsoleops._console_handlers = {mock.sentinel.instance_name: mock_console_handler} return mock_console_handler @mock.patch.object(serialconsolehandler, 'SerialConsoleHandler') @mock.patch.object(serialconsoleops.SerialConsoleOps, 'stop_console_handler_unsync') def _test_start_console_handler(self, mock_stop_handler, mock_console_handler, raise_exception=False): mock_handler = mock_console_handler.return_value if raise_exception: mock_handler.start.side_effect = Exception self._serialops.start_console_handler(mock.sentinel.instance_name) mock_stop_handler.assert_called_once_with(mock.sentinel.instance_name) mock_console_handler.assert_called_once_with( mock.sentinel.instance_name) if raise_exception: mock_handler.stop.assert_called_once_with() else: console_handler = serialconsoleops._console_handlers.get( mock.sentinel.instance_name) self.assertEqual(mock_handler, console_handler) def test_start_console_handler(self): self._test_start_console_handler() def test_start_console_handler_exception(self): self._test_start_console_handler(raise_exception=True) def test_stop_console_handler(self): mock_console_handler = self._setup_console_handler_mock() self._serialops.stop_console_handler(mock.sentinel.instance_name) mock_console_handler.stop.assert_called_once_with() handler = serialconsoleops._console_handlers.get( mock.sentinel.instance_name) self.assertIsNone(handler) def test_get_serial_console(self): 
        mock_console_handler = self._setup_console_handler_mock()

        ret_val = self._serialops.get_serial_console(
            mock.sentinel.instance_name)

        self.assertEqual(mock_console_handler.get_serial_console(), ret_val)

    def test_get_serial_console_exception(self):
        self.assertRaises(exception.ConsoleTypeUnavailable,
                          self._serialops.get_serial_console,
                          mock.sentinel.instance_name)

    @mock.patch.object(builtins, 'open')
    @mock.patch("os.path.exists")
    def test_get_console_output_exception(self, fake_path_exists, fake_open):
        self._serialops._pathutils.get_vm_console_log_paths.return_value = [
            mock.sentinel.log_path_1, mock.sentinel.log_path_2]
        fake_open.side_effect = IOError
        fake_path_exists.return_value = True

        self.assertRaises(exception.ConsoleLogOutputException,
                          self._serialops.get_console_output,
                          mock.sentinel.instance_name)
        fake_open.assert_called_once_with(mock.sentinel.log_path_2, 'rb')

    @mock.patch('os.path.exists')
    @mock.patch.object(serialconsoleops.SerialConsoleOps,
                       'start_console_handler')
    def test_start_console_handlers(self, mock_start_console_handler,
                                    mock_exists):
        self._serialops._pathutils.get_instance_dir.return_value = [
            mock.sentinel.nova_instance_name,
            mock.sentinel.other_instance_name]
        mock_exists.side_effect = [True, False]

        self._serialops.start_console_handlers()

        self._serialops._vmutils.get_active_instances.assert_called_once_with()


nova-21.2.4/nova/tests/unit/virt/hyperv/test_serialproxy.py

# Copyright 2016 Cloudbase Solutions Srl
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
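# The SerialProxy tests below exercise the TCP proxy that sits between a
# client connection and the instance serial console queues: socket setup
# (_setup_socket), the accept loop (run/_accept_conn), shutdown (stop), and
# the worker methods (_get_data/_send_data) that shuttle bytes between the
# connection and the input/output queues. A minimal usage sketch, mirroring
# only what setUp() and the tests in this module assume about the class:
#
#     proxy = serialproxy.SerialProxy(
#         instance_name, host, port,
#         input_queue, output_queue, client_connected_event)
#     proxy.run()    # listens, accepts a client, spawns _get_data/_send_data
#     proxy.stop()   # sets _stopped, closes the connection and the socket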
import socket import mock from nova import exception from nova.tests.unit.virt.hyperv import test_base from nova.virt.hyperv import serialproxy class SerialProxyTestCase(test_base.HyperVBaseTestCase): @mock.patch.object(socket, 'socket') def setUp(self, mock_socket): super(SerialProxyTestCase, self).setUp() self._mock_socket = mock_socket self._mock_input_queue = mock.Mock() self._mock_output_queue = mock.Mock() self._mock_client_connected = mock.Mock() threading_patcher = mock.patch.object(serialproxy, 'threading') threading_patcher.start() self.addCleanup(threading_patcher.stop) self._proxy = serialproxy.SerialProxy( mock.sentinel.instance_nane, mock.sentinel.host, mock.sentinel.port, self._mock_input_queue, self._mock_output_queue, self._mock_client_connected) @mock.patch.object(socket, 'socket') def test_setup_socket_exception(self, mock_socket): fake_socket = mock_socket.return_value fake_socket.listen.side_effect = socket.error self.assertRaises(exception.NovaException, self._proxy._setup_socket) fake_socket.setsockopt.assert_called_once_with(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) fake_socket.bind.assert_called_once_with((mock.sentinel.host, mock.sentinel.port)) def test_stop_serial_proxy(self): self._proxy._conn = mock.Mock() self._proxy._sock = mock.Mock() self._proxy.stop() self._proxy._stopped.set.assert_called_once_with() self._proxy._client_connected.clear.assert_called_once_with() self._proxy._conn.shutdown.assert_called_once_with(socket.SHUT_RDWR) self._proxy._conn.close.assert_called_once_with() self._proxy._sock.close.assert_called_once_with() @mock.patch.object(serialproxy.SerialProxy, '_accept_conn') @mock.patch.object(serialproxy.SerialProxy, '_setup_socket') def test_run(self, mock_setup_socket, mock_accept_con): self._proxy._stopped = mock.MagicMock() self._proxy._stopped.isSet.side_effect = [False, True] self._proxy.run() mock_setup_socket.assert_called_once_with() mock_accept_con.assert_called_once_with() def test_accept_connection(self): mock_conn = mock.Mock() self._proxy._sock = mock.Mock() self._proxy._sock.accept.return_value = [ mock_conn, (mock.sentinel.client_addr, mock.sentinel.client_port)] self._proxy._accept_conn() self._proxy._client_connected.set.assert_called_once_with() mock_conn.close.assert_called_once_with() self.assertIsNone(self._proxy._conn) thread = serialproxy.threading.Thread for job in [self._proxy._get_data, self._proxy._send_data]: thread.assert_any_call(target=job) def test_get_data(self): self._mock_client_connected.isSet.return_value = True self._proxy._conn = mock.Mock() self._proxy._conn.recv.side_effect = [mock.sentinel.data, None] self._proxy._get_data() self._mock_client_connected.clear.assert_called_once_with() self._mock_input_queue.put.assert_called_once_with(mock.sentinel.data) def _test_send_data(self, exception=None): self._mock_client_connected.isSet.side_effect = [True, False] self._mock_output_queue.get_burst.return_value = mock.sentinel.data self._proxy._conn = mock.Mock() self._proxy._conn.sendall.side_effect = exception self._proxy._send_data() self._proxy._conn.sendall.assert_called_once_with( mock.sentinel.data) if exception: self._proxy._client_connected.clear.assert_called_once_with() def test_send_data(self): self._test_send_data() def test_send_data_exception(self): self._test_send_data(exception=socket.error) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/nova/tests/unit/virt/hyperv/test_snapshotops.py0000664000175000017500000001355000000000000024151 0ustar00zuulzuul00000000000000# Copyright 2014 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import mock from nova.compute import task_states from nova.tests.unit import fake_instance from nova.tests.unit.virt.hyperv import test_base from nova.virt.hyperv import snapshotops class SnapshotOpsTestCase(test_base.HyperVBaseTestCase): """Unit tests for the Hyper-V SnapshotOps class.""" def setUp(self): super(SnapshotOpsTestCase, self).setUp() self.context = 'fake_context' self._snapshotops = snapshotops.SnapshotOps() self._snapshotops._pathutils = mock.MagicMock() self._snapshotops._vmutils = mock.MagicMock() self._snapshotops._vhdutils = mock.MagicMock() @mock.patch('nova.image.glance.get_remote_image_service') def test_save_glance_image(self, mock_get_remote_image_service): image_metadata = {"disk_format": "vhd", "container_format": "bare"} glance_image_service = mock.MagicMock() mock_get_remote_image_service.return_value = (glance_image_service, mock.sentinel.IMAGE_ID) self._snapshotops._save_glance_image(context=self.context, image_id=mock.sentinel.IMAGE_ID, image_vhd_path=mock.sentinel.PATH) mock_get_remote_image_service.assert_called_once_with( self.context, mock.sentinel.IMAGE_ID) self._snapshotops._pathutils.open.assert_called_with( mock.sentinel.PATH, 'rb') glance_image_service.update.assert_called_once_with( self.context, mock.sentinel.IMAGE_ID, image_metadata, self._snapshotops._pathutils.open().__enter__(), purge_props=False) @mock.patch('nova.virt.hyperv.snapshotops.SnapshotOps._save_glance_image') def _test_snapshot(self, mock_save_glance_image, base_disk_path): mock_instance = fake_instance.fake_instance_obj(self.context) mock_update = mock.MagicMock() fake_src_path = os.path.join('fake', 'path') self._snapshotops._pathutils.lookup_root_vhd_path.return_value = ( fake_src_path) fake_exp_dir = os.path.join(os.path.join('fake', 'exp'), 'dir') self._snapshotops._pathutils.get_export_dir.return_value = fake_exp_dir self._snapshotops._vhdutils.get_vhd_parent_path.return_value = ( base_disk_path) fake_snapshot_path = ( self._snapshotops._vmutils.take_vm_snapshot.return_value) self._snapshotops.snapshot(context=self.context, instance=mock_instance, image_id=mock.sentinel.IMAGE_ID, update_task_state=mock_update) self._snapshotops._vmutils.take_vm_snapshot.assert_called_once_with( mock_instance.name) mock_lookup_path = self._snapshotops._pathutils.lookup_root_vhd_path mock_lookup_path.assert_called_once_with(mock_instance.name) mock_get_vhd_path = self._snapshotops._vhdutils.get_vhd_parent_path mock_get_vhd_path.assert_called_once_with(fake_src_path) self._snapshotops._pathutils.get_export_dir.assert_called_once_with( mock_instance.name) expected = [mock.call(fake_src_path, os.path.join(fake_exp_dir, os.path.basename(fake_src_path)))] dest_vhd_path = os.path.join(fake_exp_dir, os.path.basename(fake_src_path)) if base_disk_path: basename = 
os.path.basename(base_disk_path) base_dest_disk_path = os.path.join(fake_exp_dir, basename) expected.append(mock.call(base_disk_path, base_dest_disk_path)) mock_reconnect = self._snapshotops._vhdutils.reconnect_parent_vhd mock_reconnect.assert_called_once_with(dest_vhd_path, base_dest_disk_path) self._snapshotops._vhdutils.merge_vhd.assert_called_once_with( dest_vhd_path) mock_save_glance_image.assert_called_once_with( self.context, mock.sentinel.IMAGE_ID, base_dest_disk_path) else: mock_save_glance_image.assert_called_once_with( self.context, mock.sentinel.IMAGE_ID, dest_vhd_path) self.assertEqual(len(expected), self._snapshotops._pathutils.copyfile.call_count) self._snapshotops._pathutils.copyfile.assert_has_calls(expected) self.assertEqual(2, mock_update.call_count) expected_update = [ mock.call(task_state=task_states.IMAGE_PENDING_UPLOAD), mock.call(task_state=task_states.IMAGE_UPLOADING, expected_state=task_states.IMAGE_PENDING_UPLOAD)] mock_update.assert_has_calls(expected_update) self._snapshotops._vmutils.remove_vm_snapshot.assert_called_once_with( fake_snapshot_path) self._snapshotops._pathutils.rmtree.assert_called_once_with( fake_exp_dir) def test_snapshot(self): base_disk_path = os.path.join('fake', 'disk') self._test_snapshot(base_disk_path=base_disk_path) def test_snapshot_no_base_disk(self): self._test_snapshot(base_disk_path=None) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/hyperv/test_vif.py0000664000175000017500000000701000000000000022346 0ustar00zuulzuul00000000000000# Copyright 2015 Cloudbase Solutions Srl # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
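# The HyperVVIFDriver tests below verify the plug/unplug dispatch by VIF
# type: VIF_TYPE_HYPERV ports are a no-op (wired up by the networking
# agent), VIF_TYPE_OVS ports are converted with os_vif_util and handed to
# os_vif (plus connect_vnic_to_vswitch on plug), and any other type raises
# VirtualInterfacePlugException / VirtualInterfaceUnplugException. The VIF
# dicts used in these tests carry only the 'type' key, e.g.:
#
#     vif = {'type': model.VIF_TYPE_OVS}
#     vif_driver.plug(mock.sentinel.instance, vif)    # os-vif path
#     vif_driver.unplug(mock.sentinel.instance, vif)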
import mock import nova.conf from nova import exception from nova.network import model from nova.tests.unit.virt.hyperv import test_base from nova.virt.hyperv import vif CONF = nova.conf.CONF class HyperVVIFDriverTestCase(test_base.HyperVBaseTestCase): def setUp(self): super(HyperVVIFDriverTestCase, self).setUp() self.vif_driver = vif.HyperVVIFDriver() self.vif_driver._netutils = mock.MagicMock() def test_plug(self): vif = {'type': model.VIF_TYPE_HYPERV} # this is handled by neutron so just assert it doesn't blow up self.vif_driver.plug(mock.sentinel.instance, vif) @mock.patch.object(vif, 'os_vif') @mock.patch.object(vif.os_vif_util, 'nova_to_osvif_instance') @mock.patch.object(vif.os_vif_util, 'nova_to_osvif_vif') def test_plug_ovs(self, mock_nova_to_osvif_vif, mock_nova_to_osvif_instance, mock_os_vif): vif = {'type': model.VIF_TYPE_OVS} self.vif_driver.plug(mock.sentinel.instance, vif) mock_nova_to_osvif_vif.assert_called_once_with(vif) mock_nova_to_osvif_instance.assert_called_once_with( mock.sentinel.instance) connect_vnic = self.vif_driver._netutils.connect_vnic_to_vswitch connect_vnic.assert_called_once_with( CONF.hyperv.vswitch_name, mock_nova_to_osvif_vif.return_value.id) mock_os_vif.plug.assert_called_once_with( mock_nova_to_osvif_vif.return_value, mock_nova_to_osvif_instance.return_value) def test_plug_type_unknown(self): vif = {'type': mock.sentinel.vif_type} self.assertRaises(exception.VirtualInterfacePlugException, self.vif_driver.plug, mock.sentinel.instance, vif) def test_unplug(self): vif = {'type': model.VIF_TYPE_HYPERV} # this is handled by neutron so just assert it doesn't blow up self.vif_driver.unplug(mock.sentinel.instance, vif) @mock.patch.object(vif, 'os_vif') @mock.patch.object(vif.os_vif_util, 'nova_to_osvif_instance') @mock.patch.object(vif.os_vif_util, 'nova_to_osvif_vif') def test_unplug_ovs(self, mock_nova_to_osvif_vif, mock_nova_to_osvif_instance, mock_os_vif): vif = {'type': model.VIF_TYPE_OVS} self.vif_driver.unplug(mock.sentinel.instance, vif) mock_nova_to_osvif_vif.assert_called_once_with(vif) mock_nova_to_osvif_instance.assert_called_once_with( mock.sentinel.instance) mock_os_vif.unplug.assert_called_once_with( mock_nova_to_osvif_vif.return_value, mock_nova_to_osvif_instance.return_value) def test_unplug_type_unknown(self): vif = {'type': mock.sentinel.vif_type} self.assertRaises(exception.VirtualInterfaceUnplugException, self.vif_driver.unplug, mock.sentinel.instance, vif) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/virt/hyperv/test_vmops.py0000664000175000017500000025335200000000000022742 0ustar00zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
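# The VMOps tests that follow cover, among other things: root and ephemeral
# disk creation (including the cow-image and rescue variants), VIF/BDM
# device metadata and boot order, spawn() and create_instance() wiring,
# config drive creation and attachment, destroy(), reboot and soft
# shutdown, and the power state helpers (pause, unpause, suspend, resume,
# power_off, power_on). They follow the same pattern used in the modules
# above: the os-win utility classes on the VMOps instance are replaced with
# MagicMocks in setUp(), e.g.:
#
#     self._vmops = vmops.VMOps(virtapi=mock.MagicMock())
#     self._vmops._vmutils = mock.MagicMock()
#     self._vmops._vhdutils = mock.MagicMock()
#     # ...each test then asserts the expected calls on those mocks.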
import os import ddt from eventlet import timeout as etimeout import mock from os_win import constants as os_win_const from os_win import exceptions as os_win_exc from oslo_concurrency import processutils from oslo_config import cfg from oslo_utils import fileutils from oslo_utils import units from nova.compute import vm_states from nova import exception from nova import objects from nova.objects import fields from nova.objects import flavor as flavor_obj from nova.tests.unit import fake_instance from nova.tests.unit.objects import test_flavor from nova.tests.unit.objects import test_virtual_interface from nova.tests.unit.virt.hyperv import test_base from nova.virt import hardware from nova.virt.hyperv import constants from nova.virt.hyperv import vmops from nova.virt.hyperv import volumeops CONF = cfg.CONF @ddt.ddt class VMOpsTestCase(test_base.HyperVBaseTestCase): """Unit tests for the Hyper-V VMOps class.""" _FAKE_TIMEOUT = 2 FAKE_SIZE = 10 FAKE_DIR = 'fake_dir' FAKE_ROOT_PATH = 'C:\\path\\to\\fake.%s' FAKE_CONFIG_DRIVE_ISO = 'configdrive.iso' FAKE_CONFIG_DRIVE_VHD = 'configdrive.vhd' FAKE_UUID = '4f54fb69-d3a2-45b7-bb9b-b6e6b3d893b3' FAKE_LOG = 'fake_log' _WIN_VERSION_6_3 = '6.3.0' _WIN_VERSION_10 = '10.0' ISO9660 = 'iso9660' _FAKE_CONFIGDRIVE_PATH = 'C:/fake_instance_dir/configdrive.vhd' def setUp(self): super(VMOpsTestCase, self).setUp() self.context = 'fake-context' self._vmops = vmops.VMOps(virtapi=mock.MagicMock()) self._vmops._vmutils = mock.MagicMock() self._vmops._metricsutils = mock.MagicMock() self._vmops._vhdutils = mock.MagicMock() self._vmops._pathutils = mock.MagicMock() self._vmops._hostutils = mock.MagicMock() self._vmops._migrutils = mock.MagicMock() self._vmops._serial_console_ops = mock.MagicMock() self._vmops._block_dev_man = mock.MagicMock() self._vmops._vif_driver = mock.MagicMock() def test_list_instances(self): mock_instance = mock.MagicMock() self._vmops._vmutils.list_instances.return_value = [mock_instance] response = self._vmops.list_instances() self._vmops._vmutils.list_instances.assert_called_once_with() self.assertEqual(response, [mock_instance]) def _test_get_info(self, vm_exists): mock_instance = fake_instance.fake_instance_obj(self.context) mock_info = mock.MagicMock(spec_set=dict) fake_info = {'EnabledState': 2, 'MemoryUsage': mock.sentinel.FAKE_MEM_KB, 'NumberOfProcessors': mock.sentinel.FAKE_NUM_CPU, 'UpTime': mock.sentinel.FAKE_CPU_NS} def getitem(key): return fake_info[key] mock_info.__getitem__.side_effect = getitem expected = hardware.InstanceInfo(state=constants.HYPERV_POWER_STATE[2]) self._vmops._vmutils.vm_exists.return_value = vm_exists self._vmops._vmutils.get_vm_summary_info.return_value = mock_info if not vm_exists: self.assertRaises(exception.InstanceNotFound, self._vmops.get_info, mock_instance) else: response = self._vmops.get_info(mock_instance) self._vmops._vmutils.vm_exists.assert_called_once_with( mock_instance.name) self._vmops._vmutils.get_vm_summary_info.assert_called_once_with( mock_instance.name) self.assertEqual(response, expected) def test_get_info(self): self._test_get_info(vm_exists=True) def test_get_info_exception(self): self._test_get_info(vm_exists=False) @mock.patch.object(vmops.VMOps, 'check_vm_image_type') @mock.patch.object(vmops.VMOps, '_create_root_vhd') def test_create_root_device_type_disk(self, mock_create_root_device, mock_check_vm_image_type): mock_instance = fake_instance.fake_instance_obj(self.context) mock_root_disk_info = {'type': constants.DISK} self._vmops._create_root_device(self.context, 
mock_instance, mock_root_disk_info, mock.sentinel.VM_GEN_1) mock_create_root_device.assert_called_once_with( self.context, mock_instance) mock_check_vm_image_type.assert_called_once_with( mock_instance.uuid, mock.sentinel.VM_GEN_1, mock_create_root_device.return_value) def _prepare_create_root_device_mocks(self, use_cow_images, vhd_format, vhd_size): mock_instance = fake_instance.fake_instance_obj(self.context) mock_instance.flavor.root_gb = self.FAKE_SIZE self.flags(use_cow_images=use_cow_images) self._vmops._vhdutils.get_vhd_info.return_value = {'VirtualSize': vhd_size * units.Gi} self._vmops._vhdutils.get_vhd_format.return_value = vhd_format root_vhd_internal_size = mock_instance.flavor.root_gb * units.Gi get_size = self._vmops._vhdutils.get_internal_vhd_size_by_file_size get_size.return_value = root_vhd_internal_size self._vmops._pathutils.exists.return_value = True return mock_instance @mock.patch('nova.virt.hyperv.imagecache.ImageCache.get_cached_image') def _test_create_root_vhd_exception(self, mock_get_cached_image, vhd_format): mock_instance = self._prepare_create_root_device_mocks( use_cow_images=False, vhd_format=vhd_format, vhd_size=(self.FAKE_SIZE + 1)) fake_vhd_path = self.FAKE_ROOT_PATH % vhd_format mock_get_cached_image.return_value = fake_vhd_path fake_root_path = self._vmops._pathutils.get_root_vhd_path.return_value self.assertRaises(exception.FlavorDiskSmallerThanImage, self._vmops._create_root_vhd, self.context, mock_instance) self.assertFalse(self._vmops._vhdutils.resize_vhd.called) self._vmops._pathutils.exists.assert_called_once_with( fake_root_path) self._vmops._pathutils.remove.assert_called_once_with( fake_root_path) @mock.patch('nova.virt.hyperv.imagecache.ImageCache.get_cached_image') def _test_create_root_vhd_qcow(self, mock_get_cached_image, vhd_format): mock_instance = self._prepare_create_root_device_mocks( use_cow_images=True, vhd_format=vhd_format, vhd_size=(self.FAKE_SIZE - 1)) fake_vhd_path = self.FAKE_ROOT_PATH % vhd_format mock_get_cached_image.return_value = fake_vhd_path fake_root_path = self._vmops._pathutils.get_root_vhd_path.return_value root_vhd_internal_size = mock_instance.flavor.root_gb * units.Gi get_size = self._vmops._vhdutils.get_internal_vhd_size_by_file_size response = self._vmops._create_root_vhd(context=self.context, instance=mock_instance) self.assertEqual(fake_root_path, response) self._vmops._pathutils.get_root_vhd_path.assert_called_with( mock_instance.name, vhd_format, False) differencing_vhd = self._vmops._vhdutils.create_differencing_vhd differencing_vhd.assert_called_with(fake_root_path, fake_vhd_path) self._vmops._vhdutils.get_vhd_info.assert_called_once_with( fake_vhd_path) if vhd_format is constants.DISK_FORMAT_VHD: self.assertFalse(get_size.called) self.assertFalse(self._vmops._vhdutils.resize_vhd.called) else: get_size.assert_called_once_with(fake_vhd_path, root_vhd_internal_size) self._vmops._vhdutils.resize_vhd.assert_called_once_with( fake_root_path, root_vhd_internal_size, is_file_max_size=False) @mock.patch('nova.virt.hyperv.imagecache.ImageCache.get_cached_image') def _test_create_root_vhd(self, mock_get_cached_image, vhd_format, is_rescue_vhd=False): mock_instance = self._prepare_create_root_device_mocks( use_cow_images=False, vhd_format=vhd_format, vhd_size=(self.FAKE_SIZE - 1)) fake_vhd_path = self.FAKE_ROOT_PATH % vhd_format mock_get_cached_image.return_value = fake_vhd_path rescue_image_id = ( mock.sentinel.rescue_image_id if is_rescue_vhd else None) fake_root_path = 
self._vmops._pathutils.get_root_vhd_path.return_value root_vhd_internal_size = mock_instance.flavor.root_gb * units.Gi get_size = self._vmops._vhdutils.get_internal_vhd_size_by_file_size response = self._vmops._create_root_vhd( context=self.context, instance=mock_instance, rescue_image_id=rescue_image_id) self.assertEqual(fake_root_path, response) mock_get_cached_image.assert_called_once_with(self.context, mock_instance, rescue_image_id) self._vmops._pathutils.get_root_vhd_path.assert_called_with( mock_instance.name, vhd_format, is_rescue_vhd) self._vmops._pathutils.copyfile.assert_called_once_with( fake_vhd_path, fake_root_path) get_size.assert_called_once_with(fake_vhd_path, root_vhd_internal_size) if is_rescue_vhd: self.assertFalse(self._vmops._vhdutils.resize_vhd.called) else: self._vmops._vhdutils.resize_vhd.assert_called_once_with( fake_root_path, root_vhd_internal_size, is_file_max_size=False) def test_create_root_vhd(self): self._test_create_root_vhd(vhd_format=constants.DISK_FORMAT_VHD) def test_create_root_vhdx(self): self._test_create_root_vhd(vhd_format=constants.DISK_FORMAT_VHDX) def test_create_root_vhd_use_cow_images_true(self): self._test_create_root_vhd_qcow(vhd_format=constants.DISK_FORMAT_VHD) def test_create_root_vhdx_use_cow_images_true(self): self._test_create_root_vhd_qcow(vhd_format=constants.DISK_FORMAT_VHDX) def test_create_rescue_vhd(self): self._test_create_root_vhd(vhd_format=constants.DISK_FORMAT_VHD, is_rescue_vhd=True) def test_create_root_vhdx_size_less_than_internal(self): self._test_create_root_vhd_exception( vhd_format=constants.DISK_FORMAT_VHD) def test_is_resize_needed_exception(self): inst = mock.MagicMock() self.assertRaises( exception.FlavorDiskSmallerThanImage, self._vmops._is_resize_needed, mock.sentinel.FAKE_PATH, self.FAKE_SIZE, self.FAKE_SIZE - 1, inst) def test_is_resize_needed_true(self): inst = mock.MagicMock() self.assertTrue(self._vmops._is_resize_needed( mock.sentinel.FAKE_PATH, self.FAKE_SIZE, self.FAKE_SIZE + 1, inst)) def test_is_resize_needed_false(self): inst = mock.MagicMock() self.assertFalse(self._vmops._is_resize_needed( mock.sentinel.FAKE_PATH, self.FAKE_SIZE, self.FAKE_SIZE, inst)) @mock.patch.object(vmops.VMOps, 'create_ephemeral_disk') def test_create_ephemerals(self, mock_create_ephemeral_disk): mock_instance = fake_instance.fake_instance_obj(self.context) fake_ephemerals = [dict(), dict()] self._vmops._vhdutils.get_best_supported_vhd_format.return_value = ( mock.sentinel.format) self._vmops._pathutils.get_ephemeral_vhd_path.side_effect = [ mock.sentinel.FAKE_PATH0, mock.sentinel.FAKE_PATH1] self._vmops._create_ephemerals(mock_instance, fake_ephemerals) self._vmops._pathutils.get_ephemeral_vhd_path.assert_has_calls( [mock.call(mock_instance.name, mock.sentinel.format, 'eph0'), mock.call(mock_instance.name, mock.sentinel.format, 'eph1')]) mock_create_ephemeral_disk.assert_has_calls( [mock.call(mock_instance.name, fake_ephemerals[0]), mock.call(mock_instance.name, fake_ephemerals[1])]) def test_create_ephemeral_disk(self): mock_instance = fake_instance.fake_instance_obj(self.context) mock_ephemeral_info = {'path': 'fake_eph_path', 'size': 10} self._vmops.create_ephemeral_disk(mock_instance.name, mock_ephemeral_info) mock_create_dynamic_vhd = self._vmops._vhdutils.create_dynamic_vhd mock_create_dynamic_vhd.assert_called_once_with('fake_eph_path', 10 * units.Gi) @mock.patch.object(vmops.objects, 'PCIDeviceBus') @mock.patch.object(vmops.objects, 'NetworkInterfaceMetadata') @mock.patch.object(vmops.objects.VirtualInterfaceList, 
'get_by_instance_uuid') def test_get_vif_metadata(self, mock_get_by_inst_uuid, mock_NetworkInterfaceMetadata, mock_PCIDevBus): mock_vif = mock.MagicMock(tag='taggy') mock_vif.__contains__.side_effect = ( lambda attr: getattr(mock_vif, attr, None) is not None) mock_get_by_inst_uuid.return_value = [mock_vif, mock.MagicMock(tag=None)] vif_metadata = self._vmops._get_vif_metadata(self.context, mock.sentinel.instance_id) mock_get_by_inst_uuid.assert_called_once_with( self.context, mock.sentinel.instance_id) mock_NetworkInterfaceMetadata.assert_called_once_with( mac=mock_vif.address, bus=mock_PCIDevBus.return_value, tags=[mock_vif.tag]) self.assertEqual([mock_NetworkInterfaceMetadata.return_value], vif_metadata) @mock.patch.object(vmops.objects, 'InstanceDeviceMetadata') @mock.patch.object(vmops.VMOps, '_get_vif_metadata') def test_save_device_metadata(self, mock_get_vif_metadata, mock_InstanceDeviceMetadata): mock_instance = mock.MagicMock() mock_get_vif_metadata.return_value = [mock.sentinel.vif_metadata] self._vmops._block_dev_man.get_bdm_metadata.return_value = [ mock.sentinel.bdm_metadata] self._vmops._save_device_metadata(self.context, mock_instance, mock.sentinel.block_device_info) mock_get_vif_metadata.assert_called_once_with(self.context, mock_instance.uuid) self._vmops._block_dev_man.get_bdm_metadata.assert_called_once_with( self.context, mock_instance, mock.sentinel.block_device_info) expected_metadata = [mock.sentinel.vif_metadata, mock.sentinel.bdm_metadata] mock_InstanceDeviceMetadata.assert_called_once_with( devices=expected_metadata) self.assertEqual(mock_InstanceDeviceMetadata.return_value, mock_instance.device_metadata) def test_set_boot_order(self): self._vmops.set_boot_order(mock.sentinel.instance_name, mock.sentinel.vm_gen, mock.sentinel.bdi) mock_get_boot_order = self._vmops._block_dev_man.get_boot_order mock_get_boot_order.assert_called_once_with( mock.sentinel.vm_gen, mock.sentinel.bdi) self._vmops._vmutils.set_boot_order.assert_called_once_with( mock.sentinel.instance_name, mock_get_boot_order.return_value) @mock.patch.object(vmops.VMOps, 'plug_vifs') @mock.patch('nova.virt.hyperv.vmops.VMOps.destroy') @mock.patch('nova.virt.hyperv.vmops.VMOps.power_on') @mock.patch('nova.virt.hyperv.vmops.VMOps.set_boot_order') @mock.patch('nova.virt.hyperv.vmops.VMOps.attach_config_drive') @mock.patch('nova.virt.hyperv.vmops.VMOps._create_config_drive') @mock.patch('nova.virt.configdrive.required_by') @mock.patch('nova.virt.hyperv.vmops.VMOps._save_device_metadata') @mock.patch('nova.virt.hyperv.vmops.VMOps.create_instance') @mock.patch('nova.virt.hyperv.vmops.VMOps.get_image_vm_generation') @mock.patch('nova.virt.hyperv.vmops.VMOps._create_ephemerals') @mock.patch('nova.virt.hyperv.vmops.VMOps._create_root_device') @mock.patch('nova.virt.hyperv.vmops.VMOps._delete_disk_files') @mock.patch('nova.virt.hyperv.vmops.VMOps._get_neutron_events', return_value=[]) def _test_spawn(self, mock_get_neutron_events, mock_delete_disk_files, mock_create_root_device, mock_create_ephemerals, mock_get_image_vm_gen, mock_create_instance, mock_save_device_metadata, mock_configdrive_required, mock_create_config_drive, mock_attach_config_drive, mock_set_boot_order, mock_power_on, mock_destroy, mock_plug_vifs, exists, configdrive_required, fail, fake_vm_gen=constants.VM_GEN_2): mock_instance = fake_instance.fake_instance_obj(self.context) mock_image_meta = mock.MagicMock() root_device_info = mock.sentinel.ROOT_DEV_INFO mock_get_image_vm_gen.return_value = fake_vm_gen fake_config_drive_path = 
mock_create_config_drive.return_value block_device_info = {'ephemerals': [], 'root_disk': root_device_info} self._vmops._vmutils.vm_exists.return_value = exists mock_configdrive_required.return_value = configdrive_required mock_create_instance.side_effect = fail if exists: self.assertRaises(exception.InstanceExists, self._vmops.spawn, self.context, mock_instance, mock_image_meta, [mock.sentinel.FILE], mock.sentinel.PASSWORD, mock.sentinel.network_info, block_device_info) elif fail is os_win_exc.HyperVException: self.assertRaises(os_win_exc.HyperVException, self._vmops.spawn, self.context, mock_instance, mock_image_meta, [mock.sentinel.FILE], mock.sentinel.PASSWORD, mock.sentinel.network_info, block_device_info) mock_destroy.assert_called_once_with(mock_instance, mock.sentinel.network_info, block_device_info) else: self._vmops.spawn(self.context, mock_instance, mock_image_meta, [mock.sentinel.FILE], mock.sentinel.PASSWORD, mock.sentinel.network_info, block_device_info) self._vmops._vmutils.vm_exists.assert_called_once_with( mock_instance.name) mock_delete_disk_files.assert_called_once_with( mock_instance.name) mock_validate_and_update_bdi = ( self._vmops._block_dev_man.validate_and_update_bdi) mock_validate_and_update_bdi.assert_called_once_with( mock_instance, mock_image_meta, fake_vm_gen, block_device_info) mock_create_root_device.assert_called_once_with(self.context, mock_instance, root_device_info, fake_vm_gen) mock_create_ephemerals.assert_called_once_with( mock_instance, block_device_info['ephemerals']) mock_get_neutron_events.assert_called_once_with( mock.sentinel.network_info) mock_get_image_vm_gen.assert_called_once_with(mock_instance.uuid, mock_image_meta) mock_create_instance.assert_called_once_with( mock_instance, mock.sentinel.network_info, root_device_info, block_device_info, fake_vm_gen, mock_image_meta) mock_plug_vifs.assert_called_once_with(mock_instance, mock.sentinel.network_info) mock_save_device_metadata.assert_called_once_with( self.context, mock_instance, block_device_info) mock_configdrive_required.assert_called_once_with(mock_instance) if configdrive_required: mock_create_config_drive.assert_called_once_with( self.context, mock_instance, [mock.sentinel.FILE], mock.sentinel.PASSWORD, mock.sentinel.network_info) mock_attach_config_drive.assert_called_once_with( mock_instance, fake_config_drive_path, fake_vm_gen) mock_set_boot_order.assert_called_once_with( mock_instance.name, fake_vm_gen, block_device_info) mock_power_on.assert_called_once_with( mock_instance, network_info=mock.sentinel.network_info, should_plug_vifs=False) def test_spawn(self): self._test_spawn(exists=False, configdrive_required=True, fail=None) def test_spawn_instance_exists(self): self._test_spawn(exists=True, configdrive_required=True, fail=None) def test_spawn_create_instance_exception(self): self._test_spawn(exists=False, configdrive_required=True, fail=os_win_exc.HyperVException) def test_spawn_not_required(self): self._test_spawn(exists=False, configdrive_required=False, fail=None) def test_spawn_no_admin_permissions(self): self._vmops._vmutils.check_admin_permissions.side_effect = ( os_win_exc.HyperVException) self.assertRaises(os_win_exc.HyperVException, self._vmops.spawn, self.context, mock.DEFAULT, mock.DEFAULT, [mock.sentinel.FILE], mock.sentinel.PASSWORD, mock.sentinel.INFO, mock.sentinel.DEV_INFO) @mock.patch.object(vmops.VMOps, '_get_neutron_events') def test_wait_vif_plug_events(self, mock_get_events): self._vmops._virtapi.wait_for_instance_event.side_effect = ( etimeout.Timeout) 
self.flags(vif_plugging_timeout=1) self.flags(vif_plugging_is_fatal=True) def _context_user(): with self._vmops.wait_vif_plug_events(mock.sentinel.instance, mock.sentinel.network_info): pass self.assertRaises(exception.VirtualInterfaceCreateException, _context_user) mock_get_events.assert_called_once_with(mock.sentinel.network_info) self._vmops._virtapi.wait_for_instance_event.assert_called_once_with( mock.sentinel.instance, mock_get_events.return_value, deadline=CONF.vif_plugging_timeout, error_callback=self._vmops._neutron_failed_callback) @mock.patch.object(vmops.VMOps, '_get_neutron_events') def test_wait_vif_plug_events_port_binding_failed(self, mock_get_events): mock_get_events.side_effect = exception.PortBindingFailed( port_id='fake_id') def _context_user(): with self._vmops.wait_vif_plug_events(mock.sentinel.instance, mock.sentinel.network_info): pass self.assertRaises(exception.PortBindingFailed, _context_user) def test_neutron_failed_callback(self): self.flags(vif_plugging_is_fatal=True) self.assertRaises(exception.VirtualInterfaceCreateException, self._vmops._neutron_failed_callback, mock.sentinel.event_name, mock.sentinel.instance) def test_get_neutron_events(self): network_info = [{'id': mock.sentinel.vif_id1, 'active': True}, {'id': mock.sentinel.vif_id2, 'active': False}, {'id': mock.sentinel.vif_id3}] events = self._vmops._get_neutron_events(network_info) self.assertEqual([('network-vif-plugged', mock.sentinel.vif_id2)], events) def test_get_neutron_events_no_timeout(self): self.flags(vif_plugging_timeout=0) network_info = [{'id': mock.sentinel.vif_id1, 'active': True}] events = self._vmops._get_neutron_events(network_info) self.assertEqual([], events) @mock.patch.object(vmops.VMOps, '_attach_pci_devices') @mock.patch.object(vmops.VMOps, '_requires_secure_boot') @mock.patch.object(vmops.VMOps, '_requires_certificate') @mock.patch.object(vmops.VMOps, '_get_instance_vnuma_config') @mock.patch('nova.virt.hyperv.volumeops.VolumeOps' '.attach_volumes') @mock.patch.object(vmops.VMOps, '_set_instance_disk_qos_specs') @mock.patch.object(vmops.VMOps, '_create_vm_com_port_pipes') @mock.patch.object(vmops.VMOps, '_attach_ephemerals') @mock.patch.object(vmops.VMOps, '_attach_root_device') @mock.patch.object(vmops.VMOps, '_configure_remotefx') def _test_create_instance(self, mock_configure_remotefx, mock_attach_root_device, mock_attach_ephemerals, mock_create_pipes, mock_set_qos_specs, mock_attach_volumes, mock_get_vnuma_config, mock_requires_certificate, mock_requires_secure_boot, mock_attach_pci_devices, enable_instance_metrics, vm_gen=constants.VM_GEN_1, vnuma_enabled=False, pci_requests=None): self.flags(dynamic_memory_ratio=2.0, group='hyperv') self.flags(enable_instance_metrics_collection=enable_instance_metrics, group='hyperv') root_device_info = mock.sentinel.ROOT_DEV_INFO block_device_info = {'ephemerals': [], 'block_device_mapping': []} fake_network_info = {'id': mock.sentinel.ID, 'address': mock.sentinel.ADDRESS} mock_instance = fake_instance.fake_instance_obj(self.context) instance_path = os.path.join(CONF.instances_path, mock_instance.name) mock_requires_secure_boot.return_value = True flavor = flavor_obj.Flavor(**test_flavor.fake_flavor) mock_instance.flavor = flavor instance_pci_requests = objects.InstancePCIRequests( requests=pci_requests or [], instance_uuid=mock_instance.uuid) mock_instance.pci_requests = instance_pci_requests host_shutdown_action = (os_win_const.HOST_SHUTDOWN_ACTION_SHUTDOWN if pci_requests else None) if vnuma_enabled: 
mock_get_vnuma_config.return_value = ( mock.sentinel.mem_per_numa, mock.sentinel.cpus_per_numa) cpus_per_numa = mock.sentinel.cpus_per_numa mem_per_numa = mock.sentinel.mem_per_numa dynamic_memory_ratio = 1.0 else: mock_get_vnuma_config.return_value = (None, None) mem_per_numa, cpus_per_numa = (None, None) dynamic_memory_ratio = CONF.hyperv.dynamic_memory_ratio self._vmops.create_instance(instance=mock_instance, network_info=[fake_network_info], root_device=root_device_info, block_device_info=block_device_info, vm_gen=vm_gen, image_meta=mock.sentinel.image_meta) mock_get_vnuma_config.assert_called_once_with(mock_instance, mock.sentinel.image_meta) self._vmops._vmutils.create_vm.assert_called_once_with( mock_instance.name, vnuma_enabled, vm_gen, instance_path, [mock_instance.uuid]) self._vmops._vmutils.update_vm.assert_called_once_with( mock_instance.name, mock_instance.flavor.memory_mb, mem_per_numa, mock_instance.flavor.vcpus, cpus_per_numa, CONF.hyperv.limit_cpu_features, dynamic_memory_ratio, host_shutdown_action=host_shutdown_action) mock_configure_remotefx.assert_called_once_with(mock_instance, vm_gen) mock_create_scsi_ctrl = self._vmops._vmutils.create_scsi_controller mock_create_scsi_ctrl.assert_called_once_with(mock_instance.name) mock_attach_root_device.assert_called_once_with(mock_instance.name, root_device_info) mock_attach_ephemerals.assert_called_once_with(mock_instance.name, block_device_info['ephemerals']) mock_attach_volumes.assert_called_once_with( block_device_info['block_device_mapping'], mock_instance.name) self._vmops._vmutils.create_nic.assert_called_once_with( mock_instance.name, mock.sentinel.ID, mock.sentinel.ADDRESS) mock_enable = self._vmops._metricsutils.enable_vm_metrics_collection if enable_instance_metrics: mock_enable.assert_called_once_with(mock_instance.name) mock_set_qos_specs.assert_called_once_with(mock_instance) mock_requires_secure_boot.assert_called_once_with( mock_instance, mock.sentinel.image_meta, vm_gen) mock_requires_certificate.assert_called_once_with( mock.sentinel.image_meta) enable_secure_boot = self._vmops._vmutils.enable_secure_boot enable_secure_boot.assert_called_once_with( mock_instance.name, msft_ca_required=mock_requires_certificate.return_value) mock_attach_pci_devices.assert_called_once_with(mock_instance) def test_create_instance(self): self._test_create_instance(enable_instance_metrics=True) def test_create_instance_enable_instance_metrics_false(self): self._test_create_instance(enable_instance_metrics=False) def test_create_instance_gen2(self): self._test_create_instance(enable_instance_metrics=False, vm_gen=constants.VM_GEN_2) def test_create_instance_vnuma_enabled(self): self._test_create_instance(enable_instance_metrics=False, vnuma_enabled=True) def test_create_instance_pci_requested(self): vendor_id = 'fake_vendor_id' product_id = 'fake_product_id' spec = {'vendor_id': vendor_id, 'product_id': product_id} request = objects.InstancePCIRequest(count=1, spec=[spec]) self._test_create_instance(enable_instance_metrics=False, pci_requests=[request]) def test_attach_pci_devices(self): mock_instance = fake_instance.fake_instance_obj(self.context) vendor_id = 'fake_vendor_id' product_id = 'fake_product_id' spec = {'vendor_id': vendor_id, 'product_id': product_id} request = objects.InstancePCIRequest(count=2, spec=[spec]) instance_pci_requests = objects.InstancePCIRequests( requests=[request], instance_uuid=mock_instance.uuid) mock_instance.pci_requests = instance_pci_requests self._vmops._attach_pci_devices(mock_instance) 
self._vmops._vmutils.add_pci_device.assert_has_calls( [mock.call(mock_instance.name, vendor_id, product_id)] * 2) @mock.patch.object(vmops.hardware, 'numa_get_constraints') def _check_get_instance_vnuma_config_exception(self, mock_get_numa, numa_cells): flavor = {'extra_specs': {}} mock_instance = mock.MagicMock(flavor=flavor) image_meta = mock.MagicMock(properties={}) numa_topology = objects.InstanceNUMATopology(cells=numa_cells) mock_get_numa.return_value = numa_topology self.assertRaises(exception.InstanceUnacceptable, self._vmops._get_instance_vnuma_config, mock_instance, image_meta) def test_get_instance_vnuma_config_bad_cpuset(self): cell1 = objects.InstanceNUMACell(cpuset=set([0]), memory=1024) cell2 = objects.InstanceNUMACell(cpuset=set([1, 2]), memory=1024) self._check_get_instance_vnuma_config_exception( numa_cells=[cell1, cell2]) def test_get_instance_vnuma_config_bad_memory(self): cell1 = objects.InstanceNUMACell(cpuset=set([0]), memory=1024) cell2 = objects.InstanceNUMACell(cpuset=set([1]), memory=2048) self._check_get_instance_vnuma_config_exception( numa_cells=[cell1, cell2]) def test_get_instance_vnuma_config_cpu_pinning(self): cell1 = objects.InstanceNUMACell( cpuset=set([0]), memory=1024, cpu_policy=fields.CPUAllocationPolicy.DEDICATED) cell2 = objects.InstanceNUMACell( cpuset=set([1]), memory=1024, cpu_policy=fields.CPUAllocationPolicy.DEDICATED) self._check_get_instance_vnuma_config_exception( numa_cells=[cell1, cell2]) @mock.patch.object(vmops.hardware, 'numa_get_constraints') def _check_get_instance_vnuma_config( self, mock_get_numa, numa_topology=None, expected_mem_per_numa=None, expected_cpus_per_numa=None): mock_instance = mock.MagicMock() image_meta = mock.MagicMock() mock_get_numa.return_value = numa_topology result_memory_per_numa, result_cpus_per_numa = ( self._vmops._get_instance_vnuma_config(mock_instance, image_meta)) self.assertEqual(expected_cpus_per_numa, result_cpus_per_numa) self.assertEqual(expected_mem_per_numa, result_memory_per_numa) def test_get_instance_vnuma_config(self): cell1 = objects.InstanceNUMACell(cpuset=set([0]), memory=2048) cell2 = objects.InstanceNUMACell(cpuset=set([1]), memory=2048) numa_topology = objects.InstanceNUMATopology(cells=[cell1, cell2]) self._check_get_instance_vnuma_config(numa_topology=numa_topology, expected_cpus_per_numa=1, expected_mem_per_numa=2048) def test_get_instance_vnuma_config_no_topology(self): self._check_get_instance_vnuma_config() @mock.patch.object(vmops.volumeops.VolumeOps, 'attach_volume') def test_attach_root_device_volume(self, mock_attach_volume): mock_instance = fake_instance.fake_instance_obj(self.context) root_device_info = {'type': constants.VOLUME, 'connection_info': mock.sentinel.CONN_INFO, 'disk_bus': constants.CTRL_TYPE_IDE} self._vmops._attach_root_device(mock_instance.name, root_device_info) mock_attach_volume.assert_called_once_with( root_device_info['connection_info'], mock_instance.name, disk_bus=root_device_info['disk_bus']) @mock.patch.object(vmops.VMOps, '_attach_drive') def test_attach_root_device_disk(self, mock_attach_drive): mock_instance = fake_instance.fake_instance_obj(self.context) root_device_info = {'type': constants.DISK, 'boot_index': 0, 'disk_bus': constants.CTRL_TYPE_IDE, 'path': 'fake_path', 'drive_addr': 0, 'ctrl_disk_addr': 1} self._vmops._attach_root_device(mock_instance.name, root_device_info) mock_attach_drive.assert_called_once_with( mock_instance.name, root_device_info['path'], root_device_info['drive_addr'], root_device_info['ctrl_disk_addr'], 
root_device_info['disk_bus'], root_device_info['type']) @mock.patch.object(vmops.VMOps, '_attach_drive') def test_attach_ephemerals(self, mock_attach_drive): mock_instance = fake_instance.fake_instance_obj(self.context) ephemerals = [{'path': mock.sentinel.PATH1, 'boot_index': 1, 'disk_bus': constants.CTRL_TYPE_IDE, 'device_type': 'disk', 'drive_addr': 0, 'ctrl_disk_addr': 1}, {'path': mock.sentinel.PATH2, 'boot_index': 2, 'disk_bus': constants.CTRL_TYPE_SCSI, 'device_type': 'disk', 'drive_addr': 0, 'ctrl_disk_addr': 0}, {'path': None}] self._vmops._attach_ephemerals(mock_instance.name, ephemerals) mock_attach_drive.assert_has_calls( [mock.call(mock_instance.name, mock.sentinel.PATH1, 0, 1, constants.CTRL_TYPE_IDE, constants.DISK), mock.call(mock_instance.name, mock.sentinel.PATH2, 0, 0, constants.CTRL_TYPE_SCSI, constants.DISK) ]) def test_attach_drive_vm_to_scsi(self): self._vmops._attach_drive( mock.sentinel.FAKE_VM_NAME, mock.sentinel.FAKE_PATH, mock.sentinel.FAKE_DRIVE_ADDR, mock.sentinel.FAKE_CTRL_DISK_ADDR, constants.CTRL_TYPE_SCSI) self._vmops._vmutils.attach_scsi_drive.assert_called_once_with( mock.sentinel.FAKE_VM_NAME, mock.sentinel.FAKE_PATH, constants.DISK) def test_attach_drive_vm_to_ide(self): self._vmops._attach_drive( mock.sentinel.FAKE_VM_NAME, mock.sentinel.FAKE_PATH, mock.sentinel.FAKE_DRIVE_ADDR, mock.sentinel.FAKE_CTRL_DISK_ADDR, constants.CTRL_TYPE_IDE) self._vmops._vmutils.attach_ide_drive.assert_called_once_with( mock.sentinel.FAKE_VM_NAME, mock.sentinel.FAKE_PATH, mock.sentinel.FAKE_DRIVE_ADDR, mock.sentinel.FAKE_CTRL_DISK_ADDR, constants.DISK) def test_get_image_vm_generation_default(self): image_meta = objects.ImageMeta.from_dict({"properties": {}}) self._vmops._hostutils.get_default_vm_generation.return_value = ( constants.IMAGE_PROP_VM_GEN_1) self._vmops._hostutils.get_supported_vm_types.return_value = [ constants.IMAGE_PROP_VM_GEN_1, constants.IMAGE_PROP_VM_GEN_2] response = self._vmops.get_image_vm_generation( mock.sentinel.instance_id, image_meta) self.assertEqual(constants.VM_GEN_1, response) def test_get_image_vm_generation_gen2(self): image_meta = objects.ImageMeta.from_dict( {"properties": {"hw_machine_type": constants.IMAGE_PROP_VM_GEN_2}}) self._vmops._hostutils.get_supported_vm_types.return_value = [ constants.IMAGE_PROP_VM_GEN_1, constants.IMAGE_PROP_VM_GEN_2] response = self._vmops.get_image_vm_generation( mock.sentinel.instance_id, image_meta) self.assertEqual(constants.VM_GEN_2, response) def test_check_vm_image_type_exception(self): self._vmops._vhdutils.get_vhd_format.return_value = ( constants.DISK_FORMAT_VHD) self.assertRaises(exception.InstanceUnacceptable, self._vmops.check_vm_image_type, mock.sentinel.instance_id, constants.VM_GEN_2, mock.sentinel.FAKE_PATH) def _check_requires_certificate(self, os_type): mock_image_meta = mock.MagicMock() mock_image_meta.properties = {'os_type': os_type} expected_result = os_type == fields.OSType.LINUX result = self._vmops._requires_certificate(mock_image_meta) self.assertEqual(expected_result, result) def test_requires_certificate_windows(self): self._check_requires_certificate(os_type=fields.OSType.WINDOWS) def test_requires_certificate_linux(self): self._check_requires_certificate(os_type=fields.OSType.LINUX) def _check_requires_secure_boot( self, image_prop_os_type=fields.OSType.LINUX, image_prop_secure_boot=fields.SecureBoot.REQUIRED, flavor_secure_boot=fields.SecureBoot.REQUIRED, vm_gen=constants.VM_GEN_2, expected_exception=True): mock_instance = fake_instance.fake_instance_obj(self.context) if 
flavor_secure_boot: mock_instance.flavor.extra_specs = { constants.FLAVOR_SPEC_SECURE_BOOT: flavor_secure_boot} mock_image_meta = mock.MagicMock() mock_image_meta.properties = {'os_type': image_prop_os_type} if image_prop_secure_boot: mock_image_meta.properties['os_secure_boot'] = ( image_prop_secure_boot) if expected_exception: self.assertRaises(exception.InstanceUnacceptable, self._vmops._requires_secure_boot, mock_instance, mock_image_meta, vm_gen) else: result = self._vmops._requires_secure_boot(mock_instance, mock_image_meta, vm_gen) requires_sb = fields.SecureBoot.REQUIRED in [ flavor_secure_boot, image_prop_secure_boot] self.assertEqual(requires_sb, result) def test_requires_secure_boot_ok(self): self._check_requires_secure_boot( expected_exception=False) def test_requires_secure_boot_image_img_prop_none(self): self._check_requires_secure_boot( image_prop_secure_boot=None, expected_exception=False) def test_requires_secure_boot_image_extra_spec_none(self): self._check_requires_secure_boot( flavor_secure_boot=None, expected_exception=False) def test_requires_secure_boot_flavor_no_os_type(self): self._check_requires_secure_boot( image_prop_os_type=None) def test_requires_secure_boot_flavor_no_os_type_no_exc(self): self._check_requires_secure_boot( image_prop_os_type=None, image_prop_secure_boot=fields.SecureBoot.DISABLED, flavor_secure_boot=fields.SecureBoot.DISABLED, expected_exception=False) def test_requires_secure_boot_flavor_disabled(self): self._check_requires_secure_boot( flavor_secure_boot=fields.SecureBoot.DISABLED) def test_requires_secure_boot_image_disabled(self): self._check_requires_secure_boot( image_prop_secure_boot=fields.SecureBoot.DISABLED) def test_requires_secure_boot_generation_1(self): self._check_requires_secure_boot(vm_gen=constants.VM_GEN_1) @mock.patch('nova.api.metadata.base.InstanceMetadata') @mock.patch('nova.virt.configdrive.ConfigDriveBuilder') @mock.patch('oslo_concurrency.processutils.execute') def _test_create_config_drive(self, mock_execute, mock_ConfigDriveBuilder, mock_InstanceMetadata, config_drive_format, config_drive_cdrom, side_effect, rescue=False): mock_instance = fake_instance.fake_instance_obj(self.context) self.flags(config_drive_format=config_drive_format) self.flags(config_drive_cdrom=config_drive_cdrom, group='hyperv') self.flags(config_drive_inject_password=True, group='hyperv') mock_ConfigDriveBuilder().__enter__().make_drive.side_effect = [ side_effect] path_iso = os.path.join(self.FAKE_DIR, self.FAKE_CONFIG_DRIVE_ISO) path_vhd = os.path.join(self.FAKE_DIR, self.FAKE_CONFIG_DRIVE_VHD) def fake_get_configdrive_path(instance_name, disk_format, rescue=False): return (path_iso if disk_format == constants.DVD_FORMAT else path_vhd) mock_get_configdrive_path = self._vmops._pathutils.get_configdrive_path mock_get_configdrive_path.side_effect = fake_get_configdrive_path expected_get_configdrive_path_calls = [mock.call(mock_instance.name, constants.DVD_FORMAT, rescue=rescue)] if not config_drive_cdrom: expected_call = mock.call(mock_instance.name, constants.DISK_FORMAT_VHD, rescue=rescue) expected_get_configdrive_path_calls.append(expected_call) if config_drive_format != self.ISO9660: self.assertRaises(exception.ConfigDriveUnsupportedFormat, self._vmops._create_config_drive, self.context, mock_instance, [mock.sentinel.FILE], mock.sentinel.PASSWORD, mock.sentinel.NET_INFO, rescue) elif side_effect is processutils.ProcessExecutionError: self.assertRaises(processutils.ProcessExecutionError, self._vmops._create_config_drive, self.context, 
mock_instance, [mock.sentinel.FILE], mock.sentinel.PASSWORD, mock.sentinel.NET_INFO, rescue) else: path = self._vmops._create_config_drive(self.context, mock_instance, [mock.sentinel.FILE], mock.sentinel.PASSWORD, mock.sentinel.NET_INFO, rescue) mock_InstanceMetadata.assert_called_once_with( mock_instance, content=[mock.sentinel.FILE], extra_md={'admin_pass': mock.sentinel.PASSWORD}, network_info=mock.sentinel.NET_INFO, request_context=self.context) mock_get_configdrive_path.assert_has_calls( expected_get_configdrive_path_calls) mock_ConfigDriveBuilder.assert_called_with( instance_md=mock_InstanceMetadata.return_value) mock_make_drive = mock_ConfigDriveBuilder().__enter__().make_drive mock_make_drive.assert_called_once_with(path_iso) if not CONF.hyperv.config_drive_cdrom: expected = path_vhd mock_execute.assert_called_once_with( CONF.hyperv.qemu_img_cmd, 'convert', '-f', 'raw', '-O', 'vpc', path_iso, path_vhd, attempts=1) self._vmops._pathutils.remove.assert_called_once_with( os.path.join(self.FAKE_DIR, self.FAKE_CONFIG_DRIVE_ISO)) else: expected = path_iso self.assertEqual(expected, path) def test_create_config_drive_cdrom(self): self._test_create_config_drive(config_drive_format=self.ISO9660, config_drive_cdrom=True, side_effect=None) def test_create_config_drive_vhd(self): self._test_create_config_drive(config_drive_format=self.ISO9660, config_drive_cdrom=False, side_effect=None) def test_create_rescue_config_drive_vhd(self): self._test_create_config_drive(config_drive_format=self.ISO9660, config_drive_cdrom=False, side_effect=None, rescue=True) def test_create_config_drive_execution_error(self): self._test_create_config_drive( config_drive_format=self.ISO9660, config_drive_cdrom=False, side_effect=processutils.ProcessExecutionError) def test_attach_config_drive_exception(self): instance = fake_instance.fake_instance_obj(self.context) self.assertRaises(exception.InvalidDiskFormat, self._vmops.attach_config_drive, instance, 'C:/fake_instance_dir/configdrive.xxx', constants.VM_GEN_1) @mock.patch.object(vmops.VMOps, '_attach_drive') def test_attach_config_drive(self, mock_attach_drive): instance = fake_instance.fake_instance_obj(self.context) self._vmops.attach_config_drive(instance, self._FAKE_CONFIGDRIVE_PATH, constants.VM_GEN_1) mock_attach_drive.assert_called_once_with( instance.name, self._FAKE_CONFIGDRIVE_PATH, 1, 0, constants.CTRL_TYPE_IDE, constants.DISK) @mock.patch.object(vmops.VMOps, '_attach_drive') def test_attach_config_drive_gen2(self, mock_attach_drive): instance = fake_instance.fake_instance_obj(self.context) self._vmops.attach_config_drive(instance, self._FAKE_CONFIGDRIVE_PATH, constants.VM_GEN_2) mock_attach_drive.assert_called_once_with( instance.name, self._FAKE_CONFIGDRIVE_PATH, 1, 0, constants.CTRL_TYPE_SCSI, constants.DISK) def test_detach_config_drive(self): is_rescue_configdrive = True mock_lookup_configdrive = ( self._vmops._pathutils.lookup_configdrive_path) mock_lookup_configdrive.return_value = mock.sentinel.configdrive_path self._vmops._detach_config_drive(mock.sentinel.instance_name, rescue=is_rescue_configdrive, delete=True) mock_lookup_configdrive.assert_called_once_with( mock.sentinel.instance_name, rescue=is_rescue_configdrive) self._vmops._vmutils.detach_vm_disk.assert_called_once_with( mock.sentinel.instance_name, mock.sentinel.configdrive_path, is_physical=False) self._vmops._pathutils.remove.assert_called_once_with( mock.sentinel.configdrive_path) def test_delete_disk_files(self): mock_instance = fake_instance.fake_instance_obj(self.context) 
self._vmops._delete_disk_files(mock_instance.name) stop_console_handler = ( self._vmops._serial_console_ops.stop_console_handler_unsync) stop_console_handler.assert_called_once_with(mock_instance.name) self._vmops._pathutils.get_instance_dir.assert_called_once_with( mock_instance.name, create_dir=False, remove_dir=True) @ddt.data({}, {'vm_exists': True}, {'planned_vm_exists': True}) @ddt.unpack @mock.patch('nova.virt.hyperv.volumeops.VolumeOps.disconnect_volumes') @mock.patch('nova.virt.hyperv.vmops.VMOps._delete_disk_files') @mock.patch('nova.virt.hyperv.vmops.VMOps.power_off') @mock.patch('nova.virt.hyperv.vmops.VMOps.unplug_vifs') def test_destroy(self, mock_unplug_vifs, mock_power_off, mock_delete_disk_files, mock_disconnect_volumes, vm_exists=False, planned_vm_exists=False): mock_instance = fake_instance.fake_instance_obj(self.context) self._vmops._vmutils.vm_exists.return_value = vm_exists self._vmops._migrutils.planned_vm_exists.return_value = ( planned_vm_exists) self._vmops.destroy(instance=mock_instance, block_device_info=mock.sentinel.FAKE_BD_INFO, network_info=mock.sentinel.fake_network_info) mock_destroy_planned_vms = ( self._vmops._migrutils.destroy_existing_planned_vm) if vm_exists: self._vmops._vmutils.stop_vm_jobs.assert_called_once_with( mock_instance.name) mock_power_off.assert_called_once_with(mock_instance) self._vmops._vmutils.destroy_vm.assert_called_once_with( mock_instance.name) elif planned_vm_exists: self._vmops._migrutils.planned_vm_exists.assert_called_once_with( mock_instance.name) mock_destroy_planned_vms.assert_called_once_with( mock_instance.name) else: self.assertFalse(self._vmops._vmutils.destroy_vm.called) self.assertFalse(mock_destroy_planned_vms.called) self._vmops._vmutils.vm_exists.assert_called_with( mock_instance.name) mock_unplug_vifs.assert_called_once_with( mock_instance, mock.sentinel.fake_network_info) mock_disconnect_volumes.assert_called_once_with( mock.sentinel.FAKE_BD_INFO) mock_delete_disk_files.assert_called_once_with( mock_instance.name) @mock.patch('nova.virt.hyperv.vmops.VMOps.power_off') def test_destroy_exception(self, mock_power_off): mock_instance = fake_instance.fake_instance_obj(self.context) self._vmops._vmutils.destroy_vm.side_effect = ( os_win_exc.HyperVException) self._vmops._vmutils.vm_exists.return_value = True self.assertRaises(os_win_exc.HyperVException, self._vmops.destroy, mock_instance, mock.sentinel.network_info, mock.sentinel.block_device_info) def test_reboot_hard(self): self._test_reboot(vmops.REBOOT_TYPE_HARD, os_win_const.HYPERV_VM_STATE_REBOOT) @mock.patch("nova.virt.hyperv.vmops.VMOps._soft_shutdown") def test_reboot_soft(self, mock_soft_shutdown): mock_soft_shutdown.return_value = True self._test_reboot(vmops.REBOOT_TYPE_SOFT, os_win_const.HYPERV_VM_STATE_ENABLED) @mock.patch("nova.virt.hyperv.vmops.VMOps._soft_shutdown") def test_reboot_soft_failed(self, mock_soft_shutdown): mock_soft_shutdown.return_value = False self._test_reboot(vmops.REBOOT_TYPE_SOFT, os_win_const.HYPERV_VM_STATE_REBOOT) @mock.patch("nova.virt.hyperv.vmops.VMOps.power_on") @mock.patch("nova.virt.hyperv.vmops.VMOps._soft_shutdown") def test_reboot_soft_exception(self, mock_soft_shutdown, mock_power_on): mock_soft_shutdown.return_value = True mock_power_on.side_effect = os_win_exc.HyperVException( "Expected failure") instance = fake_instance.fake_instance_obj(self.context) self.assertRaises(os_win_exc.HyperVException, self._vmops.reboot, instance, {}, vmops.REBOOT_TYPE_SOFT) mock_soft_shutdown.assert_called_once_with(instance) 
mock_power_on.assert_called_once_with(instance, network_info={}) def _test_reboot(self, reboot_type, vm_state): instance = fake_instance.fake_instance_obj(self.context) with mock.patch.object(self._vmops, '_set_vm_state') as mock_set_state: self._vmops.reboot(instance, {}, reboot_type) mock_set_state.assert_called_once_with(instance, vm_state) @mock.patch("nova.virt.hyperv.vmops.VMOps._wait_for_power_off") def test_soft_shutdown(self, mock_wait_for_power_off): instance = fake_instance.fake_instance_obj(self.context) mock_wait_for_power_off.return_value = True result = self._vmops._soft_shutdown(instance, self._FAKE_TIMEOUT) mock_shutdown_vm = self._vmops._vmutils.soft_shutdown_vm mock_shutdown_vm.assert_called_once_with(instance.name) mock_wait_for_power_off.assert_called_once_with( instance.name, self._FAKE_TIMEOUT) self.assertTrue(result) @mock.patch("time.sleep") def test_soft_shutdown_failed(self, mock_sleep): instance = fake_instance.fake_instance_obj(self.context) mock_shutdown_vm = self._vmops._vmutils.soft_shutdown_vm mock_shutdown_vm.side_effect = os_win_exc.HyperVException( "Expected failure.") result = self._vmops._soft_shutdown(instance, self._FAKE_TIMEOUT) mock_shutdown_vm.assert_called_once_with(instance.name) self.assertFalse(result) @mock.patch("nova.virt.hyperv.vmops.VMOps._wait_for_power_off") def test_soft_shutdown_wait(self, mock_wait_for_power_off): instance = fake_instance.fake_instance_obj(self.context) mock_wait_for_power_off.side_effect = [False, True] result = self._vmops._soft_shutdown(instance, self._FAKE_TIMEOUT, 1) calls = [mock.call(instance.name, 1), mock.call(instance.name, self._FAKE_TIMEOUT - 1)] mock_shutdown_vm = self._vmops._vmutils.soft_shutdown_vm mock_shutdown_vm.assert_called_with(instance.name) mock_wait_for_power_off.assert_has_calls(calls) self.assertTrue(result) @mock.patch("nova.virt.hyperv.vmops.VMOps._wait_for_power_off") def test_soft_shutdown_wait_timeout(self, mock_wait_for_power_off): instance = fake_instance.fake_instance_obj(self.context) mock_wait_for_power_off.return_value = False result = self._vmops._soft_shutdown(instance, self._FAKE_TIMEOUT, 1.5) calls = [mock.call(instance.name, 1.5), mock.call(instance.name, self._FAKE_TIMEOUT - 1.5)] mock_shutdown_vm = self._vmops._vmutils.soft_shutdown_vm mock_shutdown_vm.assert_called_with(instance.name) mock_wait_for_power_off.assert_has_calls(calls) self.assertFalse(result) @mock.patch('nova.virt.hyperv.vmops.VMOps._set_vm_state') def test_pause(self, mock_set_vm_state): mock_instance = fake_instance.fake_instance_obj(self.context) self._vmops.pause(instance=mock_instance) mock_set_vm_state.assert_called_once_with( mock_instance, os_win_const.HYPERV_VM_STATE_PAUSED) @mock.patch('nova.virt.hyperv.vmops.VMOps._set_vm_state') def test_unpause(self, mock_set_vm_state): mock_instance = fake_instance.fake_instance_obj(self.context) self._vmops.unpause(instance=mock_instance) mock_set_vm_state.assert_called_once_with( mock_instance, os_win_const.HYPERV_VM_STATE_ENABLED) @mock.patch('nova.virt.hyperv.vmops.VMOps._set_vm_state') def test_suspend(self, mock_set_vm_state): mock_instance = fake_instance.fake_instance_obj(self.context) self._vmops.suspend(instance=mock_instance) mock_set_vm_state.assert_called_once_with( mock_instance, os_win_const.HYPERV_VM_STATE_SUSPENDED) @mock.patch('nova.virt.hyperv.vmops.VMOps._set_vm_state') def test_resume(self, mock_set_vm_state): mock_instance = fake_instance.fake_instance_obj(self.context) self._vmops.resume(instance=mock_instance) 
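    # The _soft_shutdown tests above imply a retry loop that requests a guest
    # shutdown and waits in slices of min(retry_interval, remaining timeout)
    # until the VM powers off or the timeout is used up.  Roughly (a sketch
    # inferred from the expected mock calls, not the driver source):
    #
    #     while timeout > 0:
    #         wait_time = min(retry_interval, timeout)
    #         try:
    #             self._vmutils.soft_shutdown_vm(instance.name)
    #             if self._wait_for_power_off(instance.name, wait_time):
    #                 return True
    #         except os_win_exc.HyperVException:
    #             pass  # e.g. the guest is still booting; retry until timeout
    #         timeout -= retry_interval
    #     return False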
mock_set_vm_state.assert_called_once_with( mock_instance, os_win_const.HYPERV_VM_STATE_ENABLED) def _test_power_off(self, timeout, set_state_expected=True): instance = fake_instance.fake_instance_obj(self.context) with mock.patch.object(self._vmops, '_set_vm_state') as mock_set_state: self._vmops.power_off(instance, timeout) serialops = self._vmops._serial_console_ops serialops.stop_console_handler.assert_called_once_with( instance.name) if set_state_expected: mock_set_state.assert_called_once_with( instance, os_win_const.HYPERV_VM_STATE_DISABLED) def test_power_off_hard(self): self._test_power_off(timeout=0) @mock.patch("nova.virt.hyperv.vmops.VMOps._soft_shutdown") def test_power_off_exception(self, mock_soft_shutdown): mock_soft_shutdown.return_value = False self._test_power_off(timeout=1) @mock.patch("nova.virt.hyperv.vmops.VMOps._set_vm_state") @mock.patch("nova.virt.hyperv.vmops.VMOps._soft_shutdown") def test_power_off_soft(self, mock_soft_shutdown, mock_set_state): instance = fake_instance.fake_instance_obj(self.context) mock_soft_shutdown.return_value = True self._vmops.power_off(instance, 1, 0) serialops = self._vmops._serial_console_ops serialops.stop_console_handler.assert_called_once_with( instance.name) mock_soft_shutdown.assert_called_once_with( instance, 1, vmops.SHUTDOWN_TIME_INCREMENT) self.assertFalse(mock_set_state.called) @mock.patch("nova.virt.hyperv.vmops.VMOps._soft_shutdown") def test_power_off_unexisting_instance(self, mock_soft_shutdown): mock_soft_shutdown.side_effect = os_win_exc.HyperVVMNotFoundException( vm_name=mock.sentinel.vm_name) self._test_power_off(timeout=1, set_state_expected=False) @mock.patch('nova.virt.hyperv.vmops.VMOps._set_vm_state') def test_power_on(self, mock_set_vm_state): mock_instance = fake_instance.fake_instance_obj(self.context) self._vmops.power_on(mock_instance) mock_set_vm_state.assert_called_once_with( mock_instance, os_win_const.HYPERV_VM_STATE_ENABLED) @mock.patch('nova.virt.hyperv.volumeops.VolumeOps' '.fix_instance_volume_disk_paths') @mock.patch('nova.virt.hyperv.vmops.VMOps._set_vm_state') def test_power_on_having_block_devices(self, mock_set_vm_state, mock_fix_instance_vol_paths): mock_instance = fake_instance.fake_instance_obj(self.context) self._vmops.power_on(mock_instance, mock.sentinel.block_device_info) mock_fix_instance_vol_paths.assert_called_once_with( mock_instance.name, mock.sentinel.block_device_info) mock_set_vm_state.assert_called_once_with( mock_instance, os_win_const.HYPERV_VM_STATE_ENABLED) @mock.patch.object(vmops.VMOps, 'plug_vifs') def test_power_on_with_network_info(self, mock_plug_vifs): mock_instance = fake_instance.fake_instance_obj(self.context) self._vmops.power_on(mock_instance, network_info=mock.sentinel.fake_network_info) mock_plug_vifs.assert_called_once_with( mock_instance, mock.sentinel.fake_network_info) @mock.patch.object(vmops.VMOps, 'plug_vifs') def test_power_on_vifs_already_plugged(self, mock_plug_vifs): mock_instance = fake_instance.fake_instance_obj(self.context) self._vmops.power_on(mock_instance, should_plug_vifs=False) self.assertFalse(mock_plug_vifs.called) def _test_set_vm_state(self, state): mock_instance = fake_instance.fake_instance_obj(self.context) self._vmops._set_vm_state(mock_instance, state) self._vmops._vmutils.set_vm_state.assert_called_once_with( mock_instance.name, state) def test_set_vm_state_disabled(self): self._test_set_vm_state(state=os_win_const.HYPERV_VM_STATE_DISABLED) def test_set_vm_state_enabled(self): 
self._test_set_vm_state(state=os_win_const.HYPERV_VM_STATE_ENABLED) def test_set_vm_state_reboot(self): self._test_set_vm_state(state=os_win_const.HYPERV_VM_STATE_REBOOT) def test_set_vm_state_exception(self): mock_instance = fake_instance.fake_instance_obj(self.context) self._vmops._vmutils.set_vm_state.side_effect = ( os_win_exc.HyperVException) self.assertRaises(os_win_exc.HyperVException, self._vmops._set_vm_state, mock_instance, mock.sentinel.STATE) def test_get_vm_state(self): summary_info = {'EnabledState': os_win_const.HYPERV_VM_STATE_DISABLED} with mock.patch.object(self._vmops._vmutils, 'get_vm_summary_info') as mock_get_summary_info: mock_get_summary_info.return_value = summary_info response = self._vmops._get_vm_state(mock.sentinel.FAKE_VM_NAME) self.assertEqual(response, os_win_const.HYPERV_VM_STATE_DISABLED) @mock.patch.object(vmops.VMOps, '_get_vm_state') def test_wait_for_power_off_true(self, mock_get_state): mock_get_state.return_value = os_win_const.HYPERV_VM_STATE_DISABLED result = self._vmops._wait_for_power_off( mock.sentinel.FAKE_VM_NAME, vmops.SHUTDOWN_TIME_INCREMENT) mock_get_state.assert_called_with(mock.sentinel.FAKE_VM_NAME) self.assertTrue(result) @mock.patch.object(vmops.etimeout, "with_timeout") def test_wait_for_power_off_false(self, mock_with_timeout): mock_with_timeout.side_effect = etimeout.Timeout() result = self._vmops._wait_for_power_off( mock.sentinel.FAKE_VM_NAME, vmops.SHUTDOWN_TIME_INCREMENT) self.assertFalse(result) def test_create_vm_com_port_pipes(self): mock_instance = fake_instance.fake_instance_obj(self.context) mock_serial_ports = { 1: constants.SERIAL_PORT_TYPE_RO, 2: constants.SERIAL_PORT_TYPE_RW } self._vmops._create_vm_com_port_pipes(mock_instance, mock_serial_ports) expected_calls = [] for port_number, port_type in mock_serial_ports.items(): expected_pipe = r'\\.\pipe\%s_%s' % (mock_instance.uuid, port_type) expected_calls.append(mock.call(mock_instance.name, port_number, expected_pipe)) mock_set_conn = self._vmops._vmutils.set_vm_serial_port_connection mock_set_conn.assert_has_calls(expected_calls) def test_list_instance_uuids(self): fake_uuid = '4f54fb69-d3a2-45b7-bb9b-b6e6b3d893b3' with mock.patch.object(self._vmops._vmutils, 'list_instance_notes') as mock_list_notes: mock_list_notes.return_value = [('fake_name', [fake_uuid])] response = self._vmops.list_instance_uuids() mock_list_notes.assert_called_once_with() self.assertEqual(response, [fake_uuid]) def test_copy_vm_dvd_disks(self): fake_paths = [mock.sentinel.FAKE_DVD_PATH1, mock.sentinel.FAKE_DVD_PATH2] mock_copy = self._vmops._pathutils.copyfile mock_get_dvd_disk_paths = self._vmops._vmutils.get_vm_dvd_disk_paths mock_get_dvd_disk_paths.return_value = fake_paths self._vmops._pathutils.get_instance_dir.return_value = ( mock.sentinel.FAKE_DEST_PATH) self._vmops.copy_vm_dvd_disks(mock.sentinel.FAKE_VM_NAME, mock.sentinel.FAKE_DEST_HOST) mock_get_dvd_disk_paths.assert_called_with(mock.sentinel.FAKE_VM_NAME) self._vmops._pathutils.get_instance_dir.assert_called_once_with( mock.sentinel.FAKE_VM_NAME, remote_server=mock.sentinel.FAKE_DEST_HOST) self.assertEqual(2, mock_copy.call_count) mock_copy.assert_has_calls([mock.call(mock.sentinel.FAKE_DVD_PATH1, mock.sentinel.FAKE_DEST_PATH), mock.call(mock.sentinel.FAKE_DVD_PATH2, mock.sentinel.FAKE_DEST_PATH)]) def test_plug_vifs(self): mock_instance = fake_instance.fake_instance_obj(self.context) fake_vif1 = {'id': mock.sentinel.ID1, 'type': mock.sentinel.vif_type1} fake_vif2 = {'id': mock.sentinel.ID2, 'type': mock.sentinel.vif_type2} 
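    # For reference, the serial console pipe path asserted in
    # test_create_vm_com_port_pipes above expands to one Windows named pipe
    # per port type, e.g. for a hypothetical instance uuid:
    #
    #     \\.\pipe\4f54fb69-d3a2-45b7-bb9b-b6e6b3d893b3_ro
    #     \\.\pipe\4f54fb69-d3a2-45b7-bb9b-b6e6b3d893b3_rw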
mock_network_info = [fake_vif1, fake_vif2] calls = [mock.call(mock_instance, fake_vif1), mock.call(mock_instance, fake_vif2)] self._vmops.plug_vifs(mock_instance, network_info=mock_network_info) self._vmops._vif_driver.plug.assert_has_calls(calls) def test_unplug_vifs(self): mock_instance = fake_instance.fake_instance_obj(self.context) fake_vif1 = {'id': mock.sentinel.ID1, 'type': mock.sentinel.vif_type1} fake_vif2 = {'id': mock.sentinel.ID2, 'type': mock.sentinel.vif_type2} mock_network_info = [fake_vif1, fake_vif2] calls = [mock.call(mock_instance, fake_vif1), mock.call(mock_instance, fake_vif2)] self._vmops.unplug_vifs(mock_instance, network_info=mock_network_info) self._vmops._vif_driver.unplug.assert_has_calls(calls) def _setup_remotefx_mocks(self): mock_instance = fake_instance.fake_instance_obj(self.context) mock_instance.flavor.extra_specs = { 'os:resolution': os_win_const.REMOTEFX_MAX_RES_1920x1200, 'os:monitors': '2', 'os:vram': '256'} return mock_instance def test_configure_remotefx_not_required(self): self.flags(enable_remotefx=False, group='hyperv') mock_instance = fake_instance.fake_instance_obj(self.context) self._vmops._configure_remotefx(mock_instance, mock.sentinel.VM_GEN) def test_configure_remotefx_exception_enable_config(self): self.flags(enable_remotefx=False, group='hyperv') mock_instance = self._setup_remotefx_mocks() self.assertRaises(exception.InstanceUnacceptable, self._vmops._configure_remotefx, mock_instance, mock.sentinel.VM_GEN) def test_configure_remotefx_exception_server_feature(self): self.flags(enable_remotefx=True, group='hyperv') mock_instance = self._setup_remotefx_mocks() self._vmops._hostutils.check_server_feature.return_value = False self.assertRaises(exception.InstanceUnacceptable, self._vmops._configure_remotefx, mock_instance, mock.sentinel.VM_GEN) def test_configure_remotefx_exception_vm_gen(self): self.flags(enable_remotefx=True, group='hyperv') mock_instance = self._setup_remotefx_mocks() self._vmops._hostutils.check_server_feature.return_value = True self._vmops._vmutils.vm_gen_supports_remotefx.return_value = False self.assertRaises(exception.InstanceUnacceptable, self._vmops._configure_remotefx, mock_instance, mock.sentinel.VM_GEN) def test_configure_remotefx(self): self.flags(enable_remotefx=True, group='hyperv') mock_instance = self._setup_remotefx_mocks() self._vmops._hostutils.check_server_feature.return_value = True self._vmops._vmutils.vm_gen_supports_remotefx.return_value = True extra_specs = mock_instance.flavor.extra_specs self._vmops._configure_remotefx(mock_instance, constants.VM_GEN_1) mock_enable_remotefx = ( self._vmops._vmutils.enable_remotefx_video_adapter) mock_enable_remotefx.assert_called_once_with( mock_instance.name, int(extra_specs['os:monitors']), extra_specs['os:resolution'], int(extra_specs['os:vram']) * units.Mi) @mock.patch.object(vmops.VMOps, '_get_vm_state') def test_check_hotplug_available_vm_disabled(self, mock_get_vm_state): fake_vm = fake_instance.fake_instance_obj(self.context) mock_get_vm_state.return_value = os_win_const.HYPERV_VM_STATE_DISABLED result = self._vmops._check_hotplug_available(fake_vm) self.assertTrue(result) mock_get_vm_state.assert_called_once_with(fake_vm.name) self.assertFalse( self._vmops._hostutils.check_min_windows_version.called) self.assertFalse(self._vmops._vmutils.get_vm_generation.called) @mock.patch.object(vmops.VMOps, '_get_vm_state') def _test_check_hotplug_available( self, mock_get_vm_state, expected_result=False, vm_gen=constants.VM_GEN_2, windows_version=_WIN_VERSION_10): 
fake_vm = fake_instance.fake_instance_obj(self.context) mock_get_vm_state.return_value = os_win_const.HYPERV_VM_STATE_ENABLED self._vmops._vmutils.get_vm_generation.return_value = vm_gen fake_check_win_vers = self._vmops._hostutils.check_min_windows_version fake_check_win_vers.return_value = ( windows_version == self._WIN_VERSION_10) result = self._vmops._check_hotplug_available(fake_vm) self.assertEqual(expected_result, result) mock_get_vm_state.assert_called_once_with(fake_vm.name) fake_check_win_vers.assert_called_once_with(10, 0) def test_check_if_hotplug_available(self): self._test_check_hotplug_available(expected_result=True) def test_check_if_hotplug_available_gen1(self): self._test_check_hotplug_available( expected_result=False, vm_gen=constants.VM_GEN_1) def test_check_if_hotplug_available_win_6_3(self): self._test_check_hotplug_available( expected_result=False, windows_version=self._WIN_VERSION_6_3) @mock.patch.object(vmops.VMOps, '_check_hotplug_available') def test_attach_interface(self, mock_check_hotplug_available): mock_check_hotplug_available.return_value = True fake_vm = fake_instance.fake_instance_obj(self.context) fake_vif = test_virtual_interface.fake_vif self._vmops.attach_interface(fake_vm, fake_vif) mock_check_hotplug_available.assert_called_once_with(fake_vm) self._vmops._vif_driver.plug.assert_called_once_with( fake_vm, fake_vif) self._vmops._vmutils.create_nic.assert_called_once_with( fake_vm.name, fake_vif['id'], fake_vif['address']) @mock.patch.object(vmops.VMOps, '_check_hotplug_available') def test_attach_interface_failed(self, mock_check_hotplug_available): mock_check_hotplug_available.return_value = False self.assertRaises(exception.InterfaceAttachFailed, self._vmops.attach_interface, mock.MagicMock(), mock.sentinel.fake_vif) @mock.patch.object(vmops.VMOps, '_check_hotplug_available') def test_detach_interface(self, mock_check_hotplug_available): mock_check_hotplug_available.return_value = True fake_vm = fake_instance.fake_instance_obj(self.context) fake_vif = test_virtual_interface.fake_vif self._vmops.detach_interface(fake_vm, fake_vif) mock_check_hotplug_available.assert_called_once_with(fake_vm) self._vmops._vif_driver.unplug.assert_called_once_with( fake_vm, fake_vif) self._vmops._vmutils.destroy_nic.assert_called_once_with( fake_vm.name, fake_vif['id']) @mock.patch.object(vmops.VMOps, '_check_hotplug_available') def test_detach_interface_failed(self, mock_check_hotplug_available): mock_check_hotplug_available.return_value = False self.assertRaises(exception.InterfaceDetachFailed, self._vmops.detach_interface, mock.MagicMock(), mock.sentinel.fake_vif) @mock.patch.object(vmops.VMOps, '_check_hotplug_available') def test_detach_interface_missing_instance(self, mock_check_hotplug): mock_check_hotplug.side_effect = os_win_exc.HyperVVMNotFoundException( vm_name='fake_vm') self.assertRaises(exception.InterfaceDetachFailed, self._vmops.detach_interface, mock.MagicMock(), mock.sentinel.fake_vif) @mock.patch('nova.virt.configdrive.required_by') @mock.patch.object(vmops.VMOps, '_create_root_vhd') @mock.patch.object(vmops.VMOps, 'get_image_vm_generation') @mock.patch.object(vmops.VMOps, '_attach_drive') @mock.patch.object(vmops.VMOps, '_create_config_drive') @mock.patch.object(vmops.VMOps, 'attach_config_drive') @mock.patch.object(vmops.VMOps, '_detach_config_drive') @mock.patch.object(vmops.VMOps, 'power_on') def test_rescue_instance(self, mock_power_on, mock_detach_config_drive, mock_attach_config_drive, mock_create_config_drive, mock_attach_drive, 
mock_get_image_vm_gen, mock_create_root_vhd, mock_configdrive_required): mock_image_meta = mock.MagicMock() mock_vm_gen = constants.VM_GEN_2 mock_instance = fake_instance.fake_instance_obj(self.context) mock_configdrive_required.return_value = True mock_create_root_vhd.return_value = mock.sentinel.rescue_vhd_path mock_get_image_vm_gen.return_value = mock_vm_gen self._vmops._vmutils.get_vm_generation.return_value = mock_vm_gen self._vmops._pathutils.lookup_root_vhd_path.return_value = ( mock.sentinel.root_vhd_path) mock_create_config_drive.return_value = ( mock.sentinel.rescue_configdrive_path) self._vmops.rescue_instance(self.context, mock_instance, mock.sentinel.network_info, mock_image_meta, mock.sentinel.rescue_password) mock_get_image_vm_gen.assert_called_once_with( mock_instance.uuid, mock_image_meta) self._vmops._vmutils.detach_vm_disk.assert_called_once_with( mock_instance.name, mock.sentinel.root_vhd_path, is_physical=False) mock_attach_drive.assert_called_once_with( mock_instance.name, mock.sentinel.rescue_vhd_path, 0, self._vmops._ROOT_DISK_CTRL_ADDR, vmops.VM_GENERATIONS_CONTROLLER_TYPES[mock_vm_gen]) self._vmops._vmutils.attach_scsi_drive.assert_called_once_with( mock_instance.name, mock.sentinel.root_vhd_path, drive_type=constants.DISK) mock_detach_config_drive.assert_called_once_with(mock_instance.name) mock_create_config_drive.assert_called_once_with( self.context, mock_instance, injected_files=None, admin_password=mock.sentinel.rescue_password, network_info=mock.sentinel.network_info, rescue=True) mock_attach_config_drive.assert_called_once_with( mock_instance, mock.sentinel.rescue_configdrive_path, mock_vm_gen) @mock.patch.object(vmops.VMOps, '_create_root_vhd') @mock.patch.object(vmops.VMOps, 'get_image_vm_generation') @mock.patch.object(vmops.VMOps, 'unrescue_instance') def _test_rescue_instance_exception(self, mock_unrescue, mock_get_image_vm_gen, mock_create_root_vhd, wrong_vm_gen=False, boot_from_volume=False, expected_exc=None): mock_vm_gen = constants.VM_GEN_1 image_vm_gen = (mock_vm_gen if not wrong_vm_gen else constants.VM_GEN_2) mock_image_meta = mock.MagicMock() mock_instance = fake_instance.fake_instance_obj(self.context) mock_get_image_vm_gen.return_value = image_vm_gen self._vmops._vmutils.get_vm_generation.return_value = mock_vm_gen self._vmops._pathutils.lookup_root_vhd_path.return_value = ( mock.sentinel.root_vhd_path if not boot_from_volume else None) self.assertRaises(expected_exc, self._vmops.rescue_instance, self.context, mock_instance, mock.sentinel.network_info, mock_image_meta, mock.sentinel.rescue_password) mock_unrescue.assert_called_once_with(mock_instance) def test_rescue_instance_wrong_vm_gen(self): # Test the case when the rescue image requires a different # vm generation than the actual rescued instance. self._test_rescue_instance_exception( wrong_vm_gen=True, expected_exc=exception.ImageUnacceptable) def test_rescue_instance_boot_from_volume(self): # Rescuing instances booted from volume is not supported. 
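    # (Check inferred from the assertions: rescue_instance looks up the local
    # root VHD via pathutils.lookup_root_vhd_path(); for volume-backed
    # instances that lookup returns None, and the driver is expected to raise
    # InstanceNotRescuable instead of attempting the rescue.)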
self._test_rescue_instance_exception( boot_from_volume=True, expected_exc=exception.InstanceNotRescuable) @mock.patch.object(fileutils, 'delete_if_exists') @mock.patch.object(vmops.VMOps, '_attach_drive') @mock.patch.object(vmops.VMOps, 'attach_config_drive') @mock.patch.object(vmops.VMOps, '_detach_config_drive') @mock.patch.object(vmops.VMOps, 'power_on') @mock.patch.object(vmops.VMOps, 'power_off') def test_unrescue_instance(self, mock_power_on, mock_power_off, mock_detach_config_drive, mock_attach_configdrive, mock_attach_drive, mock_delete_if_exists): mock_instance = fake_instance.fake_instance_obj(self.context) mock_vm_gen = constants.VM_GEN_2 self._vmops._vmutils.get_vm_generation.return_value = mock_vm_gen self._vmops._vmutils.is_disk_attached.return_value = False self._vmops._pathutils.lookup_root_vhd_path.side_effect = ( mock.sentinel.root_vhd_path, mock.sentinel.rescue_vhd_path) self._vmops._pathutils.lookup_configdrive_path.return_value = ( mock.sentinel.configdrive_path) self._vmops.unrescue_instance(mock_instance) self._vmops._pathutils.lookup_root_vhd_path.assert_has_calls( [mock.call(mock_instance.name), mock.call(mock_instance.name, rescue=True)]) self._vmops._vmutils.detach_vm_disk.assert_has_calls( [mock.call(mock_instance.name, mock.sentinel.root_vhd_path, is_physical=False), mock.call(mock_instance.name, mock.sentinel.rescue_vhd_path, is_physical=False)]) mock_attach_drive.assert_called_once_with( mock_instance.name, mock.sentinel.root_vhd_path, 0, self._vmops._ROOT_DISK_CTRL_ADDR, vmops.VM_GENERATIONS_CONTROLLER_TYPES[mock_vm_gen]) mock_detach_config_drive.assert_called_once_with(mock_instance.name, rescue=True, delete=True) mock_delete_if_exists.assert_called_once_with( mock.sentinel.rescue_vhd_path) self._vmops._vmutils.is_disk_attached.assert_called_once_with( mock.sentinel.configdrive_path, is_physical=False) mock_attach_configdrive.assert_called_once_with( mock_instance, mock.sentinel.configdrive_path, mock_vm_gen) mock_power_on.assert_called_once_with(mock_instance) @mock.patch.object(vmops.VMOps, 'power_off') def test_unrescue_instance_missing_root_image(self, mock_power_off): mock_instance = fake_instance.fake_instance_obj(self.context) mock_instance.vm_state = vm_states.RESCUED self._vmops._pathutils.lookup_root_vhd_path.return_value = None self.assertRaises(exception.InstanceNotRescuable, self._vmops.unrescue_instance, mock_instance) @mock.patch.object(volumeops.VolumeOps, 'bytes_per_sec_to_iops') @mock.patch.object(vmops.VMOps, '_get_scoped_flavor_extra_specs') @mock.patch.object(vmops.VMOps, '_get_instance_local_disks') def test_set_instance_disk_qos_specs(self, mock_get_local_disks, mock_get_scoped_specs, mock_bytes_per_sec_to_iops): fake_total_bytes_sec = 8 fake_total_iops_sec = 1 mock_instance = fake_instance.fake_instance_obj(self.context) mock_local_disks = [mock.sentinel.root_vhd_path, mock.sentinel.eph_vhd_path] mock_get_local_disks.return_value = mock_local_disks mock_set_qos_specs = self._vmops._vmutils.set_disk_qos_specs mock_get_scoped_specs.return_value = dict( disk_total_bytes_sec=fake_total_bytes_sec) mock_bytes_per_sec_to_iops.return_value = fake_total_iops_sec self._vmops._set_instance_disk_qos_specs(mock_instance) mock_bytes_per_sec_to_iops.assert_called_once_with( fake_total_bytes_sec) mock_get_local_disks.assert_called_once_with(mock_instance.name) expected_calls = [mock.call(disk_path, fake_total_iops_sec) for disk_path in mock_local_disks] mock_set_qos_specs.assert_has_calls(expected_calls) def test_get_instance_local_disks(self): 
fake_instance_dir = 'fake_instance_dir' fake_local_disks = [os.path.join(fake_instance_dir, disk_name) for disk_name in ['root.vhd', 'configdrive.iso']] fake_instance_disks = ['fake_remote_disk'] + fake_local_disks mock_get_storage_paths = self._vmops._vmutils.get_vm_storage_paths mock_get_storage_paths.return_value = [fake_instance_disks, []] mock_get_instance_dir = self._vmops._pathutils.get_instance_dir mock_get_instance_dir.return_value = fake_instance_dir ret_val = self._vmops._get_instance_local_disks( mock.sentinel.instance_name) self.assertEqual(fake_local_disks, ret_val) def test_get_scoped_flavor_extra_specs(self): # The flavor extra spect dict contains only string values. fake_total_bytes_sec = '8' mock_instance = fake_instance.fake_instance_obj(self.context) mock_instance.flavor.extra_specs = { 'spec_key': 'spec_value', 'quota:total_bytes_sec': fake_total_bytes_sec} ret_val = self._vmops._get_scoped_flavor_extra_specs( mock_instance, scope='quota') expected_specs = { 'total_bytes_sec': fake_total_bytes_sec } self.assertEqual(expected_specs, ret_val) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/virt/hyperv/test_volumeops.py0000664000175000017500000006023000000000000023616 0ustar00zuulzuul00000000000000# Copyright 2014 Cloudbase Solutions Srl # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock from os_brick.initiator import connector from oslo_config import cfg from oslo_utils import units from nova import exception from nova import test from nova.tests.unit import fake_block_device from nova.tests.unit.virt.hyperv import test_base from nova.virt.hyperv import constants from nova.virt.hyperv import volumeops CONF = cfg.CONF connection_data = {'volume_id': 'fake_vol_id', 'target_lun': mock.sentinel.fake_lun, 'target_iqn': mock.sentinel.fake_iqn, 'target_portal': mock.sentinel.fake_portal, 'auth_method': 'chap', 'auth_username': mock.sentinel.fake_user, 'auth_password': mock.sentinel.fake_pass} def get_fake_block_dev_info(): return {'block_device_mapping': [ fake_block_device.AnonFakeDbBlockDeviceDict({'source_type': 'volume'})] } def get_fake_connection_info(**kwargs): return {'data': dict(connection_data, **kwargs), 'serial': mock.sentinel.serial} class VolumeOpsTestCase(test_base.HyperVBaseTestCase): """Unit tests for VolumeOps class.""" def setUp(self): super(VolumeOpsTestCase, self).setUp() self._volumeops = volumeops.VolumeOps() self._volumeops._volutils = mock.MagicMock() self._volumeops._vmutils = mock.Mock() def test_get_volume_driver(self): fake_conn_info = {'driver_volume_type': mock.sentinel.fake_driver_type} self._volumeops.volume_drivers[mock.sentinel.fake_driver_type] = ( mock.sentinel.fake_driver) result = self._volumeops._get_volume_driver( connection_info=fake_conn_info) self.assertEqual(mock.sentinel.fake_driver, result) def test_get_volume_driver_exception(self): fake_conn_info = {'driver_volume_type': 'fake_driver'} self.assertRaises(exception.VolumeDriverNotFound, self._volumeops._get_volume_driver, connection_info=fake_conn_info) @mock.patch.object(volumeops.VolumeOps, 'attach_volume') def test_attach_volumes(self, mock_attach_volume): block_device_info = get_fake_block_dev_info() self._volumeops.attach_volumes( block_device_info['block_device_mapping'], mock.sentinel.instance_name) mock_attach_volume.assert_called_once_with( block_device_info['block_device_mapping'][0]['connection_info'], mock.sentinel.instance_name) def test_fix_instance_volume_disk_paths_empty_bdm(self): self._volumeops.fix_instance_volume_disk_paths( mock.sentinel.instance_name, block_device_info={}) self.assertFalse( self._volumeops._vmutils.get_vm_physical_disk_mapping.called) @mock.patch.object(volumeops.VolumeOps, 'get_disk_path_mapping') def test_fix_instance_volume_disk_paths(self, mock_get_disk_path_mapping): block_device_info = get_fake_block_dev_info() mock_disk1 = { 'mounted_disk_path': mock.sentinel.mounted_disk1_path, 'resource_path': mock.sentinel.resource1_path } mock_disk2 = { 'mounted_disk_path': mock.sentinel.mounted_disk2_path, 'resource_path': mock.sentinel.resource2_path } mock_vm_disk_mapping = { mock.sentinel.disk1_serial: mock_disk1, mock.sentinel.disk2_serial: mock_disk2 } # In this case, only the first disk needs to be updated. 
mock_phys_disk_path_mapping = { mock.sentinel.disk1_serial: mock.sentinel.actual_disk1_path, mock.sentinel.disk2_serial: mock.sentinel.mounted_disk2_path } vmutils = self._volumeops._vmutils vmutils.get_vm_physical_disk_mapping.return_value = ( mock_vm_disk_mapping) mock_get_disk_path_mapping.return_value = mock_phys_disk_path_mapping self._volumeops.fix_instance_volume_disk_paths( mock.sentinel.instance_name, block_device_info) vmutils.get_vm_physical_disk_mapping.assert_called_once_with( mock.sentinel.instance_name) mock_get_disk_path_mapping.assert_called_once_with( block_device_info) vmutils.set_disk_host_res.assert_called_once_with( mock.sentinel.resource1_path, mock.sentinel.actual_disk1_path) @mock.patch.object(volumeops.VolumeOps, '_get_volume_driver') def test_disconnect_volumes(self, mock_get_volume_driver): block_device_info = get_fake_block_dev_info() block_device_mapping = block_device_info['block_device_mapping'] fake_volume_driver = mock_get_volume_driver.return_value self._volumeops.disconnect_volumes(block_device_info) fake_volume_driver.disconnect_volume.assert_called_once_with( block_device_mapping[0]['connection_info']) @mock.patch('time.sleep') @mock.patch.object(volumeops.VolumeOps, '_get_volume_driver') def _test_attach_volume(self, mock_get_volume_driver, mock_sleep, attach_failed): fake_conn_info = get_fake_connection_info( qos_specs=mock.sentinel.qos_specs) fake_volume_driver = mock_get_volume_driver.return_value expected_try_count = 1 if attach_failed: expected_try_count += CONF.hyperv.volume_attach_retry_count fake_volume_driver.set_disk_qos_specs.side_effect = ( test.TestingException) self.assertRaises(exception.VolumeAttachFailed, self._volumeops.attach_volume, fake_conn_info, mock.sentinel.inst_name, mock.sentinel.disk_bus) else: self._volumeops.attach_volume( fake_conn_info, mock.sentinel.inst_name, mock.sentinel.disk_bus) mock_get_volume_driver.assert_any_call( fake_conn_info) fake_volume_driver.attach_volume.assert_has_calls( [mock.call(fake_conn_info, mock.sentinel.inst_name, mock.sentinel.disk_bus)] * expected_try_count) fake_volume_driver.set_disk_qos_specs.assert_has_calls( [mock.call(fake_conn_info, mock.sentinel.qos_specs)] * expected_try_count) if attach_failed: fake_volume_driver.disconnect_volume.assert_called_once_with( fake_conn_info) mock_sleep.assert_has_calls( [mock.call(CONF.hyperv.volume_attach_retry_interval)] * CONF.hyperv.volume_attach_retry_count) else: mock_sleep.assert_not_called() def test_attach_volume(self): self._test_attach_volume(attach_failed=False) def test_attach_volume_exc(self): self._test_attach_volume(attach_failed=True) @mock.patch.object(volumeops.VolumeOps, '_get_volume_driver') def test_disconnect_volume(self, mock_get_volume_driver): fake_volume_driver = mock_get_volume_driver.return_value self._volumeops.disconnect_volume(mock.sentinel.conn_info) mock_get_volume_driver.assert_called_once_with( mock.sentinel.conn_info) fake_volume_driver.disconnect_volume.assert_called_once_with( mock.sentinel.conn_info) @mock.patch.object(volumeops.VolumeOps, '_get_volume_driver') def test_detach_volume(self, mock_get_volume_driver): fake_volume_driver = mock_get_volume_driver.return_value fake_conn_info = {'data': 'fake_conn_info_data'} self._volumeops.detach_volume(fake_conn_info, mock.sentinel.inst_name) mock_get_volume_driver.assert_called_once_with( fake_conn_info) fake_volume_driver.detach_volume.assert_called_once_with( fake_conn_info, mock.sentinel.inst_name) fake_volume_driver.disconnect_volume.assert_called_once_with( 
fake_conn_info) @mock.patch.object(connector, 'get_connector_properties') def test_get_volume_connector(self, mock_get_connector): conn = self._volumeops.get_volume_connector() mock_get_connector.assert_called_once_with( root_helper=None, my_ip=CONF.my_block_storage_ip, multipath=CONF.hyperv.use_multipath_io, enforce_multipath=True, host=CONF.host) self.assertEqual(mock_get_connector.return_value, conn) @mock.patch.object(volumeops.VolumeOps, '_get_volume_driver') def test_connect_volumes(self, mock_get_volume_driver): block_device_info = get_fake_block_dev_info() self._volumeops.connect_volumes(block_device_info) init_vol_conn = ( mock_get_volume_driver.return_value.connect_volume) init_vol_conn.assert_called_once_with( block_device_info['block_device_mapping'][0]['connection_info']) @mock.patch.object(volumeops.VolumeOps, 'get_disk_resource_path') def test_get_disk_path_mapping(self, mock_get_disk_path): block_device_info = get_fake_block_dev_info() block_device_mapping = block_device_info['block_device_mapping'] fake_conn_info = get_fake_connection_info() block_device_mapping[0]['connection_info'] = fake_conn_info mock_get_disk_path.return_value = mock.sentinel.disk_path resulted_disk_path_mapping = self._volumeops.get_disk_path_mapping( block_device_info) mock_get_disk_path.assert_called_once_with(fake_conn_info) expected_disk_path_mapping = { mock.sentinel.serial: mock.sentinel.disk_path } self.assertEqual(expected_disk_path_mapping, resulted_disk_path_mapping) @mock.patch.object(volumeops.VolumeOps, '_get_volume_driver') def test_get_disk_resource_path(self, mock_get_volume_driver): fake_conn_info = get_fake_connection_info() fake_volume_driver = mock_get_volume_driver.return_value resulted_disk_path = self._volumeops.get_disk_resource_path( fake_conn_info) mock_get_volume_driver.assert_called_once_with(fake_conn_info) get_mounted_disk = fake_volume_driver.get_disk_resource_path get_mounted_disk.assert_called_once_with(fake_conn_info) self.assertEqual(get_mounted_disk.return_value, resulted_disk_path) def test_bytes_per_sec_to_iops(self): no_bytes = 15 * units.Ki expected_iops = 2 resulted_iops = self._volumeops.bytes_per_sec_to_iops(no_bytes) self.assertEqual(expected_iops, resulted_iops) @mock.patch.object(volumeops.LOG, 'warning') def test_validate_qos_specs(self, mock_warning): supported_qos_specs = [mock.sentinel.spec1, mock.sentinel.spec2] requested_qos_specs = {mock.sentinel.spec1: mock.sentinel.val, mock.sentinel.spec3: mock.sentinel.val2} self._volumeops.validate_qos_specs(requested_qos_specs, supported_qos_specs) self.assertTrue(mock_warning.called) class BaseVolumeDriverTestCase(test_base.HyperVBaseTestCase): """Unit tests for Hyper-V BaseVolumeDriver class.""" def setUp(self): super(BaseVolumeDriverTestCase, self).setUp() self._base_vol_driver = volumeops.BaseVolumeDriver() self._base_vol_driver._diskutils = mock.Mock() self._base_vol_driver._vmutils = mock.Mock() self._base_vol_driver._migrutils = mock.Mock() self._base_vol_driver._conn = mock.Mock() self._vmutils = self._base_vol_driver._vmutils self._migrutils = self._base_vol_driver._migrutils self._diskutils = self._base_vol_driver._diskutils self._conn = self._base_vol_driver._conn @mock.patch.object(connector.InitiatorConnector, 'factory') def test_connector(self, mock_conn_factory): self._base_vol_driver._conn = None self._base_vol_driver._protocol = mock.sentinel.protocol self._base_vol_driver._extra_connector_args = dict( fake_conn_arg=mock.sentinel.conn_val) conn = self._base_vol_driver._connector 
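    # Worked example for the VolumeOps.bytes_per_sec_to_iops test earlier in
    # this module: Hyper-V storage QoS counts normalized IOPS in 8 KiB units,
    # so 15 KiB/s maps to ceil(15 / 8) = 2 normalized IOPS, which matches the
    # expected value in that test.  The exact unit size and ceiling rounding
    # are inferred from the test expectations rather than asserted here.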
self.assertEqual(mock_conn_factory.return_value, conn) mock_conn_factory.assert_called_once_with( protocol=mock.sentinel.protocol, root_helper=None, use_multipath=CONF.hyperv.use_multipath_io, device_scan_attempts=CONF.hyperv.mounted_disk_query_retry_count, device_scan_interval=( CONF.hyperv.mounted_disk_query_retry_interval), **self._base_vol_driver._extra_connector_args) def test_connect_volume(self): conn_info = get_fake_connection_info() dev_info = self._base_vol_driver.connect_volume(conn_info) expected_dev_info = self._conn.connect_volume.return_value self.assertEqual(expected_dev_info, dev_info) self._conn.connect_volume.assert_called_once_with( conn_info['data']) def test_disconnect_volume(self): conn_info = get_fake_connection_info() self._base_vol_driver.disconnect_volume(conn_info) self._conn.disconnect_volume.assert_called_once_with( conn_info['data']) @mock.patch.object(volumeops.BaseVolumeDriver, '_get_disk_res_path') def _test_get_disk_resource_path_by_conn_info(self, mock_get_disk_res_path, disk_found=True): conn_info = get_fake_connection_info() mock_vol_paths = [mock.sentinel.disk_path] if disk_found else [] self._conn.get_volume_paths.return_value = mock_vol_paths if disk_found: disk_res_path = self._base_vol_driver.get_disk_resource_path( conn_info) self._conn.get_volume_paths.assert_called_once_with( conn_info['data']) self.assertEqual(mock_get_disk_res_path.return_value, disk_res_path) mock_get_disk_res_path.assert_called_once_with( mock.sentinel.disk_path) else: self.assertRaises(exception.DiskNotFound, self._base_vol_driver.get_disk_resource_path, conn_info) def test_get_existing_disk_res_path(self): self._test_get_disk_resource_path_by_conn_info() def test_get_unfound_disk_res_path(self): self._test_get_disk_resource_path_by_conn_info(disk_found=False) def test_get_block_dev_res_path(self): self._base_vol_driver._is_block_dev = True mock_get_dev_number = ( self._diskutils.get_device_number_from_device_name) mock_get_dev_number.return_value = mock.sentinel.dev_number self._vmutils.get_mounted_disk_by_drive_number.return_value = ( mock.sentinel.disk_path) disk_path = self._base_vol_driver._get_disk_res_path( mock.sentinel.dev_name) mock_get_dev_number.assert_called_once_with(mock.sentinel.dev_name) self._vmutils.get_mounted_disk_by_drive_number.assert_called_once_with( mock.sentinel.dev_number) self.assertEqual(mock.sentinel.disk_path, disk_path) def test_get_virt_disk_res_path(self): # For virtual disk images, we expect the resource path to be the # actual image path, as opposed to passthrough disks, in which case we # need the Msvm_DiskDrive resource path when attaching it to a VM. 
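    # Sketch of the branching exercised by the two _get_disk_res_path tests
    # here, reconstructed from the assertions (illustrative only; attribute
    # names are approximate, not the driver source):
    #
    #     def _get_disk_res_path(self, disk_path):
    #         if self._is_block_dev:
    #             dev_number = (
    #                 self._diskutils.get_device_number_from_device_name(
    #                     disk_path))
    #             return self._vmutils.get_mounted_disk_by_drive_number(
    #                 dev_number)
    #         # virtual disk image: the image path itself is the resource path
    #         return disk_path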
self._base_vol_driver._is_block_dev = False path = self._base_vol_driver._get_disk_res_path( mock.sentinel.disk_path) self.assertEqual(mock.sentinel.disk_path, path) @mock.patch.object(volumeops.BaseVolumeDriver, '_get_disk_res_path') @mock.patch.object(volumeops.BaseVolumeDriver, '_get_disk_ctrl_and_slot') @mock.patch.object(volumeops.BaseVolumeDriver, 'connect_volume') def _test_attach_volume(self, mock_connect_volume, mock_get_disk_ctrl_and_slot, mock_get_disk_res_path, is_block_dev=True): connection_info = get_fake_connection_info() self._base_vol_driver._is_block_dev = is_block_dev mock_connect_volume.return_value = dict(path=mock.sentinel.raw_path) mock_get_disk_res_path.return_value = ( mock.sentinel.disk_path) mock_get_disk_ctrl_and_slot.return_value = ( mock.sentinel.ctrller_path, mock.sentinel.slot) self._base_vol_driver.attach_volume( connection_info=connection_info, instance_name=mock.sentinel.instance_name, disk_bus=mock.sentinel.disk_bus) if is_block_dev: self._vmutils.attach_volume_to_controller.assert_called_once_with( mock.sentinel.instance_name, mock.sentinel.ctrller_path, mock.sentinel.slot, mock.sentinel.disk_path, serial=connection_info['serial']) else: self._vmutils.attach_drive.assert_called_once_with( mock.sentinel.instance_name, mock.sentinel.disk_path, mock.sentinel.ctrller_path, mock.sentinel.slot) mock_get_disk_res_path.assert_called_once_with( mock.sentinel.raw_path) mock_get_disk_ctrl_and_slot.assert_called_once_with( mock.sentinel.instance_name, mock.sentinel.disk_bus) def test_attach_volume_image_file(self): self._test_attach_volume(is_block_dev=False) def test_attach_volume_block_dev(self): self._test_attach_volume(is_block_dev=True) def test_detach_volume_planned_vm(self): self._base_vol_driver.detach_volume(mock.sentinel.connection_info, mock.sentinel.inst_name) self._vmutils.detach_vm_disk.assert_not_called() @mock.patch.object(volumeops.BaseVolumeDriver, 'get_disk_resource_path') def test_detach_volume(self, mock_get_disk_resource_path): self._migrutils.planned_vm_exists.return_value = False connection_info = get_fake_connection_info() self._base_vol_driver.detach_volume(connection_info, mock.sentinel.instance_name) mock_get_disk_resource_path.assert_called_once_with( connection_info) self._vmutils.detach_vm_disk.assert_called_once_with( mock.sentinel.instance_name, mock_get_disk_resource_path.return_value, is_physical=self._base_vol_driver._is_block_dev) def test_get_disk_ctrl_and_slot_ide(self): ctrl, slot = self._base_vol_driver._get_disk_ctrl_and_slot( mock.sentinel.instance_name, disk_bus=constants.CTRL_TYPE_IDE) expected_ctrl = self._vmutils.get_vm_ide_controller.return_value expected_slot = 0 self._vmutils.get_vm_ide_controller.assert_called_once_with( mock.sentinel.instance_name, 0) self.assertEqual(expected_ctrl, ctrl) self.assertEqual(expected_slot, slot) def test_get_disk_ctrl_and_slot_scsi(self): ctrl, slot = self._base_vol_driver._get_disk_ctrl_and_slot( mock.sentinel.instance_name, disk_bus=constants.CTRL_TYPE_SCSI) expected_ctrl = self._vmutils.get_vm_scsi_controller.return_value expected_slot = ( self._vmutils.get_free_controller_slot.return_value) self._vmutils.get_vm_scsi_controller.assert_called_once_with( mock.sentinel.instance_name) self._vmutils.get_free_controller_slot( self._vmutils.get_vm_scsi_controller.return_value) self.assertEqual(expected_ctrl, ctrl) self.assertEqual(expected_slot, slot) def test_set_disk_qos_specs(self): # This base method is a noop, we'll just make sure # it doesn't error out. 
self._base_vol_driver.set_disk_qos_specs( mock.sentinel.conn_info, mock.sentinel.disk_qos_spes) class ISCSIVolumeDriverTestCase(test_base.HyperVBaseTestCase): """Unit tests for Hyper-V BaseVolumeDriver class.""" def test_extra_conn_args(self): fake_iscsi_initiator = ( 'PCI\\VEN_1077&DEV_2031&SUBSYS_17E8103C&REV_02\\' '4&257301f0&0&0010_0') self.flags(iscsi_initiator_list=[fake_iscsi_initiator], group='hyperv') expected_extra_conn_args = dict( initiator_list=[fake_iscsi_initiator]) vol_driver = volumeops.ISCSIVolumeDriver() self.assertEqual(expected_extra_conn_args, vol_driver._extra_connector_args) class SMBFSVolumeDriverTestCase(test_base.HyperVBaseTestCase): """Unit tests for the Hyper-V SMBFSVolumeDriver class.""" _FAKE_EXPORT_PATH = '//ip/share/' _FAKE_CONN_INFO = get_fake_connection_info(export=_FAKE_EXPORT_PATH) def setUp(self): super(SMBFSVolumeDriverTestCase, self).setUp() self._volume_driver = volumeops.SMBFSVolumeDriver() self._volume_driver._conn = mock.Mock() self._conn = self._volume_driver._conn def test_get_export_path(self): export_path = self._volume_driver._get_export_path( self._FAKE_CONN_INFO) expected_path = self._FAKE_EXPORT_PATH.replace('/', '\\') self.assertEqual(expected_path, export_path) @mock.patch.object(volumeops.BaseVolumeDriver, 'attach_volume') def test_attach_volume(self, mock_attach): # The tested method will just apply a lock before calling # the superclass method. self._volume_driver.attach_volume( self._FAKE_CONN_INFO, mock.sentinel.instance_name, disk_bus=mock.sentinel.disk_bus) mock_attach.assert_called_once_with( self._FAKE_CONN_INFO, mock.sentinel.instance_name, disk_bus=mock.sentinel.disk_bus) @mock.patch.object(volumeops.BaseVolumeDriver, 'detach_volume') def test_detach_volume(self, mock_detach): self._volume_driver.detach_volume( self._FAKE_CONN_INFO, instance_name=mock.sentinel.instance_name) mock_detach.assert_called_once_with( self._FAKE_CONN_INFO, instance_name=mock.sentinel.instance_name) @mock.patch.object(volumeops.VolumeOps, 'bytes_per_sec_to_iops') @mock.patch.object(volumeops.VolumeOps, 'validate_qos_specs') @mock.patch.object(volumeops.BaseVolumeDriver, 'get_disk_resource_path') def test_set_disk_qos_specs(self, mock_get_disk_path, mock_validate_qos_specs, mock_bytes_per_sec_to_iops): fake_total_bytes_sec = 8 fake_total_iops_sec = 1 storage_qos_specs = {'total_bytes_sec': fake_total_bytes_sec} expected_supported_specs = ['total_iops_sec', 'total_bytes_sec'] mock_set_qos_specs = self._volume_driver._vmutils.set_disk_qos_specs mock_bytes_per_sec_to_iops.return_value = fake_total_iops_sec mock_get_disk_path.return_value = mock.sentinel.disk_path self._volume_driver.set_disk_qos_specs(self._FAKE_CONN_INFO, storage_qos_specs) mock_validate_qos_specs.assert_called_once_with( storage_qos_specs, expected_supported_specs) mock_bytes_per_sec_to_iops.assert_called_once_with( fake_total_bytes_sec) mock_get_disk_path.assert_called_once_with(self._FAKE_CONN_INFO) mock_set_qos_specs.assert_called_once_with( mock.sentinel.disk_path, fake_total_iops_sec) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5904677 nova-21.2.4/nova/tests/unit/virt/image/0000775000175000017500000000000000000000000017720 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/image/__init__.py0000664000175000017500000000000000000000000022017 
0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/image/test_model.py0000664000175000017500000000625000000000000022434 0ustar00zuulzuul00000000000000# # Copyright (C) 2014 Red Hat, Inc # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from nova import exception from nova import test from nova.virt.image import model as imgmodel class ImageTest(test.NoDBTestCase): def test_local_file_image(self): img = imgmodel.LocalFileImage( "/var/lib/libvirt/images/demo.qcow2", imgmodel.FORMAT_QCOW2) self.assertIsInstance(img, imgmodel.Image) self.assertEqual("/var/lib/libvirt/images/demo.qcow2", img.path) self.assertEqual(imgmodel.FORMAT_QCOW2, img.format) def test_local_file_bad_format(self): self.assertRaises(exception.InvalidImageFormat, imgmodel.LocalFileImage, "/var/lib/libvirt/images/demo.qcow2", "jpeg") def test_local_block_image(self): img = imgmodel.LocalBlockImage( "/dev/volgroup/demovol") self.assertIsInstance(img, imgmodel.Image) self.assertEqual("/dev/volgroup/demovol", img.path) self.assertEqual(imgmodel.FORMAT_RAW, img.format) def test_rbd_image(self): img = imgmodel.RBDImage( "demo", "openstack", "cthulu", "braanes", ["rbd.example.org"]) self.assertIsInstance(img, imgmodel.Image) self.assertEqual("demo", img.name) self.assertEqual("openstack", img.pool) self.assertEqual("cthulu", img.user) self.assertEqual("braanes", img.password) self.assertEqual(["rbd.example.org"], img.servers) self.assertEqual(imgmodel.FORMAT_RAW, img.format) def test_equality(self): img1 = imgmodel.LocalFileImage( "/var/lib/libvirt/images/demo.qcow2", imgmodel.FORMAT_QCOW2) img2 = imgmodel.LocalFileImage( "/var/lib/libvirt/images/demo.qcow2", imgmodel.FORMAT_QCOW2) img3 = imgmodel.LocalFileImage( "/var/lib/libvirt/images/demo.qcow2", imgmodel.FORMAT_RAW) img4 = imgmodel.LocalImage( "/dev/mapper/vol", imgmodel.FORMAT_RAW) img5 = imgmodel.LocalBlockImage( "/dev/mapper/vol") self.assertEqual(img1, img1) self.assertEqual(img1, img2) self.assertEqual(img1.__hash__(), img2.__hash__()) self.assertNotEqual(img1, img3) self.assertNotEqual(img4, img5) def test_stringify(self): img = imgmodel.RBDImage( "demo", "openstack", "cthulu", "braanes", ["rbd.example.org"]) msg = str(img) self.assertEqual(msg.find("braanes"), -1) self.assertNotEqual(msg.find("***"), -1) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5904677 nova-21.2.4/nova/tests/unit/virt/ironic/0000775000175000017500000000000000000000000020121 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/ironic/__init__.py0000664000175000017500000000000000000000000022220 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 
nova-21.2.4/nova/tests/unit/virt/ironic/test_client_wrapper.py
# Copyright 2014 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from ironicclient import client as ironic_client
from ironicclient import exc as ironic_exception
from keystoneauth1 import discover as ksa_disc
import keystoneauth1.session
import mock
from oslo_config import cfg

import nova.conf
from nova import exception
from nova import test
from nova.tests.unit.virt.ironic import utils as ironic_utils
from nova.virt.ironic import client_wrapper

CONF = nova.conf.CONF

FAKE_CLIENT = ironic_utils.FakeClient()


def get_new_fake_client(*args, **kwargs):
    return ironic_utils.FakeClient()


class IronicClientWrapperTestCase(test.NoDBTestCase):

    def setUp(self):
        super(IronicClientWrapperTestCase, self).setUp()
        self.ironicclient = client_wrapper.IronicClientWrapper()
        # Do not waste time sleeping
        cfg.CONF.set_override('api_retry_interval', 0, 'ironic')
        cfg.CONF.set_override('region_name', 'RegionOne', 'ironic')
        get_ksa_adapter_p = mock.patch('nova.utils.get_ksa_adapter')
        self.addCleanup(get_ksa_adapter_p.stop)
        self.get_ksa_adapter = get_ksa_adapter_p.start()
        get_auth_plugin_p = mock.patch('nova.virt.ironic.client_wrapper.'
'IronicClientWrapper._get_auth_plugin') self.addCleanup(get_auth_plugin_p.stop) self.get_auth_plugin = get_auth_plugin_p.start() @mock.patch.object(client_wrapper.IronicClientWrapper, '_multi_getattr') @mock.patch.object(client_wrapper.IronicClientWrapper, '_get_client') def test_call_good_no_args(self, mock_get_client, mock_multi_getattr): mock_get_client.return_value = FAKE_CLIENT self.ironicclient.call("node.list") mock_get_client.assert_called_once_with(retry_on_conflict=True) mock_multi_getattr.assert_called_once_with(FAKE_CLIENT, "node.list") mock_multi_getattr.return_value.assert_called_once_with() @mock.patch.object(client_wrapper.IronicClientWrapper, '_multi_getattr') @mock.patch.object(client_wrapper.IronicClientWrapper, '_get_client') def test_call_good_with_args(self, mock_get_client, mock_multi_getattr): mock_get_client.return_value = FAKE_CLIENT self.ironicclient.call("node.list", 'test', associated=True) mock_get_client.assert_called_once_with(retry_on_conflict=True) mock_multi_getattr.assert_called_once_with(FAKE_CLIENT, "node.list") mock_multi_getattr.return_value.assert_called_once_with( 'test', associated=True) @mock.patch.object(keystoneauth1.session, 'Session') @mock.patch.object(ironic_client, 'get_client') def test__get_client_session(self, mock_ir_cli, mock_session): """An Ironicclient is called with a keystoneauth1 Session""" mock_session.return_value = 'session' ironicclient = client_wrapper.IronicClientWrapper() # dummy call to have _get_client() called ironicclient.call("node.list") self.get_ksa_adapter.assert_called_once_with( 'baremetal', ksa_auth=self.get_auth_plugin.return_value, ksa_session='session', min_version=(1, 0), max_version=(1, ksa_disc.LATEST)) expected = {'session': 'session', 'max_retries': CONF.ironic.api_max_retries, 'retry_interval': CONF.ironic.api_retry_interval, 'os_ironic_api_version': ['1.46', '1.38'], 'endpoint': self.get_ksa_adapter.return_value.get_endpoint.return_value, 'interface': ['internal', 'public']} mock_ir_cli.assert_called_once_with(1, **expected) @mock.patch.object(keystoneauth1.session, 'Session') @mock.patch.object(ironic_client, 'get_client') def test__get_client_session_service_not_found(self, mock_ir_cli, mock_session): """Validate behavior when get_endpoint_data raises.""" mock_session.return_value = 'session' self.get_ksa_adapter.side_effect = ( exception.ConfGroupForServiceTypeNotFound(stype='baremetal')) ironicclient = client_wrapper.IronicClientWrapper() # dummy call to have _get_client() called ironicclient.call("node.list") self.get_ksa_adapter.assert_called_once_with( 'baremetal', ksa_auth=self.get_auth_plugin.return_value, ksa_session='session', min_version=(1, 0), max_version=(1, ksa_disc.LATEST)) # When get_endpoint_data raises any ServiceNotFound, None is returned. 
expected = {'session': 'session', 'max_retries': CONF.ironic.api_max_retries, 'retry_interval': CONF.ironic.api_retry_interval, 'os_ironic_api_version': ['1.46', '1.38'], 'endpoint': None, 'region_name': CONF.ironic.region_name, 'interface': ['internal', 'public']} mock_ir_cli.assert_called_once_with(1, **expected) @mock.patch.object(keystoneauth1.session, 'Session') @mock.patch.object(ironic_client, 'get_client') def test__get_client_and_valid_interfaces(self, mock_ir_cli, mock_session): """Confirm explicit setting of valid_interfaces.""" mock_session.return_value = 'session' endpoint = 'https://baremetal.example.com/endpoint' self.get_ksa_adapter.return_value.get_endpoint.return_value = endpoint self.flags(endpoint_override=endpoint, group='ironic') self.flags(valid_interfaces='admin', group='ironic') ironicclient = client_wrapper.IronicClientWrapper() # dummy call to have _get_client() called ironicclient.call("node.list") expected = {'session': 'session', 'max_retries': CONF.ironic.api_max_retries, 'retry_interval': CONF.ironic.api_retry_interval, 'os_ironic_api_version': ['1.46', '1.38'], 'endpoint': endpoint, 'interface': ['admin']} mock_ir_cli.assert_called_once_with(1, **expected) @mock.patch.object(client_wrapper.IronicClientWrapper, '_multi_getattr') @mock.patch.object(client_wrapper.IronicClientWrapper, '_get_client') def test_call_fail_exception(self, mock_get_client, mock_multi_getattr): test_obj = mock.Mock() test_obj.side_effect = ironic_exception.HTTPNotFound mock_multi_getattr.return_value = test_obj mock_get_client.return_value = FAKE_CLIENT self.assertRaises(ironic_exception.HTTPNotFound, self.ironicclient.call, "node.list") @mock.patch.object(ironic_client, 'get_client') def test__get_client_unauthorized(self, mock_get_client): mock_get_client.side_effect = ironic_exception.Unauthorized self.assertRaises(exception.NovaException, self.ironicclient._get_client) @mock.patch.object(ironic_client, 'get_client') def test__get_client_unexpected_exception(self, mock_get_client): mock_get_client.side_effect = ironic_exception.ConnectionRefused self.assertRaises(ironic_exception.ConnectionRefused, self.ironicclient._get_client) def test__multi_getattr_good(self): response = self.ironicclient._multi_getattr(FAKE_CLIENT, "node.list") self.assertEqual(FAKE_CLIENT.node.list, response) def test__multi_getattr_fail(self): self.assertRaises(AttributeError, self.ironicclient._multi_getattr, FAKE_CLIENT, "nonexistent") @mock.patch.object(ironic_client, 'get_client') def test__client_is_cached(self, mock_get_client): mock_get_client.side_effect = get_new_fake_client ironicclient = client_wrapper.IronicClientWrapper() first_client = ironicclient._get_client() second_client = ironicclient._get_client() self.assertEqual(id(first_client), id(second_client)) @mock.patch.object(ironic_client, 'get_client') def test_call_uses_cached_client(self, mock_get_client): mock_get_client.side_effect = get_new_fake_client ironicclient = client_wrapper.IronicClientWrapper() for n in range(0, 4): ironicclient.call("node.list") self.assertEqual(1, mock_get_client.call_count) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/virt/ironic/test_driver.py0000664000175000017500000054251000000000000023034 0ustar00zuulzuul00000000000000# Copyright 2015 Red Hat, Inc. # Copyright 2013 Hewlett-Packard Development Company, L.P. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for the ironic driver.""" import fixtures from ironicclient import exc as ironic_exception import mock from openstack import exceptions as sdk_exc from oslo_config import cfg from oslo_service import loopingcall from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import uuidutils import six from testtools import matchers from tooz import hashring as hash_ring from nova.api.metadata import base as instance_metadata from nova.api.openstack import common from nova import block_device from nova.compute import power_state as nova_states from nova.compute import provider_tree from nova.compute import task_states from nova.compute import vm_states from nova.console import type as console_type from nova import context as nova_context from nova import exception from nova.network import model as network_model from nova import objects from nova import servicegroup from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.unit import fake_block_device from nova.tests.unit import fake_instance from nova.tests.unit import matchers as nova_matchers from nova.tests.unit import utils from nova.tests.unit.virt.ironic import utils as ironic_utils from nova import utils as nova_utils from nova.virt import block_device as driver_block_device from nova.virt import configdrive from nova.virt import driver from nova.virt import fake from nova.virt import hardware from nova.virt.ironic import client_wrapper as cw from nova.virt.ironic import driver as ironic_driver from nova.virt.ironic import ironic_states CONF = cfg.CONF FAKE_CLIENT = ironic_utils.FakeClient() SENTINEL = object() class FakeClientWrapper(cw.IronicClientWrapper): def _get_client(self, retry_on_conflict=True): return FAKE_CLIENT class FakeLoopingCall(object): def __init__(self): self.wait = mock.MagicMock() self.start = mock.MagicMock() self.start.return_value = self def _get_properties(): return {'cpus': 2, 'memory_mb': 512, 'local_gb': 10, 'cpu_arch': 'x86_64', 'capabilities': None} def _get_instance_info(): return {'vcpus': 1, 'memory_mb': 1024, 'local_gb': 10} def _get_stats(): return {'cpu_arch': 'x86_64'} def _get_cached_node(**kw): """Return a fake node object representative of objects in the cache.""" return ironic_utils.get_test_node(**kw) def _make_compute_service(hostname): return objects.Service(host=hostname) FAKE_CLIENT_WRAPPER = FakeClientWrapper() @mock.patch.object(cw, 'IronicClientWrapper', lambda *_: FAKE_CLIENT_WRAPPER) class IronicDriverTestCase(test.NoDBTestCase): @mock.patch.object(cw, 'IronicClientWrapper', lambda *_: FAKE_CLIENT_WRAPPER) @mock.patch.object(ironic_driver.IronicDriver, '_refresh_hash_ring') @mock.patch.object(servicegroup, 'API', autospec=True) def setUp(self, mock_sg, mock_hash): super(IronicDriverTestCase, self).setUp() self.driver = ironic_driver.IronicDriver(None) self.driver.virtapi = fake.FakeVirtAPI() self.mock_conn = self.useFixture( fixtures.MockPatchObject(self.driver, '_ironic_connection')).mock self.ctx = 
nova_context.get_admin_context() # TODO(dustinc): Remove once all id/uuid usages are normalized. self.instance_id = uuidutils.generate_uuid() self.instance_uuid = self.instance_id self.ptree = provider_tree.ProviderTree() self.ptree.new_root(mock.sentinel.nodename, mock.sentinel.nodename) # mock retries configs to avoid sleeps and make tests run quicker CONF.set_default('api_max_retries', default=1, group='ironic') CONF.set_default('api_retry_interval', default=0, group='ironic') def test_public_api_signatures(self): self.assertPublicAPISignatures(driver.ComputeDriver(None), self.driver) def test_validate_driver_loading(self): self.assertIsInstance(self.driver, ironic_driver.IronicDriver) def test_driver_capabilities(self): self.assertFalse(self.driver.capabilities['has_imagecache'], 'Driver capabilities for \'has_imagecache\'' 'is invalid') self.assertFalse(self.driver.capabilities['supports_evacuate'], 'Driver capabilities for \'supports_evacuate\'' 'is invalid') self.assertFalse(self.driver.capabilities[ 'supports_migrate_to_same_host'], 'Driver capabilities for ' '\'supports_migrate_to_same_host\' is invalid') def test__get_hypervisor_type(self): self.assertEqual('ironic', self.driver._get_hypervisor_type()) def test__get_hypervisor_version(self): self.assertEqual(1, self.driver._get_hypervisor_version()) def test__get_node(self): node_id = uuidutils.generate_uuid() node = _get_cached_node(id=node_id) self.mock_conn.get_node.return_value = node actual = self.driver._get_node(node_id) self.assertEqual(node, actual) self.mock_conn.get_node.assert_called_once_with( node_id, fields=ironic_driver._NODE_FIELDS) def test__get_node_not_found(self): node_id = uuidutils.generate_uuid() self.mock_conn.get_node.side_effect = sdk_exc.ResourceNotFound self.assertRaises(sdk_exc.ResourceNotFound, self.driver._get_node, node_id) self.mock_conn.get_node.assert_called_once_with( node_id, fields=ironic_driver._NODE_FIELDS) def test__validate_instance_and_node(self): node_id = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' node = _get_cached_node(id=node_id, instance_id=self.instance_id) instance = fake_instance.fake_instance_obj(self.ctx, uuid=self.instance_id) self.mock_conn.nodes.return_value = iter([node]) result = self.driver._validate_instance_and_node(instance) self.assertEqual(result.uuid, node_id) self.mock_conn.nodes.assert_called_once_with( instance_id=instance.uuid, fields=ironic_driver._NODE_FIELDS) def test__validate_instance_and_node_failed(self): self.mock_conn.nodes.return_value = iter([]) instance = fake_instance.fake_instance_obj(self.ctx, uuid=self.instance_id) self.assertRaises(exception.InstanceNotFound, self.driver._validate_instance_and_node, instance) self.mock_conn.nodes.assert_called_once_with( instance_id=instance.uuid, fields=ironic_driver._NODE_FIELDS) def test__validate_instance_and_node_unexpected_many_nodes(self): self.mock_conn.nodes.return_value = iter(['1', '2', '3']) instance = fake_instance.fake_instance_obj(self.ctx, uuid=self.instance_id) self.assertRaises(exception.NovaException, self.driver._validate_instance_and_node, instance) self.mock_conn.nodes.assert_called_once_with( instance_id=instance.uuid, fields=ironic_driver._NODE_FIELDS) @mock.patch.object(objects.Instance, 'refresh') @mock.patch.object(ironic_driver.IronicDriver, '_validate_instance_and_node') def test__wait_for_active_pass(self, fake_validate, fake_refresh): instance = fake_instance.fake_instance_obj(self.ctx, uuid=uuidutils.generate_uuid()) node = _get_cached_node(provision_state=ironic_states.DEPLOYING) 
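# DEPLOYING is not a terminal state, so _wait_for_active() should return
# quietly and let the FixedIntervalLoopingCall poll again.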
fake_validate.return_value = node self.driver._wait_for_active(instance) fake_validate.assert_called_once_with(instance) fake_refresh.assert_called_once_with() @mock.patch.object(objects.Instance, 'refresh') @mock.patch.object(ironic_driver.IronicDriver, '_validate_instance_and_node') def test__wait_for_active_done(self, fake_validate, fake_refresh): instance = fake_instance.fake_instance_obj(self.ctx, uuid=uuidutils.generate_uuid()) node = _get_cached_node(provision_state=ironic_states.ACTIVE) fake_validate.return_value = node self.assertRaises(loopingcall.LoopingCallDone, self.driver._wait_for_active, instance) fake_validate.assert_called_once_with(instance) fake_refresh.assert_called_once_with() @mock.patch.object(objects.Instance, 'refresh') @mock.patch.object(ironic_driver.IronicDriver, '_validate_instance_and_node') def test__wait_for_active_from_error(self, fake_validate, fake_refresh): instance = fake_instance.fake_instance_obj(self.ctx, uuid=uuidutils.generate_uuid(), vm_state=vm_states.ERROR, task_state=task_states.REBUILD_SPAWNING) node = ironic_utils.get_test_node( provision_state=ironic_states.ACTIVE) fake_validate.return_value = node self.assertRaises(loopingcall.LoopingCallDone, self.driver._wait_for_active, instance) fake_validate.assert_called_once_with(instance) fake_refresh.assert_called_once_with() @mock.patch.object(objects.Instance, 'refresh') @mock.patch.object(ironic_driver.IronicDriver, '_validate_instance_and_node') def test__wait_for_active_fail(self, fake_validate, fake_refresh): instance = fake_instance.fake_instance_obj(self.ctx, uuid=uuidutils.generate_uuid()) node = _get_cached_node(provision_state=ironic_states.DEPLOYFAIL) fake_validate.return_value = node self.assertRaises(exception.InstanceDeployFailure, self.driver._wait_for_active, instance) fake_validate.assert_called_once_with(instance) fake_refresh.assert_called_once_with() @mock.patch.object(objects.Instance, 'refresh') @mock.patch.object(ironic_driver.IronicDriver, '_validate_instance_and_node') def _wait_for_active_abort(self, instance_params, fake_validate, fake_refresh): instance = fake_instance.fake_instance_obj(self.ctx, uuid=uuidutils.generate_uuid(), **instance_params) self.assertRaises(exception.InstanceDeployFailure, self.driver._wait_for_active, instance) # Assert _validate_instance_and_node wasn't called self.assertFalse(fake_validate.called) fake_refresh.assert_called_once_with() def test__wait_for_active_abort_deleting(self): self._wait_for_active_abort({'task_state': task_states.DELETING}) def test__wait_for_active_abort_deleted(self): self._wait_for_active_abort({'vm_state': vm_states.DELETED}) def test__wait_for_active_abort_error(self): self._wait_for_active_abort({'task_state': task_states.SPAWNING, 'vm_state': vm_states.ERROR}) @mock.patch.object(ironic_driver.IronicDriver, '_validate_instance_and_node') def test__wait_for_power_state_pass(self, fake_validate): instance = fake_instance.fake_instance_obj(self.ctx, uuid=uuidutils.generate_uuid()) node = _get_cached_node(target_power_state=ironic_states.POWER_OFF) fake_validate.return_value = node self.driver._wait_for_power_state(instance, 'fake message') self.assertTrue(fake_validate.called) @mock.patch.object(ironic_driver.IronicDriver, '_validate_instance_and_node') def test__wait_for_power_state_ok(self, fake_validate): instance = fake_instance.fake_instance_obj(self.ctx, uuid=uuidutils.generate_uuid()) node = _get_cached_node(target_power_state=ironic_states.NOSTATE) fake_validate.return_value = node 
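# A target_power_state of NOSTATE means the power transition finished,
# so the wait loop signals completion by raising LoopingCallDone.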
self.assertRaises(loopingcall.LoopingCallDone, self.driver._wait_for_power_state, instance, 'fake message') self.assertTrue(fake_validate.called) def test__node_resource_with_instance_uuid(self): node_uuid = uuidutils.generate_uuid() props = _get_properties() stats = _get_stats() node = _get_cached_node( uuid=node_uuid, instance_uuid=self.instance_uuid, properties=props, resource_class='foo') result = self.driver._node_resource(node) wantkeys = ["uuid", "hypervisor_hostname", "hypervisor_type", "hypervisor_version", "cpu_info", "vcpus", "vcpus_used", "memory_mb", "memory_mb_used", "local_gb", "local_gb_used", "disk_available_least", "supported_instances", "stats", "numa_topology", "resource_class"] wantkeys.sort() gotkeys = sorted(result.keys()) self.assertEqual(wantkeys, gotkeys) self.assertEqual(0, result['vcpus']) self.assertEqual(0, result['vcpus_used']) self.assertEqual(0, result['memory_mb']) self.assertEqual(0, result['memory_mb_used']) self.assertEqual(0, result['local_gb']) self.assertEqual(0, result['local_gb_used']) self.assertEqual(node_uuid, result['uuid']) self.assertEqual(node_uuid, result['hypervisor_hostname']) self.assertEqual(stats, result['stats']) self.assertEqual('foo', result['resource_class']) self.assertIsNone(result['numa_topology']) def test__node_resource_canonicalizes_arch(self): node_uuid = uuidutils.generate_uuid() props = _get_properties() props['cpu_arch'] = 'i386' node = _get_cached_node(uuid=node_uuid, properties=props) result = self.driver._node_resource(node) self.assertEqual('i686', result['supported_instances'][0][0]) self.assertEqual('i386', result['stats']['cpu_arch']) def test__node_resource_unknown_arch(self): node_uuid = uuidutils.generate_uuid() props = _get_properties() del props['cpu_arch'] node = _get_cached_node(uuid=node_uuid, properties=props) result = self.driver._node_resource(node) self.assertEqual([], result['supported_instances']) def test__node_resource_exposes_capabilities(self): props = _get_properties() props['capabilities'] = 'test:capability, test2:value2' node = _get_cached_node(properties=props) result = self.driver._node_resource(node) stats = result['stats'] self.assertIsNone(stats.get('capabilities')) self.assertEqual('capability', stats.get('test')) self.assertEqual('value2', stats.get('test2')) def test__node_resource_no_capabilities(self): props = _get_properties() props['capabilities'] = None node = _get_cached_node(properties=props) result = self.driver._node_resource(node) self.assertIsNone(result['stats'].get('capabilities')) def test__node_resource_malformed_capabilities(self): props = _get_properties() props['capabilities'] = 'test:capability,:no_key,no_val:' node = _get_cached_node(properties=props) result = self.driver._node_resource(node) stats = result['stats'] self.assertEqual('capability', stats.get('test')) def test__node_resource_available(self): node_uuid = uuidutils.generate_uuid() props = _get_properties() stats = _get_stats() node = _get_cached_node( uuid=node_uuid, instance_uuid=None, power_state=ironic_states.POWER_OFF, properties=props, provision_state=ironic_states.AVAILABLE) result = self.driver._node_resource(node) self.assertEqual(0, result['vcpus']) self.assertEqual(0, result['vcpus_used']) self.assertEqual(0, result['memory_mb']) self.assertEqual(0, result['memory_mb_used']) self.assertEqual(0, result['local_gb']) self.assertEqual(0, result['local_gb_used']) self.assertEqual(node_uuid, result['hypervisor_hostname']) self.assertEqual(stats, result['stats']) 
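# The next two tests check that _node_resource() reports zeroed legacy
# vcpus/memory/disk figures when the node is unavailable or already in use.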
@mock.patch.object(ironic_driver.IronicDriver, '_node_resources_unavailable') def test__node_resource_unavailable_node_res(self, mock_res_unavail): mock_res_unavail.return_value = True node_uuid = uuidutils.generate_uuid() props = _get_properties() stats = _get_stats() node = _get_cached_node( uuid=node_uuid, instance_uuid=None, properties=props) result = self.driver._node_resource(node) self.assertEqual(0, result['vcpus']) self.assertEqual(0, result['vcpus_used']) self.assertEqual(0, result['memory_mb']) self.assertEqual(0, result['memory_mb_used']) self.assertEqual(0, result['local_gb']) self.assertEqual(0, result['local_gb_used']) self.assertEqual(node_uuid, result['hypervisor_hostname']) self.assertEqual(stats, result['stats']) @mock.patch.object(ironic_driver.IronicDriver, '_node_resources_used') def test__node_resource_used_node_res(self, mock_res_used): mock_res_used.return_value = True node_uuid = uuidutils.generate_uuid() props = _get_properties() stats = _get_stats() instance_info = _get_instance_info() node = _get_cached_node( uuid=node_uuid, instance_uuid=uuidutils.generate_uuid(), provision_state=ironic_states.ACTIVE, properties=props, instance_info=instance_info) result = self.driver._node_resource(node) self.assertEqual(0, result['vcpus']) self.assertEqual(0, result['vcpus_used']) self.assertEqual(0, result['memory_mb']) self.assertEqual(0, result['memory_mb_used']) self.assertEqual(0, result['local_gb']) self.assertEqual(0, result['local_gb_used']) self.assertEqual(node_uuid, result['hypervisor_hostname']) self.assertEqual(stats, result['stats']) @mock.patch.object(ironic_driver.LOG, 'warning') def test__parse_node_properties(self, mock_warning): props = _get_properties() node = _get_cached_node( uuid=uuidutils.generate_uuid(), properties=props) # raw_cpu_arch is included because extra_specs filters do not # canonicalized the arch props['raw_cpu_arch'] = props['cpu_arch'] parsed = self.driver._parse_node_properties(node) self.assertEqual(props, parsed) # Assert we didn't log any warning since all properties are # correct self.assertFalse(mock_warning.called) @mock.patch.object(ironic_driver.LOG, 'warning') def test__parse_node_properties_bad_values(self, mock_warning): props = _get_properties() props['cpus'] = 'bad-value' props['memory_mb'] = 'bad-value' props['local_gb'] = 'bad-value' props['cpu_arch'] = 'bad-value' node = _get_cached_node( uuid=uuidutils.generate_uuid(), properties=props) # raw_cpu_arch is included because extra_specs filters do not # canonicalized the arch props['raw_cpu_arch'] = props['cpu_arch'] parsed = self.driver._parse_node_properties(node) expected_props = props.copy() expected_props['cpus'] = 0 expected_props['memory_mb'] = 0 expected_props['local_gb'] = 0 expected_props['cpu_arch'] = None self.assertEqual(expected_props, parsed) self.assertEqual(4, mock_warning.call_count) @mock.patch.object(ironic_driver.LOG, 'warning') def test__parse_node_properties_canonicalize_cpu_arch(self, mock_warning): props = _get_properties() props['cpu_arch'] = 'amd64' node = _get_cached_node( uuid=uuidutils.generate_uuid(), properties=props) # raw_cpu_arch is included because extra_specs filters do not # canonicalized the arch props['raw_cpu_arch'] = props['cpu_arch'] parsed = self.driver._parse_node_properties(node) expected_props = props.copy() # Make sure it cpu_arch was canonicalized expected_props['cpu_arch'] = 'x86_64' self.assertEqual(expected_props, parsed) # Assert we didn't log any warning since all properties are # correct 
self.assertFalse(mock_warning.called) def test_instance_exists(self): instance = fake_instance.fake_instance_obj(self.ctx, uuid=self.instance_id) node = ironic_utils.get_test_node(fields=ironic_driver._NODE_FIELDS, id=uuidutils.generate_uuid(), instance_id=instance.uuid) self.mock_conn.nodes.return_value = iter([node]) self.assertTrue(self.driver.instance_exists(instance)) self.mock_conn.nodes.assert_called_once_with( instance_id=instance.uuid, fields=ironic_driver._NODE_FIELDS) def test_instance_exists_fail(self): instance = fake_instance.fake_instance_obj(self.ctx, uuid=self.instance_id) self.mock_conn.nodes.return_value = iter([]) self.assertFalse(self.driver.instance_exists(instance)) self.mock_conn.nodes.assert_called_once_with( instance_id=instance.uuid, fields=ironic_driver._NODE_FIELDS) def test__get_node_list(self): test_nodes = [ironic_utils.get_test_node( fields=ironic_driver._NODE_FIELDS, id=uuidutils.generate_uuid()) for _ in range(3)] self.mock_conn.nodes.return_value = iter(test_nodes) response = self.driver._get_node_list(associated=True) self.assertIsInstance(response, list) self.assertEqual(test_nodes, response) self.mock_conn.nodes.assert_called_once_with(associated=True) def test__get_node_list_generator(self): test_nodes = [ironic_utils.get_test_node( fields=ironic_driver._NODE_FIELDS, id=uuidutils.generate_uuid()) for _ in range(3)] self.mock_conn.nodes.return_value = iter(test_nodes) response = self.driver._get_node_list(return_generator=True, associated=True) # NOTE(dustinc): This is simpler than importing a module just to get # this one type... self.assertIsInstance(response, type(iter([]))) self.assertEqual(test_nodes, list(response)) self.mock_conn.nodes.assert_called_once_with(associated=True) def test__get_node_list_fail(self): self.mock_conn.nodes.side_effect = sdk_exc.InvalidResourceQuery() self.assertRaises(exception.VirtDriverNotReady, self.driver._get_node_list) self.mock_conn.nodes.side_effect = Exception() self.assertRaises(exception.VirtDriverNotReady, self.driver._get_node_list) @mock.patch.object(objects.Instance, 'get_by_uuid') def test_list_instances(self, mock_inst_by_uuid): nodes = [] instances = [] for i in range(2): uuid = uuidutils.generate_uuid() instances.append(fake_instance.fake_instance_obj(self.ctx, id=i, uuid=uuid)) nodes.append(ironic_utils.get_test_node(instance_id=uuid, fields=['instance_id'])) mock_inst_by_uuid.side_effect = instances self.mock_conn.nodes.return_value = iter(nodes) response = self.driver.list_instances() self.mock_conn.nodes.assert_called_with(associated=True, fields=['instance_uuid']) expected_calls = [mock.call(mock.ANY, instances[0].uuid), mock.call(mock.ANY, instances[1].uuid)] mock_inst_by_uuid.assert_has_calls(expected_calls) self.assertEqual(['instance-00000000', 'instance-00000001'], sorted(response)) # NOTE(dustinc) This test ensures we use instance_uuid not instance_id in # 'fields' when calling ironic. 
@mock.patch.object(objects.Instance, 'get_by_uuid') def test_list_instances_uses_instance_uuid(self, mock_inst_by_uuid): self.driver.list_instances() self.mock_conn.nodes.assert_called_with(associated=True, fields=['instance_uuid']) @mock.patch.object(objects.Instance, 'get_by_uuid') def test_list_instances_fail(self, mock_inst_by_uuid): self.mock_conn.nodes.side_effect = exception.NovaException self.assertRaises(exception.VirtDriverNotReady, self.driver.list_instances) self.mock_conn.nodes.assert_called_with(associated=True, fields=['instance_uuid']) self.assertFalse(mock_inst_by_uuid.called) def test_list_instance_uuids(self): num_nodes = 2 nodes = [] for n in range(num_nodes): nodes.append(ironic_utils.get_test_node( instance_id=uuidutils.generate_uuid(), fields=['instance_id'])) self.mock_conn.nodes.return_value = iter(nodes) uuids = self.driver.list_instance_uuids() self.mock_conn.nodes.assert_called_with(associated=True, fields=['instance_uuid']) expected = [n.instance_id for n in nodes] self.assertEqual(sorted(expected), sorted(uuids)) # NOTE(dustinc) This test ensures we use instance_uuid not instance_id in # 'fields' when calling ironic. @mock.patch.object(objects.Instance, 'get_by_uuid') def test_list_instance_uuids_uses_instance_uuid(self, mock_inst_by_uuid): self.driver.list_instance_uuids() self.mock_conn.nodes.assert_called_with(associated=True, fields=['instance_uuid']) @mock.patch.object(objects.InstanceList, 'get_uuids_by_host') @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type') def test_node_is_available_empty_cache_empty_list(self, mock_services, mock_instances): node = _get_cached_node() self.mock_conn.get_node.return_value = node self.mock_conn.nodes.return_value = iter([]) self.assertTrue(self.driver.node_is_available(node.id)) self.mock_conn.get_node.assert_called_with( node.id, fields=ironic_driver._NODE_FIELDS) self.mock_conn.nodes.assert_called_with( fields=ironic_driver._NODE_FIELDS) self.mock_conn.get_node.side_effect = sdk_exc.ResourceNotFound self.assertFalse(self.driver.node_is_available(node.id)) @mock.patch.object(objects.InstanceList, 'get_uuids_by_host') @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type') def test_node_is_available_empty_cache(self, mock_services, mock_instances): node = _get_cached_node() self.mock_conn.get_node.return_value = node self.mock_conn.nodes.return_value = iter([node]) self.assertTrue(self.driver.node_is_available(node.uuid)) self.mock_conn.nodes.assert_called_with( fields=ironic_driver._NODE_FIELDS) self.assertEqual(0, self.mock_conn.get_node.call_count) @mock.patch.object(FAKE_CLIENT.node, 'list') @mock.patch.object(FAKE_CLIENT.node, 'get') @mock.patch.object(objects.InstanceList, 'get_uuids_by_host') @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type') def test_node_is_available_with_cache(self, mock_services, mock_instances, mock_get, mock_list): node = _get_cached_node() mock_get.return_value = node mock_list.return_value = [node] # populate the cache self.driver.get_available_nodes(refresh=True) # prove that zero calls are made after populating cache mock_list.reset_mock() self.assertTrue(self.driver.node_is_available(node.uuid)) self.assertEqual(0, mock_list.call_count) self.assertEqual(0, mock_get.call_count) def test__node_resources_unavailable(self): node_dicts = [ # a node in maintenance /w no instance and power OFF {'uuid': uuidutils.generate_uuid(), 'maintenance': True, 'power_state': ironic_states.POWER_OFF, 'provision_state': ironic_states.AVAILABLE}, # a 
node in maintenance /w no instance and ERROR power state {'uuid': uuidutils.generate_uuid(), 'maintenance': True, 'power_state': ironic_states.ERROR, 'provision_state': ironic_states.AVAILABLE}, # a node not in maintenance /w no instance and bad power state {'uuid': uuidutils.generate_uuid(), 'power_state': ironic_states.NOSTATE, 'provision_state': ironic_states.AVAILABLE}, # a node not in maintenance or bad power state, bad provision state {'uuid': uuidutils.generate_uuid(), 'power_state': ironic_states.POWER_ON, 'provision_state': ironic_states.MANAGEABLE}, # a node in cleaning {'uuid': uuidutils.generate_uuid(), 'power_state': ironic_states.POWER_ON, 'provision_state': ironic_states.CLEANING}, # a node in cleaning, waiting for a clean step to finish {'uuid': uuidutils.generate_uuid(), 'power_state': ironic_states.POWER_ON, 'provision_state': ironic_states.CLEANWAIT}, # a node in deleting {'uuid': uuidutils.generate_uuid(), 'power_state': ironic_states.POWER_ON, 'provision_state': ironic_states.DELETING}, # a node in deleted {'uuid': uuidutils.generate_uuid(), 'power_state': ironic_states.POWER_ON, 'provision_state': ironic_states.DELETED}, ] for n in node_dicts: node = _get_cached_node(**n) self.assertTrue(self.driver._node_resources_unavailable(node)) for ok_state in (ironic_states.AVAILABLE, ironic_states.NOSTATE): # these are both ok and should present as available as they # have no instance_uuid avail_node = _get_cached_node( power_state=ironic_states.POWER_OFF, provision_state=ok_state) unavailable = self.driver._node_resources_unavailable(avail_node) self.assertFalse(unavailable) def test__node_resources_used(self): node_dicts = [ # a node in maintenance /w instance and active {'uuid': uuidutils.generate_uuid(), 'instance_uuid': uuidutils.generate_uuid(), 'provision_state': ironic_states.ACTIVE}, ] for n in node_dicts: node = _get_cached_node(**n) self.assertTrue(self.driver._node_resources_used(node)) unused_node = _get_cached_node( instance_uuid=None, provision_state=ironic_states.AVAILABLE) self.assertFalse(self.driver._node_resources_used(unused_node)) @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type') @mock.patch.object(objects.InstanceList, 'get_uuids_by_host') def test_get_available_nodes(self, mock_gi, mock_services): instance = fake_instance.fake_instance_obj(self.ctx, uuid=self.instance_uuid) mock_gi.return_value = [instance.uuid] node_dicts = [ # a node in maintenance /w no instance and power OFF {'uuid': uuidutils.generate_uuid(), 'maintenance': True, 'power_state': ironic_states.POWER_OFF, 'expected': True}, # a node /w instance on this compute daemon and power ON {'uuid': uuidutils.generate_uuid(), 'instance_uuid': self.instance_uuid, 'power_state': ironic_states.POWER_ON, 'expected': True}, # a node /w instance on another compute daemon and power ON {'uuid': uuidutils.generate_uuid(), 'instance_uuid': uuidutils.generate_uuid(), 'power_state': ironic_states.POWER_ON, 'expected': False}, # a node not in maintenance /w no instance and bad power state {'uuid': uuidutils.generate_uuid(), 'power_state': ironic_states.ERROR, 'expected': True}, ] nodes = [_get_cached_node(**n) for n in node_dicts] self.mock_conn.nodes.return_value = iter(nodes) available_nodes = self.driver.get_available_nodes() mock_gi.assert_called_once_with(mock.ANY, CONF.host) expected_uuids = [n['uuid'] for n in node_dicts if n['expected']] self.assertEqual(sorted(expected_uuids), sorted(available_nodes)) @mock.patch.object(ironic_driver.IronicDriver, '_node_resources_used', 
return_value=False) @mock.patch.object(ironic_driver.IronicDriver, '_node_resources_unavailable', return_value=False) @mock.patch.object(ironic_driver.IronicDriver, '_node_resource') @mock.patch.object(ironic_driver.IronicDriver, '_node_from_cache') def test_update_provider_tree_no_rc(self, mock_nfc, mock_nr, mock_res_unavail, mock_res_used): """Ensure that when node.resource_class is missing, that we return the legacy VCPU, MEMORY_MB and DISK_GB resources for inventory. """ mock_nr.return_value = { 'vcpus': 24, 'vcpus_used': 0, 'memory_mb': 1024, 'memory_mb_used': 0, 'local_gb': 100, 'local_gb_used': 0, 'resource_class': None, } self.assertRaises(exception.NoResourceClass, self.driver.update_provider_tree, self.ptree, mock.sentinel.nodename) mock_nfc.assert_called_once_with(mock.sentinel.nodename) mock_nr.assert_called_once_with(mock_nfc.return_value) mock_res_used.assert_called_once_with(mock_nfc.return_value) mock_res_unavail.assert_called_once_with(mock_nfc.return_value) @mock.patch.object(ironic_driver.IronicDriver, '_node_resources_used', return_value=False) @mock.patch.object(ironic_driver.IronicDriver, '_node_resources_unavailable', return_value=False) @mock.patch.object(ironic_driver.IronicDriver, '_node_resource') @mock.patch.object(ironic_driver.IronicDriver, '_node_from_cache') def test_update_provider_tree_with_rc(self, mock_nfc, mock_nr, mock_res_unavail, mock_res_used): """Ensure that when node.resource_class is present, that we return the legacy VCPU, MEMORY_MB and DISK_GB resources for inventory in addition to the custom resource class inventory record. """ mock_nr.return_value = { 'vcpus': 24, 'vcpus_used': 0, 'memory_mb': 1024, 'memory_mb_used': 0, 'local_gb': 100, 'local_gb_used': 0, 'resource_class': 'iron-nfv', } self.driver.update_provider_tree(self.ptree, mock.sentinel.nodename) expected = { 'CUSTOM_IRON_NFV': { 'total': 1, 'reserved': 0, 'min_unit': 1, 'max_unit': 1, 'step_size': 1, 'allocation_ratio': 1.0, }, } mock_nfc.assert_called_once_with(mock.sentinel.nodename) mock_nr.assert_called_once_with(mock_nfc.return_value) mock_res_used.assert_called_once_with(mock_nfc.return_value) mock_res_unavail.assert_called_once_with(mock_nfc.return_value) result = self.ptree.data(mock.sentinel.nodename).inventory self.assertEqual(expected, result) @mock.patch.object(ironic_driver.IronicDriver, '_node_resources_used', return_value=False) @mock.patch.object(ironic_driver.IronicDriver, '_node_resources_unavailable', return_value=False) @mock.patch.object(ironic_driver.IronicDriver, '_node_resource') @mock.patch.object(ironic_driver.IronicDriver, '_node_from_cache') def test_update_provider_tree_only_rc(self, mock_nfc, mock_nr, mock_res_unavail, mock_res_used): """Ensure that when node.resource_class is present, that we return the legacy VCPU, MEMORY_MB and DISK_GB resources for inventory in addition to the custom resource class inventory record. 
""" mock_nr.return_value = { 'vcpus': 0, 'vcpus_used': 0, 'memory_mb': 0, 'memory_mb_used': 0, 'local_gb': 0, 'local_gb_used': 0, 'resource_class': 'iron-nfv', } self.driver.update_provider_tree(self.ptree, mock.sentinel.nodename) expected = { 'CUSTOM_IRON_NFV': { 'total': 1, 'reserved': 0, 'min_unit': 1, 'max_unit': 1, 'step_size': 1, 'allocation_ratio': 1.0, }, } mock_nfc.assert_called_once_with(mock.sentinel.nodename) mock_nr.assert_called_once_with(mock_nfc.return_value) mock_res_used.assert_called_once_with(mock_nfc.return_value) mock_res_unavail.assert_called_once_with(mock_nfc.return_value) result = self.ptree.data(mock.sentinel.nodename).inventory self.assertEqual(expected, result) @mock.patch.object(ironic_driver.IronicDriver, '_node_resources_used', return_value=True) @mock.patch.object(ironic_driver.IronicDriver, '_node_resources_unavailable', return_value=False) @mock.patch.object(ironic_driver.IronicDriver, '_node_resource') @mock.patch.object(ironic_driver.IronicDriver, '_node_from_cache') def test_update_provider_tree_with_rc_occupied(self, mock_nfc, mock_nr, mock_res_unavail, mock_res_used): """Ensure that when a node is used, we report the inventory matching the consumed resources. """ mock_nr.return_value = { 'vcpus': 24, 'vcpus_used': 24, 'memory_mb': 1024, 'memory_mb_used': 1024, 'local_gb': 100, 'local_gb_used': 100, 'resource_class': 'iron-nfv', } self.driver.update_provider_tree(self.ptree, mock.sentinel.nodename) expected = { 'CUSTOM_IRON_NFV': { 'total': 1, 'reserved': 0, 'min_unit': 1, 'max_unit': 1, 'step_size': 1, 'allocation_ratio': 1.0, }, } mock_nfc.assert_called_once_with(mock.sentinel.nodename) mock_nr.assert_called_once_with(mock_nfc.return_value) mock_res_used.assert_called_once_with(mock_nfc.return_value) self.assertFalse(mock_res_unavail.called) result = self.ptree.data(mock.sentinel.nodename).inventory self.assertEqual(expected, result) @mock.patch.object(ironic_driver.IronicDriver, '_node_resources_used', return_value=False) @mock.patch.object(ironic_driver.IronicDriver, '_node_resources_unavailable', return_value=True) @mock.patch.object(ironic_driver.IronicDriver, '_node_resource') @mock.patch.object(ironic_driver.IronicDriver, '_node_from_cache') def test_update_provider_tree_disabled_node(self, mock_nfc, mock_nr, mock_res_unavail, mock_res_used): """Ensure that when a node is disabled, that update_provider_tree() sets inventory with reserved amounts equal to the total amounts. 
""" mock_nr.return_value = { 'vcpus': 24, 'vcpus_used': 0, 'memory_mb': 1024, 'memory_mb_used': 0, 'local_gb': 100, 'local_gb_used': 0, 'resource_class': 'iron-nfv', } self.driver.update_provider_tree(self.ptree, mock.sentinel.nodename) expected = { 'CUSTOM_IRON_NFV': { 'total': 1, 'reserved': 1, 'min_unit': 1, 'max_unit': 1, 'step_size': 1, 'allocation_ratio': 1.0, }, } mock_nfc.assert_called_once_with(mock.sentinel.nodename) mock_nr.assert_called_once_with(mock_nfc.return_value) mock_res_unavail.assert_called_once_with(mock_nfc.return_value) mock_res_used.assert_called_once_with(mock_nfc.return_value) result = self.ptree.data(mock.sentinel.nodename).inventory self.assertEqual(expected, result) @mock.patch.object(ironic_driver.IronicDriver, '_node_resources_used', return_value=True) @mock.patch.object(ironic_driver.IronicDriver, '_node_resources_unavailable', return_value=False) @mock.patch.object(ironic_driver.IronicDriver, '_node_resource') @mock.patch.object(ironic_driver.IronicDriver, '_node_from_cache') def test_update_provider_tree_no_traits(self, mock_nfc, mock_nr, mock_res_unavail, mock_res_used): """Ensure that when the node has no traits, we set no traits.""" mock_nr.return_value = { 'vcpus': 24, 'vcpus_used': 24, 'memory_mb': 1024, 'memory_mb_used': 1024, 'local_gb': 100, 'local_gb_used': 100, 'resource_class': 'iron-nfv', } mock_nfc.return_value = _get_cached_node(uuid=mock.sentinel.nodename) self.driver.update_provider_tree(self.ptree, mock.sentinel.nodename) mock_nfc.assert_called_once_with(mock.sentinel.nodename) mock_nr.assert_called_once_with(mock_nfc.return_value) mock_res_used.assert_called_once_with(mock_nfc.return_value) self.assertFalse(mock_res_unavail.called) result = self.ptree.data(mock.sentinel.nodename).traits self.assertEqual(set(), result) @mock.patch.object(ironic_driver.IronicDriver, '_node_resources_used', return_value=True) @mock.patch.object(ironic_driver.IronicDriver, '_node_resources_unavailable', return_value=False) @mock.patch.object(ironic_driver.IronicDriver, '_node_resource') @mock.patch.object(ironic_driver.IronicDriver, '_node_from_cache') def test_update_provider_tree_with_traits(self, mock_nfc, mock_nr, mock_res_unavail, mock_res_used): """Ensure that when the node has traits, we set the traits.""" mock_nr.return_value = { 'vcpus': 24, 'vcpus_used': 24, 'memory_mb': 1024, 'memory_mb_used': 1024, 'local_gb': 100, 'local_gb_used': 100, 'resource_class': 'iron-nfv', } traits = ['trait1', 'trait2'] mock_nfc.return_value = _get_cached_node( uuid=mock.sentinel.nodename, traits=traits) self.driver.update_provider_tree(self.ptree, mock.sentinel.nodename) mock_nfc.assert_called_once_with(mock.sentinel.nodename) mock_nr.assert_called_once_with(mock_nfc.return_value) mock_res_used.assert_called_once_with(mock_nfc.return_value) self.assertFalse(mock_res_unavail.called) result = self.ptree.data(mock.sentinel.nodename).traits self.assertEqual(set(traits), result) # A different set of traits - we should replace (for now). 
traits = ['trait1', 'trait7', 'trait42'] mock_nfc.return_value = _get_cached_node( uuid=mock.sentinel.nodename, traits=traits) self.driver.update_provider_tree(self.ptree, mock.sentinel.nodename) result = self.ptree.data(mock.sentinel.nodename).traits self.assertEqual(set(traits), result) @mock.patch.object(FAKE_CLIENT.node, 'list') @mock.patch.object(objects.InstanceList, 'get_uuids_by_host') @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type') @mock.patch.object(ironic_driver.IronicDriver, '_node_resource') def test_get_available_resource(self, mock_nr, mock_services, mock_instances, mock_list): node = _get_cached_node() node_2 = _get_cached_node(id=uuidutils.generate_uuid()) fake_resource = 'fake-resource' self.mock_conn.get_node.return_value = node # ensure cache gets populated without the node we want mock_list.return_value = [node_2] mock_nr.return_value = fake_resource result = self.driver.get_available_resource(node.id) self.assertEqual(fake_resource, result) mock_nr.assert_called_once_with(node) self.mock_conn.get_node.assert_called_once_with( node.id, fields=ironic_driver._NODE_FIELDS) @mock.patch.object(objects.InstanceList, 'get_uuids_by_host') @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type') @mock.patch.object(ironic_driver.IronicDriver, '_node_resource') def test_get_available_resource_with_cache(self, mock_nr, mock_services, mock_instances): node = _get_cached_node() fake_resource = 'fake-resource' self.mock_conn.nodes.return_value = iter([node]) mock_nr.return_value = fake_resource # populate the cache self.driver.get_available_nodes(refresh=True) self.mock_conn.nodes.reset_mock() result = self.driver.get_available_resource(node.uuid) self.assertEqual(fake_resource, result) self.assertEqual(0, self.mock_conn.nodes.call_count) self.assertEqual(0, self.mock_conn.get_node.call_count) mock_nr.assert_called_once_with(node) @mock.patch.object(ironic_driver.IronicDriver, '_get_node_list') @mock.patch.object(objects.InstanceList, 'get_uuids_by_host') @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type') def test_get_info(self, mock_svc_by_hv, mock_uuids_by_host, mock_get_node_list): properties = {'memory_mb': 512, 'cpus': 2} power_state = ironic_states.POWER_ON node = _get_cached_node( instance_uuid=self.instance_uuid, properties=properties, power_state=power_state) self.mock_conn.nodes.return_value = iter([node]) mock_svc_by_hv.return_value = [] mock_get_node_list.return_value = [] # ironic_states.POWER_ON should be mapped to # nova_states.RUNNING instance = fake_instance.fake_instance_obj('fake-context', uuid=self.instance_uuid) mock_uuids_by_host.return_value = [instance.uuid] result = self.driver.get_info(instance) self.assertEqual(hardware.InstanceInfo(state=nova_states.RUNNING), result) self.mock_conn.nodes.assert_called_once_with( instance_id=instance.uuid, fields=ironic_driver._NODE_FIELDS) @mock.patch.object(ironic_driver.IronicDriver, '_get_node_list') @mock.patch.object(objects.InstanceList, 'get_uuids_by_host') @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type') @mock.patch.object(FAKE_CLIENT.node, 'get_by_instance_uuid') def test_get_info_cached(self, mock_gbiu, mock_svc_by_hv, mock_uuids_by_host, mock_get_node_list): properties = {'memory_mb': 512, 'cpus': 2} power_state = ironic_states.POWER_ON node = _get_cached_node( instance_uuid=self.instance_uuid, properties=properties, power_state=power_state) mock_gbiu.return_value = node mock_svc_by_hv.return_value = [] mock_get_node_list.return_value 
= [node] # ironic_states.POWER_ON should be mapped to # nova_states.RUNNING instance = fake_instance.fake_instance_obj('fake-context', uuid=self.instance_uuid) mock_uuids_by_host.return_value = [instance.uuid] result = self.driver.get_info(instance) self.assertEqual(hardware.InstanceInfo(state=nova_states.RUNNING), result) mock_gbiu.assert_not_called() @mock.patch.object(ironic_driver.IronicDriver, '_get_node_list') @mock.patch.object(objects.InstanceList, 'get_uuids_by_host') @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type') def test_get_info_not_found_in_cache(self, mock_svc_by_hv, mock_uuids_by_host, mock_get_node_list): properties = {'memory_mb': 512, 'cpus': 2} power_state = ironic_states.POWER_ON node = _get_cached_node( instance_uuid=self.instance_uuid, properties=properties, power_state=power_state) node2 = _get_cached_node() self.mock_conn.nodes.return_value = iter([node]) mock_svc_by_hv.return_value = [] mock_get_node_list.return_value = [node2] # ironic_states.POWER_ON should be mapped to # nova_states.RUNNING instance = fake_instance.fake_instance_obj('fake-context', uuid=self.instance_uuid) mock_uuids_by_host.return_value = [instance.uuid] result = self.driver.get_info(instance) self.assertEqual(hardware.InstanceInfo(state=nova_states.RUNNING), result) self.mock_conn.nodes.assert_called_once_with( instance_id=instance.uuid, fields=ironic_driver._NODE_FIELDS) @mock.patch.object(ironic_driver.IronicDriver, '_get_node_list') @mock.patch.object(objects.InstanceList, 'get_uuids_by_host') @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type') def test_get_info_skip_cache(self, mock_svc_by_hv, mock_uuids_by_host, mock_get_node_list): properties = {'memory_mb': 512, 'cpus': 2} power_state = ironic_states.POWER_ON node = _get_cached_node( instance_uuid=self.instance_uuid, properties=properties, power_state=power_state) self.mock_conn.nodes.return_value = iter([node]) mock_svc_by_hv.return_value = [] mock_get_node_list.return_value = [node] # ironic_states.POWER_ON should be mapped to # nova_states.RUNNING instance = fake_instance.fake_instance_obj('fake-context', uuid=self.instance_uuid) mock_uuids_by_host.return_value = [instance.uuid] result = self.driver.get_info(instance, use_cache=False) self.assertEqual(hardware.InstanceInfo(state=nova_states.RUNNING), result) # verify we hit the ironic API for fresh data self.mock_conn.nodes.assert_called_once_with( instance_id=instance.uuid, fields=ironic_driver._NODE_FIELDS) @mock.patch.object(objects.InstanceList, 'get_uuids_by_host') @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type') def test_get_info_http_not_found(self, mock_svc_by_hv, mock_uuids_by_host): self.mock_conn.get_node.side_effect = sdk_exc.ResourceNotFound mock_svc_by_hv.return_value = [] self.mock_conn.nodes.return_value = iter([]) instance = fake_instance.fake_instance_obj( self.ctx, uuid=uuidutils.generate_uuid()) mock_uuids_by_host.return_value = [instance] mock_uuids_by_host.return_value = [instance.uuid] result = self.driver.get_info(instance) self.assertEqual(hardware.InstanceInfo(state=nova_states.NOSTATE), result) self.mock_conn.nodes.assert_has_calls([mock.call( instance_id=instance.uuid, fields=ironic_driver._NODE_FIELDS)]) @mock.patch.object(objects.Instance, 'save') @mock.patch.object(loopingcall, 'FixedIntervalLoopingCall') @mock.patch.object(FAKE_CLIENT, 'node') @mock.patch.object(ironic_driver.IronicDriver, '_add_volume_target_info') @mock.patch.object(ironic_driver.IronicDriver, '_wait_for_active') 
@mock.patch.object(ironic_driver.IronicDriver, '_add_instance_info_to_node') def _test_spawn(self, mock_aiitn, mock_wait_active, mock_avti, mock_node, mock_looping, mock_save): node_id = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' node = _get_cached_node(driver='fake', id=node_id) instance = fake_instance.fake_instance_obj(self.ctx, node=node_id) fake_flavor = objects.Flavor(ephemeral_gb=0) instance.flavor = fake_flavor self.mock_conn.get_node.return_value = node mock_node.validate.return_value = ironic_utils.get_test_validation() mock_node.get_by_instance_uuid.return_value = node mock_node.set_provision_state.return_value = mock.MagicMock() fake_looping_call = FakeLoopingCall() mock_looping.return_value = fake_looping_call image_meta = ironic_utils.get_test_image_meta() self.driver.spawn(self.ctx, instance, image_meta, [], None, {}) self.mock_conn.get_node.assert_called_once_with( node_id, fields=ironic_driver._NODE_FIELDS) mock_node.validate.assert_called_once_with(node_id) mock_aiitn.assert_called_once_with(node, instance, test.MatchType(objects.ImageMeta), fake_flavor, block_device_info=None) mock_avti.assert_called_once_with(self.ctx, instance, None) mock_node.set_provision_state.assert_called_once_with( node_id, 'active', configdrive=mock.ANY) self.assertIsNone(instance.default_ephemeral_device) self.assertFalse(mock_save.called) mock_looping.assert_called_once_with(mock_wait_active, instance) fake_looping_call.start.assert_called_once_with( interval=CONF.ironic.api_retry_interval) fake_looping_call.wait.assert_called_once_with() @mock.patch.object(ironic_driver.IronicDriver, '_generate_configdrive') @mock.patch.object(configdrive, 'required_by') def test_spawn(self, mock_required_by, mock_configdrive): mock_required_by.return_value = False self._test_spawn() # assert configdrive was not generated self.assertFalse(mock_configdrive.called) @mock.patch.object(ironic_driver.IronicDriver, '_generate_configdrive') @mock.patch.object(configdrive, 'required_by') def test_spawn_with_configdrive(self, mock_required_by, mock_configdrive): mock_required_by.return_value = True self._test_spawn() # assert configdrive was generated mock_configdrive.assert_called_once_with(mock.ANY, mock.ANY, mock.ANY, mock.ANY, extra_md={}, files=[]) @mock.patch.object(configdrive, 'required_by') @mock.patch.object(loopingcall, 'FixedIntervalLoopingCall') @mock.patch.object(FAKE_CLIENT, 'node') @mock.patch.object(ironic_driver.IronicDriver, 'destroy') @mock.patch.object(ironic_driver.IronicDriver, '_add_volume_target_info') @mock.patch.object(ironic_driver.IronicDriver, '_wait_for_active') @mock.patch.object(ironic_driver.IronicDriver, '_add_instance_info_to_node') def test_spawn_destroyed_after_failure(self, mock_aiitn, mock_wait_active, mock_avti, mock_destroy, mock_node, mock_looping, mock_required_by): mock_required_by.return_value = False node_uuid = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' node = _get_cached_node(driver='fake', uuid=node_uuid) fake_flavor = objects.Flavor(ephemeral_gb=0) instance = fake_instance.fake_instance_obj(self.ctx, node=node_uuid) instance.flavor = fake_flavor mock_node.get.return_value = node mock_node.validate.return_value = ironic_utils.get_test_validation() mock_node.get_by_instance_uuid.return_value = node mock_node.set_provision_state.return_value = mock.MagicMock() fake_looping_call = FakeLoopingCall() mock_looping.return_value = fake_looping_call deploy_exc = exception.InstanceDeployFailure('foo') fake_looping_call.wait.side_effect = deploy_exc self.assertRaises( 
exception.InstanceDeployFailure, self.driver.spawn, self.ctx, instance, None, [], None, {}) self.assertEqual(0, mock_destroy.call_count) def _test_add_instance_info_to_node(self, mock_update=None, mock_call=None): node = _get_cached_node(driver='fake') instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) image_meta = ironic_utils.get_test_image_meta() flavor = ironic_utils.get_test_flavor() instance.flavor = flavor self.driver._add_instance_info_to_node(node, instance, image_meta, flavor) expected_patch = [{'path': '/instance_info/image_source', 'op': 'add', 'value': image_meta.id}, {'path': '/instance_info/root_gb', 'op': 'add', 'value': str(instance.flavor.root_gb)}, {'path': '/instance_info/swap_mb', 'op': 'add', 'value': str(flavor['swap'])}, {'path': '/instance_info/display_name', 'value': instance.display_name, 'op': 'add'}, {'path': '/instance_info/vcpus', 'op': 'add', 'value': str(instance.flavor.vcpus)}, {'path': '/instance_info/nova_host_id', 'op': 'add', 'value': instance.host}, {'path': '/instance_info/memory_mb', 'op': 'add', 'value': str(instance.flavor.memory_mb)}, {'path': '/instance_info/local_gb', 'op': 'add', 'value': str(node.properties.get('local_gb', 0))}] if mock_call is not None: # assert call() is invoked with retry_on_conflict False to # avoid bug #1341420 mock_call.assert_called_once_with('node.update', node.uuid, expected_patch, retry_on_conflict=False) if mock_update is not None: mock_update.assert_called_once_with(node.uuid, expected_patch) @mock.patch.object(FAKE_CLIENT.node, 'update') def test__add_instance_info_to_node_mock_update(self, mock_update): self._test_add_instance_info_to_node(mock_update=mock_update) @mock.patch.object(cw.IronicClientWrapper, 'call') def test__add_instance_info_to_node_mock_call(self, mock_call): self._test_add_instance_info_to_node(mock_call=mock_call) @mock.patch.object(FAKE_CLIENT.node, 'update') def test__add_instance_info_to_node_fail(self, mock_update): mock_update.side_effect = ironic_exception.BadRequest() node = _get_cached_node(driver='fake') instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) image_meta = ironic_utils.get_test_image_meta() flavor = ironic_utils.get_test_flavor() self.assertRaises(exception.InstanceDeployFailure, self.driver._add_instance_info_to_node, node, instance, image_meta, flavor) def _test_remove_instance_info_from_node(self, mock_update): node = _get_cached_node(driver='fake') instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) self.driver._remove_instance_info_from_node(node, instance) expected_patch = [{'path': '/instance_info', 'op': 'remove'}, {'path': '/instance_uuid', 'op': 'remove'}] mock_update.assert_called_once_with(node.uuid, expected_patch) @mock.patch.object(FAKE_CLIENT.node, 'update') def test_remove_instance_info_from_node(self, mock_update): self._test_remove_instance_info_from_node(mock_update) @mock.patch.object(FAKE_CLIENT.node, 'update') def test_remove_instance_info_from_node_fail(self, mock_update): mock_update.side_effect = ironic_exception.BadRequest() self._test_remove_instance_info_from_node(mock_update) def _create_fake_block_device_info(self): bdm_dict = block_device.BlockDeviceDict({ 'id': 1, 'instance_uuid': uuids.instance, 'device_name': '/dev/sda', 'source_type': 'volume', 'volume_id': 'fake-volume-id-1', 'connection_info': '{"data":"fake_data",\ "driver_volume_type":"fake_type"}', 'boot_index': 0, 'destination_type': 'volume' }) driver_bdm = driver_block_device.DriverVolumeBlockDevice( 
fake_block_device.fake_bdm_object(self.ctx, bdm_dict)) return { 'block_device_mapping': [driver_bdm] } @mock.patch.object(FAKE_CLIENT.volume_target, 'create') def test__add_volume_target_info(self, mock_create): node = ironic_utils.get_test_node(driver='fake') instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) block_device_info = self._create_fake_block_device_info() self.driver._add_volume_target_info(self.ctx, instance, block_device_info) expected_volume_type = 'fake_type' expected_properties = 'fake_data' expected_boot_index = 0 mock_create.assert_called_once_with(node_uuid=instance.node, volume_type=expected_volume_type, properties=expected_properties, boot_index=expected_boot_index, volume_id='fake-volume-id-1') @mock.patch.object(FAKE_CLIENT.volume_target, 'create') def test__add_volume_target_info_empty_bdms(self, mock_create): node = ironic_utils.get_test_node(driver='fake') instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) self.driver._add_volume_target_info(self.ctx, instance, None) self.assertFalse(mock_create.called) @mock.patch.object(FAKE_CLIENT.volume_target, 'create') def test__add_volume_target_info_failures(self, mock_create): node = ironic_utils.get_test_node(driver='fake') instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) block_device_info = self._create_fake_block_device_info() exceptions = [ ironic_exception.BadRequest(), ironic_exception.Conflict(), ] for e in exceptions: mock_create.side_effect = e self.assertRaises(exception.InstanceDeployFailure, self.driver._add_volume_target_info, self.ctx, instance, block_device_info) @mock.patch.object(FAKE_CLIENT.volume_target, 'delete') @mock.patch.object(FAKE_CLIENT.node, 'list_volume_targets') def test__cleanup_volume_target_info(self, mock_lvt, mock_delete): node = ironic_utils.get_test_node(driver='fake') instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) mock_lvt.return_value = [ironic_utils.get_test_volume_target( uuid='fake_uuid')] self.driver._cleanup_volume_target_info(instance) expected_volume_target_id = 'fake_uuid' mock_delete.assert_called_once_with(expected_volume_target_id) @mock.patch.object(FAKE_CLIENT.volume_target, 'delete') @mock.patch.object(FAKE_CLIENT.node, 'list_volume_targets') def test__cleanup_volume_target_info_empty_targets(self, mock_lvt, mock_delete): node = ironic_utils.get_test_node(driver='fake') instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) mock_lvt.return_value = [] self.driver._cleanup_volume_target_info(instance) self.assertFalse(mock_delete.called) @mock.patch.object(FAKE_CLIENT.volume_target, 'delete') @mock.patch.object(FAKE_CLIENT.node, 'list_volume_targets') def test__cleanup_volume_target_info_not_found(self, mock_lvt, mock_delete): node = ironic_utils.get_test_node(driver='fake') instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) mock_lvt.return_value = [ ironic_utils.get_test_volume_target(uuid='fake_uuid1'), ironic_utils.get_test_volume_target(uuid='fake_uuid2'), ] mock_delete.side_effect = [ironic_exception.NotFound('not found'), None] self.driver._cleanup_volume_target_info(instance) self.assertEqual([mock.call('fake_uuid1'), mock.call('fake_uuid2')], mock_delete.call_args_list) @mock.patch.object(FAKE_CLIENT.volume_target, 'delete') @mock.patch.object(FAKE_CLIENT.node, 'list_volume_targets') def test__cleanup_volume_target_info_bad_request(self, mock_lvt, mock_delete): node = ironic_utils.get_test_node(driver='fake') instance = 
fake_instance.fake_instance_obj(self.ctx, node=node.uuid) mock_lvt.return_value = [ ironic_utils.get_test_volume_target(uuid='fake_uuid1'), ironic_utils.get_test_volume_target(uuid='fake_uuid2'), ] mock_delete.side_effect = [ironic_exception.BadRequest('error'), None] self.driver._cleanup_volume_target_info(instance) self.assertEqual([mock.call('fake_uuid1'), mock.call('fake_uuid2')], mock_delete.call_args_list) @mock.patch.object(configdrive, 'required_by') @mock.patch.object(FAKE_CLIENT, 'node') @mock.patch.object(ironic_driver.IronicDriver, '_add_volume_target_info') def test_spawn_node_driver_validation_fail(self, mock_avti, mock_node, mock_required_by): mock_required_by.return_value = False node_id = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' node = _get_cached_node(driver='fake', id=node_id) flavor = ironic_utils.get_test_flavor() instance = fake_instance.fake_instance_obj(self.ctx, node=node_id) instance.flavor = flavor mock_node.validate.return_value = ironic_utils.get_test_validation( power={'result': False}, deploy={'result': False}, storage={'result': False}) self.mock_conn.get_node.return_value = node image_meta = ironic_utils.get_test_image_meta() self.assertRaises(exception.ValidationError, self.driver.spawn, self.ctx, instance, image_meta, [], None, {}) self.mock_conn.get_node.assert_called_once_with( node_id, fields=ironic_driver._NODE_FIELDS) mock_avti.assert_called_once_with(self.ctx, instance, None) mock_node.validate.assert_called_once_with(node_id) @mock.patch.object(configdrive, 'required_by') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(FAKE_CLIENT, 'node') @mock.patch.object(ironic_driver.IronicDriver, '_add_volume_target_info') @mock.patch.object(ironic_driver.IronicDriver, '_generate_configdrive') def test_spawn_node_configdrive_fail(self, mock_configdrive, mock_avti, mock_node, mock_save, mock_required_by): mock_required_by.return_value = True node_id = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' node = _get_cached_node(driver='fake', id=node_id) flavor = ironic_utils.get_test_flavor() instance = fake_instance.fake_instance_obj(self.ctx, node=node_id) instance.flavor = flavor self.mock_conn.get_node.return_value = node mock_node.validate.return_value = ironic_utils.get_test_validation() image_meta = ironic_utils.get_test_image_meta() mock_configdrive.side_effect = test.TestingException() with mock.patch.object(self.driver, '_cleanup_deploy', autospec=True) as mock_cleanup_deploy: self.assertRaises(test.TestingException, self.driver.spawn, self.ctx, instance, image_meta, [], None, {}) self.mock_conn.get_node.assert_called_once_with( node_id, fields=ironic_driver._NODE_FIELDS) mock_node.validate.assert_called_once_with(node_id) mock_cleanup_deploy.assert_called_with(node, instance, None) @mock.patch.object(configdrive, 'required_by') @mock.patch.object(FAKE_CLIENT, 'node') @mock.patch.object(ironic_driver.IronicDriver, '_add_volume_target_info') @mock.patch.object(ironic_driver.IronicDriver, '_cleanup_deploy') def test_spawn_node_trigger_deploy_fail(self, mock_cleanup_deploy, mock_avti, mock_node, mock_required_by): mock_required_by.return_value = False node_id = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' node = _get_cached_node(driver='fake', id=node_id) flavor = ironic_utils.get_test_flavor() instance = fake_instance.fake_instance_obj(self.ctx, node=node_id) instance.flavor = flavor image_meta = ironic_utils.get_test_image_meta() self.mock_conn.get_node.return_value = node mock_node.validate.return_value = ironic_utils.get_test_validation() 
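# Validation passes, but the request to move the node to 'active' fails;
# spawn() must propagate the error and still run _cleanup_deploy.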
mock_node.set_provision_state.side_effect = exception.NovaException() self.assertRaises(exception.NovaException, self.driver.spawn, self.ctx, instance, image_meta, [], None, {}) self.mock_conn.get_node.assert_called_once_with( node_id, fields=ironic_driver._NODE_FIELDS) mock_node.validate.assert_called_once_with(node_id) mock_cleanup_deploy.assert_called_once_with(node, instance, None) @mock.patch.object(configdrive, 'required_by') @mock.patch.object(FAKE_CLIENT, 'node') @mock.patch.object(ironic_driver.IronicDriver, '_add_volume_target_info') @mock.patch.object(ironic_driver.IronicDriver, '_cleanup_deploy') def test_spawn_node_trigger_deploy_fail2(self, mock_cleanup_deploy, mock_avti, mock_node, mock_required_by): mock_required_by.return_value = False node_id = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' node = _get_cached_node(driver='fake', id=node_id) flavor = ironic_utils.get_test_flavor() instance = fake_instance.fake_instance_obj(self.ctx, node=node_id) instance.flavor = flavor image_meta = ironic_utils.get_test_image_meta() self.mock_conn.get_node.return_value = node mock_node.validate.return_value = ironic_utils.get_test_validation() mock_node.set_provision_state.side_effect = ironic_exception.BadRequest self.assertRaises(ironic_exception.BadRequest, self.driver.spawn, self.ctx, instance, image_meta, [], None, {}) self.mock_conn.get_node.assert_called_once_with( node_id, fields=ironic_driver._NODE_FIELDS) mock_node.validate.assert_called_once_with(node_id) mock_cleanup_deploy.assert_called_once_with(node, instance, None) @mock.patch.object(configdrive, 'required_by') @mock.patch.object(loopingcall, 'FixedIntervalLoopingCall') @mock.patch.object(FAKE_CLIENT, 'node') @mock.patch.object(ironic_driver.IronicDriver, '_add_volume_target_info') @mock.patch.object(ironic_driver.IronicDriver, 'destroy') def test_spawn_node_trigger_deploy_fail3(self, mock_destroy, mock_avti, mock_node, mock_looping, mock_required_by): mock_required_by.return_value = False node_id = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' node = _get_cached_node(driver='fake', id=node_id) flavor = ironic_utils.get_test_flavor() instance = fake_instance.fake_instance_obj(self.ctx, node=node_id) instance.flavor = flavor image_meta = ironic_utils.get_test_image_meta() self.mock_conn.get_node.return_value = node mock_node.validate.return_value = ironic_utils.get_test_validation() fake_looping_call = FakeLoopingCall() mock_looping.return_value = fake_looping_call fake_looping_call.wait.side_effect = ironic_exception.BadRequest fake_net_info = utils.get_test_network_info() self.assertRaises(ironic_exception.BadRequest, self.driver.spawn, self.ctx, instance, image_meta, [], None, {}, fake_net_info) self.assertEqual(0, mock_destroy.call_count) @mock.patch.object(configdrive, 'required_by') @mock.patch.object(loopingcall, 'FixedIntervalLoopingCall') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(FAKE_CLIENT, 'node') @mock.patch.object(ironic_driver.IronicDriver, '_add_volume_target_info') @mock.patch.object(ironic_driver.IronicDriver, '_wait_for_active') def test_spawn_sets_default_ephemeral_device(self, mock_wait, mock_avti, mock_node, mock_save, mock_looping, mock_required_by): mock_required_by.return_value = False node_uuid = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' node = _get_cached_node(driver='fake', uuid=node_uuid) flavor = ironic_utils.get_test_flavor(ephemeral_gb=1) instance = fake_instance.fake_instance_obj(self.ctx, node=node_uuid) instance.flavor = flavor mock_node.get_by_instance_uuid.return_value = node 
mock_node.set_provision_state.return_value = mock.MagicMock() image_meta = ironic_utils.get_test_image_meta() self.driver.spawn(self.ctx, instance, image_meta, [], None, {}) self.assertTrue(mock_save.called) self.assertEqual('/dev/sda1', instance.default_ephemeral_device) @mock.patch.object(FAKE_CLIENT, 'node') @mock.patch.object(ironic_driver.IronicDriver, '_remove_instance_info_from_node') @mock.patch.object(ironic_driver.IronicDriver, '_cleanup_deploy') def _test_destroy(self, state, mock_cleanup_deploy, mock_remove_instance_info, mock_node): node_id = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' network_info = 'foo' node = _get_cached_node( driver='fake', uuid=node_id, provision_state=state) instance = fake_instance.fake_instance_obj(self.ctx, node=node_id) def fake_set_provision_state(*_): node.provision_state = None self.mock_conn.nodes.return_value = iter([node]) mock_node.set_provision_state.side_effect = fake_set_provision_state self.driver.destroy(self.ctx, instance, network_info, None) self.mock_conn.nodes.assert_called_with( instance_id=instance.uuid, fields=ironic_driver._NODE_FIELDS) mock_cleanup_deploy.assert_called_with(node, instance, network_info, remove_instance_info=False) # For states that makes sense check if set_provision_state has # been called if state in ironic_driver._UNPROVISION_STATES: mock_node.set_provision_state.assert_called_once_with( node_id, 'deleted') self.assertFalse(mock_remove_instance_info.called) else: self.assertFalse(mock_node.set_provision_state.called) mock_remove_instance_info.assert_called_once_with(node, instance) def test_destroy(self): for state in ironic_states.PROVISION_STATE_LIST: self._test_destroy(state) @mock.patch.object(FAKE_CLIENT.node, 'set_provision_state') @mock.patch.object(ironic_driver.IronicDriver, '_validate_instance_and_node') @mock.patch.object(ironic_driver.IronicDriver, '_cleanup_deploy') def test_destroy_trigger_undeploy_fail(self, mock_clean, fake_validate, mock_sps): node_uuid = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' node = _get_cached_node( driver='fake', uuid=node_uuid, provision_state=ironic_states.ACTIVE) fake_validate.return_value = node instance = fake_instance.fake_instance_obj(self.ctx, node=node_uuid) mock_sps.side_effect = exception.NovaException() self.assertRaises(exception.NovaException, self.driver.destroy, self.ctx, instance, None, None) mock_clean.assert_called_once_with(node, instance, None, remove_instance_info=False) @mock.patch.object(FAKE_CLIENT.node, 'update') @mock.patch.object(ironic_driver.IronicDriver, '_validate_instance_and_node') @mock.patch.object(ironic_driver.IronicDriver, '_cleanup_deploy') def test_destroy_trigger_remove_info_fail(self, mock_clean, fake_validate, mock_update): node_uuid = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' node = ironic_utils.get_test_node( driver='fake', uuid=node_uuid, provision_state=ironic_states.AVAILABLE) fake_validate.return_value = node instance = fake_instance.fake_instance_obj(self.ctx, node=node_uuid) mock_update.side_effect = SystemError('unexpected error') self.assertRaises(SystemError, self.driver.destroy, self.ctx, instance, None, None) mock_clean.assert_called_once_with(node, instance, None, remove_instance_info=False) @mock.patch.object(FAKE_CLIENT.node, 'set_provision_state') @mock.patch.object(ironic_driver.IronicDriver, '_validate_instance_and_node') def _test__unprovision_instance(self, mock_validate_inst, mock_set_pstate, state=None): node = _get_cached_node(driver='fake', provision_state=state) instance = 
fake_instance.fake_instance_obj(self.ctx, node=node.uuid) mock_validate_inst.return_value = node with mock.patch.object(self.driver, 'node_cache') as cache_mock: self.driver._unprovision(instance, node) mock_validate_inst.assert_called_once_with(instance) mock_set_pstate.assert_called_once_with(node.uuid, "deleted") cache_mock.pop.assert_called_once_with(node.uuid, None) def test__unprovision_cleaning(self): self._test__unprovision_instance(state=ironic_states.CLEANING) def test__unprovision_cleanwait(self): self._test__unprovision_instance(state=ironic_states.CLEANWAIT) @mock.patch.object(FAKE_CLIENT.node, 'set_provision_state') @mock.patch.object(ironic_driver.IronicDriver, '_validate_instance_and_node') def test__unprovision_fail_max_retries(self, mock_validate_inst, mock_set_pstate): CONF.set_default('api_max_retries', default=2, group='ironic') node = _get_cached_node( driver='fake', provision_state=ironic_states.ACTIVE) instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) mock_validate_inst.return_value = node self.assertRaises(exception.NovaException, self.driver._unprovision, instance, node) expected_calls = (mock.call(instance), mock.call(instance)) mock_validate_inst.assert_has_calls(expected_calls) mock_set_pstate.assert_called_once_with(node.uuid, "deleted") @mock.patch.object(FAKE_CLIENT.node, 'set_provision_state') @mock.patch.object(ironic_driver.IronicDriver, '_validate_instance_and_node') def test__unprovision_instance_not_found(self, mock_validate_inst, mock_set_pstate): node = _get_cached_node( driver='fake', provision_state=ironic_states.DELETING) instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) mock_validate_inst.side_effect = exception.InstanceNotFound( instance_id='fake') self.driver._unprovision(instance, node) mock_validate_inst.assert_called_once_with(instance) mock_set_pstate.assert_called_once_with(node.uuid, "deleted") @mock.patch.object(FAKE_CLIENT, 'node') def test_destroy_unassociate_fail(self, mock_node): node_id = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' node = _get_cached_node( driver='fake', id=node_id, provision_state=ironic_states.ACTIVE) instance = fake_instance.fake_instance_obj(self.ctx, node=node_id) self.mock_conn.nodes.return_value = iter([node]) mock_node.set_provision_state.side_effect = exception.NovaException() self.assertRaises(exception.NovaException, self.driver.destroy, self.ctx, instance, None, None) mock_node.set_provision_state.assert_called_once_with(node_id, 'deleted') self.mock_conn.nodes.assert_called_once_with( instance_id=instance.uuid, fields=ironic_driver._NODE_FIELDS) @mock.patch.object(loopingcall, 'FixedIntervalLoopingCall') @mock.patch.object(ironic_driver.IronicDriver, '_validate_instance_and_node') @mock.patch.object(FAKE_CLIENT.node, 'set_power_state') def test_reboot(self, mock_sp, fake_validate, mock_looping): node = _get_cached_node() fake_validate.side_effect = [node, node] fake_looping_call = FakeLoopingCall() mock_looping.return_value = fake_looping_call instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) self.driver.reboot(self.ctx, instance, None, 'HARD') mock_sp.assert_called_once_with(node.uuid, 'reboot') @mock.patch.object(ironic_driver.IronicDriver, '_validate_instance_and_node') @mock.patch.object(FAKE_CLIENT.node, 'inject_nmi') def test_trigger_crash_dump(self, mock_nmi, fake_validate): node = _get_cached_node() fake_validate.return_value = node instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) self.driver.trigger_crash_dump(instance) 
        mock_nmi.assert_called_once_with(node.uuid)

    @mock.patch.object(ironic_driver.IronicDriver,
                       '_validate_instance_and_node')
    @mock.patch.object(FAKE_CLIENT.node, 'inject_nmi')
    def test_trigger_crash_dump_error(self, mock_nmi, fake_validate):
        node = _get_cached_node()
        fake_validate.return_value = node
        mock_nmi.side_effect = ironic_exception.BadRequest()
        instance = fake_instance.fake_instance_obj(self.ctx,
                                                   node=node.uuid)
        self.assertRaises(ironic_exception.BadRequest,
                          self.driver.trigger_crash_dump, instance)

    @mock.patch.object(loopingcall, 'FixedIntervalLoopingCall')
    @mock.patch.object(ironic_driver.IronicDriver,
                       '_validate_instance_and_node')
    @mock.patch.object(FAKE_CLIENT.node, 'set_power_state')
    def test_reboot_soft(self, mock_sp, fake_validate, mock_looping):
        node = _get_cached_node()
        fake_validate.side_effect = [node, node]
        fake_looping_call = FakeLoopingCall()
        mock_looping.return_value = fake_looping_call

        instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid)
        self.driver.reboot(self.ctx, instance, None, 'SOFT')
        mock_sp.assert_called_once_with(node.uuid, 'reboot', soft=True)

    @mock.patch.object(loopingcall, 'FixedIntervalLoopingCall')
    @mock.patch.object(ironic_driver.IronicDriver,
                       '_validate_instance_and_node')
    @mock.patch.object(FAKE_CLIENT.node, 'set_power_state')
    def test_reboot_soft_not_supported(self, mock_sp, fake_validate,
                                       mock_looping):
        node = _get_cached_node()
        fake_validate.side_effect = [node, node]
        mock_sp.side_effect = [ironic_exception.BadRequest(), None]
        fake_looping_call = FakeLoopingCall()
        mock_looping.return_value = fake_looping_call

        instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid)
        self.driver.reboot(self.ctx, instance, None, 'SOFT')
        mock_sp.assert_has_calls([mock.call(node.uuid, 'reboot', soft=True),
                                  mock.call(node.uuid, 'reboot')])

    @mock.patch.object(objects.Instance, 'save')
    def test_power_update_event(self, mock_save):
        instance = fake_instance.fake_instance_obj(
            self.ctx, node=self.instance_uuid,
            power_state=nova_states.RUNNING, vm_state=vm_states.ACTIVE,
            task_state=task_states.POWERING_OFF)
        self.driver.power_update_event(instance, common.POWER_OFF)
        self.assertEqual(nova_states.SHUTDOWN, instance.power_state)
        self.assertEqual(vm_states.STOPPED, instance.vm_state)
        self.assertIsNone(instance.task_state)
        mock_save.assert_called_once_with(
            expected_task_state=task_states.POWERING_OFF)

    @mock.patch.object(loopingcall, 'FixedIntervalLoopingCall')
    @mock.patch.object(ironic_driver.IronicDriver,
                       '_validate_instance_and_node')
    @mock.patch.object(FAKE_CLIENT.node, 'set_power_state')
    def test_power_on(self, mock_sp, fake_validate, mock_looping):
        node = _get_cached_node()
        fake_validate.side_effect = [node, node]
        fake_looping_call = FakeLoopingCall()
        mock_looping.return_value = fake_looping_call

        instance = fake_instance.fake_instance_obj(self.ctx,
                                                   node=self.instance_uuid)
        self.driver.power_on(self.ctx, instance,
                             utils.get_test_network_info())
        mock_sp.assert_called_once_with(node.uuid, 'on')

    @mock.patch.object(loopingcall, 'FixedIntervalLoopingCall')
    def _test_power_off(self, mock_looping, timeout=0):
        fake_looping_call = FakeLoopingCall()
        mock_looping.return_value = fake_looping_call

        instance = fake_instance.fake_instance_obj(self.ctx,
                                                   node=self.instance_uuid)
        self.driver.power_off(instance, timeout)

    @mock.patch.object(ironic_driver.IronicDriver,
                       '_validate_instance_and_node')
    @mock.patch.object(FAKE_CLIENT.node, 'set_power_state')
    def test_power_off(self, mock_sp, fake_validate):
        node = _get_cached_node()
        fake_validate.side_effect = [node, node]
self._test_power_off() mock_sp.assert_called_once_with(node.uuid, 'off') @mock.patch.object(ironic_driver.IronicDriver, '_validate_instance_and_node') @mock.patch.object(FAKE_CLIENT.node, 'set_power_state') def test_power_off_soft(self, mock_sp, fake_validate): node = _get_cached_node() power_off_node = _get_cached_node(power_state=ironic_states.POWER_OFF) fake_validate.side_effect = [node, power_off_node] self._test_power_off(timeout=30) mock_sp.assert_called_once_with(node.uuid, 'off', soft=True, timeout=30) @mock.patch.object(ironic_driver.IronicDriver, '_validate_instance_and_node') @mock.patch.object(FAKE_CLIENT.node, 'set_power_state') def test_power_off_soft_exception(self, mock_sp, fake_validate): node = _get_cached_node() fake_validate.side_effect = [node, node] mock_sp.side_effect = [ironic_exception.BadRequest(), None] self._test_power_off(timeout=30) expected_calls = [mock.call(node.uuid, 'off', soft=True, timeout=30), mock.call(node.uuid, 'off')] self.assertEqual(len(expected_calls), mock_sp.call_count) mock_sp.assert_has_calls(expected_calls) @mock.patch.object(ironic_driver.IronicDriver, '_validate_instance_and_node') @mock.patch.object(FAKE_CLIENT.node, 'set_power_state') def test_power_off_soft_not_stopped(self, mock_sp, fake_validate): node = _get_cached_node() fake_validate.side_effect = [node, node] self._test_power_off(timeout=30) expected_calls = [mock.call(node.uuid, 'off', soft=True, timeout=30), mock.call(node.uuid, 'off')] self.assertEqual(len(expected_calls), mock_sp.call_count) mock_sp.assert_has_calls(expected_calls) @mock.patch.object(FAKE_CLIENT.node, 'vif_attach') def test_plug_vifs_with_port(self, mock_vatt): node_uuid = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' node = _get_cached_node(uuid=node_uuid) instance = fake_instance.fake_instance_obj(self.ctx, node=node_uuid) network_info = utils.get_test_network_info() vif_id = six.text_type(network_info[0]['id']) self.driver._plug_vifs(node, instance, network_info) # asserts mock_vatt.assert_called_with(node.uuid, vif_id) @mock.patch.object(ironic_driver.IronicDriver, '_plug_vifs') def test_plug_vifs(self, mock__plug_vifs): node_id = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' node = _get_cached_node(id=node_id) self.mock_conn.get_node.return_value = node instance = fake_instance.fake_instance_obj(self.ctx, node=node_id) network_info = utils.get_test_network_info() self.driver.plug_vifs(instance, network_info) self.mock_conn.get_node.assert_called_once_with( node_id, fields=ironic_driver._NODE_FIELDS) mock__plug_vifs.assert_called_once_with(node, instance, network_info) @mock.patch.object(FAKE_CLIENT.node, 'vif_attach') def test_plug_vifs_multiple_ports(self, mock_vatt): node_uuid = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' node = _get_cached_node(uuid=node_uuid) instance = fake_instance.fake_instance_obj(self.ctx, node=node_uuid) first_vif_id = 'aaaaaaaa-vv11-cccc-dddd-eeeeeeeeeeee' second_vif_id = 'aaaaaaaa-vv22-cccc-dddd-eeeeeeeeeeee' first_vif = ironic_utils.get_test_vif(address='22:FF:FF:FF:FF:FF', id=first_vif_id) second_vif = ironic_utils.get_test_vif(address='11:FF:FF:FF:FF:FF', id=second_vif_id) network_info = [first_vif, second_vif] self.driver._plug_vifs(node, instance, network_info) # asserts calls = (mock.call(node.uuid, first_vif_id), mock.call(node.uuid, second_vif_id)) mock_vatt.assert_has_calls(calls, any_order=True) @mock.patch.object(FAKE_CLIENT, 'node') def test_plug_vifs_failure(self, mock_node): node_uuid = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' node = _get_cached_node(uuid=node_uuid) instance = 
fake_instance.fake_instance_obj(self.ctx, node=node_uuid) first_vif_id = 'aaaaaaaa-vv11-cccc-dddd-eeeeeeeeeeee' second_vif_id = 'aaaaaaaa-vv22-cccc-dddd-eeeeeeeeeeee' first_vif = ironic_utils.get_test_vif(address='22:FF:FF:FF:FF:FF', id=first_vif_id) second_vif = ironic_utils.get_test_vif(address='11:FF:FF:FF:FF:FF', id=second_vif_id) mock_node.vif_attach.side_effect = [None, ironic_exception.BadRequest()] network_info = [first_vif, second_vif] self.assertRaises(exception.VirtualInterfacePlugException, self.driver._plug_vifs, node, instance, network_info) @mock.patch('time.sleep') @mock.patch.object(FAKE_CLIENT, 'node') def test_plug_vifs_failure_no_conductor(self, mock_node, mock_sleep): node_uuid = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' node = _get_cached_node(uuid=node_uuid) instance = fake_instance.fake_instance_obj(self.ctx, node=node_uuid) first_vif_id = 'aaaaaaaa-vv11-cccc-dddd-eeeeeeeeeeee' second_vif_id = 'aaaaaaaa-vv22-cccc-dddd-eeeeeeeeeeee' first_vif = ironic_utils.get_test_vif(address='22:FF:FF:FF:FF:FF', id=first_vif_id) second_vif = ironic_utils.get_test_vif(address='11:FF:FF:FF:FF:FF', id=second_vif_id) msg = 'No conductor service registered which supports driver ipmi.' mock_node.vif_attach.side_effect = [None, ironic_exception.BadRequest(msg), None] network_info = [first_vif, second_vif] self.driver._plug_vifs(node, instance, network_info) calls = [mock.call(node.uuid, first_vif_id), mock.call(node.uuid, second_vif_id), mock.call(node.uuid, second_vif_id)] mock_node.vif_attach.assert_has_calls(calls, any_order=True) @mock.patch.object(FAKE_CLIENT, 'node') def test_plug_vifs_already_attached(self, mock_node): node_uuid = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' node = _get_cached_node(uuid=node_uuid) instance = fake_instance.fake_instance_obj(self.ctx, node=node_uuid) first_vif_id = 'aaaaaaaa-vv11-cccc-dddd-eeeeeeeeeeee' second_vif_id = 'aaaaaaaa-vv22-cccc-dddd-eeeeeeeeeeee' first_vif = ironic_utils.get_test_vif(address='22:FF:FF:FF:FF:FF', id=first_vif_id) second_vif = ironic_utils.get_test_vif(address='11:FF:FF:FF:FF:FF', id=second_vif_id) mock_node.vif_attach.side_effect = [ironic_exception.Conflict(), None] network_info = [first_vif, second_vif] self.driver._plug_vifs(node, instance, network_info) self.assertEqual(2, mock_node.vif_attach.call_count) @mock.patch.object(FAKE_CLIENT.node, 'vif_attach') def test_plug_vifs_no_network_info(self, mock_vatt): node_uuid = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' node = _get_cached_node(uuid=node_uuid) instance = fake_instance.fake_instance_obj(self.ctx, node=node_uuid) network_info = [] self.driver._plug_vifs(node, instance, network_info) # asserts self.assertFalse(mock_vatt.called) @mock.patch.object(FAKE_CLIENT, 'node') def test_unplug_vifs(self, mock_node): node_id = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' node = _get_cached_node(id=node_id) self.mock_conn.get_node.return_value = node instance = fake_instance.fake_instance_obj(self.ctx, node=node_id) network_info = utils.get_test_network_info() vif_id = six.text_type(network_info[0]['id']) self.driver.unplug_vifs(instance, network_info) # asserts self.mock_conn.get_node.assert_called_once_with( node_id, fields=ironic_driver._NODE_FIELDS) mock_node.vif_detach.assert_called_once_with(node.id, vif_id) @mock.patch.object(FAKE_CLIENT, 'node') def test_unplug_vifs_port_not_associated(self, mock_node): node_id = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' node = _get_cached_node(id=node_id) self.mock_conn.get_node.return_value = node instance = fake_instance.fake_instance_obj(self.ctx, 
node=node_id) network_info = utils.get_test_network_info() self.driver.unplug_vifs(instance, network_info) self.mock_conn.get_node.assert_called_once_with( node_id, fields=ironic_driver._NODE_FIELDS) self.assertEqual(len(network_info), mock_node.vif_detach.call_count) @mock.patch.object(FAKE_CLIENT.node, 'vif_detach') def test_unplug_vifs_no_network_info(self, mock_vdet): instance = fake_instance.fake_instance_obj(self.ctx) network_info = [] self.driver.unplug_vifs(instance, network_info) self.assertFalse(mock_vdet.called) @mock.patch.object(ironic_driver.IronicDriver, 'plug_vifs') def test_attach_interface(self, mock_pv): self.driver.attach_interface('fake_context', 'fake_instance', 'fake_image_meta', 'fake_vif') mock_pv.assert_called_once_with('fake_instance', ['fake_vif']) @mock.patch.object(ironic_driver.IronicDriver, 'unplug_vifs') def test_detach_interface(self, mock_uv): self.driver.detach_interface('fake_context', 'fake_instance', 'fake_vif') mock_uv.assert_called_once_with('fake_instance', ['fake_vif']) @mock.patch.object(ironic_driver.IronicDriver, '_wait_for_active') @mock.patch.object(loopingcall, 'FixedIntervalLoopingCall') @mock.patch.object(FAKE_CLIENT.node, 'set_provision_state') @mock.patch.object(ironic_driver.IronicDriver, '_add_instance_info_to_node') @mock.patch.object(objects.Instance, 'save') def _test_rebuild(self, mock_save, mock_add_instance_info, mock_set_pstate, mock_looping, mock_wait_active, preserve=False): node_id = uuidutils.generate_uuid() node = _get_cached_node(id=node_id, instance_id=self.instance_id, instance_type_id=5) self.mock_conn.get_node.return_value = node image_meta = ironic_utils.get_test_image_meta() flavor_id = 5 flavor = objects.Flavor(flavor_id=flavor_id, name='baremetal') instance = fake_instance.fake_instance_obj(self.ctx, uuid=self.instance_uuid, node=node_id, instance_type_id=flavor_id) instance.flavor = flavor fake_looping_call = FakeLoopingCall() mock_looping.return_value = fake_looping_call self.driver.rebuild( context=self.ctx, instance=instance, image_meta=image_meta, injected_files=None, admin_password=None, allocations={}, bdms=None, detach_block_devices=None, attach_block_devices=None, preserve_ephemeral=preserve) mock_save.assert_called_once_with( expected_task_state=[task_states.REBUILDING]) mock_add_instance_info.assert_called_once_with( node, instance, test.MatchType(objects.ImageMeta), flavor, preserve) mock_set_pstate.assert_called_once_with(node_id, ironic_states.REBUILD, configdrive=mock.ANY) mock_looping.assert_called_once_with(mock_wait_active, instance) fake_looping_call.start.assert_called_once_with( interval=CONF.ironic.api_retry_interval) fake_looping_call.wait.assert_called_once_with() @mock.patch.object(ironic_driver.IronicDriver, '_generate_configdrive') @mock.patch.object(configdrive, 'required_by') def test_rebuild_preserve_ephemeral(self, mock_required_by, mock_configdrive): mock_required_by.return_value = False self._test_rebuild(preserve=True) # assert configdrive was not generated mock_configdrive.assert_not_called() @mock.patch.object(ironic_driver.IronicDriver, '_generate_configdrive') @mock.patch.object(configdrive, 'required_by') def test_rebuild_no_preserve_ephemeral(self, mock_required_by, mock_configdrive): mock_required_by.return_value = False self._test_rebuild(preserve=False) @mock.patch.object(ironic_driver.IronicDriver, '_generate_configdrive') @mock.patch.object(configdrive, 'required_by') def test_rebuild_with_configdrive(self, mock_required_by, mock_configdrive): 
mock_required_by.return_value = True self._test_rebuild() # assert configdrive was generated mock_configdrive.assert_called_once_with( self.ctx, mock.ANY, mock.ANY, mock.ANY, extra_md={}, files=None) @mock.patch.object(ironic_driver.IronicDriver, '_generate_configdrive') @mock.patch.object(configdrive, 'required_by') @mock.patch.object(ironic_driver.IronicDriver, '_add_instance_info_to_node') @mock.patch.object(FAKE_CLIENT.node, 'get') @mock.patch.object(objects.Instance, 'save') def test_rebuild_with_configdrive_failure(self, mock_save, mock_get, mock_add_instance_info, mock_required_by, mock_configdrive): node_uuid = uuidutils.generate_uuid() node = _get_cached_node( uuid=node_uuid, instance_uuid=self.instance_uuid, instance_type_id=5) mock_get.return_value = node mock_required_by.return_value = True mock_configdrive.side_effect = exception.NovaException() image_meta = ironic_utils.get_test_image_meta() flavor_id = 5 flavor = objects.Flavor(flavor_id=flavor_id, name='baremetal') instance = fake_instance.fake_instance_obj(self.ctx, uuid=self.instance_uuid, node=node_uuid, instance_type_id=flavor_id) instance.flavor = flavor self.assertRaises(exception.InstanceDeployFailure, self.driver.rebuild, context=self.ctx, instance=instance, image_meta=image_meta, injected_files=None, admin_password=None, allocations={}, bdms=None, detach_block_devices=None, attach_block_devices=None) @mock.patch.object(ironic_driver.IronicDriver, '_generate_configdrive') @mock.patch.object(configdrive, 'required_by') @mock.patch.object(FAKE_CLIENT.node, 'set_provision_state') @mock.patch.object(ironic_driver.IronicDriver, '_add_instance_info_to_node') @mock.patch.object(FAKE_CLIENT.node, 'get') @mock.patch.object(objects.Instance, 'save') def test_rebuild_failures(self, mock_save, mock_get, mock_add_instance_info, mock_set_pstate, mock_required_by, mock_configdrive): node_uuid = uuidutils.generate_uuid() node = _get_cached_node( uuid=node_uuid, instance_uuid=self.instance_uuid, instance_type_id=5) mock_get.return_value = node mock_required_by.return_value = False image_meta = ironic_utils.get_test_image_meta() flavor_id = 5 flavor = objects.Flavor(flavor_id=flavor_id, name='baremetal') instance = fake_instance.fake_instance_obj(self.ctx, uuid=self.instance_uuid, node=node_uuid, instance_type_id=flavor_id) instance.flavor = flavor exceptions = [ exception.NovaException(), ironic_exception.BadRequest(), ironic_exception.InternalServerError(), ] for e in exceptions: mock_set_pstate.side_effect = e self.assertRaises(exception.InstanceDeployFailure, self.driver.rebuild, context=self.ctx, instance=instance, image_meta=image_meta, injected_files=None, admin_password=None, allocations={}, bdms=None, detach_block_devices=None, attach_block_devices=None) @mock.patch.object(FAKE_CLIENT.node, 'get') def test_network_binding_host_id(self, mock_get): node_uuid = uuidutils.generate_uuid() hostname = 'ironic-compute' instance = fake_instance.fake_instance_obj(self.ctx, node=node_uuid, host=hostname) node = ironic_utils.get_test_node(uuid=node_uuid, instance_uuid=self.instance_uuid, instance_type_id=5, network_interface='flat') mock_get.return_value = node host_id = self.driver.network_binding_host_id(self.ctx, instance) self.assertIsNone(host_id) @mock.patch.object(FAKE_CLIENT, 'node') def test_get_volume_connector(self, mock_node): node_uuid = uuids.node_uuid node_props = {'cpu_arch': 'x86_64'} node = ironic_utils.get_test_node(uuid=node_uuid, properties=node_props) connectors = [ironic_utils.get_test_volume_connector( 
node_uuid=node_uuid, type='iqn', connector_id='iqn.test'), ironic_utils.get_test_volume_connector( node_uuid=node_uuid, type='ip', connector_id='1.2.3.4'), ironic_utils.get_test_volume_connector( node_uuid=node_uuid, type='wwnn', connector_id='200010601'), ironic_utils.get_test_volume_connector( node_uuid=node_uuid, type='wwpn', connector_id='200010605'), ironic_utils.get_test_volume_connector( node_uuid=node_uuid, type='wwpn', connector_id='200010606')] expected_props = {'initiator': 'iqn.test', 'ip': '1.2.3.4', 'host': '1.2.3.4', 'multipath': False, 'wwnns': ['200010601'], 'wwpns': ['200010605', '200010606'], 'os_type': 'baremetal', 'platform': 'x86_64'} mock_node.get.return_value = node mock_node.list_volume_connectors.return_value = connectors instance = fake_instance.fake_instance_obj(self.ctx, node=node_uuid) props = self.driver.get_volume_connector(instance) self.assertEqual(expected_props, props) mock_node.get.assert_called_once_with(node_uuid) mock_node.list_volume_connectors.assert_called_once_with( node_uuid, detail=True) @mock.patch.object(objects.instance.Instance, 'get_network_info') @mock.patch.object(FAKE_CLIENT, 'node') @mock.patch.object(FAKE_CLIENT.port, 'list') @mock.patch.object(FAKE_CLIENT.portgroup, 'list') def _test_get_volume_connector_no_ip( self, mac_specified, mock_pgroup, mock_port, mock_node, mock_nw_info, portgroup_exist=False, no_fixed_ip=False): node_uuid = uuids.node_uuid node_props = {'cpu_arch': 'x86_64'} node = ironic_utils.get_test_node(uuid=node_uuid, properties=node_props) connectors = [ironic_utils.get_test_volume_connector( node_uuid=node_uuid, type='iqn', connector_id='iqn.test')] if mac_specified: connectors.append(ironic_utils.get_test_volume_connector( node_uuid=node_uuid, type='mac', connector_id='11:22:33:44:55:66')) fixed_ip = network_model.FixedIP(address='1.2.3.4', version=4) subnet = network_model.Subnet(ips=[fixed_ip]) network = network_model.Network(subnets=[subnet]) vif = network_model.VIF( id='aaaaaaaa-vv11-cccc-dddd-eeeeeeeeeeee', network=network) expected_props = {'initiator': 'iqn.test', 'ip': '1.2.3.4', 'host': '1.2.3.4', 'multipath': False, 'os_type': 'baremetal', 'platform': 'x86_64'} mock_node.get.return_value = node mock_node.list_volume_connectors.return_value = connectors instance = fake_instance.fake_instance_obj(self.ctx, node=node_uuid) port = ironic_utils.get_test_port( node_uuid=node_uuid, address='11:22:33:44:55:66', internal_info={'tenant_vif_port_id': vif['id']}) mock_port.return_value = [port] if no_fixed_ip: mock_nw_info.return_value = [] expected_props.pop('ip') expected_props['host'] = instance.hostname else: mock_nw_info.return_value = [vif] if portgroup_exist: portgroup = ironic_utils.get_test_portgroup( node_uuid=node_uuid, address='11:22:33:44:55:66', extra={'vif_port_id': vif['id']}) mock_pgroup.return_value = [portgroup] else: mock_pgroup.return_value = [] props = self.driver.get_volume_connector(instance) self.assertEqual(expected_props, props) mock_node.get.assert_called_once_with(node_uuid) mock_node.list_volume_connectors.assert_called_once_with( node_uuid, detail=True) if mac_specified: mock_pgroup.assert_called_once_with( node=node_uuid, address='11:22:33:44:55:66', detail=True) if not portgroup_exist: mock_port.assert_called_once_with( node=node_uuid, address='11:22:33:44:55:66', detail=True) else: mock_port.assert_not_called() else: mock_pgroup.assert_not_called() mock_port.assert_not_called() def test_get_volume_connector_no_ip_with_mac(self): self._test_get_volume_connector_no_ip(True) def 
test_get_volume_connector_no_ip_with_mac_with_portgroup(self): self._test_get_volume_connector_no_ip(True, portgroup_exist=True) def test_get_volume_connector_no_ip_without_mac(self): self._test_get_volume_connector_no_ip(False) def test_get_volume_connector_no_ip_no_fixed_ip(self): self._test_get_volume_connector_no_ip(False, no_fixed_ip=True) @mock.patch.object(ironic_driver.IronicDriver, 'plug_vifs') def test_prepare_networks_before_block_device_mapping(self, mock_pvifs): instance = fake_instance.fake_instance_obj(self.ctx) network_info = utils.get_test_network_info() self.driver.prepare_networks_before_block_device_mapping(instance, network_info) mock_pvifs.assert_called_once_with(instance, network_info) @mock.patch.object(ironic_driver.IronicDriver, 'plug_vifs') def test_prepare_networks_before_block_device_mapping_error(self, mock_pvifs): instance = fake_instance.fake_instance_obj(self.ctx) network_info = utils.get_test_network_info() mock_pvifs.side_effect = ironic_exception.BadRequest('fake error') self.assertRaises( ironic_exception.BadRequest, self.driver.prepare_networks_before_block_device_mapping, instance, network_info) mock_pvifs.assert_called_once_with(instance, network_info) @mock.patch.object(ironic_driver.IronicDriver, 'unplug_vifs') def test_clean_networks_preparation(self, mock_upvifs): instance = fake_instance.fake_instance_obj(self.ctx) network_info = utils.get_test_network_info() self.driver.clean_networks_preparation(instance, network_info) mock_upvifs.assert_called_once_with(instance, network_info) @mock.patch.object(ironic_driver.IronicDriver, 'unplug_vifs') def test_clean_networks_preparation_error(self, mock_upvifs): instance = fake_instance.fake_instance_obj(self.ctx) network_info = utils.get_test_network_info() mock_upvifs.side_effect = ironic_exception.BadRequest('fake error') self.driver.clean_networks_preparation(instance, network_info) mock_upvifs.assert_called_once_with(instance, network_info) @mock.patch.object(ironic_driver.LOG, 'error') def test_ironicclient_bad_response(self, mock_error): fake_nodes = [_get_cached_node(), ironic_utils.get_test_node(driver='fake', id=uuidutils.generate_uuid())] self.mock_conn.nodes.side_effect = [iter(fake_nodes), Exception()] result = self.driver._get_node_list() mock_error.assert_not_called() self.assertEqual(fake_nodes, result) self.assertRaises(exception.VirtDriverNotReady, self.driver._get_node_list) mock_error.assert_called_once() @mock.patch.object(cw.IronicClientWrapper, 'call') def test_prepare_for_spawn(self, mock_call): node = ironic_utils.get_test_node(driver='fake') self.mock_conn.get_node.return_value = node instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) self.driver.prepare_for_spawn(instance) self.mock_conn.get_node.assert_called_once_with( node.uuid, fields=('uuid', 'power_state', 'target_power_state', 'provision_state', 'target_provision_state', 'last_error', 'maintenance', 'properties', 'instance_uuid', 'traits', 'resource_class')) self.mock_conn.update_node.assert_called_once_with( node, retry_on_conflict=False, instance_id=instance.uuid) def test__set_instance_id(self): node = ironic_utils.get_test_node(driver='fake') instance = fake_instance.fake_instance_obj(self.ctx, node=node.id) self.driver._set_instance_id(node, instance) self.mock_conn.update_node.assert_called_once_with( node, retry_on_conflict=False, instance_id=instance.uuid) def test_prepare_for_spawn_invalid_instance(self): instance = fake_instance.fake_instance_obj(self.ctx, node=None) 
        self.assertRaises(ironic_exception.BadRequest,
                          self.driver.prepare_for_spawn, instance)

    def test_prepare_for_spawn_conflict(self):
        node = ironic_utils.get_test_node(driver='fake')
        self.mock_conn.get_node.return_value = node
        self.mock_conn.update_node.side_effect = sdk_exc.ConflictException
        instance = fake_instance.fake_instance_obj(self.ctx, node=node.id)
        self.assertRaises(exception.InstanceDeployFailure,
                          self.driver.prepare_for_spawn, instance)

    @mock.patch.object(ironic_driver.IronicDriver, '_cleanup_deploy')
    def test_failed_spawn_cleanup(self, mock_cleanup):
        node = ironic_utils.get_test_node(driver='fake')
        instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid)
        self.mock_conn.nodes.return_value = iter([node])

        self.driver.failed_spawn_cleanup(instance)

        self.mock_conn.nodes.assert_called_once_with(
            instance_id=instance.uuid,
            fields=ironic_driver._NODE_FIELDS)
        self.assertEqual(1, mock_cleanup.call_count)

    @mock.patch.object(ironic_driver.IronicDriver, '_unplug_vifs')
    @mock.patch.object(ironic_driver.IronicDriver,
                       '_cleanup_volume_target_info')
    @mock.patch.object(cw.IronicClientWrapper, 'call')
    def test__cleanup_deploy(self, mock_call, mock_vol, mock_unvif):
        # TODO(TheJulia): This REALLY should be updated to cover all of the
        # calls that take place.
        node = ironic_utils.get_test_node(driver='fake')
        instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid)
        self.driver._cleanup_deploy(node, instance)
        mock_vol.assert_called_once_with(instance)
        mock_unvif.assert_called_once_with(node, instance, None)
        expected_patch = [{'path': '/instance_info', 'op': 'remove'},
                          {'path': '/instance_uuid', 'op': 'remove'}]
        mock_call.assert_called_once_with('node.update', node.uuid,
                                          expected_patch)

    @mock.patch.object(ironic_driver.IronicDriver, '_unplug_vifs')
    @mock.patch.object(ironic_driver.IronicDriver,
                       '_cleanup_volume_target_info')
    @mock.patch.object(cw.IronicClientWrapper, 'call')
    def test__cleanup_deploy_no_remove_ii(self, mock_call, mock_vol,
                                          mock_unvif):
        # TODO(TheJulia): This REALLY should be updated to cover all of the
        # calls that take place.
        node = ironic_utils.get_test_node(driver='fake')
        instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid)
        self.driver._cleanup_deploy(node, instance,
                                    remove_instance_info=False)
        mock_vol.assert_called_once_with(instance)
        mock_unvif.assert_called_once_with(node, instance, None)
        self.assertFalse(mock_call.called)


class IronicDriverSyncTestCase(IronicDriverTestCase):

    def setUp(self):
        super(IronicDriverSyncTestCase, self).setUp()
        self.driver.node_cache = {}
        # Since the code we're testing runs in a spawn_n green thread, ensure
        # that the thread completes.
self.useFixture(nova_fixtures.SpawnIsSynchronousFixture()) self.mock_conn = self.useFixture( fixtures.MockPatchObject(self.driver, '_ironic_connection')).mock @mock.patch.object(ironic_driver.IronicDriver, '_get_node_list') @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type') @mock.patch.object(objects.InstanceList, 'get_uuids_by_host') @mock.patch.object(objects.Instance, 'get_by_uuid') @mock.patch.object(objects.Instance, 'save') def test_pike_flavor_migration(self, mock_save, mock_get_by_uuid, mock_get_uuids_by_host, mock_svc_by_hv, mock_get_node_list): node1_uuid = uuidutils.generate_uuid() node2_uuid = uuidutils.generate_uuid() hostname = "ironic-compute" fake_flavor1 = objects.Flavor() fake_flavor1.extra_specs = {} fake_flavor2 = objects.Flavor() fake_flavor2.extra_specs = {} inst1 = fake_instance.fake_instance_obj(self.ctx, node=node1_uuid, host=hostname, flavor=fake_flavor1) inst2 = fake_instance.fake_instance_obj(self.ctx, node=node2_uuid, host=hostname, flavor=fake_flavor2) node1 = _get_cached_node( uuid=node1_uuid, instance_uuid=inst1.uuid, instance_type_id=1, resource_class="first", network_interface="flat") node2 = _get_cached_node( uuid=node2_uuid, instance_uuid=inst2.uuid, instance_type_id=2, resource_class="second", network_interface="flat") inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2} mock_get_uuids_by_host.return_value = [inst1.uuid, inst2.uuid] mock_svc_by_hv.return_value = [] self.driver.node_cache = {} mock_get_node_list.return_value = [node1, node2] def fake_inst_by_uuid(ctx, uuid, expected_attrs=None): return inst_dict.get(uuid) mock_get_by_uuid.side_effect = fake_inst_by_uuid self.assertEqual({}, inst1.flavor.extra_specs) self.assertEqual({}, inst2.flavor.extra_specs) self.driver._refresh_cache() self.assertEqual(2, mock_save.call_count) expected_specs = {"resources:CUSTOM_FIRST": "1"} self.assertEqual(expected_specs, inst1.flavor.extra_specs) expected_specs = {"resources:CUSTOM_SECOND": "1"} self.assertEqual(expected_specs, inst2.flavor.extra_specs) @mock.patch.object(ironic_driver.IronicDriver, '_get_node_list') @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type') @mock.patch.object(objects.InstanceList, 'get_uuids_by_host') @mock.patch.object(objects.Instance, 'get_by_uuid') @mock.patch.object(objects.Instance, 'save') def test_pike_flavor_migration_instance_migrated(self, mock_save, mock_get_by_uuid, mock_get_uuids_by_host, mock_svc_by_hv, mock_get_node_list): node1_uuid = uuidutils.generate_uuid() node2_uuid = uuidutils.generate_uuid() hostname = "ironic-compute" fake_flavor1 = objects.Flavor() fake_flavor1.extra_specs = {"resources:CUSTOM_FIRST": "1"} fake_flavor2 = objects.Flavor() fake_flavor2.extra_specs = {} inst1 = fake_instance.fake_instance_obj(self.ctx, node=node1_uuid, host=hostname, flavor=fake_flavor1) inst2 = fake_instance.fake_instance_obj(self.ctx, node=node2_uuid, host=hostname, flavor=fake_flavor2) node1 = _get_cached_node( uuid=node1_uuid, instance_uuid=inst1.uuid, instance_type_id=1, resource_class="first", network_interface="flat") node2 = _get_cached_node( uuid=node2_uuid, instance_uuid=inst2.uuid, instance_type_id=2, resource_class="second", network_interface="flat") inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2} mock_get_uuids_by_host.return_value = [inst1.uuid, inst2.uuid] self.driver.node_cache = {} mock_get_node_list.return_value = [node1, node2] mock_svc_by_hv.return_value = [] def fake_inst_by_uuid(ctx, uuid, expected_attrs=None): return inst_dict.get(uuid) mock_get_by_uuid.side_effect = 
fake_inst_by_uuid self.driver._refresh_cache() # Since one instance already had its extra_specs updated with the # custom resource_class, only the other one should be updated and # saved. self.assertEqual(1, mock_save.call_count) expected_specs = {"resources:CUSTOM_FIRST": "1"} self.assertEqual(expected_specs, inst1.flavor.extra_specs) expected_specs = {"resources:CUSTOM_SECOND": "1"} self.assertEqual(expected_specs, inst2.flavor.extra_specs) @mock.patch.object(ironic_driver.LOG, 'warning') @mock.patch.object(ironic_driver.IronicDriver, '_get_node_list') @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type') @mock.patch.object(objects.InstanceList, 'get_uuids_by_host') @mock.patch.object(objects.Instance, 'get_by_uuid') @mock.patch.object(objects.Instance, 'save') def test_pike_flavor_migration_missing_rc(self, mock_save, mock_get_by_uuid, mock_get_uuids_by_host, mock_svc_by_hv, mock_get_node_list, mock_warning): node1_uuid = uuidutils.generate_uuid() node2_uuid = uuidutils.generate_uuid() hostname = "ironic-compute" fake_flavor1 = objects.Flavor() fake_flavor1.extra_specs = {} fake_flavor2 = objects.Flavor() fake_flavor2.extra_specs = {} inst1 = fake_instance.fake_instance_obj(self.ctx, node=node1_uuid, host=hostname, flavor=fake_flavor1) inst2 = fake_instance.fake_instance_obj(self.ctx, node=node2_uuid, host=hostname, flavor=fake_flavor2) node1 = _get_cached_node( uuid=node1_uuid, instance_uuid=inst1.uuid, instance_type_id=1, resource_class=None, network_interface="flat") node2 = _get_cached_node( uuid=node2_uuid, instance_uuid=inst2.uuid, instance_type_id=2, resource_class="second", network_interface="flat") inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2} mock_get_uuids_by_host.return_value = [inst1.uuid, inst2.uuid] mock_svc_by_hv.return_value = [] self.driver.node_cache = {} mock_get_node_list.return_value = [node1, node2] def fake_inst_by_uuid(ctx, uuid, expected_attrs=None): return inst_dict.get(uuid) mock_get_by_uuid.side_effect = fake_inst_by_uuid self.driver._refresh_cache() # Since one instance was on a node with no resource class set, # only the other one should be updated and saved. 
        self.assertEqual(1, mock_save.call_count)
        expected_specs = {}
        self.assertEqual(expected_specs, inst1.flavor.extra_specs)
        expected_specs = {"resources:CUSTOM_SECOND": "1"}
        self.assertEqual(expected_specs, inst2.flavor.extra_specs)
        # Verify that the LOG.warning was called correctly
        self.assertEqual(1, mock_warning.call_count)
        self.assertIn("does not have its resource_class set.",
                      mock_warning.call_args[0][0])
        self.assertEqual({"node": node1.uuid},
                         mock_warning.call_args[0][1])

    @mock.patch.object(ironic_driver.IronicDriver, '_get_node_list')
    @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type')
    @mock.patch.object(objects.InstanceList, 'get_uuids_by_host')
    @mock.patch.object(objects.Instance, 'get_by_uuid')
    @mock.patch.object(objects.Instance, 'save')
    def test_pike_flavor_migration_refresh_called_again(
            self, mock_save, mock_get_by_uuid, mock_get_uuids_by_host,
            mock_svc_by_hv, mock_get_node_list):
        node1_uuid = uuidutils.generate_uuid()
        node2_uuid = uuidutils.generate_uuid()
        hostname = "ironic-compute"
        fake_flavor1 = objects.Flavor()
        fake_flavor1.extra_specs = {}
        fake_flavor2 = objects.Flavor()
        fake_flavor2.extra_specs = {}
        inst1 = fake_instance.fake_instance_obj(self.ctx,
                                                node=node1_uuid,
                                                host=hostname,
                                                flavor=fake_flavor1)
        inst2 = fake_instance.fake_instance_obj(self.ctx,
                                                node=node2_uuid,
                                                host=hostname,
                                                flavor=fake_flavor2)
        node1 = _get_cached_node(
            uuid=node1_uuid, instance_uuid=inst1.uuid, instance_type_id=1,
            resource_class="first", network_interface="flat")
        node2 = _get_cached_node(
            uuid=node2_uuid, instance_uuid=inst2.uuid, instance_type_id=2,
            resource_class="second", network_interface="flat")
        inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2}
        mock_get_uuids_by_host.return_value = [inst1.uuid, inst2.uuid]
        mock_svc_by_hv.return_value = []
        self.driver.node_cache = {}
        mock_get_node_list.return_value = [node1, node2]

        def fake_inst_by_uuid(ctx, uuid, expected_attrs=None):
            return inst_dict.get(uuid)

        mock_get_by_uuid.side_effect = fake_inst_by_uuid

        self.driver._refresh_cache()
        self.assertEqual(2, mock_get_by_uuid.call_count)
        # Refresh the cache again. The mock for getting an instance by uuid
        # should not be called again.
mock_get_by_uuid.reset_mock() self.driver._refresh_cache() mock_get_by_uuid.assert_not_called() @mock.patch.object(ironic_driver.IronicDriver, '_get_node_list') @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type') @mock.patch.object(objects.InstanceList, 'get_uuids_by_host') @mock.patch.object(objects.Instance, 'get_by_uuid') @mock.patch.object(objects.Instance, 'save') def test_pike_flavor_migration_no_node_change(self, mock_save, mock_get_by_uuid, mock_get_uuids_by_host, mock_svc_by_hv, mock_get_node_list): node1_uuid = uuidutils.generate_uuid() node2_uuid = uuidutils.generate_uuid() hostname = "ironic-compute" fake_flavor1 = objects.Flavor() fake_flavor1.extra_specs = {"resources:CUSTOM_FIRST": "1"} fake_flavor2 = objects.Flavor() fake_flavor2.extra_specs = {"resources:CUSTOM_SECOND": "1"} inst1 = fake_instance.fake_instance_obj(self.ctx, node=node1_uuid, host=hostname, flavor=fake_flavor1) inst2 = fake_instance.fake_instance_obj(self.ctx, node=node2_uuid, host=hostname, flavor=fake_flavor2) node1 = _get_cached_node( uuid=node1_uuid, instance_uuid=inst1.uuid, instance_type_id=1, resource_class="first", network_interface="flat") node2 = _get_cached_node( uuid=node2_uuid, instance_uuid=inst2.uuid, instance_type_id=2, resource_class="second", network_interface="flat") inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2} mock_get_uuids_by_host.return_value = [inst1.uuid, inst2.uuid] self.driver.node_cache = {node1_uuid: node1, node2_uuid: node2} self.driver._migrated_instance_uuids = set([inst1.uuid, inst2.uuid]) mock_get_node_list.return_value = [node1, node2] mock_svc_by_hv.return_value = [] def fake_inst_by_uuid(ctx, uuid, expected_attrs=None): return inst_dict.get(uuid) mock_get_by_uuid.side_effect = fake_inst_by_uuid self.driver._refresh_cache() # Since the nodes did not change in the call to _refresh_cache(), and # their instance_uuids were in the cache, none of the mocks in the # migration script should have been called. 
self.assertFalse(mock_get_by_uuid.called) self.assertFalse(mock_save.called) @mock.patch.object(ironic_driver.IronicDriver, '_get_node_list') @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type') @mock.patch.object(objects.InstanceList, 'get_uuids_by_host') @mock.patch.object(objects.Instance, 'get_by_uuid') @mock.patch.object(objects.Instance, 'save') def test_pike_flavor_migration_just_instance_change(self, mock_save, mock_get_by_uuid, mock_get_uuids_by_host, mock_svc_by_hv, mock_get_node_list): node1_uuid = uuidutils.generate_uuid() node2_uuid = uuidutils.generate_uuid() node3_uuid = uuidutils.generate_uuid() hostname = "ironic-compute" fake_flavor1 = objects.Flavor() fake_flavor1.extra_specs = {} fake_flavor2 = objects.Flavor() fake_flavor2.extra_specs = {} fake_flavor3 = objects.Flavor() fake_flavor3.extra_specs = {} inst1 = fake_instance.fake_instance_obj(self.ctx, node=node1_uuid, host=hostname, flavor=fake_flavor1) inst2 = fake_instance.fake_instance_obj(self.ctx, node=node2_uuid, host=hostname, flavor=fake_flavor2) inst3 = fake_instance.fake_instance_obj(self.ctx, node=node3_uuid, host=hostname, flavor=fake_flavor3) node1 = _get_cached_node( uuid=node1_uuid, instance_uuid=inst1.uuid, instance_type_id=1, resource_class="first", network_interface="flat") node2 = _get_cached_node( uuid=node2_uuid, instance_uuid=inst2.uuid, instance_type_id=2, resource_class="second", network_interface="flat") inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2, inst3.uuid: inst3} mock_get_uuids_by_host.return_value = [inst1.uuid, inst2.uuid] self.driver.node_cache = {node1_uuid: node1, node2_uuid: node2} mock_get_node_list.return_value = [node1, node2] mock_svc_by_hv.return_value = [] def fake_inst_by_uuid(ctx, uuid, expected_attrs=None): return inst_dict.get(uuid) mock_get_by_uuid.side_effect = fake_inst_by_uuid self.driver._refresh_cache() # Since this is a fresh driver, neither will be in the migration cache, # so the migration mocks should have been called. self.assertTrue(mock_get_by_uuid.called) self.assertTrue(mock_save.called) # Now call _refresh_cache() again. Since neither the nodes nor their # instances change, none of the mocks in the migration script should # have been called. mock_get_by_uuid.reset_mock() mock_save.reset_mock() self.driver._refresh_cache() self.assertFalse(mock_get_by_uuid.called) self.assertFalse(mock_save.called) # Now change the node on node2 to inst3 node2.instance_uuid = inst3.uuid mock_get_uuids_by_host.return_value = [inst1.uuid, inst3.uuid] # Call _refresh_cache() again. Since the instance on node2 changed, the # migration mocks should have been called. 
mock_get_by_uuid.reset_mock() mock_save.reset_mock() self.driver._refresh_cache() self.assertTrue(mock_get_by_uuid.called) self.assertTrue(mock_save.called) @mock.patch.object(nova_utils, 'normalize_rc_name') @mock.patch.object(ironic_driver.IronicDriver, '_node_from_cache') def test_pike_flavor_migration_empty_node(self, mock_node_from_cache, mock_normalize): mock_node_from_cache.return_value = None self.driver._pike_flavor_migration([uuids.node]) mock_normalize.assert_not_called() @mock.patch.object(nova_utils, 'normalize_rc_name') @mock.patch.object(ironic_driver.IronicDriver, '_node_from_cache') def test_pike_flavor_migration_already_migrated(self, mock_node_from_cache, mock_normalize): node1 = _get_cached_node( uuid=uuids.node1, instance_uuid=uuids.instance, instance_type_id=1, resource_class="first", network_interface="flat") mock_node_from_cache.return_value = node1 self.driver._migrated_instance_uuids = set([uuids.instance]) self.driver._pike_flavor_migration([uuids.node1]) mock_normalize.assert_not_called() @mock.patch.object(loopingcall, 'FixedIntervalLoopingCall') @mock.patch.object(FAKE_CLIENT.node, 'set_provision_state') def test_rescue(self, mock_sps, mock_looping): node = ironic_utils.get_test_node() fake_looping_call = FakeLoopingCall() mock_looping.return_value = fake_looping_call instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) self.driver.rescue(self.ctx, instance, None, None, 'xyz', None) mock_sps.assert_called_once_with(node.uuid, 'rescue', rescue_password='xyz') @mock.patch.object(loopingcall, 'FixedIntervalLoopingCall') @mock.patch.object(FAKE_CLIENT.node, 'set_provision_state') def test_rescue_provision_state_fail(self, mock_sps, mock_looping): node = ironic_utils.get_test_node() fake_looping_call = FakeLoopingCall() mock_looping.return_value = fake_looping_call mock_sps.side_effect = ironic_exception.BadRequest() instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) self.assertRaises(exception.InstanceRescueFailure, self.driver.rescue, self.ctx, instance, None, None, 'xyz', None) @mock.patch.object(ironic_driver.IronicDriver, '_validate_instance_and_node') @mock.patch.object(FAKE_CLIENT.node, 'set_provision_state') def test_rescue_instance_not_found(self, mock_sps, fake_validate): node = ironic_utils.get_test_node(driver='fake') instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) fake_validate.side_effect = exception.InstanceNotFound( instance_id='fake') self.assertRaises(exception.InstanceRescueFailure, self.driver.rescue, self.ctx, instance, None, None, 'xyz', None) @mock.patch.object(ironic_driver.IronicDriver, '_validate_instance_and_node') @mock.patch.object(FAKE_CLIENT.node, 'set_provision_state') def test_rescue_rescue_fail(self, mock_sps, fake_validate): node = ironic_utils.get_test_node( provision_state=ironic_states.RESCUEFAIL, last_error='rescue failed') fake_validate.return_value = node instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) self.assertRaises(exception.InstanceRescueFailure, self.driver.rescue, self.ctx, instance, None, None, 'xyz', None) @mock.patch.object(loopingcall, 'FixedIntervalLoopingCall') @mock.patch.object(FAKE_CLIENT.node, 'set_provision_state') def test_unrescue(self, mock_sps, mock_looping): node = ironic_utils.get_test_node() fake_looping_call = FakeLoopingCall() mock_looping.return_value = fake_looping_call instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) self.driver.unrescue(instance, None) mock_sps.assert_called_once_with(node.uuid, 
'unrescue') @mock.patch.object(loopingcall, 'FixedIntervalLoopingCall') @mock.patch.object(FAKE_CLIENT.node, 'set_provision_state') def test_unrescue_provision_state_fail(self, mock_sps, mock_looping): node = ironic_utils.get_test_node() fake_looping_call = FakeLoopingCall() mock_looping.return_value = fake_looping_call mock_sps.side_effect = ironic_exception.BadRequest() instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) self.assertRaises(exception.InstanceUnRescueFailure, self.driver.unrescue, instance, None) @mock.patch.object(ironic_driver.IronicDriver, '_validate_instance_and_node') @mock.patch.object(FAKE_CLIENT.node, 'set_provision_state') def test_unrescue_instance_not_found(self, mock_sps, fake_validate): node = ironic_utils.get_test_node(driver='fake') instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) fake_validate.side_effect = exception.InstanceNotFound( instance_id='fake') self.assertRaises(exception.InstanceUnRescueFailure, self.driver.unrescue, instance, None) @mock.patch.object(ironic_driver.IronicDriver, '_validate_instance_and_node') @mock.patch.object(FAKE_CLIENT.node, 'set_provision_state') def test_unrescue_unrescue_fail(self, mock_sps, fake_validate): node = ironic_utils.get_test_node( provision_state=ironic_states.UNRESCUEFAIL, last_error='unrescue failed') fake_validate.return_value = node instance = fake_instance.fake_instance_obj(self.ctx, node=node.uuid) self.assertRaises(exception.InstanceUnRescueFailure, self.driver.unrescue, instance, None) def test__can_send_version(self): self.assertIsNone( self.driver._can_send_version( min_version='%d.%d' % cw.IRONIC_API_VERSION)) def test__can_send_version_too_new(self): self.assertRaises(exception.IronicAPIVersionNotAvailable, self.driver._can_send_version, min_version='%d.%d' % (cw.IRONIC_API_VERSION[0], cw.IRONIC_API_VERSION[1] + 1)) def test__can_send_version_too_old(self): self.assertRaises( exception.IronicAPIVersionNotAvailable, self.driver._can_send_version, max_version='%d.%d' % (cw.PRIOR_IRONIC_API_VERSION[0], cw.PRIOR_IRONIC_API_VERSION[1] - 1)) @mock.patch.object(cw.IronicClientWrapper, 'current_api_version', autospec=True) @mock.patch.object(cw.IronicClientWrapper, 'is_api_version_negotiated', autospec=True) def test__can_send_version_not_negotiated(self, mock_is_negotiated, mock_api_version): mock_is_negotiated.return_value = False self.assertIsNone(self.driver._can_send_version()) self.assertFalse(mock_api_version.called) @mock.patch.object(instance_metadata, 'InstanceMetadata') @mock.patch.object(configdrive, 'ConfigDriveBuilder') class IronicDriverGenerateConfigDriveTestCase(test.NoDBTestCase): @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type') @mock.patch.object(cw, 'IronicClientWrapper', lambda *_: FAKE_CLIENT_WRAPPER) def setUp(self, mock_services): super(IronicDriverGenerateConfigDriveTestCase, self).setUp() self.driver = ironic_driver.IronicDriver(None) self.driver.virtapi = fake.FakeVirtAPI() self.ctx = nova_context.get_admin_context() node_id = uuidutils.generate_uuid() self.node = _get_cached_node(driver='fake', id=node_id) self.instance = fake_instance.fake_instance_obj(self.ctx, node=node_id) self.network_info = utils.get_test_network_info() self.mock_conn = self.useFixture( fixtures.MockPatchObject(self.driver, '_ironic_connection')).mock def test_generate_configdrive(self, mock_cd_builder, mock_instance_meta): mock_instance_meta.return_value = 'fake-instance' mock_make_drive = mock.MagicMock(make_drive=lambda *_: None) 
mock_cd_builder.return_value.__enter__.return_value = mock_make_drive network_metadata_mock = mock.Mock() self.driver._get_network_metadata = network_metadata_mock self.driver._generate_configdrive(None, self.instance, self.node, self.network_info) mock_cd_builder.assert_called_once_with(instance_md='fake-instance') mock_instance_meta.assert_called_once_with( self.instance, content=None, extra_md={}, network_info=self.network_info, network_metadata=network_metadata_mock.return_value, request_context=None) def test_generate_configdrive_fail(self, mock_cd_builder, mock_instance_meta): mock_cd_builder.side_effect = exception.ConfigDriveMountFailed( operation='foo', error='error') mock_instance_meta.return_value = 'fake-instance' mock_make_drive = mock.MagicMock(make_drive=lambda *_: None) mock_cd_builder.return_value.__enter__.return_value = mock_make_drive network_metadata_mock = mock.Mock() self.driver._get_network_metadata = network_metadata_mock self.assertRaises(exception.ConfigDriveMountFailed, self.driver._generate_configdrive, None, self.instance, self.node, self.network_info) mock_cd_builder.assert_called_once_with(instance_md='fake-instance') mock_instance_meta.assert_called_once_with( self.instance, content=None, extra_md={}, network_info=self.network_info, network_metadata=network_metadata_mock.return_value, request_context=None) @mock.patch.object(FAKE_CLIENT.node, 'list_ports') @mock.patch.object(FAKE_CLIENT.portgroup, 'list') def _test_generate_network_metadata(self, mock_portgroups, mock_ports, address=None, vif_internal_info=True): internal_info = ({'tenant_vif_port_id': utils.FAKE_VIF_UUID} if vif_internal_info else {}) extra = ({'vif_port_id': utils.FAKE_VIF_UUID} if not vif_internal_info else {}) portgroup = ironic_utils.get_test_portgroup( node_uuid=self.node.uuid, address=address, extra=extra, internal_info=internal_info, properties={'bond_miimon': 100, 'xmit_hash_policy': 'layer3+4'} ) port1 = ironic_utils.get_test_port(uuid=uuidutils.generate_uuid(), node_uuid=self.node.uuid, address='00:00:00:00:00:01', portgroup_uuid=portgroup.uuid) port2 = ironic_utils.get_test_port(uuid=uuidutils.generate_uuid(), node_uuid=self.node.uuid, address='00:00:00:00:00:02', portgroup_uuid=portgroup.uuid) mock_ports.return_value = [port1, port2] mock_portgroups.return_value = [portgroup] metadata = self.driver._get_network_metadata(self.node, self.network_info) pg_vif = metadata['links'][0] self.assertEqual('bond', pg_vif['type']) self.assertEqual('active-backup', pg_vif['bond_mode']) self.assertEqual(address if address else utils.FAKE_VIF_MAC, pg_vif['ethernet_mac_address']) self.assertEqual('layer3+4', pg_vif['bond_xmit_hash_policy']) self.assertEqual(100, pg_vif['bond_miimon']) self.assertEqual([port1.uuid, port2.uuid], pg_vif['bond_links']) self.assertEqual([{'id': port1.uuid, 'type': 'phy', 'ethernet_mac_address': port1.address}, {'id': port2.uuid, 'type': 'phy', 'ethernet_mac_address': port2.address}], metadata['links'][1:]) # assert there are no duplicate links link_ids = [link['id'] for link in metadata['links']] self.assertEqual(len(set(link_ids)), len(link_ids), 'There are duplicate link IDs: %s' % link_ids) def test_generate_network_metadata_with_pg_address(self, mock_cd_builder, mock_instance_meta): self._test_generate_network_metadata(address='00:00:00:00:00:00') def test_generate_network_metadata_no_pg_address(self, mock_cd_builder, mock_instance_meta): self._test_generate_network_metadata() def test_generate_network_metadata_vif_in_extra(self, mock_cd_builder, 
mock_instance_meta): self._test_generate_network_metadata(vif_internal_info=False) @mock.patch.object(FAKE_CLIENT.node, 'list_ports') @mock.patch.object(FAKE_CLIENT.portgroup, 'list') def test_generate_network_metadata_ports_only(self, mock_portgroups, mock_ports, mock_cd_builder, mock_instance_meta): address = self.network_info[0]['address'] port = ironic_utils.get_test_port( node_uuid=self.node.uuid, address=address, internal_info={'tenant_vif_port_id': utils.FAKE_VIF_UUID}) mock_ports.return_value = [port] mock_portgroups.return_value = [] metadata = self.driver._get_network_metadata(self.node, self.network_info) self.assertEqual(port.address, metadata['links'][0]['ethernet_mac_address']) self.assertEqual('phy', metadata['links'][0]['type']) class HashRingTestCase(test.NoDBTestCase): @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type') @mock.patch.object(servicegroup, 'API', autospec=True) def setUp(self, mock_sg, mock_services): super(HashRingTestCase, self).setUp() self.driver = ironic_driver.IronicDriver(None) self.driver.virtapi = fake.FakeVirtAPI() self.ctx = nova_context.get_admin_context() self.mock_is_up = ( self.driver.servicegroup_api.service_is_up) @mock.patch.object(ironic_driver.IronicDriver, '_refresh_hash_ring') def test_hash_ring_refreshed_on_init(self, mock_hr): d = ironic_driver.IronicDriver(None) self.assertFalse(mock_hr.called) d.init_host('foo') mock_hr.assert_called_once_with(mock.ANY) @mock.patch.object(hash_ring, 'HashRing') @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type') def _test__refresh_hash_ring(self, services, expected_hosts, mock_services, mock_hash_ring, uncalled=None): uncalled = uncalled or [] services = [_make_compute_service(host) for host in services] is_up_calls = [mock.call(svc) for svc in services if svc.host not in uncalled] self.flags(host='host1') mock_services.return_value = services mock_hash_ring.return_value = SENTINEL self.driver._refresh_hash_ring(self.ctx) mock_services.assert_called_once_with( mock.ANY, self.driver._get_hypervisor_type()) mock_hash_ring.assert_called_once_with(expected_hosts, partitions=32) self.assertEqual(SENTINEL, self.driver.hash_ring) self.mock_is_up.assert_has_calls(is_up_calls) def test__refresh_hash_ring_same_host_different_case(self): # Test that we treat Host1 and host1 as the same host # CONF.host is set to 'host1' in __test_refresh_hash_ring services = ['Host1'] expected_hosts = {'host1'} self.mock_is_up.return_value = True self._test__refresh_hash_ring(services, expected_hosts) def test__refresh_hash_ring_one_compute(self): services = ['host1'] expected_hosts = {'host1'} self.mock_is_up.return_value = True self._test__refresh_hash_ring(services, expected_hosts) @mock.patch('nova.virt.ironic.driver.LOG.debug') def test__refresh_hash_ring_many_computes(self, mock_log_debug): services = ['host1', 'host2', 'host3'] expected_hosts = {'host1', 'host2', 'host3'} self.mock_is_up.return_value = True self._test__refresh_hash_ring(services, expected_hosts) expected_msg = 'Hash ring members are %s' mock_log_debug.assert_called_once_with(expected_msg, set(services)) def test__refresh_hash_ring_one_compute_new_compute(self): services = [] expected_hosts = {'host1'} self.mock_is_up.return_value = True self._test__refresh_hash_ring(services, expected_hosts) def test__refresh_hash_ring_many_computes_new_compute(self): services = ['host2', 'host3'] expected_hosts = {'host1', 'host2', 'host3'} self.mock_is_up.return_value = True self._test__refresh_hash_ring(services, expected_hosts) 
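    # NOTE(editor): illustrative sketch only, not part of the original test
    # module. The behaviour exercised here is, roughly, that
    # _refresh_hash_ring() collects the hosts of all live ironic compute
    # services and builds a consistent hash ring from them:
    #
    #   ring = hash_ring.HashRing({'host1', 'host2'}, partitions=32)
    #   owners = ring.get_nodes(node_uuid.encode('utf-8'))  # set of hosts
    #
    # These tests replace HashRing itself with SENTINEL and only assert
    # which host set and partition count the driver passes in; get_nodes()
    # is mocked separately in NodeCacheTestCase below to drive the
    # node-to-host mapping.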
def test__refresh_hash_ring_some_computes_down(self): services = ['host1', 'host2', 'host3', 'host4'] expected_hosts = {'host1', 'host2', 'host4'} self.mock_is_up.side_effect = [True, True, False, True] self._test__refresh_hash_ring(services, expected_hosts) @mock.patch.object(ironic_driver.IronicDriver, '_can_send_version') def test__refresh_hash_ring_peer_list(self, mock_can_send): services = ['host1', 'host2', 'host3'] expected_hosts = {'host1', 'host2'} self.mock_is_up.return_value = True self.flags(partition_key='not-none', group='ironic') self.flags(peer_list=['host1', 'host2'], group='ironic') self._test__refresh_hash_ring(services, expected_hosts, uncalled=['host3']) mock_can_send.assert_called_once_with(min_version='1.46') @mock.patch.object(ironic_driver.IronicDriver, '_can_send_version') def test__refresh_hash_ring_peer_list_old_api(self, mock_can_send): mock_can_send.side_effect = ( exception.IronicAPIVersionNotAvailable(version='1.46')) services = ['host1', 'host2', 'host3'] expected_hosts = {'host1', 'host2', 'host3'} self.mock_is_up.return_value = True self.flags(partition_key='not-none', group='ironic') self.flags(peer_list=['host1', 'host2'], group='ironic') self._test__refresh_hash_ring(services, expected_hosts, uncalled=['host3']) mock_can_send.assert_called_once_with(min_version='1.46') @mock.patch.object(ironic_driver, 'LOG', autospec=True) def test__check_peer_list(self, mock_log): self.flags(host='host1') self.flags(partition_key='not-none', group='ironic') self.flags(peer_list=['host1', 'host2'], group='ironic') ironic_driver._check_peer_list() # happy path, nothing happens self.assertFalse(mock_log.error.called) self.assertFalse(mock_log.warning.called) @mock.patch.object(ironic_driver, 'LOG', autospec=True) def test__check_peer_list_empty(self, mock_log): self.flags(host='host1') self.flags(partition_key='not-none', group='ironic') self.flags(peer_list=[], group='ironic') self.assertRaises(exception.InvalidPeerList, ironic_driver._check_peer_list) self.assertTrue(mock_log.error.called) @mock.patch.object(ironic_driver, 'LOG', autospec=True) def test__check_peer_list_missing_self(self, mock_log): self.flags(host='host1') self.flags(partition_key='not-none', group='ironic') self.flags(peer_list=['host2'], group='ironic') self.assertRaises(exception.InvalidPeerList, ironic_driver._check_peer_list) self.assertTrue(mock_log.error.called) @mock.patch.object(ironic_driver, 'LOG', autospec=True) def test__check_peer_list_only_self(self, mock_log): self.flags(host='host1') self.flags(partition_key='not-none', group='ironic') self.flags(peer_list=['host1'], group='ironic') ironic_driver._check_peer_list() self.assertTrue(mock_log.warning.called) class NodeCacheTestCase(test.NoDBTestCase): @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type') def setUp(self, mock_services): super(NodeCacheTestCase, self).setUp() self.driver = ironic_driver.IronicDriver(None) self.driver.init_host('foo') self.driver.virtapi = fake.FakeVirtAPI() self.ctx = nova_context.get_admin_context() self.host = 'host1' self.flags(host=self.host) @mock.patch.object(ironic_driver.IronicDriver, '_can_send_version') @mock.patch.object(ironic_driver.IronicDriver, '_refresh_hash_ring') @mock.patch.object(hash_ring.HashRing, 'get_nodes') @mock.patch.object(ironic_driver.IronicDriver, '_get_node_list') @mock.patch.object(objects.InstanceList, 'get_uuids_by_host') def _test__refresh_cache(self, instances, nodes, hosts, mock_instances, mock_nodes, mock_hosts, mock_hash_ring, mock_can_send, 
partition_key=None, can_send_146=True): mock_instances.return_value = instances mock_nodes.return_value = nodes mock_hosts.side_effect = hosts if not can_send_146: mock_can_send.side_effect = ( exception.IronicAPIVersionNotAvailable(version='1.46')) self.driver.node_cache = {} self.driver.node_cache_time = None kwargs = {} if partition_key is not None and can_send_146: kwargs['conductor_group'] = partition_key self.driver._refresh_cache() mock_hash_ring.assert_called_once_with(mock.ANY) mock_instances.assert_called_once_with(mock.ANY, self.host) mock_nodes.assert_called_once_with(fields=ironic_driver._NODE_FIELDS, **kwargs) self.assertIsNotNone(self.driver.node_cache_time) def test__refresh_cache_same_host_different_case(self): # Test that we treat Host1 and host1 as the same host self.host = 'Host1' self.flags(host=self.host) instances = [] nodes = [ _get_cached_node( uuid=uuidutils.generate_uuid(), instance_uuid=None), _get_cached_node( uuid=uuidutils.generate_uuid(), instance_uuid=None), _get_cached_node( uuid=uuidutils.generate_uuid(), instance_uuid=None), ] hosts = ['host1', 'host1', 'host1'] self._test__refresh_cache(instances, nodes, hosts) expected_cache = {n.uuid: n for n in nodes} self.assertEqual(expected_cache, self.driver.node_cache) def test__refresh_cache(self): # normal operation, one compute service instances = [] nodes = [ _get_cached_node( uuid=uuidutils.generate_uuid(), instance_uuid=None), _get_cached_node( uuid=uuidutils.generate_uuid(), instance_uuid=None), _get_cached_node( uuid=uuidutils.generate_uuid(), instance_uuid=None), ] hosts = [self.host, self.host, self.host] self._test__refresh_cache(instances, nodes, hosts) expected_cache = {n.uuid: n for n in nodes} self.assertEqual(expected_cache, self.driver.node_cache) def test__refresh_cache_partition_key(self): # normal operation, one compute service instances = [] nodes = [ _get_cached_node( uuid=uuidutils.generate_uuid(), instance_uuid=None), _get_cached_node( uuid=uuidutils.generate_uuid(), instance_uuid=None), _get_cached_node( uuid=uuidutils.generate_uuid(), instance_uuid=None), ] hosts = [self.host, self.host, self.host] partition_key = 'some-group' self.flags(partition_key=partition_key, group='ironic') self._test__refresh_cache(instances, nodes, hosts, partition_key=partition_key) expected_cache = {n.uuid: n for n in nodes} self.assertEqual(expected_cache, self.driver.node_cache) def test__refresh_cache_partition_key_old_api(self): # normal operation, one compute service instances = [] nodes = [ _get_cached_node( uuid=uuidutils.generate_uuid(), instance_uuid=None), _get_cached_node( uuid=uuidutils.generate_uuid(), instance_uuid=None), _get_cached_node( uuid=uuidutils.generate_uuid(), instance_uuid=None), ] hosts = [self.host, self.host, self.host] partition_key = 'some-group' self.flags(partition_key=partition_key, group='ironic') self._test__refresh_cache(instances, nodes, hosts, partition_key=partition_key, can_send_146=False) expected_cache = {n.uuid: n for n in nodes} self.assertEqual(expected_cache, self.driver.node_cache) def test__refresh_cache_multiple_services(self): # normal operation, many compute services instances = [] nodes = [ _get_cached_node( uuid=uuidutils.generate_uuid(), instance_uuid=None), _get_cached_node( uuid=uuidutils.generate_uuid(), instance_uuid=None), _get_cached_node( uuid=uuidutils.generate_uuid(), instance_uuid=None), ] hosts = [self.host, 'host2', 'host3'] self._test__refresh_cache(instances, nodes, hosts) expected_cache = {n.uuid: n for n in nodes[0:1]} 
self.assertEqual(expected_cache, self.driver.node_cache) def test__refresh_cache_our_instances(self): # we should manage a node we have an instance for, even if it doesn't # map to us instances = [uuidutils.generate_uuid()] nodes = [ _get_cached_node( uuid=uuidutils.generate_uuid(), instance_uuid=instances[0]), _get_cached_node( uuid=uuidutils.generate_uuid(), instance_uuid=None), _get_cached_node( uuid=uuidutils.generate_uuid(), instance_uuid=None), ] # only two calls, having the instance will short-circuit the first node hosts = [{self.host}, {self.host}] self._test__refresh_cache(instances, nodes, hosts) expected_cache = {n.uuid: n for n in nodes} self.assertEqual(expected_cache, self.driver.node_cache) def test__refresh_cache_their_instances(self): # we should never manage a node that another compute service has # an instance for, even if it maps to us instances = [] nodes = [ _get_cached_node( uuid=uuidutils.generate_uuid(), instance_uuid=uuidutils.generate_uuid()), _get_cached_node( uuid=uuidutils.generate_uuid(), instance_uuid=None), _get_cached_node( uuid=uuidutils.generate_uuid(), instance_uuid=None), ] hosts = [self.host, self.host] # only two calls, having the instance will short-circuit the first node self._test__refresh_cache(instances, nodes, hosts) expected_cache = {n.uuid: n for n in nodes[1:]} self.assertEqual(expected_cache, self.driver.node_cache) @mock.patch.object(FAKE_CLIENT, 'node') class IronicDriverConsoleTestCase(test.NoDBTestCase): @mock.patch.object(cw, 'IronicClientWrapper', lambda *_: FAKE_CLIENT_WRAPPER) @mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type') def setUp(self, mock_services): super(IronicDriverConsoleTestCase, self).setUp() self.driver = ironic_driver.IronicDriver(fake.FakeVirtAPI()) self.mock_conn = self.useFixture( fixtures.MockPatchObject(self.driver, '_ironic_connection')).mock self.ctx = nova_context.get_admin_context() node_id = uuidutils.generate_uuid() self.node = _get_cached_node(driver='fake', id=node_id) self.instance = fake_instance.fake_instance_obj(self.ctx, node=node_id) # mock retries configs to avoid sleeps and make tests run quicker CONF.set_default('api_max_retries', default=1, group='ironic') CONF.set_default('api_retry_interval', default=0, group='ironic') self.stub_out('nova.virt.ironic.driver.IronicDriver.' 
'_validate_instance_and_node', lambda _, inst: self.node) def _create_console_data(self, enabled=True, console_type='socat', url='tcp://127.0.0.1:10000'): return { 'console_enabled': enabled, 'console_info': { 'type': console_type, 'url': url } } def test__get_node_console_with_reset_success(self, mock_node): temp_data = {'target_mode': True} def _fake_get_console(node_uuid): return self._create_console_data(enabled=temp_data['target_mode']) def _fake_set_console_mode(node_uuid, mode): # Set it up so that _fake_get_console() returns 'mode' temp_data['target_mode'] = mode mock_node.get_console.side_effect = _fake_get_console mock_node.set_console_mode.side_effect = _fake_set_console_mode expected = self._create_console_data()['console_info'] result = self.driver._get_node_console_with_reset(self.instance) self.assertGreater(mock_node.get_console.call_count, 1) self.assertEqual(2, mock_node.set_console_mode.call_count) self.assertEqual(self.node.uuid, result['node'].uuid) self.assertThat(result['console_info'], nova_matchers.DictMatches(expected)) @mock.patch.object(ironic_driver, 'LOG', autospec=True) def test__get_node_console_with_reset_console_disabled(self, mock_log, mock_node): def _fake_log_debug(msg, *args, **kwargs): regex = r'Console is disabled for instance .*' self.assertThat(msg, matchers.MatchesRegex(regex)) mock_node.get_console.return_value = \ self._create_console_data(enabled=False) mock_log.debug.side_effect = _fake_log_debug self.assertRaises(exception.ConsoleNotAvailable, self.driver._get_node_console_with_reset, self.instance) mock_node.get_console.assert_called_once_with(self.node.uuid) mock_node.set_console_mode.assert_not_called() self.assertTrue(mock_log.debug.called) @mock.patch.object(ironic_driver, 'LOG', autospec=True) def test__get_node_console_with_reset_set_mode_failed(self, mock_log, mock_node): def _fake_log_error(msg, *args, **kwargs): regex = r'Failed to set console mode .*' self.assertThat(msg, matchers.MatchesRegex(regex)) mock_node.get_console.return_value = self._create_console_data() mock_node.set_console_mode.side_effect = exception.NovaException() mock_log.error.side_effect = _fake_log_error self.assertRaises(exception.ConsoleNotAvailable, self.driver._get_node_console_with_reset, self.instance) mock_node.get_console.assert_called_once_with(self.node.uuid) self.assertEqual(2, mock_node.set_console_mode.call_count) self.assertTrue(mock_log.error.called) @mock.patch.object(ironic_driver, 'LOG', autospec=True) def test__get_node_console_with_reset_wait_failed(self, mock_log, mock_node): def _fake_get_console(node_uuid): if mock_node.set_console_mode.called: # After the call to set_console_mode(), then _wait_state() # will call _get_console() to check the result. 
raise exception.NovaException() else: return self._create_console_data() def _fake_log_error(msg, *args, **kwargs): regex = r'Failed to acquire console information for instance .*' self.assertThat(msg, matchers.MatchesRegex(regex)) mock_node.get_console.side_effect = _fake_get_console mock_log.error.side_effect = _fake_log_error self.assertRaises(exception.ConsoleNotAvailable, self.driver._get_node_console_with_reset, self.instance) self.assertGreater(mock_node.get_console.call_count, 1) self.assertEqual(2, mock_node.set_console_mode.call_count) self.assertTrue(mock_log.error.called) @mock.patch.object(ironic_driver, '_CONSOLE_STATE_CHECKING_INTERVAL', 0.05) @mock.patch.object(loopingcall, 'BackOffLoopingCall') @mock.patch.object(ironic_driver, 'LOG', autospec=True) def test__get_node_console_with_reset_wait_timeout(self, mock_log, mock_looping, mock_node): CONF.set_override('serial_console_state_timeout', 1, group='ironic') temp_data = {'target_mode': True} def _fake_get_console(node_uuid): return self._create_console_data(enabled=temp_data['target_mode']) def _fake_set_console_mode(node_uuid, mode): temp_data['target_mode'] = not mode def _fake_log_error(msg, *args, **kwargs): regex = r'Timeout while waiting for console mode to be set .*' self.assertThat(msg, matchers.MatchesRegex(regex)) mock_node.get_console.side_effect = _fake_get_console mock_node.set_console_mode.side_effect = _fake_set_console_mode mock_log.error.side_effect = _fake_log_error mock_timer = mock_looping.return_value mock_event = mock_timer.start.return_value mock_event.wait.side_effect = loopingcall.LoopingCallTimeOut self.assertRaises(exception.ConsoleNotAvailable, self.driver._get_node_console_with_reset, self.instance) self.assertEqual(mock_node.get_console.call_count, 1) self.assertEqual(2, mock_node.set_console_mode.call_count) self.assertTrue(mock_log.error.called) mock_timer.start.assert_called_with(starting_interval=0.05, timeout=1, jitter=0.5) def test_get_serial_console_socat(self, mock_node): temp_data = {'target_mode': True} def _fake_get_console(node_uuid): return self._create_console_data(enabled=temp_data['target_mode']) def _fake_set_console_mode(node_uuid, mode): temp_data['target_mode'] = mode mock_node.get_console.side_effect = _fake_get_console mock_node.set_console_mode.side_effect = _fake_set_console_mode result = self.driver.get_serial_console(self.ctx, self.instance) self.assertGreater(mock_node.get_console.call_count, 1) self.assertEqual(2, mock_node.set_console_mode.call_count) self.assertIsInstance(result, console_type.ConsoleSerial) self.assertEqual('127.0.0.1', result.host) self.assertEqual(10000, result.port) def test_get_serial_console_socat_disabled(self, mock_node): mock_node.get_console.return_value = \ self._create_console_data(enabled=False) self.assertRaises(exception.ConsoleTypeUnavailable, self.driver.get_serial_console, self.ctx, self.instance) mock_node.get_console.assert_called_once_with(self.node.uuid) mock_node.set_console_mode.assert_not_called() @mock.patch.object(ironic_driver, 'LOG', autospec=True) def test_get_serial_console_socat_invalid_url(self, mock_log, mock_node): temp_data = {'target_mode': True} def _fake_get_console(node_uuid): return self._create_console_data(enabled=temp_data['target_mode'], url='an invalid url') def _fake_set_console_mode(node_uuid, mode): temp_data['target_mode'] = mode def _fake_log_error(msg, *args, **kwargs): regex = r'Invalid Socat console URL .*' self.assertThat(msg, matchers.MatchesRegex(regex)) mock_node.get_console.side_effect = 
_fake_get_console mock_node.set_console_mode.side_effect = _fake_set_console_mode mock_log.error.side_effect = _fake_log_error self.assertRaises(exception.ConsoleTypeUnavailable, self.driver.get_serial_console, self.ctx, self.instance) self.assertGreater(mock_node.get_console.call_count, 1) self.assertEqual(2, mock_node.set_console_mode.call_count) self.assertTrue(mock_log.error.called) @mock.patch.object(ironic_driver, 'LOG', autospec=True) def test_get_serial_console_socat_invalid_url_2(self, mock_log, mock_node): temp_data = {'target_mode': True} def _fake_get_console(node_uuid): return self._create_console_data(enabled=temp_data['target_mode'], url='http://abcxyz:1a1b') def _fake_set_console_mode(node_uuid, mode): temp_data['target_mode'] = mode def _fake_log_error(msg, *args, **kwargs): regex = r'Invalid Socat console URL .*' self.assertThat(msg, matchers.MatchesRegex(regex)) mock_node.get_console.side_effect = _fake_get_console mock_node.set_console_mode.side_effect = _fake_set_console_mode mock_log.error.side_effect = _fake_log_error self.assertRaises(exception.ConsoleTypeUnavailable, self.driver.get_serial_console, self.ctx, self.instance) self.assertGreater(mock_node.get_console.call_count, 1) self.assertEqual(2, mock_node.set_console_mode.call_count) self.assertTrue(mock_log.error.called) @mock.patch.object(ironic_driver, 'LOG', autospec=True) def test_get_serial_console_socat_unsupported_scheme(self, mock_log, mock_node): temp_data = {'target_mode': True} def _fake_get_console(node_uuid): return self._create_console_data(enabled=temp_data['target_mode'], url='ssl://127.0.0.1:10000') def _fake_set_console_mode(node_uuid, mode): temp_data['target_mode'] = mode def _fake_log_warning(msg, *args, **kwargs): regex = r'Socat serial console only supports \"tcp\".*' self.assertThat(msg, matchers.MatchesRegex(regex)) mock_node.get_console.side_effect = _fake_get_console mock_node.set_console_mode.side_effect = _fake_set_console_mode mock_log.warning.side_effect = _fake_log_warning self.assertRaises(exception.ConsoleTypeUnavailable, self.driver.get_serial_console, self.ctx, self.instance) self.assertGreater(mock_node.get_console.call_count, 1) self.assertEqual(2, mock_node.set_console_mode.call_count) self.assertTrue(mock_log.warning.called) def test_get_serial_console_socat_tcp6(self, mock_node): temp_data = {'target_mode': True} def _fake_get_console(node_uuid): return self._create_console_data(enabled=temp_data['target_mode'], url='tcp://[::1]:10000') def _fake_set_console_mode(node_uuid, mode): temp_data['target_mode'] = mode mock_node.get_console.side_effect = _fake_get_console mock_node.set_console_mode.side_effect = _fake_set_console_mode result = self.driver.get_serial_console(self.ctx, self.instance) self.assertGreater(mock_node.get_console.call_count, 1) self.assertEqual(2, mock_node.set_console_mode.call_count) self.assertIsInstance(result, console_type.ConsoleSerial) self.assertEqual('::1', result.host) self.assertEqual(10000, result.port) def test_get_serial_console_shellinabox(self, mock_node): temp_data = {'target_mode': True} def _fake_get_console(node_uuid): return self._create_console_data(enabled=temp_data['target_mode'], console_type='shellinabox') def _fake_set_console_mode(node_uuid, mode): temp_data['target_mode'] = mode mock_node.get_console.side_effect = _fake_get_console mock_node.set_console_mode.side_effect = _fake_set_console_mode self.assertRaises(exception.ConsoleTypeUnavailable, self.driver.get_serial_console, self.ctx, self.instance) 
self.assertGreater(mock_node.get_console.call_count, 1) self.assertEqual(2, mock_node.set_console_mode.call_count) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/ironic/test_patcher.py0000664000175000017500000002511600000000000023165 0ustar00zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import operator from oslo_config import cfg from nova import context as nova_context from nova import objects from nova import test from nova.tests.unit import fake_instance from nova.tests.unit.virt.ironic import utils as ironic_utils from nova.virt.ironic import patcher CONF = cfg.CONF class IronicDriverFieldsTestCase(test.NoDBTestCase): def setUp(self): super(IronicDriverFieldsTestCase, self).setUp() self.image_meta = ironic_utils.get_test_image_meta() self.flavor = ironic_utils.get_test_flavor() self.ctx = nova_context.get_admin_context() self.instance = fake_instance.fake_instance_obj(self.ctx) self.instance.flavor = self.flavor self.node = ironic_utils.get_test_node(driver='fake') # Generic expected patches self._expected_deploy_patch = [ {'path': '/instance_info/image_source', 'value': self.image_meta.id, 'op': 'add'}, {'path': '/instance_info/root_gb', 'value': str(self.instance.flavor.root_gb), 'op': 'add'}, {'path': '/instance_info/swap_mb', 'value': str(self.flavor['swap']), 'op': 'add'}, {'path': '/instance_info/display_name', 'value': self.instance['display_name'], 'op': 'add'}, {'path': '/instance_info/vcpus', 'value': str(self.instance.flavor.vcpus), 'op': 'add'}, {'path': '/instance_info/memory_mb', 'value': str(self.instance.flavor.memory_mb), 'op': 'add'}, {'path': '/instance_info/local_gb', 'value': str(self.node.properties.get('local_gb', 0)), 'op': 'add'}, {'path': '/instance_info/nova_host_id', 'value': u'fake-host', 'op': 'add'}, ] def assertPatchEqual(self, expected, observed): self.assertEqual( sorted(expected, key=operator.itemgetter('path', 'value')), sorted(observed, key=operator.itemgetter('path', 'value')) ) def test_create_generic(self): node = ironic_utils.get_test_node(driver='pxe_fake') patcher_obj = patcher.create(node) self.assertIsInstance(patcher_obj, patcher.GenericDriverFields) def test_generic_get_deploy_patch(self): node = ironic_utils.get_test_node(driver='fake') patch = patcher.create(node).get_deploy_patch( self.instance, self.image_meta, self.flavor) self.assertPatchEqual(self._expected_deploy_patch, patch) def test_generic_get_deploy_patch_capabilities(self): node = ironic_utils.get_test_node(driver='fake') self.flavor['extra_specs']['capabilities:boot_mode'] = 'bios' expected = [{'path': '/instance_info/capabilities', 'value': '{"boot_mode": "bios"}', 'op': 'add'}] expected += self._expected_deploy_patch patch = patcher.create(node).get_deploy_patch( self.instance, self.image_meta, self.flavor) self.assertPatchEqual(expected, patch) def test_generic_get_deploy_patch_capabilities_op(self): node = 
ironic_utils.get_test_node(driver='fake') self.flavor['extra_specs']['capabilities:boot_mode'] = ' bios' expected = [{'path': '/instance_info/capabilities', 'value': '{"boot_mode": " bios"}', 'op': 'add'}] expected += self._expected_deploy_patch patch = patcher.create(node).get_deploy_patch( self.instance, self.image_meta, self.flavor) self.assertPatchEqual(expected, patch) def test_generic_get_deploy_patch_capabilities_nested_key(self): node = ironic_utils.get_test_node(driver='fake') self.flavor['extra_specs']['capabilities:key1:key2'] = ' bios' expected = [{'path': '/instance_info/capabilities', 'value': '{"key1:key2": " bios"}', 'op': 'add'}] expected += self._expected_deploy_patch patch = patcher.create(node).get_deploy_patch( self.instance, self.image_meta, self.flavor) self.assertPatchEqual(expected, patch) def test_generic_get_deploy_patch_traits(self): node = ironic_utils.get_test_node(driver='fake') self.flavor['extra_specs']['trait:CUSTOM_FOO'] = 'required' expected = [{'path': '/instance_info/traits', 'value': ["CUSTOM_FOO"], 'op': 'add'}] expected += self._expected_deploy_patch patch = patcher.create(node).get_deploy_patch( self.instance, self.image_meta, self.flavor) self.assertPatchEqual(expected, patch) def test_generic_get_deploy_patch_traits_granular(self): node = ironic_utils.get_test_node(driver='fake') self.flavor['extra_specs']['trait1:CUSTOM_FOO'] = 'required' expected = [{'path': '/instance_info/traits', 'value': ["CUSTOM_FOO"], 'op': 'add'}] expected += self._expected_deploy_patch patch = patcher.create(node).get_deploy_patch( self.instance, self.image_meta, self.flavor) self.assertPatchEqual(expected, patch) def test_generic_get_deploy_patch_traits_ignores_not_required(self): node = ironic_utils.get_test_node(driver='fake') self.flavor['extra_specs']['trait:CUSTOM_FOO'] = 'invalid' expected = self._expected_deploy_patch patch = patcher.create(node).get_deploy_patch( self.instance, self.image_meta, self.flavor) self.assertPatchEqual(expected, patch) def test_generic_get_deploy_patch_image_traits_required(self): node = ironic_utils.get_test_node(driver='fake') self.image_meta.properties = objects.ImageMetaProps( traits_required=['CUSTOM_TRUSTED']) expected = [{'path': '/instance_info/traits', 'value': ["CUSTOM_TRUSTED"], 'op': 'add'}] expected += self._expected_deploy_patch patch = patcher.create(node).get_deploy_patch( self.instance, self.image_meta, self.flavor) self.assertPatchEqual(expected, patch) def test_generic_get_deploy_patch_image_flavor_traits_required(self): node = ironic_utils.get_test_node(driver='fake') self.flavor['extra_specs']['trait:CUSTOM_FOO'] = 'required' self.image_meta.properties = objects.ImageMetaProps( traits_required=['CUSTOM_TRUSTED']) expected = [{'path': '/instance_info/traits', 'value': ["CUSTOM_FOO", "CUSTOM_TRUSTED"], 'op': 'add'}] expected += self._expected_deploy_patch patch = patcher.create(node).get_deploy_patch( self.instance, self.image_meta, self.flavor) self.assertPatchEqual(expected, patch) def test_generic_get_deploy_patch_image_flavor_traits_none(self): node = ironic_utils.get_test_node(driver='fake') self.image_meta.properties = objects.ImageMetaProps() expected = self._expected_deploy_patch patch = patcher.create(node).get_deploy_patch( self.instance, self.image_meta, self.flavor) self.assertPatchEqual(expected, patch) def test_generic_get_deploy_patch_boot_from_volume_image_traits_required( self): node = ironic_utils.get_test_node(driver='fake') self.image_meta.properties = objects.ImageMetaProps( 
traits_required=['CUSTOM_TRUSTED']) expected_deploy_patch_volume = [patch for patch in self._expected_deploy_patch if patch['path'] != '/instance_info/image_source'] expected = [{'path': '/instance_info/traits', 'value': ["CUSTOM_TRUSTED"], 'op': 'add'}] expected += expected_deploy_patch_volume patch = patcher.create(node).get_deploy_patch( self.instance, self.image_meta, self.flavor, boot_from_volume=True) self.assertPatchEqual(expected, patch) def test_generic_get_deploy_patch_ephemeral(self): CONF.set_override('default_ephemeral_format', 'testfmt') node = ironic_utils.get_test_node(driver='fake') instance = fake_instance.fake_instance_obj( self.ctx, flavor=objects.Flavor(root_gb=1, vcpus=1, memory_mb=1, ephemeral_gb=10)) patch = patcher.create(node).get_deploy_patch( instance, self.image_meta, self.flavor) expected = [{'path': '/instance_info/ephemeral_gb', 'value': str(instance.flavor.ephemeral_gb), 'op': 'add'}, {'path': '/instance_info/ephemeral_format', 'value': 'testfmt', 'op': 'add'}] expected += self._expected_deploy_patch self.assertPatchEqual(expected, patch) def test_generic_get_deploy_patch_preserve_ephemeral(self): node = ironic_utils.get_test_node(driver='fake') for preserve in [True, False]: patch = patcher.create(node).get_deploy_patch( self.instance, self.image_meta, self.flavor, preserve_ephemeral=preserve) expected = [{'path': '/instance_info/preserve_ephemeral', 'value': str(preserve), 'op': 'add', }] expected += self._expected_deploy_patch self.assertPatchEqual(expected, patch) def test_generic_get_deploy_patch_boot_from_volume(self): node = ironic_utils.get_test_node(driver='fake') expected = [patch for patch in self._expected_deploy_patch if patch['path'] != '/instance_info/image_source'] patch = patcher.create(node).get_deploy_patch( self.instance, self.image_meta, self.flavor, boot_from_volume=True) self.assertPatchEqual(expected, patch) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/ironic/utils.py0000664000175000017500000002465500000000000021647 0ustar00zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
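# NOTE(editor): illustrative usage sketch, not part of the original module.
# The factory helpers below are consumed by the ironic driver tests roughly
# like this:
#
#   node = get_test_node(provision_state=ironic_states.ACTIVE)
#   port = get_test_port(node_uuid=node.uuid, address='52:54:00:00:00:01')
#   flavor = get_test_flavor(extra_specs={})
#
# Most helpers return throwaway attribute-only objects built with type(),
# mimicking ironicclient resources; get_test_flavor() and
# get_test_image_meta() return real nova objects (Flavor / ImageMeta)
# instead.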
from nova import objects from nova.virt.ironic import client_wrapper from nova.virt.ironic import ironic_states def get_test_validation(**kw): return type('interfaces', (object,), {'power': kw.get('power', {'result': True}), 'deploy': kw.get('deploy', {'result': True}), 'console': kw.get('console', True), 'rescue': kw.get('rescue', True), 'storage': kw.get('storage', {'result': True})})() def get_test_node(fields=None, **kw): # TODO(dustinc): Once the usages of id/uuid, maintenance/is_maintenance, # and portgroup/port_group are normalized, the duplicates can be removed _id = kw.get('id') or kw.get('uuid', 'eeeeeeee-dddd-cccc-bbbb-aaaaaaaaaaaa') _instance_id = kw.get('instance_id') or kw.get('instance_uuid') _is_maintenance = kw.get('is_maintenance') or kw.get('maintenance', False) node = {'uuid': _id, 'id': _id, 'chassis_uuid': kw.get('chassis_uuid'), 'power_state': kw.get('power_state', ironic_states.NOSTATE), 'target_power_state': kw.get('target_power_state', ironic_states.NOSTATE), 'provision_state': kw.get('provision_state', ironic_states.NOSTATE), 'target_provision_state': kw.get('target_provision_state', ironic_states.NOSTATE), 'last_error': kw.get('last_error'), 'instance_uuid': _instance_id, 'instance_id': _instance_id, 'instance_info': kw.get('instance_info'), 'driver': kw.get('driver', 'fake'), 'driver_info': kw.get('driver_info', {}), 'properties': kw.get('properties', {}), 'reservation': kw.get('reservation'), 'maintenance': _is_maintenance, 'is_maintenance': _is_maintenance, 'network_interface': kw.get('network_interface'), 'resource_class': kw.get('resource_class'), 'traits': kw.get('traits', []), 'extra': kw.get('extra', {}), 'updated_at': kw.get('created_at'), 'created_at': kw.get('updated_at')} if fields is not None: node = {key: value for key, value in node.items() if key in fields} return type('node', (object,), node)() def get_test_port(**kw): # TODO(dustinc): Once the usages of id/uuid, maintenance/is_maintenance, # and portgroup/port_group are normalized, the duplicates can be removed _id = kw.get('id') or kw.get('uuid', 'gggggggg-uuuu-qqqq-ffff-llllllllllll') _node_id = kw.get('node_uuid') or kw.get('node_id', get_test_node().id) _port_group_id = kw.get('port_group_id') or kw.get('portgroup_uuid') return type('port', (object,), {'uuid': _id, 'id': _id, 'node_uuid': _node_id, 'node_id': _node_id, 'address': kw.get('address', 'FF:FF:FF:FF:FF:FF'), 'extra': kw.get('extra', {}), 'internal_info': kw.get('internal_info', {}), 'portgroup_uuid': _port_group_id, 'port_group_id': _port_group_id, 'created_at': kw.get('created_at'), 'updated_at': kw.get('updated_at')})() def get_test_portgroup(**kw): # TODO(dustinc): Once the usages of id/uuid, maintenance/is_maintenance, # and portgroup/port_group are normalized, the duplicates can be removed _id = kw.get('id') or kw.get('uuid', 'deaffeed-1234-5678-9012-fedcbafedcba') _node_id = kw.get('node_id') or kw.get('node_uuid', get_test_node().id) return type('portgroup', (object,), {'uuid': _id, 'id': _id, 'node_uuid': _node_id, 'node_id': _node_id, 'address': kw.get('address', 'EE:EE:EE:EE:EE:EE'), 'extra': kw.get('extra', {}), 'internal_info': kw.get('internal_info', {}), 'properties': kw.get('properties', {}), 'mode': kw.get('mode', 'active-backup'), 'name': kw.get('name'), 'standalone_ports_supported': kw.get( 'standalone_ports_supported', True), 'created_at': kw.get('created_at'), 'updated_at': kw.get('updated_at')})() def get_test_vif(**kw): return { 'profile': kw.get('profile', {}), 'ovs_interfaceid': kw.get('ovs_interfaceid'), 
'preserve_on_delete': kw.get('preserve_on_delete', False), 'network': kw.get('network', {}), 'devname': kw.get('devname', 'tapaaaaaaaa-00'), 'vnic_type': kw.get('vnic_type', 'baremetal'), 'qbh_params': kw.get('qbh_params'), 'meta': kw.get('meta', {}), 'details': kw.get('details', {}), 'address': kw.get('address', 'FF:FF:FF:FF:FF:FF'), 'active': kw.get('active', True), 'type': kw.get('type', 'ironic'), 'id': kw.get('id', 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'), 'qbg_params': kw.get('qbg_params')} def get_test_volume_connector(**kw): # TODO(dustinc): Once the usages of id/uuid, maintenance/is_maintenance, # and portgroup/port_group are normalized, the duplicates can be removed _id = kw.get('id') or kw.get('uuid', 'hhhhhhhh-qqqq-uuuu-mmmm-bbbbbbbbbbbb') _node_id = kw.get('node_id') or kw.get('node_uuid', get_test_node().id) return type('volume_connector', (object,), {'uuid': _id, 'id': _id, 'node_uuid': _node_id, 'node_id': _node_id, 'type': kw.get('type', 'iqn'), 'connector_id': kw.get('connector_id', 'iqn.test'), 'extra': kw.get('extra', {}), 'created_at': kw.get('created_at'), 'updated_at': kw.get('updated_at')})() def get_test_volume_target(**kw): # TODO(dustinc): Once the usages of id/uuid, maintenance/is_maintenance, # and portgroup/port_group are normalized, the duplicates can be removed _id = kw.get('id') or kw.get('uuid', 'aaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee') _node_id = kw.get('node_id') or kw.get('node_uuid', get_test_node().id) return type('volume_target', (object,), {'uuid': _id, 'id': _id, 'node_uuid': _node_id, 'node_id': _node_id, 'volume_type': kw.get('volume_type', 'iscsi'), 'properties': kw.get('properties', {}), 'boot_index': kw.get('boot_index', 0), 'volume_id': kw.get('volume_id', 'fffffff-gggg-hhhh-iiii-jjjjjjjjjjjj'), 'extra': kw.get('extra', {}), 'created_at': kw.get('created_at'), 'updated_at': kw.get('updated_at')})() def get_test_flavor(**kw): default_extra_specs = {'baremetal:deploy_kernel_id': 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa', 'baremetal:deploy_ramdisk_id': 'bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb'} flavor = {'name': kw.get('name', 'fake.flavor'), 'extra_specs': kw.get('extra_specs', default_extra_specs), 'swap': kw.get('swap', 0), 'root_gb': 1, 'memory_mb': 1, 'vcpus': 1, 'ephemeral_gb': kw.get('ephemeral_gb', 0)} return objects.Flavor(**flavor) def get_test_image_meta(**kw): return objects.ImageMeta.from_dict( {'id': kw.get('id', 'cccccccc-cccc-cccc-cccc-cccccccccccc')}) class FakeVolumeTargetClient(object): def create(self, node_uuid, volume_type, properties, boot_index, volume_id): pass def delete(self, volume_target_id): pass class FakePortClient(object): def list(self, address=None, limit=None, marker=None, sort_key=None, sort_dir=None, detail=False, fields=None, node=None, portgroup=None): pass def get(self, port_uuid): pass def update(self, port_uuid, patch): pass class FakePortgroupClient(object): def list(self, node=None, address=None, limit=None, marker=None, sort_key=None, sort_dir=None, detail=False, fields=None): pass class FakeNodeClient(object): def list(self, associated=None, maintenance=None, marker=None, limit=None, detail=False, sort_key=None, sort_dir=None, fields=None, provision_state=None, driver=None, resource_class=None, chassis=None): return [] def get(self, node_uuid, fields=None): pass def get_by_instance_uuid(self, instance_uuid, fields=None): pass def list_ports(self, node_uuid, detail=False): pass def set_power_state(self, node_uuid, target, soft=False, timeout=None): pass def set_provision_state(self, node_uuid, state, 
configdrive=None, cleansteps=None, rescue_password=None, os_ironic_api_version=None): pass def update(self, node_uuid, patch): pass def validate(self, node_uuid): pass def vif_attach(self, node_uuid, port_id): pass def vif_detach(self, node_uuid, port_id): pass def inject_nmi(self, node_uuid): pass def list_volume_targets(self, node_uuid, detail=False): pass class FakeClient(object): node = FakeNodeClient() port = FakePortClient() portgroup = FakePortgroupClient() volume_target = FakeVolumeTargetClient() current_api_version = '%d.%d' % client_wrapper.IRONIC_API_VERSION is_api_version_negotiated = True ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1636736378.5984676 nova-21.2.4/nova/tests/unit/virt/libvirt/0000775000175000017500000000000000000000000020311 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/nova/tests/unit/virt/libvirt/__init__.py0000664000175000017500000000000000000000000022410 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/virt/libvirt/fake_imagebackend.py0000664000175000017500000002262500000000000024252 0ustar00zuulzuul00000000000000# Copyright 2012 Grid Dynamics # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import functools import os import fixtures import mock from nova.virt.libvirt import config from nova.virt.libvirt import driver from nova.virt.libvirt import imagebackend from nova.virt.libvirt import utils as libvirt_utils class ImageBackendFixture(fixtures.Fixture): def __init__(self, got_files=None, imported_files=None, exists=None): """This fixture mocks imagebackend.Backend.backend, which is the only entry point to libvirt.imagebackend from libvirt.driver. :param got_files: A list of {'filename': path, 'size': size} for every file which was created. :param imported_files: A list of (local_filename, remote_filename) for every invocation of import_file(). :param exists: An optional lambda which takes the disk name as an argument, and returns True if the disk exists, False otherwise. """ self.got_files = got_files self.imported_files = imported_files self.disks = collections.defaultdict(self._mock_disk) """A dict of name -> Mock image object. 
This is a defaultdict, so tests may access it directly before a disk has been created.""" self._exists = exists def setUp(self): super(ImageBackendFixture, self).setUp() # Mock template functions passed to cache self.mock_fetch_image = mock.create_autospec(libvirt_utils.fetch_image) self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.utils.fetch_image', self.mock_fetch_image)) self.mock_fetch_raw_image = \ mock.create_autospec(libvirt_utils.fetch_raw_image) self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.utils.fetch_raw_image', self.mock_fetch_raw_image)) self.mock_create_ephemeral = \ mock.create_autospec(driver.LibvirtDriver._create_ephemeral) self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.driver.LibvirtDriver._create_ephemeral', self.mock_create_ephemeral)) self.mock_create_swap = \ mock.create_autospec(driver.LibvirtDriver._create_swap) self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.driver.LibvirtDriver._create_swap', self.mock_create_swap)) # Backend.backend creates all Image objects self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.imagebackend.Backend.backend', self._mock_backend)) @property def created_disks(self): """disks, filtered to contain only disks which were actually created by calling a relevant method. """ # A disk was created iff either cache() or import_file() was called. return {name: disk for name, disk in self.disks.items() if any([disk.cache.called, disk.import_file.called])} def _mock_disk(self): # This is the generator passed to the disks defaultdict. It returns # a mocked Image object, but note that the returned object has not # yet been 'constructed'. We don't know at this stage what arguments # will be passed to the constructor, so we don't know, eg, its type # or path. # # The reason for this 2 phase construction is to allow tests to # manipulate mocks for disks before they have been created. eg a # test can do the following before executing the method under test: # # disks['disk'].cache.side_effect = ImageNotFound... # # When the 'constructor' (image_init in _mock_backend) later runs, # it will return the same object we created here, and when the # caller calls cache() it will raise the requested exception. disk = mock.create_autospec(imagebackend.Image) # NOTE(mdbooth): fake_cache and fake_import_file are for compatibility # with existing tests which test got_files and imported_files. They # should be removed when they have no remaining users. disk.cache.side_effect = self._fake_cache disk.import_file.side_effect = self._fake_import_file # NOTE(mdbooth): test_virt_drivers assumes libvirt_info has functional # output disk.libvirt_info.side_effect = \ functools.partial(self._fake_libvirt_info, disk) return disk def _mock_backend(self, backend_self, image_type=None): # This method mocks Backend.backend, which returns a subclass of Image # (it returns a class, not an instance). This mocked method doesn't # return a class; it returns a function which returns a Mock. IOW, # instead of the getting a QCow2, the caller gets image_init, # so instead of: # # QCow2(instance, disk_name='disk') # # the caller effectively does: # # image_init(instance, disk_name='disk') # # Therefore image_init() must have the same signature as an Image # subclass constructor, and return a mocked Image object. # # The returned mocked Image object has the following additional # properties which are useful for testing: # # * Calls with the same disk_name return the same object from # self.disks. 
This means tests can assert on multiple calls for # the same disk without worrying about whether they were also on # the same object. # # * Mocked objects have an additional image_type attribute set to # the image_type originally passed to Backend.backend() during # their construction. Tests can use this to assert that disks were # created of the expected type. def image_init(instance=None, disk_name=None, path=None): # There's nothing special about this path except that it's # predictable and unique for (instance, disk). if path is None: path = os.path.join( libvirt_utils.get_instance_path(instance), disk_name) else: disk_name = os.path.basename(path) disk = self.disks[disk_name] # Used directly by callers. These would have been set if called # the real constructor. setattr(disk, 'path', path) setattr(disk, 'is_block_dev', mock.sentinel.is_block_dev) # Used by tests. Note that image_init is a closure over image_type. setattr(disk, 'image_type', image_type) # Used by tests to manipulate which disks exist. if self._exists is not None: # We don't just cache the return value here because the # caller may want, eg, a test where the disk initially does not # exist and later exists. disk.exists.side_effect = lambda: self._exists(disk_name) else: disk.exists.return_value = True return disk # Set the SUPPORTS_CLONE member variable to mimic the Image base # class. image_init.SUPPORTS_CLONE = False # Ditto for the 'is_shared_block_storage' function and # 'is_file_in_instance_path' def is_shared_block_storage(): return False def is_file_in_instance_path(): return False setattr(image_init, 'is_shared_block_storage', is_shared_block_storage) setattr(image_init, 'is_file_in_instance_path', is_file_in_instance_path) return image_init def _fake_cache(self, fetch_func, filename, size=None, *args, **kwargs): # Execute the template function so we can test the arguments it was # called with. fetch_func(target=filename, *args, **kwargs) # For legacy tests which use got_files if self.got_files is not None: self.got_files.append({'filename': filename, 'size': size}) def _fake_import_file(self, instance, local_filename, remote_filename): # For legacy tests which use imported_files if self.imported_files is not None: self.imported_files.append((local_filename, remote_filename)) def _fake_libvirt_info(self, mock_disk, disk_info, cache_mode, extra_specs, hypervisor_version, disk_unit=None, boot_order=None): # For tests in test_virt_drivers which expect libvirt_info to be # functional info = config.LibvirtConfigGuestDisk() info.source_type = 'file' info.source_device = disk_info['type'] info.target_bus = disk_info['bus'] info.target_dev = disk_info['dev'] info.driver_cache = cache_mode info.driver_format = 'raw' info.source_path = mock_disk.path if boot_order: info.boot_order = boot_order return info ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/virt/libvirt/fake_libvirt_data.py0000664000175000017500000012715200000000000024325 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import units from nova.objects.fields import Architecture from nova.virt.libvirt import config def fake_kvm_guest(): obj = config.LibvirtConfigGuest() obj.virt_type = "kvm" obj.memory = 100 * units.Mi obj.vcpus = 2 obj.cpuset = set([0, 1, 3, 4, 5]) obj.cputune = config.LibvirtConfigGuestCPUTune() obj.cputune.shares = 100 obj.cputune.quota = 50000 obj.cputune.period = 25000 obj.membacking = config.LibvirtConfigGuestMemoryBacking() page1 = config.LibvirtConfigGuestMemoryBackingPage() page1.size_kb = 2048 page1.nodeset = [0, 1, 2, 3, 5] page2 = config.LibvirtConfigGuestMemoryBackingPage() page2.size_kb = 1048576 page2.nodeset = [4] obj.membacking.hugepages.append(page1) obj.membacking.hugepages.append(page2) obj.memtune = config.LibvirtConfigGuestMemoryTune() obj.memtune.hard_limit = 496 obj.memtune.soft_limit = 672 obj.memtune.swap_hard_limit = 1638 obj.memtune.min_guarantee = 2970 obj.numatune = config.LibvirtConfigGuestNUMATune() numamemory = config.LibvirtConfigGuestNUMATuneMemory() numamemory.mode = "preferred" numamemory.nodeset = [0, 1, 2, 3, 8] obj.numatune.memory = numamemory numamemnode0 = config.LibvirtConfigGuestNUMATuneMemNode() numamemnode0.cellid = 0 numamemnode0.mode = "preferred" numamemnode0.nodeset = [0, 1] numamemnode1 = config.LibvirtConfigGuestNUMATuneMemNode() numamemnode1.cellid = 1 numamemnode1.mode = "preferred" numamemnode1.nodeset = [2, 3] numamemnode2 = config.LibvirtConfigGuestNUMATuneMemNode() numamemnode2.cellid = 2 numamemnode2.mode = "preferred" numamemnode2.nodeset = [8] obj.numatune.memnodes.extend([numamemnode0, numamemnode1, numamemnode2]) obj.name = "demo" obj.uuid = "b38a3f43-4be2-4046-897f-b67c2f5e0147" obj.os_type = "linux" obj.os_boot_dev = ["hd", "cdrom", "fd"] obj.os_smbios = config.LibvirtConfigGuestSMBIOS() obj.features = [ config.LibvirtConfigGuestFeatureACPI(), config.LibvirtConfigGuestFeatureAPIC(), config.LibvirtConfigGuestFeaturePAE(), config.LibvirtConfigGuestFeatureKvmHidden() ] obj.sysinfo = config.LibvirtConfigGuestSysinfo() obj.sysinfo.bios_vendor = "Acme" obj.sysinfo.system_version = "1.0.0" # obj.devices[0] disk = config.LibvirtConfigGuestDisk() disk.source_type = "file" disk.source_path = "/tmp/disk-img" disk.target_dev = "vda" disk.target_bus = "virtio" obj.add_device(disk) # obj.devices[1] disk = config.LibvirtConfigGuestDisk() disk.source_device = "cdrom" disk.source_type = "file" disk.source_path = "/tmp/cdrom-img" disk.target_dev = "sda" disk.target_bus = "sata" obj.add_device(disk) # obj.devices[2] intf = config.LibvirtConfigGuestInterface() intf.net_type = "network" intf.mac_addr = "52:54:00:f6:35:8f" intf.model = "virtio" intf.source_dev = "virbr0" obj.add_device(intf) # obj.devices[3] balloon = config.LibvirtConfigMemoryBalloon() balloon.model = 'virtio' balloon.period = 11 obj.add_device(balloon) # obj.devices[4] mouse = config.LibvirtConfigGuestInput() mouse.type = "mouse" mouse.bus = "virtio" obj.add_device(mouse) # obj.devices[5] gfx = config.LibvirtConfigGuestGraphics() gfx.type = "vnc" gfx.autoport = True gfx.keymap = "en_US" gfx.listen = "127.0.0.1" obj.add_device(gfx) # obj.devices[6] video = config.LibvirtConfigGuestVideo() video.type = 'virtio' obj.add_device(video) # obj.devices[7] serial = config.LibvirtConfigGuestSerial() serial.type = "file" serial.source_path = "/tmp/vm.log" obj.add_device(serial) # obj.devices[8] rng = config.LibvirtConfigGuestRng() rng.backend = '/dev/urandom' 
rng.rate_period = '12' rng.rate_bytes = '34' obj.add_device(rng) # obj.devices[9] controller = config.LibvirtConfigGuestController() controller.type = 'scsi' controller.model = 'virtio-scsi' # usually set from image meta controller.index = 0 obj.add_device(controller) return obj FAKE_KVM_GUEST = """ b38a3f43-4be2-4046-897f-b67c2f5e0147 demo 104857600 496 672 1638 2970 2 Acme 1.0.0 linux 100 50000 25000 /dev/urandom 0x0033 47 1 """ CAPABILITIES_HOST_TEMPLATE = ''' cef19ce0-0ca2-11df-855d-b19fbce37686 x86_64 Penryn Intel tcp %(topology)s apparmor 0 ''' # NOTE(aspiers): HostTestCase has tests which assert that for any # given (arch, domain) listed in the guest capabilities here, all # canonical machine types (e.g. 'pc' or 'q35') must be a substring of # the expanded machine type returned in the element of the # corresponding fake getDomainCapabilities response for that (arch, # domain, canonical_machine_type) combination. Those responses are # defined by the DOMCAPABILITIES_* variables below. While # DOMCAPABILITIES_X86_64_TEMPLATE can return multiple values for the # element, DOMCAPABILITIES_I686 is fixed to fake a response # of the 'pc-i440fx-2.11' machine type, therefore # CAPABILITIES_GUEST['i686'] should return 'pc' as the only canonical # machine type. # # CAPABILITIES_GUEST does not include canonical machine types for # other non-x86 architectures, so these test assertions on apply to # x86. CAPABILITIES_GUEST = { 'i686': ''' hvm 32 /usr/bin/qemu-system-i386 pc-i440fx-2.11 pc isapc pc-1.1 pc-1.2 pc-1.3 pc-i440fx-2.8 pc-1.0 pc-i440fx-2.9 pc-i440fx-2.6 pc-i440fx-2.7 xenfv pc-i440fx-2.3 pc-i440fx-2.4 pc-i440fx-2.5 pc-i440fx-2.1 pc-i440fx-2.2 pc-i440fx-2.0 pc-q35-2.11 q35 xenpv pc-q35-2.10 pc-i440fx-1.7 pc-q35-2.9 pc-0.15 pc-i440fx-1.5 pc-q35-2.7 pc-i440fx-1.6 pc-q35-2.8 pc-0.13 pc-0.14 pc-q35-2.4 pc-q35-2.5 pc-q35-2.6 pc-i440fx-1.4 pc-i440fx-2.10 pc-0.11 pc-0.12 pc-0.10 /usr/bin/qemu-kvm pc-i440fx-2.11 pc isapc pc-1.1 pc-1.2 pc-1.3 pc-i440fx-2.8 pc-1.0 pc-i440fx-2.9 pc-i440fx-2.6 pc-i440fx-2.7 xenfv pc-i440fx-2.3 pc-i440fx-2.4 pc-i440fx-2.5 pc-i440fx-2.1 pc-i440fx-2.2 pc-i440fx-2.0 pc-q35-2.11 q35 xenpv pc-q35-2.10 pc-i440fx-1.7 pc-q35-2.9 pc-0.15 pc-i440fx-1.5 pc-q35-2.7 pc-i440fx-1.6 pc-q35-2.8 pc-0.13 pc-0.14 pc-q35-2.4 pc-q35-2.5 pc-q35-2.6 pc-i440fx-1.4 pc-i440fx-2.10 pc-0.11 pc-0.12 pc-0.10 ''', 'x86_64': ''' hvm 64 /usr/bin/qemu-system-x86_64 pc-i440fx-2.11 pc isapc pc-1.1 pc-1.2 pc-1.3 pc-i440fx-2.8 pc-1.0 pc-i440fx-2.9 pc-i440fx-2.6 pc-i440fx-2.7 xenfv pc-i440fx-2.3 pc-i440fx-2.4 pc-i440fx-2.5 pc-i440fx-2.1 pc-i440fx-2.2 pc-i440fx-2.0 pc-q35-2.11 q35 xenpv pc-q35-2.10 pc-i440fx-1.7 pc-q35-2.9 pc-0.15 pc-i440fx-1.5 pc-q35-2.7 pc-i440fx-1.6 pc-q35-2.8 pc-0.13 pc-0.14 pc-q35-2.4 pc-q35-2.5 pc-q35-2.6 pc-i440fx-1.4 pc-i440fx-2.10 pc-0.11 pc-0.12 pc-0.10 /usr/bin/qemu-kvm pc-i440fx-2.11 pc isapc pc-1.1 pc-1.2 pc-1.3 pc-i440fx-2.8 pc-1.0 pc-i440fx-2.9 pc-i440fx-2.6 pc-i440fx-2.7 xenfv pc-i440fx-2.3 pc-i440fx-2.4 pc-i440fx-2.5 pc-i440fx-2.1 pc-i440fx-2.2 pc-i440fx-2.0 pc-q35-2.11 q35 xenpv pc-q35-2.10 pc-i440fx-1.7 pc-q35-2.9 pc-0.15 pc-i440fx-1.5 pc-q35-2.7 pc-i440fx-1.6 pc-q35-2.8 pc-0.13 pc-0.14 pc-q35-2.4 pc-q35-2.5 pc-q35-2.6 pc-i440fx-1.4 pc-i440fx-2.10 pc-0.11 pc-0.12 pc-0.10 ''', 'armv7l': ''' hvm 32 /usr/bin/qemu-system-arm integratorcp vexpress-a9 syborg musicpal mainstone n800 n810 n900 cheetah sx1 sx1-v1 beagle beaglexm tosa akita spitz borzoi terrier connex verdex lm3s811evb lm3s6965evb realview-eb realview-eb-mpcore realview-pb-a8 realview-pbx-a9 versatilepb versatileab 
''', 'mips': ''' hvm 32 /usr/bin/qemu-system-mips malta mipssim magnum pica61 mips ''', 'mipsel': ''' hvm 32 /usr/bin/qemu-system-mipsel malta mipssim magnum pica61 mips ''', 'sparc': ''' hvm 32 /usr/bin/qemu-system-sparc SS-5 leon3_generic SS-10 SS-600MP SS-20 Voyager LX SS-4 SPARCClassic SPARCbook SS-1000 SS-2000 SS-2 ''', 'ppc': ''' hvm 32 /usr/bin/qemu-system-ppc g3beige virtex-ml507 mpc8544ds bamboo-0.13 bamboo-0.12 ref405ep taihu mac99 prep ''' } CAPABILITIES_TEMPLATE = ( "\n" + CAPABILITIES_HOST_TEMPLATE + CAPABILITIES_GUEST['i686'] + CAPABILITIES_GUEST['x86_64'] + CAPABILITIES_GUEST['armv7l'] + CAPABILITIES_GUEST['mips'] + CAPABILITIES_GUEST['mipsel'] + CAPABILITIES_GUEST['sparc'] + CAPABILITIES_GUEST['ppc'] + "\n" ) DOMCAPABILITIES_SPARC = """ /usr/bin/qemu-system-sparc qemu SS-5 sparc rom pflash yes no disk cdrom floppy lun fdc scsi virtio sdl vnc spice subsystem default mandatory requisite optional usb pci scsi """ DOMCAPABILITIES_ARMV7L = """ /usr/bin/qemu-system-arm qemu virt-2.11 armv7l rom pflash yes no pxa262 pxa270-a0 arm1136 cortex-a15 pxa260 arm1136-r2 pxa261 pxa255 arm926 arm11mpcore pxa250 ti925t sa1110 arm1176 sa1100 pxa270-c5 cortex-a9 cortex-a8 pxa270-c0 cortex-a7 arm1026 pxa270-b1 cortex-m3 cortex-m4 pxa270-b0 arm946 cortex-r5 pxa270-a1 pxa270 disk cdrom floppy lun fdc scsi virtio usb sata sdl vnc spice subsystem default mandatory requisite optional usb pci scsi 2 3 """ DOMCAPABILITIES_PPC = """ /usr/bin/qemu-system-ppc qemu g3beige ppc rom pflash yes no disk cdrom floppy lun ide fdc scsi virtio usb sata sdl vnc spice subsystem default mandatory requisite optional usb pci scsi """ DOMCAPABILITIES_MIPS = """ /usr/bin/qemu-system-mips qemu malta mips rom pflash yes no disk cdrom floppy lun ide fdc scsi virtio usb sata sdl vnc spice subsystem default mandatory requisite optional usb pci scsi """ DOMCAPABILITIES_MIPSEL = """ /usr/bin/qemu-system-mipsel qemu malta mipsel rom pflash yes no disk cdrom floppy lun ide fdc scsi virtio usb sata sdl vnc spice subsystem default mandatory requisite optional usb pci scsi """ # NOTE(sean-k-mooney): yes i686 is actually the i386 emulator # the qemu-system-i386 binary is used for all 32bit x86 # instruction sets. DOMCAPABILITIES_I686 = """ /usr/bin/qemu-system-i386 kvm pc-i440fx-2.11 i686 rom pflash yes no Skylake-Client-IBRS Intel qemu64 qemu32 phenom pentium3 pentium2 pentium n270 kvm64 kvm32 coreduo core2duo athlon Westmere Westmere-IBRS Skylake-Server Skylake-Server-IBRS Skylake-Client Skylake-Client-IBRS SandyBridge SandyBridge-IBRS Penryn Opteron_G5 Opteron_G4 Opteron_G3 Opteron_G2 Opteron_G1 Nehalem Nehalem-IBRS IvyBridge IvyBridge-IBRS Haswell-noTSX Haswell-noTSX-IBRS Haswell Haswell-IBRS EPYC EPYC-IBPB Conroe Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell Broadwell-IBRS 486 disk cdrom floppy lun ide fdc scsi virtio usb sata sdl vnc spice subsystem default mandatory requisite optional usb pci scsi """ STATIC_DOMCAPABILITIES = { Architecture.ARMV7: DOMCAPABILITIES_ARMV7L, Architecture.SPARC: DOMCAPABILITIES_SPARC, Architecture.PPC: DOMCAPABILITIES_PPC, Architecture.MIPS: DOMCAPABILITIES_MIPS, Architecture.MIPSEL: DOMCAPABILITIES_MIPSEL, Architecture.I686: DOMCAPABILITIES_I686 } # NOTE(aspiers): see the above note for CAPABILITIES_GUEST which # explains why the element here needs to be parametrised. # # The element needs to be parametrised for emulating # environments with and without the SEV feature. 
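# As a rough, hand-written illustration (not part of the shipped fixtures),
# the template below is expected to be rendered with both placeholders
# filled in, along the lines of:
#
#     DOMCAPABILITIES_X86_64_TEMPLATE % {
#         'mtype': 'pc-i440fx-2.11',  # any expanded machine type string
#         'features': '',             # empty, or an SEV feature block when
#                                     # faking an SEV-capable host
#     }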
DOMCAPABILITIES_X86_64_TEMPLATE = """ /usr/bin/qemu-kvm kvm %(mtype)s x86_64 /usr/share/qemu/ovmf-x86_64-ms-4m-code.bin /usr/share/qemu/ovmf-x86_64-ms-code.bin rom pflash yes no EPYC-IBPB AMD qemu64 qemu32 phenom pentium3 pentium2 pentium n270 kvm64 kvm32 coreduo core2duo athlon Westmere Westmere-IBRS Skylake-Server Skylake-Server-IBRS Skylake-Client Skylake-Client-IBRS SandyBridge SandyBridge-IBRS Penryn Opteron_G5 Opteron_G4 Opteron_G3 Opteron_G2 Opteron_G1 Nehalem Nehalem-IBRS IvyBridge IvyBridge-IBRS Haswell Haswell-noTSX Haswell-noTSX-IBRS Haswell-IBRS EPYC EPYC-IBPB Conroe Broadwell Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell-IBRS 486 disk cdrom floppy lun ide fdc scsi virtio usb sata sdl vnc spice subsystem default mandatory requisite optional usb pci scsi default vfio %(features)s """ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/virt/libvirt/fake_os_brick_connector.py0000664000175000017500000000264300000000000025523 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. def get_connector_properties(root_helper, my_ip, multipath, enforce_multipath, host=None): """Fake os-brick.""" props = {} props['ip'] = my_ip props['host'] = host iscsi = ISCSIConnector('') props['initiator'] = iscsi.get_initiator() props['wwpns'] = ['100010604b019419'] props['wwnns'] = ['200010604b019419'] props['multipath'] = multipath props['platform'] = 'x86_64' props['os_type'] = 'linux2' return props class ISCSIConnector(object): """Mimick the iSCSI connector.""" def __init__(self, root_helper, driver=None, execute=None, use_multipath=False, device_scan_attempts=3, *args, **kwargs): self.root_herlp = root_helper, self.execute = execute def get_initiator(self): return "fake_iscsi.iqn" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736320.0 nova-21.2.4/nova/tests/unit/virt/libvirt/fakelibvirt.py0000664000175000017500000016702700000000000023202 0ustar00zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import sys import textwrap import time import fixtures from lxml import etree from oslo_log import log as logging from oslo_utils.fixture import uuidsentinel as uuids from nova import conf from nova.objects import fields as obj_fields from nova.tests.unit.virt.libvirt import fake_libvirt_data from nova.virt.libvirt import config as vconfig # Allow passing None to the various connect methods # (i.e. 
allow the client to rely on default URLs) allow_default_uri_connection = True # Has libvirt connection been used at least once connection_used = False def _reset(): global allow_default_uri_connection allow_default_uri_connection = True LOG = logging.getLogger(__name__) CONF = conf.CONF # virDomainState VIR_DOMAIN_NOSTATE = 0 VIR_DOMAIN_RUNNING = 1 VIR_DOMAIN_BLOCKED = 2 VIR_DOMAIN_PAUSED = 3 VIR_DOMAIN_SHUTDOWN = 4 VIR_DOMAIN_SHUTOFF = 5 VIR_DOMAIN_CRASHED = 6 # NOTE(mriedem): These values come from include/libvirt/libvirt-domain.h VIR_DOMAIN_XML_SECURE = 1 VIR_DOMAIN_XML_INACTIVE = 2 VIR_DOMAIN_XML_UPDATE_CPU = 4 VIR_DOMAIN_XML_MIGRATABLE = 8 VIR_DOMAIN_BLOCK_REBASE_SHALLOW = 1 VIR_DOMAIN_BLOCK_REBASE_REUSE_EXT = 2 VIR_DOMAIN_BLOCK_REBASE_COPY = 8 VIR_DOMAIN_BLOCK_REBASE_RELATIVE = 16 VIR_DOMAIN_BLOCK_REBASE_COPY_DEV = 32 # virDomainBlockResize VIR_DOMAIN_BLOCK_RESIZE_BYTES = 1 VIR_DOMAIN_BLOCK_JOB_ABORT_ASYNC = 1 VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT = 2 VIR_DOMAIN_EVENT_ID_LIFECYCLE = 0 VIR_DOMAIN_EVENT_DEFINED = 0 VIR_DOMAIN_EVENT_UNDEFINED = 1 VIR_DOMAIN_EVENT_STARTED = 2 VIR_DOMAIN_EVENT_SUSPENDED = 3 VIR_DOMAIN_EVENT_RESUMED = 4 VIR_DOMAIN_EVENT_STOPPED = 5 VIR_DOMAIN_EVENT_SHUTDOWN = 6 VIR_DOMAIN_EVENT_PMSUSPENDED = 7 VIR_DOMAIN_EVENT_SUSPENDED_MIGRATED = 1 VIR_DOMAIN_EVENT_SUSPENDED_POSTCOPY = 7 VIR_DOMAIN_UNDEFINE_MANAGED_SAVE = 1 VIR_DOMAIN_UNDEFINE_NVRAM = 4 VIR_DOMAIN_AFFECT_CURRENT = 0 VIR_DOMAIN_AFFECT_LIVE = 1 VIR_DOMAIN_AFFECT_CONFIG = 2 VIR_CPU_COMPARE_ERROR = -1 VIR_CPU_COMPARE_INCOMPATIBLE = 0 VIR_CPU_COMPARE_IDENTICAL = 1 VIR_CPU_COMPARE_SUPERSET = 2 VIR_CRED_USERNAME = 1 VIR_CRED_AUTHNAME = 2 VIR_CRED_LANGUAGE = 3 VIR_CRED_CNONCE = 4 VIR_CRED_PASSPHRASE = 5 VIR_CRED_ECHOPROMPT = 6 VIR_CRED_NOECHOPROMPT = 7 VIR_CRED_REALM = 8 VIR_CRED_EXTERNAL = 9 VIR_MIGRATE_LIVE = 1 VIR_MIGRATE_PEER2PEER = 2 VIR_MIGRATE_TUNNELLED = 4 VIR_MIGRATE_PERSIST_DEST = 8 VIR_MIGRATE_UNDEFINE_SOURCE = 16 VIR_MIGRATE_NON_SHARED_INC = 128 VIR_MIGRATE_AUTO_CONVERGE = 8192 VIR_MIGRATE_POSTCOPY = 32768 VIR_MIGRATE_TLS = 65536 VIR_NODE_CPU_STATS_ALL_CPUS = -1 VIR_DOMAIN_START_PAUSED = 1 # libvirtError enums # (Intentionally different from what's in libvirt. 
We do this to check, # that consumers of the library are using the symbolic names rather than # hardcoding the numerical values) VIR_FROM_QEMU = 100 VIR_FROM_DOMAIN = 200 VIR_FROM_NWFILTER = 330 VIR_FROM_REMOTE = 340 VIR_FROM_RPC = 345 VIR_FROM_NODEDEV = 666 VIR_ERR_INVALID_ARG = 8 VIR_ERR_NO_SUPPORT = 3 VIR_ERR_XML_ERROR = 27 VIR_ERR_XML_DETAIL = 350 VIR_ERR_NO_DOMAIN = 420 VIR_ERR_OPERATION_FAILED = 510 VIR_ERR_OPERATION_INVALID = 55 VIR_ERR_OPERATION_TIMEOUT = 68 VIR_ERR_NO_NWFILTER = 620 VIR_ERR_SYSTEM_ERROR = 900 VIR_ERR_INTERNAL_ERROR = 950 VIR_ERR_CONFIG_UNSUPPORTED = 951 VIR_ERR_NO_NODE_DEVICE = 667 VIR_ERR_NO_SECRET = 66 VIR_ERR_AGENT_UNRESPONSIVE = 86 VIR_ERR_ARGUMENT_UNSUPPORTED = 74 VIR_ERR_OPERATION_UNSUPPORTED = 84 VIR_ERR_DEVICE_MISSING = 99 # Readonly VIR_CONNECT_RO = 1 # virConnectBaselineCPU flags VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES = 1 # snapshotCreateXML flags VIR_DOMAIN_SNAPSHOT_CREATE_NO_METADATA = 4 VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY = 16 VIR_DOMAIN_SNAPSHOT_CREATE_REUSE_EXT = 32 VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE = 64 # blockCommit flags VIR_DOMAIN_BLOCK_COMMIT_RELATIVE = 4 VIR_CONNECT_LIST_DOMAINS_ACTIVE = 1 VIR_CONNECT_LIST_DOMAINS_INACTIVE = 2 # secret type VIR_SECRET_USAGE_TYPE_NONE = 0 VIR_SECRET_USAGE_TYPE_VOLUME = 1 VIR_SECRET_USAGE_TYPE_CEPH = 2 VIR_SECRET_USAGE_TYPE_ISCSI = 3 # Libvirt version to match MIN_LIBVIRT_VERSION in driver.py FAKE_LIBVIRT_VERSION = 4000000 # Libvirt version to match MIN_QEMU_VERSION in driver.py FAKE_QEMU_VERSION = 2011000 PCI_VEND_ID = '8086' PCI_VEND_NAME = 'Intel Corporation' PCI_PROD_ID = '1533' PCI_PROD_NAME = 'I210 Gigabit Network Connection' PCI_DRIVER_NAME = 'igb' PF_PROD_ID = '1528' PF_PROD_NAME = 'Ethernet Controller 10-Gigabit X540-AT2' PF_DRIVER_NAME = 'ixgbe' PF_CAP_TYPE = 'virt_functions' VF_PROD_ID = '1515' VF_PROD_NAME = 'X540 Ethernet Controller Virtual Function' VF_DRIVER_NAME = 'ixgbevf' VF_CAP_TYPE = 'phys_function' MDEV_CAPABLE_VEND_ID = '10DE' MDEV_CAPABLE_VEND_NAME = 'Nvidia' MDEV_CAPABLE_PROD_ID = '0FFE' MDEV_CAPABLE_PROD_NAME = 'GRID M60-0B' MDEV_CAPABLE_DRIVER_NAME = 'nvidia' MDEV_CAPABLE_CAP_TYPE = 'mdev_types' NVIDIA_11_VGPU_TYPE = 'nvidia-11' NVIDIA_12_VGPU_TYPE = 'nvidia-12' PGPU1_PCI_ADDR = 'pci_0000_81_00_0' PGPU2_PCI_ADDR = 'pci_0000_81_01_0' PGPU3_PCI_ADDR = 'pci_0000_81_02_0' class FakePCIDevice(object): """Generate a fake PCI device. Generate a fake PCI devices corresponding to one of the following real-world PCI devices. - I210 Gigabit Network Connection (8086:1533) - Ethernet Controller 10-Gigabit X540-AT2 (8086:1528) - X540 Ethernet Controller Virtual Function (8086:1515) """ pci_device_template = textwrap.dedent(""" pci_0000_81_%(slot)02x_%(function)d /sys/devices/pci0000:80/0000:80:01.0/0000:81:%(slot)02x.%(function)d pci_0000_80_01_0 %(driver)s 0 129 %(slot)d %(function)d %(prod_name)s %(vend_name)s %(capability)s
""".strip()) # noqa cap_templ = "%(addresses)s" addr_templ = "
" # noqa mdevtypes_templ = textwrap.dedent(""" GRID M60-0Bvfio-pci %(instances)s """.strip()) # noqa is_capable_of_mdevs = False def __init__(self, dev_type, slot, function, iommu_group, numa_node, vf_ratio=None, multiple_gpu_types=False): """Populate pci devices :param dev_type: (string) Indicates the type of the device (PCI, PF, VF). :param slot: (int) Slot number of the device. :param function: (int) Function number of the device. :param iommu_group: (int) IOMMU group ID. :param numa_node: (int) NUMA node of the device. :param vf_ratio: (int) Ratio of Virtual Functions on Physical. Only applicable if ``dev_type`` is one of: ``PF``, ``VF``. :param multiple_gpu_types: (bool) Supports different vGPU types """ self.dev_type = dev_type self.slot = slot self.function = function self.iommu_group = iommu_group self.numa_node = numa_node self.vf_ratio = vf_ratio self.multiple_gpu_types = multiple_gpu_types self.generate_xml() def generate_xml(self, skip_capability=False): vend_id = PCI_VEND_ID vend_name = PCI_VEND_NAME capability = '' if self.dev_type == 'PCI': if self.vf_ratio: raise ValueError('vf_ratio does not apply for PCI devices') prod_id = PCI_PROD_ID prod_name = PCI_PROD_NAME driver = PCI_DRIVER_NAME elif self.dev_type == 'PF': prod_id = PF_PROD_ID prod_name = PF_PROD_NAME driver = PF_DRIVER_NAME if not skip_capability: capability = self.cap_templ % { 'cap_type': PF_CAP_TYPE, 'addresses': '\n'.join([ self.addr_templ % { # these are the slot, function values of the child # VFs, we can only assign 8 functions to a slot # (0-7) so bump the slot each time we exceed this 'slot': self.slot + (x // 8), # ...and wrap the function value 'function': x % 8, # the offset is because the PF is occupying function 0 } for x in range(1, self.vf_ratio + 1)]) } elif self.dev_type == 'VF': prod_id = VF_PROD_ID prod_name = VF_PROD_NAME driver = VF_DRIVER_NAME if not skip_capability: capability = self.cap_templ % { 'cap_type': VF_CAP_TYPE, 'addresses': self.addr_templ % { # this is the slot, function value of the parent PF # if we're e.g. device 8, we'll have a different slot # to our parent so reverse this 'slot': self.slot - ((self.vf_ratio + 1) // 8), # the parent PF is always function 0 'function': 0, } } elif self.dev_type == 'MDEV_TYPES': prod_id = MDEV_CAPABLE_PROD_ID prod_name = MDEV_CAPABLE_PROD_NAME driver = MDEV_CAPABLE_DRIVER_NAME vend_id = MDEV_CAPABLE_VEND_ID vend_name = MDEV_CAPABLE_VEND_NAME types = [self.mdevtypes_templ % { 'type_id': NVIDIA_11_VGPU_TYPE, 'instances': 16, }] if self.multiple_gpu_types: types.append(self.mdevtypes_templ % { 'type_id': NVIDIA_12_VGPU_TYPE, 'instances': 8, }) if not skip_capability: capability = self.cap_templ % { 'cap_type': MDEV_CAPABLE_CAP_TYPE, 'addresses': '\n'.join(types) } self.is_capable_of_mdevs = True else: raise ValueError('Expected one of: PCI, VF, PCI') self.pci_device = self.pci_device_template % { 'slot': self.slot, 'function': self.function, 'vend_id': vend_id, 'vend_name': vend_name, 'prod_id': prod_id, 'prod_name': prod_name, 'driver': driver, 'capability': capability, 'iommu_group': self.iommu_group, 'numa_node': self.numa_node, } # -1 is the sentinel set in /sys/bus/pci/devices/*/numa_node # for no NUMA affinity. When the numa_node is set to -1 on a device # Libvirt omits the NUMA element so we remove it. 
if self.numa_node == -1: self.pci_device = self.pci_device.replace("", "") def XMLDesc(self, flags): return self.pci_device class HostPCIDevicesInfo(object): """Represent a pool of host PCI devices.""" TOTAL_NUMA_NODES = 2 pci_devname_template = 'pci_0000_81_%(slot)02x_%(function)d' def __init__(self, num_pci=0, num_pfs=2, num_vfs=8, num_mdevcap=0, numa_node=None, multiple_gpu_types=False): """Create a new HostPCIDevicesInfo object. :param num_pci: (int) The number of (non-SR-IOV) and (non-MDEV capable) PCI devices. :param num_pfs: (int) The number of PCI SR-IOV Physical Functions. :param num_vfs: (int) The number of PCI SR-IOV Virtual Functions. :param num_mdevcap: (int) The number of PCI devices capable of creating mediated devices. :param iommu_group: (int) Initial IOMMU group ID. :param numa_node: (int) NUMA node of the device; if set all of the devices will be assigned to the specified node else they will be split between ``$TOTAL_NUMA_NODES`` nodes. :param multiple_gpu_types: (bool) Supports different vGPU types """ self.devices = {} if not (num_vfs or num_pfs) and not num_mdevcap: return if num_vfs and not num_pfs: raise ValueError('Cannot create VFs without PFs') if num_pfs and num_vfs % num_pfs: raise ValueError('num_vfs must be a factor of num_pfs') slot = 0 function = 0 iommu_group = 40 # totally arbitrary number # Generate PCI devs for dev in range(num_pci): pci_dev_name = self.pci_devname_template % { 'slot': slot, 'function': function} LOG.info('Generating PCI device %r', pci_dev_name) self.devices[pci_dev_name] = FakePCIDevice( dev_type='PCI', slot=slot, function=function, iommu_group=iommu_group, numa_node=self._calc_numa_node(dev, numa_node)) slot += 1 iommu_group += 1 # Generate MDEV capable devs for dev in range(num_mdevcap): pci_dev_name = self.pci_devname_template % { 'slot': slot, 'function': function} LOG.info('Generating MDEV capable device %r', pci_dev_name) self.devices[pci_dev_name] = FakePCIDevice( dev_type='MDEV_TYPES', slot=slot, function=function, iommu_group=iommu_group, numa_node=self._calc_numa_node(dev, numa_node), multiple_gpu_types=multiple_gpu_types) slot += 1 iommu_group += 1 vf_ratio = num_vfs // num_pfs if num_pfs else 0 # Generate PFs for dev in range(num_pfs): function = 0 numa_node_pf = self._calc_numa_node(dev, numa_node) pci_dev_name = self.pci_devname_template % { 'slot': slot, 'function': function} LOG.info('Generating PF device %r', pci_dev_name) self.devices[pci_dev_name] = FakePCIDevice( dev_type='PF', slot=slot, function=function, iommu_group=iommu_group, numa_node=numa_node_pf, vf_ratio=vf_ratio) # Generate VFs for _ in range(vf_ratio): function += 1 iommu_group += 1 if function % 8 == 0: # functions must be 0-7 slot += 1 function = 0 pci_dev_name = self.pci_devname_template % { 'slot': slot, 'function': function} LOG.info('Generating VF device %r', pci_dev_name) self.devices[pci_dev_name] = FakePCIDevice( dev_type='VF', slot=slot, function=function, iommu_group=iommu_group, numa_node=numa_node_pf, vf_ratio=vf_ratio) slot += 1 @classmethod def _calc_numa_node(cls, dev, numa_node): return dev % cls.TOTAL_NUMA_NODES if numa_node is None else numa_node def get_all_devices(self): return self.devices.keys() def get_device_by_name(self, device_name): pci_dev = self.devices.get(device_name) return pci_dev def get_all_mdev_capable_devices(self): return [dev for dev in self.devices if self.devices[dev].is_capable_of_mdevs] class FakeMdevDevice(object): template = """ %(dev_name)s /sys/devices/pci0000:00/0000:00:02.0/%(path)s %(parent)s vfio_mdev 
""" def __init__(self, dev_name, type_id, parent): self.xml = self.template % { 'dev_name': dev_name, 'type_id': type_id, 'path': dev_name[len('mdev_'):], 'parent': parent} def XMLDesc(self, flags): return self.xml class HostMdevDevicesInfo(object): def __init__(self, devices=None): if devices is not None: self.devices = devices else: self.devices = {} def get_all_devices(self): return self.devices.keys() def get_device_by_name(self, device_name): dev = self.devices[device_name] return dev class HostInfo(object): def __init__(self, cpu_nodes=1, cpu_sockets=1, cpu_cores=2, cpu_threads=1, kB_mem=4096): """Create a new Host Info object :param cpu_nodes: (int) the number of NUMA cell, 1 for unusual NUMA topologies or uniform :param cpu_sockets: (int) number of CPU sockets per node if nodes > 1, total number of CPU sockets otherwise :param cpu_cores: (int) number of cores per socket :param cpu_threads: (int) number of threads per core :param kB_mem: (int) memory size in KBytes """ self.arch = obj_fields.Architecture.X86_64 self.kB_mem = kB_mem self.cpus = cpu_nodes * cpu_sockets * cpu_cores * cpu_threads self.cpu_mhz = 800 self.cpu_nodes = cpu_nodes self.cpu_cores = cpu_cores self.cpu_threads = cpu_threads self.cpu_sockets = cpu_sockets self.cpu_model = "Penryn" self.cpu_vendor = "Intel" self.numa_topology = NUMATopology(self.cpu_nodes, self.cpu_sockets, self.cpu_cores, self.cpu_threads, self.kB_mem) class NUMATopology(vconfig.LibvirtConfigCapsNUMATopology): """A batteries-included variant of LibvirtConfigCapsNUMATopology. Provides sane defaults for LibvirtConfigCapsNUMATopology that can be used in tests as is, or overridden where necessary. """ def __init__(self, cpu_nodes=4, cpu_sockets=1, cpu_cores=1, cpu_threads=2, kb_mem=1048576, mempages=None, **kwargs): super(NUMATopology, self).__init__(**kwargs) cpu_count = 0 for cell_count in range(cpu_nodes): cell = vconfig.LibvirtConfigCapsNUMACell() cell.id = cell_count cell.memory = kb_mem // cpu_nodes for socket_count in range(cpu_sockets): for cpu_num in range(cpu_cores * cpu_threads): cpu = vconfig.LibvirtConfigCapsNUMACPU() cpu.id = cpu_count cpu.socket_id = cell_count cpu.core_id = cpu_num // cpu_threads cpu.siblings = set([cpu_threads * (cpu_count // cpu_threads) + thread for thread in range(cpu_threads)]) cell.cpus.append(cpu) cpu_count += 1 # If no mempages are provided, use only the default 4K pages if mempages: cell.mempages = mempages[cell_count] else: cell.mempages = create_mempages([(4, cell.memory // 4)]) self.cells.append(cell) def create_mempages(mappings): """Generate a list of LibvirtConfigCapsNUMAPages objects. :param mappings: (dict) A mapping of page size to quantity of said pages. :returns: [LibvirtConfigCapsNUMAPages, ...] 
""" mempages = [] for page_size, page_qty in mappings: mempage = vconfig.LibvirtConfigCapsNUMAPages() mempage.size = page_size mempage.total = page_qty mempages.append(mempage) return mempages VIR_DOMAIN_JOB_NONE = 0 VIR_DOMAIN_JOB_BOUNDED = 1 VIR_DOMAIN_JOB_UNBOUNDED = 2 VIR_DOMAIN_JOB_COMPLETED = 3 VIR_DOMAIN_JOB_FAILED = 4 VIR_DOMAIN_JOB_CANCELLED = 5 def _parse_disk_info(element): disk_info = {} disk_info['type'] = element.get('type', 'file') disk_info['device'] = element.get('device', 'disk') driver = element.find('./driver') if driver is not None: disk_info['driver_name'] = driver.get('name') disk_info['driver_type'] = driver.get('type') source = element.find('./source') if source is not None: disk_info['source'] = source.get('file') if not disk_info['source']: disk_info['source'] = source.get('dev') if not disk_info['source']: disk_info['source'] = source.get('path') target = element.find('./target') if target is not None: disk_info['target_dev'] = target.get('dev') disk_info['target_bus'] = target.get('bus') return disk_info def _parse_nic_info(element): nic_info = {} nic_info['type'] = element.get('type', 'bridge') driver = element.find('./mac') if driver is not None: nic_info['mac'] = driver.get('address') source = element.find('./source') if source is not None: nic_info['source'] = source.get('bridge') target = element.find('./target') if target is not None: nic_info['target_dev'] = target.get('dev') return nic_info def disable_event_thread(self): """Disable nova libvirt driver event thread. The Nova libvirt driver includes a native thread which monitors the libvirt event channel. In a testing environment this becomes problematic because it means we've got a floating thread calling sleep(1) over the life of the unit test. Seems harmless? It's not, because we sometimes want to test things like retry loops that should have specific sleep paterns. An unlucky firing of the libvirt thread will cause a test failure. """ # because we are patching a method in a class MonkeyPatch doesn't # auto import correctly. Import explicitly otherwise the patching # may silently fail. import nova.virt.libvirt.host # noqa def evloop(*args, **kwargs): pass self.useFixture(fixtures.MockPatch( 'nova.virt.libvirt.host.Host._init_events', side_effect=evloop)) class libvirtError(Exception): """This class was copied and slightly modified from `libvirt-python:libvirt-override.py`. Since a test environment will use the real `libvirt-python` version of `libvirtError` if it's installed and not this fake, we need to maintain strict compatibility with the original class, including `__init__` args and instance-attributes. To create a libvirtError instance you should: # Create an unsupported error exception exc = libvirtError('my message') exc.err = (libvirt.VIR_ERR_NO_SUPPORT,) self.err is a tuple of form: (error_code, error_domain, error_message, error_level, str1, str2, str3, int1, int2) Alternatively, you can use the `make_libvirtError` convenience function to allow you to specify these attributes in one shot. 
""" def __init__(self, defmsg, conn=None, dom=None, net=None, pool=None, vol=None): Exception.__init__(self, defmsg) self.err = None def get_error_code(self): if self.err is None: return None return self.err[0] def get_error_domain(self): if self.err is None: return None return self.err[1] def get_error_message(self): if self.err is None: return None return self.err[2] def get_error_level(self): if self.err is None: return None return self.err[3] def get_str1(self): if self.err is None: return None return self.err[4] def get_str2(self): if self.err is None: return None return self.err[5] def get_str3(self): if self.err is None: return None return self.err[6] def get_int1(self): if self.err is None: return None return self.err[7] def get_int2(self): if self.err is None: return None return self.err[8] class NWFilter(object): def __init__(self, connection, xml): self._connection = connection self._xml = xml self._parse_xml(xml) def _parse_xml(self, xml): tree = etree.fromstring(xml) root = tree.find('.') self._name = root.get('name') def undefine(self): self._connection._remove_filter(self) class NodeDevice(object): def __init__(self, connection, xml=None): self._connection = connection self._xml = xml if xml is not None: self._parse_xml(xml) def _parse_xml(self, xml): tree = etree.fromstring(xml) root = tree.find('.') self._name = root.get('name') def attach(self): pass def dettach(self): pass def reset(self): pass class Domain(object): def __init__(self, connection, xml, running=False, transient=False): self._connection = connection if running: connection._mark_running(self) self._state = running and VIR_DOMAIN_RUNNING or VIR_DOMAIN_SHUTOFF self._transient = transient self._def = self._parse_definition(xml) self._has_saved_state = False self._snapshots = {} self._id = self._connection._id_counter self._job_type = VIR_DOMAIN_JOB_UNBOUNDED def _parse_definition(self, xml): try: tree = etree.fromstring(xml) except etree.ParseError: raise make_libvirtError( libvirtError, "Invalid XML.", error_code=VIR_ERR_XML_DETAIL, error_domain=VIR_FROM_DOMAIN) definition = {} name = tree.find('./name') if name is not None: definition['name'] = name.text uuid_elem = tree.find('./uuid') if uuid_elem is not None: definition['uuid'] = uuid_elem.text else: definition['uuid'] = uuids.fake vcpu = tree.find('./vcpu') if vcpu is not None: definition['vcpu'] = int(vcpu.text) memory = tree.find('./memory') if memory is not None: definition['memory'] = int(memory.text) os = {} os_type = tree.find('./os/type') if os_type is not None: os['type'] = os_type.text os['arch'] = os_type.get('arch', self._connection.host_info.arch) os_kernel = tree.find('./os/kernel') if os_kernel is not None: os['kernel'] = os_kernel.text os_initrd = tree.find('./os/initrd') if os_initrd is not None: os['initrd'] = os_initrd.text os_cmdline = tree.find('./os/cmdline') if os_cmdline is not None: os['cmdline'] = os_cmdline.text os_boot = tree.find('./os/boot') if os_boot is not None: os['boot_dev'] = os_boot.get('dev') definition['os'] = os features = {} acpi = tree.find('./features/acpi') if acpi is not None: features['acpi'] = True definition['features'] = features devices = {} device_nodes = tree.find('./devices') if device_nodes is not None: disks_info = [] disks = device_nodes.findall('./disk') for disk in disks: disks_info += [_parse_disk_info(disk)] devices['disks'] = disks_info nics_info = [] nics = device_nodes.findall('./interface') for nic in nics: nic_info = {} nic_info['type'] = nic.get('type') mac = nic.find('./mac') if mac is not 
None: nic_info['mac'] = mac.get('address') source = nic.find('./source') if source is not None: if nic_info['type'] == 'network': nic_info['source'] = source.get('network') elif nic_info['type'] == 'bridge': nic_info['source'] = source.get('bridge') elif nic_info['type'] == 'hostdev': # is for VF when vnic_type # is direct. Add sriov vf pci information in nic_info address = source.find('./address') pci_type = address.get('type') pci_domain = address.get('domain').replace('0x', '') pci_bus = address.get('bus').replace('0x', '') pci_slot = address.get('slot').replace('0x', '') pci_function = address.get('function').replace( '0x', '') pci_device = "%s_%s_%s_%s_%s" % (pci_type, pci_domain, pci_bus, pci_slot, pci_function) nic_info['source'] = pci_device nics_info += [nic_info] devices['nics'] = nics_info hostdev_info = [] hostdevs = device_nodes.findall('./hostdev') for hostdev in hostdevs: address = hostdev.find('./source/address') # NOTE(gibi): only handle mdevs as pci is complicated dev_type = hostdev.get('type') if dev_type == 'mdev': hostdev_info.append({ 'type': dev_type, 'model': hostdev.get('model'), 'address_uuid': address.get('uuid') }) devices['hostdevs'] = hostdev_info vpmem_info = [] vpmems = device_nodes.findall('./memory') for vpmem in vpmems: model = vpmem.get('model') if model == 'nvdimm': source = vpmem.find('./source') target = vpmem.find('./target') path = source.find('./path').text alignsize = source.find('./alignsize').text size = target.find('./size').text node = target.find('./node').text vpmem_info.append({ 'path': path, 'size': size, 'alignsize': alignsize, 'node': node}) devices['vpmems'] = vpmem_info definition['devices'] = devices return definition def verify_hostdevs_interface_are_vfs(self): """Verify for interface type hostdev if the pci device is VF or not. 
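If any hostdev-type interface is backed by a PCI device that is not an
SR-IOV virtual function, a fake libvirtError with
VIR_ERR_CONFIG_UNSUPPORTED is raised (see the body below).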
""" error_message = ("Interface type hostdev is currently supported on " "SR-IOV Virtual Functions only") nics = self._def['devices'].get('nics', []) for nic in nics: if nic['type'] == 'hostdev': pci_device = nic['source'] pci_info_from_connection = self._connection.pci_info.devices[ pci_device] if 'phys_function' not in pci_info_from_connection.pci_device: raise make_libvirtError( libvirtError, error_message, error_code=VIR_ERR_CONFIG_UNSUPPORTED, error_domain=VIR_FROM_DOMAIN) def create(self): self.createWithFlags(0) def createWithFlags(self, flags): # FIXME: Not handling flags at the moment self.verify_hostdevs_interface_are_vfs() self._state = VIR_DOMAIN_RUNNING self._connection._mark_running(self) self._has_saved_state = False def isActive(self): return int(self._state == VIR_DOMAIN_RUNNING) def undefine(self): self._connection._undefine(self) def isPersistent(self): return True def undefineFlags(self, flags): self.undefine() if flags & VIR_DOMAIN_UNDEFINE_MANAGED_SAVE: if self.hasManagedSaveImage(0): self.managedSaveRemove() def destroy(self): self._state = VIR_DOMAIN_SHUTOFF self._connection._mark_not_running(self) def ID(self): return self._id def name(self): return self._def['name'] def UUIDString(self): return self._def['uuid'] def interfaceStats(self, device): return [10000242400, 1234, 0, 2, 213412343233, 34214234, 23, 3] def blockStats(self, device): return [2, 10000242400, 234, 2343424234, 34] def setTime(self, time=None, flags=0): pass def suspend(self): self._state = VIR_DOMAIN_PAUSED def shutdown(self): self._state = VIR_DOMAIN_SHUTDOWN self._connection._mark_not_running(self) def reset(self, flags): # FIXME: Not handling flags at the moment self._state = VIR_DOMAIN_RUNNING self._connection._mark_running(self) def info(self): return [self._state, int(self._def['memory']), int(self._def['memory']), self._def['vcpu'], 123456789] def migrateToURI3(self, dconnuri, params, flags): raise make_libvirtError( libvirtError, "Migration always fails for fake libvirt!", error_code=VIR_ERR_INTERNAL_ERROR, error_domain=VIR_FROM_QEMU) def migrateSetMaxDowntime(self, downtime): pass def attachDevice(self, xml): result = False if xml.startswith("
''' % dict(source_attr=source_attr, **disk) nics = '' for nic in self._def['devices']['nics']: if 'source' in nic and nic['type'] != 'hostdev': nics += '''
''' % nic # this covers for direct nic type else: nics += '''
''' % nic hostdevs = '' for hostdev in self._def['devices']['hostdevs']: hostdevs += '''
''' % hostdev # noqa vpmems = '' for vpmem in self._def['devices']['vpmems']: vpmems += ''' %(path)s %(alignsize)s %(size)s %(node)s ''' % vpmem serial_console = '' if CONF.serial_console.enabled: serial_console = """ """ return ''' %(name)s %(uuid)s %(memory)s %(memory)s %(vcpu)s hvm destroy restart restart /usr/bin/kvm %(disks)s
%(nics)s %(serial_console)s

Al (s"e e v0d f!3(aj$B-y# y'&t2U2?@9 H>6A(A(>/J8M*ifPE p6XB7^G'03&J!-$e F/Jl>!$Ul8AAa  @!#E18!1Zb?F _4 .40nCA.xMѧnA[(:DTPFmq4 _WtY\ FهX3^&o`_sZRXECU֚|I?F'@[\b Eq\ ypQUdKq\;rdXXQϓCگ K/ڇTXK^F!}} p$?P8-DT! .3!^_p_U.~4}]~q.x~TA#3 AM JUF#H^?FՃv?Fꖞ@?FĶ?P6 n>JuM{` ?5uJt  ߀It=WKꑔIR ׼IlV #D>5 J[? & ?AbA@0"b8$2zGz?@M»Je&9#fb#z; 9(bE8)E&?,0"E*UB 1 15 g`?CopyrigTt (c)A020M09A0M90c.70os10f?201r30l1a?0i10n.A0 Al0 78s2e_0ej70v0d0@1 4Y-3 7 &iieE9 X7'0UvBŴ &!%$0 D-FAJlJAh7(1h0!T#VcS( bHD: # h#4>T#3 AMT JUFȹޘ?FOd?FoT#?FqJĻ?P6  >JuM` ?juJt  uD It=W36EIRj׌IlB?@-#>5 J[? & ?AbA@0"b8$B2zGz?@M»Je&9#fb#z; 9(bE8)E&?,0"E* 1 15 g`?CopyrigTt (c)A0W20M09A0M90]c70os10f?2R01r30l1a?0i10n.A0 AUl0 78s2e_0e70v0d01 4-3 7 &iie%9 X7}'0UvBi &!%$0 -FAJlJAh7(10!`3'e?FfAs!h, ?nN#<bydb   VJaRTT $jTRUE:]gy_eZ-ֱQ%UOfkZoaU526_HU0"rA 5>#cHD: # h4>T]]9 M JUFS6h+?Fڨ?FC7LԠ?F_oo߯w?P6  >JuM` ?uJt  k6It=WHףrIR'c_hIlX_#>5 ՆJ[;? & ?AbA@N0"b8$B2zG7z?@MJe&9#fb#z; 9(b8)E&?,0"RE* 1 15 g`?CopyrigTt (c)A020M09A0M90c70os10f?201r30l1a?0i10n.A0 Al0 78s2e_0e70v0d01 4e-3 7 &iie%9 X7'0UvB &!%$0 -FAJlJ]Uhz(10!Z'F#?F9=7Z#<E $ ĀVJt!#cGP!2, TYŏ.ZfQoS 5!;KƱQE:F1aQ, jRYSuRYFU>X%e1lER[6C/I .g\wPU5egㅪlaYpO0X2:_;0"rA >#sHD: # h#4>T#3 AMT JUFJV?FO7x?F g8t8?FxMƻ?P6  >JuM` ?juJt  .![It=WBaIIR#!xIl?7O##>5 J[? & ?AbA@0"b8$B2zGz?@M»Je&9#fb#z; 9(bE8)E&?,0"E* 1 15 g`?CopyrigTt (c)A0W20M09A0M90]c70os10f?2R01r30l1a?0i10n.A0 AUl0 78s2e_0e70v0d01 4-3 7 &iie%9 X7}'0UvBi &!%$0 -FAJlJAh7(10!ZF/, E <{O!#4  OR|VAJaN[]PPReSRUE:]?Cm{ɾ__Z"u)Q%U=V9P]Y\QĖU526_HU0"rA >#cUHLuD" # ;3h( TEJ UF~K? P .N  >uA` ^?A u#0p,bKp>t- bFtq>U2zGz?@9 >?A@(b),+,,&" !$i>"&p ]""b)+'"Y,"VY/N*1*1`?CopyrigXwt dc)a0W20m09a0MY0]cW0osQ0f_2RP1rS01a_0iQ0n.a0 AUl0 W8s2e0eW0v0d0:i f x!X]'0U5BX&50 Jl>$Ul8AAC U  B # A!֚|I?F'@ & b E9& y'9m(^L0T,_>_ *XX3U{?Fr\HzdKB\;rdXXXAk}pCڇ K//XTTZbP _?F92}&~`m`[ ]3PY0nC]MnBU,Nvlza&p (91n-pQYWAliPFmqٸ EY\ FXX܏&o`DZ?RXXE_&__J_|D#AD1guOKOOjkFs"DI@[sDz.q[]J,bu?kZ̯_ofd $?QRp؞lq o|T]]9 M !AUFM4_?FX({'?F8as?F'|~?P6 m$ > JuM` ??Su.Jt+  ,ئqte_GqzLxg q}KRI7>JU5 J["a?2& ?AbAw@X"b`$B2zGz?")@MJ&a#b#z\c a(b`)m&g,X"m* 21215 `?CopyrigTt (wc)i020u09i0Ma0c_0o%sY0fg2X1r[01uag0iY0n.i0_ Al0 _8Us2e0e_0v0 d0+1 (/4->3K >72&i@ie#59 XG'0UB2&M$K0 UFiJlJ]UhzP1X!ZCFFMaT J*m $d2rv 7|QL{RJ:bTVͶ$S ?!͟TT QE!K:?Fc\?7YMQ#5UC6ǎȥy\aY = GZgh5Uc?hV[\]PfaiQb_tSF菐-?FҮYT˳ ]3?X"]rA f#asS*ԍ֯JQed~V(L qAb+d ׻HD: # h4>T]]9 M JUFd $?Fs"D?tP6 >JuM` ?uJt  RpIt=WcsW.IRI%l#>Q5 J2zGzK?=@MJA@neb1#z' %#b?(1 ?(Z+Z&B[ "?& &?AE$ \/}% 1 15 g`?CopyrigTt (c)A0W20M09A0M90]c70os10f?2R01r30l1a?0i10n.A0 AUl0 78s2e_0e70v0d01 4-3 7&iie%9 X7|'0UvBiů&!$0 -FAJlJA h7(1E!bTc1 $<K $  OR)vVJ: KvŦM\lQeS -^PűQEaZ9]`Y\QU%U=V,M_Zڕ]X526_HUE"rA D#cHD: # h#4>T#3 AMT JUFEM?FZI{?F'ϸBl?Fĺ?P6 >mJuM` ?uJt  ^J˛"It=W0"شIR}*IlFw#>5 J2zGz?)@MJ߀A@eb1#z' %#b?(bJM)Z+Z&B[?& ?$AE$\/}% 1 15 g`?CopyrigTt (c)A020M09A0M90c70oKs10f?201r30l1a?0i10n.A0 Al0 78s2e_0e70v0d01 P4-3 7&iie%9 X7/'0UvBů&-!$0 -FAJlJA h7(1ZTlT]]9 M JUF5 J2zGz?S@MJA@eb1#z' %#b?(bJM)Z+Z&B[?& ?$AE$\/}% 1 15 g`?CopyrigTt (c)A020M09A0M90c70oKs10f?201r30l1a?0i10n.A0 Al0 78s2e_0e70v0d01 P4-3 7&iie%9 X7/'0UvBů&-!$0 -FAJlJA h7(1ZE!ZF" Z <] Gʼ#OR  ORS|VJ:-US DZ/`#dTK QEUHǻM\Q`YO$iQ%UY $?)/Ep?Fa_V0w?P} 8f u` ?0)u:t7  Id̝txhˑgFݑZo`ҤCU!!`?CopyrigPt (c)J 2\09J MB c@ oKs: fH"9!r< u!aH i: n.J Al @(s"eh e@ v d  #S$#B- #l# '?R;= # &#24 6 6 2$Z# #0U2 3@&B ^#5 6X'7'R4 6!4] 6:e6^P#[EBM #HA%DGH7D6D # 4# *  !"#$%&'()*+,-./012*3>5h]T]]M YUFhop?F>+?FoPp?Fsqє?PM]uM`^?u#t  EF't!"UY%'~P*%4'?"U}2N贁Nk?@dM#&A b(+++&Y}/R/`$B8?V#bo4z @/'/5N }11/"`?Copyrig t(Uc 09uM0c0os0If21r0*Aa0i0n AUlE@ 8sIBe@e0v[@d{"1U%4e,3 7D6%01wEPFXGG' 0U4RD6b1I 
FJlY)U4FX14#A(!L15?Fd4Mf0:O$}#?)߰=^!8U}?Fz d@ lxrNj$!la.PX7h3URWa?3am:"~vn*&^x%>awE:}JC4` k|| a(d08gh_oo)o{T#______Hv?F d1a|3`[]w|c?7oHv.?FMFa|S3w|\£|oHvJKmc~ea|{ oHv(sna|;6n5coMA}#5G~QUHv>Η4?F@va|+|w|JCMHv,f V3Sa|p )zyBr7椏Hv]DWrGsɾa|k eą﫚l2 +7a|QWgQ-<̾э_q}Ra&8JHvˁ?F~廽a|/ͼhzyoχڡHvkT?FS;a|3zy? HvJbS6̨6a|ω9?OHvS1Ɇb._ja|^~_\@{}R -?Qc&Wa#UwvnnE>wm|<jBmm*'Waɱ`nk}.V̲o~'݌ssK/8}߯mEKx6iBJP1 UB}K^~o>P/6i@A}///* ! %/7/I/[/m//&A2mo~Kf΍4D2=y|Bm;m`(=w}zNmWUȣ(K\S}|t|d\L|L+YlQn0T%%$ċ^QUHQOOOO& AOSOeOwOOO&놮 G3!(E> H}OĹ[DnXXt' U!2h`Xn b_ Pg3~qP3fbAu~R)umooo& ]oooooo|o&g-N+_!eW?1 il}1HiDYH'rg|xMMXjQ%f|ɒ= %;vro]ݓinÞ]S( & yӏ&f@e!`¥ G8M Z³绾hv0W5gS1a8|QXSbW~7mȧՠ~=߬lB|y:߾nuTѺ'9& ˯ݯﯾ+A 7>M%- q?j j dEuk uQ9)?FAަIif[WvpQ鐲9 =~OIѮ]xz~êJ 1CU& E`=U4 Xw׵\ o5|݄ erKv7(\2Έf!9"pϼRѡ{ R}\ $”ގ#q[\ƧGJ Q\BĪ&iZf}#Xr_N|1i"@w\y =~X@/n????Te/?#?5?G?Y?3㒾?Fu}A{ZϹ?$=l?Fߣ؝@鴡s! U/q2 y#iwjQ rt}\\ǟ~nxBt@fP@Q*(g˼ߙ`]N1YY͓___Z_-_?_Q_c_u_՜t?F#|IΝ} C)u18h6o]?F\ v!ۊ5,'pm~ "߹3*_CRڟ8Yq_S˼_}5#7o7I[m ռwc?F䱹!? Uܟ*d4IM}l+m/r?F!zxy4GXM g1rH-odip'ʽh:PF}LHM\БqWČMQyS:UŸԟSewZ=x?FrJS=W+}| 8*_ޛU z0~+WzBJ/pVD W| 2'ؚ5F] 0+!31rߠ߲߯450i;h4MLTJMUs`I}$z8.=u2fC}1 O2V{uf`5MEp4}o7o9lM* g`S}+M?=;??1O'9K&)E_i) QiğuPmk6nɣ{j}Q}L]_h?>nj^$oVLɓ߱ mkVo@0Vp > vCu!9QAdmLG~ >Qo1/C/U/g/V/ /T7} 7վkEA*w}+% B`;$gK*"V2w@96קM-nvSiF!nLLgqNy4~ QUcqÍZS)^MO_OqOO-v??OO'O|9O F1'ߢM UPiPK>´}]"}9'uH0 A5]mI;;#ǧ66hx~r JQa~&qrMi~߭Uȴ߯E~߻ io{oooI_ oo1oCo|Uo$C$g;#,b2?.x}CwMLW1b[̵yx8ج`}eM}oJjRݣS62`M\|"XB}B+<5asݴf);M_|q7;ӱ,_|#Vx~ C.t 1_!; }i틺%~?3]9r?Fn?59?趒;>L]ߛ^_u^VQ$2Ս޻}ǸПƯد3EWi{7TS@f𿟜L)l\m߇WMw~쾟K,:֑Zь?FSƊl\?i@9 <>̩#Տߘ]l\ s?BZBDȕ݇k2PU}l\q3h^ ϞOasυϗϩS#ѣn>x8Bި9ڞMh #sUx=BT.< @?F %婢?2#.5>_9K 罰5L]!g6L}MJA?**=k}1TӦy?F'sO=/ge [A\{9M?F>Y=k X*e S(N $9-g[?ow-$.Tzv )\$?3߇Nx'fl\|z $M}~' /IpLֵ?F˦6k]zFq&>}w^YM|Ԅ?F<']?0#Jh#h /xIkHw1]ǥK]P|G>q2:|@_3Y 1I l\"_@=H?` @/?$?6?H?U //////1bJx?FL!E#ԇ}Se γybE?FcN} Ɗx3c }CRD80}>~^ϝ{ª\3s./M?F^Y⣝iϟʁi{s_4I[H?FJ8oxJJϑ+ϊUϰʉhFq$ >LT ]~~[4xYpŬ0Ex}LYTfxy."ooo#5yP?Fឍ~QTJZvW9 ?FڡGSۈrQ5|橁5ǤˑIGYh($Qw~5tjl@fxF# -?Qy?FOT ʼn@-1ş9Ыva*S?FR?fB V1?ݰف]沛tk{Τ/Yv{ʙޅ݉}1MZѰb$%7I[my -}\SU=cx|[)T*Oў}|d.KnޟR{nIxߒΫ}@B j-2YpM:H`FZT=e-B~yh,?/߯}%/ASew߉ߞyf}x0 Q>zsg=ڟUbw;IS ڀ>3LI~ goSy?~wib`W?YM7+mɼ7V9H3pvsSs/M/}~iV9@gM!&K]oZBdPeF]ʼY mV6| ig5 ח9]#.Sp_xuT&~?Ftm*ԍ`_rYHp O~![ިKmU|ױ.jriwo /F'gy|yrqFYH?߁h}FUϯz5ꞵ,WCˑB`Ԟ`H}sU2w~xoӄڭfDjt! J Hna)(J6w~nv؜p oo1oCo)______ֹ2JI`H~Ͱ f tɾ4EwߗMMpW[̵yط]?F3,棣SLLh ?mu`Xr}Q͘~?Dp);M_ *ֹci+!c) x+Ҳ{&3oٶ[i?FqTƊF,]H'^ <Əٶ H;z0=}F,XQʮ}7$zLٶzTǬGWF,4$ZgsRF{^ j4Z>~[T-G^8?Fz^.H ~L?%aasS Xޟno#ά:HTFF}6CF J'bO[;| ~^-!3EWiKYYx4?F -t[|L?r69BMj-1(?FN^4>Nsz9ET~.Q^=#94o4>ݠ_J? H~&U7`dU \(5Vm H FۄܱykyU} қh?nBN 2DV3ïկ IQY1Q( &&] 6P^JY8}Soj11Qً;]n;P^u񟷱g_ G!.H?Fh\<;Qq7B1b;Y|#Kk{[oY??cgCo;M_q4' IsЪN =͐\h \[Vmq:Dyxj?FYi&?Gbx)U͖֒z/^bA/ F5f;d#\EH{=Xj|8%5 1C Id匽 轫 (Z-q͜DD D58|UBn FxV5?F+A玕R\ȚE Fk$!2 +\HC.P?%ҧnP A @6D .THD: # h0>Th]]9 MP IAUFOѬ?FA^?FMOh?F9d*C?P68R?>$y> n JuM{` ?e ;  Su*Jt'  q3 Pmta{Z_ebN3.JvH?zhǿUv?vh e:Z}.b%{>JU2N?贁N[?)@M#J&A@wbI(bW)U+U+U&J π(<!lJ ?%P?AO"d+b Y)(u/%B #A1A15 `?Co}p rigTwt c)x0]2`09x0Mp0]cn0osh0fv2Rg1rj01av0ih0Wn.x0 l0U n8s2e0en05v0d0:1PY>4#-#,M3 M7#(p0b59 DFXGdG'/2WUA;F%!$a dF$&#l,&}>UhET ]]9 M qAUFr ?F l?F?Fg!ِ3u?P6 L}>   h,(JuM`l? Ek"k6HuZJtW  7OttG97i;y4v'ʚc>JU #<H!-A?^% ?AJb^ @bbbz #"b#$"b$J 2zGz?<@M7#J&(&$%" ;'"*@!EA#5 hy<=A#B115 `?CopyrigT}tk(c)kW20@9kM@]c@os0fBR1r05Aa@i0Wn.k ] lP@U HsTBe(@e@5vf@dH@/"148#-,?3 7#9 i b59 6XGG'0U?RF!y$0 F ZlJ]Uhz 1hy1!5(`A#FB1?F#)Q PV?d"cP΄ ťc(Mb(8"J! 
jo0o Cƣ8-$DqaE4d\b@d Kc;d .#8磵g.p~?FTHR?PV7 /!i q7ikWPV]DT#3 AMT 5AUFr ?F l?F?Fg!ِ3?P6 .6> JuM` ?IKu8Jt5  Otyt_G97iͅ;y4vʚIA>JU5 J#2zGz?@MJ/&A@bm#ztc a#b{(Wb)+&B#!!5 `?CopyrigTt (c) 0W2009 0M0]c0os f 2R!r 61a 0i n. 0 AUlQ0 8sU2e)0e0vg0dI0!$[#8"8"?6 ?A$/%-/# '6&iie59 &XG}'0UBi6!40I iF}JlJ(>UhiI !!AaR4 Rx J:j]ݿb~)_.S buBqVy3'?F(ËE?P,a,[ /A1YQYV4hA.xeAR +3l4xa@ R#Bsoogrb5%UGD  3 h0TdYYB UFW)eK?FI{Ä?F9n;*Y?FI_tnw?P} tf  f0 D Xu` ?00DDXXl)uvts  < %t"F0" %'ӿp %,'qdX¤U!!'"`?CopyrigPt (c) 2\09 M c oKs f"!r !a i n. Al0 (s 2e e v0d0s"#S$|#B-#l# '?Rw=#&|#466 2|$Z##0U23@{& ^5 FX`7Th]]9 M]AUF?FMHR  h$ JuM` ? 3YY.KuHJtE  0oI~$tq/袋[Dmk.IQ>JU2N贁NK[?<@M#J+&A@b](bk)i+i+i&Jy%#7 ?3#) ?Ac"i+h//%*B%#U1U15 `?Cop rigTt (c)02`090M0c0o%s|0f2{1r~01ua0i|0n.0_ Al0 8Us2e0e0v0 d0"N1YR4#--0" a7&%0bE9 XFX'xG'0UB&!$$a xFJlJ]Uhzs1Z!1(`%#FXxcӔ?F>D [TQ>Q) R "J:!l?FQXP\C\6m/m$BQYsZ_?FFt%7?F{hUuW{ ]3\+B MTT0TDiQ>8aQ4? /Ȟv/x i[? `RT*?D`@ S <oor5%QEUVV__ <#<>e__f{,Vxh`d3rAg 4 ccgd$ymAv AGuPHD: H h4>T]]9 MJUFoc?F24ͷ?F7?FXPq?P6 >JuM` ^?MuJt  ~27It=W]tEIURF]IlGEw|>2N/Nk?<@MJA@gbb ) +R + &J[_ffy) ?/3)"B5 !!5 g`?CopyrigTt (c)6020B0960M.0c,0oKs&0f42%1r(0a1a40i&0n.60 Al|0 ,8s2eT0e,0v0dt0!P$-/ 3  7|&&iie%9 &X7"G'0UkB|&!$%0 "F6JlJAUhh71!!T $#< $M DRkVJ:+;5jB\aQZS kBձQ3aZ9]UYQQU%U2VZ?9=_30B_ZK&DiaX52+_=U"rA $cHD: H h4>T]]9 MJUFoc?F24ͷ?F7?FXPq?P6 >JuM` ^?MuJt  ~27It=W]tEIURF]IlGEw|>5 J2zGzKt?<@MJA@gb%(b3)1+1+T1&B!!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al 0 (s2e e v 0d0!$)[&`h4|9 ?A(:/[)-#K '6i@ie<59 X7"G'0UkB6!40 "F6JlJ(>Uh"I !e1+!AaBc1 $<@ J:?+;5j:\YQ,RS kBձaQ3"U9]MYIQHD: H h4>T]]9 MJUFoc?FXP?F7?F+;5jq?P6 >JuM` ^?MuJt  ~27It=WE]tIURF]Il\tE#>5 J2z_Gzt?<@MJA@gb]%(b3)1+1+1&B !!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al 0 (Us2e e v 0 d0!H$[&h4|9 C?A:/[)Y-# '6iie<59 X7"G'0UkB6!40 "F6Jl*$>Uh @3 !Ie1+!a1 $<@ JU99YUQIYEQHD: H h4>T]]9 MJUFc?F24ͷ?F7?FXP㟸?Pn6 >JuM` ?uJt  ~o2It=W]tEZIRF^IlEw|>5 J2zGzt?)@MJA@gb%(1+b3)O+O&*B!!5 g`?CopyrigTt (c) 2U0 9 M c os f"!rԶ !a i n}. Al 0U (s2e e 5v 0d0! $![h4?6 ?A:/[)e-# ' 6iie<5(9 X7"G_'0UkB6Z!40 "FR6JlJ(>Uh"I !4e1+!ahBO $<1 J:#]Z9=_30:_RS &DQaQ3"U9=YYQMYIQUHLuD" # ;3h0TlaaBJ >UFE3kh?F3?F_T?FU Whu?P vN>uA` ?)u#>t  5@{It=W#H9IRj;Il廤#2zGzt?@9 >A@gbb )bJ )'+'&>yga?|& ? " +'+,5'UN!!g`?Cop] rigXt (c),02d09,0M$0c."0os0f*21r0W1a*0i0n.,0 Alr0 "8sv2eJ0ej"0v0dj0!Ea$-X3 7|&$0fE 6X7G/'0UaB|&%!$e F,J l>aUl~1#1!jQ?FN @ ?[ql#<3ٽECUS(: 9[#UT eW3wU( R@SRQY?I?dXE ^ߑC8_J_ dX _2_=oV_D#OOOOO _vVFmR^\j:=pd_ofI Њ?F0떜lZ$&Оl ?:Ynx3~ ,az kXao41}gK}YWq`m0BTf|UH luD( " #  #A h]0T]]JMUF:%?F6emP?Fiz9vJ?P N&&M %4&U8HCM 9 / 9 / 9 /9/9CWW W WuM` ?% &W ">"R'&j'&~'&'&'&;&SuQ)t  bD%t!2 F_%7J">67šʿ~ U2zGzK?@Mk3{6A@n4b3z0312b808;6Tu3 A A2`?Copyrigxt c)W@209W@MO@c.M@osG@fUBFArI@AaU@iG@n.W@ AlTW@NGsBeu@eM@5v@d@c2AQ}Dl3R[u3-A B?V?4 ?E-Xu3,C ,GVl5O@(E ԕVXgW|2'0UR)V!-TA VZMU@QjQ1i81g\a?F;|~4P~G #W.?a4Hd3.䛵?Fp _`9},jX?F>@ ]3?iFS_lUӧ:ޜ !hރe#ģ[ Q " i!$a4BH:}k_wg0wEuf.29]~4pO˯ô>P|o!S#_`1y;6}wfl{o@opoQQ$Ug@EiE{iooeD?FD ~a]#Q`7\X!?Fw^?FrY?Fo2?P'DT! /3Q9 մ7'||U|6ys r #, Jދ]? $oe?Ň41:/ B"tŇrf5EU>PbtItnɟ ^HY*Gą 2DhaspA]鯱ï ֯z %7I[mOb罞О )E@ͿXP-?;a+ >L &WEW thqϧ"mϏ] 0BTfߪߘW4. Vwp,> i\_XT&8JZ;?FT}:/?$qxE~yE?FA5g`u/V/ 123\?FrMT/?d3~~?FEa,蕱/?CZ@6 De //*/]0T]]JUFp>)/Ep?F2왵??F ɏD,"7?P uM` ?j`fpu#zAtw tnٳ \b "OMG2N贿Nk?@dM#%&A2 bW(be)Jr+r+r&*!ܿ!`?Copy_rigt_(c)2W09M c os f"!r !1a i n}. 
Al<0U (s@2e0e 5vR0d40 "!3]$#Ry"&#4?6 Q?]"W*s*;'-X# '6% h5 RFX$GrG'0UBű6!4 rFJlU!h#'A(:FYSocg0vr p_y $(4!Z[7?FL30pS a۫YQ?\I`X3U5 Tީ?FʨQYC\f"qoY5U%?Fq$B\F\bJFXz___\Q# __1_C_U_g_VNx?FM=\>ۆ\34=(_V5T?F}\#R)\b@QoVr]?Fc߰#Z}L \$6ԡyqm0:?F u\.Iſ\)?oUU/ASewV?FN,]\\wzV]~?F@(r\F-~Ȩ\(p|ymIxg?F[j| 7+*y\= qmh?Y?FYgR\Aa\ulӏɟ۟UY=Oas!KC4?F}7PM\8HJ|\ HNVh?Fwi@­\A>xAV%+j7ٽ}#E~\I&)5eF=s '7["mSpуdĿֿRtYk}Ȏ~+]~nn)`]͟Cc֦ъ]#.?FPZk+N:2\y_@W߁2ʞ54}d\e҉qm}&?FUfK­thV\9\ލQak}ߏߡ߳~VZwۿ4&b-} ߜ.c|~kd$~Y""A6\lJ(i0(sVTloV?F} \ǀ_?$qmR'%"2?D= bY\r6yލ 2 QYI0`鎍1/ 48Ve[bV$?Fi뭩1?0pS!K;?F"~[:{Q~b?S)qm#|W-?FG~b .f-?R[Ϡ ލ /*/fɾw2Op>)/Ep?Fd(_a9~ {F!_ ~`2DMz2m^Gm}>IލPobotoj ___o o2o̺m@UP1-JO}^^fon l?FSƊ1̓ ;]\OogMNE0A`^?~uݟ$E~ɰ~%^~ =7}ÀMuҳ1?4tx鑡?3:\ `r $6Hj(i?F ڷ1^̄ZϭSԢ%^XP$֧zw]<<E?F6 "Rf2 =s 7Pq=[~1̇1%]?F | .@Rdc)Jl pe},X|4]-Huvzxډj?FwJ%H]ǕDG7t{GtY~Ǔ.PH#T~]?9+T =~B3~-L /=m$+)Hϰʛ&8J\n|JoWx#0z ),?5Uz!xz?Vsf&; `ߌ 猖gߺ8˺A}2!¹diqBZ$@S =u V N,.zu~6J1&ߴEWi{f&~c?Fۿ>4U, }J\1?Fξ9rзX=YR Qm#07q X; I6 =4T37 xU,=A S)d= Yqas{%?Fyx{wmQ7Z6n4Nl]?F7c2Gsj3j뙚;(N8 i<=ۇ4W-ɫ-9)]-?F`x}SoA뙳x@///?"?Rz//////xmYԒ?F0ufx)r蜅وWjFQ1?GUF MDe9gsaEo2I^~T@s@܂]Y/|sMv۶ IY?}lD?B9kP/ߊ.x/ONϋaz`3O__)_;_uOOOOOOxHOB?Fi$0%oAlOo-M_k"W?F[y\x_ ]g蜢, )+b77bћ9}DxZ]Qey=zFVjx/I bvdRo$6HZooooo  p? {,Բ5~}j8>6zB?FӺ ~|g=)s-?F8H~ݖ}T M?lfa=Kh݁5};p/"_f/tl 8at""4FXj| ߬XNT'(/E9?&I/ i {.PvwHkOQ qӟĶO@ߒ#'ߛ[w̜zfdm ___ooŅp______DAL4?FGww̆Ia{VU$oEFem(5tc(yÂ;UKG whsi}2Ca6o? ?F20?æmEyo6\iq,"4ߤsZ4?F"EwH | QUY% (?FZ <#a;:wPID8 n7 ~f* /e'Mǃac5'?Fԫ|>ka?I}|b:QH,>P גT_HZ> )6s ;gLjOhLAݨF #ȱ ^OB   ~|dҴ o@+Xa o8sG/ o P : x ߨ |P d >R 1T Ӵ hV ; "8 ] LY ѻ[ z  z  ʔ] B:  d&/#/5/G/Y/k/}//////// ??1?C?U?g?y???????? OO-O?OQOcOuOOOOOOOO__)_;_M___q________oo%o7oIo[omooooooooo!3EWi{ /ASew h ԁ < bҏ+=Oas͟ߟ'9KH  Xs_ eт ~  n g    , J o"/  Yk_ S HUFDfP h TPYYhUFL&d2?F`0 ?Fx<F BP(c?P? w<lh -^&88Kh;_ 8srY3 8 83 8 838838 &8&381&8E&38Y&8m&38&d*8&38&8&8&]; 63!8!6#8563$8I6&8]63'8q6(86d2)86*8f6+86,8f6-86.8fF/8F08f/F18CF28fWF38kF48fF58F6U789;8fF<8F=8&F>8 V;fV@83VB8fGVC8[VE8boV;VG8ȗVRH8VI8VJ8VK8VL8fM8fN8-fO`8Af;UfQ8fifR8}fS8ffT8fUUVWXZ8ff[8f\8f v]8v^8f1v_8Eva8fYvb8mvd8fve8vf8Hvrg8vQ;vi8vj8k8l8+m8?n8So8gp8{q8r8s8tuvwy8z8{8|8/}8C~8W8k8Ѕ q  0A`Ap4icbptpo\ evq !aE  -QuYda|z !T3Q_`|]1n@` X?W&&.11 T_?xpxx p  pxp%{wxw?w _w xwOw  p JwwxwpxbL&d2鿃`0 +?? 
{ҿ ȿUGDF # hz8T YY# B{UFL&d2?F`0 ?P} [u`ho?u#   @B<66D^nY^-] [ [ [ [[[&7^"&[6&[J&[^&]r&[&^&[&d^&[&[&[&[6[&6[:6[N6h^b6![v6"[6#[6$[6%[6&[6'[6([F)[F*[*F+[>F,[RF-[fF.[zF/[F0[F1[F2[F3[F4[F5[V6[V7[.V]_BV9[VV:[jV;[~V<[V=[V>[V?[V@[VA[VB[ fC[f^2fEL[FfF[Zf^nfH[fI[fJ[fK[fL[fM[fN[fO[v^"vQ[6vR[JvS[^vT[rvU[vV[vW[vX[vY[vZ[v[[v\[][&^[:_[N`[ba[vb[c[d[e[Ɔf[چg[Q^i[j[*k[>l[Rm[fn[zo[p[q[r[ʖs[ޖt[u[v[w[.x[By[Vz[j{[~|[}[~[[Φ[[[ ު-2<ZZnn&&"&"&6&6&J&J&^&^&r& r&&&<&U&&&&U&&&&U&&66U&6&6:6:6UN6N6b6b6Uv6v666U6666U6666U66FFUFF*F*FU>F>FRFRFUfFfFzFzFUFFFFUFFFFUFFFFUVVVVU.V.VBVBVUVVVVjVjVU~V~VVVUVVVVUVVVVUVV f fUff2f2fUFfFfZfZfUnfnfffUffffUffffUffffUvv"v"vU6v6vJvJvU^v^vrvrvUvvvvUvvvvUvvvvUvvU&&::UNNbbUvvUUƆƆچچUU**U>>RRUffzzUUʖʖUޖޖUU..BBUVVjjU~~UUΦΦU  $BWW.(%6QDUQ?S>S3Q?RdV"qwX[?( 8RLS`S~SSSSıSرS챔SBRS(Sv|Hv|Rv|\v|fv @ 4񸣢vvy|v|v|v|v|v|v|v-|A|U|U$$}|.|8|V|`|j |t|~1|E|Y|mx؆؆|| "|!"|5"| ]"|q"|("|<"|F" |P"ZZ"|d"|n92|M2|a2|u2|2|2|2|Ȗ2|Җ21@1 AQB|eB|"yB|,B|6B|@B`|TB|^ދQUVVVVUVVVVV SRVfff&f0f :fDfNfSRbfUlfvfffUffffUffffUfffvU vv v*vU4v>vHvRvU\vfvpvzvUvvvvUvvvvUvvvvUvU$.8BULV`jUt~UUĆΆ؆U U(2U<FPZUdnxUUȖҖUܖU"U,6@JUT^h "UȦUܦb&l&U&,@UTh|U&&&&U&̶U046UDXR6lUU66U*>URfz6UFFF$FUUjF.UBFVjU~UFFU(1H1R1\1f1p1z1Ä1Î1Ø1â1ì1ö11111111AAA$A.A8ABALAð÷jAtA~AÈAÒAÜAæAðAúAAAAAAAQ QQQ(Q2QBU=Us/?Se?SU?SѲ?SđPÿ@qP?@R?TV?ܲu`W@jqRu`a@r URfchqb?SuLiup`u`@բѠ+u`ѓb&Y faH1crgg`Ÿ /"(úfiic` dFpaMvaf&afl,s  a?5 %` pq q wu`Vis_PR Y. hm!#5N 93N"+b`?Copyr gt  c)M20Y9MM cCo"o Lÿ҆u` ?u~pQ(-DT! @ p@U au@`ba@? ìҀ@^0FH%@Q.FӀOռB\i@?yQ/FQd@ߐRrf i`R$pnst*oM Pe]ف p ջ bmCuE``-u5@`^0aՁ^0d qn pN<udRE %EЎ%$-+h 0JTlaaM>UFN&d2?F`0 ?F|G?Fm]ri?P U>tA`  M?t q۠ FZ>u hu!s7YaMJMRKm? ?M>bA@bKbmbz #e8 ۢ =# b#2O[(ta k-3thk-![(}/tГ/w t/ t/w t/ t?w t&?t;: Ne F;,";&a#68;R;W *P>B m42A2A`?CopyrigXt _(c)i@2dW09i@Ma@c_@osY@fgBXAr[@Aag@iY@nU.i@ 0l@ _HUsBe@e_@v@-d@\2lH#"p# @K4JUlQQQQPA!@@2Ruݯ?F#& hnGHge) ]NUq Z?FG ߄\.r\oMGhEUKar?F3ojp0m]-Eά\?T\iEUe^?F`1<<0mN[ʰ~\on\i^*S__` 5C3\i__B_TFTA'VW@qqx_[QQ JR! J!4J78 h !a!hE!lU]T T!"#$%T__o{?FlӖ|н1|Ϥo} rhQя Me]otv۲ǀFRH0ύ|dr٣|WӍ[Smp"pʍ|q5JK tvp?FZEoZɍ| 9EPڣ||lbtvX9|"@G!/ ւ?F9|H&3|?<=$_4F+\>30m_d 5\i`U0m~ !Qx$(gN/ ?P0wf>nZ?l? r?;{|{*K1?Fݿ K 1wz_ 9*S?FAcM@=orOyg?_P|* /$?F#K i >捵JLJ=|*zJ@w/d;ЍuI@ͺhˏ4pHD: )&# T=h0>Th]]9 Q BUFܷqP?Fԁ43a?F7??FtYOȻ?P Bt`  XM6 ?t׻4n%O__Bu aquo7>KJASTfw?X ?Az@bbb! ( 6f )  '+ ߈?I-Z#g!z A Qu +%/,8)E!'+ k(?r+o?(a4 8) E"#{ #BstG8tU: Ba L>2?!88;D;68?5;D; tj>8LALA1 `?CopyrigTt (c)@2`09@M{@c.y@oss@fBrAru@Aa@is@n.@ 0l@ yHsBe@ey@vZ@d@v2h<M#l#@ M@?4>UhjA6Q>1| L`Fkdj]J}vMjR]M 6)fMB@Wa\ k_稇avCe=ESeR9Y? l`gESeFeabkod iSQޗh~??5@+ c;Ꞗi"2_U3rA/04sCHD: )&# T=h0>Th]]9 Q BUFdHKJAUVJp?X ?Awz@bbb! ( "f Փ'+ ' qq?z A  +/.&68))E! '+ go?W-sU!n*a4 8) "u#{ u#D!st?8tM: Bav L62?!08;<;60?5;<; tb>{8DADA1 `?CopyrigTt (c){@2`09{@Ms@c.q@osk@fyBjArm@Aay@ik@n.{@ 0l@ qHsBe@eq@vZ@d@n2h<M#l#@ M@?]UhzbA.QR61b L` KdjMaQ  6fM{"@Ftnq\[3K_,7e5EGe<\ֽE?Nf[ NA_qd7eEGec6i\m j'f܋h2_UB3rA'04msCHD: )&# T=h0>Th]]9 Q BUF)8/?FFT?F!/$?Fᱝ̻?P Bt`  vNC?t|¢ϛT H0Bu auo7 >JTA0f88v?WYALv@z bbb!'n 6 % )! ! ,+ <?' o?z #a47% =) J""#K#{% (+ ( tQ#BXYщ?A> ?A "+(/+/=/O/a!y:5rl/@ sU10( /???? "?0<2?M{!b 1/Ĝ/*%,"t:"t>TQQ'@`?CopyrigTt (c)FP2`09FPM>PcTh]]9 PB IAUFJ1>?F}?FF(e?F=PFo?P t`  0!?tj$h5q+ tЖZu Mauo8 >    PEJƱZVJp!?&. 
?Az@b_bbe!Y(݉e 6 ,p )l l w+ g?Z-#U!׷!ze Ae [+j/|//-g#+ s"?(a490 9_b!#{t0#tN8tϻ: Ua BL2$18KE;F?VEK]; t>B 4AA0`?CopyrigTt _(c)@2`W09@M@c@os@fBAr@Qa@i@nU.@ @l/P HUs3RePe@vEP-d'P2h<M#bl# M/P?]Uhz@AQ1 L@3FO#۲?F?o2zP dj|aW{'8cUF:&x0?F?F?F_'qoO 0Yigull4aj?N~f0c~o+ fEee?F<'?F@]?FwF%ޕol}dQ3ll BXl귺m|?bBhaU~e`ba`?FV ?F?Th ofl?PI3qoikeðl8` |GT?heFnF$OĒ?F o* leowk~y{f~tGg˂a ,8b1;MHD: )&# ;h0>Th]]9 PB IAUFGF?FEs?F+\(?F~@ t?P t`  9`Ik?t{ͻIu -ZrqKZu Mauo8 >    PEJƱZVJp!?&. ?Az@b_bbe!Y(݉e 6 ,p )l l w+ g?Z-#U!׷!ze Ae [+j/|//-g#+ s"?(a40 9 .*2!#{0:#t8'tϻ:Ua L2$18K;F?VEK.; t>B4AA0`?CopyrigTt (c)@2`09@M@c@oKs@fBAr@Qa@i@n.@ @l/P Hs3RePe@vEPd'P2hb<M#l1# M/P?]UhzAQ1 L@3FMbX9?F4ҫP dj}ah8cUF[~K?F|D?F+Xc?F&PRcGoO 0YiM+FllZaj}0%|f0c̭ fEe.V?Fȿ̅)`oSt?F?(X:olGGTll .Xl{e{|rww@haUe:&x0?F%Clw?FAz?F 1vņflux1qoiFA(]`t߼ |~Th]]9 PB IAUFGF?F\Dֳ?F+\(?F~@ t?P t`  9`Ik?t%h2tMu -ZrqKZu Mauo8 >    PEJƱZVJp!?&. ?Az@b_bbe!Y(݉e 6 ,p )l l w+ g?Z-#U!׷!ze Ae [+j/|//-g#+ s"?(a40 9 .*2!#{0:#t8'tϻ:Ua L2$18K;F?VEK.; t>B4AA0`?CopyrigTt (c)@2`09@M@c@oKs@fBAr@Qa@i@n.@ @l/P Hs3RePe@vEPd'P2hb<M#l1# M/P?]UhzAQ1 L@3FMbX9?F4ҫP dj}ah8cUF[~K?F|D?F+Xc?F&PRcGoO 0YiM+FllZaj}0%|f0c̭ fEe.V?Fȿ̅)`oSt?F?(X:olGGTll .Xl{e{|rww@haUe:&x0?F%Clw?FAz?F 1vņflux1qoiFA(]`t߼ |~Th]]9 PB IAUFXIn?FLw¾?F+\(?F-ku?P t`  -k?tZ߅BTu -CmoLZu Mauo8 >    PEJڍU[\/(?&. ?AbbY!z@Ae bbCt c(a4b  b(h#{ h#΍t(tI*#0f88vb.#i L5<1v(&$7/(D%"6/5;0( t.*B#11`?CopyrigTt] c).@2`W09.@M&@c$@os@f,BAr @YAa,@i@n}..@ Alt@U $HsxBeL@e$@v@dl@"h<M#l# Mt@?]UhzAA!J@#FMbX9?F$x0OԛP dj}תa4(SUF[~K?FE?FSc?F?#oO 0YM+Fӱ\*aj,.}a$?# XEU"n?F +?FbEd?F v<olTBJ\LMYBltavXlp@XEUv/?Fpvw;pg?F $hއvol/AD0qYJg ַBl)$ԞXlD?B:XU^F.fّPKk^Rl_[Ei1GAf`dSOq+a,(bvHD: )&# ;h0>Th]]9 PB IAUFXIn?F(?F+\(?F-ku?P t`  -k?t7/;3bu -+CmoLZu Mauo8>    PEJڍU[\/(?&. ?AbbY!z@Ae bbCt c(a4b  # b(h#{ h#΍t(tI*#0f88vb.#i L1<1v(&$7/(D%"6/5;0( t.*B#11`?CopyrigTt] c).@2`W09.@M&@c$@os@f,BAr @YAa,@i@n}..@ Alt@U $HsxBeL@e$@v@dl@"h<M#l# Mt@?]UhzAA!J@#FMbX9?F%x0OԛP dj}תa4(SUF[~K?FE?Fdc?F,?#oO 0YM+Fӱ\*aj ,.}a?# XEU"n?F{ +?F(%/Nd?F<olTBJ\MYBlU{XlpDc@XEUv/vw;pg?F^$hއ?Fvol/AD0qYf ַBls$ԞXl+?B:XU^F.fّPyKk^Rl_[Ei1GAf`dSOq+a,(bvHD: )&# T=h0>Th]]9 Q BUF贷F$?Ft\@?F\HRq?FƯK7?P Bt`  x#?tK#ll$nZVBu auo7>JA]^'? ?Az@b ( 6f  )ϓb= + _? ' 88o?z A s +=  /)/;/M$[+["-j!Q U1[!x(ar4 ) "'}# }#BtΜQ8t_:Ba~ LH2!B8;N;6B?5;.N; tt>8VAVA `?CopyrigTt (c)@2`09@M@c@oKs}@fB|Ar@Aa@i}@n.@ 0l@ HsBe@e@v@d@2hb<M#l1# M@?4>UhuQuQPtA@Qj LZFχϰ 7j bL\2] 6R1bMB:MbPdc Ɵmdb`GeGEWe$oZ\aibvIGeU@Vϰ+@?F23<~pk_xu jlTh]]9 Q BUF#/?FKGY?F\HRq?FƯK7?P Bt`  ~B?tIH#ll$nZVBu acuo77AJA]^'? ?Az@b ( 6f  )ϓb= + _? ' 88o?z A s +=  /)/;/M$[+["-j!Q U1[!x(ar4 )] "'}#{t }#BtNQ8t_:Ba~ PLH2!B8;N;Q6B?5;N; tt>8VAVA `?CopyrigTt (c)@2`09@M@c@o%s}@fB|Ar@Aua@i}@n.@U 0l@ HsBUe@e@v@d@ 2h<M#l# MT@?4>UhuQuQtA(@QH1j LZFهgϰ 7j bL\G6] 6S-fMB:MbPdc mdb`GeGEWe$oˏZ\ai?bvIGeUWeVϰ+@?Vpk_xu j aGeWe _ooh2K]u:3rA 4s2HD: )&# T=h0>Th]]9 Q BUF/ ֛9?FD6Hr?F\HRq?FƯK7?P Bt`   a1?t -_9#ll$nZVBu auo7>JA]^'? ?Az@b ( 6f  )ϓb= + _? ' 88o?z A s +=  /)/;/M$[+["-j!Q U1[!x(ar4 )] "'}#{t }#BtNQ8t_:Ba~ PLH2!B8;N;Q6B?5;N; tt>8VAVA `?CopyrigTt (c)@2`09@M@c@o%s}@fB|Ar@Aua@i}@n.@U 0l@ HsBUe@e@v@d@ 2h<M#l# MT@?4>UhuQuQtA(@QH1j LZFهgϰ 7j bL\] 6-fMB:MbPdc Ɵmdb`GeGEWenZ\aiavIGeUWeVϰ+@?Vpk_xu j aGe@We_ooh2Kt]u3rA 4s2HD: )&# T=h0>Th]]9 Q BUF[@#?F\O?Fo?FƯK7?P Bt`  ?tJ8Xj$nZVBu auo7>JA]^'? ?Az@b ( 6f  )ϓb= + _? 
' 88o?z A s +=  /)/;/M$[+["-j!Q U1[!x(ar4 )] "'}#{t }#BtNQ8t_:Ba~ PLH2!B8;N;Q6B?5;N; tt>8VAVA `?CopyrigTt>(cu)>2`09>uM@c@os}@IfB|Ar@Aa@]i}@n.> 0Ul@ HsBe@e@v@d@B2h,<M#l# M@?U4>UhuQuQtAJ@QH1j LZFهϰ 7j bL\] 6-fMB:ʿMbPdc 2Ngodb`GeGEWe$oZ\aibvIGeUWe4?Vpku j aGeWe_ooh2K]u3rA 4s2HD: )&# ;h0>Th]]9 PB IAUF?FË.5?FQ?F?P t`  ?{R   ] EJ]_DRb(?&. ?Az@bY(e 6f h )ϓb k+ _?Z'e 88o?ze Ae s[+ h g/y//$# L:5 /sUK1*a4 9 2(E#{ #t8tϤ:U a L2T18K;6Ԉ?@EK; tͺ>B40AA0`?CopyrigTwt c)@]2`09@M@]c@os@fBRAr@Aa@i@Wn.@ 0lPU HsRe@e@v/PdP2h<M#l#A MP? <>Uh$QQ(]A(Q1 LZ3;'xP 7jbbFbL7a 6{fM8cUFnDdTh]]9 PB IAUF4{"?F=N?FQ?F?P t`  ^}W;o?tQxČ( 뾅@l* 9u auo8>   ] EJ]_DRb(?&. ?Az@bY(e 6f h )ϓb k+ _?Z'e 88o?ze Ae s[+ h g/y//$# L:5 /sU"K1*a4 9 2(E#{ #t8tϤ:U a L2T18K;6Ԉ?@EK; tͺ>B40AA0`?CopyrigTwt c)@]2`09@M@]c@os@fBRAr@Aa@i@Wn.@ 0lPU HsRe@e@v/PdP2h<M#l#A MP? <>Uh$QQ(]A(Q1 LZ3;'xP 7jbbFbL7a 6{fM8(a?nDFF!@~qWoO 0dTc I\rdcd bjUkgaPbeEef"v?Fr6d?F?kĀqo`OivddlXr`tF0sK|٢jr3t!KU:G}?{]rLl} ]|q3xnTrg?F<ے7pooZgtfio,V|=] 5ve&1?Fpn?FTh]]9 PB IAUFGC<-?F f?FQ?F?P t`  L?RD)o?tC%)Č( 뾅@l* 9u auo8>   ] EJ]_DRb(?&. ?Az@bY(e 6f h )ϓb k+ _?Z'e 88o?ze Ae s[+ h g/y//$# L:5 /sU"K1*a4 9 2(#{ #st8tϥ:RUa L21*8K;6?@EK; tR>B4AA0`?CopyrigTt c)@2`09@M@c.@os@fBAr@Aa@i@n.@ 0lP HsRe@e@vZ/PdP2h<M#l# MP? < >Uh$QQ]AQ1 LZ3;'xP 7jbUbFb7a 6{fM8cUFnDFK!@?FDqWoO 0dTc I\addǼbjUkgabeE(aFjv?Fr6d?FkĀq o`Oi^j^r`t0sKÊ#|pr9t!KU:M}{]r,Ll}]|9xLufTrg?F<ے7vodObdcZgtfi,V #|=c ;vLu&1?Fpn?FTh]]9 PB IAUFĊ?F$n6b?FQ?F?P t`  ro?t,Č( 뾅@l* 9u auo8dA d  ] EJ]_DRb(?&. ?Az@bY(e 6f h )ϓb k+ _?Z'e 88o?ze Ae s[+ h g/y//$# L:5 /sU"K1*a4 9 2(E#{ #t8tϤ:U a L2T18K;6Ԉ?@EK; tͺ>B40AA0`?CopyrigTwt c)@]2`09@M@]c@os@fBRAr@Aa@i@Wn.@ 0lPU HsRe@e@v/PdP2h<M#l#A MP? <>Uh$QQ(]A(Q1 LZ3;'xP 7jbbFbL7da 6{fM8(a?D +F2|>?FW@WoO 0j?rdcd bj{^WeabeEef"v?Fddd?F܃qo`OivddlXr`tuqE|E:jr3t!KU:G}{]rLl}]|q3xngqrg?F~pooZgt_ML|\ 5ve&1?FWf?F?#0b|[m낪 z'oǵ(|F$$!ayTh]]9 Q BUF贷F$?FlIJ?F\HRq?F#o˯Z?P Bt`  x#?tnZ/J#ll#VBu auo7>JTA0f88v?WYϳAjLv@z bbb!' 6Y % )! ! ,+ <?' o?z #a47% @& J"#K#({% (+ (G t#Beı?A>L ?A"&"&$/*8%,"t:"t>  A A6 `?CopyrigTt _(c)D@2`W09D@M<@c:@os4@fBB3Ar6@oAaB@i4@n}.D@ Al@U :HsBeb@e:@v@d@2h/1<M#l#@7? M@?(>Uh7E P+AA!{ L:F A7<6j% bRzg9<2 "\MbPdS mT W5U6ik0YaYvq 2HD: )&# T=h0>Th]]9 Q BUF#/?F ?F\HRq?Fn˯Z?P Bt`  ~B?t6#ll#VBu acuo7hAJTA0f88v?WYϳAjLv@z bbb!' 6Y % )! ! ,+ <?' o?z #a47% =) J"#K#({% (+ (G t#Beı?A>L ?A"&"&$/*8%,"t:"t>  A A6 `?CopyrigTt$(c)$2`09$M<@c:@o%s4@fBB3Ar6@oAuaB@i4@n.$_ Al@ :HUsBeb@e:@v@-d@2`h/1<M#l1#7? PM@?(>Uh7E +AA!{ L:F<F6j% bRzg9<2 "\7MbPdS mT W5UO?6ik0YaYvq 2HD: )&# T=h0>Th]]9 Q BUF/ ֛9?FT?F\HRq?Fn˯Z?P Bt`   a1?t#ll#VBu auo7>JTA0f88v?WYϳAjLv@z bbb!' 6Y % )! ! ,+ <?' o?z #a47% =) J"#K#({% (+ (G t#Beı?A>L ?A"&"&$/*8%,"t:"t>  A A6 `?CopyrigTt _(c)D@2`W09D@M<@c:@os4@fBB3Ar6@oAaB@i4@n}.D@ Al@U :HsBeb@e:@v@d@2h/1<M#l#@7? M@?(>Uh7E P+AA!{ L:F6j% bT ü2 "\MbPdS mT W5UO6ik0YaYvq 2HD: )&# T=h0>Th]]9 Q BUF[@#?Flo]?Fo?Fn˯Z?P Bt`  ?tz6,JJ\8Xj#VBu acuo7AJTA0f88v?WYϳAjLv@z bbb!' 6Y % )! ! ,+ <?' o?z #a47% =) J"#K#({% (+ (G t#Beı?A>L ?A"&"&$/*8%,"t:"t>  A A6 `?CopyrigTt>(c)>2`09>M<@c:@o%s4@fBB3Ar6@oAuaB@i4@n.>_ Al@ :HUsBeb@e:@v@-d@2`h/1<M#l1#7? PM@?(>Uh7E +AA!{ L:F<F6j% bRzg9<2 "\ʩ7MbPdS 2NoT W5UO?6ik0YaYvq 2UHT5D B &# T>h0JTtiiM ENUFlQ?Fڋ?FPaW?F ru?P WNtQ` c͕.:7?tU)oni8;(\I2-nNuj mu){E0f88wv?AVQ Ӎ!@#b#m@b  0#  #W(3 (o+&T" #t#NĽ/?. 
?M"&$a%$$,/>*L%i,T"tΞ*T"tR.R11#`?Copyrig`t (c)02l090M0c0os0f21r01a0i0n.0 Al@ 8sBe0e0v0@d@"Vt!X#xB #/ @T,UtE1UA-!@nZ pvm?S׮(%UQG@Y9"S\FkV5%UЗn@‹Pهϰ=\WQS\OXbiTQY%_7_I_[_DQOOOO__^̔ԍ?f Q́)f@vVP|?5^BlӿIY:6Xl?Ib ʏ܏EABTfxF\HRqCk?-q_WaW?ϰ+@"Bl!H,BXQnB)fFޖBoT[inJ\ۙgyAHD: )&# T=h0>Th]]9 Q BUFhY?FlIJ?F&1?F3(e?P Bt`  6n?tҝw~m4tMuVBu auo7>JAS`? ?Az@bbb! ( 6 )  '+ ߈?/I-Z#mg!z A " ././@/R+,g/y'sU1,ar4 )] "#{t #BtNg8tu:Ba PL^2!X8;d;Q6X?E;d; t͊>8lAlA `?CopyrigTt (c)@2`09@M@c@o%s@fBAr@Aua@i@n.@U 0l@ HsBUe@e@v@d@ 2h<M#l# MT@?4>UhQQA(VQ^1| LZFX} g 7j b2ɶGOR] 6SCfMB:ZMbXdc $I$I܍+d `2aa]Eme'la'iL.G]eU@9/$E?;FslAka,dHּgmeoo(oh2gyu3rA 4s2HD: )&# T=h0>Th]]9 Q BUFy?F.޺s?F&1?F3(e?P Bt`  7 ?tN]m4t_MuBu aquo7>KJAS`?X ?Az@bbb! ( 6f )  '+ ߈?I-Z#g!z A " ././@/R+g/y'sU1,a4 ) ."#{ :#Btg8tu:B@a L^2!X8;d;6X?E;d; Kt͊>8alAlA `?CopyrigTt (c)@2`09@M@c@os@fBAr@Aa@i@n.@ 0l@ HsBe@e@v@d@2hX<M#l # M@?4>UhQQAVQ^1| LZFX} 3 7j b2ɶOK] 6SCfMB:ZMbXdc $I$I܍+d 2aa]Emekla'i.]eU@;0$E?FTh]]9 Q BUF߹e?F4jߤ?F!?FTUUUɺ?P Bt`  ?tiqn= ףc?ckBu1 ,duon7>)JAUVJp? ?Awz@b_bb! (݉ " '+ ' qq?z A  +S/.&68))E! '+ g?uW-sUUn*a4 8) "u#{ u#D!tΜ?8tM:Bav L62?!08;<;60?5;.<; tb>{8DADA1 `?CopyrigTt (c){@2`09{@Ms@cq@oKsk@fyBjArm@Aay@ik@n.{@ 0l@ qHsBe@eq@v@d@n2hb<M#l1# M@? Uh,cQcQ] '1PbA.Q61b L`w"?FMDA dj? |m cuۡ'1 6!1fM{":F$3<6d kS=ѣ<n2{!5E[nLa vogaKeE[eV_@cbc g@]amr\GDž}iNu?FS%f|]tElFTh]]9 Q BUFtx?F/!?FTUUU?P Bt`  K_?t)C= ףc?cZBu auo7[>ʜJAUVJp? ?Awz@bbWb! ( 7 " '+ ' qq?z A  +/.&68))E! '+ g?W-sUUn*a4 8) ."u#{ {8aDADA1 `?CopyrigTt (c){@2`09{@Ms@cq@osk@fyBjArm@Aay@ik@n.{@ 0l@ qHsBe@eq@v@d@n2hX<M#l # M@? Uh,@cQcQ] '1bA.Q61b L`zw"?FDA dj O|m cuۡi'1 61fM{":F$3<6d kS=ѣ<n2{!5E[nLah vogaKeE[eV_cbPc g@]amr\Dž}iNu?FS%f|]tElF<2PhQp[" |"*`KeQitVl:qid?5hQ@J_nohU2t3rA 4n2HD: )&# T=h0>Th]]9 Q BUFiv?F{<?FU?F3z[j˺?P Bt`  y]?thS&XRBu ,duo7oAJAa^!? ?Az@bbb! (n 6  )  '+ g~X?I-Z#!g!z A " ,/,/>/P-[ן+g/y'"ي,a4 ) "!E#{ #Btg8tu:Ba L^2!X8;d;6X?Eu;d; t͊>)8lAlA `?CopyrigTt (c)@]2`09@M@]c@os@fBRAr@Aa@i@Wn.@ 0l@U HsBe@e@v@d@2h<M#l#A M@?8>UhoQQ]AJVQ^1| L@}?F! dj ٯla6?ztB: 7d /kyԷ&Dh]EWemkp+ɉ 5d}bGeUWeFBϰ+@ol0juMDhnMbX0j`(DhnC<o+o"<EgUoo6o9HD: )&# T=h0>Th]]9 Q BUFT1#E2H?F]&UaU?F3z[j?P Bt`  ?t1:&XRZBu auo7[!>ʜJAa^Ė? ?Az@bbWb! ( 7 6 )  '+ g~X?I-Z#!g!zV A " ,/,/>/P-[O+g/y',*a4 ) "!#{ #Bstg8tu: Ba L^2!X8;d;6X?E;d; t͊>8lAlA `?CopyrigTt (c)@2`09@M@c.@os@fBAr@Aa@i@n.@ 0l@ HsBe@e@vZ@d@2h<M#l# M@?8>UhoQQ]AVQ^1| L@Ͼ?F! dj ٨la6ztB: 7d /kyԷ&Dh]EWemkp+ɉ 5d}bGeUWeFBϰ+@ol0juMDhnH?MbX0j`DhWef9o-ai qGeUoo$o6o3UHLD )&# R>h 0JTlaaM>UFObFd0?Fgj?F0U?F_gw?P >tA`  N+ru?t sM׵01,_Œ X">u eu!s7-"JɅJUU?HA9 j@7tt7 b:*H&tH  th*Nĵ) ?MB"b;)H&e8H m 3 H -US4o?FPjj?6ֈ@!QpoĬگ eoĩ́ F o 01G}}϶@b@mp?#nʼPV]O߸uy\l2_+ʦE}@p5"iРo~e*al|ٍşןHD: )&# ;h0>Th]]9 PB IAUF\6:?F}?FF(e?F=PFo?P t`  @?tP[5q+ tЖZu Mauo8#>    PEJƱZVJp!?&. ?Az@b_bbe!Y(݉e 6 ,p )l l w+ g?Z-#U!׷!ze Ae [+j/|//-g#+ s"?(a40 9 .*2!#{0:#t8'tϻ:Ua L2$18K;F?VEK.; t>B4AA0`?CopyrigTt (c)@2`09@M@c@oKs@fBAr@Qa@i@n.@ @l/P Hs3RePe@vEPd'P2hb<M#l1# M/P?]UhzAQ1 L@3FO#۲?Fo2zP dj|aW{ޓ8cUF:&x0?F?F?FX'qoO 0Yifull4ajS?ON~f0c?h+ fEeޖe?F<'?F@]?F\F%ޕol}dQ3ll BXy귺m|ԇbBhaUe```?FV ?F?Th ofl?PI3qoijeðl.` |=T?heFnF$OĒ?F o* leowk~y{f~tGg˂a ,8b1;MHD: )&# ;h0>Th]]9 PB IAUF\6:?Fx?FF(e?F=PFo?P t`  @?t%5q+ tЖZu Mauo8$>    PEJƱZVJp!?&. 
?Az@b_bbe!Y(݉e 6 ,p )l l w+ g?Z-#U!׷!ze Ae [+j/|//-g#+ s"?(a40 9 .*2!#{0:#t8'tϻ:Ua L2$18K;F?VEK.; t>B4AA0`?CopyrigTt (c)@2`09@M@c@oKs@fBAr@Qa@i@n.@ @l/P Hs3RePe@vEPd'P2hb<M#l1# M/P?]UhzAQ1 L@3FO#۲?Fo2zP dj|a{ޓ8cUF:&x0?F?F?F'qoO 0Yifull4ajK?ON~f0c?+ fEeޖe?F<'?F@]?FsF%ޕol}dQ3ll BXyR귺m|bBhaUe```?FV ?F?Th ofl?PI3qoiGkeðl?2` ,yFT?heFnF*OĒ?F o* leowk~yf~tGg˂a ,8b1;MHD: )&# T=h0>Th]]9 Q BUFL&d?F`0?F(\µ^S?P Bt`  {Go?tNV yAP FBub ,duo7%>RJ[0U?<ALv@tt7 }b6*D&tD  td*JBı) ?A>"b7)D&a4D  V3 D 8/td/tRy.x1x1`?CopyrigTt (c)02`090M0c0os0f21r01a0i0n.0 Al0 8s2e0e0v @d0"h0!<M#l#/MlP> @?UhLAA]R 1 P1"7Ж1bA>!J`oO2z[ dja_q:z׍ Q 6VMB@챝/P\`_m;E,Ui5Uޖ?FQ[5lf\VU'EUY2Ruݯ?FU]˖7sf\pdѣSYUWfe _zx]݅om{ZcXUFRP\g[)JXAp| rhQ#5 IXAYaLPւ?FتcP\T f\3$sXAp"۲ ElU1YA-k?FFoZP\fyf\!XAYX%?F8K7P\{ 䘷f\&SAY&1l?Fr>|]P\-]qiYip~ X1#g~jt-? XAY?F ףp=P\' f\?P=Rw㹥X'RXT}7vP\__/vnTpljlX7U/?$?vQ[E7Ժf\wUQY|>qh|P\4GN"MDX U29_KU#rA0$@CHD: )&# T=h0>Th]]9 Q BUF8?Fā43?F7??FtYOȻ?P Bt`  1=o2?!88;D;68?5u;D; tj>)8LALA1 `?CopyrigTt (c)@]2`09@M{@]cy@oss@fBRrAru@Aa@is@Wn.@ 0l@U yHsBe@ey@v@d@v2h<M#l#A M@?4>UhkQkQjA6Q>1| L`Fdj]JK}vM41R] 6)fMB@Wa\ k_程avCe=E~SeR9Y? lgESeFeabkod i?SQޗh~?5@+ c;ꉞi2_U3rA/04sCHD: )&# T=h0>Th]]9 Q BUF2B9?FZ&?F'l?F{Cƻ?P Bt`  ,i?t('̺㯝4\YDrVBu auo7'>JAUVJp? ?Awz@bbb! ( " '+ ' qq?z A 9 +/.&68))E! '+ g?W-sU"!n*a4 8) "u#{ u#D!t?8tM:Bav L62?!08;<;60?5u;<; tb>){8DADA1 `?CopyrigTt (c){@]2`09{@Ms@]cq@osk@fyBRjArm@Aay@ik@Wn.{@ 0l@U qHsBe@eq@v@d@n2h<M#l# M@?]UhzbA.Q61b L` djM"aQ  6fM{"@Ftnq\[3K_,7e5EGe<\ֽE?Nf[ N_qd7eEGec6i\m jOf܋h2_U3rA'04msCHD: )&# T=h0>Th]]9 Q BUF] ?F t5?F!/$?Fᱝ̻?P Bt`  "?t4@+ϛ׊T, H_0Bu aquo7(>KJTA0f88v?WYgAL5v]@z bbob!'݉ 6 ,% )! ! ,+ <?' o?z #a47% =) J"E#K#{% (+ ( t#BXY?A> ?A "+(/+/=/O/a!y:5r=l/@ sU10( /???? "0<2(?M{!b 1//*%,"st:"t>0QQ'@`?CopyrigTt (cu)FP2`09FPuM>PcTh]]9 PB IAUFR?F}Y?FF(e?F=PFo?P t`  (\#?tƯ.5q+ tЖZu Mauo8)>    PEJƱZVJp!?&. ?Az@b_bbe!Y(݉e 6 p l l w+ g?-#U!׷!zme Ae [+j/|//-g#+ s"?(a4\0 9 *2!#{0#t8tϻ:񧂍Ua PL2$18K;QF?VEK; t>B4AA0`?CopyrigTt (c)@2`09@M@c@o%s@fBAr@Qua@i@n.@U @l/P Hs3RUePe@vEPd'P 2h<M#l# M/P?]UhzPAQ1 L@3FO#۲?FoO2zP dj|aW{8cUF:&x0?F?F?F_?'qoO 0Yigull4aj?N~f0co+ fEeޖe?F<'?F@]?FwF%ޕol}dQ3ll BXlﷺm|bBhaUeߔ`ba`?FV ?FTh oflPIϮ3qoikeðl8` |GT?heFnF$OĒ?F o* leowky{ftGg˂a,8b1;MHD: )&# ;h0>Th]]9 PB IAUFpX?F&U?F+\(?F~@ t?P t`  (TJ?t[uᯜ(=u -ZrqKZu Mauo8*>    PEJƱZVJp!?&. ?Az@b_bbe!Y(݉e 6 ,p )l l w+ g?Z-#U!׷!ze Ae [+j/|//-g#+ s"?(a40 9 .*2!#{0:#t8'tϻ:Ua L2$18K;F?VEK.; t>B4AA0`?CopyrigTt (c)@2`09@M@c@oKs@fBAr@Qa@i@n.@ @l/P Hs3RePe@vEPd'P2hb<M#l1# M/P?]UhzAQ1 L@3FMbX9?F4ҫP dj}ah8cUF[~K?F|D?F+Xc?F&PRcGoO 0YiM+FllZaj}0%|f0c̭ fEe.V?Fȿ̅)`oSt?F?(X:olGGTll .Xl{e{|rww@haUe:&x0?F%Clw?FAz?F 1vņflux1qoiFA(]`t߼ |~Th]]9 PB IAUFpX?F*g?F+\(?F~@ t?P t`  (TJ?t4%u -ZrqKZu Mauo8+>    PEJƱZVJp!?&. ?Az@b_bbe!Y(݉e 6 ,p )l l w+ g?Z-#U!׷!ze Ae [+j/|//-g#+ s"?(a40 9 .*2!#{0:#t8'tϻ:Ua L2$18K;F?VEK.; t>B4AA0`?CopyrigTt (c)@2`09@M@c@oKs@fBAr@Qa@i@n.@ @l/P Hs3RePe@vEPd'P2hb<M#l1# M/P?]UhzAQ1 L@3FMbX9?F4ҫP dj}ah8cUF[~K?F|D?F+Xc?F&PRcGoO 0YiM+FllZaj}0%|f0c̭ fEe.V?Fȿ̅)`oSt?F?(X:olGGTll .Xl{e{|rww@haUe:&x0?F%Clw?FAz?F 1vņflux1qoiFA(]`t߼ |~Th]]9 PB IAUF$?Fy}?F+\(?F-ku?P t`  p&k?tl    PEJڍU[\/(?&. ?AbbY!z@Ae bbCt c(a4b  # b(h#{ h#΍t(tI*#0f88vb.#i L5<1v(&$7/(D%"6/5;0( t.*B#11`?CopyrigTt] c).@2`W09.@M&@c$@os@f,BAr @YAa,@i@n}..@ Alt@U $HsxBeL@e$@v@dl@"h<M#l# Mt@?]UhzAA!J@#FMbX9?F$x0OԛP dj}תa4(SUF[~K?FE?FSc?F?#oO 0YM+Fӱ\*aj,.}a$?# XEU"n?F +?FbEd?F v<olTBJ\LMYBltavXlp@XEUv/?Fpvw;pg?F $hއ?FLvo~l/AD0qYJg ַBl)$ԞXlDB:XU^F.fّPKk^Rl_[Ei?1GAf`d?Sq+abvHD: )&# ;h0>Th]]9 PB IAUF$?FQ?F+\(?F-ku?P t`  p&k?t u -+CmoLZu Mauo8->    PEJڍU[\/(?&. 
?AbbY!z@Ae bbCt c(a4b  # b(h#{ h#΍t(tI*#0f88vb.#i L5<1v(&$7/(D%"6/5;0( t.*B#11`?CopyrigTt] c).@2`W09.@M&@c$@os@f,BAr @YAa,@i@n}..@ Alt@U $HsxBeL@e$@v@dl@"h<M#l# Mt@?]UhzAA!J@#FMbX9?F%x0OԛP dj}תa4(SUF[~K?FE?Fdc?F,?#oO 0YM+Fӱ\*aj ,.}a?# XEU"n?F{ +?F(%/Nd?F<olTBJ\MYBlU{XlpDc@XEUv/vw;pg?F^$hއ?Fvol/AD0qYf ַBls$ԞXl+?B:XU^F.fّPyKk^Rl_[Ei1GAf`dSOq+a,(bvHD: )&# T=h0>Th]]9 Q BUFt[@#?FX J?F\HRq?FƯK7?P Bt`  =0[?t@'#ll$nZBu duo7.>JA]^'? ?Az@b ( 6f  )ϓb= + _? ' 88o?z A s +=  /)/;/M$[+["-j!Q U1[!x(ar4 )] "'}#{t }#BtNQ8t_:Ba~ PLH2!B8;N;Q6B?5;N; tt>8VAVA `?CopyrigTt (c)@2`09@M@c@o%s}@fB|Ar@Aua@i}@n.@U 0l@ HsBUe@e@v@d@ 2h<M#l# MT@?4>UhuQuQtA(@QH1j LZFهgϰ 7j bL\GB] 6S-fMB:MbPdc mdb`GeGEWe$oˏZ\ai?bvIGeU@Vϰ+@_?F23Th]]9 Q BUF偃?F?F\HRq?FƯK7?P Bt`  Xkw?t[I2#ll$nZVBu auo7/>JA]^'? ?Az@b ( 6f  )ϓb= + _? ' 88o?z A s +=  /)/;/M$[+["-j!Q U1[!x(ar4 )] "'}#{t }#BtNQ8t_:Ba~ PLH2!B8;N;Q6B?5;N; tt>8VAVA `?CopyrigTt (c)@2`09@M@c@o%s}@fB|Ar@Aua@i}@n.@U 0l@ HsBUe@e@v@d@ 2h<M#l# MT@?4>UhuQuQtA(@QH1j LZFهgϰ 7j bL\] 6-fMB:MbPdc Ɵmdb`GeGEWe$oZ\aibvIGeUWeVϰ+@?Vpk_xu j aGe@We_ooh2Kt]u3rA 4s2HD: )&# T=h0>Th]]9 Q BUF͜?Fil8?F\HRq?FƯK7?P Bt`  W׎?tt^ #ll$nZVBu acuo7aJ?JA]^'?X ?Az@bu ( 6  )b= + ? ' 88?z A  +=  /)/;/M$[+["-j!Q U1[!x(a4 ) ."'}#{ :}#BtQ8t_:B@a~ LH2!B8;N;6B?5;N; Ktt>8aVAVA `?CopyrigTt (c)@2`09@M@c@os}@fB|Ar@Aa@i}@n.@ 0l@ HsBe@e@v@d@2hX<M#l # M@?4>UhuQuQtA@QH1j LZFه3 7j bL\`] 6-fMB:MbPdc gmdb`GeGEWenZ\aiavIGeUWeVϰ+@?Vpk_xu j aGeWe_ooh2K]u3rA 4s2HD: )&# T=h0>Th]]9 Q BUF[@#"?F`>?Fo?FƯK7?P Bt`  ?tS'8Xj$nZVBu auo71>JA]^'? ?Az@b ( 6f  )ϓb= + _? ' 88o?z A s +=  /)/;/M$[+["-j!Q U1[!x(ar4 )] "'}#{t }#BtNQ8t_:Ba~ PLH2!B8;N;Q6B?5;N; tt>8VAVA `?CopyrigTt>(cu)>2`09>uM@c@os}@IfB|Ar@Aa@]i}@n.> 0Ul@ HsBe@e@v@d@B2h,<M#l# M@?U4>UhuQuQtAJ@QH1j LZFهϰ 7j bL\] 6-fMB:ʿMbPdc 2Ngodb`GeGEWe$oZ\aibvIGeUWe4?Vpku j aGeWe_ooh2K]u3rA 4s2HD: )&# ;h0>Th]]9 PB IAUFv?Fo]j?FQ?F?P t`  Smo?tT.qČ( 뾅@l* 9u duo82>   ] EJ]_DRb(?&. ?Az@bY(e 6f h )ϓb k+ _?Z'e 88o?ze Ae s[+ h g/y//$# L:5 /sU"K1*a4 9 2(E#{ #t8tϤ:U a L2T18K;6Ԉ?@EK; tͺ>B40AA0`?CopyrigTwt c)@]2`09@M@]c@os@fBRAr@Aa@i@Wn.@ 0lPU HsRe@e@v/PdP2h<M#l#A MP? <>Uh$QQ(]A(Q1 LZ3;'xP 7jbbFb7@c 6{fM8cUFnDdTh]]9 PB IAUFSQ=?F{3?FQ?F?P t`  4o?tPǭ\"Č( 뾅@l* 9u auo83>   ] EJ]_DRb(?&. ?Az@bY(e 6f h )ϓb k+ _?Z'e 88o?ze Ae s[+ h g/y//$# L:5 /sU K1(a4 9 2(E#{ #t8tϤ:U a L2T18K;6Ԉ?@EK; tͺ>B40AA0`?CopyrigTwt c)@]2`09@M@]c@os@fBRAr@Aa@i@Wn.@ 0lPU HsRe@e@v/PdP2h<M#l#A MP? <>Uh$QQ(]A(Q1 LZ3;'xP 7jbbFbL7a 6{fM8(a?nDFF!@~qWoO 0dTc I\rdcd bjUkgaPbeEef"v?Fr6d?F?kĀqo`OivddlXr`tF0sK|٢jr3t!KU:G}?{]rLl} ]|q3xnTrg?F<ے7pooZgtfio,V|=] 5ve&1?Fpn?FTh]]9 PB IAUF!?F?FQ?F?P t`  + {o?tn&Č( 뾅@l* 9u auo84>   ] EJ]_DRb(?&. ?Az@bY(e 6f h )ϓb k+ _?Z'e 88o?ze Ae s[+ h g/y//$# L:5 /sU"K1*a4 9 2(E#{ #t8tϤ:U a L2T18K;6Ԉ?@EK; tͺ>B40AA0`?CopyrigTwt c)@]2`09@M@]c@os@fBRAr@Aa@i@Wn.@ 0lPU HsRe@e@v/PdP2h<M#l#A MP? <>Uh$QQ(]A(Q1 LZ3;'xP 7jbbFbL7a 6{fM8cUFnDFK!@?FDqWoO 0dTc I\addbjUWkgabeE(aFjv?Fr6d?FkĀAqo`Oi^j^r`t0sK#|pr9t!KU:M}{]rYLl}]|9xLufTrg?F<7vodObdcZgtfi,V#|=c ;vLu&1?Fpn?FTh]]9 PB IAUFĊ?F)lIJ?FQ?F?P t`  o?t\Č( 뾅@l* 9u auo85>   ] EJ]_DRb(?&. ?Az@bY(e 6f h )ϓb k+ _?Z'e 88o?ze Ae s[+ h g/y//$# L:5 /sU"K1*a4 9 2(E#{ #t8tϤ:U a L2T18K;6Ԉ?@EK; tͺ>B40AA0`?CopyrigTwt c)@]2`09@M@]c@os@fBRAr@Aa@i@Wn.@ 0lPU HsRe@e@v/PdP2h<M#l#A MP? <>Uh$QQ(]A(Q1 LZ3;'xP 7jbbFbL7a 6{fM8(a?D +F2|>?FW@WoO 0dTc rdcd bj{^ea͂beEef"v?Fddd?F܃q o`OivddlXr`tuqEÊ|:jr3t!KU:G}{]r,Ll}]|q3xngqrg?F~pooZgtML|\ 5ve&1?FWf?F#0bĂ|[m낪 z'ǵ(|F$$!ayTh]]9 Q BUFt[@#?Fyوe??F\HRq?F#o˯Z?P Bt`  =0[?t&0N#ll#Bu duo76>JTA0f88v?WYϳAjL@z bbb!'  ,% )! ! ,+ <?' 
o?z #a47% =) J"E#K#{% (+ ( t#B2?A> &?A"&"I&$/*%,"t:"Kt> A A6 `?CopyrigTt (c)D@2`09D@M<@c:@oKs4@fBB3Ar6@oAaB@i4@n.D@ Al@ :HsBeb@e:@vZ@d@2h/1<M#bl#7? M@?(>Uh7E +A(A!{ L:F A<6j% bRzg9ǩ<2 "\MbPdS mT W5U6ik0YaYvq 2HD: )&# T=h0>Th]]9 Q BUF偃?F?P?F\HRq?Fn˯Z?P Bt`  Xkw?tGj#ll#VBu auo77>JTA0f88v?WYϳAjLv@z bbb!' 6Y % )! ! ,+ <?' o?z #a47% =) J"#K#({% (+ (G t#Beı?A>L ?A"&"&$/*8%,"t:"t>  A A6 `?CopyrigTt _(c)D@2`W09D@M<@c:@os4@fBB3Ar6@oAaB@i4@n}.D@ Al@U :HsBeb@e:@v@d@2h/1<M#l#@7? M@?(>Uh7E P+AA!{ L:F<6j% bRzg9ǩ<2 "\MbPdS mT W5UO6ik0YaYvq 2HD: )&# T=h0>Th]]9 Q BUF͜?F:'?F\HRq?Fn˯Z?P Bt`  W׎?ts aU#ll#VBu auo78>JTA0f88v?WYϳAjLv@z bbb!' 6Y % )! ! ,+ <?' o?z #a47% =) J"#K#({% (+ (G t#Beı?A>L ?A"&"&$/*8%,"t:"t>  A A6 `?CopyrigTt _(c)D@2`W09D@M<@c:@os4@fBB3Ar6@oAaB@i4@n}.D@ Al@U :HsBeb@e:@v@d@2h/1<M#l#@7? M@?(>Uh7E P+AA!{ L:F6j% bT ü2 "\MbPdS mT W5UO6ik0YaYvq 2HD: )&# T=h0>Th]]9 Q BUF[@#"?F.޺?Fo?Fn˯Z?P Bt`  ?tchE 8Xj#VBu auo79>JTA0f88v?WYϳAjLv@z bbb!' 6Y % )! ! ,+ <?' o?z #a47% =) J"#K#({% (+ (G t#Beı?A>L ?A"&"&$/*8%,"t:"t>  A A6 `?CopyrigTt>(c)>2`09>M<@c:@o%s4@fBB3Ar6@oAuaB@i4@n.>_ Al@ :HUsBeb@e:@v@-d@2`h/1<M#l1#7? PM@?(>Uh7E +AA!{ L:F<F6j% bRzg9<2 "\ʩ7MbPdS 2NoT W5UO?6ik0YaYvq 2UHT5D B &# T>h0JTtiiM ENUF|u(?F-?FPaW?F ru?P WNtQ` 786R7?tU溁i8;(\I2-nNuj mu){:E0f88wv?AVQ Ӎ!@#b#m@b  0#  #W(3 (o+&T" #t#NĽ/?. ?M"&$a%$$,/>*L%i,T"tΞ*T"tR.R11#`?Copyrig`t (c)02l090M0c0os0f21r01a0i0n.0 Al@ 8sBe0e0v0@d@"Vt!X#xB #/ @T,UtE1UA-!@nZ pvm?S׮(%UQG@Y9"S\FkV5%UЗn@‹Pهϰ=\WQS\OXbiTQY%_7_I_[_DQOOOO__^̔ԍ?f Q́)f@vVP|?5^BlӿIY:6Xl?Ib ʏ܏EABTfxF\HRqCk?-q_WaW?ϰ+@"Bl!H,BXQnB)fFޖBoT[inJ\ۙgyAHD: )&# T=h0>Th]]9 Q BUF4@ ,?FYوe?^?F&1?F3(e?P Bt`  N!?tx/m4tMuVBu auo7;>JAS`? ?Az@bbb! ( 6 )  '+ ߈?/I-Z#mg!z A " ././@/R+,g/y'sU1,ar4 )] "#{t #BtNg8tu:Ba PL^2!X8;d;Q6X?E;d; t͊>8lAlA `?CopyrigTt (c)@2`09@M@c@o%s@fBAr@Aua@i@n.@U 0l@ HsBUe@e@v@d@ 2h<M#l# MT@?4>UhQQA(VQ^1| LZFX} g 7j b2ɶO] 6CfMB:ZMbXdc $I$I܍+d 2aa]Eme'la'iL.]eU@9/$Ew?FslAka,dHޓּgmeoo(oh2gyu3rA 4s2HD: )&# T=h0>Th]]9 Q BUFnټk-?F]u)?F&1?F3(e?P Bt`  3?tg[ m4tMuVBu auo7<>JAS`? ?Az@bbb! ( 6 )  '+ ߈?/I-Z#z A* " ././@/R+Yg/y'sU1,a4 ) "#{ #BtΜg8tu:Ba L^2!X8;d;6X?E;.d; t͊>8lAlA `?CopyrigTt (c)@2`09@M@c@oKs@fBAr@Aa@i@n.@ 0l@ HsBe@e@v@d@2hb<M#l1# M@?4>UhQQPAVQ^1| LZFX}  7j b2/O]M 6CfMB:ZMbX͉dc ?$I$I܍+d 2aa]Emekla'i.]eU@;0$E?FTh]]9 Q BUFܲ?FH7H?F!?FTUUUɺ?P Bt`  &?tQ3ņ= ףc?ckBu5 auon7=>)JAUVJp? ?Awz@b_bb! (݉ " '+ ' qq?z A  +S/.&68))E! '+ g?uW-sUUn*a4 8) "u#{ u#D!tΜ?8tM:Bav L62?!08;<;60?5;.<; tb>{8DADA1 `?CopyrigTt (c){@2`09{@Ms@cq@oKsk@fyBjArm@Aay@ik@n.{@ 0l@ qHsBe@eq@v@d@n2hb<M#l1# M@? Uh,cQcQ] '1PbA.Q61b L`w"?FMDA dj? |m cuۡ'1 6!1fM{":F$3<6d kS=ѣ<n2{!5E[nLa vogaKeE[eV_@cbc g@]amr\GDž}iNu?FS%f|]tElFTh]]9 Q BUF:C<?FY!)^?F!?FTUUUɺ?P Bt`  Ȼ?ޫd?ttyI5= ףc?ckBu5 auon7>>)JAUVJp? ?Awz@b_bb! (݉ " '+ ' qq?z A  +S/.&68))E! '+ g?uW-sUUn*a4 8) "u#{ u#D!tΜ?8tM:Bav L62?!08;<;60?5;.<; tb>{8DADA1 `?CopyrigTt (c){@2`09{@Ms@cq@oKsk@fyBjArm@Aay@ik@n.{@ 0l@ qHsBe@eq@v@d@n2hb<M#l1# M@? Uh,cQcQ] '1PbA.Q61b L`w"?FMDA dj? |m cuۡ'1 6!1fM{":F$3<6d kS=ѣ<n2{!5E[nLa vogaKeE[eV_@cbc g@]amr\GDž}iNu?FS%f|]tElFTh]]9 Q BUFx]?FTy4߮?FU?F3z[j˺?P Bt`  Q`7lg?t-xW&XRkBu5 auon7?>)JAa^! ?Az@b_bb! (݉ 6 , )  '+ g~X?ZI-Z#!g![z A " ,/,/>/P-[?+g/y'Eي,a4 ) "!#{ #Btg8t)u:Ba L^2!*X8;d;6X?E;d; tR>8lAlA `?CopyrigTt (c)@2`09@M@c@os@fBAr@Aa@i@n.@ 0l@ HsBe@ej@v@d@2!h<M#l# M@? 8>UhoQQ]AVQ^1| L@?F҄! dj O٨l,d6ztJB: 7d /k?yԷ&Dh]EWemkp+ɉ 5d}bGeUWeFBϏ+@ol0ju?MDhnMbX0j`QDhnC<o+o"Th]]9 Q BUFT1#E2?FLΉ?FU?F3z[j˺?P Bt`  S|.?tп}Qm&XRkBu5 auon7@>)JAa^!? ?Az@b_bb! 
(݉ 6 , )  '+ g~X?ZI-Z#!g![z A " ,/,/>/P-[?+g/y'Eي,a4 ) "!#{ #Btg8t)u:Ba L^2!*X8;d;6X?E;d; tR>8lAlA `?CopyrigTt (c)@2`09@M@c@os@fBAr@Aa@i@n.@ 0l@ HsBe@ej@v@d@2!h<M#l# M@? 8>UhoQQ]AVQ^1| L?F҄! dj _٨la6ztJB: 7d /k?yԷ&Dh]EWemkp+ɉ 5d}bGeUWeFBϏ+@ol0ju?MDhnHMbX0j`DhWef9o-ai qGeUoo$o6o3UHLD )&# R>h 0JTlaaM>UF+1#E2?F* 9?F0U?F_gw?P >tA`  9|4?t +5f1,_Œ X">u eu!s7-AJɅJUU?H<j@7tt7 b:*H&tH  th*Nĵ) ?MB"b;)H&e8H m 3 H -US4o?FPjj?6ֈ@!QpoĬگ eoĩ́ F o 01G}}϶@b@mp?#nʼPV]O߸uy\l2_+ʦE}@p5"iРo~e*al|ٍşןHD: )&# ;h0>Th]]9 PB IAUF~Q?F?FF(e?F=PFo?P t`  IGr?tn 5q+ tЖZu Mauo8B>    PEJƱZVJp!?&. ?Az@b_bbe!Y(݉e 6 ,p )l l w+ g?Z-#U!׷!ze Ae [+j/|//-g#+ As"(a40 9 *2!#{0#΍t8tIϻ:AUa L2$18K;F?VEK; Kt>B4aAA0`?CopyrigTt (c)@2`09@M@c@os@fBAr@Qa@i@n.@ @l/P Hs3RePe@vEPd'P2hX<M#l # M/P?]UhzA(Q1 L@3FO#?Fo2zP dj|aW{8cUF:&x0?F?Fώ?FX'qoO 0Yifull4ajS?N~dh+ fEeޖe?F<'?F@]?F\F%ޕol}dQ3ll BXlm|ԇbBhaUe```?FV ?FTh oflPIϮ3qoijeðl.` |=T?heFnF$OĒ?F o* leowky{ftGg˂a,8b1;MHD: )&# ;h0>Th]]9 PB IAUF~Q?Fe?FF(e?F=PFo?P t`  IGr?t8n5q+ tЖZu Mauo8C>    PEJƱZVJp!?&. ?Az@b_bbe!Y(݉e 6 ,p )l l w+ g?Z-#U!׷!ze Ae [+j/|//-g#+ As"(a40 9 *2!#{0#΍t8tIϻ:AUa L2$18K;F?VEK; Kt>B4aAA0`?CopyrigTt (c)@2`09@M@c@os@fBAr@Qa@i@n.@ @l/P Hs3RePe@vEPd'P2hX<M#l # M/P?]UhzA(Q1 L@3FO#?Fo2zP dj|a{8cUF:&x0?F?FȎ?F'qoO 0Yifull4ajK?N~d+ fEeޖe?F<'?F@]?FsF%ޕol}dQ3ll BXlRm|bBhaUe`Ԗ``?FV ?FTh oflPIϮ3qoiGkeðl2` |FT?heFnF*OĒ?F o* leowkyftGg˂a,8b1;MHD: )&# T=h0>Th]]9 Q BUF&d29?F `0?F(\µ?F^Sӻ?P Bt`  o?tՐs] yAP FBuj auo7ARJ[0U?<ALv@tt7 }b6*D&tD  td*JBı) ?A>"b7)D&a4D  V3 D 8/td/tRy.x1x1`?CopyrigTt (c)02`090M0c0os0f21r01a0i0n.0 Al0 8s2e0e0v @d0"h0!<M#l#/MlP> @?UhLAA]R 1 P1"7Ж1bA>!J`oO2z[ djaq:z׍ QM 6VMB@챝P\`_m;?E,Ui5Uޖ?FQ[_5lf\VU'EUY2Ruݯ?FU]˖7sf\pdѿSYUWfe /zx]݅om{ZcXXUFRP\g[)JXAp| rhQ#5 ?IXAYaLPւ?FتcP\T f\3$sXAp" ElX1YA-k?FFoZP\fyf\?!,UAYX%?F8K7P\{ f\&SAY&1l?Fr>|]P\-]qiYip~ X1#g~jt-? ѰXAY?F ףp=P\' f\?P=RwX'RXT}7vP\__/vnTpډlX7U/$9SYE7Ժf\wUQY|>îqh|P\4GN?"MX U29_KU#rA0$@CHD: )&# T=h0>Th]]9 Q BUFܷqP?F@?F7??FtYOȻ?P Bt`  XM6 ?ti|9O__Bu aquo7E>KJASTfw?X ?Az@bbb! ( 6f )  '+ ߈?I-Z#g!z A Qu +%/,8)E!'+ k(?r+o,(a4 8) E"#{ #BtΜG8tU:Ba L>2?!88;D;68?5;.D; tj>8LALA1 `?CopyrigTt (c)@2`09@M{@cy@oKss@fBrAru@Aa@is@n.@ 0l@ yHsBe@ey@v@d@v2hb<M#l1# M@?4>UhkQkQPjA6Q>1| L`Fd~j]J}v)M]M 6)fMB@Wa\ k_稇avCe=ESeR9Y? l`gESeFeabkod iSQޗh~??5@+ c;Ꞗi"2_U3rA/04sCHD: )&# T=h0>Th]]9 Q BUFdHJAUVJp? ?Awz@bbb! ( " '+ ' qq?z A 9 +/.&68))E! '+ g?W-sU"!n*a4 8) "u#{ u#D!t?8tM:Bav L62?!08;<;60?5u;<; tb>){8DADA1 `?CopyrigTt (c){@]2`09{@Ms@]cq@osk@fyBRjArm@Aay@ik@Wn.{@ 0l@U qHsBe@eq@v@d@n2h<M#l# M@?]UhzbA.Q61b L` djM"aQ  6fM{"@Ftnq\[3K_,7e5EGe<\ֽE?Nf[ N_qd7eEGec6i\m jOf܋h2_U3rA'04msCHD: )&# T=h0>Th]]9 Q BUF)8/?FFT?F!/$?Fᱝ̻?P Bt`  vNC?tG(ϛT H0Bu auo7G>JTA0f88v?WYALv@z bbb!'n 6 % )! ! ,+ <?' o?z #a47% =) J""#K#{% (+ ( tQ#BXYщ?A> ?A "+(/+/=/O/a!y:5rl/@ sU10( /???? "?0<2?M{!b 1/Ĝ/*%,"t:"t>TQQ'@`?CopyrigTt (c)FP2`09FPM>PcTh]]9 PB IAUFJ1>?Fl?FF(e?F=PFo?P t`  0!?trj5q+ tЖZu Mauo8A    PEJƱZVJp!?&. ?Az@b_bbe!Y(݉e 6 ,p )l l w+ g?Z-#U!׷!ze Ae [+j/|//-g#+ s"?(a40 9 .*2!#{0:#t8'tϻ:Ua L2$18K;F?VEK.; t>B4AA0`?CopyrigTt (c)@2`09@M@c@oKs@fBAr@Qa@i@n.@ @l/P Hs3RePe@vEPd'P2hb<M#l1# M/P?]UhzAQ1 L@3FO#۲?Fo2zP dj|aW{ޓ8cUF:&x0?F?F?F_'qoO 0Yigull4aj?ON~f0c?o+ fEeޖe?F<'?F@]?FwF%ޕol}dQ3ll BXl귺m|bBhaUe`ba`?FV ?FTh oflPI3qoiؿkeðl8` |G?T?heFnF$O?F o* leowky?{ft?Gg˂a,8b1;MHD: )&# ;h0>Th]]9 PB IAUFGF?FEs?F+\(?F~@ t?P t`  9`Ik?t)6^u -ZrqKZu Mauo8I>    PEJƱZVJp!?&. 
?Az@b_bbe!Y(݉e 6 ,p )l l w+ g?Z-#U!׷!ze Ae [+j/|//-g#+ s"?(a40 9 .*2!#{0:#t8'tϻ:Ua L2$18K;F?VEK.; t>B4AA0`?CopyrigTt (c)@2`09@M@c@oKs@fBAr@Qa@i@n.@ @l/P Hs3RePe@vEPd'P2hb<M#l1# M/P?]UhzAQ1 L@3FMbX9?F4ҫP dj}ah8cUF[~K?F|D?F+Xc?F&PRcGoO 0YiM+FllZaj}0%|f0c̭ fEe.V?Fȿ̅)`oSt?F?(X:olGGTll .Xl{e{|?rww@,eaUe:&x0?F%Clw?FAz?F 1vņflux1,oiFA(]`t߼ |~Th]]9 PB IAUFGF?FKDֳ?F+\(?F~@ t?P t`  9`Ik?t┯4;u -ZrqKZu Mauo8J>    PEJƱZVJp!?&. ?Az@b_bbe!Y(݉e 6 ,p )l l w+ g?Z-#U!׷!ze Ae [+j/|//-g#+ s"?(a40 9 .*2!#{0:#t8'tϻ:Ua L2$18K;F?VEK.; t>B4AA0`?CopyrigTt (c)@2`09@M@c@oKs@fBAr@Qa@i@n.@ @l/P Hs3RePe@vEPd'P2hb<M#l1# M/P?]UhzAQ1 L@3FMbX9?F4ҫP dj}ah8cUF[~K?F|D?F+Xc?F&PRcGoO 0YiM+FllZaj}0%|f0c̭ fEe.V?Fȿ̅)`oSt?F?(X:olGGTll .Xl{e{|rww@haUe:&x0?F%Clw?FAz?F 1vņflux1qoiFA(]`t߼ |~Th]]9 PB IAUFXIn?F    PEJڍU[\/(?&. ?AbbY!z@Ae bbCt c(a4b  # b(h#{ h#΍t(tI*#0f88vb.#i L5<1v(&$7/(D%"6/5;0( t.*B#11`?CopyrigTt] c).@2`W09.@M&@c$@os@f,BAr @YAa,@i@n}..@ Alt@U $HsxBeL@e$@v@dl@"h<M#l# Mt@?]UhzAA!J@#FMbX9?F$x0OԛP dj}תa4(SUF[~K?FE?FSc?F?#oO 0YM+Fӱ\*aj,.}a$?# XEU"n?F +?FbEd?F v<olTBJ\LMYBltavXlp@,UEUv/?Fpvw;pg?F $hއ?FLvo~l/AD0,YJg ַBl)$ԞXlDB:XU^F.fّPKk^Rl_[Ei?1GAf`d?Sq+a,(bvHD: )&# ;h0>Th]]9 PB IAUFXIn?F(?F+\(?F-ku?P t`  -k?tEeoEu -+CmoLZu Mauo8L>    PEJڍU[\/(?&. ?AbbY!z@Ae bbCt c(a4b  # b(h#{ h#΍t(tI*#0f88vb.#i L5<1v(&$7/(D%"6/5;0( t.*B#11`?CopyrigTt] c).@2`W09.@M&@c$@os@f,BAr @YAa,@i@n}..@ Alt@U $HsxBeL@e$@v@dl@"h<M#l# Mt@?]UhzAA!J@#FMbX9?F%x0OԛP dj}תa4(SUF[~K?FE?Fdc?F,?#oO 0YM+Fӱ\*aj ,.}a?# XEU"n?F{ +?F(%/Nd?F<olTBJ\MYBlU{XlpDc@XEUv/vw;pg?F^$hއ?Fvol/AD0qYf ַBls$ԞXl+?B:XU^F.fّPyKk^Rl_[Ei1GAf`dSOq+a,(bvHD: )&# T=h0>Th]]9 Q BUF贷F$?Ft\@?F\HRq?FƯK7?P Bt`  x#?t#ll$nZVBu auo7M>JA]^'? ?Az@b ( 6f  )ϓb= + _? ' 88o?z A s +=  /)/;/M$[+["-j!Q U1[!x(ar4 )] "'}#{t }#BtNQ8t_:Ba~ PLH2!B8;N;Q6B?5;N; tt>8VAVA `?CopyrigTt (c)@2`09@M@c@o%s}@fB|Ar@Aua@i}@n.@U 0l@ HsBUe@e@v@d@ 2h<M#l# MT@?4>UhuQuQtA(@QH1j LZFهgϰ 7j bL\G@[ 6S-fMB:MbPdc mdb`GeGEWe$oˏZ\ai?bvIGeU@Vϰ+@_?F23Th]]9 Q BUF#/?FKGY?F\HRq?FƯK7?P Bt`  ~B?tl#ll$nZVBu auo7N>JA]^'? ?Az@b ( 6f  )ϓb= + _? ' 88o?z A s +=  /)/;/M$[+["-j!Q U1[!x(ar4 )] "'}#{t }#BtNQ8t_:Ba~ PLH2!B8;N;Q6B?5;N; tt>8VAVA `?CopyrigTt (c)@2`09@M@c@o%s}@fB|Ar@Aua@i}@n.@U 0l@ HsBUe@e@v@d@ 2h<M#l# MT@?4>UhuQuQtA(@QH1j LZFهgϰ 7j bL\] 6-fMB:MbPdc Ɵmdb`GeGEWe$oZ\aibvIGeUWeVϰ+@?Vpk_xu j aGe@We_ooh2Kt]u3rA 4s2HD: )&# T=h0>Th]]9 Q BUF/ ֛9?F46Hr?F\HRq?FƯK7?P Bt`   a1?t 3JA]^'? ?Az@b ( 6f  )ϓb= + _? ' 88o?z A s +=  /)/;/M$[+["-j!Q U1[!x(ar4 )] "'}#{t }#BtNQ8t_:Ba~ PLH2!B8;N;Q6B?5;N; tt>8VAVA `?CopyrigTt (c)@2`09@M@c@o%s}@fB|Ar@Aua@i}@n.@U 0l@ HsBUe@e@v@d@ 2h<M#l# MT@?4>UhuQuQtA(@QH1j LZFهgϰ 7j bL\] 6-fMB:MbPdc Ɵmdb`GeGEWenZ\aiavIGeUWeVϰ+@?Vpk_xu j aGe@We_ooh2Kt]u3rA 4s2HD: )&# T=h0>Th]]9 Q BUF[@#?FLO?Fo?FƯK7?P Bt`  ?tSӈEw8Xj$nZVBu acuo7AJA]^'? ?Az@b ( 6f  )ϓb= + _? ' 88o?z A s +=  /)/;/M$[+["-j!Q U1[!x(ar4 )] "'}#{t }#BtNQ8t_:Ba~ PLH2!B8;N;Q6B?5;N; tt>8VAVA `?CopyrigTt>(cu)>2`09>uM@c@os}@IfB|Ar@Aa@]i}@n.> 0Ul@ HsBe@e@v@d@B2h,<M#l# M@?U4>UhuQuQtAJ@QH1j LZFهϰ 7j bL\ѻ] 6-fMB:ʩMbPdc 2N3odb`GeGEWe$oZ\aibvIGeUWe4?Vpku j aGeWe_ooh2K]u3rA 4s2HD: )&# ;h0>Th]]9 PB IAUF?FtË.5?FQ?F?P t`  ?{R   ] EJ]_DRb(?&. ?Az@bY(e 6f h )ϓb k+ _?Z'e 88o?ze Ae s[+ h g/y//$# L:5 /sU"K1*a4 9 2(E#{ #t8tϤ:U a L2T18K;6Ԉ?@EK; tͺ>B40AA0`?CopyrigTwt c)@]2`09@M@]c@os@fBRAr@Aa@i@Wn.@ 0lPU HsRe@e@v/PdP2h<M#l#A MP? <>Uh$QQ(]A(Q1 LZ3;'xP 7jbbFbL7a 6{fM8cUFnDdTh]]9 PB IAUF4{"?F=N?FQ?F?P t`  ^}W;o?t9pČ( 뾅@l* 9u auo8R>   ] EJ]_DRb(?&. 
?Az@bY(e 6f h )ϓb k+ _?Z'e 88o?ze Ae s[+ h g/y//$# L:5 /sU"K1*a4 9 2(E#{ #t8tϤ:U a L2T18K;6Ԉ?@EK; tͺ>B40AA0`?CopyrigTwt c)@]2`09@M@]c@os@fBRAr@Aa@i@Wn.@ 0lPU HsRe@e@v/PdP2h<M#l#A MP? <>Uh$QQ(]A(Q1 LZ3;'xP 7jbbFbL7a 6{fM8(a?nDFF!@~qWoO 0dTc I\rdcd bjUkgaPbeEef"v?Fr6d?F?kĀqo`OivddlXr`tF0sK|٢jr3t!KU:G}?{]rLl} ]|q3xnTrg?F<ے7pooZgtfio,V|=] 5ve&1?Fpn?FTh]]9 PB IAUFGC<-?Ff?FQ?F?P t`  L?RD)o?t? )Č( 뾅@l* 9u auo8S>   ] EJ]_DRb(?&. ?Az@bY(e 6f h )ϓb k+ _?Z'e 88o?ze Ae s[+ h g/y//$# L:5 /sU"K1*a4 9 2(E#{ #t8tϤ:U a L2T18K;6Ԉ?@EK; tͺ>B40AA0`?CopyrigTwt c)@]2`09@M@]c@os@fBRAr@Aa@i@Wn.@ 0lPU HsRe@e@v/PdP2h<M#l#A MP? <>Uh$QQ(]A(Q1 LZ3;'xP 7jbbFbL7a 6{fM8cUFnDFK!@?FDqWoO 0dTc I\addbjUWkgabeE(aFjv?Fr6d?FkĀAqo`Oi^j^r`t0sK#|pr9t!KU:M}{]rYLl}]|9xLufTrg?F<7vodObdcZgtfi,V#|=c ;vLu&1?Fpn?FTh]]9 PB IAUFĊ?Fn6b?FQ?F?P t`  ro?tpwZČ( 뾅@l* 9u auo8T>   ] EJ]_DRb(?&. ?Az@bY(e 6f h )ϓb k+ _?Z'e 88o?ze Ae s[+ h g/y//$# L:5 /sU"K1*a4 9 2(E#{ #t8tϤ:U a L2T18K;6Ԉ?@EK; tͺ>B40AA0`?CopyrigTwt c)@]2`09@M@]c@os@fBRAr@Aa@i@Wn.@ 0lPU HsRe@e@v/PdP2h<M#l#A MP? <>Uh$QQ(]A(Q1 LZ3;'xP 7jbbFbL7a 6{fM8(a?D +F2|>?FW@WoO 0dTc rdcd bj{^ea͂beEef"v?Fddd?F܃q o`OivddlXr`tuqEÊ|:jr3t!KU:G}{]r,Ll}]|q3xngqrg?F~pooZgtML|\ 5ve&1?FWf?F#0bĂ|[m낪 z'ǵ(|F$$!ayTh]]9 Q BUF贷F$?FlIJ?F\HRq?F#o˯Z?P Bt`  x#?tx#ll#VBu auo7U>JTA0f88v?WYϳAjLv@z bbb!' 6Y % )! ! ,+ <?' o?z #a47% =) J"#K#({% (+ (G t#Beı?A>L ?A"&"&$/*8%,"t:"t>  A A6 `?CopyrigTt _(c)D@2`W09D@M<@c:@os4@fBB3Ar6@oAaB@i4@n}.D@ Al@U :HsBeb@e:@v@d@2h/1<M#l#@7? M@?(>Uh7E P+AA!{ L:F A7<6j% bRzg9<2 "\MbPdS mT W5U6ik0YaYvq 2HD: )&# T=h0>Th]]9 Q BUF#/?F ?F\HRq?Fn˯Z?P Bt`  ~B?t AF#ll#VBu auo7V>JTA0f88v?WYϳAjLv@z bbb!' 6Y % )! ! ,+ <?' o?z #a47% =) J"#K#({% (+ (G t#Beı?A>L ?A"&"&$/*8%,"t:"t>  A A6 `?CopyrigTt _(c)D@2`W09D@M<@c:@os4@fBB3Ar6@oAaB@i4@n}.D@ Al@U :HsBeb@e:@v@d@2h/1<M#l#@7? M@?(>Uh7E P+AA!{ L:F<6j% bRzg9ǩ<2 "\MbPdS mT W5UO6ik0YaYvq 2HD: )&# T=h0>Th]]9 Q BUF/ ֛9?FD?F\HRq?Fn˯Z?P Bt`   a1?t(9Sj#ll#VBu auo7W>JTA0f88v?WYϳAjLv@z bbb!' 6Y % )! ! ,+ <?' o?z #a47% =) J"#K#({% (+ (G t#Beı?A>L ?A"&"&$/*8%,"t:"t>  A A6 `?CopyrigTt _(c)D@2`W09D@M<@c:@os4@fBB3Ar6@oAaB@i4@n}.D@ Al@U :HsBeb@e:@v@d@2h/1<M#l#@7? M@?(>Uh7E P+AA!{ L:F6j% bT ü2 "\MbPdS mT W5UO6ik0YaYvq 2HD: )&# T=h0>Th]]9 Q BUF[@#?F\o]?Fo?Fn˯Z?P Bt`  ?tF]8Xj#VBu auo7X>JTA0f88v?WYϳAjLv@z bbb!' 6Y % )! ! ,+ <?' o?z #a47% =) J"#K#({% (+ (G t#Beı?A>L ?A"&"&$/*8%,"t:"t>  A A6 `?CopyrigTt>(c)>2`09>M<@c:@o%s4@fBB3Ar6@oAuaB@i4@n.>_ Al@ :HUsBeb@e:@v@-d@2`h/1<M#l1#7? PM@?(>Uh7E +AA!{ L:F<F6j% bRzg9<2 "\ʩ7MbPdS 2NoT W5UO?6ik0YaYvq 2UHT5D B &# T>h0JTtiiM ENUFlQ?Fڋ?FPaW?F ru?P WNtQ` c͕.:7?tUr=Ki8;(\I2-nNuj mu){YE0f88wv?AVQ Ӎ!@#b#m@b  0#  #W(3 (o+&T" #t#NĽ/?. ?M"&$a%$$,/>*L%i,T"tΞ*T"tR.R11#`?Copyrig`t (c)02l090M0c0os0f21r01a0i0n.0 Al@ 8sBe0e0v0@d@"Vt!X#xB #/ @T,UtE1UA-!@nZ pvm?S׮(%UQG@Y9"S\FkV5%UЗn@‹Pهϰ=\WQS\OXbiTQY%_7_I_[_DQOOOO__^̔ԍ?f Q́)f@vVP|?5^BlӿIY:6Xl?Ib ʏ܏EABTfxF\HRqCk?-q_WaW?ϰ+@"Bl!H,BXQnB)fFޖBoT[inJ\ۙgyAHD: )&# T=h0>Th]]9 Q BUFhY?FlIJ?F&1?F3(e?P Bt`  6n?t:m4tMuVBu auo7Z>JAS`? ?Az@bbb! ( 6 )  '+ ߈?/I-Z#,z A* " ././@/R+Yg/y'sU1,a4 ) "#{ #BtΜg8tu:Ba L^2!X8;d;6X?E;.d; t͊>8lAlA `?CopyrigTt (c)@2`09@M@c@oKs@fBAr@Aa@i@n.@ 0l@ HsBe@e@v@d@2hb<M#l1# M@?4>UhQQPAVQ^1| LFX}  7j b2ɏOR] 6CfMB:ZMbXdc $I$I܍+d 2aa]Eme'la'iL.]eU@9/$Ew?FslAka,dHޓּgmeoo(oh2gyu3rA 4s2HD: )&# T=h0>Th]]9 Q BUFy?Flo]9?F&1?F3(e?P Bt`  7 ?t)!m4tMuBu ,duo7[>JAS`? ?Az@bbb! 
[Binary Visio drawing data omitted — unrecoverable as text. The only legible fragments are network-stencil shape metadata: "FT server / computer / distributed network", "email server", "management server", "e-commerce server", "Public/private key server", repeated "Copyright (c) 2009 Microsoft Corporation" strings, "Drag onto the page. Right-click ..." hints, and "NETWORK" labels.]
\Et[Πؙ>_B?&}>J)>JU5 J#2N迴Nk?S@M #JC&AP bu(b)J++&B1[- #L#?&| ?$H*'#O1O15 `?CopyrigTt (c)020090M~0c|0osv0f2u1rx01a0iv0n.0 Al0 |8s2e0e|0v0d0"H1L4 #-/[3 %[7&'&iie@59 '&X|$GrG'0UBi&!40 rFJlJAUhh7m1!{!Պ (Z$CFFUy{?> 81cKR R)V "J{TV ![?F=$. >Nk_"?YPSW>OcK8RKl? ף_p= $B:]'i,S\6aYss9$Q@5UV+6o'lQ:ifOdD`i[Dž}soąo82{_U{"rA z$s"HD:  H h4>T]]9 M!AUF ܋?F@_v?Fq`5B?F1*ݷ?P6 m$> JuM` ??Su.Jt+  \Epteap'Eqz>Bq|7>5 Jp>G͙;?2& ?bbbz@bAi&s&Bh 2zGztM?+"@MrX"&v(bu)+s/8*T<1<15 `?Co yrigTt (c)s02009s0Mk0ci0oKsc0fq2b1r 1aq0ic0n.s0 Al0 i8s2e0ei0v0d051P94-H3 H72&iie-59 XG_G'0UB2&M$0 _FsJlJ( >Uh_I `Z1m!afR* zRQd J:`]Uy{w_$S s@ +o4B`TgV.Tw?F:^֝ >Nk"?zYQYV*)\(7$djm7lb+d ha\HD:  H h8>T ]]9 M!AUF ܋?F*?Fq`5Bǯ?F1?P6 $6>  JuzM`l? C"u2Jt/  \Etti|u~>Bu뒣;>RJU5 Jp>͙#<?6& ?bbbz@b#Am&by$Bl 2zGzt6#I@M\"&Rz(&q$y)+'q"*h,#:3o1o15 `?Co yrigTtk(c)k2009kM0c0os0f21r 1a0i0n.k Al0 8s2e0e0v@d0h1l4e-:?{3 {7I6&&i bP159 &XDGG'0UBX6&Q$0 FRJlJ(>UhI 1lq!a4DC9Nk"?CYQYf.Q;(hğ=;Af/h (\UGD # hz4TYYP# 9UFs2?FM&d2?F$p?F|>׺?P} 5   & >hRQ p  u` W?   *<J>R\npUu# t  mom%ty!V"!%' %'ؤUC~#30U82?҃@&T3q1q1:"`?Copy_rigPt (c) 2U009 M0c0os0f21rԚ01a0i0n}. Al0U 8s2e0e05v@d0"j1En4#-d3}3 }7Q?3^=4~6#DkFkF gB$gdaO&7*THT"(Pa%/BQ  2iRcE U6rXFGE'74ikF!dD0 EZeYk} 3 QYg%kd>vk5kd#avk3kdpvkEkdvk=Ekdvkkd vgHD H# Uh4>T#A]]9 MUAUF‰IJ?F#-c?F?FFt=?P6 &> A](# D<# P# dd# xJuM` ? #2FZnuJt  }7Fi8&t-!G"R7NA9%UB'DM"9%\'+d>tJU2wq ?=@M#&A@q"b(bJ);6J鏵#"wJ3?]6 ?A<#0Ɖ3z #V#bA38/9*#115 W"`?CopyrigTt (c)#@2U0/@9#@M@c@os@f!BAr@NAa i@n}.#@ Ali@U HsmBeA@e@5v@da@"1E4#(#5 -O3K 7]6Fi@ieE9 FXG"'0UR]6!x40 7VKZl:JdTU#& uUT ]9a x 1 A*# 2&#p +5(Z#FF<g60Q0,#0Q4"1 Rf"J! {-?F'<0c 0TYǮixxLpH$a5e r&7?FQh|a-|,CxEe}$X%lE?FV}w{|C|80}-|?"l!eEeaM?FO|7#Nqi}=ԡ$dduC_?F:s?FQ+7?FO9 uPu~0y jBqi/3-u_2O<-u,#$6v›l* ," z3H[o'?F hnd?FCsM>-|6`؄Ì~[U0!܉f;-aaiq x?F#^|P-|hP`]H<dp&B?FŜgN,5ҫi~l/WG~Ɖ|] ܉NQ*TMW.DPF)?Fa$IKbүnexC؆˄ILMƯدaWe=j$,$,hx?F3%?F+S 雖dNɛ˄@&fUٌ[?承uW[w?FqoW?Fsg:o?F)3⿗1܊5#AKƉsω5`R!Oǟ1a:dj{+ hʲdde5a:f?Foj&GCx9e2oeWcrA 3"5a5PeHD: H h0>Th]]9 MIAUFe?FM᜕?FXWF-0?FO?PDJW?A$> n JuM{` ?e ;  u*t'  Oț2mta{ԿHM3.Jv??v*Ij/I>JU2N贁NKk?<@M#J&A@bI(bW)U+U+TU&J#!!5 `?CopyrigTt (c) 2`09 M c os f"!r 1a i n. Al.0 (s22e0e vD0d&0!3]$#y#&#4#9PO"Pd+T/u/%-#,# '6% b}59 DFXGdG'0UBţ6!4a dFxJl(,&}>UhETh]]9 M9MUFJsx?F6?FI?FnjrL?P9V> $ 8 L ` t JuM` ?.BVj~SuJt  5hޅ%ty!"_:~څ%''"%'Bs>JU2N贁Nk?<@M#J6A@"b98bG9JE;E;E6J3115 "`?CopyrigTt (c)02`090M0c0oKs0f21r0Aa0i0n.0 Al@ 8s"Be0e0v4@d@"13H]4#y36#|D3I?A?2T;D?e?5-33 7F%0AbmE9 4VXWTW'0URRœF!DaR TVhZlJTUh6UT]$8]tA1 QJVb(T3RT0bLx"fA RSf"J:mGӎ!=Мoc ]rte$a31sE?Fd0!@c +HjẏvR%amE@uRU?F.$wX|'1Tn|pA-x@uӆN?FFoX|1Rqyh -x@u`CB2OX|Ә {n|O$ dCe?F_';?F4Hy ?F ! uPu@[y!Xn|kb_6nu(nux#vi8˿IQv x" YꥬM$5zG?F}Bh?u?FoyD?FYό Ə،m\ҋ:n|XAB0B5R+?F;(h?F?F;S4، nB6{n|Jqe!{q o.2ZQI{]>F˻w&t 1t?FOiL،?ö}>{} SS~DxĶȯ]z#0bb?Fwϯ،BZc4s  x̶.jlT1|t[mi 0?F_?Ft ҿ t߇R?ɨV=7Th]]9 MIAUF9PR1?Fq`?F]Pġ?F_Ӱ?PDJW?A$> n JuM{` ?e ;  Su*Jt'  Oځmta{_ t-3.JvĬN %e?7Ùf?vGOw{jyƤ>JU2N贁Nk?<@M#J&A@bI(bUW)U+U+U&J#!!5 `?CopyrigTt (c) ]2`09 M ]c os f"R!r 1a i n. AUl.0 (s22e0e vD0d&0@!3]$# y#&#4#J9PO"d+(T/u/%-## '6% g}59 DFXGdG'K0UBţ6!4Ia dFxJl,&}>UhETh]]9 MJUFâ[w?FVFc-??Fx?P9v>JuM` ? )uJt  81QEt9Siޯ2ENEh^ڻ IH^>2N贁Nk?<@MJA@cbbU)++&Jy;?u)?AbP)/%/F%B!!5 c`?CopY rigTt (c)(02`09(0M 0c0os0f&21r0S1a&0i0n.(0 Aln0 8sr2eF0e0v0df0!Y$b-# 'Ax& 0bP59 6X7G'0U]BŔx&!$a $F(JlJ]Uhz@11!ZF(?~O攚   8P[&~ 6RcVJ:)Hpv ?#V Th]]9 MMUF"q?F땘n?Fq0>?F_A!w?P9@7>M  $ 8 L ` t      8L`tJuM` ?""."B"V"j"~""pu)Jt!  
1X{%5t132m_%5.7RjX6H7!>JU2N贁Nk?<@M3J6A@C2b8b9J;;6J=y3;?3UI?A2b9?O&EB3AA5 C2`?Cop9@rigTt _(c)P2`W09PMPc@os@fRAr@3QaPi@n}.P AlNPU HsRRe&Pe@5vdPdFP2APYD3-3,C GXF5PbU9 VXWW'0U=bXF!sDa Vjl:J Uh^maa]`t AQ֍A8Z3FWY?F9#}"# C22@fCsa Ur)[v2JAURo?FN?F=!?F G< 6uPuyA/ybMv Js 9o#"u#|u32$N 4 ?A~0 f? h#|S)?"2@Ts(<2"41rÀ5&Eyq3u0,Ǥ?Fs^?Fq?FS|xX1y6>|>E<ti!Z$@nqcN#`qBsq- dM?F"7z@Er4s0{욹nZ|q9a!i[+|ҼN#`V}188| m vqt(?&4yq`y:#Z?Fvd@^=PiY$|n׽\|6 ^yEb$ ^S ݿN#`"jtyVp`iK?F3H?FڛY ʜ|B6yI^a};I| Yb[soq|U?n \*O#`7I~Fmp1BB{?F>v?F 6Qqy hq|o]^})f~^K!OM3`Xjy%gzp jh?F t?Fŷ 9X|9} |)x?pM` #e2e:~Cra D)2HD: H h0>Th]]9 M9MUF$sж?Fp b?F4tV ?F_<?P96>M  $83L 3]`3t3 JuM` ?3.BVj~)uJt  DYU%ty!"8Ӆ%'fY*%';ƐnƤ>JU2N贁Nk?<@M#J6A@"b98bUG9E;E;E6Jy3;?39?A?2bPG9D?e?5B31A1A5 "`?Cop0rigTt (c)h@2`09h@M`@c^@osX@ffBWArZ@Aaf@iX@n.h@ Al@ ^HsBe@e^@v@d@"*AY.D#b-3=C =GA6%`@bPE9 4VXWTW'0URŔ6!4a TTVhZlJ <>Uh$]T]tOA Q1(`3F ?FiXe& 0On?Tڿx#EA_0_;Jb"JaFK`E?F /?F߱)\ uPu1ad0kvX'qeDz`Ֆex#P_v^ܿ4 Sm?T0 -? YzO?x"pp1@c](Qv:y!r55a3id{ Y,?F\?F O&&`VW:E3 xui?k0M5eigJA|@ W|wv0rIQv v uB@#aE^0:O o?F^k#?FTL?FY.}*\/~kU?F#]ҝZ?G3´<]lѤqA|? yFז_t&o˿Wi}id3.aДgSh?FDR<<ddT0~B{*!ZyqH_C,qÿ́w?F?F}SR?FD?ƙ!FsU.RM .fMA.zM +M++M.++M."M+"M+."M+B" M / 9Z#j"M M / 9#"Q& / 9#"U@#"a#"y&/9# 2Q&/M#"322y&M='y&x(-63"-6(-6uM`? A7242H2\2p22222222,2$,28,2L,6d(6x(6(6(6(6(F(%F(68686,86,3B?T?f?x???>u#9|J2b&baFU*JRu0;t1 b.!ta  zKdS'& 0JU2zGz?@dMc vA@b\`;xbIyV{YVv,r !:tJsrcrv rArb"Iy{Fw,r|,rMNsTTs`?Copyright%(c])%209%uMcos{Ifzr}ai{n.% WAlр sՂUeevdɀ+bMs` `vba3Wbtbiv eqX z/'0UÒv.0 zlJvUt@S +HW!!1<!.!@B!oe#Aqhq}?FTOp2c r\2"02c JWidxȯ(U?F:IВ?F -e젌"v?PN,DT! uPuqMk 6̍B!23ץE c#)!:2 Fbk  1d2b4?Q& Bbҿr45u)3ߤi]~젙K.LL̖bTJIr^F_ l ^c ׳fif|F~1r!o` GR["hJY@]+ݾ?F6vb"BFϤ %7ɝh|ֻL~|OߔhcAXT4B;CS;YBUߤ_qt|ȝ\ {~TqUAdɼx1ֶ?F|DB젓 Sb\.B| JLǝrZ~HĢg^ޥFg?F$@Į﫶 ,3JaE̲Sι?x<#̿- B 俋}p4<'xld~1<ܬCs?FicuU FЪ֌?< 1TZ pq~Y,ͲJOuOOadO"o4orVbP}q1EʄOCZ d:d?FQC;sl_<>Д]xeW. buDSwV*ɟjoko}o(ZoUgrrPʃL?Fse9mpi1@ps?C>O"Y+Aq[*2l>M_Xp#:,?F[`XK ^$ǽhXvFLzN?F-\Z?FM+ ")?P@Μ<^'k<\ 4.iyq1Kqkjцpqx"pȯ(UP:IВMk=6̍Bᦺi(֋P:mk?FJNHP?PN,.y1",+~]s\߂~]5zn5]?]ȻEtAF5G u$6GaMș<ѶgQ>E% Jצ#{~[:ĂKǯsԪhZ*ߝVU4db62GA,ϹJLk81-s߭f>T (ז0g1!e\%zynA,=͒}s c 5P=D mOAؿvP $kL`C@Y?F,ɴZ`2a]o*"+H) $WV= ً5Od.t@//GB"I0}!!v!M ~+ ت!fLZf-?FOgE>k^:Ўmz@ܐE! cykA$i/ Ok/5BOkFBKHgJ&?كd v:Ì?JO\O1/6O1_C_y%tNw&q00[^cAD[F=k"l4 x QP*% T|0IWj&?F?0ۃ4_#BDM[ٺF1يC nf1Wob!x///oHokJ?~CvY_0 ?Fy]Gp0V~YPM'aܿuiTױ ?pNo=W/_$OpG/_قuO> >F6q0>XY'O?/?A6ُ}Deٱ 5?u_aX݆ϟs+?QcuAuu?FH_*|>Pe hV8?FeHN55/Ԡlփ~+'vc>`h?Nk"\T`bmMPyG&_Sf \l>{f`lER ͮF9S*#s B}PBA9!F0# `XOB UV ~d`Դ w6@+_XѶ 8Ms}GV |fjW lW XoW HqW ߆xsW guW C@PyW ߃zW h~W &,X &X $X p-]X AX (3Z X X NX %8X  X OZ&X =9V &UFDfP h>$KT 6D UF~@x! <`>!kbs{kbsi^% X\/7'TFJ!DbA[ qʕHDB # =hj8>T ]]9 #2AUFM&d2?]?Q6 ~A@9 hpAW(4?u`l?Xuhib  #AEJU `0.UH :l$'!f$5 #!`?CopyrigTtk(c])k20 9kuM c os If"!r !a i n.k WAl (s"Ue e v d $\#2q+0?*?<6M?\.?lp*E&B=#Hn&M4?6PMM4-,(,m# m'6##0OoNk3@*6'l6 ?Uhh#3 t!,$ BWaBE@BJ!>BdCRQ, 9VM (DAAO?_SSUR%c^|__WSU5cU ___C2O;rAc#p (e jO(heoiAb59 7,X\,'w'<4N%!4!m; v$zHD # =hj8>T! ]]9 T #iAUFL&d2ٿ} ??FP!3|>2u `u bu  @[A-0' - luu`l?ubC Ae"& J0G6G'0<p'"R('%.&' '!$ +/ "&"7/"',? U`0 h:HM %B7145 11`?CopyrigTtk(c)k20-@9kM@c@oKs@fBAr@LAa@i@n.k Alg@ HskBe?@e@v}@d_@4="36MD?F-P?M-83 7ԿF#30O贁NkC @Lg@?!Uh35 A"@ aRKaSPJOQsb2t8UoAo,giAreE9 6X7g'2Uq`^!D1 5 fj_(Hgyr Ds{ 9O5LF70(#H! *B 7K +a״ o@}+8ٶ o4aG75Z oiP(=Z? I1H;Z ?4PUFD  h,^TYYBBUF\.?x<F BP(c?P?tO @L^^{;B_@L&d2?sU!&"/ȍH$б?W/i&? ~t  0 ;QB`#Com ,link\netw r*"p0r pP1]a ,d0v c0 /1^ ; YP!3u &?  ;;" ;;;gr330;G;#7Drag onut heqp ad likb tw e cmGuiCaisd vOe]sBKcnTe)*`i]^o 9il^)n5k].b\.BP(?? 
M7Pfo??H D #  B>T>':)DNYh>) e0T9 EM dAU@\.?@?@?BP(?P6 .2A7 7P u` ?8=Q Keuoasu`u`bu`u -##@"L( (0!>zLU<?&(?bAR%g#2q l?AG}F]#qa0 c{`Con0eoct 0r 1l2z0 A0 +` Li2?05U"g# ~#"9 `.4 u{0 `:q1276' x:^%g#@KF$=mG<hB<wOEAGMOQ8u?^"f 4 #G<H7g#AwQ17;1w`VA0s_ RXYJchm!#e5P4PQ`? 1py0igPt (0)P209PMA0c0oPofR 1rP1a0i 2.P ARK0lPWs)bePe0v0dJU" Ip`@#neϥ?xdnf'#rK"c'#f `/2 k@ebum 7@w!;FPSeapb po18 `Qps-8R_t-0`R%J1^#B TU7^%\f*?v~!)?LEp &L,5bK`&! CT<0oPm`s0@a> m D/^""QQl2 2B33B11B56hR6C 7$PMC]E(j )w*:B^%]wQ1{U_ B@#"w` M`nu`a2u9bP1vpuqMc;Ca𡟳œTr`PrRdؐ1 Nؐ!m0k6['Zœ!̓ P`rRdEkœR`GD;bQiPeկ3EĜAP>aůYU_hœ%̓ S?bi`(aѿhUz1h B@̓L 0chAWej|ώUBؐiK0dA2gDMMœ̓ R 0oEAYCUœ`Α Apm0n0}Rfz\SP)aPF1lsQ˒W` ;iC`i0y,B@ ̓ ITPOP` 3c/@śSؐbP` 1mP e B1U"%Ѝ# A77^"[(qpPU]=`% &ROái;bBA`NEWOPKPSA ETPQOkRIWS1PT#ѿq;A11^"!! mBz3BB@9191A??´[(3F&p`g1q1 !0 2`T#;4l5cTIt,@r^";E3EU$hU9_D&zFBECF3s͵NSY";ų`pv*HA`bTfo?wQ&bu0HybDr)Ixbkfz`RQz:c exhGFRbBb<\ARbBg"ұeT2b @@-,eu LbrZSt?A////3s8BP(?N#F@<M?Orb:B$Ix\iĺAqG(.f\gXtFZ'0ye] IV]Zq 8BU~\2C }'qbšg q9#3^sGE[Tb%@tbSAc2} 2FA ]bF" JZd~Eq`axe@:F.qTba'g UD?@ ?`0o:C)aY3pra0]! \2rBa(a 3G2T3rA 0ex9dvn]GWrC jae%ah4F?@ p8Gp  Sfeb]oobs5{HZ,+s 5)d5ukDnmF08q#>K 9B <:aԴ o@+X/ o=aX?Z ; PUFD  h(^TYYBBUF~?x<F BP(?P? ` B 66 6 66^I6ȅH??A>IB@L&ɯd2?-(?\.1(. sUW!U&w/J G 0B`ETerminal,s"v",c i nt wo k t t o , !1i 01, et 05, qu p 5h rd0]1e!i1A^| (]G|& n?_ Q4FPR? pww {pv w wxwږ#wpwp~pw~? ~' ~~  wwpwl+w~wWv-{wywDrag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aab~߿?)j࿚i?3@ ?DUG DF P# h @T(PYY#EUF~K? P} 7K.B! : DTDh1YD| A3 AA3AA3AA&u` ? "6@@TThh||j&&"u#L&)R2%X<X7eqU ??' 4Wb0{[W`:#2P6Q3A"'򉄉Q9 ^ L1$K" H7"G' @>>ÿ@q@E?DF0܇rzBCCB-uI@`-1"Bu$ @]GbC$ Q`'(R7"A& <4' &)9u`u `"[ bz'$u`R425 ؓ3Q171`Vis_PRXYcPm!#58`0o68B`?CopWyr`gPt0(.`)02<`0U90M`cU`o `'of]bNarQ`aUa]`iO`n 0WAl` UhsbUe `eU`v`dR13d45*8=363@2t66 24B#30Ur3]w,A$T!^@cT<`o0m`s]`a>gdPa!gRsAA2hhB)p3+}RqqRqq281pacR2+6uM\$ %G11[2 P@ЯRECT'$@ b23;.UM5s_4/&ׄF3s͵NSYFU00UQ#,!@Q#J`epE>Ҡj_7Nam uBtS` SB~&_o<м_oB=Eoo/.c` WI4q*ՄVoC`C0N`҃?w`M`mI2JmhO@E!4ay>",>jeрϿ55 ©ltgИa ` %_0&ȳa5`NTW_ORK3bHPIPBQRБR5ISЀQ<) i$ eu؊6ŔX·q-/'0UwŻѡ ˈ74hH臘h5: '4ANѤ%Nb͡"44?4A?4'?e4[?4N?]4?4?Y4"4?9%4?%4ɥܣUHLuD" # ;3h( TEJ eUF~K? P VN% @*&>uA` ^?Ci i4Fu#Xp^Tb>tU b7t>U2zGz?@9 #>/&A@qa(bo)e|+|&R" !I`$>)##" #7"& "g"bo);l'R",R"/N )#z1z1'#`?CopyrigXt dc)020090M0c0os0f21r01a0i0n.0 Al0 8s2e0e0v @d0"i f  %!X)#]'0UB&T50 PJl>8fUlYC !(#pAg!(:)#FM&d2?qFGN tQ!P"(>!p8?ST0gYfRaNrTaU3^\.,__ 31bXEU[Vj?F@  l(\\ rX HYF `0QYp= ףXo`?̺oZqapXUL_^_p_\A*# ?Fg?l6 l\1"X%Du~ |!YRQi3a\ayo7;mDucP]{1cr|%iU{^?F^ lʡE\1_Zd럄ht X01?FXE#?FFb?PƚK G2G2 lû c_tFaXNTN]\R-\^Dr7 јٜ ZLT4&1pBHZlr À5%8yU`0 ST*J! o`rcݘEF3?FAM)ỷ?PMޟ+=){ )LFj?cl~Hӿ忷ǨHD: # h4>T]]9 M qAUFU*JR?FKRT*M&d2?Fp_8w?P6 L]A]M ($JuM` ?Agg2DSuVJtS  tq= ףUp$_>JU2zG_z?=@M3#JC&Aw@b#zw u#b( (+T&J[=# "?0& ?A$ /%B=#BZ1Z15 `?CopyrigTt (c])020090uM0c0os0If21r01a0i0n.0 WAl0 8s2Ue0e0v0d0+"S1W44#-=#f3 f7&4%/O]FiieP E9 ]FXsGL"'0UB&!4%0 FJlJQ h@$Gx12A!1(Z=#FF!0JR Lx{פ_Q\  R)V4"JVT?F]Q!?P-M G2G2\PRSWRc^_Lr___:M -LdTd&!&Ra@SBЯoogr5%Bb#iOD"ZoZD?B!)$a EeVHc^?F7zT?PxT8QoWmilvd)TJEvސl5 gvtmFd(:w.|&E2_U"rA k#ރ+"HD: # h4>T]]9 M 5AUFOt:?Fr\.?F#H$?F|>?P6 e.#A#b  JuM` ?+ILKu8Jt5  r7htyw/ؤA>[JU"?& ?AbA@D"bL$J#2zGz?"S@MJy&M#b#zO M(bL)Y&S,D"Y*B #115 `?CopyrigTt(c])20a09uMM0cK0osE0IfS2D1rG01aS0iE0n. WAl0 K8s2Ues0eK0v0d014-#*3 *7&?!FiieP59 !FX7G'0UB&!9$%0 iF}JlJA h@7<11D!Z#F&*JR?F9.4mL AxY S+RJ:rTDa@ RS .S X%QEU~[r\wY aU5UFGh4\Q iʧEQ8y^Ffj?F~@W?PbʒK G2G2@ D"]rA R#s c4v(O A.x$&AR q|4xa@S#Bsdvwrb5^%HD: # h#4>T#3 AMT JUFx<?Ft:N?F~?F7ͯfл?P6  >JuM` ?juJt It=WGWzHRIlQI#>[? 
?AbA@"b$J2z6v?@MJ=&#fb{#z (b)&,"*B!!5 g`?Copy_rigTtZ(c)Z2U0%09ZM0c0os 0f21r 0D1a0i 0n}.Z Al_0U 8sc2e70e05vu0dW0! $-# ',?6iie59 6uX7_'0UvBZ!0 -F"AJlJAh711!TصTh]]9 MP qAUFbX,?F3L&?Ffl6?FWjZ?P6 L6 >M   $ m JuM{` ?=cc.@uRJtO  \(\tu_VQӽmI[>JU s#<@!A?V% ?zAJbV @bbbz #|"b#$|"bĄ$J 2zGz?<)@M/#J&B(&$%|";T'|"*B9#t1t15 `?CopyrigTt (c)02`090M0c0os0f21r01a0i0n.0 U l0 8s2e0e0v@d0'"m1Yq40#b-9#3 7A#-(0bP#E9 wFXIGG'0UBŔnF!q$a TFJlJ8>UhS2Q2Q]1LA|!-(`9#FBP(?FMx HNټOqɠ[Hl[yQ RV0"JU}e\?FAϡ`2If9?PaEQosqvyifdY0_rޖlͅhn4g_oeBdy$w4|!Z$ܿ__  %$?qyU2_U|#rA 4'"HD H# ih#4>TA#3 AM 5AUFbX,?F3L&?Ffl6?FW_jZw?P6 . > JuM` ? IKSu8Jt5  \(\tyuVQӅmݤA>JU2zGz;?@MJ&A@bE#z; 9#bS(ba)n+n&J[#";"?&L ?AY$p/R%#115 `?CopyrigTt (c)U020a09U0MM0cK0osE0fS2D1rG01aS0iE0n.U0 Al0 K8s2es0eK0v0d01E4D#5 Y-?*3 *7&FiieE9 FXG'0UBŴ&!$0 iF}JlJ(>UhiI <1Y!aRY4n Rx_ J:j]_`0 Ɓ_.S F) 8qViH$D?F@\.?Pַ  G2GC2\QYV4GK`mA.x ΤAR Ԡ3l4xa@ R#Bsoog]rb5% 8 bHD: # h4>T]]9 M 5AUFQ( B?F].r?Fx?F`0?P6 m. >b JuM` ?+ILKu8Jt5  ty(\{GzAt>tJU2U"?=@MJ&Aw@b9(WbG)T+T&JJ[#";'?& ?A?/`)UB#115 `?CopyrigTt (c);020G09;0M30c.10os+0f92*1r-0f1ai+0n.;0 Al0 18s2eY0ej10v0dy0@!4b-#3 7Y&?Fiie59 FjXG"^'0#ũ&Z!$0 OFcJlJ8>Uh9AA x"1DJau'Z#FFp8 J4T .xDD3AQ]M RVJ:n] ̅_Zr==B&d2?tW~ɲ`?FH!{?PZss G2G2 .S =;;RSW4O+EA.xSfϽ؍AR X]ded4Jxa@S#Bsoogrb5w%Q5U}H$D\\~?BBQ5\~CftZ a axUv~lk&lqU2n_U"rA 5>#pHD: # h4>T]]9 M JUFp8?F}>??FbX,?P6 >JuM` W?SuJt  f\It=W|?5^UIRIlI +#>5 ՆJ[;? & ?AbA@N0"b8$Be!e!5 g`?CopyrigTt (c) 20 9 M c oKs f"!r !a i n. Al (s"e e v d ^!b$-,q# q' &2zGz?@MJ69#nfb3z; .9(b8)E&?, 0"E*iie59 X:7'0UvB &!%$0 -FAJlJAh/7!0!eZF43, E <ᢋ/.# M OR|VJ:VjUS |bPReS0QEUxx#cHD H# ih#4>TA#3 AM JUFVj?F\.?F`0 Pn6 >JuM` ?uJt  3\ߦIt=WIRfIl$#>5 J[w? & ?AbA;@0"b8$񩣑e!e!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl (s"e e v d @^!Eb$-q# q' &2zGz?S@MJ69#fb3z; 9(b8)(E&?,0"E*iieE9 X:7'K0UvB &!%$K0 -FAJlJAh/7!T#cS( bUGD  3 h0TdYYB IUF}>?F&G/?Fgl6?FB\_z17=w?P} 8f u` ?0)u:t7 ? = ףptDʑ_Gzޑ?"L쇪iU!!`?CopyrigPt (c)J 2\09J MB c@ os: fH"9!r< u!aH i: n.J Al @(s"eh e@ v d  #S$#%B- ## '?;= # &#24 6 6 2$# #0U2 3@&B A^#5 6X'7'4Ŕ 6!4] 6:e6^#[EBM #HA%DGUHuDh" # R#U U  >]0T]]JUFgl6?F &p&??F{jG,"7?P  uM` ?j`fpu#zAtw tnٳ \b "OMGt2N贿Nk?@MM%&A2 bW(c+c+c& !!`?Copyrigt(c)209M c o%s f"!r 1ua i n._ Al-0 (Us12e0e vC0 d%0 "!3]$#Ry&#4?6QC?]"W(c+Q;'-,# '6% h|5 CFXGcG'K0UBŢ6!4I cFwJlU!#A(:F<3)0 vc p/y $(4!Xv݃`?FZyE}0pS a۫YQ\I`X3Uk?FQYC~\f"qoY|5UD/5?Fԧ߾ \F\bJFXk_}__\A#O_"_4_F_X_Vh#?F=]W\>ۆ\34=(_V[e¯?F'e*m#R)\b@QoWZʵ?F,ށmL \$6ԡybm&>e?FDȆY\.Iſ\)ozEU 2DVhVၱ?F}!\\wzVxtD?FYY4F-~Ȩ\(p|y m! ?FߝBF7+*y\= ڙbm&R{?FbZd\Aa\?ulď̟EY.@Rdv m5[?FRⳭ8HJ|\ HN۟VHD2?FߦM\\A>x2V8`?Fitͳ}#E\I&)|5e?:Xa"7["mSpd㯵ǿٿBeJ\n~.?oJ4=|nn)`]Cc֦Š]]7PPE5߁m:2\y_1W3E?F#Bd\eÉbm^9^?FV;.O!thV\9MύAa\n߀ߒߤ߶߻VV ZKX=\ .c|~k綖d]!:`="K\lJ(i0(dV;amЏ9ta*m \ǀ_?bmcO"QGm= bY~\r6y@ύ# AuYKtxoa{6\/ 4)V w!>?F ߄*"̕0ϓpD!K>?FWP! -ߟ:{Qb?D)bmh4(?F|hS ߵ.f-R[Ϡ ύ //-/?* Y7Q%i?F_s"'a E/`'r?Fs7"̍^sÁm;]֜/O?F/RE"ןh~pMM/㿈E8:͉@;$Pe"̪ (M۽^mJ?O.O@ORO ?????O%?kZ@ I~}P"P7|Msk aO=˩ ?Fa7 >fɾw2θOgl6'"*’ ܏{7_(4?F8W!"̿z2m^Gm}>IύAoSoeowj ____o#o\+w(@ `&"-JO}=O^f~oyV 6?F9~"̓ ;M\Oo~ahFp@o!"?ݟ$Eߡ?~%^]"Q>IۢwFX"̰4txڑ3:\Qcu '9لndu%?F<2}G"^̄ZϭSԓD-?F=ߓC.k0w]<<7 Qv@b" "R؝f2]bwƈWРJk "̇1%~]F@m 1CUb^fĽ w \,X|4]-HuvziP=?F ##BOP9]ǕDG(t{GeY zKgPfO2~]9+T~]~;?o-L /=m$+)9ϡʌ);M_q 0;F,5Uz!izVsJ}0W }f6i,g8~ٜ˺A}2!~ tLհ#QqBZ$@S]5tC6 e`,.zu'J1ߥ6HZl~"~ѠZ cX, }ٜ?J\1~C( k0ޜcAKrзX=YϯR p0%N. 
X, I6]+ Ӂ?FI:5;iU,=A  D)U= 1qRdvE[%ڻ2i{w׾mQ7Z'ütUl?F3&Gsj3jܙ-Ow) S-ۇ4H-ɫ9].E|"?F&ti}S`Aܙx@ ///??Rk/}/////b"m X i)r~ٜوWjFQ"?|PP?Fo%s=De9gsܙaEo#IOl֪E@e~iY/|dM?v۶ :YhSQv}T\Pˇi/ON|aܙz?`$OO__,_uOOOнOOOI@B?FKTG.i`AٜlOo->_|?F!i_ ]gٜ?, )ʼn!|?F3Ck*}DxZٜ?]QVyhk߻?FX[g]/I ٜ?bvd Co '9Koooooopj T&}j8>ͯ߮\nE@L@Gٜ|gxgx0Y3N,*ݖ}T  MlfahhÔ}Dq:6};p/"_f/-?Qc я$]+w(0p:dy~͇JO} M?p 0!s?FYY4ڼ~iS!(p|ʟEDk}'pҧO?zA'xў?R_F!] EQ>sNϽ=(V-Sewaۯ#5ٿ9/{?i߼$&B4O% C3i?y}o3qo?Fq ZiD!'FA\]0$=ڵ}SI6?Ͳ Ai{ߍߟI 0BT~#a\.LSA-BN^{q"?p?Fe6i㪡<_#AA}2![?F1:!6F! [jtE\qRD%yjǍ6Pݿ#V<>tl )ae"%7I[mzj3?C_%?Zzh̿Z} ?-P N?/????eK?]?o????yxz`h qDa.??>#L3ўh>/Ew9&:/ ЎP.JhHkOQ qӠħOʜ- &h̜zfUm@O___oa_s_____}2Z?F$'_{uh̆Ia{V?Uoǿ7e~h̰(ecy) - mz-$.hhsZ}2Ca6o`(~ "?F0.MX×mEyoށ顷6\Zq%բ}t7Z?F~ Oh?H |A?UYwkE?F?>C ?<a,:wPI5) ߈2/WfsM% e'MvǃaËgO?F%F92\aI}|b:B9 /VA ȒTH7D6D # 4# J  !"#$%&'()*+,-./012*3>5haATaaMYUF%>h`?Fr<."L?FcG?Fqј?PM ]u\`^?u#t  EF't !#"UY%'~P*%8'C"Ut2N贁Nk?@d\#&A b(++&Y/R/$78?#bMd4z (+b/513%R 11;3"`?Copyriug t(cU 09M@]c@os @fBR Ar@GAa i @un Alb@U HsfBe:@e@vx@d"1Y]4i<3K 7966i@15P6XGW'0UBQR96W1 VZlY]U]l@TGA#1(!l[?FuO[0:$}#)ߓ=b!8e,չ?FAOw(lxrNj$>la.PXTh|3ecztaLN E'm:"n*&G^|%[a5:q m5a)k?||=a Ed0UgEo"o4oFoT#______evHG?F3'y~|3`[]Ք|cToev_kfP?F;~|S3ٔ|\£oevV!ex<ڤ~|{ |ev-+3t#~|;6nRcoj^}.@RdQUev~؇Z?Fi~|+|ܔ|JCjev<)OO~|p )yBr7ev1 aNְk ~Ůeąܩ#A 5OfYQWgn?FO*~|Ch|`-jݯev%~?&nq΃0xi7y :kOop ~|4ZO勿]oρϓϗU+=Oσ+ӑ!]~|:<]їyT}ςT ?FO'>]P~|qH~|ο:.H!|ςto w'"~| 5(CşVPevR`t)/~|?7>-<Ѫ|Ra 1CUgevo I?Fb7~~|/ͼ셡yoevkY?F>_ ~|3y evMXy"}~|ω9?lev ?< "~|^_\˱R&8J\n|by\M[`bD'nnEm|<z_m6itaW}nk.V̲ocxL45MR9?}.mE?KxSi#;m1LL'7,}K^>P/Si^}///*=! B/T/f/x///byYEvk덚4D2=y_mS M߾}zNmWUdw4֢DB&™t|d˽\Li#hމQ?7B%$Ĩ^QUHnOOO_8& ^OpOOOOObyV~%װ!(E>ܩu;ofUN1[D}~XXtۯ~'haCjmt{ܿ`Xn b_փ%hkqwu~R)uoo 8& zoooooo~KL}"_!eΟW?N eFjӈHi>DYH'h(%i?}ou MMXjQ %f[݉gm`¥ G8j ,E/R&5gS>Na8|'hhv_ ߖ3bW7mȧս[PebݱJ}:?nuTѺ 2DV8& į֯ &i~FR?[M%- q?j  DJP}uQ9/'hdfO?FV+p\[Wvpa9| wdѿOtZ(UxzêJUfeC(6,b#l+eR(߫\(V1:9Oi:4ߋcG1!Gq}9 ?C)uNUBv(Bo?FO& v!~5I-lym(,.$23G_CRHCM4q6vl~_}5#7o Tfxy?F{UX>͘ Uܼ*d4Ij@ ?Fi; Ly4GXj9 g1re-#d\D-h:Pc}Lw4mv/*ĩMQy S:U ߟpʟy7j?FZ?W+}(U*<SW?F .턟bGF赁9 <8߁-`Z͸~~5 CTc/ws}e v#P3ׂo(r.i,"4Nr¿Կy=Lu?F7G3z 9 ˇ 5F?FJ.3$w 1tK -o9a|ዙCѾ;-z0ƛ+WzBg/w-%&G,ߪ 2Dؚ5F] MH,>PNr#߽v<߯0ʨ)QMLTgMUs}I$FATG}=u2f`?}1&OwuXp+MEp41}o7o9ݭ&fgPy}}M?=;??1O2DVh6 yJ]}z aFgRlnk6+nɣj4 anf-n_h?+nj^AoxvTN.g!uh-Vo@MVp >ߜ)sPU 1D'[}LG*>QoN/`/r//.V//+/=/v(#xEAGw} }%(Lʧ1;$gKG"V2w]xR)vJD)3 7F!nLigqN|ݭƃN'QM7ZSF^ jO|OOOJv?O O2ODOVOOh_D PicPK>ѩ݅cDchQ>M cAѩ JBɆf!\.S6x~r|#ݭ#.CqQzUȴ߯b~(oooofo*o<* 8}\q3h^ ߻l~ϢϴQ9 AS@XC^}|/9K ս5١TB㰉\JA**=@$,Q9G\鐒?FU ^d /g [A\9mpï?FnrB=k X* S(N A9]'Nx =$.T~v )\A?S7|#!8+|z AM}D(:LfQ9j,,j?F4*a.ȉ\zFq&ǻ}w^Ym[6?FG!Ɖ\0#Jh@ /x|I]11\\o=P|G>q2:]_1BM4=1vq"_]=H` @]//?A?S?e?e ////??/ԸƑ?F qT}S҂ ϵymޯZ?FQo8G} ϊx"1IQ }M}>~~^{ªy1}eYQrŁ& [ׂox^.cyOK_]_o__+!OO__$_6_/ٗW(?F 5w*zi۟灜i{ސ_26w?Fߒ(xJJ쑻+Uϰ|]FuqyUq~iT ]~[4vp=!;L }?LYh- pzK"o .@R/R7?Fhߤ? QTJZ׬23l%?FO& ?GSۈQ?5])B.zhɅ($Q1(5} X㺾 ?lرc#&8J\n/_?FdW2ʼn@-1ş9ȟ2֮tp?F<*fB s1]V%sAS})Tk{#/v2OB!XLi.݉}1̾MZͯÿտ$0BTfx~P _! 
QR6#I -hCFUz5|Y}N?#TG}"F$=c*c,j4Z>[T-G#ϊt9a?FQڧc,H ~{%a -2FA`~hLr Xޟno#:L$FFǕgtg'bl[;| ߛ{-,>PbtG)AQHZ?F;ƃOc,t[|{r6<9{mgE?F Cg=:*\R鬱YG/7)- I;-Z?N5şZ=/I'nD5-!bp1{ IWx^q2.BTfx\PX{Zxߛwtu,]R |{A&BQbhì?FB#5һ5(B]-nf!Pb,$21Nw gnV~B]gyz9^p 7QoGI|q~-9@d]/// :/^/p/////\ˍ|x?FȯI|hLYosլ{m|^>`tzϞI|ύKYʰ;!l?bsdzNQN鮢I|\Jm^*Hq^o_`n~APpⶍߥzȣs^ˀ≜O__(ZA}OOOOOO_~ޯ1}^i8Z`֋ ŸLS$W+wvqO j2~Lx/b6^Ni~ KC} mN;-럴 =T=qip8/~ h ;-#霝 2Dz1oooooo\~DG߷&wz P}/p쯹BxN]dK]y.8@h&V&%-2|ЏLWM;| =yidݑ%%-tڨNR 2DV2ď֏ : O21f& =Q>Nsz99]yX21 ;>#94oQ>ݠ_g?]^%mT`>rG.l(5sm H_`?#~6\9Ҵ} қhnBk=Oas#3ί(\Ofe#y~NQmr갯]ߕ 6m^ϕJY9]7}}NQ X]n;m^uԱ_^?$R?F[5l<;Qq<7B1b;-Y='!"/;mZoY?4cgC2oXj|ߎ84 2D\M'g\elh \~5[Vq:a,8ω?F0 iWi&4\GbF|aS)H9z#/<^bAL_߶,fdlEH{=uU%5*eBn v;!#4?FADo)XLȚ|bv!7A!*EKGalH`.P%ҧqp6t KT4HD #h34>T33 AMT JUFM&d2?F W`?P6 >JuM` ?juJt  5]It=Wp= ףIURIl#>5 ՆJ[ ;? & ?Ab A@N0"b8$*e!e!5 g`?CopyrigTt (c) 2U0 9 M c os f"!rԎ !a i n}. Al U (s"e e 5v d ^!Eb$-,q# q' &2zGz?=@MJ69#fE 3z; 9(b8)E&?,0"E*iieE9 jX:72'0UvBŴ &!%$0 D-FAJlJAh/7!h0!ZFg, E <ēNmP#  &OR|VJa9PY]PPReS[RUV%:]x#c S( bHD H# h4>T]]9 MT JUFp8?F@ W?P6 >JuM` ?juJt  f\It=W rI*RIl#>5 J[? & ?AbA@0"'b8$e!e!5 g`?CopyrigTt (c) 20 9 M c oKs f"!r !a i n. Al (s"e e v d ^!EHb$-q# q' &ڝt2U2?@MJ69(5,E&?,2E*iieE9 X:7'03Ŵ &!%$0 TF'JlJ8>UhAA `!1ZF? `, E <A#] =RjVJ:VjCS |b>RSS6 QV%U\.;\QNYu=!ٱQE^?&d2_MZW#h415U)Hb?*V[E! YB\JQUU+Vp;_MZw@0$[D#h2$_6U2rA >#JsS(#e_zH"%u@s $I/bO`MF(0E# 1GZOB HX H~dմ I@+^C 8s}GZ CZfZ SHZ XZ \Z H`HZ [cRZ g^] k+Z n6X[ *r[ -uAh[ nwp:[ z&] f x] qH'UFD  h(^TYYBUFjZ?F~??x<F BPG(?P } X ]B66]  TTUTTT9T9d9iȅHBU?? ?B@L&_d2?@-Q(\.c(A.sU!&/)1(  0%[B`(file,s0rv2,compuUt$4d0s40r0b24d0n0tw,0rk!e^| (QSG& i?!3E?? ?qq wp  p w{{ qw qwqw{pqpqwqt}pqppw'wwwbwqpwww|zwxvDrag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aabjZֿ~??cN׿ mF1??*t {V↑T*J?@ ?UG DF P# h @T(PYY# QU@jZ?@~??F?Pn} u{` ?Su#؍>>HX HlHYHH EO.LO4:UDDXXUllk'"&R",I'U25ĉ%4 ?"u'4b0{N[`:Q1#2T6"!^U9$"3&9"/L6 2&!'ݯ>e@ÿ@qn@?@-fB?mDcFԸ0)Bu/@`҈-1Bu@rCC 5GAB"2Cm9u`u `"[ b!u`%3Qo1o7B`?CopyrigPt0(c)020P90MPc.PosPfRQrPQaPiPn 0Al` XsbePePv,`d/`VPs_SBNcPm_!#5P8a` 6!bz1AXE:FG$1 8dT=`XQj2`]r` M`nuPaPt+arp{` ?bup` BO=ݛ`S,t@']p5%m-T݁<9ofm'Qh>qݏ,BT'.at'xXf@>pÿЇobLZdq [ܒrr$ avvvrsW"`ERol]16 бeI`~bb a}abr@6uP`TaSo1u"_u'I1I2A!b[aahaauhAaYypP`Z u*`q`*q`azAx'F5_db&bpSF3s͵NSYJEbrV@ΓB!e 8d : LqW,@rB:@0ɏT}sasahYV搹Mny ` %߀P&S41sa`NEWOWRKb2H PRq!RRI%SQio$o¹h^A'70Uc?!0oHD # =hj0>T h]]9  5AU@jZ?F͂T??Fx?SF?P6 jt`  o?tk[>(A Vu aSuo.]&A]  *ԮJU2zGzt?@L#&A@bA#z7 5#1bO(b])j+$j&[ "*(w "?& ?AU$l/%B115 `?CopyrigTt (cu)Q02`09Q0uMI0cG0osA0IfO2@1rC0|1aO0iA0n.Q0 WAl0 G8s2Ueo0eG0v0d0"1E]4MŤ-&3 &7&l>]Uhz811U!J!u6?F+\.>djIIa? >"(EǑ I%K9E 5EU . K ;;5QIWEEEF _KnSA?8@@Bu?FDR> ?Nk"\6`bmMM\j=am {:e P> #? D"? 5-g?j4B7!B{oogr57!ibDAbE9 MuX7_'0U~rN!$a)5 5vIzHD # =hj0>T h]]9  qAU@;?FI5K?@xEL?6ѿP6 t`  bm&Qo?t@@L]WI,#u aiuoL> JU2zGzt;?@L?&A@b}#zs q#1b(b)+$&[9#H",ɺ (wH"?& ?A$ /%9#BV1V15 `?CopyrigTt (cu)02`090uM0c0os}0If2|1r01a0i}0n.0 WAl0 8s2Ue0e0v0d0"O1E]S4MŤ-9#b3 b7&l>]Uhzt1.A!JN!m.%-+0djV1ax߃S(FJ-&?F mO"> ?Nk"L[Majnha_:e P> #? BK=o? n-?j4Bs!B{__Wr5s!D8E6V 6?wj_|_ ^XE.U@z T6?F @' d_mOoT \VyWxRkrVo_bVh*g:n"'UaEjz36>CN>?Bn}]zGvN7V iDAbE9 MX+G'0UJPN!4ae5 MaHD # =hj0>T h]]9  U@jpiH?F^?@$?F]F?P6 t`  !a&ao?t,F=۳=u;u auo>J2zGzwt?@ʐLAw@b#z1b](b!).+I.&[-m(?0& ?A$I0/Q%!!5 `?CopyrigTt (c)02`090M 0c 0os0f21r0@1a0i0n.0 Al[0 8s_2e30e 0vq0dS0T"!E]$M-# %'&l>]Uhz!1!JF% djc~YZ(_N9zOMGE%_EH$SwO"K ңHE ^&_8_WE_OqOHOOi&DAbPE9 MX7'0UUfb>!$a % f1jHD # Uh4>T#]]9 M5AU@jZ?FL|+?@?F({~?P6 JuM` ?u JYt )t7zEA2&L_D?. 
> >tJU2q K?=@MJ&A@-b9(+bG)T+T&J[#"#a?& ?AH?/`)#115 {`?CopyrigTt (c);020G09;0M30c10os+0f92*1r-0f1ai+0n.;0 Al0 18s2eY0e10v0dy0z"!4B#e-?3 7)& b 0K8>U# ]A]A "1!ъ!PzQ <%OYAJEU . K;;\%W%EF($L[.FX5Eai%?F 6?wL;] W-\TB⤲8?F@&SA?F @ ?Nk"L'`bmMW_Ye[ٯ7 VK=F: [jr ? D"? XZvD?ta@Boogr5;!QEx]SiLO MLGXa~WOTZiAle59 XG"'0Uc&N!$05 .HD # =hj0>T h]]9  IAU@K?FOgVb?FY%?Fwᳬ?PJW?>t`  ^Z?t#ə.E @x?r?/{4܂ G _Z+u u$ >  J JpU?43f<!.&P?b3bbfz@1bAe&bq$d 2zGzt.#@LT"&r(~&i$q)(+}'i"~*#81815 `?Co yrigTt (c)o02`09o0Mg0ce0os_0fm2^1r 1am0i_0n.o0 Al0 e8s2e0ee0v0d0"11E]54M-#D3 %D7.&lUh#7 V1#Ai!J# F/?B1B  VRiAAbE9 MX G{W'0U*R2N!I$a {VZUHPD  # #>h0JTpERI MU@u6?Fjc{g&?@%r`5B?Fm_F:Is?HNtQ`  II?t#]  _ii$!`kNu  iu%wj>M JQ T       "@&&JNU2"?H@Qs#N&A@b(bU)++&RNp}#>H1#19 ?#bbbz| R&/5R}#11`?Co0yrig\t (c)02h090M0c0os0f21r0Aa0i0n.0 Al!@ 8s%Be0e0v7@d@21a4t#-}#3 %746lJ#$JUp5 3 M19 1!q(!FnElrkRt# 8$kcD?@숒@)?FP?P"EkU |1\iOT/irvs}i>`_,of :m ,M ?? S&d2? *b?r4B!B oo1gr #5!>%(EQOOOO__FAc1_C_ D4ևS]_o_yRX+{_,__ #R__ (YQoSo+o=osd {oEeooooooFyUy{* hDVW(I X"c7I!pg bf'iMjSE t%XyG|'0UŢŞN!O4i |UGD # hz0TdYYBU@(%)?Fxg^?@0߯H?Fs7Uݹ?P} t` 87?t#_)TeΦu ]ukv  ̝L&UUU&I&3 U!!`?CopyrigPt _(c) 2\W09 M c os f"!r 1a i n}. Al00U (s42e0e uvF0d(0'HS$#-## 'I?k=#ǒ 2j$B##0U'BM3@&@F0Yk} #HA%DKb5DK3DK5DKED$!GiAZG5(|IDFX7yW'&Dŭ6!4YA yVZHD  # =hj4>T!]]>#AU@0߯H?Fz)e??F˞9q?P6 t`z  ?5t# rXѮu)Nu;):O]ע0 &> # " "+"+"+"+" $+"T-J[U7 ??& bA@ 2b4-#2zGzt#+@ Li68!6 ?}; 72U!:B#AA5 5`?CopyrigTt(c)20F@9M2@c.0@os*@f8B)Ar,@eAa8@i*@n. Al@ 0HsBeX@ej0@v@dx@)21"DM-,;?C G& l>]UhzAE!A#!ְ1J`#@{&pz?FUy{?NT,dKj&b&R&M/8T9K-q?F@F?FdG0 >Nk"?YE*]\LThKjfGD&Qhj ?iU3UGe&Y`?@/Jb)h5oWKoWo;èKod}l$ro`nE2K?Fˌ%4G{?F -q? eZR^{|\?6?vodmP}l>MBe 9 N 1iS? 1?grl#51:PO"4ga@Sq(UE^@O|ǐ[`ߡa v2rbA @3ijD&}laqOz o?#WHZ'Qu=*[^N?F]%P?@O&@ c?FаM<n ?` eb0)\(Pfodm}nv}l?V5(?q$Ix[cN#W{ #iA@&QeE9 MX''0UjdN!0 /HD  # =hj0T h]]9 #]AU@}%?FR$,?@6#V?Fc_3֡w?P6 u` ?u#t  3zM%t3ws%.<%H._'B> #PJU2N贁Nw[?@ʐL+&A@wbu](i+bk)%+&p%#4">9"<5!?& ?M#bbb͓z$ p&"/%T%#L1L15 w`?Co yrigTt (c)02`090M{0cy0oKss0f2r1r 1a0is0n.0 Al0 y8s2e0ey0v0d0"E1E(]I4M-%#X3 X7&l*>(UhE j1#$Ac!J`%#@5e$_F??!foA3nBM( V?FJ-R?@HQ??FS!R?P| llHL6VLD5E+v;*AůʙH: 8snP%$? k?Q? \?4_!__WrA5_! R,U3OC@IL?F4~Kz?P)Oy޾ T!]]>#IAU@܋?F2Oa>?@#r`5B?F*ݷ?P6 t`  >B?\oEt# ŖoZu 0Nu;Y:|8>$# JpU>##<?2& ?5bbbz@b#ANi&bu$'h 2N贁N[2$@ LX"D&v(&m$ :;T'm"*B#d1d15 5`?Co yrigTt (c)020090M0c0os0f21r 1a0i0n.0 Al0 8s2e0e0v0d0"]1"a4M-/p3 p72&l>0>Uhh= 1#!eJ`#&FNTM\jh1B,RBM(!JoUy{O_ e@ So4(D@^Tw?F]֝ä ?Nk"?w_KT\T+\_(Kjӷjmplhj -`\TPQE\Uui,St_o u~!ՠXEO9= T?FL?;uQ_Pm"rA @3sjQ1lğ=GoYniAleE9 MX|'w'0Ur^N!M$|A vzHD  # =hj4>T!]]>#IAU@܋?Fj?@2r`5B?F^ݷ?P6 t`  >B?\oEt# 6ΠZu 0Nu;Y~:#&}>0m8>  PJU2N贁Nwk?@ eL&A( 5bM(b[)h+h+$h&#[-M$#?& ?b$Z*'*B#O1O15 5`?CopyrigTt (c)02U0090M~0c|0osv0f2u1rx01a0iv0n}.0 Al0U |8s2e0e|05v0d0"H1P"L4M-/[3 [7&l>0>Uhh= m1#!S!J`#*&FPUy{_?NTMjC1gc RVMʣ(D#@ ![?F"$. ?Nk"W?IKTLTcKjRlhj p=' (*!)i,SO2Kes9T;QEGUFw6qX}__TM7fTZT\Dž}_P^52XOES"rA 0S#Es3iAleE9 MX|$Gxw'0UrIN!4gA xvzHD  # =hj4>T!]]>#!AU@܋?Fc_v?@#r`5B?F*ݷ?P6 t`  >B?\oEt# ap'EZu 0Nu;Y:|$># BJpU>#͙;? & ?5bbbz@}bAA& K&@ 2zGzt?"K@ L0"&N(bM)+TK/8Z*B<1<15 5`?CoyrigTt (c)s02009s0Mk0ci0osc0fq2b1r1aq0ic0n.s0 Al0 i8s2e0ei0v0d0b"5194M-t/H3 H7T &l>(>UhE=3Z1#E!J!FENTMjN@1h(NoUy{OO e@ o)4(D@^Tw?F]֝ä ?Nk"?'_KD3\T+\(Kj?ӷjmlhj P\ DiAlePsI=MXsG'0UzbZ6N!%$0 sFEjHD  # =hj8>T! ]]>#!AU@܋?F*?@#r`5BW?Fw?P6 t`  >B? 
\Et#տ|u l4RQu?]>I$>I# JpU>͙#<?& ?9bbbz@b#AE&bQ$$D 2zG[zt#@ L4"&R(^&I$Q)+]'I"^*Bh{.M#To1o15 9`?CoyrigTtk(c)k2009kM0c0os0f21r1a0i0n.k Al0 8s2e0e0v@d0f"h1l4M-:?{3 %{7&l>(>UhE= `51#I!J49fAn^ R)X f"B!Jui,SG?RSlOS u~!N]8-19= T?FL;uQ ?Nk"?YVTQRXQOnğ/=i n (\ wTiAaePE9 MXLg'0UbZiN!)$0 Lf`jUGD # hz4TYYT#BU@5?F;?@ t2?F`0 ֺ?P} u` ?u#t+/8<&L9`d<t9( .88LU``ttyt m "t"hf;!&&'n"!'%0'm= 'p ~i##0U"K?@&S#a!!*_"`?CopyrigPt (c])0020<0900uM(0c&0os 0If.21r"0[1a.0i 0n.00 WAlv0 &8sz2UeN0e&0v0dn0w"!$J#B-#3 7#^=$6#D66 2$e,^6C% #HA%DK%D8K5D`KEDdGg&}c__'TAA""}(Pa%288BKS`DS\U Siha5 &rX7g'$i6!40 f-jUHPD  # #>h,JTEI MU@jZ?F@s??FX7`/?HmNuQ` o?u# j>tMtQ 1@t PATOQ E O#l|O6,2c J^vNt, "t "QwNbo ?!FVxNo]2УɆX3BUYU>VZ\)_ W거QUTAQ LG4CaCa_* Q_*____ o2f 8CT]?FoYDȡ?@ ~1pc} 9dS0]Yb<=p\/_ RT)4HtvzI!| ?^_AV,uݹ:4Z\o &Ja ʆXBU 9A1p@߄ߝZ\X9 (p\P΀dˆXEBUp2t Z\5q[,=6B҆XaU̓쬏 qXa>YU . [[ ;;p\Ua>YF`rZYoQUaai%?FzP4Z\^ W-p\] xTy8?F.?@@SA?FWk@ ?Nk"_|5`bmMp\PB|y[7 ơ|`?])?p*FWAX|a筃,H7~Z\?bxT?:!G4QڡUeB_T_f_x_DOOkoooZ}u?FU2Ty9=ŏsҟGO"vP_?@mlփ?Ff>> }= R3|Ofѡ|&['aޯ)d3#ux"@4SK 6ASʎy'e>'"Y`r,iMm׹ ߵl%1X~G׷_'0UJ26L50Ov ׶HD: # =h4>TB#]]9 TqAU@1D)?FL"ߛ$??Fٵ #ָ?P6 u` ?u4 t  ]<)t7񿗯)2)LEkL>  JtU2qD ?=@LC&A@-bu(b)+&񆤍[=#L"#a?& ?AH{/)=#@1@15 {`?CopyrigTt (c)w02009w0Mo0cm0osg0fu2f1ri01a; ig0n.w0 Al0 m8s2e0em0v0d0"91=4MB=#e-OL3 L7&l>  U#,|A|A9 $] t^1*=# #p Kw&&J`=#R&F 2'_b0CwvA kRqVM(~"!;ɡ?@<3?F ?P-DT! ;d1?Y=:ZTO㿂f{OaB J4 13? v,? (N_?r5w!:8oJo\g"a@jS(.8R!ZFI nx4__ D_U5U,VS?@g$ ?F~4,ێ~@ 8_H_ZTml`KDfl`'E4q2d8R鿃oo`rmkod&'aE2%_7USr(A@z#y3ae&b{, fZT5sae~t?FG=Va|7>-\氹pAiʈ$%?F(XoZ1a|}%\Iii ^WB^:fvEZT˒AiU?FAx߾Ya|k#ӝY~/Yt(Ae o0AleP59 MX=GL"'0UŢbN!40 O5 |HD: # =h4>TB#]]9 T]AU@R7)?Fy߈C??F"ָ?P6 u` ?u4 t  *5y")t7n)2)LI"+BA B JU2q0 ?=@L/&A@ybm#zc a#-bK{(m {(+I&)#Kڬ !?& ?A"&g$bbu/b /%)#Bo1o15 {`?CopyrigTt (c])020090uM0c0os0If21r01a0i0n.0 WAl0 8s2Ue0e0v@d0"h1l4M!E)#𤲝-DO{3 {7&l> U#,AA9 ] 18! #p %J`)#@~t?F # 0hU-~-/HA RVM(ET"rYD?@J-?F-' 9d 0_Y b%r\_oӫ([MFoFl 77]8>!Zݹ:4\\o?gƭkdQEF5W4?@A=?F[Q`??P-DT! :U^Zs[ #l@*p9lJjV\\l_ I&vpkhAzi!J?F߸TF\\Ͼ\zT khzinA9f@.mY <\\ԕ$ ,S{khAzi׍8r?Fį^?m ? ~_5 ә(Ae 0AleE9 MXlGR'0UuőN!40 ~5 ,@HD: # =h4>TB#]]9 TU@~t?FYaJE??F? ~?P6 u` 7?u t  }A_)t7Ǫ+)2)L^A]>REJ2q?R=@LA@yb #z.-b( (%2+2&L q!?& &?A"#&$bb/b 4/U%  1 15 {`?CopyrigTt (wc)B020N09B0M:0c80o%s20f@211r40m1ua@0i20n.B0_ Al0 88Us2e`0e80v0 d0X"1 4Mi!E-?3 R7&l> 0>U#3 GA9 B)1 #p| U%J`F(jdO C]/S֑K VM(HC-O"K[%q3U5CU?uBТ:^Y=~q\G3UE2OE#rA 0"c37QA?Y\[)xIB_mT^(`d :0A@le59 MXGB'0Ur-N!$KA5 fjHD # =hj0>T h]]9 U@/?Fxpz?@]JQ?Fk],>Uh@ !f$; hp _Q%Jd(jcZ(sEp!$a VZHD # =hj0>T h]]9 U@~t?Fxpz?@?Fx?vÂE?P6 t`  }A_?t#%_ ,u auo>J2zGzwt?@ʐLA@bu+b)%#+#&![Ha$?x& ?A//)񩣑!!5 `?CopyrigTt (c) 0]2`09 0M0]c0os f2R!r 51a0i n. 0 AUlP0 8sT2e(0e0vf0dH0@I"!E]$M-# 'x&l>],>Uh ![$ hp| F%JdjcO(hEq<cBɀODI$xD\AEOOOOD~hEuBТ91Y=~D\GiAbE9 MX|7W'0Ub>!$a VZ_H$;s nXk_$:@@_Fx# ߯VOB Y ~d8մs o@+C o[sG*7of7Ș7TH7P(7.X7%Q8F֛U8FV86Z8\8_8a8|7d8NHf8ؒ7_8I*9xh8s<9Hk8 ]xm8 Xo8!UFD  h> T6D 4U?\.??F BP(?3P<; (^.h1 B ._ .s....UBU G 0`%Document,aUp, r2 SPe, 1!LF"t:$Fil, 3#g, k!^ x%% -G( S?'5 7NWwp\xpcu wf`oo\Drag onut hep adn"y aex.bL&d2ٿ~[?? ].r\5r\.EUGD" # h <^T$(YY#ӌU@\.?@L&d2?@~߻?P} u` ?Mu#3:D:DTDhYD| A A A AAAA&A&5V0&0& 16@@TThh||&&&h&S"4"!X2^7H^7D',6{҂_a?6 ?bbvbz@ t#RCCYCCCCCCX2&UHF%2?BX2]"BW3Fg3RFh3FX2 FU X2 F X2 FU X2 F X2 F5 X2F F"BC\"CS3F"CS=F2CSGF2CSQF/2CS[FC2CSeF2tsU2URM3@S[CT"bS$bZCfC f|d78JSB<D>`ÿ@{q`?@bK?df2uL -1bu .1ocr4#b vs"xh4C"M7 qx24S"_|um2u`u7`"0`au`rK=pg2 3 @D:'0A2Er:+Jw"+`@rApdBS(pԁ .r`?CopyrigPt (wc) 20U9 McoP'of r6ua in. U DlQ sUUePevgdI1`Vs_0RXYIcPm!#5378eH^0ӕYcU5#.U#Y.3#.E#.u#.=##.#.|#!.#!.#.pfmcaȹW!T4!4!28Toԁ arQ(-DT! @b0Q `aqqbvab? 
{ q¬rrh@@($ &  qàq#@tq3t)'`Rg:sB Tgx Er@9'/'Tw\M8^6uk u  bu\ %qfŪ0`0#`abukv33u &rg𔫿Ͽ`28x1,@9"HD: &# ;ih ,>T!  9 U@D&d2?@KJ}0@@sbA*%BAG W$?n& ?A$#"t9,(tϺ*r(, [ # ͐(tl-t?t - t>?t- th? t}? tݒ? t?t :hT!<M#h $A$A5 s`?CopyrigTt (c)[@20g@9[@MS@cQ@oKsK@fYBJArM@AaY@iK@n.[@ Al@ QHsBey@eQ@vZ@d@?"l>U <>Uh$@AA BAJZ?-ku  ?V|=R"Q jRVMB:]GZpS ,TvRU5UC QkY[zZ%=| hUnd<+o{Yh<gttgYCNboio&1\oBٽB>PhQYwk_>g"U2Q_cU#rA $s?"HD: &# ;ih ,>T!  9 U@ʛ߹0?@nIw?@DI ?@MԿ?P u` ?u4t  !t/ ms!*bB!~DF=nm>J}02@@}%bAUB A?f. ?s,zb/b@'݉ 6 ,) ޷+ 3"?-#ϿS㥛!z /(, ) .":'`hT!<M#hT 115 u`?CopyrigTt (c)020@90M0c.0os0f21r0Aa0i0n.0 Al:@ 8s>Be@ej0vP@d2@?"Ul>8>UhpApA 1 0LZW|?U@7>0XB ,V MB4GRS_FU5:W]dT!  9 U@xj89?@F?@oU?@T?P u` 7?ut  K~D!t/0!*ҹ_!!DS_ Vn[>J}0@L@%_bAU)B AO9tG ?f. ?Y,zbb@' aY + :pΈ?'!x?z!(, ) !b#/h<M#h 115 u`?CopyrigTt (c)0W20090M0]c0os0f2R1r0 Aa0i0n.0 AUl&@ 8s*Be0e0v<@d@?"l>4>Uh\A\A 1 LZX9v?;0 0af҉A BVMB:66C JO DB.Uk5`AOWI[n_/T^9_WQ.U>U?jtOJoBdA D.UE2OErAg D4c?"HD: &# ;ih ,>T!  9 U@fB?@`mH?@i~jJ}02@@}%bAUB"G ?f. ? ,Cb /(,n  # :'hT!<MQ#h 71715 s`?CopyrigTt (c)n020z09n0Mf0cd0os^0fl2]1r`01al0i^0n.n0 Al0 d8s2e0ed0v0d0%?"l>]UhzŒT2JTyF mBFMB:jt?lK{@C vTC E%4TMlO~IADE2TO;rAg ${S?"HD: &# ;ih ,>T!  9 U@D&d2?@b1X΁?@K~?@;d;O ?P u` ?u4t  'V3X!t/XQ_x!*!D?7n  >J2U?R<@Ls,A@Tb+&tt0/t 2- tBy) ?b++'񠀑h!<M]#hR1R15 s`?Cop rigTt (c)020090M0c0osy0f2x1r{01a0iy0n.0 Al0 8s2e0e0v0d0U{"l> <>Uh$AA* p1JZ-ku? _V|=PA BFMB:]GZC ,DBE5E@C QI[J%=| E@^d~XLAIJgOlo.WPE2OE#rA $s{"HD: &# ;ih ,>T!  9 U@^?@K?@ZRJ}e0@@%bAJBAk?f. ?Y,zbb@' 6f ) + w/?-#S!zs+/&a&!+ Pkw?; d;O?8(, ) N" /=$h<MQ#h /A/A5 u`?CopyrigTt (c)f@20r@9f@M^@c\@osV@fdBUArX@Aad@iV@n.f@ Al@ \HsBe@e\@v@d@?"l>UhCAAMA 0LTuV  iRVMB4`_rQw_?"Q5^_zYQU@Uor_:k2P_t;rA $c?"HD: &# ;ih ,>T!  9 U@PaX?@f)?@MbX?@TaWk~n?P u` ?u4t  N=!t/ӯw~!*Dʮ!D#M8  >RJ}e0@@%bAJBAk?f. ?Y,zbb@' 6f ) + w/?-#S!zs+/&a&!+ Pkw?; d;O8,(\, ) ":'hb<M#hT /A/A5 u`?CopyrigTt (c)f@20r@9f@M^@c.\@osV@fdBUArX@Aad@iV@n.f@ Al@ \HsBe@ej\@v@d@?"l>UhCAAMAd 0LTcuV  iR)VMB4`_rQw_?"Q5^_zYQUUor_:k2P_;rA $c?"HD: &# ;ih ,>T!  9 U@PaX?@ w4?@MbX?@DkuL`~n?P u` ?u4t  N=!t/^~!*Dʮ!Duϧ{z  >RJ}e0@@%bAJBAk?f. ?Y,zbb@' 6) + w/?-#oS!zrs+/Ⱦ&a&!+ Pkw?; d;O?8(\, ) "' /=$h<M#h /A/A5 u`?CopyrigTt (c)f@W20r@9f@M^@]c\@osV@fdBRUArX@Aad@iV@n.f@ AUl@ \HsBe@e\@v@d@?"l>UhCAA`MA 0LTƌLuV  iRSVMB4`_rQw_?"Q5^_zYQUU or_:k2P_;:rA $c?"HD: &# ;ih ,>T!  9 U@PaX?@b`;?@MbX?@TaWk~n?P u` ?u4t  N=!t/ӯw~!*Dʮ!D%h >RJ}e0@@%bAJBAk?f. ?Y,zbb@' 6f ) + w/?-#S!zs+/&a&!+ Pkw?; d;O?8,r(, )] ":'h<MQ#h /A/A5 u`?CopyrigTt (c)f@20r@9f@M^@c\@osV@fdBUArX@Aad@iV@n.f@ Al@ \HsBe@e\@v@d@?"l>UhCAAMA 0LTuV  iRVMB4`_rQw_?"Q5^_zYQU@Uor_:k2P_t;rA $c?"HD: &# ;ih ,>T!  9 U@PaX?@iMbX?@TaWk~n?P u` 7?ut  N=!t/Ӕw~!*Dʮ!DNEr >TJ}0@@bAUBA_k?f. ?Y,zbb@t' 6 ) + w/?-#oS!zrs+/Ⱦ&a&!+ Pkw?; d;O?8(\, ) "' /=$h<M#h /A/A5 u`?CopyrigTt (c)f@W20r@9f@M^@]c\@osV@fdBRUArX@Aad@iV@n.f@ AUl@ \HsBe@e\@v@d@?"l>UhCAA`MA 0LTƌuV  iR(=RRMB4`_rQw_?"Q5^_zYQUUor_:k2P_;rA $c?"HD: &# ;ih ,>T!  9 U@PaX?@Cz?@MbX?@aWk~n?P u` ?u4t  N=!t/ӯw~!*Dʮ!D`q1 ARJ}e0@@%bAJBAk?f. ?Y,zbb@' 6f ) + w/?-#S!zs+/&a&!+ Pkw?; d;O?8,r(, )] ":'h<MQ#h /A/A5 u`?CopyrigTt (c)f@20r@9f@M^@c\@osV@fdBUArX@Aad@iV@n.f@ Al@ \HsBe@e\@v@d@?"l>UhCAAMA 0LTIuViR  iRSVMB4`_rQw_?"Q5^_zYQUU or_:k2P_;:rA $c?"HD: &# ;ih ,>T!  9 U@PaX?@h"o?@MbX?@DkuL`~n?P u` ?u4t  N=!t/^~!*Dʮ!D >RJ}e0@@bAJBAk?f. ?Y.zbb@' 6f ) + w/?-#S!zs+/&a&!+ Pkw?; d;O?8(, ) N" /=$h<MQ#h /A/A5 u`?CopyrigTt (c)f@20r@9f@M^@c\@osV@fdBUArX@Aad@iV@n.f@ Al@ \HsBe@e\@v@d@?"l>UhCAAMA 0LTuV  iRVMB4`_rQw_?"Q5^_zYQU@Uor_:k2P_t;rA $c?",H?_bs A/}뿟@JZqՆFMQ3## ߄rB} S ]Dalմ :@+Ck<:aGg8vkPH8v(8W#A8&08?)8],59/*92)ȓ95$9998<~h9b?H9B+UFD  h(^TYYBBUA@ ?I?P?  
j FXj$| ȅH?? B@L&ɯd2?-)(?\.;(. sUa!_&/J ( G 0B`@iMac,$intosP,Ap 0le c0m 0u e+r m0n !#1n0tw02k eq0i 0mT0!,Pa"0d<0]1e!i1^| ( 3P B'?/ ~σ(/('v2 ?;%4I^b /pcv  .ԳDrag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aab?ɿt@ ?03%?贁NkHD # =h4>TB]]9 uAU@?%?@jOT\wQ?MΊFA9rr$r&A -34#u `7u ub A V >E& "* MJz:@q&" " ""@)"",Y#LM=b1+$'")"$"$27Y#EM 39 45#^Y#2q0l3SF'Y$?@J3N G>uLA-l5!HBuJAXRBr.#=wCbBܟ0?Cuԟ0I su `VB(#X3&'"2ql>!Uh@r5Y#1 a]WRa_SfPJA\'!2(1\ 4;?P,2Fu`bu@W(&PV$5K:ci3?+3__R8bVf5UVUmgWQheocrATM"AaAa$2f2aa2$(t 5tBtEJb.|wb_z9ի tBi8gvF$t| '"9P(Aqt% wuEwprp{Ϗ'9" nP=3EFre$w'2$9͟\oyЏRsRtruP!s1e!`Hih/P",>w=rr*˶ m4EWlqqPӏMū >sSoeei`%/;@%;LTY`2_3_4_U5O2c3c4cĉ5ZFf=G,[\F-[fpF.[F/[fF0[F1[fF2[F3[VF-2<UZZnnUUUU&&U"&"&6&6&UJ&J&^&^&Ur&r&&&S&&&&&&&&&&&66&6&6:6:6N6N6b6b6v6v6666666666666FFFF**F*F>FHOZHU\FpFpFFUFFFFUFFFFFFFR)!#FQ_? ?҈#lPYPPPPPPP!P1&PJ!P^!Pr!P!PP!PdP!P؂&P1P&1P:1PN1PhPv1P1P1P1P1P1P1PAPAP*AP>AP\APpAPAPAPAPAPAPAPGDPGR"`[R"j[RnS"tR"~R"R"R"R"b"Ė#b"Ζ7b"ؖKb"_b"sb^d^!^!^(2,2/2,A\ApA\AAAAHA{ZU@JT^_&&2c3&2c3&Bc3&GBc3 6yBc36Bc316Bc3E6Bc3Y2ibYCYCJ YC^YC&YC&YC&YC& YC 6ѢYC6YC16YCE6 YCxOrb&bxbsS;6{<F\.?@>Pÿ@qP?@VR?TV?2uL`,@-"Ru p`\@ZJrRf%c!h-" bqn?SuplNiE2u`u[`Fp@auL`bxΆΆ(؂g苤^q_US! >!a{4_b:pu{`:A~PHp\Ov΁Qe7`ESΆl.sPb ΁L'`q q w ;3`Vis_1RXY@chm#589{4*-b`?CopWyrg tU()2&@0,W McEo'ofM>rAzUaMi?n@ WAl EsUeeEvdBBgrRTUs`Hl"et6BTl\桇ub~aaLt~aLđΑؑ (2F@HD # h0>Th]]9 MP# JUF\.?F?Fb-2;?P JtM`  7?t#_B-K~LQGJu duo7]A$>aRAMJob@K+ tQJA$"a?& ?Aa4 ۢ # b)ta*"t =td=!%(U?tB? tс? tݖ? tޫ? t? t?ta.#A#A`?CopyrigTt (c)Z@2`09Z@MR@c.P@osJ@fXBIArL@AaX@iJ@n.Z@ l@ PHsBex@eP@vZ@d@m"Bh<#jl& @K @?UhLBQBQ] 1 !7AA#!V!@}ac RdjaE,JU'5l[Cdk%(gEU˖7s?FXE]ʚdkddѿS&iEn?Dǫoo` j!'hUFBG\k?*J`*e?~7CWi <I'hQYP ?F_}tk)vƫs'hQpsf l'hQYfyP\l'h1Y΋Q?FR&1\kQY-]?F2(Gf/\Rk> u'hQEW?7uL"las \.'h!YJ' ?F^3~\Nk?:P'hQY|y%.?FSo\Wkŏm'hRXD7Ժ?Fv[Fios*e7U l_9;\"kyޙ"M'hQYǬ! om?q:z'hUU3_Ő{ m!j,H0d ǕHD: )&# T=h0>Th]]9 Q BUFpɷL?FE@XV?F&1?Fbi0 ѻ?P Bt`  S0?tFp֯hݿQVBu auo7>JASTfw? ?Az@bbb! ( 6 )  '+ ߈?/I-Z#,z AF u +%/,8)E!'+ k(??r+oF(a4 8) E"#{ #BstG8tU: Ba L>2?!88;D;68?5;D; tj>8LALA1 `?CopyrigTt (c)@2`09@M{@c.y@oss@fBrAru@Aa@is@n.@ 0l@ yHsBe@ey@vZ@d@v2h<M#l#@ M@?4>UhjA6Q>1| L`k$k;cdj]J}vMjR]M 6)fMB@prwp\ k_稇avCe=ESe??F9Y? lgESeF:h%ykod iSQޗh~Kjo*` Oc;Ꞗi2_U3rA/04sCHD: )&# T=h0>Th]]9 Q BUFfFY?F,"+?FYU/?Fʯ8ϻ?P Bt`  I$lG(?t=`j>_PS ,Bu aquo7>KJAUVJp?X ?Awz@bbb! ( "f Փ'+ ' qq?z A  +/.&68))E! '+ go?W-sU!n*a4 8) "u#{ u#D!st?8tM: Bav L62?!08;<;60?5;<; tb>{8DADA1 `?CopyrigTt (c){@2`09{@Ms@c.q@osk@fyBjArm@Aay@ik@n.{@ 0l@ qHsBe@eq@vZ@d@n2h<M#l#@ M@?]UhzbA.QR61b L` KdjMaQ  6fM{"@FT4\[3K_,7e5EGe9?Nf[ N_qd7eEGeXәj'SW\m jOf܋h2_U3rA'04msCHD: )&# T=h0>Th]]9 Q BUF` l?Fċ?FC7Ժ?FBGԻ?P Bt`  I4z?t? D ǯ^\+Bu Fduo7 >JTA0f88v?WYϳAjLF@z bbb!' 6Y % )! ! ,+ <?' o?z #a47% =) J"#K#({% (+ (G t#BXY'?A> ?A "+(/+/=/O/a!y:5r{l/@ sU10( /???? "0aQQ'@`?CopyrigTt (c)FP2`09FPM>PcTh]]9 PB IAUF++g?F2?F {J?F%?P t`  r^?t鯕 { Li@Zu Mauo8 >    PEJƱZVJp!?&. ?Az@b_bbe!Y(݉e 6 ,p )l l w+ g?Z-#U!׷!ze Ae [+j/|//-g#+ s"?(a490 9_b!#{t0#tN8tϻ: Ua BL2$18KE;F?VEK]; t>B 4AA0`?CopyrigTt$(c)$2`09$M@c@o%s@fBAr@Qua@i@n.$U @l/P Hs3RUePe@vEPd'P 2h<M#l# M/P?]UhzPAQ1 L@3F l?F OqP dj|aW{,8cUFwW?F0ev"?F%Ti ?FUѣoO 0Yig߻ull4_aj??N~f0co+ fEeC.kn?FtdFِ?Fd?F?#l/Оol}dQ3ll BXl귺m|bBhaUeAvؼg`r4g?FÇ?F6|vflPI3qoikeðl1` |QT?heFnFc^?Fq`h leowkyyftGg˂a,8b1;MHD: )&# ;h0>Th]]9 PB IAUFd    PEJƱZVJp!?&. 
?Az@b_bbe!Y(݉e 6 ,p )l l w+ g?Z-#U!׷!ze Ae [+j/|//-g#+ s"?(a40 9 .*2!#{0:#t8'tϻ:Ua L2$18K;F?VEK.; t>B4AA0`?CopyrigTt (c)@2`09@M@c@oKs@fBAr@Qa@i@n.@ @l/P Hs3RePe@vEPd'P2hb<M#l1# M/P?]UhzAQ1 L@3Fy*,[ܶ?F5]P dj}ah8cUFąiBQ?Fۨ.?Fao#ѻoO 0YiM+Fll_Zaj?}0%|f0c˭ fEem<_څ?Fb?F+}?F?oolGGTll .Xlhe{|kww@haUec(x,`H^ǀ?Fw?F234 flux1qoiFA(]l |~Th]]9 PB IAUFd    PEJƱZVJp!?&. ?Az@b_bbe!Y(݉e 6 ,p )l l w+ g?Z-#U!׷!ze Ae [+j/|//-g#+ s"?(a40 9 .*2!#{0:#t8'tϻ:Ua L2$18K;F?VEK.; t>B4AA0`?CopyrigTt (c)@2`09@M@c@oKs@fBAr@Qa@i@n.@ @l/P Hs3RePe@vEPd'P2hb<M#l1# M/P?]UhzAQ1 L@3Fy*,[ܶ?F5]P dj}ah8cUFąiBQ?Fۨ.?Fao#ѻoO 0YiM+Fll_Zaj?}0%|f0c˭ fEem<_څ?Fb?F+}?F?oolGGTll .Xlhe{|kww@haUec(x,`H^ǀ?Fw?F>34 flux1qoiFA(]l |~Th]]9 PB IAUF;L?]?F]hk?FXN׶?F`V?P t`  ??tQ0I9xE "˪Zu Mauo8 >    PEJڍU[\/(?&. ?AbbY!z@Ae bbCt c(a4b  b(h#{ h#΍t(tI*#0f88vb.#i L5<1v(&$7/(D%"6/5;0( t.*B#11`?CopyrigTt] c).@2`W09.@M&@c$@os@f,BAr @YAa,@i@n}..@ Alt@U $HsxBeL@e$@v@dl@"h<M#l# Mt@?]UhzAA!J@#Fy*,[ܦ?FVTP dj}תa4(SUFąi?F6^G?F/?F?oO 0YM+Fӱ\*aj,.}a$?# XEUYu?F #?Ft Ǜjm?FRtolTBJ\LMYBltavXlp@XEUg6PK|$yp?F`1А?FAD3 o~l/AD0qYJg ַBl*$ԞXlFB:XU^Fi?FfaEHRl_[Ei1GAf`dSq+a,(bvHD: )&# ;h0>Th]]9 PB IAUF;L?]?Fԁ_IRE?FXN׶?FgaV?P t`  ??tnG9xE"˪Zu Mauo8>    PEJڍU[\/(?&. ?AbbY!z@Ae bbCt c(a4b  # b(h#{ h#΍t(tI*#0f88vb.#i L1<1v(&$7/(D%"6/5;0( t.*B#11`?CopyrigTt] c).@2`W09.@M&@c$@os@f,BAr @YAa,@i@n}..@ Alt@U $HsxBeL@e$@v@dl@"h<M#l# Mt@?]UhzAA!J@#Fy*,[ܦ?FVTP dj}תa4(SUFąi?F^G?F/?F?oO 0YM+Fӱ\*aj ,.}a?# XEUYu?F#?Fm?F*m{olTBJ\MYBlU{XljDc@XEUg6PK|$yp?F1А?FD3 o~l/AD0qYf ַBlq$ԞXl.B:XU^Fk?FaEHRl_[Ei1GAf`dSq+a,(bvHD: )&# T=h0>Th]]9 Q BUFa)DbJA]^'? ?Az@b ( 6f  )ϓb= + _? ' 88o?z A s +=  /)/;/M$[+["-j!Q U1[!x(ar4 ) "'}# }#BtΜQ8t_:Ba~ LH2!B8;N;6B?5;.N; tt>8VAVA `?CopyrigTt (c)@2`09@M@c@oKs}@fB|Ar@Aa@i}@n.@ 0l@ HsBe@e@v@d@2hb<M#l1# M@?4>UhuQuQPtA@Qj LZF>7ܾ 7j bL\2] 6R1bMB: ZWdc Ɵmdb`GeGEWe2T|`I\aibvIGeU@OCJQTh]]9 Q BUFq \?FWY۷?F \?F0;?P Bt`  4fտݛ?t? 2N (+Xۙʿ-srVBu acuo77AJA]^'? ?Az@b ( 6f  )ϓb= + _? ' 88o?z A s +=  /)/;/M$[+["-j!Q U1[!x(ar4 )] "'}#{t }#BtNQ8t_:Ba~ PLH2!B8;N;Q6B?5;N; tt>8VAVA `?CopyrigTt (c)@2`09@M@c@o%s}@fB|Ar@Aua@i}@n.@U 0l@ HsBUe@e@v@d@ 2h<M#l# MT@?4>UhuQuQtA(@QH1j LZF>7gܾ 7j bL\G6] 6S-fMB: ZWdc mdb`GeGEWe2T|`I\ai?bvIGeUWeOC9ri_xu j aGeWe_ooh2K]u3rA 4s2HD: )&# T=h0>Th]]9 Q BUF&?FUa|AY?F \?F0;?P Bt`  \vJA]^'? ?Az@b ( 6f  )ϓb= + _? ' 88o?z A s +=  /)/;/M$[+["-j!fQ U1x(a4 ) "'}#{ }#BtΜQ8t_:Ba~ LH2!B8;N;6B?5;.N; tt>8VAVA `?CopyrigTt (c)@2`09@M@c@oKs}@fB|Ar@Aa@i}@n.@ 0l@ HsBe@e@v@d@2hb<M#l1# M@?4>UhuQuQPtA@QH1j LZFr7ܾ 7j bL/\]M 6-fMB: ZWdc ?mdb`GeGEWeA?2T|`I\aiavIGeUWeOC9ri_xu j aGe@We_ooh2Kt]u3rA 4s2HD: )&# T=h0>Th]]9 Q BUFR ?F(Zп?F&?F0;?P Bt`  U'?t)S Pʿ-srVBu auo7>JA]^'? ?Az@b ( 6f  )ϓb= + _? ' 88o?z A s +=  /)/;/M$[+["-j!Q U1[!x(ar4 )] "'}#{t }#BtNQ8t_:Ba~ PLH2!B8;N;Q6B?5;N; tt>8VAVA `?CopyrigTt (c)@2`09@M@c@o%s}@fB|Ar@Aua@i}@n.@U 0l@ HsBUe@e@v@d@ 2h<M#l# MT@?4>UhuQuQtA(@QH1j LZF>7gܾ 7j bL\] 6-fMB:D ZWdc 2Nodb`GeGEWe2T|`I\aibvIGeUWe-B9riu j aGeWe _ooh2K]u:3rA 4s2HD: )&# ;h0>Th]]9 PB IAUF(*?F?+8?F-?FP덋?P t`  g ?tr㯰4\ls,Zu Mauo8>  ]  PEJ]_DRb(?&. ?Az@bY(ne 6 h )b k+ ?Z'e 88?z6e Ae [+ h g/y//$# L:5, /sUK1*ap4 9] 2(#{t #tN8tϥ: Ua BL218KE;6?@EK]; tͺ>B 4AA0`?CopyrwigTt c)@2`09@M@c@o%s@fBAr@Aua@i@n.@U 0lP HsRUe@e@v/PdP 2h<M#l# MTP? <>Uh$QQ]AQ1 LZ3?ZkP 7jbb;'ߨL7a 6{fM8cUF*eE(aFr+}G`_=m?F7T҄xo`Oi^jva~t0sK`+t!kĀ9t!KU:M} -'sLl}]|]r9xLuf.Lp?F4gHvodObdcZgt_=,V~#|..7 ;vLuN?Fbݲt{?FNch|[m?&1 zoUh+#|?n1%*!ayTh]]9 PB IAUF/vt?FxN7?F-?FP덋?P t`  q?t} ܯ` \ls,Zu Mauo8>  ]  PEJ]_DRb(?&. 
?Az@bY(ne 6 h )b k+ ?Z'e 88?z6e Ae [+ h g/y//$# L:5, /sUK1*ar4 9] 2(#{t #tN8tϥ: Ua BL218KE;6?@EK]; tͺ>B 4AA0`?CopyrwigTt c)@2`09@M@c@o%s@fBAr@Aua@i@n.@U 0lP HsRUe@e@v/PdP 2h<M#l# MTP? <>Uh$QQ]AQ1 LZ3?ZkP 7jbb;'ߨL7a 6{fM8(a?*F?F}ޑ`oO 0dTc I\rdcd bjӯokga>?>eEef?+}G?F_=m?FSxo`OivddlvatG0sK`%tkĀѡ3t!KU:G} -'sLl}]?|]r3xn.Lp?F4gHpooZgt_=,V|..7 5veN?Fbݲt{?FNcb|[m&1 zoUh+|n1%$!ay Th]]9 PB IAUF & W?Fq<9?F-?FP덋?P t`  q -?tv3m\ls,Zu Mauo8>  ]  PEJ]_DRb(?&. ?Az@bY(ne 6 h )b k+ ?Z'e 88?z6e Ae [+ h g/y//$# L:5, /sUK1*ar4 9] 2(#{8 #t8'tϥ:Ua L218K;6?@EK.; tͺ>B4AA0`?CopyrigTt c)@2`09@M@c@os@fBAr@Aa@i@n.@ 0lP HsRe@e@v/PdP2hX<M#l # MP? <>Uh$@QQ]AAQ1 LZ3˟ZkP 7]jbb;'ߨ7a 6{fM8cUF*r|p蠼F?F}`oO 0dTc ?I\addǼbjokgaz>eE(aFr+}G?FC_=m?FS҄xo`Oi^jvϾat0sK`+tkĀ9t!KU:M} -'sLl}]|]r9xLuf.Lp?F4gHvodObdcZgt_=,V#|..7 ;vLuN?Fbݲt{?FNch|[m&1 zoUh+#|n1%*!ayTh]]9 PB IAUFt5?FYz?F-?FP덋?P t`  @"?tgQ,n\ls,Zu Mauo8dAd  ]  PEJ]_DRb(?&. ?Az@bY(ne 6 h )b k+ ?Z'e 88?z6e Ae [+ h g/y//$# L:5, /sUK1*ar4 9] 2(#{t #tN8tϥ: Ua BL218KE;6?@EK]; tͺ>B 4AA0`?CopyrwigTt c)@2`09@M@c@o%s@fBAr@Aua@i@n.@U 0lP HsRUe@e@v/PdP 2h<M#l# MTP? <>Uh$QQ]AQ1 LZ3?ZkP 7jbb;'ߨL7da 6{fM8(a?#FT ?F1uRؑ`oO 0j?rdcd bj]ea }eEef?+}G?F~xW m?Fxo`Oivddlva~ttqE`%t:܃3t!KU:G} -'sLl}]|]r3xn2M#p?FcGpooZgt9L|8 5veN?Fl"Ry?F-#Fkb|[m&1 z'ǵ(|F$0$!ayTh]]9 Q BUFa)DbXX +Xۙ ^MVBu auo7>JTA0f88v?WYϳAjLv@z bbb!' 6Y % )! ! ,+ <?' o?z #a47% @& J"#K#({% (+ (G t#Beı?A>L ?A"&"&$/*8%,"t:"t>  A A6 `?CopyrigTt _(c)D@2`W09D@M<@c:@os4@fBB3Ar6@oAaB@i4@n}.D@ Al@U :HsBeb@e:@v@d@2h/1<M#l#@7? M@?(>Uh7E P+AA!{ L:FQ덋<6j% bRzg9<2 "\ ZWdS ?mT W5U5k0YaY?vq 2HD: )&# T=h0>Th]]9 Q BUFq \?F8˻?F \?FB2T|`I?P Bt`  4fտݛ?t.QW+Xۙ]MVBu acuo7hAJTA0f88v?WYϳAjLv@z bbb!' 6Y % )! ! ,+ <?' o?z #a47% =) J"#K#({% (+ (G t#Beı?A>L ?A"&"&$/*8%,"t:"t>  A A6 `?CopyrigTt _(c)D@2`W09D@M<@c:@os4@fBB3Ar6@oAaB@i4@n}.D@ Al@U :HsBeb@e:@v@d@2h/1<M#l#@7? M@?(>Uh7E P+AA!{ L:FP덋<6j% bRzg9<2 "\ ZWdS ?mT W5U4k0YaY?vq 2HD: )&# T=h0>Th]]9 Q BUF&?F 77 \?FB2T|`I?P Bt`  \vJTA0f88v?WYALv@z bbb!'n 6 % )! ! ,+ <?' o?z #a47% =) J""#K#{% (+ ( tQ#Bı?A> ?A"&"$&$/*%,"t:"t> A A6 `?CopyrigTt (c)D@2`09D@M<@c:@o%s4@fBB3Ar6@oAuaB@i4@n.D@_ Al@ :HUsBeb@e:@v@-d@2`h/1<M#l1#7? PM@?(>Uh7E +AA!{ L:F|po蠼6j% bT  ü2 "\ ZWdS mT W5U4k0YaYvq 2HD: )&# T=h0>Th]]9 Q BUFR ?FB?F&?FB2T|`I?P Bt`  U'?tǡ P]MVBu acuo7AJTA0f88v?WYϳAjLv@z bbb!' 6Y % )! ! ,+ <?' o?z #a47% =) J"#K#({% (+ (G t#Beı?A>L ?A"&"&$/*8%,"t:"t>  A A6 `?CopyrigTt _(c)D@2`W09D@M<@c:@os4@fBB3Ar6@oAaB@i4@n}.D@ Al@U :HsBeb@e:@v@d@2h/1<M#l#@7? M@?(>Uh7E P+AA!{ L:FP덋<6j% bRzg9<2 "\D ZWdS ?2NoT W5U4k0YaY?vq 2UHT5D B &# T>h0JTtiiM ENUF =?F>?FumK?F_u?P WNtQ` Kl7?tUo>TX*F(бoNuj mu){E0f88wv?AVQ Ӎ!@#b#m@b  0#  #W(3 (o+&T" #t#NĽ/?. ?M"&$a%$$,/>*L%i,T"tΞ*T"tR.R11#`?Copyrig`t (c)02l090M0c0os0f21r01a0i0n.0 Al@ 8sBe0e0v0@d@"Vt!X#xB #/ @T,UtE1UA-!@:2T|`I pvm?S׮(%U;jX[5G@Y9"S\FkV5%U#ɯ7ܾ=\WQS\O?XbiTQY%_7_I_[_DQOOOO__Ԫ?FH փBl[{Xl?G漫i_)fV u``9`-YBlO N}0,*x]@$?F>BlǏXlwg%ooo EU}ooooooM5#?F߳Y`;Blr*XlYmC*)fWBǽ?F#nZBl>^̔ԍ?f Q́)f?aD%?F qwJBlIY:6XlIb ʏ܏EABTfxF) \Ck?-q_Wa1Cv(Ԅ~Bl!H,BXQnB)fF01DBoT[inJ\ۙgyAHD: )&# T=h0>Th]]9 Q BUFKpJAS`? ?Az@bbb! ( 6 )  '+ ߈?/I-Z#mg!z A " ././@/R+,g/y'sU1,ar4 )] "#{t #BtNg8tu:Ba PL^2!X8;d;Q6X?E;d; t͊>8lAlA `?CopyrigTt (c)@2`09@M@c@o%s@fBAr@Aua@i@n.@U 0l@ HsBUe@e@v@d@ 2h<M#l# MT@?4>UhQQA(VQ^1| LZFg 7j b2ɶGOR] 6SCfMB:1ۑdc $I$I܍+d 2aa]EmeL.la'iL?.]eU@5/LN?FOOka,dHޓgmeoo(oh2gyu3rA 34s2HD: )&# T=h0>Th]]9 Q BUFܛ?FYQt?F#P3?F f7?P Bt`  _?t+%`' VBu auo7>JAS`? ?Az@bbb! 
( 6 )  '+ ߈?/I-Z#mg!z A " ././@/R+,g/y'sU1,ar4 )] "#{t #BtNg8tu:Ba PL^2!X8;d;Q6X?E;d; t͊>8lAlA `?CopyrigTt (c)@2`09@M@c@o%s@fBAr@Aua@i@n.@U 0l@ HsBUe@e@v@d@ 2h<M#l# MT@?4>UhQQA(VQ^1| LZFg 7j b2ɶO] 6CfMB:1ۑdc $I$I܍+d 2aa]Eme^L.la'i.]eU@-7/LN?F$_fq.Th]]9 Q BUFn?F_S?Fi$?Fo2ͯS3»?P Bt`  -1{?t8b ?VBu auo7>JAUVJp? ?Awz@bbb! ( " '+ ' qq?z A 9 +/.&68))E! '+ g?W-sU"!n*a4 8) "u#{ u#D!t?8tM:Bav L62?!08;<;60?5u;<; tb>){8DADA1 `?CopyrigTt (c){@]2`09{@Ms@]cq@osk@fyBRjArm@Aay@ik@Wn.{@ 0l@U qHsBe@eq@v@d@n2h<M#l#A M@? Uh,cQcQ] '1bA.Q61b L`?g{Op dj? |m cuۡ'1 6!1fM{":F7Q덋<6d kS=ѣ<n2{!5E[nc^ vogaKeE[eV_@cbc g@]N4o r\Dž}iNuPna?FTPf|]tElF<2hQp_m痪 |"*hQiyl:qid5hQJ_noDhU23rAg 4n2HD: )&# T=h0>Th]]9 Q BUFD7n?FWw?Fi$?Fo2ͯS3»?P Bt`  ;#p!5?tR?9E ?VBu auo7>JAUVJp? ?Awz@bbb! ( " '+ ' qq?z A 9 +/.&68))E! '+ g?W-sU"!n*a4 8) "u#{ u#D!t?8tM:Bav L62?!08;<;60?5u;<; tb>){8DADA1 `?CopyrigTt (c){@]2`09{@Ms@]cq@osk@fyBRjArm@Aay@ik@Wn.{@ 0l@U qHsBe@eq@v@d@n2h<M#l#A M@? Uh,cQcQ] '1bA.Q61b L`?g{Op dj? |m cuۡ'1 6!1fM{":F7Q덋<6d kS=ѣ<n2{!5E[nc^ vogaKeE[eV_@cbc g@]N4o r\Dž}iNuPna?FTPf|]tElF<2hQp_m痪 |"*hQiyl:qid5hQJ_noDhU23rAg 4n2HD: )&# T=h0>Th]]9 Q BUF}h!?F0,?FTZLP?F Qû?P Bt`   Mǿ?t `,1fǰvz;ZBu ,dcuo7oAJAa^!? ?Az@bbb! ( 6 )  '+ g~X?I-Z#!g!z UA " ,/,/>/P-[+g/y'U1Ȋ,a4 u) "!#{ #Bt9g8tu:Ba BL^2!X8;Ed;6X?E;]d; t͊> 8lAlA `?CopyrigTt _(c)@2`W09@M@c@os@fBArԕ@Aa@i@nU.@ 0l@ HUsBe@e@v@-d@2h<M#bl#P M@?8>UhoQQ]AVQ^1| L@%fq.?F?c^ dj ٨la6ztB:'3S3 7d /kyԷ&Dh]EWe?g~kp+ɉ 5d}bGeUWeFTol0juMDhn1ۑ0j`Dhn.R덋<o+o"Th]]9 Q BUF}?Fb-P?FTZLP?F Qû?P Bt`  R딿i?tT`,1fǰvz;ZVBu auo7!>JAa?^? ?Az@bbb! ( 6 )  '+ g~X?I-Z#!g!z UA " ,/,/>/P-[+g/y'U1Ȋ,a4 u) "!#{ #Bt9g8tu:Ba BL^2!X8;Ed;6X?E;]d; t͊> 8lAlA `?CopyrigTt _(c)@2`W09@M@c@os@fBArԕ@Aa@i@nU.@ 0l@ HUsBe@e@v@-d@2h<M#bl#P M@?8>UhoQQ]AVQ^1| L@%fq.?F?c^ dj ٨la6ztB:'3S3 7d /kyԷ&Dh]EWe?g~kp+ɉ 5d}bGeUWeFTol0juMDhn1ۑ0j`Dhf e9o-ai qGeUoo$o6o3UHLD )&# R>h 0JTlaaM>UF .?F@=~9?F î>?FesY?P >tA`  A,:?t _VČ= W:`>u eu!s7"JJ0U?HA9 @tt7 b2:*H&tH  th*N) ?MB"b";)H&e8H  3 H -US4oTZLP?FK+0j6֏@!QpoĬ?> گ ݷeo?gց́ F?i?;0~1G}}϶iM@mp?#nC%fq.?F)Ouy\l2_+ʦmx~E}@p5"iQ덋Рoe*al|ٍşןHD: )&# ;h0>Th]]9 PB IAUF>'?FЖ''?F {J?F%?P t`  V k?t˟R{ Li@Zu Mauo8#>    PEJƱZVJp!?&. ?Az@b_bbe!Y(݉e 6 ,p )l l w+ g?Z-#U!׷!ze Ae [+j/|//-g#+ s"?(a40 9 .*2!#{0:#t8'tϻ:Ua L2$18K;F?VEK.; t>B4AA0`?CopyrigTt (c)@2`09@M@c@oKs@fBAr@Qa@i@n.@ @l/P Hs3RePe@vEPd'P2hb<M#l1# M/P?]UhzAQ1 L@3F l?FƟ qP dj|aW{ޓ8cUFwW?F0ev"?F%Ti ?FGUoO 0Yifull4ajT?N~f0ci+ fEeC.kn?FtdFِ?Fd?F#l/Оol}dQ3ll BXl귺m|؇bBhaUeAvؼgԵ`r4g?FÇ?Fϓ|vflPIϮ3qoijeðl4` |7T?heFnFc^?F]`h leowkyyftGg˂a,8b1;MHD: )&# ;h0>Th]]9 PB IAUF>'?FtQ>c?F {J?F%?P t`  V k?t^i{ Li@Zu Mauo8$>    PEJƱZVJp!?&. ?Az@b_bbe!Y(݉e 6 ,p )l l w+ g?Z-#U!׷!ze Ae [+j/|//-g#+ s"?(a40 9 .*2!#{0:#t8'tϻ:Ua L2$18K;F?VEK.; t>B4AA0`?CopyrigTt (c)@2`09@M@c@oKs@fBAr@Qa@i@n.@ @l/P Hs3RePe@vEPd'P2hb<M#l1# M/P?]UhzAQ1 L@3F l?FƟ qP dj|a{ޓ8cUFwW?F0ev"?F{%Ti ?FGUoO 0Yifull4ajM?N~f0c+ fEeC.kn?FtdFِ?FBd?F#l/Оol}dQ3ll BXlF귺m|bBhaUeAvؼgԵ`弄r4g]pÇ?F*|vflPI3qoiGkeðl2` |HT?heFnFc^?F`h leowkyftGg˂a,8b1;MHD: )&# T=h0>Th]]9 Q BUF\.?FW?Fb_-2;w?P Bt`  t0zեK~LQGVBu auo7%>J0U?R<AL@tt7P }b6*D&tD  tTd*Bı) ?\A>"b7)D&a4D  3 rD 8/td/ty. x1x1`?CopyrigTt _(c)02`W090M0c0os0f21rԡ01a0i0n}.0 Al0U 8s2e0e0v @d0"h!<M#l#/Ml> @?UhLAA] 1 1"71bAV>!J`Ƭ! Jdjbq:z׍i Q 6VMB@˿acP\`_m;E,Ui5U 5l?FQ[dg[VU'EU˖7s?F'YE]fdg[pdѣSYUfe3_GN]݅om{ZcXUFBGP\g[)J`UAp3C#5 IXAYT ?F_jy}tg[3$sXApsf ElX1Yfy?FPP\pg[!XAYw 䘷?F~_uO]{DŽ~g[&SAY-]?Fr]0P\g[ip~ X1#"5j-? 
[Binary Microsoft Visio drawing data omitted: not renderable as text. The only recoverable fragments are network-diagram stencil master names (e.g. disk storage, brainstorming legend icon, streaming media server, directory server, application server, and real-time server shapes, each grouped with "computer, distributed network"), shape help text ("Drag the shape onto the drawing page."), and embedded copyright notices.]
?r#51:]ow41b(H1|_s8?@#`ӲalaDw5Xwl5dEJdZO?@C?@m-p?@w  [edir[wlWV(|uY:l>|j/v w$117JGJo?@U ?@tf+?@?ktzDzXwl(|}x|'=}A[tJ%?@~ל?@~?@.RV~7bߌ:|wla_=v}V痒_Fts4$imƉEy?@Q`/&G>@7;[8|{oodEJCwlv(|5t0\>|9?jՉt0GutWt1Cw3tw#e!i W(?@i?@ :=-?@|`̍f?PodiT rwlAD:(|F|v>| 8O/'ݦ[dvπw&30#e:!FY#?@>#?@r1?@4rzXw\.wl[2jq(|kj祐m>|VOdvs|N!F~oCO !L?@iOS#~Lf%>~zi4ptc+y$>|A ߟqayy-詺alB{TJ7b!iWyw?@f?@$1F?@ݏOD bϑANoBrQN 3*z׳]SqAy -;b>Sgi8TQTQB(YAeeXXyW _'0UiŞ^-!dD0^ 4UHLuD" # R>h 0JTlaHaMCqMU@?NB?@S[s~?@$O?@WZ}8Ÿ?P >uA` ?u4 >t  aU֠y)t7Fϯ1)2_G\Lȟ9flMlA  Z   J>U2zGzt?088r@9 3#>C&A@ybbRbPzw u#-b(bJ)+&N=#!!{`?CopyrigXt (c)02d090M0c.0os0f21r0J1a0i0n.0 Ale0 8si2e=0ej0v{0d]0" !]$4#- 'N((#?6 ?Me>aۀw R4` A#0c{0n01lI23zjw Mw bbbz5Iu#m{EZA<O&Ob5ObNbbOO((/%BG=## '6,lo>R4"lTJ$JUl 5I 1qYƕ!1(R!dӘ`?@S?il0"NX N i*ƙ?@`1Ww?@ټ?P^ÿ D* @[Q ʸZZ\!PX@: \Ŀ gsY @ [,˿? @?t@@Boogr5w!(NU^_p____Xj?R&[T2\o~e!ϏW_"?@\L-ed5?PO6h  U6bMmpN_W\4VI~zl{yfdDԻ~g,+?Ȏ !ood{ &'iMf5 4%uXW_'0U&[J!4e% 7KUHLuD" # I>h(J TEI 3AU@jZ?@({~+?P >uA` ?u#J* )8* 9LG 9`G 9tG 9G 9G9G9G!9GA )@G#"G.$*G8"B"V"j"~""""""l" ,>t   2t2z%2pbEC3O2J>U2zGz?@9 k3>{6A@wb8b9e;62 !4Nu3Bl322F 2M2bE9^K72S!υ`T9`U ?@ `kd\|SiDlؓ<N oldџtGlI?@~?@(t RV~'\Wc-N3pMq$$z$ơ𑯣@&'/6&G>@9 ;?@{gPDlCɚ>#(䞌!gP5|Tf- $` iR9$a/|`;fF`kDlӀ 7A?,s-mqBNf)ϭB*$`#?@ yC$`4rzY|˱AM^)Cm#ŗƦ}?zƀ9b"fjO?@m0?kOS3\?bUAfqGi?SbRS۾Cf?il߯aapeX/G|.lRxX U n@42PrŮ 8b:oLn?Ce(jY榄el`ƀEqQAeRp _og}u?@9 mO".l9=8/ ee )l.%-+mlփ?@U->V.lɽ7yHvOfўA=$}$ʊ x @4S>a U}62D 'e !aS#Ł&ʊi$!Tl51A1X/'0U;"2F1&E01 *_H;s eoY޴OXFȭMo#0 #KOB U n~d״ uo@+_ oVsGv FfD F hI ϦHK xM O dF֛(  @5 u v 6?` 68b Nhd _f k xm  K q#!UFD  h(^TYYB UFjZ?F~??x<F BPG(?P } X ]B66]  TTUTTTT96ȴHBU?Q?G ?B@L&ɯd2?6-G(?\.Y(7. sU!}&/)J'( G 0]B`)proxy,se 0v2,c 0mput4di0t 0ib*4d0n0t;w 0rk! e^| (SG& i?!3E?? ?    wp   p qw qw]qwqpqwwqqpvppww&www]qawqpwoww6zwxvDrag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aabjZֿ~??cN׿ mF1??S{V↑T*J?@ ?UG DF P# h @T(PYY# [U@jZ?@~?tP} u` ?u#2>D>HX DHlHYHH HE .E9O؍>DDXXllRu'9j &",'US2&5/4 ?"'4Wb0{['`:[1#26" !^Q_9$"3&9"/L6 2&!'ݯ>o@ÿ@qx@?@-pB?wDmF03Bu9@`҈-1Bu@r#CC 5(GAB2Cm9u`u `"[ b!u`/3Qy1y7B`?CopyrigPt0(c)020P90MPc.PosPfRQrPaaPiPn 0Al ` Xs$bePePv6`d/`VPs_SBNcPm!#5P86PG`y1&3db4&5MBh8=/3z6&3d66 Ԁ2&4#/30U1r3]ge<^AE U#&5m Hq&5twV5t{3t{VEt{t{y {et !{|tYwmAdaI_BT.1JaJa&2tataV21@32MVB3Z112I5tb6 7C|hhE) *”nQo`Г!SQSQA11BMB;1brcqe a> !b1AEDFQ.1#8dTG`XQ`])rw` M`nuP%aPt5arFaiQ` EqiPm6`nPvdnV5Q~]1PRd ]`ue`bү30 ! P`r3EWSHx]C(D6bQα eϿn\ϢAP9aB#`75^wωZ% S:bi` #ajeTZLPch@6߫k5^p߂BTlB`i`g̨9@ߡRPoڂ|8Z\@ 0p([D`ĦSPaPe`l`FOX.I` $TP*G`ON`?aj͒soSb Y`qP(xP⨠ڒMNwbk'aL.2`} sYkIpadB`5c6 d  nĭs  ^x5GmTat f?a Iݢ;a ^f󸑩 R pa/a //1G/UM`C/(/+Qme`utP Y`? 5^D.?@?} 0B? B^v??;EHd2QC'i3` W@ڥ%O^V/O0AOC~ aOO#O/`M6`mbO_i^p)_;_O;!O) eo1y]QHg_y_]qt1dw:fo(SU@jZ?@~??v?vepcpBuHp` ?bup`Ta],t1(gp"51m-< Corm1 QrHqBT1.mt1xAdg>pÿސc$bepq verrrvvvrs^c"`Roxi1@ e2Sjbb aabr >@"JtbauP`hoe{1.@zU}_ rU1U2l-bŕnaaaaVҴoAaY6(P`u4`r}`}`azxn4~5_r&bВF3sNSYT}lrV@z-eB8QnBLQBӐSV֜Z,@rBD*<`}(aaY Vn` %P&@!S>1a`NEWORKJn2H P߰}!R尪RI߰S͑QґBio0ohhAQ'0Uƛ?!10oHD # =hj0>T h]]9  5AU@jZ?@͂T??@x?SF?P6 jt`  o?tk[>(A Vu aSuo.]&A]  *ԮJU2zGzt?@L#&A@bA#z7 5#1bO(b])j+$j&[ "*(w "?& ?AU$l/%B115 `?CopyrigTt (cu)Q02`09Q0uMI0cG0osA0IfO2@1rC0|1aO0iA0n.Q0 WAl0 G8s2Ueo0eG0v0d0"1E]4MŤ-&3 &7&l>]Uhz811U!J!u6?@+\.>djIIa? >"(EǑ I%K9E 5EU . K ;;5QIWEEE@ _KnSA?8@VBu?@DR> ?Nk"\6`ObmMM\j=am {:e 뇿P> #? D"? 
5-gg?j47!B{oogrK57!ibDAbE9 MX7'0U~rN!R$a)5 5vIzHD # =hj0>T h]]9  qAU@;?@I5K?@xEL?6ѿP6 t`  bm&Qo?t@@L]WI,#u aiuoL> JU2zGzt;?@L?&A@b}#zs q#1b(b)+$&[9#H",ɺ (wH"?& ?A$ /%9#BV1V15 `?CopyrigTt (cu)02`090uM0c0os}0If2|1r01a0i}0n.0 WAl0 8s2Ue0e0v0d0"O1E]S4MŤ-9#b3 b7&l>]Uhzt1.A!JN!m.%-+0djV1ax߃S(@J-&?@ mO"> ?Nk"L[Majnha_:e P> #? BK=o? n-?j4Bs!B{__Wr5s!D8E6V 6?wj_|_ ^XE.U6Vz T6 @ d_mOoT ݬ\VߦyWx?RkrVo_bVh*g:n"'UaEjz}36>CN>?Bn]z/GvN7ViDAAbE9 MX+G'W0UPN!4)ae5 MaHD # =hj0>T h]]9  U@jpiH?@^?@$?@]F?P6 t`  !a&ao?t,F=۳=u;u auo>J2zGzwt?@ʐLAw@b#z1b](b!).+I.&[-m(?0& ?A$I0/Q%!!5 `?CopyrigTt (c)02`090M 0c 0os0f21r0@1a0i0n.0 Al[0 8s_2e30e 0vq0dS0T"!E]$M-# %'&l>]Uhz!1!J@% djc~YZ(_N9zOMGE%_EH$SwO"K ңHE ^&_8_WE_OqOHOOi&DAbPE9 MX7'0UUfb>!$a % f1jHD # Uh4>T#]]9 M5AU@jZ?@L|+?@?@({~?P6 JuM` ?u JYt )t7zEA2&L_D?. > >tJU2q K?=@MJ&A@-b9(+bG)T+T&J[#"#a?& ?AH?/`)#115 {`?CopyrigTt (c);020G09;0M30c10os+0f92*1r-0f1ai+0n.;0 Al0 18s2eY0e10v0dy0z"!4B#e-?3 7)& b 0K8>U# ]A]A "1!ъ!PzQ <%OYAJEU . K;;\%W%E@($L[.FX5Eai%?@ 6?wL;] W-\TB⤲8?W&SA?@ @ ?Nk"L'`bmMW_Ye[7 VK=F: [jr ? D"? X?ZvD?a@Boogr5;!QEؿx]SiLO MLXa~WOTZiAle59 XG"'0Ujc&N!$05 .HD # =hj0>T h]]9  IAU@K?@OgVb?@Y%?@wᳬ?PJW?>t`  ^Z?t#ə.E @x?r?/{4܂ G _Z+u u$ >  J JpU?43f<!.&P?b3bbfz@1bAe&bq$d 2zGzt.#@LT"&r(~&i$q)(+}'i"~*#81815 `?Co yrigTt (c)o02`09o0Mg0ce0os_0fm2^1r 1am0i_0n.o0 Al0 e8s2e0ee0v0d0"11E]54M-#D3 %D7.&lUh#7 V1#Ai!J# F/?B1B  VRiAAbE9 MX G{W'0U*R2N!I$a {VZUHPD  # #>h0JTpERI MU@u6?@jc{g&?@%r`5B?@m_F:Is?HNtQ`  II?t#]  _ii$!`kNu  iu%wj>M JQ T       "@&&JNU2"?H@Qs#N&A@b(bU)++&RNp}#>H1#19 ?#bbbz| R&/5R}#11`?Co0yrig\t (c)02h090M0c0os0f21r0Aa0i0n.0 Al!@ 8s%Be0e0v7@d@21a4t#-}#3 %746lJ#$JUp5 3 M19 1!q(!@nElrkRt# 8$kcD?@숒@)?@P?P"EkU |1\iOT/irvs}i>`_,of :m ,M ?? S&d2? *b?r4B!B oo1gr #5!>%(EQOOOO__@Ac1_C_ D4ևS]_o_yRX+{_,__ #R__ (YQoSo+o=osd {oEeoooooo@yUy{* hDV!VV(I X"c7I!pg bf'iMjSE t%XyG|'0UŢŞN!O4i |UGD # hz0TdYYBU@(%)?@xg^?@0߯H?@s7Uݹ?P} t` 87?t#_)TeΦu ]ukv  ̝L&UUU&I&3 U!!`?CopyrigPt _(c) 2\W09 M c os f"!r 1a i n}. Al00U (s42e0e uvF0d(0'HS$#-## 'I?k=#ǒ 2j$B##0U'BM3@&@F0Yk} #HA%DKb5DK3DK5DKED$!GiAZG5(|IDFX7yW'&Dŭ6!4YA yVZHD  # =hj4>T!]]>#AU@0߯H?@z)e??@˞9q?P6 t`z  ?5t# rXѮu)Nu;):O]ע0 &> # " "+"+"+"+" $+"T-J[U7 ??& bA@ 2b4-#2zGzt#+@ Li68!6 ?}; 72U!:B#AA5 5`?CopyrigTt(c)20F@9M2@c.0@os*@f8B)Ar,@eAa8@i*@n. Al@ 0HsBeX@ej0@v@dx@)21"DM-,;?C G& l>]UhzAE!A#!ְ1J`#@{&pz?@Uy{?NT,dKj&b&R&M/8T9K-q?x6@F?@dG0 >Nk"?YE*]\LT_hKjfGD&Qhj iU3UGe&Y_`?@/b%)h5oWKoWo;KTod}l$roҥ`nE2K?@ˌ%4G{?@ -q eZ̿R^{|\6?vodmP}l>MBe 9 N 1iS? 1?g]rl#51:O("4ga@Sq(UE^@O|ǐ[`Ρa g v2rA @3ijD&}l迲aqOz o?#WHZ'Qu=*[^N?@]%P?@O&@ c?@аM<n ?` eb0)\(Pfodm}nv}l?V5(?q$Ix[cN#W{#iA&QeE9 MX|''0UdN!0 /HD  # =hj0T h]]9 #]AU@}%?@R$,?@6#V?@c_3֡w?P6 u` ?u#t  3zM%t3ws%.<%H._'B> #PJU2N贁Nw[?@ʐL+&A@wbu](i+bk)%+&p%#4">9"<5!?& ?M#bbb͓z$ p&"/%T%#L1L15 w`?Co yrigTt (c)02`090M{0cy0oKss0f2r1r 1a0is0n.0 Al0 y8s2e0ey0v0d0"E1E(]I4M-%#X3 X7&l*>(UhE j1#$Ac!J`%#@5e$_F??!foA3nBM( V?@J-RHQ??@S!R?P| llHL6VLD5E+v;*AʙH: 8snP%$? k?Q? \?f4_!_t_Wr5_! 
R,U3OC@IL?@4~Kz?P)Oy޾ T!]]>#IAU@܋?@2Oa>?@#r`5B?@*ݷ?P6 t`  >B?\oEt# ŖoZu 0Nu;Y:|8>$# JpU>##<?2& ?5bbbz@b#ANi&bu$'h 2N贁N[2$@ LX"D&v(&m$ :;T'm"*B#d1d15 5`?Co yrigTt (c)020090M0c0os0f21r 1a0i0n.0 Al0 8s2e0e0v0d0"]1"a4M-/p3 p72&l>0>Uhh= 1#!eJ`#&@NTM\jh1B,RBM(!JoUy{O_ e@ So4(DF^Tw?@]֝ä ?Nk"?w_KT\T+\(Kj?ӷjmlhj -`\TPQE\Uu?i,St_o u~!XEO9= T?@L;uQ_Pm"rA @3sjQ1lğ= GoYniAlePE9 MX'w'0UrJ^N!M$|A vzHD  # =hj4>T!]]>#IAU@܋?@j?@2r`5B?@^ݷ?P6 t`  >B?\oEt# 6ΠZu 0Nu;Y~:#&}>0m8>  PJU2N贁Nwk?@ eL&A( 5bM(b[)h+h+$h&#[-M$#?& ?b$Z*'*B#O1O15 5`?CopyrigTt (c)02U0090M~0c|0osv0f2u1rx01a0iv0n}.0 Al0U |8s2e0e|05v0d0"H1P"L4M-/[3 [7&l>0>Uhh= m1#!S!J`#*&@PUy{_?NTMjC1gc RVMʣ(D#F ![?@"$. ?Nk"?+IKTLTcKjRlhj p= (*!)i,SO2Kes9T;QEGUFw6qX}__TM7fTZT\Dž}I_P^52O,ES"rA 0S# Es3iAlePE9 MX$Gxw'0UrJIN!4gA xvzHD  # =hj4>T!]]>#!AU@܋?@c_v?@#r`5B?@*ݷ?P6 t`  >B?\oEt# ap'EZu 0Nu;Y:|$># BJpU>#͙;? & ?5bbbz@}bAA& K&@ 2zGzt?"K@ L0"&N(bM)+TK/8Z*B<1<15 5`?CoyrigTt (c)s02009s0Mk0ci0osc0fq2b1r1aq0ic0n.s0 Al0 i8s2e0ei0v0d0b"5194M-t/H3 H7T &l>(>UhE=3Z1#E!J!@ENTMjN@1h(NoUy{OO e@ o)4(DF^Tw?@]֝ ?Nk_"?'_KD3\T+\(Kjjmlhj P\DiAlesI(=MXsG_'0Uzb6N-!%$0 sFEjHD  # =hj8>T! ]]>#!AU@܋?@*?@#r`5BW?@w?P6 t`  >B? \Et#տ|u l4RQu?]>I$>I# JpU>͙#<?& ?9bbbz@b#AE&bQ$$D 2zG[zt#@ L4"&R(^&I$Q)+]'I"^*Bh{.M#To1o15 9`?CoyrigTtk(c)k2009kM0c0os0f21r1a0i0n.k Al0 8s2e0e0v@d0f"h1l4M-:?{3 %{7&l>(>UhE= `51#I!J49fAn^ R)X f"B!Jui,SG?RSlOS u~!N]8-19= T?@L;uQ ?Nk"?YVTQRXQOnğ/=i n (\ wTiAaePE9 MXLg'0UbZiN!)$0 Lf`jUGD # hz4TYYT#BqU@I7@(?@M&d2?@ t2?@`0 ֺ?P} u` ?u#L+/8<L9`( .88L``t $Itmj gf! !UC~#9#0Up"?҃@/&T9#ة!!G*`?CopyrigPt (c) W20 9 M ]c os f"R!r 1a i n. AUl&0 (s*2e e v<0d0'"!$0#B-9## E'w#^=9$&0#466 20$e#$^ #30%T% 9#HLA0%XD`cKT%XD8cGg&AdaO^'TcAcA0"J-(3Ca0%g2AATBK6S`6SihaX5 &X~7kW'o$ţ6!40 kVZUHLuD" # I>h ,J T  M#EMU@jZ?F"??@`^?P m>uA` o?u#VpMJp -<pA -PK#TdK2(.K<FZnf>t  t3|>b!F\.岿uKJ>U2zGz?@9 ?#>O&Aw@b(Wb)+&r" !$NI#"@#W"6 "\M"b)2;'r",r"/I#11G#`?CopyrigXt (c)020090M0c0os0f21r01a0i0n.0 Al@ 8sBe0e0v-@d@7"BMI#3 76@"!"t1"=ElJ4Ul8@AAC 9 =Q A(#UA!=(^!rCa?@ئ3 (0 # ?˕8# ; MkKUN>UV\u5m[} 0҂X3^E] 0__  yXE>UU . W[ ?;;l\gU>U@}bAo\nZM?NXAjai%?@;D%V\^ W-l\jp҂T9?T8?@)#?@@SA?@9ؾ@ ?Nk"?YY5`bmMl\K,q_ܱy[_7 GH5'  ΀X:YC}u?@H[V\u۶m۶l\&Dn۹XQ`C"4` NQ:Y%u?^&W[DqodUQ:YxLPp"2V\$I$Ik]T@!X Q LPݟd(: ?j=YU>UsG_YY f_x^AA(G!EM_&_8[}u?@R6V\9=8uG5xQHuQu?@mlփ?@t;/> ~t~Y|Of|͸ 4zTB#]]9 TU@ t2?@`0 ??@?Pn6 u{` ?u th  )t72(LMm>EJ2wq?=@LA@yb #z.-b( (%2+2&>@f'?& ?A`bbb)$bf/ !4/U%T115 {`?CopyrigTt:(c):20Y09:ME0cC0oKs=0fK2<1r?0x1aK0i=0n.: Al0 C8s2ek0eC0v0d0X"1P4MB-?"3 "7&l> 8>U# RARA9 ]41 #p U%J`@`C C?s9ӑ] ,VM^(s"2ǟO?Y!c1ƸFU5V^dln__ iQsX5VUF__ _ ROJ)XE2O,E"rA 0"|c3JQVU-?@k}A_Є\a#BFUVU~t?kb\+g!^(`d E0Ale59 MXGB'W0Ur8N!<+0%5 zvzUHxuD4" # J#UU >ha0JTaaMU@ t2?@* 7?@H"G?P tQ`  7?,t#6Z hӀ6C2wqGCLvnCu3;uM;2zGzt?>@QA@Ebbbz+ )#abC(+bQ)^+^&)y#?&)?I"O+^+N,l'R,1,1G`?Co}p rig}t:(c):]209:M[0]cY0osS0fa2RR1rU01aa0iS0n.: AUl0 Y8s2e0eY0v0d0@"%1Y)4-83 87$&lUмJ1$ KAp +!~ F F FFFFFFFFF"`!Q[C(CUH?@ &[\uP^CyyTQX3CUjdO[_TtY/`T7Q5____TCUQegG idpogZ4EQ$EGy#3 oEMF#r 7f;_\tTQZsR om_ hlРpv_QeT_2z|ro.FQpt8Auo,>PC^Awz^ @f xY 0BTfx+ 6yI8_mBԐYˏRdٮy %D+x]&8J\n+ }C2q}[eu៿x **xa@Rdv+ zܿȯگ ?{kk 1/ g_xeZl~ƺfZ?@&KWYܿvl")Q )yxitϘϪϼƹCG?@sw8⼕5Pvl'<(fT( ,ӓxmߠ߲ƹ"?@7.&ὡ }'i,D @?.v0B/ Ixq7L+&rZS^e ap\1r;K[J\ vfx N 8cBym <ÊvK%d!dv +@x$6k& LJ (4 e/ P}/@M x //)/;/M/ +'λ fe/?ioH?d/jiAlDlBATKA@egI^%L8X<ӈ9G'0mUB? <!d0M 9FMJHD: # =h4>TB#]]9 TU@ t2?@`0 ??@?Pn6 u{` ?u th  )t72(L>REJt2q?R=@LA@-bbJ )+&[Z#?m& ?A/$)񩣑!!5 {`?Copy_rigTt:(c):2U0 09:M c os f"!r *1ai n}.: AlE0U (sI2e0e 5v[0d=0>"!$MB(-?#K 'm&l>8>U# AA9 ]p! 
#p +;%J`@`CD Cs9ӑY] FMD(s"2O?I!c1ƸE%^d _2_ iAsLXw5UF_w_O ?RJ)LXE2OE"rA l0".cm3AU-?@[}A_6\G#BEU~tS?[~b6\WD(3`d Alew59 M*X7^B_'0Uur>!<0% ,v@z_Hp;s 7E HHlrF(3#4 POB !~d״ o@+ o[sGSofS(SoShSF S;SF֛XS\HS5S'8S("S%8S)6hS,NU0 S29آU8U<UB[UFD  h$^T YYBBUF\.??x<F BP(?i3^P? .vV B2% 2 2ȅH?p G?`1 Bt  0XGB`User,n tWwo k p"ui h"al d v# c >!^ K%OU# r?/%?IHPNL@DD@@D@tDDtDDGpDtpG ttttG_GOOGGW D@DDGD@%DD@@zD@DGGDDM_iDpicZR?Drag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aab\.ҿL&d2??s׿4?@?p2эr3rCUG DF P# h @T(PYY#EU@\.?@L&d2?]P} .B"H:J "bbrY:f 7 Amu` _?U"66UJJ^^UrrUSu#*",'񩤅MU 02#?T"'#^Ax)"L[$ $3' >>ÿ@q05ӥ?46Zr&23G2uM0`-N12uu&@ f3AB@3t$9u`u `m"[ b!wu`)#A!'1`Vis_PRXY%cPm!#52840!2`?Copyr@gPt (P) 2P09 M@c:)Po@of1R"Qr%P^Qa1Pi#Pn% AlyP )Xs}Re@e)PvPd%BH!#$U~%d$%! =#d&P& "$B##+0Ub#]  gmAdaoc_Td"5 tR6$t 7#tEC0t|hh9PKs(&q`#L ]G2`NgPmPTZqU!hj|]sL#PcgXx'e|Bu@lPioPgyEf|R#Pobv|3qdv]@Vq DPpgPr1PBqn1PxHrfcQA!эSPUa%Pe[PlgPs@ c` "QnoPePjQv@t'PxUrT sAp ;T'PA r`UTTq1PaE m\#ff%<z-q#o&5m1`P]](cEAE5:Z%C %B&MRAiR p!]`NEWO@K SH-0Pr BORIrS1QN1^d^dpЭ&&"(AT܍?<$aylA2;6Mq`CRPQllS exR2" 0R<@5b :::4u"5z":br3@@BK ڿUmRc11"@u&2Mu3+ԼB,@"Q(!`a#b!qo+`9R7[d[#uҳB&53@R"3阼E q`_$&F3s͵NSYFO0UAC,@d_"PUi(X''d&E! EQ7 jXq!Ia2ܬوW.Hƒ%Q &5f ~5! E8! U. UHD: # h0>Th]]9 MP MU@˙?@[ bg?@ƒ?@?]P6 ]&A]M  $ 8A] LGD`G t  DG G GG$oJuM` ?.BVj~uJt  &7&t!"Mg7%'tU% 762>[JU 'N((?6 ?A!2b(A@2b4Jte32zGz?)77s@M[3J68b96<2:Be3hAhA5 2`?CopyrigTt (c)@]2`09@M@]c@os@fBRAr@Aa0i@n.@ AUl@ HsBe@e@v@d@@S2aAYeD\3-e3tC tG 6\5@bU9 kVX=W'0URRł6!4aR VZlJ4Uh8&a&a\qA@Q1Y8`e3@omi?@o˟+00f'##Rܗ'/Rb\2JAL)?@}Ɯ?@FGy?@?Ҿi ǒ$Y1iM/JCl;~e*e<8e#K4 @ J? 7ͫ?"p@c <"!r55qE*uzf7?@|ݨ?@ړ]?@,_oTl%rl |}}Ya|tqw(Kx#0.:Ud:Q5>[}W*??@#Y8'&WSyirlE_Ţ|g;||8Q  cGkfɑudpJ?@P"|*u]ad@k!iNŒ|\(|ݙÚw }:xɒ|)3@*ȡyQ5l~eoc@q(Y?@6qﴐ2rA 3zƟZJJ=ay*/AHD H# h4>T]]9 MT !AU@ `h?@ E?@( y?@b|?P6 $]A n JuM{` ?e? $Su.Jt+  SnpteGVBqzE0E>qچۤ7>[JU 'N(( ? & ?Ab(A@0"Sb8$J2zGz?CUUU@MJe&9#ވ"bU#z.;&b8)E&?,J0"E*115 ^"`?CopyrigTt (c)<020H09<0M40c20os,0f:2+1r.0g1a:0i,0n.<0 Al0 28s2eZ0e20v0dz0!E45 e-?3 7 &6iieE9 6XG{'0o bo &%$0 PFdJlz&>UhE=^!Z10! _V, $*$d* Td$VcR( bHD: # h4>T]]9 M 6AU@ù?@C?@V_#?@ zrC?@+>_$6>M JuM` ??u` noD<Ju ,u ,bsu *| -Jz@"JL>Cјղ?@~T]]9 M 0AU@H}^Wm?@,71?@[>(?@Cn9`P B0wR?>$ >M lJuM{` ?E?Ju`bJu ,u ~,bmWu v {-J)yt@e"J>>?@zc?@c$Û?@s?-'7/I%u'A-(?/I Jt+  >Nkt7"te"^_'n3%':4%'~7JJU2zGz?CUUU@MJ76yAZ2bUu3 zii32Wb9;6B13115 02`?CopyrigTt (c)@W20@9@M@]c@os0f BR1r@9Aa @i0n.@ AUlT@ HsXBe,@e@vj@dL@14[13 'N((3FP% 4(d?U3 7F@T%jeE9 LVXW''0A0A0F-!D0 lVZ9lUhzE ,5i 1a^*0^dJid:SYwh>F R e?ijm$*D@@y46$d tNe%7lb+d hc٢4HD: # h0>Th]]9 MP !AU@܄?@?@oEݒ?@fc?P6 $6 >M JuM` ? ;)u*Jt'  sZlta{Jxlmv _=m)>d3>JU2zGz?CUUU@MJA@"bU-#z#!bs!#<"bD)Q+Q&J[ 'N((?& &?A<$(S/t%UB115 `?CopyrigTt (c)802`0980M00c..0os(0f62'1r*0c1a60i(0n.80 Al~0 .8s2eV0ej.0v0dv0!Y$-X 3  7&00b59 FX7='0Ŧ&$a $F8JlUh E ITh]]9 MP MU@\gr?@KeF{2?@0_'LھP6 @M >M   $ 83]L3` t  3 3  3[3tJuM` ?""."B"V"j"~""""u)Jt!  
N7F$6t132_y:G#7.7w%5%H=!>}tJU2q0?=@M3J6Aw@]2b8Wb9;6JJ[326C?IF ?PA?IB3AA5 C2`?CopyrigTt (c)@2`09@M@c@os@fBAr@Qa0i@n.@ Al!P Hs%Re@e@v7PdP2AYD3-3C GIF5@AbSU9 VXyW2'0UbIF!dDa VZlTJoAT>Uh<oAoA]`t AVb8`3@X?@ bgk@#O 3G$ ):r2JXq@|KC8 uV_Itu4$9K]bkt)Y?@>W?@t |jdEJC!|v|#t0\|`jՉw?0GuN.#?ɿۿjjt^qt'f7?@?@cVB?@-~̍{U |S r!|ÿAD:|S|v|8O/w#L,3?j~;̈މ Q5?@]}W*??@ׅK0M='T!x\.!|[2jq|jm|Äމ}ƒpJ?@Ggo: xυAf%r1$yA5pt| D(ǓI |!TJψ' 55)n?@dѓ`iDF_ὐ#ǮD"ϑ<)tyϏ6qy-8n|r!t@Ikݟ -?jta~TUdQ ;f |I ([%6qZueos@xk.?@U.foB]ra 3z׳]yqʰ-;bq2"HD H# ih ,>T  9 M#EMUFuj?@&d2?@ƒUһP6  > M    4/ H/ \/ p/ / // / mJuM` /?*>Rfzu#J"bo1FI AcﰿuJt  ?t,2t!72B1>JU2zGz?@Mg3Jw6A@-2b8b9;62 !4JFq3Bh32.F 2A2b9ZK72i<2?*q3AA5 p2`?CopyrigTt (c)@2U0P9@M@c@os@fBAr@$Qa@i@n}.@ Al?PU HsCRePe@vUPd7P_2bBMq3C G1.Fh2!"t%2eUibE9 h5AXzWW'0Ub1b.FE0I VZlJ1T>Uh<11@ H\p A l5#}Qbe81@;X?@ bgP@3 ggq3 6MgW7]4JAq@dgLjv2xT?@&e ?>Ȓ$YOA"4[Dzx=Buگ+Bu#տ(: (-DT!o Ut  %"4nARB"&r55Hu™0PL?@dl2,|?U6XB|$e6XxEt\s綡?@kZO?@-9b?@B(® Ǣt/y1MB|sf| +6l|?MQ ' K +Xx}uƻ?@3W?@tM/JCB|w|RDDk|& mzf7?@ݨ?@cVB?@w̍{?PFk%rB|:|F,r8|?P)̿޿:Q5?@P[}W*??@ׅK0mM='a3urB|(q|̔mn|{iϡϕtZpJ?@pgo: x+y|xuty>ÿ@q@E?DF0܇rzBCWB-u]@`- A&"Ru. @qGvC&$!Q`1(+RA"p2K& P4 0)9u`]u `"[{ b1$u`K4)25ا3 a171`Vis_PRXYcPm!#5L`463B`?Copyr2`gPt (B`) 20`9 M2`ci`o4`ofqbbare`aaq`ic`n  Al` ihsbe*4`ei`v`dR13d458=363Ft66 24B#V30Ur3]w,7^q^PsT<`o m`sq`a>g d(a5;_@AA2hh&B)3*R+Rq q2qqr81acf2+66u\. %WQ11[2B!P@Rȁ&ECT1$@b2ȁ3@ HOU@NOU5_4P&F3s͵NSYFąU@` @Qu+QD,@ a' [cep+&ERT28ABTO޷4J&",-B{R1(BKF'`R`as2`e T`xq`W`bBp2+ 0bLU'A"Z#P) E,"b 䵰2EV2brC*Rr@Qcu8vv_<7"&"V=mQQ+qz2&B3/r?qDbu1r8 ak ]!PY@` SPae`e`lbsVa-` E0u2`pF`e`U&EX);Mr` [Tg`aW``D`vb3^Mr` S}bR"c`uUdT1M`n}f`Ecq`ubnrxUQ]AWPbd Nm0U0O!W P`r<N`x8|)b`1 sbe7uPc9AjeC`u`W Sbi`aڐT/MTLc`ch?/d/v/*/B]l`i`/ڐ]@//O Wdo?頼%D?V?9Nwbk0"acR`;|?SI8`ad`cm?L&OAnsCaOL,tROONWPAm23atfa Ia?Oڐfp__M1`  b 8`aab_t_P҇O_OM`CMO_> oWKCs2m]:QtR F@Q6WoLuMoo=9KoHMa:|| R©ʳyvڐߡuR"w` %aP&2DaQiCr:@|`NENWORKJbHE`PXQR^mRNIXS'Q,:@:@ikRw$}ŔȟUّˇ_'0UʨڐLԈ{Ѽ=1tıɼј5ׅ-MTрan{шѕ:55ȕHɑ ՔauՔnՔ{HՔՔRuՔՔwՔՔ_%Ք%Ք55ՔHD: # h4>T]]9 M /AU@xJU2zGz?)@MJ&߀A@bM?#z5 3#\M(b[)h+)h&J[ "(.w "?& ?AS$Tj/%B115 `?CopyrigTt(c)20[09MG0c.E0os?0fM2>1rA0z1aM0i?0n. Al0 E8s2em0ejE0v0d0@14b-$3 $7Y&?Fiie59 FX1G'0UBŴ&!$0 TcFwJlJ8>Uh9AA 611S!Z&@_vn J4h .x?+zfAQ5@V RVJ!D"H$?@bRX .S 3 bA~Y2f@B |>?@?RT*?@fyת?@6T]]9 M JU@1&d2?@Vj?@҄BP(P6 ]AJuM` ?ju#t  k]It=W(\ZIRUIl">2zGzw?@MJAw@eb #zb](b%)2+)2&J[/.w?& ?A$T4/U%B!!5 g`?CopyrigTt (c)020%090M0c0os 0f21r 0D1a0i 0n.0 Al_0 8sc2e70e0vu0dW0!$-# e'& ?6iie59 6X7{'0UŇ&!$0R -FAJlJ8>UhAA 11!T2 $S<cV{ WR~VJd2L&?@ubX,X ]S fR#mS [ÐڱQE^Fh4__ K&oeX5:f'l6 XYePY?ۭsHMQ5aZjoohY}aUVeEVx_;"rA #dsHD: # h4>T]]9 M qAU@|BP(?@h_4FP6 L>D   ($JuM` ?Agg2DuVrJtS  Ut{_GzU_>JU2mz?3@M3#JC&Aw@b#zw u#b( (+T&J[=# ("?0& ?A$ /%B=#BZ1Z15 `?CopyrigTt (c])020090uM0c0os0If21r01a0i0n.0 WAl0 8s2Ue0e0v0d0+"S1W44#-=#f3 f7&4%/O]FiieP E9 ]FXsG{3'0U&!40R FJlJ  ]@Q@Q ]$ x1Z2A!1(Z=#R&@o.r!0JR LuI!QRQ R f4"JR!@ ?@bx!0LS ؒ_a_Y)))$*aE6njϵZNo`o azh E6eWD"H$?@b¥l rdlIU"q)$#Hy|>?@RT*?@mfyת?@6 C_7 1o(!YpdeR;κdeώ"1_:M "96 rs{ ? }Ra@cBЯwr5%*aa9YPRSR&ersx<1\Y<q&eiCvFhw H^+u(7yf@ Sx S?PWe?nWÑ R0X1yat:2);EGWQjV_&eU2_U"rA 5#+"HD: # h4>T]]9 M )AU@xb JuM` ?+ILKu8J1t5 }tyuJb>tJU2U?R=@MJA@-(b;)RH+H&J["#?& ?A3/T)*B!!5 `?CopyrigTt(wc)20;09M'0c%0o%s0f-21r!0Z1uai0n._ Alu0 %8Usy2eM0e%0v0 dm0!H$-,3 7&K?6i@ie59 6XG"'K0ŝ&!$K0 CFWJlJ]$Uh8 AA ]A* 1DA+ai'Z &@h4Fb J4H .x{GzIAQ,R=Q RSVJ:z]].rɑ_~Z)\(@UE !C@ ?@-bȿ . 
c 1d`AY"b\(h5;njϵZSoeo a(h5;eD"H$?dl3ril ףp= (d |>?@KRT*?@fyת?@D6Th]]9 MP JU@BP(?@,?@HD0J?@VG?P6  >JuM` ?j uJt  =5UYEt9S̿6EN{4P[Ehoþ'>2N贁N[?<@MJAw@cbWb)++&Jy (<b!?2x% ?AM+bf )(%/F%B!!5 c`?CopY rwigTt j c)(02`09(0M 0c0o%s0f&21r0S1ua&0i0n.(0U w ln0 8sr2UeF0e0v0df0!Y$Ť-# ' 0b59 6X|7G'2WY)6!$a FlJ]Uhz11-Z@Pٶ̚   8gF_K  6RScVJ497YDP7RLSBR}UE:]{^{4\QGYl>x}U5U$V5_YQ}U2_/U#rA cHD: H h0>Th]]9 M JU@,`0?@Z~g_?@0]q?@ ?P6 n>JuM{` ?5 uJt  W;6-Et9Sݘ/$ENaf`~yEhte>2zGz??R?@MJA@abb zt}b() (.+.&J43f;??& f?AHb3"Af .!/4/B115 c`?CopyrigTt (c)>02`09>0M60c40oKs.0f<2-1r00i1a<0i.0n.>0 Al0 48s2e\0e40v0d|01(Y4-3 7&60b59 FX7*G'K0UsBŃ&!$Ia *F>JlJ (>Uh*I %1-!4B Q8PRJ:@$ɣB\ ZS Y?^iQ3uUΛrH? JS #קYUQHD: H h0>Th]]9 M JU@~"7?@:@?@0]q?@M7qO?P6 n>JuM{` ?5 uJt  j߫GUHEt9S.5i#ENaf`~yEh<|>2zGz??R?@MJA@abb zbR( (.+.&J43!f;??& ?AHb3"f .!/4/*B115 c`?CopyrigTt _(c)>02`W09>0M60c40os.0f<2-1r00i1a<0i.0n}.>0 Al0U 48s2e\0e405v0d|01PY4-,3 7&60b59 FX7*G'0UsBŃ&!$a *F>JlJ(>Uh*I %1[4B 8PRJ:@?f2B\ ZS Y^iQ3uUΛr7H? JS #קYUQHD: H h0>Th]]9 M JU@jZ?@_7?@0]q?@M-?]P6 nAJuM{` ?4 uEJt Et9SHENaf_`~yEh`6H>2zGz??K?@MJA@~abb z}b( (.+.&J43Cf;??& ?AHb3"f .!/4/UB115 c`?CopyrigTtV(c)V2`09VM60c40oKs.0f<2-1r00i1a<0i.0n.V Al0 48s2e\0e40v0d|01(Y4-3 7&60b59 FX7*G'K0UsBŃ&!$Ia *F>JlJ (>Uh*I %1-!4B Q8PRJ:@ YxɥB\ ZS Y?^iQ3uUΛrH? JS #קYUQHD: H h0>Th]]9 M JU@fl6?@ ?@0]q?@^J5?P6 n>JuM{` ?5 uJt  YEt9SS_ENaf`~yEhI>2zGz/???@MJA@abbM z}b( (R.+.&J43f;?a?& ?AHb3"f .!/T4/B115 c`?CopyrigTt (c)>02`09>0M60c40os.0f<2-1r00i1a<0i.0n.>0 Al0 48s2e\0e40v0d|01Y4b-3 7A&60bP59 FX7*G'0UsBŔ&!$a *F>JlJ(>Uh*I %1!ҵ4B 8PRJ:@;{áB\ ZS Y^iQ3uUΛrH? JS #קYUQHD: H h0>Th]]9 M JU@^|>?@(=?@0]q?@pJuM{` ?5 uJt  LYEt9S޿ToH$ENaf`~yEhP׭wU>2zGz_???@MJA@abb z.}b( (.+.&J43f;??& ?,AHb3"f .!/4/BB115 c`?CopyrigTt (cu)>02`09>0uM60c40os.0If<2-1r00i1a<0i.0n.>0 WAl0 48s2Ue\0e40v0d|01Y4Ť-3 7&60b59 FX|7*G'0UsB)Ń&!$aI *F>JlJ(>Uh*I %1!4B1 8PRJ:@?ͮB\ ZS Y^iQ3uUrH? JS #קYUQHD: H h8>T ]]9 MT AU@BP(?@xJuM`lW? uJu u bOu X J]-J[Vaqrm"J>JU@'  $ ' ! " /+% W'P#-/a/+ Jt.  
sU!'"StA"3"'J\2q ?/&?\.?lA8lBt$N贁Nk?Mi JX6 0@b8+b9;6115 h`?Copy_rigTtk(c)k2U0&@9kM@c@os @fB Ar @EAaP0i @n}.k Al`@U HsdBe8@e@5vv@dX@1 4E[6D?FP?A?92~U3 7F@6%b59 XVX*WxW'M0"UF!DK0 xVZl !Uh\@3 A*iAA !9 T0#JtekdHD: # h4>T]]9 M qAU@|BP(?@I$D"?@?@fl6?P6 LM > D($[JuM`^?Agg2D)uVJtS  Ut(\ªHzG_>JU2$"z?3@M3#JC&A@b#zw u#bK( (+&J[=# ("a?& ?AR$ /%B=#Z1Z15 `?CopyrigTt (c)020090M0c0os0f21r01a0i0n.0 Al0 8s2e0e0v0d0+"S1$W44#-=#f3 f7&4%%/O]Fiie E9 ]FXsG3'0U&!40 FJlJ  ,@Q@Q ]( x1h2A!1(Z=#R&@6.r!0JR L^BO{ _Q0S Rf4"J:]k͐_Z 6%)$&aER!WD"H$?@b!0LRc #̫b_Ys &Hy|>?@ĤRT*?@mfyת?@6&W>>uA` ?"W 4HZlu#vprb #">ts b7>t  xڪ>U2zGz]?@9 a#>q&A@-"; (bJ)+&" $>Fk#2b#y"(6 ""b)T;'"i,"/Nk#11i#`?CopyrwigXt dwc)020090M0c0o%s0f21r0Aua0i0n.0_ Al9@ 8Us=Be@e0vO@d1@Y"i f b% 1Xk#]'0UB(650 Jl> \fUlQQC  + !pj!#A!_(:k#&@IJ0 )VPl!# _K|W$(^H鞜_ZU׮]~U3^.roZ'\l<@iE!S@ ?@@bJ0l# Q`Y1)X o`jϵZƽoo C?g`|X\QiD"H$W?@\r|3zrliMX%W$ |>?@%RT*?@#UX?@73zj?Pz :C3_7I1yq~Y@Perx|eN, R k̃t t c r4h1 pBGYkr 5%QPi'BP()?&{sB %!$QUhQjv߻?@`X,|YwVV!Yu?zOJXlQ*?S1UxPDT1\,(S# ՜SBExKVSFON2Mw>Kzh|AwӛJxXQzM Fw3XT1\| wH YYhw8F3 wM~VU;V\؏օRw(Λx0Z660| KrT Yg/>U +U _wU  H X&} 3 & CX 9 6 Ǘ w Y&(wx0&ȉ0W7DhQ0Xw*78E&w!7,TQ0)&.0J0T+0?n=0b!8HHw@ǭ7;O @wSE#7Y vu6S / S U UU5*FU !"U#$%&'G)*+,-./01234567T9:<=y ?D"H$ @~?@(g( "`*^p?(4wq #5GYk}D C:\Progam FilesMcsfUtOfe 14vsocne:\$03J\ANROT_.VSbw}q"4FXj| D C:\Progam FilesMcsftOfe\ 14vUsocnue:\$03J\OPS]_.VXSyq"4FXj|D C:\Progam FilesMcsftOfe\ 14vUsocnue:\$03J\DTLNER_.7VSdyq"4FXj|D C:\Progam FilesMcsftOfe\ 14vUsocnue:\$03JERIH_.VSdyq"4FXj|D C:\Progam FilesMcsftOfe\ 14vUsocnue:\$03J\NETL C_.7VSdyq"4FXj|D C:\Progam FilesMcsftOfe\ 14vUsocnue:\$03J\NETSY_.VVSyq"4FXj|D C:\Progam FilesMcsftOfe\ 14vUsocnue:\$03J\SERUVR_.VSPy}q"4FXj|؎D C:\Progam FilesMcsfUtOfe 14vsocne:\$03J\RKSVP_.XSV_F(Oq *eدUC!d -/ =AA0B{A sd / [> s >*#Q`8?D"H$ @~@PFDyB ? uhzn,T 4UFY,b'@F% @Fx1#{QR%UVq:#W1"#X1#Y1#Z1#\]^abc fgJ#D#iLA#jD`A#ktA#oUpqtuJ"#vA#wA"#xA#yQ#N&|_&l(&A}e&AA1m&z&As&AA(AM&[&p1T&f1\1{?&L&mE@&!!'~U&&U&&&FQ@JaTahaj"#a#q"#&q#:qgaaamz##q#q#q#&&&&Ϫйѹҹ⪹&&&ҹ# !#$*+-./1289;<=#@F¹ùĹƪֹͪ׹تٹڹ۹ ݹat~#숹##&#:# đΑ # # # #  $S#p"#Ƅ#Ƙ#Ƭ@PHZ$%#&D#'#($#)8ᆧ 3k#4ƈ#5Ɯ#6ư#7̧ܡ桼A#B&d("#D<#EPT4Z[\U^_efhWijlmsJKLNORSUVnx$"T`s #aƐ#bƤTTƸ#dPDn D#p0TD#rX#luvxy~{ƀƂƋƥƦƧƨƩDZɻʻ˪ͱλձ֪ػٻڻܪݻ㱖䱖檻绖軖걖못hec EFUF FF F@ F!F"F#F% F&F, F- F/F0F1F3* F4F: F;BUƘƙƛU!BӶ!B!ƱiZcƆQcƚQƮQcQcƻƼƾ» Żƻ!B1B1B$12OjJшlaJҩƀaJƔaJԩƨa<7V1jJ߈aJ qJ qJ4q7дƜ1䴩gz ĩq ĩƘq4ĩƬqJq7Rĩ1B$1BzJ"J$J8JLGBA aDh8q=ŝҰ xF6A'Et4V>>ÿ@ǵq"`e?!df,VtZW.KߞoYBi&eoo%z5 €måDD9i7{5IWTb1b1k2AC1$t2tl3t_A4tb5t6Tt%27t bCt0)9tU: b;b<$1>16AeabFBXb44 Jr%b(B)⠴QQJ=B&bp '/(>RuT&aak2h8b1jpo$oWgt qŒNy]Zy_Aaak2òr+$ɔl,֔r-bҴh8cTa{5Qr0\ְU%1Lְ&ųЪI'qóВ@ 1jpٴ@k2h80wlտqjp_k4x>al2X3s͵NSY[reUyen,@r&6EAտ@<켫B?~?o6AVc!}AJѱTDIqqB^{%9@eqH_AIUIk5VBcc!lխ}AeEn!4ّqƱVV.V{5";PSrvrOUQkiqtƱFDXF$ܙ.X m%Uz?l3bbOŒb/6Iq3"q 5BSI`OrOƑvp1\e=K.@-1`?=i{՗/xM}`?qBDAW1=+iy'qV DY݅.X1Е߰grU@ Qh@@hl6 @@"H$ @3G-Z&?1E puCQ^STj1F;FEBu]QPH$NBOOmo7l~??/ rqqH?KBbYeEB{MqqUK RKB@ BP(?S{MukH2oM 6#ǒ >.OUq_=[s?؊},y1,cɧUgrBslҏG:_//KqKGӱ4 T9ң e:H5e*<$>c w >t*'o9oVobotooooohM^?@6ə})KEqwm 1CUgyq"y$\/j~E7]+y#$ jy/$F]aviUW}K1UDbEBȏ?&8JhI6T?@6hW)2䛟џ+= >ajrȠc;Q*ǧf8 +.@ߨdv+1 zAִ <!ڵZ]oρńG=!3WJQϑ FAg\nSq ÿտyj9̣hE|[ipbe?@vS?PJW5~_a&_tǁH&BAG$^RDVhzxsmCV?@ߘw ߟ߾ VF0KluNN&BVFV?@'~ &\$NEB>y֭Fl* )/Mp?Uu?-:zvXj!3Q"?QcS/e/w///Oas߅(T?nN`r2N? 
98O\OnOOH&O*&EO.TO_!_3_E_Eh_z_______No o'('so2Gp!pq8ad0q CBom0qqi#½/{E%0x<@@KE& @@`4F?@2_R7?ŕ|{t`` j?t@ \eU2?kTS0q夑x=9 bf4qU'p2& ί!Eq'Orj.q\!'](;'!Ж>>ÿ@qXyWpp?xnB|v 0qq$!"× %SV4 `ٕG !!'9k![mKm!q I T/Ԕr!#1$'2' 34BI4AB5Nt6[Ԫu7h 0qCu HCeGa=9r:U 0q;0q<&>Իa?r(AFT0qJ0qqq`)gh(a0qpa aeͻT!!!rxf7JgDuTu_Q@rb!囂+[ ,hB-uBdtxCTaC]u`%uL!T)0!a)0A2@@rea A@ra8 Q_t| 23F3s͵NSYёE XE ,@²ra&t nTAA?tt~jZ?~? #d񼡦q!*[kx!"eruGa u/n(ѨaA'uS$^+eS$a^+S$^+ S$^+ES$^+ES$q^+tS$!8p'!AP*&xυPSrv2B}t_0OMPF%htזaAeu&iE߿a4e94C͙E}vr//+/w'AQ]? owT?{#CՙG^Wnwg]X xMhndj ?@FE+GeU2,DVh$4WO?@/?PX ?ډdQE^hN@]XtxeS^qO_,_q xl`?x7sQWbΐ.P8UNC>$mUo@-Bp?=i{Kޅ$1s$1c=?)%Il%1E'ADuoehOa,Hy1,Ж5eͷEUb`OooWKqP?Gӱa_~H[T9Wi]xV_^7H5e__ft<c >}3 -?QcM^κ?@6ə҇@)a ҢƏ؏ 2DR4tfE7M`*ys*5X^ef$F`e8mkefUWa-UDA$1J\Ayͯ߯cF6T@?@~P6h0BTfxҿ[:ajrrX6OQףf8Wi]ke=\S1+7I1EFCB 6QIa|~HڵZO OO(pj?OOOQ _mUl$K_﨟_Sϥ_%rg_op8oJo\onkAC$1!fx1[mߑj9̣`E|[i@pbe?@vS@5I5?PJW^q]o#b/R) @T>A=FZ6x aaQ$~1/-o///? [CV?@:H ?@7WLJV#UPv??bgw-?$VV(<?@"qۇ_PSv^_pWeQ&85GYQ!B @*?n+DT,E-DE.E/@!~/@) -?/X&7ن\hň%\׈鏊RQ 4=Xg{q?];t?@֣Q@m#y@w?@&`c]?@\u{7`#$%@ ךpU?@.ӄT}K0WBThzOqOO |7p#9(G?@9ABBQ0Z}@㋀,P_xt[QLA=#߯EU'N7䤙)A!ymQ?@?-@S8 F?@7fd?P& lHpR@8@P t?P)Oy F !ɑ¯ԯ[o"9߀z^a?@Tw%# 4P>Pb-:r<`"4S)E'^&{VS֯?@%~&ԦQU>@O]p?N"=X\,!>J\nj߲H  83'9K]o@U!/|[ߜ?@>֝^.w.j"ϡ/?M_EO J //A?(?:?L?^?p????Q"?? OO0OH"dO(//OB//5, ___oo耻6oJUemodoooooUeo!3EWi{w/.//pGAE+M^LWozC qSIEErMō`萏9@@9EMQS@@ -%?@H] ֯h?RZGA?AHAuA`&Z?juz@HAt  εt±ܲ7bK"Sε׷_u3ε'^稰!5LԾ@6Bs2Z0U"…DvA!c4Wb|iņ-[ɔeeDFZ??@̧"&JJqAu;O MO_OqOF#՚q<וC S!5\N`g-ޝ@]GkZBs1PaiALW A_,Ø-Instace i"mѴ󘂠fԔXGU'2Aq7u1ѿuъ %B+1ϯ)f}-U@y~M >-G@?)UesQc u䫳Ѹ Su:AOaѳ,w9җ6HZl~ϐϢϴ߰)i{ߍߟ߱CUgA?w?OcO?O+?OO3OP/ܢDer,T'9K]oG|$ Z2/Ł !D bŁNaM5:3E?m1Y1e5%فL&d@@gl@@`4F?@2R7?jt%uy{&iٹ N/ Ѿa/ [;8$=L).Z8 ek!'Zv2,w% 8-\%0Oڢ.UF0hՅ0v>>ÿ@UτmS ΆuFXjqX__Z8Z89u_0&ˑ͟md21aG+`Z8I20T11ڢ:1:3qR!!b3$ 4Y25b6r7Ȥ QCdդb8(9Vc!: Q; fmQ<!>#@"!QFJBeFaJBdQb)~_rqѱQbRJQqUO LzLpdi1!jw%HT211\Ə9gq׼]yL_atHw"qR+b,Ă-Y2;;b,TaP\p%uj `21Ŷ@.i1@m1k%w%!\@1фqU}8e\_v2v3F3s͵NSYM}`l Y5,@"(.e; Knh׿3 QAQAetD!jZ?~ ?}o!ˡ q&Ӕetq@G%Vye?klÀw%qUeˡe Y5e eFaCФ21HGHM5mSTSPrvcm1j Ueq ketooh6i?Qli9v-yAzv3.5{@2g}?_9HÇ$p]??$9%T{#C'qU%Fwg(e%Mhndj ?@FE+tr8ÁU2,Dʖ$4WO?@/?P ?Nk"(.hN@ǽ(e/*(!e:"qia_){//; xlq0x7>8Q>b.p.508U;@s>=E1qUQ5_N@-@=i{իO5eyeVdCC]mթe$2LI5 Eň,y1,0vS#2zGzt?aye-(%c:"o[/G71OCO7Kq_0Gӱc/;?T9ҷ9-&U?.7H5Ǯe>?Ob6aD<cg >]_._K_W_i_{____\M^?@6)Ɗ_0⮅xoo*i+P:oLo^opoooooooxDpOub6~E7q0*yCwz;88>՝ub6/$Fq0u56uWEfUW}_0-UDӖe7BS!#5GYk}\F6T?@ 6hW䞵ŧҏ*o,>Pbt6:ajrԕ8P+Uf89=Ͽeaد%7CUc{zڵZ// 0ńx%/7/I/km/Uڴ?F//?Jrg1?g?p ???I ]&B ߸4Xj$j9̣q0E|[ipbe?@vS^g?PJWڴ׸=243fk~ط%ŏ@׸Er_&x X!94k! O+cCV?@:H ?@7WLJ65j>53@?G Q/76֟6(<?@"qR&JRJRQa o 8?",v\hx?%\xݓڵA(4"1o(q?];t?@mom#i@w?@&`c]?@\u{Pȿڶ>ۮ@ ךpU?@.ӄT}qr 2DV&?zJ?\?|7P#9(G?@9ABBoZKՎ VӋ?:"?? OOD(ϙoN7OԤ)AymQ?@?- S8 F?@7fםd?P& lH:t@8@脇0 t?P)Oy޾ F DϗϽh 6cDo"9pz^a?@4w% 0o}u4۱1#߳2Vwl*DVWo xN&EFS֯?@%~&wpkl?>ؿT@O]~PN"ܳ8 ف1C`l~kϏ?j߲? 83Ÿ);$ 1CE寛l[?@>֝wj"/8?MOd6*O;W (݆n]3SD26HZl˟"au#P.}̥ D7pLx)U@r?@,?@HD0J?@VGՒ՛ .@l}Ϗϡau4".@ﭯdѯ㯚I$D"fOl6x\BrxSBTf2DVh/'uG);M_߆G,`0PZ~g_?@0]q?@ _ 55u??,?>?P?b?t???7~"7~p:@6M7qO8C?? 
Oנ,O>O[OgOyOOНOOOpZ?@_78M-Ic __1_JO_a_QTa_______n@ 8^J58As4oFoXoKuooa@ oooooo ^|>?@(=`6pD6nԬ???-,?f]$r?QR[1YVWXYZi_zTX>8e_A/ZAYAB ʑ$ٽߑBvV8іqug dk0ՇdQkdQkʕd1k$dQkd>kdQkbdQy{JdQy{ɵdQy{|BgQy{sedQyw+'ʑ6/6RзIRGut`N 6PZ|0ՈBƸ!tYMI_F iyDKa/ѵ) Ŕ@dʑO'0UxH!2A&AK`6 OzA8O _6j|H jDB6OG@!3Hꢒ)H@ė!kDQc@@QtK`` g;6s?tОË((뙻#2Z1g p߶ a2!`r26`!|3ր2$6H+Vvmy<< m1CUgy҉40(:L^p$6HZl~//V/h/z////// ??.?@7ߨ6r3?0?դ?/???O"O4OFOXOjO|OOOOOOOOO?/B_T]h_z_^]}~eQ^Y\х䇴_TX{ oo1oCoUogoyo^QotQoxQo|QoQQ7QXy\њQQh14OXjP2 EY߿5G,SX[ ҹ')Mqh1ѫ끾_ɿK{,< w=0 bؓqؕsMq%:UT`r+a/i7=Qߘ}r\bw2UjP?.ӭ4ba5._f@#5n5"` 0ʐ>׵IZb P4#?ώRA^!ǷQ.gQ%NZl`~Ư@1UT@Vj?@҄BP(% 2D hzPW˿ݿe\nπ7*U 5U>TF3JAeVOu3$F3K/]/"44uB"uP////6??(?@AQgOzOO1OOO,_PI$D"efl6%s6OHO_//+/=/ ooa/goyoo///ooofOV Fq4FX(vw 1uUӣ,`0pZ~g_?@0]q?@ W$6d5p?ᙃ@{43AfoӏSWȟڟ'9K]o~"7-:@dM7qO XÙǯٯ!3EWiOXKa¿Կ Ͻ|!Z?@_7‘M-^pςϔϦ$ߊY@R{Mr~ߐߴ߽|u@ ȑ?^J5-?Qcu H5Z7'9K]o}^|>?@(=ГpRV&ɯd2?0485A5787|&xQe_O/'oYKcs!Uo .odU{a9Gu\ntq$=D1$Q/-8!uu4vw,aA8! !M7U@+JR @@(?ʏ@@d=5@wT1?&VQts-8R!RAӭq8"Xʵ]U40@@JE& @) !p!Sg0@z3MMbSoeo "p"1.dfooneq (uS@2?@qp~+wU715L0"ac}1{vzw Br8"d}58 tUr?YYYF s #8"] "a@J0Q`sAnpect]BWpigt~EU0v8!ك@EKlDBuaAO`b# H*+Qc3awbuNq5qRY@updl@n` v*uFqq``Pptez@nFw*EQ338#OL$BA !srUr8&^q1%!8!8!! ---%"2T]0`57T3R]0|@69P8b6Xނ !MQ!$+%:L11$18"ɤ"֤Ur}2 a a2 A2(L@:N?'%3 %3BlUu3PRC@}5VC5 K#A˻5ӿpX}"(3!T8! a a8"(sAT8!8 DCUp` w/?2Bz8-P,d*FU!Y}X5>QH="H`5T0 Q Q8"c"pUr}}2؊/ҥ2AڤAe@2ܾ0MR@U@-ԅ1Y(9I"Ҷ"$$%->PTvk-}5 /A5[mA5 1 HZld~m4Uq"$UuI !*8".@9"(1 q3103\5G%TUpv%(}X<#Ad}%?jOT\w?%_ A:MQQ#U/# h%‘Uq}5 0r4(4!q^#YȎ$AV_q<(Uu%V?݇(}5-&NSAqOTq_d?7_I_[_m____t|> 8`9rP(ko9?@fu#XE_`q oi6qE:oLcpMhFoOONooroOomB?@?mEl+=OܟsuqsVVVR%7ImoǏُ!3EWi{cUvǟ_*/3EWi ïկ 4/>tB.M(hz¿Կ .RdvψJ^P{HUd@ ߃/ASew߉߭߿acυϩmxKM_qߓߧ߹57I[Uy1CUgyaG a///A/S/e/w//%I/. @)/'EiW/#?^3?E7rS?w???????BT@ ?@ Z%@@1W@a^<HW%1OC@rJO\I<& F%+JR @@(?ʏG_H&O//CqE&6?%? _2_2N#@ BWtY_k_}__UrS666Y\oo,o>oPobos\{jIlAb il#rOooo 0BTfxsrsfBUo8J\noڏ쏀"4FXj@?&d2np娟̟ޟ&8J\noȯڣ?q+4!3gX A?X+S Koɿۿ#5GϓۯC5G}ſ׿J-1CugyϋF:3'9]o! r%7r]oe)?PMN&bp<_G0! @0~͑!=je0Yĉ3!()5- F%$wn?@q_c?w?PixRt2``͑ Ϝ)*QtEФL)kv3Q1̅QcIo;"!'0WFF7L3UXBKQcŁMu y_ch""!1u15fÿ@iF[r؉%?$ȉ&Jg xUCB/y<<ff b;0O@Ȍd#S?``K F@lQlBVD` Q?2EX@>-48eHo?ljo֠i;̂)=D#*JD +WDRyaya3aa~D~ae76/bu\FM%{ uLFRg1{ 2ao`#UA$"A \ hN\U_;S1ׄmmF3s͵NSYF+`=\c?,@&{[@_ԏ.{d/t/nJ`2 8oJo\onoh8ShUpm@u;o=o;e3E;@E@ޒME#aa:R0vò1R~1~13]]BAA|7ф p8 ބE[[F:S;`mz>,{Q#}UU.FcŬVݐxOOq5OAOSOeOwOOOOtڍqc?@QqG?@iy؀wO __/_oS_e_z_____9oKo]ooo&oooooRooo"{^:L^pޏpS7?@)@cF bVBΏfi$6HZl~9cꑫ#5GY" *Ђnဨɯۯȣ #oW (*<]6S"ڿ17oϐϢϴ,q + Y=Oc@OS=|Av?@)jNJ%s(zL&=ub~5V#H-5? @G xaC8MP>&Vjõx&|8vJL,f?? G!:2['%B/T*d7Ir!iuѩٚAe!PヮF2$?@m)İ?@9yT UU,>PbAjAI /./6O@Z/ FVN?V l/ OUojWOiO{OOI e@2AWOOOOe__AQ/_A_S_1ݧ_U@b_9r x(j?@tGFgPB,8CEwxI5 4$>D{C?h5?ez2%H`=&4'U>N`0O6ona4$>!~?(`rO̿޿"Ig!J8J\e_w___[2,?@O'^t?@ZOD?@ t%}%(%k(8g/ v./!~rh "YR?$WVP&8\n0Gq/ /2/D/[GXoN$v&]O3(5S//////??(?:?/[Z?!~l(Or3n???????OO&O8OJOlahOzOOOOOOOOЈ[L&OP²Bv&VV;YsM___q________|o~p|OЮ9oKo]oooooooooooj-Vq$MYk}xm@ 5|t(m,'#\q&(:L^pƏA0H&8J\nkџ  20DVhy>Ft(Mg7qs(WӤ̯ޯ&8J\Q|A? 
< ʿܿ$6H@Zl~ŬъϜ Ů,A1Ѯ 4(U@$#?@qc?q߃ߕB҄NC㌀y<<A~3 ?}9bb[z㜰;S9u^`` FlwCoor[zۜ` 2.K4;q~ ~ 38M) UK珵ZV1C@OS\^Av?@)j֓(".]ti'A5I B?@J2Г$oq?>?@m6fI/Bˑ128V ҢC_7!ܤ'Iz~!-A>5EMQSē(>2V`ȓ(!oi'?Q{(!2 Pi'"%P/b*AG7 !m$KHBT;uo{"%ɝ4ǣAIE /t)2@@$=IO?@~E)01e &c2 [ts `^Q m5;j7?t@YcߦfeUgƖfeog4dGf`js%ctEQ'7 mB7!Gf|f vevFr YdM@igEQ/ffÿ@iFruC?tt2G&o;A5PjD bPw"4FXgp$Aĉ EH:KI;X!nRd@vbAACbQ] @xN#SHgNYǠ%H5E\!f?@?Ǹ1nH6^>HEKr_?@xR̍ѐ4au@"p ?@ F?@ъ޸?P{ 8C_7uAuEG?ߣGNE&0w7ƍH&%1n3KþHI%E6X,%An?gOғ$IEwě7GI_P( HV@!`XVHEOJne5&BE%19KIOOX{SA ?@?q6BMUQA^3BP(КO?@i|@&q]QgU~@|4/=@f?]h!΂??/////J.8FSA8"4WC/=I?U=g}5kA[AN_I?< 8L\\82NOUnH"zkwUjP.ÐL#?cUdaۀp_zu}`` Fg l|pCElEzp` -AUqqpp A&8JT\nm BI&]"bObPzl#}̾MXuᗬc U'(IQ4$D QUGOYKXRџh4O_^\V9U @sU)KC΢].]oP_m%h Zy_o2' ڀh!gӏ1 2DJYkʟܟ2(?ʏ0`b?@Q`L^pJϦPcͯ߯ ¿yӿ ϥ-?Qcu˱ϟ@1N\GQG?@dCn]&Qnn.]onY]^~?2!?SewIq=Q4@RdTa\A41 ITe8'gU@μ`Ar XSXU .F(2zG/z?/Bev Xh%n (}ESq-?Qc//wg9) M$\MNqNql/~//UQQ//?2-Z%&v!f?@Ƹ1n/>(?CvKdmծw? @aV/=aV7>h% 准BP(Q؏d\JߥRTڗߔ6BӃQf?? U4jUwuz???O#O5OGOYOkOaOOQOOOd!T!h)eU7_?@׿`8H0w0%:\e___S=%CO"_ćy\7ǟ@!^ߏ=^)"DnA0gT?1C 0D2WI[msҡү䯀ah!ph--8U@ph fp8?@$=X~kPD,8C$ם$؜iGPR܀ @ 6?@x<ŠW[ ƅ-dsn)EoWobqbbuAoooof%o"+!9)q$J'qy)%?/~,>PGmߑOtt߽T!߀OO_"_QF_ _XzM2qX#ŀ____oo%o7oASmgoooo3r7By!̴ !uᅨu}ϡOi`gFSuf*ҵ(֏L ‡]mOjZї_߅ę a}_I%΀%O۟,5Pbt"u (ia5%EM/F46 t nh?@:~#?@!U_j:KT///#! =??.:UD43fyPbt@>M`HK݃L㺿̿޿&8J\nπϒ϶ ў?91/4FXj|ߎߠ߲ߖx(Q'HTfx@y$@8tXK} #}P_:#+=Oas'96)Wўx#5Guuybt` /Qa?@2{jp*KW7ٯ]SS/e/w////////??+?=?O?a?s??Y?ўT>pƔ???OO'O9O@KO]OoOOOzOOA OOO X",W,qzyQ"/YOQ;QP_~~?@)?ʏ. …@˅___RLXf}f;U-sPj9oKo]bvpRbbbPzVwwavSQup`` FJplh`C]oh`orqwk` ڟ2땀ߴݴ{!dzy; C\js$×eiyd8| oE[" pQ p oon%P( o ?Z%+v!f?@Ƹ1nPbuc&U,uKr?+@M²̲tLX0qumQ?@B @ @ ?@U)8V C_7`e,;U,uG;},,u&Ӏv7P^FKҾPӡ4͆z $&QӀ`Oғ$Pۡ(z=ԇ&ߥ  K]oy{|3SNp,8F X\n߽ܧU@xw휏5_5>[obA!xͺ}ASSqIx a%U7=Q蓩} %Ux!zGz?En涧hncUUv{[|s,`w~y<<p?.Re b#2%PA_,`u`` %F l@1l "$` .2%p@ph-D!'9K]/B/T%Z" /0w$a,/a%S^1 eM<}?<: @ (~-?hѩq_k}ߏߡ߳ 1&d2?@Vj?@҄pxo#5GYk@}&n/?/$ @?4 DVhz;( a -?Q ||h4WFxExER//,/UHs (\/n/'S@/? ?2?S2b?8 A????Dwk(р!O3O\A JO\OnO7x餉eMYBZOU@{?@,?@HD0J?@VG+uu_b_t_(сooq+Ta#.@Q{QEw B@Ҿ?@?<ӃڇP,W@TBk1 U'…˅ …˄SqBW:!TS__%Wbn?U0buxo o2oDo9{bRaU@oooa1(т?Hq6BTOOOOKI$D"Hf'l6xt(_:_L_R_____o HoZo5{oGYk}ˢΐӯ=x(0߱g}"еEU@,ߍ`0hZ~g_?@0]q?@ ŀ'嗍?:!е43fy`)ۿ0Xj~ߐߴ~"7:@@ M7OqO0BTfxE:LCOasJjZ?@_7?M-!3EWi{vnh! /ASeJ@ ^J50]3//&/8/J/\/n/e//!/////? ?I^|>?@(= ?p$&ňvbn/@$ hQhQCZlc`-[dUJ**U****Q"?8**e(nehqQ-Eq*WVf L1@A7Mu4!;uU4M;-4!K u4!KE4<1K2u4H1KZ41K+41Ke41K 4 1K71K54!Gkaga qvao`@vmeVeʉ4R'utO0~va4ІBQ_Z-_VBPZF%)iGo Mu 1ed; qW_'0Ubň*!f,DGMq V8!(s 1MudMq oood- om ja!ǏY1U17!ψ!3rgf/Hy b W Xb$R A!BTAq}ߏ5߻ UU@|h4F ԁ5ma&HETjEk (S@R)1AV ^l~@(;ARASQ qdA`orjdE܋YfU@?@,?@HD0J?@VG+7E@Eg^Qv??1???$1hQ'{]19A1RO{?@Љ<4&jTrWP,W'@4!Dh'ZUcU ZUcTCoA5{OX$pkCpqs//0""uP////ÚBVB`)?;?M?(kƁOOA1OOO-?QB_uI$D"fOl6 [OmO_//,/>/P/b/.o@o/~ooo///q?oou:ё0Blq Yk}(w]1y Au5Ut,ߍ`0Z~g_?@0]q?@ %7IcH5T?e43f)oD:ђ)(:L^p~"7:@ˉM7qOȨ}Ùگ"4FXj|]1ۿ /A|jZ?@_7¶M-9ϧϹ@%7I߯ewߡrߣߵ|@ ȶ^JO5ȨRdvm5!3]L^p}^|>?@(=$p~Pwew1 QEL#Ba)cR1RrvvHzr5 Ur?AF 1#"]2"a@JY@8Q`mAnecutBW"ig:MtEav!٘@RC\YBI&Y@bCB@NFO!vXaXba5UR@ud@n2H9vOYbF``ԏPt&e@n8O %Q3:3#OL$ՒQ5!sÒrÒzr8r} (%1!!10=j--8"2T0pW57i3g0@W697P8w6m5!rQ 14 ;8l^:111"Iޤ5"zr2 "EaEa2$f,:28ѱa@-L`0@Р<5%HW2XXzuHPb}@5HfX5 K8f}:5pX75(3!T!/a/a"8AT-! 
YC P` }/'?"B8APَ @K>SQ]R7]u5TY@0Q0Q"x5"օzrԢ2؟ "T2Aڹf@:20rR@jԺ@R1Y89^7綕79#D$5%0BSeyvB5 % 2DV5pf:5"4F]o "o~m4 Q,7zu ^5!"S@5"8΁01c 313qa55%T P%(X<3Q}%?jOT\7w?J_QɡrQQ3U ?3}%q Q5d@E 8!Qn#Cn$Af,0BY!D1zu%Vz?685I=%6#CQ_*W18_?\_n______T)qP$=IO?@G6%@5?IˊGh|E@o(`1/oAiaFkE@@4KҾd!eFoO _scDeKV7_J_RsC@+ >Pbtu1s fg2s{P"4FXjbPoϏ);M_q1Ó?)bןz 1CUgyӯ -?z^WC(o?~ƿؿ 2DhzόϞK_5;&՞@ @C?d دDVhzߌߞ߰ "vϚ+߾$6Zl`ߨoDVhJ .R4Fj|//0/1J/\/ 1///////I ?>?M6?rsF5:(a?O&$~?@ ;?$?@~?@%#?Prt` 6v!2tj,2 a885'=85A7dg##)vQŅ+nł!15f6ł14 U@:PfpqU=91gB!amhOxCy<<A++@9'ŀbfgb`"tOm7B"ACySd `` F`lqlbd` jQ_2e}@cCxU-U?@UG?@b4ɓK*UU6ơ o.@CcłIqIqO\d11rRdXDQDDQ"Td!/3X>!ekZV?@'IOғoo7k <)~e|> }>]owexVp@ohX~#'E~.r/AkP( "! o%%&o_!-(11HX)\"*iBv"b,ג.f8_`WA " #F3s͵NSOYFPD~ "e2226`opM 6TR`_l³ `aѠ!3?%FJ/!!QV0UV0V0V0e"0Χԭ""/P[@H&챨!Q5˰Հ"a`CP+% PC\RudV0B7A+ ɓL%m=(1ZÔ%NPTW^`RK3@A- i`3PӢ̹XXG '2Kqм)F%!De m0'qV0 Ӣ+=Iw !0W]@ohWPD5Wsѫ_QF4? 2@'#$"4)1oV-Եv%!Ѥ%+鰕)_{/?14 7d`1CU&%%11/+!"a1j!7AYga[GQ@@1S?@O$,qٻDh8{Z rGg?@;mx>@ۿ9h7 E(>5~%?@V?@4W e`/HC-Z?@F?@P?L߯Q/֕ UI?@Y?@섰a?@@\ K/"U@\n?@:?@3pDٳx?P+k= 2`!U5rP%3> C4?@4?Pu>5 w 5mY*?@uӧ?@?@* ?P% u1E"euf $0l?@|/"/M͐!@ŧ78E [13nٶ7g+O+6Y.(v0Tߕ+L!@k zO~EAr 0K(0F0zkdOjpT꞊j\pmB).~r_'6?@v9"0'K ;(f_)~C;Wpb]/?@J_5@jlJ?@Ho co_?QE o ŀ2ţt*(Xy/<<'"sֽքy?;q3Su`` FUХl CrzŐ` T2e߇݇őbbz(Y=K 7=3$EYQ`thud/ȭMd e*uӧ?@l0| Oh&i?@:J@?@* ?P$ ?1Eh&i\9o C?@Q( BPEP >5 w h*e UI Y?@3 Dٳx?Ptk= `h9H -Z?@)${P\?;X}\Oh9~%?@V@F0L߯_h94Z rGg?@4W eh_X"U@\>@I(16o蝩 /AS?@AjЁwQu0o{sŚ'a0BTfx,&9K]oCf֩?@1Qd6(B@˵0,/kP]?@8{#΅h\* YP-hP~E ?@_pLViwM-/6%Kx?@ oi?@PuȏT/i-kbܓk"uTy f =yǃ?˯2?6l_0'1㸠Z=ص ɇllp?5Jx@&y_ ~@j{ Y=6w+4 1?@NȸO>A{?@l!%\@_^ nO69+2 jd?@oGy=o =M2VϽO69z^r`Zf5U?@F/`_`f Pǎ _Vd6ENju/-gPRrg n^˞h\9f]OK4n룠?@{׀ BiϪ_6Il) uRuF?@B5r#`n$Xp _6;KG6P} yFa%?@O?"HoV@7?d>@M'0hooaooo);M@`0 \0<~ @9{vlȋ *~d2L&G0\sVR\s ws` a$6HZl~ "q(R~@$uNH?@~HYbאztHt0BTfxI}^`wo" Y?@]l o 5?yh Ѭ:9/m?@ڼtɤ_i?@0>mĭ/ZhE" ߣXE]K2yx#.b+?@Q Oi0#d 0K&?/eG0gk̛L "Cg|pv?5Y,0xd> 9{eD=90,"f~PXetpv!Jx8Ob> W?@8+0ybOѦH c $j #?@?BY0ϱOѦ>j @-}JӦ?@զ.k]7a%_T-w< `?@*O7N$ٌZ]l : gk?@۫?Ն|^tߞ_~Ѧ_g\+P˚yP0ŵ?@!tr_ѦOt(?@.EF?@A ԝ֦i"abOtOf.5x[@?O8NpO_JN.r4_F_SǾ;~-__ge@9h E(>~%?@V?@4W e0EH߸H-Z?@F?@PL߯Ve UI?@Y?@a?@@\ KU\sQ?@:?@3@Dٳx?P+k= 2`ZU B *CC4?@4?Pu>5 w UmY*?@uӧ?@?@* ?P% u1Euuf$l?@| "/M͐&ŧ78E [13nٶ7g00Y.(vT+L&ɇk  EAr K(F0z@kd  j@Tja@mB).~r/, 6?@v9''K;(k/a H;W@b]/?@J/@jlJ?@H ?¤Ğa զ?dDVJ#?ߨߺwa4Dr/v!jP''C¦¦Iס?Sru_`` FZliCrrz`k` I_2ZU|Q|QaqbbbP1zT=O «R[$"OTREmE¤4zk~H 5o/oAoSoleoo?@#rooheooodoVhx<hA2DVhzgB᫏Ϗ -j{C^`мΟ?kp?@'IOғ#51K]oЯ{@Lϯo4ϳ1`?*<N(G}(Mqt?@Q( B P >5 w 8  UIY?@3Dٳx?Ptk= `8 H-Z?@)${0\䵇;X}a8 ~%?@VFL߯/8~ 4Z rGg?@4W em/('@\>@I(1;?Qÿ1"4FXϚ?@?Aj?wQutƂCşϱ' /A2_ew߽߭ߗ_ 0[BTfxtCf֩?@?1Qd(B@˺,/k ]?@8{#Ί8\*)Uh ~E ?@_pLVnpwM2%Kx?@ oi?@P~zT/9kb~ܓk"puTy k?=y?7~l_'1pZ=غɇll@ ߞJx&yp_ j㡀) 6w+4 1?@jNȽ$CA{?@l!%\_^ s9+2%jd?@oGy=t=M2[9z^w0Zf5U?@F/`_`k?P/~d6ENjz/-ُg Rrln_^8\>~~k-OK4pn룠?@{׀Bi/Il.uRuF?@B5r(0n$Xp /;KG; } yFa%?@O"M?n@7?d>@?M58??1a??? OO.O@ORO@`0 a<#@>FqȐOOOOOOO_ _M_D_V_h_z_____S__oo&e)FBH 6I Txy X0*Ozp`ǝ?!N\ ?O/(S ^/6+: A9e1f.]~F[Pq?R9"ͦpZݯ/[/KKbĶv./R@ %{8=ޱԿő0 d2L&LaCpV㗇RމaC|CϥϷ#5._Yk}߱ߡ_ܛ1 [Wő~@$uܠNMb?@~H^cgcוJ DYE5GYk}I}^0wo" Y?@]qo5?~8 Ѭ:>m?@ڼtɤ_i?@0>mIJZhE"ߣXpE]Kܠ2yx# b+?@Q Oiܠ0#dP+/eLgk̛Lܠ "Cg@{Y,xd>p 9ÀeDy 9,"f~ Xey@v!Jx=Ïb> W?@8+y񨌟gH h$j #?@?B^۟>j -}JӦ?@զ.k]7a*/-w<`?@*O7N)}_-l :p gk?@۫p?Ն|^y/~_g\+ ˚~ 0ŵ?@!tr/Ot(?@.EF?@A Ԣ֦i"fA?e~@|^f Z t0 d2h粵8A\q<ԦD14??#54 Glea1iIMQ޲rd ;?@8.G?@?@eQts-8GR`A)qBRx@ ?@ Z%o@)bp1@@z3pQ|ߔapE_`.ߠ4e5Deþ0#sxs?@ qJHp\ kUx\3R1BR`P2U?1F BP(?1%C]bzUJQB`6Qn(ectVRW,ig0t\UQ]t_SR]P_TR]aoNQ5 U%RVPu(dJP!nP4j|y_,S)Ry?@ж-l ?@F:\]?@?Rz! 
ن 2С9Kk)@@$='IO0|ԠNUƂAT!}?@H4GYk}桡***P,>PbtbP!3EWi{IG);M_q&//%/7/I.j4Fa/M*0L&`-//////??&?8?J?n?@??6?.??OP D @Ei2*ONO`OrOOOOOOOO__&_,_?_5O_ O?.O@O"odOvOOjoOOOoO__y_ N_`_r_T___o_o8o_\o>Poto(:Tf}Ÿԟ(㳏w@|}0?Sצ$"+b# +0BTfx,>5Wfo,oPbt x< ptV+=Oas//'/8b/t//P,qhC?ۦ/r///??(?:?L?^?p????ooonoOR/$O/HO/\///O/?g?O!6m>! 0BTfx|d$f |~%7Bfxҟ~fl@@;μ3@@P4[¿@H?ΥM F@ 29K+5@i{Əԏϯ )=@ UQQHZl~ (ѿ+=Oasϯϗϩѯ'9K]o߁ߥ߷ݻT'9K]o#5GYvnq]P}"J3՗'9K]ρyPb|oBB?>::^p//$/6/OOOO///2?4Fz?|?I/O/0/B/dOf/x///~E/,9?O?/2?__(_L_^_p______ O_mo&oMLo^opoooooTonQC 4LV=X_1+^Ni|O8 b"eR5׵uG2O_]d0'wt @@ P@@ڋe?d?P DyoQ|=u`gPof=Cwwtq `̏@utDNrjۛY(@uIwЙ_ы@ucwEnrPH wL˵2s0TTVAɷ[4T͙I[m DqA >CJEۅlBwdi&5_G_YRf_M^&иCT8TUVS֯?@%~& Կ>Nk"TGXaT)/do5Te^Addq=9LɟiQZV@@̺+@@h@@_3 w q-qMvssfss 21 sKAv_`s`+`9`Gl`HW`Xjfj wj;1xvSOyPc3zjw1cPUU#R7'KK_C ssT'%  ')9O 7ccTwH7zz?ΊM-9iFA3ru$es#񰠂agӼ&A[XzbA1A t &*2ahdݡ$DZ&QuA0Gz"DNuDNDNeDGDd[Dd[DiAHN3FNDqAHNuAHN@AHd['DN@}AH!d[HNH1!d[D_d[&Hsd[Dd[AHd[Hd[AH[ݡHd[Ah[Dd[@Ah d[IEDu~[hv~h;[AAhO[=Ahc[ҥDw[ǴE[PEDWMpv[!vBvOAv\AcviQvv?QvM񰐑cvQvQđa1 vё&aޑha!av1!c_vsv, v9sscvSv`vm 6vWcvO vġcTԍ%G?mG QAuAA4@?<gUiA4mHliAUT]I͛1RT*J@@x^?@;~ׄ?@}SBy躵Q?ts-8R2J1AnA2ʘL&d@@gl@)ppq.fC6@@z3"06HpJ<ǢuϚ/8E䱳yெ~5?@qάԵ4,?bS^R^^҅E`Uҕ?YYYF BP(?]a@JA`Co}nectwrW @igtz"࣭Ajց1ف '@lZYAdb2z @b#c +H4FaNAU5Rudnڬ +FAT`UPteyn֕Ab 5rUQA33?7 2$A/fӑ{ґ8҉1U?:,_s@au*1)IT0c577350I6U98E6;/qAqDjW'*H/mA2/8QR++rRRaaR&a&a2a/@`4F?n /ܠRb?%8Pֲ&QUԶ&U̅eU(qUpp`R/ A/12Tς/1f]Zu8` w?BQU=LyZ@ !rQA /@R@u /a942F4/S48`4QRm4rR|2RAڇ4RЖ22ܡ4Rݮ4b޻4b$4b4b4r9,fEcE?1/?Gm!O3O7NOOrOOQUOOOOrUO__$_>_P_b_t_c______oou.o@oRoPm48/R8Շ //摪/oi$ W!`3?qB*q/T@nf8Yq(`PbtĜfl@@;μ3@@9!·@?Կߟ/+awvϧЯ񓧿@i)  2߿Vb ߗߩ'cK]ρ#5GYk}ֻ#5GY} ~%@YOO2!kO}L/OpoOOOoooZ$6HZl~_}zm$6HZlve{?#]ߏV~R+Q6Zl~Ɵt|>?@Ķ-l ?@_Qx?@bAjK&Q-?o./SM?@Em%ޯDãYEXr?@ ? 7Z1dvncuSvփb?@l2˟ݟ~%a2EUgFa{ïկ0 b9K]oɿۿ#5GaQew寛ϭϿ߇=O߀as߅ߗߩ߻@? (ʵԵ/z /ASew+fxB߮,>Pbth?z??N?*(G k} /1CUl//X?-2?QV?u7????? OO.O@OROdOvOOOsbaOO4=aOO_#_5_G_Y_iw_^6a!\1_6f?G0J略_G\Gad b_A'o9oWpaosoo|?oWa|>?@Ķ-l ?7t?@~5U`@S׻?Prt@`a ggiRttR&UoẀUWވBsa0m`ㅃ!VvǰHfDztU!p!%de=@Yr_Ae#̡دoePj{cZYǰJXCbP{LS^t`` Fl l` 2@XsHb^BgXUH_@Ua(?@b4ɓKU$YV'BP]/BOX?T51uSQDz@OۄWbZQZQpphRqqBT/47SI^@8kZV?@?'IOғ@RdWe8OP<䖟8%SFP}>pܟf8xՐG+k U @ShzGĞ.r 8, "qQ 8%d_AA-WbV1V1+0!!)۴B*贙s,# 0$_!óa"#F3s͵NSYF 1 [ P ̡iQ;>bDbJf %тmWaѵ&b޵@Wb)@_A[7U@UOS_A$===e0GS UO.wp~`H&kw@wWewt!/w0wV1&aa1@a=sԪ% CruAdmDaC%NwTW݀RK`Bu@im&aS/KwHXgX'2qRf%!)t(K x%vqs O:Ba?s?=H(!WFP6'x%7Vs  We6Pk8 RgojR //#e vkDH!^B|IDaߞMSsEQDův /Yb=_O_rQ/7dL&a7a4sUSnE7a7aKA'dDap`AaQHWagXs!Q@1S?@O$,qts8T{Z?@ rGg?@;mx>@9hYW E(>IHWe\E~%?@V?@4W eFOIH/\EHğ-Z?@F?@PL߯OIHU\E UI?@Y?@섰a?@@\ K$_IHu\E\?@:?@3!Dٳx?P+k= 2`X\E⇒#S@C4?@4?Pu>5 w X\EmY*?@u?@?@* ?P% u?1EX\Euf~P$N`l?@|s_X\E"/M`ŧ78E~J0 [13n80ٶ7gۯoX6nY.(v`T+L`k oXpYEAr `K(`F0z>A0kd@0NJG`Yj!T꞊jېmB)~t@.~rXY6?@v9P'KWP;(IY;Wb]/?@J:XE\E9N@jlJ?@HဉTv$.%% 5??-hzI??'?9?K?]<t?v'+V7ֆHjP'.?a Su_`` Fl0iCzyrzۖk` ȯ2٥DbbbP1zAei8=(  (.5A$7juKO%O*Ǜ7Tҿ6!?@Qr*!";>!>!ZDaK!jqqW5ȓ_qȏڏ"b4!Щu*_ruAuX_dY,~_hoi=_loeoDoH^uӧ?@l8`|ǑoAci4?@:`?@* ?P$ 1EK!ci\@s?@Q( BpeuP >5 w ge UIKPY?@39PDٳx?Ptk= `(AciHě@-Z?@)${\䵇;X}o,Aci~%?@VpFv`L߯0A~ci4Zp@ rGg?@4W ek}gejN@\>@I(1Һ!-?ky_k}?@?Aj9&?wQu~`/0/B/T/f.'x///////?@?M'9eYewџ@`0 P<9P#@!3EWi{̯ïկ /AS ]oS;Ϳ߿'9K,T'n-k}G2,}X < ffU:Ϗ Z[;9C5 `϶Ou4sDeSF_jod2L&PEV㗇R ?P/$/6/H/Z/l/~///////0? ?D<W?i?{??5UX~@$u[N̲?@~oHݳh4TŴ?????O O2ODOVEI}^wonO f" Y?@]@o@5? 
Ѭ:O fm?@ڼtxɤ_i?@0>m1 _ fZhE"-PߣXdE]K[2yx#(e^b+?@Q Oi[0#d̏Pߪ_ f/ePgk̛L[ " Cg_ eY,,`xd>v 9eDBSm9,`,"f~UXev!JxoBb> W?@8+,`y o fH @$j #?@?BPZ5 f>j Vp-}JӦ?@զ.k]7a f-w}l : gk?@۫vК?Ն|^"~ f_g\+C˚p0ŵ?@!trq fOt(?@.EF?@A !֦i" f@|^f Z d2h4GBj)&;fQMY7-ya4aenπȝ] ;g"f&#jtm` g?"tz#&%-%'8Ka ,n#.#1*@6u]N<#m4Fcue2B Y]sO+@PfU\^y<<rG ܠ6&O ?ux0ίN7Uõ.@UG>b4ɓK^ST,`gSΗBP;2PgE(dOppjȍ_;bH;!TSa:1:1]NdmAOd2Q6dRCd;⠡*"T]d/Se$SX.7lbmyekZV?@'IOғoo7yk<ooye}>/yexpolh;K~# *%~.ryk@gĖ\2~N__S*!-脒2,+($;)*"*) 6(8P0@3FR4c5S_,bІF3sNSY#FB;Ra*%"RaR a(T!|226m]_ MˏPz],2*%`ax%% * 4 e;0;;2/׸@&0YHVUEûŸIû5MûոQûո1÷@W끊UPCRud!UNCPTW`RK0A\iPՌbHX7U'2KqSмXFU!4ibK ֘UbqaF!F!ՐbHi!|YpU 欓WјUx&? h[PU]ؘ5D\sё@xE_WTK:2I7I?[::2M_UñYañ/F!Y[UaCuR)D_{~//1Cux4Tx1uCUUx1x1@+*!av]R$QbH>!X~@1S?@O$,qXDh8~{Z rGg?@;mx>@9hҚ' E(>5~%?@V?@4W eŇ`Cu~H-Z?@F?@PL/ UI?@Y?@섰a?@@\ Ke/؊u\.?@:?@3bpDٳx?P+k= 2`C(bdP# C4?@4?Pu>5 w ([mY*?@uӧ?@?@* ?P% u1E(Tuf\ $Տ0l?@|Ǵ/(ȅ"/M͐0ߧ78E [13nyٶ7g?(w>Y.(vH0Tߕ+L0k ?O(~)EAr H0K(H0F0zYpkdO)jbpT꞊jpmB).~rO()6?@v9 'K ;(+_R~);W`b]/?@J{_(Dz@jlJ?@H_Zñ EvYfo#$ ߻oDVhz ѵhtlxYlpy/<<@'c>PZZIyo?S2u`` FJlCr[zۅ` 2+<<ㅑ\bbW1zQHDEʧ=i= =Zio$EPoHjrTfu-uZԋ:>xeݏl%b?@!rYYk}Ɵ؟oQ(?Qcukx<ڭѯ}a(:k}!ɿۿ*;s^ϸB 1Xj|ώϠk02?@'IOғϭa /ASߐ;p dߡ/dva 1io(gD= moL 2D~w`|t}(*<amKI?`&?`U@ K#?`Ugcms хhq>|ihqhqoe AAAIQQU0nuXȓcW_U ooVQ_~cUb4!P?Щ_ )e_Uk_I/VeOEu_/Ua$9mNO/UaO#I~O6?Ve?5/O?Ve >u?@ly0|?X]9u?@:@?@* ?P$ 1EX9\4 WC?@Q( B@EP >5 w Xը5 UI Y?@3z Dٳx?Ptk= `Xi9H-Z?@)${P\;X}!OXm9~%?@V]@Fͷ0?L߯]_Xq94Z rGg?@4W e-_X5@\>@?I(1_bnaZ?@Ajz~FwQu04@s_q'%7I}mW ga&8J\n$Cf֩?@1Qdƻ6(B@˾z0,/kP]?@8{#Jh?\*I-hSP~E ?@_pLV.?wMY/6%Kx?@ oi?@P:T/IiL-kbUܓk"~}uTy + =y?Ӑ/6l_~ '1}Z=z ɇllEpG?5JxT@߼&y֐_ T@j@I=6w+4L 1?@*0N}?A{?@l!%\T@_^ ̿3O69+2jd?@oGy=4 ?=M2ςO~69z^7`Zf5U?@F/`_`+ PjO6d6ENj:/-ُg~KPRr, n^Jh\+]OK4֐n룠?@{׀Bio_6IluRuF?@B5rPn$Xp W߾_6;KG@} yFa%?@O"Æ o6@7?d>@M\hhozoaSoooooo@`0 !0F[P1 P߾O~tR9րpZZ_[/KKbĶv߫_@? %;]ibtQQ08)M1ʓ|>°d2ǙL& 0!sVRI!s h Ϭ:L6m?@߼tﹰɤ_i?@0?>mrM/~L6ZhE"n X߽E]K2yx#R˯.b+?@Q Oi0#d /L6/e 0gk̛L "MCgAp`;?M5Y,m0xd> 9@eD=9m0,"fPXe9pv!Jx?b> W?@8+m0y٨L'OL6H ( $j #?@?B0vOL6>j @-}JӦ?@զ.kU]7aOL6-w< `?@*OIoNɌ~]l :V gk?@ܷ۫Ն|^9c_L6_g\+~P˚>P0ŵ?@!tr߲_L6Ot(?@.EF?@A b֦i"&oL6%@|^f Z 4` ?d2hu8Gd`ܔjagdcd!omqEiE1ojE5!AvG#O@@쯠UsܼnU{t`` -߆?t_`3{{{Ru LU_ujP5qɏۋ'd`e` ~@q( hUvJ!a [a X%G@iq@%uw1wmWP/qd`EITd`/R$LLd`1 l$ޢ3$"I4$"5$E26$27$2C$bM1$M1B9$B:$RI;$nR<4R>488pF<4_JJV4q(c4W)p4@Q!jaqiMޥ2)qeEX7' XqL&6 Dbpۤa/E(e:-T.DU!N O „g*PL= >)_Ckf5bs5+WTޢ,dT--qT|kYTa"p4`%d`uLp^d`ad0UeN j@RQ[8lޥ!N _cG3F3s͵NSY?15b-[1j|W,@‚rvQkkonxPO%/_I;VktMfdEjZ? ¶SSǪ;$̪eĸPf @ewבqHCOqZoEOqZOqZޥOqZ-OqZ|OqZkOqdДP:2&tQ瑺N SprvB2^'DoN?E۸:eõO*?@GAؒ8ebE!9qM?@?~?PTr ?Nk"ⱬeþ2?.۸kÿպFDSXeEHAƩ}˹y@!Μ8oEoΚw M?@>ev?@<ڲ|>cSo}@y0Lu~б{ZqoAZ/aa{ h@3$M4se5h:5J s 2e#2zGzt?"oAoH:eFaf#X}aeygOaǿiGdB}?41߿~իM 26`ǏW\ϟ#T&\$ >&q:LQiuf<I?@'Ş4Q.ǎ$6H+PXj|7Ԏ߻#&<ψ YV:e#__F2?)eR"?}-S"i &Zqq5ASew,ߒQ ݦ?@saY_ ͍EE/H&/8/J/\/n//////T?nQb%.^I?UGτH{<;N?_ZZq?OO%O1OCOaOsJYYO>P1_3 0*_<_NU鈨tf@CUg__Uh__dM?#=oZNϬϧ;Ap<>l{"Dƛoom oooRO vOOB ·z- ۞J}F ?@f |?PJW)tx=P?43fܨypQZ't_wCUx}B+߄q,ʞ!+;%7IC~!,[?@M@V+JƼź~ĥ>үy?Sr^ϵoU/ӛr?@Y܎_?@~ Gp!tj˝'j=G?hp8uhpo.ϙ 'oV@8Jw/w~b%ȭ^'k>#suw:|x;4w;;@1O_Fkt%.]?b?@.r $hF?@ȟl ??L ?N8jpv녜~AFO>!ԓ?@J!0q0{E=_9T?K oI6 ?@p@?@6g?@@ROF\>@? <a?@I3_h*/?@oJnAGD$o@&?8?g#oooP?t92$׏HbtuH$ғNcMb H?@OH?Ze5Bc Or@BTgExOaŁ~/o// E}8QRI P-VHGYB?џ+=OaOxb?@hH(u*!O9V_߂HZҿҏ()Gqw*<ϒrԝCU%^<@uʯܯhd! cuX10j=@K.@Rdv#5GYk@!}BTV] 41=FR3 o4~q4ѹq./SM?@@@-WG\@#GΔv@׬EQts-8R? 
!1ӄA"8QOART*J@@x^?w) p9 й.{(W @+z3"R(_:_F pSX{Vg_z_Č^c3ţC@ FqPe\gE|$!u'),"HVQC2TbPf)Pfrh йp2Ub?PF BP(?!3] aТJ1`Con`ect`rW`igtzۊУf يАKGQ\ReIa&b$pbPGz rf^!&8wtaLReNa5aoR`u `d`n`jfmY#ReFaFP`UP`t`e`5nfDrmU/Q4 (U H#1Xcsmbs@"q6o9m Ї5,@ U1U1'-"T0UP573026908'6r"AL G!<"IfǨmmAAѤϔܔ<8@ ;&_@Pf@?sྭϿPB-F %-<蘯p@bHBfi#T HAA"83T_Pd H Ru` '+c ?B(APt!/i8J޷!ղ  (%"I(f5BĂOp^ iġ`x<܃"Bݐޝߪ"ġ!,<89HŗE鿚$f O0TfՀߒߤ- 2DVpԥ Nߏ]9nmܿ*i!?" pNf8~qࡨ! !3! 3!0%\fiTkq@Hx0"5!<A9m%?jOT\w??A!"A-΁$---@??S7NC?U4A6^4zm4x5U5F+~ƃ7?7G!?y/ OO0OBOTOfOxODV:?@>ZP@-x 06,5O@G!OI?fl@@;μ3@ TpSX6_??#S+SX6??__#3PN)_oo$oHe(G!TcFFFwoloooooo y%rpRBt_[m!3EWiG!sfB*Ϗ);_q˟ݟY"a<0[9-?Qcuϯ);M^O垈}'ۦп*Pb$`G! yG!2DVhz:9?#6M򣖆/U4/QK/]$Rh////////v?4j9ѿ@A'?F?X0R_?q?9n@@#9r??? //p3%X&D/W/5OGO@p1nOOOO!ERC -6-6-6O __-_?_Q_c_u_____ O__oo)o;oMo_oqoooooooRcLVJ2_=Oas)o'9K]o8?2]lΏ(:L^poCoʟܖ/BwJ|&B-Qcuϯ) /߃ş8˿ ϟ1C%gyگmӯ |QcuWߙϱ;_ASw+=aWiݯa+iC},wnbaV 1CU./SM?@@@PN~G̿@2 a@  @@`/0 0(m/R#3%//@ Y///?~_%5a13T?f<Py???????:)@bP-OF/8OJO\OnOOOOOOOOO_"_4_FSaPS6d_v_O______ooO͙!壸66G9Xf}A/AQQ߽Wr̀ϘϪ$q>Ԑ8-O]?@N"ܰF >_Nk"GxtQc`zg{9  +ՙQ + 5ѽ@@Y@@@@/W6 F(u5,E&:bNb!vM _!Dc{/@|`&&cu0`dp&d&::NNbbvv&&&&bc6'55??J{h^A6e#c udes x vuqqeA 7@~ t9aa,5qV7@ad^AU\єG5DK~e DaKUDNUDKuDd!GUD:V[ eDaV[DeDNrGND^AH^NH|N~D\NjAHV[rG]NGvV[DV[GV[DV[XuDV[VŦDEV[AHK*! iWV[EDT1G8aPAƠ7@q7@q Wqc7@q9a7@ {a7@Q1W$ W1aW>EWKQWXQ1:WrFd!WrWQW1WA"W!BȮapJNtN!\՜[D\d1"u'WU4 QdFI 15Ѷ\.@@hXwB?@>~@vDPbt @@H\r?@]7@?>8Ә*6x< @@~?- z^IK H ůצ!@?QS&J6VewѿC+=σasυϩϻ߀'9K]6q˶˶ߓ ,OP _t?6ˢ?|ԅ2c:L^p6HZ>w{rwA?sKCzd(:L^pOOOO$/DH/l/N/Y?.@RV?v??/*/?_*_<_N_`_____#?__q_ oxM0oBoTofoxooo8on?D9oF!^m T^2I[tf_7x@@d4 ?@[s]@p,31eDV]onehwe~D шv YǛ#5c@v+ӀN`rŸԟ .@RdЯ*P? wnbNgK3pˉ߭߿+=OsPݜ?HFxB oV>,Pbt( ??.?/ $DVlY/.@RV/v///<@?R?v?????O??O*O?@-l ?@ xO@^( hEo` o{JE `H\rgZSaXb_t_Qrp-/xV__^sCӿ@YH{[0sSfffr&3PFXj|\I4WbP#d*K /ASewωϛϭߴnBTEk7Z`ԊB,>PbthzNr(G k}߂߳ 1CUlX-2QVu7 //z/@/R/d///sAi//3i//?"?4?F?X?Iv?>'s픅?e@eh\?.?uODv"O/FOXOjO|OOOOd~?@ ;?@8,=@: r3[THC5_Pv_+Y(625x@@h4_ YW`U6_??SXF?O_ob:3@u<(o:oLo^oۏPevcFFFol!ooo 1CUy_rp|t_%7I[mvvRӏA -?Qcuϟ) H$P@֙čvЯ*NruO¿ԲVxϼ(:L^pϔϦϸ6UZy~ߝ_q"  ?QczϫϽ;Mfd߈߹߾PQ*uN`rӡ#ӡI[mQ9;ZC:6;^-(/lK/ߢb/t$///////?QT*J@@x^?@;~ׄ?@}SBJ8]?o0v?9OL&d@@?gl@6?E&'O/./ C=R4o&[/n/LO^OJ0~5օOOOO8EࡔCD6D6D6^ \YYY3_E_W_i_{__TWcaV!SUb#_+bO__o o2oDoVohozoooooook ssVa20_TfxPo,>Pbt`4F??Rb? |ӏ -?QcuFojoϟ/i1w=a)ͯ߯'9K-?ퟓŸۿ ̟ޟ5dv}ϬЯ1*N`ryߨӿYkϏ5 YCU.MnM0B~xLCZ3AVIYZ $6HZlfl@@;μ3@@9!·@Z /%0J%:/9Ki////,@i?) //??_<5ZH3aaak?}???????? 
OIO1OCOH/gOyOOOOOOOO __-_?_Q_c]ZwS6~__?____ oo-o?omOcouooooooon~ j0Y<1CUgyfO-?Qcn(oԏ .@Rdv>Pb$*N-QcϏ8);qڿ˟ݟ>r L3b${ύϟo /Atewwqߝwq /M~_veK{#]ߢrqw/ASew|>?@-l ?@{?@SQGb*Єq ]?@?Qȗ[E`3qd%!?@p#5?YqeP/$\!*I!1d!b Pi#M,#]%////////?!?3?E?W?i?{?0q3??(/??OO&O8OJO\O/OOOOOOOO^?5(_:ZN_`_r________oo/8oJo\onoAS\NoooOo'9K]oϯ#BoGmokoLoooŏooW ,>PUt(:Qpuz܏>_);M_q˿ݿQ wQ0BTfxϊϜ8Q-k!KsS2RI[QfԿߜ߮#~?@ ; Pa?@4 #1DVQ]olv|NDŋ{G '@+#VBU3E~?@ҬWl~/Q+E++ -?Qcu $!3EWi{QJH//;/M/_/q/////'//??%?7?I?[?m>??q:a%?????OO)O;OMO_OqO OOOOx3@>__*_/N_`_r________oo&o,oOoOoOO __"A_S__j______Vo +o=oOoTsoooooҏoo9>P?tΟ(:KAUgԍKAïկ)kAZ~\k5-lGUDSA XAPǿٿu@%I|>?@Ķ-l ?@m?@_~m?P t@` wqwqǁ0 tu1& Z5]5 )?Bsā4 C% A/㟶>A/㳶⧦3U(NX%=<)r XA!l-xdϜ Pj662'bP+6So3z@`` FzldSl` =52e@@K@ӑ3WB=ǜ83U@Ua(?@b4ɓKϻտ!9KTTBOX" BR䩢00щT9B,9/飿鮫x,1EkZV?@'IOғ'YkxRF0A.r+='$|w ߐE|u"_#-$"B+>RI)X*e9BrFB/%,T"9/q&U _DS_wF3s͵NSYF[/)/,D  ޲궬uIl>=NI[BhnXA-?2XArAAII$IeAe6D0k-W WeAH|v1E4rA;E4A;41;-41;U417}In '|uu CHud tV"k|uNT]WZRK/BiDI T!-9 u|xXT'2qP)%|u!Ŀ1G[Q FVqp p%|x1_ c}{|w$a0eY5@HoZo0kno h0ecoo*߶õ1 11 pk?>5 ^nj%i-ݯ u-״O3IױU!ױױo}AQR3O=WNJvܕ@1S?@O$,qٷĐ8{Z rGg?@;mx>@9h E(>Ƙٕ~%?@V?@4W eƘٕHp-Z?@F?@PL߯RƘٕ UI?@Y?@섰a?@@\ aKƘٕ\j?@:?@3Dٳx?P+k= 2`ƘD%ٕ&?C4?@4?Pu>5 ?w Qj%ٕmY*?@uӧ?@?@* ?P% u1EQٕuf$˰l?@?|Qٕ"/M͐~"ŧ78Eǀ [13n?ٶ7g,QUY.(v~T+L"k {QAEAr K(~F0zkdǗQj~T꞊jXmB)?.~rQQ6?@v9#'K~Ԡ;(gp;Wb]/?@JQٕ@jlJ?@HT1A}At~_i`@RF__ȏڌoe+H"VR&wSjP'#zsVVQ?pS+Bu_`` F.Pl iC@Ar0zp` E2V0@ˀx!bbbPcz@A/oc=ys %yw[ /$ŌxRod}ei+Hvz+=Oa?@rU//8lR/d/3{/////x<=/ ?.?@?R?d?v?/;cs??? ??OO)OfOw0ZOO~O\OOOOOkl?@'IOғ __1_G_Y_k_}___eH?__k\&8J4(瀕y4$8o\ne( 0BTfxaۀJI&@J%# a ܋ zca;Iq6х`#5oOo1ȓ>eEWߟb4!~ЩE4>Pէ߅uYկ`ƩΟ $M_ɺrkFuӧ?@l|虑๱R?@:K?@* ?P$ 1Eq\:!pî?@?Q( BP >5 w  e UIȠY?@3Dٳx?Ptk= `襑H߸-Z?@)${\;X}]詑๳~%?@VF~L߯譑4Z rGg?@4W ei`#u@\>@I(17rh▔l1doox+ u CQ4w$|CQqaO}]n@@#9rstիǿ@WGp?@TuQts-8RӔԘfxw@@-{vI @)apNp@z34bp.3W"CVhh_u@ټdrdpqRTwHx@J0,,Kr٦2Ul?YYYF_P(?a]bJ`~Yb#윏+/RӀa(N5UR\udPnzƍrF"`p/UPte^nƍQܳ Qa4_2I_2Kr@;䀐,@[ap[a?Tw-Tx"HrT057@3@0`69*8@6 @"a@qt8O)njTxH>0QmD䂻zDbDKrǔDYA YA2QѻDDܲԘ0@v?<7AeEztVLKuDP࿀KDKQD E|[?\ܵD_p>.OeUMBsT䁻䂪Ԙ_sTPo41{uKp` ?w?r!1!_!onC0dd b$dKr tt2'rQ2tBfArBLt+p[r ftkurti܁܁.Ԙ9341uV14uo0q ew;2/I[m5ϏQ 9K]oܵr֟m-toUM3 a bj!Q1=40xd %a_`3 @11ѥ eU2TW!C41((XGNQ?)?;?M?_8g،w=#>3BTOfOmjѼ??GqjP±j2,AHv1v1NR0-T1:T1ee$3TTђ++WrG!G!JҏS7T8T %$)dr:Tqr;T~r@hhrr>Tb²a1OO@O= $ c`ȡmաȡggNgggggodh ggeGtQe+QRۿBZ'!'єqHqjՑta{5ta{Ata{%ta{ᅑtq{1tq{$ta{ѕta{ta{Jta{Rwa{tawf±`½ SQ% R\utpHBGP"b9KIiTj6d%Iy X/'0U Rg!K0Qj2qqjǦ!oқձß_P_2$E.xY!ay_ 20¾?@Љ?@?(=Ep3@//+/=/O/Ej/|/!F///ojFX ?xb&d2? DEEHX?? O#.Gpr?į֯E%AZ%7I(>k_ 1D./Qd1ODoqXOdMbaT4(3TT U%WUI^az@  @@`0 @@l@1?@F1~?6vQts-8R1bPaӔ b_gytwn@@-{vI @W)2p^.|SDP@z3LrpwWDWi3Qvs5730`69*`86 1 AAD9ϭKɠH PaQQo2{Ě2 TǕ/RZZ"qqWRѼ2rPbl8^@t:?=5ŨVdcPr/U1ԣvWUB1 ck2u}@Pe1p?x/ϝJ8Y7STo1qqo2 l83TQo1=o0ST` w?NR!ߝJUKTl0o2䒚2T /R "(WR32r$`BPbMB.`\&g1)`v2"7Q! l89=2T5<0/UJ\n%WU 2u:L^pPemTTTJV1$o2>E2l8k>QE-y Xn37QHP3KU!5VTXD.=(T1o5(|X>ÿ̇Gs݈GtV74r&U!1 5q{'Oڹ "87}U$G?+ !"o'2Dvn!`x//I fTΑ/ׄ,Q,ST2O4ukk>I627?NCLBR8Yq)"*! ":s ";ei1i1r>HH JJ2ǡǡEAJ۔Y5]1)ց>b?aQBA"B?P/BM(O߸!aa9ځ>"1N}LiZ_>}a}am+UOһu|>Jz5:4u\[%uL+@q+@+@ϩfT3%ߩ8O$_Óa_bF3s͵NSYđ`}uQ,@rrvٲϗ[UN%aa9a5tӁc;jZ??~?zA0000|00P4"30e\=) Li=1"GY \!@H6A3MXmMXBMXuMXMX2MXa5MѽAĭ431=r5SrvVчHEBIOO6Y6~9 V`Y`Zi132%_7?eOkE.@-0=i{"?%XѐXD# 3 U$q,ń%3C=ѓ??????OO(O:LM^κ?@6ə}^)Er%up0s}O-UDJX_oqoooooooo:LF6Tf?@U6hWI[O!:ajrK_xz+Uf8.)$6XOaM~ ѺکUڵZewdUpH/<_X|"goop /!/3/& /A૟eϟ៛πj9̣E|[iO@pbe?@vS?PJW֔[30=M43fwM[557A!\q$ptON%B8?a>z>JՔ/l~~pSpCV?@:H ?@7LJ&% >+=OasA٘V&Mn &(<?@"qۇ@w?@&`c]?@\u{x0?QnR@ pU?@.T}a'|7F@#9(G?@9ABBZµͳm=/O*"]/o///Aْ(]=$6GN7ƴ)AbmymQ?@?-S8 F?@7fםd?P& lHm@8@ t?P)Oy޾ F +4 q"4F|m a_o"9 `z^a?@ $w% r! 
[binary figure data omitted]
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/figures/SCH_5009_V00_NUAC-VNC_OpenStack.png0000664000175000017500000023011400000000000025016 0ustar00zuulzuul00000000000000
[binary PNG data omitted: 1081 x 429 px raster rendering of the SCH_5009_V00_NUAC-VNC_OpenStack diagram]
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/figures/SCH_5009_V00_NUAC-VNC_OpenStack.svg0000664000175000017500000016507100000000000025042 0ustar00zuulzuul00000000000000
[SVG markup omitted; recoverable text from the drawing, titled "Schéma Réseau" (network diagram), page "Page-1":
 components: User, Browser, Internet, noVNC, Nova-api, Nova-consoleauth, Compute node, Libvirt driver
 numbered callouts 1-8 and the following message labels:
 - The api sends a «get_vnc_console» message
 - Generates a token
 - Sends «authorize_console» message
 - Caches the connection informations and token
 - Browses the url returned Http://novncip:port/?path=%3Ftoken%3Dxyz
 - Sends a «get_vnc_connection» message
 - Sends «check_token» message
 - Proxy starts
 - Returns a url with a token
 - The user requests an access URL]
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1636736319.0 nova-21.2.4/doc/source/admin/figures/SCH_5009_V00_NUAC-VNC_OpenStack.vsd0000664000175000017500000530100000000000000025024 0ustar00zuulzuul00000000000000
[binary Microsoft Visio document omitted: source drawing for the SVG/PNG above; recoverable metadata: title "Schéma Réseau", author Razique, created with Microsoft Visio from the DTLNET_M.VST network template; pages "Page-1" and "VBackground-1"; network stencil masters include Switch, Server, Database server, Cloud, Firewall, Laptop computer, Web server, Dynamic connector]
/ 0 1 U4<zE8:&@bN%@ >a8C-0ߜb A-2U4<zE8:&@bN%@ *bbA-l1p7"A-3U4<zE8:&@bN%@ h'bA-$K7 ;-4U4<zE8:&@bN%@ %$c }A-.7-U4<zE8:&@bN%@ 'cA-%K7 ;-U4<zE8:&@bN%@ ; cA-%K7 ;- U4<zE8:&@bN%@ &Md }A-W7-U4<zE8:&@bN%@ ; dA-\(K7 ;-6U4<zE8:&@bN%@ ; eA-H+K7 ;-?U4<zE8:&@bN%@ ; veA-\+K7 ;U>-U_5U789:;U4<zE8:&@bN%@ >e6C--οf AU^-.NUOPQRUST!U<)=)>)@)UA)B)C)D)E)uU4<zE8:&@bN%@ (ofPC-8m7A2-7W89;UFGHIJKuU4< zE8:&@bN%@ Hg,C-PmI7A-LU4<zE8:&@bN%@ < gA-H/K7 ;-MU4<zE8:&@bN%@ < hA-\/K7 ;$ Z@8QR@9QR@!^:RR@)#:MR@#O;GR@($;QR@؂>~=RR@ȃD5>RR@X>QR@h?QR@;@QR@@OR@;AOR@ARR@HvSCTR@ȃCQR@8NDQR@-DQR@(}FEQR@}EOR@h~FJR@HFQR@"FOR@X%kGOR@8GQR@苮MHOR@IRR@<IQR@EJQR@JQR@H˲=KQR@6KRR@87LQR@7gMRR@x8MJR@9NQR@9\ORR@X:*PRR@:PQR@;QQR@8<RQR@<ROR@x=SRR@>LTRR@URR@UOR@ VRR@uVOR@ VNR@hLWOR@qWOR@(:XQR@ȣXMR@hDYMR@YOR@ZQR@HyZOR@覒ZOR@G[OR@([OR@Ȩ\QR@h~\OR@\OR@L]OR@H]QR@諒/^OR@^QR@OT_OR@ȭ_MR@he`MR@`OR@}aQR@XbRR@rbKR@bOR@88cGR@cOR@xcOR@adGR@dOR@X'eOR@eOR@fRR@8fPR@WgPR@xgOR@ &hORL<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<(L<($ ZE8nRE( nRE!oRE-#oREX#!oREx$/oREx>=oREDKoREYoREHgoRE8<uoREHoREoRE軕oREvoREoREoRE.oREx}oRE~oRE~pREpRE"pRE%+pRE9pRE8GpRE8UpRE(=cpREXqpREpRE˲pRE6pRE7pRE(8pRE8pREh9pRE:pRE:pREH;pRE; qRE<qRE(='qRE=5qREh>CqREXQqRE8_qREmqRE({qREH qREqREqqRExqREqREqREXqREqREqRE8rREاrREx#rRE1rRE?rREXMrRE[rREirRE8wrREجrRExrRErRErREXrRErRErREHrRErREsRE(sREsREh-sRE;sREIsREHWsREesREssRE(sREsREh sRUFD  h(^TYYBBUF\.ߗ?x<F BP(?hP?X$ ]B66ȅHq?j|?G IB@L&ɯd2?sU!&;/xt  0RKB` SwitcP,ne work pʴ r pP!aUl d v c z !^| R( G@&L >V&7?J7 ?ppwpwpwnwwyZwpw wwwpwwpwyplRp)~pw߇p}~zZDrag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aabL&d2ٿ\.??{ڿ ;?k?r3r CUG DF P# h @T(PYY#EψUF\.?@L&_d2?@ w?P} Y.B! ":J :f^7r7ֆAu` ?U"U66JJU^^rrUSu#U ^b"h,h'RqU f??z4b u{[`:#"&a#'QT)b" ^l!L1$l" l"7'>>ÿ@qZ05?46 r23ZS2uY0`-1ZBu@ l"r3A'Bl" hL$9u`u `"[ bzb!u`R$"% أ#Q!'1`Vis_PRXYcPm!#5HP4{6NP2`?CopWyr.PgPt u(>P) 20{PU9 M.PcePo0P'ofmR^QraPQUamPi_Pn WAlP eXsRUe0PeePvPdB!#d$%b!*(=#&#@Bd&& "$B##0Ub#]g,7{TrN @ cTg d(a1+_c@11"hh"2)t3*tB+tBa a"aab(!BpaSb"+QQou\%G!!["@@Bq"53T@R"q3<1 KEq`RPQs.PU TPxmPSP^2l"+ 0^<@5b 䱠"5R"br3&BrM_q(pffs_<៵R-m11'rvu"u"2$u3/8B0EBH%Q%Q"2_b3l !!E|QQQQ67l!8ĢAP!!1;(bL1D:b;aqq^eD!b(pQu [ ]@U0` SPa*aPePl rsRQV)P E u.PpBP!ePQ"5X%7\In` WTcPx]SP`DP!vR3^߮߼Iӏb` Syb9Sw>?QEd-MPnyfPcmPuRmt߆EQ`]1SPRd Nm E ZK!P PPr8J\h}]%R'`-sRU3e_ 5Afe3`؇e ,S SRiPQ ՑրTZIPL_PcX@;۞`rBYlPiPq٫€րY0KSR_PoŲ/ٸ!ר@/R/5NwRk,a_B`+x/OI4PQdPSi/, "?=ns#]?,(dN??(JSL1m "/Q9tVQ IQ /?րf?O\I-` R 4PQQ^OpOƒ?O?MPCI?O._qSCRo"mY6AtN BT]]9 M JU@L&d2?@|>??@Fh4?P6 AJuM` W?SuJt At=WG_z#J]RbJlp= ף#R>2yw?@MJA@eb&#z(b),+,&J[(.;?&L ?A$./O%B!!5 g`?Copy_rigTtZ(c)Z2U009ZM 0c 0os0f21r0>1a0i0n}.Z AlY0U 8s]2e10e 05vo0dQ0! 
$-# ',&?6iie59 6uX7_'0UpBŁ&Z!$0 'F";JlJAh h7!1!Z@x<3ף , <~y#  IRvVJ:?OS QlU^TURUEU{bX,G\ZYaaȫQ5UL&[RQQ .aYU520_BUv"rA # cHD H# h4>T]]9 MT JU@Fh?@+@ ??@?]P6 ]nAJuM{` ?5uJt  Q߸It=W贁NUIRIl%0w#>2zG7z?3@MJA@eb #zbR( (2+2&J[ (wp"?& ?A$ 4/U%B!!5 g`?CopyrigTt (c])020%090uM0c0os 0If21r 0D1a0i 0n.0 WAl_0 8sc2Ue70e0vu0dW0!E$5 򤲝-?# '&6iieE9 6X73'0UvBŴ&!$0 T-FAJlJ8>UhAA 11!`m`0?@rO\.ש ,#<#{ RVJ:9XYUي#rT Q%UE?,b R8]S# l_@ѱQEakQY,aU5U^fGO+|no/j3asChAZGQaX_j_i2>_PU"rA #js( bUGD  3 h0TdYYB 1 U@ BP(?@'J?@ V?,?@f_(@yw?P} ,w 5 &M& DD bb 4  M  4  &M& 4&4& R&R& p&p&d &M& && &&4 && 66o"u` ?J$0BN`l~$I/( /2(>/P(\/n($z/(/(/(I/(/8?"4u.9!t+1  QyUtmQRkyUWq#6&yU~W6͔QUQQR`?CopyrigPt (c)2`2\092`M*`c(`o%s"`f0b!ar$`]aua0`i"`n.2`_ Alx` (hUs|beP`e(`v` dp`RSST SB-S`6b g?/4)=SfS@tff bT#S0Uorc@V*` @4c u vXgw'ntŔf!d] vze!`^H@!!Yd&Db&VUʄ&ՋeʄDՋ3ʄbՋ uʄՋʄՋ|ʄՋƈՋcȆՋƈ!Ջ&ƈ4!ՋDƈR!Ջbƈp!ՋƈRƈ!Ջƈ!Ջƈ!Ջʄ1VHD: # h0>Th]]9 MTIAU@?@kE8?@ ͂?@SSfhr~?Pps-8R?>$ > n JuM{` ?e ;  Su*Jt'  $mCLmta{\_I 3.Jv  q?#cwn?v(A[}Kѭ>JU2N贁N[?@M#J&A@bI(bUW)U+U+U&*Jy#?!lJ ?%P?AO"d+bW+$u/%B #A1A15 `?Co}p rigTt (cu)x02`09x0uMp0cn0osh0Ifv2g1rj01av0]ih0n.x0 Ul0 n8s2e0en0v0d0@:1Y>4#-#M3 M7 #(p0b5(9 DFXGdG'2WUAŔ;F!$a dF$&#l,&}>Uh ETh]]9 MTIAU@?@-Ы?@ ͂?@SSfhr~?Pps-8R?>$ > n JuM{` ?e ;  Su*Jt'  H-mta{n_s9/3.Jv  q?#cwn?v(A[}Kѭ>JU2N贁N[?@M#J&A@bI(bUW)U+U+U&Jy#?!lJ ?%P?AO"d+bW+$u/%B #A1A15 `?Co}p rigTt (cu)x02`09x0uMp0cn0osh0Ifv2g1rj01av0]ih0n.x0 Ul0 n8s2e0en0v0d0@:1Y>4#-#M3 M7 #(p0b5(9 DFXGdG'2WUAŔ;F!$a dF$&#l,&}>Uh ETh]]9 MTIAU@B?@c?@ ͂?@SSfhr~?Pps-8R?>$#A n JuM{` ?e ;  Su*Jt'  m@mta{7._ xX3.Jv  q?#cwn?v(A[}Kѭ>JU2N贁N[?@M#J&A@bI(bUW)U+U+U&*Jy#?!lJ ?%P?AO"d+bW+$u/%B #A1A15 `?Co}p rigTt (cu)x02`09x0uMp0cn0osh0Ifv2g1rj01av0]ih0n.x0 Ul0 n8s2e0en0v0d0@:1Y>4#-#M3 M7 #(p0b5(9 DFXGdG'2WUAŔ;F!$a dF$&#l,&}>Uh ETh]]9 MTIAU@eZԩ?@z?@ ͂?@SSfhr~?Pps-8R?>$ > n JuM{` ?e ;  Su*Jt'  }$~mta{_P@3.Jv  q?#cwn?v(A[}Kѭ>JU2N贁N[?@M#J&A@bI(bUW)U+U+U&*Jy#?!lJ ?%P?AO"d+bW+$u/%B #A1A15 `?Co}p rigTt (cu)x02`09x0uMp0cn0osh0Ifv2g1rj01av0]ih0n.x0 Ul0 n8s2e0en0v0d0@:1Y>4#-#M3 M7 #(p0b5(9 DFXGdG'2WUAŔ;F!$a dF$&#l,&}>Uh ETh]]9 MTIAU@h!I?@Zzv?@ ͂?@SSfhr~?Pps-8R?>$ > n JuM{` ?e ;  Su*Jt'  XƿHmta{_z3.Jv  q?#cwn?v(A[}Kѭ>JU2N贁N[?@M#J&A@bI(bUW)U+U+U&Jy#?!lJ ?%P?AO"d+bW+$u/%B #A1A15 `?Co}p rigTt (cu)x02`09x0uMp0cn0osh0Ifv2g1rj01av0]ih0n.x0 Ul0 n8s2e0en0v0d0@:1Y>4#-#M3 M7 #(p0b5(9 DFXGdG'2WUAŔ;F!$a dF$&#l,&}>Uh ETh]]9 MTIAU@sLGv?@Bw6t?@ ͂?@SSfhr~?Pps-8R?>$> n JuM{` ?e ;  Su*Jt'  [Rw͖mta{__j3.Jv  q?#cwn?~v(A[}Kѭ>JU2N贁N[R?@M#J&A@bI(b*W)U+U+U&ՆJy#a?!J ?%P?AO"d+bRW+$u/%B#A1A15 `?Cop rigTt (c)x02`09x0Mp0cn0osh0fv2g1rj01av0ih0n.x0 l0 n8s2e0ejn0v0d0:1Y>4#-X#M3 M7#(p0b59 DFXGdG_'2WUA;FJ!$a dFD$&#l,&}>Uh ETh]]9 MTIAU@?@9 EU?@ ͂?@SSfhr~?Pps-8R?>$> n JuM{` ?e ;  Su*Jt'  $mCLmta{_3.Jv  q?#cwn?v(A[}Kѭ>JU2N贁N[?@M#J&A@bI(bW)U+U+TU&Jy#?!J ?%P?AO"d+KbW+$u/%B#A1A15 `?Cop rigTt (c)x02`09x0Mp0cn0osh0fv2g1rj01av0ih0n.x0 l0 n8s2e0en0v0d0:1Y>4#b-#M3 M7A#(p0bP59 DFXG~dG'2WUA);F!$a dF$&#l,&}>Uh ETh]]9 MTIAU@?@,?@ ͂?@SSfhr~?Pps-8R?>$> n JuM{` ?e ;  Su*Jt'  H-mta{X_ 3.Jv  q?#cwn?v(A[}Kѭ>JU2N贁N[?@M#J&A@bI(bUW)U+U+U&Jy#?!lJ ?%P?AO"d+bW+$u/%B #A1A15 `?Co}p rigTt (cu)x02`09x0uMp0cn0osh0Ifv2g1rj01av0]ih0n.x0 Ul0 n8s2e0en0v0d0@:1Y>4#-#M3 M7 #(p0b5(9 DFXGdG'2WUAŔ;F!$a dF$&#l,&}>Uh ETh]]9 MTIAU@B?@-r-.?@ ͂?@SSfhr~?Pps-8R?>$> n JuM{` ?e ;  Su*Jt'  m@mta{_ :3.Jv  q?#cwn?v(A[}Kѭ>JU2N贁N[?@M#J&A@bI(bUW)U+U+U&*Jy#?!lJ ?%P?AO"d+bW+$u/%B #A1A15 `?Co}p rigTt (cu)x02`09x0uMp0cn0osh0Ifv2g1rj01av0]ih0n.x0 Ul0 n8s2e0en0v0d0@:1Y>4#-#M3 M7 #(p0b5(9 DFXGdG'2WUAŔ;F!$a dF$&#l,&}>Uh ETh]]9 MTIAU@eZԩ?@ƍu?@ ͂?@SSfhr~?Pps-8R?>$> n JuM{` ?e ;  Su*Jt'  }$~mta{_D3.Jv  
q?#cwn?v(A[}Kѭ>JU2N贁N[?@M#J&A@bI(bUW)U+U+U&*Jy#?!lJ ?%P?AO"d+bW+$u/%B #A1A15 `?Co}p rigTt (cu)x02`09x0uMp0cn0osh0Ifv2g1rj01av0]ih0n.x0 Ul0 n8s2e0en0v0d0@:1Y>4#-#M3 M7 #(p0b5(9 DFXGdG'2WUAŔ;F!$a dF$&#l,&}>Uh ETh]]9 MTIAU@h!I?@O2?@ ͂?@SSfhr~?Pps-8R?>$> n JuM{` ?e ;  Su*Jt'  XƿHmta{p_'F3.Jv  q?#cwn?v(A[}Kѭ>JU2N贁N[?@M#J&A@bI(bUW)U+U+U&Jy#?!lJ ?%P?AO"d+bW+$u/%B #A1A15 `?Co}p rigTt (cu)x02`09x0uMp0cn0osh0Ifv2g1rj01av0]ih0n.x0 Ul0 n8s2e0en0v0d0@:1Y>4#-#M3 M7 #(p0b5(9 DFXGdG'2WUAŔ;F!$a dF$&#l,&}>Uh ET]]9 M JU@]u?@x?@,ww'?@d?Pn6 >JuM` ?u#t  fIt=W紁NIRkgS|lWxI#>2zGz;?@MJA@eb #zb(b%)2+2&J[/.;?&L ?A$4/U%B!!5 g`?CopyrigTt (c)0W20%090M0]c0os 0f2R1r 0D1a0i 0n.0 AUl_0 8sc2e70e0vu0dW0!$ę-# '&?6iie59 6X7}'0UvBiŇ&!$0 -FAJlJ8>UhAA@ 11!T2 $<cV] &WR~VJ3kq?@BP( ]S m၏jy#mS zjaֱQE^y`0__ lOA/X5:Fh4 XYePYv81sX5aZjoohY}aUU߭?@,bl'4/\K)aX2>_;"rA #jsHD: # h0>Th]]9 MTIAU@sLGv?@_?@ ͂?@SSfhr~?Pps-8R?>$> n JuM{` ?e ;  Su*Jt'  [Rw͖mta{_3.Jv  q?#cwn?~v(A[}Kѭ>JU2N贁N[?@M#&A@bI(bUW)U+U+U&Jy#?!lJ ?%P?AO"d+bW+$u/%B #A1A15 `?Co}p rigTt (cu)x02`09x0uMp0cn0osh0Ifv2g1rj01av0]ih0n.x0 Ul0 n8s2e0en0v0d0@:1Y>4#-#M3 M7 #(p0b5(9 DFXGdG'2WUAŔ;F!$a dF$&#l,&}>Uh ETh]]9 MTIAU@?@*y>{?@ ͂?@SSfhr~?Pps-8R?>$>  uM` ? ; E)u*Jt'  $mCLmta{)4G3.Jv  q?#cwn_?v(A[}Kѭ>JU2N贁N[R?@M#J&A@bI(b*W)U+U+U&Jy#c?!J ?%P?AO"d+bRW+$u/%B#A1A15 `?Cop rigTt (c)x02`09x0Mp0cn0osh0fv2g1rj01av0ih0n.x0 l0 n8s2e0ejn0v0d0:1Y>4#-X#M3 M7#(p0b59 DFXGdG_'2WUA;FJ!$a dFD$&#l,&}>Uh ET]]9 M JU@L&d2?@\.?tP6 >JuM` ?uJt ASt=WJRb~li>t2U?=@MJA@b + &J[N#?0a& ?AT)B!!5 g`?CopyrigTtZ(c)Z20 9ZM c. os f"!r 1ai n.Z Al90 (s=2e0ej vO0d10@!$b-# 'Ya&?6iiek59 6jX7^'0a&Z!|$0 FJlJ UhqMAA ]A !4Ja-'Z@?@  <0贁N#A OR|VJ:6]G+|M__Zw?/#UEF`?0?@ aQS xQ#`Y{GzYk5^ro!o :mX5U'h?gkT:&#dT[RUU4F?@|?BP(flwaw%l5p!Xnc 3dXAYEh4\`Yb/XAdx̼_ZWnQXAYL&tgkRwpQlQE26_HU"]rA HD: # h0>Th]]9 MTIAU@?@?]0?@ ͂?@SSfhr~?Pps-8R?>$hA n JuM{` ?e ;  Su*Jt'  H-mta{AJ_ap3.Jv  q?#cwn?v(A[}Kѭ>JU2N贁N[?@M#J&A@bI(bUW)U+U+U&Jy#?!lJ ?%P?AO"d+bW+$u/%B #A1A15 `?Co}p rigTt (cu)x02`09x0uMp0cn0osh0Ifv2g1rj01av0]ih0n.x0 Ul0 n8s2e0en0v0d0@:1Y>4#-#M3 M7 #(p0b5(9 DFXGdG'2WUAŔ;F!$a dF$&#l,&}>Uh ETh]]9 MTIAU@B?@d;]?@ ͂?@SSfhr~?Pps-8R?>$> n JuM{` ?e ;  Su*Jt'  m@mta{d_v3.Jv  q?#cwn?v(A[}Kѭ>JU2N贁N[?@M#J&A@bI(bUW)U+U+U&*Jy#?!lJ ?%P?AO"d+bW+$u/%B #A1A15 `?Co}p rigTt (cu)x02`09x0uMp0cn0osh0Ifv2g1rj01av0]ih0n.x0 Ul0 n8s2e0en0v0d0@:1Y>4#-#M3 M7 #(p0b5(9 DFXGdG'2WUAŔ;F!$a dF$&#l,&}>Uh ETh]]9 MTIAU@eZԩ?@>6#TTĶ?@ ͂?@SSfhr~?Pps-8R?>$> n JuM{` ?e ;  u*t'  }$~mta{%$苵3.Jv  q?#cwn?v(A[}KI>JU2N贁NI[?@M#J&A@bI(bW)U+U+TU&Jy#?!J ?%P?AO"d+KbW+$u/%B#A1A15 `?Cop rigTt (c)x02`09x0Mp0cn0osh0fv2g1rj01av0ih0n.x0 l0 n8s2e0en0v0d0:1Y>4#b-#M3 M7A#(p0bP59 DFXG~dG'2WUA);F!$a dF$&#l,&}>Uh ETh]]9 MTIAU@h!I?@Ž?@ ͂?@SSfhr~?Pps-8R?>$> n JuM{` ?e ;  Su*Jt'  XƿHmta{_v3.Jv  q?#cwn?v(A[}Kѭ>JU2N贁N[?@M#J&A@bI(bUW)U+U+U&Jy#?!lJ ?%P?AO"d+bW+$u/%B #A1A15 `?Co}p rigTt (cu)x02`09x0uMp0cn0osh0Ifv2g1rj01av0]ih0n.x0 Ul0 n8s2e0en0v0d0@:1Y>4#-#M3 M7 #(p0b5(9 DFXGdG'2WUAŔ;F!$a dF$&#l,&}>Uh ETh]]9 MTIAU@sLGv?@c?@ ͂?@SSfhr~?Pps-8R?>$> n JuM{` ?e ;  Su*Jt'  [Rw͖mta{_] 3.Jv  q?#cwn?~v(A[}Kѭ>JU2N贁N[R?@M#J&A@bI(b*W)U+U+U&ՆJy#a?!J ?%P?AO"d+bRW+$u/%B#A1A15 `?Cop rigTt (c)x02`09x0Mp0cn0osh0fv2g1rj01av0ih0n.x0 l0 n8s2e0ejn0v0d0:1Y>4#-X#M3 M7#(p0g59 DFXGdG_'2WUA;FJ!$a dFD$&#l,&}>Uh ET9 M3JU@L&d2?@\.?ҷP6 m>JuM` ^?5ulbSD_Jt bstE>2zGz;?@MJA@u+b+&K !JS"j& 1"A.b+-,-/*B!!5 `?CopyrwigTt `wc)5020A0950M-0c+0o%s%0f32$1r'0`1ua30i%0n.50_ Al{0 +8Us2eS0e+0v0ds0ib =L!XY'0U Bj&%0 :lJ Uh*M[A[A 9A Z1:'h? 
Y T:&Qi@J4F?@|BP( IQ!i pP!!U31^ `0,I_[_ {uXEEh4Ẍ TbYb/uX _dx_aZW贁NuXAIL&tKRPQ\%QAI@@ HlQbY 0hauXAo G+|oaZw/(uXAY RU:q Ib_`yA\ppr׌ :muXEOO_\_dH >Z(+s .ṿrAA'FHMza# {!OB 3|~d}@+x!BBGULge PLφ&LYrK߻t*8w yҖ|~{ HO"P9ZΧ6 v XLJ|XƳ ߲Hpmߵ*طZؼ}Ţ} ?xLQ˰]!oWeIL UM'QYAU BN [U h u R.8UEU_l y!񷪆"A#K$X%&'ª()Ū*+$,$-"$./$/ͪ<$0I$1JV$2f!3Up$4}$5U$6U$7U$8$9U$:$;U$<$=U$>$?@ 8 3Y&4Q34CU@4DM4E) 0]1F0j1G0w1H01"01Jb'01201301)401N01QO01P 0J1Q01R)60AS?0AT A!<0-AVFb10cL10M10DUFD  h(^TYYBBUFjZ?F~??x<F BP(?P } kX` BW66 W TTT%TTȅ5H?? (?B@L&d2?-(\.g1(.sUW!AU&w/~  0QB`#serv",comput$dWis r0b$ud n tw 'rk!eA^| (SG|& b?,>?TTwpp pqqppqwqpqwqwq+p pppwwwznw6;5wqpwwwhghw>Drag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aabjZֿ~??cN׿ mF1??SbM1{V↑T*J?@ ?UG DF P# h @T(PYY# 3U@jZ?@~??F?Pn} u{` ?Su#΍>>HX HlHYHH E.Oč>UDDXXUllBa'9-",'tU"J%q4 Q?"'4b0{[`:#26Q79$"t3&9"OL1" ^!'܈ >7@ÿ@q@@?@8B?@!ABP5F02u@K`-1xBu@rCC53A 3Bo2гC9uv`u `"[ۢ b!-u`23kQQ1Q7B`?CopyrigPtf0(c])f020P9f0uMPcPosPIfRQrPQaPiPn f0AUlP XsRePePvPd/`VPs_SBNcPm!#5JP83`5a Q1#d:4%B @8=3R6#d\6P\6 X2$#3+0Ub\3]Jxge8^ E U#".5m Hvq%t{.5tpw3t{Et{t{^y {|tYwmkdaIB'BT1aa"0ք.2132B3z1z1"|+q+qEQQX116Q;r<A>Ab\1EF`rAJ͔!hh)1baA (dT6vQ1oUkQ]b` MPnuPaPtUQra1Q` E50uPp -`ePt=e.5Q`!3]t1LPRdU\ %`u-`bcu30D-!I PPr֯H^-]L˭DRQqU`r\BAPaPrB`LDž5^,B%I SbiPQXj feTϟB LLPcXςPC5%Bl `iPgCų@o߰DLRPo}߄|AqZ@@K ӱpKDQ`;BNQwRkax`2`녨cϣ[I$`Qd `SsPo8SUbPRPsU}AxBuLDmatVa IaXAfBLT R `QQVhP+IQUm-`u-tP !`4줒ӿU챒VBALC`<1B` W2: /XؑLMPmRAy;/M/˒pq//$OPaSDy1!//%ؒ7VI]BPlRgRTP訰y??SPa2CP򿃪X??B` 7T+01I@"`a^%{/MPCAD vOOHKdQ>C2Z>6?H?qgtԁ/tufvop#U@jZ?F~?vvIC-pˑup` s?bujp`&T0a,Stԁ/p]e-2q?@< o5pmH!q."Tԁ.dԂxAƒ@T@>$pÿ^$gqa OvcrSr`Rno>㳫1e&BIrSrOvcv=ss"u~$bb pmp sty:b{bráe@6Cu&P`?hi@k""oűRT h]]9  5AU@jZ?F͂T??Fx?SF?P6 jt`  o?tk[>(A Vu aSuo.]&A]  *ԮJU2zGzt?@L#&A@bA#z7 5#1bO(b])j+$j&[ "*(w "?& ?AU$l/%B115 `?CopyrigTt (cu)Q02`09Q0uMI0cG0osA0IfO2@1rC0|1aO0iA0n.Q0 WAl0 G8s2Ueo0eG0v0d0"1E]4MŤ-&3 &7&l>]Uhz811U!J!u6?F+\.>djIIa? >"(EǑ I%K9E 5EU . K ;;5QIWEEEF _KnSA?8@@Bu?FDR> ?Nk"\6`bmMM\j=am {:e P> #? D"? 5-g?j4B7!B{oogr57!ibDAbE9 MuX7_'0U~rN!$a)5 5vIzHD # =hj0>T h]]9  qAU@;?FI5K?@xEL?6ѿP6 t`  bm&Qo?t@@L]WI,#u aiuoL> JU2zGzt;?@L?&A@b}#zs q#1b(b)+$&[9#H",ɺ (wH"?& ?A$ /%9#BV1V15 `?CopyrigTt (cu)02`090uM0c0os}0If2|1r01a0i}0n.0 WAl0 8s2Ue0e0v0d0"O1E]S4MŤ-9#b3 b7&l>]Uhzt1.A!JN!m.%-+0djV1ax߃S(FJ-&?F mO"> ?Nk"L[Majnha_:e P> #? BK=o? n-?j4Bs!B{__Wr5s!D8E6V 6?wj_|_ ^XE.U@z T6?F @' d_mOoT \VyWxRkrVo_bVh*g:n"'UaEjz36>CN>?Bn}]zGvN7V iDAbE9 MX+G'0UJPN!4ae5 MaHD # =hj0>T h]]9  U@jpiH?F^?@$?F]F?P6 t`  !a&ao?t,F=۳=u;u auo>J2zGzwt?@ʐLAw@b#z1b](b!).+I.&[-m(?0& ?A$I0/Q%!!5 `?CopyrigTt (c)02`090M 0c 0os0f21r0@1a0i0n.0 Al[0 8s_2e30e 0vq0dS0T"!E]$M-# %'&l>]Uhz!1!JF% djc~YZ(_N9zOMGE%_EH$SwO"K ңHE ^&_8_WE_OqOHOOi&DAbPE9 MX7'0UUfb>!$a % f1jHD # Uh4>T#]]9 M5AU@jZ?FL|+?@?F({~?P6 JuM` ?u JYt )t7zEA2&L_D?. > >tJU2q K?=@MJ&A@-b9(+bG)T+T&J[#"#a?& ?AH?/`)#115 {`?CopyrigTt (c);020G09;0M30c10os+0f92*1r-0f1ai+0n.;0 Al0 18s2eY0e10v0dy0z"!4B-#3K 7&)? b 0K8>U# ]A]A "1!ъ!PzQ <%OYAJEU . K;;\%W%EF($L[.FX5Eai%?F 6?wL;] W-\TB⤲8?F@&SA?F @ ?Nk"L'`bmMW_Ye[ٯ7 VK=F: [jr ? D"? 
XZvD?ta@Boogr5;!QEx]SiLO MLGXa~WOTZiAle59 XG"'0Uc>!$05 .HD # =hj0>T h]]9  IAU@K?FOgVb?FY%?Fwᳬ?PJW?>t`  ^Z?t#ə.E @x?r?/{4܂ G _Z+u u$ >  J JpU?43f<!.&P?b3bbfz@1bAe&bq$d 2zGzt.#@LT"&r(~&i$q)(+}'i"~*#81815 `?Co yrigTt (c)o02`09o0Mg0ce0os_0fm2^1r 1am0i_0n.o0 Al0 e8s2e0ee0v0d0"11E]54M-#D3 %D7.&lUh#7 V1#Ai!J# F/?B1B  VRiAAbE9 MX G{W'0U*R2N!I$a {VZUHPD  # #>h0JTpERI MU@u6?Fjc{g&?@%r`5B?Fm_F:Is?HNtQ`  II?t#]  _ii$!`kNu  iu%wj>M JQ T       "@&&JNU2"?H@Qs#N&A@b(bU)++&RNp}#>H1#19 ?#bbbz| R&/5R}#11`?Co0yrig\t (c)02h090M0c0os0f21r0Aa0i0n.0 Al!@ 8s%Be0e0v7@d@21a4t#-}#3 %746lJ#$JUp5 3 M19 1!q(!FnElrkRt# 8$kcD?@숒@)?FP?P"EkU |1\iOT/irvs}i>`_,of :m ,M ?? S&d2? *b?r4B!B oo1gr #5!>%(EQOOOO__FAc1_C_ D4ևS]_o_yRX+{_,__ #R__ (YQoSo+o=osd {oEeooooooFyUy{* hDVW(I X"c7I!pg bf'iMjSE t%XyG|'0UŢŞN!O4i |UGD # hz0TdYYBU@(%)?Fxg^?@0߯H?Fs7Uݹ?P} t` 87?t#_)TeΦu ]ukv  ̝L&UUU&I&3 U!!`?CopyrigPt _(c) 2\W09 M c os f"!r 1a i n}. Al00U (s42e0e uvF0d(0'HS$#-## 'I?k=#ǒ 2j$B##0U'BM3@&@F0Yk} #HA%DKb5DK3DK5DKED$!GiAZG5(|IDFX7yW'&Dŭ6!4YA yVZHD  # =hj4>T!]]>#AU@0߯H?Fz)e??F˞9q?P6 t`z  ?5t# rXѮu)Nu;):O]ע0 &> # " "+"+"+"+" $+"T-J[U7 ??& bA@ 2b4-#2zGzt#+@ Li68!6 ?}; 72U!:B#AA5 5`?CopyrigTt(c)20F@9M2@c.0@os*@f8B)Ar,@eAa8@i*@n. Al@ 0HsBeX@ej0@v@dx@)21"DM-,;?C G& l>]UhzAE!A#!ְ1J`#@{&pz?FUy{?NT,dKj&b&R&M/8T9K-q?F@F?FdG0 >Nk"?YE*]\LThKjfGD&Qhj ?iU3UGe&Y`?@/Jb)h5oWKoWo;èKod}l$ro`nE2K?Fˌ%4G{?F -q? eZR^{|\?6?vodmP}l>MBe 9 N 1iS? 1?grl#51:PO"4ga@Sq(UE^@O|ǐ[`ߡa v2rbA @3ijD&}laqOz o?#WHZ'Qu=*[^N?F]%P?@O&@ c?FаM<n ?` eb0)\(Pfodm}nv}l?V5(?q$Ix[cN#W{ #iA@&QeE9 MX''0UjdN!0 /HD  # =hj0T h]]9 #]AU@}%?FR$,?@6#V?Fc_3֡w?P6 u` ?u#t  3zM%t3ws%.<%H._'B> #PJU2N贁Nw[?@ʐL+&A@wbu](i+bk)%+&p%#4">9"<5!?& ?M#bbb͓z$ p&"/%T%#L1L15 w`?Co yrigTt (c)02`090M{0cy0oKss0f2r1r 1a0is0n.0 Al0 y8s2e0ey0v0d0"E1E(]I4M-%#X3 X7&l*>(UhE j1#$Ac!J`%#@5e$_F??!foA3nBM( V?FJ-R?@HQ??FS!R?P| llHL6VLD5E+v;*AůʙH: 8snP%$? k?Q? \?4_!__WrA5_! R,U3OC@IL?F4~Kz?P)Oy޾ T!]]>#IAU@܋?F2Oa>?@#r`5B?F*ݷ?P6 t`  >B?\oEt# ŖoZu 0Nu;Y:|8>$# JpU>##<?2& ?5bbbz@b#ANi&bu$'h 2N贁N[2$@ LX"D&v(&m$ :;T'm"*B#d1d15 5`?Co yrigTt (c)020090M0c0os0f21r 1a0i0n.0 Al0 8s2e0e0v0d0"]1"a4M-/p3 p72&l>0>Uhh= 1#!eJ`#&FNTM\jh1B,RBM(!JoUy{O_ e@ So4(D@^Tw?F]֝ä ?Nk"?w_KT\T+\_(Kjӷjmplhj -`\TPQE\Uui,St_o u~!ՠXEO9= T?FL?;uQ_Pm"rA @3sjQ1lğ=GoYniAleE9 MX|'w'0Ur^N!M$|A vzHD  # =hj4>T!]]>#IAU@܋?Fj?@2r`5B?F^ݷ?P6 t`  >B?\oEt# 6ΠZu 0Nu;Y~:#&}>0m8>  PJU2N贁Nwk?@ eL&A( 5bM(b[)h+h+$h&#[-M$#?& ?b$Z*'*B#O1O15 5`?CopyrigTt (c)02U0090M~0c|0osv0f2u1rx01a0iv0n}.0 Al0U |8s2e0e|05v0d0"H1P"L4M-/[3 [7&l>0>Uhh= m1#!S!J`#*&FPUy{_?NTMjC1gc RVMʣ(D#@ ![?F"$. ?Nk"W?IKTLTcKjRlhj p=' (*!)i,SO2Kes9T;QEGUFw6qX}__TM7fTZT\Dž}_P^52XOES"rA 0S#Es3iAleE9 MX|$Gxw'0UrIN!4gA xvzHD  # =hj4>T!]]>#!AU@܋?Fc_v?@#r`5B?F*ݷ?P6 t`  >B?\oEt# ap'EZu 0Nu;Y:|$># BJpU>#͙;? & ?5bbbz@}bAA& K&@ 2zGzt?"K@ L0"&N(bM)+TK/8Z*B<1<15 5`?CoyrigTt (c)s02009s0Mk0ci0osc0fq2b1r1aq0ic0n.s0 Al0 i8s2e0ei0v0d0b"5194M-t/H3 H7T &l>(>UhE=3Z1#E!J!FENTMjN@1h(NoUy{OO e@ o)4(D@^Tw?F]֝ä ?Nk"?'_KD3\T+\(Kj?ӷjmlhj P\ DiAlePsI=MXsG'0UzbZ6N!%$0 sFEjHD  # =hj8>T! ]]>#!AU@܋?F*?@#r`5BW?Fw?P6 t`  >B? \Et#տ|u l4RQu?]>I$>I# JpU>͙#<?& ?9bbbz@b#AE&bQ$$D 2zG[zt#@ L4"&R(^&I$Q)+]'I"^*Bh{.M#To1o15 9`?CoyrigTtk(c)k2009kM0c0os0f21r1a0i0n.k Al0 8s2e0e0v@d0f"h1l4M-:?{3 %{7&l>(>UhE= `51#I!J49fAn^ R)X f"B!Jui,SG?RSlOS u~!N]8-19= T?FL;uQ ?Nk"?YVTQRXQOnğ/=i n (\ wTiAaePE9 MXLg'0UbZiN!)$0 Lf`jUHLuD" # I>h(J TEI 3qMU@jZ?F({~+?P >uA` ?u#VMJ*  )8*A )LG#P`G.$*G8BV6j>t   t|zpb"#"J>U2zGz?@9 +#>;&A@b]m(b{)+Y&^" !l$jN5#",#C"&p "Ms"b{);x'^",^"V/5#113#`?CopyrigXwt dc)0W20090M0]c0os0f2R1r01a0i0n.0 AUl@ 8sBe0e0v@d0R#"lJ8JUlC -(#s!)(J!O?zQ0 # # %YA NEU . 
K ;;LGE3EF)$OK.FHE\Vai%?F 6?wt\[ W-L/D5Dꤲ8?F@@SA?F @ ?Nk"?wY2`bmM3_Yw[7 K=ދF  #j1H|AIx]Sit\1IMLHDEnPOoDJ1AAeCBp OOG|u?F9 mO"t\7=L驣/XQ eel.%-+?@llփ?FU~> @oVoɽ7y2VzdOfшlA=$o`"jQ3 ex}@4SSa[?o4ui'e!Oa#Əa$jiMiyI,%!Xtz'0U &`50HF vؚ_`H>;s `Dt![x_H=ߚF(M#c&OB 6~d@+8 [sG^#ff !?#%|((W@֛+x-36i,k,x 7  NXUFD  h(^TYYB UFjZ?F~??x<F BPG(?P } X ]B66]  TTUTTTB9T9ȅHBU?? ?B@L&d2?6-G(\.Y(7.sU!}&/)Ѥ&)  0cB`,dat 0b 0se,2rv2,0ompu 0#3di0t0i013d0n0tw*0rk!e ^| (S G& iz ?!3E?? ?  wpp  pqw~ qwqwwpqwqqpppw'wwwqqbwqpwww7{{w^Drag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aabjZֿ~??cN׿ mF1??SbM1{V↑T*J?@ ?UG DF P# h @T(PYY# [U@jZ?@~?tP} u` ?u#2>D>HX DHlHYHH (E9.EUO؍:DUDXXlUlH"' &T",'U)2&5qq/4 ?""'4b0{[`:#26%Q_9 $7&9"L1" 2^!'>~_@ÿ@qh@o?@`B?gD]F0#Bu)@`-1֠Bu@Ur#CC5CAB""Ch9u`u `"[ b!u`2ւ/3Qy1y7B`?CopyrigPt0(c)020P90MPcPosPfRQrPQaPiPn 0Al` XsbePejPv&`d/`VPs__SBNcPm!#5P8[`CQ2y1&3db4&5=Bh8=/3z6&3d66 Ԁ2&4#/30U!r3]ge<^1E U#&5m Hq&5twV5t{3t{FEt{t{y {et !{|tYwmAdaуIOBT.1:a:a&20#V21032=FB3J112I5db6q 7~C|hhE) *^Q_`!CQCQA11B=B;8ARrSqe a> !b1AE4FAނ.1#8dT7`XQ`]rw` MPnuP%aPt%ar6aYQ_` E]0uPpU`e`tee^V5Qn]1PRRd M`uU`b¯Ԧ30Z! PPr#5GCHhz]3 D&bQe^\޿𿏫AP)aB`Ҩ75^gyϏ-% S*biPaϷZeTϏ-LPcX&ߠϫk5N`rBDl 2`i`g̨)@`ߑRPoʂѩ|(ZP\@ pKD`ĖUSPaPEePlP6?ЪX9` TP7`?>` /aZPa_qSbI`zy`DRa0"ZʒT=NwRkqa<.2`s @ҨגcOaI` ad2`%c& d nĝAs Nx+P=鑙mrDatV/a I͢@+aNfP󨑙 R `Qa// '=/KM`C/P⯺/+QUmU`utP I`/ %N4$?6?s 2v? 2Nv??;Hd2QC_3` Wz%?NV%O7OCt aOO#ޅO`M&`mR{OOӧY"Np_1_E;OUe1ySQ]_o_M qt!Tw*foSU@jZ?@?/v/vUpSp2u8p` ?bup` B=ݝ`M,{t!Wp5'm-ׁ<3ohm!Qb8q׏XݥBT!.ct!xAZg>pÿ=$ bUfq vU֒rrrvvvrsY"`ERon_10 бeCZ~bb W}Wbr@":tbWuP`TfnMq1o$6zK}_obK1K2\#b^ºaaaaFҴ_AaY6P`u$`bTs`ݡs`ap䑹x^tݥ5f_b&bׄЂF3s͵N?SYDs\rV@(#e"88G^80G8ɐIFZ,@rb4 2V}uauaxuYV{nx` % P&S.1ua`NEW_ORKd2H IPհs!R۰RЕIհSÑQȑTi|o&ohXAG'0U?8!0oHD # =hj0>T h]]9  5AU@jZ?@͂T??@x?SF?P6 jt`  o?tk[>(A Vu aSuo.]&A]  *ԮJU2zGzt?@L#&A@bA#z7 5#1bO(b])j+$j&[ "*(w "?& ?AU$l/%B115 `?CopyrigTt (cu)Q02`09Q0uMI0cG0osA0IfO2@1rC0|1aO0iA0n.Q0 WAl0 G8s2Ueo0eG0v0d0"1E]4MŤ-&3 &7&l>]Uhz811U!J!u6?@+\.>djIIa? >"(EǑ I%K9E 5EU . K ;;5QIWEEE@ _KnSA?8@VBu?@DR> ?Nk"\6`ObmMM\j=am {:e 뇿P> #? D"? 5-gg?j47!B{oogrK57!ibDAbE9 MX7'0U~rN!R$a)5 5vIzHD # =hj0>T h]]9  qAU@;?@I5K?@xEL?6ѿP6 t`  bm&Qo?t@@L]WI,#u aiuoL> JU2zGzt;?@L?&A@b}#zs q#1b(b)+$&[9#H",ɺ (wH"?& ?A$ /%9#BV1V15 `?CopyrigTt (cu)02`090uM0c0os}0If2|1r01a0i}0n.0 WAl0 8s2Ue0e0v0d0"O1E]S4MŤ-9#b3 b7&l>]Uhzt1.A!JN!m.%-+0djV1ax߃S(@J-&?@ mO"> ?Nk"L[Majnha_:e P> #? BK=o? n-?j4Bs!B{__Wr5s!D8E6V 6?wj_|_ ^XE.U6Vz T6 @ d_mOoT ݬ\VߦyWx?RkrVo_bVh*g:n"'UaEjz}36>CN>?Bn]z/GvN7ViDAAbE9 MX+G'W0UPN!4)ae5 MaHD # =hj0>T h]]9  U@jpiH?@^?@$?@]F?P6 t`  !a&ao?t,F=۳=u;u auo>J2zGzwt?@ʐLAw@b#z1b](b!).+I.&[-m(?0& ?A$I0/Q%!!5 `?CopyrigTt (c)02`090M 0c 0os0f21r0@1a0i0n.0 Al[0 8s_2e30e 0vq0dS0T"!E]$M-# %'&l>]Uhz!1!J@% djc~YZ(_N9zOMGE%_EH$SwO"K ңHE ^&_8_WE_OqOHOOi&DAbPE9 MX7'0UUfb>!$a % f1jHD # Uh4>T#]]9 M5AU@jZ?@L|+?@?@({~?P6 JuM` ?u JYt )t7zEA2&L_D?. > >tJU2q K?=@MJ&A@-b9(+bG)T+T&J[#"#a?& ?AH?/`)#115 {`?CopyrigTt (c);020G09;0M30c10os+0f92*1r-0f1ai+0n.;0 Al0 18s2eY0e10v0dy0z"!4B#e-?3 7)& b 0K8>U# ]A]A "1!ъ!PzQ <%OYAJEU . K;;\%W%E@($L[.FX5Eai%?@ 6?wL;] W-\TB⤲8?W&SA?@ @ ?Nk"L'`bmMW_Ye[7 VK=F: [jr ? D"? 
X?ZvD?a@Boogr5;!QEؿx]SiLO MLXa~WOTZiAle59 XG"'0Ujc&N!$05 .HD # =hj0>T h]]9  IAU@K?@OgVb?@Y%?@wᳬ?PJW?>t`  ^Z?t#ə.E @x?r?/{4܂ G _Z+u u$ >  J JpU?43f<!.&P?b3bbfz@1bAe&bq$d 2zGzt.#@LT"&r(~&i$q)(+}'i"~*#81815 `?Co yrigTt (c)o02`09o0Mg0ce0os_0fm2^1r 1am0i_0n.o0 Al0 e8s2e0ee0v0d0"11E]54M-#D3 %D7.&lUh#7 V1#Ai!J# F/?B1B  VRiAAbE9 MX G{W'0U*R2N!I$a {VZUHPD  # #>h0JTpERI MU@u6?@jc{g&?@%r`5B?@m_F:Is?HNtQ`  II?t#]  _ii$!`kNu  iu%wj>M JQ T       "@&&JNU2"?H@Qs#N&A@b(bU)++&RNp}#>H1#19 ?#bbbz| R&/5R}#11`?Co0yrig\t (c)02h090M0c0os0f21r0Aa0i0n.0 Al!@ 8s%Be0e0v7@d@21a4t#-}#3 %746lJ#$JUp5 3 M19 1!q(!@nElrkRt# 8$kcD?@숒@)?@P?P"EkU |1\iOT/irvs}i>`_,of :m ,M ?? S&d2? *b?r4B!B oo1gr #5!>%(EQOOOO__@Ac1_C_ D4ևS]_o_yRX+{_,__ #R__ (YQoSo+o=osd {oEeoooooo@yUy{* hDV!VV(I X"c7I!pg bf'iMjSE t%XyG|'0UŢŞN!O4i |UGD # hz0TdYYBU@(%)?@xg^?@0߯H?@s7Uݹ?P} t` 87?t#_)TeΦu ]ukv  ̝L&UUU&I&3 U!!`?CopyrigPt _(c) 2\W09 M c os f"!r 1a i n}. Al00U (s42e0e uvF0d(0'HS$#-## 'I?k=#ǒ 2j$B##0U'BM3@&@F0Yk} #HA%DKb5DK3DK5DKED$!GiAZG5(|IDFX7yW'&Dŭ6!4YA yVZHD  # =hj4>T!]]>#AU@0߯H?@z)e??@˞9q?P6 t`z  ?5t# rXѮu)Nu;):O]ע0 &> # " "+"+"+"+" $+"T-J[U7 ??& bA@ 2b4-#2zGzt#+@ Li68!6 ?}; 72U!:B#AA5 5`?CopyrigTt(c)20F@9M2@c.0@os*@f8B)Ar,@eAa8@i*@n. Al@ 0HsBeX@ej0@v@dx@)21"DM-,;?C G& l>]UhzAE!A#!ְ1J`#@{&pz?@Uy{?NT,dKj&b&R&M/8T9K-q?x6@F?@dG0 >Nk"?YE*]\LT_hKjfGD&Qhj iU3UGe&Y_`?@/b%)h5oWKoWo;KTod}l$roҥ`nE2K?@ˌ%4G{?@ -q eZ̿R^{|\6?vodmP}l>MBe 9 N 1iS? 1?g]rl#51:O("4ga@Sq(UE^@O|ǐ[`Ρa g v2rA @3ijD&}l迲aqOz o?#WHZ'Qu=*[^N?@]%P?@O&@ c?@аM<n ?` eb0)\(Pfodm}nv}l?V5(?q$Ix[cN#W{#iA&QeE9 MX|''0UdN!0 /HD  # =hj0T h]]9 #]AU@}%?@R$,?@6#V?@c_3֡w?P6 u` ?u#t  3zM%t3ws%.<%H._'B> #PJU2N贁Nw[?@ʐL+&A@wbu](i+bk)%+&p%#4">9"<5!?& ?M#bbb͓z$ p&"/%T%#L1L15 w`?Co yrigTt (c)02`090M{0cy0oKss0f2r1r 1a0is0n.0 Al0 y8s2e0ey0v0d0"E1E(]I4M-%#X3 X7&l*>(UhE j1#$Ac!J`%#@5e$_F??!foA3nBM( V?@J-RHQ??@S!R?P| llHL6VLD5E+v;*AʙH: 8snP%$? k?Q? \?f4_!_t_Wr5_! R,U3OC@IL?@4~Kz?P)Oy޾ T!]]>#IAU@܋?@2Oa>?@#r`5B?@*ݷ?P6 t`  >B?\oEt# ŖoZu 0Nu;Y:|8>$# JpU>##<?2& ?5bbbz@b#ANi&bu$'h 2N贁N[2$@ LX"D&v(&m$ :;T'm"*B#d1d15 5`?Co yrigTt (c)020090M0c0os0f21r 1a0i0n.0 Al0 8s2e0e0v0d0"]1"a4M-/p3 p72&l>0>Uhh= 1#!eJ`#&@NTM\jh1B,RBM(!JoUy{O_ e@ So4(DF^Tw?@]֝ä ?Nk"?w_KT\T+\(Kj?ӷjmlhj -`\TPQE\Uu?i,St_o u~!XEO9= T?@L;uQ_Pm"rA @3sjQ1lğ= GoYniAlePE9 MX'w'0UrJ^N!M$|A vzHD  # =hj4>T!]]>#IAU@܋?@j?@2r`5B?@^ݷ?P6 t`  >B?\oEt# 6ΠZu 0Nu;Y~:#&}>0m8>  PJU2N贁Nwk?@ eL&A( 5bM(b[)h+h+$h&#[-M$#?& ?b$Z*'*B#O1O15 5`?CopyrigTt (c)02U0090M~0c|0osv0f2u1rx01a0iv0n}.0 Al0U |8s2e0e|05v0d0"H1P"L4M-/[3 [7&l>0>Uhh= m1#!S!J`#*&@PUy{_?NTMjC1gc RVMʣ(D#F ![?@"$. ?Nk"?+IKTLTcKjRlhj p= (*!)i,SO2Kes9T;QEGUFw6qX}__TM7fTZT\Dž}I_P^52O,ES"rA 0S# Es3iAlePE9 MX$Gxw'0UrJIN!4gA xvzHD  # =hj4>T!]]>#!AU@܋?@c_v?@#r`5B?@*ݷ?P6 t`  >B?\oEt# ap'EZu 0Nu;Y:|$># BJpU>#͙;? & ?5bbbz@}bAA& K&@ 2zGzt?"K@ L0"&N(bM)+TK/8Z*B<1<15 5`?CoyrigTt (c)s02009s0Mk0ci0osc0fq2b1r1aq0ic0n.s0 Al0 i8s2e0ei0v0d0b"5194M-t/H3 H7T &l>(>UhE=3Z1#E!J!@ENTMjN@1h(NoUy{OO e@ o)4(DF^Tw?@]֝ ?Nk_"?'_KD3\T+\(Kjjmlhj P\DiAlesI(=MXsG_'0Uzb6N-!%$0 sFEjHD  # =hj8>T! ]]>#!AU@܋?@*?@#r`5BW?@w?P6 t`  >B? 
\Et#տ|u l4RQu?]>I$>I# JpU>͙#<?& ?9bbbz@b#AE&bQ$$D 2zG[zt#@ L4"&R(^&I$Q)+]'I"^*Bh{.M#To1o15 9`?CoyrigTtk(c)k2009kM0c0os0f21r1a0i0n.k Al0 8s2e0e0v@d0f"h1l4M-:?{3 %{7&l>(>UhE= `51#I!J49fAn^ R)X f"B!Jui,SG?RSlOS u~!N]8-19= T?@L;uQ ?Nk"?YVTQRXQOnğ/=i n (\ wTiAaePE9 MXLg'0UbZiN!)$0 Lf`jUGD # hz8T YYT#BqU@5?@`0 ?@ t2?@?P} ։u`h?u# Lv3<@PT=d,2<<Pddt m۶YtjfUh!NUdI,@=#B!!u+`?CopyrigP}tg(c)gW20 9gM ]c os f"R!r 1a i n.g AUl0 (s#2e e v50d0+"!$4#bB-m)|# '?h==$&$#66 2j4$#=#0UB%3#3&50$^ #Q9 =#HAQ5D<KE%DdGg*Fj#aOb'*TATA4"1(3&Sa4% `2HAEBOiSHdiSi^PQ5 3FXm'.W'DŜ6-!40 VZUHLuD" # I>h(J TEI 3MU@jZ?@({~+?P >uA` ?u#jJ* )8* 9LGA )`G#dtG.$*G8BVj~>t  e t>"z "p{b+#7"J>U2zGwz?@9 S#>c&A@b(b)+,&" !$5N]#2T#k"6 "M"b)HF;'","/]#11[#`?CopyrigXt dc)020090M0c.0os0f21r0Aa0i0n.0 Al+@ 8s/Be@ej0vA@d#@K"ulJ Ul,aAaAC 5LU(#!Q(r!wW?@s80&9<01# 0IK A# ѥ,Og NEx]SiL5[ML&X39^PzQQ_c_ ?%YAQ)UEEU . K ;;\W)U I@)$_[.F&XA:jai%?@ 6?wL^ W-\/&T5Dꤲ8?Eg@SA?@ @ ?Nk"?I5`bmMohiy[7 K=F  %?j1&XIOD?@u@rLu7j T_ʹhLi,KV@8$Ȯ?@1x}b} yy_@){yEZJ]%8O@G \t;8j|c1}p?i @'WE"NO_T ZYAAekBp OOLJ}u?@9 mO"9=\/hQeel.%-+@mlփ?@U> "8?ɽ7yf\tOfj|A=$䚏Bp$za3ex@4Sa!6Wy'eU!a)#kq&ziM2i?IT%!Xh<'0Ub6502 ?}HD # =hj0>T h]]9 !AU@+3]q?@8i?@.!'?@\S۳r?P6 t`  C=?t#9sxZlRJ)u duo$>  J JU?'N<?& ?AbA@bNbbz87 5#,"b4$,"'b4$2zG7z?@L&5/G/Y 3%,"j&d,,"Rj*/1/15 `?CopyrigTt (c)f02`09f0M^0c\0osV0fd2U1rX01ad0iV0n.f0 Al0 \8s2e0e\0v0d0r"(1E],4Mb-;3 ;7&loUh#7 M1D hp  o%J F( dj!1B2 bdV.RiAbE9 MX|GW'0UR)N!!$a VZHD: # =h4>TB#]]9 TIAU@1q?@!9?@aww'?@տԌU?P6 t`  9F׿=?t ڰ#slgW ª%?)Zu Lhu!s8hAh hJU'N!?2& ?AbA@bNbbzc a#X"b`$X"b`$#2N贁Nk? "@L&a/s/ _%X"&,X"*T#[1[15 `?CopyrigTt (c)020090M0c.0os0f21r01a0i0n.0 Al0 8s2e0ej0v0d0"T1X4M!EQ#-0Og3 g72&l*>0>U#3 A y1vXD #p %J`#@.T'oo>hn56>fcRM(! Zך4_F_ b/OSϏHT >ɪ#f:?@*S)Y=> yy_@YjeeRT;Gq!Qh~n~OkzEMsZJPl -iU@SQEU"pL._qw;m;aU_ BnhX#rA 0X"IsTj߿kHhl!Qdok!(Npd 0Ale E9 MXXGB'W0U*}N!M$)Aj5 vzHD: # =h4>TB#]]9 BIAU@b?@;h?@ ?@F_a4w?P6 t`  /?t {/_  g?xRJu hu!s8&> JtU2]q ?=@ʐL&A@5b]M(b[)h+Ih&[#$"#?& ?AS/t)T#115 `?CopyrigTt (c)O020[09O0MG0c.E0os?0fM2>1rA0z1a i?0n.O0 Al0 E8s2em0ejE0v0d0"@14M#B-,?$3 $7&l>0>U#3 CE761$!; #p _%J*!ԌSU hneNw3󛍩(D#2q>@t|M?@a^!v:1 yy_@I56>[ŀQhnO:@K0hd7l aV@S8E8Vؚ㱫L{_pZ5X50U-0f:?@D?cHe-1>[W9k]geϺQ Y" KaĘ\Bqj]GRׂP_FR}LY5E:/U=uOqsR Y2JZ8 b G0Ale59 MXGB'0Ur:N!R$XA'5 vz_Hp>,ÿ;s ZH(GBFXM/# EOBH8~druo@+CX"o[sGx o$$a(xZ,X8/!!-3#!6F֛8'!N:)!<5?,!B/!F!I!M6H!PN!TVT9! \(!^K!?bUFD  h$^T YYBBUF\.?x<F BP(?43^P? Vs V6gH?R\d ?8EB G 09gB`.Cloud,netwrskcaUins t,pa !i sa ! %e^ M%J 1Gg YR 1H?_^p Qwwx0w x3wxwx0wxwww xQww ww0wp wwwwwwwz pww/ p7^LDrag thuesapWonodwipe.b~߿~?? XAP7d??O\.ߗr3r\.CUG DF P# h @T(PYYUEUF\.?~?F~׻?P} <Y ^.B" N'O'*Q'R'S'T' *^ Y&Y u` ? 
"s'_":s"Ds"Ns"Xs"bs"ls"vs"s"s"s"s"'&&:&D"uN)*2<7)VU2^L[b1j2 ^`s" H3z3'N‚??lF ?ma[Z<` Ac@entColn@r3z[_b[[bNbbzI1dq9E 8AЇEO 24bYP{[`:BeR&nVB#32Q?]q ؂3Q8A*W1`Vis_MRXY%cPm!P5`12`݉`?Ap]y@igPt?Pu(@)?P20L`U9?PMPc@o`'of>bAr2`AUa@i@n% ?PA@l?P7gsbe*`e@v@d%Bd455uO#4[%B+@q~9p?@IBp7u@usPRD"J-D"t=2u0`B-1D"֝ruL &@BGm3ry=_"n4O N)9u`u `"AA b1O$u`B, ^aepcT<@o?Pm@sĴ@a>5SRTJAn@e@?aT]N@twBk?PD@vVbe)EяFE*UM_q|U۟e"4Fpd(aC l &&2::)BDD BRXX|bb2llbvv TNA5 rT&_Qj~}8pK!bdzӲ=`pM0 : D N X b l v%‡'_"D_"N_"X_"b_"l_"v%™2:2D2N2X2b2Hl2v2BS&2NC󨕃FkZV?F'IOғp~կˍW 2r033(FE9<QߐlzBO$ߪ߼߮(U9|>}>MR,AZl~(|9x uSV!+=  (UǎoC  (ָe.rMi>3LÔ  eD2(dv  Q!Uh!%5QU}4< ?VV$`fBh}Ȳ7O!<?F/*Ȳ/2sU~1p%)E+D>L?/Q<,6FE%mEB?T?f411%U\+:-yO?H1BdhPUifb}WSȊOt6XQ4X @ pp` k*`n`RPn`dxQdЯd79@#AUUiB+T.(T;)dH*d db$"Tt ,@AaiE)khp:j%56_:9c&ЩD3s͵NSY2a)k8la)k(8l|)k 8hm8_ZI۩Qy`Uee8^6JVAAD @^R` WShpCꥁs!`L ctsPzTq`ss wTyr"^RS` pu EqC#5#s SzbCRsqozA`BiEOO\_xX#Nsqe'2q[[!c0_ZˑU#|7t:}#/'_x # fHavU# 1H !eU iEQ % UH duD " # J U> hzi@T(iiJ\ UF~?F~?tP @Yu`?(.8uBbSA{szat? Cteb"0#$YE['?a&?.tb'|z酱2Q?AG}$&}qam{c8`Con0ect0rz 1l2z{{+` Li2?15-tzYaU#F131 ?WPa0t0r0)4;1Z7}`T5M151?UR0u0dB2g6Rz8%"Q-!c_C~#J *$&#FH-B;"2MC &BS{%ZMCQ-!dgG 1`?1py0i@h0 c)20RP9MB0c0os0fDDR1r8P11i2]. AL0lJ=WsRedP1v0dP]@>ֵDCz\MC2qP?_f[!?\.? ~:o -D~% IKMC&&UgyQh @ 1_`S0aP2BiEfoxoyXMGw'0cia&!|$0 vjlAUEu P-QyB:MCz?bkp Tbb-.(},8>8dup*ar%@68K]ou ÏՏ ?|>R9=PbtuYȟڟ x<w(LA7BUgyu]ͯ߯"}>%7GZl~uAҿ '%OHL_qσϧϠue kZV?F'IOғ %NMQdvߚ߬ߠui+.Q7Vi{Q"QucwHD: # h8>T ]]9 M UF~?F~?JP6 7@ZAAM ,'@' T'h'|']"' ' "  " '""'"'0"`''JuM`l? 'y""y"6y"Jy"^y"ry"y"y"y"y"y"y"y",y"&,y":/L/^"uh)!b_A@36Jte! i(Bt1 LJFb4D"FEB>fCJ[oC'N(?F ?"b96BhE#fCtoC2Q͞C@MEMJ3V8E@8;G2JoCQuAuA5 `B`?CopyrigTtk(c])k20 `9kuMPcPosPIfRQrP+aa+PiPn.k WAlF` XsJbUe`ePv\`d>`PoCQ! \l-OS WFfEP b$U9 fuXG_'0UHrũFZ!D0 fzlJ\>Uhqq]!!0!PQAcHQ@1S?Fȓ$,_q?^"d#e4Eq ^"3d]q 0!0"/"Q!0"fBJ4?F;W?FjlJ?FH E(>?NpKv7atuXad#hX\a3g6a0e!2 4#q E6?Fv9?Fb4!?FЩԏ挍maVv]2:{(?-yjI>P!Xj?FT꞊j?F'K?F;(嗟éT.kvd*ĮVq#(]HXEAr ?FK(umB)?F.~rYYH_amAáF+x(~*=į֯uhY*?FY.(v0zkd)D6GdSZpB{(i]"/M͐?Fߧ78ET+Lk ]D am++Tz&{+69[Ϟuf?FP$?F [13nٶ7gۣ挠Y,Idkim0H@eQY"ߤİuӧ?Fl?|f ͲKlKBG]fkϐߵAZV?F:?F* ?P$ 1EK3 3i{߮Gz~}Ԗ0>_NO5q :_" s3t? ט P&d2g? 41z3B2_qr5E~\.儐?FQ( BP >5 w pmD0VY l j٩ 3 UIPY?F3>Dx?Ptk= `K0z߂3S0kg; z0f=kVN?PX#Z3d1H-Z?F)$F\;X})挗J hG~W*fU4~%?FVFKL/GaH)r~(H  N/?t4ZA rGg?F4W eſ?FH?Ýi`)6¿l#kpc'??4FF\>FI(1 O怘Rrq SCH>t v|RuONHD: # h8>T ]]9 M UFx<?FAj?F~?FwQuu?P6 7@FA]A]M ],'@' T'h'|']"' ' "' ' " '""'"'0"JuM`^l? 'e""e"6e"Je"^e"re"e"e"e"e"e"e"e",e"&,e":/KuT)!bA@*36JtQ! 2l] 2t12zG57RQ57)\(1>RJCJ[SC'N'(?F ?^"b'96BtSC2B??088r0MICJF8;;G2J2SC@?JE=26S0#CSCQYAYA5 `2`?CopyrigTtk(c)k20 `9kMPc.PosPfRQrP)aa@iPn.k AlD` XsHbe`ePvZ`d<`P񾔝mT!l-pHS %WF{Vi AbdU9 {YXpG'0UFrōF!D0 fzl*J\>Uhqq!!0!QsAGHQCf֩?F1Qd?J"P#Oe4E] J"3d]] b0!\0"/",&JBJ2(B@?F,/k?F]?F8{# \*?LpKv7_tuX_P#_hX\_3?g6a0Q!2 4#?DEh?F~E ?F_pLVwMҏ䌍m_Vv]$A2:{&-yjI-H+ŕ' YbҖ.K?uAc9z^̰f5U?FF/`_`P \!b@ir_f|Ƣn y(Tf>rpd6ENj/-g?FRr?Fn^\K0z߀3S0k~mr{~.*{ S4K" =b™? C? "@wD#<2tr5EOK4n?F{׀?FBi䌗J hGW*fUÝ)k4Il RuF?FB5r?Fn?$Xp ?G'gH᷷?r~.H  N//4;KGۨI} yFa%?FO"Z/g`)l#&Ƚkpc%//2DF7?d>FM?MRrq LS3H>{v|R؇?>HD: # h8>T ]]9 M UF`0 ?F M  ,'@' T'h'|'] ' ' "' ' " '""'"'0"JuM`^l? 'e""e"6e"Je"^e"re"e"e"e"e"e"e"e",e"&,e":/KuT)!bA@*36JtQ!  
\(\] 2t12 _p= 57]@ף57%)2] >JCJ[SC'N'?(?F ?"b'96BtSC2Q?0G88r0MICQJF8;;G2J2SC@?JE=Z26S0#RCSCQYAYA5 `2`?CopyrigTtk(wc)k20 `9kMPcPo%sPfRQrP)aua@iPn.k_ AlD` XUsHbe`ePvZ`d<`PRmT!l-pHS W$F{Vi bdU9 {YXpG'0UFrōF!D0R fzlJ\>Uhqq]!!0!QsAGHQ'n-?F1Qd?J"P#e4E] J"3d]] 0!0"/",&JBJ2G2,?F,/k?F}X?F8{# < ?LpKv7_tuX_P#hX\_3g6a0Q!2 4#?DEU: Z[?F~E ɀ;9C5 ?F?wMҏm_Vv]2:{&-yjIAim.H@eQW °A{?FH 6I_^ d ͲKlK~BG]fkߵy X0?Fjd?F*OzE=M2'-H+ŕ' YbҖ.KuAc!N\?FZf5U?FO/(Sÿ?P? \!b@ir_f|Ƣn? y(Tf>rp^N/-ُg?F6+:n^ºK0z߀3S0kmr{.*{ S4K" =b ? _C? @wD#<2:r5E1f.]n룠?FF[PBi䌗J ~hG~W*fU)k4R9uRuFpZ?Fn$Xp G'gHr~.H  N//4[/I} KKbv"Z/Ýg`)l#&Ƚkp!c%//2DF %>FM?MRrq kLS3H>{v|R?>HD H# h @>T(]]9 MT  UF|>?Fd2L&??FVR?P6 @ZfA]A] >M /]8HC8\C 8pC8C8C8C8C||C 5C 8" 5"C5$""C58"C5L"JuM` ?  "4'C"R"f"z""""""",",".,"B,"V/Kup)8!bSA_36Jtm! q$y BYt1B5A!EM!E*G[~By >zCJ[C'N'(?F ?7"b'96tC2Q??088r_HEyCJV8;Q;G2JB2C@?zE= BFS_#CCF~F$u?FNR?F~Sߪe f"l#ͽU>aſf"J3RRy f"l#>b@Ed3̀ͅap`L!  f$#p)bq65q p)CaAA5 `%B`?CopyrigTt@(c)@20:p9@M&pc$poKspf,rqr pYqaPipn.@ Altp $xsxreLpe$pv"pdlppT$!q$-Hc gFV$VibE9 YXG{'0>bUŽF!D0 UilJ"\>Uh"" !$!8!L!q'ArwHqI}^?Fwo?0je4EZeLdd?]e\DL"K""L"zBJT" Y?F]?Fo?F5? Ѭ:?1ipKv7tuXZlhX\Zexdg6gJ0m!2 4?#oD Um?Fڼt?Fɤ_i?F0>m* 9?F?eD;]D m++Zl?Tz&{si?69Vu9,"f?FXev!Jx W?F8+y٨HZ ͲKlK~ZlBG]Lfk&8}H ?F$j #?F?B?FH+g'Zl Yb*xdKuAc>j ?F-}JӦ?Fզ.ki]7aB< x!Gi@ir_Zlf|Ƣ y($M-wF ?d2huON{v|RONqNwHD: # h0>Th]]9 MP UF~?F~_?P6 dnA >IM .)B) V)]j ~   ) )) ) ) ")")2"e)~)JuM` ?E){"${"8{"L{"`{"t{"{"{"{"{"{"{",{",{"(,{"JU2Q?@M?CJOFA@!@HbʏIKFrB !RDJICB'?Vp BABbI2[GrBLrBOB-ICR SC9 _VICQQW;HB`?CopyrigTt (c)=`2`09=`M5`c3`os-`f;b,ar/`haa;`i-`n.=` Al` 3hsbe[`e3`v`d{`Pi @bU9 VHXWXA'2qpVtUIa fjlJ Q\>UhQQ]~ !!2!DEQ!=H^A@1S?FO$,q(P`"3 e4Es `"C ?]s Jt{Z rGg?F;mx>F9h E(>?NÝ*`)*f#퐲>+3cݛs a0g!2 4#@ES~%?FV?F4W eſ?F}G*H᷷Ļl#*لȽkpcXSH?F-Z?FF?FPL߯@J *⁈hGr~لH  NᵪS UI?FY?F섰a?F@\ K0z~**3S0k~Ä&~ԉ~$ARm}A\.?F:?F3?FDٳx?P+k= 1`竪3*{Gzޟg; z?ԉf=ks :a" F f 94FQ2! 3B2:9rt5EG~[ZVn-C4?F4?Pu>5 Οw ð}H 1唜2fi0ƱVX 23@0Bk{̒[mY*?Fuӧ?F?F* ?P% t1EC6G*ͲKlK~}>NO5 RԋS2Z#@cukǜ{̦Ouf~$l?F|ƯY,*kBG]fk0BO"/M͐ŧ78E [13nٶ7g"]D *m++仌imIلH@eQOY.(vT+Lɇk h!˿S컌Tz&{u69Rd~OEAr K(F0z?Fkd?YH_2mA B{لi]I'p1OjT꞊j?FmB)k.~rkéT.kܶĮ~+x~ل~*=u1O6?Fv9m'K;(-/m*Vv]BÄVq#ޔ]H//rP;W?Fb]/?F?J/pKv7*tuX滌 &ה`Ypl?GuSFjlJ?FH?#5~hX\~ g61!O3K_H8>X"s D')K'g FMth1#iXB H{j]@L'} V6lX-1u. Bj2u2Z2j2#u9r#ULH/MZ1+B#AD5 60`Vis_SE.cTm!#20AD%`;Copy<0wigTt ?DU1f@M)@c<0o+@'ofdBUArX@AUad@iV@n3@ f@WAl@ \HsBUe+@e<0v@d3@J@=## A G?9#"n"Ea&444 70#AHB53_lj#SV6U8l>(UhnE / J$&9"(Fn_g5Fe#pheo'r#T X!! Jc0xo)vR2'"U7A%O3W_7rr5-6 `F2 Vrqu\@|Q:0b)`R@As)@E %T@xd@Di  ei=G]U6Tg'0"q)V!DHAn 6SWaH>"> O`_E )F(Mjw#p"N7B C8"?edo@7.#PUFD  h(^TYYB UF\.?x<F BP(?P?Xv B6u6  T^ThWZTTȅHBU?? ?B@L&d2?J-[(m(K.sU!&/")ҍx;(  0KB` Bridge,n&0two0k(0p&01pP=1al(0d&0v 0c@&0 [1^P| ( G&e WPW"*U---?-. wwwowwwp~wwp!wxxp}wwhd:lww w pvw>ÿ@qZ@E?DFp6@rR$Sn0!u@`i-FAD"BRuL @GCD$]QO(gR_"|i& 4O N)9u`u `"[bO$]u`J425)3Ga171`Vis_PRXYcPm!#5n`46B.R`?CopWyrn`gPt@u(~`)@20`U9@Mn`c`op`'ofbar`aUa`i`n @WAl` hsbUep`e`v pdR13d45*8=363@tFF B4B#30UrC]Vw,7%$T!nLPLsT<`o@m`s`q>g d(aq;_=@TATA2hhbB)3*̈́b+ڄ8bq q200Dr81@a6s2+66u\L %BWQ(A.A[2BB]P@RbESTO$@6r23| e|8e5=_4&F3s͵NSYFețG@DugQ̗,@Ga cupgbE@*28ABTOopD"%J-.RRO(R$[3VÁ`R pasn`e T px``B| 0Ls'_"x#PG $UJ"bk B3U2brZTCfRr ïը8=vv_<H!U"D"=m0!D0!g!@2ÅbBЅ3/xԂb08beaea22Dr3 AAE|66v7 |8Ԑ`b-A-A;hr#zr{qV W!rA8T@GaKk ]]P@` USPa`e`lLsaQi` E0u n`p`e`TbEXew` —T``8` D pvbT3^r` MSb Bbd`5ed;YRmM`nf`c"`u r88eQTAPbd Nm@Te@h=O! 
P`rxAx8eb`m +sbeXsuĨ9KuAe;S`uJB Sri`a /TTh//A/L`ch{/Ҡ///Blpi`[/@?#?FR`oS?a??uNwbkl"aXb`K?[It`adp s*OT]]9 M JU@x2zGz?@MJA@eb#z(b),+,&J[(.?& I?A$./O%B!!5 g`?CopyrigTtZ(c)Z2009ZM 0c 0os0f21r0>1a0i0n.Z AlY0 8s]2e10e 0vo0dQ0!$$-# '&%?6iie59 6X7'K0UpBŁ&!$K0 'F;JlJAh h7!1!Z@vnǣ f, <AA#  IRvVJ:'BP(?OS spBlU^TURUEUr\.G\ZY?PuPիQ,5U҄Q[pB.aYU520_BU"rA 5#cHD: # h4>T]]9 M JU@1&d2?@Vj?@҄BP(P6 ]AJuM` ?ju#t  k]It=W(\ZIRUIl">2zGzw?@MJAw@eb #zb](b%)2+)2&J[/.w?& ?A$T4/U%B!!5 g`?CopyrigTt (c)020%090M0c0os 0f21r 0D1a0i 0n.0 Al_0 8sc2e70e0vu0dW0!$-# e'& ?6iie59 6X7{'0UŇ&!$0R -FAJlJ8>UhAA 11!T2 $S<cV{ WR~VJd2L&?@ubX,X ]S fR#mS [ÐڱQE^Fh4__ K&oeX5:f'l6 XYePY?ۭsHMQ5aZjoohY}aUVeEVx_;"rA #dsHD: # h4>T]]9 M JU@|BP(?@h4WFP6 v>JuM` ?)uJt  U]It=W{GzUIR]I%l#>m2zw?3S@MJA@eb #z.b( (2+2&J[ (p"?& I?A$ 4/U%B!!5 g`?CopyrigTt (c)020%090M0c0oKs 0f21r 0D1a0i 0n.0 Al_0 8sc2e70e0vu0dW0!$-X# '&?6iie59 6X73'0]UŇ&!$0 -FAJlJ8>Uh 1Z1!Z@o.rɩ  <uCB] WRVJ@ ?@bx ]S ؒQ#hY))QE^jϵZ__ CaX5a9XYePXRmScRU5:]mxc_PU"]rA #dsHD: H ih(>T9 M3JU@xJuM` ^?5ulbSD_Jt bstE>2zGz;?@MJA@u+b+&K !JS"j& 1"A.b+-,-/*B!!5 `?CopyrwigTt `wc)5020A0950M-0c+0o%s%0f32$1r'0`1ua30i%0n.50_ Al{0 +8Us2eS0e+0v0ds0ib =L!XY'0U Bj&%0 :lJ Uh*M[A[A 9A Z1:'BP(?X Y s BQi@JSv?@bX, IYwYQ!i )\(!!U31^Fh4I_[_ p= ףuXEfl6 TbYHuX _Vgj_aZ(oQ !UAI҄AK B\%QAI@h4GFHlQbY{uXAo ].Oroa_(miY"@ _?@-BQȠ\ p_\oR\uXAppD`Zsƌ uXEOO_\HD: H h0>Th]]9 M JU@$?@-#?@%G֖g?@q;Ƙ?P6  n>JuM{` ?5 uJt  -|ߥ_Et9S8-zENسא\Eh)Fx%[v>2N_Nk?<@MJA@cb]b)++&J"< u) ?AHb$z@J+%/F%B!!5 c`?CopyrigTt (c)302`0930M+0c)0oKs#0f12"1r%0^1a10i#0n.30 Aly0 )8s}2eQ0e)0v0dq0!(Y$-3 7x&+0b59 6X7G'K0UhBx&!$Ia F3JlJ]Uhz11Z@S3   8gF/_ M ARnVJ!49BYOPBRWSMRUE:]|`X?\QRYl>Q5U/V@_YQĈU2(_:UrA cHD: H h0>Th]]9 M JU@!\?@JI?@8HL?@VG?P6  n>JuM{` ?5 uJt  4 ^sEt9SPDEN׈]Ehoþ>2N_Nk?<@MJA@cb]b)++&Jޱ< u) Y?AHb$z@+%/F%*B!!5 c`?CopyrigTt _(c)302`W0930M+0c)0os#0f12"1r%0^1a10i#0n}.30 Aly0U )8s}2eQ0e)05v0dq0!PY$-,3 7x&+0b59 6X7G'0UhBx&!$a F3JlJ]Uhz1h1Z@Pg   8gF__  ARnVBJ49BYOPBRWSMRUE:]?{^{?\QRYl>Q5U/V@_YQU2(_:UrA cHD: H h0>Th]]9 M JU@F5x3?@5yF_?@%G֖g?@q;Ƙ?P6 n>JuM{` ?5 uJt  a8~4Et9SuEJENسא\Eh)Fx%[v>2N_Nk?<@MJA@cb]b)++&J:3f< u) ?AHbʣ$z@bb3?bfb)$%/F%BB115 c`?CopyrigTt (cu)>02`09>0uM60c40os.0If<2-1r00i1a<0i.0n.>0 WAl0 48s2Ue\0e40v0d|01Y4Ť-3 7x&60b59 FX|7*G'0UsB)x&!$aI *F>JlJ]Uhz%11Z@S݆ f  8gF_  LR)yVJ49MYZPMRbSXRUE:]|`XJ\Q]Yl>Q5U:VK_YQU23_EUrA cHD: H h0>Th]]9 M JU@ ~?@gŬs7.?@%G֖g?@q;Ƙ?P6 n>JuM{` ?5 uJt  YoDt9~S-/ENس\Eh)Fx%'[v>2N贁Nk?<@MJA@bb)+R+&J"< u) ?AHb$z@J+%/F%B!!5 c`?CopyrigTt (c)302`0930M+0c)0oKs#0f12"1r%0^1a10i#0n.30 Aly0 )8s}2eQ0e)0v0dq0!(Y$-3 7x&+0b59 6X7G'K0UhBx&!$Ia F3JlJ]Uhz11Z@S3   8gF/_ E ARRrRJ!49BYOPBRWSMRUE:]|`X?\QRYl>Q5U/V@_YQĈU2(_:UrA cHD: H h0>Th]]9 M JU@ ôOS?@S !p?@%G֖g?@q;Ƙ?]P6 nAJuM{` ?4 uJt  M:AEt9SENسא\Eh)Fx%[v>2N_Nk?<@MJA@cb]b)++&J~"< u) Y?AHb$z@+%/F%*B!!5 c`?CopyrigTt _(c)302`W0930M+0c)0os#0f12"1r%0^1a10i#0n}.30 Aly0U )8s}2eQ0e)05v0dq0!PY$-,3 7x&+0b59 6X7G'0UhBx&!$a F3JlJ]Uhz1h1Z@Sg݆   8gF_AR M ARnVJ!49BYOPBRWSMRUE:]|`X?\QRYl>Q5U/V@_YQĈU2(_:UrA cHD: H h0>Th]]9 M JU@]C}?@8?@%G֖g?@q;Ƙ?P6 n>JuM{` ?5 uJt  [J}$Et9S=d ENسא\Eh)Fx%[v>2N_Nk?<@MJA@cb]b)++&J"< u) ?AHb$z@J+%/F%B!!5 c`?CopyrigTt (c)302`0930M+0c)0oKs#0f12"1r%0^1a10i#0n.30 Aly0 )8s}2eQ0e)0v0dq0!(Y$-3 7x&+0b59 6X7G'K0UhBx&!$Ia F3JlJ]Uhz11Z@S3   8gF/_ M ARnVJ!49BYOPBRWSMRUE:]|`X?\QRYl>Q5U/V@_YQĈU2(_:UrA cHD: H h0>Th]]9 M JU@Y ?@fJ7%G֖g?@q;u?P6 >JuM` ^?M uJt  (N7Et9Sp%6ENس\Eh)Fx%'[v>2N贁Nk?<@MJAw@cbWb)++&J:3f< u) ?AHb$z@bb3bOfb)$%/F%B115 c`?CopyrigTt (c)>0]2`09>0M60]c40os.0f<2R-1r00i1a<0i.0n.>0 AUl0 48s2e\0e40v0d|0@1Y4-3 7 x&60b5(9 FX7*G_'0UsBx&J!$a *F>JlJ]Uhz%11劵Z@S݆ Y  8gF_i  LRyV 
J49MYZPMRbSXRUE:]|`XJ\Q]Yl>Q5U:V K_YQU23_EUvrA cHD: H h0>Th]]9 M JU@$䣹?@R?@%G֖g?@q;Ƙ?P6 n>JuM{` ?5 uJt  >b3Et9SWENسא\Eh)Fx%[v>2N_Nk?<@MJA@cb]b)++&J~"< u) Y?AHb$z@+%/F%*B!!5 c`?CopyrigTt _(c)302`W0930M+0c)0os#0f12"1r%0^1a10i#0n}.30 Aly0U )8s}2eQ0e)05v0dq0!PY$-,3 7x&+0b59 6X7G'0UhBx&!$a F3JlJ]Uhz1h1Z@Sg݆   8gF__  ARnVBJ49BYOPBRWSMRUE:]?|`X?\QRYl>Q5U/V@_YQU2(_:UrA cHD: H h0>Th]]9 M JU@֤M?@ yo?@%G֖g?@q;Ƙ?P6 n>JuM{` ?5 uJt  ߸ Et9S)hBDNس\Eh)FxO%[v>2N/Nk?<@MJA@cbb)+R+&J:3f< u) f?AHb$z@bb3bfb)$T%/F%B115 c`?CopyrigTt (c)>02`09>0M60c40os.0f<2-1r00i1a<0i.0n.>0 Al0 48s2e\0e40v0d|01Y4b-3 7Ax&60bP59 FX7*G'0UsBŔx&!$a $*F>JlJ]Uhz@%11Z@?S݆   8gF_  LRyVJ49MYZPMRbSXRUE:]|`XJ\Q]Yl>Q5U:V@K_YQU23_EUrA cHD: # h4>T]]9 M JU@xJuM` ?uJt ASt=WJRb~li>t2U?=@MA@+b + &J[N#a?a& ?A)BB!!5 g`?CopyrigT}tZ(c)ZW20 9ZM ]c os f"R!r 1ai n.Z AUl90 (s=2e0e vO0d10!$ę-# 'a&?6iiek59 Կ6X7'0Ŵa&!|$0 FJlJ UhqMAA ])A !4Ja-'Z@h4FӃ f <{Gz#A OR|VJ:6]].rM__Z)\("UEC@ ?@-bȃ S 1 `#`YR\Xk5^jZso!o ̌aX5U'BOP(?gks#dT[RUUSv?@n aX,flYwq%lSXn?Fh4  p= ףXAYfl6\`YHzmPXAV``ʼ_Z(QYAY҄agklQbE26_HU"rA _H<>gI+s nsMg@q3JFM\#7OB H#~dwso@+_><oKs}G#o'f# (#+\#N#ȇX$V߂$W$X$X$g$$hK$jZ$kH$y$ȓ$z$UFD  h(^TYYBBUF\.ߗ?x<F BP(?hP?XV 55r55 Jbggah5Zg\ȅH?8?f)B@L&d2U?$-5(G(%.sUm!k&/|(  0O~B`"EtPern t],4wo k0p"i0#al*0d v0c 91^| ( 5G&u  * AAS ~pni'TDrag onut he]pe,nUdu /p64lies9Lt]wrkcQm tOfTly lGo_"imdO.bL&d2~?K?=tp'I??C\.r3?r\.CUG D  # h L^T4YY  BUF\.?@L&d2?@~?PYU^f6.NGOGPGQGRGSGTG\G]G^G_G`GaGbGcGdGeGfGgGQJiGjGkGlGmGnGoGpGqGrGsGtGuGvGwGxGyGzG{G|G}G~GGGGGGGGGGGGGGGGGGGGG736363u` o?%$7G-7B<7BF_G#BP7BZ7Bd7Bn7Bx7B7B7B7B7B7B7B7B7B7B7B7B7B7B"7B"7B"7B""7B,"7B6"7B@"7BJ"7BT"7B^"7Bh"7Br"7B|"7B"7B"7B"7B"7B"7B"7B"7B"7B"7B"7B"7B"7B"7B27B27B27B&27B027B:27BD27BN27BX27Bb27Bl27Bv27B27B27B27B27B27BH27B26g6U6666MBuIU$FrL|Lw&1~rW]@Es  ^7B( ( ~s!v̨ q?@h!;p`:1P IFPr,s~rD1''@A3{:18E9 'N( d?pxh` Acn CS lS r3!ۦpڈ٦p1bbbz0q@!PZY<^pbbNbb™qb"b7+d||u@+@Q Q?@G}x^?@IH‘BMB6}Vu]`-qZBuFgw#/s;}#B(0t@I9u`u`"TYYbzFqDuK`EWAαq1W`V sPwXYEcPm!#528gX1`U?ԑp r gPt1()12U0B91M cq Eo{ o5ԑr] ua iS nE J1A5 l1-sUe{ eq vdE*E WN~`~u01 l2T'sFt5ӤӨὭ w?gߏBq&W,s^KVVTK~u5TԑnI e5TJ Nt uk1DvLN'9K]utc担c8E&cNI[mc}c濵c0BcewcFլcc晱6HZB_z\\Fq ////_ qR/d/v/_///\`//?c+?=?O?cn???@_???_? OO\VCOUOgO_OOO_qOOO__'_9_\\_n__\픱____ꌑ__o_3oEoWo^;t~ooob!ooo\폰,^#M_q\$\%@\%7I_'i{\(ԏ\!) \*>Pb\+\ϟ_,>@c]o_/ů\0 \ 2DV_2v\3Ͽ\ +=5K]oB\6Ϥ϶\7\8 2D\!9gyߋ\:߄\;<x?#5m`3ACTE5[ɳaj1,.tu[񼱼[[_[[([ [CZ==2ffyѭ";;N"""@#2WWj2d`e U@s` M;aiuf,pueQ` Eq0ipeith 'UPouoN:mb>Pb btq' P,r9! / u'EsorK"t=h;/M/$\l/~/A s}-.1 '7`1^/?!' S6"i,l 0?B?`>Th?z?X[Q'Lc,(?`K???B*"liigOP<XLHOZO' RoO_eZOCA8wa' BEp&$/qf`Y4$_8]Sra e\l,)1@#` fi!bty_l_~_W"` WTPQ@^2` Eh6"*lt\a4 l.c`_O|N(,TQw`#|`p/xo6_IPsAd<Told?o/S:bEd +sc*lAxTf;'qmBi tCv9 I[7!3oW='/) fsPt@$:PftMpCnp㏊gTˑ'fmuQ SF l!M_(|/@"ha  c%>UcAXE_`>Z ** [j$w1tt>,3K̗?nؑ`tQgctRm#ZC̗,@_Cc%SuK`rϔ? h!b M^b!bC%_5Ÿ(56c_t&YF3sNSYF??Eq! $U dkV@~?>? ijTMwu `b|tϔ?b+cU@L&d2FX ~u~ߐk7լj(ߎp.AɯD c0I.NO Pl$u1S>GAGAKaaX]8eԁԁrݡݡiaiaa lc/a/aQQAAIIA0,j=al$BAe"!A#Mwx!g%9!@QRQ@ϒ^^!:q"v}#rq01RQz3{| +}*~t36 /*&03!@2M3Z4x|Ut6񡊁7񡕋8񡌛O:񡎵q=>?R0i[8s]&?-ܑ-!lVőncb[r ]#d%+`[ ob Q'eu"$I7 | "#####!#2";#ԁ#ݡ#ia#o#|###/a#Q#A#I###a##A##%#x##L#Q#f#s###q#Q##########)#6CCCPC]C|CwCCCCCCCC`CC "&eR"eR2eR 2eR2eR 2eR*2eR2eR>2eRH2eRR2eR\2eRf2eRp2eRz2eR2eR2eR2eR2eR2eR2eR2eR2eR2eR2eR2eR2eR2eRBeRBeRBeR$BeR.BeR8BeRBBeRLBeRVBeR`BeRjBeRtBeR~BeRBeRBeRBeRBeRBeRBeRBeRBeRBeRBeRBeRBeRReR ReRReRReR(ReR2ReR2(ݤ>2&̶Pl~3+CkߏH2ݤH2&̶ 1CU4}XRoݤR& ̶ F!R8JݤR*&̶R(ݤR&̶TRݤR&̶Xj4!X/|/R//ݤR ̶/0/B/!?j/E?b\?n?ݤb*̶// ?1?3?O b%O7Oݤ b̶???xAO?ObO_ݤb̶yOOOAQ|_O_g__{b ̶B_T_f_ aEo_ioh|ooDr ̶'qo o 4aWo2iEW ̶oooq j Sւc ̶eďk׏&! 
!̶fx.il٨h ""ڡ2D 42{Vmi{1## Dn2Dk{$$ӯ寉Ŀ o 4D% 9%Rֿp،* &9&Se wVߟzqߟߨU'9'.@hCrVh(9(1 s1Xh) 9)!t!*1*9*"w ?zuy+9+#@RdCgvz(B"&,9,$ - /U0/wC/U/|8 26- 9-%!///x ??EH2*UF.9.&// /c1?/?y??XBV/L/'d?v??,AgO?OzOOXfRV0L0(-O?OQOA0_yOT_{g_y_h/bf1 \1)O__Q_B_o|0oBoixb*yv2\2*__ _ao oo}o 2rB3\3+oooPqo~ 4\4,QcuTxĘSԖ5 \5-,>fATf 6\6Gv: 謫/ /Vf7\7k/ПӯȮ/8\8k0u=xw9 \9k1>PbAϊeτxϊϱ@*:\:k2 + S.߅ASߨz ;\;3Ϙ߆ CS<\<4߽߫a߂ = \=5bt*ed*>\>6+= O.wRew-?\?7@.@g(w&@\@8  /08"@6A \A9N!////82* FB\B:O/a/ s/1R?/v???HQBFC\C;?*?s__ _;avo_ooo(xurvGGK>oPo@q?ocv>HɌH@)qQ,?QxIIdTtsTssSrU} @ `EtherntnrAnkssAs%t sΑϐΑor#r` %P&o]ptis ߢ``N `TWORKÐS{AEÐPOR2I `Q2 ia1ṃ'0Uyu1!1 0tbų)zw tر)1 ۲eʡ0(&100@w뷤H1wLĭGX+L_W8LWGEDrDC  @   !"#$%&'()*+,-./0123456789:;<=>?>A*BCheATe(eJ\U@L&d2?@0~{??@?PYmUU%.u`o?&UH&T"""u,)ct)! q-tm!ʇ"L5 .&b$&"} U}!Uɤa/ '19 b {V?`%2H6"\>3c6?c82\.?A?9[B>30/D?P?y A@lBw0uC񁠙A>9D@SVCA'1X7i`"`?CopyrigPt(Uc 09uM PcPosPIfRQrP=QaPiPn AUlXP Xs\Re0PePvnPd"Z,AuL-$cA2T%c3C#lBF{L u,G"aOCTOU_2sOU!OOOOO_"_4_F_SO*^x_b._o_p__ ooARodovoooo_j_oC.@+k@asx@j`D.mѐBǏُx!0BTfx"ja.ӟ7 -?Qgx:A̯ޯjbu._qxl jeB.k!xґ.@Rdvjf ".5  ;+=Oe>s8jg/s2)N@7/N?[/0p/K///Ns!//@ ??0?B?O/jht?B^?O?P?B[? OO1nsNO`OrOOOO?"jiO?bn_o'_g`<_R{]_o__~sj_____o_6j@or[ ioopo ooo s,>Pbtok !3!);Mc!s6!ȏڏ@q'"5 LYn"ɾ"s1 .@Mhmrײ#ԯ!#/#sQL^pnؿ=$%e:P$[mϕ$shaψ 4o>ߣY%@g~߶%%sq*@Kf:rp">N!/=(<(//-N(?HJ/\/n////Js/;BN)/O#?c@8?N[)Y?k?}?^)XfѰ????? OD?2jt38b/.J~8/2SN4a/x?/0/K4///N4X?$?6?H?Z?l?y/Z?R^5@?O?+POk5!O3OEO[n5mh.xOO@OOOO?j_ib~6-_DoQ_`f_B|{6___~6x__oo&o8oE_"`jor7oopo7oo'79DVhzoƚ5랁8]2H8Sew8ʑΏ,6Q9_và9˟ݟ91"4FXjDwʄ·:ů ܿ):1CY:k,Qv пݯڅg;+BOϏdz!;ϗϩϿ;a$6C^h<ߨ<%<7qBTfx 3=@=Qcu=^@**4O.>]tB+>.>8ġ 2DVhu :!<">?/'0KF /1/C/YL?iH*t//////J3A\eB^@)?@OM?@b?x[@???^@X???O"O4OA?\jAmlRnAO_OPOkAOO _#~A5x@_R_d_v___DOz_1r~B_ oYp.oDBOoaosoB\ooo oo o(2 Ma^uC!CC 0BTfsD؟%D-?UDg(1rُ̟cE@'>K`vEEȎAد@ 2?ZڐdFϱƿBF !F3Q>PbtφϘϥ"/G W,BGM_qGZq߶ & 0KHYp} HHj .@RdqI#+I+=S.Ie$# TUHL5D" # =h @^T(aaJB [U@L&d2?@~?P e.N Q  m>uA` _?`N  Y c$ A.Y8>uQH>tEt#bH ">U.%>R.#N-7/de!#?A. \|#&/,?\.5?IA@/B[|# 'N(#AxT`?CopyrigXt(wc)20(@9M@c@o%s @fB Ar@GAua@i @n._ Alb@ HUsfBe:@e@vx@dZ@ V4 ?)o32TUA??)77se<-#v26ZbhafYv2r|cVqQN "QQQdl5C/oAe+(dVd>lrd "!s(8r4."%S(QVouodC~w.N}" (-DT!I@[0v2rzCsc "oP%3}`wȄՏyE|?v2>E *P&ai7HD H# h @>T(]]9 MT  )AU@L&d2?@W~?P6 $LA n  JuM` ?8* M^ K*0u:Jt7 StJb)>JU5 JD-//!X#V? .B\F#k&/k,?\.?;IA /[F#Y!3/ g"Ai @@2'bH42F/ -o31/!`'; J`U!`?CopyrigTt(c)2009M0c0os0f21r0Aa0i0n. Al,@ 8s0Be@e0vB@dH$@V o4 x?)eno32TUA?2IaH@2U6TO<@2U6X7ibE9 &Xo72'0BUik&!d$0 V1ZlJQUhhf5`'1@1Kdo30C^*Va&JQ&dfl64iAc7FlUnbi:(}m (-DT!;@-0@2rdUj"Uc60eE3&m9GtZTe%|&ov+=~u @2stktbqvu@<&]ew7UHLD  Jh0TlE>M U@L&d2?@W~?JP jJ %A Q"2  .>uA` ?E_  (< _Z&?uMt>p/b ">tq bo>t6  }配>U2TmUg!?#@9 U#!"/ Aw@"b)+&> !v?(t($#>_#2( m"6 G"\"b)H;'"bu3z=&b3~8/D_V_8E"lJaUl~X'4M1!S(t!?@9j# p}j# K$(°b`0w8-DT! lq0$#qj{jr}4k (Mt6 P pp@U<r(5%8:dfl6/wkrvQ uC EW`#0!y0!yD#|ud)<$ooooQdQAuddT8 %9oeS(<@EYeCw<ȏڄؙhjsu qyoNK$ i 8_\H>+s !SFO^? 
FHM}#!WqB9Yma |o@+Xo>oNaGȘ$FP< Pj=o=!uH<@%RUFD  h(^TYYBCUF\.?x<F BP(?P?\Xt3 D5S?>?5%6qW{qk66b6HCI6CJ6CN6!C&aS6/!C9&R6M!CW&L6k!Cu&16!C&a26!C&36!C&46!C&561C 6a661C)676=1CG6 6[1Ce6 6y1C6a 61C6 61C6 61C661C6!6ACFh9hC7F6KACUF6iACsFa6ACF6ACF6ACF6ACF!6AC Vd9dC'V6;QCEV6YQCcVa6wQCV6QCV6QCV6QCVa6QCV 6 aCf!6+aC5f"6IaCSfa#6gaCqf$60aCfȅU?of?Z0Bc@L&d2?"}3xEx#~sUkqiviҍrx  0bEB`Hub,netwopkpppri hal*pdpvcpp-Qa| xy3U5Gv\ NP?U*U---?-ppwpwwppw׆%p6ppw pwwww }σw ^?w wpwp pd_&wpwwfDrag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aabx<οL&d2??9GΑsп=t??\.r3r\.CUG D # h L^T4YYT UF\.?@x2" $9u`un `"0Yb!u`$"%؂#UQ!'1`Vis_PRXY'cPm!#5P46ޖPg da +_Pb1b1"(Ɣp2)Ӕ2*R+FR"((Rb(!SaDc"76u&`~PG1"1 0&2k@c4p5,CT@Db"5 UaAFUQS_$|lF3s͵NSYF'U;0Re>1,@UQP)epzp51S(("(AT?sR !U/!/keF/X/Ae`'e^//,U SbiPa??g TT]]9 M JU@x2zGz?@MJA@eb#z(b),+,&J[(.?& I?A$./O%B!!5 g`?CopyrigTtZ(c)Z2009ZM 0c 0os0f21r0>1a0i0n.Z AlY0 8s]2e10e 0vo0dQ0!$$-# '&%?6iie59 6X7'K0UpBŁ&!$K0 'F;JlJAh h7!1!Z@vnǣ f, <AA#  IRvVJ:'BP(?OS spBlU^TURUEUr\.G\ZY?PuPիQ,5U҄Q[pB.aYU520_BU"rA 5#cHD: # h4>T]]9 M JU@1&d2?@Vj?@҄BP(P6 ]AJuM` ?ju#t  k]It=W(\ZIRUIl">2zGzw?@MJAw@eb #zb](b%)2+)2&J[/.w?& ?A$T4/U%B!!5 g`?CopyrigTt (c)020%090M0c0os 0f21r 0D1a0i 0n.0 Al_0 8sc2e70e0vu0dW0!$-# e'& ?6iie59 6X7{'0UŇ&!$0R -FAJlJ8>UhAA 11!T2 $S<cV{ WR~VJd2L&?@ubX,X ]S fR#mS [ÐڱQE^Fh4__ K&oeX5:f'l6 XYePY?ۭsHMQ5aZjoohY}aUVeEVx_;"rA #dsHD: # h4>T]]9 M JU@҄BP(?@h4FtP6 >JuM` ?uJt  kU]It=W{GzتIRIl#>2mzw?3@MJAw@eb #zb( (2+T2&J[ (p"?0& ?A$ 4/U%BB!!5 g`?CopyrigTt (c])020%090uM0c0os 0If21r 0D1a0i 0n.0 WAl_0 8sc2Ue70e0vu0dW0!$-# '&?6iieP59 6X7{3'0]UŇ&!$0 -FAJlJ8>Uh@ 11!Z@o?.rɩ  ~<uhB] WRVJ@ ?@bȩ ]S 9Q#hY?))QE^jZ__ CaX5a9XY@ePXRmScRU5:]mx_PU"rA k#dsHD:@ # h4>T]]9 MTJU@aܹ?@&cM?@5?@$D"?P6 v >JuM` ?qu#t  @@|It=Wn5_6uIRFIlI#>2zGzK? HJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A/.7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6#)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ# $<Ef4B 9b`fJ x۰Z @]0?c Q#Oc Q\}ȱ~aE:Õ)0:iG`iw*h<5a#j8oJiqze~EA#o>"]rA #sHD:@ # h4>T]]9 MTJU@ (?@hZ?@>TC?@ݿJWc^?P6 v >JuM` ?)uJt  It=WX IR?8IlRv\#>2zGz? HJAebM #zWb(#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A(.?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!ZA!Z!?v"rA # sHD:@ # h4>T]]9 MTJU@?!Qw?@k?@BZ?@ib+iF?P6 v >JuM` ?qu#t  1It=W_OIRUo}WIlTy6ވI#>2zGzK? HJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A/.7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6#)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ# $<Ef4B 9b`fJ:O@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij? mh~EA#o>"rA k#sHD:@ # h4>T]]9 MTJU@P4K?@l?@u'?@ݿJWc^?P6 v >JuM` ?)uJt  {d5Z\It=W2;wEIRkzIlRv\#>2zGz? HJAebM #zWb(#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A(.?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+Z lJ]UhzЎ!A!Z!?&sA#<# $BK jfJ 'j `00>oPaYc l]]beE:Ͷc0TbCcba`iK8K8h<5eg۰Zk(qQlqe~Ee@||`ai !hA'o>v"rA # sHD:@ # h4>T]]9 MTJU@#zUv?@f s9?@ә!?@8laxy?P6 v >JuM` ?)uJt  V"(It=W7:LIRE! YBIl p#>2zGz? HJAebM #zWb(#+#+#&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$+A(.?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!A!19b $<# $B 9b`fJ: muۘ]0:mOc ?i'A~aEesDͅ?c ? V#NeFaze<5e@|lVaiU3Lh`~EA#o>"rA #sHD:@ # h4>T]]9 MTJU@"z>?@^i2VS?@ˋ>WB?@=E?P6 v>JuM` ?)uJt  7It=WܯHIRݮIl$dvĤ#>2zGz?(3 HJsAeb #z (#+#+#&*Jp!p!5 g`?CopyrigTt (c) 2U0 9 M c os f"!rԙ !a i n}. Al U (s"e e 5v0d i!Em$}! 
WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+ZlJQI0h|G!A!1l&fL# $<EfB 9b`fJ:r'@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij mh~EA#o>"rA 5#sHD: # h4>T]]9 MTJUF%u)H׾?FȐ?F'?FǿJWc^?P6 v>JuM` ?)uJt  ُuߓIt=Wί IRkzIlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTtt(c])t20 9tuM c os If"!r !a i n.t WAl (s"Ue e v0d i!Em$+A?b6 ?AgJaۀT6` F !!l"zbWAK +6ig5V181 2?2?;:C?'4/U%GX|# |'b6̼LFO ie<59 FXG/'0U`Rb6-!}40 V+Z lJ]UhzЎ!A!Z5!O?F9&sA#<# $B %jfJ'j`00>oPaYc l]]aE:Ͷc0TbCcba`iK8K8h<5e۰Zk(qQlqe~EeFͭ||`ai !hA'o>v"rA # sHD: # h4>T]]9 MTJUFf^oO?F]?F!?Flaxy?P6 v>JuM` ?)uJt  L+It=WݑHIRE! YBIl p#>2zGz?@M2JAe7b #z]b(#+%#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'b6L5*6O ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!19b$<# $4h" 9b`fJ: m٠uۇ]0:mOc i'A~aEeכD?c  V#NeFaze<5eF?|lVaiU3Lh~EA#o>"rA #sHD: # h4>T]]9 MTJUFr,?F4w?F>WB?Ft=E?P6 v>JuM` ?)uJt  6j>It=WNFFIRݮIl$dvĤ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl (s"e e v0d @i!Em$}! K2?b6 3?ALb4(&$3/ 4/U%-|# e|'b6 ?Fiie<59 FX/G'0UBb6!}40 aFuJlJ 8>UhAA !1!Z}'- E|zo}#<b$] RVJ:FQwa0RS S IBf>QE2!?Fp\B?U--# i[F @ h<53n CU9fKo]o ڵ h5a9YUU aUU~wi]Q ieq5 h2r_U"]rA #sHD: # h8>T ]]9 MTJUF_5=~?Fe,=Vݼ?FUw)٫?FQ`m ^?P6 >JuM`lW? SuJt  ?aV_MtA[_=p'MVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @nib#zv t#b( (+&B" ?APb)4(z$CD3//&*115 k`?CopyrigTtk(wc)k2009kM0c0o%s0f21r01ua0i0n.k_ Al @ 8UsBe0e0v @ d@1H4-,3 7%bO&&i Ab~59 &&XGK"'0U!R&!40 FJlJ Q! (sQsQ]31heA!ZU0|z3 0 @?/n' #M b7fJ:]h^PsojRփUaEQ!?!`0 0c s!R'i e:}ɟh~5aef9oa|b c gT#3 AMJUF<@έ?F&¬?F*5?F$_D"w?P6 >JuM` ?uJt  &lIt=WAI׃IR_FIl#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ۰Z@]0?c Q#Oc Q\}ȱ~aE:)0:iG`iw*h<5a#j8oJiqze~EA#o>"]rA #sHD: # h4>T]]9 MTJUFpq?Fq֌?FeTC?FǿJWc^?P6 v>JuM` ?)uJt  0@hIt=WmiIR?8IlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!Z!姈?F9"rA #sHD: # h#4>T#3 AMJUF߳ڮ?F$ ?Fs۰Z?FNb_+iFw?P6 >JuM` ?uJt  JJ&qIt=WrhIRUo_}WIlTy6ވ#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ:rO@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij? mh~EA#o>"rA k#sHD: # h4>T]]9 MTJUF>=9_?FL(Ѿ?F'?FǿJWc^?P6 v>JuM` ?)uJt  V6>nIt=WgD~baIRkzIlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 $V+ZlJ]Uhz@!A!Z5?!?F9&sA #<# $B jfJ'j`00>oPaYc ?l]]|eE:Ͷǝ0TbCcba`iK8K8h<5e۰Zk(qQlqe~EeFͭ||`ai !hA'o>"rA #sHD: # h4>T]]9 MTJUFBP(?FxQ8?F!?Flaxy?P6 v>JuM` ?)uJt  9KIt=WYGJIRE! YBIl p#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!`197b$<# h$B 9b`fJ: m٠uۘ]0:mOc i'A~aEe7D?c  V#NeFaze<5eF|lVaiU3Lh~EA#o>"]rA #sHD: # h4>T]]9 MTJUFMW?FM6N?F>WB?Ft=E?P6 v>JuM` ?)uJt  1|It=WƯ:IRݮIl$dvĤ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl (s"e e v0d @i!Em$}! 
K2?b6 3?ALb4(&$3/ 4/U%-|# e|'b6 ?Fiie<59 FX/G'0UBb6!}40 aFuJlJ 8>UhAA !1!Z}'- E|zo}#<b$] RVJ:FQwa0RS S IBf>QE2!?Fp\BU--| i?[F @ h<53n CU9fKo]o ڵ h5a9YUU aUU?wi]Q ieq5 h2r_U"rA #sHD: # h8>T ]]9 MTJUF('J?FK5+:?FUw)٫?FQ`m ^?P6 >JuM`lW? SuJt  y SksMtA[_eMVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @nib#zv t#b( (+&B" ?APb)4(z$CD3//&*115 k`?CopyrigTtk(wc)k2009kM0c0o%s0f21r01ua0i0n.k_ Al @ 8UsBe0e0v @ d@1H4-,3 7%bO&&i Ab~59 &&XGK"'0U!R&!40 FJlJ Q! (sQsQ]31heA!ZU0|z3 0 @?/n' #M b7fJ:]h^PsojRփUaEQ!?!`0 0c s!R'i e:}ɟh~5aef9oa|b c gT#3 AMJUFB?F̒-8?F*5?F$_D"w?P6 >JuM` ?uJt  4E$pIt=WLx4IR_FIl#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ۰Z@]0?c Q#Oc Q\}ȱ~aE:)0:iG`iw*h<5a#j8oJiqze~EA#o>"]rA #sHD: # h4>T]]9 MTJUFJ*ӏ?Fc?FeTC?FǿJWc^?P6 v>JuM` ?)uJt  uIt=Wt]aIR?8IlRv\#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!Z!姈?F9"rA #sHD: # h#4>T#3 AMJUF"?F!S/?Fs۰Z?FNb_+iFw?P6 >JuM` ?uJt  B?˳It=WFp\cIRUo_}WIlTy6ވ#>2zGz?N@MJAeb #zb(#+R#+#&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$+A7?b6 ?AgJaT6` F !!l"zbAٖ +6Rig5V181e ?2?;:C$?'4/U%G|# |'b6L)FO ie<59 FuXG_'0U`Rb6Z!}40 V"+ZlJQI0h|G!4A!1ظ&f# $<Ef4B 9b`fJ:rO@]0:iG`Oc qy"zeEajoJiaze<5e'f`0mFaij? mh~EA#o>"rA k#sHD: # h4>T]]9 MTJUF?F2\D?F'異JWc^?P6 ݴ>JuM` ?juJt  rs"It=WVIRkzIl?Rv\#>2zGz?@MJAebM #zWb(#+#+#&JBp!p!5 g`?CopyrigT}tZ(c)ZW20 9ZM ]c os f"R!r !a i n.Z AQl [&s"e e v0d @i!Em$+A?b6 ?AgJaT6` F !!l"zbAR +6ig5V181 ?2?;:C?'$4/U%G|# |'b6L3FO ie<59 FXG'K0U`Rb6!}4K0 V+ZlJ]Uhz!A!Z5!?F9&߮sA#`<# $B jfJ'j?`00>oPaYc l]G]aE:Ͷ0TbCcba`i?K8K8h<5e۰Zk(qׁQlqe~EeFͭ||`ai !hA'o>"]rA #sHD: # h4>T]]9 MTJUF79 Ğ?Fl_n.?F!?Flaxy?P6 v>JuM` ?)uJt  sd It=W> IRE! YBIl p#>2zGz?@M2JAe7b #z]b(#+#+#&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$+Ao?Xb6 ?AgJoaT;6` F !z!l"z_bA- +6ig5V18ʜ1 ?2?;H:C?'4/U%bG|# |'1b6LSFO ie<59 FXG'0U`RŴb6!}40 DV+ZlJQI0h|G!hA!`197b$<# h$B 9b`fJ: m٠uۘ]0:mOc i'A~aEe7D?c  V#NeFaze<5eF|lVaiU3Lh~EA#o>"]rA #sHD: # h4>T]]9 MTJUF-#&\?FZ|?F>WB?Ft=E?P6 v>JuM` ?)uJt  [It=Wq.IRݮIl$dvĤ#>2zGz?@MJAebM #zT (#+#+#&Jp!p!5 g`?CopyrigTt (c) W20 9 M ]c os f"R!r !a i n. AUl (s"e e v0d @i!Em$}! K2?b6 3?ALb4(&$3/ 4/U%-|# e|'b6 ?Fiie<59 FX/G'0UBb6!}40 aFuJlJ 8>UhAA !1!Z}'- E|zo}#<b$] RVJ:FQwa0RS S IBf>QE2!?Fp\B?U--# i[F @ h<53n CU9fKo]o ڵ h5a9YUU aUU~wi]Q ieq5 h2r_U"]rA #sHD: # h8>T ]]9 MTJUFW>?F7 ?FUw)٫?FQ`m ^?P6 >JuM`lW? SuJt  /êQPͰMtA[_Y#MVYBMpry2VI'>\??\.?lFA8 J2ll/f?=@MJB& @nib#zv t#b( (+&B" ?APb)4(z$CD3//&*115 k`?CopyrigTtk(wc)k2009kM0c0o%s0f21r01ua0i0n.k_ Al @ 8UsBe0e0v @ d@1H4-,3 7%bO&&i Ab~59 &&XGK"'0U!R&!40 FJlJ Q! (sQsQ]31heA!ZU0|z3 0 @?/n' #M b7fJ:]h^PsojRփUaEQ!?!`0 0c s!R'i e:}ɟh~5aef9oa|b c g_+s +EiHG%>FM# oBH~dp@+8‘SsHqEZKQxsEm!D% 8D:(-GuE1wEv7yE;{E>~EIBX0F F2FI'4F3MhDQ ~:6FZ9Fw^H;F"bHx=Fe?Fji9AhmKDFpFFt~xHF#xJF{HLFWOF}!H)x#HB=%H+'H͑*Hu8,Hh.H|'HY)HHЧ0%Hs8hHHH)Hc~z)I~8Ij "'hߤIwI6ȨIߪIM)(I XI)QJ:אSJ"UJ'$WJZJr %\Jh^J*`JYY'bJ )eJ'gJt~@:iJaHkJmJf PK (8RK9TK}9VKXK9ZKU9]K#{(X_K&YA*(*./81((u5(X8[ZAlؗkE~~*I8LhߞXPP@TƩRWZU[l8W}_Yb[pf]ߏj(ߐmIqʙh5uܙx{|x ~( XƩ2Zێld("XO~7Ʃ٤Z(lX ɯg1UFD  h(^TYYBBUF~?x<F BP(?P? 
V B666 6 B66h9ȅ2H??\IB@L&d2?-(\.Y'(.sUM!K&Pm/:  0oB`2Fiber], pt c t ansm t #n tw rk equ p e t h rd 0-1e!91A^| (gUGr& wr  .@ wvwpppopp wwpwxwwwwpx=pp pwvLDrag thuesapWonodwipe.bL&d2ɿ~?P?=t˿)"g??)j3@ ?DHD: # h4>T]]9 MTAUF}Da 0?FH7r?F_/++?FXϸ?P6 `A$M (#<#lPJuM{` ?#{{20{F{ZujJtg  ɟ߇{tј` 'ɭ?s>JU2zGz?@M[#|Jk&A@$7b#z #!"b(b)+&J[e#t"t"?'6 I?A$/%Be#115 "`?CopyrigTt (c)020090M0c0oKs0f21r01a0i0n.0 Al0 8sBe0e0v@d0S"{14\#-Xe#3 7'6\% ZLFiie1E9 FXG/'0UR'6-!B40 FJlJ7Q0hLG1ZA!Y(}=K?Fl@I0`fS?Ks`ZsLbL\"JTy+HlPrW?F腺Z>FbyC ua)H0Ynǿl_Vef[U>dxԷErlg uVcQ$@aELe|}?FCʞ*Nc`vna Oc>wok&O8XWleBPmlfVArieoo.HLe+pQ?FYR&`O]vol``}; PaiEz6!iDF`ޣKH_SFn.ki`"rA k#ڃjnw5}6!i{HD: # h4>T]]9 MTUF)Q~ R?F\L#?Fx?F=:?P6 AA M (<Pdx#  $<PdxmJuM` /?9"9"29"F9"Z9"n9"9"9"9"/(Su()Jt%!  _lcd6tY1s2˟ke5n7rs=e57Fq1 >JU2zGzK?=@M3J6A@n4b%Cz@C2b3H%@3HNKNFJ[3 B?F &?A9D POqEUB3AA5 2`?CopyrigTt (c)5P20AP95PM-Pc.+Pos%Pf3R$Qr'P`Qa3Pi%Pn.5P Al{P +XsReSPej+PvPdsP2@AB3b-3 S  WF5 \fiieU9 fjXg2'0UbŴF!D0 TIf]jlJ @maa ]xQQ9A8`3F+G?F| @"$#әF1 "X3]/Lֲ1r2JAn?FkS?FΘtpb N ?_)Ay*t^(|CN7ȗu$#D[DP1X3aT1 :" $z 2? I? uK?$"X2a@s(BS2}:r5qEqEtFdo?F[K ۃ?FJ(OL?P$DT!  bqt%@{rfJHG^"f~Bt2)Џ⏪32Uu5*&?Fl5?F)b&&~?Fcle?P,A&!٘UީȽ|;!GmH﹡^@pw*oAΏ ՟4ޯFUuu #?F{+.W?,b26I>P-nx:aG|ZÉ-DuH rε50!au>uM"D$.@4#ZqTmz?FD?Fb.\i ]aJ`?@~{bcEt/^Vu1 a{&DFT?@l4ϳ$3<@ƒ>Pn}11WBx~|kz}~tesDޠux)LBk(a\6 `G $#Ppztt~Pz+ݿ]s~aϔb-4쇂3~uy0cفMk?Fǒ?F 6_?P(AT~' 6>Ľ?e+P?Toa?CB1wa1 83쇖y&Nfn~bS9xvD<$Q$#QcuNI|" 0)^VUıwm2G:LB3 .쇪;utq~F>`7bt?di д9Brq 8CJ &*짦f4omGL3A/S/#HD: # h4>T]]9 MTMUF؋@?Fce?F5svL*?F7= ?P6 ]A]M (<P]d_]x#_(_<s__JuM` ?_""2"F"Z"n"""uJt  oUxJ5t!21l]5 7,ȃ5$7ǁO>JU2zG/z?=@Ms3J6A@4b3z郷0392bR808;6J[}3 w(B??F ?A4 T? EB}3AA5 2`?CopyrigTt (c)@20@9@M@c@os@fBAr@Aa@i@n.@ AlP HsRe@e@v-PdPk2ADt3-}3C eG?Ft5 r\ViieIU9 VXW2'0U.b?F!ZD0 VZl1 &@(aa 3ArQ1q8Z}3F{و>r/2"0,s\>rt2JA%)?F+?FvWZk>Fy4 wa)`Ayf/!\3s M_lu|a<1u#Ca aܵ0 lkj? &_8)?"a@=s(<" !rÀ5 EbqEnu3.h?Fn`?F=/*lq?FF|oĵWA|GK|Wu|V#H1CRW%7#I3Et2~IUnuj "?FBĥج|+zw~>{Iσ]X41yu,A1BaNycЍ@2Wa4y}K6#I@ZltF~fUds!'Hɇ?FQy~<?F$f㷖꿈xywF,(|pgF|'c7|,X&l x קi4bqJ+ o N9z)e~zy0l44t;Ff4t/p1znuf.ʐ Lˮ?F I`V'0'|֠{yl7#y%m"gs7#It~jyozY_ƀ%çM,fFI|^t-,捇&F9~lFV6#I.@t~3ot~F+}|$+p?F?~a@v2rA 3z9Xt|L;W-yG7IGYt#~HD: # h4>T]]9 MTMUF}W?F?F5svL*?F7= ?P6 >M (<Pd_]x#_(_<s__JuM` ?_""2"F"Z"n"""uJt  wCHݛ5t!2-8Lc5 7,ȃ5$7ǁO>JU2zG/z?=@Ms3J6A@4b3z郷0392bR808;6J[}3 w(B??F ?A4 T? EB}3AA5 2`?CopyrigTt (c)@20@9@M@c@os@fBAr@Aa@i@n.@ AlP HsRe@e@v-PdPk2ADt3-}3C eG?Ft5 r\ViieIU9 VXW2'0U.b?F!ZD0 VZl1 &@(aa ]ArQ1q8Z}3F{و>r_2h"0,sf >r)t2JA%)?F+?FvWZk>Fy4 wa)`Ayf/!\3s ޿Mlu|a<1u#Ca aܵ0 lkj? &_8S?"a@=sB(<" !r5 EbqEnu3.h?Fn`?F=/*lq?FF|oĵWA|GK|WuǢ|V#HCRW%7#I3Et2~IUnuj "?FBĥج|ﲖ+zw~>{I]X41yu,A1BϠaNycߍ@2Wa4y}K6#IZltF~fU~ds!'Hɇ?FQy~<?F$fՖxywF,(|pgF|'c7|,X&l x ק1^uJ+ o N9z)e~zy0l44t;Ff4t/p1znuf.ʐ Lˮ?F I`V'0'|֠ yl7#y%m"gs7#It~jyozY_ƀ%çM,fFI|^t-,捇&F9~lFV6#I.@t~3ot~F+}|$+p?F?~a@v2rA 3z9Xt|L;W-yG7IGYt#~HD H# h4>T]]9 MMUF:TJ3?F ?FW7")?F(GP@?P6 f@#A#M (<Pd]x#$<PdxmJuM` /?""2"F"Z"n"""u )Jt!  ߆,ߑ'7t172DJM)527HB)5L7r Dɸ!>JU2zGz?=)@M3J6߀A@E4bM3z03a2b808RKFJ[3 PB?gF ?$A4 O5E3AA5 G2`?CopyrigTt (c)@20P9@M@c@oKs@fBAr@$Qa@i@n.@ Al?P HsCRePe@vUPd7P2AED3 5 -_C %GgFViieE9 VXW2'K0UVbgF!DK0 f!jl$1N@(aa ]3AQ18`3FPʞ>FQ?͉@"#] >"3n3v1lr2JA_[3nB?F#lNa?F ?Fe a)A@y?2oW Vytu#|_ӄ 5u3?\1a u׿@ ]h? d?#t?"2%a@ks(<29tK1r55EqEu}?FG?F U?F(k=|S@ܱ|ŗ-_`ǿmgV0ÖpITf#was2Eu2* ?Fd|ڀ >{$*Jr| ZȘ*'vY9!=a,z0WJ⅏ze#wFUuUfNڿ?F6 +C?F}ۛ)NQv1Vy>ﴺkcY︉0h ӯe#wZq~zd! 
nS@3(}|{gu2lqZt@>% rgrܥg/?B~fȿe#wϢnuex9?Fqp1C5bb=|=I"~;Jt^E>~KAOA/0ƛOu=e#w yJG?Fާ;נpIZ=gE܎=|grE2`~%-Bg0L0;e3w0䬌3t-~F`3FgG ߈A2rA 3+j[K~@])nATGf@B7e3w7I#Ha`dgHD H# h4>T]]9 MUFQ|3?Fͪ'?FH(])?F?\s?P6 A > (<@P]dx#D $<PdxJuM` ?9"9"29"F9"Z9"n9"9"9"9"/()u()Jt%!  [4c7tY1s2&뒺e5n7-V;e57n?ͷqJ1 >JU2zGz?R=@M3J6A@4b%Cz@C.2b3H%@3HNKNFJ[3 B?F I?A9D POqE*3AA5 2`?CopyrigTt (c)5P2U0AP95PM-Pc+Pos%Pf3R$Qr'P`Qa3Pi%Pn}.5P Al{PU +XsReSPe+P5vPdsP2AED3 (5 -_ SK  WFVi@ieE9 VXW2'0UbţF!D0 If]jl J @maa ]xQQ9A8Z3F꺷S"$#P8hB1 2X2?@s6vK r2J:jtFEZ qrs%@"s )1HA\HG@HA"lNa?F|/(?Fb) a)AyF7U1 "fWmA}$# c6n>}X3?$1 a }x ?@ %? z?M?$"X2%a@s(}{^<[}̈ιƋ/90!3'x+;m3; 3)1F;+x(Nڿ?FT&B?Fb{f,-JQ1;Ejv㓌 +ߌK$ؿLrZ\b03>P1Z;~+p•%ʲSpFZ>ppd-Tfs1u񶛄D𷱄K 瓌)h:GrKr߿?#4B31;;uSq~F7ސ?FF;Kf@9Brq 8CysZLs뿇* %L3 /1#;H4B`gHD: # h4>T]]9 MTMUFM&d2?F}{?P6  >M (< Pd_#x_tt_ ___(_<JuM` ?2FZnuJt -t!"J&b$=>tJU2Uj2?R=@MY3Ji6A@%2308b9R;6J[c3r27a? F ?A?9Bc3BfAfA5 2`?CopyrigT}t (c) W20@9 M@]c@os@fBRAr@Aaa0i@n. AUl@ HsBe@e@v@d@Q2_AcDZ3ę-c3rC 2rG FZ5 >\iViieU9 iVXW\r2'0j3 FZ!&D0 VZlJ`>UhLaLa ]Rd u 1A A1AW8Pa o|2U?F,%-@Nx[Jk#b?o--hi  6vZ2JAp?F}l^?F +ْ?F G&S- a),Ayx|S<ݻuz}קu#qSBa y= 30 4*?? ?"2a@/s<"t!r55TqE`ud ?F /wpc*uGs>{T#|f{}b*yZ>voigjJ*#;%7f2pU`uTK`j0rr?F);L^fFp/Xx1ѣ?V|%O4Tqde㼕?FHyt߲Xzΐ_3]ؘ#yԄT|qTS|t]<8l j m?@P3ɡեZ:+a{c3~|+2|}ZliL~|d($7! C]{-m;BP4~ΐKjgVhߩ~D8V-l|G2y)PlƳÿտa:dBl#ꃋ-@t32y ?c$p3~,xEr l|0?FS$^ΐWqp>hmN.Ey?'<(+t^+}m܀3}Tq(*wmnp:~qмZBT<i}~۸73q1&3–(Ko|>F}ѣ ?Fs?FUޒ[g$,y~vZ؆t=<}!uɡaѩYK&h0?FL#|ߘ 1D|\ ťa8 ?Fd\Ў!ݯ|? ﮉ2ɡbҨg7+?FBl;|qyS ~L](MB-eS?F@J9F:?FrIÅ hߏ¢h|@b$yoSk9r|hͦ |ɡe2oe1Br{a 53Q2HD: H h0>Th]]9 MIAUFd^V?F߫C?F>?F?_j|w?P6 8 >M $mJuM` /?`OO.u>Jt;  sxЕt=M4&ٕQ䣯+? G>yJU$"#?.& ?JbA@T"Z+bR\)Z&J#2N贁Nk.#@M# J&]/o/T"Z+'T"*B#BA1A15 `?Cop rigTt (c)x0]2`09x0Mp0]cn0osh0fv2Rg1rj01av0ih0n.x0 AUl0 n8s2e0en0v0d0@:1Y>4#-'M3 M7 .&%p0b5(9 DFX'dG_'0UB.&J!I$a dFxJlJ]Uhz_1!T!Պ(Z#FF-D}ؔ?cJ:Z 4qgjGQ RV"JmTtV,Щ=?FG с aW)O YPRSW:fOtG4Pi3GYA; l'z&@B!@C?F[,P4S *?j0[OeT?UƖaU5mTV.b9 Nj?FZp^9o`isoT0e(a~Th]]9 MIAUFHm?Fʋ?FqZzS̩?Fo_ w?P6 8 >M $mJuM` /?`OO.u>Jt;  zt/ Wٕᆝ?a%?uG>yJU$"#?.& ?JbA@T"Z+bR\)Z&J#2N贁Nk.#@M# J&]/o/T"Z+'T"*B#BA1A15 `?Cop rigTt (c)x0]2`09x0Mp0]cn0osh0fv2Rg1rj01av0ih0n.x0 AUl0 n8s2e0en0v0d0@:1Y>4#-'M3 M7 .&%p0b5(9 DFX'dG_'0UB.&J!I$a dFxJlJ]Uhz_1!T!(Z#Fb?d ?F94:,X,TGZ SY RV"JmTig/?F6o?FşY] `)O 1RSx 4S x$G4:74C͐GYA; ^^s¯&@B:~mTn%y2[9)A\(aU5UF O 2Wα?FG?8LolQ,i33((?l+dUl1"Canoo2m_;T#rA $sHD: H h0>Th]]9 MIAUFk"?Fwc?Fymh?F_!sw?P6 8>M $mJuM` /?`OO.u>Jt;  aMtl#/ו1`ng n}_GT>yJU"#?.& ~?JbA@uT"Z+b\)Z&J#2N贁Nk.#S@M#J&]/o/T"Z+'T"U*B#A1A15 `?Cop rigTt (c)x02`09x0Mp0c.n0osh0fv2g1rj01av0ih0n.x0 Al0 n8s2e0ejn0v0d0:1Y>4#-X'M3 M7.&%p0b59 DFX'dG/'0UB.&%!I$a dFxJ lJ]Uhz_1!T!(Z#FFút[?J:Z 4=iLGQ RSV"JmTVoEXNx A`?F&nƜ?FӪr#> `)O 4S }#GYkhkn/e~:KF4}3>YA; ΣW$@B4mT[PRSbU5U2{6E,?F72*ʶ'?FD;6o;^}p/l@i.fMdӟ2Z[lxg҇.atoo2m_UT#rA $sHD: # h4>T]]9 MTAUF\f?FQ)Xd?Fv?FZW?P6 `>$M (#<#lPJuM{` ?#{{20{F{ZujJtg  ntL_פ| 'RGU˲s>JU2zGz?@M[#|Jk&A@$7b#z #b(b)+&J[e#t"t"?'6 I?A$/%Be#115 "`?CopyrigTt (c)020090M0c0oKs0f21r01a0i0n.0 Al0 8sBe0e0v@d0S"{14\#-Xe#3 7'6\% ZLFiie1E9 FXG/'0UR'6-!B40 FJlJ7Q0hLG1ZAY(э6?FξRI0`fS?Ks`ZsLbRP\"JT6mPD2n ?FHOX\>F->wW Wa)H0YnǿlᜯVef[U>dxԷErlg u?VcQ$@aELeANS?FZ ͉?Fޓ{lʤm>wok&O8XWleBPmlfVArieoo.HLec=}pLwS?FXʍ`ˏvol``}; PaiEz|DF`ޣKH_SF:x~ALrA k#ڃjnw5}Oi{HD: # h4>T]]9 MTAUF8XV?Fp#enD?Fv?FZW?P6 `>$M (#<#lPJuM{` ?#{{20{F{ZujJtg  ґߝ t$wOפ| 'RGU˲s>JU2zGz?@M[#|Jk&A@$7b#z #!"b(b)+&J[e#t"t"?'6 I?A$/%Be#115 "`?CopyrigTt (c)020090M0c0oKs0f21r01a0i0n.0 Al0 8sBe0e0v@d0S"{14\#-Xe#3 7'6\% ZLFiie1E9 FXG/'0UR'6-!B40 FJlJ7Q0hLG1ZA!Y(э6?FξRI0`fS?Ks`?Zs$LL!bL\"JT6mPD2n ?FHOX\>F->wW Wa)H0YnǿlVef[U>d~xԷErlg uVcQ$@aELeANS?FZ 
?F{lʤm>wok&O8XWleBPmlfVArieoo.HLec=}pLwS?FXʍ`ˏvol``}; PaiEz|DF`ޣKH_SF:x}~AL"rA k#ڃjnw5}Oi{UGDF P# h @T(PYYU UF~?M&Wd2P} (-. +0@ 0T0h0|Y03 - -3 --3--& &#:u` ? T,_'@@TUThh|U|UU&&K"&5&&0"u:)!z2<7q:U? ?; 4b0{[`+:B Fy3U"'ĉQ9" L A$6 K",G'@I@@q@E?DF@[r&BCkBuq@`-A0"Ruu8 @GC0$5Q;(?RK"U& d4; :)9u `u`["[b!;$wu`񉾑4?R5ػ3a171`Vis_PRXYcPm!#5``08``R`?CopyrF`gPt0(V`)02d`090MF`c:}`oH`ofbvary`aa`iw`n 0Al` }hsbeH`e}`v`dRh13d45!B=31ZrA66 24#30Ur?P].w,-^q^$P$sT<`o0mR`s`a>rv "saI8G28\vvk`2 hY,TYYRJ )eUF,,?F6_v?P fQ#"I#M !:#Q>NIU>bIY>vI]>I M ? I ? I ? I"+M?+Q#*"I*S%&S]V#f"+YV#z"V#"IaV#"IeV#"IAV#"IqV#" IV#"I S&2"IS&2IS&.2IS&B2I V#V2 I>2!>2IS&~2 IV#2+S&2"IS&2+S&2+V#2&?PI3 B&?I("C2B?IJCZB%&"9'a'77@I2I2GUG1uM` ?"I&6BDBXBlBBVVVVVV(V$(V8(VL(V`(/Vt(CV(WV(B,B,B,B,B,B};Āi 1a` Q|Ϳs2e8-ߕq`R~,y\1BTC\Zݼc6[?F[hsF߱~Q>qˁ&t ?FgMuN?Dv` ?,?6 EPU-?-+" Ԍ/UFz[aDv>tJ~M^3;0?gt{s4)/~Z!WJ=˧ب 5Z=fO/߱&////??{?q?;d?*V-?F`xM0P01W=͝4F??=ӎC !]=@/V54, OOS9qW@0kM9~0iZ <ȝ0*DT! SU7:w$ ]UZNOW~/mà' ]: e9T ,,? 6v?G@B1B$Gr65߱O__,_>_P_Fs[Q?F/e[p؟?M ?Sl ʟ_tfS$lߕ름Fx1cn)@aCo?20 6-Ԭ,65vٝJ w(-_q9KrUuw;y+OUF!R>7~n字/䉄5 _fL6&lLXqzݍMJ13LIo%]f Fz^(ʲS | u6)|LXX'} Ĥԭ67,M'5LmmFs]mrtoɊ!3EٿEݍv LSB@ d@}U00|Wdby#ANU 0 0 0 qH3ۓ] _WbB'P.>3]_A0hx%>Hލ?1;>s,}O+NڿW >gEn(F_>3 ]Q3}2bu$ˣSfQNFe'-$Sد|;~*YQoU(_HԪb~xBß;6H~ TEbݍ}nO*/<9n-@P,e'}eݴآm'viޞMTQދP?F/VJ$CճѵȌܷݍ1+OOĮY"ؑX:fDx~@Wt)L[~3x _AVW~IA]m_jݍX;AQq__s*~pm.@a)sI_8`}^ߔfЄ_j]~V^1P0oBoTzyE|N8^~{*ϳ}8CތҢ&s%X>F @4ύ?F*棕 L4?n~g&vn˹5so L~<@D~ew,[{ܿc=(SHǑo ,AU@y2ѝrͲs!?F N?F3?FcT(}03@ya?B᳠{j@1y8bvw ??a醼nMk?FCZώek/Ê?F@ٕ:?P(ܴ'B1/?{!K @-X;L_ @1aѶߘ>3C?FqFLc0/?Fd?<+v7D?6.zY Uco غ9JȆ߈ѿrTߣDqV\s"7G\DN,9FslD7Q-r|O╣'YQF{ F@) l$u(U)}}bQr)g|}1p1<G7t}bHA8sah77@y5?F]}H 0NXUVMFQ¡/.}=!8՞%د-+1pQ4ADF*.}K 0BoaKVWg>t}x=$GB=bvaA Q/R"i L1}/9/K/]')uw?F%ly?F*껩/ѵP﹈b@D>3bSSmlGAȔ|#h`*b5%F[ A-@zjaDi~b.x4dM?O+zOOu (?:?L?^?p??d??Fb2*.Q_ DCY?ˉFY] ) ~ zM?F!d]CZXjĞ82Dҥ4Y 0YLD|^,DYA_^F{aE`?`?Nd,/__b6mʝǧNAIY/////?r6FH?> _$s 1+/,5B/XM6#s"B N~??vLl⿤愈?[G=u?aK&d23L4CUG DF P# h @T(PYY#EψUFL&d2?|>?F~?]P} . B!D:J :^:rY: Au` ?"66JJ^^rrju# ^Jb"h,h'U ?L?4b {N[`:!#"&a#'b"^DQ)3 z$2l!L1$l" l"3'@>>ÿ@q05?B46 r&23c2ui0`҈-1Bu:@ l"3-A7Bl"F L$9u`u `m"[ bb!wu`)$"%أ#Q!'1`Vis_PRXYcPm!#5XP1XP02`?Co_pyr>PgPUt (NP) 2`PW09 M>PcuPo@Pof}RnQrTqPQa}PioPn] AlP uXUsRe@PeuPvP dB!#d$%b!(=#&#Rd&& "$B##0Ub#]&g,7^aNX@cTg@ daA+_ s11"hh22)t3*tBaaR+t"qqb(!pacb"!6u\%G!!["-@@Bq253T@b"q3L [ELP@[UL0]E5 s_$\&F3s͵NSYFue7A6P,@QjPUn725^}@$1p]]( sUQŅUQbN9P %BR&RpRQQcPFL]`NETWOFPK SHPP5 DROIR7IR5S!:0@ L*FrLr]@o"( qffs_PU 7ex}PcPn2l"! 0n<@5~b -"ER"^br36 Bm Ak Adzau"u22u3/BaQaQR5Q5Q"2b3 !!E|7l!!8ԲAP! !1; 8b,/$1JbKa'q'qne!b($pǕW ]-@$ SPaCPqA9P _Equ>PpRP!ePk`25Xq\` TsPxcP`DP!vR*`3^5b` Sb8C.SwXOQEdeyMPnfPc}PuRU`1%PRd$  N$m $`E I[!" PPe5R'`y 7sRU dCeEWAe C`eV^` SRiP@Q//` T;/M/"LoPcX/Ҭ///B lPiP+q/ ?/?R% RoPo_?Pm??NwRkx"adB`K?g IDPQdPS6O@HL7/nOnsICOHL8dhOPO#%AmV2irUQ ]Q 1O+_6 fH_Z_\G` fM#AtO_BO_OM@CO(oFNRCbogq%C2mQtBQW6oHL_Roof MHHiAאe2U w'0U W0 /ix4*є"%ݩ܅uuY>2 H4ֹhńЋńЋNńЋTńЋ>ńЇHD: %# h4>T]]9 M JUF/\r?F.U?F~?F_w?P6 AJuM` ?uJt  ;uAIt=WzIRUIl7O],#>2zGz?)@MJ߀A@ebM #zwb(b%)2+2&J p!p!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r !ua i n. _ Al (Us"e e v0 d i!E$m$[w?b6 ?A$$4/U%-|# |'b6%?6iie<59 6X7'M0Ub6!}4K0 -FAJlJAI0h7!1!ZF9|$0f2 <ڒ#  OR|VJa9Z9P] PReS[RUE:dHܣM\]\ɱQ<5U oYaU526_HU"]rA #cHD: "# h4>T]]9 M JUF ?F"]?F|>?Fuvw?P6 ]AJuM` ?uJt  ].It=W`ĆW #JRbJlLXz#>2zGz?S@MJA@eb#z(b)R,+,&Jj!j!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. 
Al (s"e e v d c!Eg$[;?\6L ?A$./O%-v# v'\6?6iieP659 6X7'0UpB\6!w4%0 'F;JlJAC0h@7!1!ZF?Hܣ~0, <pB#\  IRvVJ:~?6VOS sgZ^TURUEUEBQG\ZYpBaիQ65UQ[fS.aYUb520_BU"rA #cHD: ## h4>T]]9 M JUFyE+?F0]я?F~?F@?d{w?P6 >JuM` ?uJt  vr It=WP|IRsUIlO.a#>2zG_z?=@MJAw@eb #zb( (2+T2&Jp!p!5 g`?CopyrigTt (c) 20 9 M c os f"!r !a i n. Al (s"e e v0d i!Em$[}! K2?b6 &?A$ 4/U%-|# e|'b6 ?6iie<59 6X7y'0Ub6!}40 -FAJlJAI0h7!1!効ZFHܣۄ0Y <? 6ybi  OR|VJa9PY]PPReS[RUE:]|$M\Q`YdC4U<5U=VN_YQU526_HU"rA #cHD: $# h4>T]]9 M JUF ?FDL?F|>?F~w?P6  >JuM` ?uJt  ].It=W,t@ "JRb~l>t2UK?=@MJA@bJ + &Ji[J'?a& ?A)B !!5 g`?CopyrigTt (wc) 20 9 M c o%s f"!r 1uai n. _ Al90 (Us=2e0e vO0 d10!H$-,# 'a&K?6i@iek59 6X7'K0a&!|$K0 FJlJ8>UhAA !!ZF@?d3{  </.a#]M 1R^VJ:]ɜHܣ/_AZ2Ey#xUEUVS s#FT=RxUk5U؝Qˠ\BY3QX5*n?֬BoAZ7],XU[Bl|Q2_*U"rA >sHD: ## h4>T]]9 M qAUFA[?FFw:?FE12Oz_:w?P6 L[ >  (m$JuM{` ?Agg2DuVJtS  .!tw|3s,kIɸ _>JU2N贁/Nk?=@M3#JC&A@b#zw u#bR( (+&jJ=#%'?& ?A"b${$:3/*T'B=#115 `?CopyrigTt (c)020090M0c0os0f21r01a0i0n.0 Al@ 8sBe0e0v@d0+"|144#-=#3 7&4%"XOFiie2E9 ԆFXG|L"'0URi&!40 FJlJ" >UhiQiQD ]GQ N1DJc$LBhL!l%,W2[AN11(Z=#FC[RRTFQ ucR/Q Rf4"J:GdFX_!0wbfc L|c :D)$aER!Jj^opdiUjh2E~V! b# 2 ?߮.hOEu<ɛ?F#Pw|hؾ6|ެ^hu?F/Lw|iU6|^^e~?EYo< thQ[T`vg2w 0M6weQT`Dgwӏ u5XPo`Iϣ4oj7::eQT`Bvu*< f|hQT`r :{T_t Pa?F3SXk͖o?F vT?P);N 90 ,)Ri0m6uR. 6u`yyb !` /K!? -hO.?Ra @cB)r5%aQ K)XiyBG\h5b-Ob?Fԇ*sR6|FsaeQ Q~B?F)qqiQhAeQ9dϢqwiaeQe񯁜ai|hQEWXYWuwZl a++hf! l?FQD0)ˑ6|P?U/hL t9?Fz2)v<>KU6|*hQyո ?ghhWyh= A=}!eQW%|6^T?`NAA_owA j;[6te UI:vdv =KylI?ε~T_Ţr+a W  /~nIX(dur|?F 4?F?PَKK :d?Ltؤ?yl!?(!@[l Xj2GtaCpzm&褯 "4gyC?); 1ecDhe2GoYe%3rQ M3#k2HD: H ih ,>T  9 MJUF ?F~?F|>һP6 m >JuM` ^?5uJHbcFFmtulyt b#tI>2zGwz?@MJA@Wb)+& !Jg"~& E"\Ab)+'A,A/UB115 `?CopyrigTt (c)I020U09I0MA0c.?0os90fG281r;0t1aG0i90n.I0 Al0 ?8s2eg0ej?0v0d0ŤM3 b7~&XXt5i b9 `!X78G'0UB~&%0R 8FLJlJ8>UhAA( 1:9m EsBU[JHU؝Hܣˠ Xm] 3Ey%U3^֬_Z7]G,XHU)a[BUQHUF@?d{lQY?.aXQ,o`ɜQۤoZ 2QXUH_Z_l_~\_H8>t"s ûdGC, FPd(#)wfB s^+~d uo@+WoSsG1,t P05828)h4; 6??8FB RJ0UFD  h(^TYYBBUFL&d2?x<F BP(?P? 8$ B66ȅH?j|;? 5JB@?\.ϗsU!&;/ҍ  0RB`=Bridge,lo ica ,n tw rk sy t m t p"r pPy"i"s c n"cB i2%z e^| R(* G@& ߃l?"488Q ]frsww ww ~ww w!%} Iceqs^QZQDrag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aab|>~??[q㿦)j?&Iz?&d23L4CUG DF P# h @T(PYY#EUFL&d2?@F|>?@_~w?P} Y.  !:J :^:rYDu` ?~"66JJ^^rru# *^:"@,@'酅#U2N贁Nk?H]b{# ?#^4Wb {['`:!#"h69#':"UY^"Q)+3 62D!L1$D" D"&3ɜ'>>ÿ@kq05?460r&BCku2u{0`i-1$Bu@ D"3?AIBD" $$9u`u `"[ b:!𿻐u`$i"&){#)Q!'1`Vis_PRXYcPm!y 5jP1jPAB`?CopyrPPgPt (`P) 20P9 MPPcPoRPofRQrPQaPiPn AlP XsReRPeJPvPdB !r#(d$r%:!B={#&r#dd&& t"r$,7^aN.@.cT<*Po mPsPQ>g da +_c&1&1r"hh")ht3* ut2aaB+ti"aa bo(!`ac:"!6uM\%$G!![2?@@Bq%CT@ br"q3$ 35$6Ep05E5c_r$4&F3s͵N?SYFu eIAt,@)Q ySUn%6}61`]]o(c-U)Q)U)QNqKP % TR&RpRQQuP$]`NETWuOXPK SHPUP VRO!RI S1L0@T$J$r5GYkr"o(pa&&s_<*-pq o(A֨57IABKFkq`RPQsPPU exPuP2D"! 0<@Eb Z"E*"br&3BmACAfa^ur"ku"xu3/\Ԭ20 iBGQGQi"2 b3 !!E0|qq7D!C8ArRA!!A;Jb61\b]aaae!bo(q`軕1W c]?@ SPaCPaAKP EquPPpBdPePC8%XI[m` {TPuP`DBPvR83^xmz"` SQbBRdP5d=QMPnfPcPuRiEQ`]&1PRd Nm 8E !3o! PP_q eiGR'`Q sRU<&eФ/YAeC`Ue.^Ԓ` SRiP@Q8T/%/mLPcX_/҄///B }lPiPq/p/?o* RPo7?PEd?v?YNwRkP"a =Yko HiԿAhǖ꥔ Uはw'0Ur/lAx ;,u;uM1, صpH ֍@噄ѤѤ&Ѥ,ѤHD: %# h4>T]]9 M JUFX^?@ ׬?F~?@?P.6 AJuM` ?urJt  ]It=W_7],IRUIlw#>2zGzw?@MJAw@eb #zb](b%)2+2&JBp!p!5 g`?CopyrigTt (c]) 20 9 uM c os If"!r !a i n. WAl (s"Ue e v0d i!Em$Ɇ[?b6 &?A$4/U%-|# e|'b6 ?6iie<59 6X7{'0Ub6!}40 -FAJlJAI0h7!1!効Z@9|$0Y2 <ڒ#4  OR|VJa9Z9P]PReS[RUE:d_HܣM\]\ɱQ<5U oYaUb526_HU"rA #cHD: "# h4>T]]9 M CAUF|>?@:x??@V|+q?P6 8]AI]M ] (JuM` ?#SS2uBJt? tzEUQؕKJbtU]OK>JU2zGz;?@M#J&A@bS#zI G#a(+bo)|+|&UJ#!!5 `?CopyrigTt(c)20 9M c oKs f"!r 1a i n. 
Al70 (s;2e0e vM0d/0!EH$#[#""?06 ?Ag$I~/%-#,# '6%KO/Fi@ie59 /FXEG'0UBŬ6!40 wFJlJ]Uhz!hAg!(Z#F@H/DžB0J>| 8Ene&KQ4\ RV"JT~?@gf_?F)?@ $C?8S sBKYbFe>J^R,UFeTGK!Q? ӧ|+@B:TPܛ\YܨMJe5$!JP/.`*7 `3iK=Dn鷯hZbU5TQ`R?@[uo0lBqaRSgdd*lxrl{!Joo2_Uvg"rA f# =HD: ## h4>T]]9 M !AUFnx?@?d{?Fg݅?@@?P6 $&> JuM` ^?1?u.Jt+  Rqtes.aqzWWqI7>JU2zGz?=@MJA@b1#zt' %#b?()1 ?(Z+Z&UJ!!5 `?CopyrigTt (c) 20 9 M c. os f"!r !a i n. Al0 (s2e ej v+0d 0 !E$[! s2?6 I?AE$ \/}%b-# 'Y6? Fiied59 FjX#G'0UBX640 UF"iJlJAq0h7!1E!`Fmr?@Hӣ۬0$*fq`7$d 6yb7  R VJa9Q|TK ST1 QE:]|$0]Y_dC4a8^] 7sP@EU < KB?x_QYFf*ɴ%Ud,Z7"_+dK lV}?? g52^_pUE"rA kD#TsHD: $# h4>T]]9 M AUF|>?@~?P6 n# >] JuM` ??Su.Jt+L iteJ]zb>tJU2U?=@MJ?A@(b')4+4&J[vr'?& ?A/@)B!!5 `?Copy_rigTt(c)2U0'09M0c0os 0f2 1r 0F1ai 0n}. Ala0U 8se2e90e05vw0dY0! $-# ',&?6iie59 65X7/'0ʼn&-!$0 /FCJ)lJ<>Uh3 C ]1!ZF@@_?d{ J*4 $d?.a7^QiBQ ]RVJ:D]ɜHܣ[_mZ2Ey7UEfU$S sTrTiRU5U؝QX\nY3QX5Vn֬nomZ7],Xf8:`ϋ iؚ~Gc2anY_Wt T}զDTk 0Q: %C?J i3l}f*i&udWl˔alb+d ?ܲVXU2D_VU"rA k#-UHLuD" # ;3h , T  JEuUF'_?@~?F|>ếP VNM A #$Ae  >uA` ?KG `mm. RSu#\>b]XFOmrߞuZptY b7#t&>U2N贁Nk?@9 3#>C&A@q" u(b)e+&f" !It$>=##"4#K"& "{"b)&;'f",f"/N =#11;#`?CopyrigXt (c)020090M0c0oKs0f21r01a0i0n.0 Al @ 8sBe0e0vZ!@d@+"BM=#3 7&4"Ht1EiIfE 4%!X|FGG'0UB&h50 TFJl>(fUlI 8%#IA{!1(R!gG?@c\;0R dUeR ?!c e(T@|$?Fn-?@Z %_C?YsB\Ӌ$UX?uj3dR-=wnQAY ܲVX3 e^Fx*aqRS3jIBFm_[viH{goRJdo~b KYeTGQ# #s sFR-Dn?% FQsN# OwYB tSQ~dlvo@+HWoMsG#) R^ P~[8^bhfiPUFD # hVTYYzU?@ ?IF?h PaR"|ou]ofu2/uU ` C&nect&r` e-?{U H  ^   -}|5 p`I`V?}`xa833ސ]3σ3u 33?, ?Gxpx^& This coUnetrWaumtUcl#yrebtwe h *hp3is.HD # =h @>T(]]9 T 2AYUa? uP6 u]`u buA_ u  >3.TAuV ` lxMui"a)J@-?x;'6"wE- xo'6"y( _rq?o@I ?$m%? @horVA$"38*EL2-_br @P' Z6\11u .$n2y2^2n2u9ȑ#ULH/MM^1 # A D5 :0`Vis_SE.cTm!#20ED)`?Copy@0igTt; kc)j@DCU9j@M-@c@0o/@'ofhBYAr\@AUah@iZ@n7@ j@WAl@ `HsBUe/@e@0v@d7@N@B= #x  A G{?9#r22&0444 MG4'ALB5?_$k&(SZ6Y8# #2"qlSF,C`l>lUhr5 QA+/J *&%(og5k#p f3e'r#TA(11MJEsooR6'2Y7E5S3Wc7r516Ũ_aJ2_quU\DQ>0]b;)`R@As-@E T@xh@DiA eJi=S]UBTJg'0SV!@LA,SrWg 1Cy$A$AMJ6!Hp,@R/TAEb?RQP? _H>Ff"s O`E )F(SWo#*p7BaqeHdo@P+Xu"sUFD  h(^TYYBBUF\.ߗ?x<F BP(?hP?X8 ]B66 TbH?~? ?B@L&dW2?Y (sU/!-&PO/v:  0eIB`Modemw,n tw rk p rui h"al !v c !^| f(U GT&( F?_""U*++'?MwpwtwpwDwwp tGwww G@wwDGwD@w p@wyww[wp|`npXDrag onut hepe.Rih-c]lckaUdcos UPo etQeAvUwreP 5aabi4FѿL&d2?j?oҿ=t??\.r3r\.CUG DF P# h @T(PYY#EUF\.?@i4F?@M&d2ɻ?P} č,.B ! :J :^:rYD 7 7l7u` ?"66JJ^^rr)u#",'񤔅qU ??4^b,0{][`:#A82A6#'%Q)U" ^/LV1$  d7'>>ÿ@qZ05?46F0r.B4CZ2u0`-V1ZRBu@ 3mAwB h$9u`u `"[bz!u`R$"% #WQ 1 71`Vis_PRXYcPm!#5P4o67>B`?CopWyr~PgPt"0u(P)"020PU9"0M~PcPoP'ofRQrPQUaPiPn "0WAl` Xs bUePePv`dR 1#d$%!*(=#6#@d66 2$B##0Ub3]fg,7^q+^\@V\cTg Pda+_Msd1d1"hhr2)$t3*tR+tHR@ q q"@q@qTb(!PpaFc"+m6u\%RG81>1[B2m@@Rr5.CT@Fb"31 UFHUM5Ms_$&ׄF3s͵NSYF)UW0TewA,@WQ S+epwr5}1PpP"(A2yT?+>BB).B4KCFq`R`Qs\~PU T`xPP2 0<#@4Eb_ &2CE"brd3vRrӟ(Mff.F_<1-mTwNcPpu"ur2u3/R0HRuQ$uQ"2Tb3 &1&1E|FF0RQ(dPpWQ,[[ ]m@0` USPaPePl\rsQAyP E u ~PpPePr5X 1V` ?TPEѣPH` D`vR3𯴊ߜb` SabߣPLB PMBPd`rUdKMPnafPcPubHU\nHUQi{]d1;PRd Nm.0Š)U,03!8 PPr0BQhHpcu]uR`sRUѠeANe>`pe^bt1R; SbiPa¢&T18LPcX!mFTXjB?l'`iPkqo&0Z38RPoO,d/?c dDψ(ȱȱ"(Ms3) &&&WUՁ:p/` %R&RE"ibȱ`NEWOPKPSUH`PV0 ROj2URIV0S$AQxR7 i q'U,ygm5xbb7'dũfɵR3D-҃aa8')uyqarL?aTe5x`H7vAտD?aKDKDaKЅDKDK7eDAKTeDJHD: # h4>T]]9 M JU@i4F?@??@hĻ?P6 ܠAJuM` ?juJt At=W ףp= #JRbJlQO#>2zGz?)@MJ߀A@ebM#z\(b),+),&J[(.w?& ?A$T./O%B!!5 g`?CopyrigTtZ(c)Z2009ZM 0c. 
0os0f21r0>1a0i0n.Z AlY0 8s]2e10ej 0vo0dQ0@!$b-# 'Y&?6iie59 6X7'0UpBŴ&!$0 D'F;JlJAh h7!h1!YZ@x f, <#  IRvVJ:ǿxT]]9 M JU@"&d2?@ `?@BP(ĺP6 ]AJuM` ?ju#t  m۶mIt=WwIR$I$+IIl#>2zGz?@M|JA@e7b #zb(b%)2+2&J[/.?& I?A$4/U%B!!5 g`?CopyrigTt (c)020%090M0c0oKs 0f21r 0D1a0i 0n.0 Al_0 8sc2e70e0vu0dW0!$-X# '&?6iie59 6X7/'0UvBŇ&-!$0 -FAJUlJ8>UhAA 1h1!T12 $<cV5{ WR~VJd2L&?@bX,Ʃ ]S fR#mS ?ZZQ4E^0__ %zzX5:U XYePY؉X5aZjoohY}aUVeEVŠRldQY)+R+X2>_;"rA 5#dsHD: # h4>T]]9 M JU@xmJuM` ?uJt  Am۶mIt=WHzGIR]Il#>2nz?3@MJAw@eb #zb( (2+T2&J[ (p"?0& ?A$ 4/U%BB!!5 g`?CopyrigTt (c])020%090uM0c0os 0If21r 0D1a0i 0n.0 WAl_0 8sc2Ue70e0vu0dW0!$-# '&?6iieP59 6X73'0UvBŇ&!$0 -FAJlJ8>Uh@ 11!Z@?t:Né  <qqh#B] WRVJ.@ ?@L&d© ]S )Q#hY8?Kh/QE^ `__ ^BX5a9XYePXRmScRU5:]md2L&U\ahY*PX\eEVV_iaU2>_PU"rA #dsHD: # h4>T]]9 M JU@i4F?@M&ɯd2?tP6  >JuM` ?uJt ASt=WJRb~li>t2U?=@MJA@b + &J[N#?0a& ?AT)B!!5 g`?CopyrigTtZ(c)Z20 9ZM c. os f"!r 1ai n.Z Al90 (s=2e0ej vO0d10@!$b-# 'Ya&?6iiek59 6jX7^'0a&Z!|$0 FJlJ UhqMAA ]A !4Ja-'Z@?fl6˃  <HzGh#BA OR|VJ:6]t:ϝNM__Z(\#UEL@ _?@Lƒ S AA#`YQ(\Xk5^ s`o!o ̌aX5UxT9 M3JU@i4F?@M&d2?ҷP6 m >JuM` ^?5ulbSD_Jt bstE>2zGz;?@MJA@u+b+&K !JS"j& 1"A.b+-,-/*B!!5 `?CopyrwigTt `wc)5020A0950M-0c+0o%s%0f32$1r'0`1ua30i%0n.50_ Al{0 +8Us2eS0e+0v0ds0ib =L!XY'0U Bj&%0 :lJ Uh*M[A[A 9A 1:xTh]]9 M JU@w[2?@2?@?뇤?@bf?P6  n>JuM{` ?5 uJt  hxvEt9SYEN^Ӟ~ӞEh!D#<>>2N_Nk?<@MJA@cb]b)++&J"< u) ?AHb$z@J+%/F%B!!5 c`?CopyrigTt (c)302`0930M+0c)0oKs#0f12"1r%0^1a10i#0n.30 Aly0 )8s}2eQ0e)0v0dq0!(Y$-3 7x&+0b59 6X7G'K0UhBx&!$Ia F3JlJ]Uhz11Z@`Y3   8gF/_ M ARnVJ!49BYOPBRWSMRUE:]R.?\QRYl>Q5U/V@_YQĈU2(_:UrA cHD: H h0>Th]]9 M JU@p8?@d2L&?@αB?@.?P6  n>JuM{` ?5 uJt  YEt9SRQEN߆$1Eh'>2N贁NKk?<@MJA@cbb)++&J[ < u)| ?$%/F%BB!!5 c`?CopyrigTt (cu) 02`09 0uM0c0os If2!r 51a0i n. 0 WAlP0 8sT2Ue(0e0vf0dH0!Y$Ť-['# 'x&0b59 6X|['7'0U?B)x&!$aI 6 JlJ]Uhz!^!Z@mN/?  8gF_  REVJ49Y&PR.S$R_UE:p]"`3  \Q)Yl?>cQ5oUV_YQ_Ub2OUrAg wcHD: H h0>Th]]9 M JU@&6?@0ސ?@?뇤?@bf?P6 n>JuM{` ?5 uJt  lUUEt9S;EN^Ӟ~ӞEh!D#<>>2N_Nk?<@MJA@cb]b)++&J~"< u) Y?AHb$z@+%/F%*B!!5 c`?CopyrigTt _(c)302`W0930M+0c)0os#0f12"1r%0^1a10i#0n}.30 Aly0U )8s}2eQ0e)05v0dq0!PY$-,3 7x&+0b59 6X7G'0UhBx&!$a F3JlJ]Uhz1h1Z@`Yg   8gF__  ARnVBJ49BYOPBRWSMRUE:]?R.?\QRYl>Q5U/V@_YQU2(_:UrA c_H>vQ+s #X8L(r FUߤvf## x!B-|+y~|dlxo@+蠖oLsGhoomP d*ߎUHTx7 z*^_Q$OUFD  h,^TYYBBUF\.?x<F BP(c?Ps?m. 9c:M:O @z0 BH ?? V@L&dW2?F-W(Yi(G.sU!&P/)7(:  0 `WB`&Ring\network ?p$0rr0pPE1al\d$0v0c$0 c1^ (J YG& r+?!) +)!q q ǿ~~|p~~ṗ ~p~;ކ~pp2AAqDrag onut he]pe,nUdu /p64lies9Lt]wrkcQm tOfTly lGo_"imdO.b\.L&d2??;IOғEMQS?=?mar3r\.CUG D  # h T,YYEUF\.?@?FL&d2?P} ϛ% ^.&N+O+.Q+R+S+T+\+]+^+_+`+a+b+c+d*+e+f+g+Q.i+j+k+l+m+n+o+p+q+r+s+t+u+v+w+x+y+z+{+|+}+~++++++++++++++++++*++++73636 3 l_Fu{` ? &CG/B>CBHCBRCB\CBfCBpCBzCBCBCBCBCBCBCBCBCBCBCBCBCBCB"CB"CB"CB$"CB."CB8"CBB"CBL"CBV"CB`"CBj"CBt"CB~"CB"CB"CB"CB"CB"CB"CB"CB"CB"CB"CB"CB"CB2CB 2CB2CB2CB(2CB22CB<2CBF2CBP2CBZ2CBd2CBn2CBx2CB2CB2CB2CB2CB2CB2T6g666U66FFMBuIUHrN|Nw1r L@Gs5 ^CB0  $sV!v q)?@4v!p`:!@ K*RGs Br(1=s0s)) %Rt؉sq1`V sPeXY)cPm!#52862F1`?UCA p r gPt3()32U0Z93M c_ Eoi oM=rK yas iA n) J3A# l3EsUei e_ vd)21wrޟҀs?@A3{d|R$Bƕ~u +@@âɸ!BMBj8}rup`i-qBu *@rw1s!8=}/B 2t@I9u`un`"p2] HqDou`t1E[B2TU뱥)8s5ä  r}vh^T`%AT<7 o3mss >XxĀu7T=n7 eMTNtT yk3Dvd{uu ƃ@Rdư5ߙ߫ƕ9'9h\nE|1CU t@ʴ#FXjHqʶq-?bt[Ű//A:/L/^/~///A///p?!?3?ʼqV?h?z????z?? 
O@ɯ+O=OOOoOOO9OOO =t__'_!D_V_h_ͪ___#___$o+o=oB%`oroo&ooo'o!(5GY)|*+ .HTfx{şÏƹ/&8J!0m1Ɵ؟2 3BTf͐B_5Я6);7^p ̿_9:3EW;zόϞ(Oas?ߨߺDqm4# 1T 5h86%8llc౓#"AAш]]Ѥ C){q{qpqq22EyyNNaѕѨHr/d~CU|@c` MaЧufu e nA` EqipeBtm/h~4qP o|uNmb /b XQ Pr6HZ/{+ srt_q/\A6s6!P/^za Sil/T1 LcAHR\fpz$.8BLV`jt~""0":"D"N"X"b"|"v""""""""""""""" "" "*"4">"H"R"\"f"p"z"""""""""""""""""$"."8"B"L"V"`"j"t"~""AaXT@b FTP%జJKTltφT0ƐĮbPQwr SS#TS”GLF2DFbXO%—ȩP_(_:_S…LF?Hd}OƁLQ____KFLpXO( B}KoЁ?_T_o(ooS L FY2ణ3oaoЁ !eo}cGYkS L Fft]eҼ–B" E L FsxLfO͏ߏ*H L F l_go}<*A]LF џrmБ Є!7I[EʟLF `0 毯ʯ_&tEULF OT~N򤢿ƿTؿFLF O!H$6 Hϱa6ϗ*FLF ?ZبP6HZߙBLFR5xqy?o%L%9F//"q8?onT&L&9F?ᏼJ`_-?*e'L'r?UO{O$(L(roy:uoڟ) L)r!_@4 oj|*L*r"oݿF\);a+L+r#xQwχP ,L,r$|[6qȟϨ߹-L-r%Ϗ0 fTx.L.r&BX%7*] /L/r'7Msߙ0 0r(3 W2m1  1r)˿@,bt-2 2r*>T!/3/Y=3 3r+p/Iߔ/o!/P//M4 4r,tS?.1i?/??M5 5r-O1(O/^OpO]6 6r;Q?\AO:OP_/_*Um7 7r//?E_Q_{?__}8 8r0+oOo*aeo/_oo}9  9r1oa@$ _Zl: :v2/q6LO+Q; ;v3hA?gwoP?؏<  >v6~_ɯ߯2H'*M? ?v7'=ocsoԿ @ @v8#oGπ"]ϴ¿ϥA  Av9ϻ@RdߊB Bv:zߠߠ.D#IC Cv;`9_oP D Dvv*@  E-GV,Gv !7-[k=HLB,N5>LB_r:?C&2F3s͵NSYQFEB,N>HdDq7ME5qD68W#"U%&EfW@@cX @ S1` Token?_RiP1nD?[Q?##qh1ab /X##zQ?AzQgVD6e]r##"` %P&rPpPrtPe+sA D2`A!_`NTMPUOPKfPS A``UEfPPPObR`5ISi0QAqD_!_\X\OT.U_C0?ni4oo\XXOSw'K0Ur%V!0^ v2Zei1DAT\1sEPuQ1MEX OS%HVl2UxAUx1'ExAMEx HCDrDC  @   !"#$%&'()*+,-./0123456789:;<=>?*>ABhiATiiJ\ 3U@\.?FL&d2?JP}@YIYqX"u`W?"&&5&,"u6)~}bE@{#&t03! 7("tm!,v&b$&"qK!Y$30^?2-,"l-2""2!l"6 #",$=F#"f$6)![0$%J[LD0D=F?|" (&3AA1A1 ;m`"`?CopyrigPt(c 09M-Pc+Pos%Pf3R$Qr'P`Qa3Pi%Pn Al{P +XsRe*SPe+PvPd"!ZOAy 7$^ ?P?b?[!H${eQnB? ???O %O.O@@OROdOvO6CkaOROOOO_ %!_3_E_W_i_{_Vx<kb_b____ o %&o8oJo\onooy_ocoroooo %_.@@Rdvwd %0BTfxyheҏ %5GYk}~/mfĢן  % =Oas~_?gůɲܯ $ %?Q@cuOwhʿ) %RGYk}ϏϜ_i . %I[mߑߣoj!3 %N`r k &8 %Sew+= %Xj|)m 0BU]o9n"/#/5/G/Ub/t/////Io/2? ?(?:?L?Ug?y?@????ϟYp?B OO-O?OQOeGQoOOOOO?ߧfqOR_ _2_D_V_"u!q______yr_ro%o7oIo[o'"voooooos *<N`,#{@yt /ASe1ˏݏu "4FXj6%͟ߟ/yv'9K]o;&ү?yw, >Pbt@'@ſ׿Oyxπ1CUgyE(Ϧϸ_yy#6HZl~J)߽߫oyz$(;M_qO*y{)-@ RdvT+@y|.2EWi{Y%,y}37"J\n^5-/y~8/<2O/a/s///cE.///// ?y=?ABT? f?x???hU/??@???OyBOFRYOkO}OOOme0OOOO__yG_Kb^_p____ru1____ oo yLoPrcouoooow2oooo QUh z|3@%VZm4Џ*/[_r5՟ /? Ɇ`dw6گ"4#Oهei| Ŀ7߿@'9(_jπnҁϓϥϷϕ8,>-o',sߘߪ߼9 1C2! F1 xR'9K:)6} ;@);M<+)~%< .@RA09"5=/!/3/E/W/F5I/2/////E>??&?8?J?\?K:Y?B? ????U?OO@+O=OOOaOP?iOROOOOOeQ_!_3_E_W_dOXGv_b_____uAo#o5oGoYokoZIoroooooDžB(:L^p_NɁHD: H h <>T$]]9 MT kAU@\.?FL&d2=??F?P6 LA P>  + 4DJuM` ?"?o`0?oNu^Jt[ tgJbg>6#J?#,b/ @TZT"Z""P& T/@n_^B% 'N< ?m6 ?AeJa@ۢR4` Ac0entCol0r3z۝0Aٝ0bbbz90“2???9Nbb?1"bAYFbeDG?#E![C39 , B6"hLJ?#QE!TG; J`A`?Apy$@igTt(@)20mP9MYPc$@os@f_RArSP#Aa@i@n. AR @lXWsRePe$@v0dP@ᾡ`1O_o?#2TUam3 FD5#]BrFlL]BrF 0PuGiAbE9 6%yXGg'0bUm6!40 fjlJ qUhhUGHQA3(Z\*{ ٠0"#")b5DrD6"J Az~%"#"BA}/t?PSQ< E!?yET#-#tgaD[#,E:g@qVE`0w>]BrA \Cď'$$+HD  hj @>TA(]]9 M  eAU@\.?FL&d2+?]P6 LA > M /8HJuM` ^? C s4CLsRubJ1t_ tuJb$">JU5 JI4#=#/ @X^X"^"B"6 X/r cb!Bh@,G-v31k!k!5 J``?CopyrigTt(c)2009M0c.0os0f21r0Aa0i0n. Al3@ 8s7Be @e0vI@d+@0tv3?c- =< ?F ?AeJa[_Rf14W` A0cI@n0j1lB3z_ՁA_AbbbzY_)mEZf1T  9 MSAU@\.?Fr??F~q?P6 B >%M a*JuM` ? %U U4:uDJ@bEF&d2ɿuBltA bJt EIM>JoU2TU!?#S@M #~A@@"WbH)U+U&mJ !?v(GtI#J#"# %"& "A@"bH);E'|"/-<*B#h1h15 "`?CopyrigTt(wc)2009M0c0o%s0f21r01ua0i0n._ Al0 8Us2e0e0v0d0"MX#t3 t7&"tt Ei b@9 %!X GގG'0"UX&-50 FʢJlJAUhh)Q)Q!#A@! (:#`RB: 3R$JTV  h1?[_R: @AABUVeLٮ\2maXUe.o_jLo^m2aroo__7\HD H# h4>T]]9 MT CAU@\.?F??F~߻?P6 8M >M (JuM` ?#0SS2uBJt? tTKJWbK>[JU#?,& ?AA@R"bZ$J#B!!5 `?CopyrigT}t(c)W20 9M ]c os f"R!r !a i n. 
AUl0 (s2e e v0d !$#bB-## ',&%en#2TU1?"@M#R"g&a,R"g&j'\?6iieE9 6XG"7'02U,&-!G$0 NFbJqlJAUhh7!lR!(Q:#N J>83R JWTfV  !?qY_R8>qRl?38VUVL?&d2n\] aٔXEUoe6._x_j omQ2oDo5V_h_o\056 b_HR>+s 1ǡY-MSMלC FHVZ#Cܞ!B hd|@+MsG8oՁPwgF:p uЇ7xsUFD  h ^TYY>UFD"H$'@F @FxT]]9 #4AUFD"H$@F@ @&@F?x0>Uhhn1n1`!#!J`F6 [>A{ (FMBa!? OABE3RNkO}OGBERE6 O?OBE2?;.1rA _0rRxS`3HD: ##=h0>Th]]9 (3q #UFGJg:#@F@ @FWtyz@F.???Pv> u{` ?u3>(.t  ( y?_A@HNpΈ_-t#`aU"p [`JڍU5 ILҹc0U~ "?AG}cTLIb` LineCoklK rzc+"1&Z/#nFFF"?&?#+"1&&` STadK wI/#/N+$;# FC lM 72zS&eacۢR4` uA cG nt)?@#cb3zs94#+$3$I!8<>3 Q3#8.f.30FP=4B/G3B11G;;#?1pyQ i}gTt ( u)@2`09@uMC cQ osK IfB1r@P!a`0iK n.@ UA%2 HsBe@eQ vG d@0 l>]UhzЍA3+!`J!VQAU__T3^_odgU5^No_roU __ooj'EAb59 M{7/v!3rQav44CK7%iOHBTg7D'$~vv I+"688e @ Q:(3O@v5?=+$cz,Z#b~-(T{ HD: ##=h0>Th]]9 3#UF u` ?6u3`b>"6(t  <,?A@U"p [ZBH|гYYt [ՄJU5 CLT0U"?AG}eC]` LineColZI rz۷!#nF;FF{#?&?#),)"/&&` STaKdI wG/#/ZL+$9# FA lK r72Q&a` ?RR) 3#B2((236727kpvB1o31P17;9#?L1wpyO igTt (cu)J@2`09J@uMA cO osM0IfHBL1r<@N!aH@iM0n.J@ UA#2 @HsBeh@eO vE d@0 l>]Uhz1A3)!BJ@o39DYX3UD_d_vY3^__gzUf53^_V_ozU3__H\o_j5bPf59 M{o7l/5v!3HraPv4!$ 5v|mai6oozo7]'$t~kvPv v8%TE xH@L?c8 eaD@eJ@ l@Qo:0@3 9r)']z,Z#(T5-sTAcL@0P@HD: ##=h4>T]]9 P3#UFpo@F@ @F18Q[ @F.q??P> u` ?u3 >,2t  eX?A@LR[ A\c2t#`e"p [NdJU5 MLc0U "?AG}gLMb` LineColO rz$g/"5&0mddd#nFF;F"?&L?#/"5&&` STa%dO wM/#/R+-$?# FG lQ 72W&eagRa` uA cK nt 85W%#gbd3zRw98#/$h(Ph9BB3 (C#8>239FA=B8G3APAG;?#? 1wpyU igT_t ( )@W20@9@MG ]cU osO fBR 1r@T!ad0iO wn.@ A)2U HsBe@eU EvK d@@l> 0>UhhCA3/!J!VCQU__T3^oomgU5^Wo_{oUE_oo%obTG t>Q e jXHle59 M{7/v!3rX/Qz#=!$K 7%inug7D/'2 #vtx J/":UGDF P# h>,/T6D # UFD"H$@]F@&u@P} jih,D@Su`} ?.U@Nu#hQgtSftܸtRtA@3db#t%T/M/"j/+.d#%@?/"t`Make B 0c"0ground#3%~ؒ314+1`VIS_PRXY0CHM!#60z070"w`?C40py20]i00ht&0(,0u)&02009&0UM0c22s40f2R1r0Aa0i40un0 &0Al:@U 8s>Be@e20%v$0d2"e (^q5 J% 3hHA%DK%D,K3DGg $aI_9SFTv4y13TU-T[Te ZTATU J8>wSa%F1Q01$DtybgURwSi@^UR-n,?oUR1j?RRTuKrQi? eb #! >baes?,3QRoX0cD&q1cQ(QR=wS1VZi#3 D40N40UtBn0a0n`Eq#?_^9jYazPtt"(wS117eU5`>Xcy1`I!&` H0d$2&F40qUAw0޸q` SPo]wRR+="v6 ~!R)RRLf] Th 'Q0d_HV> ל ߪJAC BFWt#h<;B} ~w]@ekyo+^CC{]KaX]m*<N`r|3 y@HG8w$UFD  h ^TYYUFD"H$'@F @FxT h]]9 # AUFD"H$@Fcѿ?&@F5 L?sFC?P6 u` ?iu#t A@t   q?7Q.7)CWb7]ARMJqmdddpnFFF? ?Agam9T6` Fil> wCo> orz9b2j9A9obdk#zY)א9S|uX9/K&` /2L%!X!#\#w&` STWadD w/" /;)#82#b5"oU y3#B&(23"I&3cg30U2?AGQ}9L"|w` L ne%?(75N582M:Bhj<:AgCy1y17;C?C!pyJ i! htj(0)j2`09jM cJ osD fBC!r@I!a@iD n.j A" HsRe@eJ v#@dPw0l>]UhzAh#j181Ja`gCVQ'  2 bM[8A_^#e33nLo^og#eE3eVo_o#e!_>82rRA w082YsASi6bE9 MzgG]'4Ō;J!$a vvET#@x@&OnAHD # =hj0>T h]]9 # AUFD"H$@F@ ?&@Fq)?sFC?P6 u` ?iu#t A@t  ^)?7.7C_(\T[>JS/qnFFFa? ?Aga9T6` Fil* Con* orz9_b9A9bW#zE)9S|u%/7&k` /28%ߵ!D!o##&` STaUd0 w/"/+oUz G3#B&( 3"&3iE530U2?AG}x "|` L n e?#2Bh8<:*ACG1G1O7;3?/!py6 i htj(c)j2`09jM c6 os0 fB/!ru@5!a@i0 n.j A" yHsBe@e6 v0d@E0l>]UhzjADJ`XC66Q'  2RM)8@ mT}_^U3^_oagUEUtVKo_ooUp!p_>[QrA E0r sC)84Aeio6AbE9 MzG'4Z;!$Da Wvkv5T0x@?AHD # =hj0>T h]]9 # AUFD"H$@Fɻ1V?&@F&7 ?sFC?P6 u` ?iu#t A@t  Biq?7.7C!_lV}T[>JS/qnFFF? ?ga9T6` Fil* Co* orz}9b 9A9bW#zE)k9S|u%/7&` /28%!ߵ!D!o##w&` STWad0 w/"/+oU QG3#B&(3"&3E530U2?AG}u "|` L 'ne?#2BhP8<:A2CBG1G1O7;3]?/!py6 i htj(cu)j2`09juM c6 os0 IfB/!ru@5!a@i0 n.j UA" yHsBe@e6 v0d@E0 l>]UhzXjADJ`C66Q' C 2RM)8A@mT}_^U3^_oagUEUtVKo_ooU!p_>[QrRA E0r sCio6bE9 MzG]'4Z;J!$a T h]]9 #  AUFR_!@F;(kY?F̅$@F_fw?P6 u` ?u#t  \ Ac?oA@'t3(?.Zd;O?M&S?- >JS/qnFFF? 
?{gaCۢT6` Fil4 Co4 orz۾Cb CACbۖa#zO)5CSu//A&` /2B%ߐ!N!y##&` STad: w/"H/+oU (Q3#B&(23"&3E?30U2?AG}u "` L 'ne?-2BhB<:A!CQ1Q1Y7;3?9!py@ i ht (c)@2`09@M c@ oKs: fB9!r@?!a@i: n.@ A" HsBe@e@ v0dH@O0l>]UhztA#B1J`!C660Q'  2RM38@wT_^U3^ookgUEU~VUo_yoU!z_>eQrA O0 rsCiy6bE9 Mz!G'R4d;! $a FvZv5T0x@?(AHD # =hj0>T h]]9 #  AUFR_!@F6??F̅$@F_q)w?P6 u` ?u#t  \ Ac?oA@'t3^)?.Zd;O?Mɯ??- >JS/qnFFF? ?AgލaCT6` Fwil4 Co4 orzCbCACbva#zO)CSu//A&`5 /2B%߿!N!y##&` STad*: w/"/+RoU= Q3# B&(!23"&3iE?30U2?AG}x "` L n e?-2BhB<:*A!CQ1Q1Y7;3?9!py@ i ht (c)@2`09@M c@ os: fB9!r@?!a@i: n.@ A" HsBe*@e@ v0d@O0l>]Uhz`tA#B1ŊJ`!C66"Q '  2RM38@wT_^U3^ookgUEU~VUo_yoU!z_>eQrA rs:Ciy6bE9 Mz!G'4Ŕd;! $a (FvZv5T0x@?(AHD # =hj0>T h]]9 #  AUF^!@Fp>?FKg@FmЁ??P6 u` 7?u#t  ?߀A@'t3;On?.Dioׅ?M@_߾?[ >JUonFFF? ?ob"ozCC]b,#&` STad_owCP lP -rz=&G/Y+[ ##e.2"#&3iE0U2?AG}x "` Li'nez/"Bh,:A!!';:3?I1py\ igTt (c)02`090MB0c\ oKsv f2I1r0[!a0iv n J0AX l07s"BUe0e\ vF0d""l>]Uhz1#!2!ŊJ`%6%6"A2  "RM(@DON(U38^Q_c_W(UE8UF_O_(U2O;2"rA 2#^cFCi%bE9 Mzl7]'4+J! $a ff95TF0x0I?s1HD # =hj0>T h]]9 #  AUF+ $@FmЁ?Fh_IŒ??P6 mu` o?u#t  $C?A@'t3;On?.N@?M? >RJUonFFF? ~?{b"zCCb,#&` STadowCjP lP rz=&G/Y+[ (##."#I&3E0U2?AG}""` LiOnez/"BPh,:A !!';u:3?I1py\ igTt _(c)02`W090MB0c\ osv f2I1rT0[!a0iv n 0AX l07s"Be0e\ vF0d$""l>]Uhz@1#!2!J`%6%6EdA  ("RM(@DON(U38^Q_c_W(UE8UF _O_(U2O;2"rA 2#^c%FCi%bE9 Mzl7'4Ŕ+! $a (ff95TF0x0I?s1HD # =hj0>T h]]9 #  AUFP0&@F'ǖ~rl?FMO?Fq_Ww?P6 u` ?u#t  ~jtx?oA@'t3ʡE?.Cl]M/$A >JUffnFF;F? ?b$zCuCb,#&` STadowCP lP rz=&HG/Y+[z ##."#&3E0U2?AG} "` Linez/"Bh,:A!!';:3?I1py\ igTt (c)02`090MB0c.\ osv f2I1r0[!a0iv n* 0AX l07Us"Be0e\ vF0Id""l>]Uhz1#!2!J`%6%6A  P"RM(@DON(U38^Q_c_W(UE8U@F_O_(U2O,;2"rA 2#J^cFCi%bPE9 Mzl7w'4)+! $aQ ff95TF0x0I?s1HD # =hj0>T h]]9 # AUF€#%@FL&d2?FM_O?ݐ?P6 mu` o?u#t  ~jth?A@t b@(.eX'?(MA>RJUffnFFF? ?b$zCCb&#&` STadowCJ lJ rz7&A/S+I[ # #.2"#&3E0U2?AG} "` Linet/"Bh,T:AB!!';43?C1pyV igTt(cu)2`09uM<0cV osp If2C1r0U!a0Uip n AR l7sBe0eJV v@0d""l>]Uhz1#!,!J`X66DA  " RMB(@DON"U32^K_]_W"UE2UF_O_"Ub2O;,"rAQ ,#Xc@Ci%bE9 Mzf7K'4Ž+!$a ff35T@0x0C?m1HD # =hj0>T h]]9 #  AUFb%@F'ǖ~rl?F2wa?Fq_W}w?P6 u` ?u#t  _LU?oA@'t3ʡE?.j+]M/$A>JUffnFF;F? ?b$zCCb,#&` STadowCjP lP rz=&G/Y+[ (##.2">"#&3E0U2?AG} "` Linez/"Bh,:A!!';:3?I1py\ igTt (c)02`090MB0c.\ osv f2I1r0[!a0iv n* 0AX l07Us"Be0e\ vF0Id""l>]Uhz1#!"J`%6%6EdA  ("RM(@DON(U38^Q_c_W(UE8UF _O_(U2O;#rA 2#^cFCi%bE9 Mzl7]'4+J! $a ff95TF0x0I?s1HD # =hj0>T h]]9 #  AUFu %@F'ǖ~rl?FMO?Fq_Ww?P6 u` ?u#t  ~jth?oA@'t3ʡE?.٬\m]M/S$AAJUnFF;F? ?b$zCuCb,#&` STadowCP lP rz=&HG/Y+[z ##. "#>3E0U2?AG} "` LiOnez/"BPh,:A !!';u:3?I1py\ igTt _(c)02`W090MB0c\ osv f2I1rT0[!a0iv n 0AX l07s"Be0e\ vF0d$""l>]Uhz@1#!2!J`%6%6EA#  R#R M(@DON(U38^Q_c_W(UE8UF_O_(U2O;2"ErA 2#^cFC i%bE9 Mzl7.'4+%! $a ff 95TF0x0I?s1HD # =hj0>T h]]9 #  AUF;%@F'ǖ~rl?FOr?Fq_Ww?P6 u` ?u#t  _vO~?oA@'t3ʡE?.]K=]M/$A>JUnFF;F? ?b$zCuCb,#&` STadowCP lP rz=&HG/Y+[z ##."#&3E0U2?AG} "` Linez/"Bh,:A!!';:3?I1py\ igTt (c)02`090MB0c.\ osv f2I1r0[!a0iv n* 0AX l07Us"Be0e\ vF0Id""l>]Uhz1#!2!J`%6%6A  P"RM(@DON(U38^Q_c_W(UE8U@F_O_(U2O,;2"rA 2#J^cFCi%bPE9 Mzl7w'4)+! $aQ ff95TF0x0I?s1HD # =hj0>T h]]9 #  AUF`9d[%@F?F{?Fߢ_-Jkw?P6 u` ?u#t  a2U0*C?oA@'t3 +?.ڊe]M~jtA>JUnFF;F? ?b$zCuCb,#&` STadowCP lP rz=&HG/Y+[z ##."#&3E0U2?AG} "` Linez/"Bh,:A!!';:3?I1py\ igTt (c)02`090MB0c.\ osv f2I1r0[!a0iv n* 0AX l07Us"Be0e\ vF0Id""l>]Uhz1#!2!J`%6%6A  P"RM(@DON(U38^Q_c_W(UE8U@F_O_(U2O,;2"rA 2#J^cFCi%bPE9 Mzl7w'4)+! 
$aQ ff95TF0x0I?s1HD: ##=h0>Th]]9 3$#UFt>!@Fp@FU^+m@F BP(??Pv> u{` ?u3>.6.t  ?A@RXqhki)t bk"p [SjJmU5 SLiT0U'"?AG}yFSb` LineColi rz!#nFFF#?&?#I,I"O&&` STadi wg/#/l+$Y# Fa lk &72q&` @?r)o 3#B..36=27.JM3117;Y#?l1pyo igTt (c)R@2`09R@Ma co osm0fPBl1rD@n!aP@im0n.R@ AC2 HHsBep@eo ve d@0l>]Uhz9Ah3I!J@35&5&hQ (;UL_q_T3^__gU5;^_^_oU;__do_j5b59 M{؏7/=v!3PrhaXv4!$A;7ai6oo74'&$|~svXv v_am A` Te xP@l>#mbzR#p8!e t@PQ:(3O'@f5ɔ?{I$m."z,Z#!b~-(T{<\B2>2 HD: ##=h0>Th]]9 3$#UFfG#@F BP(?FUiԄUV?P> u` ?iu3>W..t  x&1?A@RXʡE5i)t bk"p [TjJU5 [ SLT0U'"?AG}F^S` LineColi rz!#nFFF#?&?#I,I"O&&` STadi wg/#/l+$Y# %Fa lk &72q&'` @?r) 3#hR@BL..236=27$1311ԩ7;Y#?l1pyo igTt (c)j@]2`09j@Ma ]co osm0fhBRl1r\@n!ah@im0wn.j@ AC2U `HsBe@eo Eve d@0l>]UhzQA3I!J@X35&5&DQ(SUd__T3^__(gU5S^ov_6oUS__H|o_j5bP59 M{7l/Uv!3hrapv4!$ Uv}aiHz7]'&$Ŕ~vpv vX%Te xh@l?8 e'ad@ej@ @Q:1@3 9rI'm"z,Z#.T5-TAcl@0p@HD: ##=h4>T]]9 P3$#UFl6@Fp@Flf^J @F BP(q??P> u` ?u3 ]>2m2t  %䃞?A@V\m)t bo"p: [nՄJU5 WLT0U+"?AG}FWWb` LineCoklm rz醑!#nFFF#?0&?#M,M"S&&` S.Tadm wk/#i/p+$]# Fe lo *72u&` D?v)h9I!Bo? 3#eB223FA BGT2M3B117;]#?p1pys igTt(c])20@9uMe cs osq0IfBp1rw@r!a@iq0n. UAG2 {HsBe@es vi d@0)l>0>UhhClA3ZM!J@3,9&9&C"Q(nU__T3^__CgU5n^-o_QoUEn__o_(bTe ts0e j$Fje59 Mb{7/v!3rQv5!$;7aiPDV74'2B+#~vx _akq A` Ti x@p>#qb[zމV#qUGDF P# h4TPYY# %UFD"H$@F@&@P} j6   L 4 Hn[u` _?BU 4H9 u#p3lbtntQtRA%@> tS%tK/]'4( tTi%i/](/z.R&U@?/".g`Make BJ0cL0ground #d.#4#30U2)1_FB3!5~*؀3A$D;1`VIS_PRXY@0CHM!060<@78݉"`?C^0puy\0iZ0htP0(V0)P02b@0U9P0M@c\2s^0IfBvAry@Aa@i^0n@0 P0AUl@ }HsBe@e\0vN0d@2"e (^E 5 3r ca%A oCjA$A13 f3ba!5 cq@~n^e3jU?~bhKr;aq? |ub #h bauA1*4H f5ohc&Ac.abaEkQ,o^e= cAzf y3` D^0N^0tBn@a@nEi#_9Y@a `A1A1"( c'AAezeE>Xs A pIp1!` H@dN2&F^0YA @ޮ1` SPo]w:bb+"v 6 ~!bbbf,4T4?Had3E_HV>ל ߪJAC BFY#~6B} ,]@e8ko+>]BBGy]Phe{qX0Q24X55&7 19 $; .x=8/?g6H6(L!%l)JUFD  h ^TYYRUFD"H$'@F @FxH5@@@0ۀ HD:  # ;h4>T]]9 #U,AUFD"H$@FjZV@&@@P -u `u bu  @")lt A)0t#` bFv_n?p [_eAu` ?jurJ:26W&$##D./@%W(l"r/tvel JYO8?$"'D{(/$B}U2q(0?'?96?\.?fd?F 7`BackgroundC0l0rz\)z 5G!3M1 3?l"M@h<:A酙!30O贁Nk?mdddclak)2w` A0ce0t?A)n1bdCzI7!3117;l`?1py0i0h@ (0)2P20>P92PM*PcJ2s0f0R1r$P1a@i0n..2P A0l2P)WUs|RePPe0v@dpP0l>#$!Uh#3 pA QYŊJ*!3"Be(o1HD:  # ;h4>T]]9 #U,AUFD"H$@FVj?&@@P -u `u bu  @")lt A)0t#` bԚp [euv` ?SurAJ\tt&!$)'$"3/E X/j$=(tvel gsS?$"'*Q/$BU2q(0?'?96?\.?Efd?F 7`BackgroundC05l0rz)  5Gt!3M1 3&?l"(MPh<:A!30U_B??mdddLcla)2` ]A0ce0t?A)1b[dCzI7%!3117;l`?1py0i0h@ (0)2P20>P92PM*Pc2s0f0R1r$P1a@i0n.2P KA0l2P)Ws|RUePPe0v@dpP0l>#$!Uh#3 A Q!3 (BHA 3J*!32B'e@o1VcHD:  # ;h4>T]]9 #U,AUFD"H$@Fr߹\.@&@@P -u `u bu  @")lt A)0t#` bF_xP92PM*Pc2s0f0R1r$P1a@i0n.2P KA0l2P)Ws|RUePPe0v@dpP0l>#$!Uh#3 A Q\YJ*!32Be(o1>cHD:  # ;h4>T]]9 #U,AUFD"H$@F~߿?&@@P -u `u bu  @")lt A)0t#` bԚp [euv` ?ur>JvSȱ&$)'$"3/ E X/j$=(Etvel 37?$"'*/$BUo2q(0?'?96?\.?fd?F 7`BackgroundC0l0rz) K 5G!3:M1 3?l"Mh0<:A!300b/?mdddzcla)2` A0ce0t?A)1bdCzI7!3117;l`?1py0i0h@ u(0)2P20>PU92PM*Pc2s0If0R1r$P1a@i0n.2P A0l2P)Ws|Re*PPe0v@dpP0l>#$!Uh#3 A Q]!3 (BA 3J*X!32B'e@o1VcUGDF P# h>,/T6D # UFD"H$@]F@&u@P}  ih0D *@3-T-hc-|u` ?W΍U,@T'*hh||-"| 'u#QtSt% "{tR$t/'A@#KtT%/(/-3b/(08"A2?X>d.3@??2.`Make B0c0ground 335~J3hAmD;1`VIS_PRXY0CHM!#60@79݉j2`?C@puy@i @ht@U(@)@2@0@ @i@As@fBAr@Aa@i@n0 @AlP HsRe@eJ@v@d22e h4^hh P T3 3hHQ5T[5T[3T@[TT[ETh[TX|WndPao7T1128cpAhAhEhATh]]9 3#UFԚ?F BP(?F @F߅d\?P>u` ?iu3>k(.t  9#J{?Ao@HtSݓ_N׳_ambԚp [aJڍU5 ILc0U~!"?AG}cLIc"` LineColc rz$cC"I&nFFF#?&^?B#I&&` STa%dc wa/#/f/G(iF" d3#$..36hH1$d1d1l7;S#? 
1pyi igTt ( )@2`09@M[ ci os!0f B 1r@h!a @i!0n.@ AR#0l@GsYBe-@ei v_ dM@b0 l>]Uhz13C!`J/!F%QA(E _._@T3O^h_z_W?UI5N___?U Oa_!o_j5AbI59 M{R7/f!3 raavb5!$fAm/%Q12ooXR7w ' $9~0vx] vF88e 1@tQR:(3O@{f5Q?=C$cz,Z#bS~-(T{<B2 >, HD:  ##=h0>Th]]9 3#UFGZت$@F BP(?Fu?Fa]?P>u` ?iu3>k(.t  aTR'?Ao@HtSˡE_N4@_ambԚp [aJڍU5 ILb0U!"?@2LhcC"II&ynFFF#($(?#R,C"I&&` STadowC l rz&/+_y" QB3#.3i6h H1q$B1B1J7;v`? 1py igTt _(c)02`W090M0c os0f2 1r0!a0i0nU.0  l 8Us-Be@e v?@d!@@0l>]Uhz13 C!J/!FA(EO_T3#^<_N_WU'5N_O_UO5__Y_jD#b'59 M{07/f!3baf-A4!$fm/%ij5|oorX07Ww' $ ~vf Wv $8e'ua0e0 @PHZQ0:^2@3 9R'cz,Z#.T.'~-qT@Ac000HD:  ##=h4>T]]9 P3#UFD"H$@F@Fq!j@FC1[z??P>u` ?u3 >,2t  0*?A7@LtWzGcQR e0qbFL&d2?p [>eՄJU5 MLT0U%"?AG}FWMbg"` LineColg rzR#nwFFF#?x&?F#uM&&` STadg we/#/j+$W# F_ lDi 72o&G,`h9C!Bt! 3#.2236A2JM3117;W#?1pym igTt(c)20U@9M_ c.m osg fGB1r;@l!aG@ig n0 A22 ?HsBeg@em vc d2l>UhhC0A3G!bJ@33&3&C_QA(2VC_h_zT3^__gyUb52^_U_oyUE2__[o_(bT_ ti e j5ieb59 M{7/[v!3nrvv-4!$;7aiX7w'2%#՚~vvv vw_H<>SGל ߪJAC BFZ/|#(-ߖ0T]]9 3#UFD"H$@Fh4@Fj~L$@FZV{?@F?P>mu` o?u3 #,2t  K46?A@LR e,֍t#`F BP(?pz [e"ejJU5 MKLT0U$"?AG}Mb` LineColf -rz!#nFFF#?&F?#F,F"L&&` STa%df wd/#/i+-$V# F^ lh #792n&` =?o))eCQ3#8&327A22\ތ32q@?O FM$? "fKO:F!%311Ԧ7;V#?i1p]yl igTty(c)y2U0@9yM^ cl osj0fBi1rԪ@k!a@ij0n].y A@2 HUsRe@el vb d@0l>0>Uhh7A3gBbJ@32&2&QA(U_ _T3^ o%oxeU5^`o_oU5_ oo.oi3 eB!3)m!#1'/28pȱ;!$8Q 52xbszgO#u%w#vj#lei:M{!/!3wxAJ dQ`$U8!e fQ:(3O@$v9K?F$g"z]#b?M-,T{{/ Ck\tq ĂHD  # =hj0>T h]]9 # AUFD"H$@&@FxitI%"p: [kՆJsnFFF? ?Abm$zOOSsu` FilA ConA orzOk` >/2O%(!(&\#&` STad*G wE/"/+ReU= 3#J8'32H23"&3E#0Ut2??AG}ΐ` L? n /Bh,:A311 7;3?F!pyM igTta(c)a2`09aM? cM oKsG f3BF!r'@L!a3@iG n.a AA" +HsBeS@eM v dHs@0l>]UhzA#!J`366KaDX' (32oRM(@T/_O_KQU3^__gUEU&V_A_!oUk!"_> QrA *bcCi%AbE9 Mz7's4;! $Da fv5T x3@?1HD  # =hj8>T! ]]9 #,AUFD"H$@Fx@&@@P6 -u `u `bu  @e" - A-0t#` b ̮?p [iuu`l?SuvAJ`rJ &$-'("7/ I \/n$A(tzip Q?$"'(/$񓾍U2q,0?+?=6?\.?ljh?F 7`BackgroundCj0l0rz-.~5e%3Q1 3?a t##FGB%3!#rٿnFFFf6AۚsA,? ("bCzy-lB-3&` STa0ow?(!` Fi0lrH2F1` OE=%:?Q1&gp!K7h<:AJ%3117;XQ@?1py0i0htk(0)k20P9kM@c2s0fR1rP1aPi0n.k AB Xs=be*`e0v0d1`0l>#(!Uh #3 XQ\iJQ*%3'928e&o i1HSeE9 MzW'0URr;!4o ]vqv1` T0xP_QH D:  ##=h4>T]]9 3#UFD"H$@F~@Ff>$@FZV{?@F?P>mu` o?u3 >,2t  Q|a?A@LR e,֍t#`Fi4F?pz [e"ejJU5 MKLT0U$"?AGq}2Mb` LineColf rz!#nFFF#?&?#F,F"L&&` STadf wd/#/i+$V# F^ lh #72n&` =?o)eC(3#82A"36A27\327q@?O FM$? HfKO:F! 3117;uV#?i1pyl igTty(wc)y20@9yM^ cl o%sj0fBi1r@k!ua@ij0n.yW A@2 HsRUe@el vb d@0l>0>Uhh7A3gBJ@X32&2&DQ(U__T3^ o%oxeU5^`o_oU5 _ oo.oi3 e23)pm!4/'2$#ձ;!$8Q 52xbszgO#u%t#xj#lei:M{!/!3wxAJ VdQ`$U8bT^ tl0e @he}}k\tq ĂHD:  ##=h0>Th]]9 3#UF3钿Y"@FFە @FNPQkӭ??Pn>u` ?6u3`b >"6(t  z6?A@BHK=kUYt ["p: [ZՄJU5 CLT0U"?AG}eC` LineColZI rz۷!#nF;FF{#?&?#),)"/&&` STaKdI wG/#/ZL+$9# FA lK r72Q&` ?RR) 3#B2((236727kB1o31P17;9#?L1wpyO igTt (cu)J@2`09J@uMA cO osM0IfHBL1r<@N!aH@iM0n.J@ UA#2 @HsBeh@eO vE d@0 l>]Uhz1A3)!BJ@o39DYX3UD_d_vY3^__gzUf53^_V_ozU3__H\o_j5bPf59 M{o7l/5v!3HraPv4!$ 5v|mai6oozo7]'$t~kvPv v8%TE xH@L?c8eaD@e J@ fQo:3@e3 9)']z,Z#bK~-\sTAcL@ 0P@UGDF P# h>,/T6D # CUFD"H$@]F@&u@P} ~ k ,D@Tu` ?LR,ru#|QΉtSzttRt 'A@tT$/ (L/+-3xOb ((A"i/.d#@3)5~??".s`Make B0c0ground#b314 ;1`VIS_PRXYx0CHM!#6008!"`?C0py0i0ht0(0)02@090M@cJ2s0f$BAr@QAa$@i0nx0 0All@ HspBeD@e0v0dx2"eYk} !HA%DK)5D,K3DKDTGnDAD_'7BTAy1y1 "(S11}51<>@XBr1I!&` H@d.2&F0o$@Az0#` SPowRo +<"v8d7e7 J@cbbg bg__0R11"Sa%1+bb$y1r,QHEq)5Se@~|u3{,|uVz?rxK2cxfq<`}? rjb #! 
[Binary OLE/Visio drawing data omitted. The only recoverable fragments are standard Visio stencil strings (e.g. "Background Color", "Make Background", "Drag the shape onto the drawing page") and a 2009 copyright notice; the raw byte stream carries no reviewable text.]